% Source: https://arxiv.org/abs/1912.08327
\title{Extreme Values of the Fiedler Vector on Trees}
\begin{abstract}
Let $G$ be a connected tree on $n$ vertices and let $L = D-A$ denote the Laplacian matrix on $G$. The second-smallest eigenvalue $\lambda_{2}(G) > 0$, also known as the algebraic connectivity, as well as the associated eigenvector $\phi_2$, have been of substantial interest. We investigate the question of when the maxima and minima of $\phi_2$ are assumed at the endpoints of the longest path in $G$. Our results also apply to more general graphs that `behave globally' like a tree but can exhibit more complicated local structure. The crucial new ingredient is a reproducing formula for the eigenvector $\phi_k$.
\end{abstract}
\section{Introduction}
\subsection{Introduction.}
Let $G=(V,E)$ be a simple, undirected, connected tree on $n$ vertices $\left\{v_1, \dots, v_n\right\}$. The degree matrix $D$ is the diagonal matrix with entries $d_{ii} = \deg(v_i)$, and the adjacency matrix $A$ encodes the connections between the
vertices. The matrix
$$ L = D - A$$
is known as the Laplacian matrix of $G$. It is symmetric and has eigenvalues that we order by their size
$$ \lambda_n(G) \geq \lambda_{n-1}(G) \geq \dots \geq \lambda_{2}(G) \geq \lambda_1(G) = 0.$$
We refer to \cite{chung, davies, mohar} for an introduction to the spectral theory on graphs.
It is not difficult to see that the eigenvector associated to the eigenvalue $\lambda_1(G)=0$ is, up to scaling, the vector having constant entries and that
\begin{equation} \label{one}
\lambda_{2}(G) = \min_{x \perp \textbf{1}} \frac{\sum_{v_i \sim_E v_j}{(x_i - x_j)^2}}{\sum_{i=1}^{n}{x_i^2}}.
\end{equation}
This shows that $\lambda_{2}(G) > 0$ if and only if $G$ is connected. The eigenvector $\phi_2$ associated to the second smallest eigenvalue is
also known as the Fiedler vector \cite{fiedler1, fiedler2, fiedler3, gern, stone}. The following crucial result is due to Fiedler \cite{fiedler3}.
\begin{thm}[Fiedler] The induced subgraph on $\left\{v \in V: \phi_2(v) \geq 0\right\}$ is connected.
\end{thm}
This, together with many other desirable properties, motivates the classical spectral cut whereby the sign of $\phi_2$ is used to decompose a graph. Overall, relatively little seems to be known about the actual behavior of the Fiedler vector:
\begin{quote}
However, apart from the original results from M. Fiedler, very few is known about the
Fiedler vector and its connection to topological properties of the underlying graph [...] (\cite{gern}, 2018)
\end{quote}
Simultaneously, these types of questions have become increasingly important in the framework of Graph Signal Processing; we refer to \cite{ham, irion, or, per, saito, shu, shu2, shu3} for an introduction to recent work.
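To make these objects concrete, here is a minimal numpy sketch (ours, not part of the original text; the small example tree is an arbitrary choice). It builds $L = D - A$, confirms that the eigenvector for $\lambda_1 = 0$ is constant, and extracts the Fiedler pair $(\lambda_2, \phi_2)$:

```python
import numpy as np

def laplacian(n, edges):
    # L = D - A for a simple undirected graph on vertices 0..n-1
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

# a small tree: a star at vertex 0 with a pendant path 0-3-4
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]
vals, vecs = np.linalg.eigh(laplacian(5, edges))  # eigenvalues in ascending order
lam2, phi2 = vals[1], vecs[:, 1]                  # algebraic connectivity, Fiedler vector
```

Since the tree is connected, `lam2` is strictly positive, and `vecs[:, 0]` (the kernel of $L$) has constant entries.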
\subsection{The Problem}
Let $G=(V,E)$ be a tree. Equation (\ref{one}) suggests that $\phi_2$ is the `smoothest' vector orthogonal to the constants, so the maximum and minimum of $\phi_2$ should be attained
at the points of largest distance. This was explicitly conjectured in \cite{moo}; a counterexample was then produced by Evans \cite{evans} and is shown in Figure 1. There, maximum and minimum
are assumed far away from one another, but not at the two points of maximum distance. Two natural questions remain: (1) to understand
the behavior of the Fiedler vector and, as discussed by Lef\`evre \cite{lef} and Gernandt \& Pade \cite{gern}, (2) to understand under which conditions such a
result might still be true.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\filldraw (0,0) circle (0.07cm);
\filldraw (1,0) circle (0.07cm);
\filldraw (2,0) circle (0.07cm);
\filldraw (3,0) circle (0.07cm);
\filldraw (4,0) circle (0.07cm);
\filldraw (5,0) circle (0.07cm);
\filldraw (6,0) circle (0.07cm);
\filldraw (7,0) circle (0.07cm);
\filldraw (8,0) circle (0.07cm);
\filldraw (9,0) circle (0.07cm);
\draw [thick] (0,0) -- (9,0);
\filldraw (3,-1) circle (0.07cm);
\filldraw (2,-2) circle (0.07cm);
\filldraw (2.2,-2) circle (0.07cm);
\filldraw (2.4,-2) circle (0.07cm);
\filldraw (2.6,-2) circle (0.07cm);
\filldraw (2.8,-2) circle (0.07cm);
\filldraw (3,-2) circle (0.07cm);
\filldraw (3.2,-2) circle (0.07cm);
\filldraw (3.4,-2) circle (0.07cm);
\filldraw (3.6,-2) circle (0.07cm);
\filldraw (3.8,-2) circle (0.07cm);
\filldraw (4,-2) circle (0.07cm);
\draw [thick] (3,0) -- (3,-1);
\draw [thick] (2,-2) -- (3,-1);
\draw [thick] (2.2,-2) -- (3,-1);
\draw [thick] (2.4,-2) -- (3,-1);
\draw [thick] (2.6,-2) -- (3,-1);
\draw [thick] (2.8,-2) -- (3,-1);
\draw [thick] (3,-2) -- (3,-1);
\draw [thick] (3.2,-2) -- (3,-1);
\draw [thick] (3.4,-2) -- (3,-1);
\draw [thick] (3.6,-2) -- (3,-1);
\draw [thick] (3.8,-2) -- (3,-1);
\draw [thick] (4,-2) -- (3,-1);
\node at (3, -2.5) {minimum assumed here};
\node at (9, -1) {maximum assumed here};
\draw [thick, ->] (9, -0.8) -- (9, -0.3);
\end{tikzpicture}
\end{center}
\caption{The `Fiedler rose' counterexample of Evans \cite{evans}.}
\end{figure}
We hope that our paper contributes to both problems and can be helpful in clarifying the situation; we also raise a number of questions.
\subsection{The Continuous Case.}
Let $\Omega \subset \mathbb{R}^2$ be a domain and consider the second smallest eigenfunction of the Laplace operator $-\Delta$ with Neumann boundary conditions, i.e. the equation
\begin{align*}
-\Delta \phi_2 &= \mu_2 \phi_2 \quad \mbox{in}~ \Omega\\
\frac{\partial \phi_2}{\partial \nu} &= 0 \quad \mbox{on}~\partial \Omega.
\end{align*}
Rauch conjectured in 1974 that the maximum and the minimum are assumed at the boundary. This is known to fail at this level of generality \cite{b25, b4} but widely assumed to be true for convex domains $\Omega$.
The second author \cite{stein} recently showed that if $x_1, x_2 \in \Omega$ satisfy
$\|x_1 - x_2\| = \mbox{diam}(\Omega)$, then every maximum and minimum is
assumed within distance $c\cdot \mbox{inrad}(\Omega)$ of $x_1$ and $x_2$, where $c$ is a universal constant (which is the optimal scaling up to the value of $c$). Therefore, up to an inradius, the maximum and minimum are essentially assumed at maximum distance.
\section{Main Results}
We present two main results. The first is a representation formula for $\phi_2$ that allows one to quickly recover some of the existing results and gives a better understanding of the behavior of $\phi_2$.
In particular, it explains why, for generic trees, there is little to no reason to assume that the extrema of $\phi_2$ are assumed at vertices that are at distance $\mbox{diam}(G)$. We hope that the representation formula will also be useful in other settings. The second contribution is an explicit application of the representation formula to construct families of trees on which the desired statement indeed holds: the extrema of the Fiedler vector are assumed at vertices which are distance $\mbox{diam}(G)$ apart.
\subsection{A Representation Formula}\label{sec:game}
Let $G=(V,E)$ be a graph on $n$ vertices and let $v_{s}, v_{t}$ be two arbitrary vertices. We introduce a game that results in a representation formula for any eigenvector $\phi_k$ associated to the eigenvalue $\lambda_k$.
\begin{enumerate}
\item You start with zero payoff and in the vertex $v_s$.
\item If you find yourself in a vertex $w \neq v_t$,
\begin{enumerate}
\item add $\lambda_{k}\cdot\phi_k(w)/\mbox{deg}(w)$ to your payoff and
\item then jump to a randomly chosen neighbor of $w$.
\end{enumerate}
\item If you find yourself in the vertex $w = v_t$, the game ends.
\end{enumerate}
We note that the game ends in finite time almost surely.
\begin{theorem} The expected payoff of the game satisfies
$$ \mathbb{E}\left(\emph{payoff}\right) = \phi_k(v_s) - \phi_k(v_t).$$
\end{theorem}
Throughout the rest of the paper, we will only use this result for $k=2$ (since only then do we have Fiedler's theorem at our disposal).
Results of such a flavor are very easy to obtain for the random walk normalized Laplacian $I - D^{-1}A$ (which, by its very nature, is strongly tied to random walks). Our game adjusts for the canonical
Laplacian $L = D-A$. The game could also be interpreted as a discretized version of the Feynman-Kac formula (see e.g. \cite{taylor}).
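As a sanity check of Theorem 1 (our illustration; the graph $P_4$, the endpoints $v_s, v_t$, and the sample size are arbitrary choices), the game can be simulated directly and the Monte Carlo mean compared against $\phi_2(v_s) - \phi_2(v_t)$:

```python
import numpy as np

# exact Fiedler pair of the path P4 via numpy
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1
vals, vecs = np.linalg.eigh(L)
lam2, phi2 = vals[1], vecs[:, 1]

nbrs = [[j for (a, b) in edges for j in (a, b) if v in (a, b) and j != v]
        for v in range(n)]
rng = np.random.default_rng(0)
s, t = 0, 3

def play():
    # one run of the game: collect lam2 * phi2(w) / deg(w), then jump
    w, payoff = s, 0.0
    while w != t:
        payoff += lam2 * phi2[w] / len(nbrs[w])
        w = nbrs[w][rng.integers(len(nbrs[w]))]
    return payoff

est = np.mean([play() for _ in range(50_000)])
# Theorem 1 predicts E(payoff) = phi2[s] - phi2[t]
```

With 50{,}000 runs the estimate agrees with $\phi_2(v_s) - \phi_2(v_t)$ up to the usual Monte Carlo error.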
We believe that this representation theorem has a substantial amount of explanatory power. A simple example is the following (known) Corollary: a simple form of monotonicity of the second eigenvector along paths in a tree.
\begin{corollary}[see also \cite{gern, lef}] Let $\Gamma$ be a path in the tree such that $\phi_2$ only assumes positive values on the path. If $v,w$ are two vertices on the path and $v$ is at a greater distance than $w$ from the closest vertex where $\phi_2$ is negative, then
$$ \phi_2(v) > \phi_2(w).$$
In particular, maxima and minima are assumed at vertices of degree 1.
\end{corollary}
\begin{proof} Let $v_1$ be the vertex where $\phi_2$ is negative but where $\phi_2$ is positive for one of the neighbors. By Fiedler's theorem, that vertex is unique. Let us now assume $v_2$ and $v_3$ are vertices on a path and $v_3$ is at a greater distance from $v_1$ than $v_2$.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=1]
\filldraw (0,0) circle (0.06cm);
\filldraw (1,0) circle (0.06cm);
\filldraw (2,1) circle (0.06cm);
\filldraw (2,0) circle (0.06cm);
\filldraw (2,-1) circle (0.06cm);
\filldraw (3,1) circle (0.06cm);
\draw [very thick, dashed] (0,0) -- (1,0);
\draw [very thick] (1,0) -- (2,-1);
\draw [very thick] (1,0) -- (2,0);
\draw [very thick, dashed] (2,-1) -- (3,-1.5);
\draw [very thick] (1,0) -- (2,1);
\draw [very thick] (2,1) -- (3,1);
\draw [very thick, dashed] (0,0) -- (-1,0);
\draw [very thick, dashed] (2,0) -- (3,0);
\draw [very thick, dashed] (3,1) -- (4,1);
\node at (0, -0.3) {\Large $v_1$};
\node at (2, 1.3) {\Large $v_2$};
\node at (3, 1.3) {\Large $v_3$};
\end{tikzpicture}
\end{center}
\end{figure}
We use Theorem 1 and start the game in the vertex $v_3$, ending it in $v_2$. Since $v_2$ separates $v_3$ from $v_1$ in the tree, every vertex the walk can visit before reaching $v_2$ carries a nonnegative value of $\phi_2$ (a negative vertex there would disconnect the set $\left\{\phi_2 \leq 0\right\}$, which is connected by Fiedler's theorem applied to $-\phi_2$). Hence every run of the game collects a nonnegative payoff, and the first term $\lambda_2 \phi_2(v_3)/\mbox{deg}(v_3)$ is strictly positive. The expected payoff is therefore positive and $\phi_2(v_3) > \phi_2(v_2)$ as desired.
\end{proof}
\subsection{How to produce counterexamples.} We now return to the original question from \cite{moo}: whether the second eigenvector assumes maximum and minimum at the endpoints of the longest path in the tree.
This was disproven by Evans \cite{evans} by means of an explicit example. The question has also been studied in \cite{gern, lef}. The purpose of this subsection is to argue that our representation formula from Theorem 1 allows us to explain heuristically why, generally, there is no reason the second eigenvector should assume extreme values at the endpoints of the longest path -- it shows that Evans' counterexample is actually representative of one of the main driving forces behind localization of large values of the Fiedler vector in sub-structures. We are not making any precise claims at this point; this section tries to provide a good working heuristic that (a) allows us to construct counterexamples quite easily and (b) will underlie all our formal arguments later in the paper.\\
One of the crucial ingredients in the representation formula is the number of steps a typical random walk will need to reach another vertex: if the tree has a complicated structure (say, many vertices with large degree), then it will take a very long time for a random walk to reach a specific vertex (this principle is already embodied in Evans' counterexample \cite{evans} shown in Figure 1). Put differently, distance is not so crucial as complexity -- this immediately implies a large family of counterexamples of the type shown in Figure 2: we consider a path graph of length $d$ and attach a tree $T$ to vertex $d/4$. We assume the tree $T$ has diameter much smaller than $d/4$ but has vertices of very large degree.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\draw [very thick] (0,0) -- (8,0);
\draw[very thick] (2.5,1.5) circle (0.6cm);
\draw [thick] (2.5,0) -- (2.5,0.9);
\node at (2.5, 1.5) {$T$};
\node at (5.5, -0.4) {path of length $d$};
\node at (2.5, -0.3) {$d/4$};
\node at (5.5, 1.5) {tree that traps random walk};
\node at (5.5, 1) {and has diameter $\ll d$};
\end{tikzpicture}
\end{center}
\caption{A generic counterexample.}
\end{figure}
The game then suggests that, if the tree has vertices of sufficiently large degree, one extremum is at the `most remote' part of the tree $T$ away from the path -- in particular, one of the two extrema would not be on the path and thus not at an endpoint of the longest path in the graph. The distance between the extrema would be
$$ d(v_{\max}, v_{\min}) \leq \frac{3d}{4} + \mbox{diam}(T) < d.$$
A sketch of the argument to show that this type of construction works is as follows. There are two cases: either the sign change of the second eigenvector happens inside $T$ or it happens on the path. If the diameter of $T$ is sufficiently small compared to $d$, known inequalities on the eigenvalue $\lambda_2$ (which we use below) suggest that the first case cannot occur. This means that the sign change happens on the path. If the value of the eigenvector in the vertex $v$ that connects to $T$ is nonzero, we can play the game with vertices starting in $T$ and ending in $v$. Corollary 1 shows that the values of the eigenfunction inside $T$ are (in absolute value) at least as big as the value in $v$. Then the game leads to a nonzero contribution for each step of the random walk that is not arbitrarily small. This means that in order to ensure large (absolute) values inside the tree, the quantity to maximize is the expected number of steps in the game -- this, in turn, can be achieved by having vertices of large degree. We emphasize that this heuristic is non-rigorous but quickly motivates the construction of many counterexamples. All the positive results in our paper can be understood as ensuring the absence of such a structure.\\
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.8]
\draw [very thick] (0,0) -- (8,0);
\draw [thick] (4,0) -- (4,2);
\node at (6.5, -0.4) {path of length $d$};
\node at (4, -0.4) {$d/2$};
\node at (5.5, 1) {height $\sim d/3$};
\end{tikzpicture}
\end{center}
\caption{The starting configuration for another type of counterexample: here the extrema are still assumed at the ends of the long path.}
\end{figure}
To build further intuition, we quickly sketch another type of counterexample. Take a path graph of length $d$ and
add a path graph of length $d/3$ to the middle vertex. What we observe is that the eigenvector changes along the
long path, that it assumes its extrema at the ends of the long path, and that the eigenvector is small and changes slowly on the little path in the middle.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=0.8]
\draw [very thick] (0,0) -- (8,0);
\draw [thick] (4,0) -- (4,2);
\node at (6.5, -0.4) {path of length $d$};
\node at (4, -0.4) {$d/2$};
\node at (5.5, 1) {height $\sim d/3$};
\draw [thick] (4,1) -- (4.2, 1.2);
\draw [thick] (4,1) -- (4-0.2, 1.2);
\draw [thick] (4,1) -- (4-0.2, 0.8);
\draw [thick] (4,1.5) -- (4.2, 1.7);
\draw [thick] (4,1.5) -- (4-0.2, 1.3);
\draw [thick] (4,0.7) -- (4.2, 0.7);
\draw [thick] (4,0.7) -- (3.8, 0.5);
\draw [thick] (4,0.7) -- (3.8, 0.7);
\end{tikzpicture}
\end{center}
\caption{Another type of counterexample: adding little trees to the path in the middle leads to a counterexample.}
\end{figure}
However, if we start adding paths of length 1 to the vertices of the short path in the middle (or short trees, even ones with bounded
diameter), then after a while the eigenvector flips and assumes an extremum at the tip of the short path in the middle.
Perhaps the main contribution of our paper is a framework that clearly establishes \textit{why} this happens. The
theorems we give are one way of capturing the phenomenon but presumably there are many other possible formulations
that could be proven by formalizing the same kind of mechanism that we use here.\\
A particular consequence of these ideas is that a generic tree should not have the desired property of $\phi_2$ assuming its extrema
at the endpoints of a path of length $\mbox{diam}(G)$. We refer to numerical work done by Lef\`evre \cite{lef} showing that all trees
with $n \leq 11$ vertices do have the property but already $2\%$ of trees with $n=20$ vertices do not. Lef\`evre specifically asks
whether a typical tree on $n$ vertices does not have the property as $n$ becomes large and we also consider this to be an interesting
problem.
\subsection{An Admissible Class.}
The purpose of this section is to construct a large family of tree-like graphs for which the following statement is true: the second eigenvector of the Graph Laplacian does indeed assume maximum and minimum at the endpoints of the longest path. We assume that we are given a graph that can be constructed by taking a path of length $\mbox{diam}(G)$ and then possibly adding to each vertex one or several attached graphs that are isolated from each other except for being connected to the path. Of course, trees have this property.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=1]
\filldraw (0,0) circle (0.06cm);
\filldraw (1,0) circle (0.06cm);
\filldraw (2,0) circle (0.06cm);
\filldraw (3,0) circle (0.06cm);
\filldraw (4,0) circle (0.06cm);
\filldraw (5,0) circle (0.06cm);
\filldraw (6,0) circle (0.06cm);
\filldraw (7,0) circle (0.06cm);
\filldraw (8,0) circle (0.06cm);
\filldraw (9,0) circle (0.06cm);
\draw [very thick] (0,0) -- (4,0);
\draw [very thick, dashed] (4,0) -- (6,0);
\draw [very thick] (6,0) -- (9,0);
\node at (0, -0.5) {1};
\node at (1, -0.5) {2};
\node at (9,-0.5) {$\mbox{diam}(G)$};
\node at (5,-0.5) {$k$};
\draw [thick] (5,0) to[out=60, in=270] (5.5, 1) to[out=90, in=0] (5, 2) to[out=180, in=90] (4.5, 1) to[out=270, in=120] (5, 0);
\node at (5,1) {$G_{k,1}$};
\draw [thick] (5,0) to[out=30, in=270] (6.5,1) to[out=90, in=0] (6,2) to[out=180, in=90] (5.8, 1.5) to[out=270, in=40] (5,0);
\node at (6.1,1) {$G_{k,2}$};
\end{tikzpicture}
\end{center}
\caption{The class of admissible graphs: a long path whose attached graphs are connected to exactly one vertex on the path and do not have any connections between them.}
\end{figure}
We will now assign to each attached graph $G_{k,i}$ a natural quantity: for any vertex $v \in G_{k,i}$, we can consider a random walk started in $v$ that jumps uniformly at random to an adjacent vertex until it hits the path. We can then, for each such vertex $v$, compute the expected hitting time $\mbox{hit}(v)$ (that is: the expected number of steps in the random walk until one hits the path) and define
$$ \mbox{hit}(G_{k,i}) = \max_{v \in G_{k,i}} \mbox{hit}(v).$$
We argue that this quantity, in a certain sense, captures the essence of the underlying dynamics. We refer to \cite{xiu} for a paper where the same quantity has been used in a similar way. To get a feeling for the scaling of things, we observe that if $G_{k,i}$ is itself a path graph, then
\begin{equation}\label{eq:path:time}
\mbox{hit}(G_{k,i}) \sim \mbox{diam}(G_{k,i})^2.
\end{equation}
This classical scaling result follows easily from observing that the problem is structurally similar to a random walk on the lattice $\mathbb{Z}$ and that the standard random walk on $\mathbb{Z}$
after $\ell$ random steps has variance $\ell$ (and thus standard deviation $\sim \sqrt{\ell}$).
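The quantity $\mbox{hit}(\cdot)$ can also be computed exactly by solving a linear system: $h$ vanishes at the marked vertex, and elsewhere $h(v) = 1 + \mbox{average of } h$ over the neighbors of $v$. A short sketch (ours; the path length is an arbitrary choice) that confirms the quadratic scaling for a path with $k$ vertices, where the maximal expected hitting time of an endpoint is $(k-1)^2$:

```python
import numpy as np

def hitting_times(n, edges, marked):
    # h(marked) = 0; elsewhere h(v) = 1 + average of h over the neighbors of v
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j); adj[j].append(i)
    A = np.eye(n)
    b = np.ones(n)
    b[marked] = 0.0
    for v in range(n):
        if v == marked:
            continue
        for u in adj[v]:
            A[v, u] -= 1.0 / len(adj[v])
    return np.linalg.solve(A, b)

# path with k vertices, marked vertex at one end: max hitting time = (k-1)^2
k = 10
h = hitting_times(k, [(i, i + 1) for i in range(k - 1)], marked=0)
print(max(h))  # → 81.0, i.e. (k-1)^2 ~ diam^2
```

The same solver applies verbatim to any attached graph $G_{k,i}$ with the attachment vertex marked.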
\begin{theorem} Let $G$ be a graph whose longest path of length $\emph{diam}(G)$ has the following property: the graphs attached to the vertices on the path are isolated (any path from any vertex in $G_{k,i}$ to the complement goes through $k$) and each graph $G_{k,i}$ attached to vertex $k$ satisfies
\begin{enumerate}
\item the attached graph $G_{k,i}$ does not have too many vertices
$$ |G_{k,i}| \leq \frac{\emph{diam}(G)}{32}.$$
\item and the hitting time is not too large
$$ \emph{hit}(G_{k,i}) \leq \frac{1}{50} \min\left\{ k, \emph{diam}(G) - k \right\}^{2}.$$
\end{enumerate}
Then the second eigenvector of the Graph Laplacian assumes its extrema at the endpoints of the longest path.
\end{theorem}
The structure of the proof in \S 3.3 exploits the heuristic developed in \S 2.2. By considering a path of length $\mbox{diam}(G)$ and then attaching another path of length $k$ to the $k$-th vertex on the long path, we see that both assumptions are optimal up to the values of the constants. We point out that it would be of interest to obtain inverse results: explicit conditions under which one of the extrema is not attained at the endpoints of the longest path.
We give one sample application of Theorem 2 to Evans' counterexample and consider what he called the Fiedler rose (see Fig. 6): let $G$ denote the Fiedler rose with $n+2$ vertices. If we start in an outermost vertex, then one step of the random walk leads to the center and the next step leads to the path with likelihood $p=1/(n+1)$. This means that the expected number of steps required until one hits the path is
\begin{align*}
\mbox{hit}(\mbox{Rose}) &= 2 \frac{1}{n+1} + 4 \frac{1}{n+1} \left(1- \frac{1}{n+1}\right) + 6 \frac{1}{n+1} \left(1- \frac{1}{n+1}\right) ^2 + \dots\\
&= \frac{2}{n+1} \sum_{k=0}^{\infty} (k+1) \left(1- \frac{1}{n+1}\right) ^k = 2n +2.
\end{align*}
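This value can be confirmed by solving the hitting-time system exactly; a small sketch (ours; $n = 7$ is an arbitrary choice), where the expected time from an outer vertex to the path should be $2n + 2 = 16$:

```python
import numpy as np

n = 7                                   # number of outer vertices of the rose
N = n + 2                               # path vertex (0) + center (1) + n leaves
adj = [[] for _ in range(N)]
for i, j in [(0, 1)] + [(1, k) for k in range(2, N)]:
    adj[i].append(j); adj[j].append(i)
A = np.eye(N)
b = np.ones(N)
b[0] = 0.0                              # the path vertex is the target: h(0) = 0
for v in range(1, N):
    for u in adj[v]:
        A[v, u] -= 1.0 / len(adj[v])    # h(v) = 1 + mean of h over neighbors
h = np.linalg.solve(A, b)
print(h[2])                             # hitting time from a leaf → 16.0 = 2n + 2
```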
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\filldraw (3,0) circle (0.07cm);
\draw [thick] (0,0) -- (6,0);
\filldraw (3,-1) circle (0.07cm);
\filldraw (2,-2) circle (0.07cm);
\filldraw (2.2,-2) circle (0.07cm);
\filldraw (2.4,-2) circle (0.07cm);
\filldraw (2.6,-2) circle (0.07cm);
\node at (3,-2) {$\dots$};
\filldraw (3.4,-2) circle (0.07cm);
\filldraw (3.6,-2) circle (0.07cm);
\filldraw (3.8,-2) circle (0.07cm);
\filldraw (4,-2) circle (0.07cm);
\draw [thick] (3,0) -- (3,-1);
\draw [thick] (2,-2) -- (3,-1);
\draw [thick] (2.2,-2) -- (3,-1);
\draw [thick] (2.4,-2) -- (3,-1);
\draw [thick] (2.6,-2) -- (3,-1);
\draw [thick] (3.4,-2) -- (3,-1);
\draw [thick] (3.6,-2) -- (3,-1);
\draw [thick] (3.8,-2) -- (3,-1);
\draw [thick] (4,-2) -- (3,-1);
\node at (7, -2) {$n$ vertices here};
\draw [->] (5.7,-2) -- (4.5, -2);
\end{tikzpicture}
\end{center}
\caption{The `Fiedler rose' counterexample of Evans \cite{evans}. }
\end{figure}
This means that if we have a path graph of length $\mbox{diam}(G)$ and attach a Fiedler rose with $n$ vertices to
the middle point of the path graph, then the rose can have up to $n \leq \mbox{diam}(G)/120$ vertices without violating
the result. Much more precise asymptotics for this special case were given by Lef\`evre \cite{lef}.
\subsection{A Hitting Time Bound.} The purpose of this section is to establish bounds on hitting times under an assumption on the maximum degree. Let $T$ be a connected graph and assume that vertex $v_1$ is marked (in the setting above, $v_1$ is the vertex that lies on the long path). Evans' counterexample shows that we necessarily need to make some assumptions on the maximum degree of $T$ and we introduce
$$ \Delta = \max_{v \in T}~ \mbox{deg}(v).$$
\begin{proposition} Let $T$ be a connected graph with maximum degree $\Delta$ and marked vertex $v_1$. The maximum expected time of a random walk started in a vertex in $T$ until it hits $v_1$ can be bounded by
$$ \emph{hit}(T) \leq c_{\Delta} e^{ c_{\Delta} \emph{diam}(T)},$$
where $c_{\Delta} > 0$ is a constant that depends only on $\Delta$.
\end{proposition}
Revisiting Theorem 2, if we only have an assumption on the maximal degree, then we are allowed to attach graphs of diameter at most $c_{\Delta} \log{\mbox{diam}(G)}$. In light of Evans' counterexample, this estimate is perhaps not surprising (one can attach Fiedler roses on top of Fiedler roses on top of Fiedler roses etc. to the desired effect). However, we also point out that if the attached graphs do not have a `labyrinth'-type structure where random walkers can easily get lost (in the sense of the hitting time being large), then one could attach graphs of larger diameter without violating the conditions of Theorem 2.
\subsection{Caterpillar graphs.} We conclude with a simple example: a caterpillar graph \cite{cat2, cat1} is a path of length $n$ where to each vertex we may add trees of size 1 (alternatively: after removing all vertices of degree 1, a path graph remains). Gernandt \& Pade \cite{gern} proved that the extrema of the second eigenvector are assumed at the endpoints of the longest path and established various generalizations of this result. We give another one.
\begin{corollary} Let $G$ be a path graph of length $n$ with vertices ordered $1, 2, \dots, n$. Suppose we attach to the vertex $k$
an arbitrary number of paths of length at most $f(k)$, where
$$ f(k) \leq \frac{1}{20} \min\left\{ k, n-k\right\}.$$
Then the global extrema are assumed at the endpoints of the longest path.
\end{corollary}
This Corollary follows almost immediately from Theorem 2 and the behavior of hitting times for path graphs. It is natural to conjecture that stronger results should be true, maybe even
$f(k) = \min\left\{k, n-k\right\} - 1.$
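A numerical illustration (our sketch; the particular caterpillar, a path on 9 vertices with pendant vertices at the three middle vertices, is an arbitrary choice within the class covered by \cite{gern}) confirms that the extrema of $\phi_2$ land at the endpoints of the long path:

```python
import numpy as np

n = 9                                      # long path 0-1-...-8
edges = [(i, i + 1) for i in range(n - 1)]
nid = n
for k in (3, 4, 5):                        # pendant "legs" of length 1
    edges.append((k, nid)); nid += 1
L = np.zeros((nid, nid))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1
phi2 = np.linalg.eigh(L)[1][:, 1]          # Fiedler vector
ext = {int(np.argmax(phi2)), int(np.argmin(phi2))}
print(sorted(ext))                         # endpoints of the longest path
```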
\subsection{A Hitting Time Problem.}
An interesting question is the following: suppose $G=(V,E)$ is a connected graph with a marked vertex $v_1$ and $h:V \rightarrow \mathbb{R}$
is the function for which $h(v)$ is the expected number of steps a random walk started in $v$ takes until it hits $v_1$. What bounds (both from above and from below) can be proven
on
$$ \mbox{hit}_{v_1}(G) = \max_{v \in V}{h(v)} ?$$
A trivial bound is
$$ \mbox{hit}_{v_1}(G) \geq \max_{v \in V}{ d(v,v_1)}.$$
Amusingly, this might be close to optimal. Fix a degree $\Delta$ and consider the following type of graph: each vertex has the maximal number of children ($\Delta -1$) up to a certain level. Let us then connect all the vertices in the last level to the root of the tree. The induced random walk can be regarded as a biased random walk in terms of the level and will quickly lead to the root of the tree.
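This can be made quantitative with the exact hitting-time system $h(v) = 1 + \mbox{mean}_{u \sim v}\, h(u)$, $h(\mbox{root}) = 0$. The sketch below (ours; $\Delta = 3$, i.e. two children per vertex, and depth $m = 6$ are arbitrary choices) builds such a graph and shows that the maximal expected hitting time of the root grows only linearly in the depth:

```python
import numpy as np

m = 6                                    # depth; Delta = 3, so 2 children each
nodes = [[0]]                            # nodes[l] = vertex ids at level l
edges = []
nid = 1
for l in range(1, m + 1):
    level = []
    for parent in nodes[-1]:
        for _ in range(2):
            edges.append((parent, nid)); level.append(nid); nid += 1
    nodes.append(level)
for v in nodes[-1]:
    edges.append((0, v))                 # last level wired back to the root
n = nid
adj = [[] for _ in range(n)]
for i, j in edges:
    adj[i].append(j); adj[j].append(i)
A = np.eye(n)
b = np.ones(n)
b[0] = 0.0                               # target: the root, h(root) = 0
for v in range(1, n):
    for u in adj[v]:
        A[v, u] -= 1.0 / len(adj[v])
h = np.linalg.solve(A, b)
print(max(h))                            # of order m, far below the m^2 of a path
```

By symmetry $h$ only depends on the level, and solving the resulting three-term recurrence gives $h_l = (3m+5)(1 - 2^{-l}) - 3l$, so the maximum stays linear in $m$ (here $11.25$ for $m=6$).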
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\filldraw (0,0) circle (0.07cm);
\filldraw (1,1) circle (0.07cm);
\filldraw (1,0) circle (0.07cm);
\filldraw (1,-1) circle (0.07cm);
\draw [thick] (0,0) -- (1,1);
\draw [thick] (0,0) -- (1,0);
\draw [thick] (0,0) -- (1,-1);
\filldraw (2,1.3) circle (0.07cm);
\filldraw (2,1) circle (0.07cm);
\filldraw (2,0.7) circle (0.07cm);
\draw [thick] (1,1) -- (2,1.3);
\draw [thick] (1,1) -- (2,1);
\draw [thick] (1,1) -- (2,0.7);
\filldraw (2,0.3) circle (0.07cm);
\filldraw (2,0) circle (0.07cm);
\filldraw (2,-0.3) circle (0.07cm);
\filldraw (2,-1.3) circle (0.07cm);
\filldraw (2,-1) circle (0.07cm);
\filldraw (2,-0.7) circle (0.07cm);
\draw [thick] (1,0) -- (2,0.3);
\draw [thick] (1,0) -- (2,0);
\draw [thick] (1,0) -- (2,-0.3);
\draw [thick] (1,-1) -- (2,-1.3);
\draw [thick] (1,-1) -- (2,-1);
\draw [thick] (1,-1) -- (2,-0.7);
\foreach \x in {0,...,12}
{
\filldraw (4,1.5-\x/4) circle (0.07cm);
};
\node at (3,0) {$\dots$};
\end{tikzpicture}
\end{center}
\caption{An example with an induced drift on the random walk.}
\end{figure}
A simple question is the following: what sort of hitting time bounds are possible and how do they depend on the graph? For example, if $G$ is a tree, then we have
$ \mbox{hit}_{v_1}(G) \gtrsim \mbox{diam}(G)^2$. What other results are possible?
\section{Proofs}
\subsection{Proof of Theorem 1}
\begin{proof}
Recall that $L \phi_2 = \lambda_2 \phi_2$ or
$$ (D-A) \phi_2 = \lambda_2 \phi_2.$$
Evaluating this equation at a single vertex $v$ means that
$$ \phi_2(v) = \frac{\lambda_2 \phi_2(v)}{\mbox{deg}(v)} + \frac{1}{\mbox{deg}(v)} \sum_{w \sim_{E} v}{ \phi_2(w)}.$$
This can be interpreted as one step of the game: you add $\lambda_2 \phi_2(v)/\mbox{deg}(v)$ to your account and then jump to a random neighbor (`random' because we properly normalized the sum) and evaluate the function there. Starting in $v_s$, we now iteratively apply this identity to every term involving $\phi_2$ except for those involving $\phi_2(v_t)$, which we keep.
The arising terms can be bijectively mapped to the random walks in the game.
\end{proof}
\subsection{Some Preliminary Considerations}
Before embarking on the proof of Theorem 2, we recall several helpful statements. A result of McKay \cite{mohar2} states that
$$ \lambda_2 \geq \frac{4}{|V| \cdot \mbox{diam}(G)}.$$
We also know, from the variational characterization
$$ \lambda_{2}(G) = \min_{x \perp \textbf{1}} \frac{\sum_{v_i \sim_E v_j}{(x_i - x_j)^2}}{\sum_{i=1}^{n}{x_i^2}},$$
that the eigenvalue decreases if the graph is enlarged. Since the graph contains a path of length $\mbox{diam}(G)$, we can use the
second eigenvalue of the path graph as an upper bound. It is known (see e.g. \cite{chung}) that
$$ \lambda_2(P_n)= 2 \left( 1 - \cos{\left( \frac{\pi}{n} \right)} \right) \leq \frac{10}{n^2}$$
and therefore
\begin{equation}\label{eq:eig2bound}
\lambda_2(G) \leq \frac{10}{\mbox{diam}(G)^2}.
\end{equation}
McKay's bound shows that this upper bound using only the diameter is optimal up to constants. These facts are well known (see e.g. \cite{mohar2, mohar}).
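Both the closed form for $\lambda_2(P_n)$ and the $10/n^2$ bound are easy to confirm numerically; a quick sketch (ours):

```python
import numpy as np

def path_lambda2(n):
    # Laplacian of the path graph P_n; second-smallest eigenvalue
    L = 2.0 * np.eye(n)
    L[0, 0] = L[-1, -1] = 1.0            # endpoints have degree 1
    for i in range(n - 1):
        L[i, i + 1] = L[i + 1, i] = -1.0
    return np.linalg.eigvalsh(L)[1]      # eigenvalues in ascending order

# compare against 2(1 - cos(pi/n)) and the 10/n^2 upper bound
checks = [(n, path_lambda2(n)) for n in range(2, 30)]
```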
We also observe that $\phi_2$ changes sign. This means that for every vertex $v$, we can estimate the size of $\phi_2(v)$ by summing differences over a path $\pi$ from $v$
to the nearest vertex where $\phi_2$ is negative. For a normalized eigenvector $\phi_2$, this shows that
\begin{align*}
\max_{v \in V}{ \phi_2(v)} &\leq \sum_{(i,j) \in \pi}{ | \phi_2(i) - \phi_2(j)|} \\
&\leq \left( \sum_{(i,j) \in \pi}{ | \phi_2(i) - \phi_2(j)|^2}\right)^{1/2} \left( \mbox{length of}~\pi\right)^{1/2}\\
&\leq \left( \sum_{(i,j) \in E}{ | \phi_2(i) - \phi_2(j)|^2}\right)^{1/2} \mbox{diam}(G)^{1/2}\\
&\leq \lambda_2^{1/2} \mbox{diam}(G)^{1/2}\\
&\leq \left( \frac{10}{\mbox{diam}(G)^2}\right)^{1/2} \mbox{diam}(G)^{1/2} \leq \frac{4}{\mbox{diam}(G)^{1/2}},
\end{align*}
where the second line uses the Cauchy--Schwarz inequality, the fourth line uses equation (\ref{one}) and the fifth line uses equation (\ref{eq:eig2bound}).
Since this holds for every vertex, we have
\begin{equation}\label{eq:linf}
\| \phi_2\|_{\ell^{\infty}} \leq \frac{4}{\mbox{diam}(G)^{1/2}}.
\end{equation}
The normalization of $\phi_2$ in $\ell^2$ implies by the H\"{o}lder inequality that
$$ 1 = \sum_{v \in V}{\phi_2(v)^2} \leq \| \phi_2\|_{\ell^{\infty}} \| \phi_2\|_{\ell^1} $$
and therefore, by equation (\ref{eq:linf}),
$$ \|\phi_2\|_{\ell^1} \geq \frac{\mbox{diam}(G)^{1/2}}{4}.$$
Moreover, $\phi_2$ has mean value 0, so the positive and the negative part carry the same $\ell^1$ mass, each accounting for half of $\|\phi_2\|_{\ell^1}$, and therefore
\begin{equation}\label{eq:maxphi}
\sum_{v \in V}{ \max\left\{ \phi_2(v), 0 \right\}} \geq \frac{\mbox{diam}(G)^{1/2}}{8}.
\end{equation}
It is known that the maximal expected hitting time of an endpoint of a path graph $P_k$ on $k$ vertices is
\begin{equation}\label{eq:blum}
\mbox{hit}(P_k) = (k-1)^2.
\end{equation}
For a proof, see \cite{blum}.
\subsection{Proof of Theorem 2}
\begin{proof}
The proof decouples into two parts: first, we show that the sign change of the second eigenvector occurs somewhere on the long path and not within any attached graph $G_{k, i}$ (see Fig. 5). The second part of the proof makes use of the Game Interpretation. Throughout, $\phi_2$ denotes the $\ell^2$-normalized eigenvector associated to the smallest nontrivial eigenvalue $\lambda_2 > 0$.
Suppose now that the statement is false and the sign change occurs somewhere inside a graph $G_{k,i}$. In particular, appealing to Fiedler's theorem, the eigenvector has the same sign everywhere on the long path, which we can assume without loss of generality to be negative.\\
{\bf Part 1:}
The argument is rather simple and exploits the bounds derived in \S 3.2. Let us assume that $\phi_2$ changes sign in $G_{k,i}$. Then, by Fiedler's theorem, all the positive values are attained inside $G_{k,i}$.
Then, however, by Equation (\ref{eq:maxphi}), we have
\begin{align*}
\frac{\mbox{diam}(G)^{1/2}}{8} &\leq \sum_{v \in V}{ \max\left\{ \phi_2(v), 0 \right\}}\\
&\leq \sum_{v \in G_{k,i}}{ \max\left\{ \phi_2(v), 0 \right\}} \\
&\leq |G_{k,i}| \|\phi_2\|_{\ell^{\infty}} \leq \frac{4 |G_{k,i}|}{\mbox{diam}(G)^{1/2}}
\end{align*}
and thus
$$ |G_{k,i}| \geq \frac{\mbox{diam}(G)}{32}$$
which contradicts assumption (1) of the Theorem.\\
{\bf Part 2:} It remains to show that the maxima occur at the endpoints of the path under the assumption that there is a sign change on the long path. Let us
assume that this is not the case and that, without loss of generality, the maximum is assumed in $G_{k,1}$.
Since there is exactly one sign change along the long path, one of the endpoints of the long path also
has a positive value; since we have not specified anything about the value of $k$, we can assume without loss of generality that
$\phi_2(1)$ is positive (see Fig. \ref{fig:thm2part2}). By Fiedler's theorem, we have that $\phi_2(k)$ is nonnegative and so are
$\phi_2(2), \dots, \phi_2(k-1)$ and the values in all the attached graphs.\\
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}
\filldraw (0,0) circle (0.06cm);
\filldraw (1,0) circle (0.06cm);
\filldraw (2,0) circle (0.06cm);
\filldraw (3,0) circle (0.06cm);
\filldraw (4,0) circle (0.06cm);
\filldraw (5,0) circle (0.06cm);
\filldraw (6,0) circle (0.06cm);
\filldraw (7,0) circle (0.06cm);
\filldraw (8,0) circle (0.06cm);
\filldraw (9,0) circle (0.06cm);
\draw [very thick] (0,0) -- (4,0);
\draw [very thick, dashed] (4,0) -- (6,0);
\draw [very thick] (6,0) -- (9,0);
\node at (0, -0.5) {1};
\node at (1, -0.5) {2};
\node at (9,-0.5) {$\mbox{diam}(G)$};
\node at (5,-0.5) {$k$};
\draw [thick] (5,0) to[out=60, in=270] (5.5, 1) to[out=90, in=0] (5, 2) to[out=180, in=90] (4.5, 1) to[out=270, in=120] (5, 0);
\node at (5,1) {$G_{k,1}$};
\draw [thick] (5,0) to[out=30, in=270] (6.5,1) to[out=90, in=0] (6,2) to[out=180, in=90] (5.8, 1.5) to[out=270, in=40] (5,0);
\node at (6.1,1) {$G_{k,2}$};
\node at (0,1) {positive here};
\draw [->] (0,0.8) -- (0,0.2);
\node at (2.2,2.1) {maximum here};
\draw [->] (3.4, 2) -- (4,1.8);
\end{tikzpicture}
\end{center}
\caption{Setup of part 2 of the proof }\label{fig:thm2part2}
\end{figure}
We now play the game twice: first to obtain an upper bound on the maximum of $\phi_2$ (under the assumption
that this maximum is assumed in $G_{k,1}$) and then to obtain a lower bound on $\phi_2(1)$. The game in Section \ref{sec:game}
implies that
\begin{align}\label{eq:thm2:gamegraph}
\max_{j } \phi_2(j) - \phi_2(k) &= \mathbb{E} ~\mbox{payoff}\\
&\leq \lambda_{2} \cdot \left(\max_{j} \phi_2(j) \right) \cdot \mbox{hit}(G_{k,1})
\end{align}
where $k$ is the vertex at which $G_{k,1}$ is attached to the path.
Recall that, by Equation (\ref{eq:eig2bound}), $\lambda_2(G) \leq 10 \cdot \mbox{diam}(G)^{-2}$ and, by assumption 2 of the Theorem,
$$ \mbox{hit}(G_{k,1}) \leq \frac{\mbox{diam}(G)^2}{20}.$$
These imply that
\begin{equation} \label{eq:thm2:lambdahit}
\lambda_2(G)\cdot \mbox{hit}(G_{k,1}) \leq \frac{1}{2}.
\end{equation}
By Equation (\ref{eq:thm2:gamegraph}),
\begin{align} \label{eq:thm2:upperphi}
\max_{j } \phi_2(j) &\leq \frac{\phi_2(k)}{1 - \lambda_2 \cdot \mbox{hit}(G_{k,1}) },
\end{align}
with a positive denominator by Equation (\ref{eq:thm2:lambdahit}).
This implies that $\phi_2(k) > 0$.
We now start the game of Section \ref{sec:game} in the vertex 1 and obtain
$$ \phi_2(1) - \phi_2(k) = \mathbb{E}~ \mbox{payoff}.$$
It remains to understand the game. We jump around randomly and add
$$ \lambda_2 \frac{\phi_2(v)}{\mbox{deg}(v)} \qquad \mbox{at every step.}$$
Suppose we are on the path and $\mbox{deg}(v) \geq 3$. This means that there
are graphs $G_{v,\cdot}$ attached, and there is a chance of entering these graphs,
which, as we will show, increases the expected payoff. We will obtain a lower bound on the
expected number of times we encounter the vertex $1<v<k$ before moving on to
the next vertex on the path. We have encountered this type of computation before when
computing hitting times for the Fiedler rose. Introducing
$$ p = \frac{\mbox{deg}(v) - 2}{\mbox{deg}(v)}$$
we can bound
\begin{align*}
\mathbb{E}~\mbox{\# encounter} &\geq 1(1-p) + 2 p (1-p) + 3 p^2 (1-p) + \dots \\
&= \sum_{q=1}^{\infty}{ q (1-p) p^{q-1}} = \frac{1}{1-p} = \frac{\mbox{deg}(v)}{2}.
\end{align*}
This means that, in expectation, each passage through a vertex $v$ on the path contributes at least
$\lambda_2 \phi_2(v) \cdot (\mbox{deg}(v)/2)/\mbox{deg}(v) = \lambda_2\phi_2(v)/2$ to the payoff before we continue to the
next vertex on the path.
For comparison, if $\mbox{deg}(v)=2$, so that the vertex is not connected to any additional graph,
each visit to the vertex contributes $\lambda_2\phi_2(v)/2$ to the game and in the next step we move to one of the two attached vertices.
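The geometric series used for the expected number of encounters can be checked numerically; the following sketch (plain Python; the degrees in the loop are arbitrary test values) truncates the series $\sum_q q(1-p)p^{q-1}$ and compares it with $\mbox{deg}(v)/2$.

```python
def expected_encounters(deg, terms=10_000):
    """Truncation of sum_q q*(1-p)*p^(q-1) with p = (deg - 2)/deg."""
    # p = probability of stepping off the path into an attached graph
    p = (deg - 2) / deg
    return sum(q * (1 - p) * p ** (q - 1) for q in range(1, terms))

# the closed form of the series is 1/(1-p) = deg/2
for deg in [3, 4, 5, 10]:
    assert abs(expected_encounters(deg) - deg / 2) < 1e-9
```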
Corollary 1 yields
$$\lambda_2 \phi_2(v) \geq \lambda_2 \phi_2(k).$$
Abbreviating the maximum hitting time on a path graph by $\mbox{hit}(P_k)$, we
have
$$ \phi_2(1) - \phi_2(k) \geq \frac{1}{2} \lambda_2\cdot \phi_2(k) \cdot \mbox{hit}(P_k),$$
and thus the lower bound
\begin{equation}\label{eq:thm2:lower}
\phi_2(1) \geq \phi_2(k) \left( 1+\frac{1}{2} \lambda_2 \cdot \mbox{hit}(P_k) \right).
\end{equation}
Putting together Equations (\ref{eq:thm2:upperphi}) and (\ref{eq:thm2:lower}) and the assumption of part 2 of this proof, we have
\begin{equation}
\phi_2(k) \left( 1+\frac{1}{2} \lambda_2 \cdot \mbox{hit}(P_k) \right) \leq \phi_2(1) < \max_{j} \phi_2(j) \leq \frac{\phi_2(k)}{1 - \lambda_2 \cdot \mbox{hit}(G_{k,1}) }
\end{equation}
and since we observed that $\phi_2(k) > 0$,
\begin{equation}
\left( 1+\frac{1}{2} \lambda_2 \cdot \mbox{hit}(P_k) \right) \leq \frac{1}{1 - \lambda_2 \cdot \mbox{hit}(G_{k,1}) }.
\end{equation}
In other words, since the denominator is positive,
\begin{equation}
1+\frac{1}{2} \lambda_2 \cdot \mbox{hit}(P_k) - \lambda_2 \cdot \mbox{hit}(G_{k,1}) - \frac{1}{2} \lambda_2^2 \cdot \mbox{hit}(G_{k,1}) \cdot \mbox{hit}(P_k) \leq 1 ,
\end{equation}
and therefore
\begin{equation}
\mbox{hit}(G_{k,1}) \geq \frac{\mbox{hit}(P_k)}{2} - \frac{\lambda_2 \cdot \mbox{hit}(P_k) \mbox{hit}(G_{k,1})} {2} .
\end{equation}
By Equation (\ref{eq:eig2bound}) and Equation (\ref{eq:blum}),
\begin{equation}
\lambda_2 \cdot \frac{\mbox{hit}(P_k)}{2} \leq 5.
\end{equation}
Therefore,
\begin{equation}
\mbox{hit}(G_{k,1}) \geq \frac{\mbox{hit}(P_k)}{2} - 5 \cdot \mbox{hit}(G_{k,1}) .
\end{equation}
Thus,
\begin{equation}
\mbox{hit}(G_{k,1}) \geq \frac{ \mbox{hit}(P_k)}{12}.
\end{equation}
Using the hitting time bound (\ref{eq:blum}) and the second assumption of Theorem 2,
\begin{equation}
\frac{k^2}{50} \geq \mbox{hit}(G_{k,1}) \geq \frac{ \mbox{hit}(P_k)}{12} = \frac{(k-1)^2}{12},
\end{equation}
which is a contradiction.
\end{proof}
\subsection{Proof of the Proposition.}
\begin{proof}
We give a simple estimate that does not yield the sharp constant (for which we refer to Aldous \& Fill \cite{aldous}). Wherever we are, there is always at least one adjacent vertex that decreases the distance to the marked vertex (because the graph is connected). If we start anywhere, then the probability of moving $\mbox{diam}(T)$ consecutive times in a direction that decreases the distance to the marked vertex is at least
$$p = \Delta^{-\mbox{ \tiny diam}(T)}.$$
Such a run is not very likely; the expected number of runs until it happens is
\begin{align*}
\mathbb{E}~\mbox{number of runs} &\leq \mbox{diam}(T) \left( p + 2 (1-p) p + 3 (1-p)^2 p + \dots\right) \\
&= \mbox{diam}(T) \sum_{k=1}^{\infty}{ p (1-p)^{k-1} k} = \mbox{diam}(T) p^{-1} \\
&= \mbox{diam}(T) \Delta^{\mbox{\tiny diam}(T)}.
\end{align*}
This, however, establishes the desired result.
\end{proof}
% arXiv:1906.12272
\title{Weighing the Sun with five photographs}
\begin{abstract}
With only five photographs of the Sun at different dates, we show that the mass of the Sun can be calculated by using a telescope, a camera, and Kepler's third law. With these photographs we are able to calculate the distance between the Sun and Earth at different dates over a period of about three months. These distances allow us to obtain the correct elliptical orbit of Earth, proving Kepler's first law. The analysis of the data extracted from the photographs is performed by using an analytical optimization approach that allows us to find the parameters of the elliptical orbit. Also, it is shown that the five data points fit an ellipse using a geometrical scheme. The obtained parameters are in very good agreement with the ones for Earth's orbit, allowing us to foresee the future positions of Earth along its trajectory. The parameters of the orbit are used to calculate the Sun's mass by applying Kepler's third law and Newton's law of gravitation. This method gives a result which is in excellent agreement with the correct value for the Sun's mass. Thus, in a span of time of about three months, any student is capable of calculating the mass of the Sun with only five photographs, a telescope and a camera.
\end{abstract}
\section{Introduction}
Our solar system is completely dominated by the Sun. Its huge mass binds the eight planets in elliptical orbits around it. The proper explanation of the motion of each planet can be found in Newton's law of universal gravitation. One of the greatest triumphs of Newton's law is to provide the physical foundation for the empirical observations encompassed in the three Kepler's laws.
Understanding Kepler's laws allows us to calculate the relation between the shape of the orbit of each planet around the Sun and the amount of time (the period) that one revolution takes. This relation depends only on the mass of the Sun, and is usually used to calculate the distance to it or the period of the celestial object orbiting it. However, it can be used in the opposite way: if the distance and period are known, then the mass of the central star can be calculated.
The purpose of this work is to draw attention to the fact that the mass of the Sun can be accurately calculated with only five photographs of the Sun taken from Earth. Our aim is to show that any student can perform this project in a very easy way in a span of time of about four months. Only five photographs are needed, as they provide five data points for the Earth's position around the Sun. With those five points, a conic curve can always be found to fit them. As we will show below, that conic curve corresponds to an ellipse. Thus, knowing the ellipse properties, we are able to calculate the Sun's mass.
According to NASA \cite{nasasunshhet}, the values for the Sun's mass and diameter are
\begin{eqnarray}
M_\odot&=&1.9885\times 10^{30}\, \mbox{[Kg]} \, ,\label{datossolnasaMASA}\\
D_\odot&=&1.3914\times 10^6\, \mbox{[Km]}\, .\label{datossolnasa}
\end{eqnarray}
Also, Earth's orbit is elliptical, with an orbit eccentricity $\epsilon_E$ and a semimajor axis $a_E$ given by \cite{nasasunshhet2}
\begin{eqnarray}
\epsilon_E&=&0.0167 \, ,\label{datossolnasaEx2}\\
a_E&=&1.496\times 10^8\, \mbox{[Km]}\, .\label{datossolnasasemiejer}
\end{eqnarray}
Below, we show how, with five photographs, the values of parameters \eqref{datossolnasaEx2} and \eqref{datossolnasasemiejer}
can be accurately determined.
In order to perform the estimation for the Sun's mass, only the following few assumptions are required:
\begin{enumerate}
\item Earth revolves around Sun.
\item The time that Earth takes to perform one revolution around the Sun (a year) is known, given by
\begin{equation}\label{period}
T=365.25\, [\mbox{days}]=3.15576\times 10^7\, [\mbox{s}]\, .
\end{equation}
\item It is assumed that the Sun's diameter \eqref{datossolnasa} is known.
\end{enumerate}
The above reasonable assumptions are the key to estimating the Sun's mass. The photographs taken of the Sun are used to calculate the distance from Earth to it at five different dates. These distances and dates give the position of Earth with respect to the Sun, allowing us to calculate the Sun's mass using Kepler's laws.
In the following sections, we show how from the five photographs we are able to estimate the Sun's mass \eqref{datossolnasaMASA} with a percentage error of about 0.1\%.
The most sensitive part of this project is the photograph analysis, which is thoroughly explained in Sec.~\ref{photsunsection}. It is from this analysis that the distance from the Sun to Earth can be obtained, by relating the telescope angles (telescope features) to the physical angles and distances between the two astronomical objects. Thus, just by measuring angles in the images, we are able to measure distances.
Then, in Sec.~\ref{polarajuste}, the approach used to process the data obtained from the photographs is explained, showing that it allows us to find an ellipse with parameters in agreement with \eqref{datossolnasaEx2} and \eqref{datossolnasasemiejer}.
Besides, in Sec.~\ref{Keplersections}, we use the results from the previous sections to calculate the Sun's mass and to discuss how our data adjust to the three Kepler's laws.
Finally, in the last section, we comment the validity of our assumptions.
\section{Photographs of the Sun and its distance estimation}
\label{photsunsection}
During 2019, from March to June, five photographs of the Sun were taken from Santiago, Chile. The photographs are shown in Fig.~\ref{eclipseFig}.
\begin{figure*}[ht]
\begin{tabular}{cc}
\includegraphics[width=70mm]{sol1} & \includegraphics[width=74mm]{sol3} \\
(a) March 26th, 2019 & (b) May 7th, 2019 \\[6pt]
\includegraphics[width=61mm]{sol5} & \includegraphics[width=68mm]{sol4} \\
(c) May 20th, 2019 & (d) June 8th, 2019 \\[6pt]
\multicolumn{2}{c}{\includegraphics[width=70mm]{sol6} }\\
\multicolumn{2}{c}{(e) June 21st, 2019}
\end{tabular}
\caption{Five photographs of the Sun at different dates}
\label{eclipseFig}
\end{figure*}
The telescope used for this experiment is a Celestron NexStar 8SE. Its focal length is $2032$[mm], its apparent field of view is \ang{52}, and with an ocular of focal length $25$[mm] the magnification is $2032/25\approx 81$. An EclipSmart Solar Filter 8'' SCT was attached to the telescope. An iPhone 6, mounted on the telescope with
a NexYZ 3-Axis Universal Smartphone Adapter, was used to take the photographs.
The photographs, taken on several different dates, were used to calculate the distance from Earth to the Sun. The only required condition is that the image of the Sun lies completely inside the ocular of the telescope. Once that condition is achieved, the method to calculate the distance is straightforward.
By using the features of the telescope and comparing them with the images obtained, the physical distances to the observed objects can be inferred. The procedure is as follows.
A given telescope has an ocular focal length $d_o$, a focal length $f$, and an apparent field of view $m$. The telescope magnification is determined as $f/d_o$, and the effective ocular angle of the telescope $\phi_T$ can be obtained as
\begin{equation}
\phi_T=\left(\frac{\pi}{180}\right)\left(\frac{m\, d_o}{f}\right)\, ,
\end{equation}
measured in radians. For the telescope used in this work, $d_o=25$[mm], $f=2032$[mm], and $m=$ \ang{52}, giving $\phi_T=0.011165984$.
Once $\phi_T$ is determined, one can use it to obtain the apparent ocular angle $\phi_{aS}$ of any object whose image lies completely inside the ocular size of the telescope, as shown in Fig.~\ref{figuratelescopesuncal}(a). Let us assume that for a given photograph (such as those shown in Fig.~\ref{eclipseFig}) we measure the effective radius $r_o$ of the ocular and the effective radius $r_{aS}$ of the image (of the Sun in our case). As shown in Fig.~\ref{eclipseFig}, those radii can be measured with the GeoGebra software \cite{geogebra} at any convenient scale, as only their relative magnitude matters. If the two radii are known, then the apparent ocular angle of the image (of the Sun) is given simply by
\begin{equation}
\phi_{aS}=\left(\frac{r_{aS}}{r_o}\right)\phi_T\, .
\label{eqdistanSun0}
\end{equation}
\begin{figure}[h!]
\centering
\includegraphics[width=6cm]{telescope_cal}
\caption{(a) Scheme of the relation between the apparent ocular angle of an image $\phi_{aS}$ and the effective ocular angle $\phi_T$ of the telescope. (b) Relation between the apparent ocular angle of an image $\phi_{aS}$, its physical size $D_\odot$ and its distance $r$ to the observer.}
\label{figuratelescopesuncal}
\end{figure}
The apparent ocular angle $\phi_{aS}$ is the most important parameter to be determined in an image,
as it allows us to relate the real physical size of the object to its physical distance $r$ from the observer who took the photograph. As can be seen in Fig.~\ref{figuratelescopesuncal}(b), if the physical size of the object (in this case the diameter $D_\odot$ of the Sun) is known, then it can be modelled as the arc subtended by the ocular angle $\phi_{aS}$ at distance $r$, such that $D_\odot=r \, \phi_{aS}$. This is possible because the size of the Sun is much smaller than its distance from Earth. Therefore, we find that the distance from Earth to the Sun is
\begin{equation}
r=\frac{D_\odot}{\phi_{aS}}\, .
\label{eqdistanSun}
\end{equation}
In our case, taking the diameter of the Sun to be the value given in Eq.~\eqref{datossolnasa}, we can readily calculate distances $r$ by using the photographs and Eq.~\eqref{eqdistanSun}.
In Table \ref{bosons}, we tabulate the different photographs (ordered by date) with their corresponding values of $r_o$ and $r_{aS}$ measured directly from the photographs (see Fig.~\ref{eclipseFig}), together with the calculated values of $\phi_{aS}$ and $r$ obtained from Eqs.~\eqref{eqdistanSun0} and \eqref{eqdistanSun}. The radii $r_o$ and $r_{aS}$ listed in Table \ref{bosons} are measured in centimeters using the GeoGebra software. The measurement units of these quantities are not important, as we only require their ratio.
\begin{table*}[htp]
\begin{ruledtabular}
\begin{tabular}{c c c c c c}
photo & date & $r_o\, \mbox{(in cm)}$ & $r_{aS}\, \mbox{(in cm)}$ & $\phi_{aS}$ & $r\, \mbox{(in Km)}$ \\
\hline
1& March 26th, 2019 & 4.44039 & 3.71059 & 0.0093308 & 149119050.10\\
2& May 7th, 2019 & 4.46076 & 3.68514 & 0.009224485 & 150837681.61 \\
3& May 20th, 2019 & 4.82953 & 5.86592 & 0.009192418 & 151363869.43 \\
4& June 8th, 2019 & 4.39625 & 3.60794 & 0.009163765 & 151837154.80 \\
5& June 21st, 2019 & 4.52576 & 3.71146 & 0.009156938 & 151950354.55
\end{tabular}
\end{ruledtabular}
\centering
\caption{Estimation of distances $r$ to the Sun at different dates by using telescope and image measurements.}
\label{bosons}
\end{table*}
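The computations behind Table \ref{bosons} are short enough to script. The sketch below (plain Python) recomputes $\phi_T$ from the telescope data and then $\phi_{aS}$ and $r$ for the first photograph; the remaining rows follow in the same way.

```python
import math

# telescope data (Celestron NexStar 8SE with a 25 mm ocular)
d_o = 25.0        # ocular focal length [mm]
f = 2032.0        # telescope focal length [mm]
m = 52.0          # apparent field of view [degrees]
D_sun = 1.3914e6  # assumed solar diameter [km]

# effective ocular angle of the telescope, in radians
phi_T = (math.pi / 180.0) * (m * d_o / f)

# photograph 1 (March 26th): radii measured on the image with GeoGebra;
# only their ratio matters, not the units
r_o, r_aS = 4.44039, 3.71059

phi_aS = (r_aS / r_o) * phi_T   # apparent ocular angle of the Sun
r = D_sun / phi_aS              # Earth-Sun distance [km]

print(phi_T, phi_aS, r)         # reproduces the first row of the table
```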
Notice that at different dates, the Sun is at different distances $r$ from Earth. As we know that Earth is in orbit around the Sun, the data show that the orbit is not a circle.
In order to calculate the shape of the Earth's orbit we need to know the position of Earth during the whole timespan of the data acquisition. As the orbital trajectory is a two-dimensional curve, we need two coordinates to describe the position of Earth. As shown in Fig.~\ref{coordinatesfigure}, the coordinates can be defined using the distance from the Sun to Earth and its angular position (polar coordinates), or using the Earth's position with respect to the Sun in a cartesian plane (cartesian coordinates). Both systems are explored in the following sections.
\begin{figure}[h!]
\centering
\includegraphics[width=6cm]{orbitScheme}
\caption{Scheme of the Earth's elliptical orbit around the Sun (at one of the foci). Earth's position depends on the distance to the Sun and on its date along the year. This can be mathematically represented in polar coordinates $(r,\theta)$, or in cartesian coordinates $(x,y)$.}
\label{coordinatesfigure}
\end{figure}
From Table \ref{bosons}, we already have the magnitude of the distance $r$ for different dates, but we lack information on the angular position $\theta$ of Earth along the trajectory (see Fig.~\ref{coordinatesfigure}). This angle can be defined from the perihelion (the point of closest distance from Earth to the Sun) by choosing $\theta=0$ at that point.
We can estimate the relation between the date and the angular position of Earth in a very simple manner. First, as we know that Earth revolves in a closed orbit, let us take $\theta=2\pi$ after one year. Second, let us assume that Earth revolves around the Sun at a constant rate. With these considerations, the angular position $\theta$ can be related to the date of each photograph in the simplest possible way as
\begin{equation}\label{thetaconstantrate}
\theta=\frac{2\pi t}{T}\, ,
\end{equation}
measured in radians. Here, $T$ is the duration of one terrestrial year, given in days by Eq.~\eqref{period}, and
$t$ is the time (in days) that has passed since the perihelion of the orbit.
For Earth, during 2019, the perihelion occurred approximately on the night of January 2nd.
The relation \eqref{thetaconstantrate} between the angular position $\theta$ on the Earth's orbit
and the date along the year is not exact in principle, as we will show in Sec.~\ref{Keplersections}: Earth does not revolve at a constant rate along the complete orbit (that is the second Kepler's law). However, we will show that $\theta$ given by Eq.~\eqref{thetaconstantrate} is a very good approximation for Earth's orbit.
By using the dates of each photograph, in Table \ref{bosons2} we show the angular position of Earth along the trajectory for the different dates on which the photographs were taken.
\begin{table}[htp]
\begin{ruledtabular}
\begin{tabular}{c c c}
photo & $t$ & $\theta$ \\
\hline
1& 83 & 1.42780118 \\
2& 125 & 2.15030298 \\
3& 138& 2.37393449\\
4& 157 & 2.70078054 \\
5& 170 & 2.92441205
\end{tabular}
\end{ruledtabular}
\centering
\caption{Angular position $\theta$ of Earth at different dates, given by Eq.~\eqref{thetaconstantrate}.}
\label{bosons2}
\end{table}
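The entries of Table \ref{bosons2} can be reproduced from the calendar dates alone. The sketch below (plain Python; taking January 2nd, 2019 as the perihelion date) counts the elapsed days $t$ for each photograph and applies Eq.~\eqref{thetaconstantrate}.

```python
import math
from datetime import date

T = 365.25                    # length of a terrestrial year [days]
perihelion = date(2019, 1, 2) # approximate 2019 perihelion

photo_dates = [date(2019, 3, 26), date(2019, 5, 7), date(2019, 5, 20),
               date(2019, 6, 8), date(2019, 6, 21)]

# days since perihelion for each photograph
ts = [(d - perihelion).days for d in photo_dates]

# constant-rate approximation: theta = 2*pi*t/T, in radians
thetas = [2 * math.pi * t / T for t in ts]

print(ts)      # expected: [83, 125, 138, 157, 170]
print(thetas)  # first entry close to 1.42780118
```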
Now we have all the required data to calculate the form of the orbit of Earth. We collect the five distances $r$ and the angular positions $\theta$ extracted from Tables \ref{bosons} and \ref{bosons2} for the five photographs of the Sun
\begin{eqnarray}\label{netdata}
r_1&=& 149119050.10\mbox{ [Km]} \, ,\quad \theta_1 = 1.42780118 \, ,\nonumber\\
r_2&=& 150837681.61\mbox{ [Km]} \, ,\quad \theta_2 = 2.15030298 \, ,\nonumber\\
r_3&=& 151363869.43\mbox{ [Km]} \, ,\quad \theta_3 = 2.37393449 \, ,\nonumber\\
r_4&=& 151837154.80\mbox{ [Km]} \, ,\quad \theta_4 = 2.70078054 \, ,\nonumber\\
r_5&=& 151950354.55\mbox{ [Km]} \, ,\quad \theta_5 = 2.92441205 \, ,
\end{eqnarray}
where the subindex of distances and the angular positions
is related to the number of each photograph.
These data will be useful in the next sections in order to find the correct form of the orbit. In Sec.~\ref{polarajuste},
we show that the position data obtained from the photographs describe an ellipse in polar coordinates. Also, we show that the same elliptical orbit can be obtained in cartesian coordinates.
This orbit is in very good agreement with the ellipse with the correct values for the eccentricity and semimajor axis of Earth's orbit, \eqref{datossolnasaEx2} and \eqref{datossolnasasemiejer}.
\section{Orbit of the Earth from five photographs}
\label{polarajuste}
Using polar coordinates, we can develop an analytical process in which the equation for an ellipse is shown to be the equation that fits the previous data \eqref{netdata}. This is achieved by obtaining the ellipse parameters through an optimization scheme; the resulting parameters turn out to be in agreement with the values \eqref{datossolnasaEx2} and \eqref{datossolnasasemiejer}.
The equation of an ellipse, written in polar coordinates, is \cite{bate}
\begin{equation}\label{eqelipsegeneral}
r(\theta)=\frac{a \left(1-\epsilon^2\right)}{1+\epsilon\cos\theta}\, ,
\end{equation}
where $r$ and $\theta$ are depicted and defined in Fig.~\ref{coordinatesfigure}. Here, $a$ is the semimajor axis of the ellipse. The eccentricity is defined as
\begin{equation}
\epsilon=\sqrt{1-\frac{b^2}{a^2}}\, ,
\end{equation}
in terms of the semiminor axis $b$ (see Fig.~\ref{coordinatesfigure}). This equation implies that the Sun is at one of the foci, and that the distance from the Sun to Earth changes along the trajectory (as the angle changes). An eccentricity $\epsilon=0$ implies that the orbit is a circle, as $b=a$.
The purpose of this section is to show that the data points \eqref{netdata} are points on an ellipse \eqref{eqelipsegeneral}
with parameters $a$ and $\epsilon$ in agreement with those of Earth's orbit.
In order to prove this, we construct an optimization procedure that allows us to calculate the best fit of the parameters $a$ and $\epsilon$ to the data \eqref{netdata}. Let us define the error function
\begin{equation}\label{errorfunction}
E(a,\epsilon)=\sum_{i=1}^5 \left(r_i -\frac{a \left(1-\epsilon^2\right)}{1+\epsilon\cos\theta_i} \right)^2\, ,
\end{equation}
where the sum is on $r_i$ and $\theta_i$ for $i=1,2,3,4,5$,
the five data points displayed in \eqref{netdata}. This error function measures how far the data points are from fitting the ellipse equation \eqref{eqelipsegeneral}. To obtain the best-fit parameters, this function must be minimal. This is achieved by applying a minimization process to $E$ with respect to $a$ and $\epsilon$. Analytically, this implies that the partial derivatives of $E$ with respect to $a$ and to $\epsilon$ both vanish simultaneously. The condition $\partial E/\partial a=0$ can be written as
\begin{equation}\label{condminimiza1}
\sum_{i=1}^5\frac{r_i}{1+\epsilon \cos\theta_i}=\sum_{i=1}^5 \frac{a\left(1-\epsilon^2\right)}{\left(1+\epsilon \cos\theta_i\right)^2}\, ,
\end{equation}
while the condition $\partial E/\partial \epsilon=0$ leads to
\begin{eqnarray}\label{condminimiza2}
&&\sum_{i=1}^5\frac{r_i\left(2\epsilon+\left(1+\epsilon^2\right)\cos\theta_i\right)}{\left(1+\epsilon \cos\theta_i\right)^2}\nonumber\\
&&\qquad=\sum_{i=1}^5 \frac{a\left(1-\epsilon^2\right)\left(2\epsilon+\left(1+\epsilon^2\right)\cos\theta_i\right)}{\left(1+\epsilon \cos\theta_i\right)^3}\, .
\end{eqnarray}
Eqs.~\eqref{condminimiza1} and \eqref{condminimiza2} must be solved for $a$ and $\epsilon$; those values minimize the error function $E(a,\epsilon)$ given in \eqref{errorfunction}. Performing both summations \eqref{condminimiza1} and \eqref{condminimiza2} over the five data points \eqref{netdata} in order to solve the above equations for $\epsilon$ and $a$ can be done by analytical methods or by using computational programs. Thereby, using the data \eqref{netdata},
we obtain that the parameters minimizing the error function $E$
are
\begin{eqnarray}\label{datosellipeanalitical}
\epsilon&=& 0.01695\, ,\nonumber\\
a&=& 1.4952\times 10^8 \mbox{ [Km]}\, .
\end{eqnarray}
These values correspond to eccentricity and the semimajor axis of the ellipse described by Eq.~\eqref{eqelipsegeneral} that best fit the data points \eqref{netdata}.
Note the excellent agreement with the correct values \eqref{datossolnasaEx2} and \eqref{datossolnasasemiejer} for Earth's orbit. We can calculate the error of our prediction with respect to the values \eqref{datossolnasaEx2} and \eqref{datossolnasasemiejer}. We find that for the eccentricity the percentage error is about $\left|({\epsilon_E-\epsilon})/{\epsilon_E}\right|\approx 1.4\%$, while for the semimajor axis it is about $\left|({a_E-a})/{a_E}\right|\approx 0.05\%$.
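The stationarity conditions can be solved numerically without any specialized software: for each fixed $\epsilon$, Eq.~\eqref{condminimiza1} gives $a$ in closed form, so a one-dimensional grid search over $\epsilon$ suffices. The sketch below (plain Python; the grid bounds and step are arbitrary choices) recovers parameters close to \eqref{datosellipeanalitical}.

```python
import math

# five data points (r_i in km, theta_i in radians)
r = [149119050.10, 150837681.61, 151363869.43, 151837154.80, 151950354.55]
th = [1.42780118, 2.15030298, 2.37393449, 2.70078054, 2.92441205]

def best_a(eps):
    """Closed-form a minimizing E for fixed eps (condition dE/da = 0)."""
    num = sum(ri / (1 + eps * math.cos(ti)) for ri, ti in zip(r, th))
    den = (1 - eps**2) * sum(1 / (1 + eps * math.cos(ti))**2 for ti in th)
    return num / den

def error(eps):
    """Error function E(a(eps), eps) of the least-squares ellipse fit."""
    a = best_a(eps)
    return sum((ri - a * (1 - eps**2) / (1 + eps * math.cos(ti)))**2
               for ri, ti in zip(r, th))

# one-dimensional grid search over the eccentricity
eps = min((i * 1e-5 for i in range(5000)), key=error)
a = best_a(eps)
print(eps, a)   # close to eps ~ 0.017 and a ~ 1.495e8 km
```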
On the other hand, the quality of our fit can be quantitatively estimated by calculating the coefficient of determination
\begin{equation}
R^2=\left(\sum_{i=1}^5 \left(r(\theta_i)-\bar r\right)^2\right)\left(\sum_{i=1}^5 \left(r_i-\bar r\right)^2\right)^{-1}={0.9988}\, ,
\end{equation}
where $\bar r=(1/5)\sum_{i=1}^5 r_i$ is the average of our data, and $r(\theta_i)$ is the function \eqref{eqelipsegeneral} evaluated at the data points \eqref{netdata} with the found parameters \eqref{datosellipeanalitical}. The value of the coefficient $R^2$ tells us that our fit is very good.
The results \eqref{datosellipeanalitical} imply that the five points \eqref{netdata} correspond to points on an ellipse. This is shown in Fig.~\ref{ajustelipsepountosanaliti}, where the predicted elliptical orbit with parameters \eqref{datosellipeanalitical} (purple dashed line) is compared with the correct Earth's orbit (blue solid line). Notice how the data points \eqref{netdata}, shown as red solid circles, fit both curves very well.
\begin{figure}[h!]
\centering
\includegraphics[width=9cm]{puntoselipsanalit}
\caption{The plot shows the radius (measured in units of $10^8$[Km]) as a function of the angle $\theta$ for an elliptical orbit in polar coordinates. The blue solid line is the known Earth's elliptical orbit \eqref{eqelipsegeneral} with parameters \eqref{datossolnasaEx2} and \eqref{datossolnasasemiejer}. The purple dashed line is the Earth's elliptical orbit \eqref{eqelipsegeneral} predicted with parameters \eqref{datosellipeanalitical}.
The five data points \eqref{netdata} are shown as red circles. Notice the striking fit.}
\label{ajustelipsepountosanaliti}
\end{figure}
On the other hand, in order to evaluate our error in the determination of Earth's orbit, we can construct a difference function between the correct orbit [with parameters \eqref{datossolnasaEx2} and \eqref{datossolnasasemiejer}] and the one calculated by optimization over the five data points. This function reads
\begin{equation}\label{diffducntion}
\mbox{Diff}(\theta)=\left|\frac{a_E \left(1-\epsilon_E^2\right)}{1+\epsilon_E\cos\theta}-\frac{a \left(1-\epsilon^2\right)}{1+\epsilon\cos\theta}\right|\, .
\end{equation}
The absolute value of this difference tells us where our estimation error is larger. The difference function $\mbox{Diff}$ is plotted in
Fig.~\ref{errorelipsesanalit2}, from where we see that the error margin is larger at $\theta=0$ and $2\pi$, i.e., at the perihelion and aphelion. However, this error is about $\sim10^5$[Km], and thus our estimation has an approximate maximal percentage error of about
$0.1$\%. On the other hand, near $\theta\sim\pi$, in the lapse of time during which we took the photographs, the error decreases.
\begin{figure}[h!]
\centering
\includegraphics[width=8cm]{errorelipsesanalit}
\caption{Plot of the difference function Diff given in Eq.~\eqref{diffducntion} in units of $10^5$[Km]. }
\label{errorelipsesanalit2}
\end{figure}
In order to show in a more appealing fashion that the predicted elliptical orbit of Earth coincides with the real one, we plot the data points \eqref{netdata} in a cartesian plane using the GeoGebra software \cite{geogebra}. This is shown in Fig.~\ref{errorelipsesanalit5}, where the data points \eqref{netdata} now represent the different positions of Earth along the elliptical trajectory. The data points (in cartesian coordinates) are shown as orange solid circles, and the Sun is at one of the foci of the orbit. The black solid line is the elliptical orbit predicted by the parameters \eqref{datosellipeanalitical}, and, as we can see, the predicted orbit passes through all five points.
Another interesting feature of Fig.~\ref{errorelipsesanalit5} is shown in blue hollow circles. These points correspond to different future positions of Earth extracted from the Stellarium software \cite{estela}, and they lie on the orbit determined by the parameters \eqref{datosellipeanalitical}. In this way, our predicted orbit also foresees the future positions of Earth. Also, in dashed purple line, the elliptical conic curve that fits the five data points \eqref{netdata} is shown. In principle, a conic curve always passes through five points. However, in our case, this conic curve does not fit the elliptical orbit predicted by the parameters \eqref{datosellipeanalitical}. The main reason is the proximity (in time) of our data. In order to find an elliptical orbit by fitting a conic curve through five points, the data points should be obtained over a more extended span of time (a year), not only four months.
\begin{figure}[h!]
\centering
\includegraphics[width=8.7cm]{orbitaCartesian}
\caption{Earth's orbit in the Cartesian plane. Orange circles are the points \eqref{netdata}. The black solid line is the predicted orbit. Blue hollow circles are future positions of Earth taken from the Stellarium software. Our predicted orbit fits all the points. }
\label{errorelipsesanalit5}
\end{figure}
\section{Mass of the Sun and Kepler's laws}
\label{Keplersections}
In the previous section, we proved that the positions of Earth (with their distances and angles at several dates) correspond to points on an elliptical orbit with the Sun at one focus. Our estimates of the parameters of the elliptical orbit are in very good agreement with the real values for Earth's orbit. Therefore, we have verified Kepler's first law. This law establishes that elliptical orbits are a direct consequence of Newton's central gravitational force (which is inversely proportional to the square of the distance) due to the central star \cite{prentis,yasou}.
Even more interesting is Kepler's third law, which relates the shape of the ellipse to the mass of the central star (in our case, the Sun). When the star is at one of the foci of the elliptical orbit, the third law tells us that the cube of the semimajor axis $a$ of the orbit is proportional to the square of the period $T$ of one revolution around that star. Mathematically, it
is written as
\begin{equation}\label{secondKlaw}
\frac{a^3}{T^2}=\frac{GM}{4\pi^2}\, ,
\end{equation}
where $G=6.6726\times 10^{-11}$[$\mbox{m}^3\mbox{s}^{-2}/\mbox{Kg}$] is the universal gravitational constant \cite{nrl}, and $M$ is the mass of the central star.
By using Eq.~\eqref{secondKlaw} we can calculate the Sun's mass.
Since in Sec.~\ref{polarajuste} we have already found $a$, and the period $T$ is given in \eqref{period}, we can use our results together with the parameters \eqref{datosellipeanalitical} to calculate this mass. The calculation of the Sun's mass is simply
\begin{equation}\label{masasun1}
{M}=\frac{4\pi^2 a^3}{G T^2}=1.98586\times 10^{30}\mbox{[Kg]}\, ,
\end{equation}
where $T$ has been expressed in seconds \eqref{period}. Compare this mass estimate with the real value of the Sun's mass given in
\eqref{datossolnasaMASA}. This is the most important result of this work. With only five photographs, and using Kepler's third law,
we have been able to estimate the mass of the Sun with a percentage error of about $|(M_{\odot}-M)/M_{\odot}|\sim 0.13\%$. This shows that with only five photographs, the mass calculation is strikingly accurate.
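The arithmetic in Eq.~\eqref{masasun1} can be reproduced in a few lines. The sketch below uses the nominal values $a\approx 1.496\times10^{11}$ m and $T\approx 3.1558\times10^{7}$ s, which are assumed round numbers close to, but not identical to, the fitted parameters, so the result differs slightly from Eq.~\eqref{masasun1}:

```python
import math

G = 6.6726e-11     # universal gravitational constant [m^3 s^-2 / kg]
a = 1.496e11       # semimajor axis [m] (nominal value, not our fitted one)
T = 3.1558e7       # orbital period [s] (one sidereal year, nominal value)

# Kepler's third law solved for the central mass: M = 4 pi^2 a^3 / (G T^2)
M = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"{M:.4e} kg")  # ~1.99e30 kg, close to the accepted solar mass
```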
Lastly, we are in position to discuss the second Kepler's law.
This law states that a line between the Sun and the planet sweeps out equal areas in equal times. This is a consequence of the conservation of the angular momentum of the planet. Basically, it establishes that any planet moves faster when it is closer to the Sun and slower when it is farther away, implying that the angular velocity is not constant, i.e., a planet does not revolve around the Sun at a constant rate. In mathematical terms, Kepler's second law for Earth (conservation of its angular momentum) is written as $r^2 (d\theta/dt)=\sqrt{G\, a_E(1-\epsilon_E^2) M_\odot}$, where $d\theta/dt$ is the derivative of the angular position, i.e., its rate of change in time \cite{bate}. Using Kepler's third law and the elliptical form of the orbit, this equation can be put in the form
\begin{equation}
\frac{d\theta_E}{dt}=\frac{2\pi}{T}\frac{\left(1+\epsilon_E\cos\theta\right)^2}{\left( 1-\epsilon_E^2\right)^{3/2}}\, .
\end{equation}
Compare this result with Eq.~\eqref{thetaconstantrate} for an orbital motion at constant rate. Taking the derivative of that equation, we find
\begin{equation}
\frac{d\theta}{dt}=\frac{2\pi}{T}\, .
\end{equation}
This was our assumption of revolutions at a constant rate
in the calculations of Sec.~III. Nevertheless, because $\epsilon_E\ll 1$, the maximum error of our assumption (occurring at the perihelion) is of the order
$\left|({d\theta_E/dt-d\theta/dt})/({d\theta_E/dt})\right|\sim 2\epsilon_E$,
i.e., the maximum percentage error introduced by considering Earth to move at a constant rate is about 3.4\%. As Earth moves away from the perihelion, this error becomes very small. For example, for photograph 1 taken on March 26th (day 83, angle $\theta\approx1.4278$), the error is $\sim |2\epsilon_E\cos\theta|=0.48$\%. Therefore, our assumption \eqref{thetaconstantrate} of revolutions at a constant rate is justified and consistent with Kepler's second law up to percentage errors of the order of 1\%. This is why our results are so good even though, strictly speaking, we are violating Kepler's second law.
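The size of this error is easy to evaluate directly from the exact rate $d\theta_E/dt$ above. The sketch below assumes the nominal eccentricity $\epsilon_E\approx 0.0167$ (a standard value, which may differ slightly from our fitted one):

```python
import math

eps = 0.0167  # Earth's orbital eccentricity (nominal, assumed value)

def rate_ratio(theta):
    """Ratio (d theta_E / dt) / (2 pi / T) from Kepler's second law."""
    return (1 + eps * math.cos(theta))**2 / (1 - eps**2)**1.5

def rel_error(theta):
    """Relative error of the constant-rate assumption at angle theta."""
    return abs(1 - 1 / rate_ratio(theta))

print(rel_error(0.0))     # perihelion: ~0.033, i.e., about 3.3-3.4%
print(rel_error(1.4278))  # photograph 1: ~0.005, i.e., about 0.5%
```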
\section{Conclusions}
With this work, we have shown that, in a lapse of time of four months, several features of the Earth's trajectory and of the Sun can be obtained from only five photographs. Any student can perform the present analysis under the guidance of a teacher. For example, students performing this experience will
realize that, just by measuring angles from Earth, distances to any celestial body can be calculated.
Most importantly, this experience combines practical astronomical observations and theoretical physics knowledge with analytical and computational skills. These are the kind of proficiencies that any student should possess.
Several comments are useful to anyone attempting to repeat this experience. First, with the five points presented here, we were able to calculate the elliptical orbit \eqref{eqelipsegeneral} under the assumption that the Sun is at a focus of the ellipse. With more photographs, or the same number of photographs taken over a larger lapse of time, it is even possible to prove that the Sun is at one focus of the ellipse. Secondly, we do not recommend taking data points within a small lapse of time, as this increases the magnitude of the estimation error.
Also, the photographs should be taken on a sunny day with the Sun at its highest position, as this decreases the atmospheric aberration. On the other hand, all the photographs in Fig.~\ref{eclipseFig} have errors associated with the measurements of $r_o$ and $r_{aS}$. One can improve the accuracy of their calculation by counting the pixels
in the photographs. This allows us to minimize the error associated with the focusing of the photographs. This is important because the Sun (when seen through the telescope ocular) varies its angular size very little during a year. Fig.~\ref{minmax} shows the Sun's maximum ($32'32''$) and minimum ($31'27''$) angular sizes; the difference between them is only $1'5''$, or $\ang{0.0181}$. For this experience, a correct calculation of $r_{aS}$ through
a careful measurement of the angles is therefore imperative.
\begin{figure}[h!]
\centering
\includegraphics[width=7cm]{minmax}
\caption{Maximum and minimum of the Sun's angular size.}
\label{minmax}
\end{figure}
Finally, the quality of the telescope's solar filter is important in order to obtain good photographs. Also, a better focusing of the photographs can be achieved when solar spots are present.
On the other hand, the eccentricity can be measured by knowing only the positions of Earth's perihelion and aphelion. By taking photographs at those points, a geometrical method can be used to estimate the eccentricity. However, this approach assumes the elliptical orbit in advance, and those two points do not provide enough information to calculate the orbit's parameters. On the contrary,
following the same procedure of this paper, the orbits of the Moon and Jupiter can be calculated, as these bodies are visible from Earth using telescopes. These works are left for the future.
All the above considerations lead to, or improve, the results shown along this work.
We strongly believe that this experience can be carried out in a simple manner by a group of students, who will discover how to calculate something apparently difficult in a very straightforward and precise way.
The process of taking the photographs, the use and understanding of a telescope,
the computational analysis to measure the radii, the analytical analysis to determine the orbit, and the theoretical use of Kepler's laws to estimate the Sun's mass can be used by a teacher to encourage the pursuit of scientific knowledge in students and, at the same time, to explain to them the different steps that a scientist must take in order to unveil the deep truths behind the facts.
\begin{acknowledgments}
F.A.A. was supported by Fondecyt-Chile Grant No. 1180139.
\end{acknowledgments}
| {
"timestamp": "2019-07-01T02:18:31",
"yymm": "1906",
"arxiv_id": "1906.12272",
"language": "en",
"url": "https://arxiv.org/abs/1906.12272",
"abstract": "With only five photographs of the Sun at different dates we show that the mass of Sun can be calculated by using a telescope, a camera, and the Kepler's third law. With these photographs we are able to calculate the distance between Sun and Earth at different dates in a period of time of about three months. These distances allow us to obtain the correct elliptical orbit of Earth, proving the Kepler's first law.The analysis of the data extracted from photographs is performed by using an analytical optimization approach that allow us to find the parameters of the elliptical orbit. Also, it is shown that the five data points fit an ellipse using an geometrical scheme. The obtained parameters are in very good agreement with the ones for Earth's orbit, allowing us to foresee the future positions of Earth along its trajectory. The parameters for the orbit are used to calculate the Sun's mass by applying the Kepler's third law and Newton's law for gravitation. This method gives a result wich is in excellent agreement with the correct value for the Sun's mass. Thus, in a span of time of about three months, any student is capable to calculate the mass of the sun with only five photographs, a telescope and a camera.",
"subjects": "Physics Education (physics.ed-ph)",
"title": "Weighing the Sun with five photographs"
} |
https://arxiv.org/abs/1607.00420 | Coloring the power graph of a semigroup | Let $G$ be a semigroup. The vertices of the power graph $\mathcal{P}(G)$ are the elements of $G$, and two elements are adjacent if and only if one of them is a power of the other. We show that the chromatic number of $\mathcal{P}(G)$ is at most countable, answering a recent question of Aalipour et al. | \section{Introduction}
This note is devoted to a graph constructed in a special way from a given semigroup $G$. This graph is called the \textit{power graph} of $G$, denoted by $\mathcal{P}(G)$, and its vertices are the elements of $G$. Elements $g,h\in G$ are adjacent if and only if one of them is a power of the other, that is, if we have either $g=h^k$ or $h=g^k$ for some $k\in\mathbb{N}$. (Here and in what follows, $\mathbb{N}$ denotes the set of positive integers.) This concept has attracted some attention in both discrete mathematics and group theory; see~\cite{Cam, CG}.
Let us consider a related directed graph $D(G)$ on the set $G$: we let $D(G)$ contain an edge leading from $x$ to $y$ if and only if $y$ is a power of $x$. Clearly, the outdegree of any vertex of $D(G)$ is at most countable, and one obtains the graph $\mathcal{P}(G)$ by forgetting the orientations of the edges of $D(G)$. A classical result by Fodor (see~\cite{Fod, Kom}) then shows that the chromatic number of $\mathcal{P}(G)$ does not exceed any uncountable cardinal. Is this chromatic number always at most countable? This question was studied in~\cite{AAC} by Aalipour et al., who answered it in special cases that include groups of finite exponent, free groups, and Abelian groups. However, the general version of the problem remained open even for groups, and it was posed in~\cite{AAC} as Question~42. This note gives an affirmative answer to this question. Moreover, we prove that the chromatic number of $\mathcal{P}(G)$ is countable even if $G$ is an arbitrary power-associative magma. (Here, a \textit{magma} is a set endowed with a binary operation. A magma is called \textit{power-associative} if every sub-magma generated by a single element is associative.)
We proceed with the proof. The \textit{order} of an element $h\in G$ is the cardinality of the subsemigroup generated by $h$. An element $h\in G$ is called \textit{cyclic} if the subsemigroup generated by $h$ is a finite group. In other words, an element $h$ is cyclic if and only if the equality $h=h^{n+1}$ holds for some positive integer $n$. If $h$ is not cyclic but has a finite order, then the \textit{pre-period} of $h$ is defined as the largest $p$ such that the element $h^p$ occurs in the sequence $h,h^2,h^3,\ldots$ exactly once.
\section{Coloring the elements of finite order}
The following claim allows us to split the set of all cyclic elements into a union of countably many independent sets.
\begin{claim}\label{claim1}
Fix a number $n\in\mathbb{N}$. The subgraph of $\mathcal{P}(G)$ induced by the set of cyclic elements of order $n$ is a union of cliques of size at most $n$.
\end{claim}
\begin{proof}
Denote this induced subgraph by $P'$. Let $\thicksim$ be the relation on $P'$ containing those pairs $(x,y)$ such that $x$ is a power of $y$; this relation is clearly reflexive and transitive. Assuming that $x\thicksim y$, we get $x=y^p$, and we note that $p$ is relatively prime to $n$ because the orders of $x,y$ are equal to $n$. So we get $pq+p'n=1$ for some $p'\in\mathbb{Z}$, $q\in\mathbb{N}$, which shows that $y=x^q$. Therefore, $\thicksim$ is an equivalence relation, and every equivalence class is a subset of the set of powers of some $x\in P'$.
\end{proof}
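As a concrete sanity check of Claim~\ref{claim1} (our own illustration, not part of the proof), one can build the power graph of the additive group $\mathbb{Z}_{12}$ and verify that the $\varphi(12)=4$ elements of order $12$ form a clique:

```python
from math import gcd

n = 12
elements = list(range(n))

def powers(x):
    """Positive 'powers' of x in the additive group Z_n: x, 2x, 3x, ..."""
    return {(k * x) % n for k in range(1, n + 1)}

def adjacent(x, y):
    """Adjacency in the power graph P(Z_n)."""
    return x != y and (x in powers(y) or y in powers(x))

# Elements of additive order 12 are exactly those coprime to 12.
order_12 = [x for x in elements if x != 0 and n // gcd(x, n) == n]
assert order_12 == [1, 5, 7, 11]  # the phi(12) = 4 generators

# Claim 1: they induce a clique in the power graph P(Z_12).
clique = all(adjacent(x, y) for x in order_12 for y in order_12 if x != y)
print(clique)  # True
```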
As we see from the proof, the sizes of the cliques in Claim~\ref{claim1} are equal to $\varphi(n)$, where $\varphi$ is Euler's totient function. This result is similar to Theorem~15 in~\cite{AAC}. Now we are going to prove that the set of all non-cyclic elements of finite order can be represented as a union of countably many independent sets. We need the following observation.
\begin{obs}\label{obs111}
If $g\in G$ has a finite order $n$ and pre-period $p$, then $g^q$ is cyclic for all $q>p$.
\end{obs}
\begin{proof}
We have $g^{n+1}=g^{p+1}$, so that $(g^q)^{n-p+1}=g^{p+1}g^{(n-p)q}g^{q-p-1}=g^q$.
\end{proof}
\begin{claim}\label{claim11}
Let $g,h\in G$ be distinct elements with finite orders. If $g,h$ have the same pre-period $p$, then they are non-adjacent in $\mathcal{P}(G)$.
\end{claim}
\begin{proof}
Assume the result is not true. Then we have $g=h^t$ for some $t>1$. (We omit the case $h=g^t$, which is treated similarly.) Observation~\ref{obs111} shows that $g^p=h^{tp}$ is a cyclic element, which contradicts the initial assumption that $p$ is the pre-period of $g$.
\end{proof}
\section{Coloring the elements of infinite order}
In the following claim, we assume $m,n\in\mathbb{N}$, and we denote by $G(x,m,n)$ the set of all $y\in G$ satisfying $x^m=y^n$.
\begin{claim}\label{claim2}
Let $x\in G$ be an element of infinite order. Then the set $G(x,m,n)$ is independent in $\mathcal{P}(G)$.
\end{claim}
\begin{proof}
Assume that $k\in\mathbb{N}$ and $y,z\in G$ are such that $x^m=y^n$, $x^m=z^n$, $y=z^k$. Then we have $x^{mk}=z^{kn}=y^n=x^m$. Since $x$ has infinite order, we get $k=1$, which implies $y=z$ and completes the proof.
\end{proof}
In what follows, we denote by $\pi\subset G$ the set of elements of finite orders and by $\mathcal{P}_*(G)$ the graph obtained from $\mathcal{P}(G)$ by removing the vertices in $\pi$.
\begin{claim}\label{claim3}
Let $x\in G$ be an element of infinite order. We define the set $C(x)=\bigcup_{m,n\in\mathbb{N}}G(x,m,n)$. Then $C(x)$ is a connected component of $\mathcal{P}_*(G)$.
\end{claim}
\begin{proof}
If we have $x^{m_1}=g^{n_1}$, $x^{m_2}=h^{n_2}$ with $m_1,m_2,n_1,n_2\in\mathbb{N}$, then both $g$ and $h$ are adjacent to $x^{m_1m_2}\in C(x)$, which shows that $C(x)$ is connected. Now assume that an element $z\in \mathcal{P}_*(G)$ is adjacent to a vertex $y$ in $C(x)$. Then we have $x^m=y^n$ and $y^{p}=z^{q}$ with positive integers $m,n,p,q$ (and either $p=1$ or $q=1$, but this fact is not relevant for our proof). We get $x^{mp}=z^{nq}$, which implies that $z$ belongs to $C(x)$ as well. In other words, the vertices in $C(x)$ can be adjacent only to vertices in $C(x)$.
\end{proof}
Now we are ready to prove our main result, which states that $G$ is a union of countably many independent subsets of $\mathcal{P}(G)$. Claim~\ref{claim1} shows that the subgraph of $\mathcal{P}(G)$ induced by the cyclic elements can be covered by countably many independent sets. Claim~\ref{claim11} proves the same for the subgraph induced by those elements that have finite order but are not cyclic. Together, these claims allow us to cover the set $\pi$ by countably many independent sets of $\mathcal{P}(G)$.
We denote by $\{C_\alpha\}$ the set of all connected components of the graph $\mathcal{P}_*(G)$, which is obtained from $\mathcal{P}(G)$ by removing the vertices in $\pi$. We choose an element $x_\alpha$ in every connected component $C_\alpha$, and we deduce from Claim~\ref{claim3} that $C_\alpha=C(x_\alpha)$ for all indices $\alpha$. Claim~\ref{claim2} shows that every $C(x_\alpha)$ is the union of the independent sets $G(x_\alpha,m,n)$ over all pairs of positive integers $(m,n)$. Hence $G\setminus\pi$ is the union of the countably many independent sets $\cup_{\alpha}G(x_\alpha,m,n)$, which completes the proof.
| {
"timestamp": "2016-07-05T02:01:01",
"yymm": "1607",
"arxiv_id": "1607.00420",
"language": "en",
"url": "https://arxiv.org/abs/1607.00420",
"abstract": "Let $G$ be a semigroup. The vertices of the power graph $\\mathcal{P}(G)$ are the elements of $G$, and two elements are adjacent if and only if one of them is a power of the other. We show that the chromatic number of $\\mathcal{P}(G)$ is at most countable, answering a recent question of Aalipour et al.",
"subjects": "Combinatorics (math.CO)",
"title": "Coloring the power graph of a semigroup"
} |
https://arxiv.org/abs/2008.01693 | Stable and accurate numerical methods for generalized Kirchhoff-Love plates | Efficient and accurate numerical algorithms are developed to solve a generalized Kirchhoff-Love plate model subject to three common physical boundary conditions: (i) clamped; (ii) simply supported; and (iii) free. We solve the model equation by discretizing the spatial derivatives using second-order finite-difference schemes, and then advancing the semi-discrete problem in time with either an explicit predictor-corrector or an implicit Newmark-Beta time-stepping algorithm. Stability analysis is conducted for the schemes and the results are used to determine stable time steps in practice.A series of carefully chosen test problems are solved to demonstrate the properties and applications of our numerical approaches. The numerical results confirm the stability and 2nd-order accuracy of the algorithms, and are also comparable with experiments for similar thin plates. As an application, we illustrate a strategy to identify the natural frequencies of a plate using our numerical methods in conjunction with a fast Fourier transformation (FFT) power spectrum analysis of the computed data. Then we take advantage of one of the computed natural frequencies to simulate the interesting physical phenomena known as resonance and beat for a generalized Kirchhoff-Love plate. |
\section{Stability analysis and time step determination}\label{sec:analysis}
We study the stability of the schemes and use the analytical results to determine stable time steps in practical computations. As already pointed out in \cite{Newmark59}, the implicit NB2 time-stepping scheme is unconditionally stable, so the focus of the stability analysis here is on the explicit PC22 scheme.
\subsection{Stability of the PC22 scheme}
\begin{figure}[h]
\centering
\includegraphics[width=2.in,height=2.in]{fig/ABAMstabilityRegion.eps}
\caption{Regions of absolute stability for the PC22 time-step scheme and the scheme with an AB2 predictor only. The approximated region of stability using a half super-ellipse is also depicted in the plot. Here, $z=\lambda\Delta t$ with $\lambda$ being the time-stepping eigenvalue and $\Delta t$ representing the time step. $\Re(z)$ and $\Im(z)$ represent the real and imaginary parts of the complex number $z$.}
\label{fig:AbsoluteStabilityRegion}
\end{figure}
Applying the PC22 scheme to the Dahlquist test equation $\eta'=\lambda \eta$ leads to the characteristic polynomial for a complex-valued amplification factor $\zeta$. Letting $z=\lambda\Delta t$, the roots of the characteristic equation are found to be
\begin{equation}\label{eq:RootsOfCharacteristicEqnPC22}
\zeta(z)=\dfrac{1}{2}\left(1+z+\frac{3}{4}z^2\pm\sqrt{\left(1+z+\frac{3}{4}z^2\right)^2-z^2}\right).
\end{equation}
The region of absolute stability for the PC22 time-stepping scheme is the set of complex values $z$ for which the roots of the characteristic polynomial satisfy $|\zeta(z)| \leq 1$.
The stability region can be used to find the time step restriction for a typical problem. However, it is not straightforward to obtain a stable time step by solving the inequality $|\zeta(z)| \leq 1$ directly from \eqref{eq:RootsOfCharacteristicEqnPC22}, so a half super-ellipse is introduced as an approximation of the stability region. To be specific, we define the half super-ellipse by
\begin{equation}\label{eq:approxStabilityRegion}
\left|\frac{\Re(z)}{a} \right|^n+\left|\dfrac{\Im(z)}{b} \right|^n \leq 1 \quad\text{and}\quad \Re(z) \leq 0,
\end{equation}
where $\Re(z)$ and $\Im(z)$ denote the real and imaginary parts of $z$, respectively. We want the half super-ellipse to be completely enclosed by the actual region of stability and to be as large as possible. Given a time-stepping eigenvalue $\lambda$, it is much easier to find a sufficient condition for stability by requiring $\lambda\Delta t$ to be inside the approximated region defined by \eqref{eq:approxStabilityRegion}.
It is found that the above half super-ellipse makes a good approximation for the stability region of the PC22 time-stepping scheme by setting $a=1.75,b=1.2$ and $n=1.5$.
The region of absolute stability for the PC22 time-stepping scheme, together with the approximating region, is shown in Figure~\ref{fig:AbsoluteStabilityRegion}. For comparison purposes, we also plot the stability region of the scheme that uses only the AB2 predictor. We can see that, by including a corrector step, the PC22 scheme has a much larger stability region than the predictor alone, and its stability region includes a segment of the imaginary axis, so the scheme can be used for problems with no dissipation. From the plot, we can also see that the half super-ellipse chosen as an approximation fits entirely inside the original stability region of the PC22 scheme.
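The claim that the half super-ellipse with $a=1.75$, $b=1.2$, $n=1.5$ lies inside the stability region can be verified numerically by sampling its boundary and evaluating the roots in \eqref{eq:RootsOfCharacteristicEqnPC22}; the following sketch (an illustration, not the code used to produce Figure~\ref{fig:AbsoluteStabilityRegion}) performs this check:

```python
import numpy as np

a, b, n = 1.75, 1.2, 1.5  # half super-ellipse parameters from the text

def max_root_modulus(z):
    """Largest |zeta| among the two roots of
    zeta^2 - (1 + z + 3z^2/4) zeta + z^2/4 = 0 (PC22 applied to eta' = lambda eta)."""
    roots = np.roots([1.0, -(1.0 + z + 0.75 * z**2), 0.25 * z**2])
    return np.max(np.abs(roots))

# Sample the curved boundary of the half super-ellipse in the left
# half-plane (Re z <= 0), parametrized so that |Re/a|^n + |Im/b|^n = 1.
t = np.linspace(np.pi / 2, 3 * np.pi / 2, 401)
re = -a * np.abs(np.cos(t))**(2.0 / n)
im = b * np.sign(np.sin(t)) * np.abs(np.sin(t))**(2.0 / n)

worst = max(max_root_modulus(zr + 1j * zi) for zr, zi in zip(re, im))
print(worst < 1.0)  # True: the sampled boundary lies in the stability region
```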
\subsection{Time-step determination}
A strategy for the determination of stable time steps to be used in Algorithms~\ref{alg:pc22} \& \ref{alg:nb2} is outlined here. We first transform the semi-discrete problem \eqref{eq:ODEw}--\eqref{eq:ODEa} into Fourier Space, and then derive a stable time step by imposing the condition that the product of the time step and any eigenvalue of the Fourier transformation of the difference operators lies inside the stability region of a particular time-stepping method for all wave numbers.
For simplicity of presentation, we assume the plate resides on a unit square domain ($\Omega=[0,1]\times[0,1]$), and the solution is 1-periodic in both $x$ and $y$ directions. Results on a more general domain can be readily obtained by mapping the general domain to a unit square. After Fourier transforming the homogeneous version of equations \eqref{eq:ODEw}--\eqref{eq:ODEa} (i.e., assuming $F_\mathbf{ i}(t)\equiv0$), an ODE system for the transformed variables ($\hat{w}$, $\hat{v}$) is derived with $\omega=(\omega_x,\omega_y)$ denoting any wavenumber pair,
\begin{equation}\label{m1}
\begin{bmatrix}
\hat{w}(\omega,t)\\
\hat{v}(\omega,t)
\end{bmatrix}_t =
{\hat{Q}(\omega)}
\begin{bmatrix}
\hat{w}(\omega,t)\\
\hat{v}(\omega,t)
\end{bmatrix},
~~\text{where}~~
\hat{Q}(\omega)= \begin{bmatrix}
0 & 1 \\
-\hat{{\mathcal K}}(\omega) & -\hat{{\mathcal B}}(\omega)
\end{bmatrix}.
\end{equation}
Here $\hat{{\mathcal K}}$ and $\hat{{\mathcal B}}$ are the Fourier transformations of the difference operators ${\mathcal K}_h/(\rho h)$ and ${\mathcal B}_h/(\rho h)$, respectively. Let $k_{\omega_x}={2\sin({\omega_xh_x}/{2})}/{h_x}$ and $k_{\omega_y}={2\sin({\omega_yh_y}/{2})}/{h_y}$, where $h_x$ and $h_y$ are the grid spacings in the corresponding directions; then we have
\begin{alignat*}{2}
\hat{{\mathcal K}}(\omega) &=\frac{1}{\rho{h}}\left[K_0+T\left(k^2_{\omega_x}+k^2_{\omega_y}\right)+D\left(k^4_{\omega_x}+2 k^2_{\omega_x}k^2_{\omega_y}+ k^4_{\omega_y}\right)\right],\\
\hat{{\mathcal B}}(\omega) &=\frac{1}{\rho{h}}\left[K_1+T_1\left(k^2_{\omega_x}+k^2_{\omega_y}\right)\right].
\end{alignat*}
Note that both $\hat{{\mathcal K}}$ and $\hat{{\mathcal B}}$ are non-negative for any $\omega$. In the analysis to follow, their maximum values, denoted by $\hat{{\mathcal K}}_M$ and $\hat{{\mathcal B}}_M$, are of interest; these are attained when $\omega_xh_x=n\pi$ and $\omega_yh_y=m\pi$ for odd integers $n,m$; namely,
\begin{alignat}{2}
\hat{{\mathcal K}}_M&=\frac{1}{\rho{h}}\left[K_0+4{T}\left(\frac{1}{h_x^2}+\frac{1}{h_y^2}\right)+16D\left(\frac{1}{h_x^2}+\frac{1}{h_y^2}\right)^2\right] ,\label{eq:KM}\\
\hat{{\mathcal B}}_M&=\frac{1}{\rho{h}}\left[K_1+4T_1\left(\frac{1}{h_x^2}+\frac{1}{h_y^2}\right)\right] .\label{eq:BM}
\end{alignat}
A numerical method is stable provided that all the eigenvalues of $\hat{Q}(\omega) \Delta t$ lie within the stability region of the time-stepping method. The eigenvalues of the coefficient matrix $\hat{Q}(\omega)$ for the problem \eqref{m1} are
\begin{equation}\label{eq:eigenValueStability}
\hat{\lambda}(\omega)=-\frac{\hat{{\mathcal B}}(\omega)}{2}\pm \sqrt{\left(\frac{\hat{{\mathcal B}}(\omega)}{2}\right)^2-\hat{{\mathcal K}}(\omega)}.
\end{equation}
For stability analysis, it suffices to consider the eigenvalue with the largest possible magnitude denoted by $\hat{\lambda}_M$. To find $\hat{\lambda}_M$, we consider the following situations.
\newcommand{\left({\hat{{\mathcal B}}(\omega)}/{2}\right)^2-\hat{{\mathcal K}}(\omega)}{\left({\hat{{\mathcal B}}(\omega)}/{2}\right)^2-\hat{{\mathcal K}}(\omega)}
\textbf{Under-damped case.} If $\left({\hat{{\mathcal B}}(\omega)}/{2}\right)^2-\hat{{\mathcal K}}(\omega)<0$, we obtain complex eigenvalues,
$$
\hat{\lambda}(\omega)=-\frac{\hat{{\mathcal B}}(\omega)}{2}\pm i\sqrt{\hat{{\mathcal K}}(\omega)-\left(\frac{\hat{{\mathcal B}}(\omega)}{2}\right)^2}.
$$
In this case, we may define
\begin{equation}\label{eq:lambdaMUnderDamped}
\hat{\lambda}_M=-\frac{\hat{{\mathcal B}}_M}{2}\pm i\sqrt{\hat{{\mathcal K}}_M-\left(\frac{\hat{{\mathcal B}}_M}{2}\right)^2},
\end{equation}
since
$|\hat{\lambda}(\omega)|=\sqrt{\hat{{\mathcal K}}(\omega)} \leq \sqrt{\hat{{\mathcal K}}_M} =|\hat{\lambda}_M|$. Here $\hat{{\mathcal K}}_M$ and $\hat{{\mathcal B}}_M$ are the maximum values of $\hat{{\mathcal K}}(\omega)$ and $\hat{{\mathcal B}}(\omega)$ that are given by \eqref{eq:KM} and \eqref{eq:BM}.
\textbf{Over-damped case.} If $\left({\hat{{\mathcal B}}(\omega)}/{2}\right)^2-\hat{{\mathcal K}}(\omega)>0$, the eigenvalues are real and are of the same form as \eqref{eq:eigenValueStability}.
In this case, we may define
\begin{equation}\label{eq:lambdaMOverDamped}
\hat{\lambda}_M=-\hat{{\mathcal B}}_M.
\end{equation}
This is because
$$
|\hat{\lambda}(\omega)|\leq \frac{\hat{{\mathcal B}}(\omega)}{2}+ \sqrt{\left(\frac{\hat{{\mathcal B}}(\omega)}{2}\right)^2-\hat{{\mathcal K}}(\omega)}\leq \hat{{\mathcal B}}(\omega) \leq \hat{{\mathcal B}}_M=|\hat{\lambda}_M|.
$$
We note that the $\hat{\lambda}_M$ introduced in \eqref{eq:lambdaMUnderDamped} and \eqref{eq:lambdaMOverDamped} represent the worst-case eigenvalues for the under-damped and over-damped cases, respectively. A sufficient condition ensuring stability of the PC22 scheme is found by letting $z=\hat{\lambda}_M\Delta t$ lie in the approximated stability region defined by the half super-ellipse in \eqref{eq:approxStabilityRegion}. Since the approximated stability region is a subset of the actual one, a time step that is sufficient to guarantee the stability of Algorithm~\ref{alg:pc22} can be chosen as follows,
\begin{equation} \label{eq:timeStep}
\Delta t ={C_{\text{sf}}}{\left(\left|\frac{\Re(\hat{\lambda}_M)}{a}\right|^n+\left|\frac{\Im(\hat{\lambda}_M)}{b}\right|^n\right)^{-1/n}},
\end{equation}
where $C_\text{sf}\in (0,1] $ is a stability factor (sf) that multiplies an estimate of the largest stable time step based on the above analysis. Unless otherwise noted, we choose $C_\text{sf} =0.9$ for the PC22 scheme throughout this paper.
As for the NB2 scheme, it is implicit in time and stable for any time step. However, for accuracy reasons, we choose its time step based on the condition \eqref{eq:timeStep} for the explicit PC22 time-stepping scheme, but with a much larger stability factor; typically, we choose $C_\text{sf}=90$ for Algorithm~\ref{alg:nb2}.
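Combining \eqref{eq:KM}, \eqref{eq:BM}, \eqref{eq:lambdaMUnderDamped} and \eqref{eq:timeStep}, the stable time step is computed as in the sketch below; the physical and grid parameters are illustrative assumptions, not values from our test problems:

```python
import numpy as np

# Illustrative (assumed) physical and grid parameters.
rho, h = 1000.0, 0.01          # density, plate thickness
K0, T, D = 1.0e3, 1.0e2, 10.0  # stiffness, tension, flexural rigidity
K1, T1 = 1.0, 0.1              # damping coefficients
hx = hy = 1.0 / 40             # grid spacings on the unit square
a, b, n, Csf = 1.75, 1.2, 1.5, 0.9

s = 1.0 / hx**2 + 1.0 / hy**2
KM = (K0 + 4 * T * s + 16 * D * s**2) / (rho * h)   # max of K-hat
BM = (K1 + 4 * T1 * s) / (rho * h)                  # max of B-hat

# Worst-case eigenvalue (under-damped branch, since (BM/2)^2 < KM here).
assert (BM / 2)**2 < KM
lam = -BM / 2 + 1j * np.sqrt(KM - (BM / 2)**2)

# Time step from the half-super-ellipse condition.
dt = Csf / (abs(lam.real / a)**n + abs(lam.imag / b)**n)**(1.0 / n)
print(dt > 0)  # True
```

By construction, $z=\hat{\lambda}_M\Delta t$ then satisfies $(|\Re(z)/a|^n+|\Im(z)/b|^n)^{1/n}=C_\text{sf}\le 1$, i.e., it lies inside the approximated stability region.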
\section{Governing equations}\label{sec:governingEquations}
The classical Kirchhoff-Love plate model concerns the small deflection of thin plates; it is used as a simplified theory of solid mechanics to determine the stresses and deformations in plates subject to external forcing. The governing equation for an isotropic and homogeneous plate is a single time-dependent biharmonic partial differential equation (PDE) for the transverse displacement of the plate's middle surface. It is derived by balancing the external loads with the internal bending force that tends to restore the plate to its stress-free state.
In this paper, we consider a plate model generalized from the classical Kirchhoff-Love equation by including additional terms that account for more physical effects, such as linear restoration, tension, and visco-elasticity.
To be specific, this work concerns the development of numerical algorithms for solving the following generalized Kirchhoff-Love model for an isotropic and homogeneous plate with constant thickness $h$,
\begin{equation} \label{eq:generalizedKLPlate}
{\rho}{h}\pdn{w}{t}{2}=-{K}_0w+{T}\nabla^2w-{D} \nabla^4w-{K}_1\pd{w}{t}+{T}_1\nabla^2\pd{w}{t}+F(\mathbf{ x},t),
\end{equation}
where $w(\mathbf{ x},t)$ with $\mathbf{ x}\in\Omega\subset\mathbb{R}^2$ is the transverse displacement of the middle surface subject to some given body force $F$. Here, ${\rho}$ denotes the density, ${K}_0$ is the linear stiffness coefficient that provides a linear restoring force, ${T}$ is the tension coefficient, and ${D}={E}{h}^3/(12(1-\nu^2))$ represents the flexural rigidity, with $\nu$ and $E$ being the Poisson's ratio and Young's modulus, respectively. The term with coefficient ${K}_1$ is a linear damping term, while the term with coefficient ${T}_1$ is a visco-elastic damping term that tends to smooth high-frequency oscillations in space. Note that visco-elastic damping is often added to model vascular structures in haemodynamics \cite{CanicEtal06}.
On the boundary, one of the following physical boundary conditions is imposed; namely,
for $\mathbf{ x}\in\partial\Omega$, we have
{\small
\begin{align}
&\text{clamped:} & w=0, && \pd{ w}{\mathbf{ n}}=0; \label{eq:clampedBC}\\
&\text{supported:} & w=0, && \pdn{ w}{\mathbf{ n}}{2}+\nu\pdn{ w}{\mathbf{ t}}{2}=0; \label{eq:supportedBC}\\
&\text{free:} & \pdn{ w}{\mathbf{ n}}{2}+\nu\pdn{ w}{\mathbf{ t}}{2}=0, && \pd{}{\mathbf{ n}}\left[\pdn{ w}{\mathbf{ n}}{2}+\left(\nu-2\right)\pdn{ w}{\mathbf{ t}}{2}\right]=0, \label{eq:freeBC}
\end{align}
}
where $\partial/\partial{\mathbf{ n}}$ and $\partial/\partial{\mathbf{ t}}$ are the normal and tangential derivatives defined on the boundary of the domain. It is important to point out that, for a rectangular plate, the free boundary conditions \eqref{eq:freeBC} must be complemented by a corner condition that imposes zero forcing \cite{bilbao2008family}; in other words, we also impose $\partial^2w/\partial x\partial y=0$ at the corners of a rectangular plate.
Note that for notational brevity the functional dependence on $(\mathbf{ x},t)$ has been suppressed in the statement of the boundary conditions.
Appropriate initial conditions need to be specified to complete the statement of the governing equations. Specifically, we assign
\begin{equation}\label{eq:IC}
w(\mathbf{ x},0)=w_0(\mathbf{ x}) ~\text{and}~\pd{w}{t}(\mathbf{ x},0)=v_0(\mathbf{ x})
\end{equation}
as the initial conditions with $w_0(\mathbf{ x})$ and $v_0(\mathbf{ x})$ representing two given functions that prescribe the plate's initial displacement and velocity.
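The structure of the spatial part of \eqref{eq:generalizedKLPlate} can be illustrated with a minimal second-order finite-difference sketch (an illustrative stand-in, not the schemes developed in this paper): the Laplacian is approximated by the standard 5-point stencil, the biharmonic term by applying it twice, and the result is checked against the identities $\nabla^2 w=-2\pi^2 w$ and $\nabla^4 w=4\pi^4 w$ for $w=\sin(\pi x)\sin(\pi y)$ on the unit square.

```python
import numpy as np

def laplacian(w, h):
    """Second-order 5-point Laplacian, returned on the interior nodes."""
    return (w[:-2, 1:-1] + w[2:, 1:-1] + w[1:-1, :-2] + w[1:-1, 2:]
            - 4.0 * w[1:-1, 1:-1]) / h**2

def spatial_operator(w, h, K0, T, D):
    """Evaluate -K0*w + T*lap(w) - D*lap(lap(w)) two nodes away from the boundary."""
    lap = laplacian(w, h)          # defined on interior nodes
    bih = laplacian(lap, h)        # biharmonic as the Laplacian applied twice
    wi = w[2:-2, 2:-2]
    return -K0 * wi + T * lap[1:-1, 1:-1] - D * bih

# check against w = sin(pi x) sin(pi y) on the unit square
N = 64
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
w = np.sin(np.pi * X) * np.sin(np.pi * Y)

K0, T, D = 2.0, 1.0, 0.01          # sample coefficients, as in the manufactured-solution test
exact = (-K0 - 2.0 * np.pi**2 * T - 4.0 * np.pi**4 * D) * w[2:-2, 2:-2]
num = spatial_operator(w, h, K0, T, D)
print(np.max(np.abs(num - exact)))  # O(h^2) truncation error
```

The maximum discrepancy shrinks by a factor of about four when $N$ is doubled, consistent with second-order accuracy.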
\section{Conclusions}\label{sec:conclusions}
In this paper,
we propose two numerical schemes, referred to as the PC22 and the NB2 schemes, for the approximation of a generalized Kirchhoff-Love plate model. Both schemes are based on centered finite difference methods of second-order accuracy for spatial discretization; the resulting spatially discretized
equations are then advanced in time using an appropriate time-stepping scheme. The PC22 scheme uses an explicit predictor-corrector scheme that consists of a second-order Adams-Bashforth (AB2) predictor and a second-order
Adams-Moulton (AM2) corrector, and the NB2 scheme utilizes an implicit Newmark-Beta scheme of second-order accuracy. Stable and accurate numerical boundary conditions are also derived for three common plate boundary conditions (clamped, simply supported and free). Stability analysis is performed for the time-stepping schemes to find the regions of absolute stability, which are utilized to determine stable time steps for both proposed schemes.
Carefully designed test problems are solved to demonstrate the properties and applications of our numerical approaches. The stability and accuracy of the schemes are verified by mesh refinement studies using problems with known exact solutions, and by cross-validation with experimental results. An interesting application concerning the exploration of the resonance and beat phenomena of an annular plate with general configurations is considered to further display the accuracy and efficiency of the numerical methods.
The domains of all the examples considered in this paper are restricted to simple ones that can be discretized with a single Cartesian or curvilinear mesh. We would like to extend the schemes to more general geometries using composite overlapping grids \cite{CGNS}. According to previous studies of wave-like equations on overlapping grids \cite{max2006b}, we expect weak instabilities to occur near the interpolation points of the overlapping grids. Therefore, the investigation of novel methods, such as adding high-order spatial dissipation and upwind schemes, to suppress possible instabilities generated by the overlapping grid interpolation would be an interesting topic for future research.
\section*{Acknowledgement}
L. Li is grateful to Professor W.D. Henshaw of Rensselaer Polytechnic Institute (RPI) for helpful conversations. Portions of this research were conducted with high performance computational resources provided by the Louisiana Optical Network Infrastructure (http://www.loni.org).
\section{Numerical results}\label{sec:results}
We now present the results for a series of test problems to demonstrate the properties and applications of our numerical approaches.
Mesh refinement studies using problems with known exact solutions are first considered to verify the stability and accuracy of the schemes. Free and forced vibrations of thin plates with various geometrical and physical configurations are then solved to further demonstrate the numerical properties of our schemes and to compare with existing results. In particular, the simulation of one test problem is cross-validated with reported experimental results. As an application, we illustrate a strategy using our numerical methods, together with the fast Fourier transform (FFT), to identify the natural frequencies of a plate, and then numerically investigate the interesting physical phenomena known as resonance and beat.
\subsection{Method of manufactured solutions}
As a first test, we verify the accuracy and stability of the algorithms using the method of manufactured solutions by adding forcing functions to the PDE \eqref{eq:generalizedKLPlate} and the boundary conditions \eqref{eq:clampedBC}--\eqref{eq:freeBC} so that a chosen function becomes an exact solution. The exact solution is chosen to be
\begin{equation}\label{eq:manufacturedExact}
w_e(x,y,t)=\sin^4\left( \pi (x+1)\right) \sin^4\left( \pi (y+1)\right) \cos(2\pi t).
\end{equation}
In order to validate the algorithms on both Cartesian and curvilinear grids, we consider a square plate ($\Omega_S=[-1,1]\times[-1,1]$) and an annular plate ($\Omega_A=\{\mathbf{ x}: 0.5 \leq |\mathbf{ x}| \leq 1\} $). Physical parameters of the governing equation are specified as $\rho h=1,K_0=2,T=1,D=0.01,K_1=5,T_1=0.1$ and $\nu=0.1$ for this test.
\begin{figure}[h]
\begin{center}
\begin{subfigure}[b]{0.45\linewidth}
\centering
\includegraphics[width=1.\linewidth]{fig/SolUTrigTestPABCAMFreeG16}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=1.\linewidth]{fig/AnnSolUTrigTestNoCNBSupportedG16}
\end{subfigure}
\end{center}
\caption{Contour plots of the computed displacement $w$ on grid ${\mathcal G}_{160}$ at $t=1$. The solution presented for the square plate is computed using the PC22 scheme subject to the free boundary conditions, while the plot for the annular plate is generated from the NB2 scheme with simply supported boundary conditions. Results obtained using either algorithm are similar regardless of the boundary conditions.
}
\label{fig:ManufactureSol}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.32\linewidth}
\centering
Clamped\\\vspace{0.1in}
\includegraphics[width=1\linewidth]{fig/ErrUTrigTestPABCAMClampedG16}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
Simply Supported\vspace{0.1in}
\includegraphics[width=1\linewidth]{fig/ErrUTrigTestPABCAMSupportedG16}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
Free\vspace{0.1in}
\includegraphics[width=1\linewidth]{fig/ErrUTrigTestPABCAMFreeG16}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\bigskip
\includegraphics[width=1\linewidth]{fig/AnnErrUTrigTestNoCNBClampedG16}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/AnnErrUTrigTestNoCNBSupportedG16}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/AnnErrUTrigTestNoCNBFreeG16}\\
\end{subfigure}
\caption{ Contour plots showing the errors of the numerical solutions for the displacement $w$ with various boundary conditions at $t=1$. Results shown here are obtained on grid ${\mathcal G}_{160}$ using the PC22 method for the square plate and the NB2 method for the annular plate. }
\label{fig:ManufactureErr}
\end{figure}
Given initial conditions from the exact solution at $t=0$, we solve the test problem to $t=1$ on a sequence of refined grids ${\mathcal G}_N$, where $N=10\times 2^j$ represents the number of grid points in each axial direction with $j$ ranging from $0$ through $4$. All the boundary conditions listed in \eqref{eq:clampedBC}--\eqref{eq:freeBC} as well as both of the proposed schemes (i.e., PC22 and NB2) are considered, but we selectively present two of the numerical solutions obtained on the finest considered grid (i.e., ${\mathcal G}_{160}$) in Figure~\ref{fig:ManufactureSol}. In particular, the solution presented for the square plate is solved using the PC22 scheme subject to the free boundary conditions, while the plot for the annular plate is generated from the NB2 scheme with simply supported boundary conditions. We note that
numerical solutions for the other cases are similar, since the test problem is designed to have the same exact solution \eqref{eq:manufacturedExact}.
Let $E(w)=|w_{\mathbf{ i}}(t)-w_e(\mathbf{ x}_\mathbf{ i},t)|$ denote the error of a numerical solution $w_{\mathbf{ i}}(t)$. In Figure~\ref{fig:ManufactureErr} we show contour plots of $E(w)$ on ${\mathcal G}_{160}$ to demonstrate the accuracy of our schemes for all the boundary conditions. Results shown for the square plate are obtained using the PC22 scheme, and those for the annular plate are solved with the NB2 scheme. We observe that the numerical solutions subject to all the boundary conditions are accurate in the sense that the errors are small and smooth throughout the domain, including the boundaries. Errors for all the other cases behave similarly, so their plots are omitted here to save space.
{
\newcommand{\trimfig}[2]{\trimw{#1}{#2}{0.08}{0.05}{0.08}{0.0}}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1]
\useasboundingbox (0.0,0.0) rectangle (13.,10.5);
\draw(0.25,5.) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/UTrigTestPABCAMConvRateLmax.png}{13cm}};
\draw(6.5,5.) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/UTrigTestNoCNBConvRateLmax.png}{13cm}};
\draw(0.25,0.0) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/UTrigTestPABCAMConvRateLmaxAnnular}{13cm}};
\draw(6.5,0.0) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/UTrigTestNoCNBConvRateLmaxAnnular}{13cm}};
\draw(0,2.5) node[anchor=south ,xshift=0.4cm,yshift=0pt] {$\Omega_A$:};
\draw(0,7.5) node[anchor=south,xshift=0.4cm,yshift=0pt] {$\Omega_S$:};
\draw(4,10) node[anchor=south,xshift=0pt,yshift=0pt] {PC22};
\draw(10.,10) node[anchor=south,xshift=0pt,yshift=0pt] {NB2};
\end{tikzpicture}
\end{center}
\caption{Convergence rates of $w$ for both the square ($\Omega_S$) and the annular ($\Omega_A$) plates subject to all the boundary conditions. The upper panels show the results for the square plate, and the lower panels show the results for the annular plate. The errors for both numerical methods (PC22 and NB2) are computed at $t=1$ and measured in the maximum norm.}\label{fig:ManufactureSquareConv}
\end{figure}
}
Convergence studies for both the square and annular plates subject to all the boundary conditions are performed for the numerical results obtained on the sequence of grids ${\mathcal G}_N$ using both numerical methods. We plot the maximum norm errors $||E(w)||_\infty$ against the grid size, together with a second-order reference curve, in log-log scale to reveal the order of accuracy. The top row of Figure~\ref{fig:ManufactureSquareConv} shows the results for the square plate, and the bottom row shows the results for the annular plate. For all the boundary conditions and both of the numerical schemes, we observe the expected second-order accuracy regardless of the plate shape.
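The observed order in such a refinement study is commonly estimated from the ratio of max-norm errors on successive grids, $p_j=\log_2(E_j/E_{j+1})$ for grids refined by a factor of two. A small generic helper (not tied to the actual error data of this test) illustrates the computation:

```python
import math

def observed_orders(errors):
    """Estimated convergence orders from max-norm errors on grids
    refined by a factor of 2: p_j = log2(E_j / E_{j+1})."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]

# synthetic second-order errors E(h) = C*h^2 on h = 1/10, 1/20, 1/40, ...
errors = [0.5 * (1.0 / (10 * 2**j))**2 for j in range(5)]
print(observed_orders(errors))  # each entry is close to 2.0
```

An observed order near two, as in the synthetic data above, is what the reference curves in Figure~\ref{fig:ManufactureSquareConv} indicate for the actual computations.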
\subsection{Vibration of plates}\label{sec:vibrationOfPlates}
Mechanical vibrations are problems of great interest in engineering and material sciences. For a thin plate-like structure, a 2D plate theory is capable of giving an excellent approximation to the actual 3D motion. The vibration of a plate can be caused either by displacing the plate from its stress-free state or by exerting an external forcing, where the former is referred to as free vibration and the latter is called forced vibration.
Among the numerous plate models, the Kirchhoff-Love theory is most commonly used. To further validate the numerical properties of the proposed schemes, we consider the vibration problems of the generalized Kirchhoff-Love plate \eqref{eq:generalizedKLPlate}, which include the study of natural frequencies and mode shapes of vibration, and propagating or standing waves in the plate.
\subsubsection{Vibration with known analytical solutions}
The classical Kirchhoff-Love plate with some simple specifications can be solved analytically.
We consider a thin plate on a rectangular domain (i.e., $\Omega=[0,L]\times [0,H]$). Analytical solutions to the classical Kirchhoff-Love plate equation subject to simply supported boundary conditions \eqref{eq:supportedBC} are available for both the free and forced vibration cases. We solve each case numerically and compare our approximations with the analytical solutions to reveal the stability and accuracy of our schemes.
{\bf $\bullet$ Free vibration.} Consider the free vibration case; i.e., the forcing function in \eqref{eq:generalizedKLPlate} is zero ($F(\mathbf{ x},t)\equiv0$). In this case, the governing equation can be solved analytically using separation of variables or the Fourier transform. Let $A_{mn}$ and $B_{mn}$ denote the coefficients determined by the initial conditions and the orthogonality of the Fourier components; the general solution for this simple plate can then be expressed as the infinite series
\begin{equation}\label{eq:generalSolution}
w(\mathbf{ x},t)=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\sin\frac{m\pi x}{L}\sin\frac{n\pi y}{H}(A_{mn}\cos\omega_{mn}t+B_{mn}\sin\omega_{mn}t),
\end{equation}
where the natural frequencies of vibration for this plate are given by
\begin{equation}\label{eq:NaturalFrequencySupported}
\omega_{mn}={\pi^2}\left(\frac{m^2}{L^2}+\frac{n^2}{H^2}\right)\sqrt{\frac{D}{ \rho h}}.
\end{equation}
Standing wave test problems can be constructed by specifying modes of vibration as the given functions for the initial conditions \eqref{eq:IC}; namely,
\begin{equation}\label{eq:freeVibrationIC}
w_0(\mathbf{ x})=\sin\frac{m\pi x}{L}\sin\frac{n\pi y}{H},~\text{and}~v_0(\mathbf{ x})=0.
\end{equation}
Enforcing the above initial conditions on the general solution \eqref{eq:generalSolution}, we deduce the exact standing wave solution for each 2-tuple $(m,n)$,
\begin{equation}\label{eq:freeVibrationExact}
w_e(\mathbf{ x},t) =\sin\frac{m\pi x}{L}\sin\frac{n\pi y}{H}\cos\omega_{mn}t.
\end{equation}
We solve the standing wave test problem for a few modes to validate our numerical schemes. That is to say, we specify the initial conditions \eqref{eq:freeVibrationIC} with various values of $(m,n)$, and compare the numerical approximations with the exact solution \eqref{eq:freeVibrationExact}.
For simplicity, the rectangular domain is restricted to a unit square in the computations; namely, we set $L=H=1$ in $\Omega=[0,L]\times [0,H]$. The parameters for this test are specified as $\rho h=2.7$, $K_0=0$, $T=0$, $D=6.4527$, $K_1=0$, $T_1=0$ and $\nu=0.33$. Note that this is a classical Kirchhoff-Love model because only the bending dynamics is accounted for (namely, $D\neq0$).
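With these parameters and $L=H=1$, the natural frequencies \eqref{eq:NaturalFrequencySupported} can be evaluated directly. The short script below (a convenience check, independent of the solver) reproduces the analytical values $f_{mn}=\omega_{mn}/(2\pi)$ listed later in Table~\ref{Tab:NaturalFrequencySSSSFreeVib}; for example, $f_{11}\approx 4.8567$.

```python
import math

def omega(m, n, D=6.4527, rho_h=2.7, L=1.0, H=1.0):
    """Natural (angular) frequency of a simply supported rectangular plate."""
    return math.pi**2 * (m**2 / L**2 + n**2 / H**2) * math.sqrt(D / rho_h)

def f(m, n, **kw):
    """Ordinary frequency in hertz."""
    return omega(m, n, **kw) / (2.0 * math.pi)

for m, n in [(1, 1), (1, 2), (2, 2), (1, 3)]:
    print(f"(m,n)=({m},{n}):  f = {f(m, n):.4f} Hz")
```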
Both of the proposed schemes are used to conduct numerical simulations, but only the results of the PC22 scheme are presented in Figure~\ref{fig:ModeShapesSupportedFreeVibABAM} since the other scheme produces comparable results. In Figure~\ref{fig:ModeShapesSupportedFreeVibABAM}, we show the contour plots of the displacement $w$ at $t=1$ for a few $(m,n)$-tuples. In the plots, we also show the zero contours, which represent the nodal lines of the standing wave solutions. It is clear that the patterns of these nodal lines exhibited in the numerical solutions resemble those of the corresponding modes of vibration used in the initial conditions \eqref{eq:freeVibrationIC}.
\begin{figure}[h]
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/SolUModeTestPABCAMSupported11G16}
\caption{\footnotesize $m=1,n=1$}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/SolUModeTestPABCAMSupported12G16}
\caption{\footnotesize $m=1,n=2$}\label{fig:probedCase}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/SolUModeTestPABCAMSupported22G16}
\caption{\footnotesize $m=2,n=2$ }
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/SolUModeTestPABCAMSupported13G16}
\caption{\footnotesize $m=1,n=3$ }
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/SolUModeTestPABCAMSupported23G16}
\caption{\footnotesize $m=2,n=3$ }
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/SolUModeTestPABCAMSupported14G16}
\caption{\footnotesize $m=1,n=4$}
\end{subfigure}
\caption{Standing wave solutions with the nodal lines at time $t=1$ for some $(m, n)$ values. Simulations are performed using the PC22 scheme on grid ${\mathcal G}_{160}$. Results obtained using the NB2 scheme are similar.} \label{fig:ModeShapesSupportedFreeVibABAM}
\end{figure}
\begin{figure}[h]
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/ErrUModeTestPABCAMSupported12G16}
\caption*{\footnotesize Error}
\end{subfigure}
\bigskip
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/UModeTestPABCAMConvRate12Lmax}
\caption*{\footnotesize Convergence rate}
\end{subfigure}
\bigskip
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/TrackPointSolUModeTestPABCAMSupported12G16}
\caption*{\footnotesize Displacement at $\mathbf{ x}_p$}
\end{subfigure}
\vspace{-0.4 in}
\caption{More results for the case with $m=1, n=2$. Left: the error plot of the displacement $w$ at $t=1$. Middle: the convergence rate of $w$. Right: displacement at point $\mathbf{ x}_p$; the probed location is marked by a cross symbol in Figure~\ref{fig:probedCase}.}
\label{fig:ErrorConvTrackPointSupportedABAMFreeVib}
\end{figure}
Given that the exact solutions for these standing waves are available in \eqref{eq:freeVibrationExact}, it is possible for us to perform mesh refinement studies to show the accuracy and the convergence of the numerical solutions. In Figure~\ref{fig:ErrorConvTrackPointSupportedABAMFreeVib}, we present more results for the case with $m=1$ and $n=2$, which include the error of $w$ at $t=1$ plotted in the left image, the convergence rate of $w$ shown in the middle image, and the evolution of $w$ at a probed location depicted in the right image. The probed location is $\mathbf{ x}_p=(0.2,0.1)$, which is marked by a cross symbol in Figure~\ref{fig:probedCase}. The error and convergence rate plots confirm that the scheme is accurate and the rate of convergence is second order.
The evolution of $w$ at a point, as shown in the right image of Figure~\ref{fig:ErrorConvTrackPointSupportedABAMFreeVib}, demonstrates how the standing wave solutions oscillate in time. From the evolution curve, we can estimate the frequency of oscillation by measuring the number of cycles per unit time, which should match the natural frequency of the plate. As another indication of the accuracy of our proposed schemes, we compare the numerically estimated frequencies with the natural frequencies. Note that the natural frequencies defined in \eqref{eq:NaturalFrequencySupported} are actually angular frequencies that measure the number of oscillations in $2\pi$ units of time. For comparison, we use the ordinary frequency (measured in hertz), given by $f_{mn}=\omega_{mn}/(2\pi)$.
We track the evolution of the displacement at $\mathbf{ x}_p=(0.2,0.1)$ for the modes that correspond to the first 9 natural frequencies, and estimate the frequencies of oscillation from the evolution curves. The frequencies inferred from the numerical solutions on grid ${\mathcal G}_{160}$ are summarized in Table~\ref{Tab:NaturalFrequencySSSSFreeVib}. We can see that, for both numerical methods, the discrepancies between the estimated frequencies and $f_{mn}$ are small for all the examined cases.
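The frequency-extraction step can be sketched as follows: sample the probed displacement history, take its discrete Fourier transform, and read off the dominant peak. In the sketch below a pure cosine stands in for the computed evolution curve (an assumption for illustration only); the peak recovers the input frequency to within the spectral resolution $1/T$.

```python
import numpy as np

# stand-in for the probed displacement history w(x_p, t)
f_true = 4.8567          # Hz, the (1,1) natural frequency of this test
T, dt = 20.0, 1.0e-3     # record length and sampling interval
t = np.arange(0.0, T, dt)
signal = np.cos(2.0 * np.pi * f_true * t)

# locate the dominant peak of the discrete spectrum
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=dt)
f_est = freqs[np.argmax(spectrum)]
print(f_est)             # within 1/T = 0.05 Hz of f_true
```

Longer records sharpen the resolution; damped or multi-mode signals would show broadened or multiple peaks instead of a single line.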
\begin{table}[h]
\begin{center}
\centering
\begin{tabular}{cccccc}
\hline
\multirow{2}{*}{$(m,n)$} & \multirow{2}{*}{ ${f_{mn}}$ } & \multicolumn{2}{c}{PC22 scheme} & \multicolumn{2}{c}{NB2 scheme} \\ \cline{3-6}
& & frequency (est.) & error (\%) & frequency (est.) & error (\%) \\ \hline
$(1,1)$ & $4.8567$ & $4.8565$ & $0.0037\%$ & $4.8541$ & $0.0540\%$ \\
$(1,2)$ & $12.1417$ & $12.1403$ & $0.0112\%$ & $12.1359$ & $0.0480\%$ \\
$(2,2)$ & $19.4267$ & $19.4242$ & $0.0130\%$ & $19.4203$ & $0.0331\%$ \\
$(1,3)$ & $24.2834$ & $24.2769$ & $0.0268\%$ & $24.2691$ & $0.0589\%$ \\
$(2,3)$ & $31.5684$ & $31.5608$ & $0.0239\%$ & $31.5544$ & $0.0443\%$ \\
$(1,4)$ & $41.2817$ & $41.2617$ & $0.0485\%$ & $41.2344$ & $0.1146\%$ \\
$(3,3)$ & $43.7100$ & $43.6975$ & $0.0286\%$ & $43.6542$ & $0.1277\%$ \\
$(2,4)$ & $48.5667$ & $48.5455$ & $0.0436\%$ & $48.4883$ & $0.1615\%$ \\
$(3,4)$ & $60.7084$ & $60.6821$ & $0.0434\%$ & $60.5840$ & $0.2048\%$ \\ \hline
\end{tabular}
\caption{ Comparison between the frequencies estimated from the numerical solutions on grid ${\mathcal G}_{160}$ and the first $9$ natural frequencies. Discrepancies are measured using the percentage of the relative errors.
} \label{Tab:NaturalFrequencySSSSFreeVib}
\end{center}
\end{table}
{\bf $\bullet$ Forced vibration.}
Now, we consider the vibration of the classical Kirchhoff-Love plate driven by a time-dependent sinusoidal force
$F(\mathbf{ x},t)=F_0\sin(\xi t)$, where $F_0$ and $\xi$ are constants specifying the magnitude and frequency of the sinusoidal force. The plate is assumed to be undeformed and at rest initially; that is, $w_0=v_0=0$. Using the method of eigenfunction expansion, we find the exact solution to the forced vibration problem,
\begin{equation}\label{eq:SupportedGeneralSol}
w(x,y,t)=\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\sin\frac{m\pi x}{L}\sin\frac{n\pi y}{H}T_{mn}(t),
\end{equation}
where the time-dependent coefficient $T_{mn}(t)$ is given by
{\small$$
T_{mn}(t)=\frac{2F_0 \left(1-\cos(m\pi)\right)\left(1-\cos(n\pi)\right)}{\rho h mn\pi^2\omega_{mn}}\left(\frac{\sin(\xi t)+\sin(\omega_{mn}t)}{\xi+\omega_{mn}}-\frac{\sin(\xi t)-\sin(\omega_{mn}t)}{\xi-\omega_{mn}}\right).
$$
}
For the numerical test, we consider a classical Kirchhoff-Love plate with the parameters specified as $\rho h=1$, $K_0=0$, $T=0$, $D=0.1$, $K_1=0$, $T_1=0$ and $\nu=0.3$ on a rectangular domain $\Omega=[0,0.4]\times [0,0.2]$; namely $L=0.4$ and $H=0.2$. The magnitude and frequency of the driving force are set as $F_0=1000$ and $\xi=40$. This test problem is solved using both of the proposed schemes on a uniform Cartesian grid with grid spacings $h_x=h_y=1/300$. The numerical results are compared with an analytical solution truncated from the exact solution \eqref{eq:SupportedGeneralSol} by keeping 49 modes (i.e., $m=1,\dots,7$ and $n=1,\dots,7$).
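For reference, the truncated analytical solution \eqref{eq:SupportedGeneralSol} is straightforward to evaluate directly; the sketch below uses the parameters of this test with the same 49 modes. Note that only odd $(m,n)$ modes contribute, since $1-\cos(m\pi)$ vanishes for even $m$, and that the uniform forcing makes the solution symmetric about the plate center, which serves as a sanity check.

```python
import math

rho_h, D = 1.0, 0.1
L, H = 0.4, 0.2
F0, xi = 1000.0, 40.0

def omega(m, n):
    """Natural frequency of the simply supported plate, eq. for omega_mn."""
    return math.pi**2 * (m**2 / L**2 + n**2 / H**2) * math.sqrt(D / rho_h)

def T_mn(m, n, t):
    """Time-dependent modal coefficient of the forced-vibration solution."""
    w = omega(m, n)
    amp = 2.0 * F0 * (1.0 - math.cos(m * math.pi)) * (1.0 - math.cos(n * math.pi)) \
          / (rho_h * m * n * math.pi**2 * w)
    return amp * ((math.sin(xi * t) + math.sin(w * t)) / (xi + w)
                  - (math.sin(xi * t) - math.sin(w * t)) / (xi - w))

def w_exact(x, y, t, modes=7):
    """Truncated series solution with modes m,n = 1..modes (49 modes by default)."""
    return sum(math.sin(m * math.pi * x / L) * math.sin(n * math.pi * y / H) * T_mn(m, n, t)
               for m in range(1, modes + 1) for n in range(1, modes + 1))

# symmetry check: w(x, y) should equal w(L - x, H - y)
a = w_exact(0.1, 0.05, 1.0)
b = w_exact(0.3, 0.15, 1.0)
print(a, b)
```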
{
\newcommand{\trimfig}[2]{\trimw{#1}{#2}{0.07}{0.}{0.2}{0.18}}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1]
\useasboundingbox (0.0,0.0) rectangle (13.,3.);
\draw(-0.5,0.0) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/SolUPointTest6NoCNBSupportedG4}{13cm}};
\draw(6.5,0.0) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/ErrUPointTest6NoCNBSupportedG4}{13cm}};
\end{tikzpicture}
\end{center}
\caption{ Contour plots of the numerical solution (left) and the error (right) of $w$ at $t=1$. Results are obtained using the NB2 scheme on a uniform Cartesian grid with grid spacings $h_x=h_y=1/300$. } \label{fig:SolErrorTimeHarmonicSupported}
\end{figure}
}
{
\newcommand{\trimfig}[2]{\trimw{#1}{#2}{0.}{0.}{0.}{0.}}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1]
\useasboundingbox (0.0,0.0) rectangle (13.,5.);
\draw(-0.5,0.0) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/TrackPointSolUSupportedHarmonicForceG120}{13cm}};
\draw(6.5,0.0) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/TrackPointSolVSupportedHarmonicForceG120}{13cm}};
\draw(3.2,4.5) node[anchor=south,xshift=0pt,yshift=0pt] {Displacement};
\draw(10.3,4.5) node[anchor=south,xshift=0pt,yshift=0pt] {Velocity};
\end{tikzpicture}
\end{center}
\caption{ The displacement and velocity of the plate at point $\mathbf{ x}_p=(0.2,0.1)$. Simulation is performed using the NB2 scheme on a uniform Cartesian grid with grid spacings $ h_x=h_y = 1/300$.} \label{fig:TrackPointTimeHarmonicSupported}
\end{figure}
}
Both schemes perform comparably well and produce similar solutions, so we only show the ones obtained with the NB2 scheme here. In Figure~\ref{fig:SolErrorTimeHarmonicSupported}, the contour plots of the numerical solution of $w$ at $t=1$ and the error compared with the truncated exact solution are presented. To show the accuracy of the numerical results over time, we track the displacement $w$, as well as the velocity $v$, at the point $\mathbf{ x}_p=(0.2,0.1)$. The time evolution of the numerical displacement and velocity at $\mathbf{ x}_p$ are plotted on top of the reference analytical solutions
in Figure~\ref{fig:TrackPointTimeHarmonicSupported}; it is easily seen that our numerical results agree well with the analytical solution over time.
\subsubsection{Vibration of the generalized Kirchhoff-Love plate}
For the generalized Kirchhoff-Love model \eqref{eq:generalizedKLPlate} that cannot be solved analytically, we numerically solve the frequency domain eigenvalue problem to identify the natural frequencies and modes of vibration for the plate, and then utilize the computed eigenvalues and eigenvectors to construct standing wave solutions. The nodal line patterns (i.e., Chladni figures) of the standing wave solutions obtained from our numerical simulations are compared against those solved from the eigenvalue problem for the validation of the numerical schemes.
Specifically, a square plate ($\Omega_S=[0,0.25]\times[0,0.25]$) and an annular plate ($\Omega_A=\{\mathbf{ x}: 0.1\leq |\mathbf{ x}|\leq 0.5\}$) are considered. For both plates, we assume the same parameters, $\rho h=1,K_0=2,T=1,D=2,K_1=0,T_1=0$, and $\nu=0.1$, noting that these parameters specify an undamped plate.
On the edges of the plates, we impose the clamped boundary conditions \eqref{eq:clampedBC} for the square plate, and the simply supported boundary conditions \eqref{eq:supportedBC} for the annular plate.
First, let us consider the eigenvalue problem for the undamped plate on mesh $\Omega_h$,
\begin{equation}\label{eq:eigenProblem}
{\mathcal K}_h \phi_\mathbf{ i}=\lambda \phi_\mathbf{ i}, ~\mathbf{ x}_\mathbf{ i}\in\Omega_h,
\end{equation}
where $\phi_\mathbf{ i}=\phi(\mathbf{ x}_\mathbf{ i})$ is the mode function (or eigenfunction) for the eigenvalue $\lambda$. The definition of the difference operator ${\mathcal K}_h$ is given in \eqref{eq:KBOperators}. To get the eigenfunction-eigenvalue pairs $(\phi_n(\mathbf{ x}),\lambda_n)$, we numerically solve the eigenvalue problem \eqref{eq:eigenProblem} subject to the appropriate numerical boundary conditions; that is, clamped \eqref{eq:discreteClampedBC} for the square plate and supported \eqref{eq:discreteSupportedBC} for the annular plate. The \texttt{eigs} function in MATLAB is used here.
To save space, we put the results in Appendix~\ref{sec:appendix}, where the nodal lines of the first 25 eigenmodes (with multiplicity) for the plates (square and annular) are presented in Figures~\ref{fig:ClampedModeShapeSquareMATLAB} \& \ref{fig:SupportedModeShapeAnnMATLAB}. Note that, following the tradition in structural engineering, the values of natural frequencies, rather than the eigenvalues, are reported in the plots. The natural frequency corresponding to $\lambda_n$ is given by
$$
f_n=\frac{1}{2\pi} \sqrt{\frac{\lambda_n}{\rho h}}.
$$
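The same workflow (assemble the discrete operator, solve the algebraic eigenvalue problem, convert eigenvalues to frequencies) can be illustrated on a small self-contained analogue. The sketch below is a 1D clamped Euler-Bernoulli beam with a second-order difference biharmonic and ghost-point boundary closure, not the paper's 2D operator ${\mathcal K}_h$; it recovers the known smallest eigenvalue $k_1^4\approx 500.56$, where $\cosh k_1\cos k_1=1$ and $k_1\approx 4.7300$.

```python
import numpy as np

# clamped beam on [0,1]: w'''' = lambda * w, with w = w' = 0 at both ends
N = 200                      # grid intervals
h = 1.0 / N
n = N - 1                    # interior unknowns w_1 .. w_{N-1}

# 5-point biharmonic stencil [1, -4, 6, -4, 1] / h^4
A = np.zeros((n, n))
for i in range(n):
    for j, c in zip(range(i - 2, i + 3), (1.0, -4.0, 6.0, -4.0, 1.0)):
        if 0 <= j < n:
            A[i, j] += c
# clamped condition w'(0)=0 via the ghost point w_{-1} = w_1 (and symmetrically at x=1)
A[0, 0] += 1.0
A[-1, -1] += 1.0
A /= h**4

lam = np.sort(np.linalg.eigvalsh(A))
print(lam[0])                # close to 4.7300**4, second-order accurate
```

The paper's 2D computations use MATLAB's \texttt{eigs} on the sparse operator instead; a dense symmetric solver suffices for this tiny analogue.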
\begin{figure}[h]
\begin{subfigure}[b]{0.325\linewidth}
\centering
$f_{11}=395.1906$
\includegraphics[width=1\linewidth]{fig/SolUNodalTestFreeVibClampedMode11}
\end{subfigure}
\begin{subfigure}[b]{0.325\linewidth}
\centering
$f_{17}=553.9450$
\includegraphics[width=1\linewidth]{fig/SolUNodalTestFreeVibClampedMode17}
\end{subfigure}
\begin{subfigure}[b]{0.325\linewidth}
\centering
$f_{22}=705.9267$
\includegraphics[width=1\linewidth]{fig/SolUNodalTestFreeVibClampedMode22}
\end{subfigure}
\caption{ Standing waves in the square plate with clamped edges at time $t = 1$ for three eigenvalue cases. Zero contour lines of the solutions are also plotted to indicate the nodal line patterns. Simulations are performed using the NB2 scheme on grid ${\mathcal G}_{80}$.}
\label{fig:ClampedModeShapeNB}
\end{figure}
\begin{figure}[h]
\begin{subfigure}[b]{0.325\linewidth}
\centering
$f_{2}=8.8174$
\includegraphics[width=1\linewidth]{fig/AnnSolUNodalTestFreeVibClampedMode2G80}
\end{subfigure}
\begin{subfigure}[b]{0.325\linewidth}
\centering
$f_{10}=28.9529$
\includegraphics[width=1\linewidth]{fig/AnnSolUNodalTestFreeVibClampedMode10G80}
\end{subfigure}
\begin{subfigure}[b]{0.325\linewidth}
\centering
$f_{20}=43.8879$
\includegraphics[width=1\linewidth]{fig/AnnSolUNodalTestFreeVibClampedMode20G80}
\end{subfigure}
\caption{ Standing waves in the annular plate with simply supported edges at time $t = 1$ for three eigenvalue cases. Zero contour lines of the solutions are also plotted to indicate the nodal line patterns. Simulations are performed using the NB2 scheme on grid ${\mathcal G}_{80}$.}
\label{fig:AnnClampedModeShapeNB}
\end{figure}
Next, we solve for standing waves in the generalized Kirchhoff-Love model \eqref{eq:generalizedKLPlate} with the aforementioned parameters and boundary conditions numerically using the NB2 scheme. As before, standing wave test problems are generated by assigning the initial conditions with the eigenmodes,
$
w_0(\mathbf{ x})=\phi_n(\mathbf{ x}) \quad\text{and}\quad v_0(\mathbf{ x})=0,
$
and then letting the plate vibrate freely (i.e., with zero external forcing).
Results for the square and annular plates are shown in Figure~\ref{fig:ClampedModeShapeNB} and Figure~\ref{fig:AnnClampedModeShapeNB}, respectively; three modes for each plate are selected for presentation. Nodal lines of the numerical solutions are also plotted on top of the contour images.
The fact that the nodal line patterns obtained from snapshots (at $t=1$) of the numerical solutions to the dynamical PDE \eqref{eq:generalizedKLPlate} clearly match those solved from the eigenvalue problem \eqref{eq:eigenProblem} (shown in Figures~\ref{fig:ClampedModeShapeSquareMATLAB} \& \ref{fig:SupportedModeShapeAnnMATLAB} in Appendix~\ref{sec:appendix}) is strong evidence of the accuracy of our numerical methods.
\subsubsection{Cross-validation with experiments}
As a final test, we compare our numerical results with existing experimental results. In \cite{Tuan-2015}, Tuan {\em et al.} experimentally measured the Chladni nodal line patterns and resonant frequencies for a thin plate excited by an electronically controlled mechanical oscillator. For one of their reported experiments, a thin square plate with a length of $L=0.24$~m and a thickness of $h=0.001$~m was used. The plate was made of aluminum sheet with the following material parameters: $E=69$~GPa, $\rho=2700$~kg/m$^3$ and $\nu=0.33$. The center of the plate was fixed with a screw supporter that can be driven by an electronically controlled mechanical oscillator. Silica sand with a grain size of $0.3$~mm was placed on the top surface of the plate. When the oscillator drove the plate to vibrate at a resonant (natural) frequency, the sand particles stopped at the nodes of the resonant modes and therefore manifested the nodal line patterns for the vibrating plate.
For this test, we attempt to simulate the experiment and reconstruct comparable nodal line patterns numerically. To mimic the experiment, we consider the Kirchhoff-Love plate \eqref{eq:generalizedKLPlate} on the square domain, $\Omega=[0,0.24]\times [0,0.24]$. The edges of the plate are assumed to move freely, so the free boundary conditions \eqref{eq:freeBC} are applied; and the center of the plate is fixed, i.e., $w(\mathbf{ x}_c,t)=0$. The parameters of the governing equation are chosen to represent the material properties of the aluminum sheet; specifically, we set $\rho h=2.7$, $K_0=0$, $T=0$, $D = 6.4527$, $K_1=0$, $T_1=0$, and $\nu=0.33$. The plate is assumed to be at rest and undeformed at $t=0$; that is, we have $w_0=v_0=0$ for the initial conditions \eqref{eq:IC}.
To account for the driving force exerted by the mechanical oscillator used in the experiment, we specify the external forcing of the model as a time-dependent sinusoidal function that is non-zero only on a small square area ${\mathcal A}$ at the center $(x_c,y_c)$ of the plate,
\begin{equation}\label{eq:Chladni1}
F(\mathbf{ x},t)=\begin{cases}
F_0\cos\xi t,& \mathbf{ x}\in {\mathcal A} \\
0, & \mathbf{ x}\not\in {\mathcal A} \end{cases},
\end{equation}
where $F_0$ and $\xi$ are the magnitude and the angular frequency of the driving force, and the square area is ${\mathcal A}=\left[x_c-0.01,x_c+0.01\right]\times \left[y_c-0.01,y_c+0.01\right]$.
To reconstruct the nodal line patterns numerically, we need the driving force \eqref{eq:Chladni1} to oscillate at a resonant (natural) frequency. So we first solve the eigenvalue problem \eqref{eq:eigenProblem} using the \texttt{eigs} function in MATLAB to find the natural frequencies of the model plate. The relation between the $k$th resonant angular frequency and the corresponding eigenvalue is $\xi_k=\sqrt{{\lambda_k}/({\rho h})}$. We choose the magnitude of the force to be $F_0=10^{10}$ and perform the simulations using the NB2 scheme on grid ${\mathcal G}_{160}$. Such a large magnitude is used for the driving force so that the plate is quickly driven into the resonant mode.
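In code, converting a computed eigenvalue into the resonant driving frequency used in \eqref{eq:Chladni1} is a one-liner; the eigenvalue below is a hypothetical placeholder, since the actual values come from a sparse eigensolver:

```python
import math

def resonant_angular_frequency(lam, rho_h):
    """k-th resonant angular frequency from the k-th eigenvalue of the
    plate eigenvalue problem, via xi_k = sqrt(lambda_k / (rho*h))."""
    return math.sqrt(lam / rho_h)

# Example with a hypothetical eigenvalue (the actual values come from a
# sparse eigensolver such as MATLAB's eigs):
rho_h = 2.7
lam = 3.96e7                      # hypothetical eigenvalue lambda_k
xi = resonant_angular_frequency(lam, rho_h)
f = xi / (2.0 * math.pi)          # regular frequency in Hz
```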
\begin{figure}[h!]
\begin{subfigure}[b]{0.32\linewidth}
\centering
{$f_1=609.7$~Hz}
\includegraphics[width=1\linewidth]{fig/SolUNodalFreeCenterClampedForceVibMode19G160}
\end{subfigure}
\bigskip
\begin{subfigure}[b]{0.32\linewidth}
\centering
{$f_2=995.0$~Hz}
\includegraphics[width=1\linewidth]{fig/SolUNodalFreeCenterClampedForceVibMode28G160}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
{$f_{10}=3443.9$~Hz}
\includegraphics[width=1\linewidth]{fig/SolUNodalFreeCenterClampedForceVibMode77G160}
\end{subfigure}
\caption{ Contour plots of the displacement and the nodal lines (i.e., zero contours) at $t=1$ for three typical resonant frequencies. The results shown here are obtained from the simulation of the NB2 scheme on grid ${\mathcal G}_{160}$.}
\label{fig:ChladniFreeModeShapeNB}
\end{figure}
The results obtained from the simulations using the NB2 scheme on grid ${\mathcal G}_{160}$ are presented in Figure~\ref{fig:ChladniFreeModeShapeNB}, which includes contour plots of the displacement and the nodal lines at $t = 1$ for three typical resonant frequencies.
In order to directly compare with the regular frequencies reported in the experiment, we also report in Figure~\ref{fig:ChladniFreeModeShapeNB} the regular frequencies that are converted from the angular frequencies by $f_k={\xi_k}/{2\pi}$.
The nodal line patterns manifested in our numerical results are in excellent agreement with the experimental results for all the frequencies (except for the degenerate eigenvalues); note that the specific experimental results we compare with are reported in Figure~3b of \cite{Tuan-2015}.
It is worth pointing out that there are noticeable discrepancies between the numerical and experimental values of the resonant frequencies. This is because the model used for this test is a simple classical Kirchhoff-Love plate that does not account for the influence of the ambient air or the added mass of the sand particles on the plate. The model could be improved by tuning the various parameters in \eqref{eq:generalizedKLPlate}, which is beyond the scope of this study. The purpose of this test is to validate our numerical methods, and the fact that the numerically reconstructed resonant nodal line patterns agree well with the experimental ones, while the resonant frequencies are in qualitative agreement, serves that purpose.
\subsection{Application} As an application of our schemes, we numerically explore the physical phenomena known as resonance and beat, which occur when the driving frequency is at or close to a natural frequency. Here we demonstrate the application by considering an annular plate ($\Omega=\{\mathbf{ x}: 0.1\leq|\mathbf{ x}|\leq 0.5\}$) with no external forcing as an example. The plate satisfies the generalized Kirchhoff-Love equation \eqref{eq:generalizedKLPlate} and is driven to vibrate by the following time-dependent clamped boundary conditions, which prescribe the displacement of the inner edge; i.e.,
\begin{equation} \label{eq:ModifiedClampedBC}
w(\mathbf{ x},t)=W_\text{in}\cos\xi t, \quad \pd{w}{\mathbf{ n}}(\mathbf{ x},t)=0 ~\text{for}~|\mathbf{ x}|=0.1,
\end{equation}
where $W_\text{in}$ and $\xi$ are the maximum value (amplitude) and angular frequency of the prescribed boundary displacement, respectively. For this example, we set $W_\text{in}=1$ and vary $\xi$ to investigate its effects.
The outer edge of the plate is allowed to move freely; namely, the free boundary conditions \eqref{eq:freeBC} are applied at $|\mathbf{ x}|=0.5$. Initially, the plate is at rest and undeformed; i.e., $w_0(\mathbf{ x})=v_0(\mathbf{ x})=0$. The setup of this problem can easily be replicated experimentally by clamping the inner edge of an annular plate to a mechanical oscillator undergoing a sinusoidal motion.
The material parameters of the plate are $\rho h =1$, $D=0.01$, $T=0$, $K_0=0$, and $\nu=0.3$. To study the effects of the damping terms in the equation, various values of $K_1$ and $T_1$ are considered below.
To begin with, we consider the undamped case (i.e., $K_1=T_1=0$) and specify a value of $\xi$ in \eqref{eq:ModifiedClampedBC} that is either close to or at a natural frequency of the plate. Due to the complexity of the generalized plate equation and the time-dependent boundary conditions, it is non-trivial to derive the frequency-domain eigenvalue problem analytically and then solve it for the natural frequencies and modes as was done in Section~\ref{sec:vibrationOfPlates}. Following a procedure proposed in \cite{Chugh2007}, we instead illustrate a more general strategy that identifies the natural frequencies of a plate using our numerical methods in conjunction with a fast Fourier transform (FFT) power spectrum analysis of the numerical data.
The strategy for finding a natural frequency is as follows. We first simulate the problem for an arbitrary driving frequency, say $\xi=2\pi$ (or $f_d=1$~Hz), and trace the response of the plate at $\mathbf{ x}_p=(-0.2,0)$. The simulation runs until $t=30$ using the NB2 scheme on grid ${\mathcal G}_{80}$. The left image of Figure~\ref{fig:MovingClampedOmega1} shows the displacement response at the selected location over time.
We then apply the FFT to the displacement data using the \texttt{fft} function in MATLAB, and present the resulting power spectrum in the right image of Figure~\ref{fig:MovingClampedOmega1}.
From this graph, we are able to identify two natural frequencies ($f_1=0.367$, $f_2=2.067$) and the driving frequency ($f_d=1$). More natural frequencies can be identified this way by sampling different values for the driving frequency $\xi$.
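The identification step itself (FFT, then read off the peaks) can be sketched with NumPy; the signal below is synthetic, standing in for the computed displacement trace, and its frequencies are illustrative rather than the plate's:

```python
import numpy as np

# Synthetic displacement trace standing in for w(x_p, t): a response
# containing two "natural" frequencies plus the driving frequency.
T, dt = 30.0, 0.01
t = np.arange(0.0, T, dt)
f_nat1, f_nat2, f_drive = 0.4, 2.1, 1.0   # illustrative frequencies [Hz]
w = (0.8 * np.sin(2 * np.pi * f_nat1 * t)
     + 0.5 * np.sin(2 * np.pi * f_nat2 * t)
     + 1.0 * np.sin(2 * np.pi * f_drive * t))

# FFT power spectrum; its peaks mark the driving and natural frequencies.
power = np.abs(np.fft.rfft(w))**2
freqs = np.fft.rfftfreq(len(t), dt)
peak_freqs = freqs[np.argsort(power)[-3:]]   # three strongest bins
```

The frequency resolution is $1/T$, so longer simulations resolve the peaks more sharply.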
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
$w(\mathbf{ x}_p,t)$
\includegraphics[width=1\linewidth]{fig/TrackPointSolUNodalTestFreeVibMovingT30f1}
\end{subfigure}
\bigskip
\begin{subfigure}[b]{0.49\linewidth}
\centering
FFT of $w(\mathbf{ x}_p,t)$
\includegraphics[width=1\linewidth]{fig/FFTSolUNodalTestFreeVibMovingT30f1}
\end{subfigure}
\caption{ Vibration of the plate when the driving frequency is $\xi=2\pi$ (or $f_d=1$~Hz). The simulation runs until $t=30$ using the NB2 scheme on grid ${\mathcal G}_{80}$.
Left: the displacement response at the selected location $\mathbf{ x}_p=(-0.2,0)$ over time. Right: the FFT power spectrum of the displacement response.}
\label{fig:MovingClampedOmega1}%
\end{figure}
\begin{figure}[h]
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/SolUNodalTestFreeVibMovingT30f2new}
\caption{\footnotesize contour of $w(\mathbf{ x},30)$} \label{fig:MovingClampedResonanceA}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/TrackPointSolUNodalTestFreeVibMovingT30f2new}
\caption{\footnotesize $w(\mathbf{ x}_p,t)$} \label{fig:MovingClampedResonanceB}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/FFTSolUNodalTestFreeVibMovingT30f2new}
\caption{\footnotesize FFT of $w(\mathbf{ x}_p,t)$} \label{fig:MovingClampedResonanceC}
\end{subfigure}
\caption{ Numerical simulation of the resonance phenomenon using the NB2 method on grid ${\mathcal G}_{80}$. Here the driving frequency is $\xi_2=2\pi f_2$ and the probed location is $\mathbf{ x}_p=(-0.2,0)$.}
\label{fig:MovingClampedResonance}%
\end{figure}
Resonance occurs when the driving frequency of the plate is at a natural frequency. As an example, we simulate this phenomenon at the natural frequency $f_2$ by setting the driving frequency to $\xi_2=2\pi f_2$. The simulation is carried out using the NB2 scheme until $t=30$, and the results are collected in Figure~\ref{fig:MovingClampedResonance}. In particular, we show in Figure~\ref{fig:MovingClampedResonanceA} the contour plot of $w$ as well as its nodal lines at $t=30$. The nodal line pattern sheds light on the mode shape (eigenfunction) associated with the natural frequency $f_2$. We also trace the displacement at the point $\mathbf{ x}_p=(-0.2,0)$ and depict its time history in Figure~\ref{fig:MovingClampedResonanceB}. The resonance phenomenon is clearly observed as the amplitude of the vibration grows over time. The FFT power spectrum of the displacement data at this point, shown in Figure~\ref{fig:MovingClampedResonanceC}, also confirms that the plate vibrates at a frequency consistent with the natural frequency $f_2$.
\begin{figure}[h]
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/SolUNodalTestFreeVibMovingT30f2}
\caption{\footnotesize contour of $w(\mathbf{ x},30)$} \label{fig:MovingClampedBeatA}%
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/TrackPointSolUNodalTestFreeVibMovingT30f2}
\caption{\footnotesize $w(\mathbf{ x}_p,t)$} \label{fig:MovingClampedBeatB}%
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{fig/FFTSolUNodalTestFreeVibMovingT30f2}
\caption{\footnotesize FFT of $w(\mathbf{ x}_p,t)$} \label{fig:MovingClampedBeatC}%
\end{subfigure}
\caption{
Numerical simulation of the beat phenomenon using the NB2 method on grid ${\mathcal G}_{80}$. Here the driving frequency is $\xi_b=4\pi$ (or $f_b=2$~Hz), which is close to the natural frequency $2\pi f_2$, and the probed location is $\mathbf{ x}_p=(-0.2,0)$.
}
\label{fig:MovingClampedBeat}%
\end{figure}
To simulate beat, we drive the plate at a frequency that is very close to the previously found natural frequency $f_2$. Specifically, we set the driving frequency in \eqref{eq:ModifiedClampedBC} to the so-called beat frequency $\xi_b =4\pi$ (or $f_b=2$~Hz); the difference between the beat frequency and the natural frequency $f_2$ is small, $|f_b- f_2|= 0.067$. Again, the simulation is performed using the NB2 scheme until $t=30$, and a similar collection of results is presented in Figure~\ref{fig:MovingClampedBeat}. The nodal line pattern for this case (Figure~\ref{fig:MovingClampedBeatA}) is in accordance with that shown in Figure~\ref{fig:MovingClampedResonanceA}. Furthermore, Figure~\ref{fig:MovingClampedBeatB} shows the expected oscillation pattern characteristic of the beat phenomenon. The FFT power spectrum of the displacement data at $\mathbf{ x}_p$, shown in Figure~\ref{fig:MovingClampedBeatC}, clearly exhibits the two adjacent frequencies that correspond to the beat frequency $f_b$ and the natural frequency $f_2$, respectively.
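The slow modulation seen in Figure~\ref{fig:MovingClampedBeatB} is a generic two-frequency effect: superposing two unit-amplitude cosines at the driving and natural frequencies (an idealization, not the plate's actual response) produces a fast carrier modulated by a slow envelope, via a standard trigonometric identity:

```python
import numpy as np

# Idealized two-mode superposition illustrating beat: responses at the
# natural frequency f2 and the nearby driving frequency fb combine into a
# carrier at (fb + f2)/2 modulated by a slow envelope at |f2 - fb|/2, so
# the perceived beat repeats with period 1/|f2 - fb|.
f2, fb = 2.067, 2.0                      # frequencies from the text [Hz]
t = np.linspace(0.0, 30.0, 6001)
w = np.cos(2 * np.pi * fb * t) - np.cos(2 * np.pi * f2 * t)

# Equivalent product form of the same signal (trig identity):
envelope = 2 * np.sin(np.pi * (f2 - fb) * t)
carrier = np.sin(np.pi * (f2 + fb) * t)
```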
{
\newcommand{\figw}{5.5cm}
\def\figW{12.5}
\def\figH{10}
\newcommand{\trimfig}[2]{\trimw{#1}{#2}{0.08}{0.08}{0.08}{0.06}}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=1]
\useasboundingbox (0.0,0.0) rectangle (\figW,\figH);
\draw(3.,5.5) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/TrackPointSolUNodalTestFreeVibMovingT30f2newMagInSet.png}{\figw}};
\draw(-.5,0.5) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/TrackPointSolUNodalTestFreeVibMovingT15f2newK1MagInSet}{\figw}};
\draw(6.5,0.5) node[anchor=south west,xshift=0pt,yshift=0pt] {\trimfig{fig/TrackPointSolUNodalTestFreeVibMovingT5f2newT1equal01MagInSet}{\figw}};
\draw(3,0) node[anchor=south,xshift=0pt,yshift=0pt] {Damped: $K_1=5, {T_1=0} $};
\draw(10,0) node[anchor=south,xshift=0pt,yshift=0pt] {Damped: ${K_1=0}, T_1=0.1$};
\draw(6.5,5.) node[anchor=south,xshift=0pt,yshift=0pt] {Undamped: $K_1=0, T_1=0 $};
\end{tikzpicture}
\end{center}
\caption{Plot of $w(\mathbf{ x}_p,t)$ {\em vs.} $t$ for three different damping scenarios.
Simulations are carried out using the NB2 scheme on grid ${\mathcal G}_{80}$.
Here the driving frequency is the same as the resonant frequency $\xi_2=2\pi f_2$ and $\mathbf{ x}_p=(-0.2,0)$.}
\label{fig:MovingClampedDamped}%
\end{figure}
}
Now, we consider the effects of the damping terms in the generalized Kirchhoff-Love plate \eqref{eq:generalizedKLPlate}: the linear damping term with coefficient ${K}_1$ and the visco-elastic damping term with coefficient ${T}_1$. The visco-elastic damping tends to smooth high-frequency oscillations in space and is often added to model vascular structures in haemodynamics. For the simulation, we use the same numerical setup as in the resonance example, and demonstrate the damping effects by tracking the displacement over time at the point $\mathbf{ x}_p=(-0.2,0)$ for three combinations of $K_1$ and $T_1$ values. The time histories of the displacement are shown in Figure~\ref{fig:MovingClampedDamped}. When only the linear damping term is added ($K_1=5$, ${T_1=0}$), the plot shows that the amplitude of the oscillation is damped down to around $2$ compared with the undamped resonance case. If we zoom in on the plot at early times, we still see some high-frequency oscillations. However, when the visco-elastic damping term is included instead (${K_1=0}$, $T_1=0.1$), the high-frequency oscillations are no longer visible in the zoomed-in plot; this term therefore smooths the wave as expected.
Similar numerical results can also be obtained with the PC22 method, although all the simulations in this section are conducted with the NB2 scheme. It is worth remarking that the examples considered here also showcase the accuracy and efficiency of our numerical methods: we are able to simulate the resonance and beat phenomena using a numerically identified natural frequency, and the numerically observed resonance at that frequency further corroborates its value.
\section{Numerical methods} \label{sec:numericalMethods}
In this section, we present the numerical approaches used to solve the governing equation \eqref{eq:generalizedKLPlate} subject to the boundary conditions \eqref{eq:clampedBC}--\eqref{eq:freeBC}. Standard centered finite-difference methods of second-order accuracy are used to discretize all the spatial derivatives, and the resulting semi-discrete equations are then integrated in time using an appropriate time-stepping scheme.
Let $\Omega_h$ denote a mesh covering the domain $\Omega$, and let $\mathbf{ x}_\mathbf{ i}\in \Omega_h$ denote the coordinates of a grid point with multi-index $\mathbf{ i}=(i_1,i_2)$. The time-dependent grid function that approximates the displacement on the mesh is given by $w_\mathbf{ i}(t)\approx w(\mathbf{ x}_{\mathbf{ i}},t)$. Similarly, $F_\mathbf{ i}$ is used to denote the given forcing function evaluated at $\mathbf{ x}_\mathbf{ i}$; namely, $F_\mathbf{ i}(t)=F(\mathbf{ x}_\mathbf{ i},t)$. We spatially discretize the governing equation and its boundary conditions by replacing the differential operators with the corresponding finite-difference operators (distinguished with a subscript $h$) to derive the semi-discrete equations,
\begin{equation}\label{eq:discreteGeneralizedKLPlate}
{\rho}{h}\ddn{w_{\mathbf{ i}}}{t}{2}=-{K}_0w_{\mathbf{ i}}+{T}\nabla_h^2w_{\mathbf{ i}}-{D} \nabla_h^4w_{\mathbf{ i}}-{K}_1\dd{w_{\mathbf{ i}}}{t}+{T}_1\nabla_h^2\dd{w_{\mathbf{ i}}}{t}+F_{\mathbf{ i}}, ~\forall \mathbf{ x}_\mathbf{ i}\in\Omega_h,
\end{equation}
as well as the following discrete boundary conditions. For all $\mathbf{ x}_{\mathbf{ i}_b}\in\partial\Omega_h$, the numerical boundary conditions are given by
{\small
\begin{align}
& \text{clamped:} & w_{\mathbf{ i}_b}=0, && \pdh{ w_{\mathbf{ i}_b}}{\mathbf{ n}}=0; \label{eq:discreteClampedBC}\\
&\text{supported:} & w_{\mathbf{ i}_b}=0, && \pdhn{ w_{\mathbf{ i}_b}}{\mathbf{ n}}{2}+\nu\pdhn{ w_{\mathbf{ i}_b}}{\mathbf{ t}}{2}=0; \label{eq:discreteSupportedBC}\\
&\text{free:} & \pdhn{ w_{\mathbf{ i}_b}}{\mathbf{ n}}{2}+\nu\pdhn{ w_{\mathbf{ i}_b}}{\mathbf{ t}}{2}=0, && \pdh{}{\mathbf{ n}}\left[\pdhn{ w_{\mathbf{ i}_b}}{\mathbf{ n}}{2}+\left(\nu-2\right)\pdhn{ w_{\mathbf{ i}_b}}{\mathbf{ t}}{2}\right]=0. \label{eq:discreteFreeBC}
\end{align}
}
For numerical purposes, we rewrite \eqref{eq:discreteGeneralizedKLPlate} as a system of first-order ODEs.
If we denote by $v_{\mathbf{ i}}$ and $a_{\mathbf{ i}}$ the numerical approximations of the velocity and acceleration at grid point $\mathbf{ x}_\mathbf{ i}$, equation \eqref{eq:discreteGeneralizedKLPlate} can be conveniently written as
\begin{align}
&\dd{w_\mathbf{ i}}{t}(t)= v_{\mathbf{ i}}(t),\label{eq:ODEw}\\
&\dd{v_\mathbf{ i}}{t}(t)= a_{\mathbf{ i}}(t),\label{eq:ODEv}\\
&{\rho}{h}a_\mathbf{ i}(t)=-{{\mathcal K}_h} w_{\mathbf{ i}}(t)-{{\mathcal B}_h}v_{\mathbf{ i}}(t)+F_{\mathbf{ i}}(t),\label{eq:ODEa}
\end{align}
where the operators ${\mathcal K}_h$ for the internal forces and ${\mathcal B}_h$ for the damping forces are introduced to simplify the notation:
\begin{equation}\label{eq:KBOperators}
{\mathcal K}_h = {K}_0-{T}\nabla_h^2+{D} \nabla_h^4\quad\text{and}\quad
{\mathcal B}_h={K}_1- {T}_1\nabla_h^2.
\end{equation}
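For illustration, the operators in \eqref{eq:KBOperators} can be assembled as sparse matrices. The following one-dimensional sketch shows the pattern; our computations are two-dimensional, and the boundary closure via ghost points is omitted here for brevity:

```python
import numpy as np
import scipy.sparse as sp

def assemble_operators(n, dx, K0, T, D, K1, T1):
    """Assemble discrete operators K_h = K0 - T*Lap + D*Lap^2 and
    B_h = K1 - T1*Lap on a uniform 1D grid, using the second-order
    centered Laplacian.  Boundary closure (ghost points) is omitted,
    so only interior rows of the result are meaningful."""
    e = np.ones(n)
    lap = sp.diags([e[:-1], -2.0 * e, e[:-1]], [-1, 0, 1]) / dx**2
    I = sp.identity(n)
    Kh = K0 * I - T * lap + D * (lap @ lap)
    Bh = K1 * I - T1 * lap
    return Kh, Bh
```

In two dimensions the Laplacian is formed analogously (e.g., via Kronecker products), and the semi-discrete acceleration \eqref{eq:ODEa} is then evaluated as $a=(-{\mathcal K}_h w-{\mathcal B}_h v+F)/(\rho h)$ with these matrices.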
In this paper, two time-stepping methods are considered for advancing the ODE system in time. One is an explicit predictor-corrector scheme consisting of a second-order Adams-Bashforth (AB2) predictor and a second-order Adams-Moulton (AM2) corrector; the other is an implicit Newmark-Beta scheme of second-order accuracy \cite{Newmark59}. For short, we refer to the former as the PC22 scheme and to the latter as the NB2 scheme.
To simplify the discussion, the algorithms are developed for a fixed time-step $\Delta t$ so that $t_n =n\Delta t$.
Let the numerical solutions of \eqref{eq:ODEw}--\eqref{eq:ODEa} at time $t_n$ be $w_\mathbf{ i}^n\approx w_\mathbf{ i}(t_n)$, $v_\mathbf{ i}^n\approx v_\mathbf{ i}(t_n)$, and $a_\mathbf{ i}^n\approx a_\mathbf{ i}(t_n)$, and denote $F_{\mathbf{ i}}^{n}=F(\mathbf{ x}_\mathbf{ i},t_n)$. The goal of a time-stepping algorithm is to determine the solutions at a new time given solutions at previous time levels.
First, we describe the PC22 scheme in Algorithm~\ref{alg:pc22}.
\begin{algorithm}[H]
{\bf Input:} {solutions at two previous time levels; i.e., $(w_\mathbf{ i}^n,v_\mathbf{ i}^n,a_\mathbf{ i}^n)$ and $(w_\mathbf{ i}^{n-1},v_\mathbf{ i}^{n-1},a_\mathbf{ i}^{n-1})$ }\\
{\bf Output:} {solutions at the new time level; i.e., $(w_\mathbf{ i}^{n+1},v_\mathbf{ i}^{n+1},a_\mathbf{ i}^{n+1})$ }\\
{\bf Procedures:} \\
{\em Stage I: predict solutions using a second-order Adams-Bashforth (AB2) predictor}
\begin{equation*}
\forall \mathbf{ x}_\mathbf{ i}\in\Omega_h:\quad
\begin{cases}
w_\mathbf{ i}^p= w_\mathbf{ i}^n+\Delta t\left(\frac{3}{2}v_\mathbf{ i}^{n}-\frac{1}{2}v_\mathbf{ i}^{n-1}\right)\\
v_\mathbf{ i}^p= v_\mathbf{ i}^n+\Delta t\left(\frac{3}{2}a_\mathbf{ i}^{n}-\frac{1}{2}a_\mathbf{ i}^{n-1}\right) \\
a^{p}_\mathbf{ i}=\frac{1}{{\rho}{h}}\left(-{{\mathcal K}_h} w^{p}_{\mathbf{ i}}-{{\mathcal B}_h}v^{p}_{\mathbf{ i}}+F^{n+1}_{\mathbf{ i}}\right)
\end{cases}
\end{equation*}
{\em Stage II: correct solutions using a second-order Adams-Moulton (AM2) corrector}
\begin{equation*}
\forall \mathbf{ x}_\mathbf{ i}\in\Omega_h:\quad
\begin{cases}
w_\mathbf{ i}^{n+1}= w_\mathbf{ i}^n+\Delta t\left(\frac{1}{2}v_\mathbf{ i}^{n}+\frac{1}{2}v_\mathbf{ i}^{p}\right)\\
v_\mathbf{ i}^{n+1}= v_\mathbf{ i}^n+\Delta t\left(\frac{1}{2}a_\mathbf{ i}^{n}+\frac{1}{2}a_\mathbf{ i}^{p}\right) \\
a^{n+1}_\mathbf{ i}=\frac{1}{{\rho}{h}}\left(-{{\mathcal K}_h} w^{n+1}_{\mathbf{ i}}-{{\mathcal B}_h}v^{n+1}_{\mathbf{ i}}+F^{n+1}_{\mathbf{ i}}\right)
\end{cases}
\end{equation*}
{\bf Remark}: {\em Boundary conditions are applied after both the predictor and corrector stages to fill in the solutions at ghost and/or boundary grid points. Note that numerical boundary conditions for $v$ and $a$ are derived from those for $w$ by taking the appropriate time derivatives of \eqref{eq:discreteClampedBC}--\eqref{eq:discreteFreeBC}.}
\caption{PC22 time-stepping scheme}\label{alg:pc22}
\end{algorithm}
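A minimal scalar realization of Algorithm~\ref{alg:pc22} (grid, ghost points, and boundary conditions omitted; the operators reduce to scalars, but the same update applies with matrices) may look as follows:

```python
import math

def pc22_step(w, v, a, v_prev, a_prev, dt, rho_h, Kh, Bh, F_new):
    """One PC22 step (AB2 predictor + AM2 corrector) for the first-order
    system dw/dt = v, dv/dt = a, rho_h*a = -Kh*w - Bh*v + F."""
    # Stage I: AB2 predictor
    w_p = w + dt * (1.5 * v - 0.5 * v_prev)
    v_p = v + dt * (1.5 * a - 0.5 * a_prev)
    a_p = (-Kh * w_p - Bh * v_p + F_new) / rho_h
    # Stage II: AM2 (trapezoidal) corrector
    w_new = w + 0.5 * dt * (v + v_p)
    v_new = v + 0.5 * dt * (a + a_p)
    a_new = (-Kh * w_new - Bh * v_new + F_new) / rho_h
    return w_new, v_new, a_new

# Demo: undamped scalar oscillator rho_h*w'' = -Kh*w with exact solution
# cos(omega*t); seed the two-step predictor with exact data at t = 0, dt.
omega = 2 * math.pi
Kh, dt = omega**2, 1e-3
v_prev, a_prev = 0.0, -Kh * 1.0              # state at t = 0
w = math.cos(omega * dt)                     # state at t = dt
v = -omega * math.sin(omega * dt)
a = -Kh * w
for n in range(1, 1000):                     # march to t = 1
    w1, v1, a1 = pc22_step(w, v, a, v_prev, a_prev, dt, 1.0, Kh, 0.0, 0.0)
    v_prev, a_prev = v, a
    w, v, a = w1, v1, a1
err = abs(w - 1.0)                           # exact w(1) = cos(2*pi) = 1
```

The demo also illustrates that the two-step predictor needs data at two time levels to start; in practice the first step can be taken with a one-step method of matching accuracy.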
Second, we consider the Newmark-Beta scheme for solving our problem \eqref{eq:ODEw}--\eqref{eq:ODEa}. The Newmark-Beta scheme is a general procedure proposed by Newmark for the solution of problems in structural dynamics \cite{Newmark59}. Given the acceleration, the scheme updates the velocity and displacement by solving
\begin{equation} \label{eq:nbWV}
\begin{cases}
w_{\mathbf{ i}}^{n+1}= w_{\mathbf{ i}}^{n}+\Delta t v_{\mathbf{ i}}^n +\frac{\Delta t^2}{2}\left[ (1-2\beta)a_{\mathbf{ i}}^{n}+2\beta a_{\mathbf{ i}}^{n+1} \right], \\
v_{\mathbf{ i}}^{n+1} = v_{\mathbf{ i}}^{n}+\Delta t \left[ (1-\gamma)a_{\mathbf{ i}}^{n}+\gamma a_{\mathbf{ i}}^{n+1} \right],
\end{cases}
\end{equation}
where the acceleration in our case is given by
\begin{equation} \label{eq:nbA}
{\rho}{h}a^{n+1}_\mathbf{ i}=-{{\mathcal K}_h} w_{\mathbf{ i}}^{n+1}-{{\mathcal B}_h}v_{\mathbf{ i}}^{n+1}+F_{\mathbf{ i}}^{n+1}.
\end{equation}
We note that the scheme is unconditionally stable if $1/2\leq \gamma\leq 2\beta$, whereas it is conditionally stable if $\gamma>\max\{1/2,2\beta\}$.
Instead of solving the above implicit system for $w_\mathbf{ i}^{n+1}$, $v_\mathbf{ i}^{n+1}$ and $a_\mathbf{ i}^{n+1}$ all at the same time, we use \eqref{eq:nbWV} to eliminate $w_\mathbf{ i}^{n+1}$ and $v_\mathbf{ i}^{n+1}$ in \eqref{eq:nbA}, and then solve a smaller system for $a_\mathbf{ i}^{n+1}$ only. The complete algorithm for this scheme is summarized in Algorithm~\ref{alg:nb2}.
\begin{algorithm}[H]
{\bf Input:} {solutions at the previous time level; i.e., $(w_\mathbf{ i}^n,v_\mathbf{ i}^n,a_\mathbf{ i}^n)$ }\\
{\bf Output:} {solutions at the new time level; i.e., $(w_\mathbf{ i}^{n+1},v_\mathbf{ i}^{n+1},a_\mathbf{ i}^{n+1})$ }\\
{\bf Procedures:} \\
{\em Stage I. compute a first-order prediction for displacement and velocity}
\begin{equation*}
\forall \mathbf{ x}_\mathbf{ i}\in\Omega_h:\quad
\begin{cases}
w_{\mathbf{ i}}^{p}= w_{\mathbf{ i}}^{n}+\Delta t v_{\mathbf{ i}}^n +\frac{\Delta t^2}{2} (1-2\beta)a_{\mathbf{ i}}^{n} \\
v_{\mathbf{ i}}^{p} = v_{\mathbf{ i}}^{n}+\Delta t (1-\gamma)a_{\mathbf{ i}}^{n}
\end{cases}
\end{equation*}
{\em Stage II. solve a system of equations for acceleration at $t_{n+1}$}
\begin{equation*}
\forall \mathbf{ x}_\mathbf{ i}\in\Omega_h:\quad \left( {\rho}{h}+\beta \Delta t^2{\mathcal K}_h+\gamma\Delta t{\mathcal B}_h\right)a^{n+1}_\mathbf{ i}=-{{\mathcal K}_h} w_{\mathbf{ i}}^{p}-{{\mathcal B}_h}v_{\mathbf{ i}}^{p}+F_{\mathbf{ i}}^{n+1}
\end{equation*}
{\em Stage III. solve for displacement and velocity at $t_{n+1}$} \\
$$
\text{
compute $w_\mathbf{ i}^{n+1}$ and $v_\mathbf{ i}^{n+1}$ explicitly from \eqref{eq:nbWV}
}
$$
{\bf Remark}: {\em In this paper, we set $\beta = 1/4$ and $\gamma = 1/2$. With this choice of parameters, the scheme is second-order accurate and unconditionally stable. We also note that boundary conditions are applied after stages I and III to fill in the solutions of $w$ and $v$ at ghost and/or boundary grid points. For stage II, equations for acceleration at ghost and boundary nodes are replaced with boundary conditions.}
\caption{NB2 time-stepping scheme}\label{alg:nb2}
\end{algorithm}
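A scalar sketch of one NB2 step (Algorithm~\ref{alg:nb2}); with matrices, the division in Stage II becomes a sparse linear solve:

```python
def nb2_step(w, v, a, dt, rho_h, Kh, Bh, F_new, beta=0.25, gamma=0.5):
    """One Newmark-Beta step: predict, solve for the new acceleration,
    then update displacement and velocity explicitly."""
    # Stage I: first-order prediction
    w_p = w + dt * v + 0.5 * dt**2 * (1 - 2 * beta) * a
    v_p = v + dt * (1 - gamma) * a
    # Stage II: solve (rho_h + beta*dt^2*Kh + gamma*dt*Bh) a_new = rhs
    a_new = (-Kh * w_p - Bh * v_p + F_new) / (
        rho_h + beta * dt**2 * Kh + gamma * dt * Bh)
    # Stage III: explicit update of displacement and velocity
    w_new = w_p + beta * dt**2 * a_new
    v_new = v_p + gamma * dt * a_new
    return w_new, v_new, a_new

# Demo: undamped scalar oscillator.  With beta = 1/4, gamma = 1/2 the
# scheme is the average-acceleration (trapezoidal) method, which
# preserves the discrete energy v^2 + Kh*w^2 of the oscillator.
omega_sq = (2 * 3.141592653589793)**2
w, v, a = 1.0, 0.0, -omega_sq
E0 = v * v + omega_sq * w * w
for _ in range(1000):                  # march to t = 1 with dt = 1e-3
    w, v, a = nb2_step(w, v, a, 1e-3, 1.0, omega_sq, 0.0, 0.0)
drift = abs(v * v + omega_sq * w * w - E0) / E0
```

The near-zero energy drift in the demo is a simple numerical check of the unconditional stability claimed for this parameter choice.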
\section{Introduction}
Thin-walled elastic solids, often referred to as plates or shells, are ubiquitous in engineering and the applied sciences. Examples of plates and shells can be found in many common mechanical and biological structures, such as dome-shaped stadium rooftops, airplane fuselages, vessel walls and aortic valves. Adequately understanding the intrinsic properties of plates and shells is crucial for the various applications involving these structures. Therefore, the mathematical modeling of plate-like structures and the subsequent development of numerical approximations for their solutions have long been active areas of research. Note that the difference between a plate and a shell lies in the stress-free reference shape, which is flat for a plate and curved for a shell.
To study plate structures analytically, numerous theories have been developed over the years, aiming to predict the various key physical characteristics; see \cite{Reissner76,TimoshenkoWoinowsky59,Leissa69,Love1888,KoiterSimmonds73} and the references therein. The classical Kirchhoff-Love plate theory, developed in 1888 under the assumptions that the thickness of the plate remains fixed and that straight lines normal to the reference surface remain straight and normal to it after deformation, captures the bending dynamics of a plate in response to a transverse load and determines the propagation of waves in the plate \cite{Love1888}. As an extension of the Kirchhoff-Love model, the Mindlin-Reissner plate theory takes first-order shear deformation into account and no longer assumes that straight lines normal to the reference surface remain normal during deformation \cite{Mindlin51}. As reviewed in \cite{KoiterSimmonds73}, there are also many other plate theories able to describe more sophisticated nonlinear physical phenomena, which makes them viable choices for modeling complicated engineering applications. For example, the Koiter shell theory \cite{Koiter60} and its recent variant that incorporates viscoelasticity
\cite{CanicEtal06} are often used in biomedical engineering to model artery walls.
These plate theories are in general derived by exploiting the disparity in the length scales of thin structures, and they reduce the complexity of the three-dimensional (3D) continuum mechanics problem to a two-dimensional (2D) one. The governing equations of a plate theory typically involve variables defined only on a reference surface residing in a 2D domain; for an isotropic and homogeneous plate, the middle (or center) surface is used as the reference. See Figure~\ref{fig:shellCartoon} for a schematic illustration of a 3D thin plate and its 2D reference surface.
Physical assumptions of the underlying plate theories also provide means of calculating the load-carrying and deflection characteristics of the original thin-walled structures; therefore, the complete deformation and stress fields of a 3D thin structure can be inferred from the solution on its reference surface. It is generally expected that the thinner the structure, the more accurate the plate theory.
\begin{figure}[h]
\centering
\resizebox{9cm}{!}{
\input texFiles/3DShell.tex
}
\caption{Cartoon illustration of a deformed thin plate and its reference surface.}\label{fig:shellCartoon}
\end{figure}
From both the analytical and numerical points of view, 2D plate theories are immensely more tractable than 3D solid mechanics models. Plate theories are especially appealing to researchers exploring multi-physics problems, such as fluid-structure interaction (FSI) problems involving thin-walled elastic structures \cite{fis2014,LiHenshaw2016,CanicEtal06}, in which multiple physical subproblems are dealt with simultaneously. Although greatly simplified from the full 3D continuum mechanics problem, governing equations for plates are still too complicated to be solved analytically, except for a limited number of cases with simple specifications \cite{Rudolph-2004}. Efficient and accurate numerical approximations of the solutions are therefore of great practical interest. However, due to the numerical challenges posed by the high-order spatial derivatives associated with the bending effect of plates, the development of stable and accurate numerical methods for solving plate equations is non-trivial. Many numerical approaches have been developed for various plate models based on common discretization methods such as finite differences \cite{bilbao2008family,JiLi2019}, finite elements (FEM) \cite{Jean-Loui-1982,Adna-1993,Eugeni-2000,daVeiga-2007,Bischoff-2004,Motasoares-2006,Perotti-2013,Huang-2011,BecacheEtal2004}, and boundary elements (BEM) \cite{AFrangi-1999}, to name just a few. More recently, new computational methods based on isogeometric analysis have also emerged; see \cite{BensonEtal2010} for an example of solving the Reissner-Mindlin shell with isogeometric analysis.
In this paper,
we present accurate and efficient numerical approximations of a generalized Kirchhoff-Love model that incorporates additional important physics, such as linear restoration, tension and visco-elasticity, for isotropic and homogeneous thin plates. The generalized model equation is spatially discretized with a standard second-order accurate finite difference method and integrated in time using either an explicit predictor-corrector or an implicit time-stepping scheme. Stability analysis is performed and the results are utilized to determine stable time steps for the proposed numerical schemes. Stable and accurate numerical boundary conditions are also investigated for the most common physical boundary conditions (i.e., clamped, simply supported and free). Carefully designed test problems are solved using all the proposed schemes for numerical validations. Interesting applications using the numerical methods are also discussed.
The remainder of the paper is organized as follows. In Section \ref{sec:governingEquations}, we present the governing equation and its boundary conditions for a generalized Kirchhoff-Love model. The numerical algorithms for solving the model equation are discussed in Section \ref{sec:numericalMethods}. We analyze the stability of the numerical schemes and lay out a strategy for determining stable time steps for the algorithms in Section~\ref{sec:analysis}. Numerical results that provide verification of the stability and accuracy of the schemes, as well as cross-validation with experiments, are presented in Section~\ref{sec:results}. Finally, concluding remarks are made in Section~\ref{sec:conclusions}.
\section{Nodal line patterns for the eigenvalue problem}\label{sec:appendix}
We show the results of the eigenvalue problem \eqref{eq:eigenProblem} here. Nodal lines of the first $25$ eigenmodes (with multiplicity) for the square plate with clamped edges and for the annular plate with simply supported boundaries are shown in Figures~\ref{fig:ClampedModeShapeSquareMATLAB} \& \ref{fig:SupportedModeShapeAnnMATLAB}, respectively. The eigenmodes plotted for each degenerate pair are chosen arbitrarily, so they may appear asymmetric.
{
\newcommand{\figw}{13cm}
\def\figW{13.}
\def\figH{13.}
\newcommand{\trimfig}[2]{\trimw{#1}{#2}{0.02}{0.22}{0.02}{0.0}}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1]
\useasboundingbox (0.0,0.0) rectangle (\figW,\figH);
\draw(6.3,6.3) node[anchor=center,xshift=0pt,yshift=0pt] {\trimfig{fig/Clampednx80ny80.png}{\figw}};
\end{tikzpicture}
\end{center}
\caption{Nodal lines of the first $25$ eigenmodes (with multiplicity) of the clamped square plate. There are $6$ degenerate pairs of eigenmodes, so that only $19$ distinct eigenvalues are represented. Values reported in the plots are the natural frequencies in increasing order.
}
\label{fig:ClampedModeShapeSquareMATLAB}
\end{figure}
}
{
\newcommand{\trimfig}[2]{\trimw{#1}{#2}{0.02}{0.22}{0.02}{0.0}}
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=1]
\useasboundingbox (0.0,0.0) rectangle (13.,13.);
\draw(6.3,6.3) node[anchor=center,xshift=0pt,yshift=0pt] {\trimfig{fig/annSupportednx80ny80.png}{13cm}};
\end{tikzpicture}
\end{center}
\caption{ Nodal lines of the first $25$ eigenmodes (with multiplicity) of the simply supported annular plate. There are $11$ degenerate pairs of eigenmodes, so that only $14$ distinct eigenvalues are represented. Values reported in the plots are the natural frequencies in increasing order.
}
\label{fig:SupportedModeShapeAnnMATLAB}
\end{figure}
}
\end{appendix}
% https://arxiv.org/abs/2212.10586
\title{A combinatorial proof of a tantalizing symmetry on Catalan objects}

\begin{abstract}
We investigate a tantalizing symmetry on Catalan objects. In terms of Dyck paths, this symmetry is interpreted in the following way: if $w_{n,k,m}$ is the number of Dyck paths of semilength $n$ with $k$ occurrences of $UD$ and $m$ occurrences of $UUD$, then $w_{2k+1,k,m}=w_{2k+1,k,k+1-m}$. We give two proofs of this symmetry: an algebraic proof using generating functions, and a combinatorial proof which makes heavy use of the cycle lemma and an alternate interpretation of the numbers $w_{n,k,m}$ using plane trees. In particular, our combinatorial proof expresses the numbers $w_{2k+1,k,m}$ in terms of Narayana numbers, and we generalize this to a relationship between the numbers $w_{n,k,m}$ and a family of generalized Narayana numbers due to Callan. Some further generalizations and applications of our combinatorial proofs are explored. Finally, we investigate properties of the polynomials $W_{n,k}(t)= \sum_{m=0}^k w_{n,k,m} t^m$, including real-rootedness, $\gamma$-positivity, and a symmetric decomposition.
\end{abstract}

\section{Introduction}
Lattice path enumeration is an active area of investigation in combinatorics.
A \emph{lattice path} is a path in the discrete integer lattice $\mathbb{Z}^n$ consisting of a sequence of steps from a prescribed \emph{step set} and satisfying prescribed restrictions.
Among classical lattice paths, Dyck paths are perhaps the most well-studied.
A \emph{Dyck path} of \emph{semilength} $n$ is a path in $\mathbb{Z}^2$ with step set $S= \{ (1,1), (1,-1) \}$ that starts at the origin $(0,0)$, ends at $(2n,0)$, and never traverses below the horizontal axis.
The steps $(1,1)$ are called \emph{up steps} and can be represented by the letter $U$, while the steps $(1,-1)$ are \emph{down steps} and denoted $D$.
It is well known that the number of Dyck paths of semilength $n$ is the Catalan number $C_n=\binom{2n}{n}/(n+1)$.
The number of Dyck paths of semilength $n$ that contain exactly $k$ places where a $U$ is immediately followed by a $D$---called $UD$-\emph{factors} (or \emph{peaks})---is equal to the \emph{Narayana number} $N_{n,k}$ defined by
\[N_{n,k} = \frac{1}{n}\binom{n}{k} \binom{n}{k-1} \]
for all $1 \leq k \leq n$ and by $N_{0,0} = 1$. Also, $N_{n,k}$ is the number of Dyck paths of semilength $n$ with $k-1$ $DU$-factors. The Narayana numbers are known to exhibit the symmetry $N_{n,k} = N_{n,n+1-k}$, which can be proved combinatorially via several involutions; see, for example, \cite{Kreweras1970, Kreweras1972, Lalanne1992}.
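For concreteness, the Narayana numbers and their symmetry are easy to check numerically; the following Python sketch (the function name is ours) verifies the symmetry for small $n$, along with the fact that the Narayana numbers refine the Catalan numbers:

```python
from math import comb

def narayana(n, k):
    """Narayana number N_{n,k} = (1/n) * binom(n,k) * binom(n,k-1), with N_{0,0} = 1."""
    if not 1 <= k <= n:
        return 1 if (n, k) == (0, 0) else 0
    return comb(n, k) * comb(n, k - 1) // n

# The symmetry N_{n,k} = N_{n,n+1-k} for 1 <= k <= n:
for n in range(1, 13):
    assert all(narayana(n, k) == narayana(n, n + 1 - k) for k in range(1, n + 1))

# The Narayana numbers refine the Catalan numbers: sum_k N_{n,k} = C_n.
assert sum(narayana(6, k) for k in range(1, 7)) == comb(12, 6) // 7  # C_6 = 132
```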
There is interest in counting Dyck paths (and other kinds of lattice paths) by the number of occurrences of longer factors, or even finding the \emph{joint distribution} of occurrences of multiple kinds of factors.
Two early works in this direction are \cite{Deutsch1999, sapounakis-tsoulas-tsikouras}.
In \cite{wang}, Wang discusses a general technique that is useful for obtaining the relevant generating functions in many such cases; see also \cite{yam}.
Our work is concerned with the joint distribution of $UD$-factors and $UUD$-factors over Dyck paths, which does not seem to have been studied before, and is different from the instances discussed in the works cited above as the factor $UD$ is an ending segment of the factor $UUD$. We can also interpret these factors in terms of \emph{runs}---maximal consecutive subsequences of up steps---as the number of $UD$-factors is equal to the total number of runs, and the number of $UUD$-factors is equal to the number of runs of length at least 2.
Let $w_{n,k,m}$ be the number of Dyck paths of semilength $n$ with $k$ $UD$-factors and $m$ $UUD$-factors. Then we have the following tantalizing symmetry:
\begin{thm}
\label{th:main}
For all $1 \leq m \leq k$, we have
$$
w_{2k+1,k,m} = w_{2k+1,k, k+1-m}.
$$
\end{thm}
\noindent In other words, among all Dyck paths of semilength $2k+1$ with $k$ $UD$-factors, the number of those with $m$ $UUD$-factors is equal to the number of those with $k+1-m$ $UUD$-factors.
To prove this symmetry, the first and third authors used generating function techniques to derive the following closed formula for the numbers $w_{n,k,m}$, from which Theorem \ref{th:main} readily follows.
\begin{thm} \label{t-explicit} We have
\begin{equation*}
w_{n,k,m} = \begin{cases}
\displaystyle{\frac{1}{k}\binom{n}{k-1}\binom{n-k-1}{m-1}\binom{k}{m}}, & \text{ if } m>0, \, m\leq k, \, \text{and } k+m\leq n, \\
1, & \text{ if } m = 0 \text{ and } n = k,\\
0, & \text{ otherwise.}
\end{cases}
\end{equation*}
\end{thm}
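The closed formula can be checked against direct enumeration for small $n$; the following Python sketch (a brute-force sanity check, not part of either proof) tallies $UD$- and $UUD$-factors over all Dyck words:

```python
from math import comb

def dyck_words(n):
    """All Dyck words of semilength n, as strings over {U, D}."""
    def rec(word, ups, downs):
        if ups == n and downs == n:
            yield word
            return
        if ups < n:
            yield from rec(word + "U", ups + 1, downs)
        if downs < ups:
            yield from rec(word + "D", ups, downs + 1)
    yield from rec("", 0, 0)

def w_formula(n, k, m):
    """The closed formula of the theorem above."""
    if 0 < m <= k and k + m <= n:
        return comb(n, k - 1) * comb(n - k - 1, m - 1) * comb(k, m) // k
    return 1 if (m == 0 and n == k) else 0

for n in range(8):
    # Tally (k, m) = (#UD-factors, #UUD-factors) over all Dyck words of semilength n.
    counts = {}
    for p in dyck_words(n):
        key = (p.count("UD"), p.count("UUD"))
        counts[key] = counts.get(key, 0) + 1
    for k in range(n + 1):
        for m in range(k + 1):
            assert counts.get((k, m), 0) == w_formula(n, k, m)
```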
Nonetheless, Theorem \ref{th:main} cries out for a combinatorial explanation, but a combinatorial proof eluded the first and third authors for several years.
The remaining authors joined this project in Summer 2022 as part of the AMS Mathematics Research Community \emph{Trees in Many Contexts}, during which we found a combinatorial proof for Theorem \ref{th:main}; this proof is the main focus of our paper.
To give a combinatorial proof for Theorem \ref{th:main}, we in fact give a combinatorial proof for the following identity relating the numbers $w_{2k+1,k,m}$ to Narayana numbers.
\begin{thm}
\label{th:relationNandW}
For all $k \geq 1$ and $m\geq 0$, we have
\begin{equation}
\label{eq:relNW}
w_{2k+1,k, m} = \binom{2k+1}{k-1} N_{k,m}.
\end{equation}
\end{thm}
Observe that Theorem \ref{th:main} follows immediately from Theorem \ref{th:relationNandW} and the symmetry of the Narayana numbers.
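Theorem \ref{th:relationNandW}, and with it the symmetry of Theorem \ref{th:main}, can be confirmed for small $k$ by direct enumeration; the following Python sketch checks $k \leq 3$:

```python
from math import comb

def narayana(n, k):
    """Narayana number N_{n,k}, with N_{0,0} = 1 and N_{n,k} = 0 out of range."""
    if not 1 <= k <= n:
        return 1 if (n, k) == (0, 0) else 0
    return comb(n, k) * comb(n, k - 1) // n

def dyck_words(n):
    """All Dyck words of semilength n, as strings over {U, D}."""
    def rec(word, ups, downs):
        if ups == n and downs == n:
            yield word
            return
        if ups < n:
            yield from rec(word + "U", ups + 1, downs)
        if downs < ups:
            yield from rec(word + "D", ups, downs + 1)
    yield from rec("", 0, 0)

for k in range(1, 4):                    # semilengths n = 3, 5, 7
    n = 2 * k + 1
    paths = [p for p in dyck_words(n) if p.count("UD") == k]
    w = [sum(1 for p in paths if p.count("UUD") == m) for m in range(k + 2)]
    for m in range(k + 2):
        # the identity w_{2k+1,k,m} = binom(2k+1, k-1) * N_{k,m}
        assert w[m] == comb(n, k - 1) * narayana(k, m)
        # the symmetry w_{2k+1,k,m} = w_{2k+1,k,k+1-m}
        assert w[m] == w[k + 1 - m]
```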
Our combinatorial proof of Theorem \ref{th:relationNandW} requires heavy use of the cycle lemma, as well as an alternate interpretation of the numbers $w_{n,k,m}$ in terms of plane trees.
We also generalize Theorem \ref{th:relationNandW} to a relationship between the $w_{n,k,m}$ and a family of generalized Narayana numbers introduced by Callan~\cite{callan}.
Along the way, we provide a combinatorial proof for our explicit formula for the numbers $w_{n,k,m}$ stated in Theorem~\ref{t-explicit}. In fact, many of our results generalize to the numbers $w_{n, k_{1}, k_{2}, \ldots, k_{r}}$, defined to be the number of Dyck paths of semilength $n$ with $k_{1}$ $UD$-factors, $k_{2}$ $UUD$-factors, \dots , and $k_{r}$ $U^{r}D$-factors.
This paper is structured as follows.
In Section \ref{explicit_solution}, we present an algebraic proof (using generating functions) for the explicit formula in Theorem \ref{t-explicit}. Readers who are only interested in combinatorial proofs may skip to Section \ref{preliminaries}, which reviews background material needed for our combinatorial proofs. In Section \ref{combinatorial_proofs}, we present a combinatorial proof for Theorem~\ref{th:relationNandW} as well as a variant of Theorem~\ref{th:relationNandW} for the numbers $w_{2k-1,k,m}$.
Section \ref{s-generalized} provides a combinatorial proof of Theorem~\ref{t-explicit} and presents generalizations of various results, including our generalized formula relating the $w_{n,k,m}$ to generalized Narayana numbers as well as results on the numbers $w_{n, k_{1}, k_{2}, \ldots, k_{r}}$.
We conclude in Section \ref{polynomials} with some investigation of the polynomials $W_{n,k}(t)= \sum_{m=0}^k w_{n,k,m} t^m$, including real-rootedness and $\gamma$-positivity results, as well as a symmetric decomposition. Several conjectures on the polynomials $W_{n,k}(t)$ and their symmetric decomposition are given.
\section{The generating function approach}\label{explicit_solution}
The goal of this section is to present an algebraic proof of Theorem \ref{t-explicit}, which gives an explicit formula for the numbers $w_{n,k,m}$ and in turn implies the symmetry in Theorem \ref{th:main}.
To do so, we need a functional equation satisfied by the generating function $W$ defined by
\[W=W(x,y,z)=\sum_{n,k,m}w_{n,k,m}\,x^ny^kz^m.\]
In other words, $W$ is the generating function of all Dyck paths, where the exponents of $x$, $y$, and $z$ encode the semilength, the number of $UD$-factors, and the number of $UUD$-factors, respectively.
\begin{lemma}
The functional equation
\begin{equation}\label{func_eq-first}
(x-x^2y+x^2yz)W^2-(1+x-xy)W+1=0
\end{equation}
holds.
\end{lemma}
\begin{proof}
Let us call a Dyck path of positive semilength \emph{irreducible} if it does not touch the horizontal axis, other than at its starting point and endpoint.
Then each nonempty Dyck path $P$ uniquely decomposes as the concatenation of an irreducible Dyck path and a Dyck path (in that order), where the latter is the empty path if $P$ itself is irreducible.
If $V(x,y,z)$ is the generating function for irreducible Dyck paths defined similarly to $W(x,y,z)$ but only for irreducible paths, then this leads to the functional equation
\begin{equation} \label{funceq1}
W = 1+ VW.
\end{equation}
Let $I$ be an irreducible Dyck path; then $I$ must start with an up step, end with a down step, and consist of a Dyck path $P_1$ between the two.
If $P_1$ is empty, then $I$ is simply the path $UD$, contributing $xy$ to the generating function $V(x,y,z)$.
Otherwise, $I=UP_1D$, and $I$ has the same number of $UD$-factors as $P_1$, while its semilength is one longer.
The number of $UUD$-factors of $I$ compared with that of $P_1$, however, depends on whether $P_1$ begins with a $UD$, so let us consider these two cases separately: \vspace{5bp}
\begin{itemize}
\item If $P_1$ does begin with a $UD$, then $P_1$ is of the form $UDP_2$ and $I=UP_1D=UUDP_2D$ has one more $UUD$-factor than $P_1$.
Therefore, in this case, $I$ has one more $UD$-factor and one more $UUD$-factor than $P_2$, while its semilength is two longer.
Dyck paths of this type contribute $x^2yzW$ to $V$. \vspace{5bp}
\item If $P_1$ does not begin with a $UD$, then $I$ also has the same number of $UUD$-factors as $P_1$; Dyck paths of this type contribute $x(W-1-xyW)$ to the generating function $V$. \vspace{5bp}
\end{itemize}
Thus, we have the functional equation
\begin{equation} \label{funceq2}
V= xy+ x^2yzW + x(W-1-xyW).
\end{equation}
The proof of our claim is now routine by substituting the expression obtained for $V$ in \eqref{funceq2} into (\ref{funceq1}) and rearranging.
\end{proof}
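The functional equation can also be confirmed numerically: computing the coefficients $w_{n,k,m}$ for small $n$ by brute force and substituting the truncated series into \eqref{func_eq-first} should leave no residue up to the truncation order. The following Python sketch does this with an exponent-dictionary representation of trivariate polynomials (our own illustration, not part of the proof):

```python
from collections import defaultdict

N = 6  # verify coefficients of the functional equation up to x-degree N

def dyck_words(n):
    """All Dyck words of semilength n, as strings over {U, D}."""
    def rec(word, ups, downs):
        if ups == n and downs == n:
            yield word
            return
        if ups < n:
            yield from rec(word + "U", ups + 1, downs)
        if downs < ups:
            yield from rec(word + "D", ups, downs + 1)
    yield from rec("", 0, 0)

# Truncation of W(x,y,z): the coefficient of x^n y^k z^m is w_{n,k,m}.
W = defaultdict(int)
for n in range(N + 1):
    for p in dyck_words(n):
        W[(n, p.count("UD"), p.count("UUD"))] += 1

def mul(P, Q):
    """Product of trivariate polynomials (exponent-tuple dicts), truncated at x-degree N."""
    R = defaultdict(int)
    for (a, b, c), s in P.items():
        for (d, e, f), t in Q.items():
            if a + d <= N:
                R[(a + d, b + e, c + f)] += s * t
    return R

u = {(1, 0, 0): 1, (2, 1, 0): -1, (2, 1, 1): 1}  # x - x^2 y + x^2 y z
v = {(0, 0, 0): 1, (1, 0, 0): 1, (1, 1, 0): -1}  # 1 + x - x y

# residue = u W^2 - v W + 1 should vanish identically up to x-degree N
residue = defaultdict(int)
for key, c in mul(u, mul(W, W)).items():
    residue[key] += c
for key, c in mul(v, W).items():
    residue[key] -= c
residue[(0, 0, 0)] += 1
assert all(c == 0 for c in residue.values())
```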
\begin{proof}[Algebraic proof of Theorem~\ref{t-explicit}]
Equation (\ref{func_eq-first}) can be rewritten as
\begin{equation*}\label{func_eq2}
uW^2-vW+1=0, \quad {\text{where }} u=x-x^2y+x^2yz {\text{ and }} v=1+x-xy.
\end{equation*}
Solving for $W$ in this second degree equation, after elementary manipulations, gives
\begin{equation}\label{quad_sol}
W=\frac{v-\sqrt{v^2-4u}}{2u}=\frac{v}{2u} \left(1-\sqrt{1-4 \frac{u}{v^2}}\right).
\end{equation}
Setting $t=u/v^2$ in the well known classical equation
\begin{equation*}
\frac{1-\sqrt{1-4t}}{2t}=\sum_{\nu \geq 0} C_\nu t^\nu,
\end{equation*}
where $C_\nu=\binom{2\nu}{\nu}/{(\nu+1)}$ is the $\nu^{th}$ Catalan number, we expand $W$ as
\begin{equation}\label{the_sum}
W=W(x,y,z)=\sum_{\nu \geq 0} C_\nu\, \frac{(x-x^2y+x^2yz)^\nu}{(1+x-xy)^{2\nu+1}},
\end{equation}
whose terms are rational functions in $x$, $y$, and $z$, with no square roots involved.
We now expand each term of (\ref{the_sum}) using a multinomial or negative binomial expansion.
First, the multinomial expansion of the numerator takes the form
\begin{equation}\label{expansion1}
(x-x^2y+x^2yz)^\nu = \sum_{i+j+m=\nu}\frac{\nu!}{i!\, j!\, m!}x^i(-x^2y)^j(x^2yz)^m=\sum_{i+j+m=\nu}\frac{(-1)^j\nu!}{i!\, j!\, m!}x^{i+2j+2m}y^{j+m}z^m.
\end{equation}
Next, the negative binomial expansion applied to the denominator gives
\begin{equation}\label{expansion2}
(1+x-yx)^{-(2\nu+1)}=\sum_s \frac{(2\nu+s)!}{(2\nu)!\, s!} (-x+yx)^s= \sum_{\ell,r}\frac{(2\nu+\ell+r)!}{(2\nu)!\, \ell !\, r!}(-1)^\ell y^rx^{\ell+r}.
\end{equation}
Then, multiplying (\ref{expansion1}) and (\ref{expansion2}), summing with respect to $\nu$, and taking into account that by (\ref{expansion1}) we have $\nu=i+j+m$, Equation (\ref{the_sum}) becomes
\begin{equation}\label{long_expansion}
W(x,y,z)=\sum_{i,j,m,\ell,r} \frac{(2i+2j+2m+\ell+r)!}{(i+j+m+1)!\, i!\, j!\, m!\, \ell!\, r!}(-1)^{j+\ell}x^{i+2j+2m+\ell+r}y^{j+m+r}z^m.
\end{equation}
Collecting terms in (\ref{long_expansion}), we get
\begin{align}
w_{n,k,m} &= \!\!\!\sum_{\substack{i+2j+2m+\ell+r=n\\ j+m+r=k} } \frac{(2i+2j+2m+\ell+r)!}{(i+j+m+1)!\, i!\, j!\, m!\, \ell!\, r!}(-1)^{j+\ell} \nonumber \\
&=\frac{1}{m!}\!\sum_{\substack{i+2j+2m+\ell+r=n\\j+m+r=k}} \frac{(-1)^{j+\ell}(2i+2j+2m+\ell+r)!}{(i+j+m+1)!\, i!\, j!\, \ell!\, r!}.\label{w2}
\end{align}
Now, solving the system $\{i+2j+2m+\ell+r=n, j+m+r=k\}$ for $\ell, r\geq0$, the indices $\ell$ and $r$ must satisfy the conditions
\begin{equation}\label{cond_ell_r}
\ell=n-k-m-i-j \geq 0 \quad \text{and} \quad r=k-m-j \geq 0,
\end{equation}
while $i,j\geq 0$.
This implies that $n$, $k$, and $m$ must satisfy the inequalities
\begin{equation}\label{ineq}
m\leq k \quad \text{and} \quad k+m \leq n
\end{equation}
(otherwise the sum is empty and has value $0$).
Substituting in (\ref{w2}) the values of $\ell$ and $r$ given by (\ref{cond_ell_r}), the coefficient $w_{n,k,m}$ is expressed as the double sum
\begin{equation}\label{double_sum}
w_{n,k,m} = \frac{1}{m!}\sum_{\substack{i+j\leq n-k-m \\j \leq k-m}} \frac{(-1)^{n-k-m-i}(n+i)!}{(i+j+m+1)!\, i!\, j!\, (n-k-m-i-j)!\, (k-m-j)!}.
\end{equation}
This double sum can be greatly simplified using the Maple procedure ``\texttt{sum}'' as follows.
Since the factorials in the denominators in (\ref{double_sum}) are factorials of negative integers when $i,j$ are big enough to be out of range (so their reciprocals vanish), the corresponding terms in the double sum vanish.
Hence, we can take the ranges {\texttt{i=0..infty}} and {\texttt{j=0..infty}} in the double sum.
Typing the Maple command
\vspace{0.2cm}
\noindent \texttt{> w[n,k,m] = 1/m!*sum(((-1)**(n-k-m-i)*(i+n)!/i!)*\\
sum(1/(i+j+m+1)!/j!/(n-i-j-k-m)!/(k-m-j)!,j=0..infinity),i=0..infinity);}
\vspace{0.2cm}\\
\noindent produces the output
\begin{equation*}\label{output}
w_{n,k,m} = -{\frac { \left( -1 \right) ^{n-k-m}n!\, \left( m+1 \right) m\,\sin
\left( \pi\,k-\pi\,n \right) }{m!\, \left( m+1 \right) !\, \left( n-k
-m \right) !\, \left( k-m \right) !\,\sin \left( \pi\,m \right)
\left( k-n \right) \left( k-1-n \right) }},
\end{equation*}
which we rewrite in the form
\begin{equation}\label{wmnp+}
w_{n,k,m} = {\frac { \left( -1 \right) ^{n-k-m}n!\, \left( m+1 \right) }{m!\, \left( m+1 \right) !\, \left( n-k
-m \right) !\, \left( k-m \right) !\, \left( n-k+1 \right) }}\cdot q(n,k,m)
\end{equation}
where $q(n,k,m)$ takes the indeterminate form
\begin{equation*}
q(n,k,m)=\frac{m\sin(\pi (n-k))}{\sin(\pi m)(n-k)}=\frac{0}{0},
\end{equation*}
\noindent since $n$, $k$, and $m$ are nonnegative integers. We ``lift'' this indeterminacy by making use of the limits
\begin{align}
\lim_{t \rightarrow 0}\frac{t}{\sin(\pi t)}&=\frac{1}{\pi} \label{lim_i} \quad \text{and}\\
\lim_{t \rightarrow \pi}\frac{\sin(kt)}{\sin(\ell t)}&=(-1)^{k-\ell}\,\frac{k}{\ell}, \quad k \in \mathbb{Z}, \,\, 0\neq \ell \in \mathbb{Z}.\label{lim_ii}
\end{align}
To complete the proof, we break into cases: \vspace{5bp}
\begin{itemize}
\item If $m=0$ and $k=n$, then using (\ref{lim_i}) twice, we get $q(k,k,0)=\frac{1}{\pi}\cdot \frac{\pi}{1}=1$. This implies, by (\ref{wmnp+}), that $w_{n,k,0}=1$ when $k=n$. \vspace{5bp}
\item If $m=0$ and $k \neq n$, then by (\ref{ineq}) we need only consider the case $k<n$. Hence, by (\ref{lim_i}) we get \[q(n,k,0)=\frac{1}{\pi}\cdot \frac{\sin(\pi(n-k))}{n-k}=0,\] since $\sin(\pi(n-k))=0$ and $n-k\neq 0$. This implies that $w_{n,k,0} = 0$ if $k \neq n$. \vspace{5bp}
\item If $m>0$ and $k+m\leq n$, then we must have $k < n$. Thus, by (\ref{lim_ii}) with $k=n-k$ and $\ell=m$, we have
\begin{equation*}
q(n,k,m)=\frac{m}{n-k}\cdot \frac{\sin(\pi(n-k))}{\sin(\pi m)}=\frac{m}{n-k}\cdot (-1)^{n-k-m}.
\end{equation*}
Substituting this value into (\ref{wmnp+}) yields
\begin{equation*}
w_{n,k,m}={\frac {n!}{m!\, \left( m-1 \right) !\, \left( n-k-m \right) !\,
\left( k-m \right) !\, \left( n-k \right) \left( n-k+1 \right) }},
\end{equation*}
which is equivalent to the desired expression in Theorem~\ref{t-explicit}. \qedhere
\end{itemize}
\end{proof}
In order to ``cross-check'' our formula in Theorem \ref{t-explicit}, we computed the first terms of $W(x,y,z)$ using two methods:
firstly, by making use of the explicit expression for the coefficients given in Theorem \ref{t-explicit}, and
secondly, by making use of the Maple ``\texttt{mtaylor}'' command applied to (\ref{quad_sol}).
Both methods gave the same results displayed in Table \ref{Tab_w}.
\begin{table}[ht]
\caption{The nonzero values of $w_{n,k,m}$ for $0\leq n\leq 10$.}\label{Tab_w}
\small
\begin{tabular}[t]{|c|c|}
\hline
$n,k,m$ & $w_{n,k,m}$\\ \hline \hline
0, 0, 0 & 1 \\ \hline
1, 1, 0 & 1 \\ \hline
2, 1, 1 & 1 \\
2, 2, 0 & 1 \\ \hline
3, 1, 1 & 1 \\
3, 2, 1 & 3 \\
3, 3, 0 & 1 \\ \hline
4, 1, 1 & 1 \\
4, 2, 1 & 4 \\
4, 2, 2 & 2 \\
4, 3, 1 & 6 \\
4, 4, 0 & 1 \\\hline
5, 1, 1 & 1 \\
5, 2, 1 & 5 \\
5, 2, 2 & 5 \\
5, 3, 1 & 10 \\
5, 3, 2 & 10 \\
5, 4, 1 & 10 \\
5, 5, 0 & 1 \\ \hline
6, 1, 1 & 1 \\
6, 2, 1 & 6 \\
6, 2, 2 & 9 \\
6, 3, 1 & 15 \\
6, 3, 2 & 30 \\
6, 3, 3 & 5 \\
6, 4, 1 & 20 \\
6, 4, 2 & 30 \\
6, 5, 1 & 15 \\
6, 6, 0 & 1 \\ \hline
\end{tabular}
\quad
\begin{tabular}[t]{|c|c|}
\hline
$n,k,m$ & $w_{n,k,m}$\\ \hline \hline
7, 1, 1 & 1 \\
7, 2, 1 & 7 \\
7, 2, 2 & 14 \\
7, 3, 1 & 21 \\
7, 3, 2 & 63 \\
7, 3, 3 & 21 \\
7, 4, 1 & 35 \\
7, 4, 2 & 105 \\
7, 4, 3 & 35 \\
7, 5, 1 & 35 \\
7, 5, 2 & 70 \\
7, 6, 1 & 21 \\
7, 7, 0 & 1 \\ \hline
8, 1, 1 & 1 \\
8, 2, 1 & 8 \\
8, 2, 2 & 20 \\
8, 3, 1 & 28 \\
8, 3, 2 & 112 \\
8, 3, 3 & 56 \\
8, 4, 1 & 56 \\
8, 4, 2 & 252 \\
8, 4, 3 & 168 \\
8, 4, 4 & 14 \\
8, 5, 1 & 70 \\
8, 5, 2 & 280 \\
8, 5, 3 & 140 \\
8, 6, 1 & 56 \\
8, 6, 2 & 140 \\
8, 7, 1 & 28 \\
8, 8, 0 & 1 \\ \hline
\end{tabular}
\quad
\begin{tabular}[t]{|c|c|}
\hline
$n,k,m$ & $w_{n,k,m}$\\ \hline \hline
9, 1, 1 & 1 \\
9, 2, 1 & 9 \\
9, 2, 2 & 27 \\
9, 3, 1 & 36 \\
9, 3, 2 & 180 \\
9, 3, 3 & 120 \\
9, 4, 1 & 84 \\
9, 4, 2 & 504 \\
9, 4, 3 & 504 \\
9, 4, 4 & 84 \\
9, 5, 1 & 126 \\
9, 5, 2 & 756 \\
9, 5, 3 & 756 \\
9, 5, 4 & 126 \\
9, 6, 1 & 126 \\
9, 6, 2 & 630 \\
9, 6, 3 & 420 \\
9, 7, 1 & 84 \\
9, 7, 2 & 252 \\
9, 8, 1 & 36 \\
9, 9, 0 & 1 \\ \hline
\end{tabular}
\quad
\begin{tabular}[t]{|c|c|}
\hline
$n,k,m$ & $w_{n,k,m}$\\ \hline \hline
10, 1, 1 & 1 \\
10, 2, 1 & 10 \\
10, 2, 2 & 35 \\
10, 3, 1 & 45 \\
10, 3, 2 & 270 \\
10, 3, 3 & 225 \\
10, 4, 1 & 120 \\
10, 4, 2 & 900 \\
10, 4, 3 & 1200 \\
10, 4, 4 & 300 \\
10, 5, 1 & 210 \\
10, 5, 2 & 1680 \\
10, 5, 3 & 2520 \\
10, 5, 4 & 840 \\
10, 5, 5 & 42 \\
10, 6, 1 & 252 \\
10, 6, 2 & 1890 \\
10, 6, 3 & 2520 \\
10, 6, 4 & 630 \\
10, 7, 1 & 210 \\
10, 7, 2 & 1260 \\
10, 7, 3 & 1050 \\
10, 8, 1 & 120 \\
10, 8, 2 & 420 \\
10, 9, 1 & 45 \\
10, 10, 0 & 1 \\ \hline
\end{tabular}\\
\end{table}
\section{Preliminaries for Combinatorial Proofs}\label{preliminaries}
In this section, we introduce the background material necessary for our combinatorial proofs in Sections \ref{combinatorial_proofs} and \ref{s-generalized}.
\subsection{Dyck paths and plane trees}
Recall that a \emph{Dyck path} of \emph{semilength} $n$ is a path in $\mathbb{Z}^2$ that begins at the origin, ends at $(2n,0)$, never goes below the horizontal axis, and consists of a sequence of \emph{up steps} $(1,1)$ and \emph{down steps} $(1,-1)$.
We can represent Dyck paths as \emph{Dyck words}: words $\pi$ on the alphabet $\{U,D\}$ with the same number of $U$s and $D$s, such that there are never more $D$s than $U$s in any prefix of $\pi$.
When we refer to a $UD$- or $UUD$-factor in a Dyck path, we really mean a factor in the corresponding Dyck word.
As mentioned in the introduction, Dyck paths are a classical example of Catalan objects, as the number of Dyck paths of semilength $n$ is the $n$th Catalan number.
\begin{figure}[b]
\begin{subfigure}[b]{.4\linewidth}
\centering
\begin{tikzpicture}[scale = .5, auto=center]
\draw[step=1.0, gray!100, thin] (0,0) grid (10, 4);
\draw[black!80, line width=2pt] (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,2) -- (5,1) -- (6,2) -- (7,3) -- (8,2) -- (9,1) -- (10,0) ;
\end{tikzpicture}
\caption{Dyck path of semilength $5$ with $3$ $UD$-factors and $2$ $UUD$-factors.}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{.4\linewidth}
\centering
\begin{tikzpicture}[scale=.5,auto=center, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (-1.5,-1.5) {};
\node[roundnode] (a2) at (1.5,-1.5) {};
\node[roundnode] (b1) at (0,-3) {};
\node[roundnode] (b2) at (3,-3) {};
\node[roundnode] (c1) at (3,-4.5) {};
\node[circle,draw, color = red] at (0, -3) {};
\node[circle,draw, color = red] at (3, -4.5) {};
\draw (root) -- (a1);
\draw (root) -- (a2);
\draw (a2) -- (b1);
\draw (a2) -- (b2);
\draw (b2) -- (c1);
\end{tikzpicture}
\caption{Plane tree on $5$ non-root vertices with $3$ leaves and $2$ good leaves (circled in red).}
\end{subfigure}
\caption{Dyck path and its corresponding plane tree under the map $\Phi$ defined in the proof of Proposition~\ref{prop:reformulation}.}
\end{figure}
Another family of Catalan objects are \emph{plane trees}, defined recursively in the following way.
In a plane tree $T$, one vertex $v$ of $T$ is designated as the \emph{root} of $T$, and $v$ is either the only vertex of $T$ or otherwise is connected to a sequence of subtrees $(T_1, T_2, \dots, T_j)$, each of which is a plane tree.
When drawing $T$, an edge is drawn from $v$ to the root of each of the subtrees $T_i$, with the $T_i$ drawn left-to-right in the order that they appear in the sequence $(T_1, T_2, \dots, T_j)$.
Then the number of plane trees with $n+1$ vertices (equivalently, $n$ non-root vertices) is the $n$th Catalan number.
We point out that every plane tree must have at least one vertex, namely the root.
Given a non-root vertex $v$ of a plane tree $T$, we say that $v$ is a \emph{leaf} of $T$ if it has no children.
In particular, the root is not considered to be a leaf if $T$ is the plane tree consisting solely of the root.
We say that a leaf $v$ is a \emph{good leaf} if $v$ is the oldest child (i.e., the left-most child) of a non-root vertex.
We will need the following interpretation of the numbers $w_{n,k,m}$ in terms of plane trees.
\begin{prop} \label{prop:reformulation}
The number $w_{n,k,m}$ counts plane trees with $n$ non-root vertices, $k$ leaves, and $m$ good leaves.
\end{prop}
\begin{proof}
First, note that every Dyck word $\pi$ is either empty or can be recursively decomposed as \[U\pi_1 DU\pi_2 D\cdots U\pi_j D,\] where each $\pi_i$ is a Dyck word.
Note that every $D$ in the above decomposition results in the path returning to the horizontal axis.
Thus, we can translate Dyck words to plane trees by sending the empty path to the plane tree consisting of exactly one vertex (the root) and by recursively sending each subword $\pi_i$ to the subtree $T_i$.
It is easily verified that this yields a bijective correspondence---call it $\Phi$---between Dyck words of semilength $n$ and plane trees with $n$ non-root vertices, so it remains to show that $\Phi$ preserves the appropriate parameters.
Let us induct on the semilength $n$ of our Dyck word.
Note that the parameters are preserved when we have an empty Dyck word, so let us assume that the desired result holds for Dyck words of semilength up to some fixed $n\geq0$, and take a Dyck word $\pi = U\pi_1 DU\pi_2 D\cdots U\pi_j D$ of semilength $n+1$.
In particular, this means that each $\pi_i$ has semilength at most $n$.
Let $(T_1,T_2,\dots,T_j)$ denote the sequence of subtrees in the corresponding plane tree $T=\Phi(\pi)$.
Observe that each $UD$-factor of $\pi$ is either a $UD$-factor in one of the $\pi_i$, or results from an empty $\pi_i$.
By the induction hypothesis, $UD$-factors in any one of the $\pi_i$ correspond precisely to leaves in $T_i$.
Furthermore, an empty $\pi_i$ is sent to a subtree $T_i$ consisting of only a root vertex, which is a leaf of $T$.
Every leaf of $T$ arises in one of these two ways, so the number of $UD$-factors is sent to the number of leaves.
Similarly, it is easy to check that the reverse procedure sends the number of leaves to the number of $UD$-factors.
Now, observe that each $UUD$-factor of $\pi$ is either a $UUD$-factor of one of the $\pi_i$, or results from a $\pi_i$ with a $UD$ prefix---that is, a $\pi_i$ beginning with a $UD$-factor.
By the induction hypothesis, $UUD$-factors in any one of the $\pi_i$ correspond precisely to good leaves in $T_i$.
Moreover, each $\pi_i$ with a $UD$ prefix is sent to a subtree $T_i$ in which the oldest child of the root of $T_i$ is a leaf, and is in fact a good leaf of $T$ because the root of $T_i$ is a non-root vertex of $T$.
Every good leaf of $T$ arises in one of these two ways, and it is straightforward to verify that the reverse procedure sends the number of good leaves to the number of $UUD$-factors.
\end{proof}
It is worth noting that the inverse $\Phi^{-1}$ of the bijection $\Phi$ used in the proof of Proposition \ref{prop:reformulation} has a simple non-recursive description: We perform a preorder traversal of a plane tree $T$, and record a $U$ step whenever we go down and record a $D$ step whenever we go up; the result is the Dyck path which $\Phi$ sends to $T$.
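This non-recursive description is straightforward to implement. The sketch below (assuming a nested-list encoding in which a plane tree is the list of its root's subtrees, with $[\,]$ denoting a single vertex) computes $\Phi^{-1}$ by preorder traversal and verifies the parameter correspondence of Proposition \ref{prop:reformulation} on the example tree of this section:

```python
def to_dyck(tree):
    """Phi^{-1} by preorder traversal: record U going down an edge, D coming back up."""
    return "".join("U" + to_dyck(child) + "D" for child in tree)

def count_leaves(tree):
    """Non-root vertices with no children."""
    return sum(count_leaves(child) if child else 1 for child in tree)

def count_good_leaves(tree, is_root=True):
    """Leaves that are the oldest (left-most) child of a non-root vertex."""
    total = 1 if (not is_root and tree and not tree[0]) else 0
    return total + sum(count_good_leaves(c, is_root=False) for c in tree)

# The example tree: a root with two children; the second child has two
# children of its own, the second of which has a single child.
t = [[], [[], [[]]]]
word = to_dyck(t)
assert word == "UDUUDUUDDD"
assert word.count("UD") == count_leaves(t) == 3
assert word.count("UUD") == count_good_leaves(t) == 2
```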
\subsection{Cyclic compositions and the cycle lemma}
Given a sequence $p = p_1 p_2 \cdots p_n$, we say that a sequence $p^\prime$ is a \emph{cyclic shift} (or \emph{cyclic rotation}) of $p$ if $p^\prime$ is of the form
\[p^\prime = p_i p_{i+1} \cdots p_n p_1 p_2 \cdots p_{i-1}\]
for some $1 \leq i \leq n$.
Let us write $p \sim p^\prime$ whenever $p$ and $p^\prime$ are cyclic shifts of each other.
Let $\mathsf{Comp}_{n,k}$ denote the set of all compositions of $n$ into $k$ parts, i.e., sequences of $k$ positive integers whose sum is $n$.
We define a \emph{cyclic composition} $[\mu]$ to be the equivalence class of a composition $\mu$ under cyclic shift.
Let $\mathsf{CComp}_{n,k}$ be the set of cyclic compositions consisting of compositions of $n$ into $k$ parts, which is well-defined because the number of parts of a composition and the sum of its parts are clearly invariant under cyclic shift.
Let us say that an element of $\mathsf{CComp}_{n,k}$ is a cyclic composition of $n$ into $k$ parts.
We define the \emph{order} of a cyclic composition $[\mu]$, denoted by $\mathsf{ord}[\mu]$, to be the number of representatives of $[\mu]$---that is, the number of distinct compositions that can be obtained from cyclically shifting $\mu$.
Note that applying $\mathsf{ord}[\mu]$ successive cyclic shifts to $\mu$ returns $\mu$ itself.
If $[\mu] \in \mathsf{CComp}_{n,k}$ has order $k$, then we say that $[\mu]$ is \emph{primitive}.
For any $[\mu] \in \mathsf{CComp}_{n,k}$, there exists a positive integer $d$ dividing both $n$ and $k$ such that $[\mu]$ is a \emph{concatenation} of $d$ copies of a primitive cyclic composition $[\nu] \in \mathsf{CComp}_{n/d,k/d}$, which means that there exists $\bar{\nu}\in[\nu]$ for which $\mu$ is a concatenation of $d$ copies of $\bar{\nu}$.
In this case, $\mathsf{ord}[\mu]=k/d=\mathsf{ord}[\nu]$.
(If $d=1$, then $[\mu]$ itself is primitive and is a concatenation of itself.)
For example, the cyclic composition $[1,2,1,1,2,1]$ is a concatenation of two copies of the primitive cyclic composition $[1,2,1]$, and both of these cyclic compositions have order 3.
It is easy to see that this decomposition of cyclic compositions into primitive cyclic compositions is unique.
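These notions are easy to compute: the order of $[\mu]$ is the number of distinct cyclic shifts of $\mu$, which equals the smallest period of $\mu$, and the primitive block is the prefix of that length. A Python sketch (function names are ours) illustrates this on the running example and on the relative-primality lemma below:

```python
def order(mu):
    """Order of the cyclic composition [mu]: the number of distinct cyclic shifts."""
    return len({tuple(mu[i:] + mu[:i]) for i in range(len(mu))})

def primitive_root(mu):
    """mu is a concatenation of len(mu)//order(mu) copies of this primitive block."""
    return mu[:order(mu)]

def compositions(n, k):
    """All compositions of n into k positive parts."""
    if k == 1:
        if n >= 1:
            yield [n]
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield [first] + rest

# The running example: [1,2,1,1,2,1] has order 3 and primitive block [1,2,1].
mu = [1, 2, 1, 1, 2, 1]
assert order(mu) == 3
assert primitive_root(mu) == [1, 2, 1] and primitive_root(mu) * 2 == mu

# Since gcd(7, 3) = 1, every cyclic composition of 7 into 3 parts is primitive.
assert all(order(nu) == 3 for nu in compositions(7, 3))
```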
\begin{lemma}\label{lemma:2k+1_primitive}
If $n$ and $k$ are relatively prime, then $[\mu] \in \mathsf{CComp}_{n,k}$ is primitive.
\end{lemma}
\begin{proof}
Let $[\mu] \in \mathsf{CComp}_{n,k}$.
Then $[\mu]$ can be uniquely decomposed as a concatenation of $d$ copies of a primitive cyclic composition, where $d$ is a common divisor of $n$ and $k$.
Since $n$ and $k$ are relatively prime, it follows that $d=1$, whence it follows that $[\mu]$ itself is primitive.
\end{proof}
The cycle lemma will play an important role in our proofs.
Given a positive integer $k$ and a sequence $p = p_{1}p_{2}\cdots p_{l}$ consisting only of $U$s and $D$s, we say that $p$ is \emph{$k$-dominating} if every prefix of $p$---that is, every sequence $p_{1}p_{2}\cdots p_{i}$ where $1 \leq i \leq l$---has more copies of $U$ than $k$ times the number of copies of $D$.
\begin{lemma}[Cycle lemma \cite{dvoretzky-motzkin}]
\label{lemma:cycle}
Let $k$ be a positive integer.
For any sequence $p = p_{1}p_{2}\cdots p_{m+n}$ consisting of $m$ copies of $U$ and $n$ copies of $D$, there are exactly $m-kn$ cyclic shifts of $p$ that are $k$-dominating.
\end{lemma}
We refer to \cite{dershowit-zaks} for a proof of the cycle lemma as well as some applications.
We note that Raney \cite{raney} showed that the cycle lemma is equivalent to the Lagrange inversion formula; Raney's proof was later generalized to the multivariate case by Bacher and Schaeffer \cite{bacher-schaeffer}.
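The cycle lemma can be verified exhaustively for small sequences; the following Python sketch (the predicate name is ours) checks it over all arrangements of five $U$s and two $D$s, counting cyclic shifts with multiplicity:

```python
from itertools import permutations

def k_dominating(p, k):
    """Every prefix has strictly more U's than k times its number of D's."""
    ups = downs = 0
    for step in p:
        ups += step == "U"
        downs += step == "D"
        if ups <= k * downs:
            return False
    return True

# Cycle lemma: among the m+n cyclic shifts (counted with multiplicity) of a
# sequence with m U's and n D's, exactly m - k*n are k-dominating.
for word in {"".join(p) for p in permutations("UUUUUDD")}:
    for k in (1, 2):
        m, n = word.count("U"), word.count("D")
        hits = sum(k_dominating(word[i:] + word[:i], k) for i in range(len(word)))
        assert hits == m - k * n
```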
\begin{cor}[of the cycle lemma]
\label{cor:reverse_cycle}
Any sequence of $k$ copies of $\bigcirc$ and $k+1$ copies of $\square$ has exactly one cyclic shift with no proper prefix having more $\square$s than $\bigcirc$s.
\end{cor}
\begin{proof}
Given any sequence $\lambda$ of $k$ copies of $\bigcirc$ and $k+1$ copies of $\square$, let $\tilde{\lambda}$ be the reverse sequence of $\lambda$---that is, the sequence consisting of the entries of $\lambda$ but in reverse order.
The cycle lemma guarantees that there is exactly one cyclic shift of $\tilde{\lambda}$ that is $1$-dominating.
The reverse sequence of this $1$-dominating cyclic shift is the cyclic shift of $\lambda$ that has no proper prefix having more $\square$s than $\bigcirc$s.
\end{proof}
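The uniqueness claim in Corollary~\ref{cor:reverse_cycle} can also be checked exhaustively for small $k$. In the sketch below (our own encoding), $\bigcirc$ and $\square$ are written as `'C'` and `'S'`.

```python
from itertools import permutations

def shifts_with_dominated_prefixes(word):
    """Count cyclic shifts of word in which no proper prefix
    contains more 'S' (squares) than 'C' (circles)."""
    total = 0
    for s in range(len(word)):
        shift = word[s:] + word[:s]
        c = sq = 0
        ok = True
        for x in shift[:-1]:          # proper prefixes only
            if x == 'C':
                c += 1
            else:
                sq += 1
            if sq > c:
                ok = False
                break
        total += ok
    return total

def check_corollary(k):
    """Every arrangement of k circles and k+1 squares has exactly
    one cyclic shift with no bad proper prefix."""
    words = {''.join(p) for p in permutations('C' * k + 'S' * (k + 1))}
    return all(shifts_with_dominated_prefixes(w) == 1 for w in words)
```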
\section{Combinatorial proofs}\label{combinatorial_proofs}
\subsection{Combinatorial proof of Theorem \ref{th:relationNandW}}
Now we focus on finding a combinatorial proof of Theorem~\ref{th:relationNandW}, which in turn will give a combinatorial proof of Theorem~\ref{th:main}.
See Figure~\ref{im:symmetry} for an example---using the plane tree interpretation of the numbers $w_{n,k,m}$---of the symmetry in Theorem~\ref{th:main} that we wish to prove.
\begin{figure}[ht]
\begin{subfigure}[c]{.5\linewidth}
\centering
\begin{tikzpicture}
[scale=.4, auto = center, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (-1, -1.5) {};
\node[roundnode] (a2) at (1,-1.5) {};
\node[roundnode] (a3) at (1,-3) {};
\node[roundnode] (a4) at (1,-4.5) {};
\node[roundnode] (a5) at (1,-6) {};
\node[circle,draw, color = red] at (1, -6) {};
\draw (root) -- (a1);
\draw (root) -- (a2);
\draw (a2) -- (a3);
\draw (a3) -- (a4);
\draw (a4) -- (a5);
\begin{scope}[shift={(3.5,0)}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (1, -1.5) {};
\node[roundnode] (a2) at (-1,-1.5) {};
\node[roundnode] (a3) at (-1,-3) {};
\node[roundnode] (a4) at (-1,-4.5) {};
\node[roundnode] (a5) at (-1,-6) {};
\node[circle,draw, color = red] at (-1, -6) {};
\draw (root) -- (a1);
\draw (root) -- (a2);
\draw (a2) -- (a3);
\draw (a3) -- (a4);
\draw (a4) -- (a5);
\end{scope}
\begin{scope}[shift={(7,0)}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (0, -1.5) {};
\node[roundnode] (a2) at (-1,-3) {};
\node[roundnode] (a3) at (1,-3) {};
\node[roundnode] (a4) at (-1,-4.5) {};
\node[roundnode] (a5) at (-1,-6) {};
\node[circle,draw, color = red] at (-1, -6) {};
\draw (root) -- (a1);
\draw (a1) -- (a2);
\draw (a1) -- (a3);
\draw (a2) -- (a4);
\draw (a4) -- (a5);
\end{scope}
\begin{scope}[shift={(10.5,0)}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (0, -1.5) {};
\node[roundnode] (a2) at (0,-3) {};
\node[roundnode] (a3) at (-1,-4.5) {};
\node[roundnode] (a4) at (1,-4.5) {};
\node[roundnode] (a5) at (-1,-6) {};
\node[circle,draw, color = red] at (-1, -6) {};
\draw (root) -- (a1);
\draw (a1) -- (a2);
\draw (a2) -- (a3);
\draw (a2) -- (a4);
\draw (a3) -- (a5);
\end{scope}
\begin{scope}[shift={(14,0)}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (0, -1.5) {};
\node[roundnode] (a2) at (0,-3) {};
\node[roundnode] (a3) at (0,-4.5) {};
\node[roundnode] (a4) at (-1,-6) {};
\node[roundnode] (a5) at (1,-6) {};
\node[circle,draw, color = red] at (-1, -6) {};
\draw (root) -- (a1);
\draw (a1) -- (a2);
\draw (a2) -- (a3);
\draw (a3) -- (a4);
\draw (a3) -- (a5);
\end{scope}
\end{tikzpicture}
\caption{$w_{5, 2, 1} = 5$.}
\end{subfigure}
\begin{subfigure}[c]{.45\linewidth}
\centering
\begin{tikzpicture}
[scale=.4, auto = center, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (-1, -1.5) {};
\node[roundnode] (a2) at (1,-1.5) {};
\node[roundnode] (a3) at (-1,-3) {};
\node[roundnode] (a4) at (1,-3) {};
\node[roundnode] (a5) at (1,-4.5) {};
\node[circle,draw, color = red] at (-1, -3) {};
\node[circle,draw, color = red] at (1, -4.5) {};
\draw (root) -- (a1);
\draw (root) -- (a2);
\draw (a1) -- (a3);
\draw (a2) -- (a4);
\draw (a4) -- (a5);
\begin{scope}[shift={(3.5,0)}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (1, -1.5) {};
\node[roundnode] (a2) at (-1,-1.5) {};
\node[roundnode] (a3) at (1,-3) {};
\node[roundnode] (a4) at (-1,-3) {};
\node[roundnode] (a5) at (-1,-4.5) {};
\node[circle,draw, color = red] at (1, -3) {};
\node[circle,draw, color = red] at (-1, -4.5) {};
\draw (root) -- (a1);
\draw (root) -- (a2);
\draw (a1) -- (a3);
\draw (a2) -- (a4);
\draw (a4) -- (a5);
\end{scope}
\begin{scope}[shift={(7,0)}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (0, -1.5) {};
\node[roundnode] (a2) at (-1,-3) {};
\node[roundnode] (a3) at (1, -3) {};
\node[roundnode] (a4) at (-1,-4.5) {};
\node[roundnode] (a5) at (1,-4.5) {};
\node[circle,draw, color = red] at (-1, -4.5) {};
\node[circle,draw, color = red] at (1, -4.5) {};
\draw (root) -- (a1);
\draw (a1) -- (a2);
\draw (a1) -- (a3);
\draw (a2) -- (a4);
\draw (a3) -- (a5);
\end{scope}
\begin{scope}[shift={(10.5,0)}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (0, -1.5) {};
\node[roundnode] (a2) at (-1,-3) {};
\node[roundnode] (a3) at (1,-3) {};
\node[roundnode] (a4) at (1,-4.5) {};
\node[roundnode] (a5) at (1,-6) {};
\node[circle,draw, color = red] at (-1, -3) {};
\node[circle,draw, color = red] at (1, -6) {};
\draw (root) -- (a1);
\draw (a1) -- (a2);
\draw (a1) -- (a3);
\draw (a3) -- (a4);
\draw (a4) -- (a5);
\end{scope}
\begin{scope}[shift={(14,0)}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (0, -1.5) {};
\node[roundnode] (a2) at (0,-3) {};
\node[roundnode] (a3) at (-1,-4.5) {};
\node[roundnode] (a4) at (1,-4.5) {};
\node[roundnode] (a5) at (1,-6) {};
\node[circle,draw, color = red] at (-1, -4.5) {};
\node[circle,draw, color = red] at (1, -6) {};
\draw (root) -- (a1);
\draw (a1) -- (a2);
\draw (a2) -- (a3);
\draw (a2) -- (a4);
\draw (a4) -- (a5);
\end{scope}
\end{tikzpicture}
\caption{$w_{5, 2, 2} = 5$.}
\end{subfigure}
\caption{Plane trees on $5$ non-root vertices with $2$ leaves with the good leaves circled.}
\label{im:symmetry}
\end{figure}
Our proof relies mainly on two key lemmas.
The first lemma gives a combinatorial interpretation for Narayana numbers in terms of cyclic compositions.
\begin{lemma}
\label{lemma:part1_path}
Let $k \geq 1$ and $m \geq 0$.
Then the Narayana number $N_{k,m}$ is the number of cyclic compositions of $2k+1$ into $k$ parts such that exactly $m$ parts are at least 2.
\end{lemma}
\begin{proof}
We will give a bijective map that takes a cyclic composition of $2k+1$ into $k$ parts, exactly $m$ of which are at least $2$, to a Dyck path of semilength $k$ with $m$ $UD$-factors; such Dyck paths are counted by the Narayana number $N_{k,m}$.
Given a cyclic composition $[\mu_{1},\mu_{2},\ldots, \mu_{k}]$ of $2k+1$ with exactly $m$ parts that are at least $2$, consider the word $U^{\mu_1-1}D U^{\mu_2-1}D \cdots U^{\mu_k-1}D$, which has $k+1$ copies of $U$ and $k$ copies of $D$.
By the cycle lemma, there is exactly one cyclic shift of this word that is $1$-dominating---that is, with more $U$s than $D$s in every prefix.
Then the first two entries of this $1$-dominating sequence are necessarily $U$s.
Removing the first $U$, we obtain a Dyck path of semilength $k$ with exactly $m$ $UD$-factors.
It is easily verified that the inverse procedure is as follows: from a Dyck path of semilength $k$ with $m$ $UD$-factors, we form the sequence $(a_1, a_2, \dots, a_k)$, where $a_i$ is the number of $U$s that immediately precede the $i$th $D$.
For example, from $UDUUDDUD$ we get the sequence $(1,2,0,1)$.
Then we add $2$ to $a_1$ and $1$ to each other $a_i$, forming a composition $\mu$ of $2k+1$ into $k$ parts, exactly $m$ of which are at least $2$.
Taking the cyclic composition $[\mu]$ completes the inverse.
\end{proof}
We note that the map used in the proof of Lemma \ref{lemma:part1_path} is related to the standard bijection between \L ukasiewicz paths and Dyck paths.
A \emph{\L ukasiewicz path} of length $n$ is a path in $\mathbb{Z}^2$ with step set $\{(1,-1),(1,0),(1,1),(1,2), \ldots \}$, starting from $(0,0)$ and ending at $(n,0)$, that never passes below the $x$-axis; these paths were introduced in relation to the preorder degree sequence of a plane tree, which determines the tree unambiguously \cite[Chapter 1.5]{flajolet-sedgewick}.
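The bijection in the proof of Lemma~\ref{lemma:part1_path} can be made concrete. The sketch below (function names ours) implements both directions and reproduces the worked example, where the Dyck path $UDUUDDUD$ gives the run sequence $(1,2,0,1)$ and hence the composition $(3,3,1,2)$ of $9 = 2\cdot 4 + 1$.

```python
def comp_to_dyck(mu):
    """From a composition mu of 2k+1 into k parts, form the word
    U^(mu_1 - 1) D ... U^(mu_k - 1) D, take its unique 1-dominating
    cyclic shift, and drop the leading U to obtain a Dyck path."""
    word = ''.join('U' * (m - 1) + 'D' for m in mu)
    for s in range(len(word)):
        shift = word[s:] + word[:s]
        u = d = 0
        ok = True
        for c in shift:
            if c == 'U':
                u += 1
            else:
                d += 1
            if u <= d:
                ok = False
                break
        if ok:
            return shift[1:]

def dyck_to_comp(path):
    """Inverse map: a_i is the number of U's immediately preceding the
    i-th D; then add 2 to a_1 and 1 to every other a_i."""
    runs, run = [], 0
    for step in path:
        if step == 'U':
            run += 1
        else:
            runs.append(run)
            run = 0
    return tuple(a + (2 if i == 0 else 1) for i, a in enumerate(runs))
```

For example, `dyck_to_comp("UDUUDDUD")` returns `(3, 3, 1, 2)`, and `comp_to_dyck` applied to any cyclic shift of that composition returns the same path `"UDUUDDUD"`, reflecting that the map is defined on cyclic compositions.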
The statement and proof of our second lemma are more involved, and will require the notion of ``extended leaves'' and the decomposition of a plane tree into extended leaves.
\begin{dfn}
An \emph{extended leaf} is an unlabeled path graph with exactly one end-vertex designated as the \emph{leaf}.\footnote{A path graph has two end-vertices, both of which are typically considered leaves, but in an extended leaf, we only think of one of them as being a leaf.}
The length of an extended leaf $E$, denoted by $\ell(E)$, is the number of edges in $E$.
\end{dfn}
Let $v_i$ be the $i$th leaf, as read from left to right, of a plane tree $T$ with $k$ leaves.
Let us now describe the \emph{extended leaf decomposition} of $T$, which is obtained as follows.
For each leaf $v_i$, we trace the path from $v_i$ to the closer of the two:
\begin{enumerate}
\item the root, or
\item the closest ancestor of $v_i$ that has two or more children, where $v_i$ is neither the oldest of those children nor a descendant of the oldest child.
\end{enumerate}
This path is the extended leaf $E_i$.
In other words, to find $E_i$, we start at the leaf $v_i$ and then trace the path from $v_i$ toward the root until we reach a vertex $a$ that has another child to the left of the path; if no such $a$ exists, we take $a$ to be the root.
The path from $v_i$ to $a$ is the extended leaf $E_i$.
The sequence $E_1E_2\cdots E_k$ is the extended leaf decomposition of $T$; it is not difficult to see that this decomposition is unique.
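The extended leaf decomposition is easy to compute. In the sketch below (the nested-list encoding is ours), a plane tree is represented as the list of its child subtrees, and the function returns the lengths $\ell(E_1),\ldots,\ell(E_k)$ in left-to-right leaf order: an edge to an oldest (leftmost) child extends the current extended leaf, while an edge to any other child starts a new one at that vertex.

```python
def extended_leaf_lengths(tree):
    """A plane tree is encoded as a nested list: a vertex is the list
    of its child subtrees, with [] for a leaf.  Returns the lengths of
    the extended leaves in left-to-right leaf order."""
    lengths = []

    def walk(node, edges):
        if not node:                 # reached the leaf of the current E_i
            lengths.append(edges)
            return
        for j, child in enumerate(node):
            # leftmost child extends the path; others start a fresh E_i
            walk(child, edges + 1 if j == 0 else 1)

    walk(tree, 0)
    return lengths
```

On the tree of Figure~\ref{im:ext_leaf}, encoded as `[[[[], [[]]], [[[]], []]]]`, this returns `[3, 2, 3, 1]`, matching $\ell(E_1)=3$, $\ell(E_2)=2$, $\ell(E_3)=3$, $\ell(E_4)=1$.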
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[c]{.2\textwidth}
\begin{tikzpicture}[scale=.44, auto = center, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (0, -1.5) {};
\node[roundnode] (b1) at (-2,-3) {};
\node[roundnode] (b2) at (2,-3) {};
\node[roundnode] (c1) at (-3,-4.5) {};
\node[roundnode] (c2) at (-1,-4.5) {};
\node[roundnode] (c3) at (1,-4.5) {};
\node[roundnode] (c4) at (3,-4.5) {};
\node[roundnode] (d1) at (-1,-6) {};
\node[roundnode] (d2) at (1,-6) {};
\node (v1) at (-3.5, -5) {$v_{1}$};
\node (v2) at (-1.5, -6.5) {$v_{2}$};
\node (v3) at (1.5, -6.5) {$v_{3}$};
\node (v4) at (3.5, -5) {$v_{4}$};
\draw (root) -- (a1);
\draw (a1) -- (b1);
\draw (b1) -- (c1);
\draw (b1) -- (c2);
\draw (c2) -- (d1);
\draw (a1) -- (b2);
\draw (b2) -- (c3);
\draw (c3) -- (d2);
\draw (b2) -- (c4);
\end{tikzpicture}
\end{subfigure}
\hspace{.1cm} $\longrightarrow$
\begin{subfigure}[c]{.2\textwidth}
\begin{tikzpicture}[scale=.44, auto = center, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (root) at (0,0) {};
\node[roundnode] (a1) at (0, -1.5) {};
\node[roundnode] (b1) at (-2,-3) {};
\node[roundnode] (b2) at (2,-3) {};
\node[triangnode] (c1) at (-3,-4.5) {};
\node[roundnode] (c2) at (-1,-4.5) {};
\node[roundnode] (c3) at (1,-4.5) {};
\node[triangnode] (c4) at (3,-4.5) {};
\node[triangnode] (d1) at (-1,-6) {};
\node[triangnode] (d2) at (1,-6) {};
\node (v1) at (-3.5, -5) {$v_{1}$};
\node (v2) at (-1.5, -6.5) {$v_{2}$};
\node (v3) at (1.5, -6.5) {$v_{3}$};
\node (v4) at (3.5, -5) {$v_{4}$};
\draw[color = red] (root) -- (a1);
\draw[color = red] (a1) -- (b1);
\draw[color = red] (b1) -- (c1);
\draw[color = yellow!80] (b1) -- (c2);
\draw[color = yellow!80] (c2) -- (d1);
\draw[color = cyan] (a1) -- (b2);
\draw[color = cyan] (b2) -- (c3);
\draw[color = cyan] (c3) -- (d2);
\draw[color = teal] (b2) -- (c4);
\end{tikzpicture}
\end{subfigure}
\hspace{.3cm} $\longrightarrow$ \hspace{.2cm}
\begin{subfigure}[c]{.25\textwidth}
\begin{tikzpicture}[scale=.44,auto=center, roundnode/.style={circle, fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (0,-3) {};
\node[triangnode] (a4) at (0,-4.5) {};
\node[roundnode] (b1) at (2,0) {};
\node[roundnode] (b2) at (2, -1.5) {};
\node[triangnode] (b3) at (2,-3) {};
\node[roundnode] (c1) at (4,0) {};
\node[roundnode] (c2) at (4,-1.5) {};
\node[roundnode] (c3) at (4,-3) {};
\node[triangnode] (c4) at (4,-4.5) {};
\node[roundnode] (d1) at (6,0) {};
\node[triangnode] (d2) at (6,-1.5) {};
\node (v1) at (0, -5.2) {$E_{1}$};
\node (v2) at (2, -3.7) {$E_{2}$};
\node (v3) at (4, -5.2) {$E_{3}$};
\node (v4) at (6, -2.2) {$E_{4}$};
\draw[color = red] (a1) -- (a2) -- (a3) -- (a4);
\draw[color = yellow!80] (b1) -- (b2) -- (b3);
\draw[color = cyan] (c1) -- (c2) -- (c3) -- (c4);
\draw[color = teal] (d1) -- (d2);
\end{tikzpicture}
\end{subfigure}
\end{center}
\caption{Decomposition of a plane tree into its extended leaves.}
\label{im:ext_leaf}
\end{figure}
\begin{exm}
Figure \ref{im:ext_leaf} shows a decomposition of a plane tree into $4$ extended leaves, $E_1$ (red), $E_2$ (orange), $E_3$ (blue), and $E_4$ (green), ordered from left to right with $\ell(E_1)=3$, $\ell(E_2)=2$, $\ell(E_3)=3$, and $\ell(E_4)=1$.
The triangular nodes represent the leaves of the extended leaves.
\end{exm}
We define a \emph{necklace of extended leaves} (or simply a \emph{necklace}) to be the equivalence class of a sequence of extended leaves under cyclic shift.
Often it is more convenient for us to view a necklace as simply a collection of extended leaves with a given cyclic order; it will be clear from context when we do so.
Let $\mathsf{Neck}_{n,k}$ denote the set of all necklaces with $k$ extended leaves and a total of $n$ non-leaf vertices.
Note that the total number of edges in each such necklace is also $n$.
Let $\psi$ be the map taking a composition $(\mu_1,\mu_2,\dots,\mu_k)$ of $n$ to the sequence $E_1 E_2\cdots E_k$ of extended leaves where $\ell(E_i)=\mu_i$ for each $i$.
It is easy to see that $\psi$ is a bijection between compositions of $n$ with $k$ parts and sequences of $k$ extended leaves with a total of $n$ edges; moreover, $\psi$ induces a bijection---which we also denote $\psi$ by a slight abuse of notation---from $\mathsf{CComp}_{n,k}$ to $\mathsf{Neck}_{n,k}$.
To be precise, the necklace $\psi[\mu]$ is the equivalence class of $\psi(\bar{\mu})$ for any $\bar{\mu} \in [\mu]$, which clearly does not depend on the choice of representative.
We define a \emph{marking} of a necklace of extended leaves $[E_{1}, \ldots, E_{k}]$ to be the necklace $[E_{1}, \ldots, E_{k}]$ with $k-1$ non-leaf vertices marked.
If $[\mu]$ is primitive---that is, if $\mathsf{ord}[\mu]=k$---then it is easy to see that the necklace $\psi[\mu]$ has $\binom{n}{k-1}$ distinct markings.\footnote{The skeptical reader may wish to consult Lemma~\ref{lemma:markednecksize}, proven later, which is a more general result from which this claim follows as a special case.}
\begin{lemma}\label{lemma:part2_trees}
Let $k\geq 1$. Given a cyclic composition $[\mu] \in \mathsf{CComp}_{2k+1,k}$, there are exactly $\binom{2k+1}{k-1}$ plane trees whose extended leaf decomposition belongs to the necklace $\psi[\mu] \in \mathsf{Neck}_{2k+1,k}$.
\end{lemma}
\begin{proof}
Let $[\mu]=[\mu_1,\mu_2,\dots,\mu_k] \in \mathsf{CComp}_{2k+1,k}$ so that $\psi[\mu]=[E_1,E_2,\ldots,E_k]$ is a necklace of $k$ extended leaves with $\ell(E_i)=\mu_i$ for each $i$ and with a total of $2k+1$ non-leaf vertices.
Since $2k+1$ and $k$ are relatively prime, Lemma \ref{lemma:2k+1_primitive} implies that $[\mu]$ is primitive, so $\psi[\mu]$ has $\binom{2k+1}{k-1}$ distinct markings.
We will show that each marking of $\psi[\mu]$ determines a unique plane tree whose extended leaf decomposition belongs to the necklace $\psi[\mu]$.
Consequently, we will have $\binom{2k+1}{k-1}$ plane trees that correspond to $\psi[\mu]$.
Given a marking of $\psi[\mu]$, we record the $k-1$ marked vertices and $k$ extended leaves using a sequence of $\bigcirc$s and $\square$s as follows.
We start with any extended leaf in the necklace $\psi[\mu]$.
First record a $\bigcirc$ for each marked vertex on this extended leaf, and then record a $\square$ for this extended leaf.
We do the same for the next extended leaf in the cyclic order of $\psi[\mu]$, and this process is repeated until we have traversed through all the marked vertices and extended leaves in $\psi[\mu]$.
We now have a sequence of $k-1$ copies of $\bigcirc$ and $k$ copies of $\square$.
It then follows from Corollary~\ref{cor:reverse_cycle} that there is exactly one cyclic shift $\sigma=\sigma_1 \sigma_2 \cdots \sigma_{2k-1}$ of this sequence whose every proper prefix has at least as many $\bigcirc$s as $\square$s.
Note that $\sigma_1=\bigcirc$ and $\sigma_{2k-2}\sigma_{2k-1}=\square\square$.
Then we obtain from $\sigma$ a sequence $E_1E_2\cdots E_k$ of extended leaves by taking $E_1$ to be the extended leaf containing the marked vertex corresponding to $\sigma_1$, and proceeding in accordance with the cyclic order of $\psi[\mu]$.
We will build a plane tree using the sequence $E_1E_2\cdots E_k$ in the following manner.
Henceforth, we make use of the term ``\emph{top vertex}'' to refer to the vertex on an extended leaf furthest from its leaf, i.e., the other end-vertex of that extended leaf.
\begin{enumerate}
\item Take the root of our tree to be the top vertex of $E_1$.
\item Take the marked vertex on $E_1$ that is furthest from the root---call it $v_1$---and attach the next extended leaf $E_2$ to $E_1$ by identifying the top vertex of $E_2$ with $v_1$.
\item Remove the mark of $v_1$. The partially-built tree currently has two extended leaves $E_1$ and $E_2$.
\item Attach the next extended leaf $E_i$ to the tree by identifying the top vertex of $E_i$ with the unused marked vertex that is furthest from the root on the current partially-built tree.
\item Remove the mark of that vertex after attaching $E_i$.
\item Repeat $(4)$ and $(5)$ until we have attached all $k$ extended leaves.
\end{enumerate}
\noindent There will always be at least one unused marked vertex on a partially-built tree to indicate where the next extended leaf should be attached, because the number of marked vertices (the $\bigcirc$s) will always be at least the number of extended leaves (the $\square$s) that need attaching.
Also, note that since we always attach at the marked vertex that is furthest from the root, at any stage the marked vertices of a partially-built tree all lie on the path connecting the root to the right-most leaf; thus, the marked vertex that is furthest from the root is unique.
The $k-1$ marked vertices determine how the extended leaves $E_1, E_2,\dots, E_k$ are put together, forming a plane tree $T$ having the extended leaf decomposition $E_1E_2\cdots E_k$, which belongs to the necklace $\psi[\mu]$.
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[c]{.3\textwidth}
\begin{tikzpicture}[scale=.4, roundnode/.style={circle, fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (0,-3) {};
\node[triangnode] (a4) at (0,-4.5) {};
\node[circle,draw, color = red] at (0, 0) {};
\node[circle,draw, color = red] at (0, -1.5) {};
\node[roundnode] (b1) at (2,0) {};
\node[roundnode] (b2) at (2, -1.5) {};
\node[triangnode] (b3) at (2,-3) {};
\node[circle,draw, color = red] at (2, -1.5) {};
\node[roundnode] (c1) at (4,0) {};
\node[roundnode] (c2) at (4,-1.5) {};
\node[roundnode] (c3) at (4,-3) {};
\node[triangnode] (c4) at (4,-4.5) {};
\node[roundnode] (d1) at (6,0) {};
\node[triangnode] (d2) at (6,-1.5) {};
\node (v1) at (0, -5.2) {$E_{1}$};
\node (v2) at (2, -3.7) {$E_{2}$};
\node (v3) at (4, -5.2) {$E_{3}$};
\node (v4) at (6, -2.2) {$E_{4}$};
\draw[color = red] (a1) -- (a2) -- (a3) -- (a4);
\draw[color = yellow!80] (b1) -- (b2) -- (b3);
\draw[color = cyan] (c1) -- (c2) -- (c3) -- (c4);
\draw[color = teal] (d1) -- (d2);
\draw[dashed, gray!80, thin, ->] (6.5, -.75) to [out= 10,in=0, min distance = 2.5cm, looseness = 2] (3, 2);
\draw[dashed, gray!80, thin] (3, 2) to [out= 180,in=170, min distance = 2.5cm, looseness = 2] (-.5, -.75);
\draw[dashed, gray!80, thin] (.5, -.75) -- (1.5, -.75);
\draw[dashed, gray!80, thin] (2.5, -.75) -- (3.5, -.75);
\draw[dashed, gray!80, thin] (4.5, -.75) -- (5.5, -.75);
\end{tikzpicture}
\end{subfigure}
$\longrightarrow$ \hspace{.2cm}
\begin{subfigure}[c]{.07\textwidth}
\begin{tikzpicture}[scale=.4, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (-1,-3) {};
\node[triangnode] (a4) at (-1,-4.5) {};
\node[roundnode] (b2) at (1,-3) {};
\node[triangnode] (b3) at (1,-4.5) {};
\node[circle,draw, color = red] at (0, 0) {};
\node[circle,draw, color = red] at (1, -3) {};
\draw[color = red] (a1) -- (a2);
\draw[color = red] (a2) -- (a3);
\draw[color = red] (a3) -- (a4);
\draw[color = yellow!80] (a2) -- (b2);
\draw[color = yellow!80] (b2) -- (b3);
\end{tikzpicture}
\end{subfigure}
\hspace{.1cm} $\longrightarrow$ \hspace{.2cm}
\begin{subfigure}[c]{.1\textwidth}
\begin{tikzpicture}[scale=.4, auto = center, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (-1,-3) {};
\node[triangnode] (a4) at (-1,-4.5) {};
\node[roundnode] (b2) at (1.5,-3) {};
\node[triangnode] (b3) at (.75,-4.5) {};
\node[roundnode] (c2) at (2.25,-4.5) {};
\node[roundnode] (c3) at (2.25,-6) {};
\node[triangnode] (c4) at (2.25,-7.5) {};
\node[circle,draw, color = red] at (0, 0) {};
\draw[color = red] (a1) -- (a2);
\draw[color = red] (a2) -- (a3);
\draw[color = red] (a3) -- (a4);
\draw[color = yellow!80] (a2) -- (b2);
\draw[color = yellow!80] (b2) -- (b3);
\draw[color = cyan] (b2) -- (c2);
\draw[color = cyan] (c2) -- (c3);
\draw[color = cyan] (c3) -- (c4);
\end{tikzpicture}
\end{subfigure}
\hspace{.1cm} $\longrightarrow$ \hspace{.2cm}
\begin{subfigure}[c]{.1\textwidth}
\begin{tikzpicture}[scale=.4, auto = center, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (1,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (-1,-3) {};
\node[triangnode] (a4) at (-1,-4.5) {};
\node[roundnode] (b2) at (1.5,-3) {};
\node[triangnode] (b3) at (.75,-4.5) {};
\node[roundnode] (c2) at (2.25,-4.5) {};
\node[roundnode] (c3) at (2.25,-6) {};
\node[triangnode] (c4) at (2.25,-7.5) {};
\node[triangnode] (d2) at (2,-1.5) {};
\draw[color = red] (a1) -- (a2);
\draw[color = red] (a2) -- (a3);
\draw[color = red] (a3) -- (a4);
\draw[color = yellow!80] (a2) -- (b2);
\draw[color = yellow!80] (b2) -- (b3);
\draw[color = cyan] (b2) -- (c2);
\draw[color = cyan] (c2) -- (c3);
\draw[color = cyan] (c3) -- (c4);
\draw[color = teal] (a1) -- (d2);
\end{tikzpicture}
\end{subfigure}
\end{center}
\caption{Building a plane tree using a marked necklace of extended leaves.}
\label{im:buildtree}
\end{figure}
Figure \ref{im:buildtree} illustrates the process of building a tree from a marking of a necklace with four extended leaves.
On the left is a marked necklace.
The marked vertices are circled and the leaf of each $E_i$ is denoted by a triangular node.
The top vertex of $E_1$ will be the root of this tree. Then $E_2$ is attached to the marked vertex on $E_1$ that is furthest from the root, and we remove the mark after attaching $E_2$ (see the second step in Figure \ref{im:buildtree}).
Now on the partially-built tree consisting of $E_1$ and $E_2$, the furthest unused marked vertex from the root is the one on $E_2$.
In the third step, $E_3$ is attached to that marked vertex, leaving only one unused marked vertex, which is where we attach $E_4$ in the last step. Because the choice of $E_1$ is unique, we can only build one plane tree from our marked necklace.
Conversely, consider a plane tree whose extended leaf decomposition $E_1E_2\cdots E_k$ belongs to $\psi[\mu] \in \mathsf{Neck}_{2k+1,k}$.
Following the detaching process that is described below, one can retrieve a unique marking of $\psi[\mu]$, which is the marked necklace that determines this tree via the procedure already described above.
We detach extended leaves one-by-one from the last (right-most) extended leaf to the first (left-most) extended leaf.
For each $E_i$, let $j$ be the largest index less than $i$ such that $E_j$ and $E_i$ share a common vertex.
When $E_i$ is detached, the vertex on $E_j$ that is shared with $E_i$ is marked on $E_j$.
Note that this shared vertex is necessarily the top vertex of $E_i$.
All marked vertices are kept on the extended leaf when detaching that extended leaf.
We will mark one vertex when detaching each of the $k-1$ extended leaves $E_2, E_3,\ldots, E_k$, which yields a marking of the necklace $\psi[\mu]$.
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[c]{.16\textwidth}
\begin{tikzpicture}[scale=.4, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (-2.5,-3) {};
\node[triangnode] (a4) at (-2.5,-4.5) {};
\node[roundnode] (b2) at (0,-3) {};
\node[triangnode] (b3) at (-1,-4.5) {};
\node[roundnode] (c2) at (1,-4.5) {};
\node[triangnode] (c3) at (1,-6) {};
\node[triangnode] (d2) at (2.5,-3) {};
\draw[color = red] (a1) -- (a2);
\draw[color = red] (a2) -- (a3);
\draw[color = red] (a3) -- (a4);
\draw[color = yellow!80] (a2) -- (b2);
\draw[color = yellow!80] (b2) -- (b3);
\draw[color = cyan] (b2) -- (c2);
\draw[color = cyan] (c2) -- (c3);
\draw[color = teal] (a2) -- (d2);
\end{tikzpicture}
\end{subfigure}
$\longrightarrow$ \hspace{.1cm}
\begin{subfigure}[c]{.14\textwidth}
\begin{tikzpicture}[scale=.4, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (-2.5,-3) {};
\node[triangnode] (a4) at (-2.5,-4.5) {};
\node[roundnode] (b2) at (0,-3) {};
\node[triangnode] (b3) at (-1,-4.5) {};
\node[roundnode] (c2) at (1,-4.5) {};
\node[triangnode] (c3) at (1,-6) {};
\node[roundnode] (d1) at (1.5,0) {};
\node[triangnode] (d2) at (1.5,-1.5) {};
\node[circle,draw, color = red] at (0, -1.5) {};
\draw[color = red] (a1) -- (a2);
\draw[color = red] (a2) -- (a3);
\draw[color = red] (a3) -- (a4);
\draw[color = yellow!80] (a2) -- (b2);
\draw[color = yellow!80] (b2) -- (b3);
\draw[color = cyan] (b2) -- (c2);
\draw[color = cyan] (c2) -- (c3);
\draw[color = teal] (d1) -- (d2);
\end{tikzpicture}
\end{subfigure}
$\longrightarrow$ \hspace{.13cm}
\begin{subfigure}{.18\textwidth}
\begin{tikzpicture}[scale=.4, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (-2.5,-3) {};
\node[triangnode] (a4) at (-2.5,-4.5) {};
\node[roundnode] (b2) at (0,-3) {};
\node[triangnode] (b3) at (-1,-4.5) {};
\node[roundnode] (c1) at (1.5,0) {};
\node[roundnode] (c2) at (1.5,-1.5) {};
\node[triangnode] (c3) at (1.5,-3) {};
\node[roundnode] (d1) at (3,0) {};
\node[triangnode] (d2) at (3,-1.5) {};
\node[circle,draw, color = red] at (0, -1.5) {};
\node[circle,draw, color = red] at (0, -3) {};
\draw[color = red] (a1) -- (a2);
\draw[color = red] (a2) -- (a3);
\draw[color = red] (a3) -- (a4);
\draw[color = yellow!80] (a2) -- (b2);
\draw[color = yellow!80] (b2) -- (b3);
\draw[color = cyan] (c1) -- (c2);
\draw[color = cyan] (c2) -- (c3);
\draw[color = teal] (d1) -- (d2);
\end{tikzpicture}
\end{subfigure}
\hspace{.1cm} $\longrightarrow$
\begin{subfigure}[c]{.25\textwidth}
\begin{tikzpicture}[scale=.4, roundnode/.style={circle, fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (0,-3) {};
\node[triangnode] (a4) at (0,-4.5) {};
\node[circle,draw, color = red] at (0, -1.5) {};
\node[roundnode] (b1) at (2,0) {};
\node[roundnode] (b2) at (2, -1.5) {};
\node[triangnode] (b3) at (2,-3) {};
\node[circle,draw, color = red] at (2, 0) {};
\node[circle,draw, color = red] at (2, -1.5) {};
\node[roundnode] (c1) at (4,0) {};
\node[roundnode] (c2) at (4,-1.5) {};
\node[roundnode] (c3) at (4,-3) {};
\node[triangnode] (c4) at (4,-4.5) {};
\node[roundnode] (d1) at (6,0) {};
\node[triangnode] (d2) at (6,-1.5) {};
\node (v1) at (0, -5.2) {$E_{1}$};
\node (v2) at (2, -3.7) {$E_{2}$};
\node (v3) at (4, -5.2) {$E_{3}$};
\node (v4) at (6, -2.2) {$E_{4}$};
\draw[color = red] (a1) -- (a2) -- (a3) -- (a4);
\draw[color = yellow!80] (b1) -- (b2) -- (b3);
\draw[color = cyan] (c1) -- (c2) -- (c3) -- (c4);
\draw[color = teal] (d1) -- (d2);
\draw[dashed, gray!80, thin, ->] (6.5, -.75) to [out= 10,in=0, looseness = 1.5] (3, 2);
\draw[dashed, gray!80, thin] (3, 2) to [out= 180,in=170, looseness = 1.5] (-.5, -.75);
\draw[dashed, gray!80, thin] (.5, -.75) -- (1.5, -.75);
\draw[dashed, gray!80, thin] (2.5, -.75) -- (3.5, -.75);
\draw[dashed, gray!80, thin] (4.5, -.75) -- (5.5, -.75);
\end{tikzpicture}
\end{subfigure}
\end{center}
\caption{Retrieving marked vertices from a plane tree.}
\label{im:treetoleaf}
\end{figure}
As shown in Figure \ref{im:treetoleaf}, when $E_4$ is detached, we mark the vertex on $E_2$ that is common to $E_2$ and $E_4$.
This vertex is marked on $E_2$, not $E_1$, because $2 > 1$.
In other words, $E_2$ is the closest extended leaf to $E_4$ that is still on the left of $E_4$ and shares a common vertex with $E_4$.
Then we detach $E_3$ and mark the vertex on $E_2$ that is shared with $E_3$.
When $E_2$ is detached, the two marked vertices are kept on $E_2$, and a vertex on $E_1$ is marked.
It is straightforward to verify that the two procedures described above are inverse bijections between markings of a necklace $\psi[\mu] \in \mathsf{Neck}_{2k+1,k}$ and plane trees whose extended leaf decompositions belong to $\psi[\mu]$.
Since there are exactly $\binom{2k+1}{k-1}$ such markings, the conclusion follows.
\end{proof}
We are now ready to complete our combinatorial proof of Theorem \ref{th:relationNandW}.
\begin{proof}[Proof of Theorem \ref{th:relationNandW}]
Recall that $w_{2k+1, k, m}$ counts plane trees with $2k+1$ non-root vertices, $k$ leaves, and $m$ good leaves; these are precisely the plane trees with $2k+1$ non-root vertices whose extended leaf decomposition has $k$ extended leaves, exactly $m$ of which have length at least 2.
These extended leaf decompositions belong to necklaces corresponding to cyclic compositions of $2k+1$ into $k$ parts with exactly $m$ parts at least $2$, which are counted by $N_{k,m}$ as established in Lemma \ref{lemma:part1_path}.
Furthermore, by Lemma \ref{lemma:part2_trees}, there are exactly $\binom{2k+1}{k-1}$ plane trees corresponding to each necklace.
It follows that $w_{2k+1, k, m} = \binom{2k+1}{k-1} N_{k,m}$ as desired.
\end{proof}
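Theorem \ref{th:relationNandW} can also be confirmed by direct enumeration for small $k$. The sketch below (the nested-list encoding is ours) generates all plane trees with $n$ non-root vertices and uses the characterization from the proof above: a good leaf is one whose extended leaf has length at least $2$.

```python
from math import comb

def plane_trees(n):
    """All plane trees with n non-root vertices (equivalently n edges),
    encoded as nested lists of child subtrees."""
    if n == 0:
        return [[]]
    out = []
    for a in range(n):                 # edges in the leftmost subtree
        for left in plane_trees(a):
            for rest in plane_trees(n - 1 - a):
                out.append([left] + rest)
    return out

def leaf_profile(tree):
    """Return (#leaves, #good leaves), a good leaf being a leaf whose
    extended leaf has length at least 2."""
    leaves = good = 0

    def walk(node, edges):
        nonlocal leaves, good
        if not node:
            leaves += 1
            good += edges >= 2
            return
        for j, child in enumerate(node):
            walk(child, edges + 1 if j == 0 else 1)

    walk(tree, 0)
    return leaves, good

def narayana(k, m):
    return comb(k, m) * comb(k, m - 1) // k

def w(n, k, m):
    return sum(1 for t in plane_trees(n) if leaf_profile(t) == (k, m))

# verify w_{2k+1,k,m} = C(2k+1, k-1) * N_{k,m} for k = 2, 3
for k in (2, 3):
    for m in range(1, k + 1):
        assert w(2*k + 1, k, m) == comb(2*k + 1, k - 1) * narayana(k, m)
```

In particular `w(5, 2, 1)` and `w(5, 2, 2)` both evaluate to $5$, in agreement with Figure~\ref{im:symmetry}.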
From the proofs of Lemmas~\ref{lemma:part1_path} and~\ref{lemma:part2_trees} we implicitly obtain a bijection that demonstrates the symmetry $w_{2k+1, k, m} = w_{2k+1, k, k+1-m}$.
For the sake of completeness, we explicitly write out the bijection that we obtain and give an example in Figure~\ref{im:bijection}.
\begin{dfn} \label{dfn:bijection}
Let $T$ be a plane tree with $2k+1$ non-root vertices, $k$ leaves, and $m$ good leaves.
Construct a plane tree $T'$ on $2k+1$ non-root vertices, $k$ leaves, and $k+1-m$ good leaves via the following algorithm:
\begin{enumerate}
\item Set $M$ to be the marked necklace of extended leaves associated to $T$ via Lemma ~\ref{lemma:part2_trees}.
\item Decompose $M$ into a pair consisting of its underlying unmarked necklace of extended leaves $N$ and a $(k-1)$-subset $S$ of $[2k+1] = \{1,2,\dots,2k+1\}$ containing the positions of the non-leaf vertices marked in $M$.
\item Set $P$ to be the Dyck path of semilength $k$ with $m$ $UD$-factors that is associated to $N$ via the bijective map in Lemma ~\ref{lemma:part1_path}.
\item Set $P'$ to be a Dyck path of semilength $k$ with $k+1-m$ $UD$-factors obtained via any bijection demonstrating the Narayana symmetry \textup{(}see \cite{Kreweras1970, Kreweras1972, Lalanne1992} for example\textup{)}. Perhaps the simplest and most intuitive is the one in terms of non-crossing set partitions in \cite{Kreweras1972}, where the author proved the stronger fact that the lattice of non-crossing partitions is self-dual.
\item Set $N'$ to be the necklace of extended leaves associated to $P'$ via Lemma~\ref{lemma:part1_path}.
\item Set $M'$ to be the marked necklace of extended leaves obtained from the necklace $N'$ and subset $S$.
\item Set $T'$ to be the plane tree with $2k+1$ non-root vertices, $k$ leaves, and $k+1-m$ good leaves associated to $M'$ via Lemma~\ref{lemma:part2_trees}.
\end{enumerate}
\end{dfn}
\begin{rmk}
In Steps \textup{(}2\textup{)} and \textup{(}6\textup{)}, there is some choice of how to label the positions of the non-leaf vertices in a necklace $N\in \mathsf{Neck}_{2k+1,k}$ such that one can pass from a marked necklace to a pair consisting of its underlying unmarked necklace and a $(k-1)$-subset of $[2k+1]$ and vice versa.
We detail a choice of labelling that we deem to be canonical.
By Lemma ~\ref{lemma:part1_path}, there is a unique ordering $(E_1, E_2, \ldots, E_k)$ of the extended leaves in $N$ such that $U^{\mu_{1}-1}DU^{\mu_{2}-1}D\cdots U^{\mu_{k}-1}D$ is $1$-dominating where $\mu_i = \ell(E_{i})$.
Starting from the vertex furthest away from the leaf and moving inwards, label the non-leaf vertices in $E_1$ with the numbers $1, 2, \ldots, \mu_1$, label the non-leaf vertices in $E_2$ with the numbers $\mu_1+1, \ldots, \mu_1+\mu_2$, and so on.
\end{rmk}
\begin{figure}[ht]
\begin{center}
\begin{subfigure}[c]{.13\textwidth}
\begin{tikzpicture}[scale=.35, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (-2, -1.5) {};
\node[roundnode] (a3) at (-3,-3) {};
\node[roundnode] (a4) at (-3,-4.5) {};
\node[roundnode] (b2) at (-1,-3) {};
\node[roundnode] (c2) at (0,-1.5) {};
\node[roundnode] (c3) at (0,-3) {};
\node[roundnode] (c4) at (0,-4.5) {};
\node[roundnode] (d2) at (2,-1.5) {};
\node[roundnode] (d3) at (2,-3) {};
\draw (a1) -- (a2);
\draw (a2) -- (a3);
\draw (a3) -- (a4);
\draw (a2) -- (b2);
\draw (a1) -- (c2);
\draw (c2) -- (c3);
\draw (c3) -- (c4);
\draw (a1) -- (d2);
\draw (d2) -- (d3);
\end{tikzpicture}
\end{subfigure}
\hspace{.1cm} $\longrightarrow$
\begin{subfigure}[c]{.25\textwidth}
\begin{tikzpicture}[scale=.35, roundnode/.style={circle, fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (0,-3) {};
\node[triangnode] (a4) at (0,-4.5) {};
\node[circle,draw, color = red] at (0, 0) {};
\node[circle,draw, color = red] at (0, -1.5) {};
\node[roundnode] (b1) at (2,0) {};
\node[triangnode] (b2) at (2,-1.5) {};
\node[roundnode] (c1) at (4,0) {};
\node[roundnode] (c2) at (4,-1.5) {};
\node[roundnode] (c3) at (4,-3) {};
\node[triangnode] (c4) at (4,-4.5) {};
\node[circle,draw, color = red] at (4, 0) {};
\node[roundnode] (d1) at (6,0) {};
\node[roundnode] (d2) at (6, -1.5) {};
\node[triangnode] (d3) at (6,-3) {};
\draw (a1) -- (a2) -- (a3) -- (a4);
\draw (b1) -- (b2);
\draw (c1) -- (c2) -- (c3) -- (c4);
\draw (d1) -- (d2) -- (d3);
\draw[dashed, gray!80, thin, ->] (6.5, -.75) to [out= 10,in=0, looseness = 1.5] (3, 2);
\draw[dashed, gray!80, thin] (3, 2) to [out= 180,in=170, looseness = 1.5] (-.5, -.75);
\draw[dashed, gray!80, thin] (.5, -.75) -- (1.5, -.75);
\draw[dashed, gray!80, thin] (2.5, -.75) -- (3.5, -.75);
\draw[dashed, gray!80, thin] (4.5, -.75) -- (5.5, -.75);
\end{tikzpicture}
\end{subfigure}
$\longrightarrow$
\begin{subfigure}[c]{.33\textwidth}
\begin{tikzpicture}[scale=.35, roundnode/.style={circle, fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node at (-2, -1.3) {$\Big ($};
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (0,-3) {};
\node[triangnode] (a4) at (0,-4.5) {};
\node[roundnode] (b1) at (2,0) {};
\node[roundnode] (b2) at (2, -1.5) {};
\node[triangnode] (b3) at (2,-3) {};
\node[roundnode] (c1) at (4,0) {};
\node[roundnode] (c2) at (4,-1.5) {};
\node[roundnode] (c3) at (4,-3) {};
\node[triangnode] (c4) at (4,-4.5) {};
\node[roundnode] (d1) at (6,0) {};
\node[triangnode] (d2) at (6,-1.5) {};
\node at (6.9, -1.8) {,};
\node at (10,-1.3) {$\{ 1, 6, 7\} \Big)$};
\draw (a1) -- (a2) -- (a3) -- (a4);
\draw (b1) -- (b2) -- (b3);
\draw (c1) -- (c2) -- (c3) -- (c4);
\draw (d1) -- (d2);
\draw[dashed, gray!80, thin, ->] (6.5, -.75) to [out= 10,in=0, looseness = 1.5] (3, 2);
\draw[dashed, gray!80, thin] (3, 2) to [out= 180,in=170, looseness = 1.5] (-.5, -.75);
\draw[dashed, gray!80, thin] (.5, -.75) -- (1.5, -.75);
\draw[dashed, gray!80, thin] (2.5, -.75) -- (3.5, -.75);
\draw[dashed, gray!80, thin] (4.5, -.75) -- (5.5, -.75);
\end{tikzpicture}
\end{subfigure}
$\longrightarrow$
\\[.1cm]
$\longrightarrow$
\begin{subfigure}[c]{.29\textwidth}
\begin{tikzpicture}[scale = .3, auto=center]
\node at (-1, 1.5) {$\Big($};
\draw[step=1.0, gray!100, thin] (0,0) grid (8, 3);
\draw[black!80, line width=2pt] (0,0) -- (1,1) -- (2,0) -- (3,1) -- (4,0) -- (5,1) -- (6,2) -- (7,1) -- (8,0);
\node at (8.3, 1) {,};
\node at (11.4, 1.5) {$\{1, 6, 7\} \Big)$};
\end{tikzpicture}
\end{subfigure}
$\longrightarrow$
\begin{subfigure}[c]{.29\textwidth}
\begin{tikzpicture}[scale = .3, auto=center]
\node at (-1, 1.5) {$\Big($};
\draw[step=1.0, gray!100, thin] (0,0) grid (8, 3);
\draw[black!80, line width=2pt] (0,0) -- (1,1) -- (2,2) -- (3,3) -- (4,2) -- (5,1) -- (6,0) -- (7,1) -- (8,0);
\node at (8.3, 1) {,};
\node at (11.4, 1.5) {$\{1, 6, 7\} \Big)$};
\end{tikzpicture}
\end{subfigure}
$\longrightarrow$
\\[.1cm]
$\longrightarrow$
\begin{subfigure}[c]{.3\textwidth}
\begin{tikzpicture}[scale=.3, roundnode/.style={circle, fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node at (-2, -2.8) {$\Big ($};
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (0,-3) {};
\node[roundnode] (a4) at (0,-4.5) {};
\node[roundnode] (a5) at (0,-6) {};
\node[triangnode] (a6) at (0,-7.5) {};
\node[roundnode] (b1) at (2,0) {};
\node[triangnode] (b2) at (2, -1.5) {};
\node[roundnode] (c1) at (4,0) {};
\node[triangnode] (c2) at (4,-1.5) {};
\node[roundnode] (d1) at (6,0) {};
\node[roundnode] (d2) at (6,-1.5) {};
\node[triangnode] (d3) at (6,-3) {};
\node at (6.9, -3.3) {,};
\node at (10,-2.8) {$\{ 1, 6, 7\} \Big)$};
\draw (a1) -- (a2) -- (a3) -- (a4) -- (a5) -- (a6);
\draw (b1) -- (b2);
\draw (c1) -- (c2);
\draw (d1) -- (d2) -- (d3);
\draw[dashed, gray!80, thin, ->] (6.5, -.75) to [out= 10,in=0, looseness = 1.5] (3, 2);
\draw[dashed, gray!80, thin] (3, 2) to [out= 180,in=170, looseness = 1.5] (-.5, -.75);
\draw[dashed, gray!80, thin] (.5, -.75) -- (1.5, -.75);
\draw[dashed, gray!80, thin] (2.5, -.75) -- (3.5, -.75);
\draw[dashed, gray!80, thin] (4.5, -.75) -- (5.5, -.75);
\end{tikzpicture}
\end{subfigure}
\hspace{.15cm} $\longrightarrow$
\begin{subfigure}[c]{.22\textwidth}
\begin{tikzpicture}[scale=.3, roundnode/.style={circle, fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (0, -1.5) {};
\node[roundnode] (a3) at (0,-3) {};
\node[roundnode] (a4) at (0,-4.5) {};
\node[roundnode] (a5) at (0,-6) {};
\node[triangnode] (a6) at (0,-7.5) {};
\node[circle,draw, color = red] at (0, 0) {};
\node[roundnode] (b1) at (2,0) {};
\node[triangnode] (b2) at (2, -1.5) {};
\node[circle,draw, color = red] at (2, 0) {};
\node[roundnode] (c1) at (4,0) {};
\node[triangnode] (c2) at (4,-1.5) {};
\node[circle,draw, color = red] at (4, 0) {};
\node[roundnode] (d1) at (6,0) {};
\node[roundnode] (d2) at (6,-1.5) {};
\node[triangnode] (d3) at (6,-3) {};
\draw (a1) -- (a2) -- (a3) -- (a4) -- (a5) -- (a6);
\draw (b1) -- (b2);
\draw (c1) -- (c2);
\draw (d1) -- (d2) -- (d3);
\draw[dashed, gray!80, thin, ->] (6.5, -.75) to [out= 10,in=0, looseness = 1.5] (3, 2);
\draw[dashed, gray!80, thin] (3, 2) to [out= 180,in=170, looseness = 1.5] (-.5, -.75);
\draw[dashed, gray!80, thin] (.5, -.75) -- (1.5, -.75);
\draw[dashed, gray!80, thin] (2.5, -.75) -- (3.5, -.75);
\draw[dashed, gray!80, thin] (4.5, -.75) -- (5.5, -.75);
\end{tikzpicture}
\end{subfigure}
$\longrightarrow$ \hspace{.3cm}
\begin{subfigure}[c]{.15\textwidth}
\begin{tikzpicture}[scale=.35, roundnode/.style={circle,fill=black!60, inner sep=1.5pt, minimum width=4pt}, triangnode/.style={regular polygon,regular polygon sides=3, fill=black!60, inner sep=1.5pt, outer sep = 0pt}]
\node[roundnode] (a1) at (0,0) {};
\node[roundnode] (a2) at (-2, -1.5) {};
\node[roundnode] (a3) at (-2,-3) {};
\node[roundnode] (a4) at (-2,-4.5) {};
\node[roundnode] (a5) at (-2,-6) {};
\node[roundnode] (a6) at (-2,-7.5) {};
\node[roundnode] (b2) at (-.75,-1.5) {};
\node[roundnode] (c2) at (.75,-1.5) {};
\node[roundnode] (d2) at (2,-1.5) {};
\node[roundnode] (d3) at (2,-3) {};
\draw (a1) -- (a2) -- (a3) -- (a4) -- (a5) -- (a6);
\draw (a1) -- (b2);
\draw (a1) -- (c2);
\draw (a1) -- (d2) -- (d3);
\end{tikzpicture}
\end{subfigure}
\end{center}
\caption{Example of the bijection given in Definition ~\ref{dfn:bijection} for $k = 4$ where the Lalanne-Kreweras involution \cite{Kreweras1970, Lalanne1992} is used in Step (4). }
\label{im:bijection}
\end{figure}
\subsection{Combinatorial proof of a related symmetry}
In addition to the symmetry in Theorem~\ref{th:main}, it can also be observed that $w_{2k-1, k, m} = w_{2k-1, k, k-m}$ for all $1 \leq m \leq k$, which is a consequence of the following variation of Theorem~\ref{th:relationNandW}.
\begin{thm}
For all $k \geq 1$ and $m \geq 0$, we have
\label{th:relationNandW2}
\begin{equation}
\label{eq:relNW2}
w_{2k-1, k, m} = \binom{2k-1}{k-1} N_{k-1,m}.
\end{equation}
\end{thm}
Theorem~\ref{th:relationNandW2} can be proven in a way that is completely analogous to our combinatorial proof of Theorem~\ref{th:relationNandW}, but relying on Lemmas \ref{lemma:part1_path2} and \ref{cor:part2_trees2k-1} below.
\begin{lemma}
\label{lemma:part1_path2}
Let $k\geq 1$ and $m \geq 0$. Then the Narayana number $N_{k-1,m}$ is the number of cyclic compositions of $2k-1$ into $k$ parts such that exactly $m$ parts are at least 2.
\end{lemma}
\begin{proof}
We follow the proof of Lemma \ref{lemma:part1_path} closely.
Given a cyclic composition $[\mu_{1},\mu_{2},\ldots, \mu_{k}]$ of $2k-1$ with exactly $m$ parts that are at least $2$, we build a sequence consisting of $k-1$ copies of $U$ and $k$ copies of $D$ in the same way as in the proof of Lemma \ref{lemma:part1_path}.
By Corollary \ref{cor:reverse_cycle}, there is exactly one cyclic shift of this sequence in which every proper prefix contains at least as many $U$s as $D$s.
The last entry of this cyclic shift is then a $D$; removing this last $D$, we obtain a Dyck path of semilength $k-1$ with exactly $m$ $UD$-factors.
Conversely, consider a Dyck path of semilength $k-1$ with exactly $m$ $UD$-factors.
We append a $D$ to the corresponding Dyck word, and form the sequence $a_1, a_2, \dots, a_k$, where $a_i$ is the number of $U$s that immediately precede the $i$th $D$.
We then add $1$ to every number in this sequence and take the equivalence class of its cyclic shifts, yielding a cyclic composition of $2k-1$ into $k$ parts, exactly $m$ of which are at least $2$.
\end{proof}
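Lemma \ref{lemma:part1_path2} can be checked by direct enumeration for small $k$. In this illustrative Python sketch (helper names are ours), a cyclic composition is represented by its lexicographically least rotation.

```python
from math import comb

def compositions(n, k):
    """All compositions of n into k positive parts, as tuples."""
    if k == 1:
        if n >= 1:
            yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def cyclic_rep(c):
    """Lexicographically least rotation: a canonical representative of [c]."""
    return min(c[i:] + c[:i] for i in range(len(c)))

def narayana(n, k):
    if n < 1 or k < 1 or k > n:
        return 0
    return comb(n, k) * comb(n, k - 1) // n

# Cyclic compositions of 2k-1 into k parts with exactly m parts >= 2
# should be counted by N_{k-1,m}.
for k in (2, 3, 4, 5):
    classes = {cyclic_rep(c) for c in compositions(2 * k - 1, k)}
    for m in range(k + 1):
        count = sum(1 for c in classes if sum(p >= 2 for p in c) == m)
        assert count == narayana(k - 1, m)
```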
\begin{lemma}
\label{cor:part2_trees2k-1}
Let $k\geq 1$. Given a cyclic composition $[\mu] \in \mathsf{CComp}_{2k-1,k}$, there are exactly $\binom{2k-1}{k-1}$ plane trees whose extended leaf decomposition belongs to the necklace $\psi[\mu] \in \mathsf{Neck}_{2k-1,k}$.
\end{lemma}
\begin{proof}
The proof of Lemma \ref{lemma:part2_trees} can be readily adapted to prove Lemma \ref{cor:part2_trees2k-1}; we omit the details.
\end{proof}
\section{Combinatorial proofs of generalized formulas} \label{s-generalized}
\subsection{Combinatorial proof of generalized formulas for \texorpdfstring{$w_{n,k,m}$}{w n,k,m}} \label{ss-generalized}
We have demonstrated a combinatorial proof of Theorem ~\ref{th:relationNandW}, which expresses the numbers $w_{2k+1,k,m}$ in terms of the Narayana numbers.
In fact, the numbers $w_{n,k,m}$, for any $n\neq 2k$, can be expressed in terms of a family of generalized Narayana numbers due to Callan ~\cite{callan}, and the purpose of this section is to describe how our combinatorial proof for Theorem ~\ref{th:relationNandW} can be adapted to prove this more general result.
Along the way, we will give a combinatorial proof for our explicit formula for the numbers $w_{n,k,m}$ stated in Theorem \ref{t-explicit}.
Given $[\mu] \in \mathsf{CComp}_{n,k}$, let $\mathsf{MNeck}[\mu]$ be the set of all marked necklaces of extended leaves corresponding to the cyclic composition $[\mu]$.
In the proof of Lemma~\ref{lemma:part2_trees}, we used the fact that $\lvert \mathsf{MNeck}[\mu] \rvert = \binom{n}{k-1}$ when $[\mu]$ is primitive.
More generally, we have the following:
\begin{lemma} \label{lemma:markednecksize}
Given $[\mu] \in \mathsf{CComp}_{n,k}$, we have
\begin{equation*}
\lvert \mathsf{MNeck}[\mu] \rvert = \frac{\mathsf{ord}[\mu]}{k}\binom{n}{k-1}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $N$ denote the necklace of extended leaves corresponding to $[\mu]$, and fix a sequence $E_1 E_2 \cdots E_k \in N$ of extended leaves.
Then there are $\binom{n}{k-1}$ ways to choose the $k-1$ non-leaf vertices to be marked in $E_1 E_2 \cdots E_k$.
Upon taking all $k$ cyclic shifts of $E_1 E_2 \cdots E_k$, observe that each distinct shift appears $k/\mathsf{ord}[\mu]$ times; accordingly, each marked necklace in $\mathsf{MNeck}[\mu]$ arises from exactly $k/\mathsf{ord}[\mu]$ of the markings counted by $\binom{n}{k-1}$.
In other words, we have
\begin{equation*}
\frac{k}{\mathsf{ord}[\mu]} \lvert \mathsf{MNeck}[\mu] \rvert = \binom{n}{k-1},
\end{equation*}
which is equivalent to our desired conclusion.
\end{proof}
A Dyck word $\pi = \pi_{1} \cdots \pi_{2n}$ with exactly $k$ $UD$-factors can be expressed uniquely in the form $\pi = U^{a_{1}}D^{b_{1}} \cdots U^{a_{k}}D^{b_{k}}$, where $(a_{1}, \ldots, a_{k})$ and $(b_{1}, \ldots, b_{k})$ are both compositions of $n$.
Let us call $(a_{1}, \ldots, a_{k})$ the \emph{rise composition} of $\pi$.
Given a cyclic composition $[\mu]$, denote by $\mathsf{D}[\mu]$ the set of all Dyck words with rise composition contained in the equivalence class $[\mu]$.
\begin{lemma} \label{lemma:risecompsize}
Given $[\mu] \in \mathsf{CComp}_{n,k}$, we have
\begin{equation}
\lvert \mathsf{D}[\mu] \rvert = \frac{\mathsf{ord}[\mu]}{k}\binom{n}{k-1}.
\end{equation}
\end{lemma}
\begin{proof}
From Lemma~\ref{lemma:markednecksize}, it suffices to find a bijection from $\mathsf{D}[\mu]$ to $\mathsf{MNeck}[\mu]$.
Once again, we appeal to the plane tree interpretation of Dyck paths from which the bijection to $\mathsf{MNeck}[\mu]$ follows similarly to that of Lemma ~\ref{lemma:part2_trees}.
\end{proof}
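Lemma \ref{lemma:risecompsize} lends itself to a direct check: enumerate all Dyck words of semilength $n$, group them by the cyclic class of their rise composition, and compare each class size with $(\mathsf{ord}[\mu]/k)\binom{n}{k-1}$, where $k$ is the number of parts. The Python sketch below (helper names are ours) does exactly this, cross-multiplying by $k$ to stay in integer arithmetic.

```python
from math import comb
from collections import defaultdict

def dyck_words(n):
    def rec(word, ups, downs):
        if ups == n and downs == n:
            yield word
            return
        if ups < n:
            yield from rec(word + "U", ups + 1, downs)
        if downs < ups:
            yield from rec(word + "D", ups, downs + 1)
    yield from rec("", 0, 0)

def rise_composition(word):
    """Lengths of the maximal U-runs of a Dyck word, left to right."""
    return tuple(len(run) for run in word.split("D") if run)

def cyclic_rep(c):
    return min(c[i:] + c[:i] for i in range(len(c)))

def ord_class(c):
    """Number of distinct rotations of c, i.e. ord[mu] for the class [mu]."""
    return len({c[i:] + c[:i] for i in range(len(c))})

# Group Dyck words by the cyclic class of their rise composition, then compare
# each class size with (ord[mu]/k) * C(n, k-1).
for n in (5, 6, 7):
    sizes = defaultdict(int)
    for word in dyck_words(n):
        sizes[cyclic_rep(rise_composition(word))] += 1
    for mu, size in sizes.items():
        k = len(mu)
        assert size * k == ord_class(mu) * comb(n, k - 1)
```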
Let $\mathsf{Comp}_{n,k,m}$ denote the set of all compositions in $\mathsf{Comp}_{n,k}$ with exactly $m$ parts at least $2$ and let $\mathsf{CComp}_{n,k,m}$ be its cyclic counterpart.
Using Lemma ~\ref{lemma:risecompsize}, we obtain a combinatorial proof for Theorem ~\ref{t-explicit}. The proof of the nontrivial case is given below.
\begin{proof}[Combinatorial proof of Theorem ~\ref{t-explicit}]
First, we take the Dyck paths counted by $w_{n,k,m}$ and partition them by the cyclic equivalence classes of their rise compositions.
Then we have
\begin{equation} \label{e-needthislater}
w_{n,k,m} = \sum_{[\mu] \in \mathsf{CComp}_{n,k,m}} \lvert \mathsf{D}[\mu] \rvert
= \sum_{[\mu] \in \mathsf{CComp}_{n,k,m}} \frac{\mathsf{ord}[\mu]}{k} \binom{n}{k-1}
\end{equation}
upon applying Lemma ~\ref{lemma:risecompsize}.
Next, recall that every cyclic composition $[\mu]\in\mathsf{CComp}_{n,k,m}$ contains $\mathsf{ord}[\mu]$ distinct compositions in $\mathsf{Comp}_{n,k,m}$, so we have
\begin{equation} \label{e-wcomp}
w_{n,k,m} = \sum_{\mu \in \mathsf{Comp}_{n,k,m}} \frac{1}{\mathsf{ord}[\mu]}\frac{\mathsf{ord}[\mu]}{k} \binom{n}{k-1}
= \frac{1}{k} \binom{n}{k-1} \lvert \mathsf{Comp}_{n,k,m} \rvert.
\end{equation}
Finally, we claim that
\begin{equation} \label{e-compbin}
\lvert \mathsf{Comp}_{n,k,m} \rvert = \binom{n-k-1}{m-1}\binom{k}{m};
\end{equation}
indeed, we can uniquely generate all compositions of $n$ into $k$ parts with exactly $m$ parts at least 2 using the following process:
\begin{enumerate}
\item Take the composition $(1^k)$ consisting of $k$ copies of 1, and choose $m$ positions $1\leq i_1<i_2<\cdots<i_m \leq k$ within this composition; there are $\binom{k}{m}$ ways to do this.
\item Choose a composition $\mu=(\mu_1,\mu_2,\dots,\mu_m)$ of $n-k$ into $m$ parts; there are $\binom{n-k-1}{m-1}$ ways to do this.
\item For each $1 \leq j \leq m$, add $\mu_j$ to the $i_j$th entry of $(1^k)$. The result is a composition of $n$ into $k$ parts with exactly $m$ parts at least 2.
\end{enumerate}
Substituting \eqref{e-compbin} into \eqref{e-wcomp} completes the proof.
\end{proof}
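The count \eqref{e-compbin} can be verified exhaustively for small parameters. With the convention $\binom{-1}{-1}=1$ adopted later in this section, the formula also covers the degenerate case $m=0$. A small illustrative Python check (helper names are ours):

```python
from math import comb

def compositions(n, k):
    if k == 1:
        if n >= 1:
            yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def binom(n, k):
    """Binomial coefficient under the paper's conventions:
    zero out of range, except C(-1,-1) = 1."""
    if n == -1 and k == -1:
        return 1
    return comb(n, k) if 0 <= k <= n else 0

# Check |Comp_{n,k,m}| = C(n-k-1, m-1) * C(k, m) for all small n, k, m.
for n in range(1, 9):
    for k in range(1, n + 1):
        for m in range(k + 1):
            count = sum(1 for c in compositions(n, k)
                        if sum(p >= 2 for p in c) == m)
            assert count == binom(n - k - 1, m - 1) * binom(k, m)
```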
Given $0 \leq r \leq n$ and $0 \leq k \leq n-r$, define the \emph{$r$-generalized Narayana number} $N_{n, k}^{(r)}$ by
\[N_{n, k}^{(r)} = \frac{r+1}{n+1} \binom{n+1}{k} \binom{n-r-1}{k-1}.\]
Observe that the usual Narayana numbers $N_{n,k}$ can be obtained by setting $r = 0$ in $N_{n, k}^{(r)}$.
For $k <0$, we use the convention that $\binom{n}{k} = 0$ except for the special case when $n = k = -1$, where we define $\binom{-1}{-1}$ to be $1$.
The following is a generalization of Theorems \ref{th:relationNandW} and \ref{th:relationNandW2}.
\begin{thm}
\label{thm:simplify}
For all $j,k \geq 1$ and $m\geq 0$, we have
\begin{equation*}
w_{2k+j,k,m} = \frac{1}{j}\binom{2k+j}{k-1}N^{(j-1)}_{k+j-1,m}
\end{equation*}
and for all $1 \leq j \leq k$ and $m\geq 0$, we have
\begin{equation*}
w_{2k-j,k,m} = \frac{1}{j}\binom{2k-j}{k-1}N^{(j-1)}_{k-1,m}.
\end{equation*}
\end{thm}
Before proving Theorem ~\ref{thm:simplify}, we first introduce a generalization of Dyck paths and prove a useful lemma.
Consider paths in $\mathbb{Z}^2$ from $(0,0)$ to $(2n-r, r)$, consisting of $n$ up steps $(1,1)$ and $n-r$ down steps $(1,-1)$, that never pass below the horizontal axis.
Denote by $D^{(r)}_{n,k}$ the set of words on the alphabet $\{U, D \}$ corresponding to such paths with exactly $k$ $UD$-factors.
As shown by Callan and Schulte ~\cite{callan}, $N_{n, k}^{(r)}$ is the cardinality of $D^{(r)}_{n,k}$.
For $\omega, \nu \in D^{(r)}_{n,k}$, let us write $\omega \sim \nu$ if the words $U \omega$ and $U \nu$ are cyclic shifts of each other.
The relation $\sim$ is an equivalence relation on $D^{(r)}_{n,k}$, and we denote the set of its equivalence classes by $\tilde{D}^{(r)}_{n,k}$.
For $[\omega] \in \tilde{D}^{(r)}_{n,k}$, let $\mathsf{ord}[\omega]$ be the number of distinct elements of $D^{(r)}_{n,k}$ contained within the equivalence class $[\omega]$.
\begin{dfn}
For $j,k \geq 1$, let $\phi_{j,k}$ be the map from $\mathsf{CComp}_{2k+j, k, m}$ to $\tilde{D}^{(j-1)}_{k+j-1, m}$ where $\phi_{j,k}[\mu]$ is obtained via the following algorithm:
\begin{enumerate}
\item For $[\mu] = [\mu_{1}, \ldots, \mu_{k}] \in \mathsf{CComp}_{2k+j, k,m}$, set $\omega = U^{\mu_{1}-1}DU^{\mu_{2}-1}D \cdots U^{\mu_{k}-1}D$.
\item Let $\nu = \nu_{1} \nu_{2} \cdots \nu_{2k+j}$ be any cyclic shift of $\omega$ that is $1$-dominating.
\item Set $\phi_{j,k}[\mu]$ to be the equivalence class of the subword $\nu_{2} \cdots \nu_{2k+j}$.
\end{enumerate}
\end{dfn}
It is not immediately clear from the above definition whether the map $\phi_{j,k}$ is well-defined, but this will be established in the proof of the following lemma.
\begin{lemma} \label{lemma:genbijection}
For all $j,k \geq 1$, the map $\phi_{j,k}$ is a bijection. Moreover, for all $[\mu] \in \mathsf{CComp}_{2k+j,k,m}$, we have
\begin{equation*}
\frac{\mathsf{ord}(\phi_{j,k}[\mu])}{\mathsf{ord}[\mu]} = \frac{j}{k}.
\end{equation*}
\end{lemma}
\begin{proof}
We first prove that $\phi_{j,k}$ is well-defined.
Since $\omega$ contains $k+j$ copies of $U$ and $k$ copies of $D$, the cycle lemma guarantees that at least one cyclic shift of $\omega$ is $1$-dominating.
By construction of $\omega$, the word $\nu$ must contain exactly $m$ $UD$-factors.
The fact that $\nu$ is $1$-dominating and contains $m$ $UD$-factors implies that its subword $\nu_{2} \cdots \nu_{2k+j}$ is an element of $D^{(j-1)}_{k+j-1, m}$.
From the definition of $\sim$ on $D^{(j-1)}_{k+j-1, m}$, any $1$-dominating cyclic shift of $\omega$ will be sent to the same equivalence class in $\tilde{D}^{(j-1)}_{k+j-1, m}$.
This same argument also implies that $\phi_{j,k}[\mu]$ does not depend on the representative of $[\mu]$ that is chosen.
Injectivity and surjectivity are straightforward to check from the definition of $\phi_{j,k}$.
By the cycle lemma, there are exactly $j$ cyclic shifts of $\omega$ that are $1$-dominating; among these $j$ words, there are $j \cdot \mathsf{ord}[\mu]/k$ \emph{distinct} cyclic shifts as each of them appears $k/\mathsf{ord}[\mu]$ times.
These $1$-dominating sequences are in bijection with paths in the equivalence class of $\phi_{j,k}[\mu]$ by removing the first $U$ from the sequence.
Thus, $\mathsf{ord}(\phi_{j,k}[\mu]) = j \cdot \mathsf{ord}[\mu]/k$.
\end{proof}
We are now ready to prove Theorem ~\ref{thm:simplify}.
\begin{proof} [Proof of Theorem ~\ref{thm:simplify}]
From Lemma ~\ref{lemma:genbijection}, we have
\begin{equation*}
\lvert \mathsf{Comp}_{2k+j, k, m} \rvert = \frac{k}{j}\lvert D^{(j-1)}_{k+j-1,m} \rvert = \frac{k}{j} N^{(j-1)}_{k+j-1,m}.
\end{equation*}
Plugging this into ~\eqref{e-wcomp} gives the desired result
\begin{equation*}
w_{2k+j, k, m} = \frac{1}{j} \binom{2k+j}{k-1} N^{(j-1)}_{k+j-1,m}.
\end{equation*}
The proof for $w_{2k-j,k,m}$ follows similarly by defining an analogous map $\varphi_{j,k}$ from $\mathsf{CComp}_{2k-j, k, m}$ to $\tilde{D}^{(j-1)}_{k-1, m}$ and reproving Lemma ~\ref{lemma:genbijection} for $\varphi_{j,k}$.
\end{proof}
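Both identities in Theorem \ref{thm:simplify} can be verified by brute force for small $k$ and $j$, again using the Dyck-path description of $w_{n,k,m}$. In the illustrative Python sketch below (helper functions are ours), we compare $j \cdot w_{2k\pm j,k,m}$ with the $j$-fold of the right-hand side to stay in integer arithmetic.

```python
from math import comb

def dyck_words(n):
    def rec(word, ups, downs):
        if ups == n and downs == n:
            yield word
            return
        if ups < n:
            yield from rec(word + "U", ups + 1, downs)
        if downs < ups:
            yield from rec(word + "D", ups, downs + 1)
    yield from rec("", 0, 0)

def binom(n, k):
    if n == -1 and k == -1:
        return 1  # the paper's convention
    return comb(n, k) if 0 <= k <= n else 0

def gen_narayana(n, k, r):
    """r-generalized Narayana number N^{(r)}_{n,k}."""
    return (r + 1) * binom(n + 1, k) * binom(n - r - 1, k - 1) // (n + 1)

def w(n, k, m):
    return sum(1 for word in dyck_words(n)
               if word.count("UD") == k and word.count("UUD") == m)

# j * w_{2k+j,k,m} = C(2k+j, k-1) * N^{(j-1)}_{k+j-1,m}
for k, j in [(2, 2), (2, 3), (3, 2)]:
    for m in range(k + 1):
        assert (j * w(2 * k + j, k, m)
                == binom(2 * k + j, k - 1) * gen_narayana(k + j - 1, m, j - 1))

# j * w_{2k-j,k,m} = C(2k-j, k-1) * N^{(j-1)}_{k-1,m}
for k, j in [(3, 1), (3, 2), (3, 3), (4, 2)]:
    for m in range(k + 1):
        assert (j * w(2 * k - j, k, m)
                == binom(2 * k - j, k - 1) * gen_narayana(k - 1, m, j - 1))
```

Note that the case $(k,j)=(3,3)$, $m=0$ exercises the convention $\binom{-1}{-1}=1$.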
\subsection{Further generalizations and applications} We now detail several interesting generalizations and applications that can be obtained from our results in Section ~\ref{ss-generalized}.
First, we obtain a formula for the Catalan numbers $C_n$ in terms of primitive cyclic compositions via Lemma ~\ref{lemma:risecompsize}.
\begin{cor}
For all $n\geq 1$, we have
\begin{equation*}
C_n = \frac{1}{n+1} \binom{2n}{n} = \sum_{d|n} \sum_{\substack{[\mu] \in \mathsf{CComp}_{n/d}\\
\text{primitive}}} \frac{1}{d}\binom{n}{\mathsf{ord}[\mu]\cdot d-1},
\end{equation*}
where $\mathsf{CComp}_{n}$ is the set of all cyclic compositions of $n$.
\end{cor}
\begin{proof}
Grouping Dyck paths by their cyclic rise composition, we have
\begin{equation*}
C_{n} = \sum_{[\mu]\in \mathsf{CComp}_{n}} \lvert \mathsf{D}[\mu] \rvert.
\end{equation*}
Recall that every cyclic composition of $n$ can be uniquely expressed as the concatenation of $d$ copies of a primitive cyclic composition of $n/d$ for some divisor $d$ of $n$.
Similarly, for every divisor $d$ of $n$, each primitive cyclic composition of $n/d$ can be made into a cyclic composition of $n$ by concatenating $d$ copies.
This gives us
\begin{equation*}
C_n = \sum_{[\mu]\in \mathsf{CComp}_{n}} \lvert \mathsf{D}[\mu] \rvert = \sum_{d|n}\sum_{\substack{[\mu] \in \mathsf{CComp}_{n/d}\\ \text{primitive}}} \lvert \mathsf{D}([\mu]^d) \rvert
\end{equation*}
where $[\mu]^d$ is the concatenation of $d$ copies of $[\mu]$.
From Lemma ~\ref{lemma:risecompsize}, this gives us precisely
\begin{equation*}
C_n = \sum_{d|n}\sum_{\substack{[\mu] \in \mathsf{CComp}_{n/d}\\ \text{primitive}}} \lvert \mathsf{D}([\mu]^d) \rvert = \sum_{d|n} \sum_{\substack{[\mu] \in \mathsf{CComp}_{n/d}\\
\text{primitive}}} \frac{1}{d}\binom{n}{\mathsf{ord}[\mu]\cdot d-1}. \qedhere
\end{equation*}
\end{proof}
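The corollary can be confirmed numerically for small $n$. The illustrative Python sketch below (helper names are ours) enumerates primitive cyclic compositions directly and accumulates the divisor sum in exact rational arithmetic.

```python
from fractions import Fraction
from math import comb

def compositions(n, k):
    if k == 1:
        if n >= 1:
            yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def cyclic_rep(c):
    return min(c[i:] + c[:i] for i in range(len(c)))

def ord_class(c):
    return len({c[i:] + c[:i] for i in range(len(c))})

def catalan(n):
    return comb(2 * n, n) // (n + 1)

for n in range(1, 9):
    total = Fraction(0)
    for d in range(1, n + 1):
        if n % d:
            continue
        # Canonical representatives of all cyclic compositions of n/d.
        reps = {cyclic_rep(c) for k in range(1, n // d + 1)
                for c in compositions(n // d, k)}
        for mu in reps:
            if ord_class(mu) == len(mu):  # [mu] is primitive
                total += Fraction(comb(n, ord_class(mu) * d - 1), d)
    assert total == catalan(n)
```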
Next, Lemma ~\ref{lemma:genbijection} naturally leads to the following generalization of Lemmas ~\ref{lemma:part1_path} and ~\ref{lemma:part1_path2}, which expresses the number of cyclic compositions in $\mathsf{CComp}_{2k\pm j, k,m}$ in terms of $r$-generalized Narayana numbers.
Below, $\varphi$ denotes Euler's totient function.
\begin{prop} \label{p-ccomp}
Let $k \geq 1$ and $m,j\geq 0$, and let $d = \mathsf{gcd}(k,m,j)$.\footnote{If $m=0$ or $j=0$, then $\mathsf{gcd}(k,m,j)$ is defined to be the greatest common divisor of the nonzero numbers among $k$, $m$, and $j$.}\vspace{5bp}
\begin{enumerate}
\item[(a)] If $j \geq 1$, we have $\displaystyle{\lvert \mathsf{CComp}_{2k+j, k, m} \rvert = \frac{1}{j} \sum_{s\mid d} \varphi(s)N^{(j/s-1)}_{(k+j)/s-1, \, m/s}}$.\vspace{5bp}
\item[(b)] If $j=0$, we have $\displaystyle{\lvert \mathsf{CComp}_{2k, k, m} \rvert = \frac{1}{k} \sum_{s\mid d} \varphi(s) \binom{\frac{k}{s}-1}{\frac{m}{s}-1}\binom{\frac{k}{s}}{\frac{m}{s}}}$.
\vspace{5bp}
\item[(c)] If $1 \leq j \leq k$, we have $\displaystyle{\lvert \mathsf{CComp}_{2k-j, k, m} \rvert = \frac{1}{j} \sum_{s\mid d} \varphi(s)N^{(j/s-1)}_{k/s-1,\, m/s}}$.
\end{enumerate}
\end{prop}
\begin{proof}
Let us call an (ordinary) composition $\mu$ \emph{primitive} if $[\mu]$ is primitive, and let $\mathsf{PComp}_{n,k,m}$ denote the set of primitive compositions of $n$ with $k$ parts with exactly $m$ parts at least two.
Let $k \geq 1$ and $m,j\geq 0$, and let $d = \mathsf{gcd}(k,m,j)$.
Given $\ell \mid d$, define
$$f(\ell) = \lvert \mathsf{Comp}_{(2k+j)\ell/d, \, k\ell/d, \, m\ell/ d} \rvert \quad \text{and} \quad g(\ell) = \lvert \mathsf{PComp}_{(2k+j)\ell/d,\, k\ell/d,\, m\ell /d} \rvert.$$
Every composition can be uniquely decomposed as a concatenation of one or more copies of a primitive composition, which leads to the formula $f(\ell) = \sum_{s\mid \ell} g(s)$.
By M\"obius inversion, we then have $g(\ell) = \sum_{s\mid \ell} \mathsf{M\ddot{o}b}(s) f(\ell/s)$ where $\mathsf{M\ddot{o}b}$ is the M\"obius function.
Observe that
\begin{equation*}
\lvert \mathsf{CComp}_{2k+j, k, m} \rvert = \sum_{\ell\mid d} \frac{\ell}{k} \lvert \mathsf{PComp}_{(2k+j)/\ell, \, k/\ell, \, m/\ell} \rvert;
\end{equation*}
after all, every cyclic composition in $\mathsf{CComp}_{2k+j, k, m}$ is a concatenation of $\ell$ copies of a primitive cyclic composition with $k/\ell$ parts for some $\ell$ dividing $d$, and this primitive cyclic composition is the cyclic equivalence class of $k/\ell$ elements of $\mathsf{PComp}_{(2k+j)/\ell, \,k/\ell , \,m/\ell}$. We then have
{\allowdisplaybreaks\begin{align*}
\lvert \mathsf{CComp}_{2k+j, k, m} \rvert &= \sum_{\ell\mid d} \frac{\ell}{k} \lvert \mathsf{PComp}_{(2k+j)/\ell, \, k/\ell, \, m/\ell} \rvert \\
&= \sum_{\ell\mid d} \frac{\ell}{k} g\Big(\frac{d}{\ell}\Big) \\
&= \frac{1}{k}\sum_{\ell\mid d} \sum_{q\mid (d/\ell)} \mathsf{M\ddot{o}b}(q) \ell f\Big(\frac{d}{\ell q} \Big) \\
&= \frac{1}{k}\sum_{s\mid d} \sum_{\ell q = s} \mathsf{M\ddot{o}b}(q) \frac{s}{q} f\Big(\frac{d}{s}\Big) \\
&= \frac{1}{k}\sum_{s\mid d} \varphi(s) f\Big(\frac{d}{s}\Big),
\end{align*}}where the last step uses the well-known identity $\varphi(s) = \sum_{q\mid s} \mathsf{M\ddot{o}b}(q) s/q$.
If $j\geq 1$, then we have
$$f\Big(\frac{d}{s} \Big) = \lvert \mathsf{Comp}_{2(k/s)+j/s,\, k/s,\, m/s} \rvert = \frac{k}{j} N^{(j/s-1)}_{(k+j)/s-1,\, m/s}$$
by Lemma ~\ref{lemma:genbijection}, and if $j=0$, then we instead have
$$f\Big(\frac{d}{s} \Big) = \lvert \mathsf{Comp}_{2(k/s),\, k/s,\, m/s} \rvert = \binom{\frac{k}{s}-1}{\frac{m}{s}-1}\binom{\frac{k}{s}}{\frac{m}{s}}$$
by \eqref{e-compbin}; substituting appropriately completes the proof of parts (a) and (b).
We omit the proof of (c) as it is similar to that of (a).
\end{proof}
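Parts (a) and (b) of Proposition \ref{p-ccomp} can be checked directly for small parameters. In the following Python sketch (helper names ours), cyclic compositions are represented by canonical rotations and the totient is computed naively; to avoid fractions we compare $j\,\lvert \mathsf{CComp} \rvert$ (resp.\ $k\,\lvert \mathsf{CComp} \rvert$) with the corresponding sum.

```python
from math import comb, gcd

def compositions(n, k):
    if k == 1:
        if n >= 1:
            yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def cyclic_rep(c):
    return min(c[i:] + c[:i] for i in range(len(c)))

def totient(s):
    return sum(1 for a in range(1, s + 1) if gcd(a, s) == 1)

def binom(n, k):
    if n == -1 and k == -1:
        return 1
    return comb(n, k) if 0 <= k <= n else 0

def gen_narayana(n, k, r):
    return (r + 1) * binom(n + 1, k) * binom(n - r - 1, k - 1) // (n + 1)

def ccomp(n, k, m):
    """|CComp_{n,k,m}|: cyclic compositions of n into k parts, m of them >= 2."""
    reps = {cyclic_rep(c) for c in compositions(n, k)}
    return sum(1 for c in reps if sum(p >= 2 for p in c) == m)

# Part (a): j >= 1.
for k, j, m in [(2, 2, 2), (3, 2, 2), (2, 4, 2), (3, 3, 3), (4, 2, 2)]:
    d = gcd(k, m, j)
    rhs = sum(totient(s) * gen_narayana((k + j) // s - 1, m // s, j // s - 1)
              for s in range(1, d + 1) if d % s == 0)
    assert j * ccomp(2 * k + j, k, m) == rhs

# Part (b): j = 0.
for k, m in [(2, 2), (3, 3), (4, 2), (6, 3)]:
    d = gcd(k, m)
    rhs = sum(totient(s) * binom(k // s - 1, m // s - 1) * binom(k // s, m // s)
              for s in range(1, d + 1) if d % s == 0)
    assert k * ccomp(2 * k, k, m) == rhs
```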
\begin{rmk}
Proposition ~\ref{p-ccomp} has an interesting interpretation related to permutation enumeration, as $\lvert \mathsf{CComp}_{n, k, m} \rvert$ is the number of distinct cyclic descent sets among cyclic permutations of length $n$ with $k$ cyclic descents and $m$ cyclic peaks \textup{(}for all $1 \leq k<n$\textup{)}; see \cite{sagan2, sagan1, liang, sagan3} for definitions.
In particular, when $j\neq 0$ and $\mathsf{gcd}(k,j,m) = 1$, the number of cyclic descent classes among such cyclic permutations of length $2k+j$ is equal to a generalized Narayana number divided by $j$.
The case $j=\pm 1$ \textup{(}Lemmas ~\ref{lemma:part1_path} and ~\ref{lemma:part1_path2}\textup{)} yields a new interpretation of the \textup{(}ordinary\textup{)} Narayana numbers $N_{k,m}$ in terms of cyclic descent classes.
\end{rmk}
Finally, many of our results can be generalized to the numbers $w_{n, k_{1}, k_{2}, \ldots, k_{r}}$ which count Dyck paths of semilength $n$ with $k_{1}$ $UD$-factors, $k_{2}$ $UUD$-factors, \dots , and $k_{r}$ $U^{r}D$-factors. Let $\mathsf{Comp}_{n, k_1, k_2, \ldots, k_r}$ denote the set of compositions of $n$ with $k_{1}$ parts, exactly $k_{2}$ parts larger than $1$, $\ldots$, and exactly $k_{r}$ parts larger than $r-1$, and let $\mathsf{CComp}_{n,k_1, k_2, \ldots, k_r}$ be the set of corresponding cyclic compositions.
Using Lemma ~\ref{lemma:risecompsize} and the proofs of Lemmas ~\ref{lemma:part1_path} and \ref{lemma:part1_path2}, we obtain the following symmetries for $w_{n,k_1, k_2, \ldots, k_r}$.
\begin{cor} \label{c-evenmoretantalizing}
For all $r\geq 1$ and $1\leq m \leq k$, taking $k_1 = k_2 = \cdots = k_{r-1} = k$, we have
\begin{equation*}
w_{rk+1, k_1, k_2, \ldots, k_{r-1}, m} = w_{rk+1, k_{1}, k_{2}, \ldots, k_{r-1}, k+1-m}
\end{equation*}
and
\begin{equation*}
w_{rk-1, k_1, k_2, \ldots, k_{r-1}, m} = w_{rk-1, k_1, k_2, \ldots, k_{r-1}, k-m}.
\end{equation*}
\end{cor}
\begin{proof}
Let $[\mu] \in \mathsf{CComp}_{rk+1, k, k, \ldots, k, m}$. From Lemma ~\ref{lemma:risecompsize}, we have $\lvert \mathsf{D}[\mu] \rvert = \binom{rk+1}{k-1}$ as $[\mu]$ must be primitive. Note that $\lvert \mathsf{CComp}_{rk+1, k, k, \ldots, k, m} \rvert = N_{k,m}$, which can be shown via an argument analogous to that in the proof of Lemma ~\ref{lemma:part1_path}. Thus, we have $w_{rk+1, k, k, \ldots, k, m} = \binom{rk+1}{k-1}N_{k,m}$, which gives the desired symmetry in light of the Narayana symmetry. The symmetry for the numbers $w_{rk-1, k, k, \ldots, k, m}$ can be proven similarly.
\end{proof}
Note that the $r=1$ case of Corollary ~\ref{c-evenmoretantalizing} is the Narayana symmetry $N_{n,k} = N_{n,n+1-k}$, whereas setting $r=2$ recovers our symmetries for the numbers $w_{2k\pm 1,k,m}$.
In addition, as a direct consequence of our combinatorial proof for Theorem ~\ref{t-explicit}, we have the following formula for $w_{n, k_{1}, k_{2}, \ldots, k_{r}}$.
\begin{cor}\label{c-wr}
Let $r\geq 1$, $k_1 \geq k_2 \geq \cdots \geq k_r \geq 0$, and $n \geq k_1 + k_2 + \cdots + k_r$.
For convenience, write $\hat{k} = k_1 + k_2 + \cdots + k_{r-1}$. Then
\begin{multline*}
w_{n,k_1,k_2, \ldots, k_r} = \\
\begin{cases}
\displaystyle{\frac{1}{k_{1}}\binom{n}{k_{1}-1} \binom{n-\hat{k}-1}{k_r-1} \binom{k_{1}}{k_1-k_2, k_2-k_3, \ldots, k_{r-1}-k_{r}, k_r }}, & \text{ if } k_r>0, \\[10bp]
\displaystyle{\frac{1}{k_{1}}\binom{n}{k_{1}-1}\binom{k_{1}}{k_1-k_2, k_2-k_3, \ldots, k_{r-1}-k_{r}, k_r }}, & \text{ if } k_{r} = 0 \text{ and } n = \hat{k}, \\[2bp]
0, & \text{ otherwise.}
\end{cases}
\end{multline*}
\end{cor}
Corollary \ref{c-wr} specializes to the formula for Narayana numbers upon setting $r=1$, and to Theorem \ref{t-explicit} for $r=2$.
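The $r=2$ specialization of Corollary \ref{c-wr} (equivalently, Theorem \ref{t-explicit}) can be checked by brute force for small $n$. The following Python sketch, which is illustrative and not part of the paper, enumerates Dyck paths directly and compares the counts of $UD$- and $UUD$-factors against the closed formula.

```python
from fractions import Fraction
from math import comb

def dyck_paths(n):
    """Yield all Dyck paths of semilength n as strings over {'U', 'D'}."""
    def rec(path, ups, downs):
        if ups == n and downs == n:
            yield path
            return
        if ups < n:
            yield from rec(path + 'U', ups + 1, downs)
        if downs < ups:
            yield from rec(path + 'D', ups, downs + 1)
    yield from rec('', 0, 0)

def count_factor(path, factor):
    """Count occurrences of factor as a (possibly overlapping) substring."""
    return sum(path.startswith(factor, i) for i in range(len(path)))

def w_formula(n, k, m):
    """w_{n,k,m} = (1/k) C(n,k-1) C(n-k-1,m-1) C(k,m), the r = 2 case."""
    if n == k:
        return 1 if m == 0 else 0      # only the path (UD)^n
    if n < k or k < 1 or m < 1 or m > min(k, n - k):
        return 0
    q = Fraction(comb(n, k - 1) * comb(n - k - 1, m - 1) * comb(k, m), k)
    assert q.denominator == 1          # the formula always yields an integer
    return int(q)

def w_brute(n):
    """Tally Dyck paths of semilength n by (#UD-factors, #UUD-factors)."""
    counts = {}
    for p in dyck_paths(n):
        key = (count_factor(p, 'UD'), count_factor(p, 'UUD'))
        counts[key] = counts.get(key, 0) + 1
    return counts
```

For example, `w_brute(3)` gives `{(1, 1): 1, (2, 1): 3, (3, 0): 1}`, matching $w_{3,2,1}=3$.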
\section{Polynomials}\label{polynomials}
\subsection{Real-rootedness}
A natural question is whether or not the sequence $\{ w_{n,k,m} \}_{0 \leq m \leq k}$, for a fixed $n$ and $k$, is unimodal.
In other words, for fixed $n$ and $k$, does there always exist $0 \leq j \leq k$ such that
$$w_{n,k,0} \leq w_{n,k, 1} \leq \cdots \leq w_{n,k,j} \geq w_{n, k, j+1} \geq \cdots \geq w_{n,k,k}?$$
One powerful way to prove unimodality results in combinatorics is through real-rootedness.
A polynomial with coefficients in $\mathbb{R}$ is said to be \emph{real-rooted} if all of its roots are in $\mathbb{R}$.
(We use the convention that constant polynomials are also real-rooted.)
It is well known that if a polynomial with non-negative coefficients is real-rooted, then the sequence of its coefficients is unimodal (see~\cite{Branden2015}, for example).
Let $W_{n,k}(t)$ be the polynomial defined by
\begin{align*}
W_{n,k}(t) = \sum_{m=0}^{k} w_{n,k,m}t^m.
\end{align*}
In what follows, we prove that the polynomials $W_{n,k}(t)$ are real-rooted, thus implying the unimodality of the sequences $\{ w_{n,k,m} \}_{0 \leq m \leq k}$.
We begin with a simple result involving the roots of $W_{n,k}(t)$.
\begin{prop}\label{prop:reflection}
For all $1 \leq k \leq n-1$, the polynomials $W_{n,k}(t)$ and $W_{n,n-k}(t)$ have the same roots.
\end{prop}
\begin{proof}
This follows from the fact that $w_{n,k,m}= \frac{k(k+1)}{(n-k)(n-k+1)} w_{n,n-k,m}$, which is readily verified from Theorem ~\ref{t-explicit}.
\end{proof}
To prove the real-rootedness of the $W_{n,k}(t)$, we make use of Malo's result regarding the roots of the Hadamard product of two real-rooted polynomials.
\begin{thm}[\cite{malo}]\label{thm:Malo}
Let $f(t) = \sum_{i = 0}^{m} a_{i} t^{i}$ and $g(t) = \sum_{i=0}^{n} b_{i} t^{i}$ be real-rooted polynomials in $\mathbb{R}[t]$ such that all the roots of $g$ have the same sign.
Then their Hadamard product
$$f\ast g = \sum_{i = 0}^{\ell} a_{i}b_{i} t^{i},$$
where $\ell = \min\{m,n\}$, is real-rooted.
\end{thm}
\begin{thm}\label{thm:real-rooted}
For all $n, k \geq 0$, the polynomials $W_{n,k}(t)$ are real-rooted.
\end{thm}
\begin{proof}
From Theorem ~\ref{t-explicit}, we have
\begin{equation}
W_{n,k}(t) = \begin{cases}
0, & \text{ if } n < k,\\
1, & \text{ if } n = k,\\
\frac{1}{k}\binom{n}{k-1}\sum_{m = 1}^{\min\{k, n-k \}} \binom{n-k-1}{m-1} \binom{k}{m} t^m, & \text{ if } n > k.
\end{cases}
\end{equation}
Thus it suffices to check that the polynomial $\sum_{m = 1}^{\min\{k, n-k\}} \binom{n-k-1}{m-1}\binom{k}{m} t^m$ is real-rooted, which follows from applying Theorem~\ref{thm:Malo} to $f(t) = t(t-1)^{n-k-1}$ and $g(t) = (t-1)^k$.
\end{proof}
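To see the Hadamard product at work, one can expand $f(t)=t(t-1)^{n-k-1}$ and $g(t)=(t-1)^{k}$ and check that the coefficientwise product recovers, up to sign, exactly the coefficients $\binom{n-k-1}{m-1}\binom{k}{m}$ appearing in the proof. The following Python sketch (illustrative only) does this:

```python
from math import comb

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (index = degree)."""
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] += x * y
    return res

def t_minus_1_pow(e):
    """Coefficients of (t-1)^e."""
    p = [1]
    for _ in range(e):
        p = poly_mul(p, [-1, 1])
    return p

def c(a, b):
    """Binomial coefficient with the convention C(a,b)=0 for b<0 or b>a."""
    return comb(a, b) if 0 <= b <= a else 0

def hadamard_abs(n, k):
    """|a_m b_m| for f(t) = t(t-1)^{n-k-1} and g(t) = (t-1)^k."""
    f = [0] + t_minus_1_pow(n - k - 1)    # multiplying by t shifts degrees up by 1
    g = t_minus_1_pow(k)
    ell = min(len(f), len(g))
    return [abs(f[m] * g[m]) for m in range(ell)]
```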
More generally, we conjecture that the polynomials $W_{n,k}(t)$ satisfy stronger conditions, which we now define.
For two real-rooted polynomials $f$ and $g$, let $\{u_i\}$ be the roots of $f$ and $\{v_i\}$ the roots of $g$, both in non-increasing order. We say that $g$ \emph{interlaces} $f$, denoted by $g \rightarrow f$, if either $\deg(f) = \deg(g)+1 = d$ and
$$u_{d} \leq v_{d-1} \leq u_{d-1} \leq \cdots \leq v_{1} \leq u_{1},$$
or if $\deg(f) = \deg(g) = d$ and
$$v_{d} \leq u_{d} \leq v_{d-1} \leq u_{d-1} \leq \cdots \leq v_{1} \leq u_{1}.$$
(By convention, we assume that a constant polynomial interlaces with every real-rooted polynomial.)
We say that a sequence of real-rooted polynomials $f_{1}, f_{2}, \ldots$ is a \emph{Sturm sequence} if $f_{1} \rightarrow f_{2} \rightarrow \cdots.$
Moreover, a finite sequence of real-rooted polynomials $f_{1}, f_{2}, \ldots, f_{n}$ is said to be \emph{Sturm-unimodal} if there exists $1 \leq j \leq n$ such that
$$f_{1} \rightarrow f_{2} \rightarrow \cdots \rightarrow\ f_{j} \leftarrow f_{j+1} \leftarrow \cdots \leftarrow f_{n}.$$
\begin{conj}
For any fixed $k \geq 1$, the polynomials $\{W_{n,k}(t)\}_{n \geq k}$ form a Sturm sequence.
\end{conj}
\begin{conj}
For any fixed $n \geq 1$, the sequence $\{W_{n,k}(t)\}_{1\leq k \leq n}$ is Sturm-unimodal.
\end{conj}
Our result expressing the numbers $w_{n,k,m}$ in terms of generalized Narayana numbers has a natural polynomial analogue. Let $\operatorname{Nar}^{(r)}_k(t)$ denote the $k$th \textit{$r$-generalized Narayana polynomial} defined by
$$\operatorname{Nar}^{(r)}_k(t)=\sum_{m=0}^{k-r}N^{(r)}_{k,m}t^m=\frac{r+1}{k+1}\sum_{m=0}^{k-r}\binom{k+1}{m}\binom{k-r-1}{m-1}t^m.$$
Setting $r=0$ recovers the usual \emph{Narayana polynomials} $\operatorname{Nar}_{k}(t) = \sum_{m = 0}^{k} N_{k,m} t^m$.
From Theorem ~\ref{thm:simplify} and straightforward computations, we have the following expressions for $W_{n,k}(t)$.
\begin{prop} \label{prop:polysimplify}
Let $k\geq 1$.\vspace{5bp}
\begin{enumerate}
\item[(a)] For all $j\geq1$, we have $\displaystyle{W_{2k+j,k}(t)= \frac{1}{j}\binom{2k+j}{k-1}\operatorname{Nar}^{(j-1)}_{k+j-1}(t)}$.\vspace{5bp}
\item[(b)] We have $\displaystyle{W_{2k,k}(t) = C_k \sum_{m=1}^{k}\binom{k-1}{m-1}\binom{k}{m}t^m}$ where $C_{k}$ denotes the $k$th Catalan number.\vspace{5bp}
\item[(c)] For all $1\leq j\leq k$, we have $\displaystyle{W_{2k-j,k}(t)= \frac{1}{j}\binom{2k-j}{k-1}\operatorname{Nar}^{(j-1)}_{k-1}(t)}$.
\end{enumerate}
\end{prop}
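The identities of Proposition \ref{prop:polysimplify} can be verified coefficientwise with exact rational arithmetic. The sketch below (illustrative, not part of the paper) compares the explicit formula for $w_{n,k,m}$ against the generalized Narayana numbers $N^{(r)}_{k,m}$; part (c) is checked for $1\le j<k$, since the boundary case $j=k$ requires a binomial-coefficient convention.

```python
from fractions import Fraction
from math import comb

def c(a, b):
    """Binomial coefficient, zero outside the usual range."""
    return comb(a, b) if 0 <= b <= a else 0

def w(n, k, m):
    """w_{n,k,m} from the explicit formula."""
    if n == k:
        return Fraction(1 if m == 0 else 0)
    if n < k or k < 1:
        return Fraction(0)
    return Fraction(comb(n, k - 1) * c(n - k - 1, m - 1) * c(k, m), k)

def nar(r, K, m):
    """Generalized Narayana number N^{(r)}_{K,m}."""
    return Fraction((r + 1) * c(K + 1, m) * c(K - r - 1, m - 1), K + 1)

def catalan(k):
    return comb(2 * k, k) // (k + 1)
```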
The real-rootedness of the $r$-generalized Narayana polynomials was recently shown in \cite{chen-yang-zhao} using a different approach; Proposition \ref{prop:polysimplify} shows that the real-rootedness of the $W_{n,k}(t)$ implies the real-rootedness of the $\operatorname{Nar}^{(r)}_{k}(t)$, thus giving an alternative proof of this result.
\subsection{Symmetry, \texorpdfstring{$\gamma$}{gamma}-positivity, and a symmetric decomposition} It is fitting that we end this paper by returning full circle to the topic of symmetry. First, we note that our symmetries for the numbers $w_{2k+1,k,m}$ and $w_{2k-1,k,m}$ immediately imply the following:
\begin{prop}\label{prop:symmetric}
The polynomials $W_{2k+1,k}(t)$ and $W_{2k-1,k}(t)$ are symmetric.
\end{prop}
A symmetric polynomial of degree $d$ can be written uniquely as a linear combination of the polynomials $\{ t^j(1+t)^{d-2j} \}_{0 \leq j \leq \lfloor d/2 \rfloor}$, referred to as the \emph{gamma basis}. A symmetric polynomial is called \emph{$\gamma$-positive} if its coefficients in the gamma basis are nonnegative. Gamma-positivity has shown up in many combinatorial and geometric contexts; see \cite{Athanasiadis2018} for a thorough survey. It is well known that the coefficients of a $\gamma$-positive polynomial form a unimodal sequence, and that $\gamma$-positivity is connected to real-rootedness in the following manner:
\begin{thm}[\cite{Branden2004}] \label{thm:gammapositive}
If $f$ is a real-rooted, symmetric polynomial with nonnegative coefficients, then $f$ is $\gamma$-positive.
\end{thm}
The $\gamma$-positivity of the polynomials $W_{2k+1,k}(t)$ and $W_{2k-1,k}(t)$ then follows directly from Theorem ~\ref{thm:real-rooted}, Proposition \ref{prop:symmetric}, and Theorem \ref{thm:gammapositive}. Furthermore, we can get explicit formulas for their gamma coefficients by exploiting their connection to the Narayana polynomials.
\begin{prop}\label{cor:gamma-positivity}
The polynomials $W_{2k+1,k}(t)$ and $W_{2k-1,k}(t)$ are $\gamma$-positive for all $k\geq 1$. More precisely, we have the following gamma expansions:
\vspace{5bp}
\begin{enumerate}
\item[(a)] $\displaystyle{W_{2k+1,k}(t)=\sum_{j=1}^{\lfloor \frac{k+1}{2}\rfloor} \binom{2k+1}{k-1} \frac{(k-1)!}{(k-2j+1)!\, (j-1)!\, j!} \, t^j(1+t)^{k+1-2j}}$ for all $k\geq 1$\textup{;} \vspace{5bp}
\item[(b)] $\displaystyle{W_{2k-1,k}(t)=\sum_{j=1}^{\lfloor \frac{k}{2}\rfloor} \binom{2k-1}{k-1} \frac{(k-2)!}{(k-2j)!\, (j-1)!\, j!} \, t^j(1+t)^{k-2j}}$ for all $k\geq 2$.
\end{enumerate}
\end{prop}
\begin{proof}
The Narayana polynomials are known to be $\gamma$-positive with gamma expansion
$$\operatorname{Nar}_k (t)=\sum_{j=1}^{\lfloor \frac{k+1}{2} \rfloor} \frac{(k-1)!}{(k-2j+1)!\, (j-1)!\, j!} \, t^j(1+t)^{k+1-2j}$$
for all $k\geq 1$ \cite[Theorem 2.32]{Athanasiadis2018}, and this implies the desired result by the $j=1$ case of Proposition ~\ref{prop:polysimplify} (a) and (c).
\end{proof}
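Gamma expansion (a) can also be confirmed directly by expanding the gamma basis; the Python sketch below (illustrative only) does so with exact arithmetic.

```python
from fractions import Fraction
from math import comb, factorial

def c(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def W_coeffs(k):
    """Coefficients of W_{2k+1,k}(t): w_{2k+1,k,m} = (1/k) C(2k+1,k-1) C(k,m-1) C(k,m)."""
    return [Fraction(comb(2 * k + 1, k - 1) * c(k, m - 1) * c(k, m), k)
            for m in range(k + 2)]

def gamma_coeffs(k):
    """Expand the claimed gamma expansion of W_{2k+1,k}(t)."""
    total = [Fraction(0)] * (k + 2)
    for j in range(1, (k + 1) // 2 + 1):
        g = Fraction(comb(2 * k + 1, k - 1) * factorial(k - 1),
                     factorial(k - 2 * j + 1) * factorial(j - 1) * factorial(j))
        for i in range(k + 1 - 2 * j + 1):          # expand t^j (1+t)^{k+1-2j}
            total[i + j] += g * comb(k + 1 - 2 * j, i)
    return total
```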
\begin{rmk}
The gamma coefficients of the Narayana polynomials have a nice combinatorial interpretation in terms of lattice paths: $\frac{(k-1)!}{(k-2j+1)!\, (j-1)!\, j!}$ is the number of Motzkin paths of length $k-1$ with $j-1$ up steps \cite{blanco}. We can use this fact to give combinatorial interpretations of the gamma coefficients of $W_{2k+1,k}(t)$ and $W_{2k-1,k}(t)$. It would be interesting to find a combinatorial proof for Proposition ~\ref{cor:gamma-positivity} using Motzkin paths, perhaps in the vein of the ``valley-hopping'' proof for the $\gamma$-positivity of Narayana polynomials \cite{hoppityhophop}.
\end{rmk}
While the polynomials $W_{n,k}(t)$ are not symmetric in general, it turns out that we can always express $W_{n,k}(t)$ as the sum of two symmetric polynomials.
\begin{thm}\label{conj:symmetric_decomposition}
Let $1\leq k \leq n$, and let $\widetilde{m}=\operatorname{deg}W_{n,k}(t)$. Then there exist symmetric polynomials $W_{n,k}^+(t)$ and $W_{n,k}^-(t)$, both with nonnegative coefficients, such that:
\begin{enumerate}
\item[(a)] if $n=\widetilde{m}+k$ and $\widetilde{m}\neq k$, then $W_{n,k}(t)= W_{n,k}^+(t) -tW_{n,k}^-(t)$;
\item[(b)] if $n=\widetilde{m}+k$ and $\widetilde{m}=k$, then $W_{n,k}(t)= W_{n,k}^+(t) + tW_{n,k}^-(t)$;
\item[(c)] and if $n>\widetilde{m}+k$, then $W_{n,k}(t)= -W_{n,k}^+(t)+tW_{n,k}^-(t)$.
\end{enumerate}
\end{thm}
\begin{proof}
Since $n$ and $k$ are fixed, let us simplify notation by writing $w_m$ in place of $w_{n,k,m}$, so that $W_{n,k}(t)=\sum_{m=0}^k w_m t^m$. Let
\begin{equation}\label{w+}
w^+_{i+1}=\Big( \sum_{j=0}^{i+1} w_j \Big) - \Big( \sum_{j=0}^{i} w_{k-j} \Big)
\quad \text{and} \quad
w^-_{i}= - \Big( \sum_{j=0}^{i} w_j \Big) + \Big( \sum_{j=0}^{i} w_{k-j} \Big)
\end{equation}
for all $0 \leq i \leq k-1$ in the first formula and all $0 \leq i \leq k$ in the second; also take $w^+_0=1$ when $k=n$ and $w^+_0=0$ otherwise. Observe that
\begin{align}
w^+_i+ w^-_{i-1} &= \Big( \sum_{j=0}^{i} w_j \Big) - \Big( \sum_{j=0}^{i-1} w_{k-j} \Big) - \Big( \sum_{j=0}^{i-1} w_j \Big) + \Big( \sum_{j=0}^{i-1} w_{k-j} \Big) = w_i, \label{eq:sum} \\
w^+_i-w^+_{k-i} &= \Big( \sum_{j=0}^{i} w_j \Big) - \Big( \sum_{j=0}^{i-1} w_{k-j} \Big) - \Big( \sum_{j=0}^{k-i} w_j \Big) + \Big( \sum_{j=0}^{k-i-1} w_{k-j} \Big) = 0, \quad \text{and} \label{eq:sym1} \\
w^-_{i-1}-w^-_{k-i} &= - \Big( \sum_{j=0}^{i-1} w_j \Big) + \Big( \sum_{j=0}^{i-1} w_{k-j} \Big) + \Big( \sum_{j=0}^{k-i} w_j \Big) - \Big( \sum_{j=0}^{k-i} w_{k-j} \Big) = 0. \label{eq:sym2}
\end{align}
A standard induction argument utilizing the explicit formula in Theorem ~\ref{t-explicit} yields the following:
\begin{itemize}
\item the $w_i^+$ are positive when $n=\widetilde{m}+k$,
\item the $w_i^+$ are negative when $n>\widetilde{m}+k$,
\item the $w_i^-$ are positive when $n=\widetilde{m}+k$ for $\widetilde{m}=k$ and when $n>\widetilde{m}+k$, and
\item the $w_i^-$ are negative when $n=\widetilde{m}+k$ for $\widetilde{m}\neq k$.
\end{itemize}
Define the polynomials $W_{n,k}^{+}(t)$ and $W_{n,k}^{-}(t)$ by
\begin{align*}
W_{n,k}^{+}(t) = \sum_{i=0}^{\widetilde{m}}|w_i^{+}|\, t^{i}\quad\text{and}\quad W_{n,k}^{-}(t)=\sum_{i=0}^{k}|w_i^{-}|\, t^{i},
\end{align*}
respectively. These polynomials are symmetric by (\ref{eq:sym1}) and (\ref{eq:sym2}), and the decompositions given in (a)--(c) hold by construction in light of (\ref{eq:sum}).
\end{proof}
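The bookkeeping in this proof can be checked mechanically. The sketch below (illustrative; it computes $w^{+}_{0},\ldots,w^{+}_{k}$ and $w^{-}_{0},\ldots,w^{-}_{k}$ following the proof) verifies the three displayed identities for small $n$ and $k$:

```python
from fractions import Fraction
from math import comb

def c(a, b):
    return comb(a, b) if 0 <= b <= a else 0

def w_list(n, k):
    """[w_{n,k,0}, ..., w_{n,k,k}] via the explicit formula."""
    if n == k:
        return [Fraction(1)] + [Fraction(0)] * k
    return [Fraction(comb(n, k - 1) * c(n - k - 1, m - 1) * c(k, m), k)
            for m in range(k + 1)]

def plus_minus(n, k):
    """Compute w^+_0..w^+_k and w^-_0..w^-_k as in the proof."""
    w = w_list(n, k)
    wp = [Fraction(1) if n == k else Fraction(0)]          # w^+_0
    for i in range(1, k + 1):
        wp.append(sum(w[: i + 1]) - sum(w[k - j] for j in range(i)))
    wm = [-sum(w[: i + 1]) + sum(w[k - j] for j in range(i + 1))
          for i in range(k + 1)]
    return w, wp, wm
```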
We end with a couple of conjectures concerning the polynomials $W_{n,k}^{+}(t)$ and $W_{n,k}^{-}(t)$ arising from our symmetric decomposition. Our first conjecture, concerning real-rootedness, has been numerically verified for all $n\leq100$.
\pagebreak
\begin{conj}
The following completely characterizes when $W_{n,k}^{+}(t)$ and
$W_{n,k}^{-}(t)$ are both real-rooted\textup{:}
\begin{enumerate}
\item [(a)] If $n=1$ or $n=2$, then $W_{n,k}^{+}(t)$ and $W_{n,k}^{-}(t)$ are both real-rooted if and only if $k=1$.
\item [(b)] If $n>1$ and $n \equiv 1 \textup{ (mod 4)}$, then $W_{n,k}^{+}(t)$ and $W_{n,k}^{-}(t)$ are both real-rooted if and only if $k=1,2,\left\lfloor n/2\right\rfloor -1,\left\lfloor n/2\right\rfloor ,\text{or }\left\lfloor n/2\right\rfloor +1$.
\item [(c)] If $n \equiv 3 \textup{ (mod 4)}$, then $W_{n,k}^{+}(t)$ and $W_{n,k}^{-}(t)$ are both real-rooted if and only if $k=1,2,\left\lfloor n/2\right\rfloor -1,\left\lfloor n/2\right\rfloor ,\left\lfloor n/2\right\rfloor +1,\text{or }\left\lfloor n/2\right\rfloor +2$.
\item [(d)] If $n$ is even and not equal to $2$, $10$, $12$, or $16$, then $W_{n,k}^{+}(t)$ and $W_{n,k}^{-}(t)$ are both real-rooted if and only if $k=1,2,n/2-1,n/2,\text{or }n/2+1$.
\item [(e)] If $n=10$, $12$, or $16$, then $W_{n,k}^{+}(t)$ and $W_{n,k}^{-}(t)$ are both real-rooted if and only if $k=1,2,n/2-2,n/2-1,n/2,\text{or }n/2+1$.
\end{enumerate}
\end{conj}
To establish real-rootedness, it would be helpful to have formulas for the polynomials $W_{n,k}^{+}(t)$ or $W_{n,k}^{-}(t)$. Our next conjecture gives formulas for $W_{2k,k}^{+}(t)$, $W_{2k,k}^{-}(t)$, $W_{2k,k+1}^{-}(t)$, and $W_{2k,k-1}^{+}(t)$. Unfortunately, we do not have conjectured formulas for any of the other polynomials.
\begin{conj} \label{conj-W2k}
Let $k\geq 1$.
\begin{enumerate}
\item[(a)] We have
\[
W_{2k,k}^{+}(t)=(k-1)C_{k}\Nar_{k-1}(t)\quad\text{and}\quad W_{2k,k}^{-}(t)=C_{k}\sum_{i=0}^{k-1}\binom{k-1}{i}^{2}t^{i}.
\]
\item[(b)] If $k\geq 2$, we have
\[
W_{2k,k+1}^{-}(t)=\binom{2k}{k}\Nar_{k-1}(t) \quad \text{and} \quad
W_{2k,k-1}^{+}(t)=-\frac{t}{2}\binom{2k}{k-2}\overline{\Nar}_{k-2}^{(1)}(t)
\]
where $\overline{\Nar}_{k}^{(j)}(t)=\sum_{i=0}^{k-j}\frac{j+1}{k+1}\binom{k+1}{i}\binom{k+1}{i+j+1}t^{i}$.
\end{enumerate}
\end{conj}
The $\overline{\Nar}_{k}^{(j)}(t)$ defined in Conjecture \ref{conj-W2k} form a family of generalized Narayana polynomials \cite{yang} different from Callan's.
\iffalse
Since we have shown that $W_{2k,k}(t)=C_{k}\sum_{m=1}^{k}\binom{k-1}{m-1}\binom{k}{m}t^{m}$,
this conjecture implies the identity
\[
\binom{k-1}{m-1}\binom{k}{m}=(k-1)N_{k-1,m}+\binom{k-1}{m-1}^{2}\qquad(1\leq m\leq k).
\]
The following is a conjectured formula for $W_{2k,k+1}^{-}(t)$.
\begin{conj}
Suppose that $k\geq2$.
Then $W_{2k,k+1}^{-}(t)=\binom{2k}{k}\Nar_{k-1}(t)$.
\end{conj}
Now, recall that $\Nar_{k}^{(j)}(t)$ denotes the generalized Narayana polynomials introduced by Callan.
In the next conjecture, we will see an appearance of a different family of generalized Narayana polynomials, which we denote $\overline{\Nar}_{k}^{(j)}(t)$ and is defined by
\[
\overline{\Nar}_{k}^{(j)}(t)\coloneqq\sum_{i=0}^{k-j}\frac{j+1}{k+1}\binom{k+1}{i}\binom{k+1}{i+j+1}t^{i}.
\]
\begin{conj}
Suppose that $k\geq2$.
Then $W_{2k,k-1}^{+}(t)=-\frac{t}{2}\binom{2k}{k-2}\overline{\Nar}_{k-2}^{(1)}(t)$.
\end{conj}
\fi
\section*{Acknowledgments}
The authors thank the American Mathematical Society and the organizers of the MRC on Trees in Many Contexts, where this work was initiated, as well as David Callan for helpful discussions.
This material is based upon work supported by the National Science Foundation under Grant Number DMS-1641020.
\bibliographystyle{plain}
% Source: https://arxiv.org/abs/1401.5766
% Title: On matrix balancing and eigenvector computation
\begin{abstract}
Balancing a matrix is a preprocessing step while solving the nonsymmetric eigenvalue problem. Balancing a matrix reduces the norm of the matrix and hopefully this will improve the accuracy of the computation. Experiments have shown that balancing can improve the accuracy of the computed eigenvalues. However, there exist examples where balancing increases the eigenvalue condition number (potential loss in accuracy), deteriorates eigenvector accuracy, and deteriorates the backward error of the eigenvalue decomposition. In this paper we propose a change to the stopping criterion of the LAPACK balancing algorithm, \texttt{GEBAL}. The new stopping criterion is better at determining when a matrix is nearly balanced. Our experiments show that the new algorithm is able to maintain good backward error, while improving the eigenvalue accuracy when possible. We present stability analysis, numerical experiments, and a case study to demonstrate the benefit of the new stopping criterion.
\end{abstract}
\section{Introduction}
For a given vector norm $\|\cdot\|$, an $n$-by-$n$ (square) matrix $A$ is said to
be {\it balanced} if and only if, for all $i$ from 1 to $n$, the norm of its
$i$-th column and the norm of its $i$-th row are equal.
$A$ and $\widetilde{A}$ are said to be {\em diagonally similar} if there exists a
nonsingular diagonal matrix $D$ such that $\widetilde{A} = D^{-1} A D $.
Computing $D$ (and/or $\widetilde{A}$) such that $\widetilde{A}$ is balanced is called {\em balancing} $A$, and
$\widetilde{A}$ is called {\em the balanced matrix}.
Using the 2-norm, given an $n$-by-$n$ (square) matrix $A$,
Osborne~\cite{Osborne:1960:PM:321043.321048} proved that there exists a unique
matrix $\widetilde{A}$ such that $A$ and $\widetilde{A}$ are diagonally similar
and $\widetilde{A}$ is balanced. Osborne~\cite{Osborne:1960:PM:321043.321048}
also provides a practical iterative algorithm for computing $\widetilde{A}$.
Parlett and Reinsch~\cite{parlett:1969:NM} (among other things) proposed to approximate $D$ with $D_2$, a
diagonal matrix containing only powers of two on the diagonal. While $D_2$ is only an approximation
of $D$, $\widetilde{A}_2 = D_2^{-1} A D_2 $ is ``balanced enough'' for all practical purposes. The key idea
is that there is no floating-point error in computing $\widetilde{A}_2$ on a base-two computing machine.
Balancing is a standard preprocessing step while solving the nonsymmetric eigenvalue
problem (see for example the subroutine
DGEEV in the LAPACK library). Indeed, since $A$ and $\widetilde{A}$ are
similar, the eigenvalue problem for $A$ is equivalent to the eigenvalue problem for
$\widetilde{A}$. Balancing is used in the hope of improving the accuracy of the
computation~\cite{templatesSolutions:2000:siam,chen:1998:masters,chen:2001:phd,chen:2000:LAA}.
However, there are examples where the
eigenvalue accuracy~\cite{watkins:2006:case}, eigenvector
accuracy~\cite{LAPACK-forum-4270}, and backward error will deteriorate. We
will demonstrate these examples throughout the paper.
What balancing is known to do is decrease the norm of the matrix. Currently, the stopping criterion
for the LAPACK balancing routine, \texttt{GEBAL}, is whether a step in the algorithm fails to provide a
significant reduction in the norm of the matrix (ignoring the diagonal elements). Ignoring the
diagonal elements is reasonable since they are unchanged by a diagonal similarity transformation.
However, it is our observation that this stopping criterion
is not strict enough and at times allows a matrix to be ``over-balanced''.
We propose a simple fix to the stopping criterion. Including the diagonal elements and balancing with respect to
the 2-norm solves the problem of deteriorating backward error in our test cases. Using the
2-norm also prevents balancing a Hessenberg matrix, which previously was an example of
balancing deteriorating the eigenvalue condition number \cite{watkins:2006:case}.
Other work on balancing was done by Chen and Demmel~\cite{chen:1998:masters,chen:2001:phd, chen:2000:LAA}.
They extended the balancing literature to sparse matrices.
They also looked at using a weighted norm to balance the matrix.
For nonnegative matrices, if the weight vector is taken to be the Perron vector (which is nonnegative),
then the largest eigenvalue has perfect conditioning. However, little can be said about the
conditioning of the other eigenvalues. This is the only example in the literature
that guarantees balancing improves the condition number of any of the eigenvalues.
The rest of the paper is organized as follows. In Section~\ref{sec:background}
we provide background and review the current literature on balancing.
In Section~\ref{sec:casestudy} we introduce a simple example
to illustrate why balancing can deteriorate the backward error (and the accuracy of the eigenvectors).
In Section~\ref{sec:backerror} we provide backward error analysis.
Finally, in Section~\ref{sec:experiments} we look at a few test cases
and conclude that using the 2-norm in the balancing algorithm is the
best solution.
\section{The Balancing Algorithm}\label{sec:background}
Algorithm~\ref{alg:osborne} shows Osborne's original balancing algorithm,
which balances a matrix in the 2-norm~\cite{Osborne:1960:PM:321043.321048}.
The algorithm assumes that the matrix is {\it irreducible}. A matrix is {\it reducible} if
there exists a permutation matrix, $P$, such that,
\begin{equation}\label{eq:reduce}
P A P^T = \left( \begin{array}{cc} A_{11} & A_{12} \\ 0 & A_{22} \end{array} \right),
\end{equation}
where $A_{11}$ and $A_{22}$ are square matrices. A diagonal similarity transformation can
make the off-diagonal block, $A_{12}$, arbitrarily small with $D = \textmd{diag}(\alpha I, I)$ for
arbitrarily large $\alpha$. For convergence of the balancing algorithm it is necessary for the
elements of $D$ to be bounded. For reducible matrices, the diagonal blocks can be balanced
independently.
\begin{algorithm}[htbp]
\SetKwInOut{input}{Input}
\SetKwInOut{output}{Output}
\SetKwInOut{notes}{Notes}
\Indm
\input{An irreducible matrix $A \in \R[n][n]$.}
\notes{$A$ is overwritten by $D^{-1} A D$, and converges to unique balanced matrix. $D$ also
converges to a unique nonsingular diagonal matrix.}
\BlankLine
\Indp
$D \leftarrow I$ \\
\For{ $k \leftarrow 0,1,2,\dots$ } {
\For{ $i \leftarrow 1,\dots, n$ } {
$\displaystyle c \leftarrow \sqrt{ \sum_{j\ne i} |a_{j,i}|^2 }, \;\;
r \leftarrow \sqrt{ \sum_{j\ne i} |a_{i,j}|^2 }$ \\ \label{alg1.line1}
$\displaystyle f \leftarrow \sqrt{\frac{r}{c}} $ \\ \label{alg1.line2}
$d_{ii} \leftarrow f \times d_{ii}$ \\
$ A(:,i) \leftarrow f \times A(:,i), \;\;
A(i,:) \leftarrow A(i,:)/f$ \\
}
}
\caption{ Balancing (Osborne)}
\label{alg:osborne}
\end{algorithm}
The balancing algorithm operates on columns and rows
of $A$ in a cyclic fashion.
In line~\ref{alg1.line1}, the 2-norm (ignoring the diagonal element)
of the current column and row are calculated.
Line~\ref{alg1.line2} calculates the scalar, $f$, that will be used to update $d_{ii}$,
column $i$, and row $i$. The quantity $f^2 c^2 + r^2 / f^2$ is minimized at $f = \sqrt{\frac{r}{c}}$,
and has a minimum value of $2rc$.
It is obvious that for this choice of $f$, the Frobenius norm is non-increasing since,
\[ \| A^{(nk+i)} \|_F^2 - \| A^{(nk+i+1)} \|_F^2 = ( c^2 + r^2 ) - ( 2rc ) = ( c - r )^2 \ge 0. \]
Osborne showed that Algorithm~\ref{alg:osborne}
converges to a unique balanced matrix and $D$ converges to a unique (up to a scalar multiple) nonsingular
diagonal matrix. The balanced matrix has minimal Frobenius norm among all diagonal similarity transformations.
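For concreteness, here is a plain-Python sketch of Algorithm \ref{alg:osborne} (an illustration, not LAPACK code); it assumes the input matrix is irreducible, so that $c$ and $r$ never vanish.

```python
import math

def osborne_balance(A, sweeps=100):
    """Cyclic 2-norm balancing (Osborne). A (list of lists) is modified in
    place; returns the diagonal entries of the accumulated D."""
    n = len(A)
    d = [1.0] * n
    for _ in range(sweeps):
        for i in range(n):
            c = math.sqrt(sum(A[j][i] ** 2 for j in range(n) if j != i))
            r = math.sqrt(sum(A[i][j] ** 2 for j in range(n) if j != i))
            f = math.sqrt(r / c)      # minimizes f^2 c^2 + r^2 / f^2
            d[i] *= f
            for j in range(n):
                A[j][i] *= f          # scale column i
            for j in range(n):
                A[i][j] /= f          # scale row i (a_ii is net unchanged)
    return d
```

After a few sweeps every off-diagonal row norm matches the corresponding column norm, the Frobenius norm has not increased, and the spectrum is preserved by similarity.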
Parlett and Reinsch~\cite{parlett:1969:NM} generalized the balancing algorithm to any $p$-norm.
Changing line~\ref{alg1.line1} to
\[
c \leftarrow \Big( \sum_{j\ne i} |a_{j,i}|^p \Big)^{1/p}, \;\;
r \leftarrow \Big( \sum_{j\ne i} |a_{i,j}|^p \Big)^{1/p},
\]
the algorithm will converge to a balanced matrix in the $p$-norm. Parlett and Reinsch, however, do not
quantify whether the norm of the matrix will be reduced or minimized. The norm will in fact
be minimized, only with respect to a non-standard matrix norm. Specifically, the balanced matrix $\widetilde{A}$ satisfies
\[
\| \textmd{vec}( \widetilde{A} ) \|_p = \min \left( \| \textmd{vec}( D^{-1} A D ) \|_p \mid D \in \mathcal{D} \right),
\]
\]
where $\mathcal{D}$ is the set of all nonsingular diagonal matrices and $\textmd{vec}$
stacks the columns of $A$ into an $n^2$ column vector. For $p = 2$, this is the Frobenius norm.
Parlett and Reinsch also
restrict the diagonal elements of $D$ to be powers of the radix base (typically 2).
This restriction ensures there is no computational error in the balancing algorithm.
The algorithm provides only an approximately balanced matrix. Doing
so without computational error is desirable and the exact balanced matrix is not
necessary. Finally, they introduce a stopping criterion for the algorithm. Algorithm~\ref{alg:parlett}
shows Parlett and Reinsch's algorithm for any $p$-norm. Algorithm~\ref{alg:parlett}
(balancing in the $1$-norm) is essentially what is currently implemented by LAPACK's
\texttt{GEBAL} routine.
\begin{algorithm}[htbp]
\SetKwInOut{input}{Input}
\SetKwInOut{output}{Output}
\SetKwInOut{notes}{Notes}
\Indm
\input{ A matrix $A \in \R[n][n]$. We will assume that $A$ is irreducible.}
\output{ A diagonal matrix $D$ and $A$ which is overwritten by $D^{-1} A D$. }
\notes{ $\beta$ is the radix base.
On output $A$ is nearly balanced in the $p$-norm. }
\BlankLine
\Indp
$D \leftarrow I$ \\
$converged \leftarrow 0$ \\
\While{ $converged = 0$ } {
$converged \leftarrow 1$ \\
\For{ $i \leftarrow 1,\dots,n$ } {
$\displaystyle c \leftarrow \Big( \sum_{j\ne i} |a_{j,i}|^p \Big)^{1/p}, \;\;
r \leftarrow \Big( \sum_{j\ne i} |a_{i,j}|^p \Big)^{1/p}$ \\ \label{alg2.line1}
$s \leftarrow c^p + r^p , \;\;
f \leftarrow 1 $ \\
\While{ $c < r/\beta$ } { \label{alg2.line2}
$\displaystyle c \leftarrow c\beta, \;\;
r \leftarrow r/\beta, \;\;
f \leftarrow f \times \beta $ \\
}
\While{ $c \ge r\beta$ } {
$\displaystyle c \leftarrow c/\beta, \;\;
r \leftarrow r\beta, \;\;
f \leftarrow f/\beta $ \\ \label{alg2.line3}
}
\If{ $(c^p + r^p) < 0.95 \times s$ } { \label{alg2.line4}
$ converged \leftarrow 0, \;\;
d_{ii} \leftarrow f \times d_{ii}$ \\
$ A(:,i) \leftarrow f \times A(:,i), \;\;
A(i,:) \leftarrow A(i,:)/f$ \\
}
}
}
\caption{ Balancing (Parlett and Reinsch) }
\label{alg:parlett}
\end{algorithm}
Again, the algorithm proceeds in a cyclic fashion operating on a single row/column at a time.
In each step, the algorithm seeks to minimize $f^p c^p + r^p / f^p$.
The exact minimum is obtained for $f = \sqrt{r/c}$.
Lines~\ref{alg2.line2}-\ref{alg2.line3} find the closest approximation to the exact minimizer.
At the completion of the second inner while loop, $f$ satisfies the inequality
\[ \beta^{-1}\left( \frac{r}{c} \right) \le f^2 < \left( \frac{r}{c}\right)\beta. \]
Line~\ref{alg2.line4} is the stopping criterion. It states that the current step must decrease
$f^p c^p + r^p / f^p$ to less than 95\% of the previous value. If this is not satisfied then the step is skipped.
The algorithm terminates when a complete cycle does not provide significant decrease.
Our observation is that this stopping criterion is not a good indication of the true decrease in the
norm of the matrix. If the diagonal element of the current
row/column is sufficiently larger than the rest of the entries, then balancing will be
unable to reduce the matrix norm by a significant amount (since the diagonal element is unchanged).
Including the diagonal element in the stopping criterion provides a better indication of the
relative decrease in norm.
This could be implemented by changing line~\ref{alg2.line4} in Algorithm~\ref{alg:parlett} to
\[(c^p + r^p + |a_{ii}|^p) < 0.95 \times ( s + |a_{ii}|^p ). \]
However, it is easier to absorb the addition of the diagonal element into the calculation of $c$ and $r$.
This is what we do in our proposed algorithm (Algorithm~\ref{alg:balance}).
The only difference in Algorithm~\ref{alg:balance} and Algorithm~\ref{alg:parlett} is in line~\ref{alg3.line1}.
In Algorithm~\ref{alg:balance} we include the diagonal elements in the calculation of $c$ and $r$.
\begin{algorithm}[htbp]
\SetKwInOut{input}{Input}
\SetKwInOut{output}{Output}
\SetKwInOut{notes}{Notes}
\Indm
\input{ A matrix $A \in \R[n][n]$. We will assume that $A$ is irreducible.}
\output{ A diagonal matrix $D$ and $A$ which is overwritten by $D^{-1} A D$. }
\notes{ $\beta$ is the radix base.
On output $A$ is nearly balanced in the $p$-norm. }
\BlankLine
\Indp
$D \leftarrow I$ \\
$converged \leftarrow 0$ \\
\While{ $converged = 0$ } {
$converged \leftarrow 1$ \\
\For{ $i \leftarrow 1,\dots,n$ } {
$\displaystyle c \leftarrow \| A(:,i) \|_p , \;\;
r \leftarrow \| A(i,:) \|_p $ \\ \label{alg3.line1}
$s \leftarrow c^p + r^p , \;\;
f \leftarrow 1 $ \\
\While{ $c < r/\beta$ } { \label{line2}
$\displaystyle c \leftarrow c\beta, \;\;
r \leftarrow r/\beta, \;\;
f \leftarrow f \times \beta $ \\
}
\While{ $c \ge r\beta$ } {
$\displaystyle c \leftarrow c/\beta, \;\;
r \leftarrow r\beta, \;\;
f \leftarrow f/\beta $ \\ \label{line3}
}
\If{ $(c^p + r^p) < 0.95 \times s$ } { \label{line4}
$ converged \leftarrow 0, \;\;
d_{ii} \leftarrow f \times d_{ii}$ \\
$ A(:,i) \leftarrow f \times A(:,i), \;\;
A(i,:) \leftarrow A(i,:)/f$ \\
}
}
}
\caption{ Balancing (Proposed) }
\label{alg:balance}
\end{algorithm}
The 1-norm is desirable for efficiency and is the typical
choice of norm in the LAPACK library.
However, we were unable to solve all of our test cases using the 1-norm.
The 2-norm is able to balance the
matrix while not destroying the backward error for all our test cases.
Therefore, we propose using $p = 2$ in Algorithm~\ref{alg:balance}.
\section{Case Study}\label{sec:casestudy}
The balancing algorithm requires that the matrix be irreducible. The case study we present in this section shows the
potential danger when a matrix is nearly reducible.
Consider the matrix
\[
A=\left( \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 \\ 0 & 0 & 3 & 1 \\ \epsilon & 0 & 0 & 4 \end{array} \right)
\]
where $0 \le \epsilon \ll 1$ \cite{LAPACK-forum-4270}. As $\epsilon \rightarrow 0$ the eigenvalues tend towards the diagonal elements. Although
these are not the exact eigenvalues (which are computed in Appendix~\ref{append:casestudy}), we will refer to the eigenvalues as the diagonal elements.
The diagonal matrix
\[
D = \alpha \left( \begin{array}{cccc}
1 & 0 & 0 & 0 \\ 0 & \epsilon^{1/4} & 0 & 0 \\ 0 & 0 & \epsilon^{1/2} & 0 \\ 0 & 0 & 0 & \epsilon^{3/4}
\end{array} \right),
\]
for any $\alpha > 0$, balances $A$ exactly for any $p$-norm.
The balanced matrix is
\[
\widetilde{A} = D^{-1}AD =\left( \begin{array}{cccc} 1 & \epsilon^{1/4} & 0 & 0 \\ 0 & 2 & \epsilon^{1/4} & 0 \\
0 & 0 & 3 & \epsilon^{1/4} \\ \epsilon^{1/4} & 0 & 0 & 4 \end{array} \right).
\]
If $v$ is an eigenvector of $\widetilde{A}$, then $Dv$ is an eigenvector of $A$.
The eigenvector associated with 4 is the most problematic. The eigenspace of 4 for $A$ is
$\textmd{span}( (1, 3, 6, 6)^T )$. All of the components have
about the same magnitude and therefore contribute equally towards the backward error.
The eigenspace of 4 for $\widetilde{A}$ approaches $\textmd{span}( (0, 0, 0, 1)^T )$ as $\epsilon \rightarrow 0$ and
the components
begin to differ greatly in magnitude. Errors in the computation of the small components are fairly irrelevant for the
backward error of $\widetilde{A}$. But when
converting back to the eigenvectors of $A$, the errors in the small components are magnified and a poor
backward error is expected.
When the eigenvalue decomposition is computed via MATLAB's routine \texttt{eig} with $\epsilon = 10^{-32}$, the first component
is computed as zero, which destroys the accuracy of both the eigenvector computation and the backward error.
Appendix~\ref{append:casestudy} provides detailed computations of the exact eigenvalues and eigenvectors of the
balanced and unbalanced matrices.
This example can be generalized to more cases where one would expect the backward error to deteriorate.
If a component of the eigenvector that is originally
of significance is made to be insignificant in the balanced matrix, then error in the computation is magnified
when converting back to the original matrix.
Moreover, the balanced matrix fails to reduce the norm of the matrix by a significant amount since
\[
\frac{ \| \textmd{vec}( \widetilde{A} ) \|_1 } { \| \textmd{vec}( A ) \|_1 } \rightarrow 10 / 13
\]
as $\epsilon \rightarrow 0$. If we ignore the diagonal elements, then
\[
\frac{ \| \textmd{vec}( \widetilde{A}' ) \|_1 }{ \| \textmd{vec}( A' ) \|_1 } \rightarrow 0 ,
\]
where $A'$ and $\widetilde{A}'$ denote the off-diagonal parts of $A$ and $\widetilde{A}$.
Excluding the diagonal elements makes it appear as if balancing is reducing the norm of the matrix greatly. But, when the
diagonal elements are included the original matrix is already nearly balanced.
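The claims above are easy to check numerically. The following sketch (our own, with $\alpha = 1$ and $\epsilon = 10^{-32}$) reconstructs the balanced matrix and the norm ratio:

```python
import numpy as np

eps = 1e-32
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0, 0.0],
              [0.0, 0.0, 3.0, 1.0],
              [eps, 0.0, 0.0, 4.0]])

# Diagonal of the balancing similarity D (alpha = 1).
d = np.array([1.0, eps**0.25, eps**0.5, eps**0.75])
A_bal = (A / d[:, None]) * d[None, :]          # D^{-1} A D via broadcasting

# Every nonzero off-diagonal entry becomes eps^{1/4}, as claimed.
expected = np.diag([1.0, 2.0, 3.0, 4.0])
expected[0, 1] = expected[1, 2] = expected[2, 3] = expected[3, 0] = eps**0.25
assert np.allclose(A_bal, expected)

# The 1-norm of vec(A) barely decreases: the ratio tends to 10/13.
ratio = np.abs(A_bal).sum() / np.abs(A).sum()
print(ratio)   # approximately 10/13 = 0.7692...
```

With the diagonal excluded the same ratio tends to 0, which is why a stopping criterion that ignores the diagonal is misled here.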
\section{Backward Error}\label{sec:backerror}
In this section $u$ is machine precision, $c(n)$ is a low-order polynomial depending only on the size of $A$, and $V_i$ is the
$i$th column of the matrix $V$.
Let $\widetilde{A} = D^{-1} A D$, where $D$ is a diagonal matrix and the elements of the diagonal of $D$ are restricted to being
exact powers of the radix base (typically 2). Therefore, there is no error in computing $\widetilde{A}$.
If a backward stable algorithm is used to compute the eigenvalue decomposition of $\widetilde{A}$,
then $\widetilde{A}\widetilde{V} = \widetilde{V}\Lambda + E$, where $\| \widetilde{V}_i \|_2 = 1$ for all $i$
and $\| E \|_F \le c(n) u \| \widetilde{A} \|_F.$ The eigenvectors of $A$ are obtained by multiplying $\widetilde{V}$ by $D$; again, there is
no error in the computation $V = D\widetilde{V}$. Scaling the columns of $V$ to have 2-norm of 1 gives
$A V_i = \lambda_{ii}V_i + D E_i / \| D \widetilde{V}_i \|_2$ and we have the following upper bound on the backward error:
\begin{align*}
\| A V_i - \lambda_{ii}V_i \|_2 &= \frac{\| D E_i \|_2 }{\| D \widetilde{V}_i \|_2} \\
&\le c(n) u \frac{\| D \|_2 }{\| D \widetilde{V}_i \|_2}\| \widetilde{A} \|_F \\
&\le c(n) u \frac{\| D \|_2 \| \widetilde{A} \|_F }{\sigma_{min}(D)} \\
&= c(n) u \kappa(D) \| \widetilde{A} \|_F.
\end{align*}
Therefore,
\begin{equation}\label{eq:backerror}
\frac{\| A V - V\Lambda \|_F}{\| A \|_F} \le c(n) u \frac{\kappa(D) \| \widetilde{A} \|_F}{\| A \|_F}.
\end{equation}
If $\widetilde{V}_i$ is close to the singular vector associated with the minimum singular value then the bound
will be tight. This will be the case when
$\widetilde{A}$ is near a diagonal matrix. On the other hand, if $\widetilde{A}$ is not near a diagonal matrix
then most likely $\| D \widetilde{V}_i \|_2 \approx \| D \|_2$ and the upper bound is no longer tight.
For this case it would be
better to expect the error to be close to $c(n) u \| \widetilde{A} \|_F$.
Ensuring that the quantity
\begin{equation}\label{bound}
u \frac{\kappa(D) \| \widetilde{A} \|_F }{ \| A \|_F}
\end{equation}
is small will guarantee that the backward error remains small. The quantity is easy to compute
and is therefore a reasonable stopping criterion for the balancing algorithm. However, in many cases
the bound is too pessimistic and tends not to allow any balancing at all (see Section~\ref{sec:experiments}).
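A minimal sketch of evaluating the stopping quantity~\eqref{bound}, assuming the scaling is stored as a vector $d$ of diagonal entries (the function name is ours): for a diagonal $D$, $\kappa(D)$ is just the ratio of the largest to smallest $|d_i|$.

```python
import numpy as np

def backward_error_bound(A, d, u=np.finfo(float).eps):
    """Evaluate u * kappa(D) * ||D^{-1} A D||_F / ||A||_F for a diagonal
    scaling with diagonal d. Small values guarantee a small backward
    error; large values flag a dangerous scaling."""
    A_bal = (A / d[:, None]) * d[None, :]        # D^{-1} A D without forming D
    kappa = np.abs(d).max() / np.abs(d).min()    # 2-norm condition number of D
    return u * kappa * np.linalg.norm(A_bal) / np.linalg.norm(A)
```

For the case-study scaling ($d = (1, 10^{-8}, 10^{-16}, 10^{-24})$) the bound is enormous, correctly flagging the danger, while the identity scaling gives exactly $u$; this also illustrates how pessimistic the bound can be.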
\section{Experiments}\label{sec:experiments}
We will consider four examples to justify balancing in the 2-norm and including the
diagonal elements in the stopping criteria. The first example is the case study
discussed in Section~\ref{sec:casestudy} with $\epsilon = 10^{-32}$.
The second example is a near triangular matrix (a generalization of the case study).
Let $T$ be a triangular matrix with normally distributed nonzero entries. Let $N$ be a dense random matrix with
normally distributed entries. Then $A = T + \epsilon N$, with $\epsilon = 10^{-30}$. Balancing will tend towards
a diagonal matrix and the eigenvectors will contain entries with little significance. Errors in these components
are magnified when converting back to the eigenvectors of $A$.
The third example is a Hessenberg matrix, that is, the Hessenberg form of a dense random matrix with normally
distributed entries. It is well known that balancing this matrix (using the LAPACK algorithm)
will increase the eigenvalue condition number~\cite{watkins:2006:case}.
The final example is a badly scaled matrix. We start with a dense random matrix with normally
distributed entries and apply a diagonal similarity transformation whose diagonal entries have
logarithms evenly spaced between 0 and 10; this gives the initial matrix. This is an example
where we do want to balance the matrix, and it shows that the new algorithm will
still balance matrices and improve the accuracy of the eigenvalues.
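The fourth test matrix can be generated as follows (a sketch; the paper does not specify the dimension or random seed, so `n = 20` and `seed = 0` are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)       # seed is our choice
n = 20                               # dimension is our choice
N = rng.standard_normal((n, n))      # dense, normally distributed

# Diagonal similarity whose diagonal has log10 evenly spaced in [0, 10].
d = 10.0 ** np.linspace(0.0, 10.0, n)
A = (N / d[:, None]) * d[None, :]    # badly scaled: D^{-1} N D

print(np.linalg.norm(A) / np.linalg.norm(N))   # scaling inflates the norm
```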
To display our results we plot the quantities of interest versus iterations of the
balancing algorithm. The black stars and squares on each line indicate the iteration at which the balancing
algorithm converged. Plotting this way allows us to see how the quantities vary during the
balancing algorithm, which is especially useful for determining whether the error bound~\eqref{bound}
is a viable stopping criterion.
To check the accuracy of the eigenvalues we examine the absolute condition number of an eigenvalue,
which is given by the formula
\begin{equation}\label{condeig}
\kappa(\lambda,A) = \frac{ \| x \|_2 \| y \|_2 }{ | y^H x | },
\end{equation}
where $x$ and $y$ are the right and left eigenvectors associated with $\lambda$ \cite{kressner:2005:phd}.
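Formula~\eqref{condeig} can be evaluated directly from a computed eigendecomposition, using the fact that the rows of $V^{-1}$ are left eigenvectors; the helper below is our own sketch:

```python
import numpy as np

def eig_condition_numbers(A):
    """Absolute condition number kappa(lambda) = ||x|| ||y|| / |y^H x|
    for each eigenvalue, where x is a right and y a left eigenvector.
    Left eigenvectors are read off from the rows of inv(V)."""
    lam, V = np.linalg.eig(A)
    W = np.linalg.inv(V).conj().T   # column i of W is a left eigenvector
    kappas = np.array([np.linalg.norm(V[:, i]) * np.linalg.norm(W[:, i])
                       / abs(W[:, i].conj() @ V[:, i])
                       for i in range(len(lam))])
    return lam, kappas
```

Normal matrices give $\kappa(\lambda, A) = 1$ for every eigenvalue; nearly defective matrices give very large values.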
Figure~\ref{fig:all-alg} plots the relative backward error,
\begin{equation}\label{eq:relbackerror}
\| A V - V\Lambda \|_2 / \| A \|_2,
\end{equation}
(solid lines) and machine precision times the
maximum absolute condition number of the eigenvalues,
\[ u \cdot \max( \kappa(\lambda_{ii}, A)\; | \; 1 \le i \le n ), \]
(dashed lines) versus iterations of the
balancing algorithms. The dashed lines are an estimate of the forward error.
\begin{figure}[!b]
\centering
\subfloat[Case study]{
\resizebox{.4\textwidth}{!}{\includegraphics{lapackforum-1.pdf}}
\label{fig:lapackforum-1}
}
\hfil
\subfloat[Near upper triangular]{
\resizebox{.4\textwidth}{!}{\includegraphics{uppertri-1.pdf}}
\label{fig:uppertri-1}
} \\
\subfloat[Hessenberg form]{
\resizebox{.4\textwidth}{!}{\includegraphics{watkins-1.pdf}}
\label{fig:watkins-1}
}
\hfil
\subfloat[Badly scaled]{
\resizebox{.4\textwidth}{!}{\includegraphics{badlyscaled-1.pdf}}
\label{fig:badlyscaled-1}
}
\caption{Backward error and absolute eigenvalue condition number versus iterations.}
\label{fig:all-alg}
\end{figure}
In Figures~\ref{fig:lapackforum-1} and \ref{fig:uppertri-1} the LAPACK algorithm
has no accuracy in the backward sense, while both the 1-norm and 2-norm algorithms maintain good
backward error.
On the other hand, the eigenvalue condition number is reduced far more by the LAPACK algorithm in the
second example: the LAPACK algorithm achieves a condition number 7 orders of magnitude smaller than the
1-norm algorithm and 10 orders of magnitude smaller than the 2-norm algorithm.
Clearly there is a trade-off in maintaining a desirable backward error. If only the eigenvalues
are of interest, then the current balancing algorithm may still be desirable. However, the special case of balancing
a random matrix already reduced to Hessenberg form would still be problematic and needs to be avoided.
In Figure~\ref{fig:watkins-1} both the backward error and the eigenvalue condition number worsen with the LAPACK and
1-norm algorithms. The 2-norm algorithm does not allow for much scaling to occur and as a consequence
maintains a good backward error and a small eigenvalue condition number. This is the only
example for which the 1-norm algorithm is significantly worse than the 2-norm algorithm
(a difference of 5 orders of magnitude), and it is the only reason we choose the 2-norm over the 1-norm
algorithm.
The final example (Figure~\ref{fig:badlyscaled-1}) shows that all algorithms
behave similarly: they maintain a good backward error and reduce the eigenvalue condition number.
This indicates that the new algorithms do not prevent balancing when it is appropriate.
Figure~\ref{fig:bounds} plots the relative backward error~\eqref{eq:relbackerror} (solid lines)
and the backward error bound~\eqref{bound} (dashed lines).
For the bound to be used as a stopping criterion, it should be
a fairly good estimate of the true error; otherwise it would prevent balancing in cases where balancing could have
been used without hurting the backward error. Figures~\ref{fig:lapackforum-2} and \ref{fig:uppertri-2} show that the bound
would not be tight for the LAPACK algorithm and is therefore not a good stopping criterion.
Figure~\ref{fig:badlyscaled-2} shows that the bound is not tight for any of the algorithms. For these reasons we do not
use the bound as a stopping criterion.
\begin{figure}[!b]
\centering
\subfloat[Case study]{
\resizebox{.4\textwidth}{!}{\includegraphics{lapackforum-bounds.pdf}}
\label{fig:lapackforum-2}
}
\hfil
\subfloat[Near upper triangular]{
\resizebox{.4\textwidth}{!}{\includegraphics{uppertri-bounds.pdf}}
\label{fig:uppertri-2}
} \\
\subfloat[Hessenberg form]{
\resizebox{.4\textwidth}{!}{\includegraphics{watkins-bounds.pdf}}
\label{fig:watkins-2}
}
\hfil
\subfloat[Badly scaled]{
\resizebox{.4\textwidth}{!}{\includegraphics{badlyscaled-bounds.pdf}}
\label{fig:badlyscaled-2}
}
\caption{Backward error bounds. }
\label{fig:bounds}
\end{figure}
Figure~\ref{fig:reducenorm} shows the ratio of the norm of the balanced matrix and the
norm of the original matrix versus iterations of the balancing algorithm.
In Figures~\ref{fig:lapackforum-3},~\ref{fig:uppertri-3}, and~\ref{fig:watkins-3} the norm is not decreased by much by any of the
algorithms; the greatest reduction, for the near triangular matrix (Figure~\ref{fig:uppertri-3}), is only to about 70\% of the original norm.
In Figure~\ref{fig:badlyscaled-3} all algorithms successfully reduce the norm of the matrix by 9
orders of magnitude.
In all three examples where the LAPACK algorithm deteriorates the backward error, the norm
is not reduced by a significant amount. But in the example where balancing is most
beneficial, the norm is reduced significantly.
\begin{figure}[!t]
\centering
\subfloat[Case study]{
\resizebox{.4\textwidth}{!}{\includegraphics{lapackforum-reducenorm.pdf}}
\label{fig:lapackforum-3}
}
\hfil
\subfloat[Near upper triangular]{
\resizebox{.4\textwidth}{!}{\includegraphics{uppertri-reducenorm.pdf}}
\label{fig:uppertri-3}
} \\
\subfloat[Hessenberg form]{
\resizebox{.4\textwidth}{!}{\includegraphics{watkins-reducenorm.pdf}}
\label{fig:watkins-3}
}
\hfil
\subfloat[Badly scaled (log scale)]{
\resizebox{.4\textwidth}{!}{\includegraphics{badlyscaled-reducenorm-logscale.pdf}}
\label{fig:badlyscaled-3}
}
\caption{Ratio of balanced matrix norm and original matrix norm.}
\label{fig:reducenorm}
\end{figure}
\section{Matrix Exponential}
Balancing could also be used for the computation of the matrix exponential, since
$e^{XAX^{-1}} = Xe^{A}X^{-1}$. If the exponential of the balanced matrix can be
computed more accurately, then balancing may be justified. There are numerous ways to
compute the matrix exponential. The methods are discussed in the paper by Moler and Van Loan,
``Nineteen Dubious Ways to Compute the Exponential of a Matrix''~\cite{MolerVanLoan:1978} and later
updated in 2003~\cite{MolerVanLoan:2003}. The scaling and squaring method is considered one of the
best options. Currently, MATLAB's function \texttt{expm} implements this method
using a variant by Higham~\cite{Higham:JMA:2005}. The scaling and squaring
method relies on the Pad{\'e} approximation of the matrix exponential,
which is more accurate when $\| A \|$ is small. Hence, balancing
would be beneficial.
However, the matrix exponential possesses the special property $e^A = ( e^{A/s} )^s$.
Scaling the matrix by $s$ reduces the norm of $A/s$ to a level acceptable for an
accurate computation of $e^{A/s}$ by the Pad{\'e} approximant.
When $s$ is a power of 2, repeated squaring is used to obtain the final result.
Since scaling $A$ achieves the same goal as balancing, balancing is not required
as a preprocessing step.
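The identity $e^A = (e^{A/s})^s$ can be demonstrated with a toy scaling-and-squaring routine; for simplicity, the sketch below uses a truncated Taylor series where \texttt{expm} uses a Pad{\'e} approximant (that substitution is ours):

```python
import numpy as np

def expm_scale_square(A, s=2**10, terms=12):
    """Toy scaling-and-squaring: e^A = (e^{A/s})^s. The exponential of
    the scaled matrix A/s is cheap to approximate accurately (here by a
    short Taylor series), and log2(s) repeated squarings recover e^A
    when s is a power of 2."""
    B = A / s
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for j in range(1, terms):             # Taylor series for e^{A/s}
        term = term @ B / j
        E = E + term
    for _ in range(int(round(np.log2(s)))):
        E = E @ E                         # repeated squaring
    return E
```

For $A = \bigl(\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\bigr)$ this reproduces the rotation matrix $e^A = \bigl(\begin{smallmatrix}\cos 1&\sin 1\\-\sin 1&\cos 1\end{smallmatrix}\bigr)$ to high accuracy.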
\section{Conclusion}
We proposed a simple change to the LAPACK balancing algorithm \texttt{GEBAL}. Including the diagonal elements
in the balancing algorithm provides a better measure of significant decrease in the matrix norm than the current
version. By including the diagonal elements and switching to the 2-norm we are able to avoid deteriorating the
backward error for our test cases.
The proposed change is designed to protect the backward error of the computation. In some cases the new algorithm
will not decrease the maximum eigenvalue condition number as much as the current algorithm.
If only the eigenvalues are of interest, then it may be desirable to use the current version.
However, for the case where $A$ is dense and poorly scaled,
the new algorithm will still balance the matrix and improve the eigenvalue condition number.
If accurate eigenvectors are desired, then one should consider not balancing the matrix.
| {
"timestamp": "2014-01-23T02:11:44",
"yymm": "1401",
"arxiv_id": "1401.5766",
"language": "en",
"url": "https://arxiv.org/abs/1401.5766",
"abstract": "Balancing a matrix is a preprocessing step while solving the nonsymmetric eigenvalue problem. Balancing a matrix reduces the norm of the matrix and hopefully this will improve the accuracy of the computation. Experiments have shown that balancing can improve the accuracy of the computed eigenval- ues. However, there exists examples where balancing increases the eigenvalue condition number (potential loss in accuracy), deteriorates eigenvector accuracy, and deteriorates the backward error of the eigenvalue decomposition. In this paper we propose a change to the stopping criteria of the LAPACK balancing al- gorithm, GEBAL. The new stopping criteria is better at determining when a matrix is nearly balanced. Our experiments show that the new algorithm is able to maintain good backward error, while improving the eigenvalue accuracy when possible. We present stability analysis, numerical experiments, and a case study to demonstrate the benefit of the new stopping criteria.",
"subjects": "Numerical Analysis (math.NA)",
"title": "On matrix balancing and eigenvector computation"
} |
https://arxiv.org/abs/0710.5611 | Universal cycles for permutations | A universal cycle for permutations is a word of length n! such that each of the n! possible relative orders of n distinct integers occurs as a cyclic interval of the word. We show how to construct such a universal cycle in which only n+1 distinct integers are used. This is best possible and proves a conjecture of Chung, Diaconis and Graham. | \section{Introduction}
A \emph{de Bruijn cycle} of order $n$ is a word in $\{0,1\}^{2^n}$ in
which each $n$-tuple in $\{0,1\}^n$ appears exactly once as a cyclic
interval (see \cite{dB}). The idea of a universal cycle generalizes the notion of a de Bruijn
cycle.
Suppose that $\mathcal{F}$ is a family of
combinatorial objects with $|\mathcal{F}|=N$, each of which is represented (not necessarily in a unique way) by an
$n$-tuple over some alphabet $A$. A \emph{universal cycle} (or \emph{ucycle})
for $\mathcal{F}$ is a word $u_1u_2\dots{u}_N$ with each
$F\in\mathcal{F}$ represented by exactly one $u_{i+1}u_{i+2}\dots{u}_{i+n}$
where, here and throughout, index addition is interpreted modulo $N$. With this terminology
a de Bruijn cycle is a ucycle for words of length $n$ over $\{0,1\}$
with a word represented by itself. The definition of ucycle was introduced by
Chung, Diaconis and Graham in \cite{CDG}. Their paper, and the
references therein, form a good overview of the topic of universal cycles. The cases considered by them
include $\mathcal{F}$ being the set of permutations of an $n$-set,
$r$-subsets of an $n$-set, and partitions of an $n$-set.
In this paper we will be concerned with ucycles for permutations: our
family $\mathcal{F}$ will be $S_n$, which we will regard as the set
of all $n$-tuples of distinct elements of $[n]=\{1,2,\dots,n\}$. It is not immediately obvious how we should
represent permutations with words. The most natural thing to do would be to
take $A=[n]$ and represent a permutation by itself, but it is easily verified that
(except when $n\leq2$) it is not possible to have a ucycle in this
case. Indeed, if every cyclic interval of a word is to represent a
permutation then our word must repeat with period $n$, and so only $n$
distinct permutations can be represented. Another possibility which we
mention in passing would be
to represent the permutation $a_1a_2\dots{a}_n$ by
$a_1a_2\dots{a}_{n-1}$. It is clear that the permutation is determined
by this. It was shown by Jackson \cite{J} (using similar techniques to
those used for de Bruijn cycles) that these ucycles exist for all
$n$. Recently an efficient algorithm for constructing such ucycles
was given by Williams \cite{W}. He introduced the term \emph{shorthand
universal cycles for permutations} to describe them. Alternatively, Chung, Diaconis and Graham in \cite{CDG}
consider ucycles for permutations using a larger alphabet where each
permutation is represented by any $n$-tuple in which the elements have
the same relative order. Our aim is to prove their conjecture that such
ucycles always exist when the alphabet is of size $n+1$, the smallest
possible. In contrast to the situation with shorthand universal
cycles, the techniques used for de Bruijn cycles do not seem to help
with this, so a different approach is needed.
To describe the problem more formally we need the notion of
order-isomorphism. If $a=a_1a_2\dots{a}_n$ and $b=b_1b_2\dots{b}_n$
are $n$-tuples of distinct integers, we say that $a$ and $b$
are \emph{order-isomorphic} if
\[
a_i<a_j \Leftrightarrow b_i<b_j
\]
for all $1\leq{i},j\leq{n}$. Note that no two distinct permutations in
$S_n$ are order-isomorphic, and that any $n$-tuple of distinct integers is order-isomorphic to exactly one permutation in
$S_n$. Hence, the set of $n$-tuples of distinct integers is
partitioned into $n!$ order-isomorphism classes which correspond to
the elements of $S_n$.
We say that a word $u_1u_2\dots{u}_{n!}$ over an alphabet $A\subset\mathbb{Z}$ is a ucycle for $S_n$ if
there is exactly one $u_{i+1}u_{i+2}\dots{u}_{i+n}$ order-isomorphic
to each permutation in $S_n$. For example $012032$ is a ucycle for
$S_3$. Let $M(n)$ be the smallest integer $m$
for which there is a ucycle for $S_n$ with $|A|=m$. Note that if
$|A|=n$ then each permutation is represented by itself and so, as we
noted earlier, no ucycle is possible (unless $n\leq2$). We deduce that
$M(n)\geq{n+1}$ for all $n\geq3$. Chung, Diaconis and Graham in \cite{CDG} give the upper
bound $M(n)\leq{6n}$ and conjecture that $M(n)=n+1$ for all $n\geq3$. Our main result is that this
conjecture is true.
\begin{theorem}
For all $n\geq3$ there exists a word of length $n!$ over the alphabet
$\{0,1,2,\dots,n\}$ such that each element of $S_n$ is
order-isomorphic to exactly one of the $n!$ cyclic intervals of length $n$.
\end{theorem}
We prove this by constructing such a word inductively. The details of
our construction are in the next section. Having shown that such a
word exists, it is natural to ask how many there are. In the
final section we give some bounds on this.
Our construction works for $n\geq5$. For smaller values of $n$ it is a
relatively simple matter to find such words by hand. For completeness
examples are $012032$ for $n=3$, and $012301423042103421302143$ for $n=4$.
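These small examples can be checked by machine; the sketch below (helper names are ours) tests whether every permutation is order-isomorphic to exactly one cyclic interval:

```python
from math import factorial

def pattern(t):
    """The permutation of 1..n order-isomorphic to a tuple of distinct
    integers (its relative-order class)."""
    ranks = sorted(t)
    return tuple(ranks.index(v) + 1 for v in t)

def is_ucycle(word, n):
    """True iff each permutation in S_n is order-isomorphic to exactly
    one cyclic interval of length n. There are n! intervals, so it
    suffices that all n! patterns are distinct."""
    w = [int(c) for c in word]
    assert len(w) == factorial(n)
    seen = {pattern(tuple(w[(i + j) % len(w)] for j in range(n)))
            for i in range(len(w))}
    return len(seen) == factorial(n)

print(is_ucycle("012032", 3))                        # True
print(is_ucycle("012301423042103421302143", 4))      # True
```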
\section{A Construction of a Universal Cycle}
We will show how to construct a word of length $n!$ over the alphabet
$\{0,1,2,\dots,n\}$ such that for each $a\in{S}_n$ there is a cyclic
interval which is order-isomorphic to $a$.
Before describing the construction we make a few preliminary definitions.
As is standard for universal cycle problems we let $G_n=(V,E)$, the
\emph{transition graph}, be the
directed graph with
\begin{align*}
V&=\{(a_1a_2\dots{a_n}): a_i\in\{0,1,2,\dots,n\}, \text{ and } a_i\not=a_j
\text{ for all } i\not=j\}\\
E&=\{(a_1a_2\dots{a_n})(b_1b_2\dots{b_n}): a_{i+1}=b_i \text{ for all
} 1\leq{i}\leq{n-1}\}.
\end{align*}
Notice that every vertex of $G_n$ has out-degree and in-degree both
equal to 2.
The vertices on a directed cycle in $G_n$ plainly correspond to the $n$-tuples which
occur as cyclic intervals of some word. Our task, therefore, is to find a directed cycle in $G_n$ of length $n!$ such that for each
$a\in{S_n}$ there is some vertex of our cycle which is
order-isomorphic to $a$. This is in contrast to many universal
cycle problems where we seek a Hamilton cycle in the transition graph.
We define the map on the integers:
\[
s_x(i)=\left\{
\begin{array}{ll}
i & \text{ if } i<x\\
i+1 &\text{ if } i\geq{x}.\\
\end{array}\right.
\]
We also, with a slight abuse of notation, write $s_x$ for the map
constructed by applying this map coordinatewise to an $n$-tuple. That is,
\[
s_x(a_1a_2\dots{a_n})=s_x(a_1)s_x(a_2)\dots{s_x}(a_n).
\]
The point of this definition is that if $a=a_1a_2\dots{a_n}\in{S}_n$
is a permutation of $[n]$ and $x\in[n+1]$ then $s_x(a)$ is the unique $n$-tuple of elements of
$[n+1]\setminus\{x\}$ which is order-isomorphic to $a$. Note that, as
will become clear, this is the definition we need even though our final construction will produce a ucycle for
permutations of $[n]$ using alphabet $\{0,1,2,\dots,n\}$.
We also define a map $r$ on $n$-tuples which permutes the elements of the
$n$-tuple cyclically. That is,
\[
r(a_1a_2\dots{a_n})=a_2a_3\dots{a}_{n-1}a_na_1.
\]
Note that $(a,r(a))$ is an edge of $G_n$ and that $r^n(a)=a$.
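In code, the maps $s_x$ and $r$ are one-liners (our own sketch); note that $s_x(0a)$ with $a = 42135$ and $x = 2$ reproduces the $6$-tuple $053146$ appearing in the worked example of Step 1.

```python
def s(x, t):
    """Apply s_x coordinatewise: entries below x are unchanged, entries
    >= x are shifted up by one, so the value x never occurs."""
    return tuple(i if i < x else i + 1 for i in t)

def r(t):
    """Rotate a tuple cyclically to the left."""
    return t[1:] + t[:1]

a = (4, 2, 1, 3, 5)              # a permutation of [5]
print(s(2, (0,) + a))            # (0, 5, 3, 1, 4, 6): avoids the label 2
```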
As indicated above, we prove Theorem 1 by constructing a cycle of length $n!$ in $G_n$ such that for
each $a\in{S}_n$ the cycle contains a vertex which is order-isomorphic
to $a$. Our approach is to find a collection of short cycles in $G_n$
which between them contain one vertex from each order-isomorphism
class and to join them up. The joining up of the short cycles requires
a slightly involved induction step which is where the main work lies.
\begin{proof}[Proof of Theorem 1:]
\noindent
\emph{Step 1:} Finding short cycles in $G_n$
The first step is to find a collection of short cycles (each of length
$n$) in $G_n$ which between them contain exactly one element from each
order-isomorphism class of $S_n$. These cycles will use only $n$
elements from the alphabet and we will think of each cycle as being
``labelled'' with the remaining unused element. Suppose that for each
$a=a_1a_2\dots{a_{n-1}}\in{S}_{n-1}$ we choose a label $l(a)$ from
$[n]$. Let $0a$ be the $n$-tuple $0a_1a_2\dots{a_{n-1}}$. We have
the following cycle in $G_n$:
\[
s_{l(a)}(0a),r(s_{l(a)}(0a)),r^2(s_{l(a)}(0a)),\dots,r^{n-1}(s_{l(a)}(0a)).
\]
We denote this cycle by $\mathcal{C}(a,l(a))$. As an example, $\mathcal{C}(42135,2)$ is the
following cycle in $G_6$:
\[
053146\rightarrow531460\rightarrow314605\rightarrow146053\rightarrow460531\rightarrow605314\rightarrow053146,
\]
where arrows denote directed edges of $G_6$.
Note that for any choice of labels (that is any map $l$) the cycles
$\mathcal{C}(a,l(a))$ and $\mathcal{C}(b,l(b))$ are disjoint when
$a,b\in{S}_{n-1}$ are distinct. Consequently, whatever the choice of
labels, the collection of cycles
\[
\bigcup_{a\in{S}_{n-1}}\mathcal{C}(a,l(a))
\]
is a disjoint union. It is easy to see that the vertices on these
cycles contain between them exactly one $n$-tuple
order-isomorphic to each permutation in $S_n$.
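This coverage claim is easy to verify by machine for small $n$. The sketch below (our own) uses the constant labelling $l(a) = n$ (any labelling works for this step) and checks that the cycle vertices hit every order-isomorphism class exactly once:

```python
from itertools import permutations
from math import factorial

def s(x, t):
    """s_x applied coordinatewise: shift entries >= x up by one."""
    return tuple(i if i < x else i + 1 for i in t)

def pattern(t):
    """The permutation of 1..n order-isomorphic to t."""
    ranks = sorted(t)
    return tuple(ranks.index(v) + 1 for v in t)

def short_cycle_vertices(n, label):
    """Vertices of all cycles C(a, label(a)) for a in S_{n-1}: the n
    cyclic rotations of s_{label(a)}(0a)."""
    vertices = []
    for a in permutations(range(1, n)):
        v = s(label(a), (0,) + a)
        for _ in range(n):
            vertices.append(v)
            v = v[1:] + v[:1]
    return vertices

verts = short_cycle_vertices(4, label=lambda a: 4)
pats = [pattern(v) for v in verts]
assert len(pats) == len(set(pats)) == factorial(4)   # one vertex per class
```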
We must now show how, given a suitable labelling, we can join up these short cycles.
\vskip5mm
\noindent
\emph{Step 2:} Joining two of these cycles
Suppose that $\mathcal{C}_1=\mathcal{C}(a,x)$ and
$\mathcal{C}_2=\mathcal{C}(b,y)$ are two of the cycles in $G_n$ described
above. What conditions on $a,b$ and their labels $x,y$ will allow us to join
these cycles?
We may assume that $x\leq y$. Suppose further that $1\leq x\leq{y}-2\leq n-1$, and that $a$ and $b$ satisfy the
following:
\[
b_i=\left\{
\begin{array}{ll}
a_i & \text{ if } 1\leq{a_i}\leq{x}-1\\
a_i+1 & \text{ if } x\leq{a_i}\leq{y}-2\\
x & \text{ if } a_i=y-1\\
a_i & \text{ if } y\leq{a_i}\leq{n-1}.\\
\end{array}\right.
\]
If this happens we will say that the pair of cycles $\mathcal{C}(a,x),\mathcal{C}(b,y)$
are \emph{linkable}.
In this case $s_{x}(0a)$ and $s_{y}(0b)$ agree at all but one
position; they differ only at the $t$ for which $a_t=y-1$ and
$b_t=x$. It follows that there is a directed edge in $G_n$ from
\[
r^t(s_{x}(0a))=s_x(a_t\dots{a}_{n-1}0a_1\dots{a}_{t-1})
\]
to
\[
r^{t+1}(s_{y}(0b))=s_y(b_{t+1}\dots{b}_{n-1}0b_1\dots{b}_t).
\]
Similarly, there is a directed edge in $G_n$ from
\[
r^t(s_{y}(0b))=s_y(b_t\dots{b}_{n-1}0b_1\dots{b}_{t-1})
\]
to
\[
r^{t+1}(s_{x}(0a))=s_x(a_{t+1}\dots{a}_{n-1}0a_1\dots{a}_t).
\]
If we add these edges to $\mathcal{C}_1\cup\mathcal{C}_2$ and remove
the edges
\[
r^t(s_{x}(0a))r^{t+1}(s_{x}(0a))
\]
and
\[
r^{t}(s_{y}(0b))r^{t+1}(s_{y}(0b)),
\]
then we produce a single cycle of length $2n$ whose vertices are
precisely the vertices in $\mathcal{C}_1\cup\mathcal{C}_2$.
We remark that if $x=y-1$ then the other conditions imply that $a=b$
and so although we can perform a similar linking operation it is not
useful. If $x=y$ then $b$ is not well-defined.
As an example of the linking operation consider the linkable pair of 6-cycles $\mathcal{C}(42135,2)$ and
$\mathcal{C}(23145,5)$ in $G_6$. If we add the edges $531460\rightarrow314602$ and
$231460\rightarrow314605$, and remove the edges $531460\rightarrow314605$
and $231460\rightarrow314602$ then a single cycle of length 12 in $G_6$
is produced. These cycles and the linking operation are shown in
Figure 1.
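The linkability condition is mechanical to check (a sketch, with our own function name):

```python
def linkable(a, x, b, y, n):
    """Check the linkability condition for the pair C(a, x), C(b, y),
    assuming labels x <= y - 2 in [n] and a, b in S_{n-1}."""
    if not (1 <= x <= y - 2 <= n - 1):
        return False
    for ai, bi in zip(a, b):
        if ai <= x - 1:
            ok = bi == ai
        elif ai <= y - 2:
            ok = bi == ai + 1
        elif ai == y - 1:
            ok = bi == x
        else:                        # y <= ai <= n - 1
            ok = bi == ai
        if not ok:
            return False
    return True

# The example pair from the text:
print(linkable((4, 2, 1, 3, 5), 2, (2, 3, 1, 4, 5), 5, 6))   # True
```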
\begin{figure}
\includegraphics[scale=0.75]{link.eps}
\caption{The cycles $\mathcal{C}(42135,2)$ and $\mathcal{C}(23145,5)$ are linkable}
\end{figure}
\vskip5mm
\noindent
\emph{Step 3:} Joining all of these cycles
We now show that this linking operation can be used repeatedly to join
a collection of disjoint short cycles, one for each $a\in{S}_n$, together.
Let $H_{n}=(V,E)$ be the (undirected) graph with,
\begin{align*}
V&=\{(a,x):a\in{S}_{n-1}, x\in[n]\}\\
E&=\{(a,x)(b,y): \mathcal{C}(a,x), \mathcal{C}(b,y) \text{ are linkable}\}
\end{align*}
If we can find a subtree $T_{n}$ of $H_{n}$ of order $(n-1)!$ which contains
exactly one vertex $(a,x)$ for each $a\in{S}_{n-1}$ then we will be
able to construct the required cycle. Take any vertex $(a,l(a))$ of $T_{n}$ and consider the cycle
$\mathcal{C}(a,l(a))$ associated with it. Consider also the cycles associated
with all the neighbours in $T_n$ of $(a,l(a))$. The linking operation
described above can be used to join the cycles associated with these neighbours to $\mathcal{C}(a,l(a))$. This is
because the definition of adjacency in $H_{n}$ guarantees that we can join
each of these cycles individually. Also, the fact that every vertex in
$G_n$ has out-degree 2 means that the joining happens at different
places along the cycle. That is if $(b,l(b))$ and $(c,l(c))$ are
distinct neighbours of $(a,l(a))$ then the edge of $\mathcal{C}(a,l(a))$ which must
be deleted to join $\mathcal{C}(b,l(b))$ to it is not the same as the one which
must be deleted to join $\mathcal{C}(c,l(c))$ to it. We conclude that we can
join all of the relevant cycles to the cycle associated with $(a,l(a))$. The
connectivity of $T_n$ now implies that we can join all of the cycles
associated with vertices of $T_n$ into one cycle. This is plainly a
cycle with the required properties.
The next step is to find such a subtree in $H_{n}$.
\vskip5mm
\noindent
\emph{Step 4:} Constructing a Suitable Tree
We will prove, by induction on $n$, the stronger statement that for all $n\geq5$, there is a
subtree $T_n$ of $H_n$ of order $(n-1)!$ which satisfies:
\begin{enumerate}
\item for all $a\in{S}_{n-1}$ there exists a unique $x\in[n]$ such that $(a,x)\in{V}(T_n)$,
\item $(12\dots(n-1),1)\in{V}(T_n)$,
\item $(23\dots(k-1)1(k)(k+1)\dots(n-1),k)\in{V}(T_n) $ for all $3\leq{k}\leq{n}$,
\item $(32145\dots(n-1),2)\in{V}(T_n)$,
\item $(243156\dots(n-1),3)\in{V}(T_n)$,
\item $v(31245\dots(n-1))$ is a leaf in $T_n$,
\item $v(24135\dots(n-1))$ is a leaf in $T_n$.
\end{enumerate}
Here, for a tree satisfying property 1, we denote the unique vertex in $V(T_n)$ of the form $(a,x)$ by $v(a)$.
For $n=5$ a suitable tree can be found. One such is given in Figure 2.
\begin{figure}
\includegraphics[scale=0.75]{tree.eps}
\caption{A suitable choice for $T_5$}
\end{figure}
Suppose that $n\geq5$ and that we have a
subtree $T_n$ of the graph $H_n$ which satisfies the above
conditions. We will use this to build a suitable subtree of $H_{n+1}$.
A key observation for our construction is that the map from $V(H_n)$
to $V(H_{n+1})$ obtained by replacing each vertex
$(a_1a_2\dots{a}_{n-1},x)\in{V}(H_n)$ by
$(1(a_1+1)(a_2+1)\dots(a_{n-1}+1),x+1)\in{V}(H_{n+1})$ preserves
adjacency. It follows that subgraphs of $H_n$ are mapped into isomorphic
copies in $H_{n+1}$ by this map. Further, applying a fixed permutation to the coordinates of
the $n$-tuple associated with each vertex of $H_{n+1}$ gives an
automorphism of $H_{n+1}$ and so subgraphs of $H_{n+1}$ are mapped
into isomorphic copies.
We take $n$ copies of $T_n$. These copies will be modified to form
the building blocks for our subtree of $H_{n+1}$ as follows.
In the first copy we replace each vertex $(a_1a_2\dots{a_{n-1}},x)$ by
\[
(1(a_3+1)(a_1+1)(a_4+1)(a_2+1)(a_5+1)(a_6+1)\dots(a_{n-1}+1),x+1).
\]
By the observation above this gives an isomorphic copy of $T_n$ in $H_{n+1}$. We denote this tree by $T_{n+1}^{(0)}$.
In the next copy we replace each vertex
$(a_1a_2\dots{a_{n-1}},x)$ by
\[
((a_3+1)1(a_2+1)(a_1+1)(a_4+1)(a_5+1)\dots(a_{n-1}+1),x+1).
\]
We denote this tree by $T_{n+1}^{(1)}$.
For all $2\leq{k}\leq{n-1}$, we take a new copy of $T_n$ and modify it
as follows. We replace each vertex
$(a_1a_2\dots{a}_{n-1},x)$ by
\[
((a_{k}+1)(a_1+1)(a_2+1)\dots(a_{k-1}+1)1(a_{k+1}+1)\dots(a_{n-1}+1),x+1).
\]
We denote these trees by $T_{n+1}^{(2)}, T_{n+1}^{(3)},\dots, T_{n+1}^{(n-1)}$.
As we mentioned this results in $n$ subtrees of $H_{n+1}$. They are clearly
disjoint because the position in which 1 appears in the first
coordinate of each vertex is distinct for distinct trees. Let $F_{n+1}$ be the $n$-component subforest of $H_{n+1}$ formed by taking the union
of the trees $T_{n+1}^{(k)}$ for $0\leq{k}\leq{n-1}$.
It is also easy to see that for every $a\in{S}_n$ there is a vertex in $F_{n+1}$ of the form
$(a,x)$ for some $x\in[n+1]$. It remains to show that the $n$ components
can be joined up to form a single tree with the required properties.
Notice that it is a consequence of the construction of the
$T_{n+1}^{(k)}$ that if $v(l_1l_2\dots{l}_{n-1})$ is a leaf in $T_n$
then the following vertices are all leaves in $F_{n+1}$:
\begin{itemize}
\item $v(1(l_3+1)(l_1+1)(l_4+1)(l_2+1)(l_5+1)(l_6+1)\dots(l_{n-1}+1))$
\item $v((l_3+1)1(l_2+1)(l_1+1)(l_4+1)(l_5+1)\dots(l_{n-1}+1))$
\item $v((l_{k}+1)(l_1+1)(l_2+1)\dots(l_{k-1}+1)1(l_{k+1}+1)\dots(l_{n-1}+1))$
for $2\leq{k}\leq{n-1}$.
\end{itemize}
We claim that $v(12\dots{n})$ is a leaf in $T_{n+1}^{(0)}$, and hence
a leaf in $F_{n+1}$. This follows from the fact that $v(24135\dots(n-1))$ is a leaf in $T_n$ and
the remark above. We delete this vertex from $F_{n+1}$ to form
a new forest.
The construction of the $T_{n+1}^{(k-1)}$ and the fact that
$(23\dots(k-1)1(k)(k+1)\dots(n-1),k)\in{V}(T_n) $ for all
$3\leq{k}\leq{n}$ means that
$(23\dots{k}1(k+1)(k+2)\dots{n},k+1)\in{V}(F_{n+1})$ for all
$3\leq{k}\leq{n}$. Further, the construction of the $T_{n+1}^{(1)}$
and the fact that $(32145\dots(n-1),2)\in{V}(T_n)$ means that
$(2134\dots{n},3)\in{V}(F_{n+1})$.
We add to $F_{n+1}$ a new vertex $(123\dots{n},1)$ (this replaces the
deleted vertex from $T_{n+1}^{(0)}$) and edges from this
new vertex to $(23\dots{k}1(k+1)(k+2)\dots{n},k+1)$ for all
$2\leq{k}\leq{n}$. The previous observation shows that all of these
vertices are in $F_{n+1}$, and it is easy to check, using the
definition of linkability, that the
added edges are in $H_{n+1}$. This new forest has only two components.
Similarly, the inductive hypothesis that $v(31245\dots(n-1))$ is
a leaf in $T_n$ gives that $v(342156\dots{n})$ is a leaf in
$F_{n+1}$ (using the case $k=3$ of the observation on
leaves). We delete this leaf from the forest, replace it with a new vertex
$(342156\dots{n},1)$, and add edges from this new vertex to
$(341256\dots{n},3)$ and $(143256\dots{n},4)$. The construction of
$T_{n+1}^{(2)}$ and the fact that $(32145\dots(n-1),2)\in{V}(T_n)$
ensures that the first of these vertices is in $F_{n+1}$. The
construction of $T_{n+1}^{(0)}$ and the fact that $(243156\dots(n-1),3)\in{V}(T_n)$
ensures that the second of these vertices is in our modified $T_{n+1}^{(0)}$.
These modifications to $F_{n+1}$ produce a forest with a single component --
that is, a tree. Denote this tree by $T_{n+1}$. We will be done if we
can show that $T_{n+1}$ satisfies the properties demanded.
Plainly, $T_{n+1}$ contains exactly one vertex of the form $(a,x)$ for each
$a\in{S}_n$. By construction, $(12\dots{n},1)$ and
$(23\dots{t}1(t+1)(t+2)\dots{n},t+1)$ are vertices of $T_{n+1}$ for
all $2\leq{t}\leq{n}$. Hence the first three properties are satisfied.
The construction of $T_{n+1}^{(2)}$ and the fact that
$(123\dots(n-1),1)\in{V}(T_n)$ ensures that $(32145\dots{n},2)$ is a
vertex of $T_{n+1}$. The construction of $T_{n+1}^{(3)}$ and the fact that
$(32145\dots(n-1),2)\in{V}(T_n)$ ensures that $(243156\dots{n},3)$ is a
vertex of $T_{n+1}$. Hence properties 4 and 5 are satisfied.
Finally, the construction of $T_{n+1}^{(1)}$ and the fact that
$v(31245\dots(n-1))$ is a leaf in $T_n$ ensures that
$v(31245\dots{n})$ is a leaf in $F_{n+1}$. The modifications which
$F_{n+1}$ undergoes do not change this and so it is a leaf in
$T_{n+1}$. The construction of $T_{n+1}^{(2)}$ and the fact that
$v(31245\dots(n-1))$ is a leaf in $T_n$ ensures that
$v(241356\dots{n})$ is a leaf in $F_{n+1}$. Again, the modifications to
$F_{n+1}$ do not change this and so this vertex is still a leaf in
$T_{n+1}$. Hence properties 6 and 7 are satisfied.
We conclude that the tree $T_{n+1}$ has the
required properties. This completes the construction.
\end{proof}
\section{Bounds on the Number of Universal Cycles}
Having constructed a ucycle for $S_n$ over the alphabet
$\{0,1,\dots,n\}$, it is natural to ask how
many such ucycles exist. We will regard words which differ only by a
cyclic permutation as the same, so we normalize our universal
cycles by insisting that the first $n$ entries give a word
order-isomorphic to $12\dots{n}$. We denote by $U(n)$ the number of words of length $n!$ over the alphabet
$\{0,1,2,\dots,n\}$ which contain exactly one cyclic interval
order-isomorphic to each permutation in $S_n$ and for which the first
$n$ entries form a word which is order-isomorphic to
$12\dots{n}$. There is a natural upper bound, essentially exponential
in $n!$, based on the
fact that, writing the word down one letter at a time, we have 2
choices for each letter. We can also show that there is
enough choice in the construction of the previous section to prove a
lower bound which is exponential in $(n-1)!$. It is slightly
surprising that our construction gives a lower bound which is this large. However, the upper and
lower bounds are still far apart and we have no idea where the true answer
lies.
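To make the counting concrete, the definition of $U(n)$ can be checked by brute force in the smallest nontrivial case $n=3$ over the alphabet $\{0,1,2,3\}$. The following sketch is ours, not from the paper; the helper names are our own.

```python
from itertools import product, permutations

def pattern(window):
    # Relative order (order-isomorphism type) of a window as a tuple of ranks,
    # e.g. (2, 0, 3) -> (1, 0, 2).
    ranks = sorted(window)
    return tuple(ranks.index(x) for x in window)

def ucycles(n, alphabet_size):
    # All normalized universal cycles for S_n over {0, ..., alphabet_size - 1}:
    # words of length n! whose n! cyclic windows of length n have distinct
    # entries and realize every relative order in S_n, the first n entries
    # being order-isomorphic to 12...n.  Feasible only for tiny n.
    length = 1
    for k in range(2, n + 1):
        length *= k  # n!
    targets = set(permutations(range(n)))
    found = []
    for w in product(range(alphabet_size), repeat=length):
        if pattern(w[:n]) != tuple(range(n)):
            continue  # normalization: start order-isomorphic to 12...n
        seen = set()
        ok = True
        for i in range(length):
            window = tuple(w[(i + j) % length] for j in range(n))
            if len(set(window)) < n:
                ok = False
                break
            seen.add(pattern(window))
        if ok and seen == targets:
            found.append(w)  # n! windows, n! distinct patterns: each occurs once
    return found

cycles = ucycles(3, 4)
print(len(cycles), cycles[0] if cycles else None)
```

For instance, the word $012032$ is such a ucycle: its six cyclic windows realize the patterns $123, 231, 213, 132, 321, 312$, each exactly once, and the count found lies between 1 and the upper bound $(3+1)2^{3!-3}=32$ of the theorem below.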
\begin{theorem}
\[
420^{\frac{(n-1)!}{24}}\leq{U}(n)\leq{(n+1)}2^{n!-n}
\]
\end{theorem}
\begin{proof}
Suppose we write down our universal cycle one letter at a time. We
must start by writing down a word of length $n$ which is
order-isomorphic to $12\dots{n}$; there are $n+1$ ways of doing
this. For each of the next $n!-n$ entries we must avoid the $n-1$
previous entries (all of which are distinct), leaving at most $(n+1)-(n-1)=2$
choices for each entry. This gives the required upper bound.
Now for the lower bound. We will give a lower bound on the number of subtrees of $H_n$ which satisfy the
conditions of step 4 of the proof of Theorem 1. It can be checked that
if a universal cycle comes from a subtree of $H_n$ in the way
described then the tree is determined by the universal cycle. It
follows that the number of such trees is a lower bound for $U(n)$.
Notice that if we have $t_n$ such subtrees of $H_n$ then we have at least $t_n^n$ such
subtrees of $H_{n+1}$. This is because in our construction we took $n$
copies of $T_n$ to build $T_{n+1}$ from and each different set of choices
yields a different tree. We conclude that the number of subtrees of $H_n$
satisfying the conditions is at least
\[
t_5^{5\times6\times\dots\times(n-1)}=t_5^{\frac{(n-1)!}{24}}.
\]
Finally, we bound $t_5$. We modify the given $T_5$ by adding edges from $(1432,1)$ to $(2143,5)$, from $(3421,1)$ to $(4132,5)$ and from
$(4231,2)$ to $(4321,4)$. This graph is such that any of its
spanning trees satisfies the properties for our $T_5$. It
can be checked that this graph has 420 spanning trees. This gives
the lower bound.
\end{proof}
\section{Acknowledgements}
This work was inspired by the workshop on Generalizations of de Bruijn Cycles and Gray Codes held at Banff in December
2004. I thank the organisers and participants of the workshop for a
stimulating and enjoyable week.
| {
"timestamp": "2007-10-30T12:22:53",
"yymm": "0710",
"arxiv_id": "0710.5611",
"language": "en",
"url": "https://arxiv.org/abs/0710.5611",
"abstract": "A universal cycle for permutations is a word of length n! such that each of the n! possible relative orders of n distinct integers occurs as a cyclic interval of the word. We show how to construct such a universal cycle in which only n+1 distinct integers are used. This is best possible and proves a conjecture of Chung, Diaconis and Graham.",
"subjects": "Combinatorics (math.CO)",
"title": "Universal cycles for permutations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.990440601781584,
"lm_q2_score": 0.8289388083214156,
"lm_q1q2_score": 0.8210146521539721
} |
https://arxiv.org/abs/2101.08038 | Infinitely many twin prime polynomials of odd degree | While the twin prime conjecture is still famously open, it holds true in the setting of finite fields: There are infinitely many pairs of monic irreducible polynomials over $\mathbb{F}_q$ that differ by a fixed constant, for each $q \geq 3$. Elementary, constructive proofs were given for different cases by Hall and Pollack. In the same spirit, we discuss the construction of a further infinite family of twin prime tuples of odd degree, and its relations to the existence of certain Wieferich primes and to arithmetic properties of the combinatorial Bell numbers. | \section{Introduction.}
Let $\mathbb{F}_q$ be a finite field of $q\geq3$ elements, where $q$ is a prime power. The ring of integers $\mathbb{Z}$ and the polynomial ring $\mathbb{F}_q[X]$ exhibit a number of common features, including both being unique factorization domains. A prime (polynomial) in the latter setting is a monic irreducible polynomial. Our understanding of the distribution of prime polynomials is significantly more complete. To start with, one can precisely count the number $\pi_q(n)$ of monic irreducible polynomials of degree $n$ over $\mathbb{F}_q$, as was done by Gauss, who proved that
$$
\pi_q(n) = \frac{1}{n}\sum_{d|n}\mu(d)q^{\tfrac{n}{d}},
$$
where $\mu(d)$ is the classical M\"obius function. As a result, as $q^n\to\infty$,
$$
\pi_q(n) = \frac{q^n}{n} + O\left(\frac{q^{n/2}}{n}\right),
$$
and this should be contrasted with the classical problem of counting prime numbers, for which the Riemann hypothesis is equivalent to the assertion that the number $\pi(x)$ of prime numbers less than or equal to $x$ is
$$
\pi(x) = \frac{x}{\log x} + O\left(x^{1/2+o(1)}\right),
$$
as $x\to\infty$.
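Gauss's formula is easy to check numerically for small parameters. The sketch below (our own helper names) compares it with a brute-force count, using the elementary fact that a monic polynomial of degree at most 3 is irreducible over $\mathbb{F}_q$ if and only if it has no root in $\mathbb{F}_q$.

```python
from itertools import product

def mobius(d):
    # Classical Moebius function via trial factorization.
    result, k = 1, 2
    while k * k <= d:
        if d % k == 0:
            d //= k
            if d % k == 0:
                return 0
            result = -result
        k += 1
    return -result if d > 1 else result

def gauss_count(q, n):
    # Gauss's formula for the number of monic irreducibles of degree n over F_q.
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

def brute_count(q, n):
    # Valid for n <= 3 only: a monic polynomial of degree <= 3 is irreducible
    # over F_q iff it has no root in F_q.
    return sum(
        all((x ** n + sum(c * x ** i for i, c in enumerate(coeffs))) % q != 0
            for x in range(q))
        for coeffs in product(range(q), repeat=n)  # coefficients c_0, ..., c_{n-1}
    )

print([(gauss_count(q, n), brute_count(q, n))
       for q, n in [(2, 2), (2, 3), (3, 2), (3, 3), (5, 2)]])
# -> [(1, 1), (2, 2), (3, 3), (8, 8), (10, 10)]
```

For example, $\pi_3(2)=(9-3)/2=3$: the three monic irreducible quadratics over $\mathbb{F}_3$ are $X^2+1$, $X^2+X+2$, and $X^2+2X+2$.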
Another famous, long-standing open problem of number theory with a happier resolution over finite fields is the twin prime conjecture. For integers, the conjecture is that there are infinitely many pairs of primes of the form $(p,p+2)$ and this is still open, despite the spectacular breakthroughs of Zhang \cite{Zh} and Maynard \cite{Ma}. A refined quantitative form of this conjecture due to Hardy and Littlewood asserts that given distinct integers $a_1,\dots,a_r$, the number $\pi(x;a_1,\dots,a_r)$ of integers $n\leq x$ for which $n+a_1,\dots,n+a_r$ are simultaneously prime is
$$
\pi(x;a_1,\dots,a_r)\sim \frak{S}(a_1,\dots,a_r)\frac{x}{(\log x)^r}
$$
as $x\to\infty$, for a nonnegative constant $\frak{S}(a_1,\dots,a_r)$ encoding local congruence obstructions.
To formulate the corresponding problems over finite fields, let $q\geq3$. The \emph{size} of a nonzero polynomial $f$ of degree $n$ over $\mathbb{F}_q$ is defined to be $$|f|_q := q^n=|\mathbb{F}_q[X]/(f)|.$$ Two prime polynomials $f,g\in\mathbb{F}_q[X]$ form a \emph{twin prime pair} if the size of their difference $|f-g|_q$ is as small as possible, namely, if $|f-g|_q=1$.
The twin prime conjecture asks for the existence of infinitely many twin prime pairs $(f,f+a)$ for some fixed $a\in\mathbb{F}_q^\times$. Given positive integers $n$ and $r$, and given distinct polynomials $a_1,\dots,a_r\in\mathbb{F}_q[X]$, each of degree less than $n$,
the Hardy--Littlewood conjecture asks for the growth of the number $\pi_q(n;a_1,\dots,a_r)$ of tuples $(f+a_1,\dots,f+a_r)$, with $|f|_q=q^n$, as $q^n\to\infty$. There are two ways in which $q^n\to\infty$, namely taking either $q\to\infty$ or $n\to\infty$. These limits are usually considered separately, as they may lead to different asymptotic behaviors. Bary-Soroker \cite{BS2} proved that for any fixed positive integers $n$ and $r$, and distinct $a_1,\dots,a_r\in\mathbb{F}_q[X]$, each of degree less than $n$,
\begin{align}\label{HL}
\pi_q(n;a_1,\dots,a_r) = \frac{q^n}{n^r}+O_{n,r}(q^{n-1/2})
\end{align}
as $q\to\infty$, and very recently, Sawin and Shusterman \cite{SawinShusterman} settled the Hardy--Littlewood conjecture over finite fields by showing that for every large enough odd prime power $q$,
$$
|\{f\in\mathbb{F}_q[X]: |f|_q=x,\ (f,f+a) \text{ twin prime pair}\}| \sim \frak{S}_q(a)\frac{x}{(\log_q x)^2}
$$
as $x\to\infty$ through powers of $q$, and where $\frak{S}_q(a)$ is the function field analogue of $\mathfrak{S}(a)$.
Interestingly, it is also possible to showcase infinite families of twin prime pairs (or tuples) of polynomials. In this article, we consider \emph{elementary} constructions of this sort.
\section{An elementary proof.}
In his Ph.D. thesis \cite{Hall1}, Hall observed that over most finite fields, the twin prime conjecture in its qualitative form is an easy consequence of the following classical result of field theory; see, e.g., \cite[Theorem 9.1, p.~297]{Lang}.
\begin{theorem}\label{Lang}
Let $F$ be a field. Fix $n\in\mathbb{N}$ and $a\in F^\times$. Then $X^n-a\in F[X]$ is irreducible over $F$ if and only if
\begin{enumerate}
\item[(a)]
$a\not\in F^\ell=\{ b^\ell : b\in F\}$ for each prime divisor $\ell\mid n$,
\item[(b)] and $a\not\in -4F^4$ whenever $4|n$.
\end{enumerate}
\end{theorem}
\begin{corollary}\cite[Corollary 19]{Hall1}\label{Hall}
If $q-1$ admits an odd prime divisor, then there are infinitely many twin prime pairs $(f,f+1)$ over $\mathbb{F}_q$.
\end{corollary}
\begin{proof}
Let $\ell\mid q-1$ be an odd prime, and let $(\mathbb{F}_q^\times)^\ell$ be the subgroup of $\ell$th powers of elements in the unit group $\mathbb{F}_q^\times$. Since
$
|(\mathbb{F}_q^\times)^\ell|=\frac{q-1}{\ell} <\tfrac{q-1}{2},
$
there exist two consecutive elements $a, a-1\not\in (\mathbb{F}_q^\times)^\ell$ by Dirichlet's pigeonhole principle. Then by Theorem \ref{Lang}, $(X^{\ell^m}-a,X^{\ell^m}-a+1)$ is a twin prime pair for each $m\geq0$.
\end{proof}
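For instance, with $q=7$ and $\ell=3$ the argument is fully explicit; the following sketch (helper names ours) finds the consecutive non-cubes and verifies the $m=0$ member of the family, where degree 3 lets us test irreducibility by the absence of roots.

```python
p, ell = 7, 3  # q = 7 is generic: ell = 3 is an odd prime divisor of q - 1 = 6
cubes = {pow(b, ell, p) for b in range(1, p)}  # (F_7^x)^3 = {1, 6}

# The pigeonhole step: two consecutive non-cubes a, a - 1.
a = next(x for x in range(2, p) if x not in cubes and x - 1 not in cubes)

def is_irreducible_cubic(c):
    # X^3 - c over F_7 has degree 3, so it is irreducible iff it has no root.
    return all((x ** 3 - c) % p != 0 for x in range(p))

# The m = 0 member of the family (X^{3^m} - a, X^{3^m} - (a - 1)).
print(a, is_irreducible_cubic(a), is_irreducible_cubic(a - 1))
# -> 3 True True
```

Here $(X^{3^m}-3, X^{3^m}-2)$ is a twin prime pair over $\mathbb{F}_7$ for every $m\geq0$: condition (a) of the theorem holds since $3, 2\notin\{1,6\}$, and condition (b) is vacuous because $4\nmid 3^m$.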
This remarkably simple proof settles the twin prime conjecture for all finite fields $\mathbb{F}_q$ save for the cases where
\begin{align}\label{generic}
q=2^n+1,
\end{align}
for some $n\in\mathbb{N}$. We observe that for prime fields of this form, $q$ is a Fermat prime. To this day, the only known Fermat primes are 3, 5, 17, 257, and 65537, and conjecturally, only finitely many exist. On the other hand, Catalan's conjecture (proved by Mih\u{a}ilescu \cite{Mi}) asserts that $2^3$ and $3^2$ are the only two consecutive positive perfect powers, and hence $q=9$ is the only prime power of the form (\ref{generic}) that is not itself prime.
\begin{definition}
A finite field $\mathbb{F}_q$ is called \emph{generic} if the order $|\mathbb{F}_q^\times|=q-1$ of the unit group $\mathbb{F}_q^\times$ has at least one odd prime divisor. Otherwise, it is called \emph{nongeneric}.
\end{definition}
For nongeneric finite fields (and in fact, more generally when $q\equiv 1$ (mod 4)), part (b) of Theorem \ref{Lang}, together with elementary counting considerations for quadratic residues, yields an explicit infinite family of twin prime polynomials of even degree; see \cite{Pol}. This construction has been extended to obtain twin prime tuples; see \cite{Eff}.
\section{A refined problem.}
Every student who has taken an introductory number theory class will be familiar with the following question: Given that there are infinitely many primes, and that each odd prime is congruent to either 1 or 3 (mod 4), are there infinitely many primes $p$ such that $p\equiv 1$ (mod 4), or, respectively, such that $p\equiv 3$ (mod 4)? Similarly, one may ask whether there exist infinitely many twin prime pairs $(f,f+1)$ of odd (respectively, even) degree.
The question was settled affirmatively in the Ph.D. thesis of Pollack \cite{Pol}. For large enough $q$, the asymptotic (\ref{HL}) implies a positive answer, and Pollack's strategy was to bootstrap such an asymptotic to a substitution procedure, and treat the cases where $q$ is small by hand. The case of even degree can actually be approached directly, relying on a number of elementary constructions; see \cite[Lemmas 6.3.2--4]{Pol}. In the rest of this note, we wish to consider a new elementary construction, in the spirit of Corollary \ref{Hall}, to cover the odd degree case.
Clearly, this is already achieved for generic fields $\mathbb{F}_q$ by the proof of Corollary \ref{Hall}. To cover also nongeneric fields, we examine below a different construction over prime fields $\mathbb{F}_p$. This does not leave out the case $\mathbb{F}_9$, as we explain next. In fact, any infinite family of twin primes of odd degree over $\mathbb{F}_3$ also defines an infinite family of twin primes of odd degree over $\mathbb{F}_9$. This is a consequence of the following standard result for polynomials over finite fields: an irreducible polynomial of degree $n$ over $\mathbb{F}_q$ remains irreducible over $\mathbb{F}_{q^k}$ if and only if $(k,n)=1$; see, e.g., \cite[Corollary 3.47]{LN}. Since in our situation $k=2$ and $n$ is odd, the odd degree twin prime conjecture for $\mathbb{F}_9[X]$ reduces to the case of $\mathbb{F}_3[X]$.
Let $p$ be an odd prime. The starting point of our construction is the polynomial
$$
f(X)=X^p-X-1.
$$
By the Artin--Schreier theorem, $f(X)$ is irreducible over $\mathbb{F}_p$. Thus if $\alpha$ is a root, then $\mathbb{F}_p(\alpha)$ is a cyclic Galois extension of degree $p$ over $\mathbb{F}_p$; see \cite[Theorem 6.4, p. 290]{Lang}. Hence $\mathbb{F}_p(\alpha)\cong\mathbb{F}_{p^p}$. In particular, all roots $\alpha, \alpha^p,\dots,\alpha^{p^{p-1}}$ of $f$ are Galois conjugates and have the same multiplicative order in $\mathbb{F}_{p^p}^\times$. The order $e$ of the polynomial $f(X)$ is defined to be the multiplicative order of any of its roots in $\mathbb{F}_{p^p}^\times$. To examine this order $e$, we observe that
$$
f(0) = -1 =\prod_{i=0}^{p-1} (0-\alpha^{p^i})=(-\alpha)^{1+p+\dots+p^{p-1}}=-\alpha^Q,
$$
where
$$
Q:= 1+p+p^2+\dots+p^{p-1} = \frac{p^p-1}{p-1}.
$$
It easily follows that $e \mid Q$. Moreover, $(Q, 2p(p-1)) = 1$: indeed, $Q$ is a sum of $p$ odd terms and hence odd, $Q\equiv 1$ (mod $p$), and $Q\equiv p\equiv 1$ (mod $p-1$). Hence $e$ is odd and relatively prime to $p(p-1)$.
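For $p=3$ the order is easy to compute by machine: $Q=(3^3-1)/2=13$, and one checks that $e=13=Q$. A minimal sketch (arithmetic helpers are our own), representing elements of $\mathbb{F}_{27}=\mathbb{F}_3[X]/(X^3-X-1)$ as coefficient lists:

```python
p = 3  # work in F_3[X]/(X^3 - X - 1), where X^3 = X + 1

def mul_mod(a, b):
    # Multiply coefficient lists (lowest degree first) over F_3 and reduce
    # modulo f(X) = X^3 - X - 1, using X^k = X^{k-2} + X^{k-3} for k >= 3.
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    while len(prod) > 3:
        c = prod.pop()                  # leading coefficient of X^k
        prod[-2] = (prod[-2] + c) % p   # contributes to X^{k-2}
        prod[-3] = (prod[-3] + c) % p   # contributes to X^{k-3}
    while len(prod) < 3:
        prod.append(0)
    return prod

X = [0, 1, 0]
power, order = X, 1
while power != [1, 0, 0]:  # loop until X^order = 1: order = multiplicative order
    power = mul_mod(power, X)
    order += 1

Q = (p ** p - 1) // (p - 1)
print(order, Q)  # -> 13 13
```

By hand: $X^3=X+1$, $X^9=(X+1)^3=X^3+1=X+2$, so $X^{13}=X^9\cdot X^3\cdot X=(X+2)(X+1)X=1$ over $\mathbb{F}_3$, and no proper divisor of 13 works.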
Less obviously, the order $e$ coincides with the minimal period of Bell numbers modulo $p$. The Bell number $B(n)$ is the number of distinct partitions of a finite set of $n$ elements. A great number of problems can be interpreted in terms of Bell numbers; among other things, $B(n)$ counts
\begin{itemize}
\item the number of equivalence relations among $n$ elements,
\item the number of factorizations of the product of $n$ distinct primes into coprime factors,
\item the number of permutations of $n$ elements with ordered cycles;
\end{itemize}
see \cite{Rota} and references therein. Determining this minimal period has attracted quite a bit of attention. For very small primes ($p<180$), numerical computations show that $e=Q$, with some further probabilistic evidence given in \cite{Mo}.
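The first part of this correspondence is easy to observe numerically. The sketch below (helper name ours) generates Bell numbers modulo $p=3$ with the Bell (Aitken) triangle and checks that $Q=13$ is a period; it is the minimal one, since 13 is prime and the residue sequence is not constant.

```python
def bell_mod(n_terms, m):
    # First n_terms Bell numbers modulo m via the Bell (Aitken) triangle:
    # each row starts with the last entry of the previous row, and each
    # subsequent entry is its left neighbour plus the entry above it.
    bells, row = [1], [1]
    for _ in range(n_terms - 1):
        new_row = [row[-1]]
        for entry in row:
            new_row.append((new_row[-1] + entry) % m)
        row = new_row
        bells.append(row[0])
    return bells

p = 3
Q = (p ** p - 1) // (p - 1)  # Q = 13
seq = bell_mod(120, p)
print(all(seq[i] == seq[i + Q] for i in range(120 - Q)), Q)  # -> True 13
```

The same routine with a large modulus reproduces the integer Bell numbers $1, 1, 2, 5, 15, 52, 203, 877, \dots$, which is a quick sanity check on the triangle recursion.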
To state our main result, we recall that a prime $\ell$ satisfying the congruence equation $b^{\ell-1}\equiv1$ (mod $\ell^2$), where $(b,\ell)=1$, is called a \emph{Wieferich prime in base $b$}.
\begin{theorem}\label{thm}
Let $p$ be an odd prime. For each $a\in\mathbb{F}_p^\times$, set $f_a(X)=X^p-X+a$ and let $e$ denote the order of $f_{-1}(X)$. For each odd prime divisor $\ell\mid e$,
\begin{enumerate}
\item
if $\ell\nmid\tfrac{p^p-1}{e}$, then
\begin{align}\label{family}
\{(f_1(X^{\ell^m}),f_2(X^{\ell^m}),\dots,f_{p-1}(X^{\ell^m})): m\geq0\}
\end{align}
is an infinite family of twin prime tuples of odd degree over $\mathbb{F}_p$;
\item if $\ell\mid \tfrac{p^p-1}{e}$, then $\ell$ is a Wieferich prime in base $p$.
\end{enumerate}
\end{theorem}
We note that the conjecture $e=Q$ would in particular imply that for each $\ell\mid e$, (\ref{family}) forms an infinite family of twin prime tuples over $\mathbb{F}_p$. The proof of Theorem \ref{thm} is elementary; we postpone it to Section \ref{sec3} and discuss here the problem of the existence of Wieferich primes.
The fame of Wieferich primes in number theory owes to their appearance in work on Fermat's last theorem. In 1909, Wieferich \cite{Wi} proved that if the first case of Fermat's last theorem is false, i.e., if $X^p+Y^p=Z^p$ is solvable in positive integers $X$, $Y$, $Z$ for an odd prime $p$ such that $(p,XYZ)=1$, then $p$ must be a Wieferich prime in base 2. A year later, Mirimanoff reached the same conclusion for base 3. An arms race ensued to prove that, up to large $x$, no prime below $x$ is simultaneously a Wieferich prime in base 2 and in base 3. In fact, numerically, Wieferich primes are rare: in base 2, the only ones presently known \cite{DK} below $6.7\times 10^{15}$ are 1093 and 3511, while in base 47, there is simply no known Wieferich prime. Heuristically, if we consider $\tfrac{b^{\ell-1} -1}{\ell}$ as a random integer, the probability that $\ell\mid \tfrac{b^{\ell-1} -1}{\ell}$ is roughly $1/\ell$. Since
$$
\sum_{\ell\leq x} \frac{1}{\ell} \ll \log\log x,
$$
this heuristic suggests that the number of Wieferich primes up to $x$ in base $b$ is of the order of the iterated logarithm $\log\log x$. The iterated logarithm tends to $\infty$ as $x\to\infty$, but it does so very, very slowly; e.g., if $x=10^{100}$, then $\log\log x\approx 5.4$. For comparison, the number of atoms in the universe is roughly of the order of $10^{80}$. As such, we expect that for every base $b$, there are infinitely many Wieferich primes as well as infinitely many non-Wieferich primes. (The latter, see \cite{Sil}, is not even known unless one assumes the $abc$-conjecture.)
Fermat's last theorem is not the only place where Wieferich primes appear as obstructions. Fermat and Mersenne numbers, i.e., $F_n=2^{2^n}+1$ and $M_n=2^n-1$, $n\in\mathbb{N}$, are believed to be squarefree. In trying to prove this directly, one quickly sees that any prime factor $p$ such that $p^2$ divides either $F_n$ or $M_n$ must be a Wieferich prime in base 2. Theorem \ref{thm} showcases a similar phenomenon.
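The numerical rarity is easy to reproduce with a direct scan (helper names are our own); the test $\ell^2\mid b^{\ell-1}-1$ is a single modular exponentiation.

```python
def is_prime(n):
    # Trial division; adequate for small bounds.
    if n < 2:
        return False
    k = 2
    while k * k <= n:
        if n % k == 0:
            return False
        k += 1
    return True

def wieferich_primes(base, bound):
    # Primes l <= bound with (base, l) = 1 and base^(l-1) = 1 (mod l^2).
    return [l for l in range(2, bound + 1)
            if is_prime(l) and base % l != 0 and pow(base, l - 1, l * l) == 1]

print(wieferich_primes(2, 4000))  # -> [1093, 3511]
```

In base 3 the same scan finds $\ell=11$ (indeed $3^5=243\equiv1$ mod $121$), the smallest known base-3 Wieferich prime.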
\section{Proof of Theorem \ref{thm}.}\label{sec3}
We quickly recall elements of notation. Let $p$ be an odd prime. Let $f_a(X)=X^p-X+a$, for $a\in\mathbb{F}_p^\times$. The order $e$ of $f(X):=f_{-1}(X)$
satisfies $e\mid Q=\tfrac{p^p-1}{p-1}$ and $(e,2p(p-1))=1$. Fix $\ell\mid e$ prime, and note that $\ell$ is necessarily odd.
Suppose first that $\ell\mid\tfrac{p^p-1}{e}$. In particular, $\ell^2\mid p^p-1$. Since $(\ell,p)=1$ and $p^p\equiv 1$ (mod $\ell$), Fermat's little theorem implies that $p\mid \ell-1$. Then
$$
p^{\ell-1}= p^{p\cdot(\ell-1)/p} = (1+(p^p-1))^{(\ell-1)/p}\equiv 1\quad (\text{mod }\ell^2),
$$
which proves that $\ell$ is a Wieferich prime in base $p$. For readability, we break down the rest of the proof into the following two lemmata.
\begin{lemma}\label{A}
Fix $m\geq0$. If $f(X^{\ell^m})$ is irreducible over $\mathbb{F}_p$, then each polynomial in the tuple
$
(f_1(X^{\ell^m}),f_2(X^{\ell^m}),\dots,f_{p-1}(X^{\ell^m}))
$
is irreducible over $\mathbb{F}_p$.
\end{lemma}
\begin{proof}
Fix $a\in\mathbb{F}_p^\times$. Choose $b\in\mathbb{F}_p^\times$ such that $ba=-1$ in $\mathbb{F}_p$. Then
$$
b\cdot f_a(b^{-1}X^{\ell^m}) = b\left(b^{-p}X^{p\ell^m}-b^{-1}X^{\ell^m}+a\right) = X^{p\ell^m}-X^{\ell^m}+ba =f(X^{\ell^m}).
$$
Since $(\ell,p-1)=1$, we have $b^{-1}=c^{\ell^m}$ for some $c\in\mathbb{F}_p^\times$. If $f_a(X^{\ell^m})$ is reducible, then there exist two nonconstant polynomials $g(X), h(X)\in\mathbb{F}_p[X]$ such that
$$
f(X^{\ell^m})=b\cdot f_a((cX)^{\ell^m})=b\cdot g(cX)h(cX).
$$
Hence if $f(X^{\ell^m})$ is irreducible, then so is $f_a(X^{\ell^m})$.
\end{proof}
\begin{lemma}\label{B}
Fix $m\geq0$, and let $\beta:=\beta_{m,\ell}$ be a root of $f(X^{\ell^m})$. Then the multiplicative order $\mathrm{ord}(\beta)$ of $\beta$ in $\mathbb{F}_p(\beta)$ is $e\ell^m$. Moreover, $[\mathbb{F}_{p}(\beta):\mathbb{F}_p]=p\ell^m$ if and only if $\ell\nmid \tfrac{p^p-1}{e}.$
\end{lemma}
\begin{proof}
Let $d:=d_{m,\ell}$ be the smallest positive integer such that $\mathbb{F}_{p^d}=\mathbb{F}_p(\beta)$. Equivalently, $d=[\mathbb{F}_p(\beta):\mathbb{F}_p]$. Since $\mathrm{ord}(\beta)\mid |\mathbb{F}_{p^d}^\times|=p^d-1$, we note that $d$ is also the order of $p$ in $(\mathbb{Z}/\mathrm{ord}(\beta)\mathbb{Z})^\times$. To determine $\mathrm{ord}(\beta)$, we first observe that
$$
\mathrm{ord}(\beta^{\ell^m})=\frac{\mathrm{ord}(\beta)}{(\ell^m,\mathrm{ord}(\beta))},
$$
which is a standard result for cyclic groups. Since $f(\beta^{\ell^m})=0$ and the order of $f(X)$ is $e$, we have
$$
e = \frac{\mathrm{ord}(\beta)}{(\ell^m,\mathrm{ord}(\beta))}.
$$
It follows that $\mathrm{ord}(\beta)=e\ell^k$ with $\ell^k=(\ell^m,\mathrm{ord}(\beta))$ for some $0\leq k\leq m$. Since $\ell\mid e$, if $k<m$ then $\ell^{k+1}$ would divide $(\ell^m,e\ell^k)=\ell^k$, a contradiction. We conclude that $\mathrm{ord}(\beta)=e\ell^m$.
Therefore $d_{m,\ell}$ is the order of $p$ in $(\mathbb{Z}/e\ell^m\mathbb{Z})^\times$.
We claim that
\begin{align}\label{key cong}
p^{p\ell^m} \equiv 1 +(p-1)Q\ell^m\ (\text{mod }e\ell^{m+1})
\end{align}
for each $m\geq0$. If $m=0$, this follows from the definition of $Q$. The claim then follows by induction, using the binomial theorem and that $\ell\mid\binom{\ell}{j}$ for each $0<j<\ell$. With this congruence relation in hand, we can now show by induction over $m\geq0$ that $d_{m,\ell}=p\ell^m$ if and only if $\ell\nmid \tfrac{p^p-1}{e}$.
If $m=0$, we have $p^p\equiv 1$ (mod $e$) and $p\not\equiv 1$ (mod $e$), since $e>1$ and $(e,p-1)=1$. Hence $d_{0,\ell}=p$. For $m>0$, we have $p^{p\ell^m} \equiv 1$ (mod $e\ell^m$), and hence $d_{m,\ell}\mid p\ell^m$. On the other hand, by definition of $d_{m,\ell}$, we have $p^{d_{m,\ell}}\equiv 1$ (mod $e\ell^{m-1}$), and hence $d_{m-1,\ell}\mid d_{m,\ell}$. The induction hypothesis $d_{m-1,\ell}=p\ell^{m-1}$ implies that $d_{m,\ell}$ is equal to either $p\ell^m$ or $p\ell^{m-1}$. To rule out the latter option, we deduce from (\ref{key cong}) that
$$
p^{p\ell^{m-1}}\equiv 1+(p-1)Q\ell^{m-1}\equiv 1\ (\text{mod }e\ell^m)
$$
if and only if $\ell\mid \tfrac{p^p-1}{e}$.
\end{proof}
If $\ell\nmid\tfrac{p^p-1}{e}$, then by Lemma \ref{B}, the minimal polynomial of $\beta$ over $\mathbb{F}_p$ has degree $p\ell^m$. Since this is also the degree of $f(X^{\ell^m})$, by the uniqueness of the minimal polynomial, we conclude that $f(X^{\ell^m})$ is irreducible. Lemma \ref{A} then finishes the proof of Theorem \ref{thm}.
\begin{acknowledgment}{Acknowledgments.}
The authors thank the reviewers for their many helpful comments and suggestions.
\end{acknowledgment}
| {
"timestamp": "2021-01-21T02:15:18",
"yymm": "2101",
"arxiv_id": "2101.08038",
"language": "en",
"url": "https://arxiv.org/abs/2101.08038",
"abstract": "While the twin prime conjecture is still famously open, it holds true in the setting of finite fields: There are infinitely many pairs of monic irreducible polynomials over $\\mathbb{F}_q$ that differ by a fixed constant, for each $q \\geq 3$. Elementary, constructive proofs were given for different cases by Hall and Pollack. In the same spirit, we discuss the construction of a further infinite family of twin prime tuples of odd degree, and its relations to the existence of certain Wieferich primes and to arithmetic properties of the combinatorial Bell numbers.",
"subjects": "Number Theory (math.NT)",
"title": "Infinitely many twin prime polynomials of odd degree",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.98504291340685,
"lm_q2_score": 0.8333245994514082,
"lm_q1q2_score": 0.8208604912572115
} |
https://arxiv.org/abs/1101.0612 | Optimal Meshes for Finite Elements of Arbitrary Order | Given a function f defined on a bidimensional bounded domain and a positive integer N, we study the properties of the triangulation that minimizes the distance between f and its interpolation on the associated finite element space, over all triangulations of at most N elements. The error is studied in the Lp norm and we consider Lagrange finite elements of arbitrary polynomial degree m-1. We establish sharp asymptotic error estimates as N tends to infinity when the optimal anisotropic triangulation is used, recovering the earlier results on piecewise linear interpolation, an improving the results on higher degree interpolation. These estimates involve invariant polynomials applied to the m-th order derivatives of f. In addition, our analysis also provides with practical strategies for designing meshes such that the interpolation error satisfies the optimal estimate up to a fixed multiplicative constant. We partially extend our results to higher dimensions for finite elements on simplicial partitions of a domain of arbitrary dimension.Key words : anisotropic finite elements, adaptive meshes, interpolation, nonlinear approximation. | \section{Introduction.}
\subsection{Optimal mesh adaptation}
In finite element approximation, a usual distinction is between {\it uniform} and
{\it adaptive} methods. In the latter, the elements defining the mesh
may vary strongly in size and shape for a better adaptation
to the local features of the approximated function $f$. This naturally raises
the objective of characterizing and constructing an {\it optimal mesh}
for a given function $f$.
Note that depending on the context, the function $f$ may be fully
known to us, either through an explicit formula or a discrete sampling,
or observed through noisy measurements,
or implicitly defined as the solution of a given
partial differential equation.
In this paper, we assume that $f$ is a function
defined on a polygonal bounded domain $\Omega\subset {\rm \hbox{I\kern-.2em\hbox{R}}}^2$.
For a given conforming triangulation ${\cal T}$ of $\Omega$,
and an arbitrary but fixed
integer $m>1$, we denote by $I_{m,{\cal T}}$
the standard interpolation operator onto the space of
Lagrange finite elements of degree $m-1$
associated with ${\cal T}$.
Given a norm $X$ of interest and a number
$N>0$, the objective of finding the optimal mesh for $f$ can be
formulated as solving the optimization problem
$$
\min_{\#({\cal T})\leq N} \|f-I_{m,{\cal T}}f\|_X,
$$
where the minimum is taken over all conforming triangulations of
cardinality at most $N$. We denote by ${\cal T}_N$ a minimizer of the
above problem.
Our first objective is to establish
sharp asymptotic error estimates that precisely describe the
behavior of $\|f-I_{m,{\cal T}}f\|_X$ as $N\to +\infty$. Estimates of that type
were obtained in \cite{BBLS,B,CSX} in the particular case of linear
finite elements ($m-1=1$) and with the error measured in $X=L^p$. They have the form
\begin{equation}
\limsup_{N\to +\infty} \(N\min_{\#({\cal T})\leq N}\|f-I_{m,{\cal T}}f\|_{L^p}\) \leq C\| \sqrt{|\det(d^2f)|}\|_{L^\tau},\;\; \frac 1 \tau=\frac 1 p+1,
\label{optiaffine}
\end{equation}
which reveals that the convergence rate is governed by the quantity $\sqrt{|\det(d^2f)|}$, which depends
nonlinearly on the Hessian $d^2f$. This is heavily tied to the fact that we allow
triangles with possibly highly anisotropic shapes.
In the present work, the polynomial degree $m-1$ is arbitrary
and the quantities governing the convergence rate will therefore depend nonlinearly on the
$m$-th order derivative $d^mf$.
Our second objective is to propose simple and practical ways
of designing meshes which behave similarly to the optimal one,
in the sense that they satisfy the sharp error estimate up
to a fixed multiplicative constant.
\subsection{Main results and layout}
We denote by ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$ the space of homogeneous polynomials of degree $m$
$$
{\rm \hbox{I\kern-.2em\hbox{H}}}_m:={\rm Span}\{ x^ky^l\sep k+l=m\}.
$$
For any triangle $T$, we denote by $I_{m,T}$ the local interpolation
operator acting from $C^0(T)$ onto ${\rm \hbox{I\kern-.2em\hbox{P}}}_{m-1}$, the space of
polynomials of total degree at most $m-1$. The image of $v\in C^0(T)$ under this operator is defined by the
conditions
$$
I_{m,T}v(\gamma)=v(\gamma),
$$
for all points $\gamma\in T$ with barycentric coordinates in
the set $\{0, \frac 1 {m-1},\frac 2 {m-1},\cdots,1\}$. We denote by
$$
e_{m,T}(v)_p:=\|v-I_{m,T}v\|_{L^p(T)}
$$
the interpolation error measured in the norm $L^p(T)$. We also denote
by
$$
e_{m,{\cal T}}(v)_p:=\|v-I_{m,{\cal T}}v\|_{L^p}=\left(\sum_{T\in{\cal T}}e_{m,T}(v)_p^p\right)^{\frac 1 p},
$$
the global interpolation error for a given triangulation ${\cal T}$, with the standard modification if $p=\infty$.
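As a quick numerical illustration (our own sketch, not from the paper), take $m=2$ (piecewise linear interpolation), $p=\infty$, $f(x,y)=x^2$, and the reference triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$: the interpolant matches $f$ at the three vertices, hence equals the affine function $(x,y)\mapsto x$, and the error $|x^2-x|$ is maximized at $x=1/2$ with value $1/4$.

```python
# Sup-norm interpolation error e_{2,T}(f)_infty for f(x, y) = x^2 on the
# reference triangle T = {x >= 0, y >= 0, x + y <= 1}.  Here I_{2,T} f is the
# affine function (x, y) -> x, so f - I_{2,T} f = x^2 - x, independent of y.
N = 400
max_err = 0.0
for i in range(N + 1):
    for j in range(N + 1 - i):  # grid points (i/N, j/N) inside T
        x = i / N
        max_err = max(max_err, abs(x - x * x))
print(max_err)  # -> 0.25, attained at x = 1/2
```

The same sampling loop, applied to $v-I_{m,T}v$ for general $v$ and $T$, is a crude but serviceable way to estimate $e_{m,T}(v)_\infty$ numerically.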
A key ingredient in this paper is a function
defined by a {\it shape optimization problem}:
for any fixed $1\leq p\leq \infty$ and for any $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$, we define
\begin{equation}
K_{m,p}(\pi):=\inf_{|T|=1}e_{m,T}(\pi)_p.
\label{shapefunction}
\end{equation}
Here, the infimum is taken over all triangles of area $|T|=1$.
Note that from the homogeneity of $\pi$, we find that
\begin{equation}
\inf_{|T|=A}e_{m,T}(\pi)_p=K_{m,p}(\pi)A^{\frac m 2 + \frac 1 p}.
\label{shapeA}
\end{equation}
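Indeed, \iref{shapeA} follows from a one-line rescaling argument: writing $T=\sqrt A\, T'$ with $|T'|=1$, a change of variables in the $L^p$ norm and the $m$-homogeneity of $\pi$ give

```latex
$$
e_{m,\sqrt A\,T'}(\pi)_p
= A^{\frac 1 p}\, e_{m,T'}\big(\pi(\sqrt A\,\cdot)\big)_p
= A^{\frac m 2+\frac 1 p}\, e_{m,T'}(\pi)_p,
$$
```

and taking the infimum over all $T'$ with $|T'|=1$ yields \iref{shapeA}.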
This optimization problem thus gives the shape of the triangles
of a given area which is at best adapted
to the polynomial $\pi$ in the sense of minimizing the interpolation error
measured in $L^p$.
We refer to $K_{m,p}$ as the {\it shape function}.
We discuss in \S 2 the main properties of this function.
\newline
\newline
Our asymptotic error estimate for the optimal triangulation is given by
the following theorem.
\begin{theorem}
\label{maintheorem}
For any polygonal domain $\Omega\subset {\rm \hbox{I\kern-.2em\hbox{R}}}^2$, and any function
$f\in C^m(\Omega)$, there exists a sequence
of triangulations $\seqT$, with $\#({\cal T}_N)=N$ such that
\begin{equation}
\label{Cf}
\limsup_{N\ra \infty} N^{\frac m 2} e_{m,{\cal T}_N}(f)_p\leq
\left\|K_{m,p}\left(\frac{d^m f}{m!}\right)\right\|_{L^q(\Omega)},\ \frac 1 q := \frac m 2 +\frac 1 p
\end{equation}
\end{theorem}
An important feature of this estimate is the ``$\limsup$'' asymptotic operator. Recall that the upper limit of a sequence $(u_N)_{N\geq N_0}$ is defined by
$$
\limsup_N u_N := \lim_{N\to \infty} \sup_{n\geq N} u_n,
$$
and is in general strictly smaller than the supremum $\sup_{N\geq N_0} u_N$. It is still an open question to find an appropriate upper estimate for $\sup_N N^{m/2} e_{m,{\cal T}_N}(f)_p$ when optimally adapted anisotropic triangulations are used.
In the estimate \iref{Cf}, the $m$-th derivative $d^mf$ is identified with
a homogeneous polynomial in ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$:
$$
\frac{d^mf}{m!}\sim \sum_{k+l=m}\frac{\partial^m f}{\partial^kx\partial^l y}\frac{x^k}{k!}\frac{y^l}{l!}.
$$
In order to illustrate the sharpness of \iref{Cf}, we introduce
a slight restriction on sequences of triangulations, following
an idea in \cite{BBLS}: a sequence $\seqT$ of triangulations, such that $\#({\cal T}_N) = N$, is said to be \emph{admissible} if
\begin{equation}
\label{admissibilitycond}
\sup_{T\in {\cal T}_N} \diam(T) \leq C_AN^{-1/2},
\end{equation}
for some $C_A>0$ independent of $N$. The following theorem shows that the estimate
\iref{Cf} cannot be improved when we restrict our attention to admissible sequences.
It also shows that this class is reasonably large in the sense that
\iref{Cf} is ensured to hold up to small perturbation.
\begin{theorem}
\label{optitheorem}
Let $\Omega\subset {\rm \hbox{I\kern-.2em\hbox{R}}}^2$ be a compact polygonal domain, and $f\in C^m(\Omega)$.
Denote $ \frac 1 q := \frac m 2 +\frac 1 p$.
For all \emph{admissible} sequences of triangulations $\seqT$, one has
$$
\liminf_{N\ra \infty} N^{\frac m 2} e_{m,{\cal T}_N}(f)_p \geq \left\|K_{m,p}\left(\frac{d^m f}{m!}\right)\right\|_{L^q(\Omega)}.
$$
For all $\ve>0$, there exists an \emph{admissible} sequence of triangulations $({\cal T}_N^\ve)_{N\geq N_0}$, such that
$$
\limsup_{N\ra \infty} N^{\frac m 2} e_{m,{\cal T}_N^\ve}(f)_p \leq \left\|K_{m,p}\left(\frac{d^m f}{m!}\right)\right\|_{L^q(\Omega)}+\ve.
$$
\end{theorem}
Note that the sequences $({\cal T}_N^\ve)_{N\geq N_0}$ satisfy the admissibility condition \iref{admissibilitycond} with a constant $C_A(\ve)$ which may explode as $\ve\to 0$.
The proofs of both theorems are given in \S 3. These proofs reveal that the construction of the
optimal triangulation obeys two principles:
(i) the triangulation should {\it equidistribute} the local approximation error $e_{m,T}(f)_p$ between
each triangle and (ii) the aspect ratio of a triangle $T$ should be
{\it isotropic} with respect to a distorted
metric induced by the local value of $d^mf$ on $T$
(and therefore anisotropic in the sense of the euclidean metric).
Roughly speaking, the quantity $\|K_{m,p}\left(\frac{d^m f}{m!}\right)\|_{L^q(T)}$
controls the local interpolation $L^p$-error estimate on a triangle $T$ once this triangle
is optimized with respect to the local properties of $f$.
This type of estimate differs from those obtained in \cite{Ap} which hold for any $T$, optimized
or not, and involve the partial derivatives of $f$ in a local coordinate system which is
adapted to the shape of $T$.
The proof of the upper estimates in Theorem \ref{optitheorem} involves the construction
of an optimal mesh based on a patching strategy similar to \cite{B}.
However, inspection of the proof reveals that
this construction becomes effective only when
the number of triangles $N$ becomes very large. Therefore it
may not be useful in practical applications.
A more practical approach consists in deriving the above mentioned distorted metric from
the exact or approximate data of $d^mf$, using the following procedure.
To any $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$, we associate a
symmetric positive definite matrix $h_\pi\in S_2^+$. If $z\in \Omega$ and $d^m f(z)$ is close to $\pi$, then the triangle $T$ containing $z$ should be isotropic in the metric $h_\pi$.
The global metric is given
at each point $z$ by
$$
h(z)=s(\pi_z) h_{\pi_z},\;\; \pi_z=d^mf(z),
$$
where $s(\pi_z)$ is a scalar factor which depends on the desired accuracy
of the finite element approximation.
Once this metric has been properly identified, fast algorithms such as in \cite{Inria, Bamg, Peyre}
can be used to design a near-optimal mesh based on it. Recently in \cite{Shew,Bois}, several algorithms have been rigorously proved to terminate and produce good quality meshes.
Computing the map
\begin{equation}
\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m \mapsto h_\pi\in S_2^+,
\label{mappi}
\end{equation}
is therefore of key use in applications. This problem is well understood
in the case of linear elements ($m=2$): the matrix $h_\pi$ is then defined as the
absolute value (in the sense of symmetric matrices) of the matrix associated to
the quadratic form $\pi$. In contrast, the exact form of this map in the case $m\geq 3$
is not well understood.
In this paper, we propose
algebraic strategies for computing the map \iref{mappi} for $m=3$ which corresponds to
quadratic elements. These strategies
have been implemented in an open-source
Mathematica code \cite{sitejm}.
In a similar manner, we address the algebraic computation
of the shape function $K_{m,p}(\pi)$
from the coefficients of $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$, when $m\geq 3$.
All these questions are addressed in \S 4, 5 and 6.
In \S 4, we discuss the particular case of linear ($m=2$)
and quadratic ($m=3$) elements. In this case, it is possible
to obtain explicit formulas for $K_{m,p}(\pi)$ from the coefficients
of $\pi$. In the case $m=2$, this formula is of the form
$$
K_{2,p}(ax^2+2bxy+cy^2)=\sigma\sqrt{|b^2-ac|},
$$
where the constant $\sigma$ only depends on $p$ and the sign of $b^2-ac$,
and we therefore recover the known estimate \iref{optiaffine} from Theorem \ref{maintheorem}. The formula for $m=3$ involves the
discriminant of the third degree polynomial $d^3f$. Our analysis
also leads to an algebraic computation of the map \iref{mappi}.
We want to mention that a different strategy for
the construction of the distorted metric and
the derivation of error estimates for finite elements
of arbitrary order was proposed in
\cite{C3}. In this approach, the distorted
metric is obtained at a point $z\in\Omega$ by finding the largest
ellipse contained in a level set of the polynomial $d^mf_z$.
This optimization problem has connections with the one
that defines the shape function in \iref{shapefunction}
as we shall explain in \S 2. The approach
proposed in the present work in the case $m=3$ has the advantage of
avoiding the use of numerical optimization,
the metric being directly derived from the
coefficients of $d^mf$.
In \S 5, we address the case $m>3$. In this case, explicit formulas for $K_{m,p}(\pi)$
seem out of reach. However we can introduce explicit functions
$\Kpol_m(\pi)$ which are polynomials in the coefficients of $\pi$,
and are equivalent to $K_{m,p}(\pi)$, leading therefore to similar
asymptotic error estimates up to multiplicative constants. At the current stage,
we have not obtained a simple solution to the
algebraic computation of the map \iref{mappi} in the case $m>3$.
The derivation of $\Kpol_m$ is based on the
theory of invariant polynomials due to Hilbert. Let us mention
that this theory was also recently applied in \cite{OST} to image processing tasks
such as affine invariant edge detection and denoising.
We finally discuss in \S 6 the possible extension of our analysis to simplicial
elements in higher dimension. This extension is not straightforward except in the case of
linear elements ($m=2$).
\section{The shape function}
In this section, we establish several properties of the
function $K_{m,p}$ which will be of key use in the sequel.
We assume that $m\geq 2$ is an integer, and $p\in[1,\infty]$.
We equip the finite dimensional vector space ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$ with a norm $\|\cdot\|$
defined as the supremum of the coefficients
\begin{equation}
\label{normdef}
\text{ If } \pi(x,y)=\sum_{i=0}^m a_i x^i y^{m-i}, \text{ then } \|\pi\| = \max_{0\leq i\leq m} |a_i|.
\end{equation}
Our first result shows that the function $K_{m,p}$
vanishes on a set of polynomials which has a simple algebraic characterization.
\begin{prop}
\label{vanishprop}
We denote by $\mhalf:= \lfloor \frac m 2 \rfloor +1$ the smallest integer strictly larger than $m/2$.
The vanishing set of $K_{m,p}$ is the set of polynomials which have a generalized
root of multiplicity at least $\mhalf$:
$$
K_{m,p}(\pi)=0 \Leftrightarrow \pi(x,y) = (\alpha x +\beta y)^\mhalf \tilde \pi, \mbox{ for some }
\alpha,\beta \in {\rm \hbox{I\kern-.2em\hbox{R}}}\mbox{ and } \tilde\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_{m-\mhalf}.
$$
\end{prop}
\proof
We denote by $\Teq$ a fixed equilateral triangle of unit area, centered at $0$.
We first assume that $\pi(x,y)= (\alpha x+\beta y)^\mhalf \tilde \pi$. Then there exists a rotation
$R\in {\cal O}_2$ and $\hat \pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_{m-\mhalf}$ such that
$$
\pi\circ R(x,y)= x^\mhalf \hat \pi(x,y)=x^\mhalf \left( \sum_{i=0}^{m-s_m}a_ix^iy^{m-s_m-i} \right).
$$
Therefore denoting by $\phi_\ve$ the linear transform $\phi_\ve(x,y) = R\left(\ve x ,\frac y \ve\right)$ we obtain
$$
\|\pi\circ \phi_\ve\| = \max_{i=0,\cdots,m-s_m} |a_i| \ve^{2s_m-m+2i}
\leq \ve^{2\mhalf-m} \|\hat \pi\|
\to 0\; \; {\rm as}\;\; \ve\to 0.
$$
Consequently
$$
e_{m,\phi_\ve(\Teq)} (\pi)_p = e_{m,\Teq} (\pi\circ \phi_\ve)_p\to 0\; \; {\rm as}\;\; \ve\to 0.
$$
Since $|\det \phi_\ve| = 1$, the triangles $\phi_\ve(\Teq)$ have unit area,
and therefore $K_{m,p}(\pi) = 0$.
Conversely, let $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m\bs\{0\}$ be such that $K_{m,p}(\pi)=0$. Then there exists a sequence $(T_n)_{n\geq 0}$ of triangles with unit area such that $e_{m,T_n}(\pi)_p\to 0$.
We remark that the interpolation error $e_{m,T}(\pi)_p$ of $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$ is
invariant under a translation $\tau_h: z\mapsto z+h$ of the triangle $T$. Indeed
$\pi-\pi\circ\tau_h\in {\rm \hbox{I\kern-.2em\hbox{P}}}_{m-1}$ so that
\begin{equation}
\|\pi- I_{m,\tau_h(T)}\pi\|_{L^p(\tau_h(T))}=
\|\pi\circ \tau_h - I_{m,T}(\pi\circ\tau_h)\|_{L^p(T)}=
\|\pi- I_{m,T}\pi\|_{L^p(T)}.
\label{transinv}
\end{equation}
Hence we may assume that the barycenter of $T_n$ is $0$,
and write $T_n = \phi_n(\Teq)$, for some linear transform $\phi_n$ with $\det \phi_n = 1$.
Since $e_{m,\Teq}(\cdot)_p$ is a norm on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$, it follows
that $\pi\circ \phi_n\ra 0$.
The linear transform $\phi_n$ has a singular value decomposition
$$
\phi_n = U_n \circ D_n \circ V_n, \text{ where } U_n,V_n\in {\cal O}_2,\text{ and } D_n = \left(
\begin{array}{cc}
\ve_n & 0\\
0 & 1/\ve_n
\end{array}
\right)
,\ 0<\ve_n\leq 1.
$$
Since the orthogonal group ${\cal O}_2$ is compact, there is a uniform constant $C$ such that
$$
\|\pi \circ V\| \leq C \|\pi\|,\;\; \pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m,\; V\in {\cal O}_2.
$$
Therefore
$$
\|\pi\circ U_n \circ D_n\| = \|\pi\circ U_n \circ D_n \circ V_n \circ V^{-1}_n\|
\leq C\|\pi\circ\phi_n\| \to 0.
$$
Denoting by $a_{i,n}$ the coefficient of $x^i y^{m-i}$ in
$\pi\circ U_n$, we find that $a_{i,n}\ve_n^{2i-m}$ tends to $0$ as $n\to +\infty$.
In the case where $i<s_m$, this implies that $a_{i,n}$ tends to $0$ as $n\to +\infty$.
Moreover, again by compactness of ${\cal O}_2$,
we may assume, up to a subsequence, that $U_n$ converges to some $U\in {\cal O}_2$.
Denoting by $a_i$ the coefficient of $x^i y^{m-i}$ in
$\pi\circ U$, we thus find that $a_i=0$ if $i<s_m$.
This implies that $\pi\circ U (x,y) = x^\mhalf \hat \pi(x,y)$, which concludes the proof.
\hfill $\diamond$\\
\begin{remark}
In the simple case $m=2$, we infer from Proposition \ref{vanishprop}
that $K_{2,p}(\pi)=0$ if and only if $\pi$ is, up to a rotation, of the form $\pi(x,y)=x^2$,
and is therefore a function of a single variable. For such a function, the optimal
triangle $T$ degenerates to a segment in the $y$ direction, i.e.
optimal triangles of a fixed area tend to be infinitely long in one direction.
This situation also holds when $m>2$. Indeed, we see in the second part
in the proof of Proposition \ref{vanishprop} that if $\pi$ is a non-trivial polynomial such that
$K_{m,p}(\pi)=0$, then $\varepsilon_n$ must tend to $0$ as $n\to +\infty$. This shows that
$T_n=\phi_n(T)$ tends to be infinitely flat in the direction $Ue_y$ with $e_y=(0,1)$.
However, $K_{m,p}(\pi)=0$ no longer means that
$\pi$ is a polynomial in a single variable.
\end{remark}
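The degeneration mechanism in the proof of Proposition \ref{vanishprop} is easy to check numerically. The sketch below (the polynomials and the values of $\ve$ are illustrative choices) tracks the coefficient norm of $\pi\circ\phi_\ve$ under the area-preserving stretch $\phi_\ve(x,y)=(\ve x, y/\ve)$, for $m=3$ where $s_3=2$: the norm of $x^2y$, which has a root of multiplicity $s_3$, decays like $\ve$, while that of $x^3+y^3$, which has only simple roots, blows up under this particular stretch (consistent with $K_{3,p}(x^3+y^3)>0$, though a single stretch of course proves nothing by itself).

```python
def stretch_coeffs(coeffs, eps):
    """Coefficients of pi o phi_eps for phi_eps(x, y) = (eps*x, y/eps):
    the coefficient of x^i y^(m-i) is multiplied by eps^(2i - m)."""
    m = len(coeffs) - 1
    return [a * eps ** (2 * i - m) for i, a in enumerate(coeffs)]

def coeff_norm(coeffs):
    """The norm of the paper: the largest coefficient in absolute value."""
    return max(abs(a) for a in coeffs)

pi = [0.0, 0.0, 1.0, 0.0]    # x^2 y, in the basis (y^3, x y^2, x^2 y, x^3)
rho = [1.0, 0.0, 0.0, 1.0]   # x^3 + y^3
for eps in (0.1, 0.01, 0.001):
    print(eps, coeff_norm(stretch_coeffs(pi, eps)),
          coeff_norm(stretch_coeffs(rho, eps)))
```
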
\noindent
Our next result shows that the function $K_{m,p}$ is homogeneous,
and obeys an invariance property with respect to linear change of variables.
\begin{prop}
\label{propinvarK}
For all $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$, $\lambda\in {\rm \hbox{I\kern-.2em\hbox{R}}}$ and $\phi \in \mathcal L({\rm \hbox{I\kern-.2em\hbox{R}}}^2)$,
\begin{eqnarray}
\label{homogK}
K_{m,p}(\lambda \pi) &=& |\lambda| K_{m,p}(\pi)\\
\label{invK}
K_{m,p}(\pi\circ \phi) &=& |\det\phi|^{m/2} K_{m,p}(\pi)
\end{eqnarray}
\end{prop}
\proof
The homogeneity property \iref{homogK} is a direct consequence of the definition of $K_{m,p}$. In order to prove the invariance property \iref{invK}, we first assume that $\det \phi\neq 0$ and
we define $\tilde T:=\frac{\phi(T)}{\sqrt{|\det \phi|}}$
and $\tilde \pi (z):=\pi(\sqrt{|\det \phi|}z)= |\det \phi|^{m/2} \pi(z)$.
We now remark that the local interpolant $I_{m,T}$ commutes
with linear change of variables in the sense that, when $\phi$ is an invertible linear transform,
\begin{equation}
I_{m,T}(v\circ \phi)=(I_{m,\phi(T)} v) \circ \phi,
\label{intercommut}
\end{equation}
for every continuous function $v$ and every triangle $T$. Using this commutation formula we obtain
\begin{eqnarray*}
e_{m,T}(\pi\circ \phi)_p &=& |\det \phi|^{-1/p} e_{m,\phi(T)}(\pi)_p\\
&=& e_{m,\tilde T}(\tilde \pi)_p\\
&=& |\det \phi|^{m/2} e_{m,\tilde T} (\pi)_p.
\end{eqnarray*}
Since the map $T\mapsto \tilde T$ is a bijection of the set of triangles onto itself, leaving the area invariant, we obtain the relation \iref{invK} when $\phi$ is invertible.
When $\det \phi = 0$, the polynomial $\pi \circ \phi$ can be written $(\alpha x+\beta y)^m$
so that $K_{m,p}(\pi\circ \phi) = 0$ by Proposition \ref{vanishprop}.
\hfill $\diamond$\\
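The identity $e_{m,T}(\pi\circ \phi)_p = |\det \phi|^{-1/p}\, e_{m,\phi(T)}(\pi)_p$ derived in this proof can be sanity-checked numerically. The sketch below does this for linear elements ($m=2$) and $p=\infty$, where the factor $|\det\phi|^{-1/p}$ equals $1$; the map $\phi$, the triangle and the quadratic polynomial are arbitrary illustrative choices, and the $L^\infty$ errors are estimated on matching barycentric grids.

```python
def barycentric(T, x, y):
    """Barycentric coordinates of (x, y) with respect to T = [p1, p2, p3]."""
    (x1, y1), (x2, y2), (x3, y3) = T
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    l2 = ((x - x1) * (y3 - y1) - (x3 - x1) * (y - y1)) / det
    l3 = ((x2 - x1) * (y - y1) - (x - x1) * (y2 - y1)) / det
    return 1.0 - l2 - l3, l2, l3

def err_inf(T, v, n=48):
    """Sampled L^inf error of linear interpolation (m = 2) on T."""
    worst = 0.0
    for i in range(n + 1):
        for j in range(n + 1 - i):
            lams = (i / n, j / n, 1.0 - i / n - j / n)
            x = sum(l * px for l, (px, _) in zip(lams, T))
            y = sum(l * py for l, (_, py) in zip(lams, T))
            iv = sum(l * v(px, py) for l, (px, py) in zip(barycentric(T, x, y), T))
            worst = max(worst, abs(v(x, y) - iv))
    return worst

phi = lambda x, y: (2.0 * x + y, 0.5 * y)          # det(phi) = 1
T = [(0.0, 0.0), (1.0, 0.0), (0.3, 1.2)]
phiT = [phi(px, py) for (px, py) in T]
pi2 = lambda x, y: x * x - 3.0 * x * y + 2.0 * y * y

# The two printed errors agree, since phi maps the barycentric grid of T
# onto that of phi(T) and interpolation commutes with linear maps.
print(err_inf(T, lambda x, y: pi2(*phi(x, y))), err_inf(phiT, pi2))
```
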
The functions $K_{m,p}$ are not necessarily continuous, but the
following properties will be sufficient for our purposes.
\begin{prop}
\label{propsemicont}
The function $K_{m,p}$ is upper semi-continuous in general, and continuous if $m=2$ or $m$ is odd.
Moreover the following property holds:
\begin{equation}
\text{If } \pi_n\ra \pi \text{ and } K_{m,p}(\pi_n) \ra 0 \text{ then } K_{m,p}(\pi) = 0.
\label{contzero}
\end{equation}
\end{prop}
\proof
The upper semi-continuity property comes from the fact that the infimum of a family of upper
semi-continuous functions is an upper semi-continuous function.
We apply this fact to the family of functions $\pi \mapsto e_{m,T}(\pi)_p$, indexed by the triangle $T$, which are obviously continuous.
For any polynomial $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_2$, $\pi = a x^2+2 b xy+ c y^2$, we define $\det \pi = ac-b^2$. It will be shown in \S4 that $K_{2,p}(\pi)= \sigma_p\sqrt{|\det \pi|}$, where $\sigma_p$ only depends on the sign of $\det \pi$. This clearly implies the continuity of $K_{2,p}$.
We next turn to the proof of the continuity of $K_{m,p}$ for odd $m$.
Consider a polynomial $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$. If $K_{m,p}(\pi)=0$ then the upper semi-continuity of $K_{m,p}$, combined with its non-negativity, implies that it is continuous at $\pi$.
Otherwise, assume that $K_{m,p}(\pi)>0$. Consider a sequence $\pi_n\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$ converging to $\pi$, and a sequence $\phi_n$ of linear transformations satisfying $\det \phi_n = 1$, and such that
$$
\lim_{n\to +\infty} e_{\phi_n(\Teq)}(\pi_n) = \liminf_{\pi^*\to \pi} K_{m,p}(\pi^*)
:= \lim_{r\to 0} \inf_{\|\pi^*- \pi\|\leq r} K_{m,p}(\pi^*).
$$
If the sequence $\phi_n$ admits a converging subsequence $\phi_{n_k}\to \phi$, it follows that
$$
K_{m,p}(\pi) \leq e_{\phi(\Teq)}(\pi) = \lim_{k\to+\infty} e_{\phi_{n_k}(\Teq)}(\pi_{n_k}) = \liminf_{\pi^*\to \pi} K_{m,p}(\pi^*).
$$
This shows that $K_{m,p}$ is lower semi-continuous at $\pi$, and therefore continuous at $\pi$ since we already know that $K_{m,p}$ is upper semi-continuous.
If $\phi_n$ does not admit any converging subsequence, then we invoke the singular value decomposition
$\phi_n = U_n\circ D_n \circ V_n$, where $U_n,V_n\in {\cal O}_2$ and $D_n = \diag(\ve_n,\frac 1 {\ve_n})$ with $0<\varepsilon_n\leq 1$. (Here and below, $\diag(a,b)$ denotes the diagonal matrix with entries $a$ and $b$.)
The compactness of ${\cal O}_2$ implies that $U_n$ admits a converging subsequence $U_{n_k}\to U$.
In particular $\pi_{n_k}\circ U_{n_k}$ converges to $\pi\circ U$. Therefore,
denoting by $a_{i,n}$ the coefficient of $x^i y^{m-i}$ in $\pi_n \circ U_n$, the subsequence $a_{i,n_k}$ converges to the coefficient $a_i$ of $x^i y^{m-i}$ in $\pi\circ U$.
Observe also that $\varepsilon_n\to 0$, otherwise some converging
subsequence could be extracted from $\phi_n$.
Since $e_{\phi_n(\Teq)}(\pi_n) = e_{\Teq}(\pi_n\circ\phi_n)$, the sequence of polynomials $\pi_n\circ\phi_n$ is uniformly bounded, and so is the sequence $\pi_n\circ U_n\circ D_n$. Therefore
the sequences $(a_{i,n} \ve_n^{2i-m})_{n\geq 0}$ are uniformly bounded.
It follows that $a_i = 0$ when $i< \frac m 2$. Since $m$ is odd, this implies that $\pi\circ U(x,y) = x^{s_m} \tilde\pi(x,y)$ and Proposition \ref{vanishprop} implies that $K_{m,p}(\pi) = 0$ which contradicts the hypothesis $K_{m,p}(\pi)>0$.
Last, we prove property \iref{contzero}.
The assumption $K_{m,p}(\pi_n)\to 0$ is equivalent to the existence of
a sequence $T_n=\phi_n(\Teq)$ with $\det \phi_n = 1$ such that
$e_{m,T_n}(\pi_n)_p \to 0$. Reasoning in a similar way as in
the proof of Proposition \ref{vanishprop}, we first obtain that $\pi_n\circ \phi_n\to 0$,
and we then invoke the singular value decomposition of $\phi_n$
to build a converging sequence of orthogonal matrices
$U_n\ra U$ and a sequence $0<\ve_n \leq 1$ such that if $a_{i,n}$ is the coefficient of
$x^i y^{m-i}$ in $\pi_n\circ U_n$, we have $a_{i,n} \ve_n^{2i-m} \to 0$.
When $i<s_m$, it follows that $a_{i,n}\to 0$ and therefore $\pi\circ U (x,y)= x^{s_m} \hat \pi(x,y)$. The result follows from Proposition \ref{vanishprop}.
\hfill $\diamond$\\
We finally make a connection between the shape function
and the approach developed in \cite{C3}.
For all $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$, we denote by $\Lambda_\pi$ the sub-level set of $|\pi|$ at the value $1$,
\begin{equation}
\Lambda_\pi = \{(x,y)\in {\rm \hbox{I\kern-.2em\hbox{R}}}^2,\ |\pi (x,y)|\leq 1\}.
\label{lambdapi}
\end{equation}
We now define
\begin{equation}
\label{defKE}
K^\cE_m(\pi) = \left(\sup_{E\in \cE,\ E\subset \Lambda_\pi} |E|\right)^{-m/2},
\end{equation}
where the supremum is taken over the set $\cE$ of all ellipses centered at $0$.
The optimization problem defining $K^\cE_m$ is equivalent to
\begin{equation}
\label{OptimEll}
\inf \{\det H\sep H\in S_2^+ \text{ and } \forall z \in {\rm \hbox{I\kern-.2em\hbox{R}}}^2, \<Hz,z\> \geq |\pi(z)|^{2/m}\},
\end{equation}
where $S_2^+$ is the cone of $2\times 2$ symmetric positive definite matrices.
The minimizing ellipse $E^*$ is then given by $\{ \<Hz,z\>\leq 1\}$.
The optimization problem described in \iref{OptimEll}
is quadratic in dimension $2$, and subject to (infinitely many) linear constraints.
This apparent simplicity is counterbalanced by the fact that it is non-convex. In
particular, it need not have a unique solution, and may have no solution at all.
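To make the definition \iref{defKE} concrete, here is a brute-force numerical sketch for $m=2$ and the illustrative choice $\pi(x,y)=x^2+2y^2$. Since this $\pi$ is axis-aligned and $\Lambda_\pi$ is convex, it suffices to search over axis-aligned ellipses and to test containment on the boundary; the exact value is $K^\cE_2(\pi)=|\Lambda_\pi|^{-1}=\sqrt 2/\pi$, since the largest centered ellipse inscribed in the ellipse $\Lambda_\pi$ is $\Lambda_\pi$ itself. The grid resolutions are arbitrary.

```python
import math

# pi(x, y) = x^2 + 2 y^2, an illustrative homogeneous polynomial of degree 2.
pi2 = lambda x, y: x * x + 2.0 * y * y

# Boundary directions of the unit circle, used to test ellipse containment;
# since Lambda_pi is convex, E is inscribed iff its boundary lies in Lambda_pi.
bnd = [(math.cos(2 * math.pi * k / 180), math.sin(2 * math.pi * k / 180))
       for k in range(180)]

def inscribed(A, B):
    """Is the axis-aligned ellipse {(A s, B t): s^2 + t^2 <= 1} in Lambda_pi?"""
    return all(pi2(A * c, B * s) <= 1.0 for (c, s) in bnd)

best_area = max(math.pi * A * B
                for A in (i / 100 for i in range(1, 101))
                for B in (j / 100 for j in range(1, 101))
                if inscribed(A, B))

K_est = best_area ** (-1.0)            # (sup |E|)^{-m/2} with m = 2
print(K_est, math.sqrt(2) / math.pi)   # estimate vs. exact value
```
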
\begin{prop}
\label{propequivEllTri}
On ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$, one has the equivalence
$$
cK^\cE_m \leq K_{m,p} \leq C K^\cE_m
$$
with constant $0<c\leq C$ independent of $p$.
\end{prop}
\proof Let $\Teq$ denote an equilateral triangle of unit area,
and $B$ its circumscribed disk. It is easy to see that the inscribed disk is $B/2$.
We first show that $K_{m,p} \leq CK_{m}^{\cE}$.
Let $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$, and let $E_n$ be a sequence of ellipses inscribed in $\Lambda_\pi$
and such that $|E_n|$ tends to $\sup \{|E|\sep E\in \cE,\ E\subset \Lambda_\pi\}$ as $n\to +\infty$.
We write $E_n = \lambda_n \phi_n(B)$, where $\phi_n$ is a linear transform such that
$\det \phi_n=1$ and $\lambda_n>0$. We define the triangle $T_n = \phi_n(\Teq)$ which
satisfies $|T_n|=1$. We then have
\begin{eqnarray*}
K_{m,p}(\pi) &\leq &
\|\pi - I_{m,T_n} \pi\|_{L^p(T_n)} \\
&= & \|\pi\circ\phi_n - (I_{m,T_n} \pi)\circ\phi_n\|_{L^p(\Teq)}\\
&= & \|\pi\circ\phi_n - I_{m,\Teq} (\pi\circ\phi_n)\|_{L^p(\Teq)}\\
&\leq & \|\pi\circ\phi_n - I_{m,\Teq} (\pi\circ\phi_n)\|_{L^\infty(\Teq)},
\end{eqnarray*}
where we have used the commutation formula \iref{intercommut}.
Remarking that $I_{m,\Teq}$ is a continuous
operator from ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$ to ${\rm \hbox{I\kern-.2em\hbox{P}}}_{m-1}$ in the sense of any norm since these
spaces are finite dimensional, we thus obtain
\begin{eqnarray*}
K_{m,p}(\pi)
& \leq & C_1 \|\pi\circ\phi_n\|_{L^\infty(\Teq)}\\
& \leq & C_1 \|\pi\circ\phi_n\|_{L^\infty(B)}\\
&=& C_1 \|\pi\|_{L^\infty(\phi_n(B))}\\
&=& C_1 \lambda_n^{-m} \|\pi\|_{L^\infty(E_n)}\\
&\leq & C_1 \left(\frac{|E_n|}{|B|}\right)^{-m/2},
\end{eqnarray*}
where we have used the fact that $|\pi|\leq 1$ in $E_n\subset \Lambda_\pi$.
Letting $n\to +\infty$, we obtain that $K_{m,p}(\pi)\leq C K_m^\cE(\pi)$ with
$C= C_1 |B|^{m/2}$.
We next prove that $cK_m^\cE \leq K_{m,p}$.
Let $T_n$ be a sequence of triangles of unit area such that
$e_{m,T_n}(\pi)_p$ tends to $K_{m,p}(\pi)$ as $n\to +\infty$.
As already remarked in \iref{transinv}
the interpolation error is invariant by translation.
We may therefore assume that the triangles $T_n$ have their barycenter at the origin.
Then there exist linear transforms $\phi_n$ with $\det \phi_n=1$, such that $T_n = \phi_n(\Teq)$.
We now write
\begin{eqnarray*}
\|\pi\|_{L^\infty(\phi_n(B/2))} &\leq & \|\pi\|_{L^\infty(T_n)}\\
&=& \|\pi\circ\phi_n\|_{L^\infty(\Teq)}\\
&\leq & C_2 e_{m,\Teq}(\pi\circ\phi_n)_1\\
&\leq & C_2 e_{m,\Teq}(\pi\circ\phi_n)_p\\
& = & C_2 e_{m,T_n}(\pi)_p,
\end{eqnarray*}
where we have used the fact that $\|\cdot\|_{L^\infty(\Teq)}$
and $e_{m,\Teq}(\cdot)_1$ are equivalent norms on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$, and that $e_{m,\Teq}(\cdot)_p$ is an increasing function of $p$ since $|\Teq| = 1$. By homogeneity, it
follows that if $\lambda_n:= (C_2 e_{m,T_n}(\pi)_p)^{-1/m}$, we have
$$
\|\pi\|_{L^\infty(\lambda_n \phi_n(B/2))} \leq \lambda_n^m C_2 e_{m,T_n}(\pi)_p=1.
$$
Therefore the ellipse $E_n:=\lambda_n \phi_n(B/2)$
is contained in $\Lambda_\pi$, so that
$$
K_m^\cE(\pi) \leq |E_n|^{-m/2} =\left(\lambda_n^2 \frac {|B|} 4\right)^{-m/2}=C_2\left(\frac4 {|B|} \right)^{m/2} e_{m,T_n}(\pi)_p.
$$
Letting $n\to +\infty$, we obtain that $cK_m^\cE \leq K_{m,p}$
with $c=C_2^{-1}\left(\frac {|B|} 4\right)^{m/2}$.
\hfill $\diamond$\\
\begin{remark}
Since $K_{m,p}$ and $K_m^\cE$ are equivalent, they must vanish on the
same set, and therefore Proposition \ref{vanishprop} is also valid for $K_m^\cE$.
It is also easy to see that $K_m^\cE$ satisfies the homogeneity and invariance
properties stated for $K_{m,p}$ in \iref{homogK} and \iref{invK}, as well as
the continuity properties stated in Proposition \ref{propsemicont}.
\end{remark}
\begin{remark}
The continuity of the functions $K_{m,p}$ and $K_m^\cE$ can be established
when $m$ is odd or equal to $2$, as shown by Proposition \ref{propsemicont}, but seems to fail otherwise. In particular, direct computation shows that $K_4^\cE(x^2y^2-\ve y^4)$ is independent of
$\ve>0$ and strictly smaller than $K_4^\cE(x^2 y^2)$.
Therefore $K_4^\cE$ is upper semi-continuous but discontinuous at the point $x^2 y^2\in {\rm \hbox{I\kern-.2em\hbox{H}}}_4$.
\end{remark}
\section{Optimal estimates}
This section is devoted to the proofs of our main theorems, starting with the lower estimate of Theorem \ref{optitheorem}, and continuing with the upper estimates involved in both Theorems \ref{maintheorem} and \ref{optitheorem}.
Throughout this section, for the sake of notational simplicity,
we fix the parameters $m$ and $p$ and use the shorthand
$$
K=K_{m,p}\;\; {\rm and}\;\; e_T(\pi) = e_{m,T}(\pi)_p.
$$
For each point $z\in \Omega$ we define
$$
\pi_z := \frac{d^mf_z}{m!} \in {\rm \hbox{I\kern-.2em\hbox{H}}}_m,
$$
where $f\in C^m(\Omega)$ is the function in the statement of the theorems. We
denote by
$$
\omega(r):=\sup_{\|z-z'\|\leq r} \|\pi_z-\pi_{z'}\|,
$$
the modulus of continuity of $z\mapsto \pi_z$
with the norm $\|\cdot\|$ defined by \iref{normdef}.
Note that $\omega(r)\to 0$ as $r\to 0$.
\subsection{Lower estimate}
In this proof we will use a lower bound on the local interpolation error.
\begin{prop}
\label{errorfPz}
Assume that $1\leq p<\infty$. There exists a constant $C>0$, depending on $f$ and $\Omega$,
such that for every triangle $T\subset \Omega$ and every $z\in T$,
\begin{equation}
e_T(f)^p \geq K^p(\pi_z) |T|^{\frac{mp} 2+1} - C(\diam T)^{mp} |T| \omega(\diam T).
\label{localbelow}
\end{equation}
\end{prop}
\proof
Denoting by $\mu_z\in {\rm \hbox{I\kern-.2em\hbox{P}}}_m$ the Taylor expansion of $f$ at the point $z$ up to degree $m$, we obtain
$$
f(z+u) - \mu_z(z+u) = m \int_{t=0}^1 (\pi_{z+tu}(u)-\pi_z(u))(1-t)^{m-1} dt,
$$
and therefore
$$
\|f-\mu_z\|_{L^\infty(T)} \leq C_0\diam(T)^m \omega(\diam(T)),
$$
where $C_0$ is a fixed constant.
By construction $\pi_z$ is the homogeneous part of $\mu_z$ of degree $m$, and therefore $\mu_z-\pi_z\in {\rm \hbox{I\kern-.2em\hbox{P}}}_{m-1}$. It follows that for any triangle $T$, we have
\begin{equation}
\label{eqMuPi}
\mu_z - I_{m,T}\mu_z = \pi_z -I_{m,T} \pi_z.
\end{equation}
We therefore obtain
\begin{eqnarray*}
|e_T(f)- e_T(\pi_z)| &\leq & \|(f-I_{m,T}f) - (\pi_z-I_{m,T}\pi_z)\|_{L^p(T)} \\
&\leq & |T|^{1/p} \|(f-I_{m,T}f) - (\mu_z-I_{m,T}\mu_z)\|_{L^\infty(T)} \\
&= & |T|^{1/p} \|(I-I_{m,T})(f-\mu_z)\|_{L^\infty(T)} \\
&\leq & C_1|T|^{1/p} \|f-\mu_z\|_{L^\infty(T)} \\
&\leq & C_0C_1 |T|^{1/p} \diam(T)^m \omega(\diam(T))
\end{eqnarray*}
where $C_1$ is the norm of the operator $I-I_{m,T}$ in $L^\infty(T)$
which is independent of $T$.
From \iref{shapeA} we know that $e_T(\pi_z) \geq |T|^{\frac m 2+\frac 1 p} K(\pi_z)$, and therefore
$$
e_T(f) \geq K(\pi_z) |T|^{\frac m 2+\frac 1 p} - C_0C_1 |T|^{1/p} \diam(T)^m \omega(\diam(T)).
$$
We now remark that for all $p\in[1,\infty)$ the function $r\mapsto r^p$ is convex, and therefore if $a,b,c$ are positive numbers, and $a\geq b-c$ then $a^p \geq \max\{0,b-c\}^p \geq b^p -p c b^{p-1}$. Applying this to our last inequality we obtain
$$
e_T(f)^p \geq K^p(\pi_z) |T|^{\frac {mp} 2+1} -
pC_0C_1 (K(\pi_z))^{p-1} |T|^{(p-1)(\frac m 2+\frac 1 p)+\frac 1 p} \diam(T)^m \omega(\diam T).
$$
Since $|T|^{(p-1)(\frac m 2+\frac 1 p)+\frac 1 p}=|T|^{(p-1)\frac m 2} |T| \leq (\diam T)^{m(p-1)}|T|$, this leads to
$$
e_T(f)^p \geq K^p(\pi_z) |T|^{\frac{mp} 2+1} - C(\diam T)^{mp} |T| \omega(\diam T),
$$
where $C :=pC_0C_1 (\sup_{z\in\Omega}K(\pi_z))^{p-1}$.
\hfill $\diamond$\\
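The elementary convexity inequality used at the end of this proof ($a^p\geq b^p-pcb^{p-1}$ whenever $a,b,c>0$ and $a\geq b-c$) is easy to stress-test numerically; the sampling ranges below are arbitrary.

```python
import random

# Random check of: a, b, c > 0 and a >= b - c imply a^p >= b^p - p*c*b^(p-1).
random.seed(1)
violations = 0
for _ in range(20000):
    p = random.uniform(1.0, 8.0)
    b = random.uniform(0.01, 10.0)
    c = random.uniform(0.01, 10.0)
    a = max(b - c, 0.0) + random.uniform(1e-6, 10.0)  # an admissible value of a
    slack = 1e-9 * (1.0 + b ** p)                     # floating round-off allowance
    if a ** p < b ** p - p * c * b ** (p - 1) - slack:
        violations += 1
print(violations)
```
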
We now turn to the proof of the lower estimate in Theorem \ref{optitheorem} in the case
where $p<\infty$.
Consider a sequence $\seqT$ of triangulations which is admissible in the sense of equation
\iref{admissibilitycond}: there exists a constant $C_A$ such that
$$
\diam T\leq C_A N^{-1/2},\; N\geq N_0,\; T\in {\cal T}_N.
$$
For $T\in {\cal T}_N$, we combine this estimate with \iref{localbelow}, which gives
$$
e_T(f)^p \geq K^p(\pi_z) |T|^{\frac{mp} 2+1} - (C_AN^{-1/2})^{mp} |T| C\omega(C_AN^{-1/2}).
$$
Averaging this bound over $z\in T$, we obtain
$$
e_T(f)^p \geq \int_T K^p(\pi_z) |T|^{\frac{mp} 2} dz - |T| N^{-\frac{mp} 2}C_A^{mp}
C\omega(C_A N^{-1/2}).
$$
Summing on all $T\in {\cal T}_N$, and denoting by $T_z^N$ the triangle in ${\cal T}_N$ containing the point $z\in\Omega$, we obtain the estimate
\begin{equation}
\label{lowerint}
e_{{\cal T}_N}(f)^p \geq \int_\Omega K^p(\pi_z) |T_z^N|^{\frac{mp} 2} dz - N^{-\frac{mp} 2}\varepsilon(N),
\end{equation}
where $\varepsilon(N):=|\Omega|C_A^{mp}C\omega(C_AN^{-1/2})\to 0$ as $N\to +\infty$.
The function $z\mapsto |T_z^N|$ is linked with the number of triangles in the following way:
$$
\int_\Omega \frac {dz} {|T_z^N|} = \sum_{T\in {\cal T}_N} \int_T \frac {dz}{|T|} = N.
$$
On the other hand, with $\frac 1 q = \frac m 2 +\frac 1 p$, we have by H\"older's inequality,
\begin{equation}
\int_\Omega K^q(\pi_z) dz\leq \left(\int_\Omega K^p(\pi_z) |T_z^N|^{\frac{mp} 2} dz\right)^{q/p}
\left(\int_\Omega \frac 1 {|T_z^N|}dz\right)^{1-q/p}.
\label{holder}
\end{equation}
Combining the above, we obtain a lower bound for the integral term in \iref{lowerint}
which is independent of ${\cal T}_N$:
$$
\int_\Omega K^p(\pi_z) |T_z^N|^{\frac{mp} 2} dz\geq \left(\int_\Omega K^q(\pi_z)dz\right)^{p/q} N^{-m p/2}.
$$
Injecting this lower bound in \iref{lowerint} we obtain
$e_{{\cal T}_N}(f)^p \geq \left[\left(\int_\Omega K^q(\pi_z)dz\right)^{p/q} -\varepsilon(N) \right] N^{-m p/2}.$
This allows us to conclude
\begin{equation}
\liminf_{N\to +\infty} N^{\frac m 2} e_{{\cal T}_N}(f) \geq \left(\int_\Omega K^q(\pi_z)dz\right)^{\frac 1 q},
\label{lowerest}
\end{equation}
which is the desired estimate.
\newline
\newline
The case $p=\infty$ follows the same ideas.
Adapting Proposition \ref{errorfPz}, one proves that
$$
e_T(f) \geq K(\pi_z) |T|^{\frac m 2} - C(\diam T)^{m} \omega(\diam T),
$$
and therefore
\begin{equation}
e_{{\cal T}_N}(f) \geq \left\|K(\pi_z) |T_z^N|^{\frac m 2} \right\|_{L^\infty(\Omega)} - N^{-\frac m 2} \varepsilon(N),
\label{lowerint2}
\end{equation}
where $\varepsilon(N):= C_A^m C\omega(C_A N^{-\frac 1 2})\to 0$ as $N\to +\infty$.
H\"older's inequality now reads
$$
\int_\Omega K(\pi_z)^{\frac 2 m} dz \leq \left\|K(\pi_z)^{\frac 2 m} |T_z^N|\right\|_{L^\infty(\Omega)} \left\|\frac 1 {|T_z^N|}\right\|_{L^1(\Omega)}
$$
or equivalently,
$$
\left\|K(\pi_z) |T_z^N|^{\frac m 2} \right\|_{L^\infty(\Omega)} \geq \left(\int_\Omega K(\pi_z)^{\frac 2 m} dz\right)^{\frac m 2} N^{- \frac m 2}.
$$
Combining this with \iref{lowerint2}, this leads to the desired estimate \iref{lowerest}
with $p=\infty$ and $q=\frac 2 m$.
\begin{remark}
This proof reveals the two principles which characterize the optimal triangulations.
Indeed, the lower estimate \iref{lowerest}
becomes an equality only when both inequalities in \iref{localbelow} and \iref{holder}
are equalities. The first condition - equality in \iref{localbelow} - is met when each
triangle $T$ has an optimal shape, in the sense that $e_T(\pi_z)=K(\pi_z)|T|^{\frac m 2+\frac 1 p}$
for some $z\in T$. The second condition - equality in \iref{holder} - is met when the ratio
between
$K^p(\pi_z)|T_z^N|^{\frac{mp} 2}$ and $|T_z^N|^{-1}$ is constant, or equivalently
$K(\pi_z)|T|^{\frac m 2+\frac 1 p}$ is independent of the triangle $T$.
Combined with the first condition, this means that the error $e_T(f)^p$ is equidistributed
over the triangles, up to the perturbation by $(\diam T)^{mp} |T| \omega(\diam T)$ which
becomes negligible as $N$ grows.
\end{remark}
\subsection{Upper estimate}
We first remark that the upper estimate in Theorem \ref{optitheorem}
implies the upper estimate in Theorem \ref{maintheorem}
by a sub-sequence extraction argument: if the upper estimate in Theorem \ref{optitheorem}
holds, then for all $n>0$ there exists a sequence
$({\cal T}_N^n)_{N>N_0}$ such that
$$
\limsup_{N\to +\infty}\( N^{\frac m 2} e_{{\cal T}_N^n}(f)\) \leq \left\| K\left(\frac {d^mf}{m!}\right)\right\|_{L^q}+ \frac 1 n,
$$
with $\frac 1 q=\frac 1 p+\frac m 2$. We then take ${\cal T}_N={\cal T}_N^{n(N)}$, where
$$
n(N)=\max\left\{n\leq N\; ;\; N^{\frac m 2} e_{{\cal T}_N^n}(f)\leq \left\| K\left(\frac {d^mf}{m!}\right)\right\|_{L^q}+ \frac 2 n\right\}.
$$
For $N$ large enough this set is finite and non-empty, and therefore $n(N)$ is well defined. Furthermore, $n(N)\to +\infty$ as $N\to +\infty$, and therefore
$$
\limsup_{N\to +\infty}\( N^{\frac m 2} e_{{\cal T}_N}(f)\) \leq \left\| K\left(\frac {d^mf}{m!}\right)\right\|_{L^q}.
$$
We are thus left with proving the upper estimate in Theorem \ref{optitheorem}.
We begin by fixing a (large) number $M>0$. We shall take the limit $M\to \infty$ in the very last step of our proof. We define
$$
\mT_M = \{T \text{ triangle, } |T|=1,\ \bary(T)=0 \text{ and } \diam(T)\leq M\},
$$
the set of triangles centered at the origin, of unit area and diameter smaller than $M$.
This set is compact with respect to the Hausdorff distance.
This allows us to define a ``tempered'' version of $K = K_{m,p}$ that we denote by $K_M$:
$$
K_M(\pi) = \inf_{T\in \mT_M} e_T(\pi).
$$
Since $\mT_M$ is compact, the above infimum
is attained on a triangle that we denote by $T_M(\pi)$.
Note that the map $\pi\mapsto T_M(\pi)$ need not be continuous.
It is clear that $K_M(\pi)$ decreases as $M$ grows.
Note also that the restriction to triangles $T$ centered at $0$ is artificial, since
the error is invariant by translation as noticed in
\iref{transinv}. Therefore $K_M(\pi)$ converges to $K(\pi)$ as $M\to+\infty$.
Since $\mT_M$ is compact, the map $\pi \mapsto \max_{T\in\mT_M}e_T(\pi)$
defines a norm on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$, and is therefore bounded by $C_M\|\pi\|$ for some $C_M>0$.
One easily sees that the functions $\pi \mapsto e_T(\pi)$ are uniformly $C_M$-Lipschitz
for all $T\in \mT_M$, and so is $K_M$.
We now use this new function $K_M$ to obtain a local upper error estimate that is closely related to the local lower estimate in Proposition \ref{errorfPz}.
\begin{prop}
For $z_1\in\Omega$, let $T$ be a triangle which is obtained from
$T_M(\pi_{z_1})$ by rescaling and translation, i.e., $T=tT_M(\pi_{z_1})+z_0$ for some $t>0$ and $z_0\in{\rm \hbox{I\kern-.2em\hbox{R}}}^2$.
Then for any $z_2\in T$,
\begin{equation}
\label{localabove}
e_T(f) \leq \(K_M(\pi_{z_2}) + B_M\omega(\max\{|z_1-z_2|,\diam(T)\})\) |T|^{\frac m 2+\frac 1 p},
\end{equation}
where $B_M>0$ is a constant which depends on $M$.
\end{prop}
\proof
For all $z_1,z_2\in \Omega$, we have
\begin{eqnarray*}
\label{eTMPxPy}
e_{T_M(\pi_{z_1})}(\pi_{z_2}) &\leq & e_{T_M(\pi_{z_1})} (\pi_{z_1}) + C_M \|\pi_{z_1}-\pi_{z_2}\| \\
& = & K_M(\pi_{z_1}) + C_M \|\pi_{z_1}-\pi_{z_2}\|,\\
&\leq & K_M(\pi_{z_2}) + 2 C_M\|\pi_{z_1}-\pi_{z_2}\|,\\
&\leq & K_M(\pi_{z_2}) + 2C_M\omega (|z_1-z_2|).
\end{eqnarray*}
Therefore, if $T$ is of the form $T=tT_M(\pi_{z_1})+z_0$, we obtain by a change of variable that
$$
e_T(\pi_{z_2})\leq \(K_M(\pi_{z_2}) + 2C_M\omega(|z_1-z_2|)\) |T|^{\frac m 2+\frac 1 p}.
$$
Let $\mu_z\in {\rm \hbox{I\kern-.2em\hbox{P}}}_m$ be the Taylor polynomial of $f$ at the point $z$ up to degree $m$.
Using Equation \iref{eqMuPi} we obtain
\begin{eqnarray*}
e_T(f) &\leq & e_T(\mu_{z_2})+ e_T(f-\mu_{z_2})\\
&=& e_T(\pi_{z_2})+ e_T(f-\mu_{z_2})\\
&\leq & \(K_M(\pi_{z_2}) + 2C_M\omega(|z_1-z_2|)\) |T|^{\frac m 2+\frac 1 p} + e_T(f-\mu_{z_2}).
\end{eqnarray*}
By the same argument as in the proof of Proposition \ref{errorfPz}, we derive that
$$
e_T(f-\mu_{z_2})\leq C|T|^{\frac 1 p} \diam(T)^m \omega(\diam T),
$$
and thus
$$
e_T(f) \leq \(K_M(\pi_{z_2}) + 2C_M\omega(|z_1-z_2|)\) |T|^{\frac m 2+\frac 1 p} +
C|T|^{\frac 1 p} \diam(T)^m \omega(\diam T).
$$
Since $T$ is the scaled version of a triangle in $\mT_M$, it obeys $\diam(T)^2 \leq M^2 |T|$. Therefore
$$
e_T(f) \leq \left(K_M(\pi_{z_2}) + (2C_M+CM^m)\omega(\max\{|z_1-z_2|,\diam (T)\})\right) |T|^{\frac m 2 +\frac 1 p},
$$
which is the desired inequality with $B_M:=2C_M+CM^m$.
\hfill $\diamond$\\
For some $r>0$ to be specified later, we now choose an arbitrary triangular mesh ${\cal R}$ of $\Omega$ satisfying
$$
r \geq \sup_{R\in {\cal R}} \diam(R).
$$
Our strategy to build a triangulation that satisfies the optimal upper estimate
is to use the triangles $R$ as {\it macro-elements} in the sense that each of them
will be tiled by a locally optimal uniform triangulation. This strategy was already used in \cite{B}.
For all $R\in {\cal R}$ we consider the triangle
$$
T_R:=(K_M(\pi_{b_R})+2B_M\omega(r))^{-\frac q 2} T_M(\pi_{b_R}),
$$
which is a scaled version of $T_M(\pi_{b_R})$
where $b_R$ is the barycenter of $R$. We use this triangle
to build a periodic tiling ${\cal P}_R$
of the plane: there exist vectors $a$, $b$ and $c$ such that
$T_R\cup T'_R$, with $T'_R=c-T_R$, forms a parallelogram
with side vectors $a$ and $b$. We
then define
\begin{equation}
{\cal P}_R:=\{T_R+ma+nb\sep m,n\in{\rm {{\rm Z}\kern-.28em{\rm Z}}}\} \cup \{T'_R +ma+nb\sep m,n\in{\rm {{\rm Z}\kern-.28em{\rm Z}}}\}.
\label{defTiling}
\end{equation}
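As a concrete illustration (our own sketch, not part of the proof), the tiling property behind \iref{defTiling} can be checked numerically: taking $v_0$ at the origin and $v_1,v_2$ as the other vertices of $T_R$, the choice $a=v_1-v_0$, $b=v_2-v_0$, $c=v_1+v_2$ makes $T_R\cup T'_R$ a fundamental domain of the lattice. The helper names below are hypothetical.

```python
import random

def in_triangle(p, tri, eps=1e-12):
    """Barycentric point-in-(closed)-triangle test."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    det = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    l1 = ((p[0] - x0) * (y2 - y0) - (p[1] - y0) * (x2 - x0)) / det
    l2 = ((x1 - x0) * (p[1] - y0) - (y1 - y0) * (p[0] - x0)) / det
    return l1 >= -eps and l2 >= -eps and l1 + l2 <= 1 + eps

# A sample triangle T_R (with v0 at the origin) and its reflected copy T'_R = c - T_R.
v0, v1, v2 = (0.0, 0.0), (1.0, 0.2), (0.3, 0.9)
c = (v1[0] + v2[0], v1[1] + v2[1])
T = (v0, v1, v2)
Tp = tuple((c[0] - x, c[1] - y) for x, y in T)

# Every point of the parallelogram {s*v1 + t*v2 : 0 <= s, t <= 1} lies in
# T_R or T'_R, so translating the pair by m*a + n*b tiles the plane.
random.seed(0)
for _ in range(1000):
    s, t = random.random(), random.random()
    p = (s * v1[0] + t * v2[0], s * v1[1] + t * v2[1])
    assert in_triangle(p, T) or in_triangle(p, Tp)
```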
Observe that for all $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$, and all triangles $T,T'$ such that $T'=-T$ one has $e_T(\pi) = e_{T'}(\pi)$ since $\pi$ is either an even polynomial when $m$ is an even integer, or an odd polynomial when $m$ is odd. Since we already know that $e_T(\pi)$ is invariant by translation of $T$,
we find that the local error $e_T(\pi)$ is constant on all $T\in {\cal P}_R$.
We now define as follows a family of triangulations ${\cal T}_s$ of the domain
$\Omega$, for $s>0$. For every $R\in {\cal R}$, we consider the elements $T\cap R$
for $T\in s{\cal P}_R$, where $s{\cal P}_R$ denotes the triangulation ${\cal P}_R$
scaled by the factor $s$. Clearly $\{T\cap R,\; T\in s{\cal P}_R,\; R\in{\cal R}\}$ constitute
a partition of $\Omega$. In this partition, we distinguish the interior
elements
$$
\cTr:=\{T\in s{\cal P}_R \sep T\subset {\rm int}(R) \; , R\in {\cal R}\},
$$
which define pieces of a conforming triangulation, and the
boundary elements $T\cap R$ for $T\in s{\cal P}_R$ such that $T\cap \partial R\neq \emptyset$.
These last elements might be neither triangular, nor conforming with the
elements on the other side. Note that for $s>0$ small enough, each $R\in{\cal R}$
contains at least one triangle in $\cTr$, and therefore the boundary elements
constitute a layer around the edges of ${\cal R}$. In order to obtain a conforming
triangulation, we proceed as follows: for each boundary element $T\cap R$,
we consider the points on its boundary which are either its vertices or
those of a neighboring element. We then build the Delaunay triangulation
of these points, which is a triangulation of $T\cap R$ since it is a convex set.
We denote by $\cTb$ the set of all triangles obtained by this procedure,
which is illustrated in Figure 1.
\begin{figure}
\centering
\includegraphics[width=4cm,height=4cm]{Illustrations/NonConformTiling.eps}
\hspace{1cm}
\includegraphics[width=4cm,height=4cm]{Illustrations/ConformTiling.eps}
\hspace{1cm}
\includegraphics[width=4cm,height=4cm]{Illustrations/TriangleClasses.eps}
\caption{a. An edge (thick) of the macro-triangulation ${\cal R}$ separating two uniformly paved regions ($T_R$ is thick, ${\cal P}_R$ is dashed). b. Additional edges (dashed) are added near the interface in order to preserve conformity. c. The sets of triangles $\cTr$ (gray) and $\cTb$ (white).}
\end{figure}
Our conforming triangulation is given by
$$
{\cal T}_s=\cTr \cup \cTb.
$$
As $s\ra 0$, clearly
$$
\#(\cTb)\leq C_{\rm bd}s^{-1} \text{ and } \sum_{T\in \cTb} |T| \leq C_{\rm bd} s,
$$
for some constant $C_{\rm bd}$ which depends on the macro-triangulation ${\cal R}$.
We do not need to estimate $C_{\rm bd}$, since ${\cal R}$ is fixed and the contribution due to $C_{\rm bd}$ in the following estimates is negligible as $s\to 0$.
We therefore obtain that the number of triangles
in $\cTb$ is dominated by the number of triangles in $\cTr$. More precisely, we
have the equivalence
\begin{equation}
\label{cardcTs}
\#( {\cal T}_s )\sim \#(\cTr) \sim \sum_{R\in {\cal R}} \frac{ |R|}{s^2|T_R|}
= s^{-2} \sum_{R\in {\cal R}} |R| (K_M(\pi_{b_R})+2B_M \omega(r))^q,
\end{equation}
in the sense that the ratio between the above quantities tends to $1$ as $s\to 0$.
The right hand side in \iref{cardcTs} can be estimated through an integral:
\begin{eqnarray*}
s^2 \#(\cTr) &\leq & \sum_{R\in {\cal R}} |R| (K_M(\pi_{b_R})+2B_M \omega(r))^q\\
& = & \sum_{R\in {\cal R}} \int_R (K_M(\pi_{b_R})+2B_M \omega(r))^q dz\\
& \leq & \sum_{R\in {\cal R}} \int_R (K_M(\pi_z)+C_M\|\pi_z-\pi_{b_R}\|+2B_M \omega(r))^q dz\\
&\leq & \int_\Omega (K_M(\pi_z)+(2B_M+C_M) \omega(r))^q dz.
\end{eqnarray*}
Therefore, since $C_M\leq B_M$,
\begin{equation}
\label{cardcTsUpper}
\#({\cal T}_s)\leq s^{-2}\left(\int_\Omega (K_M(\pi_z)+3B_M \omega(r))^q dz +C_{\rm bd} s \right).
\end{equation}
Observe that the construction of ${\cal T}_s$ gives a bound on the diameter of its elements
$$
\sup_{T\in {\cal T}_s}\diam(T)\leq s C_a, \;\; C_a:=\max_{R\in{\cal R}}{\rm diam}(T_R).
$$
Combining this with \iref{cardcTs}, we obtain that
$$
\sup_{T\in {\cal T}_s}\diam(T) \leq C_A (\#{\cal T}_s)^{-1/2} \text{ for all } s>0,
$$
which is analogous to the admissibility condition
\iref{admissibilitycond}.
\newline
\newline
We now estimate the global interpolation error
$\|f-I_{m,{\cal T}_s}f\|_{L^p}:=(\sum_{T\in{\cal T}_s}e_T(f)^p)^{\frac 1 p}$,
assuming first that $1\leq p<\infty$. We first estimate
the contribution of $\cTb$, which will eventually be negligible.
Denoting $\nu_z\in {\rm \hbox{I\kern-.2em\hbox{P}}}_{m-1}$ the Taylor polynomial of $f$ up to degree $m-1$ at $z$ we remark that
$$
\|f-I_{m,T} f\|_{L^\infty(T)} = \|(I-I_{m,T} )(f-\nu_{b_T})\|_{L^\infty(T)}
\leq C_1\|f-\nu_{b_T}\|_{L^\infty(T)} \leq C_0C_1 \diam(T)^m,
$$
where $C_1$ is the norm of $I-I_{m,T}$
in $L^\infty(T)$ which is independent of $T$
and $C_0$ only depends on the $L^\infty$ norm
of $d^mf$. Remarking that $e_T(f) = \|f-I_{m,T} f\|_{L^p(T)} \leq |T|^{\frac 1 p} \|f-I_{m,T} f\|_{L^\infty(T)}$, we obtain an upper bound for the contribution of $\cTb$ to the error:
\begin{eqnarray*}
\sum_{T\in \cTb} e_{T}(f)^p &\leq& C_0^pC_1^p\sum_{T\in \cTb} |T| \diam(T)^{mp}\\
&\leq & C_0^pC_1^p \left(\sum_{T\in \cTb} |T|\right) \sup_{T\in \cTb} \diam(T)^{mp} \\
&\leq & C_0^pC_1^p C_{\rm bd} s \sup_{T\in \cTb} \diam(T)^{mp}\\
&\leq & C^*_{\rm bd}s^{mp+1},
\end{eqnarray*}
with $C^*_{\rm bd}=C_0^pC_1^p C_a^{mp}C_{\rm bd}$.
We next turn to the contribution of $\cTr$ to the error.
If $T\in \cTr$, $T\subset R\in {\cal R}$, we apply \iref{localabove} with $z_1 = b_R$,
the barycenter of $R$, and $z_2 = z$ an arbitrary point of $T$. With these choices, the estimate \iref{localabove} reads
$$
e_{T}(f) \leq \left(K_M(\pi_z) + B_M \omega(\max\{r,C_A s\})\right) |T|^{\frac m 2+\frac 1 p}.
$$
We now assume that $s$ is chosen small enough such that
$C_A s\leq r$. Geometrically, this condition ensures that the ``micro-triangles'' constituting ${\cal T}_s$ actually have a smaller diameter than the ``macro-triangles'' constituting ${\cal R}$. This implies
\begin{equation}
\label{estimabove}
e_{T}(f)^p \leq \left(K_M(\pi_z) + B_M \omega(r)\right)^p |T|^{\frac {mp} 2+1}.
\end{equation}
Given a triangle $T\in \cTr$, $T\subset R \in {\cal R}$, and a point $z\in T$, one has
\begin{eqnarray*}
|T| &= &s^2 \left(K_M(\pi_{b_R})+2B_M \omega(r)\right)^{-q}\\
& \leq & s^2 \left(K_M(\pi_{z})-C_M\|\pi_z-\pi_{b_R}\|+2B_M \omega(r)\right)^{-q}\\
&\leq& s^2\left(K_M(\pi_z)+(2B_M-C_M) \omega(r)\right)^{-q}.
\end{eqnarray*}
Observing that $B_M\geq C_M$, and that $p-q\frac{mp} 2 = q$, we insert the above inequality into the estimate \iref{estimabove}, which yields
$$
e_{T}(f)^p \leq s^{mp} \left(K_M(\pi_z) + B_M \omega(r)\right)^q |T|.
$$
Averaging on $z\in T$, we obtain
$$
e_{T}(f)^p \leq s^{mp} \int_T \left(K_M(\pi_z) + B_M \omega(r)\right)^q dz.
$$
Adding up contributions from all triangles in ${\cal T}_s$, we find
$$
e_{{\cal T}_s}(f)^p = \sum_{T\in \cTr} e_{T}(f)^p + \sum_{T\in \cTb} e_{T}(f)^p \leq s^{mp} \int_\Omega \left(K_M(\pi_z) + B_M \omega(r)\right)^q dz + C^*_{\rm bd} s^{mp+1}.
$$
Combining this with the estimate \iref{cardcTsUpper}, we obtain
$$
e_{{\cal T}_s} \#({\cal T}_s)^{m/2} \leq \left(\int_\Omega \left(K_M(\pi_z) + B_M \omega(r)\right)^q dz+C^*_{\rm bd} s\right)^{\frac 1 p}
\left(\int_\Omega \left(K_M(\pi_z)+3B_M \omega(r)\right)^q dz +C_{\rm bd} s \right)^{\frac m 2},
$$
and therefore, since $\frac 1 q = \frac m 2+\frac 1 p$,
$$
\limsup_{s\to 0}\(\#({\cal T}_s)^{m/2} e_{{\cal T}_s}\) \leq \left(\int_\Omega \left(K_M(\pi_z)+3B_M \omega(r)\right)^q dz \right)^{\frac 1 q}.
$$
It is now time to observe that for fixed $M$,
$$
\lim_{r\ra 0} \int_\Omega \left(K_M(\pi_z) + 3B_M \omega(r)\right)^q dz = \int_\Omega K_M^q(\pi_z) dz,
$$
and that
$$
\lim_{M\ra +\infty} \int_\Omega K_M^q(\pi_z)dz = \int_\Omega K^q(\pi_z) dz.
$$
Therefore, for all $\varepsilon>0$, we can choose $M$ sufficiently large and $r$
sufficiently small, such that
$$
\limsup_{s\to 0}\(\#({\cal T}_s)^{m/2} e_{{\cal T}_s}\)\leq \(\int_\Omega K^q(\pi_z) dz\)^{\frac 1 q}+\varepsilon.
$$
This gives us the announced statement of Theorem \ref{optitheorem}, by defining
$$
s_N:=\min\{s>0 \sep \#({\cal T}_s)\leq N\},
$$
and by setting ${\cal T}_N={\cal T}_{s_N}$.
\newline
\newline
The adaptation of the above proof to the case $p=\infty$ is not straightforward,
because the contribution of $\cTb$ to the error is no longer negligible
with respect to the contribution of $\cTr$. For this reason, one needs to modify
the construction of $\cTb$. Here, we provide a simple construction,
but one for which the resulting triangulation ${\cal T}_s$ is non-conforming, as we do not know how to produce a satisfactory conforming triangulation.
More precisely, we define $\cTr$ in a similar way as for $p<\infty$,
and add to the construction of $\cTb$ a post-processing step
in which each triangle is split into $4^j$ similar triangles according
to the midpoint rule. Here we take for $j$ the smallest integer
which is larger than $-\frac {\log s}{4\log 2}$. With such an additional splitting,
we thus have
$$
\max_{T\in \cTb}{\rm diam}(T)\leq s^{\frac 1 4}\max_{R\in{\cal R}}{\rm diam}(sT_R)
=C_as^{1+\frac 1 4}.
$$
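The midpoint-rule splitting is elementary; the following sketch (helper names are ours) subdivides a triangle through its edge midpoints and confirms that $j$ levels produce $4^j$ similar triangles of diameter $2^{-j}{\rm diam}(T)$.

```python
import math

def midpoint_split(tri):
    """Split a triangle into 4 similar triangles through its edge midpoints."""
    A, B, C = tri
    mid = lambda P, Q: ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)
    c, a, b = mid(A, B), mid(B, C), mid(C, A)
    return [(A, c, b), (c, B, a), (b, a, C), (a, b, c)]

def refine(tri, j):
    """Apply j levels of midpoint splitting."""
    tris = [tri]
    for _ in range(j):
        tris = [s for t in tris for s in midpoint_split(t)]
    return tris

def diam(tri):
    d = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])
    return max(d(tri[0], tri[1]), d(tri[1], tri[2]), d(tri[2], tri[0]))

T = ((0.0, 0.0), (2.0, 0.25), (0.5, 1.5))
j = 3
tris = refine(T, j)
assert len(tris) == 4 ** j
assert all(abs(diam(t) - diam(T) / 2 ** j) < 1e-12 for t in tris)
```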
The contribution of $\cTb$ to the $L^\infty$ interpolation error is bounded by
$$
e_{\cTb}(f) \leq C_0C_1\max_{T\in \cTb}{\rm diam}(T)^m \leq C^*_{\rm bd}s^{\frac {5m} 4},
$$
with $C^*_{\rm bd}:=C_0C_1C_a^m$. We also have
$$
\#(\cTb)\leq C_{\rm bd} s^{-3/2},
$$
which remains negligible compared to $s^{-2}$. We therefore obtain
\begin{equation}
\label{cardcTsInf}
\#({\cal T}_s)\leq s^{-2}\left(\int_\Omega (K_M(\pi_z)+3B_M \omega(r))^{\frac 2 m} dz +C_{\rm bd} s^{1/2} \right).
\end{equation}
Moreover, if $T\in \cTr$ and $T\subset R\in {\cal R}$, we have, according to the estimate \iref{localabove},
$$
e_{T}(f) \leq \left(K_M(\pi_{b_R}) + B_M \omega(\max\{r,C_A s\})\right) |T|^{\frac{m} 2}.
$$
By construction $|T| = s^2 (K_M(\pi_{b_R})+2B_M \omega(r))^{-2/m}$. This implies $e_{T}(f) \leq s^m$ when $C_A s\leq r$. Therefore
$$
e_{{\cal T}_s}(f) = \max\{e_\cTr,e_\cTb\} \leq s^m\max\{1, C^*_{\rm bd} s^{\frac m 4}\}.
$$
Combining this estimate with \iref{cardcTsInf} yields
$$
\limsup_{s\to 0}\(\#({\cal T}_s)^{m/2} e_{{\cal T}_s}\) \leq \left(\int_\Omega \left(K_M(\pi_z)+3B_M \omega(r)\right)^{\frac 2 m} dz \right)^{\frac m 2},
$$
and we conclude the proof in a similar way as for $p<\infty$.
\section{The shape function and the optimal metric for linear and quadratic elements}
This section is devoted to linear ($m=2$) and quadratic ($m=3$) elements,
which are the most commonly used in practice.
In these two cases, we are able to derive an exact expression for $K_{m,p}(\pi)$ in terms
of the coefficients of $\pi$. Our analysis also gives us access to the distorted metric
which characterizes the optimal mesh.
While the results concerning linear elements
have strong similarities with those of \cite{B}, those
concerning quadratic elements are to our knowledge
the first of this kind, although \cite{C1} analyzes a similar setting.
\subsection{Exact expression of the shape function}
In order to give the exact expression of $K_{m,p}$, we define the determinant
of a homogeneous quadratic polynomial by
$$
\det (ax^2+2bxy+cy^2)=ac-b^2,
$$
and the discriminant of a homogeneous cubic polynomial by
$$
\disc(a x^3+ b x^2 y+ c x y^2+ d y^3) = b^2 c^2 - 4 a c^3 - 4 b^3 d + 18 a b c d - 27 a^2 d^2.
$$
The functions $\det$ on ${\rm \hbox{I\kern-.2em\hbox{H}}}_2$ and $\disc$ on ${\rm \hbox{I\kern-.2em\hbox{H}}}_3$ are homogeneous in the sense that
\begin{equation}
\label{homogDisc}
\det(\lambda \pi) = \lambda^2\det \pi,\quad \disc(\lambda \pi) = \lambda^4 \disc \pi.
\end{equation}
Moreover, it is well known that they obey an invariance property with respect to linear changes of coordinates $\phi$:
\begin{equation}
\label{invDisc}
\det(\pi\circ \phi) = (\det \phi)^2 \det \pi,\quad \disc( \pi\circ \phi) = (\det \phi)^6 \disc \pi.
\end{equation}
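As a quick numerical sanity check of \iref{homogDisc} and \iref{invDisc} in the cubic case (our own sketch; the coefficient-extraction helper is hypothetical, recovering the four coefficients of $\pi\circ\phi$ from four point evaluations):

```python
def disc(a, b, c, d):
    """Discriminant of the homogeneous cubic a x^3 + b x^2 y + c x y^2 + d y^3."""
    return b*b*c*c - 4*a*c**3 - 4*b**3*d + 18*a*b*c*d - 27*a*a*d*d

def compose(coeffs, phi):
    """Coefficients of pi o phi, recovered from four point evaluations."""
    a, b, c, d = coeffs
    (p, q), (r, s) = phi
    f = lambda x, y: (a*(p*x + q*y)**3 + b*(p*x + q*y)**2*(r*x + s*y)
                      + c*(p*x + q*y)*(r*x + s*y)**2 + d*(r*x + s*y)**3)
    A, D = f(1, 0), f(0, 1)            # coefficients of x^3 and of y^3
    S, Tm = f(1, 1), f(1, -1)          # S = A+B+C+D, Tm = A-B+C-D
    return (A, (S - Tm) // 2 - D, (S + Tm) // 2 - A, D)

pi = (1, -7, 14, -8)                   # (x - y)(x - 2y)(x - 4y), disc = 36
assert disc(*pi) == 36
assert disc(3, -21, 42, -24) == 3**4 * disc(*pi)       # homogeneity, lambda = 3
phi = ((2, 1), (0, 1))                 # det(phi) = 2
assert disc(*compose(pi, phi)) == 2**6 * disc(*pi)     # invariance
```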
Our main result relates $K_{m,p}$ to these quantities.
\begin{theorem}
\label{equal23}
We have for all $\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_2$,
$$
K_{2,p}(\pi) = \sigma_p(\det\pi) \sqrt{|\det \pi|},
$$
and for all $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$,
$$
K_{3,p}(\pi) = \sigma^*_p(\disc \pi) \sqrt[4]{|\disc \pi|},
$$
where $\sigma_p(t)$ and $\sigma_p^*(t)$ are constants that only depend on the sign of $t$.
\end{theorem}
The proof of Theorem \ref{equal23} relies on the possibility of
mapping an arbitrary polynomial $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_2$ such that
${\rm det}(\pi)\neq 0$, or $\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_3$ such that $\disc(\pi)\neq 0$, onto
one of two fixed polynomials $\pi_-$ or $\pi_+$ by a linear change of variables
and a sign change.
In the case of ${\rm \hbox{I\kern-.2em\hbox{H}}}_2$, it is well known
that we can choose $\pi_-=x^2-y^2$ and $\pi_+=x^2+y^2$.
More precisely, to each $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_2$, we associate a
symmetric matrix $Q_\pi$ such that $\pi(z)=\<Q_\pi z,z\>$. This
matrix can be diagonalized according to
$$
Q_\pi=U^\trans
\left(
\begin{array}{cc}
\lambda_1 & 0\\
0 & \lambda_2
\end{array}
\right)
U,\quad U\in {\cal O}_2, \; \lambda_1,\lambda_2\in{\rm \hbox{I\kern-.2em\hbox{R}}}.
$$
Then, defining the linear transform
$$
\phi_\pi:=U^\trans
\left(
\begin{array}{cc}
|\lambda_1|^{-\frac 1 2} & 0\\
0 & |\lambda_2|^{-\frac 1 2}
\end{array}
\right)
$$
and $\lambda_\pi={\rm sign}(\lambda_1)\in \{-1,1\}$,
it is readily seen that
$$
\lambda_\pi \pi\circ \phi_\pi =
\left\{
\begin{array}{cl}
x^2+y^2 &\text{ if } \det \pi>0\\
x^2-y^2 &\text{ if } \det \pi<0.
\end{array}
\right.
$$
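This reduction is easy to verify numerically; the sketch below (our own helper name, numpy assumed available) builds $\phi_\pi$ for a sample quadratic with $\det\pi<0$ and checks $\lambda_\pi\, \pi\circ\phi_\pi = x^2-y^2$ at a few points.

```python
import numpy as np

def normal_form_map(Q):
    """For pi(z) = <Qz, z> with det(Q) != 0, return (phi, lam) such that
    lam * pi(phi(z)) = x^2 + y^2 if det > 0, and x^2 - y^2 if det < 0."""
    w, V = np.linalg.eigh(Q)                # Q = V diag(w) V^T, w ascending
    phi = V @ np.diag(np.abs(w) ** -0.5)
    return phi, np.sign(w[0])

# pi = 2x^2 + 3xy - y^2, so Q = [[2, 1.5], [1.5, -1]] and det(pi) < 0.
Q = np.array([[2.0, 1.5], [1.5, -1.0]])
phi, lam = normal_form_map(Q)
for x, y in [(1.0, 0.0), (0.0, 1.0), (0.7, -1.3)]:
    z = phi @ np.array([x, y])
    assert abs(lam * (z @ Q @ z) - (x * x - y * y)) < 1e-9
```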
In the case of ${\rm \hbox{I\kern-.2em\hbox{H}}}_3$, a similar result holds, as shown by the following lemma.
\begin{lemma}
\label{lemmaChgVar}
Let $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$. There exists a linear transform
$\phi_\pi$ such that
\begin{equation}
\label{eqChgVar}
\pi\circ \phi_\pi =
\left\{
\begin{array}{cl}
x(x^2-3y^2) &\text{ if } \disc \pi>0\\
x(x^2+3y^2) &\text{ if } \disc \pi<0.
\end{array}
\right.
\end{equation}
\end{lemma}
\proof
Let us first assume that $\pi$ is not divisible by $y$ so that it can be factorized
as
$$
\pi = \lambda (x-r_1y) (x-r_2 y) (x-r_3 y),
$$
with $\lambda\in{\rm \hbox{I\kern-.2em\hbox{R}}}$ and $r_i\in\C$. If $\disc \pi>0$, then the $r_i$ are real
and we may assume $r_1<r_2<r_3$. Then, defining
$$
\phi_\pi = \lambda (2\disc \pi)^{-1/3}
\left(\begin{array}{cc}
r_1 (r_2 + r_3) -2 r_2 r_3 & (r_2-r_3) r_1 \sqrt 3\\
2 r_1-(r_2+r_3) & (r_2-r_3) \sqrt 3
\end{array}\right),
$$
an elementary computation shows that $\pi\circ \phi_\pi = x(x^2 - 3 y^2)$.
If $\disc \pi < 0$, then we may assume that $r_1$ is real, $r_2$ and $r_3$
are complex conjugates with ${\rm Im}(r_2)>0$. Then, defining
$$
\phi_\pi = \lambda (2\disc \pi)^{-1/3}
\left(\begin{array}{cc}
r_1 (r_2 + r_3) -2 r_2 r_3 & \mi(r_2-r_3) r_1 \sqrt 3\\
2 r_1-(r_2+r_3) & \mi(r_2-r_3) \sqrt 3
\end{array}\right),
$$
an elementary computation shows that $\pi\circ \phi_\pi = x(x^2 +3 y^2)$.
Moreover it is easily checked that $\phi_\pi$ has real entries
and is therefore a change of variable in ${\rm \hbox{I\kern-.2em\hbox{R}}}^2$.
In the case where $\pi$ is divisible by $y$, there exists a rotation $U\in {\cal O}_2$
such that $\tilde \pi:=\pi\circ U$ is not divisible by $y$. By the invariance property
\iref{invDisc} we know that $\disc \pi=\disc \tilde \pi$. Thus, we reach the same conclusion
with the choice $\phi_\pi :=U\circ \phi_{\tilde \pi}$.
\hfill $\diamond$\\
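The explicit matrix in the lemma can be sanity-checked numerically; the sketch below (helper names are ours; $\disc\pi>0$ case only) recovers the three real roots with numpy, builds $\phi_\pi$, and verifies $\pi\circ\phi_\pi = x(x^2-3y^2)$ at sample points.

```python
import numpy as np

def disc(a, b, c, d):
    return b*b*c*c - 4*a*c**3 - 4*b**3*d + 18*a*b*c*d - 27*a*a*d*d

def phi_pi(a, b, c, d):
    """Change of variable of the lemma, disc > 0 case (three real roots)."""
    r1, r2, r3 = sorted(np.roots([a, b, c, d]).real)   # pi = a * prod(x - r_i y)
    D = disc(a, b, c, d)
    return a * (2 * D) ** (-1 / 3) * np.array(
        [[r1 * (r2 + r3) - 2 * r2 * r3, (r2 - r3) * r1 * np.sqrt(3)],
         [2 * r1 - (r2 + r3), (r2 - r3) * np.sqrt(3)]])

# pi = (x - y)(x - 2y)(x - 4y) = x^3 - 7 x^2 y + 14 x y^2 - 8 y^3, disc = 36.
coef = (1.0, -7.0, 14.0, -8.0)
pi = lambda x, y: (x - y) * (x - 2 * y) * (x - 4 * y)
phi = phi_pi(*coef)
for x, y in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-0.4, 2.0)]:
    u, v = phi @ np.array([x, y])
    assert abs(pi(u, v) - x * (x * x - 3 * y * y)) < 1e-8
```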
\noindent
{\bf Proof of Theorem \ref{equal23}:} for all $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_2$ such that $\det\pi\neq 0$
and for all change of variable $\phi$ and $\lambda\neq 0$,
we may combine the properties of the determinant in \iref{homogDisc} and \iref{invDisc}
with those of the shape function established in
Proposition \ref{propinvarK}. This gives us
$$
\frac {K_{2,p}(\pi)} {\sqrt{|\det\pi|}}
=\frac {K_{2,p}(\lambda \pi\circ \phi)} {\sqrt{|\det(\lambda \pi\circ\phi)|}}.
$$
Applying this with $\phi=\phi_\pi$ and $\lambda=\lambda_\pi$, we therefore obtain
$$
K_{2,p}(\pi) =
\sqrt{|\det \pi|}
\left\{\begin{array}{cc}
K_{2,p}(x^2+y^2) & \text{ if } \det \pi>0,\\
K_{2,p}(x^2-y^2) & \text{ if } \det \pi<0.
\end{array}\right.
$$
This gives the desired result with $\sigma_p(t)=K_{2,p}(x^2+y^2)$ for $t>0$
and $\sigma_p(t)=K_{2,p}(x^2-y^2)$ for $t<0$. In the case
where $\det\pi=0$, $\pi$ is of the
form $\pi(x,y) = \lambda (\alpha x +\beta y)^2$, and we conclude by
Proposition \ref{vanishprop} that $K_{2,p}(\pi)=0$.
For all $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$ such that $\disc\pi\neq 0$, a similar reasoning yields
$$
K_{3,p}(\pi) = \sqrt[4]{|\disc \pi|} 108^{-\frac{1} 4}
\left\{\begin{array}{cc}
K_{3,p}(x(x^2-3y^2)) & \text{ if } \disc \pi>0,\\
K_{3,p}(x(x^2+3y^2)) & \text{ if } \disc \pi<0.
\end{array}\right.,
$$
where the constant $108$ comes from the fact that
$\disc(x(x^2-3y^2)) = - \disc(x(x^2+3y^2)) = 108$. This gives the
desired result with $\sigma_p^*(t)=108^{-\frac{1} 4}K_{3,p}(x(x^2-3y^2))$ for $t>0$
and $\sigma_p^*(t)=108^{-\frac{1} 4}K_{3,p}(x(x^2+3y^2))$ for $t<0$.
In the case where $\disc\pi=0$, $\pi$ is of the
form $\pi(x,y) = (\alpha x +\beta y)^2(\gamma x+\delta y)$, and we conclude by
Proposition \ref{vanishprop} that $K_{3,p}(\pi)=0$.\hfill $\diamond$\\
\begin{remark}
We do not know any simple analytical expression for the constants involved in
$\sigma_p$ and $\sigma^*_p$, but these can be found by numerical optimization. These constants are known for some special values of $p$ in the case $m=2$, see for example \cite{B}.
\end{remark}
\subsection{Optimal metrics}
Practical mesh generation techniques such as in \cite{Shew,Bois,Peyre,Bamg,Inria}
are based on the data of a Riemannian metric, by which we mean
a field $h$ of symmetric positive definite matrices
$$
x \in \Omega \mapsto h(x) \in S_2^+.
$$
Typically, the mesh generator takes the metric $h$ as an input and hopefully
returns a triangulation ${\cal T}_h$ adapted to it in the sense that all triangles are close
to equilateral of unit side length with respect to this metric.
Recently, it has been rigorously proved in \cite{Shew2, Bois} that some algorithms produce two-dimensional meshes obeying these constraints, under certain conditions. This must be contrasted with algorithms based on heuristics, such as \cite{Bamg} in two dimensions and \cite{Inria} in three dimensions, which have been available for some time and offer good performance \cite{A} but no theoretical guarantees.
For a given function $f$ to be approximated, the field of metrics given
as input should be such that the
local errors are equidistributed and the aspect ratios are optimal for the
generated triangulation. Assuming that the error is measured in $X=L^p$
and that we are using finite elements of degree $m-1$, we can construct
this metric as follows, provided that some estimate of $\pi_z = \frac{d^mf(z)}{m!}$ is available
at all points $z\in \Omega$.
An ellipse $E_z$ such that $|E_z|$ is equal
or close to
\begin{equation}
\sup_{E\in \cE, E\subset \Lambda_{\pi_z}} |E|
\label{maxellips}
\end{equation}
is computed, where $\Lambda_{\pi_z}$ is defined as in \iref{lambdapi}.
We denote by $h_{\pi_z}\in S_2^+$ the associated symmetric positive
definite matrix such that
$$
E_z=\left\{(x,y) \sep (x,y)^T h_{\pi_z} (x,y) \leq 1\right\}.
$$
Let us notice that the supremum in \iref{maxellips} might not
always be attained, or even be finite. This particular case is discussed
at the end of this section. Denoting by $\nu>0$ the desired
order of the $L^p$ error on each triangle, we then define the
metric by rescaling $h_{\pi_z}$ according to
$$
h(z)= \frac 1 {\alpha_z^2} h_{\pi_z}\; \; {\rm where}\;\;
\alpha_z:=\nu^{\frac {p}{mp+2}} |E_z|^{-\frac {1}{mp+2}}.
$$
With such a rescaling, any triangle $T$ designed by the
mesh generator should be comparable to the ellipse
$z+\alpha_z E_z$, centered at the barycenter $z$ of $T$,
in the sense that
\begin{equation}
z+c_1\alpha_z E_z \subset T\subset z+c_2\alpha_z E_z,
\end{equation}
for two fixed constants $0<2c_1\leq c_2$ independent of $T$ (recall that for any ellipse $E$
there always exists a triangle $T$ such that $E\subset T \subset 2E$).
Such a triangulation heuristically
fulfills the desired properties of optimal aspect ratio and
error equidistribution when the level of refinement
is sufficiently small. Indeed, we then have
\begin{eqnarray*}
e_{m,T}(f)_p & \approx & e_{m,T}(\pi_z)_p \\
&=& \|\pi_z - I_{m,T}\pi_z\|_{L^p(T)},\\
&\sim & |T|^{\frac 1 p}\|\pi_z - I_{m,T}\pi_z\|_{L^\infty(T)},\\
&\sim & |T|^{\frac 1 p} \|\pi_z\|_{L^\infty(T)},\\
&\sim & |\alpha_z E_z|^{\frac 1 p} \|\pi_z\|_{L^\infty(\alpha_z E_z)},\\
&=& \alpha_z^{m+\frac 2 p} |E_z|^{\frac 1 p} \|\pi_z\|_{L^\infty(E_z)},\\
&= & \nu,
\end{eqnarray*}
where we have used the fact that $\pi_z\in{\rm \hbox{I\kern-.2em\hbox{H}}}_m$.
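The exponent bookkeeping behind the last equality, namely $\alpha_z^{m+2/p}|E_z|^{1/p}=\nu$ with the above choice of $\alpha_z$, can be double-checked mechanically (the numerical values below are arbitrary):

```python
# Check alpha^(m + 2/p) * |E|^(1/p) = nu for the scaling
# alpha = nu^(p/(mp+2)) * |E|^(-1/(mp+2)); test values are arbitrary.
nu, E = 0.01, 0.37
for m in (2, 3, 4):
    for p in (1.0, 2.0, 5.0):
        alpha = nu ** (p / (m * p + 2)) * E ** (-1 / (m * p + 2))
        assert abs(alpha ** (m + 2 / p) * E ** (1 / p) - nu) < 1e-12
```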
Leaving aside these heuristics on error estimation and mesh generation,
we focus on the main computational issue in the design of the metric $h(z)$,
namely the solution to the problem \iref{maxellips}: to any given $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$,
we want to associate $h_\pi\in S_2^+$ such that
the ellipse $E_\pi$ defined by $h_\pi$ has area equal
or close to $\sup_{E\in \cE, E\subset \Lambda_{\pi}} |E|$.
When $m=2$ the computation of the optimal matrix $h_\pi$
can be done by elementary algebraic means. In fact, as it will
be recalled below, $h_\pi$ is simply the absolute value
(in the sense of symmetric matrices) of the symmetric
matrix $[\pi]$ associated to the quadratic form $\pi$. These facts
are well known and used in mesh generation algorithms
for ${\rm \hbox{I\kern-.2em\hbox{P}}}_1$ elements.
When $m\geq 3$ no such algebraic derivation of $h_\pi$ from
$\pi$ has been proposed up to now and current
approaches instead consist in numerically solving
the optimization problem \iref{OptimEll}, see \cite{C3}.
Since these computations have to be done extremely frequently
in the mesh adaptation process, a simpler algebraic procedure
is highly valuable. In this section, we propose a simple and
algebraic method in the case $m=3$, corresponding to quadratic elements.
For purposes of comparison the results already known in the case $m=2$ are recalled.
\begin{prop}
\label{propEllipseMax}
\begin{enumerate}
\item
Let $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_2$ be such that $\det(\pi)\neq 0$, and consider its associated $2\times 2$ matrix which can be
written as
$$
[\pi] = U^\trans
\left(
\begin{array}{cc}
\lambda_1 & 0\\
0 & \lambda_2
\end{array}
\right)
U,\quad U\in {\cal O}_2.
$$
Then, an ellipse of maximal volume inscribed in $\Lambda_\pi$
is defined by the matrix
$$
h_\pi = U^\trans
\left(
\begin{array}{cc}
|\lambda_1| & 0\\
0 & |\lambda_2|
\end{array}
\right)
U
$$
\item
Let $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$ be such that $\disc \pi > 0$, and $\phi_\pi$ a matrix satisfying \iref{eqChgVar}. Define
\begin{equation}
\label{hPiPos}
h_\pi =(\phi_\pi^{-1})^\trans \phi_\pi^{-1}.
\end{equation}
Then $h_\pi$ defines an ellipse of maximal volume inscribed in $\Lambda_\pi$. Moreover $\det h_\pi =\frac {2^{-2/3}} 3 (\disc \pi)^{\frac 1 3}$.
\item
Let $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$ be such that $\disc \pi < 0$, and $\phi_\pi$ a matrix satisfying \iref{eqChgVar}. Define
$$
h_\pi = 2^{\frac 1 3} (\phi_\pi^{-1})^\trans \phi_\pi^{-1}.
$$
Then $h_\pi$ defines an ellipse of maximal volume inscribed in $\Lambda_\pi$. Moreover $\det h_\pi = \frac 1 3 |\disc \pi|^{\frac 1 3}$ .
\end{enumerate}
\end{prop}
\begin{figure}
\centering
\includegraphics[width=5cm,height=5cm]{Illustrations/EllMaxDiscPos.eps}
\includegraphics[width=5cm,height=5cm]{Illustrations/EllMaxDiscNeg.eps}
\caption{Maximal ellipses inscribed in $\Lambda_\pi$, $\pi = x(x^2-3y^2)$ or $\pi = x(x^2+3y^2)$.}
\end{figure}
\proof
Clearly, if the matrix $h_\pi$ defines an ellipse of maximal volume in the set $\Lambda_\pi$, then for any linear change of coordinates $\phi$, the metric $(\phi^{-1})^\trans h_\pi \phi^{-1}$ defines an ellipse of maximal volume in the set $\Lambda_{\pi\circ \phi}$.
When $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_2$, we know that $\lambda_\pi \pi \circ \phi_\pi = x^2+y^2$ when $\det \pi >0$, and $x^2-y^2$ when $\det \pi <0$, where $|\lambda_\pi|=1$. When $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$,
we know from Lemma \ref{lemmaChgVar} that $\pi \circ \phi_\pi = x(x^2-3y^2)$
when $\disc \pi>0$ and $x (x^2+3y^2)$ when $\disc \pi <0$.
Hence it only remains to prove that when $\pi \in \{x^2+y^2,\ x^2-y^2,\ x(x^2-3y^2)\}$, then $h_\pi=\Id$, which means that the disc of radius $1$ is an ellipse of maximal volume inscribed in $\Lambda_\pi$, while when $\pi = x(x^2+3y^2)$ we have $h_\pi=2^{1/3}\Id$.
The case $\pi = x^2+y^2$ is trivial. We next concentrate on the case $\pi = x(x^2+3y^2)$,
the treatment of the two other cases being very similar.
Let $E$ be an ellipse included in $\Lambda_\pi$, $\pi = x(x^2+3y^2)$.
Analyzing the variations of the function $\pi(\cos\theta,\sin\theta)$, it is not hard to see that we can rotate $E$ into another ellipse $E'$, also verifying the inclusion $E'\subset \Lambda_\pi$, and whose principal axes are $\{x=0\}$ and $\{y=0\}$.
We therefore only need to consider ellipses of the form $k x^2 + h y^2 \leq 1$.
For a given value of $h$, we denote by $k(h)$ the minimal value of $k$ for which this ellipse is included in $\Lambda_\pi$. Clearly the boundary of the ellipse, defined by $k(h) x^2 + h y^2 = 1$, must be tangent to the curve defined by $\pi(x,y) = 1$ at some point $(x,y)$. This translates into the following system of equations
\begin{equation}
\left\{
\begin{array}{ccc}
\pi(x,y) &=& 1,\\
k x^2 + h y^2 &=& 1,\\
k x \partial_y \pi(x,y) - h y \partial_x \pi(x,y) &=& 0.
\end{array}
\right.
\label{tangencyEqn}
\end{equation}
Eliminating the variables $x$ and $y$ from this system, and discarding negative
or complex-valued solutions, we find that $k(h) = \frac{4+h^3}{3h^2}$ when $h\in (0,2]$, and $k(h) = k(2) = 1$ when $h\geq 2$.
The minimum of the determinant $h k(h) = \frac 1 3 \left(\frac 4 h + h^2\right)$ is attained for $h=2^{\frac 1 3}$. Observing that $k(2^{\frac 1 3}) = 2^{\frac 1 3}$ we obtain as announced $h_\pi = 2^{1/3}\Id$ and that the ellipse of largest area included in $\Lambda_\pi$ is the disc
of equation $2^{1/3}(x^2+y^2)\leq 1$, as illustrated in Figure 2.b.
The same reasoning applies to the other cases. For $\pi = x^2-y^2$ we obtain $k(h) = \frac 1 h$, $h\in (0,\infty)$. In this case the determinant $h k(h)$ is independent of $h$, and we simply choose $h=1 = k(1)$.
For $\pi = x(x^2-3y^2)$ we obtain $k(h) = \frac{4-h^3}{3h^2}$ when $h\in (0,1]$ and $k(h) = k(1) = 1$ when $h>1$. The maximal volume is attained when $h=1$, corresponding to the
unit disc, as illustrated in Figure 2.a.
\hfill $\diamond$\\
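The formula for $k(h)$ above can also be recovered numerically from its definition alone. The following sketch (ours, not part of the paper) performs a geometric bisection on $k$, testing containment of the ellipse $k x^2 + h y^2 \leq 1$ in $\Lambda_\pi$ on a fine sampling of its boundary:

```python
import numpy as np

def pi(x, y):
    # the homogeneous cubic pi = x (x^2 + 3 y^2)
    return x * (x**2 + 3 * y**2)

def inside(k, h, n=20000):
    # True if the ellipse k x^2 + h y^2 <= 1 is contained in Lambda_pi,
    # tested on a fine sampling of the ellipse boundary
    t = np.linspace(0, 2 * np.pi, n)
    x, y = np.cos(t) / np.sqrt(k), np.sin(t) / np.sqrt(h)
    return np.max(np.abs(pi(x, y))) <= 1 + 1e-9

def k_of_h(h):
    # geometric bisection on k: enlarging k shrinks the ellipse,
    # so containment is monotone in k
    lo, hi = 1e-6, 1e6
    for _ in range(60):
        mid = np.sqrt(lo * hi)
        lo, hi = (lo, mid) if inside(mid, h) else (mid, hi)
    return hi

for h in [0.5, 1.0, 1.5, 2.0]:
    assert abs(k_of_h(h) - (4 + h**3) / (3 * h**2)) < 1e-3
```

The same test with $\pi = x(x^2-3y^2)$ recovers $k(h) = (4-h^3)/(3h^2)$ on $(0,1]$.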
\begin{remark}
When $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$ and $\disc \pi>0$, a surprising simplification occurs: the matrix
{\rm \iref{hPiPos}} has entries which are symmetric functions of the roots $r_1,r_2,r_3$.
Using the relation between the roots and the coefficients of a polynomial, we find the following expression
$$
\text{If }\pi = a x^3+ 3 b x^2 y+ 3 c x y^2+ d y^3,\text{ then } h_\pi = 2^{-\frac 1 3} 3 (\disc \pi)^{\frac{-1} 3}
\left(
\begin{array}{cc}
2 (b^2-ac) & bc - ad\\
bc - ad & 2 (c^2-bd)
\end{array}
\right).
$$
This yields a direct expression of the matrix as a function of the coefficients. Unfortunately there is no such expression when $\disc \pi<0$.
\end{remark}
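The closed-form expression of this remark can be tested on $\pi = x(x^2-3y^2)$, whose optimal ellipse was computed in the preceding proof to be the unit disc. The following check (ours, not part of the paper) computes $\disc \pi$ directly from the roots:

```python
import numpy as np

# pi = x^3 - 3 x y^2 = x (x^2 - 3 y^2), written as a x^3 + 3b x^2 y + 3c x y^2 + d y^3
a, b, c, d = 1.0, 0.0, -1.0, 0.0
lam, roots = a, np.array([0.0, np.sqrt(3), -np.sqrt(3)])  # pi = lam prod (x - r_i y)
disc = lam**4 * np.prod([(roots[i] - roots[j])**2
                         for i in range(3) for j in range(i + 1, 3)])  # = 108
h_pi = 2**(-1/3) * 3 * disc**(-1/3) * np.array([[2*(b*b - a*c), b*c - a*d],
                                                [b*c - a*d, 2*(c*c - b*d)]])
assert np.allclose(h_pi, np.eye(2))  # the unit disc, as expected
```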
At first sight, Proposition \ref{propEllipseMax} might seem to be a complete solution to the problem of building an appropriate metric for mesh generation. However, some difficulties arise at points $z\in \Omega$ where $\det \pi_z=0$ or $\disc \pi_z=0$.
If $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_2\bs \{0\}$ and $\det \pi = 0$, then up to a linear change of coordinates, and a change of sign, we can assume that $\pi = x^2$. The minimization problem clearly yields the degenerate matrix
$h_\pi = \diag(1,0)$.
If $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_3\bs \{0\}$ and $\disc \pi = 0$, then up to a linear change of coordinates either $\pi = x^3$ or $\pi = x^2 y$. In the first case the minimization problem again gives $h_\pi = \diag(1,0)$. In the second case a wilder behavior appears: minimizing sequences
for the problem \iref{maxellips} are of the form
$\diag(\ve^{-1},\ve^2)$ with $\ve\to 0$. The minimization process
therefore yields a matrix which is not only degenerate, but also unbounded.
These degenerate cases appear generically, and constitute a problem for
mesh generation since they mean that the adapted triangles are not well defined.
Current anisotropic mesh generation algorithms for linear elements
often solve this problem by fixing a small parameter $\delta>0$, and working with the modified matrix
$\tilde h_\pi := h_\pi+\delta\Id$ which cannot degenerate. However this procedure cannot be extended to quadratic elements, since $h_{x^2 y}$ is both degenerate and unbounded.
In the theoretical construction of an optimal mesh which
was discussed in \S 3.2, we tackled this problem by imposing a bound $M>0$ on the diameter of the triangles. This was the purpose of the modified shape function $K_M(\pi)$ and of the triangle $T_M(\pi)$ of minimal interpolation error among the triangles of diameter smaller than $M$.
We follow a similar idea here, looking for the ellipse of largest area included in $\Lambda_\pi$ with constrained diameter. This provides matrices which are both positive definite and bounded,
and vary continuously with respect to the data $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$.
The constrained problem, depending on $\alpha>0$, is the following:
\begin{equation}
\sup \{|E|\sep E\in \cE,\; E\subset \Lambda_\pi \text{ and }\diam E\leq 2\alpha^{-1/2}\},
\label{ellipsconstrained}
\end{equation}
or equivalently
\begin{equation}
\label{hConstrained}
\inf \{\det H\sep H\in S_2^+ \;\;{\rm s.t.}\;\; \<Hz,z\> \geq |\pi(z)|^{2/m}, z\in{\rm \hbox{I\kern-.2em\hbox{R}}}^2,\; \text{ and } H\geq \alpha \Id\}.
\end{equation}
We denote by $E_{\pi,\alpha}$ and $h_{\pi,\alpha}$ the solutions
to \iref{ellipsconstrained} and \iref{hConstrained}.
In the remainder of this section, we show that this solution can also be computed
by a simple algebraic procedure, avoiding any kind of numerical optimization. In the case
where $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_2$, it can easily be checked that
\begin{equation}
h_{\pi,\alpha} = U^\trans
\left(
\begin{array}{cc}
\max\{|\lambda_1|,\alpha\} & 0\\
0 & \max\{|\lambda_2|,\alpha\}
\end{array}
\right)
U,
\label{family2}
\end{equation}
where $[\pi] = U^\trans \diag(\lambda_1,\lambda_2)\, U$ is a diagonalization of the symmetric matrix associated to $\pi$,
as illustrated on Figure 3.
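A minimal sketch (ours) of the recipe \iref{family2}, assuming $[\pi]$ denotes the symmetric matrix of the quadratic form $\pi$ as in the last section:

```python
import numpy as np

def h_constrained(P, alpha):
    # clip the absolute eigenvalues of the symmetric matrix P from below by alpha
    lam, U = np.linalg.eigh(P)  # P = U @ diag(lam) @ U.T, columns are eigenvectors
    return U @ np.diag(np.maximum(np.abs(lam), alpha)) @ U.T

P = np.array([[1.0, 0.0], [0.0, -1.0]])                   # pi = x^2 - y^2
assert np.allclose(h_constrained(P, 0.5), np.eye(2))      # alpha below |lambda_i|
assert np.allclose(h_constrained(P, 2.0), 2 * np.eye(2))  # alpha-dominated regime
```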
\begin{figure}
\centering
\includegraphics[width=6cm,height=4cm]{Illustrations/AllEllipsesDetPos.eps}
\includegraphics[width=6cm,height=4cm]{Illustrations/AllEllipsesDetNeg.eps}
\caption{The set $\Lambda_\pi$ (full) and the ellipses $E_{\pi,\alpha}$ (dashed)
for various values of $\alpha>0$ when $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_2$.}
\end{figure}
When $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$, the problem is
more technical, and the matrix $h_{\pi,\alpha}$ takes different forms depending on the value of $\alpha$ and the sign of $\disc \pi$.
In order to describe these different regimes, we introduce three real numbers
$0\leq \beta_\pi\leq \alpha_\pi\leq \mu_\pi$ and a matrix
$U_\pi\in {\cal O}_2$ which are defined as follows. We first define $\mu_\pi$ by
$$
\mu_\pi^{-1/2}:=\min\{\|z\|\sep |\pi(z)|=1\},
$$
the radius of the largest
disc $D_\pi$ inscribed in $\Lambda_\pi$. For $z_\pi$ such that $|\pi(z_\pi)|=1$
and $\|z_\pi\|=\mu_\pi^{-1/2}$, we define $U_\pi$
as the rotation which maps $z_\pi$ to the vector $(\|z_\pi\|,0)$.
We then define $\alpha_\pi$ by
$$
2\alpha_\pi^{-1/2}:=\max \{{\rm diam}(E)\sep E\in \cE\; ;\; D_\pi\subset E\subset\Lambda_\pi\},
$$
the diameter of the largest ellipse inscribed in $\Lambda_\pi$
and containing the disc $D_\pi$. In the case where $\pi$ is
of the form $(ax+by)^3$, this ellipse is infinitely long
and we set $\alpha_\pi=0$. We finally define $\beta_\pi$ by
$$
2\beta_\pi^{-1/2}:={\rm diam}(E_\pi),
$$
where $E_\pi$ is the optimal
ellipse described in Proposition \ref{propEllipseMax}. In the case
where $\disc\pi=0$, the ``optimal ellipse'' is infinitely long
and we set $\beta_\pi=0$. It is readily seen that
$0\leq \beta_\pi\leq \alpha_\pi\leq \mu_\pi$.
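Since $\pi$ is $m$-homogeneous, the definition of $\mu_\pi$ reduces to a one-dimensional maximization: $|\pi(\rho\cos\theta,\rho\sin\theta)| = \rho^m |\pi(\cos\theta,\sin\theta)|$, so that $\mu_\pi = (\max_\theta |\pi(\cos\theta,\sin\theta)|)^{2/m}$. A quick numerical check (ours, not in the paper) on the two cubics studied previously:

```python
import numpy as np

def mu(pi_unit, m, n=100000):
    # mu^{-1/2} = min ||z|| over |pi(z)| = 1; by m-homogeneity this equals
    # (max_theta |pi(cos theta, sin theta)|)^{-1/m}, hence mu = (max |pi|)^{2/m}
    t = np.linspace(0, 2 * np.pi, n)
    return np.max(np.abs(pi_unit(np.cos(t), np.sin(t)))) ** (2.0 / m)

# pi = x(x^2 - 3y^2) = cos(3 theta) on the unit circle: largest inscribed disc radius 1
assert abs(mu(lambda x, y: x * (x**2 - 3 * y**2), 3) - 1.0) < 1e-6
# pi = x(x^2 + 3y^2): mu_pi = 2^{1/3}, matching h_pi = 2^{1/3} Id found previously
assert abs(mu(lambda x, y: x * (x**2 + 3 * y**2), 3) - 2**(1/3)) < 1e-4
```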
All these quantities, as well as the others involved in the
description of the optimal $h_{\pi,\alpha}$ and $E_{\pi,\alpha}$
in the following result, can be computed algebraically
from the coefficients of $\pi$ by solving equations of degree
at most $4$.
\begin{prop}
\label{family3}
For $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$ and $\alpha>0$, the matrix $h_{\pi,\alpha}$ and ellipse $E_{\pi,\alpha}$ are described as follows.
\begin{enumerate}
\item
If $\alpha\geq \mu_\pi$, then $h_{\pi,\alpha}=\alpha\Id$
and $E_{\pi,\alpha}$ is the disc of radius $\alpha^{-1/2}$.
\item
If $\alpha_\pi\leq \alpha\leq \mu_\pi$, then
\begin{equation}
\label{hBigAlpha}
h_{\pi,\alpha} = U_\pi^\trans
\left(
\begin{array}{cc}
\mu_\pi & 0\\
0 & \alpha
\end{array}
\right)
U_\pi,
\end{equation}
and $E_{\pi,\alpha}$ is the ellipse of diameter $2\alpha^{-1/2}$
which is inscribed in $\Lambda_\pi$ and contains $D_\pi$.
It is tangent to $\partial\Lambda_\pi$ at the two points $z_\pi$ and $-z_\pi$.
\item
If $\beta_\pi\leq \alpha\leq \alpha_\pi$ then $E_{\pi,\alpha}$ is tangent
to $\partial\Lambda_\pi$ at four points
and has diameter $2\alpha^{-1/2}$. There are at most
three such ellipses and $E_{\pi,\alpha}$ is the one
of largest area. The matrix $h_{\pi,\alpha}$ has a form
which depends on the sign of $\disc\pi$.
\newline
(i) If $\disc \pi<0$,
then
$$
h_{\pi,\alpha} = (\phi_\pi^{-1})^\trans
\left(
\begin{array}{cc}
\lambda_\alpha & 0\\
0 & \frac{4+ \lambda_\alpha^3}{3\lambda_\alpha^2}
\end{array}
\right)
\phi_\pi^{-1}
$$
where $\phi_\pi$ is the matrix defined in Proposition {\rm \ref{propEllipseMax}}
and $\lambda_\alpha$ is determined by $\det(h_{\pi,\alpha}-\alpha\Id)=0$.
\newline
(ii)
If $\disc \pi >0$, then
$$
h_{\pi,\alpha} = (\phi_\pi^{-1})^\trans V^\trans
\left(
\begin{array}{cc}
\lambda_\alpha & 0\\
0 & \frac{4-\lambda_\alpha^3}{3\lambda_\alpha^2}
\end{array}
\right)
V
\phi_\pi^{-1}
$$
where $\phi_\pi$ and $\lambda_\alpha$ are given as in the case $\disc\pi<0$ and
where $V$ is chosen among the three rotations by $0$, $60$ or $120$ degrees
so as to maximize $|E_{\pi,\alpha}|$.\newline
(iii)
If $\disc \pi = 0$ and $\alpha_\pi>0$, then there exists a linear change
of coordinates $\phi$ such that $\pi\circ\phi =x^2 y$ and we have
$$
h_{\pi,\alpha} = (\phi^{-1})^\trans
\left(
\begin{array}{cc}
\lambda_\alpha & 0\\
0 & \frac 4 {27\lambda_\alpha^2}
\end{array}
\right)
\phi^{-1}
$$
where $\lambda_\alpha$ is determined by $\det(h_{\pi,\alpha}-\alpha\Id)=0$.
\item
If $\alpha\leq \beta_\pi$, then $h_{\pi,\alpha}=h_\pi$ and
$E_{\pi,\alpha}=E_\pi$ is the solution of the unconstrained problem.
\end{enumerate}
\end{prop}
\proof
See Appendix.
\hfill $\diamond$\\
\begin{figure}
\centering
\includegraphics[width=4cm,height=4cm]{Illustrations/EllNeg1.eps}
\includegraphics[width=4cm,height=4cm]{Illustrations/EllPos1.eps}
\caption{The set $\Lambda_\pi$ (full), the disc $E_{\pi,\mu_\pi}=D_\pi$ (full),
the ellipse $E_{\pi,\alpha_\pi}$ (full), and the ellipses $E_{\pi,\alpha}$ (dashed)
for various values of $\alpha>0$ when $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$ and $\alpha \in (\alpha_\pi,\infty)$.
Left: $\disc\pi <0$. Right: $\disc\pi >0$}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4cm,height=4cm]{Illustrations/EllNeg2.eps}
\includegraphics[width=4cm,height=4cm]{Illustrations/EllPos2.eps}
\caption{The set $\Lambda_\pi$ (full), the ellipse $E_{\pi,\alpha_\pi}$ (full),
the ellipse $E_{\pi,\beta_\pi}=E_\pi$ (full), and the ellipses $E_{\pi,\alpha}$ (dashed)
for various values of $\alpha>0$ when $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_3$ and $\alpha \in (\beta_\pi,\alpha_\pi)$.
Left: $\disc\pi <0$. Right: $\disc\pi >0$}
\end{figure}
\noindent
Figure 4 illustrates the ellipses $E_{\pi,\alpha}$, $\alpha \in (\alpha_\pi,\infty)$, when $\disc \pi<0$ (left) or $\disc\pi>0$ (right). Figure 5 illustrates the ellipses $E_{\pi,\alpha}$, $\alpha \in (\beta_\pi,\alpha_\pi)$, when $\disc \pi<0$ (left) or $\disc\pi>0$ (right). Note that when $\alpha\geq \alpha_\pi$,
the principal axes of $E_{\pi,\alpha}$ are independent of $\alpha$ since $U_\pi$
is a rotation that only depends on $\pi$, while these axes generally
vary when $\beta_\pi\leq \alpha \leq \alpha_\pi$, since the matrix $\phi_\pi$ is not
a rotation.
\begin{remark}
\label{overfitting}
For interpolation by cubic or higher degree polynomials ($m\geq 4$), an additional difficulty arises that can be summarized as follows: one should be careful not to ``overfit'' the polynomial $\pi$ with the matrix $h_\pi$. An approach based on exactly solving the optimization problem \iref{maxellips} might indeed
lead to a metric $h(z)$ with unjustifiably strong variations with respect to $z$
and/or bad conditioning, and jeopardize the mesh generation process.
As an example, consider the one parameter family of polynomials
$$
\pi_t=x^2y^2+ty^4\in{\rm \hbox{I\kern-.2em\hbox{H}}}_4,\;\; t\in [-1,1].
$$
It can be checked that when $t>0$, the supremum
$S_+=\sup_{E\in\cE,E\subset\Lambda_{\pi_t}} |E|$ is finite and independent of $t$, but not attained, and that
any sequence $E_n\subset\Lambda_{\pi_t}$ of ellipses such that $\lim_{n\to\infty} |E_n| = S_+$ becomes infinitely elongated in the $x$ direction, as $n\to\infty$. For $t<0$, the supremum
$S_-=\sup_{E\in\cE,E\subset\Lambda_{\pi_t}} |E|$ is independent of $t$
and attained for the optimal ellipse of equation
$|t|^{-1/2}\frac {\sqrt 2-1} 2 x^2+|t|^{1/2}y^2\leq 1$. This ellipse becomes infinitely elongated
in the $y$ direction as $t\to 0$. This example shows the instability of the optimal matrix $h_\pi$ with respect
to small perturbations of $\pi$. However, for all values of $t\in [-1,1]$,
these extremely elongated ellipses could be discarded in favor, for example,
of the unit disc $D=\{x^2+y^2\leq 1\}$ which obviously satisfies $D\subset \Lambda_{\pi_t}$ and is a near-optimal
choice in the sense that $2|D|= S_+\leq S_-= |D|\sqrt{2(\sqrt 2+1)}$.
\end{remark}
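The figures quoted in this remark can be verified numerically for $t=-1$ (a check of ours, not in the paper): the stated optimal ellipse is indeed inscribed and tangent, and its area exceeds that of the unit disc by the stated factor.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 200000)
k = (np.sqrt(2) - 1) / 2            # optimal ellipse k x^2 + y^2 <= 1 for t = -1
x, y = np.cos(theta) / np.sqrt(k), np.sin(theta)
pi = x**2 * y**2 - y**4             # pi_t with t = -1
assert abs(np.max(np.abs(pi)) - 1.0) < 1e-6  # inscribed in Lambda_{pi_t} and tangent
# area ratio to the unit disc: 1/sqrt(k) = sqrt(2 (sqrt 2 + 1)), as in the remark
assert np.isclose(1 / np.sqrt(k), np.sqrt(2 * (np.sqrt(2) + 1)))
```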
\section{Polynomial equivalents of the shape function in higher degree}
In degrees $m\geq 4$, we could not find analytical expressions of $K_{m,p}$ or $K_m^\cE$, and do not expect them to exist. However, equivalent quantities with analytical expressions
are available, under the same general form as in Theorem \ref{equal23}: the root of a polynomial in the coefficients of the polynomial $\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_m$. This result improves on the analysis of \cite{C2}, where a similar setting is studied.
In the following, we say that a function $\rm \mathbf R$ is a polynomial on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$ if there exists a polynomial $P$ of $m+1$ variables such that for all $(a_0,\cdots, a_m)\in {\rm \hbox{I\kern-.2em\hbox{R}}}^{m+1}$,
$$
{\rm \mathbf R}\left(\sum_{i=0}^m a_i x^i y^{m-i}\right) := P(a_0,\cdots, a_m) ,
$$
and we define $\deg {\rm \mathbf R} := \deg P$.
The object of this section is to prove the following theorem.
\begin{theorem}
\label{thequiv}
For each degree $m\geq 2$, there exists a polynomial $\Kpol_m$ on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$
and a constant $C_m>0$ such that for all $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$ and all $1\leq p \leq \infty$
$$
\frac 1 {C_m} \sqrt[r_m]{\Kpol_m(\pi)} \leq K_{m,p}(\pi )\leq C_m \sqrt[r_m]{\Kpol_m(\pi)},
$$
where $r_m = \deg \Kpol_m$.
\end{theorem}
Since for fixed $m$ all functions $K_{m,p}$, $1\leq p\leq \infty$, are equivalent on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$, there is no need to keep track of the exponent $p$ in this section, and we use below the notation $K_m = K_{m,\infty}$. Care should be taken not to confuse the functions $K_m$ and $\Kpol_m$, nor the polynomials $Q_d$ and $\mathbf Q_d$ below, whose notations are distinguished only by their typeface.
Theorem \ref{thequiv} is a generalization of Theorem \ref{equal23}, and the polynomial $\Kpol_m$ involved should be seen as a generalization of the determinant on ${\rm \hbox{I\kern-.2em\hbox{H}}}_2$, and of the discriminant on ${\rm \hbox{I\kern-.2em\hbox{H}}}_3$.
Let us immediately stress that the polynomial $\Kpol_m$ is not unique. In particular, we shall propose
two constructions that lead to different $\Kpol_m$ with different degree $r_m$.
Our first construction is simple and intuitive, but leads to a polynomial
of degree $r_m$ that grows quickly with $m$. Our second construction
uses the tools of Invariant Theory to provide a polynomial
of much smaller degree, which might be more useful in practice.
We first recall that there is a strong connection between the roots of a polynomial in ${\rm \hbox{I\kern-.2em\hbox{H}}}_2$ or ${\rm \hbox{I\kern-.2em\hbox{H}}}_3$ and its determinant or discriminant:
\begin{eqnarray*}
\det\left(\lambda \prod_{1\leq i\leq 2} (x-r_i y)\right) & = & \frac {-1} 4 \lambda^2(r_1-r_2)^2,\\
\disc\left(\lambda\prod_{1\leq i\leq 3} (x-r_i y)\right) & = & \lambda^4 (r_1-r_2)^2(r_2-r_3)^2(r_3-r_1)^2.
\end{eqnarray*}
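These identities are straightforward to verify; for instance for the determinant (a check of ours):

```python
import numpy as np

lam, r1, r2 = 2.0, 1.0, 3.0
# pi = lam (x - r1 y)(x - r2 y) = 2x^2 - 8xy + 6y^2, with symmetric matrix P below
P = np.array([[lam, -lam * (r1 + r2) / 2],
              [-lam * (r1 + r2) / 2, lam * r1 * r2]])
assert np.isclose(np.linalg.det(P), -0.25 * lam**2 * (r1 - r2)**2)  # both equal -4
```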
We now fix an integer $m>3$. Observing that these expressions are a ``cyclic'' product of the squares of differences of roots, we define
$$
\cyc(\lambda,r_1,\cdots,r_m) := \lambda^4 (r_1-r_2)^2\cdots (r_{m-1}-r_m)^2 (r_m-r_1)^2.
$$
Since $m>3$, this quantity is no longer invariant under reordering of the $r_i$. For any
positive integer $d$, we therefore
introduce the symmetrized version of the $d$-th powers of the cyclic product
$$
Q_d(\lambda,r_1,\cdots,r_m) := \sum_{\sigma\in \Sigma_m} \cyc(\lambda,r_{\sigma(1)},\cdots,
r_{\sigma(m)})^d,
$$
where $\Sigma_m$ is the set of all permutations of $\{1,\cdots, m\}$.
\begin{prop}
For all $d>0$ there exists a homogeneous polynomial $\mathbf Q_d$ of degree $4 d$ on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$, with integer coefficients, and such that
$$
\text{If } \pi = \lambda \prod_{i=1}^m (x-r_i y)
\text{ then } \mathbf Q_d \left(\pi \right) = Q_d(\lambda,r_1,\cdots,r_m).
$$
In addition, $\mathbf Q_d$ obeys the invariance property
\begin{equation}
\label{invprop}
\mathbf Q_d(\pi\circ\phi) = (\det \phi)^{2md} \mathbf Q_d(\pi).
\end{equation}
\end{prop}
\proof
We denote by $\sigma_i$ the elementary symmetric functions in the $r_i$, in such a way that
$$
\prod_{i=1}^m (x-r_i y) = x^m - \sigma_1 x^{m-1} y+ \sigma_2 x^{m-2} y^2 -\cdots +(-1)^m \sigma_m y^m.
$$
A well-known theorem of algebra (see e.g. chapter IV.6 in \cite{Lang}) asserts that any symmetric polynomial in the $r_i$ can be reformulated as a polynomial in the $\sigma_i$. Hence for any $d$ there exists a polynomial $\tilde Q_d$ such that
$$
Q_d(1,r_1,\cdots,r_m) = \tilde Q_d(\sigma_1,\cdots,\sigma_m).
$$
In addition, it is known that the total degree of $\tilde Q_d$ equals the partial degree of $Q_d$ in the variable $r_1$, in our case $4d$, and that $\tilde Q_d$ has integer coefficients since $Q_d$ does.
Given a polynomial $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$ not divisible by $y$, we write it under the two equivalent forms
$$
\pi = a_0 x^m + a_1 x^{m-1}y+\cdots + a_m y^m = \lambda \prod_{i=1}^m(x-r_i y).
$$
Clearly $a_0 = \lambda$ and $\sigma_i = (-1)^i \frac{a_i}{a_0}$. It follows that
$$
Q_d(\lambda,r_1,\cdots,r_m)= \lambda^{4d}\tilde Q_d(\sigma_1, \cdots,\sigma_m) = a_0^{4d} \tilde Q_d\left(\frac{-a_1}{a_0},\cdots,\frac{(-1)^m a_m}{a_0}\right).
$$
Since $\deg \tilde Q_d = 4d$, the negative powers of $a_0$ due to the denominators are cleared
by the factor $a_0^{4d}$ and the right hand side is thus
a polynomial in the coefficients $a_0,\cdots,a_m$ that we denote by $\mathbf Q_d(\pi)$.
We now prove the invariance of $\mathbf Q_d$ with respect to linear changes of coordinates; this proof is adapted from \cite{Hilbert}.
By continuity of $\mathbf Q_d$, it suffices to prove this invariance property for pairs $(\pi,\phi)$ such that $\phi$ is an invertible linear change of coordinates, and neither $\pi$ nor $\pi\circ \phi^{-1}$ is divisible by $y$.
Under this assumption, we observe that if $\pi = \lambda \prod_{i=1}^m(x-r_i y)$ and $\phi = \left(\begin{array}{cc} \alpha &\beta\\ \gamma &\delta \end{array}\right)$, then $\pi \circ \phi^{-1} = \tilde \lambda \prod_{i=1}^m (x-\tilde r_i y)$ where
$$
\tilde \lambda = \lambda (\det\phi)^{-m} \prod_{i=1}^m(\gamma r_i+\delta) \text{ and } \tilde r_i =\frac {\alpha r_i+\beta }{\gamma r_i+\delta}.
$$
Observing that
$$
\tilde r_i - \tilde r_j = \frac{\det\phi}{(\gamma r_i+\delta)(\gamma r_j+\delta)} (r_i-r_j),
$$
it follows that
$$
\cyc(\tilde \lambda,\tilde r_1,\cdots,\tilde r_m) = (\det \phi)^{-2m} \cyc(\lambda,r_1,\cdots,r_m).
$$
The invariance property \iref{invprop} follows readily.
\hfill $\diamond$\\
\newline
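The two root-transformation identities at the heart of this proof can be checked numerically (our sketch, not part of the paper; random real roots and a fixed change of coordinates $\phi$ with $\det\phi=2$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, lam = 5, 1.7
r = rng.normal(size=m)                          # roots of pi = lam prod (x - r_i y)
alpha, beta, gamma, delta = 1.0, 2.0, 0.5, 3.0  # phi = [[alpha, beta], [gamma, delta]]
det_phi = alpha * delta - beta * gamma

def cyc(lam, r):
    # cyclic product lam^4 (r_1 - r_2)^2 ... (r_{m-1} - r_m)^2 (r_m - r_1)^2
    m = len(r)
    return lam**4 * np.prod([(r[i] - r[(i + 1) % m])**2 for i in range(m)])

r_t = (alpha * r + beta) / (gamma * r + delta)  # roots of pi o phi^{-1}
lam_t = lam * det_phi**(-m) * np.prod(gamma * r + delta)
assert np.isclose(cyc(lam_t, r_t), det_phi**(-2 * m) * cyc(lam, r))
```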
We now define $r_m = 2\, {\rm lcm}\{\deg \mathbf Q_d\sep 1\leq d \leq m!\}$
where ${\rm lcm}\{a_1,\cdots,a_k\}$ stands for the least common multiple of
$\{a_1,\cdots,a_k\}$, and we consider the following polynomial on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$:
$$
\Kpol_m := \sum_{d=1}^{m!} \mathbf Q_d^{\frac{r_m}{\deg \mathbf Q_d} }.
$$
Clearly $\Kpol_m$ has degree $r_m$ and obeys the invariance property $\Kpol_m(\pi\circ\phi) = (\det\phi)^{\frac{r_m m} 2} \Kpol_m(\pi)$.
\begin{lemma}
Let $\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_m$. If $\Kpol_m(\pi) = 0$ then $K_m(\pi)=0$.
\end{lemma}
\proof
We assume that $K_m(\pi)\neq 0$ and intend to prove that $\Kpol_m(\pi) \neq 0$. Without loss of generality, we may assume that $y$ does not divide $\pi$, since $K_m(\pi\circ U)= K_m(\pi)$ and $\Kpol_m(\pi\circ U)= \Kpol_m(\pi)$ for any rotation $U$. We thus write $\pi = \lambda\prod_{i=1}^m(x-r_i y)$, where $r_i\in\C$. Since $K_m(\pi)\neq 0$, we know
from Proposition \ref{vanishprop} that there is no group of $\mhalf:= \lfloor \frac m 2 \rfloor +1$ equal roots $r_i$.
We now define a permutation $\sigma^* \in \Sigma_m$ such that $r_{\sigma^*(i)}\neq r_{\sigma^*(i+1)}$ for $1\leq i\leq m-1$ and $r_{\sigma^*(m)} \neq r_{\sigma^*(1)}$.
In the case where $m=2m'$ is even and $m'$ of the $r_i$ are equal, any permutation $\sigma^*$ such that $r_{\sigma^*(1)} = r_{\sigma^*(3)} = \cdots = r_{\sigma^*(2m'-1)}$ satisfies this condition.
In all other cases let us assume that the $r_i$ are sorted by equality:
if $i<j<k$ and $r_i=r_k$ then $r_i=r_j=r_k$. If $m=2m'$ is even, we set $\sigma^*(2i-1) = i$ and $\sigma^*(2i) = m'+i$, $1\leq i\leq m'$. If $m=2m'+1$ is odd we set $\sigma^*(2i) = i$, $1\leq i\leq m'$ and $\sigma^*(2i-1) = m'+i$, $1\leq i\leq m'+1$. For example, $\sigma^* = (4\ 1\ 5\ 2\ 6\ 3\ 7)$
when $m=7$ and $\sigma^* =(1\ 5\ 2\ 6\ 3\ 7\ 4\ 8)$ when $m=8$.
With such a construction, we find that $|\sigma^*(i) -\sigma^*(i+1)|\geq m'$
if $m$ is odd and $|\sigma^*(i) -\sigma^*(i+1)|\geq m'-1$ if $m$ is even,
for all $1\leq i\leq m$, where we have set $\sigma^*(m+1):=\sigma^*(1)$.
Hence $\sigma^*$ satisfies the required condition, and therefore $\cyc(\lambda,r_{\sigma^*(1)},\cdots,r_{\sigma^*(m)})\neq 0$.
It is well known that if $k$ complex numbers $\alpha_1,\cdots,\alpha_k\in\C$ are such that $\alpha_1^d+\cdots+\alpha_k^d=0$, for all $1\leq d \leq k$, then $\alpha_1=\cdots =\alpha_k=0$.
Applying this property to the $m!$ complex numbers $\cyc(\lambda,r_{\sigma(1)},\cdots,r_{\sigma(m)})$, $\sigma\in\Sigma_m$, and noticing that the term corresponding to $\sigma^*$ is non zero, we see that there exists $1\leq d\leq m!$ such that $\mathbf Q_d(\pi) = Q_d(\lambda,r_1,\cdots,r_m)\neq 0$. Since
$\mathbf Q_d$ has real coefficients, the numbers $\mathbf Q_d(\pi)$ are real. Since the exponent $r_m/\deg \mathbf Q_d$ is even it follows that $\Kpol_m(\pi)>0$, which concludes the proof of this lemma.
\hfill $\diamond$\\
\newline
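The interleaving permutation $\sigma^*$ of this proof can be implemented directly (our sketch, with values in $\{1,\cdots,m\}$ and $0$-based list positions); it reproduces the examples given for $m=7$ and $m=8$:

```python
def sigma_star(m):
    # interleaving permutation from the proof, assuming the roots are sorted
    # by equality; s[j] holds sigma*(j+1)
    mp = m // 2
    s = [0] * m
    if m % 2 == 0:
        for i in range(1, mp + 1):
            s[2 * i - 2] = i          # sigma*(2i - 1) = i
            s[2 * i - 1] = mp + i     # sigma*(2i) = m' + i
    else:
        for i in range(1, mp + 1):
            s[2 * i - 1] = i          # sigma*(2i) = i
        for i in range(1, mp + 2):
            s[2 * i - 2] = mp + i     # sigma*(2i - 1) = m' + i
    return s

assert sigma_star(7) == [4, 1, 5, 2, 6, 3, 7]
assert sigma_star(8) == [1, 5, 2, 6, 3, 7, 4, 8]
for m in range(4, 13):
    s = sigma_star(m)
    gaps = [abs(s[i] - s[(i + 1) % m]) for i in range(m)]
    assert min(gaps) >= m // 2 - (1 if m % 2 == 0 else 0)
```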
The following proposition, when applied to the function $\Keq=\sqrt[r_m]{\Kpol_m}$ concludes the proof of Theorem \ref{thequiv}.
\begin{prop}
\label{propequiv}
Let $m\geq 2$, and let $\Keq : {\rm \hbox{I\kern-.2em\hbox{H}}}_m\to {\rm \hbox{I\kern-.2em\hbox{R}}}_+$ be a continuous function obeying the following properties
\begin{enumerate}
\item \emph{Invariance property} : $\Keq(\pi\circ\phi) = |\det\phi|^{\frac m 2} \Keq(\pi)$.
\item \emph{Vanishing property} : for all $\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_m$, if $\Keq(\pi)=0$ then $K_m(\pi) = 0$.
\end{enumerate}
Then there exists a constant $C>0$ such that $\frac 1 C \Keq\leq K_m\leq C \Keq$ on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$.
\end{prop}
\proof
We first remark that $\Keq$ is homogeneous in the same way as
$K_m$: if $\lambda\geq 0$, then applying the invariance property to
$\phi = \lambda^{\frac 1 m}\Id$, for which $\pi\circ\phi = \lambda\pi$ and $|\det\phi|^{\frac m 2} = \lambda$, yields $\Keq(\lambda\pi) = \lambda \Keq(\pi)$.
Our next remark is that a converse of the vanishing property
holds: if $K_m(\pi) = 0$, then there exists a sequence $\phi_n$ of linear changes of coordinates, $\det\phi_n=1$, such that $\pi\circ\phi_n\to 0$ as $n\to\infty$. Hence $\Keq(\pi) = \Keq(\pi\circ\phi_n) \to \Keq(0)$. Furthermore, $\Keq(0) = 0$ by homogeneity. Hence $\Keq(\pi) = 0$.
We define the set $\NF_m:=\{\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m\sep K_m(\pi) = 0\}$.
We also define a set $A_m\subset {\rm \hbox{I\kern-.2em\hbox{H}}}_m$ by a property ``opposite'' to the property defining $\NF_m$. A polynomial $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$ belongs to $A_m$ if and only if
$$
\|\pi\| \leq \|\pi\circ \phi\|\text{ for all } \phi \text{ such that } \det \phi = 1.
$$
The sets $\NF_m$ and $A_m$ are closed by construction, and clearly $\NF_m\cap A_m = \{0\}$.
We now define
$$
\underline{K_m}(\pi) = \lim_{r\to 0} \inf_{\|\pi'-\pi\|\leq r} K_m(\pi')
$$
the lower semi-continuous envelope of $K_m$. If $\underline{K_m}(\pi) = 0$ then there exists a converging sequence $\pi_n\to \pi$ such that $K_m(\pi_n)\to 0$. According to Proposition \ref{propsemicont}, it follows that $K_m(\pi)=0$ and hence $\pi\in\NF_m$. Therefore the lower semi-continuous function $\underline{K_m}$ and the continuous function $\Keq$ are bounded below by a positive constant on the compact set $\{\pi\in A_m, \|\pi\|=1\}$. Since
in addition $\Keq$ is continuous and $K_m$ is upper semi-continuous, we find that
the constant
$$
C = \sup_{\pi\in A_m, \|\pi\|=1} \max\left\{ \frac{\Keq(\pi)}{\underline{K_m}(\pi)},\; \frac{K_m(\pi)}{\Keq(\pi)} \right\},
$$
is finite. By homogeneity of $K_m$ and $\Keq$, we infer that on $A_m$
\begin{equation}
\label{equivAm}
\frac 1 C \Keq \leq \underline{K_m} \leq K_m \leq C \Keq.
\end{equation}
Now, for any $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$, we consider $\hat \pi$ of minimal norm in the
closure of the set
$\{\pi \circ \phi \sep \det \phi = 1\}$. By construction, we have $\hat \pi \in A_m$, and there exists a sequence $\phi_n$, $\det \phi_n=1$ such that $\pi \circ \phi_n \to \hat \pi$ as $n\to \infty$.
If $\hat \pi = 0$, then $K_m(\pi) = \Keq(\pi) = 0$.
Otherwise, we observe that
$$
\underline{K_m}(\hat \pi)\leq K_m(\pi) \leq K_m(\hat \pi) \text{ and } \Keq(\hat \pi) = \Keq(\pi),
$$
where we used the fact that $\underline{K_m}$, $K_m$ and $\Keq$ are respectively lower semi-continuous, upper semi-continuous, and continuous on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$.
Combining this with inequality \iref{equivAm} concludes the proof.
\hfill $\diamond$\\
A natural question is to find the polynomial of smallest degree satisfying Theorem \ref{thequiv}.
This leads us to the theory of {\it invariant polynomials}
introduced by Hilbert \cite{Hilbert} (we also refer to \cite{dix} for a survey on this subject).
A polynomial $R$ on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$ is said to be invariant if $\mu = \frac{m\deg R} 2$ is a positive integer and for all $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_m$ and linear change of coordinates $\phi$, one has
\begin{equation}
R(\pi\circ\phi) = (\det\phi)^\mu R(\pi).
\label{invarpolR}
\end{equation}
We have seen for instance that $\Kpol_m$ and $\mathbf Q_d$ are ``invariant polynomials'' on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$.
Nearly all the literature on invariant polynomials is concerned with the case of complex coefficients, both for the polynomials and the changes of variables. It is known in particular \cite{dix} that for all $m\geq 3$, there exist $m-2$ invariant polynomials $R_1,\cdots, R_{m-2}$ on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$, such that for any $\pi$ (complex coefficients are allowed) and any other invariant polynomial $R$ on ${\rm \hbox{I\kern-.2em\hbox{H}}}_m$,
\begin{equation}
\text{If } R_1(\pi) = \cdots = R_{m-2}(\pi) = 0,\text{ then } R(\pi) = 0.
\label{generators2}
\end{equation}
A list of such polynomials with minimal degree is known explicitly at least when $m\leq 8$.
Defining $r=2{\rm lcm} (\deg R_i)$ and $\Keq := \sqrt[r]{\sum_{i=1}^{m-2} R_i^{\frac{r}{\deg R_i} }}$, we see that $\Keq(\pi)=0$ implies $\Kpol_m(\pi) = 0$ and hence $K_m(\pi)=0$. According to Proposition \ref{propequiv}, we have constructed a new, possibly simpler, equivalent of $K_m$.
For example when $m=2$ the list $(R_i)$ is reduced to the polynomial $\det$, and for $m=3$ to the polynomial $\disc$.
For $m=4$, given $\pi = ax^4+4 b x^3 y+6c x^2 y^2+4 d x y^3+ey^4$, the list consists of the two polynomials
$$
I = ae-4bd+3c^2,\quad J=\left|\begin{array}{ccc} a&b&c\\ b&c&d\\ c&d&e\end{array}\right|,
$$
therefore $K_4(\pi)$ is equivalent to the quantity $\sqrt[6]{|I(\pi)|^3+J(\pi)^2}$.
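A small check (ours, not part of the paper) of these two classical invariants: $I$ and $J$ are unchanged under a determinant-$1$ shear, and the stated equivalent of $K_4$ is easily evaluated.

```python
import numpy as np

def IJ(a, b, c, d, e):
    # invariants of the binary quartic pi = a x^4 + 4b x^3 y + 6c x^2 y^2 + 4d x y^3 + e y^4
    I = a * e - 4 * b * d + 3 * c * c
    J = np.linalg.det(np.array([[a, b, c], [b, c, d], [c, d, e]], dtype=float))
    return I, J

# pi = x^4 + y^4, and its image under the shear (x, y) -> (x + y, y) of determinant 1:
# (x + y)^4 + y^4 = x^4 + 4 x^3 y + 6 x^2 y^2 + 4 x y^3 + 2 y^4
I0, J0 = IJ(1, 0, 0, 0, 1)
I1, J1 = IJ(1, 1, 1, 1, 2)
assert np.isclose(I0, I1) and np.isclose(J0, J1)  # invariance under det-1 maps
K4_equiv = (abs(I0)**3 + J0**2) ** (1.0 / 6.0)    # the equivalent of K_4(pi)
assert np.isclose(K4_equiv, 1.0)
```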
As $m$ increases these polynomials unfortunately become more and more complicated, and their number $m-2$ obviously increases. According to \cite{dix}, for $m=5$ the list consists of three polynomials of
degrees $4,8,12$, while for $m=6$ it consists of $4$ polynomials of degrees $2,4,6,10$.
\section{Extension to higher dimension}
The function $K_{m,p}$ can be generalized to higher dimension $d>2$ in the following way.
We denote by ${\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$ the set of homogeneous polynomials of degree $m$ in $d$ variables. For each $d$-dimensional simplex $T$, we define the interpolation operator $I_{m,T}$ acting from $C^0(T)$ onto the space ${\rm \hbox{I\kern-.2em\hbox{P}}}_{m-1,d}$ of polynomials of total degree $m-1$ in $d$ variables. This operator is defined by the conditions $I_{m,T} v(\gamma) = v(\gamma)$ for all points $\gamma \in T$ with barycentric coordinates in the set $\{0,\frac 1 {m-1},\frac 2 {m-1},\cdots,1\}$. Following \S1.2, and generalizing Definition \iref{shapefunction}, we define the local interpolation error on a simplex, the global interpolation error on a mesh,
as well as the shape function.
For all $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$,
$$
K_{m,p,d}(\pi) := \inf_{|T|=1} \|\pi -I_{m,T}\pi\|_p,
$$
where the infimum is taken over all $d$-dimensional simplices $T$ of volume $1$.
The variant $K_m^\cE$ introduced in \iref{defKE} also generalizes to higher dimensions, and was introduced by Weiming Cao in \cite{C3}. Denoting by $\cE_d$ the set of $d$-dimensional ellipsoids,
we define
$$
K_{m,d}^\cE(\pi) = \left( \sup_{E\in \cE_d, E\subset \Lambda_\pi} |E| \right)^{-\frac {m} d},
$$
with $\Lambda_\pi = \{z\in {\rm \hbox{I\kern-.2em\hbox{R}}}^d \sep |\pi(z)|\leq 1\}$.
Similarly to Proposition \ref{propequivEllTri}, it is not hard to show that the functions $K_{m,p,d}(\pi)$ and $K_{m,d}^\cE(\pi)$ are equivalent: there exist constants $0<c\leq C$ depending only on $m,d$, such that
$$
c K_{m,d}^\cE \leq K_{m,p,d} \leq C K_{m,d}^\cE.
$$
Let $({\cal T}_N)_{N\geq 0}$ be a sequence of simplicial meshes (triangles if $d=2$, tetrahedra
if $d=3$, \ldots) of a $d$-dimensional, polygonal open set $\Omega$. Generalizing
\iref{admissibilitycond}, we say that $({\cal T}_N)_{N\geq 0}$ is admissible if there exists a constant $C_A$ such that
$$
\sup_{T\in {\cal T}_N} \diam(T) \leq C_A N^{-1/d}.
$$
$$
The lower estimate in Theorem \ref{optitheorem} can be generalized, with straightforward adaptations in the proof. If $f\in C^m(\Omega)$ and $\seqT$ is an admissible sequence of triangulations, then
$$
\liminf_{N\to\infty} N^{\frac m d} e_{m,{\cal T}_N}(f)_p \geq \left\|K_{m,p,d}\left(\frac{d^m f}{m!}\right)\right\|_{L^q(\Omega)},
$$
where $\frac 1 q:= \frac m d+\frac 1 p$.
The upper estimate in Theorem \ref{optitheorem}
however does not generalize. The reason is that we used
in its proof a tiling of the plane consisting of translates of a single triangle
and of its symmetric image with respect to the origin. This construction is no longer possible in higher dimension; for example, it is well known that one cannot tile the space ${\rm \hbox{I\kern-.2em\hbox{R}}}^3$ with equilateral tetrahedra.
The generalization of the second part of Theorem \ref{optitheorem} is therefore the following.
For all $m$ and $d$, there exists a constant $C=C(m,d)>0$, such that for any polygonal open set
$\Omega \subset {\rm \hbox{I\kern-.2em\hbox{R}}}^d$ and $f\in C^m(\Omega)$ the following holds: for all $\varepsilon>0$, there exists an admissible sequence ${\cal T}_n$ of triangulations of $\Omega$ such that
$$
\limsup_{N\to\infty} N^{\frac m d} e_{m,{\cal T}_N}(f)_p \leq C\left\|K_{m,p,d}\left(\frac{d^m f}{m!}\right)\right\|_{L^q(\Omega)}+\varepsilon.
$$
The ``tightness'' of Theorem \ref{optitheorem} is thus partially lost, due to the constant $C$. This upper bound is not new, and can be found in \cite{C3}.
In the proof of the bidimensional theorem, we define by \iref{defTiling} a tiling ${\cal P}_R$ of the plane made of a triangle $T_R$, some of its translates, and their symmetric images with respect to the origin. In dimension $d$, the tiling ${\cal P}_R$ cannot be constructed by the same procedure.
The idea of the proof is to first consider a fixed tiling ${\cal P}_0$ of the space, constituted of simplices
of bounded diameter and of volume bounded below by a positive constant, as well as a reference equilateral simplex $\Teq$ of volume $1$. We then set ${\cal P}_R = \phi({\cal P}_0)$, where $\phi$ is a linear change of coordinates such that $T_R=\phi(\Teq)$. This procedure applies in any dimension, and yields all subsequent estimates ``up to a multiplicative constant'', which concludes the proof.
Since this upper bound is not tight anymore, and since the functions $K_{m,p,d}$ are all equivalent to $K_{m,d}^\cE$ as $p$ varies (with equivalence constants independent of $p$), there is no real need to keep track of the exponent $p$. We therefore denote by $K_{m,d}$ the function $K_{m,\infty,d}$.
For practical as well as theoretical purposes, it is desirable to have an efficient way to compute the shape function $K_{m,d}$, and an efficient algorithm to produce adapted triangulations.
The case $m=2$, which corresponds to piecewise linear elements, has been extensively studied; see for instance \cite{B,CSX}. In that case there exist constants $0<c<C$, depending only on $d$, such that for all $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_{2,d}$,
$$
c\sqrt[d]{|\det \pi|} \leq K_{2,d}(\pi) \leq C\sqrt[d]{|\det\pi|},
$$
where $\det \pi$ denotes the determinant of the symmetric matrix associated to $\pi$.
Furthermore, similarly to Proposition \ref{propEllipseMax}, the optimal metric for mesh refinement is given by the absolute value of the matrix of second derivatives, see \cite{B,CSX}, which is constructed in a similar way as in dimension $d=2$: with $U$ and $D=\diag(\lambda_1,\cdots,\lambda_d)$
the orthogonal and diagonal matrices such that $[\pi] = U^\trans D U$ and
with $|D|:=\diag(|\lambda_1|,\cdots,|\lambda_d|)$, we set $h_\pi = U^\trans |D| U$. It can be shown that the matrix $h_\pi$ defines an ellipsoid of maximal volume included
into the set $\Lambda_\pi$. The case $m=2$ can therefore be regarded as solved.
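The construction of $h_\pi = U^\trans |D| U$ from the eigendecomposition of $[\pi]$ is easy to carry out numerically. The following is a minimal sketch, assuming numpy; the function name \texttt{optimal\_metric} is ours, not the paper's.

```python
import numpy as np

def optimal_metric(H):
    """Given the symmetric matrix [pi] of a quadratic form (case m = 2),
    return h_pi = U^T |D| U, where [pi] = U^T D U is an eigendecomposition."""
    lam, U = np.linalg.eigh(H)                 # columns of U are eigenvectors
    return U @ np.diag(np.abs(lam)) @ U.T

# An indefinite example in dimension d = 2.
H = np.array([[2.0, 0.0],
              [0.0, -3.0]])
h = optimal_metric(H)
# h is positive semidefinite and det h = |det H|, consistent with
# K_{2,d}(pi) being equivalent to |det pi|^{1/d}.
assert np.all(np.linalg.eigvalsh(h) >= 0)
assert np.isclose(np.linalg.det(h), abs(np.linalg.det(H)))
```

Note that taking absolute values of the eigenvalues leaves the principal axes of the metric unchanged and only flips the sign of the negative curvature directions.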
For values $(m,d)$ both larger than $2$, the question of computing the shape function as well as the optimal metric is much more difficult, but we have partial answers, in particular for quadratic elements in dimension $3$. As in \S5, we need fundamental results from the theory of invariant polynomials, developed in particular by Hilbert \cite{Hilbert}. In order to apply these
results to our particular setting, we need to introduce a compatibility condition
between the degree $m$ and the dimension $d$.
\begin{definition}
We call the pair of numbers $m\geq 2$ and $d\geq 2$ ``compatible'' if and only if the following holds.
For all $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$ such that there exists a sequence $(\phi_n)_{n\geq 0}$ of $d\times d$ matrices with \emph{complex} coefficients, verifying $\det \phi_n=1$ and $\lim_{n\to\infty} \pi\circ \phi_n = 0$,
there also exists a sequence $\psi_n$ of $d\times d$ matrices with \emph{real} coefficients, verifying $\det \psi_n=1$ and $\lim_{n\to\infty} \pi\circ \psi_n = 0$.
\end{definition}
Following Hilbert \cite{Hilbert}, we say that
a polynomial $Q$ of degree $r$ defined on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$ is
\emph{invariant} if $\mu=\frac{mr} d$ is a positive integer and
if for all $\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$ and all linear changes of coordinates $\phi$,
\begin{equation}
Q(\pi\circ\phi) = (\det \phi)^\mu Q(\pi).
\label{invpolQ}
\end{equation}
This is a generalization of \iref{invarpolR}. We denote by ${\rm \hbox{I\kern-.2em\hbox{I}}}_{m,d}$ the set of invariant
polynomials on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$. It is easy to see that if $\pi\in {\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$
is such that $K_{m,d}(\pi)=0$, then $Q(\pi)=0$ for all
$Q\in {\rm \hbox{I\kern-.2em\hbox{I}}}_{m,d}$. Indeed, as seen in the proof of Proposition
\ref{vanishprop}, if $K_{m,d}(\pi)=0$ then there exists a sequence $\phi_n$ such that
$\det \phi_n=1$ and $\pi\circ\phi_n\to 0$. Therefore \iref{invpolQ}
implies that $Q(\pi)=0$.
The following lemma shows that
the compatibility condition for the pair $(m,d)$
is equivalent to a converse of this property.
\begin{lemma}
\label{lemmacomp}
The pair $(m,d)$ is compatible if and only if for all $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$
$$
K_{m,d}(\pi) = 0 \text{ if and only if } Q(\pi) = 0 \text{ for all } Q\in{\rm \hbox{I\kern-.2em\hbox{I}}}_{m,d}.
$$
\end{lemma}
\proof
We first assume that the pair $(m,d)$ is not compatible. Then there exists a polynomial $\pi_0\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$ and a sequence $\phi_n$, $\det \phi_n=1$, of matrices with \emph{complex} coefficients such that $\pi_0\circ\phi_n \to 0$, but there exists no such sequence with \emph{real} coefficients. This last property shows that $K_{m,d}(\pi_0)>0$. On the other hand, let $Q\in {\rm \hbox{I\kern-.2em\hbox{I}}}_{m,d}$ be an invariant polynomial, and set $\mu = \frac{m\deg Q} d$. The identity
$$
Q(\pi_0\circ\phi) =(\det\phi)^\mu Q(\pi_0)
$$
is valid for all $\phi$ with real coefficients, and is a \emph{polynomial identity} in the coefficients of $\phi$. Therefore it remains valid if $\phi$ has complex coefficients. It follows that $Q(\pi_0) = Q(\pi_0\circ\phi_n)$ for all $n$, and therefore $Q(\pi_0)=0$, which concludes the proof in the case where the pair $(m,d)$ is not compatible.
We now consider a compatible pair $(m,d)$.
Following Hilbert \cite{Hilbert}, we say that a polynomial $\pi\in H_{m,d}$ is a {\it null form} if and only if there exists a sequence of matrices $\phi_n$ with \emph{complex} coefficients such that $\det \phi_n=1$
and $\pi\circ \phi_n \to 0$. We denote by $\NF_{m,d}$ the set of such polynomials.
Since the pair $(m,d)$ is compatible, note that $\pi\in\NF_{m,d}$ if and only if there exists
a sequence $\phi_n$ of matrices with \emph{real} coefficients such that $\det \phi_n=1$ and $\pi\circ\phi_n\to 0$. Hence, we find that
$$
\NF_{m,d}=\{\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}\sep K_{m,d}(\pi) = 0\}.
$$
Denoting by ${\rm \hbox{I\kern-.2em\hbox{I}}}_{m,d}^\C$ the set of invariant
polynomials on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$ with {\it complex} coefficients,
a difficult theorem of \cite{Hilbert} states that
$$
\NF_{m,d} = \{\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}\sep Q(\pi) =0 \text{ for all } Q \in {\rm \hbox{I\kern-.2em\hbox{I}}}_{m,d}^\C\}
$$
It is not difficult to check that if $Q = Q_1+ i Q_2$ where $Q_1$ and $Q_2$ have real coefficients
then \iref{invpolQ} holds for $Q$ if and only if it holds for both $Q_1$ and $Q_2$,
i.e. $Q_1$ and $Q_2$ are also invariant polynomials.
Hence, recalling that ${\rm \hbox{I\kern-.2em\hbox{I}}}_{m,d}$ denotes the set of invariant polynomials on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$ with real coefficients, we have obtained that
$$
\NF_{m,d} = \{\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}\sep Q(\pi) =0 \text{ for all } Q \in {\rm \hbox{I\kern-.2em\hbox{I}}}_{m,d}\}
$$
which concludes the proof.
\hfill $\diamond$\\
\begin{theorem}
\label{ThEquivMd}
If the pair $(m,d)$ is compatible, then there exists a polynomial $\Kpol$ on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$ (we set $r=\deg \Kpol$) and a constant $C>0$ such that for all $\pi \in {\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$
\begin{equation}
\frac 1 C \sqrt[r]{\Kpol(\pi)}\leq K_{m,d}(\pi) \leq C\sqrt[r]{\Kpol(\pi)}.
\label{equivKmd}
\end{equation}
If the pair $(m,d)$ is not compatible, then there does not exist such a polynomial $\Kpol$.
\end{theorem}
\proof
The proof of the non-existence property when the pair $(m,d)$ is not compatible is reported in the appendix.
Assume that the pair $(m,d)$ is compatible. We follow a reasoning very similar to that of \S5 to prove the equivalence \iref{equivKmd}.
We use the notations of Lemma \ref{lemmacomp} and consider the set
$$
\NF_{m,d} = \{\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}\sep K_{m,d}(\pi) = 0\} = \{\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}\sep Q(\pi) =0, \; Q \in {\rm \hbox{I\kern-.2em\hbox{I}}}_{m,d}\}.
$$
The ring of polynomials over a field is known to be Noetherian (Hilbert's basis theorem).
This implies that there exists a finite family $Q_1,\cdots,Q_s\in {\rm \hbox{I\kern-.2em\hbox{I}}}_{m,d}$ of invariant polynomials on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$ such that any invariant polynomial is of the form $\sum P_i Q_i$ where
$P_i$ are polynomials on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$. We therefore obtain
$$
\NF_{m,d} = \{\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}\sep Q_1(\pi) =\cdots =Q_s(\pi) = 0\}.
$$
This is a generalization of \iref{generators2}, though with no explicit bound on $s$.
We now fix such a set of polynomials, set $r:= 2 {\rm lcm}_{1\leq i\leq s} \deg Q_i$, and define
$$
\Kpol = \sum_{i=1}^s Q_i^{\frac r {\deg Q_i}}\;\;{\rm and}\;\;\Keq := \sqrt[r]{\Kpol}.
$$
Clearly $\Kpol$ is an invariant polynomial on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$, and $\NF_{m,d} = \{\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}\sep \Kpol(\pi)=0\}$.
Hence the function $\Keq$ is \emph{continuous} on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$, obeys the invariance property $\Keq(\pi\circ\phi) = |\det\phi| \Keq(\pi)$, and for all $\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$, $\Keq(\pi)=0$ implies $\Kpol(\pi)=0$ and therefore $K_{m,d}(\pi)=0$.
We recognize here the hypotheses of Proposition \ref{propequiv}, except that the dimension $d$ has changed.
Inspection of the proof of Proposition \ref{propequiv} shows that we use only once the fact that $d=2$, when we refer to Proposition \ref{propsemicont} and state that if $(\pi_n) \in{\rm \hbox{I\kern-.2em\hbox{H}}}_m$, $\pi_n\to \pi$ and $K_m(\pi_n)\to 0$, then $K_m(\pi)=0$.
This property also applies to $K_{m,d}$, when the pair $(m,d)$ is compatible. Assume that $(\pi_n)\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{m,d}$, $\pi_n\to \pi$ and that $K_{m,d}(\pi_n)\to 0$. Then there exists a sequence of linear changes of coordinates $\phi_n$, $\det\phi_n=1$, such that $\pi_n\circ\phi_n\to 0$. Therefore
$$
\Kpol(\pi) = \lim_{n\to \infty} \Kpol(\pi_n) = \lim_{n\to\infty} \Kpol(\pi_n\circ \phi_n) = 0
$$
It follows that $\pi\in\NF_{m,d}$, and therefore $K_{m,d}(\pi)=0$.
Since the rest of the proof of Proposition \ref{propequiv} never uses that $d=2$, this concludes the proof of Equivalence \iref{equivKmd}.
\hfill $\diamond$\\
Hence there exists a ``simple'' equivalent of $K_{m,d}$ for all compatible pairs $(m,d)$,
while equivalents of $K_{m,d}$ for incompatible pairs need to be more sophisticated,
or at least different from the root of a polynomial. This theorem leaves open several questions.
The first one is to identify the list of compatible pairs $(m,d)$. It is easily shown that the pairs $(m,2)$, $m\geq 2$, and $(2,d)$, $d\geq 2$ are compatible, but this does not provide any new results since we already derived equivalents of the shape function in these cases. More interestingly, we show in the next corollary that the pair $(3,3)$ is compatible, which corresponds to approximation by quadratic elements in dimension $3$.
There exist two generators $S$ and $T$
of ${\rm \hbox{I\kern-.2em\hbox{I}}}_{3,3}$, whose expressions are given in \cite{Salmon}
and which have respectively degree $4$ and $6$.
\begin{corol}
$\sqrt[6]{|S|^3+T^2}$ is equivalent to $K_{3,3}$ on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{3,3}$.
\end{corol}
\proof
The invariants $S$ and $T$ obey the invariance properties $S(\pi\circ\phi) = (\det\phi)^4 S(\pi)$ and $T(\pi\circ\phi) = (\det\phi)^6 T(\pi)$. We intend to show that if $\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{3,3}$ and $S(\pi) = T(\pi) = 0$ then $K_{3,3}(\pi) = 0$. Let us first admit this property and see how to conclude the proof of this corollary.
According to Lemma \ref{lemmacomp} the pair $(3,3)$ is compatible.
The function $\Keq := \sqrt[6]{|S|^3+T^2}$ is continuous on ${\rm \hbox{I\kern-.2em\hbox{H}}}_{3,3}$, obeys the invariance property
$\Keq(\pi\circ\phi) = |\det\phi|\Keq(\pi)$
and is such that $\Keq(\pi) = 0$ implies $K_{3,3}(\pi) = 0$.
We have seen in the proof of Theorem \ref{ThEquivMd} that these properties imply the desired equivalence of $\Keq$ and $K_{3,3}$.
We now show that $S(\pi) = T(\pi) = 0$ implies $K_{3,3}(\pi) = 0$.
A polynomial $\pi\in{\rm \hbox{I\kern-.2em\hbox{H}}}_{3,3}$ can be of two types. Either it is \emph{reducible}, meaning that there exists $\pi_1\in {\rm \hbox{I\kern-.2em\hbox{H}}}_{1,3}$ (linear) and $\pi_2\in {\rm \hbox{I\kern-.2em\hbox{H}}}_{2,3}$ (quadratic) such that $\pi = \pi_1 \pi_2$, or it is \emph{irreducible}.
In the latter case according to \cite{Hartshorne}, there exists a linear change of coordinates $\phi$ and two reals $a,b$ such that
$$
\pi\circ\phi = y^2 z - (x^3+ 3 a x z^2 + b z^3).
$$
A direct computation from the expressions given in \cite{Salmon} shows that $S(\pi\circ\phi) = a$ and $T(\pi\circ\phi) = -4b$. If $S(\pi)=T(\pi) = 0$ then $S(\pi\circ\phi)=T(\pi\circ\phi) = 0$ and $\pi\circ \phi = y^2 z -x^3$. Therefore for all $\lambda\neq 0$, $\pi\circ \phi(\lambda x,\lambda^2 y, \lambda^{-3} z) = \lambda y^2 z -\lambda ^3 x^3$, which tends to $0$ as $\lambda\to 0$. We easily construct from this point a sequence $\phi_n$, $\det \phi_n = 1$, such that $\pi\circ\phi_n\to 0$. Therefore $K_{3,3}(\pi) = 0$.
If $\pi$ is reducible, then $\pi = \pi_1 \pi_2$ where $\pi_1$ is linear and $\pi_2$ is quadratic. Choosing a linear change of coordinates $\phi$ such that $\pi_1\circ\phi = z$ we obtain
$$
\pi\circ\phi = 3 z (a x^2+ 2 b xy + c y^2)+ z^2 ( u x + v y + w z),
$$
for some constants $a,b,c,u,v,w$.
Again, a direct computation from the expressions given in \cite{Salmon} shows that $S(\pi\circ\phi) = -(ac -b^2)^2$ (and $T(\pi\circ\phi) = 8 (ac -b^2)^3$). Therefore if $S(\pi) = T(\pi)= 0$ then the quadratic function $a x^2+ 2 b xy + c y^2$ of the pair of variables $(x,y)$ is degenerate. Hence there exists a linear change of coordinates $\psi$, altering only the variables $x,y$, and reals $\mu,u',v'$ such that
$$
\pi\circ\phi\circ\psi = \mu z x^2 + z^2 ( u' x + v' y + w z).
$$
It follows that $\pi\circ\phi\circ\psi(x,\lambda^{-1} y,\lambda z)$ tends to $0$ as $\lambda\to 0$. Again, this implies that $K_{3,3}(\pi)=0$, and concludes the proof of this corollary.
\hfill $\diamond$\\
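The scaling computation in the irreducible case can be checked numerically. The following sketch (the names \texttt{pi\_phi} and \texttt{rescale} are ours) evaluates the normal form $\pi\circ\phi = y^2 z - x^3$ at the rescaled arguments and compares with the claimed identity.

```python
def pi_phi(x, y, z):
    # The normal form pi∘phi = y^2 z - x^3 (case a = b = 0).
    return y**2 * z - x**3

def rescale(x, y, z, lam):
    # Determinant of this diagonal change of coordinates:
    # lam * lam^2 * lam^(-3) = 1, so it is admissible.
    return (lam * x, lam**2 * y, lam**(-3) * z)

# Check pi∘phi(lam x, lam^2 y, lam^(-3) z) = lam * y^2 z - lam^3 * x^3,
# which tends to 0 as lam -> 0, so K_{3,3}(pi) = 0.
for lam in (0.5, 0.1, 0.01):
    for (x, y, z) in [(1.0, 2.0, 3.0), (-1.5, 0.7, 2.2)]:
        lhs = pi_phi(*rescale(x, y, z, lam))
        rhs = lam * y**2 * z - lam**3 * x**3
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
```

The same kind of numerical check applies to the reducible case, with the substitution $(x,\lambda^{-1}y,\lambda z)$.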
We could not find any example of an incompatible pair $(m,d)$, which leads us to conjecture that all pairs $(m,d)$ are compatible (hence providing ``simple'' equivalents of $K_{m,d}$ in full generality). Another, even more difficult, problem is to derive a polynomial $\Kpol$ of minimal degree for all compatible pairs $(m,d)$ of interest.
Last but not least, efficient algorithms are needed to compute metrics from which effective triangulations, yielding the optimal estimates, can be built. A possibility is to follow the approach proposed in \cite{C3}, i.e.\ to solve numerically the optimization problem
$$
\inf \{\det H \sep H\in S_d^+ \text{ and } \forall z \in {\rm \hbox{I\kern-.2em\hbox{R}}}^d, \<Hz,z\> \geq |\pi(z)|^{2/m}\},
$$
which amounts to minimizing a degree $d$ polynomial under an infinite set of linear constraints. When $d>2$, this minimization problem is not quadratic, which makes it rather delicate. Furthermore, numerical instabilities similar to those described in Remark \ref{overfitting} can be expected to appear.
\section{Conclusion and Perspectives}
In this paper, we have introduced asymptotic estimates for the
finite element interpolation error measured in $L^p$ when the mesh is
optimally adapted to the interpolated function.
These estimates are asymptotically sharp for functions of two variables, see Theorem \ref{optitheorem}, and precise up to a fixed multiplicative constant in higher dimensions, as described in \S 6. They involve a shape function $K_{m,p}$ (or $K_{m,d,p}$ if $d>2$) which generalizes the determinant appearing in estimates for piecewise linear interpolation \cite{CSX,B,CDHM}. This function can be explicitly computed in several cases, as Theorem \ref{equal23} shows, and has equivalents of a simple form in a number of other cases, see Theorems \ref{thequiv} and \ref{ThEquivMd}.
All our results are stated and proved for sufficiently smooth functions.
One of our future objectives is to extend these results to larger classes of functions,
and in particular to functions exhibiting discontinuities along curves. This means that
we need to give a proper meaning to the nonlinear
quantity $K_{m,p}\left(\frac{d^m f}{m!}\right)$
for non-smooth functions.
This paper also features a constructive algorithm (similar to \cite{B}), described in \S3.2, that produces
triangulations obeying our sharp estimates. However, this algorithm
becomes asymptotically effective only for highly refined triangulations. A more practical
way to produce quasi-optimal triangulations is to adapt them to a metric, see \cite{Bois,Shew,Peyre}.
This approach is discussed in \S 4.2. This raises the question of generating
the appropriate metric from the (approximate) knowledge of the derivatives of the function to be interpolated. We addressed this question in the particular case of piecewise quadratic approximation in two dimensions in Theorems \ref{propEllipseMax} and \ref{family3}.
We plan to integrate this result into the PDE solver FreeFem++ in the near future.
Note that a Mathematica source code is already available on the web \cite{sitejm}.
We would also like to derive appropriate metrics for other settings of
degree $m$ and dimension $d$, although, as we pointed out in
Remark \ref{overfitting}, this might be a rather delicate matter.
We finally remark that in many applications, one seeks error estimates in the Sobolev norms $W^{1,p}$ (or $W^{m,p}$)
rather than in the $L^p$ norms. Finding the optimal triangulation for such norms requires a new error analysis.
For instance, in the survey \cite{Shew2} on piecewise linear approximation,
it is observed that the metric $h_\pi = |d^2f|$ (introduced in Equation \iref{family2}) should be replaced with $h_\pi = (d^2f)^2$ for best adaptation in the $H^1$ norm. In other words, the principal axes of the positive definite matrix $h_\pi$ remain the same, but its conditioning is squared.
| {
"timestamp": "2011-01-05T02:00:08",
"yymm": "1101",
"arxiv_id": "1101.0612",
"language": "en",
"url": "https://arxiv.org/abs/1101.0612",
"abstract": "Given a function f defined on a bidimensional bounded domain and a positive integer N, we study the properties of the triangulation that minimizes the distance between f and its interpolation on the associated finite element space, over all triangulations of at most N elements. The error is studied in the Lp norm and we consider Lagrange finite elements of arbitrary polynomial degree m-1. We establish sharp asymptotic error estimates as N tends to infinity when the optimal anisotropic triangulation is used, recovering the earlier results on piecewise linear interpolation, an improving the results on higher degree interpolation. These estimates involve invariant polynomials applied to the m-th order derivatives of f. In addition, our analysis also provides with practical strategies for designing meshes such that the interpolation error satisfies the optimal estimate up to a fixed multiplicative constant. We partially extend our results to higher dimensions for finite elements on simplicial partitions of a domain of arbitrary dimension.Key words : anisotropic finite elements, adaptive meshes, interpolation, nonlinear approximation.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Optimal Meshes for Finite Elements of Arbitrary Order",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9901401467331061,
"lm_q2_score": 0.8289388146603365,
"lm_q1q2_score": 0.8207655995805526
} |
% https://arxiv.org/abs/2207.10512
\title{The shape of $x^2\bmod n$}
\begin{abstract}
We examine the graphs generated by the map $x\mapsto x^2\bmod n$ for various $n$, present some results on the structure of these graphs, and compute some very cool examples.
\end{abstract}
\section{Overview}
For any $n$, we consider the map
\begin{equation*}
f_n(x) := x^2\bmod n.
\end{equation*}
From this map, we can generate a (directed) graph $\gr n$ whose vertices are the set $\{0,1,\dots, n-1\}$, with an edge from $x$ to $f_n(x)$ for each $x$. We show a few examples of such graphs later in the paper --- one particularly intriguing graph is shown in Figure~\ref{fig:1024attractor}, which appears as a subgraph of $\gr{10455}$. Figure~\ref{fig:817}, which shows the entire graph $\gr{817}$, is also excellent. As we can see from examples, sometimes the graph $\gr n$ is a forest (a collection of trees), sometimes it has loops, etc. In this paper we will seek to characterize all such graphs; not surprisingly, the shape of these graphs depends on the number-theoretic properties of the integer $n$.
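Generating such a graph is a one-liner; the following sketch (the name \texttt{squaring\_graph\_edges} is ours) produces the edge list of $\gr n$.

```python
def squaring_graph_edges(n):
    """Edges of G(n): an arrow x -> x^2 mod n for each vertex x in {0,...,n-1}."""
    return [(x, (x * x) % n) for x in range(n)]

edges = squaring_graph_edges(7)
# 0->0, 1->1, 2->4, 3->2, 4->2, 5->4, 6->1
assert edges == [(0, 0), (1, 1), (2, 4), (3, 2), (4, 2), (5, 4), (6, 1)]
```

Feeding such an edge list to any graph-drawing tool reproduces pictures like the figures referenced below.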
We give a quick summary of the results below:
\begin{enumerate}
\item (Theorem~\ref{thm:kronecker}) We show that if $n=a\cdot b$ where $\gcd(a,b)=1$, then the graph $\gr n$ is the Kronecker product of the graphs $\gr a$ and $\gr b$ (we define the Kronecker product in Definition~\ref{def:kronecker} below). Using induction, we can therefore determine the graph $\gr n$ in terms of its prime decomposition, and all that remains is to describe $\gr{p^k}$ for all primes $p$ and $k\ge 1$. We break this up into several cases.
\item (Theorem~\ref{thm:units}) Let $p$ be an odd prime. Let us denote by $\units{p^k}$ the set of numbers that are relatively prime to $p^k$; these are the multiplicative units modulo $p^k$. We use heavily the fact that the units under multiplication modulo $p^k$ form a cyclic group, and this allows us to characterize $\units{p^k}$ (we note here that the technique for $\units{p^k}$ follows closely the results of~\cite{vasiga2004iteration} where they studied $\gr p$ for $p$ prime);
\item (Theorem~\ref{thm:nilpotent}) Let $p$ be an odd prime. This theorem will fully characterize $\nilp{p^k}$, the component of $\gr{p^k}$ that corresponds to the nilpotent elements.
\item (Section~\ref{sec:2^k}) Finally, we work on the graph for $\gr{2^k}$, describing both $\units{2^k}$ and $\nilp{2^k}$ separately.
\end{enumerate}
From this, we can characterize any $\gr n$, and we then use this characterization to compute several quantities of interest and work out many sweet examples.
A few notes: this manuscript does not contain any new theoretical results and, in fact, only uses results from elementary number theory\footnote{Anyone familiar with the author's body of research would know that any number theory appearing here is perforce extremely elementary}. If the reader is so inclined, they can think of this paper as an extended pedagogical example. In fact, the author recently taught UIUC's MATH 347 (our undergrad math major ``intro to proofs'' course) and found that the students responded pretty well to visualizing the graphs of various functions that appear in a math course at this level (functions modulo some integer being a key family of examples). The author also made some videos with visualizations for the square function considered here~\cite{vid}, and got intrigued by the patterns that appear. This led to the current manuscript.
\section{The main theory}
\setcounter{subsection}{-1}
\subsection{Summary of results}
As noted above, there are four main results, and the sections below address each of these in order. After we have stated the theoretical results in this section, this is sufficient to describe all $\gr n$ --- with the caveat that in many cases there is still some work to do to compute everything. We work out many concrete examples in Section~\ref{sec:computations}.
\subsection{Kronecker products and the Classical Remainder Theorem}\label{sec:Kronecker}
In this section we show how the graphs $\gr a$ and $\gr b$ ``combine'' to form the graph of $\gr{ab}$, when $a,b$ are relatively prime.
\begin{definition}\label{def:kronecker}
Let $G_1 = (V_1,E_1)$ and $G_2 = (V_2,E_2)$ be two (directed) graphs. Let us define $G_1\oblong G_2$, the {\bf Kronecker product} of $G_1$ and $G_2$, as follows: the vertex set of $G_1\oblong G_2$ is the Cartesian product $V_1\times V_2$, and we say that
\begin{equation*}
(v_1,w_1)\to (v_2,w_2)\in G_1\oblong G_2\mbox{ iff } v_1\to v_2\in G_1\mbox{ and } w_1\to w_2\in G_2.
\end{equation*}
In particular, an edge exists between two pairs iff there is an edge between their first components in $G_1$ and an edge between their second components in $G_2$.
\end{definition}
\begin{remark}
One motivation for calling this the Kronecker product is that the adjacency matrix of $G_1\oblong G_2$ is the Kronecker matrix product of the adjacency matrices of $G_1$ and $G_2$. This is also called the Cartesian product of graphs by many authors.
We also note that it is straightforward to show that the Kronecker graph product is associative, and so we can define $n$-ary Kronecker products unambiguously as well.
\end{remark}
\begin{thm}\label{thm:kronecker}If $\gcd(a,b)=1$, then
\begin{equation*}
\gr{ab} \cong \gr a \oblong \gr b.
\end{equation*}
Furthermore, if we write
\begin{equation*}
n = p_1^{\alpha_1} p_2^{\alpha_2} \cdots p_j^{\alpha_j}
\end{equation*}
as the prime factorization of $n$, then
\begin{equation*}
\gr n \cong \bigbox_{i=1}^j \gr{p_i^{\alpha_i}},
\end{equation*}
namely: the graph for $n$ is just the Kronecker product of the individual graphs for each of the $p_i^{\alpha_i}$.\end{thm}
Perhaps surprisingly, the proof of Theorem~\ref{thm:kronecker} is more or less equivalent to the Classical Remainder Theorem (see Lemma~\ref{lem:crt} below). The basic idea goes as follows: if we consider the map $x\mapsto x^2\bmod {ab}$, then we can naturally map this to a pair where we first consider the operation modulo $a$, and then consider the operation modulo $b$. (In fact, we can do this for any $a,b$, not just those that are relatively prime.) However, if $a,b$ are relatively prime, then the CRT tells us that we can invert this process, and this is what gives us the full isomorphism.
\begin{lem}[Classical Remainder Theorem (CRT)]\label{lem:crt}~\cite[\S 7.6]{dummit1991abstract}
Given a set of pairwise coprime numbers $n_1, n_2, \dots, n_j$, for any integers $a_1,\dots, a_j$, the system
\begin{equation*}
x \equiv a_1\bmod n_1,\quad x \equiv a_2\bmod n_2,\quad\cdots, \quad x \equiv a_j\bmod n_j
\end{equation*}
has a solution. Moreover, any two such solutions are congruent modulo $n_1\times n_2\times \cdots \times n_j$.
\end{lem}
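The CRT is constructive; the following hypothetical helper (the iterative reconstruction is standard, and not taken from this paper) computes the solution $x$ modulo $n_1 n_2\cdots n_j$.

```python
def crt(residues, moduli):
    """Solve x = a_i (mod n_i) for pairwise coprime moduli n_i;
    returns the unique solution x modulo the product of the moduli."""
    x, N = 0, 1
    for a, n in zip(residues, moduli):
        # Solve x + N*t = a (mod n) for t, using the inverse of N mod n
        # (pow with exponent -1 requires Python 3.8+).
        t = ((a - x) * pow(N, -1, n)) % n
        x += N * t
        N *= n
    return x % N

x = crt([2, 3, 2], [3, 5, 7])
assert x == 23
assert x % 3 == 2 and x % 5 == 3 and x % 7 == 2
```

Uniqueness modulo the product is exactly the second claim of the lemma.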
\begin{remark}\label{rem:CRT}
An equivalent statement of the CRT is that the map
\begin{align*}
\zmod{(n_1\times n_2\times \cdots \times n_j)} &\to \zmod{n_1}\times \zmod{n_2}\times\cdots\times \zmod{n_j},\\
x\bmod{(n_1\times n_2\times \cdots \times n_j)} &\mapsto (x\bmod n_1, x\bmod n_2, \dots, x\bmod n_j),
\end{align*}
is a ring isomorphism, which is why it respects the squaring operation.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:kronecker}]
The first claim basically boils down to the interpretation of Remark~\ref{rem:CRT}. Let us consider the map from the vertex set of $\gr{ab}$ to the vertex set of $\gr a\oblong \gr b$ given by $x\bmod(ab) \mapsto (x\bmod a, x\bmod b)$. By the CRT, this is a bijection of the vertex sets. Now, assume that there is an edge $x\to y$ in $\gr{ab}$ --- this will be true iff $x^2=y\bmod{(ab)}$. But then $x^2=y\bmod a$ and $x^2=y\bmod b$, and therefore there is an edge $(x\bmod a, x\bmod b) \to (y\bmod a, y\bmod b)$ in $\gr a\oblong \gr b$ as well. Conversely, if there is an edge $(x\bmod a, x\bmod b) \to (y\bmod a, y\bmod b)$ in $\gr a\oblong \gr b$, then by CRT we have $x^2=y\bmod{(ab)}$, and there is an edge $x\to y$ in $\gr{ab}$.
Finally, the second claim follows directly from the first using induction.
\end{proof}
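Theorem~\ref{thm:kronecker} is easy to test computationally. The sketch below (function names are ours) checks, for $a=3$ and $b=5$, that the CRT vertex bijection carries the edge set of $\gr{ab}$ exactly onto the edge set of $\gr a\oblong\gr b$.

```python
def sq_edges(n):
    """Edge set of G(n): an arrow x -> x^2 mod n for each x."""
    return {(x, (x * x) % n) for x in range(n)}

def kron_edges(Ea, Eb):
    """Kronecker product of edge sets: an edge between two pairs iff there is
    an edge between their first components and one between their second components."""
    return {((v1, w1), (v2, w2)) for (v1, v2) in Ea for (w1, w2) in Eb}

a, b = 3, 5
crt_bijection = lambda x: (x % a, x % b)      # the vertex map given by the CRT
lifted = {(crt_bijection(x), crt_bijection(y)) for (x, y) in sq_edges(a * b)}
assert lifted == kron_edges(sq_edges(a), sq_edges(b))
```

The same check passes for any coprime pair $(a,b)$, while it fails for, say, $(2,4)$, where the CRT map is no longer a bijection.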
\subsection{The graph of units $\units{p^k}$ when $p$ is an odd prime}\label{sec:units}
In this section, we determine how to compute the graph containing the ``units'' modulo $p^k$ when $p$ is an odd prime. We start with a few definitions and state the main result, and then break down how we prove it.
\begin{defn}
The set of {\bf multiplicative units} modulo $n$ (or, more simply, the {\bf units} modulo $n$) are those numbers in the set $\{0,1,\dots, n-1\}$ that are relatively prime to $n$. This set is typically denoted $(\zmod{n})^\times$ and the function $\phi(n) = \av{(\zmod{n})^\times}$ is called Euler's phi function (or totient function). In this paper we will denote by $\units{p^k}$ the graph induced by $x\mapsto x^2\bmod{p^k}$ on the set of units.
\end{defn}
\begin{defn}
Let $\gcd(d,k)=1$. We define the {\bf multiplicative order of $k$, modulo $d$}, denoted $\mathsf{ord}_d(k)$, as the smallest positive integer $e$ such that $k^e\equiv 1\bmod d$, i.e.\ $d\,|\,(k^e-1)$.
\end{defn}
\begin{defn}\label{def:flower-cycle}
Here we define several special graphs that appear all over the place in $\units{p^k}$:
\begin{itemize}
\item The {\bf (directed) cycle graph} of length $k$, denoted $C_k$, is a directed graph with vertex set $\{1,\dots, k\}$ and arrows $v_i\to v_{i+1}$, indices taken modulo $k$ (so $v_k\to v_1$). Note that we also allow the cycle of length 2, given by the graph $1\to 2, 2\to 1$, and the directed cycle of length $1$, which is just a single vertex labelled 1 with a loop to itself.
\item
A {\bf tree} is a connected graph with no cycles. A {\bf rooted tree} is a tree with a distinguished vertex, called the root. A {\bf grounded tree} is a rooted tree with two properties: all edges flow towards the root (i.e. for any vertex in the tree, there is a path from that vertex to the root), and the root has a loop to itself.
\item The {\bf flower cycle of length $\alpha$ and flower type $T$}, denoted $C_\alpha(T)$, is the directed graph defined in the following manner. Take $\alpha$ copies of the grounded tree $T$, denoted $T_1,\dots, T_\alpha$. Remove the loop at the root from each of these, and then replace it with an edge from the root of tree $i$ to the root of tree $i+1\bmod \alpha$.
\item
The {\bf regular grounded tree of width $w>1$ and $\ell>0$ layers} (denoted $T_{w}^{\ell}$) is a tree with $w^\ell$ vertices, described as follows: start with a root vertex and a loop to itself; this forms layer 0. In layer 1, there are $w-1$ vertices, each with an edge to the root. For any layer $0<k< \ell$ and any vertex $v$ in layer $k$, there are $w$ vertices in layer $k+1$ with an edge to $v$. The vertices in layer $\ell$ are leaves, each mapping to a vertex in layer $\ell-1$.
\end{itemize}
\end{defn}
\begin{remark}
A few remarks:
\begin{itemize}\item Another way to describe the flower cycle is that we start with a cycle, and then we glue $\alpha$ copies of the tree, one at each vertex in the cycle.
\item Note that the in-degree of every vertex in $T_w^\ell$ is either $w$ or 0.
\item The graph $C_\alpha(T_w^\theta)$ appears often in the sequel (esp. with $w=2$), and we give a colloquial description here: start with a cycle $C_\alpha$ of length $\alpha$, and then to each vertex in the cycle, ``glue'' a tree of type $T_w^\theta$. In this picture, the nodes in the cycle itself are the periodic points, and the trees coming off each node are the preperiodic points that map into that particular orbit. See Figure~\ref{fig:flower-examples} for a few examples.
\end{itemize}
\end{remark}
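The counting implicit in these definitions can be verified with a short script (the vertex labelling and the name \texttt{grounded\_tree} are ours). It builds $T_w^\ell$ layer by layer and checks that it has $w^\ell$ vertices and that every in-degree is $w$ or $0$.

```python
def grounded_tree(w, l):
    """Edges of the regular grounded tree T_w^l, vertices labelled 0,1,...;
    the root is 0 and carries a loop; every other vertex points to its parent."""
    edges = [(0, 0)]                  # layer 0: the root and its loop
    layer, nxt = [0], 1
    for k in range(l):
        new_layer = []
        for v in layer:
            # w-1 children of the root (its loop supplies the w-th in-edge),
            # and w children of every deeper vertex
            for _ in range((w - 1) if k == 0 else w):
                edges.append((nxt, v))
                new_layer.append(nxt)
                nxt += 1
        layer = new_layer
    return edges, nxt                 # nxt is the total number of vertices

for (w, l) in [(2, 2), (2, 3), (3, 2)]:
    edges, n = grounded_tree(w, l)
    assert n == w ** l                          # T_w^l has w^l vertices
    indeg = [0] * n
    for (_, v) in edges:
        indeg[v] += 1
    assert all(d in (0, w) for d in indeg)      # in-degree is w or 0
```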
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.95\textwidth]{TwoSampleGraphs.pdf}
\end{center}
\caption{Examples of $C_6(T_2^2)$ and $C_3(T_2^3)$ as defined in Definition~\ref{def:flower-cycle}. Recall that the subscript gives the number of terms in the cycle (or periodic orbit) of the graph, and the tree tells us what is ``hanging off'' of each of those periodic vertices. In one case we have the grounded tree of width two and depth two, and in the other width two and depth three. Trees of width two show up ubiquitously below --- in particular they are the only trees we see in $\units{p^k}$ when $p$ is an odd prime.}
\label{fig:flower-examples}
\end{figure}
\begin{thm}[Structure of $\units{p^k}$]\label{thm:units}
Let $p$ be an odd prime, and write $m=\phi(p^k)$. Write $m = 2^\theta\mu$ where $\mu$ is odd. (Note that $\phi(p^k)$ is always even, so $\theta\ge 1$.) For each divisor $d$ of $\mu$, take $\phi(d)/\ordt d$ disjoint copies of $C_{\ordt d}(T^\theta_2)$, and $\units{p^k}$ is isomorphic to the disjoint union of these cycles over the divisors $d$.
\end{thm}
\begin{remark}\label{rem:2or0}
Note that this implies that every vertex in $\units{p^k}$ has an in-degree of two or zero (note that the trees attached to the cycles are always of the form $T_2^\theta$).
\end{remark}
To describe the result a bit more colloquially: we first compute $m = \phi(p^k)$, and pull out all possible powers of $2$, until we obtain the odd number $\mu$. From there, we enumerate all of the divisors $d$ of $\mu$, and these $d$ determine all of the periodic orbits of $\units{p^k}$. Then the number of powers of $2$ that we pulled out gives us the full structure of the pre-periodic points attached to these periodic orbits.
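Theorem~\ref{thm:units} can be sanity-checked for small prime powers. The sketch below (our naming) verifies Remark~\ref{rem:2or0} directly, and checks that the number of periodic units equals $\mu$, the odd part of $\phi(p^k)$, as the disjoint-union decomposition predicts.

```python
from math import gcd

def check_units_structure(p, k):
    n = p ** k
    units = [x for x in range(n) if gcd(x, n) == 1]
    f = lambda x: (x * x) % n
    # In-degree of every unit is 2 or 0 (the trees are all of type T_2^theta).
    indeg = {u: 0 for u in units}
    for u in units:
        indeg[f(u)] += 1
    assert all(d in (0, 2) for d in indeg.values())
    # phi(p^k) = 2^theta * mu with mu odd; the cycles carry mu periodic points.
    m = n - n // p                      # phi(p^k)
    mu = m
    while mu % 2 == 0:
        mu //= 2
    # Iterate f until only the periodic points (the cycle vertices) survive.
    periodic = set(units)
    for _ in range(2 * k + 20):         # more than enough for these small cases
        periodic = {f(x) for x in periodic}
    assert len(periodic) == mu

for (p, k) in [(3, 3), (5, 2), (7, 1)]:
    check_units_structure(p, k)
```

For instance, modulo $27$ we have $\phi(27)=18=2\cdot 9$, and the check confirms exactly $9$ periodic units.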
Onto the proof: we start with the following result that was proven by Gauss in~\cite{gauss1986english}:
\begin{prop}
If $n=p^k$, where $p$ is an odd prime, then $(\zmod{p^k})^\times$ is a cyclic group of order $\phi(p^k) = p^k-p^{k-1}$.
\end{prop}
The fact that this group is cyclic allows us to more or less completely characterize the graph structure of $\units{p^k}$. We follow the approach of~\cite{vasiga2004iteration} who completely characterized $\gr p$ for $p$ prime in the same way (in this section the novelty is that we are extending these ideas to $p^k$, but since $(\zmod{p^k})^\times$ is cyclic the same idea goes through).
We first give a definition and a lemma:
\begin{defn}
We define $\add{m}$ as the graph whose vertices are the set $\{0,1,\dots, m-1\}$ and edges given by $x\mapsto 2x\bmod m$.
\end{defn}
\begin{lem}\label{lem:cyclic}
Let $G$ be any cyclic group of order $m$ (where here we denote the group operation as multiplication). Call one of its generators $g$. We can define a function on $G$ by
\begin{equation*}
g^\alpha \mapsto g^{2\alpha},\quad \alpha = 0,1,\dots, m-1.
\end{equation*}
This function is conjugate to the map $(\Z/m\Z, x\mapsto 2x\bmod m)$, and so has the same dynamical structure (periodic orbits, fixed points, etc.) and its graph is isomorphic to $\add m$.
\end{lem}
\begin{proof}
Let us define the maps $\alpha\colon \Z/m\Z \to \Z/m\Z$ with $x\mapsto 2x\bmod m$, $\gamma\colon G\to G$ with $\gamma(g^a) = g^{2a}$, and $\kappa\colon\Z/m\Z\to G$ with $\kappa(a) = g^a$. Since $g$ generates $G$, the map $\kappa$ is invertible. We see that $\gamma\circ \kappa = \kappa\circ \alpha$, so $\alpha$ and $\gamma$ are conjugate. In particular, note that $\alpha(x)=x$ iff $\gamma(\kappa(x)) = \kappa(x)$ and moreover that $\alpha^k (x) = x$ iff $\gamma^k(\kappa(x)) = \kappa(x)$, so that there is a one-to-one correspondence between the fixed/periodic points of $\alpha$ and $\gamma$. In particular this implies that the two functions have isomorphic graphs.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:units}]
By Lemma~\ref{lem:cyclic}, the graph $\units{p^k}$ is isomorphic to $\add{\phi(p^k)}$ and the result follows.
\end{proof}
Basically we just need to understand $\add m$. Under addition, $\zmod m$ is a cyclic group of order $m$ with generator $1$. We stress here that we are now moving to {\em additive notation} for simplicity, so here the identity is $0$ and the generator is $1$; when we convert this back to the multiplicative group $\units{p^k}$ we will have an identity of $1$ and a generator of $g$.
Some standard results from group theory~\cite[\S 2.3]{dummit1991abstract} tell us that:
\begin{enumerate}
\item For any $d$ dividing $m$, the set of all elements with (additive) order $d$ is given by $$S_d = \{j (m/d): j\mbox{ such that }\gcd(j,d)=1\};$$
\item The set $S_d$ has $\phi(d)$ elements;
\item Since $\sum_{d|m}\phi(d) = m$, these sets exhaust $\Z/m\Z$.
\end{enumerate}
Let us first consider the case with $m$ odd:
\begin{prop}
Let $m$ be odd, and consider the map $f(x)=2x$ on $\Z/m\Z$. Then this map has a unique fixed point at $x=0$, and all other elements are periodic under this map. The map $f$ leaves each of the $S_d$ invariant, and $S_d$ decomposes into a disjoint union of periodic orbits, each with period $\mathsf{ord}_d(2)$. (Note that since $\av{S_d} = \phi(d)$, there will be exactly $\phi(d)/\mathsf{ord}_d(2)$ such disjoint orbits; note also that $\ordt d$ divides $\phi(d)$ by Euler's theorem.)
\end{prop}
\begin{proof}
We can see by inspection that no point other than $x=0$ is fixed by the map. Now note that since $\gcd(2,m)=1$, the map $x\mapsto 2x$ is invertible, and is thus effectively a permutation. Therefore all points are periodic under repeated application of this map.
To prove the second claim in the proposition, note that $S_d$ is the set of points in $\Z/m\Z$ with additive order $d$, i.e. $x\in S_d$ iff $dx\equiv0\bmod m$ and if $0<d'<d$, $d'x\not\equiv 0\pmod m$. This is invariant under the map $x\mapsto 2x$ since $m$ is odd: if $d'(2x)\equiv 0\bmod m$ then $d'x\equiv 0 \bmod m$, etc.
Suppose $x\in S_d$ satisfies $f^k(x) = 2^kx\equiv x\bmod m$. Since $x\in S_d$, we can write $x=j(m/d)$ with $\gcd(j,d)=1$. This means that we have
\begin{equation*}
(2^k-1) j\frac m d \equiv 0\bmod m,
\end{equation*}
which is equivalent to saying that $(2^k-1)j/d\in \mathbb{Z}$. Since $\gcd(j,d)=1$, this is true iff $d|(2^k-1)$.
The period of $x$ is the smallest $k$ for which $2^kx=x$, which is the smallest $k$ for which $d|(2^k-1)$, and this is $\mathsf{ord}_d(2)$ by definition.
\end{proof}
We are now able to understand $\add m$ for more general $m$.
\begin{prop}
Let $m = 2^\theta \mu$ where $\mu$ is odd, and again consider the map $f(x)=2x\bmod m$. Then the graph $\add m$ consists of flower cycles, specifically: replace every cycle of length $\alpha$ in $\add\mu$ with a copy of $C_\alpha(T_2^\theta)$.
\end{prop}
\begin{proof}
We first consider the basin of attraction of $0$, which consists of the multiples of $\mu$. (Note that $f^\theta(x)=2^\theta x\equiv 0\bmod m$ for some $\theta$ iff $x$ is a multiple of $\mu$.) In fact, we can see that if $x= 2^q \nu \mu$ with $q<\theta$ and $\nu$ odd, then $x$ reaches 0 in exactly $\theta-q$ iterations, each passing through a higher power of $2$. For example, all of the odd multiples of $\mu$ will be in the top layer (there will be exactly $2^{\theta-1}$ of these); the odd multiples of $2\mu$ will be in the next layer (exactly $2^{\theta-2}$ of these), and so on. Note that every vertex in this tree has in-degree 2 or 0: nothing maps to the odd numbers in the top layer, but each subsequent number has two preimages: given an $x$ such that $2y\equiv x\bmod m$, we also have $2(y+m/2) \equiv x\bmod m.$ So in general, we have a tree of type $T_2^\theta$ with $0$ at the root, the point $m/2 = 2^{\theta-1}\mu$ the single point in layer $1$, the points $\{1,3\}\times 2^{\theta-2}\mu$ in layer two (each of which map to $2^{\theta-1}\mu$), the points $\{1,3,5,7\}\times 2^{\theta-3}\mu$ in layer three, etc. We will call this the ``tree rooted at 0'' below.
We now claim that for any periodic orbit that appears in $\add \mu$, there is a corresponding flower orbit in $\add m$. Consider any period $k$ orbit in $\zmod \mu$ with elements $x_1,\dots, x_k$. By definition, $x_{i+1}\equiv 2x_i\bmod \mu$ and $x_1 \equiv 2x_k\bmod \mu$. Now let $y_i = 2^\theta x_i$ and note that this gives a periodic orbit modulo $m$: if $x_{i+1}-2x_i$ is a multiple of $\mu$, then $y_{i+1}-2y_i = 2^\theta(x_{i+1}-2x_i)$ is a multiple of $m=2^\theta \mu$. Now pick any of the $y_i$. We know that there is at least one $z\in \Z/m\Z$ that maps to $y_i$ (since it is part of a periodic orbit). Therefore there is another, $z_1 = z\pm m/2$, that maps to $y_i$, and this corresponds to the $m/2$ vertex in the tree rooted at 0. If $z$ is odd, then the tree terminates, but if it is even, there are two numbers that map to $z_1$, namely $z_1/2$ and $z_1/2\pm m/2$. As we move up the branches of this tree, each layer removes one power of two, and each vertex has exactly two vertices mapping to it, until we reach the $\theta$'th layer. Therefore there will be a $T_2^\theta$ attached to $y_i$ --- and the same argument applies to any of the elements in any of the periodic orbits, and we are done.
\end{proof}
\begin{cor}\label{cor:2^k}
If $m=2^k$ for some $k\ge 1$, then the graph $\add{m}$ is a tree with $k$ layers. The $k$th layer consists of the $2^{k-1}$ odd numbers, the $(k-1)$st layer consists of the $2^{k-2}$ numbers that are twice an odd number, etc. In particular, the graph $\add{2^k}$ is a $T_2^k$.
\end{cor}
One direct corollary of this is that if $n$ is a Fermat prime, then the graph $\units{n}$ is a tree.
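Both the corollary and the layer description can be confirmed directly; the sketch below (ours, illustrative only) groups the vertices of $\add{2^k}$ by the number of doublings needed to reach $0$.

```python
# Grouping Z/2^k Z by the number of doublings needed to reach 0 (names ours).
def doubling_layers(k):
    m = 2 ** k
    layers = {}
    for x in range(m):
        y, t = x, 0
        while y != 0:
            y = 2 * y % m
            t += 1
        layers.setdefault(t, set()).add(x)
    return layers
```

Layer $j\ge1$ should consist of the $2^{j-1}$ odd multiples of $2^{k-j}$, with the odd numbers forming the top layer.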
\subsection{The graph of nilpotent elements $\nilp{p^k}$ when $p$ is an odd prime}\label{sec:nilpotent}
Now we can attack the case of those elements that are not units when $p$ is an odd prime.
\begin{defn}
We say that $x$ is {\bf nilpotent} modulo $n$ if there exists some power $a$ such that $x^a \equiv 0 \bmod n$.
\end{defn}
It is easy to see that $x$ is nilpotent modulo $p^k$ iff $p|x$. Also note that if $x^a\equiv 0\bmod n$ then $x^b\equiv 0\bmod n$ for any $b>a$, and in particular it is true for $b$ being some power of two, so we have:
\begin{lem}
$x$ is nilpotent modulo $p^k$ $\iff$ $p|x$ $\iff$ $f_{p^k}^\alpha(x) = 0$ for some $\alpha>0$. In particular, this implies that $\nilp{p^k}$ is a (connected) tree.
\end{lem}
Now that we have established that $\nilp{p^k}$ is a tree, we want to determine its structure. It can actually have an interesting and complex structure, especially when $k$ is large. Our main result is the following.
\begin{defn}\label{def:trees}\label{def:tree}
Let $p$ be an odd prime. Fix $\widehat{x} \in \units p = \{1,2,\dots,p-1\}$, and for each $\ell$ we define the tree $\tree p \widehat{x} \ell$ recursively:
\begin{enumerate}
\item If $\ell$ is odd: then $\tree p \widehat{x}\ell$ is just a single node with no edges;
\item If $\ell$ is even and $\widehat{x}$ has no preimages modulo $p$: then $\tree p \widehat{x}\ell$ is just a single node with no edges;
\item If $\ell$ is even and $\widehat{x}$ has two preimages modulo $p$: denote these preimages by $z_1$ and $z_2$, then $\tree p \widehat{x}\ell$ is a rooted tree with $2 p^{\ell/2}$ trees mapping into this root:
\begin{equation*}
\mbox{ $p^{\ell/2}$ copies of $\tree p {z_1}{\ell/2}$ and $p^{\ell/2}$ copies of $\tree p {z_2}{\ell/2}$.}
\end{equation*}
\end{enumerate}
(Note by Remark~\ref{rem:2or0} that this exhausts all possible cases.)
\end{defn}
\begin{thm}\label{thm:nilpotent}
The graph $\nilp{p^k}$ is a rooted tree with $0$ as the root and $p^{k-1}$ vertices. The in-neighborhood of 0 is constructed as follows: for any $k/2\le \ell < k$ and for any $1 \le y \le p^{k-\ell}$ with $\gcd(y,p)=1$, attach a tree of type $\tree{p}{\widehat{y}}{\ell}$, where $\widehat{y}=y\bmod p$, to 0. For any $x$ (where $x\bmod p = 0$ and $x\bmod{p^k}\neq0$), write $x$ as the (unique) decomposition $x = \widetilde{x} p^\ell$ with $\gcd(\widetilde{x},p)=1$, and the in-neighborhood of $x$ is the tree $\tree{p}{\widehat{x}}{\ell}$, where $\widehat{x}=\widetilde{x}\bmod p$.
\end{thm}
\begin{remark}
The descriptions given in Theorem~\ref{thm:nilpotent} and Definition~\ref{def:trees} are recursive and not explicit --- although it is relatively easy to use these descriptions to compute $\nilp{p^k}$ when $k$ is not too large. We give an equivalent, but more explicit, description below.
\end{remark}
Theorem~\ref{thm:nilpotent} basically follows from the following lemma:
\begin{lem}\label{lem:q2}
Consider $n=p^k$, let $x=\widetilde{x} p^\ell$ (where $\ell < k$ and $p\nmid\widetilde{x}$) and consider the equation
\begin{equation}\label{eq:q2}
q^2 = x\bmod{p^k}.
\end{equation}
Write $\widehat{x}$ for the element of $\{1,2,\dots, p-1\}$ that is congruent to $\widetilde{x}$ modulo $p$. If $\ell$ is odd, or if $\ell$ is even and $\widehat{x}$ is a leaf in $\units p$, then~\eqref{eq:q2} has no solutions. If $\ell$ is even and there are two numbers $z_1,z_2$ such that $z_i^2 \equiv \widehat{x}\bmod p$, then~\eqref{eq:q2} has $2p^{\ell/2}$ solutions. Half of these ($p^{\ell/2}$ of them) are solutions of the form $z p^{\ell/2}$ where $z\equiv z_1\bmod p$ and the other half are solutions of the form $z p^{\ell/2}$ where $z\equiv z_2\bmod p$.
\end{lem}
\begin{proof}
Let us first consider the case where $\ell = 2$ as a warm-up; the general case is similar but has more details.
\fbox{$\ell=2$.} Let us write $x = \widetilde{x} p^2$, $y=\widetilde{y} p$, and assume that $y^2 = x\bmod {p^k}$. This tells us that
\begin{align*}
\widetilde{y}^2 p^2 &\equiv \widetilde{x} p^2 \bmod {p^k},\\
\widetilde{y}^2 &\equiv \widetilde{x} \bmod{p^{k-2}}.
\end{align*}
Note that we have $1\le \widetilde{x} < p^{k-2}$ and $1\le \widetilde{y} < p^{k-1}$ --- this is the range of prefactors we can put in front of each of those powers.
Let us write $\widehat{x} = \widetilde{x}\pmod p$ and $\widehat{y} = \widetilde{y}\pmod p$. If we consider the last equation modulo $p$, this implies $\widehat{y}^2\equiv \widehat{x}\bmod p$. This implies that if $\widehat{x}$ has no preimages in $\gr p$ then there is no solution. Let us assume that $\widehat{x}$ has two preimages in $\gr p$, and pick one of them, say $z_1$. If we can show that there are exactly $p$ solutions to the system of congruences
\begin{equation*}
\widetilde{y}^2 \equiv \widetilde{x} \bmod{p^{k-2}},\quad \widetilde{y}\equiv z_1\bmod p,
\end{equation*}
then we have established the result of the lemma for $\ell=2$. Let us define two sets:
\begin{equation*}
S_x := \{\widetilde{x}: 1 \le \widetilde{x} < p^{k-2} \land \widetilde{x} \equiv \widehat{x} \bmod p\},\quad S_y := \{\widetilde{y}: 1 \le \widetilde{y} < p^{k-1} \land \widetilde{y} \equiv \widehat{y} \bmod p\}.
\end{equation*}
We can see that $\av{S_x} = p^{k-3}$ and $\av{S_y} = p^{k-2}$, and of course $pS_y$ maps into $p^2S_x$ under the squaring map. Thus each point in $S_x$ is hit an average of $p$ times, but we want to show that it is exactly $p$ times. Let us parameterize $S_y$ by $\widetilde{y} = z_1 + \beta p$, with $0\le \beta < p^{k-2}$. We claim that as we run through these $\beta$, the values of $S_x$ each get hit exactly $p$ times. First note that if we replace $\beta$ with $\beta+p^{k-3}$, then we get
\begin{equation*}
(z_1 + (\beta +p^{k-3})p)^2 \equiv (z_1+\beta p)^2\bmod p^{k-2},
\end{equation*}
so we only need consider $0\le \beta < p^{k-3}$. We claim that these $\beta$ give distinct values modulo $p^{k-2}$, and since there are $p^{k-3}$ of them, they must cover the set $S_x$ --- and then shifting the arguments by $p^{k-3}$ gives the same values, and this gives us $p$ copies. All we need to show now is that every element of $S_x$ gets hit once as $\beta$ runs from $0$ to $p^{k-3}-1$. We compute:
\begin{align*}
(z_1+\beta p)^2 &\equiv (z_1+\beta' p)^2&\pmod {p^{k-2}}\\
z_1^2+2z_1\beta p + \beta^2p^2 &\equiv z_1^2+2z_1\beta' p + (\beta')^2 p^2&\pmod {p^{k-2}}\\
2z_1\beta + \beta^2p &\equiv 2z_1\beta' + (\beta')^2 p&\pmod {p^{k-3}},\\
2z_1(\beta-\beta') + p(\beta^2-(\beta')^2)&\equiv 0&\pmod {p^{k-3}},
\end{align*}
so we now need to show that the map $2z_1\xi + p\xi^2$ has only one root on $\zmod{p^{k-3}}$. If we assume that $2z_1\xi+p\xi^2\equiv 0\bmod{p^{k-3}}$, then by reducing modulo $p$, this gives $2z_1\xi \equiv 0\bmod p$, and thus (since $p$ is odd and $\gcd(z_1,p)=1$) $\xi\equiv 0\bmod p$. From this we have $p\xi^2\equiv 0\bmod {p^3}$, thus the same for $2z_1\xi$, thus for $\xi$, etc. From this we get that only $\xi=0$ can solve this equation, and we are done.
\fbox{General $\ell$.}
The argument is similar: write $x = \widetilde{x} p^\ell$ and $y=\widetilde{y} p^{\ell/2}$, and assume that $y^2 \equiv x\bmod {p^k}$. This tells us that
\begin{align*}
\widetilde{y}^2 p^\ell &\equiv \widetilde{x} p^{\ell} \bmod {p^k},\\
\widetilde{y}^2 &\equiv \widetilde{x} \bmod{p^{k-\ell}}.
\end{align*}
Note that we now have $1\le \widetilde{x} < p^{k-\ell}$ and $1\le \widetilde{y} < p^{k-\ell/2}$. Again assume that $\widehat{x}$ has preimages in $\gr p$ and call one $z_1$. We now show that there are $p^{\ell/2}$ solutions to the system
\begin{equation*}
\widetilde{y}^2 \equiv \widetilde{x} \bmod{p^{k-\ell}},\quad \widetilde{y}\equiv z_1\bmod p.
\end{equation*}
Here we parameterize $\widetilde{y} = z_1 + \beta p$ with $0\le \beta < p^{k-\ell/2-1}$. We obtain the same repeating argument when adding a multiple of $p^{k-\ell-1}$, and so we only need to consider $0\le \beta < p^{k-\ell-1}$. But this will repeat $p^{(k-\ell/2-1)-(k-\ell-1)} = p^{\ell/2}$ times, so again we only need to show that every target gets hit in the cycle; the rest of the argument then follows as in the $\ell=2$ case.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:nilpotent}]
The lemma does most of the work for us here. Note that it is implicit in the lemma that for any $x$, if we write $x = \widetilde{x} p^\ell$, then the basin of attraction of $x$ depends only on $\widehat{x} = \widetilde{x} \bmod p$. We can go through the cases: if $\ell$ is odd, then $x$ has no preimages modulo $p^k$ by the lemma, and so it is a leaf in $\nilp{p^k}$ --- and similarly for $\ell$ even but $\widehat{x}$ having no preimages modulo $p$.
Finally, for the neighborhood of 0: note that if $\ell \ge k/2$, then $x^2\equiv 0 \bmod p^k$. Thus any such point maps to 0 under squaring, and the preimage of each of these points is a tree of type $\tree p \widehat{x} \ell$.
\end{proof}
Again it is worth pointing out that Lemma~\ref{lem:q2} tells us that basically all that matters is the mod $p$ value of the prefactor. We can now give a non-recursive way to compute the trees defined in Definition~\ref{def:trees}; we state the algorithm exactly but do not prove it.
\newcommand{\U}[3]{Z_{#1}({#2},{#3})}
\begin{defn}\label{def:unfurl}
For any prime $p$ and any $x\in\{0,\dots, p-1\}$, we define the {\bf unfurled preimage of $x$ of depth $\zeta$}, $\U p x \zeta$, as a directed tree.
The vertices of $\U p x \zeta$ are all sequences of the form $(a_1,\dots, a_\eta)$ with $\eta \le \zeta$ such that $a_1 = x$ and $a_{i-1} = a_i^2 \bmod p$.
The edges of $\U p x \zeta$ are
\begin{equation*}
(a_1, a_2,\dots, a_{\eta-1}, a_\eta)\to(a_1,a_2,\dots, a_{\eta-1})
\end{equation*}
\end{defn}
\begin{remark}
It is always true that $\U p x \zeta$ forms a tree, since each edge moves from a sequence to a strictly shorter sequence and thus the graph cannot contain a cycle. (The construction above is sometimes referred to as the ``universal cover of the graph $\units{p}$ rooted at $x$''.)
\end{remark}
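The recursive construction of $\U p x \zeta$ translates directly into code; in the sketch below (ours, illustrative only) we index levels so that the root $(x)$ is level $0$ and we unfurl $\zeta$ further levels.

```python
# Building the unfurled preimage level by level (names ours).
def square_roots(a, p):
    # preimages of a under squaring mod p
    return [z for z in range(p) if z * z % p == a]

def unfurl(p, x, zeta):
    # levels of U_p(x, zeta): each vertex is a tuple (a_1, ..., a_eta)
    # with a_1 = x and a_{i-1} = a_i^2 mod p
    levels = [[(x,)]]
    for _ in range(zeta):
        levels.append([seq + (z,)
                       for seq in levels[-1]
                       for z in square_roots(seq[-1], p)])
    return levels
```

For $p\equiv 3\bmod 4$ (so $\theta=1$) and $x=1$, each level past the root should contain exactly two vertices, as in the example below.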
\begin{prop}
$\U p x \zeta$ is always a tree in which every vertex has in-degree two or zero. If $x$ is a transient point $\bmod\ p,$ then $\U p x \zeta$ is a regular $2$-tree. If $p$ is prime, and $p-1=2^\theta \mu$ with $\mu$ odd, and $\zeta \le \theta$, then $\U p x \zeta$ is a regular $2$-tree. Otherwise, it is not a regular $2$-tree.
\end{prop}
\begin{exa}
Assume that $p\equiv 3 \bmod 4$, so that $p-1 = 2\mu$. Let us first compute $\U p 1 \zeta$. Note that $1$ has two preimages: $1$ and $-1$, so $\U p 1 1$ is a regular $2$-tree. Now, if we want to go to depth $\zeta=2$, we see that $-1$ has no preimage, but $1$ again has two, so the $-1$ node terminates, but the $1$ node bifurcates. Again, going to depth $\zeta=3$, we again terminate at $-1$ and bifurcate at $1$. Depending on the depth to which we want to unfurl this tree, we can continue as long as we'd like. Also note that we are labeling each node by the last term in the sequence defining the node, and not the whole sequence itself (it is useful to think of just this last value, but of course the sequence is important in the definition to give unique descriptors for each node). See Figure~\ref{fig:tree}.
Now, consider any periodic $x$. We claim that $\U p x \zeta$ will be isomorphic to $\U p 1 \zeta$. To see this, note that since $x$ is periodic, it has two preimages: one is the periodic $y_1$ with $y_1\to x$, and the other is the preperiodic $z_1$ with $z_1\to x$, so we get a regular $2$-tree at depth 1. To go to depth $\zeta=2$, we have that $y_1$ is periodic so it again bifurcates into the periodic $y_2$ and preperiodic $z_2$, whereas $z_1$ is a leaf in $\gr p$, so it does not.
\end{exa}
\begin{figure}
\begin{center}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm,
thick,main node/.style={circle,fill=blue!20,draw,font=\sffamily\bfseries},transform shape, scale=0.75]
\node[main node] (1) {$1$};
\node[main node] (m1) [above of=1] {$-1$};
\node[main node] (11) [right of=1] {$1$};
\node[main node] (m11) [above of=11] {$-1$};
\node[main node] (111) [ right of=11] {$1$};
\node[main node] (m111) [above of=111] {$-1$};
\node[main node] (1111) [ right of=111] {$1$};
\node[main node] (m1111) [above of=1111] {$-1$};
\node(dots)[right of=1111]{$\cdots$};
\path[every node/.style={font=\sffamily\small}]
(m1) edge (1)
(m11) edge (11)
(m111) edge (111)
(m1111) edge (1111)
(11) edge (1)
(111) edge (11)
(1111) edge (111)
(dots) edge (1111)
;
\end{tikzpicture}
\end{center}
\caption{Unfurling $\U p 1 \zeta$ in the case where $\theta=1$.}
\label{fig:tree}
\end{figure}
\begin{defn}[Layered trees and expansions]\label{def:expansion}
We say that $T$ is a {\bf layered tree} with $L$ layers if $V(T)$ can be written as the disjoint union of $V_1, V_2,\dots, V_L$ and every edge in the graph goes from layer $k$ to layer $k-1$. (This is basically the directed version of $L$-partite.) Note that a vertex in a layer-$L$ tree is a leaf iff it is in layer $L$.
Given an $L$-layer tree $T$, and the integer vector ${\bf a} = (a_1,a_2,a_3,\dots,a_{L-1})$, we define $T^{({\bf a})},$ {\bf $T$ expanded by ${\bf a}$}, to be the following $L$-layer tree:
\begin{enumerate}
\item Let $v\in V(T)$ be a vertex in the $\ell$-th layer. Then the $a_1\times a_2\times\cdots\times a_{\ell-1}$ vectors of the form
$$(v,(\alpha_1,\alpha_2,\dots, \alpha_{\ell-1})), \quad 1\le \alpha_i\le a_i$$
are all in the $\ell$-th layer of $V(T^{({\bf a})})$. (A vertex in the first layer receives the empty vector, i.e.\ it is copied exactly once.)
\item We define the edges of $T^{({\bf a})}$ by
\begin{equation*}
(v,(\alpha_1,\alpha_2,\dots, \alpha_{\ell-1})) \to (w,(\alpha_1,\dots, \alpha_{\ell-2})) \quad \iff \quad v\to w\mbox{ in } T.
\end{equation*}
\end{enumerate}
\end{defn}
\begin{lem}\label{lem:layers}
Let $n=p^k$, and let $x=\widetilde{x} p^\ell$ for $\ell < k$, and $\widehat{x} = \widetilde{x} \bmod p$. Write $\ell = 2^\zeta \lambda$, where $\lambda$ is odd. Define $\U p \widehat{x} \zeta$ as in Definition~\ref{def:unfurl} and expand this graph by $(p^{\ell/2}, p^{\ell/4}, p^{\ell/8},\dots, p^{2\lambda}, p^\lambda)$. Then this graph is isomorphic to the basin of attraction of $x$ in $\nilp{p^k}$.
\end{lem}
\subsection{The graph $\gr{2^k}$}\label{sec:2^k}
We have dealt with all powers of odd primes above, but still need to deal with $n=2^k$. This will be a bit different than what has gone before, both for the units and the nilpotents, but it will be similar enough that we can reuse some earlier ideas here. First, a preliminary result that we can prove pretty simply.
\begin{prop}
Let $n=2^k$. Then $\gr{2^k}$ has exactly two components --- one corresponding to $\units{2^k}$ and the other to $\nilp{2^k}$, and they are both trees. (Equivalently, the graph $\gr{2^k}$ has exactly two fixed points, $0$ and $1$, and no other periodic points.)
\end{prop}
\begin{proof}
The proof of the claim for $\nilp{2^k}$ is similar to that for odd primes. The nilpotent elements modulo $2^k$ are exactly the even numbers, and when these are raised to a high enough power, they will be 0 modulo $2^k$. Now we consider the units, which are the odd numbers. Let $x\equiv 1\bmod 2^\ell$ with $1\le \ell < k$, say $x = 1 + \alpha 2^\ell$. Then
\begin{equation*}
x^2 = 1 + 2\alpha 2^\ell + \alpha^2 2^{2\ell} = 1+\alpha 2^{\ell+1} + \alpha^2 2^{2\ell} \equiv 1\pmod {2^{\ell+1}}.
\end{equation*}
Since the power of $2$ has increased, we see that any odd number will be 1 modulo $2^k$ in at most $k-1$ steps, and we are done.
\end{proof}
From this we just need to characterize each of the two trees. We consider $\units{2^k}$ first. One complication here is that the units modulo $2^k$ do not form a cyclic group under multiplication, but we get something that is close enough to make it work:
\begin{lem}\label{lem:3}\cite{gauss1986english}
For $k\ge 3$, the powers of 3 modulo $2^k$ form a cyclic group of size $2^{k-2}$ under multiplication. If we write $S_k$ for the set
\begin{equation*}
\{3^q\bmod 2^k: 0\le q < 2^{k-2}\},
\end{equation*}
then the units modulo $2^k$ (i.e., the odd numbers) can be written as the disjoint union $S_k \cup -S_k$.
\end{lem}
Using Lemma~\ref{lem:3}, we can characterize $\units{2^k}$:
\begin{prop}\label{prop:units2k}
The graph of $\units{2^k}$ is a tree constructed in the following manner: start with the tree $T_2^{k-2}$, and for every node that is not a leaf, add two leaves directly mapping into it.
\end{prop}
\begin{proof}
Since $S_k$ forms a cyclic subgroup of the units of size $2^{k-2}$, the graph of these numbers can be realized as $\add{2^{k-2}}$, which by Corollary~\ref{cor:2^k} is $T_2^{k-2}$. Note that all of the other units are negatives of powers of $3$, and under squaring the negative sign cancels. So, for example, if $x\in\units{2^k}$ is such that $z_1^2\equiv z_2^2\equiv x\bmod{2^k}$, then we also have $(-z_1)^2\equiv (-z_2)^2\equiv x\bmod{2^k}$, so that $x$ now has in-degree four. Moreover, if there is any leaf in $\add{2^{k-2}}$, i.e.\ a number in $S_k$ that is not a square modulo $2^k$, then there can be no number in $-S_k$ that squares to it, since the negative of that number would be in $S_k$. Therefore if a number is a leaf in the original tree, then it remains a leaf when we add in the $-S_k$ terms as well.
\end{proof}
See Figure~\ref{fig:units128} for an example with $n=2^7$. We can also describe the graph $\units{2^k}$ intuitively as follows: Start with the node at $1$ and add a loop; add three inputs to it: $2^{k}-1$ and $2^{k-1}-1$, which will be leaves, and the node $2^{k-1}+1$, which will itself be the root of a tree; to $2^{k-1}+1$, we add four inputs: the leaves $2^{k-2}-1$ and $3\cdot 2^{k-2}-1$, and two subtree roots: $2^{k-2}+1$ and $3\cdot 2^{k-2}+1$, etc.
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.95\textwidth]{units128.pdf}
\end{center}
\caption{The graph $\units{128}$. Note that it is a graph of depth $5$ (because the cyclic subgraph generated by powers of $3$ is of order $32$), but instead of being a $T_2^5$, we add two leaves to each non-leaf node. So for example, without the additional leaves, $65 = 2^6+1$ would have two preimages: $33=2^5+1$ and $97 = 3\cdot 2^5+1$, which are themselves roots of trees. But we also have the two leaves $31 = 2^5-1$ and $95 = 3\cdot 2^5-1$ as well. (And similarly for all of the other nodes.)}
\label{fig:units128}
\end{figure}
Now onto the nilpotent elements. This case is a bit more complex than the case where $p$ is odd. The main problem is that there is no statement analogous to Lemma~\ref{lem:q2}: the main idea there was that the basin of attraction of any point of the form $x=\widetilde{x} p^\ell$ depends only on the value of $\widehat{x} = \widetilde{x} \bmod p$, whenever $p$ is odd. This is clearly not true when $p=2$, since it would mean that the basin of attraction of every number of the form $\mbox{(odd)}\cdot 2^\ell$ is the same, but some odd numbers have square roots modulo $2^k$ and some do not (q.v.~the discussion of units above). In particular, when computing the basin of attraction of a point $x = \widetilde{x} p^\ell$ for odd $p$, we did not need to pay attention to the ``ambient space'' $p^k$, but this breaks when $p=2$. So while one can compute this tree for any given $n$, we do not expect as clean a result as Theorem~\ref{thm:nilpotent} or Lemma~\ref{lem:layers}. We do have a partial result, which we state without proof.
\begin{prop}
Consider $n=2^k$, and let $x=\widetilde{x} 2^\ell$, where $\widetilde{x}\neq 1$. Let us define $\theta_1,\theta_2$ by the equations
\begin{equation*}
\widetilde{x} - 1 = 2^{\theta_1} \cdot q_1,\quad \ell = 2^{\theta_2}\cdot q_2,
\end{equation*}
where $q_1,q_2$ are odd. Define $\theta = \min(\theta_1,\theta_2)$. Define $T_{2}^\theta$ as in Definition~\ref{def:tree}, add on the extra nodes as we did in Proposition~\ref{prop:units2k}, and expand it as in Definition~\ref{def:expansion}, where we expand the first layer by $2^{\ell/2}$, the second layer by $2^{\ell/4}$, etc. Then the basin of attraction of $x$ is isomorphic to this expanded graph.
\end{prop}
\section{Specific Computations}\label{sec:computations}
\subsection{Computing the Kronecker products in practice}
As we have seen above, the type of graph structures we see for primes, or powers of primes, come in certain forms --- and the graphs for composite $n$ come from Kronecker products of these. We have two main results in this section: the first is that one can compute the Kronecker product ``component by component'', and the second is a list of the typical graph products we will see in practice.
\begin{defn}
If $G$ and $H$ are two graphs with disjoint vertex sets, then we define the {\bf disjoint union of $G$ and $H$}, denoted $G\oplus H$, as the graph with $V(G\oplus H) = V(G) \cup V(H)$ and $E(G\oplus H) = E(G)\cup E(H)$.
\end{defn}
Now, let us assume that there is a path from $(x_1,y_1)$ to $(x_2,y_2)$ in $G\oblong H$. If we ignore the $y$'s, this gives a path from $x_1$ to $x_2$ in $G$, and if we ignore the $x$'s, this gives a path from $y_1$ to $y_2$ in $H$. Therefore if $(x_1,y_1)$ and $(x_2,y_2)$ are in the same component of $G\oblong H$, then $x_1$ and $x_2$ must be in the same component in $G$, and $y_1$ and $y_2$ must be in the same component in $H$. This gives the following:
\begin{prop}\label{prop:components}
Let $G$ and $H$ be graphs, and let
\begin{equation*}
G = \bigoplus_{i\in I} G_i,\quad\quad
H = \bigoplus_{j\in J} H_j
\end{equation*}
be the decomposition of each graph into its connected components. Then we have
\begin{equation*}
G\oblong H = \bigoplus_{i\in I, j\in J} (G_i\oblong H_j),
\end{equation*}
where the individual terms in the union are themselves disjoint. More specifically, if there is a path from $(x_1,y_1)$ to $(x_2,y_2)$ in $G\oblong H$, then there must be a path from $x_1$ to $x_2$ in $G$ and one from $y_1$ to $y_2$ in $H$.
\end{prop}
Basically this means that we can compute the Kronecker product ``component by component'' --- more specifically, when computing the Kronecker product we can just look at each component individually and not worry about the impact of other components. This is good, because we can compartmentalize. One caveat, however, as we will see below: it is possible for the product of two connected graphs to be disconnected, so some of the $G_i\oblong H_j$ terms in the expansion above might not themselves be single components but can split into multiple components. In any case, we do see that the number of connected components of $G\oblong H$ is bounded below by the product of the numbers of connected components of $G$ and $H$, respectively. Ok, so what do these component-by-component products actually look like?
As we learned from Theorems~\ref{thm:units} and~\ref{thm:nilpotent}, every component of $\gr{p^k}$ is either a flower cycle, or a looped tree (recall Definition~\ref{def:flower-cycle}). This, with Proposition~\ref{prop:components}, tells us that the natural question is what happens when we take the Kronecker product of two graphs which are each of one of these two types. We categorize all of the cases we might expect to see later in the following Proposition.
\begin{prop}\label{prop:kron2}
We have the following results:
\begin{enumerate}
\item (Two looped trees) If $T_1$ and $T_2$ are looped trees, then so is $T_1\oblong T_2$ --- specifically, this implies that $T_1\oblong T_2$ is connected, has a single root vertex, that root vertex loops to itself, and all vertices flow toward that root.
\item (Degrees) Assume that $T_1$ and $T_2$ are looped trees with the property that every vertex in $T_i$ either has in-degree $w_i$ or $0$. Then every vertex in $T_1\oblong T_2$ has in-degree $w_1\times w_2$ or $0$.
\item (Specific grounded trees) For any widths $w_1,w_2$ and depth $\theta$, we have
\begin{equation*}
T_{w_1}^\theta\oblong T_{w_2}^\theta \cong T_{w_1\cdot w_2}^\theta.
\end{equation*}
(If we take two regular grounded trees {\bf of the same depth}, then we get a regular grounded tree of that depth, just wider.)
\item (Another description of flower cycles)
\begin{equation*}
C_\alpha\oblong T \cong C_\alpha(T).
\end{equation*}
\item (One looped tree and one flower cycle) \begin{equation*}
C_\alpha(T_1)\oblong T_2 \cong C_\alpha(T_1\oblong T_2).
\end{equation*}
\item (Two flower cycles) Let $G = C_\alpha(T_1)$ and $H = C_\beta(T_2)$. Let $\gamma = \gcd(\alpha,\beta)$ and $\lambda = \lcm(\alpha, \beta)$. Then $G\oblong H$ is $\gamma$ disjoint copies of the graph $C_\lambda(T_1\oblong T_2)$.
\end{enumerate}
\end{prop}
Note the last case where two connected graphs have a disconnected product!
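Case (6) is easy to see in an example; the sketch below (ours, illustrative only) walks the product map on two pure cycles (ignoring the attached trees) and counts components.

```python
# Components of the product of two pure cycles (names ours). The product
# map advances both coordinates: (i, j) -> (i+1 mod alpha, j+1 mod beta).
from math import gcd

def cycle_product_components(alpha, beta):
    seen, comps = set(), []
    for i in range(alpha):
        for j in range(beta):
            if (i, j) in seen:
                continue
            size, v = 0, (i, j)
            while v not in seen:
                seen.add(v)
                size += 1
                v = ((v[0] + 1) % alpha, (v[1] + 1) % beta)
            comps.append(size)
    return comps
```

As predicted, $C_4\oblong C_6$ splits into $\gcd(4,6)=2$ copies of $C_{12}$, while $C_5\oblong C_7$ stays connected.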
\begin{proof}
\begin{enumerate}
\item Consider the vector $(0,0)$ in $V(T_1)\times V(T_2)$, where each $0$ corresponds to the root in that tree. Then note that $(0,0)\to(0,0)$ by definition, and we have this loop. More generally, if we take any $(x,y)\in V(T_1)\times V(T_2)$, then there is a finite path from $x$ to $0$ in $T_1$ and a finite path from $y$ to $0$ in $T_2$, so there is a finite path (whose length is the maximum of the two aforementioned paths) from $(x,y)$ to $(0,0)$.
\item In fact we prove something a bit more general: if $x\in V(G)$ has an in-degree of $k$ and $y\in V(H)$ has an in-degree of $l$, then $(x,y)\in V(G\oblong H)$ has an in-degree of $k\times l$. To see this, note that if $z_i \to x$ for $i=1,\dots, k$ and $w_j\to y$ for $j=1,\dots, l$, then $(z_i, w_j)\to (x,y)$ for all $i,j$. The claim follows directly.
\item Let us consider the case where we have two trees $G_1 = T_{w_1}^\theta$ and $G_2 = T_{w_2}^\theta$. See Lemma~\ref{lem:param} below for a parameterization of each of these trees. Then, the root of $G_1\oblong G_2$ is $(0,0)$ where $0$ is the root of each $G_i$. Note that it has a loop to itself. Let us define the first layer of $G_1\oblong G_2$ to be any pair $(k_1,k_2)$ with $0\le k_1\le w_1-1$ and $0\le k_2\le w_2-1$, but not both zero. Note that all of these map to $(0,0)$ in one step, and there are exactly $w_1w_2-1$ of these. Now we define the $\ell$th layer of $G_1\oblong G_2$: given a pair $(v_1,v_2)$ where $v_1$ is in level $\ell_1$ of $G_1$ and $v_2$ is in level $\ell_2$ of $G_2$, define $\ell = \max(\ell_1, \ell_2)$. Note that any such vertex will map down to layer $\ell-1$. Moreover, if $\ell<\theta$, then each of these vertices has exactly $w_1w_2$ preimages, because we can append $w_1$ numbers to the end of $v_1$ and $w_2$ numbers to the end of $v_2$. If $\ell=\theta$, then these elements are all leaves, and we are done.
\item This is actually a bit trickier than it looks, since these two descriptions are isomorphic but not equal. Recall the definition of $C_\alpha(T)$: we define the vertices of this graph to be $(v,\beta)$ where $v\in V(T)$ and $1\le \beta\le \alpha$. The edges in $ C_\alpha(T)$ are defined as follows:
\begin{equation*}
(v,\beta)\to (w,\gamma) \iff (\beta=\gamma \land v\to w) \lor (v=w=r \land \gamma = \beta+1).
\end{equation*}
However, in $C_\alpha\oblong T$ (actually $T\oblong C_\alpha$ but who's counting) we have
\begin{equation*}
(v',\beta') \to (w',\gamma') \iff v' \to w' \land \gamma' = \beta' +1.
\end{equation*}
Now let us define a map $\psi\colon C_\alpha(T) \to C_\alpha \oblong T$ as $\psi(v, \beta) = (v, \beta-\mathcal L(v))$ (with the second coordinate taken mod $\alpha$), where $\mathcal L (v)$ is the level of the vertex $v$ (in this case, the number of edges it takes to get from $v$ to $r$). Now, $\psi(v,\beta)\to \psi(w,\gamma)$ in $C_\alpha \oblong T$ iff $(v,\beta-\mathcal L(v))\to (w,\gamma-\mathcal L(w))$ in $C_\alpha \oblong T$, which is true iff $v\to w$ in $T$ and $\gamma-\mathcal L(w) = \beta-\mathcal L(v) + 1$. Now we claim that this is equivalent to $(v,\beta)\to(w,\gamma)$ in $C_\alpha(T)$. Breaking up by cases: if $v\neq w$, then $\mathcal L(w) = \mathcal L(v) - 1$, which would imply $\gamma = \beta$; and if $v=w$, then $v=w=r$, and $\mathcal L(v) = \mathcal L(w) = 0$, which would imply $\gamma =\beta+1$. These are exactly the two kinds of edges of $C_\alpha(T)$.
\item This follows from the previous result twice and associativity, since
\begin{equation*}
C_\alpha(T_1) \oblong T_2 = (C_\alpha \oblong T_1)\oblong T_2 = C_\alpha \oblong (T_1\oblong T_2) = C_\alpha(T_1\oblong T_2).
\end{equation*}
\item Let us first note that if we consider two bare cycles, we get a similar formula: Let $G_1 = C_\alpha$ and $G_2 = C_\beta$. Note that we can think of these cycles as the map $x\mapsto x+1$ on $\zmod \alpha$ and $\zmod\beta$ respectively, and here $G_1\oblong G_2$ will be the graph of the map $(x_1,x_2) \mapsto (x_1+1,x_2+1)$ on $\zmod\alpha\times\zmod \beta$. It is easy enough to see that the element $(1,1)$ has order $\lambda$, and so every point is on a periodic cycle of length $\lambda$. Since there are $\alpha\cdot\beta$ total vertices, and $\alpha\cdot\beta/\lambda = \gamma$, there are exactly $\gamma$ distinct periodic cycles.
To get the full case: we have
\begin{equation*}
C_\alpha(T_1)\oblong C_\beta(T_2) = (C_\alpha\oblong T_1)\oblong (C_\beta \oblong T_2) = (C_\alpha\oblong C_\beta)\oblong(T_1\oblong T_2),
\end{equation*}
and we have computed above that $C_\alpha\oblong C_\beta$ is $\gamma$ copies of $C_\lambda$.
\end{enumerate}
\end{proof}
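The two-bare-cycles computation is easy to sanity-check numerically. Below is a minimal illustrative sketch (ours, not from the paper) that builds the product of the rotation maps on $\zmod\alpha\times\zmod\beta$ and walks out the resulting cycles; by the proposition we expect $\gcd(\alpha,\beta)$ cycles, each of length $\lcm(\alpha,\beta)$.

```python
from math import gcd, lcm

def cycle_product(alpha, beta):
    """Cycle lengths of the product of the rotations x -> x+1 on Z/alpha and Z/beta."""
    step = {(x, y): ((x + 1) % alpha, (y + 1) % beta)
            for x in range(alpha) for y in range(beta)}
    seen, lengths = set(), []
    for v in step:
        if v in seen:
            continue
        # every vertex of this graph lies on a cycle, so just walk until we return
        length, w = 1, step[v]
        seen.add(v)
        while w != v:
            seen.add(w)
            w = step[w]
            length += 1
        lengths.append(length)
    return lengths

print(cycle_product(4, 6))   # [12, 12]: gcd(4,6)=2 cycles of length lcm(4,6)=12
```

The same check works for any pair: `cycle_product(6, 6)` gives six $6$-cycles, matching the $\alpha=\beta=\lambda=\gamma$ case used later in the paper.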
\begin{lem}\label{lem:param}
We can parameterize $T_w^\theta$ as follows: let $V_0 = \{0\}$, $V_1$ be the set $\{1,\dots, w-1\}$ and let
\begin{equation*}
V_\ell = \{(v_1,v_2,\dots, v_\ell): 1 \le v_1 \le w-1, \forall k > 1, 1 \le v_k \le w\}.
\end{equation*}
(These correspond to the ``layers'' in the graph, so any vertex in $V_\ell$ is ``in layer $\ell$''.) Let $V = \cup_{\ell=0}^\theta V_\ell$. Now define edges as follows: for all $k\in V_1$, add an edge $k\to 0$, and for all $v\in V_\ell$ with $2\le \ell \le \theta$, add the edge
\begin{equation*}
(v_1,v_2,\dots, v_{\ell-1}, v_\ell) \to (v_1,v_2,\dots, v_{\ell-1}),
\end{equation*}
i.e. throw out the last component. Then this graph is isomorphic to $T_w^\theta$.
\end{lem}
\subsection{The same unit graph can show up in many places}
A natural question is how similar the graphs can be for different $n$. However, note that if we have two primes $p,q$ with $p\neq q$, then $\phi(p)\neq\phi(q)$, so they do not have isomorphic sets of units. But it is possible that we can have two primes $p\neq q$ with $\phi(p^k) = \phi(q)$, and thus $\units{p^k} \cong \units{q}$. For example, we have that
\begin{equation*}
\phi(27) = \phi(19) = 18,
\end{equation*}
so $\units{27}\cong\units{19}\cong\add{18}$. We now work both cases out. Since $18 = 2\cdot 9$, we have $\theta=1$ (so the flowers on the cycle are just one node sticking out) and the divisors are $1,3,9$. Since $\phi(3) = \ordt 3 = 2$, there is one orbit of period $2$, and since $\phi(9) = \ordt 9 = 6$, there is one orbit of period $6$. This determines the graph of units completely. For $n=19$, of course, we have that $\nilp{19}$ is just $0$ with a loop to itself, but $\nilp{27}$ is more complicated. The set of points that map to zero modulo $27 = 3^3$ are the two points $9=3^2$ and $18=2\cdot 3^2$. If we recall the graph $\gr 3$, we see that $1$ has a preimage of $2$, and $2$ has no preimage. Therefore $18$ is a leaf in $\nilp{27}$, whereas $9$ is a tree of the form $\tree 3{1}2$, which will be a regular tree of width $6$ and depth $1$ (there are three trees of type $\tree 3 1 1$ and three of type $\tree 3 2 1$, which are themselves single nodes by definition, so are leaves in $\nilp{27}$). See Figure~\ref{fig:18}.
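This isomorphism on the unit parts is small enough to verify by brute force. The following sketch (illustrative Python, not part of the paper) computes the multiset of periodic-cycle lengths of $x\mapsto x^2 \bmod n$ restricted to the units, and confirms that $27$ and $19$ both give one fixed point, one orbit of period $2$, and one orbit of period $6$.

```python
from math import gcd

def unit_cycle_lengths(n):
    """Sorted cycle lengths of x -> x^2 (mod n) on the units of Z/n."""
    units = [x for x in range(1, n) if gcd(x, n) == 1]
    # push every unit onto its eventual cycle by squaring enough times
    on_cycle = set()
    for x in units:
        y = x
        for _ in range(2 * n.bit_length()):
            y = y * y % n
        on_cycle.add(y)
    # walk each distinct cycle once and record its length
    seen, lengths = set(), []
    for x in on_cycle:
        if x in seen:
            continue
        length, y = 1, x * x % n
        seen.add(x)
        while y != x:
            seen.add(y)
            y = y * y % n
            length += 1
        lengths.append(length)
    return sorted(lengths)

print(unit_cycle_lengths(27), unit_cycle_lengths(19))   # [1, 2, 6] [1, 2, 6]
```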
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.95\textwidth]{G27G19.pdf}
\end{center}
\caption{The graphs $\gr{27}$ and $\gr{19}$. Note that they are isomorphic on the units (the graphs are the same, but of course the values at the vertices are different), with only a different nilpotent tree.}
\label{fig:18}
\end{figure}
\subsection{Example where we really go to town on the Kronecker products}
As we proved in Proposition~\ref{prop:kron2}, all of the components that show up in $\gr n$ are themselves Kronecker products of components that show up in the graphs of the prime-power factors of $n$. So, let's say, for example, we want to find an $n$ whose graph contains a copy of the graph
\begin{equation*}
T_2^1 \oblong T_2^2 \oblong T_2^3 \oblong T_2^4.
\end{equation*}
One way to obtain this is to find primes whose graphs contain such trees, and then multiply these together. We know that $\gr p$ for $p$ prime will contain a $T_2^\theta$ iff $2^\theta$ divides $p-1$ but $2^{\theta+1}$ does not. For $\theta=1$, we can pick $p_1=3$, and for $\theta=2$ we can pick $p_2=5$. For $\theta=3$ we cannot pick $2^3+1 = 9$, since it's not prime, and we don't want $p-1$ to be divisible by $16$, as we'll get too many powers of $2$; so we can pick $p_3=41$ (where $41-1 = 8\cdot 5$), and finally we can pick $p_4= 17$. Of course, $p_1$, $p_2$, and $p_4$ are relatively small since they are Fermat primes, but we need to stretch a bit for $p_3$.
Now we know that the basin of attraction of $1$ in $\gr{p_i}$ is exactly $T_2^i$, for $i=1,2,3,4$. We have $p_1p_2p_3p_4 = 10455$, and therefore the basin of attraction of $1$ in $\gr{10455}$ is exactly $T_2^1 \oblong T_2^2 \oblong T_2^3 \oblong T_2^4$, shown in Figure~\ref{fig:1024attractor}. (In fact, to generate this picture we just computed the subgraph of $\gr{10455}$ containing $1$ directly.) It's a pretty wild graph: it has a radius of $4$ due to the $T_2^4$ term, but the smaller terms give it some interesting heterogeneity (e.g. each intermediate node has leaves hanging off of it, etc.).
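The basin-of-attraction claim can be confirmed directly. This sketch (our own illustrative check, not from the paper) builds the preimage lists of the squaring map mod $10455$ and walks backwards from $1$; the product tree should have $2^{1+2+3+4} = 1024$ vertices and depth $4$.

```python
from collections import defaultdict, deque

n = 3 * 5 * 41 * 17   # = 10455
pre = defaultdict(list)
for x in range(n):
    pre[x * x % n].append(x)

# walk the squaring map backwards from 1, recording distance to 1
depth = {1: 0}
queue = deque([1])
while queue:
    y = queue.popleft()
    for x in pre[y]:
        if x not in depth:
            depth[x] = depth[y] + 1
            queue.append(x)

print(len(depth), max(depth.values()))   # 1024 4
```

This agrees with the group-theoretic view: the basin of $1$ is exactly the Sylow $2$-subgroup of the units, of size $2^{v_2(\phi(n))} = 2^{10}$.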
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.65\textwidth]{1024attractor.pdf}
\caption{The graph $T_2^1 \oblong T_2^2 \oblong T_2^3 \oblong T_2^4$, which appears as the basin of attraction of $1$ in the graph $\gr{10455}$.}
\label{fig:1024attractor}
\end{center}
\end{figure}
In fact, we can completely understand the graph of $\gr{10455}$ using the tools above. Note that since $p_1,p_2,p_4$ are all Fermat primes, their graphs consist of a tree containing $p_i-1$ nodes, plus a $0$ with a loop, so $\gr{p_i}$ is the union $T_2^i \cup T_1^1$. For $p_3=41$, we note that $p-1=40 = 8*5$, and $\ordt 5 =\phi(5)= 4$, so $\gr{41}$ has a single periodic orbit of period $4$ in the form of a $C_4(T_2^3)$, plus a $T_2^3$ going to $1$, plus the single loop at $0$. There are 24 possible products in the expansion
\begin{equation*}
(T_2^1 \cup T_1^1)\oblong (T_2^2 \cup T_1^1)\oblong(C_4(T_2^3) \cup T_2^3 \cup T_1^1) \oblong (T_2^4 \cup T_1^1)
\end{equation*}
and notice that at most one factor in any of these products is a flower cycle (the others are looped trees), so each product remains connected and there are precisely 24 components. Moreover, exactly eight of these will have an orbit of period $4$, and sixteen will not. For example, we can get a $C_4(T_2^3)$ by choosing $T_1^1\oblong T_1^1 \oblong C_4(T_2^3) \oblong T_1^1$, but we can also get $C_4(T_2^1\oblong T_2^3)$ by choosing $T_2^1\oblong T_1^1 \oblong C_4(T_2^3) \oblong T_1^1$, etc.
In general, we can extrapolate directly from this example that for any flower cycle of the form $C_\alpha(T)$, where $T$ is a tree that can be formed by taking Kronecker products of any collection of $T_2^\theta$'s, we can find a (square-free) $n$ whose graph contains $C_\alpha(T)$.
\subsection{Triggering divisors}\label{sec:triggering}
\begin{defn}
Given a period $k$, we say that $d$ is a {\bf triggering divisor} for $k$ if
\begin{itemize}
\item $\ordt d = k$;
\item $\ordt e\neq k$ for any $e$ that is a proper divisor of $d$.
\end{itemize}
\end{defn}
\begin{remark}
It follows directly from the definition that a number $m$ has a divisor $d$ with $\ordt d = k$ iff $m$ is a multiple of a triggering divisor of $k$.
The name ``triggering divisor'' comes from the fact that the presence of such a divisor of $m$ will trigger a periodic orbit of period $k$. (It might be that other divisors of $m$ also give periodic orbits of period $k$, but this minimal one is the one that triggers the condition...)
\end{remark}
\begin{cor}
The graph $\gr{p^k}$ has a flower orbit of type $C_k(T_2^\theta)$ iff $m = \phi(p^k)$ is a multiple of $2^\theta$ times a triggering divisor of $k$.
\end{cor}
\begin{proof}
This follows directly from Theorem~\ref{thm:units} and the definition of triggering divisor.
\end{proof}
We present the triggering divisors for all periods up to $50$ in Table~\ref{tab:trig}, where we have also listed the first $n$ at which each period appears. Note that the triggering divisor is not unique for certain periods --- we can obtain multiple numbers, none of which divides another. Also note, interestingly, that these numbers might not even be relatively prime. In the table, when there are multiple triggering divisors we have bolded the one that gives rise to the first prime to have that period; for example, for period 18, the smallest prime that is $1\pmod{19}$ is actually 191, whereas the smallest prime that is $1\pmod{27}$ is 109.
Also, one might ask whether a composite number gives rise to a particular periodic orbit before any prime does. In theory it is possible that one could observe a (composite) period first in a composite number. For example, to obtain a period-10 orbit, we can find a prime that gets it from a triggering divisor (in this case, $23$, the first such prime in Table~\ref{tab:trig}) or we could find a number with a period-2 and a period-5 orbit and multiply them. So, for example, $n=7*311 = 2177$ also has a period-10 orbit, but $2177 > 23$. In practice we never found an example where a period first shows up at a composite number, but there are a lot of integers that we didn't check. The author is agnostic as to whether a particular period length can ever show up first at a composite $n$.
There is a lot of heterogeneity in how large the triggering divisors are. One thing that follows directly from the definitions is that
\begin{prop}\label{prop:Mersenne}
If $p$ is a prime such that $2^p-1$ is also prime (that is, $2^p-1$ is a Mersenne prime), then period-$p$ has the single triggering divisor $2^p-1$.
\end{prop}
One can see this markedly at $p = 2, 3, 5, 7, 13, 17, 19, 31$ in Table~\ref{tab:trig}, whereas many periods that are not Mersenne exponents (e.g. $11, 37, 39$) have a ``pretty small'' triggering divisor.
The method to obtain the values in Table~\ref{tab:trig} was as follows. For any period $r$, we list all divisors of the number $2^r-1$. For any $d$ that divides $2^r-1$, we compute the multiplicative order of $2$ modulo $d$ and retain those where $\ordt d = r$. (Of course, since $d$ divides $2^r-1$, then $\ordt d$ must divide $r$, but it could be strictly smaller.) From this retained list of divisors, we remove those divisors that are multiples of others in the list.
For example, let us consider period $6$. We have $2^6-1 = 63$, the divisors of which are $1,3,7,9,21,63$. We have $\ordt 1=1,\ordt3=2$ and $\ordt7=3$ and remove those. Of the remaining three, we can compute that they satisfy $\ordt d = 6$. But then notice that $63$ is a multiple of $9$ (and also $21$ for that matter) so we throw out $63$, leaving $9$ and $21$. And also note that these divisors are independent in the sense that they give different sets of primes. For example, if we consider $p=19$, then $19-1 = 18 = 2*9$, which is not divisible by $21$, and conversely $p=43$ has $p-1 = 2*21$ which is not divisible by $9$. Note that $\theta=1$ in each of these cases, so we see that $\gr{19}$ has a single period-6 orbit of type $C_6(T_2^1)$ (since $\phi(9) = 6$) but $\gr{43}$ has two disjoint such orbits (since $\phi(21) = 12$). See Figure~\ref{fig:1943}.
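The procedure just described is easy to automate. Here is an illustrative Python sketch of our own (enumerating divisors by trial division, so only practical for small periods) that computes the triggering divisors of a period $r$; for $r=6$ it returns $[9, 21]$ as above, and for $r=10$ it returns $[11, 93]$, matching Table~\ref{tab:trig}.

```python
def mult_order_of_2(d):
    """Multiplicative order of 2 modulo d (d odd)."""
    if d == 1:
        return 1
    k, t = 1, 2 % d
    while t != 1:
        t = t * 2 % d
        k += 1
    return k

def triggering_divisors(r):
    """Divisors d of 2^r - 1 with ord_d(2) = r that are minimal under divisibility."""
    m = 2 ** r - 1
    divisors = [d for d in range(1, m + 1) if m % d == 0]
    hits = [d for d in divisors if mult_order_of_2(d) == r]
    # drop any hit that is a proper multiple of another hit
    return [d for d in hits if not any(e != d and d % e == 0 for e in hits)]

print(triggering_divisors(6), triggering_divisors(10))   # [9, 21] [11, 93]
```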
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}\hline
Period & Trig div &First&Period & Trig div & First \\\hline
1 & 1 & 1& 26 & {\bf 2731},24573 &60083\\\hline
2 & 3 & 7& 27 & 262657 &2626571\\\hline
3 & 7 & 29 &28 & {\bf 29},113,215,635 & 59 \\\hline
4 & 5 & 11& 29 & {\bf 233},1103,2089 & 467 \\\hline
5 & 31 & 311& 30 & 77,{\bf 99},279,331,453,651,1661 &199\\\hline
6 & {\bf 9}, 21 & 19& 31 & 2147483647 & ?? \\\hline
7 & 127 & 509&32 & 65537 & 917519\\\hline
8 & 17 & 103 & 33 & {\bf 161},623,599479 & 967 \\\hline
9 & 73 & 293&34 & {\bf 43691},393213 &87383\\\hline
10 & {\bf 11},93 & 23& 35 & {\bf 71},3937,122921 & 569\\\hline
11 & {\bf 23},89 & 47&36 & {\bf 37},95,109,135,247,351,365,949 & 149 \\\hline
12 & {\bf 13},35,45 & 53& 37 & {\bf 223},616318177 & 2677 \\\hline
13 & 8191&376787 & 38 & {\bf 174763},1572861 & 699053\\\hline
14 & {\bf 43},381 & 173&39 & {\bf 79},57337,121369 & 317 \\\hline
15 & {\bf 151},217 & 907&40 & {\bf 187},425,527,697,61681 & 1123\\\hline
16 & 257 & 1543&41 & {\bf 13367},164511353 & 106937\\\hline
17 & 131071 &1572853& 42 & {\bf 147},301,387,1011,1143,2667,5419,14491 & 883\\\hline
18 & 19,{\bf 27},219 & 109& 43 & {\bf 431},9719,2099863 & 863 \\\hline
19 & 524287 & 8388593&44 & {\bf 115},397,445,2113,3415 & 461\\\hline
20 & 25,{\bf 41},55,155 & 83&45 & {\bf 631},2263,11023,23311 & 6311 \\\hline
21 & {\bf 49},337,889 & 197& 46 & {\bf 141},535443,2796203 & 283\\\hline
22 & {\bf 69},267,683 & 139& 47 & {\bf 2351},4513,13264529 & 4703 \\\hline
23 & {\bf 47},178481 & 283 &48 & {\bf 97},673,1799,2313,3341,61937 & 389\\\hline
24 & {\bf 119},153,221,241 & 239&49 & 4432676798593 & ??\\\hline
25 & {\bf 601},1801 & 3607&50 & {\bf 251},1803,4051,5403,6611,19811 &503\\\hline
\end{tabular}
\caption{List of triggering divisors for all periods up to $50$, together with the first $n$ at which each period appears}
\end{center}
\label{tab:trig}
\end{table}
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.95\textwidth]{G19G43.pdf}
\caption{The graphs $\gr{19}$ and $\gr{43}$. Note that they both have period-6 orbits, and all periodic orbits have a ``spoke'' sticking out from the $T_2^1$.}
\label{fig:1943}
\end{center}
\end{figure}
\subsection{When cycles beget many more cycles}
Consider the two numbers considered in the previous section: the primes $19$ and $43$. These numbers were chosen as primes that give period-6 orbits, but ``for different reasons'', because they derive from independent triggering divisors. Since they both have periodic orbits of the same period, if we take the Kronecker product, we will obtain many orbits of that period, from Proposition~\ref{prop:kron2}, since $\alpha=\beta = \lambda = \gamma$ --- so that the product of two period-6 cycles is actually 6 distinct period-6 cycles.
Again, we first consider $\gr{19}$. Since $19-1 = 18 = 2*9$, we need to consider the divisors of $9$. We already saw that we have an orbit of period $6$, and from the divisor $3$ we get an orbit of period $2$. Since $\theta=1$ we have $T_2^1$ attached to the graphs (just giving a ``spoke''). Therefore $\gr{19}$ has four components, namely $$\gr{19}\cong C_6(T_2^1) \oplus C_2(T_2^1) \oplus T_2^1 \oplus T_1^1.$$
For $\gr{43}$, we have $43-1 = 42 = 2*21$, and so we need to consider the divisors $1,3,7,21$. As we computed above, the $d=21$ gives two period-6 orbits. Since $\phi(7) = 6$ and $\ordt 7 = 3$, the $d=7$ term gives two period-3 orbits, and $d=3$ gives one period two orbit. Therefore $\gr{43}$ has 7 components:
$$\gr{43} \cong C_6(T_2^1) \oplus C_6(T_2^1) \oplus C_3(T_2^1)\oplus C_3(T_2^1) \oplus C_2(T_2^1) \oplus T_2^1 \oplus T_1^1.$$
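Both decompositions can be verified by listing the cycle lengths of the squaring map over all of $\zmod n$ (a brute-force Python sketch of our own, not from the paper):

```python
def cycle_lengths(n):
    """Sorted lengths of the periodic cycles of x -> x^2 (mod n), over all of Z/n."""
    # land every vertex on its eventual cycle by squaring enough times
    on_cycle = set()
    for x in range(n):
        y = x
        for _ in range(2 * n.bit_length()):
            y = y * y % n
        on_cycle.add(y)
    # walk each distinct cycle once
    seen, lengths = set(), []
    for x in on_cycle:
        if x in seen:
            continue
        length, y = 1, x * x % n
        seen.add(x)
        while y != x:
            seen.add(y)
            y = y * y % n
            length += 1
        lengths.append(length)
    return sorted(lengths)

print(cycle_lengths(19))   # [1, 1, 2, 6]: the loops at 0 and 1, one C_2, one C_6
print(cycle_lengths(43))   # [1, 1, 2, 3, 3, 6, 6]
```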
Now let us consider $n=817 = 19*43$. We know that the graph $\gr{817}$ will contain at least $28$ components, since one of the graphs has 4 and the other 7. But in fact, it will have many more components: each time we take the product of two period-6 orbits, we actually get {\em six} period-6 orbits, etc.
For example, let us consider the $C_6(T_2^1)$ component of $\gr{19}$, and multiply it against the seven components of $\gr{43}$. We get
\begin{align*}
C_6(T_2^1) \oblong C_6(T_2^1) &= C_6(T_4^1) \quad\mbox{times 6}; \\
C_6(T_2^1) \oblong C_3(T_2^1) &= C_6(T_4^1) \quad\mbox{times 3};\\
C_6(T_2^1) \oblong C_2(T_2^1) &= C_6(T_4^1) \quad\mbox{times 2};\\
C_6(T_2^1) \oblong T_2^1 &= C_6(T_4^1) \quad\mbox{times 1};\\
C_6(T_2^1) \oblong T_1^1 &= C_6(T_2^1).
\end{align*}
And also note that the first two components listed above each appear twice in $\gr{43}$, so when we multiply $C_6(T_2^1)$ by the components of $\gr{43}$ we obtain $2*6+2*3+2+1 = 21$ copies of $C_6(T_4^1)$ and one copy of $C_6(T_2^1)$.
When we multiply the $C_2(T_2^1)$ component of $\gr{19}$ against the other seven, we obtain
\begin{align*}
C_2(T_2^1) \oblong C_6(T_2^1) &= C_6(T_4^1) \quad\mbox{times 2}; \\
C_2(T_2^1) \oblong C_3(T_2^1) &= C_6(T_4^1) \quad\mbox{times 1};\\
C_2(T_2^1) \oblong C_2(T_2^1) &= C_2(T_4^1) \quad\mbox{times 2};\\
C_2(T_2^1) \oblong T_2^1 &= C_2(T_4^1) \quad\mbox{times 1};\\
C_2(T_2^1) \oblong T_1^1 &= C_2(T_2^1).
\end{align*}
This gives a total of $2*2+2*1 = 6$ copies of $C_6(T_4^1)$, three copies of $C_2(T_4^1)$ ($2+1$ from the last two products), and one copy of $C_2(T_2^1)$.
The other two components are simpler, since they are trees. Multiplying $T_2^1$ against the components of $\gr{43}$ gives $2$ copies of $C_6(T_4^1)$, $2$ copies of $C_3(T_4^1)$, $1$ copy of $C_2(T_4^1)$, $1$ copy of $T_4^1$, and finally one copy of $T_2^1$. Multiplying $T_1^1$ against these components just copies them.
Putting this all together, we have
\begin{center}
\begin{tabular}{|c|c|}\hline
Graph type & copies\\\hline
$C_6(T_4^1)$ & 21 + 6 +2 = 29\\\hline
$C_3(T_4^1)$ & 2\\\hline
$C_2(T_4^1)$ & 3+1=4\\\hline
$C_6(T_2^1)$& 1 + 1 +1 = 3\\\hline
$C_3(T_2^1)$& 1 + 1 = 2\\\hline
$C_2(T_2^1)$& 1 + 1 = 2\\\hline
$T_4^1$ & 1\\\hline
$T_2^1$ & 1+1=2\\\hline
$T_1^1$ & 1\\\hline
\end{tabular}
\end{center}
We can compare this against the graph given in Figure~\ref{fig:817}, which was obtained by direct computation. Note that $C_\alpha(T_4^1)$ will be a cycle of period $\alpha$ with three leaves sticking out of each node, and $C_\alpha(T_2^1)$ will be a cycle of period $\alpha$ with a single stem out of each node.
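Since every weakly connected component of a functional graph contains exactly one periodic cycle, counting cycles counts components, so the tally above predicts $46$ components in total. A quick Python check of our own (not from the paper):

```python
n = 19 * 43   # 817

# land every vertex on its eventual cycle
on_cycle = set()
for x in range(n):
    y = x
    for _ in range(2 * n.bit_length()):
        y = y * y % n
    on_cycle.add(y)

# one component per distinct cycle among the periodic points
seen, components = set(), 0
for x in on_cycle:
    if x in seen:
        continue
    components += 1
    y = x
    while True:
        seen.add(y)
        y = y * y % n
        if y == x:
            break

print(components)   # 46
```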
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.85\textwidth]{G817.pdf}
\caption{The (unlabeled) graph $\gr{817}$, chosen because $817=19*43$ (q.v.~Figure~\ref{fig:1943}).}
\label{fig:817}
\end{center}
\end{figure}
\subsection{When the nilpotents end up chillin' in the corner}
Consider $n=5^4=625$. Here we want to study the graph $\gr{625}$, and in particular, focus on $\nilp{625}$.
Let us use the formula in Theorem~\ref{thm:nilpotent}. We first compute the graph $\tree 5 \widetilde{y} 2$. Recalling the graph $\units5$, we see that $1$ has two preimages: $1$ and $4$, and $4$ has two preimages: $2$ and $3$. In this case the actual values of the preimages won't be important, since $\tree 5 \widetilde{y} 1$ is a leaf regardless of $\widetilde{y}$, due to the single power of $5$.
According to the theorem, $\tree 5 1 2$ has five incoming copies of $\tree 5 1 1$ and five incoming copies of $\tree 5 4 1$, but these are all leaves, so $\tree 5 1 2$ is just a regular tree of width $10$, or $T_{10}^1$. We get the same result for $\tree 5 4 2$. And, of course, $\tree 5 2 2 = \tree 5 3 2$ is just a single node.
We also have that $\tree 5 \widetilde{y} 3$ is a node simply because $3$ is odd.
Now, the in-neighborhood of $0$ will be any numbers with 2 or 3 powers of $5$ in them. There are four such preimages with power three ($\tree 5 \widetilde{y} 3$ for $\widetilde{y}=1,2,3,4$, which are all nodes) and $\phi(25)=20$ such preimages of power two. As we saw above, half of these are trees $T_{10}^1$ and the other half are nodes. Putting all of this together, the in-neighborhood of $0$ is $10$ copies of $T_{10}^1$ and $14$ leaves. See Figure~\ref{fig:625attractor}, where we have plotted $\nilp{625}$.
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.95\textwidth]{625attractor.pdf}
\caption{The graph $\nilp{625}$.}
\label{fig:625attractor}
\end{center}
\end{figure}
Note the structure is as we say: the node $0$ has fourteen leaves leading into it: the four that come from $5^3$ terms: 125, 250, 375, 500, and the ten that come from $5^2$ terms, where the prefactor is $2$ or $3$ modulo 5: $25*\{2,3,7,8,12,13,17,18,22,23\}$. We also see that there are ten trees that go into $0$, coming from $5^2$ terms, where the prefactor is $1$ or $4$ modulo 5: $25*\{1,4,6,9,11,14,16,19,21,24\}$.
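These counts are small enough to verify directly. The sketch below (illustrative Python, not part of the paper) reconstructs the in-neighborhood of $0$ modulo $625$ and checks that each of the ten trees is indeed a $T_{10}^1$:

```python
n = 5 ** 4   # 625
pre = {y: [] for y in range(n)}
for x in range(n):
    pre[x * x % n].append(x)

into_zero = [x for x in pre[0] if x != 0]
leaves = [x for x in into_zero if not pre[x]]
trees = [x for x in into_zero if pre[x]]
print(len(into_zero), len(leaves), len(trees))   # 24 14 10

# each tree hanging off 0 is a T_10^1: ten incoming vertices, all of them leaves
assert all(len(pre[x]) == 10 and all(not pre[y] for y in pre[x]) for x in trees)
```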
Of course, if we want to understand the full graph $\gr{625}$ we also need to consider the units. Note that $\phi(625) = 625-125 = 500 = 2^2*125$. The divisors are $1,5,25,125$, where we have $\phi(125) = \ordt{125} = 100$, so one period-100 orbit, $\phi(25) = \ordt{25} = 20$, so one period-20 orbit, $\phi(5) = \ordt5=4$, so one period-4 orbit, and of course one fixed point. Since $\theta=2$ these are all decorated with copies of $T_2^2$, so in fact we have
\begin{equation*}
\units{625} = C_{100}(T_2^2) \oplus C_{20}(T_2^2) \oplus C_{4}(T_2^2) \oplus T_2^2,
\end{equation*}
so $\gr{625}$ has five components, see Figure~\ref{fig:625}. (We can see that the tree $\nilp{625}$ is just chilling down there in the corner.)
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.65\textwidth]{G625.pdf}
\caption{The graph $\gr{625}$.}
\label{fig:625}
\end{center}
\end{figure}
\subsection{Well, {\em these} nilpotents ain't messing around}
Here we are going to describe the graph $\gr n$ when $n=177147=3^{11}$.
First we consider $\units{3^{11}}$. We have $\phi(3^{11}) = 2\cdot 3^{10}$, so we have $\theta = 1$ and $\mu = 3^{10}$. The divisors $d$ of $3^{10}$ are, of course, $3^k$ for $k=0,1,\dots, 10$. In each of these cases, it turns out that $\phi(3^k) = \ordt{3^k} = 2\cdot 3^{k-1}$, so we have exactly 11 periodic orbits: a fixed point, one of period 2, one of period 6, etc. all the way up to one of period $2\cdot 3^9 = 39366$. Since $\theta=1$, all of these periodic orbits have a ``spoke'' coming out of each periodic point.
Now for $\nilp{3^{11}}$. Let us first consider the in-neighborhood of $0$: any $x$ of the form $x = \widetilde{x} 3^\ell$ with $\gcd(\widetilde{x},3)=1$ and $\ell \ge 6$ will map directly into $0$. If $\ell$ is odd, then these are leaves. If $\ell$ is even but $\widetilde{x} \equiv 2\bmod 3$, then this is also a leaf. In the other cases, we will have some more complex trees. For each even $\ell$, if $\widetilde{x} \equiv 1\pmod 3$ we have a tree of type $\tree 3 1 \ell$, giving a total of $\frac12\phi(3^{11-\ell}) = 3^{10-\ell}$ of them, and if $\widetilde{x} \equiv 2\pmod 3$ it is a leaf, giving $\frac12\phi(3^{11-\ell}) = 3^{10-\ell}$ leaves. If $\ell$ is odd we have $\phi(3^{11-\ell}) = 2\cdot 3^{10-\ell}$ leaves.
Therefore the total number of leaves is $1 + 6 + 9 + 54 + 81 = 151$ (reading off $\ell = 10, 9, 8, 7, 6$ in turn).
From here we see that we should have attached to $0$:
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}\hline
Tree type &$\tree 3 1 {10}$ &$\tree 3 1 {8}$ &$\tree 3 1 {6}$ & leaves\\\hline
Count & 1 & 9 & 81 & 151\\\hline
\end{tabular}
\end{center}
\caption{The structures directly adjacent to $0$ in the graph $\nilp{3^{11}}$}
\label{tab:3^11}
\end{table}
Moreover, we can see that since $10=2\cdot 5$, $\tree 3 1 {10}$ is just a tree of depth one: $y^2 \equiv 3^{10}\bmod{3^{11}}$ iff $y = \widetilde{x} 3^5$ with $\gcd(\widetilde{x},3)=1$ and $1\le \widetilde{x} < 3^6$, and there are $\phi(3^6) = 486$ such $\widetilde{x}$'s, so $\tree 3 1 {10} \cong T_{486}^1$ is just a regular tree of width $486$ and depth $1$. Similarly, $\tree 3 1 6$ is a tree of depth $1$ and width $2\cdot 3^3 = 54$: here $y^2 \equiv 3^6 \bmod{3^{11}}$ iff $y = \widetilde{x} 3^3$ with $\widetilde{x}^2 \equiv 1 \bmod{3^5}$ and $1\le \widetilde{x} < 3^8$, and since this forces $\widetilde{x} \equiv \pm 1 \bmod{3^5}$, there are $2\cdot 3^3 = 54$ such $\widetilde{x}$'s.
The more complicated case is $\tree 3 1 8$. Note that $8=2^3$, so we can consider the unfurled graph of depth three $\U 3 1 3$, see Figure~\ref{fig:tree}. According to Lemma~\ref{lem:layers}, we take this graph and expand it by the powers $3^4, 3^2, 3^1$, so that the in-neighborhood of $x=3^8$ has $3^4$ leaves coming in, and $3^4$ trees of type $\tree 3 1 4$, which themselves have $3^2$ leaves coming in, and $3^2$ trees coming in, and those are regular trees of the form $T_6^1$. (And of course, for any $\widetilde{x} 3^8$ with $\widetilde{x}\equiv 1\bmod 3$ we get an isomorphic tree, so there are $9$ of them.) See Figure~\ref{fig:3^8} for a visualization of $\tree 3 1 8$, which is also complicated and fun. (We would have liked to put a visualization of the full $\nilp{3^{11}}$ here but at $59,049$ vertices and $59,049$ edges, it made our computer sad.)
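At this size brute force is still feasible, and the in-degree bookkeeping can be spot-checked with a single Python pass of our own (not from the paper) over all of $\zmod{3^{11}}$:

```python
from collections import Counter

n = 3 ** 11   # 177147
indeg = Counter(x * x % n for x in range(n))   # in-degree of every vertex

# everything that maps straight into 0, and how many of those are leaves
into_zero = [x for x in range(n) if x != 0 and x * x % n == 0]
leaves = sum(1 for x in into_zero if indeg[x] == 0)

print(len(into_zero), leaves)         # 242 151
print(indeg[3 ** 8], indeg[3 ** 10])  # 162 486  (i.e. 81+81, and 486)
```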
\begin{figure}[th]
\begin{center}
\includegraphics[width=0.95\textwidth]{3p8in3p11.pdf}
\caption{The subgraph of $\gr{3^{11}}$ that goes to $x=3^8$, giving an example of $\tree 3 1 8$. Its details are given in the text, but note that there are {\bf nine} copies of this bad boy attached to zero, not to mention all of the other things going in (see Table~\ref{tab:3^11}).}
\label{fig:3^8}
\end{center}
\end{figure}
\section{Conclusions}
We have presented some basic theory and given a few nice examples. The examples were a lot of fun, and there's no doubt that one can find a whole host of other interesting examples. I leave it to the readers of this article to explore and find more.
| {
"timestamp": "2022-07-22T02:22:33",
"yymm": "2207",
"arxiv_id": "2207.10512",
"language": "en",
"url": "https://arxiv.org/abs/2207.10512",
"abstract": "We examine the graphs generated by the map $x\\mapsto x^2\\bmod n$ for various $n$, present some results on the structure of these graphs, and compute some very cool examples.",
"subjects": "Combinatorics (math.CO); Dynamical Systems (math.DS); Number Theory (math.NT)",
"title": "The shape of $x^2\\bmod n$"
}
% Source: https://arxiv.org/abs/2007.09719
\title{Rainbow odd cycles}
\begin{abstract}
We prove that every family of (not necessarily distinct) odd cycles $O_1, \dots, O_{2\lceil n/2 \rceil-1}$ in the complete graph $K_n$ on $n$ vertices has a rainbow odd cycle (that is, a set of edges from distinct $O_i$'s, forming an odd cycle). As part of the proof, we characterize those families of $n$ odd cycles in $K_{n+1}$ that do not have any rainbow odd cycle. We also characterize those families of $n$ cycles in $K_{n+1}$, as well as those of $n$ edge-disjoint nonempty subgraphs of $K_{n+1}$, without any rainbow cycle.
\end{abstract}
\section{Introduction}\label{sec:introduction}
Given a family $\mathcal{E}$ of sets, an $\mathcal{E}$-\emph{rainbow set} is a set $R \subseteq \union\mathcal{E}$ with an injection $\sigma\colon R \to \mathcal{E}$ such that $e \in \sigma(e)$ for all $e \in R$. The term rainbow set originates in viewing every member of $\mathcal{E}$ as a color, and every $e \in R$ as colored by $\sigma(e)$. When we speak of a rainbow set, we often keep in mind the injection $\sigma$, and we say that $\sigma(e) \in \mathcal{E}$ is \emph{represented} by $e$ in $R$.
\begin{remark}
Throughout we use the term ``family'' in the sense of ``multiset'' allowing repeated members.
\end{remark}
A recurring theme in the study of rainbow sets is finding an $\mathcal{E}$-rainbow set satisfying a property $\mathcal{P}$, assuming that every member of $\mathcal{E}$ satisfies $\mathcal{P}$, and that $\mathcal{E}$ is large. A classic result of this type is B\'ar\'any's colorful Carath\'eodory theorem~\cite{B82}: every family of $n+1$ subsets of $\mathbb{R}^n$, each containing a point $a$ in its convex hull, has a rainbow set satisfying the same property. An application mentioned in \cite{B82} is a theorem due to Frank and Lov\'asz, on rainbow directed cycles. Other results of this type are about rainbow matchings. For example, improving a theorem of Drisko~\cite{D98}, Aharoni and Berger~\cite[Theorem~4.1]{AB} proved that $2n-1$ matchings of size $n$ in any bipartite graph have a rainbow matching of size $n$. In \cite{AKZ} the examples showing sharpness of this result were characterized, and in \cite{ARJ} the theorem was given a topological proof. A more general context is that of independent sets in graphs, see, e.g., \cite{ABKK,KL,KKK}.
In this paper we study conditions for the existence of rainbow cycles, with or without a parity constraint on their lengths. Hereafter a cycle is viewed as a set of edges. Our main result is:
\begin{theorem}\label{thm:odd-cycles}
Every family of $2\ceil{n/2}-1$ odd cycles in the complete graph $K_n$ on $n$ vertices has a rainbow odd cycle.
\end{theorem}
Put more explicitly, the theorem states that when $n$ is odd, every family of $n$ odd cycles in $K_n$ has a rainbow odd cycle; when $n$ is even $n-1$ odd cycles suffice. The case of $n$ odd is relatively easy, and the main effort goes into the even case. The proof is done in \cref{sec:roc} via a characterization of families of $n$ odd cycles in $K_{n+1}$ without any rainbow odd cycle.
In \cref{sec:rgc} we deal with rainbow cycles of general length. The fact that $n$ cycles in $K_n$ have a rainbow cycle is easy, and the main result is a characterization of families of $n$ cycles in $K_{n+1}$ without any rainbow cycle. In \cref{sec:rainbow-cycles-in-multigraphs}, we consider rainbow cycles in edge-disjoint families; our result in this case turns out to be a rediscovery, with a short proof, of a theorem of \cite{HHJO}. In \cref{sec:remarks} we conclude with a generalization to matroids, and a result on rainbow even cycles.
\section{Rainbow odd cycles}\label{sec:roc}
We start with an observation which yields \cref{thm:odd-cycles} in the case of $n$ odd.
\begin{proposition}\label{thm:woc}
Every family of $n$ odd cycles in $K_n$ has a rainbow odd cycle.\footnote{A reworded version of \cref{thm:woc}, suggested by the first author, appeared as Problem 3 of Day 1 in the 12th Romanian Master in Mathematics, RMM 2020.}
\end{proposition}
\begin{proof}
Let $R$ be a maximal rainbow forest. Since $R$ has fewer than $n$ edges, one of the odd cycles, say $O$, is not represented in $R$. By the maximality of $R$, no edge in $O$ connects two components of $R$. Thus $O$ is contained in a connected component $T$ of $R$. Since $O$ is of odd length, one of its edges does not obey the bipartition of $T$. Adding that edge to $T$ yields a rainbow subgraph that supports an odd cycle.
\end{proof}
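The forest-growing argument above is effectively an algorithm. Here is a minimal Python sketch (illustrative only; the function names and the union-find helper are ours, not part of the paper): it greedily picks at most one edge per cycle while keeping the picked edges acyclic, which produces a maximal rainbow forest.

```python
class DSU:
    """Union-find with path halving, used to keep the picked edges acyclic."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # a and b are already in the same component
        self.parent[ra] = rb
        return True


def maximal_rainbow_forest(n, cycles):
    """Greedily build a maximal rainbow forest for a family of cycles in K_n.

    cycles: list of cycles, each a list of edges frozenset({u, v}) with
    u, v in range(n).  At most one edge is taken per cycle, so the result
    is rainbow; since components only grow, the resulting forest is maximal.
    Returns (forest_edges, indices_of_represented_cycles).
    """
    dsu, forest, represented = DSU(n), [], set()
    for i, cycle in enumerate(cycles):
        for edge in cycle:
            u, v = tuple(edge)
            if dsu.union(u, v):
                forest.append(edge)
                represented.add(i)
                break  # one edge per cycle keeps the forest rainbow
    return forest, represented
```

With $n$ odd cycles in $K_n$, the forest has at most $n-1$ edges, so some cycle is unrepresented, and the parity argument of the proof takes over.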
A Hamiltonian cycle on $n$ vertices repeated $n-1$ times shows the sharpness of \cref{thm:woc} only for $n$ odd. This example can be generalized as follows.
\begin{definition}
A family $\mathcal{O}$ of cycles is a \emph{pruned cactus} if all the cycles in $\mathcal{O}$ are identical to a cycle on $\abs{\mathcal{O}} + 1$ vertices, or $\mathcal{O}$ can be partitioned into two pruned cacti $\mathcal{O}_1, \mathcal{O}_2$ such that $\union\mathcal{O}_1$ and $\union\mathcal{O}_2$ share exactly one vertex.\footnote{A cactus graph is a connected graph in which two cycles have at most one vertex in common. A pruned cactus $\mathcal{O}$ is named after the fact that $\union\mathcal{O}$ is a $2$-edge-connected cactus graph.}
\end{definition}
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.02, very thick]
\coordinate (a1) at (135,-342);
\coordinate (a2) at (162,-324);
\coordinate (a5) at (153,-297);
\coordinate (a6) at (153,-234);
\coordinate (a9) at (108,-315);
\coordinate (a10) at (126,-279);
\coordinate (a11) at (108,-234);
\coordinate (a15) at (126,-198);
\coordinate (a16) at (189,-171);
\coordinate (a17) at (207,-135);
\coordinate (a20) at (180,-135);
\coordinate (a21) at (189,-108);
\coordinate (a22) at (180,-81 );
\coordinate (a23) at (207,-72 );
\coordinate (a24) at (198,-54 );
\coordinate (a28) at (99 ,-162);
\coordinate (a29) at (135,-162);
\coordinate (a30) at (117,-126);
\coordinate (a31) at (135,-90 );
\coordinate (a32) at (126,-54 );
\coordinate (a33) at (135,-27 );
\coordinate (a35) at (117,-9 );
\coordinate (a36) at (99 ,-54 );
\coordinate (a39) at (45 ,-90 );
\coordinate (a40) at (27 ,-117);
\coordinate (a41) at (63 ,-108);
\coordinate (a42) at (45 ,-144);
\coordinate (a43) at (36 ,-180);
\draw[green] (a1) -- (a2) -- (a5) -- (a6) -- (a15) -- (a11) -- (a10) -- (a9) -- cycle;
\draw[green] (a15) -- (a29) -- (a30) -- (a28) -- cycle;
\draw[green] (a30) -- (a31) -- (a32) -- (a33) -- (a35) -- (a36) -- cycle;
\draw[green] (a15) -- (a16) -- (a20) -- cycle;
\draw[green] (a20) -- (a17) -- (a21) -- cycle;
\draw[green] (a21) -- (a23) -- (a24) -- (a22) -- cycle;
\draw[green] (a28) -- (a42) -- (a43) -- cycle;
\draw[green] (a42) -- (a41) -- (a39) -- (a40) -- cycle;
\foreach \i in {1,2,5,6,9,10,11,15,16,17,20,21,22,23,24,28,29,30,31,32,33,35,36,39,40,41,42,43}
\node[vertex] at (a\i) {};
\end{tikzpicture}\qquad\qquad%
\begin{tikzpicture}[scale=0.02, very thick, xscale=-1]
\coordinate (a1) at (135,-342);
\coordinate (a2) at (162,-324);
\coordinate (a5) at (153,-297);
\coordinate (a6) at (153,-234);
\coordinate (a9) at (108,-315);
\coordinate (a10) at (126,-279);
\coordinate (a11) at (108,-234);
\coordinate (a15) at (126,-198);
\coordinate (a16) at (189,-171);
\coordinate (a17) at (207,-108);
\coordinate (a20) at (180,-135);
\coordinate (a21) at (20,-144);
\coordinate (a22) at (180,-81 );
\coordinate (a23) at (207,-72 );
\coordinate (a24) at (198,-54 );
\coordinate (a28) at (99 ,-162);
\coordinate (a29) at (135,-162);
\coordinate (a30) at (117,-126);
\coordinate (a31) at (135,-90 );
\coordinate (a32) at (126,-54 );
\coordinate (a33) at (135,-27 );
\coordinate (a35) at (117,-9 );
\coordinate (a36) at (99 ,-54 );
\coordinate (a39) at (45 ,-90 );
\coordinate (a40) at (27 ,-117);
\coordinate (a41) at (63 ,-108);
\coordinate (a42) at (45 ,-144);
\coordinate (a43) at (36 ,-180);
\draw[green] (a1) -- (a5) -- (a6) -- (a15) -- (a11) -- (a10) -- (a9) -- cycle;
\draw[green] (a15) -- (a29) -- (a31) -- (a32) -- (a33) -- (a35) -- (a36) -- (a30) -- (a28) -- cycle;
\draw[green] (a15) -- (a16) -- (a20) -- cycle;
\draw[green] (a20) -- (a17) -- (a23) -- (a24) -- (a22) -- cycle;
\draw[green] (a28) -- (a42) -- (a43) -- cycle;
\draw[green] (a42) -- (a41) -- (a39) -- (a40) -- (a21) -- cycle;
\foreach \i in {1,5,6,9,10,11,15,16,17,20,21,22,23,24,28,29,30,31,32,33,35,36,39,40,41,42,43}
\node[vertex] at (a\i) {};
\end{tikzpicture}
\caption{Underlying graphs of two pruned cacti. The one on the right is composed of odd cycles.}\label{fig:dancers}
\end{figure}
Given a pruned cactus $\mathcal{O}$, one can check from the recursive definition that $\mathcal{O}$ has no rainbow cycle and that the underlying graph $\union\mathcal{O}$ contains exactly $\abs{\mathcal{O}} + 1$ vertices (see \cref{fig:dancers}). A key result towards the proof of \cref{thm:odd-cycles} is that the converse also holds when $\mathcal{O}$ is composed of odd cycles only. For technical reasons, from now on we work with cycles in $K_{n+1}$ rather than $K_n$.
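As a sanity check of the vertex count $\abs{V(\union\mathcal{O})} = \abs{\mathcal{O}} + 1$, here is a small Python sketch (illustrative only; `base` and `glue` are hypothetical names mirroring the two clauses of the recursive definition):

```python
def base(cycle_vertices):
    """First clause: len(cycle_vertices) - 1 identical copies of one cycle."""
    k = len(cycle_vertices)
    cycle = {frozenset({cycle_vertices[i], cycle_vertices[(i + 1) % k]})
             for i in range(k)}
    return [set(cycle) for _ in range(k - 1)]


def glue(cactus1, cactus2):
    """Second clause: join two pruned cacti sharing exactly one vertex."""
    v1 = {v for cyc in cactus1 for e in cyc for v in e}
    v2 = {v for cyc in cactus2 for e in cyc for v in e}
    assert len(v1 & v2) == 1, "underlying graphs must share exactly one vertex"
    return cactus1 + cactus2


def vertex_count(cactus):
    return len({v for cyc in cactus for e in cyc for v in e})
```

For example, gluing two triangles at a shared vertex gives a pruned cactus of four cycles on five vertices, in line with $\abs{\mathcal{O}} + 1$.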
\begin{theorem}\label{thm:odd-cycles-char}
If a family of $n$ odd cycles in $K_{n+1}$ has no rainbow odd cycle, then it is a pruned cactus.
\end{theorem}
Clearly, the cardinality of a pruned cactus composed solely of odd cycles is even. Therefore, when $n$ is even, $n-1$ odd cycles cannot form a pruned cactus, and so \cref{thm:odd-cycles} follows from \cref{thm:odd-cycles-char}.
For the inductive proof of \cref{thm:odd-cycles-char}, we need the following technical lemma.
\begin{lemma}\label{lem:occ}
Let $\mathcal{O} := \set{O_1, \dots, O_n}$ be a family of odd cycles in $K_{n+1}$ without any rainbow odd cycle, and denote $\mathcal{K} := \set{O_1, \dots, O_k}$, where $k < n$. Suppose that $Q$ is a $(k+1)$-vertex subgraph of $\union\mathcal{K}$ and that $V \subseteq V(Q)$ is a set of vertices such that every pair of vertices in $V$ can be connected by a $\mathcal{K}$-rainbow even path in $Q$. Then
\begin{enumerate}[nosep, label=(\alph*)]
\item No edge in $O_{k+1}, \dots, O_n$ has both endpoints in $V$.\label{lem:occ-a}
\end{enumerate}
Moreover, let $\pi$ be the contraction\footnote{A contraction operation removes all edges between any pair of contracted vertices.} that replaces $V(Q)$ with a single vertex $\bar{v}$, and suppose that $P_{k+1}, \dots, P_n$ are subgraphs of $O_{k+1}, \dots, O_n$ such that each $P_i$ avoids the vertices in $V(Q) \setminus V$. Denote $\bar{\PP} := \set{\pi(P_{k+1}), \dots, \pi(P_n)}$. Then the following holds.
\begin{enumerate}[nosep, label=(\alph*), resume]
\item There is no $\bar{\PP}$-rainbow odd cycle in $\pi(K_{n+1})$.\label{lem:occ-b}
\item If $\bar{\PP}$ is a pruned cactus of odd cycles, then $\union\bar{\PP}$ is spanning in $\pi(K_{n+1})$, and no $O_i \setminus P_i$ contains an edge of the form $uv$ with $u \not\in V(Q) \cup V(P_i)$ and $v \in V \cap V(P_i)$.\label{lem:occ-c}
\end{enumerate}
\end{lemma}
\begin{proof}
To prove \ref{lem:occ-a}, note that any edge in $O_{k+1}, \dots, O_{n}$ with both endpoints in $V$ could be completed to an $\mathcal{O}$-rainbow odd cycle by a $\mathcal{K}$-rainbow even path in $Q$.
To prove \ref{lem:occ-b}, assume for the sake of contradiction that there is a $\bar{\PP}$-rainbow odd cycle $C$ in $\pi(K_{n+1})$. Edges of the form $u\bar{v}$ after the contraction correspond to edges of the form $uv$ with $v \in V$ before the contraction. Hence, prior to the contraction, $C$ was either itself an $(\mathcal{O}\setminus \mathcal{K})$-rainbow odd cycle (which does not exist), or an ($\mathcal{O}\setminus \mathcal{K}$)-rainbow odd path between a pair of vertices in $V$, which can be completed to an $\mathcal{O}$-rainbow odd cycle by a $\mathcal{K}$-rainbow even path in $Q$.
To prove \ref{lem:occ-c}, suppose that the family $\bar{\PP}$ is a pruned cactus of odd cycles. Notice that $\pi(K_{n+1})$ has $n + 1 - \abs{V(Q)} + 1 = n - k + 1$ vertices, and the underlying graph $\union\bar{\PP}$ of the pruned cactus $\bar{\PP}$ has $\abs{\bar{\PP}} + 1 = n - k + 1$ vertices. Thus $\union\bar{\PP}$ is spanning in $\pi(K_{n+1})$, and so $\bar{v}$ is on $\union\bar{\PP}$. Finally, suppose on the contrary that some $O_i \setminus P_i$ contains an edge $uv$ with $u \not\in V(Q) \cup V(P_i)$ and $v \in V \cap V(P_i)$. Since $\bar{v} = \pi(v)$ is on $\pi(P_i)$ and $u$ is not on $\pi(P_i)$, one can find a $\bar{\PP}$-rainbow even path from $\bar{v}$ to $u$, in which $\pi(P_i)$ is not represented. This $\bar{\PP}$-rainbow even path can then be completed by the edge $\pi(uv) = u\bar{v}$ to a $\set{\pi(P_{k+1}), \dots, \pi(P_{i-1}), \pi(uv), \pi(P_{i+1}),\dots, \pi(P_n)}$-rainbow odd cycle. However, this contradicts \ref{lem:occ-b}, since $uv$ is an edge of $O_i$ that avoids $V(Q)\setminus V$.
\end{proof}
The last ingredient is a corollary of Rado's theorem for matroids~\cite{R42}, which gives a necessary and sufficient condition for a family of connected subgraphs to have a rainbow spanning tree.
\begin{theorem}[Rado's theorem for matroids]
Given a matroid with ground set $E$, for every family $\set{E_1, \dots, E_m}$ of subsets of $E$, there exists a rainbow independent set of size $m$ if and only if $\rank{E_I} \ge \abs{I}$ for every $I \subseteq [m]$, where $E_I$ is shorthand for $\bigcup_{i \in I}E_i$.
\end{theorem}
\begin{corollary}\label{lem:rainbow-spanning-tree}
For every family $\set{E_1, \dots, E_m}$ of connected subgraphs (viewed as edge sets) in $K_{m+1}$, the family has a rainbow spanning tree if and only if $\abs{V(E_I)} \ge \abs{I} + 1$ for every $I \subseteq [m]$.
\end{corollary}
\begin{proof}
The ``only if'' direction is easy to check. For the ``if'' direction, it suffices to verify the rank inequalities in Rado's theorem for matroids. Recall that, in a graphic matroid, $\rank{E} = \abs{V(E)} - c(E)$ for every edge set $E$, where $c(E)$ is the number of connected components of $E$. Pick an arbitrary $I \subseteq [m]$. Because each $E_i$ is connected, we can partition $I$ into sets $I_1, \dots, I_c$, where $c := c(E_I)$, such that $E_{I_1}, \dots, E_{I_c}$ are the connected components of $E_I$. Since $\abs{V(E_{I_j})} \ge \abs{I_j} + 1$ for all $j \in [c]$, we have the desired inequality
\begin{equation*}
\rank{E_I} = \abs{V(E_I)} - c(E_I) = \sum_{j = 1}^c \left(\abs{V(E_{I_j})} - 1\right) \ge \sum_{j = 1}^c \abs{I_j} = \abs{I}.\qedhere
\end{equation*}
\end{proof}
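The rank condition of the corollary can be checked by brute force on small instances. The following Python sketch (illustrative only; exponential in $m$, and the function name is ours) verifies $\abs{V(E_I)} \ge \abs{I} + 1$ over all nonempty $I$:

```python
from itertools import combinations


def rado_condition(subgraphs):
    """Check |V(E_I)| >= |I| + 1 for every nonempty I, by brute force.

    subgraphs: list of m connected subgraphs of K_{m+1}, each a list of
    edges (u, v).  Exponential in m, so intended for small instances only.
    """
    m = len(subgraphs)
    for r in range(1, m + 1):
        for I in combinations(range(m), r):
            vertices = {v for i in I for edge in subgraphs[i] for v in edge}
            if len(vertices) < r + 1:
                return False
    return True
```

For instance, two single edges sharing a vertex in $K_3$ satisfy the condition (and indeed form a rainbow spanning tree), while two copies of the same edge do not.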
\begin{proof}[Proof of \cref{thm:odd-cycles-char}]
We proceed by induction on $n$. The base case $n=2$ is trivial. Suppose $n \ge 3$, and let $\mathcal{O}=\set{O_1, \dots, O_n}$ be a family of odd cycles in $K_{n+1}$ without any rainbow odd cycle. We break the inductive step into three cases.
\paragraph{Case 1:}
There exists a proper subfamily $\mathcal{K}$ of $\mathcal{O}$ such that $\abs{V(\union\mathcal{K})} \le \abs{\mathcal{K}}+1$.
Since there is no $\mathcal{K}$-rainbow odd cycle, by the induction hypothesis $\mathcal{K}$ is a pruned cactus. By passing to a subfamily of $\mathcal{K}$, we may assume without loss of generality that $\mathcal{K}=\set{O_1, \dots, O_k}$, for some $k < n$, and $O_1, \dots, O_k$ are identical to an odd cycle $O$ on $k+1$ vertices. Note that every pair of vertices in $V(O)$ can be connected by a $\mathcal{K}$-rainbow even path in $O$. By \cref{lem:occ}\ref{lem:occ-a}, for every $i \in \set{k+1, \dots, n}$, the arcs of $O_i$ defined by its vertices shared with $O$ are of length $\ge 2$. Since $O_i$ is odd, there exists an odd arc, call it $P_i$. In case $O_i$ and $O$ are vertex-disjoint, set $P_i := O_i$.
Let $\pi$ be the contraction of $V(O)$ to a single vertex $\bar{v}$. By our choice of $P_i$, for each $i > k$, $\pi(P_i)$ is an odd cycle, and so \cref{lem:occ}\ref{lem:occ-b} and the inductive hypothesis imply that the family $\bar{\PP} := \set{\pi(P_{k+1}), \dots, \pi(P_n)}$ is a pruned cactus.
\begin{claim*}
For every $i > k$, we have $P_i = O_i$; in other words, $O_i$ and $O$ share at most one vertex.
\end{claim*}
Assume for contradiction that $P_i \neq O_i$ for some $i > k$. Let $uv$ be an edge in $O_i \setminus P_i$ with $u \not\in V(P_i)$ and $v \in V(P_i)$. Note in addition that $u \not\in V(O)$ by \cref{lem:occ}\ref{lem:occ-a}, while $v \in V(O)$, which conflicts with \cref{lem:occ}\ref{lem:occ-c}.
\begin{claim*}
For every $i, j > k$, if $\pi(O_i)=\pi(O_j)$, then $O_i=O_j$.
\end{claim*}
Suppose on the contrary that $\pi(O_i)=\pi(O_j)$ and $O_i \neq O_j$ for some $i,j > k$. Let $v_i,v_j$ be respectively the vertices of $O_i, O_j$ shared with $O$. Then there exists an $\set{O_i,O_j}$-rainbow cherry with endpoints $v_i,v_j$ and center not in $V(O)$, which can be completed to an $\mathcal{O}$-rainbow odd cycle by a $\mathcal{K}$-rainbow odd path in $O$.
By \cref{lem:occ}\ref{lem:occ-c}, $\bar{v}\in V(\union\bar{\PP})$, implying that $\union\mathcal{O}$ is connected. By the last claim, for $i > k$, the multiplicity of every $O_i$ in $\mathcal{O}$ is equal to the multiplicity of $\pi(O_i)$ in $\bar{\PP}$, which, by the fact that $\bar{\PP}$ is a pruned cactus, is $\abs{O_i}-1$. Together, this means that $\mathcal{O}$ is a pruned cactus, as desired.
\paragraph{Case 2:} Every odd cycle $O_i$ is Hamiltonian.
Let $S$ be an $\mathcal{O}$-rainbow star of maximum size, say $k$, and let $c$ be its center.\footnote{A \emph{star} of size $k$ is a set of $k \ge 2$ edges, sharing one vertex that is called the \emph{center} of the star.} Without loss of generality, we may assume that the cycles represented in $S$ are $O_1, \dots, O_k$. We may further assume that the cycles in $\mathcal{O}$ are not identical for otherwise $\mathcal{O}$ is already a pruned cactus.
\begin{claim*}
The size $k$ of $S$ satisfies $3 \le k <n$.
\end{claim*}
Because the cycles in $\mathcal{O}$ are not identical, there is a vertex $v$ in $\union\mathcal{O}$ of degree at least $3$. A quick argument exhibits an $\mathcal{O}$-rainbow star of size $3$ centered at $v$, so $k \ge 3$. If the second inequality failed, then $c$ would be connected in $S$ to all other vertices of the graph. Suppose $O_1$ is represented by $cv$ in $S$. In the absence of an $\mathcal{O}$-rainbow triangle, no edge of $O_1$ has both endpoints in $V(K_{n+1})\setminus\set{c, v}$. Because $\abs{V(K_{n+1})\setminus\set{c, v}} = n - 1 \ge 2$, it is impossible for $O_1$ to be Hamiltonian given that $cv$ is already in $O_1$.
Let $V$ be the set of leaves of $S$. Since $\mathcal{O}$ has no rainbow triangle, the cycles $O_{k+1}, \dots, O_n$ do not connect pairs of vertices of $V$. By the maximality of $S$, these cycles enter and exit $c$ through $V$. Therefore, for every $i > k$, $V$ partitions $O_i$ into arcs of length at least two, and at least one of these arcs, call it $P_i$, is odd and does not contain $c$.
Let $\pi$ be the contraction that replaces $V(S)$ by a single vertex $\bar{v}$. As in Case~1, the family $\{\pi(P_{k+1}), \dots, \pi(P_n)\}$ is a pruned cactus of odd cycles.
Since $k \ge 3$, $V$ partitions $O_n$ into at least $3$ arcs, one of which is adjacent to $P_n$ and does not contain $c$. Hence $O_n \setminus P_n$ contains an edge $uv$ with $u \not\in V(S) \cup V(P_n)$ and $v \in V \cap V(P_n)$, which contradicts \cref{lem:occ}\ref{lem:occ-c}.
\paragraph{Case 3:} For every proper subfamily $\mathcal{K}$ of $\mathcal{O}$, $\abs{V(\union\mathcal{K})} > \abs{\mathcal{K}} + 1$, and some $O_i$ is not Hamiltonian.
Without loss of generality, assume that $O_n$ does not contain some vertex $v$. Set $V := V(K_{n+1}) \setminus \set{v}$. We apply \cref{lem:rainbow-spanning-tree} to the family of subgraphs $O_1[V], \dots, O_{n-1}[V]$ induced by $V$; its hypothesis holds because $\abs{V(\union\mathcal{K})} \ge \abs{\mathcal{K}} + 2$ for every subfamily $\mathcal{K}$ of $\set{O_1, \dots, O_{n-1}}$, and restricting to $V$ removes at most one vertex. We thus obtain an $\set{O_1, \dots, O_{n-1}}$-rainbow tree that spans $V$. Since $O_n$ is an odd cycle avoiding $v$, one of its edges joins two vertices in the same class of the bipartition of this tree, giving rise to an $\mathcal{O}$-rainbow odd cycle, a contradiction.
\end{proof}
\section{Rainbow cycles}\label{sec:rgc}
Here is a cheap bound on the size of the family that ensures a rainbow set with a certain property.
\begin{proposition}\label{lem:naive}
Given a ground set $E$ and a property $\mathcal{P} \subseteq 2^E$ with $\varnothing\not\in \mathcal{P}$ that is closed upwards, every family of $m+1$ subsets $E_1, \dots, E_{m+1}$ of $E$ with each $E_i \in \mathcal{P}$ has a rainbow set in $\mathcal{P}$, where
\[
m := \max\dset{\abs{F}}{F \subseteq E \text{ and }F \not\in \mathcal{P}}.
\]
\end{proposition}
\begin{proof}
Take $R$ to be a rainbow subset of $E$ not in $\mathcal{P}$ of maximum size. Since $R \not\in \mathcal{P}$, $\abs{R} \le m$ and some $E_i$ is not represented in $R$. Because $E_i \in \mathcal{P}$, $E_i \neq \varnothing$, and moreover because $\mathcal{P}$ is closed upwards, $E_i \not\subseteq R$. Take $e \in E_i \setminus R$ and define $R' := R \cup \set{e}$, which is rainbow. By the maximality of $R$, we know that $R' \in \mathcal{P}$.
\end{proof}
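The maximality argument can be rephrased as a greedy procedure. Here is a minimal Python sketch (illustrative only; the predicate `in_property` encodes $\mathcal{P}$ and is assumed closed upwards, false on the empty set, and true on every subset of the ground set larger than $m$):

```python
def greedy_rainbow(sets, in_property):
    """Grow a rainbow set until it lands in the upward-closed property.

    sets: m + 1 sets, each satisfying the property.
    in_property: predicate encoding P; assumed closed upwards, false on
    the empty set, and true on every set larger than m.
    Returns (R, rep) where R is in the property and rep maps each chosen
    element to the index of the set it represents (so R is rainbow).
    """
    R, rep = set(), {}
    while not in_property(R):
        # R is not in P, so |R| <= m and one of the m + 1 sets is unrepresented.
        i = next(j for j in range(len(sets)) if j not in rep.values())
        # sets[i] is not contained in R: otherwise upward closure puts R in P.
        e = next(iter(sets[i] - R))
        R.add(e)
        rep[e] = i
    return R, rep
```

For instance, with the property ``having at least two elements'' (so $m = 1$), any two sets of size two yield a rainbow set of size two.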
For rainbow cycles, simply note that a subgraph of $K_n$ without cycles, that is, a forest, contains at most $n-1$ edges.
\begin{proposition}\label{lem:rgc}
Every family of $n$ cycles in $K_n$ has a rainbow cycle. \qed
\end{proposition}
The sharpness of \cref{lem:rgc} is witnessed by a pruned cactus. But there is a more general construction showing this.
\begin{definition}
A family $\mathcal{O}$ of cycles is a \emph{saguaro} if the family $\mathcal{O}$ is already a pruned cactus, or the family $\mathcal{O}$ can be partitioned into three subfamilies $\mathcal{O}_1, \set{O}, \mathcal{O}_2$ such that $\mathcal{O}_1$ and $\mathcal{O}_2$ are two vertex-disjoint saguaros, and $O$ is an even cycle whose vertices alternate between $V(\union \mathcal{O}_1)$ and $V(\union \mathcal{O}_2)$.
\end{definition}
We prove that this recursive construction is an exhaustive characterization of families of $n$ cycles in $K_{n+1}$ without any rainbow cycle.
\begin{theorem}\label{thm:cycles-char}
For every family $\mathcal{O}$ of $n$ cycles in $K_{n+1}$, no rainbow cycle exists if and only if the family is a saguaro.
\end{theorem}
Our proof strategy parallels the proof of \cref{thm:odd-cycles-char}, with a few detours. A complication arises when an even cycle, after contracting its maximum independent set, becomes a star. To handle this problem, we shall use the following:
\begin{proposition}\label{lem:almost-spanning-tree}
Let $v$ be a vertex of $K_{m+1}$, and let $\mathcal{E} := \set{E_1, \dots, E_m}$ be a family of subgraphs of $K_{m+1}$, where each $E_i$ is either a star centered at $v$ or a cycle. Suppose that $\mathcal{E}$ has no rainbow cycle, and every star in $\mathcal{E}$ is edge-disjoint from all the other members of $\mathcal{E}$. If $E_1$ is a star, then there are $\ell$ cycles in $\mathcal{E}$ avoiding $v$, for some $0 < \ell < m$, whose union with $E_1$ contains at most $\ell+2$ vertices.
\end{proposition}
\begin{proof}
Let $R$ be a maximal $\set{E_2, \dots, E_m}$-rainbow tree that contains $v$ and has at least one edge. We may assume that such a tree exists, since otherwise $E_2, \dots, E_m$ are cycles avoiding $v$, as required.
Without loss of generality, assume that $E_2, \dots, E_k$ are represented in $R$, where $k = \abs{V(R)}$. Since $E_1$ is edge-disjoint from $E_i$ for $i \neq 1$, it is edge-disjoint from $R$. Furthermore, since $\mathcal{E}$ has no rainbow cycle, $R$ does not contain any leaf of $E_1$. Since a star has at least two edges, and hence at least two leaves, it follows that $k \le (m + 1) - 2 = m - 1$.
\begin{claim*}
For every $i>k$, $E_i$ is a cycle that is vertex-disjoint from $R$.
\end{claim*}
The fact that $E_i$ is a cycle follows from the maximality of $R$ and the requirement that every star in $\mathcal{E}$ is edge-disjoint from all other members of $\mathcal{E}$. The disjointness from $R$ follows from the assumption that $\mathcal{E}$ has no rainbow cycle.
Let $\ell = m - k$. By the claim, $E_{k+1}, \dots, E_m$ are the desired $\ell$ cycles since their vertex sets, as well as that of $E_1$, are contained in $(V(K_{m+1}) \setminus V(R)) \cup \set{v}$, which is of size $m + 1 - k + 1 = \ell + 2$.
\end{proof}
Unlike in a pruned cactus, not every cycle in a saguaro is repeated more than once. We say an $\ell$-cycle is \emph{common} in the family if it is repeated exactly $\ell-1$ times. We shall use the following technical lemma that is analogous to \cref{lem:occ}.
\begin{lemma}\label{lem:gcc}
Let $\mathcal{O} := \set{O_1, \dots, O_n}$ be a family of cycles in $K_{n+1}$ without any rainbow cycle, and denote $\mathcal{K} := \set{O_1, \dots, O_k}$, where $k < n$. Suppose that $Q$ is a $(k+1)$-vertex subgraph of $\union\mathcal{K}$, and $V \subseteq V(Q)$ such that every pair of vertices in $V$ can be connected by a $\mathcal{K}$-rainbow path of length at least $2$ in $Q$. Then
\begin{enumerate}[nosep, label=(\alph*)]
\item No edge in $O_{k+1}, \dots, O_n$ has both endpoints in $V$.\label{lem:gcc-a}
\end{enumerate}
Moreover, let $\pi$ be the contraction that replaces $V(Q)$ by a single vertex $\bar{v}$, and suppose $P_{k+1}, \dots, P_n$ are subgraphs of $O_{k+1}, \dots, O_n$ such that each $P_i$ avoids the vertices in $V(Q) \setminus V$. Denote $\bar{\PP} := \set{\pi(P_{k+1}), \dots, \pi(P_n)}$. Then the following holds.
\begin{enumerate}[nosep, label=(\alph*), resume]
\item There is no $\bar{\PP}$-rainbow cycle in $\pi(K_{n+1})$.\label{lem:gcc-b}
\item If $\bar{\PP}$ is a saguaro of cycles, then $\union \bar{\PP}$ is spanning in $\pi(K_{n+1})$. Moreover, for every $\pi(P_i)$ that is common in $\bar{\PP}$, $O_i \setminus P_i$ does not contain any edge of the form $uv$ with $u \not\in V(Q) \cup V(P_i)$ and $v \in V \cap V(P_i)$.\label{lem:gcc-c}
\end{enumerate}
\end{lemma}
We leave the proof to the reader, as it is similar to that of \cref{lem:occ}.
\begin{proof}[Proof of \cref{thm:cycles-char}]
The ``if'' direction is easy to check. We show the ``only if'' direction by induction. The base case $n=2$ is trivial. Suppose $n \ge 3$, and $\mathcal{O} := \set{O_1, \dots, O_n}$ is a family of cycles in $K_{n+1}$ without any rainbow cycle. We break the inductive step into three cases.
\paragraph{Case 1:} There exists a proper subfamily $\mathcal{K}$ of $\mathcal{O}$ such that $\abs{V(\union\mathcal{K})} \le \abs{\mathcal{K}}+1$.
Let $\mathcal{K}$ be maximal with this property. Without loss of generality, $\mathcal{K} = \set{O_1, \dots, O_k}$, where $k := \abs{\mathcal{K}} < n$. Set $V := V(\union \mathcal{K})$. By the induction hypothesis, $\mathcal{K}$ is a saguaro. In particular, as can be observed in any saguaro, $\abs{V} = k + 1$ and every pair of vertices in $V$ can be connected by a $\mathcal{K}$-rainbow path of length at least $2$. For every $i > k$, by \cref{lem:gcc}\ref{lem:gcc-a}, the arcs of $O_i$ defined by its vertices on $V$ are of length at least $2$. If there exists an arc of length $\ge 3$, choose one such arc and denote it by $P_i$. If there is no such arc, set $P_i := O_i$. In case $O_i$ avoids $V$, also set $P_i := O_i$.
Let $\pi$ be the contraction that replaces $V$ by a single vertex $\bar{v}$. Then $\pi(P_i)$ is a cycle, with one possible exception: when the vertices of $O_i$ alternate between $V$ and $V(K_{n+1})\setminus V$. In that case, $P_i = O_i$ and $\pi(P_i)$ is a star centered at $\bar{v}$ (with at least $2$ edges).
We next break the current case into two subcases.
\medskip
\noindent\textbf{Subcase 1.1:} For every $i > k$, $\pi(P_i)$ is a cycle.
\cref{lem:gcc}\ref{lem:gcc-b} and the inductive hypothesis imply that the family $\bar{\PP} := \set{\pi(P_{k+1}), \dots, \pi(P_n)}$ is a saguaro. By \cref{lem:gcc}\ref{lem:gcc-c}, $\bar{v} \in V(\union\bar{\PP})$. As can be observed in any saguaro, there is a common cycle in $\bar{\PP}$ that contains $\bar{v}$. Let this cycle have length $\ell+1$, and assume without loss of generality that it appears in $\bar{\PP}$ as $\pi(P_{k+1}), \dots, \pi(P_{k+\ell})$.
\begin{claim*}
For every $i \in \set{k+1, \dots, k+\ell}$, $P_i = O_i$.
\end{claim*}
Suppose on the contrary that $P_i \neq O_i$ for some $i \in \set{k+1, \dots, k+\ell}$. Then one of the two edges in $O_i$, say $uv$, adjacent to $P_i$, satisfies $u \not\in V$ and $v \in V(P_i) \cap V$, contradicting \cref{lem:gcc}\ref{lem:gcc-c}.
Since $\pi(O_{k+1}), \dots, \pi(O_{k+\ell})$ are the same cycle of length $\ell + 1$, the union of $O_1, \dots, O_{k+\ell}$ contains $k + \ell + 1$ vertices. By the maximality of $\mathcal{K}$, it follows that $k + \ell = n$; in other words, $\pi(O_{k+1}), \dots, \pi(O_n)$ are all the same cycle.
\begin{claim*}
The cycles $O_{k+1}, \dots, O_n$ also coincide.
\end{claim*}
The reason is that if $O_i\neq O_j$ for some $i,j >k$, then there exists an $\set{O_i, O_j}$-rainbow cherry with endpoints in $V$, that can be completed to an $\mathcal{O}$-rainbow cycle by a $\mathcal{K}$-rainbow path.
As in the parallel stage of the proof of \cref{thm:odd-cycles-char}, the last claim implies that $\mathcal{O}$ is a saguaro.
\medskip
\noindent\textbf{Subcase 1.2:} For some $i > k$, $\pi(P_i)$ is a star centered at $\bar{v}$.
Without loss of generality $\pi(P_{k+1})$ is a star centered at $\bar{v}$. By \cref{lem:gcc}\ref{lem:gcc-b}, $\bar{\PP}$ consists of stars centered at $\bar{v}$ and of cycles, and it does not have a rainbow cycle.
\begin{claim*}
Every star in $\bar{\PP}$ is edge-disjoint from all the other members of $\bar{\PP}$.
\end{claim*}
Indeed, assume that for some $i, j > k$ we have an edge $u\bar{v}$ shared by $\pi(P_i)$ and $\pi(P_j)$, where $\pi(P_i)$ is a star centered at $\bar{v}$. Then in $O_i$ the vertex $u$ has two neighbors in $V$ and in $O_j$ it has at least one neighbor in $V$. Hence there is an $\set{O_i,O_j}$-rainbow cherry with endpoints in $V$ and center $u$, which can be completed to an $\mathcal{O}$-rainbow cycle by a $\mathcal{K}$-rainbow path.
By \cref{lem:almost-spanning-tree} it follows that there exist $\ell$ cycles in $\bar{\PP}$ avoiding $\bar{v}$, say $\pi(P_{k+2}), \dots, \pi(P_{k+\ell+1})$, whose union with $\pi(P_{k+1})$ contains at most $\ell + 2$ vertices, one of them being $\bar{v}$. Note that if $\pi(P_i)$ is a cycle avoiding $\bar{v}$, then $P_i = O_i$. Hence the union of $O_1, \dots, O_{k+\ell+1}$ contains at most $(k + 1) + (\ell + 1)$ vertices. To reconcile this with the maximality of $\mathcal{K}$, we must have $k + \ell + 1 = n$, and none of $\pi(P_{k+2}), \dots, \pi(P_n)$ contains $\bar{v}$. Thus all of $O_{k+2}, \dots, O_n$ avoid $V$, and so the union of these $n - k - 1$ cycles contains at most $n - k$ vertices. By the induction hypothesis, the subfamily $\set{O_{k+2}, \dots, O_n}$ is a saguaro of cycles that avoid $V$. Recall that the vertices of $O_{k+1}$ alternate between $V$ and $V(K_{n+1})\setminus V$. Therefore $\mathcal{O}$ is a saguaro.
\paragraph{Case 2:} Every cycle $O_i$ is Hamiltonian.
Let $S$ be an $\mathcal{O}$-rainbow star of maximum size, say $k$. Without loss of generality, assume that the cycles represented in $S$ are $O_1, \dots, O_k$. Denote $\mathcal{K} := \set{O_1, \dots, O_k}$. As in the proof of \cref{thm:odd-cycles-char}, using the fact that $\mathcal{O}$ has no rainbow cycle, we can deduce that $k < n$. As there, if $k = 2$ then all the cycles in $\mathcal{O}$ are identical, so we may assume $k \ge 3$.
Let $c$ be the center of $S$ and $V$ the set of its leaves. Notice that every pair of vertices in $V$ can be connected by a $\mathcal{K}$-rainbow cherry. For an arbitrary $i > k$, by \cref{lem:gcc}\ref{lem:gcc-a}, $V$ is an independent set of $O_i$, and so $k \le (n+1)/2$.
Suppose for a moment that $k = (n+1)/2$. Since $n - k = k - 1 \ge 2$, there are at least $2$ cycles in $\mathcal{O}\setminus\mathcal{K}$, and there is a vertex $u \not\in V(S)$. Note that $V$ partitions both $O_{k+1}$ and $O_{k+2}$ into arcs of length $2$. The two arcs through $u$ obtained respectively from $O_{k+1}$ and $O_{k+2}$ yield an $\set{O_{k+1},O_{k+2}}$-rainbow cherry with endpoints in $V$ and center $u$, which can be completed to an $\mathcal{O}$-rainbow square by a $\mathcal{K}$-rainbow cherry in $S$.
Therefore $k < (n+1)/2$. Now, for every $i > k$, one of the arcs, call it $P_i$, of $O_i$ defined by $V$ is of length at least $3$. By the maximality of $S$, $P_i$ does not contain $c$. Let $\pi$ be the contraction that replaces $V(S)$ by $\bar{v}$. Again the family $\bar{\PP} := \set{\pi(P_{k+1}), \dots, \pi(P_n)}$ is a saguaro of cycles. Say $\pi(P_n)$ is a common cycle in $\bar{\PP}$. Because $O_n$ is partitioned into at least $3$ arcs by $V$, one of the two edges in $O_n$, say $uv$, adjacent to $P_n$ satisfies $u \not\in V(S) \cup V(P_n)$ and $v \in V \cap V(P_n)$, which contradicts \cref{lem:gcc}\ref{lem:gcc-c}.
\paragraph{Case 3:} For every proper subfamily $\mathcal{K}$ of $\mathcal{O}$, $\abs{V(\union\mathcal{K})} > \abs{\mathcal{K}} + 1$, and some $O_i$ is not Hamiltonian.
The analysis of the last case can be taken verbatim from the proof of \cref{thm:odd-cycles-char}.
\end{proof}
\section{Edge-disjoint families}\label{sec:rainbow-cycles-in-multigraphs}
Here we continue to pursue a rainbow cycle, but make the additional assumption that our family consists of pairwise disjoint sets of edges. In terms of colors, this amounts to the natural restriction that every edge of the underlying graph gets just one color.\footnote{When an edge gets two colors, one may or may not want to consider this a rainbow cycle of length $2$ (a \emph{digon}). In this paper we consider only cycles of length $3$ or more. If digons are allowed, then the restriction to edge-disjoint families serves to avoid this trivial kind of rainbow cycle.}
For a family $\mathcal{E}$ of $n$ disjoint edge sets in $K_n$, we no longer need to assume that each set in $\mathcal{E}$ is a cycle in order to guarantee a rainbow cycle. The following trivial observation holds.
\begin{proposition}\label{lem:edge-disjoint}
Every family of $n$ edge-disjoint nonempty subgraphs of $K_n$ has a rainbow cycle. \qed
\end{proposition}
The sharpness of the above is witnessed by a family of single edges forming a spanning tree. But there is a more general construction showing this.
\begin{definition}
A family $\mathcal{E}$ of graphs is a \emph{linkleaf} if it is an empty family (which we consider as having a ground set of one vertex), or the family $\mathcal{E}$ can be partitioned into three subfamilies $\mathcal{E}_1, \{E\}, \mathcal{E}_2$ such that $\mathcal{E}_1$ and $\mathcal{E}_2$ are two (possibly empty) vertex-disjoint linkleaves, and $E$ is a nonempty bipartite graph with respect to the bipartition $V(\union \mathcal{E}_1), V(\union \mathcal{E}_2)$.
\end{definition}
We prove below that this recursive construction is a characterization of families of $n$ edge-disjoint nonempty subgraphs of $K_{n+1}$ without any rainbow cycle.
\begin{theorem}\label{thm:linkleaf}
For every family $\mathcal{E}$ of $n$ edge-disjoint nonempty subgraphs of $K_{n+1}$, no rainbow cycle exists if and only if the family is a linkleaf.
\end{theorem}
The main part of the proof consists of the following lemma.
\begin{lemma}\label{lem:cut}
Let $\mathcal{E}$ be a family of $n$ edge-disjoint nonempty subgraphs of $K_{n+1}$, where $n \ge 1$. If $\mathcal{E}$ has no rainbow cycle then $\mathcal{E}$ has a monochromatic cut, that is, a partition $V(\union \mathcal{E}) = V_1 \cup V_2$ such that exactly one member of $\mathcal{E}$ has an edge (or more) from $V_1$ to $V_2$.
\end{lemma}
\begin{proof}[Proof of \cref{thm:linkleaf} assuming \cref{lem:cut}]
The ``if'' direction can easily be verified from the construction. For the ``only if'' direction we use induction. The base case $n=0$ is trivial. Let $\mathcal{E}$ be a family of $n \ge 1$ edge-disjoint nonempty subgraphs of $K_{n+1}$ without any rainbow cycle. By \cref{lem:cut}, there exists a partition of $V(K_{n+1})$ into $V_1$, say of size $k+1$, and $V_2$, say of size $\ell + 1$, where $k+\ell = n-1$, and a unique member $E$ of $\mathcal{E}$ having an edge or more from $V_1$ to $V_2$. Since $\mathcal{E}$ has no rainbow cycle, by \cref{lem:edge-disjoint}, at most $k$ of the subgraphs have an edge or more in $V_1$ and at most $\ell$ of them have an edge or more in $V_2$. Because the total number of members of $\mathcal{E}$ is $k+\ell + 1$, exactly $k$ of them are contained in $V_1$, exactly $\ell$ of them are contained in $V_2$, and $E$ has only edges from $V_1$ to $V_2$. It follows from the induction hypothesis that $\mathcal{E}$ is a linkleaf.
\end{proof}
\begin{proof}[Proof of \cref{lem:cut}]
Assume for the sake of contradiction that the family $\mathcal{E} := \set{E_1, \dots, E_n}$ has neither a rainbow cycle nor a monochromatic cut. Pick an arbitrary edge $e_i$ from $E_i$ for each $i$, and let $T$ be the rainbow set $\set{e_1, \dots, e_n}$. Since $\mathcal{E}$ has no rainbow cycle, $T$ contains no cycle, and hence, having $n$ edges on $n+1$ vertices, $T$ is a rainbow spanning tree.
We form a digraph $D$ with vertex set $[n]$, in which an arrow goes from $i$ to $j$, for $i\neq j$, if some edge of $E_j$ reconnects $T \setminus \set{e_i}$. Due to the nonexistence of monochromatic cuts in the family, for every $i$, some edge in $E_j$, for some $j \neq i$, reconnects $T \setminus \set{e_i}$. Thus the minimum out-degree of $D$ is at least $1$.
Without loss of generality, let $1\to 2\to \dots \to k\to 1$ be a shortest circuit in $D$; one exists because the minimum out-degree of $D$ is at least $1$. Accordingly, let $f_i$ be an edge in $E_i$ that reconnects $T \setminus \{e_{i-1}\}$, for each $i \in [k]$, under the convention that $e_0 := e_k$. Write $O_i$ for the unique cycle formed by adding $f_i$ to $T$. Certainly $e_{i-1}$ is in $O_i$, and moreover $e_i$ is in $O_i$, as otherwise $O_i$ would be a rainbow cycle. By the minimality of the circuit, for each $i, j \in [k]$ with $i \neq j$ and $i \neq j-1 \pmod k$, we have $i \not\to j$ in $D$, which means that $f_j$ does not reconnect $T \setminus \set{e_i}$, and so $e_i \not\in O_j$.
To summarize, for each $i, j \in [k]$, $e_i \in O_j$ if and only if $i = j$ or $i = j-1 \pmod k$. Set $O := O_1 \bigtriangleup \dots \bigtriangleup O_k$, where $\bigtriangleup$ stands for symmetric difference. Note that
\[
\set{f_1, \dots, f_k} \subseteq O \subseteq (T \setminus \set{e_1, \dots, e_k}) \cup \set{f_1, \dots, f_k}.
\]
By the first inclusion $O$ is nonempty, and by the second inclusion it is rainbow. Since $O$ is Eulerian (every vertex has even degree, $O$ being a symmetric difference of cycles), it decomposes into edge-disjoint cycles, and any one of them is a rainbow cycle.
\end{proof}
\begin{remark}
It has come to our attention that \cref{thm:linkleaf} already appeared in \cite{HHJO}, and very recently it was generalized to binary matroids by B\'erczi and Schwarcz~\cite{BS}. We still include our proof as it is elementary and transparent; moreover, it can easily be adapted to binary matroids. In the adapted proof, the binary-ness of the matroids is needed only in the last step of the proof of \cref{lem:cut}, to show that $O$, a symmetric difference of circuits, is a disjoint union of circuits. The rest of our argument works over arbitrary matroids.
\end{remark}
\section{Concluding remarks}\label{sec:remarks}
\subsection{Rainbow spanning in matroids}
\cref{thm:woc} can be seen as a special case of the following rainbow result for matroids.
\begin{proposition}\label{thm:matroid-spanning}
Let $M$ be a matroid of rank $n$ and let $e$ be an element in the ground set $E$ of $M$. For every family $\set{A_1, \dots, A_n}$ of subsets of $E$, if each $A_i$ contains $e$ in its closure, then the family has a rainbow set that contains $e$ in its closure.
\end{proposition}
\begin{proof}
Let $R$ be a maximal rainbow set such that $R \cup \set{e}$ is independent in $M$. If $e \in R$ we are done, so assume $e \not\in R$. As the rank of $M$ is $n$, we know that $\abs{R} < n$, and hence some $A_i$ is not represented in $R$. Denote by $\mathrm{span}(\cdot)$ the closure operator in $M$. Since $e \in \mathrm{span}(A_i) \setminus \mathrm{span}(R)$, there exists $a \in A_i \setminus \mathrm{span}(R)$. The set $R' := R \cup \set{a}$ is then a rainbow independent set, and by the maximality of $R$, we have that $R' \cup \set{e}$ is dependent, which implies $e \in \mathrm{span}(R')$.
\end{proof}
To see that \cref{thm:woc} follows from \cref{thm:matroid-spanning}, note that for every edge set $O$ whose vertex set is contained in $[n]$, $O$ contains an odd cycle if and only if $e_0 \in \mathrm{span}(A)$, where $e_0, e_1, \dots, e_n$ form the standard basis of $\mathbb{F}_2^{n+1}$ and $A := \dset{e_0 + e_i + e_j}{\set{i,j}\in O}$. This observation allows us to go back and forth between odd cycles in $K_n$ and subsets of $E$ that contain $e_0$ in their closures, where
\[
E := \dset{(x_0, x_1, \dots, x_n) \in \mathbb{F}_2^{n+1}}{x_1 + \dots + x_n = 0}
\]
is the ground set of a binary matroid of rank $n$.
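This correspondence can be checked computationally. The sketch below is ours (not from the paper): it encodes vectors of $\mathbb{F}_2^{n+1}$ as Python integer bitmasks, with bit $0$ playing the role of $e_0$ and bit $i$ that of $e_i$, and tests membership of $e_0$ in the span of $A$ by Gaussian elimination.

```python
def in_span(target, vecs):
    # Gaussian elimination over GF(2); vectors are encoded as int bitmasks
    basis = {}                               # leading bit -> basis vector
    def reduce(v):
        for bit in sorted(basis, reverse=True):
            if (v >> bit) & 1:
                v ^= basis[bit]
        return v
    for v in vecs:
        v = reduce(v)
        if v:
            basis[v.bit_length() - 1] = v
    return reduce(target) == 0

def e0_in_closure(O):
    # A := {e_0 + e_i + e_j : {i,j} in O}; bit 0 encodes e_0, bit i encodes e_i
    A = [1 ^ (1 << i) ^ (1 << j) for (i, j) in O]
    return in_span(1, A)

triangle = [(1, 2), (2, 3), (1, 3)]              # contains an odd cycle
square = [(1, 2), (2, 3), (3, 4), (1, 4)]        # contains no odd cycle
print(e0_in_closure(triangle), e0_in_closure(square))  # True False
```

Here `triangle` reports `True` and `square` reports `False`, matching the characterization of odd cycles via the closure.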
\subsection{Rainbow even cycles}
Perhaps surprisingly, the analog of \cref{thm:woc} and \cref{lem:rgc} for even cycles is false. \cref{fig:no-rainbow-even-cycles} shows a family of $6$ squares ($4$-cycles) on $6$ vertices without a rainbow even cycle. By gluing copies of this construction, so that every new copy shares one vertex with the union of the previous ones, we get a family of roughly $6n/5$ squares on $n$ vertices without a rainbow even cycle.
\begin{figure}
\centering
\begin{tikzpicture}[scale=1, very thick]
\coordinate (a1) at (0,0);
\coordinate (a2) at (0,1);
\coordinate (a3) at (1,0);
\coordinate (a4) at (1,1);
\coordinate (a5) at (2,0);
\coordinate (a6) at (2,1);
\draw[green] (a1) -- (a4);
\draw[green, transform canvas={xshift=1.414pt, yshift=-1.414pt}] (a1) -- (a4);
\draw[green, transform canvas={xshift=-1.414pt, yshift=1.414pt}] (a1) -- (a4);
\draw[green] (a2) -- (a3);
\draw[green, transform canvas={xshift=1.414pt, yshift=1.414pt}] (a2) -- (a3);
\draw[green, transform canvas={xshift=-1.414pt, yshift=-1.414pt}] (a2) -- (a3);
\draw[green] (a1) -- (a3);
\draw[green, transform canvas={yshift=-2pt}] (a1) -- (a3);
\draw[green, transform canvas={yshift=2pt}] (a1) -- (a3);
\draw[green] (a2) -- (a4);
\draw[green, transform canvas={yshift=-2pt}] (a2) -- (a4);
\draw[green, transform canvas={yshift=2pt}] (a2) -- (a4);
\draw[red] (a3) -- (a6);
\draw[red, transform canvas={xshift=1.414pt, yshift=-1.414pt}] (a3) -- (a6);
\draw[red, transform canvas={xshift=-1.414pt, yshift=1.414pt}] (a3) -- (a6);
\draw[red] (a4) -- (a5);
\draw[red, transform canvas={xshift=-1.414pt, yshift=-1.414pt}] (a4) -- (a5);
\draw[red, transform canvas={xshift=1.414pt, yshift=1.414pt}] (a4) -- (a5);
\draw[red] (a3) -- (a4);
\draw[red, transform canvas={xshift=-2pt}] (a3) -- (a4);
\draw[red, transform canvas={xshift=2pt}] (a3) -- (a4);
\draw[red] (a5) -- (a6);
\draw[red, transform canvas={xshift=-2pt}] (a5) -- (a6);
\draw[red, transform canvas={xshift=2pt}] (a5) -- (a6);
\foreach \i in {1,2,3,4,5,6}
\node[vtx] at (a\i) {};
\end{tikzpicture}
\caption{A family of $6$ squares on $6$ vertices without any rainbow even cycle.}\label{fig:no-rainbow-even-cycles}
\end{figure}
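The construction can also be verified by brute force. The sketch below is ours; it assumes (reading the triple-drawn edges of \cref{fig:no-rainbow-even-cycles}) that the family consists of three copies of the green $4$-cycle $1$-$3$-$2$-$4$ and three copies of the red $4$-cycle $3$-$4$-$5$-$6$, so a rainbow even cycle could use at most three edges of each colour, one per copy.

```python
from itertools import combinations

green = [(1, 3), (2, 3), (1, 4), (2, 4)]  # the 4-cycle 1-3-2-4, three copies
red = [(3, 4), (3, 6), (4, 5), (5, 6)]    # the 4-cycle 3-4-5-6, three copies
edges = [(e, 'g') for e in green] + [(e, 'r') for e in red]

def is_single_cycle(es):
    # every vertex has degree 2 and the chosen edges are connected
    adj = {}
    for (u, v), _ in es:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    if any(len(nb) != 2 for nb in adj.values()):
        return False
    seen, stack = set(), [next(iter(adj))]
    while stack:
        w = stack.pop()
        if w in seen:
            continue
        seen.add(w)
        stack.extend(adj[w])
    return seen == set(adj)

# a rainbow even cycle could take at most 3 edges of each colour (one per copy)
rainbow_even_cycle = any(
    is_single_cycle(es)
    and sum(c == 'g' for _, c in es) <= 3
    and sum(c == 'r' for _, c in es) <= 3
    for k in (4, 6, 8)
    for es in combinations(edges, k)
)
print(rainbow_even_cycle)  # False
```

The search confirms that the only even cycles in the union are the two monochromatic squares, each of which needs four edges of one colour while only three copies are available.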
To get an upper bound on the number of even cycles needed to guarantee a rainbow even cycle, we observe that each connected component of a graph without even cycles is a cactus graph.\footnote{Indeed, if two odd cycles share two vertices, this gives rise to an even cycle.} Note that the densest cactus graph on $n$ vertices is a triangular cactus graph\footnote{A cactus graph is triangular when every cycle in it is a triangle.} (with one bridge if $n$ is even). Thus the maximum number of edges in a graph on $n$ vertices without even cycles is $\lfloor{3(n-1)/2}\rfloor$. From \cref{lem:naive} we have the following rainbow result.
\begin{proposition}\label{lem:even-cycles}
Every family of $\floor{3(n-1)/2}+1$ even cycles in $K_n$ has a rainbow even cycle.\qed
\end{proposition}
This upper bound is not sharp: for example, $4$ even cycles on $4$ vertices always have a rainbow even cycle.\footnote{The upper bound becomes sharp if the family is allowed to consist of digons. This is seen by taking a triangular cactus graph on $n$ vertices with $\lfloor 3(n-1)/2 \rfloor$ edges, and using one digon for each of its edges. Strictly speaking, however, this takes us from graphs to multigraphs.} We leave the determination of the exact number needed in general (between roughly $6n/5$ and $3n/2$) as an open problem.
\section*{Acknowledgements}
We acknowledge financial support from the Ministry of Education and Science of the Russian Federation, in the framework of MegaGrant no.\ 075-15-2019-1926, during the first author's work on \cref{sec:roc,sec:rgc} of the paper.
\bibliographystyle{plain}
https://arxiv.org/abs/2010.02211
Likelihood-based solution to the Monty Hall puzzle and a related 3-prisoner paradox

Abstract: The Monty Hall puzzle has been solved and dissected in many ways, but always using probabilistic arguments, so it is considered a probability puzzle. In this paper the puzzle is set up as an orthodox statistical problem involving an unknown parameter, a probability model and an observation. This means we can compute a likelihood function, and the decision to switch corresponds to choosing the maximum likelihood solution. One advantage of the likelihood-based solution is that the reasoning applies to a single game, unaffected by the future plan of the host. I also describe an earlier version of the puzzle in terms of three prisoners: two to be executed and one released. Unlike the goats and the car, these prisoners have consciousness, so they can think about exchanging punishments. When two of them do that, however, we have a paradox, where it is advantageous for both to exchange their punishment with each other. Overall, the puzzle and the paradox are useful examples of statistical thinking, so they are excellent teaching topics.

\section{The puzzle and the paradox}
First, here is the Monty Hall puzzle:
\begin{quote}
You are a contestant in a game show and presented with 3 closed doors. Behind one is a car, and behind the others only goats. You pick one door (let's call that Door 1), then the host \textit{will} open another door that reveals a goat. With two un-opened doors left, you are offered a switch. Should you switch from your initial choice?
\end{quote}
It is well known that you should switch: doing so will increase your
probability of winning the car from 1/3 to 2/3. The problem is
actually how to provide a convincing explanation for those who think
the chance of winning for either of the un-opened door is 50-50, so
there is no reason to switch. The puzzle, first published by Selvin
(1975ab), was inspired by a television game show \textit{Let's Make
a Deal} originally hosted by Monty Hall. It later became famous
after appearing in Marilyn vos Savant's "Ask Marilyn" column in
Parade magazine in 1990. It generated its own literature and many
passionate arguments, \textit{even} among those who agree that you
should switch.
Martin Gardner's (1959) article on the same puzzle in terms of three prisoners (below) was titled `Problems involving questions of probability and ambiguity.' He started by referring to the American polymath Charles Peirce's observation that `in no other branch of mathematics is it so easy for experts to blunder as in probability theory.' The main problem is that some probability problems may contain subtle ambiguities. What is so clear to me -- such as the statement of the Monty Hall problem above -- may contain hidden assumptions I forget to mention or am not even aware of. For example, as will become clear later, the standard solution assumes that, if you happen to pick the winning door, then the host will open a door randomly with probability 0.5. For me that is obvious, as it makes the problem solvable and the solution neat. If we explicitly drop this assumption, then there is no neat solution; but our discussion is not going in that direction.
Instead, we head to a more interesting paradox. In Martin Gardner's version there are three prisoners A, B and C, two of whom will be executed and one released. Being released is like winning the car. Suppose A asks the guard: since for sure either B or C will be executed, there is no harm in telling me which one. Suppose the guard says C will be executed. Should A ask to switch his punishment with B's? By the same logic as in the Monty Hall problem, the answer must be yes: A should switch with B. But what if B asks the guard the same question, which of A or C will be executed, and the guard also answers C? Then it seems also advantageous for B to switch with A. Now we have a paradox: how can it be advantageous for both A and B to switch?
We can make another version of the paradox: Suppose A just \textit{heard the breaking news} that C has been executed (no guard is involved). Should A ask to switch his punishment with B? What logic should apply here? If the same as before, then he should switch. But then, the same logic applies to B, and we arrive at the same paradox. If not the same, then having the guard to answer the questions matters. But why does it matter how A finds out that C is (to be) executed?
In probability-based reasoning we consider the probability of
winning if you stay with your initial choice versus if you switch.
This is relatively easy -- though hard enough for some people
-- with the Monty Hall puzzle, but it gets really challenging if we
want to explain the 3-prisoners paradox. So, instead, the puzzle and
the paradox will be set up as an orthodox statistical problem
involving an unknown parameter, a probability model and an
observation. This means we can compute a likelihood function, and
the decision to switch corresponds to choosing the maximum
likelihood solution. In the Monty Hall puzzle, one advantage of the
likelihood-based solution is that the reasoning applies to a single
game, unaffected by the future plan of the host. In addition, the
likelihood construction will show explicitly all the technical
assumptions made in the game.
\section{Likelihood-based solutions}
\subsubsection*{Monty Hall problem}
Let $\theta$ be the location of the car, which is completely unknown to you. Your choice is called Door 1; for you the car could be anywhere, but of course the host knows where it is. Let $y$ be the door opened by the host. Since he has to avoid opening the prized door, $y$ is affected by both $\theta$ and your choice. The probabilities of $y$ under each $\theta$ are given in this table:
\begin{center}
\begin{tabular}{rcccc}
& $\theta=1$ & $\theta=2$ & $\theta=3$ & $\widehat{\theta}$ \\
\hline
$y=1$ & 0 & 0 & 0 & -\\
2 & 0.5&0 & {1}& 3\\
3 & 0.5& {1} & 0& 2\\
\hline
Total & 1 & 1 & 1
\end{tabular}
\end{center}
For example, it is clear that $y$ cannot be 1, because the host cannot open your door (duh!), so it has zero probability under any $\theta$. If your Door 1 is the winning door ($\theta=1$), the host is \textit{assumed} to choose randomly between Doors 2 and 3. If $\theta=2$, he can only choose Door 3. Finally, if $\theta=3$, he can only choose Door 2. So, the `data' in this game is the choice of the host. Reading the table row-wise gives the likelihood function of $\theta$ for each $y$, so the maximum likelihood estimate is obvious:
\begin{center}
If $y=2$, then $\widehat{\theta}=3$ (so you should switch from 1 to 3.) \\
If $y=3$, then $\widehat{\theta}=2$ (switch from 1 to 2.)
\end{center}
The increase in the likelihood of winning by switching is actually only twofold, so it is not enormous. But the increase in prize from a goat to a car is enormous, so a switch is warranted.
In this likelihood formulation, the problem is a classic statistical problem with data following a model indexed by an unknown parameter. For orthodox non-Bayesians, there is no need to assume that the car is randomly located, so no probability is involved on $\theta$; it is enough to assume that you are completely ignorant about it. From the table, it is also clear that, when $\theta=1$, the host does not need to randomize with probability 0.5; he can do so with any probability at all, and you will not reduce your likelihood of winning by switching.
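The table can be reproduced mechanically. A minimal sketch (ours, with exact rational arithmetic; the helper name is our own) encodes only the stated assumptions: the host never opens Door 1 or the winning door, and randomizes uniformly when $\theta = 1$.

```python
from fractions import Fraction

def host_likelihood(theta, y):
    # probability that the host opens door y when the car is behind
    # door theta and you hold Door 1: he never opens Door 1 or the
    # winning door, and randomizes uniformly when theta = 1
    if y == 1 or y == theta:
        return Fraction(0)
    allowed = [d for d in (2, 3) if d != theta]
    return Fraction(1, len(allowed))

for y in (2, 3):
    lik = {th: host_likelihood(th, y) for th in (1, 2, 3)}
    print(y, lik, max(lik, key=lik.get))  # y=2 -> MLE 3, y=3 -> MLE 2
```

Reading off the maximizer for each observed $y$ reproduces the switching rule in the text.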
\subsection*{Probability or likelihood?}\label{sec:3}
Previous solutions of the Monty Hall puzzle are typically given in terms of probability. Why bother with the likelihood? Imagine a forgetful host in the Monty Hall game: he forgets which door has the car, so he opens a door at random. If it reveals a goat, then the game can go on; if it reveals the car, then he makes excuses, and the game is cancelled and nobody wins. Suppose in your particular game, a goat is revealed. Is it still better to switch? As before, let's call your chosen door Door 1. Define the data $y$ as the \textit{un-opened door} (other than yours) if the opened one is a goat (so the game is on); if the opened one is a car, then set $y\equiv 4$ (and the game is off). The probability table is now:
\begin{center}
\begin{tabular}{rcccc}
& $\theta=1$ & $\theta=2$ & $\theta=3$ & $\widehat{\theta}$ \\
\hline
$y=2$ & 0.5 & 0.5 & 0 & \{1,2\}\\
$3$ & 0.5 & 0 & 0.5 & \{1,3\}\\
4 & 0& 0.5 & 0.5& -\\
\hline
Total & 1 & 1 & 1
\end{tabular}
\end{center}
So, when $y=2$ or 3 and the game is still on, your likelihood of winning the car with your original door or with the other unopened door is equal, and there is no benefit in switching. Comparing with the original version, this means that the evidence of an open door revealing a goat is not sufficient to conclude that switching is beneficial. We must also know whether the reveal was intentional or accidental. But how about the future plan of the host? Should it also affect your reasoning? In technical probability-based reasoning it does matter, because probability is not meant to apply to a specific game.
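The forgetful-host table can be derived the same way; a sketch (ours), under the conventions above for the data $y$:

```python
from fractions import Fraction

def forgetful_likelihood(theta, y):
    # the host opens Door 2 or 3 uniformly at random; y is the remaining
    # un-opened door if a goat appears, and y = 4 if the car is revealed
    p = Fraction(0)
    for opened in (2, 3):
        outcome = 4 if opened == theta else ({2, 3} - {opened}).pop()
        if outcome == y:
            p += Fraction(1, 2)
    return p

# with the game still on (say y = 3), staying and switching are tied:
print([forgetful_likelihood(th, 3) for th in (1, 2, 3)])  # 1/2, 0, 1/2
```

The tie between $\theta = 1$ and $\theta = 3$ when $y = 3$ is exactly the "no benefit in switching" conclusion above.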
Say in the current game the host \textit{intentionally} opens a door with a goat, but in the future he \textit{plans} to open a door randomly. What logic applies to the current game? Non-Bayesians certainly cannot apply the orthodox probability-based argument to the current game. But, \emph{the likelihood-based reasoning still applies to the current game, unaffected by the future plan of the host.}
\subsection*{3-prisoner paradox: guard involved}
The likelihood-based reasoning also helps solve the 3-prisoner paradox.
Let $\theta$ be the identity of the prisoner to be released; to avoid
confusion, set $\theta=$ 1, 2 or 3 for the three prisoners A, B and C. Let $A_1$ be the guard's answer to prisoner A, with the answers likewise denoted by 1, 2 and 3. Then we have indeed the same probability table as for the game show:
\begin{center}
\begin{tabular}{rcccc}
& $\theta=1$ & $\theta=2$ & $\theta=3$ & $\widehat{\theta}$ \\
\hline
$A_1=1$ & 0 & 0 & 0 & -\\
2 & 0.5&0 & {1}&3\\
3 & 0.5& {1} & 0&2\\
\hline
Total & 1 & 1 & 1
\end{tabular}
\end{center}
Here, the guard cannot answer $1$ to prisoner A, and he will randomize whenever possible. So, we get the same maximum-likelihood estimate as above, leading to switching as a good strategy.
When B asks the same question, let $A_2$ be the guard's answer. The joint distribution of $(A_1,A_2)$ is as follows. It can be derived under the same requirements: the guard cannot tell the questioner that he would be executed, and he must randomize whenever possible.
\begin{center}
\begin{tabular}{rccc}
& $A_1=1$ & $A_1=2$ & $A_1=3$ \\
\hline
$\theta=1, A_2=1$ & 0 & 0 & 0 \\
$A_2=2$ & 0&0 &0\\
$A_2=3$ & 0& 0.5 & 0.5\\
\hline
$\theta=2, A_2=1$ & 0 & 0 & 0.5 \\
$A_2=2$ & 0&0 &0\\
$A_2=3$ & 0& 0 & 0.5\\
\hline
$\theta=3, A_2=1$ & 0 & 1 & 0 \\
$A_2=2$ & 0&0 &0\\
$A_2=3$ & 0& 0 & 0\\
\hline
\end{tabular}
\end{center}
Keeping only the non-trivial scenarios, the table can be simplified to
\begin{center}
\begin{tabular}{rcccccc}
& $\theta=1$ & $\theta=2$ & $\theta=3$ & $\widehat{\theta}$ & Better for A & Better for B \\
\hline
$(A_1,A_2)$=(2,1) & 0 & 0 & 1 & 3 & Switch with C& Switch with C\\
(2,3) & 0.5&0 & 0 & 1 & Don't switch! & Switch with A\\
(3,1) & 0& 0.5 & 0 & 2 & Switch with B & Don't switch!\\
(3,3) &0.5& 0.5& 0 & \{1,2\} & None& None\\
\hline
Total & 1 & 1 & 1
\end{tabular}
\end{center}
Based on the relevant outcome of the story $(A_1=3,A_2=3)$, the prisoners A and B actually have equal likelihood of being released. So, a switch confers no advantage to either side, and there is no paradox.
How can the paradox appear? Let's consider who has access to what information. If A only knows $A_1=3$, but does not know nor presume the existence of $A_2$, then his reasoning is incomplete. Similarly, B only knows $A_2$. So, the paradox appears because of incomplete information by each side. An external agent -- e.g. the guard -- who knows both answers $A_1$ and $A_2$ can see there is no advantage in switching.
Suppose both A and B know that each has asked the relevant question, but A only knows his own answer $A_1$ and B only knows $A_2$. Furthermore, assume that a switch can only happen by mutual consent. Suppose $A_1=3$; then, for A, a switch is advantageous only when $A_2=1$, and neutral when $A_2=3$. But $A_2=1$ means B is told that A will be executed, so B will not be willing to switch with A. The same logic applies to B. So, when both sides are willing, they know the switch must be neutral, and there is no paradox.
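The joint table and the resulting likelihoods can be enumerated directly; the sketch below (ours) encodes the two rules stated above: the guard never names the questioner, and he randomizes uniformly whenever possible.

```python
from fractions import Fraction
from itertools import product

def joint(theta):
    # guard answers A about {B=2, C=3} and B about {A=1, C=3}, truthfully
    # naming an executed prisoner, uniformly at random when there is a choice
    to_a = [p for p in (2, 3) if p != theta]
    to_b = [p for p in (1, 3) if p != theta]
    p = Fraction(1, len(to_a) * len(to_b))
    dist = {}
    for a1, a2 in product(to_a, to_b):
        dist[(a1, a2)] = dist.get((a1, a2), 0) + p
    return dist

# likelihoods of the outcome in the story, (A_1, A_2) = (3, 3):
lik = {th: joint(th).get((3, 3), Fraction(0)) for th in (1, 2, 3)}
print(lik)  # theta = 1 and theta = 2 tie at 1/2, so a switch is neutral
```

The tie at $(A_1, A_2) = (3,3)$ reproduces the simplified table's last row, and `joint(3)` puts all its mass on $(2,1)$, as in the first row.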
\subsection*{3-prisoner paradox: news version}
How about the news break version? As before, let $\theta$ be the identity of the prisoner to be released. Now let the data $y$ be the first prisoner reported to be executed, assuming that the executions occur in random order. Then we can derive the probabilities under different $\theta$'s.
\begin{center}
\begin{tabular}{rcccc}
& $\theta=1$ & $\theta=2$ & $\theta=3$ & $\widehat{\theta}$ \\
\hline
$y=1$ & 0 & 0.5 & 0.5 & \{2,3\}\\
2 & 0.5&0 & 0.5& \{1,3\}\\
3 & 0.5&0.5 & 0&\{1,2\} \\
\hline
Total & 1 & 1 & 1
\end{tabular}
\end{center}
So, when $y=3$, A and B have equal likelihood of being released. In other words, when both A and B heard that C has been executed, there is no advantage for either of them to switch.
Why does the explanation look so trivial? Well, we can make it more complicated. The fact that A heard the news means that he was not the first to be executed. Suppose A was in fact told that, if he is to be executed (which he might not be), then he will not be the first. Does that affect how he should react to the news? The probability table becomes:
\begin{center}
\begin{tabular}{rcccc}
& $\theta=1$ & $\theta=2$ & $\theta=3$ & $\widehat{\theta}$ \\
\hline
$y=1$ & 0 & 0 & 0 & -\\
$2$ & 0.5 & 0 & 1 & 3\\
3 & 0.5& 1 & 0& 2\\
\hline
Total & 1 & 1 & 1
\end{tabular}
\end{center}
Hey, this looks familiar! Of course, it is exactly the same table as for the Monty Hall puzzle above. So, yes, in this case the news is informative: A should indeed ask to switch with B. But what did they tell B? Suppose B knows that A is told that A will not be the first to be executed, but B himself is not told that. How should he react to the news? From the table, when $y=3$, then $\widehat{\theta}=2$, so B should not switch with A. If both are told -- and both know this -- that they won't be the first to be executed, then the table becomes:
\begin{center}
\begin{tabular}{rcccc}
& $\theta=1$ & $\theta=2$ & $\theta=3$ & $\widehat{\theta}$ \\
\hline
$y=1$ & 0 & 0 & 0 & -\\
$2$ & 0 & 0 & 0 & -\\
3 & 1& 1 & 0& \{1,2\}\\
\hline
\end{tabular}
\end{center}
So, when $y=3$, we are back to the situation of neutral switch.
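All three news-version tables come from one computation. The sketch below (ours; the function name is our own) models the promise `not the first to be executed' as a restriction on the execution order, returning likelihood $0$ when the promise is unsatisfiable under a given $\theta$, which is what empties the $\theta=3$ column of the last table.

```python
from fractions import Fraction
from itertools import permutations

def first_reported(theta, y, not_first=()):
    # likelihood that prisoner y is the first reported execution, when
    # theta is released and the other two are executed in random order;
    # prisoners in not_first were promised (if executed) not to go first
    executed = [p for p in (1, 2, 3) if p != theta]
    orders = [o for o in permutations(executed) if o[0] not in not_first]
    if not orders:                 # the promise is unsatisfiable under theta
        return Fraction(0)
    return Fraction(sum(o[0] == y for o in orders), len(orders))

# plain news, y = 3: theta = 1 and theta = 2 tie at 1/2, a neutral switch
print(first_reported(1, 3), first_reported(2, 3))              # 1/2 1/2
# only A promised not to go first: the MLE given y = 3 is theta = 2
print(first_reported(1, 3, (1,)), first_reported(2, 3, (1,)))  # 1/2 1
# both promised: theta = 1 and theta = 2 tie again, back to neutral
print(first_reported(1, 3, (1, 2)), first_reported(2, 3, (1, 2)))  # 1 1
```

The three printed rows match the $y=3$ rows of the three tables in this subsection.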
\section{Conclusion}
Not surprisingly, different decisions by the player in the Monty
Hall problem or by the prisoners depend on different model
setup/assumptions and available data. The same data, e.g. an open
Door 3 revealing a goat, can be interpreted differently depending on
the setup. The likelihood-based calculation requires explicit
assumptions, so it clarifies them. We also compare the probability-
vs likelihood-based reasoning; the advantage of the latter is its
applicability to a single game, unaffected by future intention or plans. Gill (2011) provided a clear discussion of the assumptions associated with the standard probability-based solution of the puzzle as well as a two-person game that shows switching as a minimax solution. The model formulation here agrees with Gill that the Monty Hall problem is not a probability puzzle; he considered it a mathematical modelling problem. Here I have phrased it as an orthodox statistical problem so it -- and the related paradox -- can be a useful example in likelihood-based modelling.
\section{References}
\begin{description}
\item Gardner, Martin (1959). "Mathematical Games: Problems involving questions of probability and ambiguity". \textit{Scientific American}, 201 (4): 174–182.
\item Gill, Richard (2011). The Monty Hall problem is not a probability puzzle* (It's a challenge in mathematical modelling). \textit{Statistica Neerlandica}, 65 (1), 58–71.
\item Selvin, Steve (1975a). "A problem in probability". \textit{American Statistician}, 29 (1), 67–71.
\item Selvin, Steve (1975b). "On the Monty Hall problem". \textit{American Statistician}, 29 (3), 134.
\end{description}
\end{document}
https://arxiv.org/abs/1610.07227
Sums of squares in Quaternion rings

Abstract: Lagrange's Four Squares Theorem states that any positive integer can be expressed as the sum of four integer squares. We investigate the analogous question over Quaternion rings, focusing on squares of elements of Quaternion rings with integer coefficients. We determine the minimum necessary number of squares for infinitely many Quaternion rings, and give global upper and lower bounds.

\section{Introduction and Definitions}
\subsection*{Waring's Problem}
\begin{theorem}[Waring's Problem/Hilbert-Waring Theorem]
For every integer $k \geq 2$ there exists a positive integer $g(k)$ such that every positive integer is the sum of at most $g(k)$ $k$-th powers of integers.
\end{theorem}
Generalizations of Waring's Problem have been studied in a variety of settings (for example, number fields \cite{siegel} and polynomial rings over finite fields \cite{car}). Additionally, calculation of the exact values of $g(k)$ for all $k \geq 2$ was completed only relatively recently. For an excellent and thorough exposition of the research on Waring's Problem and its generalizations, see Vaughan and Wooley \cite{wooley}. We will examine a generalization of Waring's Problem to Quaternion rings.
\begin{definition} Let $a, b \in \mathbb{Z}$ and let $Q_{a,b}$ denote the Quaternion ring
\[ \{\alpha_0 + \alpha_1 {\bf i} + \alpha_2 {\bf j} + \alpha_3 {\bf k} \mid \alpha_n \in {\mathbb Z}\}, \qquad {\bf i}^2 = -a, \quad {\bf j}^2 = -b, \quad {\bf i}{\bf j}=-{\bf j}{\bf i}={\bf k}.\]
Let $Q_{a,b}^n$ denote the additive group generated by all $n$th powers in $Q_{a,b}$.
\end{definition}
Note here that ${\bf k}^2 = -ab$, and that if $a = b = 1$, we have what are called the {\em Lipschitz Quaternions}. We then have the following analogue of Waring's Problem.
\begin{conjecture}
For every integer $k \geq 2$ and all positive integers $a,b$ there exists a positive integer $g_{a,b}(k)$ such that every element of $Q_{a,b}^k$ can be written as the sum of at most $g_{a,b}(k)$ $k$-th powers of elements of $Q_{a,b}$.
\end{conjecture}
\subsection*{Main Results}
We will examine sums of squares in Quaternion rings; that is, when $k=2$. We are therefore looking to generalize Lagrange's Four Squares Theorem, the inspiration for Waring's initial conjecture.
\begin{theorem}[Lagrange's Four Squares Theorem] \label{lag4}
Any positive integer can be written as the sum of four integer squares.
\end{theorem}
We prove the following general result giving the upper and lower bounds for $g_{a,b}(2)$ for any positive integers $a$ and $b$.
\begin{theorem} \label{sqbounds}
For all positive integers $a,b$, we have
\[3 \leq g_{a,b}(2) \leq 5.\]
Additionally, each possible value of $g_{a,b} (2)$ (i.e., 3, 4, and 5) occurs infinitely often.
\end{theorem}
We prove the general upper and lower bounds in Section \ref{sec:gbc2}; more specific results, including the proof of the latter half of Theorem \ref{sqbounds}, are given in Section \ref{valueg}. Note that for any positive integers $a$ and $b$, $Q_{a,b}$ and $Q_{b,a}$ are naturally isomorphic; we therefore generally assume that $a \leq b$.
\section{Squares of Quaternions -- Upper and Lower Bounds \label{sec:gbc2}}
In this section we prove the upper and lower bounds of Theorem \ref{sqbounds}. We will use the following classical result on sums of squares extensively; for this result and a more general look at sums of squares of integers see \cite{sumsofsq}.
\begin{theorem}[Legendre's Three Squares Theorem]
\label{leg3}
A positive integer $N$ can be written as the sum of three integer squares if and only if $N$ is not of the form $4^m(8\ell + 7)$ with $\ell, m$ non-negative integers.
\end{theorem}
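Legendre's condition is easy to test; a small sketch (the function name is ours):

```python
def sum_of_three_squares(N):
    # Legendre: N > 0 is a sum of three integer squares
    # iff N is not of the form 4^m (8l + 7)
    while N % 4 == 0:
        N //= 4
    return N % 8 != 7

print([n for n in range(1, 30) if not sum_of_three_squares(n)])  # [7, 15, 23, 28]
```

The excluded values below $30$ are $7$, $15$, $23$, and $28 = 4 \cdot 7$.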
To study $g_{a,b} (2)$, we first need to establish the general form of squares of quaternions, and to characterize elements of $Q^2_{a,b}$.
Let $\alpha=\alpha_0+\alpha_1{\bf i}+\alpha_2{\bf j}+\alpha_3{\bf k} \in Q_{a,b}$. We call $\alpha_0$ the {\em real} part of $\alpha$ and $\alpha_1{\bf i}+\alpha_2{\bf j}+\alpha_3{\bf k}$ the {\em pure} part of $\alpha$, with $\alpha_1, \alpha_2, \alpha_3$ the {\em pure coefficients}. Then note that
\begin{equation} \label{al2}
\alpha^2=\alpha_0^2-a\alpha_1^2-b\alpha_2^2 - ab\alpha_3^2 + 2\alpha_0\alpha_1{\bf i} + 2\alpha_0\alpha_2{\bf j} + 2\alpha_0\alpha_3{\bf k}.
\end{equation}
We therefore have that all the pure coefficients of squares of quaternions, and hence the pure coefficients of all elements of $Q_{a,b}^2$, are even. Additionally, any set of even pure coefficients can be achieved (for example, set $\alpha_0 = 1$ in Equation (\ref{al2})), as can any negative real coefficient (since we are assuming $a,b \geq 1$). We therefore have
\begin{equation} \label{sqform}
Q_{a,b}^2 = \{\alpha_0 + 2\alpha_1 {\bf i} + 2\alpha_2 {\bf j} + 2\alpha_3 {\bf k} \mid \alpha_n \in \mathbb{Z}\}.
\end{equation}
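Equation (\ref{al2}) and the evenness of the pure coefficients can be sanity-checked over a small box of coefficients; the sketch below (ours) does this for the sample values $a=2$, $b=3$.

```python
from itertools import product

def qsquare(q, a, b):
    # expand (q0 + q1 i + q2 j + q3 k)^2 using i^2 = -a, j^2 = -b, k^2 = -ab
    q0, q1, q2, q3 = q
    return (q0*q0 - a*q1*q1 - b*q2*q2 - a*b*q3*q3,
            2*q0*q1, 2*q0*q2, 2*q0*q3)

a, b = 2, 3
# every square has even pure coefficients ...
assert all(c % 2 == 0
           for q in product(range(-4, 5), repeat=4)
           for c in qsquare(q, a, b)[1:])
# ... and any even pure part is realized by taking real part q0 = 1:
print(qsquare((1, 5, -2, 7), a, b)[1:])  # (10, -4, 14)
```

Taking $\alpha_0 = 1$ doubles the chosen pure coefficients, exactly as used in the derivation of Equation (\ref{sqform}).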
\medskip
In 1946, Niven computed $g_{1,1}(2)$ and studied extensions of Waring's Problem in various other settings, including the complex numbers.
\begin{theorem}[Niven \cite{nivenquat}]
Every element in $Q^2_{1,1}$ can be written as the sum of at most three squares in $Q_{1,1}$. Additionally, $6+2{\bf i}$ is not expressible as the sum of two squares in $Q_{1,1}$, so $g_{1,1}(2) = 3$.
\end{theorem}
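Both parts of Niven's statement can be probed computationally. The sketch below (ours) searches a small coefficient box, so the two-square impossibility it reports is only for that box (globally it is Niven's theorem); it also verifies one explicit three-square representation, $6 + 2{\bf i} = (1+{\bf i})^2 + 3^2 + ({\bf i}+{\bf j}+{\bf k})^2$, which we supply as an illustration.

```python
from itertools import product

def qsquare(q):
    # square in the Lipschitz quaternions Q_{1,1} (a = b = 1)
    q0, q1, q2, q3 = q
    return (q0*q0 - q1*q1 - q2*q2 - q3*q3, 2*q0*q1, 2*q0*q2, 2*q0*q3)

squares = {qsquare(q) for q in product(range(-6, 7), repeat=4)}
target = (6, 2, 0, 0)                                      # 6 + 2i

# no representation as a sum of two squares, within the search box:
two = any(tuple(t - s for t, s in zip(target, sq)) in squares
          for sq in squares)
print(two)  # False

# an explicit three-square representation:
parts = [qsquare((1, 1, 0, 0)), qsquare((3, 0, 0, 0)), qsquare((0, 1, 1, 1))]
print(tuple(map(sum, zip(*parts))))  # (6, 2, 0, 0)
```

The three parts are $(1+{\bf i})^2 = 2{\bf i}$, $3^2 = 9$, and $({\bf i}+{\bf j}+{\bf k})^2 = -3$.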
We extend this result to $Q_{a,b}$ for all positive integers $a,b$. The proofs for the lower bounds are similar to Niven's work (i.e., finding examples); the proofs for the upper bounds take more work.
\begin{lemma} \label{lbsq}
Suppose $a$ and $b$ are positive integers. Then if
\begin{itemize}
\item $a \equiv 1$ or $2 \bmod 4$, then $2 + 2{\bf i}$ is not expressible as the sum of two squares in $Q_{a,b}$; and
\item $a \equiv 0$ or $3 \bmod 4$, then $4 + 2{\bf i}$ is not expressible as the sum of two squares in $Q_{a,b}$.
\end{itemize}
\end{lemma}
\begin{proof}
Let $x = x_0 + x_1 {\bf i} + x_2 {\bf j} + x_3 {\bf k}$, and $y = y_0 + y_1 {\bf i} + y_2 {\bf j} + y_3 {\bf k}$, with $x_m, y_n \in {\mathbb Z}$ for $m,n \in \{0,1,2,3\}$.
Then if $x^2 + y^2 = \alpha$ with $\alpha = \alpha_0 + 2\alpha_1 {\bf i} + 2\alpha_2 {\bf j} + 2\alpha_3 {\bf k} \in Q^2_{a,b}$, we have
\begin{align}
\alpha_0 &= x_0^2 + y_0^2 - a(x_1^2 + y_1^2) - b(x_2^2 + y_2^2) - ab (x_3^2 + y_3^2) \label{lb1}\\
\alpha_1 &= x_0x_1 + y_0y_1 \label{lb2}\\
\alpha_2 &= x_0x_2 + y_0y_2 \label{lb3}\\
\alpha_3 &= x_0x_3 + y_0y_3. \label{lb4}
\end{align}
\underline{Case 1: ($a \equiv 1,2 \bmod 4$)} Suppose $a \equiv 1,2 \bmod 4$, and let $\alpha = 2 + 2{\bf i}$, so that $\alpha_0 = 2$, $\alpha_1 = 1$, and $\alpha_2=\alpha_3=0$. Since $\alpha_1 = 1$, Equation (\ref{lb2}) and Bezout's Identity then imply that $x_0$ and $y_0$ must be relatively prime, since they have a linear combination equal to 1. Then, by Equation (\ref{lb3}), we must have $x_0 | y_2$ and $y_0 | x_2$. However, since $b \geq 1$, if $x_2, y_2 \neq 0$, Equation (\ref{lb1}) then implies that $\alpha_0 \leq 0$. As $\alpha_0 = 2$, we must have $x_2 = y_2 = 0$. A similar argument using Equation (\ref{lb4}) implies that $x_3 = y_3 = 0$.
By Equation (\ref{lb2}), since $\alpha_1 = 1$, exactly one of the products $x_0x_1$ and $y_0y_1$ must be odd; without loss of generality, we assume $y_0$ and $y_1$ are odd.
The following table then shows that Equation (\ref{lb1}) has no solutions mod 4 if $a \equiv 1,2 \bmod 4$:
\begin{center}
$\begin{array}{c|c|c}
x_0 & x_1 & \text{Equation (\ref{lb1})} \bmod 4 \\ \hline
\text{even} & \text{odd} & \alpha_0 = 2 \equiv 1 - 2a \\
\text{even} & \text{even} & \alpha_0 = 2 \equiv 1 - a \\
\text{odd} & \text{even} & \alpha_0 = 2 \equiv 2 - a
\end{array}$
\end{center}
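The congruence table above is a finite check and can be verified mechanically; the following Python sketch (ours) also covers the analogous table for Case 2 below. With $y_0, y_1$ odd and $x_0x_1$ even, the real part of $x^2 + y^2$ is $x_0^2 + 1 - a(x_1^2 + 1) \bmod 4$.

```python
# Case 1: alpha_0 = 2 is never attained mod 4 when a = 1 or 2 mod 4.
case1 = [(a, x0, x1) for a in (1, 2) for x0 in range(4) for x1 in range(4)
         if (x0 * x1) % 2 == 0 and (x0*x0 + 1 - a*(x1*x1 + 1)) % 4 == 2]
assert case1 == []

# Case 2: alpha_0 = 4, i.e. 0 mod 4, is never attained when a = 0 or 3 mod 4.
case2 = [(a, x0, x1) for a in (0, 3) for x0 in range(4) for x1 in range(4)
         if (x0 * x1) % 2 == 0 and (x0*x0 + 1 - a*(x1*x1 + 1)) % 4 == 0]
assert case2 == []
```

Only $a \bmod 4$ matters here, so checking the representatives listed suffices.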
Therefore $2 + 2{\bf i}$ cannot be written as the sum of two squares in $Q_{a,b}$.
\medskip
\underline{Case 2: ($a \equiv 0,3 \bmod 4$)} Suppose $a \equiv 0,3 \bmod 4$. Then let $\alpha = 4 + 2{\bf i}$. By the same argument as above, we get 3 possibilities for Equation (\ref{lb1}) mod 4, none of which have solutions. Therefore $4 + 2{\bf i}$ cannot be written as the sum of two squares in $Q_{a,b}$.
\end{proof}
As both $2+2{\bf i}$ and $4+2{\bf i}$ are in $Q_{a,b}^2$, this gives us the lower bound in Theorem \ref{sqbounds}. We then turn to the upper bound; we establish an algorithm for expressing every element as a sum of squares.
\begin{lemma} \label{ubsq}
Every element in $Q^2_{a,b}$ can be written as a sum of at most five squares in $Q_{a,b}$.
\end{lemma}
\begin{proof}
Let $\alpha=\alpha_0+2\alpha_1{\bf i}+2\alpha_2{\bf j}+2\alpha_3{\bf k} \in Q_{a,b}^2$; we want to show that we can represent $\alpha$ as a sum of squares of no more than five quaternions.
Let $v = 1 + U{\bf i} + \alpha_2{\bf j} + \alpha_3 {\bf k}$ for some $U\in {\mathbb Z}$, and note that
\[\alpha - v^2 = \alpha_0 - 1 + aU^2 + b\alpha_2^2 + ab\alpha_3^2 + 2(\alpha_1 - U){\bf i}.\]
If we also let $A = \alpha_0 - 1 +a\alpha_1^2 + b\alpha_2^2 + ab\alpha_3^2$, we have
\begin{equation} \label{vau}
\alpha - v^2 = A + a(U^2 - \alpha_1^2) + 2(\alpha_1 - U){\bf i}.
\end{equation}
We then have three cases: (1) when $A \geq 0$, (2) when $A < 0$ and $A$ cannot be written as $4^m (8 \ell + 7)$ for any non-negative integer $m$ and $\ell \in {\mathbb Z}$, and (3) when $A < 0$ and $A = 4^m (8 \ell + 7)$ for some non-negative integer $m$ and $\ell \in {\mathbb Z}$.
\underline{Case 1: $A \geq 0$.} If $A \geq 0$, then by Lagrange's Four Squares Theorem (Theorem \ref{lag4}), there exists $w,x,y,z \in {\mathbb Z}$ such that $A = w^2 + x^2 + y^2 + z^2$. Letting $U = \alpha_1$, Equation (\ref{vau}) becomes
\[\alpha - v^2 = A = w^2 + x^2 + y^2 + z^2,\]
so we can represent $\alpha$ as the sum of five squares.
\underline{Case 2: $A < 0$ and $A \neq 4^m (8 \ell + 7)$.} In this case we again let $U = \alpha_1$, so that $\alpha - v^2 = A$. Then let $e_1$ be the greatest exponent of 4 such that $4^{e_1}$ divides $A$, and let $e_2$ be the least exponent of 4 such that $4^{2e_2} + A \geq 0$. We then let $e = \max\{e_1+1,e_2\}$, and let $w = 4^e {\bf i}$.
We then have $\alpha - v^2 - w^2 = A + a4^{2e} \geq 0$. Additionally, since $2e \geq 2e_1 + 2$, if $A$ cannot be written in the form $4^m (8 \ell + 7)$, then neither can $A + a4^{2e}$. Therefore by Legendre's Three Squares Theorem (Theorem \ref{leg3}), there exist $x,y,z \in {\mathbb Z}$ such that $A+a4^{2e} = x^2 + y^2 + z^2$. So
\[\alpha - v^2 - w^2 = A + a4^{2e} = x^2 + y^2 + z^2,\]
so we can represent $\alpha$ as the sum of five squares.
\underline{Case 3: $A < 0$ and $A = 4^m (8 \ell + 7)$.} We first treat the case when $m > 0$. Here we let
\[w = 2^{m-1} + \left(\frac{\alpha_1 - U}{2^{m-1}}\right){\bf i}\]
and choose $U = \alpha_1 + 2^{m-1}U_1$, where $U_1$ satisfies the following 3 conditions:
\begin{description}
\item[(a)] $4^{m+1} | U_1$,
\item[(b)] $U_1 > -\displaystyle\frac{2^{m}\alpha_1}{4^{m-1}+1}$, and
\item[(c)] $U_1 > \displaystyle\frac{4^{m-1}-A}{a}.$
\end{description}
Note that it is always possible to meet these conditions; for example, $U_1 = 4^{m+1}|A| \cdot \max\{1,|\alpha_1|\}$ satisfies all three. We then have
\begin{align*}
\alpha - v^2 - w^2 & = \left(A + a(U^2 - \alpha_1^2) + 2(\alpha_1 - U){\bf i}\right) - \left(4^{m-1} + 2 (\alpha_1 - U){\bf i} - a \left(\frac{\alpha_1 - U}{2^{m-1}}\right)^2\right) \\
& = A + a(\alpha_1^2 + 2^m \alpha_1U_1+ 4^{m-1} U_1^2 - \alpha_1^2) - 4^{m-1} + aU_1^2 \\
& = A - 4^{m-1} + aU_1\left(2^m \alpha_1 + (4^{m-1}+1)U_1\right).
\end{align*}
Note that condition (b) on $U_1$ ensures the quantity in parentheses is positive, and condition (c) then ensures that $\alpha - v^2 - w^2$ is positive. Writing $A = 4^m(8 \ell + 7)$ and noting that, since $4^{m+1} \mid U_1$, the term $aU_1\left(2^m \alpha_1 + (4^{m-1}+1)U_1\right)$ equals $4^{m+1}\ell_1$ for some $\ell_1 \in {\mathbb Z}$, we have
\begin{align*}
\alpha - v^2 - w^2 & = 4^m(8 \ell + 7) - 4^{m-1} + 4^{m+1}\ell_1 \\
& = 4^{m-1}\left[4(8 \ell + 7) - 1 + 16 \ell_1\right] \\
& = 4^{m-1}\left[8(4\ell + 3 + 2\ell_1) + 3\right].
\end{align*}
Since this is not of the form excluded by Legendre's Three Squares Theorem, there exist $x,y,z \in {\mathbb Z}$ such that $\alpha - v^2 - w^2 = x^2 + y^2 + z^2$, so we can represent $\alpha$ as the sum of five squares.
Lastly, we treat the case when $A = 8 \ell + 7$ for some negative integer $\ell$ (that is, $m=0$). Here we let $U = \alpha_1 + U_1$ and $w = 1 - U_1 {\bf i}$, choosing $U_1$ such that $8 \mid U_1$ and $U_1 > \max\{|A|, |\alpha_1|\}$. Then
\begin{align*}
\alpha - v^2 - w^2 & = \left(A + a(U^2 - \alpha_1^2) + 2(\alpha_1 - U){\bf i}\right) - (1 - U_1{\bf i})^2 \\
& = A + a(2\alpha_1U_1+ U_1^2) - 1 + aU_1^2 \\
& = 8 \ell +6 + 8 \ell_1,
\end{align*}
where we have $\ell_1 + \ell \geq 0$ by the conditions on $U_1$. Since this is a positive number that is 6 mod 8, it is expressible as the sum of 3 integer squares by Legendre's Three Squares Theorem. So we can represent $\alpha$ as the sum of five squares here and in all cases.
\end{proof}
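The easiest branch of the proof above (Case 1, $A \geq 0$) translates directly into code. The following Python sketch is ours: the brute-force four-square search stands in for Lagrange's theorem, and the function names are illustrative.

```python
from math import isqrt

def qsq(p, a, b):
    """Square of p = p0 + p1*i + p2*j + p3*k in Q_{a,b} (Equation (al2))."""
    p0, p1, p2, p3 = p
    return (p0*p0 - a*p1*p1 - b*p2*p2 - a*b*p3*p3,
            2*p0*p1, 2*p0*p2, 2*p0*p3)

def four_squares(n):
    """Brute-force Lagrange decomposition of an integer n >= 0."""
    for w in range(isqrt(n) + 1):
        for x in range(isqrt(n - w*w) + 1):
            for y in range(isqrt(n - w*w - x*x) + 1):
                z = isqrt(n - w*w - x*x - y*y)
                if w*w + x*x + y*y + z*z == n:
                    return (w, x, y, z)

def five_square_rep(c, a, b):
    """Case 1 of the proof: c = (c0, c1, c2, c3) with c1, c2, c3 even."""
    a1, a2, a3 = c[1] // 2, c[2] // 2, c[3] // 2
    A = c[0] - 1 + a*a1*a1 + b*a2*a2 + a*b*a3*a3
    assert A >= 0, "only Case 1 (A >= 0) is implemented in this sketch"
    v = (1, a1, a2, a3)
    return [v] + [(t, 0, 0, 0) for t in four_squares(A)]

# Example in Q_{2,3}: alpha = 7 + 2i + 4j + 6k.
parts = five_square_rep((7, 2, 4, 6), 2, 3)
total = tuple(sum(col) for col in zip(*(qsq(p, 2, 3) for p in parts)))
assert total == (7, 2, 4, 6)
```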
Lemmas \ref{lbsq} and \ref{ubsq} combined give the bounds for $g_{a,b} (2)$ in Theorem \ref{sqbounds}.
\section{Values of $g_{a,b}(2)$ \label{valueg}}
In this section, we establish exact values for $g_{a,b}(2)$ for several infinite families of Quaternion rings, and for each of the possible values of $g_{a,b}(2)$. We note that the methods for showing each are different: for example, to show $g_{a,b}(2)=3$, all we need is an algorithm to express every element in $Q^2_{a,b}$ as a sum of 3 squares, and to show $g_{a,b}(2)=5$, all we need is to find an element that cannot be expressed as the sum of 4 squares.
\subsection{$g_{a,b}(2) = 3$}
We examine $Q_{1,b}$, where $b \in {\mathbb N}$. We can view $Q_{1,b}$ as an extension of the Gaussian integers ${\mathbb Z}[\sqrt{-1}]=\{x+y\sqrt{-1} \mid x,y \in {\mathbb Z}\}$ by adjoining ${\bf j}$ and ${\bf k}$. The following Lemma then provides a shortcut for representing elements of $Q_{1,b}$ as sums of squares.
\begin{lemma}[Theorem 2 of \cite{nivengauss}] \label{nivgausslem}
The equation $\alpha_0+2\alpha_1{\bf i}=x^2+y^2$ is solvable in $\mathbb{Z}[\sqrt{-1}]$ if $\alpha_0/2$ and $\alpha_1$ are not both odd integers.
\end{lemma}
Note that this Lemma also implies that $g_{{\mathbb Z}[\sqrt{-1}]}(2) = 3$.
\begin{theorem}
For all $b \in {\mathbb N}$, every element in $Q_{1,b}^2$ can be written as the sum of at most three squares in $Q_{1,b}$. Therefore $g_{1,b} (2) = 3$ for all $b \in {\mathbb N}$.
\end{theorem}
\begin{proof}
Let $\alpha=\alpha_0+2\alpha_1{\bf i}+2\alpha_2{\bf j}+2\alpha_3{\bf k} \in Q^2_{1,b}$; we wish to find $x,y,z \in Q_{1,b}$ such that $\alpha = x^2 + y^2 + z^2$. Since ${\mathbb Z}[\sqrt{-1}] \subset Q_{1,b}$, Lemma \ref{nivgausslem} implies that it is sufficient to find $z \in Q_{1,b}$ such that $\alpha - z^2 \in {\mathbb Z}[\sqrt{-1}]$ and satisfies the hypotheses of Lemma \ref{nivgausslem}.
Therefore, let $z=1+U{\bf i}+\alpha_2{\bf j}+\alpha_3{\bf k}$, where $U=0$ if $\alpha_1$ is even and $U=1$ if $\alpha_1$ is odd. We then examine $\alpha-z^2$.
\begin{align*}
\alpha-z^2&=\alpha_0+2\alpha_1{\bf i}+2\alpha_2{\bf j}+2\alpha_3{\bf k} - 1+U^2+b\alpha_2^2+b\alpha_3^2 - 2U{\bf i} -2\alpha_2{\bf j}-2\alpha_3{\bf k}\\
&=\alpha_0-1 + U^2+b\alpha_2^2+b\alpha_3^2+2(\alpha_1-U){\bf i}
\end{align*}
Note that if $\alpha_1$ is even, then $U=0$, so $\alpha_1-U$ is even; likewise, if $\alpha_1$ is odd, then $U=1$, so $\alpha_1-U$ is again even. We can therefore apply Lemma \ref{nivgausslem} to find $x,y \in {\mathbb Z}[\sqrt{-1}] \subset Q_{1,b}$ such that $\alpha - z^2 = x^2 + y^2$.
\end{proof}
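For concreteness, the construction can be carried out in code. In the Python sketch below (ours), a bounded brute-force search over Gaussian integers replaces the constructive content of Lemma \ref{nivgausslem}, so this is illustrative rather than complete.

```python
from itertools import product

def qsq(p, b):
    """Square in Q_{1,b}: i^2 = -1, j^2 = -b, k^2 = -b."""
    p0, p1, p2, p3 = p
    return (p0*p0 - p1*p1 - b*p2*p2 - b*p3*p3,
            2*p0*p1, 2*p0*p2, 2*p0*p3)

def three_square_rep(c, b, R=6):
    """Follow the proof: peel off z^2, then search Z[i] for the remainder."""
    a1, a2, a3 = c[1] // 2, c[2] // 2, c[3] // 2
    U = a1 % 2                      # U = 0 if a1 is even, 1 if odd
    z = (1, U, a2, a3)
    r0 = c[0] - 1 + U*U + b*a2*a2 + b*a3*a3
    r1 = 2*(a1 - U)                 # always even, as shown above
    for x0, x1, y0, y1 in product(range(-R, R + 1), repeat=4):
        if (x0*x0 - x1*x1 + y0*y0 - y1*y1 == r0
                and 2*(x0*x1 + y0*y1) == r1):
            return z, (x0, x1, 0, 0), (y0, y1, 0, 0)

def total(parts, b):
    return tuple(sum(col) for col in zip(*(qsq(p, b) for p in parts)))

# Example in Q_{1,2}: alpha = 5 + 2i + 2j + 2k.
parts = three_square_rep((5, 2, 2, 2), 2)
assert total(parts, 2) == (5, 2, 2, 2)
```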
We note that the proof relies on the fact that squares in the Gaussian integers can be easily characterized. This is not generally true of imaginary quadratic fields (see \cite{eljoseph} and Theorem 3 of \cite{nivengauss}).
\subsection{$g_{a,b}(2) = 4$}
We combine a standard lower bound proof and a constructive upper bound proof to find a family of Quaternion rings with $g_{a,b}(2) = 4$.
\begin{lemma} \label{43lb}
There exist elements in $Q_{4m,4n+3}^2$ that are not the sum of three squares.
\end{lemma}
\begin{proof}
Suppose that there exist $x,y,z \in Q_{4m,4n+3}$ such that $x^2 + y^2 + z^2 = 9 + 2{\bf j}$. Letting
\begin{align*}
x &=x_0+x_1{\bf i}+x_2{\bf j}+x_3{\bf k}\\
y &=y_0+y_1{\bf i}+y_2{\bf j}+y_3{\bf k}\\
z &=z_0+z_1{\bf i}+z_2{\bf j}+z_3{\bf k},
\end{align*}
the resulting equations for the real and ${\bf j}$ coefficients of $9+2{\bf j}$ are, respectively:
\begin{align}
\begin{split}
x_{0}^{2}+y_{0}^{2}+z_{0}^{2} - 4m(x_{1}^{2}+y_{1}^{2}+z_{1}^{2}) &- (4n+3)(x_{2}^{2}+y_{2}^{2}+z_{2}^{2})\\
& - (4m)(4n+3)(x_{3}^{2}+y_{3}^{2}+z_{3}^{2})= 9
\end{split}\label{43r}\\
x_{0}x_{2}+y_{0}y_{2}+z_{0}z_{2}&=1. \label{43j}
\end{align}
Examining Equation (\ref{43r}) mod 4, we have:
\begin{equation}
x_{0}^{2}+y_{0}^{2}+z_{0}^{2} + x_{2}^{2}+y_{2}^{2}+z_{2}^{2} \equiv1\bmod 4. \label{43rm4}
\end{equation}
Recall then that for all integers $\ell$, we have $\ell^2\equiv 0 \bmod 4$ (if $\ell$ is even) or $\ell^2\equiv 1 \bmod 4$ (if $\ell$ is odd). From this we have two possibilities that satisfy Equation (\ref{43rm4}): we must have either 1 or 5 of $x_{0},y_{0},z_{0},x_{2},y_{2},z_{2}$ odd in order for the left side of Equation (\ref{43rm4}) to sum to 1 mod 4.
If only one of the terms is odd, then the left side of Equation (\ref{43j}) will be even since the lone odd term must be multiplied by an even term, and therefore cannot equal 1. Likewise, if there are 5 odd terms, the left side of Equation (\ref{43j}) will be the sum of two odd terms and one even term, which cannot sum to 1.
Since Equations (\ref{43r}) and (\ref{43j}) cannot simultaneously be satisfied, $9 + 2 {\bf j}$ cannot be expressed as the sum of three squares in $Q_{4m,4n+3}$.
\end{proof}
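The parity count at the heart of this proof is finite and can be checked exhaustively (a Python sketch, ours):

```python
from itertools import product

# Enumerate the parities of x0, y0, z0, x2, y2, z2.  Equation (43rm4) forces
# 1 or 5 of them odd, while Equation (43j) forces x0*x2 + y0*y2 + z0*z2 odd.
# No parity pattern satisfies both:
conflicts = [
    p for p in product((0, 1), repeat=6)        # p = (x0, y0, z0, x2, y2, z2)
    if sum(p) % 4 == 1                          # (43rm4): 1 or 5 odd entries
    and (p[0]*p[3] + p[1]*p[4] + p[2]*p[5]) % 2 == 1   # (43j) mod 2
]
assert conflicts == []
```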
\subsubsection*{When $a$ is a Sum of 2 Integer Squares}
When $a$ is a sum of two integer squares, we can construct an algorithm to express elements of $Q^2_{a,b}$ as the sum of 4 squares. This gives us
a general result when combined with the lower bound results of Lemma \ref{43lb}.
\begin{lemma} \label{toad}
Every element of $Q^2_{a,b}$ is the sum of at most four squares in $Q_{a,b}$ in the following two cases:
\begin{itemize}
\item $a = n_1^2 + n_2^2$ with $\gcd(n_1,n_2)=1$; or
\item $a = n_1^2 + n_2^2$ with $\gcd(n_1,n_2)=2$ and $n_1 \equiv 0 \bmod 4$, and $b \not\equiv 0 \bmod 4$.
\end{itemize}
\end{lemma}
Note that we allow $n_1 = 0$ only if $n_2 = 1$ or 2; in the latter case we get $a=4$, which will be useful in light of Lemma \ref{43lb}.
\begin{proof}
Let $\alpha = \alpha_{0}+2\alpha_{1}{\bf i}+2\alpha_{2}{\bf j}+2\alpha_{3}{\bf k}$. If we let $z=1+\alpha_{1}{\bf i}+\alpha_{2}{\bf j}+\alpha_{3}{\bf k} \in Q_{a,b}$, then $\alpha-z^{2} \in {\mathbb Z}$. We claim that every integer can be represented as the sum of three squares in $Q_{a,b}$; we could then represent $\alpha$ as the sum of four squares.
Let $x = n_1 \ell + r$, $y = n_2 \ell + s$, and $w = \ell {\bf i} + \delta {\bf j}$, for some $\ell, r, s, \delta \in {\mathbb Z}$. We then have
\begin{align}
x^2 + y^2 + w^2 & = (n_1 \ell + r)^2 + (n_2 \ell + s)^2 + (\ell {\bf i} + \delta {\bf j})^2 \notag\\
& = 2(r n_1 + s n_2)\ell + r^2 + s^2 - b\delta^2.
\label{toadex}
\end{align}
Our method will be to choose $r$ and $s$ to determine a ``modulus'' ($r n_1 + s n_2$) and residue class ($r^2 + s^2 - b\delta^2$). Since $\ell$ is independent of $r$ and $s$, we will therefore be able to represent every integer in that residue class. (We will only use $\delta$ in one particularly troublesome case.)
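The identity in Equation (\ref{toadex}) is easy to spot-check over a range of parameters; the following Python sketch (helper names ours) verifies it, using only the real parts since $x$, $y$ are real and $w$ has zero real part, so all pure parts vanish.

```python
from itertools import product

def toadex_holds(n1, n2, b, ell, r, s, d):
    """Check Equation (toadex) for a = n1^2 + n2^2."""
    a = n1*n1 + n2*n2
    def sq_real(p0, p1, p2):       # real part of (p0 + p1*i + p2*j)^2 in Q_{a,b}
        return p0*p0 - a*p1*p1 - b*p2*p2
    lhs = (sq_real(n1*ell + r, 0, 0) + sq_real(n2*ell + s, 0, 0)
           + sq_real(0, ell, d))
    return lhs == 2*(r*n1 + s*n2)*ell + r*r + s*s - b*d*d

assert all(toadex_holds(n1, n2, b, ell, r, s, d)
           for n1, n2, b, ell, r, s, d in product(range(3), range(1, 4),
                                                  range(1, 4), range(-2, 3),
                                                  range(-2, 3), range(-2, 3),
                                                  range(2)))
```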
Recall that by Bezout's Identity there exist $r_0, s_0 \in {\mathbb Z}$ such that $r_0n_1 + s_0n_2= \gcd(n_1,n_2) \in \{1,2\}$; these will inform our choices of $r$ and $s$. We then have three cases (relabeling if necessary) that we address separately:
\begin{enumerate}
\item[(a)] $n_1$ odd, $n_2$ even, and $\gcd(n_1,n_2) = 1$;
\item[(b)] $n_1, n_2$ odd, and $\gcd(n_1,n_2) = 1$; and
\item[(c)] $n_1/2$ even, $n_2/2$ odd, and $\gcd(n_1,n_2) = 2$.
\end{enumerate}
\underline{Case (a)}: Our modulus here will be 2. Note that if $r=r_0$, $s=s_0$, and $\delta = 0$, we have from Equation (\ref{toadex})
\begin{equation*} \label{toad1a}
x^2+y^2+w^2 = 2\ell + r_0^2 + s_0^2.
\end{equation*}
Next, if $r=r_0-n_2$, $s=s_0+n_1$, and $\delta=0$, Equation (\ref{toadex}) yields
\begin{equation*} \label{toad1b}
x^2+y^2+w^2 = 2\ell + (r_0-n_2)^2 + (s_0+n_1)^2.
\end{equation*}
Recalling that $n_1$ is assumed to be odd and $n_2$ is assumed to be even, we necessarily have that $r_0^2 + s_0^2$ and $(r_0-n_2)^2 + (s_0+n_1)^2$ cover all residue classes mod 2 with the two equations above. With a proper choice of $\ell$, we can therefore directly find $x,y,w \in Q_{a,b}$ such that $\alpha - z^2 = x^2+y^2+w^2$, and so we can write $\alpha$ as a sum of four squares in $Q_{a,b}$.
\underline{Case (b)}: Our modulus here will be 4. Since $n_1$ and $n_2$ are here both odd, we may assume without loss of generality that $r_0$ is odd and $s_0$ is even.
We then use three choices of $r$ and $s$ to represent all possible residue classes mod 4; we let $\delta=0$ for all subcases. First, let $r=r_0$ and $s=s_0$. Equation (\ref{toadex}) is then
\begin{equation*} \label{toad2a}
x^2+y^2+w^2 = 2\ell + r_0^2 + s_0^2
\end{equation*}
which represents all odd integers, since $r_0$ is odd and $s_0$ is even.
If we then let $r=2r_0$ and $s=2s_0$, Equation (\ref{toadex}) then yields
\begin{equation*} \label{toad2b}
x^2+y^2+w^2 = 4\ell + 4(r_0^2 + s_0^2).
\end{equation*}
This allows us to represent all multiples of 4.
If, instead, we let $r=2r_0 - n_2$ and $s=2s_0+n_1$, Equation (\ref{toadex}) then yields
\begin{equation*} \label{toad2c}
x^2+y^2+w^2 = 4\ell + (2r_0-n_2)^2 + (2s_0+n_1)^2.
\end{equation*}
As $2r_0-n_2$ and $2s_0+n_1$ are necessarily both odd, this allows us to represent all integers that are $2 \bmod 4$. Combined with the above two choices, this covers all residue classes mod 4, and so similarly to Case (a) we are done.
\underline{Case (c)}: Our modulus here will be 8. We will need four choices of $r$ and $s$, along with letting $\delta =1$ if $\alpha - z^2 \equiv 3 \bmod 4$. Note that we are assuming $n_2 \equiv 2 \bmod 4$, so we know that $n_2/2$ is odd. Additionally, we may assume that $s_0$ is odd and $r_0$ is even.
First, let $r=r_0$ and $s=s_0$. Equation (\ref{toadex}) is then
\begin{equation} \label{toad3a}
x^2+y^2+w^2 = 4\ell + r_0^2 + s_0^2.
\end{equation}
If we let $r=r_0 - n_2/2$ and $s = s_0 + n_1/2$, Equation (\ref{toadex}) yields
\begin{equation} \label{toad3b}
x^2+y^2+w^2 = 4\ell + (r_0-n_2/2)^2 + (s_0+n_1/2)^2.
\end{equation}
Since $s_0$ and $n_2/2$ are both odd, while $r_0$ and $n_1/2$ are even, Equation (\ref{toad3a}) represents all integers that are 1 mod 4, while Equation (\ref{toad3b}) represents all integers that are 2 mod 4.
Next, let $r=2r_0$ and $s=2s_0$. Equation (\ref{toadex}) is then
\begin{equation*} \label{toad3c}
x^2+y^2+w^2 = 8\ell + 4(r_0^2 + s_0^2).
\end{equation*}
As $r_0$ is even and $s_0$ is odd, this represents all integers that are 4 mod 8.
If we let $r=2r_0 - n_2$ and $s = 2s_0 + n_1$, Equation (\ref{toadex}) yields
\begin{equation*} \label{toad3d}
x^2+y^2+w^2 = 8\ell + (2r_0-n_2)^2 + (2s_0+n_1)^2.
\end{equation*}
Since $2r_0 \equiv n_1 \equiv 0 \bmod 4$ and $2s_0 \equiv n_2 \equiv 2 \bmod 4$, this represents all integers that are 0 mod 8, and we therefore have all integers that are 0 mod 4.
We still need to represent integers that are 3 mod 4; this is where $\delta$ comes in. If we let $\delta = 1$, Equation (\ref{toadex}) becomes
\[x^2 + y^2 + w^2 = 2(r n_1 + s n_2)\ell + r^2 + s^2 - b. \]
If $b \not\equiv 0 \bmod 4$ and $\alpha - z^2 \equiv 3 \bmod 4$, then $\alpha - z^2 + b \not\equiv 3 \bmod 4$, so $\alpha - z^2 + b$ can be represented with one of the choices of $r$ and $s$ above; taking $\delta = 1$ with that same $r$ and $s$ then represents $\alpha - z^2$ itself. Therefore we can always represent $\alpha$ as the sum of four squares in $Q_{a,b}$ in Case (c), which concludes the proof.
\end{proof}
If $a= n_1^2 + n_2^2$ with $\gcd(n_1,n_2) = 2$, then necessarily $a \equiv 0 \bmod 4$; we can then combine Lemmas \ref{43lb} and \ref{toad} to get the following Theorem.
\begin{theorem} \label{frogs}
Suppose that $a = n_1^2 + n_2^2$, where $n_1,n_2 \in {\mathbb N}$ are such that $\gcd(n_1,n_2) = 2$ and $n_1 \equiv 0 \bmod 4$ (as in Lemma \ref{toad}), and
$m \in {\mathbb N}$. Then $g_{a,4m+3}(2) = 4$.
\end{theorem}
Specifically, if $n_1 = 0$ and $n_2=2$, we get that $g_{4,4m+3}(2) = 4$ for all $m \in {\mathbb N}$.
\subsection{$g_{a,b}(2) = 5$}
In this section, we find $a,b \in {\mathbb N}$ such that there exist elements of $Q^2_{a,b}$ that require 5 squares, which by Lemma \ref{ubsq} gives us that $g_{a,b}(2) = 5$.
\begin{theorem}
For all $m,n \in {\mathbb N}$, there are elements of $Q_{4m,4n}^2$ that are not the sum of four squares in $Q_{4m,4n}$. Therefore $g_{4m,4n} (2) = 5$ for all $m,n \in {\mathbb N}$.
\end{theorem}
\begin{proof}
Suppose that there exist $w,x,y,z \in Q_{4m,4n}$ such that $w^2 + x^2 + y^2 + z^2 = 8+2{\bf k}$. Letting
\begin{align*}
&w=w_0+w_1{\bf i}+w_2{\bf j}+w_3{\bf k}\\
&x=x_0+x_1{\bf i}+x_2{\bf j}+x_3{\bf k}\\
&y=y_0+y_1{\bf i}+y_2{\bf j}+y_3{\bf k}\\
&z=z_0+z_1{\bf i}+z_2{\bf j}+z_3{\bf k},
\end{align*}
the resulting equations for the real, ${\bf i}$, ${\bf j}$, and ${\bf k}$ coefficients are, respectively:
\begin{align}
\begin{split}
w_{0}^{2}+x_{0}^{2}+y_{0}^{2}+z_{0}^{2}-4m(w_{1}^{2}+x_{1}^{2}+y_{1}^{2}+z_{1}^{2}) \hspace{1in}&\\ -4n(w_{2}^{2}+x_{2}^{2}+y_{2}^{2}+z_{2}^{2})-16mn(w_{3}^{2}+x_{3}^{2}+y_{3}^{2}+z_{3}^{2})&=8 \label{44r}
\end{split}\\
w_{0}w_{1}+x_{0}x_{1}+y_{0}y_{1}+z_{0}z_{1}&=0 \label{44i}\\
w_{0}w_{2}+x_{0}x_{2}+y_{0}y_{2}+z_{0}z_{2}&=0 \label{44j}\\
w_{0}w_{3}+x_{0}x_{3}+y_{0}y_{3}+z_{0}z_{3}&=1. \label{44k}
\end{align}
We start by examining Equation (\ref{44k}) mod 2, and note that at least one of $w_0,x_0,y_0,z_0$ must be odd, as otherwise the left side would be even. We assume without loss of generality that $w_{0}\equiv1\bmod2$. With that in mind, we reduce Equation (\ref{44r}) mod 8:
\begin{equation}
1+x_{0}^{2}+y_{0}^{2}+z_{0}^{2}-4m(w_{1}^{2}+x_{1}^{2}+y_{1}^{2}+z_{1}^{2})-4n(w_{2}^{2}+x_{2}^{2}+y_{2}^{2}+z_{2}^{2})\equiv0\bmod8. \label{44rm8}
\end{equation}
Recall that for all odd $\ell$ we have $\ell^2\equiv1\bmod8$, while for all even $\ell$ we have $\ell^2\equiv0\text{ or }4\bmod8$. The left side of Equation (\ref{44rm8}) is $1$ plus three squares plus multiples of $4$; in order for it to be $0 \bmod 8$, the squares $x_{0}^{2},y_{0}^{2},z_{0}^{2}$ must all be $1\bmod8$. So $w_{0},x_{0},y_{0},z_{0}$ are all odd.
Then $w_{0}^{2}+x_{0}^{2}+y_{0}^{2}+z_{0}^{2}\equiv4\bmod8$, so an odd number of $w_{1}^{2},x_{1}^{2},y_{1}^{2},z_{1}^{2}$ or $w_{2}^{2},x_{2}^{2},y_{2}^{2},z_{2}^{2}$ must be odd to contribute an additional 4 mod 8. But this forces an odd number of odd terms on the left side of one of Equations (\ref{44i}) and (\ref{44j}), which contradicts their even sums.
Since the equations required for $8+2{\bf k}$ to be a sum of four squares in $Q_{4m,4n}$ cannot hold, $8+2{\bf k}$ cannot be expressed as a sum of four squares in $Q_{4m,4n}$.
\end{proof}
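The congruence bookkeeping in this proof becomes a finite check once $m$ and $n$ are fixed; for $m = n = 1$ (the ring $Q_{4,4}$) it can be verified exhaustively (a Python sketch, ours). Note that $16mn(\cdots) \equiv 0 \bmod 8$, so Equation (\ref{44r}) mod 8 depends only on $w_0,\ldots,z_0$ mod 4 and the parities of the ${\bf i}$ and ${\bf j}$ coefficients.

```python
from itertools import product

def dot(u, v):
    return sum(s*t for s, t in zip(u, v))

# No residues simultaneously satisfy (44r) mod 8, (44i) and (44j) mod 2,
# and (44k) mod 2, so 8 + 2k is not a sum of four squares in Q_{4,4}.
found = False
for q0 in product(range(4), repeat=4):        # w0, x0, y0, z0 mod 4
    for q1 in product(range(2), repeat=4):    # w1, ..., z1 mod 2
        for q2 in product(range(2), repeat=4):  # w2, ..., z2 mod 2
            if (sum(t*t for t in q0) - 4*sum(q1) - 4*sum(q2)) % 8 != 0:
                continue                       # (44r) mod 8 fails
            if dot(q0, q1) % 2 or dot(q0, q2) % 2:
                continue                       # (44i) or (44j) fails mod 2
            if any(dot(q0, q3) % 2 == 1
                   for q3 in product(range(2), repeat=4)):
                found = True                   # (44k) could also hold mod 2
assert not found
```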
\section{Other individual cases}
We were able to find $g_{a,b}(2)$ in several other cases for specific values of $a$ and $b$. We include these here for completeness but also to demonstrate the methods used, which vary significantly from those used in Section \ref{valueg}.
\begin{theorem}\label{2223}
$g_{2,2}(2) = g_{2,3}(2) = 3$.
\end{theorem}
These proofs rely on the theory of quadratic forms -- specifically, representations of integers by ternary diagonal quadratic forms. A ternary diagonal quadratic form is a function $f(x,y,z) = rx^2 + sy^2 + tz^2$; for our purposes, we have $r,s,t \in {\mathbb N}$. We say a ternary diagonal quadratic form {\em represents} $n \in {\mathbb N}$ if there exists an integer solution to $f(x,y,z) = n$. Lastly, we say that a ternary diagonal quadratic form is {\em regular} if the positive integers it does not represent coincide with certain arithmetic progressions. The most common example of this is Legendre's Three Squares Theorem: every positive integer not of the form $4^m (8\ell + 7)$ can be represented in the form $x^2 +y^2 + z^2$ with $x,y,z \in {\mathbb Z}$. For more information on the representation of integers by quadratic forms, see \cite{JonesPall} or (more recently) \cite{Hanke}.
Noting that
\[(x{\bf i} + y{\bf j} + z{\bf k})^2 = -(ax^2 + by^2 + abz^2),\]
for our Theorem, we will examine the expressions $2x^2 + 2y^2 + 4z^2$ and $2x^2 + 3y^2 + 6z^2$. Dickson gives a complete list of regular diagonal ternary quadratic forms, from which we get the following Lemma.
\begin{lemma} (Table 5 of \cite{Dickson}) \label{ternreg}
\begin{enumerate}
\item Let $f_{2,2} (x,y,z) = 2x^2 + 2y^2 + 4z^2$. Then $f_{2,2}$ represents all even integers not of the form $2 \cdot 4^n (16 \ell + 14)$.
\item Let $f_{2,3} (x,y,z) = 2x^2 + 3y^2 + 6z^2$. Then $f_{2,3}$ represents all positive integers not of the form $4^n (8 \ell + 7)$ or $3m+1$.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Theorem \ref{2223}]
Let $\alpha = \alpha_0 + 2\alpha_1 {\bf i} + 2\alpha_2 {\bf j} + 2\alpha_3 {\bf k} \in Q_{a,b}^2$. Then, letting $x = 1 + \alpha_1 {\bf i} + \alpha_2 {\bf j} + \alpha_3 {\bf k}$, we have
\begin{equation} \label{2223part}
\alpha - x^2 = \alpha_0 - 1 + a\alpha_1^2 + b\alpha_2^2 + ab\alpha_3^2 =: A \in {\mathbb Z}.
\end{equation}
It then suffices to find elements $y,z \in Q_{a,b}$ with $y = y_0 \in {\mathbb Z}$ and $z= z_1 {\bf i} + z_2 {\bf j} + z_3 {\bf k}$ such that
\begin{equation} \label{2223A}
A = y^2 + z^2 = y_0^2 - az_1^2 - bz_2^2 - abz_3^2,
\end{equation}
as we would then have $\alpha = x^2 + y^2 + z^2$.
\underline{Case 1: ($a=b=2$)} In light of Lemma \ref{ternreg} and the regularity of the associated quadratic form, we know that if we can represent the residue class of $A$ mod 32, then we can find $y_0, z_1, z_2, z_3$ that satisfy Equation (\ref{2223A}).
We let $S_{a,b;m}$ be the set of residue classes mod $m$ that are completely represented by $f_{a,b}(z_0,z_1,z_2) = az_0^2 + bz_1^2 + abz_2^2$. For example, $2 \in S_{2,2;32}$ since $f_{2,2}(1,0,0) = 2$, $2 \not\equiv 2 \cdot 4^n (16 \ell + 14) \bmod 32$ for any $n, \ell \in {\mathbb N}$, and by Lemma \ref{ternreg} $f_{2,2}$ represents all even integers not of the form $2 \cdot 4^n (16 \ell + 14)$. But $16 \not\in S_{2,2;32}$ since $16 \equiv 2 \cdot 4^1(16 \ell + 14) \bmod 32$.
When $a=b=2$ and $m = 32$, we have
\[S_{2,2;32} = \{2,4,6,8,10,12,14,18,20,22,24,26,30\};\]
our goal then is to show that for any $A \in {\mathbb Z}$, we can find $y_0 \in {\mathbb Z}$ and $s \in S_{2,2;32}$ such that $A \equiv y_0^2 - s \bmod 32$. By Lemma \ref{ternreg}, there would then exist $z = z_1{\bf i}+ z_2{\bf j} + z_3{\bf k} \in Q_{2,2}$ such that $-s \equiv z^2 \bmod 32$ and $A = y_0^2 +z^2$.
We can then break this search for $y_0$ and $s$ into cases:
\begin{itemize}
\item if $A \not\equiv 0,1,4,5,16$, or $17 \bmod 32$, then $A$ is congruent to either $-s$ or $1-s$ for some $s \in S_{2,2;32}$;
\item if $A \equiv 0,16 \bmod 32$, then $A \equiv 4-s \bmod 32$ for $s=4,20 \in S_{2,2;32}$;
\item if $A \equiv 1,5,17 \bmod 32$, then $A \equiv 9-s \bmod 32$ for $s=8,4,24 \in S_{2,2;32}$; and
\item if $A \equiv 4 \bmod 32$, then $A \equiv 16-s \bmod 32$ for $s=12 \in S_{2,2;32}$.
\end{itemize}
Therefore we can represent $A$ as a sum of two squares from $Q_{2,2}$, and so we can always express $\alpha$ as a sum of three squares from $Q_{2,2}$.
\underline{Case 2: ($a=2$, $b=3$)} We again use the set $S_{a,b;m}$, letting $m=24$; this yields
\[S_{2,3;24} = \{2, 3, 5, 6, 9, 11, 14, 17, 18, 21\}.\]
Similarly to Case 1, we search for $y_0 \in {\mathbb Z}$ and $s \in S_{2,3;24}$ such that $A \equiv y_0^2 - s \bmod 24$.
\begin{itemize}
\item if $A \not\equiv 0,1,2,5,9,12$, or $17 \bmod 24$, then $A$ is congruent to either $-s$ or $1-s$ for some $s \in S_{2,3;24}$;
\item if $A \equiv 1,2,17 \bmod 24$, then $A \equiv 4-s \bmod 24$ for $s=3,2,11 \in S_{2,3;24}$;
\item if $A \equiv 0,12 \bmod 24$, then $A \equiv 9-s \bmod 24$ for $s=9,21 \in S_{2,3;24}$;
\item if $A \equiv 5 \bmod 24$, then $A \equiv 16-s \bmod 24$ for $s=11 \in S_{2,3;24}$; and
\item if $A \equiv 9 \bmod 24$, then $A \equiv 36-s \bmod 24$ for $s=3 \in S_{2,3;24}$.
\end{itemize}
Therefore, as above, we can always express $\alpha$ as a sum of three squares from $Q_{2,3}$. Combined with the lower bound from Lemma \ref{lbsq}, this gives $g_{a,b}(2) = 3$ in both cases.
\end{proof}
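Both residue computations in this proof can be reproduced mechanically from the regularity descriptions in Lemma \ref{ternreg} (a Python sketch, ours; powers $4^n$ stabilize modulo 32 and 24, so small ranges of $n$ suffice).

```python
# Residues mod 32 of the integers 2*4^n*(16l + 14) excluded for f_{2,2};
# S_{2,2;32} is the even residues avoiding them.
bad22 = {(2 * 4**n * (16*l + 14)) % 32 for n in range(4) for l in range(32)}
S22 = {s for s in range(0, 32, 2) if s not in bad22}
assert S22 == {2, 4, 6, 8, 10, 12, 14, 18, 20, 22, 24, 26, 30}

# Residues mod 24 excluded for f_{2,3}: forms 4^n*(8l + 7), or 1 mod 3.
bad23 = ({(4**n * (8*l + 7)) % 24 for n in range(4) for l in range(24)}
         | {r for r in range(24) if r % 3 == 1})
S23 = set(range(24)) - bad23
assert S23 == {2, 3, 5, 6, 9, 11, 14, 17, 18, 21}

# Every residue class of A is y0^2 - s for some square and some s in S:
assert all(any((y*y - s) % 32 == A for y in range(32) for s in S22)
           for A in range(32))
assert all(any((y*y - s) % 24 == A for y in range(24) for s in S23)
           for A in range(24))
```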
The proof of Theorem \ref{2223} relies entirely on the regularity of the associated ternary quadratic forms given in Lemma \ref{ternreg}. There are, unfortunately, only finitely many regular diagonal ternary quadratic forms (Table 5 of \cite{Dickson} is a complete list), so this exact method has limited general use. Nonetheless, there does seem to be a close relationship between these Quaternion rings and ternary quadratic forms, and one might be able to relax the regularity condition slightly and still represent ``enough'' integers to apply a method similar to that of Theorem \ref{2223}.
\section{Open Questions}
There are many questions left to explore here. It should be possible to find $g_{a,b}(2)$ for all positive $a$ and $b$; at the very least, we would like to know the proportion of such Quaternion rings attaining each of the possible values of $g_{a,b}(2)$. We have also been using the Lipschitz Quaternions as our analog of the integers; the Hurwitz Quaternions would be an equally good choice, especially since they enjoy unique factorization. Lastly, we have been focusing on the cases when ${\bf i}^2$ and ${\bf j}^2$ are negative; one could easily investigate the cases when one or both are positive.
\bibliographystyle{plainnat}
% arXiv:1507.01565
\title{Torsion and ground state maxima: close but not the same}
\begin{abstract}
Could the location of the maximum point for a positive solution of a semilinear Poisson equation on a convex domain be independent of the form of the nonlinearity? Cima and Derrick found certain evidence for this surprising conjecture. We construct counterexamples on the half-disk, by working with the torsion function and first Dirichlet eigenfunction. On an isosceles right triangle the conjecture fails again. Yet the conjecture has merit, since the maxima of the torsion function and eigenfunction are unexpectedly close together. It is an open problem to quantify this closeness in terms of the domain and the nonlinearity.
\end{abstract}
\section{\bf Introduction}
\label{sec:intro}
Suppose the Poisson equation
\[
\begin{cases}
-\Delta u = f(u) & \text{in $\Omega$,} \\
\quad \ \ u = 0 & \text{on $\partial \Omega$,}
\end{cases}
\]
has a positive solution on the bounded convex plane domain $\Omega$. Here the nonlinearity $f$ is assumed to be Lipschitz and \emph{restoring}, which means $f(z)>0$ when $z>0$. Cima and Derrick \cite{CD11,CDK14} have conjectured that the location of the maximum point of $u$ is independent of the form of the nonlinearity $f$.
This conjecture sounds impossible, since the graph of the solution must vary with the nonlinearity. Numerical computations by Cima and co-authors give surprising support for the conjecture, though, and \autoref{fig:levelcurves} provides further food for thought by considering a triangular domain and plotting the level curves and maximum point for the choices $f(z)=1$ and $f(z)=\lambda z$. The corresponding linear Poisson equations describe the \emph{torsion function} and the \emph{ground state of the Laplacian} (see below). Our solutions were computed numerically by the finite element method on a mesh with approximately $10^6$ triangles. The maximum points for the two solutions in \autoref{fig:levelcurves} appear to coincide, even though the level curves differ markedly near the boundary.
\begin{figure}[t]
\hspace{\fill}
\includegraphics[width=3cm]{exit.png}
\hspace{\fill}
\includegraphics[width=3cm]{eig.png}
\hspace{\fill}
\caption{Level curves and the maximum point on a triangular domain, for solutions of two different Poisson type equations: the torsion function (left) and the first eigenfunction (right).}
\label{fig:levelcurves}
\end{figure}
We disprove the conjecture on a half-disk in \autoref{sec:half-disk}, and again on the right isosceles triangle in \autoref{sec:rightisos}. Interestingly, the conjecture is remarkably close to being true in these counterexamples, with the maximum points occurring in almost but not quite the same location. We cannot explain this unexpected closeness.
A fascinating open problem is to bound the difference in location of the maximum points of two semilinear Poisson equations in terms of the difference between their nonlinearity functions and geometric information on the shape of the domain. Also, note that for both the half-disk and right isosceles triangle, our results show that the maximum point of the torsion function lies to the left of the maximum for the ground state (when oriented as in \autoref{fig:levelcurves}), which perhaps hints at a general principle for a class of convex domains.
\subsection*{Notation}
The \emph{torsion} or \emph{landscape} function is the unique solution of the Poisson equation
\[
\begin{cases}
-\Delta u = 1 & \text{in $\Omega$,} \\
\quad \ \ u = 0 & \text{on $\partial \Omega$.}
\end{cases}
\]
Here we have chosen $f(z)=1$. Clearly $u$ is positive inside the domain, by the maximum principle.
The \emph{Dirichlet ground state} or \emph{first Dirichlet eigenfunction of the Laplacian} is the unique positive solution of
\begin{equation*}
\begin{cases}
-\Delta v = \lambda v& \text{in $\Omega$,} \\
\quad \ \ v = 0 & \text{on $\partial \Omega$,}
\end{cases}
\end{equation*}
where $\lambda>0$ is the first eigenvalue of the Laplacian on the domain under Dirichlet boundary conditions. Here we have chosen $f(z)=\lambda z$.
\section{\bf The half-disk}
\label{sec:half-disk}
The maximum points for the torsion function and ground state can lie so close together that one cannot distinguish them by the naked eye, as the following proposition reveals. Yet the two points are not the same.
\begin{proposition} \label{pr:half-disk}
Take $\Omega = \{ (x,y) : x> 0, x^2 + y^2 < 1 \}$ to be the right half-disk. On this domain the torsion function $u$ attains its maximum at $(0.48022,0)$ while the ground state $v$ attains its maximum at $(0.48051,0)$. Here the $x$-coordinates have been rounded to $5$ decimal places.
\end{proposition}
\begin{proof}
(i) The ground state is given in polar coordinates by
\[
v(r,\theta) = J_1(j_{1,1}r) \cos \theta
\]
where $J_1$ is the Bessel function of the first kind of order $1$ and $j_{1,1} \simeq 3.831706$ is its first positive zero. Clearly the maximum is attained on the $x$-axis, where $\theta=0$, and the function is plotted along this line in \autoref{fig:Bessel}. By setting $J_1^\prime(j_{1,1}r)=0$ and solving, we find $r = j^\prime_{1,1}/j_{1,1} \simeq 0.48051$, rounded to five decimal places, where $j^\prime_{1,1} \simeq 1.841184$ is the first positive zero of $J_1^\prime$.
\begin{figure}
\includegraphics[width=4cm]{Bessel-halfdisk.pdf}
\caption{The radial part of the ground state on the right half-disk: $v(r,0) = J_1(j_{1,1}r)$.}
\label{fig:Bessel}
\end{figure}
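The ratio $j^\prime_{1,1}/j_{1,1}$ can be recomputed independently of any special-function library; the following sketch (illustrative only, not the finite element computation described above) recovers both zeros from the power series of $J_1$ by bisection:

```python
import math

def j1(x, terms=40):
    # power series of the Bessel function J_1
    return sum((-1)**k / (math.factorial(k) * math.factorial(k + 1))
               * (x / 2)**(2 * k + 1) for k in range(terms))

def j1prime(x, terms=40):
    # term-by-term derivative of the series above
    return sum((-1)**k * (2 * k + 1) / (math.factorial(k) * math.factorial(k + 1))
               * 0.5 * (x / 2)**(2 * k) for k in range(terms))

def bisect(f, a, b, steps=100):
    # assumes f(a) and f(b) have opposite signs
    fa = f(a)
    for _ in range(steps):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

j11 = bisect(j1, 3.0, 4.0)        # first positive zero of J_1, about 3.831706
j11p = bisect(j1prime, 1.0, 2.0)  # first positive zero of J_1', about 1.841184
print(round(j11p / j11, 5))       # 0.48051
```

The truncated series is accurate to machine precision on $(0,4)$, since its terms decay factorially there.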
\medskip
(ii) The torsion function is more complicated \cite[Section 4.6.2]{W}, and is given by
\begin{align*}
u(x,y) & = \frac{1}{4\pi} \Big[ -2 \pi x^2 - 2 x \Big( (x^2 + y^2)^{-1} - 1\Big) \\
& \qquad \quad + \Big( 2 + (x^2 - y^2) \big( (x^2 + y^2)^{-2} + 1 \big) \Big) \arctan \Big( \frac{2x}{1 - (x^2 + y^2)} \Big) \\
& \qquad \quad + xy \Big( (x^2 + y^2)^{-2} - 1 \Big) \log \frac{x^2 + (1 + y)^2}{x^2 + (1 - y)^2} \, \Big] .
\end{align*}
One verifies the Dirichlet boundary condition on the right half-disk by examining four cases: (i) $u=0$ if $x=0$ and $0<|y|<1$, (ii) $u \to 0$ as $(x,y) \to (0,0)$, (iii) $u \to 0$ as $(x,y) \to (0,\pm 1)$, and (iv) $u \to 0$ as $(x,y) \to (x_1,y_1)$ with $x_1>0$ and $x_1^2+y_1^2=1$.
To check $u$ satisfies the Poisson equation $-\Delta u=1$, a lengthy direct calculation suffices.
We claim $u$ attains its maximum at a point on the horizontal axis. For this, first notice $u$ is even about the $x$-axis by definition, meaning $u(x,y)=u(x,-y)$. Hence the harmonic function $u_y$ equals zero on the $x$-axis for $0<x<1$. Further, $u_y \leq 0$ at points on the unit circle lying in the open first quadrant, since $u>0$ in the right half-disk and $u=0$ on the boundary. Also, one can compute that $u_y(x,y)$ approaches $0$ as $(x,y) \to (0,0)$ or $(x,y) \to (1,0)$ or $(x,y) \to (0,1)$ from within the first quadrant of the unit disk. Lastly $u_y$ vanishes on the $y$-axis for $0 < y <1$ (since $u=0$ there). Hence we conclude from the maximum principle that $u_y \leq 0$ in the first quadrant of the unit disk, and so $u$ attains its maximum somewhere on the $x$-axis.
On the $x$-axis we have
\[
u(x,0) = \frac{1}{4\pi} \Big[ -2 \pi x^2 - 2 x^{-1} + 2x + (2 + x^{-2} +x^2) \arctan \Big( \frac{2x}{1 - x^2} \Big) \, \Big]
\]
for $0<x<1$. Clearly $u(0,0)=u(1,0)=0$, and
\[
u_x(x,0) = \frac{1}{\pi x^3} \big[ x + x^3 - \pi x^4 + \frac{1}{2}(x^4 - 1) \arctan \Big( \frac{2x}{1 - x^2} \Big) \big] .
\]
One can show by taking another derivative and applying elementary estimates that $u(x,0)$ is concave. Calculations show $u_x(x,0)$ is positive at $x=0.480219$ and negative at $x=0.480220$, and so the maximum of $u$ lies between these two points, that is, at $x=0.48022$ to $5$ decimal places.
\end{proof}
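The sign change of $u_x(x,0)$ can likewise be confirmed numerically. The following sketch bisects the displayed formula for $u_x(x,0)$ (an illustration, not the estimate-based argument of the proof):

```python
import math

def ux_axis(x):
    # u_x(x, 0) for the half-disk torsion function, as displayed in the proof
    return (x + x**3 - math.pi * x**4
            + 0.5 * (x**4 - 1) * math.atan(2 * x / (1 - x**2))) / (math.pi * x**3)

a, b = 0.4, 0.6          # u_x(0.4, 0) > 0 and u_x(0.6, 0) < 0
for _ in range(60):
    m = 0.5 * (a + b)
    if ux_axis(m) > 0:
        a = m
    else:
        b = m
x_star = 0.5 * (a + b)
print(round(x_star, 5))  # 0.48022
```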
\section{\bf The right isosceles triangle}
\label{sec:rightisos}
\begin{proposition} \label{pr:rightisos}
Take $\Omega = \{ (x,y) : 0 < x < 1, |y|< 1-x \}$, which is an isosceles right triangle. On this domain the torsion function $u$ attains its maximum at $(0.39168,0)$ while the ground state $v$ attains its maximum at $(0.39183,0)$. Here the $x$-coordinates have been rounded to $5$ decimal places.
\end{proposition}
\begin{proof}
(i) Rotate the triangle by 45 degrees clockwise about the origin and scale up by a factor of $\pi/\sqrt{2}$, then translate by $\pi/2$ to the right and upwards, so that the triangle becomes
\[
T = \{ (x,y) : 0<y<x<\pi \}.
\]
This new triangle has ground state
\[
v(x,y) = \sin x \sin 2y - \sin 2x \sin y = 2 \sin x \sin y (\cos y - \cos x) > 0
\]
with eigenvalue $1^2 + 2^2 =5$. One checks easily that $v=0$ on the boundary of $T$, where $y=0$ or $x=\pi$ or $y=x$. To find the maximum point, set $v_x=0$ and $v_y=0$ and deduce $\cos 2x = \cos x \cos y = \cos 2y$. Therefore the maximum lies on the line of symmetry $y=\pi-x$ of the triangle $T$. A little calculus shows that $v(x,\pi-x)$ attains its maximum when $x=\arcsin(1/\sqrt{3})+\pi/2$. Hence the ground state of the original triangle attains its maximum at $\big( (2/\pi) \arcsin(1/\sqrt{3}),0 \big) = (0.39183,0)$ to $5$ decimal places.
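To spell out the "little calculus": along the symmetry line, $v(x,\pi-x) = -4\sin^2\!x\,\cos x$, with derivative $4\sin x\,(1-3\cos^2\!x)$, so the maximum occurs where $\cos^2\!x = 1/3$. A numerical double-check (a sketch, not part of the proof):

```python
import math

def dv_sym(x):
    # derivative of v(x, pi - x) = -4 sin(x)^2 cos(x) along the symmetry line
    return 4 * math.sin(x) * (1 - 3 * math.cos(x)**2)

a, b = math.pi / 2, math.pi - 0.1   # derivative is positive at a, negative at b
for _ in range(60):
    m = 0.5 * (a + b)
    if dv_sym(m) > 0:
        a = m
    else:
        b = m
x_star = 0.5 * (a + b)

# agrees with the closed form x = arcsin(1/sqrt(3)) + pi/2 ...
print(abs(x_star - (math.asin(1 / math.sqrt(3)) + math.pi / 2)) < 1e-10)  # True
# ... and with the maximum point on the original triangle
print(round((2 / math.pi) * (x_star - math.pi / 2), 5))                   # 0.39183
```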
\medskip
(ii) The torsion function on the triangle $T$ is
\begin{align*}
& \! \! \! u(x,y) \\
& = - \frac{1}{4}(x-y)^2 + \sum_{n=1}^\infty \frac{n^2 \pi^2 -2\big( 1 - (-1)^n \big)}{2\pi n^3 \sinh n\pi} \Big[ \sinh nx \sin ny - \sin nx \sinh ny \\
& \qquad \qquad \qquad + \sin n(\pi-x) \sinh n(\pi-y) - \sinh n(\pi-x) \sin n(\pi-y) \Big] ,
\end{align*}
as we now explain. Observe that $-\Delta u = 1$ because the infinite series is a harmonic function, and $u=0$ on the boundary of $T$ by simple calculations with Fourier series when $0<x<\pi,y=0$, and when $x=\pi,0<y<\pi$; also $u=0$ on the hypotenuse where $y=x$.
The torsion function is known to attain its maximum somewhere on the line of symmetry $y=\pi-x$, either by general symmetry results \cite{CD11,CDK14} or else by arguing as in the proof of \autoref{pr:half-disk} part (ii). On that line of symmetry we evaluate
\begin{align*}
& u(x,\pi-x) \\
& = - (x-\pi/2)^2 + \sum_{n=1}^\infty \frac{n^2 \pi^2 -2\big( 1 - (-1)^n \big)}{\pi n^3 \sinh n\pi} \big[ (-1)^{n+1} \sinh nx - \sinh n(\pi-x) \big] \sin nx .
\end{align*}
The series converges exponentially on each closed subinterval of $(0,\pi)$, and so we may differentiate term-by-term to find
\begin{align}
& \frac{d\ }{dx} u(x,\pi-x) \label{eq:derivseries} \\
& = - 2(x-\pi/2) + \sum_{n=1}^\infty \frac{n^2 \pi^2 -2\big( 1 - (-1)^n \big)}{\pi n^2 \sinh n\pi} \Big\{ \big[ (-1)^{n+1} \cosh nx + \cosh n(\pi-x) \big] \sin nx \notag \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \big[ (-1)^{n+1} \sinh nx - \sinh n(\pi-x) \big] \cos nx \Big\} , \notag
\end{align}
where once again the series converges exponentially on closed subintervals of $(0,\pi)$.
The absolute value of the $n$-th term in series \eqref{eq:derivseries} is bounded by
\[
\frac{\pi(e^{nx}+e^{n(\pi-x)})}{\sinh(n\pi)} < 3\pi (e^{-n(\pi-x)}+e^{-nx}) ,
\]
as we see by bounding the $\sin$ and $\cos$ terms with $1$, adding the $\sinh$ and $\cosh$ terms having the same arguments, and using that $\sinh(n \pi) > e^{n\pi}/3$ for $n \geq 1$. Hence the infinite series \eqref{eq:derivseries} is bounded term-by-term by $3\pi$ times the sum of two geometric series having ratios $e^{-(\pi-x)}$ and $e^{-x}$.
The derivative of $u$ along the line of symmetry is positive at $x=2.1860525$ and negative at $x=2.1860530$, as one finds by evaluating the first 20 terms of the series in \eqref{eq:derivseries} and then estimating the remainder with the geometric series as above. Hence $u$ has a local maximum at $x=2.186053$ to 6 decimal places. This local maximum is a global maximum because $\sqrt{u}$ is concave (see \cite[Example 1.1]{B85} or \cite{K84}). Translating to the left and downwards by $\pi/2$ and then scaling down by a factor of $\sqrt{2}/\pi$ and rotating counterclockwise by $45$ degrees, we find the torsion function on the original triangle has a maximum at
\[
x= \frac{2}{\pi} (2.186053-\pi/2) = 0.39168
\]
to 5 decimal places.
\end{proof}
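For the reader who wishes to reproduce the sign evaluations, the following sketch sums the series \eqref{eq:derivseries} directly, truncating at $40$ terms (more than the $20$ used above, with truncation error far below the bisection tolerance) and bisecting rather than evaluating at the two quoted points:

```python
import math

def dudx_sym(x, terms=40):
    # the series for (d/dx) u(x, pi - x), truncated; it converges exponentially
    total = -2 * (x - math.pi / 2)
    for n in range(1, terms + 1):
        c = ((n**2 * math.pi**2 - 2 * (1 - (-1)**n))
             / (math.pi * n**2 * math.sinh(n * math.pi)))
        sgn = (-1)**(n + 1)
        total += c * ((sgn * math.cosh(n * x) + math.cosh(n * (math.pi - x))) * math.sin(n * x)
                      + (sgn * math.sinh(n * x) - math.sinh(n * (math.pi - x))) * math.cos(n * x))
    return total

a, b = math.pi / 2 + 0.1, math.pi - 0.1   # derivative is positive at a, negative at b
for _ in range(60):
    m = 0.5 * (a + b)
    if dudx_sym(m) > 0:
        a = m
    else:
        b = m
x_star = 0.5 * (a + b)                    # about 2.186053
print(round((2 / math.pi) * (x_star - math.pi / 2), 5))   # 0.39168
```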
\section{\bf Concluding remarks}
\label{sec:remarks}
The counterexamples in this paper concern Poisson's equation for $f(z)=1$ and $f(z)=\lambda z$. One can find a whole family of counterexamples using $f(z)=a+bz$, where $a>0$ and $0<b \leq \lambda$. Note the maximum point depends on $b$ but not $a$, as one checks by rescaling the solution $u$ to $u/a$. To study this maximum point as $b$ varies, one starts with the eigenfunctions of $-\Delta-b$ on the half-disk or right isosceles triangle and notes that the eigenfunctions are the same as for $-\Delta$, just with eigenvalues shifted by $b$. The corresponding torsion function can be computed in terms of an eigenfunction expansion, and then the position of the maximum point can be located numerically with care. We leave such investigations to the interested reader.
Finally, while our counterexamples involve linear Poisson equations, our choices of $f$ could presumably be perturbed to obtain genuinely nonlinear counterexamples.
\section*{Acknowledgments}
This work was partially supported by grants from the Simons Foundation (\#204296 to Richard Laugesen) and Polish National Science Centre (2012/07/B/ST1/03356 to Bart\-{\l}omiej Siudeja). We are grateful to the Institute for Computational and Experimental Research in Mathematics (ICERM) for supporting participation by Benson and Minion in IdeaLab 2014, where the project began. Thanks go also to the Banff International Research Station for supporting participation by Laugesen and Siudeja in the workshop ``Laplacians and Heat Kernels: Theory and Applications'' (March 2015), during which some of the research was conducted.
\newcommand{\doi}[1]{%
\href{http://dx.doi.org/#1}{doi:#1}}
\newcommand{\arxiv}[1]{%
\href{http://front.math.ucdavis.edu/#1}{ArXiv:#1}}
\newcommand{\mref}[1]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#1}}
End of arXiv:1507.01565, ``Torsion and ground state maxima: close but not the same'' (math.AP).
https://arxiv.org/abs/0707.3450

Stability and intersection properties of solutions to the nonlinear biharmonic equation

Abstract. We study the positive, regular, radially symmetric solutions to the nonlinear biharmonic equation $\Delta^2 \phi = \phi^p$. First, we show that there exists a critical value $p_c$, depending on the space dimension, such that the solutions are linearly unstable if $p<p_c$ and linearly stable if $p\geq p_c$. Then, we focus on the supercritical case $p\geq p_c$ and we show that the graphs of no two solutions intersect one another.

\section{Introduction}
Consider the positive, regular, radially symmetric solutions of the equation
\begin{equation}\label{ee}
\Delta^2 \phi (x) = \phi(x)^p, \quad\quad x\in {\mathbb R}^n.
\end{equation}
Such solutions are known to exist when $n>4$ and $p\geq \frac{n+4}{n-4}$, and they fail to exist otherwise. Our main
goal in this paper is to study their qualitative properties, and to relate them to the well-understood properties
of solutions to the second-order analogue
\begin{equation}\label{2ee}
-\Delta \phi (x) = \phi(x)^p, \quad\quad x\in {\mathbb R}^n.
\end{equation}
Linear stability for the second-order equation \eqref{2ee} was addressed in \cite{KaSt}, where the positive, regular,
radially symmetric solutions were found to be linearly stable if and only if
\begin{equation}\label{sc2}
p \cdot Q_2\left( \frac{2}{p-1} \right) \leq Q_2\left( \frac{n-2}{2} \right),
\quad\quad Q_2(\alpha) \equiv |x|^{\alpha+2} (-\Delta) \,|x|^{-\alpha}.
\end{equation}
In this paper, we establish a similar result for the fourth-order equation \eqref{ee}, namely that the positive, regular,
radially symmetric solutions are linearly stable if and only if
\begin{equation}\label{sc}
p \cdot Q_4\left( \frac{4}{p-1} \right) \leq Q_4\left( \frac{n-4}{2} \right),
\quad\quad Q_4(\alpha) \equiv |x|^{\alpha+4} \,\Delta^2 |x|^{-\alpha}.
\end{equation}
Although originally stated in a different way, the last two conditions appeared in the work of Wang~\cite{Wang} for the
second-order equation and Gazzola and Grunau \cite{GG} for the fourth-order one. Among other things, these authors
studied the intersection properties\footnote{Due to scaling, the existence of one solution implies the existence of
infinitely many solutions.} of radially symmetric solutions, and they found that the above conditions play a crucial role
in that context.
According to a result of Wang \cite{Wang} for the second-order equation, the graphs of no two radially symmetric
solutions intersect one another, if \eqref{sc2} holds, while the graphs of any two radially symmetric solutions intersect
one another, otherwise. Although a similar dichotomy is expected to hold for the fourth-order equation, we are only able
to prove the first part of such a result, namely that the graphs of no two radially symmetric solutions of \eqref{ee}
intersect one another, if \eqref{sc} holds. This already improves a result of \cite{GG}, which shows that the number of
intersections is at most finite, if \eqref{sc} holds with strict inequality. As for the remaining case in which
\eqref{sc} is violated, neither our approach nor the one in \cite{GG} provides any conclusions.
In section~\ref{li}, we show that \eqref{sc} is a necessary condition for linear stability. In section~\ref{ls}, we show
that it is also a sufficient condition. Our main results appear in section \ref{omr}, where we also simplify the
stability condition \eqref{sc}. Our stability result is given in Theorem \ref{main}, and our result on the intersection
properties of solutions is given in Theorem \ref{sep}.
\section{Linear instability}\label{li}
The main result in this section is Proposition \ref{unst}, which gives a sufficient condition for linear instability.
Although this condition will be simplified in section \ref{omr}, it is much more convenient to initially state it in
terms of the quartic polynomial
\begin{equation}\label{Q}
Q_4(\alpha) \equiv |x|^{\alpha+4} \,\Delta^2 |x|^{-\alpha} = \alpha (\alpha+2) (\alpha+2-n) (\alpha+4-n).
\end{equation}
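The identity follows by composing the radial rule $\Delta |x|^{-\alpha} = \alpha(\alpha+2-n)\,|x|^{-\alpha-2}$ with itself. A quick numerical confirmation of the factored form (illustrative only):

```python
def lap_exponent(a, n):
    # Delta |x|^{-a} = a (a + 2 - n) |x|^{-a-2} for a radial power
    return a * (a + 2 - n)

def Q4_composed(a, n):
    # apply the rule twice: Delta^2 |x|^{-a} = Q4(a) |x|^{-a-4}
    return lap_exponent(a, n) * lap_exponent(a + 2, n)

def Q4(a, n):
    return a * (a + 2) * (a + 2 - n) * (a + 4 - n)

n = 13
for a in (1.0, 2.5, (n - 4) / 2):
    assert abs(Q4_composed(a, n) - Q4(a, n)) < 1e-9

# the value at a = (n-4)/2 is the constant n^2 (n-4)^2 / 16 appearing below
print(Q4((n - 4) / 2, n) == n**2 * (n - 4)**2 / 16)  # True
```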
This polynomial is closely related to Rellich's inequality
\begin{equation}\label{Rel}
\int_{{\mathbb R}^n} (\Delta u)^2 \:dx \geq \frac{n^2(n-4)^2}{16} \:\int_{{\mathbb R}^n} |x|^{-4} u^2\:dx,
\end{equation}
which is valid for each $u\in H^2({\mathbb R}^n)$ and each $n>4$. Namely, the constant that appears on the right hand side is
precisely the unique local maximum value of $Q_4$, and it is known to be sharp in the following sense.
\begin{lemma}\label{Rell}
Let $n>4$ and let $V$ be a bounded function on ${\mathbb R}^n$ that vanishes at infinity. If there exists some $\varepsilon>0$ such that
\begin{equation*}
V(x) \leq -(1+\varepsilon)\cdot \frac{n^2(n-4)^2}{16} \cdot |x|^{-4}
\end{equation*}
for all large enough $|x|$, then the operator $\Delta^2+V$ has a negative eigenvalue.
\end{lemma}
For a proof of Rellich's inequality \eqref{Rel} and Lemma \ref{Rell}, we refer the reader to section~II.7 in Rellich's
book \cite{Rell}. We now use the previous lemma to address the linear instability of positive, regular solutions to
\eqref{ee}. The known results on the existence of such solutions are summarized in our next lemma; see \cite{WX,CLO,GG}
for parts (a), (b) and (c), respectively.
\begin{lemma}\label{ex}
Let $n\geq 1$ and $p>1$. Denote by $Q_4$ the quartic in \eqref{Q}.
\begin{itemize}
\item[(a)]
If either $n\leq 4$ or $p<\frac{n+4}{n-4}$, then equation \eqref{ee} has no positive $\mathcal{C}^4$ solutions.
\item[(b)]
If $n>4$ and $p= \frac{n+4}{n-4}$, then all positive $\mathcal{C}^4$ solutions of \eqref{ee} are of the form
\begin{equation}\label{phila}
\phi_\lambda(x) = \Bigl[ (n-4)(n-2)n(n+2) \Bigr]^{\frac{1}{p-1}} \cdot
\left( \frac{\lambda}{\lambda^2 + |x-y|^2} \right)^{\frac{4}{p-1}}
\end{equation}
for some $\lambda>0$ and some $y\in{\mathbb R}^n$.
\item[(c)]
If $n>4$ and $p>\frac{n+4}{n-4}$, then the positive $\mathcal{C}^4$, radially symmetric solutions of \eqref{ee} form a
one-parameter family $\{ \phi_\alpha \}_{\alpha>0}$, where each $\phi_\alpha$ is such that
\begin{equation}\label{phia}
\phi_\alpha(0)= \alpha, \quad\quad \lim_{|x|\to \infty} |x|^4 \,\phi_\alpha(x)^{p-1} = Q_4\left( \frac{4}{p-1} \right) > 0.
\end{equation}
\end{itemize}
\end{lemma}
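As a sanity check of formula \eqref{phila} (a finite-difference sketch, not a proof), take $n=5$, $p=9$, $\lambda=1$, $y=0$, so that $\phi(r)=105^{1/8}(1+r^2)^{-1/2}$, and compare $\Delta^2\phi$ with $\phi^p$ at a sample radius:

```python
n, p, lam = 5, 9, 1.0
C = 105.0 ** (1 / (p - 1))            # (n-4)(n-2) n (n+2) = 1*3*5*7 = 105

def phi(r):
    return C * (lam / (lam**2 + r**2)) ** (4 / (p - 1))

def radial_lap(g, r, h=5e-3):
    # Laplacian of a radial function: g'' + (n-1)/r * g', by central differences
    return ((g(r + h) - 2 * g(r) + g(r - h)) / h**2
            + (n - 1) / r * (g(r + h) - g(r - h)) / (2 * h))

r0 = 0.7
biharmonic = radial_lap(lambda r: radial_lap(phi, r), r0)
print(abs(biharmonic - phi(r0) ** p) / phi(r0) ** p < 1e-3)  # True
```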
\begin{prop}\label{unst}
Let $n>4$ and $p\geq \frac{n+4}{n-4}$. Let $Q_4$ be the quartic in \eqref{Q} and let $\phi$ denote any one of the
solutions provided by Lemma \ref{ex}. Then $\Delta^2 - p\phi^{p-1}$ has a negative eigenvalue, if
\begin{equation}\label{ic}
p \cdot Q_4\left( \frac{4}{p-1} \right) > Q_4\left( \frac{n-4}{2} \right) = \frac{n^2(n-4)^2}{16} \,.
\end{equation}
In particular, it has a negative eigenvalue, if $p=\frac{n+4}{n-4}$.
\end{prop}
\begin{proof}
Suppose first that $p>\frac{n+4}{n-4}$. Using part (c) of Lemma \ref{ex} and our assumption \eqref{ic}, we can then find
some small enough $\varepsilon>0$ such that
\begin{equation*}
\lim_{|x|\to \infty} |x|^4 \,\phi(x)^{p-1} = Q_4\left( \frac{4}{p-1} \right)> p^{-1}(1+2\varepsilon)\cdot\frac{n^2(n-4)^2}{16} \,.
\end{equation*}
Since this implies that
\begin{equation*}
V(x) \equiv -p\phi(x)^{p-1} < -(1+\varepsilon)\cdot \frac{n^2(n-4)^2}{16} \cdot |x|^{-4}
\end{equation*}
for all large enough $|x|$, the existence of a negative eigenvalue follows by Lemma \ref{Rell}.
Suppose now that $p=\frac{n+4}{n-4}$. Then our assumption \eqref{ic} automatically holds because
\begin{equation*}
\frac{n^2(n-4)^2}{16} = Q_4\left( \frac{n-4}{2} \right) = Q_4\left( \frac{4}{p-1} \right) < p\cdot Q_4\left(
\frac{4}{p-1} \right)
\end{equation*}
for this particular case. According to part (b) of Lemma \ref{ex}, we also have
\begin{equation*}
\Delta^2 - p\phi(x)^{p-1}= \Delta^2 -\lambda^4 (n-2)n(n+2)(n+4) \cdot \left( \lambda^2 + |x-y|^2 \right)^{-4}
\end{equation*}
for some $\lambda>0$ and some $y\in {\mathbb R}^n$. Thus, it suffices to check that the associated energy
\begin{equation*}
E(\zeta) = \int_{{\mathbb R}^n} (\Delta \zeta)^2 \:dx - \int_{{\mathbb R}^n} p\phi^{p-1} \zeta^2 \:dx
\end{equation*}
is negative for some test function $\zeta\in H^2({\mathbb R}^n)$. Let us then consider the test function
\begin{equation*}
\zeta(x) = \left( \lambda^2 + |x-y|^2 \right)^{-\frac{n-2}{2}}.
\end{equation*}
Since $n>4$, we have $\zeta\in H^2({\mathbb R}^n)$, while a straightforward computation gives
\begin{align*}
E(\zeta) &= -8\lambda^4 n(n-2)(n+1) \int_{{\mathbb R}^n} \frac{dx}{(\lambda^2 +|x-y|^2)^{n+2}} <0.
\end{align*}
This implies the presence of a negative eigenvalue and it also completes the proof.
\end{proof}
\section{Linear stability}\label{ls}
In this section, we address the linear stability of the solutions provided by Lemma \ref{ex}. First, we use an
Emden-Fowler transformation to transform \eqref{ee} into an ODE whose linear part has constant coefficients. Although
this transformation is quite standard, the subsequent part of our analysis is not. The main result of this section is
given in Proposition \ref{st}.
\begin{lemma}\label{cco}
Let $n\geq 1$ and $p>1$. Let $Q_4$ be the quartic in \eqref{Q} and suppose $\phi$ is a positive solution of the
biharmonic equation \eqref{ee}. Setting $m=\frac{4}{p-1}$ for convenience, the function
\begin{equation}\label{W}
W(s) = e^{m s} \phi(e^s) = r^m \phi(r), \quad\quad s= \log r= \log |x|
\end{equation}
must then be a solution to the ordinary differential equation
\begin{equation}\label{ne1}
Q_4(m-\d_s) \,W(s) = W(s)^p.
\end{equation}
\end{lemma}
\begin{proof}
Since $\d_r = e^{-s}\d_s$, a short computation allows us to write the radial Laplacian as
\begin{equation*}
\Delta = \d_r^2 + (n-1) r^{-1}\d_r = e^{-2s} (n-2+\d_s) \d_s.
\end{equation*}
Using the operator identity $\d_s e^{-ks} = e^{-ks}(\d_s-k)$, one can then easily check that
\begin{align*}
\Delta^2 e^{-m s} &= e^{-4s -m s} \,Q_4(m-\d_s) = e^{-m p s} \,Q_4(m-\d_s).
\end{align*}
This also implies that $Q_4(m-\d_s) \,W(s) = e^{m ps} \Delta^2 \phi (e^s) = W(s)^p$, as needed.
\end{proof}
\begin{lemma}\label{poly}
Let $n>4$ and $p>\frac{n+4}{n-4}$. Set $m=\frac{4}{p-1}$ and let $Q_4$ be the quartic in \eqref{Q}. Assuming that the
stability condition \eqref{sc} holds, the polynomial
\begin{equation}\label{ch1}
\mathscr{P}(\lambda) = Q_4(m-\lambda) -pQ_4(m)
\end{equation}
must then have four real roots $\lambda_1,\lambda_2,\lambda_3<0<\lambda_4$.
\end{lemma}
\begin{proof}
Noting that $Q_4$ is symmetric about $\frac{n-4}{2}$, we see that $\mathscr{P}$ is symmetric about
\begin{equation*}
\lambda_* \equiv m- \frac{n-4}{2} = \frac{4}{p-1} - \frac{n-4}{2} \,,
\end{equation*}
where $\lambda_* < 0$ because $p>\frac{n+4}{n-4}$ by assumption. Moreover, we have
\begin{equation*}
\lim_{\lambda\to \pm\infty} \mathscr{P}(\lambda) = + \infty, \quad\quad
\mathscr{P}(2\lambda_*) = \mathscr{P}(0) = (1-p) \cdot Q_4(m) < 0
\end{equation*}
because of \eqref{phia}, and we also have
\begin{equation*}
\mathscr{P}(\lambda_*) = Q_4\left( \frac{n-4}{2} \right) - p\cdot Q_4\left( \frac{4}{p-1} \right) \geq 0
\end{equation*}
because of \eqref{sc}. This forces $\mathscr{P}(\lambda)$ to have at least one root in each of the intervals
\begin{equation*}
(-\infty,2\lambda_*), \quad\quad (2\lambda_*,\lambda_*], \quad\quad [\lambda_*,0), \quad\quad (0,\infty).
\end{equation*}
In the case that $\lambda_*$ itself happens to be a root, then it must be a double root by symmetry. In any case then,
$\mathscr{P}(\lambda)$ has three negative roots and one positive root, as needed.
\end{proof}
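To illustrate the root structure with concrete numbers (a sketch; $n=20$, $p=4$ is a sample choice for which \eqref{sc} holds), one can bracket one root in each of the four intervals from the proof:

```python
def Q4(a, n):
    return a * (a + 2) * (a + 2 - n) * (a + 4 - n)

n, p = 20, 4
m = 4 / (p - 1)
lam_star = m - (n - 4) / 2            # center of symmetry of P, negative

def P(lam):
    return Q4(m - lam, n) - p * Q4(m, n)

assert p * Q4(m, n) <= Q4((n - 4) / 2, n)     # the stability condition (sc)
assert P(0) < 0 and P(2 * lam_star) < 0       # both equal (1 - p) Q4(m)
assert P(lam_star) >= 0

def bisect(f, a, b, steps=80):
    # assumes f(a) and f(b) have opposite signs
    fa = f(a)
    for _ in range(steps):
        mid = 0.5 * (a + b)
        if fa * f(mid) <= 0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return 0.5 * (a + b)

roots = [bisect(P, -100.0, 2 * lam_star), bisect(P, 2 * lam_star, lam_star),
         bisect(P, lam_star, 0.0), bisect(P, 0.0, 100.0)]
print([r < 0 for r in roots])   # [True, True, True, False]
```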
\begin{prop}\label{st}
Let $n>4$ and $p>\frac{n+4}{n-4}$. Let $Q_4$ be the quartic in \eqref{Q} and let $\phi$ denote any one of the solutions
provided by Lemma \ref{ex}. Assuming that \eqref{sc} holds, one has
\begin{equation}\label{bel1}
|x|^4 \,\phi(x)^{p-1} \leq Q_4\left( \frac{4}{p-1} \right)
\end{equation}
for each $x\in{\mathbb R}^n$, and the operator $\Delta^2-p\phi^{p-1}$ has no negative spectrum.
\end{prop}
\begin{proof}
First, suppose that \eqref{bel1} does hold. Using our assumption \eqref{sc}, we then get
\begin{equation*}
-p\phi(x)^{p-1} \geq -p \cdot Q_4\left( \frac{4}{p-1} \right) \cdot |x|^{-4} \geq -\frac{n^2(n-4)^2}{16} \cdot |x|^{-4}
\end{equation*}
for each $x\in{\mathbb R}^n$, so $\Delta^2-p\phi^{p-1}$ has no negative spectrum by Rellich's inequality \eqref{Rel}.
Let us now focus on the derivation of \eqref{bel1}. Set $m=\frac{4}{p-1}$ and consider the function
\begin{equation*}
W(s) = e^{ms} \phi(e^s)= r^m \phi(r), \quad\quad s=\log r= \log |x|.
\end{equation*}
Then $W(s)$ is positive and it satisfies the equation
\begin{equation}\label{11}
Q_4(m-\d_s) \,W(s) = W(s)^p
\end{equation}
by Lemma \ref{cco}. We note that $s$ ranges over $(-\infty,\infty)$ as $r$ ranges from $0$ to $\infty$, while
\begin{equation*}
\lim_{s\to -\infty} W(s) = \lim_{r\to 0^+} r^m\phi(r) = 0.
\end{equation*}
The derivatives of $W(s)$ must also vanish at $s=-\infty$ because
\begin{equation*}
\lim_{s\to -\infty} W'(s) = \lim_{r\to 0^+} r\cdot \d_r [r^m\phi(r)] = 0,
\end{equation*}
and so on. Using the fact that $x\mapsto x^p$ is convex on $(0,\infty)$, we now find
\begin{align}\label{12}
W(s)^p - Q_4(m)^{\frac{p}{p-1}} &\geq pQ_4(m)\cdot \left( W(s)-Q_4(m)^{\frac{1}{p-1}} \right).
\end{align}
Inserting this inequality in \eqref{11}, we thus find
\begin{equation}\label{13}
Q_4(m-\d_s) \,W(s) - Q_4(m)^{\frac{p}{p-1}} \geq pQ_4(m)\cdot \left( W(s)-Q_4(m)^{\frac{1}{p-1}} \right).
\end{equation}
To eliminate the constant term on the left hand side, we change variables by
\begin{equation}\label{Y}
Y(s) = W(s) - Q_4(m)^{\frac{1}{p-1}}.
\end{equation}
Then we can write equation \eqref{13} in the equivalent form
\begin{equation*}
\Bigl[ Q_4(m-\d_s) -pQ_4(m) \Bigr] \,Y(s) \geq 0.
\end{equation*}
Invoking Lemma \ref{poly}, we now factor the last ODE to obtain
\begin{equation}\label{14}
(\d_s - \lambda_1)(\d_s - \lambda_2)(\d_s - \lambda_3)(\d_s - \lambda_4) \,Y(s) \geq 0
\end{equation}
for some $\lambda_1,\lambda_2,\lambda_3<0<\lambda_4$. Multiplying by $e^{-\lambda_1s}$ and integrating over $(-\infty,s)$, we get
\begin{equation*}
e^{-\lambda_1s} (\d_s - \lambda_2)(\d_s - \lambda_3)(\d_s - \lambda_4) \,Y(s) \geq 0
\end{equation*}
because $\lambda_1<0$. We ignore the exponential factor and use the same argument twice to get
\begin{equation*}
(\d_s - \lambda_4) \,Y(s) \geq 0
\end{equation*}
since $\lambda_2,\lambda_3<0$ as well. Multiplying by $e^{-\lambda_4s}$ and integrating over $(s,+\infty)$, we then find
\begin{equation*}
e^{-\lambda_4s} \Bigl[ W(s) - Q_4(m)^{\frac{1}{p-1}} \Bigr]
\leq \lim_{s\to\infty} e^{-\lambda_4s} \Bigl[ W(s) - Q_4(m)^{\frac{1}{p-1}} \Bigr].
\end{equation*}
The limit on the right hand side is zero because $\lambda_4>0$ by above and since
\begin{equation*}
\lim_{s\to \infty} W(s) = \lim_{|x|\to\infty} |x|^{\frac{4}{p-1}} \,\phi(x) = Q_4(m)^{\frac{1}{p-1}}
\end{equation*}
by \eqref{phia}. In particular, we may finally deduce the estimate
\begin{equation*}
W(s) \leq Q_4(m)^{\frac{1}{p-1}},
\end{equation*}
which is precisely the desired estimate \eqref{bel1} because $W(s) = |x|^{\frac{4}{p-1}} \phi(x)$ by above.
\end{proof}
\section{Our main results}\label{omr}
In this section, we give our main results regarding the stability and intersection properties of the positive, regular
solutions to \eqref{ee}. Our first theorem is an easy consequence of the results obtained in the previous two sections.
\begin{theorem}\label{main}
Let $n>4$ and $p\geq p_n\equiv \frac{n+4}{n-4}$. Let $Q_4$ be the quartic in \eqref{Q} and let $\phi$ denote any one of
the solutions provided by Lemma \ref{ex}. Then the following dichotomy holds.
If $n\leq 12$, then $\phi$ is linearly unstable for any $p\geq p_n$ whatsoever.
If $n\geq 13$, on the other hand, then the equation
\begin{equation*}
p \cdot Q_4\left( \frac{4}{p-1} \right) = Q_4\left( \frac{n-4}{2} \right)
\end{equation*}
has a unique solution $p_c>p_n$, and $\phi$ is linearly unstable if and only if $p_c> p\geq p_n$.
\end{theorem}
\begin{proof}
Consider the expression
\begin{equation}\label{Qnew}
\mathcal{Q}(p) \equiv 16(p-1)^4\cdot \left[ Q_4\left( \frac{n-4}{2} \right) -p\cdot Q_4 \left( \frac{4}{p-1}
\right)\right].
\end{equation}
By Propositions \ref{unst} and \ref{st}, to say that $\phi$ is linearly unstable is to say that $\mathcal{Q}(p)<0$.
Let us now combine our definitions \eqref{Q} and \eqref{Qnew} to write
\begin{equation*}
\mathcal{Q}(p) = n^2(n-4)^2(p-1)^4 - 2^7p(p+1)\Bigl( (n-4)p-n \Bigr) \Bigl( (n-2)p- (n+2) \Bigr).
\end{equation*}
Using this explicit equation, it is easy to see that
\begin{equation*}
\mathcal{Q}(0)= n^2(n-4)^2, \quad\quad \mathcal{Q}(1)= -2^{12},\quad\quad
\mathcal{Q} \left( \frac{n+2}{n-2} \right) = \frac{2^8n^2(n-4)^2}{(n-2)^4} \,,
\end{equation*}
while a short computation gives
\begin{equation*}
\mathcal{Q}(p_n) = \mathcal{Q}\left( \frac{n+4}{n-4} \right) = - \frac{2^{15}\,n^2}{(n-4)^3} \,.
\end{equation*}
This forces $\mathcal{Q}(p)$ to have three real roots in the interval $(0,p_n)$, so the fourth root must also be real. To
find its exact location, we compute
\begin{equation}\label{lim2}
\lim_{p\to \pm\infty} \frac{\mathcal{Q}(p)}{p^4} = (n-4) \cdot (n^3-4n^2-128n+256)
\end{equation}
and we examine two cases.
\Case{1} When $4<n\leq 12$, the limit in \eqref{lim2} is negative. Since $\mathcal{Q}(0)$ is positive by above,
the fourth root lies in $(-\infty,0)$, so $\mathcal{Q}(p)$ is negative for any $p\geq p_n$ whatsoever.
\Case{2} When $n\geq 13$, the limit in \eqref{lim2} is positive. Since $\mathcal{Q}(p_n)$ is negative by above, the
fourth root $p_c$ lies in $(p_n,\infty)$, so $\mathcal{Q}(p)$ is negative on $[p_n,p_c)$ and non-negative on
$[p_c,\infty)$.
\end{proof}
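The algebra in this proof is easy to double-check numerically; a sketch with $n=13$, the first dimension for which $p_c$ is finite:

```python
def Q4(a, n):
    return a * (a + 2) * (a + 2 - n) * (a + 4 - n)

def Qcal(p, n):
    # the definition: 16 (p-1)^4 [ Q4((n-4)/2) - p Q4(4/(p-1)) ]
    return 16 * (p - 1)**4 * (Q4((n - 4) / 2, n) - p * Q4(4 / (p - 1), n))

def Qcal_explicit(p, n):
    # the expanded form used in the proof
    return (n**2 * (n - 4)**2 * (p - 1)**4
            - 2**7 * p * (p + 1) * ((n - 4) * p - n) * ((n - 2) * p - (n + 2)))

n = 13
for p in (1.5, 2.0, 3.0, 7.0):
    assert abs(Qcal(p, n) - Qcal_explicit(p, n)) < 1e-6

pn = (n + 4) / (n - 4)
# the special value at p_n matches the proof
assert abs(Qcal_explicit(pn, n) + 2**15 * n**2 / (n - 4)**3) < 1e-6

# bisect for the unique root p_c > p_n: Qcal < 0 at p_n, > 0 for large p
a, b = pn, 1000.0
for _ in range(200):
    m = 0.5 * (a + b)
    if Qcal_explicit(m, n) < 0:
        a = m
    else:
        b = m
p_c = 0.5 * (a + b)
print(pn < p_c)   # True
```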
\begin{lemma}\label{use}
Let $n>4$ and $p>\frac{n+4}{n-4}$. Set $m=\frac{4}{p-1}$ and let $Q_4$ be the quartic in \eqref{Q}. Then
\begin{equation*}
\mathscr{R}(\mu) = Q_4(m-\mu) - Q_4(m)
\end{equation*}
has four real roots $\mu_1 < \mu_2 < \mu_3 =0 < \mu_4$.
\end{lemma}
\begin{proof}
As in the proof of Lemma \ref{poly}, we exploit the fact that $\mathscr{R}(\mu)$ is symmetric about
\begin{equation*}
\mu_* \equiv m- \frac{n-4}{2} = \frac{4}{p-1} - \frac{n-4}{2} <0.
\end{equation*}
It is clear that $\mu_3=0$ is a root of $\mathscr{R}(\mu)$. Then $\mu_2= 2\mu_*<0$ must also be a root by symmetry. To
see that a positive root $\mu_4>m$ exists, we note that
\begin{equation*}
\lim_{\mu\to +\infty} \mathscr{R}(\mu) = +\infty, \quad\quad \mathscr{R}(m)= - Q_4(m) < 0
\end{equation*}
by \eqref{phia}. Then $\mu_1= 2\mu_*-\mu_4= \mu_2-\mu_4 <\mu_2$ must also be a root by symmetry.
\end{proof}
Finally, we address the intersection properties of the solutions provided by Lemma \ref{ex} in the supercritical case
$p\geq p_c$. To this end, let us also introduce the function
\begin{equation}\label{sing}
\Phi(x) = Q_4(m)^{\frac{1}{p-1}} \cdot |x|^{-m}, \quad\quad m=\frac{4}{p-1}
\end{equation}
which is easily seen to be a singular solution of \eqref{ee}.
\begin{theorem}\label{sep}
Suppose that $n\geq 13$ and $p\geq p_c$, where $p_c$ is given by Theorem \ref{main}. In other words, suppose that $n>4$
and $p>\frac{n+4}{n-4}$ and that \eqref{sc} holds. If $\phi_\alpha,\phi_\b$ are any two of the solutions provided by Lemma
\ref{ex}, then
\begin{itemize}
\item[(a)]
the graph of $\phi_\alpha$ does not intersect the graph of the singular solution \eqref{sing};
\item[(b)]
the graph of $\phi_\alpha$ does not intersect the graph of $\phi_\b$, unless $\alpha=\b$.
\end{itemize}
\end{theorem}
\begin{proof}
To establish part (a), we have to show that
\begin{equation}\label{bel2}
|x|^m \,\phi_\alpha(x) < Q_4(m)^{\frac{1}{p-1}}
\end{equation}
for each $x\in{\mathbb R}^n$. This amounts to a slight refinement of inequality \eqref{bel1} in Proposition \ref{st}, as we now
need the inequality to be strict. Let us then consider the function
\begin{equation*}
W(s) = r^m \phi_\alpha(r), \quad\quad s=\log r=\log |x|, \quad\quad m=\frac{4}{p-1} \,.
\end{equation*}
Inequality \eqref{bel1} in Proposition \ref{st} reads
\begin{equation}\label{21}
W(s) \leq Q_4(m)^{\frac{1}{p-1}}, \quad\quad s\in{\mathbb R}
\end{equation}
and we now have to show that this inequality is actually strict. Since
\begin{equation*}
\lim_{s\to -\infty} W(s) = \lim_{r\to 0^+} r^m\phi_\alpha(r) = 0 < Q_4(m)^{\frac{1}{p-1}}
\end{equation*}
by \eqref{phia}, we do have strict inequality near $s=-\infty$. Suppose equality holds at some point, and let $s_0$ be
the first such point. Since $W(s)$ reaches its maximum at $s_0$, we then have
\begin{equation}\label{end2}
W(s_0) = Q_4(m)^{\frac{1}{p-1}}, \quad\quad W'(s_0)=0, \quad\quad W(s) < Q_4(m)^{\frac{1}{p-1}}
\end{equation}
for each $s<s_0$. When it comes to the interval $(-\infty,s_0)$, we thus have
\begin{align*}
W(s)^p - Q_4(m)^{\frac{p}{p-1}} > pQ_4(m)\cdot \left( W(s)-Q_4(m)^{\frac{1}{p-1}} \right)
\end{align*}
by strict convexity: writing $c = Q_4(m)^{\frac{1}{p-1}}$, the tangent line of $t\mapsto t^p$ at $t=c$ has slope $pc^{p-1}=pQ_4(m)$ and lies strictly below the graph at every point $W(s)\ne c$. This is the same inequality as \eqref{12}, except that the inequality is now strict. In particular, the
argument that led us to \eqref{14} now leads us to a strict inequality
\begin{equation*}
(\d_s - \lambda_1)(\d_s - \lambda_2)(\d_s - \lambda_3)(\d_s - \lambda_4) \,Y(s) > 0, \quad\quad s<s_0
\end{equation*}
for some $\lambda_1,\lambda_2,\lambda_3<0<\lambda_4$. Using the same argument as before, we get
\begin{equation*}
(\d_s - \lambda_3)(\d_s - \lambda_4) \,Y(s) > 0, \quad\quad s<s_0
\end{equation*}
because $\lambda_1,\lambda_2<0$. Multiplying by $e^{-\lambda_3s}$ and integrating over $(-\infty,s_0)$, we then get
\begin{equation*}
e^{-\lambda_3s_0} \cdot [Y'(s_0) -\lambda_4 Y(s_0)] > 0
\end{equation*}
because $\lambda_3<0$ as well. In view of the definition \eqref{Y} of $Y(s)$, this actually gives
\begin{equation*}
W'(s_0) >\lambda_4 \left( W(s_0) - Q_4(m)^{\frac{1}{p-1}} \right),
\end{equation*}
which is contrary to \eqref{end2}. In particular, the inequality in \eqref{21} must be strict at all points and the
proof of part (a) is complete.

In order to prove part (b), we shall first show that
\begin{equation}\label{end3}
W'(s) > 0, \quad\quad s\in{\mathbb R}.
\end{equation}
Using Lemma \ref{cco} and the strict inequality in \eqref{21}, we find that
\begin{equation*}
\Bigl[ Q_4(m-\d_s) - Q_4(m) \Bigr] \,W(s) = W(s)^p - Q_4(m) W(s) < 0.
\end{equation*}
Invoking Lemma \ref{use}, we now factor the left hand side to get
\begin{equation*}
(\d_s - \mu_1)(\d_s - \mu_2)(\d_s - \mu_4) \,W'(s) < 0
\end{equation*}
for some $\mu_1 < \mu_2 < 0 < \mu_4$. Once again, this inequality easily leads to
\begin{equation}\label{16}
(\d_s - \mu_4) \,W'(s) < 0
\end{equation}
because $\mu_1,\mu_2<0$. This makes $W'(s)-\mu_4W(s)$ decreasing for all $s$, so the limit
\begin{equation*}
\lim_{s\to +\infty} W'(s) - \mu_4W(s)
\end{equation*}
exists. Since $W(s)$ tends to a finite limit as $s\to +\infty$ by \eqref{phia}, it easily follows that $W'(s)$ must
approach zero as $s\to +\infty$. Since $\mu_4>0$, this gives
\begin{equation*}
\lim_{s\to +\infty} e^{-\mu_4s} \,W'(s) = 0,
\end{equation*}
and then we may integrate \eqref{16} over $(s,+\infty)$ to deduce the desired \eqref{end3}.

Now, the inequality \eqref{end3} we just proved can also be written as
\begin{equation}\label{31}
0< W'(s) = r\cdot \d_r [r^m \phi_\alpha(r)] = r^m \,[m\phi_\alpha(r) + r\phi_\alpha'(r)]
\end{equation}
in view of our definition \eqref{W}. On the other hand, a scaling argument shows that the solutions provided by Lemma
\ref{ex} are subject to the relation
\begin{equation}\label{32}
\phi_\alpha(r) = \alpha\phi_1(\alpha^{1/m} r).
\end{equation}
Differentiating \eqref{32} and using \eqref{31}, one easily finds that $\d_\alpha \phi_\alpha(r)>0$ for all $\alpha,r>0$. In
particular, the graphs of distinct solutions cannot intersect, as needed.
\end{proof}
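The convexity step used in part (a) is simply the statement that the tangent line of the strictly convex map $t\mapsto t^p$ at $t=Q_4(m)^{1/(p-1)}$ lies below its graph. The following small numerical check illustrates this; the values of $p$ and $Q_4(m)$ below are illustrative stand-ins, not those provided by Theorem \ref{main}.

```python
# Numerical sanity check of the strict convexity inequality
#   W^p - Q^{p/(p-1)} > p*Q * (W - Q^{1/(p-1)})   for W != Q^{1/(p-1)},
# i.e. the tangent line of t -> t^p at t = c := Q^{1/(p-1)} lies strictly
# below the curve.  The values of p and Q are illustrative stand-ins.

def tangent_gap(W, p, Q):
    c = Q ** (1.0 / (p - 1))           # point of tangency
    lhs = W ** p - Q ** (p / (p - 1))  # W^p - c^p
    rhs = p * Q * (W - c)              # tangent slope p*c^(p-1) equals p*Q
    return lhs - rhs                   # strictly positive for W != c

p, Q = 3.0, 4.0
c = Q ** (1.0 / (p - 1))               # c = 2 for these values
for W in [0.0, 0.5, 1.0, 1.9, 2.1, 3.0, 10.0]:
    assert tangent_gap(W, p, Q) > 0.0, W
# At the tangency point the gap vanishes.
assert abs(tangent_gap(c, p, Q)) < 1e-9
```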
\section{Analysis}
\label{sec:Analysis}
In this section
we give an approximate analysis of random feature vectors
in order to provide theoretical support for our experimental observations.
We first show that one-cluster and two-cluster graphs
have quite similar SPI feature vectors (in expectation).
We then give some evidence
that there is a non-negligible difference in their GSPI feature vectors.
Throughout this section,
we consider feature vectors
defined by considering only paths from any fixed source node $s$.
Thus,
for example,
$\numfst{d}$ is the number of nodes at distance $d$ from $s$
in a one-cluster graph,
and $\numsnd{d,x}$ is the number of nodes
that have $x$ shortest paths of length $d$ to $s$ in a two-cluster graph.
Here we introduce a way to state an approximation.
For any functions $a$ and $b$ depending on $n$,
we write
$a\myapprox b$ by which we mean
\[
b\left(1-{c\over n}\right)
<a<b\left(1+{c\over n}\right)
\]
holds for some constant $c>0$ and sufficiently large $n$.
We say that
$a$ and $b$ are relatively $\myrelclose$-close.
Note that
this closeness notion
is closed under a constant number of additions/subtractions and multiplications.
For example,
if $a\myapprox b$ holds,
then we also have $a^k\myapprox b^k$ for any $k\ge1$
that can be regarded as a constant w.r.t.\ $n$.
In the following
we will often use this approximation.
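For instance, closure under powers follows from a Bernoulli-type estimate (a routine verification we add for completeness): if $a\myapprox b$ with constant $c$, and $k\ge1$ is a constant, then

```latex
b^k\Bigl(1-\tfrac{c}{n}\Bigr)^{k} \;<\; a^k \;<\; b^k\Bigl(1+\tfrac{c}{n}\Bigr)^{k},
\qquad
\Bigl(1-\tfrac{c}{n}\Bigr)^{k} \ge 1-\tfrac{kc}{n},
\qquad
\Bigl(1+\tfrac{c}{n}\Bigr)^{k} \le 1+\tfrac{2kc}{n}
```

for all sufficiently large $n$: the middle bound is Bernoulli's inequality, and the last one uses $(1+c/n)^k = 1+kc/n+O(n^{-2})$. Hence $a^k\myapprox b^k$ with constant $2kc$.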
In the following analysis it is important to recall the difference between a walk and a path. A walk is simply a sequence of neighboring nodes, while a path also has the requirement that no node in the path may appear more than once.
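The distinction matters for the counting arguments below. As a toy illustration (a hypothetical 4-node example, not taken from our experiments): the number of walks of length $d$ between $s$ and $t$ is the $(s,t)$ entry of $A^d$, where $A$ is the adjacency matrix, whereas shortest paths never repeat a node and can be counted by breadth-first search.

```python
from collections import deque

# A 4-cycle 0-1-2-3-0 (hypothetical toy example).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
n = 4
A = [[1 if j in adj[i] else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Number of walks of length 3 = entries of A^3: walks may revisit nodes.
A3 = matmul(matmul(A, A), A)
assert A3[0][1] == 4   # 0-1-0-1, 0-3-0-1, 0-1-2-1, 0-3-2-1
assert A3[0][0] == 0   # the 4-cycle is bipartite: no odd closed walks

# Shortest paths are counted by BFS; they can never repeat a node.
def count_shortest(s):
    dist, cnt = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:              # first time v is reached
                dist[v] = dist[u] + 1
                cnt[v] = cnt[u]
                q.append(v)
            elif dist[v] == dist[u] + 1:   # another shortest path into v
                cnt[v] += cnt[u]
    return dist, cnt

dist, cnt = count_shortest(0)
assert dist[1] == 1 and cnt[1] == 1
assert dist[2] == 2 and cnt[2] == 2       # via node 1 and via node 3
```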
\subsection{Approximate Comparison of SPI Feature Vectors}
Here we consider relatively small\footnote{This smallness assumption is just for simplifying our analysis, and we believe that the situation is more or less the same for any $d$.}
distances $d$
so that $d$ can be considered as a small constant w.r.t.\ $n$.
We show that
$\numEfst{d}$ and $\numEsnd{d}$ are similar in the following sense.
\begin{theorem}
For any constant $d$,
we have
$\numEfst{d}\myapprox\numEsnd{d}$.
\end{theorem}
\noindent
{\bf Remark.}~
For deriving this relation
we assume a certain independence
on the existence of two walks in $G$;
see the argument below for the details.
\bigskip
First consider a one-cluster graph $G=(V,E)$ and analyze $\numEfst{d}$.
For this analysis,
consider any target node $t$ ($\ne s$) of $G$,
and estimate first the probability $\Ffst_d$
that there exists at least one walk of length $d$ from $s$ to $t$.
Let $A_{u,v}$ be the event that an edge $\{u,v\}$ exists in $G$,
and let $W_d$ denote
the set of
all tuples $(v_1,\ldots,v_{d-1})$ of nodes in $V\setminus\{s\}$
such that $v_i\ne v_{i+1}$ for all $i$, $1\le i\le d-1$,
and $\{v_i,v_{i+1}\}\ne\{v_j,v_{j+1} \}$ for all $i$ and $j$,
$1\le i<j\le d-1$.
(The same node may appear more than once,
but no loop edge appears in the walk, nor does the same edge appear several times.)
For each tuple $(v_1,\ldots,v_{d-1})$ of $W_d$,
the event $A_{s,v_1}\land A_{v_1,v_2}\land\ldots\land A_{v_{d-1},t}$
is called
the event that
the walk specified by $(s,v_1,\ldots,v_{d-1},t)$ exists in $G$
(or, more simply,
{\em
the existence of one specific walk}).
Then the probability $\Ffst_d$ is expressed by
\begin{equation}
\label{eq:Ffst}
\Ffst_d
=\myP\left[\,
\bigvee_{(v_1,\ldots,v_{d-1})\in W_d}
A_{s,v_1}\land A_{v_1,v_2}\land\ldots\land A_{v_{d-1},t}\,\right]
\end{equation}
Clearly,
the probability of the existence of one specific walk is $p_1^d$,
and the above probability can be calculated
by using the inclusion-exclusion principle.
Here we follow the analysis of Fronczak et al.~\cite{fronczak2004average}
and assume that
every specific walk exists independently.
Then it is easy to see that
the first term of the inclusion-exclusion principle
is relatively $\myrelclose$-close to $n^{d-1}p_1^{d}$,
and hence, we have
\[
\Ffst_d
=
\sum_{(v_1,\ldots,v_{d-1})\in W_d} p_1^d
\myapprox
n^{d-1}p_1^d
\]
because $|W_d|$ is relatively $\myrelclose$-close to $n^{d-1}$.
Let $\ffst_d$ be the probability that
$t$ has a shortest path of length $d$ to $s$.
Then due to the disjointness of the two events
(i) $t$ has a shortest path of length $d$ to $s$,
and (ii) $t$ has a walk of length $d-1$ to $s$,
we have $\ffst_d=\Ffst_d-\Ffst_{d-1}$ for $d\ge2$,
where a similar analysis holds for $\Ffst_{d-1}$.
(For $d=1$,
we have $\ffst_1=\Ffst_1=p_1$.)
Then since $\numEfst{d}=(n-1)\ffst_d$,
we have
\[
\numEfst{d}
\myapprox
n^dp_1^d-n^{d-1}p_1^{d-1}.
\]
The argument is similar for a two-cluster graph.
Let us assume first that $s$ is in $V^+$.
Here we need to consider
the case that the target node $t$ is also in $V^+$ and
the case that it is in $V^-$.
Let $\Fsndp_d$ and $\Fsndm_d$ be
the probabilities that
$t$ has at least one walk of length $d$ to $s$ in the two cases.
Then with a similar analysis,
we have
\[
\Fsndp_d
\myapprox
\sum_{{\rm even}~k=0}^{d}\left(n\over2\right)^{d-1}
{d\choose k}p_2^{d-k}q_2^k,
{\rm~~and~~}
\Fsndm_d
\myapprox
\sum_{{\rm odd}~k=1}^{d}\left(n\over2\right)^{d-1}
{d\choose k}p_2^{d-k}q_2^k,
\]
and
\[
\numEsnd{d}
\myapprox
{n\over2}\bigl(\,\Fsndp_d+\Fsndm_d-(\Fsndp_{d-1}+\Fsndm_{d-1})\,\bigr)
\]
Here we note that
\[
\begin{array}{l}
\displaystyle
\sum_{{\rm even}~k=0}^{d}\left(n\over2\right)^{d-1}
{d\choose k}p_2^{d-k}q_2^k
+\sum_{{\rm odd}~k=1}^{d}\left(n\over2\right)^{d-1}
{d\choose k}p_2^{d-k}q_2^k\\
\hspace*{10mm}
\displaystyle
=
\sum_{k=0}^{d}\left(n\over2\right)^{d-1}{d\choose k}p_2^{d-k}q_2^k
=
\left(n\over2\right)^{d-1}(p_2+q_2)^d.
\end{array}
\]
and that
$p_2+q_2$ $\myapprox$ $2p_1$ from our choice of $q_2$
(see (\ref{eq:pandq})).
Thus,
$\numEsnd{d}$ is again $\myrelclose$-close to $n^{d}p_1^d - n^{d-1} p_1^{d-1}$,
which proves the theorem.
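The even/odd recombination used in this last step can be verified numerically. The sketch below uses illustrative parameters satisfying $p_2+q_2\approx 2p_1$ (the same values as in Fig.~\ref{fig:exp_approx}) and an arbitrary small constant $d$:

```python
from math import comb

n, d = 400, 3
p1, p2, q2 = 0.1, 0.18, 0.0204   # q2 chosen so that p2 + q2 is close to 2*p1

even = sum(comb(d, k) * p2**(d - k) * q2**k for k in range(0, d + 1, 2))
odd  = sum(comb(d, k) * p2**(d - k) * q2**k for k in range(1, d + 1, 2))

# The even and odd partial sums recombine into the full binomial expansion.
assert abs(even + odd - (p2 + q2)**d) < 1e-12
# p2 + q2 is within O(1/n) of 2*p1 ...
assert abs((p2 + q2) - 2 * p1) < 0.001
# ... so (n/2)^{d-1} (p2+q2)^d is close to 2 n^{d-1} p1^d; after the n/2
# prefactor, the two-cluster estimate matches the one-cluster n^d p1^d.
assert abs((n / 2)**(d - 1) * (even + odd) / (n**(d - 1) * p1**d) - 2) < 0.05
```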
\subsection{Heuristic Comparison of GSPI Feature Vectors}
We compare the expected GSPI feature vectors $\vecvgspEfst$ and $\vecvgspEsnd$,
and show evidence
that they have some non-negligible difference.
In this section
we focus on the distance $d=2$ part of the GSPI feature vectors,
and let $\vecvtwoEz$ denote
the subvector $[\numtwoEz{x}]_{x\ge1}$ of $\vecvgspEz$.
Since it is not so easy to analyze
the distribution of the values $\numtwoEz{1},\numtwoEz{2},\ldots$,
we here introduce some ``heuristic'' analysis.
Again we begin with a one-cluster graph $G$,
and let $V_2$ denote the set of nodes of $G$
with distance 2 from the source node $s$.
Consider any node $t$,
and for any $x\ge1$,
we estimate the probability that
it has exactly $x$ shortest paths of length 2 to $s$.
Recall that
$G$ has $(n-1)\ffst_1 \myapprox np_1$ nodes
with distance 1 from $s$ on average.
Here we assume that
$G$ indeed has $np_1$ nodes with distance 1 from $s$
and that
an edge between each of these distance 1 nodes and $t$ exists
with probability $p_1$ independently at random.
In other words,
$x$ follows the binomial distribution,
or more precisely,
$x$ is the random variable $\myBin(np_1,p_1)$,
where by $\myBin(N,p)$
we mean a random number of heads that we have
when flipping a coin that gives heads with probability $p$
independently $N$ times.
Then for each $x\ge1$,
$\numtwoEfst{x}$ is estimated by
\[
\numtwoEEfst{x}
=
\sum_{t\in V_2}\myP\bigl[\,\myBin(np_1,p_1)=x\,\bigr]
=
\numEfst{2}
\cdot
\myP\bigl[\,\myBin(np_1,p_1)=x\,\bigr],
\]
by assuming again
that $|V_2|$ is its expected value $\numEfst{2}$.
Clearly
the distribution of values of vector $[\numtwoEEfst{x}]_{x\ge1}$
is proportional to $\myBin(np_1,p_1)$,
and it has one peak at $\xpeakfst=np_1^2$.
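Under the above binomial assumption, the peak location can be checked directly: the mode of $\myBin(N,p)$ is $\lfloor (N+1)p\rfloor$, which for $N=np_1$ and $p=p_1$ is essentially $np_1^2$. A quick check with illustrative parameters:

```python
from math import comb

def binom_pmf(N, p, x):
    return comb(N, x) * p**x * (1 - p)**(N - x)

n, p1 = 400, 0.1
N = int(n * p1)          # assumed number of distance-1 nodes, here 40
pmf = [binom_pmf(N, p1, x) for x in range(N + 1)]
mode = max(range(N + 1), key=lambda x: pmf[x])

# The peak sits at (essentially) n * p1^2.
assert mode == int(n * p1 * p1)   # = 4 for these parameters
```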
Consider now a two-cluster graph $G$.
For $d\in\{1,2\}$,
let $V_d^+$ and $V_d^-$ denote respectively
the set of nodes in $V^+$ and $V^-$ with distance $d$ from $s$.
Let $V_2=V_2^+\cup V_2^-$.
Again we assume that
$V_1^+$ and $V_1^-$ have respectively $np_2/2$ and $nq_2/2$ nodes
and that
the numbers of edges
from $V_1^+$ and $V_1^-$ to a node in $V_2$ follow binomial distributions.
We once again assume that $s \in V^+$.
Note that we need to consider two cases here.
First consider the case that the target node $t$ is in $V_2^+$.
In this case,
we have
\begin{eqnarray*}
\ptwosndp{x}
&:=&
\myP\bigl[\,\mbox{$t$ has $x$ shortest paths}\,\bigr]\\
&=&
\myP\left[\,\myBin\left({n\over2}p_2,p_2\right)
+\myBin\left({n\over2}q_2,q_2\right)=x\,\right]\\
&\approx&
\myP\left[\,\myN\left({n\over2}(p_2)^2,\sigma_1^2\right)
+\myN\left({n\over2}(q_2)^2,\sigma_2^2\right)=x\,\right]\\
&=&
\myP\left[\,\myN
\left({n((p_2)^2+(q_2)^2)\over2},\sigma_1^2+\sigma_2^2\right)=x\right],
\end{eqnarray*}
where we use the
normal distribution $\myN(\mu,\sigma^2)$
to approximate each binomial distribution
so that we can express their sum by a normal distribution
(here we omit specifying $\sigma_1$ and $\sigma_2$).
The case where $t\in V^-_2$ can be analyzed similarly,
and we obtain that
\[
\ptwosndm{x}
:=
\myP\bigl[\,\mbox{$t$ has $x$ shortest paths}\,\bigr]
=
\myP\left[\,\myN
\left(np_2q_2,\sigma_3^2 + \sigma_4^2 \right)=x\right].
\]
Then again we may estimate $\numtwoEsnd{x}$
by $\numtwoEEsnd{x}$ $:=$ $\myE[V_2^+]\ptwosndp{x}+\myE[V_2^-]\ptwosndm{x}$.
We have now arrived at the key point in our analysis. Note that
$\numtwoEEsnd{x}$ now follows the mixture of two distributions,
namely,
$\myN(n((p_2)^2+(q_2)^2)/2,\sigma_1^2+\sigma_2^2)$
and $\myN(np_2q_2,\sigma_3^2 + \sigma_4^2)$, with weights $\myE[V_2^+]/(\myE[V_2^+] + \myE[V_2^-] )$ and $\myE[V_2^-] /(\myE[V_2^+] + \myE[V_2^-] )$.
Now we estimate
the distance between the two peaks $\xpeaksndp$ and $\xpeaksndm$
of these two distributions.
Then we have
\begin{eqnarray*}
2(\xpeaksndp-\xpeaksndm)
&=&
n(p_2^2+q_2^2-2p_2q_2)
=
n(p_2-q_2)^2\\
&\myapprox&
n(p_2-(2p_1-p_2))^2
=
4n\alpha_0^2p_1^2.
\end{eqnarray*}
From this we have
\[
\xpeaksndp
\myapprox
\xpeaksndm\left(1+{2n\alpha_0^2p_1^2\over\xpeaksndm}\right)
>(1+2\alpha_0^2)\xpeaksndm.
\]
Therefore these peaks have non-negligible relative difference,
which indicates that
two vectors $\vecvtwoEfst$ and $\vecvtwoEsnd$
have different distributions of their component values. In particular $\vecvtwoEfst$ will only have one peak, while $\vecvtwoEsnd$ will have a double peak shape (for large enough $\alpha_0)$.
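With the parameters used for Fig.~\ref{fig:exp_approx} ($n=400$, $p_2=0.18$, $q_2=0.0204$, hence $p_1=0.1$), the two peaks and their separation can be computed explicitly. This is a small sketch of the arithmetic above; the expression $\alpha_0=(p_2-p_1)/p_1$ is our reading of the notation.

```python
n = 400
p1, p2, q2 = 0.1, 0.18, 0.0204     # parameters of Fig. \ref{fig:exp_approx}

x_plus  = n * (p2**2 + q2**2) / 2  # peak for targets in the same cluster
x_minus = n * p2 * q2              # peak for targets in the other cluster

# 2*(x_plus - x_minus) = n*(p2 - q2)^2, as derived above.
assert abs(2 * (x_plus - x_minus) - n * (p2 - q2)**2) < 1e-9

# Relative separation: with alpha_0 = (p2 - p1)/p1 = 0.8 (our reading of
# the notation), the lower bound (1 + 2*alpha_0^2) is comfortably met.
alpha0 = (p2 - p1) / p1
assert x_plus > (1 + 2 * alpha0**2) * x_minus
```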
Though this is a heuristic analysis,
examples show that it agrees well
with experimental results. In Fig. \ref{fig:exp_approx} we plot both our approximated vector $\vecvtwoEsnd$ (more precisely, the normal distribution that gives this vector) and the corresponding experimental vector obtained by generating graphs according to our random model. The double-peak phenomenon is clearly visible in this figure, which provides empirical evidence that our analysis is sound.
The experimental vector is the average over each fixed source node in the graph and over 500 randomly generated graphs, where the graphs were generated with the parameters $n=400$, $p_2=0.18$, $q_2=0.0204$.
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth, keepaspectratio=true]{img/exp_approx1.png}
\caption{%
Average experimental and approximate distributions of number of nodes with $x$ number of shortest paths of length 2 to a fixed node. The experimental distribution has been averaged for each node in the graph and also averaged over 500 randomly generated graphs. The graphs used had parameters $n=400$, $p_2=0.18$ and $q_2=0.0204$. (For computing the graph of the
approximate distribution,
we used a more precise approximation for,
e.g., $\myE[V_2^+]$ because our $n$ is not
large enough.)
}
\label{fig:exp_approx}
\end{figure}
\section{Shortest Path Kernel and Generalized Shortest Path Kernel}
\label{sec:kernels}
A graph kernel is a function $k(G_1,G_2)$ on pairs of graphs, which can be represented as an inner product $k(G_1,G_2) = \langle \phi(G_1), \phi(G_2) \rangle_{\mathcal{H}}$ for some mapping $\phi(G)$ to a Hilbert space $\mathcal{H}$, of possibly infinite dimension. In many cases, graph kernels can be thought of as similarity functions on graphs.
Graph kernels have been used as tools
for using SVM classifiers for graph classification problems~\cite{borgwardt2005shortest,borgwardt2005protein,hermansson2013entity}.
The kernel
that we build upon in this paper is
the \emph{shortest path} (SP) kernel,
which compares graphs based
on the shortest path length of all pairs of nodes~\cite{borgwardt2005shortest}.
By $D(G)$ we denote
the multi set of shortest distances between all node pairs in the graph $G$.
For two given graphs $G_1$ and $G_2$,
the SP kernel is then defined as:
\begin{equation*}
K_{\rm SP}(G_{1}, G_{2})
=\sum_{d_1\in D(G_{1})}\;\sum_{d_2\in D(G_{2})}k(d_1,d_2),
\end{equation*}
where $k$ is a positive definite kernel~\cite{borgwardt2005shortest}.
One of the most common kernels for $k$ is the indicator function,
as used in Borgwardt and Kriegel~\cite{borgwardt2005shortest}.
This kernel compares shortest distances for equality.
Using this choice of $k$ we obtain the following definition of the SP kernel:
\begin{equation}
\label{eq:SPKernelInd}
K_{\rm SPI}(G_{1},G_{2})
=\sum_{d_1\in D(G_{1})}\;\sum_{d_2\in D(G_{2})}
\mathds{1} \left[d_1=d_2\right].
\end{equation}
We call this version of the SP kernel
the \emph{shortest path index} (SPI) kernel.
It is easy to check that
$K_{\rm SPI}(G_{1},G_{2})$ is simply
the inner product of the SPI feature vectors of $G_1$ and $G_2$.
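A minimal sketch of this computation (toy graphs and plain BFS for unweighted graphs; illustrative code, not the implementation used in our experiments):

```python
from collections import deque

def bfs_dist(adj, s):
    """Shortest distances from s in an unweighted graph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def spi_vector(adj):
    """SPI feature vector: n_d = number of node pairs at distance d."""
    hist = {}
    for s in adj:
        for t, d in bfs_dist(adj, s).items():
            if t > s:                     # count each unordered pair once
                hist[d] = hist.get(d, 0) + 1
    return hist

def k_spi(adj1, adj2):
    """SPI kernel: inner product of the two SPI feature vectors."""
    h1, h2 = spi_vector(adj1), spi_vector(adj2)
    return sum(h1[d] * h2.get(d, 0) for d in h1)

path3 = {0: [1], 1: [0, 2], 2: [1]}       # path on 3 nodes
tri   = {0: [1, 2], 1: [0, 2], 2: [0, 1]} # triangle
assert spi_vector(path3) == {1: 2, 2: 1}  # two pairs at d=1, one at d=2
assert spi_vector(tri) == {1: 3}          # all three pairs at d=1
assert k_spi(path3, tri) == 2 * 3         # only the d=1 entries overlap
```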
We now introduce our new kernel,
the \emph{generalized shortest path} (GSP) kernel,
which is defined by {\em also} using the number of shortest paths.
For a given graph $G$,
by $ND(G)$ we denote
the multi set of numbers of shortest paths between all node pairs of $G$.
Then the GSP kernel is defined as:
\begin{equation*}
K_{\rm GSP}(G_{1}, G_{2})
=
\sum_{d_1\in D(G_1)}\;\sum_{d_2\in D(G_2)}\;
\sum_{t_1\in ND(G_1)}\;\sum_{t_2\in ND(G_2)}
k(d_1,d_2,t_1,t_2),
\end{equation*}
where $k$ is a positive definite kernel.
A natural choice for $k$ would again be
a kernel that considers node pairs as equal
if they have the same shortest distance \emph{and}
the same number of shortest paths.
This results in the following definition,
which we call the \emph{generalized shortest path index} (GSPI) kernel:
\begin{equation}
\label{eq:GSPKernelInd}
K_{\rm GSPI}(G_{1}, G_{2}) =
\sum_{d_1\in D(G_1)}\;\sum_{d_2\in D(G_2)}\;
\sum_{t_1\in ND(G_1)}\;\sum_{t_2\in ND(G_2)}
\mathds{1}\left[d_1=d_2\right]\mathds{1}\left[t_1=t_2\right]
\end{equation}
It is easy to see
that this is equivalent to
the inner product of the GSPI feature vectors of $G_1$ and $G_2$.
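The GSPI feature vector admits an equally direct sketch: a single BFS pass per source node can track, alongside each distance, the number of shortest paths achieving it (again an illustrative toy example, not our experimental implementation):

```python
from collections import deque

def bfs_dist_count(adj, s):
    """Distances and numbers of shortest paths from s, in one BFS pass."""
    dist, cnt = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:              # first time v is reached
                dist[v] = dist[u] + 1
                cnt[v] = cnt[u]
                q.append(v)
            elif dist[v] == dist[u] + 1:   # another shortest path into v
                cnt[v] += cnt[u]
    return dist, cnt

def gspi_vector(adj):
    """GSPI vector: n_{d,x} = pairs at distance d with x shortest paths."""
    hist = {}
    for s in adj:
        dist, cnt = bfs_dist_count(adj, s)
        for t in dist:
            if t > s:                      # count each unordered pair once
                key = (dist[t], cnt[t])
                hist[key] = hist.get(key, 0) + 1
    return hist

def k_gspi(adj1, adj2):
    """GSPI kernel: inner product of the two GSPI feature vectors."""
    h1, h2 = gspi_vector(adj1), gspi_vector(adj2)
    return sum(h1[k] * h2.get(k, 0) for k in h1)

c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # 4-cycle
# Four pairs at distance 1 with one shortest path each; the two opposite
# pairs are at distance 2 with two shortest paths each.
assert gspi_vector(c4) == {(1, 1): 4, (2, 2): 2}
```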
\section{Experiments}
\label{sec:Experiments}
In this section we compare the performance of the GSPI kernel with the SPI kernel on datasets where the goal is to classify if a graph is a one-cluster graph or a two-cluster graph.
\begin{comment}
\subsection{Planted Partition Model}
The planted partition model is a model for generating random graphs that contain clusters~\cite{kollaspectra}. In the planted partition model, in our notation, each graph contains $n$ nodes $v_1, v_2, \ldots, v_n$. Each node is part of \emph{one} cluster; there are $k_c$ clusters in total. We denote the cluster of any node $v_i$ by $c(v_i) \in \{ 1, \ldots , k_c\}$. Graphs are generated according to the planted partition model by the following random process.
Two nodes are connected by an edge with different probabilities depending on whether they are from the same cluster or not:
$v_i$ and $v_j$ are connected by an edge with probability $p$ if $c(v_i) = c(v_j)$ and with probability $q$ if $c(v_i) \neq c(v_j)$. We do not allow nodes to have an edge to themselves, i.e.\ self-loops. Note that in the case $k_c = 1$, the planted partition model is simply the ER model with $n$ nodes and probability $p$ of an edge between each node pair.
\end{comment}
\subsection{Generating Datasets and Experimental Setup}
\label{sec:generation}
All datasets are generated using the models $G(n,p_1)$ and $G(n/2,n/2,p_2,q_2)$ described above. We generate 100 graphs from each of the two classes in every dataset.
$q_2$ is chosen in such a way that the expected number of edges is the same for both classes of graphs.
Note that when $p_2 = p_1$, the two-cluster graphs become one-cluster graphs in which all node pairs are connected with the same probability, so the two classes are indistinguishable. The larger the difference between $p_1$ and $p_2$, the more the one-cluster graphs differ from the two-cluster graphs.
In our experiments we generate graphs with $n \in \{200,400,600,800,1000\}$, $np_1=c_0 = 40$ and $p_2 \in \{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1\}$. Hence $p_1 = 0.2$ for $n=200$, $p_1=0.1$ for $n=400$, and so on.
In all experiments we calculate the normalized feature vectors for all graphs. By normalized we mean that each feature vector $\vecvsp$ and $\vecvgsp$ is normalized by its Euclidean norm. This means that the inner product between two feature vectors always is in $[0,1]$.
We then train an SVM using 10-fold cross validation and evaluate the accuracy of the kernels. We use Pegasos~\cite{shalev2011pegasos} for solving the SVM.
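For concreteness, equating the expected edge counts $\binom{n}{2}p_1$ and $2\binom{n/2}{2}p_2+(n/2)^2 q_2$ of the two models pins $q_2$ down. The closed form below is our reading of this condition; it reproduces the parameter values quoted elsewhere in the paper.

```python
from math import comb

def q2_for_equal_edges(n, p1, p2):
    """q2 such that both models have the same expected number of edges."""
    half = n // 2
    intra = 2 * comb(half, 2) * p2   # expected within-cluster edges
    total = comb(n, 2) * p1          # expected edges in G(n, p1)
    return (total - intra) / (half * half)

# Reproduces the parameter pairs quoted in the text
# (n=400, p2=0.18 -> q2=0.0204; n=600, p2=0.08667 -> q2=0.04673).
assert abs(q2_for_equal_edges(400, 0.1, 0.18) - 0.0204) < 1e-9
assert abs(q2_for_equal_edges(600, 0.06667, 0.08667) - 0.04673) < 1e-4
```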
\subsection{Results}
Table \ref{table:accuracy} shows the accuracy of both kernels, using 10-fold cross validation, on the different datasets. As can be seen, neither kernel performs very well on the datasets where $p_2=1.2 p_1$; the two-cluster graphs generated in these datasets are almost the same as the one-cluster graphs. As $p_2$ increases relative to $p_1$, the task of classifying the graphs becomes easier. The table shows that the GSPI kernel outperforms the SPI kernel on nearly all datasets. In particular, on datasets where $p_2 = 1.4 p_1$, the GSPI kernel gains over 20\% in accuracy on several datasets; when $n=200$ the gain exceeds 40\%.
Although the shown results are only for datasets where $c_0=40$, experiments using other values for $c_0$ gave similar results.
One reason that our GSPI kernel is able to classify graphs correctly when the SPI kernel is not is that the GSPI feature vectors of the two classes differ much more than the SPI feature vectors do. In Fig. \ref{fig:len_sp} we plot the SPI feature vectors, for a fixed node, for both classes of graphs in one particular dataset.
By feature vectors for a fixed node we mean that the feature vector contains information for one fixed node instead of node pairs; for example, $n_d$ from $\vecvsp=[n_1,n_2,\ldots]$ then counts the number of nodes at distance $d$ from one fixed node, instead of the number of node pairs at distance $d$ from each other. The feature vector displayed in Fig. \ref{fig:len_sp} is the average feature vector over every fixed node and over the 100 randomly generated graphs of each type in the dataset. In this dataset the graphs were generated with $n=600$; the one-cluster graphs used $p_1=0.06667$, and the two-cluster graphs used $p_2=0.08667$ and $q_2=0.04673$. This corresponds to the dataset in Table \ref{table:accuracy} with $n=600$ and $p_2=1.3p_1$, on which the SPI kernel achieved an accuracy of $60.5\%$ and the GSPI kernel $67.0\%$.
As can be seen in the figure, there is almost no difference at all between the average SPI feature vectors for the two cases.
In Fig. \ref{fig:num_sps} we have plotted the subvectors $[\numtwoone{x}]_{x\ge1}$ of $\vecvgspfst$ and $[\numtwotwo{x}]_{x\ge1}$ of $\vecvgspsnd$, for a fixed node, for the same dataset as in Fig. \ref{fig:len_sp}.
The vectors contain the number of nodes at distance 2 from the fixed node with $x$ number of shortest paths, for one-cluster graphs and two-cluster graphs respectively. The vectors have been averaged for each node in the graph and also averaged over the 100 randomly generated graphs, for both classes of graphs, in the dataset.
As can be seen the distributions of such numbers of nodes are at least distinguishable for several values of $x$, when comparing the two types of graphs.
This motivates why the SVM is able to distinguish the two classes better using the GSPI feature vectors than the SPI feature vectors.
\begin{table}[tbp]
\caption{The accuracy of the SPI kernel and the GSPI kernel using 10-fold cross validation. The datasets where $p_2 = 1.2 p_1$ are the hardest and the datasets where $p_2 = 1.5 p_1$ are the easiest. Very big increases in accuracy are marked in bold.}
\centering
\begin{tabular}{|lccc|}
\hline
Kernel & \multicolumn{1}{l}{$n$} & \multicolumn{1}{l}{$p_2$} & \multicolumn{1}{l|}{Accuracy}\\ \hline
SPI & 200 & $\{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1 \}$ & $\{ 52.5\% , 55.5\% , {\bf 54.5\% } {\bf 56.5\%} \} $ \\
GSPI & 200 & $\{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1 \}$ & $ \{ 52.5 \% , 64.0 \% , {\bf 99.0 \%} , {\bf 100.0 \% } \} $ \\
\hline
SPI & 400 & $\{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1 \}$ & $\{ 55.5\% , 63.5\% , {\bf 75.5\% } , 95.5\% \} $ \\
GSPI & 400 & $\{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1 \}$ & $ \{ 54.0\% , 62.0 \% , {\bf 96.5 \% } , 100.0 \% \} $ \\
\hline
SPI & 600 & $\{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1 \}$ & $\{ 58.0\% , 60.5\%, {\bf 75.5\%} , 93.5\%\}$ \\
GSPI & 600 & $\{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1 \}$ & $ \{ 58.0 \% , 67.0\% , {\bf 94.0 \% } , 100.0 \% \} $ \\
\hline
SPI & 800 & $\{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1 \}$ & $\{ 57.5 \%, 59.0\% , 72.0\%, 98.0 \% \} $ \\
GSPI & 800 & $\{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1 \}$ & $\{ 57.5\% , 58.0 \% , 82.0 \% , 100.0 \% \} $ \\
\hline
SPI & 1000 & $\{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1 \}$ & $ \{ 53.5 \% , 55.0 \% ,{\bf 66.0 \% }, 98.5 \% \} $ \\
GSPI & 1000 & $\{ 1.2p_1, 1.3p_1, 1.4p_1, 1.5p_1 \}$ & $\{ 55.0\% , 62.0\% , {\bf 87.5\% }, 100.0\% \} $ \\
\hline
\end{tabular}
\label{table:accuracy}
\end{table}
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth, keepaspectratio=true]{img/compare_sp.png}
\caption{%
Average distributions of number of nodes with a shortest path of length $x$ to a fixed node. The distributions have been averaged for each node in the graph and also averaged over 100 randomly generated graphs, for both classes of graphs. The graphs used the parameters $n=600$, $p_1=0.06667$ for one-cluster graphs and $p_2=0.08667$, $q_2=0.04673$ for two-cluster graphs.
}
\label{fig:len_sp}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth, keepaspectratio=true]{img/compare_gsp.png}
\caption{%
Average distributions of number of nodes with $x$ number of shortest paths of length 2 to a fixed node. The distributions have been averaged for each node in the graph and also averaged over 100 randomly generated graphs, for both classes of graphs. The graphs used to generate this figure are the same as in Fig. \ref{fig:len_sp}.
}
\label{fig:num_sps}
\end{figure}
\section{Conclusions and Future Work}
\label{sec:conclusions}
We have defined a new graph kernel, based on the number of shortest paths between node pairs in a graph. The feature vectors of the GSP kernel take no longer to compute than those of the SP kernel, because the number of shortest paths between node pairs is a by-product of using Dijkstra's algorithm to obtain the lengths of the shortest paths between all node pairs. The number of shortest paths between node pairs {\bf does} contain relevant information for certain types of graphs. In particular, our experiments showed that the GSP kernel, which also uses the number of shortest paths between node pairs, outperformed the SP kernel, which only uses their lengths, at the task of classifying graphs as containing one or two clusters. We also gave an analysis motivating why the GSP kernel can correctly classify the two types of graphs when the SP kernel cannot.
Future research could examine the distribution of the random feature vectors $\vecvsp$ and $\vecvgsp$ for random graphs generated using the planted partition model with \emph{more} than two clusters. Although we have only given experimental results and an analysis for graphs with one or two clusters, preliminary experiments show that the GSPI kernel outperforms the SPI kernel on tasks such as classifying whether a random graph contains one or four clusters, two or four clusters, etc. It would be interesting to see what guarantees can be obtained, in terms of the $\vecvgsp$ vectors being different while the $\vecvsp$ vectors remain similar, when the number of clusters is not just one or two.
\section{Preliminaries}
\label{sec:prelim}
Here we introduce necessary notions and notation for our technical discussion.
Throughout this paper
we use symbols $G$, $V$, $E$ (with a subscript or a superscript)
to denote graphs, sets of nodes, and sets of edges respectively.
We fix $n$ and $m$
to denote the number of nodes and edges of considered graphs.
By $|S|$ we mean the number of elements of the set $S$.
We are interested in the length and number of shortest paths.
In relation to the kernels we use for classifying graphs,
we use {\em feature vectors} for expressing such information.
For any graph $G$,
for any $d\ge1$,
let $n_d$ denote
the number of pairs of nodes of $G$ with a shortest path of length $d$
(in other words, distance $d$ nodes).
Then we call
the vector $\vecvsp=[n_1,n_2,\ldots]$
an {\em SPI feature vector}.
On the other hand,
for any $d,x\ge1$,
we use $n_{d,x}$ to denote
the number of pairs of nodes of $G$ that have exactly $x$ shortest paths of length $d$,
and we call
the vector $\vecvgsp=[n_{1,1},n_{1,2},\ldots,n_{2,1},\ldots]$ a {\em GSPI feature vector}.
Note that
$n_d=\sum_x n_{d,x}$.
Thus,
a GSPI feature vector is a more detailed version of an SPI feature vector.
In order to simplify our discussion
we often use feature vectors by considering shortest paths
from any fixed node of $G$.
We will clarify which version we use in each context. By $\myE[ \vecvsp ]$ and $\myE[ \vecvgsp ]$ we mean the expected SPI feature vector and the expected GSPI feature vector, for some specified random distribution. Note that the expected feature vectors are equal to
$[\myE[n_d]]_{d\ge1}$ and $[\myE[n_{d,x} ] ]_{d\ge1,x\ge1}$.
It should be noted that
the SPI and the GSPI feature vectors are computable efficiently.
For example,
we can use
Dijkstra's algorithm \cite{dijkstra1959note}
for each node in a given graph,
which gives the shortest path lengths between all node pairs
(i.e.\ an SPI feature vector)
in time $\mathcal{O}(nm + n^2\log n)$.
Note that when Dijkstra's algorithm computes a shortest path from a fixed source node to any other node, it in fact examines {\em all} shortest paths between the two nodes, in order to verify that it really has found a shortest path. In many applications, however, we are only interested in obtaining one shortest path for each node pair, so all other shortest paths for that node pair are not stored. It is, however, possible to store the number of shortest paths between all node pairs {\bf without increasing the running time of the algorithm},
meaning that we can compute the GSPI feature vector in the same time as the SPI feature vector. For practical applications, it might be wise to use a binning scheme for the number of shortest paths, where numbers of shortest paths are considered equal if they are close enough. For example, instead of treating each of the counts $\{1, 2, \ldots, 100\}$ as distinct, we could treat the intervals $\{[1,10], [11,20], \ldots, [91,100]\}$ as distinct and regard all counts within an interval as equal. Doing this reduces the dimension of the GSPI feature vector, which can be useful since the number of shortest paths might be large for dense graphs.
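To make these definitions concrete, the following sketch computes both feature vectors from a fixed source node of an unweighted graph. The BFS-based helper is our own illustration, not code from the paper; for weighted graphs one would run Dijkstra's algorithm instead, as described above.

```python
from collections import defaultdict, deque

def gspi_from_source(adj, s):
    """BFS from s on an unweighted graph, also counting shortest paths.

    adj maps each node to its neighbours.  Returns (spi, gspi), where
    spi[d] is the number of nodes at distance d from s and
    gspi[(d, x)] is the number of nodes at distance d from s that
    have exactly x shortest paths to s.
    """
    dist = {s: 0}
    paths = {s: 1}                        # number of shortest paths from s
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:             # v reached for the first time
                dist[v] = dist[u] + 1
                paths[v] = paths[u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:  # another shortest path into v
                paths[v] += paths[u]
    spi, gspi = defaultdict(int), defaultdict(int)
    for v, d in dist.items():
        if v != s:
            spi[d] += 1
            gspi[(d, paths[v])] += 1
    return dict(spi), dict(gspi)

# a 4-cycle: the node opposite the source is reached by two shortest paths
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
spi, gspi = gspi_from_source(adj, 0)   # spi == {1: 2, 2: 1}
```

On the 4-cycle, the node at distance 2 contributes to the $(2,2)$ component of the GSPI vector, which the plain SPI vector cannot distinguish from a single path of length 2.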
We note that the graph kernels used in this paper can be represented explicitly as inner products of finite-dimensional feature vectors. We choose to still refer to them as \emph{kernels} because of their relation to other graph kernels.
\section{Introduction}
\label{sec:Introduction}
Classifying graphs into different classes depending on their structure is a problem that has been studied for a long time and that has many useful applications~\cite{bilgin2007cell,borgwardt2005protein,kong2010semi,kudo2004application}. By classifying graphs researchers have been able to solve important problems such as to accurately predict the toxicity of chemical compounds~\cite{kudo2004application}, classify if human tissue contains cancer or not~\cite{bilgin2007cell}, predict if a particular protein is an enzyme or not~\cite{borgwardt2005protein}, and many more.
It is generally accepted that the number of self-avoiding paths between all pairs of nodes of a given graph is useful for understanding the structure of the graph~\cite{havlin1982theoretical,liskiewicz2003complexity}.
Computing the number of such paths between all nodes is however a computationally hard task (usually \#P-hard). Counting only the number of shortest paths between node pairs is however possible in polynomial time and such paths at least avoid cycles, which is why some researchers have considered shortest paths a reasonable substitute. When using standard algorithms to compute the shortest paths between node pairs in a graph we also get, as a by-product, the \emph{number} of such shortest paths between all node pairs. Taking this number of shortest paths into account when analyzing the properties of a graph could provide useful information and is what our approach is built upon.
One popular technique for classifying graphs is by using a \emph{support vector machine} (SVM) classifier with graph kernels. This approach has proven successful for classifying several types of graphs~\cite{borgwardt2005shortest,borgwardt2005protein,hermansson2013entity}. Different graph kernels can however give vastly different results depending on the types of graphs that are being classified. Because of this it is useful to analyze which graph kernels work well on which types of graphs. Such an analysis contributes to understanding when graph kernels are useful and for which types of graphs one can expect good results from this approach.
In order to classify graphs, graph kernels that consider many different properties have been proposed, such as kernels based on all walks~\cite{gartner2003graph}, shortest paths~\cite{borgwardt2005shortest}, small subgraphs~\cite{shervashidze2009efficient}, global graph properties~\cite{johansson2014global}, and many more. Analyzing how these graph kernels perform on particular datasets gives us the possibility of choosing graph kernels appropriate for the particular types of graphs that we are trying to classify.
One particular type of graph that appears in many applications is the graph with a cluster structure. Such graphs appear, for instance, when considering social networks.
In this paper, in order to test how well our approach works,
we test its performance on the problem of classifying graphs
by the number of clusters that they contain.
More specifically,
we consider two types of models for generating random graphs,
the Erd\H{o}s-R\'enyi model~\cite{bollobas1998random}
and the planted partition model~\cite{kollaspectra},
where we use the Erd\H{o}s-R\'enyi model to generate graphs with one cluster
and the planted partition model
to generate graphs with two clusters
(explained in detail in Sect. \ref{sec:gmodel}).
The example task considered in this paper
is to classify whether a given random graph is generated
by the Erd\H{o}s-R\'enyi model or by the planted partition model.
For this classification problem,
we use the standard SVM
and compare experimentally
the performance of the SVM classifier,
with the \emph{shortest path} (SP) kernel, and with our new \emph{generalized shortest path} (GSP) kernel.
In the experiments we generate datasets with 100 graphs generated according to the Erd\H{o}s-R\'enyi model and 100 graphs generated according to the planted partition model. Different datasets use different parameters for the two models. The task is then, for any given dataset, to classify graphs as coming from the Erd\H{o}s-R\'enyi model or the planted partition model, where we consider the supervised machine learning setting with 10-fold cross validation.
We show that the SVM classifier that uses our GSP kernel outperforms the SVM classifier that uses the SP kernel, on several datasets.
Next we give
some theoretical analysis of the random feature vectors
of the SP kernel and the GSP kernel, for the random graph models used in our experiments.
We give an approximate estimation
of expected feature vectors for the SP kernel
and show that the expected feature vectors are relatively close
between graphs with one cluster and graphs with two clusters.
We then analyze the distributions of component values
of expected feature vectors for the GSP kernel,
and we show some evidence that
the expected feature vectors have a different structure
between graphs with one cluster and graphs with two clusters.
The remainder of this paper is organized as follows. In Sect. \ref{sec:prelim} we introduce notions and notations that are used throughout the paper. Section \ref{sec:kernels} defines already existing and new graph kernels. In Sect. \ref{sec:gmodel} we describe the random graph models that we use to generate our datasets. Section \ref{sec:Experiments} contains information about our experiments and experimental results. In Sect. \ref{sec:Analysis} we give an analysis explaining why our GSP kernel outperforms the SP kernel on the used datasets. Section \ref{sec:conclusions} contains our conclusions and suggestions for future work.
\section{Analysis}
\label{sec:Analysis}
In this section
we give an approximate analysis of random feature vectors
in order to give theoretical support for our experimental observations.
We first show that one-cluster and two-cluster graphs
have quite similar SPI feature vectors (in expectation).
Then we next show some evidence
that there is a non-negligible difference in their GSPI feature vectors.
Throughout this section,
we consider feature vectors
defined by considering only paths from any fixed source node $s$.
Thus,
for example,
$\numfst{d}$ is the number of nodes at distance $d$ from $s$
in a one-cluster graph,
and $\numsnd{d,x}$ is the number of nodes
that have $x$ shortest paths of length $d$ to $s$ in a two-cluster graph.
\OMIT{
For the following analysis,
the difference between ``walk'' and ``path'' becomes crucial.
Recall that
a {\em walk} is simply a sequence of neighboring nodes,
while a {\em path} also has the requirement
that no node in the path may appear more than once.}
Here we introduce a way to state an approximation.
For any functions $a$ and $b$ depending on $n$,
we write
$a\myapprox b$ by which we mean
\[
b\left(1-{c\over n}\right)
<a<b\left(1+{c\over n}\right)
\]
holds for some constant $c>0$ and sufficiently large $n$.
We say that
$a$ and $b$ are {\em relatively $\myrelclose$-close}
if $a\myapprox b$ holds.
Note that
this closeness notion
is closed under a constant number of additions/subtractions and multiplications.
For example,
if $a\myapprox b$ holds,
then we also have $a^k\myapprox b^k$ for any $k\ge1$
that can be regarded as a constant w.r.t.\ $n$.
In the following
we will often use this approximation.
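To see why closedness under multiplication holds, suppose $a\myapprox b$ and $a'\myapprox b'$ with constants $c$ and $c'$, all quantities being positive. Multiplying the corresponding bounds gives, for sufficiently large $n$,
\[
bb'\left(1-\frac{c+c'}{n}\right)
< bb'\left(1-\frac{c}{n}\right)\left(1-\frac{c'}{n}\right)
< aa'
< bb'\left(1+\frac{c}{n}\right)\left(1+\frac{c'}{n}\right)
< bb'\left(1+\frac{c+c'+1}{n}\right),
\]
so $aa'\myapprox bb'$ holds with constant $c+c'+1$; the relation $a^k\myapprox b^k$ then follows by induction on $k$.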
\subsection{Approximate Comparison of SPI Feature Vectors}
We consider relatively small\footnote{%
This smallness assumption is for our analysis,
and we believe that the situation is more or less the same for any $d$.}
distances $d$
so that $d$ can be considered as a small constant w.r.t.\ $n$.
We show that
$\numEfst{d}$ and $\numEsnd{d}$ are similar in the following sense.
\begin{theorem}
\label{similartheorem}
For any constant $d$,
we have
$\numEfst{d} \in \numEsnd{d}\left(1 \pm \frac{2}{c_0 - 1}\right)$ within our $\myapprox$ approximation, provided that $c_0 \geq 2 + \sqrt{3}$.
\end{theorem}
\noindent
{\bf Remark.}~
For deriving this relation
we assume a certain independence
on the existence of two paths in $G$;
see the argument below for the detail. Note that this difference between $\numEfst{d}$ and $\numEsnd{d}$ vanishes for large values of $c_0$.
\bigskip
\begin{proof}
First consider a one-cluster graph $G=(V,E)$ and analyze $\numEfst{d}$.
For this analysis,
consider any target node $t$ ($\ne s$) of $G$ (we consider this target node to be a fixed node to begin with),
and estimate first the probability $\Ffst_d$
that there exists at least one path of length $d$ from $s$ to $t$.
Let $A_{u,v}$ be the event that an edge $\{u,v\}$ exists in $G$,
and let $\myW$ denote
the set of
all paths (from $s$ to $t$)
expressed
by a permutation $(v_1,\ldots,v_{d-1})$ of nodes in $V\setminus\{s,t\}$.
For each tuple $(v_1,\ldots,v_{d-1})$ of $\myW$,
the event $A_{s,v_1}\land A_{v_1,v_2}\land\ldots\land A_{v_{d-1},t}$
is called
the event that
the path (from $s$ to $t$)
specified by $(s,v_1,\ldots,v_{d-1},t)$ exists in $G$
(or, more simply,
{\em
the existence of one specific path}).
Then the probability $\Ffst_d$ is expressed by
\begin{equation}
\label{eq:Ffst}
\Ffst_d
=\myP\left[\,
\bigvee_{(v_1,\ldots,v_{d-1})\in \myW}
A_{s,v_1}\land A_{v_1,v_2}\land\ldots\land A_{v_{d-1},t}\,\right]
\end{equation}
Clearly,
the probability of the existence of one specific path is $p_1^d$,
and the above probability can be calculated
by using the inclusion-exclusion principle.
Here we follow the analysis of Fronczak et al.~\cite{fronczak2004average}
and assume that
every specific path exists independently. Note that the number of dependent paths can be large when paths are long; therefore, this assumption is only reasonable for short distances $d$.
To simplify the analysis\footnote{%
Clearly,
this is a rough approximation;
nevertheless,
it is enough for asymptotic analysis w.r.t.\ the $\myrelclose$-closeness.
For smaller $n$,
we may use a better approximation from \cite{fronczak2004average},
which will be explained in Subsection~\ref{sec:inex}.}
we only consider the first term of the inclusion-exclusion principle.
That is,
we approximate $\Ffst_d$ by
\begin{eqnarray}
\Ffst_d
&\approx&
\label{eq:Ffst_approxA}
\sum_{(v_1,\ldots,v_{d-1})\in \myW}
\Pr\left[\,
A_{s,v_1}\land A_{v_1,v_2}\land\ldots\land A_{v_{d-1},t}\,\right]\\
&=&
\nonumber
|\myW|p_1^d
~=~
(n-2)(n-3)\cdots(n-(2+d-2))p_1^d
~\myapprox~
n^{d-1}p_1^d,
\end{eqnarray}
where the last approximation relation holds since $d$ is constant.
From this approximation,
we can approximate
the probability $\ffst_d$ that
$t$ has a shortest path of length $d$ to $s$.
For any $d\ge1$,
let $\EventFfst_d$ be the event that there exists at least one path of length $d$, {\em or less}, between $s$ and $t$, and let $\Eventffst_d$ be
the event
that there exists a shortest path of length $d$
between $s$ and $t$. Then
\begin{equation}
\Ffst_d \leq \myP[\EventFfst_d] \leq \displaystyle\sum_{i=1}^{d} \Ffst_i.
\end{equation}
Note that
\begin{eqnarray}
\displaystyle\sum_{i=1}^{d} \Ffst_i ~ &\myapprox& \displaystyle\sum_{i=1}^{d} n^{i-1} p_1^i
= n^{d-1} p_1^d ( \displaystyle\sum_{i=0}^{d-1} \frac{1}{(np_1)^i} ) \nonumber \\
&\leq & n^{d-1}p_1^{d} (\frac{n p_1}{np_1 - 1})
= n^{d-1}p_1^d (1 + \frac{1}{np_1 -1}). \nonumber
\end{eqnarray}
Since $np_1 = c_0 > 1$, it follows that
\begin{equation*}
n^{d-1}p_1^d (1 + \frac{1}{np_1 -1}) = n^{d-1}p_1^d (1 + \frac{1}{c_0 -1}).
\end{equation*}
Recall also that
$\Ffst_d \myapprox n^{d-1}p_1^{d}$. Thus, within our $\myapprox$ approximation, we have
\begin{equation*}
n^{d-1}p_1^{d} \leq \myP[\EventFfst_d] \leq n^{d-1}p_1^d (1 + \frac{1}{c_0 -1}).
\end{equation*}
It is obvious that $\ffst_d=\Pr[\Eventffst_d]$,
note also that
$\EventFfst_d$ $=$ $\Eventffst_d\lor \EventFfst_{d-1}$
and that the two events $\Eventffst_d$ and $\EventFfst_{d-1}$ are disjoint.
Thus,
we have
$\Pr[\EventFfst_d]$ $=$ $\Pr[\Eventffst_d]$+$\Pr[\EventFfst_{d-1}]$,
which is equivalent to
\[
n^{d-1} p_1^{d} - n^{d-2}p_1^{d-1}(1 + \frac{1}{c_0 -1} ) \leq \ffst_d \leq n^{d-1}p_1^d(1 + \frac{1}{c_0 -1} ) - n^{d-2} p_1^{d-1}.
\]
Since $\ffst_d$ is the probability that
there is a shortest path of length $d$ from $s$ to any fixed $t$,
it follows that $\numEfst{d}$,
i.e.,
the expected number of nodes that have a shortest path of length $d$ to $s$,
can be estimated by
\begin{equation*}
\label{eq:numEfst}
\numEfst{d}
=
(n-1)\ffst_d
\myapprox
n\ffst_d.
\end{equation*}
It follows that
\begin{equation}
\label{complicated}
n^{d} p_1^{d} - n^{d-1}p_1^{d-1}(1 + \frac{1}{c_0 -1} ) \leq \numEfst{d} \leq n^{d}p_1^d(1 + \frac{1}{c_0 -1} ) - n^{d-1} p_1^{d-1}.
\end{equation}
holds within our $\myapprox$ approximation.
We may rewrite the above equation using the following equalities
\begin{align}
n^{d} p_1^{d} - n^{d-1}p_1^{d-1}(1 + \frac{1}{c_0 -1} ) &= n^d p_1^d - n^{d-1} p_1^{d-1} - \frac{n^{d-1}p_1^{d-1}}{c_0 - 1} \nonumber \\
&= (n^d p_1^d - n^{d-1} p_1^{d-1}) ( 1 - \frac{ \frac{ n^{d-1}p_1^{d-1} }{c_0 - 1}} { n^d p_1^d - n^{d-1} p_1^{d-1} } ) \nonumber \\
&= (n^d p_1^d - n^{d-1} p_1^{d-1}) ( 1 - \frac{ 1 } { (c_0 - 1)^2} ), \label{simple1}
\end{align}
and
\begin{align}
n^{d}p_1^d(1 + \frac{1}{c_0 -1} ) - n^{d-1} p_1^{d-1} &= n^{d} p_1^{d} - n^{d-1} p_1^{d-1} + \frac{n^{d} p_1^{d}}{c_0 - 1} \nonumber \\
&= (n^{d} p_1^{d} - n^{d-1} p_1^{d-1})( 1 + \frac{ \frac{n^{d} p_1^{d}}{c_0 - 1} } { n^{d} p_1^{d} - n^{d-1} p_1^{d-1}} ) \nonumber \\
&= (n^{d} p_1^{d} - n^{d-1} p_1^{d-1})( 1 + \frac{c_0}{(c_0 - 1)^2} ). \label{simple2}
\end{align}
Substituting (\ref{simple1}) and (\ref{simple2}) into (\ref{complicated}) we get
\begin{equation}
\label{bounds1}
\lowerb \leq \numEfst{d} \leq \upperb .
\end{equation}
We will later use these bounds to derive the theorem.
We now analyze a two-cluster graph and $\numEsnd{d}$.
Let us assume first that $s$ is in $V^+$.
Again we fix a target node $t$ to begin with.
Here we need to consider
the case that the target node $t$ is also in $V^+$ and
the case that it is in $V^-$.
Let $\Fsndp_d$ and $\Fsndm_d$ be
the probabilities that
$t$ has at least one path of length $d$ to $s$ in the two cases.
Then for the first case,
the path starts from $s \in V^+$ and ends in $t \in V^+$,
meaning that the number of times that
the path crossed from one cluster to another
(either from $V^+$ to $V^-$ or $V^-$ to $V^+$) has to be even.
Thus the probability of one specific path existing
is $p_2^{d-k}q_2^k$ for some even $k$, $0\le k\le d$.
Thus,
the first term of the inclusion-exclusion principle
(the sum of the probabilities of all possible paths) then becomes
\begin{equation*}
\Fsndp_d
\myapprox
\left(n\over2\right)^{d-1}
\sum_{{\rm even}~k=0}^{d}{d\choose k}p_2^{d-k}q_2^k,
\end{equation*}
where the number of paths is approximated as before,
i.e.,
$|V^+\setminus\{s,t\}|\cdot(|V^+\setminus\{s,t\}|-1)
\cdots(|V^+\setminus\{s,t\}|-(d-2))$ is approximated by $(n/2)^{d-1}$.
We can similarly analyze the case where $t$ is in $V^-$ to obtain
\begin{equation*}
\Fsndm_d
\myapprox
\left(n\over2\right)^{d-1}
\sum_{{\rm odd}~k=1}^{d}{d\choose k}p_2^{d-k}q_2^k.
\end{equation*}
Since both cases ($t \in V^+$ or $t \in V^-$) are equally likely, the average probability that there is a path of length $d$ between $s$ and $t$ in a two-cluster graph is
\begin{eqnarray*}
\frac{\Fsndp_d+\Fsndm_d}{2}
&\myapprox&
\left(n\over2\right)^{d-1}
\sum_{{\rm even}~k=0}^{d}
{d\choose k}\frac{p_2^{d-k}q_2^k}{2}
+
\left(n\over2\right)^{d-1}
\sum_{{\rm odd}~k=1}^{d}{d\choose k}\frac{p_2^{d-k}q_2^k}{2} \\
&=&
\left(n\over2\right)^{d-1}
\sum_{k=0}^{d}
{d\choose k}\frac{p_2^{d-k}q_2^k}{2}
=
\left(n\over2\right)^{d-1}\frac{(p_2+q_2)^d}{2}.
\end{eqnarray*}
Note here that
$p_2+q_2$ $\myapprox$ $2p_1$ from our choice of $q_2$ (see (\ref{eq:pandq})).
Thus, we have
\begin{equation*}
\frac{\Fsndp_d+\Fsndm_d}{2}
\myapprox
\left(n\over2\right)^{d-1}\frac{(2p_1)^d}{2} = n^{d-1} p_1^d.
\end{equation*}
This is exactly the same as in the one-cluster case; see (\ref{eq:Ffst_approxA}). Thus we have
\begin{equation}
\label{bounds2}
\lowerb \leq \numEsnd{d} \leq \upperb.
\end{equation}
Using this we now prove the main statement of the theorem, namely that
\begin{equation*}
\numEfst{d} \in \numEsnd{d} ( 1 \pm \frac{2}{c_0 - 1}).
\end{equation*}
To prove the theorem we need to establish the following two inequalities:
\begin{align}
\numEfst{d} &\leq \numEsnd{d} ( 1 + \frac{ 2} {c_0-1}), \text{ and} \label{prove1} \\
\numEfst{d} &\geq \numEsnd{d} ( 1 - \frac{ 2} {c_0-1}) \label{prove2}
\end{align}
The proof of (\ref{prove1}) can be done by using (\ref{bounds1}) and (\ref{bounds2}).
\begin{align}
&\numEfst{d} \leq \numEsnd{d} + \frac{ 2 \numEsnd{d} } {c_0-1} \nonumber \\
&\Leftrightarrow \upperb \leq \lowerb \nonumber \\
&~ ~ ~ ~ + \frac{ 2 \lowerb}{c_0 - 1} \nonumber \\
&\Leftrightarrow 1 + \frac{c_0}{(c_0 - 1)^2} \leq 1 - \frac{1}{(c_0-1)^2} + \frac{2}{c_0 - 1} - \frac{2}{(c_0-1)^3} \nonumber \\
&\Leftrightarrow \frac{c_0 + 1}{c_0-1} + \frac{2}{(c_0 - 1)^2} \leq 2.
\end{align}
The last inequality holds when $c_0 \geq 2 + \sqrt{3} \approx 3.7$. The proof of (\ref{prove2}) is similar and is shown below.
\begin{align}
&\numEfst{d} \geq \numEsnd{d} - \frac{ 2 \numEsnd{d} } {c_0-1} \nonumber \\
&\Leftrightarrow \lowerb \geq \upperb \nonumber \\
& ~ ~ ~ ~ - \frac{2 \lowerb}{c_0 - 1} \nonumber \\
&\Leftrightarrow 1 - \frac{1}{(c_0 - 1)^2} \geq 1 + \frac{c_0}{(c_0 -1)^2} - \frac{2}{c_0 -1} + \frac{2}{ (c_0 -1)^3} \nonumber \\
&\Leftrightarrow 2 \geq \frac{c_0+ 1}{c_0 - 1} + \frac{2}{(c_0 -1)^2}.
\end{align}
Again, the last inequality holds when $c_0 \geq 2 + \sqrt{3} \approx 3.7$. This completes the proof of the theorem.
\noindent\hfill
$\square$
\end{proof}
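The threshold $c_0 = 2+\sqrt{3}$ can also be checked numerically: the final inequality of the proof holds with equality exactly there. A small sketch (our own illustration, not part of the proof):

```python
from math import sqrt

def lhs(c0):
    """Left-hand side of the proof's final inequality; the theorem
    requires lhs(c0) <= 2."""
    return (c0 + 1) / (c0 - 1) + 2 / (c0 - 1) ** 2

c_star = 2 + sqrt(3)   # threshold from the theorem, about 3.73
# lhs(c_star) == 2 up to rounding; lhs is decreasing for c0 > 1,
# so the inequality holds for every c0 >= c_star
```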
\subsection{Heuristic Comparison of GSPI Feature Vectors}
We compare in this section
the expected GSPI feature vectors $\vecvgspEfst$ and $\vecvgspEsnd$,
that is,
$[\numEfst{d,x}]_{d\ge1,x\ge1}$ and $[\numEsnd{d,x}]_{d\ge1,x\ge1}$,
and show evidence
that they have some non-negligible difference.
Here we focus on the distance $d=2$ part of the GSPI feature vectors,
i.e.,
subvectors $[\numtwoEz{x}]_{x\ge1}$ for $z\in\{1,2\}$.
Since it is not so easy to analyze
the distribution of the values $\numtwoEz{1},\numtwoEz{2},\ldots$,
we introduce some ``heuristic'' analysis.
We begin with a one-cluster graph $G$,
and let $V_2$ denote the set of nodes of $G$
with distance 2 from the source node $s$.
Consider any $t$ in $V_2$,
and for any $x\ge1$,
we estimate the probability that
it has $x$ number of shortest paths of length 2 to $s$.
Let $V_1$ be the set of nodes at distance 1 from $s$.
Recall that
$G$ has $(n-1)\ffst_1 \myapprox np_1$ nodes in $V_1$ on average,
and note that $t$ has an edge to at least one node in $V_1$,
each such edge corresponding to a shortest path of length 2 from $s$ to $t$.
We now assume for our ``heuristic'' analysis that
$|V_1|=np_1$ and that
an edge between each of these distance 1 nodes and $t$ exists
with probability $p_1$ independently at random.
Then $x$ follows the binomial distribution $\myBin(np_1,p_1)$,
where by $\myBin(N,p)$
we mean a random number of heads that we have
when flipping a coin that gives heads with probability $p$
independently $N$ times.
Then for each $x\ge1$,
$\numtwoEfst{x}$,
the expected number of nodes of $V_2$
that have $x$ shortest paths of length 2 to $s$,
is estimated by
\[
\numtwoEfst{x}
\approx
\sum_{t\in V_2}\myP\bigl[\,\myBin(np_1,p_1)=x\,\bigr]
=
\numEfst{2}
\cdot
\myP\bigl[\,\myBin(np_1,p_1)=x\,\bigr],
\]
by assuming
that $|V_2|$ takes its expected value $\numEfst{2}$.
Clearly
the distribution of values of vector $[\numtwoEfst{x}]_{x\ge1}$
is proportional to $\myBin(np_1,p_1)$,
and it has one peak at $\xpeakfst=np_1^2$,
since the mean of a binomial distribution $\myBin(N,p)$ is $Np$.
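Under the stated heuristic assumptions ($|V_1|=np_1$ and independent edges), the shape of this $d=2$ subvector can be tabulated directly. The sketch below uses illustrative parameters of our own choosing ($n=400$, $c_0=40$, so that $np_1^2=4$), not the parameters of the experiments:

```python
from math import comb

def binom_pmf(N, p, x):
    """P[Bin(N, p) = x]."""
    return comb(N, x) * p ** x * (1 - p) ** (N - x)

def spi2_profile(n, c0, xmax):
    """Heuristic shape of the d=2 GSPI subvector of a one-cluster
    graph G(n, p1) with p1 = c0/n: proportional to Bin(n*p1, p1)."""
    p1 = c0 / n
    N = round(n * p1)              # heuristic: |V_1| fixed at its mean
    return [binom_pmf(N, p1, x) for x in range(1, xmax + 1)]

profile = spi2_profile(n=400, c0=40, xmax=15)
peak_x = 1 + max(range(len(profile)), key=profile.__getitem__)
# the profile has a single peak, located near n*p1^2 = c0^2/n = 4
```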
Consider now a two-cluster graph $G$.
We assume that our start node $s$ is in $V^+$.
For $d\in\{1,2\}$,
let $V_d^+$ and $V_d^-$ denote respectively
the set of nodes in $V^+$ and $V^-$ with distance $d$ from $s$.
Let $V_2=V_2^+\cup V_2^-$.
Again we assume that
$V_1^+$ and $V_1^-$ have respectively $np_2/2$ and $nq_2/2$ nodes
and that
the numbers of edges
from $V_1^+$ and $V_1^-$ to a node in $V_2$ follow binomial distributions.
Note that we need to consider two cases here,
$t \in V_2^+$ and $t \in V_2^-$.
First consider the case that the target node $t$ is in $V_2^+$.
In this case
there are two types of shortest paths.
The first type of paths goes from $s$ to $V^+_1$ and then to $t \in V_2^+$.
The second type of shortest path goes from $s$ to $V^-_1$ and then to $t \in V_2^+$.
Based on this we get
\begin{eqnarray*}
\ptwosndp{x}
&:=&
\myP\bigl[\,\mbox{$t$ has $x$ shortest paths}\,\bigr]\\
&=&
\myP\left[\,\myBin\left({n\over2}p_2,p_2\right)
+\myBin\left({n\over2}q_2,q_2\right)=x\,\right]\\
&\approx&
\myP\left[\,\myN\left({n\over2}p_2^2,\sigma_1^2\right)
+\myN\left({n\over2}q_2^2,\sigma_2^2\right)\in[x-0.5,x+0.5] \,\right]\\
&=&
\myP\left[\,\myN
\left({n(p_2^2+q_2^2)\over2},\sigma_1^2+\sigma_2^2\right)
\in[x-0.5,x+0.5] \, \right],
\end{eqnarray*}
where we use the
normal distribution $\myN(\mu,\sigma^2)$
to approximate each binomial distribution
so that we can express their sum by a normal distribution
(here we omit specifying $\sigma_1$ and $\sigma_2$).
For the second case where $t\in V^-_2$,
with a similar argument,
we derive
\[
\ptwosndm{x}
:=
\myP\bigl[\,\mbox{$t$ has $x$ shortest paths}\,\bigr]
=
\myP\left[\,\myN
\left(np_2q_2,\sigma_3^2 + \sigma_4^2 \right)\in[x-0.5,x+0.5] \, \right].
\]
Note that the first case ($t \in V_2^{+}$) happens with probability $|V_2^{+}| / (|V_2^{+}| + |V_2^{-}|)$, and the second case ($t \in V_2^{-}$) with probability $|V_2^{-}| / (|V_2^{+}| + |V_2^{-}|)$.
Then again we may approximate
the $x$th component of the expected feature subvector
by
\begin{equation*}
\numtwoEsnd{x} \approx \myE[|V_2^{+}|]\,\ptwosndp{x}+ \myE[|V_2^{-}|]\,\ptwosndm{x} = \bigl(\myE[|V_2^{+}|] + \myE[|V_2^{-}|]\bigr)\biggl( \frac{\myE[ |V_2^{+}|] }{ \myE[|V_2^{+}|] + \myE[|V_2^{-}|]}\, \ptwosndp{x}+ \frac{\myE[ |V_2^{-}|] }{ \myE[|V_2^{+}|] + \myE[|V_2^{-}|]}\, \ptwosndm{x}\biggr).
\end{equation*}
We have now arrived at the key point in our analysis.
Note that
the distribution of values of vector $[\numtwoEsnd{x}]_{x\ge1}$ follows
the mixture of two distributions,
namely,
$\myN(n(p_2^2+q_2^2)/2,\sigma_1^2+\sigma_2^2)$
and $\myN(np_2q_2,\sigma_3^2 + \sigma_4^2)$,
with weights $\myE[ |V_2^{+}|] / (\myE[|V_2^{+}|] + \myE[|V_2^{-}|])$ and $\myE[ |V_2^{-}|] / (\myE[|V_2^{+}|] + \myE[|V_2^{-}|])$.
Now we estimate
the distance
between the two peaks $\xpeaksndp$ and $\xpeaksndm$ of these two distributions.
Note that
the mean of a normal distribution $\myN(\mu,\sigma^2)$ is simply $\mu$.
Then we have
\begin{eqnarray*}
\xpeaksndp-\xpeaksndm
&=&
\frac{n}{2}(p_2^2+q_2^2)-np_2q_2
=
\frac{n}{2}(p_2-q_2)^2\\
&\myapprox&
\frac{n}{2}(p_2-(2p_1-p_2))^2
=
\frac{n}{2}(2p_2 - 2p_1)^2 \\
&\myapprox&
2n(p_1(1 + \alpha_0)-p_1)^2=2n(p_1\alpha_0)^2
=
2n\alpha_0^2p_1^2
\end{eqnarray*}
Note that
$q_2 \myapprox 2p_1 - p_2$ holds (from (\ref{eq:pandq}));
hence,
we have
$p_1$ $\myapprox$ $(p_2+q_2)/2$ $\ge$ $\sqrt{p_2q_2}$,
and we approximately have
$p_1^2$ $\ge$ $p_2q_2$.
By using this,
we can bound the difference between these peaks by
\begin{equation*}
\xpeaksndp-\xpeaksndm
\myapprox
2n\alpha_0^2p_1^2
\ge
2\alpha_0^2\xpeaksndm.
\end{equation*}
That is,
these peaks have non-negligible relative difference.
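For concreteness, the two peak locations and their separation can be evaluated numerically. The sketch below uses $n=400$, $p_1=0.1$, and $\alpha_0=0.8$, which under (\ref{eq:pandq}) correspond to the experimental parameters $p_2=0.18$ and $q_2=0.0204$:

```python
def gspi_peaks(n, p1, alpha0):
    """Peak locations of the two d=2 mixture components for a
    two-cluster graph, with p2 and q2 chosen as in the text."""
    p2 = (1 + alpha0) * p1
    q2 = 2 * p1 - p2 - 2 * (p1 - p2) / n
    same_side = n * (p2 ** 2 + q2 ** 2) / 2   # peak for t in V^+
    cross_side = n * p2 * q2                  # peak for t in V^-
    return same_side, cross_side

hi, lo = gspi_peaks(n=400, p1=0.1, alpha0=0.8)
# hi - lo is about 5.09, close to 2*n*alpha0^2*p1^2 = 5.12
```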
From our heuristic analysis
we may conclude that the
two vectors $[\numtwoEfst{x}]_{x\ge1}$ and $[\numtwoEsnd{x}]_{x\ge1}$
have different distributions of their component values.
In particular,
while the former vector has only one peak,
the latter vector has a double peak shape (for large enough $\alpha_0$).
Note that this difference does not vanish even when $c_0$ is large. This means that the GSPI feature vectors differ between one-cluster graphs and two-cluster graphs even for large $c_0$, which is not the case for the SPI feature vectors, whose difference vanishes as $c_0$ grows. This provides evidence as to why our GSPI kernel performs better than the SPI kernel.
Though this is a heuristic analysis,
we can show by example that
our observations are not so different from experimental results.
In Fig.~\ref{fig:exp_approx}
we have plotted both our approximated vector of $[\numtwoEsnd{x}]_{x\ge1}$
(actually we have plotted
the mixed normal distribution that gives this vector)
and the corresponding experimental vector obtained
by generating graphs according to our random model.
In this figure the double peak shape can clearly be observed,
which provides empirical evidence supporting our analysis.
This experimental vector
is the average vector for each fixed source node in random graphs,
which is averaged over 500 randomly generated graphs
with the parameters $n=400$, $p_2=0.18$, and $q_2=0.0204$.
(For these parameters,
we need to use a better approximation of (\ref{eq:Ffst}) explained
in the next subsection to derive the normal distribution of this figure.)
\subsection{Inclusion-Exclusion Principle}
\label{sec:inex}
Throughout the analysis,
we have always used the first term of
the inclusion-exclusion principle to estimate (\ref{eq:Ffst}).
This works well for expressing our analytical results,
where we consider the case where $n$ is large.
When applying the approximation to graphs with a small number of nodes,
however, it might be necessary to use a better approximation of
the inclusion-exclusion principle.
For example,
we in fact used
the approximation from \cite{fronczak2004average}
for deriving the mixed normal distributions of Fig.~\ref{fig:exp_approx}.
Here for completeness,
we state this approximation as a lemma
and give its proof, which is outlined in \cite{fronczak2004average}.
\begin{lemma}
\label{lem:bound}
Let $E_1,E_2,\ldots,E_l$ be mutually independent events
such that $\myP[E_i]\leq\epsilon$ holds for all $i$, $1\le i\le l$.
Then we have
\begin{equation}
\label{eq:lemfron}
\myP\left[\,\bigcup^{l}_{i=1}E_i\,\right]
=
1-\myexp\left(-\displaystyle\sum^l_{i=1}\myP[E_i]\right)-Q,
\end{equation}
where
\begin{equation}
\label{eq:errorQ}
-\displaystyle\sum_{k=0}^{l+1} \frac{(l \epsilon)^k}{k!} + (1+\epsilon)^{l}
\le Q \le
\displaystyle\sum_{k=0}^{l+1} \frac{(l \epsilon)^k}{k!} - (1+\epsilon)^{l} .
\end{equation}
\end{lemma}
\noindent
{\bf Remark.}~
The above bound for the error term $Q$ is
slightly weaker than the one in \cite{fronczak2004average},
but it is sufficient for many situations,
in particular for our usage.
In our analysis of (\ref{eq:Ffst})
each $E_i$ corresponds to an event that one specific path exists.
Recall that we assumed that all paths exist independently.
\bigskip
\begin{proof}
Using the definition of the inclusion-exclusion principle we get
\begin{equation}
\label{eq:bigcup}
\myP\left[\,\bigcup^{l}_{i=1}E_i\,\right]
=
\sum^{l}_{k=1} (-1)^{k+1} S(k),
\end{equation}
where each $S(k)$ is defined by
\begin{equation*}
S(k)
=
\sum_{1 \leq i_1 < \ldots < i_k \leq l}
\myP[E_{i_1}] \myP[E_{i_2}] \cdots \myP[E_{i_k}]
=
\sum_{1 \leq i_1 < \ldots < i_k \leq l}
P_{i_1} P_{i_2}\cdots P_{i_k}.
\end{equation*}
Here and in the following
we denote each probability $\myP[E_i]$ simply by $P_i$.
First we show that
\begin{equation}
\label{eq:Sk}
S(k)
=
{1\over k!}\left(\sum_{i=1}^{l}P_i\right)^k-Q_k,
\end{equation}
where
\begin{equation}
\label{eq:Qk}
0\le Q_k\le\left(\frac{l^k}{k!}-\binom{l}{k}\right)\epsilon^k.
\end{equation}
To see this
we introduce
two index sequence sets $\Gamma_k$ and $\Pi_k$ defined by
\begin{eqnarray*}
\Gamma_k
&=&
\{\,(i_1,\ldots,i_k)\,:\,
\mbox{$i_j\in\{1,\ldots,l\}$ for all $j$, $1\le j\le k$}\,\},\\
\Pi_k
&=&
\{\,(i_1,\ldots,i_k)\in\Gamma_k\,:\,
\mbox{$i_j\ne i_{j'}$ for all $j,j'$, $1\le j<j'\le k$}\,\}.
\end{eqnarray*}
Then it is easy to see that
\[
\left(\sum_{i=1}^{l}P_i\right)^k
=
\sum_{(i_1,\ldots,i_k)\in\Gamma_k}P_{i_1}\cdots P_{i_k},
{\rm~~and~~}
k!S(k)
=
\sum_{(i_1,\ldots,i_k)\in\Pi_k}P_{i_1}\cdots P_{i_k}.
\]
Thus,
we have
\begin{eqnarray*}
k!Q_k
=
\left(\sum_{i=1}^{l}P_i\right)^k-k!S(k)
&=&
\sum_{(i_1,\ldots,i_k)\in\Gamma_k\setminus\Pi_k}P_{i_1}\cdots P_{i_k}\\
&\le&
|\Gamma_k\setminus\Pi_k|\epsilon^k
=
\bigl(l^k-l(l-1)\cdots(l-k+1)\bigr)\epsilon^k,
\end{eqnarray*}
which gives bound (\ref{eq:Qk}) for $Q_k$.
Now from (\ref{eq:bigcup}) and (\ref{eq:Sk})
we have
\begin{equation*}
1-\myP\left[\,\bigcup^{l}_{i=1}E_i\,\right]
=
\sum^{l}_{k=0}{(-1)^k\over k!}\left(\sum_{i=1}^{l}P_i\right)^k
+\sum^{l}_{k=1}(-1)^{k+1}Q_k.
\end{equation*}
Here we note that
the sum $\sum^{l}_{k=0}(-1)^k/k!(\sum_{i=1}^{l}P_i)^k$ is
the first $l+1$ terms of
the Maclaurin expansion of $\myexp(-\sum_{i=1}^lP_i)$.
Hence,
the error term $Q$ of (\ref{eq:lemfron}) becomes
\begin{equation*}
Q
=
- \sum_{k\ge l+1}{(-1)^k\over k!}\left(\sum_{i=1}^{l}P_i\right)^k
+ \sum^{l}_{k=1}(-1)^{k+1}Q_k.
\end{equation*}
We now derive an upper bound for $Q$.
\begin{eqnarray}
Q
&\le&
- \sum_{k\ge l+1}\frac{(-1)^{k}}{k!}\left(\sum_{i=1}^{l}P_i\right)^k
+\sum^{l}_{k=1}Q_k \nonumber \\
&\le &
\frac{(l \epsilon)^{l+1}}{(l+1)!}
+ \displaystyle\sum_{k=1}^{l} \frac{(l \epsilon)^k}{k!} - \displaystyle\sum_{k=1}^{l} \binom{l}{k} \epsilon^k \nonumber \\
&=&\displaystyle\sum_{k=0}^{l+1} \frac{(l \epsilon)^k}{k!} - (1+\epsilon)^{l}. \nonumber
\end{eqnarray}
This proves the upper bound on $Q$; the proof of the lower bound of $Q$ is completely analogous. Thus, the lemma holds.
\noindent
$\square$
\end{proof}
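As a quick numerical sanity check of the lemma (our own illustration): for mutually independent events the exact union probability is $1-\prod_i(1-P_i)$, while dropping the error term $Q$ leaves $1-\exp(-\sum_i P_i)$; for many events of small probability the two agree closely.

```python
from math import exp, prod

def union_exact(probs):
    """P[union of E_i] for mutually independent events E_i."""
    return 1 - prod(1 - p for p in probs)

def union_fronczak(probs):
    """The lemma's approximation with the error term Q dropped."""
    return 1 - exp(-sum(probs))

probs = [1e-3] * 500     # many independent events of small probability
gap = abs(union_exact(probs) - union_fronczak(probs))   # well below 1e-3
```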
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth, keepaspectratio=true]{img/exp_approx1.png}
\caption{%
Average experimental and approximate distributions of number of nodes
with $x$ number of shortest paths of length 2 from a fixed node.
The experimental distribution has been averaged
for each node in the graph and also averaged
over 500 randomly generated graphs.
The graphs used had parameters $n=400$, $p_2=0.18$, and $q_2=0.0204$.
(For computing the graph of the approximate distribution,
we used a more precise approximation for $\Ffst_d$
because our $n$ is not large enough, see Sect. \ref{sec:inex} for details.)}
\label{fig:exp_approx}
\end{figure}
\section{Random Graph Models}
\label{sec:gmodel}
We investigate
the advantage of our GSPI kernel over the SPI kernel
for a synthetic random graph classification problem.
Our target problem is
to distinguish random graphs having two relatively ``dense parts''
from simple graphs generated by the Erd\H{o}s-R\'enyi model.
Here by ``dense part''
we mean a subgraph with more edges inside it than leaving it.
For any edge density parameter $p$, $0<p<1$,
the Erd\H{o}s-R\'enyi model (with parameter $p$), denoted by $G(n,p)$,
generates a graph $G$ (of $n$ nodes)
by putting an edge between each pair of nodes
with probability $p$ independently at random.
On the other hand,
for any $p$ and $q$, $0<q<p<1$,
the {\em planted partition model}~\cite{kollaspectra},
denoted by $G(n/2,n/2,p,q)$,
generates a graph $G=(V^+\cup V^-,E)$ (with $|V^+|=|V^-|=n/2$)
by putting an edge between each pair of nodes $u$ and $v$
again independently at random
with probability $p$ if both $u$ and $v$ are in $V^+$ (or in $V^-$)
and with probability $q$ if $u\in V^+$ and $v\in V^-$
(or, $u\in V^-$ and $v\in V^+$).
Throughout this paper,
we use the symbol $p_1$ to denote the edge density parameter
for the Erd\H{o}s-R\'enyi model
and $p_2$ and $q_2$ to denote the edge density parameters
for the planted partition model.
We want to have $q_2<p_2$
while keeping the expected number of edges the same
for both random graph models
(so that
one cannot distinguish random graphs by just counting the number of edges).
It is easy to check
that this requirement is satisfied by setting
\begin{equation}
\label{eq:pandq}
p_2=(1+\alpha_0)p_1
\qquad\mbox{and}\qquad
q_2=2p_1-p_2-2(p_1 - p_2)/n
\end{equation}
for some constant $\alpha_0$, $0<\alpha_0<1$.
We consider the ``sparse'' situation for our experiments and analysis,
and assume that $p_1=c_0/n$ for sufficiently large constant $c_0$.
Note that
when $c_0$ is large enough, a random graph generated by either model has, with high probability, a large connected component but might not be fully connected~\cite{bollobas1998random}.
In the rest of the paper,
a random graph generated by $G(n,p_1)$ is called
a {\em one-cluster graph}
and a random graph generated by $G(n/2,n/2,p_2,q_2)$ is called
a {\em two-cluster graph}.
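The two generators and the density coupling of Eq.~(\ref{eq:pandq}) can be sketched as follows. This is an illustrative Python sketch using only the standard library; the paper specifies no implementation, and the function names are ours.

```python
import random

def coupled_densities(n, p1, alpha0):
    # Eq. (pandq): choose p2 > q2 so that both random graph models
    # have the same expected number of edges.
    p2 = (1 + alpha0) * p1
    q2 = 2 * p1 - p2 - 2 * (p1 - p2) / n
    return p2, q2

def one_cluster_graph(n, p1, rng=random):
    # Erdos-Renyi G(n, p1): each pair joined independently with prob. p1.
    return {(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p1}

def two_cluster_graph(n, p2, q2, rng=random):
    # Planted partition G(n/2, n/2, p2, q2): V+ = {0, ..., n/2 - 1},
    # V- = the rest; within-part pairs use p2, cross pairs use q2.
    half = n // 2
    return {(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < (p2 if (u < half) == (v < half) else q2)}
```

For example, $n=400$, $p_1=0.1$, $\alpha_0=0.8$ give $p_2=0.18$ and $q_2=0.0204$, matching the parameters reported in the caption of Fig.~\ref{fig:exp_approx}.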
For a random graph, the
SPI/GSPI feature vectors are random vectors.
For each $z\in\{0,1\}$,
we use $\vecvspz$ and $\vecvgspz$ to denote
random SPI and GSPI feature vectors of a $z$-cluster graph.
We use $\numz{d}$ and $\numz{d,x}$
to denote respectively
the $d$th and $(d,x)$th component of $\vecvspz$ and $\vecvgspz$.
For our experiments and analysis,
we consider their expectations $\vecvspEz$ and $\vecvgspEz$,
that is,
$[\numEz{d}]_{d\ge1}$ and $[\numEz{d,x}]_{d\ge1,x\ge1}$.
Note that
$\myE[\numz{d,x}]$ is the expected {\em number of node pairs}
that have exactly $x$ shortest paths of length $d$ between them;
it is not to be confused
with the expected number of shortest paths of length $d$.
| {
"timestamp": "2015-10-23T02:06:02",
"yymm": "1510",
"arxiv_id": "1510.06492",
"language": "en",
"url": "https://arxiv.org/abs/1510.06492",
"abstract": "We consider the problem of classifying graphs using graph kernels. We define a new graph kernel, called the generalized shortest path kernel, based on the number and length of shortest paths between nodes. For our example classification problem, we consider the task of classifying random graphs from two well-known families, by the number of clusters they contain. We verify empirically that the generalized shortest path kernel outperforms the original shortest path kernel on a number of datasets. We give a theoretical analysis for explaining our experimental results. In particular, we estimate distributions of the expected feature vectors for the shortest path kernel and the generalized shortest path kernel, and we show some evidence explaining why our graph kernel outperforms the shortest path kernel for our graph classification problem.",
"subjects": "Data Structures and Algorithms (cs.DS); Machine Learning (cs.LG)",
"title": "Generalized Shortest Path Kernel on Graphs"
} |
\section{Introduction}
Given a point set $P$ in $\mathbb{R}^2$, the \textit{unit disk cover problem} (UDC) seeks to find the smallest set of unit disks that cover all of $P$. This problem arises in applications to facility location, motion planning, and image processing \cite{fowler, hochbaummaass}.
In both the $L_2$ and $L_\infty$ norms, UDC is NP-hard \cite{fowler}. The \textit{shifting strategy} yields various polynomial time approximation algorithms in $d$ dimensions --- for an arbitrarily large integer shifting parameter $\ell$, it is possible to approximate to within $\left(1+\frac{1}{\ell}\right)^{d-1}$ \cite{hochbaummaass, gonzalez}. Since these algorithms rely on optimally solving the problem in an $\ell\times \ell$ square through exhaustive enumeration, their running time scales exponentially with $\ell$, making them impractical for large data sets. At the cost of incurring a constant approximation factor, the speed of the algorithm may be improved by constraining the disk centers to a unit square grid within the $\ell\times \ell$ square \cite{caltech1, caltech2}.
If the disk centers are constrained to an arbitrary finite set of points, UDC becomes the discrete unit disk covering problem (DUDC), which is also NP-hard. However, DUDC has a number of different approximation algorithms, with the current state-of-the-art achieving a constant factor of 15 \cite{dudc}.
In this paper, we present an algorithm that approximates UDC in the plane under the Euclidean and max norms by constraining the disk centers to a set of parallel lines. This algorithm is usable in practical settings and simple to implement. We show that, in the max norm, choosing a set of parallel lines distance $2$ apart achieves an approximation factor of 2. In the Euclidean norm, choosing a set of parallel lines distance $\sqrt{3}$ apart achieves an approximation factor of $25/6$. In both norms, the most costly step is simply sorting the points. Consequently, the run time and space of the algorithm are $O(n \log n)$ and $O(n)$ respectively.
\renewcommand{\arraystretch}{1.5}
\begin{table}[htb]
\centering
\begin{tabular}{c|c|c|c}
\hline
Paper & Approximation & Running Time & Year\tabularnewline
\hline
\hline
\cite{hochbaummaass} & $\left(1+\frac{1}{\ell}\right)^2$ & $O\left(\ell^4 (2n)^{4\ell^2+1}\right)$ & 1985\tabularnewline
\hline
\cite{gonzalez} & $\left(1+\frac{1}{\ell}\right)$ & $O\left(\ell^{2}n^{6\ell\sqrt{2}+1}\right)$ & 1991\tabularnewline
\hline
\cite{gonzalez} & 8 & $O\left(n+n\log H\right)$ & 1991\tabularnewline
\hline
\cite{bronnimann} & $O(1)$ & $O(n^{3}\log n)$ & 1995\tabularnewline
\hline
\cite{caltech2} & $\alpha\left(1+\frac{1}{\ell}\right)^{2}$ & $O(Kn)$ & 2001\tabularnewline
\hline
Ours & $25/6$ & $O(n\log n)$ & 2014\tabularnewline
\hline
\end{tabular}
\caption{A history of approximation algorithms for the unit disk cover problem in $L_2$. $n$ is the number of points in $P$. The shifting parameter $\ell$ is a positive integer which may be arbitrarily large. $H$ is the number of circles in the optimal solution. $\alpha$ is a constant between 3 and 6. $K$ is a factor at least quadratic in $\ell$ and polynomial in the size of the approximation lattice.}
\end{table}
\section{Line restricted unit disk cover}
\label{lrudc}
Here we explore a restricted variant of UDC:
Given a point set $P$ in $\mathbb{R}^2$, the \textit{line restricted unit disk cover} problem (LRUDC) seeks to find the smallest set of unit disks --- each with \textit{centers on a given set of parallel lines $S$} --- that cover all of $P$.
For certain carefully chosen sets of lines, LRUDC can be solved efficiently using greedy methods. However, with no restrictions on the placement and number of parallel lines in $S$, LRUDC is NP-hard by reduction from UDC.
\begin{theorem}
Using $O(n^2)$ parallel lines, UDC reduces to LRUDC.
\end{theorem}
\begin{proof}
Consider a circle arrangement $\mathcal A$ consisting of unit radius circles centered at each of the points in the point set $P$. For any circle $C$ in the optimal solution of UDC, let $F$ be the face in $\mathcal A$ in which the center of $C$ resides. Observe that moving this center to any point in $F$ does not change the subset of points in $P$ that $C$ covers. If the set $S$ of parallel lines intersects all faces in $\mathcal A$, then the optimal line-restricted solution can have disks centered in the same set of faces as in the unrestricted case. Hence, any optimal solution of LRUDC for this set of lines is an optimal solution of UDC. Since there are only $O(n^2)$ faces in $\mathcal A$, having one line for each of the faces suffices.
\end{proof}
As an aside, it is unknown whether LRUDC is NP-hard if only $O(n)$ parallel lines are used.
\section{Approximation algorithms for UDC}
In our approximation algorithms for UDC, we use solutions to LRUDC on narrow vertical strips. The set $S$ of restriction lines we use for LRUDC will simply be uniformly spaced vertical lines. For each restriction line we will solve LRUDC confined to the subset of $P$ within a thin strip around the line. All points in $P$ will be in some strip, and we will choose the spacing between restriction lines so that a good approximation to UDC is obtained.
\subsection{A $2$-approximation with the $L_\infty$ norm}
\label{sec:max-norm-p3}
The max norm is a special case, as unit circles in the max norm are axis-aligned squares of width 2. We can take advantage of this fact to obtain a $2$-approximation algorithm.
\begin{enumerate}
\item Partition the plane into vertical strips of width $2$, and let the restriction line set $S$ be the set of vertical lines running down the centre of the strips.
\item For each non-empty strip, use the simple greedy procedure of inserting a square whose top edge is located at the topmost uncovered point. Repeat until all points in the strip are covered.
\end{enumerate}
The asymptotic cost of the algorithm is only $O(n\log n)$, as we need to sort the points by $x$-coordinate to partition them into the strips, and within each strip, we need to sort the points by $y$-coordinate to process the points in order of decreasing height.
The correctness of the greedy procedure in step 2 is easy to see, and is described in some detail by \cite{federgreene}. In fact, for this set of lines, this algorithm solves LRUDC optimally. This is because the greedy procedure is optimal for each strip, and the strips are all \textit{independent} from one another --- meaning that no point will be covered by squares from two different strips.
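The strip partition and greedy square placement above can be sketched as follows. This is an illustrative Python sketch (function and variable names are ours), not the authors' implementation.

```python
from collections import defaultdict

def cover_linf(points):
    """Greedy 2-approximation sketch for UDC in the max norm.
    Squares have width 2 and their centers lie on the center line
    of each width-2 vertical strip.  Returns the square centers."""
    strips = defaultdict(list)
    for x, y in points:
        strips[int(x // 2)].append((x, y))   # strip k covers [2k, 2k+2)
    centers = []
    for k, pts in strips.items():
        cx = 2 * k + 1                       # restriction line of strip k
        pts.sort(key=lambda p: p[1], reverse=True)
        bottom = float('inf')                # lower edge of last square
        for x, y in pts:
            if y < bottom:                   # topmost uncovered point
                centers.append((cx, y - 1))  # square's top edge at y
                bottom = y - 2
    return centers
```

Because the squares are exactly as wide as the strips, no square ever reaches into a neighbouring strip, which is the independence property used in Theorem \ref{square}.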
\begin{figure}
\centering
\includegraphics[width=7cm]{figure3.pdf}\\
$\,$ \\
\includegraphics[width=7cm]{figure3b.pdf}
\caption{Theorem \ref{square}. Top: a point set optimally coverable by one square. Bottom: a covering solution for each strip. Dashed lines denote restriction lines in $S$.}
\label{fig3}
\end{figure}
\begin{theorem}\label{square}
This algorithm is a 2-approximation for UDC in $L_\infty$.
\end{theorem}
\begin{proof}
For convenience, we define an \textit{$S$-restricted} solution to be any line-restricted solution covering all the points of $P$ using the same set $S$ of lines as our algorithm, but not necessarily the same set of circles as the one produced by our algorithm.
Let \textsc{opt} be an optimal solution for UDC in $L_\infty$. Each square in the optimal solution will intersect at most two strips, since each strip has the same width as the squares. We can construct an $S$-restricted solution by simply using two $S$-restricted squares to cover each square in \textsc{opt} (see Figure \ref{fig3}). This uses exactly twice as many squares as \textsc{opt}. Since our algorithm solves LRUDC on $S$ optimally, it will be at least as good as the 2-approximation on each strip. As each strip is independent, our algorithm will be as good as the $S$-restricted solution over all strips.
\end{proof}
\subsection{A $5$-approximation with the $L_2$ norm}
\label{l2p2approx}
First, we present a simple $5$-approximation algorithm that forms the basis of our $25/6$-approximation algorithm.
\begin{enumerate}
\item Partition the plane into vertical strips of width $\sqrt{3}$. As before, let the restriction line set $S$ be the set of center lines of the strips.
\item For each non-empty strip, use the simple greedy procedure of inserting a circle positioned as low as possible while still covering the topmost uncovered point. Assume that all points in the strip are uncovered initially, and repeat until all points in the strip are covered.
\end{enumerate}
Note that the only difference between the algorithm above and the one for $L_\infty$ is the width of the vertical strip. With a width of $\sqrt{3}$, circles centred on a particular strip can cover points of neighbouring strips. Hence the strips are no longer independent, and we run the greedy procedure in step 2 assuming that all the points in the strip are uncovered initially (even though they may be covered by circles from different strips). Alternatively, we could remove points already covered by neighbouring strips as we go, but this makes no difference to the approximation factor or the asymptotic run time of the algorithm.
As before, partitioning the points into the strips takes $O(n \log n)$ time. Within each strip, the subprocedure of greedily covering the points can be done in $O(n_s \log n_s)$ time, where $n_s$ is the number of points in the strip. This is achieved by transforming the point covering problem into a segment covering problem.
The reduction is as follows: from each point $p$ in the strip draw a unit circle $C_p$ centred at $p$. The circle $C_p$ intersects the restriction line of the strip in two points, creating a segment $s_p$ between the two points. If the centre of an $S$-restricted circle is placed anywhere on $s_p$, it will cover $p$. Hence to cover all the points in the strip, we simply have to stab all the segments $\{s_p\}_{p\in P}$ with points representing centres of $S$-restricted circles. The strategy of greedily covering the topmost point reduces to choosing the stabbing point as low as possible, while still stabbing the topmost unstabbed segment. This can be done in $O(n_s \log n_s)$ time via sorting the segments by $y$-coordinate.
The correctness of the greedy subprocedure in step 2 follows from the same logic as Section \ref{sec:max-norm-p3}.
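The segment-stabbing reduction can be sketched as follows. This is an illustrative Python sketch (names are ours); for simplicity it checks each new segment against all previous stabs, which is quadratic in the worst case, whereas the $O(n_s \log n_s)$ bound in the text requires keeping the stabs in a sorted structure.

```python
import math

def cover_strip(points, cx):
    """Greedy LRUDC sketch for one strip: stab the segment of the
    topmost uncovered point as low as possible.  Returns the
    y-coordinates of unit-circle centers on the line x = cx."""
    # s_p = [y - h, y + h] is the set of center heights on x = cx
    # from which a unit circle covers the point p = (x, y)
    segs = sorted(((y, math.sqrt(max(0.0, 1.0 - (x - cx) ** 2)))
                   for x, y in points), key=lambda s: s[0], reverse=True)
    stabs = []
    for y, h in segs:
        if not any(y - h <= s <= y + h for s in stabs):
            stabs.append(y - h)      # lowest stab covering this point
    return stabs
```

Each stab $s \in s_p$ satisfies $(x-c_x)^2 + (y-s)^2 \le 1$, so every point of the strip ends up within distance 1 of some chosen center.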
The argument that our algorithm is a 5-approximation is based on the following fact: for each circle $C$ in an optimal solution \textsc{opt} of UDC, there exists an $S$-restricted solution which covers each $C$ entirely using at most five circles. Furthermore, this solution is redundant in that the points of each strip are covered completely by $S$-restricted circles on that strip. We call such a solution \textit{oblivious}, as it does not take into account points covered by circles of neighbouring strips. Note that our algorithm produces a solution that is at least as good as any oblivious $S$-restricted solution, as each strip is solved optimally by our algorithm. It follows that our algorithm is also a 5-approximation.
It is necessary in the worst case to cover $C$ entirely since an adversary may provide an input point set $P$ consisting of arbitrarily many points coverable by a single circle (Figures \ref{fig1} and \ref{fig2}). The following proofs use a straightforward application of geometry to establish bounds on the number of $S$-restricted circles to cover $C$. There are two possible cases --- either $C$ intersects two strips, or $C$ intersects three strips.
\begin{obs}
\label{4c}
Let \textsc{opt} be the set of optimal circles for UDC. Suppose that the centre of a circle $C\in \textsc{opt}$ does not lie within $1-\frac{\sqrt{3}}{2}$ of a restriction line. Then, any oblivious $S$-restricted solution will require at least four circles to cover $C$.
Moreover, there exists an oblivious $S$-restricted solution which uses exactly four circles to cover $C$.
\end{obs}
\begin{proof}
Without loss of generality, let $C$ be centered at $(x_c,0)$ where $1-\frac{\sqrt{3}}{2}\leq x_c \leq \frac{3\sqrt{3}}{2}-1$. For $x_c$ in this range, $C$ intersects two strips. Let the corresponding restriction lines be called $\mathcal L_1$ and $\mathcal L_2$ and be placed at $x=0$ and $x=\sqrt{3}$ respectively. Consider the strip boundary, a vertical line $\mathcal L_{12}$ at $x=\frac{\sqrt{3}}{2}$. The intersection of $C$ with this line forms a segment of length greater than 1 but smaller than 2. To cover $C$ entirely, this segment must be covered. For both strips that $C$ intersects, the algorithm would cover this segment, as each strip is oblivious that the neighbouring strip may have covered the same segment. Since each line-restricted circle can only cover a segment of length 1 on $\mathcal L_{12}$, each strip would need two circles, resulting in a total of 4.
It is easy to see that $C$ is covered by the four circles centred at $\left(0,\frac{1}{2}\right)$, $\left(0,-\frac{1}{2}\right)$, $\left(\sqrt{3},\frac{1}{2}\right)$, $\left(\sqrt{3},-\frac{1}{2}\right)$ (see Figure \ref{fig1}). Note that the obliviousness constraint is satisfied as the circles within each strip do not depend on circles from neighbouring strips to cover the points from $C$.
\end{proof}
\begin{figure}[h]
\centering
$\mathcal L_1$ \hspace{5pt} $\mathcal L_{12}$ \hspace{5pt} $\mathcal L_2$\\
$\,$ \\
\includegraphics[width=7.5cm]{figure1.pdf}\\
$\,$ \\
\includegraphics[width=7.5cm]{figure1b.pdf}
\caption{Observation \ref{4c}. Top: a set of points optimally coverable by one circle. Bottom: a covering solution for each strip. Dashed lines denote restriction lines in $S$.}
\label{fig1}
\end{figure}
\begin{obs}
\label{5c}
Let \textsc{opt} be the set of optimal circles for UDC. Suppose that the centre of a circle $C\in \textsc{opt}$ lies within $1-\frac{\sqrt{3}}{2}$ of a restriction line. Then, any oblivious $S$-restricted solution will require at least five circles to cover $C$.
Moreover, there exists an oblivious $S$-restricted solution which uses exactly five circles to cover $C$.
\end{obs}
\begin{proof}
Without loss of generality, let $C$ be centered at $(x_c,0)$ where $0\leq x_c < 1-\frac{\sqrt{3}}{2}$. For $x_c$ in this range, $C$ intersects three strips. Let the corresponding restriction lines be called $\mathcal L_1$, $\mathcal L_2$ and $\mathcal L_3$ and be placed at $x=-\sqrt{3}$, $x=0$ and $x=\sqrt{3}$ respectively. Consider the two strip boundaries, vertical lines $\mathcal L_{12}$ at $x=-\frac{\sqrt{3}}{2}$ and $\mathcal L_{23}$ at $x=\frac{\sqrt{3}}{2}$. The intersection of $C$ with $\mathcal L_{23}$ forms a segment centered at $y=0$ of length greater than 1 but smaller than 2, and the intersection of $C$ with $\mathcal L_{12}$ forms a segment centered at $y=0$ of length smaller than 1. To cover $C$ entirely, these segments must be covered. Since each line-restricted circle can only cover a segment of length 1 on the strip boundary, and each strip is oblivious that the neighbouring strip may have covered the same segment, strips 2 and 3 would need two circles each and strip 1 would need one circle, resulting in a total of 5.
Finally, it is easy to see that $C$ is covered by the five circles centred at $\left(-\sqrt{3},0\right)$, $\left(0,\frac{1}{2}\right)$, $\left(0,-\frac{1}{2}\right)$, $\left(\sqrt{3},\frac{1}{2}\right)$, $\left(\sqrt{3},-\frac{1}{2}\right)$ (see Figure \ref{fig2}).
\end{proof}
\begin{figure}[h]
\centering
\hspace{12pt} $\mathcal L_1$ \hspace{34pt} $\mathcal L_2$ \hspace{34pt} $\mathcal L_3$\\
$\,$ \\
\includegraphics[width=9cm]{figure2.pdf}\\
$\,$ \\
\includegraphics[width=9cm]{figure2b.pdf}
\caption{Observation \ref{5c}. Top: a set of points optimally coverable by one circle. Bottom: a covering solution for each strip. Dashed lines denote restriction lines in $S$.}
\label{fig2}
\end{figure}
\begin{theorem}
\label{5app}
This algorithm is a $5$-approximation for UDC in $L_2$.
\end{theorem}
\begin{proof}
Since our algorithm solves each strip optimally, it produces a solution that is at least as good as any oblivious $S$-restricted solution. We have shown that there exists such a solution with an approximation factor of 5, which follows directly from Observations \ref{4c} and \ref{5c}. Hence, our algorithm has an approximation factor of at most 5.
\end{proof}
\subsection{Improving the 5-approximation to a 25/6-approximation}
\label{improve}
To improve the 5-approximation algorithm to a $25/6$-approximation algorithm, we employ a ``smoothing'' technique. From the calculations in Observation \ref{5c}, the region of center locations for which a circle $C$ in an optimal solution requires five covering circles is only $2-\sqrt{3}$ wide. In all other cases, $C$ can be covered by only four circles. Since $2-\sqrt{3}$ is less than one-sixth of the full strip width $\sqrt{3}$, it is intuitive that we can do better than a 5-approximation. Here, we show that by shifting the strip partition, we can smooth out the regions that require five circles and achieve a $25/6$-approximation.
To be precise, we define a strip partition with shift $\alpha$ to be the partition of $\mathbb{R}^2$ into width $\sqrt{3}$ vertical strips, where the boundaries of the strips are located at $x=\alpha + k\sqrt{3}$, $k\in \mathbb{Z}$. As usual, our restriction lines are in the centres of these strips. Our algorithm with the smoothing technique is:
\begin{enumerate}
\item For $\alpha=0,\frac{\sqrt{3}}{6},\frac{2\sqrt{3}}{6},\ldots, \frac{5\sqrt{3}}{6}$, partition the plane into vertical strips of width $\sqrt{3}$ with shift $\alpha$ and use the 5-approximation algorithm.
\item Return the best of the six solutions obtained above.
\end{enumerate}
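The smoothing procedure can be sketched as follows. This is an illustrative Python sketch (names are ours) that reuses the per-strip greedy of Section \ref{l2p2approx} and keeps the best of the six shifted solutions.

```python
import math

SQRT3 = math.sqrt(3.0)

def cover_strip(points, cx):
    # per-strip greedy of Section on the 5-approximation: stab the
    # segment of the topmost uncovered point as low as possible
    segs = sorted(((y, math.sqrt(max(0.0, 1.0 - (x - cx) ** 2)))
                   for x, y in points), key=lambda s: s[0], reverse=True)
    stabs = []
    for y, h in segs:
        if not any(y - h <= s <= y + h for s in stabs):
            stabs.append(y - h)
    return stabs

def cover_l2(points):
    """25/6-approximation sketch: run the 5-approximation for the
    six shifts alpha = 0, sqrt(3)/6, ..., 5*sqrt(3)/6; keep the best."""
    best = None
    for i in range(6):
        alpha = i * SQRT3 / 6
        strips = {}
        for x, y in points:              # boundaries at alpha + k*sqrt(3)
            k = math.floor((x - alpha) / SQRT3)
            strips.setdefault(k, []).append((x, y))
        circles = []
        for k, pts in strips.items():
            cx = alpha + k * SQRT3 + SQRT3 / 2   # strip center line
            circles += [(cx, s) for s in cover_strip(pts, cx)]
        if best is None or len(circles) < len(best):
            best = circles
    return best
```

Each shifted run is a complete cover, so the returned minimum is a cover as well; the averaging argument in the proof below bounds its size by $\frac{25}{6}|\textsc{opt}|$.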
\begin{theorem}
The algorithm with smoothing is a $25/6$-approximation for UDC in $L_2$.
\end{theorem}
\begin{proof}
Let \textsc{opt} be the set of optimal circles for UDC, and let $S_1,\ldots,S_6$ be the six sets of shifted line sets used in our algorithm. For each of the six shifts, there exists an oblivious $S_i$-restricted solution.
Suppose that for $i=1,\ldots,6$, there are $q_i$ circles in \textsc{opt} with centers $(x,y)$ satisfying
\begin{align}
\alpha_i + k\sqrt{3} + \frac{5}{12}\sqrt{3} \leq x\leq \alpha_i + k\sqrt{3} + \frac{7}{12}\sqrt{3}\label{eq1}
\end{align}
for $k\in\mathbb{Z}$, where $\alpha_i = (i-1)\frac{\sqrt{3}}{6}$. Note that since these six ranges fill the plane, $\sum q_i=|\textsc{opt}|$.
According to Observations \ref{4c} and \ref{5c}, each solution has five circles for every circle in \textsc{opt} that has center $(x,y)$ satisfying
\begin{align}
\alpha_i + k\sqrt{3} + \sqrt{3}-1 \leq x \leq \alpha_i + k\sqrt{3} + 1\label{eq2}
\end{align}
and four circles for every other circle in \textsc{opt}. Since the range in Equation \ref{eq2} is a subrange of that in Equation \ref{eq1}, it follows that an oblivious $S_i$-restricted solution of Section \ref{l2p2approx} uses no more than
\begin{align}
5q_i + 4\sum_{\substack{j=1\\ j \neq i}}^6 q_j
\end{align}
circles.
For $i=1,\ldots,6$, let $A_i$ be the $i$-th candidate solution generated by our algorithm and let $A^*$ be the solution with fewest circles out of the 6 $A_i$'s. Since each $A_i$ is at least as good as any oblivious $S_i$-restricted solution, we have the inequality:
\begin{align}
|A^*| & = \min_{1\leq i\leq 6} |A_i| \\
&\leq \min_{1\leq i\leq 6} \left[5q_i + 4\sum_{\substack{j=1\\ j \neq i}}^6 q_j\right]\\
&=4|\textsc{opt}| + \min_{1\leq i\leq 6} q_i\\
&\leq 4|\textsc{opt}| + \frac{1}{6}|\textsc{opt}| = \frac{25}{6}|\textsc{opt}|
\end{align}
Hence the output of our algorithm is a $25/6$ approximation to the unit disk cover problem.
\end{proof}
\section{Extensions}
The approximation algorithm outlined above can be applied to any $L_p$ norm --- one simply has to figure out the worst case number of oblivious line-restricted circles to cover an arbitrary circle in the plane. For each norm, the optimal line spacing varies. However, our algorithm is guaranteed to produce constant factor approximations when the spacing is less than $2$.
Our algorithm can also be extended to higher dimensions. A natural extension is to use a collection of uniformly spaced parallel lines, and solve LRUDC in a small tube surrounding each line. In this way, we can obtain approximations in arbitrarily large $d$ dimensions, albeit with an approximation factor that scales exponentially with $d$. In particular, applying this technique to the $L_\infty$ norm gives a $2^{d-1}$-approximation in $d$ dimensions, matching an earlier result by \cite{gonzalez}.
Finally, our algorithm applies to covering objects more general than points, such as polygonal shapes. Our proofs only rely on the fact that any optimal circle can be covered entirely with a constant number of oblivious line-restricted circles. The fact that we are covering points is not used.
\section{Concluding Remarks}
We presented a simple algorithm to approximate the unit disk cover problem within factor of $25/6$ in $L_2$ and within a factor of 2 in $L_\infty$.
The algorithm runs in $O(n \log n)$ time and $O(n)$ space, with the most time consuming step being a simple sort of the input. On a practical level, we believe our algorithm has a good mix of performance and simplicity, with a typical implementation of no more than 30 lines of C++.
It remains open what the best approximation factor is that an oblivious line-restricted approach can achieve for the unit disk cover problem. For the $L_\infty$ norm, we saw that 2 is the best possible approximation factor for equally spaced lines. Similarly, one can show for the $L_2$ norm that $15/4$ is a lower bound for oblivious algorithms with equally spaced lines. It would be interesting to see whether these lower bounds can be beaten by an oblivious algorithm once the equal spacing condition is removed. Finally, an analysis of the optimal spacing in other $L_p$ norms would be interesting as well.
\section{Acknowledgements}
The authors would like to thank Professors David Kirkpatrick and Will Evans for their limitless patience and guidance, as well as Kristina Nelson for reading the early drafts.
\small
\bibliographystyle{abbrv}
| {
"timestamp": "2014-06-17T02:08:55",
"yymm": "1406",
"arxiv_id": "1406.3838",
"language": "en",
"url": "https://arxiv.org/abs/1406.3838",
"abstract": "Given a point set P in 2D, the problem of finding the smallest set of unit disks that cover all of P is NP-hard. We present a simple algorithm for this problem with an approximation factor of 25/6 in the Euclidean norm and 2 in the max norm, by restricting the disk centers to lie on parallel lines. The run time and space of this algorithm is O(n log n) and O(n) respectively. This algorithm extends to any Lp norm and is asymptotically faster than known alternative approximation algorithms for the same approximation factor.",
"subjects": "Computational Geometry (cs.CG)",
"title": "A fast 25/6-approximation for the minimum unit disk cover problem"
} |
https://arxiv.org/abs/2106.09262 | Componentwise linear ideals in Veronese rings | In this article, we study the componentwise linear ideals in the Veronese subrings of $R=K[x_1,\ldots,x_n]$. If char$(K)=0$, then we give a characterization for graded ideals in the $c^{th}$ Veronese ring $R^{(c)}$ to be componentwise linear. This characterization is an analogue of that over $R$ due to Aramova, Herzog and Hibi in \cite{ah00} and Conca, Herzog and Hibi in \cite{chh04}. | \section{Introduction}
Let $R=K[x_1,\ldots,x_n]$, where $K$ is a field. For $c\in \mathbb N$, the $c^{th}$ {\it Veronese ring} of $R$ is the ring $$R^{(c)}:=K[\mbox{all monomials of degree equal to $c$ in } R].$$
The componentwise linear ideals in $R$ are well studied by various authors.
Their graded Betti numbers were studied by Herzog and Hibi, see \cite{hh99}. In \cite{ah00}, Aramova, Herzog and Hibi showed that if char$(K)=0$, then a graded ideal $I$ in $R$ is componentwise linear if and only if $I$ and $\operatorname{Gin}_{\tau}(I)$ have the same graded Betti numbers, where $\operatorname{Gin}_{\tau}(I)$ is the generic initial ideal of $I$ with respect to the reverse lexicographic order $\tau$. Later Conca proved a characterization in terms of the Koszul Betti numbers of
$\operatorname{Gin}_{\tau}(I)$, see \cite{c03}. Also see \cite{chh04} for the rigidity of Koszul Betti numbers. Not much is known in the literature about componentwise linear ideals in $R^{(c)}$. It is interesting to ask whether
one can give a characterization of the componentwise linear ideals in the Veronese ring $R^{(c)}$ analogous to those of \cite[Theorem 1.1]{ah00} and \cite[Theorem 4.5]{c03}.
\begin{question}
Can we give a characterization for the componentwise linear ideals in $R^{(c)}$?
\end{question}
In this work, we answer this question affirmatively. Gasharov, Murai and Peeva defined and studied Borel ideals and lex ideals in the Veronese ring $R^{(c)}$ to prove Macaulay's Theorem for graded ideals in $R^{(c)}$, see \cite{gmp11}. We show that if $I$ is a graded ideal in
$R^{(c)}$, then the contracted ideal $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ in $R^{(c)}$ is a Borel ideal and has graded Betti numbers bigger than those of $I$ (Lemma \ref{lem1} and Proposition \ref{pro1}). Using this we prove formulas for the graded Betti numbers of componentwise
linear ideals in $R^{(c)}$ (Proposition \ref{pro2}). This is an analogue of that of
\cite[Proposition 1.3]{hh99}. Thus, we prove that \cite[Corollary 3.7.5]{gmp11} holds more generally for componentwise linear ideals. We show the following analogue of
\cite[Theorem 1.1]{ah00} in $R^{(c)}$:
\noindent
{\bf Theorem \ref{thm1}.}
Assume char$(K)=0$. Then a graded ideal $I$ in $R^{(c)}$ is componentwise linear if and only if
$$\beta^{R^{(c)}}_{ij}\left(\frac{R^{(c)}}{I}\right) =
\beta^{R^{(c)}}_{ij}\left(\frac{R^{(c)}}{(\operatorname{Gin}_{\tau}(IR))^{(c)}}\right) ~~\mbox{ for all }i,j.$$
We also show the following.
\noindent
{\bf Theorem \ref{thm2}.}
Assume char$(K)=0$. Let $I \subset R^{(c)}$ be a graded ideal and $M=\frac{R^{(c)}}{I}$. Let $y_1,\ldots,y_d$ be a generic sequence of homogeneous forms of degree one in $R^{(c)}$. Suppose $y_1,\ldots,y_d$ is a proper $M$-sequence. Then $I$ is componentwise linear.
\vskip 0.1cm
This is partly an analogue of \cite[Theorem 1.5]{chh04} for the graded ideals in the Veronese ring $R^{(c)}$. Also, we give a necessary condition for the componentwise linearity of ideals in $R^{(c)}$ as below.
\noindent
{\bf Theorem \ref{thm3}.}
Assume char$(K)=0$. Let $y_1,\ldots,y_d$ be a generic sequence of homogeneous forms of degree one in $R^{(c)}$. Let $I \subset R^{(c)}$ be a graded ideal. Suppose $I$ is componentwise linear. Then $\mathfrak m^{(c)}\left(Tor_i^{R^{(c)}}\left(K, \frac{R^{(c)}}{I}\right)\right)=0$ for all $i\geq 1$.
We organize the paper as follows. In Section \ref{sec2}, we recall various notations and results that are needed in the rest of the paper, and we prove one of our main results, Theorem \ref{thm1}. Finally, in Section \ref{sec3}, we study a necessary and a sufficient condition for componentwise linear ideals in $R^{(c)}$ in Theorems \ref{thm2} and \ref{thm3}.
\noindent
{\bf Acknowledgement:} I would like to thank Professor Aldo Conca for some clarifications and I thank the anonymous referee for the critical reading of the manuscript.
\section{Betti numbers of componentwise linear ideals in $R^{(c)}$} \label{sec2}
Let $R=K[x_1,\ldots,x_n]$ be the polynomial ring in the variables $x_1,\ldots,x_n$
over a field $K$ and $\mathfrak m=(x_1,\ldots,x_n)$. Let $c\in \mathbb N$. The $c^{th}$ {\it Veronese ring}
of $R$ is the ring $R^{(c)}:=K[{\bf x}^{\bf a}~|~ {\bf x}^{\bf a} \mbox{ is a monomial of
degree $c$ in } R]$. $R^{(c)}$ is a standard graded $K$-algebra with the grading
$$R^{(c)} = \displaystyle \bigoplus_{j} R_{cj}.$$
In this work, we study the componentwise linearity of graded ideals in $R^{(c)}$. Let $I$
be a graded ideal in $R^{(c)}$. We say that $I$ is {\it componentwise linear}, if $I_{\langle j \rangle}$
has $j$-linear resolution over $R^{(c)}$, for all $j$. Here $I_{\langle j \rangle}$ denotes the ideal
generated by $I_j$, the $j^{th}$ graded component of $I$.
The $ij^{th}$ Betti number of $R^{(c)}/I$, where $I$ is a graded ideal in $R^{(c)}$, is denoted and defined as
$$\beta^{R^{(c)}}_{ij}\left(\frac{R^{(c)}}{I}\right):=\dim_K\left(Tor_i^{R^{(c)}}\left(K,\frac{R^{(c)}}{I}\right)_j\right).$$
Set $d={n+c-1 \choose n-1}$. Let $S=K[t_1,\ldots,t_d]$ be the polynomial ring in $d$ variables over $K$.
Define a ring homomorphism $\varphi:S \rightarrow R$ by
$\varphi(t_i)={\bf x}^{\bf a_i}$, where ${\bf a_1}, \ldots, {\bf a_d}$ are the vectors in $\mathbb Z_+^n$ such
that $|{\bf a}_i|=c$ and ${\bf a_1} >_{lex} \cdots >_{lex} {\bf a_d}$.
Then $\frac{S}{\operatorname{ker}(\varphi)}\cong R^{(c)}$; we denote this isomorphism by $\varphi_{R^{(c)}}$.
Let $\varphi_S:S\rightarrow \frac{S}{\operatorname{ker}(\varphi)}$ be the projection map. Then $\varphi=\varphi_{R^{(c)}}\circ \varphi_S$.
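As a quick illustration (ours, not part of the original text), one can enumerate the degree-$c$ monomials of $R$ and confirm that their number is $d={n+c-1 \choose n-1}$, the number of algebra generators of $R^{(c)}$; the function name below is our own.

```python
from itertools import combinations_with_replacement
from math import comb

def veronese_generators(n, c):
    """Exponent vectors of all monomials of degree c in n variables,
    i.e. the algebra generators of the Veronese ring R^(c)."""
    gens = []
    for combo in combinations_with_replacement(range(n), c):
        a = [0] * n
        for i in combo:
            a[i] += 1
        gens.append(tuple(a))
    return gens

# For n = 3, c = 2 there are d = C(3+2-1, 3-1) = C(4,2) = 6 generators.
n, c = 3, 2
gens = veronese_generators(n, c)
assert len(gens) == comb(n + c - 1, n - 1) == 6
```

The enumeration via `combinations_with_replacement` is one standard way to list multisets of variables; any ordering compatible with $>_{lex}$ would serve equally well here.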
Recall that a monomial ideal $J$ in $R$ is called Borel if, whenever $m x_j\in J$ for a monomial $m$, then
$m x_i\in J$ for all $1\leq i < j$. The Borel ideals in $R^{(c)}$ are defined as
follows, due to Gasharov, Murai and Peeva.
\begin{definition}[Definition 3.7.1, \cite{gmp11}]
A monomial ideal $I$ in $R^{(c)}$ is called Borel, if the ideal $IR$ is Borel in $R$.
\end{definition}
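To make the exchange condition concrete, here is a small Python sketch (our illustration, not from \cite{gmp11}) that tests the Borel property for a monomial ideal of $R$ on all monomials up to a fixed degree, encoding monomials as exponent tuples.

```python
from itertools import combinations_with_replacement

def divides(g, m):
    return all(gi <= mi for gi, mi in zip(g, m))

def in_ideal(m, gens):
    return any(divides(g, m) for g in gens)

def monomials(n, deg):
    for combo in combinations_with_replacement(range(n), deg):
        a = [0] * n
        for i in combo:
            a[i] += 1
        yield tuple(a)

def is_borel_up_to(gens, n, max_deg):
    """Check the Borel exchange property: whenever a monomial m with
    m[j] > 0 lies in the ideal, replacing one x_j by x_i (i < j) must
    again land in the ideal; tested on all monomials up to max_deg."""
    for deg in range(1, max_deg + 1):
        for m in monomials(n, deg):
            if not in_ideal(m, gens):
                continue
            for j in range(n):
                if m[j] == 0:
                    continue
                for i in range(j):
                    swapped = list(m)
                    swapped[j] -= 1
                    swapped[i] += 1
                    if not in_ideal(tuple(swapped), gens):
                        return False
    return True

# (x1^2, x1*x2) is Borel in K[x1,x2]; (x2^2) is not, since x1*x2 is missing.
assert is_borel_up_to([(2, 0), (1, 1)], n=2, max_deg=4)
assert not is_borel_up_to([(0, 2)], n=2, max_deg=4)
```

Checking up to a bounded degree is, of course, only a finite test; it is meant as an illustration of the definition rather than a proof of Borel-ness.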
Let $J$ be a graded ideal in $R$. The generic initial ideal of $J$ with respect
to a term order $\tau$ is denoted and defined as $\operatorname{Gin}_{\tau}(J):=\operatorname{in}_{\tau}(g(J))$, where
$g$ is a generic element of $GL_n(K)$. Notice that $\operatorname{Gin}_{\tau}(J)$ is a Borel ideal.
For a graded ideal $J$ in $R$, $J^{(c)}$ denotes the contraction of $J$ to $R^{(c)}$.
\begin{lemma} \label{lem1}
Let $I$ be a graded ideal in $R^{(c)}$. Then $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ is a Borel ideal
in $R^{(c)}$.
\end{lemma}
\begin{proof}
Note that $(\operatorname{Gin}_{\tau}(IR))^{(c)}R$ and $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ have the same minimal monomial generators. Also we have
$$
\left((\operatorname{Gin}_{\tau}(IR))^{(c)}R\right)_j = \left\{
\begin{array}{ll}
\operatorname{Gin}_{\tau}(IR)_j & \mbox{ if } c|j \\
\mathfrak m^r \operatorname{Gin}_{\tau}(IR)_{cq} & \mbox{ if } j=cq+r, ~~0< r < c.
\end{array}
\right.
$$
Since $\operatorname{Gin}_{\tau}(IR)$ is Borel, for each monomial
$mx_u \in \operatorname{Gin}_{\tau}(IR)_j$ we have $mx_i \in \operatorname{Gin}_{\tau}(IR)_j$ for all $1 \leq i < u$. The same property holds for
each monomial in $\left((\operatorname{Gin}_{\tau}(IR))^{(c)}R\right)_j$ as well.
Hence $(\operatorname{Gin}_{\tau}(IR))^{(c)}R$ is Borel, and so $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ is a Borel ideal in $R^{(c)}$.
\end{proof}
\begin{lemma} \label{lem2}
Let $I\subset R^{(c)}$ be a graded ideal. Then $R^{(c)}/I$ and $R^{(c)}/(\operatorname{Gin}_{\tau}(IR))^{(c)}$ have the same Hilbert function.
\end{lemma}
\begin{proof}
For each $j$,
\begin{eqnarray*}
\dim_K((\operatorname{Gin}_{\tau}(IR))^{(c)}_j) &=& \dim_K(\operatorname{Gin}_{\tau}(IR)_{cj}) \\
&=& \dim_K((IR)_{cj}) ~~~(\operatorname{Gin}_{\tau}(IR) \mbox{ and } IR \mbox{ have the same Hilbert function}) \\
&=& \dim_K(I_{j}).
\end{eqnarray*}
\end{proof}
\begin{proposition} \label{pro1}
Let $I\subset R^{(c)}$ be a graded ideal. Then
$$\beta^{R^{(c)}}_{ij}\left(\frac{R^{(c)}}{I}\right)\leq
\beta^{R^{(c)}}_{ij}\left(\frac{R^{(c)}}{(\operatorname{Gin}_{\tau}(IR))^{(c)}}\right).$$
\end{proposition}
\begin{proof}
By the discussions 3.3 and 3.4 in \cite{gmp11}, $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ is a type B
deformation of $I$. Therefore the graded Betti numbers of $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ are at least as large as those of $I$.
\end{proof}
We now compute the Betti numbers of componentwise linear ideals in $R^{(c)}$. This shows that \cite[Corollary 3.7.5]{gmp11} holds more generally for componentwise linear ideals.
\begin{proposition} \label{pro2}
Let $I$ be a componentwise linear ideal in $R^{(c)}$. Then
$$\beta^{R^{(c)}}_{i,i+j}(I)= \beta^{R^{(c)}}_i(I_{\langle j \rangle})-
\beta^{R^{(c)}}_i(\mathfrak m^{(c)} I_{\langle j-1 \rangle}),$$
for all $i,j$.
\end{proposition}
\begin{proof}
We proceed by induction on the highest degree $t$ of a minimal generator of $I$. Suppose $t=1$.
Then $I$ is generated by homogeneous elements of degree one in $R^{(c)}$, so by
\cite[Lemma 3.7.2]{gmp11}, $I$ has a $1$-linear resolution over $R^{(c)}$. This implies that
$I$ is componentwise linear over $R^{(c)}$, and the statement is trivially true for $t=1$.
Assume $t >1$. Consider the short exact sequence
$$0 \rightarrow I_{\leq t-1} \rightarrow I \rightarrow
\frac{I_{\langle t \rangle}}{\mathfrak m^{(c)}I_{\langle t-1 \rangle}} \rightarrow 0,$$
where we use that $I/I_{\leq t-1}\cong I_{\langle t \rangle}/\mathfrak m^{(c)}I_{\langle t-1 \rangle}$. It induces the long exact sequence
$$\cdots \rightarrow Tor_{i+1}\left(K, \frac{I_{\langle t \rangle}}{\mathfrak m^{(c)}I_{\langle t-1 \rangle}}\right)_{i+j}
\rightarrow Tor_i\left(K, I_{\leq t-1}\right)_{i+j} \rightarrow Tor_i\left(K, I\right)_{i+j}
\rightarrow Tor_{i}\left(K, \frac{I_{\langle t \rangle}}{\mathfrak m^{(c)}I_{\langle t-1 \rangle}}\right)_{i+j} \rightarrow \cdots.$$
By the induction hypothesis we have
$$\beta^{R^{(c)}}_{i,i+j}(I_{\leq t-1})= \beta^{R^{(c)}}_i((I_{\leq t-1})_{\langle j \rangle}) -
\beta^{R^{(c)}}_i(\mathfrak m^{(c)} (I_{\leq t-1})_{\langle j-1 \rangle}),$$
for all $i,j$; in particular, $\beta^{R^{(c)}}_{i,i+j}(I_{\leq t-1})=0$ for all $j \geq t$, because $(I_{\leq t-1})_{\langle j \rangle}=\mathfrak m^{(c)}(I_{\leq t-1})_{\langle j-1 \rangle}$ for $j\geq t$. Therefore the above long exact sequence yields
$$Tor_i\left(K, I\right)_{i+j}
\cong Tor_{i}\left(K, \frac{I_{\langle t \rangle}}{\mathfrak m^{(c)}I_{\langle t-1 \rangle}}\right)_{i+j}
\mbox{ for all } j \geq t.$$
Since $\beta^{R^{(c)}}_{i,i+j}(I)=\beta^{R^{(c)}}_{i,i+j}(I_{\leq t-1})$ for all $i$ and all $j \leq t-1$,
the induction hypothesis gives the statement for $j \leq t-1$. Assume $j \geq t$.
Consider the short exact sequence
$$0 \rightarrow \mathfrak m^{(c)}I_{\langle t-1 \rangle} \rightarrow I_{\langle t \rangle} \rightarrow
\frac{I_{\langle t \rangle}}{\mathfrak m^{(c)}I_{\langle t-1 \rangle}}\rightarrow 0,$$
which induces the exact sequence in homology
\begin{eqnarray*}
Tor_{i+1}\left(K, \frac{I_{\langle t \rangle}}{\mathfrak m^{(c)}I_{\langle t-1 \rangle}}\right)_{i+j} &\rightarrow& Tor_i(K, \mathfrak m^{(c)}I_{\langle t-1 \rangle})_{i+j}
\rightarrow Tor_i(K, I_{\langle t \rangle})_{i+j} \rightarrow Tor_{i}\left(K, \frac{I_{\langle t \rangle}}{\mathfrak m^{(c)}I_{\langle t-1 \rangle}}\right)_{i+j} \\
&\rightarrow& Tor_{i-1}(K, \mathfrak m^{(c)}I_{\langle t-1 \rangle})_{i+j}.
\end{eqnarray*}
If $j=t$, then the Tor modules at the two ends vanish, because $I_{\langle t \rangle}$ and $\mathfrak m^{(c)}I_{\langle t-1 \rangle}$ have $t$-linear resolutions.
This implies that the statement is true for $j=t$. Assume $j > t$. Then we have that
$$Tor_i(K, I_{\langle t \rangle})_{i+j} \rightarrow Tor_{i}\left(K, \frac{I_{\langle t \rangle}}{\mathfrak m^{(c)}I_{\langle t-1 \rangle}}\right)_{i+j}
\rightarrow 0$$
is exact for all $i$ and $j>t$. Since $I_{\langle t \rangle}$ has $t$-linear resolution, we have
$Tor_i(K, I_{\langle t \rangle})_{i+j}=0$ for all $i$ and $j>t$. Therefore the above exact sequence gives that
$Tor_{i}\left(K, \frac{I_{\langle t \rangle}}{\mathfrak m^{(c)}I_{\langle t-1 \rangle}}\right)_{i+j}=0$ for all $i$ and $j>t$.
Combined with the isomorphism above, this implies that $Tor_i(K, I)_{i+j}=0$ for all $i$ and all $j > t$.
Thus $\beta^{R^{(c)}}_{i,i+j}(I)=0$ for all $i$ and all $j > t$. This concludes the proof, because
$\mathfrak m^{(c)}I_{\langle j-1 \rangle}=I_{\langle j \rangle}$ for $j > t$.
\end{proof}
\begin{lemma} \label{lem3}
Let $I$ be a componentwise linear ideal in $R^{(c)}$. Then $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ is Borel.
\end{lemma}
\begin{proof}
Fix $j$. Since $I$ is componentwise linear, $I_{\langle j \rangle}$ has a $j$-linear resolution; that is, $\operatorname{reg}(I_{\langle j \rangle})=j$.
Since $\operatorname{Gin}_{\tau}(I_{\langle j \rangle}R)$ is generated in the single degree $cj$ in $R$, we have
$\operatorname{reg}(\operatorname{Gin}_{\tau}(I_{\langle j \rangle}R))=cj$. Then, by \cite[Proposition 10]{er94},
$\operatorname{Gin}_{\tau}(I_{\langle j \rangle}R)_{cj}$ generates a Borel ideal in $R$. But
$$\operatorname{Gin}_{\tau}(I_{\langle j \rangle}R)_{cj}=\operatorname{Gin}_{\tau}((IR)_{\langle cj \rangle})_{cj}=
\operatorname{Gin}_{\tau}(IR)_{cj}.$$
Since $j$ is arbitrary, we conclude that $(\operatorname{Gin}_{\tau}(IR))^{(c)}= \displaystyle \bigoplus_{j} \operatorname{Gin}_{\tau}(IR)_{cj}$ is Borel.
\end{proof}
\begin{theorem} \label{thm1}
Assume char$(K)=0$. Then a graded ideal $I$ in $R^{(c)}$ is componentwise linear if and only if
$$\beta^{R^{(c)}}_{ij}\left(\frac{R^{(c)}}{I}\right) =
\beta^{R^{(c)}}_{ij}\left(\frac{R^{(c)}}{(\operatorname{Gin}_{\tau}(IR))^{(c)}}\right) ~~\mbox{ for all }i,j.$$
\end{theorem}
\begin{proof}
Assume $I$ is componentwise linear. By Proposition \ref{pro2}, it suffices to show that
\begin{center}
$\beta^{R^{(c)}}_{i}\left(I_{\langle j \rangle}\right) =
\beta^{R^{(c)}}_{i}\left(\left((\operatorname{Gin}_{\tau}(IR))^{(c)}\right)_{\langle j \rangle}\right)$ and \\
$\beta^{R^{(c)}}_{i}\left(\mathfrak m^{(c)}I_{\langle j-1 \rangle}\right) =
\beta^{R^{(c)}}_{i}\left(\left(\mathfrak m^{(c)}(\operatorname{Gin}_{\tau}(IR))^{(c)}\right)_{\langle j-1 \rangle}\right)$
\end{center}
for all $i,j$.
We have that $$\left((\operatorname{Gin}_{\tau}(IR))^{(c)}\right)_{\langle j \rangle}=
\left(\operatorname{Gin}_{\tau}(IR)\right)_{\langle cj \rangle}=\left(\operatorname{Gin}_{\tau}((IR)_{\langle cj \rangle})\right)^{(c)}
=\left(\operatorname{Gin}_{\tau}(I_{\langle j \rangle}R)\right)^{(c)}.$$
Since $\left(\operatorname{Gin}_{\tau}(I_{\langle j \rangle}R)\right)^{(c)}$ is generated in a single
degree, by \cite[Theorem 3.7.4]{gmp11} it has a $j$-linear resolution.
This implies that $I_{\langle j \rangle}$ and $\left((\operatorname{Gin}_{\tau}(IR))^{(c)}\right)_{\langle j \rangle}$
have the same Betti numbers, because they have the same Hilbert function (cf. Lemma \ref{lem2}). Since $\mathfrak m^{(c)}I_{\langle j-1 \rangle}$ and
$\left(\mathfrak m^{(c)}(\operatorname{Gin}_{\tau}(IR))^{(c)}\right)_{\langle j-1 \rangle}$ have $j$-linear
resolutions, the same argument shows that they too have the same Betti numbers.
Conversely, assume that $I$ and $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ have the same graded Betti numbers.
Let $r=\operatorname{max}(I)-\min(I)$, where $\operatorname{max}(I)$ (resp.\ $\min(I)$) is the maximum (resp.\ minimum) degree of a minimal
generator of $I$. Suppose $r=0$. Then $I$ is generated in a single degree, and hence so is
$(\operatorname{Gin}_{\tau}(IR))^{(c)}$.
By \cite[Theorem 3.7.4]{gmp11}, $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ has a linear resolution,
and then, by Proposition \ref{pro1}, so does $I$. Assume
$r\geq 1$ and let $s=\min(I)$. First we show that $I_{\langle s \rangle}$ has an $s$-linear
resolution. Suppose, to the contrary, that it does not. Then
$\beta^{R^{(c)}}_{i, i+j}\left(I_{\langle s \rangle}\right) \neq 0$
for some $i$ and some $j \neq s$. By Proposition \ref{pro1},
$\beta^{R^{(c)}}_{i, i+j}\left(\left((\operatorname{Gin}_{\tau}(IR))^{(c)}\right)_{\langle s \rangle}\right) \neq 0$
for some $j \neq s$, and hence
$\beta^{R^{(c)}}_{i, i+j}\left((\operatorname{Gin}_{\tau}(I_{\langle s \rangle}R))^{(c)}\right) \neq 0$
for some $j \neq s$. This gives
$\operatorname{reg}((\operatorname{Gin}_{\tau}(I_{\langle s \rangle}R))^{(c)}) > s$, a contradiction,
since $\operatorname{reg}((\operatorname{Gin}_{\tau}(I_{\langle s \rangle}R))^{(c)}) = s$ by \cite[Theorem 3.7.4]{gmp11}.
For the ideal $I_{\geq s+1}$, we have
$(\operatorname{Gin}_{\tau}(I_{\geq s+1}R))^{(c)}=\left((\operatorname{Gin}_{\tau}(IR))^{(c)}\right)_{\geq s+1}$, and
$I_{\geq s+1}$ and $\left((\operatorname{Gin}_{\tau}(IR))^{(c)}\right)_{\geq s+1}$ have the same graded Betti numbers.
Then, by induction on $r$, $I_{\geq s+1}$ is componentwise linear, and hence so is $I$.
\end{proof}
\section{The componentwise linearity of ideals in $R^{(c)}$} \label{sec3}
Let $y_1,\ldots,y_d$ be a generic sequence of linear forms in $R^{(c)}$. Let $M$ be a
finitely generated graded $R^{(c)}$-module. Define $A_p= \frac{((y_1,\ldots,y_{p-1})M:y_p)}{(y_1,\ldots,y_{p-1})M}$
for $1\leq p \leq d$. Let $\alpha_p(M):= \dim_K(A_p)$. Let
$H_i(p):=Tor_i^{R^{(c)}}\left(\frac{R^{(c)}}{(y_1,\ldots,y_p)}, M\right)$, for $1\leq p\leq d$. Let $h_i(p):= \dim_K(H_i(p))$. From the short exact sequence
$$0\rightarrow \frac{R^{(c)}}{(y_1,\ldots,y_{p-1})}\xrightarrow{\pm y_p}
\frac{R^{(c)}}{(y_1,\ldots,y_{p-1})} \rightarrow \frac{R^{(c)}}{(y_1,\ldots,y_p)} \rightarrow 0,$$
we have the long exact sequence
\begin{equation} \label{eq2}
H_i(p-1)(-1)\xrightarrow{\varphi_{i,p-1}} H_i(p-1) \rightarrow H_i(p) \rightarrow H_{i-1}(p-1)(-1)
\xrightarrow{\varphi_{i-1,p-1}} \cdots
\end{equation}
\begin{equation*}
\rightarrow H_0(p-1)(-1)\xrightarrow{\varphi_{0,p-1}} H_0(p-1)
\rightarrow H_0(p) \rightarrow 0,
\end{equation*}
where $\varphi_{i,p-1}$ is the map multiplication by $\pm y_p$. Note that $A_p=\operatorname{ker}(\varphi_{0,p-1})$,
for $1\leq p \leq d$. Then we have
\begin{equation} \label{eq3}
h_1(p)=h_1(p-1)+\alpha_p(M)-\dim_K(Im(\varphi_{1,p-1}))
\end{equation}
\begin{equation*}
h_i(p)=h_i(p-1)+h_{i-1}(p-1)-\dim_K(Im(\varphi_{i,p-1}))-\dim_K(Im(\varphi_{i-1,p-1}))
\end{equation*}
for all $p$ and for all $i\geq 2$.
Let $\beta^{R^{(c)}}_{ijp}(M):=\dim_K(H_i(p)_j)$. If we take $p=d$, then these numbers coincide with the graded Betti numbers of $M$. Now the exact sequence \eqref{eq2}, implies that
\begin{equation}\label{eq4}
\beta^{R^{(c)}}_{ijp}(M)=\beta^{R^{(c)}}_{ij(p-1)}(M)+\beta^{R^{(c)}}_{(i-1)(j-1)(p-1)}(M)-
\dim_K(Im(\varphi_{i,p-1})_j)-\dim_K(Im(\varphi_{i-1,p-1})_j),
\end{equation}
\begin{equation*}
\beta^{R^{(c)}}_{1jp}(M)=\beta^{R^{(c)}}_{1j(p-1)}(M)+\beta^{R^{(c)}}_{0(j-1)(p-1)}(M)-
\beta^{R^{(c)}}_{0j(p-1)}(M)+\beta^{R^{(c)}}_{0jp}(M),
\end{equation*}
for all $i\geq 2,$ for all $j$, $1\leq p \leq d$.
We now state a result on upper bounds for the Betti numbers of a graded module over $R^{(c)}$. Using the above equations, one derives analogues of formulas proved by Conca, Herzog and Hibi in \cite{chh04}. The proof is almost identical to that of
\cite[Proposition 1.1]{chh04}, so we omit it.
\begin{proposition} \label{pro3}
For $1\leq i \leq p$, let $A_{i,p}=\{(a,b)\in \mathbb Z_+^2~|~1\leq b \leq p-1
\mbox{ and } \operatorname{max}(i-p+b,1)\leq a \leq i\}$. Let $M$ be a finitely generated graded $R^{(c)}$-module.
Then
\begin{enumerate}
\item $h_i(p) \leq \displaystyle \sum_{j=1}^{p-i+1}{p-j \choose i-1}\alpha_j(M)$, for all $i, p \geq 1$.
\item For $i,p\geq 1$, the following are equivalent.
\subitem(i) $h_i(p)=\displaystyle \sum_{j=1}^{p-i+1}{p-j \choose i-1}\alpha_j(M)$.
\subitem(ii) $\varphi_{a,b}=0$ for all $(a,b)\in A_{i,p}$.
\subitem(iii) $\mathfrak m^{(c)}H_a(b)=0$ for all $(a,b)\in A_{i,p}$.
\end{enumerate}
\end{proposition}
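To illustrate part (1) and the equality case (a numerical sketch of ours, not from \cite{chh04}): when all the maps $\varphi_{a,b}$ vanish, \eqref{eq3} reduces to $h_1(p)=h_1(p-1)+\alpha_p(M)$ and $h_i(p)=h_i(p-1)+h_{i-1}(p-1)$ for $i\geq 2$, and the stated upper bound is then attained. The following checks this Pascal-type identity for arbitrarily chosen values of $\alpha_j$.

```python
from math import comb

def h_table(alpha, imax):
    """h[i][p] from the recursion with all maps phi_{a,b} = 0:
    h_1(p) = h_1(p-1) + alpha_p,  h_i(p) = h_i(p-1) + h_{i-1}(p-1), i >= 2."""
    d = len(alpha)                      # alpha[p-1] plays the role of alpha_p(M)
    h = [[0] * (d + 1) for _ in range(imax + 1)]
    for p in range(1, d + 1):
        h[1][p] = h[1][p - 1] + alpha[p - 1]
        for i in range(2, imax + 1):
            h[i][p] = h[i][p - 1] + h[i - 1][p - 1]
    return h

def bound(alpha, i, p):
    """Right-hand side of part (1): sum_{j=1}^{p-i+1} C(p-j, i-1) alpha_j."""
    return sum(comb(p - j, i - 1) * alpha[j - 1] for j in range(1, p - i + 2))

alpha = [3, 1, 4, 1, 5]                 # arbitrary test values for alpha_j(M)
h = h_table(alpha, imax=4)
assert all(h[i][p] == bound(alpha, i, p)
           for i in range(1, 5) for p in range(1, len(alpha) + 1))
```

The assertion verifies, for these values, that the recursion with vanishing maps produces exactly the binomial sum of the proposition; the identity behind it is Pascal's rule ${p-j \choose i-1}={p-1-j \choose i-1}+{p-1-j \choose i-2}$.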
To state the next result we need to define a proper $M$-sequence in $R^{(c)}$.
\begin{definition}
A sequence of homogeneous elements $f_1,\ldots,f_s$ in $R^{(c)}$ is called a proper $M$-sequence if $f_{p+1}H_i(p)=0$ for all $i \geq 1$ and $p=0,1, \ldots, s-1$.
\end{definition}
This definition is an extension of that in \cite{hsv83}.
Taking $p=d$ in Proposition \ref{pro3}, we obtain the following.
\begin{corollary} \label{cor3}
\begin{enumerate}
\item $\beta^{R^{(c)}}_i(M)\leq \displaystyle \sum_{j=1}^{d-i+1}{d-j \choose i-1}\alpha_j(M)$ for all $i\geq 1$.
\item The following are equivalent:
\subitem(i) $M$ has maximal Betti numbers. That is,
$\beta^{R^{(c)}}_i(M)=\displaystyle \sum_{j=1}^{d-i+1}{d-j \choose i-1}\alpha_j(M)$
for all $i\geq 1$.
\subitem(ii) $\mathfrak m^{(c)}H_a(b)=0$ for all $b$ and for all $a\geq 1$.
\subitem(iii) $y_1,\ldots,y_d$ is a proper $M$-sequence.
\end{enumerate}
\end{corollary}
Now we prove an analogue of \cite[Lemma 1.2]{c03}, which is needed in the next theorem.
\begin{lemma} \label{lem4}
Let $I\subset R^{(c)}$ be a graded ideal. Let $y_1,\ldots,y_d$ be a generic sequence of
homogeneous forms of degree one in $R^{(c)}$. Then $\frac{R^{(c)}}{I+(y_1,\ldots,y_p)}$ and
$\frac{R^{(c)}}{(y_1,\ldots,y_p)+\operatorname{Gin}_{\tau}(IR)^{(c)}}$ have the same Hilbert function, for $0\leq p \leq d$.
\end{lemma}
\begin{proof}
Let $f_1,\ldots,f_d \in S$ be homogeneous polynomials of degree one such that $\varphi((f_1,\ldots,f_p))=(y_1,\ldots,y_p)$.
Let $g\in GL_n(K)$. Then $g$ induces a change of variables on $R$, $g:R \rightarrow R$. Let $g_{R^{(c)}}:R^{(c)}\rightarrow R^{(c)}$ denote the restriction of $g$ to $R^{(c)}$. Let $g_{S}$ be the
change of variables in the polynomial ring $S$ induced by $g_{R^{(c)}}$, defined so that $g_S(t_{d-p+1})= f_1, \ldots, g_S(t_d)=f_p$ (see the discussion 3.3 in \cite{gmp11}). Then we have $\varphi \circ g_S=g \circ \varphi$. Now
\begin{eqnarray*}
g_S^{-1}\circ \varphi^{-1}(IR+(y_1,\ldots,y_p)) &=& g_S^{-1}(\varphi^{-1}(IR)+\varphi^{-1}((y_1,\ldots,y_p))) \\
&=& g_S^{-1}(\varphi^{-1}(IR)+(f_1,\ldots,f_p)) \\
&=& g_S^{-1}\circ\varphi^{-1}(IR) +(t_{d-p+1}, \ldots,t_d).
\end{eqnarray*}
Now take the initial ideal with respect to a term order in $S$ that is compatible with $\tau$ in $R$. We denote this term order on $S$ also by $\tau$. Thus
by \cite[Lemma 3.2.1]{gmp11} we have
$$\mbox{in}_{\tau}(g_S^{-1}\circ \varphi^{-1}(IR+(y_1,\ldots,y_p)))=\mbox{in}_{\tau}(g_S^{-1}\circ\varphi^{-1}(IR) +(t_{d-p+1}, \ldots,t_d))=\mbox{in}_{\tau}(g_S^{-1}\circ\varphi^{-1}(IR))+(t_{d-p+1}, \ldots,t_d).$$
This implies that
$$g_S^{-1}\circ \varphi^{-1}(\mbox{in}_{\tau}(IR+(y_1,\ldots,y_p)))=(g_S^{-1}\circ\varphi^{-1}(\mbox{in}_{\tau}(IR)))+(t_{d-p+1}, \ldots,t_d).$$
Choosing $g$ generic, $g_S$ is generic as well. Then we have
$$g_S^{-1}\circ \varphi^{-1}(\operatorname{Gin}_{\tau}(IR+(y_1,\ldots,y_p)))=(g_S^{-1}\circ\varphi^{-1}(\operatorname{Gin}_{\tau}(IR)))+(t_{d-p+1}, \ldots,t_d).$$
Since $g_S$ is an isomorphism, we have that
$$\varphi^{-1}(\operatorname{Gin}_{\tau}(IR+(y_1,\ldots,y_p)))=\varphi^{-1}(\operatorname{Gin}_{\tau}(IR))+(f_1, \ldots,f_p).$$
Applying $\varphi$ both sides, we get that $\operatorname{Gin}_{\tau}(IR+(y_1,\ldots,y_p))=\operatorname{Gin}_{\tau}(IR)+(y_1,\ldots,y_p)$.
Since $\operatorname{Gin}_{\tau}(IR+(y_1,\ldots,y_p))$ and $IR+(y_1,\ldots,y_p)$ have the same Hilbert function, it follows that
$IR+(y_1,\ldots,y_p)$ and $\operatorname{Gin}_{\tau}(IR)+(y_1,\ldots,y_p)$ have the same Hilbert function. Taking the contraction of these ideals to $R^{(c)}$, the contracted ideals still have the same Hilbert function. Notice that
$(IR+(y_1,\ldots,y_p))^{(c)}=I+(y_1,\ldots,y_p)$ and $(\operatorname{Gin}_{\tau}(IR)+(y_1,\ldots,y_p))^{(c)}=\operatorname{Gin}_{\tau}(IR)^{(c)}+(y_1,\ldots,y_p)$.
\end{proof}
Now we prove a sufficient condition for the componentwise linearity of ideals in $R^{(c)}$.
\begin{theorem} \label{thm2}
Assume char$(K)=0$. Let $I \subset R^{(c)}$ be a graded ideal and $M=\frac{R^{(c)}}{I}$.
Let $y_1,\ldots,y_d$ be a generic sequence of homogeneous forms of degree one in $R^{(c)}$. Suppose $y_1,\ldots,y_d$ is a proper $M$-sequence. Then $I$ is componentwise linear.
\end{theorem}
\begin{proof}
Assume $y_1,\ldots,y_d$ is a proper $M$-sequence.
Since $R^{(c)}$ is Koszul, $K=\frac{R^{(c)}}{\mathfrak m^{(c)}}$ has a linear free resolution over $R^{(c)}$.
Note that
$\mathfrak m^{(c)}$ is generated by $y_1,\ldots,y_d$ and that $H_i(d)=Tor_i^{R^{(c)}}(K, M)$ for all $i$.
Since $y_1,\ldots,y_d$ is a proper $M$-sequence, we have $\operatorname{Im}(\varphi_{i,p-1})=0$ for all
$i\geq 1$ and all $p$. Then, by the formulas in \eqref{eq4}, we get that
$$\beta^{R^{(c)}}_{ijp}(M)=\beta^{R^{(c)}}_{ij(p-1)}(M)+\beta^{R^{(c)}}_{(i-1)(j-1)(p-1)}(M)$$
and $\beta^{R^{(c)}}_{1jp}(M)=\beta^{R^{(c)}}_{1j(p-1)}(M)+\beta^{R^{(c)}}_{0(j-1)(p-1)}(M)-
\beta^{R^{(c)}}_{0j(p-1)}(M)+\beta^{R^{(c)}}_{0jp}(M)$, for all $i\geq 2,$ for all $j$, $1\leq p \leq d$.
Thus $\beta^{R^{(c)}}_{ijp}(M)$ are computed from $\beta^{R^{(c)}}_{0jp}(M)$,
for all $j,p$. We now prove that $\beta^{R^{(c)}}_{0jp}(M)=\beta^{R^{(c)}}_{0jp}\left(\frac{R^{(c)}}{(\operatorname{Gin}_{\tau}(IR))^{(c)}}\right)$, for all $j,p$. Since
$$H_0(p)=\frac{R^{(c)}}{I+(y_1,\ldots,y_p)},~~~~~~Tor^{R^{(c)}}_0\left(\frac{R^{(c)}}{(y_1,\ldots,y_p)}, \frac{R^{(c)}}{(\operatorname{Gin}_{\tau}(IR))^{(c)}}\right)=\frac{R^{(c)}}{(\operatorname{Gin}_{\tau}(IR))^{(c)}+(y_1,\ldots,y_p)},$$
Lemma \ref{lem4} shows that these two modules have the same Hilbert function. This implies that
\begin{eqnarray*}
\beta^{R^{(c)}}_{0jp}(M)=\dim_K(H_0(p)_j)
&=& \dim_K\left(Tor^{R^{(c)}}_0\left(\frac{R^{(c)}}{(y_1,\ldots,y_p)}, \frac{R^{(c)}}{(\operatorname{Gin}_{\tau}(IR))^{(c)}}\right)_j\right) \\
&=&
\beta^{R^{(c)}}_{0jp}\left(\frac{R^{(c)}}{(\operatorname{Gin}_{\tau}(IR))^{(c)}}\right),
\end{eqnarray*}
for all $j,p$. Thus $\beta^{R^{(c)}}_{ijp}(I)=\beta^{R^{(c)}}_{ijp}((\operatorname{Gin}_{\tau}(IR))^{(c)})$ for all $i,j,p$. In particular, taking $p=d$, we get that $I$ and $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ have the same graded Betti numbers. Therefore, by Theorem \ref{thm1}, $I$ is componentwise linear.
\end{proof}
Below we give a necessary condition for the componentwise linearity of ideals in $R^{(c)}$.
\begin{theorem} \label{thm3}
Assume char$(K)=0$. Let $y_1,\ldots,y_d$ be a generic sequence of homogeneous forms of degree one in $R^{(c)}$. Let $I \subset R^{(c)}$ be a graded ideal. Suppose $I$ is componentwise linear. Then $\mathfrak m^{(c)}H_i(d)=0$ for all $i\geq 1$.
\end{theorem}
\begin{proof}
Assume $I$ is componentwise linear. First suppose that $I$ is generated in a single degree $s$. Since $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ is Borel, the formulas for its Betti numbers in \cite[Theorem 3.7.4]{gmp11} show that $(\operatorname{Gin}_{\tau}(IR))^{(c)}$ has an $s$-linear resolution. This implies that
$Tor^{R^{(c)}}_i\left(K, \frac{R^{(c)}}{(\operatorname{Gin}_{\tau}(IR))^{(c)}}\right)$ is concentrated in the single degree $s+i-1$. Then, by Proposition \ref{pro1}, $Tor^{R^{(c)}}_i\left(K, \frac{R^{(c)}}{I}\right)$ is also concentrated in the single degree $s+i-1$. This implies that $\mathfrak m^{(c)}H_i(d)=0$.
Now assume that $I$ is generated in distinct degrees. Let ${\bf F_{\bullet}}$ be a minimal graded free resolution of $K$ over $R^{(c)}$, and let $\phi_i:F_i\otimes \frac{R^{(c)}}{I}\rightarrow F_{i-1}\otimes \frac{R^{(c)}}{I}$ be the $i^{th}$ map in the complex ${\bf F_{\bullet}}\otimes \frac{R^{(c)}}{I}$. Let $a\in Tor_i^{R^{(c)}}\left(K, \frac{R^{(c)}}{I}\right)$ be a homogeneous element of degree, say, $s$. Then $a=\overline{f}$ for some $f\in F_i\otimes \frac{R^{(c)}}{I}$, and $\phi_i(f)\in IF_{i-1}$ is a homogeneous element of degree $s$. This implies that $\phi_i(f)\in I_kF_{i-1}$, where $k=s-i+1$. Set $J=I_{\langle k \rangle}$; then $\phi_i(f)\in JF_{i-1}$.
Since $J$ has a linear resolution, the single-degree case above gives $\mathfrak m^{(c)}f \in \mbox{Im}(\phi_{i+1})+IF_i$, so $\mathfrak m^{(c)}a=0$. Thus $\mathfrak m^{(c)}H_i(d)=0$.
\end{proof}
| {
"timestamp": "2021-06-18T02:11:05",
"yymm": "2106",
"arxiv_id": "2106.09262",
"language": "en",
"url": "https://arxiv.org/abs/2106.09262",
"abstract": "In this article, we study the componentwise linear ideals in the Veronese subrings of $R=K[x_1,\\ldots,x_n]$. If char$(K)=0$, then we give a characterization for graded ideals in the $c^{th}$ Veronese ring $R^{(c)}$ to be componentwise linear. This characterization is an analogue of that over $R$ due to Aramova, Herzog and Hibi in \\cite{ah00} and Conca, Herzog and Hibi in \\cite{chh04}.",
"subjects": "Commutative Algebra (math.AC)",
"title": "Componentwise linear ideals in Veronese rings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9793540740815275,
"lm_q2_score": 0.8376199552262967,
"lm_q1q2_score": 0.8203265156828604
} |
https://arxiv.org/abs/2203.03832 | The splitting algorithms by Ryu, by Malitsky-Tam, and by Campoy applied to normal cones of linear subspaces converge strongly to the projection onto the intersection | Finding a zero of a sum of maximally monotone operators is a fundamental problem in modern optimization and nonsmooth analysis. Assuming that the resolvents of the operators are available, this problem can be tackled with the Douglas-Rachford algorithm. However, when dealing with three or more operators, one must work in a product space with as many factors as there are operators. In groundbreaking recent work by Ryu and by Malitsky and Tam, it was shown that the number of factors can be reduced by one. A similar reduction was achieved recently by Campoy through a clever reformulation originally proposed by Kruger. All three splitting methods guarantee weak convergence to some solution of the underlying sum problem; strong convergence holds in the presence of uniform monotonicity.In this paper, we provide a case study when the operators involved are normal cone operators of subspaces and the solution set is thus the intersection of the subspaces. Even though these operators lack strict convexity, we show that striking conclusions are available in this case: strong (instead of weak) convergence and the solution obtained is (not arbitrary but) the projection onto the intersection. Numerical experiments to illustrate our results are also provided. | \section{Introduction}
Throughout the paper, we assume that
\begin{equation}
\text{$X$ is a real Hilbert space}
\end{equation}
with inner product $\scal{\cdot}{\cdot}$ and induced norm $\|\cdot\|$.
Let $A_1,\ldots,A_n$ be maximally monotone operators on $X$.
(See, e.g., \cite{BC2017} for background on maximally monotone operators.)
One central problem in modern optimization and nonsmooth analysis asks to
\begin{equation}
\label{e:genprob}
\text{find $x\in X$ such that $0\in(A_1+\cdots+A_n)x$.}
\end{equation}
In general, solving \cref{e:genprob} may be quite hard.
Luckily, in many interesting cases, we have access to the
\emph{firmly nonexpansive resolvents} $J_{A_i} := (\ensuremath{\operatorname{Id}}+A_i)^{-1}$
which opens the door to employ splitting algorithms to solve
\cref{e:genprob}.
The most famous instance is the \emph{Douglas-Rachford algorithm} \cite{DougRach}
whose importance for this problem was brought to light in the seminal paper
by Lions and Mercier \cite{LionsMercier}. However,
the Douglas-Rachford algorithm requires that $n=2$; if $n\geq 3$,
one may employ the Douglas-Rachford algorithm to a reformulation
in the product space $X^n$ \cite[Section~2.2]{Comb09}.
In recent breakthrough work by Ryu \cite{Ryu}, it was shown
that for $n=3$ one may formulate an algorithm that works
in $X^2$ rather than $X^3$.
We will refer to this method as \emph{Ryu's algorithm}.
Very recently, Malitsky and Tam proposed in \cite{MT}
an algorithm for general $n\geq 3$
that is different from Ryu's and that operates in $X^{n-1}$.
(In a certain technical sense, no such algorithms exist in product spaces
featuring fewer than $n-1$ factors.
See also \cite{Luis} for an extension of the
Malitsky-Tam algorithm to handle linear operators.)
We will review these algorithms as well as a recent (Douglas-Rachford based) algorithm
introduced by Campoy \cite{Campoy} in \cref{sec:known} below.
These three algorithms are known to produce \emph{some} solution
to \cref{e:genprob} via a sequence that converges \emph{weakly}.
Strong convergence holds in the presence of uniform monotonicity.
\emph{The aim of this paper is to provide a case study for the situation
when the maximally monotone operators $A_i$ are normal cone operators
of closed linear subspaces $U_i$ of $X$.}
These operators are not even strictly monotone.
Our main results show that the splitting algorithms by Ryu,
by Malitsky-Tam, and by Campoy actually produce a sequence
that converges \emph{strongly}\,! We are also able to \emph{identify the limit} to
be the \emph{projection} onto the intersection
$U_1\cap\cdots\cap U_n$\,!
The proofs of these results rely on the explicit identification of the fixed point
set of the underlying operators.
Moreover, a standard translation technique gives the same result for
\emph{affine} subspaces of $X$ provided their intersection is nonempty.
The paper is organized as follows.
In \cref{sec:aux}, we collect various auxiliary results for later use.
The known convergence results on Ryu splitting, on
Malitsky-Tam splitting, and on Campoy splitting are reviewed in \cref{sec:known}.
Our main results are presented in \cref{sec:main}.
Matrix representations of the various operators involved
are provided in \cref{sec:matrix}.
These are useful for our numerical experiments in \cref{sec:numexp}.
Finally, we offer some concluding remarks in \cref{sec:end}.
The notation employed in this paper is
standard and follows largely \cite{BC2017}.
When $z=x+y$ and $x\perp y$, then we also write
$z=x\oplus y$ to stress this fact.
Analogously for the Minkowski sum $Z=X+Y$, we
write $Z= X\oplus Y$ as well as $P_Z = P_X\oplus P_Y$ for the
associated projection provided that $X\perp Y$.
\section{Auxiliary results}
\label{sec:aux}
In this section, we collect useful properties of projection operators
and results on iterating linear/affine nonexpansive operators.
We start with projection operators.
\subsection{Projections}
\begin{fact}
\label{f:orthoP}
Suppose $U$ and $V$ are nonempty closed convex subsets of $X$ such that
$U\perp V$. Then
$U\oplus V$ is a nonempty closed subset of $X$ and
\begin{equation}
P_{U\oplus V} = P_U\oplus P_V.
\end{equation}
\end{fact}
\begin{proof}
See \cite[Proposition~29.6]{BC2017}.
\end{proof}
Here is a well-known illustration of \cref{f:orthoP} which
we will use repeatedly in the paper (sometimes without explicit mention).
\begin{example}
\label{ex:perp}
Suppose $U$ is a closed linear subspace of $X$.
Then
\begin{equation}
P_{U^\perp} = \ensuremath{\operatorname{Id}}-P_U.
\end{equation}
\end{example}
\begin{proof}
The orthogonal complement $V := U^\perp$ satisfies $U\perp V$
and also $U+V=X$; thus $P_{U+V}=\ensuremath{\operatorname{Id}}$ and the result follows.
\end{proof}
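As a numerical illustration of \cref{ex:perp} (ours, not part of the paper), the sketch below builds $P_U$ from a basis matrix $A$ of $U$ via the Moore-Penrose inverse and checks that $\ensuremath{\operatorname{Id}}-P_U$ is indeed the orthogonal projection onto $U^\perp$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))        # columns span a 2-dim subspace U of R^5
P_U = A @ np.linalg.pinv(A)            # orthogonal projection onto U = ran(A)
P_Uperp = np.eye(5) - P_U              # claimed projection onto U^perp

# P_Uperp is idempotent, self-adjoint, and annihilates U,
# so it is the orthogonal projection onto U^perp.
assert np.allclose(P_Uperp @ P_Uperp, P_Uperp)
assert np.allclose(P_Uperp, P_Uperp.T)
assert np.allclose(P_Uperp @ A, 0)
```

The construction $P_U=AA^\dagger$ used here is exactly the identity of \cref{f:Pran} below, applied to the basis matrix $A$.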
\begin{fact}[\bf Anderson-Duffin]
\label{f:AD}
Suppose that $X$ is finite-dimensional and that
$U,V$ are two linear subspaces of $X$.
Then
\begin{equation}
P_{U\cap V} = 2P_U(P_U+P_V)^\dagger P_V,
\end{equation}
where ``$^\dagger$'' denotes the Moore-Penrose inverse of a matrix.
\end{fact}
\begin{proof}
See, e.g., \cite[Corollary~25.38]{BC2017} or
the original \cite{AD}.
\end{proof}
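The Anderson-Duffin formula is easy to test numerically. In the sketch below (our illustration, not part of the paper), $U$ and $V$ are three-dimensional subspaces of $\mathbb R^6$ built to share one common direction; generically this forces $U\cap V$ to be exactly the span of that direction, which gives an independent way to compute $P_{U\cap V}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
common = rng.standard_normal((n, 1))   # one direction shared by U and V
BU = np.hstack([common, rng.standard_normal((n, 2))])   # basis of U
BV = np.hstack([common, rng.standard_normal((n, 2))])   # basis of V
P = lambda B: B @ np.linalg.pinv(B)    # projection onto the column span
PU, PV = P(BU), P(BV)

# Anderson-Duffin: P_{U∩V} = 2 P_U (P_U + P_V)^† P_V.
P_cap = 2 * PU @ np.linalg.pinv(PU + PV) @ PV

# Generically U ∩ V = span(common), so compare with that projection.
P_direct = P(common)
assert np.allclose(P_cap, P_direct, atol=1e-6)
```

The dimension count $\dim(U\cap V)=\dim U+\dim V-\dim(U+V)=3+3-5=1$ holds with probability one for this random construction, which is why the comparison with $P(\texttt{common})$ is valid.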
\begin{corollary}
\label{c:AD3}
Suppose that $X$ is finite-dimensional and that
$U,V,W$ are three linear subspaces of $X$.
Then
\begin{equation}
P_{U\cap V\cap W} = 4P_U(P_U+P_V)^\dagger P_V\big(2P_U(P_U+P_V)^\dagger P_V+P_W\big)^\dagger P_W.
\end{equation}
\end{corollary}
\begin{proof}
Use \cref{f:AD} to find $P_{U\cap V}$,
and then use \cref{f:AD} again on $(U\cap V,W)$.
\end{proof}
\begin{corollary}
\label{c:ADsum}
Suppose that $X$ is finite-dimensional and that $U,V$ are two linear
subspaces of $X$.
Then
\begin{subequations}
\begin{align}
P_{U+V} &= \ensuremath{\operatorname{Id}}-2P_{U^\perp}(P_{U^\perp}+P_{V^\perp})^\dagger P_{V^\perp}\\
&=\ensuremath{\operatorname{Id}}-2(\ensuremath{\operatorname{Id}}-P_U)\big(2\ensuremath{\operatorname{Id}}-P_U-P_V \big)^\dagger(\ensuremath{\operatorname{Id}}-P_V).
\end{align}
\end{subequations}
\end{corollary}
\begin{proof}
Indeed, $U+V = (U^\perp\cap V^\perp)^\perp$ and so
$P_{U+V} = \ensuremath{\operatorname{Id}} - P_{U^\perp\cap V^\perp}$.
Now apply \cref{f:AD} to $(U^\perp,V^\perp)$
followed by \cref{ex:perp}.
\end{proof}
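A numerical spot-check of the second formula in \cref{c:ADsum} (our illustration; the subspaces and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
I = np.eye(n)
P = lambda B: B @ np.linalg.pinv(B)    # projection onto the column span
BU = rng.standard_normal((n, 2))       # basis of U
BV = rng.standard_normal((n, 1))       # basis of V
PU, PV = P(BU), P(BV)

# P_{U+V} = Id - 2 (Id - P_U)(2 Id - P_U - P_V)^† (Id - P_V).
P_sum = I - 2 * (I - PU) @ np.linalg.pinv(2 * I - PU - PV) @ (I - PV)

# U + V is spanned by the combined bases, which yields P_{U+V} directly.
assert np.allclose(P_sum, P(np.hstack([BU, BV])), atol=1e-6)
```

The direct computation via the stacked basis matrix is the natural cross-check, since $U+V=\ensuremath{{\operatorname{ran}}\,}[\,BU\;BV\,]$.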
\begin{fact}
\label{f:Pran}
Let $Y$ be a real Hilbert space, and
let $A\colon X\to Y$ be a continuous linear operator with closed range.
Then
\begin{equation}
P_{\ensuremath{{\operatorname{ran}}\,} A} = AA^\dagger.
\end{equation}
\end{fact}
\begin{proof}
See, e.g., \cite[Proposition~3.30(ii)]{BC2017}.
\end{proof}
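The identity of \cref{f:Pran} can be verified numerically as well (our sketch; the rank and matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 4))  # rank-3, 6x4
P = A @ np.linalg.pinv(A)               # candidate for P_{ran A}

assert np.allclose(P @ P, P)            # idempotent
assert np.allclose(P, P.T)              # self-adjoint
assert np.allclose(P @ A, A)            # fixes ran(A)
assert np.linalg.matrix_rank(P) == 3    # range is 3-dimensional
```

The four assertions together characterize $P$ as the orthogonal projection onto $\ensuremath{{\operatorname{ran}}\,} A$; note that $A$ is deliberately rank-deficient, so the Moore-Penrose inverse (rather than an ordinary inverse) is essential here.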
\subsection{Linear (and affine) nonexpansive iterations}
We now turn to results on iterating linear or affine nonexpansive operators.
\begin{fact}
\label{f:convasymp}
Let $L\colon X\to X$ be linear and nonexpansive,
and let $x\in X$.
Then
\begin{equation}
L^kx \to P_{\ensuremath{\operatorname{Fix}} L}(x)
\quad\Leftrightarrow\quad
L^kx - L^{k+1}x \to 0.
\end{equation}
\end{fact}
\begin{proof}
See \cite[Proposition~4]{Baillon},
\cite[Theorem~1.1]{BBR},
\cite[Theorem~2.2]{BDHP}, or
\cite[Proposition~5.28]{BC2017}.
(The versions in \cite{Baillon} and \cite{BBR} are much more general.)
\end{proof}
\begin{fact}
\label{f:averasymp}
Let $T\colon X\to X$ be averaged nonexpansive with
$\ensuremath{\operatorname{Fix}} T\neq\varnothing$.
Then $(\forall x\in X)$
$T^kx-T^{k+1}x\to 0$.
\end{fact}
\begin{proof}
See Bruck and Reich's paper \cite{BruRei} or \cite[Corollary~5.16(ii)]{BC2017}.
\end{proof}
\begin{corollary}
\label{c:key}
Let $L\colon X\to X$ be linear and averaged nonexpansive.
Then
\begin{equation}
(\forall x\in X)\quad L^kx \to P_{\ensuremath{\operatorname{Fix}} L}(x).
\end{equation}
\end{corollary}
\begin{proof}
Because $0\in\ensuremath{\operatorname{Fix}} L$, we have $\ensuremath{\operatorname{Fix}} L\neq\varnothing$.
Now combine
\cref{f:convasymp} with \cref{f:averasymp}.
\end{proof}
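As a sketch of the corollary (our illustration; the subspace construction is an assumption), take $L=\tfrac12(\operatorname{Id}+P_VP_U)$, which is $\tfrac12$-averaged; one can check that $\operatorname{Fix} L=\operatorname{Fix}(P_VP_U)=U\cap V$. High powers of $L$ should then map any starting point into $U\cap V$:

```python
import numpy as np

rng = np.random.default_rng(3)

def proj(A):
    return A @ np.linalg.pinv(A)

# Two 3-dimensional subspaces of R^6 sharing the direction `common`.
common = rng.standard_normal((6, 1))
PU = proj(np.hstack([common, rng.standard_normal((6, 2))]))
PV = proj(np.hstack([common, rng.standard_normal((6, 2))]))

# L = (Id + P_V P_U)/2 is 1/2-averaged with Fix L = U ∩ V.
L = 0.5 * (np.eye(6) + PV @ PU)

x0 = rng.standard_normal(6)
xk = np.linalg.matrix_power(L, 2**40) @ x0   # essentially lim L^k x0

# The limit is a fixed point of L lying in U ∩ V.
print(np.linalg.norm(L @ xk - xk))
print(np.linalg.norm(PU @ xk - xk), np.linalg.norm(PV @ xk - xk))
```

We use `matrix_power` (binary exponentiation) rather than an explicit loop so that the iterate is numerically at its limit.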
\begin{fact}
\label{f:BLM}
Let $L$ be a linear nonexpansive operator and let $b\in X$.
Set $T\colon X\to X\colon x\mapsto Lx+b$ and suppose that
$\ensuremath{\operatorname{Fix}} T\neq\varnothing$.
Then $b\in\ensuremath{{\operatorname{ran}}\,}(\ensuremath{\operatorname{Id}}-L)$, and for every $x\in X$ and
$a\in(\ensuremath{\operatorname{Id}}-L)^{-1}b$, the following hold:
\begin{enumerate}
\item $b=a-La\in\ensuremath{{\operatorname{ran}}\,}(\ensuremath{\operatorname{Id}}-L)$.
\item $\ensuremath{\operatorname{Fix}} T = a + \ensuremath{\operatorname{Fix}} L$.
\item
\label{f:BLMiii}
$P_{\ensuremath{\operatorname{Fix}} T}(x) = P_{\ensuremath{\operatorname{Fix}} L}(x) + P_{(\ensuremath{\operatorname{Fix}} L)^\perp}(a)$.
\item $T^kx = L^k(x-a)+a$.
\item $L^kx \to P_{\ensuremath{\operatorname{Fix}} L}x$ $\Leftrightarrow$ $T^kx \to P_{\ensuremath{\operatorname{Fix}} T}x$.
\end{enumerate}
\end{fact}
\begin{proof}
See \cite[Lemma~3.2 and Theorem~3.3]{BLM}.
\end{proof}
\begin{remark}
\label{r:BLM}
Consider \cref{f:BLM} and its notation.
If $a\in(\ensuremath{\operatorname{Id}}-L)^{-1}b$, then so is
$P_{(\ensuremath{\operatorname{Fix}} L)^\perp}(a)$ because
$b = (\ensuremath{\operatorname{Id}}-L)a = (\ensuremath{\operatorname{Id}}-L)\big(P_{\ensuremath{\operatorname{Fix}} L}(a)+P_{(\ensuremath{\operatorname{Fix}} L)^\perp}(a)\big)
=(\ensuremath{\operatorname{Id}}-L)P_{(\ensuremath{\operatorname{Fix}} L)^\perp}(a)$; moreover,
using \cite[Lemma~3.2.1]{Groetsch}, we see that
\begin{equation}
(\ensuremath{\operatorname{Id}}-L)^\dagger b
=(\ensuremath{\operatorname{Id}}-L)^\dagger(\ensuremath{\operatorname{Id}}-L)a
=P_{(\ker(\ensuremath{\operatorname{Id}}-L))^\perp}(a)
=P_{(\ensuremath{\operatorname{Fix}} L)^\perp}(a),
\end{equation}
where again ``$^\dagger$'' denotes the Moore-Penrose inverse of a continuous
linear operator (with possibly nonclosed range).
So given $b\in X$, we may concretely set
\begin{equation}
a = (\ensuremath{\operatorname{Id}}-L)^\dagger b \in (\ensuremath{\operatorname{Id}}-L)^{-1}b;
\end{equation}
with this choice, \cref{f:BLMiii} turns into the even more pleasing identity
\begin{equation}
P_{\ensuremath{\operatorname{Fix}} T}(x) = P_{\ensuremath{\operatorname{Fix}} L}(x) + a.
\end{equation}
\end{remark}
\section{Ryu, Malitsky-Tam, and Campoy splitting}
In this section, we present the precise forms of
the Ryu, Malitsky-Tam, and Campoy algorithms and review
known convergence results.
\label{sec:known}
\subsection{Ryu splitting}
We start with Ryu's algorithm. In this subsection,
\begin{equation}
\text{$A,B,C$ are maximally monotone operators on
$X$,
}
\end{equation}
with resolvents $J_A,J_B,J_C$, respectively.
The problem of interest is to
\begin{equation}
\label{e:Ryuprob}
\text{find $x\in X$ such that $0\in (A+B+C)x$,}
\end{equation}
and we assume that \cref{e:Ryuprob} has a solution.
The algorithm pioneered by Ryu \cite{Ryu} provides a method
for finding a solution to \cref{e:Ryuprob}.
It proceeds as follows. Set\footnote{We will express vectors in
product spaces both as column and as row vectors depending
on which version is more readable.}
\begin{equation}
\label{e:genM}
M\colon X\times X\to X\times X\times X\colon
\begin{pmatrix}
x\\
y
\end{pmatrix}
\mapsto
\begin{pmatrix}
J_A(x)\\[+1mm]
J_B(J_A(x)+y)\\[+1mm]
J_C\big(J_A(x)-x+J_B(J_A(x)+y)-y\big)
\end{pmatrix}.
\end{equation}
Next, denote by $Q_1\colon X\times X\times X\to X\colon
(x_1,x_2,x_3)\mapsto x_1$
and similarly for $Q_2$ and $Q_3$.
We also set $\Delta := \menge{(x,x,x)\in X^3}{x\in X}$.
We are now ready to introduce the \emph{Ryu operator}
\begin{equation}
\label{e:TRyu}
T := \ensuremath{T_{\text{\scriptsize Ryu}}} \colon X^2\to X^2\colon
z\mapsto
z + \big((Q_3-Q_1)Mz,(Q_3-Q_2)Mz\big).
\end{equation}
Given a starting point $(x_0,y_0)\in X\times X$,
the basic form of
Ryu splitting generates a governing sequence via
\begin{equation}
\label{e:basicRyu}
(\forall \ensuremath{{k\in{\mathbb N}}})\quad (x_{k+1},y_{k+1}) :=
(1-\lambda)(x_k,y_k) + \lambda T(x_k,y_k).
\end{equation}
The following result records the basic convergence properties
established by Ryu \cite{Ryu}
and recently improved by Arag\'on-Artacho, Campoy, and Tam \cite{AACT20}.
\begin{fact}[\bf Ryu and also Arag\'on-Artacho-Campoy-Tam]
\label{f:Ryu}
The operator
$\ensuremath{T_{\text{\scriptsize Ryu}}}$ is nonexpansive with
\begin{equation}
\label{e:fixtryu}
\ensuremath{\operatorname{Fix}} \ensuremath{T_{\text{\scriptsize Ryu}}} =
\menge{(x,y)\in X\times X}{J_A(x)=J_B(J_A(x)+y) = J_C(R_A(x)-y)}
\end{equation}
and
\begin{equation}
\ensuremath{\operatorname{zer}}(A+B+C) = J_A\big(Q_1\ensuremath{\operatorname{Fix}}\ensuremath{T_{\text{\scriptsize Ryu}}}\big).
\end{equation}
Suppose that $0<\lambda<1$ and
consider the sequence generated by
\cref{e:basicRyu}.
Then there exists $(\bar{x},\bar{y})\in X\times X$ such that
\begin{equation}
(x_k,y_k)\ensuremath{\:{\rightharpoonup}\:} (\bar{x},\bar{y}) \in \ensuremath{\operatorname{Fix}}\ensuremath{T_{\text{\scriptsize Ryu}}},
\end{equation}
\begin{equation}
M(x_k,y_k) \ensuremath{\:{\rightharpoonup}\:} M(\bar{x},\bar{y})\in\Delta,
\end{equation}
and
\begin{equation}
\label{e:210815d}
\big((Q_3-Q_1)M(x_k,y_k),(Q_3-Q_2)M(x_k,y_k)\big)\to (0,0).
\end{equation}
In particular,
\begin{equation}
J_A(x_k) \ensuremath{\:{\rightharpoonup}\:} J_A\bar{x}\in \ensuremath{\operatorname{zer}}(A+B+C).
\end{equation}
\end{fact}
\begin{proof}
See \cite{Ryu} and \cite{AACT20}.
\end{proof}
\subsection{Malitsky-Tam splitting}
We now turn to the Malitsky-Tam algorithm.
In this subsection, let $n\in\{3,4,\ldots\}$ and
let $A_1,A_2,\ldots,A_n$
be maximally monotone operators on $X$.
The problem of interest is to
\begin{equation}
\label{e:MTprob}
\text{find $x\in X$ such that $0\in (A_1+A_2+\cdots + A_n)x$,}
\end{equation}
and we assume that \cref{e:MTprob} has a solution.
The algorithm proposed by Malitsky and Tam \cite{MT} provides a method
for finding a solution to \cref{e:MTprob}.
Now set\footnote{Again, we will express vectors in
product spaces both as column and as row vectors depending
on which version is more readable.}
\begin{subequations}
\label{e:MTM}
\begin{align}
M\colon X^{n-1}
&\to
X^n\colon
\begin{pmatrix}
z_1\\
\vdots\\
z_{n-1}
\end{pmatrix}
\mapsto
\begin{pmatrix}
x_1\\
\vdots\\
x_{n-1}\\
x_n
\end{pmatrix},
\quad\text{where}\;\;
\\[+2mm]
&(\forall i\in\{1,\ldots,n\})
\;\;
x_i = \begin{cases}
J_{A_1}(z_1), &\text{if $i=1$;}\\
J_{A_i}(x_{i-1}+z_i-z_{i-1}), &\text{if $2\leq i\leq n-1$;}\\
J_{A_n}(x_1+x_{n-1}-z_{n -1}), &\text{if $i=n$.}
\end{cases}
\end{align}
\end{subequations}
As before, we denote by $Q_1\colon X^n \to X\colon
(x_1,\ldots,x_{n-1},x_n)\mapsto x_1$
and similarly for $Q_2,\ldots,Q_n$.
We also set
\begin{equation}\label{e:delta}
\Delta := \menge{(x,\ldots,x)\in X^n}{x\in X},
\end{equation}
which is also known as the diagonal in $X^n$.
We are now ready to introduce the \emph{Malitsky-Tam (MT) operator}
\begin{equation}
\label{e:TMT}
T := \ensuremath{T_{\text{\scriptsize MT}}} \colon X^{n-1}\to X^{n-1}\colon
\ensuremath{\mathbf{z}}\mapsto
\ensuremath{\mathbf{z}}+
\begin{pmatrix}
(Q_2-Q_1)M\ensuremath{\mathbf{z}}\\
(Q_3-Q_2)M\ensuremath{\mathbf{z}}\\
\vdots\\
(Q_n-Q_{n-1})M\ensuremath{\mathbf{z}}
\end{pmatrix}.
\end{equation}
Given a starting point $\ensuremath{\mathbf{z}}_0 \in X^{n-1}$,
the basic form of
MT splitting generates a
governing sequence via
\begin{equation}
\label{e:basicMT}
(\forall \ensuremath{{k\in{\mathbb N}}})\quad \ensuremath{\mathbf{z}}_{k+1} :=
(1-\lambda)\ensuremath{\mathbf{z}}_{k} + \lambda T\ensuremath{\mathbf{z}}_k.
\end{equation}
The following result records the basic convergence properties.
\begin{fact}[\bf Malitsky-Tam]
\label{f:MT}
The operator
$\ensuremath{T_{\text{\scriptsize MT}}}$ is nonexpansive with
\begin{equation}
\label{e:fixtmt}
\ensuremath{\operatorname{Fix}} \ensuremath{T_{\text{\scriptsize MT}}} =
\menge{z\in X^{n-1}}{Mz \in \Delta},
\end{equation}
\begin{equation}
\ensuremath{\operatorname{zer}}(A_1+\cdots+A_n) = J_{A_1}\big(Q_1\ensuremath{\operatorname{Fix}}\ensuremath{T_{\text{\scriptsize MT}}}\big).
\end{equation}
Suppose that $0<\lambda<1$ and
consider the sequence generated by
\cref{e:basicMT}.
Then there exists $\bar{\ensuremath{\mathbf{z}}}\in X^{n-1}$ such that
\begin{equation}
\ensuremath{\mathbf{z}}_k\ensuremath{\:{\rightharpoonup}\:} \bar{\ensuremath{\mathbf{z}}} \in \ensuremath{\operatorname{Fix}}\ensuremath{T_{\text{\scriptsize MT}}},
\end{equation}
\begin{equation}
M\ensuremath{\mathbf{z}}_{k} \ensuremath{\:{\rightharpoonup}\:} M\bar{\ensuremath{\mathbf{z}}}\in\Delta,
\end{equation}
and
\begin{equation}
\label{e:210817a}
(\forall (i,j)\in\{1,\ldots,n\}^2)\quad
(Q_i-Q_j)M\ensuremath{\mathbf{z}}_k\to 0.
\end{equation}
In particular,
\begin{equation}
J_{A_1}Q_1 M\ensuremath{\mathbf{z}}_k \ensuremath{\:{\rightharpoonup}\:} J_{A_1}Q_1 M\bar{\ensuremath{\mathbf{z}}}\in \ensuremath{\operatorname{zer}}(A_1+\cdots+A_n).
\end{equation}
\end{fact}
\begin{proof}
See \cite{MT}.
\end{proof}
\subsection{Campoy splitting}
\label{ss:Csplit}
Finally, we turn to Campoy's algorithm \cite{Campoy}.
Again, let $n\in\{3,4,\ldots\}$ and
let $A_1,A_2,\ldots,A_n$
be maximally monotone operators on $X$.
The problem of interest is to
\begin{equation}
\label{e:Cprob}
\text{find $x\in X$ such that $0\in (A_1+A_2+\cdots + A_n)x$,}
\end{equation}
and we assume again that \cref{e:Cprob} has a solution.
Denote the diagonal in $X^{n-1}$ by $\Delta$, and define
the \emph{embedding operator} $E$ by
\begin{equation}
\label{e:embed}
E\colon X\to X^{n-1}\colon x\mapsto (x,x,\ldots,x).
\end{equation}
Next, define two operators $\ensuremath{{\mathbf{A}}}$ and $\ensuremath{{\mathbf{B}}}$ on $X^{n-1}$ by
\begin{subequations}
\begin{align}
\ensuremath{{\mathbf{A}}}\colon (x_1,\ldots,x_{n-1}) &\mapsto \tfrac{1}{n-1}\big(A_nx_1,\ldots,A_nx_{n-1}\big) + N_{\Delta}(x_1,\ldots ,x_{n-1}),\\
\ensuremath{{\mathbf{B}}}\colon (x_1,\ldots,x_{n-1}) &\mapsto \big(A_1x_1,\ldots,A_{n-1}x_{n-1}\big).
\end{align}
\end{subequations}
We note that this way of splitting is also contained
in early works of Alex Kruger
(see \cite{Kruger81} and \cite{Kruger85}) to whom we are grateful for making
us aware of this connection. However, the relevant resolvents (see \cref{f:C}) were only
very recently computed by Campoy.
We now define the \emph{Campoy (C) operator} by
\begin{equation}
\label{e:CT}
T := \ensuremath{T_{\text{\scriptsize C}}} \colon X^{n-1}\to X^{n-1}\colon
\ensuremath{\mathbf{z}}\mapsto
R_{\ensuremath{{\mathbf{B}}}}R_{\ensuremath{{\mathbf{A}}}}\ensuremath{\mathbf{z}} = \ensuremath{\mathbf{z}} - 2J_{\ensuremath{{\mathbf{A}}}}\ensuremath{\mathbf{z}} + 2J_{\ensuremath{{\mathbf{B}}}}R_{\ensuremath{{\mathbf{A}}}}\ensuremath{\mathbf{z}}.
\end{equation}
Given a starting point $\ensuremath{\mathbf{z}}_0 \in X^{n-1}$,
the basic form of
Campoy's splitting algorithm generates a
governing sequence via
\begin{equation}
\label{e:basicC}
(\forall \ensuremath{{k\in{\mathbb N}}})\quad \ensuremath{\mathbf{z}}_{k+1} :=
(1-\lambda)\ensuremath{\mathbf{z}}_{k} + \lambda T\ensuremath{\mathbf{z}}_k.
\end{equation}
The following result records basic properties and the convergence result.
\begin{fact}[\bf Campoy]
\label{f:C}
The operators $\ensuremath{{\mathbf{A}}}$ and $\ensuremath{{\mathbf{B}}}$ are maximally monotone, with resolvents
\begin{subequations}
\label{e:f:C}
\begin{align}
M := J_{\ensuremath{{\mathbf{A}}}}\colon (x_1,\ldots,x_{n-1})&\mapsto E\bigg(J_{\frac{1}{n-1}A_n}\Big(
\tfrac{1}{n-1}\sum_{i=1}^{n-1}x_i\Big)\bigg),\label{e:f:Ca}\\
J_{\ensuremath{{\mathbf{B}}}} \colon (x_1,\ldots,x_{n-1})&\mapsto
\big(J_{A_1}x_1,\ldots,J_{A_{n-1}}x_{n-1} \big),
\end{align}
\end{subequations}
respectively.
We also have
\begin{equation}
\ensuremath{\operatorname{zer}}(\ensuremath{{\mathbf{A}}}+\ensuremath{{\mathbf{B}}}) = E\big(\ensuremath{\operatorname{zer}}(A_1+\cdots+A_n)\big).
\end{equation}
The operator
\begin{equation}
\label{e:CF}
\ensuremath{F_{\text{\scriptsize C}}} := \ensuremath{\tfrac{1}{2}}\ensuremath{\operatorname{Id}} + \ensuremath{\tfrac{1}{2}}\ensuremath{T_{\text{\scriptsize C}}}
=\ensuremath{\operatorname{Id}}-J_{\ensuremath{{\mathbf{A}}}}+J_{\ensuremath{{\mathbf{B}}}}R_{\ensuremath{{\mathbf{A}}}}
\end{equation}
is the standard (firmly nonexpansive) Douglas-Rachford
operator for finding a zero of $\ensuremath{{\mathbf{A}}}+\ensuremath{{\mathbf{B}}}$;
its reflected version, the Campoy operator
$\ensuremath{T_{\text{\scriptsize C}}} = 2\ensuremath{F_{\text{\scriptsize C}}}-\ensuremath{\operatorname{Id}}$,
is therefore nonexpansive.
Suppose that $0<\lambda<1$ and
consider the sequence generated by
\cref{e:basicC}.
Then there exists $\bar{\ensuremath{\mathbf{z}}}\in X^{n-1}$ such that
\begin{equation}
\ensuremath{\mathbf{z}}_k\ensuremath{\:{\rightharpoonup}\:} \bar{\ensuremath{\mathbf{z}}} \in \ensuremath{\operatorname{Fix}}\ensuremath{T_{\text{\scriptsize C}}},
\end{equation}
\begin{equation}
M\ensuremath{\mathbf{z}}_{k} = J_{\ensuremath{{\mathbf{A}}}}\ensuremath{\mathbf{z}}_{k} \ensuremath{\:{\rightharpoonup}\:} J_{\ensuremath{{\mathbf{A}}}}\bar{\ensuremath{\mathbf{z}}}\in\ensuremath{\operatorname{zer}}(\ensuremath{{\mathbf{A}}}+\ensuremath{{\mathbf{B}}}).
\end{equation}
\end{fact}
\begin{proof}
See \cite[Theorem~3.3(i)--(iii) and Theorem~5.1]{Campoy}, and \cite[Theorem~26.11]{BC2017}.
\end{proof}
If one wishes to avoid the product space reformulation, then one may set
\begin{equation}
p_k:=J_{\frac{1}{n-1}A_n}\Big(\frac{1}{n-1}\sum_{i=1}^{n-1} z_{k,i}\Big)
\end{equation}
so that $J_{\ensuremath{{\mathbf{A}}}}(\ensuremath{\mathbf{z}}_k)=E(p_k)$.
Now for $i\in \{1,\dots,n-1\}$, we define
\begin{equation}
x_{k,i}:=J_{A_i}(2p_k-z_{k,i}).
\end{equation}
It follows that
\begin{equation}
\begin{pmatrix}
x_{k,1}\\\vdots\\x_{k,n-1}
\end{pmatrix}=J_{\ensuremath{{\mathbf{B}}}}(R_{\ensuremath{{\mathbf{A}}}}(\ensuremath{\mathbf{z}}_k)).
\end{equation}
Now one can rewrite Campoy's algorithm as follows (and this is
his original formulation):
Given a starting point $(z_{0,1},\ldots,z_{0,n-1})\in X^{n-1}$ and $k\in\ensuremath{\mathbb N}$,
update
\begin{subequations}
\begin{align}
p_k&=J_{\frac{1}{n-1}A_n}\bigg(\frac{1}{n-1}\sum_{i=1}^{n-1} z_{k,i}\bigg),\\
(\forall i\in \{1,\ldots,n-1\})\qquad
x_{k,i}&=J_{A_i}(2p_k-z_{k,i}),\\
(\forall i\in \{1,\ldots,n-1\})\quad z_{k+1,i}&=z_{k,i}+\lambda(x_{k,i}-p_k),\label{e:CTalt}
\end{align}
\end{subequations}
which gives us the equivalence of \cref{e:basicC} and \cref{e:CTalt}.
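When each $A_i$ is the normal cone of a closed linear subspace, as in the later sections, every resolvent above is a projection; note that $J_{\frac{1}{n-1}A_n}=J_{A_n}$ because scaling a cone does not change it. The update can then be run directly. A minimal numerical sketch for $n=3$ (our illustration; the random planes through a common line and the iteration count are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

def proj(A):
    return A @ np.linalg.pinv(A)

# n = 3, with A_i = N_{U_i} for random planes U_i in R^5 through a common
# line; then J_{A_i} = P_i, and J_{(1/2)A_3} = P_3 as well.
common = rng.standard_normal((5, 1))
P = [proj(np.hstack([common, rng.standard_normal((5, 1))])) for _ in range(3)]

lam = 0.5
z = [rng.standard_normal(5), rng.standard_normal(5)]   # (z_{k,1}, z_{k,2})
for _ in range(200_000):
    p = P[2] @ ((z[0] + z[1]) / 2)                     # p_k
    x = [P[i] @ (2 * p - z[i]) for i in range(2)]      # x_{k,i}
    z = [z[i] + lam * (x[i] - p) for i in range(2)]    # z_{k+1,i}

# p_k should approach a point common to all three subspaces.
print([float(np.linalg.norm(Q @ p - p)) for Q in P])
```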
For future use in \cref{sec:matrix}, we bring the Campoy operator into a framework
similar to the other algorithms. As before, we denote by $Q_1\colon X^{n-1} \to X\colon
(x_1,\ldots,x_{n-1})\mapsto x_1$
and similarly for $Q_2,\ldots,Q_{n-1}$.
We recall from \cref{e:f:Ca} that
\begin{subequations}
\begin{align}
\label{e:CampoyM}
M &\colon
X^{n-1}\to X^{n-1}\colon
\ensuremath{\mathbf{z}} =
\begin{pmatrix}
z_{1}\\
\vdots\\
z_{n-1}
\end{pmatrix}
\mapsto
\begin{pmatrix}
J_{\frac{1}{n-1}A_n}\Big(\frac{1}{n-1}\sum_{i=1}^{n-1}z_{i}\Big)\\
\vdots\\
J_{\frac{1}{n-1}A_n}\Big(\frac{1}{n-1}\sum_{i=1}^{n-1}z_{i}\Big)
\end{pmatrix},
\end{align}
and we set
\begin{align}
\label{e:CampoyS}
S&\colon X^{n-1}\to X^{n-1}\colon
\ensuremath{\mathbf{z}} = \begin{pmatrix}
z_{1}\\
\vdots\\
z_{n-1}
\end{pmatrix}
\mapsto
\begin{pmatrix}
x_{1}\\
\vdots\\
x_{n-1}
\end{pmatrix},
\quad\text{where}\;\;
\\[+2mm]
&(\forall i\in\{1,\ldots,n-1\})
\;\;
x_{i} = J_{A_i}(2Q_iM\ensuremath{\mathbf{z}}- z_{i}).
\end{align}
\end{subequations}
We can then rewrite the \emph{Campoy operator} in \cref{e:CT} as
\begin{equation}
\label{e:CampoyT}
T = \ensuremath{T_{\text{\scriptsize C}}} \colon X^{n-1}\to X^{n-1}\colon
\ensuremath{\mathbf{z}}\mapsto
\ensuremath{\mathbf{z}}+ 2S\ensuremath{\mathbf{z}} - 2M\ensuremath{\mathbf{z}}.
\end{equation}
\section{Main Results}
We are now ready to tackle our main results.
We shall find useful descriptions of the fixed point sets
of the Ryu, the Malitsky-Tam, and the Campoy operators.
These descriptions will allow us to deduce strong convergence of the
iterates to the projection
onto the intersection.
\label{sec:main}
\subsection{Ryu splitting}
In this subsection, we assume that
\begin{equation}
\text{
$U,V,W$ are closed linear subspaces of $X$.
}
\end{equation}
We set
\begin{equation}
A := N_U,
\;\;
B := N_V,
\;\;
C := N_{W}.
\end{equation}
Then
\begin{equation}
Z := \ensuremath{\operatorname{zer}}(A+B+C) = U\cap V\cap W.
\end{equation}
Using linearity of the projection operators, the operator $M$ defined in \cref{e:genM} turns into
\begin{equation}
\label{e:linM}
M\colon X\times X\to X\times X\times X\colon
\begin{pmatrix}
x\\
y
\end{pmatrix}
\mapsto
\begin{pmatrix}
P_Ux\\[+1mm]
P_VP_Ux+P_Vy\\[+1mm]
P_WP_Ux + P_WP_VP_Ux - P_Wx + P_WP_Vy - P_Wy
\end{pmatrix},
\end{equation}
while the Ryu operator is still (see \cref{e:TRyu})
\begin{equation}
\label{e:linT}
T := \ensuremath{T_{\text{\scriptsize Ryu}}} \colon X^2\to X^2\colon
z\mapsto
z + \big((Q_3-Q_1)Mz,(Q_3-Q_2)Mz\big).
\end{equation}
We now determine the fixed point set of the Ryu operator.
\begin{lemma}
\label{firelemma}
Let $(x,y)\in X\times X$.
Then
\begin{equation}
\label{e:210815b}
\ensuremath{\operatorname{Fix}} T = \big(Z\times\{0\}\big) \oplus
\Big(\big(U^\perp\times V^\perp) \cap \big(\Delta^\perp+(\{0\}\times W^\perp) \big)\Big),
\end{equation}
where $\Delta := \menge{(x,x)\in X\times X}{x\in X}$.
Consequently, setting
\begin{equation}
E:=\big(U^\perp\times V^\perp) \cap \big(\Delta^\perp+(\{0\}\times W^\perp) \big),
\end{equation}
we have
\begin{equation}
\label{e:210815c}
P_{\ensuremath{\operatorname{Fix}} T}(x,y) = (P_{Z}x,0)\oplus P_E(x,y) \in (P_Zx\oplus U^\perp)\times V^\perp.
\end{equation}
\end{lemma}
\begin{proof}
Note that $(x,y)=(P_{W^\perp}y+(x-P_{W^\perp}y),P_{W^\perp}y+P_Wy)
=(P_{W^\perp}y,P_{W^\perp}y)+(x-P_{W^\perp}y,P_Wy)
\in \Delta + (X\times W)$.
Hence
\begin{equation}
X\times X = \Delta+(X\times W)\;\;\text{is closed;}
\end{equation}
consequently, by, e.g., \cite[Corollary~15.35]{BC2017},
\begin{equation}
\label{e:210815a}
\Delta^\perp + (\{0\}\times W^\perp)\;\;\text{is closed.}
\end{equation}
Next, using \cref{e:fixtryu}, we have the equivalences
\begin{subequations}
\begin{align}
&\hspace{-1cm}(x,y)\in\ensuremath{\operatorname{Fix}}\ensuremath{T_{\text{\scriptsize Ryu}}}\\
&\Leftrightarrow
P_Ux = P_V\big(P_Ux+y\big) = P_W\big(R_Ux-y\big)\\
&\Leftrightarrow
P_Ux\in Z
\;\land\;
y\in V^\perp
\;\land\;
P_Ux = P_W\big(P_Ux-P_{U^\perp}x-y\big)\\
&\Leftrightarrow
x\in Z + U^\perp
\;\land\;
y\in V^\perp
\;\land\;
P_{U^\perp}x + y \in W^\perp.
\end{align}
\end{subequations}
Now define the linear operator
\begin{equation}
S\colon X\times X\to X\colon (x,y)\mapsto x+y.
\end{equation}
Hence
\begin{subequations}
\begin{align}
\ensuremath{\operatorname{Fix}} \ensuremath{T_{\text{\scriptsize Ryu}}}
&=
\menge{(x,y)\in (Z+U^\perp)\times V^\perp}{P_{U^\perp}x+y\in W^\perp}\\
&=
\menge{(z+u^\perp,v^\perp)}{z\in Z,\,u^\perp\in U^\perp,\,v^\perp\in V^\perp,
\,u^\perp + v^\perp\in W^\perp}\\
&=(Z\times\{0\})
\oplus \big((U^\perp\times V^\perp) \cap S^{-1}(W^\perp)\big).
\end{align}
\end{subequations}
On the other hand,
$S^{-1}(W^\perp) = (\{0\}\times W^\perp)+\ker S
= (\{0\}\times W^\perp)+\Delta^\perp$ is closed by \cref{e:210815a}.
Altogether,
\begin{equation}
\ensuremath{\operatorname{Fix}} \ensuremath{T_{\text{\scriptsize Ryu}}}
= (Z\times\{0\})
\oplus \big((U^\perp\times V^\perp) \cap
((\{0\}\times W^\perp)+\Delta^\perp)\big),
\end{equation}
i.e.,
\cref{e:210815b} holds.
Finally, \cref{e:210815c} follows from \cref{f:orthoP}.
\end{proof}
We are now ready for the main convergence result on Ryu's algorithm.
\begin{theorem}[\bf main result on Ryu splitting]
Given $0<\lambda<1$ and $(x_0,y_0)\in X\times X$, generate
the sequence $(x_k,y_k)_\ensuremath{{k\in{\mathbb N}}}$ via\footnote{Recall \cref{e:linM} and \cref{e:linT}
for the definitions of $M$ and $T$.}
\begin{equation}
(\forall\ensuremath{{k\in{\mathbb N}}})\quad (x_{k+1},y_{k+1}) := (1-\lambda)(x_k,y_k)+\lambda T(x_k,y_k).
\end{equation}
Then
\begin{equation}
\label{e:210815e}
M(x_k,y_k)\to \big(P_Z(x_0),P_Z(x_0),P_Z(x_0)\big);
\end{equation}
in particular,
\begin{equation}
P_U(x_k)\to P_Z(x_0).
\end{equation}
\end{theorem}
\begin{proof}
Set $T_\lambda := (1-\lambda)\ensuremath{\operatorname{Id}}+\lambda T$ and observe that
$(x_k,y_k)_\ensuremath{{k\in{\mathbb N}}} = (T^k_\lambda(x_0,y_0))_\ensuremath{{k\in{\mathbb N}}}$.
Hence, by \cref{c:key} and \cref{e:210815c},
\begin{subequations}
\begin{align}
(x_k,y_k)&\to P_{\ensuremath{\operatorname{Fix}} T_\lambda}(x_0,y_0)=P_{\ensuremath{\operatorname{Fix}} T}(x_0,y_0)\\
&=(P_Zx_0,0)+P_E(x_0,y_0)\in (P_Zx_0\oplus U^\perp)\times V^\perp,
\end{align}
\end{subequations}
where $E$ is as in \cref{firelemma}.
Hence
\begin{equation}
Q_1M(x_k,y_k) = P_Ux_k \to P_U(P_Zx_0) = P_Zx_0.
\end{equation}
Now \cref{e:210815d} yields
\begin{equation}
\lim_{k\to\infty}Q_1M(x_k,y_k)=\lim_{k\to\infty}Q_2M(x_k,y_k)
= \lim_{k\to\infty}Q_3M(x_k,y_k)= P_Zx_0,
\end{equation}
i.e., \cref{e:210815e} holds, and we are done.
\end{proof}
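A numerical sketch of this theorem (our illustration, with assumptions: three random planes in $\mathbb{R}^5$ through a common line, so that generically $Z=U\cap V\cap W$ is that line, and a fixed iteration budget):

```python
import numpy as np

rng = np.random.default_rng(6)

def proj(A):
    return A @ np.linalg.pinv(A)

# U, V, W: planes in R^5 sharing the line span(common); generically
# Z = U ∩ V ∩ W = span(common).
common = rng.standard_normal((5, 1))
PU = proj(np.hstack([common, rng.standard_normal((5, 1))]))
PV = proj(np.hstack([common, rng.standard_normal((5, 1))]))
PW = proj(np.hstack([common, rng.standard_normal((5, 1))]))
PZ = proj(common)

lam = 0.5
x0 = rng.standard_normal(5)
x, y = x0.copy(), rng.standard_normal(5)
for _ in range(200_000):
    m1 = PU @ x                      # Q_1 M(x, y)
    m2 = PV @ (m1 + y)               # Q_2 M(x, y)
    m3 = PW @ (m1 - x + m2 - y)      # Q_3 M(x, y)
    x, y = x + lam * (m3 - m1), y + lam * (m3 - m2)

# Theorem: P_U x_k -> P_Z x_0.
print(np.linalg.norm(PU @ x - PZ @ x0))
```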
\subsection{Malitsky-Tam splitting}
\label{ss:MTlin}
Let $n\in\{3,4,\ldots\}$.
In this subsection, we assume that
$U_1,\ldots,U_n$ are closed linear subspaces of $X$.
We set
\begin{equation}
(\forall i\in\{1,2,\ldots,n\})
\quad
A_i := N_{U_i} \;\;\text{and}\;\;
P_i := P_{U_i}.
\end{equation}
Then
\begin{equation}
Z := \ensuremath{\operatorname{zer}}(A_1+\cdots+A_n) = U_1\cap \cdots \cap U_n.
\end{equation}
The operator $M$ defined in \cref{e:MTM} turns into
\begin{subequations}
\label{e:linMTM}
\begin{align}
M\colon X^{n-1}
&\to
X^n\colon
\begin{pmatrix}
z_1\\
\vdots\\
z_{n-1}
\end{pmatrix}
\mapsto
\begin{pmatrix}
x_1\\
\vdots\\
x_{n-1}\\
x_n
\end{pmatrix},
\quad\text{where}\;\;
\\[+2mm]
&(\forall i\in\{1,\ldots,n\})
\;\;
x_i = \begin{cases}
P_{1}(z_1), &\text{if $i=1$;}\\
P_{i}(x_{i-1}+z_i-z_{i-1}), &\text{if $2\leq i\leq n-1$;}\\
P_{n}(x_1+x_{n-1}-z_{n -1}), &\text{if $i=n$}
\end{cases}
\end{align}
\end{subequations}
and the MT operator remains (see \cref{e:TMT})
\begin{equation}
\label{e:linMT}
T := \ensuremath{T_{\text{\scriptsize MT}}} \colon X^{n-1}\to X^{n-1}\colon
\ensuremath{\mathbf{z}}\mapsto
\ensuremath{\mathbf{z}}+
\begin{pmatrix}
(Q_2-Q_1)M\ensuremath{\mathbf{z}}\\
(Q_3-Q_2)M\ensuremath{\mathbf{z}}\\
\vdots\\
(Q_n-Q_{n-1})M\ensuremath{\mathbf{z}}
\end{pmatrix}.
\end{equation}
We now determine the fixed point set of the Malitsky-Tam operator.
\begin{lemma}
\label{mtfixlemma}
The fixed point set of the MT operator $T=\ensuremath{T_{\text{\scriptsize MT}}}$ is
\begin{align}
\label{e:210817e}
\ensuremath{\operatorname{Fix}} T &= \menge{(z,\ldots,z)\in X^{n-1}}{z\in Z} \oplus E,
\end{align}
where
\begin{subequations}
\label{e:bloodyE}
\begin{align}
E &:=
\ensuremath{{\operatorname{ran}}\,}\Psi \cap \big(X^{n-2}\times U_n^\perp)\\
&\subseteq
U_1^\perp \times
\cdots
\times (U_1^\perp+\cdots+U_{n-2}^\perp)
\times \big((U_1^\perp+\cdots+U_{n-1}^\perp)\cap U_n^\perp\big)
\end{align}
\end{subequations}
and
\begin{subequations}
\label{e:bloodyPsi}
\begin{align}
\Psi\colon U_1^\perp \times \cdots \times U_{n-1}^\perp
&\to X^{n-1}\\
(y_1,\ldots,y_{n-1})&\mapsto
(y_1,y_1+y_2,\ldots,y_1+y_2+\cdots+y_{n-1})
\end{align}
\end{subequations}
is the continuous linear partial sum operator which has closed range.
Let $\ensuremath{\mathbf{z}} = (z_1,\ldots,z_{n-1})\in X^{n-1}$,
and set $\bar{z} := (z_1+z_2+\cdots+z_{n-1})/(n-1)$.
Then
\begin{equation}
\label{e:210817d}
P_{\ensuremath{\operatorname{Fix}} T}\ensuremath{\mathbf{z}} = (P_Z\bar{z},\ldots,P_Z\bar{z}) \oplus P_E\ensuremath{\mathbf{z}} \in X^{n-1}
\end{equation}
and hence
\begin{equation}
\label{e:210817g}
P_1(Q_1P_{\ensuremath{\operatorname{Fix}} T})\ensuremath{\mathbf{z}} = P_Z\bar{z}.
\end{equation}
\end{lemma}
\begin{proof}
Assume temporarily that $\ensuremath{\mathbf{z}}\in\ensuremath{\operatorname{Fix}} T$ and
set $\ensuremath{\mathbf{x}} = M\ensuremath{\mathbf{z}} = (x_1,\ldots,x_n)$.
Then $\bar{x} := x_1=\cdots= x_n$ and so $\bar{x}\in Z$.
Now $P_1z_1=x_1=\bar{x}\in Z$ and thus
\begin{equation}
z_1 \in \bar{x}+U_1^\perp \subseteq Z + U_1^\perp.
\end{equation}
Next,
$\bar{x}=x_2 = P_2(x_1+z_2-z_1)=
P_2x_1 + P_2(z_2 - z_1)=
P_2\bar{x}+P_2(z_2-z_1) = \bar{x}+P_2(z_2-z_1)$,
which implies $P_2(z_2-z_1)=0$ and so
$z_2-z_1\in U_2^\perp$.
It follows that
\begin{equation}
z_2 \in z_1+U_2^\perp.
\end{equation}
Similarly, by considering $x_3,\ldots,x_{n-1}$, we obtain
\begin{equation}
\label{e:210817b}
z_3\in z_2 + U_3^\perp, \ldots, z_{n-1}\in z_{n-2}+U_{n-1}^\perp.
\end{equation}
Finally,
$\bar{x}=x_n=P_n(x_1+x_{n-1}-z_{n-1})=P_n(\bar{x}+\bar{x}-z_{n-1})
=2\bar{x}-P_nz_{n-1}$,
which implies $P_nz_{n-1}=\bar{x}$, i.e.,
$z_{n-1}\in \bar{x}+U_n^\perp$.
Combining with \cref{e:210817b}, we see that
$z_{n-1}$ satisfies
\begin{equation}
z_{n-1}\in (z_{n-2}+U_{n-1}^\perp)\cap(P_1z_1+U_n^\perp).
\end{equation}
To sum up, our $\ensuremath{\mathbf{z}}\in \ensuremath{\operatorname{Fix}} T$ must satisfy
\begin{subequations}
\label{e:210817c}
\begin{align}
z_1 &\in Z + U_1^\perp\\
z_2 &\in z_1+U_2^\perp\\
&\;\;\vdots\\
z_{n-2}&\in z_{n-3}+U_{n-2}^\perp\\
z_{n-1}&\in (z_{n-2}+U_{n-1}^\perp)\cap(P_1z_1+U_n^\perp).
\end{align}
\end{subequations}
We now show the converse.
To this end, assume now that our $\ensuremath{\mathbf{z}}$ satisfies \cref{e:210817c}.
Note that $Z^\perp = \overline{U_1^\perp+\cdots + U_n^\perp}$.
Because $z_1 \in Z+U_1^\perp$,
there exist $z\in Z$ and $u_1^\perp\in U_1^\perp$ such that
$z_1 = z\oplus u_1^\perp$.
Hence $x_1=P_1z_1 = P_1z = z$.
Next,
$z_2\in z_1+U_2^\perp$, say
$z_2=z_1+u_2^\perp = z\oplus(u_1^\perp+u_2^\perp)$,
where $u_2^\perp \in U_2^\perp$.
Then
$x_2 = P_2(x_1+z_2-z_1)=P_2(z+u_2^\perp)=P_2z = z$.
Similarly, there exist $u_3^\perp\in U_3^\perp,
\ldots,u_{n-1}^\perp\in U_{n-1}^\perp$ such that
$x_3=\cdots=x_{n-1}=z$ and
$z_i=z\oplus(u_1^\perp+\cdots +u_i^\perp)$ for
$2\leq i\leq n-1$.
Finally, we also have $z_{n-1}=z\oplus u_n^\perp$
for some $u_n^\perp\in U_n^\perp$.
Thus
$x_n = P_n(x_1+x_{n-1}-z_{n-1})=P_n(2z-(z+u_{n}^\perp))
=P_nz = z$. Altogether, $\ensuremath{\mathbf{z}} \in \ensuremath{\operatorname{Fix}} T$.
We have thus verified the description of $\ensuremath{\operatorname{Fix}} T$
announced in \cref{e:210817e}, using the convenient
notation of the operator $\Psi$ which is easily seen to have closed range.
Next, we observe that
\begin{equation}
\label{e:210817c+}
D := \menge{(z,\ldots,z)\in X^{n-1}}{z\in Z} =
Z^{n-1}\cap \Delta,
\end{equation}
where $\Delta$ is the diagonal in $X^{n-1}$ which has projection
$P_\Delta(z_1,\ldots,z_{n-1})=(\bar{z},\ldots,\bar{z})$
(see, e.g., \cite[Proposition~26.4]{BC2017}).
By convexity of $Z$, we clearly have
$P_\Delta(Z^{n-1})\subseteq Z^{n-1}$.
Because $Z^{n-1}$ is a closed linear subspace of $X^{n-1}$,
\cite[Lemma~9.2]{Deutsch} and \cref{e:210817c+} yield
$P_D = P_{Z^{n-1}}P_\Delta$ and therefore
\begin{equation}
\label{e:210817f}
P_D\ensuremath{\mathbf{z}}
= P_{Z^{n-1}}P_\Delta\ensuremath{\mathbf{z}}
= \big(P_Z\bar{z},\ldots,P_Z\bar{z}\big).
\end{equation}
Combining \cref{e:210817e}, \cref{f:orthoP}, \cref{e:210817c+},
and \cref{e:210817f} yields \cref{e:210817d}.
Finally, observe that $Q_1(P_E\ensuremath{\mathbf{z}})\in U_1^\perp$ by
\cref{e:bloodyE}.
Thus $Q_1(P_{\ensuremath{\operatorname{Fix}} T}\ensuremath{\mathbf{z}})\in P_Z\bar{z} + U_1^\perp$
and \cref{e:210817g} follows.
\end{proof}
We are now ready for the main convergence result on
the Malitsky-Tam algorithm.
\begin{theorem}[\bf main result on Malitsky-Tam splitting]
Given $0<\lambda<1$ and $\ensuremath{\mathbf{z}}_0=(z_{0,1},\ldots,z_{0,n-1}) \in X^{n-1}$,
generate the sequence $(\ensuremath{\mathbf{z}}_k)_\ensuremath{{k\in{\mathbb N}}}$ via\footnote{Recall \cref{e:linMTM}
and \cref{e:linMT} for the definitions of $M$ and $T$.}
\begin{equation}
(\forall\ensuremath{{k\in{\mathbb N}}})\quad
\ensuremath{\mathbf{z}}_{k+1} := (1-\lambda)\ensuremath{\mathbf{z}}_k + \lambda T\ensuremath{\mathbf{z}}_k.
\end{equation}
Set
\begin{equation}
p := \frac{1}{n-1}\big(z_{0,1}+\cdots+z_{0,n-1}\big).
\end{equation}
Then there exists $\bar{\ensuremath{\mathbf{z}}}\in X^{n-1}$ such that
\begin{equation}
\ensuremath{\mathbf{z}}_k \to \bar{\ensuremath{\mathbf{z}}} \in \ensuremath{\operatorname{Fix}} T,
\end{equation}
and
\begin{equation}
\label{e:210817h}
M\ensuremath{\mathbf{z}}_k \to M\bar{\ensuremath{\mathbf{z}}} = (P_Zp,\ldots,P_Zp) \in X^n.
\end{equation}
In particular,
\begin{equation}
\label{e:210817i}
P_1(Q_1\ensuremath{\mathbf{z}}_k)= Q_1M\ensuremath{\mathbf{z}}_k \to P_Z(p) = \tfrac{1}{n-1}P_Z\big(z_{0,1}+\cdots+z_{0,n-1}\big).
\end{equation}
Consequently, if $x_0\in X$ and
$\ensuremath{\mathbf{z}}_0 = (x_0,\ldots,x_0)\in X^{n-1}$,
then
\begin{equation}
\label{e:yayyay}
P_1Q_1\ensuremath{\mathbf{z}}_k \to P_Zx_0.
\end{equation}
\end{theorem}
\begin{proof}
Set $T_\lambda := (1-\lambda)\ensuremath{\operatorname{Id}}+\lambda T$ and observe that
$(\ensuremath{\mathbf{z}}_k)_\ensuremath{{k\in{\mathbb N}}} = (T_\lambda^k\ensuremath{\mathbf{z}}_0)_\ensuremath{{k\in{\mathbb N}}}$.
Hence, by \cref{c:key} and \cref{mtfixlemma},
\begin{subequations}
\begin{align}
\ensuremath{\mathbf{z}}_k&\to P_{\ensuremath{\operatorname{Fix}} T_\lambda}\ensuremath{\mathbf{z}}_0=P_{\ensuremath{\operatorname{Fix}} T}\ensuremath{\mathbf{z}}_0\\
&=(P_Zp,\ldots,P_Zp)\oplus P_E(\ensuremath{\mathbf{z}}_0),
\end{align}
\end{subequations}
where $E$ is as in \cref{mtfixlemma}.
Hence, using also \cref{e:210817g},
\begin{subequations}
\begin{align}
Q_1M\ensuremath{\mathbf{z}}_k &= P_1Q_1\ensuremath{\mathbf{z}}_k \\
&\to P_1Q_1\big((P_Zp,\ldots,P_Zp)\oplus P_E(\ensuremath{\mathbf{z}}_0)\big)\\
&=P_1\big(P_Zp+Q_1(P_E(\ensuremath{\mathbf{z}}_0))\big)\\
&\in P_1\big(P_Zp+U_1^\perp\big)\\
&=\{P_1P_Zp\}\\
&=\{P_Zp\},
\end{align}
\end{subequations}
i.e., $Q_1M\ensuremath{\mathbf{z}}_k\to P_Zp$.
Now \cref{e:210817a} yields
$Q_iM\ensuremath{\mathbf{z}}_k\to P_Zp$ for every $i\in\{1,\ldots,n\}$.
This yields \cref{e:210817h} and \cref{e:210817i}.
The ``Consequently'' part is clear because when
$\ensuremath{\mathbf{z}}_0$ has this special form, then $p=x_0$.
\end{proof}
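The ``Consequently'' part can be probed numerically as well. The sketch below (our illustration; random planes through a common line and the iteration count are assumptions) runs the Malitsky-Tam iteration for $n=3$ with $\mathbf{z}_0=(x_0,x_0)$ and checks \cref{e:yayyay}:

```python
import numpy as np

rng = np.random.default_rng(7)

def proj(A):
    return A @ np.linalg.pinv(A)

# n = 3: planes U_1, U_2, U_3 in R^5 through span(common), so generically
# Z = U_1 ∩ U_2 ∩ U_3 = span(common).
common = rng.standard_normal((5, 1))
P1 = proj(np.hstack([common, rng.standard_normal((5, 1))]))
P2 = proj(np.hstack([common, rng.standard_normal((5, 1))]))
P3 = proj(np.hstack([common, rng.standard_normal((5, 1))]))
PZ = proj(common)

lam, x0 = 0.5, rng.standard_normal(5)
z1, z2 = x0.copy(), x0.copy()        # z_0 = (x_0, x_0)
for _ in range(200_000):
    x1 = P1 @ z1                     # Q_1 M z
    x2 = P2 @ (x1 + z2 - z1)         # Q_2 M z
    x3 = P3 @ (x1 + x2 - z2)         # Q_3 M z
    z1, z2 = z1 + lam * (x2 - x1), z2 + lam * (x3 - x2)

# Theorem: P_1 Q_1 z_k -> P_Z x_0.
print(np.linalg.norm(P1 @ z1 - PZ @ x0))
```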
\subsection{Campoy splitting}
\label{ss:Clin}
Let $n\in\{3,4,\ldots\}$.
In this subsection, we assume that
$U_1,\ldots,U_n$ are closed linear subspaces of $X$.
We set
\begin{equation}
(\forall i\in\{1,2,\ldots,n\})
\quad
A_i := N_{U_i} \;\;\text{and}\;\;
P_i := P_{U_i}.
\end{equation}
Then
\begin{equation}
Z := \ensuremath{\operatorname{zer}}(A_1+\cdots+A_n) = U_1\cap \cdots \cap U_n.
\end{equation}
By \cref{e:f:C},
\begin{subequations}
\begin{align}
J_{\ensuremath{{\mathbf{A}}}}\colon (x_1,\ldots,x_{n-1})&\mapsto E\bigg(P_n\Big(
\tfrac{1}{n-1}\sum_{i=1}^{n-1}x_i\Big)\bigg)\label{e:211012b},\\
J_{\ensuremath{{\mathbf{B}}}} \colon (x_1,\ldots,x_{n-1})&\mapsto
\big(P_{1}x_1,\ldots,P_{n-1}x_{n-1} \big). \label{e:211012d}
\end{align}
\end{subequations}
Now recall from \cref{e:embed} that $E\colon x\mapsto (x,\dots,x)$, and denote
by $\Delta:=\menge{(x,\dots,x)\in X^{n-1}}{x\in X}$ the diagonal in $X^{n-1}$.
We are now ready for the following result.
\begin{lemma}
\label{l:Campoycvg}
Set $\widetilde{U} := E(U_n) = U_n^{n-1}\cap\Delta\subseteq X^{n-1}$ and
$\widetilde{V} := U_1\times\cdots\times U_{n-1}\subseteq X^{n-1}$.
Then for every $\ensuremath{\mathbf{z}} = (z_1,\ldots,z_{n-1})\in X^{n-1}$ and
$\bar{z}=\tfrac{1}{n-1}\sum_{i=1}^{n-1}z_i$, we have
\begin{subequations}
\label{e:211012ac}
\begin{align}
\label{e:211012a}
J_{\ensuremath{{\mathbf{A}}}}\ensuremath{\mathbf{z}} &= P_{U_n^{n-1}}P_\Delta\ensuremath{\mathbf{z}} = (P_n\bar{z},\ldots,P_n\bar{z})
= P_{\widetilde{U}}\ensuremath{\mathbf{z}},\\
J_{\ensuremath{{\mathbf{B}}}}\ensuremath{\mathbf{z}} &= P_{U_1\times\cdots\times U_{n-1}}\ensuremath{\mathbf{z}}
= P_{\widetilde{V}}\ensuremath{\mathbf{z}}, \label{e:211012c}\\
T &= \ensuremath{T_{\text{\scriptsize C}}} = \ensuremath{\operatorname{Id}}-2J_{\ensuremath{{\mathbf{A}}}} + 2J_{\ensuremath{{\mathbf{B}}}}R_{\ensuremath{{\mathbf{A}}}}, \label{e:211012f2}
\end{align}
\end{subequations}
and
\begin{equation}
\label{e:220201a}
\ensuremath{{\mathbf{A}}} = N_{\widetilde{U}}
\;\;\text{and}\;\;
\ensuremath{{\mathbf{B}}} = N_{\widetilde{V}}.
\end{equation}
Moreover,
$\widetilde{U}\cap \widetilde{V}= Z^{n-1}\cap\Delta$,
$P_{\ensuremath{\operatorname{Fix}} T} = P_{\widetilde{U}\cap \widetilde{V}}
\oplus P_{{\widetilde{U}}^\perp \cap {\widetilde{V}}^\perp}$,
and
\begin{equation}
\label{e:211012h}
J_{\ensuremath{{\mathbf{A}}}}P_{\ensuremath{\operatorname{Fix}} T}\ensuremath{\mathbf{z}} = P_{E(Z)}\ensuremath{\mathbf{z}} = E(P_Z\bar{z}).
\end{equation}
\end{lemma}
\begin{proof}
It is clear from \cref{e:211012b} that $J_{\ensuremath{{\mathbf{A}}}}\ensuremath{\mathbf{z}} = P_{U_n^{n-1}}P_\Delta\ensuremath{\mathbf{z}}$.
Note that $P_{U_n^{n-1}}(\Delta) \subseteq \Delta$.
It thus follows from \cite[Lemma~9.2]{Deutsch} that
\begin{equation}\label{e:211115a}
P_\Delta P_{U_n^{n-1}} = P_{U_n^{n-1}}P_\Delta = P_{U_n^{n-1}\cap \Delta}
= P_{\widetilde{U}},
\end{equation}
and \cref{e:211012a} follows.
The formula for \cref{e:211012c} is clear from \cref{e:211012d}.
It is clear from \cref{e:CT} and \cref{e:CF} that $\ensuremath{F_{\text{\scriptsize C}}}$ is the Douglas-Rachford operator
for the pair $(\ensuremath{{\mathbf{A}}},\ensuremath{{\mathbf{B}}})$, which has the same fixed point set as the Campoy operator $\ensuremath{T_{\text{\scriptsize C}}}$.
In view of \cref{e:220201a}, this is the feasibility case applied to the
pair of subspaces $(\widetilde{U},\widetilde{V})$.
Note that
\begin{equation}
\label{e:211012e}
\widetilde{U}\cap \widetilde{V} = E(Z) = Z^{n-1}\cap \Delta.
\end{equation}
By \cite[Proposition~3.6]{BBCNPW1},
$\ensuremath{\operatorname{Fix}} T = (\widetilde{U}\cap \widetilde{V})\oplus
({\widetilde{U}}^\perp \cap {\widetilde{V}}^\perp)$,
\begin{align}
\label{e:CPFixT}
P_{\ensuremath{\operatorname{Fix}} T}= P_{\widetilde{U}\cap \widetilde{V}}
\oplus P_{{\widetilde{U}}^\perp \cap {\widetilde{V}}^\perp},
\end{align}
and
\begin{align}\label{e:211115b}
J_{\ensuremath{{\mathbf{A}}}}P_{\ensuremath{\operatorname{Fix}} T} = P_{\widetilde{U}}P_{\ensuremath{\operatorname{Fix}} T} =
P_{\widetilde{U}\cap \widetilde{V}}=P_{E(Z)} = P_{Z^{n-1}\cap \Delta}
= P_{Z^{n-1}}P_\Delta,
\end{align}
where the rightmost identity in \cref{e:211115b}
follows
from the same argument as in the proof of \cref{e:211115a}.
\end{proof}
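The commuting-projector identity \cref{e:211115a} is easy to confirm numerically in the case $n-1=2$. The following NumPy sketch (the subspace $W$ is a randomly generated stand-in, purely for illustration) builds $P_\Delta$ and $P_{W^2}$ and checks that their product is the orthogonal projector onto $W^2\cap\Delta$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
B = rng.standard_normal((d, 2))        # W = ran B, a random subspace of R^4
PW = B @ np.linalg.pinv(B)             # orthogonal projector onto W

I, O = np.eye(d), np.zeros((d, d))
P_Delta = 0.5 * np.block([[I, I], [I, I]])    # projector onto the diagonal of X^2
P_W2 = np.block([[PW, O], [O, PW]])           # projector onto W x W

# P_{W^2} maps the diagonal into itself, so the two projectors commute and
# their product is the orthogonal projector onto the intersection
prod = P_W2 @ P_Delta
assert np.allclose(prod, P_Delta @ P_W2)      # commutation
assert np.allclose(prod, prod @ prod)         # idempotent
assert np.allclose(prod, prod.T)              # symmetric, hence a projector
```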
\begin{theorem}[\bf main result on Campoy splitting]
Given $\ensuremath{\mathbf{z}}_0=(z_{0,1},\ldots,z_{0,n-1}) \in X^{n-1}$ and
$0<\lambda<1$,
generate the sequence $(\ensuremath{\mathbf{z}}_k)_\ensuremath{{k\in{\mathbb N}}}$ via\footnote{Recall \cref{e:211012f2}
for the definition of $T$.}
\begin{equation}
(\forall\ensuremath{{k\in{\mathbb N}}})\quad
\ensuremath{\mathbf{z}}_{k+1} := (1-\lambda)\ensuremath{\mathbf{z}}_k + \lambda T\ensuremath{\mathbf{z}}_k.
\end{equation}
Set
\begin{equation}
\bar{z} := \frac{1}{n-1}\big(z_{0,1}+\cdots+z_{0,n-1}\big).
\end{equation}
Then
\begin{equation}
\label{e:211012f}
{\ensuremath{\mathbf{z}}}_k \to P_{\ensuremath{\operatorname{Fix}} T}\ensuremath{\mathbf{z}}_0 \in \ensuremath{\operatorname{Fix}} T
\end{equation}
and
\begin{equation}
\label{e:211012g}
M\ensuremath{\mathbf{z}}_k = J_{\ensuremath{{\mathbf{A}}}}\ensuremath{\mathbf{z}}_k \to J_{\ensuremath{{\mathbf{A}}}}P_{\ensuremath{\operatorname{Fix}} T}{\ensuremath{\mathbf{z}}}_0 = (P_Z\bar{z},\ldots,P_Z\bar{z}) \in X^{n-1}.
\end{equation}
\end{theorem}
\begin{proof}
Because $T$ is nonexpansive (see \cref{f:C}), we see that
\cref{e:211012f} follows from \cref{c:key}.
Finally, \cref{e:211012g} follows from \cref{e:211012h}.
\end{proof}
For future use in \cref{sec:matrix}, we note that
the operators defined in \cref{e:CampoyM} and \cref{e:CampoyS}
now turn into
\begin{subequations}
\begin{align}
\label{e:CMlin}M
&:
X^{n-1}\to X^{n-1} :
\ensuremath{\mathbf{z}} =
\begin{pmatrix}
z_{1}\\
\vdots\\
z_{n-1}
\end{pmatrix}
\mapsto
\frac{1}{n-1}\begin{pmatrix}\sum_{i=1}^{n-1}P_{n}(z_{i})\\
\vdots\\
\sum_{i=1}^{n-1}P_{n}(z_{i})\end{pmatrix}
\quad\text{and}\;\;\\[+2mm]
\label{e:CSlin}
S&:X^{n-1}\to X^{n-1}:
\ensuremath{\mathbf{z}} = \begin{pmatrix}
z_{1}\\
\vdots\\
z_{n-1}
\end{pmatrix}
\mapsto
\begin{pmatrix}
x_{1}\\
\vdots\\
x_{n-1}
\end{pmatrix},
\quad\text{where}\;\;
\\[+2mm]
&(\forall i\in\{1,\ldots,n-1\})
\;\;
x_{i} = P_{i}(2Q_iM\ensuremath{\mathbf{z}}- z_{i}),
\end{align}
\end{subequations}
while the \emph{Campoy operator} remains (see \cref{e:CampoyT})
\begin{equation}
\label{e:CTlin}
T = \ensuremath{T_{\text{\scriptsize C}}} \colon X^{n-1}\to X^{n-1}\colon
\ensuremath{\mathbf{z}}\mapsto
\ensuremath{\mathbf{z}}+ 2S\ensuremath{\mathbf{z}} - 2M\ensuremath{\mathbf{z}}.
\end{equation}
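To illustrate the iteration in the case $n=3$, the following NumPy sketch (with randomly generated $5$-dimensional subspaces of $\ensuremath{\mathbb R}^6$ as an illustrative assumption) runs $\ensuremath{\mathbf{z}}_{k+1}=(1-\lambda)\ensuremath{\mathbf{z}}_k+\lambda T\ensuremath{\mathbf{z}}_k$ with the operators $M$, $S$, and $T$ displayed above and checks that the shadow $M\ensuremath{\mathbf{z}}_k$ approaches $P_Z\bar{z}$, where $\bar{z}$ is the average of the starting components:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
# three random 5-dimensional subspaces of R^6 (illustrative); their
# intersection Z has dimension at least 5+5+5 - 2*6 = 3
Bs = [rng.standard_normal((d, 5)) for _ in range(3)]
P1, P2, P3 = [B @ np.linalg.pinv(B) for B in Bs]

def T_relaxed(z1, z2, lam=0.5):
    """One relaxed Campoy step for n = 3 (U_3 plays the role of U_n)."""
    m = P3 @ (0.5 * (z1 + z2))          # both components of M z equal P_3(zbar)
    x1 = P1 @ (2 * m - z1)              # the components of S z
    x2 = P2 @ (2 * m - z2)
    t1 = z1 + 2 * x1 - 2 * m            # T = Id + 2S - 2M, componentwise
    t2 = z2 + 2 * x2 - 2 * m
    return (1 - lam) * z1 + lam * t1, (1 - lam) * z2 + lam * t2

z1, z2 = rng.standard_normal(d), rng.standard_normal(d)
zbar0 = 0.5 * (z1 + z2)                 # average of the starting components
for _ in range(20000):
    z1, z2 = T_relaxed(z1, z2)
shadow = P3 @ (0.5 * (z1 + z2))         # one component of M applied to the limit

# compare against P_Z(zbar0), with P_Z computed from a basis of Z = U1∩U2∩U3
A = np.vstack([np.eye(d) - P for P in (P1, P2, P3)])
_, s, Vt = np.linalg.svd(A)
N = Vt[np.sum(s > 1e-10):].T            # orthonormal basis of null(A) = Z
assert np.allclose(shadow, N @ (N.T @ zbar0), atol=1e-4)
```

The final assertion is exactly the claim of \cref{e:211012g} for this instance; the number of iterations and tolerance are chosen generously since the linear rate depends on the angles between the random subspaces.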
\subsection{Extension to the consistent affine case}
In this subsection, we comment on the behaviour of the above
splitting algorithms in the consistent affine case.
To this end, we shall assume that
$V_1,\ldots,V_n$ are closed affine subspaces of $X$ with
nonempty intersection:
\begin{equation}
V := V_1\cap V_2\cap \cdots \cap V_n \neq \varnothing.
\end{equation}
We restate the problem of finding a point in $V$ as
\begin{equation}
\text{find $x\in X$ such that $0\in (A_1+A_2+\cdots+A_n)x$,}
\end{equation}
where each $A_i = N_{V_i}$.
When we consider Ryu splitting, we also impose $n=3$.
Set $U_i:=V_i-V_i$, which is the \emph{parallel space} of $V_i$.
Now let $v\in V$.
Then $V_i = v+U_{i}$ and hence
$J_{N_{V_i}}=P_{V_i} = P_{v+U_i}$ satisfies
$P_{v+U_i}x = v+P_{U_i}(x-v)=P_{U_i}x+P_{U_i^\perp}v$ for every $x\in X$.
Put differently, the resolvents from the affine problem
are translations of the resolvents from the corresponding linear problem
in which $U_i$ takes the place of $V_i$.
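The translation identity for affine projectors can be checked numerically. A minimal NumPy sketch, where the parallel space $U$ and the anchor point $v$ are randomly generated for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
B = rng.standard_normal((d, 2))
PU = B @ np.linalg.pinv(B)              # projector onto the parallel space U = ran B
v = rng.standard_normal(d)              # V = v + U is a closed affine subspace
x = rng.standard_normal(d)

proj_affine = v + PU @ (x - v)          # P_{v+U} x
# the two expressions for the affine projection coincide
assert np.allclose(proj_affine, PU @ x + (np.eye(d) - PU) @ v)
# sanity check: the residual is orthogonal to the parallel space U
assert np.allclose(B.T @ (x - proj_affine), 0)
```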
The construction of the operator $T\in \{\ensuremath{T_{\text{\scriptsize Ryu}}},\ensuremath{T_{\text{\scriptsize MT}}},\ensuremath{T_{\text{\scriptsize C}}}\}$ now shows
that it is a translation of the corresponding operator from the linear problem.
And finally $T_\lambda = (1-\lambda)\ensuremath{\operatorname{Id}}+\lambda T$ is a translation of
the corresponding operator from the linear problem which we denote by $L_\lambda$:
$L_\lambda = (1-\lambda)\ensuremath{\operatorname{Id}}+\lambda L$, where $L$ is
the Ryu operator, the Malitsky-Tam operator, or the Campoy operator
of the parallel linear problem,
and there exists $b\in X^{n-1}$ such that
\begin{equation}
T_\lambda(x) = L_\lambda(x)+b.
\end{equation}
By \cref{f:BLM} (applied in $X^{n-1}$),
there exists a vector $a\in X^{n-1}$ such that
\begin{equation}
\label{e:para1}
(\forall\ensuremath{{k\in{\mathbb N}}})\quad T_\lambda^kx = a + L_\lambda^k(x-a).
\end{equation}
In other words, the behaviour in the affine case is essentially
the same as in the linear parallel case, appropriately shifted by the vector $a$.
Moreover, because $L_\lambda^k\to P_{\ensuremath{\operatorname{Fix}} L}$ in the parallel linear setting,
we deduce from \cref{f:BLM} that
\begin{equation}
T_\lambda^k \to P_{\ensuremath{\operatorname{Fix}} T}.
\end{equation}
By \cref{e:para1}, the rate of convergence in the affine case is identical
to the rate of convergence in the parallel linear case.
Each of our three algorithms under consideration features an operator $M$ ---
see \cref{e:linM}, \cref{e:linMTM}, \cref{e:CMlin} ---
for Ryu splitting, for Malitsky-Tam splitting, for Campoy splitting,
respectively. In all cases, the convergence results established guarantee that
\begin{equation}
\overline{M}T^k_\lambda\ensuremath{\mathbf{z}}_0 \to P_V\overline{z},
\end{equation}
where $\overline{M}$ returns the arithmetic average of the output of $M$,
where $\ensuremath{\mathbf{z}}_0$ is the starting point, and
where $\overline{z}$ is either the first component of $\ensuremath{\mathbf{z}}_0$ (for Ryu splitting)
or the arithmetic average of the components of $\ensuremath{\mathbf{z}}_0$ (for Malitsky-Tam and for
Campoy splitting);
see
\cref{e:210815e},
\cref{e:210817i}, and
\cref{e:211012g}.
To sum up this subsection, we note that
\emph{in the consistent affine case, Ryu's, the Malitsky-Tam, and Campoy's algorithm
each exhibits the same pleasant convergence behaviour as their linear parallel counterparts!}
It is, however, less clear how these algorithms behave when $V=\varnothing$.
\section{Matrix representation}
\label{sec:matrix}
In this section, we assume that $X$ is finite-dimensional,
say
\begin{equation}
X=\ensuremath{\mathbb R}^d.
\end{equation}
All three splitting algorithms considered in this paper are of the form
\begin{equation}
\label{e:210818a}
T_\lambda^k\to P_{\ensuremath{\operatorname{Fix}} T},
\quad\text{where $0<\lambda<1$ and $T_\lambda = (1-\lambda)\ensuremath{\operatorname{Id}}+\lambda T$.}
\end{equation}
Starting from \cref{sec:main}, we have dealt with a special case where
$T$ is a linear operator; hence, so is $T_\lambda$ and
by \cite[Corollary~2.8]{BLM}, the convergence of the iterates
is \emph{linear} because $X$ is finite-dimensional.
What can be said about this rate?
By \cite[Theorem~2.12(ii) and Theorem~2.18]{BBCNPW2}, a (sharp)
\emph{lower bound} for the rate of linear
convergence is the \emph{spectral radius} of $T_\lambda - P_{\ensuremath{\operatorname{Fix}} T}$, i.e.,
\begin{equation}
\rho\big(T_\lambda- P_{\ensuremath{\operatorname{Fix}} T}\big) :=
\max \big|\{\text{spectral values of $T_\lambda- P_{\ensuremath{\operatorname{Fix}} T}$}\}\big|,
\end{equation}
while an \emph{upper bound} is the operator norm
\begin{equation}
\big\|T_\lambda- P_{\ensuremath{\operatorname{Fix}} T}\big\|.
\end{equation}
The lower bound is optimal and close to the true rate of convergence;
see \cite[Theorem~2.12(i)]{BBCNPW2}.
Both spectral radius and operator norms of matrices are available
in programming languages such as \texttt{Julia} \cite{Julia} which features
strong numerical linear algebra capabilities.
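As a generic sketch (in Python/NumPy rather than Julia), the two bounds can be computed as follows; here the matrix $A$ merely stands in for $T_\lambda - P_{\ensuremath{\operatorname{Fix}} T}$ and is randomly generated:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))        # stand-in for T_lambda - P_{Fix T}

spectral_radius = max(abs(np.linalg.eigvals(A)))   # lower bound on the linear rate
operator_norm = np.linalg.norm(A, 2)               # upper bound: largest singular value

# the spectral radius never exceeds the operator norm
assert spectral_radius <= operator_norm + 1e-12
```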
In order to compute these bounds for the linear rates,
we must provide \emph{matrix representations}
for $T$ (which immediately gives rise to one for $T_\lambda$)
and for $P_{\ensuremath{\operatorname{Fix}} T}$.
In the previous sections, we casually switched back and forth
between column and row vector representations for readability.
In this section, we need to get the structure of the objects right.
To visually stress this, we will use \emph{square brackets} for vectors and
matrices.
For the remainder of this section, we fix
three linear subspaces $U,V,W$ of $\ensuremath{\mathbb R}^d$, with intersection
\begin{equation}
Z := U\cap V\cap W.
\end{equation}
We assume that the matrices $P_U,P_V,P_W$ in $\ensuremath{\mathbb R}^{d\times d}$ are available to us
(and hence so are $P_{U^\perp},P_{V^\perp},P_{W^\perp}$ and $P_Z$,
via \cref{ex:perp} and \cref{c:AD3}, respectively).
\subsection{Ryu splitting}
In this subsection, we consider Ryu splitting.
First,
the block matrix representation of the operator $M$ occurring
in Ryu splitting (see \cref{e:linM}) is
\begin{equation}
M = \begin{bmatrix}
P_U \;& 0 \\[0.5em]
P_VP_U \;& P_V\\[0.5em]
P_WP_U+P_WP_VP_U-P_W\;\; & P_WP_V-P_W
\end{bmatrix}\in\ensuremath{\mathbb R}^{3d\times 2d}.
\label{e:RyuMmat}
\end{equation}
Hence, using \cref{e:linT},
we obtain the following matrix representation of the
Ryu splitting operator $T=\ensuremath{T_{\text{\scriptsize Ryu}}}$:
\begin{subequations}
\begin{align}
T &=\textcolor{black}{\begin{bmatrix}\ensuremath{\operatorname{Id}}\;&0\\[0.5em]0\;&\ensuremath{\operatorname{Id}}\end{bmatrix}+}
\begin{bmatrix}
-\ensuremath{\operatorname{Id}} & 0 &\ensuremath{\operatorname{Id}} \\[0.5em]
0 & -\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}}
\end{bmatrix}
\begin{bmatrix}
P_U \;& 0 \\[0.5em]
P_VP_U \;& P_V\\[0.5em]
P_WP_U+P_WP_VP_U-P_W\;\; & P_WP_V-P_W
\end{bmatrix}\\[1em]
&=
\begin{bmatrix}
\textcolor{black}{\ensuremath{\operatorname{Id}}}-P_U+P_WP_U+P_WP_VP_U-P_W &\;\; P_WP_V-P_W\\[0.5em]
P_WP_U+P_WP_VP_U-P_W-P_VP_U & \;\; \textcolor{black}{\ensuremath{\operatorname{Id}}+}P_WP_V-P_V-P_W
\end{bmatrix}\in\ensuremath{\mathbb R}^{2d\times 2d}.
\end{align}
\end{subequations}
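As a sanity check, the factored and the expanded form of $T$ above can be compared numerically. In the sketch below the projectors are built from randomly generated subspaces (an assumption purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4
PU, PV, PW = [B @ np.linalg.pinv(B)                     # projectors onto random subspaces
              for B in (rng.standard_normal((d, 3)) for _ in range(3))]
I, O = np.eye(d), np.zeros((d, d))

# block matrix M of the Ryu splitting, exactly as displayed above
M = np.block([[PU, O],
              [PV @ PU, PV],
              [PW @ PU + PW @ PV @ PU - PW, PW @ PV - PW]])

C = np.block([[-I, O, I],
              [O, -I, I]])                              # the 2d x 3d coefficient block
T_factored = np.eye(2 * d) + C @ M

T_expanded = np.block([
    [I - PU + PW @ PU + PW @ PV @ PU - PW, PW @ PV - PW],
    [PW @ PU + PW @ PV @ PU - PW - PV @ PU, I + PW @ PV - PV - PW]])

assert np.allclose(T_factored, T_expanded)              # the two derivations agree
```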
Next, we set, as in \cref{firelemma},
\begin{subequations}
\begin{align}
\Delta &= \menge{[x,x]^\intercal\in \ensuremath{\mathbb R}^{2d}}{x\in X},\\
E&=\big(U^\perp\times V^\perp) \cap \big(\Delta^\perp+(\{0\}\times W^\perp) \big)\label{e:RyuE}
\end{align}
\end{subequations}
so that, by \cref{e:210815c},
\begin{equation}
\label{e:RyuE4}
P_{\ensuremath{\operatorname{Fix}} T}\begin{bmatrix}x\\y\end{bmatrix} =
\begin{bmatrix}P_{Z}x\\0\end{bmatrix}+ P_E\begin{bmatrix}x\\y\end{bmatrix}.
\end{equation}
With the help of \cref{c:AD3}, we see that the first term, $[P_{Z}x,0]^\intercal$,
is obtained by applying the matrix
\begin{equation}
\label{e:RyuE1}
\begin{bmatrix}
P_Z & 0 \\
0 & 0
\end{bmatrix}
=
\begin{bmatrix}
4P_U(P_U+P_V)^\dagger P_V\big(2P_U(P_U+P_V)^\dagger P_V+P_W\big)^\dagger P_W & 0 \\
0 & 0
\end{bmatrix}
\in\ensuremath{\mathbb R}^{2d\times 2d}
\end{equation}
to $[x,y]^\intercal$.
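The intersection formula embedded in \cref{e:RyuE1} can be tested numerically for a pair of subspaces: the product $2P_U(P_U+P_V)^\dagger P_V$ should be the orthogonal projector onto $U\cap V$. A NumPy sketch with randomly generated $U$ and $V$ (illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 5
BU, BV = rng.standard_normal((d, 3)), rng.standard_normal((d, 4))
PU, PV = BU @ np.linalg.pinv(BU), BV @ np.linalg.pinv(BV)

# candidate projector onto U ∩ V (here dim(U ∩ V) >= 3 + 4 - 5 = 2)
PZ = 2 * PU @ np.linalg.pinv(PU + PV) @ PV

assert np.allclose(PZ, PZ.T, atol=1e-8)       # symmetric
assert np.allclose(PZ, PZ @ PZ, atol=1e-8)    # idempotent
assert np.allclose(PU @ PZ, PZ, atol=1e-8)    # its range lies in U ...
assert np.allclose(PV @ PZ, PZ, atol=1e-8)    # ... and in V
```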
Let's turn to $E$, which is an intersection of two linear subspaces.
The projector of the left linear subspace making up this intersection, $U^\perp\times V^\perp$,
has the matrix representation
\begin{equation}
\label{e:RyuE3}
P_{U^\perp \times V^\perp} =
\begin{bmatrix}
\ensuremath{\operatorname{Id}}-P_U & 0 \\
0 & \ensuremath{\operatorname{Id}}-P_V
\end{bmatrix}.
\end{equation}
We now turn to the right linear subspace,
$\Delta^\perp+(\{0\}\times W^\perp)$, which is a sum
of two subspaces whose orthogonal complements are
$\Delta^{\perp\perp}=\Delta$ and
$(\{0\}\times W^\perp)^\perp = X\times W$, respectively.
The projectors of the last two subspaces are
\begin{equation}
P_\Delta =
\frac{1}{2}
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}} \\
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}}
\end{bmatrix}
\;\;\text{and}\;\;
P_{X\times W} =
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & 0 \\
0 & P_W
\end{bmatrix},
\end{equation}
respectively.
Thus, \cref{c:ADsum} yields
\begin{subequations}
\label{e:RyuE2}
\begin{align}
&P_{\Delta^\perp+(\{0\}\times W^\perp)}\\
&=
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & 0 \\
0 & \ensuremath{\operatorname{Id}}
\end{bmatrix}
- 2\cdot \frac{1}{2}
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}} \\
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}}
\end{bmatrix}
\left(
\frac{1}{2}
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}} \\
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}}
\end{bmatrix}
+
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & 0 \\
0 & P_W
\end{bmatrix}
\right)^\dagger
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & 0 \\
0 & P_W
\end{bmatrix}\\
&=
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & 0 \\
0 & \ensuremath{\operatorname{Id}}
\end{bmatrix}
- 2
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}} \\
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}}
\end{bmatrix}
\begin{bmatrix}
{3}\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}} \\
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}}+2P_W
\end{bmatrix}^\dagger
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & 0 \\
0 & P_W
\end{bmatrix}.
\end{align}
\end{subequations}
To compute $P_E$, where $E$ is as in \cref{e:RyuE},
we combine \cref{e:RyuE3}, \cref{e:RyuE2} under the umbrella
of \cref{f:AD} --- the result does not seem to simplify so
we don't typeset it.
Having $P_E$, we simply add it to \cref{e:RyuE1} to obtain
$P_{\ensuremath{\operatorname{Fix}} T}$ because of \cref{e:RyuE4}.
\subsection{Malitsky-Tam splitting}
In this subsection, we turn to Malitsky-Tam splitting for the current setup
--- this corresponds to \cref{ss:MTlin} with $n=3$ and where
we identify $(U_1,U_2,U_3)$ with $(U,V,W)$.
The block matrix representation of $M$ from \cref{e:linMTM} is
\begin{equation}
\begin{bmatrix}
P_U \;& 0 \\[0.5em]
-P_V(\ensuremath{\operatorname{Id}}-P_U) \;& P_V\\[0.5em]
P_W(P_U+P_VP_U-P_V)\;\; & -P_W(\ensuremath{\operatorname{Id}}-P_V)
\end{bmatrix}
\in\ensuremath{\mathbb R}^{3d\times 2d}.
\label{e:MTMmat}
\end{equation}
Thus, using \cref{e:linMT},
we obtain the following matrix representation
of the Malitsky-Tam splitting operator $T=\ensuremath{T_{\text{\scriptsize MT}}}$:
\begin{subequations}
\begin{align}
T &=
\textcolor{black}{\begin{bmatrix}\ensuremath{\operatorname{Id}}\;&0\\[0.5em]0\;&\ensuremath{\operatorname{Id}}\end{bmatrix}+}
\begin{bmatrix}
-\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}} & 0 \\[0.5em]
0 & -\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}}
\end{bmatrix}
\begin{bmatrix}
P_U \;& 0 \\[0.5em]
-P_V(\ensuremath{\operatorname{Id}}-P_U) \;& P_V\\[0.5em]
P_W(P_U+P_VP_U-P_V)\;\; & -P_W(\ensuremath{\operatorname{Id}}-P_V)
\end{bmatrix}\\[1em]
&=
\begin{bmatrix}
\textcolor{black}{\ensuremath{\operatorname{Id}}}-P_U-P_V(\ensuremath{\operatorname{Id}}-P_U) &\;\; P_V\\[0.5em]
P_V(\ensuremath{\operatorname{Id}}-P_U)+P_W(P_U+P_VP_U-P_V) & \;\; \textcolor{black}{\ensuremath{\operatorname{Id}}}-P_V-P_W(\ensuremath{\operatorname{Id}}-P_V)
\end{bmatrix}\\[1em]
&=
\begin{bmatrix}
(\ensuremath{\operatorname{Id}}-P_V)(\ensuremath{\operatorname{Id}}-P_U) &\;\; P_V\\[0.5em]
(\ensuremath{\operatorname{Id}}-P_W)P_V(\ensuremath{\operatorname{Id}}-P_U)+P_WP_U & \;\; (\ensuremath{\operatorname{Id}}-P_W)(\ensuremath{\operatorname{Id}}-P_U)
\end{bmatrix}\in\ensuremath{\mathbb R}^{2d\times 2d}.
\end{align}
\end{subequations}
Next, in view of \cref{e:210817d},
we have
\begin{equation}
\label{e:zahn4}
P_{\ensuremath{\operatorname{Fix}} T}
= \frac{1}{2}
\begin{bmatrix}
P_Z & P_Z \\
P_Z & P_Z
\end{bmatrix}
+ P_E,
\end{equation}
where (see \cref{e:bloodyE} and \cref{e:bloodyPsi})
\begin{equation}
\label{e:zahn0}
E = \ensuremath{{\operatorname{ran}}\,}\Psi \cap (X\times W^\perp)
\end{equation}
and
\begin{equation}
\Psi \colon U^\perp \times V^\perp \to X^2
\colon \begin{bmatrix} y_1\\y_2\end{bmatrix}
\mapsto \begin{bmatrix} y_1\\y_1+y_2\end{bmatrix}.
\end{equation}
We first note that
\begin{equation}
\ensuremath{{\operatorname{ran}}\,} \Psi = \ensuremath{{\operatorname{ran}}\,}
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & 0 \\
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}}
\end{bmatrix}
\begin{bmatrix}
P_{U^\perp} & 0 \\
0 & P_{V^\perp}
\end{bmatrix}
=\ensuremath{{\operatorname{ran}}\,}
\begin{bmatrix}
P_{U^\perp} & 0 \\
P_{U^\perp} & P_{V^\perp}
\end{bmatrix}.
\end{equation}
We thus obtain from \cref{f:Pran} that
\begin{equation}
\label{e:zahn1}
P_{\ensuremath{{\operatorname{ran}}\,}\Psi} = \begin{bmatrix}
P_{U^\perp} & 0 \\
P_{U^\perp} & P_{V^\perp}
\end{bmatrix}
\begin{bmatrix}
P_{U^\perp} & 0 \\
P_{U^\perp} & P_{V^\perp}
\end{bmatrix}^\dagger.
\end{equation}
On the other hand,
\begin{equation}
\label{e:zahn2}
P_{X\times W^\perp}
= \begin{bmatrix}
\ensuremath{\operatorname{Id}} & 0 \\
0 & P_{W^\perp}
\end{bmatrix}.
\end{equation}
In view of \cref{e:zahn0}
and \cref{f:AD}, we obtain
\begin{equation}
\label{e:zahn3}
P_E = 2P_{\ensuremath{{\operatorname{ran}}\,}\Psi}\big( P_{\ensuremath{{\operatorname{ran}}\,}\Psi}+P_{X\times W^\perp}\big)^\dagger
P_{X\times W^\perp}.
\end{equation}
We could
now use our formulas \cref{e:zahn1} and \cref{e:zahn2}
for $P_{\ensuremath{{\operatorname{ran}}\,}\Psi}$ and $P_{X\times W^\perp}$
to obtain a more explicit formula for $P_E$ ---
but we refrain from doing so as the expressions become unwieldy.
Finally, plugging the formula for $P_Z$ from \cref{c:AD3}
into \cref{e:zahn4} as well as plugging \cref{e:zahn3} into \cref{e:zahn4}
yields a formula for $P_{\ensuremath{\operatorname{Fix}} T}$.
\subsection{Campoy splitting}
In this subsection, we look at the Campoy splitting for the current setup
--- this corresponds to \cref{ss:MTlin} with $n=3$ and where
we identify $(U_1,U_2,U_3)$ with $(U,V,W)$.
Using the linearity of $P_W$, we see that
the block matrix representation of $M$ from \cref{e:CMlin} is
\begin{equation}\label{e:CMmat}
M=\frac12\begin{bmatrix}
P_W&P_W\\P_W&P_W
\end{bmatrix}.
\end{equation}
We then write $S$ from \cref{e:CSlin} as
\begin{subequations}
\begin{align}
S&=\begin{bmatrix}
P_U&0\\
0&P_V
\end{bmatrix}\parens*{2M
-\begin{bmatrix}
\ensuremath{\operatorname{Id}}&0\\0&\ensuremath{\operatorname{Id}}
\end{bmatrix}}\\
&=\begin{bmatrix}
P_U&0\\
0&P_V
\end{bmatrix}{\begin{bmatrix}
P_W-\ensuremath{\operatorname{Id}}&P_W\\P_W&P_W-\ensuremath{\operatorname{Id}}
\end{bmatrix}}\\
&=\begin{bmatrix}
P_UP_W-P_U&P_UP_W\\P_VP_W&P_VP_W-P_V
\end{bmatrix}.
\end{align}
\end{subequations}
Therefore, using \cref{e:CTlin}, we see that
the Campoy operator $T=T_C$ is expressed as
\begin{subequations}
\begin{align}
T
&=\begin{bmatrix}
\ensuremath{\operatorname{Id}}&0\\0&\ensuremath{\operatorname{Id}}\end{bmatrix}
+2S
-2M\\
&= \begin{bmatrix}
\ensuremath{\operatorname{Id}}&0\\0&\ensuremath{\operatorname{Id}}\end{bmatrix}
+ 2\begin{bmatrix}
P_UP_W-P_U&P_UP_W\\P_VP_W&P_VP_W-P_V
\end{bmatrix}
- \begin{bmatrix}P_W & P_W\\P_W & P_W\end{bmatrix}
\\
&=\begin{bmatrix}
\ensuremath{\operatorname{Id}}+2P_UP_W-2P_U - P_W& 2P_UP_W-P_W\\
2P_VP_W- P_W&\ensuremath{\operatorname{Id}}+2P_VP_W-2P_V-P_W
\end{bmatrix}.
\end{align}
\end{subequations}
We set, as in \cref{l:Campoycvg},
\begin{equation}
\widetilde{U} := W^2 \cap \Delta
\;\;\text{and}\;\;
\widetilde{V} := U\times V,
\end{equation}
where $\Delta$ is the diagonal in $X^2$, and we obtain from \cref{e:211012ac}
\begin{align}
\label{e:211116a}
P_{\widetilde{U}}=
P_{W^2}P_\Delta =
\begin{bmatrix}
P_W & 0 \\ 0 & P_W
\end{bmatrix}
\frac{1}{2}
\begin{bmatrix}
\ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}} \\ \ensuremath{\operatorname{Id}} & \ensuremath{\operatorname{Id}}
\end{bmatrix}
=\frac{1}{2}
\begin{bmatrix}
P_W & P_W \\ P_W & P_W
\end{bmatrix}
\quad\text{and}\quad P_{\widetilde{V}}=\begin{bmatrix}
P_U & 0 \\ 0 & P_V
\end{bmatrix}.
\end{align}
We now use these projection formulas along with
\cref{l:Campoycvg}, \cref{f:AD}, and \cref{ex:perp} to obtain
\begin{subequations}
\label{e:211116b}
\begin{align}
P_{\ensuremath{\operatorname{Fix}} T}&=P_{\widetilde{U}\cap \widetilde{V}}+
P_{\widetilde{U}^\perp\cap \widetilde{V}^\perp}\\
&=2P_{\widetilde{U}}(P_{\widetilde{U}}+P_{\widetilde{V}})^\dagger P_{\widetilde{V}}+
2(\ensuremath{\operatorname{Id}}-P_{\widetilde{U}})(2\ensuremath{\operatorname{Id}}-P_{\widetilde{U}} -P_{\widetilde{V}})^\dagger
(\ensuremath{\operatorname{Id}}-P_{\widetilde{V}}).
\end{align}
\end{subequations}
If desired, one may express $P_{\ensuremath{\operatorname{Fix}} T}$ in terms of $P_U,P_V,P_W$
by substituting \cref{e:211116a} into \cref{e:211116b};
however, due to limited space, we refrain from listing the outcome.
\section{Numerical experiments}
\label{sec:numexp}
We now describe several experiments to evaluate the performance
of the algorithms described in \cref{sec:matrix}.
Each instance of an experiment involves three subspaces $U_i$ of dimension $d_i$ for $i\in\{1,2,3\}$ in $X=\ensuremath{\mathbb R}^d$.
By \cite[equation~(4.419) on page~205]{Meyer},
\begin{equation}
\dim(U_1+ U_2)=d_1+d_2-\dim(U_1\cap U_2).
\end{equation}
Hence
\begin{equation}
\dim(U_1\cap U_2)=d_1+d_2-\dim(U_1+U_2)\geq d_1+d_2-d.
\end{equation}
Thus $\dim(U_1\cap U_2)\geq 1$ whenever
\begin{equation}\label{e:U1U2dim}
d_1+d_2\geq d+1.
\end{equation}
Similarly,
\begin{equation}
\dim(Z)\geq \dim(U_1\cap U_2)+d_3-d\geq d_1+d_2-d+d_3-d=d_1+d_2+d_3-2d.
\end{equation}
Along with \cref{e:U1U2dim}, a sensible choice for $d_i$ satisfies
\begin{equation}
d_i\geq 1+\lceil2d/3\rceil
\end{equation}
because then $d_1+d_2\geq 2+2\lceil 2d/3\rceil\geq 2+4d/3>2+d$.
Hence $d_1+d_2\geq 3+d$ and $d_1+d_2+d_3\geq 3+3\lceil 2d/3\rceil\geq 3+2d$.
The smallest $d$ that gives proper subspaces
is $d=6$, for which $d_1=d_2=d_3=5$ satisfy the above conditions.
We now describe our set of three numerical experiments designed to
observe different aspects of the algorithms.
\subsection{Experiment 1: Bounds on the rates of linear convergence}
As shown in \cref{sec:matrix}, we have lower and upper bounds on the rate of
linear convergence of the operator $T_\lambda$.
We conduct this experiment to observe how these bounds change as we vary
$\lambda$. To this end, we
generate 1000 instances of triples of linear subspaces $(U_1,U_2,U_3)$.
This was done by randomly generating triples of matrices
$(B_1,B_2,B_3)$, each drawn from $\ensuremath{\mathbb R}^{6\times 5}$.
These matrices define the range spaces $U_i=\ensuremath{{\operatorname{ran}}\,} B_i$,
which in turn gave us the projection onto $U_i$
(using, e.g., \cite[Proposition~3.30(ii)]{BC2017}) as
\begin{equation}
P_{U_i}=B_iB_i^\dagger.
\end{equation}
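As a quick check that $B_iB_i^\dagger$ is indeed the orthogonal projector onto $\ensuremath{{\operatorname{ran}}\,} B_i$, one can verify symmetry, idempotence, and that the columns of $B_i$ are fixed; a short NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
B = rng.standard_normal((6, 5))        # a random B_i as in the experiment
P = B @ np.linalg.pinv(B)              # candidate for P_{U_i}, U_i = ran B

assert np.allclose(P, P.T)             # symmetric
assert np.allclose(P, P @ P)           # idempotent
assert np.allclose(P @ B, B)           # leaves ran B pointwise fixed
```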
For each instance, algorithm, and $\lambda\in
\tmenge{0.01\cdot k}{k\in\{1,2,\ldots,110\}}$,
we obtain the operators $T_\lambda$ and $P_{\ensuremath{\operatorname{Fix}} T}$ as outlined in \cref{sec:matrix}
and compute the spectral radius and operator norm of
$T_\lambda-P_{\ensuremath{\operatorname{Fix}} T}$.
Note that the convergence of the algorithms is only guaranteed for $\lambda\in
\tmenge{0.01\cdot k}{k\in\{1,2,\ldots,99\}}$,
but we have plotted beyond this range
to observe the behaviour of the algorithms.
\cref{fig:exp1} reports the median of the spectral radii and operator norms for each $\lambda$.
\begin{figure}[ht!]
\centering
\includegraphics[width=\textwidth]{Exp1.pdf}
\caption{Experiment 1: The median (solid line) and range (shaded region) of the spectral radii and operator norms
}
\label{fig:exp1}
\end{figure}
For the spectral radius, while Ryu shows a steady decline in the median value for $\lambda< 1$,
Malitsky-Tam and Campoy start increasing well before $\lambda=1$.
The maximum value, visible through the shaded range for each algorithm,
stays below, but close to, $1$ for all $\lambda<1$.
The operator norm plot, which provides the upper bound for the convergence rates,
also stays below 1 for $\lambda<1$.
For MT, the sharp edge in the spectral radius curve appears because the spectral radius
is a maximum over different spectral values, and the maximizer changes with $\lambda$.
By visual inspection, we see that for Campoy the lower and upper bounds coincide
--- we weren't aware of this beforehand and are now able to provide a rigorous proof in
\cref{r:radial} below.
\begin{remark}{\bf (relaxed Douglas-Rachford operator)}
\label{r:radial}
Suppose that $T=\ensuremath{T_{\text{\scriptsize C}}}$ is the Campoy operator, which implies
that $T_\lambda = (1-\lambda)\ensuremath{\operatorname{Id}}+\lambda T$ is
the relaxed Douglas-Rachford operator for finding a zero of
the sum of two normal cone operators of two linear subspaces.
By \cite[Theorem~3.10(i)]{BBCNPW2}, the matrix $T_\lambda$ is normal, i.e.,
$T_\lambda T_\lambda^* = T_\lambda^* T_\lambda$.
Moreover, \cite[Proposition~3.6(i)]{BBCNPW1} yields
$\ensuremath{\operatorname{Fix}} T = \ensuremath{\operatorname{Fix}} T_\lambda = \ensuremath{\operatorname{Fix}} T_\lambda^* = \ensuremath{\operatorname{Fix}} T^*$.
Set $P := P_{\ensuremath{\operatorname{Fix}} T} = P_{\ensuremath{\operatorname{Fix}} T^*}$ for brevity.
Then $P=P^*$ and $T_\lambda P = P = T_\lambda^*P$.
Taking the transpose yields $PT_\lambda^* = P = PT_\lambda$.
Thus
\begin{subequations}
\begin{align}
(T_\lambda-P)(T_\lambda-P)^*
&=
T_\lambda T_\lambda^* - T_\lambda P - PT_\lambda^*+P\\
&=
T_\lambda T_\lambda^* - P\\
&=
T_\lambda^* T_\lambda - P\\
&=
T_\lambda^* T_\lambda - T_\lambda^*P - PT_\lambda + P\\
&=
(T_\lambda-P)^*(T_\lambda-P),
\end{align}
\end{subequations}
i.e., $T_\lambda-P$ is normal as well.
Now \cite[page~431f]{GoldZwas} implies that the matrix
$T_\lambda-P$ is \emph{radial}, i.e., its spectral radius coincides
with its operator norm.
This explains that the two (green) curves for the Campoy algorithm in
\cref{fig:exp1} are identical.
\end{remark}
\subsection{Experiment 2: Number of iterations to achieve prescribed accuracy}
Because we know
the limit points of the governing as well as the shadow sequences,
we investigate how varying $\lambda$ affects the number of iterations
required to approximate
the limit to a given accuracy.
For Experiment~2, we fix 100 instances of triples of subspaces $(U_1,U_2,U_3)$.
We also fix 100 different starting points in $\ensuremath{\mathbb R}^6$.
For each instance of the subspaces, starting point $\ensuremath{\mathbf{z}}_0$ and
$\lambda\in\tmenge{0.01\cdot k}{k\in\{1,2,\ldots,199\}}$,
we obtain the number of iterations (up to a maximum of $10^4$ iterations) required to
achieve $\varepsilon=10^{-6}$ accuracy.
For the governing sequence, the limit $P_{\ensuremath{\operatorname{Fix}} T}(\ensuremath{\mathbf{z}}_0)$ is used to determine the stopping condition.
\cref{fig:exp2a} reports the median number of iterations required for each $\lambda$ to achieve the given accuracy.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{Exp2ab.pdf}
\caption{Experiment 2: The median (solid line) and range (shaded region) of the number of iterations for the governing and shadow sequences}
\label{fig:exp2a}
\label{fig:exp2}
\end{figure}
For the shadow sequence, we compute the median number of iterations required to achieve $\varepsilon=10^{-6}$ accuracy for the sequence $(\overline{M}\ensuremath{\mathbf{z}}_k)_\ensuremath{{k\in{\mathbb N}}}$ with respect to its limit $P_Zz_0$, where $z_0$ is the average of the components of $\ensuremath{\mathbf{z}}_0$.
See \cref{fig:exp2} for results.
For Ryu and MT in both experiments,
increasing values of $\lambda$ result in a decreasing number of median iterations required for $\lambda<1$.
For Campoy, the median number of iterations reaches its minimum at $\lambda\approx 0.5$
and increases thereafter.
As is evident from the maximum number of iterations required for a fixed $\lambda$,
the shadow sequence converges before the governing sequence for larger values of $\lambda$ in $\left]0,1\right[$.
One can also see that Ryu consistently requires fewer median iterations
for both the governing and the shadow sequence to achieve the same accuracy as MT for a fixed $\lambda$. However, Campoy performs better than Ryu for small values of $\lambda$, and beats MT for a larger range of $\lambda\in \left]0,0.5\right[$.
\subsection{Experiment 3: Convergence plots of shadow sequences}
In this experiment, we measure the distance of the terms of the governing (and shadow) sequence from its limit point,
to observe how the iterates of the algorithms approach the solution.
Guided by \cref{fig:exp2},
we pick the $\lambda$ for which the median iterates are the least:
$\lambda=0.99$ for Ryu, $\lambda=0.97$ for MT, and $\lambda=0.57$ for Campoy.
Similar to the setup of Experiment~2,
we fix 100 starting points and 100 triples of subspaces $(U_1,U_2,U_3)$.
We then run the algorithms for 150 iterations for each starting point
and each set of subspaces,
and we measure the distance of the iterates $\overline{M}\ensuremath{\mathbf{z}}_k$ to its limit $P_Zz_0$.
\cref{fig:exp3} reports the median of these values for each iteration counter
$k\in\{1,\dots,150\}$.
As can be seen in \cref{fig:exp3}, for both the governing and the shadow sequences, Ryu converges to the solution faster than MT and Campoy. Ryu and MT show faint ``rippling'', as is well known to occur for the Douglas-Rachford algorithm.
Campoy, which is itself a DR-type algorithm, exhibits more prominent ripples.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Exp3.pdf}
\caption{Experiment 3: Convergence plot of the governing and shadow sequences. The median (solid line) and range (shaded region) of distances are reported.}
\label{fig:exp3}
\end{figure}
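The measurement methodology of Experiments 2 and 3 (count iterations of a fixed-point scheme until the distance to the known limit drops below a tolerance, then compare across a parameter) can be illustrated on a toy problem. The sketch below is a hypothetical stand-in, not the Ryu, MT or Campoy iterations: it uses von Neumann's alternating projections onto two lines through the origin in $\mathbb{R}^2$, for which the iterates converge linearly to the projection of the starting point onto the intersection (here the origin), with a rate governed by the angle between the lines. The helper names `proj` and `iterations_to_tol` are ours.

```python
import math

def proj(u, x):
    # orthogonal projection of x onto span(u); u is a unit vector
    d = u[0] * x[0] + u[1] * x[1]
    return (d * u[0], d * u[1])

def iterations_to_tol(theta, x0, tol=1e-6, maxit=10000):
    # count iterations of x -> P_A(P_B(x)) until dist(x, limit) <= tol,
    # where A is the x-axis, B the line at angle theta, and the limit is
    # the projection of x0 onto A ∩ B = {0}
    a = (1.0, 0.0)
    b = (math.cos(theta), math.sin(theta))
    x = x0
    for k in range(1, maxit + 1):
        x = proj(a, proj(b, x))
        if math.hypot(x[0], x[1]) <= tol:
            return k
    return maxit

# a smaller angle between the lines means slower (linear) convergence
fast = iterations_to_tol(math.pi / 3, (1.0, 2.0))
slow = iterations_to_tol(math.pi / 12, (1.0, 2.0))
```

The dependence of the iteration count on the angle mirrors how linear rates for subspace problems are typically governed by angles between the subspaces, the theme of the Friedrichs-angle question raised in the conclusion.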
\section{Conclusion}
\label{sec:end}
In this paper, we investigated the recent splitting methods
by Ryu, by Malitsky-Tam, and by Campoy
in the context of normal cone operators
for subspaces. We discovered and proved that all three algorithms find not just
some solution but in fact the \emph{projection} of the starting point onto the
intersection of the subspaces. Moreover, convergence of the iterates
is \emph{strong} even in infinite-dimensional settings.
Our numerical experiments illustrated that Ryu's method seems to converge
faster although neither Malitsky-Tam splitting nor Campoy
splitting is limited in its applicability to
just 3 subspaces.
Two natural avenues for future research are the following.
Firstly, when $X$ is finite-dimensional, we know that the convergence
rate of the iterates
is linear. While we illustrated this linear convergence
numerically in this paper, it is open whether there are \emph{natural
bounds for the linear rates} in terms of some version of angle between
the subspaces involved.
For the prototypical Douglas-Rachford splitting framework,
this was carried out in \cite{BBCNPW1,BBCNPW2}
in terms of the \emph{Friedrichs angle}.
Secondly, what can be said in the \emph{inconsistent} affine case?
Again, the Douglas-Rachford algorithm may serve as a guide to what
the expected results and complications might be; see, e.g., \cite{BM16}.
\section*{Acknowledgments}
HHB and XW were supported by NSERC Discovery Grants.
The authors thank Alex Kruger for making us aware of his works \cite{Kruger81} and \cite{Kruger85},
which are highly relevant for Campoy splitting.
| {
"timestamp": "2022-03-09T02:09:46",
"yymm": "2203",
"arxiv_id": "2203.03832",
"language": "en",
"url": "https://arxiv.org/abs/2203.03832",
"abstract": "Finding a zero of a sum of maximally monotone operators is a fundamental problem in modern optimization and nonsmooth analysis. Assuming that the resolvents of the operators are available, this problem can be tackled with the Douglas-Rachford algorithm. However, when dealing with three or more operators, one must work in a product space with as many factors as there are operators. In groundbreaking recent work by Ryu and by Malitsky and Tam, it was shown that the number of factors can be reduced by one. A similar reduction was achieved recently by Campoy through a clever reformulation originally proposed by Kruger. All three splitting methods guarantee weak convergence to some solution of the underlying sum problem; strong convergence holds in the presence of uniform monotonicity.In this paper, we provide a case study when the operators involved are normal cone operators of subspaces and the solution set is thus the intersection of the subspaces. Even though these operators lack strict convexity, we show that striking conclusions are available in this case: strong (instead of weak) convergence and the solution obtained is (not arbitrary but) the projection onto the intersection. Numerical experiments to illustrate our results are also provided.",
"subjects": "Optimization and Control (math.OC)",
"title": "The splitting algorithms by Ryu, by Malitsky-Tam, and by Campoy applied to normal cones of linear subspaces converge strongly to the projection onto the intersection",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.984336354045338,
"lm_q2_score": 0.8333245973817158,
"lm_q1q2_score": 0.8202716959230174
} |
https://arxiv.org/abs/2207.11741 | Induced subgraphs of zero-divisor graphs | The zero-divisor graph of a finite commutative ring with unity is the graph whose vertex set is the set of zero-divisors in the ring, with $a$ and $b$ adjacent if $ab=0$. We show that the class of zero-divisor graphs is universal, in the sense that every finite graph is isomorphic to an induced subgraph of a zero-divisor graph. This remains true for various restricted classes of rings, including boolean rings, products of fields, and local rings. But in more restricted classes, the zero-divisor graphs do not form a universal family. For example, the zero-divisor graph of a local ring whose maximal ideal is principal is a threshold graph; and every threshold graph is embeddable in the zero-divisor graph of such a ring. More generally, we give necessary and sufficient conditions on a non-local ring for which its zero-divisor graph to be a threshold graph. In addition, we show that there is a countable local ring whose zero-divisor graph embeds the Rado graph, and hence every finite or countable graph, as induced subgraph. Finally, we consider embeddings in related graphs such as the $2$-dimensional dot product graph. | \section{Introduction}
In this paper, ``ring'' means ``finite commutative ring with unity'',
while ``graph'' means ``finite simple undirected graph'' (except in the
penultimate section, where finiteness will be relaxed). The
\emph{zero-divisor graph} $\Gamma(R)$ of a ring $R$ has vertices the
zero-divisors in $R$ (the non-zero elements $a$ for which there exists $b\ne0$
such that $ab=0$), with an edge $\{a,b\}$ whenever $ab=0$. The zero-divisor graphs have been extensively studied in the past \cite{Akb,AN21, AND99,And1,And2,AND20,BMT16,BMT17,Kavas,Lag1, sel}.
In this paper, we are interested in universal graphs. Let $G$ be a graph. A graph $H$ is said to be $G$-free if no subgraph of $H$ is isomorphic to $G$. A countable $G$-free graph $H$ is weakly universal if every
countable $G$-free graph is isomorphic to a subgraph of $H$, and strongly universal if every such graph is isomorphic to an induced subgraph of $H$.
Similarly, a class of graphs $\mathcal C$ is said to be universal if every graph is an induced subgraph of a graph in the class $\mathcal C$. Universal graphs are well studied, and there is a vast literature spanning the past few decades \cite{hjp99, lcc}.
Universal graphs for collections of graphs satisfying certain forbidden conditions have also been explored \cite{av13,cs07,ks95}. In \cite{cg83}, the authors search for a graph $G$ on $n$ vertices with the minimum number of edges such that every tree on $n$ vertices is isomorphic to a spanning tree of $G$.
Here we address the question: Which finite
graphs are induced subgraphs of the zero-divisor graph of some ring? The
answer turns out to be ``all of them'', but there are interesting questions
still open when we restrict the class of rings or the class of graphs.
\section{Zero divisor graphs are universal}
In this section, we show that the zero divisor graphs associated to various classes of rings are universal.
\subsection{The zero divisor graphs of Boolean rings are universal}
In this subsection we show that the zero divisor graphs of Boolean rings are universal. We first prove an auxiliary result: every graph $G$ can be realized as an intersection graph for a natural choice of sets associated to $G$. We need this result in a couple of proofs in later sections.
\begin{defn}
Let $X$ be a family of subsets of a set $A$.
The \emph{intersection graph} on $X$, denoted by $\mathcal G(X)$, has
vertices the sets in the family, two vertices joined if they have non-empty
intersection.
\end{defn}
The following proposition is well known.
\begin{proposition}\label{intersection}
Let $G$ be a finite graph. Then $G$ can be identified as an intersection graph for a natural choice of sets associated to $G$.
\end{proposition}
\begin{proof}
Let $G$ be a finite graph with edge set $E$. Without loss of generality we may assume that $G$ has no isolated vertices and no isolated edges (see Remark \ref{rem1}). The construction is simple: we represent a vertex $v$ of $G$ by the set $A_v$ of all edges of $G$ containing $v$. Then
$$
A_v \cap A_w =
\begin{cases}
\{e\} & \text{if $v$ is adjacent to $w$ by the edge $e$,}\\
\emptyset & \text{otherwise.}
\end{cases}
$$
Moreover, $A_v\ne A_w$ for all $v\ne w$, since (in the absence of isolated vertices and edges) two distinct vertices cannot lie on exactly the same set of edges. It is now immediate that the graph $G$ and the intersection graph $\mathcal G(X)$ are isomorphic, where $X = \{A_v : v \in V(G)\}$. \qquad$\Box$
\end{proof}
\begin{rem}\label{rem1}\rm
If there are isolated vertices in $G$, they can be represented by non-empty
sets disjoint from the graph
obtained from the non-isolated vertices. If there are isolated edges in $G$
we represent the vertices of such an edge by two distinct but intersecting sets
disjoint from the sets representing the rest of the graph.
\end{rem}
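The edge-set construction in the proof of Proposition \ref{intersection} is easy to check by computer. A minimal Python sketch (using $C_5$, which has no isolated vertices or edges, so the construction applies verbatim; the helper name `edge_set_representation` is ours):

```python
from itertools import combinations

def edge_set_representation(vertices, edges):
    """A_v = the set of edges containing v (the Proposition's construction)."""
    return {v: {e for e in edges if v in e} for v in vertices}

# C_5 has no isolated vertices and no isolated edges
V = range(5)
E = [frozenset({i, (i + 1) % 5}) for i in V]
A = edge_set_representation(V, E)

for u, v in combinations(V, 2):
    # A_u and A_v intersect exactly when u and v are adjacent
    assert bool(A[u] & A[v]) == (frozenset({u, v}) in E)
# the representation is injective: all the A_v are distinct
assert len({frozenset(s) for s in A.values()}) == 5
```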
Let $X$ be a finite set. The \emph{Boolean ring} on $X$ has as its elements
the subsets of $X$; addition is symmetric difference, and multiplication is
intersection. So the empty set is the zero element and $ab=0$ if and only if $a$ and $b$
are disjoint.
\begin{lem}\label{lemmaBoolean}
Let $\Gamma(R)$ be the zero divisor graph of a Boolean ring $R$. A graph $G$ is an induced subgraph of $\Gamma(R)$ if, and only if, the complement graph of $G$ is isomorphic to an intersection graph $\mathcal G(X)$ for some collection of sets $X$.
\end{lem}
\begin{proof}
Suppose $G$ is an induced subgraph of the zero divisor graph of a Boolean ring $R$. Now $R$ is a subring of $(P(X),\mathbin{\triangle},\cap)$
for some $X$. Therefore the vertices of $G$ are elements of $P(X)$, and two of them are adjacent if their intersection is empty. In other words, two of them are adjacent in the complement graph of $G$ if their intersection is non-empty. This shows that the complement graph of $G$ is an intersection graph.
Conversely, consider an arbitrary intersection graph $G:=\mathcal G(X)$ for some collection of subsets $X$ of a set $A$. Then the vertices of $G$ are subsets of $A$, and two of them are adjacent if their intersection is non-empty. In other words, two of them are adjacent in the complement graph of $G$ if their intersection is empty. This shows that the complement graph of $G$ is an induced subgraph of the zero divisor graph of the Boolean ring $(P(A),\mathbin{\triangle},\cap)$.
\qquad$\Box$
\end{proof}
\begin{thm}\label{theoremboolean}
The zero divisor graphs of Boolean rings are universal. That is, every finite
graph is an induced subgraph of the zero-divisor graph of a Boolean ring.
\end{thm}
\begin{proof}
Given an arbitrary finite graph $G$, by Lemma \ref{lemmaBoolean}, it is enough to show that the complement graph of $G$ is an intersection graph. But this claim follows already from Proposition \ref{intersection}. This completes the proof. \qquad$\Box$
\end{proof}
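The whole embedding of Theorem \ref{theoremboolean} can be traced on a small example. The sketch below takes $G=C_5$ (whose complement is again $C_5$, with no isolated vertices or edges), represents each vertex by the set of complement-edges through it, and checks that in the Boolean ring (where the product is intersection) the product $A_u\cdot A_v$ vanishes exactly when $uv$ is an edge of $G$:

```python
from itertools import combinations

V = range(5)
G_adj = lambda u, v: (u - v) % 5 in (1, 4)      # adjacency in G = C_5
# edges of the complement graph of G (again a C_5: no isolated vertices/edges)
comp_edges = [frozenset({u, v}) for u, v in combinations(V, 2) if not G_adj(u, v)]

# vertex v  ->  element A_v of the Boolean ring P(comp_edges);
# the ring product is intersection, so A_u . A_v = 0 means A_u, A_v disjoint
A = {v: frozenset(e for e in comp_edges if v in e) for v in V}

for u, v in combinations(V, 2):
    product_is_zero = not (A[u] & A[v])
    assert product_is_zero == G_adj(u, v)   # uv is an edge of G iff A_u . A_v = 0
```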
\begin{cor} For every finite graph $G$, there exists a positive integer $k$ such that $G$ is an induced subgraph of the zero-divisor graph of the ring $\mathbb{Z}_2\times\mathbb{Z}_2\times \ldots \times \mathbb{Z}_2$ ($k$-times).
\end{cor}
\begin{proof}
Note that every finite Boolean ring is isomorphic to $\mathbb{Z}_2\times\mathbb{Z}_2\times \ldots \times \mathbb{Z}_2$ ($k$-times) for some positive integer $k$, and hence the result follows from Theorem \ref{theoremboolean}. \qquad$\Box$
\end{proof}
\subsection{The zero divisor graphs of ring of integers modulo $n$ and the reduced rings are universal}
\begin{lem}
Let $X=\{1,2,\dots,m\}$. Then the zero divisor graph of the Boolean ring
$(P(X),\mathbin{\triangle},\cap)$
is an induced subgraph of the zero divisor graph of the ring $\mathbb{Z}_n$, the integers modulo $n$, where $n = p_1 p_2 \cdots p_m$ is a square-free integer
with $m$ prime divisors.
\end{lem}
\begin{proof}
Let $A$ and $B$ be two non-empty proper subsets of $X$. Then the following statements are
equivalent:
\begin{itemize}
\item $A$ and $B$ are adjacent in the zero divisor graph of the Boolean
ring $(P(X),\mathbin{\triangle},\cap)$;
\item $A$ and $B$ are disjoint;
\item $A^c \cup B^c = X$;
\item the product $n_{A^c} n_{B^c}$ is divisible by $n$, where
$n_A = \prod_{i \in A}p_i$.
\end{itemize}
Therefore the map which sends $A$ to $n_{A^c}$ embeds the zero divisor graph of the Boolean ring $(P(X),\mathbin{\triangle},\cap)$
as an induced subgraph of the zero divisor graph of the ring $\mathbb Z_n$, the integers modulo $n$.
\qquad$\Box$
\end{proof}
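The map $A\mapsto n_{A^c}$ of the lemma can be verified directly for $n=30=2\cdot 3\cdot 5$. The sketch checks that disjointness of non-empty proper subsets corresponds exactly to products divisible by $n$, and that the map is injective:

```python
from itertools import combinations

primes = [2, 3, 5]
X = frozenset(range(3))
n = 2 * 3 * 5                       # square-free, with 3 prime divisors

def n_of(A):
    """n_A = product of p_i over i in A."""
    out = 1
    for i in A:
        out *= primes[i]
    return out

# vertices of the Boolean zero-divisor graph: non-empty proper subsets of X
subsets = [frozenset(c) for r in range(1, 3) for c in combinations(sorted(X), r)]
comp = lambda A: X - A

for A in subsets:
    for B in subsets:
        # disjointness in the Boolean ring <-> product divisible by n in Z_n
        assert (not (A & B)) == (n_of(comp(A)) * n_of(comp(B)) % n == 0)
assert len({n_of(comp(A)) for A in subsets}) == len(subsets)   # injective
```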
\begin{thm}
Every finite graph is an induced subgraph of the zero-divisor graph of a
product of distinct finite fields.
\end{thm}
\begin{proof}
By the Chinese Remainder Theorem, if $n=p_1p_2\cdots p_m$ where $p_1$, $p_2$,
\dots, $p_m$ are distinct primes, the product of the fields $\mathbb{Z}_{p_i}$
is isomorphic to $\mathbb{Z}_n$, so the result follows from the preceding
lemma together with Theorem \ref{theoremboolean}.\qquad$\Box$
\end{proof}
\begin{cor}
The class of zero-divisor graphs of the rings of integers modulo $n$ is universal. Further, we may even restrict $n$ to be square-free.
\end{cor}
\subsection{The zero divisor graphs of local rings are universal}
A \emph{local ring} is a ring $R$ with a unique maximal ideal $M$.
Any non-zero element of $M$ is a zero-divisor, and every element of
$R\setminus M$ is a unit.
\begin{lem}
Let $R$ be a local ring with a unique maximal ideal $M$. Then there exists $r$ such that $M^r=\{0\}$.
\end{lem}
\begin{proof}
Since $R$ is a finite local ring, it is a local Artinian ring and hence $M$ is nilpotent. \qquad$\Box$
\end{proof}
From the above lemma, we observe that, if $a\in M^s$ and $b\in M^t$ with $s+t\ge r$, then $ab=0$ and hence $a$ and
$b$ are adjacent in the zero-divisor graph.
This observation might suggest that the zero-divisor graph is a threshold graph (these graphs are defined in the next section), but we show that this is not so;
the situation is complicated by the fact that the converse of this observation is false in general.
In the following theorem, we show that the zero-divisor graphs of local rings are in fact universal.
\begin{thm}\label{t:loc_universal}
For every finite graph $G$, there is a finite commutative local ring $R$ with
unity such that $G$ is an induced subgraph of the zero-divisor
graph of $R$.
\end{thm}
\begin{proof}
Let the vertex set of $G$ be $ \{v_1,\ldots,v_n\}$. Take $R$ to be the quotient of
$F[x_1,\ldots,x_n]$ by the ideal $I$ generated by all homogeneous polynomials
of degree $3$ in $x_1,\ldots,x_n$ together with all products $x_ix_j$ for which
$\{v_i,v_j\}$ is an edge of $G$, where $F$ is a finite prime field of order $p$ and $x_1,\ldots,x_n$
are indeterminates.
Let $M$ be the ideal of $R$ generated by the elements $x_1,\ldots,x_n$ (We
abuse notation by identifying polynomials in $x_1,\ldots,x_n$ with their
images in $R$). Thus $M$ is the set of elements represented by polynomials with
zero constant term. So $M$ is nilpotent and every element of $R\setminus M$
is a unit, so $M$ is the unique maximal ideal, and $R$ is a local ring.
Now the elements $x_1,\ldots,x_n$ of $M$ satisfy $x_ix_j=0$ if and only
if $\{v_i,v_j\}$ is an edge of $G$. To see this, note that the set
\[\{1,x_1,\ldots,x_n\}\cup\{x_ix_j:\{v_i,v_j\}\notin E(G)\}\]
is an $F$-basis for $R$. This completes the proof. \qquad$\Box$
\end{proof}
The above theorem shows that the zero divisor graph of a local ring need not be a threshold graph. We study threshold graphs extensively in the next section.
\section{When is the zero-divisor graph threshold?}
In preparation for the next section, we need some background on threshold
graphs. These were introduced by Chv\'atal and Hammer~\cite{ch} in 1977.
\begin{defn}
A graph $G$ is a threshold graph if there exist $t \in \mathbb R$ and, for each vertex $v$, a weight $w(v) \in \mathbb R$ such that $uv$ is an edge of $G$ if, and only if, $w(u)+w(v)>t$.
\end{defn}
\begin{defn}
A \textit{split graph} is a graph whose vertex set is the disjoint union of an
independent set and a clique, with arbitrary edges between them.
A \textit{nested split graph} (or NSG for short) is a split graph in which
we add cross edges in accordance to partitions of $U$ (the independent set) and
$V$ (the clique) into $h$ cells (namely, $U =U_1\cup U_2\cup \ldots \cup U_h$
and $V=V_1\cup V_2\cup \ldots \cup V_h$) in the following way: each vertex
$u\in U_i$ is adjacent to all vertices $v\in V_1\cup V_2\cup \ldots \cup V_i$.
The vertices $U_i \cup V_i$ form the $i$-th level of the NSG, and $h$ is the
number of levels. The NSG as described can be denoted by
$\mathrm{NSG}(m_1,m_2,\ldots, m_h; n_1,n_2,\ldots, n_h)$, where $m_i =|U_i|$
and $n_i =|V_i|\ (i=1,2,\ldots, h)$.
\end{defn}
The following theorem is well-known.
\begin{thm}[\cite{ch}]
For a finite graph $G$, the following four properties are equivalent:
\begin{enumerate}
\item $G$ is a threshold graph;
\item $G$ has no four-vertex induced subgraph isomorphic to $C_4$
(the cycle), $P_4$ (the path), or $2K_2$ (the matching);
\item $G$ can be built from the empty graph by repeatedly adding
vertices joined either to no other vertex or to all other vertices;
\item $G$ is a nested split graph.
\end{enumerate}
\end{thm}
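Property 3 of the theorem yields a simple recognition algorithm: repeatedly delete a vertex that is isolated or dominating; the graph is threshold exactly when this dismantling empties it. A minimal sketch (the function name `is_threshold` is ours), with the three forbidden four-vertex graphs as negative tests:

```python
def is_threshold(adj):
    """adj: dict vertex -> set of neighbours.  Dismantle by repeatedly deleting
    an isolated or dominating vertex (property 3 of the theorem)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    while adj:
        for v in list(adj):
            if not adj[v] or len(adj[v]) == len(adj) - 1:
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                break
        else:
            return False    # stuck: an induced C_4, P_4 or 2K_2 must be present
    return True

star   = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}         # K_{1,3}: threshold
p4     = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}         # path P_4
two_k2 = {0: {1}, 1: {0}, 2: {3}, 3: {2}}               # matching 2K_2
c4     = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # cycle C_4
```

Success of the greedy dismantling certifies a construction sequence as in property 3, so the order in which removable vertices are deleted does not matter.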
We thought originally that the zero-divisor graph of a local ring might be a threshold graph. As we saw above, in Theorem \ref{t:loc_universal}, this is not true in general: see Example \ref{localex}. But there is
one case in which it holds, that where the maximal ideal is principal. If $p$
is a generator of $M$ as ideal, then every element of $M$ has the form $p^su$
where $u$ is a unit and $s>0$. If $u$ and $v$ are units, then $p^su\cdot p^tv=0$
if and only if $s+t\ge r$, where $r$ is the index of nilpotency of the ideal $M$.
\begin{defn}
A collection $\mathcal{C}$ of graphs is said to be \textit{threshold-universal} (for short, t-universal) if every threshold graph is an induced subgraph of a graph from $\mathcal{C}$.
\end{defn}
\begin{thm}
Let $R$ be a local ring whose maximal ideal $M$ is principal.
\begin{enumerate}
\item The zero-divisor graph of $R$ is a threshold graph.
\item Any threshold graph is an induced subgraph of the zero-divisor graph of some local ring whose
maximal ideal is principal. In other words, the zero divisor graphs of local rings with principal maximal ideal form a t-universal collection of graphs.
\end{enumerate}
\end{thm}
\begin{prob}
We can similarly define other classes of universality, such as c-universal, meaning ``cograph-universal''. Can we find a class of rings whose zero-divisor
graphs are c-universal?
\end{prob}
\begin{proof}
(a) For the first part, let $R$ be a local ring with maximal ideal $M$. Let $r$
be the smallest integer such that $M^r=\{0\}$. Take threshold $t=r$, and set
$w(a)=i$ if $a\in M^i\setminus M^{i+1}$. By the remarks above,
$ab=0$ if and only if $w(a)+w(b)\ge r$, so the zero-divisor graph is a
threshold graph.
\medskip
(b) Let $G$ be an arbitrary threshold graph. Choose a prime $p$ which is sufficiently large (larger
than the number of vertices of $G$ is certainly enough). Let $m$ be the
number of stages required to dismantle $G$; embed the vertices of $G$
in $R=\mathbb{Z}/(p^{2m+1})$ as follows: if $a$ is removed as an isolated
vertex in round $i$, map it to an element of $p^iR\setminus p^{i+1}R$; if
it is removed as a vertex joined to all others in round $i$, map it to an
element of $p^{2m-i+1}R\setminus p^{2m-i+2}R$. (Each of these differences
contains at least $p-1$ elements, enough to embed all required vertices.)\qquad$\Box$
\end{proof}
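Part (a) can be checked concretely for $R=\mathbb{Z}_{27}$, where $M=(3)$ and $r=3$. The sketch verifies that $ab=0$ exactly when the weights (the $3$-adic valuations) sum to at least $r$, and, independently, that no four zero-divisors induce $C_4$, $P_4$ or $2K_2$ (identified here by their edge-count and degree-sequence signatures, which distinguish them among all four-vertex graphs):

```python
from itertools import combinations

p, r = 3, 3
n = p ** r                              # Z_27: local, maximal ideal (3) principal
zdivs = [a for a in range(1, n) if a % p == 0]

def w(a):
    """w(a) = i with a in M^i but not M^(i+1), i.e. the 3-adic valuation."""
    i = 0
    while a % p == 0:
        a //= p
        i += 1
    return i

edges = {frozenset({a, b}) for a, b in combinations(zdivs, 2) if a * b % n == 0}

# threshold weights: ab = 0 exactly when w(a) + w(b) >= r
for a, b in combinations(zdivs, 2):
    assert (a * b % n == 0) == (w(a) + w(b) >= r)

# independent check: no induced C_4, P_4 or 2K_2 among the zero-divisors
def signature(S):
    deg = {v: sum(frozenset({v, u}) in edges for u in S if u != v) for v in S}
    return (sum(deg.values()) // 2, tuple(sorted(deg.values())))

forbidden = {(2, (1, 1, 1, 1)), (3, (1, 1, 2, 2)), (4, (2, 2, 2, 2))}
assert all(signature(S) not in forbidden for S in combinations(zdivs, 4))
```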
We proved in Theorem \ref{t:loc_universal} that zero-divisor graphs of local rings are universal. In particular the graphs $P_4$ and $2K_2$ can occur as induced subgraphs of zero-divisor graphs of local rings. The following example from~\cite{an} is a local ring whose zero-divisor graph is not threshold.
\begin{example}\label{localex}
Let $A=\mathbb{Z}_4[x,y,z]/M$, where $M$ is the ideal generated by
$\{x^2-2, y^2-2, z^2, 2x, 2y, 2z, xy, xz, yz-2\}$.
In the zero-divisor graph of $A$, the induced subgraph on the
vertices $\{x, z, x+y, x+y+2\}$ is $2K_2$ and the induced subgraph of the
vertices $\{x, z+2, x+z, x+y\}$ is $P_4$.
\end{example}
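The computations behind this example can be automated. In the sketch below we model $A$ by tuples $(a,b,c,d)$ standing for $a+bx+cy+dz$, with $a$ taken mod $4$ and $b,c,d$ mod $2$ (legitimate since $2x=2y=2z=0$ in $A$), multiply using the stated relations, and confirm the two induced subgraphs:

```python
# Elements of A = Z_4[x,y,z]/M as tuples (a, b, c, d) = a + b*x + c*y + d*z,
# with a mod 4 and b, c, d mod 2 (valid because 2x = 2y = 2z = 0 in A).
# Relations used: x^2 = y^2 = 2, z^2 = 0, xy = xz = 0, yz = 2.
def mul(u, v):
    a1, b1, c1, d1 = u
    a2, b2, c2, d2 = v
    const = (a1 * a2 + 2 * b1 * b2 + 2 * c1 * c2 + 2 * (c1 * d2 + d1 * c2)) % 4
    return (const, (a1 * b2 + a2 * b1) % 2,
                   (a1 * c2 + a2 * c1) % 2,
                   (a1 * d2 + a2 * d1) % 2)

def add(u, v):
    return ((u[0] + v[0]) % 4, (u[1] + v[1]) % 2,
            (u[2] + v[2]) % 2, (u[3] + v[3]) % 2)

x, y, z = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
two, zero = (2, 0, 0, 0), (0, 0, 0, 0)

# sanity check of the defining relations
assert mul(x, x) == two and mul(y, y) == two and mul(z, z) == zero
assert mul(x, y) == zero and mul(x, z) == zero and mul(y, z) == two

adj = lambda u, v: mul(u, v) == zero

# {x, z, x+y, x+y+2} induces 2K_2: only edges x--z and (x+y)--(x+y+2)
S = [x, z, add(x, y), add(add(x, y), two)]
assert [adj(S[i], S[j]) for i in range(4) for j in range(i + 1, 4)] == \
       [True, False, False, False, False, True]

# {x, z+2, x+z, x+y} induces the path x -- z+2 -- x+z -- x+y
T = [x, add(z, two), add(x, z), add(x, y)]
assert [adj(T[i], T[j]) for i in range(4) for j in range(i + 1, 4)] == \
       [True, False, False, True, False, True]
```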
The next result gives necessary conditions for the zero divisor graph of a ring to
be threshold. Here $\mathop{\mathrm{Ann}}(x)=\{y\in R:xy=0\}$ is the \emph{annihilator} of $x$:
it is a non-zero ideal if $x$ is a zero-divisor.
\begin{thm}\label{t:threshold}
If $R$ is a ring whose zero divisor graph is threshold, then for any two distinct
zero-divisors $x,y\in R$, the following holds:
\begin{enumerate}
\item $\mathop{\mathrm{Ann}}(x)\cap \mathop{\mathrm{Ann}}(y)\neq \{0\}$;
\item either $\mathop{\mathrm{Ann}}(x)\subseteq \mathop{\mathrm{Ann}}(y)$ or $\mathop{\mathrm{Ann}}(y)\subseteq \mathop{\mathrm{Ann}}(x)$;
\item if $\mathop{\mathrm{Ann}}(x)\subsetneq \mathop{\mathrm{Ann}}(y)$ and $xy\neq 0$, then $\langle \mathop{\mathrm{Ann}}(x)\rangle$ is a clique;
\item if $\mathop{\mathrm{Ann}}(x)\subsetneq \mathop{\mathrm{Ann}}(y)$ and $xy\neq 0$, then for any $a\in \mathop{\mathrm{Ann}}(x)$ and for any $b\in \mathop{\mathrm{Ann}}(y)\backslash \mathop{\mathrm{Ann}}(x)$, $ab=0$.
\end{enumerate}
\end{thm}
\begin{proof}
(a) Suppose $\mathop{\mathrm{Ann}}(x)\cap \mathop{\mathrm{Ann}}(y)= \{0\}$, for some zero-divisors $x,y\in R$.
Then there exist $0\neq a\in \mathop{\mathrm{Ann}}(x)$ and $0\neq b\in \mathop{\mathrm{Ann}}(y)$ such that
$a\notin \mathop{\mathrm{Ann}}(y)$ and $b\notin \mathop{\mathrm{Ann}}(x)$. Hence, the subgraph induced by
$\{x,a,y,b\}$ is isomorphic to $2K_2$, $P_4$ or $C_4$, a contradiction.
\medskip
(b) Suppose that there exist non-zero elements $x,y\in R$ (necessarily
zero-divisors) such that $\mathop{\mathrm{Ann}}(x)\not\subseteq\mathop{\mathrm{Ann}}(y)$ and
$\mathop{\mathrm{Ann}}(y)\not\subseteq\mathop{\mathrm{Ann}}(x)$. Then
again there exist $0\neq a\in \mathop{\mathrm{Ann}}(x)$ and $0\neq b\in \mathop{\mathrm{Ann}}(y)$ such that $a\notin \mathop{\mathrm{Ann}}(y)$ and $b\notin \mathop{\mathrm{Ann}}(x)$. Hence we get a contradiction as in (a).
\medskip
(c) If $\mathop{\mathrm{Ann}}(x)\subsetneq \mathop{\mathrm{Ann}}(y)$ and $\langle \mathop{\mathrm{Ann}}(x)\rangle$ is not a clique,
then there exist $a,b\in \mathop{\mathrm{Ann}}(x)$ such that $ab\neq 0$ and hence the subgraph induced by
$\{x,a,y,b\}$ is isomorphic to $C_4$, a contradiction.
\medskip
(d) Similar to (c). \qquad$\Box$
\end{proof}
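Conditions (a) and (b) are easy to test numerically. For $\mathbb{Z}_8$ (whose zero-divisor graph is threshold, the maximal ideal $(2)$ being principal) the annihilators of the zero-divisors form a chain, while in $\mathbb{Z}_{12}$ condition (a) already fails for $x=2$, $y=3$:

```python
def Ann(x, n):
    """Annihilator of x in Z_n."""
    return {y for y in range(n) if x * y % n == 0}

# Z_8: the annihilators of the zero-divisors 2, 4, 6 must form a chain
zdivs8 = [x for x in range(1, 8) if any(x * y % 8 == 0 for y in range(1, 8))]
assert zdivs8 == [2, 4, 6]
for x in zdivs8:
    for y in zdivs8:
        a, b = Ann(x, 8), Ann(y, 8)
        assert a <= b or b <= a        # condition (b): one contains the other

# Z_12: condition (a) fails for x = 2, y = 3, so Gamma(Z_12) is not threshold
assert Ann(2, 12) & Ann(3, 12) == {0}
```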
The following theorem is a characterization of non-local rings $R$ whose zero-divisor graph is threshold.
\begin{thm}
Let $R$ be a non-local ring. Then $\Gamma(R)$ is threshold if and only if $R\cong \mathbb{F}_2\times \mathbb{F}_q$, where $q\geq 3$ and $\mathbb{F}_q$ is the field with $q$ elements.
\end{thm}
\begin{proof}
If $R\cong \mathbb{F}_2\times \mathbb{F}_q$, then $\Gamma(R)\cong K_{1,\ q-1}$
(a star graph) and hence it is threshold. Conversely, suppose $\Gamma(R)$ is threshold, and write
$R=R_1\times R_2\times \ldots \times R_n$ as a product of local rings, where $n\geq 2$ since $R$ is not local. If $n\geq 3$, then
$\mathop{\mathrm{Ann}}((1,1,0,0,\ldots,0))=\{0\}\times \{0\}\times R_3\times \ldots \times R_n$
and
$\mathop{\mathrm{Ann}}((0,0,1,0,\ldots, 0))=R_1\times R_2\times\{0\}\times R_4\times \ldots \times R_n$.
By Theorem~\ref{t:threshold}(b), $\Gamma(R)$ would not be threshold. So $n=2$.
If one of the $R_i$ is not a field, say $R_1$, then there exist non-zero
elements $x,y\in R_1$ such that $xy=0$. Hence $\mathop{\mathrm{Ann}}((x,0))=\mathop{\mathrm{Ann}}(x)\times R_2$
and $\mathop{\mathrm{Ann}}((0,1))=R_1\times \{0\}$, and thus $\Gamma(R)$ would not be threshold,
by Theorem~\ref{t:threshold}(b). Therefore, both $R_1$ and $R_2$ are fields.
If $|R_i|>2$ for both $i=1,2$, then, by the same argument, $\mathop{\mathrm{Ann}}((1,0))$ and
$\mathop{\mathrm{Ann}}((0,1))$ are incomparable, so again $\Gamma(R)$ would not be threshold. Hence one of the fields has order $2$, that is, $R\cong \mathbb{F}_2\times \mathbb{F}_q$.\qquad$\Box$
\end{proof}
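For instance, $\mathbb{Z}_6\cong\mathbb{F}_2\times\mathbb{F}_3$, and the sketch below confirms that $\Gamma(\mathbb{Z}_6)$ is the star $K_{1,2}$ centred at $3$:

```python
n = 6                                   # Z_6 is isomorphic to F_2 x F_3
zdivs = [x for x in range(1, n) if any(x * y % n == 0 for y in range(1, n))]
assert zdivs == [2, 3, 4]
edges = {(x, y) for x in zdivs for y in zdivs if x < y and x * y % n == 0}
assert edges == {(2, 3), (3, 4)}        # the star K_{1,2} centred at 3
```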
Next, we ask the following question about local rings.
\begin{prob} For which local rings $(R,M)$ is the zero-divisor graph
threshold?
\end{prob}
For the rest of this section, we consider a local ring $(R,M)$, where $M$ is the
maximal ideal.
Since every finite commutative ring $R$ with unity is Noetherian, every ideal
of $R$ is finitely generated. In particular, the maximal ideal $M$ in a local
ring $(R,M)$ is finitely generated.
\begin{proposition}
Let $M$ be the maximal ideal of $R$, and let $M$ be generated by $\{x_1,x_2,\ldots, x_k\}$, where $k\geq 2$. If $x_i^2=0$ for $1\leq i\leq k$ and $x_ix_j=0$ for $1\leq i<j \leq k$, then the zero-divisor graph of $R$ is threshold.
\end{proposition}
\begin{proof}
Let $X=\sum_{i=1}^ka_ix_i\in V(\Gamma(R))$ and $Y=\sum_{i=1}^kb_ix_i\in V(\Gamma(R))$. Then $XY=0$, since all the products $x_ix_j$ (including the squares) vanish; hence $\Gamma(R)$ is complete and therefore threshold. \qquad$\Box$
\end{proof}
\begin{proposition} Let $M$ be the maximal ideal of $R$, and let $M$ be generated by $\{x_1,x_2,\ldots, x_k\}$, where $k\geq 2$, such that $x_ix_j=0$ for $1\leq i<j \leq k$. If $x_1^{n-1}\neq 0$, $x_1^n=0$ where $n\geq 3$ and $x_i^2=0$ for $2\leq i\leq k$, then $\Gamma(R)$ is threshold.
\end{proposition}
\begin{proof}
Using the definition of a nested split graph, we prove the result. Set the number of levels $h=\frac{n}{2}$ if $n$ is even and $h=\frac{n-1}{2}$ otherwise.
First we define, for $1\leq i\leq h-1$,
\begin{eqnarray} \nonumber
U_i&=& \left\{
\begin{array}{l}
a_{i,1}x_1^i+a_{i,2}x_2+\ldots+a_{i,k}x_k\mid a_{i,1}\mbox{ is a unit of }R,\mbox{ and}\\
\qquad a_{i,j} \mbox{ is either zero or a unit of } R, \mbox{ for } 2\leq j\leq k
\end{array}\right\}
\end{eqnarray}
and if $n$ is odd, define
\begin{eqnarray} \nonumber
U_h&=& \left\{
\begin{array}{l}
a_{h,1}x_1^h+a_{h,2}x_2+\ldots+a_{h,k}x_k\mid a_{h,1}\mbox{ is a unit of }R,\mbox{ and}\\
\qquad a_{h,j} \mbox{ is either zero or a unit of } R, \mbox{ for } 2\leq j\leq k
\end{array}\right\}
\end{eqnarray}
and if $n$ is even, define $U_h=\emptyset$.\\
Next we define
\begin{eqnarray} \nonumber
V_1&=& \left\{\begin{array}{l}
b_{1,1}x_1^{n-1}+b_{1,2}x_2+\ldots+b_{1,k}x_k\mid b_{1,j}\mbox{ is either zero or a unit}\\
\qquad\mbox{of } R,\mbox{ for } 1\leq j\leq k
\end{array}\right\}
\end{eqnarray}
and, for $2\leq i\leq h$,
\begin{eqnarray} \nonumber
V_i&=& \left\{\begin{array}{l}
b_{i,1}x_1^{n-i}+b_{i,2}x_2+\ldots+b_{i,k}x_k\mid b_{i,1}\mbox{ is a unit of }R, \mbox{ and}\\
\qquad b_{i,j}\mbox{ is either zero or a unit of } R,\mbox{ for } 2\leq j\leq k
\end{array}\right\}.
\end{eqnarray}
Then $\mathop\bigcup\limits_{i=1}^h U_i$ is an independent set and $\mathop\bigcup\limits_{i=1}^h V_i$ induces a complete subgraph of $\Gamma(R)$. Moreover, each vertex in $U_i$ is adjacent to every vertex in $V_1\cup V_2\cup \ldots \cup V_i$, since the relevant products are multiples of $x_1^{n+i-j}$ with $j\leq i$. Hence the partition satisfies the definition of a nested split graph, and the graph is threshold. \qquad$\Box$
\end{proof}
\begin{proposition} If $M$ is the maximal ideal generated by $\{x_1,x_2,\ldots, x_k\}$, where $k\geq 2$ such that $x_1x_2=0$, $x_1^2\neq 0\neq x_2^2$, then $\Gamma(R)$ is not threshold.
\end{proposition}
\begin{proof}
Clearly, $x_1\in \mathop{\mathrm{Ann}}(x_2)\backslash \mathop{\mathrm{Ann}}(x_1)$ and $x_2\in \mathop{\mathrm{Ann}}(x_1)\backslash \mathop{\mathrm{Ann}}(x_2)$ and hence $\mathop{\mathrm{Ann}}(x_1)$ is not a subset of $\mathop{\mathrm{Ann}}(x_2)$ and $\mathop{\mathrm{Ann}}(x_2)$ is not a subset of $\mathop{\mathrm{Ann}}(x_1)$. Thus $\Gamma(R)$ is not threshold, by Theorem \ref{t:threshold}. \qquad$\Box$
\end{proof}
\section{The zero-divisor graphs of local rings with countable cardinality are universal}
It is natural to wonder about embedding infinite graphs in zero-divisor graphs
of infinite rings. Here we consider the countable case.
Our tool will be the \emph{Rado graph}, or \emph{countable random graph}
$\mathcal{R}$: this was first explicitly constructed by Rado, but about the
same time Erd\H{o}s and R\'enyi proved that if a countable graph was selected
by choosing edges independently with probability $\frac{1}{2}$ from the
$2$-subsets of a countably infinite set, the resulting graph is isomorphic
to $\mathcal{R}$ with probability~$1$. Among many beautiful properties of
this graph (for which we refer to \cite{rg}), we require the following:
\begin{itemize}
\item $\mathcal{R}$ is the unique countable graph having the property that,
given any two finite disjoint sets $U$ and $V$ of vertices, there is a vertex
$z$ joined to every vertex in $U$ and to none in $V$.
\item Every finite or countable graph is embeddable in $\mathcal{R}$ as an
induced subgraph.
\end{itemize}
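The first (extension) property can be tested in Rado's explicit model on the natural numbers, in which $i\sim j$ (for $i<j$) exactly when the $i$-th binary digit of $j$ is $1$. For finite disjoint $U,V$ a witness can even be written down: $z=2^N+\sum_{u\in U}2^u$ with $N>\max(U\cup V)$. A minimal sketch:

```python
def rado_adjacent(i, j):
    """Rado's BIT model on the naturals: for i < j, i ~ j iff bit i of j is 1."""
    i, j = min(i, j), max(i, j)
    return i != j and (j >> i) & 1 == 1

def witness(U, V):
    """z joined to everything in U and nothing in V (U, V disjoint finite sets):
    take z = 2^N + sum of 2^u over u in U, with N beyond max(U | V)."""
    N = max(U | V) + 1
    return (1 << N) + sum(1 << u for u in U)

U, V = {0, 3, 5}, {1, 2, 7}
z = witness(U, V)
assert all(rado_adjacent(z, u) for u in U)
assert not any(rado_adjacent(z, v) for v in V)
```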
We refer to \cite{cl} for terminology and results on Model Theory, in
particular the Compactness and L\"owenheim--Skolem theorems.
\begin{thm}
There is a countable local ring with unity having the property that
every finite or countable graph is an induced subgraph of its zero-divisor
graph.
\end{thm}
\begin{proof}
It suffices to show that the Rado graph $\mathcal{R}$ can be embedded in the
zero-divisor graph of a countable ring, since every finite or countable
graph is an induced subgraph of the Rado graph.
We give two proofs of this. The first is simple and direct, using the
method we used in Theorem~\ref{t:loc_universal}. The second is non-constructive
but shows a technique which we hope will be of wider use.\\
\noindent \textbf{First proof.} Let $F$ be a field (for simplicity the field with
two elements). Let $R=F[X]/S$, where the set $X$ of indeterminates is bijective
with the vertex set of $\mathcal{R}$ (with $x_i$ corresponding to vertex $i$),
and $S$ is the ideal generated by all homogeneous polynomials of degree~$3$
in these variables together with all products $x_ix_j$ for which $\{i,j\}$ is
an edge of $\mathcal{R}$. Just as in the proof of Theorem~\ref{t:loc_universal},
the set $\{x_i+S:i\in V(\mathcal{R})\}$ induces a subgraph of the zero-divisor
graph of $R$ isomorphic to $\mathcal{R}$.
The ring $R$ has the properties that it is a local ring (though its maximal
ideal is not finitely generated) and its automorphism group contains the
automorphism group of $\mathcal{R}$.\\
\noindent \textbf{Second proof.} We use basic results from model theory.
We take the first-order language of rings with unity together with an
additional unary relation $S$. Now consider the
following set $\Sigma$ of first-order sentences:
\begin{itemize}
\item[(a)] the axioms for a commutative ring with unity;
\item[(b)] the statement that every element of $S$ is a (non-zero)
zero-divisor;
\item[(c)] for each pair $(m,n)$ of non-negative integers, the sentence
stating that for any given $m+n$ elements $x_1,\ldots,x_m,y_1,\ldots,y_n$, all
satisfying $S$, there exists an element $z$ such that $z$ satisfies $S$,
that $x_iz=0$ for $i=1,\ldots,m$, and that $y_jz\ne0$ for $j=1,\ldots,n$.
\end{itemize}
We claim that any finite set of these sentences has a model. Any
finite subset of the sentences in (c) is satisfied in some finite graph
(taking ``product zero'' to mean ``adjacent''), with $S$ satisfied by the
vertices used in the embedding: indeed, a sufficiently large finite random
graph will have this property. By our earlier results, this finite graph is
embeddable as an induced subgraph in the zero-divisor graph of some finite
commutative ring with unity.
By the First-Order Compactness Theorem, the entire set $\Sigma$ has a model $R$.
This says that the set of ring elements satisfying $S$ induces a subgraph
isomorphic to Rado's graph $\mathcal{R}$ in the zero-divisor graph of $R$. (The
sentences under (c) are first-order axioms for $\mathcal{R}$.) But
$\mathcal{R}$ contains every finite or countable graph as an induced subgraph.
Now the downward L\"owenheim--Skolem theorem guarantees that there is a
countable ring whose zero-divisor graph contains $\mathcal{R}$, and hence all
finite and countable graphs, as induced subgraphs.\qquad$\Box$
\end{proof}
\begin{prob} Is there a local ring whose maximal ideal is finitely
generated as ideal and whose zero-divisor graph embeds the Rado graph as an
induced subgraph?
\end{prob}
\begin{prob} Find the smallest number $N$ (in terms of $n$ and $m$)
such that, if $G$ is a graph with $n$ vertices and $m$ edges, then $G$ is
an induced subgraph of the zero divisor graph of a ring (commutative with
unity) of order at most $N$.
\end{prob}
Let $f(n,m)$ be this number. Our construction using Boolean rings shows that $f(n,m)\le 2^p$, where $p$ is the
least number of points in a representation of the complement of $G$ as an intersection graph.
Moreover, Proposition~\ref{intersection} provides such a representation with $p=\overline{m}+a+b$,
where $\overline{m}$, $a$ and $b$ are the numbers of edges, isolated vertices and isolated edges of the complement of $G$.
This gives an upper bound for $f(n,m)$.
\begin{prob}
Is this best possible?
\end{prob}
We can ask similar questions for subclasses of rings (such as local rings),
or for variants of the zero-divisor graph.
\section{Other graphs from rings}
There are several natural classes of graphs containing the threshold graphs.
These include split graphs (defined earlier), chordal graphs (containing no
induced cycle of length greater than~$3$), cographs (containing no induced
path on four vertices) and perfect graphs (containing no induced odd cycle
of length at least~$5$ or complement of one). For each of these classes
$\mathcal{C}$, we can ask:
\begin{prob} For which commutative rings with unity does the
zero-divisor graph belong to $\mathcal{C}$?
\end{prob}
Another very general problem is to examine the induced subgraphs of various
generalizations of zero-divisor graphs, such as the extended zero-divisor
graphs and the trace graphs~\cite{Trace,Siva1,Siva2,Siva3,Siva4}.
As a contribution to this problem, here is an example of a kind of universality
question that can be asked when we have graphs defined from algebraic
structures where one is a subgraph of the other. The pattern for this theorem
is \cite[Theorem 5.9]{c22}, the analogous result for the enhanced power graph
and commuting graph of a group.
Let $A$ be a commutative ring with unity. We define the zero-divisor graph
of $A$ to have as vertices all the non-zero elements of $A$, two vertices
$a$ and $b$ joined if $ab=0$. (This is not the usual definition since we
don't restrict just to zero-divisors: vertices which are not zero-divisors are
isolated. This doesn't affect our conclusion.) Given a positive integer $m$, we
define the $m$-dimensional dot product graph to have vertices all the
non-zero elements of $A^m$ with two vertices $(a_1,a_2,\ldots,a_m)$ and
$(b_1,b_2,\ldots,b_m)$ joined if $a\cdot b=0$, where
\[a \cdot b=a_1b_1+a_2b_2+\cdots+a_mb_m.\]
Note that $A^m$ is a ring, with the product $*$ given by
\[a*b=(a_1b_1,a_2b_2,\ldots,a_mb_m).\]
It is clear that $a*b=0$ implies $a\cdot b=0$, so the zero-divisor graph of $A^m$
is a spanning subgraph of the $m$-dimensional dot product graph of $A$.
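To make the containment concrete, here is a small sketch (not taken from the paper; the choice of the toy ring $A=\mathbb{Z}/4\mathbb{Z}$ and all names are ours) comparing the zero-divisor graph of $A^2$ with the $2$-dimensional dot product graph of $A$:

```python
# Sketch: zero-divisor graph of A^2 vs. 2-dimensional dot product graph of A,
# for the toy ring A = Z/4Z (our choice of example, not the paper's).
from itertools import product

MOD = 4  # A = Z/4Z

def dot(a, b):
    # 2-dimensional dot product a.b = a1*b1 + a2*b2 in A
    return (a[0] * b[0] + a[1] * b[1]) % MOD

def star(a, b):
    # componentwise product in the ring A^2
    return ((a[0] * b[0]) % MOD, (a[1] * b[1]) % MOD)

vertices = [v for v in product(range(MOD), repeat=2) if v != (0, 0)]

zd_edges = {frozenset({a, b}) for a in vertices for b in vertices
            if a != b and star(a, b) == (0, 0)}
dot_edges = {frozenset({a, b}) for a in vertices for b in vertices
             if a != b and dot(a, b) == 0}

# a*b = 0 implies a.b = 0, so the zero-divisor graph is a spanning subgraph
assert zd_edges <= dot_edges
assert len(zd_edges) < len(dot_edges)  # strict here: e.g. (1,1) and (2,2)
```

For this toy ring the containment is strict: $(1,1)$ and $(2,2)$ are joined in the dot product graph (since $1\cdot2+1\cdot2=4\equiv0$) but not in the zero-divisor graph of $A^2$.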
We prove the following:
\begin{thm}
Take the complete graph on a finite set $X$, with the edges coloured red, green
and blue in any manner whatever. Then there is a ring $A$ and an embedding of
$X$ into $A^2$ such that
\begin{itemize}\itemsep0pt
\item the red edges are edges of the zero-divisor graph of $A^2$;
\item the green edges are edges of the $2$-dimensional dot product graph of $A$
but not of the zero-divisor graph of $A^2$;
\item the blue edges are not edges of the $2$-dimensional dot product graph
of $A$.
\end{itemize}
\label{t:three}
\end{thm}
\begin{proof}
First, by enlarging $X$ by at most four points joined to all others with
blue or green edges, we can assume that neither the blue nor the green
subgraph has isolated vertices or edges.
Let $P$ be the set of blue or green edges. For each vertex $v\in X$, define
a pair $(S(v),T(v))$ of subsets of $P$ by the rule that $S(v)$ is the set
of green edges containing $v$, while $T(v)$ is the set of blue or green edges
containing $v$. The assumption in the previous paragraph shows that the map
$\theta:v\mapsto(S(v),T(v))$ is one-to-one.
Now let $A$ denote the Boolean ring on $P$. Then $\theta$ is an embedding of
$X$ into $A^2\setminus\{0\}$. We claim that this has the required
property.
\begin{itemize}
\item Suppose that $e=\{v,w\}$ is a red edge. Then $S(v)\cap S(w)=\emptyset$
and $T(v)\cap T(w)=\emptyset$; so $\theta(v)*\theta(w)=0$, whence
$\theta(v)$ and $\theta(w)$ are joined in the zero-divisor graph of $A^2$.
\item Suppose that $e=\{v,w\}$ is green. Then $S(v)\cap S(w)=\{e\}$ and
$T(v)\cap T(w)=\{e\}$; so $\theta(v)*\theta(w)\ne0$ but
$\theta(v)\cdot\theta(w)=0$. Thus $\theta(v)$ and $\theta(w)$ are joined in the
dot product graph but not the zero-divisor graph.
\item Suppose that $e=\{v,w\}$ is blue. Then $S(v)\cap S(w)=\emptyset$
and $T(v)\cap T(w)=\{e\}$, so $\theta(v)\cdot\theta(w)=\{e\}\ne0$. So $\theta(v)$
and $\theta(w)$ are not joined in the dot product graph.
\end{itemize}
The theorem is proved.
\end{proof}
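The construction in the proof can be checked mechanically. The sketch below is our own: we colour $K_6$ so that the green subgraph is a $6$-cycle and the blue subgraph a pair of triangles (so the assumption of the first paragraph of the proof already holds), and represent the Boolean ring on $P$ by sets, with intersection as product and symmetric difference as sum:

```python
# Verify the red/green/blue embedding of the theorem on an explicit
# 3-colouring of K_6 (our example): green subgraph = 6-cycle, blue
# subgraph = two triangles, so neither has isolated vertices or edges.
from itertools import combinations

X = range(6)
green = {frozenset(e) for e in [(0,1), (1,2), (2,3), (3,4), (4,5), (5,0)]}
blue  = {frozenset(e) for e in [(0,2), (2,4), (0,4), (1,3), (3,5), (1,5)]}
red   = {frozenset(e) for e in combinations(X, 2)} - green - blue

def theta(v):
    S = frozenset(e for e in green if v in e)         # green edges at v
    T = frozenset(e for e in green | blue if v in e)  # green or blue edges at v
    return (S, T)

def star(a, b):
    # product in A^2, A the Boolean ring on P: componentwise intersection
    return (a[0] & b[0], a[1] & b[1])

def dot(a, b):
    # 2-dimensional dot product: a1*b1 + a2*b2 = symmetric difference
    return (a[0] & b[0]) ^ (a[1] & b[1])

empty = frozenset()
assert len({theta(v) for v in X}) == 6                 # theta is one-to-one
for e in red:
    v, w = e
    assert star(theta(v), theta(w)) == (empty, empty)  # zero-divisor edge
for e in green:
    v, w = e
    assert star(theta(v), theta(w)) != (empty, empty)  # not a ZD edge ...
    assert dot(theta(v), theta(w)) == empty            # ... but a dot edge
for e in blue:
    v, w = e
    assert dot(theta(v), theta(w)) != empty            # no dot product edge
```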
\begin{rem}
This theorem has several consequences:
\begin{itemize}
\item By ignoring the distinction between green and blue, we have another
construction showing the universality of the zero-divisor graphs of rings.
\item By ignoring the distinction between red and green, we have shown the
universality of the $2$-dimensional dot product graphs of rings.
\item By ignoring the distinction between red and blue, we have shown that
the graphs obtained from the $2$-dimensional dot product graph of $A$ by
deleting the edges of the zero-divisor graph of $A^2$ are universal.
\end{itemize}
\end{rem}
The theorem suggests several questions:
\begin{prob}
\begin{itemize}\itemsep0pt
\item Can we restrict the ring $A$ to a special class such as local rings?
\item Can we prove similar results for other pairs of graphs?
\item Can we prove similar results for more than two graphs?
\end{itemize}
\end{prob}
\noindent \textbf{Acknowledgment.}
\small{This work began in the Research Discussion on Groups and Graphs, organised by
Ambat Vijayakumar and Aparna Lakshmanan at CUSAT, Kochi, to whom we express
our gratitude. We are also grateful to the International Workshop on Graphs
from Algebraic Structures, organised by Manonmaniam Sundaranar University and
the Academy of Discrete Mathematics and Applications, for the incentive to
complete the work and the inspiration for Theorem~\ref{t:three}.} For T. Tamizh Chelvam, this research was supported by the University Grants Commission's Start-Up Grant, Government of India, grant No. F. 30-464/2019 (BSR) dated 27.03.2019, while for the last author, it was supported by the CSIR Emeritus Scientist Scheme (No. 21(1123)/20/EMR-II) of the Council of Scientific and Industrial Research, Government of India. Peter J. Cameron acknowledges the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme \textit{Groups, representations and applications: new perspectives} (supported by \mbox{EPSRC} grant no.\ EP/R014604/1), where he held a Simons Fellowship.
| {
"timestamp": "2022-07-26T02:17:51",
"yymm": "2207",
"arxiv_id": "2207.11741",
"language": "en",
"url": "https://arxiv.org/abs/2207.11741",
"abstract": "The zero-divisor graph of a finite commutative ring with unity is the graph whose vertex set is the set of zero-divisors in the ring, with $a$ and $b$ adjacent if $ab=0$. We show that the class of zero-divisor graphs is universal, in the sense that every finite graph is isomorphic to an induced subgraph of a zero-divisor graph. This remains true for various restricted classes of rings, including boolean rings, products of fields, and local rings. But in more restricted classes, the zero-divisor graphs do not form a universal family. For example, the zero-divisor graph of a local ring whose maximal ideal is principal is a threshold graph; and every threshold graph is embeddable in the zero-divisor graph of such a ring. More generally, we give necessary and sufficient conditions on a non-local ring for which its zero-divisor graph to be a threshold graph. In addition, we show that there is a countable local ring whose zero-divisor graph embeds the Rado graph, and hence every finite or countable graph, as induced subgraph. Finally, we consider embeddings in related graphs such as the $2$-dimensional dot product graph.",
"subjects": "Rings and Algebras (math.RA); Combinatorics (math.CO)",
"title": "Induced subgraphs of zero-divisor graphs"
} |
https://arxiv.org/abs/0711.0906 | Multivariate Fuss-Catalan numbers | Catalan numbers $C(n)=\frac{1}{n+1}{2n\choose n}$ enumerate binary trees and Dyck paths. The distribution of paths with respect to their number $k$ of factors is given by ballot numbers $B(n,k)=\frac{n-k}{n+k}{n+k\choose n}$. These integers are known to satisfy simple recurrence, which may be visualised in a ``Catalan triangle'', a lower-triangular two-dimensional array. It is surprising that the extension of this construction to 3 dimensions generates integers $B_3(n,k,l)$ that give a 2-parameter distribution of $C_3(n)=\frac 1 {2n+1} {3n\choose n}$, which may be called order-3 Fuss-Catalan numbers, and enumerate ternary trees. The aim of this paper is a study of these integers $B_3(n,k,l)$. We obtain an explicit formula and a description in terms of trees and paths. Finally, we extend our construction to $p$-dimensional arrays, and in this case we obtain a $(p-1)$-parameter distribution of $C_p(n)=\frac 1 {(p-1)n+1} {pn\choose n}$, the number of $p$-ary trees. | \section{Catalan triangle, binary trees, and Dyck paths}
We recall in this section well-known results about Catalan numbers and ballot numbers.
The {\em Catalan numbers}
$$C(n)=\frac{1}{n+1}{2n\choose n}$$
are integers that appear in many combinatorial problems. These numbers first appeared in Euler's work as the number of triangulations of a polygon by means of non-intersecting diagonals. Stanley \cite{stanley,stanweb} maintains a dynamic list of exercises related to Catalan numbers, including (at this date) 127 combinatorial interpretations.
Closely related to Catalan numbers are {\em ballot numbers}. Their name is due to the fact that they are the solution of the so-called {\em ballot problem}: we consider an election between two candidates A and B, who receive respectively $a$ and $b$ votes, with $a>b$. The question is: what is the probability that during the counting of votes, A stays ahead of B? The answer will be given below, and we refer to \cite{bertrand} for Bertrand's first solution, and to \cite{andre} for Andr\'e's beautiful solution using the ``reflection principle''.
Since our goal here is different, we shall neither define ballot numbers by the previous statement, nor by their explicit formula, but we introduce integers $B(n,k)$ defined for a positive integer $n$ and a nonnegative integer $k$ by the following conditions:
\begin{itemize}
\item $B(1,0)=1$;
\item $\forall n> 1$ and $0\le k< n$, $B(n,k)=\sum_{i=0}^k B(n-1,i)$;
\item $\forall k\ge n$, $B(n,k)=0$.
\end{itemize}
Observe that the recursive formula in the second condition is equivalent to:
\begin{equation}\label{recc}
B(n,k)=B(n-1,k)+B(n,k-1).
\end{equation}
We shall present the $B(n,k)$'s by the following triangular representation (zero entries are omitted) where moving down increases $n$ and moving right increases $k$.
\vskip 0.2cm
$
\begin{array}{rrrrrr}
1&&&&&\cr
1&1&&&&\cr
1&2&2&&&\cr
1&3&5&5&&\cr
1&4&9&14&14&\cr
1&5&14&28&42&42\cr
\end{array}
$
\vskip 0.2cm
The crucial observation is that computing the horizontal sums of these integers gives: $1,\ 2,\ 5,\ 14,\ 42,\ 132$. We recognize the first terms of the Catalan series, and this intuition will be confirmed in Proposition \ref{prop1}, after introducing combinatorial objects.
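The recurrence, the triangle, and its row sums can be reproduced by a short computation; the following sketch (our own notation) checks them against the closed forms:

```python
# Verify the defining recurrence of B(n,k) and that the horizontal sums
# of the Catalan triangle are the Catalan numbers.
from math import comb

def B(n, k):
    # B(1,0) = 1; B(n,k) = sum_{i<=k} B(n-1,i) for 0 <= k < n; else 0
    if k >= n or k < 0:
        return 0
    if n == 1:
        return 1
    return sum(B(n - 1, i) for i in range(k + 1))

def catalan(n):
    return comb(2 * n, n) // (n + 1)

rows = [[B(n, k) for k in range(n)] for n in range(1, 7)]
assert rows[5] == [1, 5, 14, 28, 42, 42]                      # last row above
assert [sum(r) for r in rows] == [catalan(n) for n in range(1, 7)]
```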
\vskip 0.2cm
A {\em binary tree} is a tree in which every internal node has exactly 2 sons. The number of binary trees with $n$ internal nodes is given by the $n$-th Catalan number. The nodes of the following tree are labelled to explain a bijection described in the next paragraph: internal nodes are labelled by letters and external nodes by numbers.
\vskip 0.2cm
{\centerline{\epsffile{bitree.eps}}}
A {\em Dyck path} is a path consisting of steps $(1,1)$ and $(1,-1)$, starting from $(0,0)$, ending at $(2n,0)$, and remaining in the half-plane $y\ge0$ (we shall sometimes say ``remaining above the horizontal axis'' in the same sense). The number of Dyck paths of length $2n$ is also given by the $n$-th Catalan number. More precisely, the depth-first search of the tree gives a bijection between binary trees and Dyck paths: we associate to each external node (except the left-most one) a $(1,1)$ step and to each internal node a $(1,-1)$ step by searching recursively the left son, then the right son, then the root. As an example, we show below the Dyck path corresponding to the binary tree given above. The labels on steps correspond to those on the nodes of the tree.
{\centerline{\epsffile{dyck.eps}}}
An important parameter in our study will be the length of the right-most sequence of $(1,-1)$ steps of the path. This parameter equals 2 in our example.
Observe that under the correspondence between paths and trees, this parameter corresponds to the length of the right-most string of right sons in the tree. We shall use the expressions {\em last down sequence} and {\it last right string}, for these parts of the path and of the tree.
Now we come to the announced result. It is well-known and simple, but is the starting point of our work.
\begin{prop}\label{prop1}
For $n$ a positive integer, we have the following equality:
$$\sum_{k=0}^{n-1} B(n,k)=C(n)=\frac 1 {n+1} {2n \choose n}.$$
\end{prop}
\begin{proof}
Let us denote by ${\mathcal C}_{n,k}$ the set of Dyck paths of length $2n$ with a last down sequence of length equal to $n-k$.
We shall prove that $B(n,k)$ is the cardinality of ${\mathcal C}_{n,k}$.
The proof is by induction on $n$. If $n=1$, this is trivial. If $n>1$, let us suppose that $B(n-1,k)$ is the cardinality of ${\mathcal C}_{n-1,k}$ for $0\le k<n-1$. Let us consider an element of ${\mathcal C}_{n,k}$. If we erase the last step $(1,1)$ and the following step $(1,-1)$, we obtain a Dyck path of length $2(n-1)$, with a last down sequence of length $n-1-l$ with $l\le k$. If we keep track of the integer $k$, we obtain a bijection between ${\mathcal C}_{n,k}$ and $\cup_{l\le k}{\mathcal C}_{n-1,l}$. We mention that this process is very similar to the ECO method \cite{eco}.
This is a combinatorial proof of Proposition \ref{prop1}.
\end{proof}
\noindent {\bf Remark 1.2.}
The integers $B(n,k)$ are known as {\em ballot numbers} and are given by the explicit formula:
\begin{equation}\label{ballot}
B(a,b)=\frac{a-b}{a+b}{a+b\choose a}.
\end{equation}
This expression can be verified directly by checking the recurrence \pref{recc}.
We can alternatively use the reflection principle (see \cite{hilton} for a clear presentation), or the cycle lemma ({\it cf.} \cite{cylem}), which will be used in the next sections to obtain a formula in the general case.
The expression \pref{ballot} constitutes a solution to the ballot problem. For this, we use a classical interpretation in terms of paths: we represent a vote for A by an up step, and a vote for B by a down step. The total number of countings of votes, or of paths from $(0,0)$ to $(a+b,a-b)$, is given by the binomial ${a+b\choose a}$. The countings such that A stays ahead of B correspond to paths remaining above the horizontal axis. Their number is given by $B(a,b)$. This implies that the probability asked for at the beginning of this section is $\frac{a-b}{a+b}$.
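For small values the ballot problem can be settled by brute force; the following sketch (our own) checks formula \pref{ballot} against an exhaustive count of the countings of votes for $a=4$, $b=2$:

```python
# Exhaustive check of the ballot problem for a=4, b=2: count the orders of
# counting in which A stays strictly ahead of B throughout.
from math import comb
from itertools import permutations

def B(a, b):
    # ballot numbers, closed form: (a-b)/(a+b) * C(a+b, a)
    return (a - b) * comb(a + b, a) // (a + b)

a, b = 4, 2
countings = set(permutations('A' * a + 'B' * b))  # distinct vote orders
ahead = 0
for order in countings:
    lead, ok = 0, True
    for vote in order:
        lead += 1 if vote == 'A' else -1
        if lead <= 0:           # A no longer strictly ahead
            ok = False
            break
    ahead += ok
assert len(countings) == comb(a + b, a)   # all paths from (0,0) to (a+b, a-b)
assert ahead == B(a, b)                   # so the probability is (a-b)/(a+b)
```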
Of course, we could have used expression \pref{ballot} to prove Proposition \ref{prop1} by a simple computation, but our proof explains more about the combinatorial objects and can be adapted to ternary trees in the next section.
\medskip
It should be mentioned here that our study of multivariate Fuss-Catalan numbers has nothing to do with the many-candidate ballot problem, as considered for example in \cite{zeilb} or \cite{nath}. In particular, the multivariate ballot numbers considered in these papers do not sum to the Fuss-Catalan numbers $C_p(n)=\frac 1 {(p-1)n+1} {pn\choose n}$. Conversely, the numbers studied in the present article do not give any answer to the generalized ballot problem.
\noindent {\bf Remark 1.3.}
Our Catalan array presents similarities with Riordan arrays, but it is not a Riordan array. It may be useful to clarify this point. We recall ({\it cf.} \cite{riordan}) that a Riordan array $M=(m_{i,j})\in{\mathbb C}^{{\mathbb N}\times{\mathbb N}}$ is defined with respect to two generating functions
$$g(x)=\sum g_n x^n \ \ \ {\rm and}\ \ \ f(x)=\sum f_n x^n$$
and is such that
$$M_j(x)=\sum_{n\ge 0} m_{n,j} x^n=g(x){f(x)}^j.$$
We easily observe that a Riordan array sharing the first two columns $M_0(x)$ and $M_1(x)$ with our Catalan array must have $g(x)=\frac{1}{1-x}$ and $f(x)=\frac{x}{1-x}$, which gives the Pascal triangle.
In fact, the Riordan array relative to $g(x)={\bf C}(x)=\sum C(n) x^n$ and $f(x)=x\, {\bf C}(x)$ gives the ballot numbers, but requires knowledge of the Catalan numbers.
\section{Fuss-Catalan tetrahedron and ternary trees}
\subsection{Definitions}
This section, which is the heart of this work, is the study of a 3-dimensional analogue of the Catalan triangle of the previous section. That is we consider exactly the same recurrence, and let the array grow, not in 2, but in 3 dimensions. More precisely, we introduce the sequence $B_3(n,k,l)$ indexed by a positive integer $n$ and nonnegative integers $k$ and $l$, and defined recursively by:
\begin{itemize}
\item $B_3(1,0,0)=1$;
\item $\forall n>1$, $k+l<n$, $B_3(n,k,l)=\sum_{0\le i\le k, 0\le j\le l} B_3(n-1,i,j)$;
\item $\forall k+l\ge n$, $B_3(n,k,l)=0$.
\end{itemize}
Observe that the recursive formula in the second condition is equivalent to:
\begin{equation}\label{rec}
B_3(n,k,l)=B_3(n-1,k,l)+B_3(n,k-1,l)+B_3(n,k,l-1)-B_3(n,k-1,l-1)
\end{equation}
and this expression can be used to make some computations lighter, but the presentation above explains more about the generalization of the definition of the ballot numbers $B(n,k)$.
Because of the planar structure of the sheet of paper, we are forced to present the tetrahedron of $B_3(n,k,l)$'s by its sections with a given $n$.
\vskip 0.2cm
$
n=1 \longrightarrow
\left[
\begin{array}{r}
1
\end{array}
\right]
$
$
n=2 \longrightarrow
\left[
\begin{array}{rr}
1&1\cr
1&
\end{array}
\right]
$
$
n=3 \longrightarrow
\left[
\begin{array}{rrr}
1&2&2\cr
2&3&\cr
2&&
\end{array}
\right]
$
$
n=4 \longrightarrow
\left[
\begin{array}{rrrr}
1&3&5&5\cr
3&8&10&\cr
5&10&&\cr
5&&&
\end{array}
\right]
$
$
n=5 \longrightarrow
\left[
\begin{array}{rrrrr}
1&4&9&14&14\cr
4&15&30&35&\cr
9&30&45&&\cr
14&35&&&\cr
14&&&&
\end{array}
\right]
$
\vskip 0.2cm
It is clear that $B_3(n,k,0)=B_3(n,0,k)=B(n,k)$. The reader may easily check that when we compute $\sum_{k,l} B_3(n,k,l)$, we obtain: $1,\ 3,\ 12,\ 55,\ 273$. These integers are the first terms of the following sequence ({\it cf.} \cite{njas}):
$$C_3(n)=\frac 1 {2n+1} {3n\choose n}.$$
This fact will be proven in Proposition \ref{prop2}.
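The sections above and their sums are easy to reproduce with a short computation (our own notation):

```python
# Verify the defining recurrence of B_3(n,k,l) and that the sums of the
# sections equal the order-3 Fuss-Catalan numbers C_3(n).
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def B3(n, k, l):
    # B_3(1,0,0) = 1; sum over the "rectangle" below (k,l) for k+l < n; else 0
    if k < 0 or l < 0 or k + l >= n:
        return 0
    if n == 1:
        return 1
    return sum(B3(n - 1, i, j) for i in range(k + 1) for j in range(l + 1))

def C3(n):
    return comb(3 * n, n) // (2 * n + 1)

sums = [sum(B3(n, k, l) for k in range(n) for l in range(n))
        for n in range(1, 6)]
assert sums == [1, 3, 12, 55, 273]
assert all(sum(B3(n, k, l) for k in range(n) for l in range(n)) == C3(n)
           for n in range(1, 9))
```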
\subsection{Combinatorial interpretation}
Fuss\footnote{Nikolai Fuss (Basel, 1755 -- St Petersburg, 1826) helped Euler prepare over 250 articles for publication over a period of about seven years, during which he acted as Euler's assistant, and was from 1800 to 1826 permanent secretary to the St Petersburg Academy.}-Catalan numbers ({\it cf.} \cite{FC,hilton}) are given by the formula
\begin{equation}\label{fuss}
C_p(n)=\frac1{(p-1)n+1}{pn\choose n},
\end{equation}
and $C_3(n)$ appear as order-3 Fuss-Catalan numbers. The integers $C_3(n)$ are known \cite{njas} to count {\em ternary trees}, {\it i.e.} trees in which every internal node has exactly 3 sons.
\vskip 0.2cm
{\centerline{\epsffile{tertree.eps}}}
Ternary trees are in bijection with {\em 2-Dyck paths}, which are defined as paths from $(0,0)$ to $(3n,0)$ with steps $(1,1)$ and $(1,-2)$, and remaining above the line $y=0$. The bijection between these objects is the same as in the case of binary trees, {\it i.e.} a depth-first search, with the difference that here an internal node is translated into a $(1,-2)$ step. To illustrate this bijection, we give the path corresponding to the previous example of ternary tree:
{\centerline{\epsffile{terpath.eps}}}
We shall consider these paths with respect to the position of their down steps. The {\em height} of a down step is defined as the height of its end-point. Let ${\mathcal D}_{n,k,l}$ denote the set of 2-Dyck paths of length $3n$, with $k$ down steps at even height and $l$ down steps at odd height, excluding the last sequence of down steps. By definition, the last sequence of down steps is of length $n-k-l$.
\begin{prop}\label{prop2}
We have
$$\forall n>0,\ \ \ \sum_{k,l} B_3(n,k,l)=C_3(n)=\frac 1 {2n+1} {3n\choose n}.$$
Moreover, $B_3(n,k,l)$ is the cardinality of ${\mathcal D}_{n,k,l}$.
\end{prop}
\begin{proof}
Let $k$ and $l$ be fixed. Let us consider an element of ${\mathcal D}_{n,k,l}$. If we cut this path after its $(2n-2)$-th up step, and complete with down steps, we obtain a 2-Dyck path of length $3(n-1)$ (see figure below). It is clear that this path is an element of ${\mathcal D}_{n-1,i,j}$ for some $i\le k$ and $j\le l$. We can furthermore reconstruct the original path from the truncated one, if we know $k$ and $l$. We only have to delete the last sequence of down steps (here the dashed line), to draw $k-i$ down steps, one up step, $l-j$ down steps, one up step, and to complete with down steps. This gives a bijection from ${\mathcal D}_{n,k,l}$ to $\cup_{0\le i\le k,0\le j\le l}{\mathcal D}_{n-1,i,j}$, which implies Proposition \ref{prop2}.
{\centerline{\epsffile{prop2.eps}}}
\end{proof}
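Proposition \ref{prop2} can also be confirmed by exhaustive enumeration for small $n$; the following sketch (our own) generates all 2-Dyck paths for $n=4$ and tabulates the bi-statistic $(k,l)$:

```python
# Enumerate all 2-Dyck paths of length 3n and check that the number of
# paths with k down steps at even height and l at odd height (excluding
# the last sequence of down steps) is B_3(n,k,l).
from functools import lru_cache
from collections import Counter

@lru_cache(maxsize=None)
def B3(n, k, l):
    if k < 0 or l < 0 or k + l >= n:
        return 0
    if n == 1:
        return 1
    return sum(B3(n - 1, i, j) for i in range(k + 1) for j in range(l + 1))

def two_dyck_paths(n, h=0, ups=0, downs=0, path=()):
    # paths of 2n up steps (1,1) and n down steps (1,-2), height always >= 0
    if ups == 2 * n and downs == n:
        yield path
        return
    if ups < 2 * n:
        yield from two_dyck_paths(n, h + 1, ups + 1, downs, path + (1,))
    if downs < n and h >= 2:
        yield from two_dyck_paths(n, h - 2, ups, downs + 1, path + (-2,))

n = 4
counts = Counter()
for path in two_dyck_paths(n):
    m = len(path)
    while m and path[m - 1] == -2:   # strip the last sequence of down steps
        m -= 1
    h = k = l = 0
    for step in path[:m]:
        h += step
        if step == -2:               # height of a down step = its end-point
            if h % 2 == 0:
                k += 1
            else:
                l += 1
    counts[(k, l)] += 1

assert sum(counts.values()) == 55    # C_3(4) paths in total
assert all(counts[(k, l)] == B3(n, k, l) for k in range(n) for l in range(n))
```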
\noindent {\bf Remark 2.2.}
It is interesting to translate the bi-statistic introduced on 2-Dyck paths to the case of ternary trees. As previously, we consider the depth-first search of the tree, and shall not consider the last right string. We define ${\mathcal T}_{n,k,l}$ as the set of ternary trees with $n$ internal nodes, $k$ of them being encountered in the search after an even number of leaves and $l$ after an odd number of leaves. By the bijection between trees and paths, and Proposition \ref{prop2}, we have that the cardinality of ${\mathcal T}_{n,k,l}$ is $B_3(n,k,l)$.
\noindent {\bf Remark 2.3.}
It is clear from the definition that:
$$B_3(n,k,l)=B_3(n,l,k).$$
But this fact is not obvious when considering trees or paths, since the statistics defined are not clearly symmetric. To explain this, we can introduce an involution on the set of ternary trees which sends an element of ${\mathcal T}_{n,k,l}$ to ${\mathcal T}_{n,l,k}$. To do this, we can exchange for each node of the last right string its left and its middle son, as in the following picture. Since the number of leaves of a ternary tree is odd, every ``even'' node becomes an odd one, and conversely.
{\centerline{\epsffile{inv.eps}}}
\subsection{Explicit formula}
Now a natural question is to obtain explicit formulas for the $B_3(n,k,l)$. The answer is given by the following proposition.
\begin{prop}\label{prop2.1}
The integers $B_3(n,k,l)$ are given by
\begin{equation}\label{prop2.1eq}
B_3(n,k,l)={n+k\choose k}{n+l-1\choose l}\frac{n-k-l}{n+k}
\end{equation}
\end{prop}
\begin{proof}
We use a combinatorial method to enumerate ${\mathcal D}_{n,k,l}$.
The method is a variation of the cycle lemma \cite{cylem} (called ``penetrating analysis'' in \cite{gopal}).
If we forget the condition of ``positivity'' ({\it i.e.} the path remains above the line $y=0$), and cut the last down sequence, a path consists of:
\begin{itemize}
\item at even height: $n$ up steps, and $k$ down steps;
\item at odd height: $n$ up steps and $l$ down steps.
\end{itemize}
An important remark is to remember that an element of ${\mathcal D}_{n,k,l}$ has an up step just before the last sequence of down steps!
If we suppose that all these steps are distinguished, we obtain:
\begin{itemize}
\item $\frac{(n+k)!}{n!k!}$ choices for even places;
\item $\frac{(n-1+l)!}{(n-1)!l!}$ choices for odd places (we cannot put any odd down step after the last odd step).
\end{itemize}
Now we group the paths which are ``even permutations'' of a given path $P$. By even permutations, we mean cycle permutations which preserve the parity of the height of the steps.
We want to prove that the proportion of elements of ${\mathcal D}_{n,k,l}$ in any even orbit ({\it i.e. } in any orbit under even permutations) is given by $\frac{n-k-l}{n+k}$.
We suppose first that the path $P$ is acyclic (as a word): $P$ cannot be written as $P=U^p$ with $U$ a word in the two letters $(1,1)$ and $(1,-2)$. It is clear that such a path gives $n+k$ different even permutations.
Now we have to keep only those which give elements of ${\mathcal D}_{n,k,l}$.
To do this we consider the concatenation of $P$ and $P'$, which is a duplicate of $P$.
\centerline{\epsffile{gopal.eps}}
The cyclic permutations of $P$ (not necessarily even) are the subpaths of $P+P'$ of horizontal length $2n+k+l$. The number of such paths that remain above the horizontal axis, and end with an up step, is the number of (up) steps of $P$ in the light of a horizontal light source coming from the right. The only transformation is to put the illuminated up step at the end of the path. The number of illuminated up steps is $2n-2k-2l$, since every down step puts two up steps in the shadow. Among these $2n-2k-2l$ permutations, only half are even (observe that the set of heights of the illuminated steps is an interval). Thus $n-k-l$ paths among the $n+k$ elements of the orbit are in ${\mathcal D}_{n,k,l}$.
Now we observe that if $P$ is $p$-cyclic, then its orbit has $p$ times fewer elements, and we obtain $p$ times fewer different paths, whence the proportion of elements of ${\mathcal D}_{n,k,l}$ in this orbit is:
$$\frac{(n-k-l)/p}{(n+k)/p}=\frac{n-k-l}{n+k}.$$
Finally, we obtain that the cardinality of ${\mathcal D}_{n,k,l}$ is
$$\frac{(n+k)!}{n!k!}\frac{(n-1+l)!}{(n-1)!l!}\frac{n-k-l}{n+k}$$
which was to be proved.
\end{proof}
\noindent {\bf Remark 2.5.}
The equation \pref{prop2.1eq} is of course symmetric in $k$ and $l$:
$${n+k\choose k}{n+l-1\choose l}\frac{n-k-l}{n+k}={n+k-1\choose k}{n+l-1\choose l}\frac{n-k-l}{n}.$$
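Formula \pref{prop2.1eq} and its symmetric rewriting can be checked against the recurrence for small parameters; in the sketch below (our own) we cross-multiply by $n+k$ and by $n$ so as to stay in integer arithmetic:

```python
# Check the explicit formula for B_3(n,k,l) (in both of its equivalent
# forms) against the defining recurrence, for all 0 <= k+l < n <= 8.
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def B3(n, k, l):
    if k < 0 or l < 0 or k + l >= n:
        return 0
    if n == 1:
        return 1
    return sum(B3(n - 1, i, j) for i in range(k + 1) for j in range(l + 1))

for n in range(1, 9):
    for k in range(n):
        for l in range(n - k):
            common = comb(n + l - 1, l) * (n - k - l)
            # C(n+k, k) C(n+l-1, l) (n-k-l) / (n+k)
            assert B3(n, k, l) * (n + k) == comb(n + k, k) * common
            # = C(n+k-1, k) C(n+l-1, l) (n-k-l) / n
            assert B3(n, k, l) * n == comb(n + k - 1, k) * common
```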
\noindent {\bf Remark 2.6.}
I made the choice to present a combinatorial proof of Proposition \ref{prop2.1}. The interest is to show how these formulas are obtained, and to allow easy generalizations ({\it cf.} the next section). It is also possible to check directly the recurrence \pref{rec}.
\subsection{Generating function}
Let $F$ denote the following generating series:
\begin{equation}\label{F}
F(t,x,y)=1+\sum_{0\le k+l<n} B_3(n,k,l) t^n x^k y^l.
\end{equation}
In this expression, the term ``1'' corresponds to the {\em empty} 2-Dyck path. Thus $F$ is the generating series of 2-Dyck paths with respect to their length (variable $t$), their number of down steps at even height --excluding the last down sequence-- (variable $x$), and their number of down steps at odd height (variable $y$).
We also introduce
\begin{equation}\label{G}
G(t,x,y)=1+\sum_{0\le k+l<n} B_3(n,k,l) t^n x^{n-l} y^l,
\end{equation}
{\it i.e.} $G$ is the generating function of 2-Dyck paths with respect to their length, to the number of down steps at even height --including the last down sequence-- and to the number of down steps at odd height.
To obtain equations for $F$ and $G$, we decompose a non-empty 2-Dyck path by looking at two points: $\alpha$, defined as the last return to the axis (except the final point $(3n,0)$), and $\beta$ defined as the last point at height 1 after $\alpha$. This gives the following decomposition of a 2-Dyck path
$$P=P_1\ (up)\ P_2\ (up)\ P_3\ (down),$$
with $P_1$, $P_2$, $P_3$ any 2-Dyck path (maybe empty).
\vskip 0.2cm
\centerline{\epsffile{dec.eps}}
\vskip0.2cm
By observing that the up steps after $\alpha$ and $\beta$ change the parity of the height,
this gives the following two equations:
\begin{equation}
F(t,x,y)=1+G(t,x,y)\times G(t,y,x)\times t\,F(t,x,y)
\end{equation}
and in the same way
\begin{equation}\label{Geq}
G(t,x,y)=1+G(t,x,y)\times G(t,y,x)\times t\,G(t,x,y)\,x.
\end{equation}
By permuting the variables $x$ and $y$ in \pref{Geq}, we obtain
\begin{equation}\label{Geq2}
G(t,y,x)=1+G(t,y,x)\times G(t,x,y)\times t\,G(t,y,x)\,y
\end{equation}
and we can eliminate $G(t,y,x)$ from \pref{Geq} and \pref{Geq2} to obtain the following result.
\begin{prop}
The generating function $F$ of the $B_3$'s is given by:
$$F(t,x,y)=\frac{1}{1-tG(t,x,y)G(t,y,x)}$$
where $G(t,x,y)$ is a solution of the algebraic equation
$$tx^2G^3+(y-x)G^2+(x-2y)G+y=0.$$
\end{prop}
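The algebraic equation for $G$ and the expression for $F$ can be tested numerically on truncated series; the sketch below (our own; the truncation order and the sample point are arbitrary small values, chosen so that the truncation error is far below the tolerance) builds $F$ and $G$ from the recurrence:

```python
# Numerical check of the proposition: build the truncated series F and G
# from B_3(n,k,l), then test the cubic equation for G and the formula for F
# at a sample point with small t.
from functools import lru_cache

@lru_cache(maxsize=None)
def B3(n, k, l):
    if k < 0 or l < 0 or k + l >= n:
        return 0
    if n == 1:
        return 1
    return sum(B3(n - 1, i, j) for i in range(k + 1) for j in range(l + 1))

N = 10  # truncation order in t; the discarded tail is O(t^{N+1})

def G(t, x, y):  # series (6): monomial t^n x^{n-l} y^l
    return 1 + sum(B3(n, k, l) * t**n * x**(n - l) * y**l
                   for n in range(1, N + 1)
                   for k in range(n) for l in range(n - k))

def F(t, x, y):  # series (5): monomial t^n x^k y^l
    return 1 + sum(B3(n, k, l) * t**n * x**k * y**l
                   for n in range(1, N + 1)
                   for k in range(n) for l in range(n - k))

t, x, y = 0.01, 0.3, 0.2
g = G(t, x, y)
# t x^2 G^3 + (y - x) G^2 + (x - 2y) G + y = 0, up to truncation error
assert abs(t * x**2 * g**3 + (y - x) * g**2 + (x - 2*y) * g + y) < 1e-9
# F = 1 / (1 - t G(t,x,y) G(t,y,x)), up to truncation error
assert abs(F(t, x, y) - 1 / (1 - t * G(t, x, y) * G(t, y, x))) < 1e-9
```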
\vskip 0.3cm
An alternate approach to the generating function is to use formula \pref{prop2.1eq} to obtain what MacMahon called a ``redundant generating function'' ({\it cf.} \cite{gessel}), since it contains terms other than those which are combinatorially significant.
To do this we extend the recursive definition of $B_3(n,k,l)$ as follows: we define $B'_3(n,k,l)$ for integers $n>0$, and $k,l\ge0$ by:
\begin{equation}\label{prime}
B'_3(n,k,l)={n+k-1\choose k}{n+l-1\choose l}\frac{n-k-l}{n}.
\end{equation}
Of course when $k+l>n$, the integer $B'_3(n,k,l)$ is negative.
As an example, here is the ``section'' of the array of $B'_3(n,k,l)$ with $n=3$.
\vskip 0.2cm
$\left[\begin{array}{cccccc}
1&2&2&0&-5&\cdots\cr
2&3&0&-10&-30&\cdots\cr
2&0&-12&-40&-90&\cdots\cr
0&-10&-40&-100&-200&\cdots\cr
-5&-30&-90&-200&-375&\cdots\cr
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{array}
\right]
$
\vskip 0.2cm
The equation \pref{prime} is equivalent to the following recursive definition:
$\forall n\le 0$ or $k,l<0,\ B'_3(n,k,l)=0$ and
$$B'_3(n,k,l)=B'_3(n-1,k,l)+B'_3(n,k-1,l)+B'_3(n,k,l-1)-B'_3(n,k-1,l-1)+c(n,k,l)$$
$${\rm where}\ \ \ \ c(n,k,l)=\left\{
\begin{array}{rl}
+1&{\rm if }(n,k,l)=(1,0,0)\cr
-1&{\rm if }(n,k,l)=(0,1,0)\cr
-1&{\rm if }(n,k,l)=(0,0,1)\cr
+2&{\rm if }(n,k,l)=(0,1,1)\cr
0&{\rm otherwise.}
\end{array}
\right.
$$
The interest of this presentation is to give a simple (rational!) generating series. Indeed, it is quite simple to deduce from the recursive definition of $B'_3(n,k,l)$ that:
\begin{equation}\label{F'}
\sum_{n,k,l}B'_3(n,k,l)t^nx^ky^l=\frac{t-x-y+2xy}{1-t-x-y+xy}.
\end{equation}
This formula then encodes the $B_3(n,k,l)$ since $\forall k+l<n,\ B_3(n,k,l)=B'_3(n,k,l)$.
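Equation \pref{F'} can be verified coefficientwise by expanding the right-hand side as a geometric series. In the sketch below (our own; we compare only the coefficients with $n\ge1$, which are the ones that encode the $B_3(n,k,l)$), truncated polynomials are stored as dictionaries mapping exponent triples $(n,k,l)$ to integer coefficients:

```python
# Expand (t - x - y + 2xy) / (1 - t - x - y + xy) up to total degree N and
# compare its coefficients (for n >= 1) with the closed form for B'_3(n,k,l).
from math import comb

N = 7  # truncate at total degree N in t, x, y

def mul(p, q):
    # product of two truncated polynomials, dropping degrees > N
    r = {}
    for (a, b, c), u in p.items():
        for (d, e, f), v in q.items():
            if a + b + c + d + e + f <= N:
                key = (a + d, b + e, c + f)
                r[key] = r.get(key, 0) + u * v
    return r

u = {(1, 0, 0): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 1, 1): -1}   # t+x+y-xy
geom = {(0, 0, 0): 1}            # 1/(1-u) = 1 + u + u^2 + ...
power = {(0, 0, 0): 1}
for _ in range(N):
    power = mul(power, u)
    for key, v in power.items():
        geom[key] = geom.get(key, 0) + v
num = {(1, 0, 0): 1, (0, 1, 0): -1, (0, 0, 1): -1, (0, 1, 1): 2}  # t-x-y+2xy
series = mul(num, geom)

# closed form (cross-multiplied by n to stay in integer arithmetic)
for n in range(1, N + 1):
    for k in range(N + 1 - n):
        for l in range(N + 1 - n - k):
            coeff = series.get((n, k, l), 0)
            assert coeff * n == comb(n + k - 1, k) * comb(n + l - 1, l) * (n - k - l)
```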
\section{Fuss-Catalan $p$-simplex and $p$-ary trees}
The aim of this final section is to present an extension of the results of Section 2 to $p$-dimensional recursive sequences. In the same spirit, we define the sequence $B_p(n,k_1,k_2,\dots,k_{p-1})$ by the recurrence:
\begin{itemize}
\item $B_p(1,0,0,\dots,0)=1$;
\item $\forall n> 1$ and $0\le k_1+k_2+\cdots+k_{p-1}< n$,
$$B_p(n,k_1,k_2,\dots,k_{p-1})=\sum_{0\le i_1\le k_1,\dots,0\le i_{p-1}\le k_{p-1}} B_p(n-1,i_1,i_2,\dots,i_{p-1});$$
\item $\forall k_1+k_2+\cdots+k_{p-1}\ge n$, $B_p(n,k_1,k_2,\dots,k_{p-1})=0$.
\end{itemize}
\noindent
Every result of Section 2 extends to general $p$. We shall only give the main results, since the proofs are straightforward generalizations of the proofs in the previous section.
\begin{prop}\label{prop3}
$$\sum_{k_1,\dots,k_{p-1}} B_p(n,k_1,\dots,k_{p-1})=C_p(n)=\frac 1 {(p-1)n+1} {pn\choose n}$$
\end{prop}
The integers $C_p(n)$ are order-$p$ Fuss-Catalan numbers and enumerate $p$-ary trees, or alternatively $p$-Dyck paths (the down steps are $(1,-p)$). In this general case, the recursive definition of $B_p(n,k_1,k_2,\dots,k_{p-1})$ gives rise to $p-1$ statistics on trees and paths analogous to those defined in Section 2.
\noindent {\bf Remark 3.2.}
By the same method as in the previous section, it is possible to obtain an explicit formula for these multivariate Fuss-Catalan numbers:
$$B_p(n,k_1,k_2,\dots,k_{p-1})=\left(\prod_{i=1}^{p-1}{n+k_i-1\choose k_i}\right)\frac{n-\sum_{i=1}^{p-1}k_i}{n}.$$
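Both the recurrence and the explicit formula extend verbatim to a computation; the sketch below (our own) checks Proposition \ref{prop3} and the formula of Remark 3.2 for $p=4$:

```python
# General p: check that the B_p(n, k_1, ..., k_{p-1}) sum to the order-p
# Fuss-Catalan numbers, and match the explicit product formula (p = 4 here).
from math import comb
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def Bp(p, n, ks):
    # ks is the tuple (k_1, ..., k_{p-1})
    if any(k < 0 for k in ks) or sum(ks) >= n:
        return 0
    if n == 1:
        return 1
    return sum(Bp(p, n - 1, js)
               for js in product(*(range(k + 1) for k in ks)))

def Cp(p, n):
    return comb(p * n, n) // ((p - 1) * n + 1)

p = 4
for n in range(1, 6):
    total = 0
    for ks in product(range(n), repeat=p - 1):
        v = Bp(p, n, ks)
        total += v
        if sum(ks) < n:
            prod_binom = 1
            for k in ks:
                prod_binom *= comb(n + k - 1, k)
            # explicit formula, cross-multiplied by n
            assert v * n == prod_binom * (n - sum(ks))
    assert total == Cp(p, n)
```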
\vskip 0.3cm
\noindent
{\bf\large Comment.} The numbers $B_3(n,k,l)$ first arose in a question of algebraic combinatorics ({\it cf.} \cite{aval}). Let ${\mathcal I}_n$ be the ideal generated by $B$-quasisymmetric polynomials in the $2n$ variables $x_1,\dots,x_n$ and $y_1,\dots,y_n$ ({\it cf.} \cite{BH}) without constant term. We denote by ${\bf Q}_n$ the quotient ${\mathbb Q}[x_1,\dots,x_n,y_1,\dots,y_n]/{\mathcal I}_n$ and by ${\bf Q}_n^{k,l}$ the bihomogeneous component of ${\bf Q}_n$ of degree $k$ in $x_1,\dots,x_n$ and degree $l$ in $y_1,\dots,y_n$. It is proven in \cite{aval} that:
$$\dim {\bf Q}_n^{k,l} = B_3(n,k,l) = {n+k-1\choose k}{n+l-1\choose l}\frac{n-k-l}{n}.$$
\vskip 0.3cm
\noindent
{\bf\large Acknowledgement.} The author is very grateful to the referees (and in particular to the anonymous ``Referee 3'') for valuable remarks and corrections.
\vskip 0.3cm
| {
"timestamp": "2007-11-06T16:40:29",
"yymm": "0711",
"arxiv_id": "0711.0906",
"language": "en",
"url": "https://arxiv.org/abs/0711.0906",
"abstract": "Catalan numbers $C(n)=\\frac{1}{n+1}{2n\\choose n}$ enumerate binary trees and Dyck paths. The distribution of paths with respect to their number $k$ of factors is given by ballot numbers $B(n,k)=\\frac{n-k}{n+k}{n+k\\choose n}$. These integers are known to satisfy simple recurrence, which may be visualised in a ``Catalan triangle'', a lower-triangular two-dimensional array. It is surprising that the extension of this construction to 3 dimensions generates integers $B_3(n,k,l)$ that give a 2-parameter distribution of $C_3(n)=\\frac 1 {2n+1} {3n\\choose n}$, which may be called order-3 Fuss-Catalan numbers, and enumerate ternary trees. The aim of this paper is a study of these integers $B_3(n,k,l)$. We obtain an explicit formula and a description in terms of trees and paths. Finally, we extend our construction to $p$-dimensional arrays, and in this case we obtain a $(p-1)$-parameter distribution of $C_p(n)=\\frac 1 {(p-1)n+1} {pn\\choose n}$, the number of $p$-ary trees.",
"subjects": "Combinatorics (math.CO)",
"title": "Multivariate Fuss-Catalan numbers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.989510906574819,
"lm_q2_score": 0.8289388104343892,
"lm_q1q2_score": 0.8202439938079845
} |
https://arxiv.org/abs/1107.3852 | Sums of Ceiling Functions Solve Nested Recursions | It is known that, for given integers s \geq 0 and j > 0, the nested recursion R(n) = R(n - s - R(n - j)) + R(n - 2j - s - R(n - 3j)) has a closed form solution for which a combinatorial interpretation exists in terms of an infinite, labeled tree. For s = 0, we show that this solution sequence has a closed form as the sum of ceiling functions C(n). Further, given appropriate initial conditions, we derive necessary and sufficient conditions on the parameters s1, a1, s2 and a2 so that C(n) solves the nested recursion R(n) = R(n - s1 - R(n - a1)) + R(n- s2 - R(n - a2)). | \section{Introduction} \label{sec:Intro}
This paper investigates the occurrence of sums of ceiling functions as solutions to nested recursions
of the form
\begin{align}
R(n) = R(n - s_1 -R(n-a_1)) + R(n-s_2-R(n-a_2))
\label{Hn}
\end{align}
with $s_i,a_i$ integers, $a_i > 0$, and specified initial conditions. We adopt the terminology and notation from \cite{ConollyLike}, and write the above recursion as $\SEQ{s_1}{a_1}{s_2}{a_2}$.
The convergence of several recent discoveries has motivated our interest in such solutions. In \cite{BLT} the authors prove that the ceiling function $\cln{n}{2}$ solves the nested recursion $\SEQ{0}{1}{2}{3}$, with initial conditions 1,1,2. In \cite{ConollyLike}, we vastly generalize this result by deriving necessary and sufficient conditions for the parameters $s_i,a_i$ so that $\cln{n}{2}$ solves the nested recursion $\SEQ{s_1}{a_1}{s_2}{a_2}$ with appropriate initial conditions. \footnote{It is also shown in \cite{ConollyLike} that for every $p \geq 1$, the ceiling function $\cln{n}{2p}$ solves an infinite family of order $p$ nested recursions. In this paper we restrict our attention to the recursion ($\ref{Hn}$), which has order 1.}
In a separate but related direction, in \cite{Rpaper} we solved a natural generalization of the recursion $\SEQ{0}{1}{2}{3}$, namely, $\SEQ{s}{j}{s+2j}{3j}$, with $s,j$ integers and $j$ positive. In so doing, we identified a closed form for the solution sequence that included a nesting of ceiling functions, albeit in a complicated way.
Finally, in \cite{CeilFunSol} we focused once again on the occurrence of certain ceiling function solutions, this time to nested recursions that naturally generalize the nested recursion ($\ref{Hn}$). In this case, for each $q>1$, we derived necessary and sufficient conditions on the parameters of the recurrence so that its solution has the closed form $\cln{n}{q}$ (given appropriate initial conditions).
Inspired by the repeated derivation of classification schemes for ceiling function solutions, we reexamine here the solutions to the family of recursions $\SEQ{s}{j}{s+2j}{3j}$ in \cite{Rpaper} from a ceiling function perspective. We prove that certain of these solutions have the closed form of a sum of ceiling functions. For these solutions, we discover a classification theorem analogous to the result in \cite{ConollyLike}; loosely speaking, we determine all of the possible nested recursions of this ``general'' form which share the same solution sequence as $\SEQ{0}{j}{2j}{3j}$.
In the body of this paper we proceed as follows. In Section $\ref{sec:Exp}$, we examine the periodicity properties of sums of ceiling functions. In particular, we derive several useful properties of the ceiling function sum $C(n)$ defined by
\begin{align}
\label{eqn:solution}
C(n) = \sum_{i=0}^{j-1}\cln{n-i}{2j}.
\end{align}
We explain our interest in $C(n)$ in Section $\ref{sec:Main}$, where we prove that $C(n)$ solves $\SEQ{0}{j}{2j}{3j}$. For $s \neq 0$, we show in Section $\ref{sec:Exp}$ that we cannot write the solution to $\SEQ{s}{j}{s+2j}{3j}$ as a sum of ceiling functions. Thus, we must limit our findings to the $s=0$ case.
In Section $\ref{sec:Main}$ we apply the results of Section $\ref{sec:Exp}$ to derive a classification theorem for all the recursions $\SEQ{s_1}{a_1}{s_2}{a_2}$ that have the solution $C(n)$ with appropriate initial conditions.
In other words, we determine completely all the parameters $s_i,a_i$ for which $(\ref{eqn:solution})$ solves the recursion $\SEQ{s_1}{a_1}{s_2}{a_2}$. As a byproduct of this work, it follows that $C(n)$ solves $\SEQ{0}{j}{2j}{3j}$. In Section $\ref{sec:Conc}$ we conclude with some thoughts about potential further work in this general area.
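Before proceeding, the central claim is easy to test by direct computation. The following Python sketch (ours, with $j=3$ chosen arbitrarily) evaluates $C(n)$ for all integers $n$ using exact integer ceilings and checks that it formally satisfies $\SEQ{0}{j}{2j}{3j}$ on a window of values:

```python
j = 3  # an arbitrary choice of the parameter j > 0

def C(n):
    # C(n) = sum_{i=0}^{j-1} ceil((n-i)/(2j)); -(-a // b) is ceil(a/b) for b > 0
    return sum(-(-(n - i) // (2 * j)) for i in range(j))

# C formally satisfies R(n) = R(n - R(n-j)) + R(n - 2j - R(n - 3j))
for n in range(-30, 60):
    assert C(n) == C(n - C(n - j)) + C(n - 2 * j - C(n - 3 * j))

# two structural properties of C: it is "slow" (differences 0 or 1),
# and C(n + 2j) = C(n) + j
assert all(C(n + 1) - C(n) in (0, 1) for n in range(-30, 60))
assert all(C(n + 2 * j) == C(n) + j for n in range(-30, 30))
```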
\section{Periodicity and Sums of Ceiling Functions} \label{sec:Exp}
In this section, we examine some key properties of sequences that arise as sums of ceiling functions in general, and of the sum $C(n)$ in particular. We begin by determining which sequences have a closed form as a sum of ceiling functions.
\begin{theorem}
\label{thm:Periodic}
Let $\{a_n\}$ be an integer sequence. We can find a closed form $a_n = c+\sum_{i=1}^k \lceil{q_in+r_i}\rceil$ with $q_i,r_i$ rational if and only if the difference sequence $d_n = a_{n+1} - a_n$ is periodic.
\end{theorem}
\begin{proof}
First, suppose $a_n = c+\sum_{i=1}^k \lceil{q_in+r_i}\rceil$. The difference sequence of $a_n$ is the sum of the difference sequences of $\lceil q_in+r_i \rceil$, each of which is periodic (with period the denominator of $q_i$), and their sum is thus periodic (with period a divisor of the lowest common denominator of the $q_i, i=1,...,k$).
Now suppose $d_n$ is periodic with period $p$. Then consider $b_n = a_1+\sum_{i=1}^pd_i\cln{n-i}{p}$. First, note that $b_1=a_1$. Next, observe that $\cln{n-i}{p}$ has a difference sequence consisting of $0$ for $n \not\equiv i \pmod{p}$ and $1$ for $n \equiv i \pmod{p}$. Thus, $d_i\cln{n-i}{p}$ has a difference sequence of $0$ for $n \not\equiv i \pmod{p}$ and $d_i$ for $n \equiv i \pmod{p}$, and so the difference sequence of $b_n$ is just $b_{n+1}-b_n = d_i$, where $0<i\leq p$ and $i \equiv n \pmod{p}$. But since $d_n$ is periodic with period $p$, this means the difference sequence of $b_n$ is $d_n$, and so $a_n$ and $b_n$ have the same first element and the same difference sequence and are therefore equal.
\end{proof}
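To illustrate the construction in the proof, the following Python sketch (our example; the sequence $a_n = \lceil n/2\rceil + \lceil n/3\rceil$ is chosen only because its difference sequence has period $6$) rebuilds $a_n$ as $b_n = a_1+\sum_{i=1}^{p} d_i\cln{n-i}{p}$:

```python
def a(n):
    # example sequence with periodic differences: ceil(n/2) + ceil(n/3)
    return -(-n // 2) + -(-n // 3)

p = 6  # period of the difference sequence d_n = a_{n+1} - a_n
d = [a(i + 1) - a(i) for i in range(1, p + 1)]  # d[0] = d_1, ..., d[5] = d_6

def b(n):
    # the closed form built in the proof of the theorem
    return a(1) + sum(d[i - 1] * (-(-(n - i) // p)) for i in range(1, p + 1))

assert all(a(n) == b(n) for n in range(1, 80))
```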
In \cite{Rpaper}, we derived a closed form for the solution to the recurrence $\SEQ{s}{j}{s+2j}{3j}$. This formula clearly shows that the solution has a periodic difference sequence if and only if $s=0$. Thus, in the remainder of this paper, we restrict ourselves to examining the solution to the $s=0$ case.
An interesting result related to Theorem $\ref{thm:Periodic}$ follows:
\begin{theorem}
If the solution sequence to a nested recursion has a periodic difference sequence, then this same sequence also solves a non-nested recursion.
\end{theorem}
\begin{proof}
By Theorem $\ref{thm:Periodic}$, such a solution sequence has a closed form $c+\sum_{i=1}^k \lceil{q_in+r_i}\rceil$; without loss of generality, rewrite the $q_i$ to have a common denominator so that $q_i = b_i/q$ with $b_i,q$ integers. Then the same solution sequence solves the recursion $A(n) = A(n-q)+\sum_{i=1}^kb_i$.
\end{proof}
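For example (our illustration), take $a_n = \lceil n/2\rceil + \lceil n/3\rceil$; writing $q_1 = 1/2$ and $q_2 = 1/3$ over the common denominator $q = 6$ gives numerators $b_1 = 3$ and $b_2 = 2$, so the same sequence satisfies the non-nested recursion $A(n) = A(n-6) + 5$:

```python
def a(n):
    # ceil(n/2) + ceil(n/3) via exact integer arithmetic
    return -(-n // 2) + -(-n // 3)

# A(n) = A(n - q) + (b_1 + b_2), with q = 6 and b_1 + b_2 = 3 + 2 = 5
assert all(a(n) == a(n - 6) + 5 for n in range(-40, 80))
```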
Based on the structure of the solution to $\SEQ{0}{j}{2j}{3j}$ found in \cite{Rpaper}, we can observe that the sequence ($\ref{eqn:solution}$)
solves $\SEQ{0}{j}{2j}{3j}$. We will prove this fact in the next section, but first we prove here two lemmas which simplify computations involving $C(n)$.
\begin{lemma}
For any $n, d \in \mathbb{Z}$, $C(n+2jd) = C(n)+jd$.
\label{lm:C}
\end{lemma}
\begin{proof}
We have
$C(n+2jd) = \sum_{i=0}^{j-1}\cln{n+2jd-i}{2j} =
\sum_{i=0}^{j-1}(\cln{n-i}{2j}+d)= \sum_{i=0}^{j-1}\cln{n-i}{2j}+jd=C(n)+jd$ and this completes the proof.
\end{proof}
This lemma shows that if we know the values of $C(n)$ for $2j$ consecutive values of $n$, then we can easily compute the rest of the sequence. In the next lemma, we find the values of $C(n)$ for $1 \leq n \leq 2j$.
\begin{lemma}
$C(n)=n$ for $1 \leq n \leq j-1$ and $C(n)=j$ for $j \leq n \leq 2j$.
\label{lm:freq}
\end{lemma}
\begin{proof}
Let $n\in \{1,2,...,j-1\}$ and $i \in \{0,1,2,...,j-1\}$. Then $-j+2 \leq n-i \leq j-1$. Therefore, we have that $\cln{n-i}{2j}$ is $1$ when $n>i$ and it vanishes otherwise. Hence, we have $C(n) = \sum_{i=0}^{j-1}\cln{n-i}{2j} = \sum_{i=0}^{n-1}1 + \sum_{i=n}^{j-1}0 = n$. Now let $n\in \{j,j+1,...,2j\}$ and $i \in \{0,1,2,...,j-1\}$. Then $1 \leq n-i \leq 2j$, which implies that $\cln{n-i}{2j}=1$. Hence,
\begin{align*}
C(n) = \sum_{i=0}^{j-1}\cln{n-i}{2j} = \sum_{i=0}^{j-1}1 = j
\end{align*}
as required.
\end{proof}
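Both lemmas are easy to confirm by direct computation; the following Python sketch (ours) checks them for several values of $j$:

```python
def C(n, j):
    # C(n) = sum_{i=0}^{j-1} ceil((n-i)/(2j)); -(-a // b) is ceil(a/b) for b > 0
    return sum(-(-(n - i) // (2 * j)) for i in range(j))

for j in (1, 2, 3, 5):
    for n in range(1, j):
        assert C(n, j) == n       # C(n) = n for 1 <= n <= j-1
    for n in range(j, 2 * j + 1):
        assert C(n, j) == j       # C(n) = j for j <= n <= 2j
    for n in range(-20, 20):
        for d in (-2, 1, 3):
            assert C(n + 2 * j * d, j) == C(n, j) + j * d  # C(n+2jd) = C(n)+jd
```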
\section{Finding All Recursions $\SEQ{s_1}{a_1}{s_2}{a_2}$ Solved By $C(n)$} \label{sec:Main}
In this section we determine all of the recursions $R(n) = \SEQ{s_1}{a_1}{s_2}{a_2}$ solved by $C(n)$ when given appropriate initial conditions. In so doing we make use of the idea of ``formal satisfaction.'' We say an infinite sequence formally satisfies a recursion if the recursive formula is well-defined and true on that sequence for all integers. By contrast, a sequence is generated as the unique solution to a recursion and a set of $c$ specific initial conditions if for all $n>c$, the recursion allows us to calculate the value of the solution sequence at $n$ by referencing only terms with indices less than $n$.
A simple example will clarify this distinction. The recursion $S(n) = S(S(n+1))$ is formally satisfied by the sequence $S(n)= 1$ for all $n$. However, for any positive integer $c$, if we are given the initial conditions $S(1) = S(2) = \ldots = S(c) = 1$, we cannot determine the value of $S(c+1)$ because the recursion requires we know the value of $S(c+2)$.
Thus, in general, formal satisfaction does not imply generation as an infinite solution sequence. But for the particular case we are dealing with, namely, the recursion $R(n) = \SEQ{s_1}{a_1}{s_2}{a_2}$ and the sequence $C(n)$, formal satisfaction does imply generation as an infinite solution sequence. To see why, note that $C(n)$ asymptotically approaches $n/2$, so $n-s_1-C(n-a_{1})$ and $n-s_2-C(n-a_{2})$ also asymptotically approach $n/2$. Furthermore, as long as $a_i>0$, for large enough $n$ the recursion for $R(n)$ refers only to prior positive terms and can thus be generated given sufficiently many appropriate initial conditions.
Although it might at first seem like an additional complication, the idea of formal satisfaction simplifies many of our proofs: in many cases, we can most easily prove that a recursion generates $C(n)$ as an infinite solution sequence by proving that $C(n)$ formally satisfies the recursion. Thus, we now proceed with a theorem classifying all recursions $R(n) = \SEQ{s_1}{a_1}{s_2}{a_2}$ that $C(n)$ formally satisfies.
\begin{theorem}
$C(n) = \sum_{i=0}^{j-1}\cln{n-i}{2j}$ formally satisfies the nested recursion $\SEQ{s_1}{a_1}{s_2}{a_2}$ if and only if the following conditions hold:
\begin{align*}
s_1,s_2 \equiv 0 \bmod{j} \tag{i}\\
a_1,a_2 \equiv j \bmod{2j} \tag{ii}\\
2(s_1+s_2) = a_1 + a_2 \tag{iii}
\end{align*}
\label{thm:Main}
\end{theorem}
Notice that for $j=1$, the conditions on the parameters reduce to the characterization of all nested recursions $\SEQ{s_1}{a_1}{s_2}{a_2}$ formally satisfied by the ceiling function $\cln{n}{2}$ (derived in \cite{ConollyLike}).
To show that the conditions listed above suffice for $C(n)$ to formally satisfy $\SEQ{s_1}{a_1}{s_2}{a_2}$, we adapt the proof technique used in \cite{CeilFunSol} to prove an analogous result classifying all nested recursions formally satisfied by $\cln{n}{q}$. The basic elements of our approach follow: first, we establish a natural equivalence relation on the set of all recursions of the form $\SEQ{s_1}{a_1}{s_2}{a_2}$. Next, we show that if the sequence ($\ref{eqn:solution}$) formally satisfies one element of an equivalence class, then it satisfies every element of that equivalence class. Then we prove that every equivalence class has a representative in the set $\mathbb{Z}\times S \times S \times S$, where $S = \{0,1,2,...,2j-1\}$. Finally, we demonstrate that if $C(n)$ formally satisfies $\SEQ{s_1}{a_1}{s_2}{a_2}$ for $4j$ consecutive values of $n$, then it does so for all $n$. Putting all these facts together, we conclude by directly verifying that when the conditions listed above hold, then $C(n)$ satisfies $\SEQ{s_1}{a_1}{s_2}{a_2}$ for $0 \leq n \leq 4j-1$.
We now proceed with a series of five lemmas.
To establish a natural equivalence relation on the set of all recursions of the form $\SEQ{s_1}{a_1}{s_2}{a_2}$, we treat $\SEQ{s_1}{a_1}{s_2}{a_2}$ as a vector in $\mathbb{Z}^4$, denoted by $y$. Then we define the equivalence relation $\sim$ on the set of vectors $y$:
\begin{align*}
\SEQ{s_1}{a_1}{s_2}{a_2} \sim \SEQ{s_1 + cj}{a_1 + 2cj}{s_2}{a_2} \tag{a}\\
\SEQ{s_1}{a_1}{s_2}{a_2} \sim \SEQ{s_1}{a_1}{s_2 + dj}{a_2 + 2dj} \tag{b}\\
\SEQ{s_1}{a_1}{s_2}{a_2} \sim \SEQ{s_1 - 2ej}{a_1}{s_2 + 2ej}{a_2} \tag{c}
\end{align*}
where $c$, $d$, $e \in \mathbb{Z}$. Our first lemma shows that if any element of an equivalence class satisfies conditions (i)-(iii) then every element of that equivalence class satisfies those conditions.
\begin{lemma}
Let $y$ satisfy (i)-(iii). If $y \sim y'$, then $ y'$ satisfies (i)-(iii).
\label{lm:CondPreserve}
\end{lemma}
\begin{proof}
It suffices to verify the statement of the lemma for relations (a), (b)
and (c) separately. In the following, let $y = \SEQ{s_1}{a_1}{s_2}{a_2}$ and $ y' = \SEQ{s_1'}{a_1'}{s_2'}{a_2'}$.
We first check that equivalence under (a) preserves (i)-(iii). Assume $ y' = \SEQ{s'_1}{a'_1}{s'_2}{a'_2} = \SEQ{s_1 + cj}{a_1 + 2cj}{s_2}{a_2}$ for some $c \in \mathbb{Z}$. Then since
$s_1 \equiv 0 \bmod{j}$ and
$a_1 \equiv j \bmod{2j}$ by assumption, we have that
$
s_1 + cj \equiv 0 + cj \equiv 0 \bmod{j}$ and
$a_1 + 2cj \equiv j + 2cj \equiv j \bmod{2j}.$ Furthermore, since $2(s_1 + s_2) = a_1 + a_2$, we have $2(s_1 + cj + s_2) = 2(s_1 + s_2) + 2cj = a_1 + a_2 + 2cj = (a_1 + 2cj) + a_2$. It follows that $s'_1$, $a'_1$, $s'_2$ and $a'_2$ satisfy (i)-(iii).
The argument for (b) is identical and therefore omitted.
Finally, we verify that equivalence under (c) preserves (i)-(iii). Let $ y' = \SEQ{s'_1}{a'_1}{s'_2}{a'_2} = \SEQ{s_1 - 2ej}{a_1}{s_2 + 2ej}{a_2}$ for some $e \in \mathbb{Z}$. By assumption $s_1 \equiv 0 \bmod{j}$ and $s_2 \equiv 0 \bmod{j}$, so we have $
s_1 - 2ej \equiv 0 - 2ej \equiv 0 \bmod{j},
s_2 + 2ej \equiv 0 + 2ej \equiv 0 \bmod{j}$, and $2(s_1 -2ej + s_2 + 2ej) = 2(s_1 + s_2) = a_1 + a_2.$
This shows that $s'_1$, $a'_1$, $s'_2$ and $a'_2$ satisfy (i)-(iii) as required, thereby completing the proof.
\end{proof}
Now we show that if the sequence ($\ref{eqn:solution}$) formally satisfies one element of an equivalence class, then it satisfies every element of that equivalence class.
Define the difference function $h(n, y) = C(n-s_1-C(n-a_1))+C(n-s_2-C(n-a_2))-C(n)$. Observe that for a fixed $y$, the sequence ($\ref{eqn:solution})$ formally satisfies $y$ if and only if $h(n,y) = 0$ for all $n \in \mathbb{Z}$. We have the following lemma.
\begin{lemma}
If $y \sim y'$ then $ h(n, y)= h(n, y').$
\label{lm:hInvariant}
\end{lemma}
\begin{proof}
As before, it suffices to prove this lemma separately for relations (a), (b) and (c).
First, we verify (a) preserves $h$. Let
$y = \SEQ{s_1}{a_1}{s_2}{a_2}$ and
$y' = \SEQ{s_1 + cj}{a_1 + 2cj}{s_2}{a_2}$ for some $c \in \mathbb{Z}$.
Note that by Lemma \ref{lm:C} we have $
C(n-s_1-cj-C(n-a_1-2cj)) =
C(n-s_1-cj-(C(n-a_1)-cj)) =
C(n-s_1-cj+cj-C(n-a_1)) =
C(n-s_1-C(n-a_1)).
$ Hence, $h(n, y') = C(n-s_1-cj-C(n-a_1-2cj))+C(n-s_2-C(n-a_2))-C(n) =
C(n-s_1-C(n-a_1))+C(n-s_2-C(n-a_2))-C(n) =
h(n, y).
$
The same argument applies to confirm that (b) preserves $h$; we omit the details.
We check that (c) preserves $h$. Assume
$y = \SEQ{s_1}{a_1}{s_2}{a_2}$ and
$y' = \SEQ{s_1-2ej}{a_1}{s_2+2ej}{a_2}$ for some $e \in \mathbb{Z}$.
Applying Lemma \ref{lm:C}, we get $
h(n, y') = C(n-s_1+2ej-C(n-a_1))+C(n-s_2-2ej-C(n-a_2))-C(n) =
C(n-s_1-C(n-a_1))+ej+C(n-s_2-C(n-a_2))-ej-C(n) =
h(n, y)$ and this completes the proof.
\end{proof}
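Lemma \ref{lm:hInvariant} can also be checked numerically. The sketch below (ours; $j=3$ and the parameter vector are arbitrary) compares $h(n,y)$ across representatives produced by relations (a), (b) and (c):

```python
j = 3  # arbitrary

def C(n):
    # C(n) = sum_{i=0}^{j-1} ceil((n-i)/(2j))
    return sum(-(-(n - i) // (2 * j)) for i in range(j))

def h(n, y):
    s1, a1, s2, a2 = y
    return C(n - s1 - C(n - a1)) + C(n - s2 - C(n - a2)) - C(n)

y = (2, 5, -1, 4)  # an arbitrary vector in Z^4, not assumed to satisfy (i)-(iii)
for c in (-2, 1, 3):
    for yp in [
        (y[0] + c * j, y[1] + 2 * c * j, y[2], y[3]),      # relation (a)
        (y[0], y[1], y[2] + c * j, y[3] + 2 * c * j),      # relation (b)
        (y[0] - 2 * c * j, y[1], y[2] + 2 * c * j, y[3]),  # relation (c)
    ]:
        assert all(h(n, y) == h(n, yp) for n in range(-30, 50))
```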
Now we show that every equivalence class has a representative in the set $\mathbb{Z}\times S \times S \times S$, where $S = \{0,1,2,...,2j-1\}$. As we will see later, this result, together with the following lemma and conditions (i)-(iii), will reduce the verification of formal satisfaction to a finite number of confirmatory calculations.
\begin{lemma}
For every $y \in \mathbb{Z}^4$, there exists $ y' \in \mathbb{Z}\times S \times S \times S$ such that $ y \sim y'$.
\label{lm:repBouned}
\end{lemma}
\begin{proof}
Consider an arbitrary $ y = \SEQ{s_1}{a_1}{s_2}{a_2} \in \mathbb{Z}^4$.
Note that by the division algorithm $a_2 = 2jq + a'_2$ for some $q \in \mathbb{Z}$ and $a'_2 \in S$. Then, we apply (b) to obtain $y = \SEQ{s_1}{a_1}{s_2}{2jq+a'_2} \sim \SEQ{s_1}{a_1}{s_2-jq}{2jq+a'_2-2jq} = \SEQ{s_1}{a_1}{s'_2}{a'_2}$.
As before, $s'_2 = 2jc+s''_2$ for some $c \in \mathbb{Z}$ and $s''_2 \in S$.
Now, we apply (c) to get $\SEQ{s_1}{a_1}{s'_2}{a'_2} = \SEQ{s_1}{a_1}{2jc+s''_2}{a'_2} \sim \SEQ{s_1+2jc}{a_1}{2jc+s''_2-2jc}{a'_2} = \SEQ{s'_1}{a_1}{s''_2}{a'_2}$.
Finally, we use the fact that $a_1 = 2jd +a'_1$, where $d \in \mathbb{Z}$ and $a'_1 \in S$, and (a) to get $\SEQ{s'_1}{a_1}{s''_2}{a'_2} = \SEQ{s'_1}{2jd + a'_1}{s''_2}{a'_2} \sim \SEQ{s'_1-jd}{2jd + a'_1-2jd}{s''_2}{a'_2} = \SEQ{s''_1}{a'_1}{s''_2}{a'_2}$.
Therefore, $ y \sim y' = \SEQ{s''_1}{a'_1}{s''_2}{a'_2}$, where $a'_1$,$s''_2$,$a'_2 \in S$ and $s''_1 \in \mathbb{Z}$.
\end{proof}
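The reduction in the proof of Lemma \ref{lm:repBouned} is effective; the following Python sketch (ours, with $j=3$) carries it out and confirms, as Lemma \ref{lm:hInvariant} predicts, that $h$ is unchanged:

```python
j = 3  # arbitrary

def C(n):
    return sum(-(-(n - i) // (2 * j)) for i in range(j))

def h(n, y):
    s1, a1, s2, a2 = y
    return C(n - s1 - C(n - a1)) + C(n - s2 - C(n - a2)) - C(n)

def reduce_y(y):
    # follow the proof: apply (b), then (c), then (a); Python's divmod floors,
    # so the remainders land in S = {0, 1, ..., 2j-1} even for negative inputs
    s1, a1, s2, a2 = y
    q, a2 = divmod(a2, 2 * j); s2 -= j * q        # relation (b)
    c, s2 = divmod(s2, 2 * j); s1 += 2 * j * c    # relation (c)
    d, a1 = divmod(a1, 2 * j); s1 -= j * d        # relation (a)
    return (s1, a1, s2, a2)

for y in [(7, -5, 13, 40), (0, 3, 6, 9), (-8, 11, -2, -4)]:
    yp = reduce_y(y)
    assert all(0 <= x < 2 * j for x in yp[1:])    # a_1', s_2', a_2' lie in S
    assert all(h(n, y) == h(n, yp) for n in range(-20, 40))
```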
Currently, to check that $C(n)$ formally satisfies some recursion corresponding to $y$, we have to check that $h(n, y) = 0$ for each $n \in \mathbb{Z}$. Our next lemma remedies this situation by reducing the verification to finitely many values of $n$.
\begin{lemma}
For a fixed $y$ and any integer $d$, $h(n, y) = h(n + 4jd, y)$.
\label{lm:nBounded}
\end{lemma}
\begin{proof}
Fix $y = \SEQ{s_1}{a_1}{s_2}{a_2}$ and an integer $d$.
Applying Lemma \ref{lm:C}, we have $
C(n+4jd-s_1-C(n+4jd-a_1)) =
C(n+4jd-s_1-(C(n-a_1)+2jd)) =
C(n+2jd-s_1-C(n-a_1)) =
C(n-s_1-C(n-a_1)) + jd
$ and similarly $C(n+4jd-s_2-C(n+4jd-a_2)) = C(n-s_2-C(n-a_2)) + jd$. Then,
we calculate $
h(n+4jd, y) = C(n+4jd-s_1-C(n+4jd-a_1)) + C(n+4jd-s_2-C(n+4jd-a_2)) - C(n+4jd) =
C(n-s_1-C(n-a_1)) + jd + C(n-s_2-C(n-a_2)) + jd - C(n) -2jd =
C(n-s_1-C(n-a_1)) + C(n-s_2-C(n-a_2)) - C(n) =
h(n, y)$ and this completes the proof.
\end{proof}
Lemma \ref{lm:nBounded} has a key consequence: to confirm that $h(n, y) = 0$ for all $n \in \mathbb{Z}$, it suffices to verify that $h(n, y) = 0$ for $0 \leq n \leq 4j-1$.
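This periodicity is also easy to observe directly; the sketch below (ours, with $j=3$ and arbitrary parameter vectors) checks $h(n+4jd, y) = h(n, y)$:

```python
j = 3  # arbitrary

def C(n):
    return sum(-(-(n - i) // (2 * j)) for i in range(j))

def h(n, y):
    s1, a1, s2, a2 = y
    return C(n - s1 - C(n - a1)) + C(n - s2 - C(n - a2)) - C(n)

# h(., y) has period 4j in n for every y, so the window 0 <= n <= 4j-1 suffices
for y in [(0, 3, 6, 9), (2, 5, -1, 4)]:
    for n in range(-10, 10):
        for d in (-2, 1, 3):
            assert h(n + 4 * j * d, y) == h(n, y)
```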
The above results provide all the necessary tools to show that conditions (i)-(iii) suffice for the sequence $(\ref{eqn:solution})$ to formally satisfy $ y$.
\begin{lemma}
Let $ y$ satisfy conditions (i)-(iii). Then the sequence $(\ref{eqn:solution})$ formally satisfies $y$.
\label{lm:sufficient}
\end{lemma}
\begin{proof}
Observe that by Lemma \ref{lm:repBouned} there exists $y' = \SEQ{s'_1}{a'_1}{s'_2}{a'_2}$ such that $ y' \sim y$ and $ y' \in \mathbb{Z} \times S \times S \times S$. Notice that, by Lemma \ref{lm:CondPreserve}, $s_1'$,$a_1'$,$s_2'$ and $a_2'$ also satisfy conditions (i)-(iii). Then, $a'_1 = a'_2 = j$ and $s'_2$ is 0 or $j$. If $s'_2$ is 0, then from condition (iii) it follows that $s'_1 = j$. Alternatively, if $s'_2 = j$ then condition (iii) implies that $s'_1 = 0$. Either way (switching the order of the summands if needed), $ y'$ corresponds to the recursion $R(n) = R(n - R(n-j)) + R(n-j-R(n-j))$. Therefore, without loss of generality we assume that $y' = \SEQ{0}{j}{j}{j}$.
By Lemma \ref{lm:hInvariant}, to show that $h(n, y) = 0$ for all integers $n$, it suffices to show that $h(n, y') = 0$ for all integers $n$. Furthermore, Lemma \ref{lm:nBounded} shows that we only need to show that $h(n, y') = 0$ for $0 \leq n \leq 4j-1$. By Lemmas \ref{lm:C} and $\ref{lm:freq}$ we have
\begin{align*}
C(n) =
\begin{cases}
0, & -j \leq n \leq -1 \\
n, & 0 \leq n \leq j-1 \\
j, & j \leq n \leq 2j-1 \\
n-j, & 2j \leq n \leq 3j-1 \\
2j, & 3j \leq n \leq 4j-1
\end{cases}
\end{align*}
We need to consider several cases.
First, suppose that $0 \leq n \leq j-1$. Then $C(n-j)=0$ since $-j \leq n-j \leq -1$. Therefore, $h(n,y') = C(n-C(n-j)) + C(n-j-C(n-j)) - C(n) = C(n) + C(n-j) - C(n) = 0$, as required.
Next consider the case when $j \leq n \leq 2j-1$. Hence, $C(n-j)=n-j$ as $0 \leq n-j \leq j-1$. Thus, $h(n, y') = C(n-C(n-j)) + C(n-j-C(n-j)) - C(n) =
C(j) + C(0) - C(n) =
j + 0 - j = 0$.
Now let $2j \leq n \leq 3j-1$. In this case, $C(n-j)=j$ and $C(n-2j)=n-2j$ since $j \leq n-j \leq 2j-1$ and $0 \leq n-2j \leq j-1$. Hence, $
h(n, y') = C(n-C(n-j)) + C(n-j-C(n-j)) - C(n) =
C(n-j) + C(n-2j) - C(n) =
j + n-2j - n+j = 0$.
Finally, suppose that $3j \leq n \leq 4j-1$. Then $C(n-j)=n-2j$ since $2j \leq n-j \leq 3j-1$. Then $h(n, y') = C(n-C(n-j)) + C(n-j-C(n-j)) - C(n)
= C(2j) + C(j) - C(n) =
j + j - 2j = 0 $ as required. We conclude that $h(n, y')=0$ for $0 \leq n \leq 4j-1$ and this completes the proof of the lemma.
\end{proof}
We conclude the proof of Theorem \ref{thm:Main} by showing the necessity of conditions (i)-(iii) for $(\ref{eqn:solution}$) to formally satisfy $ y$.
\begin{lemma}
Let the sequence $(\ref{eqn:solution}$) formally satisfy $ y$. Then $ y$ satisfies conditions (i)-(iii).
\label{lm:Necess}
\end{lemma}
\begin{proof}
By Lemma \ref{lm:repBouned}, $ y \sim y'$ for some $ y' \in \mathbb{Z} \times S \times S \times S$, where $S = \{0,1,2,...,2j-1\}$. Furthermore, by Lemma \ref{lm:CondPreserve}, $ y$ satisfies (i)-(iii) if and only if $ y'$ does. Therefore, without loss of generality, we may assume that $ y \in \mathbb{Z} \times S \times S \times S$. We now prove that $ y = \SEQ{0}{j}{j}{j}$; since $\SEQ{0}{j}{j}{j}$ clearly satisfies conditions (i)-(iii), this will complete the proof of this lemma.
By assumption, $(\ref{eqn:solution}$) formally satisfies $ y$, so $C(n) = C(n-s_1-C(n-a_1)) + C(n-s_2-C(n-a_2))$ for all $n$. By the Euclidean division algorithm, $s_1 = 2jk + s$ for some $k \in \mathbb{Z}$ and $s \in S$. Therefore, by Lemma \ref{lm:C}, $C(n) = C(n-s-C(n-a_1)) + C(n-s_2-C(n-a_2))-jk$. Further, as in Lemma $\ref{lm:sufficient}$, we will use the following values of $C(n)$, which follow from Lemmas \ref{lm:C} and \ref{lm:freq}:
\begin{align*}
C(n) =
\begin{cases}
n+j, & -2j \leq n \leq -j-1 \\
0, & -j \leq n \leq 0 \\
n, & 1 \leq n \leq j-1 \\
j, & j \leq n \leq 2j
\end{cases}
\end{align*}
In particular, observe that for $n$ satisfying $-2j \leq n < 2j$, $C(n)$ is constant precisely on the intervals $[-j,0]$ and $[j,2j]$; that is, if $C(n) = C(n+1)$ for some $n$ with $-2j \leq n < 2j$, then $n$ must satisfy $-j \leq n < 0$ or $j \leq n < 2j$. We repeatedly use this observation.
First, we demonstrate that one of the summands $C(n-s-C(n-a_1))$ or $C(n-s_2-C(n-a_2))$ is in fact $C(n-C(n-j))$; that is, we show that either $s=0,a_1=j$ or $s_2=0,a_2=j$.
Note that the sequences defined by $C(n-s-C(n-a_1))$ and $C(n-s_2-C(n-a_2))$ are both slow, that is, their forward differences equal either $0$ or $1$. This follows directly from the fact that $C(n)$ itself is slow, which in turn follows from Lemmas \ref{lm:freq} and \ref{lm:C}. Our main use of this fact is to note that if $C(n)$ stays constant, both of its summands must stay constant, and if $C(n)$ increases by 1, then exactly one of its summands must increase by $1$.
Since $C(0) = 0$ and $C(1) = 1$, one of the summands must have increased by $1$; without loss of generality we may assume it was $C(n-s-C(n-a_1))$, interchanging the summands if needed. Hence, $C(1-s-C(1-a_1)) = 1+C(-s-C(-a_1))$. Since $C(n)$ is slow, either $C(-a_1)=C(1-a_1)$, or $C(-a_1)+1=C(1-a_1)$. In the latter case, we would have $1-s-C(1-a_1)=-s-C(-a_1)$, contradicting $C(1-s-C(1-a_1)) = 1+C(-s-C(-a_1))$. Thus, $C(-a_1)=C(1-a_1)$ and so $C(1-s-C(-a_1))=1+C(-s-C(-a_1))$.
Since $C(-a_1) = C(1-a_1)$ and $a_1 \in S$, it must be the case that $C(-a_1) = 0$ and $0<a_1\leq j$ (see the listing of values of $C(n)$ above). Furthermore, since $C(-a_1) = 0$, it follows (by substituting into the last equation in the previous paragraph) that $C(1-s) = 1+C(-s)$. This implies that either $s=0$ or $j<s<2j$.
Summarizing the above results, we have the following restrictions: $0<a_1\leq j$, and either $s=0$ or $j<s<2j$.
Next, we show that $a_1 = j$. If not, then $0<a_1<j$. By the list of values of $C(n)$ above, $C(j+a_1)=C(j+a_1+1)= \ldots = C(2j) = j$. Therefore, since $C(n)$ is constant as $n$ ranges from $j+a_1$ to $2j$, both of its summands must be constant on the same range. In particular, $C(n-s-C(n-a_1))$ must stay constant as $n$ ranges from $j+a_1$ to $2j$. Thus $C(j+a_1-s-C(j+a_1-a_1)) = \ldots = C(2j-s-C(2j-a_1))$. Since $C(j+a_1-a_1) = \ldots = C(2j-a_1) = j$, we have that $C(j+a_1-s-j) = ... = C(2j-s-j)$, or, simplifying, $C(a_1-s) = C(a_1+1-s) = ... = C(j-s)$. This implies that $s \neq 0$ since otherwise $C(j) = C(a_1)$, where $a_1 < j$. Hence, we must have $j<s<2j$.
But observe that $C(n)$ also remains constant and equal to 0 as $n$ ranges from $-j$ to $0$. Applying the same argument as above with all the terms shifted back by 2j, we can conclude that $C(-j+a_1-s) = C(-j+1+a_1-s) = ... = C(-s)$. But this is impossible, since for $j<s<2j$ we have $C(-s) = -s+j$ and $C(-s-1) = -s-1+j$ (see the list of values of $C(n)$ above). Therefore, we must have $a_1 = j$.
Summarizing our current situation, we have $a_1=j$ and either $s=0$ or $j<s<2j$.
Now we show that $s=0$. If not, then $j<s<2j$. As an immediate consequence, $C(j-1-s-C(j-1-a_1)) = C(j-1-s)=0=C(j-s)=C(j-s-C(j-a_1))$ where the first and last equalities come from substituting $a_1=j$, and the middle two equalities hold because $j-s$ and $j-1-s$ lie between $-j$ and $0$, and thus $C(j-s)=C(j-1-s)=0$. Since $C(j-1) + 1 = C(j)$ and $C(j-1-s-C(j-1-a_1)) = C(j-s-C(j-a_1))$, the second summand of $C(n)$ must be the one to increase as $n$ changes from $j-1$ to $j$, so $C(j-1-s_2-C(j-1-a_2)) + 1 = C(j-s_2-C(j-a_2))$. Therefore, we now turn our attention to the term $C(n-s_2-C(n-a_2))$.
We have that $C(j-1-s_2-C(j-1-a_2)) + 1 = C(j-s_2-C(j-a_2))$. Thus, the arguments $j-1-s_2-C(j-1-a_2)$ and $j-s_2-C(j-a_2)$ must be different, which requires $C(j-1-a_2)=C(j-a_2)$. Since $0\leq a_ 2 < 2j$, we have that $-j < j - a_2 \leq j$, which together with $C(j-1-a_2) = C(j-a_2)$ implies that $-j < j-a_2 \leq 0$ (see the list of values for $C(n)$ above). Thus, $C(j-1-a_2)=C(j-a_2)=0$. So we can conclude $C(j-a_2) = C(j-1-a_2) = 0$ and $j \leq a_2 < 2j$. Furthermore, this implies that $C(j-s_2) = C(j-1-s_2)+1$, so $0 \leq s_2 < j$. Consider first the case that $a_2 \neq j$, so that $j<a_2<2j$. Since $C(n)$ is constant as $n$ ranges from $j$ to $a_2$, its second summand must be constant on this range so $C(j-s_2-C(j-a_2)) = ... = C(a_2-s_2-C(a_2-a_2))$. However, since $C(j-a_2) = ... = C(a_2-a_2) = 0$, this tells us that $C(j-s_2) = ... = C(a_2-s_2)$, which is only possible if $s_2 = 0$. Next, since $C(n)$ is also constant as $n$ ranges from $-j$ to $-2j+a_2$, we can apply the preceding argument shifted back by $2j$ terms to conclude that $C(-j+j) = ... = C(-2j+a_2+j)$, which we rewrite as $C(0) = ... = C(a_2-j)$. This is impossible since $C(a_2-j) \neq C(0)$. Thus we conclude that the case $j<a_2<2j$ cannot occur, and we now let $a_2=j$.
Recall that we are still working under the assumption that $j<s<2j$, and we have shown that $a_1=j$ and now $a_2=j$, and that $0 \leq s_2 < j$. Note that $0=C(0) = C(-s-C(-j)) + C(-s_2 - C(-j)) -jk$. Since $C(-j) = 0$, this reduces to $0 = C(-s) + C(-s_2)-jk$. However, $0 \leq s_2 < j$ and consulting the list of values for $C(n)$, we see $C(-s_2) = 0$. This implies $C(-s) = jk$. Since $j<s<2j$ we have $0 > C(-s) > -j$, giving a contradiction. Therefore, we conclude that $s=0$.
We have now proved that $s = 0$ and $a_1=j$, so $C(n) = C(n-C(n-j)) + C(n-s_2-C(n-a_2)) -jk$. We do not yet know anything about $s_2$ or $a_2$.
A key property of $C(n-C(n-j))$ is that it remains constant as $n$ ranges from $2j$ to $3j$. Indeed, by the listing of values of $C(n)$ above, if $2j \leq n \leq 3j$, then $C(n-j)=j$, so $n-C(n-j)=n-j$, so $C(n-C(n-j))=j$, again by the listing of values of $C(n)$.
By Lemmas \ref{lm:freq} and \ref{lm:C}, $C(n+1)=C(n)+1$ for $2j \leq n < 3j$. But the first summand $C(n-C(n-j))$ remains constant on this range, so it follows that the second summand $C(n-s_2-C(n-a_2))$ must be increasing on this range. That is, for $2j \leq n < 3j$, we have $1+C(n-s_2-C(n-a_2))=C(n+1-s_2-C(n+1-a_2))$. This in turn implies that for $n$ in this range, $n-s_2-C(n-a_2) \neq n+1-s_2-C(n+1-a_2)$, which simplifies to $C(n+1-a_2) \neq 1+C(n-a_2)$. Since $C(n)$ increases only by 0 or 1, it must be the case that $C(n+1-a_2)=C(n-a_2)$ for all $n$ in the range $2j \leq n < 3j$. By consulting the list of values of $C(n)$ above, we see that the sequence $2j-a_2,2j+1-a_2,\ldots,3j-a_2$ must begin with $(2z+1)j$ for some integer $z$ (so that $C(n)$ is constant on the next $j$ values). But $a_2$ lies in the range $0 \leq a_2 < 2j$, so the only possibility is $2j-a_2=j$, or $a_2=j$.
In the previous paragraph, we showed that for $2j \leq n < 3j$, we have $1+C(n-s_2-C(n-a_2))=C(n+1-s_2-C(n+1-a_2))$. But $a_2=j$, so $j \leq n-a_2<2j$ and therefore by the list of values of $C(n)$, $C(n-a_2)=C(n+1-a_2)=j$. Thus, $1+C(n-s_2-j)=C(n+1-s_2-j)$ for all $n$ in the range $2j \leq n < 3j$. Consulting the list of values for $C(n)$ above, we see that $2j-s_2-j$ must be $2zj$ for some integer $z$, to ensure that $C(n)$ increases on the next $n$ values. Then since $0 \leq s_2 < 2j$, $2j-s_2-j=0$ is the only possibility so $s_2=j$.
Thus, we have that $C(n) = C(n-C(n-j))+C(n-j-C(n-j))-kj$. By substituting $n=j$ we immediately deduce that $k=0$, so $C(n) = C(n-C(n-j))+C(n-j-C(n-j))$ as desired. This completes the proof.
\end{proof}
Together, Lemmas \ref{lm:sufficient} and \ref{lm:Necess} prove Theorem \ref{thm:Main}. Further, observe that this also proves that $C(n)$ formally satisfies $R(n) = R(n-R(n-j)) + R(n-2j-R(n-3j))$ since $\SEQ{0}{j}{j}{j} \sim \SEQ{0}{j}{2j}{3j}$.
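Theorem \ref{thm:Main} can be verified exhaustively for small parameters. The Python sketch below (ours, with $j=2$ and a finite search box) checks that formal satisfaction, tested on the window $0 \leq n \leq 4j-1$ as justified by Lemma \ref{lm:nBounded}, coincides exactly with conditions (i)-(iii):

```python
j = 2  # small enough for an exhaustive check

def C(n):
    return sum(-(-(n - i) // (2 * j)) for i in range(j))

def satisfies(y):
    # by the 4j-periodicity of h in n, a single window of length 4j suffices
    s1, a1, s2, a2 = y
    return all(C(n - s1 - C(n - a1)) + C(n - s2 - C(n - a2)) == C(n)
               for n in range(4 * j))

def conditions(y):
    s1, a1, s2, a2 = y
    return (s1 % j == 0 and s2 % j == 0            # (i)
            and a1 % (2 * j) == j == a2 % (2 * j)  # (ii)
            and 2 * (s1 + s2) == a1 + a2)          # (iii)

box = range(-2 * j, 4 * j + 1)
for s1 in box:
    for a1 in box:
        for s2 in box:
            for a2 in box:
                y = (s1, a1, s2, a2)
                assert satisfies(y) == conditions(y)
```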
\section{Concluding Remarks} \label{sec:Conc}
A wide variety of nested recursions have solutions exhibiting either periodic or ``periodic-like'' behaviour. In \cite{Golomb1990}, Golomb observed that if the one-term nested recursion $a_n = a_{n-a_{n-1}}$ has a solution, then it always eventually becomes periodic; in fact, this holds for any one-term homogeneous nested recursion. This paper, together with \cite{Rpaper}, \cite{CeilFunSol}, and \cite{ConollyLike}, exhibits large families of nested recursions with periodic difference sequences. In \cite{Golomb1990}, Golomb also illustrated that with appropriate initial conditions Hofstadter's Q-recursion $Q(n) = Q(n-Q(n-1))+Q(n-Q(n-2))$ could be made to exhibit what he called ``quasi-periodic'' behavior: more precisely, given initial conditions $Q(1) = 3, Q(2)=2$, and $Q(3)=1$, the resulting solution has $Q(3k+1)=3, Q(3k+2)=3k+2$ and $Q(3k)=3k-2$. Ruskey \cite{FibHof} has demonstrated similar behaviour involving the Q-recursion and the Fibonacci sequence.
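Golomb's quasi-periodic example is easy to reproduce; the following Python sketch (ours) generates the solution from the stated initial conditions and checks the claimed pattern:

```python
def golomb_Q(N):
    # Hofstadter's Q-recursion with Golomb's initial conditions 3, 2, 1
    Q = {1: 3, 2: 2, 3: 1}
    for n in range(4, N + 1):
        Q[n] = Q[n - Q[n - 1]] + Q[n - Q[n - 2]]
    return Q

Q = golomb_Q(300)
for k in range(1, 99):
    assert Q[3 * k + 1] == 3
    assert Q[3 * k + 2] == 3 * k + 2
    assert Q[3 * k] == 3 * k - 2
```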
Taken together, this suggests that ``periodic-like" behavior appears frequently in the solutions to nested recursions. Perhaps more such periodicity variants await discovery. Even further, perhaps some property, such as ``$a_n-a_{n-p}$ is periodic for some $p$", may unify the known examples and lead to a broader result about all such solution sequences.
| {
"timestamp": "2011-07-21T02:00:18",
"yymm": "1107",
"arxiv_id": "1107.3852",
"language": "en",
"url": "https://arxiv.org/abs/1107.3852",
"abstract": "It is known that, for given integers s \\geq 0 and j > 0, the nested recursion R(n) = R(n - s - R(n - j)) + R(n - 2j - s - R(n - 3j)) has a closed form solution for which a combinatorial interpretation exists in terms of an infinite, labeled tree. For s = 0, we show that this solution sequence has a closed form as the sum of ceiling functions C(n). Further, given appropriate initial conditions, we derive necessary and sufficient conditions on the parameters s1, a1, s2 and a2 so that C(n) solves the nested recursion R(n) = R(n - s1 - R(n - a1)) + R(n- s2 - R(n - a2)).",
"subjects": "Combinatorics (math.CO)",
"title": "Sums of Ceiling Functions Solve Nested Recursions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9817357200450139,
"lm_q2_score": 0.8354835371034368,
"lm_q1q2_score": 0.8202240318839976
} |
https://arxiv.org/abs/2106.07882 | Do the Hodge spectra distinguish orbifolds from manifolds? Part 1 | We examine the relationship between the singular set of a compact Riemannian orbifold and the spectrum of the Hodge Laplacian on $p$-forms by computing the heat invariants associated to the $p$-spectrum. We show that the heat invariants of the $0$-spectrum together with those of the $1$-spectrum for the corresponding Hodge Laplacians are sufficient to distinguish orbifolds with singularities from manifolds as long as the singular sets have codimension $\le 3.$ This is enough to distinguish orbifolds from manifolds for dimension $\le 3.$ | \section*{Introduction}
A (Riemannian) orbifold is a versatile generalization of a (Riemannian) manifold that permits the presence of well-structured singular points. Orbifolds appear in a variety of mathematical areas and have applications in physics, in particular to string theory.
Orbifolds are locally the orbit spaces of finite effective group actions on $\mathbf R^d$. Orbifold singularities correspond to orbits with non-trivial isotropy and thus each singularity has an associated ``isotropy type.'' The notions of the Laplace-Beltrami operator and Hodge Laplacian extend to Riemannian orbifolds and many results on the spectral geometry of Riemannian manifolds, such as Weyl asymptotics and Cheeger's inequality, extend to this more general setting. We will always assume all orbifolds under consideration are connected and compact without boundary.
Because the possible presence of singular points is a defining characteristic of the class of orbifolds, the question ``Can one hear the singularities of an orbifold?" is fundamental in the study of the spectral geometry of orbifolds. More precisely, one asks:
\begin{enumerate}
\item \label{it:orbimani} Does the spectrum distinguish orbifolds with singularities from manifolds?
\item Does the spectrum detect the topology and geometry of the set of singular points, including the isotropy types of singularities?
\end{enumerate}
Many authors have addressed these questions for the spectrum of the Laplace-Beltrami operator. However, the question of whether the spectrum of the Laplace-Beltrami operator \emph{always} distinguishes Riemannian orbifolds with singularities from Riemannian manifolds remains open. In this article, we focus primarily on question \ref{it:orbimani} and consider whether additional spectral data, specifically the spectra of the Hodge Laplacians on $p$-forms, suffice to detect the presence of singularities. We will say that two closed Riemannian orbifolds are $p$-\emph{isospectral} if their Hodge Laplacians acting on $p$-forms are isospectral. In particular, $0$-isospectrality means that the Laplace-Beltrami operators are isospectral.
We first construct the fundamental solution of the heat equation and the heat trace for the $p$-spectrum on closed Riemannian orbifolds, analogous to that of the 0-spectrum in \cite{DGGW08} discussed below, and then apply the resulting spectral invariants to question \ref{it:orbimani}. Our first main result is that the 0-spectrum and 1-spectrum together distinguish orbifolds with a sufficiently large singular set from manifolds. Orbifolds are stratified spaces in which the regular points form an open dense stratum, and each lower-dimensional stratum is a connected component of the set of singularities of a given isotropy type. By the \emph{codimension} of the singular set, we mean the minimum codimension of the singular strata.
\begin{thm}\label{thm:main1V4} The $0$-spectrum and $1$-spectrum together distinguish closed Riemannian orbifolds with singular sets of codimension $\leq 3$ from closed Riemannian manifolds.
\end{thm}
Consequently:
\begin{cor}\label{thm:lowdim} For $d\leq 3$, the $0$- and $1$-spectra together distinguish singular closed $d$-dimensional Riemannian orbifolds from closed Riemannian manifolds.
\end{cor}
The case $d=3$ is perhaps of particular interest since 3-dimensional orbifolds play a role in Thurston's Geometrization Conjecture, later proven by Grigori Perel'man.
We next consider the inverse spectral problem for individual $p$-spectra, obtaining both positive and negative results. The theorem below refers to the combinatorial Krawtchouk polynomials (see Example~\ref{isotropy2}, Equation~\eqref{krawtpoly}); there is a large literature on the zeros of these polynomials. Theorem~\ref{thm:krawt} part~\ref{negative} is motivated by and applies the work of Roberto Miatello and Juan Pablo Rossetti \cite{MR01}, where the Krawtchouk polynomials were used to construct $p$-isospectral Bieberbach manifolds.
\begin{thm}\label{thm:krawt} Denote by ${\mathcal Orb}^d_k$ the class of all closed $d$-dimensional Riemannian orbifolds with singular set of codimension $k$. Let $p\in \{0,1,2,\dots , d\}$ and let $K_p^d$ be the Krawtchouk polynomial given by Equation~\eqref{krawtpoly}.
\begin{enumerate}
\item\label{volume} If $k$ is odd and $K_p^d(k)\neq 0$, then the $p$-spectrum determines the $(d-k)$-dimensional volume of the singular set of each orbifold in ${\mathcal Orb}^d_k$. In particular, the $p$-spectrum distinguishes orbifolds in ${\mathcal Orb}^d_k$ from closed Riemannian manifolds.
\item\label{negative} For every $k\in\{1,\dots, d\}$ such that $K_p^d(k)=0$, there
exists an orbifold ${\mathcal{O}}\in {\mathcal Orb}^d_k$ that is $p$-isospectral to a closed Riemannian manifold.
\end{enumerate}
\end{thm}
\begin{remark}\label{rem on thm krawt}~
\begin{enumerate}
\item In the case $p=0$, the Krawtchouk polynomial $K_0^d$ is identically $1$, so the first statement of item~\ref{volume} says that, for every closed Riemannian orbifold with singular set of odd codimension, the $0$-spectrum detects the volume of the singular set. To our knowledge, this statement is new even in the case $p=0$. On the other hand, as mentioned in the review below, the second statement in part~\ref{volume} was previously known in this case.
\item Consider the special case of Riemannian orbifolds for which \emph{all} singular strata have codimension $k=1$. The underlying space of such an orbifold ${\mathcal{O}}$ is a Riemannian manifold $M$ with boundary; the singular set of the orbifold corresponds to the manifold boundary. (See Remark~\ref{abs boun}.) In this case, the $p$-spectrum of the orbifold coincides with the $p$-spectrum of $M$ with absolute boundary conditions (Neumann boundary conditions if $p=0$). Statement~\ref{volume} of Theorem~\ref{thm:krawt} generalizes the well-known fact that, except when the dimension $d$ is even and $p=\frac{d}{2}$, the $p$-spectrum of a Riemannian manifold determines the volume of the boundary. (We note that $p=\frac{d}{2}$ is the unique $p$ for which $K_p^d(1)=0$.)
\item As discussed in \cite{GR03} and \cite{GR03-errata}, the middle degree $p=\frac{d}{2}$ Hodge spectrum of even-dimensional Riemannian manifolds, or more generally orbifolds, contains less information than the other spectra due to Hodge duality. Theorem~\ref{thm:krawt} illustrates another weakness of the middle degree Hodge spectrum: when $p=\frac{d}{2}$, the Krawtchouk polynomial $K_p^d(k)=0$ for every odd integer $k$ in the interval $[1,d-1]$. (Krawtchouk polynomials for other values of $p$ yield far fewer integer zeros.) In \cite{GR03}, J. P. Rossetti and the second author gave examples of singular Riemannian orbifolds of even dimension $d$ that are $\frac{d}{2}$-isospectral to Riemannian manifolds. The construction in Theorem~\ref{thm:krawt}, part~\ref{negative} reproduces some of those examples in this case.
Theorem~\ref{thm:krawt}, part~\ref{negative} also yields examples of Riemannian orbifolds of both even and odd dimensions with singularities that are $p$-isospectral to manifolds for various $p\neq 0$. To our knowledge, the only previously known examples of such isospectralities were in the special case $p=\frac{d}{2}$.
\end{enumerate}
\end{remark}
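The zeros of the Krawtchouk polynomials that drive Theorem~\ref{thm:krawt} are easy to explore numerically. The sketch below assumes the standard binary normalization $K_p^d(k)=\sum_{j}(-1)^j\binom{k}{j}\binom{d-k}{p-j}$; that Equation~\eqref{krawtpoly} uses this normalization is our assumption, though it is consistent with all of the facts quoted above:

```python
from math import comb

def K(p, d, k):
    """Binary Krawtchouk polynomial K_p^d(k) (assumed normalization)."""
    return sum((-1) ** j * comb(k, j) * comb(d - k, p - j) for j in range(p + 1))

d = 6
print([K(0, d, k) for k in range(d + 1)])                # K_0^d is identically 1
print([k for k in range(1, d) if K(d // 2, d, k) == 0])  # middle degree vanishes at odd k
print([p for p in range(d + 1) if K(p, d, 1) == 0])      # K_p^d(1) = 0 only at p = d/2
```

For $d=6$ the three printed lines confirm: $K_0^d\equiv 1$; the middle-degree polynomial $K_3^6$ vanishes at every odd $k$ in $[1,5]$; and $K_p^d(1)=0$ only for $p=d/2=3$.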
We then focus on the Hodge Laplacian on 1-forms, obtaining both positive and negative results. In particular:
\begin{itemize}
\item We address analogues for the 1-spectrum of those inverse spectral results for the $0$-spectrum appearing in Theorems~\ref{thm:common good} and \ref{thm:dggw5.1} in the review below. We find that these results remain valid for the 1-spectrum \emph{provided that} one imposes an inequality on the codimension of the singular set in the case of Theorem~\ref{thm:common good}, and on the codimension of the singular stratum that appears in the statement of Theorem~\ref{thm:dggw5.1}. Counterexamples constructed using Krawtchouk polynomials show that the conditions imposed are sharp, at least for all even-dimensional orbifolds.
\item We construct a flat ``3-pillow'' (an orbifold ${\mathcal{O}}$ whose underlying space is a 2-sphere with three cone points) such that the 1-spectrum of ${\mathcal{O}}$, endowed with a flat Riemannian metric, coincides with the 1-spectrum of a square in $\mathbf R^2$ with absolute boundary conditions.
\end{itemize}
\subsection{Partial review of known results for the Laplace-Beltrami operator.}
The earliest results in the spectral geometry of Riemannian orbifolds are those of Yuan-Jen Chiang \cite{YJC}, who established the existence of the Laplace spectrum and heat kernel of a Riemannian orbifold, and Carla Farsi \cite{F01}, who extended the Weyl law to the orbifold setting. Results from a variety of authors soon followed, including several examples of isospectral non-isometric orbifolds with various properties. For example, Naveed Bari, the last author, and David Webb \cite{BSW06} produced arbitrarily large finite families of mutually strongly isospectral orbifolds with non-isomorphic maximal isotropy groups of the same order. In particular, these orbifolds are $p$-isospectral for all $p$. Shortly afterward, J. P. Rossetti, Dorothee Schueth, and Martin Weilandt \cite{RSW08} produced examples of strongly isospectral orbifolds with maximal isotropy groups of distinct orders.
Emily Dryden and Alexander Strohmaier \cite{DS09} proved that hyperbolic orientable closed 2-orbifolds are 0-isospectral if and only if they have the same geodesic length spectrum and the same number of cone points of each order. Benjamin Linowitz and Jeffrey Meyer \cite{LM17} obtained interesting results for locally symmetric spaces. A series of papers extended to the orbifold setting the result of Robert Brooks, Peter Perry, and Peter Petersen \cite{BPP92} that a set $\mathcal I_k(d)$ of 0-isospectral $d$-manifolds, sharing a uniform lower bound $k$ on sectional curvature, contains manifolds of only finitely many homeomorphism types (diffeomorphism types for $d \neq 4$). Specifically, the last author \cite{St05} obtained an upper bound on the order of isotropy groups that can arise in elements of the set $\mathcal I_k(d)$ (expanded to contain orbifolds), the last author and Emily Proctor \cite{PrSt10} showed orbifold diffeomorphism finiteness for $\mathcal I_k(2)$, Proctor \cite{Pr12} obtained orbifold homeomorphism finiteness for $\mathcal I_k(d)$ assuming only isolated singularities, and John Harvey \cite{H16} obtained orbifold homeomorphism finiteness for the full set $\mathcal I_k(d)$. In addition, Farsi, Proctor, and Christopher Seaton \cite{FPS14} introduced and studied a stronger notion of isospectrality of orbifolds.
Recall that the trace of the heat kernel of the Laplace-Beltrami operator on a closed $d$-dimensional Riemannian manifold ${M}$ is a primary source of spectral invariants. Denoting the spectrum by $\{\lambda_j\}_{j=0}^\infty$, the heat trace (on the left below) admits an asymptotic expansion of the following form as $t \to 0^{+}$:
\begin{equation*}
\sum_{j=0}^\infty e^{-\lambda_j t}
\sim
(4\pi t)^{-d/2} \sum_{k=0}^{\infty} a_k({M}) \, t^k.
\end{equation*}
The coefficients $a_k$ are spectral invariants referred to as the \emph{heat invariants}. The first two, for example, give the volume and the total scalar curvature of the Riemannian manifold.
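As a sanity check of the leading term, consider the circle of circumference $L$, whose Laplace spectrum is explicit; the following numerical sketch (ours, not taken from the paper) confirms that the heat trace matches $(4\pi t)^{-1/2}\,L$ for small $t$:

```python
import math

# Laplace spectrum of a circle of circumference L: (2*pi*k/L)^2 for k >= 0,
# each nonzero eigenvalue with multiplicity 2.  The heat trace should match
# the leading term (4*pi*t)^{-1/2} * a_0 with a_0 = L (the volume) as t -> 0+.
L, t = 2.0, 1e-3
trace = 1.0 + 2.0 * sum(
    math.exp(-((2.0 * math.pi * k / L) ** 2) * t) for k in range(1, 2000)
)
leading = (4.0 * math.pi * t) ** -0.5 * L
print(trace, leading)  # agree to many digits for small t
```

By Poisson summation the agreement is in fact exponentially good as $t \to 0^+$: the error is of order $e^{-L^2/4t}$.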
A Riemannian orbifold is said to be \emph{good} if it is a global quotient ${\mathcal{O}}=\Gamma{\backslash} M$ of a Riemannian manifold $M$ by a (possibly infinite) discrete group $\Gamma$ acting effectively and isometrically. (We say $M$ is a Riemannian cover of ${\mathcal{O}}$ even though the group action need not be free.) In \cite{D76}, Harold Donnelly obtained an asymptotic expansion of the form
$$
\sum_{j=0}^\infty e^{-\lambda_j t}
\sim
(4\pi t)^{-d/2} \sum_{k=0}^{\infty} c_k({\mathcal{O}}) \, t^{k/2}$$ for the heat trace of a good Riemannian orbifold by adding contributions from each element $\gamma\in \Gamma$. Each $c_k({\mathcal{O}})$ is a sum of terms $c_k({\mathcal{O}},\gamma)$, $\gamma\in \Gamma$, each of which is in turn a sum of integrals over the connected components of the fixed point set of $\gamma$. The integrands are universal expressions in the germs of the metric and the isometry $\gamma$.
We cite some consequences of Donnelly's heat trace asymptotics for good orbifolds:
\begin{thm}\label{thm:common good} Let ${\mathcal{O}}$ be a closed Riemannian orbifold with singularities and $M$ a closed Riemannian manifold. If either of the following conditions hold, then ${\mathcal{O}}$ cannot be 0-isospectral to $M$:
\begin{enumerate}
\item\label{sutton}\cite[Theorem 1.2]{S10}. ${\mathcal{O}}$ and $M$ have 0-isospectral finite Riemannian covers;
\item\label{homog cover} (See \cite{GR03}, \cite{GR03-errata}) ${\mathcal{O}}$ and $M$ have a common Riemannian cover $\widetilde{M}$ that is a homogeneous Riemannian manifold.
\end{enumerate}
\end{thm}
(We note that the article \cite{GR03} by J. P. Rossetti and the second author originally asserted that a closed Riemannian orbifold with singularities and a closed Riemannian manifold cannot be 0-isospectral if they have a common Riemannian cover $\widetilde{M}$. Recently, a gap was found in the proof; the argument remains valid when $\widetilde{M}$ is assumed to be either compact -- so that it is a finite cover of ${\mathcal{O}}$ and $M$ -- or else Riemannian homogeneous. In the interim, Craig Sutton generalized the case of a common finite Riemannian cover to obtain the first statement of the theorem above.)
In \cite{DGGW08}, Dryden, the second author, Sarah Greenwald, and Webb extended the heat trace asymptotics to the case of arbitrary (not necessarily good) closed Riemannian orbifolds, expressing the invariants $c_k({\mathcal{O}})$ as sums of contributions from the various ``primary'' strata of the orbifold. (See Notation and Remarks~\ref{notarem:isomax} part~\ref{it:isomax-primary} for the definition of primary.) An orbifold ${\mathcal{O}}$ with singularities always contains primary singular strata; the singular strata of minimal codimension in ${\mathcal{O}}$ are always primary, and any component of the singular set that contains a non-primary stratum must contain at least three primary ones. (See Notation and Remarks~\ref{notarem:isomax} parts \ref{it:isomax-mincodim} and \ref{it:isomax-three}.) As in the good case, the small-time heat trace expansion is of the form $(4\pi t)^{-d/2} \sum_{k=0}^{\infty} c_k({\mathcal{O}}) \, t^{k/2}$. Primary strata of even (respectively, odd) codimension in ${\mathcal{O}}$ contribute only to coefficients $c_k$ with $k$ even (respectively, odd). As a consequence:
\begin{thm}[{\cite[Theorem 5.1]{DGGW08}}] \label{thm:dggw5.1}
A closed Riemannian orbifold that contains a primary singular stratum of odd codimension cannot be 0-isospectral to any closed Riemannian manifold.
\end{thm}
Sean Richardson and the last author \cite{RS20} proved that an orbifold ${\mathcal{O}}$ is locally orientable if and only if ${\mathcal{O}}$ does not contain any primary singular strata of odd codimension. They thus concluded that the 0-spectrum determines local orientability of an orbifold.
In addition, \cite[Theorem 5.15]{DGGW08} shows that the $0$-spectrum distinguishes closed, locally orientable 2-orbifolds with non-negative Euler characteristic from smooth, oriented, closed surfaces (in fact, a stronger result is proven there; see Remark~\ref{rem:orbisurfaces}).
\bigskip
This paper is organized as follows: Section \ref{sec:background} provides background on Riemannian orbifolds, their singular strata, differential forms, and the Hodge Laplacian. In Section \ref{sec:heattrace}, we construct the heat kernel, heat trace, and heat invariants for $p$-forms on orbifolds. Our construction follows that of the manifold case by Gaffney \cite{G58} and Patodi \cite{P71}, uses results of Donnelly and Patodi \cite{DP77} for good orbifolds, and parallels the adaptations in \cite{DGGW08}.
Section \ref{applications} contains all the inverse spectral results -- both positive and negative -- discussed above as well as additional results. Theorem~\ref{thm:main1V4} is proven in Subsection~\ref{ss:codim2}. The negative results appear in Subsection~\ref{sec.isospec}, where we construct flat Riemannian orbifolds that are $p$-isospectral to Riemannian manifolds. In particular, part~\ref{negative} of Theorem~\ref{thm:krawt} above appears as Proposition~\ref{it:flat-involution}. Section~\ref{sec: common cover} addresses positive inverse spectral results for individual $p$-spectra including, in particular, Theorem~\ref{thm:krawt}, part~\ref{volume}, which appears as Theorem~\ref{thm:krawt4}, and the analogues for 1-forms of Theorems~\ref{thm:common good} and \ref{thm:dggw5.1}, which appear jointly as Theorem~\ref{thm:halfdim}.
In an appendix, we outline the computation of the heat invariants for good orbifolds in \cite{D76} and \cite{DP77}, indicating aspects of the construction that are applicable to more general settings.
\section{Riemannian Orbifold Background}
\label{sec:background}
\subsection{Definitions and Basic Properties}\label{background}
In this section we recall the definition and basic properties of a Riemannian orbifold. We also present some examples of singular strata in orbifolds that will be revisited in later sections. For comprehensive information about orbifolds see the paper by Peter Scott \cite{Scott83}, as well as the texts \cite{Thurston97} by William Thurston and \cite{ALR07} by Alejandro Adem, Johann Leida, and Yongbin Ruan. Here we follow a somewhat abridged form of the presentation given in \cite{G12} by the second author, and \cite{DGGW08} by Dryden, Gordon, Greenwald, and Webb.
\begin{definition}
\label{defn:ofld}~
Let $X$ be a second countable Hausdorff space.
\begin{enumerate}
\item For a connected open subset $U \subseteq X$, an \emph{orbifold chart} (of dimension $d$) over $U$ is a triple $\cc$ where $\widetilde{U} \subseteq \mathbf R^d$ is a connected open subset, $G_U$ is a finite group acting on $\widetilde{U}$ effectively and by diffeomorphisms, and $\pi_U\colon \widetilde{U} \to X$ is a map inducing a homeomorphism $\widetilde{U}/G_U \xrightarrow{\cong} U$. The open set $U$ is sometimes referred to as the image of the chart.
\item \label{it:injection} If $U \subseteq V \subseteq X$, an orbifold chart $\cc$ is said to \emph{inject} into an orbifold chart $\cc[V]$ if there exists a smooth embedding $i\colon \widetilde{U} \to \widetilde{V}$ and a monomorphism $\lambda\colon G_U \to G_V$ such that $\pi_U = \pi_V \circ i$ and $i \circ \gamma = \lambda(\gamma) \circ i$ for all $\gamma \in G_U$.
\item \label{it:ro} Orbifold charts $\cc$ and $\cc[V]$ on open sets $U$ and $V$ are said to be \emph{compatible} if for every $x\in U\cap V$, there exists a neighborhood $W\subset U\cap V$ of $x$ that admits an orbifold chart injecting into both $\cc$ and $\cc[V]$. A $d$-dimensional \emph{orbifold atlas} on $X$ consists of a collection of compatible $d$-dimensional orbifold coordinate charts whose images cover $X$. An \emph{orbifold} is a second countable Hausdorff topological space together with a maximal $d$-dimensional atlas.
\item A Riemannian structure on an orbifold ${\mathcal{O}}$ is an assignment of a $G_U$-invariant Riemannian metric on $\widetilde{U}$ to each orbifold chart $\cc$, compatible in the sense that each injection of charts as in part~\ref{it:injection} is an isometric embedding.
\item If an orbifold is the quotient space of a manifold under the smooth action of a discrete group, it is called a \emph{good} orbifold. Otherwise, it is called a \emph{bad} orbifold.
\end{enumerate}
\end{definition}
\begin{definition}\label{def:iso}
Let $\mathcal O$ be a Riemannian orbifold and let $x \in \mathcal O$.
\begin{enumerate}
\item \label{it:isotropy}A chart $\cc$ about $x$ defines a smooth action of $G_{U}$ on $\widetilde U \subset \mathbb R^d$. Fix a lift $\widetilde x \in \widetilde{U}$ of $x$. The set of $\gamma \in G_U$ such that $\gamma \cdot \widetilde{x}=\widetilde{x}$ is called the \emph{isotropy group} of $x$, denoted ${\operatorname{{Iso}}}(x)$. An element $\gamma \in {\operatorname{{Iso}}}(x)$ defines an invertible linear map $d\gamma_{\widetilde{x}}\colon T_{\widetilde{x}}\widetilde{U} \to T_{\widetilde{x}}\widetilde{U}$, which gives an injective linear representation of the isotropy group of $x$. Every finite-dimensional linear representation of a finite group is equivalent to an orthogonal representation, unique up to orthogonal equivalence. Thus ${\operatorname{{Iso}}}(x)$ can be viewed as a subgroup of the orthogonal group $O(d)$, unique up to conjugacy.
\item \label{it:welldef} The \emph{isotropy type} of $x$ is the conjugacy class in $O(d)$ of the isotropy group of $x$. This definition is independent of both the choice of lift of $x$ and the choice of chart.
\item We say $x$ is a \emph{regular point} if it has trivial isotropy type, and a \emph{singular point} otherwise.
\end{enumerate}
\end{definition}
We conclude our discussion of the basic properties of Riemannian orbifolds by describing their singular stratification. Define an equivalence relation on $\mathcal O$ by saying that two points in $\mathcal O$ are \emph{isotropy equivalent} if they have the same isotropy type (this is well defined by Definition~\ref{def:iso} part~\ref{it:welldef}). The connected components of the isotropy equivalence classes of $\mathcal O$ form a smooth stratification of $\mathcal O$. When the corresponding isotropy type is non-trivial these strata are called \emph{singular strata}. Note that in the literature, the requirement that a singular stratum be connected is sometimes dropped.
\begin{notarem}\label{notarem:isomax}~
\begin{enumerate}
\item \label{it:isomax-primary} Let $N$ be a singular stratum in $\mathcal O$ and let $\operatorname{Iso}(N)$ denote a representative of its isotropy type, unique up to conjugacy. We denote by $\iiso^{\rm max}(N)$ the set of all elements $\gamma \in \operatorname{Iso}(N)$ such that in a coordinate chart $\cc$ over a neighborhood $U$ about a point in $N$, $\pi_U^{-1}(U \cap N)$ is open in the fixed point set of $\gamma$. We say that a stratum $N$ is \emph{primary} if $\iiso^{\rm max}(N)$ is non-empty. One can check that these definitions are independent of the choice of chart and of the choice of subgroup in the isotropy type of $N$.
\item \label{it:isomax-mincodim} Not every singular stratum is primary. (See Example 2.16 in \cite{DGGW17}.) The singular strata of minimal codimension must be primary. (See \cite[Notation and Remarks 2.14]{DGGW17}.)
\item \label{it:isomax-three} A component $C$ of the singular set that contains a non-primary singular stratum $N$ must also contain at least three primary singular strata. To see this, fix a point $x\in N$ and take a coordinate chart $\cc$ about $x$ with $\widetilde U$ a ball about the origin in $\mathbf R^d$, $\pi_U(0)=x$, and $G_U = \operatorname{Iso}(N)$ acting on $\widetilde U$ by orthogonal transformations. For $\gamma \in \operatorname{Iso}(N)$, $Fix(\gamma)$ is the intersection of a subspace of $\mathbf R^d$ with $\widetilde U$. Now each nontrivial $\gamma \in \operatorname{Iso}(N)$ must lie in $\iiso^{\rm max}(N')$ for some stratum $N' \subset C$. If all nontrivial $\gamma \in \operatorname{Iso}(N)$ were in $\iiso^{\rm max}(N')$ for a single stratum $N'$, we would have $\operatorname{Iso}(N)=\operatorname{Iso}(N')$, and $N$ and $N'$ would not be distinct strata. Thus there are at least two primary singular strata $N',N'' \subset C$. Suppose these were the only two. Then every nontrivial element of $\operatorname{Iso}(N)$ lies in at least one of $\operatorname{Iso}(N')$ or $\operatorname{Iso}(N'')$. Since neither $\operatorname{Iso}(N')$ nor $\operatorname{Iso}(N'')$ contains all elements of $\operatorname{Iso}(N)$, there are $\gamma_1, \gamma_2 \in \operatorname{Iso}(N)$ with $\gamma_1 \notin \operatorname{Iso}(N')$ and $\gamma_2 \notin \operatorname{Iso}(N'')$. Then we must have $\gamma_1\in \operatorname{Iso}(N'')$ and $\gamma_2\in \operatorname{Iso}(N')$. Now $\gamma_1\gamma_2$ must lie in one of these two isotropy groups, say $\operatorname{Iso}(N')$. But then $\gamma_1 = (\gamma_1\gamma_2)\gamma_2^{-1} \in \operatorname{Iso}(N')$, a contradiction.
\item We observe that $\iiso^{\rm max}(N)$ consists of all $\gamma\in {{\operatorname{{Iso}}}(N)}$ such that the multiplicity of $1$ as an eigenvalue of the orthogonal representation of $ \gamma$ is precisely the dimension of $N$.
\end{enumerate}
\end{notarem}
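The contradiction at the end of part~\ref{it:isomax-three} rests on the fact that a finite group is never the union of two proper subgroups, while three can suffice. A toy check (ours, not the paper's) with the Klein four-group, which occurs, for instance, as the isotropy group of a corner reflector with $k=2$:

```python
from itertools import product

# Klein four-group Z/2 x Z/2, written additively; its three proper
# nontrivial subgroups are each generated by one nontrivial element.
G = set(product([0, 1], repeat=2))
proper_subgroups = [
    {(0, 0), (1, 0)},
    {(0, 0), (0, 1)},
    {(0, 0), (1, 1)},
]
# No two proper subgroups cover G -- so two primary strata cannot account
# for all of Iso(N) -- but the three together do cover G.
two_cover = any(H1 | H2 == G for H1 in proper_subgroups for H2 in proper_subgroups)
three_cover = set().union(*proper_subgroups) == G
print(two_cover, three_cover)  # False True
```
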
\subsection{Examples of Orbifold Singular Strata}
The singular strata described below will be important to the discussion in Section~\ref{applications}.
\begin{ex}
By considering all the finite subgroups of the orthogonal group $O(2)$, we obtain a classification of the isotropy types of singular points that can occur in dimension $2$:
\begin{enumerate}
\item A rotation about the origin in $\mathbf R^2$ through an angle $\frac{2\pi}{k}$, for some integer $k\geq2$, generates a finite cyclic group of order $k$. The image of the origin in the quotient of $\mathbf R^2$ under this action is an isolated singular point called a \emph{cone point} of order $k$.
\item A reflection across a line through the origin generates a cyclic group of order $2$. The fixed points of the reflection correspond to a $1$-dimensional singular stratum in the quotient called a \emph{mirror} or \emph{reflector edge}.
\item A dihedral group generated by a pair of reflections across lines forming an angle $\frac{\pi}{k}$, for an integer $k\geq 2$, yields a $0$-dimensional stratum in the quotient called a \emph{corner reflector} or \emph{dihedral point}. The corner reflector is not an isolated singular point; rather, it is the point where two mirror edges intersect.
\end{enumerate}
\end{ex}
\begin{ex}
Starting from an orbifold $\mathcal O$ of dimension $\ell$, we can construct orbifolds of higher dimension $d$ by taking a product with a manifold $M$ of dimension $d-\ell$. For example, let $\mathcal O$ be an orbifold with a single cone point $x_0$ and underlying topological space $S^2$, and let $M$ be a closed $(d-2)$-dimensional manifold. The product $\mathcal{O} \times M$ has the structure of a $d$-dimensional orbifold with a unique singular stratum $\{x_0\} \times M$ of codimension $2$.
\end{ex}
\begin{nota}\label{nota:Rstuff} Let $r$ and $s$ be integers with $0 \le r \le d$ and $0 \le 2s \le d$, and let $\theta_1, \dots, \theta_s \in (0,\pi)$. For a given $\gamma \in O(d)$, the expression $E(\theta_1, \theta_2, \dots, \theta_s;r)$ will be called the \emph{eigenvalue type} of $\gamma$ and indicates that $\gamma$ has eigenvalues $e^{\pm i \theta_j}$ for $1 \le j \le s$, $-1$ with multiplicity $r$, and $1$ with multiplicity $d-2s-r$. We emphasize that the $\theta_j$ are repeated according to multiplicity. When $r=0$ we write $E(\theta_1, \theta_2, \dots, \theta_s;)$ and when $s=0$ we write $E(;r)$. We note that the parity of $r$ determines the parity of the codimension of the fixed point set.
\end{nota}
\begin{remark} We take $\theta_j \neq \pi$ for all $1 \le j \le s$ for the notational convenience of letting $r$ count the total number of eigenvalues equal to $-1$. This convention can be dropped without affecting the results of Section~\ref{applications}.
\end{remark}
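The last sentence of Notation~\ref{nota:Rstuff} can be made concrete: an element of eigenvalue type $E(\theta_1,\dots,\theta_s;r)$ fixes a subspace of codimension $2s+r$, whose parity is the parity of $r$. A small illustrative sketch (ours, not from the paper):

```python
import cmath, math

def eigenvalues(d, thetas, r):
    """Eigenvalue multiset of gamma in O(d) with eigenvalue type E(thetas; r)."""
    evs = []
    for t in thetas:
        evs += [cmath.exp(1j * t), cmath.exp(-1j * t)]
    # -1 with multiplicity r, +1 with multiplicity d - 2s - r
    return evs + [-1.0] * r + [1.0] * (d - 2 * len(thetas) - r)

def fix_codim(d, thetas, r):
    """Codimension of Fix(gamma): d minus the multiplicity of the eigenvalue 1."""
    return d - sum(1 for e in eigenvalues(d, thetas, r) if abs(e - 1) < 1e-12)

# E(pi/3; 1) in O(5): codimension 2*1 + 1 = 3 -- odd, since r = 1 is odd.
print(fix_codim(5, [math.pi / 3], 1))       # 3
# E(pi/2, pi/2;) in O(4): codimension 2*2 + 0 = 4 -- even, since r = 0.
print(fix_codim(4, [math.pi / 2] * 2, 0))   # 4
```
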
\subsection{Differential forms and the Hodge Laplacian}
As the main results of this article concern the spectrum of the Laplacian acting on $p$-forms on a closed Riemannian orbifold, we give a brief description of how to define differential forms on orbifolds.
For details of the constructions we refer to \cite{C19} or \cite{W12}. We take $\mathcal O$ to be a $d$-dimensional orbifold throughout this section.
\begin{defn}\label{def:pform}
\hspace{1in}
\begin{itemize}
\item As in the case of manifolds, a $p$-form $\omega$ on an orbifold ${\mathcal{O}}$ is defined to be a section of the $p$-th exterior power of the cotangent bundle $T^*(\mathcal O)$. The cotangent bundle and its exterior powers are examples of orbibundles; see, for example, \cite{C19} or \cite{G12} for an expository introduction to orbibundles or the references therein for more in-depth explanations. We will only use the following: If $U\subset {\mathcal{O}}$ is an open set on which there exists a coordinate chart $(\widetilde U, G_U, \pi_U)$, then any $p$-form $\omega$ on $U$ corresponds to a $G_U$-invariant $p$-form $\widetilde{\omega}$ on $\widetilde U$, which we call the lift of $\omega$ to ${\widetilde{U}}$. The differential form $\omega$ is said to be of class $C^k$ on $U$ if $\widetilde{\omega}$ is of class $C^k$. The compatibility condition on overlapping charts in Definition~\ref{defn:ofld} part~\ref{it:ro} leads to a natural compatibility condition on the lifts of $p$-forms. One can use partitions of unity to construct globally defined $p$-forms from $p$-forms defined on an open covering of ${\mathcal{O}}$.
\item For a Riemannian manifold $M$, recall that the Hodge Laplacian $\Delta^p$ acting on the space of smooth $p$-forms $\Omega^p(M)$ is given by
\begin{align}
\label{defhodge}
\Delta^p:= d \delta + \delta d,
\end{align}
where $d$ is the exterior derivative and $\delta$ is the codifferential operator, i.e.\ the formal adjoint of $d$ obtained via the Riemannian metric. It is straightforward to check that the Hodge Laplacian commutes with isometries. The notion of Hodge Laplacian extends to Riemannian orbifolds ${\mathcal{O}}$ as follows: Let $\omega$ be a smooth $p$-form on ${\mathcal{O}}$. On each coordinate chart $\cc$, the $G_U$-invariance of the lift $\widetilde{\omega}$ and the fact that the Hodge Laplacian on Riemannian manifolds commutes with isometries together imply that $\Delta^p(\widetilde{\omega})$ is $G_U$-invariant. We define $\Delta^p(\omega)$ on $U$ by
$$\widetilde{\Delta^p(\omega)}=\Delta^p(\widetilde{\omega}).$$
We can again use the fact that isometries between Riemannian manifolds intertwine the Hodge Laplacians along with the compatibility condition on orbifold charts to conclude that $\Delta^p(\omega)$ is well-defined on ${\mathcal{O}}$.
\end{itemize}
\end{defn}
\begin{remark}\label{hodgestar-adjoint} The results in the present work apply to both orientable and nonorientable orbifolds and manifolds. For clarity we note the following:
\begin{itemize}
\item In what follows the notation $d V_{M}$ indicates the Riemannian volume element in the orientable case, and the Riemannian density in the nonorientable case.
\item The codifferential operator on a Riemannian manifold is a differential operator and can be expressed in terms of the Hodge-$*$ operator in any orientable neighborhood by the expression
\begin{align}
\label{codifforiented}
\delta = (-1)^{n(p+1)+1}*d*,
\end{align}
which is independent of the choice of orientation.
\end{itemize}
\end{remark}
It is known, see e.g.\ \cite[Theorem 4.8.1]{B99}, that the operator $\Delta^p$ viewed as an unbounded operator acting in $ L_p^2(\mathcal O)$, the space of differential $p$-forms whose components are square-integrable functions, is essentially self-adjoint and has a purely discrete spectrum. By abuse of notation, in what follows we denote its closure by $\Delta^p$, which is then self-adjoint with compact resolvent. The spectral theorem thus applies to
$\Delta^p$ and we denote its eigenvalues by $0\leq\lambda_1 \leq \lambda_2\leq\dots \to +\infty$, with associated smooth eigenforms, denoted by $(\varphi_i)_i$, which form an orthonormal $ L_p^2$-basis.
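As a simple illustration of these conventions (included only for orientation), take ${\mathcal{O}}=S^1=\mathbf R/2\pi\mathbf Z$ with the standard metric and $p=1$. Here $n=1$, so $\delta=-*d*$, and for $\omega=f(x)\,dx$ one computes
$$\delta(f\,dx)=-f',\qquad d(f\,dx)=0,\qquad (d\delta+\delta d)(f\,dx)=-f''\,dx.$$
Thus the forms $\cos(kx)\,dx$ and $\sin(kx)\,dx$ are eigenforms of the nonnegative operator $d\delta+\delta d$ with eigenvalue $k^2$, and they furnish an orthogonal $L_1^2$-basis, consistent with the discreteness of the spectrum noted above.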
We conclude this section by reviewing the notion of multi $p$-forms.
\begin{nota}\label{nota:doubleform}~
\begin{enumerate}
\item Recall that if $\pi: B\to M$ and $\pi':B'\to M'$ are vector bundles over manifolds $M$ and $M'$, then the external tensor product $B\boxtimes B'\to M\times M'$ is the vector bundle whose fiber over the point $(m,m')$ is $\pi^{-1}(m)\otimes (\pi')^{-1}(m')$. Given a $C^\infty$ manifold $M$, let $M^k$ denote the $k$-fold Cartesian product $M\times \dots\times M$. Sections of the $k$-fold external tensor product $\boxtimes^k(\wedge^p T^*(M))\to M^k$ are called multi $p$-forms on $M$ of order $k$. In particular, a multi $p$-form of order $1$ is simply a $p$-form. Those of order $2$ or $3$ are called double or triple $p$-forms, respectively.
Thus, for example, if $(\mathcal U,x)$ and $(\mathcal V,y)$ are local coordinate charts on $M$ and $F$ is a double $p$-form on $M$, then $F_{|\mathcal U\times \mathcal V}$ can be expressed as $\sum_{I, J} a_{I,J}\, dx^I \otimes dy^J$, where $a_{I,J} \in C^\infty(\mathcal U \times \mathcal V)$. (Here $I$ and $J$ vary over $p$-tuples $1\leq i_1<\dots <i_p\leq d=\dim(M)$.)
\item \label{it:gammasigma} Given smooth manifolds $M$ and $N$ and smooth maps $\gamma: M\to M$ and $\sigma :N\to N$, denote by $\gamma_{1,\cdot}$ and $\sigma_{\cdot,2}$ the maps $M\times N\to M\times N$ given by $(a,b)\mapsto (\gamma(a), b)$ and $(a,b)\mapsto (a, \sigma(b))$. If $M=N$, we also define $\gamma_{1,2}: M^2\to M^2$ by $(a,b)\mapsto (\gamma(a),\gamma(b))$. We use analogous notation for maps of higher order Cartesian products. Thus, for example, the map $\gamma_{1,\cdot,3}: M^3\to M^3$ is given by $(m_1,m_2,m_3)\mapsto (\gamma(m_1), m_2,\gamma(m_3)).$
\item \label{it:multipformofld} Analogous to the case of $p$-forms in Definition~\ref{def:pform}, we can use the notion of orbibundle to extend the definition of multiple $p$-forms directly to the orbifold setting, but we will not need the orbibundle formalism in what follows. If $U$ and $V$ are open sets in an orbifold ${\mathcal{O}}$ on which there exist orbifold coordinate charts $\cc$ and $(\widetilde{V},G_V, \pi_V)$, then the restriction to $U\times V$ of a double $p$-form $F$ on ${\mathcal{O}}$ lifts to a section $\tilde{F}$ of $\wedge^p T^*({\widetilde{U}})\boxtimes \wedge^p T^*(\widetilde{V})$ that is invariant under pullback by each of the maps $\gamma_{1,\cdot}$ and $\sigma_{\cdot, 2}$ for $\gamma\in G_U$ and $\sigma\in G_V$. Again, the compatibility condition on overlapping charts in Definition~\ref{defn:ofld} part~\ref{it:ro} of an orbifold leads to a natural compatibility condition on the lifts of double $p$-forms. One can use partitions of unity to construct double $p$-forms on ${\mathcal{O}}$ from ones on an open covering of ${\mathcal{O}}\times{\mathcal{O}}$. Multiple $p$-forms of any order are defined similarly.
\end{enumerate}
\end{nota}
Observe that if $F$ and $H$ are multi $p$-forms, say of orders $k$ and $\ell$, respectively, then $F\otimes H$ is a multi $p$-form of order $k+\ell$.
\begin{definition}\label{defcontraction}
Recall that if $V_1,\dots,V_k$ are inner product spaces and if $1\leq i <j \leq k$ with $V_i=V_j$, then the contraction
$C_{i,j}: V_1\otimes\dots\otimes V_k\to V_1\otimes\dots\otimes\widehat{V_i}\otimes\dots\otimes\widehat{V_j}\otimes\dots\otimes V_k$ is the linear map whose value on decomposable vectors is given by
$$\alpha_1\otimes \dots\otimes \alpha_k \mapsto \langle \alpha_i, \alpha_j\rangle\, \alpha_1\otimes\dots\otimes \widehat{\alpha_i}\otimes\dots\otimes \widehat{\alpha_j}\otimes \dots \otimes \alpha_k,$$ where $\langle\,,\,\rangle$ is the inner product on $V_i$.
In particular, let $(M,g)$ be a Riemannian manifold, $F$ a multi $p$-form of order $k$ on $M$, and suppose $m=(m_1,\dots, m_k)\in M^k$ satisfies $m_i=m_j$. Then we can use the inner product on $\wedge^pT^*_{m_i}(M)$ to define the contraction $C_{i,j}(F(m)).$
To define the contraction of such a multi $p$-form in the setting of an orbifold ${\mathcal{O}}$ at a point $m=(m_1,\dots, m_k)\in {\mathcal{O}}^k$ with $m_i=m_j$, let $\widetilde{F}$ be a local lift on a neighborhood $U_1\times \dots \times U_k$ as in Notation~\ref{nota:doubleform} part~\ref{it:multipformofld}. Choose the lift $\widetilde{m}$ of $m$ so that $\widetilde{m}_i=\widetilde{m}_j\in \widetilde{U}_i=\widetilde{U}_j$. We can then define $C_{ij}(F(m))$ by the condition that it lifts to $C_{ij}(\widetilde{F}(\widetilde{m}))$. This definition is independent of the choice of $\widetilde{m}$ (subject only to the condition $\widetilde{m}_i=\widetilde{m}_j$), and it is independent of the choice of chart since the compatibility condition on charts respects the Riemannian structure.
\end{definition}
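As a concrete, purely illustrative instance of Definition~\ref{defcontraction}, take $k=2$, $p=1$ and $M=\mathbf R^2$ with the Euclidean metric, so that $\{dx,dy\}$ is an orthonormal basis of each cotangent space. At a diagonal point $(m,m)$,
$$C_{1,2}\big((dx+2\,dy)\otimes(3\,dx+dy)\big)=\langle dx+2\,dy,\,3\,dx+dy\rangle=3+2=5.$$
More generally, for any $p$-form $\varphi$ one has $C_{1,2}(\varphi\otimes\varphi)(m,m)=|\varphi(m)|^2$, which is the form in which contractions enter the heat trace below.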
\section{Heat trace for differential forms on orbifolds}
\label{sec:heattrace}
In Subsection~\ref{subsec:kernel} we will construct the fundamental solution of the heat equation for the Hodge Laplacian on arbitrary closed Riemannian orbifolds by first constructing a parametrix. Our construction follows and generalizes the construction in the manifold case by Gaffney \cite{G58} and Patodi \cite{P71}. (The assumption of orientability in the work of Gaffney and Patodi is not needed.) In Subsection~\ref{subsec:Don}, we translate a result of Donnelly into our context and employ it
to develop the small-time asymptotic expansion of the heat trace. Our arguments are similar to those used in \cite{DGGW08} to construct the heat kernel and heat trace asymptotics for the Laplacian acting on functions on a closed Riemannian orbifold.
\subsection{Heat kernel for $p$-forms on manifolds and orbifolds}\label{subsec:kernel}
\begin{definition}\label{fundsol}
We say that $K^p: (0,\infty) \times {\mathcal{O}} \times {\mathcal{O}} \to \Lambda^p( T^* {\mathcal{O}}) \otimes \Lambda^p (T^* {\mathcal{O}})$ is a \emph{fundamental solution} or \emph{heat kernel} of the heat equation for $p$-forms if it satisfies
\begin{enumerate}
\item $K^p(t,\cdot,\cdot)$ is a double $p$-form for any $t\in (0,\infty)$;
\item $K^p$ is continuous in the three variables, $C^1$ in the first variable and $C^2$ in the second variable;
\item $(\partial _t + \Delta^p_x ) K^p(t,x,y)= 0$ for each fixed $y\in {\mathcal{O}}$, where $\Delta^p_x$ is the Hodge Laplacian with respect to the variable $x$;
\item\label{initcond} For every continuous $p$-form $\omega$ on ${\mathcal{O}}$ and for all $x\in {\mathcal{O}}$, we have
$$\lim_{t\to 0^+}\,\int_{\mathcal{O}}\,C_{2,3} K^p(t,x,y)\otimes \omega(y) \, dV_{\mathcal{O}}(y) = \omega(x).$$
\end{enumerate}
(Because the variable $t$ in the preceding definition is a real parameter, in contrast to a space variable, it is not counted when specifying indices in expressions using Notation~\ref{nota:doubleform} part~\ref{it:gammasigma}, or in expressions involving contractions using the notation from Definition~\ref{defcontraction}. Also, the variables over which a contraction is occurring will be repeated. For example, in part~\ref{initcond} above, $C_{2,3} K^p(t,x,y)\otimes \omega(y)$ indicates contraction in the second and third entries, not counting $t$.)
\end{definition}
The same argument as in the manifold case shows that if the heat kernel exists, then it is unique and given by
\begin{equation}\label{fundsoldvp}
K^p(t,x,y)= \sum_{j= 1}^{\infty} (\varphi_j\otimes\varphi_j)(x,y)\, e^{-\lambda_j t}
\end{equation}
where $\{\varphi_j\}_{j=1}^\infty$ is an orthonormal basis of $L_p^2({\mathcal{O}})$ consisting of eigenforms and the $\lambda_j$ are the corresponding eigenvalues as above. In particular, the heat kernel is invariant under any isometry $\gamma$, i.e.,
\begin{equation}(\gamma^*_{1,2} K^p)(t,x,y)=K^p(t,x,y)\end{equation}
in the notation of Notation~\ref{nota:doubleform} part~\ref{it:gammasigma}.
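For orientation, we record the following standard consequence of Equation~\eqref{fundsoldvp}: since $C_{1,2}(\varphi_j\otimes\varphi_j)(x,x)=|\varphi_j(x)|^2$ and the $\varphi_j$ are orthonormal, integrating along the diagonal (assuming, as in the manifold case, that the series converges suitably) gives
$$\int_{\mathcal{O}} C_{1,2}K^p(t,x,x)\,dV_{\mathcal{O}}(x)=\sum_{j=1}^\infty e^{-\lambda_j t}\int_{\mathcal{O}}|\varphi_j(x)|^2\,dV_{\mathcal{O}}(x)=\sum_{j=1}^\infty e^{-\lambda_j t}.$$
This is the heat trace $\tr K^p(t)$ whose small-time asymptotics are developed in the remainder of this section.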
\begin{defn}\label{def.param}
A \emph{parametrix} for the heat operator on $p$-forms on ${\mathcal{O}}$ is a function
$H:(0,\infty)\times{\mathcal{O}}\times{\mathcal{O}}\to \Lambda^p(T^* {\mathcal{O}}) \otimes \Lambda^p(T^*{\mathcal{O}})$ satisfying:
\begin{enumerate}
\item $H(t,\cdot,\cdot)$ is a double $p$-form for each $t\in (0,\infty)$;
\item $H$ is $C^\infty$ on $(0,\infty)\times{\mathcal{O}}\times{\mathcal{O}}$;
\item $(\partial_t + \Delta^p_x) H(t,x,y)$ extends continuously to $[0,\infty)\times{\mathcal{O}}\times{\mathcal{O}}$;
\item For every continuous $p$-form $\omega$ on ${\mathcal{O}}$ and for all $x\in {\mathcal{O}}$, we have $$\lim_{t\to 0^+}\,\int_{\mathcal{O}}\,C_{2,3} H(t,x,y)\otimes\omega(y)\,dV_{\mathcal{O}}(y) = \omega(x).$$
\end{enumerate}
\end{defn}
We first recall the construction of a parametrix due to Patodi \cite{P71}; see also Gaffney \cite{G58}.
\begin{prop}[{\cite{P71}}]
\label{localparam}
There exist smooth double $p$-forms $u^p_i$ for $i=0,1,2,\dots$ defined on a neighborhood of the diagonal of $M\times M$ in any Riemannian manifold $M$ of dimension $d$ (more precisely, $u^p_i(x,y)$ is well-defined whenever $y$ is in a normal neighborhood about $x$) satisfying the following.
\begin{enumerate}
\item \label{uzero}
If $\{\omega_1,\dots, \omega_r\}$, where $r=\binom{d}{p}$, is an orthonormal basis of $\Lambda^p\,T^*_x(M)$, then
$$u_0^p(x,x)=\sum_{j=1}^r\, \omega_j\otimes \omega_j.$$
(Note that this expression is independent of the choice of orthonormal basis). Equivalently under the identification of $\Lambda^p\,T^*_x(M)\otimes \Lambda^p\,T^*_x(M)$ with $\operatorname{End}(\Lambda^p\,T^*_x(M)) $,
$u^p_0(x,x)$ is the identity endomorphism of $\Lambda^p( T_x^*M)$.
\item \label{um} For each $m=1,2,\dots$, we have
$$(\partial_t + \Delta^p_x) H^{(m)}(t,x,y)= (4\pi)^{-d/2}e^{-d^2(x,y)/4t}t^{m-\frac{d}{2}}\Delta^p_x u^p_m(x,y),$$
where
$$H^{(m)}(t,x,y):= {(4\pi t)^{-d/2}e^{-d^2(x,y)/4t}}{(u^p_0(x,y)+\dots +t^mu^p_m(x,y))},$$
and $d^2(x,y)$ is the square of the Riemannian distance between $x$ and $y$.
In particular, if $\psi:M\times M\to \mathbf R$ is supported on a sufficiently small neighborhood of the diagonal and is identically one on a smaller neighborhood of the diagonal, then $\psi H^{(m)}$ is a parametrix for the heat operator on $p$-forms when $m>\frac{d}{2}$.
\item \label{traceinv} The $u_i^p$ are the unique double $p$-forms satisfying conditions~\ref{uzero} and \ref{um}. If $\gamma$ is an isometry of an open set in the domain of $u^p_i $, then $\gamma^*_{1,2}u^p_i=u^p_i$.
\end{enumerate}
\end{prop}
\begin{notarem}\label{notarem:hm} Fix ${\epsilon}>0$ so that each $x\in{\mathcal{O}}$ is the center of a ``convex geodesic ball'' $W$ of radius $\epsilon$. By this we mean that $W$ admits a coordinate chart $(\widetilde{W},G_W,\pi_W)$, where $\widetilde{W}$ is a convex geodesic ball of radius $\epsilon$ centered at the necessarily unique point ${\widetilde{x}}$ satisfying $\pi_W({\widetilde{x}})=x$. (In particular, ${\widetilde{x}}$ has isotropy $G_W$.) Cover ${\mathcal{O}}$ by finitely many such geodesic balls ${W_\a}$ with charts
$({\widetilde{W}_\a}, G_{\alpha},\pi_{\alpha})$, $\alpha=1,\dots, s$. (Here we write $G_{\alpha}$ for
$G_{{W_\a}}$ and $\pi_{\alpha}$ for $\pi_{W_\a}$.) Let ${\widetilde{U}_\a}\subset {\widetilde{W}_\a}$,
respectively ${\widetilde{V}_\a}\subset{\widetilde{W}_\a}$, be the concentric geodesic balls of radius $\frac{{\epsilon}}{4}$, respectively
$\frac{{\epsilon}}{2}$, and let ${U_\a}=\pi_{\alpha}({\widetilde{U}_\a})$ and ${V_\a}=\pi_{\alpha}({\widetilde{V}_\a})$. We may assume that the family of balls
$\{{U_\a}\}_{1\le{\alpha}\le s}$ still covers ${\mathcal{O}}$.
For each ${\alpha}$ and each nonnegative integer $m$, we define ${\widetilde{H}^{(m)}_\a}$ on
$(0,\infty)\times{\widetilde{W}_\a}\times{\widetilde{W}_\a}$ by
\begin{align}
\label{nota.hm}
{\widetilde{H}^{(m)}_\a}(t,{\widetilde{x}},{\widetilde{y}}):={(4\pi t)^{-d/2}e^{-d(\tx,\ty)^2/4t}}{(u^p_0(\tx,\ty)+\dots +t^mu^p_m(\tx,\ty))},
\end{align}
where the $u^p_i$ are the double $p$-forms defined in Proposition~\ref{localparam}.
Since each ${\gamma}\in{G_\a}$ is an isometry of ${\widetilde{W}_\a}$, we have
$\gamma^*_{1,2}u^p_i=u^p_i$.
It follows that the double $p$-form (depending on the parameter $t$)
$$\sum_{\gamma\in{G_\a}}\gamma^*_{\cdot,2}{\widetilde{H}^{(m)}_\a}$$
is ${G_\a}$-invariant in both ${\widetilde{x}}$ and ${\widetilde{y}}$ and
thus descends to a well-defined function on
$(0,\infty)\times{W_\a}\times{W_\a}$, which we denote by ${H^{(m)}_\a}$.
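For completeness, here is the verification of the two-sided invariance, which is implicit above. Let $\sigma\in{G_\a}$. Since pullback reverses composition and $\gamma_{\cdot,2}\circ\sigma_{\cdot,2}=(\gamma\sigma)_{\cdot,2}$, invariance in ${\widetilde{y}}$ follows by reindexing the sum. For invariance in ${\widetilde{x}}$, the relation $\sigma^*_{1,2}{\widetilde{H}^{(m)}_\a}={\widetilde{H}^{(m)}_\a}$ yields $\sigma^*_{1,\cdot}{\widetilde{H}^{(m)}_\a}=(\sigma^{-1})^*_{\cdot,2}{\widetilde{H}^{(m)}_\a}$, and hence
$$\sigma^*_{1,\cdot}\sum_{\gamma\in{G_\a}}\gamma^*_{\cdot,2}{\widetilde{H}^{(m)}_\a}=\sum_{\gamma\in{G_\a}}\big((\sigma^{-1}\gamma)_{\cdot,2}\big)^*{\widetilde{H}^{(m)}_\a}=\sum_{\gamma\in{G_\a}}\gamma^*_{\cdot,2}{\widetilde{H}^{(m)}_\a}.$$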
Let ${\psi_\a}:{\mathcal{O}}\to\mathbf R$ be a $C^\infty$ cut-off function which is identically one
on ${V_\a}$
and is supported in ${W_\a}$. Let $\{{\eta_\a} :{\alpha}=1,\dots,s\}$ be a partition of
unity on
${\mathcal{O}}$ with the support of ${\eta_\a}$ contained in $ \overline{{U_\a}}$. Define ${L^{(m)}}$ on $(0,\infty)\times{\mathcal{O}}\times{\mathcal{O}}$
by
\begin{equation}\label{eq.hm}
{L^{(m)}}(t,x,y):=\sum_{{\alpha}
=1}^s\,{\psi_\a}(x){\eta_\a}(y){H^{(m)}_\a}(t,x,y).
\end{equation}
\end{notarem}
\begin{prop}\label{prop.param}
${L^{(m)}}$ is a parametrix for the heat kernel on ${\mathcal{O}}$ when
$m>\frac{d}{2}$.
Moreover, the extension of ${\left(\partial_t+\Delta^p_x\right)} {L^{(m)}}(t,x,y)$ to $[0,\infty)\times{\mathcal{O}}\times{\mathcal{O}}$ is of class $C^k$ if $m>\frac{d}{2}+k$ for any $k\in \mathbb N$.
\end{prop}
\begin{proof}
The proof is similar to that carried out in \cite{DGGW08} (see also references therein) for the case of the Laplacian on functions. The first three properties of a parametrix and the final statement of the proposition are straightforward. We include here the proof of the last property of a parametrix (the reproducing property).
Let $\omega$ be a continuous $p$-form on ${\mathcal{O}}$.
Let ${\widetilde{\psi}_\a}$ and ${\widetilde{\eta}_\a}$ be the lifts of ${\psi_\a}$ and ${\eta_\a}$ to
${\widetilde{W}_\a}$.
We consider the lifts so as to make use of Proposition~\ref{localparam}
part~\ref{um}, and we also use the fact that the properties of a parametrix hold locally (see \cite[Remark 3.7]{DGGW08}).
Since ${\rm supp}({\eta_\a})\subset\overline{{U_\a}}\subset{W_\a}$, we have
\begin{equation}
\label{eq.ham}\begin{aligned}
&\pi_{\alpha}^*\int_{\mathcal{O}}{\psi_\a}(x){\eta_\a}(y)C_{2,3}{H^{(m)}_\a}(t,x,y)\otimes \omega(y)\,dy\\&
=\frac{{\psi_\a}(x)}{|G_{\alpha}|}\sum_{{\gamma}\in{G_\a}}\int_{\widetilde{W}_\a}\,{\widetilde{\eta}_\a}({\widetilde{y}})C_{2,3}{\gamma}_{.,2}^*{\widetilde{H}^{(m)}_\a}(t,{\widetilde{x}},{\widetilde{y}})\otimes
\widetilde{\omega}_\alpha({\widetilde{y}}) \, d{\widetilde{y}},
\end{aligned}
\end{equation}
where $\widetilde{\omega}_\alpha$
is the pull-back of $\omega_{|{\widetilde{W}_\a}}$ to
${\widetilde{W}_\a}$ and ${\widetilde{x}}$ is an arbitrarily chosen point in the preimage of $x$
under the map ${\widetilde{W}_\a}\to{W_\a}$. We change variables in each of the integrals in the right-hand side of
Equation~\eqref{eq.ham}, letting ${\widetilde{u}}={\gamma}^{-1}({\widetilde{y}})$. Since
${\gamma}$ is an isometry and since ${\widetilde{\eta}_\a}$ and $\widetilde{\omega}_\alpha$
are ${\gamma}$-invariant, each integral in the summand is equal to
\begin{equation}\label{eq.tham}\int_{\widetilde{W}_\a}\,{\widetilde{\eta}_\a}({\widetilde{u}})C_{2,3} {\widetilde{H}^{(m)}_\a}(t,{\widetilde{x}},{\widetilde{u}})\otimes
\widetilde{\omega}_\alpha({\widetilde{u}}) \,d{\widetilde{u}}.\end{equation}
As ${t\to 0^+}$, the
integral \eqref{eq.tham} above converges to
${\widetilde{\eta}_\a}({\widetilde{x}})\widetilde{\omega}_\alpha({\widetilde{x}})={\eta_\a}(x)\pi_{\alpha}^*(\omega(x))$. Noting
that ${\psi_\a}\equiv 1$ on the support of ${\eta_\a}$, it follows that both sides of
Equation~\eqref{eq.ham} converge to ${\eta_\a}(x)\pi_{\alpha}^*(\omega(x))$ as ${t\to 0^+}$. Since both sides of
Equation~\eqref{eq.ham} are identically zero when $x$ lies outside of
${\rm supp}({\psi_\a})\subset {W_\a}$, the left-hand side converges to
${\eta_\a}(x)\pi_{\alpha}^*(\omega(x))$ for each fixed $x\in {\mathcal{O}}$.
Thus
\[
\lim_{t\to 0^+}\int_{\mathcal{O}}{\psi_\a}(x){\eta_\a}(y)C_{2,3} {H^{(m)}_\a}(t,x,y)\otimes \omega(y)\,dy ={\eta_\a}(x)\omega(x),
\]
and by Equation~\eqref{eq.hm} we have
\[
\lim_{t\to 0^+}\int_{\mathcal{O}}\,C_{2,3}{L^{(m)}}(t,x,y)\otimes\omega(y)\,dy=\sum_{\alpha}\,{\eta_\a}(x)\omega(x)=\omega(x). \qedhere
\]
\end{proof}
The construction of the heat kernel from the parametrix ${L^{(m)}}$ then follows exactly as in \cite{G58}.
\begin{nota}\label{conv} ~
\begin{enumerate}
\item For $A$ and $B$ continuous double $p$-form valued functions on $[0,\infty)\times{\mathcal{O}}\times{\mathcal{O}}$, we define the convolution $A*B$ on $(0,\infty)\times{\mathcal{O}}\times{\mathcal{O}}$ as
$$A*B(t,x,y):=\int_0^t\,d\theta\int_{\mathcal{O}}\,C_{2,4} A(\theta,x,z)\otimes B(t-\theta,y,z) dV_{\mathcal O}(z).$$
\item Fix
$m>\frac{d}{2} +2$. Define $\kappa_0(t,x,y)
:={\left(\frac{\partial}{\partial t}+\Delta^p_x\right)}{L^{(m)}}(t,x,y)$ and, for $j=1,2,\dots$, set $\kappa_j(t,x,y):=\kappa_0*\kappa_{j-1}$.
\end{enumerate}
\end{nota}
An argument analogous to that in the manifold case \cite{G58} shows that the series
\begin{equation}\label{lem.qm} P_m(t,x,y):=\sum_{j=1}^\infty\,(-1)^{j+1}\kappa_j(t,x,y)\end{equation}
converges uniformly and absolutely on $[0,T]\times{\mathcal{O}}\times{\mathcal{O}}$ for each $T>0$. Thus $P_m$ is continuous. Moreover, $P_m$ is of class $C^2$ on ${\R_+}\times{\mathcal{O}}\times{\mathcal{O}}$ and for any $T>0$, there exists a constant $C$ such that
\begin{equation} |P_m(t,x,y)|\leq Ct^2\end{equation}
on $[0,T]\times{\mathcal{O}}\times{\mathcal{O}}$.
(Our indexing is different from that in \cite{G58}. Our $\kappa_j$ is Gaffney's $\kappa_{j+1}$.)
We are now able to state the main theorem of this section, which follows in the same way as that in \cite{G58}.
\begin{thm}
\label{lem.conv} Let $m>\frac{d}{2}+2$. Define $P_m$ as in Equation~\eqref{lem.qm} and let
$$K^p(t,x,y) :={L^{(m)}}(t,x,y)
+(P_m*{L^{(m)}})(t,x,y).$$
Then $K^p$ is a heat kernel.
Moreover,
$$K^p(t,x,y)={L^{(m)}}(t,x,y) +O(t^{m-\frac{d}{2}+1}), \text{ as } t\to 0^+.$$
\end{thm}
\begin{nota}\label{nota.ha} Let
$${\widetilde{H}_\a}(t,{\widetilde{x}},{\widetilde{y}})=\sum_{{\gamma}\in{G_\a}}{(4\pi t)^{-d/2}e^{-d(\tx,\g(\ty))^2/4t}}{(\g_{.,2}^*u^p_0(\tx,\ty)+ t\g_{.,2}^*u^p_1(\tx,\ty)+\dots)}.$$
Observe that ${\widetilde{H}_\a}$ is ${G_\a}$-invariant in both ${\widetilde{x}}$ and ${\widetilde{y}}$ and thus is the pull-back
of a double $p$-form valued function, which we denote by ${H_\a}$,
on $(0,\infty)\times{W_\a}\times{W_\a}$.
\end{nota}
As a consequence of Theorem~\ref{lem.conv}, we have:
\begin{thm}
\label{thm.heattrace} In the notation of Equation~\eqref{eq.hm}
and Notation~\ref{nota.ha}, the trace of the heat kernel has an asymptotic expansion as
${t\to 0^+}$ given by
\[
\tr K^p(t):=\int_{\mathcal{O}}\,C_{1,2}K^p(t,x,x)\,dV_{\mathcal O}(x)
\sim\,\sum_{{\alpha}=1}^s\,\int_{\mathcal{O}}\,{\eta_\a}(x)C_{1,2}{H_\a}(t,x,x)\,dV_{\mathcal O}(x).
\]
\end{thm}
\subsection{Heat invariants for $p$-forms on orbifolds}\label{subsec:Don}
We will use Theorem~\ref{thm.heattrace} together with a theorem of Donnelly \cite[Theorem 4.1]{D76} (see also \cite[Theorem 3.1]{DP77}) to compute the small-time asymptotics of the heat trace for the Laplacian on $p$-forms on closed Riemannian orbifolds.
Our presentation is similar to that in \cite{DGGW08} where the case $p=0$ is carried out.
\begin{notarem}\label{nota:don} ~
\begin{enumerate}
\item \label{it:dontriples} Let $\mathcal{T}$ be the set of all triples $(M,\gamma, a)$ where $M$ is a Riemannian manifold, $\gamma$ is an isometry of $M$ and $a$ is in the fixed point set $Fix(\gamma)$ of $\gamma$. A function $h:\mathcal{T}\to \mathbf R$ is said to satisfy the \emph{locality property} if $h(M,\gamma, a)$ depends only on the germs at $a$ of the Riemannian metric and of $\gamma$. The function $h$ is said to be \emph{universal} if whenever
$\sigma:M_1\to M_2$ is an isometry between two Riemannian manifolds and $\gamma_i$ is an isometry of $M_i$ with $\gamma_2=\sigma\circ\gamma_1\circ\sigma^{-1}$, then $h(M_1,\gamma_1,a)=h(M_2,\gamma_2,\sigma(a))$ for all $a\in Fix(\gamma)$.
\item We will also consider functions on the set $\mathcal{T}'$ of all $(M, \eta, \gamma, a)$ where $(M,\gamma, a)$ is given as in part~\ref{it:dontriples} and $\eta$ is a $\gamma$-invariant function on $M$. The locality and universality properties are analogously defined. (In the definition of locality, $h(M,\eta, \gamma, a)$ may also depend on the germ of $\eta$ at $a$. In the definition of universality, we have $h(M_1,\eta_1,\gamma_1,a)=h(M_2,\eta_2,\gamma_2,\sigma(a))$ if $\eta_2\circ\sigma=\eta_1$ and the other conditions in part~\ref{it:dontriples} hold.)
\item Let $M$ be a Riemannian manifold of dimension $d$ and let $\gamma: M\to M$ be an isometry. Then each component of the fixed point set $Fix(\gamma)$ is a totally geodesic, closed submanifold of $M$. (See \cite{K58}.) If $M$ is compact, then $Fix(\gamma)$ has only finitely many components. In any Riemannian manifold, at most one component of $Fix(\gamma)$ can intersect any given geodesically convex ball. For $a\in Fix(\gamma)$, note that the differential $d\gamma_a: T_a M \to T_a M$ fixes $T_aQ$ and restricts to an isomorphism of $T_a(Q)^\perp$, where $Q$ is the connected component of $Fix(\gamma)$ containing $a$.
In particular, we have $\det({\operatorname{{Id}}}_{\operatorname{codim}(Q)}-A_a)\neq 0$, where $A_a$ is the restriction of the action of $d\gamma_a$ to $(T_aQ)^\perp$ and $\operatorname{codim}(Q)$ denotes the codimension of $Q$.
\item \label{it:don-trace} For $\alpha$ in the orthogonal group $O(d,\mathbf R)$, we denote by $\tr_p(\alpha)$ the trace of the natural action of $\alpha$ on $\wedge^p(\mathbf R^d)$. For $M$ and $\gamma$ as in part~\ref{it:dontriples}, the Riemannian inner product on $T_a M$ allows us to identify $d\gamma_a$ with an element of $O(d,\mathbf R)$ and $\tr_p(d\gamma_a)$ coincides with the trace of $\gamma_a^*: \wedge^p(T_a^*M)\to \wedge^p(T_a^*M)$.
\item \label{it:don-CFix} Let $M$ be a compact Riemannian manifold and $\gamma: M \to M$ be an isometry. We denote the set of components of the fixed point set of $\gamma$ by $\mathcal{C}(Fix(\gamma))$.
\end{enumerate}
\end{notarem}
\begin{thm}\label{bkthm}\label{bklocal}
\hspace{1in}$\text{ }$
\newline\begin{enumerate}
\item \label{it:bkthm-global} \cite[Theorem 4.1]{D76}; \cite[Theorem 3.1]{DP77}. In the notation of Notation~\ref{nota:don}, there exist functions $b_k$, $k=0, 1, 2, \dots$, on $\mathcal{T}$ satisfying both the locality and universality properties such that if $M$ is a compact Riemannian manifold and $\gamma: M \to M$ is an isometry, then, as $t\rightarrow 0^+$,
\[
\int_{M} C_{1,2} \gamma^*_{\cdot, 2} K^p(t,x,x) \, dV_{M}(x) \sim \sum_{Q\in \mathcal{C}(Fix(\gamma))} (4\pi t)^{-\dim(Q)/2} \sum_{k=0}^{\infty} t^k \int_{Q} b^p_k(\gamma,a) \, dV_Q (a).
\]
(Here we are writing $b_k(\gamma,a)$ for $b_k(M,\gamma, a)$.) Moreover,
\[
b^p_0(\gamma,a)=\frac{\tr_p(d\gamma_a)}{\lvert \det({\operatorname{{Id}}}_{\operatorname{codim}(Q)}-A_a)\rvert}.
\]
\item \label{it:bkthmlocal} (Local version.) There exist functions $c_k$, $k=0, 1, 2, \dots$, on $\mathcal{T}'$ satisfying both the locality and universality properties such that the following condition holds: If $M$ is an arbitrary Riemannian manifold, $\gamma$ is an isometry of $M$, and $\eta$ is a $\gamma$-invariant function whose support is compact and geodesically convex, then as $t\to 0^+$, we have
\begin{multline*}
\int_{M}\,(4\pi t) ^{-d/2} e^{-\frac{ d^2(x, \gamma(x))}{4t}}\eta(x)\left ( \sum_{j=0}^\infty t^j C_{1,2} \gamma^*_{\cdot, 2} u^p_j(x,x)\right) \, dV_{M}(x) \\
\sim \sum_{Q\in \mathcal{C}(Fix(\gamma))} (4\pi t)^{-\dim(Q)/2} \sum_{k=0}^{\infty} t^k \int_{Q} c^p_k(\eta,\gamma,a) \, dV_Q (a),
\end{multline*}
where the $u^p_j$, $j=0,1,2,\dots$, are the double $p$-forms defined in Proposition~\ref{localparam}. (We are writing $c_k(\eta,\gamma,a)$ for $c_k(M,\eta, \gamma, a).$ Note that there is at most one component $Q$ of $Fix(\gamma)$ that intersects the support of $\eta$ and thus at most one non-zero term in the sum.)
Moreover, the dependence on $\eta$ is linear. In particular, if $\eta\equiv 1$ near $a$, then $c_k^p(\eta, \gamma,a)=b_k^p(\gamma,a)$.
\end{enumerate}
\end{thm}
\begin{remark} In the first statement in the previous theorem, we have omitted the hypotheses in \cite{DP77} that $M$ is orientable and that $\gamma$ is orientation-preserving as they are not needed for the proof.
The proof of the second statement is a minor adaptation of Donnelly's proof of the first statement. In the appendix, we will give an exposition of the proof, making clear that aspects of the statement apply to much more general integrals.
\end{remark}
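To illustrate the expression for $b_0^p$ (a hypothetical example, not taken from \cite{D76}), let $\gamma=-{\operatorname{{Id}}}$ act on $M=\mathbf R^2$, as in the local version of the theorem. Then $Fix(\gamma)=Q=\{0\}$, $\operatorname{codim}(Q)=2$ and $A_0=-{\operatorname{{Id}}}_2$, so $\lvert\det({\operatorname{{Id}}}_2-A_0)\rvert=4$. Since $d\gamma_0$ acts on $\wedge^p(T_0^*\mathbf R^2)$ by $(-1)^p$, we have $\tr_0(d\gamma_0)=1$, $\tr_1(d\gamma_0)=-2$ and $\tr_2(d\gamma_0)=1$, whence
$$b_0^0(\gamma,0)=\tfrac14,\qquad b_0^1(\gamma,0)=-\tfrac12,\qquad b_0^2(\gamma,0)=\tfrac14.$$
In particular, the coefficients $b_k^p$ need not be positive when $p\geq 1$.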
Recall that for closed Riemannian manifolds, the heat trace has a small-time asymptotic expansion \begin{equation}\label{manifoldasymp}
\tr K^p(t) :=\int_M C_{1,2}K^p(t,m ,m)\,dV_M(m)\sim_{t\to 0^+}
(4\pi t) ^{-d/2} \sum_{i=0}^{\infty} a^p_i(M) t^i,
\end{equation}
where
\[
a^p_i( M):= \int_{ M} C_{1,2} u^p_i(m,m)\, dV(m).
\]
The $a_i^p$ are called the \emph{heat invariants}.
The first two heat invariants for $p$-forms on manifolds are given by
\begin{align}
\label{a10}
a_0^p({M}) &= \binom{d}{p} {\rm vol} ({M}), \\
\label{a11}
a_1^p({M}) &= \left(\frac{1}{6}\binom{d}{p}-\binom{d-2}{p-1}\right) \int_{{M}} \tau (m) \, dV_M(m),
\end{align}
where $\tau$ is the scalar curvature, and we use the convention that $\binom{m}{n}=0$ whenever $n < 0$. (See, for example, \cite{P70} and references therein.)
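For example, if $M$ is a flat torus, then $\tau\equiv 0$, so $a_1^p(M)=0$; in fact, since for $i\geq 1$ the integrands $C_{1,2}\,u_i^p(m,m)$ are universal polynomials in the curvature, all heat invariants with $i\geq 1$ vanish in this case and
$$\tr K^p(t)\sim_{t\to 0^+}(4\pi t)^{-d/2}\binom{d}{p}{\rm vol}(M).$$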
We now address the heat invariants for the Hodge Laplacian on closed Riemannian orbifolds.
\begin{nota}\label{invariants} Let ${\mathcal{O}}$ be a closed Riemannian orbifold of dimension $d$.
\begin{enumerate}
\item \label{it:invariants-Ip0t} For $k=0,1,2,\dots$, define $U_k^p\in C^\infty({\mathcal{O}})$ as follows: For $x\in {\mathcal{O}}$, choose an orbifold chart $(\widetilde{U},G_U, \pi_U)$ about $x$, let ${\widetilde{x}}$ be any lift of $x$ in $\widetilde{U}$ and set $U_k^p(x):= C_{1,2}u_k^p({\widetilde{x}},{\widetilde{x}})$ where $u_k^p$ is the double $p$-form
defined in Proposition~\ref{localparam}. This definition is independent of both the choice of chart and the choice of lift ${\widetilde{x}}$ since the $u_k^p$ are local isometry invariants.
Define
\begin{equation}\label{eq:heatinvM}
a_k^p({\mathcal{O}}):=\int_{\mathcal{O}}\, U_k^p(x) \, dV_\mathcal O(x).
\end{equation}
One easily verifies that the resulting expressions for the $a_k^p({\mathcal{O}})$ are identical to those for manifolds. For example, $a_0^p({{\mathcal{O}}}) = \binom{d}{p} {\rm vol} ({\mathcal{O}})$ and $a_1^p({\mathcal{O}})$ is given by Equation~\eqref{a11} with $M$ replaced by ${\mathcal{O}}$.
Set
$$I_0^p(t):=(4\pi t)^{-d/2}\sum_{k=0}^\infty\,a_k^p({\mathcal{O}}) t^k.$$
\item \label{it:invariants-IpNt} Let $N$ be a stratum of the singular set of ${\mathcal{O}}$. For $\gamma\in \iiso^{\rm max}(N)$, define $b_k^p(\gamma,\cdot): N\to \mathbf R$ as follows: For $a\in N$, let $(\tilde{U}, G_U, \pi_U)$ be an orbifold chart about $a$ in ${\mathcal{O}}$ and let $\tilde{a}$ be a lift of $a$ in $\tilde{U}$. Note that $\tilde{a}$ lies in a stratum $W$ of $\tilde{U}$ and $\gamma$ is naturally identified with an element of $\iiso^{\rm max}(W)$. We define $$b_k^p(\gamma,a):=b_k^p(\gamma,\tilde{a}),$$ where the right-hand side is defined as in Theorem~\ref{bkthm} part~\ref{it:bkthm-global}. The universality of the functions $b_k^p :\mathcal{T}\to \mathbf R$ (as stated in Theorem~\ref{bkthm} part~\ref{it:bkthm-global}) implies that the left-hand side is independent both of the choice of chart and of the choice of lift $\tilde{a}$. Define $b_k^p(N,\cdot): N\to \mathbf R$ by
$$ b_ k^p(N,a):=\sum_{\gamma\in \iiso^{\rm max}(N)}\, b_k^p(\gamma,a),$$
and set
\begin{align}
\label{nota_heatb}
b_ k^p(N):=\sum_{\gamma\in \iiso^{\rm max}(N)}\,\int_N\, b_k^p(\gamma,a)\,dV_N(a),
\end{align}
where $dV_N$ is the volume element on $N$ for the Riemannian metric on $N$ induced by that of ${\mathcal{O}}$.
(This notation differs slightly from that in \cite{DGGW08}, where the case $p=0$ is studied.)
Set
$$I_N^p(t):=(4\pi t)^{-\dim(N)/2}\sum_{k=0}^\infty\,b_k^p(N) t^k.$$
Observe that $I_N^p(t)=0$ if $N$ is not a primary stratum.
\end{enumerate}
\end{nota}
\begin{remark}
Since each stratum $N$ consists of points of a fixed isotropy type, the expression for $b_0^p(\gamma, a)$ in Theorem~\ref{bkthm} part~\ref{it:bkthm-global} is independent of $a\in N$; we denote it by $b_0^p(\gamma)$. In the notation of Equation~\eqref{nota_heatb}, we have
\begin{equation}\label{niceb0} b_0^p(N)={\rm vol}(N)\sum_{\gamma\in\iiso^{\rm max}(N)}\,b_0^p(\gamma).
\end{equation}
Note that in Equation~\eqref{niceb0} the volume of $N$ appears in place of an integral over $N$; this is because the invariant $b_0$ does not depend on the curvature (see the appendix).
\end{remark}
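As an illustrative special case of Equation~\eqref{niceb0}, suppose $N$ is a reflector stratum, i.e.\ $\iiso^{\rm max}(N)=\{\gamma\}$ where $d\gamma_a$ fixes the tangent space of $N$ and acts by $-1$ on its one-dimensional orthogonal complement. Then $\lvert\det(1-A_a)\rvert=2$, and in an adapted orthonormal basis $d\gamma_a=\operatorname{diag}(1,\dots,1,-1)$ acts on $\wedge^p$ with trace $\binom{d-1}{p}-\binom{d-1}{p-1}$, so
$$b_0^p(N)=\frac{{\rm vol}(N)}{2}\left(\binom{d-1}{p}-\binom{d-1}{p-1}\right).$$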
\begin{thm}\label{thm.asympt} Let ${\mathcal{O}}$ be a closed $d$-dimensional Riemannian orbifold, let $p\in \{1,\dots, d\}$ and let $0\leq\lambda_1\leq\lambda_2\leq\dots \to +\infty$ be the spectrum of the Hodge Laplacian acting on smooth $p$-forms on ${\mathcal{O}}$. The heat trace has an asymptotic expansion as $t\to 0^+$ given by
\begin{equation}
\label{eq:traceO}
\tr K^p(t) \sim_{t\to 0^+}\, I_0^p(t)+\sum_{N\in S({\mathcal{O}})} \, \frac{I_N^p(t)}{|\operatorname{Iso}(N)|},
\end{equation}
where $S({\mathcal{O}})$ is the set of all singular ${\mathcal{O}}$-strata and where $|\operatorname{Iso}(N)|$ is the order of the isotropy group of $N$. This asymptotic expansion is of the form
\begin{align}
\label{heatascoeffc}
(4\pi t)^{-d/2}\sum_{j=0}^\infty\, c^p_j(\mathcal O)\,t^{\frac{j}{2}}
\end{align}
with $c^p_j(\mathcal O)\in\mathbf R$.
\end{thm}
Observe that if there are no singular strata, then Equation~\eqref{eq:traceO} agrees with Equation~\eqref{manifoldasymp}.
\begin{proof} By Theorem~\ref{thm.heattrace}, we can write
\begin{equation}\label{prelim}
\tr K^p(t)
\sim_{{t\to 0^+}}\,\sum_{{\alpha}=1}^s\,\int_{\mathcal{O}}\,{\eta_\a}(x)C_{1,2}{H_\a}(t,x,x)\,dx.
\end{equation}
Recalling the notation introduced in Equation~\eqref{nota.hm} and applying it to Equation~\eqref{prelim}, we have
\begin{equation}\label{start}
\tr K^p(t) \sim_{{t\to 0^+}}\sum_{{\alpha}=1}^s\frac{1}{|G_{\alpha}|}\sum_{{\gamma}\in{G_\a}}L(t,\alpha,\gamma)
\end{equation}
where
\begin{equation}\label{start2}
L(t,\alpha,\gamma):= \int_{{\widetilde{U}_\a}}(4\pi t)^{-d/2}e^{-d^2({\widetilde{x}},{\gamma}({\widetilde{x}}))/4t}{\widetilde{\eta}_\a}({\widetilde{x}})\,
C_{1,2}\Big({\gamma}_{.,2}^*u^p_0({\widetilde{x}},{\widetilde{x}})+ t{\gamma}_{.,2}^*u^p_1({\widetilde{x}},{\widetilde{x}})+\dots\Big) dV_{{\widetilde{U}_\a}}({\widetilde{x}}).
\end{equation}
We will group together various terms in this double sum. First consider the identity element $1_{\alpha}$ of each $G_{\alpha}$. By Notation~\ref{invariants} part~\ref{it:invariants-Ip0t}, we have
$$\sum_{{\alpha}=1}^s\frac{1}{|G_{\alpha}|}L(t,{\alpha}, 1_{\alpha})= (4\pi t)^{-d/2}\int_{\mathcal{O}} (U_0^p(x) + tU_1^p(x) +\dots) \,dV_{\mathcal{O}}(x) =I_0^p(t).$$
Next let
\begin{equation}\label{I'} I'(t):=\sum_{{\alpha}=1}^s\frac{1}{|G_{\alpha}|}\sum_{1_{\alpha}\neq{\gamma}\in{G_\a}}L(t,\alpha,\gamma).
\end{equation}
It remains to show that
\begin{equation}\label{toshow}I'(t)=\sum _{N\in S({\mathcal{O}})}\frac{I_N^p(t)}{|\operatorname{Iso}(N)|}.\end{equation}
We will apply Theorem~\ref{bklocal} to each term in $I'(t)$. For each ${\alpha}$ and each nontrivial element $\gamma\in G_{\alpha}$, Theorem~\ref{bklocal} (with ${\widetilde{U}_\a}$ playing the role of $M$ in Theorem~\ref{bklocal} part~\ref{it:bkthmlocal}) expresses $L(t,{\alpha}, \gamma)$ as a sum of integrals over the various components of the fixed point set of $\gamma$ in ${\widetilde{U}_\a}$. Each such component is a union of strata in ${\widetilde{U}_\a}$. Since the union of those strata $W$ in ${\widetilde{U}_\a}$ for which $\gamma\in \iiso^{\rm max}(W)$ is an open subset of ${\operatorname{{Fix}}}(\gamma)$ of full measure, we can instead add up the integrals over such strata. Letting $S({\widetilde{U}_\a})$ denote the singular strata of ${\widetilde{U}_\a}$ and applying Theorem~\ref{bklocal}, we have
\begin{equation}
\label{ip}
I'(t)=\sum_{{\alpha}=1}^s \sum_{W\in S({\widetilde{U}_\a})}\,\sum_{{\gamma}\in\iiso^{\rm max}(W)}\,\frac{1}{|G_{\alpha}|}(4\pi t)^{-\dim(W)/2}\sum_{k=0}^{\infty}\,t^k\int_W\,c_k^p({\widetilde{\eta}_\a}, \gamma, \tilde{a})\, dV_W(\tilde{a}).
\end{equation}
Let $N\in S({\mathcal{O}})$.
For $a\in N$, set
$$c_k^p({\eta_\a},\gamma, a)= c_k^p({\widetilde{\eta}_\a}, \gamma, \tilde{a})$$
where $\tilde{a}$ is any lift of $a$ in ${\widetilde{U}_\a}$. This is well-defined since any two lifts differ by an element of $G_{\alpha}$ and ${\widetilde{\eta}_\a}$ is $G_{\alpha}$-invariant. Moreover, Theorem~\ref{bklocal} part~\ref{it:bkthmlocal} implies that
\begin{equation}\label{sumck}\sum_{{\alpha}=1}^s c_k^p({\eta_\a}, \gamma, a)=b_k^p(\gamma, a).\end{equation}
For each $\alpha$ and each $W\in S({\widetilde{U}_\a})$, there exists $N\in S({\mathcal{O}})$ such that $\pi_{\alpha}$ maps $W$ isometrically onto $N\cap U_{\alpha}$. Let
\begin{equation}\label{san}S({\widetilde{U}_\a};N)=\{W\in S({\widetilde{U}_\a}): \pi_{\alpha}(W)=N\cap U_{\alpha}\} \end{equation}
and observe that
\begin{equation}\label{osan}|S({\widetilde{U}_\a}; N)|= \frac{|G_{\alpha}|}{|{\operatorname{{Iso}}}(N)|}.\end{equation}
For $W\in S({\widetilde{U}_\a};N)$, we identify $\iiso^{\rm max}(N)$ with $\iiso^{\rm max}(W)$. Observe that
\begin{equation}\label{WN}\int_W\,c_k^p({\widetilde{\eta}_\a}, \gamma, \tilde{a})\, dV_W(\tilde{a}) =\int _{N\cap U_{\alpha}}\,c_k^p({\eta_\a},\gamma, a)\, dV_N(a).\end{equation}
Since $\dim(W)=\dim(N)$, Equations~\eqref{ip}, \eqref{sumck}, \eqref{san}, \eqref{osan} and \eqref{WN} yield
\begin{align}\label{inp}
I'(t)&=\sum_{N\in S({\mathcal{O}})}\sum_{{\alpha}=1}^s \frac{|G_{\alpha}|}{|{\operatorname{{Iso}}}(N)|}\sum_{{\gamma}\in\iiso^{\rm max}(N)}\,\frac{1}{|G_{\alpha}|}(4\pi t)^{-\dim(N)/2}\sum_{k=0}^\infty\,t^k\int _{N\cap U_{\alpha}}\,c_k^p({\eta_\a},\gamma, a) dV_N(a)\\
&=\sum_{N\in S({\mathcal{O}})}\frac{1}{|{\operatorname{{Iso}}}(N)|}\sum_{{\gamma}\in\iiso^{\rm max}(N)}\,(4\pi t)^{-\dim(N)/2}\sum_{k=0}^\infty\,t^k\int_N\,b_k^p(\gamma, a) \, dV_N(a)=\sum_{N\in S({\mathcal{O}})} \, \frac{I_N^p(t)}{|\operatorname{Iso}(N)|}.
\end{align}
The theorem follows.
\end{proof}
\begin{remark}\label{abs boun}
Let ${\mathcal{O}}$ be a closed Riemannian orbifold all of whose singular strata $N$ have codimension one; i.e., the singular strata are mirrors. Every such stratum is necessarily totally geodesic in ${\mathcal{O}}$. (Indeed, about any point $a$ in $N$, there exists a coordinate chart $({\widetilde{U}}, G_U,\pi_U)$ where $G_U$ is generated by a reflection $\tau$, and the Riemannian metric on $U=\pi_U({\widetilde{U}})$ arises from a $\tau$-invariant metric on ${\widetilde{U}}$. The fixed point set of any involutive isometry is necessarily totally geodesic.) The underlying topological space $\underline{{\mathcal{O}}}$ has the structure of a smooth Riemannian manifold each of whose boundary components is totally geodesic.
The orbifold ${\mathcal{O}}$ is good in this case, as we may double ${\mathcal{O}}$ over its reflectors
to obtain a smooth Riemannian manifold $M$ admitting a reflection symmetry $\tau$ so that ${\mathcal{O}}=\langle\tau\rangle\backslash M$. Let $P:M\to \underline{{\mathcal{O}}}$ be the projection. Identify the fixed point set of $\tau$ with $N$. Let $\omega$ be a smooth $p$-form on the orbifold ${\mathcal{O}}$. Then $\widetilde{\omega}:=P^*\omega$ is $\tau$-invariant. Let $\nu$ be a unit normal vector field along $N$ in $M$ and let $j:N\to M$ be the inclusion. The $\tau$-invariance of $\widetilde{\omega}$ is equivalent to the conditions $j^*\iota_\nu\widetilde{\omega}=j^*\iota_\nu d\widetilde{\omega}=0$. Using the same notation $\nu$ for the unit normal $P_*\nu$ along $N$ in ${\mathcal{O}}$ and now letting $j:N\to {\mathcal{O}}$ be the inclusion, this says
\[
j^*\iota_\nu \omega=j^*\iota_\nu d\omega=0,
\]
and these are precisely the absolute boundary conditions for the manifold $\underline{{\mathcal{O}}}$. The spectrum of the Hodge Laplacian on $p$-forms on ${\mathcal{O}}$ thus coincides with the spectrum of the Hodge Laplacian on $p$-forms on $\underline{{\mathcal{O}}}$ with absolute boundary conditions. It is then straightforward to prove that the spectral invariants $b^p_k(N)$ computed here for the orbifold agree with the familiar contributions to the heat trace asymptotics arising from the boundary of $\underline{{\mathcal{O}}}$ as obtained in \cite[(2) Theorem 3.2]{P03} (see also \cite[Theorem 1.2]{BG90}).
\end{remark}
\section{Applications}\label{applications}
In this section we use the asymptotic results derived in the previous section to prove our main theorems.
We first compute the invariant $b_0^p$ for various types of singular strata $N$ in a $d$-dimensional closed orbifold $\mathcal O$. We then prove an inverse spectral result involving both the 0-spectrum and 1-spectrum.
Finally, we construct examples of orbifolds that are $p$-isospectral to manifolds (for various $p \geq 1$) and obtain some inverse spectral results for the $p$-spectrum alone.
\subsection{Computation of $b_0^p(N)$ for various strata $N$}\label{sec.computation}
\begin{prop}\label{b01general} Let $N$ be a singular stratum of codimension $k$ in the $d$-dimensional closed orbifold ${\mathcal{O}}$. Suppose $\gamma \in\iiso^{\rm max}(N)$ has eigenvalue type $E(\theta_1, \theta_2, \dots, \theta_s;r)$, using Notation~\ref{nota:Rstuff}.
Then
\begin{equation}\label{eq.b01}b_0^1(\gamma)=\left( d-k-r+\sum_{j=1}^s\,2\cos(\theta_j)\right) \left( 2^{-k}\prod_{j=1}^s\,\csc^2(\theta_j/2)\right) .\end{equation}
Here we use the convention that when $s=0$, we have $\prod_{j=1}^s\,\csc^2(\theta_j/2)=1$.
\end{prop}
\begin{proof} We apply Theorem~\ref{bkthm}.
The expression in the first set of large parentheses in Equation~(\ref{eq.b01}) is the trace of $\gamma$. The expression in the second set of large parentheses equals $\displaystyle\frac{1}{|\det({\operatorname{{Id}}}_{\operatorname{codim}(N)}-A)|}$, where (taking note of Notation~\ref{invariants} and Notation and Remarks~\ref{nota:don}) we are writing $A$ to denote the expression $A_a$ in Theorem~\ref{bkthm} part~\ref{it:bkthm-global}.
\end{proof}
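For concreteness, we add a block-by-block evaluation of the determinant (assuming the natural reading of the eigenvalue type in Notation~\ref{nota:Rstuff}: $A$ is conjugate to a block sum of $s$ plane rotations $R(\theta_j)$ and $-{\operatorname{{Id}}}_r$, with $k=2s+r$):

```latex
\[
|\det({\operatorname{{Id}}}_k-A)|
 =\prod_{j=1}^{s}\det\!\big({\operatorname{{Id}}}_2-R(\theta_j)\big)\cdot\det\!\big(2\,{\operatorname{{Id}}}_r\big)
 =\prod_{j=1}^{s}\big(2-2\cos\theta_j\big)\cdot 2^{r}
 =2^{r}\prod_{j=1}^{s}4\sin^2(\theta_j/2),
\]
\[
\text{whence}\qquad
\frac{1}{|\det({\operatorname{{Id}}}_k-A)|}
 =2^{-r}\,4^{-s}\prod_{j=1}^{s}\csc^2(\theta_j/2)
 =2^{-k}\prod_{j=1}^{s}\csc^2(\theta_j/2).
\]
```

This recovers the second factor in Equation~\eqref{eq.b01}.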
\begin{ex}\label{isotropy2} ~
Suppose $N$ is a singular stratum of codimension $k$ with isotropy group of order $2$. The generator $\gamma$ of ${\operatorname{{Iso}}}(N)$ must have eigenvalue type $E(;k)$, and thus
\begin{equation}\label{Krawt_trace}\tr_p(\gamma)=\sum_{j=0}^p\,(-1)^j\,\binom{k}{j}\binom{d-k}{p-j}\end{equation}
with the understanding that $\binom{m}{n}=0$ when $n>m$. The expression on the right-hand side of Equation~\eqref{Krawt_trace} is the value at $k$ of the
so-called (binary) Krawtchouk polynomial of degree $p$ given by
\begin{equation}\label{krawtpoly}K^d_p(x)=\sum_{j=0}^p\,(-1)^j\,\binom{x}{j}\binom{d-x}{p-j}.\end{equation}
(The authors learned of the Krawtchouk polynomials through the work of Miatello and Rossetti \cite{MR01}, where these polynomials arise in the computation of the spectrum of $p$-forms on Bieberbach manifolds with holonomy $\mathbf Z_2^k$.)
By Theorem~\ref{bkthm} part~\ref{it:bkthm-global} and Notation~\ref{invariants} part~\ref{it:invariants-IpNt}, we have
\begin{equation}
\label{Krawt}b_0^p(N) =\frac{{\rm vol}(N)}{2^k}K_p^d(k).
\end{equation}
In particular, $b_0^p(N)$ vanishes if and only if the codimension $k$ of $N$ is a zero of the Krawtchouk polynomial $K_p^d$. There is a large literature on the zeros of Krawtchouk polynomials. We include a few elementary observations here:
\begin{itemize}
\item $K^d_0(k)=1$ and $K^d_d(k) = (-1)^k$.
\item $K^d_1(k)=0$ if and only if $k=\frac{d}{2}$.
\item $K^d_2(k)=0$ if and only if the dimension $d=n^2$ is a perfect square and $k=\frac{n(n\pm 1)}{2}$. (For later use, we observe that if $4|n$, then both zeros $\frac{n(n\pm 1)}{2}$ are even.)
\item When $d$ is even and $p=\frac{d}{2}$, we have $K^d_p(k)=0$ for all odd $k$.
\item When $d$ is even and $p$ is odd, we have $K^d_p(\frac{d}{2})=0$.
\end{itemize}
\end{ex}
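To illustrate the third bulleted item in Example~\ref{isotropy2}, we add a worked check: take $d=4$, so $n=2$ and the predicted zeros are $k=\frac{n(n\pm1)}{2}\in\{1,3\}$. Indeed,

```latex
\[
K^4_2(1)=\binom{1}{0}\binom{3}{2}-\binom{1}{1}\binom{3}{1}+\binom{1}{2}\binom{3}{0}=3-3+0=0,
\qquad
K^4_2(3)=\binom{3}{0}\binom{1}{2}-\binom{3}{1}\binom{1}{1}+\binom{3}{2}\binom{1}{0}=0-3+3=0.
\]
```

Thus, by Equation~\eqref{Krawt}, any stratum of codimension $1$ or $3$ with isotropy of order $2$ in a $4$-dimensional orbifold has $b_0^2(N)=0$.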
\begin{ex}\label{isotropy3} Suppose $N$ is a singular stratum of codimension $k$ with isotropy group ${{\operatorname{{Iso}}}(N)}$ of order $3$. Then ${{\operatorname{{Iso}}}(N)}$ is necessarily cyclic, generated by an element $\gamma$ with eigenvalue type $E(\theta_1, \theta_2, \dots, \theta_s;)$ where $k=2s$ and $\theta_j \in \{\frac{2\pi}{3},\frac{4\pi}{3}\}$ for $1 \le j \le s$. We have $\gamma \in \iiso^{\rm max}(N)$ as $\gamma$ has eigenvalue $1$ with multiplicity $d-k$. As $\gamma^2$ satisfies the same conditions, we conclude $\iiso^{\rm max}(N)=\{\gamma, \gamma^2\}$. Using the fact that $\cos(\theta_j)=-\frac{1}{2}$ and $\csc^2(\theta_j/2)=4/3$ for all $j$, we have
$$b_0^1(\gamma)=b_0^1(\gamma^2)=(d-k-s)2^{-k}(4/3)^s=\frac{d-3s}{3^s}.$$
Thus $$b_0^1(N)=\frac{2(d-3s)}{3^s}{\rm vol}(N).$$
Hence $b_0^1(N)=0$ if and only if $k=\frac{2}{3}d$. (In particular, we must have $3|d$ in this case.) When $d=3$ and $k=2$, we will construct an example of a flat orbifold that is 1-isospectral to a flat torus (see Example~\ref{it:flat-trianglat}).
\end{ex}
We next consider strata of low codimension.
If $N$ is a stratum of codimension one, then the isotropy necessarily has order $2$, so this case is covered by Example~\ref{isotropy2}.
\begin{prop}
\label{b01codim2}
Let $N$ be a stratum of codimension two with cyclic isotropy group of order $m$. Then,
\begin{equation}\label{eq.codim2} b_0^1(N)=\left( (d-2)\frac{m^2 - 1}{12} + \frac{m^2 - 6m + 5}{6}\right) {\rm vol}(N).
\end{equation}
\end{prop}
In order to prove Proposition~\ref{b01codim2}, we need an intermediary result.
\begin{lemma}
\label{lem:trigsum2}
For $2\leq m \in \mathbb{Z}$, we have
\begin{equation*}\sum_{j=1}^{m -1} \csc^2(\pi j/ m)
= \frac{m^2 - 1}{3},
\end{equation*}
and
\begin{equation*}\sum_{j=1}^{m -1} \cos (2\pi j/ m)\csc^2 (\pi j/ m)
= \frac{m^2 - 6m + 5}{3}.
\end{equation*}
\end{lemma}
\begin{proof}
These formulas are proved using the calculus of residues. See \cite[Lemma 5.4]{DGGW08} for the first formula.
For the second, by \cite[Equation (2.3)]{CS12}, we have
\begin{align*}
S_3(m,1,1) &= \sum_{j=1}^{m-1} \cos (2\pi j/ m) \csc^2 (\pi j/ m) \\
&= 2 \sum_{\alpha =0}^{1} \binom{2}{2\alpha} B_{2\alpha}(1/m) B_{2-2\alpha}^{(2)}(1) m^{2\alpha} \\
&= 2B_0(1/m)B_2^{(2)}(1) + 2B_2(1/m)B_0^{(2)}(1)m^2
= \frac{m^2 - 6m + 5}{3},
\end{align*}
where $B_{n}^{(m)}(x)$ are the Bernoulli polynomials of order $m$ and degree $n$ in $x$
(see \cite{CS12} and references therein).
\end{proof}
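For instance (a check we add for concreteness), at $m=4$ both sums can be evaluated term by term:

```latex
\[
\sum_{j=1}^{3}\csc^2(\pi j/4)=2+1+2=5=\frac{4^2-1}{3},
\qquad
\sum_{j=1}^{3}\cos(2\pi j/4)\csc^2(\pi j/4)=0-1+0=-1=\frac{4^2-6\cdot 4+5}{3}.
\]
```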
\begin{proof}[Proof of Proposition~\ref{b01codim2}] Let $m$ be the order of ${{\operatorname{{Iso}}}(N)}$. If $m=2$, then the expression \eqref{eq.codim2} in Proposition~\ref{b01codim2} agrees with that in Example~\ref{isotropy2}, so for notational convenience we assume $m\geq 3$. In this case ${{\operatorname{{Iso}}}(N)}$ is generated by an element $\gamma$ with eigenvalue type $E(2\pi/m;)$. All elements of ${{\operatorname{{Iso}}}(N)}$ other than the identity lie in $\iiso^{\rm max}(N)$, and the proposition follows from Notation~\ref{invariants} part~\ref{it:invariants-IpNt}, Proposition~\ref{b01general}, and Lemma~\ref{lem:trigsum2}.
\end{proof}
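The agreement at $m=2$ can be made explicit (our verification): Equation~\eqref{eq.codim2} gives

```latex
\[
b_0^1(N)=\left((d-2)\,\frac{2^2-1}{12}+\frac{2^2-6\cdot 2+5}{6}\right){\rm vol}(N)
=\left(\frac{d-2}{4}-\frac{1}{2}\right){\rm vol}(N)=\frac{d-4}{4}\,{\rm vol}(N),
\]
```

which matches Equation~\eqref{Krawt} with $k=2$ and $p=1$, since $K^d_1(2)=(d-2)-2=d-4$ and $2^{-k}=\frac14$.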
\begin{ex}\label{compound rotation} Generalizing Proposition~\ref{b01codim2}, we can compute $b_0^1(N)$ for strata for which ${{\operatorname{{Iso}}}(N)}$ is cyclic and
\begin{itemize}
\item when $m>2$, the generator has eigenvalue type $E(2\pi/m, \dots, 2\pi/m;)$ with $2\pi/m$ repeated $s$ times, or
\item when $m=2$, the generator has eigenvalue type $E(;r)$ with $r$ even.
\end{itemize}
Consider the case that $N$ has codimension $4$ in ${\mathcal{O}}$. By \cite[Equation~(5.2) and Corollary~5.4]{BY02}, we have
\begin{equation}\label{trigsum4}
\sum_{j=1}^{m-1} \cos (2\pi j/ m) \csc^4(\pi j/ m) = \frac{m^4 -20m^2 +19}{45}.
\end{equation}
From \cite[Equation~(5.2) and Corollary~5.2]{BY02}, we have that
\begin{equation}\label{trigsum4'}
\sum_{j=1}^{m -1} \frac{d-4}{\sin^4(\pi j/ m)}
= \frac{(d-4) (m^4 +10m^2 -11)}{45}.
\end{equation}
Thus by Proposition~\ref{b01general},
\begin{equation*}
b_0^1(N) =
{\rm vol}(N) \frac{4(m^4 -20m^2 +19)+(d-4) (m^4 +10m^2 -11)}{720}.
\end{equation*}
By applying Example~\ref{isotropy2}, one can confirm that the same expression is valid when $m=2$.
Reference \cite{BY02} also contains formulas analogous to Equations~(\ref{trigsum4}) and (\ref{trigsum4'}) with the power $4$ replaced by the power $2s$ for an arbitrary positive integer $s$, thus yielding expressions for $b_0^1(N)$ for rotation strata of constant rotation angle and higher codimension.
\end{ex}
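The confirmation at $m=2$ mentioned above can be made explicit (our computation): the numerator becomes $4(16-80+19)+(d-4)(16+40-11)=45(d-8)$, so

```latex
\[
b_0^1(N)=\frac{45(d-8)}{720}\,{\rm vol}(N)=\frac{d-8}{16}\,{\rm vol}(N)
        =\frac{K^d_1(4)}{2^4}\,{\rm vol}(N),
\]
```

in agreement with Equation~\eqref{Krawt}, since $K^d_1(4)=(d-4)-4=d-8$.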
\subsection{The proof of Theorem~\ref{thm:main1V4}}\label{ss:codim2} The goal of this subsection is to prove Theorem~\ref{thm:main1V4} from the Introduction. Let ${\mathcal{O}}$ be a $d$-dimensional orbifold.
\begin{proof}[Proof of Theorem~\ref{thm:main1V4}]
If ${\mathcal{O}}$ contains a stratum $S$ of codimension three, then either $S$ is primary or else ${\mathcal{O}}$ contains a singular stratum of lower codimension. In the first case, Theorem~\ref{thm:dggw5.1} implies that the 0-spectrum alone suffices to distinguish ${\mathcal{O}}$ from any Riemannian manifold. Thus we may assume that ${\mathcal{O}}$ contains a stratum of codimension one or two.
If ${\mathcal{O}}$ contains a stratum of codimension one, then the theorem is immediate from Theorem~\ref{thm:dggw5.1}. Thus we may assume that there are no such strata. It then follows that every stratum $N$ of codimension two is primary and moreover that its isotropy group is cyclic. Indeed, the isotropy group $\operatorname{Iso}(N)$ is isomorphic to a subgroup of the orthogonal group $O(d)$ of the form $A\times I_{d-2}$ where $I_{d-2}$ is the $(d-2)\times (d-2)$ identity matrix and $A$ is a finite subgroup of $O(2)$ with the property that no element of $A$ other than the identity fixes any non-trivial vector. Every such subgroup $A$ is a cyclic subgroup of $SO(2)$.
In the following, we use the notation introduced in Equation~\eqref{heatascoeffc}.
If $M$ is any Riemannian manifold of dimension $d$, then its heat invariants $c_2^0(M)=a_1^0(M)$ and $c_2^1(M)=a_1^1(M)$ satisfy
$$
c_2^1(M)=\frac{d-6}{6} \int_M \tau \, dV_M=(d-6)c_2^0(M).
$$
Thus it suffices to show that the orbifold ${\mathcal{O}}$ satisfies
\begin{equation}\label{d-6}
c_2^1({\mathcal{O}})>(d-6)c_2^0({\mathcal{O}}).
\end{equation}
We will show that
\begin{equation}\label{greater d-6}
b_0^1(N)>(d-6)b_0^0(N)
\end{equation}
for every stratum of codimension two in ${\mathcal{O}}$. Equation~(\ref{d-6}) will then follow from Theorem~\ref{thm.asympt}.
Let $N$ be a stratum of codimension two with (necessarily cyclic) isotropy group of order $m$. The computation of $b_0^0(N)$ is analogous to the case of cone points in dimension 2, carried out in \cite[Proposition 5.5]{DGGW08}, yielding
$$b_0^0(N)=\frac{(m^2-1)}{12} {\rm vol}(N).$$
Using Proposition~\ref{b01codim2}, we have
$$
b_0^1(N)=\left((d-2)\frac{m^2-1}{12}+\frac{m^2-6m+5}{6}\right){\rm vol}(N).
$$
Thus we have
$$b_0^1(N)-(d-6)b_0^0(N)=\left(\frac{m^2-1}{3}+\frac{m^2-6m+5}{6}\right){\rm vol}(N)=\frac{(m-1)^2}{2}\,{\rm vol}(N)>0,$$
where positivity follows since $m\geq 2$.
This proves Equation~(\ref{greater d-6}), and the theorem follows.
\end{proof}
\begin{remark}\label{rem:orbisurfaces} It is shown in \cite[Theorem 5.15]{DGGW08} that the $0$-spectrum alone suffices to distinguish singular, closed, locally orientable Riemannian orbisurfaces with nonnegative Euler characteristic from smooth, oriented, closed Riemannian surfaces. In fact, it is shown there that the degree zero term in the small-time heat trace asymptotics for functions gives rise to a complete topological invariant for orbisurfaces satisfying these constraints. In the previous theorem, by also appealing to the $1$-spectrum, we do not require local orientability or nonnegative Euler characteristic to distinguish Riemannian orbisurfaces from closed Riemannian surfaces.
\end{remark}
\subsection{Negative inverse spectral results for the $p$-spectra}\label{sec.isospec}
The article \cite{GR03} contains examples of orbifolds of even dimension $d=2m$ that are $m$-isospectral to manifolds. Here, we will construct examples of flat orbifolds that are $p$-isospectral to manifolds for various values of $p$.
\begin{nota}\label{nota:flat} Every closed flat orbifold or manifold is of the form ${\mathcal{O}}=\Sigma{\backslash} \mathbf R^d$ where $\Sigma$ is a discrete subgroup of the Euclidean motion group $I(\mathbf R^d)=\mathbf R^d\rtimes O(d)$. (Note that ${\mathcal{O}}$ is a manifold, i.e., $\Sigma$ acts freely on $\mathbf R^d$, if and only if $\Sigma$ is a Bieberbach group). The restriction of the projection $I(\mathbf R^d)\to O(d)$ to $\Sigma$ has finite image $F$ and kernel a lattice $\Lambda$ of rank $d$. We will refer to $\Lambda$ as the translation lattice of $\Sigma$. The group $F$ is the holonomy group of ${\mathcal{O}}$. For each $\gamma\in F$, there exists $a=a(\gamma)\in \mathbf R^d$, unique modulo $\Lambda$, such that $\gamma\circ L_a\in \Sigma$, where $L_a$ denotes translation by $a$. Let $\Lambda^*$ denote the lattice dual to $\Lambda$. For $\mu\geq 0$ and $\gamma\in F$, set
$$e_{\mu,\Sigma}(\gamma)=\sum_{v\in \Lambda^*, \|v\|=\mu,\gamma(v)=v}e^{2\pi i v\cdot a}.$$
\end{nota}
In the notation of Notation~\ref{nota:flat}, let $T$ be the torus $\Lambda{\backslash}\mathbf R^d$. Let $\eta =\sum_J \, f_J\,dx^J$ be the pullback to $\mathbf R^d$ of a $p$-form on ${\mathcal{O}}$. (Here $J$ varies over all multi-indices $1\leq j_1<\dots<j_p\leq d$, and the functions $f_J$ are $\Sigma$-invariant.) We have
$$\Delta^p(\eta)=\sum_J \,\Delta^0( f_J)\,dx^J.$$
Thus every element of the $p$-spectrum of ${\mathcal{O}}$ occurs as an eigenvalue in the 0-spectrum of the torus $T$; i.e., it is of the form $4\pi^2\|\mu\|^2$ for some $\mu$ in the dual lattice $\Lambda^*$ of $\Lambda$. The following result of Miatello and Rossetti, while originally stated in the context of flat manifolds, is also valid for orbifolds.
\begin{prop}[{\cite[Theorem 3.1]{MR01}}]
\label{flatspeccomp}
We use the notation of Notation~\ref{nota:flat}.
\begin{enumerate}
\item For $\mu\geq 0$, the multiplicity $m_{p,\mu}(\Sigma)$ of the eigenvalue $4\pi^2\mu^2$ in the $p$-spectrum of ${\mathcal{O}}=\Sigma{\backslash}\mathbf R^d$ is given by
$$m_{p,\mu}(\Sigma)=\frac{1}{|F|}\sum_{\gamma\in F}\, \tr_p(\gamma) e_{\mu,\Sigma}(\gamma)=\frac{1}{|F|}\binom{d}{p}+\frac{1}{|F|}\sum_{1\neq\gamma\in F}\, \tr_p(\gamma) e_{\mu,\Sigma}(\gamma).$$
\item \label{it:flatspeccomp-isospectral} Thus if ${\mathcal{O}}'=\Sigma'{\backslash} \mathbf R^d$, where $\Sigma'$ has the same translation lattice $\Lambda'=\Lambda$, and if there is a bijection $\gamma\mapsto\gamma'$ between the holonomy groups of ${\mathcal{O}}$ and ${\mathcal{O}}'$ such that $$\tr_p(\gamma) e_{\mu,\Sigma}(\gamma)=\tr_p(\gamma') e_{\mu,\Sigma'}(\gamma')$$
for all $\gamma\in F$, then ${\mathcal{O}}$ and ${\mathcal{O}}'$ are $p$-isospectral.
\end{enumerate}
\end{prop}
\begin{cor}\label{isosp condition}
Suppose ${\mathcal{O}}=\Sigma{\backslash} \mathbf R^d$ and ${\mathcal{O}}'=\Sigma'{\backslash} \mathbf R^d$ where $\Sigma$ and $\Sigma'$ have the same translation lattice. If $|F|=|F'|$ and if $\tr_p(\gamma)=0=\tr_p(\gamma')$ for all $1\neq \gamma\in F$ and $1\neq \gamma'\in F'$, then ${\mathcal{O}}$ and ${\mathcal{O}}'$ are $p$-isospectral.
\end{cor}
\begin{ex}\label{it:flat-trianglat} Let $\Lambda$ be the lattice in $\mathbf R^3$ generated by $\{(1,0,0), (\frac{1}{2},\frac{\sqrt{3}}{2},0), (0,0,1)\}$. Let $\gamma\in I(\mathbf R^3)$ be rotation through angle $\frac{2\pi}{3}$ about the $x_3$-axis and let $F$ be the cyclic group generated by $\gamma$. Observe that $F$ stabilizes
$\Lambda$. Let $\Sigma$ be the subgroup of $I(\mathbf R^3)$ generated by $\Lambda$ and $\gamma$, and let $\Sigma'$ be generated by $\Lambda$ and $\gamma\circ L_a$ where $a=(0,0,\frac{1}{3})$. Corollary~\ref{isosp condition} and the computation in Example~\ref{isotropy3} imply that the orbifold $\Sigma{\backslash} \mathbf R^3$ and the manifold $\Sigma'{\backslash} \mathbf R^3$ are 1-isospectral.
\end{ex}
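The vanishing of the traces used above can be checked in one line (our computation): on $\mathbf R^3$, the rotation $\gamma$ has eigenvalues $e^{\pm 2\pi i/3}$ and $1$, so

```latex
\[
\tr_1(\gamma)=\tr_1(\gamma^2)=2\cos\!\left(\tfrac{2\pi}{3}\right)+1=-1+1=0,
\]
```

and the hypothesis $\tr_1=0$ of Corollary~\ref{isosp condition} holds for every nontrivial element of the holonomy groups.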
\begin{prop}\label{it:flat-involution}~
Let $d\geq 2$ and suppose that $p,k\in \{0, \dots, d\}$ satisfy $K_p^d(k)=0$ where $K_p^d$ is the Krawtchouk polynomial given by Equation~\eqref{krawtpoly}. Then there exists a $d$-dimensional Bieberbach manifold $M_k$ and a closed flat orbifold ${\mathcal{O}}_k$ of dimension $d$ such that:
\begin{enumerate}
\item The singular set of ${\mathcal{O}}_k$ consists precisely of $2^k$ singular strata each of which has codimension $k$.
\item ${\mathcal{O}}_k$ and $M_k$ are $p$-isospectral.
\end{enumerate}
Moreover, if $k'\in \{0, \dots, d\}$ is another zero of $K_p^d$, then the collection $\{M_k, M_{k'}, {\mathcal{O}}_k, {\mathcal{O}}_{k'}\}$ are all mutually $p$-isospectral.
\end{prop}
\begin{proof} Let $\gamma_k:\mathbf R^{d}\to\mathbf R^{d}$ be given by
$$\gamma_k(x_1,\dots, x_d)=(-x_1,\dots, -x_k, x_{k+1},\dots, x_d),$$
let $a\in (\frac{1}{2}\mathbf Z)^d$ with at least one of the last $d-k$ entries of $a$ equal to $\frac{1}{2}$, and let $\rho_k=\gamma_k\circ L_a$. Let $\Sigma_k$, respectively $\Sigma'_k$, be the discrete subgroup of the Euclidean motion group $I(\mathbf R^d)$ generated by $\mathbf Z^d$ together with $\gamma_k$, respectively $\rho_k$. Set ${\mathcal{O}}_k:=\Sigma_k{\backslash} \mathbf R^d$ and $M_k:= \Sigma'_k{\backslash} \mathbf R^d$. The torus $T=\mathbf Z^d{\backslash} \mathbf R^d$ is a two-fold cover of each of ${\mathcal{O}}_k$ and $M_k$. The map $\rho_k$ induces a fixed-point free involution of $T$, so $M_k$ is a manifold. In contrast, the involution of $T$ induced by $\gamma_k$ fixes all points each of whose first $k$ coordinates lie in $\{0, \frac{1}{2}\}$. Thus ${\mathcal{O}}_k$ contains $2^k$ singular strata each of codimension $k$. In the notation of Notation~\ref{nota:flat}, the holonomy groups of both ${\mathcal{O}}_k$ and $M_k$ have order 2 with generator $\gamma_k$. Since $\tr_p(\gamma_k)=K^d_p(k)=0$ (as seen in Example~\ref{isotropy2}), Corollary~\ref{isosp condition} implies that ${\mathcal{O}}_k$ and $M_k$ are $p$-isospectral. Moreover, the corollary also implies that their $p$-spectra are independent of the choice of the zero $k$ of $K^d_p$.
\end{proof}
\begin{ex}\label{ex:d/2}
As noted in Example~\ref{isotropy2}, when $d$ is even and $k=\frac{d}{2}$, we have $K^d_p(k)=0$ for all odd $p$. Thus in every even dimension $d$, there exists a Bieberbach manifold $M$ and a flat $d$-dimensional orbifold with singular set of codimension $\frac{d}{2}$ such that $M$ and ${\mathcal{O}}$ are $p$-isospectral for all odd $p$.
\end{ex}
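One way to see the vanishing used here (our observation, immediate from Equation~\eqref{krawtpoly}) is the reflection symmetry of the Krawtchouk polynomials: reindexing $j\mapsto p-j$ gives

```latex
\[
K^d_p(d-x)=\sum_{j=0}^{p}(-1)^{j}\binom{d-x}{j}\binom{x}{p-j}
          =\sum_{j=0}^{p}(-1)^{p-j}\binom{x}{j}\binom{d-x}{p-j}
          =(-1)^{p}K^d_p(x),
\]
```

so for $p$ odd and $x=\frac{d}{2}$ we get $K^d_p(\tfrac{d}{2})=-K^d_p(\tfrac{d}{2})=0$.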
The fact that the orbifolds and manifolds in Example~\ref{ex:d/2} are $\frac{d}{2}$-isospectral was proven by other methods in \cite[Theorem 3.2(i)]{GR03}. As noted there, in the special case that $d=2$, the resulting manifold in Example~\ref{ex:d/2} is a Klein bottle, while the orbifold ${\mathcal{O}}$ is a cylinder, and these were also shown to be 1-isospectral to a M\"obius strip. As in Remark~\ref{abs boun}, ${\mathcal{O}}$ can be viewed either as an orbifold or as a manifold with boundary with absolute boundary conditions. In \cite{GR03}, it was also asserted that these surfaces are $1$-isospectral to a flat $3$-pillow (i.e., a flat Riemannian orbifold whose underlying space is a sphere with three cone points); however, as shown in the errata to \cite{GR03}, that assertion was incorrect.
In the example below, we will use the observation in Remark~\ref{abs boun} to construct a flat $3$-pillow that is $1$-isospectral to a square with absolute boundary conditions.
\begin{ex}\label{ex:isospecorb}
Let $\Lambda=\mathbf Z^2$. Let $F<O(2)$ be the cyclic group of order $4$ generated by rotation $\gamma$ through angle $\frac{\pi}{2}$ about the origin, let $\Sigma=\mathbf Z^2\rtimes F$, and let ${\mathcal{O}}=\Sigma{\backslash}\mathbf R^2$.
Observe that $\tr_1(\gamma)=\tr_1(\gamma^3)=0$ and that $\gamma^2=-{\operatorname{{Id}}}$.
Let $F'<O(2)$ be the Klein 4-group generated by the reflections $\alpha$ and $\beta$ across the $x$ and $y$-axes, let $\Sigma'=\mathbf Z^2\rtimes F'$, and let ${\mathcal{O}}'=\Sigma'{\backslash}\mathbf R^2$. Since $\tr_1(\alpha)=\tr_1(\beta)=0$ and $\alpha\circ\beta =-{\operatorname{{Id}}} =\gamma^2$, Proposition~\ref{flatspeccomp} part~\ref{it:flatspeccomp-isospectral} implies that ${\mathcal{O}}$ and ${\mathcal{O}}'$ are $1$-isospectral.
The underlying space of the orbifold ${\mathcal{O}}'$ is a square and the
$1$-spectrum of ${\mathcal{O}}'$ as an orbifold coincides with the $1$-spectrum of the square with absolute boundary conditions. (As an orbifold, the four edges are reflectors and the four corners are dihedral points.) On the other hand, ${\mathcal{O}}$ is a $3$-pillow. Indeed, the underlying topological space of ${\mathcal{O}}$ is a sphere. The singular set of ${\mathcal{O}}$ consists only of cone points: two cone points of order $4$ and one cone point of order $2$. (The cone points of order $4$ correspond to the $\Sigma$-orbits of the points $(0,0)$ and $(\frac{1}{2},\frac{1}{2})$ in $\mathbf R^2$, while the cone point of order $2$ corresponds to the $\Sigma$-orbit of $(\frac{1}{2},0)$, which is the same as the $\Sigma$-orbit of the point $(0,\frac{1}{2})$.)
\end{ex}
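The trace computations invoked above are elementary; we record them for completeness. On $\mathbf R^2$ we have $\tr_1=\tr$, so

```latex
\[
\tr_1(\gamma)=\tr\begin{pmatrix}0&-1\\1&0\end{pmatrix}=0,\qquad
\tr_1(\alpha)=\tr\begin{pmatrix}1&0\\0&-1\end{pmatrix}=0,\qquad
\tr_1(\gamma^2)=\tr_1(\alpha\circ\beta)=\tr(-{\operatorname{{Id}}}_2)=-2.
\]
```

The nonzero traces do not obstruct isospectrality: $\gamma^2$ and $\alpha\circ\beta$ are the same orthogonal map $-{\operatorname{{Id}}}$, and both $\Sigma$ and $\Sigma'$ are semidirect products (so the translational parts $a$ vanish); hence the corresponding terms in Proposition~\ref{flatspeccomp} agree.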
\subsection{Positive inverse spectral results for the $p$-spectra}\label{sec: common cover}
We continue to assume that ${\mathcal{O}}$ is a $d$-dimensional closed Riemannian orbifold.
\begin{nota}\label{parity}
We will say that a singular stratum of ${\mathcal{O}}$ has \emph{positive, respectively negative, parity} if its codimension in ${\mathcal{O}}$ is even, respectively odd. For $\epsilon\in\{\pm\}$, let $k_\epsilon$ be the minimum codimension of the primary singular strata (if any) of ${\mathcal{O}}$ of parity $\epsilon$, and let $S_\epsilon({\mathcal{O}})$ be the collection of all primary strata of codimension $k_\epsilon$. Set
$$B_\epsilon^p({\mathcal{O}}) =\sum_{N\in S_\epsilon({\mathcal{O}})}\,\frac{1}{|{\operatorname{{Iso}}}(N)|}\,b_0^p(N).$$
If there are no singular strata of parity $\epsilon$, then we set $B_\epsilon^p({\mathcal{O}})=0.$
\end{nota}
\begin{prop}\label{prop: odd par}~
\begin{enumerate}
\item \label{it:oddpar-1} $B_-^p({\mathcal{O}})$ is an invariant of the $p$-spectrum. In particular, if $B_-^p({\mathcal{O}})\neq 0$, then ${\mathcal{O}}$ cannot be $p$-isospectral to a Riemannian manifold.
\item \label{it:oddpar-2} If $B_+^p({\mathcal{O}})\neq 0$, then ${\mathcal{O}}$ cannot be $p$-isospectral to any closed Riemannian manifold
$M$ such that $M$ and ${\mathcal{O}}$ have either finite $p$-isospectral Riemannian covers or isometric (possibly infinite) homogeneous
Riemannian covers.
\end{enumerate}
\end{prop}
\begin{proof}~
\begin{enumerate}
\item The fact that $k_-$ is the minimal codimension of the odd primary singular strata implies that $B_-^p({\mathcal{O}})$ is precisely the term of degree $t^{-(d-k_-)/2}$ in the heat trace expansion for $\Delta^p$ on ${\mathcal{O}}$ and thus is a spectral invariant. The second statement is immediate.
\item The proof is analogous to those of the corresponding statements for the $0$-spectrum cited in the introduction; we give a summary. It suffices to show that if $M$ and ${\mathcal{O}}$ are $p$-isospectral, then each of the two conditions on $M$ and ${\mathcal{O}}$ implies that $a_j^p({\mathcal{O}})=a_j^p(M)$ for all $j$. (See Notation~\ref{invariants} part~\ref{it:invariants-Ip0t}.) Writing $k_+=2\ell$, one then notes that the coefficient of $t^{\ell-d/2}$ in the heat trace for the $p$-Laplacian on ${\mathcal{O}}$ is $a_\ell^p({\mathcal{O}})+B_+^p({\mathcal{O}})$, whereas the corresponding coefficient for $M$ is $a_\ell^p(M)$, contradicting the $p$-isospectrality of ${\mathcal{O}}$ and $M$.
First consider the case that there exist finite Riemannian covers $M^*$ of ${\mathcal{O}}$ and $M^{**}$ of $M$ that are $p$-isospectral. Suppose that $M$ and ${\mathcal{O}}$ are $p$-isospectral. Since the $p$-spectrum determines the volume, we have ${\rm vol}(M)={\rm vol}({\mathcal{O}})$ and also ${\rm vol}(M^{**})={\rm vol}(M^*)$. Thus the two coverings are of the same order $r$. Using Notation~\ref{invariants} part~\ref{it:invariants-Ip0t}, we then have
$$a_j^p({\mathcal{O}}) =\frac{1}{r}a_j^p(M^*)=\frac{1}{r}a_j^p(M^{**})= a_j^p(M)$$
for all $j$.
Next suppose that ${\mathcal{O}}$ and $M$ have a common homogeneous Riemannian cover $\tilde{M}$. Write $U_j^p(M;\cdot)$ and $U_j^p({\mathcal{O}};\cdot)$ for the functions on $M$ and ${\mathcal{O}}$ in Equation~\eqref{eq:heatinvM} of Notation and Remarks~\ref{nota:don} whose integrals yield $a_j^p(M)$ and $a_j^p({\mathcal{O}})$. One uses the homogeneity of the cover and the locality and universality of the $U_j^p$ to see that $U_j^p(M;\cdot)$ and $U_j^p({\mathcal{O}};\cdot)$ are constant functions with the same constant value $U_j$. If $M$ and ${\mathcal{O}}$ are $p$-isospectral, then again they have the same volume $V$, so $a_j^p(M)=U_jV=a_j^p({\mathcal{O}})$.
\qedhere
\end{enumerate}
\end{proof}
\begin{remark}\label{rem:p=0}
In case $p=0$, one has $b_0^0(N) > 0$ for every primary singular stratum. Thus the condition $B_-^0({\mathcal{O}})\neq 0$, respectively $B_+^0({\mathcal{O}})\neq 0$, is equivalent to the condition that ${\mathcal{O}}$ contains at least one primary singular stratum of odd, respectively even, codimension. Hence when $p=0$, the two statements of the proposition are weak analogues of Theorem~\ref{thm:dggw5.1} and of Theorem~\ref{thm:common good} part~\ref{homog cover}, respectively.
\end{remark}
\begin{defn}\label{dim of sing set} We define the \emph{codimension} of the singular set to be the minimum codimension of the singular strata.
\end{defn}
\begin{thm}\label{thm:krawt4} Denote by ${\mathcal Orb}^d_k$ the class of all closed $d$-dimensional Riemannian orbifolds with singular set of codimension $k$. Assume that $k$ is odd. Let $p\in \{0,1,2,\dots , d\}$ and let $K_p^d$ be the Krawtchouk polynomial given by Equation~\eqref{krawtpoly}. If $K_p^d(k)\neq 0$, then the $p$-spectrum determines the $(d-k)$-dimensional volume of the singular set of elements of ${\mathcal Orb}^d_k$. In particular, the $p$-spectrum then distinguishes orbifolds in ${\mathcal Orb}^d_k$ from Riemannian manifolds.
\end{thm}
\begin{proof} Let $N \subset \mathcal O \in {\mathcal Orb}^d_k$ be a singular stratum of codimension $k$. View ${\operatorname{{Iso}}}(N)$ as a subgroup of the orthogonal group $O(d)$. All non-trivial elements of ${\operatorname{{Iso}}}(N)$ must lie in $\iiso^{\rm max}(N)$ and must have the same 1-eigenspace; otherwise there would exist a stratum of smaller codimension. Thus ${\operatorname{{Iso}}}(N)$ must be of the form
${\operatorname{{Iso}}}(N)=\Gamma\times \{{\operatorname{{Id}}}_{d-k}\}$ where $\Gamma$ is a subgroup of the orthogonal group $O(k)$ that acts freely on the sphere $S^{k-1}$. Since $k$ is odd and the only nontrivial finite group that acts freely on an even-dimensional sphere has order $2$, $\Gamma$ and thus also ${\operatorname{{Iso}}}(N)$ must have order $2$.
Using Equation~\eqref{Krawt} and Notation~\ref{parity} we have,
\begin{equation*} B_-^p(\mathcal O)=\frac{K_p^d(k)}{2^{k+1}}\sum_{N \in S_-(\mathcal O)}\,{\rm vol}(N).
\end{equation*}
As shown in Proposition~\ref{prop: odd par} part~\ref{it:oddpar-1}, $B_-^p({\mathcal{O}})$ is a spectral invariant. Since $K_p^d(k)\neq 0$, the sum $\sum_{N \in S_-(\mathcal O)}{\rm vol}(N)$, that is, the $(d-k)$-dimensional volume of the singular set, is therefore determined by the $p$-spectrum. This volume is positive for every element of ${\mathcal Orb}^d_k$ and vanishes for manifolds, which completes the proof.
\end{proof}
Thus for example (under the assumption that $k$ is odd), Theorem~\ref{thm:krawt4} and the bulleted items in Example~\ref{isotropy2} imply:
\begin{itemize}
\item An element of ${\mathcal Orb}^d_k$ can be 1-isospectral to a Riemannian manifold only if $d$ is even and $k=\frac{d}{2}$.
\item Assume $d$ is not a perfect square. Then no closed Riemannian $d$-orbifold with singular set of odd codimension can be 2-isospectral to a Riemannian manifold. (The same statement holds if $d$ is of the form $d=4m^2$ for some integer $m$.)
\end{itemize}
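The two bulleted consequences reduce to the vanishing behaviour of the first two Krawtchouk polynomials. The stdlib-only sketch below assumes the standard binary Krawtchouk normalization $K_p^d(x)=\sum_{j}(-1)^j\binom{x}{j}\binom{d-x}{p-j}$, since Equation~\eqref{krawtpoly} is not reproduced in this excerpt:

```python
from math import comb, isqrt

def krawtchouk(p, d, x):
    # Binary Krawtchouk polynomial K_p^d(x); this normalization is an
    # assumption here (Equation (krawtpoly) is not shown above).
    return sum((-1) ** j * comb(x, j) * comb(d - x, p - j) for j in range(p + 1))

# First bullet: K_1^d(x) = d - 2x, so an integer zero forces x = d/2
# and hence d even.
assert all(krawtchouk(1, d, k) == d - 2 * k for d in range(1, 16) for k in range(d + 1))

# Second bullet: K_2^d(x) = ((d - 2x)^2 - d)/2, so an integer zero
# forces d to be a perfect square.
for d in range(1, 60):
    if any(krawtchouk(2, d, k) == 0 for k in range(d + 1)):
        assert isqrt(d) ** 2 == d
```

Running the block confirms both closed forms on small ranges of $d$ and $k$.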
We next focus on the $1$-spectrum.
\begin{thm}\label{thm:halfdim}
Let $\mathcal O$ be a closed Riemannian orbifold of dimension $d$.
\begin{enumerate}
\item \label{it:halfdim-covers} If $\mathcal O$ contains at least one singular stratum of codimension $k<\frac{d}{2}$, then ${\mathcal{O}}$ cannot be $1$-isospectral to any closed Riemannian manifold
$M$ such that $M$ and ${\mathcal{O}}$ have either finite $1$-isospectral Riemannian covers or isometric infinite homogeneous
Riemannian covers.
\item \label{it:halfdim-primary} If $\mathcal O$ contains at least one primary singular stratum of odd codimension $k<\frac{d}{2}$, then $\mathcal O$ cannot be 1-isospectral to any closed Riemannian manifold.
\item \label{it:halfdim-deven} For $d$ even, both statements remain true when $k=\frac{d}{2}$ provided at least one stratum of codimension $k$ has isotropy group of order at least three.
\end{enumerate}
\end{thm}
\begin{remark}\label{rem: sharp}
Proposition~\ref{it:flat-involution} shows that the hypothesis on isotropy order in Theorem~\ref{thm:halfdim} part~\ref{it:halfdim-deven} cannot be removed.
When $d=3$, Theorem~\ref{thm:halfdim} part~\ref{it:halfdim-covers} is sharp: Example~\ref{it:flat-trianglat} shows that the conclusion fails when $k=2=\frac{d+1}{2}$.
\end{remark}
\begin{proof} Parts \ref{it:halfdim-covers} and \ref{it:halfdim-primary} follow from Proposition \ref{prop: odd par} along with the following two observations:
\begin{enumerate}
\item Since any singular stratum of minimum codimension is necessarily primary, the hypothesis of part~\ref{it:halfdim-covers} implies that $k_\epsilon <\frac{d}{2}$ for at least one $\epsilon\in\{\pm\}$, while the hypothesis of part~\ref{it:halfdim-primary} says that $k_-<\frac{d}{2}$.
\item If $N$ is any primary singular stratum of codimension $k$ and if $\gamma\in\iiso^{\rm max}(N)$, then 1 is an eigenvalue of $\gamma$ with multiplicity $d-k$, while the real parts of the remaining $k$ eigenvalues lie in $[-1,1)$. Thus if $k<\frac{d}{2}$, then $\tr(\gamma)\geq (d-k)-k=d-2k>0$ for all $\gamma\in\iiso^{\rm max}(N)$, so $b_0^1(N) > 0$. In particular, if $k_\epsilon(\mathcal O)<\frac{d}{2}$, then $B_\epsilon(\mathcal O)>0$.
\end{enumerate}
To prove part~\ref{it:halfdim-deven},
suppose that $k_\epsilon=\frac{d}{2}$ and let $N$ be a primary stratum of codimension $k=\frac{d}{2}$. If $N$ has isotropy order 2, then the unique nontrivial $\gamma\in \iiso^{\rm max}(N)$ has eigenvalues $1$ and $-1$, each with multiplicity $k$, and thus $\tr(\gamma)=0$. Hence $b_0^1(N)=0$. If $N$ has isotropy of order greater than 2, then some $\gamma\in \iiso^{\rm max}(N)$ has, besides the eigenvalue 1 of multiplicity $k$, at least two eigenvalues with real part strictly greater than $-1$; for such $\gamma$ we have $\tr(\gamma)>0$, while $\tr(\gamma)\geq 0$ for every $\gamma\in \iiso^{\rm max}(N)$. Thus $b_0^1(N)>0$. Hence the presence of any primary strata with isotropy order greater than 2 implies that $B_\epsilon(\mathcal O)>0$, so we can again apply Proposition \ref{prop: odd par}.
\end{proof}
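The trace dichotomy driving this proof can be checked on model isometries. In the sketch below the block structure of $d\gamma_a$ (identity on a $(d-k)$-dimensional subspace, rotation blocks of a common angle on the normal space) is an illustrative assumption, not the general case:

```python
from math import cos, pi

def gamma_trace(d, k, theta):
    # Trace of a model isometry of R^d: the identity on a (d-k)-dimensional
    # subspace and k/2 rotation blocks of angle theta on the normal space
    # (k is taken even here purely for illustration).
    assert k % 2 == 0
    return (d - k) + (k // 2) * 2.0 * cos(theta)

# k < d/2: the (d-k)-dimensional 1-eigenspace dominates, so tr > 0
# whatever the rotation angle is.
assert all(gamma_trace(10, 4, 0.1 * t) > 0 for t in range(64))

# k = d/2 with isotropy of order 2 (theta = pi, i.e. -Id on the normal
# space): the trace vanishes, matching b_0^1(N) = 0.
assert abs(gamma_trace(8, 4, pi)) < 1e-12

# k = d/2 with higher-order isotropy (e.g. theta = pi/2, order 4):
# the trace equals k > 0, matching b_0^1(N) > 0.
assert gamma_trace(8, 4, pi / 2) > 0
```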
\section{Appendix}
\label{app}
We outline the proof of Theorem~\ref{bkthm}. We closely follow the proof of \cite[Theorem 4.1]{D76}, expressing it in a more general context.
Let $M$ be an arbitrary Riemannian manifold and $\gamma$ an isometry of $M$. Let $d(x,y)$ denote the induced distance function between $x$ and $y$ in $M$. Outside of a small tubular neighborhood of ${\operatorname{{Fix}}}(\gamma)$, we have that $(4\pi t) ^{-d/2}e^{-\frac{ d^2(x, \gamma(x))}{4t}}\to 0$ uniformly as $t\to 0^+$. Thus for any compactly supported smooth function $f$ on $M$, we have
\begin{align*}
&\int_{M} \,(4\pi t) ^{-d/2}e^{-\frac{ d^2(x, \gamma(x))}{4t}}f(x) \,dV_{ M}(x)
\\
&\qquad = \sum_{Q\in \mathcal{C}(Fix(\gamma))} \int_{U_Q} \,(4\pi t) ^{-d/2}e^{-\frac{ d^2(x, \gamma(x))}{4t}}f(x) \,dV_{ M}(x)
+ O(e^{-c/t}),
\end{align*}
where $U_Q$ is a small tubular neighborhood of $Q$, say of radius $r$, and $c>0$ is a constant; as in Notation and Remarks \ref{nota:don} part~\ref{it:don-CFix}, $\mathcal C({\operatorname{{Fix}}}(\gamma))$ denotes the set of connected components of ${\operatorname{{Fix}}}(\gamma)$.
We will later specialize to the choices of $f$ that arise in Theorem~\ref{bkthm}, but for now we work in the general setting.
As in \cite[Theorem 4.1]{D76}, we analyze each term in the sum above individually and define
\begin{equation*}
I_Q(f):= \int_{U_Q} \,(4\pi t) ^{-d/2}e^{-\frac{ d^2(x, \gamma(x))}{4t}}f(x) \,dV_{ M}(x).
\end{equation*}
Let $W_Q\subset TM$ be the normal disk bundle of $Q$ of radius $r$ and let $\varphi: W_Q\to Q$ be the bundle projection. We use the diffeomorphism $W_Q\to U_Q$ given by ${\rm{x}}\mapsto\exp_{\varphi({\rm{x}})}{\rm{x}}$, where $\exp$ is the Riemannian exponential map of $M$, to express $I_Q(f)$ as
$$
I_Q(f)= \int_Q\int_{\varphi^{-1}(a)} (4\pi t) ^{-d/2} e^{-\frac{ d^2(x, \gamma(x))}{4t}}f(x)\,\psi({\rm{x}}) \,d{\rm{x}} \,dV_{Q}(a).
$$
Here $d{\rm{x}}$ is the Euclidean volume element defined by the Riemannian structure on $(T_aQ)^\perp$, and for ${\rm{x}}\in \varphi^{-1}(a)$, we are writing $x=\exp_a({\rm{x}})$. The function $\psi$ arising in this change of variables depends only on the curvature and its covariant derivatives and, as shown in \cite[Equation~(2.6)]{D76}, satisfies
\begin{equation}\label{psi}\psi({\rm{x}})=1-\frac{1}{2}R_{i\alpha j\alpha}{\rm{x}}^i{\rm{x}}^j-\frac{1}{6}R_{ikjk}{\rm{x}}^i{\rm{x}}^j+O({\rm{x}}^3) \end{equation}
where the ${\rm{x}}^i$ are the coordinates of ${\rm{x}}$ with respect to an orthonormal basis of $(T_{a}Q)^\perp$ and where the components of the curvature tensor are evaluated at the point $a$.
Letting $\bar{{\rm{x}}}={\rm{x}}-d\gamma_a({\rm{x}})$, Donnelly shows that $d^2(x, \gamma(x))= \|\bar{{\rm{x}}}\|^2 +O(\|\bar{{\rm{x}}}\|^3)$ and then applies a version of the Morse Lemma, see \cite[Lemma A.1]{D76}, to find new coordinates ${\rm y}$ so that
\begin{equation*}
d^2(x, \gamma(x))=\sum_{i=1}^{s}{\rm y}^2_i=\|{\rm y}\|^2, \qquad s:=\operatorname{codim}(Q).
\end{equation*}
Making the two changes of variables ${\rm{x}}\to \bar{{\rm{x}}}\to {\rm y}$ on $(T_aQ)^\perp$, we have
$$d{\rm{x}} =|\det(B(a))|\,d\bar{{\rm{x}}}=|\det(B(a))||J(\bar{{\rm{x}}},{\rm y})|d{\rm y}$$
where
\begin{equation}\label{B} B(a)= ({\operatorname{{Id}}}_{\operatorname{codim}(Q)}-A_a)^{-1}.\end{equation}
Here $A_a$ is the restriction of $d\gamma_a$ to $(T_aQ)^\perp$ and $|J(\bar{{\rm{x}}},{\rm y})|$ denotes the absolute value of the Jacobian determinant for the second change of variables.
We are viewing $x=\exp_a({\rm{x}})$, ${\rm{x}}$ and $\bar{{\rm{x}}}$ as functions of the new variable ${\rm y}$. Donnelly shows that $|J(\bar{{\rm{x}}},{\rm y})|$ has a Taylor expansion in ${\rm y}$ whose coefficients depend only on the values at $a$ of $B$ and of the curvature tensor $R$ of $M$ and its covariant derivatives:
\begin{equation}\label{J}
|J(\bar{{\rm{x}}},{\rm y})|=1+\frac{1}{6}(R_{ikih}B_{ks}B_{ht} +R_{iksh}B_{ki}B_{ht}+R_{ikth}B_{ks}B_{hi}){\rm y}^s{\rm y}^t +O({\rm y}^3).
\end{equation}
Letting $H_a(f):(T_aQ)^\perp\to \mathbf R$ be given by
\[
H_a(f)({\rm y}):= f(x) |\det(B(a))|\,| J(\overline{{\rm{x}}}, {\rm y})| \,\psi({\rm{x}})
\]
we have
\begin{equation}\label{iqf} I_Q(f)=\int_Q\,\int_{\varphi^{-1}(a)}\,(4\pi t)^{-d/2}e^{-\|{\rm y}\|^2/4t}\,H_a(f)({\rm y})\,d{\rm y}\,dV_{Q}(a).\end{equation}
We expand $H_a(f)$ into its Maclaurin series. Because of symmetry, only the terms of even degree contribute to the integral $I_Q(f)$. Thus denoting by $[H_a(f)]_j({\rm y})$ the homogeneous component of degree $2j$ in the Maclaurin series of $H_a(f)$ and then making the change of variable ${\rm z}=\frac{{\rm y}}{\sqrt{t}}$, we have
\begin{multline*}\label{expand} I_Q(f)=(4\pi t)^{-\dim(Q)/2}\int_Q\,\int_{\mathbf R^{\operatorname{codim}(Q)}}\,(4\pi )^{-(\operatorname{codim}(Q))/2}e^{-\|{\rm z}\|^2/4}\sum_{k=0}^L\,t^k [H_a(f)]_k({\rm z})\,d{\rm z}\,dV_Q(a) \\+O(t^{L+1}).
\end{multline*}
(Here we are using the fact that $\varphi^{-1}(a)$ is a disk of radius $\frac{r}{\sqrt{t}}$ in ${\rm z}$, which can be replaced by $\mathbf R^{\operatorname{codim}(Q)}$ without changing the asymptotics.)
Proceeding exactly as in the proof of \cite[Theorem 4.1]{D76}, we complete the integration by doing an iterated integral over the ${\rm z}$ variables, noting that odd powers of any of the coordinates ${\rm z}_i$ lead to integrals of the form $\int_\mathbb{R}\,{\rm z}_i^{2k+1}e^{-({\rm z}_i^2)/4}\,d{\rm z}_i$ which are equal to zero, while the contributions from the terms of the form $\int_\mathbb{R}{\rm z}_i^{2m}e^{-({\rm z}_i^2)/4}\,d{\rm z}_i$ can be computed by using the classical formula
\begin{equation*}
\int_{\mathbb R}x^{2m}e^{-x^2}\,dx=\frac{1\cdot3\cdot5\cdots (2m-1)}{2^{m}}\sqrt{\pi}.
\end{equation*}
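Since the constant in this classical formula is easy to misprint, here is a quick stdlib-only numerical check (the quadrature window and step size are ad hoc choices of this sketch):

```python
from math import exp, sqrt, pi

def gaussian_moment(m, half_width=12.0, n=200_000):
    # Midpoint-rule approximation of \int_R x^(2m) e^(-x^2) dx;
    # the tail beyond |x| = 12 is negligible for small m.
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n):
        x = -half_width + (i + 0.5) * h
        total += x ** (2 * m) * exp(-x * x)
    return total * h

def odd_double_factorial(m):
    # 1 * 3 * 5 * ... * (2m - 1), with the empty product equal to 1.
    prod = 1
    for j in range(1, 2 * m, 2):
        prod *= j
    return prod

# Compare the quadrature against (2m-1)!! / 2^m * sqrt(pi).
for m in range(5):
    assert abs(gaussian_moment(m) - odd_double_factorial(m) / 2 ** m * sqrt(pi)) < 1e-5
```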
We are left with
\begin{equation*}
I_Q(f)=(4\pi t)^{-\dim(Q)/2}\sum_{k=0}^L\frac{t^k}{k!}\int_Q\, \Box^k_{{\rm y}}(H_a(f))(0)\,dV_Q(a) +O(t^{L+1}),
\end{equation*}
where, following the notation of Donnelly in \cite[Theorem 4.1]{D76}, we denote
\[
\Box_{\rm y}:=\sum_{i=1}^s\frac{\partial^2}{\partial {\rm y}_i^2}.
\]
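The computation just performed amounts to the identity $(4\pi)^{-s/2}\int_{\mathbf R^s} e^{-\|{\rm y}\|^2/4}P({\rm y})\,d{\rm y}=\sum_{k\ge 0} \Box_{\rm y}^k P(0)/k!$ for polynomials $P$. A numerical sanity check for $s=2$ and $P({\rm y})={\rm y}_1^2{\rm y}_2^2$ (the quadrature parameters are ad hoc):

```python
from math import exp, pi

def gaussian_average(f, half_width=10.0, n=800):
    # Midpoint-rule approximation of (4*pi)^(-1) * \int_{R^2} e^(-|y|^2/4) f(y) dy.
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n):
        y1 = -half_width + (i + 0.5) * h
        w1 = exp(-y1 * y1 / 4.0)
        for j in range(n):
            y2 = -half_width + (j + 0.5) * h
            total += w1 * exp(-y2 * y2 / 4.0) * f(y1, y2)
    return total * h * h / (4.0 * pi)

def P(y1, y2):
    # Box P = 2 y1^2 + 2 y2^2 and Box^2 P = 8, so the derivative side of
    # the identity gives 8 / 2! = 4.
    return (y1 * y2) ** 2

assert abs(gaussian_average(P) - 4.0) < 5e-3
```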
Thus we conclude:
\begin{prop}\label{f} For any compactly supported smooth function $f$ on $M$, we have
\begin{multline*}
\int_{M} \,(4\pi t) ^{-d/2}e^{-\frac{ d^2(x, \gamma(x))}{4t}}f(x) \,dV_{ M}(x)
\\
= \sum_{Q\in \mathcal{C}(Fix(\gamma))} (4\pi t)^{-\dim(Q)/2}\sum_{k=0}^L\frac{t^k}{k!}\int_Q\, \Box^k_{{\rm y}}\left(H_a(f)\right)(0)\,dV_Q(a) +O(t^{L+1})\end{multline*}
where $H_a(f)=\frac{f(x)\,| J(\overline{{\rm{x}}}, {\rm y})| \,\psi({\rm{x}})}{|\det({\operatorname{{Id}}}_{\operatorname{codim}(Q)}-A_a)|}$ with $\psi({\rm{x}})$ and $| J(\overline{{\rm{x}}}, {\rm y})|$ given as in Equations~\eqref{psi} and \eqref{J} and with $A_a$ being the restriction of $d\gamma_a$ to $(T_aQ)^\perp$.
\end{prop}
\medskip
To complete the proof of Theorem~\ref{bkthm} part~\ref{it:bkthm-global}, write
\begin{equation}\label{fjp}f_j^p(x)=C_{1,2} \gamma^*_{\cdot, 2} u^p_j(x,x).\end{equation}
Then
\begin{multline*}\int_{M} C_{1,2} \gamma^*_{\cdot, 2} K^p(t,x,x) dV_{M}(x) = \sum_{j=0}^L\, t^j\int_{M}\,(4\pi t) ^{-d/2} e^{-\frac{ d^2(x, \gamma(x))}{4t}}f_j^p(x)dV_{M}(x)+O(t^{L+1})\\
= \sum_{Q\in \mathcal{C}(Fix(\gamma))}\,(4\pi t)^{-\dim(Q)/2}\sum_{k=0}^{L} t^k\,\int_{Q} b^p_k(\gamma,a) dV_Q (a) +O(t^{L+1})
\end{multline*}
where
$$b^p_k(\gamma,a)=\sum_{j=0}^k\,\frac{1}{j!}\Box^j_{{\rm y}}\left(H_a(f^p_{k-j})\right)(0)$$
with $f^p_{k-j}$ defined as in Equation~\eqref{fjp}.
In particular, when $k=0$, we have
\begin{equation}\label{bop}
b^p_0(\gamma,a)= H_a(f_0^p)(0)=\frac{C_{1,2}\gamma^*_{\cdot, 2} u^p_0(a,a)}{\lvert \det({\operatorname{{Id}}}_{\operatorname{codim}(Q)}-A_a)\rvert} = \frac{\tr_p(d\gamma_a)}{\lvert \det({\operatorname{{Id}}}_{\operatorname{codim}(Q)}-A_a)\rvert}
\end{equation}
where $\tr_p(d\gamma_a)$ is defined as in Notation and Remarks \ref{nota:don} part~\ref{it:don-trace}. (The second equality follows from the fact that $u_0^p(a,a)$ is the identity transformation of $\Lambda^p (T^*_a M)$
as noted in Proposition~\ref{localparam} part~\ref{uzero}.)
Theorem~\ref{bkthm} part~\ref{it:bkthmlocal} also follows with
$$c^p_k(\eta,\gamma,a)=\sum_{j=0}^k\,\frac{1}{j!}\Box^j_{{\rm y}}\left(H_a(\eta f^p_{k-j})\right)(0).$$
\textbf{Funding}: This work was supported by the Banff International Research Station and Casa Matem\'atica Oaxaca [19w5115]; the Association for Women in Mathematics National Science Foundation ADVANCE grant [1500481]; and the Swiss Mathematical Society [SMS Travel Grant] for the travel of KG. IMS was supported by the Leverhulme Trust grant RPG-2019-055.
\textbf{Acknowledgments}:
We thank Carla Farsi for a helpful discussion of orbifold orientability. We also thank the Banff International Research Station and Casa Matem\'atica Oaxaca for initiating our collaboration by hosting the 2019 Women in Geometry Workshop. In addition, we thank the Association for Women in Mathematics and the organizers of the Women In Geometry Workshop for supporting this collaboration. IMS thanks Prof. Jacek Brodzki for providing her with excellent conditions to work.
\bibliographystyle{alpha}
https://arxiv.org/abs/1607.03014

The middle hedgehog of a planar convex body

A convexity point of a convex body is a point with the property that the union of the body and its reflection in the point is convex. It is proved that in the plane a typical convex body (in the sense of Baire category) has infinitely many convexity points. The proof makes use of the `middle hedgehog' of a planar convex body $K$, which is the curve formed by the midpoints of all affine diameters of $K$. The stated result follows from the fact that for a typical planar convex body the convex hull of the middle hedgehog has infinitely many exposed points.

\section{Introduction}\label{sec1}
The following question was posed to me by Shiri Artstein-Avidan: `Does every convex body $K$ in the plane have a point $z$ such that the union of $K$ and its reflection in $z$ is convex?' After some surprise about never having come across this simple question, and after some fruitless attempts to find counterexamples, this finally led to the following answer (\cite{Sch16}). Here we call the point $z$ a {\em convexity point} of $K$ if $(K-z)\cup(z-K)$ is convex.
\begin{theorem}\label{Thm1}
A convex body in the plane which is not centrally symmetric has three affinely independent convexity points.
\end{theorem}
A triangle and a Reuleaux triangle are examples of convex bodies with precisely three convexity points. This then raises the question whether the existence of just three convexity points is `typical'. We recall the meaning of this terminology. The space ${\mathcal K}^2$ of convex bodies in the plane with the Hausdorff metric is a complete metric space and hence a Baire space, that is, a topological space in which any intersection of countably many dense open sets is still dense. A subset of a Baire space is called {\em comeager} or {\em residual} if its complement is a {\em meager} set, that is, a countable union of nowhere dense sets (also said to be of {\em first Baire category}). The intersection of countably many comeager sets in a Baire space is still dense, which is a good reason to consider comeager sets as `large'. Therefore, one says that `most' convex bodies in the plane have a certain property, or that a `typical' planar convex body has this property, if the set of bodies with this property is comeager in ${\mathcal K}^2$. With this definition, we prove the following.
\begin{theorem}\label{Thm2}
A typical convex body in the plane has infinitely many convexity points.
\end{theorem}
A result from which this one follows will be formulated at the end of the next section, after some preparations.
For surveys on Baire category results in convexity, we refer the reader to Gruber \cite{Gru85, Gru93} and Zamfirescu \cite{Zam91, Zam09}.
\section{The middle hedgehog}\label{sec2}
We work in the Euclidean plane ${\mathbb R}^2$, with scalar product $\langle\cdot\,,\cdot\rangle$, induced norm $\|\cdot\|$ and unit circle ${\mathbb S}^1$. The set of convex bodies (nonempty, compact, convex subsets) in ${\mathbb R}^2$ is denoted by ${\mathcal K}^2$. We use the Hausdorff metric $\delta$, which is defined on all nonempty compact subsets of ${\mathbb R}^2$ (for notions from convex geometry not explained here, we refer to \cite{Sch14}). Let $K\in{\mathcal K}^2$ and $u\in{\mathbb S}^1$. By $H(K,u)$ we denote the supporting line of $K$ with outer unit normal vector $u$, and we call the line
$$ M_K(u):=\frac{1}{2}[H(K,u)+H(K,-u)]$$
the {\em middle line} of $K$ with normal vector $u$ (hence, $M_K(u)=M_K(-u)$). With $F(K,u):= K\cap H(K,u)$, which is the face of $K$ with outer normal vector $u$, we call the convex set
$$ Z_K(u):=\frac{1}{2}[F(K,u)+F(K,-u)]$$
(either a singleton or a segment) the {\em middle set} of $K$ with normal vector $u$. If $F(K,u)$ is one-pointed, we write $F(K,u)=\{x_K(u)\}$, and if also $F(K,-u)$ is one-pointed, then $Z_K(u)=\{m_K(u)\}$ with
$$ m_K(u)= \frac{1}{2}[x_K(u)+x_K(-u)].$$
We call $m_K(u)$ a {\em middle point} of $K$. The set
$$ {\mathcal M}_K:= \bigcup_{u\in{\mathbb S}^1} Z_K(u)$$
is the {\em middle hedgehog} of $K$. It is a closed curve, the locus of all midpoints of affine diameters, that is, chords of $K$ connecting pairs of boundary points lying in distinct parallel support lines.
The following lemma, proved in \cite{Sch16}, was crucial for the proof of Theorem \ref{Thm1}.
\begin{lemma}\label{Lem1}
Suppose that $K\in{\mathcal K}^2$ has no pair of parallel edges. Then each exposed point of the convex hull of the middle hedgehog ${\mathcal M}_K$ is a convexity point of $K$.
\end{lemma}
We consider special examples of middle hedgehogs. First, let $K$ be a convex polygon with no pair of parallel edges. For each edge $F(K,u)$ of $K$ we have $F(K,-u)=\{x_K(-u)\}$ and $Z_K(u)= (1/2)[F(K,u)+x_K(-u)]$. (Each middle point belongs to some $Z_K(u)$ with suitable $u$.) The union ${\mathcal M}_K$ of these segments, over all unit normal vectors of the edges, is a closed polygonal curve.
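For polygons this construction is easy to carry out directly from the definitions. The sketch below (function names and the sample triangles are our own) computes the middle sets of a convex polygon with no parallel edges; for a triangle, the middle hedgehog comes out as the boundary of the medial triangle:

```python
def support_set(poly, u, tol=1e-9):
    # Face F(K,u): the vertices of poly maximizing the inner product with u.
    vals = [p[0] * u[0] + p[1] * u[1] for p in poly]
    m = max(vals)
    return [p for p, v in zip(poly, vals) if v > m - tol]

def middle_sets(poly):
    # Middle sets Z(u) = (F(K,u) + F(K,-u))/2 over the outer edge normals of
    # poly (assumed convex, counterclockwise, with no pair of parallel edges).
    n = len(poly)
    sets = []
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        u = (y1 - y0, x0 - x1)          # outer normal of edge i for a ccw polygon
        edge = support_set(poly, u)
        opp = support_set(poly, (-u[0], -u[1]))
        assert len(opp) == 1            # no parallel edges: F(K,-u) is a vertex
        (q,) = opp
        sets.append([((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0) for p in edge])
    return sets

# For a triangle, the endpoints of the middle sets are exactly the three
# side midpoints, i.e. the vertices of the medial triangle.
tri = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]
pts = {p for seg in middle_sets(tri) for p in seg}
assert pts == {(2.0, 0.0), (0.5, 1.5), (2.5, 1.5)}
```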
Second, let $K\in{\mathcal K}^2$ be strictly convex. Then the support function of $K$, which we denote by $h(K,\cdot)$ or briefly $h_K$, is differentiable on ${\mathbb R}^2\setminus \{0\}$. To obtain a parametrization of ${\mathcal M}_K$, we choose an orthonormal basis $(e_1,e_2)$ of ${\mathbb R}^2$ and write
$$\mbox{\boldmath$u$}(\varphi):= (\cos\varphi)e_1+(\sin\varphi)e_2,\quad \varphi\in{\mathbb R};$$
then $(\mbox{\boldmath$u$}(\varphi),\mbox{\boldmath$u$}'(\varphi))$ is an orthonormal frame with the same orientation as $(e_1,e_2)$. We define
$$ {\bf x}(\varphi) := m_K(\mbox{\boldmath$u$}(\varphi))$$
and
\begin{equation}\label{2.1}
p(\varphi) := \frac{1}{2}\left[h_K(\mbox{\boldmath$u$}(\varphi))-h_K(-\mbox{\boldmath$u$}(\varphi))\right]
\end{equation}
for $\varphi\in[0,\pi]$. Note that ${\bf x}$ is a parametrized closed curve, since $m_K(u)=m_K(-u)$ for $u\in{\mathbb S}^1$. Since $h_K(u)=\langle x_K(u),u\rangle$, we have
\begin{equation}\label{2.2}
p(\varphi)=\langle {\bf x}(\varphi), \mbox{\boldmath$u$}(\varphi)\rangle.
\end{equation}
Differentiating (\ref{2.2}) and using that $x_K(u)= \nabla h_K(u)$ (where $\nabla$ denotes the gradient; see \cite{Sch14}, Corollary 1.7.3), we obtain
\begin{equation}\label{2.3}
p'(\varphi) =\langle {\bf x}(\varphi),\mbox{\boldmath$u$}'(\varphi)\rangle.
\end{equation}
Equations (\ref{2.2}) and (\ref{2.3}) together yield
$$ {\bf x}(\varphi) = p(\varphi) \mbox{\boldmath$u$}(\varphi) +p'(\varphi)\mbox{\boldmath$u$}'(\varphi),\quad \varphi\in[0,\pi].$$
This is a convenient parametrization of the middle hedgehog. The intersection point of the middle lines $M_K(\mbox{\boldmath$u$}(\varphi))$ and $M_K(\mbox{\boldmath$u$}(\varphi+\varepsilon))$ converges to $m_K(\mbox{\boldmath$u$}(\varphi))$ for $\varepsilon\to 0$, thus
${\bf x}$ is the envelope of the family of middle lines of $K$, suitably parametrized. We remark that generalized envelopes of more general line families were studied in \cite{HS53}.
We remark further that in the terminology of Martinez--Maure (see \cite{MM95, MM97}, for example, also \cite{MM99, MM06}), the curve ${\bf x}$ is a planar `projective hedgehog'. The set $\{{\bf x}(\varphi):\varphi\in[0,\pi)\}$ has been introduced and investigated as the `midpoint parallel tangent locus' in \cite{Hol01} and has been named the `area evolute' in \cite{Gib08}; a further study appears in \cite{Cra14}.
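This parametrization is easy to test numerically. The sketch below takes the concrete support function $h(\theta)=1+0.1\cos 3\theta$ (our choice; it satisfies $h+h''>0$, so it is the support function of a strictly convex body) and checks that $p(\varphi)\mbox{\boldmath$u$}(\varphi)+p'(\varphi)\mbox{\boldmath$u$}'(\varphi)$ agrees with the midpoint of antipodal support points:

```python
from math import cos, sin, pi

EPS = 0.1  # amplitude of the odd harmonic; h + h'' = 1 - 0.8 cos(3t) > 0

def h(theta):
    return 1.0 + EPS * cos(3.0 * theta)

def dh(theta):
    return -3.0 * EPS * sin(3.0 * theta)

def boundary_point(theta):
    # x_K(u(theta)) = h(theta) u(theta) + h'(theta) u'(theta).
    return (h(theta) * cos(theta) - dh(theta) * sin(theta),
            h(theta) * sin(theta) + dh(theta) * cos(theta))

def midpoint(theta):
    # m_K(u(theta)) = (x_K(u) + x_K(-u)) / 2.
    (a1, a2), (b1, b2) = boundary_point(theta), boundary_point(theta + pi)
    return ((a1 + b1) / 2.0, (a2 + b2) / 2.0)

def hedgehog(theta):
    # x(theta) = p u + p' u', with p = (h(theta) - h(theta + pi))/2 = EPS cos(3 theta).
    p, dp = EPS * cos(3.0 * theta), -3.0 * EPS * sin(3.0 * theta)
    return (p * cos(theta) - dp * sin(theta),
            p * sin(theta) + dp * cos(theta))

for i in range(50):
    t = i * pi / 50.0
    m, x = midpoint(t), hedgehog(t)
    assert abs(m[0] - x[0]) < 1e-12 and abs(m[1] - x[1]) < 1e-12
```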
According to Lemma \ref{Lem1} and the fact that a typical convex body is strictly convex, Theorem \ref{Thm2} is a consequence of the following result.
\begin{theorem}\label{Thm3}
For a typical convex body in the plane, the convex hull of the middle hedgehog has infinitely many exposed points.\end{theorem}
\section{Proof of Theorem \ref{Thm3}}\label{sec3}
By ${\mathcal K}^2_*$ we denote the set of strictly convex bodies in ${\mathcal K}^2$. The set ${\mathcal K}^2_*$ is a dense $G_\delta$ set in ${\mathcal K}^2$ and hence is also a Baire space. Every set that is comeager in ${\mathcal K}^2_*$ is also comeager in ${\mathcal K}^2$.
To begin with the proof of Theorem \ref{Thm3}, we set
$$ {\mathcal A}:= \{K\in{\mathcal K}^2_*: {\rm conv}{\mathcal M}_K \mbox{ has only finitely many exposed points}\}$$
and, for $k\in {\mathbb N}$,
$$ {\mathcal A}_k:= \{K\in{\mathcal K}^2_*: {\rm conv}{\mathcal M}_K \mbox{ has at most $k$ exposed points}\}.$$
We shall prove the following facts.
\begin{lemma}\label{Lem2}
Each set ${\mathcal A}_k$ is closed in ${\mathcal K}^2_*$.
\end{lemma}
\begin{lemma}\label{Lem3}
Each set ${\mathcal A}_k$ is nowhere dense in ${\mathcal K}^2_*$.
\end{lemma}
When this has been proved, then we know that the set ${\mathcal A}= \bigcup_{k\in{\mathbb N}} {\mathcal A}_k$ is meager. Hence its complement, which is the set of all $K\in{\mathcal K}^2_*$ for which ${\rm conv}{\mathcal M}_K$ has infinitely many exposed points, is comeager in ${\mathcal K}^2_*$ and hence in ${\mathcal K}^2$. This is the assertion of Theorem \ref{Thm3}.
\vspace{2mm}
\noindent{\em Proof of Lemma} \ref{Lem2}. First we show that on ${\mathcal K}^2_*$, the mapping $K\mapsto{\mathcal M}_K$ is continuous (this would not be true if ${\mathcal K}^2_*$ were replaced by ${\mathcal K}^2$).
Let $(K_i)_{i\in{\mathbb N}}$ be a sequence in ${\mathcal K}^2_*$ converging to some $K\in {\mathcal K}^2_*$. To show that ${\mathcal M}_{K_i}\to {\mathcal M}_K$ in the Hausdorff metric for $i\to\infty$, we use Theorem 1.8.8 of \cite{Sch14} (it is formulated for convex bodies, but as its proof shows, it holds for connected compact sets---or see Theorems 12.2.2 and 12.3.4 in \cite{SW08}).
Let $x\in{\mathcal M}_{K}$. Then there is a vector $u\in{\mathbb S}^1$ with $x=(1/2)[x_K(u)+x_K(-u)]$. The sequence $(x_{K_i}(u))_{i\in{\mathbb N}}$ has a convergent subsequence, and its limit is a boundary point of $K$ with outer normal vector $u$, hence equal to $x_K(u)$. Since this holds for every convergent subsequence, the sequence $(x_{K_i}(u))_{i\in{\mathbb N}}$ itself converges to $x_K(u)$. Similarly, the sequence $(x_{K_i}(-u))_{i\in{\mathbb N}}$ converges to $x_K(-u)$. It follows that $m_{K_i}(u) = (1/2)[x_{K_i}(u)+x_{K_i}(-u)]\to (1/2)[x_{K}(u)+x_{K}(-u)]=x$ for $i\to\infty$, and here $m_{K_i}(u)\in {\mathcal M}_{K_i}$. Thus, each point in ${\mathcal M}_K$ is the limit of a sequence $(m_i)_{i\in{\mathbb N}}$ with $m_i\in {\mathcal M}_{K_i}$ for $i\in{\mathbb N}$.
Let $x_{i(j)}\in{\mathcal M}_{K_{i(j)}}$ for a subsequence $(i(j))_{j\in{\mathbb N}}$, and suppose that $x_{i(j)}\to x$ for $j\to\infty$. Then $x_{i(j)}=(1/2)[x_{K_{i(j)}}(u_j)+x_{K_{i(j)}}(-u_j)]$ for suitable $u_j\in{\mathbb S}^1$ ($j\in{\mathbb N}$). There is a convergent subsequence of $(u_j)_{j\in{\mathbb N}}$, and we can assume that this is the sequence $(u_j)_{j\in{\mathbb N}}$ itself, say $u_j\to u$ for $j\to\infty$. Then $x_{K_{i(j)}}(u_j)\to x_K(u)$ and $x_{K_{i(j)}}(-u_j)\to x_K(-u)$, hence $x_{i(j)}\to (1/2)[x_K(u)+x_K(-u)]\in {\mathcal M}_K$. It follows that $x\in{\mathcal M}_K$. This completes the continuity proof for the mapping $K\mapsto{\mathcal M}_K$.
To show that ${\mathcal A}_k$ is closed in ${\mathcal K}^2_*$, let $(K_i)_{i\in{\mathbb N}}$ be a sequence in ${\mathcal A}_k$ converging to some $K\in {\mathcal K}^2_*$. As just shown, we have ${\mathcal M}_{K_i}\to {\mathcal M}_K$ and hence also ${\rm conv}{\mathcal M}_{K_i}\to {\rm conv}{\mathcal M}_K$ for $i\to\infty$, since the convex hull mapping is continuous (even Lipschitz, see \cite{Sch14}, p. 64). Since each ${\rm conv}{\mathcal M}_{K_i}$ is a convex polygon with at most $k$ vertices, also ${\rm conv}{\mathcal M}_{K}$ is a convex polygon with at most $k$ vertices, thus $K\in {\mathcal A}_k$. This completes the proof of Lemma \ref{Lem2}. \qed
To prepare the proof of Lemma \ref{Lem3}, we need to have a closer look at the middle hedgehog ${\mathcal M}_P$ of a convex polygon $P$. We assume in the following that $P$ has interior points and has no pair of parallel edges.
First, the unoriented normal directions of the edges of $P$ have a natural cyclic order. We may assume, without loss of generality, that no edge of $P$ is parallel to the basis vector $e_1$. Then there are angles $-\pi/2 < \varphi_1 < \varphi_2<\dots< \varphi_k< \pi/2$ such that, for each $i \in \{1,\dots,k\}$, either $\mbox{\boldmath$u$}(\varphi_i)$ or $-\mbox{\boldmath$u$}(\varphi_i)$ is an outer normal vector of an edge of $P$ (not both, since $P$ does not have a pair of parallel edges), and all unit normal vectors of the edges of $P$ are obtained in this way. We denote by $E_i$ the edge of $P$ that is orthogonal to $\mbox{\boldmath$u$}(\varphi_i)$. We call the pair $(E_i, E_{i+1})$ {\em consecutive} (where $E_{k+1} := E_1$; this convention is also followed below), and in addition we call it {\em adjacent} if $E_i\cap E_{i+1}$ is a vertex of $P$. For an angle $\psi\in[-\pi/2,\pi/2)$ we say that $\psi$ is {\em between} $\varphi_i$ and $\varphi_{i+1}$ if either $i\in \{1,\dots,k-1\}$ and $\varphi_i<\psi<\varphi_{i+1}$, or $i=k$ and either $-\pi/2 <\psi<\varphi_1$ or $\varphi_k<\psi\le \pi/2$. Let $(E_i, E_{i+1})$ be a consecutive pair. The following facts, to be used below, follow immediately from the definitions. If $\psi$ is between $\varphi_i$ and $\varphi_{i+1}$, then $\mbox{\boldmath$u$}(\psi)$ is not a normal vector of an edge of $P$. Suppose that, say, $\mbox{\boldmath$u$}(\varphi_i)$ is the outer normal vector of $E_i$. If $(E_i,E_{i+1})$ is adjacent, then $\mbox{\boldmath$u$}(\varphi_{i+1})$ is the outer normal vector of $E_{i+1}$. If $(E_i,E_{i+1})$ is not adjacent, then $\mbox{\boldmath$u$}(\varphi_{i+1})$ is the inner normal vector of $E_{i+1}$. These definitions of $E_i$ and $\varphi_i$ will be used in the rest of this note.
Now let $p$ and $q$ be {\em opposite} vertices of $P$, that is, vertices with $H(P,\mbox{\boldmath$u$}(\psi))\cap P=\{p\}$ and $H(P,-\mbox{\boldmath$u$}(\psi))\cap P=\{q\}$ for some $\psi$. After interchanging $p$ and $q$, if necessary, we can assume that $\psi\in[-\pi/2,\pi/2)$. Then there is a unique index $i\in\{1,\dots,k\}$ such that $\psi$ is between $\varphi_i$ and $\varphi_{i+1}$. The middle sets $Z_P(\mbox{\boldmath$u$}(\varphi_i))$ and $Z_P(\mbox{\boldmath$u$}(\varphi_{i+1}))$ have the midpoint $x=(p+q)/2$ in common. We say that $x$ is a {\em weak corner} of the middle hedgehog ${\mathcal M}_P$ if the pair $(E_i,E_{i+1})$ is adjacent, and $x$ is a {\em strong corner} of ${\mathcal M}_P$ if $(E_i,E_{i+1})$ is not adjacent. If $x$ is a weak corner, then the middle sets $Z_P(\mbox{\boldmath$u$}(\varphi_i))$ and $Z_P(\mbox{\boldmath$u$}(\varphi_{i+1}))$ lie on different sides of the line through $p$ and $q$, and if $x$ is a strong corner, then $Z_P(\mbox{\boldmath$u$}(\varphi_i))$ and $Z_P(\mbox{\boldmath$u$}(\varphi_{i+1}))$ lie on the same side of this line.
\vspace{-1.2cm}
\begin{center}
\resizebox{16cm}{!}{
\setlength{\unitlength}{1cm}
\begin{pspicture}(0,0)(14,11)
\psline[linewidth=1pt]{-}(6.8,0.5)(2.54,1.4)
\psline[linewidth=1pt]{-}(2.54,1.4)(1.04,4.62)
\psline[linewidth=1pt]{-}(1.04,4.62)(1.8,7.4)
\psline[linewidth=1pt]{-}(1.8,7.4)(8.24,10)
\psline[linewidth=1pt]{-}(8.24,10)(12.9,6.6)
\psline[linewidth=1pt]{-}(12.9,6.6)(12.7,4.3)
\psline[linewidth=1pt]{-}(12.7,4.3)(10.66,1.24)
\psline[linewidth=1pt]{-}(10.66,1.24)(6.8,0.5)
\psline[linewidth=0.4pt](6.8,0.5)(8.24,10)
\psline[linewidth=0.4pt](8.24,10)(2.54,1.4)
\psline[linewidth=0.4pt](8.24,10)(10.66,1.24)
\psline[linewidth=0.4pt](1.8,7.4)(12.7,4.3)
\psline[linewidth=0.4pt](1.8,7.4)(10.66,1.24)
\psline[linewidth=0.4pt](12.9,6.6)(1.04,4.62)
\psline[linewidth=0.4pt](12.9,6.6)(2.54,1.4)
\psline[linewidth=0.4pt](1.04,4.62)(12.7,4.3)
\psline{-}(5.39,5.7)(7.72,4)
\psline{-}(7.72,4)(6.97,5.61)
\psline{-}(6.97,5.61)(6.87,4.46)
\psline{-}(6.87,4.46)(7.25,5.83)
\psline{-}(7.25,5.83)(6.23,4.32)
\psline{-}(6.23,4.32)(9.45,5.62)
\psline{-}(9.45,5.62)(7.52,5.25)
\psline{-}(7.52,5.25)(5.39,5.7)
\rput(8.8,0.5){$E_1$}
\rput(4.6,8.95){$E_2$}
\rput(12,2.7){$E_3$}
\rput(1.05,6.1){$E_4$}
\rput(13.1,5.5){$E_5$}
\rput(1.5,2.9){$E_6$}
\rput(10.7,8.6){$E_7$}
\rput(4.4,0.65){$E_8$}
\end{pspicture}
}
\end{center}
\vspace{-2mm}
\noindent Figure 1: The middle hedgehog has one weak corner and seven strong corners, five of which are vertices of the convex hull.
\vspace{5mm}
\begin{lemma}\label{Lem4}
A weak corner of the middle hedgehog ${\mathcal M}_P$ is not a vertex of ${\rm conv}{\mathcal M}_P$.
\end{lemma}
\begin{proof}
We begin with an arbitrary vertex $x$ of ${\rm conv}{\mathcal M}_P$. Since ${\mathcal M}_P$ is the union of the finitely many middle sets $Z_P(\mbox{\boldmath$u$}(\varphi_i))$ (with $\varphi_i$ as above), the point $x$ must be one of the endpoints of these segments, thus $x$ is either a weak or a strong corner of ${\mathcal M}_P$.
We need to recall some facts from the proof of Lemma 6 in \cite{Sch16}. As there, we may assume, without loss of generality (after applying a rigid motion to $P$), that $x=0$ and that the orthonormal basis $(e_1,e_2)$ of ${\mathbb R}^2$ is such that
\begin{equation}\label{3.1}
\langle y,e_2\rangle >0 \quad \mbox{for each } y\in {\rm conv}{\mathcal M}_P \setminus\{0\}.
\end{equation}
Let $L$ be the line through $0$ that is spanned by $e_1$. For $\varphi\in (-\pi/2,\pi/2)$, the middle line $M_P(\mbox{\boldmath$u$}(\varphi))$ intersects the line $L$ in a point which we write as $f(\varphi)e_1$, thus defining a continuous function $f:(-\pi/2,\pi/2)\to{\mathbb R}$. It was shown in \cite{Sch16} that
$$ f(\varphi) =\frac{p(\varphi)}{\cos\varphi}.$$
At almost all $\varphi$, the functions $\varphi\mapsto h(P,\mbox{\boldmath$u$}(\varphi))$ and $\varphi\mapsto h(P,-\mbox{\boldmath$u$}(\varphi))$ are differentiable, hence the same holds for the function $f$, and where this holds, we have
\begin{equation}\label{3.2}
f'(\varphi)= \frac{\langle m_P(\mbox{\boldmath$u$}(\varphi)),e_2\rangle}{\cos^2\varphi},
\end{equation}
as shown in \cite{Sch16}.
We now first recall the rest of the proof of Lemma 6 in \cite{Sch16}, in a slightly simplified version. The claim to be proved is that
\begin{equation}\label{3.3}
0 \in M_P(\mbox{\boldmath$u$}(\varphi)) \mbox{ for some }\varphi\in(-\pi/2,\pi/2)\quad \Longrightarrow \quad 0\in Z_P(\mbox{\boldmath$u$}(\varphi)).
\end{equation}
By (\ref{3.2}) and (\ref{3.1}) we have $f'(\varphi)\ge 0$ for almost every $\varphi\in(-\pi/2,\pi/2)$. We conclude that the function $f$ (which is locally Lipschitz and hence the integral of its derivative) is weakly increasing on $(-\pi/2, \pi/2)$. Therefore, the set $I:=\{\varphi\in(-\pi/2, \pi/2): f(\varphi)=0\}$ is a closed interval (possibly one-pointed). Since $0\in{\mathcal M}_P$, there is some $\varphi_0\in (-\pi/2,\pi/2)$ with $0\in Z_P(\mbox{\boldmath$u$}(\varphi_0))$. If $I$ is one-pointed, then $I=\{\varphi_0\}$, and $0\notin M_P(\mbox{\boldmath$u$}(\varphi))$ for $\varphi\not=\varphi_0$. Thus, (\ref{3.3}) holds in this case. If $I$ is not one-pointed, then $f'(\varphi)=0$ for $\varphi \in {\rm relint}\,I$ and hence, by (\ref{3.2}) and (\ref{3.1}), $m_P(\mbox{\boldmath$u$}(\varphi))=0$ for $\varphi \in {\rm relint}\,I$. By continuity, we have $0\in Z_P(\mbox{\boldmath$u$}(\varphi))$ for all $\varphi\in I$. This shows that (\ref{3.3}) holds generally.
Now we can finish the proof of Lemma \ref{Lem4}. Suppose, to the contrary, that $0$ is a weak corner of ${\mathcal M}_P$. Then there is a consecutive, adjacent pair $(E_i,E_{i+1})$ of edges of $P$ such that $E_i\cap E_{i+1}=\{p\}$ for a vertex $p$ of $P$ and the line $H(P,\mbox{\boldmath$u$}(-\pi/2))$ supports $P$ at $p$. This is only possible if $(E_i,E_{i+1})= (E_k,E_{k+1})$. In this case, all the middle lines $M_P(\psi)$ with $\psi$ between $\varphi_k$ and $\varphi_{k+1}=\varphi_1$ pass through $0$. This means that the function $f$ defined above satisfies $f(\varphi)=0$ for $-\pi/2<\varphi \le \varphi_1$ and for $\varphi_k\le\varphi<\pi/2$. But since $f$ is increasing, it must then vanish identically, which is a contradiction, since $P$ is not centrally symmetric. This contradiction completes the proof of Lemma \ref{Lem4}.
\end{proof}
\vspace{2mm}
\noindent{\em Proof of Lemma} \ref{Lem3}. Let $k\in{\mathbb N}$. Since ${\mathcal A}_k$ is closed by Lemma \ref{Lem2}, the proof that ${\mathcal A}_k$ is nowhere dense amounts to showing that ${\mathcal A}_k$ has empty interior in ${\mathcal K}^2_*$. For this, let $K\in {\mathcal A}_k$ and $\varepsilon>0$ be given. We show that the $\varepsilon$-neighborhood of $K$ contains an element of ${\mathcal K}^2_*\setminus {\mathcal A}_k$.
In a first step, we choose a convex polygon $P$ with
\begin{equation}\label{3.0}
K\subset {\rm int} P,\quad P \subset {\rm int}(K+\varepsilon B^2),
\end{equation}
where $B^2$ denotes the closed unit disc of ${\mathbb R}^2$. We can do this in such a way that $P$ satisfies the following assumptions. First, $P$ has no pair of parallel edges. Second, $P$ has no `long' edge, by which we mean an edge the endpoints of which are opposite points of $P$. The goal of the following is to perform small changes on the polygon $P$ so that the number of vertices of ${\rm conv}{\mathcal M}_P$ is increased.
Let $x$ be a vertex of ${\rm conv}{\mathcal M}_P$. It is a corner of ${\mathcal M}_P$, and by Lemma \ref{Lem4} a strong corner. Therefore, there is a consecutive, non-adjacent edge pair $(E_i,E_{i+1})$ of $P$ and there are an endpoint $p$ of $E_i$ and an endpoint $q$ of $E_{i+1}$ such that $x=(p+q)/2$.
We position $P$ and choose the orthonormal basis $(e_1,e_2)$ in such a way that $x=0$, that $e_1$ is a positive multiple of $q$, and that $\langle y,e_2\rangle\ge 0$ for all $y\in E_i\cup E_{i+1}$ (note that $E_i$ and $E_{i+1}$ lie on the same side of the line through $p$ and $q$, since $0$ is a strong corner of ${\mathcal M}_P$).
We may assume (the other case is treated similarly) that $\mbox{\boldmath$u$}(\varphi_i)$ is the inner normal vector of $E_i$; then $\mbox{\boldmath$u$}(\varphi_{i+1})$ is the outer normal vector of $E_{i+1}$. Let $E_j\not= E_i$ be the other edge of $P$ with endpoint $p$, and let $E_m\not= E_{i+1}$ be the other edge of $P$ with endpoint $q$. The edges $E_j$ and $E_m$ do not lie in the line through $p$ and $q$, since $P$ has no long edge. We have $\varphi_m<\varphi_i< \varphi_{i+1} < \varphi_j$, since $\mbox{\boldmath$u$}(\psi)$ with $\psi$ between $\varphi_i$ and $\varphi_{i+1}$ is not a normal vector of an edge of $P$.
\vspace{-1.2cm}
\begin{center}
\resizebox{15cm}{!}{
\setlength{\unitlength}{1cm}
\begin{pspicture}(0,0)(14,11)
\psline[linewidth=1pt]{*-*}(1.3,3.64)(12,3.64)
\psline[linewidth=1pt]{-}(7.6,0.75)(12,3.64)
\psline[linewidth=1pt]{-}(5,0.75)(1.3,3.64)
\psline[linewidth=0.5pt]{*-}(6.65,3.64)(5.4,6.5)
\psline[linewidth=0.5pt]{-}(6.65,3.64)(7.7,6.5)
\psline[linestyle=dashed]{-}(4.5,2.7)(8.8,4.59)
\psline[linewidth=1pt]{-}(1.3,3.64)(3.5,9.2)
\psline[linewidth=1pt]{*-*}(2.78,2.5)(2.95,7.81)
\psline[linewidth=1pt]{*-}(9.199,1.8)(10.4,8.9)
\psline[linewidth=1pt]{-}(9.14,5.3)(12.7,6.94)
\psline[linewidth=1pt]{-}(12,3.64)(9.6,9.36)
\psline[linewidth=1pt]{-}(9.1,8.73)(12.7,6)
\psline{*-}(10.22,7.87)(10.22,7.87)
\psline[linewidth=1pt]{-*}(9.199,1.8)(10.702,6.734)
\psline{*-}(11.91,6.59)(11.91,6.59)
\psline{*-}(10.45,5.9)(10.45,5.9)
\rput(3.3,9.6){$E_i$}
\rput(10,9.6){$E_{i+1}$}
\rput(4.1,0.8){$E_j$}
\rput(8.4,0.8){$E_m$}
\rput(0.9,3.6){$p$}
\rput(12.3,3.6){$q$}
\rput(2.2,7.9){$p+t_2$}
\rput(11,7.9){$q+s_2$}
\rput(2,2.3){$p+t_1$}
\rput(9.9,1.7){$q+s_1$}
\rput(10.8,6.9){$q+s$}
\rput(13.3,6.5){$q+s_2+t_1$}
\rput(11.42,5.7){$q+s_1+t_2$}
\rput(6.65,3.3){$0$}
\rput(8,4.6){$S$}
\rput(10.4,4.6){$L_q$}
\end{pspicture}
}
\end{center}
\vspace{-8mm}
\noindent Figure 2: The vertices $p$ and $q$ are cut off by new edges, in the figure with endpoints $p+t_1, p+t_2$, respectively $q+s_1,q+s_2$.
\vspace{5mm}
By assumption, $0$ is a vertex of ${\rm conv}{\mathcal M}_P$. Therefore, there is a support line $S$ of ${\rm conv}{\mathcal M}_P$ whose intersection with ${\rm conv}{\mathcal M}_P$ is $\{0\}$. Since $S$ also supports the convex hull of $Z_P(\mbox{\boldmath$u$}(\varphi_i))$ and $Z_P(\mbox{\boldmath$u$}(\varphi_{i+1}))$, the outer unit normal vector $\mbox{\boldmath$u$}(\alpha)$ of the support line $S$ (with respect to ${\rm conv}{\mathcal M}_P$) has an angle $\alpha$ that satisfies either $-\pi/2 \le \alpha <\varphi_i$ or $\varphi_{i+1}-\pi/2<\alpha < -\pi/2$. We assume that $-\pi/2 \le \alpha <\varphi_i$; the other case is treated analogously, with the roles of $p,E_i,E_j$ and $q,E_{i+1},E_m$ interchanged.
In the following, $t_1$ and $t_2$ denote vectors such that $p+t_1\in E_j$ and $p+t_2\in E_i$. For such vectors, let $\psi_p=\psi_p(t_1,t_2)$ with $\varphi_i < \psi_p < \varphi_{i+1}$ be the angle for which $\mbox{\boldmath$u$}(\psi_p)$ is orthogonal to the line through $p+t_1$ and $p+t_2$. Trivially, there are a constant $c>0$ and a continuous function $\gamma: [\varphi_i, \varphi_{i+1}] \to {\mathbb R}^+$ with $\lim_{\psi\to \varphi_i}\gamma(\psi) =0$ such that
\begin{align}\label{3.6}
\|t_1\| <c\|t_2\|\quad &\Longrightarrow \quad \psi_p(t_1,t_2) <\varphi_{i+1},\\
\label{3.7}
\psi\in [\varphi_i, \varphi_{i+1}]\mbox{ and } \|t_1\| >\gamma(\psi) \|t_2\|\quad & \Longrightarrow \quad \psi_p(t_1,t_2)> \psi.
\end{align}
Let $L_q$ be a line parallel to $E_i$ and strongly separating $q$ from the other endpoints of $E_{i+1}$ and $E_m$. This line intersects $E_m$ in a point $q+s_1$, and it intersects $E_{i+1}$ in a point $q+s$. We choose the line $L_q$ so close to $q$ that the vector $t:= s-s_1$ satisfies $p+t\in E_i$.
Let $0<\tau<1$, and let $\sigma>1$ be such that $q+\sigma s\in E_{i+1}$. The line through the point $q+s_1+\tau t$ parallel to the support line $S$ and the line through $q+\sigma s$ parallel to $E_j$ intersect in a point $q+\sigma s+t_1$. This defines a vector function $t_1=t_1(\tau,\sigma)$, with the property that $\|t_1\|$ is strictly increasing in $\sigma$. For $\tau,\sigma\to 1$ we have $\|t_1\|\to 0$; in particular, $p+t_1\in E_j$ if $\tau,\sigma$ are sufficiently close to $1$. Therefore, we can fix $t_2=\tau t $ (so that $t_1$ now depends only on $\sigma$) and choose $\sigma_0>1$ such that
$$ p+t_1(\sigma)\in E_j\quad\mbox{and}\quad\|t_1(\sigma)\|<c\|t_2\|\quad\mbox{for } 1<\sigma\le \sigma_0.$$
Let $\psi_q(\sigma)\in(\varphi_i,\varphi_{i+1})$ be the angle for which $\mbox{\boldmath$u$}(\psi_q)$ is orthogonal to the line through $q+s_1$ and $q+\sigma s$. We choose $\sigma_1$ with $1<\sigma_1<\sigma_0$ so close to $1$ that
$$ \gamma(\psi_q(\sigma_1)) < \|t_1(\sigma_1)\|/\|t_2\|,$$
which is possible because of $\lim_{\psi\to \varphi_i}\gamma(\psi) =0$ and $\lim_{\sigma\to 1}\|t_1(\sigma)\|>0$. For $\sigma\in (\sigma_1,\sigma_0]$ sufficiently close to $\sigma_1$ we then have
$$ \gamma(\psi_q(\sigma)) < \|t_1(\sigma)\|/\|t_2\| \le \|t_1(\sigma_0)\|/\|t_2\| <c.$$
Therefore, by (\ref{3.6}) and (\ref{3.7}), the angles $\psi_q=\psi_q(\sigma)$ and $\psi_p=\psi_p(t_1(\sigma), t_2)$ satisfy
\begin{equation}\label{3.9}
\varphi_i <\psi_q <\psi_p<\varphi_{i+1}.
\end{equation}
In the following we write $t_1(\sigma)=t_1$ and $\sigma s= s_2$.
Now we choose a number $0<\lambda<1$ and replace $p+t_1, p+t_2, q+s_1, q+s_2$ respectively by $p+\lambda t_1, p+ \lambda t_2, q+\lambda s_1, q+\lambda s_2$. This does not change the angles $\psi_p,\psi_q$. We replace $P$ by the polygon $P_\lambda$ that is the convex hull of the points $p+\lambda t_1, p+ \lambda t_2, q+\lambda s_1, q+\lambda s_2$ and of the vertices of $P$ different from $p$ and $q$. By choosing $\lambda$ sufficiently small, we can achieve that still
$$ K\subset {\rm int} P_\lambda.$$
Note that $P_\lambda \subset {\rm int}(K+\varepsilon B^2)$ holds trivially.
By decreasing $\lambda$ further, if necessary, we can also achieve that ${\rm conv}{\mathcal M}_{P_\lambda}$ has more vertices than ${\rm conv}{\mathcal M}_P$, as we now show. First we notice that the inequalities (\ref{3.9}) imply that $p+\lambda t_2$ and $q+\lambda s_1$ are opposite vertices of $P_\lambda$ and that also $p+\lambda t_1$ and $q+\lambda s_2$ are opposite vertices of $P_\lambda$. Hence,
$$\frac{1}{2}(p+\lambda t_2 + q+\lambda s_1) = \frac{\lambda}{2}(s_1+t_2)=:y_\lambda$$
and
$$ \frac{1}{2}(p+\lambda t_1 +q+\lambda s_2)= \frac{\lambda}{2}(s_2+t_1)=: z_\lambda$$
are strong corners of ${\mathcal M}_{P_\lambda}$. By construction,
\begin{equation}\label{3.4}
z_\lambda-y_\lambda = \frac{\lambda}{2}[(q+s_2+t_1)-(q+s_1+t_2)] \quad\mbox{is parallel to } S.
\end{equation}
Since the non-zero vectors $s_1-s_2$ and $t_1-t_2$ have different directions, we have $y_\lambda\not= z_\lambda$.
Let $v_0=0,v_1,\dots,v_r$ be the vertices of ${\rm conv}{\mathcal M}_P$. They are corner points of ${\mathcal M}_P$. For each $i\in\{0,\dots,r\}$ we choose a line $L_i$ that strongly separates $v_i$ from the other vertices; the particular line $L_0$ is chosen parallel to the support line $S$. We can choose a number $\eta>0$ such that, for any $\bar v_0,\dots,\bar v_r \in {\mathbb R}^2$ with $\|\bar v_i- v_i\| < \eta$ for $i=0,\dots,r$, the line $L_i$ strongly separates $\bar v_i$ from the points $\bar v_j\not= \bar v_i$. Then we can further decrease $\lambda$ so that the $\eta$-neighborhood of each $v_i$, $i=1,\dots,r$, contains at least one corner point of ${\mathcal M}_{P_\lambda}$, and that the $\eta$-neighborhood of $0$ contains the points $y_\lambda$ and $z_\lambda$. Since $L_0$ is parallel to $S$, it follows from (\ref{3.4}) that $y_\lambda$ and $z_\lambda$ are both vertices of ${\rm conv}{\mathcal M}_{P_\lambda}$. Thus, ${\rm conv}{\mathcal M}_{P_\lambda}$ has more vertices than ${\rm conv}{\mathcal M}_P$.
Since (\ref{3.0}) with $P$ replaced by $P_\lambda$ still holds, we can repeat the procedure. After finitely many steps, we obtain a polygon $Q$ with
\begin{equation}\label{3.5}
K\subset {\rm int}\, Q,\quad Q \subset {\rm int}(K+\varepsilon B^2)
\end{equation}
for which ${\mathcal M}_{Q}$ has more than $k$ vertices. Finally, we replace $Q$ by a strictly convex body $M$, by replacing each edge of $Q$ by a circular arc of large positive radius $R$. If $R$ is large enough, then (\ref{3.5}), with $Q$ replaced by $M$, still holds, and the number of vertices of ${\rm conv}{\mathcal M}_M$ is the same as for ${\rm conv}{\mathcal M}_{Q}$. Thus, in the $\varepsilon$-neighborhood of $K$ we have found an element of ${\mathcal K}^2_*\setminus {\mathcal A}_k$. \qed
% https://arxiv.org/abs/1607.03014 -- "The middle hedgehog of a planar convex body"
% https://arxiv.org/abs/math/0501230
\title{Crossings and Nestings of Matchings and Partitions}
\begin{abstract}
We present results on the enumeration of crossings and nestings for matchings and set partitions. Using a bijection between partitions and vacillating tableaux, we show that if we fix the sets of minimal block elements and maximal block elements, the crossing number and the nesting number of partitions have a symmetric joint distribution. It follows that the crossing numbers and the nesting numbers are distributed symmetrically over all partitions of $[n]$, as well as over all matchings on $[2n]$. As a corollary, the number of $k$-noncrossing partitions is equal to the number of $k$-nonnesting partitions. The same is also true for matchings. An application is given to the enumeration of matchings with no $k$-crossing (or with no $k$-nesting).
\end{abstract}
\section{Introduction}
A (complete) matching on $[2n]=\{1,2,\dots,2n\}$ is a partition of
$[2n]$ of type $(2,2, \dots, 2)$. It can be represented by listing its
$n$ blocks, as $\{(i_1, j_1), (i_2, j_2), \dots, (i_n, j_n)\}$
where $i_r < j_r$ for $ 1 \leq r \leq n$. Two blocks (also called
arcs)
$(i_r, j_r)$ and $(i_s, j_s)$ form a \emph{crossing} if $i_r < i_s <
j_r < j_s$;
they form a \emph{nesting} if $i_r < i_s < j_s < j_r$.
It is well-known that the number of matchings on $[2n]$ with no
crossings (or with no nestings) is given by the $n$-th Catalan
number
\[
C_n= \frac{1}{n+1} \binom{2n}{n}.
\]
See \cite[Exercise 6.19]{Stanley99} for many combinatorial interpretations
of Catalan numbers, where item (o) is for noncrossing matchings, and item
(ww) can be viewed as nonnesting matchings, in which the blocks of
the matching are the columns of the standard Young tableaux of shape
$(n,n)$. Nonnesting matchings are also one of the items of
\cite{Stanley_web}.
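As a quick empirical check of these Catalan counts (a brute-force sketch, not part of the original argument; the helper names are ours), one can enumerate all $(2n-1)!!$ matchings on $[2n]$ for small $n$:

```python
from itertools import combinations
from math import comb

def matchings(points):
    """Yield every perfect matching of `points` as a list of arcs (i, j) with i < j."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for t in range(len(rest)):
        for rest_matching in matchings(rest[:t] + rest[t + 1:]):
            yield [(first, rest[t])] + rest_matching

def crosses(a, b):
    (i, j), (k, m) = sorted([a, b])
    return i < k < j < m

def nests(a, b):
    (i, j), (k, m) = sorted([a, b])
    return i < k < m < j

def catalan(n):
    return comb(2 * n, n) // (n + 1)

for n in range(1, 6):
    ms = list(matchings(list(range(1, 2 * n + 1))))
    noncrossing = sum(1 for m in ms
                      if not any(crosses(a, b) for a, b in combinations(m, 2)))
    nonnesting = sum(1 for m in ms
                     if not any(nests(a, b) for a, b in combinations(m, 2)))
    assert noncrossing == nonnesting == catalan(n)
```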
Let $k \geq 2$ be an integer. A \emph{$k$-crossing} of a matching $M$ is a
set of $k$ arcs $(i_{r_1}, j_{r_1})$, $(i_{r_2}, j_{r_2}), \dots$,
$(i_{r_k}, j_{r_k})$ of $M$ such that $i_{r_1} <i_{r_2} < \cdots <
i_{r_k} < j_{r_1} < j_{r_2} < \cdots < j_{r_k}$.
A matching without any
$k$-crossing is a \emph{$k$-noncrossing matching.} Similarly, a
$k$-nesting is a set of $k$ arcs $(i_{r_1}, j_{r_1}), (i_{r_2},
j_{r_2})$, $\dots$, $(i_{r_k}, j_{r_k})$ of $M$ such that $i_{r_1}
<i_{r_2}$ $<$ $\cdots$ $<$ $i_{r_k} < j_{r_k}< \cdots < j_{r_2} < j_{r_1}$.
A matching without any $k$-nesting is a \emph{$k$-nonnesting matching.}
The enumeration of crossings and nestings of matchings has been studied
for the cases $k=2$ and $k=3$. For $k=2$, in addition to the
above results on Catalan numbers, the distribution of the number
of $2$-crossings has been studied by Touchard \cite{Touchard52},
and later more explicitly by Riordan \cite{Riordan75}, who gave a
generating function. M. de Sainte-Catherine \cite{SC83} proved
that $2$-crossings and $2$-nestings are identically distributed
over all matchings of $[2n]$, i.e., the number of matchings with
$r$ $2$-crossings is equal to the number of matchings with $r$
$2$-nestings.
The enumeration of 3-nonnesting matchings was first studied by
Gouyou-Beauchamps \cite{GB89}, who gave a bijection
between involutions with no decreasing sequence of length 6 and pairs
of noncrossing Dyck left factors by a recursive construction. His
bijection is essentially a correspondence between 3-nonnesting
matchings and pairs of noncrossing Dyck paths, where a matching can
also be considered as a fixed-point-free involution. We observed that
the number of 3-noncrossing matchings also equals the number of
pairs of noncrossing Dyck paths, and a one-to-one correspondence
between 3-noncrossing matchings and pairs of noncrossing Dyck paths
can be built recursively.
In this paper, we extend the above results. Let $\mathrm{cr}(M)$ be
the maximal $i$ such that $M$ has an $i$-crossing, and $\mathrm{ne}(M)$ the
maximal $j$ such that $M$ has a $j$-nesting. Denote by
$f_n(i,j)$ the number of matchings $M$ on $[2n]$ with $\mathrm{cr}(M)=i$
and $\mathrm{ne}(M)=j$. We shall prove that $f_n(i,j)=f_n(j,i)$. As a
corollary, the number of matchings on $[2n]$ with $\mathrm{cr}(M)=k$
equals the number of matchings $M$ on $[2n]$ with $\mathrm{ne}(M)=k$.
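Before the proof, the symmetry $f_n(i,j)=f_n(j,i)$ can be confirmed by brute force for small $n$ (a sketch with our own helper names; $\mathrm{cr}$ and $\mathrm{ne}$ are computed by checking every subset of arcs):

```python
from itertools import combinations
from collections import Counter

def matchings(points):
    """Yield every perfect matching of `points` as a list of arcs (i, j) with i < j."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for t in range(len(rest)):
        for rest_matching in matchings(rest[:t] + rest[t + 1:]):
            yield [(first, rest[t])] + rest_matching

def max_chain(m, nesting):
    """cr(M) for nesting=False, ne(M) for nesting=True."""
    arcs = sorted(m)
    best = 1
    for k in range(2, len(arcs) + 1):
        for sub in combinations(arcs, k):   # left endpoints automatically increase
            rights = [j for _, j in sub]
            if rights == sorted(rights, reverse=nesting) and \
               max(i for i, _ in sub) < min(rights):
                best = k
    return best

for n in range(2, 5):
    dist = Counter((max_chain(m, False), max_chain(m, True))
                   for m in matchings(list(range(1, 2 * n + 1))))
    assert all(dist[(j, i)] == c for (i, j), c in dist.items())
```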
Our construction applies to a more general structure, viz., partitions
of a set. Given a partition $P$ of $[n]$, denoted by $P \in \Pi_n$, we
represent $P$ by a graph on the vertex set $[n]$ whose edge set
consists of arcs connecting the elements of each block in numerical
order. Such an edge set is called the \emph{standard representation}
of the partition $P$. For example, the standard representation of
1457-26-3 is $\{(1,4), (4, 5), (5, 7), (2,6)\}$. Here we always write
an arc $e$ as a pair $(i,j)$ with $i < j$, and say that $i$ is the
\emph{lefthand endpoint} of $e$ and $j$ is the \emph{righthand
endpoint} of $e$.
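In code, the standard representation is immediate (a small sketch; `standard_representation` is our own name):

```python
def standard_representation(blocks):
    """Arcs joining consecutive elements of each block in numerical order."""
    arcs = []
    for block in blocks:
        b = sorted(block)
        arcs += [(b[t], b[t + 1]) for t in range(len(b) - 1)]
    return sorted(arcs)

# The partition 1457-26-3 from the text:
assert standard_representation([{1, 4, 5, 7}, {2, 6}, {3}]) == \
    [(1, 4), (2, 6), (4, 5), (5, 7)]
```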
Let $k \geq 2$ and $P \in \Pi_n$. Define a \emph{$k$-crossing} of
$P$ as a $k$-subset $(i_1, j_1), (i_2, j_2), \dots, (i_k, j_k)$ of
the arcs in the standard representation of $P$ such that $i_1 <
i_2 < \cdots < i_k < j_1 < j_2 < \cdots < j_k$. Let $\mathrm{cr}(P)$ be
the maximal $k$ such that $P$ has a $k$-crossing. Similarly,
define a \emph{$k$-nesting} of $P$ as a $k$-subset $(i_1, j_1),
(i_2, j_2), \dots, (i_k, j_k)$ of the set of arcs in the standard
representation of $P$ such that $i_1 < i_2 < \cdots < i_k < j_k <
\cdots < j_2 < j_1$, and $\mathrm{ne}(P)$ the maximal $j$ such that $P$
has a $j$-nesting. Note that when restricted to complete
matchings, these definitions agree with the ones given before.
Let $g_n(i,j)$ be the number of partitions $P$ of $[n]$ with
$\mathrm{cr}(P)=i$ and $\mathrm{ne}(P)=j$. We shall prove that $g_n(i,j)=g_n(j,i)$, for
all $i,j$ and $n$. In fact, our result is much stronger. We present
a generalization which
implies the symmetric distribution
of $\mathrm{cr}(P)$ and $\mathrm{ne}(P)$ over all partitions in $\Pi_n$,
as well as over all complete matchings on $[2n]$.
To state the main result, we need some notation.
Given $P \in \Pi_n$, define
\begin{eqnarray*}
\min(P)=\{ \text{minimal block elements of $P$}\}, \\
\max(P)=\{ \text{maximal block elements of $P$}\}.
\end{eqnarray*}
For example, for $P=\mbox{135-26-4}$, $\min(P) =\{1,2,4\}$ and
$\max(P)=\{4,5,6\}$. The pair $(\min(P), \max(P))$ encodes some useful
information about the partition $P$. For example, the number of blocks
of $P$ is $|\min(P)|=|\max(P)|$; the number of singleton blocks is
$|\min(P) \cap \max(P)|$; $P$ is a (partial) matching if and only if
$\min(P) \cup \max(P)=[n]$, and $P$ is a
complete matching if in addition, $\min(P) \cap \max(P)=\emptyset$.
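These observations are easy to confirm mechanically (a small sketch with our own helper name):

```python
def min_max(blocks):
    """Return (min(P), max(P)) for a partition given as a collection of blocks."""
    return {min(b) for b in blocks}, {max(b) for b in blocks}

# The partition P = 135-26-4 from the text:
mn, mx = min_max([{1, 3, 5}, {2, 6}, {4}])
assert mn == {1, 2, 4} and mx == {4, 5, 6}
assert len(mn & mx) == 1                 # exactly one singleton block, namely {4}
assert mn | mx != set(range(1, 7))       # P is not a (partial) matching: 3 is interior
```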
Fix $S, T \subseteq [n]$ with $|S|=|T|$. Let $P_n(S, T)$ be the
set $\{ P \in \Pi_n: \min(P)=S, \max(P)=T\}$, and
$f_{n, S, T}(i, j)$ be the cardinality of the set
$\{ P \in P_n(S, T): \mathrm{cr}(P)=i, \mathrm{ne}(P)=j\}$.
\begin{theorem} \label{thm1}
\begin{eqnarray}\label{eq1}
f_{n, S, T}(i,j)=f_{n, S, T}(j, i).
\end{eqnarray}
In other words,
\begin{eqnarray} \label{dist}
\sum_{P \in P_n(S, T)} x^{\mathrm{cr}(P)}y^{\mathrm{ne}(P)} =
\sum_{P \in P_n(S, T)} x^{\mathrm{ne}(P)}y^{\mathrm{cr}(P)}.
\end{eqnarray}
That is, the statistics $\mathrm{cr}(P)$ and $\mathrm{ne}(P)$ have a symmetric
joint distribution over each set $P_n(S, T)$. $\Box$
\end{theorem}
Summing over all pairs $(S, T)$ in \eqref{eq1}, we get
\begin{eqnarray}\label{eq2}
g_n(i,j)=g_n(j,i).
\end{eqnarray}
We say that a partition $P$ is \emph{$k$-noncrossing} if $\mathrm{cr}(P)<k$.
It is \emph{ $k$-nonnesting} if $\mathrm{ne}(P) <k$.
Let $\mathrm{NCN}_{k,l}(n)$ be the number of partitions of $[n]$ that are
$k$-noncrossing and $l$-nonnesting. Summing over $1 \leq i < k$ and
$1 \leq j <l $ in \eqref{eq2}, we get the following corollary.
\begin{cor}\label{ncn}
$\mathrm{NCN}_{k,l}(n)=\mathrm{NCN}_{l,k}(n). \qquad \Box$
\end{cor}
Letting $l>n $, Corollary \ref{ncn} becomes the following result.
\begin{cor} \label{nc_nn}
$\mathrm{NC}_k(n)=\mathrm{NN}_k(n)$, where $\mathrm{NC}_k(n)$ is
the number of $k$-noncrossing partitions of $[n]$, and
$\mathrm{NN}_k(n)$ is the number of
$k$-nonnesting partitions of $[n]$. ~~~~$\Box$
\end{cor}
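Corollaries \ref{ncn} and \ref{nc_nn} can be checked by exhaustive enumeration for small $n$ (a brute-force sketch; all function names are ours):

```python
from itertools import combinations

def set_partitions(n):
    """Yield all partitions of [n] as lists of blocks (each block a sorted list)."""
    if n == 0:
        yield []
        return
    for p in set_partitions(n - 1):
        for b in range(len(p)):
            yield p[:b] + [p[b] + [n]] + p[b + 1:]
        yield p + [[n]]

def stat(p, nesting):
    """cr(P) for nesting=False, ne(P) for nesting=True, via the standard representation."""
    arcs = sorted((block[t], block[t + 1])
                  for block in p for t in range(len(block) - 1))
    best = 1
    for k in range(2, len(arcs) + 1):
        for sub in combinations(arcs, k):
            rights = [j for _, j in sub]
            if rights == sorted(rights, reverse=nesting) and \
               max(i for i, _ in sub) < min(rights):
                best = k
    return best

def NC(k, n):
    return sum(1 for p in set_partitions(n) if stat(p, False) < k)

def NN(k, n):
    return sum(1 for p in set_partitions(n) if stat(p, True) < k)

for n in range(1, 7):
    for k in range(2, 5):
        assert NC(k, n) == NN(k, n)
assert NC(2, 4) == 14   # noncrossing partitions of [4]: all of Bell(4)=15 except 13-24
```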
Theorem \ref{thm1} also applies to complete matchings.
A partition $P$ of $[2n]$
is a complete matching if and only if $|\min(P)|=|\max(P)|=n$ and
$\min(P) \cap \max(P)=\emptyset$. (It follows that $\min(P) \cup \max(P)=[2n]$.)
Restricting Theorem \ref{thm1} to disjoint pairs $(S, T)$ of $[2n]$ with
$|S|=|T|=n$, we get the following
result on the crossing and nesting number of complete matchings.
\begin{cor} \label{matching}
Let $M$ be a matching on $[2n]$. \\
1. The statistics $\mathrm{cr}(M)$ and $\mathrm{ne}(M)$ have a symmetric joint
distribution over $P_{2n}(S, T)$, where $|S|=|T|=n$, and $S, T$
are disjoint. \\
2. $f_n(i,j)=f_n(j,i)$ where $f_n(i,j)$ is the number of matchings on
$[2n]$ with $\mathrm{cr}(M)=i$ and $\mathrm{ne}(M)=j$. \\
3. The number of matchings on $[2n]$ that are $k$-noncrossing and
$l$-nonnesting is equal to the number of matchings on $[2n]$ that are
$l$-noncrossing and $k$-nonnesting. \\
4. The number of
$k$-noncrossing matchings on $[2n]$ is equal to the number of
$k$-nonnesting matchings on $[2n]$.
\hfill $\Box$
\end{cor}
The paper is arranged as follows.
In Section 2 we introduce the concept of vacillating tableau of
general shape, and give a bijective proof for the number of
vacillating tableaux of shape $\lambda$ and length $2n$.
In Section 3 we apply the bijection of Section 2 to vacillating
tableaux of empty shape, and characterize crossings and nestings of
a partition by the corresponding vacillating tableau.
The involution on the
set of vacillating tableaux defined by taking the conjugate to each
shape leads to an involution on partitions which
exchanges the statistics $\mathrm{cr}(P)$ and $\mathrm{ne}(P)$ while preserving
$\min(P)$ and $\max(P)$, thus proving
Theorem \ref{thm1}.
Then we modify the bijection between partitions and vacillating
tableaux by taking isolated points into consideration, and give
an analogous result on the enhanced crossing number and nesting
number. This is the content of Section 4.
Finally in Section 5 we restrict our bijection to the set of complete
matchings and oscillating tableaux, and
study the enumeration of $k$-noncrossing matchings.
In particular, we construct bijections from $k$-noncrossing matchings for $k=2$
or $3$ to Dyck
paths and pairs of noncrossing Dyck paths, respectively, and
present the generating function for the number of
$k$-noncrossing matchings.
\section{A Bijection between Set Partitions and Vacillating Tableaux}
\label{sec2}
Let $Y$ be \emph{Young's lattice}, that is, the set of all partitions
of all integers $n \in \mathbb{N}$ ordered component-wise, i.e.,
$(\mu_1, \mu_2, \dots) \leq (\lambda_1, \lambda_2, \dots)$ if $\mu_i \leq
\lambda_i$ for all $i$. We write $\lambda\vdash k$ or $|\lambda|=k$ if $\sum
\lambda_i=k$. A vacillating tableau is a walk on the Hasse diagram of
Young's lattice subject to certain conditions. The main tool in our
proof of Theorem~\ref{thm1} is a bijection between the set of set
partitions and the set of vacillating tableaux of empty shape
$\emptyset$.
\begin{definition}
A \emph{vacillating tableau} $V_\lambda^{2n}$ of shape $\lambda$ and
length $2n$ is a sequence $\lambda^0, \lambda^1, \dots, \lambda^{2n}$ of integer
partitions such that (i) $\lambda^0=\emptyset$, and
$\lambda^{2n}=\lambda$, (ii) $\lambda^{2i+1}$ is obtained from $\lambda^{2i}$
by doing nothing (i.e., $\lambda^{2i+1}=\lambda^{2i}$) or deleting a
square, and (iii) $\lambda^{2i}$ is obtained from $\lambda^{2i-1}$ by
doing nothing or adding a square.
\end{definition}
In other words, a vacillating tableau of shape $\lambda$ is a walk
on the Hasse diagram of Young's lattice from $\emptyset$ to $\lambda$
where each step consists of either
(i) doing nothing twice,
(ii) doing nothing and then adding a square,
(iii) removing a square and then doing nothing, or
(iv) removing a square and then adding a square.
Note that if the length is larger than $0$,
$\lambda^1=\emptyset$. If the vacillating tableau is of empty shape,
then $\lambda^{2n-1}=\emptyset$ as well.
\begin{example} \label{VT1}
Abbreviate $\lambda=(\lambda_1, \lambda_2, \dots)$ by $\lambda_1\lambda_2\cdots$.
There are 5 vacillating tableaux of shape $\emptyset$ and length
$6$. They are
\begin{eqnarray} \label{VT1-1}
\begin{array}{ccccccc}
\lambda^0 & \lambda^1& \lambda^2 & \lambda^3 & \lambda^4&\lambda^5 & \lambda^6 \\
\emptyset & \emptyset & \emptyset & \emptyset & \emptyset & \emptyset & \emptyset \\
\emptyset & \emptyset & \emptyset & \emptyset & 1 & \emptyset &\emptyset \\
\emptyset & \emptyset & 1 & \emptyset & 1 & \emptyset & \emptyset \\
\emptyset & \emptyset & 1 & \emptyset & \emptyset & \emptyset &\emptyset \\
\emptyset & \emptyset & 1 & 1 & 1 & \emptyset & \emptyset
\end{array}
\end{eqnarray}
\end{example}
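The count of five can be reproduced by a dynamic-programming walk on Young's lattice (a sketch, with our own helper names; that the count for length $2n$ is the Bell number $B(n)$ is shown later in the paper):

```python
def shrink_or_stay(shape):
    """A partition (as a tuple) together with all shapes obtained by deleting one square."""
    out = [shape]
    for r in range(len(shape)):
        below = shape[r + 1] if r + 1 < len(shape) else 0
        if shape[r] - 1 >= below:
            new = list(shape)
            new[r] -= 1
            out.append(tuple(x for x in new if x > 0))
    return out

def grow_or_stay(shape):
    """A partition together with all shapes obtained by adding one square."""
    out = [shape]
    for r in range(len(shape) + 1):
        cur = shape[r] if r < len(shape) else 0
        above = shape[r - 1] if r > 0 else cur + 1   # row 0 is unconstrained
        if cur + 1 <= above:
            new = list(shape) + ([0] if r == len(shape) else [])
            new[r] += 1
            out.append(tuple(new))
    return out

def count_vacillating(n):
    """Number of vacillating tableaux of empty shape and length 2n."""
    counts = {(): 1}
    for _ in range(n):
        mid = {}
        for s, c in counts.items():        # odd step: delete a square or do nothing
            for t in shrink_or_stay(s):
                mid[t] = mid.get(t, 0) + c
        counts = {}
        for s, c in mid.items():           # even step: add a square or do nothing
            for t in grow_or_stay(s):
                counts[t] = counts.get(t, 0) + c
    return counts.get((), 0)

assert count_vacillating(3) == 5           # the five tableaux listed above
assert [count_vacillating(n) for n in range(6)] == [1, 1, 2, 5, 15, 52]  # Bell numbers
```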
\begin{example}
An example of a vacillating tableau of shape $11$ and length $10$ is
given by
\[
\emptyset, \emptyset, 1, 1, 1, 1, 2, 2, 21, 11, 11.
\]
\end{example}
\begin{theorem} \label{VT-number}
(i) Let $g_\lambda(n)$ be the number of vacillating tableaux of shape
$\lambda\vdash k$ and length $2n$. By a \emph{standard Young
tableau} (SYT) of shape $\lambda$, we mean an array $T$ of shape
$\lambda$ whose entries are distinct positive integers that
increase in every row and column. The \emph{content} of $T$ is the
set of positive integers that appear in it. (We don't require that
$\mathrm{content}(T)=[k]$, where $\lambda\vdash k$.) We then have
\[
g_\lambda(n) = B(n, k) f^\lambda,
\]
where $f^\lambda$ is the number of SYT's of shape $\lambda$ and content
$[k]$, and $B(n, k)$ is the number of partitions of $[n]$ with $k$
blocks distinguished.
(ii) The exponential generating function of $B(n, k)$ is given by
\begin{eqnarray}
\sum_{n \geq 0} B(n, k) \frac{x^n}{n!} =
\frac{1}{k!}(e^x-1)^k \exp(e^x-1).
\end{eqnarray}
\end{theorem}
To prove Theorem \ref{VT-number}, we construct a bijection between the
set ${\cal V}_\lambda^{2n}$ of vacillating tableaux of shape $\lambda$ and
length $2n$, and pairs $(P, T)$, where $P$ is a partition of $[n]$,
and $T$ is an SYT of shape $\lambda$ such that $\mathrm{content}(T) \subseteq
\max(P)$. In the next section we apply this bijection to vacillating
tableaux of empty shape, and relate it to the enumeration of crossing
and nesting numbers of a partition. In the following we shall assume
familiarity with the RSK algorithm, and use row-insertion $P
\longleftarrow k$ as the basic operation of the RSK algorithm. For
the notation, as well as some basic properties of the RSK algorithm,
see e.g.\ \cite[Chapter 7]{Stanley99}. In general we shall apply the
RSK algorithm to a sequence $w$ of distinct integers, denoted by $w
\stackrel{\text{RSK}}{\longmapsto}(A(w), B(w))$, where $A(w)$ is the
(row)-insertion tableau and $B(w)$ the recording tableau. The shape
of the SYT's $A(w)$ and $B(w)$ is also called the \emph{shape} of the
sequence $w$.
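For readers who want to experiment, the two RSK primitives used below, row insertion and its inverse, can be sketched as follows (our own function names; a tableau is a list of increasing rows):

```python
from bisect import bisect_left, bisect_right

def row_insert(T, x):
    """Row-insert x into T (RSK); return the new tableau and the (row, col) of the new square."""
    T = [row[:] for row in T]
    r = 0
    while True:
        if r == len(T):
            T.append([x])
            return T, (r, 0)
        pos = bisect_right(T[r], x)
        if pos == len(T[r]):            # x exceeds everything in the row: append it
            T[r].append(x)
            return T, (r, pos)
        x, T[r][pos] = T[r][pos], x     # bump the smallest entry larger than x
        r += 1

def reverse_insert(T, r):
    """Undo a row insertion whose new square sat at the end of row r; return (T', ejected value)."""
    T = [row[:] for row in T]
    x = T[r].pop()
    if not T[r]:
        T.pop(r)
    for i in range(r - 1, -1, -1):
        pos = bisect_left(T[i], x) - 1  # largest entry smaller than x bumps back up
        x, T[i][pos] = T[i][pos], x
    return T, x

T = []
for x in [2, 6, 1, 4]:
    T, square = row_insert(T, x)
assert T == [[1, 4], [2, 6]]
prev, ejected = reverse_insert(T, square[0])
assert ejected == 4 and row_insert(prev, 4)[0] == T   # the two maps are inverse
```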
{\bf The Bijection $\psi$ from Vacillating Tableaux to Pairs $(P, T)$}. \\
Given a vacillating tableau $V=(\emptyset=\lambda^0, \lambda^1, \dots,
\lambda^{2n}=\lambda)$, we will recursively define a sequence $(P_0,
T_0)$, $(P_1, T_1)$,$\dots$, $(P_{2n}, T_{2n})$ where $P_i$ is a
set of ordered pairs of integers in $[n]$, and $T_i$ is an SYT of shape
$\lambda^i$. Let $P_0$ be the empty set, and let $T_0$ be the empty
SYT (on the empty alphabet).
\begin{enumerate}
\item If $\lambda^i=\lambda^{i-1}$, then $(P_i, T_i)=(P_{i-1}, T_{i-1})$.
\item If $\lambda^i \supset \lambda^{i-1}$, then $i=2k$ for some integer
$k \in [n]$. In this case let $P_i=P_{i-1}$, and let $T_i$ be
obtained from $T_{i-1}$ by adding the entry $k$ in the square
$\lambda^i \setminus \lambda^{i-1}$.
\item If $\lambda^i \subset \lambda^{i-1}$, then $i=2k-1$ for some integer
$k \in [n]$. In this case
let $T_i$ be the unique SYT (on a suitable
alphabet) of shape $\lambda^i$ such that $T_{i-1}$ is obtained from $T_i$
by row-inserting some number $j$. Note that $j$ must be less than $k$.
Let $P_i$ be obtained from $P_{i-1}$ by adding the ordered pair $(j, k)$.
\end{enumerate}
It is clear from the above construction that (i)
$P_0 \subseteq P_1 \subseteq \cdots \subseteq P_{2n}$, and (ii) each integer
$i$ appears at most once as the first component of an ordered pair
in $P_{2n}$, and at most once as the second component of an ordered
pair in $P_{2n}$.
Let $\psi(V)=(P, T_{2n})$, where
$P$ is the partition on $[n]$ whose standard representation is $P_{2n}$.
Note that
if an integer $i$ appears in $T_{2n}$, then $P_{2n}$
cannot contain any ordered pair $(i,j)$ with $i<j$. It follows that
$i$ is the maximal
element in the block containing it. Hence the content of $T_{2n}$
is a subset of $\max(P)$.
\begin{example}
As an example of the map $\psi$, let the vacillating tableau be
\[
\emptyset, \emptyset, 1, 1, 2, 2, 2, 2, 21, 21, 211, 21, 21, 11, 21.
\]
Then the pairs $(B_i, T_i)$ (where $B_i$ is the pair added to $P_{i-1}$
to obtain $P_i$) are given by
\begin{eqnarray*}
\begin{array}{l|lllllllllllllll}
i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7& 8 & 9 & 10 & 11 & 12 & 13 & 14 \\
\hline
T_i
&\emptyset&\emptyset & 1 & 1 & 12 & 12 & 12 & 12 & 12 & 12 & 12 & 14 & 14 &
1 & 17\\[-.05in]
& & & & & & & & & 4 & 4 & 4 & 5 & 5 &
5 & 5 \\[-.05in]
& & & & & & & & & & & 5 & & & & \\
B_i& & & & & & & & & & & & (2,6)& & (4,7)&
\end{array}
\end{eqnarray*}
Hence
\begin{eqnarray*}
T=\begin{array}{l} 1\ 7 \\ [-.05in] 5 \end{array},\quad \quad
P=\mbox{1-26-3-47-5}.
\end{eqnarray*}
\end{example}
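The whole map $\psi$ is short enough to implement and test against this example (a sketch with our own names; shapes are tuples, and `reverse_insert` undoes an RSK row insertion):

```python
from bisect import bisect_left

def reverse_insert(T, r):
    """Undo a row insertion whose new square sat at the end of row r; return (T', ejected value)."""
    T = [row[:] for row in T]
    x = T[r].pop()
    if not T[r]:
        T.pop(r)
    for i in range(r - 1, -1, -1):
        pos = bisect_left(T[i], x) - 1   # largest entry smaller than x bumps back up
        x, T[i][pos] = T[i][pos], x
    return T, x

def psi(shapes):
    """Map a vacillating tableau (a list of shapes as tuples) to (arcs of P, T)."""
    T, arcs = [], []
    for i in range(1, len(shapes)):
        prev, cur = shapes[i - 1], shapes[i]
        if cur == prev:
            continue
        if sum(cur) > sum(prev):                       # a square was added: i = 2k
            k = i // 2
            r = next(r for r in range(len(cur))
                     if r >= len(prev) or cur[r] != prev[r])
            if r < len(T):
                T[r].append(k)
            else:
                T.append([k])
        else:                                          # a square was deleted: i = 2k - 1
            k = (i + 1) // 2
            r = next(r for r in range(len(prev))
                     if r >= len(cur) or prev[r] != cur[r])
            T, j = reverse_insert(T, r)
            arcs.append((j, k))
    return sorted(arcs), T

# The worked example above: length 14, so a partition of [7].
V = [(), (), (1,), (1,), (2,), (2,), (2,), (2,), (2, 1), (2, 1),
     (2, 1, 1), (2, 1), (2, 1), (1, 1), (2, 1)]
arcs, T = psi(V)
assert arcs == [(2, 6), (4, 7)]        # the standard representation of P = 1-26-3-47-5
assert T == [[1, 7], [5]]
```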
The map $\psi$ is bijective since the above construction can be
reversed. Given a pair $(P, T)$, where $P$ is a partition of $[n]$,
and $T$ is an SYT whose content consists of maximal elements of some
blocks of $P$, let $E(P)$ be
the standard representation of $P$, and $T_{2n}=T$. We work our way
backwards from $T_{2n}$, reconstructing the preceding tableaux and
hence the sequence of shapes. If we have the SYT $T_{2k}$ for some
$k \leq n$, we can get the tableaux $T_{2k-1}, T_{2k-2}$ by the following
rules.
\begin{enumerate}
\item
$T_{2k-1}=T_{2k}$ if the integer $k$ does not appear in $T_{2k}$. Otherwise
$T_{2k-1}$ is obtained from $T_{2k}$ by deleting the square containing $k$.
\item
$T_{2k-2}=T_{2k-1}$ if $E(P)$ does not have an edge of the form $(i, k)$.
Otherwise there is a unique $i < k$ such that $(i,k) \in E(P)$.
In that case let $T_{2k-2}$ be obtained from $T_{2k-1}$ by row-inserting $i$,
or equivalently, $T_{2k-2}=(T_{2k-1} \longleftarrow i)$.
\end{enumerate}
{\bf Proof of Theorem \ref{VT-number}}. \
Part (i) follows from the bijection $\psi$, where a block of $P$ is
distinguished if its maximal element belongs to $\mathrm{content}(T)$. For
part (ii), simply note that to get a structure counted by $B(n, k)$,
we can partition $[n]$ into two subsets, $S$ and $T$, and then partition
$S$ into $k$ blocks and put a mark on each block, and partition $T$
arbitrarily. The generating function of $B(n, k)$ then follows from the
well-known generating functions for $S(n,k)$, the Stirling number of the
second kind, and for the Bell number $B(n)$,
\begin{eqnarray*}
\sum_{n \geq k} S(n,k) \frac{x^n}{n!} =\frac{1}{k!} (e^x-1)^k, \qquad
\sum_{n \geq 0} B(n) \frac{x^n}{n!} =\exp(e^x-1). \qquad \Box
\end{eqnarray*}
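The count $B(n,k)$ and its decomposition in the proof can be cross-checked numerically (a sketch; `B_direct` counts partitions with $k$ marked blocks directly, while `B_split` follows the $S$/$T$ split used in the proof; all names are ours):

```python
from math import comb

def set_partitions(n):
    """Yield all partitions of [n] as lists of blocks."""
    if n == 0:
        yield []
        return
    for p in set_partitions(n - 1):
        for b in range(len(p)):
            yield p[:b] + [p[b] + [n]] + p[b + 1:]
        yield p + [[n]]

def stirling2(n, k):
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    return sum(stirling2(n, k) for k in range(n + 1))

def B_direct(n, k):
    """Partitions of [n] with k of their blocks marked."""
    return sum(comb(len(p), k) for p in set_partitions(n))

def B_split(n, k):
    """Choose the marked part S (size m), partition it into k marked blocks, partition the rest."""
    return sum(comb(n, m) * stirling2(m, k) * bell(n - m) for m in range(k, n + 1))

for n in range(7):
    for k in range(n + 1):
        assert B_direct(n, k) == B_split(n, k)
```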
\textsc{Remark.} (1). Restricting to vacillating tableaux of
empty shape, the map $\psi$ provides a bijection between the set
$\mathcal{V}_\emptyset^{2n}$ of vacillating tableaux of empty shape
and length $2n$ and the set of partitions of $[n]$. In particular,
$g_\emptyset(n)$, the cardinality of $\mathcal{V}_\emptyset^{2n}$, is
equal to the $n$th Bell number $B(n)$.
(2). Note that there is a symmetry between the four types of
movements in the definition of vacillating tableaux. Thus any walk
from $\emptyset$ to $\emptyset$ of length $2(m+n)$ can be viewed as a walk from
$\emptyset$ to some shape $\lambda$ of length $2n$, followed by the
reverse of a walk from $\emptyset$ to $\lambda$ of length $2m$. It follows
that
\begin{eqnarray}\label{bell_number}
\sum_{\lambda} g_{\lambda}(n) g_\lambda(m) = g_\emptyset(m+n)=B(m+n).
\end{eqnarray}
For the case $m=n=k$, the identity \eqref{bell_number} is proved by
Halverson and Lewandowski \cite{h-l}, who gave a bijective proof using
procedures similar to those in $\psi$.
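Both $g_\emptyset(n)=B(n)$ and identity \eqref{bell_number} are easy to verify for small $m,n$ by dynamic programming on Young's lattice. The Python sketch below is ours (function names are hypothetical); a shape is a tuple of weakly decreasing parts.

```python
from collections import Counter

def add_corners(lam):
    """All shapes obtained from lam by adding one square."""
    lam = list(lam)
    out = []
    for i in range(len(lam) + 1):
        new = lam[:i] + [(lam[i] + 1) if i < len(lam) else 1] + lam[i + 1:]
        if i == 0 or new[i] <= lam[i - 1]:
            out.append(tuple(new))
    return out

def del_corners(lam):
    """All shapes obtained from lam by deleting one (corner) square."""
    lam = list(lam)
    out = []
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            new = lam[:i] + [lam[i] - 1] + lam[i + 1:]
            if new[-1] == 0:
                new.pop()
            out.append(tuple(new))
    return out

def g(n):
    """g(n)[lam] = number of vacillating tableaux of shape lam, length 2n."""
    state = Counter({(): 1})
    for _ in range(n):
        half = Counter()
        for lam, c in state.items():      # odd move: do nothing or delete
            half[lam] += c
            for mu in del_corners(lam):
                half[mu] += c
        state = Counter()
        for lam, c in half.items():       # even move: do nothing or add
            state[lam] += c
            for mu in add_corners(lam):
                state[mu] += c
    return state
```

For example, `g(n)[()]` reproduces the Bell numbers $1,1,2,5,15,52,\dots$, and convolving `g(n)` with `g(m)` over all shapes gives $B(m+n)$.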
(3). The \emph{partition algebra} $\mathfrak{P}_n$ is a
certain semisimple algebra, say over $\mathbb{C}$, whose dimension is
the Bell number $B(n)$ (the number of partitions of $[n]$). (The
algebra $\mathfrak{P}_n$ depends on a parameter $x$ which is
irrelevant here.) See \cite{h-r,h-l} for a survey of this topic.
Vacillating tableaux are related to irreducible representations of
$\mathfrak{P}_n$ in the same way that SYT of content $[n]$ are related
to irreducible representations of the symmetric group $\mathfrak{S}_n$. In
particular, the irreducible representations $I_\lambda$ of
$\mathfrak{P}_n$ are indexed by partitions $\lambda$ for which there
exists a vacillating tableau of shape $\lambda$ and length $2n$, and
$\dim I_\lambda$ is the number of such vacillating tableaux. This result is
equivalent to \cite[Thm.~2.24(b)]{h-r}, but that paper does not
explicitly define the notion of vacillating tableau.
Combinatorial identities arising from partition algebra and its
subalgebras are discussed in \cite{h-l}, where the authors used the
notion of vacillating tableau after the distribution of a preliminary
version of this paper.
\section{Crossings and Nestings of Partitions} \label{sec3}
In this section we restrict the map $\psi$ to vacillating tableaux of
empty shape, for which $\psi$ provides a bijection between
the set of vacillating tableaux of empty shape and
length $2n$ and the set of partitions of $[n]$.
To make the bijection clear, we restate the inverse map from the set of
partitions to vacillating tableaux.
{\bf The Map $\phi$ from Partitions to Vacillating Tableaux.} \label{phi}
Given a partition $P \in \Pi_n$ with the standard representation,
we construct the sequence of
SYT's, hence the vacillating tableau $\phi(P)$ as follows:
Start from the empty SYT by letting $T_{2n}=\emptyset$,
read the number $j \in [n]$ one by one from $n$ to 1, and
define $T_{2j-1}$, $T_{2j-2}$ for each $j$.
There are four cases. \\
1. If $j$ is the righthand endpoint of an arc $(i,j)$,
but not a lefthand endpoint, first do nothing, then
insert $i$ (by the RSK algorithm) into the tableau. \\
2. If $j$ is the lefthand endpoint of an arc $(j, k)$,
but not a righthand endpoint, first remove $j$, then do nothing. \\
3. If $j$ is an isolated point, do nothing twice. \\
4. If $j$ is the righthand endpoint of an arc $(i,j)$,
and the lefthand endpoint of another arc $(j,k)$, then
delete $j$ first, and then insert $i$. \\
The vacillating tableau $\phi(P)$
is the sequence of shapes of the above SYT's.
\begin{example} \label{VT1_SYT}
Let $P$ be the partition 1457-26-3 of $[7]$.
\begin{figure}[ht]
\begin{center}
\begin{picture}(120,40)
\setlength{\unitlength}{4mm} \multiput(0,0)(2,0){7}{\circle*{0.4}}
\qbezier(0,0)(3,4)(6,0) \qbezier(2,0)(6,5)(10,0)
\qbezier(6,0)(7,2)(8,0) \qbezier(8,0)(10,4)(12,0)
\put(0,-1){$1$}
\put(2,-1){$2$}\put(4,-1){$3$}\put(6,-1){$4$}\put(8,-1){$5$}\put(10,-1){$6$}\put(12,-1){$7$}
\end{picture}
\end{center}
\caption{The standard representation of the partition 1457-26-3.}
\label{Ex_1}
\end{figure}
Starting from $\emptyset$ on the right and reading from 7 down to 1, the seven steps are
(1) do nothing, then insert 5,
(2) do nothing, then insert 2,
(3) delete 5 and insert 4,
(4) delete 4 and insert 1,
(5) do nothing twice,
(6) remove 2 then do nothing,
(7) remove 1 then do nothing.
Hence the corresponding SYT's, constructed from right to left, are
\begin{eqnarray*}
\begin{array}{ccccccccccccccc}
\emptyset & \emptyset & 1 & 1 & 1 & 1 & 1 & 2 & 24 & 2 & 2 & 5 & 5 & \emptyset
&\emptyset \\[-.05in]
& & & & 2 & 2 & 2 & & & & 5 & & & &
\end{array}
\end{eqnarray*}
The vacillating tableau is
\begin{eqnarray*}
\emptyset, \emptyset, 1, 1,11, 11, 11, 1, 2, 1, 11, 1, 1, \emptyset, \emptyset.
\end{eqnarray*}
\end{example}
The relation between $\mathrm{cr}(P), \mathrm{ne}(P)$ and the vacillating tableau is
given in the next theorem.
\begin{theorem} \label{CN}
Let $P \in \Pi_n$ and $\phi(P)=(\emptyset=\lambda^0, \lambda^1, \dots, \lambda^{2n}=\emptyset)$.
Then $\mathrm{cr}(P)$ is the maximal number of rows of any $\lambda^i$, and $\mathrm{ne}(P)$ is
the maximal number of columns of any $\lambda^i$.
\end{theorem}
\noindent {\bf Proof}.
We prove Theorem~\ref{CN} in four steps.
First, we interpret a $k$-crossing/$k$-nesting of $P$ in terms of
entries of SYT's $T_i$ in $\phi(P)$. Then, we associate to each SYT $T_i$
a sequence $\sigma_i$ whose terms are entries of $T_i$. We prove that
$T_i$ is the insertion tableau of $\sigma_i$ under the RSK algorithm,
and apply Schensted's theorem to conclude the proof.
{\bf Step 1}.
Let $T(P)=(T_0, T_1, \dots, T_{2n})$ be the sequence
of SYT's associated to the vacillating tableau $\phi(P)$.
By the construction of $\psi$ and $\phi$, a pair $(i, j)$ is an arc in
the standard representation of $P$ if and only if $i$ is an entry in the
SYT's $T_{2i}, T_{2i+1}, \dots, T_{2j-2}$. We say that the integer $i$
is \emph{added} to $T(P)$ at step $i$ and \emph{leaves} at step $j$.
First we prove that the arcs $(i_1, j_1), \dots,(i_k, j_k)$ form
a $k$-crossing of $P$ if and only if there exists a tableau $T_i$
in $T(P)$ such that the integers $i_1, i_2, \dots, i_k \in
\mathrm{content}(T_i)$, and $i_1, i_2, \dots, i_k$ leave $T(P)$ in
increasing order according to their numerical values. Given a
$k$-crossing $((i_1, j_1), \dots,(i_k, j_k))$ of $P$, where $i_r <
j_r$ for $1 \leq r \leq k$ and $i_1< i_2< \cdots < i_k < j_1 < j_2
< \cdots < j_k$, the integer $i_r$ is added to $T(P)$ at step
$i_r$ and leaves at step $j_r$. Hence all $i_r$ are in
$T_{2j_1-2}$, and they leave $T(P)$ in increasing order. The
converse is also true: if there are $k$ integers $i_1 < i_2 <
\dots < i_k$ all appearing in the same tableau at some steps, and
then leave in increasing order, say at steps $j_1 < j_2 < \dots <
j_k$, then $i_k < j_1$ and the pairs $((i_1, j_1), \dots,(i_k,
j_k)) \in P$ form a $k$-crossing. By a similar argument, arcs
$(i_1, j_1), \dots,(i_k, j_k)$ form a $k$-nesting of $P$
if and only if there exists a tableau $T_i$ in $T(P)$ such that
the integers $i_1, i_2, \dots, i_k \in \mathrm{content}(T_i)$, and $i_1, i_2, \dots,
i_k$ leave $T(P)$ in decreasing order.
{\bf Step 2}.
For each $T_i \in T(P)$, we define a permutation $\sigma_i$ of
$\mathrm{content}(T_i)$ (backward) recursively as follows.
Let $\sigma_{2n}$ be the empty sequence.
(1) If $T_i=T_{i-1}$, then $\sigma_{i-1}=\sigma_i$.
(2) If $T_{i-1}$ is obtained from $T_i$ by row-inserting some number $j$,
then $\sigma_{i-1}=\sigma_ij$, the juxtaposition of $\sigma_i$ and $j$.
(3) If $T_i$ is obtained from $T_{i-1}$ by adding the entry $i/2$
(where $i$ must be even),
then $\sigma_{i-1}$ is obtained from $\sigma_i$ by deleting the number $i/2$.
Note that in the last case, $i/2$ must be the largest entry in $\sigma_i$.
Clearly $\sigma_i$ is a permutation of the entries in $\mathrm{content}(T_i)$.
If $\sigma_i=w_1w_2\cdots w_r$, then the entries of $\mathrm{content}(T_i)$
leave $T(P)$ in the order $w_r, \dots, w_2, w_1$.
{\bf Step 3.}
{\bf Claim}:
If $\sigma_i \stackrel{\text{RSK}}{\longmapsto} (A_i, B_i)$, then $A_i=T_i$.
We prove the claim by backward induction. The case $i=2n$ is trivial
as both $A_{2n}$ and $T_{2n}$ are the empty SYT. Assume the claim is
true for some $i$, $ 1 \leq i \leq 2n$.
We prove that the claim holds for $i-1$.
If $T_{i-1}=T_i$, the claim holds by the inductive hypothesis.
If $T_{i-1}$ is obtained from
$T_i$ by inserting the number $j$, then the claim holds by
the definition of the RSK algorithm. It is only left to consider the
case that $T_{i-1}$ is obtained from $T_{i}$ by removing the entry
$j=i/2$.
Let us write $\sigma_i$ as
$u_1u_2\cdots u_s j v_1\cdots v_t$, and $\sigma_{i-1}$ as
$u_1u_2\cdots u_s v_1 \cdots v_t$, where $j > u_1, \dots,u_s, v_1,
\dots, v_t$. We need to show that the insertion tableau of
$\sigma_{i-1}$ is the same as the insertion tableau of $\sigma_i$
deleting the entry $j$, i.e., $A_{i-1}=A_i\!\setminus\!
\{j\}$. We proceed by induction on $t$. If $t=0$, then by the
RSK algorithm $A_i$ is obtained from $A_{i-1}$ by
adding $j$ at the end of the first row. Assume it is true for $t-1$,
i.e., $A(u_1\cdots u_s v_1\cdots v_{t-1})=A(u_1\cdots u_s j v_1\cdots
v_{t-1}) \setminus \{j\}$. Note that in $A(u_1\cdots u_s j v_1\cdots
v_{t-1})$, if $j$ is in position $(x, y)$, then there is no element in
positions $(x, y+1)$ or $(x+1, y)$. Now we insert the entry $v_t$ by
the RSK algorithm. Consider the insertion path $I=I(A(u_1\cdots
u_s j v_1\cdots v_{t-1})\longleftarrow v_t ). $ If $j$ does not appear
on this path, then we would have the exact same insertion path when
inserting $v_t$ into $A(u_1\cdots u_sv_1\cdots v_{t-1})$. This
insertion path results in the same change to $A(u_1\cdots u_s j
v_1\cdots v_{t-1})$ and $A(u_1\cdots u_s v_1\cdots v_{t-1})$, which
does not touch the position $(x,y)$ of $j$. So $A(u_1\cdots u_s
v_1\cdots v_{t-1}v_t )= A(u_1\cdots u_s j v_1\cdots
v_{t-1}v_t)\setminus \{j\}$. On the other hand, if $j$ appears in the
insertion path $I$, i.e., $(x,y) \in I$, then since $j$ is the largest
element, it must be bumped into the $(x+1)$-th row, and become the
last entry in the $(x+1)$-th row without bumping any number further.
Then the insertion path of $v_t$ into $A(u_1\cdots u_sv_1\cdots
v_{t-1})$ is $I$ minus the last position $\{(x+1, *)\}$, and again we
have $A(u_1\cdots u_s v_1\cdots v_{t-1}v_t )= A(u_1\cdots u_s j
v_1\cdots v_{t-1}v_t)\setminus \{j\}$. This finishes the proof of the
claim.
{\bf Step 4}. We shall need the following theorem of Schensted
\cite{Sch61}\cite[Thms.~7.23.13, 7.23.17]{Stanley99}, which gives the
basic connection between the RSK algorithm and the increasing and
decreasing subsequences.
\noindent
\begin{quote}
\noindent {\bf Schensted's Theorem}
Let $\sigma$ be a sequence of integers whose terms are distinct. Assume
$\sigma \stackrel{RSK}{\longmapsto} (A, B)$, where $A$ and $B$ are SYT's of
the shape $\lambda$.
Then the length of the longest increasing subsequence of $\sigma$ is
$\lambda_1$ (the number of columns of $\lambda$), and the length of the longest
decreasing subsequence is $\lambda'_1$ (the number of rows of $\lambda$).
\end{quote}
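Schensted's theorem is easy to check by brute force on a small example. The following Python sketch is ours (tableaux as lists of strictly increasing rows; function names hypothetical):

```python
from itertools import combinations

def insertion_tableau(seq):
    """The insertion (P-)tableau of seq under RSK row insertion."""
    t = []
    for x in seq:
        for row in t:
            j = next((c for c, y in enumerate(row) if y > x), None)
            if j is None:
                row.append(x)
                break
            row[j], x = x, row[j]      # bump into the next row
        else:
            t.append([x])              # bumped through all rows
    return t

def longest_mono(seq, decreasing):
    """Length of the longest strictly decreasing (or increasing)
    subsequence of a sequence of distinct integers, by brute force."""
    best = 0
    for k in range(1, len(seq) + 1):
        for sub in combinations(seq, k):
            if all((a > b) == decreasing for a, b in zip(sub, sub[1:])):
                best = k
    return best
```

For $\sigma = 2\,7\,1\,5\,6\,4\,3$ the insertion tableau has shape $(3,2,1,1)$: its first row has length $3$, the longest increasing subsequence, and it has $4$ rows, the longest decreasing subsequence.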
Now we are ready to prove Theorem~\ref{CN}. By Steps 1 and 2, a
partition $P$ has a $k$-crossing if and only if there exists $i$
such that $\sigma_i$ has a decreasing subsequence of length $k$.
The claim in Step 3 implies that the shape of the sequence
$\sigma_i$ is exactly the diagram of the $i$-th partition $\lambda^i$
in the vacillating tableau $\phi(P)$. By Schensted's Theorem,
$\sigma_i$ has a decreasing subsequence of length $k$ if and only
if the partition $\lambda^i$ in $\phi(P)$ has at least $k$ rows. This
proves the statement for $\mathrm{cr}(P)$ in Theorem~\ref{CN}. The
statement for $\mathrm{ne}(P)$ is proved similarly. ~~~$\Box$
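As a numerical sanity check on Theorem~\ref{CN}, one can compute $\mathrm{cr}(P)$ and $\mathrm{ne}(P)$ by brute force for the partition 1457-26-3 of Example~\ref{VT1_SYT} and compare them with the maximal number of rows and columns of its vacillating tableau. A Python sketch (the function names are ours):

```python
from itertools import combinations

def max_clique(arcs, rel):
    """Size of the largest set of arcs that are pairwise in relation rel."""
    best = 1 if arcs else 0
    for k in range(2, len(arcs) + 1):
        for sub in combinations(sorted(arcs), k):
            if all(rel(a, b) for a, b in combinations(sub, 2)):
                best = k
    return best

cross = lambda a, b: a[0] < b[0] < a[1] < b[1]   # pairs arrive with a[0] < b[0]
nest = lambda a, b: a[0] < b[0] < b[1] < a[1]

# Arcs of the standard representation of 1457-26-3, and the shapes of
# its vacillating tableau (from Example above):
arcs = [(1, 4), (4, 5), (5, 7), (2, 6)]
shapes = [(), (), (1,), (1,), (1, 1), (1, 1), (1, 1), (1,), (2,),
          (1,), (1, 1), (1,), (1,), (), ()]
```

Here $\mathrm{cr}(P)=\mathrm{ne}(P)=2$, matching the maximum of two rows (shape $11$) and two columns (shape $2$) among the $\lambda^i$.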
\vspace{.3cm} The symmetric joint distribution of statistics
$\mathrm{cr}(P)$ and $\mathrm{ne}(P)$ over $P_n(S, T)$ follows immediately from
Theorem~\ref{CN}.
\noindent {\bf Proof of Theorem \ref{thm1}.} \\
From Theorem~\ref{CN}, a partition $P \in \Pi_n$ has $\mathrm{cr}(P)=k$ and
$\mathrm{ne}(P)=j$ if and only if for the partitions $\{ \lambda^i\}_{i=0}^{2n}$
of the vacillating tableau $\phi(P)$, the maximal number of rows of
the diagram of any $\lambda^i$ is $k$, and the maximal number of columns
of the diagram of any $\lambda^i$ is $j$. Let $\tau$ be the involution
defined on the set ${\cal V}_\emptyset^{2n}$ by taking the conjugate
to each partition $\lambda^i$. For $i \in [n]$, $i \in \min(P)$
(resp. $i \in \max(P)$) if and only if $\lambda^{2i-1}=\lambda^{2i-2}$
(resp. $\lambda^{2i}=\lambda^{2i-1}$); in particular, both conditions
hold when $i$ is an isolated point.
Since $\tau$ preserves $\min(P)$ and
$\max(P)$, it induces an involution on $P_n(S, T)$ which exchanges
the statistics $\mathrm{cr}(P)$ and $\mathrm{ne}(P)$. This proves Theorem
\ref{thm1}.
~~~$\Box$ \vspace{.3cm}
Let $\lambda=(\lambda_1, \lambda_2, \dots )$ be the shape of a sequence $w$ of
distinct integers.
Schensted's Theorem provides a combinatorial interpretation of the terms
$\lambda_1$ and $\lambda'_1$: they are the length of the longest increasing
and decreasing subsequences of $w$.
In \cite{Greene74} C. Greene extended Schensted's Theorem by giving an
interpretation of the
rest of the diagram of $\lambda=(\lambda_1, \lambda_2, \dots )$.
Assume $w$ is a sequence of length $n$. For each $k \leq n$, let
$d_k(w)$ denote the length of the longest subsequence of $w$ which
has no increasing subsequence of length $k+1$. It can be shown easily that any
such subsequence is a union of $k$ decreasing subsequences.
Similarly, define $a_k(w)$ to be the length of the longest subsequence of $w$
that is a union of $k$ increasing subsequences.
\begin{theorem}[Greene]
For each $k \leq n$,
\begin{eqnarray*}
a_k(w)&=&\lambda_1+\lambda_2+\cdots +\lambda_k, \\
d_k(w)& =& \lambda'_1 +\lambda'_2 +\cdots + \lambda'_k,
\end{eqnarray*}
where $\lambda'=(\lambda_1', \lambda_2', \dots)$ is the conjugate of $\lambda$.
~~~~$\Box$
\end{theorem}
We may consider the analogue of Greene's Theorem for set partitions.
Let $P \in \Pi_n$ with the standard
representation $\{(i_1, j_1), (i_2, j_2), \dots, (i_t, j_t)\}$,
where $i_r < j_r$ for $1 \leq r \leq t$.
Let $e_r=(i_r, j_r)$. We define
the \emph{crossing graph} $\mathrm{Cr}(P)$ of $P$ as follows. The vertex set
of $\mathrm{Cr}(P)$ is $\{e_1, e_2, \dots, e_t\}$. Two arcs $e_r$ and
$e_s$ are adjacent if and only if the edges $e_r$ and $e_s$ are
crossing, that is, $i_r < i_s < j_r < j_s$.
Clearly a $k$-crossing of $P$ corresponds to a
$k$-clique of $\mathrm{Cr}(P)$. Let $\mathrm{cr}_r(P)$ be the maximal number
of vertices in a union of $r$ cliques of $\mathrm{Cr}(P)$. In other
words, $\mathrm{cr}_r(P)$ is the maximal number of arcs in a union of $r$
crossings of $P$. Similarly, let $\mathrm{Ne}(P)$ be the graph
defined on the vertex set $\{e_1, \dots, e_t\}$ where two arcs $e_r$
and $e_s$ are adjacent if and only if $i_r < i_s < j_s < j_r$. Let
$\mathrm{ne}_r(P)$ be the maximal number of vertices in a union of $r$
cliques of $\mathrm{Ne}(P)$. In other words, $\mathrm{ne}_r(P)$ is the maximal number
of arcs in a union of $r$ nestings of $P$.
\begin{prop} \label{Greene1}
Let $P=((i_1, j_1), (i_2, j_2), \dots, (i_t, j_t))$ be the
standard representation of a partition of $[n]$, where $i_r < j_r$
for all $1 \leq r \leq t$ and $j_1 < j_2 < \cdots< j_t$. Let
$\alpha(P)$ be the sequence $i_1i_2\cdots i_t$. Then there is a
one-to-one correspondence between the set of nestings of $P$ and
the set of decreasing subsequences of $\alpha(P)$.
\end{prop}
\noindent {\bf Proof}.
Let $\phi(P)$ be the vacillating tableau corresponding to $P$,
and $T(P)$ the sequence of SYT's constructed in the
bijection.
Then $\alpha(P)$ records the order in which the entries of
$T_i$'s leave $T(P)$.
Let $\sigma = i_t \cdots i_2 i_1$ be the reverse of $\alpha(P)$,
and $\{\sigma_i : 1 \leq i \leq 2n\}$ the permutations of $\mathrm{content}(T_i)$
defined in Step 2 of the proof of Theorem \ref{CN}.
Then the $\sigma_i$'s are subsequences of $\sigma$.
From Steps 1 and 2 of the proof of Theorem~\ref{CN}, nestings
of $P$ are represented by the increasing subsequences of $\sigma_i$,
$1 \leq i \leq 2n$, and hence by the increasing subsequences of
$\sigma$. Conversely, let $i_{r_1} <i_{r_2}
< \cdots < i_{r_t}$ be an increasing subsequence of $\sigma$.
Being a subsequence of $\sigma$ means that its terms leave $T(P)$ in
reverse order, so $i_{r_t}$ leaves first, at step $j_{r_t}$. Thus
all the entries $i_{r_1}, \dots, i_{r_t}$ appear in the SYT's $T_i$ with
$2i_{r_t} \leq i \leq 2j_{r_t}-2$. Therefore
$i_{r_1}i_{r_2}\cdots i_{r_t}$ is also an increasing subsequence
of $\sigma_i$, for $2i_{r_t} \leq i \leq 2j_{r_t}-2$.
~~~ $\Box$
Combining Proposition~\ref{Greene1} and Greene's Theorem, we have
the following corollary describing $\mathrm{ne}_r(P)$.
\begin{cor} \label{cor9}
Let $P$ and $\alpha(P)$ be as in Proposition \ref{Greene1}.
Then
\[
\mathrm{ne}_r(P)=\lambda'_1+\lambda'_2+\cdots
+\lambda'_r,
\]
where $\lambda$ is the shape of $\alpha(P)$,
and $\lambda'$ is the conjugate of $\lambda$. ~~~~$\Box$
\end{cor}
The situation for $\mathrm{cr}_r(P)$ is more complicated. We don't have a
result similar to Proposition~\ref{Greene1}. Any crossing of $P$
uniquely corresponds to an increasing subsequence of $\alpha(P)$. But
the converse is not true. An increasing subsequence $i_{r_1}i_{r_2}
\cdots i_{r_t}$ corresponds to a $t$-crossing of $P$ only if we have
the additional condition $i_{r_t} < j_{r_1}$. It would be interesting
to get a result for $\mathrm{cr}_r(P)$ analogous to Corollary~\ref{cor9}.
To conclude this section we discuss the enumeration of noncrossing
partitions.
The following theorem is a direct corollary of
the bijection between vacillating tableaux and partitions.
\begin{theorem} \label{lattice_P}
Let $\epsilon_i$ denote the $i$th unit coordinate vector in
$\mathbb{R}^{k-1}$. The number of $k$-noncrossing partitions of $[n]$
equals the number of closed lattice walks in the region
\[
V_k=\{(a_1, a_2, \dots, a_{k-1}): a_1 \geq a_2 \geq \cdots \geq
a_{k-1} \geq 0, a_i \in \mathbb{Z} \}
\]
from the origin to itself of length $2n$ with steps
$\pm\epsilon_i$ or $(0,0,\dots,0)$, with the property that the
walk goes backwards (i.e., with step $-\epsilon_i$) or stands
still (i.e., with step $(0,0,\dots,0)$) after an even number of
steps, and goes forwards (i.e., with step $+\epsilon_i$) or
stands still after an odd number of steps.
~~~~$\Box$
\end{theorem}
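The walks of Theorem~\ref{lattice_P} are easy to count by dynamic programming and to compare with a brute-force count of $k$-noncrossing partitions. The Python sketch below is ours (function names hypothetical); for $k=2$ the walk counts are the Catalan numbers, matching the enumeration of noncrossing partitions.

```python
from collections import Counter
from itertools import combinations

def walks(n, k):
    """Closed walks of length 2n in V_k with the stated step restrictions."""
    dim = k - 1
    state = Counter({(0,) * dim: 1})
    for step in range(2 * n):
        nxt = Counter()
        for a, c in state.items():
            nxt[a] += c                            # stand still
            for i in range(dim):
                b = list(a)
                b[i] += 1 if step % 2 else -1      # -e_i after an even number
                                                   # of steps, +e_i after an odd
                if b[-1] >= 0 and all(b[j] >= b[j + 1] for j in range(dim - 1)):
                    nxt[tuple(b)] += c
        state = nxt
    return state[(0,) * dim]

def set_partitions(n):
    """All partitions of [n] as lists of sorted blocks."""
    if n == 0:
        yield []
        return
    for p in set_partitions(n - 1):
        for i in range(len(p)):
            yield p[:i] + [p[i] + [n]] + p[i + 1:]
        yield p + [[n]]

def cr(p):
    """Size of the largest crossing of the standard representation of p."""
    arcs = sorted((b[i], b[i + 1]) for b in p for i in range(len(b) - 1))
    best = 1 if arcs else 0
    for k in range(2, len(arcs) + 1):
        for sub in combinations(arcs, k):
            if all(a[0] < b[0] < a[1] < b[1] for a, b in combinations(sub, 2)):
                best = k
    return best
```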
Recall that a partition $P \in \Pi_n$ is \emph{$k$-noncrossing} if
$\mathrm{cr}(P) < k$, and is \emph{$k$-nonnesting} if $\mathrm{ne}(P) <k$.
A partition $P$ has
no $k$-crossings and no $j$-nestings
if and only if for all the partitions $\lambda^i$ of $\phi(P)$,
the diagram
fits into a $(k-1)\times (j-1)$ rectangle. Taking the conjugate of each
partition, we get bijective proofs of Corollaries \ref{ncn} and
\ref{nc_nn}.
Theorem \ref{thm1} asserts the symmetric distribution of $\mathrm{cr}(P)$ and
$\mathrm{ne}(P)$ over $P_n(S, T)$, for all $S, T \subseteq [n]$ with $|S|=|T|$.
Not every $P_n(S,T)$ is nonempty. A set
$P_n(S, T)$ is nonempty if and only if for all $i \in [n]$,
$|S \cap [i]| \geq |T \cap [i]|$. Another way to describe the nonempty
$P_n(S, T)$
is to use lattice paths. Associate to each pair $(S,T)$ a lattice path
$L(S, T)$ with steps $(1, 1)$, $(1, -1)$ and $(1, 0)$:
start from $(0,0)$, read the integers $i$ from $1$ to $n$ one by one, and
move two steps for each $i$. \\
\mbox{} \hspace{2em} 1. If $i \in S \cap T$, move $(1, 0)$ twice.\\
\mbox{} \hspace{2em} 2. If $i \in S\setminus T$, move $(1,0)$ then $(1,1)$.\\
\mbox{} \hspace{2em} 3. If $i \in T\setminus S$, move $(1,-1)$ then $(1,0)$.\\
\mbox{} \hspace{2em} 4. If $i \notin S\cup T$, move $(1,-1)$ then $(1,1)$. \\
This defines a lattice path $L(S, T)$ from $(0,0)$ to $(2n, 0)$.
Conversely, the path uniquely determines $(S, T)$.
Then $P_n(S, T) $ is nonempty if and only if the lattice path $L(S,
T)$ is a Motzkin path, i.e., never goes below the $x$-axis.
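The path $L(S,T)$ and the nonemptiness criterion can be sketched as follows (a Python sketch of ours; names hypothetical):

```python
def L_path(n, S, T):
    """Heights a_1, ..., a_{2n} of the lattice path L(S, T)."""
    h, heights = 0, []
    for i in range(1, n + 1):
        if i in S and i in T:
            moves = (0, 0)       # (1,0) twice
        elif i in S:
            moves = (0, 1)       # (1,0) then (1,1)
        elif i in T:
            moves = (-1, 0)      # (1,-1) then (1,0)
        else:
            moves = (-1, 1)      # (1,-1) then (1,1)
        for m in moves:
            h += m
            heights.append(h)
    return heights

def nonempty(n, S, T):
    """P_n(S, T) is nonempty iff L(S, T) is a Motzkin path."""
    p = L_path(n, S, T)
    return min(p) >= 0 and p[-1] == 0
```

For instance, $P_3(\{1\},\{3\})$ is nonempty (the one-block partition $123$ works), while $P_2(\{2\},\{1\})$ is empty because its path dips below the $x$-axis.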
There are existing notions of noncrossing partitions and nonnesting
partitions, e.g., \cite[Ex.6.19]{Stanley99}.
A \emph{noncrossing partition of $[n]$} is a partition of $[n]$
in which no two blocks ``cross'' each other, i.e.,
if $a < b < c <d$ and $a, c$ belong to a block $B$ and $b, d$ to
another block $B'$,
then $B=B'$. A \emph{nonnesting partition of $[n]$} is a partition
of $[n]$ such that if $a, e$ appear in a block $B$ and $b, d$ appear
in a different block $B'$ where $a < b < d< e$, then there is a $c
\in B$ satisfying $b < c < d$.
It is easy to see that $P$ is a noncrossing partition
if and only if the standard representation of $P$ has no 2-crossing, and
$P$ is a nonnesting partition if and only if the standard representation of $P$
has no 2-nesting. Hence
the vacillating tableau correspondence, in the case of 1-row/column
tableaux, gives bijections
between noncrossing partitions of $[n]$, nonnesting partitions
of $[n]$, (both are counted by Catalan numbers) and
sequences $0=a_0, a_1, \dots, a_{2n}=0$ of nonnegative integers such that
$a_{2i+1} = a_{2i}$ or $a_{2i}-1$, and $a_{2i} = a_{2i-1}$ or
$a_{2i-1}+1$. These sequences $a_0, \dots, a_{2n}$ give a new combinatorial
interpretation of the Catalan numbers.
Replacing a term $a_{i+1} = a_i + 1$ with a step $(1,1)$, a
term $a_{i+1} = a_i - 1$ with a step $(1,-1)$, and a term $a_{i+1} = a_i$
with a step $(1,0)$, we get a Motzkin path, so we also have a bijection
between noncrossing/nonnesting partitions and certain Motzkin paths.
The Motzkin paths are exactly the ones defined as $L(S, T)$, where
$S=\min(P)$ and $T=\max(P)$.
Conversely, given a Motzkin path of the form $L(S, T)$,
we can recover uniquely a noncrossing partition and a nonnesting
partition. Write the path as $\{(i, a_i): 0\leq i \leq 2n\}$.
Let $A=[n] \setminus T$ and $B=[n]\setminus S$.
Clearly $|A|=|B|$.
Assume
$A=\{ i_1, i_2, \dots, i_t\}_<$, and $B=\{ j_1, j_2, \dots, j_t\}_<$
where elements are listed in increasing order. Then to get
the standard representation of the noncrossing partition, pair each
$j_r$ with $\max\{i_s \in A: i_s < j_r, a_{2i_s}=a_{2j_r-2}\}$.
To get the standard representation of the nonnesting partition,
pair each $j_r $ with $i_r$, for $1\leq r \leq t$.
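The recovery procedure can be sketched directly from these pairing rules (a Python sketch of ours; the name `recover` is hypothetical):

```python
def recover(n, S, T):
    """From the Motzkin path L(S, T), recover the arc sets of the unique
    noncrossing and nonnesting partitions in P_n(S, T)."""
    a = [0]                                  # a[i] = a_i, heights of the path
    for i in range(1, n + 1):
        steps = ((0, 0) if i in S and i in T else
                 (0, 1) if i in S else
                 (-1, 0) if i in T else (-1, 1))
        for m in steps:
            a.append(a[-1] + m)
    A = sorted(set(range(1, n + 1)) - set(T))
    B = sorted(set(range(1, n + 1)) - set(S))
    # noncrossing: pair j with the largest i in A before j at matching height
    noncross = [(max(i for i in A if i < j and a[2 * i] == a[2 * j - 2]), j)
                for j in B]
    # nonnesting: pair the r-th element of A with the r-th element of B
    nonnest = list(zip(A, B))
    return noncross, nonnest
```

For $n=4$, $S=\{1,2\}$, $T=\{3,4\}$ this returns the arcs of the noncrossing partition $14\text{-}23$ and of the nonnesting partition $13\text{-}24$.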
{\sc Remark.}
In our definition, a $k$-crossing is defined as a set of
$k$ mutually crossing arcs in the standard representation of the
partition.
There exist some other definitions. For example, in \cite{Klazar03}
M. Klazar defined \emph{3-noncrossing partition} as
a partition $P$ which does not have 3 mutually crossing blocks.
It can be seen that $P$ is 3-noncrossing in Klazar's sense
if and only if there do not
exist 6 elements
$a_1 < b_1 < c_1 < a_2 < b_2 < c_2$ in $[n]$ such that $a_1, a_2 \in A$,
$b_1, b_2 \in B$, $c_1, c_2 \in C$, and
$A, B, C$ are three distinct blocks of $P$.
Klazar's definition of 3-noncrossing partitions is different from ours.
For example, let $P$ be the partition 15-246-37 of $[7]$, with
standard representation as follows:
\begin{center}
\begin{picture}(120,30)
\setlength{\unitlength}{3mm}
\put(0,0){\circle*{0.4}}
\put(2,0){\circle*{0.4}}
\put(4,0){\circle*{0.4}}
\put(6,0){\circle*{0.4}}
\put(8,0){\circle*{0.4}}
\put(10,0){\circle*{0.4}}
\put(12,0){\circle*{0.4}}
\qbezier(0,0)(4,5)(8,0)
\qbezier(2,0)(4,3)(6,0)
\qbezier(4,0)(9,5)(12,0)
\qbezier(6,0)(8,3)(10,0)
\put(0,-1){$1$}
\put(2,-1){$2$}
\put(4,-1){$3$}
\put(6,-1){$4$}
\put(8,-1){$5$}
\put(10,-1){$6$}
\put(12,-1){$7$}
\end{picture}
\end{center}
\medskip
According to Klazar's definition, $P$
has a 3-crossing, since $1<2<3<5<6<7$ and the pairs $\{1,5\}$, $\{2,6\}$,
$\{3,7\}$ lie in three distinct blocks.
On the other hand, $P$ has no 3-crossing in our sense.
For general $k$, these three notions of $k$-noncrossing partitions, i.e.,
(1) no $k$-crossing in the standard representation of $P$,
(2) no $k$ mutually crossing arcs in distinct blocks of $P$,
and (3) no $k$ mutually crossing blocks, are all different, with
the first being the weakest, and the third the strongest.
\section{ A Variant: Partitions and Hesitating Tableaux}
We may also consider the \emph{enhanced} crossing/nesting of a partition,
by taking isolated points into consideration. For a partition $P$ of $[n]$,
let the \emph{enhanced representation} of $P$ be
the union of the standard representation of $P$ and the
loops $\{(i,i): i \text{ is an isolated point of $P$}\}$.
An \emph{enhanced $k$-crossing} of $P$
is a set of $k$ edges $(i_1, j_1), (i_2, j_2), \dots, (i_k, j_k)$ of
the enhanced representation of $P$
such that $i_1 < i_2 < \cdots < i_k \leq j_1 < j_2 < \cdots < j_k$.
In particular, two arcs of the form $(i,j)$ and $(j,l)$
with $i <j <l$ are viewed as crossing.
Similarly, an \emph{enhanced $k$-nesting} of $P$ is a set of
$k$ edges $(i_1, j_1), (i_2, j_2), \dots, (i_k, j_k)$ of the
enhanced representation of $P$ such that
$i_1 < i_2 < \cdots < i_k \leq j_k < \cdots < j_2 < j_1$.
In particular, an edge $(i,k)$ and an isolated point $j$ with $i < j<k$
form an enhanced 2-nesting.
Let $\overline{\mathrm{cr}}(P)$ be the size of the largest enhanced crossing,
and $\overline{\mathrm{ne}}(P)$ the size of the largest enhanced nesting.
Using a variant form of vacillating tableau, we obtain
again a symmetric joint distribution of
the statistics $\overline{\mathrm{cr}}(P)$ and $\overline{\mathrm{ne}}(P)$.
The variant tableau is a
\emph{hesitating tableau} of shape $\emptyset$ and length $2n$,
which is a path on the Hasse diagram of Young's lattice from $\emptyset$
to $\emptyset$ in which each step consists of a pair of moves: either
(i) doing nothing and then adding a square, (ii) removing a square and
then doing nothing, or (iii) adding a square and then removing a square.
\begin{example} \label{GOT}
There are 5 hesitating tableaux of shape $\emptyset$ and length
$6$. They are
\begin{eqnarray} \label{GOT_3}
\begin{array}{ccccccc}
\emptyset & 1 & \emptyset & 1 & \emptyset & 1 & \emptyset \\
\emptyset & 1 & \emptyset & \emptyset & 1 & \emptyset& \emptyset \\
\emptyset & \emptyset & 1 & 11 & 1 & \emptyset& \emptyset \\
\emptyset & \emptyset & 1 & \emptyset & \emptyset & 1 & \emptyset \\
\emptyset & \emptyset & 1 & 2 & 1 & \emptyset &\emptyset
\end{array}
\end{eqnarray}
\end{example}
To see the equivalence with vacillating tableaux, let
$U$ be the operator that takes a shape to the sum of all shapes that
cover it in Young's lattice (i.e., by adding a square), and similarly
$D$ takes a shape to the sum of all shapes that it covers in Young's
lattice (i.e., by deleting a square). Then, as is well-known,
$DU-UD=I$ (the identity operator). See,
e.g.\ \cite{Stanley88}\cite[Exer.~7.24]{Stanley99}. It follows that
\begin{eqnarray} \label{opr}
(U+I)(D+I)=DU+ID+UI.
\end{eqnarray}
Iterating the left-hand side generates vacillating tableaux, and
iterating the right-hand side gives the hesitating tableaux defined
above.
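The identity can be checked numerically: iterating the three kinds of paired moves counts hesitating tableaux of empty shape, and the counts agree with the Bell numbers, as for vacillating tableaux. A self-contained Python sketch of ours (function names hypothetical):

```python
from collections import Counter

def add_corners(lam):
    """All shapes obtained from lam by adding one square."""
    lam = list(lam)
    out = []
    for i in range(len(lam) + 1):
        new = lam[:i] + [(lam[i] + 1) if i < len(lam) else 1] + lam[i + 1:]
        if i == 0 or new[i] <= lam[i - 1]:
            out.append(tuple(new))
    return out

def del_corners(lam):
    """All shapes obtained from lam by deleting one (corner) square."""
    lam = list(lam)
    out = []
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            new = lam[:i] + [lam[i] - 1] + lam[i + 1:]
            if new[-1] == 0:
                new.pop()
            out.append(tuple(new))
    return out

def hesitating_empty(n):
    """Number of hesitating tableaux of empty shape and length 2n."""
    state = Counter({(): 1})
    for _ in range(n):
        nxt = Counter()
        for lam, c in state.items():
            for mu in add_corners(lam):        # (i) nothing, then add
                nxt[mu] += c
            for mu in del_corners(lam):        # (ii) remove, then nothing
                nxt[mu] += c
            for mu in add_corners(lam):        # (iii) add, then remove
                for nu in del_corners(mu):
                    nxt[nu] += c
        state = nxt
    return state[()]
```

In particular `hesitating_empty(3)` returns $5$, matching the five tableaux of Example~\ref{GOT}.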
A bijective map between partitions of $[n]$ and hesitating
tableaux of empty shape has been given by Korn \cite{Korn},
based on growth diagrams. Here by modifying the map $\phi$ defined
in Section~\ref{sec3} we get a more direct bijection $\bar \phi$
between partitions and hesitating tableaux of empty shape, which
leads to the symmetric joint distribution of $\overline{\mathrm{cr}}(P)$
and $\overline{\mathrm{ne}}(P)$. The construction and proofs are very
similar to the ones given in Sections~\ref{sec2} and \ref{sec3},
and hence are omitted here. We will only state the definition of
the map $\bar \phi$ from partitions to hesitating tableaux, to be
compared with the map $\phi$ in Section~\ref{sec3}.
{\bf The Bijection $\bar \phi$ from Partitions to Hesitating Tableaux}. \\
Given a partition $P \in\Pi_n$ with the enhanced representation,
we construct the sequence of
SYT's, and hence the hesitating tableau $\bar \phi(P)$, as follows:
Start from the empty SYT by letting $T_{2n}=\emptyset$,
read the numbers $j \in [n]$ one by one from $n$ to $1$, and
define two SYT's $T_{2j-1}$, $T_{2j-2}$ for each $j$.
When $j$ is a lefthand endpoint only, or a righthand endpoint only,
the construction is identical to that of the map $\phi$. Otherwise, \\
1. If $j$ is an isolated point, first insert $j$, then delete $j$.\\
2. If $j$ is the righthand endpoint of an arc $(i,j)$,
and the lefthand endpoint of another arc $(j,k)$, then
insert $i$ first, and then delete $j$.
\begin{example} \label{GOT_SYT}
For the partition 1457-26-3 of $[7]$ in Figure \ref{Ex_2},
the corresponding SYT's are
\begin{eqnarray*}
\begin{array}{lllllllllllllll}
\emptyset & \emptyset & 1 & 1 & 1 & 13 & 1 & 14 & 24 & 24 & 2 & 5 & 5 & \emptyset
& \emptyset\\[-.05in]
& & & & 2 & 2 & 2 & 2 & & 5 & 5 & & & &
\end{array}
\end{eqnarray*}
The hesitating tableau $\bar \phi(P)$ is
\[
\emptyset, \emptyset, 1, 1, 11, 21, 11, 21, 2, 21, 11, 1, 1, \emptyset, \emptyset.
\]
\end{example}
\begin{figure}[ht]
\begin{center}
\begin{picture}(120,40)
\setlength{\unitlength}{4mm} \multiput(0,0)(2,0){7}{\circle*{0.4}}
\qbezier(0,0)(3,4)(6,0) \qbezier(2,0)(6,5)(10,0)
\qbezier(6,0)(7,2)(8,0) \qbezier(8,0)(10,4)(12,0)
\put(4,0.5){\circle{1}} \put(0,-1){$1$}
\put(2,-1){$2$}\put(4,-1){$3$}\put(6,-1){$4$}\put(8,-1){$5$}\put(10,-1){$6$}\put(12,-1){$7$}
\end{picture}
\end{center}
\caption{The enhanced representation of the partition 1457-26-3.}
\label{Ex_2}
\end{figure}
The conjugation of shapes does not preserve $\min(P)$ or $\max(P)$.
Instead, it preserves $\min(P)\!\setminus\! \max(P)$, and $\max(P)\!
\setminus\! \min(P)$. Let $S, T$ be disjoint subsets of $[n]$ with the
same cardinality,
$\bar P_n(S, T)=\{ P \in \Pi_n: \min(P)\!\setminus\! \max(P)=S,
\max(P) \!\setminus\! \min(P)=T\}$, and
$\bar f_{n, S, T}(i,j)=\#\{ P \in \bar P_n(S, T): \overline{\mathrm{cr}}(P)=i,
\overline{\mathrm{ne}}(P)=j\}$.
\begin{theorem}
We have
\[
\bar f_{n,S, T}(i,j)=\bar f_{n, S, T} (j, i). \qquad \Box
\]
\end{theorem}
As a consequence, Corollaries \ref{ncn} and \ref{nc_nn} remain valid
if we define $k$-noncrossing (or $k$-nonnesting) by $\overline{\mathrm{cr}}(P)<k$
(or $\overline{\mathrm{ne}}(P)<k$).
\textsc{Remark.} As done for vacillating tableaux, one can extend the
definition of hesitating tableaux by considering moves from $\emptyset$ to
$\lambda$, and denote by $f_{\lambda}(n)$ the number of such hesitating
tableaux of length $2n$. Identity \eqref{opr} implies that
$f_{\lambda}(n)=g_{\lambda}(n)$, the number of vacillating tableaux from
$\emptyset $ to $\lambda$. It follows that $f_{\emptyset}(n)=B(n)$, and
\[
\sum_\lambda f_\lambda(n) f_\lambda(m)=B(m+n),
\]
where $B(n)$ is the $n$th Bell number. For further discussion of the
number $f_\lambda(n)$, see \cite[Problem 33]{Stanley_web} (version of
17 August 2004).
\section{Enumeration of $k$-Noncrossing Matchings}
Restricting Theorem~\ref{thm1} to disjoint subsets $(S, T)$ of $[n]$,
where $n=2m$ and $|S|=|T|=m$, we get the symmetric joint distribution of the
crossing number and nesting number for matchings, as stated in
Corollary \ref{matching} in Section 1.
In a complete matching, an integer is either a left endpoint or a
right endpoint in the standard representation. In applying the map
$\phi$ to complete matchings on $[2m]$, if we remove all steps which
do nothing, we obtain a sequence $\emptyset=\lambda^0$, $\lambda^1, \dots,
\lambda^{2m} =\emptyset$ of partitions such that for all $1 \leq i \leq 2m$,
the diagram of $\lambda^i$ is obtained from that of $\lambda^{i-1}$ by
either adding one square or removing one square. Such a sequence is
called an \emph{oscillating tableau} (or \emph{up-down tableau}) of
empty shape and length $2m$. Thus we get a bijection between complete
matchings on $[2m]$ and oscillating tableaux of empty shape and
length $2m$. This bijection was originally constructed by the fourth
author, and then extended by Sundaram \cite{Sundaram90} to arbitrary
shapes to give a combinatorial proof of the Cauchy identity for the
symplectic group Sp$(2m)$. The explicit description of the bijection
has appeared in \cite{Sundaram90} and was included in \cite[Exercise
7.24]{Stanley99}. Oscillating tableaux first appeared (though not with
that name) in \cite{berele}.
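Oscillating tableaux of empty shape and length $2m$ are counted by $1\cdot 3\cdots(2m-1)$, the number of matchings on $[2m]$, which one can confirm by the same kind of dynamic programming on Young's lattice. A self-contained Python sketch of ours (names hypothetical):

```python
from collections import Counter

def add_corners(lam):
    """All shapes obtained from lam by adding one square."""
    lam = list(lam)
    out = []
    for i in range(len(lam) + 1):
        new = lam[:i] + [(lam[i] + 1) if i < len(lam) else 1] + lam[i + 1:]
        if i == 0 or new[i] <= lam[i - 1]:
            out.append(tuple(new))
    return out

def del_corners(lam):
    """All shapes obtained from lam by deleting one (corner) square."""
    lam = list(lam)
    out = []
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            new = lam[:i] + [lam[i] - 1] + lam[i + 1:]
            if new[-1] == 0:
                new.pop()
            out.append(tuple(new))
    return out

def oscillating_empty(m):
    """Number of oscillating tableaux of empty shape and length 2m:
    each of the 2m moves adds or removes exactly one square."""
    state = Counter({(): 1})
    for _ in range(2 * m):
        nxt = Counter()
        for lam, c in state.items():
            for mu in add_corners(lam) + del_corners(lam):
                nxt[mu] += c
        state = nxt
    return state[()]
```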
Recall that the ordinary RSK algorithm gives a bijection between the
symmetric group $\mathfrak{S}_m$ and pairs $(P, Q)$ of SYTs of the same shape
$\lambda \vdash m$. This result and Schensted's theorem
can be viewed as special cases of what we do.
Explicitly, identify an SYT $T$ of shape $\lambda$ and content $[m]$
with a sequence $\emptyset =\lambda^0, \lambda^1, \dots, \lambda^m=\lambda$ of integer
partitions where $\lambda^i$ is the shape of the SYT $T^i$ obtained from
$T$ by deleting all entries $\{j: j >i\}$. Let $w$ be a permutation of
$[m]$, and form the matching $M_w$ on $[2m]$ with arcs between $w(i)$
and $2m-i+1$. We get an oscillating tableau $O_w$ that increases to the
shape $\lambda \vdash m$ and then decreases to the empty shape. Assume $w
\stackrel{\text{RSK}}{\longmapsto}(A(w), B(w))$, where $A(w)$ is the
(row)-insertion tableau and $B(w)$ the recording tableau. Then $A(w)$
is given by the first $m$ steps of $O_w$, and $B(w)$
the reverse of the last $m$ steps.
The size of the largest crossing (resp. nesting) of $M_w$ is exactly
the length of the longest decreasing (resp. increasing) subsequence of $w$.
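This correspondence is easy to check by brute force for small $m$. The following Python sketch (ours, not part of the paper; all function names are ours) builds $M_w$ and compares the size of its largest crossing with the length of the longest decreasing subsequence of $w$:

```python
from itertools import combinations

def matching_of(w):
    """The matching M_w on [2m] with arcs between w(i) and 2m-i+1."""
    m = len(w)
    return [(w[i - 1], 2 * m - i + 1) for i in range(1, m + 1)]

def max_crossing(arcs):
    """cr(M): size of the largest set of mutually crossing arcs.
    Arcs (i1,j1),...,(ik,jk), sorted by left endpoint, mutually cross
    iff i1 < i2 < ... < ik < j1 < j2 < ... < jk."""
    arcs = sorted(tuple(sorted(a)) for a in arcs)
    best = 0
    for r in range(1, len(arcs) + 1):
        for sub in combinations(arcs, r):
            rights = [b for a, b in sub]
            if sub[-1][0] < rights[0] and rights == sorted(rights):
                best = r
    return best

def longest_decreasing(w):
    """Length of the longest decreasing subsequence of w."""
    best = [1] * len(w)
    for i in range(len(w)):
        for j in range(i):
            if w[j] > w[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)
```

For example, $w = 231$ gives the arcs $\{2,6\}, \{3,5\}, \{1,4\}$ of the example below, with largest crossing of size $2$, matching the longest decreasing subsequence of $231$.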
\begin{example}
Let $w=231$. Then
\begin{eqnarray*}
A(w)=\begin{array}{cc} 1 & 3 \\
2 & \end{array},
\qquad
B(w)=\begin{array}{cc} 1 & 2 \\
3 & \end{array}.
\end{eqnarray*}
The matching $M_w$ and the corresponding oscillating tableau are given
below.
\begin{center}
\begin{picture}(120,30)
\setlength{\unitlength}{3mm}
\put(0,1){\circle*{0.4}}
\put(3,1){\circle*{0.4}}
\put(6,1){\circle*{0.4}}
\put(9,1){\circle*{0.4}}
\put(12,1){\circle*{0.4}}
\put(15,1){\circle*{0.4}}
\qbezier(0,1)(4,4)(9,1)
\qbezier(3,1)(9,5)(15,1)
\qbezier(6,1)(9,3)(12,1)
\put(0,0){$1$}
\put(3,0){$2$}
\put(6,0){$3$}
\put(9,0){$4$}
\put(12,0){$5$}
\put(15,0){$6$}
\put(0,-1.5){$\emptyset$}
\put(3,-1.5){$1$}
\put(6,-1.5){$11$}
\put(9,-1.5){$21$}
\put(12,-1.5){$2$}
\put(15,-1.5){$1$}
\put(18,-1.5){$\emptyset$}
\end{picture}
\end{center}
\end{example}
\vspace{1em}
\textsc{Remark.} The \emph{Brauer algebra} $\mathfrak{B}_m$ is a
certain semisimple algebra, say over $\mathbb{C}$, whose dimension is the
number $1\cdot 3\cdots(2m-1)$ of matchings on $[2m]$. (The
algebra $\mathfrak{B}_m$ depends on a parameter $x$ which is
irrelevant here.) Oscillating tableaux of length $2m$ are related to
irreducible representations of $\mathfrak{B}_m$ in the same way that SYT of
content $[m]$ are related to irreducible representations of the
symmetric group $\mathfrak{S}_m$ and that vacillating tableaux of length $2m$ are
related to irreducible representations of the partition algebra
$\mathfrak{P}_m$. In particular, the irreducible representations
$J_\lambda$ of $\mathfrak{B}_m$ are indexed by partitions $\lambda$
for which there exists an oscillating tableau of shape $\lambda$ and
length $2m$, and $\dim J_\lambda$ is the number of such oscillating tableaux.
See e.g.\ \cite[Appendix~B6]{b-r} for further information.
Next we use the bijection between complete matchings and oscillating
tableaux to study the enumeration of $k$-noncrossing matchings. All
the results in the following hold for $k$-nonnesting matchings as
well.
For complete matchings, Theorem~\ref{lattice_P} becomes the following.
\begin{cor} \label{lattice}
The number of $k$-noncrossing matchings of $[2m]$ is equal to the
number of closed lattice walks of length $2m$ in the set
\[
V_k=\{ (a_1, a_2, \dots, a_{k-1}): a_1 \geq a_2 \geq \cdots \geq
a_{k-1} \geq 0, a_i \in\mathbb{Z} \}
\]
from the origin to itself with unit steps in any coordinate direction
or its negative. ~~~~$\Box$
\end{cor}
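For small $m$ the corollary can be verified by brute force. The following Python sketch (ours; all names are ours) enumerates matchings, tests for $k$-crossings directly from the definition, and counts the closed walks in $V_k$ by dynamic programming:

```python
from itertools import combinations

def matchings(points):
    """Yield all complete matchings on the (sorted) list of points."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i, p in enumerate(rest):
        for partial in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, p)] + partial

def has_k_crossing(arcs, k):
    """True if some k arcs mutually cross, i.e. i1<...<ik<j1<...<jk."""
    for sub in combinations(sorted(arcs), k):
        rights = [b for a, b in sub]
        if sub[-1][0] < rights[0] and rights == sorted(rights):
            return True
    return False

def walks(k, length):
    """Closed walks of the given length from the origin to itself in
    V_k = {(a_1,...,a_{k-1}) : a_1 >= ... >= a_{k-1} >= 0}, steps +-e_i."""
    origin = (0,) * (k - 1)
    counts = {origin: 1}
    for _ in range(length):
        new = {}
        for state, cnt in counts.items():
            for i in range(k - 1):
                for d in (1, -1):
                    s = list(state)
                    s[i] += d
                    s = tuple(s)
                    # stay in the chamber a_1 >= ... >= a_{k-1} >= 0
                    if s[-1] >= 0 and all(s[t] >= s[t + 1] for t in range(k - 2)):
                        new[s] = new.get(s, 0) + cnt
        counts = new
    return counts.get(origin, 0)
```

For $k=2$ the walk count reduces to the Catalan numbers, as expected from part 1 of Corollary~\ref{lattice23} below.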
Restricted to the cases $k=2, 3$, Corollary~\ref{lattice} leads to some
nice combinatorial correspondences. Recall that a \emph{Dyck path} of
length $2m$ is a lattice path in the plane from the origin $(0,0)$ to
$(2m, 0)$ with steps $(1,1)$ and $(1, -1)$, that never passes below
the $x$-axis. A pair $(P, Q)$ of Dyck paths is \emph{noncrossing} if
they have the same origin and the same destination, and $P$ never goes
below $Q$.
\begin{cor} \label{lattice23}
\begin{enumerate}
\item The set of 2-noncrossing matchings is in one-to-one
correspondence with the set of Dyck paths.
\item The set of 3-noncrossing matchings is in one-to-one
correspondence with the set of pairs of noncrossing Dyck paths.
\end{enumerate}
\end{cor}
\noindent {\bf Proof}.
By Corollary~\ref{lattice}, $2$-noncrossing matchings are in
one-to-one correspondence with closed lattice paths
$\{\vec v_i=(x_i)\}_{i=0}^{2m}$
with $x_0=x_{2m}=0$, $x_i \geq 0$ and $x_{i+1}-x_i=\pm 1$.
Given such a 1-dimensional lattice path, define a lattice path in the
plane by
letting $P=\{ (i, x_i) \,|\, i=0, 1, \dots, 2m\}$.
Then $P$ is a Dyck path, and this gives the desired correspondence.
For $k=3$, $3$-noncrossing matchings are in
one-to-one correspondence with 2-dimensional lattice paths $\{
\vec v_i=(x_i, y_i) \}_{i=0}^{2m}$
with $(x_0, y_0)=(x_{2m}, y_{2m})=(0,0)$, $x_i \geq y_i \geq 0$ and
$(x_{i+1}, y_{i+1})-(x_i, y_i)=(\pm 1, 0)$ or $(0, \pm 1)$. Given
such a lattice path, define two lattice paths in the plane by
setting $P=\{(i, x_i+y_i) \,|\, i=0, 1, \dots, 2m\}$ and
$Q=\{(i, x_i-y_i) \,|\, i=0, 1, \dots, 2m\}$. Then $(P, Q)$ is a pair of
noncrossing Dyck paths. It is easy to see that this is a bijection.
~~~$\Box$
\begin{example}
We illustrate the bijections between $3$-noncrossing matchings,
oscillating tableaux, and pairs of noncrossing Dyck paths.
The oscillating tableau is
\[
\emptyset, 1, 2, 21, 31, 21, 11,
21, 2, 1, \emptyset.
\]
The sequence of SYT's as defined in the bijection is
\[
\emptyset, 1, 12,
\begin{array}{l} 1~2\\[-.05in] 3\end{array},
\begin{array}{l} 1~2~4\\[-.05in] 3\end{array},
\begin{array}{l} 1~2\\[-.05in] 3\end{array},
\begin{array}{l} 1\\[-.05in] 3\end{array},
\begin{array}{l} 1~7\\[-.05in] 3\end{array},\
37,\ 3,\ \emptyset.
\]
The corresponding matching and the pair of noncrossing
Dyck paths are given in
Figure~\ref{bijs}.
\begin{figure}[ht]
\begin{center}
\begin{picture}(380,80)
\setlength{\unitlength}{5mm}
\multiput(0.5,0)(1,0){10}{\circle*{0.2}}
\qbezier(0.5,0)(4,3)(7.5,0)\put(0.35,-0.8){1} \put(7.35,-0.8){8}
\qbezier(1.5,0)(3.5,1.6)(5.5,0)\put(1.35,-0.8){2}
\put(5.35,-0.8){6} \qbezier(2.5,0)(6,2.4)(9.5,0)\put(2.35,-0.8){3}
\put(9.3,-0.8){10} \qbezier(3.5,0)(4,0.6)(4.5,0)\put(3.35,-0.8){4}
\put(4.35,-0.8){5} \qbezier(6.5,0)(7.5,1.2)(8.5,0)
\put(6.35,-0.8){7}\put(8.35,-0.8){9}
\put(12,0){\line(1,1){1}}\put(12,0){\circle*{0.2}}
\put(13,1){\line(1,1){1}}\put(13,1){\circle*{0.2}}
\put(14,2){\line(1,1){1}}\put(14,2){\circle*{0.2}}
\put(15,3){\line(1,1){1}}\put(15,3){\circle*{0.2}}
\put(16,4){\line(1,-1){1}}\put(16,4){\circle*{0.2}}
\put(17,3){\line(1,-1){1}}\put(17,3){\circle*{0.2}}
\put(18,2){\line(1,1){1}}\put(18,2){\circle*{0.2}}
\put(19,3){\line(1,-1){1}}\put(19,3){\circle*{0.2}}
\put(20,2){\line(1,-1){1}}\put(20,2){\circle*{0.2}}
\put(21,1){\line(1,-1){1}}\put(21,1){\circle*{0.2}}
\put(22,0){\circle*{0.2}}
\put(12,-0.4){\line(1,1){1}}\put(12,-0.4){\circle*{0.2}}
\put(13,0.6){\line(1,1){1}}\put(13,0.6){\circle*{0.2}}
\put(14,1.6){\line(1,-1){1}}\put(14,1.6){\circle*{0.2}}
\put(15,0.6){\line(1,1){1}}\put(15,0.6){\circle*{0.2}}
\put(16,1.6){\line(1,-1){1}}\put(16,1.6){\circle*{0.2}}
\put(17,0.6){\line(1,-1){1}}\put(17,0.6){\circle*{0.2}}
\put(18,-0.4){\line(1,1){1}}\put(18,-0.4){\circle*{0.2}}
\put(19,0.6){\line(1,1){1}}\put(19,0.6){\circle*{0.2}}
\put(20,1.6){\line(1,-1){1}}\put(20,1.6){\circle*{0.2}}
\put(21,0.6){\line(1,-1){1}}\put(21,0.6){\circle*{0.2}}
\put(22,-0.4){\circle*{0.2}}
\end{picture}
\end{center}
\caption{The matching and the pair of noncrossing Dyck paths.}
\label{bijs}
\end{figure}
\end{example}
Let $f_k(m)$ be the number of $k$-noncrossing matchings of $[2m]$. By
Corollary~\ref{lattice} it is also the number of lattice paths of length
$2m$ in the region $V_k$ from the origin to itself with step set
$\{ \pm \epsilon_1, \pm \epsilon_2, \dots, \pm \epsilon_{k-1}\}$. Set
\[
F_k(x)=\sum_m f_k(m) \frac{x^{2m}}{(2m)!}.
\]
It turns out that a determinantal expression for $F_k(x)$ has been given
by Grabiner and Magyar \cite{GM93}.
It is simply the case $\lambda =
\eta = (m,m-1,\dots,1)$ of equation (38) in \cite{GM93}, giving
\begin{eqnarray}\label{gm}
F_k(x) = \det\left[ I_{i-j}(2x) - I_{i+j}(2x) \right]_{i,j=1}^{k-1},
\end{eqnarray}
where
\[
I_m(2x) = \sum_{j\geq 0}\frac{ x^{m+2j}}{j!(m+j)!},
\]
the hyperbolic Bessel function of the first kind of order $m$ \cite{WW27}.
One can easily check that when $k=2$, the
generating function of $2$-noncrossing matchings equals
$$F_2(x)=I_0(2x)-I_2(2x)=\sum_{j\geq 0}C_j\frac{x^{2j}}{(2j)!},$$
where $C_j$ is the $j$-th Catalan number. When $k=3$, we have
\begin{eqnarray*}
f_3(m) &=&\frac{3!\,(2m)!\,(2m+2)!}{m!(m+1)!(m+2)!(m+3)!}\\
&=&C_{m}C_{m+2}-C_{m+1}^2.
\end{eqnarray*}
This result agrees
with the formula on the number of pairs of noncrossing Dyck paths
due to Gouyou-Beauchamps in \cite{GB89}.
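Both expressions for $f_3(m)$ are easy to check numerically. The Python sketch below (ours) compares the closed form $3!\,(2m)!\,(2m+2)!/(m!(m+1)!(m+2)!(m+3)!)$ with $C_mC_{m+2}-C_{m+1}^2$:

```python
from math import comb, factorial

def catalan(n):
    """The n-th Catalan number C_n = (2n)! / (n!(n+1)!)."""
    return comb(2 * n, n) // (n + 1)

def f3(m):
    """f_3(m) = 3!(2m)!(2m+2)! / (m!(m+1)!(m+2)!(m+3)!)."""
    return (6 * factorial(2 * m) * factorial(2 * m + 2)
            // (factorial(m) * factorial(m + 1)
                * factorial(m + 2) * factorial(m + 3)))
```

The first values are $f_3(1), f_3(2), f_3(3) = 1, 3, 14$.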
{\sc Remark}.
The determinant formula \eqref{gm} has been studied by Baik and Rains
in \cite[Eq.~(2.25)]{BR00}. One simply puts $i-1$ for $j$ and
$j-1$ for $k$ in (2.25) of \cite{BR00} to get our formula.
The same formula was also obtained by Goulden \cite{Goulden}
as the generating function for fixed point free permutations with no
decreasing subsequence of length greater than $2k$. See Theorem 1.1 and 2.3
of \cite{Goulden} and specialize $h_i$ to be $x^i/i!$, so $g_l$ becomes the
hyperbolic Bessel function.
The asymptotic distribution of $\mathrm{cr}(M)$ follows from another
result of Baik and Rains. In Theorem 3.1 of \cite{BR01} they obtained the
limit distribution for the length of the longest decreasing
subsequence of fixed point free involutions $w$. But representing $w$
as a matching $M$, the condition that $w$ has no decreasing
subsequence of length $2k+1$ is equivalent to the condition that $M$
has no $(k+1)$-nesting, and we already know that $\mathrm{cr}(M)$ and
$\mathrm{ne}(M)$ have the same distribution. Combining the above
results, one has
\[ \lim_{m\rightarrow \infty} \mathrm{Pr}\left(
\frac{\mathrm{cr}(M)-\sqrt{2m}}{(2m)^{1/6}}\leq \frac{x}{2}\right) = F_1(x),
\]
where
$$ F_1(x) = \sqrt{F(x)}\exp\left( \frac 12\int_x^\infty
u(s)\,ds\right), $$
with $F(x)$ the Tracy--Widom distribution and $u(x)$ the Painlev\'e
II function.
Similarly one can try to enumerate complete matchings of $[2m]$ with
no $(k+1)$-crossing and no $(j+1)$-nesting. By the oscillating tableau
bijection this is just the number of walks of length $2m$ from
$\hat{0}$ to $\hat{0}$ in the Hasse diagram of the poset $L(k,j)$,
where $\hat{0}$ denotes the unique bottom element (the empty
partition) of $L(k,j)$, the lattice of integer partitions whose shape
fits in a $k \times j$ rectangle, ordered by inclusion. Let
$g_{k,j}(m)$ be this number, and $G_{k,j}(x)=\sum_m g_{k,j}(m) x^{2m}$
be the generating function.
For $j=1$, the number $g_{k,1}(m)$ counts lattice paths
from $(0,0)$ to $(2m,0)$ with steps $(1, 1)$ or $(1, -1)$ that stay
between the lines $y=0$ and $y=k$. The evaluation of $g_{k,1}(m)$ was
first considered by Tak\'acs in \cite{Takacs62} by a probabilistic argument.
An explicit formula and generating function for this case are well known.
For example, in \cite{Mohanty79} the explicit
formula is obtained by applying the reflection principle repeatedly, viz.,
\begin{eqnarray*}
g_{k,1}(m)=\sum_i \left[ \binom{2m}{m-i(k+2)}
-\binom{2m}{m+i(k+2)+k+1}\right].
\end{eqnarray*}
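The reflection-principle formula is easy to test against a direct dynamic-programming count of height-bounded paths. The Python sketch below (ours) does so for small $k$ and $m$; the sum over $i$ is finite, since the binomial coefficients vanish outside a small range:

```python
from math import comb

def c(n, r):
    """Binomial coefficient, zero outside 0 <= r <= n."""
    return comb(n, r) if 0 <= r <= n else 0

def bounded_paths(k, m):
    """Paths of length 2m with steps +-1 from height 0 back to 0,
    staying between y = 0 and y = k, counted by dynamic programming."""
    counts = {0: 1}
    for _ in range(2 * m):
        new = {}
        for y, cnt in counts.items():
            for d in (1, -1):
                if 0 <= y + d <= k:
                    new[y + d] = new.get(y + d, 0) + cnt
        counts = new
    return counts.get(0, 0)

def reflection_formula(k, m):
    """g_{k,1}(m) = sum_i [ C(2m, m - i(k+2)) - C(2m, m + i(k+2) + k + 1) ]."""
    return sum(c(2 * m, m - i * (k + 2)) - c(2 * m, m + i * (k + 2) + k + 1)
               for i in range(-m - 1, m + 2))
```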
The generating function $G_{k,1}(x)$ is a special case of the one for
the duration of the game
in the classical ruin problem, that is, restricted random walks with
absorbing barriers at $0$ and $a$, and initial position $z$.
See, for example, Equation (4.11) of Chapter 14 of \cite{feller68}: Let
\[
U_z(x)=\sum_{m=0}^\infty u_{z,m}x^m,
\]
where $u_{z,n}$ is the probability that the process ends with the $n$-th step
at the barrier $0$. Then
\[
U_z(x)=\left(\frac{q}{p}\right)^z
\frac{\lambda_1^{a-z}(x)-\lambda_2^{a-z}(x)}{\lambda_1^{a}(x)-\lambda_2^{a}(x)},
\]
where
\[
\lambda_1(x)=\frac{1+\sqrt{1-4pqx^2}}{2px}, \qquad
\lambda_2(x)=\frac{1-\sqrt{1-4pqx^2}}{2px}.
\]
The generating function
$G_{k,1}(x)$ is just $U_1(2x)/x$ with $a=k+2$, $z=1$, and $p=q=1/2$.
In general, by the transfer matrix method \cite[{\S}4.7]{ec1}
\begin{eqnarray}
G_{k,j}(x)=\frac{\det(I-xA_{k,j}(0))}{\det(I-xA_{k,j})}
\end{eqnarray}
is a rational function, where $A_{k,j}$ is the adjacency matrix
of the Hasse diagram of $L(k,j)$, and $A_{k,j}(0)$ is obtained
from $A_{k,j}$ by deleting the row and the column corresponding to
$\hat{0}$. $A_{k,j}(0)$ is also the adjacency matrix of the Hasse
diagram of $L(k,j)$ with its bottom element (the empty partition)
removed. Note that $\det(I-xA_{k,j})$ is a polynomial in $x^2$
since $L(k,j)$ is bipartite \cite[Thm.~3.11]{c-d-s}. Let
$\det(I-xA_{k,j})=p_{k,j}(x^2)$. The following is a table of
$p_{k,j}(x)$ for the values of $1 \leq k \leq j \leq 4$. (We only
need to list those with $k \leq j$ since
$p_{k,j}(x)=p_{j,k}(x)$.)
\begin{eqnarray*}
\begin{array}{|c|l|}
\hline
(k,j) & p_{k,j}(x)=\det(I-\sqrt{x}A_{k,j}) \\ \hline
(1,1) & 1-x \\
(1,2) & 1-2x \\
(1,3) & 1-3x+x^2 \\
(1,4) & (1-x)(1-3x) \\
(2,2) & (1-x)(1-5x)\\
(2,3) & (1-x)(1-3x)(1-8x+4x^2) \\
(2,4) & (1-14x+ 49x^2-49x^3)(1-6x+5x^2-x^3) \\
(3,3) & (1-x)(1-19x+83x^2-x^3)(1-5x+6x^2-x^3)^2\\
(3,4) & (1-2x)^2(1-8x+8x^2)(1-4x+2x^2)^2(1-16x+60x^2-32x^3+4x^4) \\
& \hspace{.5cm} \cdot (1-24x+136x^2-160x^3+16x^4) \\
(4,4) &
(1-x)^2(1-18x+81x^2-81x^3)^2(1-27x+99x^2-9x^3)(1-9x+18x^2-9x^3)^2 \\
& \hspace{.5cm} \cdot
(1-27x+195x^2-361x^3)(1-6x+9x^2-x^3)^2(1-9x+6x^2-x^3)^2 \\
\hline
\end{array}
\end{eqnarray*}
The polynomial $p_{k,j}(x)$ seems to have a lot of factors. We are
grateful to Christian Krattenthaler for
explaining equation
(\ref{eq:kratt}) below, from which we can explain the factorization of
$p_{k,j}(x)$. By an observation \cite[{\S}5]{grabiner} of Grabiner,
$g_{k,j}(n)$ is equal to the number of walks with $n$ steps $\pm e_i$
from $(j, j-1, \dots, 2, 1)$ to itself in the chamber
$j+k+1>x_1>x_2>\cdots>x_j>0$ of
the affine Weyl group $\tilde{C}_j$.
Write $m=j+k+1$. By
\cite[(23)]{grabiner} there follows
$$ \sum_n g_{k,j}(n)\frac{x^{2n}}{(2n)! } = \det\left[ \frac 1m \sum_{r=0}^{2m-1}
\sin(\pi ra/m)\sin(\pi rb/m)
\cdot \exp(2x\cos(\pi r/m))\right]_{a,b=1}^j.
$$
When this determinant is expanded, we obtain a linear combination of
terms of the form
\begin{equation} \exp(2x(\cos(\pi r_1/m)+\cdots+\cos(\pi r_j/m)))
= \sum_{n\geq 0} 2^n(\cos(\pi r_1/m)+\cdots+\cos(\pi r_j/m))^n
\frac{x^n}{n!}, \label{eq:exp} \end{equation}
where $0\leq r_i\leq 2m-1$ for $1\leq i\leq j$.
In fact, the case $\eta=\lambda$ of Grabiner's formula
\cite[(23)]{grabiner} shows that the number of walks of length $n$ in
the Weyl chamber from \emph{any} integral point to itself is again a
linear combination of terms $2^n(\cos(\pi r_1/m)+\cdots+\cos(\pi
r_j/m))^n$.
It follows that every eigenvalue of $A_{k,j}$ has the
form
\begin{equation} \theta=2(\cos(\pi r_1/m)+\cdots+\cos(\pi
r_j/m)). \label{eq:kratt} \end{equation}
(In particular, the Galois group over $\mathbb{Q}$ of every irreducible
factor of $p_{k,j}(x)$ is abelian.) Note that \emph{a priori} not
every such $\theta$ may be an eigenvalue, since it may appear with
coefficient $0$ after the linear combinations are taken.
The algebraic integer $\theta = 2(\cos(\pi r_1/m) +\dots +\cos(\pi r_j /m))$
lies in the field $\mathbb{Q}(\cos(\pi/m))$, an extension of $\mathbb{Q}$
of degree $\phi(2m)/2$, where $\phi$ is the Euler phi-function.
To see this, let $\zeta$ be a primitive $2m$-th root of unity. Then $\zeta$
satisfies $\zeta + 1/\zeta = 2 \cos(\pi/m)$.
Hence the field $L = \mathbb{Q}(\zeta)$ is quadratic or linear over $K =
\mathbb{Q}(\cos(\pi/m))$. Since $K$ is real and $L$ is not for $m>1$,
we cannot have $K=L$. Hence $[L:K] = 2$.
Since $[L:\mathbb{Q}] = \phi(2m)$, we have $[K:\mathbb{Q}] = \phi(2m)/2$.
It follows that
the minimal polynomial of $\theta$ over $\mathbb{Q}$ has degree
dividing $\phi(2m)/2$. Thus every irreducible factor of $\det(I-xA_{k,j})$
has degree dividing $\phi(2m)/2$,
explaining why
$p_{k,j}(x)$ has many factors. A more careful analysis
should yield more precise information about the factors of
$p_{k,j}(x)$, but we will not attempt such an analysis here.
An interesting special case of determining $p_{k,j}(x)$ is determining
its degree, since the number of eigenvalues of $A_{k,j}$ equal to 0 is
given by ${j+k\choose j}- 2\deg p_{k,j}(x)$. Equivalently, since
$A_{k,j}$ is a symmetric matrix, $2\deg p_{k,j}(x) =
\mathrm{rank}(A_{k,j})$. We have observed the following.
\begin{enumerate}
\item For $k+j \leq 12$ and $1 \leq k \leq j$, $A_{k,j}$ is
invertible exactly
for $(k,j)=(1,1)$, $(1,3)$, $(1,5)$, $(1,7)$, $(1,9)$, $(1,11)$, $(3,3)$,
$(3,7)$, $(3,9)$, $(5,5)$, and $(5,7)$.
\item $A_{1, j}$ is invertible if and only if $j$ is odd. This is true
because $L(1,j)$ is a path of length $j$, whose determinant satisfies
the recurrence $\det(A_{1,j})=-\det(A_{1,j-2})$. The statement follows
from the initial conditions $\det(A_{1,1})=-1$ and
$\det(A_{1,2})=0$.
\item If $A_{k,j}$ is invertible, then $kj$ is odd. To see this,
let $X_0$ (resp.\ $X_1$) be the set of integer partitions of even (resp.\ odd)
integers $n$ whose shape fits in a $k \times j$ rectangle.
Since $L(k,j)$ is a bipartite graph with vertex partition $(X_0, X_1)$,
a necessary condition for $A_{k,j}$ to be invertible is $|X_0|=|X_1|$.
That is, the generating function $\sum_n p(k,j,n)q^n=\mathbf{
k+j \choose k}$ must have a root at $q=-1$,
where $p(k,j,n)$ is the number of integer partitions of $n$ whose shape
fits into a $k\times j$ rectangle.
But the multiplicity of
$1+q$ in the Gaussian polynomial $\mathbf{k+j \choose k}$ is
$\lfloor \frac{k+j}{2} \rfloor -\lfloor \frac k2 \rfloor-\lfloor \frac j2
\rfloor$, which is $0$ unless both $j$ and $k$ are odd.
\end{enumerate}
Item 3 was also proved independently by Jason Burns, who also found a
counterexample to the converse:
for $k=3$ and $j=11$, $A_{3, 11}$ is not invertible; in fact its corank is 6.
The invertibility of $A_{k,j}$ for odd $kj$ is currently under
investigation.
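The parity argument in item 3 is easy to check by machine. The Python sketch below (ours; names are ours) builds the Gaussian polynomial $\mathbf{k+j \choose k}$ from the $q$-Pascal recurrence $\mathbf{k+j\choose k}=\mathbf{k+j-1\choose k-1}+q^k\mathbf{k+j-1\choose k}$ and evaluates it at $q=-1$; the value vanishes exactly when $k$ and $j$ are both odd:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def qbin(k, j):
    """Coefficient tuple of the Gaussian polynomial [k+j choose k]_q,
    the generating function for partitions fitting in a k x j box,
    built from the q-Pascal recurrence."""
    if k == 0 or j == 0:
        return (1,)
    out = [0] * (k * j + 1)
    for n, c in enumerate(qbin(k - 1, j)):
        out[n] += c
    for n, c in enumerate(qbin(k, j - 1)):
        out[n + k] += c          # the q^k shift
    return tuple(out)

def parity_difference(k, j):
    """|X_0| - |X_1|: the Gaussian polynomial evaluated at q = -1."""
    return sum(c * (-1) ** n for n, c in enumerate(qbin(k, j)))
```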
\vspace{2em}
\noindent {\large \bf Acknowledgments}
The authors would like to thank Professor
Donald Knuth for carefully reading the
manuscript and providing many helpful comments.
| {
"timestamp": "2005-11-14T21:18:50",
"yymm": "0501",
"arxiv_id": "math/0501230",
"language": "en",
"url": "https://arxiv.org/abs/math/0501230",
"abstract": "We present results on the enumeration of crossings and nestings for matchings and set partitions. Using a bijection between partitions and vacillating tableaux, we show that if we fix the sets of minimal block elements and maximal block elements, the crossing number and the nesting number of partitions have a symmetric joint distribution. It follows that the crossing numbers and the nesting numbers are distributed symmetrically over all partitions of $[n]$, as well as over all matchings on $[2n]$. As a corollary, the number of $k$-noncrossing partitions is equal to the number of $k$-nonnesting partitions. The same is also true for matchings. An application is given to the enumeration of matchings with no $k$-crossing (or with no $k$-nesting).",
"subjects": "Combinatorics (math.CO); Category Theory (math.CT)",
"title": "Crossings and Nestings of Matchings and Partitions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.987946221548465,
"lm_q2_score": 0.8499711718571774,
"lm_q1q2_score": 0.8397258076614195
} |
https://arxiv.org/abs/1609.01321 | Backward Error Analysis for Perturbation Methods | We demonstrate via several examples how the backward error viewpoint can be used in the analysis of solutions obtained by perturbation methods. We show that this viewpoint is quite general and offers several important advantages. Perhaps the most important is that backward error analysis can be used to demonstrate the validity of the solution, however obtained and by whichever method. This includes a nontrivial safeguard against slips, blunders, or bugs in the original computation. We also demonstrate its utility in deciding when to truncate an asymptotic series, improving on the well-known rule of thumb indicating truncation just prior to the smallest term. We also give an example of elimination of spurious secular terms even when genuine secularity is present in the equation. We give short expositions of several well-known perturbation methods together with computer implementations (as scripts that can be modified). We also give a generic backward error based method that is equivalent to iteration (but we believe useful as an organizational viewpoint) for regular perturbation. |
\section{Introduction}
As the title suggests, the main idea of this paper is to use backward error analysis (BEA) to assess and interpret solutions obtained by perturbation methods. The idea will seem natural, perhaps even obvious, to those who are familiar with the way in which backward error analysis has seen its scope increase dramatically since the pioneering work of Wilkinson in the 60s, e.g., \cite{Wilkinson(1963),Wilkinson(1965)}. From early results in numerical linear algebraic problems and computer arithmetic, it has become a general method fruitfully applied to problems involving root finding, interpolation, numerical differentiation, quadrature, and the numerical solutions of ODEs, BVPs, DDEs, and PDEs, see, e.g., \cite{CorlessFillion(2013),Deuflhard(2003),Higham(1996)}. This is hardly a surprise when one considers that BEA offers several interesting advantages over a purely forward-error approach.
BEA is often used in conjunction with perturbation methods. Not only is it the case that many algorithms' backward error analyses rely on perturbation methods, but the backward error is related to the forward error by a coefficient of sensitivity known as the condition number, which is itself a kind of sensitivity to perturbation. In this paper, we examine an apparently new idea, namely, that perturbation methods themselves can also be interpreted within the backward error analysis framework. Our examples will have a classical feel, but the analysis and interpretation is what differs, and we will make general remarks about the benefits of this mode of analysis and interpretation.
However, due to the breadth of the literature in perturbation theory, we cannot determine with certainty the extent to which applying backward error analysis to perturbation methods is new. Still, none of the works we know, apart from \cite{Boyd(2014)}, \cite{Corless(1993)b}, and \cite{Corless(2014)}, even mention the possibility of using BEA to explain or measure the success of a perturbation computation. Among the books we have consulted, only \cite[p.~251 \& p.~289]{Boyd(2014)} mentions the residual by name, but does not use it systematically. At the very least, therefore, the idea of using BEA in relation to perturbation methods might benefit from a wider discussion.
\section{The basic method from the BEA point of view} \label{genframe}
The basic idea of BEA is increasingly well-known in the context of numerical methods. The slogan \textsl{a good numerical method gives the exact solution to a nearby problem} very nearly sums up the whole perspective. Any number of more formal definitions and discussions exist---we like the one given in \cite[chap.~1]{CorlessFillion(2013)}, as one might suppose is natural, but one could hardly do better than go straight to the source and consult, e.g., \cite{Wilkinson(1963),Wilkinson(1965),Wilkinson(1971),Wilkinson(1984)}. More recently \cite{Grcar(2011)} has offered a good historical perspective. In what follows we give a brief formal presentation and then give detailed analyses by examples in subsequent sections.
Problems can generally be represented as maps from an input space $\mathcal{I}$ to an output space $\mathcal{O}$.
If we have a problem $\varphi:\mathcal{I}\to\mathcal{O}$ and wish to find $y=\varphi(x)$ for some putative input $x\in\mathcal{I}$, lack of tractability might instead lead you to engineer a simpler problem $\hat{\varphi}$ from which you would compute $\hat{y}=\hat{\varphi}(x)$. Then $\hat{y}-y$ is the \textsl{forward error} and, provided it is small enough for your application, you can treat $\hat{y}$ as an approximation in the sense that $\hat{y}\approx \varphi(x)$. In BEA, instead of focusing on the forward error, we try to find an $\hat{x}$ such that $\hat{y}=\varphi(\hat{x})$ by considering the \textsl{backward error} $\Delta x=\hat{x}-x$, i.e., we try to find for which set of data our approximation method $\hat{\varphi}$ has exactly solved our reference problem $\varphi$. The general picture can be represented by the following commutative diagram:
\begin{center}
\begin{tikzpicture}
\def2{2}
\draw (0,0) node (x) {$x$};
\draw (2,0) node (y) {$y$};
\draw (0,-2) node (xhat) {$\hat{x}$};
\draw (2,-2) node (yhat) {$\hat{y}$};
\draw (x) edge[->] node[above] {$\varphi$} (y);
\draw (xhat) edge[->,dashed] node[below] {$\varphi$} (yhat);
\draw (x) edge[->,dashed] node[left] {$+\Delta x$} (xhat);
\draw (y) edge[->] node[right] {$+\Delta y$} (yhat);
\draw (x) edge[->] node[above right] {$\hat{\varphi}$} (yhat);
\end{tikzpicture}
\end{center}
We can see that, whenever $x$ itself has many components, different backward error analyses will be possible since we will have the option of reflecting the forward error back into different selections of the components.
It is often the case that the map $\varphi$ can be defined as the solution to $\phi(x,y)=0$ for some operator $\phi$, i.e., as having the form
\begin{align} x\xrightarrow{\varphi} \left\{ y\mid \phi(x,y)=0\right\}\>.\end{align}
In this case, there will in particular be a simple and useful backward error resulting from computing the residual $r=\phi(x,\hat{y})$. Trivially $\hat{y}$ then exactly solves the reverse-engineered problem $\hat{\varphi}$ given by $\hat{\phi}(x,y)=\phi(x,y)-r=0$.
Thus, when the residual can be used as a backward error, this directly computes a reverse-engineered problem that our method has solved exactly. We are then in the fortunate position of having both a problem and its solution, and the challenge then consists in determining how similar the reference problem $\varphi$ and the modified problems $\hat{\varphi}$ are, \textsl{and whether or not the modified problem is a good model for the phenomenon being studied}.
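As a concrete toy illustration (our numbers, not from the text): for $\phi(x,y) = y^2 - x$ with data $x=2$, an approximate root $\hat y$ exactly solves the problem with reverse-engineered data $\hat x = x + r$:

```python
# Reference problem: phi(x, y) = y^2 - x = 0 with data x = 2,
# whose exact solution is y = sqrt(2).
x = 2.0
y_hat = 1.414                  # an approximation, however obtained
r = y_hat**2 - x               # residual phi(x, y_hat)
x_hat = x + r                  # reverse-engineered data
backward_error = x_hat - x     # equals the residual r
# y_hat is (up to roundoff) the exact solution of y^2 - x_hat = 0
```

Here the forward error $\hat y - \sqrt{2}$ and the backward error $\hat x - x$ carry the same information, but the backward error is computable without knowing the exact solution.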
\paragraph{Regular perturbation BEA-style}
Now let us introduce a \textsl{general framework for perturbation methods} that relies on the general framework for BEA introduced above.
Perturbation methods are so numerous and varied, and the problems tackled are from so many areas, that it seems a general scheme of solution would necessarily be so abstract as to be difficult to use in any particular case.
Actually, the following framework covers many methods. For simplicity of exposition, we will introduce it using the simple gauge functions $1,\ensuremath{\varepsilon},\ensuremath{\varepsilon}^2,\ldots$, but note that extension to other gauges is usually straightforward (such as Puiseux, $\ensuremath{\varepsilon}^n\ln^m\ensuremath{\varepsilon}$, etc), as we will show in the examples.
To begin with, let
\begin{align}
F(x,u;\ensuremath{\varepsilon})=0 \label{operatoreq}
\end{align}
be the operator equation we are attempting to solve for the unknown $u$. The dependence of $F$ on the scalar parameter $\ensuremath{\varepsilon}$ and on any data $x$ is assumed but henceforth not written explicitly. In the case of a simple power series perturbation, we will take the $m$th order approximation to $u$ to be given by the \textsl{finite} sum
\begin{align}
z_m = \sum_{k=0}^m \ensuremath{\varepsilon}^ku_k\>.
\end{align}
The operator $F$ is assumed to be Fr\'echet differentiable. For convenience we assume slightly more, namely, that for any $u$ and $v$ in a suitable region, there exists a linear invertible operator $F_1(v)$ such that
\begin{align}
F(u) = F(v) + F_1(v)(u-v) + O\left(\|u-v\|^2\right)\>.
\end{align}
Here, $\|\cdot\|$ denotes any convenient norm. We denote the \textsl{residual} of $z_m$ by
\begin{align}
\Delta_m := F(z_m)\>,
\end{align}
\emph{i.e.}, $\Delta_m$ results from evaluating $F$ at $z_m$ instead of evaluating it at the reference solution $u$ as in equation \eqref{operatoreq}. If $\|\Delta_m\|$ is small, we say we have solved a ``nearby'' problem, namely, the reverse-engineered problem for the unknown $u$ defined by
\begin{align}
F(u)-F(z_m) = 0\>,
\end{align}
which is exactly solved by $u=z_m$. Of course this is trivial. It is \textsl{not} trivial in consequences if $\|\Delta_m\|$ is small compared to data errors or modelling errors in the operator $F$. We will exemplify this point more concretely later.
We now suppose that we have somehow found $z_0=u_0$, a solution with a residual whose size is such that
\begin{align}
\|\Delta_0\|=\|F(u_0)\| = O(\ensuremath{\varepsilon})\qquad \textrm{as} \qquad \ensuremath{\varepsilon}\to0\>.
\end{align}
Finding this $u_0$ is part of the art of perturbation; much of the rest is mechanical.
Suppose now inductively that we have found $z_n$ with residual of size
\[
\|\Delta_n\| = O\left(\ensuremath{\varepsilon}^{n+1}\right) \quad\textrm{ as }\quad \ensuremath{\varepsilon}\to0\>.
\]
Consider $F(z_{n+1})$ which, by definition, is just $F(z_n+\ensuremath{\varepsilon}^{n+1}u_{n+1})$. We wish to choose the term $u_{n+1}$ in such a way that $z_{n+1}$ has residual of size $\|\Delta_{n+1}\|=O(\ensuremath{\varepsilon}^{n+2})$ as $\ensuremath{\varepsilon}\to0$. Using the Fr\'echet derivative of the residual of $z_{n+1}$ at $z_n$, we see that
\begin{align}
\Delta_{n+1} &= F(z_n+\ensuremath{\varepsilon}^{n+1}u_{n+1})= F(z_n)+F_1(z_n)\ensuremath{\varepsilon}^{n+1}u_{n+1}+O\left(\ensuremath{\varepsilon}^{2n+2}\right)\>. \label{resseries1}
\end{align}
By linearity of the Fr\'echet derivative, we also obtain $F_1(z_n) = F_1(z_0)+O(\ensuremath{\varepsilon})= [\ensuremath{\varepsilon}^0]F_1(z_0)+O(\ensuremath{\varepsilon})$. Here, $[\ensuremath{\varepsilon}^k]G$ refers to the coefficient of $\ensuremath{\varepsilon}^k$ in the expansion of $G$. Let
\begin{align}
A=[\ensuremath{\varepsilon}^0]F_1(z_0)\>,
\end{align}
that is, the zeroth order term in $F_1(z_0)$. Thus, we reach the following expansion of $\Delta_{n+1}$:
\begin{align}
\Delta_{n+1} = F(z_n) + A\ensuremath{\varepsilon}^{n+1}u_{n+1}+O\left(\ensuremath{\varepsilon}^{n+2}\right)\>.\label{eqDnp1}
\end{align}
Note that, in equation \eqref{resseries1}, one could keep $F_1(z_n)$, not simplifying to $A$, and compute not just $u_{n+1}$ but, just as in Newton's method, double the number of correct terms. However, in practice this is often too expensive \cite[chap.~6]{Geddes(1992)b}, and so we will in general use this simplification. As noted, we only need $F_1(z_0)$ accurate to $O(\ensuremath{\varepsilon})$, so in place of $F_1(z_0)$ in equation \eqref{eqDnp1} we use $A$.
As a result of the above expansion of $\Delta_{n+1}$, we now see that to make $\Delta_{n+1} = O\left(\ensuremath{\varepsilon}^{n+2}\right)$, we must have $F(z_n)+A\ensuremath{\varepsilon}^{n+1}u_{n+1}=O(\ensuremath{\varepsilon}^{n+2})$, in which case
\begin{align}
A u_{n+1} +\frac{F(z_n)}{\ensuremath{\varepsilon}^{n+1}} = Au_{n+1} +\frac{\Delta_n}{\ensuremath{\varepsilon}^{n+1}} =O(\ensuremath{\varepsilon})\>.
\end{align}
Since by hypothesis $\Delta_n=F(z_n)=O(\ensuremath{\varepsilon}^{n+1})$, we know that $\sfrac{\Delta_n}{\ensuremath{\varepsilon}^{n+1}}=O(1)$.
In other words, to find $u_{n+1}$ we solve the linear operator equation
\begin{align*}
A u_{n+1} =
-[\ensuremath{\varepsilon}^{n+1}]\Delta_n\>,
\end{align*}
where, again, $[\ensuremath{\varepsilon}^{n+1}]$ is the coefficient of the $(n+1)$th power of $\ensuremath{\varepsilon}$ in the series expansion of $\Delta$. Note that by the inductive hypothesis the right hand side has norm $O(1)$ as $\ensuremath{\varepsilon}\to0$. Then $\|\Delta_{n+1}\| = O(\ensuremath{\varepsilon}^{n+2})$ as desired, so $u_{n+1}$ is indeed the coefficient we were seeking.
We thus need $A=[\ensuremath{\varepsilon}^0]F_1(z_0)$ to be invertible. If not, the problem is singular, and essentially requires reformulation.\footnote{We remark that being able to find our initial point $u_0$ and having invertible $A=F_1(u_0;0)$ is a sufficient but not necessary condition for a regular expansion. A regular perturbation problem can be defined in many ways, not just in the way we have done, with invertible $A$. For example, \cite[Sec 7.2]{Bender(1978)} essentially uses continuity in $\ensuremath{\varepsilon}$ as $\ensuremath{\varepsilon}\to0$ to characterize it. Another characterization is that for regular perturbation problems the infinite perturbation series converges for some non-zero radius of convergence.
} We shall see examples. If $A$ is invertible, the problem is regular.
This general scheme can be compared to that of, say, \cite{Bellman(1972)}. Essential similarities can be seen. In Bellman's treatment, however, the residual is used implicitly, but not named or noted, and instead the equation defining $u_{n+1}$ is derived by postulating an infinite expansion
\begin{align}
u=u_0+\ensuremath{\varepsilon} u_1+\ensuremath{\varepsilon}^2u_2+\cdots\>.
\end{align}
By taking the coefficient of $\ensuremath{\varepsilon}^{n+1}$ in the expansion of $\Delta_n$ we are implicitly doing the same work, but we will see advantages of this point of view. %
Also, note that in the frequent case of more general asymptotic sequences, namely Puiseux series or generalized approximations containing logarithmic terms, we can make the appropriate changes in a straightforward manner, as we will show below.
\section{Algebraic equations}
We begin by applying the regular method from section \ref{genframe} to algebraic equations, starting with a simple scalar equation and gradually increasing the difficulty, thereby demonstrating the flexibility of the backward error point of view.
\subsection{Regular perturbation}\label{RegularPert}
In this section, after applying the method from section \ref{genframe} to a scalar equation, we use the same method to solve a $2\times2$ system; higher-dimensional systems can be solved similarly. We give some computer algebra implementations (scripts that the reader may modify) of the basic method. Finally, we give an alternative method, based on the Davidenko equation, that is simpler to use in Maple.
\subsubsection{Scalar equations}
Let us consider a simple example similar to many used in textbooks for classical perturbation analysis. Suppose we wish to find a real root of
\begin{align}
x^5 -x-1=0 \label{refprobalgeq}
\end{align}
and, since the Abel--Ruffini theorem---which says that in general there are no solutions in radicals to equations of degree 5 or more---suggests it is unlikely that we can find an elementary expression for the solution of this \textsl{particular} equation of degree 5, we introduce a parameter, which we call $\ensuremath{\varepsilon}$ and which we suppose to be small. That is, we embed our problem in a parametrized family of similar problems. If we decide to introduce $\ensuremath{\varepsilon}$ in the degree-1 term, so that
\begin{align}
u^5-\ensuremath{\varepsilon} u-1=0\>, \label{pertalgeq}
\end{align}
we will see that we have a so-called regular perturbation problem.
To begin with, we wish to find a $z_0$ such that $\Delta_0=F(z_0) = z_0^5-\ensuremath{\varepsilon} z_0-1=O(\ensuremath{\varepsilon})$. Quite clearly, this can happen only if $z_0^5-1=0$. Ignoring the complex roots in this example, we take $z_0=1$. To continue the solution process, we now suppose that we have found
\begin{align}
z_n = \sum_{k=0}^n u_k\ensuremath{\varepsilon}^k
\end{align}
such that $\Delta_n=F(z_n) = z_n^5-\ensuremath{\varepsilon} z_n-1=O(\ensuremath{\varepsilon}^{n+1})$ and we wish to use our iterative procedure. We need the Fr\'echet derivative of $F$, which in this case is just
\begin{align}
F_1(u) &= 5u^4-\ensuremath{\varepsilon}\>,
\end{align}
because
\begin{align}
F(u) = u^5-\ensuremath{\varepsilon} u-1 &= v^5-\ensuremath{\varepsilon} v-1 + F'(v)(u-v)+O\left((u-v)^2\right)\>.
\end{align}
Hence, $A=5z_0^4=5$, which is invertible. As a result, our iteration computes the residual $\Delta_n=F(z_n)$ and then solves
\begin{align}
5u_{n+1} = -[\ensuremath{\varepsilon}^{n+1}]\Delta_n\>.
\end{align}
Carrying out a few steps we have
\begin{align}
\Delta_0 = F(z_0) = F(1) = 1-\ensuremath{\varepsilon}-1 = -\ensuremath{\varepsilon}
\end{align}
so
\begin{align}
5\cdot u_1 = -[\ensuremath{\varepsilon}]\Delta_0 = -[\ensuremath{\varepsilon}](-\ensuremath{\varepsilon}) = 1\>.
\end{align}
Thus, $u_1=\sfrac{1}{5}$. Therefore, $z_1=1+\sfrac{\ensuremath{\varepsilon}}{5}$ and
\begin{align}
\Delta_1 &= \left(1+\frac{\ensuremath{\varepsilon}}{5}\right)^5 -\ensuremath{\varepsilon}\left(1+\frac{\ensuremath{\varepsilon}}{5}\right)-1\\
& = \left(1+5\frac{\ensuremath{\varepsilon}}{5}+10\frac{\ensuremath{\varepsilon}^2}{25}+O\left(\ensuremath{\varepsilon}^3\right)\right) - \ensuremath{\varepsilon}-\frac{\ensuremath{\varepsilon}^2}{5}-1\\
&=\left(\frac{2}{5}-\frac{1}{5}\right)\ensuremath{\varepsilon}^2+O\left(\ensuremath{\varepsilon}^3\right) = \frac{1}{5}\ensuremath{\varepsilon}^2+O\left(\ensuremath{\varepsilon}^3\right)\>.
\end{align}
Then we find that $5u_2=-\sfrac{1}{5}$ and thus $u_2=-\sfrac{1}{25}$. So, $u=1+\sfrac{\ensuremath{\varepsilon}}{5}-\sfrac{\ensuremath{\varepsilon}^2}{25}+O(\ensuremath{\varepsilon}^3)$. Finding more terms by this method is clearly possible, although tedium might be expected at higher orders.
Luckily nowadays computers and programs are widely available that can solve such problems without much human effort, but before we demonstrate that, let's compute the residual of our computed solution so far:
\[
z_2 = 1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2\>.
\]
Then $\Delta_2 = z_2^5-\ensuremath{\varepsilon} z_2-1$ is
\begin{align}
\Delta_2 &= \left(1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2\right)^5-\ensuremath{\varepsilon}\left(1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2\right)-1 \nonumber \\
& = -\frac{1}{25}\ensuremath{\varepsilon}^3 - \frac{3}{125}\ensuremath{\varepsilon}^4+\frac{11}{3125}\ensuremath{\varepsilon}^5 +\frac{3}{3125}\ensuremath{\varepsilon}^6 -\frac{2}{15625}\ensuremath{\varepsilon}^7 \nonumber\\ &\qquad -\frac{1}{78125}\ensuremath{\varepsilon}^8 +\frac{1}{390625}\ensuremath{\varepsilon}^9 -\frac{1}{9765625}\ensuremath{\varepsilon}^{10}\>.
\end{align}
We note the following. First, $z_2$ exactly solves the modified equation
\begin{align}
x^5-\ensuremath{\varepsilon} x-1 +\frac{1}{25}\ensuremath{\varepsilon}^3 + \frac{3}{125}\ensuremath{\varepsilon}^4-\ldots + \frac{1}{9765625}\ensuremath{\varepsilon}^{10}=0 \label{starred}
\end{align}
which is $O(\ensuremath{\varepsilon}^3)$ different to the original. Second, the complete residual was computed rationally: there is no error in saying that $z_2=1+\sfrac{\ensuremath{\varepsilon}}{5}-\sfrac{\ensuremath{\varepsilon}^2}{25}$ solves equation \eqref{starred} exactly. Third, if $\ensuremath{\varepsilon}=1$ then $z_2=1+\sfrac{1}{5}-\sfrac{1}{25}=1.16$ exactly (or $1\sfrac{4}{25}$ if you prefer), and the residual is then $(\sfrac{29}{25})^5-\sfrac{29}{25}-1\doteq -0.059658$, showing that $1.16$ is the exact root of an equation about 6\% different to the original.
Something simple but importantly different to the usual treatment of perturbation methods has happened here. We have assessed the quality of the solution in an explicit fashion without concern for convergence issues or for the exact solution to $x^5-x-1=0$, which we term the reference problem. We use this term because its solution will be the reference solution. We can't call it the ``exact'' solution because $z_2$ is \textsl{also} an ``exact'' solution, namely to equation~\eqref{starred}.
Every numerical analyst and applied mathematician knows that this isn't the whole story---we need some evaluation or estimate of the effects of such perturbations of the problem. One effect is the difference between $z_2$ and $x$, the reference solution, and this is what people usually focus on. We believe this focus is sometimes excessive; there are other possible views. For instance, one can ask whether the backward error is physically reasonable.
As an example, if $\ensuremath{\varepsilon}=1$ and $z_2=1.16$ then $z_2$ exactly solves $y^5-y-a=0$ where $a\neq 1$ but rather $a\doteq 0.9403$. If the original equation was really $u^5-u-\alpha=0$ where $\alpha=1\pm 5\%$ we might be inclined to accept $z_2=1.16$ because, for all we know, we might have the true solution (even though we're outside the $\pm 5\%$ range, we're only just outside; and how confident are we in the $\pm5\%$, after all?).
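These backward-error numbers are easy to reproduce. The following quick check (ours, in Python with exact rational arithmetic; it is an illustration, not part of the text's Maple code) verifies both the residual at $\ensuremath{\varepsilon}=1$ and the induced constant $a$:

```python
from fractions import Fraction

z2 = 1 + Fraction(1, 5) - Fraction(1, 25)  # = 29/25 = 1.16 at eps = 1
res = z2**5 - z2 - 1   # residual in the reference problem x^5 - x - 1
a = z2**5 - z2         # z2 is an exact root of y^5 - y - a

print(float(res), float(a))  # about -0.059658 and 0.940342
```

The residual and the perturbed constant are the same piece of information viewed two ways: $a = 1 + \Delta$.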
\subsubsection{Simple computer algebra solution}
The following Maple script can be used to solve this or similar problems $f(u;\ensuremath{\varepsilon})=0$. Other computer algebra systems can also be used.
\lstinputlisting{RegularScalar}
That code is a straightforward implementation of the general scheme presented in subsection \ref{genframe}. Its results, translated into \LaTeX\ and cleaned up a bit, are that
\begin{align}
z = 1+\frac{1}{5}\ensuremath{\varepsilon}-\frac{1}{25}\ensuremath{\varepsilon}^2+\frac{1}{125}\ensuremath{\varepsilon}^3
\end{align}
and that the residual of this solution is
\begin{align}
\Delta = \frac{21}{3125}\ensuremath{\varepsilon}^5+O\left( \ensuremath{\varepsilon}^6 \right) \>.
\end{align}
With $N=3$, we get an extra order of accuracy because the coefficient of $\ensuremath{\varepsilon}^4$ in the series turns out to be zero; this result is serendipitous.
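The same computation is easy to replicate outside Maple. Here is an illustrative Python version (ours) of the basic scheme, carrying truncated $\ensuremath{\varepsilon}$-series as lists of exact rationals:

```python
from fractions import Fraction

N = 6  # truncation order: series carried modulo eps^(N+1)

def mul(p, q):
    # truncated product of two eps-series stored as coefficient lists
    r = [Fraction(0)] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= N:
                r[i + j] += a * b
    return r

def residual(z):
    # Delta = z^5 - eps*z - 1 as a truncated eps-series
    p = [Fraction(1)] + [Fraction(0)] * N
    for _ in range(5):
        p = mul(p, z)
    for k in range(N):
        p[k + 1] -= z[k]   # subtract eps*z
    p[0] -= 1
    return p

# z0 = 1, then repeatedly solve 5*u_{n+1} = -[eps^(n+1)] Delta_n
z = [Fraction(1)] + [Fraction(0)] * N
for n in range(4):
    z[n + 1] = -residual(z)[n + 1] / 5

print(z[:5])  # coefficients 1, 1/5, -1/25, 1/125, 0
```

The zero coefficient of $\ensuremath{\varepsilon}^4$ is exactly the serendipity remarked on above, and the residual of the computed series begins with $\sfrac{21}{3125}\,\ensuremath{\varepsilon}^5$.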
\subsubsection{Systems of algebraic equations}\label{systems}
Regular perturbation for systems of equations using the framework from section \ref{genframe} is straightforward. We include an example to show some computer algebra and for completeness. Consider the following two equations in two unknowns:
\begin{align}
f_1(v_1,v_2) &=v_1^2+v_2^2 -1-\ensuremath{\varepsilon} v_1v_2 = 0\\
f_2(v_1,v_2) &= 25v_1v_2-12+2\ensuremath{\varepsilon} v_1 =0
\end{align}
When $\ensuremath{\varepsilon}=0$ these equations determine the intersections of a hyperbola with the unit circle. There are four such intersections: $(\sfrac{3}{5},\sfrac{4}{5}), (\sfrac{4}{5},\sfrac{3}{5}), (-\sfrac{3}{5},-\sfrac{4}{5})$ and $(-\sfrac{4}{5},-\sfrac{3}{5})$. The Jacobian matrix (which gives us the Fr\'echet derivative in the case of algebraic equations) is
\begin{align}
F_1(v) = \begin{bmatrix} \frac{\partial f_1}{\partial v_1} & \frac{\partial f_1}{\partial v_2} \\[.25cm] \frac{\partial f_2}{\partial v_1} & \frac{\partial f_2}{\partial v_2} \end{bmatrix} = \begin{bmatrix} 2v_1 & 2v_2 \\ 25 v_2 & 25 v_1\end{bmatrix} + O(\ensuremath{\varepsilon})\>.
\end{align}
Taking for instance $u_0=[\sfrac{3}{5},\sfrac{4}{5}]^T$ we have
\begin{align}
A= F_1(u_0) = \begin{bmatrix} \sfrac{6}{5} & \sfrac{8}{5} \\ 20 & 15\end{bmatrix}\>.
\end{align}
Since $\det A=-14\neq 0$, $A$ is invertible and indeed
\begin{align}
A^{-1} = \begin{bmatrix} -\sfrac{15}{14} & \sfrac{4}{35} \\ \sfrac{10}{7} & -\sfrac{3}{35} \end{bmatrix}\>.
\end{align}
The residual of the zeroth order solution is
\begin{align}
\Delta_0 = F\left(\frac{3}{5},\frac{4}{5}\right) = \ensuremath{\varepsilon}\begin{bmatrix}-\sfrac{12}{25} \\ \sfrac{6}{5} \end{bmatrix}\>,
\end{align}
so $-[\ensuremath{\varepsilon}]\Delta_0 = [\sfrac{12}{25},-\sfrac{6}{5}]^T$. Therefore
\begin{align}
u_1 = \begin{bmatrix} u_{11} \\ u_{12}\end{bmatrix} = A^{-1}\begin{bmatrix}\sfrac{12}{25} \\ -\sfrac{6}{5}\end{bmatrix} = \begin{bmatrix} -\sfrac{114}{175} \\ \sfrac{138}{175}\end{bmatrix}
\end{align}
and $z_1=u_0+\ensuremath{\varepsilon} u_1$ is our improved solution:
\begin{align}
z_1 = \begin{bmatrix}\sfrac{3}{5} \\ \sfrac{4}{5} \end{bmatrix} + \ensuremath{\varepsilon} \begin{bmatrix} -\sfrac{114}{175} \\ \sfrac{138}{175}\end{bmatrix}\>.
\end{align}
To guard against slips, blunders, and bugs (some of those calculations were done by hand, and some were done in Sage on an Android phone) we compute
\begin{align}
\Delta_1 = F(z_1) = \ensuremath{\varepsilon}^2\begin{bmatrix}\sfrac{6702}{6125} \\ -\sfrac{17328}{1225}\end{bmatrix} + O\left(\ensuremath{\varepsilon}^3\right)\>.
\end{align}
That computation was done in Maple, completely independently. Initially it came out $O(\ensuremath{\varepsilon})$ indicating that something was not right; tracking the error down we found a typo in the Maple data entry ($183$ was entered instead of $138$). Correcting that typo we find $\Delta_1=O(\ensuremath{\varepsilon}^2)$ as it should be. Here is the corrected Maple code:
\lstinputlisting{ResidualSystem}
Just as for the scalar case, this process can be systematized and we give one way to do so in Maple, below. The code is not as pretty as the scalar case is, and one has to explicitly ``map'' the series function and the extraction of coefficients onto matrices and vectors, but this demonstrates feasibility.
\lstinputlisting{RegularSystem.tex}
This code computes $z_3$ correctly and gives a residual of $O(\ensuremath{\varepsilon}^4)$. From the backward error point of view, this code finds the intersection of curves that differ from the specified ones by terms of $O(\ensuremath{\varepsilon}^4)$. In the next section, we show a way to use a built-in feature of Maple to do the same thing with less human labour.
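As one more independent cross-check (ours, in Python with exact rationals rather than Maple), the first-order step of the system computation can be reproduced from the ingredients above:

```python
from fractions import Fraction as Fr

A = [[Fr(6, 5), Fr(8, 5)],      # F_1(u0) at eps = 0, u0 = (3/5, 4/5)
     [Fr(20), Fr(15)]]
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]

# 2x2 inverse by the adjugate formula
Ainv = [[ A[1][1]/det, -A[0][1]/det],
        [-A[1][0]/det,  A[0][0]/det]]

rhs = [Fr(12, 25), Fr(-6, 5)]   # -[eps] Delta_0
u1 = [Ainv[0][0]*rhs[0] + Ainv[0][1]*rhs[1],
      Ainv[1][0]*rhs[0] + Ainv[1][1]*rhs[1]]
print(u1)  # the components -114/175 and 138/175
```

Exact rational arithmetic leaves no room for the kind of data-entry typo described above.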
\subsubsection{The Davidenko equation}
Maple has a built-in facility for solving differential equations in series that (at the time of writing) is superior to its built-in facility for solving algebraic equations in series, because the latter can only handle scalar equations. This may change in the future, but it may not because there is the following simple workaround. To solve
\begin{align}
F(u;\ensuremath{\varepsilon})=0
\end{align}
for a function $u(\ensuremath{\varepsilon})$ expressed as a series, simply differentiate to get
\begin{align}
D_1(F)(u,\ensuremath{\varepsilon})\frac{du}{d\ensuremath{\varepsilon}} + D_2(F)(u,\ensuremath{\varepsilon})=0\>.
\end{align}
Boyd \cite{Boyd(2014)} calls this the Davidenko equation. If we solve this in Taylor series with the initial condition $u(0)=u_0$, we have our perturbation series. Notice that what we were calling $A=[\ensuremath{\varepsilon}^0]F_1(u_0)$ occurs here as $D_1(F)(u_0,0)$ and this needs to be nonsingular to be solved as an ordinary differential equation; if $\mathrm{rank}(D_1(F)(u_0,0))<n$ then this is in fact a nontrivial differential algebraic equation that Maple may still be able to solve using advanced techniques (see, e.g., \cite{Avrachenkov(2013)}). Let us just show a simple case here:
\lstinputlisting{RegularDavidenko}
This generates (to the specified value of the order, namely, \verb|Order=4|) the solution
\begin{align}
x(\ensuremath{\varepsilon}) &=\frac{3}{5}-\frac{114}{175}\ensuremath{\varepsilon}+\frac{119577}{42875}\ensuremath{\varepsilon}^2-\frac{43543632}{2100875}\ensuremath{\varepsilon}^3\\
y(\ensuremath{\varepsilon}) &=\frac{4}{5}+\frac{138}{175}\ensuremath{\varepsilon}-\frac{119004}{42875}\ensuremath{\varepsilon}^2+\frac{43245168}{2100875}\ensuremath{\varepsilon}^3\>,
\end{align}
whose residual is $O(\ensuremath{\varepsilon}^4)$. Internally, Maple uses its own algorithms, which occasionally get improved as algorithmic knowledge advances.
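The Davidenko idea is also easy to mimic numerically: integrate the differential equation from $\ensuremath{\varepsilon}=0$ with any standard method and then check the residual at the endpoint. A sketch in Python (ours, for illustration; a classical RK4 step stands in for Maple's series machinery):

```python
def F(x, y, eps):
    return (x*x + y*y - 1 - eps*x*y, 25*x*y - 12 + 2*eps*x)

def slope(x, y, eps):
    # Davidenko: D1(F) [dx/deps, dy/deps]^T = -D2(F), solved by Cramer's rule
    a, b = 2*x - eps*y, 2*y - eps*x   # Jacobian of F in (x, y)
    c, d = 25*y + 2*eps, 25*x
    p, q = x*y, -2*x                  # -dF/deps
    det = a*d - b*c
    return ((d*p - b*q) / det, (a*q - c*p) / det)

# integrate from eps = 0 with classical RK4 steps, starting at (3/5, 4/5)
x, y, eps, h = 0.6, 0.8, 0.0, 0.005
for _ in range(10):
    k1 = slope(x, y, eps)
    k2 = slope(x + h/2*k1[0], y + h/2*k1[1], eps + h/2)
    k3 = slope(x + h/2*k2[0], y + h/2*k2[1], eps + h/2)
    k4 = slope(x + h*k3[0], y + h*k3[1], eps + h)
    x += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    y += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    eps += h

f1, f2 = F(x, y, eps)  # residual at eps = 0.05: should be tiny
```

At $\ensuremath{\varepsilon}=0$ the slope is exactly $(-\sfrac{114}{175}, \sfrac{138}{175})$, matching the series above.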
\subsection{Puiseux series}\label{Puiseux}
Puiseux series
are simply Taylor series or Laurent series with fractional powers. A standard example is
\begin{align}
\sin\sqrt{x} = x^{\sfrac{1}{2}} - \frac{1}{3!}x^{\sfrac{3}{2}} + \frac{1}{5!}x^{\sfrac{5}{2}}+\cdots
\end{align}
A simple change of variable (e.g. $t=\sqrt{x}$ so $x=t^2$) is enough to convert to Taylor series. Once the appropriate power $n$ is known for $\ensuremath{\varepsilon}=\mu^n$, perturbation by Puiseux expansion reduces to computations similar to those we've seen already.
For instance, had we chosen to embed $u^5-u-1=0$ in the family $u^5-\ensuremath{\varepsilon}(u+1)=0$ (which is in a sense conjugate to the family of the last section), then because the equation becomes $u^5=0$ when $\ensuremath{\varepsilon}=0$ we see that we have a five-fold root to perturb, and we thus suspect we will need Puiseux series.
For scalar equations, there are built-in facilities in Maple for Puiseux series, which gives yet another way in Maple to solve scalar algebraic equations perturbatively. One can use the \texttt{RootOf} construct to do so as follows:
\lstinputlisting{Puiseux}
This yields
\begin{align}
z = \alpha\ensuremath{\varepsilon}^{\sfrac{1}{5}}+\frac{1}{5}\alpha^2\ensuremath{\varepsilon}^{\sfrac{2}{5}} -\frac{1}{25}\alpha^3\ensuremath{\varepsilon}^{\sfrac{3}{5}} +\frac{1}{125}\alpha^4\ensuremath{\varepsilon}^{\sfrac{4}{5}} - \frac{21}{15625}\alpha \ensuremath{\varepsilon}^{\sfrac{6}{5}} \>.
\end{align}
This series describes all five roots at once, one for each choice of the fifth root of unity $\alpha$, accurately for small $\ensuremath{\varepsilon}$. Note that the command
\begin{lstlisting}
alias(alpha = RootOf(u^5-1,u))
\end{lstlisting}
is a way to tell Maple that $\alpha$ represents a fixed fifth root of unity. Exactly which root is meant can be deferred till later. Working instead with the default value of the environment variable \texttt{Order}, namely \texttt{Order := 6}, gets us a longer series for $z$ containing terms up to $\ensuremath{\varepsilon}^{\sfrac{29}{5}}$ but not $\ensuremath{\varepsilon}^{\sfrac{30}{5}}=\ensuremath{\varepsilon}^6$. Putting the resulting $z_6$ back into $f(u)$ we get a residual
\begin{align}
\Delta_6 = f(z_6) = \frac{23927804441356816}{14551915228366851806640625}\ensuremath{\varepsilon}^7 + O(\ensuremath{\varepsilon}^8)
\end{align}
Thus we expect that for small $\ensuremath{\varepsilon}$ the residual will be quite small. For instance, with $\ensuremath{\varepsilon}=1$ the exact residual is, for $\alpha=1$, $\Delta_6\doteq1.2\cdot 10^{-9}$. This tells us that this approximation ought to give us quite accurate roots, and indeed it does.
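A floating-point spot check of this Puiseux expansion is immediate. In Python (our sketch, for the real branch $\alpha=1$):

```python
def z(eps):
    # Puiseux series for the real branch (alpha = 1), with t = eps^(1/5);
    # note the absent t^5 term, whose coefficient is zero
    t = eps ** 0.2
    return t + t**2/5 - t**3/25 + t**4/125 - 21*t**6/15625

def residual(eps):
    u = z(eps)
    return u**5 - eps*(u + 1)   # residual in u^5 - eps*(u+1) = 0

print(residual(1e-3))  # very small
```

A change of variable makes the structure transparent: putting $\ensuremath{\varepsilon}=t^5$ and $u=ty$ turns $u^5-\ensuremath{\varepsilon}(u+1)=0$ into $y^5-ty-1=0$, the regular problem of subsection \ref{RegularPert} with $t$ in place of $\ensuremath{\varepsilon}$.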
We conclude this discussion with two remarks. The first is that by a discriminant analysis as we describe in section \ref{SingPert}, we find that the nearest singularity is at $\ensuremath{\varepsilon}=\sfrac{3125}{256}$, and so we expect this series to actually converge for $\ensuremath{\varepsilon}=1$. Again, this fact was not used in our analysis above. Secondly, we could have used the \verb|series/RootOf| technique to do both the regular perturbation in subsection \ref{RegularPert} and the singular one we will do in subsection \ref{SingPert}. The Maple commands are quite similar:
\begin{lstlisting}
series(RootOf(u^5-e*u-1,u),e);
\end{lstlisting}
and
\begin{lstlisting}
series(RootOf(e*u^5-u-1,u),e);
\end{lstlisting}
However, in both cases only the real root is expanded. Some ``Maple art'' (that one of us more readily characterizes as black magic) can be used to complete the computation, but the previous approaches (both the loop and the Davidenko equation) are easier to generalize. Making the \texttt{dsolve/series} code for the Davidenko equation work in the case of Puiseux series requires a preliminary scaling.
\subsection{Singular perturbation}\label{SingPert}
Suppose that instead of embedding $u^5-u-1=0$ in the regular family we used in the previous section, we had used $\ensuremath{\varepsilon} u^5-u-1=0$. If we run our previous Maple programs, we find that the zeroth order solution is unique, and $z_0=-1$. The Fr\'echet derivative is $-1$ to $O(\ensuremath{\varepsilon})$, and so $u_{n+1} = [\ensuremath{\varepsilon}^{n+1}]\Delta_n$ for all $n\geq 0$. We find, for instance,
\begin{align}
z_7 = -1-\ensuremath{\varepsilon} -5\ensuremath{\varepsilon}^2 - 35\ensuremath{\varepsilon}^3 -285\ensuremath{\varepsilon}^4 -2530\ensuremath{\varepsilon}^5 -23751\ensuremath{\varepsilon}^6 -231880\ensuremath{\varepsilon}^7
\end{align}
which has residual $\Delta_7 = O(\ensuremath{\varepsilon}^8)$ but with a larger integer as the constant hidden in that $O$ symbol. For $\ensuremath{\varepsilon}=0.2$, the value of $z_7$ becomes $z_7\doteq -7.4337280$ while $\Delta_7\doteq-4533.64404$, which is not small at all. Thus we have no evidence this perturbation solution is any good: we have the exact solution to $0.2u^5-u-1=-4533.64404$ or $0.2u^5-u+4532.64404=0$, probably not what was intended (and if it was, it would be a colossal fluke). Note that we do not need to know a reference value of a root of $0.2u^5-u-1$ to determine this.
Trying a smaller $\ensuremath{\varepsilon}$, we find that if $\ensuremath{\varepsilon}=0.05$ we have $z_7\doteq -1.07$ and $\Delta_7\doteq -1.2\cdot 10^{-4}$. This means $z_7$ is an exact root of $0.05u^5-u-0.99988$, which may very well be what we want.
The following remark is not really germane to the method, but it is interesting. Taking the discriminant with respect to $u$, i.e., the resultant of $f$ and $\sfrac{\partial f}{\partial u}$, we find $\mathrm{discrim}(f) = \ensuremath{\varepsilon}^3(3125\ensuremath{\varepsilon}-256)$. Thus $f$ will have multiple roots if $\ensuremath{\varepsilon}=0$ (there are 4 multiple roots at infinity) or if $\ensuremath{\varepsilon} = \sfrac{256}{3125}=0.08192$. Thus our perturbation expansion can be expected to diverge\footnote{A separate analysis leads to the identification of $u_k = \frac{1}{5k+1}\binom{5k+1}{k}$ (via \cite{OEIS}). The ratio test confirms that the series converges for $|\ensuremath{\varepsilon}|<\sfrac{256}{3125}$, and diverges if $\ensuremath{\varepsilon} = \sfrac{256}{3125}$.} for $\ensuremath{\varepsilon}\geq 0.08192$. What happens to $z_7$ if $\ensuremath{\varepsilon}=\sfrac{256}{3125}$? We get $z_7\doteq -1.1698$ and $\Delta_7\doteq-9.65\cdot10^{-3}$, so we have an exact solution of $\sfrac{256}{3125}\,u^5-u-0.99035$; this is not bad. The reference double root is $-1.25$, about $0.1$ away, although this fact was not used in the previous discussion.
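The closed form quoted in the footnote can be checked directly; our Python sketch below uses the fact that $u_k=\binom{5k+1}{k}/(5k+1)$ is a Fuss--Catalan number and hence an integer:

```python
from math import comb

def u(k):
    # closed form from the footnote: u_k = binom(5k+1, k)/(5k+1),
    # a Fuss-Catalan number (always an integer, so // is exact)
    return comb(5*k + 1, k) // (5*k + 1)

coeffs = [u(k) for k in range(8)]
print(coeffs)  # 1, 1, 5, 35, 285, 2530, 23751, 231880

# successive ratios climb toward 3125/256 = 12.207..., the reciprocal of
# the radius of convergence 256/3125 found from the discriminant
ratio = u(200) / u(199)
```

The ratios converge slowly (already visible in $231880/23751\doteq 9.76$), which is why the discriminant analysis is a more reliable way to locate the radius of convergence.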
But this computation, valid as it is, only found one root out of five, and then only for sufficiently small $\ensuremath{\varepsilon}$. We now turn to the roots that go to infinity as $\ensuremath{\varepsilon}\to0$. Preliminary investigation similar to that of subsection \ref{Puiseux} shows that it is convenient to replace $\ensuremath{\varepsilon}$ by $\mu^4$.
Many singular perturbation problems including this one can be turned into regular ones by rescaling. Putting $u=\sfrac{y}{\mu}$, we get
\begin{align}
\mu^4\left(\frac{y}{\mu}\right)^5-\frac{y}{\mu}-1=0\>,
\end{align}
which reduces to
\begin{align}
y^5-y-\mu=0\>.
\end{align}
This is now regular in $\mu$. At zeroth order the equation is $y(y^4-1)=0$, and the root $y=0$ just recovers the regular series previously attained; so we let $\alpha$ be a root of $y^4-1$, i.e., $\alpha\in\{1,-1,i,-i\}$. A very similar Maple program (to either of the previous two) gives
\begin{align}
y_5= \alpha +\frac{1}{4}\mu - \frac{5}{32}\alpha^3\mu^2 +\frac{5}{32}\alpha^2\mu^3-\frac{385}{2048}\alpha\mu^4 + \frac{1}{4}\mu^5
\end{align}
so our approximate solution is $\sfrac{y_5}{\mu}$ or
\begin{align}
z_5 = \frac{\alpha}{\mu}+\frac{1}{4}-\frac{5}{32}\alpha^3\mu+\frac{5}{32}\alpha^2\mu^2-\frac{385}{2048}\alpha\mu^3+\frac{1}{4}\mu^4
\end{align}
which has residual \textsl{in the original equation}
\begin{align}
\Delta_5 = \mu^4 z^5 -z-1= \frac{23205}{16384}\alpha^3\mu^5 - \frac{21255}{65536}\alpha^2\mu^6 +O(\mu^7)\>.\label{residorigeq}
\end{align}
That is, $z_5$ solves $\mu^4u^5-u-1-\sfrac{23205}{16384}\>\alpha^3\mu^5=0$ up to a remainder of $O(\mu^6)$, instead of the equation we had wanted to solve. This differs from the original by $O(|\ensuremath{\varepsilon}|^{\sfrac{5}{4}})$, and for small enough $\ensuremath{\varepsilon}$ this may suffice.
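The leading coefficient of this residual can be confirmed in exact arithmetic. Writing out $\sfrac{y_5}{\mu}$ for the real branch $\alpha=1$ (each power of $\mu$ in $y_5$ drops by one) and computing with Python's rationals (our check, not the text's Maple):

```python
from fractions import Fraction

def z5(mu):
    # y5/mu for alpha = 1
    return (1/mu + Fraction(1, 4) - 5*mu/32 + 5*mu**2/32
            - Fraction(385, 2048)*mu**3 + mu**4/4)

mu = Fraction(1, 1000)
z = z5(mu)
res = mu**4 * z**5 - z - 1   # residual in the original equation
ratio = res / mu**5          # should be near 23205/16384 = 1.4163...
print(float(ratio))
```

Exact rationals sidestep the catastrophic cancellation that a floating-point evaluation of $\mu^4z^5-z-1$ would suffer here.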
\paragraph{Optimal backward error}
Interestingly enough, we can do better. The residual is only one kind of backward error. Taking the lead from the Oettli-Prager theorem \cite[chap.~6]{CorlessFillion(2013)}, we look for equations of the form
\begin{align}
\left(\mu^4 +\sum_{j=10}^{15} a_j\mu^j\right) u^5 - u -1 = 0
\end{align}
for which $z_5$ is a better solution yet. Simply equating coefficients of the residual
\begin{align}
\tilde{\Delta}_5 = \left(\mu^4+\sum_{j=10}^{15}a_j\mu^j\right)z_5^5-z_5-1
\end{align}
to zero, we find
\begin{align}
(\mu^4 - \frac{23205}{16384}\alpha^2\mu^{10}+ \frac{2145}{1024}\alpha\mu^{11})z_5^5 -z_5-1 = \frac{12165535425}{1073741824}\alpha\mu^{11}+O(\mu^{12})
\end{align}
and thus $z_5$ solves an equation that is $O(\mu^{10})=O(\ensuremath{\varepsilon}^{\sfrac{5}{2}})$ close to the original, not just an equation \eqref{residorigeq} that is $O(\mu^5)=O(|\ensuremath{\varepsilon}|^{\sfrac{5}{4}})$ away. This is a superior explanation of the quality of $z_5$.
This was obtained with the following Maple code:
\lstinputlisting{SingularPertOettli}
Computing to higher orders (see the worksheet) gives, e.g., that $z_8$ is the exact solution to an equation that differs by $O(\mu^{13})$ from the original, or better than $O(\ensuremath{\varepsilon}^3)$. This is in spite of the fact that the basic residual is only $\Delta_8=O(\ensuremath{\varepsilon}^{\sfrac{9}{4}})$, slightly better than $O(\ensuremath{\varepsilon}^2)$.
We will see other examples of improved backward error over residual for singularly-perturbed problems. In retrospect it's not so surprising, or shouldn't have been: singular problems are sensitive to changes in the leading term, and so it takes less effort to match a given solution.
\subsection{Perturbing all roots at once}
The preceding analysis found a nearby equation for each root independently; this might suffice, but there are circumstances in which it might not. Perhaps we want a ``nearby'' equation satisfied by all roots at once. Sadly this is more difficult, and in general may not be possible. But it is possible for the example we've considered and we demonstrate how the backward error is used in such a case. Let
\begin{align}
\zeta_1 &= z_5(1)= \frac{1}{\mu}+\frac{1}{4} -\frac{5}{32}\mu+\frac{5}{32}\mu^2-\frac{385}{2048}\mu^3+\frac{1}{4}\mu^4\\
\zeta_2 &= z_5(-1) = -\frac{1}{\mu}+\frac{1}{4}+\frac{5}{32}\mu +\frac{5}{32}\mu^2+\frac{385}{2048}\mu^3 + \frac{1}{4}\mu^4\\
\zeta_3 &= z_5(i) = \frac{i}{\mu}+\frac{1}{4} + \frac{5i}{32}\mu -\frac{5}{32}\mu^2-\frac{385i}{2048}\mu^3 + \frac{1}{4}\mu^4\\
\zeta_4 &= z_5(-i) = -\frac{i}{\mu}+\frac{1}{4} - \frac{5i}{32}\mu -\frac{5}{32}\mu^2+\frac{385i}{2048}\mu^3 + \frac{1}{4}\mu^4\\
\zeta_5 &= -1-\mu^4-5\mu^8\>,
\end{align}
where $\zeta_5$ is the regular root found first in the previous subsection. Now put
\begin{align}
\tilde{p}(x)=\mu^4(x-\zeta_1)(x-\zeta_2)(x-\zeta_3)(x-\zeta_4)(x-\zeta_5)
\end{align}
and expand it. The result, by Maple, is
\begin{multline}
\mu^4x^5-5\mu^{12}x^4 + \left(\frac{23205}{16384}\mu^8+\frac{45}{8}\mu^{12}\right)x^3
-\left(\frac{5435}{32768}\mu^8+\frac{195697915}{33554432}\mu^{12}\right)x^2 \\
+ \left( \frac{2575665}{2097152}\mu^8+\frac{5696429035}{1073741824}\mu^{12}-1 \right)x +
\frac{8453745}{2097152}\mu^8 -\frac{5355037365}{1073741824}\mu^{12}-1
\end{multline}
which equals
\begin{align}
\ensuremath{\varepsilon} x^5 -x-1-5\ensuremath{\varepsilon}^3 x^4 + (\frac{23205}{16384}\ensuremath{\varepsilon}^2+\frac{45}{8}\ensuremath{\varepsilon}^3)x^3 - (\frac{5435}{32768}\ensuremath{\varepsilon}^2+\cdots)x^2+O(\ensuremath{\varepsilon}^2)
\end{align}
As we see, this equation is remarkably close to the original, although we see changes in all the coefficients. The backward error is $O(\mu^8)$, i.e., $O(\ensuremath{\varepsilon}^2)$. Thus for algebraic equations it's possible to talk about simultaneous backward error.
\subsection{A hyperasymptotic example}
In \cite[sect.~15.3, pp.~285-288]{Boyd(2014)}, Boyd takes up the perturbation series expansion of the root near $-1$ of
\begin{align}
f(x,\ensuremath{\varepsilon})=1+x+\ensuremath{\varepsilon} \mathrm{sech}\left(\frac{x}{\ensuremath{\varepsilon}}\right) = 0\>,
\end{align}
a problem he took from \cite[p.~22]{Holmes(1995)}. After computing the desired expansion using a two-variable technique, Boyd then sketches an alternative approach suggested by one of us (based on \cite{Corless(1996)}), namely to use the Lambert $W$ function. Unfortunately, there are a number of sign errors in Boyd's equation (15.28). We take the opportunity here to offer a correction, together with a residual-based analysis that confirms the validity of the correction. First, the erroneous formula: Boyd has
\begin{align}
z_0 = \frac{W(-2e^{\sfrac{1}{\ensuremath{\varepsilon}}})\ensuremath{\varepsilon}-1}{\ensuremath{\varepsilon}}
\end{align}
and $x_0=-\ensuremath{\varepsilon} z_0$, so allegedly $x_0=1-\ensuremath{\varepsilon} W(-2e^{\sfrac{1}{\ensuremath{\varepsilon}}})$. This can't be right: as $\ensuremath{\varepsilon}\to0^+$, $e^{\sfrac{1}{\ensuremath{\varepsilon}}}\to\infty$ and the argument to $W$ is negative and large; but $W$ is real only if its argument is between $-e^{-1}$ and $0$, if it's negative at all. We claim that the correct formula is
\begin{align}
x_0 = -1-\ensuremath{\varepsilon} W(2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}) \label{star}
\end{align}
which shows that the errors in Boyd's equation (15.28) are trivial sign and placement slips. Indeed, Boyd's derivation is correct up to the last step; rather than fill in the algebraic details of the derivation of formula~\eqref{star}, we here verify that it works by computing the residual:
\begin{align}
\Delta_0 = 1+x_0 + \ensuremath{\varepsilon} \mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right).
\end{align}
For notational simplicity, we will omit the argument to the Lambert $W$ function and just write $W$ for $W(2e^{-\sfrac{1}{\ensuremath{\varepsilon}}})$. Then, note that $\mathrm{sech}(\sfrac{x_0}{\ensuremath{\varepsilon}}) = \mathrm{sech}(\sfrac{(1+\ensuremath{\varepsilon} W)}{\ensuremath{\varepsilon}})$ since $\mathrm{sech}$ is even, and that
\begin{align}
\mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right) = \frac{2}{\displaystyle e^{\sfrac{x_0}{\ensuremath{\varepsilon}}}+e^{-\sfrac{x_0}{\ensuremath{\varepsilon}}}} = \frac{2}{\displaystyle e^{(\sfrac{1}{\ensuremath{\varepsilon}}) +W}+e^{-\sfrac{1}{\ensuremath{\varepsilon}}-W}}\>.
\end{align}
Now, by definition,
\begin{align}
We^W = 2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}
\end{align}
and thus we obtain
\begin{align}
e^W = \frac{2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}}{W} \qquad \textrm{and} \qquad e^{-W} = \frac{We^{\sfrac{1}{\ensuremath{\varepsilon}}}}{2}\>.
\end{align}
It follows that
\begin{align}
\mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right) = \frac{2}{\displaystyle \sfrac{2}{W}+\sfrac{W}{2}} = \frac{W}{\displaystyle 1+\sfrac{W^2}{4}}\>, \label{sechW}
\end{align}
and hence the residual is
\begin{align}
\Delta_0 &= 1+(-1-\ensuremath{\varepsilon} W)+\ensuremath{\varepsilon} \frac{W}{\displaystyle 1+\sfrac{W^2}{4}}
= \frac{\displaystyle -\ensuremath{\varepsilon} W(1+\sfrac{W^2}{4}) + \ensuremath{\varepsilon} W}{\displaystyle 1+\sfrac{W^2}{4} } \\
&= \frac{\displaystyle -\sfrac{\ensuremath{\varepsilon} W^3}{4}}{\displaystyle 1+\sfrac{W^2}{4}} = \frac{-\ensuremath{\varepsilon} W^3}{4+ W^2} \nonumber \>.
\end{align}
Now $W= W(2e^{-1/\ensuremath{\varepsilon}})$ and as $\ensuremath{\varepsilon}\to 0^+$, $2e^{-1/\ensuremath{\varepsilon}}\to 0$ rapidly; since the Taylor series for $W(z)$ starts as $W(z)= z-z^2+\frac{3}{2}z^3+\ldots$, we have that $W(2e^{-\sfrac{1}{\ensuremath{\varepsilon}}})\sim 2e^{-\sfrac{1}{\ensuremath{\varepsilon}}}$ and therefore
\begin{align}
\Delta_0 = -\ensuremath{\varepsilon} 2e^{-\sfrac{3}{\ensuremath{\varepsilon}}}+O(e^{-\sfrac{5}{\ensuremath{\varepsilon}}})\>.
\end{align}
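Since $W$ here has a tiny positive argument, this identity for the residual is easy to spot-check in floating point with a hand-rolled Newton solver for $we^w=a$ (our sketch; no special library is needed):

```python
import math

def lambertw(a):
    # Newton iteration for w*e^w = a on the principal branch, a > 0
    w = a if a < 1 else math.log(a)
    for _ in range(50):
        w -= (w*math.exp(w) - a) / (math.exp(w)*(1 + w))
    return w

eps = 0.2
W = lambertw(2*math.exp(-1/eps))
x0 = -1 - eps*W
res = 1 + x0 + eps/math.cosh(x0/eps)    # residual Delta_0, directly
closed = -eps*W**3/(4 + W**2)           # the closed form just derived
leading = -2*eps*math.exp(-3/eps)       # its leading-order behaviour
```

The direct evaluation and the closed form agree to machine precision, and both agree with the leading-order estimate to within a few percent at $\ensuremath{\varepsilon}=0.2$.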
We see that this residual is very small indeed. But we can say even more. Boyd leaves us the exercise of computing higher order terms; here is our solution to the exercise. A Newton correction would give us
\begin{align}
x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}
\end{align}
and we have already computed $f(x_0)=\Delta_0$. What is $f'(x_0)$? Since $f(x) = 1+x+\ensuremath{\varepsilon}\mathrm{sech}(\sfrac{x}{\ensuremath{\varepsilon}})$, this derivative is
\begin{align}
f'(x) = 1-\mathrm{sech}\left(\frac{x}{\ensuremath{\varepsilon}}\right)\mathrm{tanh}\left(\frac{x}{\ensuremath{\varepsilon}}\right)\>.
\end{align}
Simplifying similarly to equation \eqref{sechW}, and remembering that $\tanh$ is odd while $\sfrac{x_0}{\ensuremath{\varepsilon}}=-(\sfrac{1}{\ensuremath{\varepsilon}}+W)$, we obtain
\begin{align}
\mathrm{tanh}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right) = -\,\frac{e^{1/\ensuremath{\varepsilon} +W} - e^{-1/\ensuremath{\varepsilon}-W}}{e^{1/\ensuremath{\varepsilon}+W}+e^{-1/\ensuremath{\varepsilon}-W}} = -\,\frac{\frac{2}{W}-\frac{W}{2}}{\frac{2}{W}+\frac{W}{2}} = \frac{W^2-4}{W^2+4}\>.
\end{align}
Thus
\begin{align}
f'(x_0) &= 1-\mathrm{sech}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right)\mathrm{tanh}\left(\frac{x_0}{\ensuremath{\varepsilon}}\right)
= 1+ \frac{\displaystyle W(1-\sfrac{W^2}{4})}{\displaystyle (1+\sfrac{W^2}{4})^2}\>.
\end{align}
It follows that
\begin{align}
x_1 &= x_0 - \frac{\Delta_0}{f'(x_0)}
= -1-\ensuremath{\varepsilon} W+\frac{\displaystyle\sfrac{\ensuremath{\varepsilon} W^3}{4+W^2}}{\displaystyle 1+ \frac{W(1-\sfrac{W^2}{4})}{(1+\sfrac{W^2}{4})^2}} \\
&= -1 -\ensuremath{\varepsilon} W+ \frac{\ensuremath{\varepsilon} W^3(4+W^2)}{16+16W+8W^2-4W^3+W^4}\\
&= -1-\ensuremath{\varepsilon} W+ \frac{\ensuremath{\varepsilon}}{4}W^3-\frac{\ensuremath{\varepsilon}}{4}W^4+\frac{3}{16}\ensuremath{\varepsilon} W^5-\frac{11}{64}\ensuremath{\varepsilon} W^7+O(W^8)
\end{align}
Finally, the residual of $x_1$ is
\begin{align}
\Delta_1 = 4\ensuremath{\varepsilon} e^{-\sfrac{7}{\ensuremath{\varepsilon}}}+O(\ensuremath{\varepsilon} e^{-\sfrac{8}{\ensuremath{\varepsilon}}})\>. \label{Newtresid}
\end{align}
We thus see an example of the use of $f'(x_0)$ instead of just $A$, as discussed in section \ref{genframe}, to approximately double the number of correct terms in the approximation.
This analysis can be implemented in Maple as follows:
\lstinputlisting{Hyperasymptotic}
Note that we had to use the MultiSeries package \cite{Salvy(2010)} to expand the series in equation \eqref{Newtresid} in order to understand how accurate $z_2$ is. The expansion of $z_2$ is slightly more lacunary than the two-variable expansion in \cite{Boyd(2014)}, because we have a zero coefficient for $W^2$.
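As a numerical sanity check, the same computation can be sketched in Python rather than Maple (standard library only; the helper \texttt{lambertw} is our own small Newton iteration, not a library routine). One Newton correction to $x_0=-1-\ensuremath{\varepsilon} W$ collapses the residual of $f(x)=1+x+\ensuremath{\varepsilon}\,\mathrm{sech}(\sfrac{x}{\ensuremath{\varepsilon}})$ by many orders of magnitude:

```python
import math

def lambertw(z):
    # principal branch of Lambert W by Newton iteration: solve w*exp(w) = z
    w = z  # adequate starting guess for small z > 0
    for _ in range(50):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

eps = 0.2
f  = lambda x: 1.0 + x + eps / math.cosh(x / eps)              # sech = 1/cosh
fp = lambda x: 1.0 - math.tanh(x / eps) / math.cosh(x / eps)   # f'(x)

W  = lambertw(2.0 * math.exp(-1.0 / eps))
x0 = -1.0 - eps * W          # the leading approximation
x1 = x0 - f(x0) / fp(x0)     # one Newton correction
print(abs(f(x0)), abs(f(x1)))  # the residual drops by several orders of magnitude
```

Even at the rather large $\ensuremath{\varepsilon}=0.2$, the residual falls from about $10^{-7}$ to near rounding level.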
\section{Divergent Asymptotic Series}
Before we begin, a note about the section title: some authors give the impression that the word ``asymptotic'' is used \textsl{only} for divergent series,
and so the title might seem redundant. But the proper definition of an asymptotic series can include convergent series (see, e.g., \cite{Bruijn(1981)}), as it means that the relevant limit is not as the number of terms $N$ goes to infinity, but rather as the variable in question (be it \ensuremath{\varepsilon}, or $x$, or whatever) approaches a distinguished point (be it 0, or infinity, or whatever). In this sense, an asymptotic series might diverge as $N$ goes to infinity, or it might converge, but typically we don't care. We concentrate in this section on divergent asymptotic series.
Beginning students are often confused when they learn the usual ``rule of thumb'' for optimal accuracy when using divergent asymptotic series, namely to truncate the series \textsl{before} adding in the smallest (magnitude) term. This rule is usually motivated by an analogy with \textsl{convergent} alternating series, where the error is less than the magnitude of the first term neglected. But why should this work (if it does) for divergent series?
The answer we present in this section isn't as clear-cut as we would like, but nonetheless we find it explanatory. Perhaps you and your students will, too. The basis for the answer is that one can measure the residual $\Delta$ that arises on truncating the series at, say, $M$ terms, and choose $M$ to minimize the residual. Since the forward error is bounded by the condition number times the size of the residual, by minimizing $\|\Delta\|$ one minimizes a bound on the forward error. It often turns out that this method gives the same $M$ as the rule of thumb, though not always.
An example may clarify this. We use the large-$x$ asymptotics of $J_0(x)$, the zeroth-order Bessel function of the first kind. In \cite[section 10.17(i)]{NIST:DLMF}, we find the following asymptotic series, which is attributed to Hankel:
\begin{align}
J_0(x) = \left(\frac{2}{\pi x}\right)^{\sfrac{1}{2}}\left( A(x)\cos\left(x-\frac{\pi}{4}\right)-B(x)\sin\left(x-\frac{\pi}{4}\right)\right)
\end{align}
where
\begin{align}
A(x) = \sum_{k\geq 0} \frac{a_{2k}}{x^{2k}} \qquad \textrm{and}\qquad
B(x) = \sum_{k\geq 0} \frac{a_{2k+1}}{x^{2k+1}} \label{twoseries}
\end{align}
and where
\begin{align}
a_0 &= 1 \nonumber\\
a_k &= \frac{(-1)^{\lfloor (k+1)/2 \rfloor} }{k! 8^k}\prod_{j=1}^k (2j-1)^2\>.
\end{align}
For the first few $a_k$s, we get
\begin{align}
a_0=1, a_1 = -\frac{1}{8}, a_2= -\frac{9}{128}, a_3 = \frac{75}{1024}\>,
\end{align}
and so on. The ratio test immediately shows the two series \eqref{twoseries} diverge for all finite~$x$.
Luckily, we always have to truncate anyway, and if we do, the forward errors get arbitrarily small so long as we take $x$ arbitrarily large. Because the Bessel functions are so well-studied, we have alternative methods for computation, for instance
\begin{align}
J_0(x) = \frac{1}{\pi}\int_0^\pi \cos(x\sin\theta)d\theta
\end{align}
which, given $x$, can be evaluated numerically (although it's ill-conditioned in a relative sense near any zero of $J_0(x)$). So we can directly compute the forward error.
But let's pretend that we can't. We have the asymptotic series, and not much more. Of course we have to have a defining equation---Bessel's differential equation
\begin{align}
x^2y''+xy'+x^2y=0
\end{align}
with the appropriate normalizations at $\infty$. We look at
\begin{align}
y_{N,M} = \left(\frac{2}{\pi x}\right)^{\sfrac{1}{2}}\left(A_N(x)\cos\left(x-\frac{\pi}{4}\right)-B_M(x)\sin\left(x-\frac{\pi}{4}\right)\right)
\end{align}
where
\begin{align}
A_N(x) = \sum_{k=0}^N \frac{a_{2k}}{x^{2k}}\qquad \textrm{and}\qquad
B_M(x) = \sum_{k=0}^M \frac{a_{2k+1}}{x^{2k+1}}\>.
\end{align}
Inspection shows that there are only two cases that matter: whether we end on an even term $a_{2k}$ or on an odd term $a_{2k+1}$; the first term omitted is then odd or even, respectively. A little work shows that the residual
\begin{align}
\Delta = x^2y''_{N,M} + x y'_{N,M} + x^2y_{N,M}
\end{align}
is just
\begin{align}
\frac{\displaystyle (k+\sfrac{1}{2})^2 a_k}{\displaystyle x^{k+\sfrac{1}{2}}} \cdot \left\{ \begin{array}{c} \cos(x-\sfrac{\pi}{4})\\ \sin(x-\sfrac{\pi}{4})\end{array}\right\}
\end{align}
if the final term \textsl{kept}, odd or even, is $a_k$. If even, then multiply by $\cos(x-\sfrac{\pi}{4})$; if odd, then $\sin(x-\sfrac{\pi}{4})$.
Let's pause a moment. The algebra to show this is a bit finicky but not hard (the equation is, after all, linear). The end result is an extremely simple (and exact!) formula for $\Delta$. The finite series $y_{N,M}$ is then the exact solution to
\begin{align}
x^2y''+xy'+x^2y &= \Delta\\
&= \frac{\displaystyle (k+\sfrac{1}{2})^2 a_k}{x^{k+\sfrac{1}{2}}} \cdot \left\{ \begin{array}{c} \cos(x-\frac{\pi}{4})\\ \sin(x-\frac{\pi}{4})\end{array}\right\}
\end{align}
and, provided $x$ is large enough, this is only a small perturbation of Bessel's equation. In many modelling situations, such a small perturbation may be of direct physical significance, and we'd be done. Here, though, Bessel's equation typically arises as an intermediate step, after separation of variables, say. Hence one might be interested in the forward error. By the theory of Green's functions, we may express this as
\begin{align}
J_0(x) - y_{N,M}(x) = \int_x^\infty K(x,\xi)\Delta(\xi)d\xi
\end{align}
for a suitable kernel $K(x,\xi)$. The obvious conclusion is that if $\Delta$ is small then $J_0(x)-y_{N,M}(x)$ will be small too; but $K(x,\xi)$ will have some effect, possibly amplifying the effects of $\Delta$, or perhaps even damping them. Hence, the connection is indirect.
To have an error in $\Delta$ of at most $\ensuremath{\varepsilon}$, we must have
\begin{align}
\left(k+\frac{1}{2}\right)^2\frac{|a_k|}{x^{k+\sfrac{1}{2}}}\leq\ensuremath{\varepsilon}
\end{align}
(remember, $x>0$). This will happen only if
\begin{align}
x\geq \left(\left(k+\frac{1}{2}\right)^2 \frac{|a_k|}{\ensuremath{\varepsilon}}\right)^{2/(2k+1)}
\end{align}
and this, for fixed $k$, goes to $\infty$ as $\ensuremath{\varepsilon}\to0$.
Alternatively, we may ask which $k$, for a fixed $x$, minimizes
\begin{align}
\left(k+\frac{1}{2}\right)^2\frac{|a_k|}{x^{k+\sfrac{1}{2}}}
\end{align}
and this answers the truncation question in a rational way. In this particular case, minimizing $\|\Delta\|$ doesn't necessarily minimize the forward error (although it's close). For $x=2.3$, for instance, the sequence $(k+\sfrac{1}{2})^2|a_k|x^{-k-\sfrac{1}{2}}$ is (omitting the common factor $\sqrt{\sfrac{2}{\pi}}$)
\begin{align}
\begin{array}{ccccccc}
k & 0 & 1 & 2 & 3 & 4 & 5\\
A_k & 0.165 & 0.081 & 0.055 & 0.049 & 0.054 & 0.070
\end{array}
\end{align}
The clear winner seems to be $k=3$. This suggests that for $x=2.3$, the best series to take is
\begin{align}
y_3 = \left(\frac{2}{\pi x}\right)^{\sfrac{1}{2}} \left( \left(1-\frac{9}{128x^2}\right)\cos\left(x-\frac{\pi}{4}\right) + \left(\frac{1}{8x}-\frac{75}{1024x^3}\right)\sin\left(x-\frac{\pi}{4}\right)\right)\>.
\end{align}
This gives $5.454\cdot 10^{-2}$ for $x=2.3$. But the cosine versus sine plays a role here: $\cos(2.3-\sfrac{\pi}{4})\doteq 0.056$ while $\sin(2.3-\sfrac{\pi}{4})\doteq0.998$, so we should have included these factors. When we do, the estimates for $\Delta_0$, $\Delta_2$, and $\Delta_4$ are all significantly reduced---and this changes our selection, making $k=4$ the right choice; $\Delta_6>\Delta_4$ as well (either way). But the influence of the integral is mollifying.
Comparing to a better answer (computed via the integral formula), $0.0555398$, we see that the error is about $8.8\cdot 10^{-4}$ whereas $((4+\sfrac{1}{2})^2a_4/2.3^{4+\sfrac{1}{2}})\cos(2.3-\sfrac{\pi}{4})$ is $3.06\cdot 10^{-3}$; hence the residual overestimates the error slightly.
How does the rule of thumb do? The first term that is neglected here is $(\sfrac{1}{x})^{\sfrac{1}{2}}a_5x^{-5}\sin(x-\sfrac{\pi}{4})$ which is $\sim2.3\cdot 10^{-3}$ apart from the $(\sfrac{2}{\pi})^{\sfrac{1}{2}}=0.797$ factor, so about $1.86\cdot10^{-3}$. The \textsl{next} term is, however, $(\sfrac{2}{\pi x})^{\sfrac{1}{2}}a_6x^{-6}\cos(x-\sfrac{\pi}{4})\doteq -1.14\cdot 10^{-4}$ which is smaller yet, suggesting that we should keep the $a_5$ term.
But we shouldn't. Stopping with $a_4$ gives a better answer, just as the residual suggests that it should.
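The whole comparison is easy to script. Below is a sketch in Python (standard library only; \texttt{hankel\_a}, \texttt{J0\_integral}, and \texttt{J0\_series} are our own helper names): it rebuilds the table of residual estimates, confirms the raw minimum at $k=3$, and checks a truncated series against the integral formula.

```python
import math

def hankel_a(kmax):
    # coefficients in the convention used here: 1, -1/8, -9/128, 75/1024, ...
    a = [1.0]
    for k in range(1, kmax + 1):
        a.append((-1) ** k * (2 * k - 1) ** 2 / (8 * k) * a[-1])
    return a

def J0_integral(x, n=20000):
    # reference value from J0(x) = (1/pi) int_0^pi cos(x sin t) dt (midpoint rule)
    h = math.pi / n
    return sum(math.cos(x * math.sin((i + 0.5) * h)) for i in range(n)) * h / math.pi

def J0_series(x, kmax):
    # Hankel expansion truncated after the term containing a_kmax
    a = hankel_a(kmax)
    A = sum(a[k] / x ** k for k in range(0, kmax + 1, 2))
    B = sum(a[k] / x ** k for k in range(1, kmax + 1, 2))
    return math.sqrt(2 / (math.pi * x)) * (
        A * math.cos(x - math.pi / 4) - B * math.sin(x - math.pi / 4))

x = 2.3
a = hankel_a(8)
est = [(k + 0.5) ** 2 * abs(a[k]) / x ** (k + 0.5) for k in range(6)]
print([round(e, 3) for e in est])        # the raw estimates: minimum at k = 3
print(J0_series(x, 4), J0_integral(x))   # truncating after a_4 lands close
```

Running this reproduces the table above and shows the truncated series within about $10^{-3}$ of the integral's value.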
We emphasize that this is only a slightly more rational rule of thumb, because minimizing $\|\Delta\|$ only minimizes a bound on the forward error, not the forward error itself. Still, we have not seen this discussed in the literature before. A final comment is that the defining equation, and its scale, also define the scale for what counts as a ``small'' residual.
So, a justification for the ``rule of thumb'' would be as follows. In our general scheme,
\begin{align}
Au_{n+1} = -[\ensuremath{\varepsilon}^{n+1}]\Delta_n
\end{align}
and thus, loosely speaking,
\begin{align}
u_{n+1} \sim -A^{-1}\Delta_n + O(\ensuremath{\varepsilon}^{n+1})\>.
\end{align}
Thus, if we stop when $u_{n+1}$ is smallest, this would tend to happen at the same integer $n$ at which $\Delta_n$ is smallest.
This isn't always going to be true. For instance, suppose $A$ is a matrix with largest singular value $\sigma_1$ and smallest $\sigma_N>0$, with associated singular vectors $\hat{u}_k$ and $\hat{v}_k$, so that
\begin{align}
A\hat{v}_k = \sigma_k\hat{u}_k\>.
\end{align}
Then, if $u_{n+1}$ is like $\hat{v}_1$ then $\Delta_n$ will be like $\sigma_1\hat{u}_1$, which can be substantially larger; contrariwise, if $u_{n+1}$ is like $\hat{v}_N$ then $A\hat{v}_N=\sigma_N\hat{u}_N$ and $\Delta_n$ can be substantially smaller. The point is that the direction of $\Delta_n$ can change between steps in the perturbation expansion; we thus expect correlation but not identity.
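A tiny numerical illustration (Python, with made-up singular values): for a diagonal $A$, unit-norm corrections along different singular directions produce residuals whose sizes differ by the full ratio $\sfrac{\sigma_1}{\sigma_N}$.

```python
import math

# A = diag(100, 0.01): sigma_1 = 100, sigma_N = 0.01 (hypothetical numbers)
sigma = [100.0, 0.01]
norms = []
for u in ([1.0, 0.0], [0.0, 1.0]):           # unit vectors along each singular direction
    r = [sigma[i] * u[i] for i in range(2)]  # the residual A u
    norms.append(math.hypot(r[0], r[1]))
print(norms)  # [100.0, 0.01]: same-size corrections, wildly different residuals
```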
\section{Initial-Value Problems}
BEA has successfully been applied to the \textsl{numerical} solution of differential equations for a long time now. Examples include the works of Enright since the 1980s, e.g., \cite{Enright(1989)b,Enright(1989)a}, and indeed the Lanczos $\tau$-method is older still~\cite{Lanczos(1988)}. It was pointed out in \cite{Corless(1992)} and \cite{Corless(1993)b} that BEA can also be used for perturbation and other series solutions of differential equations. We here display several examples illustrating this fact. We use regular expansion, matched asymptotic expansions, the renormalization group method, and the method of multiple scales.
\subsection{Duffing's Equation}
This proposed way of interpreting solutions obtained by perturbation methods has interesting advantages for the analysis of series solutions to differential equations. Consider for example an unforced weakly nonlinear Duffing oscillator, which we take from \cite{Bender(1978)}:
\begin{align}
y''+y+\varepsilon y^3=0 \label{Duffing}
\end{align}
with initial conditions $y(0)=1$ and $y'(0)=0$. As usual, we assume that $0<\varepsilon\ll 1$.
Our discussion of this example does not provide a new method of solving this problem, but instead it improves the interpretation of the quality of solutions obtained by various methods.
\subsubsection{Regular expansion}
The classical perturbation analysis supposes that the solution to this equation can be written as the power series
\begin{align}
y(t) = y_0(t) + y_1(t)\ensuremath{\varepsilon} + y_2(t)\ensuremath{\varepsilon}^2+y_3(t)\ensuremath{\varepsilon}^3+\cdots\>.
\end{align}
Substituting this series in equation \eqref{Duffing} and solving the equations obtained by equating to zero the coefficients of powers of $\ensuremath{\varepsilon}$ in the residual, we find $y_0(t)$ and $y_1(t)$ and we thus have the solution
\begin{align}
z_1(t)= \cos( t) +\ensuremath{\varepsilon} \left( \frac{1}{32}\cos(3t) -\frac{1}{32}\cos( t) -\frac{3}{8}t\sin( t) \right)\>. \label{classical1st}
\end{align}
The difficulty with this solution
is typically characterized in one of two ways. Physically, the secular term $t\sin t$ shows that our simple perturbative method has failed since the energy conservation prohibits unbounded solutions. Mathematically, the secular term $t\sin t$ shows that our method has failed since the periodicity of the solution contradicts the existence of secular terms.
Both these characterizations are correct, but require foreknowledge of what is physically meaningful or of whether the solutions are bounded. In contrast, interpreting \eqref{classical1st} from the backward error viewpoint is much simpler. To compute the residual, we simply substitute $z_1$ in equation \eqref{Duffing}, that is, the residual is defined by
\begin{align}
\Delta_1(t) = z_1'' + z_1 + \ensuremath{\varepsilon} z_1^3\>.
\end{align}
For the first-order solution of equation \eqref{classical1st}, the residual is
\begin{multline}
\Delta_1(t) = \Big( -\tfrac {3}{64}\cos( t) +\tfrac{3}{128}\cos( 5t) +\tfrac{3}{128}\cos( 3t) -
\tfrac{9}{32}t\sin(t)\\ -\tfrac{9}{32}t\sin( 3t)\Big) \ensuremath{\varepsilon}^2+O( \ensuremath{\varepsilon}^3) \>. \label{ClassicalRes1st}
\end{multline}
$\Delta_1(t)$ is exactly computable. We don't print it all here because it's too ugly, but in figure \ref{ClassicalDuffingRes}, we see that the complete residual grows rapidly.
\begin{figure}
\centering
\includegraphics[width=.55\textwidth]{ClassicalDuffingRes.png}
\caption{Absolute Residual for the first-order classical perturbative solution of the unforced weakly nonlinear Duffing equation with $\ensuremath{\varepsilon}=0.1$.}
\label{ClassicalDuffingRes}
\end{figure}
This is due to the secular term $-\tfrac{9}{32}t(\sin(t)+\sin(3t))$ of equation \eqref{ClassicalRes1st}. Thus we come to the conclusion that the secular term contained in the first-order solution obtained in equation \eqref{classical1st} invalidates it, but this time we do not need to know in advance what to physically expect or to prove that the solution is bounded. This is a slight but sometimes useful gain in simplicity.\footnote{In addition, this method makes it easy to find mistakes of various kinds. For instance, a typo in the 1978 edition of \cite{Bender(1978)} was uncovered by computing the residual. That typo does not seem to be in the later editions, so it's likely that the authors found and fixed it themselves.}
A simple Maple code makes it possible to easily obtain higher-order solutions:
\lstinputlisting{DuffingClassical}
Experiments with this code suggest the conjecture that $\Delta_n=O(t^n\ensuremath{\varepsilon}^{n+1})$. For this to be small, we must have $\ensuremath{\varepsilon} t=o(1)$, that is, $t=o(\sfrac{1}{\ensuremath{\varepsilon}})$.
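The growth of the residual is easy to see numerically as well. Here is a quick check, sketched in Python rather than Maple (standard library only), with the second derivative of $z_1$ written out by hand:

```python
import math

eps = 0.1

def z1(t):
    return math.cos(t) + eps * (math.cos(3 * t) / 32 - math.cos(t) / 32
                                - 3 * t * math.sin(t) / 8)

def z1pp(t):
    # exact second derivative of z1 (note (t sin t)'' = 2 cos t - t sin t)
    return -math.cos(t) + eps * (-9 * math.cos(3 * t) / 32 + math.cos(t) / 32
                                 - 3 * (2 * math.cos(t) - t * math.sin(t)) / 8)

def residual(t):
    return z1pp(t) + z1(t) + eps * z1(t) ** 3

print(abs(residual(1.0)), abs(residual(100.0)))  # O(eps^2) at first, O(1) later
```

For $\ensuremath{\varepsilon}=0.1$ the residual is a few times $10^{-3}$ at $t=1$ but of order $1$ by $t=100$, exactly the secular growth discussed above.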
\subsubsection{Lindstedt's method}
The failure to obtain an accurate solution on unbounded time intervals by means of the classical perturbation method suggests that another method that eliminates the secular terms will be preferable. A natural choice is Lindstedt's method, which rescales the time variable $t$ in order to cancel the secular terms.
The idea is that if we use a rescaling $\tau=\omega t$ of the time variable and choose $\omega$ wisely, the secular terms from the classical perturbation method will cancel each other out.\footnote{Interpret this as: we choose $\omega$ to keep the residual small over as long a time-interval as possible.} Applying this transformation, equation \eqref{Duffing} becomes
\begin{align}
\omega^2 y''(\tau)+y(\tau)+\ensuremath{\varepsilon} y^3(\tau)=0, \qquad y(0)=1,\enskip y'(0)=0\>.\label{DuffingTau}
\end{align}
In addition to writing the solution as a truncated series
\begin{align}
z_1(\tau) = y_0(\tau)+y_1(\tau)\ensuremath{\varepsilon} \label{ytau}
\end{align}
we expand the scaling factor as a truncated power series in \ensuremath{\varepsilon}:
\begin{align}
\omega=1+\omega_1\ensuremath{\varepsilon}\>. \label{omeg}
\end{align}
Substituting \eqref{ytau} and \eqref{omeg} back in equation \eqref{DuffingTau} to obtain the residual and setting the terms of the residual to zero in sequence, we find the equations
\begin{align}
y_0'' + y_0 =0\>,
\end{align}
so that $y_0=\cos(\tau)$, and
\begin{align}
y_1'' + y_1 = -y_0^3 - 2\omega_1 y_0''
\end{align}
subject to the same initial conditions, $y_0(0)=1, y'_0(0)=0, y_1(0)=0$, and $y_1'(0)=0$. By solving this last equation, we find
\begin{align}
y_1(\tau) =\frac{1}{32}\cos(3\tau) -\frac{1}{32}\cos(\tau) -\frac{3}{8}\tau \sin(\tau)+\omega_1\tau\sin(\tau)\>.
\end{align}
So, we only need to choose $\omega_1=\sfrac{3}{8}$ to cancel the secular terms containing $\tau\sin(\tau)$. Finally, we simply write the solution $y(t)$ by taking the first two terms of $y(\tau)$ and plugging in $\tau=(1+\sfrac{3\ensuremath{\varepsilon}}{8})t$:
\begin{align}
z_1(t) = \cos \tau +\ensuremath{\varepsilon} \left( \frac{1}{32}\cos 3\tau -\frac{1}{32}\cos \tau \right)
\end{align}
This truncated power series can be substituted back in the left-hand side of equation \eqref{Duffing} to obtain an expression for the residual:
\begin{align}
\Delta_1(t) = \left( -\frac{21}{128}\cos \left( t \right) -\frac {3}{16}\cos \left( 3t \right) +\frac {3}{128}\cos \left( 5t \right) \right) \ensuremath{\varepsilon}^2+O \left( \ensuremath{\varepsilon}^3\right)
\end{align}
See figure \ref{FstLindstedt}.
\begin{figure}
\centering
\subfigure[First-Order\label{FstLindstedt}]{\includegraphics[width=.48\textwidth]{FstLindstedt.png}}
\subfigure[Second-Order\label{SndLindstedt}]{\includegraphics[width=.48\textwidth]{SndLindstedt.png}}
\caption{Absolute Residual for the Lindstedt solutions of the unforced weakly nonlinear Duffing equation with $\ensuremath{\varepsilon}=0.1$.}
\end{figure}
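A quick check in Python (a sketch, standard library only) confirms that the secular-term-free solution $z_1(t)=\cos\tau+\ensuremath{\varepsilon}(\cos 3\tau-\cos\tau)/32$ with $\tau=(1+\sfrac{3\ensuremath{\varepsilon}}{8})t$ keeps its residual uniformly small, in contrast to the classical expansion:

```python
import math

eps = 0.1
om = 1 + 3 * eps / 8   # Lindstedt frequency to first order

def z1(t):
    tau = om * t
    return math.cos(tau) + eps * (math.cos(3 * tau) - math.cos(tau)) / 32

def z1pp(t):
    # exact second derivative: each cos(k*om*t) picks up a factor -(k*om)^2
    tau = om * t
    return om ** 2 * (-math.cos(tau)
                      + eps * (-9 * math.cos(3 * tau) + math.cos(tau)) / 32)

res = [abs(z1pp(t) + z1(t) + eps * z1(t) ** 3) for t in range(1000)]
print(max(res))  # O(eps^2), uniformly in t
```

Since $z_1$ is a trigonometric polynomial in $\omega t$, its residual is bounded for all $t$; for $\ensuremath{\varepsilon}=0.1$ the maximum over $0\leq t<1000$ is a few times $10^{-3}$.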
We then do the same with the second term $\omega_2$. The following Maple code has been tested up to order $12$:
\lstinputlisting{DuffingLindstedt}
The significance of this is as follows: The normal presentation of the method first requires a proof (an independent proof) that the reference solution is bounded and therefore the secular term $\ensuremath{\varepsilon} t \sin t$ in the classical solution is spurious. \textsl{But} the residual analysis needs no such proof. It says directly that the classical solution solves not
\begin{align}
f(t,y,y',y'')=0
\end{align}
nor $f+\Delta f=0$ for uniformly small $\Delta$ but rather that the residual \textsl{departs} from 0 and is \textsl{not} uniformly small whereas the residual for the Lindstedt solution \textsl{is} uniformly small.
\subsection{Morrison's counterexample}
In \cite[pp.~192-193]{Omalley(2014)}, we find a discussion of the equation
\begin{align}
y''+y+\ensuremath{\varepsilon}(y')^3+3\ensuremath{\varepsilon}^2(y')=0\>.
\end{align}
O'Malley attributed the equation to \cite{Morrison(1966)}. The equation is one that is supposed to illustrate a difficulty with the (very popular and effective) method of multiple scales. We give a relatively full treatment here because a residual-based approach shows that the method of multiple scales, applied somewhat artfully, can be quite successful and moreover we can demonstrate \textsl{a posteriori} that the method was successful. The solution sketched in \cite{Omalley(2014)} uses the complex exponential format, which one of us used to good effect in his PhD, but in this case the real trigonometric form leads to slightly simpler formul\ae. We are very much indebted to our colleague, Professor Pei Yu at Western, for his careful solution, which we follow and analyze here.\footnote{We had asked him to solve this problem using one of his many computer algebra programs; instead, he presented us with an elegant handwritten solution.}
The first thing to note is that we will use three time scales, $T_0=t$, $T_1=\ensuremath{\varepsilon} t$, and $T_2=\ensuremath{\varepsilon}^2 t$ because the DE contains an $\ensuremath{\varepsilon}^2$ term, which will prove to be important. Then the multiple scales formalism gives
\begin{align}
\frac{d}{dt} = \frac{\partial}{\partial T_0} + \ensuremath{\varepsilon} \frac{\partial}{\partial T_1} + \ensuremath{\varepsilon}^2 \frac{\partial}{\partial T_2} \label{msformalism}
\end{align}
This formalism gives most students some pause, at first: replace an ordinary derivative by a sum of partial derivatives using the chain rule? What could this mean? But soon the student, emboldened by success on simple problems, gets used to the idea and eventually the conceptual headaches are forgotten.\footnote{This can be made to make sense, after the fact. We imagine $F(T_0,T_1,T_2)$ describing the problem, and $\sfrac{d}{dt}=\sfrac{\partial F}{\partial T_0}\sfrac{\partial T_0}{\partial t} + \sfrac{\partial F}{\partial T_1}\sfrac{\partial T_1}{\partial t} + \sfrac{\partial F}{\partial T_2}\sfrac{\partial T_2}{\partial t}$, which gives $\sfrac{d}{dt}=\sfrac{\partial F}{\partial T_0}+\ensuremath{\varepsilon} \sfrac{\partial F}{\partial T_1} + \ensuremath{\varepsilon}^2 \sfrac{\partial F}{\partial T_2}$ since $T_0=t$, $T_1=\ensuremath{\varepsilon} t$, and $T_2=\ensuremath{\varepsilon}^2t$.} But sometimes they return, as with this example.
To proceed, we take
\begin{align}
y=y_0+\ensuremath{\varepsilon} y_1+\ensuremath{\varepsilon}^2 y_2+O(\ensuremath{\varepsilon}^3)
\end{align}
and equate to zero like powers of $\ensuremath{\varepsilon}$ in the residual. The expansion of $\sfrac{d^2 y}{dt^2}$ is straightforward:
\begin{multline}
\left( \frac{\partial}{\partial T_0} + \ensuremath{\varepsilon}\frac{\partial}{\partial T_1}+\ensuremath{\varepsilon}^2\frac{\partial}{\partial T_2}\right)^2(y_0+\ensuremath{\varepsilon} y_1+\ensuremath{\varepsilon}^2 y_2) =\\
\frac{\partial^2 y_0}{\partial T_0^2} + \ensuremath{\varepsilon}\left(\frac{\partial^2 y_1}{\partial T_0^2}+2\frac{\partial^2 y_0}{\partial T_0\partial T_1}\right)
+\ensuremath{\varepsilon}^2\left(\frac{\partial^2 y_2}{\partial T_0^2}+2\frac{\partial^2 y_1}{\partial T_0\partial T_1}+\frac{\partial^2 y_0}{\partial T_1^2}+2\frac{\partial^2 y_0}{\partial T_0\partial T_2}\right)
\end{multline}
For completeness we include the other necessary terms, even though this construction may be familiar to the reader. We have
\begin{multline}
\ensuremath{\varepsilon}\left(\frac{dy}{dt}\right)^3 = \ensuremath{\varepsilon}\left( \left(\frac{\partial}{\partial T_0}+\ensuremath{\varepsilon}\frac{\partial}{\partial T_1}\right)(y_0+\ensuremath{\varepsilon} y_1)\right)^3\\
= \ensuremath{\varepsilon}\left(\frac{\partial y_0}{\partial T_0}\right)^3 + 3\ensuremath{\varepsilon}^2\left(\frac{\partial y_0}{\partial T_0}\right)^2 \left(\frac{\partial y_0}{\partial T_1}+\frac{\partial y_1}{\partial T_0}\right)+\cdots\>,
\end{multline}
and the expansion of $y=y_0+\ensuremath{\varepsilon} y_1+\ensuremath{\varepsilon}^2 y_2$ itself is straightforward, and also
\begin{align}
3\ensuremath{\varepsilon}^2\left( \left(\frac{\partial}{\partial T_0}+\cdots\right)(y_0+\cdots)\right) = 3\ensuremath{\varepsilon}^2\frac{\partial y_0}{\partial T_0}+\cdots
\end{align}
is at this order likewise straightforward. At $O(\ensuremath{\varepsilon}^0)$, setting the residual to zero gives
\begin{align}
\frac{\partial^2 y_0}{\partial T_0^2}+y_0=0
\end{align}
and without loss of generality we take as solution
\begin{align}
y_0 = a(T_1,T_2)\cos(T_0+\varphi(T_1,T_2))
\end{align}
by shifting the origin to a local maximum when $T_0=0$. For notational simplicity put $\theta=T_0+\varphi(T_1,T_2)$. At $O(\ensuremath{\varepsilon}^1)$ the equation is
\begin{align}
\frac{\partial^2 y_1}{\partial T_0^2} + y_1 = -\left(\frac{\partial y_0}{\partial T_0}\right)^3 - 2\frac{\partial^2 y_0}{\partial T_0\partial T_1}
\end{align}
where the first term on the right comes from the $\ensuremath{\varepsilon}\dot{y}^3$ term whilst the second comes from the multiple scales formalism. Using $\sin^3\theta=\sfrac{3}{4}\sin\theta-\sfrac{1}{4}\sin 3\theta$, this gives
\begin{align}
\frac{\partial^2 y_1}{\partial T_0^2}+y_1 = \left(2\frac{\partial a}{\partial T_1}+\frac{3}{4} a^3\right)\sin\theta + 2a\frac{\partial \varphi}{\partial T_1}\cos\theta - \frac{a^3}{4}\sin 3\theta
\end{align}
and to suppress the resonance that would generate secular terms we put
\begin{align}
\frac{\partial a}{\partial T_1} = -\frac{3}{8}a^3 \quad\textrm{and}\qquad \frac{\partial\varphi}{\partial T_1}=0\>. \label{525}
\end{align}
Then $y_1 = \frac{a^3}{32}\sin 3\theta$ solves this equation and has $y_1(0)=0$, which does not disturb the initial condition $y_0(0)=a_0$, although since $\sfrac{dy_1}{dT_0}=\sfrac{3a^3}{32}\cos3\theta$ the derivative of $y_0+\ensuremath{\varepsilon} y_1$ will differ by $O(\ensuremath{\varepsilon})$ from zero at $T_0=0$. This does not matter and we may adjust this by choice of initial conditions for $\varphi$, later.
The $O(\ensuremath{\varepsilon}^2)$ term is somewhat finicky, being
\begin{multline}
\frac{\partial^2 y_2}{\partial T_0^2}+y_2 = -2\frac{\partial^2 y_0}{\partial T_0\partial T_2} -2\frac{\partial^2 y_1}{\partial T_0\partial T_1} \\
-3\left(\frac{\partial y_0}{\partial T_0}\right)^2 \left(\frac{\partial y_0}{\partial T_1}+\frac{\partial y_1}{\partial T_0}\right) - \frac{\partial^2 y_0}{\partial T_1^2}-3\frac{\partial y_0}{\partial T_0}
\end{multline}
where the last term came from the $3\ensuremath{\varepsilon}^2\dot{y}$ term. Proceeding as before, and using $\partial\varphi/\partial T_1=0$ and $\sfrac{\partial a}{\partial T_1}=-\sfrac{3}{8}\>a^3$ as well as some other trigonometric identities, we find the right-hand side can be written as
\begin{align}
\left(2\frac{\partial a}{\partial T_2}+3a\right)\sin\theta+\left(2a\frac{\partial\varphi}{\partial T_2}-\frac{9}{128}a^5\right)\cos\theta-\frac{27}{128}a^5\cos3\theta+\frac{9}{128}a^5\cos5\theta\>.
\end{align}
Again setting the coefficients of $\sin\theta$ and $\cos\theta$ to zero to prevent resonance we have
\begin{align}
\frac{\partial a}{\partial T_2}=-\frac{3}{2}a \label{528}
\end{align}
and
\begin{align}
\frac{\partial \varphi}{\partial T_2} = \frac{9}{256}a^4\qquad (a\neq0).
\end{align}
This leaves
\begin{align}
y_2= \frac{27}{1024}a^5\cos3\theta - \frac{3 a^5}{1024}\cos5\theta
\end{align}
again setting the homogeneous part to zero.
Now comes a bit of multiple scales magic: instead of solving equations \eqref{525} and \eqref{528} in sequence, as would be usual, we write
\begin{align}
\frac{da}{dt} &= \frac{\partial a}{\partial T_0} + \ensuremath{\varepsilon} \frac{\partial a}{\partial T_1} + \ensuremath{\varepsilon}^2\frac{\partial a}{\partial T_2}
= 0 + \ensuremath{\varepsilon}\left(-\frac{3}{8}a^3\right) + \ensuremath{\varepsilon}^2\left(-\frac{3}{2}a\right) \nonumber \\
&= -\frac{3}{8}\ensuremath{\varepsilon} a(a^2+4\ensuremath{\varepsilon})\>. \label{magic}
\end{align}
Using $a=2R$ this is equation (6.50) in \cite{Omalley(2014)}. Similarly
\begin{align}
\frac{d\varphi}{dt} &= \ensuremath{\varepsilon} \frac{\partial\varphi}{\partial T_1}+\ensuremath{\varepsilon}^2 \frac{\partial\varphi}{\partial T_2}
= 0+\ensuremath{\varepsilon}^2 \frac{9}{256} a^4 \label{moremagic}
\end{align}
and once $a$ has been identified, $\varphi$ can be found by quadrature. Solving \eqref{magic} and \eqref{moremagic} by Maple,
\begin{align}
a = \frac{\sqrt{\ensuremath{\varepsilon}} a_0}{\displaystyle \sqrt{\ensuremath{\varepsilon} e^{3\ensuremath{\varepsilon}^2 t}+\frac{a_0^2}{4}(e^{3\ensuremath{\varepsilon}^2 t}-1)}} = 2\frac{\sqrt{\ensuremath{\varepsilon}} a_0}{\sqrt{u}}
\end{align}
and
\begin{align}
\varphi = -\frac{3}{16}\ensuremath{\varepsilon}^2\ln u + \frac{9}{16}\ensuremath{\varepsilon}^4 t-\frac{3}{16}\frac{\ensuremath{\varepsilon}^2a_0^2}{u}
\end{align}
where $u=4\ensuremath{\varepsilon} e^{3\ensuremath{\varepsilon}^2 t}+a_0^2(e^{3\ensuremath{\varepsilon}^2 t}-1)$. The residual is (again by Maple)
\begin{align}
\footnotesize \ensuremath{\varepsilon}^3\left( \frac{9}{16}a_0^3\cos3t+a_0^7\left( -\frac{351}{4096}\sin t - \frac{9}{512} \sin 7t+\frac{333}{4096}\sin 3t + \frac{459}{4096}\sin 5t\right)\right)+O(\ensuremath{\varepsilon}^4)
\end{align}
and there is no secularity visible in this term.
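Before trusting the residual computation, we can check numerically (a Python sketch, standard library only, derivatives by central differences) that the closed forms for $a$ and $\varphi$ really do satisfy the modulation equations $\sfrac{da}{dt}=-\sfrac{3}{8}\ensuremath{\varepsilon} a(a^2+4\ensuremath{\varepsilon})$ and $\sfrac{d\varphi}{dt}=\sfrac{9}{256}\ensuremath{\varepsilon}^2a^4$:

```python
import math

eps, a0, h = 0.1, 1.0, 1e-5
u   = lambda t: 4 * eps * math.exp(3 * eps**2 * t) + a0**2 * (math.exp(3 * eps**2 * t) - 1)
a   = lambda t: 2 * math.sqrt(eps) * a0 / math.sqrt(u(t))
phi = lambda t: (-3 * eps**2 * math.log(u(t)) / 16 + 9 * eps**4 * t / 16
                 - 3 * eps**2 * a0**2 / (16 * u(t)))

t = 2.0
da   = (a(t + h) - a(t - h)) / (2 * h)      # central-difference derivatives
dphi = (phi(t + h) - phi(t - h)) / (2 * h)
err_a   = da + 3 * eps * a(t) * (a(t)**2 + 4 * eps) / 8   # residual in the da/dt equation
err_phi = dphi - 9 * eps**2 * a(t)**4 / 256               # residual in the dphi/dt equation
print(err_a, err_phi)  # both at finite-difference noise level
```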
It is important to note that the construction of the equation \eqref{magic} for $a(t)$ required both $\sfrac{\partial a}{\partial T_1}$ and $\sfrac{\partial a}{\partial T_2}$. Either one alone gives misleading or inconsistent answers. While it may be obvious to an expert that both terms must be used at once, the situation is somewhat unusual and a novice or casual user of perturbation methods may well wish reassurance. (We did!) Computing (and plotting) the residual $\Delta=\ddot{z}+z+\ensuremath{\varepsilon}(\dot{z})^3+3\ensuremath{\varepsilon}^2\dot{z}$ does just that (see figure \ref{YuResidual}).
\begin{figure}
\centering
\includegraphics[width=.55\textwidth]{YuResidual.png}
\caption{The residual $|\Delta_3|$ divided by $\ensuremath{\varepsilon}^3a$, with $\ensuremath{\varepsilon}=0.1$, where $a=O(e^{-\sfrac{3}{2}\>\ensuremath{\varepsilon}^2 t})$, on $0\leq t\leq \sfrac{10\mathrm{ln}(10)}{\ensuremath{\varepsilon}^2}$ (at which point $a=10^{-15}$). We see that $|\sfrac{\Delta_3}{\ensuremath{\varepsilon}^3a}|<1$ on this entire interval.}
\label{YuResidual}
\end{figure}
It is simple to verify that, say, for $\ensuremath{\varepsilon}=1/100$, $|\Delta|<\ensuremath{\varepsilon}^3a$ on $0<t<10^5\pi$.
Notice that $a\sim O(e^{-\sfrac{3}{2}\>\ensuremath{\varepsilon}^2 t})$ and $e^{-\sfrac{3}{2}\cdot 10^{-4}\cdot 10^5\cdot\pi}=e^{-15\pi} \doteq 3\cdot 10^{-21}$ by the end of this range. The method of multiple scales has thus produced $z$, the exact solution of an equation uniformly and relatively near to the original equation. In trigonometric form,
\begin{multline}
z = a\cos(t+\varphi)+\ensuremath{\varepsilon}\frac{a^3}{32}\cos(3(t+\varphi)) \\
+ \ensuremath{\varepsilon}^2\left(\frac{27}{1024}a^5\cos(3(t+\varphi))
-\frac{3}{1024}a^5\cos(5(t+\varphi)) \right) \label{zeqn}
\end{multline}
and $a$ and $\varphi$ are as in equations \eqref{magic} and \eqref{moremagic}. Note that $\varphi$ asymptotically approaches a constant. Note that the trigonometric solution we have demonstrated here to be correct, which was derived for us by our colleague Pei Yu, appears to differ from that given in \cite{Omalley(2014)}, which is
\begin{align}
y= Ae^{it} + \ensuremath{\varepsilon} Be^{3it} + \ensuremath{\varepsilon}^2 Ce^{5it}+\cdots
\end{align}
where (with $\tau=\ensuremath{\varepsilon} t$)
\begin{align}
C\sim \frac{3}{64}A^5+\cdots \qquad \textrm{and}\qquad B\sim -\frac{A^3}{8}(i+\frac{45}{8}\ensuremath{\varepsilon}|A|^2+\cdots)
\end{align}
and, if $A=Re^{i\varphi}$,
\begin{align}
\frac{dR}{d\tau} = -\frac{3}{2}(R^3+\ensuremath{\varepsilon} R+\cdots)
\qquad \textrm{and}\qquad
\frac{d\varphi}{d\tau} = -\frac{3}{2}R^2 (1+\frac{3\ensuremath{\varepsilon}}{8}R^2+\cdots)
\end{align}
Of course with the trigonometric form $y=a\cos(t+\varphi)$, the equivalent complex form is
\begin{align}
y &= a \left( \frac{e^{it+i\varphi}+ e^{-it-i\varphi}}{2}\right)
= \frac{a}{2}e^{i\varphi}e^{it}+c.c.
\end{align}
and so $R=\sfrac{a}{2}$. As expected, equation (6.50) in \cite{Omalley(2014)} becomes
\begin{align}
\frac{d}{d\tau}\left(\frac{a}{2}\right) = -\frac{3}{2}\,\frac{a}{2}\left(\frac{a^2}{4}+\ensuremath{\varepsilon}\right)
\end{align}
or, alternatively,
\begin{align}
\frac{da}{dt} = -\frac{3}{8}\ensuremath{\varepsilon} a(a^2+4\ensuremath{\varepsilon})
\end{align}
which agrees with that computed for us by Pei Yu. However, O'Malley's equation (6.48) gives
\begin{align}
Ce^{5it} &= \frac{3}{64}A^5 e^{5it} = \frac{3}{64}R^5e^{5i\theta} = \frac{3}{2048}a^5 e^{5i\theta}\>,
\end{align}
so that
\begin{align}
Ce^{5it}+c.c. = \frac{3}{1024}a^5\cos5\theta\>,
\end{align}
whereas Pei Yu has $-\sfrac{3}{1024}$. As demonstrated by the residual in figure \ref{YuResidual}, Pei Yu is correct. Well, sign errors are trivial enough.
More differences occur for $B$, however. The $-\sfrac{A^3}{8} \> ie^{3it}$ term becomes $\sfrac{a^3}{32}\>\cos 3\theta$, as expected, but $-\sfrac{45}{64}A^3\cdot |A|^2e^{3it}+c.c.$ becomes $-\sfrac{45}{32}\sfrac{a^5}{32}\>\cos3\theta = -\sfrac{45}{1024}\>a^5\cos3\theta$, not $\sfrac{27}{1024}\>a^5\cos3\theta$. Thus we believe there has been an arithmetic error in \cite{Omalley(2014)}. This is also present in \cite{Omalley(2010)}. Similarly, we believe the $\sfrac{d\varphi}{dt}$ equation there is wrong.
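The rational bookkeeping in these complex-to-trigonometric conversions is easy to check independently. The following short Python snippet (an illustrative stand-in for the Maple computations used elsewhere in this paper) confirms both coefficient conversions with exact arithmetic; the factor $2$ accounts for adding the complex conjugate, and $R^5=(a/2)^5$ contributes $1/32$:

```python
from fractions import Fraction

R5 = Fraction(1, 2)**5   # R^5 = (a/2)^5 = a^5/32
cc = 2                   # adding the complex conjugate doubles the real part

# C e^{5it} + c.c. with C = (3/64) A^5 and A = (a/2) e^{i phi}
assert Fraction(3, 64) * R5 * cc == Fraction(3, 1024)

# the -(45/64) A^3 |A|^2 e^{3it} + c.c. term
assert Fraction(-45, 64) * R5 * cc == Fraction(-45, 1024)
```

Exact rational arithmetic leaves no room for the rounding or transcription slips that such hand conversions invite.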
Arithmetic errors in perturbation solutions are, obviously, a constant hazard even for experts. We do not point out this error (or the other errors highlighted in this paper) in a spirit of glee---goodness knows we've made our own share. No, the reason we do so is to emphasize the value of a separate, independent check using the residual. Because we have done so here, we are certain that equation \eqref{zeqn} is correct: it produces a residual that is uniformly $O(\ensuremath{\varepsilon}^3)$ for bounded time, and which is $O(\ensuremath{\varepsilon}^{9/2}e^{-\sfrac{3}{2}\>\ensuremath{\varepsilon}^2 t})$ as $t\to \infty$. (We do not know why there is extra accuracy for large times).
Finally, we remark that the difficulty this example presents for the method of multiple scales is that equation \eqref{magic} cannot itself be solved by perturbation methods (or, at least, we could not do it). One has to use all three terms at once; the fact that this works is amply demonstrated afterwards.
Indeed the whole multiple scales procedure based on equation \eqref{msformalism} is really very strange when you think about it, but it can be justified afterwards. It really doesn't matter how we find equation \eqref{zeqn}. Once we have done so, verifying that it is the exact solution of a small perturbation of the original equation is quite straightforward. The implementation is described in the following Maple code:
\lstinputlisting{Morrison}
\subsection{The lengthening pendulum}
As an interesting example with a genuine secular term, \cite{Boas(1966)} discusses the lengthening pendulum. There, Boas solves the linearized equation exactly in terms of Bessel functions. We use the model here as an example of a perturbation solution in a physical context. The original Lagrangian leads to
\begin{align}
\frac{d}{dt} \left(m\ell^2\frac{d\theta}{dt}\right)+mg\ell\sin\theta =0
\end{align}
(having already neglected any system damping). The length of the pendulum at time $t$ is modelled as $\ell =\ell_0+vt$, and implicitly $v$ is small compared to the oscillatory speed $\sfrac{d\theta}{dt}$ (else why would it be a pendulum at all?). The presence of $\sin\theta$ makes this a nonlinear problem; when $v=0$ there is an analytic solution using elliptic functions \cite[chap.~4]{Lawden(2013)}.
We \textsl{could} do a perturbation solution about that analytic solution; indeed there is computer algebra code to do so automatically \cite{Rand(2012)}. For the purpose of this illustration, however, we make the same small-amplitude linearization that Boas did and replace $\sin\theta$ by $\theta$. Dividing the resulting equation by $\ell_0$, putting $\ensuremath{\varepsilon}=\sfrac{v}{\ell_0\omega}$ with $\omega=\sqrt{\sfrac{g}{\ell_0}}$ and rescaling time to $\tau=\omega t$, we get
\begin{align}
(1+\ensuremath{\varepsilon}\tau)\frac{d^2\theta}{d\tau^2}+2\ensuremath{\varepsilon}\frac{d\theta}{d\tau}+\theta=0\>.
\end{align}
This supposes, of course, that the pin holding the base of the pendulum is held perfectly still (and is frictionless besides).
Computing a regular perturbation approximation
\begin{align}
z_{\textrm{reg}} = \sum_{k=0}^N \theta_k(\tau)\ensuremath{\varepsilon}^k
\end{align}
is straightforward, for any reasonable $N$, by using computer algebra. For instance, with $N=1$ we have
\begin{align}
z_{\textrm{reg}} = \cos\tau + \ensuremath{\varepsilon}\left(\frac{3}{4}\sin\tau+\frac{\tau^2}{4}\sin\tau-\frac{3}{4}\tau\cos\tau\right)\>.
\end{align}
This has residual
\begin{align}
\Delta_{\textrm{reg}} &= (1+\ensuremath{\varepsilon}\tau)z''_{\textrm{reg}}+2\ensuremath{\varepsilon} z'_{\textrm{reg}}+z_{\textrm{reg}}\\
&= -\frac{\ensuremath{\varepsilon}^2}{4}\left(\tau^3\sin\tau-9\tau^2\cos\tau-15\tau\sin\tau\right)
\end{align}
also computed straightforwardly with computer algebra. By experiment with various $N$ we find that the residuals are always of $O(\ensuremath{\varepsilon}^{N+1})$ but contain powers of $\tau$, as high as $\tau^{2N+1}$. This naturally raises the question of just when this can be considered ``small.'' We thus have the \textsl{exact} solution of
\begin{align}
(1+\ensuremath{\varepsilon}\tau)\frac{d^2\theta}{d\tau^2}+2\ensuremath{\varepsilon}\frac{d\theta}{d\tau}+\theta = \Delta_{\textrm{reg}}(\tau)=O(\ensuremath{\varepsilon}^{N+1}\tau^{2N+1})
\end{align}
and it seems clear that if $\ensuremath{\varepsilon}^{N+1}\tau^{2N+1}$ is to be considered small it should at least be smaller than $\ensuremath{\varepsilon}\tau$, which appears on the left-hand side of the equation. [$\sfrac{d^2}{d\tau^2}$ is $-\cos\tau$ to leading order, so this is periodically $O(1)$.] This means $\ensuremath{\varepsilon}^N\tau^{2N}$ should be smaller than $1$, which forces $\tau\leq T$ where $T=O(\ensuremath{\varepsilon}^{-q})$ with $q<\frac{1}{2}$. That is, this regular perturbation solution is valid only on a limited range of $\tau$, namely $\tau=O(\ensuremath{\varepsilon}^{-\sfrac{1}{2}})$.
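Both the $N=1$ solution and its residual above are easy to reproduce independently. A minimal check in Python's SymPy (an illustrative alternative to the Maple scripts used in this paper):

```python
import sympy as sp

tau, eps = sp.symbols('tau epsilon', positive=True)

# the N = 1 regular perturbation solution of the lengthening pendulum
z = sp.cos(tau) + eps*(sp.Rational(3, 4)*sp.sin(tau)
                       + tau**2/4*sp.sin(tau)
                       - sp.Rational(3, 4)*tau*sp.cos(tau))

# residual in (1 + eps*tau) z'' + 2 eps z' + z
Delta = (1 + eps*tau)*z.diff(tau, 2) + 2*eps*z.diff(tau) + z

claimed = -eps**2/4*(tau**3*sp.sin(tau) - 9*tau**2*sp.cos(tau)
                     - 15*tau*sp.sin(tau))
assert sp.simplify(Delta - claimed) == 0
```

The assertion confirms that the $N=1$ residual is exactly the displayed $O(\ensuremath{\varepsilon}^2)$ expression, whose highest power of $\tau$ is indeed $\tau^{2N+1}=\tau^3$.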
Of course, the original equation contains a term $\ensuremath{\varepsilon}\tau$, and this itself is small only if $\tau\leq T_{\max}$ with $T_{\max}=O(\ensuremath{\varepsilon}^{-1+\delta})$ for $\delta>0$. Notice that we have discovered this limitation of the regular perturbation solution without reference to the `exact' Bessel function solution of this linearized equation. Notice also that $\Delta_{\textrm{reg}}$ can be interpreted as a small forcing term; a vibration of the pin holding the pendulum, say. Knowing that, say, such physical vibrations, perhaps caused by trucks driving past the laboratory holding the pendulum, are bounded in size by a certain amount, can help to decide what $N$ to take, and over which $\tau$-interval the resulting solution is valid.
Of course, one might be interested in the forward error $\theta-z_{\textrm{reg}}$; but then one should be interested in the forward errors caused by neglecting physical vibrations (e.g. of trucks passing by) and the same theory---what a numerical analyst calls a condition number---can be used for both.
But before we pursue that farther, let us first try to improve the perturbation solution. We use the method of multiple scales or, equivalently but more easily in this case, the renormalization group method \cite{Kirkinis(2012)}. For a linear problem the latter consists of taking the regular perturbation solution, replacing $\cos\tau$ by $\sfrac{(e^{i\tau}+e^{-i\tau})}{2}$ and $\sin\tau$ by $\sfrac{(e^{i\tau}-e^{-i\tau})}{2i}$, and gathering up the result as $\sfrac{1}{2}\>A(\tau;\ensuremath{\varepsilon})e^{i\tau}+\sfrac{1}{2}\>\bar{A}(\tau;\ensuremath{\varepsilon})e^{-i\tau}$. One then writes $A(\tau;\ensuremath{\varepsilon}) = e^{L(\tau;\ensuremath{\varepsilon})}+O(\ensuremath{\varepsilon}^{N+1})$; that is, one takes the logarithm of the $\ensuremath{\varepsilon}$-series $A(\tau;\ensuremath{\varepsilon})=A_0(\tau)+\ensuremath{\varepsilon} A_1(\tau)+\cdots+\ensuremath{\varepsilon}^NA_N(\tau)+O(\ensuremath{\varepsilon}^{N+1})$, a straightforward exercise (especially in a computer algebra system). Rewriting $\sfrac{1}{2}\>e^{L(\tau;\ensuremath{\varepsilon})+i\tau}+$ c.c. in real trigonometric form again, if one likes, gives an excellent result. If $N=1$, we get
\begin{align}
\tilde{z}_{\textrm{renorm}}=e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\cos\left(\frac{3}{4}\ensuremath{\varepsilon}+\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4}\right)
\end{align}
which contains an irrelevant phase shift $\frac{3}{4}\ensuremath{\varepsilon}$, which we remove as a distraction, to get
\begin{align}
z_{\textrm{renorm}} = e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\cos\left(\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4}\right)\>.
\end{align}
This has residual:
\begin{align}
\Delta_{\textrm{renorm}} &= (1+\ensuremath{\varepsilon}\tau)\frac{d^2z_{\textrm{renorm}}}{d\tau^2} +2\ensuremath{\varepsilon}\frac{dz_{\textrm{renorm}}}{d\tau}+z_{\textrm{renorm}} \nonumber\\
&= \ensuremath{\varepsilon}^2e^{-\frac{3}{4}\ensuremath{\varepsilon}\tau} \left( \left(\frac{3}{4}\tau^2-\frac{15}{16}\right)\cos\left(\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4}\right)+\frac{9}{4}\tau\sin\left(\tau-\ensuremath{\varepsilon}\frac{\tau^2}{4}\right)\right)+O(\ensuremath{\varepsilon}^3\tau^3e^{-\frac{3}{4}\ensuremath{\varepsilon}\tau})
\>.
\end{align}
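This residual can be recomputed exactly by computer algebra, which also yields the $O(\ensuremath{\varepsilon}^3)$ remainder in closed form. A sketch in SymPy (Python; an illustrative alternative to the Maple scripts used in this paper):

```python
import sympy as sp

tau, eps = sp.symbols('tau epsilon', positive=True)
psi = tau - eps*tau**2/4
g = sp.exp(-sp.Rational(3, 4)*eps*tau)
z = g*sp.cos(psi)   # the renormalized solution

Delta = (1 + eps*tau)*z.diff(tau, 2) + 2*eps*z.diff(tau) + z

# leading O(eps^2) part of the residual
lead = eps**2*g*((sp.Rational(3, 4)*tau**2 - sp.Rational(15, 16))*sp.cos(psi)
                 + sp.Rational(9, 4)*tau*sp.sin(psi))
# the remainder is exactly this O(eps^3 tau^3) expression
rest = eps**3*g*((sp.Rational(9, 16)*tau - tau**3/4)*sp.cos(psi)
                 - sp.Rational(3, 4)*tau**2*sp.sin(psi))
assert sp.simplify(sp.expand(Delta - lead - rest)) == 0
```

The check pins down both the coefficients and the signs of the $O(\ensuremath{\varepsilon}^2)$ term, which is exactly the kind of slip such hand computations invite.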
By inspection, we see that this is superior in several ways to the residual from the regular perturbation method. First, it contains the damping factor $e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}$ just as the computed solution does; this residual will be small compared even to the decaying solution. Second, at order $N$ it contains only $\tau^{N+1}$ as its highest power of $\tau$, not $\tau^{2N+1}$. This will be small compared to $\ensuremath{\varepsilon}\tau$ for times $\tau< T$ with $T=O(\ensuremath{\varepsilon}^{-1+\delta})$ for \emph{any} $\delta>0$; that is, this perturbation solution will provide a good solution so long as its fundamental assumption, namely that the $\ensuremath{\varepsilon}\tau$ term in the original equation can be considered `small', holds.
Note that again the quality of this perturbation solution has been judged without reference to the exact solution, and quite independently of whatever assumptions are usually made to argue for multiple scales solutions (such as boundedness of $\theta$) or the renormalization group method.
Thus, we conclude that the renormalization group method gives a superior solution in this case, and this judgement was made possible by computing the residual. We have used the following Maple implementation:
\lstinputlisting{LengtheningPendulum}
See figure \ref{pendulum}.
\begin{figure}
\includegraphics[width=.45\textwidth]{LengtheningSols.png}\quad
\includegraphics[width=.45\textwidth]{RenormRes.png}
\caption{On the left, solutions to the lengthening pendulum equation (the renormalized solution is the solid line). On the right, residual of the renormalized solution, which is orders of magnitudes smaller than that of the regular expansion.}
\label{pendulum}
\end{figure}
Note that this renormalized residual contains terms of the form $(\ensuremath{\varepsilon}\tau)^k e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon} \tau}$. No matter what order we compute to, these have maxima $O(1)$ when $\tau=O(\sfrac{1}{\ensuremath{\varepsilon}})$, but as noted previously the fundamental assumption of the perturbation has been violated by that large a $\tau$.
\paragraph{Optimal backward error again} Now, one further refinement is possible. We may look for an $O(\ensuremath{\varepsilon}^2)$ perturbation of the lengthening of the pendulum, that explains part of this computed residual! That is, we look for $p(t)$, say, so that
\begin{align}
\Delta_2 := (1+\ensuremath{\varepsilon}\tau+\ensuremath{\varepsilon}^2 p(\tau)) z_{\textrm{renorm}}'' + 2(\ensuremath{\varepsilon}+\ensuremath{\varepsilon}^2 p'(\tau))z_{\textrm{renorm}}'+z_{\textrm{renorm}} \label{renormeqs}
\end{align}
has only \textsl{smaller} terms in it than $\Delta_{\textrm{renorm}}$. Note the correlated changes, $\ensuremath{\varepsilon}^2 p(\tau)$ and $\ensuremath{\varepsilon}^2 p'(\tau)$.
At this point, we don't know if this is possible or useful, but it's a good thing to try. In numerical analysis terms, we are trying to find a structured backward error for this computed solution.
The procedure for identifying $p(\tau)$ in equation \eqref{renormeqs} is straightforward. We put $p(\tau)=a_0+a_1\tau+a_2\tau^2$ with unknown coefficients, compute $\Delta_2$, and try to choose $a_0$, $a_1$, and $a_2$ so as to make as many coefficients of powers of $\ensuremath{\varepsilon}$ in $\Delta_2$ zero as we can. When we do this, we find that
\begin{align}
p = -\frac{15}{16}+\frac{3}{4}\tau^2
\end{align}
makes
\begin{align}
\Delta_{\textrm{mod}} &= \left(1+\ensuremath{\varepsilon}\tau+\ensuremath{\varepsilon}^2\left(\frac{3}{4}\tau^2-\frac{15}{16}\right)\right)z_{\textrm{renorm}}'' + 2\left(\ensuremath{\varepsilon}+\ensuremath{\varepsilon}^2\left(\frac{3}{2}\tau\right)\right) z_{\textrm{renorm}}' + z_{\textrm{renorm}}\\
&= \ensuremath{\varepsilon}^2e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\left(-\frac{3}{4}\tau\sin\left(\tau-\sfrac{1}{4}\>\ensuremath{\varepsilon} \tau^2\right)\right) + O(\ensuremath{\varepsilon}^3\tau^3 e^{-\sfrac{3\ensuremath{\varepsilon}\tau}{4}})\>.
\end{align}
This is $O(\ensuremath{\varepsilon}^2\tau e^{-\sfrac{3\ensuremath{\varepsilon}\tau}{4}})$ instead of $O(\ensuremath{\varepsilon}^2\tau^2 e^{-\sfrac{3\ensuremath{\varepsilon}\tau}{4}})$, and therefore smaller. This \textsl{interprets} the largest term of the original residual, the $O(\ensuremath{\varepsilon}^2\tau^2)$ term, as a perturbation in the lengthening of the pendulum. The gain is one of interpretation; the solution is the same, but the equation it solves exactly is slightly different. For $O(\ensuremath{\varepsilon}^N\tau^N)$ solutions the modifications will probably be similar.
Now, if $z\doteq\cos\tau$ then $z'\doteq-\sin\tau$, so that $\Delta_{\textrm{mod}}\doteq \frac{3}{4}\ensuremath{\varepsilon}^2\tau\, z_{\textrm{renorm}}'$; so if we include a damping term
\begin{align}
\left( -\ensuremath{\varepsilon}^2\cdot\frac{3}{4}\cdot\tau \theta' \right)
\end{align}
in the model, we have
\begin{align}
\left(1+\ensuremath{\varepsilon}\tau+\ensuremath{\varepsilon}^2\left(\frac{3}{4}\tau^2-\frac{15}{16}\right)\right) z_{\textrm{renorm}}'' + 2\left(\ensuremath{\varepsilon}+\ensuremath{\varepsilon}^2\left(\frac{3}{2}\tau\right)-\ensuremath{\varepsilon}^2\frac{3}{8}\tau\right)z_{\textrm{renorm}}' + z_{\textrm{renorm}} \nonumber\\
= O\left(\ensuremath{\varepsilon}^3\tau^3e^{-\sfrac{3}{4}\>\ensuremath{\varepsilon}\tau}\right)
\end{align}
and \textsl{all} of the leading terms of the residual have been ``explained'' in the physical context.
If the damping term had been negative, we might have rejected it; having it increase with time also isn't very physical (although one might imagine heating effects or some such).
\subsection{Vanishing lag delay DE}
For another example we consider an expansion that ``everybody knows'' can be problematic. We take the DDE
\begin{align}
\dot{y}(t)+ay(t-\ensuremath{\varepsilon})+b y(t)=0
\end{align}
from \cite[p.~52]{Bellman(1972)} as a simple instance. Expanding $y(t-\ensuremath{\varepsilon})=y(t)-\dot{y}(t)\ensuremath{\varepsilon}+O(\ensuremath{\varepsilon}^2)$ we get
\begin{align}
(1-a\ensuremath{\varepsilon})\dot{y}(t) + (b+a)y(t)=0
\end{align}
by ignoring $O(\ensuremath{\varepsilon}^2)$ terms, with solution
\begin{align}
z(t) = \exp\left(-\frac{b+a}{1-a\ensuremath{\varepsilon}}\,t\right)u_0
\end{align}
if a simple initial condition $y(0)=u_0$ is given. Direct computation of the residual shows
\begin{align}
\Delta &= \dot{z} + az(t-\ensuremath{\varepsilon})+bz(t)\\
&= O(\ensuremath{\varepsilon}^2)z(t)
\end{align}
uniformly for all $t$; in other words, our computed solution $z(t)$ exactly solves
\begin{align}
\dot{y} + ay(t-\ensuremath{\varepsilon}) + (b+O(\ensuremath{\varepsilon}^2))y(t)=0
\end{align}
which is an equation of the same type as the original, with only $O(\ensuremath{\varepsilon}^2)$ perturbed coefficients. The initial history for the DDE should be prescribed on $-\ensuremath{\varepsilon}\leq t<0$ as well as the initial condition, and that's an issue, but often that history is an issue anyway. So, in this case, contrary to the usual vague folklore that Taylor series expansion in the vanishing lag ``can lead to difficulties'', we have a successful solution and we know that it's successful.
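The uniformity of this backward error is easy to see directly: for the exponential $z(t)$ the ratio $\Delta/z$ is a constant in $t$, computable in a few lines. An illustrative check in Python (the bound used below follows from $\lambda(1-a\ensuremath{\varepsilon})=a+b$, which gives $\Delta/z = a(e^{\lambda\ensuremath{\varepsilon}}-1-\lambda\ensuremath{\varepsilon})$ exactly):

```python
import math

def residual_ratio(a, b, eps):
    """Exact ratio Delta/z for z(t) = u0*exp(-lam*t), lam = (a+b)/(1-a*eps),
    in the DDE y'(t) + a*y(t-eps) + b*y(t) = 0; constant in t."""
    lam = (a + b)/(1 - a*eps)
    return -lam + a*math.exp(lam*eps) + b

# the backward error is uniformly O(eps^2): ratio = a*((eps*lam)^2/2 + ...)
for eps in (1e-2, 1e-3, 1e-4):
    lam = 3.0/(1 - eps)                  # a = 1, b = 2
    r = residual_ratio(1.0, 2.0, eps)
    assert 0 < r < (eps*lam)**2          # positive and O(eps^2)
    assert abs(r - (eps*lam)**2/2) < (eps*lam)**3
```

Because the ratio is constant, smallness at one $t$ is smallness for all $t$, which is exactly the uniformity claimed above.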
We now need to assess the sensitivity of the problem to small changes in $b$, but we all know that has to be done anyway, even if we often ignore it.
Another example of Bellman's on the same page, $\ddot{y}(t)+ay(t-\ensuremath{\varepsilon})=0$, can be treated in the same manner. Bellman cautions there that seemingly similar approaches can lead to singular perturbation problems, which can indeed lead to difficulties, but even there a residual/backward error analysis can help to navigate those difficulties.
\subsection{Artificial viscosity in a nonlinear wave equation}
Suppose we are trying to understand a particular numerical solution, by the method of lines, of
\begin{align}
u_t + uu_x = 0 \label{waveeq}
\end{align}
with initial condition $u(0,x)=e^{i\pi x}$ on $-1\leq x\leq 1$ and periodic boundary conditions. Suppose that we use the method of modified equations (see, for example, \cite{Griffiths(1986)}, \cite{Warming(1974)}, or \cite[chap~12]{CorlessFillion(2013)}) to find a perturbed equation that the numerical solution more nearly solves. Suppose also that we analyze the same numerical method applied to the divergence form
\begin{align}
u_t + \frac{1}{2}(u^2)_x=0\>. \label{waveeq2}
\end{align}
Finally, suppose that the method in question uses the backward difference $f'(x) \approx \sfrac{(f(x)-f(x-2\ensuremath{\varepsilon}))}{2\ensuremath{\varepsilon}}$ (the factor 2 is for convenience) on an equally-spaced $x$-grid, so $\Delta x=-2\ensuremath{\varepsilon}$.
The method of modified equations gives
\begin{align}
u_t + uu_x -\ensuremath{\varepsilon}(uu_{xx})+O(\ensuremath{\varepsilon}^2)=0
\end{align}
for equation \eqref{waveeq} and
\begin{align}
u_t+uu_x -\ensuremath{\varepsilon} (u_x^2 + uu_{xx})+O(\ensuremath{\varepsilon}^2) = 0
\end{align}
for equation \eqref{waveeq2}.
The outer solution to each of these equations is just the reference solution to both equations \eqref{waveeq} and \eqref{waveeq2}, namely,
\begin{align}
u = \frac{1}{i\pi t} W(i\pi t e^{i\pi x})
\end{align}
where $W(z)$ is the principal branch of the Lambert $W$ function, which satisfies $W(z) e^{W(z)}=z$. See \cite{Corless(1996)} for more on the Lambert $W$ function. That $u$ is the solution for this initial condition was first noticed by \cite{weideman(2003)}.
The residuals of these outer solutions are just $-\ensuremath{\varepsilon} uu_{xx}$ and $-\ensuremath{\varepsilon}(u_x^2+uu_{xx})$ respectively. Simplifying, and again suppressing the argument of $W$ for tidiness, we find that
\begin{align}
-\ensuremath{\varepsilon} uu_{xx} = -\frac{\ensuremath{\varepsilon} W^2}{t^2(1+W)^3}
\end{align}
and
\begin{align}
-\ensuremath{\varepsilon}(u_x^2 + uu_{xx}) = -\frac{\ensuremath{\varepsilon} W^2(2+W)}{t^2(1+W)^3}
\end{align}
where $W$ is short for $W(i\pi t e^{i\pi x})$. We see that if $x=\sfrac{1}{2}$ and $t=\sfrac{1}{(\pi e)}$, both of these are singular:
\begin{align}
-\ensuremath{\varepsilon} uu_{xx} \sim -\ensuremath{\varepsilon} \left( \frac{i\pi^2 e^2\sqrt{2}}{4(et\pi-1)^{\sfrac{3}{2}}}+O\left(\frac{1}{et\pi-1}\right)\right)
\end{align}
and
\begin{align}
-\ensuremath{\varepsilon} (u^2_x +uu_{xx}) \sim -\ensuremath{\varepsilon} \left( \frac{i\pi^2e^2\sqrt{2}}{4(et\pi-1)^{\sfrac{3}{2}}} + O\left(\frac{1}{\sqrt{et\pi-1}}\right)\right)\>.
\end{align}
We see that the outer solution makes the residual very large near $x=\sfrac{1}{2}$ as $t\to \sfrac{1}{(\pi e)}^-$ suggesting that the solution of the modified equation---and thus the numerical solution---will depart from the outer solution. Both the original form and the divergence form are predicted to have similar behaviour, and this is confirmed by numerical experiments.
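Both residual formulas, and the fact that $u$ solves the original equation exactly, can be confirmed symbolically. An illustrative check in SymPy (Python; the paper's own computations use Maple), relying on SymPy's \texttt{LambertW} knowing its own derivative:

```python
import sympy as sp

t, x = sp.symbols('t x')
z = sp.I*sp.pi*t*sp.exp(sp.I*sp.pi*x)
W = sp.LambertW(z)
u = W/(sp.I*sp.pi*t)   # the outer (reference) solution

# u is an exact solution of u_t + u u_x = 0
assert sp.simplify(u.diff(t) + u*u.diff(x)) == 0

# residuals of the outer solution in the two modified equations;
# note the denominator (1 + W)^3
assert sp.simplify(u*u.diff(x, 2) - W**2/(t**2*(1 + W)**3)) == 0
assert sp.simplify(u.diff(x)**2 + u*u.diff(x, 2)
                   - W**2*(2 + W)/(t**2*(1 + W)**3)) == 0
```

Everything reduces to rational identities in $W$ and $t$ because $W'(z)=W/(z(1+W))$ and each derivative of $u$ in $x$ brings down a factor $i\pi z$ that cancels.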
We remark that using forward differences instead just changes the sign of $\ensuremath{\varepsilon}$, and given the similarity of $\ensuremath{\varepsilon} uu_{xx}$ to $\ensuremath{\varepsilon} u_{xx}$, we intuit that this will blow up rather quickly, like the backward heat equation, because the exact solution to Burgers' equation $u_t+uu_x=\ensuremath{\varepsilon} u_{xx}$ involves a change of variable to the heat equation \cite[pp.~352-353]{Kevorkian(2013)}. We remark also that this use of the residual is a bit perverse: we here substitute the reference solution into an approximate (reverse-engineered) equation. Some authors do use `residual' or even `defect' in this sense, e.g., \cite{Chiba(2009)}. It only fits our usage because the reference solution to the original equation is just the outer solution of the perturbation problem of interest here.
Finally, we can interpolate the numerical solution using a trigonometric interpolant in $x$, taken in tensor product with the interpolant in $t$ provided by the numerical solver (e.g., \texttt{ode15s} in Matlab). We can then compute the residual $\Delta(t,x) = z_t+zz_x$ in the original equation, and we find that, away from the singularity, it is $O(\ensuremath{\varepsilon})$. If we compute the residual in the modified equation
\begin{align}
\Delta_1(t,x)=z_t+zz_x-\ensuremath{\varepsilon} zz_{xx}
\end{align}
we find that, away from the singularity, it is $O(\ensuremath{\varepsilon}^2)$. This is a more traditional use of residual in a numerical computation, and is done without knowledge of any reference solution. The analogous use we are making for perturbation methods can be understood from this numerical perspective.
\section{Concluding Remarks}
Decades ago, van Dyke had already made the point that, in perturbation theory, ``[t]he possibilities are too diverse to be subject to rules'' \cite[p.~31]{vanDyke(1964)}. Van Dyke was talking about the useful freedom to choose expansion variables artfully, but the same might be said for perturbation methods generally. This paper has attempted (in the face of that observation) to lift a known technique, namely the residual as a backward error, out of numerical analysis and apply it to perturbation theory. The approach is surprisingly useful and clarifies several issues, namely
\begin{itemize}
\item BEA allows one to directly use approximations taken from divergent series in an optimal fashion without appealing to ``rules of thumb'' such as stopping before including the smallest term.
\item BEA allows the justification of removing spurious secular terms, even when true secular terms are present.
\item Not least, residual computation and \emph{a posteriori} BEA makes detection of slips, blunders, and bugs all but certain, as illustrated in our examples.
\item Finally, BEA interprets the computed solution $z$ as the exact solution of a model that is just as good.
\end{itemize}
In this paper we have used BEA to demonstrate the validity of solutions obtained by the iterative method, by Lindstedt's method, by the method of multiple scales, by the renormalization group method, and by matched asymptotic expansions.
We have also successfully used the residual and BEA in many problems not shown here: eigenvalue problems from \cite{Nayfeh(2011)}; an example from \cite{vanDyke(1964)} using the method of strained coordinates; and many more.
The examples here have largely been for algebraic equations and for ODEs, but the method was used to good effect in \cite{Corless(2014)} for a PDE system describing heat transfer between concentric cylinders, with a high-order perturbation series in Rayleigh number. Aside from the amount of computational work required, there is no theoretical obstacle to using the technique for other PDE; indeed the residual of a computed solution $z$ (perturbation solution, in this paper) to an operator equation $\varphi(y;x)=0$ is usually computable: $\Delta = \varphi(z;x)$ and its size (in our case, leading term in the expansion in the gauge functions) easily assessed.
It's remarkable to us that the notion, while present here and there in the literature, is not used more to justify the validity of the perturbation series.
We end with a caution. Of course, BEA is not a panacea. There are problems for which it is not possible. For instance, there may be hidden constraints, something like solvability conditions, that play a crucial role and where the residual tells you nothing. A residual can even be zero and if there are multiple solutions, one needs a way to get the right one.
There are things that can go wrong with this backward error approach. First, the final residual computation might not be independent enough from the computation of $z$, and might repeat the same error. An example is if one correctly solves
\begin{align}
\ddot{y}+y+\ensuremath{\varepsilon} \dot{y}^3+3\ensuremath{\varepsilon}^2\dot{y}=0
\end{align}
and verifies that the residual is small, while \textsl{intending} to solve
\begin{align}
\ddot{y}+y+\ensuremath{\varepsilon} \dot{y}^3-3\ensuremath{\varepsilon}^2\dot{y}=0\>,
\end{align}
i.e., getting the wrong sign on the $\dot{y}$ term, both times. Another thing that can go wrong is to have an error in your independent check but not your solution. This happened to us with 183 instead of 138 in subsection \ref{systems}; the discrepancy alerted us that there \textsl{was} a problem, so this at least was noticeable. A third thing that can go wrong is that you verify the residual is small but forget to check the boundary conditions. A fourth thing that can go wrong is that the residual may be small in an absolute sense but still larger than important terms in the equation---the residual may need to be smaller than you expect, in order to get good qualitative results. A fifth thing is that the residual may be small but of the `wrong character', i.e., be unphysical. Perhaps the method has introduced the equivalent of negative damping, for instance. This point can be very subtle.
A final point is that a good solution needs not just a small backward error, but also information about the sensitivity (or robustness) of the model to physical perturbations. We have not discussed computation of sensitivity, but we emphasize that even if $\Delta\equiv 0$, you still have to do it, because real situations have real perturbations. Nonetheless, we hope that we have convinced you that BEA can be helpful.
\bibliographystyle{plain}
https://arxiv.org/abs/1801.06747

Connectivity of cubical polytopes

Abstract: A cubical polytope is a polytope with all its facets being combinatorially equivalent to cubes. We deal with the connectivity of the graphs of cubical polytopes. We first establish that, for any $d\ge 3$, the graph of a cubical $d$-polytope with minimum degree $\delta$ is $\min\{\delta,2d-2\}$-connected. Second, we show, for any $d\ge 4$, that every minimum separator of cardinality at most $2d-3$ in such a graph consists of all the neighbours of some vertex and that removing the vertices of the separator from the graph leaves exactly two components, with one of them being the vertex itself.

\section{Introduction}
The $k$-dimensional {\it skeleton} of a polytope $P$ is the set of all its faces of dimension at most $k$. The 1-skeleton of $P$ is the {\it graph} $G(P)$ of $P$. We denote by $V(P)$ the vertex set of $P$.
This paper studies the (vertex) connectivity of cubical polytopes; by the connectivity of a polytope we mean the (vertex) connectivity of the graph of the polytope. A {\it cubical} $d$-polytope is a polytope with all its facets being cubes. By a cube we mean any polytope that is combinatorially equivalent to a cube; that is, one whose face lattice is isomorphic to the face lattice of a cube.
Unless otherwise stated, the graph-theoretical notation and terminology follow \cite{Die05} and the polytope-theoretical notation and terminology follow \cite{Zie95}. Moreover, when referring to graph-theoretical properties of a polytope such as degree and connectivity, we mean properties of its graph.
In the three-dimensional world, Euler's formula implies that the graph of a cubical 3-polytope $P$ has $2|V(P)|-4$ edges, and hence its minimum degree is three; the {\it degree} of a vertex records the number of edges incident with the vertex. Besides, every 3-polytope is $3$-connected by Balinski's theorem \cite{Bal61}. Hence the dimension, minimum degree and connectivity of a cubical 3-polytope all coincide.
\begin{theorem}[Balinski {\cite{Bal61}}]\label{thm:Balinski} The graph of a $d$-polytope is $d$-connected.
\end{theorem}
This equality between dimension, minimum degree and connectivity of a cubical polytope no longer holds in higher dimensions. In Blind and Blind's classification of cubical $d$-polytopes where every vertex has degree $d$ or $d+1$ \cite{BliBli98}, the authors exhibited cubical $d$-polytopes with the same graph as the $(d+1)$-cube; for an explicit example, check \cite[Sec.~4]{JosZie00}. More generally, the paper \cite[Sec.~6]{JosZie00} exhibited cubical $d$-polytopes with the same $(\floor{d/2}-1)$-skeleton as the $d'$-cube for every $d'>d$, the so-called {\it neighbourly cubical $d$-polytopes}. And even more generally, Sanyal and Ziegler \cite[p.~422]{SanZie10}, and later Adin, Kalmanovich and Nevo \cite[Sec.~5]{AdiKalNev18}, produced cubical $d$-polytopes with the same $k$-skeleton as the $d'$-cube for every $1\le k\le \floor{d/2}-1$ and every $d'>d$, the so-called {\it $k$-neighbourly cubical $d$-polytopes}. Thus the minimum degree or connectivity of a cubical $d$-polytope for $d\ge 4$ does not necessarily coincide with its dimension; this is what one would expect. However, somewhat surprisingly, we can prove a result connecting the connectivity of a cubical polytope to its minimum degree, regardless of the dimension; this is a vast generalisation of a similar, and well-known, result in the $d$-cube \cite[Prop.~1]{Ram04}; see also \cref{sec:cube-connectivity}.
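For intuition on the cube case: the graph of the $d$-cube has minimum degree $d$ and is exactly $d$-connected. This is easy to confirm by brute force for small $d$; an illustrative Python sketch (not part of the proofs in this paper):

```python
from itertools import combinations

def cube_adj(d):
    """Adjacency of the d-cube graph Q_d: vertices are d-bit strings,
    edges join vertices differing in exactly one coordinate."""
    return {v: [v ^ (1 << i) for i in range(d)] for v in range(2**d)}

def connected(adj, verts):
    """Is the subgraph induced on `verts` connected?"""
    verts = set(verts)
    seen, stack = set(), [next(iter(verts))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(u for u in adj[v] if u in verts and u not in seen)
    return seen == verts

def vertex_connectivity(adj):
    """Smallest k such that deleting some k vertices disconnects the graph."""
    verts = list(adj)
    for k in range(len(verts)):
        for cut in combinations(verts, k):
            rest = [v for v in verts if v not in cut]
            if len(rest) >= 2 and not connected(adj, rest):
                return k
    return len(verts) - 1

for d in (2, 3, 4):
    assert vertex_connectivity(cube_adj(d)) == d
```

For $Q_3$ and $Q_4$ the minimum cuts found this way are exactly the neighbourhoods of single vertices, matching the shape of the minimum separators described in the Connectivity Theorem below.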
\vspace{0.2cm}
Define a {\it separator} of a polytope as a set of vertices disconnecting the graph of the polytope. Let $X$ be a set of vertices in a graph $G$. Denote by $G[X]$ the subgraph of $G$ induced by $X$; that is, the subgraph of $G$ that contains all the edges of $G$ with both endpoints in $X$. Write $G-X$ for $G[V(G)\setminus X]$; that is, the subgraph $G-X$ is obtained by removing the vertices in $X$ and their incident edges. Our main result is the following.
\vspace{0.2cm}
\noindent {\bf Theorem} (Connectivity Theorem). {\it A cubical $d$-polytope $P$ with minimum degree $\delta$ is $\min\{\delta,2d-2\}$-connected for every $d\ge 3$.
Furthermore, for any $d\ge 4$, every minimum separator $X$ of cardinality at most $2d-3$ consists of all the neighbours of some vertex, and the subgraph $G(P)-X$ contains exactly two components, with one of them being the vertex itself.}
A {\it simple} vertex in a $d$-polytope is a vertex of degree $d$; otherwise we say that the vertex is {\it nonsimple}. An immediate corollary of the theorem is the following.
\vspace{0.2cm}
\noindent {\bf Corollary.} {\it A cubical $d$-polytope with no simple vertices is $(d+1)$-connected.}
\begin{remark}
The connectivity theorem is best possible in the sense that there are infinitely many cubical $3$-polytopes with minimum separators not consisting of the neighbours of some vertex (\cref{fig:cubical-3-polytopes}).
\end{remark}
It is not hard to produce examples of polytopes with differing values of minimum degree and connectivity. The {\it connected sum} $P_{1}\#P_{2}$ of two $d$-polytopes $P_{1}$ and $P_{2}$ with two projectively isomorphic facets $F_{1}\subset P_{1}$ and $F_{2}\subset P_{2}$ is obtained by gluing $P_{1}$ and $P_{2}$ along $F_{1}$ and $F_{2}$ \cite[Example~8.41]{Zie95}. Projective transformations on the polytopes $P_{1}$ and $P_{2}$, such as those in \cite[Def.~3.2.3]{Ric06}, may be required for $P_{1}\#P_{2}$ to be convex. \Cref{fig:connected-sum} depicts this operation. A connected sum of two copies of a cyclic $d$-polytope with $d\ge 4$ and $n\ge d+1$ vertices (\cite[Thm.~0.7]{Zie95}), which is a polytope whose facets are all simplices, results in a $d$-polytope of minimum degree $n-1$ that is $d$-connected but not $(d+1)$-connected.
\begin{figure}
\includegraphics{cubical-3-polytopes}
\caption{Cubical $3$-polytopes with minimum separators not consisting of the neighbours of some vertex. The vertices of the separator are coloured in gray. The removal of the vertices of a face $F$ does not leave a 2-connected subgraph: the remaining vertex in gray disconnects the subgraph. Infinitely many more examples can be generated by using well-known expansion operations such as those in \cite[Fig.~3]{BriGreGre05}.}\label{fig:cubical-3-polytopes}
\end{figure}
\begin{figure}
\includegraphics{cubical-min-deg-connectivity}
\caption{Connected sum of two cubical polytopes.}\label{fig:connected-sum}
\end{figure}
On our way to prove the connectivity theorem we prove results of independent interest, for instance, the following (\cref{cor:Removing-facet-(d-2)-connectivity} in \cref{sec:cubical-connectivity}).
\vspace{0.2cm}
\noindent {\bf Corollary. }{\it Let $P$ be a cubical $d$-polytope and let $F$ be a proper face of $P$. Then the subgraph $G(P)-V(F)$ is $(d-2)$-connected.}
\vspace{0.2cm}
\begin{remark}
The examples of \cref{fig:cubical-3-polytopes} also establish that the previous corollary is best possible in the sense that the removal of the vertices of a proper face $F$ of a cubical $d$-polytope does not always leave a $(d-1)$-connected subgraph of the graph of the polytope.
\end{remark}
The corollary gives another unusual property of cubical polytopes. A tight result of Perles and Prabhu \cite[Thm.~1]{PerPra93} implies that the removal of the vertices of any $k$-face ($-1\le k\le d-1$) from a $d$-polytope leaves a $\max\{1,d-k-1\}$-connected subgraph of the graph of the polytope.
The connectivity theorem also gives rise to the following corollary and open problem.
\vspace{0.2cm}
\noindent {\bf Corollary.} {\it There are functions $f:\mathbb{N}\rightarrow \mathbb{N}$ and $g:\mathbb{N}\rightarrow \mathbb{N}$ such that, for every $d\ge 4$,
\begin{enumerate}[(i)]
\item the function $f(d)$ gives the maximum number such that every cubical $d$-polytope with minimum degree $\delta\le f(d)$ is $\delta$-connected;
\item the function $g(d)$ gives the maximum number such that every minimum separator with cardinality at most $g(d)$ of every cubical $d$-polytope consists of the neighbourhood of some vertex; and
\item $2d-3\le g(d)$ and $g(d)< f(d)$.
\end{enumerate}}
An exponential bound in $d$ for $f(d)$ is readily available. The connected sum of two copies of a neighbourly cubical $d$-polytope with minimum degree $\delta> 2^{d-1}$, which exists by \cite[Thm.~16]{JosZie00}, results in a cubical $d$-polytope with minimum degree $\delta$ and with a minimum separator of cardinality $2^{d-1}$, the number of vertices of the facet along which we glued. The cardinality of this separator gives at once the announced upper bound. This exponential bound in conjunction with the connectivity theorem gives that
\begin{equation}\label{eq:bounds}
2d-3\le g(d)< f(d)\le 2^{d-1}.
\end{equation}
The following problem naturally arises.
\begin{problem}\label{prob:bounds}
For $d\ge 4$ provide precise values for the functions $f(d)$ and $g(d)$ or improve the lower and upper bounds in \eqref{eq:bounds}.
\end{problem}
We suspect that both functions are linear in $d$.
Using ideas developed in this paper, we studied further connectivity properties of cubical $d$-polytopes \cite{ThiPinUgo18}. A graph is {\it $k$-linked} if, for every set of $2k$ distinct vertices $\{s_{1},\dots,s_{k},t_{1},\ldots,t_{k}\}$, there exist $k$ pairwise vertex-disjoint paths $L_{1},\ldots, L_{k}$ such that the endpoints of $L_{i}$ are $s_{i}$ and $t_{i}$. We proved that the graph of a cubical $d$-polytope is $\floor{(d+1)/2}$-linked for $d\ne 3$.
\section{Preliminary results}
\label{sec:preliminary}
This section groups a number of results that will be used in later sections of the paper.
The definitions of polytopal complex and strongly connected complex play an important role in the paper. A {\it polytopal complex} $\mathcal{C}$ is a finite nonempty collection of polytopes in $\mathbb{R}^{d}$ where the faces of each polytope in $\mathcal{C}$ all belong to $\mathcal{C}$ and where polytopes intersect only at faces (if $P_{1}\in \mathcal{C}$ and $P_{2}\in \mathcal{C}$ then $P_{1}\cap P_{2}$ is a face of both $P_{1}$ and $P_{2}$). The empty polytope is always in $\mathcal{C}$. The {\it dimension} of a complex $\mathcal{C}$ is the largest dimension of a polytope in $\mathcal{C}$; if $\mathcal{C}$ has dimension $d$ we say that $\mathcal{C}$ is a {\it $d$-complex}. Faces of a complex of largest and second largest dimension are called {\it facets} and {\it ridges}, respectively. If each of the faces of a complex $\mathcal{C}$ is contained in some facet we say that $\mathcal{C}$ is {\it pure}.
Given a polytopal complex $\mathcal{C}$ with vertex set $V$ and a subset $X$ of $V$, the subcomplex of $\mathcal{C}$ formed by all the faces of $\mathcal{C}$ containing only vertices from $X$ is called {\it induced} and is denoted by $\mathcal{C}[X]$. Removing from $\mathcal{C}$ all the vertices in a subset $X\subset V(\mathcal{C})$ results in the subcomplex $\mathcal{C}[V(\mathcal{C})\setminus X]$, which we write as $\mathcal{C}-X$. We say that a subcomplex $\mathcal{C}'$ is a {\it spanning} subcomplex of $\mathcal{C}$ if $V(\mathcal{C}')=V(\mathcal{C})$. The {\it graph} of a complex is the undirected graph formed by the vertices and edges of the complex. As in the case of polytopes, we denote the graph of a complex $\mathcal{C}$ by $G(\mathcal{C})$. A pure polytopal complex $\mathcal{C}$ is {\it strongly connected} if every pair of facets $F$ and $F'$ is connected by a path $F_{1}\ldots F_{n}$ of facets in $\mathcal{C}$ such that $F_{i}\cap F_{i+1}$ is a ridge of $\mathcal{C}$, $F_{1}=F$, and $F_{n}=F'$; we say that such a path is a {\it facet-ridge path}, or a {\it $(d-1,d-2)$-path} when the facets and ridges of $\mathcal{C}$ have dimensions $d-1$ and $d-2$, respectively. From the definition, it follows that every 0-complex is trivially strongly connected and that every complex contains a spanning 0-subcomplex.
The relevance of strongly connected complexes stems from the ensuing result of Sallee.
\begin{proposition}[{\cite[Sec.~2]{Sal67}}]\label{prop:connected-complex-connectivity} The graph of a strongly connected $d$-complex is $d$-connected.
\end{proposition}
Strongly connected complexes can be defined from a $d$-polytope $P$. Two basic examples are given by the complex of all faces of $P$, called the {\it complex} of $P$ and denoted by $\mathcal{C}(P)$, and the complex of all proper faces of $P$, called the {\it boundary complex} of $P$ and denoted by $\mathcal{B}(P)$. For a polytopal complex $\mathcal{C}$, the {\it star} of a face $F$ of $\mathcal{C}$, denoted $\st(F,\mathcal{C})$, is the subcomplex of $\mathcal{C}$ formed by all the faces containing $F$, and their faces; the {\it antistar} of a face $F$ of $\mathcal{C}$, denoted $\a-st(F,\mathcal{C})$, is the subcomplex of $\mathcal{C}$ formed by all the faces disjoint from $F$. That is, $\a-st(F,\mathcal{C})=\mathcal{C}-V(F)$. Unless otherwise stated, when defining stars and antistars in a polytope, we always assume the underlying complex is the boundary complex of the polytope.
Some complexes defined from a $d$-polytope are strongly connected $(d-1)$-complexes, as the next proposition attests; the parts about the boundary complex and the antistar of a vertex already appeared in \cite{Sal67}.
\begin{proposition}[{\cite[Cor.~11, Thm.~3.5]{Sal67}}]\label{prop:st-ast-connected-complexes} Let $P$ be a $d$-polytope. Then, the boundary complex $\mathcal{B}(P)$ of $P$, and the star and antistar of a vertex in $\mathcal{B}(P)$, are all strongly connected $(d-1)$-complexes of $P$.
\end{proposition}
\begin{proof} Let $\psi$ define the natural anti-isomorphism from the face lattice of $P$ to the face lattice of its dual $P^{*}$.
The three complexes are pure. The complex $\mathcal{B}(P)$ is clearly pure, and so is the star of a vertex. The antistar of a vertex is pure as well: a face of $P$ that does not contain the vertex must lie in a facet that does not contain the vertex. We proceed to prove the strong connectivity of the complexes.
The statement about $\mathcal{B}(P)$ was already proved in \cite[Cor.~2.11]{Sal67}. The facets in $\mathcal{B}(P)$ correspond to vertices in $P^{*}$. The existence of a facet-ridge path in $\mathcal{B}(P)$ between any two facets $F_{1}$ and $F_{2}$ of $\mathcal{B}(P)$ amounts to the existence of a vertex-edge path in $P^{*}$ between the vertices $\psi(F_{1})$ and $\psi(F_{2})$ of $P^{*}$. That $\mathcal{B}(P)$ is a strongly connected $(d-1)$-complex now follows from the connectivity of the graph of $P^{*}$ (Balinski's theorem).
The assertion about the star of a vertex does not seem to explicitly appear in \cite{Sal67}. The facets in the star $\St$ of a vertex $v$ in $\mathcal{B}(P)$ correspond to the vertices in the facet $\psi(v)$ in $P^{*}$. The existence of a facet-ridge path in $\St$ between any two facets $F_{1}$ and $F_{2}$ of $\St$ amounts to the existence of a vertex-edge path in $\psi(v)$ between the vertices $\psi(F_{1})$ and $\psi(F_{2})$ of $\psi(v)$. That $\St$ is a strongly connected $(d-1)$-complex follows from the connectivity of the graph of $\psi({v})$ (Balinski's theorem).
The assertion about the antistar of a vertex $v$ was first shown in \cite[Thm.~3.5]{Sal67}. The facets in $\a-st(v)$ correspond to the vertices of $P^{*}$ that are not in $\psi(v)$. That is, if $F_1$ and $F_{2}$ are any two facets of $\a-st(v)$, then $\psi(F_{1}),\psi(F_{2})\in V(P^{*})\setminus V(\psi(v))$. The existence of a facet-ridge path between $F_1$ and $F_{2}$ in $\a-st(v)$ amounts to the existence of a vertex-edge path between $\psi(F_{1})$ and $\psi(F_{2})$ in the subgraph $G(P^{*})- V(\psi(v))$ of $G(P^{*})$. The removal of the vertices of a facet does not disconnect the graph of a polytope \cite[Thm.~3.1]{Sal67}, wherefrom it follows that $G(P^{*})- V(\psi(v))$ is connected, as desired.
\end{proof}
\section{Connectivity of the $d$-cube}
\label{sec:cube-connectivity}
We unveil some further properties of the cube, whose proofs exploit the realisation of a $d$-cube as a $0-1$ $d$-polytope \cite{Zie00}. A {\it $0-1$ $d$-polytope} is a $d$-polytope whose vertices have coordinates in $\{0,1\}^{d}$. Here $\{0,1\}^{d}$ denotes the set of all $d$-element sequences from $\{0,1\}$.
We next give some basic properties of the $d$-cube, including some specific to its realisation as a $0-1$ polytope.
\begin{remark}[Basic properties of the $d$-cube]\label{rmk:cube-properties} Let $\vec x=(x_{1},\ldots,x_{d})$ with $x_{i}\in\{0,1\}$ be a vertex of the $0-1$ $d$-cube $Q_{d}$.
\begin{enumerate}[(i)]
\item Every two facets of $Q_{d}$ either intersect at a ridge or are disjoint.
\item Each of the $2d$ facets of $Q_{d}$ is the convex hull of a set of the form
\begin{align*}
F_{i}^{0}:=\conv \{\vec x\in V(Q_{d}): x_{i}=0\}\;\text{or}\; F_{i}^{1}:=\conv \{\vec x\in V(Q_{d}): x_{i}=1\},
\end{align*}
for $i\in [1,d]$, where $[1,d]$ denotes the interval $1,\ldots,d$.
\item A $(d-k)$-face is the intersection of exactly $k$ facets, and thus, its vertices have the form \[\{\vec x \in V(Q_{d}): x_{i_{1}}=0,\ldots,x_{i_{r}}=0,x_{i_{r+1}}=1,\ldots,x_{i_{k}}=1\}\] for $k\in [1,d]$ and $r\in[0,k]$.
\end{enumerate}
\end{remark}
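As a concrete instance of \cref{rmk:cube-properties}(ii)--(iii) in the $3$-cube (the particular indices below are chosen only for illustration):

```latex
% With d = 3 and k = 2, fixing x_1 = 0 and x_2 = 1 leaves one free
% coordinate, so the intersection of the two facets F_1^0 and F_2^1 is a
% (d-k)-face, namely an edge:
\[
F_{1}^{0}\cap F_{2}^{1}
  = \conv\{\vec x\in V(Q_{3}) : x_{1}=0,\ x_{2}=1\}
  = \conv\{(0,1,0),\,(0,1,1)\}.
\]
```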
While it is true that the antistar of a vertex in a $d$-polytope is always a strongly connected $(d-1)$-complex (\cref{prop:st-ast-connected-complexes}), it is far from true that this extends to higher dimensional faces. Consider any $d$-polytope $P$ with a simplex facet $J$ that contains at least one vertex $v$ of degree $d$ in $P$. Let $F$ be any face in $J$ that does not contain $v$. Then the vertex $v$ has degree $d-|V(F)|$ in the subcomplex $P- V(F)$. Since every vertex in a pure $(d-1)$-complex has degree at least $d-1$, the antistar of $F$ in $\mathcal{B}(P)$, which contains $v$, cannot be a pure $(d-1)$-complex for $\dim F\ge 1$. This extension is however possible for the $d$-cube.
\begin{lemma}
\label{lem:cube-face-complex} Let $F$ be a proper face in the $d$-cube $Q_{d}$. Then the antistar of $F$ is a strongly connected $(d-1)$-complex.
\end{lemma}
\begin{proof} Without loss of generality, assume that $Q_{d}$ is given as a $0-1$ polytope, and for the sake of concreteness, that our proper face $F$ is defined as $\conv \{\vec x\in V(Q_{d}): x_{1}=0,\ldots,x_{k}=0\}$ (\cref{rmk:cube-properties}(iii)). That is, $F=F_{1}^{0}\cap \cdots\cap F_{k}^{0}$; refer to \cref{rmk:cube-properties}(ii).
We claim that the antistar of $F$ is the pure $(d-1)$-complex \[\mathcal{C}:=\mathcal{C}(F_{1}^{1})\cup \cdots\cup \mathcal{C}(F_{k}^{1});\] refer to \cref{rmk:cube-properties}(ii)-(iii).
We first prove that $\a-st(F,Q_{d})\subseteq \mathcal{C}$, arguing by contraposition. Take any $(d-l)$-face $K\not \in \mathcal{C}$. Then $K=J_{1}\cap \cdots \cap J_{l}$ for some facets $J_{i}$ of $Q_{d}$. Since $K\not\in \mathcal{C}$, no facet $J_{i}$ is of the form $F_{j}^{1}$ with $j\in[1,k]$; hence each $J_{i}$ is either $\conv \{\vec x\in V(Q_{d}):x_{j}=1\}$ for some $j\in [k+1,d]$ or $\conv \{\vec x\in V(Q_{d}):x_{j}=0\}$ for some $j\in [1,d]$. According to \cref{rmk:cube-properties}(iii), for $l\in [1,d]$ and $r\in[0,l]$, we get that \[K=\conv \{\vec x\in V(Q_{d}):x_{i_{1}}=0,\ldots,x_{i_{r}}=0,x_{i_{r+1}}=1,\ldots,x_{i_{l}}=1\}.\] From the form of the facets $J_{i}$ it follows that $i_{j}\ge k+1$ for all $j\in [r+1, l]$. Hence there is a vertex $\vec x=(x_{1},\ldots,x_{d})$ in $K$ satisfying $x_{1}=\cdots =x_{k}=0$, which implies that $\vec x\in K\cap F$. That is, $K\not \in \a-st(F,Q_{d})$.
To prove that $\mathcal{C} \subseteq \a-st(F,Q_{d})$ holds, observe that, if $K\in \mathcal{C}$ then it is in a facet $F_{i}^{1}$ for some $i\in[1,k]$, and therefore, it belongs to $\a-st(F,Q_{d})$. Hence $\mathcal{C}=\a-st(F,Q_{d})$.
That $\mathcal{C}$ is strongly connected follows from noting that the facets $F_{1}^{1}, \ldots, F_{k}^{1}$ are pairwise nondisjoint, and therefore, pairwise intersect at $(d-2)$-faces (\cref{rmk:cube-properties}(i)).
\end{proof}
\cref{prop:known-cube-cutsets} is well known \cite[Prop.~1]{Ram04}, but we are not aware of a reference for \cref{prop:cube-cutsets}.
\begin{proposition}[{\cite[Prop.~1]{Ram04}}]\label{prop:known-cube-cutsets}
Any separator $X$ of cardinality $d$ in $Q_{d}$ consists of the $d$ neighbours of some vertex in the cube, and the subgraph $G(Q_{d})-X$ has exactly two components, with one of them being the vertex itself.
\end{proposition}
\begin{proof}
A proof can be found in \cite[Prop.~1]{Ram04}: essentially, one proceeds by induction on $d$, considering the effect of the separator on a pair of disjoint facets.
\end{proof}
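For small $d$, \cref{prop:known-cube-cutsets} can also be confirmed by exhaustive search. The following sketch (ours, purely illustrative; the function names are not from \cite{Ram04}) enumerates every $d$-subset of vertices of $Q_{d}$ and checks that the disconnecting ones are exactly the vertex neighbourhoods, with the predicted component structure:

```python
from itertools import combinations

def cube_graph(d):
    """Adjacency list of Q_d on the vertex set {0,1}^d, encoded as bit tuples."""
    verts = [tuple((n >> k) & 1 for k in range(d)) for n in range(2 ** d)]
    return {v: [tuple(b ^ (1 if j == k else 0) for j, b in enumerate(v))
                for k in range(d)] for v in verts}

def components(adj, removed):
    """Connected components of the graph minus the vertex set `removed`."""
    left, comps = set(adj) - removed, []
    while left:
        stack = [left.pop()]
        comp = set(stack)
        while stack:
            for w in adj[stack.pop()]:
                if w in left:
                    left.remove(w)
                    comp.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def check_separators(d):
    """Every d-subset disconnecting Q_d is the neighbourhood of a vertex,
    leaving exactly two components, one of them that single vertex."""
    adj = cube_graph(d)
    for X in combinations(adj, d):
        comps = components(adj, set(X))
        if len(comps) > 1:
            singles = [c for c in comps if len(c) == 1]
            if not (len(comps) == 2 and len(singles) == 1
                    and set(adj[next(iter(singles[0]))]) == set(X)):
                return False
    return True
```

The search is feasible because $Q_{4}$ has only $\binom{16}{4}=1820$ candidate separators; it is a sanity check, not a substitute for the inductive proof.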
\begin{proposition}\label{prop:cube-cutsets} Let $y$ be a vertex of the $d$-cube $Q_{d}$ and let $Y$ be a subset of the neighbours of $y$ in $Q_{d}$. Then the subcomplex of $Q_{d}$ induced by $V(Q_{d})\setminus (\{y\}\cup Y)$ contains a spanning strongly connected $(d-2)$-subcomplex.
\end{proposition}
\begin{proof} Without loss of generality, assume that $Q_{d}$ is given as a $0-1$ polytope, and for the sake of concreteness, that $y=(0,\ldots,0)$ and $Y=\{\vec e_{1},\ldots,\vec e_{k}\}$ where $\vec e_{i}$ denotes the standard unit vector with the $i$-entry equal to one.
Let $\mathcal{C}:=Q_{d}- (\{y\}\cup Y)$, the subcomplex of $Q_{d}$ induced by $V(Q_{d})\setminus (\{y\}\cup Y)$. Consider the $d-k$ ridges
\[R_{i}:=\conv \{\vec x\in V(Q_{d}): x_{1}=0,x_{i}=1\}\;\text{for $i\in [k+1,d]$}\] and the ${d\choose 2}$ ridges \[R_{i,j}:=\conv \{\vec x\in V(Q_{d}): x_{i}=1,x_{j}=1\}\;\text{for $i,j\in[1,d]$ with $i\ne j$.}\]
Let $\mathcal{C}':=\mathcal{C}(R_{k+1})\cup\cdots\cup \mathcal{C}(R_{d})\cup \mathcal{C}(R_{1,2})\cup \cdots\cup \mathcal{C}(R_{d-1,d})$. Then $\mathcal{C}'$ is a pure $(d-2)$-subcomplex of $\mathcal{C}$.
We show that $\mathcal{C}'$ is a spanning subcomplex of $\mathcal{C}$. Let $\vec x=(x_{1},\ldots,x_{d})$ be a vertex in $V(Q_{d})\setminus (\{y\}\cup Y)$. Then either $\vec x=\vec e_{i}$ for some $i=k+1,\ldots, d$ or $x_{i}=x_{j}=1$ for some $i,j\in[1,d]$ with $i\ne j$; see \cref{rmk:cube-properties}(iii).
In the former case, the vertex $\vec x$ lies in the $(d-2)$-face $R_{i}$, and in the latter case, the vertex $\vec x$ lies in the $(d-2)$-face $R_{i,j}$.
Therefore $\vec x\in \mathcal{C}'$. We next show that $\mathcal{C}'$ is strongly connected.
Take any two distinct ridges \(R\) and \(R'\) from $\mathcal{C}'$. We consider three cases based on the form of $R$ and $R'$.
Suppose that \(R=R_{i}\) and \(R'=R_{j}\) for $i,j\in[k+1,d]$ and $i\ne j$. Then there is a \((d-2,d-3)\)-path $L$ of length one from \(R\) to \(R'\) through their common \((d-3)\)-face \(\conv \{\vec x\in V(Q_{d}):x_1=0,x_i=1,x_j=1\}\). That is, $L:=RR'$.
Next suppose that \(R=R_{i}\) and \(R'=R_{j,l}\) for $j,l\in[1,d]$ with $j\ne l$. If \(i\in\{j,l\}\), say \(i=j\), there is a \((d-2,d-3)\)-path of length one from \(R\) to \(R'\) through the common \((d-3)\)-face \(\conv \{\vec x\in V(Q_{d}):x_1=0,x_i=1,x_l=1\}\). If \(i\not\in\{j,l\}\), then there is a \((d-2,d-3)\)-path $L$ of length two from \(R\) to \(R'\) through the \((d-2)\)-face $R_{i,l}$, which shares the \((d-3)\)-face $\conv \{\vec x\in V(Q_{d}):x_{1}=0,x_{i}=1,x_{l}=1\}$ with \(R\) and the \((d-3)\)-face $\conv \{\vec x\in V(Q_{d}):x_{i}=1,x_{j}=1,x_{l}=1\}$ with \(R'\). That is, $L:=RR_{i,l}R'$.
Finally suppose that \(R=R_{i,j}\) with $i\ne j$ and \(R'=R_{l,m}\) with $l\ne m$. If \(\{i,j\}\cap\{l,m\}\ne\emptyset\), say \(i=l\) after relabelling, there is a \((d-2,d-3)\)-path from \(R\) to \(R'\) through the common \((d-3)\)-face \(\conv \{\vec x\in V(Q_{d}):x_i=1,x_j=1,x_m=1\}\). If \(\{i,j\}\cap\{l,m\}=\emptyset\) then there is a \((d-2,d-3)\)-path $L$ of length two from \(R\) to \(R'\) through the \((d-2)\)-face $R_{i,l}$, which shares the \((d-3)\)-face $\conv \{\vec x\in V(Q_{d}):x_{i}=1,x_{j}=1,x_{l}=1\}$ with \(R\) and the \((d-3)\)-face $\conv \{\vec x\in V(Q_{d}):x_{i}=1,x_{l}=1,x_{m}=1\}$ with \(R'\). That is, $L:=RR_{i,l}R'$.\end{proof}
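The construction in the preceding proof can be sanity-checked computationally for small parameters. The sketch below (our illustration; the function name is ours) verifies that the ridges $R_{i}$ and $R_{i,j}$ cover $V(Q_{d})\setminus(\{y\}\cup Y)$, and that the graph on these ridges, with two ridges adjacent when they share a $(d-3)$-face, is connected; the latter is only the graph-level consequence of strong connectedness, not the full complex-level statement:

```python
from itertools import combinations

def check_ridge_complex(d, k):
    """For y = (0,...,0) and Y = {e_1,...,e_k} in the 0-1 cube Q_d, verify
    that the ridges R_i (x_1 = 0, x_i = 1 for i > k) and R_{i,j}
    (x_i = x_j = 1) cover V(Q_d) minus {y} and Y, and that the ridge graph
    is connected. Coordinates are 0-indexed."""
    verts = [tuple((n >> a) & 1 for a in range(d)) for n in range(2 ** d)]
    y = (0,) * d
    Y = {tuple(1 if a == i else 0 for a in range(d)) for i in range(k)}
    ridges = [frozenset(v for v in verts if v[0] == 0 and v[i] == 1)
              for i in range(k, d)]
    ridges += [frozenset(v for v in verts if v[i] == 1 and v[j] == 1)
               for i, j in combinations(range(d), 2)]
    # Spanning: the ridges cover exactly the vertices outside {y} union Y.
    if set().union(*ridges) != set(verts) - {y} - Y:
        return False
    # Two ridges are adjacent when their intersection is a (d-3)-face,
    # i.e. a cube face with 2^(d-3) vertices; check the ridge graph is connected.
    n = len(ridges)
    adj = {a: [b for b in range(n)
               if b != a and len(ridges[a] & ridges[b]) == 2 ** (d - 3)]
           for a in range(n)}
    seen, stack = {0}, [0]
    while stack:
        for b in adj[stack.pop()]:
            if b not in seen:
                seen.add(b)
                stack.append(b)
    return len(seen) == n
```

Counting vertices suffices to detect a shared $(d-3)$-face because the intersection of two faces of a cube is again a face, and a face with $2^{d-3}$ vertices is a $(d-3)$-face.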
\begin{remark}\label{rmk:cubical-neighbours}
The subcomplex of $Q_{d}$ induced by $V(Q_{d})\setminus (\{y\}\cup Y)$, denoted by $\mathcal{C}$ in the proof of \cref{prop:cube-cutsets}, is pure if and only if $Y$ is the set of all neighbours of $y$, provided that $|Y|\ge 2$. Let $Y=\{\vec e_{1},\ldots,\vec e_{k}\}$ and $y=(0,\ldots,0)$. If $2\le k<d$ then the facets $\conv\{\vec x\in V(Q_{d}):x_{\ell}=1\}$ for $\ell\in[k+1,d]$ are in $\mathcal{C}$, and the ridges $\conv\{\vec x\in V(Q_{d}):x_{i}=1,x_{j}=1\}$ for $i,j\in[1,k]$ with $i\ne j$ are in $\mathcal{C}$, but the only two facets of $Q_{d}$ containing such a ridge, namely $\conv\{\vec x\in V(Q_{d}):x_{i}=1\}$ and $\conv\{\vec x\in V(Q_{d}):x_{j}=1\}$, are not in $\mathcal{C}$. Thus $\mathcal{C}$ is nonpure. If instead $k=d$ then no facet of $Q_{d}$ is in $\mathcal{C}$, and the coordinate vector of every vertex in $\mathcal{C}$ has at least two entries equal to one; thus every vertex is contained in some ridge $\conv\{\vec x\in V(Q_{d}):x_{i}=1,x_{j}=1\}$ for $i,j\in [1,d]$ with $i\ne j$, which is in $\mathcal{C}$. Hence $\mathcal{C}$ is a pure $(d-2)$-subcomplex of $Q_{d}$, and it coincides with the complex $\mathcal{C}'$. \Cref{fig:prop-cube-cutsets} illustrates \cref{prop:cube-cutsets}.
\end{remark}
\begin{figure}
\includegraphics{Prop-cube-cutsets}
\caption{Complexes in the 4-cube. (a) The 4-cube with the vertex $y=(0,0,0,0)$ singled out. The vertex labelling corresponds to a realisation of the 4-cube as a $0-1$ polytope. (b) Vertex coordinates as elements of $\{0,1\}^{4}$. (c) The strongly connected 2-complex $\mathcal{C}$ induced by $V(Q_{4})\setminus (\{y\}\cup Y)$ where $Y=\{v_{2},v_{3},v_{5},v_{9}\}$. Every face of $\mathcal{C}$ is contained in a 2-face of the cube. (d) The nonpure complex $\mathcal{C}$ induced by $V(Q_{4})\setminus (\{y\}\cup Y)$ where $Y=\{v_{2}=\vec e_{1},v_{5}=\vec e_{3}\}$. The 2-face $\conv \{v_{6},v_{8},v_{14},v_{16}\}=\conv\{\vec x\in V(Q_{4}):x_{1}=1,x_{3}=1\}$ of $\mathcal{C}$ is not contained in any 3-face, and there are two 3-faces in $\mathcal{C}$, namely $\conv \{v_{3},v_{4},v_{7},v_{8}, v_{11},v_{12},v_{15},v_{16}\}=\conv\{\vec x\in V(Q_{4}):x_{2}=1\}$ and $\conv \{v_{9},v_{10},v_{11},v_{12},v_{13},v_{14}, v_{15},v_{16}\}=\conv\{\vec x\in V(Q_{4}):x_{4}=1\}$.}\label{fig:prop-cube-cutsets}
\end{figure}
\section{Cubical polytopes}
\label{sec:cubical-connectivity}
The aim of this section is to prove \cref{thm:cubical-connectivity}, a result that relates the connectivity of a cubical polytope to its minimum degree.
Two vertex-edge paths are {\it independent} if they share no inner vertex. Similarly, two facet-ridge paths are {\it independent} if they do not share an inner facet.
Given sets $A,B$ of vertices in a graph, a path from $A$ to $B$, called an {\it $A-B$ path}, is a (vertex-edge) path $L:=u_{0}\ldots u_{n}$ in the graph such that $V(L)\cap A=\{u_{0}\}$ and $V(L)\cap B=\{u_{n}\}$. We write $a-B$ path instead of $\{a\}-B$ path, and likewise, $A-b$ path instead of $A-\{b\}$ path.
Our exploration of the connectivity of cubical polytopes starts with a statement about the connectivity of the star of a vertex. But first we need a lemma that holds for all $d$-polytopes.
\begin{lemma}\label{lem:star-minus-facet-F} Let $P$ be a $d$-polytope with $d\ge 2$. Then, for any two distinct facets $F_{1}$ and $F_{2}$ of $P$, the following hold.
\begin{enumerate}[(i)]
\item There are $d$ independent facet-ridge paths between $F_{1}$ and $F_{2}$ in $P$.
\item Let $\St$ be the star of a vertex and let $F$ be a facet of $\St$. If $F_{1}$ and $F_{2}$ are in $\St$ and are both different from $F$, then there exists a $(d-1,d-2)$-path between $F_{1}$ and $F_{2}$ in $\St$ that does not contain $F$.
\item Let $F$ be a facet of $P$ other than $F_{1}$ and $F_{2}$. Then there exists a $(d-1,d-2)$-path between $F_{1}$ and $F_{2}$ in $P$ that does not contain $F$.
\item Let $R$ be an arbitrary ridge of $P$. Then there exists a facet-ridge path $J_{1}\ldots J_{m}$ with $J_{1}=F_{1}$ and $J_{m}=F_{2}$ in $P$ such that $J_{\ell}\cap J_{\ell+1}\ne R$ for each $\ell\in[1,m-1]$.
\end{enumerate}
\end{lemma}
\begin{proof} The proof of the lemma essentially follows from dualising Balinski's theorem.
Let $\psi$ define the natural anti-isomorphism from the face lattice of $P$ to the face lattice of its dual $P^{*}$.
(i). Any two independent vertex-edge paths in $P^{*}$ between the vertices $\psi(F_{1})$ and $\psi(F_{2})$ correspond to two independent facet-ridge paths in $P$ between the facets $F_{1}$ and $F_{2}$. By Balinski's theorem there are $d$ independent $\psi(F_{1})-\psi(F_{2})$ paths in $P^{*}$, and so the assertion follows.
(ii). The facets in the star $\St$ of a vertex $s$ in $\mathcal{B}(P)$ correspond to the vertices in the facet $\psi(s)$ in $P^{*}$ corresponding to $s$. The existence of a facet-ridge path in $\St$ between any two facets $F_{1}$ and $F_{2}$ of $\St$ amounts to the existence of a vertex-edge path in $\psi(s)$ between the vertices $\psi(F_{1})$ and $\psi(F_{2})$ of $\psi(s)$. Since the graph of the facet $\psi(s)$ is $(d-1)$-connected (Balinski's theorem), by Menger's theorem (\cite{Menger1927}; see also \cite[Sec.~3.3]{Die05}) there are $d-1$ independent paths between $\psi(F_{1})$ and $\psi(F_{2})$. Hence we can pick one such path $L^{*}$ that avoids the vertex $\psi(F)$ of $\psi(s)$. Dualising this path $L^{*}$ gives a $(d-1,d-2)$-path $L$ between $F_{1}$ and $F_{2}$ in the star $\St$ that does not contain the facet $F$ of $P$.
(iii). By (i) there are $d$ independent facet-ridge paths between $F_{1}$ and $F_{2}$ in $P$, and since $d\ge 2$, we can pick one such path that does not contain $F$.
(iv). Again by (i), there are $d$ independent facet-ridge paths between $F_{1}$ and $F_{2}$ in $P$, and since $d\ge 2$ and the ridge $R$ can be present in at most one such path, there must exist a facet-ridge path that does not contain $R$. The assertion now follows.
\end{proof}
For a path $L:=u_{0}\ldots u_{n}$ we write $u_{i}Lu_{j}$ for $0\le i\le j\le n$ to denote the subpath $u_{i}\ldots u_{j}$.
\begin{proposition}\label{prop:star-minus-facet} Let $F$ be a facet in the star $\St$ of a vertex in a cubical $d$-polytope. Then the antistar of $F$ in $\St$ is a strongly connected $(d-2)$-complex.
\end{proposition}
\begin{proof} Let $s$ be a vertex of a facet $F$ in a cubical $d$-polytope $P$ and let $F_{1},\ldots,F_{n}$ be the facets in the star $\St$ of the vertex $s$. Let $F_{1}=F$. The result is true for $d=2$: the antistar of $F$ is just a vertex, a strongly connected 0-complex. So assume $d\ge 3$.
According to \cref{lem:cube-face-complex}, the antistar of $F_{i}\cap F_{1}$ in $F_{i}$, the subcomplex of $F_{i}$ induced by $V(F_{i})\setminus V(F_{i}\cap F_{1})$, is a strongly connected $(d-2)$-complex for each $i\in [2,n]$. Since \[\a-st(F_{1},\St)=\bigcup_{i=2}^{n} \a-st(F_{i}\cap F_{1}, F_{i}),\] it follows that $\a-st(F_{1},\St)$ is a pure $(d-2)$-complex. It remains to prove that there exists a $(d-2,d-3)$-path $L$ between any two ridges $R_{i}$ and $R_{j}$ in $\a-st(F_{1},\St)$.
By virtue of \cref{lem:cube-face-complex}, we can assume that $R_{i}\in \a-st(F_{i}\cap F_{1}, F_{i})$ and $R_{j}\in \a-st(F_{j}\cap F_{1}, F_{j})$ for $i\ne j$ and $i,j\in [2,n]$. Since $\St$ is a strongly connected $(d-1)$-complex (\cref{prop:st-ast-connected-complexes}), there exists a $(d-1,d-2)$-path $M:=J_{1}\ldots J_{m}$ in $\St$, where $J_{\ell}\cap J_{\ell+1}$ is a ridge for $\ell\in [1,m-1]$, $J_{1}=F_{i}$ and $J_{m}=F_{j}$. Let $E_{0}:=R_{i}$ and $E_{m}:=R_{j}$.
We can assume that the path $M$ does not contain $F_{1}$ (\cref{lem:star-minus-facet-F}(ii)). Let us show that the path \(L\) exists by proving the following statement by induction.
\begin{claim}
For every \(\ell\leq m\), there exists a \((d-2,d-3)\)-path in \(\bigcup_{i=1}^\ell \a-st(J_i\cap F_1,J_i)\) between \(E_0\in \a-st(J_1\cap F_1,J_1)\) and any ridge \(E_\ell\in \a-st(J_\ell\cap F_1,J_\ell)\).
\end{claim}
\begin{claimproof}
The statement is true for \(\ell=1\): the complex \(\a-st(J_1\cap F_1,J_1)\) is a strongly connected $(d-2)$-complex and contains \(E_{0}\). So the induction on $\ell$ can start.
Suppose that the statement is true for some \(\ell< m\). We show the existence of a $(d-2,d-3)$-path between $E_{0}$ and any ridge of \(\a-st(J_{\ell+1}\cap F_1,J_{\ell+1})\).
Let $E_{\ell}$ be a ridge in $\a-st(J_{\ell}\cap F_{1}, J_{\ell})$ such that $E_{\ell}$ contains a $(d-3)$-face $I_{\ell}$ of $\a-st(J_{\ell}\cap F_{1}, J_{\ell})\cap \a-st(J_{\ell+1}\cap F_{1}, J_{\ell+1})$. By the induction hypothesis there exists a \((d-2,d-3)\) path \(L_\ell\) in \(\bigcup_{i=1}^{\ell} \a-st(J_i\cap F_1,J_i)\) between \(E_{0}\) and \(E_{\ell}\).
Consider a ridge \(E'_{\ell+1}\in \a-st(J_{\ell+1}\cap F_1,J_{\ell+1})\) such that $E'_{\ell+1}$ contains the aforementioned \((d-3)\)-face \(I_{\ell}\). There is a \((d-2,d-3)\)-path \(L'_{\ell+1}\) in \(\a-st(J_{\ell+1}\cap F_1,J_{\ell+1})\) from \(E'_{\ell+1}\) to any ridge \(E_{\ell+1}\) in \(\a-st(J_{\ell+1}\cap F_1,J_{\ell+1})\), thanks to \(\a-st(J_{\ell+1}\cap F_1,J_{\ell+1})\) being a strongly connected $(d-2)$-complex.
Since $E_{\ell}$ and $E'_{\ell+1}$ share $I_{\ell}$, a path $L_{\ell+1}$ from $E_{0}$ to the arbitrary ridge $E_{\ell+1}$ is obtained as $L_{\ell+1}=E_{0}L_{\ell}E_{\ell}E'_{\ell+1}L'_{\ell+1}E_{\ell+1}$.
For this concatenation to work it remains to prove that the complex $\a-st(J_{\ell}\cap F_{1},J_{\ell})\cap \a-st(J_{\ell+1}\cap F_{1},J_{\ell+1})$ contains the aforementioned $(d-3)$-face $I_{\ell}$. Let $K_{\ell}:=J_{\ell}\cap J_{\ell+1}\cap F_{1}$. The ridge $J_{\ell}\cap J_{\ell+1}$ is not a face of $F_{1}$, since a ridge lies in exactly two facets and neither $J_{\ell}$ nor $J_{\ell+1}$ is $F_{1}$; moreover, $\{s\}\subseteq V(J_{\ell})\cap V(J_{\ell+1})\cap V(F_{1})$. Hence $0\le \dim K_{\ell}\le d-3$. Since $J_{\ell}\cap J_{\ell+1}$ is a $(d-2)$-cube and $\dim K_{\ell}\le d-3$, the antistar of $K_{\ell}$ in $J_{\ell}\cap J_{\ell+1}$ is a pure $(d-3)$-complex (\cref{lem:cube-face-complex}), and so there exists a $(d-3)$-face in $J_{\ell}\cap J_{\ell+1}$ that is disjoint from $F_{1}$, our $I_{\ell}$. As a consequence, this face $I_{\ell}\in \a-st(J_{\ell}\cap F_{1},J_{\ell})\cap \a-st(J_{\ell+1}\cap F_{1},J_{\ell+1})$, as desired.
\end{claimproof}
Applying the claim to \(\ell=m\) gives the existence of a path in \(\bigcup_{i=1}^m \a-st(J_i\cap F_1,J_i)\) between $E_{0}=R_{i}$ and $E_{m}=R_{j}$; this is the desired path $L$.\end{proof}
The proof method used in \cref{prop:star-minus-facet} also proves the following.
\begin{theorem}\label{thm:polytope-minus-facet} Let $F$ be a proper face of a cubical $d$-polytope $P$. Then the antistar of $F$ in $P$ contains a spanning strongly connected $(d-2)$-subcomplex.
\end{theorem}
\begin{proof} Let $F_{1},\ldots,F_{n}$ be the facets of $P$ and let $F$ be a proper face of $P$. The result is true for $d=2$: the antistar of $F$ is a strongly connected 1-complex, and thus, contains a spanning 0-complex. So assume $d\ge 3$.
Let \[\mathcal{C}_{r}:=\mathcal{B}(F_{r})-V(F).\]
If $F_{r}=F$ then $\mathcal{C}_{r}=\emptyset$, and if $F_{r}\cap F=\emptyset$ then $\mathcal{C}_{r}$ is the boundary complex of $F_{r}$, a strongly connected $(d-2)$-subcomplex of $F_{r}$ (\cref{prop:st-ast-connected-complexes}). Otherwise, $\mathcal{C}_{r}$ is the antistar of $F_{r}\cap F$ in $F_{r}$, also a strongly connected $(d-2)$-subcomplex of $F_{r}$ (\cref{lem:cube-face-complex}).
Let \[\mathcal{C}:=\bigcup_{r=1}^{n} \mathcal{C}_{r}.\] We show that $\mathcal{C}$ is the required spanning strongly connected $(d-2)$-subcomplex of $P-V(F)$, the antistar of $F$ in $P$. Since each nonempty $\mathcal{C}_{r}$ is a pure $(d-2)$-complex and every vertex of $P-V(F)$ lies in some facet $F_{r}\ne F$, the complex $\mathcal{C}$ is a spanning pure $(d-2)$-subcomplex of $P-V(F)$. It remains to prove that there exists a $(d-2,d-3)$-path $L$ in $\mathcal{C}$ between any two ridges $R_{i}$ and $R_{j}$ of $\mathcal{C}$ with $i\neq j$.
If $R_{i},R_{j}\in \mathcal{C}_{r}$ for some $r\in [1,n]$, then, since $\mathcal{C}_{r}$ is a strongly connected $(d-2)$-complex (\cref{lem:cube-face-complex}), there exists a $(d-2,d-3)$-path in $\mathcal{C}_{r}$ between the two ridges $R_{i}$ and $R_{j}$. Therefore, we can assume that $R_{i}$ is in $\mathcal{C}_{i}$ and $R_{j}$ is in $\mathcal{C}_{j}$ for $i\ne j$. Observe that $F_{i}\ne F$ and $F_{j}\ne F$. Hereafter we let $E_{0}:=R_{i}$ and $E_{m}:=R_{j}$.
Since $\mathcal{B}(P)$ is a strongly connected $(d-1)$-subcomplex of $P$, there exists a $(d-1,d-2)$-path $M:=J_{1}\ldots J_{m}$ in $P$ where $J_{\ell}\cap J_{\ell+1}$ is a ridge for $\ell\in [1,m-1]$, $J_{1}=F_{i}$ and $J_{m}=F_{j}$. Each facet $J_{r}$ coincides with a facet $F_{i_{r}}$ for some $i_{r}\in[1,n]$; we henceforth let $\mathcal D_{r}:=\mathcal{C}_{i_{r}}$.
By \cref{lem:star-minus-facet-F}(iii)-(iv) we can assume that $J_{r}\ne F$ for $r\in[1,m]$ in the case of $F$ being a facet and that $J_{\ell}\cap J_{\ell+1}\ne F$ for $\ell\in[1,m-1]$ in the case of $F$ being a ridge. As a consequence, $\dim (J_{\ell}\cap J_{\ell+1}\cap F)\le d-3$; this in turn implies that, for each $\ell\in[1,m-1]$, $J_{\ell}\cap J_{\ell+1}$ contains a $(d-3)$-face $I_{\ell}$ that is disjoint from $F$. Hence $I_{\ell}\in \mathcal D_{\ell}\cap\mathcal D_{\ell+1}$ for each $\ell\in[1,m-1]$.
As in the proof of Proposition~\ref{prop:star-minus-facet}, we show that the path $L$ exists by proving the following claim by induction.
\begin{claim}
If \(\ell\leq m\),
there exists a \((d-2,d-3)\)-path in \(\bigcup_{i=1}^\ell \mathcal D_{i}\) between \(E_0\in \mathcal D_{1}\) and any ridge \(E_{\ell}\in \mathcal D_{\ell}\).
\end{claim}
\begin{claimproof}
The statement is true for \(\ell=1\): the complex \(\mathcal D_{1}\) is a strongly connected $(d-2)$-complex and \(E_0\in \mathcal D_{1}\).
Suppose that the statement is true for some \(\ell< m\). We show the existence of a $(d-2,d-3)$-path between $E_{0}\in \mathcal D_{1}$ and any ridge of $\mathcal D_{\ell+1}$.
Let \(E_{\ell}\) be a ridge in \(\mathcal D_{\ell}\) containing a \((d-3)\) face \(I_{\ell}\) of \(\mathcal D_{\ell} \cap \mathcal D_{\ell+1}\); this $(d-3)$-face $I_{\ell}$ exists by our previous discussion. By the induction hypothesis, there exists a \((d-2,d-3)\) path \(L_\ell\) in \(\bigcup_{i=1}^{\ell} \mathcal D_i\) between \(E_0\) and the ridge \(E_{\ell}\).
Consider a ridge \(E'_{\ell+1}\in \mathcal D_{\ell+1}\) containing the face \(I_{\ell}\). There is a $(d-2,d-3)$-path $L'_{\ell+1}$ in \(\mathcal D_{\ell+1}\) from \(E'_{\ell+1}\) to any ridge \(E_{\ell+1}\in \mathcal D_{\ell+1}\), thanks to $\mathcal D_{\ell+1}$ being a strongly connected $(d-2)$-complex.
The desired path $L_{\ell+1}$ between $E_{0}$ and the arbitrary ridge $E_{\ell+1}$ is obtained as $L_{\ell+1}=E_{0}L_{\ell}E_{\ell}E'_{\ell+1}L'_{\ell+1}E_{\ell+1}$.
\end{claimproof}
The claim for \(\ell=m\) gives the desired \((d-2,d-3)\)-path $L$ in $\cup_{i=1}^{m}\mathcal D_{i}\subset \mathcal{C}$ between $E_{0}=R_{i}$ and $E_{m}=R_{j}$, which concludes the proof.
\end{proof}
\begin{remark}\label{rmk:Antistar-Cubical-nonpure} \cref{thm:polytope-minus-facet} is best possible in the sense that the antistar of a face does not always contain a spanning strongly connected $(d-1)$-subcomplex. The removal of the vertices of the face $F$ in \cref{fig:cubical-3-polytopes} leaves a pure $(d-1)$-subcomplex that is not strongly connected.
\end{remark}
The ideas presented in \cref{prop:star-minus-facet,thm:polytope-minus-facet} play a key role in the proof of the main result of \cite{ThiPinUgo18}.
Before proving the main result of the section, we state a useful corollary that follows from \cref{prop:connected-complex-connectivity,thm:polytope-minus-facet}.
\begin{corollary}\label{cor:Removing-facet-(d-2)-connectivity} Let $P$ be a cubical $d$-polytope and let $F$ be a proper face of $P$. Then the subgraph $G(P)-V(F)$ is $(d-2)$-connected.
\end{corollary}
For $d\ge 4$ we define the two functions $f(d)$ and $g(d)$ that we mentioned in the introduction.
\begin{enumerate}
\item The function $f(d)$ gives the maximum number such that every cubical $d$-polytope with minimum degree $\delta\le f(d)$ is $\delta$-connected.
\item The function $g(d)$ gives the maximum number such that every minimum separator with cardinality at most $g(d)$ of every cubical $d$-polytope consists of the neighbourhood of some vertex.
\end{enumerate}
The values $f(3)$ and $g(3)$ are not defined. No cubical 3-polytope has minimum degree $\delta\ge 4$, and so for every positive integer $\delta_{0}\ge 3$ it follows that every cubical 3-polytope with minimum degree $\delta\le \delta_{0}$ is $\delta$-connected. \Cref{fig:cubical-3-polytopes} shows cubical 3-polytopes with minimum separators that are not the neighbourhood of a vertex.
The function $f(d)$ is well defined for $d\ge 4$. There is a cubical $d$-polytope with minimum degree $\delta$ for every $\delta\ge d\ge 4$, for instance, a neighbourly cubical $d$-polytope \cite{JosZie00}. Every $d$-polytope is $d$-connected by Balinski's theorem. Furthermore, there exists a cubical $d$-polytope with minimum degree $\delta>2^{d-1}$ that is not $\delta$-connected: the connected sum of two copies of a neighbourly cubical $d$-polytope with minimum degree $\delta$. Thus $d\le f(d)\le 2^{d-1}$.
At this moment, we do not claim that $g(d)$ exists; its existence will become evident in the proof of \cref{thm:cubical-connectivity}.
\begin{proposition}\label{prop:g+1} Let $P$ be a cubical $d$-polytope with $d\ge 4$. If the function $g(d)$ exists and $P$ has minimum degree at least $g(d)+1$, then $G(P)$ is $(g(d)+1)$-connected.
\end{proposition}
\begin{proof} Suppose that $G(P)$ is not $(g(d)+1)$-connected. Then there is a minimum separator $X$ with cardinality at most $g(d)$. By the definition of $g(d)$, $X$ consists of all the neighbours of some vertex $u$. This contradicts the degree of $u$, which is at least $g(d)+1>|X|$.
\end{proof}
\begin{corollary}\label{cor:f-g} If the function $g(d)$ exists for $d\ge 4$, then $f(d)>g(d)$.
\end{corollary}
\begin{theorem}[Connectivity Theorem]\label{thm:cubical-connectivity} A cubical $d$-polytope $P$ with minimum degree $\delta$ is $\min\{\delta,2d-2\}$-connected for every $d\ge 3$.
Furthermore, for any $d\ge 4$, every minimum separator $X$ of cardinality at most $2d-3$ consists of all the neighbours of some vertex, and the subgraph $G(P)-X$ contains exactly two components, with one of them being the vertex itself. \end{theorem}
\begin{proof} Let $0\le \alpha\le d-3$ and let $P$ be a cubical $d$-polytope with minimum degree at least $d+\alpha$. Let $G:=G(P)$.
We first prove that $P$ is $(d+\alpha)$-connected. The case of $d=3$ follows from Balinski's theorem. So assume $d\ge 4$. Let $X$ be a minimum separator of $P$. Throughout the proof, let $u$ and $v$ be two distinct vertices that belong to $G-X$ and are disconnected by $X$. The theorem follows from a number of claims that we prove next.
\begin{claim}\label{cl:d-1} If $|X|\le d+\alpha$ then, for any facet $F$, the cardinality of $X\cap V(F)$ is at most $d-1$.
\end{claim}
\begin{claimproof}
Suppose otherwise and let $F$ be a facet with $|X\cap V(F)|\ge d$.
Let \[G':=G-V(F).\] According to \cref{cor:Removing-facet-(d-2)-connectivity}, the subgraph $G'$ is $(d-2)$-connected. Since there are at most $\alpha\le d-3$ vertices in $V(G')\cap X$, removing from $G'$ the vertices in $V(G')\cap X$ does not disconnect $G'$.
We show there is a $u-v$ path in $G- X$, which would be a contradiction and prove the claim. If $u,v\in V(G')\setminus X$ then there is a $u-v$ path in $G'-X$, as $G'-X$ is connected. So assume $u\in V(F)\setminus X$. Since $u$ has degree at least $d+\alpha$ and since every vertex in $F$ has at least $d+\alpha-(d-1)=\alpha+1$ neighbours outside $F$ (in $G'$), at least one of them, say $u_{G'}$, is in $V(G') \setminus X$. Likewise either $v\in V(F)\setminus X$ and there is a neighbour $v_{G'}$ of $v$ in $V(G')\setminus X$ or $v\in G'-X$. Therefore, if $v\in V(F)\setminus X$ then there is a $u-v$ path $L$ in $G- X$ that contains a subpath $L'$ in $G'$ between the vertices $u_{G'}$ and $v_{G'}$ in $V(G') \setminus X$; that is, $L=uu_{G'}L'v_{G'}v$. If instead $v\in G'- X$ then there is a $u-v$ path $L$ in $G- X$ passing through the vertex $u_{G'}$ and containing a subpath $L':=u_{G'}-v$ in $G'-X$; that is, $L=uu_{G'}L'v$. Hence there is always a $u-v$ path in $G-X$, and thus, $G$ is not disconnected by $X$, a contradiction.
\end{claimproof}
\begin{claim}\label{cl:d-facets} If $|X|\le d+\alpha$, then there exist facets $F_{1},\ldots,F_{d}$ of $P$ such that $G(F_{i})$ is disconnected by $X$ for each $i\in[1,d]$.
\end{claim}
\begin{claimproof}
Suppose by way of contradiction that $X$ disconnects the graphs of at most $k$ facets $F_{1},\ldots, F_{k}$ of $P$ with $k\le d-1$. We find a $u-v$ path in $G-X$, which would contradict $X$ being a separator of $G$.
There are at least $d$ facets containing $u$ and there are at least $d$ facets containing $v$. As a result, we can pick facets $K_{u}$ and $K_{v}$ with $u\in K_{u}$ and $v\in K_{v}$ whose graphs are not disconnected by $X$; that is $K_{u},K_{v}\not\in\{F_{1},\ldots,F_{k}\}$. If $K_{u}=K_{v}$ then we can find a $u-v$ path in $G(K_{u})- X$. So assume $K_{u}\ne K_{v}$. Since $\mathcal{B}(P)$ is a strongly connected $(d-1)$-complex and since there are at least $d$ independent $(d-1,d-2)$-paths from $K_{u}$ to $K_{v}$ in $\mathcal{B}(P)$ (\cref{lem:star-minus-facet-F}(i)), there exists a $(d-1,d-2)$-path $J_{1}\ldots J_{n}$ in $\mathcal{B}(P)$ with $J_{1}=K_{u}$ and $J_{n}=K_{v}$ such that $\{J_{1},\ldots,J_{n}\}\cap \{F_{1},\ldots,F_{k}\}=\emptyset$. As a consequence, the subgraphs $G(J_{i})$ are not disconnected by $X$.
Construct a $u-v$ path $L$ by traversing the facets $J_{1},\ldots, J_{n}$ as follows: find a path $L_{1}$ in $J_{1}$ from $u$ to a vertex in $J_{1}\cap J_{2}$, then a path $L_{2}$ in $J_{2}$ from $J_{1}\cap J_{2}$ to $J_{2}\cap J_{3}$ and so on up to a path $L_{n-1}$ in $J_{n-1}$ from $J_{n-2}\cap J_{n-1}$ to $J_{n-1}\cap J_{n}$; here use the connectivity of the subgraphs $G(J_{1})- X,\ldots, G(J_{n-1})- X$. Finally, find a path $L_{n}$ in $J_{n}=K_{v}$ from $J_{n-1}\cap J_{n}$ to the vertex $v$ using the connectivity of $G(J_{n})- X$. The path $L$ is the concatenation of the paths $L_{1},\ldots,L_{n}$.
The aforementioned concatenation works as long as there is at least one vertex in $V(J_{\ell}\cap J_{\ell+1})\setminus X$ for each $\ell\in [1,n-1]$. For $d\ge 4$ we have $|V(J_{\ell}\cap J_{\ell+1})|=2^{d-2}\ge d$, which is greater than $|V(J_{\ell})\cap X|\le d-1$ by \cref{cl:d-1}. Hence $V(J_{\ell}\cap J_{\ell+1})\setminus X$ is nonempty, and consequently the $u-v$ path $L$ always exists, which completes the proof of the claim.
\end{claimproof}
\begin{claim}\label{cl:connectivity} If $|X|\le d+\alpha$ then $|X|=d+\alpha$.
\end{claim}
\begin{claimproof} Let $F$ be a facet of $P$ whose graph is disconnected by $X$, which by \cref{cl:d-facets} exists. \cref{cl:d-1} together with Balinski's theorem ensures that $|V(F)\cap X|=d-1$. Let $G':=G-V(F)$. By \cref{cor:Removing-facet-(d-2)-connectivity}, $G'$ is a $(d-2)$-connected subgraph of $G$.
Suppose that a minimum separator $X$ has size at most $d-1+\alpha$; we show that $X$ does not disconnect $G$ by finding a $u-v$ path $L$ between the vertices $u$ and $v$ of $G-X$, which would be a contradiction.
There are at most $\alpha\le d-3$ vertices in $V(G')\cap X$, and so removing $V(G')\cap X$ from $G'$ does not disconnect $G'$.
If $u$ and $v$ are both in $G'$ then there is a $u-v$ path in $G'$ that is disjoint from $X$. So assume that $u\in V(F)\setminus X$. Let $X_{1}$ denote the set of neighbours of $u$ in $G'$; then $|X_{1}|\ge \alpha+1$, since $u$ has at least $d+\alpha$ neighbours in $P$, with exactly $d-1$ of them in $F$. As a consequence, there is a neighbour $u_{G'}$ of $u$ in $V(G') \setminus X$. Likewise either $v\in V(F)\setminus X$ and there is a neighbour $v_{G'}$ of $v$ in $V(G')\setminus X$ or $v\in G'-X$. If $v\in V(F) \setminus X$, there is a $u-v$ path in $G- X$ that passes through the vertices $u_{G'}$ and $v_{G'}$ of $V(G') \setminus X$. If instead $v\in G'-X$, there is a $u-v$ path $L$ in $G- X$ that includes a subpath $L'$ in $G'-X$ between $u_{G'}$ and $v$ so that $L=uu_{G'}L'v$. Hence we always have a $u-v$ path in $G-X$. This contradiction shows that a minimum separator has size {\it exactly} $d+\alpha$.
\end{claimproof}
{\bf From \cref{cl:connectivity} it follows that $P$ is $(d+\alpha)$-connected.} The structure of a minimum separator is settled in \cref{cl:separator-d-4,cl:separator-5-d-3}. For every $d\ge 4$, \cref{cl:separator-d-4} settles the case $\alpha\le d-4$ and \cref{cl:separator-5-d-3} the case $\alpha= d-3$.
\begin{claim}\label{cl:separator-d-4} If $\alpha\le d-4$, then the set $X$ consists of the neighbours of some vertex and the minimum degree of $P$ is exactly $d+\alpha$.
\end{claim}
\begin{claimproof}
As in \cref{cl:connectivity}, let $F$ be a facet of $P$ whose graph is disconnected by $X$ and let $G':=G-V(F)$. Then $G'$ is a $(d-2)$-connected subgraph of $G$ (\cref{cor:Removing-facet-(d-2)-connectivity}). Besides, $|V(F)\cap X|=d-1$ by a combination of \cref{cl:d-1} and Balinski's theorem.
Since there are {\it exactly} $\alpha+1\le d-3$ vertices in $V(G')\cap X$, removing $V(G')\cap X$ from $G'$ does not disconnect $G'$. We may therefore assume that $u\in V(F)\setminus X$.
If there is a path $L_{u}$ in $G-X$ from $u$ to a vertex $u_{G'}\in G'-X$ and a path $L_{v}$ in $G-X$ from $v$ to a vertex $v_{G'}$ in $G'-X$ so that $L_{u}$ and $L_{v}$ are both disjoint from $X$, then we get a $u-v$ path $L$ in $G- X$ defined as $L=uL_{u}u_{G'}L'v_{G'}L_{v}v$ where $L'$ is a path in $G'-X$ between $u_{G'}$ and $v_{G'}$. Recall the minimum degree of $u$ is at least $d+\alpha$.
We may therefore assume that $u$ is in $V(F)\setminus X$ and that there is no such path $L_{u}$ in $G-X$ from $u$ to $G'-X$. The set $X_{1}$ of neighbours of $u$ in $G'$ must then be a subset of $X$, and since $|X_{1}|\ge \alpha+1$, it follows that $X_{1}=V(G')\cap X$, and thus, that $|X_{1}|=\alpha+1$. In addition, every path of length two from $u$ to $G'$ passing through a neighbour of $u$ in $F$ contains some vertex from $X$; otherwise the aforementioned path $L_{u}$ would exist. Let $X_{2}$ denote the vertices in $X$ that are present in a $u-V(G')$ path of length two passing through a neighbour of $u$ in $F$. Every vertex of $F$ has a neighbour in $G'$, and so there is a $u-V(G')$ path through each neighbour of $u$ in $F$, $d-1$ such neighbours in total. Since there are no triangles in $P$, we get $X_{1}\cap X_{2}=\emptyset$, which in turn implies that $X_{2}\subset V(F)$. Hence $|X_{2}|=d-1$, and every neighbour of $u$ in $F$ is in $X$. Consequently, the degree of $u$, $|X_{1}|+|X_{2}|$, is precisely $d+\alpha$, and the set $X$ consists of the $d+\alpha$ neighbours of $u$ in $P$, as desired.
\end{claimproof}
\begin{claim}\label{cl:separator-5-d-3} If $\alpha=d-3$, then the set $X$ consists of the neighbours of some vertex and the minimum degree of $P$ is exactly $d+\alpha$.
\end{claim}
\begin{claimproof} We proceed by contradiction, supposing that every vertex in $P$ has at least one neighbour outside $X$.
By \cref{cl:d-1} there are at most $d-1$ vertices from $X$ in any facet $F$ of $P$. If the removal of $X$ disconnects the graph of a facet $F$, then there would be exactly $d-1$ vertices in $V(F)\cap X$, which constitute the neighbours in $F$ of some vertex of $F$ (\cref{prop:known-cube-cutsets}). Consequently, the subgraph $G(F)-X$ would have exactly two components: one being a singleton $z(F)$ and another $Z(F)$ being $(d-3)$-connected by \cref{prop:cube-cutsets}; if $X$ doesn't disconnect $F$, we let $z(F)=\emptyset$ and let $Z(F):=G(F)-X$. Hence, for every facet $F$ of $P$, the subgraph $Z(F)$ is connected, and $V(F)=z(F)\cup V(Z(F))\cup (V(F)\cap X)$. Abusing terminology, if $z(F)\neq\emptyset$ we make no distinction between the set and its unique element.
Since $u$ and $v$ are separated by $X$, every $u-v$ path in $G$ contains a vertex from $X$. Because the vertex $u$ has a neighbour $w$ not in $X$, there must exist a facet $F_{u}$ in which $u\in Z(F_{u})$: a facet containing the edge $uw$. Similarly, there exists a facet $F_{v}$ containing $v$ in which $v\in Z(F_{v})$.
Consider an arbitrary $(d-1,d-2)$-path $J_{1}\ldots J_{n}$ in $P$ with $J_{1}=F_{u}$ and $J_{n}=F_{v}$. If, for each $i\in [1,n-1]$, there is a vertex $y_{i}\in V(J_{i}\cap J_{i+1})$ with $y_{i}\in V(Z(J_{i}))\cap V(Z(J_{i+1}))$, then there would be a $u-v$ path $L$ in $G-X$ and the claim would hold. Indeed, let $y_0:= u$ and $y_{n}: = v$. For all $i\in [0,n-1]$, there would be a path $L_{i+1}$ in $Z(J_{i+1})$ from $y_i$ to $y_{i+1}$. Concatenating all these paths $L_{1},\ldots, L_{n}$, we would then have a $u-v$ path $L$ in $G-X$, giving a contradiction and settling the claim. We say that a facet-ridge path $J_{1}\ldots J_{n}$ from $F_{u}$ to $F_{v}$ is {\it valid} if the aforementioned vertex $y_{i}$ exists for each $i\in [1,n-1]$; otherwise it is {\it invalid}.
Hence {\bf it remains to show that, for some facet-ridge path $J_{1}\ldots J_{n}$ from $J_{1}=F_{u}$ to $J_{n}=F_{v}$, there exists a vertex in $V(Z(J_{i}))\cap V(Z(J_{i+1}))$ for each $i\in [1,n-1]$ when $d\ge 4$.} In other words, {\bf it remains to show that there exists a valid facet-ridge path from $F_{u}$ to $F_{v}$.}
Take a facet-ridge path $J_{1}\ldots J_{n}$ from $F_{u}$ to $F_{v}$ and suppose it is invalid; that is, $V(Z(J_{i}))\cap V(Z(J_{i+1}))=\emptyset$ for some $i\in [1,n-1]$. Then $V(Z(J_{i}))\cap V(J_{i+1})\subset z(J_{i+1})$. Therefore,
\begin{equation}
\begin{aligned}\label{eq:Claim4-1}
V(J_{i}\cap J_{i+1})=V(J_{i})\cap V(J_{i+1})&=\left[z(J_{i})\cup V(Z(J_{i}))\cup (V(J_{i})\cap X)\right]\cap V(J_{i+1})\\
&\quad\subset z(J_{i})\cup z(J_{i+1})\cup (X\cap V(J_{i}\cap J_{i+1})).
\end{aligned}
\end{equation}
If neither $G(J_{i})$ nor $G(J_{i+1})$ is disconnected by $X$, then $z(J_{i})=z(J_{i+1})=\emptyset$, and by \cref{eq:Claim4-1} and \cref{cl:d-1}, \begin{equation}\label{eq:Claim4-2}2^{d-2}=|V(J_{i}\cap J_{i+1})|\le |X\cap V(J_{i})|\le d-1.\end{equation}
If instead $G(J_{i})$ is disconnected by $X$, then $X\cap V(J_{i})$ consists of all the $d-1$ neighbours of $z(J_{i})$ in $J_{i}$ (\cref{prop:known-cube-cutsets}), and thus, $|X\cap V(J_{i}\cap J_{i+1})|\le d-2$. In this case, by \cref{eq:Claim4-1},
\begin{equation}\label{eq:Claim4-3}2^{d-2}=|V(J_{i}\cap J_{i+1})|\le 2+d-2=d.\end{equation}
\Cref{eq:Claim4-2} does not hold for $d\ge 4$, while \cref{eq:Claim4-3} only holds for $d=4$, in which case it holds with equality. As a consequence, if $d\ge 5$, every facet-ridge path from $F_{u}$ to $F_{v}$ is valid. As a result, the aforementioned $u-v$ path $L$ in $G-X$ always exists for $d\ge 5$, a contradiction. {\bf This completes the case $d\ge 5$}.
The case $d=4$ requires more work. Let \[X:=\{x_{1},\ldots,x_{5}\}.\] Suppose by way of contradiction that every facet-ridge path from $F_{u}$ to $F_{v}$ is invalid. Consider a particular such path $M:=J_{1}\ldots J_{n}$. Then $V(Z(J_{i}))\cap V(Z(J_{i+1}))=\emptyset$ for some $i\in [1,n-1]$, and for that index $i$, \cref{eq:Claim4-3} must hold with equality, which implies that \cref{eq:Claim4-1} must also hold with equality. Consequently, the following setting ensues.
\begin{enumerate}
\item $|z(J_{i})\cup z(J_{i+1})|=2$; that is, $z(J_{i})\ne z(J_{i+1})$;
\item
the graphs of the facets $J_{i}$ and $J_{i+1}$ are both disconnected by $X$;
\item the neighbours of $z(J_{i})$ in $J_{i}$ and of $z(J_{i+1})$ in $J_{i+1}$ are all from $X$;
\item the ridge $R_{i}:=J_{i}\cap J_{i+1}$ consists of four vertices---namely, $z(J_{i})$, $z(J_{i+1})$ and two vertices from $X$, say $x_{1}$ and $x_{2}$;
\item each vertex $z(J_{i})$ and $z(J_{i+1})$ has a neighbour in $J_{i}\setminus J_{i+1}$ and $J_{i+1}\setminus J_{i}$, respectively; and
\item there is a vertex from $X$, say $x_{5}$, lying outside $J_{i}\cup J_{i+1}$.
\end{enumerate}
Any pair of facets in this setting are said to be in {\it Configuration A} and the ridge in which they intersect is said to be {\it problematic}. For instance, the pair $(J_{i}, J_{i+1})$ is in Configuration A and the ridge $R_{i}$ is problematic; see \cref{fig:Aux-Conn-Thm-A}(a).
For a facet-ridge path from $F_{u}$ to $F_{v}$ to be invalid, it must have a pair of facets in Configuration A.
\begin{figure}
\includegraphics{Aux-Connectivity-Theorem-A}
\caption{Auxiliary figure for \cref{cl:separator-5-d-3} of \cref{thm:cubical-connectivity}. (a) Configuration A: two facets $J_{i}$ and $J_{i+1}$ whose graphs are disconnected by $X=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}$ and a problematic ridge $R_{i}:=J_{i}\cap J_{i+1}$. (b)--(d) The facets $F_{u}$ and $F_{u}'$ are both disconnected by $X$ and intersect at an edge. The ridge $R_{u}:=F_{u}\cap F_{u}''$ that defines a new facet $F_{u}''$ is highlighted.}\label{fig:Aux-Conn-Thm-A}
\end{figure}
We want to be more careful when selecting the facets $F_{u}$ and $F_{v}$ and when selecting the facet-ridge path $M$ from $F_{u}$ to $F_{v}$. We require the following.
\begin{equation}\label{claim-Fu-Fv}
\parbox{0.8\textwidth}{The facets $F_{u}$ and $F_{v}$ can be picked so that their graphs are not disconnected by $X$; that is, $G(F_{u})-X$ and $G(F_{v})-X$ are both connected subgraphs of $G(F_{u})$ and $G(F_{v})$, respectively.} \tag{*}
\end{equation}
\begin{proof}[Proof of \eqref{claim-Fu-Fv}] Suppose that the facet $F_{u}$ cannot be picked as desired. Then the graph of $F_{u}$ is disconnected by $X$, and by \cref{prop:known-cube-cutsets} there is a vertex $z(F_{u})\in G(F_{u})$ whose neighbours in $F_{u}$ are all from $X$. Say $X\cap V(F_{u})=\{x_{i_{1}},x_{i_{2}},x_{i_{3}}\}$. Recall that $u\in Z(F_{u})$.
Since $u$ has degree at least five (it is nonsimple), it follows that there is a facet $F_{u}'$ in $P$ containing $u$ and intersecting $F_{u}$ at a vertex or an edge. Since $F_{u}'$ contains $u$, its graph must be disconnected by $X$ (otherwise it is the desired facet). Therefore $|X\cap V(F_{u}')|=3$, and thus, $X\cap V(F_{u})\cap V(F_{u}')\neq \emptyset$. As a consequence, we find that $F_{u}\cap F'_{u}$ is an edge between a vertex of $X$, say $x_{i_{2}}$, and $u$. It follows that $X\cap V(F'_{u})=\{x_{i_{2}},x_{i_{4}},x_{i_{5}}\}$. Three configurations are possible: \cref{fig:Aux-Conn-Thm-A}(b)--(d).
The argument remains unchanged in all the three configurations. Refer to \cref{fig:Aux-Conn-Thm-A}(b) for concreteness. Consider the ridge $R_{u}$ of $F_{u}$ that contains the edge $ux_{i_{2}}$ but does not contain the vertex $x_{i_{3}}$; the ridge $R_{u}$ is highlighted in \cref{fig:Aux-Conn-Thm-A}(b). Let $F_{u}''$ be the facet of $P$ that intersects $F_{u}$ at $R_{u}$. Then $X\cap V(F_{u}'') \subseteq \{x_{i_{2}},x_{i_{4}}\}$, since $F_{u}\cap F_{u}''$ and $F_{u}'\cap F_{u}''$ are faces that contain $u$. Therefore, the graph of $F_{u}''$ is not disconnected by $X$ and $F_{u}''$ could have been chosen as $F_{u}$. As a consequence of this contradiction, the facet $F_{u}$ can be picked as desired.
Similar analysis shows that the facet $F_{v}$ can also be picked so that $G(F_{v})$ is not disconnected by $X$. This completes the proof of \eqref{claim-Fu-Fv}.
\end{proof}
We are now ready to complete the proof of the claim by showing that we can always find a valid facet-ridge path from $F_{u}$ to $F_{v}$.
There are at least four independent facet-ridge paths from $F_{u}$ to $F_{v}$ (\cref{lem:star-minus-facet-F}(i))---say $M_{a}, M_{b}, M_{c}$ and $M_{d}$---and at least four pairs of facets exhibiting Configuration A---one per path. Each pair of facets in Configuration A gives rise to a problematic ridge. We may assume that $M=M_{a}$. The ensuing five points are key.
\begin{enumerate}
\item The facet $F_{u}$ or $F_{v}$ does not appear in any Configuration A (by Statement~\eqref{claim-Fu-Fv}).
\item Any facet of $P$ other than $F_{u}$ and $F_{v}$ may appear in at most one facet-ridge $F_{u}-F_{v}$ path; in particular, it appears in at most one pair exhibiting Configuration A.
\item The problematic ridges are pairwise distinct, as the paths $M_{a},M_{b},M_{c},M_{d}$ are independent.
\item Each problematic ridge appears in precisely one of the paths $M_{a},M_{b},M_{c},M_{d}$.
\item Each nonproblematic ridge $R$ of $P$ present in a Configuration A appears in at most two paths in $\{M_{a},M_{b},M_{c},M_{d}\}$. This is so because $R$ is the intersection of two facets $F$ and $F'$, and the facet $F$ or $F'$ appears in at most one such path.
\end{enumerate}
With a counting argument we show that Configuration A cannot occur in all the four paths. We count the ridges that contain two vertices from $X$ and are present in a Configuration A.
For every pair of facets $(J,J')$ exhibiting Configuration A, there are five ridges in $J\cup J'$ containing two vertices from $X$. For instance, for the pair $(J_{i}, J_{i+1})$ of \cref{fig:Aux-Conn-Thm-A}(a), the pairs $(x_{1},x_{2})$, $(x_{1},x_{3})$, $(x_{2},x_{3})$, $(x_{1},x_{4})$ and $(x_{2},x_{4})$ induce the five ridges. So, considering the four aforementioned $F_{u}-F_{v}$ paths, we have a total of twenty incidences between Configurations A and ridges containing two vertices from $X$, counted with multiplicity. Besides, there are ten ways of pairing two vertices from $X$, and so there are at most ten distinct ridges containing two vertices from $X$.
Each problematic ridge appears in precisely one Configuration A; there are at least four problematic ridges, and therefore at most six nonproblematic ones. Each nonproblematic ridge that contains two vertices from $X$ lies in two facets, and consequently appears in at most two pairs of facets exhibiting Configuration A (that is, in at most two Configurations A). Accounting for the ten ridges containing two vertices from $X$---at least four problematic and at most six nonproblematic---we obtain at most $4+6\times 2=16$ incidences between such ridges and Configurations A. Since the four Configurations A require twenty such incidences, we reach a contradiction; hence some facet-ridge path from $F_{u}$ to $F_{v}$ contains no Configuration A and is therefore valid. {\bf This completes the case $d=4$}, and with it the proof of the claim.
\end{claimproof}
We now complete the proof of the theorem. \cref{cl:separator-d-4,cl:separator-5-d-3} ensure that, for $d\ge 4$, a minimum separator $X$ with cardinality at most $2d-3$ in a cubical $d$-polytope consists of the neighbours of a vertex. Thus {\bf the function $g(d)$ exists and satisfies $2d-3\le g(d)$.}
From \cref{cor:f-g} it then follows that $f(d)\ge 2d-2$; in other words, a cubical $d$-polytope with minimum degree $\delta$ is $\min\{\delta,2d-2\}$-connected. This completes the proof of the theorem.
\end{proof}
A simple corollary of \cref{thm:cubical-connectivity} is the following.
\begin{corollary}\label{cor:cubical-nsimple-connectivity} A cubical $d$-polytope with no simple vertices is $(d+1)$-connected.
\end{corollary}
As we mentioned in the introduction, an open problem that arises naturally from \cref{thm:cubical-connectivity} is \cref{prob:bounds}.
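The separator structure asserted by \cref{thm:cubical-connectivity} can be sanity-checked on the smallest cubical $d$-polytope, the $d$-cube itself. The following Python sketch (our illustration, not part of the paper) builds the graph of the $4$-cube, deletes the neighbourhood of a vertex, and confirms that exactly two components remain, one of them the vertex itself:

```python
from collections import deque

def hypercube_graph(d):
    """Graph of the d-cube Q_d: vertices are the integers 0..2^d - 1,
    adjacent iff their binary labels differ in exactly one bit."""
    return {v: [v ^ (1 << i) for i in range(d)] for v in range(2 ** d)}

def components(adj, removed):
    """Connected components (as vertex sets) after deleting `removed`,
    computed by breadth-first search."""
    seen, comps = set(removed), []
    for s in adj:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

adj = hypercube_graph(4)        # the 4-cube, the simplest cubical 4-polytope
separator = set(adj[0])         # the 4 neighbours of vertex 0
parts = components(adj, separator)
print(len(parts), sorted(len(p) for p in parts))   # 2 [1, 11]
```

Removing the four neighbours of a vertex of $Q_4$ leaves exactly two components, the singleton consisting of that vertex and one large component, matching the description of minimum separators in the theorem.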
\section{Acknowledgments} The authors would like to thank the anonymous referees for their very detailed comments and suggestions. The presentation of the paper has greatly benefited from their input.
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
| {
    "timestamp": "2019-07-16T02:20:39",
    "yymm": "1801",
    "arxiv_id": "1801.06747",
    "language": "en",
    "url": "https://arxiv.org/abs/1801.06747",
    "abstract": "A cubical polytope is a polytope with all its facets being combinatorially equivalent to cubes. We deal with the connectivity of the graphs of cubical polytopes. We first establish that, for any $d\\ge 3$, the graph of a cubical $d$-polytope with minimum degree $\\delta$ is $\\min\\{\\delta,2d-2\\}$-connected. Second, we show, for any $d\\ge 4$, that every minimum separator of cardinality at most $2d-3$ in such a graph consists of all the neighbours of some vertex and that removing the vertices of the separator from the graph leaves exactly two components, with one of them being the vertex itself.",
    "subjects": "Combinatorics (math.CO)",
    "title": "Connectivity of cubical polytopes"
} |
https://arxiv.org/abs/2209.14474 | A slight generalization of Steffensen Method for Solving Non Linear Equations | In this article, we present an iterative method to find simple roots of nonlinear equations, that is, to solve an equation of the form $f(x) = 0$. Unlike Newton's method, the method we propose does not require the evaluation of derivatives. The method is based on the classical Steffensen's method and is a slight modification of it. The proofs of the theoretical results are stated using Landau's little o notation and simple concepts of real analysis. We prove that the method converges and that its rate of convergence is quadratic. The method presents some advantages when compared with Newton's and Steffensen's methods, as illustrated by the numerical tests given. | \section{Introduction}
Iterative methods for solving nonlinear real equations of the form $f(x) = 0$ have been widely studied by many researchers around the world.
Newton's (or Newton-Raphson's) method is certainly the best-known iterative method and is studied in any numerical calculus course. From an initial guess $x_0$, one defines the sequence $(x_n)$ given by
\begin{equation}
x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)},
\end{equation}
when $f'(x_n)\neq 0$ (which ensures the sequence is well defined). It is shown (see, for example, \cite{ostrowski1973solution}), under certain hypotheses, that if $x_0$ is chosen close enough to $p$, where $f(p) = 0$, then $\lim x_n = p$.
On the one hand, Newton's method is very efficient, because its convergence is quadratic; on the other hand, it requires the computation of derivatives, which can be computationally expensive. An alternative idea is to approximate the derivative in the method in some way. The secant (see \cite{diez2003note}), Steffensen (see \cite{steffensen1933remarks}) and Kurchatov (see \cite{shakhno2004kurchatov}) methods apply this idea (derivative-free methods) and are probably the best-known iterative algorithms free of derivatives. As is known in the literature, the secant method has rate of convergence given by the golden ratio, while Steffensen's and Kurchatov's methods have quadratic convergence.
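To make the trade-off concrete, here is a minimal Python sketch of Newton's iteration (our illustration; the example function $f(x)=x^2-2$ and the tolerances are arbitrary choices, not from the paper). Note that the derivative $f'$ must be supplied explicitly:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's iteration x_{n+1} = x_n - f(x_n)/f'(x_n),
    stopped once |f(x_n)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d = fprime(x)
        if d == 0:
            raise ZeroDivisionError("f'(x_n) = 0: the iteration is undefined")
        x -= fx / d
    return x

# Illustrative problem: f(x) = x^2 - 2, whose positive root is sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)
print(root)   # approximately 1.4142135623730951
```

The derivative-free methods discussed above replace the explicit `fprime` argument by a finite-difference approximation built from values of $f$ alone.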
Based on these ideas, many researchers have worked to obtain derivative-free methods for solving nonlinear equations (see, for example, \cite{candela2019class, wu2001new, wang2009semi, piscoran2019new,wu2000class}).
In this paper, we present an iterative method for finding a simple root of a nonlinear equation based on Steffensen's method. In spite of the simplicity of our ideas, we have not been able to find any reference in the literature that treats them as we do here. The proposed algorithm is given by
\begin{equation}\label{algorithm1}
x_{n+1} = x_n - \dfrac{g\circ f(x_n)\cdot f(x_n)}{f(x_n + g\circ f(x_n)) - f(x_n)},
\end{equation}
where $g$ is a continuously differentiable function that has an isolated zero at the origin. Observe that if $g$ is the identity function, one recovers the classical Steffensen method. In this paper, we refer to \eqref{algorithm1} as the $g$-Steffensen method.
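Iteration \eqref{algorithm1} is straightforward to implement. The following Python sketch is our illustration, not code from the paper; the test function $f(x)=x^2-2$ and the damped choice $g(t)=t/10$ are arbitrary examples satisfying the stated hypotheses on $f$ and $g$:

```python
def g_steffensen(f, g, x0, tol=1e-12, max_iter=100):
    """The g-Steffensen iteration:
        x_{n+1} = x_n - g(f(x_n)) * f(x_n) / (f(x_n + g(f(x_n))) - f(x_n)).
    With g the identity this is the classical Steffensen method."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        gfx = g(fx)
        denom = f(x + gfx) - fx
        if denom == 0:
            raise ZeroDivisionError("zero denominator in the update")
        x -= gfx * fx / denom
    return x

f = lambda x: x * x - 2.0                 # root at sqrt(2); only f is evaluated
classical = g_steffensen(f, lambda t: t, 1.5)          # g = identity: Steffensen
damped = g_steffensen(f, lambda t: 0.1 * t, 1.5)       # a hypothetical choice of g
print(classical, damped)                  # both close to 1.41421356...
```

No derivative of $f$ appears anywhere: the divided difference $\bigl(f(x_n + g(f(x_n))) - f(x_n)\bigr)/g(f(x_n))$ plays the role of $f'(x_n)$.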
Our main strategy is to use Landau's little o notation and its algebra, which simplifies the writing of proofs. This paper is organized as follows. In Section \ref{sectionLittleONotation}, we present the little o notation. For completeness, in Proposition \ref{propLittleO} we enumerate simple but important properties that allow us to describe the algebra of the little o notation. In Section \ref{sectionMethod}, we prove two theorems that ensure convergence and the rate of convergence. In Section \ref{sectionNumericalTests}, some numerical tests and comments are given. We present examples in which \eqref{algorithm1} is effective for certain choices of the function $g$, compared with Steffensen's and Newton's methods.
\section{A note on Little o Notation}\label{sectionLittleONotation}
For completeness, we define the little o notation and the algebra used.
\begin{definition}\label{defLittleO_a}
Let $X $ be a nonempty subset of $\mathbb{R}$ and $a$ a limit point of $X$. For given functions $f,
g: X \rightarrow \mathbb{R}$, we say that $f(x) = o(g(x))$, or just $f = o(g)$, as $x \rightarrow a$, if $f\in o(g)$ where
\[
o(g) = \{ h: X \rightarrow \mathbb{R}: \forall \varepsilon > 0 \,\exists \delta >0 \text{ such that } |h(x)| < \varepsilon |g(x)| \text{ for all }x \in X \cap (a - \delta , a + \delta) \backslash \{a\} \}.
\]
\end{definition}
In a similar way, one can define
\begin{definition}\label{defLittleO_infty}
Let $X \subset \mathbb{R} $ be unbounded above. For given functions $f,
g: X \rightarrow \mathbb{R}$, we say that $f(x) = o(g(x))$, or just $f = o(g)$, as $x \rightarrow \infty$, if $f\in o(g)$, where
\[
o(g) = \{ h: X \rightarrow \mathbb{R}: \forall \varepsilon > 0 \,\exists M >0 \text{ such that } |h(x)| < \varepsilon |g(x)| \text{ for all }x>M \}.
\]
\end{definition}
The proof of the proposition below is straightforward and is omitted.
\begin{proposition}\label{propLittleO}
Let $f, g, h, F$ and $G$ be real functions. The statements below hold both as $x\to a$ and as $x\to \infty$. If
\begin{enumerate}
\item[$(i)$] $c \neq 0$ and $f = o(F)$, then $cf = o(F)$;
\item[$(ii)$] $f = o(F)$ and $g=o(G)$, then $f \cdot g = o(FG)$;
\item[$(iii)$] $f = o(g)$ and $g = o(h)$, then $f = o(h)$;
\item[$(iv)$] $f = o(F)$ and $g = o(G)$, then $f + g = o(H)$, where $H = \max\{F, G\}$.
\end{enumerate}
\end{proposition}
According to Proposition \ref{propLittleO}, it is possible to define an algebra for the little o notation.
\begin{definition}\label{defLitteOAlgebra}
Let $f,g$ be real functions. One can define the operations (also for $x\to a$ or $x \to \infty$):
\begin{enumerate}
\item[$(i)$] $c \cdot o(f) = o(f)$, if $c \neq 0$;
\item[$(ii)$] $o(f) \cdot o(g) = o(fg)$;
\item[$(iii)$] $f \cdot o(g) = o(fg)$;
\item[$(iv)$] $o(f) + o(g) = o(h)$, where $h = \max\{f,g\}$.
\end{enumerate}
\end{definition}
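As a quick numerical illustration of rule $(ii)$ (with functions of our own choosing), take $h_1(x)=x^2 = o(x)$ and $h_2(x)=x^3 = o(x^2)$ as $x\to 0$; the product should satisfy $h_1 h_2 = o(x\cdot x^2)$, so the ratio $(h_1 h_2)/(x\cdot x^2)$ must tend to $0$:

```python
# h1 = o(f) with h1(x) = x**2, f(x) = x; h2 = o(g) with h2(x) = x**3, g(x) = x**2.
# Rule (ii) predicts h1*h2 = o(f*g), i.e. (h1*h2)/(f*g) -> 0 as x -> 0.
def ratio(x):
    h1, h2 = x ** 2, x ** 3
    f, g = x, x ** 2
    return (h1 * h2) / (f * g)

ratios = [ratio(10.0 ** -k) for k in range(1, 7)]  # x = 0.1, 0.01, ..., 1e-6
```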
\section{Convergence of $g-$Steffensen Method}\label{sectionMethod}
\begin{theorem}
Let $f,g:\mathbb{R} \to \mathbb{R}$ be continuously differentiable real functions. Assume that $p$ is an isolated zero of $f$ such that $f'(p)\neq 0$. Suppose that $g$ has an isolated zero at the origin. If $f''$ is continuous, then there is a neighborhood $V$ of $p$ such that the sequence $(x_n)$ produced by \eqref{algorithm1}, with $x_0 \in V$, converges to $p$.
\end{theorem}
\begin{proof}
Let $\phi$ be the real function given by
\[
\phi(x) = x-\dfrac{f(x)}{\rho(x)},
\]
where
\[
\rho(x)=
\begin{cases}
\dfrac{f(x + g(f(x))) - f(x)}{g(f(x))}, & \text{if }x\neq p,\\
f'(p), & \text{if }x = p.
\end{cases}
\]
It is straightforward to see that $\rho$ is a continuous function and that fixed points of $\phi$ are roots of $f$.
From the Taylor series expansion and Definition \ref{defLitteOAlgebra}, as $x\to p$, we have:
\begin{eqnarray*}
\rho'(x) &=& f''(x) + o(1) + \left(\frac{f'(x)}{g \circ f(x)} + f''(x) + o(1)\right) \cdot (g \circ f)'(x) \\
&-& {(g \circ f)'(x)} \left( \frac{f'(x)}{g \circ f(x)} + \frac{f''(x)}{2} + o(1) \right)\\
&=& f''(x) + \frac{1}{2} f''(x) (g \circ f)'(x) +\left( (g \circ f)'(x) \right)\cdot o(1).
\end{eqnarray*}
Therefore
\begin{equation}\label{eqRhoPrime}
\lim_{x \rightarrow p} \rho' (x) = f''(p) + \frac{1}{2} f''(p) \cdot \left(g\circ f\right)'(p).
\end{equation}
On the other hand, we can write
\begin{align*}
\frac{\rho (x) - \rho (p)}{x - p} & = \frac{f'(x) + \frac{f''(x)}{2} g \circ f(x) + o(g \circ f(x)) - f'(p)}{x - p} \\
& =\frac{f' (x) - f'(p)}{x - p} + \frac{1}{2} f''(x) \cdot \frac{g \circ f(x) - g \circ f(p)}{x-p} + \frac{g \circ f(x) - g \circ f (p)}{x-p} o(1).
\end{align*}
Then
\[
\rho'(p) = f''(p) + \frac{1}{2} f''(p) \cdot \left(g\circ f\right)'(p).
\]
According to \eqref{eqRhoPrime}, $\rho$ is a $C^1$ function, and hence so is $\phi$. Since $f(p)=0$ and $\rho(p)=f'(p)$, we have $\phi'(p) = 1 - f'(p)/\rho(p) = 0$. This ensures the existence of $V$, and the theorem is proved.
\end{proof}
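The limit \eqref{eqRhoPrime} can be checked numerically. The sketch below is our own example, not taken from the text: it uses $f(x)=x^2-2$ and $g=\sin$, so that $p=\sqrt{2}$, $f'(p)=2\sqrt{2}$, $f''(p)=2$ and $(g\circ f)'(p)=g'(0)f'(p)=2\sqrt{2}$, giving the predicted value $\rho'(p)=2+2\sqrt{2}$.

```python
import math

f = lambda x: x * x - 2.0
g = math.sin
p = math.sqrt(2.0)

def rho(x):
    # rho(x) = (f(x + g(f(x))) - f(x)) / g(f(x)) for x != p, rho(p) = f'(p) = 2p
    gf = g(f(x))
    return (f(x + gf) - f(x)) / gf if gf != 0.0 else 2.0 * p

predicted = 2.0 + 2.0 * math.sqrt(2.0)  # f''(p) + f''(p)*(g o f)'(p)/2
h = 1e-5
estimate = (rho(p + h) - rho(p - h)) / (2.0 * h)  # central difference at p
```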
The quadratic convergence can also be obtained as a consequence of Theorem 1 of \cite{candela2019class}, but we present another proof for completeness.
\begin{theorem}
The rate of convergence of the algorithm given in \eqref{algorithm1} is quadratic.
\end{theorem}
\begin{proof}
Let $p$ be such that $f(p)=0$ and $e_n = x_n - p$. From \eqref{algorithm1}, we have
\begin{align*}
e_{n+1} &= e_n - \dfrac{g\circ f(x_n)\cdot f(x_n) }{f(x_n + g\circ f(x_n)) - f(x_n)} \\
& = e_n - \dfrac{g\circ f(x_n)\cdot f(x_n) }{f(x_n) + f'(x_n)\cdot g\circ f(x_n) + \dfrac{f''(x_n)}{2} \cdot \big(g\circ f(x_n)\big)^2 + o\big(\big(g\circ f(x_n)\big)^2\big) - f(x_n)}\\
& = e_n - \dfrac{f(x_n)}{ f'(x_n) + \dfrac{f''(x_n)}{2}\cdot g\circ f(x_n) + o\big(g\circ f(x_n)\big) }.
\end{align*}
Since
\[
f(x_n) = f(p) + f'(p) e_n + \dfrac{f''(p)}{2}e_n^2 + o(e_n^2) = f'(p) e_n + \dfrac{f''(p)}{2}e_n^2 + o(e_n^2)
\]
and $f'(x_n) = f'(p) + f''(p) e_n + o(e_n)$, one has
\begin{align*}
e_{n+1} & = e_n - \dfrac{f'(p) e_n + \dfrac{f''(p)}{2}e_n^2 + o(e_n^2)}{ f'(p) + f''(p) e_n + o(e_n) + g\circ f(x_n)\left(\dfrac{f''(x_n)}{2} + o(1)\right) }\\
& = \dfrac{\dfrac{f''(p)}{2}e_n^2 + o(e_n^2) + g\circ f(x_n)\left(\dfrac{f''(x_n)}{2} + o(1)\right) e_n }{f'(p) + f''(p) e_n + o(e_n) + g\circ f(x_n)\left(\dfrac{f''(x_n)}{2} + o(1)\right)}.
\end{align*}
On the other hand,
\[
g\circ f(x_n) = g\circ f (p)+(g\circ f )'(p)e_n + o(e_n) = (g\circ f )'(p)e_n + o(e_n),
\]
then one can write
\begin{align*}
e_{n+1} & =\dfrac{\dfrac{f''(p)}{2}e_n^2 + o(e_n^2) + \big((g\circ f )'(p)e_n + o(e_n)\big)\left(\dfrac{f''(x_n)}{2} + o(1)\right) e_n }{f'(p) + f''(p) e_n + o(e_n) + \left((g\circ f )'(p)e_n + o(e_n)\right)\cdot \left(\dfrac{f''(x_n)}{2} + o(1)\right) },
\end{align*}
which gives
\[
\dfrac{e_{n+1}}{e_n^2} = \dfrac{\dfrac{f''(p)}{2} + o(1) + \big((g\circ f )'(p) + o(1)\big)\left(\dfrac{f''(x_n)}{2} + o(1)\right) }{f'(p) + f''(p) e_n + o(e_n) + \left((g\circ f )'(p)e_n + o(e_n)\right)\cdot \left(\dfrac{f''(x_n)}{2} + o(1)\right) }.
\]
It follows that
\[
\lim_{n\to\infty}\dfrac{e_{n+1}}{e_n^2} = \dfrac{1}{f'(p)}\cdot\left(\dfrac{f''(p)}{2} + (g\circ f )'(p) \cdot\dfrac{ f''(p)}{2}\right) = \dfrac{1}{f'(p)}\cdot\left(\dfrac{f''(p)}{2} + g'(0)f'(p) \cdot\dfrac{ f''(p)}{2}\right).
\]
That is, the $g$-Steffensen iterative method has quadratic convergence.
\end{proof}
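The predicted limit of $e_{n+1}/e_n^2$ can also be observed numerically. The sketch below is our own example: it runs \eqref{algorithm1} with $f(x)=x^2-2$, $g=\sin$ and $x_0=1.5$, for which the limit above evaluates to $(1+2\sqrt{2})/(2\sqrt{2})\approx 1.354$.

```python
import math

# f(x) = x**2 - 2, g = sin (so g'(0) = 1), p = sqrt(2), f'(p) = 2*sqrt(2),
# f''(p) = 2: the proof predicts e_{n+1}/e_n**2 -> (1 + 2*sqrt(2))/(2*sqrt(2)).
f = lambda x: x * x - 2.0
g = math.sin
p = math.sqrt(2.0)

x = 1.5
errors = [abs(x - p)]
for _ in range(4):
    gf = g(f(x))
    x = x - gf * f(x) / (f(x + gf) - f(x))
    errors.append(abs(x - p))

ratios = [errors[n + 1] / errors[n] ** 2 for n in range(3)]
predicted = (1.0 + 2.0 * math.sqrt(2.0)) / (2.0 * math.sqrt(2.0))
```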
\section{Numerical Tests}\label{sectionNumericalTests}
In this section, we present some numerical tests using the iteration formula \eqref{algorithm1}. For these tests, we chose six $g$ functions: $g_1(x) = \sin(x)$, $g_2(x) = e^x - 1$, $g_3(x) = x^2$, $g_4(x) = \cos(x) - 1$, $g_5(x) = \tan(x)$ and $g_6(x) = e^{-x} - 1$. In all examples, we looked for the root $p$ in the interval $[a,b]$ and chose the initial guess $x_0$ in $[a,b]$. In the tables presented, $n$ indicates the number of iterations and $x_n$ the resulting approximation of $p$. Comparisons with Steffensen's and Newton's methods are also commented on.
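A test harness along the following lines suffices to reproduce experiments of this kind. This is our own reconstruction (the tolerance, iteration cap, and divergence handling are assumptions of ours; the actual code behind the tables is not given here):

```python
import math

# The six g functions used in the tables.
gs = {
    "g1": math.sin,
    "g2": lambda x: math.exp(x) - 1.0,
    "g3": lambda x: x * x,
    "g4": lambda x: math.cos(x) - 1.0,
    "g5": math.tan,
    "g6": lambda x: math.exp(-x) - 1.0,
}

def run(f, x0, tol=1e-9, max_iter=100):
    """Run the g-Steffensen iteration for each g; report (n, x_n, |f(x_n)|)."""
    rows = {}
    for name, g in gs.items():
        x, n = x0, 0
        try:
            while abs(f(x)) > tol and n < max_iter:
                gf = g(f(x))
                x = x - gf * f(x) / (f(x + gf) - f(x))
                n += 1
            rows[name] = (n, x, abs(f(x)))
        except (OverflowError, ZeroDivisionError, ValueError):
            rows[name] = (n, x, float("inf"))  # blow-up: record as divergent
    return rows
```

For instance, \texttt{run(lambda x: x**3 - x - 1.0, 1.0)} recovers the behaviour reported in Table \ref{tableDegree3Polynomialcat2} for $g_1$, up to differences in the stopping criterion.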
\section*{Examples of functions for which Steffensen's method does not converge}
The examples below do not converge under Steffensen's method, that is, when $g$ is the identity function. The $g$-Steffensen method, however, is convergent for many choices of $g$.
\begin{example}
$f(x) = \sin^2(x) - x^2 + 1$, $[a,b] = [0,3]$, $x_0 = 3$.
The $g$-Steffensen method is convergent for all the $g$'s we chose (see Table \ref{tableSinSquared}).
\begin{table}[H]\centering
{$f(x) = \sin^2(x) - x^2 + 1$.}
{\begin{tabular}{r|l|l|l}
\hline
& $n$ & $x_n$ & $|f(x_n)|$ \\
\hline
$g_1$ & 7 & 1.4044916482153411 & $3.3306690738754696 \times 10^{-16}$\\
$g_2$ & 7 & 1.4044916482153411 & $3.3306690738754696 \times 10^{-16}$\\
$g_3$ & 12 & 1.4044916482773504 & $1.5393597507795675\times 10^{-10}$ \\
$g_4$ & 4 & 1.4044916488265187 & $1.5172312295419488\times 10^{-9}$\\
$g_5$ & 11 & 1.4044916482153413 & $4.440892098500626\times 10^{-16}$\\
$g_6$ & 80 & 1.4044916482153411 & $3.3306690738754696\times 10^{-16}$\\
\hline
\end{tabular}}
\label{tableSinSquared}
\end{table}
\end{example}
\begin{example}
$f(x) = x^3 - x - 1 $, $[a,b] = [0,2]$, $x_0 = 1$.
Among all the $g$ functions we chose, only for $g_2$ is algorithm \eqref{algorithm1} not convergent (see Table \ref{tableDegree3Polynomialcat2}).
\begin{table}[H]\centering
{$f(x) = x^3 - x - 1$}
{\begin{tabular}{r|l|l|l}
\hline
& $n$ & $x_n$ & $|f(x_n)|$ \\
\hline
$g_1$ & 11 & 1.324717957244746 & $2.220446049250313 \times 10^{-16}$\\
$g_3$ & 5 & 1.3247179573200405 & $3.211033661187912\times 10^{-10}$ \\
$g_4$ & 5 & 1.3247179573118653 & $2.862388104318825\times 10^{-10}$\\
$g_5$ & 28 &1.324717957244746 & $2.220446049250313\times 10^{-16}$\\
$g_6$ & 8 & 1.324717957244746 & $2.220446049250313\times 10^{-16}$\\
\hline
\end{tabular}}
\label{tableDegree3Polynomialcat2}
\end{table}
\end{example}
\begin{example}
$f(x) = e^{1-x} - 1$, $[a,b] = [0,3]$, $x_0 = 3$.
The method is convergent for $g_1$, $g_4$, $g_5$ and $g_6$ (see Table \ref{tableExpOne-x}) and divergent for $g_2$ and $g_3$.
\begin{table}[H]\centering
{$f(x) = e^{1-x} - 1$}
{\begin{tabular}{r|l|l|l}
\hline
& $n$ & $x_n$ & $|f(x_n)|$ \\
\hline
$g_1$ & 5 & 1.0 & $0.0$\\
$g_4$ & 11 & 0.9999999999188026 & $8.119727112898545 \times 10^{-11}$\\
$g_5$ & 6 & 1.0 & $0.0$\\
$g_6$ & 24 & 0.9999999999999999 & $0.0$\\
\hline
\end{tabular}}
\label{tableExpOne-x}
\end{table}
\end{example}
\section*{Examples of functions for which neither Newton's nor Steffensen's method converges}
We present some examples in which both Newton's and Steffensen's methods fail to converge, but the $g$-Steffensen method is convergent for some choices of $g$.
\begin{example}
$f(x) = x^3 - 2x + 2$, $[a,b] = [-3,1]$, $x_0 = 1$.
In this example, the $g$-Steffensen method is convergent for $g_1$, $g_3$ and $g_4$ (see Table \ref{tableDegree3Polynomial}) and divergent for $g_2$, $g_5$ and $g_6$.
\begin{table}[H]\centering
{$f(x) = x^3 - 2x + 2$}
{\begin{tabular}{r|l|l|l}
\hline
& $n$ & $x_n$ & $|f(x_n)|$ \\
\hline
$g_1$ & 12 & -1.7692923542386314 & $0.0$\\
$g_3$ & 28 & -1.7692923543026569 & $4.73223682462276\times 10^{-10}$ \\
$g_4$ & 27 & -1.76929235421728 & $1.5781242979073795\times 10^{-10}$\\
\hline
\end{tabular}}
\label{tableDegree3Polynomial}
\end{table}
\end{example}
\begin{example}
$f(x) = \arctan(x-2)$, $[a,b] = [0,3.5]$, $x_0 = 3.5$.
The $g$-Steffensen method is convergent for $g_4$ and $g_6$ (see Table \ref{tableArctanXMinusTwo}) and divergent for $g_1$, $g_2$, $g_3$ and $g_5$.
\begin{table}[H]\centering
{$f(x) = \arctan(x-2)$}
{\begin{tabular}{r|l|l|l}
\hline
& $n$ & $x_n$ & $|f(x_n)|$ \\
\hline
$g_4$ & 6 & 2.000000000000001 & $8.881784197001252\times 10^{-16}$ \\
$g_6$ & 4 & 2 & 0 \\
\hline
\end{tabular}}
\label{tableArctanXMinusTwo}
\end{table}
\end{example}
\begin{example}
$f(x) = x^5 - x + 1$, $[a,b] = [0,3]$, $x_0 = 3$.
If we use $g_1$, $g_4$ or $g_6$, the $g$-Steffensen method is convergent (see Table \ref{tableDegree5Polynomial}), but it is divergent for $g_2$, $g_3$ and $g_5$.
\begin{table}[H]\centering
{$f(x) = x^5 - x +1$}
{\begin{tabular}{r|l|l|l}
\hline
& $n$ & $x_n$ & $|f(x_n)|$ \\
\hline
$g_1$ & 30 & -1.1673039782614187 & $6.661338147750939\times 10^{-16}$\\
$g_4$ & 8 & -1.1673039788241997 & $4.6617254501057914\times 10^{-9}$ \\
$g_6$ & 22 & -1.1673039782614187 & $6.661338147750939\times 10^{-16}$\\
\hline
\end{tabular}}
\label{tableDegree5Polynomial}
\end{table}
\end{example}
\section*{Other cases}
\begin{example}
$f(x) = 0.5x^3 - 6x^2 + 21.5x -22$, $[a,b] = [0,3]$, $x_0 = 3$.
Newton's method is divergent, Steffensen's method converges to a root not in the interval $[0,3]$, and algorithm \eqref{algorithm1} converges to a root in $[0,3]$ for $g_2$ and $g_4$ (see Table \ref{tablePolynomialOtherCases}); it converges to a root outside $[0,3]$ for $g_1$ and $g_3$, and it is divergent for $g_5$ and $g_6$.
\begin{table}[H]\centering
{$f(x) = 0.5x^3 - 6x^2 + 21.5x -22$}
{\begin{tabular}{r|l|l|l}
\hline
& $n$ & $x_n$ & $|f(x_n)|$ \\
\hline
$g_2$ & 20 & 1.7639320225002113 & $7.105427357601002 \times 10^{-15}$\\
$g_4$ & 5 & 1.7639320224170847 & $4.156319732828706\times 10^{-10}$\\
\hline
\end{tabular}}
\label{tablePolynomialOtherCases}
\end{table}
\end{example}
\begin{example}
$f(x) = \cos(x)$, $[a,b] = [0,3.5]$, $x_0 = 3.5$.
In this example, both Newton's and Steffensen's methods converge to a root outside of the interval considered. The $g$-Steffensen method is convergent only for $g_5$, as illustrated in Table \ref{tableCos}.
\begin{table}[H]\centering
{$f(x) = \cos x$}
{\begin{tabular}{r|l|l|l}
\hline
& $n$ & $x_n$ & $|f(x_n)|$ \\
\hline
$g_5$ & 5 & 1.5707963267948966 & $6.123233995736766\times 10^{-17}$\\
\hline
\end{tabular}}
\label{tableCos}
\end{table}
\end{example}
\begin{example}
$f(x) = 10xe^{-x^2} - 1$, $[a,b] = [0,3]$, $x_0 = 3$.
In this example, both Newton's and Steffensen's methods are divergent. The method presented in this article is convergent for $g_5$, reaching in 8 iterations the root in the interval, $x_n = 1.67963061042845$, with $|f(x_n)| = 2.220446049250313\times 10^{-16}$ (see Table \ref{tableFinal}). For $g_1$, $g_2$, $g_3$, $g_4$ and $g_6$, the method is divergent.
\begin{table}[H]\centering
{$f(x) = 10xe^{-x^2} - 1$}
{\begin{tabular}{r|l|l|l}
\hline
& $n$ & $x_n$ & $|f(x_n)|$ \\
\hline
$g_5$ & 8 & 1.67963061042845 & $2.220446049250313\times 10^{-16}$\\
\hline
\end{tabular}}
\label{tableFinal}
\end{table}
\end{example}
\section*{Acknowledgements} The authors thank the Federal University of Ouro Preto; the first and third authors thank FNDE/MEC for partial support.
\bibliographystyle{plain}
% https://arxiv.org/abs/1910.08189
\title{Digital Fundamental Groups and Edge Groups of Clique Complexes}
\begin{abstract}
In previous work, we have defined---intrinsically, entirely within the digital setting---a fundamental group for digital images. Here, we show that this group is isomorphic to the edge group of the clique complex of the digital image considered as a graph. The clique complex is a simplicial complex and its edge group is well-known to be isomorphic to the ordinary (topological) fundamental group of its geometric realization. This identification of our intrinsic digital fundamental group with a topological fundamental group---extrinsic to the digital setting---means that many familiar facts about the ordinary fundamental group may be translated into their counterparts for the digital fundamental group: The digital fundamental group of any digital circle is $\mathbb{Z}$; a version of the Seifert-van Kampen Theorem holds for our digital fundamental group; every finitely presented group occurs as the (digital) fundamental group of some digital image. We also show that the (digital) fundamental group of every 2D digital image is a free group.
\end{abstract}
\section{Introduction}
A \emph{digital image} $X$ is a finite subset $X \subseteq \mathbb{Z}^n$ of the integral lattice in some $n$-dimensional Euclidean space, together with
a particular adjacency relation on the set of points.
This is an abstraction of an actual digital image which consists of pixels (in the plane, or higher dimensional analogues of such).
\emph{Digital topology} refers to the use of notions and methods from (algebraic) topology to study digital images.
The idea in doing so is that such notions can provide useful theoretical background for certain steps of image processing, such as contour filling, border and boundary following, thinning, and feature extraction or recognition (e.g. see p.273 of \cite{Ko-Ro91}). There is an extensive literature on digital topology (e.g. \cite{Ro86, Bo99, Evako2006}).
As a contribution to this literature, in \cite{LOS19c, LOS19a, LOS19b} we have started to build a general ``digital homotopy theory" that brings the full strength of homotopy theory to the digital setting. In \cite{LOS19c} we focussed on the fundamental group. Our definition of the digital fundamental group in \cite{LOS19c}---see below for a r{\'e}sum{\'e}---is \emph{intrinsic}, in the sense that it is defined directly in terms of a digital image, using ingredients such as homotopy of based loops defined within the digital setting. Indeed, a crucial component of our development in \cite{LOS19c} involves the notion of \emph{subdivision} of a digital image---a construction that relies on the ``cubical" setting of the integer lattice and which does not translate out of the digital setting in any obvious way. One of the main results of \cite{LOS19c} shows that this process of subdivision preserves the fundamental group of a digital image (Th.3.16 of \cite{LOS19c}).
In this paper, we make significant advances on the development of \cite{LOS19c}. The main result is the following.
\begin{introtheorem}[\thmref{thm: digital pi1 = edge pi1}]
Let $X$ be a digital image and $\mathrm{cl}(X)$ its clique complex. The digital fundamental group of $X$, as defined in \cite{LOS19c}, is isomorphic to the edge group of $\mathrm{cl}(X)$.
\end{introtheorem}
See below for descriptions of the clique complex and of the edge group of a simplicial complex. Now it is known that the edge group of a simplicial complex is isomorphic to the fundamental group---in the ordinary, topological sense---of the spatial realization of the simplicial complex (see \cite[Th.3.3.9]{Mau96}, repeated as \thmref{thm: edge gp = pi1} here).
It follows that the relatively unfamiliar digital fundamental group may be identified with the much more familiar topological fundamental group of a space that is associated to the digital image in a fairly transparent way. With this identification we may, with care over one or two technical points, translate many known results about the topological fundamental group into their counterparts for the digital fundamental group. Doing so adds greatly to our understanding of the digital fundamental group.
An overview of the organization of the paper and our results follows. \secref{sec: basics} summarizes some basics of digital topology and our definition of the digital fundamental group from \cite{LOS19c}. We have tried to keep this material to the minimum necessary for understanding our results here, and refer to \cite{LOS19c} for fuller details.
In \secref{sec: based homotopy} we give two technical results about relative homotopy of paths or loops. These results were not included in \cite{LOS19c}, so we prove them here since they are needed in the sequel. \secref{sec: edge groups} contains our main result. We review clique complexes and edge groups, and prove the isomorphism asserted in the Theorem above. In \secref{sec: first consequences} we begin to draw consequences from this Theorem. In \thmref{thm: pi of C is Z} we show that the digital fundamental group of any digital circle is $\mathbb{Z}$ (we define what we mean by digital circles in \defref{def: circle}). In \thmref{thm: DVK} we deduce a version of the Seifert-van Kampen theorem for the digital fundamental group. The conclusion is the same as the topological theorem, but we require an extra (mild) hypothesis in addition to the usual connectivity hypotheses. We use this result to give concrete examples of digital images with interesting fundamental groups. \exref{ex: DD} shows that a one-point union of two digital circles has non-abelian digital fundamental group (a free group on two generators, in fact).
\exref{ex: projective plane} shows that a certain digital image---which we construct as a ``digital projective plane"---has torsion in its digital fundamental group (which is $\mathbb{Z}_2$, in fact).
These examples are deduced from special cases of our digital Seifert-van Kampen theorem (\corref{cor: contractible U cap V} and \corref{cor: contractible V}). To the best of our knowledge, these are the first examples given of digital images with fundamental group---in any sense---that is not free abelian. More generally, we are able to realize any finitely presented group as the digital fundamental group of some digital image in Theorem \ref{thm: fg realization}. In the final \secref{sec: 2D free}, we show that the digital fundamental group of every 2D digital image is a free group. This result does not follow automatically from the isomorphism of \thmref{thm: digital pi1 = edge pi1}. Rather, we establish it after some preliminary results in \secref{sec: 2D free} about shortening of paths that are of interest in their own right.
The fundamental group is not new in digital topology (see \cite{Kong89, Bo99}, for example). But our approach and development in \cite{LOS19c} and here differs from versions previously used in digital topology. We give some discussion of these differences now. As we pointed out in \cite{LOS19c}, our fundamental group differs from that of \cite{Bo99} for basic examples of digital images. This difference derives from differences in the notion of homotopy, and is explained in some detail in \cite{LOS19c}.
Ayala et al.~\cite{ADFQ03} work in a setting in which digital images have extra structure that our notion of digital image does not have \emph{a priori}. By making different choices of their ``weak lighting function," for example, one can arrive at different notions of a fundamental group that on a digital circle take $\mathbb{Z}$ or the trivial group. Furthermore, \cite{ADFQ03} does not actually define a fundamental group in the digital setting. Rather, their ``digital" fundamental group is defined \emph{extrinsically} to be the edge group of an auxiliary complex; they do not work in terms of loops and homotopies in the actual digital image itself, as we do.
A digital image in our sense only conforms to one of the general ``device models" considered in \cite{ADFQ03}, namely, the \emph{standard cubical decomposition of Euclidean $n$-space} $\mathbb{R}^n$. Working within that device model, and using $(3^n-1)$-adjacency in $\mathbb{Z}^n$, as we do consistently, we do not know whether it is possible to make a uniform, once and for all, choice of extra structure for which the corresponding fundamental group of \cite{ADFQ03} determined by such a choice agrees with our fundamental group. If not, then our notions of fundamental group are basically different. But even if it were, it is unlikely that such a matching would extend to any other aspects of our more general digital homotopy theory. For example, maps of digital images and homotopies of them do not appear to be discussed in the body of work surrounding \cite{ADFQ03}.
We end this introduction by mentioning a more general notion than that of a digital image to which many of our results apply.
A \emph{tolerance space} is a set with a symmetric, reflexive binary relation (which we interpret as an adjacency relation on the points of the set). Poston (in \cite{Po71}) referred to the use of notions from (algebraic) topology in a tolerance space setting as \emph{fuzzy geometry}, and used ``fuzzy" terminology throughout. Sossinsky, however, makes a sharp distinction between tolerance spaces and more general ``fuzzy mathematics" (see \S5, `Tolerance is Crisp, Not Fuzzy,' of \cite{So86}). For a recent, detailed history of tolerance spaces together with further examples of applications of tolerance spaces, see \cite{Pet12}. Every digital image is a tolerance space. Conversely, every finite tolerance space may be embedded in some $\mathbb{Z}^n$ as a digital image, preserving the adjacencies (we explain how in \propref{prop: digital graph} below). But there may be many ways to ``realize" a given tolerance space as a digital image. Thus, a digital image may be thought of as a tolerance space together with a particular choice of embedding into some $\mathbb{Z}^n$. Our focus is on developing homotopy theory in the context of digital images. However, many of our results apply just as well to tolerance spaces. The main difference between the two concepts, from our point of view, concerns subdivision. Whereas a digital image has canonical subdivisions (that are defined in terms of the ambient $\mathbb{Z}^n$), a tolerance space does not. One can always embed a tolerance space as a digital image in some $\mathbb{Z}^n$, and then use the subdivisions for that dimension, but there is no canonical choice of such. Generally speaking, then, results that we prove about a digital image $X$ may be interpreted equally well as results about a general tolerance space $X$, so long as the proofs do not involve subdividing $X$. 
Examples of this include the results of \cite{LOS19c} through Theorem 3.15---including the definition of the fundamental group, its independence of the choice of basepoint, and its behaviour with respect to products. Also, \thmref{thm: digital pi1 = edge pi1} of this paper and its consequences in \secref{sec: first consequences} apply equally well to tolerance spaces as to digital images (the proofs involve subdivisions of intervals---the domains of paths and loops, but do not involve subdivisions of the digital image/tolerance space). The result of \secref{sec: 2D free}, on the other hand, is specifically about 2D digital images and would only make sense as a statement about tolerance spaces that may be ``realized" as 2D digital images.
\smallskip
\textbf{Acknowledgements.} Thanks to John Oprea for many helpful comments on this work. The second-named author was supported by a travel grant from Ursinus College. Also, thanks to Andrea Bianchi for explaining to one of us (Lupton) the procedure for realizing a finite tolerance space as a digital image, which we give as \propref{prop: digital graph} here.
\section{Digital Topology and a Digital Fundamental Group}\label{sec: basics}
We review some notation and terminology from digital topology, and give a brief summary of our definition of the fundamental group from \cite{LOS19c}.
Because we are dealing with the fundamental group, our basic object of interest is a \emph{based digital image}, and maps and homotopies will preserve basepoints.
\subsection{Adjacency and Continuity}
A \emph{based digital image} $X$ means a finite subset $X \subseteq \mathbb{Z}^n$ of the integral lattice in some $n$-dimensional Euclidean space, together with a choice of a distinguished point $x_0 \in X$ which we refer to as the \emph{basepoint} of $X$, and the following reflexive, symmetric binary relation on $X$ that we refer to as \emph{adjacency}: two (not necessarily distinct) points $x = (x_1, \ldots, x_n) \in X$ and $y = (y_1, \ldots, y_n) \in X$ are adjacent if $|x_i-y_i| \leq 1$ for each $i = 1, \dots, n$.
If $x, y \in X \subseteq \mathbb{Z}^n$, we write $x \sim_X y$ to denote that $x$ and $y$ are adjacent.
We usually suppress the basepoint $x_0$ from our notation unless it is useful to emphasize the particular basepoint. Thus, we will denote a based digital image $(X, x_0)$ simply as $X$, with the understanding that there is some choice of basepoint $x_0$.
We use the notation $I_N$ or $[0, N]$ for the \emph{digital interval of length} $N$. Namely, $I_N \subseteq \mathbb{Z}$ consists of the integers from $0$ to $N$ (inclusive) in $\mathbb{Z}$ where consecutive
integers are adjacent. Thus, we have $I_1 = [0, 1] = \{0, 1\}$, $I_2 = [0, 2] = \{0, 1, 2\}$, and so-on. Occasionally, we may use $I_0$ to denote the singleton point $\{0\} \subseteq \mathbb{Z}$.
We will consistently choose $0 \in I_N$ as the basepoint of an interval.
For based digital images $X \subseteq \mathbb{Z}^n$ and $Y \subseteq \mathbb{Z}^m$, a function $f\colon X \to Y$ is \emph{continuous} if $f(x) \sim_Y f(y)$ whenever $x \sim_Xy$, and is \emph{based} if $f(x_0) = y_0$.
By a \emph{based map} of based digital images, we mean a continuous, based function.
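The definitions above are easy to make concrete. The following sketch (in our own notation, not from the text) represents points of $\mathbb{Z}^n$ as integer tuples and checks adjacency and continuity directly:

```python
from itertools import product

def adjacent(x, y):
    """x ~ y iff the points differ by at most 1 in every coordinate
    (not necessarily distinct)."""
    return all(abs(a - b) <= 1 for a, b in zip(x, y))

def is_continuous(f, X):
    """f: X -> Z^m is continuous iff it sends adjacent points to adjacent points."""
    return all(adjacent(f(x), f(y)) for x, y in product(X, X) if adjacent(x, y))
```

For instance, the identity on the digital interval $I_2$ is continuous, while the doubling map $i \mapsto 2i$ is not, since $0 \sim 1$ but $0 \not\sim 2$.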
\subsection{Paths, Loops and Homotopies}\label{subsec: review homotopy}
Let $(Y, y_0)$ be a based digital image with $Y \subseteq \mathbb{Z}^n$. For any $N \geq 1$, a \emph{based path of length $N$ in $Y$} is a based map $\alpha\colon I_N \to Y$ (with $\alpha(0) = y_0$). Unlike in the topological setting, where any path may be taken with the fixed domain $[0, 1]$, in the digital setting we must allow paths to have different domains. A \emph{based loop of length $N$} in $Y$ is a based path $\gamma\colon I_N \to Y$ that satisfies $\gamma(0) = \gamma(N) = y_0$.
A based digital image $(X, x_0)$ is \emph{connected} if, for any $x \in X$ there is some based path $\alpha\colon I_N \to X$ (for some $N \geq 0$) with $\alpha(N) = x$.
The product of based digital images $(X, x_0)$ with $X\subseteq \mathbb{Z}^m$ and $(Y, y_0)$ with $Y\subseteq \mathbb{Z}^n$ is $\big(X \times Y, (x_0, y_0)\big)$. Here, the Cartesian product $X \times Y \subseteq \mathbb{Z}^{m} \times \mathbb{Z}^{n} \cong \mathbb{Z}^{m+n}$ has the adjacency relation $(x, y) \sim_{X \times Y} (x', y')$ when $x\sim_X x'$ and $y \sim_Y y'$.
Two based maps of based digital images $f, g\colon X \to Y$ are \emph{based homotopic} if, for some $N\geq 1$, there is a (continuous) based map
$$H \colon X \times I_N \to Y,$$
with $H(x, 0) = f(x)$ and $H(x, N) = g(x)$, and $H(x_0, t) = y_0$ for all $t=0, \ldots, N$. Then $H$ is a \emph{based homotopy} from $f$ to $g$, and we write $f \approx g$.
We specialize this to the context of based loops as follows.
Based loops $\alpha, \beta \colon I_M \to Y$ (of the same length) are \emph{based homotopic as based loops} if there is a based homotopy $H\colon I_M \times I_N \to Y$ with $H(0, t) = H(M, t) = y_0$ for all $t \in I_N$. We refer to such a homotopy as a based homotopy of based loops and we write $\alpha \approx \beta$, even though the homotopy is more restrictive here than in the general based sense. The context should make it clear exactly what we intend our homotopies to preserve.
\subsection{Subdivision of Intervals} In our broader digital homotopy theory program, subdivision of digital images plays a prominent role. However, for the purposes of this paper we do not need the general notion of subdivision of a digital image. Rather, we only need subdivision for intervals. We will restrict ourselves to this particular instance of subdivision here, and refer to our other papers for the more general notion---especially \cite{LOS19b} in which we discuss subdivision of maps as well as of general digital images.
For each $k \geq 2$ and each $N \geq 0$, we have a \emph{standard projection} map
$$\rho_k \colon I_{kN + k-1} \to I_N$$
defined by $\rho_k(i) = \lfloor i/k \rfloor$. Here, $\lfloor i/k \rfloor$ denotes the integer part of $i/k$, namely the largest integer less than or equal to $i/k$. Thus $\rho_k$ aggregates the points of $I_{kN + k-1}$ into groups of $k$ consecutive integers, and sends each aggregate to a suitable point of $I_N$. The integers $\{0, \ldots, k-1\}$ are sent to $0 \in I_N$, $\{k, \ldots, 2k-1\}$ are sent to $1$, and so-on. In \cite{LOS19c} and our other papers, we use notation in the style $S(I_N, k)$, and refer to the $k$-fold subdivision of the interval $I_N$, for what here we are simply taking as the interval $I_{kN + k-1}$. We have no need of this general notation here, and so do not adopt it.
Now let $(Y, y_0)$ be a based digital image with $Y \subseteq \mathbb{Z}^n$.
If $\gamma\colon I_N \to Y$ is a based loop in $Y$, then for any $k$,
$$\gamma\circ \rho_k\colon I_{kN + k-1} \to I_N \to Y$$
is also a based loop (of length $kN + k-1$), in that we have $\gamma\circ \rho_k(0) = \gamma(0) = y_0$ and $\gamma\circ \rho_k(kN+k-1) = \gamma(N) = y_0$.
Geometrically speaking, the composition $\gamma\circ \rho_k$ amounts to a reparametrization of the loop $\gamma$. The image traced out in $Y$ is the same, but we pause at each point of the loop for an interval of length $k-1$. This device allows us to compare loops of different lengths, and also provides flexibility in deforming loops by (based) homotopies.
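The standard projection is equally simple to realize in code (a sketch in our own notation, encoding a map $I_{kN+k-1} \to I_N$ as the list of its values):

```python
# rho_k : I_{kN+k-1} -> I_N, rho_k(i) = floor(i/k); the domain has kN + k points.
def rho(k, N):
    return [i // k for i in range(k * N + k)]
```

For example, $\rho_2\colon I_5 \to I_2$ is the list $[0,0,1,1,2,2]$: each point of $I_2$ is repeated $k=2$ times, and consecutive values differ by at most $1$, so $\rho_k$ is continuous.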
\subsection{Concatenation of Paths and Loops}
Suppose $\alpha\colon I_M \to Y$ and
$\beta\colon I_N \to Y$ are paths---not necessarily based paths---in $Y$ that satisfy $\alpha(M) \sim_Y \beta(0)$. Their \emph{concatenation} is the path $\alpha\cdot\beta\colon I_{M+N+1} \to Y$ of length $M+N+1$ in $Y$ defined by
\begin{equation}\label{eq: concat}
\alpha\cdot\beta(t) = \begin{cases} \alpha(t) & 0 \leq t \leq M\\ \beta(t-(M+1)) & M+1 \leq t \leq M+N+1.\end{cases}
\end{equation}
If $\alpha(M) = \beta(0)$, then our definition means that we pause for a unit interval when attaching the end of $\alpha$ to the start of $\beta$.
Given two based loops $\alpha\colon I_M \to Y$ and
$\beta\colon I_N \to Y$, we form their \emph{product} by concatenation:
$$\alpha\cdot\beta\colon I_{M+N+1} \to Y$$
is the based loop of length $M + N +1$ defined by \eqref{eq: concat}.
We pause at the basepoint for a unit interval when attaching the end of $\alpha$ to the start of $\beta$. This product of based loops is strictly associative, as is easily checked.
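As an informal illustration with paths modelled as lists of points (an encoding of our choosing), the concatenation of \eqref{eq: concat} is plain list concatenation: the terminal point of $\alpha$ is not merged with the initial point of $\beta$, which is the unit-interval pause described above, and strict associativity is evident.

```python
def concatenate(alpha, beta):
    # alpha of length M, beta of length N; the result has length M + N + 1.
    # The shared endpoint is NOT merged: this is the "pause" in the text.
    return alpha + beta

p = [(0, 0), (1, 0)]          # a path of length 1
q = [(1, 0), (1, 1)]          # a path of length 1, starting where p ends
r = [(1, 1), (0, 1)]          # a path of length 1, starting where q ends
pq = concatenate(p, q)        # length 3, pausing at (1, 0)
```

Since list concatenation is associative, so is this product, mirroring the remark above.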
\subsection{Subdivision-Based Homotopy of Based Loops and the Fundamental Group}
Two based loops $\alpha \colon I_M \to Y$ and $\beta \colon I_N \to Y$ (generally of different lengths) are \emph{subdivision-based homotopic as based loops} if, for some $k, l$ with $k, l \geq 1$ and $k(M+1) = l(N+1)$, we have
$$\alpha\circ \rho_k\colon I_{kM + k-1} \to I_M \to Y \quad \text{and} \quad \beta\circ \rho_l\colon I_{lN + l-1}\to I_N \to Y$$
based-homotopic as maps $I_{kM + k-1} = I_{lN + l-1} \to Y$, via a based homotopy of based loops; i.e., if we have a homotopy $H\colon I_{kM + k-1} \times I_R \to Y$ that satisfies $H(s, 0) = \alpha\circ \rho_k(s)$ and $H(s, R) = \beta\circ \rho_l(s)$, and also $H(0, t) = H(kM + k-1, t) = y_0$ for all $t \in I_R$.
In \cite{LOS19c} we show that subdivision-based homotopy of based loops is an equivalence relation on the set of all based loops (of all lengths) in $Y$.
Denote by $[\alpha]$ the (subdivision-based homotopy) equivalence class of based loops represented by a based loop $\alpha\colon I_N \to Y$. Thus, we have $[\alpha] = [\alpha\circ \rho_k]$ for any standard projection $\rho_k\colon I_{kN + k-1} \to I_N$. More generally, we write $[\alpha] = [\beta]$ whenever $\alpha$ and $\beta$ are subdivision-based homotopic as based loops in $Y$.
For $Y\subseteq \mathbb{Z}^n$ a based digital image, denote the set of subdivision-based homotopy equivalence classes of based loops in $Y$ by $\pi_1(Y; y_0)$.
As we show in \cite{LOS19c}, setting
$[\alpha]\cdot[\beta] = [\alpha\cdot\beta]$
for based loops $\alpha\colon I_M \to Y$ and $\beta\colon I_N \to Y$ gives a well-defined product on the set $\pi_1(Y; y_0)$. This product is associative, since concatenation of based loops itself is associative.
Now for any path $\gamma\colon I_M \to Y$, let $\overline{\gamma} \colon I_M \to Y$ denote the \emph{reverse path}
$\overline{\gamma}(t) = \gamma(M-t)$. If $\alpha$ is a based loop in $Y$, then so too is its reverse $\overline{\alpha}$.
For any $N \geq 0$, write $C_N \colon I_N \to Y$ for the constant loop defined by $C_N(t) = y_0$ for $0 \leq t \leq N$. Since $C_0\circ \rho_k = C_{k-1} \colon I_{k-1} \to Y$ for any $k$, it follows that all the constant loops $C_N$ represent the same subdivision-based homotopy equivalence class of based loops, which we denote by $\mathbf{e} \in \pi_1(Y; y_0)$.
Then $\pi_1(Y; y_0)$ is a group, with $\mathbf{e}$ a two-sided identity element and
$[\overline{\alpha}]$ a two-sided inverse element of $[\alpha]$, for each $[\alpha] \in \pi_1(Y; y_0)$. See \cite{LOS19c} for details of all this.
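Modelling loops informally as lists of points (our encoding, with names of our choosing), the reverse path and the constant loops are immediate, and one can check directly that $C_0\circ \rho_k = C_{k-1}$, the observation behind the identity class $\mathbf{e}$.

```python
def reverse(gamma):
    # Reverse path: gamma_bar(t) = gamma(M - t).
    return gamma[::-1]

def constant_loop(y0, N):
    # C_N : I_N -> Y, the loop constant at the basepoint y0.
    return [y0] * (N + 1)

y0 = (0, 0)
k = 5
# C_0 o rho_k: precompose the one-point loop C_0 with rho_k(i) = i // k.
C0_sub = [constant_loop(y0, 0)[i // k] for i in range(k)]
```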
\section{Results on relative homotopy}\label{sec: based homotopy}
In this section, paths need not be based.
\begin{definition}
Let $X$ be a digital image and suppose $\alpha, \beta \colon I_M \to X$ are (not-necessarily based) paths of the same length with the same initial point and the same terminal point, so $\alpha(0) = \beta(0)$ and $\alpha(M) = \beta(M)$. (These need not be the same point, unless we want to consider $\alpha$ and $\beta$ as loops.) Then we say that $\alpha$ and $\beta$ are \emph{homotopic relative the endpoints} if there is a homotopy $H\colon I_M \times I_T \to X$, for some $T$, that satisfies $H(s, 0) = \alpha(s)$ and $H(s, T) = \beta(s)$ for $s \in I_M$, as well as $H(0, t) = \alpha(0) = \beta(0)$ and $H(M, t) = \alpha(M) = \beta(M)$ for $t \in I_T$. That is, the endpoints of the paths remain fixed under the homotopy. We use the same notation $\alpha \approx \beta$ for this special kind of homotopy as for the ordinary notion of homotopy (in which the endpoints need not be fixed). Once again, the context should make it clear what we intend our homotopies to preserve.
\end{definition}
\begin{lemma}\label{lem: homotopy rel endpts}
Suppose $\alpha \approx \alpha' \colon I_M \to X$ and $\beta \approx \beta' \colon I_N \to X$ are paths in a digital image $X$ and that the homotopies are relative the endpoints. Suppose that we have $\alpha(M) = \alpha'(M) \sim_X \beta(0) = \beta'(0)$, so that we may form the concatenations $\alpha\cdot\beta$ and $\alpha'\cdot\beta'$. Then we have a homotopy of paths relative the endpoints
$$\alpha\cdot\beta \approx \alpha'\cdot\beta' \colon I_{M+N +1} \to X.$$
If the concatenations are of based loops, then this is a based homotopy of based loops.
\end{lemma}
\begin{proof}
This is basically the same as the proof of part (a) of Lemma 3.6 of \cite{LOS19c}. We reproduce the proof here.
Suppose we have homotopies relative the endpoints $H\colon I_M \times I_R \to X$ and $G\colon I_N \times I_T \to X$ from $\alpha$ to $\alpha'$ and from $\beta$ to $\beta'$ respectively. We first, if necessary, adjust one of the intervals $I_R, I_T$ so that both homotopies are of the same length. Suppose we have $R < T$ (the case in which $R > T$ is handled similarly, and we omit it). Then lengthen $H$ into a homotopy $H' \colon I_M \times I_T \to X$ defined as
$$H' (s, t) = \begin{cases} H(s, t) & 0 \leq t \leq R \\ H(s, R) & R+1 \leq t \leq T.\end{cases}$$
Granting for the moment that this is continuous on $I_M \times I_T$, it is clearly a homotopy relative the endpoints from $\alpha$ to $\alpha'$. To confirm continuity, say we have $(s, t) \sim_{I_M \times I_T} (s', t')$. Since $t\sim_{I_T} t'$, we must have either $\{t, t'\} \subseteq [0, R]$ or $\{t, t'\} \subseteq [R, T]$. If $\{ (s, t), (s', t')\} \subseteq I_M \times I_R$, then continuity of $H$ gives
$H'(s, t) \sim_{X} H'(s', t')$. If $\{ (s, t), (s', t')\} \subseteq I_M \times [R, T]$, then we have $H'(s, t) = H(s, R) \sim_{X} H(s', R) = H'(s', t')$. It follows that this extended $H'$ is continuous. Now define a homotopy (with $H' = H$ in case the original $R$ and $T$ are equal) $H'+G\colon I_{M+N+1} \times I_T \to X$ as
$$(H' +G)(s, t) = \begin{cases} H'(s, t) & 0 \leq s \leq M \\ G(s-(M+1), t) & M+1 \leq s \leq M+N+1.\end{cases}$$
Once again, granting continuity on $I_{M+N+1} \times I_T$, this is clearly a homotopy relative the endpoints from $\alpha\cdot\beta$ to $\alpha'\cdot\beta'$. To check that the two homotopies assemble together continuously, we observe that, if $(s, t) \sim_{I_{M+N+1} \times I_T} (s', t')$, then either $\{s, s'\} \subseteq [0, M+1]$ or $\{s, s'\} \subseteq [M+1, M+N+1]$. Then proceeding as in the first part, and using $H(M, t) = \alpha(M) = \alpha'(M)$ and $G(0, t) = \beta(0) = \beta'(0)$, so that $H(M, t) \sim_X G(0, t')$ for all $t, t' \in I_T$, we confirm the continuity of $(H' +G)$.
\end{proof}
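The two devices in the proof, padding a homotopy so that it runs for longer and assembling two homotopies side by side, can be sketched informally by modelling a homotopy of length $R$ as a list of $R+1$ rows, row $t$ being the path $s \mapsto H(s,t)$; this encoding and the names are ours.

```python
def pad_homotopy(rows, T):
    # Extend a homotopy of length R to length T >= R by repeating its
    # final row, as in the definition of H' in the proof.
    return rows + [rows[-1]] * (T - (len(rows) - 1))

def side_by_side(H_rows, G_rows):
    # (H' + G)(s, t): concatenate the two homotopies row-wise; both must
    # already run for the same number of steps.
    assert len(H_rows) == len(G_rows)
    return [h + g for h, g in zip(H_rows, G_rows)]

H = [[0, 1]]                  # a homotopy of length R = 0 on a path of length 1
Hp = pad_homotopy(H, 2)       # padded to length T = 2
G = [[5], [6], [7]]           # a homotopy of length 2 on a path of length 0
glued = side_by_side(Hp, G)
```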
In our fundamental group, any reparametrization of the form $\alpha\circ \rho_k$, for a based loop $\alpha$, represents the same equivalence class of loops as $\alpha$ in $\pi_1(X; x_0)$. In \cite{Bo99}, a more general kind of reparametrization of loops was used to form the equivalence classes. We define this more general reparametrization of paths or loops here.
\begin{definition}\label{def: triv extn}
Let $\alpha \colon I_M \to X$ be a path. A \emph{trivial extension} of $\alpha$ is any path $\alpha' \colon I_{M'} \to X$ of the following form. For each $i$ with $0 \leq i \leq M$, choose $t_i \in \mathbb{Z}$ with $t_i \geq 0$. Then define $\alpha'$ by
$$\alpha'(s) = \begin{cases} \alpha(0) & 0 \leq s \leq t_0\\
\alpha(1) & t_0 + 1 \leq s \leq t_0 + 1 + t_1 \\
\alpha(2) & t_0 + t_1 + 2 \leq s \leq t_0 + t_1 + 2 + t_2 \\
\ \ \vdots & \ \ \vdots \\
\alpha(M) & \sum_{i=0}^{M-1} t_i + M \leq s \leq \sum_{i=0}^{M} t_i + M. \end{cases}
$$
\end{definition}
If we choose each $t_i = 0$, then we retrieve the original path $\alpha$. Generally, a trivial extension of $\alpha$ is a prolonged version (a re-parametrization) of $\alpha$ that repeats the value $\alpha(i)$ an extra $t_i$ times, to produce a path $\alpha' \colon I_{M'} \to X$ with the same image in $X$ as that of $\alpha$, but of length
$$M' = \sum_{i=0}^{M} t_i + M.$$
We may also view this trivial extension $\alpha'$ as a concatenation of $M+1$ constant paths
$$\alpha' = \alpha_0 \cdot\ \cdots \cdot \alpha_M$$
where each $\alpha_i \colon I_{t_i} \to X$ is a constant path of length $t_i$ at $\alpha(i)$.
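Modelling paths informally as lists (our encoding), a trivial extension is computed directly from the repeat counts $t_i$, and the standard projections arise as the special case $t_i = k-1$ for all $i$.

```python
def trivial_extension(alpha, ts):
    # Repeat the value alpha(i) an extra ts[i] >= 0 times; the result has
    # length M' = sum(ts) + M, as in the definition above.
    assert len(ts) == len(alpha)
    out = []
    for a, t in zip(alpha, ts):
        out.extend([a] * (t + 1))
    return out

alpha = [0, 1, 2]                       # a path of length M = 2
ext = trivial_extension(alpha, [1, 0, 2])
```

Choosing every $t_i = 0$ recovers $\alpha$ itself, and choosing every $t_i = k-1$ recovers $\alpha\circ\rho_k$.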
\begin{lemma}\label{lem: triv ext alpha C}
Let $\alpha \colon I_M \to X$ be any path in $X$. Suppose we have a trivial extension $\alpha' \colon I_{M'} \to X$ of $\alpha$ as above, with at least one of the $t_i$ positive. There is a homotopy relative the endpoints
$$\alpha' \approx \alpha \cdot C_T \colon I_{M'} \to X,$$
where $C_T \colon I_T \to X$ is the constant path at $\alpha(M)$ of length $T = \sum_{i=0}^{M} t_i - 1$.
\end{lemma}
\begin{proof}
Begin with the special case in which one of the $t_i = 1$ and the others are 0 (so we repeat once a single point of $\alpha$). For each $i \in I_M$, write $\beta_i \colon I_{M+1} \to X$ for the trivial extension of this \emph{elementary} kind defined by
$$\beta_i (s) = \begin{cases} \alpha(s) & 0 \leq s \leq i\\ \alpha(s-1) & i+1 \leq s \leq M+1 .\end{cases}$$
\emph{Claim.} We claim that, for any $M\geq 0$ and each $i$ with $0 \leq i \leq M$, we have a homotopy of paths relative the endpoints $\alpha\cdot C_0 \approx \beta_i \colon I_{M+1} \to X$.
\emph{Proof of Claim.}
Notice that $C_0 \colon I_0 \to X$ is the constant path of length $0$ that maps the singleton point $\{0\}$ to $\alpha(M)$. Thus we have an equality of paths $\alpha\cdot C_0 = \beta_M \colon I_{M+1} \to X$ for any $M \geq 0$. Furthermore, we may define a homotopy $H \colon I_{M+1} \times I_1 \to X$ by
$$H(s, t) = \begin{cases} \beta_M(s) & t = 0\\ \beta_{M-1}(s) & t = 1,\end{cases}$$
for any $M \geq 1$. We check that $H$ is continuous. For this, suppose we have $(s, t) \sim (s', t')$ in $I_{M+1} \times I_1$. If $t = t'$, then we have $H(s, t) \sim_X H(s', t)$ from the continuity of either $\beta_M$ (if $t' = t = 0$) or $\beta_{M-1}$ (if $t' = t = 1$). So it remains to check that we have $H(s, 0) \sim_X H(s', 1)$ when $s \sim s'$ in $I_{M+1}$. Because $s \sim s'$, we must have $\{ s, s'\} \subseteq [0, M-1]$ or $\{ s, s'\} \subseteq [M-1, M+1]$. If $\{ s, s'\} \subseteq [0, M-1]$, then $H(s, 0) = \beta_M(s) = \alpha(s)$ and $H(s', 1) = \beta_{M-1}(s') = \alpha(s')$. In this case, then, we have $H(s, 0) \sim_X H(s', 1)$ from the continuity of $\alpha$. For the remaining choices of $\{ s, s'\} \subseteq [M-1, M+1]$, the possible values for $H$ satisfy $\{ H(s, 0), H(s', 1) \} \subseteq \{ \alpha(M-1), \alpha(M) \}$. Continuity of $\alpha$ gives that $\alpha(M-1) \sim_X \alpha(M)$, and it follows that any two values of $H$, when restricted to $[M-1, M+1]\times I_1$, must be adjacent. Thus $H(s, t) \sim_X H(s', t')$ for any pair of adjacent points; $H$ is a (continuous) homotopy. Clearly, we have $H(0, t) = \alpha(0)$ and $H(M+1, t) = \alpha(M)$ for $t \in I_1$, and so $H$ is a homotopy relative the endpoints
\begin{equation}\label{eq: elementary te}
\alpha \cdot C_0 = \beta_M \approx \beta_{M-1} \colon I_{M+1} \to X.
\end{equation}
Now assume inductively that, for any $M \geq 0$, we have a homotopy of paths relative the endpoints $\alpha\cdot C_0 \approx \beta_{M-k}\colon I_{M+1} \to X$, for some $k$ with $0 \leq k \leq M-1$. Induction starts with $k =0$ or $1$, by the observations we just made leading up to \eqref{eq: elementary te}. For the inductive step, re-write $\beta_{M-(k+1)}\colon I_{M+1} \to X$ as a concatenation $\gamma_{M-(k+1)} \cdot \gamma'$ with
$$\gamma_{M-(k+1)}(s) = \beta_{M-(k+1)}(s) \text{ for } 0 \leq s \leq M-k$$
the path of length $M-k$ that agrees with $\beta_{M-(k+1)}$ through the repeated value $\beta_{M-(k+1)}\big(M-(k+1)\big)= \beta_{M-(k+1)}(M-k)$, and
$$\gamma'(s) = \beta_{M-(k+1)}(s+M-k+1) \text{ for } 0 \leq s \leq k$$
the path of length $k$ that completes $\beta_{M-(k+1)}$ when concatenated with $\gamma_{M-(k+1)}$. Then $\gamma_{M-(k+1)}$ is of the form of an elementary trivial extension (but of a path of length $M-k-1$) which we may write as $\gamma_{M-(k+1)} = \gamma\cdot C'_0$, where $\gamma(s) = \gamma_{M-(k+1)}(s) = \beta_{M-(k+1)}(s)$ for $0 \leq s \leq M-(k+1)$ and $C'_0$ the constant path of length $0$ at $\gamma\big(M-(k+1)\big) = \gamma_{M-(k+1)}\big(M-(k+1)\big) = \beta_{M-(k+1)}\big(M-(k+1)\big)$. As above, define a homotopy $G \colon I_{M-k} \times I_1 \to X$ by
$$G(s, t) = \begin{cases} \gamma_{M-(k+1)}(s) & t = 0\\ \gamma_{M-(k+2)}(s) & t = 1,\end{cases}$$
where $\gamma_{M-(k+2)}$ denotes the elementary trivial extension of $\gamma$ that repeats the value $\gamma\big(M-(k+2)\big)$. That is,
$$\gamma_{M-(k+2)}(s) = \begin{cases} \gamma(s) & 0 \leq s \leq M-(k+2)\\ \gamma(s-1) & M-(k+1) \leq s \leq M-k.\end{cases}$$
Exactly as we did leading up to \eqref{eq: elementary te}, we may confirm the continuity of this $G$, and check that it is a homotopy relative the endpoints
$$\gamma_{M-(k+1)} \approx \gamma_{M-(k+2)}\colon I_{M-k} \to X.$$
From \lemref{lem: homotopy rel endpts} it follows that we have a homotopy relative the endpoints
$$\gamma_{M-(k+1)}\cdot \gamma' \approx \gamma_{M-(k+2)} \cdot \gamma' \colon I_{M+1} \to X.$$
But above, we chose $\gamma_{M-(k+1)}$ so that $\beta_{M-(k+1)} = \gamma_{M-(k+1)} \cdot \gamma'$, and it is easy to see that we have $\beta_{M-(k+2)} = \gamma_{M-(k+2)} \cdot \gamma'$. Hence we have a homotopy relative the endpoints
$$\beta_{M-(k+1)} \approx \beta_{M-(k+2)} \colon I_{M+1} \to X,$$
and the induction step is complete. The claim follows.
\emph{End of Proof of Claim.}
Now a typical trivial extension may be obtained by repeatedly making elementary extensions. Suppose inductively that the assertion of the lemma is true for all trivial extensions of $\alpha$ with $\sum_{i=0}^{M} t_i \leq k$, for some $k \geq 1$. Induction starts with $k=1$, and we have just established this in the claim. Now say we have a trivial extension $\alpha' \colon I_{M'} \to X$ of $\alpha$ with $M' = M + \sum_{i=0}^{M} t_i$ and $\sum_{i=0}^{M} t_i = k+1$, so that the $T$ of the statement is $T = \sum_{i=0}^{M} t_i - 1 = k$. Suppose $n$ is the first index for which $t_n > 0$. Then $\alpha'$ is an elementary extension of the path $\alpha'' \colon I_{M'-1} \to X$ defined by
$$\alpha''(s) = \begin{cases} \alpha'(s) & 0 \leq s \leq n + \sum_{i=0}^{n} t_i - 1 \\
\alpha'(s+1) & n + \sum_{i=0}^{n} t_i \leq s \leq M' - 1 ,\end{cases}$$
a path of length $M'-1 = M+k$. Since $\alpha''$ is a trivial extension of $\alpha$ whose repeat counts sum to $k$, we may apply the inductive hypothesis to obtain a homotopy relative the endpoints
$$\alpha'' \approx \alpha \cdot C_{T-1}\colon I_{M'-1} \to X.$$
And, because $\alpha'$ is an elementary extension of $\alpha''$ we also have (from the claim) a homotopy relative the endpoints
$$\alpha' \approx \alpha'' \cdot C_{0}\colon I_{M'} \to X.$$
Now \lemref{lem: homotopy rel endpts} gives a homotopy relative the endpoints
$$\alpha'' \cdot C_{0} \approx \alpha \cdot C_{T-1}\cdot C_{0} \colon I_{M'} \to X.$$
Transitivity of homotopy relative the endpoints, together with the observation that $C_{T-1}\cdot C_{0} = C_T \colon I_T \to X$, now completes the induction. The result follows.
\end{proof}
\section{Edge Groups and Clique Complexes}\label{sec: edge groups}
The \emph{edge group of a simplicial complex} is a group defined, like the digital fundamental group of a digital image or the fundamental group of a topological space, in terms of equivalence classes of edge loops (namely, loops consisting of edge paths). The equivalence relation is given by a combinatorial notion of homotopy. We repeat some of the definitions from \cite[\S3.3]{Mau96}.
Suppose that $K$ is a simplicial complex with $1$-skeleton consisting of vertices $V$ and edges $E$. An \emph{edge path} is a finite sequence
$\{ v_0, v_1, \ldots, v_n\}$ of vertices in $V$ such that, for each $i$ with $0 \leq i \leq n-1$, we have $v_i = v_{i+1}$ or $\{v_i, v_{i+1} \} \in E$ an edge of $K$. An edge path is an \emph{edge loop} if we have, in addition, $v_0 = v_n$.
\begin{definition}\label{def: elem edge htpy}
By an \emph{elementary edge-homotopy (relative the endpoints)} we mean one of the following operations on edge paths:
\begin{itemize}
\item[(a)] If $v_i = v_{i+1}$, for some $i$ with $0 \leq i \leq n-1$, then replace an edge path $\{ v_0, \ldots, v_i, v_{i+1}, v_{i+2}, \ldots v_n\}$ with $\{ v_0, \ldots, v_i, v_{i+2}, \ldots v_n\}$. Namely, delete a repeated vertex. Or, conversely, for any $i$ with $0 \leq i \leq n$, replace an edge path $\{ v_0, \ldots, v_i, v_{i+1}, \ldots v_n\}$ with $\{ v_0, \ldots, v_i, v_i, v_{i+1}, \ldots v_n\}$. Namely, insert a repeat of a vertex.
\item[(b)] If $\{v_{i-1}, v_i, v_{i+1}\}$ form a simplex of $K$, for some $i$ with $1 \leq i \leq n-1$, replace an edge path $\{ v_0, \ldots, v_{i-1}, v_i, v_{i+1}, \ldots v_n\}$ with $\{ v_0, \ldots, v_{i-1}, v_{i+1}, \ldots v_n\}$. Or, conversely, for any $i$ with $0 \leq i \leq n-1$, replace an edge path $\{ v_0, \ldots, v_i, v_{i+1}, \ldots v_n\}$ with $\{ v_0, \ldots, v_i, v, v_{i+1}, \ldots v_n\}$ for any $v \in V$ for which $\{v_{i}, v, v_{i+1}\}$ form a simplex of $K$.
\end{itemize}
We say that two edge paths are \emph{edge-homotopic (relative their endpoints)} if one can apply a finite sequence of elementary edge homotopies, of types (a) and (b) in any order or combination, so as to start with one of the edge paths and arrive at the other. We refer to the sequence of elementary edge homotopies as an \emph{edge homotopy} from one edge path to the other. If two edge paths $\alpha = \{ v_0, v_1, \ldots, v_n\}$ and $\beta = \{ w_0, w_1, \ldots, w_m\}$ with $v_0 = w_0$ and $v_n = w_m$ are edge-homotopic (relative their endpoints), then we write $\alpha \approx_\mathrm{e} \beta$.
\end{definition}
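The two kinds of elementary move lend themselves to a direct combinatorial sketch, with an edge path as a list of vertices and the complex $K$ supplied as an `is_simplex` predicate; both the encoding and the names are assumptions of ours for illustration.

```python
def insert_vertex(path, i, v, is_simplex):
    # Type (a) insertion if v == path[i]; type (b) insertion if
    # {path[i], v, path[i+1]} spans a simplex of K.
    if v == path[i] or (i + 1 < len(path)
                        and is_simplex({path[i], v, path[i + 1]})):
        return path[: i + 1] + [v] + path[i + 1 :]
    raise ValueError("move not permitted")

def delete_vertex(path, i, is_simplex):
    # Type (a) deletion of a repeated vertex, or type (b) deletion when
    # {path[i-1], path[i], path[i+1]} spans a simplex of K.
    if path[i] == path[i + 1] or (
            i >= 1 and is_simplex({path[i - 1], path[i], path[i + 1]})):
        return path[: i] + path[i + 1 :]
    raise ValueError("move not permitted")

is_simplex = lambda s: s <= {0, 1, 2}   # a single 2-simplex on {0, 1, 2}
shortened = delete_vertex([0, 1, 2, 0], 1, is_simplex)   # a type (b) deletion
```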
If $K$ is a based simplicial complex with basepoint $v_0 \in V$, and if the two edge paths in question are edge loops, each of which starts and finishes at $v_0$, then we will refer to \emph{an edge homotopy of based loops}. Two edge paths $\alpha = \{ v_0, v_1, \ldots, v_n\}$ and $\beta = \{ w_0, w_1, \ldots, w_m\}$ with $v_n = w_0$ may be concatenated to form the edge path
$$\alpha \cdot \beta = \{ v_0, v_1, \ldots, v_n, w_1, w_2, \ldots, w_m\}.$$
\begin{remark}\label{rem: concatenation}
This concatenation differs from the way in which we concatenate suitable digital paths. In fact, we could just as well concatenate edge paths in the way in which we do our digital paths, requiring only that $\{ v_n, w_0\}$ be an edge in $K$. However, since we want to cite results from the literature, we use the standard way of concatenating edge paths. Doing so causes no problems for us in the development.
\end{remark}
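For contrast with the digital concatenation used earlier, the standard concatenation of edge paths identifies the shared vertex, so no pause is introduced; a sketch in the list-of-vertices model, with names of our choosing:

```python
def concat_edge(alpha, beta):
    # Standard concatenation of edge paths: the shared vertex v_n = w_0
    # appears only once, unlike the digital concatenation of paths.
    assert alpha[-1] == beta[0]
    return alpha + beta[1:]

def reverse_edge(alpha):
    # The reverse edge path.
    return alpha[::-1]

ab = concat_edge([0, 1, 2], [2, 3])
```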
Suppose that $K$ is a based simplicial complex with basepoint $v_0 \in V$. Edge homotopy of based loops is an equivalence relation on the set of edge loops based at $v_0$. Denote the equivalence class of an edge loop $\alpha$ by $[\alpha]$, and the set of all equivalence classes by $\mathrm{E}(K; v_0)$. Just as for the fundamental group, defining
$$[\alpha] \cdot [\beta] = [\alpha\cdot \beta] \in \mathrm{E}(K; v_0)$$
gives a well-defined product of equivalence classes. Concatenation of edge loops is associative, and so this product is associative.
Each edge path $\alpha = \{ v_0, v_1, \ldots, v_n\}$ has a \emph{reverse}, which is the edge path $\overline{\alpha} = \{ v_n, v_{n-1}, \ldots, v_0\}$.
One confirms that $\overline{\alpha} \cdot \alpha$ and $\alpha\cdot \overline{\alpha}$ are both edge-homotopic, as based loops, to a constant loop at $v_0$.
\begin{remark}
Although intuitively we may think of type (b) elementary edge homotopies as collapsing or expanding a $2$-simplex, in fact there is no requirement that the vertices $v_{i-1}$, $v_i$, $v_{i+1}$ be distinct, so the simplex they span may be a $1$-simplex or even a single vertex. Indeed, to reduce a concatenation of the form $\alpha\cdot \overline{\alpha}$ to the trivial loop $\{ v_0\}$ requires collapsing terms such as $\ldots, v_i, v_{i+1}, v_i, \ldots$, in which $\{ v_i, v_{i+1}\}$ is an edge, to $\ldots, v_i, v_i, \ldots$.
\end{remark}
Then the equivalence class of the trivial loop $[\{ v_0\}]$ plays the role of a two-sided identity element, and $[\alpha]^{-1} = [\overline{\alpha}]$ defines inverses, making $\mathrm{E}(K; v_0)$ into a group, called \emph{the edge group} of $K$ (based at $v_0$).
We have the following result.
\begin{theorem}[{\cite[Th.3.3.9]{Mau96}}]\label{thm: edge gp = pi1}
Suppose $K$ is a simplicial complex with basepoint $v_0$. Let $|K|$ be the spatial realization of $K$, with basepoint $v_0 \in |K|$. There is an isomorphism of groups
$$ \mathrm{E}(K; v_0) \cong \pi_1(|K|; v_0),$$
where the right-hand side denotes the ordinary fundamental group of $|K|$ as a topological space. \qed
\end{theorem}
This result is often given as a means of computing the fundamental group of a topological space or, at least, arriving at a presentation of it. We refer to \cite{Mau96} for details of the material we have just reviewed.
Now suppose that $X$ is a digital image. We may associate to $X$ its \emph{clique complex}, which we denote by $\mathrm{cl}(X)$ and which is a simplicial complex whose simplices are determined by the cliques of $X$. Namely, the vertices of $\mathrm{cl}(X)$ are the vertices of $X$. The $2$-cliques of $X$, namely pairs of adjacent points, are the $1$-simplices of $\mathrm{cl}(X)$, and so-on. In general, the $(n+1)$-cliques of $X$ are the $n$-simplices of $\mathrm{cl}(X)$. Now observe that the set of simplices $\mathrm{cl}(X)$ satisfies the requirements to be an (abstract) simplicial complex (a subset of a clique is again a clique).
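To make the construction concrete, here is a small sketch that enumerates the simplices of $\mathrm{cl}(X)$ up to a given dimension from an adjacency relation; the 8-adjacency used in the example, and all the names, are our assumptions for illustration.

```python
from itertools import combinations

def clique_complex(points, adjacent, max_dim=2):
    # Simplices of cl(X) up to dimension max_dim: each (n+1)-clique of X
    # gives an n-simplex.  `adjacent` is the image's adjacency relation.
    simplices = {frozenset([p]) for p in points}
    for r in range(2, max_dim + 2):
        for combo in combinations(points, r):
            if all(adjacent(a, b) for a, b in combinations(combo, 2)):
                simplices.add(frozenset(combo))
    return simplices

# Example in Z^2 with 8-adjacency (an assumption for this sketch):
adj8 = lambda a, b: a != b and max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= 1
X = [(0, 0), (1, 0), (0, 1)]
cl = clique_complex(X, adj8)
```

The three points are pairwise adjacent, so $\mathrm{cl}(X)$ here has three vertices, three edges, and one $2$-simplex.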
We will show that the (digital) fundamental group of a digital image $X$ is isomorphic to the edge group of the clique complex $\mathrm{cl}(X)$. The basic idea is to associate to each based loop $\alpha \colon I_M \to X$, in an obvious way, its corresponding edge loop
$$\mathrm{e}(\alpha) = \{ \alpha(0), \alpha(1), \ldots, \alpha(M)\}$$
of vertices in $\mathrm{cl}(X)$. Note that (digital) continuity of $\alpha$ ensures that $\mathrm{e}(\alpha)$ is an edge path in $\mathrm{cl}(X)$. Furthermore, it is an edge loop because we also have $\alpha(0) = \alpha(M) = x_0$. Then we wish to define a homomorphism
$$\phi\colon \pi_1(X; x_0) \to \mathrm{E}(\mathrm{cl}(X); x_0),$$
by setting $\phi( [\alpha] )= [ \mathrm{e}(\alpha)]$. From the next lemma, it will follow that this $\phi$ is well-defined; we will complete the proof that $\phi$ gives an isomorphism of groups following that.
\begin{lemma}\label{lem: htpy vs edge htpy}
Let $\alpha, \beta\colon I_M \to X$ and $\gamma\colon I_N \to X$ be based loops in a digital image $X$.
\begin{itemize}
\item[(i)] For any $k$, we have an edge homotopy of based edge loops $\mathrm{e}(\alpha\circ \rho_k) \approx_\mathrm{e} \mathrm{e}(\alpha)$ in $\mathrm{cl}(X)$.
\item[(ii)] If $\alpha \approx \beta\colon I_M \to X$ as based loops in $X$, then we have an edge homotopy of based edge loops $\mathrm{e}(\alpha) \approx_\mathrm{e} \mathrm{e}(\beta)$ in $\mathrm{cl}(X)$.
\item[(iii)] If $\alpha$ and $\gamma$ are based-subdivision homotopic as based loops in $X$, then we have an edge homotopy of based edge loops $\mathrm{e}(\alpha) \approx_\mathrm{e} \mathrm{e}(\gamma)$ in $\mathrm{cl}(X)$.
\end{itemize}
\end{lemma}
\begin{proof}
(i) More generally, if we have any trivial extension $\alpha' \colon I_{M'} \to X$ of $\alpha$, then $\mathrm{e}(\alpha') \approx_\mathrm{e} \mathrm{e}(\alpha)$. The composition $\alpha\circ \rho_k$ is simply the special case of a trivial extension of $\alpha$ in which we repeat each value of $\alpha$ a total of $k$ times. Refer to \defref{def: triv extn} for our notation about trivial extensions. Also, in the proof of \lemref{lem: triv ext alpha C}, we defined the \emph{elementary trivial extensions}
$$\beta_i (s) = \begin{cases} \alpha(s) & 0 \leq s \leq i\\ \alpha(s-1) & i+1 \leq s \leq M+1 .\end{cases}$$
These are the special cases of trivial extensions of $\alpha$ in which we repeat once a single point of $\alpha$. It is tautological that we have
$$\mathrm{e}(\alpha) \approx_\mathrm{e} \mathrm{e}(\beta_i),$$
for each $i$ with $0 \leq i \leq M$, using elementary edge homotopies of type (a) from \defref{def: elem edge htpy}. Since the composition $\alpha\circ \rho_k$ may be achieved as a finite sequence of elementary trivial extensions of the path $\alpha$, so too the edge loop $\mathrm{e}(\alpha\circ \rho_k)$ may be achieved as the corresponding finite sequence of elementary edge homotopies of type (a) of the edge loop $\mathrm{e}(\alpha)$.
(ii) Suppose we have a based homotopy of based loops $H \colon I_M \times I_N \to X$ from $\alpha$ to $\beta$. We resolve each step of this homotopy, namely the restriction of $H$ to a map $I _M \times [t, t+1] \to X$ for each $t$ with $0 \leq t \leq N-1$, into a succession of ``elementary homotopies," as follows.
For each $t$ with $0 \leq t \leq N$, define a loop $H_t\colon I _M \to X$ by $H_t(s) = H(s, t)$ for $s \in I_M$. Continuity of the homotopy $H$ means that, for each $t$ with $0 \leq t \leq N-1$, the paths $H_t \colon I_M \to X$ and $H_{t+1} \colon I_M \to X$ are adjacent as paths in $X$, in the sense used in \cite{LOS19a}. Namely, for each $s \sim s'$ in $I_M$, we have $H_t(s) \sim_X H_{t+1}(s')$. Now define, for each $q$ with $0 \leq q \leq N-1$, a homotopy
$$G_q \colon I_M \times I_M \to X$$
by setting
$$G_q(s, t) = \begin{cases} H_{q+1}(s) & 0 \leq s \leq t\\ H_{q}(s) & t+1 \leq s \leq M.\end{cases}$$
If $(s, t) \sim (s', t')$ in $I_M \times I_M$, then in particular we have $s \sim s'$ in $I_M$. Now the only possible values for $G_q(s, t)$ are $H_{q+1}(s)$ or $H_{q}(s)$, and the only possible values for $G_q(s', t')$ are $H_{q+1}(s') $ or $H_{q}(s')$. From the remark above, about adjacency of $H_q$ and $H_{q+1}$, it follows that we have both $H_{q+1}(s)$ and $H_{q}(s)$ adjacent to both $H_{q+1}(s')$ and $H_{q}(s')$. Hence, we have $G_q(s, t) \sim G_q(s', t')$ in $X$, and so $G_q$ is continuous. Notice, then, that \emph{$G_q$ is a homotopy from $H_q$ to $H_{q+1}$} for each $q$ with $0 \leq q \leq N-1$, and that we have $G_q(s, M) = G_{q+1}(s, 0) = H_{q+1}(s)$ for each $s \in I_M$, each $q$ with $0 \leq q \leq N-2$.
Now we may assemble the $G_t$ together into a homotopy
$$G \colon I_M \times I_{MN} \to X$$
by setting
$$G(s, t) = G_q(s, r) \text{ if } t = Mq+r \text{ for } 0 \leq q \leq N-1 \text{ and } 0 \leq r \leq M-1$$
and then $G(s, MN) = G_{N-1}(s, M) = H_N(s)$. Note that this $G$ is continuous by the same argument that we use to show homotopy is transitive. Specifically, here, we have $G = G_q$ when restricted to the rectangle $I_M \times [Mq, M(q+1)]$, and so $G$ is continuous when restricted to each such rectangle. But if we have $(s, t) \sim (s', t')$ in $I_M \times I_{MN}$, then in particular we have $|t' - t| \leq 1$ and hence both $(s, t)$ and $(s', t')$ must lie in at least one such rectangle. Then $G(s, t) = G_q(s, t - Mq) \sim_X G_q(s', t' - Mq) = G(s', t')$ for some $q$, and it follows that $G$ is continuous on $I_M \times I_{MN}$.
We have constructed $G$, which is a ``slower" homotopy of based loops from the loop $\alpha$ at which the original homotopy $H$ starts, to the loop $\beta$ at which the original homotopy $H$ ends. The difference between the two homotopies is that, whereas $H$ makes the transition in unit time from one loop to an adjacent loop that may differ in many values, the slower homotopy $G$ makes a transition in unit time from one loop to an adjacent loop that differs in at most one value.
\emph{Claim.} For each $t$ with $0 \leq t \leq MN-1$, define the two based loops $\eta, \eta'\colon I_M \to X$ by $\eta(s) = G(s, t)$ and $\eta'(s) = G(s, t+1)$. Then we have an edge homotopy of based edge loops $\mathrm{e}(\eta) \approx_\mathrm{e} \mathrm{e}(\eta')$ in $\mathrm{cl}(X)$.
\emph{Proof of Claim.} The loops $\eta$ and $\eta'$ differ in at most one value, which means that, for some $S$ with $1 \leq S \leq M-1$, we have $\eta(s) = \eta'(s)$ for $s \not= S$, and $\eta(S) \sim_X \eta'(S)$ (these may agree also, in which case we have $\eta = \eta'$). This follows from the way in which we have constructed the homotopy $G$. Then an edge homotopy from $\mathrm{e}(\eta)$ to $\mathrm{e}(\eta')$ is given by the sequence of elementary edge homotopies
$$\begin{aligned} \cdots, \eta(S-1), \eta(S), \eta(S+1), \cdots &\approx_\mathrm{e} \cdots, \eta(S-1), \eta'(S), \eta(S),\eta(S+1), \cdots \\
&\approx_\mathrm{e} \cdots, \eta(S-1), \eta'(S), \eta(S+1), \cdots,
\end{aligned}$$
in which the first edge path is $\mathrm{e}(\eta)$ and the last is $\mathrm{e}(\eta')$. The first elementary edge homotopy inserts the vertex $\eta'(S)$ between $\eta(S-1)$ and $\eta(S)$. This is permissible since $G(S-1, t)$, $G(S, t)$ and $G(S, t+1)$ form a $3$-clique in $X$, from continuity of $G$. The second elementary edge homotopy deletes the vertex $\eta(S)$ from between $\eta'(S)$ and $\eta(S+1)$. Again, this is permissible since continuity of $G$ implies that $G(S, t+1)$, $G(S, t)$ and $G(S+1, t)$ form a $3$-clique in $X$. \emph{End of Proof of Claim.}
Since $\mathrm{e}(\eta) \approx_\mathrm{e} \mathrm{e}(\eta')$ in $\mathrm{cl}(X)$ for each $t$, transitivity of edge homotopies now gives $\mathrm{e}(\alpha) \approx_\mathrm{e} \mathrm{e}(\beta)$ in $\mathrm{cl}(X)$.
(iii) Now suppose that $\alpha$ and $\gamma$ are based-subdivision homotopic as based loops in $X$. This means that for some $k$ and $k'$, we have a based homotopy of based loops $\alpha\circ \rho_k \approx \gamma\circ \rho_{k'}$. But then we have edge homotopies of based edge loops
$$\mathrm{e}(\alpha) \approx_\mathrm{e} \mathrm{e}(\alpha\circ \rho_k) \approx_\mathrm{e} \mathrm{e}(\gamma\circ \rho_{k'}) \approx_\mathrm{e} \mathrm{e}(\gamma)$$
in $\mathrm{cl}(X)$, with the first and third edge homotopies coming from part (i), and the middle edge homotopy from part (ii). Then part (iii) follows from transitivity of edge homotopies.
\end{proof}
\begin{theorem}\label{thm: digital pi1 = edge pi1}
Let $X$ be a digital image and $\mathrm{cl}(X)$ its clique complex. The map
$$\phi\colon \pi_1(X; x_0) \to \mathrm{E}(\mathrm{cl}(X); x_0),$$
defined by setting $\phi( [\alpha] )= [ \mathrm{e}(\alpha)]$ is an isomorphism of groups.
\end{theorem}
\begin{proof}
The map $\phi$ is well-defined by \lemref{lem: htpy vs edge htpy} (recall that, in our formulation of the digital fundamental group $\pi_1(X; x_0)$, based loops $\alpha$ and $\beta$ represent the same element of $\pi_1(X; x_0)$ if they are subdivision-based homotopic). Although we concatenate based loops in a slightly different way from that in which edge loops are concatenated (cf.~\remref{rem: concatenation}), nonetheless $\phi$ is a homomorphism. The concatenation of two based loops $\alpha\cdot \beta$ has a repeat of the basepoint at times $M$ and $M+1$ (if $\alpha$ is of length $M$). But then we may use an elementary edge homotopy of type (a) to delete this repetition, so that we have
$$\mathrm{e}(\alpha\cdot \beta) \approx_\mathrm{e} \mathrm{e}(\alpha) \cdot \mathrm{e}(\beta),$$
where the right-hand side refers to (the standard) concatenation of edge loops in $\mathrm{cl}(X)$. It follows that $\phi$ is indeed a homomorphism. Any edge loop $\{v_0, \ldots, v_n\}$ in $\mathrm{cl}(X)$ may be viewed as $\mathrm{e}(\alpha)$, where $\alpha\colon I_n \to X$ is the path $\alpha(i) = v_i$ for $0 \leq i \leq n$. Continuity of $\alpha$ follows because $v_i$ and $v_{i+1}$ must be adjacent in $X$ for there to be an edge joining them in $\mathrm{cl}(X)$. So $\phi$ is evidently onto.
It remains to show that $\phi$ is also injective. For this it is sufficient to show that if two edge loops, which---as we just observed---we may assume are of the form $\mathrm{e}(\alpha)$ and $\mathrm{e}(\beta)$ for loops $\alpha$ and $\beta$ in $X$, are homotopic \emph{via} an elementary edge homotopy, then the loops $\alpha$ and $\beta$ are subdivision-based homotopic. First suppose that $\mathrm{e}(\beta)$ is edge homotopic to $\mathrm{e}(\alpha)$ by an elementary edge homotopy of type (a)---addition of a vertex $v_j$ after an occurrence of this vertex in the edge loop (by the symmetric nature of edge homotopy, it is not necessary to consider removal of a vertex). Then $\beta$ is what we earlier called an elementary trivial extension of $\alpha$. \lemref{lem: triv ext alpha C} now gives $[\beta] = [\alpha\cdot C_0]$ in $\pi_1(X; x_0)$, with $C_0$ denoting the constant loop at $x_0$. From \cite{LOS19c} we have $[\alpha\cdot C_0] = [\alpha]\cdot [C_0] = [\alpha]$ in $\pi_1(X; x_0)$, so that $\beta$ is subdivision-based homotopic to $\alpha$. Now suppose that $\mathrm{e}(\beta)$ is edge homotopic to $\mathrm{e}(\alpha)$ by an elementary edge homotopy of type (b)---addition of a vertex $v$ between two vertices $\alpha(j)$ and $\alpha(j+1)$ with $\{ \alpha(j), v, \alpha(j+1)\}$ a simplex of $\mathrm{cl}(X)$. If $\{ \alpha(j), v, \alpha(j+1)\}$ is a simplex of $\mathrm{cl}(X)$, then we have $\alpha(j) \sim \alpha(j+1)$ and $v$ is adjacent to both of these in $X$. Let $\beta_j$ denote the elementary trivial extension of $\alpha$ obtained by repeating the value $\alpha(j)$, as in the proof of \lemref{lem: triv ext alpha C}. We may define a homotopy
$$H \colon I_{M+1} \times I_1 \to X,$$
assuming $\alpha$ is of length $M$, by setting
$$H(s, t) = \begin{cases} \alpha(s) & 0 \leq s \leq j\\
\alpha(j) & (s, t) = (j+1, 0)\\
v & (s, t) = (j+1, 1)\\
\alpha(s-1) & j+2 \leq s \leq M+1.\end{cases}$$
%
It is easy to confirm that $H$ is continuous, and that it is a based homotopy of based loops $\beta_j \approx \beta$. In $\pi_1(X; x_0)$, then, we have
%
$$[\alpha] = [\alpha]\cdot [C_0] = [\alpha\cdot C_0] = [\beta_j] = [\beta],$$
%
where the first two re-writes are basic identities in $\pi_1(X; x_0)$, the next is the first item we proved in the proof of \lemref{lem: triv ext alpha C}, and the last follows from the homotopy $H$ above. Thus, for each type of elementary edge homotopy, we have established that $\mathrm{e}(\alpha) \approx_\mathrm{e} \mathrm{e}(\beta)$ implies $[\alpha] = [\beta] \in \pi_1(X; x_0)$. Injectivity of $\phi$ follows, and this completes the proof.
\end{proof}
\section{Direct Consequences for the Digital Fundamental Group}\label{sec: first consequences}
Because so much is known about edge groups of simplicial complexes and the fundamental groups of topological spaces, it is now easy to compile many basic results about the digital fundamental group. We simply translate known facts and results from the topological setting to the digital setting, wherever feasible. We begin by considering digital circles.
Our definition of a digital circle is effectively the same as the ``simple closed curve'' definition of \cite[\S3]{Bo99}. Actually, these curves are closed but are simple only in the tolerance space sense.
\begin{definition}\label{def: circle}
Consider a set $C = \{x_0, x_1, \ldots, x_{N-1}\}$ of $N$ (distinct) points in $\mathbb{Z}^n$, for any $n \geq 2$, with $N \geq 4$. We say that $C$ is a \emph{circle of length $N$} if we have adjacencies $x_i \sim_C x_{i+1}$ for each $0 \leq i \leq N-2$, and $x_{N-1} \sim_C x_0$, and no other adjacencies amongst the elements of $C$.
\end{definition}
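It is convenient to have a mechanical test for this definition. The following sketch (the function names are ours, not notation from the text) assumes the adjacency used for digital images throughout: distinct points of $\mathbb{Z}^n$ are adjacent exactly when each pair of corresponding coordinates differs by at most $1$.

```python
from itertools import combinations

def adjacent(x, y):
    # distinct points of Z^n, adjacent when every coordinate differs by at most 1
    return x != y and all(abs(a - b) <= 1 for a, b in zip(x, y))

def is_digital_circle(points):
    """Test the definition: the points, listed in cyclic order, must be
    distinct, N >= 4, with consecutive points adjacent and no other
    adjacencies amongst them."""
    n = len(points)
    if n < 4 or len(set(points)) != n:
        return False
    for i, j in combinations(range(n), 2):
        consecutive = (j - i == 1) or (i == 0 and j == n - 1)
        if adjacent(points[i], points[j]) != consecutive:
            return False
    return True
```

For instance, `is_digital_circle([(1, 0), (0, 1), (-1, 0), (0, -1)])` returns `True` for the diamond appearing later in this section, while the four points of the unit square fail the test because of their diagonal adjacencies.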
We may parametrize a digital circle as a loop $\alpha\colon I_N \to C$ (in various ways).
\begin{theorem}\label{thm: pi of C is Z}
$\pi_1(C; x_0) \cong \mathbb{Z}$ for every digital circle $C$.
\end{theorem}
\begin{proof}
The clique complex of a digital circle is a cycle graph, with geometric realization an actual circle $S^1$. The result follows from \thmref{thm: digital pi1 = edge pi1}, \thmref{thm: edge gp = pi1}, and the well-known, basic calculation of $\pi_1(S^1; x_0) \cong \mathbb{Z}$ (e.g. \cite[Th.II.5.1]{Mas91}).
\end{proof}
\begin{remark}
We have shown in \cite{LOS19c} that a particular $4$-point digital circle $D$, which we called the diamond, has fundamental group $\pi_1(D; x_0) \cong \mathbb{Z}$. This computation was done staying within digital topology, using some results we developed in \cite{LOS19a}. This gives a computation of $\pi_1(S^1; x_0) \cong \mathbb{Z}$ independently of the usual topological argument, through the identifications
$$\pi_1(S^1; x_0) \cong \pi_1(| \mathrm{cl}(D)| ; x_0) \cong \pi_1(D; x_0)$$
of \thmref{thm: digital pi1 = edge pi1} and \thmref{thm: edge gp = pi1}. Furthermore, these theorems allow us to lever the single computation $\pi_1(D; x_0) \cong \mathbb{Z}$ into a computation of the fundamental group of any digital circle $C$, because we have $| \mathrm{cl}(D)| = S^1 = | \mathrm{cl}(C)|$ (we mean the spatial realizations are homeomorphic to the circle, here).
Note that digital circles of different lengths are not (digitally) based-homotopy equivalent. We suspect that any two digital circles are subdivision-based homotopy equivalent. However, we are as yet unable to establish this because the arguments become bogged down in lengthy expositional details. The digital fundamental group is preserved by this notion of subdivision-based homotopy equivalence. But the isomorphism $\pi_1(D; x_0) \cong \pi_1(C; x_0)$, for any digital circle $C$, is available to us without having to establish $D$ and $C$ as subdivision-based homotopy equivalent. These comments indicate that, speaking generally, enlarged or reduced versions of a digital image should have the same fundamental group as the original, even though they will not be homotopy equivalent, and even though we may not be able to show them subdivision-based homotopy equivalent. This is because we may---at the fundamental group level---pass into the topological setting, enlarge or reduce there, and then pass back into the digital setting.
\end{remark}
We now deduce a general result that enables calculation of many examples.
The Seifert-van Kampen theorem describes the fundamental group of a union $\pi_1(U \cup V; x_0)$ in terms of the fundamental groups $\pi_1(U; x_0)$, $\pi_1(V; x_0)$ and $\pi_1(U \cap V; x_0)$. We will need to place certain mild constraints on the union.
\begin{definition}\label{def: disconnected complements}
Suppose $U$ and $V$ are digital images in some $\mathbb{Z}^n$. Denote by $U' = \{ v \in V \mid v \not\in V \cap U\}$ the complement of $U$ in $U \cup V$ and by $V' = \{ u \in U \mid u \not\in U \cap V\}$ the complement of $V$ in $U \cup V$. We say that $U$ and $V$ have \emph{disconnected complements} (in $U \cup V$) if $U'$ and $V'$ are disconnected from each other. That is, $U$ and $V$ have disconnected complements when the set of pairs $\{ u, v\}$ with $u \in V'$, $v \in U'$ and $u \sim_{U \cup V} v$ is empty.
\end{definition}
\begin{theorem}[Digital Seifert-van Kampen]\label{thm: DVK}
Let $U$ and $V$ be connected digital images in some $\mathbb{Z}^n$ with connected intersection $U \cap V$. Choose $x_0 \in U \cap V$ for the basepoint of $U \cap V$, $U$, $V$, and $U \cup V$. If $U$ and $V$ have disconnected complements, then
$$\xymatrix{ \pi_1(U\cap V; x_0) \ar[r]^-{i_1} \ar[d]_{i_2} & \pi_1(U; x_0) \ar[d]^{\psi_1}\\
\pi_1(V; x_0) \ar[r]_-{\psi_2} & \pi_1(U\cup V; x_0) }$$
is a pushout diagram of groups and homomorphisms, with $i_1$, $i_2$, $\psi_1$ and $\psi_2$ the homomorphisms of fundamental groups induced by the inclusions $U \cap V \to U$, $U \cap V \to V$, $U \to U \cup V$ and $V \to U \cup V$ respectively.
That is, suppose we are given any homomorphisms $h_1\colon \pi_1(U; x_0) \to G$ and $h_2\colon \pi_1(V; x_0) \to G$ that satisfy $h_1\circ i_1= h_2\circ i_2 \colon \pi_1(U\cap V; x_0) \to G$, with $G$ an arbitrary group. Then there is a homomorphism $\phi\colon \pi_1(U \cup V; x_0) \to G$ that makes (all parts of) the following diagram commute
\begin{displaymath}
\xymatrix{ \pi_1(U\cap V; x_0) \ar[r]^-{i_1} \ar[d]_{i_2}& \pi_1(U; x_0) \ar[d]_{\psi_1}
\ar@/^/[ddr]^{h_1}&\\ \pi_1(V; x_0) \ar[r]^-{\psi_2} \ar@/_/[drr]_{h_2}& \pi_1(U\cup V; x_0)
\ar@{.>}[dr]_(.3){\phi}&\\ &&G }
\end{displaymath}
and $\phi$ is the unique such homomorphism.
\end{theorem}
\begin{proof}
In general, we have $\mathrm{cl}(U) \cap \mathrm{cl}(V) = \mathrm{cl}(U \cap V)$. Observe that, with the hypothesis of disconnected complements, we also have $\mathrm{cl}(U) \cup \mathrm{cl}(V) = \mathrm{cl}(U \cup V)$. Hence, we have isomorphisms $\pi_1(U \cap V; x_0) \cong E\left( \mathrm{cl}(U) \cap \mathrm{cl}(V); x_0 \right)$ and $\pi_1(U \cup V; x_0) \cong E\left( \mathrm{cl}(U) \cup \mathrm{cl}(V); x_0 \right)$, from \thmref{thm: digital pi1 = edge pi1}. Now we may apply the
ordinary Seifert-van Kampen theorem from the topological setting in the form for simplicial complexes (see, e.g. \cite[Th.11.60]{Rot95})
to the inclusions of connected simplicial (sub-) complexes
$$\xymatrix{ \mathrm{cl}(U) \cap \mathrm{cl}(V) \ar[r] \ar[d] & \mathrm{cl}(U) \ar[d]\\
\mathrm{cl}(V) \ar[r] & \mathrm{cl}(U)\cup \mathrm{cl}(V) }$$
and conclude the result via \thmref{thm: digital pi1 = edge pi1} and \thmref{thm: edge gp = pi1}.
\end{proof}
\begin{remark}
We make no assumptions about any of the induced homomorphisms $i_1$, $i_2$, $\psi_1$ and $\psi_2$ being injective. Depending on the circumstances, some or all of them, in various combinations, may be injective. But none of them need be injective.
\end{remark}
\begin{remark}
The theorem identifies $\pi_1(U\cup V; x_0)$ up to isomorphism, although it does so indirectly in terms of a universal property. For $U$ and $V$ that satisfy the hypotheses, a more concrete description of $\pi_1(U \cup V; x_0)$ may be given as follows (see \cite[Th.11.58]{Rot95}, for example). We have an isomorphism
%
$$\pi_1(U \cup V; x_0) \cong \frac{ \pi_1(U; x_0) \ast \pi_1(V; x_0)}{N},$$
%
where $\pi_1(U; x_0) \ast \pi_1(V; x_0)$ denotes the free product and $N$ the normal subgroup generated by $\{ i_1(g) i_2(g^{-1}) \mid g \in \pi_1(U\cap V; x_0)\}$. Or, in terms of presentations, if $\pi_1(U; x_0) = \langle G_1 \mid R_1 \rangle$ and $\pi_1(V; x_0) = \langle G_2 \mid R_2 \rangle$, where the $G_i$ and $R_i$ are sets of generators and relations, then we have a presentation
%
$$\pi_1(U \cup V; x_0) = \langle G_1 \cup G_2 \mid R_1 \cup R_2 \cup \{ i_1(g) i_2(g^{-1}) \mid g \in \pi_1(U\cap V; x_0)\} \rangle.$$
%
\end{remark}
\begin{remark}
The conclusion of the theorem need not hold if $U$ and $V$ do not have disconnected complements. For example, take $U, V \subseteq \mathbb{Z}^2$ as follows.
%
$$U = \{ (1, 0), (0, 1) \}, \quad V = \{ (1, 0), (0, -1), (-1, 0) \},$$
%
so that $U \cup V = D$, the diamond, and $U \cap V = \{ (1, 0) \}$. Then we have $(-1, 0) \sim (0, 1)$ with $(-1, 0) \in U'$ and $(0, 1) \in V'$, and so $U$ and $V$ do not have disconnected complements. Furthermore, we know from \cite{LOS19c} or \thmref{thm: pi of C is Z} above that $\pi_1\big(D; (1,0)\big) \cong \mathbb{Z}$, whereas here we have $U$ and $V$ both contractible with trivial fundamental group. Evidently, the conclusion of the theorem does not hold. Specifically, here, the issue is that---concomitant with $U'$ and $V'$ not being disconnected---we have $\mathrm{cl}(U) \cup \mathrm{cl}(V)$ strictly contained in (not equal to) $\mathrm{cl}(U \cup V)$.
\end{remark}
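The failure of the disconnected-complements hypothesis in this counterexample can be checked mechanically. A minimal sketch (function names ours), assuming the coordinate-wise adjacency on $\mathbb{Z}^n$ under which distinct points are adjacent when all coordinates differ by at most $1$:

```python
def adjacent(x, y):
    # distinct points of Z^n, adjacent when every coordinate differs by at most 1
    return x != y and all(abs(a - b) <= 1 for a, b in zip(x, y))

def disconnected_complements(U, V):
    """U' = complement of U in the union (points of V outside U), and V' the
    complement of V; the condition holds when no point of V' is adjacent
    to a point of U'."""
    U_prime, V_prime = V - U, U - V
    return not any(adjacent(u, v) for u in V_prime for v in U_prime)

# the two halves of the diamond from the remark above
U = {(1, 0), (0, 1)}
V = {(1, 0), (0, -1), (-1, 0)}
assert not disconnected_complements(U, V)  # offending pair: (0, 1) ~ (-1, 0)
```

Splitting the diamond instead into top and bottom halves, $U = \{(1,0), (0,1), (-1,0)\}$ and $V = \{(1,0), (0,-1), (-1,0)\}$, does give disconnected complements, though then the intersection is itself disconnected.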
\begin{remark}
It is possible to prove \thmref{thm: DVK} entirely within the digital setting (without relying on \thmref{thm: digital pi1 = edge pi1} and \thmref{thm: edge gp = pi1}). Surprisingly, perhaps, we are able to prove \thmref{thm: DVK} by adapting the argument that is used in \cite{Mas77} to prove the topological Seifert-van Kampen theorem there. That argument uses the Lebesgue covering lemma, from the theory of compact metric spaces. In our digital setting, we find that it is possible to follow the same argument without really having to develop a substitute for this ingredient. It turns out that dividing a rectangle $I_M \times I_N$ into unit squares achieves the same purpose as does dividing the rectangle $I \times I$ into subrectangles of diameter less than the Lebesgue number of a certain covering of $I \times I$ in the topological setting.
\end{remark}
\begin{remark}
Ayala et al.~\cite{ADFQ03} have a Seifert-van Kampen theorem for the digital fundamental groups they consider. However, as we mentioned in the introduction, their approach is effectively to \emph{define} the fundamental group as that of an associated simplicial complex, so it \emph{a priori} will obey the Seifert-van Kampen theorem and possess any other properties of the topological fundamental group. The difference between that approach and ours is that we have an intrinsic, self-contained construction of the fundamental group in the digital setting, and we need to establish \thmref{thm: digital pi1 = edge pi1} and \thmref{thm: edge gp = pi1} in order to make use of the properties of the topological fundamental group.
\end{remark}
There are some special cases of \thmref{thm: DVK} that are especially useful. First, consider the case in which the intersection has trivial fundamental group (cf.~\cite[Th.IV.3.1]{Mas91}).
\begin{corollary}[To \thmref{thm: DVK}]\label{cor: contractible U cap V}
Suppose $U$ and $V$ satisfy the hypotheses of \thmref{thm: DVK} (including disconnected complements) and, in addition, we have $\pi_1(U \cap V; x_0) = \{ \mathbf{e} \}$. Then we have
$$\pi_1(U \cup V; x_0) \cong \pi_1(U; x_0) \ast \pi_1(V;x_0),$$
%
where the right-hand side denotes the free product of groups. More formally,
%
$$\xymatrix{ \{ \mathbf{e}\} \ar[r]^-{i_1} \ar[d]_{i_2} & \pi_1(U; x_0) \ar[d]^{\psi_1}\\
\pi_1(V; x_0) \ar[r]_-{\psi_2} & \pi_1(U\cup V; x_0) }$$
is a pushout diagram of groups and homomorphisms.
That is, suppose we are given any homomorphisms $h_1\colon \pi_1(U; x_0) \to G$ and $h_2\colon \pi_1(V; x_0) \to G$ with $G$ an arbitrary group. Then there is a homomorphism $\phi\colon \pi_1(U \cup V; x_0) \to G$ that makes the following diagram commute
$$\xymatrix{ \pi_1(U; x_0) \ar[d]_{\psi_1} \ar[rrd]^-{h_1} \\
\pi_1(U \cup V; x_0) \ar[rr]^{\phi} & &G\\
\pi_1(V; x_0) \ar[u]^{\psi_2} \ar[rru]_-{h_2}}$$
and $\phi$ is the unique such homomorphism.
\end{corollary}
\begin{proof}
Direct from \thmref{thm: DVK}.
\end{proof}
In particular, if we have $U \cap V = \{ x_0\}$, so that $U \cup V$ is a one-point union of $U$ and $V$, and if $U$ and $V$ have disconnected complements in $U \cup V$, then we have
$\pi_1(U \cup V; x_0) \cong \pi_1(U; x_0) \ast \pi_1(V;x_0)$.
\begin{example}\label{ex: DD}
Let $D=\{(1,0), (0,1), (-1,0), (0,-1)\}$ be the diamond in $\mathbb{Z}^2$, with basepoint $(1, 0)$. The ``double diamond'' in $\mathbb{Z}^2$, with basepoint $(0, 0)$, pictured in \figref{fig:D v D} may be viewed as a one-point union $D \vee D$ of two isomorphic copies of $D$. With $U$ and $V$ the right-hand and the left-hand copies of $D$, respectively, we have $U \cap V = \{(0, 0)\}$, a single point. Since $\pi_1(D; x_0) \cong \mathbb{Z}$, it follows from \corref{cor: contractible U cap V} that we have $\pi_1(D \vee D; x_0) \cong \mathbb{Z} \ast \mathbb{Z}$. Alternatively, we could just as well deduce the same conclusion by observing that $\mathrm{cl}(D \vee D)$ has geometric realization homeomorphic to $S^1 \vee S^1$, the one-point union of two circles, and using the well-known result that $\pi_1(S^1 \vee S^1; x_0) \cong \mathbb{Z} \ast \mathbb{Z}$ (e.g., \cite[Ex.IV.3.1]{Mas91}) together with \thmref{thm: digital pi1 = edge pi1} and \thmref{thm: edge gp = pi1}. This example illustrates that a digital image may have non-abelian fundamental group.
\begin{figure}[h!]
\centering
\includegraphics[trim=140 450 80 100,clip,width=0.5\textwidth]{FigureDD.pdf}
\caption{$D \vee D$ in $\mathbb{Z}^2$}\label{fig:D v D}
\end{figure}
\end{example}
Another special case of \thmref{thm: DVK} that is often useful is the case in which one of $U$ or $V$ is contractible or, at least, has trivial fundamental group (cf.~\cite[Th.IV.4.1]{Mas91}).
\begin{corollary}[To \thmref{thm: DVK}]\label{cor: contractible V}
Suppose $U$ and $V$ satisfy the hypotheses of \thmref{thm: DVK} (including disconnected complements) and, in addition, we have $\pi_1(V; x_0)=\{\mathbf{e}\}$. Then $\psi_1\colon \pi_1(U; x_0)\to \pi_1(U\cup V; x_0)$ is an epimorphism, and its kernel is the smallest normal subgroup of $\pi_1(U; x_0)$ containing the image $i_1\left(\pi_1(U\cap V; x_0)\right)$.
\end{corollary}
\begin{proof}
Direct from \thmref{thm: DVK}.
\end{proof}
Our next example will display a digital image with fundamental group isomorphic to $\mathbb{Z}_2$. Our approach here is to ``reverse-engineer'' a digital image $X$ so that the geometric realization of $\mathrm{cl}(X)$ is homeomorphic to the real projective plane $\mathbb{R} P^2$. The approach depends in part on being able to realize a graph as a digital image. We now describe a general procedure for doing this.
Recall our discussion of tolerance spaces from the introduction. A \emph{simple graph} is one that has no double edges or edges that connect a vertex to itself. A tolerance space may be viewed as a simple graph, and vice versa, by interpreting ``adjacent vertices'' in the tolerance space as ``vertices connected by an edge'' in the graph. In the following, and in the sequel, by an ``isomorphism'' across the structures of digital images, on the one hand, and simple graphs/tolerance spaces, on the other, we mean an adjacency-preserving bijection of the vertices with an adjacency-preserving inverse.
\begin{proposition}\label{prop: digital graph}
If $G$ is a finite simple graph (a finite tolerance space), then $G$ may be isomorphically embedded as a digital image with vertices in the hypercube $[-1, 1]^{n-1} \subseteq \mathbb{Z}^{n-1}$, where $n = |G|$, the number of vertices.
\end{proposition}
\begin{proof}
We work by induction on $n$. The induction starts with $n = 1$ (or $n = 2$), where there is nothing to show.
Inductively assume that, if $|G| \leq n$, then we may embed $G$ as a digital image in $[-1, 1]^{n-1}$. Suppose we have a graph $G'$ with $n+1$ vertices. Choose any vertex $x \in G'$ and write $G' = G \cup \{x\}$ with $|G| = n$. Embed $G$ as a digital image in $[-1, 1]^{n-1} \subseteq \mathbb{Z}^{n-1} \subseteq \mathbb{Z}^{n-1} \times \mathbb{Z} = \mathbb{Z}^{n}$. Then each vertex $y \in G$ has coordinates $y = (y_1, \ldots, y_{n-1}, 0) \in \mathbb{Z}^{n}$, and we have $y_i \in \{\pm1, 0\}$ for $i = 1, \ldots, n-1$.
Denote by $\mathrm{lk}(v) $ the (vertices of the) \emph{link} of a vertex $v$ in a graph, namely, the set of vertices (other than $v$) connected by an edge to $v$.
Now separate the vertices of $G$ into the disjoint union $G = \mathrm{lk}(x) \sqcup \mathrm{lk}(x)^C$. For each $y \in \mathrm{lk}(x)^C$, move it down to the plane $y_n = -1$. In other words, adjust the embedding of $G$ in $\mathbb{Z}^n$ using the isomorphism of digital images $\phi\colon G \to \overline{G}$ given by
$$\phi(y_1, \ldots, y_{n-1}, 0) = \begin{cases} (y_1, \ldots, y_{n-1}, 0) & \text{if } y \in \mathrm{lk}(x) \\ (y_1, \ldots, y_{n-1}, -1) & \text{if } y \in \mathrm{lk}(x)^C \end{cases}$$
This is an isomorphism, since we have---for $y, y' \in \mathbb{Z}^{n-1} \times \{0\} \subseteq \mathbb{Z}^n$---
$$y \sim_{\mathbb{Z}^n} y' \iff (y_1, \ldots, y_{n-1}) \sim_{\mathbb{Z}^{n-1}} (y'_1, \ldots, y'_{n-1}) \iff \phi(y) \sim_{\mathbb{Z}^n} \phi(y').$$
So we now have $G$ embedded in $\mathbb{Z}^n$ as a digital image with $\mathrm{lk}(x) \subseteq [-1, 1]^{n-1} \times \{0\} \subseteq \mathbb{Z}^n$ and $\mathrm{lk}(x)^C \subseteq [-1, 1]^{n-1} \times \{-1\} \subseteq \mathbb{Z}^n$. Add $x$ as the point $x = \textbf{e}_n = (0, \ldots, 0, 1)$. This point is adjacent to every point in $[-1, 1]^{n-1} \times \{0\} \subseteq \mathbb{Z}^n$, and hence to every point of $\mathrm{lk}(x)$ as we have embedded it. Furthermore, $x = \textbf{e}_n$ is not adjacent to any point of $[-1, 1]^{n-1} \times \{-1\} \subseteq \mathbb{Z}^n$, and so this produces exactly the adjacencies of $x$ from $G'$. This completes the induction.
\end{proof}
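The inductive step is easily turned into an algorithm that adds one vertex at a time. The sketch below is ours, not notation from the text; for simplicity it spends one coordinate per vertex (so it embeds $n$ vertices in $\mathbb{Z}^n$ rather than the sharper $[-1,1]^{n-1}$ of the proposition), and it assumes the coordinate-wise adjacency on $\mathbb{Z}^n$.

```python
def adjacent(x, y):
    # digital adjacency: distinct points whose coordinates all differ by at most 1
    return x != y and all(abs(a - b) <= 1 for a, b in zip(x, y))

def embed_graph(vertices, edges):
    """Embed a finite simple graph as a digital image, one vertex at a time.
    `edges` is a set of frozensets {u, v}.  Returns a dict vertex -> point.
    At each step, non-neighbours of the incoming vertex are pushed down to
    last coordinate -1, and the new vertex is placed at e_n = (0, ..., 0, 1)."""
    coords = {}
    for dim, v in enumerate(vertices):
        for u in coords:
            last = 0 if frozenset((u, v)) in edges else -1
            coords[u] = coords[u] + (last,)
        coords[v] = (0,) * dim + (1,)
    return coords
```

As in the proof, appending a last coordinate in $\{0, -1\}$ never changes adjacency amongst the previously embedded vertices, while the new point $\mathbf{e}_n$ becomes adjacent to exactly those points left at last coordinate $0$.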
\begin{example}\label{ex: projective plane}
As announced above, we now construct a digital image $X$ that may be viewed as a digital version of the real projective plane $\mathbb{R} P^2$. Start with a suitable triangulation of $\mathbb{R} P^2$. Notice that some care must be taken here. For example, the triangulation of $\mathbb{R} P^2$ given in
\cite[Ex.I.6.2]{Mas91} (see Figure 1.13 on p.15 of \cite{Mas91}) is not suitable. This is because the clique complex of that triangulation, considered as a graph, contains simplices that are not part of the triangulation (the triangulation has ``empty'' simplices, and so is not a clique, or flag, complex). For example, with reference to the notation of \cite[Ex.I.6.2]{Mas91}, the $3$-clique $123$ does not correspond to a $2$-simplex of the triangulation. Indeed, the triangulation of \cite[Ex.I.6.2]{Mas91}, considered as a graph, is actually a complete graph, and so its clique complex would be a $5$-simplex, with contractible spatial realization. Instead, we may use the triangulation of $\mathbb{R} P^2$ (represented as the disc with antipodal points of the boundary circle identified) illustrated in \figref{fig:RP2}.
\begin{figure}[h!]
\centering
$$
\begin{tikzpicture}[
decoration={markings},
]
\draw [black,domain=45:90,postaction={decorate,decoration={mark=between positions 0.45 and 0.55 step 0.045 with {\draw (.075,-.13) -- (0,0) -- (.075,.13) ;}}}] plot ({3*cos(\x)}, {3*sin(\x)});
\draw [black,domain=0:45,postaction={decorate,decoration={mark=between positions 0.42 and 0.58 step 0.045 with {\draw (.075,-.13) -- (0,0) -- (.075,.13) ;}}}] plot ({3*cos(\x)}, {3*sin(\x)});
\draw [black,domain=-45:0,postaction={decorate,decoration={mark=between positions 0.5 and 0.5 step 0.1 with {\draw (.075,-.13) -- (0,0) -- (.075,.13) ;}}}] plot ({3*cos(\x)}, {3*sin(\x)});
\draw [black,domain=-90:-45,postaction={decorate,decoration={mark=between positions 0.45 and 0.55 step 0.06 with {\draw (.075,-.13) -- (0,0) -- (.075,.13) ;}}}] plot ({3*cos(\x)}, {3*sin(\x)});
\draw [black,domain=-135:-90,postaction={decorate,decoration={mark=between positions 0.45 and 0.55 step 0.045 with {\draw (.075,-.13) -- (0,0) -- (.075,.13) ;}}}] plot ({3*cos(\x)}, {3*sin(\x)});
\draw [black,domain=-180:-135,postaction={decorate,decoration={mark=between positions 0.42 and 0.58 step 0.045 with {\draw (.075,-.13) -- (0,0) -- (.075,.13) ;}}}] plot ({3*cos(\x)}, {3*sin(\x)});
\draw [black,domain=135:180,postaction={decorate,decoration={mark=between positions 0.5 and 0.5 step 0.1 with {\draw (.075,-.13) -- (0,0) -- (.075,.13) ;}}}] plot ({3*cos(\x)}, {3*sin(\x)});
\draw [black,domain=90:135,postaction={decorate,decoration={mark=between positions 0.45 and 0.55 step 0.06 with {\draw (.075,-.13) -- (0,0) -- (.075,.13) ;}}}] plot ({3*cos(\x)}, {3*sin(\x)});
\node[inner sep=1.75pt, circle, fill=black] (a) at (0,3) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (g) at (-3,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (b) at (3/1.414,3/1.414) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (h) at (-3/1.414,3/1.414) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (d) at (3/1.414,-3/1.414) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (f) at (-3/1.414,-3/1.414) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (e) at (0,-3) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (c) at (3,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (i) at (0,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v1) at (-.6,1.5) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v2) at (.6,1.5) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v3) at (1.5,.6) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v4) at (1.5,-.6) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v5) at (.6,-1.5) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v6) at (-.6,-1.5) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v7) at (-1.5,-.6) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v8) at (-1.5,.6) [draw] {};
\draw (v1) -- (h);
\draw (v1) -- (a);
\draw (v1) -- (v2);
\draw (v1) -- (v8);
\draw (v1) -- (i);
\draw (v2) -- (i);
\draw (v2) -- (b);
\draw (v2) -- (v3);
\draw (v2) -- (a);
\draw (v3) -- (b);
\draw (v3) -- (c);
\draw (v3) -- (i);
\draw (v3) -- (v4);
\draw (v4) -- (i);
\draw (v4) -- (d);
\draw (v4) -- (c);
\draw (v4) -- (v5);
\draw (v5) -- (i);
\draw (v5) -- (d);
\draw (v5) -- (e);
\draw (v5) -- (v6);
\draw (v6) -- (i);
\draw (v6) -- (e);
\draw (v6) -- (f);
\draw (v6) -- (v7);
\draw (v7) -- (i);
\draw (v7) -- (f);
\draw (v7) -- (g);
\draw (v7) -- (v8);
\draw (v8) -- (i);
\draw (v8) -- (g);
\draw (v8) -- (h);
\node[anchor = south ] at (a) {{$3$}};
\node[anchor = south ] at (b) {{$4$}};
\node[anchor = west ] at (c) {{$1$}};
\node[anchor = north ] at (d) {{$2$}};
\node[anchor = north ] at (e) {{$3$}};
\node[anchor = north ] at (f) {{$4$}};
\node[anchor = east ] at (g) {{$1$}};
\node[anchor = south ] at (h) {{$2$}};
\node[anchor = south west] at (i) {{$13$}};
\node[anchor = south west ] at (v1) {{$7$}};
\node[anchor = west ] at (v2) {{$8$}};
\node[anchor = south west ] at (v3) {{$9$}};
\node[anchor = north west ] at (v4) {{$10$}};
\node[anchor = north ] at (v5) {{$11$}};
\node[anchor = north ] at (v6) {{$12$}};
\node[anchor = south west ] at (v7) {{$5$}};
\node[anchor = south east ] at (v8) {{$6$}};
\end{tikzpicture}
$$
\caption{Triangulation of $\mathbb{R} P^2$}\label{fig:RP2}
\end{figure}
Observe that this triangulation, considered as a graph (after making the identifications indicated), contains $3$-cliques, each of which corresponds to a $2$-simplex of the triangulation, and does not contain any $4$-cliques. Therefore, if $G$ is the (abstract) graph, or tolerance space illustrated, its clique complex will give $\mathrm{cl}(G) = K$, where $K$ is the (abstract) simplicial complex indicated, and thus $|\mathrm{cl}(G)|$ will be homeomorphic to $\mathbb{R} P^2$.
It remains to display the abstract graph/tolerance space $G$ as a digital image, up to isomorphism.
\propref{prop: digital graph} provides a general scheme for doing this which, if followed strictly, would result in a digital image in $\mathbb{Z}^{12}$. We may adapt that scheme here and get off to a more efficient start (in terms of embedding dimension) by embedding $8$ vertices of $G$ in $\mathbb{Z}^3$. Remove the vertex $13$ from $G$. Observe that, if the identifications indicated were made now, we would obtain a triangulated M{\"o}bius strip. The vertex $13$ is then a cone-point on the boundary of this M{\"o}bius strip. Topologically, this is one way to see that $\mathbb{R} P^2$ may be embedded in $\mathbb{R}^4$. However, it seems that our particular triangulation of the M{\"o}bius strip does not embed in $\mathbb{Z}^4$ as a digital image. So remove also the vertices $1$, $2$, $3$, and $4$. What remains is an $8$-point cycle graph, which we may embed as a digital image in the cube $[-1, 1]^3 \subseteq \mathbb{Z}^3$. The coordinates of the $8$ vertices of this cycle graph may be assigned as follows:
$$
\begin{aligned}
5 &= (1, 0, 1) \quad 6 = (1, 1, 0) \quad 7 = (0, 1, -1) \quad 8 = (-1, 1, 0)\\
9 &= (-1, 0, 1) \quad 10 = (-1, -1, 0) \quad 11 = (0, -1, -1) \quad 12 = (1, -1, 0)\\
\end{aligned}
$$
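One can check directly that these eight points form a digital circle of length $8$ in $[-1,1]^3$; a quick verification sketch (ours), assuming the coordinate-wise adjacency on $\mathbb{Z}^3$:

```python
def adjacent(x, y):
    # distinct points of Z^3 whose coordinates all differ by at most 1
    return x != y and all(abs(a - b) <= 1 for a, b in zip(x, y))

# the vertices 5, 6, ..., 12 in cyclic order
cycle = [(1, 0, 1), (1, 1, 0), (0, 1, -1), (-1, 1, 0),
         (-1, 0, 1), (-1, -1, 0), (0, -1, -1), (1, -1, 0)]

# each point is adjacent to exactly its two cyclic neighbours, and no others
for i, p in enumerate(cycle):
    neighbours = {q for q in cycle if adjacent(p, q)}
    assert neighbours == {cycle[i - 1], cycle[(i + 1) % 8]}
```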
Since this is a digital image in $[-1, 1]^3$, we may now proceed with the general scheme of \propref{prop: digital graph} for embedding a graph as a digital image. The result will be $G$ embedded as a digital image $X$ in $[-1, 1]^8 \subseteq \mathbb{Z}^8$. We add the vertices $1$, $2$, $3$, $4$, and $13$, in that order, and as we add each vertex we preserve the adjacencies amongst prior vertices and add the adjacencies between them and the vertex being added.
Add vertex $1$: Embed the graph thus far into $[-1, 1]^3 \times \{0\} \subseteq \mathbb{Z}^4$; move the last coordinate of those vertices not adjacent to vertex $1$ to $-1$; add the vertex $1$ as $(0, 0, 0, 1)$. This results in
$$
\begin{aligned}
5 &= (1, 0, 1, 0) \quad 6 = (1, 1, 0, 0) \quad 7 = (0, 1, -1, -1) \quad 8 = (-1, 1, 0, -1)\\
9 &= (-1, 0, 1, 0) \quad 10 = (-1, -1, 0, 0) \quad 11 = (0, -1, -1, -1) \quad 12 = (1, -1, 0, -1)\\
1 &= (0, 0, 0, 1).
\end{aligned}
$$
The next three steps repeat this process, following the scheme of \propref{prop: digital graph}. These steps result in a digital image in $\mathbb{Z}^7$ with points
$$
\begin{aligned}
5 &= (1, 0, 1, 0, -1, -1, 0) \quad 6 = (1, 1, 0, 0, 0, -1, -1) \quad 7 = (0, 1, -1, -1, 0, 0, -1)\\
8 &= (-1, 1, 0, -1, -1, 0, 0) \quad 9 = (-1, 0, 1, 0, -1, -1, 0) \quad 10 = (-1, -1, 0, 0, 0, -1, -1)\\
11 &= (0, -1, -1, -1, 0, 0, -1) \quad 12 = (1, -1, 0, -1, -1, 0, 0) \quad 1 = (0, 0, 0, 1, 0, -1, 0) \\
2 &= (0, 0, 0, 0, 1, 0, -1) \quad 3 = (0, 0, 0, 0, 0, 1, 0) \quad 4 = (0, 0, 0, 0, 0, 0, 1).
\end{aligned}
$$
Finally, we add the vertex $13$, using the same scheme. This is the point that corresponds to the cone-point if we visualize projective space as the M{\"o}bius strip with a cone attached to its boundary. The result is the digital image $X \subseteq \mathbb{Z}^8$ consisting of the $13$ points
$$
\begin{aligned}
5 &= (1, 0, 1, 0, -1, -1, 0, 0) \quad 6 = (1, 1, 0, 0, 0, -1, -1, 0) \quad 7 = (0, 1, -1, -1, 0, 0, -1, 0)\\
8 &= (-1, 1, 0, -1, -1, 0, 0, 0) \quad 9 = (-1, 0, 1, 0, -1, -1, 0, 0) \quad 10 = (-1, -1, 0, 0, 0, -1, -1, 0)\\
11&= (0, -1, -1, -1, 0, 0, -1, 0) \quad 12 = (1, -1, 0, -1, -1, 0, 0, 0) \quad 1 = (0, 0, 0, 1, 0, -1, 0, -1) \\
2 &= (0, 0, 0, 0, 1, 0, -1, -1) \quad 3 = (0, 0, 0, 0, 0, 1, 0, -1) \quad 4 = (0, 0, 0, 0, 0, 0, 1, -1)\\
13& = (0, 0, 0, 0, 0, 0, 0, 1).
\end{aligned}
$$
As a digital image, recall, it is not necessary to specify adjacencies: these are determined by position, or coordinates, in $\mathbb{Z}^8$.
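The combinatorics of this digital image can be verified mechanically. The sketch below (ours) assumes the coordinate-wise adjacency on $\mathbb{Z}^8$ and checks that the cone-point $13$ is adjacent exactly to the M{\"o}bius boundary $5, \ldots, 12$, that the edge and triangle counts are the $36$ and $24$ forced by $\chi(\mathbb{R} P^2) = 13 - 36 + 24 = 1$, and that there are no $4$-cliques, so that the clique complex is $2$-dimensional:

```python
from itertools import combinations

def adjacent(x, y):
    # distinct points of Z^8 whose coordinates all differ by at most 1
    return x != y and all(abs(a - b) <= 1 for a, b in zip(x, y))

X = {
    5: (1, 0, 1, 0, -1, -1, 0, 0),    6: (1, 1, 0, 0, 0, -1, -1, 0),
    7: (0, 1, -1, -1, 0, 0, -1, 0),   8: (-1, 1, 0, -1, -1, 0, 0, 0),
    9: (-1, 0, 1, 0, -1, -1, 0, 0),   10: (-1, -1, 0, 0, 0, -1, -1, 0),
    11: (0, -1, -1, -1, 0, 0, -1, 0), 12: (1, -1, 0, -1, -1, 0, 0, 0),
    1: (0, 0, 0, 1, 0, -1, 0, -1),    2: (0, 0, 0, 0, 1, 0, -1, -1),
    3: (0, 0, 0, 0, 0, 1, 0, -1),     4: (0, 0, 0, 0, 0, 0, 1, -1),
    13: (0, 0, 0, 0, 0, 0, 0, 1),
}

def clique(S):
    return all(adjacent(X[u], X[v]) for u, v in combinations(S, 2))

edges = [S for S in combinations(X, 2) if clique(S)]
triangles = [S for S in combinations(X, 3) if clique(S)]

# the cone-point 13 is adjacent exactly to the Moebius boundary 5, ..., 12
assert {v for v in X if v != 13 and adjacent(X[13], X[v])} == set(range(5, 13))
# V - E + F = 13 - 36 + 24 = 1, the Euler characteristic of RP^2
assert len(edges) == 36 and len(triangles) == 24
# no 4-cliques: the clique complex is 2-dimensional, with no empty simplices
assert not any(clique(S) for S in combinations(X, 4))
```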
For this digital image $X$, by construction, we have $\mathrm{cl}(X)$ isomorphic to the complex represented by $G$, as a simplicial complex, and thus the spatial realization $|\mathrm{cl}(X)|$ is homeomorphic to $\mathbb{R} P^2$. As is well-known, we have $\pi_1(\mathbb{R} P^2; x_0) \cong \mathbb{Z}_2$ (see \cite[Ex.V.5.2]{Mas91}, for example). From
\thmref{thm: digital pi1 = edge pi1} and \thmref{thm: edge gp = pi1}, it follows that we have
$$\pi_1(X; x_0) \cong \mathbb{Z}_2.$$
Notice that it would also be possible to calculate $\pi_1(X; x_0) \cong \mathbb{Z}_2 $ using \corref{cor: contractible V}, mimicking the steps in the argument used for \cite[Ex.V.5.2]{Mas91}.
This example illustrates that a digital image may have torsion in its fundamental group.
\end{example}
Finally, for this section, we use the approach of \exref{ex: projective plane} to show the following general realization result.
\begin{theorem}\label{thm: fg realization}
Every finitely presented group occurs as the (digital) fundamental group of some digital image.
\end{theorem}
\begin{proof}
Suppose $G$ is a finitely presented group with finite presentation
$$G = \langle g_1, \ldots, g_n \mid R_1, \ldots, R_m \rangle.$$
Here, each $R_j$ is a word in the $g_i$ and their inverses $g_i^{-1}$. We may suppose these words are in reduced form (no occurrences of a generator juxtaposed with its own inverse). First we build, in the usual way but taking care to avoid empty simplices, a two-dimensional simplicial complex with edge group $G$. For the one-skeleton, take an $n$-fold one-point union of length-$4$ cycle graphs with vertices
$$V = \{ v_0 \} \cup \bigcup_{i=1}^n \{ v_{i, 1}, v_{i, 2}, v_{i, 3} \}$$
and edges
$$E = \bigcup_{i=1}^n \left\{ \{ v_0, v_{i, 1}\}, \{ v_{i, 1}, v_{i, 2} \}, \{ v_{i, 2}, v_{i, 3} \}, \{ v_{i, 3}, v_0 \} \right\}.$$
The case in which $n = 2$ is illustrated in \figref{fig:n=2} below.
The edge group of this graph is the free group on $n$ generators, which we may identify with the free group $\langle g_1, \ldots, g_n \rangle$ in an obvious way. Namely, each generator $g_i$ corresponds to the edge loop $\{ v_0, v_{i, 1}, v_{i, 2}, v_{i, 3}, v_0 \}$ of length $4$. The inverse of a generator corresponds to the reverse path: $g_i^{-1}$ corresponds to the edge loop $\{ v_0, v_{i, 3}, v_{i, 2}, v_{i, 1}, v_0 \}$. Because each of the generating cycle graphs is of length four, there are no $3$-cliques in this graph, hence no empty $2$-simplices.
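The absence of $3$-cliques in this one-point union can be confirmed for any $n$; a small sketch (ours), representing the wedge of length-$4$ cycle graphs abstractly:

```python
from itertools import combinations

def wedge_of_squares(n):
    """One-point union of n cycle graphs of length 4, wedged at v0."""
    V = {"v0"} | {f"v{i},{j}" for i in range(1, n + 1) for j in (1, 2, 3)}
    E = set()
    for i in range(1, n + 1):
        ring = ["v0", f"v{i},1", f"v{i},2", f"v{i},3", "v0"]
        E |= {frozenset(ring[t:t + 2]) for t in range(4)}
    return V, E

def has_triangle(V, E):
    # a 3-clique is a triple of vertices with all three pairs joined by an edge
    return any(all(frozenset(p) in E for p in combinations(T, 2))
               for T in combinations(V, 3))

V, E = wedge_of_squares(3)
assert len(V) == 3 * 3 + 1 and len(E) == 4 * 3
assert not has_triangle(V, E)  # hence no empty 2-simplices in the clique complex
```

The two neighbours of $v_0$ within a single cycle, $v_{i,1}$ and $v_{i,3}$, are not themselves adjacent, and distinct cycles meet only in $v_0$, which is why no triangle can occur.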
Next, for each relator $R_j$, we wish to attach a (triangulated) disk so as to introduce this relation into the edge group. Here, again, we just have to be careful not to introduce any empty $3$-simplices.
We may achieve this as follows. Consider a single relator $R$. Suppose $R$ is a word
$$R = g_{j_1}^{\epsilon_1} \cdots g_{j_k}^{\epsilon_k}$$
of length $k$ in the letters $\{ g_i, g_i^{-1} \}$, with each $\epsilon_r$ either $1$ or $-1$. Define a cycle graph $C$ of length $4k$ whose vertices we list in order as
$$V_C = \{ w_1, w_{1, 1}, w_{1, 2}, w_{1, 3}, w_2, w_{2, 1}, w_{2, 2}, w_{2, 3}, w_3, \ldots, w_k, w_{k, 1}, w_{k, 2}, w_{k, 3} \},$$
with adjacent vertices of this list joined by an edge of $C$, as well as the last vertex $w_{k, 3}$ and the first vertex $w_1$ joined by an edge. Take a copy of this cycle graph $C'$ with vertices
$$V_{C'} = \{ w'_1, w'_{1, 1}, w'_{1, 2}, w'_{1, 3}, w'_2, w'_{2, 1}, w'_{2, 2}, w'_{2, 3}, w'_3, \ldots, w'_k, w'_{k, 1}, w'_{k, 2}, w'_{k, 3} \}.$$
Now join the $i$th listed vertex of $C$ to the $i$th and $(i+1)$st listed vertices of $C'$ (treating the $(4k+1)$st as the first). This creates a ``triangulated annulus," with $C$ as outer boundary and $C'$ as inner boundary. Finally, add another vertex $w$, and join this vertex to every vertex of $C'$. A case in which $k=3$ is illustrated in \exref{ex: fg example} below (see \figref{fig:disk}).
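The annulus-plus-cone construction can likewise be generated and checked mechanically. A Python sketch (illustration only, with our own naming conventions): it builds, for a relator of length $k$, the outer cycle $C$ of length $4k$, the inner copy $C'$, the triangulated annulus between them, and the cone point $w$, then verifies the absence of $4$-cliques.

```python
# Sketch: the triangulated disk for a relator of length k, as an adjacency
# dictionary. Keys are ("C", i), ("C'", i) for 0 <= i < 4k, plus "w".
def relator_disk(k):
    m = 4 * k
    adj = {("C", i): set() for i in range(m)}
    adj.update({("C'", i): set() for i in range(m)})
    adj["w"] = set()

    def join(a, b):
        adj[a].add(b)
        adj[b].add(a)

    for i in range(m):
        join(("C", i), ("C", (i + 1) % m))    # outer cycle C
        join(("C'", i), ("C'", (i + 1) % m))  # inner cycle C'
        join(("C", i), ("C'", i))             # i-th of C to i-th of C' ...
        join(("C", i), ("C'", (i + 1) % m))   # ... and to (i+1)-st of C'
        join(("C'", i), "w")                  # cone off the inner cycle
    return adj

def no_four_clique(adj):
    """True iff the graph contains no 4-clique."""
    for u in adj:
        for v in adj[u]:
            for w in adj[u] & adj[v]:
                if adj[u] & adj[v] & adj[w]:  # a fourth mutual neighbour
                    return False
    return True

D = relator_disk(3)
assert len(D) == 2 * 12 + 1  # 4k vertices on each cycle, plus the cone point w
assert no_four_clique(D)     # hence no empty 3- or higher simplices
```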
So far, we have built a triangulated disk that has no $4$-cliques. Now attach this disk to the one-point union of length-$4$ cycle graphs, according to the letters of the relator $R$. Namely, identify for each $i$ the edge loops (vertex-for-vertex and edge-for-edge, with $w_{k+1}$ interpreted as $w_1$)
$$w_i, w_{i, 1}, w_{i, 2}, w_{i, 3}, w_{i+1} \text{ with } \begin{cases} v_0, v_{j_i, 1}, v_{j_i, 2}, v_{j_i, 3}, v_0 & \text{ if } \epsilon_i = 1\\
v_0, v_{j_i, 3}, v_{j_i, 2}, v_{j_i, 1}, v_0 & \text{ if } \epsilon_i = -1.\end{cases}$$
Now it is standard that attaching this disk in this way introduces the relation $R$ into the edge group (and no other relations). The main point here, though, is that we have introduced the desired relation by building a $2$-dimensional simplicial complex that has no empty $2$-simplices, and no $4$-cliques (hence no empty $3$- or higher simplices). Considering the $1$-skeleton of the complex after attaching the disk as a graph, its clique complex is the $2$-dimensional complex we have constructed.
It is clear that we may apply this last step to each of the relators $R_j$. Doing so constructs a $2$-dimensional simplicial complex $K$: the original one-point union of $n$ length-$4$ cycle graphs, with $m$ triangulated disks attached as in the step above. The edge group of $K$, by construction, is $G$. As a (finite, simple) graph, we may embed the one-skeleton of $K$ into some $\mathbb{Z}^n$ (possibly of high dimension) as a digital image, following the scheme of \propref{prop: digital graph}. Furthermore, from the way in which we have constructed and attached the triangulated disks, the clique complex of this digital image, considered as the graph we started from, is exactly $K$. Then the (digital) fundamental group of this digital image is $G$, as follows from \thmref{thm: digital pi1 = edge pi1}.
\end{proof}
\begin{example}\label{ex: fg example}
We illustrate the above result with an example. Take
$$G = \langle g_1, g_2 \mid g_1 g_2 g_1^{-1} \rangle.$$
Following the recipe of the proof of \thmref{thm: fg realization}, we start with a graph that is a one-point union of two cycle graphs of length $4$:
\begin{figure}[h!]
\centering
$$
\begin{tikzpicture}[scale=1.3,
decoration={markings},
]
\node[inner sep=1.75pt, circle, fill=black] (v0) at (0,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v21) at (-1,1) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v22) at (-2,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v23) at (-1,-1) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v13) at (1,1) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v12) at (2,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (v11) at (1,-1) [draw] {};
\draw (v0) -- (v21);
\draw (v0) -- (v23);
\draw (v0) -- (v13);
\draw (v0) -- (v11);
\draw (v22) -- (v21);
\draw (v22) -- (v23);
\draw (v12) -- (v13);
\draw (v12) -- (v11);
\node[anchor = north ] at (v0) {{$v_{0}$}};
\node[anchor = south ] at (v21) {{$v_{2,1}$}};
\node[anchor = east ] at (v22) {{$v_{2,2}$}};
\node[anchor = north ] at (v23) {{$v_{2,3}$}};
\node[anchor = south ] at (v13) {{$v_{1,3}$}};
\node[anchor = west ] at (v12) {{$v_{1,2}$}};
\node[anchor = north ] at (v11) {{$v_{1,1}$}};
\end{tikzpicture}
$$
\caption{Two-fold one-point union of cycle graphs of length $4$. }\label{fig:n=2}
\end{figure}
Next, we construct a triangulated disk whose boundary corresponds to the relation we wish to introduce. Once again following the recipe of the proof of \thmref{thm: fg realization}, this will consist of: a cycle graph of length $12$; an intermediate cycle graph of the same length; an evident triangulation of the ``annulus" with these cycle graphs as boundary; a cone-point added to ``cone-off" the inner cycle graph. In \figref{fig:disk}, we have illustrated the result, and also indicated the identifications we make along the boundary, with vertices and edges identified with their counterparts in the one-point union illustrated above.
\begin{figure}[h!]
\centering
$$
\begin{tikzpicture}[scale=1.3,
decoration={markings},
]
\draw[black, thick] (0,0) circle [radius=3];
\draw[black, thick] (0,0) circle [radius=1.5];
\draw [thick,domain=0:120, <->] plot ({4.4*cos(\x)}, {4.4*sin(\x)});
\draw [thick,domain=120:240, <->] plot ({4.4*cos(\x)}, {4.4*sin(\x)});
\draw [thick,domain=240:360, <->] plot ({4.4*cos(\x)}, {4.4*sin(\x)});
\node[] at (3.3,3.3) {{$g_1$}};
\node[] at (-4.6,0) {{$g_2$}};
\node[] at (3.3,-3.3) {{$g^{-1}_1$}};
\node[inner sep=1.75pt, circle, fill=black] (w) at (0,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w1) at (3,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w2) at (2.598,1.5) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w3) at (1.5,2.598) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w4) at (0,3) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w5) at (-1.5,2.598) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w6) at (-2.598,1.5) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w7) at (-3,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w8) at (-2.598,-1.5) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w9) at (-1.5,-2.598) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w10) at (0,-3) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w11) at (1.5,-2.598) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w12) at (2.598,-1.5) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w1') at (3/2,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w2') at (2.598/2,1.5/2) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w3') at (1.5/2,2.598/2) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w4') at (0,3/2) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w5') at (-1.5/2,2.598/2) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w6') at (-2.598/2,1.5/2) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w7') at (-3/2,0) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w8') at (-2.598/2,-1.5/2) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w9') at (-1.5/2,-2.598/2) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w10') at (0,-3/2) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w11') at (1.5/2,-2.598/2) [draw] {};
\node[inner sep=1.75pt, circle, fill=black] (w12') at (2.598/2,-1.5/2) [draw] {};
\draw (w) -- (w1');
\draw (w) -- (w2');
\draw (w) -- (w3');
\draw (w) -- (w4');
\draw (w) -- (w5');
\draw (w) -- (w6');
\draw (w) -- (w7');
\draw (w) -- (w8');
\draw (w) -- (w9');
\draw (w) -- (w10');
\draw (w) -- (w11');
\draw (w) -- (w12');
\draw (w1') -- (w12);
\draw (w1') -- (w1);
\draw (w2') -- (w1);
\draw (w2') -- (w2);
\draw (w3') -- (w2);
\draw (w3') -- (w3);
\draw (w4') -- (w3);
\draw (w4') -- (w4);
\draw (w5') -- (w4);
\draw (w5') -- (w5);
\draw (w6') -- (w5);
\draw (w6') -- (w6);
\draw (w7') -- (w6);
\draw (w7') -- (w7);
\draw (w8') -- (w7);
\draw (w8') -- (w8);
\draw (w9') -- (w8);
\draw (w9') -- (w9);
\draw (w10') -- (w9);
\draw (w10') -- (w10);
\draw (w11') -- (w10);
\draw (w11') -- (w11);
\draw (w12') -- (w11);
\draw (w12') -- (w12);
\node[anchor = south west ] at (w) {{$w$}};
\node[anchor = north east ] at (w1') {{$w_1'$}};
\node[anchor = west ] at (w2') {{$w_{1,1}'$}};
\node[anchor = south west ] at (w3') {{$w_{1,2}'$}};
\node[anchor = south west ] at (w4') {{$w_{1,3}'$}};
\node[anchor = south east ] at (w5') {{$w_2'$}};
\node[anchor = east ] at (w6') {{$w_{2,1}'$}};
\node[anchor = north west ] at (w7') {{$w_{2,2}'$}};
\node[anchor = north ] at (w8') {{$w_{2,3}'$}};
\node[anchor = north west ] at (w9') {{$w_3'$}};
\node[anchor = south west ] at (w10') {{$w_{3,1}'$}};
\node[anchor = north west ] at (w11') {{$w_{3,2}'$}};
\node[anchor = north west ] at (w12') {{$w_{3,3}'$}};
\node[anchor = west ] at (w1) {{$w_{1}\sim v_{0}$}};
\node[anchor = south west ] at (w2) {{$w_{1,1}\sim v_{1,1}$}};
\node[anchor = south west ] at (w3) {{$w_{1,2}\sim v_{1,2}$}};
\node[anchor = south ] at (w4) {{$w_{1,3}\sim v_{1,3}$}};
\node[anchor = south east ] at (w5) {{$w_{2}\sim v_{0}$}};
\node[anchor = east ] at (w6) {{$w_{2,1}\sim v_{2,1}$}};
\node[anchor = east ] at (w7) {{$w_{2,2}\sim v_{2,2}$}};
\node[anchor = north east ] at (w8) {{$w_{2,3}\sim v_{2,3}$}};
\node[anchor = north east ] at (w9) {{$w_{3}\sim v_{0}$}};
\node[anchor = north ] at (w10) {{$w_{3,1}\sim v_{1,3}$}};
\node[anchor = north west ] at (w11) {{$w_{3,2}\sim v_{1,2}$}};
\node[anchor = north west ] at (w12) {{$w_{3,3}\sim v_{1,1}$}};
\end{tikzpicture}
$$
\caption{Triangulated disk, plus attachments. }\label{fig:disk}
\end{figure}
Identifying the boundary of this triangulated disk, in the way indicated, to the one-point union of length-$4$ cycle graphs illustrated in \figref{fig:n=2} results in a $2$-dimensional simplicial complex whose edge group is $G$. This simplicial complex has $7+12+1=20$ vertices. Following our general scheme for embedding a graph as a digital image, we may realize the one-skeleton of this complex as a digital image in some $\mathbb{Z}^n$ with $n \leq 19$ (a considerably smaller dimension should be possible). Furthermore, the clique complex of this digital image, considered as the graph that we realized, is exactly this simplicial complex, with edge group $G$. This digital image therefore realizes the group $G$.
\end{example}
\begin{remark}
\thmref{thm: fg realization}, \propref{prop: digital graph}, \exref{ex: projective plane} and \exref{ex: fg example} taken together raise interesting questions. First, is it the case that every homotopy type may be taken as the spatial realization of a simplicial complex that is a clique complex? As we saw in the above example, in some cases at least, triangulations commonly used to represent a space as a simplicial complex need not be clique complexes. Second, when we do have a homotopy type represented as the spatial realization of some clique complex $\mathrm{cl}(G)$, we may always display $G$ as a digital image, but the embedding dimension may be quite high. It would be interesting, for example, to know whether it is possible to have a digital image in $\mathbb{Z}^4$ whose clique complex has spatial realization homeomorphic to $\mathbb{R} P^2$. Generally speaking, even when we have a graph $G$ whose $E(G, v_0)$ gives some group of interest, it does not seem easy to determine the minimal embedding dimension of $G$ as a digital image. For instance, it is not immediately clear which groups might be obtained as the fundamental groups of 3D digital images.
\end{remark}
\section{Path Shortening and 2D Digital Images}\label{sec: 2D free}
Whilst \thmref{thm: digital pi1 = edge pi1} and \thmref{thm: edge gp = pi1} allow us to use many results from the topological setting in the digital setting, they do not automatically resolve all questions about the digital fundamental group. For example, as just remarked, it is not immediately clear which groups might be obtained as the fundamental groups of 3D digital images.
Likewise, it is not immediately clear what the digital fundamental group of a general 2D digital image may be.
In fact we will show in Theorem \ref{thm: pi is free} below that the fundamental group of every 2D digital image is a free group. Now, the clique complex of a 2D image, generally speaking, is a simplicial complex with simplices of dimension up to $3$. There is no general reason why such a simplicial complex should have fundamental group that is a free group. So some argument is required, either in the digital setting or, using \thmref{thm: digital pi1 = edge pi1} and \thmref{thm: edge gp = pi1}, in the simplicial complex setting or in the topological setting. We argue in the digital setting.
To prepare for this result, we establish some basic results about paths and digital circles.
\begin{definition}
Let $X \subseteq \mathbb{Z}^r$ be any digital image. Suppose we have two points $a, b \in X$ that are non-adjacent. We say that a set of $n+2$ (distinct) points $P = \{ a, x_1, \ldots, x_n, b\} \subseteq X$ with $n \geq 1$ is a \emph{contractible path in $X$ from $a$ to $b$} of length $n+1$ if we have adjacencies $a \sim_X x_1$, $x_i \sim_X x_{i+1}$ for each $1 \leq i \leq n-1$, and $x_n \sim_X b$, and no other adjacencies amongst the elements of $P$.
\end{definition}
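For concreteness, this definition can be checked mechanically. A Python sketch (illustration only), using the adjacency for digital images in $\mathbb{Z}^r$ employed throughout: distinct points are adjacent exactly when all their coordinates differ by at most $1$.

```python
# Adjacency in Z^r: distinct points, every coordinate differing by <= 1.
def adjacent(p, q):
    return p != q and all(abs(a - b) <= 1 for a, b in zip(p, q))

def is_contractible_path(points):
    """points = [a, x_1, ..., x_n, b] with n >= 1: all distinct, consecutive
    points adjacent, and no other adjacencies among the points."""
    if len(points) != len(set(points)) or len(points) < 3:
        return False
    for i, p in enumerate(points):
        for j in range(i + 1, len(points)):
            if adjacent(p, points[j]) != (j == i + 1):  # adjacent iff consecutive
                return False
    return True

assert is_contractible_path([(0, 0), (1, 0), (2, 0)])
# Here the endpoints (0, 0) and (0, 1) are adjacent, so this is not one:
assert not is_contractible_path([(0, 0), (1, 1), (0, 1)])
```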
The relationship on pairs of points of having a contractible path from one to the other is clearly symmetric: a contractible path from $a$ to $b$ will serve as a contractible path from $b$ to $a$.
The nomenclature is justified by the following observations.
\begin{lemma}
Suppose we have a set of points $P = \{ a, x_1, \ldots, x_n, b\} \subseteq X$ that is a \emph{contractible path in $X$ from $a$ to $b$}.
\begin{itemize}
\item[(A)] There is a path $\alpha\colon I_{n+1} \to X$ with $\alpha(0) = a$, $\alpha(n+1) = b$, and $\alpha(i) = x_i$ for $1 \leq i \leq n$.
\item[(B)] This path gives an isomorphism of digital images $I_{n+1} \cong P$.
\item[(C)] With $a \in P$ as basepoint, $P$ is a based-contractible subset of $X$ (contractible in itself, not just in $X$).
\end{itemize}
\end{lemma}
\begin{proof}
(A) This point is more or less tautological. We just need to observe that $\alpha$ as defined is continuous, which is to say that we have $\alpha(i) \sim_X \alpha(i+1)$ for each $0 \leq i \leq n$. This is part of the data given about $P$.
(B) The path $\alpha \colon I_{n+1} \to P$ has continuous inverse $g\colon P \to I_{n+1}$ given by $g(a) = 0$, $g(b) = n+1$, and $g(x_i) = i$ for $1 \leq i \leq n$. Notice that this depends on the points of $P$ being distinct from each other (no repeats).
(C) An interval is based-contractible, via a based contracting homotopy, to any of its points. In Example 3.13 of \cite{LOS19c}, for example, we give a contracting homotopy $H \colon I_{n+1} \times I_{n+1} \to I_{n+1}$ that satisfies $H(i, 0) = i$ and $H(i, n+1) = 0$, and is a based homotopy in the sense that we also have $H(0, t) = 0$ for all $t \in I_{n+1}$. The homotopy is defined by
$$H(i, t) = \begin{cases} i & 0 \leq i \leq n+1-t \\
n+1-t & n+2-t \leq i \leq n+1.\end{cases}$$
This evidently satisfies $H(i, 0) = i$, $H(i, n+1) = 0$, and $H(0, t) = 0$. The only issue is whether $H$ is continuous.
Since we omitted the details of the check on continuity in Example 3.13 of \cite{LOS19c} (and also in Example 3.19 of \cite{LOS19a}), we provide the details here.
To check continuity, suppose that we have $(i, t) \sim_{I_{n+1} \times I_{n+1}} (i', t')$. We must show that $H(i, t) \sim_{I_{n+1}} H(i', t')$.
If the coordinates $(x, y)$ of both points satisfy $x+y \leq n+1$, then we have $i \leq n+1-t$ and $i' \leq n+1-t'$, and the formula for $H$ gives $| H(i', t') - H(i, t)| = |i' - i| \leq 1$, since $(i, t) \sim (i', t')$ means that we have $|i' - i| \leq 1$ and $|t' - t| \leq 1$.
If the coordinates $(x, y)$ of both points satisfy $x+y \geq n+1$, then $| H(i', t') - H(i, t)| = |(n+1-t') - (n+1-t)| = |(t-t')| \leq 1$, again because $(i, t) \sim (i', t')$. The only case that remains, then, is that in which the coordinates of one point satisfy $x+y \leq n$ and those of the other point satisfy $x+y \geq n+2$. Since $(i, t) \sim (i', t')$ entails $| (i'+t') - (i+t)| \leq 2$, we must have $i+t = n$ and $i'+t' = n+2$. (There is no loss of generality in writing the point to the lower-left of the other as $(i, t)$.) But then we have $t' -t =1$ (as well as $i'-i = 1$), from the adjacency $(i, t) \sim (i', t')$. It follows that $| H(i', t') - H(i, t)| = |(n+1-t') - i| = |(n+1-t') - (n-t)|= |1-(t'-t)| = 0$. In all cases, we have $H(i, t) \sim_{I_{n+1}} H(i', t')$, so $H$ is indeed continuous.
Now contractibility is preserved by an isomorphism of digital images (it is also preserved by other, much more general notions of ``same-ness"). Here, the isomorphisms $\alpha$ and $g$ of part (B) define a homotopy
$$G = \alpha\circ H \circ (g \times \mathrm{id}_{I_{n+1}}) \colon P \times I_{n+1} \to P,$$
that satisfies $G(p, 0) = \alpha\circ g(p) = p$ and $G(p, n+1) = \alpha(0) = a$ for each $p \in P$. The homotopy $G$ also satisfies $G(a, t) = \alpha\circ H(0, t) = \alpha(0) = a$ for each $t \in I_{n+1}$, so it is a based contraction of $P$ in the sense asserted.
\end{proof}
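The continuity of the contracting homotopy $H$ can also be confirmed by direct enumeration for small $n$. A Python sketch (a sanity check mirroring the case analysis above, not a replacement for the proof):

```python
# Numerical confirmation that H(i, t) = i for i <= n+1-t, and n+1-t otherwise,
# is continuous on I_{n+1} x I_{n+1} for small n.
def H(i, t, n):
    return i if i <= n + 1 - t else n + 1 - t

def is_continuous(n):
    N = n + 1  # I_{n+1} = {0, 1, ..., n+1}
    for i in range(N + 1):
        for t in range(N + 1):
            for di in (-1, 0, 1):
                for dt in (-1, 0, 1):
                    i2, t2 = i + di, t + dt
                    if 0 <= i2 <= N and 0 <= t2 <= N:
                        # adjacent (or equal) inputs must map to adjacent
                        # (or equal) outputs
                        if abs(H(i, t, n) - H(i2, t2, n)) > 1:
                            return False
    return True

assert all(is_continuous(n) for n in range(8))
assert all(H(i, 0, 5) == i for i in range(7))  # H(i, 0) = i
assert all(H(i, 6, 5) == 0 for i in range(7))  # H(i, n+1) = 0
```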
If we remove a point from a digital circle, we obtain a contractible path. Whilst there are many contractible paths that may be ``completed" to a digital circle by the addition of a suitable point, there are examples of contractible paths that may not be completed to a circle, even when we have $a$ and $b$ adjacent to a common point of $X$.
\begin{example}
Take $X\subseteq \mathbb{Z}^2$ to be $X=\{(-1,0), (0,0), (1,0)\}$. Then $X$ is a contractible path that cannot be completed to a digital circle by adding a single point: the only points not in $X$ adjacent to both $a=(-1,0)$ and $b=(1,0)$ are $(0,1)$ and $(0,-1)$, and each of these is also adjacent to $(0,0)$.
\end{example}
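This example is small enough to check by machine; a Python sketch (illustration only), under the usual $\mathbb{Z}^2$ adjacency:

```python
# Verify the example: the only candidate completing points are (0, 1) and
# (0, -1), and each is adjacent to the middle point (0, 0), so adding either
# one creates an extra adjacency and cannot yield a digital circle.
def adjacent(p, q):
    return p != q and all(abs(u - v) <= 1 for u, v in zip(p, q))

a, mid, b = (-1, 0), (0, 0), (1, 0)
X = [a, mid, b]
cands = [(x, y) for x in range(-2, 3) for y in range(-2, 3)
         if (x, y) not in X and adjacent((x, y), a) and adjacent((x, y), b)]
assert sorted(cands) == [(0, -1), (0, 1)]
assert all(adjacent(c, mid) for c in cands)  # the obstruction to completion
```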
We have the following ``shortening lemma."
\begin{lemma}\label{lem: contract path}
Let $X \subseteq \mathbb{Z}^n$ be any digital image. For non-adjacent points $a, b \in X$, if there is a path in $X$ from $a$ to $b$, then the path may be shortened to a contractible path in $X$ from $a$ to $b$.
\end{lemma}
\begin{proof}
Suppose we have a path $\gamma\colon I_N \to X$ with $\gamma(0) = a$ and $\gamma(N) = b$. If we have $\gamma(i) = \gamma(i+k)$ for some $k \geq 1$ and $0 \leq i \leq N-k$, then we simply delete the part of the path $\gamma$ between the repeated values (including one of the repeats). Specifically, we define a shorter path $\gamma'\colon I_{N-k} \to X$ by
$$\gamma'(s) = \begin{cases} \gamma(s) & 0 \leq s \leq i\\
\gamma(s+k) & i+1 \leq s \leq N-k.\end{cases}$$
These two parts of $\gamma$ join to give a continuous $\gamma'$, since at the join we have $\gamma'(i) = \gamma(i) = \gamma(i+ k)$ and $\gamma'(i+1) = \gamma(i+k+1)$, which are adjacent in $X$ by the continuity of $\gamma$.
By repeating the first step sufficiently many times, we may assume without loss of generality that $\gamma'\colon I_{N'} \to X$ is a (shorter) path from $a$ to $b$ that does not have any repeated values. Suppose we have $\gamma'(i) \sim_X \gamma'(i+k')$ for some $k' \geq 2$ and $0 \leq i \leq N'-2$; then again we simply delete the part of the path $\gamma'$ between the adjacent values (leaving both adjacent values themselves). Specifically, we define a shorter path $\gamma''\colon I_{N'-k'+1} \to X$ by
$$\gamma''(s) = \begin{cases} \gamma'(s) & 0 \leq s \leq i\\
\gamma'(s+k'-1) & i+1 \leq s \leq N'-k'+1.\end{cases}$$
These two parts join to give a continuous $\gamma''$, since at the join we have $\gamma''(i) = \gamma'(i)$ and $\gamma''(i+1) = \gamma'(i+k')$, which are adjacent in $X$ by the assumption on $\gamma'$. By repeating this step sufficiently many times, we arrive at a path $\gamma''\colon I_{N''} \to X$ from $a$ to $b$ that satisfies
$$\{ \gamma''(i) \mid 0 \leq i \leq N'' \} \subseteq \{ \gamma(i) \mid 0 \leq i \leq N \},$$
so it is a ``shortening" of the original path from $a$ to $b$. It also satisfies $\gamma''(i) \not\sim \gamma''(i+k)$ for $k \geq 2$, for each $0 \leq i \leq N''-2$, and does not contain any repeated values. The set $\{ \gamma''(i) \mid 0 \leq i \leq N'' \}$ gives a contractible path in $X$ from $a$ to $b$.
\end{proof}
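The two deletion steps in this proof translate directly into an algorithm. A Python sketch (illustration only; `shorten` and `adjacent` are our own names), again using the $\mathbb{Z}^r$ adjacency from above:

```python
# Shorten a digital path so that its point set is a contractible path:
# step 1 removes repeated values; step 2 shortcuts across adjacencies
# gamma(i) ~ gamma(i+k) with k >= 2, keeping both adjacent values.
def adjacent(p, q):
    return p != q and all(abs(u - v) <= 1 for u, v in zip(p, q))

def shorten(path):
    # Step 1: delete the segment between repeated values (keep one copy).
    i = 0
    while i < len(path):
        if path[i] in path[i + 1:]:
            j = len(path) - 1 - path[::-1].index(path[i])  # last occurrence
            path = path[:i] + path[j:]
        else:
            i += 1
    # Step 2: shortcut chords, always taking the furthest adjacent point.
    i = 0
    while i < len(path):
        for j in range(len(path) - 1, i + 1, -1):
            if adjacent(path[i], path[j]):
                path = path[:i + 1] + path[j:]
                break
        i += 1
    return path

q = shorten([(0, 0), (0, 1), (1, 1), (2, 1), (2, 0), (3, 0)])
assert q[0] == (0, 0) and q[-1] == (3, 0)          # endpoints preserved
assert len(q) == len(set(q))                       # no repeats
assert all(adjacent(q[s], q[s + 1]) for s in range(len(q) - 1))
assert all(not adjacent(q[s], q[t])                # no other adjacencies
           for s in range(len(q)) for t in range(s + 2, len(q)))
```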
We have one more ingredient to prepare for our main result. We recall a definition from \cite{LOS19c}.
\begin{definition}[Based Homotopy Equivalence]\label{def: based h.e.}
Let $f \colon X \to Y$ be a based map of based digital images. If there is a based map $g \colon Y \to X$ such that $g\circ f \approx \text{id}_X$ and $f \circ g \approx \text{id}_Y$, then $f$ is a \emph{based-homotopy equivalence}, and $X$ and $Y$ are said to be \emph{based-homotopy equivalent}, or to have the same \emph{based-homotopy type}.
\end{definition}
In this definition, the notation ``$\approx$" denotes based homotopy of based maps, as we recalled in \secref{sec: basics}. As we remarked in \cite{LOS19c}, the notion of based homotopy equivalence of digital images is often too rigid to be of much use as a notion of ``same-ness" for digital images. However, in the following result, we do find a use for it. It follows easily from Lemma 3.11 of \cite{LOS19c} that if $X$ and $Y$ are based-homotopy equivalent digital images, then their digital fundamental groups are isomorphic.
We are now ready to prove the main result of this section.
\begin{theorem}\label{thm: pi is free}
Let $X \subseteq \mathbb{Z}^2$ be a connected 2D digital image. Then $\pi_1(X; x_0)$ is a free group.
\end{theorem}
\begin{proof}
We argue by induction on the (finite) number of points $N$ in the digital image. Induction starts with $N= 1, 2,$ or $3$, where there is nothing to prove ($X$ is contractible to a point in these cases, so has $\pi_1(X;x_0) \cong \{\mathbf{e}\}$).
So assume inductively that, for any 2D digital image with $n$ or fewer points, the fundamental group is free. Now suppose $X$ is a digital image with $n+1$ points.
We may totally order the points of $X$ by lexicographic order. That is, $(x_1, y_1) > (x_2, y_2)$ if $x_1 > x_2$, and $(x, y_1) > (x, y_2)$ if $y_1 > y_2$. Suppose that $x\in X$ is the maximal point in this ordering, so that there are no points of $X$ with a greater first coordinate, and the only points with the same first coordinate as that of $x$ have smaller second coordinate. The possible neighbours of $x$ in $X$ are illustrated as follows (there are at most $4$ of them):
$$
\begin{tabular}{c|c}
$a$ & \\
\hline \\
$c$ & $x$ \\
\hline \\
$b_1$ & $b_2$ \\
\end{tabular}
$$
The \emph{link of $x$}, which we denote by $\mathrm{lk}(x)$, is that subset of $\{ a, c, b_1, b_2 \}$ consisting of those points present in $X$. First note the exceptional case in which $X = \{x\} \cup \mathrm{lk}(x)$: this entails that $n$ is relatively small and that $X$ consists of at most the $5$ points illustrated, in which case $X$ itself is contractible, with trivial fundamental group. So from now on, assume that we have points in $X$ in addition to those of $\{x\} \cup \mathrm{lk}(x)$. Furthermore, $\mathrm{lk}(x)$ must be non-empty, otherwise $X$ would be disconnected; we assume a choice of basepoint in $\mathrm{lk}(x)$.
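Locating the maximal point and its link is straightforward to do mechanically. A Python sketch (a hypothetical helper matching the diagram above; the labels $a$, $c$, $b_1$, $b_2$ are positions relative to $x$):

```python
# Find the lexicographically maximal point x of a finite 2D digital image
# and label its possible neighbours a, c, b1, b2 as in the diagram.
def link_of_max(X):
    x = max(X)  # Python's tuple comparison is exactly lexicographic order
    # Relative positions: a above-left, c left, b1 below-left, b2 below.
    labels = {(-1, 1): "a", (-1, 0): "c", (-1, -1): "b1", (0, -1): "b2"}
    link = set()
    for p in X:
        d = (p[0] - x[0], p[1] - x[1])
        if d in labels:  # by maximality of x, these are the only neighbours
            link.add(labels[d])
    return x, link

x, link = link_of_max({(0, 0), (1, 0), (1, 1), (2, 0)})
assert x == (2, 0)
assert link == {"a", "c"}
```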
We divide and conquer, based on the form of this link.
\textbf{Case 1: $c \in \mathrm{lk}(x)$.} In this case, we claim that $X$ is based-homotopy equivalent to $X - \{x\}$.
In fact we show that $X - \{x\}$ is a deformation retract of $X$. Define a retraction $r\colon X \to X - \{x\}$ on each $y \in X$ by
$$r(y) = \begin{cases} y & y \not= x\\ c & y = x.\end{cases}$$
Let $i \colon X - \{x\} \to X$ denote the inclusion. We have $r\circ i = \mathbf{id}\colon X - \{x\} \to X - \{x\}$. We claim that
$$H(y, t) = \begin{cases} y & t=0\\ i\circ r(y) & t=1\end{cases}$$
defines a (continuous) homotopy $H \colon X \times I_1 \to X$. To confirm continuity, suppose we have $(y, t) \sim (y', t')$ in $X \times I_1 $. If neither $y$ nor $y'$ are $x$, then we have $H(y, t) = y$ and $H(y', t') = y'$, which are adjacent because $(y, t) \sim (y', t')$ implies $y \sim y'$ in $X$. If both $y$ and $y'$ are $x$, then we have $H(x, 0) = x$ and $H(x, 1) = c$, which are adjacent. So the only remaining adjacencies we need check are for $H(x, 0)$ and $H(y', t')$ and for $H(x, 1)$ and $H(y', t')$ with $y' \not= x$ but $y' \sim_X x$. But then we have $H(x, 0) = x \sim y' = H(y', t')$, and $H(x, 1) = c \sim y' = H(y', t')$. This latter follows because the only possibilities for $y' \sim x$ are
from $\mathrm{lk}(x)$, and each of these is also adjacent to $c$.
This completes the check of the continuity of $H$, so it is a homotopy $\mathbf{id}_X \approx i\circ r\colon X \to X$. With a choice of basepoint in $\mathrm{lk}(x)$, $H$ is evidently a based homotopy $\mathbf{id}_X \approx i\circ r$. Then, as claimed, $X$ is based-homotopy equivalent to $X - \{x\}$ and thus these two digital images have isomorphic fundamental groups. Since $X - \{x\}$ has $n$ vertices, its fundamental group is a free group, by our induction hypothesis, and so this establishes the induction step in this case.
For the remainder of the argument, we suppose that $c$ is absent, so that $\mathrm{lk}(x) \subseteq \{ a, b_1, b_2 \}$.
\textbf{Case 2: $\{ b_1, b_2 \} \subseteq \mathrm{lk}(x)$.} In this case, we claim that $X$ is based-homotopy equivalent to $X - \{b_2\}$.
Again, we show that $X - \{b_2\}$ is a deformation retract of $X$. This is similar to the previous case. Define a retraction $r\colon X \to X - \{b_2\}$ on each $y \in X$ by
$$r(y) = \begin{cases} y & y \not= b_2\\ b_1 & y = b_2.\end{cases}$$
Let $i \colon X - \{b_2\} \to X$ denote the inclusion. We have $r\circ i = \mathbf{id}\colon X - \{b_2\} \to X - \{b_2\}$. We claim that
$$H(y, t) = \begin{cases} y & t=0\\ i\circ r(y) & t=1\end{cases}$$
defines a (continuous) homotopy $H \colon X \times I_1 \to X$. To confirm continuity, suppose we have $(y, t) \sim (y', t')$ in $X \times I_1 $. If neither $y$ nor $y'$ are $b_2$, then we have $H(y, t) = y$ and $H(y', t') = y'$, which are adjacent because $(y, t) \sim (y', t')$ implies $y \sim y'$ in $X$. If both $y$ and $y'$ are $b_2$, then we have $H(b_2, 0) = b_2$ and $H(b_2, 1) = b_1$, which are adjacent. So the only remaining adjacencies we need check are for $H(b_2, 0)$ and $H(y', t')$ and for $H(b_2, 1)$ and $H(y', t')$ with $y' \not= b_2$ but $y' \sim_X b_2$. But then we have $H(b_2, 0) = b_2 \sim y' = H(y', t')$, and $H(b_2, 1) = b_1 \sim y' = H(y', t')$. This latter follows because the only possibilities for $y' \sim b_2$ are
from $\{ x, b_1, z_1, z_2 \}$ (see the figure below, and recall that there are no points in $X$ with first coordinate greater than that of $x$---also, we are supposing $c$ is absent from $X$, but it does not affect the argument here even if we include it), and each of these is also adjacent to $b_1$.
$$
\begin{tabular}{c|c}
$a$ & \\
\hline \\
$c$ & $x$ \\
\hline \\
$b_1$ & $b_2$ \\
\hline \\
$z_1$ & $z_2$ \\
\end{tabular}
$$
This completes the check of the continuity of $H$, so it is a homotopy $\mathbf{id}_X \approx i\circ r\colon X \to X$. If we choose, say, basepoint $b_1$, then $H$ is evidently a based homotopy $\mathbf{id}_X \approx i\circ r$. Then the induction step goes through in this case just as in the previous case.
\textbf{Case 3: $\mathrm{lk}(x) = \{ a\}$, $\mathrm{lk}(x) = \{ b_1\}$, or $\mathrm{lk}(x) = \{ b_2\}$.}
In this case, choose whichever point is in $\mathrm{lk}(x)$ as basepoint, and set $U = \{x\} \cup \mathrm{lk}(x)$ and $V = X - \{x\}$. Then $X = U \cup V$ and $U \cap V = \mathrm{lk}(x)$, a single point (the basepoint). Furthermore, the complements $U' = X - (\{x\} \cup \mathrm{lk}(x))$ and $V' = \{ x\}$ are disconnected. It follows from \corref{cor: contractible U cap V}
that we have
$$\pi_1(X; x_0) \cong \pi_1(U; x_0) \ast \pi_1(V; x_0),$$
%
the free product of groups. Furthermore, $U$ is isomorphic to the unit interval $I_1$, which is (based) contractible and so we have $\pi_1(U; x_0) \cong \{\mathbf{e}\}$. Thus $\pi_1(X; x_0) \cong \pi_1(V; x_0)$, and our induction hypothesis gives that $\pi_1(V;x_0)$ is a free group, as $V$ has fewer points than $X$. The induction step is complete in this case also.
The only remaining possibilities for the link of $x$ are now $\mathrm{lk}(x) = \{ a, b_1 \}$ and $\mathrm{lk}(x) = \{ a, b_2 \}$. Notice that, here, we cannot use the same sets $U$ and $V$ to decompose $X$ as in the previous case, since \thmref{thm: DVK} requires a \emph{connected} intersection $U \cap V$. Instead, we take up each of these cases with an argument that uses the contractible path material from earlier in the section.
\textbf{Case 4: $\mathrm{lk}(x) = \{ a, b_1 \}$.} Recall that we assume $X \not= \{x\} \cup \mathrm{lk}(x)$ and so there are points in $X$ other than those adjacent to $x$, and there exists a path from $x$ to those points but only by passing through either $a$ or $b_1$. There are two sub-cases: (W) in which $a$ and $b_1$ are not connected by a path in $X - \{x\}$; and (C) in which $a$ and $b_1$ are connected by a path in $X - \{x\}$. Take sub-case (W) first. Here, $X - \{x\}$ must fall into the two components defined by
$$X_a = \big\{ p \in X - \{x\} \mid p \text{ is connected to } a \text{ by a path in } X - \{x\} \big\}$$
and
$$X_{b_1} = \big\{ p \in X - \{x\} \mid p \text{ is connected to } b_1 \text{ by a path in } X - \{x\} \big\},$$
the first of which contains $a$ and the second of which contains $b_1$. We explain this assertion as follows. Take $y \in X - \{x\}$ and suppose that $y \not\in X_a$. Since $X$ is connected, there is some path from $y$ to $b_1$ in $X$. If this path does not contain $x$, then it is a path in $X - \{x\}$ from $y$ to $b_1$ and so $y \in X_{b_1}$. Otherwise, consider the first occurrence of $x$ in the path from $y$ to $b_1$. The part of the path from $y$ to the point preceding this first occurrence of $x$ is a path in $X - \{x\}$ from $y$ to a point of $\mathrm{lk}(x)$. But if this point is $a$, we would have $y \in X_a$, which we said was not the case. Therefore, it is $b_1$ and we have $y \in X_{b_1}$. Furthermore, not only do we have $X - \{x\} = X_a \cup X_{b_1}$ but these components must be disconnected, in the sense that no point of $X_a$ is adjacent to any point of $X_{b_1}$. For if we were to have $p \in X_a$ and $q \in X_{b_1}$ with $p \sim q$, then we could concatenate a path in $X - \{x\}$ from $a$ to $p$ with a path in $X - \{x\}$ from $q$ to $b_1$, to obtain a path in $X - \{x\}$ from $a$ to $b_1$. But we are currently assuming there is no such path. From all this, it follows that if we set $U = X_a \cup \{x\}$ and $V = X_{b_1}\cup \{x\}$, then $U \cap V = \{ x \}$. So take the basepoint as $x_0 = x$, and notice that the complements $U' = X_{b_1}$ and $V' = X_a$ are disconnected, by the preceding discussion. In effect, we have identified $X$ as a one-point union $U \vee V$. It follows from \corref{cor: contractible U cap V}
that we have
$$\pi_1(X; x_0) \cong \pi_1(U; x_0) \ast \pi_1(V; x_0).$$
%
Since $U$ and $V$ both contain fewer points than $X$, our induction hypothesis gives that their fundamental groups are free groups. The free product of free groups is again a free group, and the induction step is complete in this sub-case (W).
Now go back to sub-case (C), in which $a$ and $b_1$ are connected by a path in $X - \{x\}$. By \lemref{lem: contract path} we may shorten this path to a contractible path in $X - \{x\}$. So without loss of generality, suppose we have a contractible path $P$ in $X - \{x\}$ from $a$ to $b_1$. Now set $U = P \cup \{x\}$ and $V = X - \{x\}$. Then $U \cap V = P$, which is connected and contractible. If $X \not= P \cup \{x\}$, choose $a = x_0$ and observe that the complements $U' = X - (P \cup \{x\})$ and $V' = \{x\}$ are disconnected. We may apply \corref{cor: contractible U cap V} again to obtain
$$\pi_1(X; x_0) \cong \pi_1(U; x_0) \ast \pi_1(V; x_0).$$
Then both $U$ and $V$ contain fewer points than $X$, and it follows as in the previous sub-case that $\pi_1(X; x_0)$ is a free group. However, it is possible that we have $X = P \cup \{x\}$, in which case $V$ will have one point fewer than $X$, but $U$ will have the same number, and we are not able to apply our inductive hypothesis to $U$. But in this situation, notice that $U = P \cup \{x\}$ is a digital circle, with $\pi_1(U; a) \cong \mathbb{Z}$ by \thmref{thm: pi of C is Z}. Now $\mathbb{Z}$ is a free group, and our inductive hypothesis applied to $V$ yields
$\pi_1(X; x_0) \cong \mathbb{Z} \ast \pi_1(V; x_0)$: a free product of free groups, and hence a free group. So we have closed the induction in sub-case (C). This completes the induction in Case 4.
\textbf{Case 5: $\mathrm{lk}(x) = \{ a, b_2 \}$.}
This case may be handled with an argument identical to that just used for Case 4, only replacing $b_1$ with $b_2$. We omit the details.
This exhausts all cases for the link of $x$, and completes the induction. The result follows.
\end{proof}
\title{Ramanujan's Beautiful Integrals}
% https://arxiv.org/abs/2103.14002
\begin{abstract}
Throughout his entire mathematical life, Ramanujan loved to evaluate definite integrals. One can find them in his problems submitted to the \emph{Journal of the Indian Mathematical Society}, notebooks, Quarterly Reports to the University of Madras, letters to Hardy, published papers and the Lost Notebook. His evaluations are often surprising, beautiful, elegant, and useful in other mathematical contexts. He also discovered general methods for evaluating and approximating integrals. A survey of Ramanujan's contributions to the evaluation of integrals is given, with examples provided from each of the above-mentioned sources.
\end{abstract}
\section{Introduction}
Ramanujan loved infinite series and integrals. They permeate almost all of his work from the years he recorded his findings in notebooks \cite{nb} until the end of his life in 1920 at the age of 32. In this paper we provide a survey of some of his most beautiful theorems on integrals. Of course, it is impossible to adequately cover what Ramanujan accomplished in his devotion to integrals. Many of Ramanujan's theorems and examples of integrals have inspired countless mathematicians to take Ramanujan's thoughts and proceed further. For many of Ramanujan's integrals, we stand in awe and admire their beauty, much as we listen to a beautiful Beethoven piano sonata or an intricate but mellifluous raaga in Carnatic or Hindustani classical music. We hope that this survey will provide further inspiration.
Ramanujan evaluated many definite integrals, most often infinite integrals. In many cases, the integrals are so ``unusual,'' that we often wonder how Ramanujan ever thought that elegant evaluations existed. Some of his integrals satisfy often surprising functional equations. He was an expert in finding exquisite examples for integral transforms, some of which are original with him. His so-called ``Master Theorem" fits into this category. Some of his integrals have (non-trivial) relations with infinite series and continued fractions. Ramanujan was also a master in finding asymptotic expansions of integrals.
Integrals often arise in concomitant problems that Ramanujan studied. For example, in his first letter to G.~H.~Hardy, Ramanujan asserted \cite[p.~24]{bcbrar} (with a minor correction needed)
\begin{quote}
1, 2, 4, 5, 8, 9, 10, 13, 16, 17, 18,\dots\, are numbers which are either themselves squares or which can be expressed as the sum of two squares.
The number of such numbers greater than \emph{A} and less than
\emph{B}
$$=K\int_A^B \df{dx}{\sqrt{\log x}} + \theta (x)$$
where $ K = .764 \dots$ and $\theta (x)$ is very small when
compared with the previous integral. \emph{K} and $\theta (x)$
have been exactly found though complicated.
\end{quote}
Theorems and claims of this kind are better addressed in the contexts in which they arise, and so we do not address such integral appearances in the present paper.
Beginning in 1911, Ramanujan offered a total of 58 \emph{Questions} to the \emph{Journal of the Indian Mathematical Society}. In seven of them, readers are asked to evaluate definite integrals. (For discussions of all 58 problems, see a paper by the first author, Y.-S.~Choi, and S.-Y.~Kang \cite{lange}, \cite[pp.~215--258]{bcbrara}.) Ramanujan's first letter to Hardy contains over 60 statements, and eleven of them pertain to definite integrals. In his second letter to Hardy, seven entries provide evaluations of integrals \cite[pp.~xxiii--xxix]{cp}, \cite[Chapter 2]{bcbrar}. (Portions of both letters have been lost.) Six of Ramanujan's published papers are devoted to the evaluation of integrals. The most abundant source for Ramanujan's integrals is his (earlier) notebooks \cite{nb}. His lost notebook \cite{lnb} also contains several intriguing integrals.
The purpose of this paper is to provide readers with a survey of some of Ramanujan's most beautiful, surprising, and possibly useful integrals. Examples from each of the sources mentioned in the previous paragraph will be given.
\section{Problems Posed by Ramanujan in the\\ \emph{Journal of the Indian Mathematical Society}}
Question 783 below is an especially elegant problem posed by Ramanujan in the \emph{Journal of the Indian Mathematical Society} \cite{783} and found in Ramanujan's third notebook \cite[p.~373]{nb}. Like so many of Ramanujan's discoveries, we wonder how Ramanujan ever thought of this problem. While reading this paper, readers will undoubtedly ask questions of this sort several times.
\begin{question} \textbf{(Question 783)}\label{783a}
For $n\geq 0$, put $v=u^n-u^{n-1}$, and define
\begin{equation}\label{783b}
\varphi(n):=\int_0^1\df{\log u}{v}dv.
\end{equation}
Then,
\begin{equation*}
\varphi(0)=\df{\pi^2}{6},\qquad \varphi(1)=\df{\pi^2}{12}, \qquad\text{and}\qquad\varphi(2)=\df{\pi^2}{15}.
\end{equation*}
Furthermore, if $n>0$,
\begin{equation}\label{783c}
\varphi(n)+\varphi\left(\frac{1}{n}\right)=\df{\pi^2}{6}.
\end{equation}
\end{question}
Inspired by Question \ref{783a}, Berndt and R.~J.~Evans \cite{rje} established the generalization given below. Here and in the sequel, we use the conventions
\begin{equation*}
f(\infty):=\begin{cases}\underset{x\to\infty}{\lim}f(x), \quad &\text{provided that the limit exists},\\
\infty, \quad & \text{if } f(x)\to \infty \text{ as } x\to\infty.
\end{cases}
\end{equation*}
\begin{theorem}\label{783d}
Let $g$ be a strictly increasing, differentiable function on $[0,\infty)$ with $g(0)=1$ and $g(\infty)=\infty$. For $n>0$ and $t\geq0$, define
\begin{equation*}
v(t):=\df{g^n(t)}{g(1/t)}.
\end{equation*}
Suppose that
\begin{equation*}
\varphi(n):=\int_0^1\log\,g(t)\df{dv}{v}
\end{equation*}
converges. Then
\begin{equation*}
\varphi(n)+\varphi\left(\frac{1}{n}\right)=2\varphi(1).
\end{equation*}
\end{theorem}
Note that if $g(t)=1+t$, Theorem \ref{783d} reduces to Question \ref{783a}. See \cite{rje} and \cite[pp.~326--329]{IV} for more details.
The integral \eqref{783b} is reminiscent of the dilogarithm, defined by
\begin{equation}\label{dilog}
\text{Li}_2(z):=-\int_0^z\df{\log(1-w)}{w}dw, \qquad z\in\mathbb{C},
\end{equation}
where the principal branch of $\log(1-w)$ is chosen. The dilogarithm was studied by Ramanujan in Chapter 9 of his first notebook \cite[Entries 5--7]{nb}, \cite[pp.~246--249]{I}, where many of the fundamental properties of $\text{Li}_2(z)$ are proved. He returned to $\text{Li}_2(z)$ in his third notebook \cite[p.~365]{nb}, \cite[pp.~322--326]{IV}.
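For $|z|\leq 1$ the integral definition \eqref{dilog} is equivalent to the series $\sum_{n\geq1}z^n/n^2$, and the classical special value $\text{Li}_2(1/2)=\pi^2/12-\tf12\log^2 2$ is among the fundamental properties referred to above. As a quick numerical sanity check (our own illustration, not part of Ramanujan's text):

```python
import math

def Li2(z, nterms=200):
    # dilogarithm via its Taylor series sum_{n>=1} z^n / n^2;
    # for |z| <= 1/2 the tail after 200 terms is far below machine precision
    return sum(z**n / n**2 for n in range(1, nterms + 1))

val = Li2(0.5)
ref = math.pi**2 / 12 - math.log(2)**2 / 2   # Landen's special value
print(val, ref)
```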
Ramanujan was fond of formulas evincing symmetry, such as in Question 295 below \cite{295}.
\begin{question}\label{295a} If $\alpha$ and $\beta$ are positive and $\alpha\beta=\pi$, then
\begin{equation}\label{1}
\sqrt{\alpha}\int_0^{\i}\df{e^{-x^2}}{\cosh \, \alpha x}dx =\sqrt{\beta}\int_0^{\i}\df{e^{-x^2}}{\cosh \, \beta x}dx.
\end{equation}
\end{question}
Question \ref{295a} can be found in Ramanujan's first letter to G.~H.~Hardy \cite[p.~27]{bcbrar}, and also in Section 21 of Chapter 13 in his second notebook \cite{nb}, \cite[p.~225]{II}. The identity \eqref{1} appears in a manuscript published with the lost notebook \cite{lnb}, \cite[p.~368]{geabcbIV}, and it was also established by Hardy \cite[p.~203]{hardy1}.
Four additional relations in the spirit of \eqref{1} can be found in the aforementioned manuscript \cite[p.~368]{geabcbIV}.
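The symmetry in \eqref{1} lends itself to direct numerical verification. The following Python sketch (ours; the pair $\alpha,\beta$ with $\alpha\beta=\pi$ is chosen arbitrarily) compares the two sides by Simpson quadrature:

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def I(c):
    # integral_0^inf e^{-x^2}/cosh(c x) dx; the integrand is negligible past x = 8
    return simpson(lambda x: math.exp(-x * x) / math.cosh(c * x), 0.0, 8.0)

alpha = 1.3
beta = math.pi / alpha                 # so that alpha * beta = pi
lhs = math.sqrt(alpha) * I(alpha)
rhs = math.sqrt(beta) * I(beta)
print(lhs, rhs)
```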
\section{Ramanujan's First Two Letters to Hardy} In his first letter to Hardy, Ramanujan offers the following ``reciprocity" formula or ``theta" relation.
\begin{theorem}\label{modular1} For $n>0$, define
\begin{equation}\label{modular}
\phi(n):=\int_0^{\infty} \dfrac{\cos nx}{ e^{2\pi \sqrt x} - 1}dx.
\end{equation}
Then
\begin{equation}\label{modular2}
\int_0^{\infty} \df{\sin nx}{e^{2\pi \sqrt
x} - 1}dx = \phi (n) - \df{1}{2n} + \phi \left(\df{\pi ^2}{
n}\right)\sqrt{\df{2 \pi ^3}{ n^3}} .
\end{equation}
``$\phi (n)$ is a complicated function \dots"
\end{theorem}
As special cases,
$$\phi (0) = \df{1}{12};\qquad \phi \left(\df{\pi}{ 2}\right) = \df{1}{4\pi};\qquad \phi (\pi) = \df{2 -\sqrt{2}}{ 8};\qquad \phi(2\pi) = \df{1}
{16};$$
$$\phi \left(\df{2\pi}{ 5}\right) = \df{8 - 3\sqrt{5}}{16};\qquad \phi
\left(\df{\pi}{ 5}\right) = \df{6 +\sqrt{5}}{ 4} - \df{ 5 \sqrt{10}}{8};\qquad \phi
(\infty) = 0;$$
$$\phi \left(\df{2\pi }{ 3}\right) = \df{1}{ 3} - \sqrt{3} \left(\df {3}{ 16} - \df{1}{
8 \pi}\right) .$$
An abbreviated version of Theorem \ref{modular1} appeared as a problem in the \emph{Journal of the Indian Mathematical Society} \cite{463}. Theorem \ref{modular1} can also be found in Ramanujan's notebooks \cite{nb}.
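Two of the special values listed above, $\phi(0)=1/12$ and $\phi(\pi)=(2-\sqrt2)/8$, can be checked by quadrature. In the sketch below (ours), the substitution $x=u^2$ removes the integrable singularity of $1/(e^{2\pi\sqrt x}-1)$ at the origin:

```python
import math

def phi(nval):
    # phi(n) = int_0^inf cos(n x)/(e^{2 pi sqrt x} - 1) dx, after x = u^2
    def f(u):
        if u == 0.0:
            return 1.0 / math.pi       # limit of 2u cos(n u^2)/(e^{2 pi u} - 1)
        return 2.0 * u * math.cos(nval * u * u) / (math.exp(2.0 * math.pi * u) - 1.0)
    n, hi = 20000, 8.0                 # integrand is ~e^{-2 pi u} beyond u = 8
    h = hi / n
    s = f(0.0) + f(hi) + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, n))
    return s * h / 3

p0 = phi(0.0)
p_pi = phi(math.pi)
print(p0, 1 / 12)
print(p_pi, (2 - math.sqrt(2)) / 8)
```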
The function $\phi(n)$ in \eqref{modular} can be expressed in terms of a variant of Gauss sums to which Ramanujan devotes an entire paper \cite{12}. For example, if $n=a/b$, where $a$ and $b$ are positive odd numbers, then
\begin{equation}\label{modular3}
\phi\left(\df{\pi a}{b}\right):=\df14\sum_{r=1}^b(b-2r)\cos\left(\df{r^2\pi a}{b}\right)-\df{b}{4a}\sqrt{\df{b}{a}}
\sum_{r=1}^{a}(a-2r)\sin\left(\df{1}{4}\pi+\df{r^2 \pi b}{a}\right).
\end{equation}
See Berndt's book \cite[pp.~296--303]{IV} for a more complete discussion of $\phi(n)$ and its connections with Ramanujan's analogues of Gauss sums and another class of his infinite series.
After discussing the Rogers--Ramanujan continued fraction and two generalizations in his second letter to Hardy, Ramanujan offers representations for a pair of integrals by continued fractions \cite[p.~xxviii]{cp}, \cite[p.~57]{bcbrar}.
\begin{theorem} We have
\begin{align*}
4\int_0^{\infty}\df{xe^{-x\sqrt5}}{\cosh x}dx&=\df{1}{1}\+\df{1^2}{1}\+\df{1^2}{1}\+\df{2^2}{1}\+\df{2^2}{1}\+\df{3^2}{1}\+\df{3^2}{1}\+\cds,\\
2\int_0^{\infty}\df{x^2e^{-x\sqrt3}}{\sinh x}dx&=\df{1}{1}\+\df{1^3}{1}\+\df{1^3}{3}\+\df{2^3}{1}\+\df{2^3}{5}\+\df{3^3}{1}\+\df{3^3}{7}\+\cds.
\end{align*}
\end{theorem}
These two identities were first proved in print by C.~T.~Preece \cite{preece} in 1931. Above, we have corrected two misprints in the second formula that appears in \cite[p.~xxviii]{cp}.
\section{Ramanujan's Published Papers on Integrals}
Ramanujan published the papers \cite{7}, \cite{11}, \cite{12}, \cite{22}, \cite{23}, and \cite{27} on definite integrals. We briefly discussed some of the content of \cite{12} in the previous section.
Define, for $\Re(w)\geq 0$,
\begin{equation*}
\phi_{w}(t):=\int_0^{\i}\df{\cos\,\pi tx}{\cosh\,\pi x}e^{-\pi wx^2}dx \qquad\text{and}\qquad
\psi_{w}(t):=\int_0^{\i}\df{\sin\,\pi tx}{\sinh\,\pi x}e^{-\pi wx^2}dx.
\end{equation*}
In \cite{23}, Ramanujan made use of ``modular relations'' satisfied by $\phi_w(t)$ and $\psi_w(t)$, namely,
\begin{align*}
\phi_w(t)&=\df{1}{\sqrt{w}}e^{-\tf14\pi t^2/w}\psi_{1/w}(it/w)
\end{align*}
and
\begin{equation*}
e^{\tf14\pi t^2/w}\left\{\frac12+\psi_w(t)\right\}=e^{\tf14\pi(t+w)^2/w}\phi_w(t+w),
\end{equation*}
to develop representations in terms of theta functions. These were then used
to evaluate large classes of integrals for specific values of $t$ and $w$. We give two examples:
\begin{align}\label{mos}
\int_0^{\i}\df{\sin\,2\pi tx}{\sinh\,\pi x}\cos\,\pi x^2\,dx&=\df{\cosh\,\pi t-\cos\,\pi t^2}{2\sinh\,\pi t},\\
\int_0^{\i}\df{\sin\,2\pi tx}{\sinh\,\pi x}\sin\,\pi x^2\,dx&=\df{\sin\,\pi t^2}{2\sinh\,\pi t}.\notag
\end{align}
The integral evaluation \eqref{mos} was used by A.~K.~Mustafy \cite{mustafy} to obtain a new integral representation of the Riemann zeta function $\zeta(s)$, with an integrand similar to those above, which in turn also gives a proof of Riemann's functional equation for $\zeta(s)$.
Perhaps Ramanujan's most important paper on integrals is \cite{27}. Here, several classes of integrals involving the gamma function are evaluated in closed form. Many of the integrals in \cite{27} can perhaps be evaluated by contour integration, although Ramanujan did not use this method in \cite{27}.
It has generally been accepted that Ramanujan was not conversant with the analytic theory of functions of a complex variable. Hardy opined
\cite[p.~xxx]{cp}, ``\dots and he (Ramanujan) had indeed but the vaguest idea of what a function of a complex variable was.'' However, in \cite{27} it is clear that Ramanujan knew certain elementary facts about functions of a complex variable, in particular, at the very least, he was aware of the care needed in treating branches of ``multi-valued'' functions.
On the next-to-last page of Ramanujan's third notebook \cite[pp.~391]{nb}, several integrals from complex analysis are recorded. Next to one of them appear the words, ``contour integration.'' So, maybe Ramanujan knew more complex analysis than either Hardy or others have thought.
One of the general integrals that Ramanujan evaluated is \cite{27}, \cite[pp.~221--222]{cp}:
\begin{equation*}
\int_{-\i}^{\i}\Gamma(\alpha+x)\Gamma(\beta-x)e^{inx}dx.
\end{equation*}
Similarly, Ramanujan devised a general approach to evaluating
\begin{equation*}
\int_{-\i}^{\i}\df{e^{inx}}{\Gamma(\alpha+x)\Gamma(\beta-x)\Gamma(\gamma+\ell x)\Gamma(\delta-\ell x)}dx,
\end{equation*}
where $n$ and $\ell$ are real numbers.
We forego hypotheses in this brief survey, but instead mention only a special case. If $\alpha+\beta+\gamma+\delta=4$, then \cite[p.~229]{27}
\begin{gather*}
\int_{-\i}^{\i}\df{\cos\{\pi(x+\beta+\gamma)\}}{\Gamma(\alpha+x)\Gamma(\beta-x)\Gamma(\gamma+2 x)\Gamma(\delta-2x)}dx\\=
\df{1}{2\Gamma(\gamma+\delta-1)\Gamma(2\alpha+\delta-2)\Gamma(2\beta+\gamma-2)}.
\end{gather*}
Ramanujan devised some beautiful and unusual definite integral evaluations involving products of ordinary Bessel functions $J_{\nu}(x)$. For example, if
$\Re(\alpha+\beta)>-1$ \cite[p.~225]{27},
\begin{equation*}
\int_{-\i}^{\i}\df{J_{\alpha+w}(x)}{x^{\alpha+w}}\df{J_{\beta-w}(y)}{y^{\beta-w}}dw=
\df{J_{\alpha+\beta}\{\sqrt{(2x^2+2y^2)}\}}{(\tf12 x^2+\tf12 y^2)^{(\alpha+\beta)/2}}.
\end{equation*}
Ramanujan concluded his paper \cite{27} with a formula providing the evaluation of a fairly general integral involving the product of four Bessel functions.
Ramanujan's integrals were discussed by Watson in his treatise \cite[p.~449]{watson}. In a footnote, he remarks, ``these integrals evaluated by Ramanujan may prove to be of the highest importance in the theory of the transmission of Electric Waves.''
One of the highly influential papers that Ramanujan wrote after arriving in England is his paper \cite{riemann}, in which he obtains transformations for integrals arising in the theory of Riemann's zeta function $\zeta(s)$. The Riemann $\xi(s)$ and $\Xi$ functions are defined by \cite[p.~16]{titch}
\begin{equation*}
\xi(s):=\frac{1}{2}s(s-1)\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right)\zeta(s) \qquad\text{and}\qquad \Xi(t):=\xi\left(\tfrac{1}{2}+it\right).
\end{equation*}
In his review \cite{hardyw} of Ramanujan's work in England until 1917, Hardy cites \cite{riemann} as one of Ramanujan's four most important papers.
One of the two main results in his paper is \cite[Equation (12)]{riemann}
\begin{align}\label{uncanny1}
&\int_{0}^{\infty}\left\{e^{-z}-4\pi\int_{0}^{\infty}\frac{xe^{-3z-\pi x^2e^{-4z}}}{e^{2\pi x}-1}\, dx\right\}\cos(tz)\, dz\nonumber\\
&=\frac{1}{8\sqrt{\pi}}\G\left(\frac{-1+it}{4}\right)\G\left(\frac{-1-it}{4}\right)\Xi\left(\frac{t}{2}\right),
\end{align}
which, through Fourier inversion, leads to \cite[Equation (13)]{riemann}
\begin{align}\label{13}
&e^{-n}-4\pi e^{-3n}\int_{0}^{\infty}\frac{xe^{-\pi x^2e^{-4n}}}{e^{2\pi x}-1}\, dx\nonumber\\
&=\frac{1}{4\pi\sqrt{\pi}}\int_{0}^{\infty}\G\left(\frac{-1+it}{4}\right)\G\left(\frac{-1-it}{4}\right)\Xi\left(\frac{t}{2}\right)\cos(nt)\, dt
\end{align}
for $n\in\mathbb{R}$. About \eqref{13}, Hardy \cite{ghh} writes, for $\sigma=\Re(s)$,
\begin{quote}\emph{The integral has properties similar to those of the integral by means of which I proved recently that $\zeta(s)$ has an infinity of zeros on the line $\sigma=1/2$, and may be used for the same purpose.}\end{quote}
In the last section of his paper \cite{riemann}, for $n\in\mathbb{R}$, Ramanujan obtains new beautiful integral representations for
\begin{equation}\label{sec5int}
F(n,s):=\int_{0}^{\infty}\Gamma\left(\frac{s-1+it}{4}\right)\Gamma\left(\frac{s-1-it}{4}\right)
\Xi\left(\frac{t+is}{2}\right)\Xi\left(\frac{t-is}{2}\right)\frac{\cos nt}{(s+1)^2+t^2}\, dt,
\end{equation}
each valid in some vertical strip in the half-plane $\sigma>1$. One of these is given by
\begin{align*}
F(n, s)=\frac{1}{8}(4\pi)^{-\frac{1}{2}(s-3)}\int_{0}^{\infty}x^{s}\left(\frac{1}{\exp{(xe^n)}-1}
-\frac{1}{xe^n}\right)\left(\frac{1}{\exp{(xe^{-n})}-1}-\frac{1}{xe^{-n}}\right)\, dx.
\end{align*}
(See \cite{dixit} for corrected misprints in \cite{riemann}.)
Regarding the special case $s=0$ of this integral, Hardy \cite{ghh} writes,
\begin{quote}\emph{\dots the properties of this integral resemble those of one which Mr. Littlewood and I have used, in a paper to be published shortly in Acta Mathematica to prove that}
\end{quote}
\begin{equation*}
\int_{-T}^{T}\left|\zeta\left(\frac{1}{2}+ti\right)\right|^2\, dt \sim
2 T\log T\hspace{3mm}(T\to\infty),
\end{equation*}
where a misprint from Hardy's paper \cite{ghh} has been corrected.
This special case $s=0$ of the integral in \eqref{sec5int} also appears on page $220$ of the lost notebook \cite{lnb}, where Ramanujan gives an exquisitely beautiful modular relation associated with it. The reader is referred to \cite{bcbad}, \cite{dixitms} and \cite{dixzah} for more details on Ramanujan's formulas from \cite{riemann}, their importance and their applications.
In conclusion, about Ramanujan's formulas from \cite{riemann}, Hardy remarks \cite{ghh},
\begin{quote}\emph{It is difficult at present to estimate the importance of these results. The unsolved problems concerning the zeros of $\zeta(s)$ or of $\Xi(t)$ are among the most obscure and difficult in the whole range of Pure Mathematics. \dots
But I should not be at all surprised if still more important applications were to be made of Mr. Ramanujan's formulae in the future.}
\end{quote}
\section{Ramanujan's Quarterly Reports}
Ramanujan's fame began with his publications in the \emph{Journal of the Indian Mathematical Society} in 1911,
and it reached the English astronomer Sir Gilbert Walker, who was working at an observatory in Madras. In a letter to the University of Madras dated February 26, 1913, he wrote, ``The University would be justified in enabling S.~Ramanujan for a few years \emph{at least} to spend the whole of his time on mathematics, without any anxiety as to his livelihood.'' The Board of Studies at the University of Madras agreed to this request, and its chairman, Professor B.~Hanumantha Rao, wrote a letter to Vice-Chancellor Francis Dewsbury on March 25, 1913 with the recommendation that Ramanujan be awarded a scholarship of 75 rupees per month. A stipulation in the scholarship required Ramanujan to write Quarterly Reports to the Board of Studies in Mathematics. Ramanujan wrote three of these Quarterly Reports before he departed for England on March 17, 1914. Unfortunately, they were eventually lost; but, on the other hand, fortunately, T.~A.~Satagopan made a handwritten copy of the reports in 1925. An extensive description of their contents was published by Berndt in the \emph{Bulletin of the London Mathematical Society} \cite{quartblms}. The aforementioned letters can be found in the book \cite[pp.~70--76]{bcbrar} by Berndt and Robert Rankin.
We offer a few results from the Quarterly Reports; the first is Frullani's Theorem, together with Ramanujan's generalization of it.
\begin{theorem}\label{frullani1} \textbf{(Frullani)} If $f$ is a continuous function on $[0,\infty)$ such that $f(\infty)$ exists, then, for any pair $a,b>0$,
\begin{equation}\label{frullani}
\int_0^{\infty}\df{f(ax)-f(bx)}{x}dx=\{f(0)-f(\infty)\}\log\left(\df{b}{a}\right).
\end{equation}
If $f(\infty)$ does not exist, but $f(x)/x$ is integrable over $[c,\infty)$ for $c>0$, then \eqref{frullani} still holds, but with $f(\infty)$ replaced by 0.
\end{theorem}
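Frullani's formula \eqref{frullani} is easy to test numerically; the sketch below (ours) takes $f(t)=e^{-t}$, for which $f(0)=1$ and $f(\infty)=0$, so the integral should equal $\log(b/a)$:

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

a, b = 2.0, 3.0

def g(x):
    # (f(ax) - f(bx))/x with f(t) = e^{-t}; the limit at x = 0 is b - a
    return b - a if x == 0.0 else (math.exp(-a * x) - math.exp(-b * x)) / x

val = simpson(g, 0.0, 40.0)                 # integrand ~ e^{-2x}/x beyond x = 40
expected = (1.0 - 0.0) * math.log(b / a)    # {f(0) - f(inf)} log(b/a)
print(val, expected)
```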
In his second Quarterly Report, Ramanujan offers a beautiful generalization of Frullani's Theorem. A slightly less general version is provided by Ramanujan in the unorganized pages of his second notebook \cite[pp.~332, 334]{nb}, \cite[p.~316]{I}. We do not give below the hypotheses that are needed for $u(x)$ and $v(x)$; see \cite[pp.~299, 313]{I} for these requirements. Set
$$f(x)-f(\infty)=\sum_{k=0}^{\infty}\df{u(k)(-x)^k}{k!}\qquad\text{and}\qquad g(x)-g(\infty)=\sum_{k=0}^{\infty}\df{v(k)(-x)^k}{k!}.$$
Ramanujan also assumes that the limit below can be taken under the integral sign.
\begin{theorem}\label{frullani2}
Let $u(x)$ and $v(x)$ be given as above, and assume that $f$ and $g$ are continuous functions on $[0,\infty)$. Also assume that $f(0)=g(0)$ and $f(\infty)=g(\infty)$. Then, if $a,b>0$,
\begin{equation*}
\lim_{n\to0}\int_0^{\infty}x^{n-1}\left\{f(ax)-g(bx)\right\}dx
=\{f(0)-f(\infty)\}\left\{\log\left(\df{b}{a}\right)+\df{d}{ds}\left(\log\left(\df{v(s)}{u(s)}\right)\right)_{s=0}\right\}.
\end{equation*}
\end{theorem}
Ramanujan's proof depends on his now famous \emph{Master Theorem}. He assumes that a function $F(x)$ can be expanded in a Taylor series about $x=0$ with an infinite radius of convergence. Then Ramanujan asserts that the value of the integral
$$ \int_0^{\infty}x^{n-1}F(x)dx$$
can be found from the coefficient of $x^n$ in the expansion of $F(x)$.
\begin{theorem}\label{mastertheorem} \textbf{Ramanujan's Master Theorem.} Suppose that for $-\infty<x<\infty$,
\begin{equation}\label{master}
F(x)=\sum_{k=0}^{\infty}\df{\varphi(k)(-x)^k}{k!}.
\end{equation}
Then
\begin{equation}\label{master1}
\int_0^{\infty}x^{n-1}F(x)\,dx=\Gamma(n)\varphi(-n).
\end{equation}
\end{theorem}
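A concrete test case (our choice): $F(x)=e^{-cx}$ has $\varphi(k)=c^k$ in \eqref{master}, so \eqref{master1} predicts $\int_0^{\infty}x^{n-1}e^{-cx}\,dx=\Gamma(n)c^{-n}$, which is indeed the gamma integral. Numerically:

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# F(x) = e^{-cx} gives phi(k) = c^k, so the Master Theorem predicts
# int_0^inf x^{n-1} e^{-cx} dx = Gamma(n) phi(-n) = Gamma(n) c^{-n}
c, nu = 2.0, 2.5
val = simpson(lambda x: x**(nu - 1.0) * math.exp(-c * x), 0.0, 30.0)
ref = math.gamma(nu) * c**(-nu)
print(val, ref)
```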
For our first illustration of Ramanujan's Master Theorem \cite[p.~300]{I}, let $m,n>0$ and set $x=y/(1+y)$ to find that
\begin{equation*}\int_0^1x^{m-1}(1-x)^{n-1}dx=\int_0^{\infty}y^{m-1}(1+y)^{-(m+n)}dy.\end{equation*}
From the binomial series
$$ (1+y)^{-r}=\sum_{k=0}^{\infty}\df{\Gamma(k+r)}{\Gamma(r)k!}(-y)^k, \qquad |y|<1,$$
we find that $\varphi(t)=\Gamma(t+m+n)/\Gamma(m+n).$ Applying Ramanujan's Master Theorem, we deduce the well-known representation of the beta function $B(m,n)$,
\begin{equation}\label{beta}
B(m,n):=\int_0^1 x^{m-1}(1-x)^{n-1}dx=\Gamma(m)\varphi(-m)=\df{\Gamma(m)\Gamma(n)}{\Gamma(m+n)}.
\end{equation}
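The beta evaluation \eqref{beta} can likewise be confirmed by quadrature (parameters $m,n$ below are our arbitrary choice, taken $>1$ so the integrand is bounded):

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

m, nu = 2.5, 3.5
B = simpson(lambda x: x**(m - 1.0) * (1.0 - x)**(nu - 1.0), 0.0, 1.0)
ref = math.gamma(m) * math.gamma(nu) / math.gamma(m + nu)
print(B, ref)
```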
For our second example, we need the notation
\begin{equation*}\label{intro1}
(a;q)_0:=1,\quad(a;q)_n:=\prod_{k=0}^{n-1}(1-aq^k),\quad n\geq1,
\end{equation*}
and
\begin{equation*}\label{intro2}
(a;q)_{\i}:=\lim_{n\to\i}(a;q)_n, \quad |q|<1.
\end{equation*}
Recall the $q$-binomial theorem \cite[p.~8]{gasper}
\begin{equation*}
\sum_{m=0}^{\i}\df{(a;q)_m}{(q;q)_m}z^m=\df{(az;q)_{\i}}{(z;q)_{\i}},\qquad |z|<1.
\end{equation*}
Letting $z=-x$, replacing $a$ by $aq$, and applying the Master Theorem, we establish, for $\Re\,s>0$, Ramanujan's beautiful identity \cite{12}, \cite[p.~57]{cp},
\begin{equation}\label{beautiful}
\int_0^{\infty}t^{s-1}\df{(-atq;q)_{\infty}}{(-t;q)_{\infty}}dt=\df{\pi}{\sin(\pi s)}\df{(q^{1-s};q)_{\infty}(aq;q)_{\infty}}{(q;q)_{\infty}(aq^{1-s};q)_{\infty}}.
\end{equation}
Richard Askey \cite{askey} made a thorough study of the integral in \eqref{beautiful} and showed that, if $s=x$ and $a=q^{x+y}$, then this integral is a natural $q$-analogue of the beta function $B(x,y)$ in \eqref{beta}.
An extension of Ramanujan's Master Theorem has been studied by M.~A.~Chaudhry and A.~Qadir \cite{cq}.
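Identity \eqref{beautiful} can be tested numerically as well. The sketch below (ours; the parameters $q$, $a$, $s$ are arbitrary sample values) evaluates the $q$-products by truncation and the integral by Simpson's rule after the substitution $t=e^u$:

```python
import math

q, a, s = 0.5, 0.3, 0.5        # sample parameters (our choice): |q| < 1, Re(s) > 0

def qpoch(x0):
    # (x0; q)_infinity, truncated: q^200 is far below machine precision
    p = 1.0
    for k in range(200):
        p *= 1.0 - x0 * q**k
    return p

def integrand(t):
    # t^{s-1} (-a t q; q)_inf / (-t; q)_inf, written as a convergent product of ratios
    p = t**(s - 1.0)
    for k in range(100):
        p *= (1.0 + a * t * q**(k + 1)) / (1.0 + t * q**k)
    return p

# substitute t = e^u and integrate u over [-40, 12] by Simpson's rule
lo, hi, n = -40.0, 12.0, 12000
h = (hi - lo) / n
total = 0.0
for i in range(n + 1):
    u = lo + i * h
    w = 1 if i in (0, n) else (4 if i % 2 else 2)
    total += w * integrand(math.exp(u)) * math.exp(u)
lhs = total * h / 3

rhs = (math.pi / math.sin(math.pi * s)
       * qpoch(q**(1 - s)) * qpoch(a * q)
       / (qpoch(q) * qpoch(a * q**(1 - s))))
print(lhs, rhs)
```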
\section{Ramanujan's (Earlier) Notebooks}
As we have seen in the preceding sections, many of Ramanujan's theorems and examples on integrals appear in his notebooks \cite{nb}. Because of their centrality in Ramanujan's vast accomplishments in his theories of theta functions and modular equations, our concentration in this section focuses on elliptic integrals.
The complete elliptic integral of the first kind $K(k)$, $0<k<1$, is defined by
\begin{equation*}
K(k):=\int_0^{\pi/2}\df{dt}{\sqrt{1-k^2\sin^2t}}=\df{\pi}{2}\,{_2F_1}\left(\tf12,\tf12;1;k^2\right),
\end{equation*}
where the second equality arises from expanding the integrand in a power series and integrating termwise.
The number $k$ is called the \emph{modulus}. The function ${_2F_1}$ on the right-hand side is an (ordinary) hypergeometric function, which is defined (more generally) by
\begin{equation}\label{2F1}
{_pF_q}(a_1,a_2,\dots, a_p;b_1,b_2,\dots, b_q;z):=\sum_{n=0}^{\i}\df{(a_1)_n(a_2)_n\cdots(a_p)_n}{(b_1)_n(b_2)_n\cdots(b_q)_n(n!)}z^n,
\end{equation}
where
$$(a)_0=1, \qquad (a)_n:=a(a+1)(a+2)\cdots(a+n-1),\qquad n\geq 1,$$
and it is assumed that $p$ and $q$ are chosen so that \eqref{2F1} converges in some domain.
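The equality $K(k)=\tf{\pi}{2}\,{_2F_1}(\tf12,\tf12;1;k^2)$ can be checked directly by comparing the quadrature value of the integral with the partial sums of the series (modulus $k$ below is our arbitrary choice):

```python
import math

def K(k, n=4000):
    # complete elliptic integral of the first kind, by Simpson's rule
    h = (math.pi / 2) / n
    f = lambda t: 1.0 / math.sqrt(1.0 - (k * math.sin(t))**2)
    s = f(0.0) + f(math.pi / 2) + sum((4 if i % 2 else 2) * f(i * h) for i in range(1, n))
    return s * h / 3

def hyp2f1(z, nterms=500):
    # 2F1(1/2, 1/2; 1; z) by its Taylor series; term ratio -> z, so |z| < 1 suffices
    term, total = 1.0, 1.0
    for n in range(nterms):
        term *= (n + 0.5)**2 / (n + 1.0)**2 * z
        total += term
    return total

k = 0.6
k_val = K(k)
series_val = math.pi / 2 * hyp2f1(k * k)
print(k_val, series_val)
```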
If $\pi/2$ is replaced by another number $v$, $0<v<\pi/2,$ then the integral is called an incomplete elliptic integral.
The integral $K(k)$ is prominent in the theory of the Jacobian elliptic functions sn$(u)$, cn$(u)$, and dn$(u)$, which evidently were not considered by Ramanujan.
More importantly, for Ramanujan, $K(k)$ plays a central role in his theories of theta functions, class invariants, singular moduli, Eisenstein series and partitions; its importance cannot be overemphasized. Any statement about an elliptic integral yields a corresponding statement about an ordinary hypergeometric function, and conversely. However, instead of concentrating on hypergeometric functions, the focus here is on the integrals themselves, in particular, their transformations and values.
Elliptic integrals appear at scattered places in Ramanujan's notebooks.
A particularly rich source of identities for elliptic integrals is Section 7 of Chapter 17 in Ramanujan's second notebook \cite{nb}, \cite[pp.~104--117]{III}.
We begin with the famous \emph{addition theorem} for elliptic integrals. Let
\begin{equation*}
u:=\int_0^{\alpha}\df{d \varphi}{\sqrt{1-x^2\sin^2\varphi}}, \quad v:=\int_0^{\beta}\df{d \varphi}{\sqrt{1-x^2\sin^2\varphi}}, \quad
w:=\int_0^{\gamma}\df{d \varphi}{\sqrt{1-x^2\sin^2\varphi}}.
\end{equation*}
Ramanujan gave four different conditions for $\alpha$, $\beta$, and $\gamma$ to ensure the validity of the addition theorem \cite[p.~107]{III}
\begin{equation}\label{uvw}
u+v=w.
\end{equation}
In particular, if \cite[Entry 7(viii) (c)]{III},
\begin{equation}\label{uvw1}
\cot\,\alpha\,\cot\,\beta=\df{\cos\,\gamma}{\sin\,\alpha\,\sin\,\beta}+\sqrt{1-x^2\sin^2\gamma},
\end{equation}
then \eqref{uvw} holds. The condition \eqref{uvw1} is equivalent to the condition
$$ \df{\text{cn}(u)\,\text{cn}(v)}{\text{sn}(u)\,\text{sn}(v)}=\df{\text{cn}(u+v)}{\text{sn}(u)\,\text{sn}(v)}+\text{dn}(u+v).$$
Although the addition theorem \eqref{uvw} is classical, many of Ramanujan's identities involving elliptic integrals appear to be new.
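Numerically, the addition theorem is easy to confirm: multiplying the condition through by $\sin\alpha\sin\beta$ puts it in Legendre's form $\cos\gamma=\cos\alpha\cos\beta-\sin\alpha\sin\beta\sqrt{1-x^2\sin^2\gamma}$, which determines $\gamma$. The sketch below (ours; modulus and amplitudes chosen arbitrarily) solves for $\gamma$ by bisection and checks $u+v=w$:

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

k = 0.5                                   # modulus x of the integrals above
F = lambda phi: simpson(lambda t: 1.0 / math.sqrt(1.0 - (k * math.sin(t))**2), 0.0, phi)

alpha, beta = 0.6, 0.8
# solve cos(g) = cos(a)cos(b) - sin(a)sin(b) sqrt(1 - k^2 sin^2 g) for g = gamma;
# the difference of the two sides is strictly decreasing on (0, pi), so bisect
lo, hi = 0.0, math.pi
for _ in range(100):
    g = 0.5 * (lo + hi)
    rhs = (math.cos(alpha) * math.cos(beta)
           - math.sin(alpha) * math.sin(beta) * math.sqrt(1.0 - (k * math.sin(g))**2))
    if math.cos(g) > rhs:
        lo = g
    else:
        hi = g
gamma = 0.5 * (lo + hi)
print(F(alpha) + F(beta), F(gamma))
```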
In \cite[pp.~108--109]{III}, the two given proofs of the following result are verifications; Ramanujan must have had a more natural proof.
\begin{entry}
If $|x|<1$, then
\begin{equation*}
\df{\pi}{2}\int_0^{\pi/2} \df{d \varphi}{\sqrt{1+x\sin\,\varphi}}=\int_0^{\pi/2}\df{\cos^{-1}(x\,\sin^2\varphi)d\,\varphi}{\sqrt{1-x^2\sin^4\varphi}}.
\end{equation*}
\end{entry}
The following entry is a beautiful theorem, more recondite than the previous theorem. It is a wonderful illustration of Ramanujan's ingenuity and quest for beauty \cite[pp.~111--112]{III}.
\begin{entry}
If $|x|<1$, then
\begin{gather*}
\int_0^{\pi/2}\int_0^{\pi/2}\df{x\sin\,\varphi\,d\theta\,d\varphi}{\sqrt{(1-x^2\sin^2\varphi)(1-x^2\sin^2\theta\,\sin^2\varphi)}}\\
=\df12\left(\int_0^{\pi/2}\df{d\varphi}{\sqrt{1-\tf12(1+x)\sin^2\varphi}}\right)^2-
\df12\left(\int_0^{\pi/2}\df{d\varphi}{\sqrt{1-\tf12(1-x)\sin^2\varphi}}\right)^2.
\end{gather*}
\end{entry}
Despite the fact that Ramanujan's second notebook is a revised edition of the first, there are over 200 claims in the first notebook that cannot be located in the second. In particular, on page 172 in the first notebook \cite{nb}, two remarkable elliptic integral transformations are recorded \cite[pp.~403--404]{V}. One of them is given in the next entry.
\begin{entry}
Let $0<x<1$, and assume for $0\leq\alpha, \beta\leq \pi/2$ that
$$ \df{1+\sin\,\beta}{1-\sin\,\beta}=\df{1+\sin\,\alpha}{1-\sin\,\alpha}\left(\df{1+x\sin\,\alpha}{1-x\sin\,\alpha}\right)^2.$$
Then,
\begin{equation*}
(1+2x)\int_0^{\alpha}\df{d\theta}{\sqrt{1-x^3\left(\frac{2+x}{1+2x}\right)\sin^2\theta}}=
\int_0^{\beta}\df{d\theta}{\sqrt{1-x\left(\frac{2+x}{1+2x}\right)^3\sin^2\theta}}.
\end{equation*}
\end{entry}
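This transformation, too, can be put to a numerical test. (A quick consistency check: as $x\to1$ both moduli tend to $1$ and the entry reduces to a tripling of $\int\sec\theta\,d\theta$; as $x\to0$ it reduces to $\alpha=\beta$.) The sketch below (ours; $x$ and $\alpha$ chosen arbitrarily) computes $\beta$ from the stated relation and compares the two integrals:

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

x, alpha = 0.2, 0.7
ellip = lambda phi, m: simpson(lambda t: 1.0 / math.sqrt(1.0 - m * math.sin(t)**2), 0.0, phi)

# (1 + sin b)/(1 - sin b) = (1 + sin a)/(1 - sin a) * ((1 + x sin a)/(1 - x sin a))^2
sa = math.sin(alpha)
R = (1 + sa) / (1 - sa) * ((1 + x * sa) / (1 - x * sa))**2
beta = math.asin((R - 1) / (R + 1))

lhs = (1 + 2 * x) * ellip(alpha, x**3 * (2 + x) / (1 + 2 * x))
rhs = ellip(beta, x * ((2 + x) / (1 + 2 * x))**3)
print(lhs, rhs)
```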
The next two unusual entries are related to elliptic integrals and are found in the unorganized pages of Ramanujan's second notebook \cite[pp.~283, 286]{nb}, \cite[p.~255]{IV}.
\begin{entry}
Let $0\leq\theta\leq\pi/2$ and $0\leq v\leq1$. Let $\mu$ be the constant obtained by putting $v=1$ and $\theta=\pi/2$ in the definition
\begin{equation}\label{G}
\df{\theta\mu}{2}=\int_0^v\df{dt}{\sqrt{1+t^4}}=:G(v).
\end{equation}
Then,
\begin{equation}\label{el}
2\tan^{-1}v=\theta+\sum_{n=1}^{\i}\df{\sin(2n\theta)}{n\cosh(n\pi)}.
\end{equation}
\end{entry}
Despite its unusual character, \eqref{el} is not too difficult to prove, and follows from the inversion theorem for elliptic integrals.
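Indeed, \eqref{el} checks out numerically: from \eqref{G}, $\mu=4G(1)/\pi$ and $\theta=2G(v)/\mu$, and the series converges very rapidly because of the $\cosh(n\pi)$ factors. A sketch (ours; the value of $v$ is arbitrary):

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

G = lambda v: simpson(lambda t: 1.0 / math.sqrt(1.0 + t**4), 0.0, v)
mu = 4 * G(1.0) / math.pi          # from v = 1, theta = pi/2 in theta*mu/2 = G(v)

v = 0.5
theta = 2 * G(v) / mu
series = theta + sum(math.sin(2 * n * theta) / (n * math.cosh(n * math.pi))
                     for n in range(1, 40))
print(2 * math.atan(v), series)
```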
The integral $G(v)$ has a striking resemblance to the classical lemniscate integral defined next.
As above, let $0\leq\theta\leq\pi/2$ and $0\leq v\leq1$. Define $\mu$ to be the constant obtained by putting $v=1$ and $\theta=\pi/2$ in \eqref{F} below. Then the lemniscate integral $F(v)$ is defined by
\begin{equation}\label{F}
\df{\theta\mu}{\sqrt2}=\int_0^v\df{dt}{\sqrt{1-t^4}}=:F(v)=\sum_{n=0}^{\infty}\df{\left(\frac12\right)_nv^{4n+1}}{n!(4n+1)},
\end{equation}
where the right-hand side is a representation for $F(v)$ that arises from expanding the integrand in a binomial series.
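The binomial-series representation is easy to confirm numerically (an illustrative sketch of our own, with the arbitrary test point $v=0.7$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import poch, factorial   # poch(a, n) is the Pochhammer symbol (a)_n

v = 0.7                                     # illustrative test point
integral = quad(lambda t: 1 / np.sqrt(1 - t**4), 0, v)[0]

n = np.arange(0, 60)
series = np.sum(poch(0.5, n) * v**(4 * n + 1) / (factorial(n) * (4 * n + 1)))
```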
Ramanujan offers an inversion formula for the lemniscate integral analogous to \eqref{el}. Altogether, Ramanujan states ten inversion formulas, six of them for the lemniscate integral \cite[pp.~283, 285, 286]{nb}. We offer one of them \cite[p.~252]{IV}. Proofs for all six are given in \cite[pp.~245--260]{IV}.
\begin{entry} Let $\theta$ and $v$ be as given in \eqref{F}. Then,
\begin{gather*}
\log\,v+\df{\pi}{6}-\df12 \log 2+\sum_{n=1}^{\infty}\df{(\tf14)_nv^{4n}}{(\tf34)_n4n}
=\log(\sin\theta)+\df{\theta^2}{2\pi}-2\sum_{n=1}^{\infty}\df{\cos(2n\theta)}{n(e^{2\pi n}-1)}.
\end{gather*}
\end{entry}
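This inversion formula, too, can be spot-checked numerically, using the closed form $F(1)=\Gamma(\tf14)\Gamma(\tf12)/(4\Gamma(\tf34))$ for the constant in \eqref{F} (an illustrative sketch of our own; $v=0.8$ is an arbitrary test point):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import poch, gamma

F = lambda u: quad(lambda t: 1 / np.sqrt(1 - t**4), 0, u)[0]
F1 = gamma(0.25) * gamma(0.5) / (4 * gamma(0.75))      # F(1) in closed form
v = 0.8                                                # illustrative test point
theta = np.pi * F(v) / (2 * F1)                        # theta*mu/sqrt(2) = F(v), mu = 2*sqrt(2)*F1/pi

n = np.arange(1, 61)
lhs = (np.log(v) + np.pi / 6 - 0.5 * np.log(2)
       + np.sum(poch(0.25, n) * v**(4 * n) / (poch(0.75, n) * 4 * n)))
rhs = (np.log(np.sin(theta)) + theta**2 / (2 * np.pi)
       - 2 * np.sum(np.cos(2 * n * theta) / (n * (np.exp(2 * np.pi * n) - 1))))
```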
If
$$ v=\df{\sqrt2\,x}{\sqrt{1+x^4}},$$
then
$$ F(v)=\int_0^v\df{dt}{\sqrt{1-t^4}}=\sqrt2\int_0^x\df{dt}{\sqrt{1+t^4}}=\sqrt{2}G(x),$$
which is a key step in the historically famous problem of doubling the arc length of the lemniscate.
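The substitution can be verified directly (a sketch of our own; $x=0.5$ is an arbitrary test point):

```python
import numpy as np
from scipy.integrate import quad

x = 0.5                                   # illustrative test point
v = np.sqrt(2) * x / np.sqrt(1 + x**4)

Fv = quad(lambda t: 1 / np.sqrt(1 - t**4), 0, v)[0]   # F(v)
Gx = quad(lambda t: 1 / np.sqrt(1 + t**4), 0, x)[0]   # G(x)
```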
The lemniscate integral was initially studied by James Bernoulli and Count Giulio Fagn{\'{a}}no. Raymond Ayoub \cite{ayoub} wrote a very informative article emphasizing its history and importance. Carl Ludwig Siegel \cite{siegel} considered the lemniscate integral so important that he began his development of the theory of elliptic functions with a thorough discussion of it.
\section{Ramanujan's Lost Notebook}
On pages 51--53 in his lost notebook \cite{lnb}, Ramanujan states several original, surprising, and unusual integral identities involving elliptic integrals and his theta functions, including
\begin{equation}\label{f}
f(-q):=(q;q)_{\infty}, \qquad |q|<1,
\end{equation}
which, except for a factor of $q^{1/24}$, is Dedekind's eta-function $\eta(\tau)$, where $q=e^{2\pi i\tau}, \tau\in\mathbb{H}.$ Ramanujan's integrals of theta functions are associated with elliptic integrals and modular equations of degrees 5, 10, 14, or 35. In view of degrees 14 and 35, it is surprising that none of degree 7 are given. These integral identities were first proved by S.~Raghavan and S.~S.~Rangachari \cite{rr} using the theory of modular forms, and later by the first author, Heng Huat Chan, and Sen-Shan Huang employing ideas with which Ramanujan would have been familiar \cite{bch}. Proofs for all of Ramanujan's identities can be found in Andrews' and the first author's book \cite[pp.~327--371]{geabcbI}. Certain proofs depend upon transformations of elliptic integrals found in Ramanujan's second notebook and discussed above. Differential equations for products or quotients of theta functions are also featured in some proofs. The first of two examples that we give is associated with modular equations of degree 5.
\begin{entry}\cite[p.~333]{geabcbI}, \cite[p.~52]{lnb} Let $f(-q)$ be defined by \eqref{f}, $\epsilon=(\sqrt5+1)/2$, and
\begin{equation}\label{rrcf}
u:=u(q):=\df{q^{1/5}}{1}\+\df{q}{1}\+\df{q^2}{1}\+\df{q^3}{1}\+\cds, \qquad |q|<1,
\end{equation}
which defines the Rogers--Ramanujan continued fraction.
Then,
\begin{align*}
5^{3/4}\int_0^{q}\df{f^2(-t)f^2(-t^5)}{\sqrt{t}}dt
&=\int_{\cos^{-1}\left((\epsilon u)^{5/2}\right)}^{\pi/2}\df{d\varphi}{\sqrt{1-\epsilon^{-5}5^{-3/2}\sin^2\varphi}}\\
&=\int_0^{2\tan^{-1}\left(5^{3/4}\sqrt{q}f^3(-q^5)/f^3(-q)\right)}\df{d\varphi}{\sqrt{1-\epsilon^{-5}5^{-3/2}\sin^2\varphi}}.
\end{align*}
\end{entry}
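A numerical spot-check of the last equality, with $f(-q)$ computed from a truncated Euler product, is given below (a sketch of our own; $q=0.05$ is an arbitrary test point):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipkinc

def f(q, N=400):                          # truncated Euler product (q; q)_infinity
    n = np.arange(1, N + 1)
    return np.prod(1 - q**n)

q = 0.05                                  # illustrative test point
eps = (np.sqrt(5) + 1) / 2
m = eps**(-5) * 5**(-1.5)

lhs = 5**0.75 * quad(lambda t: f(t)**2 * f(t**5)**2 / np.sqrt(t), 0, q)[0]
phi = 2 * np.arctan(5**0.75 * np.sqrt(q) * f(q**5)**3 / f(q)**3)
rhs = ellipkinc(phi, m)
```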
To prove the next entry, also associated with modular equations of degree 5, we need a differential equation involving theta functions.
\begin{lemma} Let
$$\lambda:=\lambda(q):=q\df{f^6(-q^5)}{f^6(-q)}.$$
Then
$$q\df{d}{dq}\lambda(q)=\sqrt{q}\,f^2(-q)f^2(-q^5)\sqrt{125\lambda^3+22\lambda^2+\lambda}.$$
\end{lemma}
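The differential equation can be confirmed by comparing a central-difference approximation of $q\,d\lambda/dq$ with the right-hand side (a sketch of our own; $q=0.1$ is an arbitrary test point):

```python
import numpy as np

def f(q, N=400):                          # truncated Euler product (q; q)_infinity
    n = np.arange(1, N + 1)
    return np.prod(1 - q**n)

lam = lambda q: q * f(q**5)**6 / f(q)**6

q, h = 0.1, 1e-6
lhs = q * (lam(q + h) - lam(q - h)) / (2 * h)   # q * d(lambda)/dq by central differences
L = lam(q)
rhs = np.sqrt(q) * f(q)**2 * f(q**5)**2 * np.sqrt(125 * L**3 + 22 * L**2 + L)
```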
\begin{entry}\cite[p.~342]{geabcbI}, \cite[p.~52]{lnb} Let $u$ be defined by \eqref{rrcf}. Then there exists a constant $C$ such that
\begin{equation*}
u^5+u^{-5}=\df{1}{2\sqrt{q}}\df{f^3(-q)}{f^3(-q^5)}\left(C+\int_q^1\df{f^8(-t)}{f^4(-t^5)}\df{dt}{t^{3/2}}+125
\int_0^q\df{f^8(-t^5)}{f^4(-t)}\sqrt{t}\,dt\right).
\end{equation*}
\end{entry}
\noindent(The constant $C$ can be determined, but it is different from that claimed by Ramanujan \cite[pp.~346--347]{geabcbI}.)
The next entry is connected with modular equations of degree 14.
\begin{entry}\cite[pp.~51--52]{lnb}, \cite[p.~359]{geabcbI}
Let
$$v:=v(q):=q\left(\df{f(-q)f(-q^{14})}{f(-q^2)f(-q^7)}\right)^4.$$
Put
$$c=\df{\sqrt{13+16\sqrt2}}{7}.$$
Then
\begin{gather*}
\int_0^q f(-t)f(-t^2)f(-t^7)f(-t^{14})dt=
\df{1}{\sqrt{8\sqrt2}}\int^{\cos^{-1}c}_{\cos^{-1}\left(c\frac{1+v}{1-v}\right)}\df{d\varphi}{\sqrt{1-\frac{16\sqrt2-13}{32\sqrt2}\sin^2\varphi}}.
\end{gather*}
\end{entry}
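Again a numerical spot-check is routine (a sketch of our own; $q=0.05$ is an arbitrary test point, and $f(-q)$ is computed from a truncated Euler product):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipkinc

def f(q, N=400):                          # truncated Euler product (q; q)_infinity
    n = np.arange(1, N + 1)
    return np.prod(1 - q**n)

q = 0.05                                  # illustrative test point
v = q * (f(q) * f(q**14) / (f(q**2) * f(q**7)))**4
c = np.sqrt(13 + 16 * np.sqrt(2)) / 7
m = (16 * np.sqrt(2) - 13) / (32 * np.sqrt(2))

lhs = quad(lambda t: f(t) * f(t**2) * f(t**7) * f(t**14), 0, q)[0]
rhs = (ellipkinc(np.arccos(c), m)
       - ellipkinc(np.arccos(c * (1 + v) / (1 - v)), m)) / np.sqrt(8 * np.sqrt(2))
```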
Our concluding example of Ramanujan's exquisite formulas is an identity linking elliptic integrals and modular equations of degree 35.
\begin{entry}\cite[p.~53]{lnb}, \cite[p.~364]{geabcbI}
If
$$v:=v(q):=q\df{f(-q)f(-q^{35})}{f(-q^5)f(-q^7)},$$
then
\begin{equation*}
\int_0^q t\,f(-t)f(-t^5)f(-t^7)f(-t^{35})dt=
\int_0^v\df{t\,dt}{\sqrt{(1+t-t^2)(1-5t-9t^3-5t^5-t^6)}}.
\end{equation*}
\end{entry}
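This identity as well can be spot-checked numerically (a sketch of our own; $q=0.1$ is an arbitrary test point):

```python
import numpy as np
from scipy.integrate import quad

def f(q, N=400):                          # truncated Euler product (q; q)_infinity
    n = np.arange(1, N + 1)
    return np.prod(1 - q**n)

q = 0.1                                   # illustrative test point
v = q * f(q) * f(q**35) / (f(q**5) * f(q**7))

lhs = quad(lambda t: t * f(t) * f(t**5) * f(t**7) * f(t**35), 0, q)[0]
rhs = quad(lambda t: t / np.sqrt((1 + t - t**2)
           * (1 - 5*t - 9*t**3 - 5*t**5 - t**6)), 0, v)[0]
```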
% Source: ``Ramanujan's Beautiful Integrals,'' arXiv:2103.14002, https://arxiv.org/abs/2103.14002

% Source: ``Equipartitions and Mahler volumes of symmetric convex bodies,'' arXiv:1904.10765, https://arxiv.org/abs/1904.10765
\begin{abstract}
Following ideas of Iriyeh and Shibata we give a short proof of the three-dimensional Mahler conjecture for symmetric convex bodies. Our contributions include, in particular, simple self-contained proofs of their two key statements. The first of these is an equipartition (ham sandwich type) theorem which refines a celebrated result of Hadwiger and, as usual, can be proved using ideas from equivariant topology. The second is an inequality relating the product volume to areas of certain sections and their duals. We observe that these ideas give a large family of convex sets in every dimension for which the Mahler conjecture holds true. Finally we give an alternative proof of the characterization of convex bodies that achieve the equality case and establish a new stability result.
\end{abstract}
\section{Introduction}
A {\it convex body} is a compact convex subset of $\mathbb R^n$ with nonempty interior. We say that $K$ is {\it symmetric} if it is centrally symmetric with its center at the origin, i.e. $K=-K$. We write $|L|$ for the $k$-dimensional Lebesgue measure (volume) of a measurable set $L\subset \mathbb R^n$, where $k$ is the dimension of the minimal affine subspace containing $L$. We refer to \cite{AGM, Sc} as general references on convex bodies.
The polar body $K^\circ$ of a symmetric convex body $K$ is defined by
$K^\circ=\{y; \langle x,y\rangle\le1, \forall x\in K\}$
and its \emph{volume product} by
$$
\mathcal P(K) = |K| |K^\circ|.
$$
It is a linear invariant of $K$, that is
$\mathcal P(TK)=\mathcal P(K)$ for every linear isomorphism $T: \mathbb R^n \rightarrow\mathbb R^n$. The set of convex symmetric bodies in $\mathbb R^n$ with the Banach-Mazur distance is compact and the volume product $K\mapsto \mathcal P(K)$ is a continuous function (see the definition in Section \ref{sec:stability} below), hence the maximum and minimum values of $\mathcal P(K)$ are attained.
The Blaschke-Santal\'o inequality states that
$$
\mathcal P(K) \le \mathcal P(B^n_2),
$$
where $B^n_2$ is the Euclidean unit ball. Moreover the previous inequality is an equality if and only if $K$ is an ellipsoid (\cite{San1949}, \cite{Pet}, see \cite{MP}
or also \cite{MR2} for a simple proof of both the inequality and
the case of equality).
In \cite{Mah1939} Mahler conjectured that for every symmetric convex body $K$ in $\mathbb R^n$,
$$
\mathcal P(K)\ge \mathcal P(B_\infty^n)=\frac{4^n}{n!},
$$
where $B_\infty^n=[-1,1]^n$, and proved it for $n=2$ \cite{Mah1939m}. Later the conjecture was proved for unconditional convex bodies \cite{SR,Meyer}, zonoids \cite{Reisner, GMR} and other special cases
\cite{Kar19, Barthe, FMZ}. The conjecture was proved in \cite{BourMil} up to a multiplicative $c^n$ factor for some constant $c>0$, see also \cite{Kuperberg, Na, Giannop}. It is also known that the cube, its dual $B_1^n$ and more generally Hanner polytopes are local minimizers \cite{NazZva, Kim} and that the conjecture follows from conjectures in systolic geometry \cite{ABT} and symplectic geometry \cite{AKY,Kar19}.
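For concreteness, the two extremal values in dimension $n=3$ can be tabulated directly (a small sketch of our own, using the standard volumes $|B_1^n|=2^n/n!$ and $|B_2^n|=\pi^{n/2}/\Gamma(n/2+1)$):

```python
import math

n = 3
cube = 2.0**n                                     # |B_inf^n|, the cube [-1,1]^n
cross = 2.0**n / math.factorial(n)                # |B_1^n|, the polar of the cube
ball = math.pi**(n / 2) / math.gamma(n / 2 + 1)   # |B_2^n|, the Euclidean ball

mahler = cube * cross                             # conjectured minimum 4^n/n! of P(K)
santalo = ball**2                                 # maximum of P(K), by Blaschke-Santalo
```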
Iriyeh and Shibata \cite{IS} came up with a beautiful proof of this conjecture in dimension $3$ that generalizes a proof of Meyer \cite{Meyer} in the unconditional case by adding two new ingredients: differential geometry and a ham sandwich type (or equipartition) result. In this mostly self-contained note we provide an alternative proof and derive the three dimensional symmetric Mahler conjecture following their work.
\begin{thm}[\cite{IS}] \label{thm:main}
For every convex symmetric body $K$ in $\mathbb R^3$,
$$\abs{K}\abs{K^\circ} \geq \abs{B_1^3}\abs{B_\infty^3}=\frac{32}{3}.$$
Equality is achieved if and only if $K$ or $K^{\circ} $ is a parallelepiped.
\end{thm}
In Section \ref{sec:equipart} we prove an equipartition result. In Section \ref{sec:main} we derive the key inequality and put it together with the aforementioned equipartition to prove Theorem \ref{thm:main}. In Section \ref{sec:equality} we prove the equality cases of Theorem \ref{thm:main} and in Section \ref{sec:stability} we present a new corresponding stability result. \\
\paragraph{{\bf{Acknowledgements}}}
We thank Shlomo Reisner for his observations on the paper of Iriyeh and Shibata \cite{IS} and Pavle Blagojevi\'c, Roman Karasev and Ihab Sabik for comments on an earlier version of this work.
\section{An equipartition result}\label{sec:equipart}
A celebrated result of Hadwiger \cite{Hadwiger}, who answered a question of Gr\"unbaum \cite{Grunbaum}, shows that for any absolutely continuous finite measure in $\mathbb R^3$ there exist three planes for which each octant has $\frac 1 8 $ of the measure. There is a vast literature around Hadwiger's theorem, see \cite{FDEP, Ramos, MVZ,Makeev2007, BlaZie18}. Theorem \ref{thm:equipart} below, which corresponds to formula (15) in \cite{IS}, refines Hadwiger's theorem when the measure is centrally symmetric, in a way that is reminiscent of the spicy chicken theorem \cite{kha,AKK}.
\begin{thm}\label{thm:equipart}
Let $K \subset \mathbb R^3$ be a symmetric convex body. Then there exist planes $H_1,H_2,H_3$ passing through the origin such that:
\begin{itemize}
\item they split $K$ into $8$ pieces of equal volume, and
\item for each plane $H_i$, the section $K \cap H_i$ is split into $4$ parts of equal area by the other two planes.
\end{itemize}
\end{thm}
Notice that in the proof of this theorem, the convexity of $K$ is not used. The convex body $K$ could be replaced by a symmetric measure defined via a density function and a different symmetric density function could be used to measure the areas of the sections. \\ \begin{proof}[Proof of Theorem \ref{thm:equipart}]
The scheme of this proof is classical in applications of algebraic topology to discrete geometry. It is often referred to as the configuration-space/test-map scheme (see e.g. Chapter 14 in \cite{TOG2017}).
Assume that $H\subset\mathbb R^3$ is an oriented plane with outer normal $v$. Let us denote the halfspaces
$H^+=\{x; \langle x,v\rangle> 0\}$ and $H^-=\{x; \langle x,v\rangle< 0\}$. If $u\in H^+$, we say that $u$ is on the positive side of $H$.
Given the convex body $K\subset\mathbb R^3$, we parametrize a special family of triplets of planes by orthonormal bases $U=(u_1,u_2,u_3)\in SO(3)$ in the following way.
Let $H_1$ be the plane $u_1^\perp=\{x; \langle x,u_1\rangle=0\}$.
Let $l_2,l_3\subset H_1$ be the unique pair of lines through the origin (as in the left part of Figure \ref{fig}) with the following properties:
\begin{itemize}
\item $u_2,u_3$ are directed along the angle bisectors of $l_2$ and $l_3$,
\item the lines $l_2$ and $l_3$ split $H_1\cap K$ into four regions of equal area,
\item for any $x \in l_3$, $\langle x, u_2 \rangle$ has the same sign as $\langle x, u_3\rangle$.
\end{itemize}
For $i=2,3$ let $H_i$ be the unique plane containing $l_i$ that splits $K\cap H_1^+$ into two parts of equal volume and let $H_i^+$ be the half-space limited by $H_i$ which contains $u_2$. Thus
\begin{itemize}
\item $\abs{K\cap H_1 \cap (H_2^+ \cap H_3^+)}=\frac{1}{4}\abs{K \cap H_1}$,
\item $\abs{K \cap (H_1^+ \cap H_i^+)}=\frac{1}{2}\abs{K \cap H_1^+} =\frac{1}{4}\abs{K}$ for $i=2,3$.
\end{itemize}
By using standard arguments it can be seen that the lines $\{l_i\}_{i=2,3}$, the planes $\{H_i\}_{i=1,2,3}$ and the half-spaces $\{H_i^+\}_{i=1,2,3}$ are uniquely determined and depend continuously on $U=(u_1,u_2,u_3)$.
\begin{figure}
\includegraphics[width=\textwidth]{dibujo.pdf}
\caption{The main parts of our construction restricted to the planes $H_1$, $H_2$ and $H_3$. Gray marks are used on the positive sides of oriented lines and planes. In the middle and right figures the horizontal lines coincide with $l_2$ and $l_3$, respectively.}
\label{fig}
\end{figure}
Now we define a continuous test-map $F_1=(f_1, f_2, f_3):SO(3)\to\mathbb R^3$, by
\begin{align*}
f_1(U)&=\frac 18\vol(K) - \vol(K\cap H_1^+\cap H_2^+\cap H_3^+),\\
f_2(U)&=\frac 14\area(K\cap H_2) - \area(K\cap H_2\cap H_1^+\cap H_3^+),\\
f_3(U)&=\frac 14\area(K\cap H_3) - \area(K\cap H_3\cap H_1^+\cap H_2^+),
\end{align*}
where $H_i = H_i(U)$, for $i=1,2,3$. Clearly, any zero of $F_1$ corresponds to a partition with the desired properties.
The dihedral group $D_4$ of eight elements with generators $g_1$ of order two and $g_2$ of order four, acts freely on $SO(3)$ by
\begin{align*}
g_1 \cdot (u_1,u_2,u_3)&=(-u_1,u_2,-u_3),\\
g_2\cdot(u_1,u_2,u_3)&=(u_1,u_3,-u_2).
\end{align*}
It also acts on $\mathbb R^3$ linearly by
\begin{align*}
g_1\cdot(x_1,x_2,x_3)&=(-x_1,-x_3,-x_2),\\
g_2\cdot(x_1,x_2,x_3)&=(-x_1,x_3,-x_2).
\end{align*}
Since $K$ is symmetric, $F_1$ is $D_4$-equivariant under the actions we just described, {\em i.e.}
\begin{equation}\label{eq:zho}
g_i\cdot F_1(U)=F_1(g_i\cdot U),\quad\hbox{for all $U=(u_1,u_2,u_3)\in SO(3)$}.
\end{equation}
Indeed, observe that $g_1$ and $g_2$ transform $(H_1^+,H_2^+,H_3^+)$ into $(H_1^-,H_3^+,H_2^+)$ and $(H_1^+,H_3^-,H_2^+)$, respectively (see Figure \ref{fig}). Next, to establish (\ref{eq:zho}), observe that since $K$ is symmetric, the volume of a set of the form $K\cap H_1^\pm\cap H_2^\pm\cap H_3^\pm$ can only have two possible values which depend only on the parity of the number of positive half-spaces used. The same is true for the area of a set of the form $K \cap H_2\cap H_1^\pm\cap H_3^\pm$ and $K \cap H_3\cap H_1^\pm\cap H_2^\pm$.
Consider the function $F_0: SO(3) \to \mathbb R^3$
\[F_0(U)=F_0(u_1,u_2,u_3)=
\begin{pmatrix}
u_{2,2} u_{3,2}\\
u_{3,1} + u_{2,1}\\
u_{3,1} - u_{2,1}
\end{pmatrix},
\]
where $u_{i,j}$ represents the $j$-entry of the vector $u_i$. A direct computation shows that $F_0$ is $D_4$-equivariant and has exactly $8=\lvert D_4\rvert$ zeros in $SO(3)$ given by the orbit under $D_4$ of the identity matrix $I=(e_1, e_2, e_3)$, where $(e_1, e_2,e_3)$ is the canonical basis.
Furthermore, the zeros of $F_0$ are transversal to $SO(3)$. To see this, consider the space $SO(3)$ as a smooth manifold in the space $M_{3\times 3}\simeq\mathbb R^9$ of $3\times 3$ matrices.
For $i=1,2,3$, let $R_i(\theta)$ be the rotation in $\mathbb R^3$ of angle $\theta$ around the vector $e_i$. For example, the matrix corresponding to $i=1$ is of the form
$$R_1(\theta)=\begin{pmatrix}
1 & 0 & 0\\
0 & \cos(\theta) & -\sin(\theta)\\
0 & \sin(\theta) & \cos(\theta)
\end{pmatrix}.$$
The vectors $v_i=\frac{d R_i}{d \theta}(0)$ generate the tangent space to $SO(3)$ at $I$. Let $D F_0$ be the derivative of $F_0$ at $I$; then it can be verified that $\abs{\det (D F_0 \cdot (v_1,v_2,v_3))}=2\neq 0$, which implies the transversality at $I$. The transversality at the remaining $7$ zeros follows immediately from the $D_4$-equivariance of $F_0$.
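The determinant computation can be reproduced by finite differences (a sketch of our own; we store $U$ as the matrix with rows $u_1,u_2,u_3$, an inessential convention since the absolute value of the determinant is the same for columns):

```python
import numpy as np

def F0(U):
    # U has rows u1, u2, u3; F0(U) = (u_{2,2} u_{3,2}, u_{3,1}+u_{2,1}, u_{3,1}-u_{2,1})
    return np.array([U[1, 1] * U[2, 1], U[2, 0] + U[1, 0], U[2, 0] - U[1, 0]])

def R(i, t):
    # rotation by angle t (up to orientation) around the i-th coordinate axis
    c, s = np.cos(t), np.sin(t)
    M = np.eye(3)
    j, k = [(1, 2), (0, 2), (0, 1)][i]
    M[j, j] = M[k, k] = c
    M[j, k], M[k, j] = -s, s
    return M

h = 1e-6
# columns: central-difference derivatives of F0 along the three rotation curves at I
DF = np.column_stack([(F0(R(i, h)) - F0(R(i, -h))) / (2 * h) for i in range(3)])
det = np.linalg.det(DF)
```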
The result now follows directly from Theorem 2.1 in \cite{klartag}. The idea of this theorem can be traced back to Brouwer and was used in the equivariant setting by B\'ar\'any to give an elegant proof of the Borsuk-Ulam theorem. B\'ar\'any's proof is explained in the piecewise linear category in Section 2.2 of \cite{Matousek}. For the reader's convenience we give a sketch of the proof of Theorem 2.1 in \cite{klartag} in our case.
Consider the continuous $D_4$-equivariant function defined on $SO(3)\times [0,1]$ by
\[F(U,t):=(1-t) F_0(U)+t F_1(U).\]
We approximate $F$ by a smooth $D_4$-equivariant function $F_\varepsilon$ such that $F_\varepsilon(U,0)=F(U,0)=F_0(U)$, $\sup_{U,t}\abs{F(U,t)-F_\varepsilon (U,t)}<\varepsilon$ and $0$ is a regular value of $F_\varepsilon$. The existence of such a smooth equivariant function follows from Thom's transversality theorem \cite{Thom1954} (see also \cite[pp. 68--69]{guillemin2010}), an elementary direct proof can be found in Section 2 of \cite{klartag}. The implicit function theorem implies that $Z_\varepsilon=F_\varepsilon^{-1}(0,0,0)$ is a one dimensional smooth submanifold of $SO(3)\times [0,1]$ on which $D_4$ acts freely. The submanifold $Z_\varepsilon$ is a union of connected components which are diffeomorphic either to an interval, or to a circle, the former having their boundary on $SO(3)\times \{0,1\}$.
The set $Z_\varepsilon$ has an odd number (namely one) of orbits under $D_4$ intersecting $SO(3)\times \{0\}$. Denote by $\alpha \colon [0,1] \to SO(3)\times [0,1]$ a topological interval of $F_\varepsilon^{-1}(0)$ and let $g\in D_4$. Observe that $g(\alpha(0))\neq \alpha(1)$: if they were equal, then $g$ would map $\alpha([0,1])$ to itself and hence would have a fixed point, contradicting the freeness of the action of $D_4$. We conclude that an odd number of orbits of $Z_\varepsilon$ must intersect $SO(3)\times \{1\}$, i.e. there exists $U_\varepsilon\in SO(3)$ such that $F_\varepsilon(U_\varepsilon,1)=0$. Since the previous discussion holds for every $\varepsilon$, there exists $U\in SO(3)$ such that $F(U,1)=F_1(U)=0$.
\end{proof}
\begin{remark}
Let us restate the punch line of the above argument in algebraic topology language: $F_\varepsilon^{-1}(0)\cap SO(3)\times \{0\}$ is a non-trivial $0$-dimensional homology class of $SO(3)$ in the $D_4$-equivariant homology with $Z_2$ coefficients, on the other hand $F_\varepsilon^{-1}(0)$ is a $D_4$-equivariant bordism so $F_\varepsilon^{-1}(0)\cap SO(3)\times \{1\}$ must also be non-trivial in this equivariant homology, and in particular, non empty.
\end{remark}
\begin{remark}
Theorem \ref{thm:equipart} can also be shown using obstruction theory with the aid of a $D_4$-equivariant CW-decomposition of $SO(3)$ (see \cite{BLAG}).
\end{remark}
\begin{remark}\label{rm:linear} We shall say for the rest of the paper that $K$ is {\em equipartitioned by the standard orthonormal basis} $(e_1, e_2, e_3)$ if $(\pm e_1, \pm e_2, \pm e_3) \in \partial K$ and the planes $H_i=e_i^\perp$ satisfy the conditions of Theorem \ref{thm:equipart}. Applying Theorem \ref{thm:equipart} it follows that for every convex, symmetric body $K \subset \mathbb R^3$ there exists $T \in GL(3)$ such that $TK$ is equipartitioned by the standard orthonormal basis.
\end{remark}
\section{Symmetric Mahler conjecture in dimension 3}\label{sec:main}
For a sufficiently regular oriented hypersurface $A \subset \mathbb R^n$, define the vector
\[V(A)=\int_A n_A(x)\,dH(x),\]
where $H$ is the $(n-1)$-dimensional Hausdorff measure on $\mathbb R^n$ such that $|A|=H(A),$ for $A$ contained in a hyperplane, and $n_A(x)$ denotes the unit normal to $A$ at $x$ defined by its orientation.
Notice that for $n=3$ one has
\begin{equation}\label{wedge}
V(A):=\left(\int_{A}dx_2\wedge dx_3,\int_{A} dx_1 \wedge dx_3,\int_{A} dx_1 \wedge dx_2 \right),
\end{equation}
because of the following equality between vector valued differential forms
\begin{equation}
n_A(x) dH(x)=(dx_2\wedge dx_3, dx_1\wedge dx_3, dx_1\wedge dx_2). \label{identity}
\end{equation}
Indeed let $T_x$ be the tangent plane at $x$; for a pair of tangent vectors $u,v \in T_x$, $dH(x)(u,v)$ is the signed area of the parallelogram spanned by $u$ and $v$. Let $\theta$ be the angle of intersection between $T_x$ and $e_i^\perp$, and observe that $n_A(x)_i=\cos(\theta)$.
On the other hand since the form $dx_j \wedge d x_k$ doesn't depend on the value of $x_i$, we have \[dx_j\wedge dx_k(u,v)=dx_j\wedge dx_k(P_{e_i}u,P_{e_i}v)=\det(P_{e_i}u,P_{e_i}v).\]
This is the signed area of the projection of the oriented parallelogram spanned by $u$ and $v$ on the coordinate hyperplane $e_i^\perp$. Thales' theorem implies \[dx_j\wedge dx_k(P_{e_i}u,P_{e_i}v)= \cos(\theta) dH(x)(u,v)= (n_A(x))_i dH(x)(u,v),\]
establishing identity (\ref{identity}) above.
For any convex body $K$ in $\mathbb R^n$ containing $0$ in its interior, the orientation of a subset $A$ of $\partial K$ is given by exterior normal $n_K$ to $K$ so that $V(A)=\int_A n_K(x)dH(x)$; we define $\mathcal C(A):=\{rx; 0\le r\le1, x\in A\}$, and observe that
\[\abs{\mathcal C(A)}=\frac{1}{n}\int_A \langle x,n_K(x)\rangle dH(x).\]
If $K_1$ and $K_2$ are sufficiently regular Jordan regions of $\mathbb R^n$ and $K_1\cap K_2$ is a hypersurface, we define
$K_1\overrightarrow{\cap} K_2$
to be oriented according to the outer normal of $\partial K_1$. So even though $K_1\overrightarrow{\cap} K_2$ equals $K_2 \overrightarrow{\cap} K_1$ as sets, they may have opposite orientations. We denote by $[a,b]= \{ (1-t)a+tb: t \in [0,1]\}$ the segment joining $a$ to $b$. The inequality of the following proposition generalizes inequality (3) in \cite{Meyer} and was proved in \cite[Proposition 3.2]{IS}.
\begin{proposition}\label{ineq}
Let $K$ be a symmetric convex body in $\mathbb R^n$ and let $A$ be a Borel subset of $\partial K$ with $|\mathcal C(A)| \not=0$. Then
\[
\frac{1}{n}\langle x, V(A)\rangle \leq \abs{\mathcal C(A)},\ \forall x\in K.
\]
So $\frac{V(A)}{n|\mathcal C(A)|}\in K^\circ$ and if moreover for some $x_0\in K$ one has $\langle x_0,\frac{V(A)}{n|\mathcal C(A)|}\rangle=1$ then $x_0\in\partial K$,
$\frac{V(A)}{n|\mathcal C(A)|}\in \partial K^\circ$ and $[x,x_0]\subset\partial K$, for almost all $x\in A$.
\end{proposition}
\begin{proof}
For any $x\in K$, we have $\langle x,n_K(z)\rangle \le \langle z,n_K(z)\rangle$ for any $z\in\partial K$. Thus
\[
\langle x, V(A)\rangle = \int_A \langle x,n_K(z)\rangle dH(z)\leq \int_A \langle z,n_K(z)\rangle dH(z) =n|\mathcal C(A)|, \ \forall x\in K.
\]
It follows that $\frac{V(A)}{n|\mathcal C(A)|}\in K^\circ$.
If for some $x_0\in K$ one has $\langle x_0,\frac{V(A)}{n|\mathcal C(A)|} \rangle=1$, then clearly $x_0\in\partial K$ and $\frac{V(A)}{n|\mathcal C(A)|}\in \partial K^\circ$. Moreover, for almost all $x\in A$ one has $\langle x_0,n_K(x)\rangle=\langle x,n_K(x)\rangle$ thus $[x, x_0] \subset \partial K$.
\end{proof}
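As a concrete illustration of the proposition (our own, not part of the proof), take $K=[-1,1]^3$ and let $A$ be the top face, for which $V(A)$ and $|\mathcal C(A)|$ are available in closed form:

```python
import numpy as np

# K = [-1,1]^3 and A = the top face {x_3 = 1}, oriented by the outer normal e_3.
# Then V(A) = area(A) * e_3 = (0, 0, 4), and C(A) is the pyramid over A with apex 0,
# so |C(A)| = (1/3) * <x, e_3> * area(A) = 4/3.
V_A = np.array([0.0, 0.0, 4.0])
cone_vol = 4.0 / 3.0

y = V_A / (3 * cone_vol)              # should lie in the polar B_1^3; here y = e_3, on its boundary

# <x, V(A)>/3 <= |C(A)| for all x in K: check on a grid of points of the cube.
g = np.linspace(-1, 1, 11)
X = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
max_val = np.max(X @ V_A) / 3         # attained exactly by the points of the top face
```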
Using Proposition \ref{ineq} twice we obtain the following corollary.
\begin{corollary} \label{coro}
Let $K$ be a symmetric convex body in $\mathbb R^n$. Consider two Borel subsets $A \subset \partial K$ and $B\subset \partial K^\circ$ such that $|\mathcal C(A)|>0$ and $|\mathcal C(B)|>0$. Then one has
\[\langle V(A), V(B) \rangle \le n^2|\mathcal C(A)||\mathcal C(B)|.\]
If there is equality then $[a,\frac{V(B)}{n|\mathcal C(B)|}]\subset\partial K$ and $[b,\frac{V(A)}{n|\mathcal C(A)|}]\subset \partial K^\circ$ for almost all $a\in A$ and $b\in B$.
\end{corollary}
\begin{proof}[Proof of the inequality of Theorem \ref{thm:main}]
Since the volume product is continuous, the inequality for an arbitrary convex body follows by approximation by centrally symmetric, smooth, strictly convex bodies (see \cite[Section 3.4]{Sc}); we may therefore assume that $K$ is smooth and strictly convex.
Since the volume product is linearly invariant, we may assume, using Remark \ref{rm:linear} that $K$ is equipartitioned by the standard orthonormal basis.
For $\omega\in\{-1;1\}^3$ and any set $L\subset\mathbb R^3$ we define $L(\omega)$ to be the intersection of $L$ with the $\omega$-octant:
\[
L(\omega)=\{x\in L; \omega_i x_i\ge0;\ i=1,2,3\}.
\]
From the equipartition of volumes one has $\abs{K(\omega)}=\abs{K}/8$ for every $\omega \in \{-1,1\}^3$.
For $\omega \in\{-1,1\}^3$ let $N(\omega):=\{\omega'\in \{-1,1\}^3: |\omega-\omega'|=2\}$. In other words $\omega' \in N(\omega)$ if $[\omega, \omega']$ is an edge of the cube $[-1,1]^3$.
Using Stokes theorem we obtain $V(\partial (K(\omega)))=0$ hence
\begin{equation}\label{eq:sec}
V((\partial K)(\omega))=-\sum_{\omega' \in N(\omega)} V(K(\omega) \overrightarrow{\cap} K(\omega'))=\sum_{i=1}^3 \frac{|K\cap e^\perp_{i}|}{4} \omega_{i} e_{i}
\end{equation}
where in the last equality we used the equipartition of areas of $K\cap e^\perp_{i}$.
Since $K$ is strictly convex and smooth, there exists a diffeomorphism $\varphi:\partial K\to \partial K^\circ$ such that $\langle \varphi(x),x\rangle=1$.
We extend $\varphi$ to $\mathbb R^3$ by homogeneity of degree one: $\varphi(\lambda x)=\lambda \varphi(x)$, for any $\lambda\geq 0$ and $x\in\partial K$.
Then $K^\circ=\bigcup_{\omega}\varphi (K(\omega))$ and $|K^\circ|=\sum_{\omega}|\varphi (K(\omega))|$.
From the equipartition of volumes one has
\[\abs{K}\abs{K^\circ} =\sum_{\omega } \abs{K} \abs{\varphi (K(\omega))}=8\sum_{\omega} \abs{K(\omega)}\abs{\varphi (K(\omega))}.\]
From Corollary \ref{coro} we deduce that for every $\omega\in\{-1,1\}^3$
\begin{eqnarray}\label{ineq:coro}
\abs{K(\omega)}\abs{\varphi (K(\omega))} \ge \frac{1}{9} \langle V((\partial K)(\omega)), V(\varphi((\partial K)(\omega)))\rangle.
\end{eqnarray}
Thus, using \eqref{eq:sec}
\begin{align}\label{eq:meyer}
\abs{K}\abs{K^\circ}&\ge\frac{8}{9}\sum_{\omega} \langle V( (\partial K)(\omega)),V( \varphi((\partial K)(\omega)))\rangle =
\frac{8}{9} \sum_\omega\langle \sum_{i=1}^3 \frac{ |K\cap e^\perp_{i}|}{4} \omega_i e_i, V( \varphi((\partial K)(\omega)))\rangle \nonumber\\
&= \frac{8}{9}\sum_{i=1}^3 \frac{ |K\cap e^\perp_{i}|}{4} \langle e_i, \sum_\omega \omega_i V( \varphi((\partial K)(\omega)))\rangle.
\end{align}
By Stokes theorem $V(\varphi(\partial (K(\omega))))=0$, therefore
\[
V(\varphi( (\partial K)(\omega))) =-\sum_{\omega' \in N(\omega)}V(\varphi(K(\omega) \overrightarrow{\cap} K(\omega'))).
\]
Recall that we have chosen orientations so that for every $\omega' \in N(\omega)$
\[
V(\varphi(K(\omega) \overrightarrow{\cap} K(\omega')))=-V(\varphi(K(\omega')\overrightarrow{\cap} K(\omega))).
\]
Substituting in \eqref{eq:meyer} we obtain
\[
\abs{K}\abs{K^\circ}\ge\frac{8}{9}\sum_{i=1}^3 \frac{ |K\cap e^\perp_{i}|}{4} \langle e_i, \sum_\omega \omega_i \sum_{\omega'\in N(\omega)}V(\varphi(K(\omega')\overrightarrow{\cap} K(\omega))) \rangle.
\]
If $[\omega,\omega']$ is an edge of the cube $[-1,1]^3$ let $c(\omega,\omega')$ be the coordinate in which $\omega$ and $\omega'$ differ. For every $i=1,2,3$ we have
\begin{align}\label{eq:punch}
\sum_\omega \omega_i \sum_{\omega'\in N(\omega)}V(\varphi(K(\omega')\overrightarrow{\cap} K(\omega))) = \sum_\omega \sum_{\omega'\in N(\omega): c(\omega,\omega')=i}\omega_i V(\varphi(K(\omega')\overrightarrow{\cap} K(\omega))) \\
+ \sum_\omega \sum_{\omega'\in N(\omega): c(\omega,\omega')\neq i}\omega_i V(\varphi(K(\omega')\overrightarrow{\cap} K(\omega))). \nonumber
\end{align}
The first part of the right hand side of \eqref{eq:punch} can be rewritten as a sum of terms of the form
\[ \omega_i V(\varphi(K(\omega')\overrightarrow{\cap} K(\omega))) + \omega_i' V(\varphi(K(\omega)\overrightarrow{\cap} K(\omega'))) =2\omega_i V(\varphi(K(\omega')\overrightarrow{\cap} K(\omega))),\]
since $\omega_i=-\omega_i'$. Thus for each $i$, the first part of the right hand side of \eqref{eq:punch} equals
\[\sum_{|\omega-\omega'|=2, \omega_i=1,\omega'_i=-1} 2V(\varphi(K(\omega')\overrightarrow{\cap} K(\omega)))=2V(\varphi(K \cap e_i^\perp)),\]
where $K\cap e_i^\bot$ is oriented in the direction of $e_i$.
The second part of the sum \eqref{eq:punch} can be rewritten as a sum of terms of the form
\[\omega_i V(\varphi(K(\omega')\overrightarrow{\cap} K(\omega))) + \omega_i' V(\varphi(K(\omega)\overrightarrow{\cap} K(\omega')))=0,\]
since in this case $\omega_i=\omega_i'$.
Let $P_i$ be the orthogonal projection on the plane $e_i^\bot$.
Then $P_i: \varphi(K\cap e_i^\bot)\to P_i (K^\circ)$ is a bijection; thus from equation (\ref{wedge}) we get
\begin{equation}\label{eq:scalar}
\langle V (\varphi(K\cap e_i^\bot)), e_i\rangle=|P_i(K^\circ)|.
\end{equation}
Since the polar of a section is the projection of the polar, $P_i(K^\circ)=P_i(\varphi(K\cap e_i^\bot))=(K\cap e_i^\bot)^{\circ}$, and we obtain
\begin{eqnarray}\label{ineq:end}
\abs{K}\abs{K^\circ}\ge \frac{4}{9}\sum_{i=1}^3 |K\cap e^\perp_{i}| |P_i(K^\circ)|\ge \frac{4}{9}\cdot 3\cdot \frac{4^2}{2}=\frac{4^3}{3!}=\frac{32}{3},
\end{eqnarray}
where we used the $2$-dimensional Mahler inequality
\begin{eqnarray}\label{ineq:mahler2d}
\abs{K\cap e_i^\bot}\abs{P_i(K^\circ)}\ge \frac {4^2}{2}=8.
\end{eqnarray}
\end{proof}
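For the cube $K=B_\infty^3$ every inequality in the proof is an equality, which can be checked by direct computation (a small sketch of our own):

```python
import numpy as np

# K = B_inf^3 = [-1,1]^3, with polar B_1^3.
vol_K = 8.0
vol_Ko = 4.0 / 3.0

# The coordinate sections K cap e_i-perp are the squares [-1,1]^2 (area 4);
# the projections P_i of the polar are two-dimensional cross-polytopes (area 2),
# so each pair attains equality 4 * 2 = 4^2/2 in the 2-dimensional Mahler inequality.
sections = np.array([4.0, 4.0, 4.0])
projections = np.array([2.0, 2.0, 2.0])

bound = (4.0 / 9.0) * np.sum(sections * projections)   # right-hand side of the key inequality
```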
\begin{remark}
In higher dimensions an equipartition result is not at our disposal, but the generalization of the rest of the proof is straightforward and provides a new large family of examples for which the Mahler conjecture holds.
\end{remark}
\begin{proposition}
If $K \subset \mathbb R^n$ is a centrally symmetric convex body that can be partitioned with hyperplanes $H_1,H_2,\ldots,H_n$ into $2^n$ pieces of the same volume such that each section $K\cap H_i$ satisfies the Mahler conjecture and is partitioned into $2^{n-1}$ regions of the same $(n-1)$-dimensional volume by the remaining hyperplanes, then
\[\abs{K}\abs{K^\circ}\ge \frac{4^n}{n!}.\]
\end{proposition}
The proof is the same; the first inequality now has a $\frac{2^n}{n^2}$ factor in front. This time there are $2^{n-1}$ parts on each section and each one appears twice, so we multiply by a factor of $\frac{1}{2^{n-2}}$; the sum has $n$ terms, so the induction step introduces a factor of $\frac{4}{n}$, as desired.
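The bookkeeping can be checked mechanically: starting from the two-dimensional value $4^2/2!=8$ and multiplying by $4/n$ at each induction step reproduces the conjectured bound (a small sketch of our own):

```python
from math import factorial

# Recurrence suggested by the induction step: M(2) = 8 and M(n) = (4/n) * M(n-1).
vals = {2: 8.0}
for n in range(3, 11):
    vals[n] = (4.0 / n) * vals[n - 1]

# Compare with the closed form 4^n / n!.
target = {n: 4**n / factorial(n) for n in range(2, 11)}
max_err = max(abs(vals[n] - target[n]) for n in range(2, 11))
```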
\section{Equality case}\label{sec:equality}
In this section we prove that the only symmetric three dimensional convex bodies that achieve equality in the Mahler conjecture are linear images of the cube and of the cross polytope.
The strategy is to look at the steps in the proof of Theorem \ref{thm:main} where inequalities were used. Specifically, the analogous theorem in dimension $2$ implies that the coordinate sections of $K$ attain equality in the two-dimensional Mahler inequality and therefore are parallelograms. Combinatorial analysis of how these sections can interact with the equipartition yields the equality case. The major ingredient of the analysis of how these sections can coexist in a convex body is Corollary \ref{coro}. At first one might think that the situation is extremely rigid and that there are just a few evident ways in which a convex body with square coordinate sections might be the union of $8$ cones. In fact there is a large family of positions of the cube for which it is equipartitioned by the standard orthonormal basis, and more than one way of positioning the octahedron so that it is equipartitioned by the standard orthonormal basis, as will become evident in the proof.\\
First we show that the symmetric convex bodies which are equipartitioned by the standard orthonormal basis (see Remark \ref{rm:linear}) are uniformly bounded.
We use the notation $\conv(\cdot)$ to denote the convex hull.
\begin{lemma}\label{lm:bounded} Let $L \subset \mathbb R^2$ be a symmetric convex body equipartitioned by the standard orthonormal basis. Then
\begin{equation}\label{eq:r2bound}
B_1^2 \subset L \subset \sqrt{2} B_2^2.
\end{equation}
Let $K \subset \mathbb R^3$ be a symmetric convex body equipartitioned by the standard orthonormal basis. Then
$$
B_1^3 \subset K \subset 54\sqrt{2} B_1^3.
$$
\end{lemma}
\begin{proof} The first inclusion follows from the fact that $\pm e_1, \pm e_2 \in \partial L$. Our goal is to estimate how far a point of $L$ may lie from $B_1^2$. Consider a point $a
=a_1e_1+a_2e_2 \in L \cap (\mathbb R_+)^2 \setminus B_1^2$. Note that $|a_1-a_2| \le 1$, since otherwise
$e_j$ would be in the interior of $\conv (a, \pm e_i)$ for $i\not=j$, which would contradict the equipartition assumption. It follows that
$
|L \cap (\mathbb R_+)^2| \ge|\conv(0,e_1,e_2,a)|= \frac{a_1+a_2}{2}.
$
Let $b$ be the intersection of the line through $a$ and $e_1$ and the line through $-a$ and $-e_2$. Then
$
L \cap \{ x_1 \ge 0, x_2 \le 0\} \subset \conv(0, b, e_1, -e_2).
$
Indeed, otherwise one of the points $e_1$ or $-e_2$ would not be on the boundary of $L$. Using that $|\conv(0, b, e_1, -e_2)|=\frac{b_1-b_2}{2}$ and the equipartition property of $L$, a direct computation shows that
$(a_1-\frac{1}{2})^2+ (a_2-\frac{1}{2})^2 \le \frac{1}{2}.$
Repeating this observation for all quadrants of $L$, we obtain (\ref{eq:r2bound}).
Now consider a symmetric convex body $K \subset \mathbb R^3$ equipartitioned by the standard orthonormal basis. Let us prove that
\begin{equation}\label{eq:one-third}
\max_{a\in K(\omega)}\|a\|_1\ge\max_{a\in K}\|a\|_1^{1/3},\quad \forall\omega\in\{-1;1\}^3,
\end{equation}
where $\|a\|_1=\sum |a_i|$. Let $\omega_0\in\{-1;1\}^3$ and $a(\omega_0)\in K(\omega_0)$ be such that $\|a(\omega_0)\|_1=\max_{a\in K}\|a\|_1$. Then
$$
|K(\omega_0)| \ge |\conv(0,e_1,e_2, e_3,a(\omega_0))|=\|a(\omega_0)\|_1/6.
$$
For any $\omega\in\{-1;1\}^3$, one has $K(\omega)\subset\max_{a\in K(\omega)}\|a\|_1\, B_1^3(\omega)$, thus $|K(\omega)|\le \left(\max_{a\in K(\omega)}\|a\|_1\right)^3/6$; this, together with the equipartition property of $K$, gives (\ref{eq:one-third}).
Let $R=\max_{a\in K}\|a\|_1$. Then, from (\ref{eq:one-third}), for all $\omega$ there exists $a(\omega)=a_1(\omega)e_1+a_2(\omega)e_2+a_3(\omega) e_3 \in K(\omega)$ such that $\|a(\omega)\|_1 \ge R^{1/3}$. Note that $\|a(\omega)\|_\infty=\max_i |a_i(\omega)| \ge R^{1/3}/3$.
Moreover, there exist $\omega \not =\omega'$ and $i\in \{1,2,3\}$ such that
\begin{equation}\label{eq:comb}
|a_i(\omega)|=\|a(\omega)\|_\infty \ge \frac{R^{1/3}}{3},\quad |a_i(\omega')|=\|a(\omega')\|_\infty \ge \frac{R^{1/3}}{3} \quad\mbox{and}\quad \sign a_i(\omega) = \sign a_i(\omega').
\end{equation}
Indeed, among the $8$ vectors $a(\omega)$, at least three achieve their $\ell_\infty$-norm at the same coordinate $i$ and, among these three vectors, at least two have this coordinate of the same sign.
Consider $a(\omega)$ and $a(\omega')$ as in (\ref{eq:comb}); then $\lambda a(\omega) +(1-\lambda) a(\omega') \in K$ for all $\lambda \in [0,1]$. Since $\omega \not =\omega'$, there exists a coordinate $j \not=i$ such that either $a_j(\omega)=a_j(\omega')=0$ or $\sign a_j(\omega) \not =\sign a_j(\omega')$. For some $\lambda \in [0,1]$, one has
$
\lambda a_j(\omega) +(1-\lambda) a_j(\omega') =0$ and thus $\lambda a(\omega) +(1-\lambda) a(\omega') \in K \cap e_j^\perp.
$
Using the equipartition of $L:=K \cap e_j^\perp$ and (\ref{eq:r2bound}), together with the properties of $a(\omega)$ and $a(\omega')$, we get
$$
\frac{R^{1/3}}{3} \le | (\lambda a_i(\omega) +(1-\lambda) a_i(\omega'))| \le \sqrt{2}.
$$
Hence $R \le (3\sqrt{2})^3 = 54\sqrt{2}$, which gives the second inclusion.
\end{proof}
Using Lemma \ref{lm:bounded}, we prove the following approximation lemma.
\begin{lemma}\label{approx} Let $L\subset\mathbb R^3$ be a symmetric convex body. Then there exists a sequence $(L_m)_m$ of smooth symmetric strictly convex bodies which converges to $L$ in Hausdorff distance and a sequence $(T_m)_m$ of linear invertible maps which converges to a linear invertible map $T$ such that for any $m$ the bodies $T_mL_m$ and $TL$ are equipartitioned by the standard orthonormal basis.
\end{lemma}
\begin{proof}
By \cite[Section 3.4]{Sc}, there exists a sequence $(L_m)_m$ of smooth symmetric strictly convex bodies converging to $L$ in Hausdorff distance. From Theorem \ref{thm:equipart} there exists a linear map $T_m$ such that $T_mL_m$ is equipartitioned by the standard orthonormal basis. Since $(L_m)_m$ is a convergent sequence it is also a bounded sequence and thus there exist constants $c_1, c_2>0$ such that $c_1B_2^3 \subset L_m \subset c_2B_2^3$. Moreover, from Lemma \ref{lm:bounded} there exist $c'_1, c'_2>0$ such that $ c'_1B_2^3 \subset T_m L_m \subset c'_2B_2^3$. Thus $\sup_m\{\|T_m\|, \|T^{-1}_m\|\}<\infty$ and we may select a subsequence $T_{m_k}$ which converges to some invertible linear map $T$. Then $T_{m_k}L_{m_k}$ converges to $TL$. By continuity, $TL$ is also equipartitioned by the standard orthonormal basis.
\end{proof}
We also need the following lemma.
\begin{lemma}\label{lm:square} Let $P$ be a centrally symmetric parallelogram with non-empty interior. Assume that $P$ is equipartitioned by the standard orthonormal axes.
Then $P$ is a square.
\end{lemma}
\begin{proof} For some invertible linear map $S$, one has $S(P)=B_\infty^2$ and the lines $\{t Se_1: t \in \mathbb R\}$, $\{t Se_2: t \in \mathbb R\}$ equipartition $B_\infty^2$. The square $B_\infty^2$ remains invariant under the rotation of angle $\pi/2$. The rotated lines must also equipartition $B_\infty^2$; this implies that the rotated lines are themselves invariant, since otherwise one cone of area $1/4$ would be strictly contained in another cone of area $1/4$. Moreover $Se_1,Se_2\in\partial B_\infty^2$, thus $\|Se_1\|_2=\|Se_2\|_2=:\lambda$, and we conclude that $\frac 1 \lambda S$ is an isometry and $P$ is a square.
\end{proof}
We shall use the following easy lemma.
\begin{lemma}\label{observation} Let $K$ be a convex body in $\mathbb R^3$.
If two segments $(p_1,p_2)$ and $[p_3,p_4]$ are included in $\partial K$ and satisfy $(p_1,p_2)\cap [p_3,p_4]\neq\emptyset$,
then $\conv(p_1,p_2,p_3, p_4)\subset H\cap K\subset\partial K$, for some supporting plane $H$ of $K$.
\end{lemma}
\begin{proof}
Let $H$ be a supporting plane of $K$ such that $[p_3,p_4]\subset H$. Denote by $n$ the exterior normal of $H$. Let $x\in(p_1,p_2)\cap [p_3,p_4]$. Then $\langle p_i,n\rangle\le \langle x,n\rangle$ for all $i$. Moreover there exists $0<\lambda<1$ such that $x=(1-\lambda)p_1+\lambda p_2$. Since $\langle x,n\rangle=(1-\lambda) \langle p_1,n\rangle+\lambda\langle p_2,n\rangle \le \langle x,n\rangle$, equality must hold in both terms, so $p_1,p_2\in H$. Since $H\cap K$ is convex, we conclude that $\conv(p_1,p_2,p_3, p_4)\subset H\cap K$.
\end{proof}
\noindent
\begin{proof}[Proof of the equality case of Theorem 1.]
Let $L$ be a symmetric convex body such that $\abs{L}\abs{L^\circ}=\frac{32}{3}$. Applying Lemma \ref{approx} and denoting $K=TL$ and $K_m=T_mL_m$, we get that the bodies $K_m$ are smooth and strictly convex symmetric, the bodies $K_m$ and $K$ are equipartitioned by the standard orthonormal basis and the sequence $(K_m)_m$ converges to $K$ in Hausdorff distance.
Applying inequality (\ref{ineq:end}) to $K_m$ and taking the limit we get
$$
\frac{32}{3}=\abs{K}\abs{K^\circ}=\lim_{m\to+\infty}|K_m||K_m^\circ|\ge\frac{4}{9}\sum_{i=1}^3 \lim_{m\to+\infty} |K_m\cap e^\perp_{i}| |P_i(K_m^\circ)|= \frac{4}{9}\sum_{i=1}^3 |K\cap e^\perp_{i}| |P_i(K^\circ)|\ge \frac{32}{3},
$$
where we used (\ref{ineq:mahler2d}). Thus we deduce that for all $i=1,2,3$,
\[
\abs{K\cap e_i^\bot}\abs{P_i(K^\circ)}= 8.
\]
Reisner \cite{Reisner} and Meyer \cite{Meyer} showed that if Mahler's equality in $\mathbb R^2$ is achieved then the corresponding planar convex body is a parallelogram. It follows that the sections of $K$ by the planes $e_i^\bot$ are parallelograms, which are equipartitioned by the standard orthonormal axes. From Lemma \ref{lm:square} the coordinate sections $K\cap e_i^\bot$ are squares and one may write
\[K\cap e_i^\bot=\conv(a_\omega;\ \omega_i=0;\ \omega_j\in\{-;+\}, \forall j\neq i),\quad\hbox{where}\quad a_\omega\in\sum_{i=1}^3\mathbb R_{\omega_i}e_i.
\]
For example
$K\cap e_3^\bot=\conv(a_{++0}, a_{+ - 0}, a_{-+0}, a_{--0}).$
We discuss the four different cases, which also appeared in \cite{IS}, depending on the location of the vertices of $K\cap e_i^\bot$.\\
{\bf Case 1.} If exactly one of the coordinate sections of $K$ is the canonical $B_1^2$ ball, then $K$ is a parallelepiped. \\
For instance, suppose that for
$i=1,2$, $K\cap e_i^\bot\neq\conv(\pm e_j; j\neq i)$, but $K\cap e_3^\bot=\conv(\pm e_1, \pm e_2)$. We apply Lemma \ref{observation} to the segments $[a_{+0+}, a_{-0+}]$ and $[a_{0++}, a_{0-+}]$
to get that the supporting plane $S_3$ of $K$ at $e_3$ contains the quadrilateral $F_3:=\conv(a_{+0+}, a_{0++}, a_{-0+}, a_{0-+})$ and $e_3$ is in the relative interior of the face $K\cap S_3$ of $K$. Moreover $(a_{+0+}, a_{+0-})$ and $[e_1,e_2]$ are included in $\partial K$ and intersect at $e_1$. Thus from Lemma \ref{observation} the triangle $\conv(a_{+0+}, a_{+0-},e_2)$ is included in a face of $K$. In the same way, this face also contains the triangle $\conv(a_{0+-}, a_{0++},e_1)$. Thus the quadrilateral $\conv(a_{+0+}, a_{+0-}, a_{0+-}, a_{0++})$ is included in a face of $K$. In the same way, we get
that the quadrilateral $\conv(a_{0+-}, a_{0++}, a_{-0+}, a_{-0-})$ is included in a face of $K$. We conclude that $K$ is a symmetric body with $6$ faces, thus a parallelepiped.\\
{\bf Case 2.} If exactly two of the coordinate sections of $K$ are canonical $B_1^2$ balls, then $K$ is a linear image of $B_1^3$. \\
For instance, suppose that
for every $i=1,2$, $K\cap e_i^\bot=\conv(\pm e_j; j\neq i)$, but $K\cap e_3^\bot\neq\conv(\pm e_1, \pm e_2)$. Then the segments $(a_{++0}, a_{+-0})$ and $[e_1,e_3]$ are included in $\partial K$ and intersect at $e_1$. Thus from Lemma \ref{observation}, the triangle $\conv(a_{++0}, a_{+-0}, e_3)$ is
in a face of $K$. Using the other octants, we conclude that $K=\conv(e_3,-e_3, K\cap e_3^{\perp})$.\\
{\bf Case 3.} If for every $1\le i\le 3$, $K\cap e_i^\bot\neq\conv(\pm e_j; j\neq i)$, then $K$ is a parallelepiped. \\
One has $(a_{++0}, a_{+-0})\subset \partial K$ and $(a_{+0+}, a_{+0-})\subset\partial K$ and these segments intersect at $e_1$. From Lemma~\ref{observation}, the supporting plane $S_1$ of $K$ at $e_1$ contains the quadrilateral $\conv(a_{++0}, a_{+-0}, a_{+0+}, a_{+0-})$ and $e_1$ is in the relative interior of the face $K\cap S_1$. Reproducing this in each octant,
we conclude that $K$ is contained in the parallelepiped $L$ delimited by the planes $S_i$ and $-S_i$, $1\le i\le3$. Let $u_i\in\partial K^\circ$ be such that $S_i=\{x\in \mathbb R^3; \langle x, u_i \rangle =1\}$. It follows that $K^{\circ}$ contains the octahedron $L^\circ= \conv(\pm u_1, \pm u_2,\pm u_3)$. Observe that $K\cap e_i^\perp= L\cap e_i^\perp$
and thus
$
P_i K^\circ= P_i L^{\circ},
$
for each $i=1,2,3$.
Let $H_i$ be the oriented plane spanned by $u_j, u_k$, with the positive side containing $u_i$, where $i, j, k$ are distinct indices, $i,j,k \in \{1,2,3\}$, and let $H_i^{\omega_i}$ be the half-space containing $\omega_i u_i$. Let
$$
K^{\circ}_\omega =K^{\circ}\cap H_1^{\omega_1} \cap H_2^{\omega_2}\cap H_3^{\omega_3} \mbox{ and } (\partial K^{\circ})_\omega =\partial K^{\circ}\cap H_1^{\omega_1} \cap H_2^{\omega_2}\cap H_3^{\omega_3}.
$$
We note that $P_i(u_j)$, $i\not = j$, is orthogonal to an edge of $K \cap e_i^\perp$, thus
$P_i(u_j)$ is a vertex of $P_i(K^\circ)$. Taking into account that $P_i(K^\circ)$ is a square, we get
$$
P_i(K^\circ)=P_i (\conv(\pm u_j, \pm u_k))=P_i(K^\circ \cap H_i),
$$ where $i, j, k$ are distinct indices, $i,j,k \in \{1,2,3\}$. Similarly to (\ref{eq:scalar}), from equation (\ref{wedge}) we get
\begin{equation}\label{eq:descr}
\langle V(K^\circ \cap H_i), e_i\rangle=|P_i(K^\circ)|,
\end{equation}
where $K^\circ\cap H_i$ is oriented in the direction of $u_i$.
As in the proof
of Theorem \ref{thm:main}, we use the equipartition of $K$ to get
$$
\frac{32}{3}=|K||K^{\circ}|=8\sum_{\omega} |K(\omega) | |K^{\circ}_\omega| \ge \frac{8}{9} \sum \langle V((\partial K)(\omega)), V((\partial K^{\circ})_\omega)\rangle,
$$
where the last inequality follows from Corollary \ref{coro}. Since $K^\circ_\omega$ is a convex body one has
\[
V((\partial K^{\circ})_\omega) =-\sum_{\omega'\in N(\omega)}V(K^\circ_\omega \overrightarrow{\cap} K^\circ_{\omega'}).
\]
We thus may continue as in the proof of Theorem \ref{thm:main} and finally get
$$
\frac{32}{3}=|K||K^{\circ}|=8\sum_{\omega} |K(\omega) | |K^{\circ}_\omega| \ge \frac{8}{9} \sum \langle V((\partial K)(\omega)), V((\partial K^{\circ})_\omega)\rangle\ge \frac{4}{9} \sum_{i=1}^3 |K \cap e_i^{\perp}| |P_i(K^{\circ})| =\frac{32}{3}.
$$
It follows that $\langle \frac{V((\partial K)(\omega))}{3 |K(\omega) |}, \frac{V((\partial K^{\circ})_\omega)}{3 |K^\circ_\omega |}\rangle =1$. Using Corollary \ref{coro} we define the points
\begin{equation}\label{eq:points}
y_\omega:= \frac{V((\partial K)(\omega))}{3 |K(\omega) |} \in\partial K^{\circ} \mbox{ and } x_\omega:=\frac{V((\partial K^{\circ})_\omega)}{3 |K^\circ_\omega |}\in \partial K,
\end{equation}
for all $\omega\in \{-1,1\}^3$. Since $[e_1,x_{+++}]\subset \partial K$, one has $x_{+++}\in S_1$. In the same way $x_{+++}\in S_2$ and $x_{+++}\in S_3$. Reproducing this in each octant and for each $x_\omega$, we conclude that $K=L$.\\
{\bf Case 4.} If for every $1\le i\le 3$, $K\cap e_i^\bot=\conv(\pm e_j; j\neq i)$, then $B_1^3=K$.\\
Since $B_1^3\subset K$ one has $K^\circ\subset B_\infty^3$ and $P_i(K^\circ)=P_i( B_\infty^3)$.
If there exists $\omega \in \{-1;1\}^3$ such that $\omega \in K^\circ$, then $K(\omega) =B_1^3(\omega)$ and, using the equipartition property of $K$, we get $K=B_1^3$. Assume, towards a contradiction, that $K^\circ \cap \{-1;1\}^3 =\emptyset$. Since $P_iK^\circ= [-1,1]^3\cap e_i^\perp$,
each {\it open} edge $(\varepsilon,\varepsilon')$ of the cube $[-1,1]^3$ contains at least
one point $C_{\varepsilon,\varepsilon'}\in \partial K^\circ$.
Using the symmetry of $K^\circ$, we may assume that the twelve selected points $C_{\varepsilon,\varepsilon'}$ are symmetric in pairs, that is, $C_{-\varepsilon,-\varepsilon'}=-C_{\varepsilon,\varepsilon'}$.
For each $i=1,2,3,$ consider the $4$ points $C_{\varepsilon,\varepsilon'}$ belonging to edges of $[-1,1]^3$ parallel to the direction $e_i$. They generate a plane $H_i$ passing through the origin. Thus, we have defined $3$ linearly independent planes $H_1, H_2$ and $H_3$ (note that linear independence follows from the fact that $H_i$ does not contain the vertices of the cube and passes through edges parallel to $e_i$ only). Those planes define a partition of $K^\circ$ into $8$ regions with non-empty interior, moreover,
$$
P_i(K^\circ)=P_i(K^\circ \cap H_i).
$$
We repeat verbatim the construction of Case 3, using the partition of $K^\circ$ by the planes $H_i$ and defining $H_i^{\omega_i}$ to be the half-space containing $\omega_i e_i$. Using Proposition \ref{ineq} and Corollary \ref{coro} we get that for each point $y\in \partial K^\circ$, for some $\omega\in \{-1;1\}^3$, one has
$[y,y_\omega]\subset (\partial K^\circ)_\omega$ (see (\ref{eq:points}) for the definition of $y_\omega$). Together with the fact that each face of $B_\infty^3$ intersects $K^\circ$ in a facet of $K^\circ$, this gives $B_\infty^3=K^\circ$, which contradicts our assumption that
$K^\circ \cap \{-1;1\}^3 =\emptyset$.
\end{proof}
\section{Stability}\label{sec:stability}
The goal of this section is to establish the stability of the three-dimensional Mahler inequality.
The {\it Banach-Mazur distance} between symmetric convex bodies $K$ and $L$ in $\mathbb R^n$ is defined as
$$d_{BM}(K,L) = \inf \{d\ge1 : L\subset TK\subset dL, \mbox{ for some } T\in{\rm GL}(n)\}.$$
We refer to \cite{AGM} for properties of the Banach-Mazur distance, in particular, for John's theorem, which states that $d_{BM}(K, B_2^n) \le \sqrt{n}$ for any symmetric convex body $K\subset \mathbb R^n$ and thus $d_{BM}(K, L) \le n$ for any pair of symmetric convex bodies $K,L \subset \mathbb R^n$.
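As a quick illustration (our sketch, not part of the proof), John's bound can be checked numerically for the pair $(B_\infty^n, B_2^n)$: every Euclidean unit vector lies in the cube, and the support function of the cube at a unit direction $u$ is $\sum_i|u_i|\le\sqrt n$ by Cauchy--Schwarz, so $B_2^n\subset B_\infty^n\subset\sqrt n\,B_2^n$ and hence $d_{BM}(B_\infty^n,B_2^n)\le\sqrt n$.

```python
import math
import random

def linf(x):
    return max(abs(t) for t in x)

def l2(x):
    return math.sqrt(sum(t * t for t in x))

random.seed(0)
n = 3
for _ in range(10_000):
    g = [random.gauss(0.0, 1.0) for _ in range(n)]
    r = l2(g)
    u = [t / r for t in g]                      # uniform random direction
    # B_2^n ⊂ B_∞^n: every Euclidean unit vector lies in the cube.
    assert linf(u) <= 1.0 + 1e-12
    # B_∞^n ⊂ √n B_2^n: the support function of the cube at u is Σ|u_i| ≤ √n.
    assert sum(abs(t) for t in u) <= math.sqrt(n) + 1e-12

# At u = (1,1,1)/√3 the cube's support function equals √3, so the factor
# √n cannot be improved for the cube.
u = [1 / math.sqrt(n)] * n
assert abs(sum(u) - math.sqrt(n)) < 1e-12
```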
\begin{thm}\label{thm:stability_BM}
There exists an absolute constant $C>0$, such that for every symmetric convex body $K \subset \mathbb R^3$ and $\delta>0$ satisfying
$\mathcal P(K) \leq (1+ \delta)\mathcal P(B_\infty^3)$, one has
$$
\min\{ d_{BM}(K, B_\infty^3), d_{BM}(K, B_1^3)\} \le 1+C\delta.
$$
\end{thm}
We start with a general simple lemma on compact metric spaces and continuous functions.
\begin{lemma}\label{lem:metric}
Let $(A,d)$ be a compact metric space, $(B,d')$ a metric space, $f:A\to B$ a continuous function and $D$ a closed subset of $B$. Then,
\begin{enumerate}
\item For any $\beta>0$ there exists $\alpha>0$ such that $d(x,f^{-1}(D))\ge\beta$ implies $d'(f(x),D)\ge\alpha$.
\item If there exist $c_1,c_2>0$ such that $d(x,f^{-1}(D))<c_1$ implies $d'(f(x), D)\ge c_2d(x,f^{-1}(D))$, then for some $C>0$, one has
$d(x,f^{-1}(D)) \le C d'(f(x), D), \ \forall x \in A.$
\end{enumerate}
\end{lemma}
\begin{proof} (1) Let $\beta>0$ be such that $A_\beta = \{x \in A: d(x, f^{-1}(D)) \ge \beta\}\neq\emptyset$. Then $A_\beta$ is compact and, since the function $x\mapsto d'(f(x), D)$ is continuous on $A_\beta$, it reaches its infimum $\alpha$ at some point $x_0\in A_\beta$. We conclude that for all $x\in A_\beta$ one has $d'(f(x), D)\ge\alpha=d'(f(x_0), D)>0$, since $x_0\notin f^{-1}(D)$.\\
(2) Consider two cases. First assume that $d(x,f^{-1}(D))<c_1$; then $d'(f(x), D) \ge c_2d(x,f^{-1}(D))$, and we may select $C=1/c_2$. Now assume
that $d(x,f^{-1}(D)) \ge c_1$. Then, using (1) with $\beta=c_1$, we get
$$
d'(f(x),D)\ge\alpha \ge \frac{\alpha}{{\rm diam}(A)} d(x,f^{-1}(D)),
$$
where ${\rm diam}(A)$ is the diameter of the metric space $A$ and we conclude with $C=\max\{1/c_2, \frac{{\rm diam}(A)}{\alpha} \}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:stability_BM}] The proof follows from Lemma \ref{lem:metric} together with the stability theorem proved in \cite{NazZva} and the equality cases proved in the previous section. Using the linear invariance of the volume product, together with John's theorem, we reduce to the case when $B_2^3\subseteq K \subseteq \sqrt{3} B_2^3$. Our metric space $A$ will be the set of such bodies with the Hausdorff metric $d_H$ (see, for example, \cite{Sc}). Let $B=\mathbb R$. Then $f:A \to B$, defined by $f(K)=\mathcal P(K)$, is continuous on $A$ (see for example \cite{FMZ}). Finally, let $D=\{\mathcal P(B_\infty^3)\}$. Using the description of the equality cases proved in the previous section we get that
$$
f^{-1}(D)=\{K\in A; \mathcal P(K)=\mathcal P(B_\infty^3)\}=\{K\in A; \exists\ S\in {\rm SO}(3); K=S B_\infty^3\ \hbox{or}\ K=\sqrt{3}SB_1^3\}.
$$
Note that $B_\infty^3$ is in John position (see for example \cite{AGM}) and thus if $ B_2^3 \subset T B_\infty^3 \subset \sqrt{3}B_2^3$ for some $T \in GL(3)$, then $T \in SO(3)$.
We show that the assumptions in the second part of Lemma \ref{lem:metric} are satisfied. As observed in \cite{BH}, the result of \cite{NazZva} may be stated in the following way: there exist constants $\delta_0(n), \beta_n>0$, depending on the dimension $n$ only, such that for every symmetric convex body $K \subset \mathbb R^n$ satisfying $d_{BM}(K, B_\infty^n) \le 1+ \delta_0(n)$ we get
$$
\P(K) \ge (1+\beta_n (d_{BM}(K, B_\infty^n)-1))\P(B_\infty^n).
$$
Using that $d_{BM}(K^\circ, L^\circ)= d_{BM}(K, L)$, we may restate the $3$ dimensional version of the above stability theorem in the following form: there are absolute constants $c_1, c_2 >0$ such that for every symmetric convex body $K \subset \mathbb R^3$ satisfying
$\min\{ d_{BM}(K, B_\infty^3), d_{BM}(K, B_1^3)\} =: 1+d \le 1+c_1,$
one has
$$
\P(K) \ge \P(B_\infty^3)+ c_2 d.
$$
To finish checking the assumption, note that for all convex bodies $K,L$ such that $B_2^3\subseteq K,L \subseteq \sqrt{3} B_2^3$:
\begin{eqnarray}\label{eq:dist}
d_{BM}(K, L)-1 \le \min_{T\in GL(3)} d_H (TK, L) \le \sqrt{3}(d_{BM}(K, L) -1 ).
\end{eqnarray}
Applying Lemma \ref{lem:metric} we deduce that there exists $C>0$ such that for all $K$ with $B_2^3\subseteq K \subseteq \sqrt{3} B_2^3$:
\[
\min_{S\in SO(3)}\min(d_H(K,SB_\infty^3), d_H(K,S\sqrt{3}B_1^3))\le C|\P(K)-\P(B_\infty^3)|.
\]
Using (\ref{eq:dist}) we conclude the proof.
\end{proof}
\begin{remark} The same method as in the proof of Theorem \ref{thm:stability_BM}, i.e.\ applying Lemma \ref{lem:metric} together with the known equality cases and the results from \cite{Kim, NazZva}, can be used to present shorter proofs of the stability theorems given in \cite{BH, ZvK}.
\end{remark}
| {
"timestamp": "2021-01-21T02:18:53",
"yymm": "1904",
"arxiv_id": "1904.10765",
"language": "en",
"url": "https://arxiv.org/abs/1904.10765",
"abstract": "Following ideas of Iriyeh and Shibata we give a short proof of the three-dimensional Mahler conjecture {\\mf for symmetric convex bodies}. Our contributions include, in particular, simple self-contained proofs of their two key statements. The first of these is an equipartition (ham sandwich type) theorem which refines a celebrated result of Hadwiger and, as usual, can be proved using ideas from equivariant topology. The second is an inequality relating the product volume to areas of certain sections and their duals. We observe that these ideas give a large family of convex sets in every dimension for which the Mahler conjecture holds true. Finally we give an alternative proof of the characterization of convex bodies that achieve the equality case and establish a {\\mf new} stability result.",
"subjects": "Metric Geometry (math.MG)",
"title": "Equipartitions and Mahler volumes of symmetric convex bodies",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986151391819461,
"lm_q2_score": 0.849971181358171,
"lm_q1q2_score": 0.8382002635027918
} |
https://arxiv.org/abs/math/0510568 | Winning rate in the full-information best choice problem | Following a long-standing suggestion by Gilbert and Mosteller, we derive an explicit formula for the asymptotic winning rate in the full-information problem of the best choice. | \section{Introduction}
Let $X_1,X_2\ldots$ be a sequence of independent uniform $[0,1]$ random variables.
The full-information best choice problem, as introduced
by Gilbert and Mosteller \cite{GM},
asks one to find a stopping rule $\tau_n$ to maximise the probability
\begin{equation}\label{stop}
P_n(\tau):=\mathbb P(X_{\tau}=\max(X_1,\ldots,X_n))
\end{equation}
over all stopping rules $\tau\leq n$ adapted to the sequence $(X_i)$.
The name `full information' was attached to the problem to stress that the observer
learns the exact values of $X_i$'s and knows their distribution, in contrast
to the `no information' problem where only the relative ranks of observations are available
(see \cite{SamSurv} for a survey and history of the best choice or `secretary' problems).
Because the stopping criterion (\ref{stop}) depends only on ranks of the observations,
the instance of the uniform distribution covers, in fact, the general case of sampling from an arbitrary
continuous distribution.
\par Gilbert and Mosteller showed that
the optimal stopping rule is
of the form
$$\tau_n=\min\{i:\,X_i=\max(X_1,\ldots,X_i)~{\rm and~}X_i\geq d_{n-i}\},$$
where $d_k$ is a sequence of decision numbers defined by the equation
\begin{equation}\label{d}
\sum_{j=1}^k (d_k^{-j}-1)/j=1 {~~\rm for~~}k\geq 1,~~{\rm and~~}d_0=0.
\end{equation}
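The decision numbers are easy to compute, since the left-hand side of (\ref{d}) is strictly decreasing in $d_k$. The sketch below (ours; the function names are only for illustration) solves (\ref{d}) by bisection and then Monte-Carlo-simulates the rule $\tau_n$, reproducing a winning probability close to $0.58$:

```python
import math
import random

def decision_number(k, tol=1e-12):
    """Solve sum_{j=1}^k (d^{-j} - 1)/j = 1 for d_k in (0,1) by bisection.

    The left-hand side is strictly decreasing in d, so bisection applies.
    """
    if k == 0:
        return 0.0
    def f(d):
        return sum((d ** -j - 1.0) / j for j in range(1, k + 1)) - 1.0
    lo, hi = 0.25, 1.0 - 1e-12       # f(lo) > 0 > f(hi), and d_k >= 1/2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

assert abs(decision_number(1) - 0.5) < 1e-9                      # 1/d - 1 = 1
assert abs(decision_number(2) - (1 + math.sqrt(6)) / 5) < 1e-9   # root of 5d² - 2d - 1

def simulate(n, trials):
    """Monte Carlo estimate of the winning probability of the rule tau_n."""
    d = [decision_number(k) for k in range(n)]
    wins = 0
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]
        best, running, chosen = max(xs), 0.0, None
        for i, x in enumerate(xs, start=1):
            if x > running:                  # a new running maximum (a record)
                running = x
                if x >= d[n - i]:            # stop at a record above d_{n-i}
                    chosen = x
                    break
        wins += (chosen == best)
    return wins / trials

random.seed(1)
p_hat = simulate(n=200, trials=20_000)
print(p_hat)   # close to the limiting value 0.580164...
```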
They also proved that $d_k\uparrow 1$ in such a way that
$k(1-d_k)\to c$, where $c=0.804\ldots$ is the solution to the transcendental equation
\begin{equation}\label{c}
\int_0^c x^{-1}({e^x-1})\,{\rm d}x=1\,,
\end{equation}
and they provided numerical evidence that the optimal probability of the best choice
$P_n^*:=P_n(\tau_n)$
converges to a limit $P^*=0.580164\ldots$ The limiting value was justified by different methods
in the subsequent work \cite{BG, GnFI, SamUn, SamSurv}
along with the explicit formula
\begin{equation}\label{limpr}
P^*={\rm e}^{-c}+({\rm e}^c-c-1)\int_1^\infty {\rm e}^{-cx}x^{-1}\,{\rm d} x\,
\end{equation}
due to Samuels \cite{SamUn}.
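Both the constant $c$ and the probability (\ref{limpr}) are easy to evaluate numerically. The sketch below (ours) uses $\int_0^c x^{-1}(e^x-1)\,{\rm d}x=\sum_{k\ge1}c^k/(k\cdot k!)$ to find $c$ by bisection, then evaluates (\ref{limpr}) by quadrature, recovering Samuels' value:

```python
import math

def ein(c, terms=60):
    """∫_0^c (e^x - 1)/x dx = Σ_{k≥1} c^k/(k·k!), an entire function of c."""
    return sum(c ** k / (k * math.factorial(k)) for k in range(1, terms + 1))

# ein is strictly increasing, with ein(0) = 0 and ein(1) > 1, so the
# root of ein(c) = 1 lies in (0, 1) and bisection applies.
lo, hi = 0.0, 1.0
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    if ein(mid) < 1.0:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
print(c)   # ≈ 0.804352

def tail_integral(c, upper=80.0, n=200_000):
    """∫_1^∞ e^{-cx}/x dx by the midpoint rule; the tail beyond `upper` is negligible."""
    h = (upper - 1.0) / n
    return h * sum(math.exp(-c * (1.0 + (k + 0.5) * h)) / (1.0 + (k + 0.5) * h)
                   for k in range(n))

p_star = math.exp(-c) + (math.exp(c) - c - 1.0) * tail_integral(c)
print(p_star)   # ≈ 0.580164
```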
\par Refinements and generalisations of the results of \cite{GM} appeared in
\cite{GnFI, GnBC, payoff, HillKennedy, Pet, Por1, Sam-Cahn}. Still,
one interesting feature of the optimal stopping rule seems to have not been
discussed in the literature.
We mean the tiny
Section 3e in \cite{GM} where Gilbert and Mosteller say:
``One would correctly anticipate that as $n$ increases, the probability of winning at a given draw tends to zero.
On the other hand,
$\,n\,\mathbb P({\rm win~at~draw}~i)$ tends to a constant for
$i/n$ tending to a constant $\lambda\,$".
Spelled out in detail, Gilbert and Mosteller claimed existence of the limit
\begin{equation}\label{conv}
w(t)=\lim_{i,n\to\infty,\,i/n\to t}
n\,\mathbb P(\tau_n=i,\,X_i=\max(X_1,\ldots,X_n))
\end{equation}
where $t\in [0,1]$ stands for their $\lambda$. Such a function may be called the asymptotic
{\it winning rate}, since it tells us how the chance of correctly recognising the maximum
is distributed over time; hence
the total probability of the best choice must satisfy
$$P^*=\int_0^1 w(t)\,{\rm d} t\,.$$
In this paper, we prove the conjecture of \cite{GM} regarding the convergence
and we derive an explicit formula for the winning rate (\ref{conv}). In fact, we show more: the function $w$ appears as the
exact winning rate in a continuous-time
version of the best choice problem associated with a planar Poisson process, as developed in
\cite{GnFI, GnBC, Sam3}.
\section{The Poisson framework}
We start by recalling the setup from \cite{GnFI, GnBC}. Consider a homogeneous planar Poisson process (PPP) in the
semi-infinite strip
$R=[0,1]\times \,]-\infty,0]$, with Lebesgue measure as intensity.
The generic atom $a=(t,x)\in R$ of the PPP is understood as score $x$
observed at time $t$.
Let ${\cal F}=({\cal F}_t, ~t\in [0,1])$ be the filtration with ${\cal F}_t$ the $\sigma$-algebra generated by
the PPP restricted to $[0,t]\times \,]-\infty,0]$.
We say that an atom $a=(t,x)$ of the PPP is a {\it record} if there are no
other PPP-atoms north-west of $a$. The maximum of the PPP is an atom $a^*=(t^*,x^*)$ with the largest $x$-value.
Alternatively, the maximum $a^*$ can be defined as the last record of the PPP, that is the record with the largest $t$-value.
For $\tau$ an ${\cal F}$-adapted stopping rule with values in $[0,1]$,
the performance of $\tau$ is defined as the probability of
the event $\{\tau=t^*\}$, interpreted as the best choice from the PPP.
The associated best choice problem amounts to maximising the probability of this event.
\par In a poissonised version of the Gilbert-Mosteller problem the observations sampled from
the $[0,1]$ uniform distribution arrive
on $[0,\ell]$ at epochs of a rate 1 Poisson process \cite{BG, Bojd, GnSak, Sak}. This is equivalent to
the PPP setup with background space $[0,\ell]\times[0,1]$, which can be
mapped linearly onto $[0,1]\times[-\ell,0]$ so that the componentwise order of points is preserved.
Now, the optimal stopping in
$[0,1]\times[-\ell,0]$
fits in the framework with the background space $R$
by a minor modification of the
stopping criterion:
a stopping rule $\tau$ adapted to $\cal F$ is evaluated
by the probability of the event $\{\tau=t^*,\, x^*>-\ell\}$ that stopping occurs at the maximum atom
and above $-\ell$. In this sense we shall speak of a {\it constrained} best choice problem.
\par Let $\Gamma=\{(t,x)\in R: -x(1-t)<c\}$ where $c$ is as in (\ref{c}).
It is known \cite{GnFI} that the optimal stopping rule is the first time (if any)
when the record process enters $\Gamma$, that is
$$\tau^*=\min\{t: {\rm~ there~is~a~record~}a=(t,x)\in \Gamma\}$$
(or $\tau^*=1$ if no such $t\in [0,1[$ exists).
Similarly, the optimal stopping rule for the constrained problem is the first time (if any)
when the record process enters $\Gamma(\ell):=\Gamma\cap ([0,1]\times[-\ell,0])$.
\par Let
$$g(\ell,t):=\mathbb P(\tau^*=t^*,\, t^*<t,\, x^*>-\ell)$$
be the probability that $\tau^*$ wins by stopping above $-\ell$ and before $t$ and let
$$g(\infty,t):=\mathbb P(\tau^*=t^*,\, t^*<t).$$
By the above relation between the constrained and unconstrained problems we have
$$g(\ell,t)=g(\infty,t)~~~{\rm for~~~}0\leq t\leq (1-c/\ell)_+\,.$$
The {\it winning rate} in the Poisson problem is defined as
$$w(t)=\partial_t\, g(\infty,t).$$
\section{Computing the rate}
Because before time $t$ the stopping region $\Gamma(\ell)$ lies above the level $-c/(1-t)$, we have
$\partial_\ell\,g(\ell,t)=0$ and
$g(\infty,t)=g(c/(1-t),t)$ for $\ell>c/(1-t)$. To determine $\partial_\ell\,g(\ell,t)$ for $\ell<c/(1-t)$
consider two rectangles $R_1=[0,t]\times[-\ell,0]$ and $R_2=[0,t]\times[-\ell+\delta,0]$ with small $\delta>0$.
The optimal constrained stopping rules in $R_1$ and $R_2$ stop before $t$ at distinct atoms if and only if
the record process enters $\Gamma({\ell})$ at some atom $a_0=(\sigma,\xi)\in [(1-c/\ell)_+\,,t]\times
[-\ell,-\ell+\delta]$. Then stopping at $a_0\in R_1\setminus R_2$ is a win if $a_0=a^*$, which occurs with probability
$$p_1={\rm e}^{-\ell}\ell(t-(1-c/\ell)_+){\delta\over\ell}+o(\delta)=
{\rm e}^{-\ell}(t-(1-c/\ell)_+){\delta}+o(\delta).$$
On the other hand, stopping in $R_2$ is a win (and stopping at $a_0$ is a loss) if $a_0$ is followed by some
$k>0$ atoms in $[\sigma,1]\times[-\ell,0]$, the leftmost of these $k$ atoms appears within $[\sigma,t]\times[-\ell,0]$
and it is the overall maximum $a^*$
which is an event of probability
$$p_2={\rm e}^{-\ell}\sum_{k=1}^\infty{c^{k+1}\over(k+1)!}\left[
1-(k+1){(t-(1-c/\ell)_+)\over c/\ell}\,\,{(1-t)^k\over (c/\ell)^k}-{(1-t)^{k+1}\over (c/\ell)^{k+1}}
\right]{1\over k}\,{\delta\over \ell}+o(\delta).$$
It follows that
$$\partial_\ell\,g(\ell,t)=\lim_{\delta\to 0}{p_1-p_2\over \delta}={\rm e}^{-\ell}\int\limits_{(1-c/\ell)_+}^t
\left(
1-\sum_{k=1}^\infty\left[{\ell^k(1-\sigma)^k\over k!\,k}-{\ell^k(1-t)^k\over k!\,k}
\right]
\right){\rm d}\sigma\,.$$
Now, computing the mixed second derivative $\partial_{\ell\,t} \,g(\ell,t)$ and integrating
in $\ell$ from $0$ to $c/(1-t)$ we obtain the winning rate in the Poisson problem, which is our main result.
\begin{proposition} The winning rate is given by the formula
\begin{equation}\label{w}
w(t)=-{\rm e}^{-c}+{{\rm e}^{-ct}-{\rm e}^{-ct/(1-t)}\over t}+
{{\rm e}^{-ct}-t{\rm e}^{-c}\over 1-t}+
{c\over 1-t}
\left\{ I\left({c\over1-t}\,,\,c\right)-I\left({ct\over 1-t}\,,\,ct\right)\right\} ,
\end{equation}
where $c$ is as in {\rm (\ref{c})} and for $0<s<t$
$$I(t,s)=\int_s^t \xi^{-1}{{\rm e}^{-\xi}}\,{\rm d}\xi\,.$$
\end{proposition}
\noindent
The boundary values of $w$ are $w(0)=1-{\rm e}^{-c}=0.5526\ldots$ and $w(1)={\rm e}^{-c}=0.4473\ldots$,
in accordance with \cite[Fig.~3]{GM}.
A {\tt Mathematica}-drawn graph of {\rm (\ref{w})} exhibits a curve identical to that in
\cite[Fig.~3]{GM}.
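Formula (\ref{w}) can also be checked numerically. The sketch below (ours, with the value of $c$ hard-coded) evaluates $w$ with $I$ computed by quadrature, confirms the boundary values, and verifies that $w$ integrates over $[0,1]$ to $P^*\approx 0.580164$:

```python
import math

C = 0.8043522  # the constant c, precomputed from the transcendental equation

def I(b, a, n=4000):
    """I(b, a) = ∫_a^b e^{-ξ}/ξ dξ by Simpson's rule (tail beyond 60 is negligible)."""
    b = min(b, 60.0)
    if b <= a:
        return 0.0
    h = (b - a) / n
    s = math.exp(-a) / a + math.exp(-b) / b
    for k in range(1, n):
        xi = a + k * h
        s += (4.0 if k % 2 else 2.0) * math.exp(-xi) / xi
    return s * h / 3.0

def w(t):
    """The winning rate of the Proposition, for 0 < t < 1."""
    e = math.exp
    return (-e(-C)
            + (e(-C * t) - e(-C * t / (1 - t))) / t
            + (e(-C * t) - t * e(-C)) / (1 - t)
            + C / (1 - t) * (I(C / (1 - t), C) - I(C * t / (1 - t), C * t)))

# Boundary values w(0+) = 1 - e^{-c} and w(1-) = e^{-c}:
assert abs(w(1e-4) - (1 - math.exp(-C))) < 1e-3
assert abs(w(1 - 1e-4) - math.exp(-C)) < 1e-3

# The rate integrates to the optimal probability P* (midpoint rule on (0,1)):
N = 400
total = sum(w((k + 0.5) / N) for k in range(N)) / N
print(total)   # ≈ 0.5802
```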
\par The special value (\ref{c}) of $c$ was not used in the argument; hence
the right-hand side of formula (\ref{w}) gives the winning rate for
every stopping rule defined
by a stopping region like $\Gamma$, with an arbitrary positive constant in place of $c$.
We also note that
the winning rate in the constrained problem coincides with $w(t)$ for $t<(1-c/\ell)_+$.
\section{Embedding and convergence}
It remains to show that $w$ given by (\ref{w}) is indeed the limiting value for the finite-$n$ problem
in (\ref{conv}).
To that end, we will exploit the embedding technique from \cite{GnFI}.
\par With $n$ fixed, divide $R$ into strips $J_i=[(i-1)/n, \,i /n[\,\,\times\,\,]\!-\infty,0]$,
$i=1,\ldots,n$. Consider a sequence
${\cal Y}_n=((T_i,Y_i), \,i=1,\ldots,n)$ where $(T_i,Y_i)$ is an atom with the largest
$x$-component within the strip $J_i$.
Observe that the point process of records in ${\cal Y}_n$ is
a subset of the set of records of the PPP in $R$,
in particular $\max Y_i=x^*$.
By homogeneity of the PPP we have
$T_1,Y_1,\ldots,T_n,Y_n$
jointly independent, with each $T_i$ uniformly distributed on $[(i-1)/n, \,i /n[$
and each $Y_i$ exponentially distributed on $]-\infty,0]$ with rate $1/n$.
It follows that the discrete-time optimal stopping problem of recognising the maximum in ${\cal Y}_n$
is equivalent to the Gilbert-Mosteller problem with exponentially distributed observations.
\par Let $\hat{\tau}_n$ be the optimal stopping rule for recognising the maximum in ${\cal Y}_n$.
We shall view $\hat{\tau}_n$ as a strategy
for choosing the maximum of PPP with the additional option of
{\it partial return} meaning that $\hat{\tau}_n$ assumes values in $[0,1]$, that
$$\{(i-1)/n< \hat{\tau}_n\leq i/n\}\in {\cal F}_{i/n}$$
and that $\{(i-1)/n< \hat{\tau}_n\leq i/n\}$ is associated with the stopping at $(T_i,Y_i)$.
Explicitly, $\hat{\tau}_n$ stops at the first time the sequence of ${\cal Y}_n$-records enters
$$\Gamma_n=\bigcup\limits_{i=1}^n ~](i-1)/n,i/n]\times [b_{n-i},0],$$
where $b_k=n\log d_k$ and the $d_k$'s are the decision numbers as in (\ref{d}).
The partial return option
implies that the winning chance of $\hat{\tau}_n$ is higher than that of
$\tau^*$.
\par Let $a'$ be the last record before $a^*$. One checks easily that $\tau^*$ and $\hat{\tau}_n$ may differ
only if either $a^*$ or $a'$ hits the domain
$$\Delta_n:=(\Gamma_n\setminus\Gamma) \cup(\Gamma\setminus\Gamma_n).$$
By \cite[Equation (11)]{GnFI} we have $((i-1)/n,b_{n-i})\not\in\Gamma$ and
$(i/n,b_{n-i})\in\Gamma$ for $i=1,\ldots,n$. This combined with the fact that
the distribution of $t^*$ is uniform and that of $x^*$ is exponential yields
$$n\,\mathbb P(a^*\in \Delta_n\cap J_i)<\exp\left({-nc\over n-i+1}\right)-\exp\left({-nc\over n-i}\right)=O(n^{-1})$$
uniformly in $i\leq n$. A similar estimate holds also for $a'$, and because
$$\mathbb P((i-1)/n<\hat{\tau}_n\leq i/n)=\mathbb P(\tau_n=i), ~~~w(i/n)=\mathbb P(\tau^*=t^*, a^*\in J_i)+O(n^{-1})$$
(the second since $w$ is smooth on $[0,1]$)
we conclude:
\begin{proposition} As $n\to\infty$ the optimal stopping rule $\tau_n$ satisfies
$$
\max_{1\leq i\leq n}|w(i/n)-n\,\mathbb P(\tau_n=i,~X_{i}=\max(X_1,\ldots,X_n))|=O(n^{-1}),
$$
where $w$ is given by {\rm (\ref{w})}.
\end{proposition}
\vskip0.5cm
% https://arxiv.org/abs/2205.00879
\title{An invitation to formal power series}
\begin{abstract}
This is a lecture on the theory of formal power series developed entirely without any analytic machinery. Combining ideas from various authors we are able to prove Newton's binomial theorem, Jacobi's triple product, the Rogers--Ramanujan identities and many other prominent results. We apply these methods to derive several combinatorial theorems including Ramanujan's partition congruences, generating functions of Stirling numbers and Jacobi's four-square theorem. We further discuss formal Laurent series and multivariate power series and end with a proof of MacMahon's master theorem.
\end{abstract}
\section{Introduction}
In a first course on abstract algebra students learn the difference between polynomial (real-valued) functions familiar from high school and formal polynomials defined over arbitrary fields. In courses on analysis they learn further that certain “well-behaved” functions possess a Taylor series expansion, i.\,e. a power series which converges in a neighborhood of a point. On the other hand, only specialized courses cover the formal world of power series where no convergence questions are asked.
The purpose of these expository notes is to give a far-reaching introduction to formal power series without appealing to any analytic machinery (we only use an elementary discrete metric). In doing so, we go well beyond a dated account undertaken by Niven~\cite{Niven} in 1969 (for instance, Niven cites Euler's pentagonal theorem without proof). An alternative approach with different emphases can be found in Tutte~\cite{Tutte1,Tutte2}.
To illustrate the usefulness of formal power series we offer several combinatorial applications including some deep partition identities due to Ramanujan and others. This challenges the statement “While the formal analogies with ordinary calculus are undeniably beautiful, strictly speaking one can’t go much beyond Euler that way…” from the introduction of the recent book by Johnson~\cite{Johnson}.
While most proofs presented here are not new, they are scattered in the literature spanning five decades and cannot be found in a unified treatment to my knowledge. Our main source of inspiration is the accessible book by Hirschhorn~\cite{Hirschhorn} (albeit based on analytic reasoning) in combination with numerous articles cited when appropriate. The work on these notes was initiated by lectures on combinatorics and discrete mathematics at the universities of Jena and Hannover.
I hope that the present notes may serve as the basis of seminars for undergraduate and graduate students alike. The prerequisites do not go beyond a basic abstract algebra course (from \autoref{seclaurent} on, some knowledge of algebraic and transcendental field extensions is assumed).
The material is organized as follows: In the upcoming section we define the ring of formal power series over an arbitrary field and discuss its basic properties. Thereafter, we introduce our toolkit consisting of compositions, derivations and exponentiations of power series.
In \autoref{secmain} we first establish the binomial theorems of Newton and Gauss and later obtain Jacobi's famous triple product identity, Euler's pentagonal number theorem and the Rogers--Ramanujan identities. In the subsequent section we apply the methods to combinatorial problems to obtain a number of generating functions. Most notably, we prove Ramanujan's partition congruences (modulo $5$ and $7$) as well as his so-called “most beautiful” formula. Another section deals with Stirling numbers, permutations, Faulhaber's formula and the Lagrange--Jacobi four-square theorem. In \autoref{seclaurent} we consider formal Laurent series in order to prove the Lagrange--Bürmann inversion formula and Puiseux' theorem on the algebraic closure.
In the following section, multivariate power series enter the picture. We give proofs of identities of Vieta, Girard--Newton and Waring on symmetric polynomials. We continue by developing multivariate versions of Leibniz' differentiation rule, Faà di Bruno's rule and the inverse function theorem.
In the final section we go somewhat deeper by taking matrices into account. After establishing the Lagrange--Good inversion formula, we culminate by proving MacMahon's master theorem. Along the way we indicate analytic counterparts, connections to other areas and insert a few exercises.
\section{Definitions and basic properties}
The sets of positive and non-negative integers are denoted by $\mathbb{N}=\{1,2,\ldots\}$ and $\mathbb{N}_0=\{0,1,\ldots\}$ respectively.
\begin{Def}\hfill
\begin{enumerate}[(i)]
\item
The letter $K$ will always denote a (commutative) field. In this section there are no requirements on $K$, but at later stages we need that $K$ has characteristic $0$ or contains some roots of unity. At this point we often replace $K$ by $\mathbb{C}$ for convenience (and not for making analytic arguments available).
\item
A (formal) \emph{power series} over $K$ is just an infinite sequence $\alpha=(a_0,a_1,\ldots)$ with \emph{coefficients} $a_0,a_1,\ldots\in K$.
The set of power series forms a $K$-vector space denoted by $K[[X]]$ with respect to the familiar componentwise operations:
\[\alpha+\beta:=(a_0+b_0,a_1+b_1,\ldots),\qquad \lambda\alpha:=(\lambda a_0,\lambda a_1,\ldots),\]
where $\beta=(b_0,b_1,\ldots)\in K[[X]]$ and $\lambda\in K$.
We identify the elements $a\in K$ with the \emph{constant} power series $(a,0,\ldots)$. In general we call $a_0$ the \emph{constant term} of $\alpha$ and set $\inf(\alpha):=\inf\{n\in\mathbb{N}_0:a_n\ne 0\}$ with $\inf(0)=\inf\varnothing=\infty$ (as a group theorist I avoid calling $\inf(\alpha)$ the order of $\alpha$ as in many sources).
\item To motivate a multiplication on $K[[X]]$ we introduce an \emph{indeterminate} $X$ and its powers
\[X^0:=1=(1,0,\ldots),\qquad X=X^1=(0,1,0,\ldots),\qquad X^2=(0,0,1,0,\ldots),\qquad\ldots.\]
We can now formally write
\[\alpha=\sum_{n=0}^\infty a_nX^n.\]
If there exists some $d\in\mathbb{N}_0$ with $a_n=0$ for all $n>d$, then $\alpha$ is called a (formal) \emph{polynomial}.
The smallest $d$ with this property is the \emph{degree} $\deg(\alpha)$ of $\alpha$ (by convention $\deg(0)=-\infty$). In this case, $a_{\deg(\alpha)}$ is the \emph{leading coefficient} and $\alpha$ is called \emph{monic} if $a_{\deg(\alpha)}=1$.
The set of polynomials (inside $K[[X]]$) is denoted by $K[X]$.
\item
We borrow from the usual multiplication of polynomials (sometimes called \emph{Cauchy product} or \emph{discrete convolution}) to define
\[\boxed{\alpha\cdot \beta:=\sum_{n=0}^\infty\Bigl(\sum_{k=0}^na_kb_{n-k}\Bigr)X^n}\]
for arbitrary $\alpha,\beta\in K[[X]]$ as above.
\end{enumerate}
\end{Def}
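Working with truncated coefficient lists, the Cauchy product is a one-line double sum. A minimal illustrative sketch (not part of the text):

```python
def cauchy_product(a, b):
    """Cauchy product of two power series given as coefficient lists;
    coefficients 0..n-1 are exact when both inputs carry n coefficients."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

# (1 - X) * (1 + X + X^2 + ...) = 1, checked up to order 9:
print(cauchy_product([1, -1] + [0] * 8, [1] * 10))  # [1, 0, 0, ..., 0]
```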
Note that $1,X,X^2,\ldots$ is a $K$-basis of $K[X]$, but not of $K[[X]]$. Indeed, $K[[X]]$ has no countable basis.
Against a popular trend to rename $X$ to $q$ (as in \cite{Hirschhorn}), we always keep $X$ as “formal” as possible.
\begin{Lem}\label{intdom}
With the above defined addition and multiplication $(K[[X]],+,\cdot)$ is an integral domain with identity $1$, i.\,e. $K[[X]]$ is a commutative ring such that $\alpha\cdot\beta\ne 0$ for all $\alpha,\beta\in K[[X]]\setminus\{0\}$. Moreover, $K$ and $K[X]$ are subrings of $K[[X]]$.
\end{Lem}
\begin{proof}
Most axioms follow from the definition in a straightforward manner. To prove the associativity of $\cdot$, let $\alpha=(a_0,\ldots)$, $\beta=(b_0,\ldots)$ and $\gamma=(c_0,\ldots)$ be power series. The $n$-th coefficient of $\alpha\cdot(\beta\cdot\gamma)$ is
\[\sum_{i=0}^na_i\sum_{j=0}^{n-i}b_jc_{n-i-j}=\sum_{i+j+k=n}a_ib_jc_k=\sum_{i=0}^n\Bigl(\sum_{j=0}^ia_jb_{i-j}\Bigr)c_{n-i},\]
which happens to be the $n$-th coefficient of $(\alpha\cdot\beta)\cdot\gamma$.
Now let $\alpha\ne 0\ne\beta$ with $k:=\inf(\alpha)$ and $l:=\inf(\beta)$. Then the $(k+l)$-th coefficient of $\alpha\cdot\beta$ is $\sum_{i=0}^{k+l}a_ib_{k+l-i}=a_kb_l\ne 0$. In particular, $\inf(\alpha\cdot\beta)=\inf(\alpha)+\inf(\beta)$ and $\alpha\cdot\beta\ne 0$.
Since $K\subseteq K[X]\subseteq K[[X]]$ and the operations agree in these rings, it is clear that $K$ and $K[X]$ are subrings of $K[[X]]$ (with the same neutral elements).
\end{proof}
The above proof does not require $K$ to be a field. It works more generally for integral domains and this is needed later in \autoref{defmulti}.
From now on we will usually omit the multiplication symbol $\cdot$ and apply multiplications always before additions. For example, $\alpha\beta-\gamma$ is shorthand for $(\alpha\cdot\beta)+(-\gamma)$.
Moreover, we often omit the summation index in writing $\sum a_nX^n$ if it is clear from the context.
The scalar multiplication is compatible with the ring multiplication, i.\,e. $\lambda(\alpha\beta)=(\lambda\alpha)\beta=\alpha(\lambda\beta)$ for $\alpha,\beta\in K[[X]]$ and $\lambda\in K$. This turns $K[[X]]$ into a $K$-algebra.
\begin{Ex}\label{bsppower}\hfill
\begin{enumerate}[(i)]
\item The following power series can be defined for any $K$:
\[1-X,\qquad\sum_{n=0}^\infty X^n,\qquad\sum nX^n,\qquad\sum(-1)^nX^n.\]
We compute
\[(1-X)\sum_{n=0}^\infty X^n=\sum_{n=0}^\infty X^n-\sum_{n=1}^\infty X^n=1.\]
\item For a field $K$ of characteristic $0$ (like $K=\mathbb{Q}$, $\mathbb{R}$ or $\mathbb{C}$) we can define the \emph{formal exponential series}
\[\boxed{\exp(X):=\sum_{n=0}^\infty\frac{X^n}{n!}=1+X+\frac{X^2}{2}+\frac{X^3}{6}+\ldots\in K[[X]].}\]
We will never write $e^X$ for the exponential series, since Euler's number $e$ simply does not live in the formal world.
\end{enumerate}
\end{Ex}
\begin{Def}\hfill
\begin{enumerate}[(i)]
\item We call $\alpha\in K[[X]]$ \emph{invertible} if there exists some $\beta\in K[[X]]$ such that $\alpha\beta=1$. As usual, $\beta$ is uniquely determined and we write $\alpha^{-1}:=1/\alpha:=\beta$.
As in any ring, the invertible elements form the group of units denoted by $K[[X]]^\times$.
\item For $\alpha,\beta,\gamma\in K[[X]]$ we write more generally $\alpha=\frac{\beta}{\gamma}$ if $\alpha\gamma=\beta$ (regardless whether $\gamma$ is invertible or not).
For $k\in\mathbb{N}_0$ let $\alpha^k:=\alpha\ldots\alpha$ with $k$ factors and $\alpha^{-k}:=(\alpha^{-1})^k$ if $\alpha\in K[[X]]^\times$.
\item For $\alpha\in K[[X]]$ let $(\alpha):=\bigl\{\alpha\beta:\beta\in K[[X]]\bigr\}$ the principal ideal generated by $\alpha$.
\end{enumerate}
\end{Def}
The reader may know that every ideal of $K[X]$ is principal. The next lemma implies that every proper ideal of $K[[X]]$ is a power of $(X)$ (hence, $K[[X]]$ is a discrete valuation ring with unique maximal ideal $(X)$).
\begin{Lem}\label{leminv}
Let $\alpha=\sum a_nX^n\in K[[X]]$. Then the following holds
\begin{enumerate}[(i)]
\item $\alpha$ is invertible if and only if $a_0\ne 0$. Hence, $K[[X]]^\times=K[[X]]\setminus(X)$.
\item If there exists some $m\in\mathbb{N}$ with $\alpha^m=1$, then $\alpha\in K$. Hence, the elements of finite order in $K[[X]]^\times$ lie in $K^\times$.
\end{enumerate}
\end{Lem}
\begin{proof}\hfill
\begin{enumerate}[(i)]
\item Let $\beta=\sum b_nX^n\in K[[X]]$ such that $\alpha\beta=1$. Then $a_0b_0=1$ and $a_0\ne 0$. Assume conversely that $a_0\ne 0$. We define $b_0,b_1,\ldots\in K$ recursively by $b_0:=1/a_0$ and
\[b_k:=-\frac{1}{a_0}\sum_{i=1}^{k}a_ib_{k-i}\in K\]
for $k\ge 1$. Then
\[\sum_{i=0}^ka_ib_{k-i}=\begin{cases}
1&\text{if }k=0,\\
0&\text{if }k>0.
\end{cases}\]
Hence, $\alpha\beta=1$ where $\beta:=\sum b_nX^n$.
\item We may assume that $m>1$. For any prime divisor $p$ of $m$ it holds that $(\alpha^{m/p})^p=1$. Thus, by induction on $m$, we may assume that $m=p$. By way of contradiction, suppose $\alpha\notin K$ and let $n:=\min\{k\ge 1:a_k\ne 0\}$. The $n$-th coefficient of $\alpha^p=1$ is $pa_0^{p-1}a_n=0$. Since $\alpha$ is invertible (indeed $\alpha^{-1}=\alpha^{m-1}$), we know $a_0\ne 0$ and conclude that $p=0$ in $K$ (i.\,e. $K$ has characteristic $p$).
Now we investigate the coefficient of $X^{np}$ in $\alpha^p$. Obviously, it only depends on $a_0,\ldots,a_{np}$.
Since $p$ divides $\binom{p}{k}=\frac{p(p-1)\ldots(p-k+1)}{k!}$ for $0<k<p$, the binomial theorem yields $(a_0+a_1X)^p=a_0^p+a_1^pX^p$. This familiar rule extends inductively to any finite number of summands. Hence,
\[(a_0+\ldots+a_{np}X^{np})^p=a_0^p+a_n^pX^{np}+a_{n+1}^pX^{(n+1)p}+\ldots+a_{np}^{p}X^{np^2}.\]
In particular, the $np$-th coefficient of $\alpha^p$ is $a_n^p\ne 0$; a contradiction to $\alpha^p=1$.\qedhere
\end{enumerate}
\end{proof}
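The recursion in part (i) is effective. A small sketch over $\mathbb{Q}$ using exact rational arithmetic:

```python
import math
from fractions import Fraction

def inverse(a, n):
    """First n coefficients of 1/alpha, computed by the recursion
    b_0 = 1/a_0,  b_k = -(1/a_0) * sum_{i=1..k} a_i b_{k-i}  (needs a_0 != 0)."""
    a = [Fraction(x) for x in a]
    b = [1 / a[0]]
    for k in range(1, n):
        s = sum(a[i] * b[k - i] for i in range(1, k + 1) if i < len(a))
        b.append(-s / a[0])
    return b

# 1/(1-X) = 1 + X + X^2 + ... (the formal geometric series)
print(inverse([1, -1], 6))
# Inverting exp(X) = 1 + X + X^2/2! + ... yields the coefficients of exp(-X)
print(inverse([Fraction(1, math.factorial(k)) for k in range(6)], 6))
```

The second example anticipates $\exp(X)^{-1}=\exp(-X)$, proved below via the functional equation.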
\begin{Ex}\label{bspinv}\hfill
\begin{enumerate}[(i)]
\item By \autoref{bsppower} we obtain the familiar formula for the \emph{formal geometric series}
\[\frac{1}{1-X}=\sum X^n.\]
\item For any $\alpha\in K[[X]]\setminus\{1\}$ and $n\in\mathbb{N}$ an easy induction yields
\[\sum_{k=0}^{n-1}\alpha^k=\frac{1-\alpha^n}{1-\alpha}.\]
\item For distinct $a,b\in K\setminus\{0\}$ one has the \emph{partial fraction decomposition}
\begin{equation}\label{partial}
\frac{1}{(a+X)(b+X)}=\frac{1}{b-a}\Bigl(\frac{1}{a+X}-\frac{1}{b+X}\Bigr),
\end{equation}
which can be generalized depending on the algebraic properties of $K$.
\end{enumerate}
\end{Ex}
We now start forming infinite sums of power series. To justify this process we introduce a discrete norm, which behaves much simpler than the euclidean norm on $\mathbb{C}$, for instance.
\begin{Def}\label{defnorm}
For $\alpha=\sum a_nX^n\in K[[X]]$ let
\[|\alpha|:=2^{-\inf(\alpha)}\in\mathbb{R}\]
be the \emph{norm} of $\alpha$ with the convention $|0|=2^{-\infty}=0$.
\end{Def}
The number $2$ in \autoref{defnorm} can of course be replaced by any real number greater than $1$.
Note that $\alpha$ is invertible if and only if $|\alpha|=1$.
The following lemma turns $K[[X]]$ into an ultrametric space.
\begin{Lem}\label{ultra}
For $\alpha,\beta\in K[[X]]$ we have
\begin{enumerate}[(i)]
\item $|\alpha|\ge 0$ with equality if and only if $\alpha=0$,
\item $|\alpha\beta|=|\alpha||\beta|$,
\item $|\alpha+\beta|\le\max\{|\alpha|,|\beta|\}$ with equality if $|\alpha|\ne|\beta|$.
\end{enumerate}
\end{Lem}
\begin{proof}\hfill
\begin{enumerate}[(i)]
\item This follows from the definition.
\item Without loss of generality, let $\alpha\ne 0\ne\beta$. We have already seen in the proof of \autoref{intdom} that $\inf(\alpha\beta)=\inf(\alpha)+\inf(\beta)$, whence $|\alpha\beta|=2^{-\inf(\alpha)-\inf(\beta)}=|\alpha||\beta|$.
\item From $a_n+b_n\ne 0$ we obtain $a_n\ne 0$ or $b_n\ne 0$. It follows that $\inf(\alpha+\beta)\ge\min\{\inf(\alpha),\inf(\beta)\}$.
This turns into the ultrametric inequality $|\alpha+\beta|\le\max\{|\alpha|,|\beta|\}$. If $\inf(\alpha)>\inf(\beta)$, then clearly $\inf(\alpha+\beta)=\inf(\beta)$. \qedhere
\end{enumerate}
\end{proof}
\begin{Thm}\label{vollständig}
The distance function $d(\alpha,\beta):=|\alpha-\beta|$ for $\alpha,\beta\in K[[X]]$ turns $K[[X]]$ into a complete metric space.
\end{Thm}
\begin{proof}
Clearly, $d(\alpha,\beta)=d(\beta,\alpha)\ge 0$ with equality if and only if $\alpha=\beta$. Hence, $d$ is symmetric and positive definite. The triangle inequality follows from \autoref{ultra}:
\begin{align*}
d(\alpha,\gamma)&=|\alpha-\gamma|=|\alpha-\beta+\beta-\gamma|\le\max\bigl\{|\alpha-\beta|,|\beta-\gamma|\bigr\}\\
&\le|\alpha-\beta|+|\beta-\gamma|=d(\alpha,\beta)+d(\beta,\gamma).
\end{align*}
Now let $\alpha_1,\alpha_2,\ldots\in K[[X]]$ be a Cauchy sequence with $\alpha_m=\sum a_{m,n}X^n$ for $m\ge 1$. For every $k\ge 1$ there exists some $M=M(k)\ge 1$ such that $|\alpha_m-\alpha_M|<2^{-k}$ for all $m\ge M$. This shows $a_{m,n}=a_{M,n}$ for all $m\ge M$ and $n\le k$. We define
\[a_k:=a_{M(k),k}\]
and $\alpha=\sum a_kX^k$. Then $|\alpha-\alpha_m|<2^{-k}$ for all $m\ge M(k)$, i.\,e. $\lim_{m\to\infty}\alpha_m=\alpha$. Therefore, $K[[X]]$ is complete with respect to $d$.
\end{proof}
Note that $K[[X]]$ is the completion of $K[X]$ with respect to $d$. In other words: power series can be regarded as limits of Cauchy sequences of polynomials.
For convergent sequences $(\alpha_k)_k$ and $(\beta_k)_k$ we have (as in any metric space with multiplication)
\[\lim_{k\to\infty}(\alpha_k+\beta_k)=\lim_{k\to\infty}\alpha_k+\lim_{k\to\infty}\beta_k,\qquad\lim_{k\to\infty}(\alpha_k\beta_k)=\lim_{k\to\infty}\alpha_k\cdot\lim_{k\to\infty}\beta_k.\]
The infinite sum
\[\sum_{k=1}^\infty\alpha_k:=\lim_{n\to\infty}\sum_{k=1}^n\alpha_k\]
can only converge if $(\alpha_k)_k$ is a \emph{null sequence}, that is, $\lim_{k\to\infty}|\alpha_k|=0$.
Surprisingly and in stark contrast to euclidean spaces, the converse is also true as we are about to see. This crucial fact makes the arithmetic of formal power series much simpler than the analytic counterpart.
\begin{Lem}\label{infsum}
For every null sequence $\alpha_1,\alpha_2,\ldots\in K[[X]]$ the series
$\sum_{k=1}^\infty\alpha_k$ and $\prod_{k=1}^\infty(1+\alpha_k)$ converge, i.\,e. they are well-defined in $K[[X]]$.
\end{Lem}
\begin{proof}
By \autoref{vollständig} it suffices to show that the partial sums form Cauchy sequences. For $\epsilon>0$ let $N\ge 0$ such that $|\alpha_k|<\epsilon$ for all $k\ge N$. Then, for $k>l\ge N$, we have
\begin{align*}
\Bigl|\sum_{i=1}^k\alpha_i-\sum_{i=1}^l\alpha_i\Bigr|&=\Bigl|\sum_{i=l+1}^k\alpha_i\Bigr|\overset{\ref{ultra}}{\le}\max\bigl\{|\alpha_i|:i=l+1,\ldots,k\bigr\}<\epsilon,\\
\Bigl|\prod_{i=1}^k(1+\alpha_i)-\prod_{i=1}^l(1+\alpha_i)\Bigr|&=\prod_{i=1}^l\underbrace{|1+\alpha_i|}_{\le 1}\Bigl|\prod_{i=l+1}^k(1+\alpha_i)-1\Bigr|\le\Bigl|\sum_{\varnothing\ne I\subseteq\{l+1,\ldots,k\}}\prod_{i\in I}\alpha_i\Bigr|\\&\le\max\bigl\{|\alpha_i|:i=l+1,\ldots,k\bigr\}<\epsilon.\qedhere
\end{align*}
\end{proof}
We often regard finite sequences as null sequences by extending them silently.
Let $\alpha_1,\alpha_2,\ldots\in K[[X]]$ be a null sequence and $\alpha_k=\sum a_{k,n}X^n$ for $k\ge 1$. For every $n\ge 0$ only finitely many of the coefficients $a_{1,n},a_{2,n},\ldots$ are non-zero. This shows that the coefficient of $X^n$ in
\begin{equation}\label{infsums}
\sum_{k=1}^\infty\alpha_k=\sum_{n=0}^\infty\Bigl(\sum_{k=1}^\infty a_{k,n}\Bigr)X^n
\end{equation}
depends on only finitely many terms. The same reasoning applies to the product $\prod_{k=1}^\infty(1+\alpha_k)$.
For $\gamma\in K[[X]]$ and null sequences $(\alpha_k)$, $(\beta_k)$ it holds that $\sum\alpha_k+\sum\beta_k=\sum(\alpha_k+\beta_k)$ and $\gamma\sum\alpha_k=\sum\gamma\alpha_k$ as expected.
Moreover, a convergent sum does not depend on the order of summation. In fact, for every bijection $\pi\colon \mathbb{N}\to\mathbb{N}$ and $n\in\mathbb{N}$ there exists some $N\in\mathbb{N}$ such that $\pi(k)>n$ for all $k>N$. Hence, $\alpha_{\pi(1)},\alpha_{\pi(2)},\ldots$ is a null sequence.
We often exploit this fact by interchanging summation signs (discrete \emph{Fubini's theorem}).
\begin{Ex}\hfill
\begin{enumerate}[(i)]
\item For $\alpha\in (X)$ we have $|\alpha^n|=|\alpha|^n\le 2^{-n}\to 0$ and therefore $\sum\alpha^n=\frac{1}{1-\alpha}$.
So we have substituted $X$ by $\alpha$ in the geometric series. This will be generalized in \autoref{defsub}.
\item Since every non-negative integer has a unique $2$-adic expansion, we obtain
\[\prod_{k=0}^\infty(1+X^{2^k})=1+X+X^2+\ldots=\frac{1}{1-X}.\]
Equivalently,
\[\prod_{k=0}^\infty(1+X^{2^k})=\prod\frac{(1+X^{2^k})(1-X^{2^k})}{1-X^{2^k}}=\prod\frac{1-X^{2^{k+1}}}{1-X^{2^k}}=\frac{1}{1-X}.\]
More interesting series will be discussed in \autoref{seccomb}.
\end{enumerate}
\end{Ex}
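The $2$-adic identity can be checked coefficientwise with truncated polynomial arithmetic; a quick sketch:

```python
def mul(a, b, n):
    """First n coefficients of the product of two coefficient lists."""
    return [sum(a[k] * b[m - k] for k in range(m + 1)
                if k < len(a) and m - k < len(b)) for m in range(n)]

N = 16
prod = [1]                       # the empty product
for k in range(4):               # (1+X)(1+X^2)(1+X^4)(1+X^8)
    factor = [0] * (2 ** k) + [1]
    factor[0] = 1                # factor = 1 + X^(2^k)
    prod = mul(prod, factor, N)
print(prod)  # sixteen ones: every 0 <= n < 16 has a unique binary expansion
```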
\section{The toolkit}
\begin{Def}\label{defsub}
Let $\alpha=\sum a_nX^n\in K[[X]]$ and $\beta\in K[[X]]$ such that $\alpha\in K[X]$ or $\beta\in (X)$. We define
\[\boxed{\alpha\circ\beta:=\alpha(\beta):=\sum_{n=0}^\infty a_n\beta^n.}\]
\end{Def}
If $\alpha$ is a polynomial, it is clear that $\alpha(\beta)$ is a valid power series, while for $\alpha\in (X)$ the convergence of $\alpha(\beta)$ is guaranteed by \autoref{infsum}. In the following we will silently assume that one of these conditions is fulfilled.
Observe that $|\alpha(\beta)|\le|\alpha|$ if $\beta\in(X)$.
\begin{Ex}
For $\alpha=\sum a_nX^n\in K[[X]]$ we have $\alpha(0)=a_0$ and $\alpha(X^2)=\sum a_nX^{2n}$. On the other hand for $\alpha=\sum X^n$ we are not allowed to form $\alpha(1)$.
\end{Ex}
\begin{Lem}\label{lemcomp}
For $\alpha,\beta,\gamma\in (X)$ and every null sequence $\alpha_1,\alpha_2,\ldots \in K[[X]]$ we have
\begin{align}
\Bigl(\sum\alpha_k\Bigr)\circ\beta&=\sum \alpha_k(\beta),\label{distcirc}\\
\Bigl(\prod(1+\alpha_k)\Bigr)\circ\beta&=\prod(1+\alpha_k(\beta)),\label{distcirc2}\\
\alpha\circ(\beta\circ\gamma)&=(\alpha\circ\beta)\circ\gamma.\label{associativ}
\end{align}
\end{Lem}
\begin{proof}
Since $|\alpha_k(\beta)|\le|\alpha_k|\to 0$ for $k\to\infty$, all series are well-defined. Using the notation from \eqref{infsums} we deduce:
\[
\Bigl(\sum\alpha_k\Bigr)\circ\beta=\sum_{n=0}^\infty\Bigl(\sum_{k=1}^\infty a_{k,n}\Bigr)\beta^n=\sum_{k=1}^\infty\Bigl(\sum_{n=0}^\infty a_{k,n}\beta^n\Bigr)=\sum\alpha_k(\beta).
\]
We begin proving \eqref{distcirc2} with only two factors, say $\alpha_1=\sum a_nX^n$ and $\alpha_2=\sum b_nX^n$:
\[(\alpha_1\alpha_2)\circ\beta=\sum_{n=0}^\infty\Bigl(\sum_{k=0}^na_kb_{n-k}\Bigr)\beta^n=\sum_{n=0}^\infty\sum_{k=0}^n(a_k\beta^k)(b_{n-k}\beta^{n-k})=(\alpha_1\circ\beta)(\alpha_2\circ\beta).\]
Inductively, \eqref{distcirc2} holds for finitely many factors. Now taking the limit gives
\[\Bigl|\prod(1+\alpha_k(\beta))-\Bigl(\prod_{k=1}^n(1+\alpha_k)\Bigr)\circ\beta\Bigr|=\prod_{k=1}^n|1+\alpha_k(\beta)|\Bigl|\prod_{k=n+1}^\infty(1+\alpha_k(\beta))-1\Bigr|\to 0.\]
Using \eqref{distcirc} and \eqref{distcirc2}, the validity of \eqref{associativ} reduces to the trivial case where $\alpha=X$.
\end{proof}
We warn the reader that in general
\[\alpha\circ\beta\ne\beta\circ\alpha,\qquad \alpha\circ(\beta\gamma)\ne(\alpha\circ\beta)(\alpha\circ\gamma),\qquad \alpha\circ(\beta+\gamma)\ne\alpha\circ\beta+\alpha\circ\gamma.\]
Nevertheless, the last statement can be corrected for the exponential series (\autoref{lemfunc}).
\begin{Thm}\label{revgroup}
The set $K[[X]]^\circ:=(X)\setminus(X^2)\subseteq K[[X]]$ forms a group with respect to $\circ$.
\end{Thm}
\begin{proof}
Let $\alpha,\beta,\gamma\in K[[X]]^\circ$. Then $\alpha(\beta)\in K[[X]]^\circ$, i.\,e. $K[[X]]^\circ$ is closed under $\circ$. The associativity holds by \eqref{associativ}.
By definition, $X\in K[[X]]^\circ$ and $X\circ\alpha=\alpha=\alpha\circ X$.
To construct inverses we argue as in \autoref{leminv}. Let $\alpha^k=\sum_{n=0}^\infty a_{k,n}X^n$ for $k\in\mathbb{N}_0$. Since $a_0=0$, also $a_{k,n}=0$ for $n<k$ and $a_{n,n}=a_1^n\ne 0$. We define recursively $b_0:=0$, $b_1:=\frac{1}{a_1}\ne 0$ and
\[b_n:=-\frac{1}{a_{n,n}}\sum_{k=0}^{n-1}a_{k,n}b_k\]
for $n\ge 2$. Setting $\beta:=\sum b_nX^n\in K[[X]]^\circ$, we obtain
\[\beta(\alpha)=\sum_{k=0}^\infty b_k\alpha^k=\sum_{k=0}^\infty\sum_{n=0}^\infty b_ka_{k,n}X^n=\sum_{n=0}^\infty\Bigl(\sum_{k=0}^nb_ka_{k,n}\Bigr)X^n=X.\]
As in any monoid, this automatically implies $\alpha(\beta)=X$.
\end{proof}
For $\alpha\in K[[X]]^\circ$, we call the unique $\beta\in K[[X]]^\circ$ with $\alpha(\beta)=X=\beta(\alpha)$ the \emph{reverse} of $\alpha$. To avoid confusion with the inverse $\alpha^{-1}$ (which is not defined here), we refrain from introducing a symbol for the reverse.
\begin{Ex}\hfill
\begin{enumerate}[(i)]
\item Let $\alpha$ be the reverse of $X+X^2+\ldots=\frac{X}{1-X}$. Then
\[X=\frac{\alpha}{1-\alpha}\]
and it follows that $\alpha=\frac{X}{1+X}=X-X^2+X^3-\ldots$.
In general, it is much harder to find a closed-form expression for the reverse. We do so for the exponential series with the help of formal derivatives (\autoref{deflog}). Later we provide the explicit Lagrange--Bürmann inversion formula (\autoref{lagrange}) using the machinery of Laurent series.
\item For the field $\mathbb{F}_p$ with $p$ elements (where $p$ is a prime), the subgroup $N_p:=X+(X^2)$ of $\mathbb{F}_p[[X]]^\circ$ is called \emph{Nottingham group}. One can show that every finite $p$-group is a subgroup of $N_p$, so it must have a very rich structure.
\end{enumerate}
\end{Ex}
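The recursive construction of the reverse in the proof of \autoref{revgroup} is easy to implement. A sketch with exact rationals, recovering the reverse $X-X^2+X^3-\ldots$ of $X+X^2+X^3+\ldots$ from example (i):

```python
from fractions import Fraction

def mul(a, b, n):
    """First n coefficients of the product of two coefficient lists."""
    return [sum(a[k] * b[m - k] for k in range(m + 1)
                if k < len(a) and m - k < len(b)) for m in range(n)]

def reverse(a, n):
    """First n coefficients of the compositional reverse of a (requires
    a[0] == 0, a[1] != 0), via  b_m = -(1/a_{m,m}) sum_{k<m} a_{k,m} b_k,
    where a_{k,m} is the m-th coefficient of a^k, as in the proof above."""
    a = [Fraction(x) for x in a]
    pows = [[Fraction(1)] + [Fraction(0)] * (n - 1)]    # a^0
    for _ in range(n - 1):
        pows.append(mul(pows[-1], a, n))                # a^1, ..., a^{n-1}
    b = [Fraction(0), 1 / a[1]]
    for m in range(2, n):
        s = sum(pows[k][m] * b[k] for k in range(1, m))
        b.append(-s / pows[m][m])
    return b

print(reverse([0, 1, 1, 1, 1, 1], 6))  # X/(1-X) reversed: 0, 1, -1, 1, -1, 1
```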
\begin{Lem}[Functional equation]\label{lemfunc}
For every null sequence $\alpha_1,\alpha_2,\ldots\in (X)\subseteq\mathbb{C}[[X]]$,
\begin{equation}\label{func2}
\boxed{\exp\Bigl(\sum\alpha_k\Bigr)=\prod\exp(\alpha_k).}
\end{equation}
In particular, $\exp(kX)=\exp(X)^k$ for $k\in\mathbb{Z}$.
\end{Lem}
\begin{proof}
Since $\sum\alpha_k\in (X)$ and $\exp(\alpha_k)=1+\alpha_k+\frac{\alpha_k^2}{2}+\ldots$, both sides of \eqref{func2} are well-defined. For two summands $\alpha,\beta\in (X)$ we compute
\begin{align*}
\exp(\alpha+\beta)&=\sum\frac{(\alpha+\beta)^n}{n!}=\sum_{n=0}^\infty\sum_{k=0}^n\binom{n}{k}\frac{\alpha^k\beta^{n-k}}{n!}\\
&=\sum_{n=0}^\infty\sum_{k=0}^n\frac{\alpha^k\beta^{n-k}}{k!(n-k)!}=\sum\frac{\alpha^n}{n!}\cdot\sum\frac{\beta^n}{n!}=\exp(\alpha)\exp(\beta).
\end{align*}
By induction we obtain \eqref{func2} for finitely many summands. Finally,
\[\Bigl|\prod\exp(\alpha_k)-\exp\Bigl(\sum_{k=1}^n\alpha_k\Bigr)\Bigr|=\prod_{k=1}^n|\exp(\alpha_k)|\Bigl|\prod_{k=n+1}^\infty\exp(\alpha_k)-1\Bigr|\to 0.\]
For the second claim let $k\in\mathbb{N}_0$. Then $\exp(kX)=\exp(X+\ldots+X)=\exp(X)^k$. Since
\[\exp(kX)\exp(-kX)=\exp(kX-kX)=\exp(0)=1,\]
we also have $\exp(-kX)=\exp(kX)^{-1}=\exp(X)^{-k}$.
\end{proof}
\begin{Def}
For $\alpha=\sum a_nX^n\in K[[X]]$ we call
\[\boxed{\alpha':=\sum_{n=1}^\infty na_nX^{n-1}\in K[[X]]}\]
the (formal) \emph{derivative} of $\alpha$. Moreover, let $\alpha^{(0)}:=\alpha$ and $\alpha^{(n)}:=(\alpha^{(n-1)})'$ the $n$-th derivative for $n\in\mathbb{N}$.
\end{Def}
It seems natural to define formal \emph{integrals} as counterparts, but this is less useful, since in characteristic $0$ we have $\alpha=\beta$ if and only if $\alpha'=\beta'$ and $\alpha(0)=\beta(0)$.
\begin{Ex}
As expected we have $1'=0$, $X'=1$ as well as
\[\exp(X)'=\sum_{n=1}^\infty n\frac{X^{n-1}}{n!}=\sum_{n=0}^\infty\frac{X^n}{n!}=\exp(X).\]
Note however, that $(X^p)'=0$ if $K$ has characteristic $p$.
\end{Ex}
In characteristic $0$, derivatives provide a convenient way to extract coefficients of power series.
For $\alpha=\sum a_nX^n\in \mathbb{C}[[X]]$ we see that
$\alpha^{(0)}(0)=\alpha(0)=a_0$, $\alpha'(0)=a_1$, $\alpha''(0)=2a_2,\ldots,\alpha^{(n)}(0)=n!a_n$. Hence, \emph{Taylor's theorem} (more precisely, the \emph{Maclaurin series}) holds
\begin{equation}\label{taylor}
\boxed{\alpha=\sum_{n=0}^\infty\frac{\alpha^{(n)}(0)}{n!}X^n.}
\end{equation}
Over arbitrary fields we are not allowed to divide by $n!$. Alternatively, one may use the $k$-th \emph{Hasse derivative} defined by
\[H^k(\alpha):=\sum_{n=k}^\infty\binom{n}{k}a_nX^{n-k}\]
(the integer $\binom{n}{k}$ can be embedded in any field). Note that $k!\,H^k(\alpha)=\alpha^{(k)}$ and $\alpha=\sum_{n=0}^\infty H^n(\alpha)(0)X^n$. In the following we restrict ourselves to complex power series.
\begin{Lem}\label{der}
For $\alpha,\beta\in \mathbb{C}[[X]]$ and every null sequence $\alpha_1,\alpha_2,\ldots\in \mathbb{C}[[X]]$ the following rules hold:
\begin{align*}
\Bigl(\sum\alpha_k\Bigr)'&=\sum\alpha_k'&&(\emph{sum rule}),\\
(\alpha\beta)'&=\alpha'\beta+\alpha\beta'&&(\emph{(finite) product rule}),\\
\Bigl(\prod(1+\alpha_k)\Bigr)'&=\prod(1+\alpha_k)\sum\frac{\alpha_k'}{1+\alpha_k}&&(\emph{(infinite) product rule}),\\
\Bigl(\frac{\alpha}{\beta}\Bigr)'&=\frac{\alpha'\beta-\alpha\beta'}{\beta^2}&&(\emph{quotient rule}),\\
(\alpha\circ\beta)'&=\alpha'(\beta)\beta'&&(\emph{chain rule}).
\end{align*}
\end{Lem}
\begin{proof}\hfill
\begin{enumerate}[(i)]
\item\label{sumr} Using the notation from \eqref{infsums}, we have
\[\Bigl(\sum\alpha_k\Bigr)'=\Bigl(\sum_{n=0}^\infty\sum_{k=1}^\infty a_{k,n}X^n\Bigr)'=\sum_{n=0}^\infty\sum_{k=1}^\infty n a_{k,n}X^{n-1}=\sum_{k=1}^\infty\Bigl(\sum_{n=0}^\infty na_{k,n}X^{n-1}\Bigr)=\sum\alpha_k'.\]
\item\label{produktr} By \eqref{sumr} we may assume $\alpha=X^k$ and $\beta=X^l$. In this case,
\[(\alpha\beta)'=(X^{k+l})'=(k+l)X^{k+l-1}=kX^{k-1}X^{l}+lX^{l-1}X^{k}=\alpha'\beta+\beta'\alpha.\]
\item\label{produktinf} Without loss of generality, suppose $\alpha_k\ne-1$ for all $k\in\mathbb{N}$ (otherwise both sides vanish).
Let $|\alpha_k|<2^{-N-1}$ for all $k>n$. The coefficient of $X^N$ on both sides of the equation depends only on $\alpha_1,\ldots,\alpha_n$.
From \eqref{produktr} we verify inductively:
\[\Bigl(\prod_{k=1}^n(1+\alpha_k)\Bigr)'=\prod_{k=1}^n(1+\alpha_k)\sum_{k=1}^n\frac{\alpha_k'}{1+\alpha_k}\]
for all $n\in\mathbb{N}$. Now the claim follows with $N\to\infty$.
\item By \eqref{produktr},
\[\alpha'=\Bigl(\frac{\alpha}{\beta}\beta\Bigr)'=\Bigl(\frac{\alpha}{\beta}\Bigr)'\beta+\frac{\alpha\beta'}{\beta}.\]
\item By \eqref{produktr}, the \emph{power rule} $(\alpha^n)'=n\alpha^{n-1}\alpha'$ holds for $n\in\mathbb{N}_0$. The sum rule implies
\[(\alpha\circ\beta)'=\Bigl(\sum a_n\beta^n\Bigr)'=\sum a_n(\beta^n)'=\sum_{n=1}^\infty na_n\beta^{n-1}\beta'=\alpha'(\beta)\beta'.\qedhere\]
\end{enumerate}
\end{proof}
The product rule implies the rather trivial \emph{factor rule} $(\lambda\alpha)'=\lambda\alpha'$ as well as \emph{Leibniz' rule} \[(\alpha\beta)^{(n)}=\sum_{k=0}^n\binom{n}{k}\alpha^{(k)}\beta^{(n-k)}\]
for $\lambda\in\mathbb{C}$ and $\alpha,\beta\in\mathbb{C}[[X]]$. A generalized version of the latter and a chain rule for higher derivatives are proven in \autoref{secmult}.
\begin{A}
Let $\alpha,\beta\in(X)$ such that $\beta\notin(X^2)$. Prove \emph{L'Hôpital's rule} $\frac{\alpha}{\beta}(0)=\frac{\alpha'(0)}{\beta'(0)}$.
\end{A}
\begin{Ex}\label{deflog}
Define the (formal) \emph{logarithm} by
\[\boxed{\log(1+X):=\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}X^n=X-\frac{X^2}{2}+\frac{X^3}{3}\mp\ldots\in \mathbb{C}[[X]].}\]
By \autoref{revgroup}, $\alpha:=\exp(X)-1$ possesses a reverse and $\log(\exp(X))=\log(1+\alpha)\in\mathbb{C}[[X]]^\circ$.
Since
\[\log(1+X)'=1-X+X^2\mp\ldots=\sum(-X)^n=\frac{1}{1+X},\]
the chain rule yields
\[\log(1+\alpha)'=\frac{\alpha'}{1+\alpha}=\frac{\exp(X)}{\exp(X)}=1.\]
This shows that $\log(\exp(X))=X$. Therefore, $\log(1+X)$ is the reverse of $\alpha=\exp(X)-1$ as expected from analysis.
Moreover, $\log(1-X)=-\sum_{n=1}^\infty \frac{X^n}{n}$.
\end{Ex}
The only reason why we called the power series $\log(1+X)$ instead of $\log(X)$ or just $\log$ is to keep the analogy to the natural logarithm (as an analytic function).
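Since $\log(1+X)$ and $\exp(X)-1$ are mutually reverse, one also has $\exp(\log(1+X))=1+X$. This can be checked numerically by composing truncated series; the following Python sketch is purely illustrative (truncation order and names are arbitrary choices):

```python
from fractions import Fraction
from math import factorial

N = 12  # work modulo X^{N+1}

def mul(a, b):
    c = [Fraction(0)] * (N + 1)
    for i, x in enumerate(a):
        if x:
            for j, y in enumerate(b):
                if i + j <= N:
                    c[i + j] += x * y
    return c

# log(1+X) = X - X^2/2 + X^3/3 -+ ...
log1p = [Fraction(0)] + [Fraction((-1) ** (n - 1), n) for n in range(1, N + 1)]

def exp_of(beta):
    # exp(beta) = sum_n beta^n / n!  for beta with zero constant term
    assert beta[0] == 0
    res = [Fraction(0)] * (N + 1)
    power = [Fraction(1)] + [Fraction(0)] * N  # beta^0
    for n in range(N + 1):
        if n:
            power = mul(power, beta)           # beta^n, truncated
        for k in range(N + 1):
            res[k] += power[k] / factorial(n)
    return res

e = exp_of(log1p)   # should be 1 + X
assert e[0] == 1 and e[1] == 1 and all(x == 0 for x in e[2:])
```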
\begin{Lem}[Functional equation]
For every null sequence $\alpha_1,\alpha_2,\ldots\in (X)\subseteq\mathbb{C}[[X]]$,
\begin{equation}\label{funclog}
\boxed{\log\Bigl(\prod(1+\alpha_k)\Bigr)=\sum\log(1+\alpha_k).}
\end{equation}
\end{Lem}
\begin{proof}
\begin{align*}
\log\Bigl(\prod(1+\alpha_k)\Bigr)&=\log\Bigl(\prod\exp(\log(1+\alpha_k))\Bigr)\overset{\eqref{func2}}{=}\log\Bigl(\exp\Bigl(\sum\log(1+\alpha_k)\Bigr)\Bigr)\\
&=\sum\log(1+\alpha_k).\qedhere
\end{align*}
\end{proof}
\begin{Ex}\label{bsplog}
By \eqref{funclog},
\[
\log\Bigl(\frac{1}{1-X}\Bigr)=-\log(1-X)=\sum_{n=1}^\infty\frac{X^n}{n}.
\]
\end{Ex}
\begin{Def}\label{defpower}
For $c\in\mathbb{C}$ and $\alpha\in (X)$ let
\[\boxed{(1+\alpha)^c:=\exp\bigl(c\log(1+\alpha)\bigr).}\]
If $c=1/k$ for some $k\in\mathbb{N}$, we write, as is customary, $\sqrt[k]{1+\alpha}:=(1+\alpha)^{1/k}$ and in particular $\sqrt{1+\alpha}:=\sqrt[2]{1+\alpha}$.
\end{Def}
By \autoref{lemfunc},
\[(1+\alpha)^c(1+\alpha)^d=\exp\bigl(c\log(1+\alpha)+d\log(1+\alpha)\bigr)=(1+\alpha)^{c+d}\]
for every $c,d\in\mathbb{C}$ as expected. Consequently, $\sqrt[k]{1+\alpha}^k=1+\alpha$ for $k\in\mathbb{N}$, i.\,e. $\sqrt[k]{1+\alpha}$ is a $k$-th \emph{root} of $1+\alpha$ with constant term $1$.
Suppose that $\beta\in\mathbb{C}[[X]]$ also satisfies $\beta^k=1+\alpha$. Then $\beta^{-1}\sqrt[k]{1+\alpha}$ has finite order dividing $k$ in $\mathbb{C}[[X]]^\times$. From \autoref{leminv} we conclude that $\beta^{-1}\sqrt[k]{1+\alpha}$ is constant, i.\,e. $\beta=\beta(0)\sqrt[k]{1+\alpha}$. Consequently, $\sqrt[k]{1+\alpha}$ is the unique $k$-th root of $1+\alpha$ with constant term $1$.
The inexperienced reader may find the following exercise helpful.
\begin{A}
Check that the following power series in $\mathbb{C}[[X]]$ are well-defined:
\begin{align*}
\sin(X)&:=\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)!}X^{2n+1},&\cos(X)&:=\sum_{n=0}^\infty\frac{(-1)^n}{(2n)!}X^{2n},\\
\tan(X)&:=\frac{\sin(X)}{\cos(X)},&\sinh(X)&:=\sum_{k=0}^\infty\frac{X^{2k+1}}{(2k+1)!},\\
\arcsin(X)&:=\sum_{n=0}^\infty\frac{(2n)!}{(2^nn!)^2}\frac{X^{2n+1}}{2n+1},&\arctan(X)&:=\sum_{k=0}^\infty\frac{(-1)^k}{2k+1}X^{2k+1}.
\end{align*}
Show that
\begin{enumerate}[(a)]
\item\label{euler} \textup(\textsc{Euler}'s formula\textup) $\exp(\mathrm{i} X)=\cos(X)+\mathrm{i}\sin(X)$ where $\mathrm{i}=\sqrt{-1}\in\mathbb{C}$.
\item $\sin(2X)=2\sin(X)\cos(X)$ and $\cos(2X)=\cos(X)^2-\sin(X)^2$.\\
\textit{Hint:} Use \eqref{euler} and separate real from non-real coefficients.
\item \textup(\textsc{Pythagorean} identity\textup) $\cos(X)^2+\sin(X)^2=1$.
\item $\sinh(X)=\frac{1}{2}(\exp(X)-\exp(-X))$.
\item $\sin(X)'=\cos(X)$ and $\cos(X)'=-\sin(X)$.
\item $\arctan\circ\tan=X$.\\
\textit{Hint:} Mimic the argument for $\log(1+X)$.
\item $\arctan(X)=\frac{\mathrm{i}}{2}\log\Bigl(\frac{\mathrm{i}+X}{\mathrm{i}-X}\Bigr)$.
\item $\arcsin(X)'=\frac{1}{\sqrt{1-X^2}}$.
\item $\arcsin\circ \sin=X$.
\end{enumerate}
\end{A}
\section{The main theorems}\label{secmain}
For $c\in\mathbb{C}$ and $k\in\mathbb{N}$ we extend the definition of the usual binomial coefficient by
\[\binom{c}{k}:=\frac{c(c-1)\ldots(c-k+1)}{k!}\in\mathbb{C}\]
(it is useful to know that numerator and denominator both have exactly $k$ factors).
The next theorem is a vast generalization of the binomial theorem (take $c\in\mathbb{N}$) and the geometric series (take $c=-1$).
\begin{Thm}[\textsc{Newton}'s binomial theorem]\label{newton}
For $\alpha\in (X)$ and $c\in\mathbb{C}$ the following holds
\begin{equation}\label{newtoneq}
\boxed{(1+\alpha)^c=\sum_{k=0}^\infty\binom{c}{k}\alpha^k.}
\end{equation}
\end{Thm}
\begin{proof}
It suffices to prove the equation for $\alpha=X$ (we may substitute $X$ by $\alpha$ afterwards). By the chain rule,
\[\bigl((1+X)^c\bigr)'=\exp(c\log(1+X))'=c\frac{(1+X)^c}{1+X}=c(1+X)^{c-1}\]
and inductively, $\bigl((1+X)^c\bigr)^{(k)}=c(c-1)\ldots(c-k+1)(1+X)^{c-k}$. Now the claim follows from Taylor's theorem~\eqref{taylor}.
\end{proof}
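Newton's binomial theorem is easy to test numerically. The sketch below (illustrative only; names are ad hoc) checks that $c=-1$ recovers the alternating geometric series and that the series for $\sqrt{1+X}$ squares to $1+X$ up to the truncation order:

```python
from fractions import Fraction

N = 16

def binom(c, k):
    # generalized binomial coefficient c(c-1)...(c-k+1)/k!
    r = Fraction(1)
    for i in range(k):
        r = r * (c - i) / (i + 1)
    return r

# c = -1 recovers the alternating geometric series 1/(1+X)
assert all(binom(-1, k) == (-1) ** k for k in range(N + 1))

# c = 1/2: the series for sqrt(1+X) squares to 1 + X (truncated)
half = [binom(Fraction(1, 2), k) for k in range(N + 1)]
square = [sum(half[i] * half[n - i] for i in range(n + 1)) for n in range(N + 1)]
assert square[0] == 1 and square[1] == 1 and all(x == 0 for x in square[2:])
```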
A striking application of \autoref{newton} will be given in \autoref{Turan}.
\begin{Ex}
Let $\zeta\in\mathbb{C}$ be an $n$-th root of unity and let $\alpha:=(1+X)^\zeta-1\in (X)$. Then \[\alpha\circ\alpha=\bigl(1+(1+X)^\zeta-1\bigr)^\zeta-1=(1+X)^{\zeta^2}-1\]
and inductively $\alpha\circ\ldots\circ\alpha=(1+X)^{\zeta^n}-1=X$. In particular, the order of $\alpha$ in the group $\mathbb{C}[[X]]^\circ$ divides $n$. Thus in contrast to \autoref{leminv}, the group $\mathbb{C}[[X]]^\circ$ possesses “interesting” elements of finite order.
\end{Ex}
Since we do not call our indeterminate $q$ (as in many sources), it makes no sense to introduce the $q$-Pochhammer symbol $(q;q)_n$. Instead we devise a non-standard notation.
\begin{Def}\label{defgauss}
For $n\in\mathbb{N}_0$ let $X^n!:=(1-X)(1-X^2)\ldots(1-X^n)$. For $0\le k\le n$ we call
\[\gauss{n}{k}:=\frac{X^n!}{X^k!X^{n-k}!}=\frac{1-X^n}{1-X^k}\ldots\frac{1-X^{n-k+1}}{1-X}\in\mathbb{C}[[X]]\]
a \emph{Gaussian coefficient}. If $k<0$ or $k>n$ let $\gauss{n}{k}:=0$.
\end{Def}
As for the binomial coefficients, we have $\gauss{n}{0}=\gauss{n}{n}=1$ and $\gauss{n}{k}=\gauss{n}{n-k}$ for all $n\in\mathbb{N}_0$ and $k\in\mathbb{Z}$. Moreover, $\gauss{n}{1}=\frac{1-X^n}{1-X}=1+X+\ldots+X^{n-1}$.
The familiar recurrence formula for binomial coefficients needs to be altered as follows.
\begin{Lem}
For $n\in\mathbb{N}_0$ and $k\in\mathbb{Z}$,
\begin{equation}\label{lemgauss}
\gauss{n+1}{k}=X^k\gauss{n}{k}+\gauss{n}{k-1}=\gauss{n}{k}+X^{n+1-k}\gauss{n}{k-1}.
\end{equation}
\end{Lem}
\begin{proof}
For $k>n+1$ or $k<0$, all three expressions are $0$. Similarly, for $k=n+1$ or $k=0$ all three expressions equal $1$. Finally, for $1\le k\le n$ it holds that
\begin{align*}
X^k\gauss{n}{k}+\gauss{n}{k-1}&=\Bigl(X^k\frac{1-X^{n-k+1}}{1-X^k}+1\Bigr)\frac{X^n!}{X^{k-1}!X^{n-k+1}!}=\frac{1-X^{n+1}}{1-X^k}\frac{X^n!}{X^{k-1}!X^{n+1-k}!}\\
&=\gauss{n+1}{k}=\gauss{n+1}{n+1-k}=X^{n+1-k}\gauss{n}{n+1-k}+\gauss{n}{n-k}\\
&=\gauss{n}{k}+X^{n+1-k}\gauss{n}{k-1}.\qedhere
\end{align*}
\end{proof}
Since $\gauss{n}{0}$ and $\gauss{n}{1}$ are polynomials, \eqref{lemgauss} shows inductively that all Gaussian coefficients are polynomials. We may therefore evaluate $\gauss{n}{k}$ at $X=1$. Indeed \eqref{lemgauss} becomes the recurrence for the binomial coefficients if $X=1$. Hence $\gauss{n}{k}(1)=\binom{n}{k}$. This can be seen more directly by writing
\[\gauss{n}{k}=\frac{\frac{1-X^n}{1-X}\ldots\frac{1-X^{n-k+1}}{1-X}}{\frac{1-X^k}{1-X}\ldots\frac{1-X}{1-X}}=\frac{(1+X+\ldots+X^{n-1})\ldots(1+X+\ldots+X^{n-k})}{(1+X+\ldots+X^{k-1})\ldots(1+X)1}.\]
We will interpret the coefficients of $\gauss{n}{k}$ in \autoref{partseries}.
\begin{Ex}
\[\gauss{4}{2}=X^2\gauss{3}{2}+\gauss{3}{1}=X^2(1+X+X^2)+(1+X+X^2)=1+X+2X^2+X^3+X^4.\]
\end{Ex}
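The recurrence \eqref{lemgauss} also gives a convenient way to compute Gaussian coefficients by machine. A short illustrative Python sketch (the function name is ad hoc), which also confirms the evaluation $\gauss{n}{k}(1)=\binom{n}{k}$:

```python
from math import comb

def gauss(n, k):
    # coefficient list of the Gaussian coefficient, computed from the
    # recurrence [n+1, k] = X^k [n, k] + [n, k-1]
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a = [0] * k + gauss(n - 1, k)      # X^k * [n-1, k]
    b = gauss(n - 1, k - 1)
    m = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(m)]

assert gauss(4, 2) == [1, 1, 2, 1, 1]          # 1 + X + 2X^2 + X^3 + X^4
assert all(sum(gauss(n, k)) == comb(n, k)       # evaluation at X = 1
           for n in range(8) for k in range(n + 1))
```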
The next formulas resemble $(1+X)^n=\sum_{k=0}^n\binom{n}{k}X^k$ and $(1-X)^{-n}=\sum_{k=0}^\infty\binom{n+k-1}{k}X^k$ from Newton's binomial theorem. The first is a finite sum; the second is an infinite sum arising from inverses.
We will encounter many more such “dual pairs” in \autoref{genstir}, \autoref{vieta} and \eqref{mac1}, \eqref{a2}.
\begin{Thm}[\textsc{Gauß}' binomial theorem]\label{GaussBT}
For $n\in\mathbb{N}$ and $\alpha\in\mathbb{C}[[X]]$ the following holds
\begin{empheq}[box=\fbox]{align*}
\prod_{k=0}^{n-1}(1+\alpha X^k)&=\sum_{k=0}^n\gauss{n}{k}\alpha^kX^{\binom{k}{2}},\\
\prod_{k=1}^n\frac{1}{1-\alpha X^k}&=\sum_{k=0}^\infty\gauss{n+k-1}{k}\alpha^kX^k.
\end{empheq}
\end{Thm}
\begin{proof}
We argue by induction on $n$.
\begin{enumerate}[(i)]
\item For $n=1$ both sides become $1+\alpha$. For the induction step we let all sums run from $-\infty$ to $\infty$ (this will not change their value, but makes index shifts much more transparent):
\begin{align*}
\prod_{k=0}^{n}(1+\alpha X^k)&=(1+\alpha X^n)\sum_{k=-\infty}^\infty\gauss{n}{k}\alpha^kX^{\binom{k}{2}}\\[-6mm]
&=\sum\gauss{n}{n-k}\alpha^kX^{\binom{k}{2}}+\sum\gauss{n}{n-k}\alpha^{k+1}X^{n-k}X^{\overbrace{\scriptstyle{\binom{k}{2}+k}}^{\binom{k+1}{2}}}\\
&=\sum\gauss{n}{n-k}\alpha^kX^{\binom{k}{2}}+\sum X^{n-k+1}\gauss{n}{n-k+1}\alpha^{k}X^{\binom{k}{2}}\\
&\overset{\eqref{lemgauss}}{=}\sum\gauss{n+1}{n-k+1}\alpha^kX^{\binom{k}{2}}=\sum\gauss{n+1}{k}\alpha^kX^{\binom{k}{2}}.
\end{align*}
\item Here, $n=1$ is the geometric series $\frac{1}{1-\alpha X}=\sum_{k=0}^\infty\alpha^kX^k$. In general:
\begin{align*}
(1-\alpha X^{n+1})\sum_{k=0}^\infty\gauss{n+k}{k}\alpha^kX^k&=\sum_{k=0}^\infty\gauss{n+k}{k}\alpha^kX^k-X^n\sum_{k=0}^\infty\gauss{n+k}{k}\alpha^{k+1}X^{k+1}\\
&=\sum_{k=0}^\infty\biggl(\gauss{n+k}{k}-X^n\gauss{n+k-1}{k-1}\biggr)\alpha^kX^k\\
&\overset{\eqref{lemgauss}}{=}\sum_{k=0}^\infty\gauss{n+k-1}{k}\alpha^kX^k=\prod_{k=1}^n\frac{1}{1-\alpha X^k}.\qedhere
\end{align*}
\end{enumerate}
\end{proof}
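Since $\alpha$ is treated as a second indeterminate, the first identity can be verified exactly for small $n$ by computing with polynomials in two variables. An illustrative sketch (all names are ad hoc):

```python
def mul(p, q):
    # product of polynomials in the two variables A (playing alpha) and X,
    # stored as {(deg_A, deg_X): coefficient}
    r = {}
    for (i, j), c in p.items():
        for (u, v), d in q.items():
            r[i + u, j + v] = r.get((i + u, j + v), 0) + c * d
    return {m: c for m, c in r.items() if c}

def gauss(n, k):
    # Gaussian coefficient via the recurrence [n+1, k] = X^k [n, k] + [n, k-1]
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a = [0] * k + gauss(n - 1, k)
    b = gauss(n - 1, k - 1)
    m = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(m)]

n = 5
lhs = {(0, 0): 1}
for k in range(n):
    lhs = mul(lhs, {(0, 0): 1, (1, k): 1})     # factor (1 + A X^k)

rhs = {}
for k in range(n + 1):
    off = k * (k - 1) // 2                     # exponent binom(k, 2)
    for j, c in enumerate(gauss(n, k)):
        if c:
            rhs[k, j + off] = c

assert lhs == rhs
```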
\begin{Rmk}\label{important}
We emphasize that in the proof of \autoref{GaussBT}, $\alpha$ is treated as a variable independent of $X$. The proof and the statement are therefore still valid if we substitute $X$ by some $\beta\in(X)$ \emph{without} changing $\alpha$ to $\alpha(\beta)$.
\end{Rmk}
Using
\begin{equation}\label{lim}
\lim_{n\to\infty}\gauss{n}{k}=\frac{1}{X^k!}\lim_{n\to\infty}(1-X^{n-k+1})\ldots(1-X^n)=\frac{1}{X^k!},
\end{equation}
we obtain an infinite variant of Gauss' theorem:
\begin{Cor}[\textsc{Euler}]
For all $\alpha\in\mathbb{C}[[X]]$,
\begin{align}
\prod_{k=0}^\infty(1+\alpha X^k)&=\sum_{k=0}^\infty\frac{\alpha^kX^{\binom{k}{2}}}{X^k!},\label{E1}\\
\prod_{k=1}^\infty\frac{1}{1-\alpha X^k}&=\sum_{k=0}^\infty\frac{\alpha^kX^k}{X^k!}.\label{E2}
\end{align}
\end{Cor}
If $\alpha\in (X)$, we can apply \eqref{E2} with $\alpha X^{-1}$ to obtain
\begin{equation}\label{E3}
\prod_{k=0}^\infty\frac{1}{1-\alpha X^k}=\sum_{k=0}^\infty\frac{\alpha^k}{X^k!}.
\end{equation}
We are now in a position to derive one of the most powerful theorems on power series.
\begin{Thm}[\textsc{Jacobi}'s triple product identity]\label{jacobi}
For every $\alpha\in \mathbb{C}[[X]]\setminus (X^2)$ the following holds
\[\boxed{\prod_{k=1}^\infty(1-X^{2k})(1+\alpha X^{2k-1})(1+\alpha^{-1}X^{2k-1})=\sum_{k=-\infty}^\infty \alpha^kX^{k^2}.}\]
\end{Thm}
\begin{proof}
We follow Andrews~\cite{AndrewsJTP}. Observe that $\alpha^{-1}X^{2k-1}$ is a power series for all $k\ge 1$, since $\alpha\notin(X^2)$. Also, both sides of the equation are well-defined.
According to \autoref{important} we are allowed to substitute $X$ by $X^2$ and simultaneously $\alpha$ by $\alpha^{-1} X$ in \eqref{E1}:
\begin{align*}
\prod_{k=1}^\infty (1+\alpha^{-1} X^{2k-1})&=\prod_{k=0}^\infty (1+\alpha^{-1} X^{2k+1})=\sum_{k=0}^\infty\frac{\alpha^{-k}X^{k^2}}{(1-X^2)\ldots(1-X^{2k})}\\
&=\prod_{k=1}^\infty\frac{1}{1-X^{2k}}\sum_{k=0}^\infty\alpha^{-k}X^{k^2}\prod_{l=0}^\infty(1-X^{2l+2k+2}).
\end{align*}
Again, $\alpha^{-k}X^{k^2}$ is still a power series. Since for negative $k$ the inner product vanishes, we may sum over $k\in\mathbb{Z}$.
A second application of \eqref{E1} with $X^2$ instead of $X$ and $-X^{2k+2}$ in the role of $\alpha$ shows
\begin{align*}
\prod_{k=1}^\infty (1+\alpha^{-1} X^{2k-1})(1-X^{2k})&=\sum_{k=-\infty}^\infty\alpha^{-k}X^{k^2}\sum_{l=0}^\infty\frac{(-1)^lX^{l^2+l+2kl}}{(1-X^2)\ldots(1-X^{2l})}\\
&=\sum_{l=0}^\infty \frac{(-\alpha X)^l}{(1-X^2)\ldots(1-X^{2l})}\sum_{k=-\infty}^\infty X^{(k+l)^2}\alpha^{-k-l}.
\end{align*}
After the index shift $k\mapsto -k-l$, the inner sum does not depend on $l$ anymore. We then apply \eqref{E3} on the first sum with $X$ replaced by $X^2$ and $-\alpha X\in (X)$ instead of $\alpha$:
\begin{align*}
\prod_{k=1}^\infty (1+\alpha^{-1} X^{2k-1})(1-X^{2k})=\prod_{k=0}^\infty\frac{1}{1+\alpha X^{2k+1}}\sum_{k=-\infty}^\infty X^{k^2}\alpha^{k}=\prod_{k=1}^\infty\frac{1}{1+\alpha X^{2k-1}}\sum_{k=-\infty}^\infty X^{k^2}\alpha^{k}.
\end{align*}
We are done by rearranging terms.
\end{proof}
\begin{Rmk}\label{import2}
Since the above proof is just a combination of Euler's identities, we are still allowed to replace $X$ and $\alpha$ individually.
Furthermore, we have required the condition $\alpha\notin(X^2)$ only to guarantee that $\alpha^{-1}X$ is well-defined. If we replace $X$ by $X^m$, say, the proof is still sound for $\alpha\in\mathbb{C}[[X]]\setminus(X^{m+1})$.
\end{Rmk}
A (somewhat analytical) proof only making use of \eqref{E1} can be found in \cite{Zhu}. There are numerous purely combinatorial proofs like \cite{Kolitsch,Lewis,Sudler,Sylvester,Wright,Zolnowsky}, which are meaningful for formal power series.
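For the skeptical reader, the triple product can also be tested coefficientwise after specializing $\alpha=1$. An illustrative Python sketch (truncation degree chosen arbitrarily):

```python
N = 50  # compare coefficients up to degree N

lhs = [0] * (N + 1)
lhs[0] = 1

def times(series, e, sign):
    # multiply the truncated series in place by (1 + sign * X^e)
    for j in range(N, e - 1, -1):
        series[j] += sign * series[j - e]

for k in range(1, N + 1):
    times(lhs, 2 * k, -1)      # (1 - X^{2k})
    times(lhs, 2 * k - 1, 1)   # (1 + X^{2k-1})
    times(lhs, 2 * k - 1, 1)   # ... squared, since alpha = alpha^{-1} = 1

rhs = [0] * (N + 1)
for k in range(-7, 8):
    if k * k <= N:
        rhs[k * k] += 1        # sum over k in Z of X^{k^2}

assert lhs == rhs
```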
\begin{Ex}\hfill
\begin{enumerate}[(i)]
\item Choosing $\alpha\in\{\pm1,X\}$ in \autoref{jacobi} reveals the following elegant identities:
\begin{align}
\prod_{k=1}^\infty(1-X^{2k})(1+X^{2k-1})^2&=\sum_{k=-\infty}^\infty X^{k^2},\label{JTP1}\\
\prod_{k=1}^\infty\frac{(1-X^k)^2}{1-X^{2k}}=\prod_{k=1}^\infty(1-X^{2k})(1-X^{2k-1})^2&=\sum_{k=-\infty}^\infty(-1)^k X^{k^2},\label{JTP2}\\
\prod_{k=1}^\infty(1-X^{2k})(1+X^{2k})^2&=\frac{1}{2}\sum_{k=-\infty}^\infty X^{k^2+k}=\sum_{k=0}^\infty X^{k^2+k},\label{JTP3}
\end{align}
where in \eqref{JTP3} we made use of the bijection $k\mapsto -k-1$ on $\mathbb{Z}$. These formulas are needed in the proof of \autoref{thmLJ}.
In \eqref{JTP3}, $X$ occurs only to even powers. By equating the corresponding coefficients, we may replace $X^2$ by $X$ to obtain
\[\prod_{k=1}^\infty(1-X^{2k})(1+X^k)=\prod_{k=1}^\infty(1-X^{k})(1+X^{k})^2=\sum_{k=0}^\infty X^{\frac{k^2+k}{2}}.\]
A very similar identity will be proved in \autoref{Jhoch3}.
\item Relying on \autoref{import2}, we can replace $X$ by $X^3$ and $\alpha$ by $-X$ at the same time in \autoref{jacobi}. This leads to
\[\prod_{k=1}^\infty (1-X^{6k})(1-X^{6k-2})(1-X^{6k-4})=\sum_{k=-\infty}^\infty (-1)^kX^{3k^2+k}.\]
Substituting $X^2$ by $X$ provides Euler's celebrated \emph{pentagonal number theorem}:
\begin{equation}\label{EPTN}
\boxed{\prod_{k=1}^\infty(1-X^k)=\prod_{k=1}^\infty(1-X^{3k})(1-X^{3k-1})(1-X^{3k-2})=\sum_{k=-\infty}^\infty(-1)^kX^{\frac{3k^2+k}{2}}.}
\end{equation}
There is a well-known combinatorial proof of \eqref{EPTN} by Franklin, which is reproduced in the influential book by Hardy--Wright~\cite[Section~19.11]{Hardy}.
\item The following formulas arise in a similar manner by substituting $X$ by $X^5$ and selecting $\alpha\in\{-X,-X^3\}$ afterwards
(this is allowed by \autoref{import2}):
\begin{align}
\prod_{k=1}^\infty (1-X^{5k})(1-X^{5k-2})(1-X^{5k-3})=\sum_{k=-\infty}^\infty (-1)^kX^{\frac{5k^2+k}{2}},\label{mod51}\\
\prod_{k=1}^\infty (1-X^{5k})(1-X^{5k-1})(1-X^{5k-4})=\sum_{k=-\infty}^\infty (-1)^kX^{\frac{5k^2+3k}{2}}.\label{mod52}
\end{align}
This will be used in the proof of \autoref{RRI}.
\end{enumerate}
\end{Ex}
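Euler's pentagonal number theorem \eqref{EPTN} lends itself to a quick machine verification of the first few dozen coefficients. An illustrative sketch (the truncation degree is an arbitrary choice):

```python
N = 60  # truncation degree

lhs = [0] * (N + 1)
lhs[0] = 1
for k in range(1, N + 1):          # factors (1 - X^k) with k > N cannot matter
    for j in range(N, k - 1, -1):
        lhs[j] -= lhs[j - k]

rhs = [0] * (N + 1)
for k in range(-7, 8):
    e = (3 * k * k + k) // 2       # generalized pentagonal number
    if e <= N:
        rhs[e] += (-1) ** k

assert lhs == rhs
```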
To obtain yet another triple product identity, we first consider a finite version due to Hirschhorn~\cite{Hirschhornpoly}.
\begin{Lem}\label{Hirschhorn}
For all $n\in\mathbb{N}_0$,
\begin{equation}\label{HH}
\prod_{k=1}^n(1-X^k)^2=\sum_{k=0}^n(-1)^k(2k+1)X^{\frac{k^2+k}{2}}\gauss{2n+1}{n-k}.
\end{equation}
\end{Lem}
\begin{proof}
The proof is by induction on $n$: Both sides are $1$ if $n=0$. So assume $n\ge 1$ and let $Q_n$ be the right hand side of \eqref{HH}.
The summands of $Q_n$ are invariant under the index shift $k\mapsto -k-1$ and vanish for $|k|>n$. Hence, we may sum over $k\in\mathbb{Z}$ and divide by $2$. A threefold application of \eqref{lemgauss} gives:
\begin{align*}
Q_n&=X^n\frac{1}{2}\sum_{k=-\infty}^\infty(-1)^k(2k+1)X^{\frac{k^2-k}{2}}\gauss{2n}{n-k}+\frac{1}{2}\sum(-1)^k(2k+1)X^{\frac{k^2+k}{2}}\gauss{2n}{n-k-1}\\
&=X^n\frac{1}{2}\sum(-1)^k(2k+1)X^{\frac{k^2-k}{2}}\gauss{2n-1}{n-k}+X^{2n}\frac{1}{2}\sum(-1)^k(2k+1)X^{\frac{k^2+k}{2}}\gauss{2n-1}{n-k-1}\\
&\quad+\frac{1}{2}\sum(-1)^k(2k+1)X^{\frac{k^2+k}{2}}\gauss{2n-1}{n-k-1}+X^n\frac{1}{2}\sum(-1)^k(2k+1)X^{\frac{k^2+3k+2}{2}}\gauss{2n-1}{n-k-2}.
\end{align*}
The second and third sum amount to $(1+X^{2n})Q_{n-1}$. We apply the transformations $k\mapsto k+1$ and $k\mapsto k-1$ in the first sum and fourth sum respectively:
\begin{align*}
Q_n&=(1+X^{2n})Q_{n-1}-X^n\frac{1}{2}\sum(-1)^k\biggl((2k+3)X^{\frac{k^2+k}{2}}\gauss{2n-1}{n-k-1}+(2k-1)X^{\frac{k^2+k}{2}}\gauss{2n-1}{n-k-1}\biggr)\\
&=(1-X^{2n})Q_{n-1}-2X^nQ_{n-1}=(1-X^n)^2Q_{n-1}=\prod_{k=1}^n(1-X^k)^2.\qedhere
\end{align*}
\end{proof}
\begin{Thm}[\textsc{Jacobi}]\index{Jacobi}\label{Jhoch3}
We have
\[\boxed{\prod_{k=1}^\infty(1-X^k)^3=\sum_{k=0}^\infty(-1)^k(2k+1)X^{\frac{k^2+k}{2}}.}\]
\end{Thm}
\begin{proof}
By \autoref{Hirschhorn}, we have
\begin{align*}
\prod_{k=1}^\infty(1-X^k)^3&=\sum_{k=0}^\infty(-1)^k(2k+1)X^{\frac{k^2+k}{2}}\lim_{n\to\infty}\gauss{2n+1}{n-k}\lim_{n\to\infty}\prod_{l=1}^{n+k+1}(1-X^l)\\
&=\sum_{k=0}^\infty(-1)^k(2k+1)X^{\frac{k^2+k}{2}}\lim_{n\to\infty}(1-X^{n-k+1})\ldots(1-X^{2n+1})\\
&=\sum_{k=0}^\infty(-1)^k(2k+1)X^{\frac{k^2+k}{2}}.\qedhere
\end{align*}
\end{proof}
In an analytic framework, \autoref{Jhoch3} can be derived from \autoref{jacobi} (see \cite[Theorem~357]{Hardy}). A combinatorial proof was given in \cite{Joichi}.
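Jacobi's identity can likewise be checked coefficientwise by truncating the product. An illustrative sketch:

```python
N = 60  # truncation degree

lhs = [0] * (N + 1)
lhs[0] = 1
for k in range(1, N + 1):
    for _ in range(3):                 # the factor (1 - X^k), three times
        for j in range(N, k - 1, -1):
            lhs[j] -= lhs[j - k]

rhs = [0] * (N + 1)
k = 0
while k * (k + 1) // 2 <= N:           # triangular exponents (k^2+k)/2
    rhs[k * (k + 1) // 2] += (-1) ** k * (2 * k + 1)
    k += 1

assert lhs == rhs
```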
As a preparation for the famous Rogers--Ramanujan identities~\cite{RogersR}, we start again with a finite version due to Bressoud~\cite{Bressoud}. The impatient reader may skip these technical results and start right away with the applications in \autoref{seccomb} (\autoref{RRI} is only needed in \autoref{nrpartbi}\eqref{schur1},\eqref{schur2}).
\begin{Lem}\label{Bressoud}
For $n\in\mathbb{N}_0$,
\begin{align}
\sum_{k=0}^\infty \gauss{n}{k}X^{k^2}&=\sum_{k=-\infty}^\infty (-1)^k\gauss{2n}{n+2k}X^{\frac{5k^2+k}{2}},\label{Bress1}\\
\sum_{k=0}^\infty \gauss{n}{k}X^{k^2+k}&=\sum_{k=-\infty}^\infty (-1)^k\gauss{2n+1}{n+2k}X^{\frac{5k^2-3k}{2}}.\label{Bress2}
\end{align}
\end{Lem}
\begin{proof}
We follow a simplified proof by Chapman~\cite{Chapman}. Let $\alpha_n$ and $\tilde{\alpha}_n$ be the left and the right hand side respectively of \eqref{Bress1}. Similarly, let $\beta_n$ and $\tilde{\beta}_n$ be the left and right hand side respectively of \eqref{Bress2}. Note that all four sums are actually finite.
We show both equations at the same time by establishing a common recurrence relation between $\alpha_n$, $\beta_n$ and $\tilde{\alpha}_n$, $\tilde{\beta}_n$.
We compute $\alpha_0=\beta_0=\tilde{\alpha}_0=\tilde{\beta}_0=1$. For $n\ge 1$,
\begin{align*}
\alpha_n&\overset{\eqref{lemgauss}}{=}\sum_{k=-\infty}^\infty \biggl(\gauss{n-1}{k}+X^{n-k}\gauss{n-1}{k-1}\biggr)X^{k^2}=\alpha_{n-1}+X^n\sum \gauss{n-1}{k-1}X^{k(k-1)}\\
&=\alpha_{n-1}+X^n\sum \gauss{n-1}{k}X^{k(k+1)}=\alpha_{n-1}+X^n\beta_{n-1},\\
\beta_n-X^n\alpha_n&=\sum_{k=-\infty}^\infty \gauss{n}{k}X^{k^2+k}(1-X^{n-k})=\sum \frac{X^n!}{X^k!X^{n-k}!}X^{k^2+k}(1-X^{n-k})\\
&=(1-X^n)\sum \gauss{n-1}{k}X^{k^2+k}=(1-X^n)\beta_{n-1}.
\end{align*}
These recurrences characterize $\alpha_n$ and $\beta_n$ uniquely. The familiar index transformation $k\mapsto -k-1$ implies $\sum(-1)^k\gauss{2n-2}{n+2k}X^{\frac{5(k^2+k)}{2}}=0$. This is used in the following computation:
\begin{align*}
\tilde{\alpha}_n-\tilde{\alpha}_{n-1}&=\sum (-1)^k\biggl(\gauss{2n}{n+2k}-\gauss{2n-2}{n-1+2k}\biggr)X^{\frac{5k^2+k}{2}}\\
&\overset{\eqref{lemgauss}}{=}\sum (-1)^k\biggl(\gauss{2n-1}{n+2k}+X^{n-2k}\gauss{2n-1}{n+2k-1}-\gauss{2n-2}{n-1+2k}\biggr)X^{\frac{5k^2+k}{2}}\\
&\overset{\eqref{lemgauss}}{=}\sum (-1)^k\biggl(X^{n+2k}\gauss{2n-2}{n+2k}+X^{n-2k}\gauss{2n-1}{n+2k-1}\biggr)X^{\frac{5k^2+k}{2}}=X^n\tilde{\beta}_{n-1},\\
\tilde{\beta}_n-X^n\tilde{\alpha}_n&=\sum_{k=-\infty}^\infty (-1)^k\biggl(\gauss{2n+1}{n+2k}-X^{n+2k}\gauss{2n}{n+2k}\biggr)X^{\frac{5k^2-3k}{2}}\\
&=\sum_{k=-\infty}^\infty (-1)^k\gauss{2n}{n+2k-1}X^{\frac{5k^2-3k}{2}}\\
&=\sum_{k=-\infty}^\infty (-1)^k\biggl(\gauss{2n-1}{n+2k-1}+X^{n-2k+1}\gauss{2n-1}{n+2k-2}\biggr)X^{\frac{5k^2-3k}{2}}\\
&=\tilde{\beta}_{n-1}+X^n\sum_{k=-\infty}^\infty (-1)^k\gauss{2n-1}{n+2k-2}X^{\frac{5k^2-7k+2}{2}}\\
&=\tilde{\beta}_{n-1}+X^n\sum_{k=-\infty}^\infty (-1)^{1-k}\gauss{2n-1}{n-2k}X^{\frac{5(1-k)^2-7(1-k)+2}{2}}\\
&=\tilde{\beta}_{n-1}-X^n\sum_{k=-\infty}^\infty (-1)^k\gauss{2n-1}{n+2k-1}X^{\frac{5k^2-3k}{2}}=(1-X^n)\tilde{\beta}_{n-1}.
\end{align*}
By induction on $n$, it follows that $\alpha_n=\tilde{\alpha}_n$ and $\beta_n=\tilde{\beta}_n$ as desired.
\end{proof}
\begin{Thm}[\textsc{Rogers--Ramanujan} identities]\label{RRI}
We have
\begin{empheq}[box=\fbox]{align}
\prod_{k=1}^\infty\frac{1}{(1-X^{5k-1})(1-X^{5k-4})}&=\sum_{k=0}^\infty\frac{X^{k^2}}{X^k!},\label{RR1}\\
\prod_{k=1}^\infty\frac{1}{(1-X^{5k-2})(1-X^{5k-3})}&=\sum_{k=0}^\infty\frac{X^{k^2+k}}{X^k!}.\label{RR2}
\end{empheq}
\end{Thm}
\begin{proof}
\begin{align*}
\sum_{k=0}^\infty\frac{X^{k^2}}{X^k!}&\overset{\eqref{lim}}{=}\sum_{k=0}^\infty X^{k^2}\lim_{n\to\infty}\gauss{n}{k}\overset{\eqref{Bress1}}{=}\sum_{k=-\infty}^\infty (-1)^kX^{\frac{5k^2+k}{2}}\lim_{n\to\infty}\gauss{2n}{n+2k}\\
&\overset{\eqref{mod51}}{=}\frac{\prod_{k=1}^\infty(1-X^{5k})(1-X^{5k-2})(1-X^{5k-3})}{\prod_{k=1}^\infty(1-X^k)}=\prod_{k=1}^\infty\frac{1}{(1-X^{5k-1})(1-X^{5k-4})},\\
\sum_{k=0}^\infty\frac{X^{k^2+k}}{X^k!}&\overset{\eqref{lim}}{=}\sum_{k=0}^\infty X^{k^2+k}\lim_{n\to\infty}\gauss{n}{k}\overset{\eqref{Bress2}}{=}\sum_{k=-\infty}^\infty (-1)^kX^{\frac{5k^2-3k}{2}}\lim_{n\to\infty}\gauss{2n+1}{n+2k}\\
&\overset{\eqref{mod52}}{=}\frac{\prod_{k=1}^\infty(1-X^{5k})(1-X^{5k-1})(1-X^{5k-4})}{\prod_{k=1}^\infty(1-X^k)}=\prod_{k=1}^\infty\frac{1}{(1-X^{5k-2})(1-X^{5k-3})}.\qedhere
\end{align*}
\end{proof}
The Rogers--Ramanujan identities were long believed to lie deep within the theory of elliptic functions (Hardy~\cite[p. 385]{Hardy} wrote “No proof is really easy (and it would perhaps be unreasonable to expect an easy proof).”; Andrews~\cite[p. 105]{Andrews} wrote “…no doubt it would be unreasonable to expect a really easy proof.”). Meanwhile a great number of proofs have been found, some of which are combinatorial (see \cite{AndrewsRR} or the recent book \cite{Sills}). An interpretation of these identities is given in \autoref{nrpartbi} below.
We point out that there are many “finite identities”, like \autoref{Bressoud}, approaching the Rogers--Ramanujan identities (as there are many rational sequences approaching $\sqrt{2}$).
One can find many more interesting identities, like the \emph{quintuple product}, along with comprehensive references (and analytic proofs) in Johnson~\cite{Johnson}.
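The first Rogers--Ramanujan identity \eqref{RR1} can be confirmed coefficientwise up to any fixed degree; both sides are built from geometric factors $1/(1-X^e)$. An illustrative sketch (truncation degree chosen arbitrarily):

```python
N = 40  # compare coefficients up to degree N

# left hand side: product over parts congruent to 1 or 4 modulo 5
lhs = [0] * (N + 1)
lhs[0] = 1
for e in range(1, N + 1):
    if e % 5 in (1, 4):
        for j in range(e, N + 1):
            lhs[j] += lhs[j - e]       # multiply by 1/(1 - X^e)

# right hand side: sum over k of X^{k^2} / ((1-X)...(1-X^k))
rhs = [0] * (N + 1)
k = 0
while k * k <= N:
    term = [0] * (N + 1)
    term[k * k] = 1
    for e in range(1, k + 1):
        for j in range(e, N + 1):
            term[j] += term[j - e]
    rhs = [a + b for a, b in zip(rhs, term)]
    k += 1

assert lhs == rhs
```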
\section{Applications to combinatorics}\label{seccomb}
In this section we bring to life all of the abstract theorems and identities of the previous section.
If $a_0,a_1,\ldots$ is a sequence of numbers, usually arising from a combinatorial context, the power series $\alpha=\sum a_nX^n$ is called the \emph{generating function} of $(a_n)_n$. Although this seems pointless at first, power series manipulations often reveal explicit formulas for $a_n$ which can hardly be obtained by inductive arguments.
We give a first impression with the most familiar generating functions.
\begin{Ex}\label{exgen}\hfill
\begin{enumerate}[(i)]
\item The number of $k$-element subsets of an $n$-element set is $\binom{n}{k}$ with generating function $(1+X)^n$.
A $k$-element multi-subset (where elements may appear more than once) $\{a_1,\ldots,a_k\}$ of $\{1,\ldots,n\}$ with $a_1\le\ldots\le a_k$ can be turned into a $k$-element subset $\{a_1,a_2+1,\ldots,a_k+k-1\}$ of $\{1,\ldots,n+k-1\}$ and vice versa. The number of $k$-element multi-subsets of an $n$-element set is therefore $\binom{n+k-1}{k}$ with generating function $(1-X)^{-n}$ by Newton's binomial theorem.
\item The number of $k$-dimensional subspaces of an $n$-dimensional vector space over a finite field with $q<\infty$ elements is $\gauss{n}{k}$ evaluated at $X=q$ (indeed there are $(q^n-1)(q^n-q)\ldots(q^n-q^{k-1})$ linearly independent $k$-tuples and $(q^k-1)(q^k-q)\ldots(q^k-q^{k-1})$ of them span the same subspace). The generating function is closely related to Gauss' binomial theorem.
\item The \emph{Fibonacci numbers} $f_n$ are defined by $f_n:=n$ for $n=0,1$ and $f_{n+1}:=f_n+f_{n-1}$ for $n\ge 1$. The generating function $\alpha$ satisfies $\alpha=X+X^2\alpha+X\alpha$ and is therefore given by $\alpha=\frac{X}{1-X-X^2}$. An application of the partial fraction decomposition \eqref{partial} leads to the well-known \emph{Binet formula}
\[f_n=\frac{1}{\sqrt{5}}\Bigl(\frac{1+\sqrt{5}}{2}\Bigr)^n-\frac{1}{\sqrt{5}}\Bigl(\frac{1-\sqrt{5}}{2}\Bigr)^n.\]
\item The \emph{Catalan numbers} $c_n$ are defined by $c_n:=n$ for $n=0,1$ and $c_n:=\sum_{k=1}^{n-1}c_kc_{n-k}$ for $n\ge 2$ (most authors shift the index by $1$). Their generating function $\alpha$ satisfies $\alpha-\alpha^2=X$, i.\,e. it is the reverse of $X-X^2$. This quadratic equation has only one solution $\alpha=\frac{1}{2}(1-\sqrt{1-4X})$ in $\mathbb{C}[[X]]^\circ$. Newton's binomial theorem can be used to derive $c_{n+1}=\frac{1}{n+1}\binom{2n}{n}$. A slightly shorter proof based on Lagrange--Bürmann's inversion formula is given in \autoref{catalan} below.
\end{enumerate}
\end{Ex}
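The explicit formulas in the last two examples are easy to compare against the defining recurrences. An illustrative Python sketch; since $(1+\sqrt5)^n=a+b\sqrt5$ with $a,b\in\mathbb{Z}$, Binet's formula can be evaluated in exact integer arithmetic:

```python
from math import comb

# Fibonacci numbers from the recurrence f_{n+1} = f_n + f_{n-1}
f = [0, 1]
for n in range(1, 30):
    f.append(f[n] + f[n - 1])

def binet(n):
    # write (1+sqrt5)^n = a + b*sqrt5; then Binet gives f_n = 2b / 2^n
    a, b = 1, 0
    for _ in range(n):
        a, b = a + 5 * b, a + b
    return 2 * b // 2 ** n

assert all(binet(n) == f[n] for n in range(30))

# Catalan numbers from the recurrence c_n = sum_{k=1}^{n-1} c_k c_{n-k}
c = [0, 1]
for n in range(2, 20):
    c.append(sum(c[k] * c[n - k] for k in range(1, n)))

assert all(c[n + 1] == comb(2 * n, n) // (n + 1) for n in range(19))
```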
We now focus on combinatorial objects which defy explicit formulas.
\begin{Def}\label{defpart}
A \emph{partition} of $n\in\mathbb{N}$ is a sequence of positive integers $\lambda=(\lambda_1,\ldots,\lambda_l)$ such that $\lambda_1+\ldots+\lambda_l=n$ and $\lambda_1\ge\ldots\ge\lambda_l$. We call $\lambda_1,\ldots,\lambda_l$ the \emph{parts} of $\lambda$.
The set of partitions of $n$ is denoted by $P(n)$ and its cardinality is $p(n):=|P(n)|$.
For $k\in\mathbb{N}_0$ let $p_k(n)$ be the number of partitions of $n$ with each part $\lambda_i\le k$. Finally, let $p_{k,l}(n)$ be the number of partitions of $n$ with each part $\le k$ and at most $l$ parts in total. Clearly, $p_1(n)=p_{n,1}(n)=1$ and $p_n(n)=p_{n,n}(n)=p(n)$. Moreover, $p_{k,l}(n)=0$ whenever $n>kl$. For convenience let $p(0)=p_0(0)=p_{0,0}(0)=1$ ($0$ can be interpreted as the empty sum).
\end{Def}
\begin{Ex}
The partitions of $n=7$ are
\begin{gather*}
(7),(6,1),(5,2),(5,1^2),(4,3),(4,2,1),(4,1^3),(3^2,1),\\
(3,2^2),(3,2,1^2),(3,1^4),(2^3,1),(2^2,1^3),(2,1^5),(1^7).
\end{gather*}
Hence, $p(7)=15$, $p_3(7)=8$ and $p_{3,3}(7)=2$.
\end{Ex}
\begin{Thm}\label{partseries}
The generating functions of $p(n)$, $p_k(n)$ and $p_{k,l}(n)$ are given by
\begin{align*}
\Aboxed{\sum_{n=0}^\infty p(n)X^n&=\prod_{k=1}^\infty\frac{1}{1-X^k},}\\
\sum_{n=0}^\infty p_k(n)X^n&=\frac{1}{X^k!},\\
\sum_{n=0}^\infty p_{k,l}(n)X^n&=\gauss{k+l}{k}.
\end{align*}
\end{Thm}
\begin{proof}
It is easy to see that $p_k(n)$ is the coefficient of $X^n$ in
\begin{equation}\label{eulereq}
\begin{gathered}
(1+X^1+X^{1+1}+\ldots)(1+X^2+X^{2+2}+\ldots)\ldots(1+X^k+X^{k+k}+\ldots)\\
=\frac{1}{1-X}\frac{1}{1-X^2}\ldots\frac{1}{1-X^k}=\frac{1}{X^k!}.
\end{gathered}
\end{equation}
This shows the second equation. The first follows from $p(n)=\lim_{k\to \infty}p_k(n)$. For the last claim we argue by induction on $k+l$ using \eqref{lemgauss}. If $k=0$ or $l=0$, then both sides equal $1$. Thus, let $k,l\ge 1$. Pick a partition $\lambda=(\lambda_1,\lambda_2,\ldots)$ of $n$ with each part $\le k$ and at most $l$ parts. If $\lambda_1<k$, then all parts are $\le k-1$ and $\lambda$ is counted by $p_{k-1,l}(n)$. If on the other hand $\lambda_1=k$, then $(\lambda_2,\lambda_3,\ldots)$ is counted by $p_{k,l-1}(n-k)$. Conversely, each partition counted by $p_{k,l-1}(n-k)$ can be extended to a partition counted by $p_{k,l}(n)$. We have proven the recurrence
\[p_{k,l}(n)=p_{k-1,l}(n)+p_{k,l-1}(n-k).\]
Induction yields
\begin{align*}
\sum p_{k,l}(n)X^n&=\sum p_{k-1,l}(n)X^n+X^k\sum p_{k,l-1}(n)X^n\\
&=\gauss{k+l-1}{k-1}+X^k\gauss{k+l-1}{k}\overset{\eqref{lemgauss}}{=}\gauss{k+l}{k}.\qedhere
\end{align*}
\end{proof}
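The recurrence $p_{k,l}(n)=p_{k-1,l}(n)+p_{k,l-1}(n-k)$ from the proof translates directly into a program, which can be checked against the Gaussian coefficients. An illustrative sketch (function names are ad hoc):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_kl(k, l, n):
    # partitions of n with parts <= k and at most l parts,
    # via the recurrence p_{k,l}(n) = p_{k-1,l}(n) + p_{k,l-1}(n-k)
    if n == 0:
        return 1
    if n < 0 or k == 0 or l == 0:
        return 0
    return p_kl(k - 1, l, n) + p_kl(k, l - 1, n - k)

def gauss(n, k):
    # Gaussian coefficient via the recurrence [n+1, k] = X^k [n, k] + [n, k-1]
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    a = [0] * k + gauss(n - 1, k)
    b = gauss(n - 1, k - 1)
    m = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(m)]

k, l = 3, 4
assert [p_kl(k, l, n) for n in range(k * l + 1)] == gauss(k + l, k)

# the values from the example with n = 7
assert p_kl(7, 7, 7) == 15 and p_kl(3, 7, 7) == 8 and p_kl(3, 3, 7) == 2
```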
\begin{Thm}\label{nrpartbi}
The following assertions hold for $n,k,l\in\mathbb{N}_0$:
\begin{enumerate}[(i)]
\item\label{parta} $p_{k,l}(n)=p_{l,k}(n)=p_{k,l}(kl-n)$ for $n\le kl$.
\item The number of partitions of $n$ into exactly $k$ parts is the number of partitions with largest part $k$.
\item\label{partb} \textup(\textsc{Glaisher}\textup) The number of partitions of $n$ into parts not divisible by $k$ equals the number of partitions with no part repeated $k$ times \textup(or more\textup).
\item\label{eulerodd} \textup(\textsc{Euler}\textup) The number of partitions of $n$ into unequal parts is the number of partitions into odd parts.
\item\label{schur1} \textup(\textsc{Schur}\textup) The number of partitions of $n$ into parts which differ by more than $1$ equals the number of partitions into parts of the form $\pm 1+5k$.
\item\label{schur2} \textup(\textsc{Schur}\textup) The number of partitions of $n$ into parts which differ by more than $1$ and are larger than $1$ equals the number of partitions into parts of the form $\pm2+5k$.
\end{enumerate}
\end{Thm}
\begin{proof}\hfill
\begin{enumerate}[(i)]
\item Since $\gauss{k+l}{k}=\gauss{k+l}{l}$, we obtain $p_{k,l}(n)=p_{l,k}(n)$ by \autoref{partseries}. Let $\lambda=(\lambda_1,\ldots,\lambda_s)$ be a partition counted by $p_{k,l}(n)$. After adding zero parts if necessary, we may assume that $s=l$. Then $\bar{\lambda}:=(k-\lambda_l,k-\lambda_{l-1},\ldots,k-\lambda_1)$ is a partition counted by $p_{k,l}(kl-n)$. Since $\bar{\bar{\lambda}}=\lambda$, we obtain a bijection between the partitions counted by $p_{k,l}(n)$ and $p_{k,l}(kl-n)$.
\item\label{pkn} The number of partitions of $n$ with largest part $k$ is $p_k(n)-p_{k-1}(n)$. The number of partitions with exactly $k$ parts is
\[p_{n,k}(n)-p_{n,k-1}(n)\overset{\eqref{parta}}{=}p_{k,n}(n)-p_{k-1,n}(n)=p_k(n)-p_{k-1}(n).\]
\item Looking at \eqref{eulereq} again, it turns out that the desired generating function is
\[
\prod_{k\,\nmid\, m}\frac{1}{1-X^m}=\prod_{m=1}^\infty\frac{1-X^{km}}{1-X^m}=(1+X+\ldots+X^{k-1})(1+X^2+\ldots+X^{2(k-1)})\ldots.
\]
\item Take $k=2$ in \eqref{partb}.
\item According to \cite[Section~2.4]{Sills}, it was Schur who first gave this interpretation of the Rogers--Ramanujan identities. The coefficient of $X^n$ on the left hand side of \eqref{RR1} is the number of partitions into parts of the form $\pm1+5k$. The right hand side can be rewritten (thanks to \autoref{partseries}) as
\[\sum_{k=0}^\infty\sum_{n=0}^\infty p_k(n)X^{n+k^2}=\sum_{n=0}^\infty\sum_{k=0}^np_k(n-k^2)X^n.\]
By \eqref{pkn}, $p_k(n-k^2)$ counts the partitions of $n-k^2$ with at most $k$ parts.
If $(\lambda_1,\ldots,\lambda_k)$ is such a partition (allowing $\lambda_i=0$ here), then $(\lambda_1+2k-1,\lambda_2+2k-3,\ldots,\lambda_k+1)$ is a partition of $n-k^2+1+3+\ldots+2k-1=n$ with exactly $k$ parts, which all differ by more than $1$.
\item This follows similarly using $k^2+k=2+4+\ldots+2k$. \qedhere
\end{enumerate}
\end{proof}
There is a remarkable connection between \eqref{partb}, \eqref{eulerodd} and \eqref{schur1} of \autoref{nrpartbi}: Numbers not divisible by $3$ are of the form $\pm1+3k$, while odd numbers are of the form $\pm1+4k$.
\begin{Ex}
For $n=7$ the following partitions are counted by \autoref{nrpartbi}:
\begin{center}
\begin{tabular}{llllll}
exactly three parts:& $(5,1^2)$,& $(4,2,1)$,& $(3^2,1)$,& $(3,2^2)$\\
largest part $3$:&$(3^2,1)$,& $(3,2^2)$,& $(3,2,1^2)$,& $(3,1^4)$\\\hline
unequal parts:&$(7)$,& $(6,1)$,& $(5,2)$,& $(4,3)$,& $(4,2,1)$\\
odd parts:& $(7)$,& $(5,1^2)$,& $(3^2,1)$,& $(3,1^4)$,& $(1^7)$\\\hline
parts differ by more than $1$:& $(7)$,& $(6,1)$,& $(5,2)$\\
parts of the form $\pm1+5k$& $(6,1)$,& $(4,1^3)$,& $(1^7)$\\\hline
parts $\ge 2$ differ by more than $1$:& $(7)$,& $(5,2)$\\
parts of the form $\pm2+5k$& $(7)$,& $(3,2^2)$
\end{tabular}
\end{center}
\end{Ex}
Some of the statements in \autoref{nrpartbi} permit nice combinatorial proofs utilizing \emph{Young diagrams} (or \emph{Ferrers diagrams}). We refer the reader to the introductory book~\cite{AndrewsEriksson}.
The following exercise (inspired by \cite{AndrewsEriksson}) can be solved with formal power series.
\begin{A}
Prove the following statements for $n,k\in\mathbb{N}$:
\begin{enumerate}[(a)]
\item The number of partitions of $n$ into even parts is the number of partitions whose parts have even multiplicity.
\item (\textsc{Legendre}) If $n$ is not of the form $\frac{1}{2}(3k^2+k)$ with $k\in\mathbb{Z}$, then the number of partitions of $n$ into an even number of unequal parts is the number of partitions into an odd number of unequal parts.\\
\textit{Hint:} Where have we encountered $\frac{1}{2}(3k^2+k)$ before?
\item (\textsc{Subbarao}) The number of partitions of $n$ where each part appears 2, 3 or 5 times equals the number of partitions into parts of the form $\pm 2+12k$, $\pm3+12k$ or $6+12k$.
\item (\textsc{MacMahon}) The number of partitions of $n$ where each part appears at least twice equals the number of partitions into parts not of the form $\pm1+6k$.
\end{enumerate}
\end{A}
The reader may have noticed that Euler's pentagonal number theorem~\eqref{EPTN} is just the inverse of the generating function of $p(n)$ from \autoref{partseries}, i.\,e.
\[\sum_{n=0}^\infty p(n)X^n\cdot \sum_{k=-\infty}^\infty(-1)^kX^{\frac{3k^2+k}{2}}=1\]
and therefore
\[\sum_{k=-n}^n(-1)^kp\Bigl(n-\frac{3k^2+k}{2}\Bigr)=0\]
for $n\in\mathbb{N}$, where $p(k):=0$ whenever $k<0$. This leads to a recurrence formula
\begin{align*}
p(0)&=1,\\
p(n)&=p(n-1)+p(n-2)-p(n-5)-p(n-7)+\ldots\qquad(n\in\mathbb{N}).
\end{align*}
\begin{Ex}
We compute
\begin{align*}
p(1)&=p(0)=1,&p(4)&=p(3)+p(2)=3+2=5,\\
p(2)&=p(1)+p(0)=2,&p(5)&=p(4)+p(3)-p(0)=5+3-1=7,\\
p(3)&=p(2)+p(1)=3,&p(6)&=p(5)+p(4)-p(1)=7+5-1=11
\end{align*}
(see \url{https://oeis.org/A000041} for more terms).
\end{Ex}
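The recurrence can be checked against a direct dynamic-programming count of partitions. The following Python sketch (illustrative only, not part of the formal development) computes $p(n)$ both ways and compares the results.

```python
# Two independent ways to compute p(n): adding parts one size at a time,
# and Euler's pentagonal number recurrence.

def partitions_dp(N):
    # p[n] = number of partitions of n, built by allowing parts k = 1..N
    p = [1] + [0] * N
    for k in range(1, N + 1):
        for n in range(k, N + 1):
            p[n] += p[n - k]
    return p

def partitions_pentagonal(N):
    # p(n) = p(n-1) + p(n-2) - p(n-5) - p(n-7) + ... over the
    # generalized pentagonal numbers k(3k -+ 1)/2 with sign (-1)^(k+1)
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2
            g2 = k * (3 * k + 1) // 2
            if g1 > n:
                break
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

print(partitions_dp(10))  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```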
The generating functions we have seen so far all have integer coefficients. If $\alpha,\beta\in\mathbb{Z}[[X]]$ and $d\in\mathbb{N}$, we write $\alpha\equiv\beta\pmod{d}$, if all coefficients of $\alpha-\beta$ are divisible by $d$. This is compatible with the ring structure of $\mathbb{Z}[[X]]$, namely if $\alpha\equiv\beta\pmod{d}$ and $\gamma\equiv\delta\pmod{d}$, then $\alpha+\gamma\equiv\beta+\delta\pmod{d}$ and $\alpha\gamma\equiv\beta\delta\pmod{d}$. Now suppose $\alpha\in 1+(X)$. Then the proof of \autoref{leminv} shows $\alpha^{-1}\in\mathbb{Z}[[X]]$. In this case $\alpha\equiv\beta\pmod{d}$ is equivalent to $\alpha^{-1}\equiv\beta^{-1}\pmod{d}$. If $d=p$ happens to be a prime, we have
\[(\alpha+\beta)^p=\sum_{k=0}^p\frac{p(p-1)\ldots(p-k+1)}{k!}\alpha^k\beta^{p-k}\equiv\alpha^p+\beta^p\pmod{p},\]
as in any commutative ring of characteristic $p$.
With this preparation, we come to a remarkable discovery by Ramanujan~\cite{RamanujanCongruence}.
\begin{Thm}[\textsc{Ramanujan}]\label{R57}
The following congruences hold for all $n\in\mathbb{N}_0$:
\[\boxed{p(5n+4)\equiv 0\pmod{5},\qquad p(7n+5)\equiv 0\pmod{7}.}\]
\end{Thm}
\begin{proof}
Let $\alpha:=\prod(1-X^k)$. By the remarks above, $\alpha^5=\prod(1-X^k)^5\equiv \prod(1-X^{5k})\equiv\alpha(X^5)\pmod{5}$ and $\alpha^{-5}\equiv\alpha(X^5)^{-1}\pmod{5}$. For $k\in\mathbb{Z}$ we compute
\[\frac{1}{2}(k^2+k)\equiv\begin{cases}
0&\text{if }k\equiv 0,-1\pmod{5},\\
1&\text{if }k\equiv 1,-2\pmod{5},\\
3&\text{if }k\equiv 2\pmod{5}.
\end{cases}\]
This allows us to write Jacobi's identity \eqref{Jhoch3} in the form
\begin{align*}
\alpha^3&=\smashoperator[r]{\sum_{k\,\equiv\, 0,-1\text{ (mod $5$)}}}(-1)^k(2k+1)X^{\frac{k^2+k}{2}}+\sum_{\mathclap{k\,\equiv\, 1,-2\text{ (mod $5$)}}}(-1)^k(2k+1)X^{\frac{k^2+k}{2}}+\sum_{\mathclap{k\,\equiv\, 2\text{ (mod $5$)}}}(-1)^k(\underbrace{2k+1}_{\mathclap{\equiv\,0\text{ (mod $5$)}}})X^{\frac{k^2+k}{2}}\\
&\equiv\alpha_0+\alpha_1\pmod{5},
\end{align*}
where $\alpha_i$ is formed by the monomials $a_kX^k$ with $k\equiv i\pmod{5}$.
Now \autoref{partseries} implies
\begin{equation}\label{rameq}
\sum_{n=0}^\infty p(n)X^n=\alpha^{-1}=\frac{(\alpha^3)^3}{(\alpha^5)^2}\equiv\frac{(\alpha_0+\alpha_1)^3}{\alpha(X^5)^2}\pmod{5}.
\end{equation}
If we expand $(\alpha_0+\alpha_1)^3$, then only terms $X^k$ with $k\equiv 0,1,2,3\pmod{5}$ occur, while in $\alpha(X^5)^{-2}$ only terms $X^{5k}$ occur. Therefore the right hand side of \eqref{rameq} contains no terms of the form $X^{5k+4}$. So we must have $p(5k+4)\equiv 0\pmod{5}$.
For the congruence modulo $7$ we compute similarly $\frac{1}{2}(k^2+k)\equiv 0,1,3,6\pmod{7}$, where the last case only occurs if $k\equiv 3\pmod{7}$ and in this case $2k+1\equiv 0\pmod{7}$. As before we may write $\alpha^3\equiv\alpha_0+\alpha_1+\alpha_3\pmod{7}$. Then
\[\sum_{n=0}^\infty p(n)X^n=\alpha^{-1}=\frac{(\alpha^3)^2}{\alpha^7}\equiv\frac{(\alpha_0+\alpha_1+\alpha_3)^2}{\alpha(X^7)}\pmod{7}.
\]
Again $X^{7k+5}$ does not appear on the right hand side.
\end{proof}
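Both congruences are easy to confirm numerically; the following Python sketch (a sanity check, not a proof) counts partitions directly and tests the residues.

```python
# Verify p(5n+4) = 0 (mod 5) and p(7n+5) = 0 (mod 7) for small n.

def partition_numbers(N):
    p = [1] + [0] * N
    for k in range(1, N + 1):
        for n in range(k, N + 1):
            p[n] += p[n - k]
    return p

p = partition_numbers(200)
assert all(p[5 * n + 4] % 5 == 0 for n in range(39))
assert all(p[7 * n + 5] % 7 == 0 for n in range(27))
print(p[4], p[9], p[5], p[12])  # 5 30 7 77
```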
Ramanujan also discovered the congruence $p(11n+6)\equiv 0\pmod{11}$ for all $n\in\mathbb{N}_0$ (the reader finds the history of this and other results in \cite{AndrewsEriksson,Hirschhorn}, for instance). This was believed to be more difficult to prove, until elementary proofs were found by Marivani~\cite{Marivani}, Hirschhorn~\cite{HirschhornR11} and others (see also \cite[Section~3.5]{Hirschhorn}). The details are, however, extremely tedious to verify by hand.
By the Chinese remainder theorem, two congruences of coprime moduli can be combined as in
\[p(35n+19)\equiv 0\pmod{35}.\]
Ahlgren~\cite{Ahlgren} (building on Ono~\cite{Ono}) has shown that in fact for every integer $k$ coprime to $6$ there is such a congruence modulo $k$. Unfortunately, they do not look as nice as \autoref{R57}. For instance,
\[p(11^3\cdot13n+237)\equiv 0\pmod{13}.\]
The next result explains the congruence modulo $5$ and is known as Ramanujan's “most beautiful” formula (since \autoref{RRI} was first discovered by Rogers).
\begin{Thm}[\textsc{Ramanujan}]
We have
\[\boxed{\sum_{n=0}^\infty p(5n+4)X^n=5\prod_{k=1}^\infty\frac{(1-X^{5k})^5}{(1-X^k)^6}.}\]
\end{Thm}
\begin{proof}
The arguments are taken from \cite[Chapter~5]{Hirschhorn}, leaving out some unessential details.
This time we start with Euler's pentagonal number theorem. Since
\[\frac{3k^2+k}{2}\equiv \begin{cases}
0&\text{if }k\equiv 0,-2\pmod{5},\\
1&\text{if }k\equiv -1\pmod{5},\\
2&\text{if }k\equiv 1,2\pmod{5},
\end{cases}\]
we can write \eqref{EPTN} in the form
\[\alpha:=\prod(1-X^k)=\sum_{k=-\infty}^\infty (-1)^kX^{\frac{3k^2+k}{2}}=\alpha_0+\alpha_1+\alpha_2,\]
where $\alpha_i$ is formed by the terms $a_kX^k$ with $k\equiv i\pmod{5}$. In fact,
\begin{equation}\label{alpha1}
\alpha_1=\sum_{k=-\infty}^\infty (-1)^{5k-1}X^{\frac{3(5k-1)^2+5k-1}{2}}=-X\sum(-1)^kX^{\frac{75k^2-25k}{2}}=-X\alpha(X^{25}).
\end{equation}
On the other hand we have
\[\sum_{k=0}^\infty(-1)^k(2k+1)X^{\frac{k^2+k}{2}}\overset{\eqref{Jhoch3}}{=}\alpha^3=(\alpha_0+\alpha_1+\alpha_2)^3.\]
When we expand the right hand side, the monomials of the form $X^{5k+2}$ all occur in $3\alpha_0(\alpha_0\alpha_2+\alpha_1^2)$.
Since we have already realized in the proof of \autoref{R57} that $(k^2+k)/2\not\equiv 2\pmod{5}$, we conclude that \begin{equation}\label{short}
\alpha_1^2=-\alpha_0\alpha_2.
\end{equation}
Let $\zeta\in\mathbb{C}$ be a primitive $5$-th root of unity.
Using that
\[X^5-1=\prod_{i=0}^4(X-\zeta^i)=\zeta^{1+2+3+4}\prod(\zeta^{-i}X-1)=\prod(\zeta^iX-1),\]
we compute
\[\prod_{i=0}^4\alpha(\zeta^iX)=\prod_{k=0}^\infty\prod_{i=0}^4(1-\zeta^{ik}X^{k})=\alpha(X^5)^5\prod_{5\,\nmid\,k}(1-X^{5k})=\frac{\alpha(X^5)^6}{\alpha(X^{25})}.\]
This leads to
\begin{align}
\sum& p(n)X^n=\frac{1}{\alpha}=\frac{\alpha(X^{25})}{\alpha(X^5)^6}\alpha(\zeta X)\alpha(\zeta^2X)\alpha(\zeta^3X)\alpha(\zeta^4X)\notag\\
&=\frac{\alpha(X^{25})}{\alpha(X^5)^6}(\alpha_0+\zeta\alpha_1+\zeta^2\alpha_2)(\alpha_0+\zeta^2\alpha_1+\zeta^4\alpha_2)(\alpha_0+\zeta^3\alpha_1+\zeta\alpha_2)(\alpha_0+\zeta^4\alpha_1+\zeta^3\alpha_2).\label{long}
\end{align}
We are only interested in the monomials $X^{5n+4}$. Those arise from the products $\alpha_0^2\alpha_2^2$, $\alpha_0\alpha_1^2\alpha_2$ and $\alpha_1^4$. To facilitate the expansion of the right hand side of \eqref{long}, we notice that the Galois automorphism $\gamma$ of the cyclotomic field $\mathbb{Q}_5$ sending $\zeta$ to $\zeta^2$ permutes the four factors cyclically. Whenever we obtain a product involving some $\zeta^i$, say $\alpha_0^2\alpha_2^2\zeta^3$, the full orbit under $\langle\gamma\rangle$ must occur, which is $\alpha_0^2\alpha_2^2(\zeta+\zeta^2+\zeta^3+\zeta^4)=-\alpha_0^2\alpha_2^2$. Now there are six choices to form $\alpha_0^2\alpha_2^2$. Four of them form a Galois orbit, while the two remaining ones appear without $\zeta$. The whole contribution is therefore $(1+1-1)\alpha_0^2\alpha_2^2=\alpha_0^2\alpha_2^2$. In a similar manner we compute
\[
\sum p(5n+4)X^{5n+4}=\frac{\alpha(X^{25})}{\alpha(X^5)^6}(\alpha_0^2\alpha_2^2-3\alpha_0\alpha_1^2\alpha_2+\alpha_1^4)\overset{\eqref{short}}{=}5\frac{\alpha(X^{25})}{\alpha(X^5)^6}\alpha_1^4\overset{\eqref{alpha1}}{=}5X^4\frac{\alpha(X^{25})^5}{\alpha(X^5)^6}.
\]
The claim follows after dividing by $X^4$ and replacing $X^5$ by $X$.
\end{proof}
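The "most beautiful" formula can also be tested numerically by comparing truncated power series. The Python sketch below (illustrative only) builds the right hand side from the product and matches it against the coefficients $p(5n+4)$.

```python
# Compare sum p(5n+4) X^n with 5 * prod_k (1-X^{5k})^5 / (1-X^k)^6,
# truncated at degree N.
N = 30

def partition_numbers(M):
    p = [1] + [0] * M
    for k in range(1, M + 1):
        for n in range(k, M + 1):
            p[n] += p[n - k]
    return p

rhs = [5] + [0] * N
for k in range(1, N + 1):           # multiply by 1/(1 - X^k) six times
    for _ in range(6):
        for n in range(k, N + 1):
            rhs[n] += rhs[n - k]
for k in range(1, N // 5 + 1):      # multiply by (1 - X^{5k}) five times
    for _ in range(5):
        for n in range(N, 5 * k - 1, -1):
            rhs[n] -= rhs[n - 5 * k]

p = partition_numbers(5 * N + 4)
lhs = [p[5 * n + 4] for n in range(N + 1)]
assert lhs == rhs
print(lhs[:4])  # [5, 30, 135, 490]
```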
Partitions can be generalized to higher dimensions. A \emph{plane partition} of $n\in\mathbb{N}$ is an $n\times n$-matrix $\lambda=(\lambda_{ij})$ consisting of non-negative integers such that
\begin{itemize}
\item $\lambda_{i,1}\ge\lambda_{i,2}\ge\ldots$ and $\lambda_{1,j}\ge\lambda_{2,j}\ge\ldots$ for all $i,j$,
\item $\sum_{i,j=1}^n\lambda_{ij}=n$.
\end{itemize}
Ordinary partitions can be regarded as plane partitions with only one non-zero row.
The number $pp(n)$ of plane partitions of $n$ has the fascinating generating function
\[\sum_{n=0}^\infty pp(n)X^n=\prod_{k=1}^\infty\frac{1}{(1-X^k)^k}=1+X+3X^2+6X^3+13X^4+24X^5+\ldots\]
discovered by MacMahon (see \cite[Corollary~7.20.3]{Stanley2}).
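MacMahon's product is easy to expand by machine; the short Python sketch below (illustrative only) reproduces the coefficients quoted above.

```python
# Truncated expansion of prod_k 1/(1 - X^k)^k, the generating
# function of plane partitions.
N = 12
pp = [1] + [0] * N
for k in range(1, N + 1):
    for _ in range(k):           # divide by (1 - X^k) exactly k times
        for n in range(k, N + 1):
            pp[n] += pp[n - k]
print(pp[:6])  # [1, 1, 3, 6, 13, 24]
```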
\section{Stirling numbers}
We cannot resist presenting a few more exciting combinatorial objects related to power series. Since there are literally hundreds of such combinatorial identities, our selection is inevitably biased by personal taste.
\begin{Def}
A \emph{set partition} of $n\in\mathbb{N}$ is a disjoint union $A_1\mathbin{\dot\cup}\ldots\mathbin{\dot\cup} A_k=\{1,\ldots,n\}$ of non-empty sets $A_i$ in no particular order (we may require $\min A_1<\ldots<\min A_k$ to fix an order). The number of set partitions of $n$ is called the $n$-th \emph{Bell number} $b(n)$. The number of set partitions of $n$ with exactly $k$ parts is the \emph{Stirling number of the second kind} $\stirr{n}{k}$. In particular, $\stirr{n}{n}=\stirr{n}{1}=1$. We set $\stirr{0}{0}=b(0)=1$ describing the empty partition of the empty set.
\end{Def}
\begin{Ex}
The set partitions of $n=3$ are
\[\{1,2,3\}=\{1\}\cup\{2,3\}=\{1,3\}\cup\{2\}=\{1,2\}\cup\{3\}=\{1\}\cup\{2\}\cup\{3\}.\]
Hence, $b(3)=5$ and $\stirr{3}{2}=3$.
\end{Ex}
Unlike the binomial or Gaussian coefficients, the Stirling numbers do not obey a symmetry as in Pascal's triangle.
While the generating functions of $b(n)$ and $\stirr{n}{k}$ have no particularly nice shape, there are close approximations which we are about to see.
\begin{Lem}
For $n,k\in\mathbb{N}_0$,
\begin{equation}\label{stirrek}
\stirr{n+1}{k}=k\stirr{n}{k}+\stirr{n}{k-1}.
\end{equation}
\end{Lem}
\begin{proof}
Without loss of generality, let $1\le k\le n$.
Let $A_1\cup\ldots\cup A_{k-1}$ be a set partition of $n$ with $k-1$ parts. Then $A_1\cup\ldots\cup A_{k-1}\cup\{n+1\}$ is a set partition of $n+1$ with $k$ parts. Now let $A_1\cup\ldots\cup A_k$ be a set partition of $n$. We can add the number $n+1$ to each of the $k$ sets $A_1,\ldots,A_k$ to obtain a set partition of $n+1$ with $k$ parts.
Conversely, every set partition of $n+1$ arises in precisely one of the two described ways.
\end{proof}
\begin{Lem}\label{bellrek}
For $n\in\mathbb{N}_0$,
\[b(n+1)=\sum_{k=0}^n\binom{n}{k}b(k).\]
\end{Lem}
\begin{proof}
Every set partition $\mathcal{A}$ of $n+1$ has a unique part $A$ containing $n+1$. If $k:=|A|-1$, there are $\binom{n}{k}$ choices for $A$. Moreover, $\mathcal{A}\setminus A$ is a uniquely determined partition of the set $\{1,\ldots,n\}\setminus A$ with $n-k$ elements. Hence, there are $b(n-k)$ possibilities for this partition. Consequently,
\[b(n+1)=\sum_{k=0}^n\binom{n}{k}b(n-k)=\sum_{k=0}^n\binom{n}{k}b(k).\qedhere\]
\end{proof}
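Both recurrences can be run side by side; the Python sketch below (illustrative only) computes Bell numbers from the lemma and cross-checks them against row sums of Stirling numbers of the second kind.

```python
# Bell numbers via b(n+1) = sum_k C(n,k) b(k), checked against
# row sums of stirr(n,k) built from stirr(n+1,k) = k stirr(n,k) + stirr(n,k-1).
from math import comb

def bell(N):
    b = [1]
    for n in range(N):
        b.append(sum(comb(n, k) * b[k] for k in range(n + 1)))
    return b

def stirling2_rows(N):
    S = [[1]]                    # S[0][0] = 1
    for n in range(N):
        prev = S[-1] + [0]       # pad so prev[k] is always defined
        S.append([0] + [k * prev[k] + prev[k - 1] for k in range(1, n + 2)])
    return S

b, S = bell(6), stirling2_rows(6)
assert all(b[n] == sum(S[n]) for n in range(7))
print(b)  # [1, 1, 2, 5, 15, 52, 203]
```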
\begin{Thm}
For $n\in\mathbb{N}$ we have
\begin{align*}
\sum_{k=0}^n\stirr{n}{k}X^k&=\exp(-X)\sum_{k=0}^\infty\frac{k^n}{k!}X^k,\\
\sum_{k=0}^\infty \frac{b(k)}{k!}X^k&=\exp\bigl(\exp(X)-1\bigr).
\end{align*}
\end{Thm}
\begin{proof}
We prove both assertions by induction on $n$.
\begin{enumerate}[(i)]
\item For $n=1$, we have
\[\exp(-X)\sum_{k=1}^\infty\frac{1}{(k-1)!}X^k=X\exp(-X)\exp(X)=X=\stirr{1}{1}X\]
as claimed. Assuming the claim for $n$, we have
\begin{align*}
\sum_{k=0}^{n+1}\stirr{n+1}{k}X^k&\overset{\eqref{stirrek}}{=}\sum k\stirr{n}{k}X^k+\sum\stirr{n}{k-1}X^k=X\Bigl(\sum\stirr{n}{k}X^k\Bigr)'+X\sum\stirr{n}{k}X^k\\
&=X\Bigl(\exp(-X)\sum\frac{k^n}{k!}X^k\Bigr)'+X\exp(-X)\sum\frac{k^n}{k!}X^k\\
&=\exp(-X)\sum\frac{k^{n+1}}{k!}X^k.
\end{align*}
\item Since $\exp(X)-1\in (X)$, we can substitute $X$ by $\exp(X)-1$ in $\exp(X)$.
Let
\[\alpha:=\exp\bigl(\exp(X)-1\bigr)=\sum\frac{a_n}{n!}X^n.\]
Then $a_0=\exp(\exp(0)-1)=\exp(0)=1=b(0)$. The chain rule gives
\begin{align*}
\sum_{n=0}^\infty\frac{a_{n+1}}{n!}X^n&=\alpha'=\exp(X)\exp(\exp(X)-1)\\
&=\Bigl(\sum_{k=0}^\infty\frac{1}{k!}X^k\Bigr)\Bigl(\sum_{k=0}^\infty\frac{a_k}{k!}X^k\Bigr)=\sum_{n=0}^\infty\sum_{k=0}^n\frac{a_k}{k!(n-k)!}X^n.
\end{align*}
Therefore, $a_{n+1}=\sum_{k=0}^n\binom{n}{k}a_k$ for $n\ge 0$ and the claim follows from \autoref{bellrek}.\qedhere
\end{enumerate}
\end{proof}
Now we discuss permutations.
\begin{Def}
Let $S_n$ be the symmetric group consisting of all permutations on the set $\{1,\ldots,n\}$.
The number of permutations in $S_n$ with exactly $k$ (disjoint) cycles, including fixed points, is called the \emph{Stirling number of the first kind} $\stir{n}{k}$.
By agreement, $\stir{0}{0}=1$ (the identity on the empty set has zero cycles).
\end{Def}
\begin{Ex}
There are $\stir{4}{2}=11$ permutations in $S_4$ with exactly two cycles:
\begin{gather*}
(1,2,3)(4),\ (1,3,2)(4),\ (1,2,4)(3),\ (1,4,2)(3),\ (1,3,4)(2),\ (1,4,3)(2),\\ (1)(2,3,4),\ (1)(2,4,3),\ (1,2)(3,4),\ (1,3)(2,4),\ (1,4)(2,3).
\end{gather*}
\end{Ex}
Since $|S_n|=n!$, there is no need for a generating function of the number of permutations.
\begin{Lem}
For $k,n\in\mathbb{N}_0$,
\begin{equation}\label{lemstir}
\stir{n+1}{k}=\stir{n}{k-1}+n\stir{n}{k}.
\end{equation}
\end{Lem}
\begin{proof}
Without loss of generality, let $1\le k\le n$.
Let $\sigma\in S_n$ with exactly $k-1$ cycles. By appending the $1$-cycle $(n+1)$ to $\sigma$ we obtain a permutation counted by $\stir{n+1}{k}$.
Now assume that $\sigma$ has $k$ cycles. When we write $\sigma$ as a sequence of $n$ numbers and $2k$ parentheses, there are $n$ meaningful positions where we can insert the number $n+1$. For example, there are three ways to insert $4$ in $\sigma=(1,2)(3)$, namely
\[(4,1,2)(3),\quad(1,4,2)(3),\quad(1,2)(4,3).\]
This yields $n$ distinct permutations counted by $\stir{n+1}{k}$. Conversely, every permutation counted by $\stir{n+1}{k}$ arises in precisely one of the described ways.
\end{proof}
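The recurrence is easily compared with a brute-force cycle count. The Python sketch below (illustrative only) tabulates $\stir{n}{k}$ and confirms $\stir{4}{2}=11$ against all of $S_4$.

```python
# Stirling numbers of the first kind via c(n+1,k) = c(n,k-1) + n c(n,k),
# checked against counting cycles over all permutations of S_4.
from itertools import permutations

def stirling1_rows(N):
    C = [[1]]
    for n in range(N):
        prev = C[-1] + [0]
        C.append([0] + [prev[k - 1] + n * prev[k] for k in range(1, n + 2)])
    return C

def cycle_count(perm):
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

C = stirling1_rows(4)
brute = [0] * 5
for perm in permutations(range(4)):
    brute[cycle_count(perm)] += 1
print(C[4], brute)  # [0, 6, 11, 6, 1] twice; c(4,2) = 11 as in the example
```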
While the recurrence relations we have seen so far appear arbitrary, they can be explained in a unified way (see \cite{Konvalina}).
It is time to present the next dual pair of formulas resembling \autoref{GaussBT}.
\begin{Thm}\label{genstir}
The following generating functions of the Stirling numbers hold for $n\in\mathbb{N}$:
\begin{empheq}[box=\fbox]{align*}
\prod_{k=0}^{n-1}(1+kX)&=\sum_{k=0}^n\stir{n}{n-k}X^k,\\
\prod_{k=1}^n\frac{1}{1-kX}&=\sum_{k=0}^\infty\stirr{n+k}{n}X^k.
\end{empheq}
\end{Thm}
\begin{proof}
This is another induction on $n$.
\begin{enumerate}[(i)]
\item The case $n=1$ yields $1$ on both sides of the equation. Assuming the claim for $n$, we compute
\begin{align*}
\prod_{k=0}^n(1+kX)&=(1+nX)\sum\stir{n}{n-k}X^k=\sum\biggl(\stir{n}{n-k}+n\stir{n}{n-k+1}\biggr)X^k\\
&\overset{\eqref{lemstir}}{=}\sum\stir{n+1}{n+1-k}X^k.
\end{align*}
\item For $n=1$, we get the geometric series $\frac{1}{1-X}=\sum X^k$ on both sides. Assume the claim for $n-1$. Then
\begin{align*}
(1-nX)\sum_{k=0}^\infty\stirr{n+k}{n}X^k&=\sum\biggl(\stirr{n+k}{n}-n\stirr{n+k-1}{n}\biggr)X^k\\
&\overset{\eqref{stirrek}}{=}\sum\stirr{n-1+k}{n-1}X^k=\prod_{k=1}^{n-1}\frac{1}{1-kX}.\qedhere
\end{align*}
\end{enumerate}
\end{proof}
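The second generating function lends itself to a quick numerical test; the Python sketch below (illustrative only) expands $\prod_{k=1}^n(1-kX)^{-1}$ and compares the coefficients with Stirling numbers of the second kind.

```python
# Coefficients of prod_{k=1}^n 1/(1 - kX) should be stirr(n+k, n).

def stirling2(n, k):
    # stirr(n+1, k) = k stirr(n, k) + stirr(n, k-1)
    if n == k:
        return 1
    if k <= 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

N = 8
for n in range(1, 5):
    series = [1] + [0] * N
    for k in range(1, n + 1):        # multiply by 1/(1 - kX)
        for m in range(1, N + 1):
            series[m] += k * series[m - 1]
    assert series == [stirling2(n + k, n) for k in range(N + 1)]
print(series[:4])  # n = 4: [1, 10, 65, 350]
```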
For those who still do not have enough, the next exercise might be of interest.
\begin{A}\label{bern}\hfill
\begin{enumerate}[(a)]
\item Prove \emph{Vandermonde's identity} $\sum_{k=0}^n\binom{a}{k}\binom{b}{n-k}=\binom{a+b}{n}$ for all $a,b\in\mathbb{C}$ by using Newton's binomial theorem.
\item For every prime $p$ and $1<k<p$, show that $\stir{p}{k}$ is divisible by $p$ (a property shared with $\binom{p}{k}$).
\item Prove that
\[\sum_{k=0}^n(-1)^k\stirr{n}{k}\stir{k}{m}=0\]
for $n\ne m$.
\item Determine all $n\in\mathbb{N}$ such that the Catalan number $c_n$ is odd. \\
\textit{Hint:} Consider the generating function modulo $2$.
\item The \emph{Bernoulli numbers} $b_n\in\mathbb{Q}$ are defined directly by their (exponential) generating function
\[\frac{X}{\exp(X)-1}=\sum_{n=0}^\infty \frac{b_n}{n!}X^n.\]
Compute $b_0,\ldots,b_3$ and show that $b_{2n+1}=0$ for every $n\in\mathbb{N}$.\\
\textit{Hint:} Replace $X$ by $-X$.
\end{enumerate}
\end{A}
The \emph{cycle type} of a permutation $\sigma\in S_n$ is denoted by $(1^{a_1},\ldots,n^{a_n})$, meaning that $\sigma$ has precisely $a_k$ cycles of length $k$.
\begin{Lem}\label{lemperm}
The number of permutations $\sigma\in S_n$ with cycle type $(1^{a_1},\ldots,n^{a_n})$ is
\[\frac{n!}{1^{a_1}\ldots n^{a_n}a_1!\ldots a_n!}.\]
\end{Lem}
\begin{proof}
Each cycle of $\sigma$ determines a subset of $\{1,\ldots,n\}$. The number of possibilities to choose such subsets is given by the multinomial coefficient
\[\frac{n!}{(1!)^{a_1}\ldots (n!)^{a_n}}.\]
Since the $a_i$ subsets of size $i$ can be permuted in $a_i!$ ways, each corresponding to the same permutation (as disjoint cycles commute), the number of relevant choices is only
\[\frac{n!}{(1!)^{a_1}\ldots (n!)^{a_n}a_1!\ldots a_n!}.\]
A given subset $\{\lambda_1,\ldots,\lambda_k\}\subseteq\{1,\ldots,n\}$ can be arranged in $k!$ permutations, but only $(k-1)!$ different cycles, since $(\lambda_1,\ldots,\lambda_k)=(\lambda_2,\ldots,\lambda_k,\lambda_1)=\ldots$.
Hence, the number of permutations in question is
\[\frac{n!}{(1!)^{a_1}\ldots (n!)^{a_n}a_1!\ldots a_n!}\bigl((1-1)!\bigr)^{a_1}\ldots\bigl((n-1)!\bigr)^{a_n}=\frac{n!}{1^{a_1}\ldots n^{a_n}a_1!\ldots a_n!}.\qedhere\]
\end{proof}
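The counting formula of the lemma can be verified by a brute-force tally. The Python sketch below (illustrative only) classifies all of $S_5$ by cycle type and checks each class size.

```python
# Check n!/(1^{a_1}...n^{a_n} a_1!...a_n!) against a tally over S_5.
from itertools import permutations
from math import factorial
from collections import Counter

def cycle_type(perm):
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

n = 5
tally = Counter(cycle_type(p) for p in permutations(range(n)))
for typ, count in tally.items():
    mult = Counter(typ)                 # a_k = multiplicity of length k
    formula = factorial(n)
    for k, a in mult.items():
        formula //= k ** a * factorial(a)
    assert count == formula
print(tally[(1, 2, 2)])  # 15 permutations of cycle type (1, 2^2)
```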
The following is a sibling to Glaisher's theorem. For a non-negative real number $r$ we denote the largest integer $n\le r$ by $n=\lfloor r\rfloor$.
\begin{Thm}[\textsc{Erd{\H{o}}s--Tur{\'a}n}]\label{Turan}
Let $n,d\in\mathbb{N}$. The number of permutations in $S_n$ whose cycle lengths are not divisible by $d$ is
\[n!\prod_{k=1}^{\lfloor n/d\rfloor}\frac{kd-1}{kd}.\]
\end{Thm}
\begin{proof}
According to \cite{Maroti}, the idea of the proof is credited to Pólya.
We need to count permutations with cycle type $(1^{a_1},\ldots,n^{a_n})$ where $a_k=0$ whenever $d\mid k$.
By \autoref{lemperm}, the total number divided by $n!$ is the coefficient of $X^n$ in
\begin{align*}
\prod_{\substack{k=1\\d\,\nmid\, k}}^\infty\sum_{a=0}^\infty \frac{1}{a!}\Bigl(\frac{X^k}{k}\Bigr)^a&=\prod_{d\,\nmid\, k}\exp\Bigl(\frac{X^k}{k}\Bigr)\overset{\eqref{func2}}{=}\exp\Bigl(\sum_{d\,\nmid\, k}\frac{X^k}{k}\Bigr)=\exp\Bigl(\sum_{k=1}^\infty\frac{X^k}{k}-\sum_{k=1}^\infty\frac{X^{dk}}{dk}\Bigr)\\
&=\exp\Bigl(-\log(1-X)+\frac{1}{d}\log(1-X^d)\Bigr)\overset{\eqref{funclog}}{=}\sqrt[d]{1-X^d}\,\frac{1}{1-X}\\%=\sqrt[d]{\frac{1-X^d}{1-X}}(1-X)^{\frac{1-d}{d}}
&=\frac{1-X^d}{1-X}(1-X^d)^{\frac{1-d}{d}}\overset{\eqref{newtoneq}}{=}\Bigl(\sum_{r=0}^{d-1} X^r\Bigr)\biggl(\sum_{q=0}^\infty\binom{(1-d)/d}{q}(-X^d)^q\biggr).
\end{align*}
Therein, $X^n$ appears if and only if $n=qd+r$ with $0\le r<d$ and $q=\lfloor n/d\rfloor$ (Euclidean division). In this case the coefficient is
\[(-1)^q\binom{(1-d)/d}{q}=(-1)^q\prod_{k=1}^q\frac{\frac{1}{d}-k}{k}=\prod_{k=1}^q\frac{kd-1}{kd}.\qedhere\]
\end{proof}
\begin{Ex}
A permutation has odd order as an element of $S_n$ if and only if all its cycles have odd length. The number of such permutations is therefore
\[n!\prod_{k=1}^{\lfloor n/2\rfloor}\frac{2k-1}{2k}=\begin{cases}
1^2\cdot 3^2\cdot\ldots\cdot (n-1)^2&\text{if $n$ is even},\\
1^2\cdot 3^2\cdot\ldots\cdot (n-2)^2\cdot n&\text{if $n$ is odd}.
\end{cases}\]
\end{Ex}
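The Erd\H{o}s--Tur\'an count can be confirmed by brute force for small parameters; the following Python sketch (illustrative only) does so and also reproduces the value from the example.

```python
# Brute-force check of the Erdos-Turan formula for small n and d.
from itertools import permutations
from math import factorial

def cycle_lengths(perm):
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
    return lengths

def erdos_turan(n, d):
    # n! * prod_{k=1}^{floor(n/d)} (kd - 1)/(kd), kept in integers
    num, den = factorial(n), 1
    for k in range(1, n // d + 1):
        num *= k * d - 1
        den *= k * d
    return num // den

for n in range(1, 7):
    for d in range(2, 5):
        brute = sum(1 for p in permutations(range(n))
                    if all(l % d != 0 for l in cycle_lengths(p)))
        assert brute == erdos_turan(n, d)
print(erdos_turan(6, 2))  # 225 = 1^2 * 3^2 * 5^2, as in the example
```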
\begin{A}
Find and prove a similar formula for the number of permutations $\sigma\in S_n$ whose cycle lengths are all divisible by $d$.
\end{A}
We insert a well-known application of Bernoulli numbers.
\begin{Thm}[\textsc{Faulhaber}]
For every $d\in\mathbb{N}$ there exists a polynomial $\alpha\in\mathbb{C}[X]$ of degree $d+1$ such that $1^d+2^d+\ldots+n^d=\alpha(n)$ for every $n\in\mathbb{N}$.
\end{Thm}
\begin{proof}
We compute the generating function
\begin{align*}
\sum_{d=0}^\infty\Bigl(\sum_{k=1}^{n-1}k^d\Bigr)\frac{X^d}{d!}&=\sum_{k=1}^{n-1}\sum_{d=0}^\infty\frac{(kX)^d}{d!}=\sum_{k=1}^{n-1}\exp(kX)=\sum_{k=1}^{n-1}\exp(X)^k=\frac{\exp(X)^n-1}{\exp(X)-1}\\
&=\frac{\exp(nX)-1}{X}\frac{X}{\exp(X)-1}\overset{\ref{bern}}{=}\sum_{k=0}^\infty\frac{n^{k+1}}{(k+1)!}X^k\sum_{l=0}^\infty\frac{b_l}{l!}X^l\\
&=\sum_{d=0}^\infty\sum_{k=0}^d\Bigl(\frac{n^{k+1}b_{d-k}d!}{(k+1)!(d-k)!}\Bigr)\frac{X^d}{d!}\\
&=\sum_{d=0}^\infty\sum_{k=0}^d\Bigl(\frac{1}{k+1}\binom{d}{k}b_{d-k}n^{k+1}\Bigr)\frac{X^d}{d!}
\end{align*}
and define $\alpha:=\sum_{k=0}^d\frac{1}{k+1}\binom{d}{k}b_{d-k}(X+1)^{k+1}\in\mathbb{C}[X]$. Since $b_0=1$, $\alpha$ is a polynomial of degree $d+1$ with leading coefficient $\frac{1}{d+1}$.
\end{proof}
\begin{Ex}
For $d=3$ the formula in the proof evaluates with some effort (using \autoref{bern}) to:
\[\alpha=b_3(X+1)+\frac{3}{2}b_2(X+1)^2+b_1(X+1)^3+\frac{1}{4}b_0(X+1)^4=\frac{1}{4}(X+1)^2X^2=\binom{X+1}{2}^2.\]
This is known as \emph{Nicomachus}'s identity:
\[1^3+2^3+\ldots+n^3=(1+2+\ldots+n)^2.\]
\end{Ex}
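The construction in the proof can be carried out exactly with rational arithmetic. The Python sketch below (illustrative only; it uses the standard recurrence $\sum_{k=0}^{m}\binom{m+1}{k}b_k=0$ for $m\ge 1$, which follows from the defining series) evaluates Faulhaber's polynomial and compares it with the power sums.

```python
# Bernoulli numbers and Faulhaber's polynomial
# alpha = sum_k C(d,k)/(k+1) * b_{d-k} * (X+1)^{k+1}, evaluated at integers.
from fractions import Fraction
from math import comb

def bernoulli(N):
    b = [Fraction(1)]
    for m in range(1, N + 1):
        b.append(-sum(Fraction(comb(m + 1, k)) * b[k] for k in range(m)) / (m + 1))
    return b

def power_sum(d, n):
    # Faulhaber: alpha(n) = 1^d + 2^d + ... + n^d
    b = bernoulli(d)
    return sum(Fraction(comb(d, k), k + 1) * b[d - k] * (n + 1) ** (k + 1)
               for k in range(d + 1))

assert bernoulli(3) == [1, Fraction(-1, 2), Fraction(1, 6), 0]
for d in range(1, 6):
    for n in range(1, 10):
        assert power_sum(d, n) == sum(j ** d for j in range(1, n + 1))
print(power_sum(3, 10))  # 3025 = (1 + 2 + ... + 10)^2
```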
Even though Faulhaber's formula $1^d+2^d+\ldots+n^d=\alpha(n)$ has not much to do with power series, there still is a dual formula, again featuring Bernoulli numbers:
\[\sum_{k=1}^\infty\frac{1}{k^{2d}}=(-1)^{d+1}\frac{(2\pi)^{2d}b_{2d}}{2(2d)!}\qquad(d\in\mathbb{N}).\]
Strangely, no such formula is known for odd exponents $2d+1\ge 3$ (perhaps because $b_{2d+1}=0$?). In fact, it is unknown whether \emph{Apéry's constant} $\sum_{k=1}^\infty\frac{1}{k^3}=1.202\ldots$ is transcendental.
We end this section with a power series proof of the famous four-square theorem.
\begin{Thm}[\textsc{Lagrange--Jacobi}]\label{thmLJ}
Every positive integer is the sum of four squares. More precisely,
\[q(n):=\bigl|\{(a,b,c,d)\in\mathbb{Z}^4:a^2+b^2+c^2+d^2=n\}\bigr|=8\sum_{4\,\nmid\, d\,\mid\,n}d\]
for $n\in\mathbb{N}$.
\end{Thm}
\begin{proof}
We follow \cite[Section~2.4]{Hirschhorn}. Obviously, it suffices to prove the second assertion (by Jacobi). Since the summands $(-1)^k(2k+1)X^{\frac{k^2+k}{2}}$ in \autoref{Jhoch3} are invariant under the transformation $k\mapsto-k-1$, we can write
\[\prod_{k=1}^\infty(1-X^k)^3=\frac{1}{2}\sum_{k=-\infty}^\infty(-1)^k(2k+1) X^{\frac{k^2+k}{2}}.\]
Taking the square on both sides yields
\[\alpha:=\prod_{k=1}^{\infty}(1-X^k)^6=\frac{1}{4}\sum_{k,l=-\infty}^\infty (-1)^{k+l}(2k+1)(2l+1)X^{\frac{k^2+k+l^2+l}{2}}.\]
The pairs $(k,l)$ with $k\equiv l\pmod{2}$ are transformed by $(k,l)\mapsto(s,t):=\frac{1}{2}(k+l,k-l)$, while the pairs $k\not\equiv l\pmod{2}$ are transformed by $(s,t):=\frac{1}{2}(k-l-1,k+l+1)$.
Notice that $k=s+t$ and $l=s-t$ or $l=t-s-1$ respectively. Hence,
\begin{align*}
\alpha&=\frac{1}{4}\sum_{s,t=-\infty}^\infty (2s+2t+1)(2s-2t+1)X^{\frac{(s+t)^2+s+t+(s-t)^2+s-t}{2}}\\
&\quad-\frac{1}{4}\sum_{s,t=-\infty}^\infty(2s+2t+1)(2t-2s-1)X^{\frac{(s+t)^2+s+t+(t-s-1)^2+t-s-1}{2}}\\
&=\frac{1}{4}\sum_{s,t}\bigl((2s+1)^2-(2t)^2\bigr)X^{s^2+s+t^2}-\frac{1}{4}\sum_{s,t}\bigl((2t)^2-(2s+1)^2\bigr)X^{s^2+s+t^2}\\
&=\frac{1}{2}\sum_{s,t}\bigl((2s+1)^2-(2t)^2\bigr)X^{s^2+s+t^2}\\
&=\frac{1}{2}\sum_{t=-\infty}^\infty X^{t^2}\sum_{s=-\infty}^\infty(2s+1)^2X^{s^2+s}-\frac{1}{2}\sum_{s=-\infty}^\infty X^{s^2+s}\sum_{t=-\infty}^\infty (2t)^2X^{t^2}.
\end{align*}
For $\beta:=\sum X^{t^2}$ and $\gamma:=\frac{1}{2}\sum X^{s^2+s}$ we have
$\gamma+4X\gamma'=\frac{1}{2}\sum(2s+1)^2X^{s^2+s}$ and therefore
\[\alpha=\beta(\gamma+4X\gamma')-4X\beta'\gamma=\beta\gamma+4X(\beta\gamma'-\beta'\gamma).\]
Now we apply the infinite product rule to \eqref{JTP1} and \eqref{JTP3}:
\begin{align*}
\beta'&=\Bigl(\prod_{k=1}^\infty(1-X^{2k})(1+X^{2k-1})^2\Bigr)'=\beta\sum_{k=1}^\infty\Bigl(2\frac{(2k-1)X^{2k-2}}{1+X^{2k-1}}-\frac{2kX^{2k-1}}{1-X^{2k}}\Bigr)\\
\gamma'&=\Bigl(\prod_{k=1}^\infty(1-X^{2k})(1+X^{2k})^2\Bigr)'=\gamma\sum_{k=1}^\infty\Bigl(2\frac{2kX^{2k-1}}{1+X^{2k}}-\frac{2kX^{2k-1}}{1-X^{2k}}\Bigr)
\end{align*}
We substitute:
\begin{align*}
\alpha&=\beta\gamma\Bigl(1+8\sum_{k=1}^\infty\Bigl(\frac{2kX^{2k}}{1+X^{2k}}-\frac{(2k-1)X^{2k-1}}{1+X^{2k-1}}\Bigr)\Bigr).
\end{align*}
Here,
\[\beta\gamma=\prod(1-X^{2k})^2(1+X^{2k-1})^2(1+X^{2k})^2=\prod(1-X^{2k})^2(1+X^k)^2=\prod(1-X^{2k})^4(1-X^k)^{-2}.\]
After we set this off against $\alpha$, it remains
\[\Bigl(\sum_{k=-\infty}^\infty (-1)^kX^{k^2}\Bigr)^4\overset{\eqref{JTP2}}{=}\prod_{k=1}^\infty\frac{(1-X^k)^8}{(1-X^{2k})^4}=\frac{\alpha}{\beta\gamma}=1+8\sum_{k=1}^\infty\Bigl(\frac{2kX^{2k}}{1+X^{2k}}-\frac{(2k-1)X^{2k-1}}{1+X^{2k-1}}\Bigr)\]
Finally we replace $X$ by $-X$:
\begin{align*}
\sum q(n)X^n&=\Bigl(\sum_{k=-\infty}^\infty X^{k^2}\Bigr)^4=1+8\sum_{k=1}^\infty\Bigl(\frac{2kX^{2k}}{1+X^{2k}}+\frac{(2k-1)X^{2k-1}}{1-X^{2k-1}}\Bigr)\\
&=1+8\sum_{k=1}^\infty\Bigl(\frac{(2k-1)X^{2k-1}}{1-X^{2k-1}}+\frac{2kX^{2k}}{1-X^{2k}}-\frac{2kX^{2k}}{1-X^{2k}}+\frac{2kX^{2k}}{1+X^{2k}}\Bigr)\\
&=1+8\sum_{k=1}^\infty\Bigl(\frac{kX^{k}}{1-X^{k}}-\frac{4kX^{4k}}{1-X^{4k}}\Bigr)=1+8\sum_{4\,\nmid\, k}\frac{kX^{k}}{1-X^{k}}\\
&=1+8\sum_{4\,\nmid\, k}k\sum_{l=1}^\infty X^{kl}=1+8\sum_{n=1}^\infty\sum_{4\,\nmid\, d\,\mid\, n}dX^n.\qedhere
\end{align*}
\end{proof}
\begin{Ex}
For $n=28$ we obtain
\[\sum_{4\,\nmid\, d\,\mid\, 28}d=1+2+7+14=24.\]
Hence, there are $8\cdot 24=192$ possibilities to express $28$ as a sum of four squares. However, they all arise as permutations and sign-choices of
\[28=5^2+1^2+1^2+1^2=4^2+2^2+2^2+2^2=3^2+3^2+3^2+1^2.\]
\end{Ex}
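Jacobi's divisor-sum formula is easy to test by exhaustive search. The Python sketch below (illustrative only) counts four-square representations directly and matches them against the formula.

```python
# Brute-force check of q(n) = 8 * sum of divisors of n not divisible by 4.
from itertools import product

def q_brute(n):
    m = int(n ** 0.5) + 1
    return sum(1 for a, b, c, d in product(range(-m, m + 1), repeat=4)
               if a * a + b * b + c * c + d * d == n)

def q_formula(n):
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

for n in range(1, 30):
    assert q_brute(n) == q_formula(n)
print(q_formula(28))  # 192, as in the example
```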
\autoref{thmLJ} is best possible in the sense that no integer $n\equiv 7\pmod{8}$ is the sum of three squares, since $a^2+b^2+c^2\not\equiv 7\pmod{8}$.
If $n,m\in\mathbb{N}$ are sums of four squares, so is $nm$ by the following identity of Euler (encoding the multiplicativity of the norm in \emph{Hamilton's quaternion} skew field):
\begin{gather*}
(a_1^2+a_2^2+a_3^2+a_4^2)(b_1^2+b_2^2+b_3^2+b_4^2)=(a_1b_1+a_2b_2+a_3b_3+a_4b_4)^2\\
+(a_1b_2-a_2b_1+a_3b_4-a_4b_3)^2+(a_1b_3-a_3b_1+a_4b_2-a_2b_4)^2+(a_1b_4-a_4b_1+a_2b_3-a_3b_2)^2
\end{gather*}
This reduces the proof of the first assertion (Lagrange's) of \autoref{thmLJ} to the case where $n$ is a prime.
\emph{Waring's problem} asks for the smallest number $g(k)$ such that every positive integer is the sum of $g(k)$ non-negative $k$-th powers. Hilbert proved that $g(k)<\infty$ for all $k\in\mathbb{N}$. We have $g(1)=1$, $g(2)=4$ (\autoref{thmLJ}), $g(3)=9$, $g(4)=19$ and in general it is conjectured that
\[g(k)=\Bigl\lfloor\Bigl(\frac{3}{2}\Bigr)^k\Bigr\rfloor+2^k-2\]
(see \cite{Waring}).
Curiously, only the numbers $23=2\cdot2^3+7\cdot 1^3$ and $239=2\cdot 4^3+4\cdot 3^3+3\cdot 1^3$ require nine cubes.
It is even conjectured that every sufficiently large integer is a sum of only four non-negative cubes (see \cite{4cubes}).
\section{Laurent series}\label{seclaurent}
Every integral domain $R$ can be embedded into its \emph{field of fractions} consisting of the formal fractions $\frac{r}{s}$ where $r,s\in R$ and $s\ne 0$. For our ring $K[[X]]$ these fractions have a more convenient shape.
\begin{Def}
A (formal) \emph{Laurent series} in the indeterminate $X$ over the field $K$ is a sum of the form
\[\alpha=\sum_{k=m}^\infty a_kX^k\]
where $m\in\mathbb{Z}$ and $a_k\in K$ for $k\ge m$ (i.\,e. we allow negative powers of $X$).
We often write $\alpha=\sum_{k=-\infty}^\infty a_kX^k$ assuming that $\inf(\alpha)=\inf\{k\in\mathbb{Z}:a_k\ne 0\}$ exists.
The set of all Laurent series over $K$ is denoted by $K((X))$.
Laurent series can be added and multiplied like power series:
\[\alpha+\beta=\sum_{k=-\infty}^\infty(a_k+b_k)X^k,\qquad\alpha\beta=\sum_{k=-\infty}^\infty\Bigl(\sum_{l=-\infty}^\infty a_lb_{k-l}\Bigr)X^k\]
(one should check that the inner sum is finite). Moreover, the norm $|\alpha|$ and the derivative $\alpha'$ are defined as for power series.
\end{Def}
If a Laurent series is a finite sum, it is naturally called a \emph{Laurent polynomial}. The ring of Laurent polynomials is denoted by $K[X,X^{-1}]$, but plays no role in the following. In analysis one allows doubly infinite sums, but then the product is no longer well-defined, as $\Bigl(\sum_{n=-\infty}^\infty X^n\Bigr)^2$ shows.
\begin{Thm}
The field of fractions of $K[[X]]$ is naturally isomorphic to $K((X))$. In particular, $K((X))$ is a field.
\end{Thm}
\begin{proof}
Repeating the proof of \autoref{intdom} shows that $K((X))$ is a commutative ring. Let $\alpha\in K((X))\setminus\{0\}$ and $k:=\inf(\alpha)$. By \autoref{leminv}, $X^{-k}\alpha\in K[[X]]^\times$. Hence, $X^{-k}(X^{-k}\alpha)^{-1}\in K((X))$ is the inverse of $\alpha$. This shows that $K((X))$ is a field. By the universal property of the field of fractions $Q(K[[X]])$, the embedding $K[[X]]\subseteq K((X))$ extends to a (unique) field monomorphism $f\colon Q(K[[X]])\to K((X))$. If $k=\inf(\alpha)<0$, then $f\bigl(\frac{X^{-k}\alpha}{X^{-k}}\bigr)=\alpha$ and $f$ is surjective.
\end{proof}
Of course, we will view $K[[X]]$ as a subring of $K((X))$. In fact, $K[[X]]$ is the \emph{valuation ring} of $K((X))$, i.\,e. $K[[X]]=\{\alpha\in K((X)):|\alpha|\le 1\}$.
The field $K((X))$ should not be confused with the field of \emph{rational functions} $K(X)$, which is the field of fractions of $K[X]$.
If $\alpha\in K((X))$ and $\beta\in K[[X]]^\circ$, the substitution $\alpha(\beta)$ is still well-defined and \autoref{lemcomp} remains correct ($\alpha$ deviates from a power series by only finitely many terms).
\begin{Def}
The (formal) \emph{residue} of $\alpha=\sum a_kX^k\in K((X))$ is defined by $\operatorname{res}(\alpha):=a_{-1}$.
\end{Def}
The residue is a $K$-linear map such that $\operatorname{res}(\alpha')=0$ for all $\alpha\in K((X))$.
\begin{Lem}\label{lemres}
For $\alpha,\beta\in K((X))$ we have
\begin{align*}
\operatorname{res}(\alpha'\beta)&=-\operatorname{res}(\alpha\beta')\\
\operatorname{res}(\alpha'/\alpha)&=\inf(\alpha)&&(\alpha\ne 0)\\
\operatorname{res}(\alpha)\inf(\beta)&=\operatorname{res}(\alpha(\beta)\beta')&&(\beta\in (X))
\end{align*}
\end{Lem}
\begin{proof}\hfill
\begin{enumerate}[(i)]
\item This follows from the product rule
\[0=\operatorname{res}((\alpha\beta)')=\operatorname{res}(\alpha'\beta)+\operatorname{res}(\alpha\beta').\]
\item\label{res2} Let $\alpha=X^k\gamma$ with $k=\inf(\alpha)$ and $\gamma\in K[[X]]^\times$. Then
\[\frac{\alpha'}{\alpha}=\frac{kX^{k-1}\gamma+X^k\gamma'}{X^k\gamma}=kX^{-1}+\gamma'\gamma^{-1}.\]
Since $\gamma^{-1}\in K[[X]]$, it follows that $\operatorname{res}(\alpha'/\alpha)=k=\inf(\alpha)$.
\item Since $\operatorname{res}$ is a linear map, we may assume that $\alpha=X^k$. If $k\ne -1$, then
\[\operatorname{res}(\alpha(\beta)\beta')=\operatorname{res}(\beta^k\beta')=\operatorname{res}\Bigl(\Bigl(\frac{\beta^{k+1}}{k+1}\Bigr)'\Bigr)=0=\operatorname{res}(\alpha)=\operatorname{res}(\alpha)\inf(\beta).\]
If $k=-1$, then
\[\operatorname{res}(\alpha(\beta)\beta')=\operatorname{res}(\beta'/\beta)\overset{\eqref{res2}}{=}\inf(\beta)=\operatorname{res}(\alpha)\inf(\beta).\qedhere\]
\end{enumerate}
\end{proof}
\begin{Thm}[\textsc{Lagrange--Bürmann}'s inversion formula]\label{lagrange}
The reverse of $\alpha\in\mathbb{C}[[X]]^\circ$ is
\[\boxed{\sum_{k=1}^\infty \frac{\operatorname{res}(\alpha^{-k})}{k}X^k.}\]
\end{Thm}
\begin{proof}
The proof is influenced by \cite{Gessel}.
Let $\beta\in \mathbb{C}[[X]]^\circ$ be the reverse of $\alpha$, i.\,e. $\alpha(\beta)=X$. From $\alpha\in \mathbb{C}[[X]]^\circ$ we know that $\alpha\ne 0$. In particular, $\alpha$ is invertible in $\mathbb{C}((X))$. By \autoref{lemcomp}, we have $\alpha^{-k}(\beta)=X^{-k}$.
Now the coefficient of $X^k$ in $\beta$ turns out to be
\[
\frac{1}{k}\operatorname{res}(kX^{-k-1}\beta)=-\frac{1}{k}\operatorname{res}\bigl((X^{-k})'\beta\bigr)=\frac{1}{k}\operatorname{res}(X^{-k}\beta')=\frac{1}{k}\operatorname{res}\bigl(\alpha^{-k}(\beta)\beta'\bigr)=\frac{1}{k}\operatorname{res}(\alpha^{-k})
\]
by \autoref{lemres}.
\end{proof}
Since \autoref{lagrange} is actually a statement about power series, it should be mentioned that $\operatorname{res}(\alpha^{-k})$ is just the coefficient of $X^{k-1}$ in the power series $(X/\alpha)^k$. This interpretation will be used in our generalization to higher dimensions in \autoref{lagrange2}.
\begin{Ex}\label{catalan}
Recall from \autoref{exgen} that the generating function $\alpha$ of the Catalan numbers $c_n$ is the reverse of $X-X^2$.
Since
\[\Bigl(\frac{X}{X-X^2}\Bigr)^{n+1}=(1-X)^{-n-1}\overset{\eqref{newtoneq}}{=}\sum_{k=0}^\infty\binom{-n-1}{k}(-1)^kX^k,\]
we compute
\[c_{n+1}=\frac{\operatorname{res}(\alpha^{-n-1})}{n+1}=\frac{1}{n+1}(-1)^n\binom{-n-1}{n}=\frac{1}{n+1}\frac{(n+1)\ldots2n}{n!}=\frac{1}{n+1}\binom{2n}{n}.\]
\end{Ex}
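The computation in the example can be checked mechanically. The following Python sketch (added here for illustration, not part of the text; \texttt{series\_mul} and \texttt{series\_inv} are ad-hoc helpers for truncated power series arithmetic) evaluates $\operatorname{res}(\alpha^{-n-1})/(n+1)$ as the coefficient of $X^n$ in $(X/\alpha)^{n+1}$ and compares it with the closed formula:

```python
from fractions import Fraction
from math import comb

def series_mul(a, b, N):
    # multiply truncated power series (coefficient lists, degree < N)
    c = [0]*N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i+j] += ai*b[j]
    return c

def series_inv(a, N):
    # invert a power series with invertible constant term a[0]
    inv = [Fraction(1, a[0])] + [Fraction(0)]*(N-1)
    for k in range(1, N):
        inv[k] = -inv[0]*sum(a[i]*inv[k-i] for i in range(1, min(k, len(a)-1)+1))
    return inv

N = 12
# alpha = X - X^2, hence X/alpha = 1/(1 - X)
x_over_alpha = series_inv([1, -1], N)
for n in range(N):
    # res(alpha^{-n-1}) = coefficient of X^n in (X/alpha)^{n+1}
    power = [Fraction(1)] + [Fraction(0)]*(N-1)
    for _ in range(n+1):
        power = series_mul(power, x_over_alpha, N)
    assert power[n]/(n+1) == comb(2*n, n)//(n+1)
```

The loop confirms the Catalan numbers $\frac{1}{n+1}\binom{2n}{n}$ for all truncation orders at once.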
Our next objective is the construction of the algebraic closure of $\mathbb{C}((X))$. We need a well-known tool.
\begin{Lem}[\textsc{Hensel}]\label{hensel}
Let $R:=K[[X]]$. For a polynomial $\alpha=\sum_{k=0}^na_kY^k\in R[Y]$ let
\[\bar\alpha:=\sum a_k(0)Y^k\in K[Y].\]
Let $\alpha\in R[Y]$ be monic such that $\bar\alpha=\alpha_1\alpha_2$ for some coprime monic polynomials $\alpha_1,\alpha_2\in K[Y]\setminus K$. Then there exist monic $\beta,\gamma\in R[Y]$ such that $\bar\beta=\alpha_1$, $\bar\gamma=\alpha_2$ and $\alpha=\beta\gamma$.
\end{Lem}
\begin{proof}
By hypothesis, $n:=\deg(\alpha)=\deg(\alpha_1)+\deg(\alpha_2)\ge 2$.
Observe that $\bar{\alpha}$ is essentially the reduction of $\alpha$ modulo the ideal $(X)$. In particular, the map $R[Y]\to K[Y]$, $\alpha\mapsto\bar\alpha$ is a ring homomorphism. For $\sigma,\tau\in R[Y]$ and $k\in\mathbb{N}$ we write more generally $\sigma\equiv\tau\pmod{(X^k)}$ if all coefficients of $\sigma-\tau$ lie in $(X^k)$.
First choose any monic polynomials $\beta_1,\gamma_1\in R[Y]$ with $\bar{\beta_1}=\alpha_1$ and $\bar{\gamma_1}=\alpha_2$. Then $\deg(\beta_1)=\deg(\alpha_1)$, $\deg(\gamma_1)=\deg(\alpha_2)$ and $\alpha\equiv\beta_1\gamma_1\pmod{(X)}$.
We construct inductively monic $\beta_k,\gamma_k\in R[Y]$ for $k\ge 2$ such that
\begin{enumerate}[(a)]
\item\label{amod} $\beta_k\equiv \beta_{k+1}$ and $\gamma_k\equiv \gamma_{k+1}\pmod{(X^k)}$,
\item\label{bmod} $\alpha\equiv\beta_k\gamma_k\pmod{(X^k)}$.
\end{enumerate}
Suppose that $\beta_k,\gamma_k$ are given. Choose $\delta\in R[Y]$ such that $\alpha=\beta_k\gamma_k+X^k\delta$ and $\deg(\delta)<n$.
Since $\alpha_1,\alpha_2$ are coprime in the Euclidean domain $K[Y]$, there exist $\sigma,\tau\in R[Y]$ such that $\bar\beta_k\bar\sigma+\bar\gamma_k\bar\tau=\alpha_1\bar\sigma+\alpha_2\bar\tau=1$ by Bézout's lemma.
Since $\beta_k$ is monic, we can perform Euclidean division by $\beta_k$ without leaving $R[Y]$. This yields $\rho,\nu\in R[Y]$ such that $\tau\delta=\beta_k\rho+\nu$ and $\deg(\nu)<\deg(\beta_k)$.
Let $d:=\deg(\gamma_1)$ and write $\sigma\delta+\gamma_k\rho=\mu+\eta Y^d$ with $\deg(\mu)<d$.
Then
\begin{align*}
\beta_{k+1}:=\beta_k+X^k\nu,&&\gamma_{k+1}:=\gamma_k+X^k\mu
\end{align*}
are monic and satisfy \eqref{amod}.
Moreover,
\[\delta\equiv(\beta_k\sigma+\gamma_k\tau)\delta\equiv \beta_k(\sigma\delta+\gamma_k\rho)+\gamma_k\nu\equiv \beta_k\mu+\beta_k\eta Y^d+\gamma_k\nu\pmod{(X)}.\]
Since the degrees of $\delta$, $\beta_k\mu$ and $\gamma_k\nu$ are all smaller than $n$ and $\deg(\beta_k\eta Y^d)\ge n$, it follows that $\bar\eta=0$.
Therefore,
\[
\beta_{k+1}\gamma_{k+1}\equiv\alpha-X^k\delta+(\beta_k\mu+\gamma_k\nu)X^k\equiv\alpha\pmod{(X^{k+1})},
\]
i.\,e. \eqref{bmod} holds for $k+1$. This completes the induction.
Let $\beta_k=\sum_{j=0}^eb_{kj}Y^j$ and $\gamma_k=\sum_{j=0}^dc_{kj}Y^j$ with $b_{kj},c_{kj}\in R$. By construction, $|b_{kj}-b_{k+1,j}|\le 2^{-k}$ and similarly for $c_{kj}$. Since $R$ is complete, the limits $b_j:=\lim_kb_{kj}$ and $c_j:=\lim_kc_{kj}$ exist in $R$. We can now define $\beta:=\sum_{j=0}^e b_jY^j$ and $\gamma:=\sum_{j=0}^d c_jY^j$. Then $\bar\beta=\bar\beta_1=\alpha_1$ and $\bar\gamma=\bar\gamma_1=\alpha_2$. Since $\beta\gamma\equiv\beta_k\gamma_k\equiv\alpha\pmod{(X^k)}$ for every $k\ge 1$, it follows that
$\alpha=\beta\gamma$.
\end{proof}
One can proceed to show that $\beta$ and $\gamma$ are uniquely determined in the situation of \autoref{hensel}, but we do not need this in the following. Indeed, since $R$ is a principal ideal domain, $R[Y]$ is a factorial ring (also called unique factorization domain) by Gauss' lemma.
This means that every monic polynomial in $R[Y]$ has a unique factorization into monic irreducible polynomials.
For every irreducible factor $\omega$ of $\alpha$, $\bar\omega$ either divides $\alpha_1$ or $\alpha_2$ (but not both). This determines the prime factorization of $\beta$ and $\gamma$ uniquely.
\begin{Ex}
Let $n\in\mathbb{N}$, $a\in (X)\subseteq R:=\mathbb{C}[[X]]$ and $\alpha=Y^n-1-a\in R[Y]$. Then $\bar\alpha=Y^n-1=\alpha_1\alpha_2$ with coprime monic $\alpha_1=Y-1$ and $\alpha_2=Y^{n-1}+\ldots+Y+1$. By Hensel's lemma there exist monic $\beta,\gamma\in R[Y]$ such that $\bar\beta=Y-1$, $\bar\gamma=\alpha_2$ and $\alpha=\beta\gamma$. We may write $\beta=Y-1-b$ for some $b\in (X)$. Then $(1+b)^n=1+a$ and the remark after \autoref{defpower} implies $1+b=\sqrt[n]{1+a}$.
Carrying out the constructive procedure from the proof above must inevitably reproduce Newton's binomial series $1+b=\sum_{k=0}^\infty\binom{1/n}{k}a^k$.
\end{Ex}
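A quick numerical spot-check (an illustration added here, not from the text) that the binomial series for $n=2$ really squares to $1+X$; the truncation length $N$ is arbitrary:

```python
from fractions import Fraction

def binom_frac(r, k):
    # generalized binomial coefficient r(r-1)...(r-k+1)/k!
    out = Fraction(1)
    for i in range(k):
        out = out * (r - i) / (i + 1)
    return out

N = 10
s = [binom_frac(Fraction(1, 2), k) for k in range(N)]  # series of sqrt(1+X)

# square the truncated series and compare with 1 + X
sq = [sum(s[i]*s[k-i] for i in range(k+1)) for k in range(N)]
assert sq == [1, 1] + [0]*(N-2)
```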
We have seen that invertible power series in $\mathbb{C}[[X]]$ have arbitrary roots. On the other hand, $X$ does not even have a square root in $\mathbb{C}((X))$. This suggests allowing not only negative, but also fractional powers of $X$.
\begin{Def}
A \emph{Puiseux series} over $K$ is defined by
\[\sum_{k=m}^\infty a_{\frac{k}{n}}X^{\frac{k}{n}},\]
where $m\in\mathbb{Z}$, $n\in\mathbb{N}$ and $a_{\frac{k}{n}}\in K$ for $k\ge m$. The set of Puiseux series is denoted by $K\{\{X\}\}$. For $\alpha,\beta\in K\{\{X\}\}$ there exists $n\in\mathbb{N}$ such that $\tilde\alpha:=\alpha(X^n)$ and $\tilde\beta:=\beta(X^n)$ lie in $K((X))$.
We carry over the field operations from $K((X))$ via
\[\alpha+\beta:=(\tilde\alpha+\tilde\beta)(X^{\frac{1}{n}}),\qquad\alpha\cdot\beta:=(\tilde\alpha\tilde\beta)(X^{\frac{1}{n}}).\]
\end{Def}
It is straightforward to check that the operations do not depend on the choice of $n$ and that $(K\{\{X\}\},+,\cdot)$ is a field.
At this point we have established the following inclusions:
\[K\subseteq K[X]\subseteq K[[X]]\subseteq K((X))\subseteq K\{\{X\}\}.\]
\begin{Thm}[\textsc{Puiseux}]
The algebraic closure of $\mathbb{C}((X))$ is $\mathbb{C}\{\{X\}\}$.
\end{Thm}
\begin{proof}
We follow Nowak~\cite{Nowak}.
Set $R:=\mathbb{C}[[X]]$, $F:=\mathbb{C}((X))$ and $\hat F:=\mathbb{C}\{\{X\}\}$. We show first that $\hat F$ is an algebraic field extension of $F$.
Let $\alpha\in\hat F$ be arbitrary and $n\in\mathbb{N}$ such that $\beta:=\alpha(X^n)\in F$. Let $\zeta\in\mathbb{C}$ be a primitive $n$-th root of unity. Define
\[\Gamma:=\prod_{i=1}^n\bigl(Y-\beta(\zeta^iX)\bigr)=Y^n+\gamma_1Y^{n-1}+\ldots+\gamma_n\in F[Y].\]
Replacing $X$ by $\zeta X$ permutes the factors $Y-\beta(\zeta^iX)$ and thus leaves $\Gamma$ invariant. Consequently, $\gamma_i(\zeta X)=\gamma_i$ for $i=1,\ldots,n$. This means that there exist $\tilde\gamma_i\in F$ such that $\gamma_i=\tilde\gamma_i(X^n)$. Now let
\[\tilde\Gamma:=Y^n+\tilde\gamma_1Y^{n-1}+\ldots+\tilde\gamma_n\in F[Y].\]
Substituting $X$ by $X^n$ in $\tilde\Gamma(\alpha)$ gives $\Gamma(\beta)=0$. Thus, also $\tilde\Gamma(\alpha)=0$. This shows that $\alpha$ is algebraic over $F$ and $\hat F$ is an algebraic extension of $F$.
Now we prove that $\hat F$ is algebraically closed.
Let $\Gamma=Y^n+\gamma_1Y^{n-1}+\ldots+\gamma_n\in \hat F[Y]$ be arbitrary with $n\ge 2$. We need to show that $\Gamma$ has a root in $\hat F$. Without loss of generality, $\Gamma\ne Y^n$. After applying the \emph{Tschirnhaus transformation} $Y\mapsto Y-\frac{1}{n}\gamma_1$, we may assume that $\gamma_1=0$. Let
\[r:=\min\Bigl\{\frac{1}{k}\inf(\gamma_k):k=1,\ldots,n,\ \gamma_k\ne0\Bigr\}\in\mathbb{Q}\]
and $m\in\mathbb{N}$ such that $\gamma_k(X^m)\in F$ for $k=1,\ldots,n$ and $r=\frac{s}{m}$ for some $s\in\mathbb{Z}$.
Define $\delta_0:=1$ and $\delta_k:=\gamma_k(X^m)X^{-ks}\in F$ for $k=1,\ldots,n$. Since
\[\inf(\delta_k)=m\inf(\gamma_k)-ks=m(\inf(\gamma_k)-kr)\ge0,\]
$\Delta:=Y^n+\delta_2Y^{n-2}+\ldots+\delta_n\in R[Y]$.
Consider $\bar\Delta:=Y^n+\delta_2(0)Y^{n-2}+\ldots+\delta_n(0)\in\mathbb{C}[Y]$.
Since $\inf(\delta_k)=0$ for at least one $k\ge 1$, we have $\bar\Delta\ne Y^n$.
Since $\delta_1=0$, also $\bar\Delta\ne (Y-c)^n$ for all $c\in\mathbb{C}$. Using that $\mathbb{C}$ is algebraically closed, we can decompose $\bar\Delta=\bar\Delta_1\bar\Delta_2$ with coprime monic polynomials $\bar\Delta_1,\bar\Delta_2\in\mathbb{C}[Y]$ of degree $<n$. By Hensel's lemma, there exists a corresponding factorization $\Delta=\Delta_1\Delta_2$ with $\Delta_1,\Delta_2\in R[Y]$.
Finally, replace $X$ by $X^{\frac{1}{m}}$ in $\Delta_i$ to obtain $\Gamma_i\in\hat F[Y]$.
Then
\[\Gamma=X^{nr}\sum_{k=0}^n\gamma_kX^{-kr}(YX^{-r})^{n-k}=X^{nr}\sum_{k=0}^n\delta_k(X^{\frac{1}{m}})(YX^{-r})^{n-k}=X^{nr}\Gamma_1(YX^{-r})\Gamma_2(YX^{-r}).\]
Induction on $n$ shows that $\Gamma$ has a root and $\hat F$ is algebraically closed.
\end{proof}
\section{Multivariate power series}\label{secmult}
In \autoref{important} and even more in the last two results it became clear that power series in more than one indeterminate make sense. We now give proper definitions.
\begin{Def}\label{defmulti}\hfill
\begin{enumerate}[(i)]
\item The ring of formal power series in $n$ indeterminates $X_1,\ldots,X_n$ over a field $K$ is defined inductively via
\[K[[X_1,\ldots,X_n]]:=K[[X_1,\ldots,X_{n-1}]][[X_n]].\]
Its elements have the form
\[\alpha=\sum_{k_1,\ldots,k_n\ge 0}a_{k_1,\ldots,k_n}X_1^{k_1}\ldots X_n^{k_n}\]
where $a_{k_1,\ldots,k_n}\in K$. We (still) call $a_{0,\ldots,0}$ the \emph{constant term} of $\alpha$. Let $\inf(\alpha):=\inf\{k_1+\ldots+k_n:a_{k_1,\ldots,k_n}\ne0\}$ and
\[|\alpha|:=2^{-\inf(\alpha)}.\]
\item If all but finitely many coefficients of $\alpha$ are zero, we call $\alpha$ a (formal) polynomial in $X_1,\ldots,X_n$. In this case,
\[\deg(\alpha):=\sup\{k_1+\ldots+k_n:a_{k_1,\ldots,k_n}\ne 0\}\]
is the \emph{degree} of $\alpha$, where $\deg(0)=\sup\varnothing=-\infty$. Moreover, a polynomial $\alpha$ is called \emph{homogeneous} if all monomials occurring in $\alpha$ (with non-zero coefficient) have the same degree.
The set of polynomials is denoted by $K[X_1,\ldots,X_n]$.
\end{enumerate}
\end{Def}
Once we have convinced ourselves that \autoref{intdom} remains true when $K$ is replaced by an integral domain, it becomes evident that also $K[[X_1,\ldots,X_n]]$ is an integral domain. Likewise the norm still gives rise to a complete ultrametric (to prove $|\alpha\beta|=|\alpha||\beta|$ one may assume that $\alpha$ and $\beta$ are homogeneous polynomials) and the crucial \autoref{infsum} holds in $K[[X_1,\ldots,X_n]]$ too.
We stress that this metric is coarser than the one coming from the iterated construction $K[[X_1,\ldots,X_{n-1}]][[X_n]]$ as, for example, $\lim_{k\to\infty}X_1^kX_2$ converges in the former, but not in the latter (with $n=2$).
Moreover, a power series $\alpha$ is invertible in $K[[X_1,\ldots,X_n]]$ if and only if its constant term is non-zero. Indeed, after scaling, the constant term is $1$ and
\[\alpha^{-1}=\frac{1}{1-(1-\alpha)}=\sum_{k=0}^\infty(1-\alpha)^k\]
converges.
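The geometric-series argument can be carried out numerically. Below is a small Python sketch (added for illustration; the dict-based \texttt{mul} is an ad-hoc helper that truncates at total degree $D$) inverting a bivariate series with constant term $1$ by summing the powers of $1-\alpha$:

```python
def mul(a, b, D):
    # multiply bivariate series given as dicts (i, j) -> coefficient,
    # truncating at total degree D
    c = {}
    for (i, j), u in a.items():
        for (k, l), v in b.items():
            if i + j + k + l < D:
                c[(i+k, j+l)] = c.get((i+k, j+l), 0) + u*v
    return {k: v for k, v in c.items() if v}

D = 6
alpha = {(0, 0): 1, (1, 0): 2, (0, 1): -1, (1, 1): 3}
one_minus = {k: -v for k, v in alpha.items() if k != (0, 0)}  # 1 - alpha

# inv = sum of (1 - alpha)^k for k < D; higher powers vanish mod total degree D
inv = {(0, 0): 1}
power = {(0, 0): 1}
for _ in range(1, D):
    power = mul(power, one_minus, D)
    for k, v in power.items():
        inv[k] = inv.get(k, 0) + v

assert mul(alpha, inv, D) == {(0, 0): 1}
```

Since $\inf(1-\alpha)\ge 1$, only the powers $(1-\alpha)^k$ with $k<D$ contribute below total degree $D$, so the partial sum is already the inverse modulo degree $D$.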
Unlike $K[X]$ or $K[[X]]$, the multivariate rings are not principal ideal domains, but one can show that they are still factorial (see \cite{Buchsbaum}).
We do not require this fact in the sequel.
The degree function equips $K[X_1,\ldots,X_n]$ with a \emph{grading}, i.\,e. we have
\[K[X_1,\ldots,X_n]=\bigoplus_{d=0}^\infty P_d\]
and $P_dP_e\subseteq P_{d+e}$ where $P_d$ denotes the set of homogeneous polynomials of degree $d$.
In the following we will restrict ourselves mostly to polynomials of a special type. Note that if $\alpha,\beta_1,\ldots,\beta_n\in K[X_1,\ldots,X_n]$, we can substitute $X_i$ by $\beta_i$ in $\alpha$ to obtain $\alpha(\beta_1,\ldots,\beta_n)\in K[X_1,\ldots,X_n]$. It is important that these substitutions happen simultaneously and not one after the other (more about this at the end of the section).
\begin{Def}
A polynomial $\alpha\in K[X_1,\ldots,X_n]$ is called \emph{symmetric} if
\[\alpha(X_{\pi(1)},\ldots,X_{\pi(n)})=\alpha(X_1,\ldots,X_n)\]
for all permutations $\pi\in S_n$.
\end{Def}
It is easy to see that the symmetric polynomials form a subring of $K[X_1,\ldots,X_n]$.
\begin{Ex}\hfill
\begin{enumerate}[(i)]
\item The \emph{elementary symmetric polynomials} are $\sigma_0:=1$ and
\[\sigma_k:=\sum_{1\le i_1<\ldots<i_k\le n}X_{i_1}\ldots X_{i_k}\qquad(k\ge 1).\]
Note that $\sigma_k=0$ for $k>n$ (empty sum).
\item The \emph{complete symmetric polynomials} are $\tau_0:=1$ and
\[\tau_k:=\sum_{1\le i_1\le \ldots\le i_k\le n}X_{i_1}\ldots X_{i_k}\qquad(k\ge 1).\]
\item The \emph{power sum polynomials} are $\rho_k:=X_1^k+\ldots+X_n^k$ for $k\ge 0$.
\end{enumerate}
\end{Ex}
Keep in mind that $\sigma_k$, $\tau_k$ and $\rho_k$ depend on $n$.
All three sets of polynomials are homogeneous. The elementary and complete symmetric polynomials are special instances of \emph{Schur polynomials}, which we do not attempt to define here.
\begin{Thm}[\textsc{Vieta}]\label{vieta}
The following identities hold in $K[[X_1,\ldots,X_n,Y]]$:
\begin{empheq}[box=\fbox]{align}
\prod_{k=1}^n(1+X_kY)&=\sum_{k=0}^n\sigma_kY^k,\label{vieta1}\\
\prod_{k=1}^n\frac{1}{1-X_kY}&=\sum_{k=0}^\infty \tau_kY^k.\label{vieta2}
\end{empheq}
\end{Thm}
\begin{proof}
The first equation is only a matter of expanding the product. The second equation follows from
\[
\prod_{k=1}^n\frac{1}{1-X_kY}=\prod_{k=1}^n\sum_{l=0}^\infty (X_kY)^l=\sum_{k=0}^\infty\Bigl(\sum_{l_1+\ldots+l_n=k}X_1^{l_1}\ldots X_n^{l_n}\Bigr)Y^k=\sum_{k=0}^\infty \tau_kY^k.\qedhere
\]
\end{proof}
When we specialize $X_1=\ldots=X_n=1$ in Vieta's theorem (as we may), we recover the generating functions of the binomial coefficients and the multiset counting coefficients in \autoref{exgen}. When we substitute $X_k=k$ for $k=1,\ldots,n$, we obtain a new formula for the Stirling numbers by virtue of \autoref{genstir}.
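Since \eqref{vieta1} is a polynomial identity, it can also be tested at arbitrary numeric points. The following Python sketch (an added check, not from the text) expands $\prod_k(1+x_kY)$ coefficient-wise and compares with the elementary symmetric polynomials evaluated at the same points:

```python
from itertools import combinations
from math import prod

xs = [2, 3, 5, 7]
n = len(xs)

# sigma_k evaluated at the chosen points (k = 0, ..., n)
sigma = [sum(prod(c) for c in combinations(xs, k)) for k in range(n+1)]

# expand prod_k (1 + x_k*Y) coefficient-wise in Y
poly = [1]
for x in xs:
    poly = [u + x*v for u, v in zip(poly + [0], [0] + poly)]

assert poly == sigma
```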
It is easy to see that the grading by degree carries over to symmetric polynomials. The following theorem shows that the elementary symmetric polynomials are the building blocks of all symmetric polynomials.
\begin{Thm}[Fundamental theorem on symmetric polynomials]\label{sympoly}
For every symmetric polynomial $\alpha\in K[X_1,\ldots,X_n]$ there exists a unique $\gamma\in K[X_1,\ldots,X_n]$ such that $\alpha=\gamma(\sigma_1,\ldots,\sigma_n)$.
\end{Thm}
\begin{proof}
We first prove the \emph{existence} of $\gamma$: Without loss of generality, let
\[\alpha=\sum_{i_1,\ldots,i_n}a_{i_1,\ldots,i_n}X_1^{i_1}\ldots X_n^{i_n}\ne 0.\]
We order the tuples $(i_1,\ldots,i_n)$ lexicographically and argue by induction on
\[f(\alpha):=\max\bigl\{(i_1,\ldots,i_n):a_{i_1,\ldots,i_n}\ne 0\bigr\}.\]
If $f(\alpha)=(0,\ldots,0)$, then $\gamma:=\alpha=a_{0,\ldots,0}\in K$. Now let $f(\alpha)=(d_1,\ldots,d_n)>(0,\ldots,0)$.
Since $\alpha=\alpha(X_{\pi(1)},\ldots,X_{\pi(n)})$ for all $\pi\in S_n$, $d_1\ge\ldots\ge d_n$. Let
\[\beta:=a_{d_1,\ldots,d_n}\sigma_1^{d_1-d_2}\sigma_2^{d_2-d_3}\ldots\sigma_{n-1}^{d_{n-1}-d_n}\sigma_n^{d_n}.\]
Then we have $f(\sigma_k^{d_k-d_{k+1}})=(d_k-d_{k+1})f(\sigma_k)=(d_k-d_{k+1},\ldots,d_k-d_{k+1},0,\ldots,0)$ and
\[f(\beta)=f(\sigma_1^{d_1-d_2})+\ldots+f(\sigma_n^{d_n})=(d_1,\ldots,d_n).\]
Hence, the symmetric polynomial $\alpha-\beta$ satisfies $f(\alpha-\beta)<(d_1,\ldots,d_n)$ and the existence of $\gamma$ follows by induction.
Now we show the \emph{uniqueness} of $\gamma$: Let $\gamma,\delta\in K[X_1,\ldots,X_n]$ such that $\gamma(\sigma_1,\ldots,\sigma_n)=\delta(\sigma_1,\ldots,\sigma_n)$. For $\rho:=\gamma-\delta$ it follows that $\rho(\sigma_1,\ldots,\sigma_n)=0$. We have to show that $\rho=0$. By way of contradiction, suppose $\rho\ne 0$. Let $d_1\ge\ldots\ge d_n$ be the lexicographically largest $n$-tuple such that the coefficient of $X_1^{d_1-d_2}X_2^{d_2-d_3}\ldots X_n^{d_n}$ in $\rho$ is non-zero. As above, $f(\sigma_1^{d_1-d_2}\ldots\sigma_n^{d_n})=(d_1,\ldots,d_n)$. For every other summand $X_1^{e_1-e_2}\ldots X_n^{e_n}$ of $\rho$ we obtain $f(\sigma_1^{e_1-e_2}\ldots\sigma_n^{e_n})<(d_1,\ldots,d_n)$. This yields $f\bigl(\rho(\sigma_1,\ldots,\sigma_n)\bigr)=(d_1,\ldots,d_n)$ in contradiction to $\rho(\sigma_1,\ldots,\sigma_n)=0$.
\end{proof}
\begin{Ex}
Consider $\alpha=XY^3+X^3Y-X-Y\in K[X,Y]$. With the notation from the proof above, $f(\alpha)=(3,1)$ and \[\beta:=\sigma_1^2\sigma_2=(X+Y)^2XY=X^3Y+2X^2Y^2+XY^3.\]
Thus, $\alpha-\beta=-2X^2Y^2-X-Y$. In the next step we have $f(\alpha-\beta)=(2,2)$ and
\[\beta_2:=-2\sigma_2^2=-2X^2Y^2.\]
This leaves $\alpha-\beta-\beta_2=-X-Y=-\sigma_1$. Finally,
\[\alpha=\beta+\beta_2-\sigma_1=\sigma_1^2\sigma_2-2\sigma_2^2-\sigma_1=\gamma(\sigma_1,\sigma_2)\]
where $\gamma=X^2Y-2Y^2-X$.
\end{Ex}
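As a cross-check of the example (added here, not part of the text), both sides can be evaluated at a few sample points:

```python
# verify alpha = sigma1^2*sigma2 - 2*sigma2^2 - sigma1 at sample points
for x, y in [(1, 2), (3, -1), (5, 7), (-2, -4)]:
    alpha = x*y**3 + x**3*y - x - y
    s1, s2 = x + y, x*y
    assert alpha == s1**2*s2 - 2*s2**2 - s1
```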
From an algebraic point of view, \autoref{sympoly} (applied to $\alpha=0$) states that the elementary symmetric polynomials $\sigma_1,\ldots,\sigma_n$ are algebraically independent over $K$, so they form a transcendence basis of $K(X_1,\ldots,X_n)$ (recall that $K(X_1,\ldots,X_n)$ has transcendence degree $n$).
The identities in the next theorem express the $\sigma_i$ recursively in terms of the $\tau_j$ and in terms of the $\rho_j$. So the latter sets of symmetric polynomials form transcendence bases too. It is no coincidence that $\deg(\sigma_k)=\deg(\tau_k)=\deg(\rho_k)=k$ for $k\le n$. A theorem from invariant theory (in characteristic $0$) implies that any algebraically independent, symmetric, homogeneous polynomials $\lambda_1,\ldots,\lambda_n$ have degrees $1,\ldots,n$ in some order (see \cite[Proposition~3.7]{Humphreys}).
\begin{Thm}[\textsc{Girard--Newton} identities]\label{GNI}
The following identities hold in $K[X_1,\ldots,X_n]$ for all $n,k\in\mathbb{N}$:
\begin{empheq}[box=\fbox]{align*}
\sum_{i=0}^k(-1)^i\sigma_i\tau_{k-i}&=0,\\
\sum_{i=1}^k\rho_i\tau_{k-i}&=k\tau_k,\\
\sum_{i=1}^k(-1)^i\sigma_{k-i}\rho_i&=-k\sigma_k.
\end{empheq}
\end{Thm}
\begin{proof}
Let $\sigma=\sum(-1)^k\sigma_kY^k=\prod(1-X_kY)$ and $\tau:=\sum\tau_kY^k=\prod\frac{1}{1-X_kY}$ as in Vieta's theorem.
\begin{enumerate}[(i)]
\item The claim follows by comparing coefficients of $Y^k$ in
\[1=\sigma\tau=\sum_{k=0}^\infty\Bigl(\sum_{i=0}^k(-1)^i\sigma_i\tau_{k-i}\Bigr)Y^k.\]
\item We differentiate with respect to $Y$ using the product rule while noticing that $\bigl(\frac{1}{1-X_kY}\bigr)'=\frac{X_k}{(1-X_kY)^2}$:
\begin{align*}
\sum_{k=1}^\infty k\tau_kY^k&=Y\tau'=\tau\sum_{k=1}^n\frac{X_kY}{1-X_kY}=\tau\sum_{k=1}^n\sum_{i=1}^\infty(X_kY)^i\\
&=\tau\sum_{i=1}^\infty\rho_iY^i=\sum_{k=1}^\infty\Bigl(\sum_{i=1}^k\rho_i\tau_{k-i}\Bigr)Y^k.
\end{align*}
\item We differentiate again with respect to $Y$ (this idea is often attributed to \cite[p.~212]{Berlekamp}):
\begin{align*}
-\sum_{k=0}^\infty(-1)^kk\sigma_kY^k&=-Y\sigma'=\sigma\sum_{k=1}^n\frac{X_kY}{1-X_kY}=\sigma\sum_{k=1}^\infty\rho_kY^k=\sum_{k=1}^\infty\Bigl(\sum_{i=1}^{k}(-1)^{k-i}\sigma_{k-i}\rho_{i}\Bigr) Y^k.\qedhere
\end{align*}
\end{enumerate}
\end{proof}
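All three identities are easy to confirm numerically. The sketch below (an added illustration, not from the text) evaluates $\sigma_k$, $\tau_k$, $\rho_k$ at the points $2,3,5$; note that the empty sums automatically give $\sigma_k=0$ for $k>n$:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

xs = [2, 3, 5]
K = 6

# sigma_k, tau_k, rho_k evaluated at the points xs
sigma = [sum(prod(c) for c in combinations(xs, k)) for k in range(K+1)]
tau = [sum(prod(c) for c in combinations_with_replacement(xs, k)) for k in range(K+1)]
rho = [sum(x**k for x in xs) for k in range(K+1)]

for k in range(1, K+1):
    assert sum((-1)**i * sigma[i] * tau[k-i] for i in range(k+1)) == 0
    assert sum(rho[i] * tau[k-i] for i in range(1, k+1)) == k * tau[k]
    assert sum((-1)**i * sigma[k-i] * rho[i] for i in range(1, k+1)) == -k * sigma[k]
```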
Now that we know that each of the $\sigma_i$, $\tau_i$ and $\rho_i$ can be expressed by the others, it is natural to ask for explicit formulas. This is achieved by Waring's formula. Here $P(n)$ stands for the set of partitions of $n$ as introduced in \autoref{defpart}.
\begin{Thm}[\textsc{Waring}'s formula]
The following holds in $\mathbb{C}[X_1,\ldots,X_n]$ for all $n,k\in\mathbb{N}$:
\begin{empheq}[box=\fbox]{align*}
\rho_k&=(-1)^kk\sum_{(1^{a_1},\ldots,k^{a_k})\in P(k)}(-1)^{a_1+\ldots+a_k}\frac{(a_1+\ldots+a_k-1)!}{a_1!\ldots a_k!}\sigma_1^{a_1}\ldots\sigma_k^{a_k},\\
&=-k\sum_{(1^{a_1},\ldots,k^{a_k})\in P(k)}(-1)^{a_1+\ldots+a_k}\frac{(a_1+\ldots+a_k-1)!}{a_1!\ldots a_k!}\tau_1^{a_1}\ldots\tau_k^{a_k}.
\end{empheq}
\end{Thm}
\begin{proof}
We introduce a new variable $Y$ and compute in $\mathbb{C}[[X_1,\ldots,X_n,Y]]$. The generating function of $(-1)^k\frac{\rho_k}{k}$ is
\begin{align*}
\sum_{k=1}^\infty (-1)^k\frac{\rho_k}{k}Y^k&=-\sum_{i=1}^n\sum_{k=1}^\infty (-1)^{k-1}\frac{(X_iY)^k}{k}=-\sum_{i=1}^n\log(1+X_iY)\overset{\eqref{funclog}}{=}-\log\Bigl(\prod_{i=1}^n(1+X_iY)\Bigr)\\
&\overset{\eqref{vieta1}}{=}-\log\Bigl(1+\sum_{i=1}^n\sigma_iY^i\Bigr)=\sum_{l=1}^\infty\frac{(-1)^l}{l}\Bigl(\sum_{i=1}^n\sigma_iY^i\Bigr)^l.
\end{align*}
Now we use the multinomial theorem to expand the inner sum:
\begin{align*}
\sum_{k=1}^\infty (-1)^k\frac{\rho_k}{k}Y^k&=\sum_{l=1}^\infty\frac{(-1)^l}{l}\sum_{a_1+\ldots+a_n=l}\frac{l!}{a_1!\ldots a_n!}\sigma_1^{a_1}\ldots\sigma_n^{a_n}Y^{a_1+2a_2+\ldots+na_n}\\
&=\sum_{k=1}^\infty\sum_{(1^{a_1},\ldots,k^{a_k})\in P(k)}(-1)^{a_1+\ldots+a_k}\frac{(a_1+\ldots+a_k-1)!}{a_1!\ldots a_k!}\sigma_1^{a_1}\ldots\sigma_k^{a_k} Y^k.
\end{align*}
Note that $\sigma_k=0$ for $k>n$. This implies the first equation.
For the second we start similarly:
\[\sum_{k=1}^\infty \frac{\rho_k}{k}Y^k=\sum_{i=1}^n\sum_{k=1}^\infty \frac{(X_iY)^k}{k}=\sum_{i=1}^n\log\bigl((1-X_iY)^{-1}\bigr)=\log\Bigl(\prod_{i=1}^n\frac{1}{1-X_iY}\Bigr)\\
\overset{\eqref{vieta2}}{=}\log\Bigl(1+\sum_{i=1}^\infty\tau_iY^i\Bigr).\]
Since we are only interested in the coefficient of $Y^k$, we can truncate the sum to
\[\log\Bigl(1+\sum_{i=1}^k\tau_iY^i\Bigr)=-\sum_{l=1}^\infty\frac{(-1)^l}{l}\sum_{a_1+\ldots+a_k=l}\frac{l!}{a_1!\ldots a_k!}\tau_1^{a_1}\ldots\tau_k^{a_k}Y^{a_1+2a_2+\ldots+ka_k}\]
and argue as before.
\end{proof}
\begin{Ex}
Since we are dealing with polynomials, it is legitimate to replace the indeterminates by actual numbers. Let $x_1,x_2,x_3\in\mathbb{C}$ be the roots of
\[\alpha=X^3+2X^2-3X+1\in\mathbb{C}[X]\]
(guaranteed to exist by the fundamental theorem of algebra).
By Vieta's theorem,
\[\sigma_1(x_1,x_2,x_3)=-2,\qquad\sigma_2(x_1,x_2,x_3)=-3,\qquad\sigma_3(x_1,x_2,x_3)=-1.\]
The partitions of $3$ are $(1^3,2^0,3^0)$, $(1^1,2^1,3^0)$ and $(1^0,2^0,3^1)$. We compute with the first Waring formula
\[x_1^3+x_2^3+x_3^3=\rho_3(x_1,x_2,x_3)=-3\Bigl(-\frac{2!}{3!}(-2)^3+(-2)(-3)-(-1)^3\Bigr)=-29\]
without knowing what $x_1,x_2,x_3$ are!
Here is an alternative approach for those who like matrices. The companion matrix
\[A=\begin{pmatrix}
0&0&-1\\
1&0&3\\
0&1&-2
\end{pmatrix}
\]
of $\alpha$ has characteristic polynomial $\alpha$. Hence, the eigenvalues of $A^k$ are $x_1^k$, $x_2^k$ and $x_3^k$. This shows $\rho_k(x_1,x_2,x_3)=\operatorname{tr}(A^k)$.
\end{Ex}
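The matrix approach from the example is easily reproduced in code (an added illustration; plain $3\times3$ multiplication, no libraries):

```python
def mat_mul(A, B):
    # plain 3x3 matrix multiplication
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# companion matrix of X^3 + 2X^2 - 3X + 1
A = [[0, 0, -1],
     [1, 0, 3],
     [0, 1, -2]]

M = A
for _ in range(2):   # M = A^3
    M = mat_mul(M, A)
trace = M[0][0] + M[1][1] + M[2][2]
assert trace == -29  # matches the Waring computation of rho_3
```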
We invite the reader to prove the other four transition formulas.
\begin{A}\label{exwaring}
Show that the following holds in $\mathbb{C}[X_1,\ldots,X_n]$ for all $n,k\in\mathbb{N}$:
\begin{align}
\sigma_k&=(-1)^k\sum_{(1^{a_1},\ldots,k^{a_k})\in P(k)}(-1)^{a_1+\ldots+a_k}\frac{(a_1+\ldots+a_k)!}{a_1!\ldots a_k!}\tau_1^{a_1}\ldots\tau_k^{a_k},\notag\\
&=(-1)^k\sum_{(1^{a_1},\ldots,k^{a_k})\in P(k)}\frac{(-1)^{a_1+\ldots+a_k}}{1^{a_1}a_1!\ldots k^{a_k}a_k!}\rho_1^{a_1}\ldots \rho_k^{a_k},\label{Wsec}\\
\tau_k&=(-1)^k\sum_{(1^{a_1},\ldots,k^{a_k})\in P(k)}(-1)^{a_1+\ldots+a_k}\frac{(a_1+\ldots+a_k)!}{a_1!\ldots a_k!}\sigma_1^{a_1}\ldots\sigma_k^{a_k},\notag\\
&=\sum_{(1^{a_1},\ldots,k^{a_k})\in P(k)}\frac{1}{1^{a_1}a_1!\ldots k^{a_k}a_k!}\rho_1^{a_1}\ldots \rho_k^{a_k}.\label{W4th}
\end{align}
\textit{Hint:} For \eqref{Wsec} and \eqref{W4th}, mimic the proof of \autoref{Turan} (these are specializations of \emph{Frobenius' formula} on Schur polynomials).
\end{A}
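A numeric spot-check of the last formula \eqref{W4th} is possible without spoiling the requested proof. In the Python sketch below (added for illustration) the partitions of $k$ are encoded as multiplicity vectors $(a_1,\ldots,a_k)$:

```python
from fractions import Fraction
from math import factorial, prod
from itertools import combinations_with_replacement

def partitions(k):
    # yield multiplicity vectors (a_1,...,a_k) with sum of i*a_i equal to k
    def rec(rest, largest, acc):
        if rest == 0:
            yield acc
            return
        for part in range(min(rest, largest), 0, -1):
            nxt = list(acc)
            nxt[part-1] += 1
            yield from rec(rest - part, part, nxt)
    yield from rec(k, k, [0]*k)

xs = [2, 3, 5]
K = 5
tau = [sum(prod(c) for c in combinations_with_replacement(xs, k)) for k in range(K+1)]
rho = [sum(x**k for x in xs) for k in range(K+1)]

for k in range(1, K+1):
    total = Fraction(0)
    for a in partitions(k):
        denom = 1
        term = Fraction(1)
        for i, ai in enumerate(a, start=1):
            denom *= i**ai * factorial(ai)
            term *= Fraction(rho[i])**ai
        total += term / denom
    assert total == tau[k]
```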
Leaving polynomials behind, we now develop multivariate power series more fully.
\begin{Def}
For $\alpha\in K[[X_1,\ldots,X_n]]$ and $1\le i\le n$ let $\partial_i\alpha$ be the $i$-th \emph{partial derivative} with respect to $X_i$, i.\,e. we regard $\alpha$ as a power series in $X_i$ with coefficients in $K[[X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_n]]$ and form the usual derivative. For $k\in\mathbb{N}_0$ let $\partial_i^k\alpha$ be the $k$-th derivative with respect to $X_i$.
\end{Def}
Note that $\partial_i$ is a linear operator, which commutes with all $\partial_j$ (\emph{Schwarz' theorem}). Indeed, by linearity it suffices to check
\[\partial_i\partial_j(X_i^kX_j^l)=\partial_i(lX_i^kX_j^{l-1})=klX_i^{k-1}X_j^{l-1}=\partial_j(kX_i^{k-1}X_j^l)=\partial_j\partial_i(X_i^kX_j^l).\]
We need a fairly general form of the product rule.
\begin{Lem}[\textsc{Leibniz}' rule]
Let $\alpha_1,\ldots,\alpha_s\in\mathbb{C}[[X_1,\ldots,X_n]]$ and $k_1,\ldots,k_n\in\mathbb{N}_0$. Then
\[\boxed{\partial_1^{k_1}\ldots\partial_n^{k_n}(\alpha_1\ldots\alpha_s)=\sum_{l_{11}+\ldots+l_{1s}=k_1}\ldots\sum_{l_{n1}+\ldots+l_{ns}=k_n}\frac{k_1!\ldots k_n!}{\prod_{i,j}l_{ij}!}\prod_{t=1}^s\partial_1^{l_{1t}}\ldots\partial_n^{l_{nt}}\alpha_t.}\]
\end{Lem}
\begin{proof}
For $n=1$ the claim is more or less equivalent to the familiar multinomial theorem
\[(a_1+\ldots+a_s)^k=\sum_{l_1+\ldots+l_s=k}\frac{k!}{l_1!\ldots l_s!}a_1^{l_1}\ldots a_s^{l_s},\]
where $a_1,\ldots,a_s$ lie in any commutative ring. With every new indeterminate we simply apply the case $n=1$ to the formula for $n-1$. In this way the multinomial coefficients get multiplied together.
\end{proof}
Our next goal is the multivariate chain rule for (higher) derivatives.
We equip $\mathbb{C}[[X_1,\ldots,X_n]]^n$ with the direct product ring structure and use the shorthand notation $\alpha:=(\alpha_1,\ldots,\alpha_n)$ and $0:=(0,\ldots,0)$. Write
\[\alpha\circ\beta:=\bigl(\alpha_1(\beta_1,\ldots,\beta_n),\ldots,\alpha_n(\beta_1,\ldots,\beta_n)\bigr)\]
provided this is well-defined.
It is not difficult to show that
\begin{equation}\label{distmult}
\begin{split}
(\alpha+\beta)\circ \gamma=(\alpha\circ\gamma)+(\beta\circ\gamma),\\
(\alpha\cdot\beta)\circ \gamma=(\alpha\circ\gamma)\cdot(\beta\circ\gamma)
\end{split}
\end{equation}
as in \autoref{lemcomp}.
It was remarked by M. Hardy~\cite{MHardy} that Leibniz' rule as well as the chain rule become slightly more transparent when we give up on counting multiplicities of derivatives as follows.
\begin{Thm}[\textsc{Faá di Bruno}'s rule]\label{faa}
Let $\alpha,\beta_1,\ldots,\beta_n\in K[[X_1,\ldots,X_n]]$ such that $\alpha(\beta_1,\ldots,\beta_n)$ is defined. Then for $1\le k_1,\ldots,k_s\le n$ we have
\[\boxed{\partial_{k_1}\ldots\partial_{k_s}\bigl(\alpha(\beta_1,\ldots,\beta_n)\bigr)=\sum_{t=1}^s\sum_{1\le i_1,\ldots,i_t\le n}\sum_{\substack{A_1\dot{\cup}\ldots\dot{\cup} A_t\\=\{1,\ldots,s\}}}(\partial_{A_1}\beta_{i_1})\ldots(\partial_{A_t}\beta_{i_t})(\partial_{i_1}\ldots\partial_{i_t}\alpha)(\beta_1,\ldots,\beta_n),}
\]
where $A_1\dot{\cup}\ldots\dot{\cup} A_t$ runs through the set partitions of $\{1,\ldots,s\}$ and $\partial_{A_j}:=\prod_{a\in A_j}\partial_{k_a}$ for $j=1,\ldots,t$.
\end{Thm}
\begin{proof}
By \eqref{distmult}, we may assume that $\alpha=X_1^{a_1}\ldots X_n^{a_n}$. Then by the product rule,
\begin{equation}\label{s1}
\partial_k\bigl(\alpha(\beta_1,\ldots,\beta_n)\bigr)=\sum_{i=1}^n(\partial_k\beta_i)a_i\beta_1^{a_1}\ldots\beta_i^{a_i-1}\ldots\beta_n^{a_n}=\sum_{i=1}^n(\partial_k\beta_i)(\partial_i\alpha)(\beta_1,\ldots,\beta_n).
\end{equation}
This settles the case $s=1$. Now assume that the claim for arbitrary $s$ is established. When we apply some $\partial_{k_{s+1}}$ to the right hand side of the induction hypothesis, we need the product rule again. There are two cases: either $s+1$ is added to one of the existing sets $A_j$, or $\partial_{k_{s+1}}$ is applied to $(\partial_{i_1}\ldots\partial_{i_t}\alpha)(\beta_1,\ldots,\beta_n)$. In the latter case $t$ increases to $t+1$, $A_{t+1}=\{s+1\}$ and $i_{t+1}$ is introduced as in \eqref{s1}.
\end{proof}
\begin{Ex}
For $n=1$ and $K=\mathbb{C}$, \autoref{faa} “simplifies” to
\begin{align*}
(\alpha(\beta))^{(s)}&=\sum_{t=1}^s\sum_{A_1\dot\cup\ldots\dot\cup A_t}\beta^{(|A_1|)}\ldots\beta^{(|A_t|)}\alpha^{(t)}(\beta)\\
&=\sum_{(1^{a_1},\ldots,s^{a_s})\in P(s)}\frac{s!}{(1!)^{a_1}\ldots (s!)^{a_s}a_1!\ldots a_s!}(\beta')^{a_1}\ldots(\beta^{(s)})^{a_s}\alpha^{(a_1+\ldots+a_s)}(\beta),
\end{align*}
where $(1^{a_1},\ldots,s^{a_s})$ runs over the partitions of $s$ and the coefficient is explained just as in \autoref{lemperm}.
\end{Ex}
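The univariate partition formula can be verified against straightforward series composition. In the Python sketch below (added for illustration) the $s$-th derivative at $0$ is read off from the truncated composite series and compared with the partition sum; \texttt{alpha} and \texttt{beta} are arbitrary integer test series with $\beta(0)=0$:

```python
from math import factorial

def series_mul(a, b, N):
    # multiply truncated power series (coefficient lists, degree < N)
    c = [0]*N
    for i, ai in enumerate(a):
        if ai:
            for j in range(min(N - i, len(b))):
                c[i+j] += ai*b[j]
    return c

def compose(alpha, beta, N):
    # alpha(beta) truncated to degree < N; requires beta[0] == 0
    res, power = [0]*N, [1] + [0]*(N-1)
    for a in alpha:
        res = [r + a*p for r, p in zip(res, power)]
        power = series_mul(power, beta, N)
    return res

def partitions(s):
    # yield multiplicity vectors (a_1,...,a_s) with sum of i*a_i equal to s
    def rec(rest, largest, acc):
        if rest == 0:
            yield acc
            return
        for part in range(min(rest, largest), 0, -1):
            nxt = list(acc)
            nxt[part-1] += 1
            yield from rec(rest - part, part, nxt)
    yield from rec(s, s, [0]*s)

S = 6
alpha = [1, 1, 2, 1, 3, 1, 1]   # test coefficients of alpha
beta = [0, 1, 1, 2, 0, 1, 0]    # test coefficients of beta, beta(0) = 0

comp = compose(alpha, beta, S+1)
for s in range(1, S+1):
    lhs = factorial(s)*comp[s]   # s-th derivative of alpha(beta) at 0
    rhs = 0
    for a in partitions(s):
        denom = 1
        term = factorial(sum(a))*alpha[sum(a)]   # alpha^{(t)}(0), t = a_1+...+a_s
        for i, ai in enumerate(a, start=1):
            denom *= factorial(i)**ai * factorial(ai)
            term *= (factorial(i)*beta[i])**ai   # beta^{(i)}(0) to the power a_i
        rhs += factorial(s)//denom * term
    assert lhs == rhs
```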
\section{MacMahon's master theorem}
In this final section we enter a non-commutative world by making use of matrices. The ultimate goal is the \emph{master theorem} found and named by MacMahon~\cite[Chapter~II]{MacMahon}.
Since $K[[X_1,\ldots,X_n]]$ can be embedded in its field of fractions, the familiar rules of linear algebra (over fields) remain valid in the ring $K[[X_1,\ldots,X_n]]^{n\times n}$ of $n\times n$-matrices with coefficients in $K[[X_1,\ldots,X_n]]$. In particular, the determinant of $A=(\alpha_{ij})_{i,j}$ can be defined by \emph{Leibniz' formula} (not rule)
\[\det(A):=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\alpha_{1\sigma(1)}\ldots\alpha_{n\sigma(n)}.\]
It follows that $\det(A(0))=\det(A)(0)$ by \eqref{distmult}.
Recall that the \emph{adjoint} of $A$ is defined by $\operatorname{adj}(A):=\bigl((-1)^{i+j}\det(A_{ji})\bigr)_{i,j}$ where $A_{ji}$ is obtained from $A$ by deleting the $j$-th row and $i$-th column. Then
\[A\operatorname{adj}(A)=\operatorname{adj}(A)A=\det(A)1_n,\]
where $1_n$ denotes the identity $n\times n$-matrix.
This shows that $A$ is invertible if and only if $\det(A)$ is invertible in $K[[X_1,\ldots,X_n]]$, i.\,e. $\det(A)$ has a non-zero constant term.
Expanding the entries of $A$ as $\alpha_{ij}=\sum a^{(i,j)}_{k_1,\ldots,k_n}X_1^{k_1}\ldots X_n^{k_n}$ gives rise to a natural bijection
\begin{align*}
\Omega\colon K[[X_1,\ldots,X_n]]^{n\times n}&\to K^{n\times n}[[X_1,\ldots,X_n]],\\
A&\mapsto\sum_{k_1,\ldots,k_n}\bigl(a^{(i,j)}_{k_1,\ldots,k_n}\bigr)_{i,j}X_1^{k_1}\ldots X_n^{k_n}.
\end{align*}
Clearly, $\Omega$ is a vector space isomorphism. To verify that it is even a ring isomorphism, it suffices to consider matrices $A$, $B$ with only one non-zero entry each. But then either $AB=0$ or computing $AB$ amounts to a single multiplication in $K[[X_1,\ldots,X_n]]$.
So we can now freely pass from one ring to the other, keeping in mind that we are dealing with power series with non-commuting coefficients!
Allowing some flexibility, we can also expand $A=\sum_i A_iX_k^i$ where $k$ is fixed and $A_i\in K[[X_1,\ldots,X_{k-1},X_{k+1},\ldots,X_n]]^{n\times n}$.
This suggests to define
\[\partial_kA:=\sum_{i=1}^\infty iA_iX_k^{i-1}=(\partial_k\alpha_{ij})_{i,j}.\]
The sum and product differentiation rules remain correct, but the power rule $\partial_k(A^s)=s\partial_k(A)A^{s-1}$ (and in turn Leibniz' rule) does not hold in general, since $A$ might not commute with $\partial_kA$.
The next two results are just a warm-up and not needed later on.
\begin{Lem}\label{lemJac}
Let $A\in\mathbb{C}[[X_1,\ldots,X_n]]^{n\times n}$ and $1\le k\le n$. Then $\partial_k\det(A)=\operatorname{tr}\bigl(\operatorname{adj}(A)\partial_kA\bigr)$.
\end{Lem}
\begin{proof}
Write $A=(\alpha_{ij})$. By Leibniz' formula and the product rule, it follows that
\begin{align*}
\partial_k\det(A)&=\partial_k\Bigl(\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\alpha_{1\sigma(1)}\ldots\alpha_{n\sigma(n)}\Bigr)\\
&=\sum_{i=1}^n\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\alpha_{1\sigma(1)}\ldots\partial_k(\alpha_{i\sigma(i)})\ldots\alpha_{n\sigma(n)}\\
&=\sum_{i=1}^n\sum_{j=1}^n\sum_{\substack{\sigma\in S_n\\\sigma(j)=i}}\operatorname{sgn}(\sigma)\alpha_{1\sigma(1)}\ldots\partial_k(\alpha_{ji})\ldots\alpha_{n\sigma(n)}.
\end{align*}
The permutations $\sigma\in S_n$ with $\sigma(j)=i$ correspond naturally to
\[\tau:=(i,i+1,\ldots,n)^{-1}\sigma(j,j+1,\ldots,n)\in S_{n-1}\]
with $\operatorname{sgn}(\tau)=(-1)^{i+j}\operatorname{sgn}(\sigma)$.
Hence, Leibniz' formula applied to $\det(A_{ji})$ gives
\[\sum_{j=1}^n\sum_{\substack{\sigma\in S_n\\\sigma(j)=i}}\operatorname{sgn}(\sigma)\alpha_{1\sigma(1)}\ldots\partial_k(\alpha_{ji})\ldots\alpha_{n\sigma(n)}=\sum_{j=1}^n(-1)^{i+j}\det(A_{ji})\partial_k(\alpha_{ji}).\]
Since this is the entry of $\operatorname{adj}(A)\partial_kA$ at position $(i,i)$, the claim follows.
\end{proof}
If $A\in\mathbb{C}^{n\times n}[[X_1,\ldots,X_n]]$ has zero constant term, then $\exp(A)=\sum_{k=0}^\infty\frac{A^k}{k!}$ converges and is even invertible since it has constant term $1_n$.
\begin{Thm}[\textsc{Jacobi}'s determinant formula]
Let $A\in\mathbb{C}^{n\times n}[[X_1,\ldots,X_n]]$ with zero constant term. Then
\[\boxed{\det(\exp(A))=\exp(\operatorname{tr}(A)).}\]
\end{Thm}
\begin{proof}
We introduce a new variable $Y$ and consider $B:=\exp(AY)$. Denoting the derivative with respect to $Y$ by $'$, we have
\[B'=\Bigl(\sum_{k=0}^\infty\frac{A^k}{k!}Y^k\Bigr)'=\sum_{k=1}^\infty\frac{A^k}{(k-1)!}Y^{k-1}=AB.\]
Invoking \autoref{lemJac} and using that $B$ is invertible, we compute:
\begin{align*}
\det(B)'&=\operatorname{tr}(\operatorname{adj}(B)B')=\det(B)\operatorname{tr}(B^{-1}AB)=\det(B)\operatorname{tr}(A).
\end{align*}
This is a differential equation, which can be solved as follows.
Write $\det(B)=\sum_{k=0}^\infty B_kY^k$ with $B_k\in\mathbb{C}[[X_1,\ldots,X_n]]$. Then $B_0=\det(B(0))=\det(\exp(0))=\det(1_n)=1$ and $B_{k+1}=\frac{1}{k+1}\operatorname{tr}(A)B_k$ for $k\ge 0$. This yields
\[\det(B)=1+\operatorname{tr}(A)Y+\frac{\operatorname{tr}(A)^2}{2}Y^2+\ldots=\exp(\operatorname{tr}(A)Y).\]
Since we already know that $\exp(A)$ converges, we are allowed to specialize $Y=1$ in $B$, from which the claim follows.
\end{proof}
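As an illustration (our own, using floating point rather than formal power series), the analytic counterpart of the boxed identity can be observed numerically by truncating the exponential series:

```python
import math

def mat_exp(A, terms=40):
    # exp(A) via the truncated power series sum_k A^k / k!
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][l] * A[l][j] for l in range(n)) / k for j in range(n)]
                for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

A = [[0.3, -0.2, 0.5], [0.1, 0.4, -0.3], [0.2, 0.0, -0.1]]
lhs = det3(mat_exp(A))          # det(exp(A))
rhs = math.exp(0.3 + 0.4 - 0.1) # exp(tr(A))
```

Both sides agree up to rounding error for any sample matrix.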
\begin{Def}
For $\alpha=(\alpha_1,\ldots,\alpha_n)\in K[[X_1,\ldots,X_n]]^n$ we call
\[J(\alpha):=(\partial_j\alpha_i)_{i,j}\in K[[X_1,\ldots,X_n]]^{n\times n}\]
the \emph{Jacobi matrix} of $\alpha$.
\end{Def}
\begin{Ex}
The Jacobi matrix of the power sum polynomials $\rho=(\rho_1,\ldots,\rho_n)$ is a deformed \emph{Vandermonde matrix} $J(\rho)=(iX_j^{i-1})_{i,j}$ with determinant $n!\prod_{i<j}(X_j-X_i)$. The next theorem furnishes a new proof for the algebraic independence of $\rho_1,\ldots,\rho_n$.
\end{Ex}
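Since both sides of $\det(J(\rho))=n!\prod_{i<j}(X_j-X_i)$ are polynomials, the identity can be spot-checked by evaluating at integer sample points. Here is a small sketch using the Leibniz formula for the determinant (the sample values are arbitrary):

```python
from itertools import permutations
from math import factorial, prod

def det(M):
    # Leibniz formula with the sign computed by counting inversions
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

x = [2, 5, -1, 3]   # sample values standing in for X_1, ..., X_4
n = len(x)
J = [[(i + 1) * x[j] ** i for j in range(n)] for i in range(n)]  # (i X_j^{i-1}), i = 1..n
vandermonde = factorial(n) * prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
```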
\begin{Thm}\label{thmjacobi}
Polynomials $\alpha_1,\ldots,\alpha_n\in \mathbb{C}[X_1,\ldots,X_n]$ form a transcendence basis of $\mathbb{C}(X_1,\ldots,X_n)$ if and only if $\det(J(\alpha))\ne 0$.
\end{Thm}
\begin{proof}
The proof follows \cite[Proposition~3.10]{Humphreys}.
Suppose first that $\alpha_1,\ldots,\alpha_n$ are algebraically dependent. Then there exists $\beta\in\mathbb{C}[X_1,\ldots,X_n]\setminus\mathbb{C}$ such that $\beta(\alpha_1,\ldots,\alpha_n)=0$ and $\deg(\beta)$ is as small as possible.
By \eqref{s1},
\[\sum_{i=1}^n(\partial_k\alpha_i)(\partial_i\beta)(\alpha_1,\ldots,\alpha_n)=\partial_k(\beta(\alpha_1,\ldots,\alpha_n))=0\]
for $k=1,\ldots,n$. This is a homogeneous linear system over $\mathbb{C}(X_1,\ldots,X_n)$ with coefficient matrix $J(\alpha)^\mathrm{t}$. Since $\beta\notin\mathbb{C}$, there exists $1\le k\le n$ such that $\partial_k\beta\ne 0$. Now $(\partial_k\beta)(\alpha_1,\ldots,\alpha_n)\ne 0$, because $\deg(\beta)$ was chosen to be minimal. Hence, the linear system has a non-trivial solution and $\det(J(\alpha))$ must be $0$.
Assume conversely that $\alpha_1,\ldots,\alpha_n$ are algebraically independent over $\mathbb{C}$. Since $\mathbb{C}(X_1,\ldots,X_n)$ has transcendence degree $n$, the polynomials $X_i,\alpha_1,\ldots,\alpha_n$ are algebraically dependent for each $i=1,\ldots,n$. Let $\beta_i\in\mathbb{C}[X_0,X_1,\ldots,X_n]\setminus\mathbb{C}$ such that $\beta_i(X_i,\alpha_1,\ldots,\alpha_n)=0$ and $\deg(\beta_i)$ as small as possible. Again by \eqref{s1},
\[\delta_{ik}(\partial_0\beta_i)(X_i,\alpha_1,\ldots,\alpha_n)+\sum_{j=1}^n(\partial_k\alpha_j)(\partial_j\beta_i)(X_i,\alpha_1,\ldots,\alpha_n)=\partial_k(\beta_i(X_i,\alpha_1,\ldots,\alpha_n))=0\]
for $i,k=1,\ldots,n$. Since $\alpha_1,\ldots,\alpha_n$ are algebraically independent, $X_0$ must occur in every $\beta_i$.
In particular, $\partial_0\beta_i\ne 0$ has smaller degree than $\beta_i$. The choice of $\beta_i$ implies $(\partial_0\beta_i)(X_i,\alpha_1,\ldots,\alpha_n)\ne 0$ for $i=1,\ldots,n$. This leads to the following matrix equation in $\mathbb{C}[X_1,\ldots,X_n]$:
\[\bigl((\partial_j\beta_i)(X_i,\alpha_1,\ldots,\alpha_n)\bigr)_{i,j}J(\alpha)=-\bigl(\delta_{ij}(\partial_0\beta_i)(X_i,\alpha_1,\ldots,\alpha_n)\bigr)_{i,j}.\]
Since the determinant of the diagonal matrix on the right hand side does not vanish, also $\det(J(\alpha))$ cannot vanish.
\end{proof}
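As a toy illustration of the criterion (our own sketch): $\det(J(\alpha))$ is a polynomial, so it vanishes identically if and only if it vanishes at every sample point. For $n=2$, the pair $(X+Y,XY)$ is algebraically independent, while $(X+Y,(X+Y)^2)$ is not:

```python
def det_jacobi(fx, fy, gx, gy, x, y):
    # determinant of the 2x2 Jacobi matrix of (f, g), evaluated at (x, y)
    return fx(x, y) * gy(x, y) - fy(x, y) * gx(x, y)

# independent pair: f = X + Y, g = X*Y  (partials: 1, 1 and y, x)
independent = det_jacobi(lambda x, y: 1, lambda x, y: 1,
                         lambda x, y: y, lambda x, y: x, 2, 5)

# dependent pair: f = X + Y, g = (X + Y)^2  (partials: 1, 1 and 2(x+y), 2(x+y))
dependent = [det_jacobi(lambda x, y: 1, lambda x, y: 1,
                        lambda x, y: 2 * (x + y), lambda x, y: 2 * (x + y), x, y)
             for (x, y) in [(0, 1), (2, 5), (-3, 7)]]
```

The first determinant is $X_1-X_2$, non-zero at a generic point; the second vanishes everywhere.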
\begin{Def}
Let $C_a\subseteq K[[X_1,\ldots,X_n]]$ be the set of power series with constant term $a\in K$, i.\,e. $\alpha\in C_a\iff\alpha(0)=a$. Let
\[K[[X_1,\ldots,X_n]]^\circ:=\bigl\{\alpha\in C_0^n:\det(J(\alpha))\notin C_0\bigr\}\subseteq K[[X_1,\ldots,X_n]]^n.\]
\end{Def}
For $n=1$ we have $\alpha\in K[[X_1,\ldots,X_n]]^\circ\iff\alpha(0)=0\ne\alpha'(0)\iff\alpha\in(X)\setminus(X^2)$, so our notation is consistent with \autoref{revgroup}. The following is a multivariate analog.
\begin{Thm}[Inverse function theorem]\label{IFT}
The set $K[[X_1,\ldots,X_n]]^\circ$ is a group with respect to $\circ$ and
\[K[[X_1,\ldots,X_n]]^\circ\to\operatorname{GL}(n,K),\qquad \alpha\mapsto J(\alpha)(0)\]
is a group epimorphism.
\end{Thm}
\begin{proof}
Let $\alpha,\beta\in K[[X_1,\ldots,X_n]]^\circ$. Clearly, $\alpha\circ\beta\in C_0^n$. By \eqref{s1},
\[\partial_j(\alpha_i(\beta))=\sum_{k=1}^n(\partial_j\beta_k)(\partial_k\alpha_i)(\beta)\]
and $J(\alpha\circ\beta)=J(\alpha)(\beta)\cdot J(\beta)$. It follows that
\begin{equation}\label{Jhom}
J(\alpha\circ\beta)(0)=J(\alpha)(0)J(\beta)(0)\ne 0
\end{equation}
and $\alpha\circ\beta\in K[[X_1,\ldots,X_n]]^\circ$.
By fully exploiting \eqref{distmult}, the associativity $(\alpha\circ\beta)\circ\gamma=\alpha\circ(\beta\circ\gamma)$ can be reduced to the easy case where $\alpha=(0,\ldots,0,X_i,0,\ldots,0)$. The identity element of $K[[X_1,\ldots,X_n]]^\circ$ is clearly $(X_1,\ldots,X_n)$.
For the construction of inverse elements, we first assume that $J(\alpha)(0)=1_n$. Here we can adapt the proof of \autoref{sympoly}.
We sort the $n$-tuples of indices by total degree and, within each degree, lexicographically, and define $\beta_{i,1}:=X_i\in C_0$. For a given $\beta_{i,j}$ let $f(i,j):=(k_1,\ldots,k_n)$ be the minimal tuple such that the coefficient $c$ of $X_1^{k_1}\ldots X_n^{k_n}$ in $\beta_{i,j}(\alpha_1,\ldots,\alpha_n)-X_i$ is non-zero (if there is no such tuple we are done). Now let
\[\beta_{i,j+1}:=\beta_{i,j}-cX_1^{k_1}\ldots X_{n}^{k_n}\in C_0.\]
Since $(\partial_j\alpha_k)(0)=\delta_{kj}$, $X_k$ is the unique monomial of degree $1$ in $\alpha_k$. Consequently, $X_1^{k_1}\ldots X_n^{k_n}$ is the unique lowest degree monomial in $\alpha_1^{k_1}\ldots\alpha_n^{k_n}$. Hence, going from $\beta_{i,j}(\alpha_1,\ldots,\alpha_n)$ to $\beta_{i,j+1}(\alpha_1,\ldots,\alpha_n)$ replaces $X_1^{k_1}\ldots X_n^{k_n}$ with terms of higher degree. Consequently, $f(i,j+1)>f(i,j)$ and $\beta_i:=\lim_{j\to\infty}\beta_{i,j}\in C_0$ exists with $\beta_i(\alpha_1,\ldots,\alpha_n)=X_i$.
Now we consider the general case. As explained before, $\det(J(\alpha))\notin C_0$ implies that $J(\alpha)$ is invertible. Let $S:=(s_{ij})=J(\alpha)^{-1}(0)\in K^{n\times n}$ and
\[\tilde{\alpha}_i:=\sum_{j=1}^ns_{ij}\alpha_j\in C_0\]
for $i=1,\ldots,n$. Then
\[J(\tilde{\alpha})(0)=(\partial_j\tilde{\alpha}_i)_{i,j}(0)=\Bigl(\sum_{k=1}^ns_{ik}(\partial_j\alpha_k)(0)\Bigr)_{i,j}=SJ(\alpha)(0)=1_n.\]
By the construction above, there exists $\tilde{\beta}\in C_0^n$ with $\tilde{\alpha}\circ\tilde{\beta}=(X_1,\ldots,X_n)$. Define $\tilde{X}_i:=\sum_{j=1}^ns_{ij}X_j\in C_0$ and $\beta_i:=\tilde{\beta}_i(\tilde{X}_1,\ldots,\tilde{X}_n)\in C_0$ for $i=1,\ldots,n$. Then
\[\sum_{j=1}^ns_{ij}\alpha_j(\beta_1,\ldots,\beta_n)=\tilde{\alpha}_i\circ\beta=\tilde{\alpha}_i\circ \tilde{\beta}\circ(\tilde{X}_1,\ldots,\tilde{X}_n)=\tilde{X}_i=\sum_{j=1}^ns_{ij}X_j.\]
Since $S$ is invertible, it follows that $\alpha_i(\beta_1,\ldots,\beta_n)=X_i$ for $i=1,\ldots,n$.
By \eqref{Jhom}, $J(\beta)(0)=S$ and $\beta\in K[[X_1,\ldots,X_n]]^\circ$ is the inverse of $\alpha$ with respect to $\circ$.
This shows that $K[[X_1,\ldots,X_n]]^\circ$ is a group. The map $\alpha\mapsto J(\alpha)(0)$ is a homomorphism by \eqref{Jhom}. For $A=(a_{ij})\in\operatorname{GL}(n,K)$ let $\alpha_i:=a_{i1}X_1+\ldots+a_{in}X_n$. Then $\alpha\in C_0^n$ and $J(\alpha)(0)=A$. So our map is surjective.
\end{proof}
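The successive-approximation step in the proof is effectively an algorithm. The following sketch (our own illustration, with an arbitrarily chosen $\alpha$, truncation at total degree $5$, and monomials ordered by total degree and then lexicographically) computes the inverse of $\alpha=(X+Y^2,\,Y+XY)$ and checks both compositions up to the truncation degree:

```python
D = 5  # truncate at total degree D

def mul(p, q):
    r = {}
    for (a, b), c1 in p.items():
        for (e, f), c2 in q.items():
            if a + e + b + f <= D:
                k = (a + e, b + f)
                r[k] = r.get(k, 0) + c1 * c2
    return {k: v for k, v in r.items() if v}

def add(p, q):
    r = dict(p)
    for k, v in q.items():
        r[k] = r.get(k, 0) + v
    return {k: v for k, v in r.items() if v}

def compose(p, subs):
    # substitute subs[0], subs[1] (both with zero constant term) for X, Y in p
    out = {}
    for (a, b), c in p.items():
        term = {(0, 0): c}
        for _ in range(a):
            term = mul(term, subs[0])
        for _ in range(b):
            term = mul(term, subs[1])
        out = add(out, term)
    return out

# alpha = (X + Y^2, Y + X*Y); note J(alpha)(0) is the identity matrix
alpha = [{(1, 0): 1, (0, 2): 1}, {(0, 1): 1, (1, 1): 1}]
ident = [{(1, 0): 1}, {(0, 1): 1}]

beta = [dict(ident[0]), dict(ident[1])]
for i in range(2):
    for _ in range(100):  # each pass kills the lowest offending monomial
        err = add(compose(beta[i], alpha), {k: -v for k, v in ident[i].items()})
        if not err:
            break
        k = min(err, key=lambda t: (t[0] + t[1], t))  # graded-lex minimal monomial
        beta[i] = add(beta[i], {k: -err[k]})

left = [compose(b, alpha) for b in beta]    # beta o alpha
right = [compose(a, beta) for a in alpha]   # alpha o beta
```

Both compositions return $(X,Y)$ modulo terms of total degree greater than $5$.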
If $\alpha_1,\ldots,\alpha_n\in\mathbb{C}[X_1,\ldots,X_n]$ are polynomials such that $\det(J(\alpha))\in\mathbb{C}^\times$, the \emph{Jacobian conjecture} (put forward by Keller~\cite{Keller} in 1939) claims that there exist polynomials $\beta_1,\dots,\beta_n$ such that $\alpha\circ\beta=(X_1,\ldots,X_n)$. This is still open even for $n=2$ (see \cite{Essen}).
An explicit formula for the reverse (i.\,e. the inverse with respect to $\circ$) is given by the following multivariate version of \autoref{lagrange}. To simplify the proof (which is still difficult) we restrict ourselves to those $\beta\in\mathbb{C}[[X_1,\ldots,X_n]]^n$ such that $\beta_i\in X_iC_1\subseteq C_0$. Note that $J(\beta)(0)=1_n$ here.
\begin{Thm}[\textsc{Lagrange--Good}'s inversion formula]\label{lagrange2}
Let $\alpha\in\mathbb{C}[[X_1,\ldots,X_n]]$ and $\beta_i\in X_iC_1$ for $i=1,\ldots,n$. Then
\begin{equation}\label{betaex}
\alpha=\sum_{k_1,\ldots,k_n\ge 0}c_{k_1,\ldots,k_n}\beta_1^{k_1}\ldots\beta_n^{k_n}
\end{equation}
where $c_{k_1,\ldots,k_n}\in\mathbb{C}$ is the coefficient of $X_1^{k_1}\ldots X_n^{k_n}$ in
\[\alpha \Bigl(\frac{X_1}{\beta_1}\Bigr)^{k_1+1}\ldots\Bigl(\frac{X_n}{\beta_n}\Bigr)^{k_n+1}\det(J(\beta)).\]
\end{Thm}
\begin{proof}
The proof is taken from Hofbauer~\cite{Hofbauer}.
By the inverse function theorem, there exists $\gamma\in\mathbb{C}[[X_1,\ldots,X_n]]^\circ$ such that $\gamma\circ\beta=(X_1,\ldots,X_n)$. Replacing $X_i$ by $\gamma_i(\beta)$ in $\alpha$ yields an expansion in the form \eqref{betaex} where we denote the coefficients by $\bar{c}_{k_1,\ldots,k_n}$ for the moment. Observe that $\tau_i:=X_i/\beta_i\in C_1$ and $\det(J(\beta))\in C_1$.
For $l_1,\ldots,l_n\ge 0$ we define
\[\rho_{l_1,\ldots,l_n}:=\tau_1^{l_1+1}\ldots\tau_n^{l_n+1}\det(J(\beta))\in C_1.\]
Then $c_{l_1,\ldots,l_n}$ is, by definition, the coefficient of $X_1^{l_1}\ldots X_n^{l_n}$ in $\alpha\rho_{l_1,\ldots,l_n}$. So it also must be the coefficient of $X_1^{l_1}\ldots X_n^{l_n}$ in
\[\sum_{\substack{k_1,\ldots,k_n\ge 0\\\forall i\,:\,k_i\le l_i}}\bar{c}_{k_1,\ldots,k_n}X_1^{k_1}\ldots X_n^{k_n}\rho_{l_1-k_1,\ldots,l_n-k_n}.\]
It is easy to see that $c_{0,\ldots,0}=\alpha(0)=\bar{c}_{0,\ldots,0}$ as claimed. Hence, it suffices to show that $X_1^{k_1}\ldots X_n^{k_n}$ does not occur in $\rho_{k_1,\ldots,k_n}$ for $(k_1,\ldots,k_n)\ne(0,\ldots,0)$.
By the product rule,
\[\tau_i\partial_j\beta_i=\partial_j(\beta_i\tau_i)-\beta_i\partial_j\tau_i=\delta_{ij}-X_i\frac{\partial_j\tau_i}{\tau_i}.\]
Since the (Jacobi) determinant is linear in every row, it follows that
\[\rho_{k_1,\ldots,k_n}=\det\bigl(\delta_{ij}\tau_i^{k_i}-X_i\tau_i^{k_i-1}\partial_j\tau_i\bigr)=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^n\bigl(\delta_{i\sigma(i)}\tau_i^{k_i}-X_i\tau_i^{k_i-1}\partial_{\sigma(i)}\tau_i\bigr).\]
By the (multivariate) Taylor expansion, it suffices to show that $(\partial_1^{k_1}\ldots\partial_n^{k_n}\rho_{k_1,\ldots,k_n})(0)=0$.
Leibniz' rule applied to the inner product yields
\[P_\sigma:=\sum_{l_{11}+\ldots+l_{1n}=k_1}\ldots\sum_{l_{n1}+\ldots+l_{nn}=k_n}\frac{k_1!\ldots k_n!}{\prod_{i,j}l_{ij}!}\prod_{t=1}^n\partial_1^{l_{1t}}\ldots\partial_n^{l_{nt}}\bigl(\delta_{t\sigma(t)}\tau_t^{k_t}-X_t\tau_t^{k_t-1}\partial_{\sigma(t)}\tau_t\bigr).\]
Therein, we find
\[\bigl(\partial_1^{l_{1t}}\ldots\partial_n^{l_{nt}}(X_t\tau_t^{k_t-1}\partial_{\sigma(t)}\tau_t)\bigr)(0)=l_{tt}\bigl(\partial_1^{l_{1t}}\ldots\partial_t^{l_{tt}-1}\ldots\partial_n^{l_{nt}}(\tau_t^{k_t-1}\partial_{\sigma(t)}\tau_t)\bigr)(0).\]
In particular, the product is zero if $\sigma(t)\ne t$ and $l_{tt}=0$.
We will disregard this case in the following. This also means that $l_{\sigma(t)\sigma(t)}<k_{\sigma(t)}$ whenever $\sigma(t)\ne t$.
We set $\mu_i:=\tau_i^{k_i}$ and observe that $\frac{1}{k_t}\partial_{\sigma(t)}(\mu_t)=\tau_t^{k_t-1}\partial_{\sigma(t)}\tau_t$. Hence, the inner product of $P_\sigma(0)$ takes the form
\[\prod_{t=1}^n\bigl(\delta_{t\sigma(t)}\partial_1^{l_{1t}}\ldots\partial_n^{l_{nt}}\mu_t-\frac{l_{tt}}{k_t}\partial_1^{l_{1t}}\ldots\partial_t^{l_{tt}-1}\ldots\partial_{\sigma(t)}^{l_{\sigma(t)t}+1}\ldots\partial_n^{l_{nt}}\mu_t\bigr).\]
Finally, we transform the indices via $l_{jt}\mapsto m_{jt}:=l_{jt}-\delta_{jt}+\delta_{j\sigma(t)}$ (the problematic cases $l_{tt}=0$ and $l_{\sigma(t)\sigma(t)}=k_{\sigma(t)}$ were excluded above). Note that $m_{t1}+\ldots+m_{tn}=k_t$ and
\[\frac{l_{tt}}{l_{1t}!\ldots l_{nt}!}=\frac{l_{\sigma(t)t}+1}{l_{1t}!\ldots(l_{tt}-1)!\ldots (l_{\sigma(t)t}+1)!\ldots l_{nt}!}=\frac{m_{\sigma(t)t}}{m_{1t}!\ldots m_{nt}!}.\]
This turns $P_\sigma(0)$ into
\[P_\sigma(0)=\sum_{m_{ij}}\frac{k_1!\ldots k_n!}{\prod_{i,j}m_{ij}!}\prod_{t=1}^n\partial_1^{m_{1t}}\ldots\partial_n^{m_{nt}}(\mu_t)(0)\Bigl(\delta_{t\sigma(t)}-\frac{m_{\sigma(t)t}}{k_t}\Bigr).\]
Since only the last term actually depends on $\sigma$, we conclude
\[(\partial_1^{k_1}\ldots\partial_n^{k_n}\rho_{k_1,\ldots,k_n})(0)=\sum_{m_{ij}}\frac{k_1!\ldots k_n!}{\prod_{i,j}m_{ij}!}\prod_{t=1}^n\partial_1^{m_{1t}}\ldots\partial_n^{m_{nt}}(\mu_t)(0)\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{t=1}^n\Bigl(\delta_{t\sigma(t)}-\frac{m_{\sigma(t)t}}{k_t}\Bigr).\]
The final sum is the determinant of $(\delta_{ij}-m_{ji}/k_i)_{ij}$. This matrix is singular: multiplying row $i$ by $k_i$ yields the matrix $(k_i\delta_{ij}-m_{ji})_{ij}$, whose $j$-th column sums to $k_j-\sum_{i=1}^nm_{ji}=0$. This completes the proof of $(\partial_1^{k_1}\ldots\partial_n^{k_n}\rho_{k_1,\ldots,k_n})(0)=0$.
\end{proof}
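For $n=1$ the theorem reduces to classical Lagrange inversion, where $\det(J(\beta))=\beta'$, and it can be verified by exact truncated arithmetic. In the following sketch (our own illustration), $\alpha$ and $\beta$ are arbitrary sample series:

```python
from fractions import Fraction

N = 10  # work modulo X^(N+1)

def mul(p, q):
    r = [Fraction(0)] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            r[i + j] += p[i] * q[j]
    return r

def inv(u):  # inverse of a series with u[0] == 1
    v = [Fraction(0)] * (N + 1)
    v[0] = Fraction(1)
    for n in range(1, N + 1):
        v[n] = -sum(u[i] * v[n - i] for i in range(1, n + 1))
    return v

def deriv(u):
    return [Fraction(i + 1) * u[i + 1] for i in range(N)] + [Fraction(0)]

def pad(p):
    return [Fraction(c) for c in p] + [Fraction(0)] * (N + 1 - len(p))

beta = pad([0, 1, 3, 1])           # beta = X + 3X^2 + X^3, an element of X*C_1
alpha = pad([1, 2, 0, 1])          # alpha = 1 + 2X + X^3
x_over_beta = inv(pad([1, 3, 1]))  # X/beta = 1/(1 + 3X + X^2)
jac = deriv(beta)                  # det(J(beta)) = beta' in one variable

# c_k = [X^k] alpha * (X/beta)^(k+1) * beta'
cs = []
for k in range(N + 1):
    t = mul(alpha, jac)
    for _ in range(k + 1):
        t = mul(t, x_over_beta)
    cs.append(t[k])

# reassemble alpha = sum_k c_k beta^k  (mod X^(N+1))
acc = [Fraction(0)] * (N + 1)
bpow = pad([1])
for k in range(N + 1):
    acc = [a + cs[k] * b for a, b in zip(acc, bpow)]
    bpow = mul(bpow, beta)
```

Since $\beta^k$ starts in degree $k$, summing up to $k=N$ suffices modulo $X^{N+1}$.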
In an attempt to unify and generalize some dual pairs we have already found, we study the following setting. Let $A=(a_{ij})\in\mathbb{C}^{n\times n}$ and $D=\operatorname{diag}(X_1,\ldots,X_n)$. For $I\subseteq N:=\{1,\ldots,n\}$ let $A_I:=(a_{ij})_{i,j\in I}$ and $X_I=\prod_{i\in I}X_i$. Since the determinant is linear in every row, we obtain
\begin{align*}
\det(1_n+DA)&=\begin{vmatrix}
1&0&\cdots&0\\
a_{21}X_2&1+a_{22}X_2& &a_{2n}X_2\\
\vdots&&\ddots&\vdots\\
a_{n1}X_n&\cdots&\cdots&1+a_{nn}X_n
\end{vmatrix}+\begin{vmatrix}
a_{11}&\cdots&a_{1n}\\
a_{21}X_2&&a_{2n}X_2\\
\vdots&&\vdots\\
a_{n1}X_n&\cdots&1+a_{nn}X_n
\end{vmatrix}X_1\\
&=\begin{vmatrix}
1+a_{22}X_2&\cdots&a_{2n}X_2\\
\vdots&\ddots&\vdots\\
a_{n2}X_n&\cdots&1+a_{nn}X_n
\end{vmatrix}+\begin{vmatrix}
a_{11}&\cdots&\cdots&a_{1n}\\
0&1&0&0\\
a_{31}X_3&&&a_{3n}X_3\\
\vdots&&&\vdots\\
a_{n1}X_n&\cdots&\cdots&1+a_{nn}X_n
\end{vmatrix}X_1\\
&\quad+\begin{vmatrix}
a_{11}&\cdots&a_{1n}\\
a_{21}&\cdots&a_{2n}\\
a_{31}X_3&\cdots&a_{3n}X_3\\
\vdots&&\vdots\\
a_{n1}X_n&\cdots&1+a_{nn}X_n
\end{vmatrix}X_1X_2=\ldots\\
&=1+\sum_{i=1}^na_{ii}X_i+\sum_{i<j}\det(A_{\{i,j\}})X_iX_j+\ldots+\det(A)X_N.
\end{align*}
Altogether,
\begin{equation}\label{mac1}
\det(1_n+DA)=\sum_{I\subseteq N}\det(A_I)X_I,
\end{equation}
where $\det(A_\varnothing)=1$ for convenience. The dual equation, discovered by Vere-Jones~\cite{Vere-Jones}, uses the \emph{permanent} $\operatorname{per}(A)=\sum_{\sigma\in S_n}a_{1\sigma(1)}\ldots a_{n\sigma(n)}$ of $A$:
\begin{equation}\label{a2}
\frac{1}{\det(1_n-DA)}=\sum_{k=0}^\infty\sum_{I\in N^k}\operatorname{per}(A_I)\frac{X_I}{k!},
\end{equation}
where $I$ now runs through all tuples of elements in $N$
(in contrast to the determinant, $\operatorname{per}(A_I)$ does not necessarily vanish if $A_I$ has identical rows).
We will derive \eqref{a2} in \autoref{vere} from the following result, which seems more amenable to applications.
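Identity \eqref{mac1} is a polynomial identity in $X_1,\ldots,X_n$, so it can be spot-checked by evaluating both sides at sample values (our own sketch, with an arbitrary integer matrix and rational sample points; recall $\det(A_\varnothing)=1$, which the permutation-based determinant below returns for the empty matrix):

```python
from itertools import combinations, permutations
from fractions import Fraction
from math import prod

def det(M):
    # Leibniz formula; for the empty matrix this correctly returns 1
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][p[i]]
        total += term
    return total

A = [[2, -1, 3], [0, 1, 4], [5, 2, -2]]          # arbitrary sample matrix
x = [Fraction(1, 2), Fraction(-1, 3), Fraction(2)]  # sample values for X_1, X_2, X_3
n = 3

# left-hand side: det(1_n + D A) with D = diag(x_1, x_2, x_3)
M = [[(1 if i == j else 0) + x[i] * A[i][j] for j in range(n)] for i in range(n)]
lhs = det(M)

# right-hand side: sum over subsets I of det(A_I) * X_I
rhs = Fraction(0)
for k in range(n + 1):
    for I in combinations(range(n), k):
        sub = [[A[i][j] for j in I] for i in I]
        rhs += det(sub) * prod((x[i] for i in I), start=Fraction(1))
```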
\begin{Thm}[\textsc{MacMahon}'s Master Theorem]\label{mahon}
Let $A=(a_{ij})\in\mathbb{C}^{n\times n}$ and $D=\operatorname{diag}(X_1,\ldots,X_n)$. Then
\begin{equation}\label{mac2}
\boxed{\frac{1}{\det(1_n-DA)}=\sum_{k_1,\ldots,k_n\ge 0}c_{k_1,\ldots,k_n}X_1^{k_1}\ldots X_n^{k_n},}
\end{equation}
where $c_{k_1,\ldots,k_n}\in\mathbb{C}$ is the coefficient of $X_1^{k_1}\ldots X_n^{k_n}$ in
\[\prod_{i=1}^n(a_{i1}X_1+\ldots+a_{in}X_n)^{k_i}.\]
\end{Thm}
\begin{proof}
Let $A_i:=a_{i1}X_1+\ldots+a_{in}X_n$ and $\beta_i:=X_i(1+A_i)^{-1}\in X_iC_1$ for $i=1,\ldots,n$. Let $D(\beta):=\operatorname{diag}(\beta_1,\ldots,\beta_n)$ and $\alpha:=\det(1_n-D(\beta)A)^{-1}$. Since $\partial_j A_i=a_{ij}$, we obtain
\[\partial_j\beta_i=\frac{\delta_{ij}(1+A_i)-X_ia_{ij}}{(1+A_i)^2}=\frac{\delta_{ij}-\beta_ia_{ij}}{1+A_i}\]
and hence $J(\beta)=\operatorname{diag}\bigl((1+A_1)^{-1},\ldots,(1+A_n)^{-1}\bigr)\bigl(1_n-D(\beta)A\bigr)$, so that
\[\alpha\det(J(\beta))=\prod_{i=1}^n\frac{1}{1+A_i}.\]
Hence, by \autoref{lagrange2}, the coefficient of $\beta_1^{k_1}\ldots\beta_n^{k_n}$ in $\alpha$ is the coefficient of $X_1^{k_1}\ldots X_n^{k_n}$ in
\[\Bigl(\frac{X_1}{\beta_1}\Bigr)^{k_1+1}\ldots\Bigl(\frac{X_n}{\beta_n}\Bigr)^{k_n+1}\prod_{i=1}^n\frac{1}{1+A_i}=\prod_{i=1}^n(1+a_{i1}X_1+\ldots+a_{in}X_n)^{k_i}.\]
Since the product on the right hand side has degree $k_1+\ldots+k_n$, the additional summand $1$ plays no role and the desired coefficient really is $c_{k_1,\ldots,k_n}$. By \autoref{IFT}, the $X_i$ can be substituted by some $\gamma_i$ such that $\beta_1^{k_1}\ldots\beta_n^{k_n}$ becomes $X_1^{k_1}\ldots X_n^{k_n}$ and $\alpha$ becomes $\det(1_n-DA)^{-1}$.
\end{proof}
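The boxed formula \eqref{mac2} can also be checked by exact truncated computation: expand $\det(1_n-DA)^{-1}$ via the geometric series and compare every coefficient up to a fixed total degree with the product side (our own sketch for an arbitrary sample $3\times 3$ matrix):

```python
from itertools import permutations, product

D = 5   # truncate at total degree D
n = 3
A = [[1, 2, 0], [3, 1, 1], [0, 1, 2]]   # arbitrary sample matrix
ONE = {(0,) * n: 1}

def add(p, q):
    r = dict(p)
    for k, v in q.items():
        r[k] = r.get(k, 0) + v
    return {k: v for k, v in r.items() if v}

def mul(p, q):
    r = {}
    for k1, c1 in p.items():
        for k2, c2 in q.items():
            k = tuple(a + b for a, b in zip(k1, k2))
            if sum(k) <= D:
                r[k] = r.get(k, 0) + c1 * c2
    return {k: v for k, v in r.items() if v}

def e(i):
    v = [0] * n
    v[i] = 1
    return tuple(v)

def sign(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# det(1_n - D A) as a polynomial: entry (i, j) is delta_ij - a_ij X_i
M = [[add(ONE if i == j else {}, {e(i): -A[i][j]}) for j in range(n)] for i in range(n)]
detM = {}
for p in permutations(range(n)):
    term = {(0,) * n: sign(p)}
    for i in range(n):
        term = mul(term, M[i][p[i]])
    detM = add(detM, term)

# invert detM = 1 + v via the geometric series sum_m (-v)^m
v = {k: c for k, c in detM.items() if any(k)}
inv_det, pw = dict(ONE), dict(ONE)
for _ in range(D):
    pw = mul(pw, {k: -c for k, c in v.items()})
    inv_det = add(inv_det, pw)

# compare with the coefficient of X^k in prod_i (a_i1 X_1 + ... + a_in X_n)^{k_i}
rows = [{e(j): A[i][j] for j in range(n) if A[i][j]} for i in range(n)]
ok = True
for k in product(range(D + 1), repeat=n):
    if sum(k) > D:
        continue
    pp = dict(ONE)
    for i in range(n):
        for _ in range(k[i]):
            pp = mul(pp, rows[i])
    ok = ok and inv_det.get(k, 0) == pp.get(k, 0)
```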
A graph-theoretical proof of \autoref{mahon} was given by Foata and is presented in \cite[Section~9.4]{Brualdi}. There is also a short analytic argument which reduces the claim to the easy case where $A$ is a triangular matrix.
\begin{Cor}\label{vere}
Equation \eqref{a2} holds.
\end{Cor}
\begin{proof}
By the multinomial theorem we have
\begin{align*}
\prod_{i=1}^n&(a_{i1}X_1+\ldots+a_{in}X_n)^{k_i}\\
&=\sum_{k_{11}+\ldots+k_{1n}=k_1}\ldots\sum_{k_{n1}+\ldots+k_{nn}=k_n}\frac{k_1!\ldots k_n!}{\prod_{i,j}k_{ij}!}a_{11}^{k_{11}}a_{12}^{k_{12}}\ldots a_{nn}^{k_{nn}}X_1^{k_{11}+\ldots+k_{n1}}\ldots X_n^{k_{1n}+\ldots+k_{nn}}.
\end{align*}
To obtain $c_{k_1,\ldots,k_n}$ one needs to run only over those indices $k_{ij}$ with $\sum_i k_{ij}=k_j$ for $j=1,\ldots,n$.
On the other hand, we need to sum over those tuples $I\in N^{k_1+\ldots+k_n}$ in \eqref{a2} which contain $i$ with multiplicity $k_i$ for each $i=1,\ldots,n$. The number of those tuples is $\frac{(k_1+\ldots+k_n)!}{k_1!\ldots k_n!}$. The factor $(k_1+\ldots+k_n)!$ cancels with $\frac{1}{k!}$ in \eqref{a2}.
Since the permanent is invariant under permutations of rows and columns, we may assume that $I=(1^{k_1},\ldots,n^{k_n})$. Then $A_I$ has the block form $A_I=(A_{ij})_{i,j}$ where
\[A_{ij}=a_{ij}\begin{pmatrix}
1&\cdots&1\\
\vdots&&\vdots\\
1&\cdots&1
\end{pmatrix}\in\mathbb{C}^{k_i\times k_j}.\]
In the definition of $\operatorname{per}(A_I)$, every permutation $\sigma$ corresponds to a selection of $k_1+\ldots+k_n$ entries in $A_I$ such that one entry in each row and each column is selected. Suppose that $k_{ij}$ entries in block $A_{ij}$ are selected. Then $\sum_i k_{ij}=k_j$ and $\sum_j k_{ij}=k_i$. To choose the rows in each $A_{ij}$ there are $\frac{k_1!\ldots k_n!}{\prod k_{ij}!}$ possibilities. We get the same number for the selections of columns. Finally, once rows and columns are fixed, there are $\prod k_{ij}!$ choices to permute the entries in each block $A_{ij}$. Now the coefficient of $X_1^{k_1}\ldots X_n^{k_n}$ in \eqref{a2} turns out to be
\[\sum_{\substack{k_{ij}\\\sum_ik_{ij}=k_j\\\sum_jk_{ij}=k_i}}\frac{k_1!\ldots k_n!}{\prod_{i,j} k_{ij}!}a_{11}^{k_{11}}a_{12}^{k_{12}}\ldots a_{nn}^{k_{nn}}=c_{k_1,\ldots,k_n}.\qedhere\]
\end{proof}
We illustrate with some examples why MacMahon called \autoref{mahon} the \emph{master} theorem (as he was a former major, I am tempted to call it the $M^4$-theorem).
\begin{Ex}\hfill
\begin{enumerate}[(i)]
\item The expression $\det(1_n-DA)$ is reminiscent of the definition of the characteristic polynomial $\chi_A=X^n+s_{n-1}X^{n-1}+\ldots+s_0\in\mathbb{C}[X]$ of $A$. In fact, setting $X:=X_1=\ldots=X_n$ allows us to regard $\det(1_n-XA)$ as a Laurent polynomial in $X$. We can then introduce $X^{-1}$ to obtain
\[\det(1_n-XA)=X^n\det(X^{-1}1_n-A)=X^n\chi_A(X^{-1})=1+s_{n-1}X+\ldots+s_0X^n.\]
Now \eqref{mac1} in combination with Vieta's theorem yields
\[\sum_{\substack{I\subseteq N\\|I|=k}}\det(A_I)=(-1)^ks_{n-k}=\sigma_k(\lambda_1,\ldots,\lambda_n),\]
where $\lambda_1,\ldots,\lambda_n\in\mathbb{C}$ are the eigenvalues of $A$. This extends the familiar identities $\det(A)=\lambda_1\ldots\lambda_n$ and $\operatorname{tr}(A)=\lambda_1+\ldots+\lambda_n$. With the help of \autoref{exwaring}, one can also express $s_k$ in terms of $\rho_l(\lambda_1,\ldots,\lambda_n)=\operatorname{tr}(A^l)$.
\item If $A=1_n$ and $X_1=\ldots=X_n=X$, then \eqref{mac1} and \eqref{mac2} become
\begin{align*}
(1+X)^n&=\sum_{I\subseteq N}X^{|I|}=\sum_{k=0}^n\binom{n}{k}X^k,\\
(1-X)^{-n}&=\sum_{k_1,\ldots,k_n\ge 0}X^{k_1+\ldots+k_n}=\sum_{k=0}^\infty\binom{n+k-1}{k}X^k,
\end{align*}
since the $k$-element multisets correspond to the tuples $(k_1,\ldots,k_n)$ with $k_1+\ldots+k_n=k$ where $k_i$ encodes the multiplicity of $i$.
\item Taking $A=1_n$ and $X_k=X^k$ in \eqref{mac2} recovers an equation from \autoref{partseries}:
\[\prod_{k=1}^n\frac{1}{1-X^k}=\sum_{k_1,\ldots,k_n\ge0}X^{k_1+2k_2+\ldots+nk_n}=\sum_{k=0}^\infty p_n(k)X^k.\]
Similarly, choosing $X_k=kX$ or $X_k=X_kY$ leads more or less directly to \autoref{genstir} and \autoref{vieta} respectively.
\item Take $(X_1,X_2,X_3)=(X,Y,Z)$ and
\[A=\begin{pmatrix}
0&1&-1\\
-1&0&1\\
1&-1&0
\end{pmatrix}\]
in \eqref{mac2}. Then by \emph{Sarrus' rule},
\begin{align*}
\frac{1}{\det(1_3-DA)}&=\frac{1}{1+XZ+YZ+XY}=\sum_{k=0}^\infty(-1)^k(XY+YZ+ZX)^k\\
&=\sum_{k=0}^\infty(-1)^k\sum_{a+b+c=k}\frac{k!}{a!b!c!}X^{a+c}Y^{a+b}Z^{b+c}.
\end{align*}
The coefficient of $(XYZ)^{2n}$ is easily seen to be $(-1)^n\frac{(3n)!}{(n!)^3}$. On the other hand, the same coefficient in
\[(Y-Z)^{2n}(Z-X)^{2n}(X-Y)^{2n}=\sum_{a,b,c\ge 0}\binom{2n}{a}\binom{2n}{b}\binom{2n}{c}(-1)^{a+b+c}X^{c-b+2n}Y^{a-c+2n}Z^{b-a+2n}\]
occurs for $a=b=c$. This yields \emph{Dixon's identity}:
\[(-1)^n\frac{(3n)!}{(n!)^3}=\sum_{k=0}^{2n}(-1)^k\binom{2n}{k}^3.\]
\end{enumerate}
\end{Ex}
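Dixon's identity is easy to test numerically for small $n$ (a quick sketch):

```python
from math import comb, factorial

def dixon_lhs(n):
    # (-1)^n (3n)! / (n!)^3; the division is exact (a multinomial coefficient)
    return (-1) ** n * factorial(3 * n) // factorial(n) ** 3

def dixon_rhs(n):
    return sum((-1) ** k * comb(2 * n, k) ** 3 for k in range(2 * n + 1))

values = [(dixon_lhs(n), dixon_rhs(n)) for n in range(8)]
```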
We end with a short outlook.
Power series with an infinite set of indeterminates $\{X_i:i\in I\}$ can be defined by
\[K[[X_i:i\in I]]:=\bigcup_{\substack{J\subseteq I\\|J|<\infty}}K[[X_j:j\in J]].\]
Moreover, power series in non-commuting indeterminates exist and form what is sometimes called the \emph{Magnus ring} $K\langle\langle X_1,\ldots,X_n\rangle\rangle$ (the polynomial version is the \emph{free algebra} $K\langle X_1,\ldots,X_n\rangle$).
The Lie bracket $[a,b]:=ab-ba$ turns $K\langle\langle X_1,\ldots,X_n\rangle\rangle$ into a \emph{Lie algebra} and fulfills \emph{Jacobi's identity}
\[[a,[b,c]]+[b,[c,a]]+[c,[a,b]]=0.\]
The functional equation for $\exp(X)$ is replaced by the \emph{Baker--Campbell--Hausdorff formula} in this context.
The reader might ask about formal Laurent series in multiple indeterminates. Although the field of fractions $K((X_1,\ldots,X_n))$ certainly exists, its elements do not look like one might expect. For example, the inverse of $X-Y$ can be written as
$\sum_{k=1}^\infty X^{-k}Y^{k-1}$ or $-\sum_{k=1}^\infty X^{k-1}Y^{-k}$.
The first series lies in $K((X))((Y))$, but not in $K((Y))((X))$. For the second series it is the other way around.
\section*{Acknowledgment}
I thank Diego García Lucas for proofreading.
The work is supported by the German Research Foundation (\mbox{SA 2864/1-2} and \mbox{SA 2864/3-1}).
\addcontentsline{toc}{section}{References}
\begin{small}
\bigskip\hrule\bigskip

\begin{center}
\textbf{Cubical Covers of Sets in $\mathbb{R}^n$}\\
\texttt{https://arxiv.org/abs/1703.02775}
\end{center}

\noindent\textit{Abstract.} Wild sets in $\mathbb{R}^n$ can be tamed through the use of various representations though sometimes this taming removes features considered important. Finding the wildest sets for which it is still true that the representations faithfully inform us about the original set is the focus of this rather playful, expository paper that we hope will stimulate interest in cubical coverings as well as the other two ideas we explore briefly: Jones' $\beta$ numbers and varifolds from geometric measure theory.

\section{Introduction}
\label{sec:intro}
In this paper we explain and illuminate a few ideas for (1)
representing sets and (2) learning from those representations. Though
some of the ideas and results we explain are likely written down
elsewhere (we are not aware of such references), our purpose
is not to claim priority, but rather to stimulate
thought and exploration. Our primary intended audience is students of
mathematics, though other, more mature mathematicians may find a
few of the ideas interesting. We believe that cubical covers can be
used at an earlier point in a student's career and that both the
$\beta$ numbers idea introduced by Peter Jones and the idea of
varifolds pioneered by Almgren and Allard and now actively being
developed by Menne, Buet, and collaborators are still very much
underutilized by all (young and old!). To that end, we have written
this exploration, hoping that the questions and ideas presented here,
some rather elementary, will stimulate others to explore the ideas for
themselves.
We begin by briefly introducing cubical covers, Jones' $\beta$, and
varifolds, after which we look more closely at questions involving
cubical covers. Then both of the other approaches are explained in a
little bit of detail, mostly as an invitation to more exploration,
after which we close with problems for the reader and some unexplored
questions.
{\bf Acknowledgments:} LP thanks Robert Hardt and Frank Morgan for
useful comments and KRV thanks Bill Allard for useful conversations and
Peter Jones, Gilad Lerman, and Raanan Schul for introducing him to the
idea of the Jones' $\beta$ representations.
\newpage
\section{Representing Sets \& their Boundaries in $\mathbf{R}^n$}
\subsection{Cubical Refinements: Dyadic Cubes}
In order to characterize various sets in $\mathbf{R}^n$, we explore
the use of cubical covers whose cubes have side
lengths which are non-negative integer powers of $\frac{1}{2}$,
\textbf{dyadic cubes}, or more precisely, (closed) dyadic $n$-cubes
with sides parallel to the axes. Thus the side length at the $d$th subdivision
is $l(C)=\frac{1}{2^d}$, which can be made as small as desired.
Figure~\ref{fig:dyadic} illustrates this by looking at a unit cube
in $\mathbf{R}^2$ lying in the first quadrant with a vertex at the origin. We
then form a sequence of refinements by dividing each side length in
half successively, and thus quadrupling the number of cubes each time.
\begin{defn}
We shall say that the $n$-cube $C$ (with side length denoted as $l(C)$) is dyadic if
\[
C=\prod^n_{j=1}[m_j2^{-d},(m_j+1)2^{-d}], \ \ \ m_j \in \mathbb{Z}, \ d\in \mathbb{N}\cup\{0\}.
\]
\end{defn}
\begin{figure} [H]
\centering
\input{cubes.pdf_t}
\caption{Dyadic Cubes.}
\label{fig:dyadic}
\end{figure}
In this paper, we will assume $C$ to be a dyadic $n$-cube
throughout. We will denote the union of the dyadic $n$-cubes with edge
length $\frac{1}{2^d}$ that intersect a set $E\subset \mathbf{R}^n$ by
$\mathcal{C}^E_d$ and define $\partial\mathcal{C}^E_d$ to
be the boundary of this union (see Figure
\ref{fig:cubicalcover1}). Two simple questions we will explore for
their illustrative purposes are:
\begin{enumerate}
\item ``If we know $\mathcal{L}^n(\mathcal{C}^E_d)$, what can we say about $\mathcal{L}^n(E)$?'' and similarly,
\item ``If we know $\mathcal{H}^{n-1}(\partial\mathcal{C}^E_d)$, what can we say about $\mathcal{H}^{n-1}(\partial E)$?''
\end{enumerate}
\begin{figure}[H]
\centering
\scalebox{0.5}{\input{cubicalcover1.pdf_t}}
\caption{Cubical cover $\mathcal{C}^E_d$ of a set $E$.}
\label{fig:cubicalcover1}
\end{figure}
\subsection{Jones' $\beta$ Numbers}
Another approach to representing sets in $\mathbf{R}^n$, developed by Jones
\cite{jon90}, and generalized by Okikiolu \cite{oki92}, Lerman
\cite{ler03}, and Schul \cite{sch07}, involves the question of under
what conditions a bounded set $E$ can be contained within a
rectifiable curve $\Gamma$, which Jones likened to the Traveling
Salesman Problem taken over an infinite set. (See Definition~\ref{rect}
below for the definition of rectifiable.)
Jones showed that if the aspect ratios of the optimal containing
cylinders in each dyadic cube go to zero fast enough, the set $E$
is contained in a rectifiable curve. Jones' approach ends up providing
one useful approach of defining a representation for a set in $\mathbf{R}^n$
similar to those discussed in the next section. We return to this
topic in Section \ref{jonessection2}. The basic idea is illustrated in
Figure~\ref{fig:jones1}.
\begin{figure} [H]
\begin{center}
\input{Jones1.pdf_t}
\caption{Jones' $\beta$ Numbers. The green lines indicate the thinnest cylinder containing $\Gamma$ in the cube $C$. We see from this relatively large width that $\Gamma$ is not very ``flat" in this cube.}\label{jones1}
\label{fig:jones1}
\end{center}
\end{figure}
\subsection{Working Upstairs: Varifolds}\label{Var}
A third way of representing sets in $\mathbf{R}^n$ uses \emph{varifolds}.
Instead of representing $E\subset \mathbf{R}^n$ by working in $\mathbf{R}^n$, we work
in the \emph{Grassmann Bundle}, $\mathbf{R}^n\times G(n,m)$.
We parameterize the Grassmannian $G(2,1)$ by taking the upper unit semicircle in
$\mathbf{R}^2$ (including the point $(1,0)$, but not including $(1,\pi)$,
where both points are given in polar coordinates) and straightening it
out into a vertical axis (as in Figure \ref{var1a}). The bundle $\mathbf{R}^2
\times G(2,1)$ is then represented by $\mathbf{R}^2 \times [0,\pi).$
\begin{figure}[H]
\begin{center}
\input{var1a.pdf_t}
\caption{The vertical axis for the ``upstairs."}\label{var1a}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\input{varifolds1a.pdf_t}
\caption{Working Upstairs in the Grassmann bundle.}\label{var1c}
\end{center}
\end{figure}
Figure~\ref{var1c} illustrates how the tangents are built into this
representation of subsets of $\mathbf{R}^n$, giving us a sense of why this
representation might be useful. A circular curve in $\mathbf{R}^2$ becomes two
half-spirals upstairs (in the Grassmann bundle representation, as
shown in the first image of Figure \ref{var1c}). Other curves in $\mathbf{R}^2$
are similarly illuminated by their Grassmann bundle
representations. We return to this idea in Section \ref{Var2}.
\section{Simple Questions}
Let $E\subset \mathbf{R}^n$, and let $C$ denote a dyadic $n$-cube as before. Define \[\mathcal{C}(E,d) = \{C \ | \ C\cap E \neq \emptyset, \ l(C) = {1}/{2^d}\}\]
and, as above,
\[\mathcal{C}^E_d \equiv \bigcup_{C\in \mathcal{C}(E,d)} C. \]
Here are two questions:
\begin{enumerate}
\item Given $E\subset\mathbf{R}^n$, when is there a $d_0$ such that for all $d\geq d_0$, we have
\begin{equation}\label{A}
\mathcal{L}^n(\mathcal{C}^E_d) \leq M(n)\mathcal{L}^n(E)
\end{equation}
for some constant $M(n)$ independent of $E$?
\item Given $E\subset\mathbf{R}^n$ and any $\delta>0$, when does there exist
a $d_0$ such that for all $d\geq d_0$, we have
\begin{equation}\label{B}
\mathcal{L}^n(\mathcal{C}^E_d) \leq (1+\delta)\mathcal{L}^n(E)?
\end{equation}
\end{enumerate}
\begin{rem} Of course using the fact that Lebesgue measure is a Radon
measure, we can very quickly get that for $d$ large enough
(i.e. $2^{-d}$ small enough), the measure of the cubical cover is as
close to the measure of the set as you want, as long as the set is
compact and has positive measure. But the focus of this paper is on
what we can get in a much more transparent, barehanded fashion, so we
explore along different paths, getting answers that are, by some metrics,
suboptimal.
\end{rem}
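In the same barehanded spirit, here is a small computational sketch of $\mathcal{C}(E,d)$ (in Python; the names \texttt{cubical\_cover} and \texttt{cover\_measure} are ours, purely for illustration). It indexes the dyadic cubes of edge length $1/2^d$ meeting a finite sample of points and returns the Lebesgue measure of the union of those cubes, under the assumption that the sample adequately represents $E$.

```python
import math

def cubical_cover(points, d):
    """Index the dyadic cubes of edge length 1/2^d that meet the given
    finite point sample; each cube is recorded by its lower corner index
    (i_1, ..., i_n), i.e. the cube prod_k [i_k/2^d, (i_k+1)/2^d)."""
    s = 2 ** d
    return {tuple(math.floor(c * s) for c in p) for p in points}

def cover_measure(points, d, n):
    """Lebesgue measure of the union of the covering cubes: each dyadic
    cube at scale d contributes (1/2^d)^n, and distinct cubes are disjoint."""
    return len(cubical_cover(points, d)) * (1.0 / 2 ** d) ** n

# A dense-enough sample of [0,1)^2 meets all 4^d cubes, so the cover has
# measure exactly 1 -- mirroring the example of the rationals below.
pts = [(i / 64.0, j / 64.0) for i in range(64) for j in range(64)]
print(cover_measure(pts, 3, 2))  # 1.0
```

The same routine run on a sparse sample shows the cover measure collapsing toward zero as $d$ grows, which is the behavior Question 1 asks us to control.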
\begin{exm}\label{exm:rational-points}
If $E=\mathbb{Q}^n\cap [0,1]^n$, then $\mathcal{L}^n(E)=0$, but
$\mathcal{L}^n(\mathcal{C}^E_d)=1 \ \forall d \geq 0$.
\end{exm}
\begin{exm}
Let $E$ be as in Example~\ref{exm:rational-points}. Enumerate $E$
as $\hat{q}_1, \hat{q}_2,\hat{q}_3, \ldots$. Now let
$D_i=B(\hat{q}_i,\frac{\epsilon}{2^i})$ and $E_\epsilon \equiv
\left(\cup_i D_i\right) \cap [0,1]^n$, with $\epsilon$ chosen small enough so
that $\mathcal{L}^n(E_\epsilon)\leq \frac{1}{100}$. Then
$\mathcal{L}^n(\mathcal{C}^{E_\epsilon}_d)=1 \ \forall d \geq 0$.
\end{exm}
\subsection{A Union of Balls}\label{union}
For a given set $F\subseteq \mathbf{R}^n,$ suppose $E= \cup_{x\in F}\bar{B}(x,r)$, a
union of closed balls of radius $r$ centered at each point $x$ in $F$.
Then we know that $E$ is \textbf{regular} (locally Ahlfors $n$-regular
or \textit{locally} $n$-regular), and thus there exist $0 < m < M <
\infty$ and an $r_0>0$ such that for all $x\in E$ and for all
$0<r<r_0$, we have
\[
m r^n \leq \mathcal{L}^n(\bar{B}(x, r)\cap E) \leq M r^n.
\]
This is all we need to establish a sufficient condition for Equation
(\ref{A}) above.
\begin{rem}
The upper bound constant $M$ is immediate
since $E$ is a union of $n$-balls, so $M=\alpha_n$, the
$n$-volume of the unit $n$-ball, works. However, this is not the case for
$k$-regular sets in $\mathbf{R}^n$, $k<n$, since we are now asking for a bound on the
$k$-dimensional measure of an $n$-dimensional set which could easily
be infinite.
\end{rem}
\begin{enumerate}
\item Suppose $E= \cup_{x\in F}\bar{B}(x,r)$, a union of closed balls of
radius $r$ centered at each point $x$ in $F$.
\item Let $\mathcal{C}=\mathcal{C}(E,d)$ for some $d$ such that
$\frac{1}{2^d} \ll r$, and let $\oldhat{\mathcal{C}}=\{3C \mid C\in
\mathcal{C}\},$ where $3C$ is an $n$-cube concentric with $C$ with
sides parallel to the axes and $l(3C)=3l(C)$, as shown in Figure
\ref{3c}.
\begin{figure}[H]
\begin{center}
\input{3Ccube.pdf_t}
\caption{Concentric Cubes.}\label{3C}
\label{3c}
\end{center}
\end{figure}
\item\label{crucial-cube-step} This implies that for $3C\in \oldhat{\mathcal{C}}$
\begin{equation}
\frac{\mathcal{L}^n(3C\cap E)}{\mathcal{L}^n(3C)}>\theta >0, \ \ \ \text{for some } \theta = \theta(n) \text{ independent of } E.
\end{equation}
\item We then make the following observations:
\begin{enumerate}
\item Note that there are $3^n$ different tilings of $\mathbf{R}^n$ by $3C$
cubes whose vertices live on the $\frac{1}{2^d}$ lattice. (This can
be seen by realizing that there are $3^n$ shifts you can perform on
a $3C$ cube and both (1) keep the originally central cube $C$ in the
$3C$ cube and (2) keep the vertices of the $3C$ cube in the
$\frac{1}{2^d}$ lattice.)
\item Denote the $3C$ cubes in these tilings $\mathcal{T}_i, i =
1,...,3^n$.
\item Define $\oldhat{\mathcal{C}}_i \equiv \oldhat{\mathcal{C}}\cap\mathcal{T}_i$.
\item Note now that by Step~(\ref{crucial-cube-step}), the number of $3C$ cubes in
$\oldhat{\mathcal{C}}_i$ cannot exceed
\[N_i \equiv \frac{ \mathcal{L}^n(E)}{\theta \mathcal{L}^n(3C)}.\]
\item Denote the total number of cubes in $\mathcal{C}$ by $N_{\mathcal{C}^E_d}$.
\item The number of cubes in $\mathcal{C}$, $N_{\mathcal{C}^E_d}$, cannot exceed
\[\sum_{i=1}^{3^n} N_i = 3^n\frac{ \mathcal{L}^n(E)}{\theta \mathcal{L}^n(3C)}.\]
\item Putting it all together, we get
\begin{eqnarray}
\mathcal{L}^n(\mathcal{C}_d^E) &=& \mathcal{L}^n(\cup_{C\in\mathcal{C}} C)\nonumber\\
& = & N_{\mathcal{C}^E_d} \mathcal{L}^n(C) \nonumber\\
&\leq & 3^n \frac{\mathcal{L}^n(E) }{\theta \mathcal{L}^n(3C)} \mathcal{L}^n(C) \nonumber\\
& = & \frac{\mathcal{L}^n(E)}{\theta}.
\end{eqnarray}
\end{enumerate}
\item This shows that if $E= \cup_{x\in F}\bar{B}(x,r)$, then
\[ \mathcal{L}^n(\mathcal{C}^E_d) \leq\frac{1}{\theta}\mathcal{L}^n(E).\]
\end{enumerate}
We now have two conclusions:
\begin{description}
\item[Regularized sets]We notice that for any fixed $r_0 > 0$, as long as we pick $d_0$ big enough, then $r < r_0$ and $d > d_0$ imply that $E= \cup_{x\in F}\bar{B}(x,r)$ satisfies
\[ \mathcal{L}^n(\mathcal{C}^E_d)
\leq\frac{1}{\theta(n)}\mathcal{L}^n(E),\] for a $\theta(n) > 0$ that
depends on $n$ but not on $F$.
\item[Regular sets] Now suppose that \[F \in \mathcal{R}_m \equiv \{W\subset\mathbf{R}^n \;| \; mr^n < \mathcal{L}^n(W\cap \bar{B}(x,r)),\; \forall \; x \in W \text{ and } r< r_0\}.\] Then we immediately get the same result: for a big enough $d$ (depending only on $r_0$),
\[ \mathcal{L}^n(\mathcal{C}^F_d)
\leq\frac{1}{\theta(m)}\mathcal{L}^n(F),\] where $\theta(m) > 0$
depends only on the regularity class that $F$ lives in and not on
which subset in that class we cover with the cubes.
\end{description}
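The bound $\mathcal{L}^n(\mathcal{C}^E_d) \leq \frac{1}{\theta}\mathcal{L}^n(E)$ can be checked numerically in the simplest case, a single closed disk in $\mathbf{R}^2$. The sketch below (Python; \texttt{ball\_cover\_ratio} is our own, illustrative name) counts the dyadic squares meeting the disk and compares the measure of their union with the area of the disk; the ratio stays bounded and in fact approaches $1$ as $d$ grows.

```python
import math

def ball_cover_ratio(r, d):
    """L^2(C^E_d) / L^2(E) for E the closed disk of radius r centered at
    (1/2, 1/2), where C^E_d is the union of dyadic squares of edge 1/2^d
    meeting E."""
    h = 1.0 / 2 ** d
    count = 0
    lo = int((0.5 - r) / h) - 1          # index range sure to contain E
    hi = int((0.5 + r) / h) + 1
    for i in range(lo, hi + 1):
        for j in range(lo, hi + 1):
            # closest point of the square [ih,(i+1)h] x [jh,(j+1)h] to center
            cx = min(max(0.5, i * h), (i + 1) * h)
            cy = min(max(0.5, j * h), (j + 1) * h)
            if (cx - 0.5) ** 2 + (cy - 0.5) ** 2 <= r * r:
                count += 1
    return count * h * h / (math.pi * r * r)

print(ball_cover_ratio(0.25, 3))   # roughly 1.91
print(ball_cover_ratio(0.25, 8))   # close to 1
```

Since the cover always contains the disk, the ratio is at least $1$; the excess comes from boundary squares, whose total measure shrinks like $h$ times the perimeter.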
\subsection{Minkowski Content}
\begin{defn}
\emph{(Minkowski content).} Let $W\subset \mathbf{R}^n$, and let $W_r \equiv \{x \mid d(x,W)<r\}$. The $(n-1)$-dimensional Minkowski Content is defined as $\mathcal{M}^{n-1}(W)\equiv \lim_{r \rightarrow 0} \frac{\mathcal{L}^n(W_r)}{2r}$, when the limit exists (see Figure \ref{Min}).
\end{defn}
\begin{defn}\label{rect}
\emph{(($\mathcal{H}^{m},m$)-rectifiable set)}. A set $W\subset \mathbf{R}^n$ is called {\bf ($\mathcal{H}^{m},m$)-rectifiable} if $\mathcal{H}^m(W)<\infty$ and $\mathcal{H}^m$-almost all of $W$ is contained in the union of the images of countably many Lipschitz functions from $\mathbf{R}^m$ to $\mathbf{R}^n$. We will use {\bf rectifiable} and {\bf ($\mathcal{H}^{m},m$)-rectifiable} interchangeably when the dimension of the sets is clear from the context.
\end{defn}
\begin{figure}[H]
\begin{center}
\scalebox{0.5}{\input{Minkowski-krv.pdf_t}}
\caption{Minkowski Content.}\label{Min}
\end{center}
\end{figure}
\begin{defn}[m-rectifiable]
We will say that $E\subset\mathbf{R}^n$ is {\bf $m$-rectifiable} if there is a Lipschitz function mapping a bounded subset of $\mathbf{R}^m$ onto $E$.
\end{defn}
\begin{thm}\label{mink-equals-hausdorff}
$\mathcal{M}^{n-1}(W)=\mathcal{H}^{n-1}(W)$ when $W$ is a closed, $(n$-$1)$-rectifiable set.
\end{thm}
See Theorem 3.2.39 in \cite{fed69} for a proof.
\begin{rem}
Notice that $m$-rectifiable is more restrictive than $(\mathcal{H}^m,m)$-rectifiable. In fact, Theorem~\ref{mink-equals-hausdorff} is false for $(\mathcal{H}^m,m)$-rectifiable sets. See the notes at the end of Section 3.2.39 in \cite{fed69} for details.
\end{rem}
Now, let $W$ be ($n$-$1$)-rectifiable, set $r_d \equiv
\sqrt{n}\left(\frac{1}{2^d}\right)$, and choose $r_\delta$ small
enough so that
\[
\mathcal{L}^n(W_{r_d}) \leq \mathcal{M}^{n-1}(W)2r_d + \delta,
\]
for all $d\in \mathbb{N}\cup\{0\}$ such that $r_d\leq r_\delta.$
(Note: Because the diameter of an $n$-cube with edge length
$\frac{1}{2^d}$ is $r_d = \sqrt{n}\left(\frac{1}{2^d}\right)$ , no point of
$\mathcal{C}_d^W$ can be farther than $r_d$ away from $W$. Thus
$\mathcal{C}_d^W\subset W_{r_d}$.)
Assume that $\mathcal{L}^n(E) \neq 0$ and $\partial E$ is ($n$-$1$)-rectifiable. Letting $W\equiv \partial E$, we have
\begin{eqnarray*}
\mathcal{L}^n(\mathcal{C}^E_d)-\mathcal{L}^n(E) &\leq& \mathcal{L}^n(W_{r_d})\\
& \leq& \mathcal{M}^{n-1}(\partial E)2r_d + \delta\\
&\leq& \mathcal{M}^{n-1}(\partial E)2r_\delta + \delta
\end{eqnarray*}
so that
\begin{equation}\label{eq:mink-bound}
\mathcal{L}^n(\mathcal{C}^E_d)\leq (1+\oldhat{\delta})\mathcal{L}^n(E), \ \ \text{where} \ \oldhat{\delta}=\frac{\mathcal{M}^{n-1}(\partial E)2r_\delta + \delta}{\mathcal{L}^n(E)}.
\end{equation}
Since we control $r_\delta$ and $\delta$, we can make $\oldhat{\delta}$ as small as we like, and we have a sufficient condition to establish Equation (\ref{B}) above.\\
\noindent {\bf The result}: Let $\oldhat{\delta}$ be as in
Equation~(\ref{eq:mink-bound}), and let $E\subset \mathbf{R}^n$ satisfy
$\mathcal{L}^n(E) \neq 0$. Suppose that $\partial E$ (which is
automatically closed) is ($n$-$1$)-rectifiable and $\mathcal{H}^{n-1}(\partial
E)<\infty$. Then for every $\delta>0$ there exists a $d_0$ such that
for all $d\geq d_0$,
\[\mathcal{L}^n(\mathcal{C}^E_d)\leq (1+\oldhat{\delta})\mathcal{L}^n(E).\]
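The definition of Minkowski content can also be tested numerically: for the unit circle $W$ in $\mathbf{R}^2$, $W_r$ is the annulus of radii $1-r$ and $1+r$, so $\mathcal{L}^2(W_r)/(2r) = 2\pi = \mathcal{H}^1(W)$ for every $0<r<1$. The sketch below (Python; \texttt{minkowski\_content\_estimate} is our own, illustrative name) estimates $\mathcal{L}^2(W_r)$ by counting grid cells whose centers lie in $W_r$, and recovers $2\pi$ approximately.

```python
import math

def minkowski_content_estimate(r, n_grid=1000):
    """Estimate M^1(unit circle) = L^2(W_r) / (2r) for small r > 0 by
    counting cells of a grid on [-2, 2]^2 whose centers lie within
    distance r of the circle."""
    h = 4.0 / n_grid
    area = 0.0
    for i in range(n_grid):
        x = -2.0 + (i + 0.5) * h
        for j in range(n_grid):
            y = -2.0 + (j + 0.5) * h
            if abs(math.hypot(x, y) - 1.0) < r:
                area += h * h
    return area / (2 * r)

print(minkowski_content_estimate(0.1))  # approximately 2*pi = 6.283...
```

The grid error is on the order of $h$ times the length of the two boundary circles of the annulus, so refining the grid (or shrinking $r$ with the grid) sharpens the estimate.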
\begin{prob}
Suppose that $E\subset\mathbf{R}^n$ is bounded. Show that for any $r > 0$,
$E_r$, the set of points that are at most a distance $r$ from $E$,
has a $(\mathcal{H}^{n-1},n-1)$-rectifiable boundary. Show this by showing
that $\partial E_r$ is contained in a finite number of graphs of
Lipschitz functions from $\mathbf{R}^{n-1}$ to $\mathbf{R}$. Hint: cut $E_r$ into
small chunks $F_i$ with common diameter $D \ll r$ and prove that
$(F_i)_r$ is the union of a finite number of Lipschitz graphs.
\end{prob}
\begin{prob}
Can you show that in fact the boundary of $E_r$, $\partial E_r$, is
actually ($n$-$1$)-rectifiable? See if you can use the results of the
previous problem to help you.
\end{prob}
\begin{rem}
We can cover a union $E$ of open balls of radius $r$, whose centers
are bounded, with a cover $\mathcal{C}^E_d$ satisfying
Equation~(\ref{B}). In this case, $\partial E$
certainly meets the requirements for the result just shown.
\end{rem}
\subsection{Smooth Boundary, Positive Reach}
\label{sec:reach}
In this section, we show that if $\partial E $ is \textit{smooth} (at
least $C^{1,1}$), then $E$ has positive \textit{reach} allowing us to
get an even cleaner bound, depending in a precise way on the curvature
of $\partial E$.
We will assume that $E$ is closed. Define $E_r = \{x\in
\mathbf{R}^n \, | \, \operatorname{dist}(x,E) \leq r\}$, $\operatorname{cls}(x) \equiv \{y\in E \; | \; d(x,E) = |x-y|\}$,
and $\operatorname{unique}(E) = \{x \; | \; \operatorname{cls}(x)\text{ is a single point}\}$.
\begin{defn}[Reach]
The \textbf{reach} of $E$, $\operatorname{reach}(E)$, is defined
\[\operatorname{reach}(E)\equiv \sup \{r \; | \; E_r \subset \operatorname{unique}(E)\}.\]
\end{defn}
\begin{rem}
Sets of positive reach were introduced by Federer in 1959~\cite{federer1959curvature} in a paper that also introduced the famous coarea formula.
\end{rem}
\begin{rem}
If $E\subset\mathbf{R}^n$ is ($n$-$1$)-dimensional and $E$
is closed, then $E = \partial E$.
\end{rem}
Another equivalent definition involves rolling balls around the boundary of $E$.
The closed ball $\bar{B}(x,r)$ {\bf touches} $E$ if \[\bar{B}(x,r)\cap E \subset \partial\bar{B}(x,r)\cap\partial E.\]
\begin{defn}
The \textbf{reach} of $E$, $\operatorname{reach}(E)$, is defined
\[\operatorname{reach}(E)\equiv \sup \{r \; | \text{ every ball of radius $r$ touching $E$ touches it at a single point}\}.\]
\end{defn}
Put a little more informally, $\operatorname{reach}(E)$ is the supremum of
radii $r$ such that each ball of radius $r$ rolling around $E$ touches $E$ at only one point (see Figure \ref{reach}).
\begin{figure}[H]
\begin{center}
\input{reach.pdf_t}
\caption{Positive and Non-positive Reach.}\label{reach}
\end{center}
\end{figure}
As mentioned above, if $\partial E$ is $C^{1,1}$, then it has positive
reach (see Remark 4.20 in \cite{federer1959curvature}). Here $\partial E$
is $C^{1,1}$ if, for all $x\in \partial E$, there is a neighborhood of $x$, $U_x\subset
\mathbf{R}^n$, such that after a suitable change of coordinates, there is a
$C^{1,1}$ function $f:\mathbf{R}^{n-1}\rightarrow\mathbf{R}$ such that $\partial E\cap
U_x$ is the graph of $f$. (Recall that a function is $C^{1,1}$ if its
derivative is Lipschitz continuous.) This implies, among other
things, that the (symmetric) second fundamental form of $\partial E$
exists $\mathcal{H}^{n-1}$-almost everywhere on $\partial E$. The fact that
$\partial E$ is $C^{1,1}$ implies that at $\mathcal{H}^{n-1}$-almost every
point of $\partial E$, the $n-1$ principal curvatures $\kappa_i$ of
our set exist and $|\kappa_i| \leq \frac{1}{\operatorname{reach}(\partial E)}$ for
$1\leq i \leq n-1$.
We will use this fact to determine a bound for the $(n-1)$-dimensional
change in area as the boundary of our set is expanded outwards or
contracted inwards by $\epsilon$ (see Figure \ref{lep}, Diagram
1). Let us first look at this in $\mathbf{R}^2$ by examining the following
ratios of lengths of expanded or contracted arcs for sectors of a ball
in $\mathbf{R}^2$ as shown in Diagram 2 in Figure \ref{lep} below.
\begin{figure}[H]
\begin{center}
\input{lepsilon.pdf_t}
\caption{Moving Out and Sweeping In.}\label{lep}
\end{center}
\end{figure}
\[
\frac{\mathcal{H}^1(l_\epsilon)}{\mathcal{H}^1(l)}=\frac{(r+\epsilon)\theta}{r\theta}=1+\frac{\epsilon}{r}=1+\epsilon \kappa
\]
\[
\frac{\mathcal{H}^1(l_{-\epsilon})}{\mathcal{H}^1(l)}=\frac{(r-\epsilon)\theta}{r\theta}=1-\frac{\epsilon}{r}=1-\epsilon \kappa,
\]
where $\kappa = \frac{1}{r}$ is the principal curvature of the circle (the boundary of the 2-ball); we can think of $r = \frac{1}{\kappa}$ as the reach of a set $E\subset \mathbf{R}^2$ with $C^{1,1}$-smooth boundary. \\
The Jacobian for the normal map pushing in or out by $\epsilon$,
which by the area formula is the factor by which the area
changes, is given by $\prod_{i=1}^{n-1}(1\pm \epsilon \kappa_i)$ (see
Figure~\ref{lep}, Diagram 1). If we define $\oldhat{\kappa}\equiv
\max\{|\kappa_1|,|\kappa_2|,\ldots, |\kappa_{n-1}|\}$, then
we have the following ratios:
\[
\text{Max Fractional Increase of $\mathcal{H}^{n-1}$ boundary ``area" Moving Out:}
\]
\[
\prod_{i=1}^{n-1}(1+\epsilon \kappa_i) \leq (1+\epsilon \oldhat{\kappa})^{n-1}.
\]
\[
\text{Max Fractional Decrease of $\mathcal{H}^{n-1}$ boundary ``area" Sweeping In:}
\]
\[
\prod_{i=1}^{n-1}(1-\epsilon \kappa_i) \geq (1-\epsilon \oldhat{\kappa})^{n-1}.
\]
\begin{rem}
Notice that $\oldhat{\kappa} = \frac{1}{\operatorname{reach}(\partial E)}$.
\end{rem}
For a ball, we readily find the value of the ratio
\begin{eqnarray}
\label{eq:ball-ratio}
\frac{\mathcal{L}^n(B(0,r+\epsilon))}{\mathcal{L}^n(B(0,r))} &=& \left( \frac{r+\epsilon}{r} \right)^n\nonumber \\
& = & (1+\epsilon\kappa)^n \ \ \text{ (setting $\delta = \epsilon\kappa$)}\nonumber \\
& = & (1+\delta)^n,
\end{eqnarray}
where $\kappa = \frac{1}{r}$ is the curvature of the ball along any geodesic.
Now we calculate the bound we are interested in for $E$, assuming
$\partial E$ is $C^{1,1}$. Define $E_\epsilon \equiv \{x \in \mathbf{R}^n
\mid d(x,E)<\epsilon\}$. We first compute a bound for
\begin{align}
\frac{\mathcal{L}^n(E_\epsilon)}{\mathcal{L}^n(E)} &= \frac{\mathcal{L}^n(E)+\mathcal{L}^n(E_\epsilon \setminus E)}{\mathcal{L}^n(E)} \nonumber \\
&= 1 + \frac{\mathcal{L}^n(E_\epsilon \setminus E)}{\mathcal{L}^n(E)}. \label{ratio}
\end{align}
Since $\kappa_i$ is a function of $x\in \partial E$ defined $\mathcal{H}^{n-1}$-almost everywhere, we may set up the integral below over $\partial E$ and do the actual computation over $\partial E \setminus K$, where $K$ is the $\mathcal{H}^{n-1}$-measure-zero set on which some $\kappa_i$ is not defined. Computing bounds for the numerator and denominator separately in the second term in (\ref{ratio}), we find, by way of the Area Formula \cite{morgan-2008-geometric},
\begin{align}
\mathcal{L}^n(E_\epsilon \setminus E) &= \int^\epsilon_0 \int_{\partial E} \prod_{i=1}^{n-1}(1+r \kappa_i)d\mathcal{H}^{n-1}dr \nonumber \\
&\leq \int^\epsilon_0 \int_{\partial E} (1+r \oldhat{\kappa})^{n-1}d\mathcal{H}^{n-1}dr \nonumber \\
&= \mathcal{H}^{n-1}(\partial E) \left. \frac{(1+r \oldhat{\kappa})^n}{n\oldhat{\kappa}} \right\vert^\epsilon_0 \nonumber \\
&= \mathcal{H}^{n-1}(\partial E) \left( \frac{(1+\epsilon \oldhat{\kappa})^n}{n\oldhat{\kappa}}-\frac{1}{n\oldhat{\kappa}}\right) \label{numbound}
\end{align}
and
\begin{align}
\mathcal{L}^n(E) &\geq \int^{r_0}_0 \int_{\partial E} \prod_{i=1}^{n-1}(1-r \kappa_i)d\mathcal{H}^{n-1}dr \nonumber \\
&\geq \int^{r_0}_0 \int_{\partial E} (1-r \oldhat{\kappa})^{n-1}d\mathcal{H}^{n-1}dr \nonumber \\
&= \mathcal{H}^{n-1}(\partial E) \left. \frac{-(1-r \oldhat{\kappa})^n}{n\oldhat{\kappa}} \right\vert^{r_0}_0 \nonumber \\
&= \frac{\mathcal{H}^{n-1}(\partial E)}{n\oldhat{\kappa}}, \ \ \ \text{when} \ r_0=\frac{1}{\oldhat{\kappa}}. \label{denombound}
\end{align}
From (\ref{ratio}), (\ref{numbound}), and (\ref{denombound}), we have
\begin{align}
\frac{\mathcal{L}^n(E_\epsilon)}{\mathcal{L}^n(E)} &\leq 1+ \frac{\mathcal{H}^{n-1}(\partial E) \left( \frac{(1+\epsilon \oldhat{\kappa})^n}{n\oldhat{\kappa}}-\frac{1}{n\oldhat{\kappa}}\right)}{\frac{\mathcal{H}^{n-1}(\partial E)}{n\oldhat{\kappa}}} \nonumber \\
&= (1+\epsilon \oldhat{\kappa})^n \ \ \ (\text{setting} \ \delta = \epsilon \oldhat{\kappa}) \nonumber \\
&= (1+\delta)^n.
\end{align}
From this we get that
\[ \mathcal{L}^n(E_\epsilon) \leq (1+\epsilon\oldhat{\kappa})^n \mathcal{L}^n(E)\]
so that
\[\mathcal{L}^n(\mathcal{C}_{d(\epsilon)}^E) \leq (1+\epsilon\oldhat{\kappa})^n \mathcal{L}^n(E) \]
where $d(\epsilon) = \log_2(\frac{\sqrt{n}}{\epsilon})$ is found by
solving $\sqrt{n}\frac{1}{2^d} = \epsilon$.
Thus, when $\partial E$ is smooth enough to have positive reach, we
find a nice bound of the type in Equation~(\ref{B}), with a precisely
known dependence on curvature.
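For a convex $E$, the bound $\mathcal{L}^n(E_\epsilon) \leq (1+\epsilon\oldhat{\kappa})^n\mathcal{L}^n(E)$ can be checked against the classical Steiner formula, which for a convex body in $\mathbf{R}^2$ gives $\mathcal{L}^2(E_\epsilon) = A + P\epsilon + \pi\epsilon^2$ exactly, with $A$ the area and $P$ the perimeter. The Python sketch below does this for an ellipse with semi-axes $a \geq b$, whose maximal principal curvature is $\oldhat{\kappa} = a/b^2$; the function names are ours, purely for illustration.

```python
import math

def ellipse_perimeter_area(a, b, n=20000):
    """Perimeter (by polygonal approximation) and exact area of the
    ellipse x = a cos t, y = b sin t."""
    per = 0.0
    for i in range(n):
        t0 = 2 * math.pi * i / n
        t1 = 2 * math.pi * (i + 1) / n
        per += math.hypot(a * (math.cos(t1) - math.cos(t0)),
                          b * (math.sin(t1) - math.sin(t0)))
    return per, math.pi * a * b

a, b, eps = 1.0, 0.5, 0.1
per, area = ellipse_perimeter_area(a, b)
kappa_hat = a / b**2                        # max curvature of the ellipse
tube = area + per * eps + math.pi * eps**2  # Steiner: L^2(E_eps), E convex
print(tube <= (1 + eps * kappa_hat)**2 * area)  # True
```

The Steiner value is linear in $\epsilon$ to leading order, while the curvature bound is a polynomial in $\epsilon\oldhat{\kappa}$, so the inequality has visible slack unless the curvature is constant, as for a disk.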
\section{A Boundary Conjecture}
What can we say about boundaries? Can we bound
\[
\frac{\mathcal{H}^{n-1}(\partial \mathcal{C}^E_d)}{\mathcal{H}^{n-1}(\partial E)}?
\]
\begin{figure}[H]
\begin{center}
\scalebox{1.0}{\input{cubeboundary.pdf_t}}
\caption{Cubes on the Boundary.}\label{cube}
\end{center}
\end{figure}
\begin{con}\label{con}
If $E\subset \mathbf{R}^n$ is compact and $\partial E$ is $C^{1,1}$,
\[
\limsup_{d \rightarrow \infty} \frac{\mathcal{H}^{n-1}(\partial \mathcal{C}^E_d)}{\mathcal{H}^{n-1}(\partial E)}\leq n.
\]
\end{con}
\begin{proof}[Brief Sketch of Proof for $n=2$] $ $
\begin{enumerate}
\item Since $\partial E$ is $C^{1,1}$, we can zoom in far enough
at any point $x\in\partial E$ so that it looks flat.
\item Let $C$ be a cube in the cover $\mathcal{C}(E,d)$ that intersects
the boundary near $x$ and has faces in the boundary
$\partial\mathcal{C}^E_d$. Define $F = \partial C \cap \partial
\mathcal{C}^E_d$.
\item (Case 1) Assume that the tangent at $x$, $T_x\partial E$, is
not parallel to either edge direction of the cubical cover (see Figure~\ref{cube2}).
\begin{enumerate}
\item Let $\Pi$ be the projection onto the horizontal axis and notice
that $\frac{\mathcal{H}^1(F)}{\mathcal{H}^1(\Pi(F))} \leq 2+\epsilon$ for any $\epsilon>0$, once $d$ is large enough.
\item This is stable under perturbations, which is important since the actual piece
of the boundary $\partial E$ we are dealing with is not a straight line.
\end{enumerate}
\item (Case 2) Suppose that the tangent at $x$, $T_x\partial E$,
is parallel to one of the two faces of
the cubical cover, and let $U_x$ be a neighborhood of
$x\in \partial E$.
\begin{enumerate}
\item Zooming in far enough, we see that the cubical boundary can
only oscillate up and down so that the maximum ratio for any
horizontal tangent is (locally) $2$.
\item But we can create a sequence of examples attaining ratios
as close to $2$ as we like: a careful sequence of
perturbations attains a ratio locally
of $2-\epsilon$ for any $\epsilon>0$ (see Figure~\ref{cube}).
\item That is, we can create perturbations that, on an unbounded
set of $d$'s, $\{d_i\}_{i=1}^\infty,$ yield a ratio
$\frac{\mathcal{H}^1(\partial\mathcal{C}^E_{d_i}\cap \, U_x)}{\mathcal{H}^1(\partial E\cap \, U_x)} >
2-\epsilon$, and we can send $\epsilon\rightarrow 0$.
\end{enumerate}
\item Use the compactness of $\partial E$ to put this all together
into a complete proof.
\end{enumerate}
\end{proof}
\begin{figure}[H]
\begin{center}
\scalebox{0.75}{\input{conjecture.pdf_t}}
\caption{The case in which $\theta$, the angle between $T_x \partial E$ and the $x$-axis, is neither $0$ nor $\pi/2$.}\label{cube2}
\end{center}
\end{figure}
\begin{prob} Suppose we exclude $C$'s that contain less than some
fraction $\theta$ of $E$ (as defined in Conjecture \ref{con}) from the cover to get the reduced cover
$\hat{\mathcal{C}}_d^E$. In this case, what is the optimal bound
$B(\theta)$ for the ratio of boundary measures
\[\limsup_{d\rightarrow\infty}\frac{\mathcal{H}^{n-1}(\partial
\hat{\mathcal{C}}_d^E)}{\mathcal{H}^{n-1}(\partial E)} \leq B(\theta)?\]
\end{prob}
\section{Other Representations}
\subsection{The Jones $\beta$ Approach}\label{jonessection2}
As mentioned above, another approach to representing sets in $\mathbf{R}^n$,
developed by Jones \cite{jon90}, and generalized by Okikiolu
\cite{oki92}, Lerman \cite{ler03}, and Schul \cite{sch07}, involves
the question of under what conditions a bounded set $E$ can be
contained within a rectifiable curve $\Gamma$, which Jones likened to
the Traveling Salesman Problem taken over an infinite set. While Jones
worked in $\mathbb{C}$ in his original paper, the work of Okikiolu,
Lerman, and Schul extended the results to $\mathbf{R}^n$ for all $n\in
\mathbb{N}$, as well as to infinite-dimensional space.
Recall that a compact, connected set $\Gamma \subset \mathbf{R}^2$ is
rectifiable if it is contained in the image of a countable set of
Lipschitz maps from $\mathbf{R}$ into $\mathbf{R}^2$, except perhaps for a set of
$\mathcal{H}^1$ measure zero. We have the result that if $\Gamma$ is compact
and connected, then $l(\Gamma)=\mathcal{H}^1(\Gamma)<\infty$ implies it
is rectifiable (see pages 34 and 35 of
\cite{falconer-1986-geometry}).
Let $W_C$ denote the width of the thinnest cylinder containing the set $E$ in the dyadic $n$-cube $C$ (see Figure \ref{jones2}), and define
the $\beta$ number of $E$ in $C$ to be
\[
\beta_E(C)\equiv \frac{W_C}{l(C)}.
\]
\begin{figure}[H]
\begin{center}
\scalebox{.6}{
\input{Jones2.pdf_t}}
\caption{Jones' $\beta$ Numbers and $W_C$. Each of the two green lines in a cube $C$ is an equal distance away from the red line and is chosen so that the green lines define the thinnest cylinder containing $E\cap C$. Then the red lines are varied over all possible lines in $C$ to find that red line whose corresponding cylinder is the thinnest of all containing cylinders. In this sense, the minimizing red lines are the best fit to $E$ in each $C$.}\label{jones2}
\end{center}
\end{figure}
Jones' main result is this theorem:
\begin{thm}\cite{jon90}\label{jones}
Let $E$ be a bounded set and $\Gamma$ be a connected set both in
$\mathbf{R}^2$. Define $\beta_\Gamma(C)\equiv \frac{W_C}{l(C)},$ where
$W_C$ is the width of the thinnest cylinder in the $2$-cube $C$
containing $\Gamma.$ Then, summing over all possible
$C$, $$\beta^2(\Gamma)\equiv \sum_C (\beta_\Gamma(3C))^2l(C)<\eta
\,l(\Gamma)<\infty, \text{ where $\eta$ is a universal constant}.$$ \\ Conversely, if
$\beta^2(E)<\infty$, there is a connected set $\Gamma,$ with $E
\subset \Gamma,$ such that \[l(\Gamma) \leq (1+ \delta) \operatorname{diam}(E) +
\alpha_\delta\beta^2(E),\] for any $\delta>0$, where
$\alpha_\delta = \alpha(\delta)\in \mathbf{R}.$
\end{thm}
Jones' main result, generalized to $\mathbf{R}^n$, is that a bounded set $E \subset \mathbf{R}^n$ is contained in a rectifiable curve $\Gamma$ if and only if
\[
\beta^2(E)\equiv \sum_C (\beta_E(3C))^2l(C)<\infty,
\]
where the sum is taken over all dyadic cubes.
Note that each $\beta$ number of $E$ is calculated over the dyadic
cube $3C$, as defined in Section \ref{union}. Intuitively, we see
that in order for $E$ to lie within a rectifiable curve $\Gamma$, $E$
must look flat as we zoom in on points of $E$ since $\Gamma$ has
tangents at $\mathcal{H}^1$-almost every point $x\in\Gamma$. Since
both $W_C$ and $l(C)$ are in units of length, $\beta_E(C)$ is a scale-invariant measure of
the flatness of $E$ in $C$. In higher dimensions, the
analogous cylinders' widths and cube edge lengths are also divided to
get a scale-invariant $\beta_E(C)$.
The notion of local linear approximation has been explored by many
researchers. See for example the work of Lerman and
collaborators~\cite{chen-2009-spectral,arias-2011-spectral,zhang-2012-hybrid,arias-2017-spectral}. While
distances other than the sup norm have been considered when
determining closeness to the approximating line, see~\cite{ler03},
there is room for more exploration there. In the section below,
\emph{Problems and Questions}, we suggest an idea involving the
multiscale flat norm from geometric measure theory.
\subsection{A Varifold Approach}\label{Var2}
As mentioned above, a third way of representing sets in $\mathbf{R}^n$ uses
\emph{varifolds}. Instead of representing $E\subset \mathbf{R}^n$ by working
in $\mathbf{R}^n$, we work in the \emph{Grassmann Bundle}, $\mathbf{R}^n\times
G(n,m)$. Advantages include, for
example, the automatic encoding of tangent information directly into
the representation. By building into the representation this tangent
information, we make set comparisons where we care about tangent
structure easy and natural.
\begin{defn}[Grassmannian]
The $m$-dimensional Grassmannian in $\mathbf{R}^n$, \[G(n,m) =G(\mathbf{R}^n,m),\]
is the set of all $m$-dimensional planes through the origin.
\end{defn}
For example, $G(2,1)$ is the space of all lines through the origin
in $\mathbf{R}^2$, and $G(3,2)$ is the space of all planes through the origin
in $\mathbf{R}^3$. The Grassmann bundle $\mathbf{R}^n \times G(n,m)$ can be thought
of as a space where $G(n,m)$ is attached to each point in $\mathbf{R}^n$.
\begin{defn}[Varifold]
A varifold is a \emph{Radon measure} $\mu$ on the
Grassmann bundle $\mathbf{R}^n \times G(n,m)$.
\end{defn}
Let $\pi:\mathbf{R}^n \times G(n,m)\rightarrow\mathbf{R}^n$, $(x,g)\mapsto x$, be the
projection. Among the most common varifolds are those that arise from
rectifiable sets $E$. In this case the measure $\mu_E$ on $\mathbf{R}^n
\times G(n,m)$ is the pushforward of $m$-dimensional Hausdorff measure on $E$ by
the tangent map $T:x\rightarrow (x,T_xE)$.
Let $E\subset \mathbf{R}^n$ be an ($\mathcal{H}^{m},m$)-rectifiable set (see
Definition \ref{rect}). We know the approximate $m$-dimensional
tangent space $T_xE$ exists $\mathcal{H}^m$-almost everywhere since
$E$ is ($\mathcal{H}^{m},m$)-rectifiable, which in turn implies that, except
for an $\mathcal{H}^m$-measure 0 set, $E$ is contained in the union of the images of
countably many Lipschitz functions from $\mathbf{R}^m$ to $\mathbf{R}^n$.
The measure of $A\subset \mathbf{R}^n \times G(n,m)$ is given by $\mu_E(A)
= \mathcal{H}^m(T^{-1}(A))$. Let $S\equiv \{(x, T_xE) \, \vert \, x\in
E\}$, the section of the Grassmann bundle defining the
varifold. $S$, intersected with each fiber $\{x\}\times
G(n,m)$, is the single point $(x, T_xE)$, and so we could just as
well use the projection $\pi$ in which case we would have $\mu_E(A)
= \mathcal{H}^m(\pi(A\cap S))$.
\begin{defn}
A \emph{rectifiable varifold} is a Radon measure $\mu_E$ defined by an ($\mathcal{H}^{m},m$)-rectifiable set $E\subset \mathbf{R}^n$. Recalling $S\equiv \{(x, T_xE) \, \vert \, x\in E\}$, let $A \subset \mathbf{R}^n \times G(n,m)$ and define
\[
\mu_E({A})= \mathcal{H}^m(\pi(A \cap S)).
\]
\end{defn}
We will call $E=\pi (S)$ the ``downstairs'' representation of $S$ for
any $S\subset\mathbf{R}^n \times G(n,m)$, and we will call $S =
T(E)\subset\mathbf{R}^n\times G(n,m)$ the ``upstairs'' representation of any
rectifiable set $E$, where $T$ is the tangent map over the rectifiable
set $E$.
\bigskip
\begin{figure}[H]
\begin{center}
\input{varifolds1a.pdf_t}
\caption{Working Upstairs.}\label{var1ctoo}
\end{center}
\end{figure}
Figure~\ref{var1ctoo}, repeated from above, illustrates how the
tangents are built into this representation of subsets of $\mathbf{R}^n$,
giving us a sense of why this representation might be useful. Suppose
we have three line segments almost touching each other, i.e. appearing
to touch as subsets of $\mathbf{R}^2$. The upstairs view puts each segment at
a different height corresponding to the angle of the segment. So,
these segments are not close in any sense in $\mathbf{R}^2\times G(n,m)$. Or
consider a straight line segment and a very fine sawtooth curve that
may look practically indistinguishable, but will appear drastically
different upstairs.
We can use varifold representations in combination with a cubical
cover to get a quantized version of a curve that has tangent
information as well as position information. If, for example, we cover
a set $S \subset \mathbf{R}^2 \times G(2,1)$ with cubes of edge length
$\frac{1}{2^d}$ and use this cover as a representation for $S$, we
know the position and angle to within $\frac{\sqrt{3}}{2^{d+1}}$. In other
words, we can approximate our curve $S \subset \mathbf{R}^2 \times G(2,1)$ by the
union of the centers of the cubes (with edge length $\frac{1}{2^d}$)
intersecting $S$. This simple idea seems to merit further exploration.
\section{Problems and Questions}
\begin{prob}
Find a smooth $\partial E$, with $E \subset \mathbf{R}^n$, such that
\[
\mathcal{H}^{n-1}(\partial \mathcal{C}^E_d) / \mathcal{H}^{n-1}(\partial E)=0 \ \forall d.
\]
\end{prob}
\textbf{Hint:} Look at unbounded $E \subset\mathbf{R}^2$ such that $\mathcal{L}^2(E^c) < \infty$.
\begin{prob}
Suppose that $E$ is open and $\mathcal{H}^{n-1}(\partial E) < \infty$. Show that if the \textbf{reach} of $\partial E $ is positive, then
\[
\liminf_{d \rightarrow \infty} \frac{\mathcal{H}^{n-1}(\partial \mathcal{C}^E_d)}{\mathcal{H}^{n-1}(\partial E)}\geq 1.
\]
\end{prob} \textbf{Hint:} First show that $\partial E$ has
unique inward and outward pointing normals. (Takes a bit of work!)
Next, examine the map $F: \partial E \rightarrow \mathbf{R}^n$, where
$F(x)=x+\eta(x) N(x)$, $N(x)$ is the normal to $\partial E$ at $x$,
and $\eta(x)$ is a positive real-valued function chosen so that
\emph{locally} $F(\partial E) = \partial \mathcal{C}^E_d$. Use the
Binet-Cauchy Formula to find the Jacobian, and then apply the Area
Formula. To do this calculation, notice that at any point
$x_0\in\partial E$ we can choose coordinates so that
$T_{x_0}\partial E$ is horizontal (i.e. $N(x_0) = e_n$). Calculate
using $F: T_{x_0}\partial E = \mathbf{R}^{n-1} \rightarrow \mathbf{R}^n$ where
$F(x)=x+\eta(x) N(x)$. (See Chapter 3 of \cite{evans-1992-1} for the
Binet-Cauchy formula and the Area Formula.)
\begin{prob}
Suppose $E$ has dimension $n-1$, positive reach, and is locally regular (in $\mathbf{R}^n$). \\
a.) Find bounds for $\mathcal{H}^{n}( \mathcal{C}^E_d) / \frac{1}{2^d}.$\\
b.) How does this ratio relate to $\mathcal{H}^{n-1}(E)$?
\end{prob}
\textbf{Hint:} Use the ideas in Section~\ref{sec:reach} to calculate a
bound on the volume of the tube with thickness $2
\frac{\sqrt{n}}{2^d}$ centered on $E$.
\begin{que}
Can we use the ``upstairs'' version of cubical covers to find better
representations for sets and their boundaries? (Of course, ``better"
depends on your objective!)
\end{que}
For the following question, we need the notion of the \emph{multiscale
flat norm}~\cite{morgan-2007-1}. The basic idea of this distance,
which works in spaces of oriented curves and surfaces of any dimension
(known as currents), is that we can decompose the curve or surface $T$
into $(T - \partial S) + \partial S$, \emph{but} we measure the cost
of the decomposition by adding the volumes of $T-\partial S$ and $S$
(not $\partial S$!). By volume, we mean the $m$-dimensional volume, or
$m$-volume of an $m$-dimensional object, so if $T$ is $m$-dimensional,
we would add the $m$-volume of $T-\partial S$ and the $(m+1)$-volume
of $S$ (scaled by the parameter $\lambda$). We get that
\[\Bbb{F}_\lambda(T) = \min_S M_m(T-\partial S) + \lambda
M_{m+1}(S).\] It turns out that $T-\partial S$ is the best
approximation to $T$ that has curvature bounded by
$\lambda$~\cite{allard-2007-1}. We exploit this in the following ideas
and questions.
\begin{rem}
Currents can be thought of as generalized oriented curves or
surfaces of any dimension $k$. More precisely, they are members of
the dual space to the space of $k$-forms. For the purposes of this
section, thinking of them as (perhaps unions of pieces of) oriented
$k$-dimensional surfaces $W$, so that $W$ and $-W$ are simply
oppositely oriented and cancel if we add them, will be
enough to understand what is going on. For a nice introduction to
the ideas, see for example the first few chapters of
\cite{morgan-2008-geometric}.
\end{rem}
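To make the formula for $\Bbb{F}_\lambda$ concrete, consider $T$ equal to the circle of radius $r$ in $\mathbf{R}^2$. One can check that the minimizer is either $S = \emptyset$ (keep the whole curve) or $S$ equal to the full disk (the curve is cancelled by $\partial S$), whichever is cheaper. The short sketch below, in Python with a function name of our own choosing, records this closed form.

```python
import math

def flat_norm_circle(r, lam):
    """Multiscale flat norm F_lam of the radius-r circle in R^2.

    Candidate decompositions: S = empty set, costing the length
    2*pi*r of the curve itself, or S = the full disk, costing
    lam * pi * r**2 (the curve is cancelled by the boundary of S).
    For the circle the minimizer is one of these two, so F_lam is
    their minimum; the crossover happens at lam = 2 / r."""
    keep_curve = 2 * math.pi * r          # M_1(T - dS) with S empty
    fill_disk = lam * math.pi * r * r     # lam * M_2(S), S the disk
    return min(keep_curve, fill_disk)
```

For $r=1$, small $\lambda$ prefers filling the disk ($\Bbb{F}_\lambda = \lambda\pi$), while large $\lambda$ keeps the circle ($\Bbb{F}_\lambda = 2\pi$); this matches the intuition that $T-\partial S$ is the best approximation of $T$ with curvature bounded by $\lambda$.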
\begin{que}
Choose $k\in \{1,2,3\}$. In what follows we focus on sets $\Gamma$
which are one-dimensional, the interior of a cube $C$ will be denoted
$C^o$, and we will work at some scale $d$, i.e. the edge length of the cube
will be $\frac{1}{2^d}$.
Consider the piece of $\Gamma$ in $C^o$, $\Gamma \cap C^o$. Inside the
cube $C$ with edge length $\frac{1}{2^d}$, we will use the flat norm
to
\begin{enumerate}
\item find an approximation of $\Gamma \cap C^o$ with curvature
bounded by $\lambda = 2^{d+k}$ and
\item find the distance of that approximation from $\Gamma \cap C^o$.
\end{enumerate}
This decomposition is then obtained by minimizing \[M_1((\Gamma\cap
C^o)-\partial S) + 2^{d+k}M_{2}(S) = \mathcal{H}^1((\Gamma\cap
C^o)-\partial S) + 2^{d+k}\mathcal{L}^2(S).\] The minimal $S$ will be
denoted $S_d$ (see Figure~\ref{fig:cube-flat-beta-1}).
\begin{figure}[H]
\centering
\scalebox{0.5}{\input{cube-flat-beta-1.pdf_t}}
\caption{Multiscale flat norm decomposition inspiring the definition of $\beta_\Gamma^{\Bbb{F}}$.}
\label{fig:cube-flat-beta-1}
\end{figure}
{\color{blue} Suppose that we define $\beta^{\mathbb{F}}_\Gamma(C)$
by \[\beta^{\mathbb{F}}_\Gamma(C) l(C) = 2^{d+k}\mathcal{L}^2(S_d)\]
so that \[\beta^{\mathbb{F}}_\Gamma(C) =
2^{2d+k}\mathcal{L}^2(S_d).\] What can we say about the properties
(e.g. rectifiability) of $\Gamma$ given the finiteness of $\sum_C
\left(\beta^{\mathbb{F}}_\Gamma(3C)\right)^2 l(C)$?}
\end{que}
\begin{que}
Can we get an advantage by using the flat norm decomposition as a preconditioner before
we find cubical cover approximations? For example, define
\[
\mathcal{F}_d^\Gamma \equiv \mathcal{C}^{\Gamma_d}_d \text{ and } \Gamma_d \equiv \Gamma - \partial S_d,
\]
\[
\text{where} \ S_d = \operatornamewithlimits{argmin}_S \left(\mathcal{H}^1(\Gamma-\partial S)+2^{d+k}\mathcal{L}^2(S)\right).
\]
Since the flat norm minimizers have bounded mean curvature, is this
enough to force the cubical covers to give us better quantitative
information on $\Gamma$? How about in the case in which $\Gamma
= \partial E$, $E\subset \mathbf{R}^2$?
\end{que}
\section{Further Exploration}
\label{sec:further-exploration}
There are a number of places to begin in exploring these ideas
further. Some of these works require significant dedication to master,
and it helps to have someone who has already found a
path into pieces of these areas whom you can ask questions of when you
first wade in. Nonetheless, if you remember that the language can
always be translated into pictures, and you make the effort to do so,
you can always make headway toward mastery. Here is an annotated list
with comments:
\begin{description}
\item[Primary Varifold References] Almgren's little
book~\cite{almgren-2001-1} and Allard's founding
contribution~\cite{allard-1972-1} are the primary sources for
varifolds. Leon Simon's book on geometric measure
theory~\cite{simon-1984-lectures} (available for free online) has a
couple of excellent chapters, one of which is an exposition of
Allard's paper.
\item[Recent Varifold Work] Both Buet and
collaborators~\cite{buet-2013-varifolds,buet-2014-quantitative,buet-2015-discrete,buet-2016-surface,buet-2016-varifold}
and Charon and
collaborators~\cite{charlier-2014-fshape,charon-2013-varifold,charon-2014-functional}
have been digging into varifolds with an eye to applications. While
these papers are a good start, there is still a great deal of
opportunity for the use and further development of varifolds. On
the theoretical front, there is the work of Menne and
collaborators~\cite{menne-2008-c2, menne-2009-some,
menne-2010-sobolev, menne-2012-decay, menne-2014-weakly,
kolasinski-2015-decay, menne-2016-pointwise, menne-2016-sobolev,
menne-2017-concept, menne-2017-geometric}. We want to call
special attention to the recent introduction to the idea of a
varifold that appeared in the December 2017 AMS
Notices~\cite{menne-2017-concept}.
\item[Geometric Measure Theory I] The field underlying the ideas here is
geometric measure theory. The fundamental treatise in the
subject is still Federer's 1969 \emph{Geometric Measure
Theory}~\cite{fed69} even though most people start by reading
Morgan's beautiful introduction to the
subject, \emph{Geometric Measure Theory: A Beginner's Guide}~\cite{morgan-2008-geometric} and Evans' \emph{Measure Theory and Fine Properties of Functions}~\cite{evans-1992-1}. Also recommended are Leon
Simon's lecture notes~\cite{simon-1984-lectures}, Francesco Maggi's
book that updates the classic Italian approach~\cite{mag12}, and
Krantz and Parks' \emph{Geometric Integration
Theory}~\cite{krantz-2008-geometric}.
\item[Geometric Measure Theory II] The book by
Mattila~\cite{mattila-1999-geometry} approaches the subject from the
harmonic-analysis-flavored thread of geometric measure theory. Some
use this as a first course in geometric measure theory, albeit one
that does not touch on minimal surfaces, which is the focus of the
other texts above. De Lellis' exposition \emph{Rectifiable Sets,
Densities, and Tangent Measures}~\cite{de-lellis-2008-1} or Preiss'
1987 paper \emph{Geometry of Measures in $\mathbf{R}^n$: Distribution,
Rectifiability, and Densities}~\cite{preiss-1987-geometry} is also
very highly recommended.
\item[Jones' $\beta$] In addition to the papers cited in the text
~\cite{jon90,oki92,ler03,sch07}, there are related works by David and
Semmes that we recommend. See for
example~\cite{david-1993-analysis}. There is also the applied work
by Gilad Lerman and his collaborators that is often inspired by
Jones' $\beta$ and his own development of Jones' ideas
in~\cite{ler03}; see
also~\cite{chen-2009-spectral,zhang-2012-hybrid,zhang-2009-median,arias-2011-spectral}
and the work by Maggioni and
collaborators~\cite{little-2009-estimation,allard-2012-multiscale}.
\item[Multiscale Flat Norm] The flat norm was introduced by Whitney in
1957~\cite{whitney-1957-geometric} and used to create a topology
on currents that permitted Federer and Fleming, in their landmark
paper in 1960~\cite{federer-1960-normal}, to obtain the existence of
minimizers. In 2007, Morgan and Vixie realized that a variational
functional introduced in image analysis was actually computing a
multiscale generalization of the flat norm~\cite{morgan-2007-1}. The
ideas are beginning to be explored in these papers
\cite{vixie-2010-multiscale, van-dyke-2012-thin,
ibrahim-2013-simplicial, vixie-2015-some, ibrahim-2016-flat,
alvarado-2017-lower}.
\end{description}
\section{Introduction}
\label{sec:intro}
In this paper we explain and illuminate a few ideas for (1)
representing sets and (2) learning from those representations. Though
some of the ideas and results we explain are likely written down
elsewhere (we are not aware of such references), our purpose
is not to claim priority for those pieces, but rather to stimulate
thought and exploration. Our primary intended audience is students of
mathematics, even though other, more mature mathematicians may find a
few of the ideas interesting. We believe that cubical covers can be
introduced earlier in a student's career and that both the
$\beta$ numbers idea introduced by Peter Jones and the idea of
varifolds pioneered by Almgren and Allard and now actively being
developed by Menne, Buet, and collaborators are still very much
underutilized by all (young and old!). To that end, we have written
this exploration, hoping that the questions and ideas presented here,
some rather elementary, will stimulate others to explore the ideas for
themselves.
We begin by briefly introducing cubical covers, Jones' $\beta$, and
varifolds, after which we look more closely at questions involving
cubical covers. Then both of the other approaches are explained in a
little bit of detail, mostly as an invitation to more exploration,
after which we close with problems for the reader and some unexplored
questions.
{\bf Acknowledgments:} LP thanks Robert Hardt and Frank Morgan for
useful comments and KRV thanks Bill Allard for useful conversations and
Peter Jones, Gilad Lerman, and Raanan Schul for introducing him to the
idea of the Jones' $\beta$ representations.
\newpage
\section{Representing Sets \& their Boundaries in $\mathbf{R}^n$}
\subsection{Cubical Refinements: Dyadic Cubes}
In order to characterize various sets in $\mathbf{R}^n$, we explore
the use of cubical covers whose cubes have side
lengths which are non-negative integer powers of $\frac{1}{2}$,
\textbf{dyadic cubes}, or more precisely, (closed) dyadic $n$-cubes
with sides parallel to the axes. Thus the side length at the $d$th subdivision
is $l(C)=\frac{1}{2^d}$, which can be made as small as desired.
Figure~\ref{fig:dyadic} illustrates this by looking at a unit cube
in $\mathbf{R}^2$ lying in the first quadrant with a vertex at the origin. We
then form a sequence of refinements by dividing each side length in
half successively, and thus quadrupling the number of cubes each time.
\begin{defn}
We shall say that the $n$-cube $C$ (with side length denoted as $l(C)$) is dyadic if
\[
C=\prod^n_{j=1}[m_j2^{-d},(m_j+1)2^{-d}], \ \ \ m_j \in \mathbb{Z}, \ d\in \mathbb{N}\cup\{0\}.
\]
\end{defn}
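A quick computational companion to the definition: the corner indices $m_j$ of a dyadic cube at level $d$ containing a point $x$ come from flooring $2^d x_j$. The helper names below are ours; for a point lying on a cube face, this sketch simply picks the cube whose lower corner is that point.

```python
import math

def dyadic_cube(x, d):
    """Corner indices (m_1, ..., m_n) of a dyadic cube at level d
    containing the point x: m_j = floor(x_j * 2**d), so the cube is
    the product of the intervals [m_j / 2**d, (m_j + 1) / 2**d]."""
    return tuple(math.floor(xi * 2**d) for xi in x)

def cube_bounds(m, d):
    """The interval factors of the dyadic cube with corner indices m."""
    return tuple((mj / 2**d, (mj + 1) / 2**d) for mj in m)
```

For instance, the point $(0.3, 0.7)$ at level $d = 2$ lies in the dyadic square $[\frac{1}{4},\frac{2}{4}]\times[\frac{2}{4},\frac{3}{4}]$.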
\begin{figure} [H]
\centering
\input{cubes.pdf_t}
\caption{Dyadic Cubes.}
\label{fig:dyadic}
\end{figure}
In this paper, we will assume $C$ to be a dyadic $n$-cube
throughout. We will denote the union of the dyadic $n$-cubes with edge
length $\frac{1}{2^d}$ that intersect a set $E\subset \mathbf{R}^n$ by
$\mathcal{C}^E_d$ and define $\partial\mathcal{C}^E_d$ to
be the boundary of this union (see Figure
\ref{fig:cubicalcover1}). Two simple questions we will explore for
their illustrative purposes are:
\begin{enumerate}
\item ``If we know $\mathcal{L}^n(\mathcal{C}^E_d)$, what can we say about $\mathcal{L}^n(E)$?'' and similarly,
\item ``If we know $\mathcal{H}^{n-1}(\partial\mathcal{C}^E_d)$, what can we say about $\mathcal{H}^{n-1}(\partial E)$?''
\end{enumerate}
\begin{figure}[H]
\centering
\scalebox{0.5}{\input{cubicalcover1.pdf_t}}
\caption{Cubical cover $\mathcal{C}^E_d$ of a set $E$.}
\label{fig:cubicalcover1}
\end{figure}
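As a numerical illustration of the first question (all names below are ours), the following sketch computes $\mathcal{L}^2(\mathcal{C}^E_d)$ for the closed unit disk by brute force. Deciding whether a cube meets $E$ by sampling nine points per cube can miss cubes that barely clip $E$, so this is only an approximation of the true cover.

```python
def cover_measure(in_E, d, lo=-2.0, hi=2.0):
    """Approximate L^2 measure of the dyadic cover C^E_d of a set E
    inside [lo, hi]^2, where in_E(x, y) tests membership in E.
    A cube is declared to meet E if any of a 3x3 grid of sample
    points in the cube lies in E (so thin slivers can be missed)."""
    h = 2.0**-d
    steps = int(round((hi - lo) / h))
    count = 0
    for i in range(steps):
        for j in range(steps):
            x0, y0 = lo + i * h, lo + j * h
            if any(in_E(x0 + a * h, y0 + b * h)
                   for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)):
                count += 1
    return count * h * h

disk = lambda x, y: x * x + y * y <= 1.0
# cover_measure(disk, d) decreases toward pi = L^2(disk) as d grows.
```

Running this for increasing $d$ exhibits the nesting $\mathcal{C}^E_{d+1}\subset\mathcal{C}^E_d$ numerically: the cover measure decreases toward $\pi$.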
\subsection{Jones' $\beta$ Numbers}
Another approach to representing sets in $\mathbf{R}^n$, developed by Jones
\cite{jon90}, and generalized by Okikiolu \cite{oki92}, Lerman
\cite{ler03}, and Schul \cite{sch07}, involves the question of under
what conditions a bounded set $E$ can be contained within a
rectifiable curve $\Gamma$, which Jones likened to the Traveling
Salesman Problem taken over an infinite set. (See Definition~\ref{rect}
below for the definition of rectifiable.)
Jones showed that if the aspect ratios of the optimal containing
cylinders in each dyadic cube go to zero fast enough, the set $E$
is contained in a rectifiable curve. Jones' approach ends up providing
one useful way of defining a representation for a set in $\mathbf{R}^n$
similar to those discussed in the next section. We return to this
topic in Section \ref{jonessection2}. The basic idea is illustrated in
Figure~\ref{fig:jones1}.
\begin{figure} [H]
\begin{center}
\input{Jones1.pdf_t}
\caption{Jones' $\beta$ Numbers. The green lines indicate the thinnest cylinder containing $\Gamma$ in the cube $C$. We see from this relatively large width that $\Gamma$ is not very ``flat'' in this cube.}\label{jones1}
\label{fig:jones1}
\end{center}
\end{figure}
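The width of the thinnest containing cylinder can be approximated numerically. In the hypothetical helper below, we stand in for the optimal direction with the principal direction of a finite sample of $\Gamma\cap C$; since that direction minimizes the mean-square distance rather than the sup distance, the result is only an upper bound for the optimal width, but it is exact when the sample is collinear.

```python
import math

def beta_approx(points, side):
    """Approximate (scaled) Jones beta of sample points in a square of
    the given side length: the width of the thinnest strip around the
    principal line through the centroid, divided by the side length.
    The principal direction minimizes mean-square (not sup) distance,
    so this is only an upper bound for the optimal strip width."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # 2x2 covariance; its leading eigenvector is the strip direction
    sxx = sum((p[0] - cx)**2 for p in points) / n
    syy = sum((p[1] - cy)**2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)
    ux, uy = -math.sin(angle), math.cos(angle)   # unit normal to line
    halfwidth = max(abs((p[0] - cx) * ux + (p[1] - cy) * uy)
                    for p in points)
    return 2 * halfwidth / side
```

For a collinear sample this returns (numerically) zero, signalling that $\Gamma$ is flat in that cube; a bent sample such as $(0,0)$, $(0.5,0.3)$, $(1,0)$ in a unit square gives a large value.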
\subsection{Working Upstairs: Varifolds}\label{Var}
A third way of representing sets in $\mathbf{R}^n$ uses \emph{varifolds}.
Instead of representing $E\subset \mathbf{R}^n$ by working in $\mathbf{R}^n$, we work
in the \emph{Grassmann Bundle}, $\mathbf{R}^n\times G(n,m)$.
We parameterize the Grassmannian $G(2,1)$ by taking the upper unit semicircle in
$\mathbf{R}^2$ (including the point $(1,0)$, but not including $(1,\pi)$,
where both points are given in polar coordinates) and straightening it
out into a vertical axis (as in Figure \ref{var1a}). The bundle $\mathbf{R}^2
\times G(2,1)$ is then represented by $\mathbf{R}^2 \times [0,\pi).$
\begin{figure}[H]
\begin{center}
\input{var1a.pdf_t}
\caption{The vertical axis for the ``upstairs.''}\label{var1a}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\input{varifolds1a.pdf_t}
\caption{Working Upstairs in the Grassmann bundle.}\label{var1c}
\end{center}
\end{figure}
Figure~\ref{var1c} illustrates how the tangents are built into this
representation of subsets of $\mathbf{R}^n$, giving us a sense of why this
representation might be useful. A circular curve in $\mathbf{R}^2$ becomes two
half-spirals upstairs (in the Grassmann bundle representation, as
shown in the first image of Figure \ref{var1c}). Other curves in $\mathbf{R}^2$
are similarly illuminated by their Grassmann bundle
representations. We return to this idea in Section \ref{Var2}.
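In coordinates, the lift of the unit circle is easy to write down (the function name is ours): the tangent line at $(\cos t, \sin t)$ makes the unoriented angle $(t + \pi/2) \bmod \pi$ with the horizontal, and that angle is exactly the third coordinate upstairs. As $t$ runs over $[0,2\pi)$, the angle sweeps $[0,\pi)$ twice, producing the two half-spirals of Figure \ref{var1c}.

```python
import math

def lift_circle(num_points):
    """Sample of the lift of the unit circle to R^2 x [0, pi),
    our coordinates for the Grassmann bundle R^2 x G(2, 1).
    The tangent line at (cos t, sin t) has unoriented direction
    angle (t + pi/2) mod pi, the vertical coordinate upstairs."""
    pts = []
    for k in range(num_points):
        t = 2 * math.pi * k / num_points
        theta = (t + math.pi / 2) % math.pi
        pts.append((math.cos(t), math.sin(t), theta))
    return pts
```

Note that antipodal points of the circle lift to the same height, since their tangent lines are parallel and unoriented lines are identified.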
\section{Simple Questions}
Let $E\subset \mathbf{R}^n$ and $C$ be any dyadic $n$-cube as before. Define \[\mathcal{C}(E,d) = {\{C \ | \ C\cap E \neq \emptyset, \ l(C) = {1}/{2^d}\}}\]
and, as above,
\[\mathcal{C}^E_d \equiv \bigcup_{C\in \mathcal{C}(E,d)} C. \]
Here are two questions:
\begin{enumerate}
\item Given $E\subset\mathbf{R}^n$, when is there a $d_0$ such that for all $d\geq d_0$, we have
\begin{equation}\label{A}
\mathcal{L}^n(\mathcal{C}^E_d) \leq M(n)\mathcal{L}^n(E)
\end{equation}
for some constant $M(n)$ independent of $E$?
\item Given $E\subset\mathbf{R}^n$, and any $\delta>0$, when does there exist
a $d_0$ such that for all $d\geq d_0$, we have
\begin{equation}\label{B}
\mathcal{L}^n(\mathcal{C}^E_d) \leq (1+\delta)\mathcal{L}^n(E)?
\end{equation}
\end{enumerate}
\begin{rem} Of course using the fact that Lebesgue measure is a Radon
measure, we can very quickly get that for $d$ large enough
(i.e. $2^{-d}$ small enough), the measure of the cubical cover is as
close to the measure of the set as you want, as long as the set is
compact and has positive measure. But the focus of this paper is on
what we can get in a much more transparent, barehanded fashion, so we
explore along different paths, getting answers that are, by some metrics,
suboptimal.
\end{rem}
\begin{exm}\label{exm:rational-points}
If $E=\mathbb{Q}^n\cap [0,1]^n$, then $\mathcal{L}^n(E)=0$, but
$\mathcal{L}^n(\mathcal{C}^E_d)=1 \ \forall d \geq 0$.
\end{exm}
\begin{exm}
Let $E$ be as in Example~\ref{exm:rational-points}. Enumerate $E$
as $\hat{q}_1, \hat{q}_2,\hat{q}_3, \ldots$. Now let
$D_i=B(\hat{q}_i,\frac{\epsilon}{2^i})$ and $E_\epsilon \equiv
\left(\bigcup_i D_i\right) \cap [0,1]^n$, with $\epsilon$ chosen small enough so
that $\mathcal{L}^n(E_\epsilon)\leq \frac{1}{100}$. Then
$\mathcal{L}^n(E_\epsilon)\leq \frac{1}{100}$, but, since $E_\epsilon$ is
dense in $[0,1]^n$, $\mathcal{L}^n(\mathcal{C}^{E_\epsilon}_d)=1 \ \forall d \geq 0$.
\end{exm}
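Example~\ref{exm:rational-points} can be verified mechanically: every level-$d$ dyadic square in $[0,1]^2$ contains its lower-left corner, a rational point, so all $4^d$ of them belong to the cover. The sketch below (exact rational arithmetic; the name is ours) counts only the squares contained in $[0,1]^2$, as the example implicitly does, and returns $1$ at every level.

```python
from fractions import Fraction

def rational_cover_measure(d):
    """L^2 measure of the level-d dyadic cover of Q^2 cap [0,1]^2.
    Each dyadic square [i/2^d, (i+1)/2^d] x [j/2^d, (j+1)/2^d]
    contains the rational point (i/2^d, j/2^d), so every one of the
    4**d squares in [0,1]^2 belongs to the cover."""
    h = Fraction(1, 2**d)
    squares_in_cover = 4**d   # all of them intersect Q^2 cap [0,1]^2
    return squares_in_cover * h * h
```

Using `Fraction` keeps the computation exact, so the answer is the integer $1$ rather than a floating-point approximation.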
\subsection{A Union of Balls}\label{union}
For a given set $F\subseteq \mathbf{R}^n,$ suppose $E= \cup_{x\in F}\bar{B}(x,r)$, a
union of closed balls of radius $r$ centered at each point $x$ in $F$.
Then we know that $E$ is \textbf{regular} (locally Ahlfors $n$-regular
or \textit{locally} $n$-regular), and thus there exist $0 < m < M <
\infty$ and an $r_0>0$ such that for all $x\in E$ and for all
$0<r<r_0$, we have
\[
m r^n \leq \mathcal{L}^n(\bar{B}(x, r)\cap E) \leq M r^n.
\]
This is all we need to establish a sufficient condition for Equation
(\ref{A}) above.
\begin{rem}
The upper bound constant $M$ is immediate
since $E$ is a union of $n$-balls, so $M=\alpha_n$, the
$n$-volume of the unit $n$-ball, works. However, this is not the case for
$k$-regular sets in $\mathbf{R}^n$, $k<n$, since we are now asking for a bound on the
$k$-dimensional measure of an $n$-dimensional set which could easily
be infinite.
\end{rem}
\begin{enumerate}
\item Suppose $E= \cup_{x\in F}\bar{B}(x,r)$, a union of closed balls of
radius $r$ centered at each point $x$ in $F$.
\item Let $\mathcal{C}=\mathcal{C}(E,d)$ for some $d$ such that
$\frac{1}{2^d} \ll r$, and let $\oldhat{\mathcal{C}}=\{3C \mid C\in
\mathcal{C}\},$ where $3C$ is an $n$-cube concentric with $C$ with
sides parallel to the axes and $l(3C)=3l(C)$, as shown in Figure
\ref{3c}.
\begin{figure}[H]
\begin{center}
\input{3Ccube.pdf_t}
\caption{Concentric Cubes.}\label{3C}
\label{3c}
\end{center}
\end{figure}
\item\label{crucial-cube-step} This implies that for $3C\in \oldhat{\mathcal{C}}$
\begin{equation}
\frac{\mathcal{L}^n(3C\cap E)}{\mathcal{L}^n(3C)}>\theta >0, \ \ \ \text{for some $\theta$ depending only on $n$}.
\end{equation}
\item We then make the following observations:
\begin{enumerate}
\item Note that there are $3^n$ different tilings of $\mathbf{R}^n$ by $3C$
cubes whose vertices live on the $\frac{1}{2^d}$ lattice. (This can
be seen by realizing that there are $3^n$ shifts you can perform on
a $3C$ cube and both (1) keep the originally central cube $C$ in the
$3C$ cube and (2) keep the vertices of the $3C$ cube in the
$\frac{1}{2^d}$ lattice.)
\item Denote the $3C$ cubes in these tilings $\mathcal{T}_i, i =
1,...,3^n$.
\item Define $\oldhat{\mathcal{C}}_i \equiv \oldhat{\mathcal{C}}\cap\mathcal{T}_i$.
\item Note now that by Step~(\ref{crucial-cube-step}), the number of $3C$ cubes in
$\oldhat{\mathcal{C}}_i$ cannot exceed
\[N_i \equiv \frac{ \mathcal{L}^n(E)}{\theta \mathcal{L}^n(3C)}.\]
\item Denote the total number of cubes in $\mathcal{C}$ by $N_{\mathcal{C}^E_d}$.
\item The number of cubes in $\mathcal{C}$, $N_{\mathcal{C}^E_d}$, cannot exceed
\[\sum_{i=1}^{3^n} N_i = 3^n\frac{ \mathcal{L}^n(E)}{\theta \mathcal{L}^n(3C)}.\]
\item Putting it all together, we get
\begin{eqnarray}
\mathcal{L}^n(\mathcal{C}_d^E) &=& \mathcal{L}^n(\cup_{C\in\mathcal{C}} C)\nonumber\\
& = & N_{\mathcal{C}^E_d} \mathcal{L}^n(C) \nonumber\\
&\leq & 3^n \frac{\mathcal{L}^n(E) }{\theta \mathcal{L}^n(3C)} \mathcal{L}^n(C) \nonumber\\
& = & \frac{\mathcal{L}^n(E)}{\theta}.
\end{eqnarray}
\end{enumerate}
\item This shows that if $E= \cup_{x\in F}\bar{B}(x,r)$, then
\[ \mathcal{L}^n(\mathcal{C}^E_d) \leq\frac{1}{\theta}\mathcal{L}^n(E).\]
\end{enumerate}
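This conclusion can be sanity-checked numerically. In the sketch below (the names and parameter choices are ours), $E$ is a union of three disjoint closed balls in $\mathbf{R}^2$, so $\mathcal{L}^2(E) = 3\pi r^2$ exactly, while the cover measure is computed by brute force with a nine-point sampling test per cube (which can slightly undercount). The observed ratio sits near $1$, comfortably below any bound of the form $1/\theta$.

```python
import math

def cover_measure(in_E, d, lo=-1.5, hi=1.5):
    """Approximate L^2 measure of the dyadic cover of E in [lo, hi]^2,
    testing each cube for intersection on a 3x3 grid of sample points."""
    h = 2.0**-d
    steps = int(round((hi - lo) / h))
    count = 0
    for i in range(steps):
        for j in range(steps):
            x0, y0 = lo + i * h, lo + j * h
            if any(in_E(x0 + a * h, y0 + b * h)
                   for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)):
                count += 1
    return count * h * h

centers = [(-1.0, -1.0), (0.0, 0.5), (1.0, -0.5)]  # pairwise far apart
r = 0.3
in_E = lambda x, y: any((x - cx)**2 + (y - cy)**2 <= r * r
                        for cx, cy in centers)
exact = 3 * math.pi * r * r          # the balls are disjoint
ratio = cover_measure(in_E, 5) / exact
```

Here $\frac{1}{2^5} \ll r$, matching the hypothesis $l(C) \ll r$ of the argument above.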
We now have two conclusions:
\begin{description}
\item[Regularized sets] We notice that for any fixed $r_0 > 0$, as long as we pick $d_0$ big enough (so that $\frac{1}{2^{d_0}} \ll r_0$), then $r > r_0$ and $d > d_0$ imply that $E= \cup_{x\in F}\bar{B}(x,r)$ satisfies
\[ \mathcal{L}^n(\mathcal{C}^E_d)
\leq\frac{1}{\theta(n)}\mathcal{L}^n(E),\] for a $\theta(n) > 0$ that
depends on $n$, but not on $F$.
\item[Regular sets] Now suppose that \[F \in \mathcal{R}_m \equiv \{W\subset\mathbf{R}^n \;| \; mr^n < \mathcal{L}^n(W\cap \bar{B}(x,r)),\; \forall \; x \in W \text{ and } r< r_0\}.\] Then we immediately get the same result: for a big enough $d$ (depending only on $r_0$),
\[ \mathcal{L}^n(\mathcal{C}^F_d)
\leq\frac{1}{\theta(m)}\mathcal{L}^n(F),\] where $\theta(m) > 0$
depends only on the regularity class that $F$ lives in and not on
which subset in that class we cover with the cubes.
\end{description}
\subsection{Minkowski Content}
\begin{defn}
\emph{(Minkowski content).} Let $W\subset \mathbf{R}^n$, and let $W_r \equiv \{x \mid d(x,W)<r\}$. The $(n-1)$-dimensional Minkowski Content is defined as $\mathcal{M}^{n-1}(W)\equiv \lim_{r \rightarrow 0} \frac{\mathcal{L}^n(W_r)}{2r}$, when the limit exists (see Figure \ref{Min}).
\end{defn}
\begin{defn}\label{rect}
\emph{(($\mathcal{H}^{m},m$)-rectifiable set)}. A set $W\subset \mathbf{R}^n$ is called {\bf ($\mathcal{H}^{m},m$)-rectifiable} if $\mathcal{H}^m(W)<\infty$ and $\mathcal{H}^m$-almost all of $W$ is contained in the union of the images of countably many Lipschitz functions from $\mathbf{R}^m$ to $\mathbf{R}^n$. We will use {\bf rectifiable} and {\bf ($\mathcal{H}^{m},m$)-rectifiable} interchangeably when the dimension of the sets is clear from the context.
\end{defn}
\begin{figure}[H]
\begin{center}
\scalebox{0.5}{\input{Minkowski-krv.pdf_t}}
\caption{Minkowski Content.}\label{Min}
\end{center}
\end{figure}
\begin{defn}[m-rectifiable]
We will say that $E\subset\mathbf{R}^n$ is {\bf $m$-rectifiable} if there is a Lipschitz function mapping a bounded subset of $\mathbf{R}^m$ onto $E$.
\end{defn}
\begin{thm}\label{mink-equals-hausdorff}
$\mathcal{M}^{n-1}(W)=\mathcal{H}^{n-1}(W)$ when $W$ is a closed, $(n$-$1)$-rectifiable set.
\end{thm}
See Theorem 3.2.39 in \cite{fed69} for a proof.
\begin{rem}
Notice that $m$-rectifiable is more restrictive than $(\mathcal{H}^m,m)$-rectifiable. In fact, Theorem~\ref{mink-equals-hausdorff} is false for $(\mathcal{H}^m,m)$-rectifiable sets. See the notes at the end of Section 3.2.39 in \cite{fed69} for details.
\end{rem}
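For the unit circle $W \subset \mathbf{R}^2$ the limit in the definition can be evaluated in closed form, since $W_r$ is the open annulus with radii $1-r$ and $1+r$ (for $r<1$): the quotient is identically $2\pi = \mathcal{H}^1(W)$, illustrating Theorem~\ref{mink-equals-hausdorff}. A short sketch (the function name is ours):

```python
import math

def circle_content_quotient(r):
    """L^2(W_r) / (2r) for W the unit circle in R^2 and 0 < r < 1.
    W_r is the annulus 1 - r < |x| < 1 + r, whose area is
    pi * ((1 + r)**2 - (1 - r)**2) = 4 * pi * r, so the quotient
    equals 2 * pi for every such r, not just in the limit."""
    area = math.pi * ((1 + r)**2 - (1 - r)**2)
    return area / (2 * r)
```

That the quotient is constant in $r$ is special to the circle; in general one only has convergence as $r \rightarrow 0$.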
Now, let $W$ be ($n$-$1$)-rectifiable, set $r_d \equiv
\sqrt{n}\left(\frac{1}{2^d}\right)$, and choose $r_\delta$ small
enough so that
\[
\mathcal{L}^n(W_{r_d}) \leq \mathcal{M}^{n-1}(W)2r_d + \delta,
\]
for all $d\in \mathbb{N}\cup\{0\}$ such that $r_d\leq r_\delta.$
(Note: Because the diameter of an $n$-cube with edge length
$\frac{1}{2^d}$ is $r_d = \sqrt{n}\left(\frac{1}{2^d}\right)$, no point of
$\mathcal{C}_d^W$ can be farther than $r_d$ away from $W$. Thus
$\mathcal{C}_d^W\subset W_{r_d}$.)
Assume that $\mathcal{L}^n(E) \neq 0$ and $\partial E$ is ($n$-$1$)-rectifiable. Letting $W\equiv \partial E$, we have
\begin{eqnarray*}
\mathcal{L}^n(\mathcal{C}^E_d)-\mathcal{L}^n(E) &\leq& \mathcal{L}^n(W_{r_d})\\
& \leq& \mathcal{M}^{n-1}(\partial E)2r_d + \delta\\
&\leq& \mathcal{M}^{n-1}(\partial E)2r_\delta + \delta
\end{eqnarray*}
so that
\begin{equation}\label{eq:mink-bound}
\mathcal{L}^n(\mathcal{C}^E_d)\leq (1+\oldhat{\delta})\mathcal{L}^n(E), \ \ \text{where} \ \oldhat{\delta}=\frac{\mathcal{M}^{n-1}(\partial E)2r_\delta + \delta}{\mathcal{L}^n(E)}.
\end{equation}
Since we control $r_\delta$ and $\delta$, we can make $\oldhat{\delta}$ as small as we like, and we have a sufficient condition to establish Equation (\ref{B}) above.\\
\noindent {\bf The result}: let $\oldhat{\delta}$ be as in
Equation~(\ref{eq:mink-bound}) and $E\subset \mathbf{R}^n$ such that
$\mathcal{L}^n(E) \neq 0$. Suppose that $\partial E$ (which is
automatically closed) is ($n$-$1$)-rectifiable and $\mathcal{H}^{n-1}(\partial
E)<\infty$. Then for every $\delta>0$ there exists a $d_0$ such that
for all $d\geq d_0$,
\[\mathcal{L}^n(\mathcal{C}^E_d)\leq (1+\oldhat{\delta})\mathcal{L}^n(E).\]
\begin{prob}
Suppose that $E\subset\mathbf{R}^n$ is bounded. Show that for any $r > 0$,
$E_r$, the set of points that are at most a distance $r$ from $E$,
has a $(\mathcal{H}^{n-1},n-1)$-rectifiable boundary. Show this by showing
that $\partial E_r$ is contained in a finite number of graphs of
Lipschitz functions from $\mathbf{R}^{n-1}$ to $\mathbf{R}$. Hint: cut $E_r$ into
small chunks $F_i$ with common diameter $D \ll r$ and prove that
$(F_i)_r$ is the union of a finite number of Lipschitz graphs.
\end{prob}
\begin{prob}
Can you show that in fact the boundary of $E_r$, $\partial E_r$, is
actually ($n$-$1$)-rectifiable? See if you can use the results of the
previous problem to help you.
\end{prob}
\begin{rem}
We can cover a union $E$ of open balls of radius $r$, whose centers
are bounded, with a cover $\mathcal{C}^E_d$ satisfying
Equation~(\ref{B}). In this case, $\partial \mathcal{C}^E_d$
certainly meets the requirements for the result just shown.
\end{rem}
\subsection{Smooth Boundary, Positive Reach}
\label{sec:reach}
In this section, we show that if $\partial E $ is \textit{smooth} (at
least $C^{1,1}$), then $E$ has positive \textit{reach} allowing us to
get an even cleaner bound, depending in a precise way on the curvature
of $\partial E$.
We will assume that $E$ is closed. Define $E_r = \{x\in
\mathbf{R}^n \, | \, \operatorname{dist}(x,E) \leq r\}$, $\operatorname{cls}(x) \equiv \{y\in E \; | \; \operatorname{dist}(x,E) = |x-y|\}$,
and $\operatorname{unique}(E) = \{x \; | \; \operatorname{cls}(x)\text{ is a single point}\}$.
\begin{defn}[Reach]
The \textbf{reach} of $E$, $\operatorname{reach}(E)$, is defined
\[\operatorname{reach}(E)\equiv \sup \{r \; | \; E_r \subset \operatorname{unique}(E)\}\]
\end{defn}
\begin{rem}
Sets of positive reach were introduced by Federer in 1959~\cite{federer1959curvature} in a paper that also introduced the famous coarea formula.
\end{rem}
\begin{rem}
If $E\subset\mathbf{R}^n$ is ($n$-$1$)-dimensional and $E$
is closed, then $E = \partial E$.
\end{rem}
Another equivalent definition involves rolling balls around the boundary of $E$.
The closed ball $\bar{B}(x,r)$ {\bf touches} $E$ if \[\bar{B}(x,r)\cap E \subset \partial\bar{B}(x,r)\cap\partial E\]
\begin{defn}
The \textbf{reach} of $E$, $\operatorname{reach}(E)$, is defined
\[\operatorname{reach}(E)\equiv \sup \{r \; | \text{ every ball of radius $r$ touching $E$ touches at a single point}\}\]
\end{defn}
Put a little more informally, $\operatorname{reach}(E)$ is the supremum of
radii $r$ such that each ball of radius $r$ rolling around $E$ touches $E$ at only one point (see Figure \ref{reach}).
\begin{figure}[H]
\begin{center}
\input{reach.pdf_t}
\caption{Positive and Non-positive Reach.}\label{reach}
\end{center}
\end{figure}
As mentioned above, if $\partial E$ is $C^{1,1}$, then it has positive
reach (see Remark 4.20 in \cite{federer1959curvature}). Here $\partial E$ being $C^{1,1}$ means that
for all $x\in \partial E$, there is a neighborhood of $x$, $U_x\subset
\mathbf{R}^n$, such that after a suitable change of coordinates, there is a
$C^{1,1}$ function $f:\mathbf{R}^{n-1}\rightarrow\mathbf{R}$ such that $\partial E\cap
U_x$ is the graph of $f$. (Recall that a function is $C^{1,1}$ if its
derivative is Lipschitz continuous.) This implies, among other
things, that the (symmetric) second fundamental form of $\partial E$
exists $\mathcal{H}^{n-1}$-almost everywhere on $\partial E$. The fact that
$\partial E$ is $C^{1,1}$ implies that at $\mathcal{H}^{n-1}$-almost every
point of $\partial E$, the $n-1$ principal curvatures $\kappa_i$ of
our set exist and $|\kappa_i| \leq \frac{1}{\operatorname{reach}(\partial E)}$ for
$1\leq i \leq n-1$.
We will use this fact to determine a bound for the $(n-1)$-dimensional
change in area as the boundary of our set is expanded outwards or
contracted inwards by $\epsilon$ (see Figure \ref{lep}, Diagram
1). Let us first look at this in $\mathbf{R}^2$ by examining the following
ratios of lengths of expanded or contracted arcs for sectors of a ball
in $\mathbf{R}^2$ as shown in Diagram 2 in Figure \ref{lep} below.
\begin{figure}[H]
\begin{center}
\input{lepsilon.pdf_t}
\caption{Moving Out and Sweeping In.}\label{lep}
\end{center}
\end{figure}
\[
\frac{\mathcal{H}^1(l_\epsilon)}{\mathcal{H}^1(l)}=\frac{(r+\epsilon)\theta}{r\theta}=1+\frac{\epsilon}{r}=1+\epsilon \kappa
\]
\[
\frac{\mathcal{H}^1(l_{-\epsilon})}{\mathcal{H}^1(l)}=\frac{(r-\epsilon)\theta}{r\theta}=1-\frac{\epsilon}{r}=1-\epsilon \kappa,
\]
where $\kappa$ is the principal curvature of the circle (the boundary of the 2-ball), which we can think of as defining the reach of a set $E\subset \mathbf{R}^2$ with $C^{1,1}$-smooth boundary. \\
The Jacobian for the normal map pushing in or out by $\epsilon$,
which by the area formula is the factor by which the area
changes, is given by $\prod_{i=1}^{n-1}(1\pm \epsilon \kappa_i)$ (see
Figure~\ref{lep}, Diagram 1). If we define $\oldhat{\kappa}\equiv
\max\{|\kappa_1|,|\kappa_2|,\ldots, |\kappa_{n-1}|\}$, then
we have the following ratios:
\[
\text{Max Fractional Increase of $\mathcal{H}^{n-1}$ boundary ``area'' Moving Out:}
\]
\[
\prod_{i=1}^{n-1}(1+\epsilon \kappa_i) \leq (1+\epsilon \oldhat{\kappa})^{n-1}.
\]
\[
\text{Max Fractional Decrease of $\mathcal{H}^{n-1}$ boundary ``area'' Sweeping In:}
\]
\[
\prod_{i=1}^{n-1}(1-\epsilon \kappa_i) \geq (1-\epsilon \oldhat{\kappa})^{n-1}.
\]
\begin{rem}
Notice that $\oldhat{\kappa} = \frac{1}{\operatorname{reach}(\partial E)}$.
\end{rem}
For a ball, we readily find the value of the ratio
\begin{eqnarray}
\label{eq:ball-ratio}
\frac{\mathcal{L}^n(B(0,r+\epsilon))}{\mathcal{L}^n(B(0,r))} &=& \left( \frac{r+\epsilon}{r} \right)^n\nonumber \\
& = & (1+\epsilon\kappa)^n \ \ \text{ (setting $\delta = \epsilon\kappa$)}\nonumber \\
& = & (1+\delta)^n,
\end{eqnarray}
where $\kappa = \frac{1}{r}$ is the curvature of the ball along any geodesic.
Now we calculate the bound we are interested in for $E$, assuming
$\partial E$ is $C^{1,1}$. Define $E_\epsilon \equiv \{x \in \mathbf{R}^n
\mid d(x,E)<\epsilon\}$. We first compute a bound for
\begin{align}
\frac{\mathcal{L}^n(E_\epsilon)}{\mathcal{L}^n(E)} &= \frac{\mathcal{L}^n(E)+\mathcal{L}^n(E_\epsilon \setminus E)}{\mathcal{L}^n(E)} \nonumber \\
&= 1 + \frac{\mathcal{L}^n(E_\epsilon \setminus E)}{\mathcal{L}^n(E)}. \label{ratio}
\end{align}
Since $\kappa_i$ is a function of $x\in \partial E$ defined $\mathcal{H}^{n-1}$-almost everywhere, we may set up the integral below over $\partial E$ and do the actual computation over $\partial E \setminus K$, where $K\equiv\{$the set of measure 0 where $\kappa_i $ is not defined$\}$. Computing bounds for the numerator and denominator separately in the second term in (\ref{ratio}), we find, by way of the Area Formula \cite{morgan-2008-geometric},
\begin{align}
\mathcal{L}^n(E_\epsilon \setminus E) &= \int^\epsilon_0 \int_{\partial E} \prod_{i=1}^{n-1}(1+r \kappa_i)d\mathcal{H}^{n-1}dr \nonumber \\
&\leq \int^\epsilon_0 \int_{\partial E} (1+r \oldhat{\kappa})^{n-1}d\mathcal{H}^{n-1}dr \nonumber \\
&= \mathcal{H}^{n-1}(\partial E) \left. \frac{(1+r \oldhat{\kappa})^n}{n\oldhat{\kappa}} \right\vert^\epsilon_0 \nonumber \\
&= \mathcal{H}^{n-1}(\partial E) \left( \frac{(1+\epsilon \oldhat{\kappa})^n}{n\oldhat{\kappa}}-\frac{1}{n\oldhat{\kappa}}\right) \label{numbound}
\end{align}
and
\begin{align}
\mathcal{L}^n(E) &\geq \int^{r_0}_0 \int_{\partial E} \prod_{i=1}^{n-1}(1-r \kappa_i)d\mathcal{H}^{n-1}dr \nonumber \\
&\geq \int^{r_0}_0 \int_{\partial E} (1-r \oldhat{\kappa})^{n-1}d\mathcal{H}^{n-1}dr \nonumber \\
&= \mathcal{H}^{n-1}(\partial E) \left. \frac{-(1-r \oldhat{\kappa})^n}{n\oldhat{\kappa}} \right\vert^{r_0}_0 \nonumber \\
&= \frac{\mathcal{H}^{n-1}(\partial E)}{n\oldhat{\kappa}}, \ \ \ \text{when} \ r_0=\frac{1}{\oldhat{\kappa}}. \label{denombound}
\end{align}
From (\ref{ratio}), (\ref{numbound}), and (\ref{denombound}), we have
\begin{align}
\frac{\mathcal{L}^n(E_\epsilon)}{\mathcal{L}^n(E)} &\leq 1+ \frac{\mathcal{H}^{n-1}(\partial E) \left( \frac{(1+\epsilon \oldhat{\kappa})^n}{n\oldhat{\kappa}}-\frac{1}{n\oldhat{\kappa}}\right)}{\frac{\mathcal{H}^{n-1}(\partial E)}{n\oldhat{\kappa}}} \nonumber \\
&= (1+\epsilon \oldhat{\kappa})^n \ \ \ (\text{setting} \ \delta = \epsilon \oldhat{\kappa}) \nonumber \\
&= (1+\delta)^n.
\end{align}
From this we get that
\[ \mathcal{L}^n(E_\epsilon) \leq (1+\epsilon\oldhat{\kappa})^n \mathcal{L}^n(E)\]
so that
\[\mathcal{L}^n(\mathcal{C}_{d(\epsilon)}^E) \leq (1+\epsilon\oldhat{\kappa})^n \mathcal{L}^n(E) \]
where $d(\epsilon) = \left\lceil \log_2\left(\frac{\sqrt{n}}{\epsilon}\right)\right\rceil$ is the smallest integer $d$
satisfying $\sqrt{n}\frac{1}{2^{d}} \leq \epsilon$.
Thus, when $\partial E$ is smooth enough to have positive reach, we
find a nice bound of the type in Equation~(\ref{B}), with a precisely
known dependence on curvature.
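For a disk this bound is sharp, which allows a one-line numerical sanity check. The following sketch (illustrative only; the variable names are ours) verifies that for $E$ a disk of radius $R$ in the plane, where $\oldhat{\kappa} = 1/R$ and $n = 2$, the ratio $\mathcal{L}^2(E_\epsilon)/\mathcal{L}^2(E)$ attains the bound $(1+\epsilon\oldhat{\kappa})^2$ exactly.

```python
import math

# Sanity check of the bound vol(E_eps) <= (1 + eps*kappa)^n * vol(E)
# for E a disk of radius R in the plane (n = 2); kappa = 1/R is the
# (single) principal curvature of the boundary circle.
R, eps = 2.0, 0.25
kappa = 1.0 / R
vol_E = math.pi * R ** 2              # L^2(E)
vol_E_eps = math.pi * (R + eps) ** 2  # L^2(E_eps): outward eps-neighborhood
ratio = vol_E_eps / vol_E
bound = (1 + eps * kappa) ** 2
print(ratio, bound)  # for a disk the two quantities agree exactly
```

For a general set with positive reach the inequality is not an equality, but the disk shows the curvature dependence in the bound cannot be improved.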
\section{A Boundary Conjecture}
What can we say about boundaries? Can we bound
\[
\frac{\mathcal{H}^{n-1}(\partial \mathcal{C}^E_d)}{\mathcal{H}^{n-1}(\partial E)}?
\]
\begin{figure}[H]
\begin{center}
\scalebox{1.0}{\input{cubeboundary.pdf_t}}
\caption{Cubes on the Boundary.}\label{cube}
\end{center}
\end{figure}
\begin{con}\label{con}
If $E\subset \mathbf{R}^n$ is compact and $\partial E$ is $C^{1,1}$,
\[
\limsup_{d \rightarrow \infty} \frac{\mathcal{H}^{n-1}(\partial \mathcal{C}^E_d)}{\mathcal{H}^{n-1}(\partial E)}\leq n.
\]
\end{con}
\begin{proof}[Brief Sketch of Proof for $n=2$] $ $
\begin{enumerate}
\item Since $\partial E$ is $C^{1,1}$, we can zoom in far enough
at any point $x\in\partial E$ so that it looks flat.
\item Let $C$ be a cube in the cover $\mathcal{C}(E,d)$ that intersects
the boundary near $x$ and has faces in the boundary
$\partial\mathcal{C}^E_d$. Define $F = \partial C \cap \partial
\mathcal{C}^E_d$.
\item (Case 1) Assume that the tangent at $x$, $T_x\partial E$, is
not parallel to either edge direction of the cubical cover (see Figure~\ref{cube2}).
\begin{enumerate}
\item Let $\Pi$ be the projection onto the horizontal axis and notice
that $\frac{\mathcal{H}^1(F)}{\mathcal{H}^1(\Pi(F))} \leq 2+\epsilon$ for any $\epsilon > 0$.
\item This is stable to perturbations which is important since the actual piece
of the boundary $\partial E$ we are dealing with is not a straight line.
\end{enumerate}
\item (Case 2) Suppose that the tangent at $x$, $T_x\partial E$,
is parallel to one of the two faces of
the cubical cover, and let $U_x$ be a neighborhood of
$x\in \partial E$.
\begin{enumerate}
\item Zooming in far enough, we see that the cubical boundary can
only oscillate up and down so that the maximum ratio for any
horizontal tangent is (locally) $2$.
\item But we can create a sequence of examples that attain ratios
as close to 2 as we like by finding a
careful sequence of perturbations that attains a ratio locally
of $2-\epsilon$ for any $\epsilon$ (see Figure~\ref{cube}).
\item That is, we can create perturbations that, on an unbounded
set of $d$'s, $\{d_i\}_{i=1}^\infty,$ yield a ratio
$\frac{\mathcal{H}^1(\partial\mathcal{C}^E_{d_i}\cap \, U_x)}{\mathcal{H}^1(\partial E\cap U_x)} >
2-\epsilon$, and we can send $\epsilon\rightarrow 0$.
\end{enumerate}
\item Use the compactness of $\partial E$ to put this all together
into a complete proof.
\end{enumerate}
\end{proof}
\begin{figure}[H]
\begin{center}
\scalebox{0.75}{\input{conjecture.pdf_t}}
\caption{The case in which $\theta$, the angle between $T_x \partial E$ and the $x$-axis, is neither $0$ nor $\pi/2$.}\label{cube2}
\end{center}
\end{figure}
\begin{prob} Suppose we exclude from the cover those cubes $C$ for which
$\mathcal{L}^n(C\cap E)/\mathcal{L}^n(C) < \theta$ (with $E$ as in Conjecture \ref{con}), to get the reduced cover
$\hat{\mathcal{C}}_d^E$. In this case, what is the optimal bound
$B(\theta)$ for the ratio of boundary measures
\[\limsup_{d\rightarrow\infty}\frac{\mathcal{H}^{n-1}(\partial
\hat{\mathcal{C}}_d^E)}{\mathcal{H}^{n-1}(\partial E)} \leq B(\theta)?\]
\end{prob}
\section{Other Representations}
\subsection{The Jones' $\beta$ Approach}\label{jonessection2}
As mentioned above, another approach to representing sets in $\mathbf{R}^n$,
developed by Jones \cite{jon90}, and generalized by Okikiolu
\cite{oki92}, Lerman \cite{ler03}, and Schul \cite{sch07}, involves
the question of under what conditions a bounded set $E$ can be
contained within a rectifiable curve $\Gamma$, which Jones likened to
the Traveling Salesman Problem taken over an infinite set. While Jones
worked in $\mathbb{C}$ in his original paper, the work of Okikiolu,
Lerman, and Schul extended the results to $\mathbf{R}^n$ for all $n\in
\mathbb{N}$, as well as to infinite-dimensional space.
Recall that a compact, connected set $\Gamma \subset \mathbf{R}^2$ is
rectifiable if it is contained in the image of a countable set of
Lipschitz maps from $\mathbf{R}$ into $\mathbf{R}^2$, except perhaps for a set of
$\mathcal{H}^1$ measure zero. We have the result that if $\Gamma$ is compact
and connected, then $l(\Gamma)=\mathcal{H}^1(\Gamma)<\infty$ implies it
is rectifiable (see pages 34 and 35 of
\cite{falconer-1986-geometry}).
Let $W_C$ denote the width of the thinnest cylinder containing the set $E$ in the dyadic $n$-cube $C$ (see Figure \ref{jones2}), and define
the $\beta$ number of $E$ in $C$ to be
\[
\beta_E(C)\equiv \frac{W_C}{l(C)}.
\]
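To make the definition concrete, here is a small numerical sketch (the helper name is ours, not notation from the text) that brute-forces $W_C$ for a finite sample of $E\cap C$ in the plane by minimizing the width of a containing slab over a grid of directions.

```python
import numpy as np

# Approximate the Jones beta number of a finite point sample in a
# square C of side length `side`: beta = W_C / l(C), where W_C is the
# width of the thinnest slab (a 2-d "cylinder") containing the points,
# minimized over the direction of the slab's axis.
def beta_number(points, side):
    pts = np.asarray(points, dtype=float)
    best = np.inf
    for theta in np.linspace(0.0, np.pi, 2000, endpoint=False):
        normal = np.array([-np.sin(theta), np.cos(theta)])  # slab normal
        proj = pts @ normal
        best = min(best, proj.max() - proj.min())  # slab width, this direction
    return best / side

# Collinear points are perfectly "flat": beta is (numerically) 0.
flat = [(t, 0.3 + 0.5 * t) for t in np.linspace(0, 1, 50)]
print(beta_number(flat, 1.0))

# The four corners of the unit square are as non-flat as possible: beta = 1.
corners = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(beta_number(corners, 1.0))
```

The direction grid makes this only an approximation of the true infimum, but it is accurate to roughly the grid spacing.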
\begin{figure}[H]
\begin{center}
\scalebox{.6}{
\input{Jones2.pdf_t}}
\caption{Jones' $\beta$ Numbers and $W_C$. Each of the two green lines in a cube $C$ is an equal distance away from the red line and is chosen so that the green lines define the thinnest cylinder containing $E\cap C$. Then the red lines are varied over all possible lines in $C$ to find that red line whose corresponding cylinder is the thinnest of all containing cylinders. In this sense, the minimizing red lines are the best fit to $E$ in each $C$.}\label{jones2}
\end{center}
\end{figure}
Jones' main result is this theorem:
\begin{thm}\cite{jon90}\label{jones}
Let $E$ be a bounded set and $\Gamma$ be a connected set both in
$\mathbf{R}^2$. Define $\beta_\Gamma(C)\equiv \frac{W_C}{l(C)},$ where
$W_C$ is the width of the thinnest cylinder in the $2$-cube $C$
containing $\Gamma.$ Then, summing over all possible
$C$, $$\beta^2(\Gamma)\equiv \sum_C (\beta_\Gamma(3C))^2l(C)<\eta
\,l(\Gamma)<\infty, \text{ where } \eta \in \mathbf{R} \text{ is a universal constant.}$$
Conversely, if
$\beta^2(E)<\infty$ there is a connected set $\Gamma,$ with $E
\subset \Gamma,$ such that \[l(\Gamma) \leq (1+ \delta) \operatorname{diam}(E) +
\alpha_\delta\beta^2(E),\] where $\delta>0$ and
$\alpha_\delta = \alpha(\delta)\in \mathbf{R}.$
\end{thm}
Jones' main result, generalized to $\mathbf{R}^n$, is that a bounded set $E \subset \mathbf{R}^n$ is contained in a rectifiable curve $\Gamma$ if and only if
\[
\beta^2(E)\equiv \sum_C (\beta_E(3C))^2l(C)<\infty,
\]
where the sum is taken over all dyadic cubes.
Note that each $\beta$ number of $E$ is calculated over the dilated
cube $3C$, as defined in Section \ref{union}. Intuitively, we see
that in order for $E$ to lie within a rectifiable curve $\Gamma$, $E$
must look flat as we zoom in on points of $E$ since $\Gamma$ has
tangents at $\mathcal{H}^1$-almost every point $x\in\Gamma$. Since
both $W_C$ and $l(C)$ are in units of length, $\beta_E(C)$ is a scale-invariant measure of
the flatness of $E$ in $C$. In higher dimensions, the
analogous cylinders' widths and cube edge lengths are also divided to
get a scale-invariant $\beta_E(C)$.
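The following sketch computes a truncated version of the sum $\sum_C \beta_E(C)^2\, l(C)$ for point samples in the unit square. For simplicity it uses $C$ rather than the dilated cube $3C$ and a coarse direction grid, so it is only meant to illustrate that flat sets give small sums; the helper names are ours.

```python
import numpy as np

def slab_width(points):
    # width of the thinnest slab containing the points (direction grid)
    best = np.inf
    for theta in np.linspace(0.0, np.pi, 400, endpoint=False):
        normal = np.array([-np.sin(theta), np.cos(theta)])
        proj = points @ normal
        best = min(best, proj.max() - proj.min())
    return best

def beta_squared_sum(points, max_depth):
    # truncated Jones square function over dyadic subcubes of [0,1]^2
    total = 0.0
    for d in range(max_depth + 1):
        side = 1.0 / 2 ** d
        cells = {}
        for p in points:
            key = (min(int(p[0] / side), 2 ** d - 1),
                   min(int(p[1] / side), 2 ** d - 1))
            cells.setdefault(key, []).append(p)
        for pts in cells.values():
            if len(pts) >= 3:  # beta of one or two points is zero
                total += (slab_width(np.array(pts)) / side) ** 2 * side
    return total

t = np.linspace(0.05, 0.95, 400)
line_sample = np.column_stack([t, np.full_like(t, 0.5)])
curve_sample = np.column_stack([t, 0.25 + 0.5 * t ** 2])
print(beta_squared_sum(line_sample, 4))   # a straight line gives 0
print(beta_squared_sum(curve_sample, 4))  # a smooth arc gives a small sum
```

The contributions decay like $l(C)^3$ per cube for a $C^2$ curve, which is why the sum converges for smooth arcs.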
The notion of local linear approximation has been explored by many
researchers. See for example the work of Lerman and
collaborators~\cite{chen-2009-spectral,arias-2011-spectral,zhang-2012-hybrid,arias-2017-spectral}. While
distances other than the sup norm have been considered when
determining closeness to the approximating line, see~\cite{ler03},
there is room for more exploration there. In the section below,
\emph{Problems and Questions}, we suggest an idea involving the
multiscale flat norm from geometric measure theory.
\subsection{A Varifold Approach}\label{Var2}
As mentioned above, a third way of representing sets in $\mathbf{R}^n$ uses
\emph{varifolds}. Instead of representing $E\subset \mathbf{R}^n$ by working
in $\mathbf{R}^n$, we work in the \emph{Grassmann Bundle}, $\mathbf{R}^n\times
G(n,m)$. Advantages include, for
example, the automatic encoding of tangent information directly into
the representation. By building into the representation this tangent
information, we make set comparisons where we care about tangent
structure easy and natural.
\begin{defn}[Grassmannian]
The $m$-dimensional Grassmannian in $\mathbf{R}^n$, \[G(n,m) =G(\mathbf{R}^n,m),\]
is the set of all $m$-dimensional planes through the origin.
\end{defn}
For example, $G(2,1)$ is the space of all lines through the origin
in $\mathbf{R}^2$, and $G(3,2)$ is the space of all planes through the origin
in $\mathbf{R}^3$. The Grassmann bundle $\mathbf{R}^n \times G(n,m)$ can be thought
of as a space where $G(n,m)$ is attached to each point in $\mathbf{R}^n$.
\begin{defn}[Varifold]
A varifold is a \emph{Radon measure} $\mu$ on the
Grassmann bundle $\mathbf{R}^n \times G(n,m)$.
\end{defn}
Let $\pi: \mathbf{R}^n \times G(n,m) \rightarrow \mathbf{R}^n$ denote the projection $\pi(x,g) = x$. One of the
most common ways a varifold arises is from a
rectifiable set $E$. In this case the measure $\mu_E$ on $\mathbf{R}^n
\times G(n,m)$ is the pushforward of $m$-dimensional Hausdorff measure on $E$ by
the tangent map $T:x\rightarrow (x,T_xE)$.
Let $E\subset \mathbf{R}^n$ be an ($\mathcal{H}^{m},m$)-rectifiable set (see
Definition \ref{rect}). We know the approximate $m$-dimensional
tangent space $T_xE$ exists $\mathcal{H}^m$-almost everywhere since
$E$ is ($\mathcal{H}^{m},m$)-rectifiable, which in turn implies that, except
for an $\mathcal{H}^m$-measure 0 set, $E$ is contained in the union of the images of
countably many Lipschitz functions from $\mathbf{R}^m$ to $\mathbf{R}^n$.
The measure of $A\subset \mathbf{R}^n \times G(n,m)$ is given by $\mu_E(A)
= \mathcal{H}^m(T^{-1}(A))$. Let $S\equiv \{(x, T_xE) \, \vert \, x\in
E\}$, the section of the Grassmann bundle defining the
varifold. The intersection of $S$ with each fiber $\{x\}\times
G(n,m)$ is the single point $(x, T_xE)$, and so we could just as
well use the projection $\pi$, in which case we would have $\mu_E(A)
= \mathcal{H}^m(\pi(A\cap S))$.
\begin{defn}
A \emph{rectifiable varifold} is a Radon measure $\mu_E$ associated to an ($\mathcal{H}^{m},m$)-rectifiable set $E\subset \mathbf{R}^n$. Recalling $S\equiv \{(x, T_xE) \, \vert \, x\in E\}$, let $A \subset \mathbf{R}^n \times G(n,m)$ and define
\[
\mu_E({A})= \mathcal{H}^m(\pi(A \cap S)).
\]
\end{defn}
We will call $E=\pi (S)$ the ``downstairs'' representation of $S$ for
any $S\subset\mathbf{R}^n \times G(n,m)$, and we will call $S =
T(E)\subset\mathbf{R}^n\times G(n,m)$ the ``upstairs'' representation of any
rectifiable set $E$, where $T$ is the tangent map over the rectifiable
set $E$.
\bigskip
\begin{figure}[H]
\begin{center}
\input{varifolds1a.pdf_t}
\caption{Working Upstairs.}\label{var1ctoo}
\end{center}
\end{figure}
Figure~\ref{var1ctoo}, repeated from above, illustrates how the
tangents are built into this representation of subsets of $\mathbf{R}^n$,
giving us a sense of why this representation might be useful. Suppose
we have three line segments almost touching each other, i.e. appearing
to touch as subsets of $\mathbf{R}^2$. The upstairs view puts each segment at
a different height corresponding to the angle of the segment. So,
these segments are not close in any sense in $\mathbf{R}^2\times G(2,1)$. Or
consider a straight line segment and a very fine sawtooth curve that
may look practically indistinguishable, but will appear drastically
different upstairs.
We can use varifold representations in combination with a cubical
cover to get a quantized version of a curve that has tangent
information as well as position information. If, for example, we cover
a set $S \subset \mathbf{R}^2 \times G(2,1)$ with cubes of edge length
$\frac{1}{2^d}$ and use this cover as a representation for $S$, we
know the position and angle to within $\frac{\sqrt{3}}{2^{d+1}}$. In other
words, we can approximate our curve $S \subset \mathbf{R}^2 \times G(2,1)$ by the
union of the centers of the cubes (with edge length $\frac{1}{2^d}$)
intersecting $S$. This simple idea seems to merit further exploration.
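The segment-versus-sawtooth example can be quantified with a short sketch: downstairs (in $\mathbf{R}^2$) the two curves are Hausdorff-close, while upstairs they are far apart. Appending the unoriented tangent angle, in radians, as a third coordinate is only one crude way to metrize the bundle; the helper names are ours.

```python
import numpy as np

def hausdorff(P, Q):
    # symmetric Hausdorff distance between two finite point sets
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def upstairs(pts):
    # lift a sampled curve to R^2 x G(2,1): midpoints of consecutive
    # samples, with the unoriented tangent angle in [0, pi) appended
    pts = np.asarray(pts, float)
    d = np.diff(pts, axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0]) % np.pi
    return np.column_stack([(pts[:-1] + pts[1:]) / 2, ang])

p, n = 0.02, 600
t = np.linspace(0, 1, n)
line = np.column_stack([t, np.zeros(n)])
saw = np.column_stack([t, np.abs((t % p) - p / 2)])  # teeth of slope +-1, height 0.01

down = hausdorff(line, saw)
up = hausdorff(upstairs(line), upstairs(saw))
print(down, up)  # nearly identical downstairs, far apart upstairs
```

The downstairs distance is bounded by the tooth height, while upstairs the mid-tooth atoms sit at angles $\pi/4$ and $3\pi/4$, far from the fiber angle $0$ occupied by the flat segment.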
\section{Problems and Questions}
\begin{prob}
Find a smooth $\partial E$, with $E \subset \mathbf{R}^n$, such that
\[
\mathcal{H}^{n-1}(\partial \mathcal{C}^E_d) / \mathcal{H}^{n-1}(\partial E)=0 \ \forall d.
\]
\end{prob}
\textbf{Hint:} Look at unbounded $E \subset\mathbf{R}^2$ such that $\mathcal{L}^2(E^c) < \infty$.
\begin{prob}
Suppose that $E$ is open and $\mathcal{H}^{n-1}(\partial E) < \infty$. Show that if the \textbf{reach} of $\partial E $ is positive, then
\[
\liminf_{d \rightarrow \infty} \frac{\mathcal{H}^{n-1}(\partial \mathcal{C}^E_d)}{\mathcal{H}^{n-1}(\partial E)}\geq 1.
\]
\end{prob} \textbf{Hint:} First show that $\partial E$ has
unique inward and outward pointing normals. (Takes a bit of work!)
Next, examine the map $F: \partial E \rightarrow \mathbf{R}^n$, where
$F(x)=x+\eta(x) N(x)$, $N(x)$ is the normal to $\partial E$ at $x$,
and $\eta(x)$ is a positive real-valued function chosen so that
\emph{locally} $F(\partial E) = \partial \mathcal{C}^E_d$. Use the
Binet-Cauchy Formula to find the Jacobian, and then apply the Area
Formula. To do this calculation, notice that at any point
$x_0\in\partial E$ we can choose coordinates so that
$T_{x_0}\partial E$ is horizontal (i.e. $N(x_0) = e_n$). Calculate
using $F: T_{x_0}\partial E = \mathbf{R}^{n-1} \rightarrow \mathbf{R}^n$ where
$F(x)=x+\eta(x) N(x)$. (See Chapter 3 of \cite{evans-1992-1} for the
Binet-Cauchy formula and the Area Formula.)
\begin{prob}
Suppose $E$ has dimension $n-1$, positive reach, and is locally regular (in $\mathbf{R}^n$). \\
a.) Find bounds for $\mathcal{H}^{n}( \mathcal{C}^E_d) / \frac{1}{2^d}.$\\
b.) How does this ratio relate to $\mathcal{H}^{n-1}(E)$?
\end{prob}
\textbf{Hint:} Use the ideas in Section~\ref{sec:reach} to calculate a
bound on the volume of the tube with thickness $2
\frac{\sqrt{n}}{2^d}$ centered on $E$.
\begin{que}
Can we use the ``upstairs'' version of cubical covers to find better
representations for sets and their boundaries? (Of course, ``better"
depends on your objective!)
\end{que}
For the following question, we need the notion of the \emph{multiscale
flat norm}~\cite{morgan-2007-1}. The basic idea of this distance,
which works in spaces of oriented curves and surfaces of any dimension
(known as currents), is that we can decompose the curve or surface $T$
into $(T - \partial S) + \partial S$, \emph{but} we measure the cost
of the decomposition by adding the volumes of $T-\partial S$ and $S$
(not $\partial S$!). By volume, we mean the $m$-dimensional volume, or
$m$-volume of an $m$-dimensional object, so if $T$ is $m$-dimensional,
we would add the $m$-volume of $T-\partial S$ and the ($m$+1)-volume
of $S$ (scaled by the parameter $\lambda$). We get that
\[\mathbb{F}_\lambda(T) = \min_S M_m(T-\partial S) + \lambda
M_{m+1}(S).\] It turns out that $T-\partial S$ is the best
approximation to $T$ that has curvature bounded by
$\lambda$~\cite{allard-2007-1}. We exploit this in the following ideas
and questions.
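As a concrete instance of the functional, consider $T$ a circle of radius $R$ in the plane ($m=1$) and compare just two candidate surfaces $S$: the empty set and the enclosed disk. Comparing only these two candidates gives an upper bound for $\mathbb{F}_\lambda(T)$; this is a back-of-the-envelope sketch, not a computation of the true minimizer.

```python
import math

# Two candidate decompositions in the multiscale flat norm of a circle T
# of radius R: S = empty set (keep the whole curve, pay its length) versus
# S = the enclosed disk (the curve cancels entirely, pay lambda * area).
def flat_norm_circle_bound(R, lam):
    keep = 2 * math.pi * R         # M_1(T - boundary(empty set)) = length
    fill = lam * math.pi * R ** 2  # M_1 term vanishes; pay lam * M_2(disk)
    return min(keep, fill)

R = 1.0
print(flat_norm_circle_bound(R, 1.0))  # lam < 2/R: filling is cheaper, pi*R^2
print(flat_norm_circle_bound(R, 4.0))  # lam > 2/R: keeping the curve wins, 2*pi*R
```

The crossover at $\lambda = 2/R$ matches the interpretation of $\lambda$ as a curvature scale: circles of curvature exceeding $\lambda$ are cheaper to fill than to keep.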
\begin{rem}
Currents can be thought of as generalized oriented curves or
surfaces of any dimension $k$. More precisely, they are members of
the dual space to the space of $k$-forms. For the purposes of this
section, thinking of them as (perhaps unions of pieces of) oriented
$k$-dimensional surfaces $W$, so that $W$ and $-W$ are simply
oppositely oriented and cancel if we add them, will be
enough to understand what is going on. For a nice introduction to
the ideas, see for example the first few chapters of
\cite{morgan-2008-geometric}.
\end{rem}
\begin{que}
Choose $k\in \{1,2,3\}$. In what follows we focus on sets $\Gamma$
which are one-dimensional; the interior of a cube $C$ will be denoted
$C^o$, and we will work at some scale $d$, i.e. with cubes whose edge length
is $\frac{1}{2^d}$.
Consider the piece of $\Gamma$ in $C^o$, $\Gamma \cap C^o$. Inside the
cube $C$ with edge length $\frac{1}{2^d}$, we will use the flat norm
to
\begin{enumerate}
\item find an approximation of $\Gamma \cap C^o$ with curvature
bounded by $\lambda = 2^{d+k}$ and
\item find the distance of that approximation from $\Gamma \cap C^o$.
\end{enumerate}
This decomposition is then obtained by minimizing \[M_1((\Gamma\cap
C^o)-\partial S) + 2^{d+k}M_{2}(S) = \mathcal{H}^1((\Gamma\cap
C^o)-\partial S) + 2^{d+k}\mathcal{L}^2(S).\] The minimal $S$ will be
denoted $S_d$ (see Figure~\ref{fig:cube-flat-beta-1}).
\begin{figure}[H]
\centering
\scalebox{0.5}{\input{cube-flat-beta-1.pdf_t}}
\caption{Multiscale flat norm decomposition inspiring the definition of $\beta_\Gamma^{\mathbb{F}}$.}
\label{fig:cube-flat-beta-1}
\end{figure}
{\color{blue} Suppose that we define $\beta^{\mathbb{F}}_\Gamma(C)$
by \[\beta^{\mathbb{F}}_\Gamma(C) l(C) = 2^{d+k}\mathcal{L}^2(S_d)\]
so that \[\beta^{\mathbb{F}}_\Gamma(C) =
2^{2d+k}\mathcal{L}^2(S_d).\] What can we say about the properties
(e.g. rectifiability) of $\Gamma$ given the finiteness of $\sum_C
\left(\beta^{\mathbb{F}}_\Gamma(3C)\right)^2 l(C)$?}
\end{que}
\begin{que}
Can we get an advantage by using the flat norm decomposition as a preconditioner before
we find cubical cover approximations? For example, define
\[
\mathcal{F}_d^\Gamma \equiv \mathcal{C}^{\Gamma_d}_d \text{ and } \Gamma_d \equiv \Gamma - \partial S_d,
\]
\[
\text{where} \ S_d = \operatornamewithlimits{argmin}_S \left(\mathcal{H}^1(\Gamma-\partial S)+2^{d+k}\mathcal{L}^2(S)\right).
\]
Since the flat norm minimizers have bounded mean curvature, is this
enough to force the cubical covers to give us better quantitative
information on $\Gamma$? How about in the case in which $\Gamma
= \partial E$, $E\subset \mathbf{R}^2$?
\end{que}
\section{Further Exploration}
\label{sec:further-exploration}
There are a number of places to begin in exploring these ideas
further. Some of these works require significant dedication to master,
and it helps to have someone who has already found a
path into pieces of these areas whom you can ask questions of when you
first wade in. Nonetheless, if you remember that the language can
always be translated into pictures, and you make the effort to do that,
headway towards mastery can always be made. Here is an annotated
list:
\begin{description}
\item[Primary Varifold References] Almgren's little
book~\cite{almgren-2001-1} and Allard's founding
contribution~\cite{allard-1972-1} are the primary sources for
varifolds. Leon Simon's book on geometric measure
theory~\cite{simon-1984-lectures} (available for free online) has a
couple of excellent chapters, one of which is an exposition of
Allard's paper.
\item[Recent Varifold Work] Both Buet and
collaborators~\cite{buet-2013-varifolds,buet-2014-quantitative,buet-2015-discrete,buet-2016-surface,buet-2016-varifold}
and Charon and
collaborators~\cite{charlier-2014-fshape,charon-2013-varifold,charon-2014-functional}
have been digging into varifolds with an eye to applications. While
these papers are a good start, there is still a great deal of
opportunity for the use and further development of varifolds. On
the theoretical front, there is the work of Menne and
collaborators~\cite{menne-2008-c2, menne-2009-some,
menne-2010-sobolev, menne-2012-decay, menne-2014-weakly,
kolasinski-2015-decay, menne-2016-pointwise, menne-2016-sobolev,
menne-2017-concept, menne-2017-geometric}. We want to call
special attention to the recent introduction to the idea of a
varifold that appeared in the December 2017 AMS
Notices~\cite{menne-2017-concept}.
\item[Geometric Measure Theory I] The area underlying the ideas here is
geometric measure theory. The fundamental treatise in the
subject is still Federer's 1969 \emph{Geometric Measure
Theory}~\cite{fed69}, even though most people start by reading
Morgan's beautiful introduction to the
subject, \emph{Geometric Measure Theory: A Beginner's Guide}~\cite{morgan-2008-geometric} and Evans' \emph{Measure Theory and Fine Properties of Functions}~\cite{evans-1992-1}. Also recommended are Leon
Simon's lecture notes~\cite{simon-1984-lectures}, Francesco Maggi's
book that updates the classic Italian approach~\cite{mag12}, and
Krantz and Parks' \emph{Geometric Integration
Theory}~\cite{krantz-2008-geometric}.
\item[Geometric Measure Theory II] The book by
Mattila~\cite{mattila-1999-geometry} approaches the subject from the
harmonic-analysis-flavored thread of geometric measure theory. Some
use this as a first course in geometric measure theory, albeit one
that does not touch on minimal surfaces, which is the focus of the
other texts above. De Lellis' exposition \emph{Rectifiable Sets,
Densities, and Tangent Measures}~\cite{de-lellis-2008-1} or Priess'
1987 paper \emph{Geometry of Measures in $\mathbf{R}^n$: Distribution,
Rectifiability, and Densities}~\cite{preiss-1987-geometry} is also
very highly recommended.
\item[Jones' $\beta$] In addition to the papers cited in the text
~\cite{jon90,oki92,ler03,sch07}, there are related works by David and
Semmes that we recommend. See for
example~\cite{david-1993-analysis}. There is also the applied work
by Gilad Lerman and his collaborators that is often inspired by
Jones' $\beta$ and his own development of Jones' ideas
in~\cite{ler03}. See
also~\cite{chen-2009-spectral,zhang-2012-hybrid,zhang-2009-median,arias-2011-spectral}. See
also the work by Maggioni and
collaborators~\cite{little-2009-estimation,allard-2012-multiscale}.
\item[Multiscale Flat Norm] The flat norm was introduced by Whitney in
1957~\cite{whitney-1957-geometric} and used to create a topology
on currents that permitted Federer and Fleming, in their landmark
paper in 1960~\cite{federer-1960-normal}, to obtain the existence of
minimizers. In 2007, Morgan and Vixie realized that a variational
functional introduced in image analysis was actually computing a
multiscale generalization of the flat norm~\cite{morgan-2007-1}. The
ideas are beginning to be explored in these papers
\cite{vixie-2010-multiscale, van-dyke-2012-thin,
ibrahim-2013-simplicial, vixie-2015-some, ibrahim-2016-flat,
alvarado-2017-lower}.
\end{description}
https://arxiv.org/abs/1703.02775 | Cubical Covers of Sets in $\mathbb{R}^n$
https://arxiv.org/abs/1303.5466 | An O(N) Direct Solver for Integral Equations on the Plane

An efficient direct solver for volume integral equations with O(N) complexity for a broad range of problems is presented. The solver relies on hierarchical compression of the discretized integral operator, and exploits that off-diagonal blocks of certain dense matrices have numerically low rank. Technically, the solver is inspired by previously developed direct solvers for integral equations based on "recursive skeletonization" and "Hierarchically Semi-Separable" (HSS) matrices, but it improves on the asymptotic complexity of existing solvers by incorporating an additional level of compression. The resulting solver has optimal O(N) complexity for all stages of the computation, as demonstrated by both theoretical analysis and numerical examples. The computational examples further display good practical performance in terms of both speed and memory usage. In particular, it is demonstrated that even problems involving 10^{7} unknowns can be solved to precision 10^{-10} using a simple Matlab implementation of the algorithm executed on a single core.

\subsection{Notation and Preliminaries}
We view an $N \times N$ matrix $A$ as a kernel function $K = K(p,q)$ evaluated at pairs of sample points. As $A$ typically comes from an integral formulation of an elliptic PDE, aside from a low-rank block structure we also expect Green's identities to hold for $K$; this, however, is not a fundamental limitation of our algorithm: the use of Green's identities is restricted to one small part of the algorithm (Section~\ref{sec:hss}) relying on equivalent density representations, and it can be replaced by any other technique of a similar nature.
We use MATLAB-like notation $A(I,J)$, where $I$ and $J$ are ordered sets of
indices, to denote submatrices of a matrix $A$. Again following
MATLAB conventions, $A(:,J)$ and $A(I,:)$ indicate blocks of columns
and rows, respectively.
An essential building block for our algorithm is the interpolative
low-rank decomposition. This decomposition factors an $m \times n$
matrix $A$ into a narrower \emph{skeleton} matrix $A^{\rm sk} = A(:, I^{\rm sk})$
of size $m \times k$, consisting of a subset of columns of $A$
indexed by the set of indices $I^{\rm sk}$, and the interpolation matrix
$T$ of size $k \times (n-k)$, expressing the remaining columns of $A$
as linear combinations of columns of $A^{\rm sk}$. The set of indices
$I^{\rm sk}$ is called the \emph{column skeleton} of $A$. If $\Pi^{\rm sk}$ is
the permutation matrix placing entries with indices from $I^{\rm sk}$ first,
\begin{equation}
A = A^{\rm sk} \begin{bmatrix} I_{k \times k}\; T \end{bmatrix} \Pi^{\rm sk} + E
\label{eq:interp-fact-R}
\end{equation}
where $\|E\|_2 \sim \sigma_{k+1}$ vanishes as we increase $k$, and $R
= [I_{k\times k}\; T] \Pi^{\rm sk}$ is a downsampling interpolation matrix.
We denote this compression operation by $[T,I^{\rm sk}] = ID(A, \varepsilon)$, where
$\varepsilon$ controls the norm of $E$. To obtain a similar compression for
rows, we apply the same operation to $A^T$; in this case, we obtain a
factorization $A = (\Pi^{\rm sk}_{r})^T \begin{bmatrix} I_{k \times k} \\ T_r^T \end{bmatrix} A^{\rm sk}_r + E_r$,
where $L = (\Pi^{\rm sk}_r)^T \begin{bmatrix} I_{k \times k} \\ T_r^T \end{bmatrix}$
is an upsampling interpolation matrix.
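A minimal numerical sketch of the operation $[T, I^{\rm sk}] = ID(A,\varepsilon)$ follows, using column-pivoted Gram--Schmidt as a simple stand-in for the pivoted-QR based routines used in practice (the function and variable names are ours).

```python
import numpy as np

# Column interpolative decomposition: pick skeleton columns by greedy
# column-pivoted Gram-Schmidt, then express the remaining columns as
# (least-squares) combinations of the skeleton columns.
def interp_decomp(A, eps):
    m, n = A.shape
    W = A.astype(float).copy()
    skel = []
    ref = np.linalg.norm(W, axis=0).max()
    while len(skel) < min(m, n):
        norms = np.linalg.norm(W, axis=0)
        j = int(np.argmax(norms))
        if norms[j] <= eps * ref:
            break
        q = W[:, j] / norms[j]
        skel.append(j)
        W -= np.outer(q, q @ W)  # deflate the chosen column's contribution
    skel = np.array(skel)
    rest = np.setdiff1d(np.arange(n), skel)
    T, *_ = np.linalg.lstsq(A[:, skel], A[:, rest], rcond=None)
    return skel, rest, T

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))  # rank 5
skel, rest, T = interp_decomp(A, 1e-10)
err = np.linalg.norm(A[:, rest] - A[:, skel] @ T) / np.linalg.norm(A)
print(len(skel), err)  # the numerical rank is recovered with tiny error
```

A key property, used heavily below, is that the skeleton $A(:, I^{\rm sk})$ consists of actual columns of $A$, so sub-sampling commutes with the factorization.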
\subsection{Constructing hierarchically semi-separable matrices}
\label{sec:hss}
We assume that the domain of interest is contained in a rectangle
$\Omega$, with a regular grid of samples (if the input data is given
in a different representation, we resample it first).
A quadtree is constructed by recursively subdividing $\Omega$ into
cells (boxes $B_i$) by bisection,
corresponding to the nodes of a \emph{binary} tree $\cal T$.
The two \emph{children} of a box are denoted $c_1(i)$ and $c_2(i)$.
Subdivision can be done adaptively without significant changes to the
algorithm, but to simplify the exposition we focus on the uniform refinement case.
Let $I_i$ denote the index vector marking all discretization points in
box $B_{i}$, and let $\mathcal{L}$ denote a list of the leaf boxes. Then
$\{I_{i}\}_{i \in \mathcal{L}}$ forms a disjoint partition of the full
index set
\begin{equation}
\label{eq:indexpartition}
\{1,\,2,\,3,\,\dots,\,N\} = \bigcup_{i \in \mathcal{L}} I_{i}.
\end{equation}
Let $m_{i}$ denote the number of points in $B_{i}$.
The partition (\ref{eq:indexpartition}) corresponds to a blocking
\begin{equation}
\label{eq:blocked}
\sum_{j \in \mathcal{L}} {A_{ij} \sigma_j} = f_i,\qquad i \in \mathcal{L},
\end{equation}
of the linear system (\ref{eq:Asigma}), where
$A_{ij} = A(I_i,I_j)$, and the vectors $\sigma$ and $f$ are partitioned accordingly.
For a linear system such as (\ref{eq:Asigma}) arising from the discretization
of an integral equation with a smooth kernel, the off-diagonal blocks of
(\ref{eq:blocked}) typically have low numerical rank. Such matrices can be
represented in an efficient data-sparse format called
\emph{hierarchically semi-separable (HSS)}. In order to
rigorously describe this format, we first define the concept of
a \emph{block-separable matrix}.
\begin{definition} [Block-separable Matrices]
\label{def:blockseparable}
We say $A$ is \emph{block-separable} if there exist matrices $\{L_i,R_i \}_{i \in \mathcal{L}}$
such that each off-diagonal block $A_{i,j}$ in (\ref{eq:blocked}) admits the factorization
\begin{equation}
\label{eq:blockseparable}
A_{i,j} = \underset{m_i \times k_i}{L_i} \ \underset{k_i \times k_j}{M_{i,j}} \ \underset{k_j \times m_j}{R_j},
\end{equation}
where the block ranks $k_{i}$ satisfy $k_{i} < m_{i}$.
\end{definition}
In order to construct the matrices $L_{i}$ and $R_{i}$ in (\ref{eq:blockseparable})
it is helpful to introduce \textit{block rows} and \textit{block columns} of $A$:
For $i \in \mathcal{L}$, we define the $i$th off-diagonal block row of
$A$ as $A^{row}_i = A(I_i, I \setminus I_i) = [A_{i,1} \dots A_{i,i-1}
A_{i,i+1} \dots A_{i,p}]$. The $j$th off-diagonal block column
$A^{col}_j$ is defined analogously. Given a prescribed accuracy $\varepsilon$, we denote by
$k_i^{r}$ and $k_i^{c}$ the $\varepsilon$-ranks of $A^{row}_i$ and $A^{col}_i$, respectively.
Now that we have defined $A^{row}_i$ and $A^{col}_i$, we use them to obtain
the factorization (\ref{eq:blockseparable}) as follows:
For each $i \in \mathcal{L}$, form interpolative decompositions of
$A^{row}_i$ and $A^{col}_i$:
\begin{equation}
\label{eq:fullID}
A_i^{row} = \underset{m_i \times k_i}{L_{i}}
\underset{k_{i} \times (N-m_i)}{A(I_{i}^{\rm rsk},:)}
\qquad\mbox{and}\qquad
A_i^{col} =
\underset{(N-m_i) \times k_i}{A(:,I_{i}^{\rm csk})}
\underset{k_i \times m_i}{R_{i}},
\end{equation}
where the index vectors $I_{i}^{\rm rsk}$ and $I_{i}^{\rm csk}$ are
the \textit{row-skeleton} and \textit{column-skeleton} of block $i$, respectively.
Note that the columns of $L_i$ form a column basis for $A^{row}_i$,
and the rows of $R_i$ form a row basis for $A^{col}_i$. Setting
$$
M_{i,j} = A(I_i^{\rm rsk},I_j^{\rm csk}),
$$
we then find that (\ref{eq:blockseparable}) necessarily holds.
Observe that each matrix $M_{i,j}$ is a submatrix of $A$.
Set $D_i = A_{i,i}$. This yields a block factorization for $A$:
\begin{equation}
A = D^{d} + L^{d} A^{d-1} R^{d}
\label{eq:onelevel-factorization}
\end{equation}
where $D^{d}$, $L^{d}$, and $R^{d}$ are the block diagonal matrices whose diagonal
blocks are given by $\{D_{i}\}_{i \in \mathcal{L}}$, $\{L_{i}\}_{i \in \mathcal{L}}$,
and $\{R_{i}\}_{i \in \mathcal{L}}$, respectively. The matrix $A^{d-1}$ is
the submatrix of $A$ corresponding to the union of skeleton points, with
diagonal blocks zeroed out and off-diagonal blocks $M_{i,j}$.
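The one-level factorization can be exercised on a small model problem: the kernel $K(p,q)=\log|p-q|$ sampled on a line, split into four boxes. The sketch below (our own helper names, with a crude greedy ID standing in for production routines) skeletonizes each off-diagonal block row and checks that $A_{i,j}\approx L_i M_{i,j} R_j$ for $i \neq j$, using $R_j = L_j^{T}$, which is valid here since the kernel matrix is symmetric.

```python
import numpy as np

def id_rows(B, eps):
    # row skeleton of B: returns skel, L with B ~= L @ B[skel, :]
    m = B.shape[0]
    W = B.T.astype(float).copy()  # Gram-Schmidt on the rows of B
    skel = []
    ref = np.linalg.norm(W, axis=0).max()
    while len(skel) < m:
        norms = np.linalg.norm(W, axis=0)
        j = int(np.argmax(norms))
        if norms[j] <= eps * ref:
            break
        q = W[:, j] / norms[j]
        skel.append(j)
        W -= np.outer(q, q @ W)
    skel = np.array(skel)
    L, *_ = np.linalg.lstsq(B[skel, :].T, B.T, rcond=None)
    return skel, L.T

N, nb = 200, 4
pts = np.linspace(0, 1, N)
A = np.log(np.abs(pts[:, None] - pts[None, :]) + np.eye(N))  # diag -> log 1 = 0
boxes = np.array_split(np.arange(N), nb)

skels, Ls = [], []
for I in boxes:
    J = np.setdiff1d(np.arange(N), I)          # off-diagonal block row
    sk, L = id_rows(A[np.ix_(I, J)], 1e-10)
    skels.append(I[sk])                        # global skeleton indices
    Ls.append(L)

# check A_ij ~= L_i M_ij R_j with M_ij a submatrix of A and R_j = L_j^T
err = 0.0
for i in range(nb):
    for j in range(nb):
        if i != j:
            M = A[np.ix_(skels[i], skels[j])]
            approx = Ls[i] @ M @ Ls[j].T
            err = max(err, np.linalg.norm(A[np.ix_(boxes[i], boxes[j])] - approx))
print([len(s) for s in skels], err)  # skeletons smaller than boxes, small error
```

Note that the $M_{i,j}$ blocks are never computed; they are simply read off from $A$ at the skeleton indices, which is what makes the hierarchical variant below efficient.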
\begin{remark}
Row and column skeleton sets need not coincide, although for our purposes we will assume they are augmented so that they have the same size (so that the $D_i$ blocks are square). If the system matrix is symmetric, as is the case for all matrices considered in this paper, these sets are indeed the same and, further, $R_i = L_i^{T}$. For this reason, as well as for the sake of simplicity, we make no further distinction between them unless it is necessary.
\end{remark}
\subsubsection*{Hierarchical compression of A}
The key property that allows dense matrix operations to be performed
with less than $O(N^{2})$ complexity is that the low-rank structure in
definition~\ref{def:blockseparable} can be exploited recursively in the
sense that the matrix $A^{d-1}$ in \eqref{eq:onelevel-factorization}
itself is block-separable.
To be precise, we re-partition the matrix $A^{d-1}$ by merging $2\times 2$
sets of blocks to form new larger blocks. Each larger block is associated
with a box $i$ on level $d-1$ corresponding to the index vector
$I_{c_1(i)}^{\rm sk} \sqcup I_{c_2(i)}^{\rm sk}$ (the new index vector
holds $m_i = k_{c_1}+k_{c_2}$ nodes).
The resulting matrix with larger blocks is then itself block-separable
and admits a factorization, cf.~Figure \ref{fig:reblock},
\begin{equation}
A = D^{d} + L^{d} \bigl(D^{d-1} + L^{d-1}\,A^{d-2}\,R^{d-1}\bigr) R^{d}
\label{eq:twolevel-factorization}
\end{equation}
\begin{figure}[H]
\begin{center}
\includegraphics[scale = 0.5]{Multilevel_HSS_merge}
\caption{\textit{Two levels of block-separable compression:
blocks of $M$ corresponding to children are merged and then
off-diagonal interactions are further compressed.}}
\label{fig:reblock}
\end{center}
\end{figure}
We say that $A$ is \emph{hierarchically semiseparable} (HSS) if the
process of reblocking and recompression can be continued through all
levels of the tree. In other words, we assume that
$A^{\ell} = D^{\ell} + L^{\ell} A^{\ell-1} R^{\ell}$
for $\ell = d,\, d-1,\, \ldots,\, 1$, or, more explicitly,
\begin{equation}
A^d = D^{d} + L^{d}\bigl( D^{d-1} + L^{d-1} ( D^{d-2} + \ldots (D^{1} + L^{1}D^{0}R^{1}) \ldots ) R^{d-1}\bigr) R^{d}
\label{eq:telescope}
\end{equation}
with $A = A^d$, $A^0 = D^0$, and $D^{\ell}$, $L^{\ell}$ and $R^{\ell}$
block-diagonal, with blocks in matrices with index $\ell$ corresponding
to boxes at level $\ell$. For non-leaf boxes, blocks $D_i^{\ell}$ account
for ``sibling interactions,'' in other words interactions between the
children $c_{1}(i)$ and $c_{2}(i)$ of $B_{i}$,
\begin{equation}
D_i =
\left[ \begin{array}{cc}
0 & M_{c_1 (i), c_2 (i)}\\
M_{c_2 (i), c_1 (i)} & 0
\end{array} \right].
\label{eq:D-expressions}
We call (\ref{eq:telescope}) \emph{the telescoping factorization} of $A$.
The matrices under consideration in this manuscript are (like most matrices arising
from the discretization of integral operators) all HSS.
\begin{remark}
In this paper we use the term \textit{hierarchically semi-separable (HSS)}
to conform with standard use in the literature, see \cite{Gu06sparse,Gu06ULV,chandrasekaran2010numerical,xia2010}.
In \cite{MR2005,GYMR2012,ho2012fast} the term \textit{hierarchically block-separable (HBS)} is
alternatively used to refer to this hierarchical version of block-separability, consistent
with definition~\ref{def:blockseparable}.
\end{remark}
\subsubsection*{Using equivalent densities to accelerate compression}
A matrix is block-separable as long as all sub-matrices $A^{\rm row}_{i}$
and $A_{i}^{\rm col}$ are rank-deficient. However, these matrices are large,
so directly computing the IDs in (\ref{eq:fullID}) is expensive ($O(N^{2})$ cost \cite{xia2012complexity}).
In this section, we describe how to exploit the fact that the
matrix to be compressed is associated with an elliptic PDE to
reduce the asymptotic cost. Related techniques were previously
described in \cite{kifmm04ying} and \cite{MR2005}.
Let $B_i$ denote a leaf box with associated index vector $I_{i}$.
We will describe the accelerated technique for constructing a matrix
$L_{i}$ and an index vector $I_{i}^{\rm rsk} \subset I_{i}$ such that
\begin{equation}
\label{eq:cup1}
A_{i}^{\rm row} = L_{i}\,A(I_{i}^{\rm rsk},:)
\end{equation}
holds to high precision. (The technique for finding $R_{i}$ and
$I_{i}^{\rm csk}$ such that (\ref{eq:fullID}) holds is analogous.)
For concreteness, suppose temporarily that the kernel $\mathcal{K}$
is the fundamental solution of the Laplace equation, $\mathcal{K}(x,y) = \frac{1}{2 \pi} \log|x-y|$.
The idea is to construct a small matrix $\tilde{A}_{i}^{\rm row}$
with the property that
\begin{equation}
\label{eq:cup2}
\mbox{Ran}\bigl(A_{i}^{\rm row}\bigr) \subseteq \mbox{Ran}\bigl(\tilde{A}_{i}^{\rm row}\bigr).
\end{equation}
In other words, the columns of $\tilde{A}_{i}^{\rm row}$ need to
span the columns of $A_{i}^{\rm row}$. Then compute an ID of the
\textit{small} matrix $\tilde{A}_{i}^{\rm row}$,
\begin{equation}
\label{eq:cup3}
\tilde{A}_{i}^{\rm row} = L_{i}\,\tilde{A}_{i}^{\rm row}(I_{i}^{\rm rsk},:).
\end{equation}
Now (\ref{eq:cup2}) and (\ref{eq:cup3}) together imply that (\ref{eq:cup1}) holds.
It remains to construct a small matrix $\tilde{A}_{i}^{\rm row}$
whose columns span the range of $A_{i}^{\rm row}$. To do this,
suppose that $v \in \mbox{Ran}(A_{i}^{\rm row})$, so that
$v = A_{i}^{\rm row}\,q$ for some vector $q \in \mathbb{R}^{N-m_{i}}$.
Physically, this means that the values of $v$ represent values
of a harmonic function generated by sources $q$ located outside
the box $B_{i}$. We now know from potential theory that any
harmonic function in $B_{i}$ can be replicated by a source density
on the boundary $\partial B_{i}$. The discrete analog of this statement
is that to very high precision, we can replicate the harmonic function
in $B_{i}$ by placing point charges in a thin layer of discretization
nodes surrounding $B_{i}$ (drawn as solid diamonds in Figure \ref{fig:proxy}(b)).
Let $\{z_{j}\}_{j=1}^{p_{i}}$ denote the locations of these points.
The claim is then that $v$ can be replicated by placing some
``equivalent charges'' at these points. In other words, we form
$\tilde{A}_{i}^{\rm row}$ as the matrix of size $m_{i} \times p_{i}$
whose entries take the form $\mathcal{K}(x_{r},z_{j})$ for $r \in I_{i}$,
and $j = 1,\,2,\,\dots,\,p_{i}$.
\begin{remark}
Figure \ref{fig:proxy} shows an example of how accelerated compression
works. Figure \ref{fig:proxy}(a) illustrates a domain $\Omega$ with
a sub-domain $B_{i}$ (the dotted box). Suppose that $\varphi$ is a harmonic
function on $B_{i}$. Then potential theory assures us that $\varphi$ can be
generated by sources on $\partial B_{i}$, in other words $\varphi(x) =
\int_{\Gamma}\mathcal{K}(x,y)\,\sigma(y)\,ds(y)$ for some density $\sigma$.
The discrete analog of this statement is that to high precision, the harmonic
function $\varphi$ can be generated by placing point charges on the proxy
points $\{z_{j}\}_{j=1}^{p_{i}}$ marked with solid diamonds in Figure
\ref{fig:proxy}(b). The practical consequence is that instead of
factoring the big matrix $A_{i}^{\rm row}$ which represents interactions between
all target points in $B_{i}$ (circles) and all source points (diamonds),
it is enough to factor the small matrix $\tilde{A}_{i}^{\rm row}$ representing
interactions between target points (circles) and proxy points (solid diamonds).
\end{remark}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=40mm]{fig_proxy_a}
&
\includegraphics[width=40mm]{fig_proxy_b} \\
(a) & (b)
\end{tabular}
\end{center}
\caption{(a) A domain $\Omega$ (solid) with a sub-domain $B_{i}$ (dotted).
(b) Target points in $B_{i}$ are circles, source points are diamonds,
and among the source points, the {\em proxy points} are solid.}
\label{fig:proxy}
\end{figure}
\begin{remark}
The width of the layer of proxy points depends on the accuracy requested.
We found that for the Laplace kernel, a layer of width 1 leads to relative
accuracy about $10^{-5}$, and width 2 leads to relative accuracy $10^{-10}$.
For the Helmholtz kernel $\mathcal{K}(x,y) = H_{0}^{(1)}(\kappa|x-y|)$,
similar accuracy is \textit{typically} observed, but in this case, thicker
skeleton layers are recommended to avoid problems associated with resonances.
(Recall that in classical potential theory, a solution to the Helmholtz equation may
require both monopole and dipole charges to be placed on $\partial B_{i}$.)
\end{remark}
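The proxy compression described above is easy to reproduce in a few lines. The following Python/SciPy sketch (our own illustrative geometry, not taken from this paper: targets in the box $[0,1]^2$, far sources on a surrounding annulus, $64$ proxy points on a circle of radius $1.5$) computes the row skeleton of a leaf box from the \textit{small} proxy matrix and verifies that the resulting interpolation matrix $L_i$ also reproduces the big matrix $A_i^{\rm row}$:

```python
import numpy as np
import scipy.linalg.interpolative as sli

rng = np.random.default_rng(0)

def K(x, y):
    # Laplace kernel log|x - y| / (2 pi); x: (m,2) targets, y: (p,2) sources
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
    return np.log(d) / (2 * np.pi)

# Targets: random points in the unit box B_i = [0,1]^2
m = 100
targets = rng.random((m, 2))

# Far sources: random points on an annulus well outside the proxy circle
ns = 200
ang = 2 * np.pi * rng.random(ns)
rad = 3.0 + 2.0 * rng.random(ns)
sources = np.column_stack([0.5 + rad * np.cos(ang), 0.5 + rad * np.sin(ang)])

# Proxy points: a circle of radius 1.5 around the box center
p = 64
th = 2 * np.pi * np.arange(p) / p
proxy = np.column_stack([0.5 + 1.5 * np.cos(th), 0.5 + 1.5 * np.sin(th)])

A_row = K(targets, sources)     # the "big" matrix  (m x ns)
A_tilde = K(targets, proxy)     # the small proxy matrix (m x p)

# Row ID of A_tilde = column ID of A_tilde^T: skeleton rows and L_i
k, idx, proj = sli.interp_decomp(np.asfortranarray(A_tilde.T), 1e-12, rand=False)
L_i = sli.reconstruct_interp_matrix(idx, proj).T   # m x k interpolation matrix
skel = idx[:k]                                     # skeleton rows within I_i

# The skeleton computed from the proxy matrix also reproduces A_row
err = np.linalg.norm(A_row - L_i @ A_row[skel, :]) / np.linalg.norm(A_row)
```

Since every column of `A_row` is (to high precision) in the span of the columns of `A_tilde`, the factorization of the small matrix transfers to the big one, which is exactly the implication (\ref{eq:cup2})--(\ref{eq:cup3}) $\Rightarrow$ (\ref{eq:cup1}).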
\subsection{HSS matrix-vector multiplication}
\label{sec:HSS-multiply}
To describe the process of computing the inverse of an HSS matrix in
compressed form, it is convenient first to explain how matrix-vector
products can be computed. The telescoping factorization \eqref{eq:telescope} yields a fast algorithm
for evaluating the matrix-vector product $u = A \sigma$. The structure of this algorithm
is similar to that of the FMM (but simpler, since we do not treat the
near field separately: \emph{all} external interactions are approximated
with a single set of coefficients). To emphasize the underlying
physical intuition, we refer to values $\sigma$ at
points as charges and to the values $u$ we want to compute as
potentials. On non-leaf boxes, we use notation $\phi^{\ell}_i$ for
the charges assigned to the skeleton points of the box, and
$u^{up,\ell}_i$ for computed potentials. The vector $\phi^{\ell}$ ($u^{\ell}$)
is the concatenations of all charges (potentials) of the boxes at level $\ell$.
At the finest level, we define $\phi^d := \sigma$.
\subsubsection*{Upward pass} The upward pass simply uses the rectangular
block-diagonal matrices to compute the skeleton charges level by level: $\phi^{\ell} = R^{\ell+1} \phi^{\ell+1}$.
Each block $R^{\ell+1}_i$ acts on the subvector of charges
corresponding to the children of $B_i$.
\subsubsection*{Downward pass} We compute for each box the potential $u_i$
due to all \emph{outside} charges, starting at the top of the tree.
For the top-level boxes $B_1$ and $B_2$, the values are computed
directly, using the sibling interaction matrix $D^0$ as
defined by \eqref{eq:D-expressions}; in other words, $u^0 = D^0 \phi^0$.
For boxes at levels $\ell \geq 1$, the outside field is
obtained as the sum of the field interpolated down from level
$\ell - 1$ using $L^\ell$ (``tall'' rectangular block-diagonal) and the
contributions of the siblings through the square block-diagonal matrix
$D^\ell$:
\begin{equation*}
u^{\ell} = D^{\ell} \phi^{\ell} + L^\ell u^{\ell-1}
\label{eq:dn}
\end{equation*}
At the leaf level, the last step transfers the potentials to the sample points
and adds the field due to the boxes themselves (self-interactions, stored
in the diagonal blocks $A_{i,i}$ of $D^{d}$). The actions of the different
transformations are summarized in the following computational flow diagram:
\[
\xymatrix{
\sigma = \phi^{d} \ar[rr]^{R^d} \ar[rd]^{D^d} \ar[d]_{A^d \approx A}& & \phi^{d-1} \ar[rr]^{R^{d-1}} \ar[rd]^{D^{d-1}} \ar[d]_{A^{d-1}} & & \cdots \ \ar[r]^{R^1} & \phi^{0} \ar[d]_{D^0 =A^0}\\
u = u^{d} &\oplus \ar[l] & u^{d-1} \ar[l]^{L^{d}} &\oplus\ar[l]& \ar[l]^{L^{d-1}}\cdots& u^{0} \ar[l]^{L^1}\\
}
\]
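The two-pass structure can be checked on a toy example. The Python sketch below builds a synthetic depth-2 telescoping factorization from random block-diagonal factors (the box and skeleton sizes are arbitrary choices of ours, not values from the paper) and verifies that the upward/downward passes reproduce the dense matrix-vector product:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)

# Synthetic depth-2 factorization A = D2 + L2 (D1 + L1 D0 R1) R2:
# 4 leaf boxes of 6 points with skeletons of size 3,
# 2 level-1 boxes with skeletons of size 2.
D2 = block_diag(*[rng.standard_normal((6, 6)) for _ in range(4)])  # 24 x 24
R2 = block_diag(*[rng.standard_normal((3, 6)) for _ in range(4)])  # 12 x 24
L2 = block_diag(*[rng.standard_normal((6, 3)) for _ in range(4)])  # 24 x 12
D1 = block_diag(*[rng.standard_normal((6, 6)) for _ in range(2)])  # 12 x 12
R1 = block_diag(*[rng.standard_normal((2, 6)) for _ in range(2)])  #  4 x 12
L1 = block_diag(*[rng.standard_normal((6, 2)) for _ in range(2)])  # 12 x  4
# Top-level sibling interactions with zero diagonal blocks
D0 = np.block([[np.zeros((2, 2)), rng.standard_normal((2, 2))],
               [rng.standard_normal((2, 2)), np.zeros((2, 2))]])   #  4 x  4

A = D2 + L2 @ (D1 + L1 @ D0 @ R1) @ R2      # dense reference matrix

sigma = rng.standard_normal(24)

# Upward pass: compress charges to skeletons, level by level
phi2 = sigma
phi1 = R2 @ phi2
phi0 = R1 @ phi1

# Downward pass: sibling interactions at the top, then interpolate down
u0 = D0 @ phi0
u1 = D1 @ phi1 + L1 @ u0
u2 = D2 @ phi2 + L2 @ u1                    # u = A sigma

err = np.linalg.norm(u2 - A @ sigma) / np.linalg.norm(A @ sigma)
```

The agreement is exact up to rounding, since the two passes simply evaluate the telescoping factorization \eqref{eq:telescope} from the inside out.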
\subsection{Computing the HSS form of $A^{-1}$}
\label{sec:HSS-inverse}
If $A$ is non-singular, the telescoping factorization \eqref{eq:telescope}
can typically be inverted directly, yielding an HSS representation of $A^{-1}$,
which can be applied efficiently using the algorithm described in Section \ref{sec:HSS-multiply}.
The inversion process is also best understood in terms of the
variables introduced in Section~\ref{sec:HSS-multiply}.
First, we derive a formula for inversion of matrices having
single-level telescoping factorization $Z = F + LMR$.
(The matrix $Z$ on the first step coincides with $A = A^d$ and $F =
D^d$, but both $Z$ and $F$ differ from $A^\ell$ and $D^\ell$
on subsequent steps.)
We consider the system $Z \sigma = f$, and
define $\phi = \phi^{d-1} = R\sigma$, $u = u^{d-1} = M\phi$.
We then perform block-Gaussian elimination on the resulting block system:
\begin{equation}
\begin{bmatrix} F & L & 0 \\ -R & 0 & I \\ 0 & -I & M \end{bmatrix} \begin{bmatrix} \sigma \\ u \\ \phi \end{bmatrix} = \begin{bmatrix} f \\ 0 \\ 0 \end{bmatrix}
\end{equation}
We form the auxiliary matrices $E = (RF^{-1}L)^{-1}$ and $G = E + M$. Then
\begin{equation}
\begin{bmatrix} F & 0 & 0 \\ 0 & E^{-1} & 0 \\ 0 & 0 & G \end{bmatrix} \begin{bmatrix} \sigma \\ u \\ \phi \end{bmatrix} = \begin{bmatrix} [I - LERF^{-1} + LEG^{-1}ERF^{-1}]f \\ [RF^{-1} - G^{-1}ERF^{-1}]f \\ ERF^{-1}f \end{bmatrix}
\end{equation}
which yields the inverse of $Z$ by solving the block-diagonal system in the first line. Denoting $\tA[R] = ERF^{-1}$, $\tA[D] = F^{-1}(I - L\tA[R])$ and $\tA[L] = F^{-1}LE$, we obtain:
\begin{equation}
Z^{-1} = \tA[D] + \tA[L] (E+M)^{-1} \tA[R]
\label{eq:inv-recursion}
\end{equation}
We make a few observations:
\begin{itemize}
\item If $F$, $L$ and $R$ are block diagonal, then so are $E$, $\tA[R]$, $\tA[L]$ and $\tA[D]$.
This means that these matrices can be computed inexpensively via independent computations
that are local to each box.
\item The factors in the inverse can be interpreted as follows:
\begin{itemize}
\item $E^{-1} = RF^{-1}L$ can be viewed as a \emph{local solution operator} ``reduced'' to the set of skeleton points of each box. It maps fields to charge densities on these sets.
\item $E + M$ maps charge densities on \emph{the union of skeleton points} to fields, adding the diagonal ($E\phi$) and off-diagonal ($M\phi$) contributions.
\end{itemize}
\end{itemize}
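Formula \eqref{eq:inv-recursion} can be verified numerically. The sketch below uses generic random factors (toy dimensions of our own choosing; $F$ is shifted to keep it well conditioned) and checks that the reconstructed $Z^{-1}$ inverts $Z$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 8

# A generic single-level factorization Z = F + L M R
G = rng.standard_normal((n, n))
F = G @ G.T + n * np.eye(n)        # SPD shift keeps F well conditioned
L = rng.standard_normal((n, k))
R = rng.standard_normal((k, n))
M = rng.standard_normal((k, k))
Z = F + L @ M @ R

Finv = np.linalg.inv(F)
E = np.linalg.inv(R @ Finv @ L)    # note: E^{-1} = R F^{-1} L
tR = E @ R @ Finv                  # \tA[R]
tD = Finv @ (np.eye(n) - L @ tR)   # \tA[D]
tL = Finv @ L @ E                  # \tA[L]

# Z^{-1} = \tA[D] + \tA[L] (E + M)^{-1} \tA[R]
Zinv = tD + tL @ np.linalg.inv(E + M) @ tR
err = np.linalg.norm(Zinv @ Z - np.eye(n))
```

Multiplying out $Z\,Z^{-1}$ and using $RF^{-1}LE = I$ shows the identity holds exactly whenever $F$, $RF^{-1}L$, and $E+M$ are invertible.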
This inversion procedure can be applied recursively.
To obtain a recursion formula, we define an auxiliary matrix
$\tA[A]^\ell = A^\ell + E^{\ell+1}$, for $\ell < d$, and $\tA[A]^d =
A^d = A$; $F^\ell = D^\ell + E^{\ell+1}$, and $F^d = D^d$.
Then $\tA[A]^\ell$ satisfies $ \tA[A]^\ell = F^\ell + L^\ell
A^{\ell-1} R^\ell$, and its inverse by \eqref{eq:inv-recursion} satisfies
\begin{equation}
\left( \tA[A]^\ell\right)^{-1} = \tA[D]^\ell + \tA[L]^\ell (E^\ell +
A^{\ell-1} )^{-1} \tA[R]^\ell =
\tA[D]^\ell + \tA[L]^\ell (\tA[A]^{\ell-1} )^{-1} \tA[R]^\ell
\label{eq:inv-telescope}
\end{equation}
A fine-to-coarse procedure for computing the blocks of the inverse
immediately follows from \eqref{eq:inv-telescope}:
for each layer $\ell$, we first compute $F^\ell$, using $E^{\ell+1}$
from the finer layer (zero for the finest). $F^\ell$ determines
$\tA[D]^\ell$, $\tA[R]^\ell$, $\tA[L]^\ell$, and $E^{\ell}$, to
be used at the next layer.
Algorithm~\ref{alg:HSS-inv} summarizes the HSS inversion algorithm;
it takes as input a tree $\cal T$ with index sets $I_i$
defined for leaf nodes, and matrices $D_i$, $R_i$ and $L_i$ for each
box $B_i$, and computes components $\tA[D]_i$, $\tA[R]_i$
and $\tA[L]_i$ of the HSS form of $A^{-1}$. This algorithm has
complexity $O(N^{3/2})$ for a volume integral equation in 2D.
It forms the starting point from which we derive an $O(N)$
algorithm in Section \ref{sec:inversion}.
\begin{algorithm}[h!]
\begin{center}
\begin{algorithmic}[1]
\FOR{ each box $B_i$ in fine-to-coarse order}
\IF{$B_i$ is a leaf}
\STATE $F_i = D_i$
\ELSE
\STATE $F_i = D_i + \begin{bmatrix} E_{c_1(i)} &\\ &E_{c_2(i)} \end{bmatrix}$
\ENDIF
\IF{$B_i$ is top-level}
\STATE $\tA[F]_i = F_i^{-1}$ \COMMENT{Direct inversion at the top level}
\ELSE
\STATE Compute $F_i^{-1}$
\STATE $E_i = (R_i F_i^{-1} L_i )^{-1}$
\STATE $\tA[R]_i = E_i R_i F_i^{-1}$
\STATE $\tA[D]_i = F_i^{-1} (I - L_i \tA[R]_i)$
\STATE $\tA[L]_i = F_i^{-1} L_i E_i$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{center}
\caption{HSS matrix inversion}
\label{alg:HSS-inv}
\end{algorithm}
The hierarchical structure this algorithm computes is not exactly that of the
matrix $A$: $\tA[L]$ and $\tA[R]$ are not interpolation matrices, and $\tA[D]$ has nonzero
diagonal blocks. However, the result can be converted to standard HSS format if needed via
the simple re-formatting algorithm in \cite{GYMR2012, G2011thesis}.
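A dense (per-level, rather than per-box) rendition of this recursion is easy to test. The Python sketch below builds a random depth-2 telescoping factorization (toy dimensions of our own choosing, with diagonal shifts to keep the intermediate inverses well conditioned), runs the fine-to-coarse steps of Algorithm \ref{alg:HSS-inv} with dense level matrices, and checks the telescoping inverse \eqref{eq:inv-telescope}:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(3)

def rand_bd(shapes):
    # Random block-diagonal matrix with the given block shapes
    return block_diag(*[rng.standard_normal(s) for s in shapes])

# Depth-2 telescoping factorization A = D2 + L2 (D1 + L1 D0 R1) R2
D2 = rand_bd([(6, 6)] * 4) + 10 * np.eye(24)
R2, L2 = rand_bd([(3, 6)] * 4), rand_bd([(6, 3)] * 4)
D1 = rand_bd([(6, 6)] * 2) + 10 * np.eye(12)
R1, L1 = rand_bd([(2, 6)] * 2), rand_bd([(6, 2)] * 2)
D0 = rng.standard_normal((4, 4))   # generic dense top block for simplicity
A = D2 + L2 @ (D1 + L1 @ D0 @ R1) @ R2

def step(F, Lm, Rm):
    """One fine-to-coarse step: returns tD, tL, tR, and E for the next level."""
    Finv = np.linalg.inv(F)
    E = np.linalg.inv(Rm @ Finv @ Lm)
    tR = E @ Rm @ Finv
    tD = Finv @ (np.eye(F.shape[0]) - Lm @ tR)
    tL = Finv @ Lm @ E
    return tD, tL, tR, E

# Level 2 (leaves): F2 = D2
tD2, tL2, tR2, E2 = step(D2, L2, R2)
# Level 1: F1 = D1 + E2
tD1, tL1, tR1, E1 = step(D1 + E2, L1, R1)
# Level 0 (top): invert F0 = D0 + E1 directly
G0 = np.linalg.inv(D0 + E1)

# Telescoping form of the inverse
Ainv = tD2 + tL2 @ (tD1 + tL1 @ G0 @ tR1) @ tR2

err = np.linalg.norm(Ainv @ A - np.eye(24))
```

In the actual algorithm all of these operations are performed block by block, so each `step` costs only local work per box; the dense version here merely checks the algebra.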
\subsection{Complexity of algorithms on binary trees}
\label{sec:framework}
All our algorithms compute and store matrix blocks associated with boxes
organized into a binary tree $\cA[T]$. To produce complexity estimates
for a given accuracy $\varepsilon$, we introduce bounds on the work $W_{\ell}(n_{\ell},\varepsilon)$ and storage $M_{\ell}(n_{\ell},\varepsilon)$ per box at each level $\ell$ of the tree, where $n_{\ell}=2^{-\ell}N$ is the maximum number of points in a box at that level (we assume that the work per box on a given level has small variance).
\begin{lemma}
\label{lemma:complexity}
Let $n_{\ell}=2^{-\ell}N$, let $d = \log_2(N/n_{max})$, and let the exponents
$p,q \ge 0$. If $W_{\ell}(n_{\ell},\varepsilon)$ has the form $C_{\varepsilon}\, n_{\ell}^p \log_2^q(n_{\ell})$,
then the total work has complexity
\begin{equation*}
\textbf{NTI: }\sum_{\ell = 0}^{d} {2^{\ell} W_{\ell}(n_{\ell},\varepsilon)} =
\left\{ \begin{array}{lr}
O(N) & : 0 \le p<1 \\
O(N \log_2^{q+1}N) & : p = 1 \\
O(N^p \log_2^{q}N) & : p > 1
\end{array} \right.
\end{equation*}
\begin{equation*}
\textbf{TI: } \sum_{\ell = 0}^{d} {W_{\ell}(n_{\ell},\varepsilon)} = O(N^p \log_2^{q}N)
\end{equation*}
\end{lemma}
For NTI (non-translation-invariant) algorithms, the estimate on each level $\ell$ is obtained by adding the per-box bound over the $2^{\ell}$ boxes. The polynomial growth of $W_{\ell}$ or $M_{\ell}$ is compensated by the fact that the number of boxes decreases exponentially going up the tree.
If the rate of growth of $W_{\ell}$ is slower than linear ($p<1$),
the overall complexity is linear. If $W_{\ell}$ grows linearly, we
accumulate a $\log N$ factor going up the tree.
If the growth of $W_{\ell}$ is superlinear ($p>1$), the work
performed on the top boxes dominates, and we obtain the same
complexity as in $W_{\ell}$ for the overall algorithm.
In the TI (translation-invariant) case, work and storage on the top boxes dominate the
calculation, since only one set of matrices needs to be computed and
stored per level. Hence, the interpretation is simpler: the
single-box bound at the top levels reflects the overall complexity.
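The regimes of Lemma~\ref{lemma:complexity} can be checked numerically. The Python sketch below sums the per-level work for the NTI case and confirms that exponents $(p,q)=(1/2,2)$ give linear scaling (doubling $N$ roughly doubles the work), while $(p,q)=(3/2,0)$ gives $O(N^{3/2})$ scaling (doubling $N$ multiplies the work by $\approx 2^{3/2}$); the leaf size $n_{max}=64$ is an arbitrary choice of ours:

```python
import numpy as np

def total_work(N, p, q, n_max=64):
    """NTI total: sum of 2^l * W_l over levels l = 0..d,
    with W_l = n_l^p * log2(n_l)^q and n_l = N / 2^l."""
    d = int(round(np.log2(N / n_max)))
    levels = np.arange(d + 1)
    n_l = N / 2.0 ** levels
    return np.sum(2.0 ** levels * n_l ** p * np.log2(n_l) ** q)

# (p, q) = (1/2, 2): doubling N should roughly double the total work -> O(N)
r_half = total_work(2 ** 21, 0.5, 2) / total_work(2 ** 20, 0.5, 2)

# (p, q) = (3/2, 0): doubling N should multiply work by ~2^(3/2) -> O(N^(3/2))
r_dense = total_work(2 ** 21, 1.5, 0) / total_work(2 ** 20, 1.5, 0)
```

The two ratios approach $2$ and $2^{3/2} \approx 2.83$ respectively as $N$ grows, matching the $p<1$ and $p>1$ cases of the lemma.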
\subsection{Assumptions on matrix structure}
\label{sec:assumptions}
Complexity estimates for the work per box require assumptions on
matrix structure. Let us restate some assumptions already
made in Sections \ref{sec:background} and \ref{sec:inversion}:
\begin{enumerate}
\item \label{asm:skeleton} \textbf{Skeleton size scaling:}
The maximum size of skeleton sets for boxes at level $\ell$
grows as $O(n_{\ell}^{1/2})$. This determines the size of
blocks within the HSS structure, and can be proved for
non-oscillatory PDE kernels in 2D.
\item \label{asm:eqdensity} \textbf{Localization:} Equivalent
densities may be used to represent long range interactions to
within any specified accuracy, cf.~Section \ref{sec:hss}.
\item \label{asm:skeleton-struct} \textbf{Skeleton structure:}
The skeleton set for any box may be chosen from within a thin
layer of points close to the boundary of the box, cf.~Section \ref{sec:skeleton_construction}.
\item \label{asm:compressed-blocks} \textbf{Compressed block structure:}
Experimental evidence and physical intuition from scattering problems
allows us to assume that the blocks of $F$ and $E$ discussed in
Section~\ref{sec:inversion} have one-dimensional HSS or low-rank
structure, with logarithmic rank growth ($O(\log_2 n_{\ell})$).
\end{enumerate}
\noindent
These assumptions arise naturally in the context of
solving integral equations with non-oscillatory PDE kernels in 2D. All
assumptions excluding the last one are relevant for both dense and
compressed block algorithms. The last one is needed only for the
compressed-block algorithms.
We note that assumption \ref{asm:skeleton-struct} implies \ref{asm:skeleton}: being able to pick skeletons from a thin boundary layer determines how their sizes scale. We mention them separately to distinguish their roles in the design and complexity analysis of our algorithms: while \ref{asm:skeleton} mainly impacts block sizes in the outer HSS structure, \ref{asm:skeleton-struct} is much more specific and refers to a priori knowledge of the skeleton set structure, which we exploit extensively in the compressed-block algorithm.
\subsection{Estimates}
\label{sec:estimates}
We analyze work and storage for the algorithms of Section~\ref{sec:inversion}. Since they all use the same
set of fast operations (see Section \ref{sec:fast_arithmetic}), we can make unifying observations:
\subsubsection*{Work}
\begin{enumerate}
\item Assumptions \ref{asm:skeleton} and \ref{asm:skeleton-struct}
imply that our fast subroutines perform operations with HSS blocks of
size $O(n_{\ell}^{1/2})$.
Further, Assumption~\ref{asm:compressed-blocks} states that these
behave like HSS matrices representing boundary integral operators,
for which all one-dimensional HSS operations are known to be linear
in matrix size. Thus, all \textbf{HSS1D} operations are $O(n_{\ell}^{1/2})$, including matrix application.
\item As indicated in Remark~\ref{rem:matrix-matrix}, for an HSS
matrix of size $k \times k$, products of HSS and low-rank matrices
require $O(kq)$ work. Assumption~\ref{asm:compressed-blocks} implies
all such products are $O(n_{\ell}^{1/2} \log_2(n_{\ell}))$. Matrix-vector
multiplication with low-rank matrices has the same complexity.
\item Finally, both \textbf{LR\_to\_HSS1D} and \textbf{Rand\_ID}
involve interpolative decompositions of a matrix of size
$O(n_{\ell}^{1/2}) \times O(\log_2(n_{\ell}))$, and therefore have complexity $O(n_{\ell}^{1/2} \log^2_2(n_{\ell}))$. Products between low-rank matrices have the same complexity.
\end{enumerate}
\subsubsection*{Storage}
\begin{enumerate}
\item Again, since all HSS blocks behave as operators acting on one-dimensional
box boundaries, storage is linear with respect to the number of nodes along the
boundary of the box: $O(n_{\ell}^{1/2})$.
\item A low-rank matrix of size $m \times n$ and rank $q$ occupies
$O((m+n)q)$ space in storage. By Assumption~\ref{asm:compressed-blocks}, storage of low rank blocks ($\ttA[L],\ttA[R]$ and off-diagonal blocks of F) is $O(n_{\ell}^{1/2} \log_2(n_{\ell}))$.
\end{enumerate}
Algorithms~\ref{alg:inter-lowrank}, \ref{alg:build-inv}, and \ref{alg:builde}
require only the operations listed above. We observe that all algorithms contain
at least one $O(n_{\ell}^{1/2} \log^2_2(n_{\ell}))$ operation. In terms of storage,
\textbf{INTER\_LOWRANK} and \textbf{BUILD\_Finv} store both HSS and low-rank blocks
($O(n_{\ell}^{1/2} \log_2(n_{\ell}))$), and \textbf{BUILD\_E} one HSS block ($O(n_{\ell}^{1/2})$).
Hence, the \emph{compressed-block} interpolation operator build and inverse compression
algorithms perform $W^{CB}_{\ell} =
O(n_{\ell}^{1/2} \log^2_2(n_{\ell}))$ work per box and require $M^{CB}_{\ell} = O(n_{\ell}^{1/2} \log_2(n_{\ell}))$ storage for each set of matrices computed at level $\ell$. By contrast, their \emph{dense-block} counterparts have $W^{DB}_{\ell} = O(n_{\ell}^{3/2})$ and $M_{\ell}^{DB} = O(n_{\ell})$.
We note that a more detailed complexity analysis may be performed to
obtain constants for each subroutine, given the necessary experimental
data about our kernel for a given accuracy $\varepsilon$. The specific
dependence of these constants on accuracy is briefly discussed and
tested in Section~\ref{sec:vary-accuracy}.
We summarize the complexity estimates in the following proposition.
\begin{proposition}
\label{prop:complexity}
{\textbf{Dense-Block Algorithms.}} Let $\cA[A]$ be an $N \times N$ system matrix such that Assumptions 1--3 hold.
Then the dense-block tree build and inverse compression algorithms perform $O(N^{3/2})$ work. For NTI kernels, storage requirements and matrix apply are both $O(N \log N)$. For TI kernels, storage is $O(N)$, and matrix apply is $O(N \log N)$.
\textbf{NTI Compressed-Block Algorithms.} Let $\cA[A]$ be an $N \times N$ non-translation-invariant system matrix such
that Assumptions 1--4 hold. Then compressed-block tree build, inverse compression, and HSS apply all perform $O(N)$ work
and require $O(N)$ storage.
\textbf{TI Compressed-Block Algorithms.} Let $\cA[A]$ be an $N \times N$
translation-invariant system matrix such that Assumptions 1--4 hold.
Then compressed-block tree build, inverse compression, and HSS apply
all perform $O(N)$ work and require $O(N)$ storage. In fact, inverse compression work and storage are sublinear:
$O(N^{1/2} \log^2_2 N)$ and $O(N^{1/2} \log_2 N)$, respectively.
\end{proposition}
The limitations of dense-block algorithms now become clear. With notation as in Lemma \ref{lemma:complexity},
dense-block algebra corresponds to $(p,q) = (3/2,0)$ for work and $(p,q) = (1,0)$ for storage, which precludes
overall linear complexity. For the compressed block algorithms, on the other hand, we have
$(p,q)=(1/2,2)$ for work and $(p,q)=(1/2,1)$ for storage, which does yield linear complexity.
\begin{remark}
As we have previously observed, the compressed-block algorithm may be generalized to system matrices with other rank growth behavior. More specifically, a sufficient condition for maintaining optimal complexity is that the ranks in the outer HSS structure grow as $O(n_{\ell}^p)$ for $p<1$, and that the most expensive operations, such as the randomized interpolative decomposition, remain sublinear. This would require all low-rank blocks to be of rank $q_{\ell} \sim n_{\ell}^{\min\{1/3,\,(p-1)/2p\}}$.
\end{remark}
\begin{remark}
In all our practical implementations of the compressed-block algorithms, we perform dense computations for blocks up to a fixed threshold skeleton size $k_{\rm cut}$, after which we switch to the fast routines. This threshold may then be tuned as a parameter to further speed up these algorithms, and a straightforward computation shows that it does not alter their computational complexity.
\end{remark}
\subsection{Previous Work}
The key observation enabling the construction of $O(N)$ algorithms for
solving linear systems arising from the discretization of integral
equations is that the off-diagonal blocks of the coefficient matrix
can be very well approximated by matrices of low numerical rank. In
this section, we briefly describe some key results.
\subsubsection*{Optimal complexity iterative solvers for integral equations}
By combining the observation that off-diagonal blocks of the matrix
have low rank with a hierarchical partitioning of the physical domain
into a tree-structure, algorithms of $O(N)$ or $O(N\log N)$ complexity
for matrix-vector multiplication were developed in the 1980's.
Perhaps the most prominent is the Fast Multipole Method~\cite{greengard1987fast,rokhlin1985,rokhlin1997},
but the Panel Clustering~\cite{PanelClustering} and Barnes-Hut~\cite{BarnesHut}
methods are also well known. When a fast algorithm for the matrix-vector
multiplication such as the FMM is coupled with an iterative method such as, e.g.,
GMRES~\cite{GMRES} or Bi-CGSTAB~\cite{BiCGSTAB}, the result is a solver
for the linear system arising upon discretization of (\ref{eq:basic-integral})
with overall complexity $O(M\,N)$, where $M$ is the number of steps
required by the iterative solver. In many situations, $M$ is independent
of $N$ and convergence can be very rapid.
The first FMMs constructed were custom designed for specific elliptic
equations (e.g.~Laplace, Helmholtz, Stokes), but it was later realized
that \textit{kernel-independent methods} that work for broad classes of
problems could be developed~\cite{kifmm04ying,gimbutas2002}.
They directly inspired the work described in this paper, and
the new direct solvers can also be said to be ``kernel-independent'' in the
sense that the same algorithm, and the same code, can be applied to several
different types of physical problems such as electro-statics, Stokes
flows, and low frequency scattering.
While iterative solvers accelerated by fast methods for matrix-vector
multiply can be very effective in many contexts, their performance
is held hostage to the convergence rate of the iteration. If the equation is not well-conditioned, the
complexity of an iterative solve may increase. Pre-conditioners can sometimes be constructed to accelerate convergence,
but these tend to be quite problem-specific and do not readily lend themselves
to the construction of general purpose codes.
Examples of ill-conditioned problems that can be challenging for iterative methods include
Fredholm equations of the first kind, elasticity problems on thin domains,
and scattering problems near resonances.
\subsubsection*{Direct solvers for integral equations}
In the last ten years, a number of efficient \textit{direct} solvers for
linear systems associated with integral equations have been constructed.
These solvers entirely side-step the challenges related to convergence
speed of iterative solvers. They can also lead to dramatic improvements
in speed, in particular in situations where several systems with
the same coefficient matrix but different right-hand sides need to be solved.
The work presented here draws heavily on \cite{MR2005} (based
on \cite{starr_rokhlin}), which describes a direct solver that was
originally developed for boundary integral equations defined on curves
in the plane, and has optimal $O(N)$ complexity for this case. The
observation that this algorithm
can also be applied to volume integral equations in the plane, and to
boundary integral equations in 3D was made in \cite{greengard2009fast}
and later elaborated in \cite{G2011thesis}.
For these cases, the direct solver requires $O(N^{3/2})$ flops to build
an approximation to the inverse of the matrix, and $O(N\log N )$ flops for the
``solve stage'' once the inverse has been constructed.
Similar work was done in \cite{ho2012fast}, where it is also demonstrated that
the direct solver can, from a practical point of view, be implemented using
standard direct solvers for large \textit{sparse matrices}. This improves
stability, and greatly simplifies the practical implementation due to the
availability of standard packages such as UMFPACK.
The direct solver of \cite{MR2005} relies on the fact that the matrices
arising from the discretization of integral equations can be efficiently
represented in a data-sparse format often referred to as ``Hierarchically
Semi-Separable (HSS)'' matrices. This matrix format was also explored in
\cite{Gu06sparse,Gu06ULV}, with a more recent efficient version presented in
\cite{xia2010}. This work computes ULV and Cholesky factorizations of HSS
matrices; if these techniques were applied to volume integral equations in
2D, the complexity would be $O(N^{3/2})$, in complete agreement with
\cite{greengard2009fast} and \cite{G2011thesis}.
The paper \cite{xia2012complexity} presents a general complexity study of
HSS algorithms, under different rank growth patterns; it presents an
optimal-complexity HSS recompression method which we adapt to our
setting as a part of the overall algorithm.
An important class of related algorithms is $\cA[H]$ and $\cA[H]^2$-matrix
methods of Hackbusch and co-workers (see \cite{borm2003hierarchical,2008_bebendorf_book,2010_borm_book}
for surveys). These techniques are based on variations of the cross approximation method for low-rank compression,
and have been applied both to integral equations and sparse systems derived from PDEs.
The matrix factorization algorithms for two- and three-dimensional problems are formulated
recursively, and a full set of compressed operations for lower-dimensional problems needs to be
available. In \cite{borm2009construction,borm2006matrix}, algorithms for
$\cA[H]^2$ matrix arithmetics are described; the observed behavior for integral equation operators on the cube and on the sphere in Chapter 10 of \cite{2010_borm_book} is $O(N\log^4 N)$ for matrix compression, $O(N\log^3 N)$ for inversion, and $O(N\log^2 N)$ for solve time and memory use.
\subsubsection*{Direct solvers for sparse systems}
Our direct solver is conceptually related to direct solvers
for sparse system matrices such as the classical \textit{nested dissection} and
\textit{multifrontal} methods \cite{george_1973,hoffman_1973,1989_directbook_duff}.
These solvers do not have optimal complexity (they typically require $O(N^{3/2})$
for the factorization stage in 2D, and $O(N^{2})$ in 3D), but are nevertheless
popular (especially in 2D) due in part to their robustness, and in part to the
unrivaled speed that can be attained for problems involving multiple right hand
sides. Very recently, it has been demonstrated that by exploiting structured matrix
algebra (such as, e.g., $\mathcal{H}$-matrices, or HSS matrices), to manipulate the
dense matrices that arise due to fill-in, direct solvers of linear or close to linear
complexity can be constructed
\cite{2009_xia_superfast,2011_ying_nested_dissection_2D,2007_leborne_HLU,2009_martinsson_FEM,G2011thesis}.
The direct solver described in this paper is conceptually similar to these algorithms
in that they all rely on hierarchical domain decompositions, and efficient representations
of operators that live on the interfaces between sub-domains.
\subsection{Overview of new results}
We present a direct solver that achieves optimal $O(N)$ complexity for \emph{two-dimensional} systems derived from integral equations with non-oscillatory kernels, or with oscillatory kernels in the low-frequency regime. Like other HSS or $\cA[H]$-matrix methods, we rely only weakly on the fact that the system is derived from an integral equation: if the kernel is of a different nature, the scalability of the solver may deteriorate, but it can still perform accurate calculations.
The main features of our solver include:
\begin{itemize}
\item Observed $O(N)$ complexity both in time and storage for all stages of the computation
(in particular, for both the ``build'' and the ``solve'' stages).
\item The algorithm supports high accuracy (up to $10^{-10}$--$10^{-12}$ is practical), while maintaining reasonable efficiency in both time and memory.
\item The algorithm can take direct advantage of translation invariance and symmetry of underlying kernels, achieving considerable speedup and reduction of memory cost.
\end{itemize}
The main aspects of the algorithm that allow us to achieve high performance are:
\begin{itemize}
\item Two levels of hierarchical structures are used. The matrix $A$ as a whole is represented in the HSS format,
and certain blocks within the HSS structure are themselves represented in the HSS format.
\item Direct construction of the inverse: unlike many previous algorithms, we directly build a compressed representation of the inverse, rather than compressing the matrix itself and then inverting.
\item Our direct solver needs only a subset of a full set of HSS matrix arithmetics;
in particular, the relatively expensive matrix-matrix multiplication is never used.
\end{itemize}
We achieve significant gains in speed and memory efficiency with
respect to the existing $O(N^{3/2})$ approach. For non-oscillatory
kernels, our algorithm outperforms the $O(N^{3/2})$ algorithm around
$N \sim 10^5$, and sizes of up to $N \sim 10^7$ are practical on a desktop,
gaining one order of magnitude in inverse compression time and
storage. For example, for non-translation-invariant kernels, a
problem of size $N = 3 \times 10^6$ at target accuracy $\varepsilon = 10^{-10}$
takes 1 hour and $\sim$ 50GB to invert. Each solve takes $10$ seconds.
For translation-invariant kernels such as Laplace, a problem of size
$N = 1 \times 10^7$ and target accuracy $\varepsilon = 10^{-10}$ takes half
an hour and $5$ GB to invert, with $20$-second solves.
Reducing target accuracy to $\varepsilon = 10^{-5}$, inversion costs for the
latter problem go down to 5 minutes and $1$GB of storage, and each solve takes $10$ seconds.
Our accelerated approach to building the HSS binary tree also yields a fast $O(N)$ matrix compression algorithm. As is noted in \cite{ho2012fast}, since matrix-vector applies are orders of magnitude faster than one round of FMM, this algorithm is preferable to an iterative method coupled with FMM for problems that require more than a few iterations. To give an example, for $N=10^7$ and $\varepsilon = 10^{-10}$, matrix compression takes $5$ minutes and $1$GB of storage, and each matrix apply takes less than $10$ seconds.
Finally, we are able to apply our method with some minor modifications to oscillatory kernels in low frequency mode, and apply this to the solution of the corresponding 2D volume scattering problems. Although the costs are considerably higher, we still observe optimal scaling and similar performance gains.
\subsection{Efficient skeleton construction in 2D}
\label{sec:skeleton_construction}
\subsubsection*{Skeleton size scaling}
Let us first consider, for a non-oscillatory kernel (e.g.\ Laplace), the rank of the interaction between two
neighboring boxes in dimensions $D = 1,2$; this example captures the essential behavior
and size of skeleton sets in different dimensions (Figure~\ref{fig:interaction_ranks}).
Let $B_i$ be a box at level $\ell$ of the tree in $D$ dimensions, with
$m_i \sim n_{\ell} = N/2^{\ell}$ points.
We estimate the $\varepsilon$-rank $k_i$ of
the interaction of $B_i$ with a box of the same size adjacent to
it. If sources and targets are well-separated
(the distance between the source and target sets is at least the source set's
diameter), the rank of their interaction matrix can be bounded by a
constant $p$ for a given $\varepsilon$. We perform a recursive
subdivision of our box $B_i$ into well-separated sets until they
contain $p$ points or fewer. Then
\begin{equation*}
k_{i} \sim p \sum_{s=0}^{\log_2(n_{\ell}/p)/D} {2^{(D-1)s}}
\end{equation*}
\begin{figure}[t]
\begin{center}
\includegraphics[scale = 0.5]{interaction_ranks}
\caption{ \textit{Interaction ranks in 1D and 2D. Source box $B_i$ is recursively subdivided into well-separated sets, whose interaction with $B_j$ is constant rank. This provides an upper bound for the overall interaction rank.} }
\end{center}
\label{fig:interaction_ranks}
\end{figure}
For $D=1$, we have $\log_2(n_{\ell}/p)$ intervals, and so $k_i \sim
O(\log_2(n_{\ell}))$. For $D=2$, we subdivide $\log_{4}(n_{\ell}/p)$ times in the
direction normal to the shared edge of the boxes, obtaining $2^{s}$
well-separated boxes at step $s$. Then $k_i \sim 2^{\log_{2}(n_{\ell}/p)/2} = O(n_{\ell}^{1/2})$.
A simple calculation shows that this yields an estimate of
$O(N)$ for the complexity of the 1D inversion algorithm
(the logarithmic rank growth with box size does not affect the complexity,
because the number of boxes per level shrinks exponentially)
and $O(N^{3/2})$ in the two-dimensional case.
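As a quick illustrative check (not part of the solver), the following Python sketch evaluates the geometric-sum estimate above for $D=1,2$; the constant well-separated rank $p=8$ is an arbitrary stand-in.

```python
# Evaluate the skeleton-rank estimate k_i ~ p * sum_{s=0}^{log2(n/p)/D} 2^{(D-1)s}
# for D = 1 and D = 2; p is the constant rank of one well-separated interaction.
import math

def rank_estimate(n, p, D):
    smax = int(math.log2(n / p) / D)
    return p * sum(2 ** ((D - 1) * s) for s in range(smax + 1))

# D = 1: logarithmic growth in n; D = 2: growth like sqrt(n).
for n in [2**10, 2**14, 2**18]:
    print(n, rank_estimate(n, 8, 1), rank_estimate(n, 8, 2))
```

Doubling $\log_2 n$ roughly doubles the $D=1$ estimate (logarithmic growth), while the $D=2$ estimate grows by the corresponding power of two (square-root growth), matching the complexity discussion above.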
\subsubsection*{Structure of skeleton sets in 2D}
The HSS hierarchical compression procedure
requires us to construct skeleton sets for
boxes at all levels.
The algorithm of Section~\ref{sec:hss} starts with constructing
skeletons for leaf boxes, using interpolative decomposition
of block rows with equivalent density acceleration.
For non-leaf boxes at level $\ell$, index sets are obtained by merging
skeleton sets of children and performing another
interpolative decomposition on the corresponding block row
of $A^\ell$ to obtain the skeleton.
This approach works for one-dimensional problems, but for two-dimensional ones applying
interpolative decomposition at all levels of the hierarchy is
prohibitively expensive even with equivalent density acceleration:
the complexity of each decomposition for a box $B_i$
is proportional to $k_i^3 \sim n_{\ell}^{3/2}$ in the two-dimensional
case. As a consequence, the overall complexity cannot be lower
than $O(N^{3/2})$.
Our algorithm for constructing the skeleton sets at all levels
is based on the following crucial observation:
\emph{It is always possible to find an accurate set of skeleton points
for a box by searching exclusively within a thin layer of points along
the boundary of the box.}
This observation can be justified using
representation results from potential theory, cf.~Section \ref{sec:hss}.
It has also been substantiated by extensive numerical experiments.
Increasing target accuracy adds more points in deeper layers,
but the depth never grows too large:
for the kernels we have considered, the factorization selects one
boundary layer for $\varepsilon \sim 10^{-5}$ and two layers for $\varepsilon \sim 10^{-10}$
(Figure~\ref{fig:laplace-skeletons}).
This observation allows us to make two modifications to
the skeleton selection algorithm. First, we restrict the set of
points from which the skeletons are selected \emph{a priori}
to $m$ boundary layers. Second, rather than selecting
skeleton points for a parent box from the union of skeletons
of child boxes using an expensive interpolative decomposition,
we simply take all points in the boundary layers of the parent box.
More specifically, $I_i = I_{c_{1}(i)}^{sk} \sqcup I_{c_{2}(i)}^{sk}$
is split into $I_i^{sk}$, the skeleton of $B_i$,
consisting of all points of $I_i$ within $m$ layers of the boundary
of $B_i$, and $I_i^{rs} = I_i \setminus I_i^{sk}$ (residual index set),
consisting of points at the interface of two child boxes.
To obtain the interpolation operator
$T_i: I_i^{rs} \rightarrow I_i^{sk}$ provided by the
interpolative decomposition in the slower approach,
we use a proxy set $Z^{\rm proxy} = \{z_{j}\}_{j=1}^{p_{i}}$
(as described in Section \ref{sec:hss}), and compute $T_i$ from the following equation:
\begin{equation*}
K(Z^{\rm proxy},X_i^{sk})T_i = K(Z^{\rm proxy},X_i^{rs}).
\end{equation*}
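The following Python sketch illustrates this proxy construction on a toy geometry (concentric circles instead of boxes; the kernel $\log|x-y|$, the point counts, and the radii are our illustrative choices): solving the proxy system yields an operator $T_i$ that reproduces interactions with well-separated targets through the skeleton alone.

```python
import numpy as np

def K(X, Y):
    # 2D Laplace kernel log|x - y| (a standard non-oscillatory example).
    return np.log(np.linalg.norm(X[:, None] - Y[None, :], axis=2))

def circle(n, r, c=(0.0, 0.0)):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.stack([c[0] + r * np.cos(t), c[1] + r * np.sin(t)], axis=1)

X_sk = circle(60, 1.0)              # skeleton points (boundary layer)
X_rs = circle(30, 0.5)              # residual points in the interior
Z_proxy = circle(60, 1.2)           # proxy charges just outside

# Solve K(Z_proxy, X_sk) T = K(Z_proxy, X_rs) for the interpolation operator.
T, *_ = np.linalg.lstsq(K(Z_proxy, X_sk), K(Z_proxy, X_rs), rcond=None)

# T reproduces interactions of the residual points with far targets.
X_far = circle(50, 2.0, c=(5.0, 0.0))
rel = np.linalg.norm(K(X_far, X_rs) - K(X_far, X_sk) @ T) \
      / np.linalg.norm(K(X_far, X_rs))
print(rel)    # small: matching fields on the proxy controls the far field
```

The far-field accuracy reflects the potential-theory argument in the text: once the fields agree on the enclosing proxy circle, they approximately agree at all well-separated targets.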
\begin{remark}
The set from which the skeleton set is picked has fixed width.
This means that matrices acting on the skeleton set are compressible
(in the HSS sense) in a manner analogous to boundary integral operators
and admit linear complexity matrix algebra. These matrices in essence
act like boundary-to-boundary operators on the box.
\end{remark}
\begin{figure}[t]
\begin{center}
\includegraphics[scale = 0.09]{Skeletons_vs_accuracy}
\caption{ \textit{A) Skeleton sets picked by the interpolative decomposition for different relative interpolation accuracies. B) Log plot of the relative interpolation error as a function of skeleton set size.} }
\label{fig:laplace-skeletons}
\end{center}
\end{figure}
\subsection{Overview of the modified inversion algorithm}
\label{sec:overview-inv}
Construction of a compressed inverse of an HSS matrix described in
Section~\ref{sec:HSS-inverse} proceeds in two stages:
first, the HSS form of the matrix $A$ is constructed, followed
by HSS inversion, which is performed without any additional
compression. In our modified algorithm, significant changes
are made to the second stage, including compression of
all blocks. Not all blocks constructed at the first stage
(compression of $A$) are needed for the inverse construction,
so we reduce the first-stage algorithm to building the
tree and interpolation operators $L_i,R_i$ only.
Examining Algorithm~\ref{alg:HSS-inv},
we observe that all blocks formed for each box $B_i$ involve the
factors $E_i,F_i^{-1}$ and the interpolation matrices $L_i,R_i$.
A first step to save computation is to compute $\{E_i,F_i^{-1}\}$
only. Matrix-vector multiplication for blocks $\tA[D]$, $\tA[L]$ and
$\tA[R]$ needed for the inverse matvec algorithm can be implemented as
a sequence of matvecs for $L_i$, $R_i$, $E_i$ and $F_i^{-1}$.
The main operations in the algorithm are the two dense block inversions in lines $10-11$. Since $F_i$ is of size $m_i = k_{c_{1}(i)}+k_{c_{2}(i)}$ (merge of two skeleton sets) and $E_i$ is of size $k_i$, storage space of these blocks is $O(k_i^2) = O(n_{\ell})$ and inverting them costs $O(k_i^3) = O(n_{\ell}^{3/2})$ floating point operations. This observation shows that it is impossible to obtain linear complexity for HSS inversion if we store and invert $E_i$ and $F_i$ blocks densely, or even to build the interpolation matrices that form $L_i$ and $R_i$.
We first present a reformulation of the HSS inversion algorithm that partitions the computation of $E_i$ and $F_i^{-1}$ into block operations, allowing us to compress blocks as low-rank or one-dimensional HSS forms. As is typically the case, it is essential to avoid explicit construction of the blocks that we want compressed, i.e., they must be constructed in compressed form from the start.
\subsubsection*{Building and inverting $F_i$}
For a non-leaf box $B_i$, $F_i$ is a linear operator defined on $I_i = I_{c_{1}(i)}^{sk} \sqcup I_{c_{2}(i)}^{sk}$, that is, on the merge of skeleton points from its children. Using physical interpretation of Section~\ref{sec:HSS-inverse},
it maps charge distributions to fields on this set, adding contributions from local operators $E_{c_{1}(i)}, E_{c_{2}(i)}$ and sibling interactions.
We expect $F_i$ to have a rank structure similar to that of
$K[I_i,I_i]$.
Aside from needing a
compressed form and an efficient matvec for $F_i^{-1}$ to be used
in the inverse HSS matvec algorithm, we also use $F_i^{-1}$ to
construct $E_i^{-1} = R_i F_i^{-1} L_i$.
Let $\Pi_i$ be the permutation matrix that places skeleton points first.
Matrices
$R_i = \begin{bmatrix} I & T_i^{up} \end{bmatrix} \Pi_i^T$ and $L_i = \Pi_i \begin{bmatrix} I \\
(T_i^{dn})^{T} \end{bmatrix}$ have a block form, with sub-blocks
$T_i^{up}$ and $T_i^{dn}$, which we will show to be low-rank
(Section~\ref{sec:lowrank-interpol}). To construct $E_i$
efficiently, we need an explicit partition of $F_i$ into
blocks matching blocks of $R_i$ and $L_i$, i.e. corresponding
to skeleton index set $I^{sk}_i$ and residual index set $I^{rs}_i$ of $B_i$.
We use the following notation for the blocks of $F_i$:
\begin{equation*} \Pi_i F_i \Pi_i^{T} =
\begin{bmatrix} F_i[I_i^{sk},I_i^{sk}] & F_i[I_i^{sk},I_i^{rs}] \\ F_i[I_i^{rs},I_i^{sk}] & F_i[I_i^{rs},I_i^{rs}] \end{bmatrix}
= \begin{bmatrix} F_i^{sk} & F_i^{s \leftarrow r} \\ F_i^{r \leftarrow s} & F_i^{rs}, \end{bmatrix} \end{equation*}
where we use $s$ to refer to the skeleton set of the parent
and $r$ to the ``residual'' set (the part of the union of the skeleton
set of the children not retained in the parent).
To represent $F_i^{-1}$ we use these sub-blocks and perform block-Gaussian elimination. Let $\Phi_i = \Pi_i F_i^{-1} \Pi_i^{T}$. Then:
\begin{equation*} \Phi_i = \begin{bmatrix} \phi_i^{sk} & \phi_i^{s \leftarrow r} \\ \phi_i^{r \leftarrow s} & \phi_i^{rs} \end{bmatrix}. \end{equation*}
We note that, by eliminating residual points first, $\phi_i^{sk}$ is the inverse of the Schur complement matrix $(S_i^{rs})^{-1} = [F_i^{sk} - F_i^{s\leftarrow r}(F_i^{rs})^{-1}F_i^{r \leftarrow s}]^{-1}$, and that only $\{ (F_i^{rs})^{-1},F_i^{r \leftarrow s},F_i^{s \leftarrow r},(S_i^{rs})^{-1} \}$ are needed to compute the blocks for the inverse $\Phi_i$. The routine in our inverse compression algorithm that builds $F_i^{-1}$ (Section~\ref{sec:buildFinv}) constructs these four blocks in compressed form.
\subsubsection*{Building $E_i$} Following the definition of $E_i$ and the
block structure of $F_i^{-1}$ we obtain the expression:
\begin{equation*}
E_i^{-1}
= \begin{bmatrix} I & T_i^{up} \end{bmatrix}
\begin{bmatrix} \phi_i^{sk} & \phi_i^{s \leftarrow r}\\ \phi_i^{r \leftarrow s} & \phi_i^{rs} \end{bmatrix}
\begin{bmatrix} I \\ (T_i^{dn})^{T} \end{bmatrix}
= \phi_i^{sk} + T_i^{up}\phi_i^{r \leftarrow s} + \phi_i^{s \leftarrow r} (T_i^{dn})^{T} + T_i^{up}\phi_i^{rs}(T_i^{dn})^{T}.
\end{equation*}
As $T_i^{up}$ and $T_i^{dn}$ are low-rank, the last three terms in this sum are low-rank, too. Hence, $E_i^{-1}$ can be computed as a low-rank update of $\phi_i^{sk}$, the inverse of $S_i^{rs}$.
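The low-rank-update structure can be verified numerically; in this hypothetical sketch the $\Phi$ blocks are random and $T^{up}$, $T^{dn}$ are random rank-$q$ factors (as the block product dictates, the $(rs,sk)$ block of $\Phi$ pairs with $T^{up}$ and the $(sk,rs)$ block with $(T^{dn})^{T}$).

```python
import numpy as np

rng = np.random.default_rng(2)
ns, nr, q = 10, 6, 3
Phi_sk = rng.standard_normal((ns, ns))    # phi^sk
Phi_sr = rng.standard_normal((ns, nr))    # phi^{s<-r}
Phi_rs = rng.standard_normal((nr, ns))    # phi^{r<-s}
Phi_rr = rng.standard_normal((nr, nr))    # phi^rs
T_up = rng.standard_normal((ns, q)) @ rng.standard_normal((q, nr))  # rank q
T_dn = rng.standard_normal((ns, q)) @ rng.standard_normal((q, nr))  # rank q

Phi = np.block([[Phi_sk, Phi_sr], [Phi_rs, Phi_rr]])
R = np.hstack([np.eye(ns), T_up])         # [I  T^up]
L = np.vstack([np.eye(ns), T_dn.T])       # [I ; (T^dn)^T]

E_inv = R @ Phi @ L
update = T_up @ Phi_rs + Phi_sr @ T_dn.T + T_up @ Phi_rr @ T_dn.T
print(np.allclose(E_inv, Phi_sk + update))      # True
print(np.linalg.matrix_rank(update) <= 3 * q)   # low-rank update of phi^sk
```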
We summarize the modified algorithm below; at this point it is a purely algebraic transformation, and we explain afterwards in greater detail how the
new structure can be used to compress the various matrices.
Next to each block we write in brackets the type of compression used in the $O(N)$ algorithm: \textbf{[LR]} (low rank) or \textbf{[HSS1D]}. The output of the new form consists of:
\begin{enumerate}
\item The blocks of $F_i^{-1}$: $\{
(F_i^{rs})^{-1},F_i^{r \leftarrow s},F_i^{s \leftarrow r},(S_i^{rs})^{-1} \}$
\item The matrix $E_i$
\end{enumerate}
The entries of blocks of $F_i$ are evaluated using the formulas in Algorithm~\ref{alg:HSS-inv}.
\begin{algorithm}[h!]
\begin{center}
\begin{algorithmic}[1]
\FOR{ each box $B_i$ in fine-to-coarse order}
\IF{$B_i$ is a leaf}
\STATE $F_i^{-1} = D_i^{-1} = K[I_i,I_i]^{-1}$
\ELSE
\STATE Compute blocks $F_i^{sk}$, $F_i^{rs}$ from $(E_{c_1(i)},E_{c_2(i)})$ \hfill \hfill
\linebreak In compressing and inverting the matrix $F_i$ for non-leaf boxes, we store sub-blocks of $F_i$ and apply block inversion formulae as explained above:
\STATE $(F_i^{rs})^{-1} = (F[I_i^{rs},I_i^{rs}])^{-1}$ \hfill \textbf{[HSS1D]}
\STATE Compute blocks $F_i^{r \leftarrow s},F_i^{s \leftarrow r}$ \hfill \textbf{[LR]}
\STATE
$ (S_i^{rs})^{-1} = (F_i^{sk} - F_i^{s\leftarrow
r}(F_i^{rs})^{-1}F_i^{r \leftarrow s})^{-1}$ \hfill \textbf{[HSS1D]}
\ENDIF
\STATE
$E_i = \left( (S_i^{rs})^{-1} + T_i^{up}\phi_i^{r \leftarrow s}
+ \phi_i^{s \leftarrow r}(T_i^{dn})^{T} +
T_i^{up}\phi_i^{rs}(T_i^{dn})^{T}\right)^{-1}$
\hfill \textbf{[HSS1D]}
\ENDFOR
\end{algorithmic}
\end{center}
\caption{Modified HSS inversion algorithm}
\label{alg:HSS-inv-mod}
\end{algorithm}
In the remainder of this section we elaborate on the details of the efficient construction of all blocks using HSS and low-rank
operations.
\subsection{Compressed two-dimensional HSS inversion}
\label{sec:2dhss}
We use two compressed formats for various linear operators in the
algorithm: low-rank and one-dimensional HSS (dense-block HSS)
described in Section~\ref{sec:background}.
\subsubsection*{Operator Notation} To distinguish between linear operators
compressed in different ways, we use different fonts:
\begin{itemize}
\item $X$ (in normal font) refers to an abstract linear operator, with no representation specified.
\item $\cA[X]$ refers to a dense-block HSS representation of $X$;
\item $\ttA[X]$ refers to a low-rank representation of $X$.
\end{itemize}
The font used for interpolation operators like $R(:,J)=\begin{bmatrix} I & T \end{bmatrix}$ refers to the representation of $T$. Operations such as matrix-vector multiplies should be understood accordingly: e.g., $\ttA[X] v$ is evaluated using a low-rank factorization of $X$; if the rank is $q$ and the vector size is $n$, the complexity is $O(qn)$. Similarly, $\cA[X] v$ is an $O(n)$ application of a dense-block HSS matrix.
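For concreteness, the $O(qn)$ matvec with a low-rank operator $\ttA[X] = UV^{T}$ amounts to the following (illustrative Python; sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, q = 2000, 12
U = rng.standard_normal((n, q))
V = rng.standard_normal((n, q))
v = rng.standard_normal(n)

# O(qn): apply the factors in turn, never forming the n x n matrix U V^T.
y_fast = U @ (V.T @ v)
y_dense = (U @ V.T) @ v        # O(n^2) dense reference, for checking only
print(np.allclose(y_fast, y_dense))   # True
```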
We say that the algorithms operating on per-box matrices
are fast if all operations involved have
cost and storage proportional to the block size $O(n_{\ell}^{1/2})$, or to that
times a logarithmic factor, $O(n_{\ell}^{1/2}\log^q(n_{\ell}))$. Our algorithm includes three main fast subroutines.
\begin{enumerate}
\item \textbf{INTER\_LOWRANK}: Interpolation matrices $\ttA[T]_i$ are built in low rank form.
\item \textbf{BUILD\_Finv}: Given dense-block HSS matrices $ \{ \cA[E]_{c_1(i)},\cA[E]_{c_2(i)} \}$, it computes $\{(\cA[F]_i^{rs})^{-1},\ttA[F]_i^{s \leftarrow r},\ttA[F]_i^{r \leftarrow s},(\cA[S]_i^{rs})^{-1} \}$ in their respective compressed forms.
\item \textbf{BUILD\_E}: Given
$\{(\cA[F]_i^{rs})^{-1},\ttA[F]_i^{s \leftarrow
r},\ttA[F]_i^{r \leftarrow s},(\cA[S]_i^{rs})^{-1} \}$ and
$\ttA[T]_i^{up},\ttA[T]_i^{dn}$, it computes $\cA[E]_i$ as a dense-block HSS matrix.
\end{enumerate}
\subsubsection{Fast Arithmetic}
\label{sec:fast_arithmetic}
There are a number of operations on low-rank and HSS matrices that we must be able to perform efficiently:
\begin{itemize}
\item \textbf{Dense-block HSS1D compression, inversion and matvec:} These are the algorithms in \cite{GYMR2012}, which we have outlined in Section 2, and we denote the corresponding routines as \textbf{HSS1D\_Compress} and \textbf{HSS1D\_Invert}.
\item \textbf{\emph{Fast addition and manipulation of HSS1D matrices}}:
\begin{itemize}
\item {\textbf{HSS1D\_Sum:}} Given two matrices $\cA[A]$ and $\cA[B]$ in HSS form, return an HSS form for $\cA[C] = \cA[A]+\cA[B]$
\item {\textbf{HSS1D\_Split:}} Given a matrix $\cA[A]$ defined on $I_1 \cup I_2$, it produces the diagonal blocks $\cA[A]_1$ and $\cA[A]_2$ in HSS form.
\item {\textbf{HSS1D\_Merge:}} Given matrices $\cA[A]_1$ and $\cA[A]_2$, it concatenates them to produce the block diagonal HSS matrix $\cA[A]$, and sorts its leaves accordingly.
\end{itemize}
\item \textbf{\emph{Additional HSS compression routines}}:
\begin{itemize}
\item {\textbf{LR\_to\_HSS1D:}} Convert a low-rank
operator to dense-block HSS form.
\item {\textbf{HSS1D\_Recompress:}} Using the algorithm by Xia \cite{xia2012complexity}, we re-compress an HSS1D form to obtain optimal ranks. It is crucial to do so after performing fast arithmetic (e.g.~a sum) on HSS matrices.
\end{itemize}
\end{itemize}
\begin{remark}
\textbf{Matrix-matrix products.} As we have mentioned before, a
feature of our algorithm is that it avoids using the
linear yet expensive matrix-matrix product algorithm for
structured matrices.
We arrange the computations so that all matrix-matrix products are between a dense-block HSS $\cA[H]$ of size $k \times k$ and a low rank matrix $\ttA[K] = UV^{T}$ of rank $q$. Since $\cA[H]U$ requires only $q$ $O(k)$ fast matvecs, these products may be computed in $O(qk)$ work.
\label{rem:matrix-matrix}
\end{remark}
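A minimal sketch of this arrangement (Python; here a plain dense product stands in for the $O(k)$ structured matvec):

```python
import numpy as np

rng = np.random.default_rng(4)
k, q = 500, 8
H = rng.standard_normal((k, k))   # stands in for a dense-block HSS operator
U = rng.standard_normal((k, q))
V = rng.standard_normal((k, q))

def hss_apply(x):
    # Placeholder for an O(k) structured matvec; dense here for simplicity.
    return H @ x

# H (U V^T) computed as q fast matvecs, kept in low-rank form (HU) V^T.
HU = np.column_stack([hss_apply(U[:, j]) for j in range(q)])
print(np.allclose(HU @ V.T, H @ (U @ V.T)))   # True
```

The product stays in low-rank form, so no structured matrix-matrix multiplication is ever needed.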
\begin{remark}
For every call to \textbf{HSS1D\_Sum} in the algorithms below, it
should be assumed that it is followed by a recompression step (a call
to \textbf{HSS1D\_Recompress}). By Theorem 5.3 of \cite{xia2012complexity}, this implies that all dense-block
HSS matrices presented are compact, that is, their blocks have near-optimal size.
\end{remark}
\subsubsection*{Randomized interpolative decomposition \textbf{RAND\_ID}} The randomized sampling techniques in \cite{RandID07,RandID08,halko2011}, which speed up the interpolative decomposition, are critical to obtaining the right complexity for the compression of low-rank operators. For an $m \times m$ dense-block HSS matrix $A$ of rank $q$, the matvec has complexity $O(m)$. Assuming that at level $\ell$ of the binary
tree $\cA[T]$ we have $m=O(n_{\ell}^{1/2})$ and $q=O(\log(n_{\ell}))$, the
complexity of the randomized IDs is:
\begin{equation*} O(mq^2+q^3) = {O(n_{\ell}^{1/2}\log^2(n_{\ell}))} \end{equation*}
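The following Python sketch shows the randomized range-finder underlying such accelerated decompositions \cite{halko2011}; it is not the exact \textbf{RAND\_ID} routine (which returns an interpolative factorization), but it illustrates how $q+p$ matvecs suffice to capture a rank-$q$ operator.

```python
import numpy as np

rng = np.random.default_rng(5)
m, q = 400, 10
# A test matrix of exact rank q.
A = rng.standard_normal((m, q)) @ rng.standard_normal((q, m))

# Randomized range finder: sample the range with a few matvecs,
# orthonormalize, then project. With a structured (HSS) matvec each
# sample costs O(m), giving O(mq^2 + q^3) overall.
p = 5                                    # small oversampling parameter
Y = A @ rng.standard_normal((m, q + p))  # q + p matvecs
Q, _ = np.linalg.qr(Y)
B = Q.T @ A                              # (q+p) x m projected matrix
print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))  # near machine precision
```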
\subsubsection{Interpolation Operators in low-rank form}
\label{sec:lowrank-interpol}
We recall that interpolation operators $T_i$ are built with the binary
tree $\cA[T]$ using an interpolative decomposition, and that they are
inputs to the dense-block HSS inverse algorithms. They encode interactions between the residual points indexed as $I^{rs}_i$ and the exterior $X^{ext}_i$ as a linear combination of interactions with the skeleton points indexed by $I_i^{sk}$. In other words, they are solutions of the linear equation:
\begin{equation*}
K[X^{ext}_i,X_i^{sk}]T_i = K[X^{ext}_i,X_i^{rs}]
\end{equation*}
The acceleration proposed in Section \ref{sec:hss} implies that interaction with outside points may be represented with a proxy $Z^{\rm proxy}_i$ of charges right outside the box boundary. We employ as many layers as are kept for $I_i^{sk}$, and remove points until $|Z^{\rm proxy}_i| = |X^{sk}_i| = k_i$. This yields an equation for $T_i$:
\begin{equation*}
K[Z^{\rm proxy}_i,X_i^{sk}]T_i = K[Z^{\rm proxy}_i,X_i^{rs}]
\end{equation*}
$K[Z^{\rm proxy}_i,X_i^{sk}]$ encodes interactions between two close boundary-layer
curves, their distance being equal to the grid spacing $h$. It is invertible and
ill-conditioned, and, most importantly, it has dense-block HSS structure.
$K[Z^{\rm proxy}_i,X_i^{rs}]$ encodes interactions with the interfacial points.
Except for the closest layers, the interface is well separated from the proxy, and so it is easy to see (via a multipole argument, or numerically as in Figure \ref{fig:inter-LR}) that this matrix has low, logarithmically growing rank:
\begin{figure}[H]
\begin{center}
~\includegraphics[scale = 0.09]{Interpolation_Operator_LR}
\caption{\textit{Skeleton points are in black, residual points in blue and proxy points (diamonds) in green. We apply an ID with $\varepsilon=10^{-10}$, and label subselected interface points also in green.}}
\label{fig:inter-LR}
\end{center}
\end{figure}
As a result, $T_i = K[Z^{\rm proxy}_i,X_i^{sk}]^{-1} K[Z^{\rm proxy}_i,X_i^{rs}]$ is also a low rank operator.
We describe a fast algorithm to compress this operator. We note that while the description above follows
the case of boxes in the plane, these observations should hold in general, as long as most residual points
are well separated from the proxy.
\begin{algorithm}[h]
\noindent\textbf{Input:} Box $B_i$ information, index sets $I^{sk}_i$ and $I^{rs}_i$;
\noindent\textbf{Output:} $\{U_{i},V_{i}\}$ such that $T_i = U_iV_i^{T}$.
\begin{center}
\begin{algorithmic}[1]
\STATE (i) \emph{Compress and Invert HSS 1D operator}
\STATE $\cA[K]^{s \rightarrow p} = $\textbf{HSS1D\_Compress}$(K,Z^{\rm proxy}_i,X_i^{sk},\varepsilon)$
\STATE $(\cA[K]^{s \rightarrow p})^{-1} = $\textbf{HSS1D\_Invert}$(\cA[K]^{s \rightarrow p},\varepsilon)$
\STATE (ii) \emph{Randomized interpolatory decomposition}
\STATE $\cA[K]^{r \rightarrow p} = $\textbf{HSS1D\_Compress}$(K,Z^{\rm proxy}_i,X_i^{rs},\varepsilon)$
\STATE $[T^{r \rightarrow p} , J^{r \rightarrow p}] =$\textbf{RAND\_ID}$(\cA[K]^{r \rightarrow p}, \varepsilon);$
\STATE The ID gives us a low-rank decomposition of $K^{r \rightarrow p}$ (of $\varepsilon$-rank $q$):
\STATE $U_i = (\cA[K]^{s \rightarrow p})^{-1}\ttA[K]^{r \rightarrow p}(:,J^{r \rightarrow p}(1:q));$
\STATE $V_i^{T}(:,J^{r \rightarrow p}) = \begin{bmatrix} I & T^{r \rightarrow p} \end{bmatrix}$
\end{algorithmic}
\end{center}
\caption{Algorithm INTER\_LOWRANK}
\label{alg:inter-lowrank}
\end{algorithm}
\begin{remark}
In general, $Z_i^{\rm proxy}$ will have slightly more points than $X_i^{sk}$, making $\cA[K]^{s \rightarrow p}$ a rectangular matrix. A fast HSS least-squares algorithm such as in \cite{ho2012LS,dewildehierarchical} replaces the inversion and inverse apply in lines 3 and 8 of Algorithm \ref{alg:inter-lowrank}, with no impact on complexity.
\end{remark}
\subsubsection{Compressed $F^{-1}$}
\label{sec:buildFinv}
We define some auxiliary index sets for the merge of two children boxes:
$I_{i,c(i)}^{sk}$ are the skeleton points shared with child $c(i)$ (on
boundary layers) and $I_{i,c(i)}^{rs}$ are the residual children's skeleton points (on the middle interface). We define $I_i^{sk} = I_{i,c_1(i)}^{sk} \cup I_{i,c_2(i)}^{sk}$ and $I_i^{rs} = I_{i,c_1(i)}^{rs} \cup I_{i,c_2(i)}^{rs}$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale = 0.5]{merge_indices}
\caption{\textit{Merge of two children's skeleton indices.}}
\end{center}
\end{figure}
We recall that the matrix $F_i$ is first built and separated into blocks
corresponding to the skeleton set $I_i^{sk}$ and the residual set $I_i^{rs}$.
These two sets can be seen as arranged along one-dimensional
curves (the boundary of the parent box and the interface between siblings).
We order $I_i^{rs}$ along the direction of the interface, and $I_i^{sk}$ cyclically around the box boundary.
Then $\cA[F]_i^{rs}=F(I_i^{rs},I_i^{rs})$ and $\cA[F]_i^{sk} =
F(I_i^{sk},I_i^{sk})$ can be constructed in dense-block HSS form.
The two off-diagonal blocks $\{ \ttA[F]_i^{s\leftarrow r},\ttA[F]_i^{r\leftarrow s} \}$ encode interaction between sets that are close only at few points, and can thus be compressed as low-rank operators.
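This low-rank structure is easy to observe numerically. In the following illustrative Python sketch (unit box, Laplace kernel $\log|x-y|$, point counts arbitrary), the $\varepsilon$-rank of the boundary-to-interface interaction block is far below its full dimension:

```python
import numpy as np

# Hypothetical geometry: parent-box skeleton points on the unit-square
# boundary vs. residual points on the children's interface x = 1/2.
side = np.linspace(0, 1, 25, endpoint=False)
boundary = np.concatenate([
    np.stack([side, np.zeros_like(side)], 1),
    np.stack([np.ones_like(side), side], 1),
    np.stack([1 - side, np.ones_like(side)], 1),
    np.stack([np.zeros_like(side), 1 - side], 1),
])                                          # 100 boundary points
ys = np.linspace(0.15, 0.85, 50)
interface = np.stack([np.full_like(ys, 0.5), ys], 1)

# Laplace kernel log|x - y| between the two sets.
d = np.linalg.norm(boundary[:, None] - interface[None, :], axis=2)
s = np.linalg.svd(np.log(d), compute_uv=False)
eps_rank = int(np.sum(s > 1e-5 * s[0]))
print(eps_rank, len(s))   # eps-rank well below the full dimension
```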
\subsubsection*{\textbf{BUILD\_Finv} routine}
To clarify the structure of the construction of $F^{-1}$, in Algorithm~\ref{alg:build-inv} we show this routine in two ways. On the left, we indicate the original dense computation, and on which line of the reformulated algorithm of Section~\ref{sec:overview-inv} it occurs. On the right, we indicate the set of fast operations that replaces it in the linear-complexity algorithm.
\begin{algorithm}[h!]
\noindent\textbf{Input.}
Children's matrices $\cA[E]_{c_1(i)}$ and $\cA[E]_{c_2(i)}$ in HSS form (each defined on skeletons $I_{c_j(i)}^{sk}$). \\
\noindent\textbf{Output.}
\begin{enumerate}
\item $(\cA[F]_i^{rs})^{-1}$ in dense-block HSS form on the residual set $I^{rs}_i$.
\item $\{\ttA[F]_i^{s \leftarrow r},\ttA[F]_i^{r \leftarrow s}\}$ as low-rank operators.
\item Inverse of Schur complement matrix $(\cA[S]_i^{rs})^{-1}$ in
dense-block HSS form on $I^{sk}_i$.
\end{enumerate}
\noindent
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|l|}{(i) Obtain diagonal blocks of E by splitting into boundary and interface}
\\
\hline
$E^{sk}_{c_j(i)} = E_{c_j(i)}[I^{sk}_{i,c_j(i)},I^{sk}_{i,c_j(i)}]$ && $[\cA[E]_{c_j(i)}^{sk},\cA[E]_{c_j(i)}^{rs}] = $ \textbf{HSS1D\_Split}$(\cA[E]_{c_j(i)},I_{i,c_j(i)}^{sk})$ \\
$E^{rs}_{c_j(i)} = E_{c_j(i)}[I^{rs}_{i,c_j(i)},I^{rs}_{i,c_j(i)}]$ && \\
\hline
\multicolumn{3}{|l|}{(ii) Build diagonal blocks of F: merge diagonal blocks of E, compress off-diagonal blocks}
\\
\hline
Line 5: && $\cA[E]_i^{sk,dg} =$ \textbf{HSS1D\_Merge}$(\cA[E]_{c_1(i)}^{sk},\cA[E]_{c_2(i)}^{sk});$ \\
$F^{sk} = \begin{bmatrix} E_{c_1}^{sk} & 0 \\ 0 & E_{c_2}^{sk} \end{bmatrix} + \begin{bmatrix} 0 & K[I_{c_1}^{sk},I_{c_2}^{sk}] \\ K[I_{c_2}^{sk},I_{c_1}^{sk}] & 0 \end{bmatrix}$ &&
$\cA[K]_i^{sk,off} =$ \textbf{HSS1D\_Compress}$(K,I_{i}^{sk});$ \\
&& $\cA[F]_i^{sk}$ = \textbf{HSS1D\_Sum}$(\cA[E]_i^{sk,dg},\cA[K]_i^{sk,off});$ \\
\hline
Line 5: && $\cA[E]_i^{rs,dg} =$ \textbf{HSS1D\_Merge}$(\cA[E]_{c_1(i)}^{rs},\cA[E]_{c_2(i)}^{rs});$ \\
$F^{rs} = \begin{bmatrix} E_{c_1}^{rs} & 0 \\ 0 & E_{c_2}^{rs} \end{bmatrix} + \begin{bmatrix} 0 & K[I_{c_1}^{rs},I_{c_2}^{rs}] \\ K[I_{c_2}^{rs},I_{c_1}^{rs}] & 0 \end{bmatrix}$ &&
$\cA[K]_i^{rs,off} =$ \textbf{HSS1D\_Compress}$(K,I_{i}^{rs});$ \\
&& $\cA[F]_i^{rs}$ = \textbf{HSS1D\_Sum}$(\cA[E]_i^{rs,dg},\cA[K]_i^{rs,off});$ \\
\hline
\multicolumn{3}{|l|}{(iii) Inverse of $F^{rs}_i$:}
\\
\hline
Line 6: && \\
$(F^{rs}_i)^{-1} = F[I^{rs}_i,I^{rs}_i]^{-1}$ && $(\cA[F]_i^{rs})^{-1}$ = \textbf{HSS1D\_Invert}$(\cA[F]_i^{rs});$ \\
\hline
\multicolumn{3}{|l|}{(iv) Low rank decompositions for $\ttA[F]_i^{s \leftarrow r}$ and $\ttA[F]_i^{r \leftarrow s}$ using Randomized IDs:}
\\
\hline
Line 7: && \\
$\ttA[F]_i^{s \leftarrow r} = F[I^{sk}_i,I^{rs}_i]$ && $[T_i^{r},J_i^{r}] =$\textbf{RAND\_ID}$(F_i^{s \leftarrow r} , \varepsilon);$ \\
$\ttA[F]_i^{r \leftarrow s} = F[I^{rs}_i,I^{sk}_i]$ && $[T_i^{s},J_i^{s}] =$\textbf{RAND\_ID}$(F_i^{r \leftarrow s} , \varepsilon);$ \\
\hline
\multicolumn{3}{|l|}{(v) Schur complement as a low rank perturbation of $\cA[F]_{i}^{sk}$:}
\\
\hline
Line 8: && $\cA[P]_i^{sk}$ = \textbf{LR\_to\_HSS1D}$( - \ttA[F]_i^{s \leftarrow r}(\cA[F]_i^{rs})^{-1}\ttA[F]_i^{r \leftarrow s});$ \\
$(S^{rs})^{-1} = [F[I_i^{sk},I_i^{sk}] - F^{s\leftarrow r}(F^{rs})^{-1}F^{r \leftarrow s}]^{-1}$ &&
$\cA[S]_i^{rs} =$ \textbf{HSS1D\_Sum}$(\cA[F]_{i}^{sk},\cA[P]_{i}^{sk});$ \\
&& $(\cA[S]_i^{rs})^{-1} =$ \textbf{HSS1D\_Invert}$(\cA[S]_i^{rs});$ \\
\hline
\end{tabular}
\caption{BUILD\_Finv}
\label{alg:build-inv}
\end{algorithm}
\subsubsection*{Applying $F^{-1}$ to a vector} Once we have the four
compressed blocks $\{ (\cA[F]_i^{rs})^{-1}, \ttA[F]_i^{s \leftarrow
r},\ttA[F]_i^{r \leftarrow s}, (\cA[S]_i^{rs})^{-1} \}$, the
routine \textbf{APPLY\_Finv} can be used to compute the fast product
$\sigma=F_i^{-1}u$. This is done in a straightforward way, splitting
$u$ into $[ u[I_i^{sk}] , u[I_i^{rs}]]$ and using fast matvecs with the
blocks of $F_i^{-1}$.
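A minimal Python sketch of this blockwise application (checked against a dense inverse on a random well-conditioned stand-in for $F_i$; block sizes are arbitrary):

```python
import numpy as np

def apply_Finv(Frr_inv, F_sr, F_rs, S_inv, u_s, u_r):
    """Apply F^{-1} blockwise, using only matvecs with the four stored
    blocks { (F^rs)^{-1}, F^{s<-r}, F^{r<-s}, (S^rs)^{-1} }."""
    sigma_s = S_inv @ (u_s - F_sr @ (Frr_inv @ u_r))
    sigma_r = Frr_inv @ (u_r - F_rs @ sigma_s)
    return sigma_s, sigma_r

rng = np.random.default_rng(7)
ns, nr = 12, 8
F = rng.standard_normal((ns + nr, ns + nr)) + (ns + nr) * np.eye(ns + nr)
Frr_inv = np.linalg.inv(F[ns:, ns:])
S_inv = np.linalg.inv(F[:ns, :ns] - F[:ns, ns:] @ Frr_inv @ F[ns:, :ns])
u = rng.standard_normal(ns + nr)
sig_s, sig_r = apply_Finv(Frr_inv, F[:ns, ns:], F[ns:, :ns],
                          S_inv, u[:ns], u[ns:])
print(np.allclose(np.concatenate([sig_s, sig_r]), np.linalg.solve(F, u)))  # True
```

In the solver the two inverses above are never formed densely; each matvec is replaced by the corresponding HSS1D or low-rank fast apply.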
\subsubsection{Compressed $\cA[E]$}
\label{sec:buildE}
The last piece that is required is a routine that constructs $\cA[E]$ as a dense-block HSS matrix from $F^{-1}$ in compressed form. For a box $B_i$, $E_i$ is a linear operator defined on the skeleton set $I_i^{sk}$. Its inverse is obtained by applying $F_i^{-1}$ to this set using interpolation operators:
\begin{equation*} E_i^{-1} = L_i F_i^{-1} R_i \end{equation*}
Hence, the physical intuition again is that $E_i$ has a rank structure similar to that of $K[I_i^{sk},I_i^{sk}]$. In \cite{bremer2011fast,MR07scat,chen2002} its inverse is called a \emph{reduced scattering matrix}. From a purely algebraic point of view, we observe that $\cA[E]$ is a low-rank perturbation of $\cA[S]^{rs}$. Hence, if $\cA[S]^{rs}$ has
dense-block HSS structure, so does $\cA[E]$.
\subsubsection*{BUILD\_E routine}
Recalling the definition of $E_i$ in Algorithm~\ref{alg:HSS-inv-mod},
\begin{equation*}
E_i^{-1} = R_i F_i^{-1} L_i = \begin{bmatrix} I & \ttA[T]_i^{up} \end{bmatrix}
\begin{bmatrix} \phi_i^{sk} & \phi_i^{s \leftarrow r}\\ \phi_i^{r \leftarrow s} & \phi_i^{rs} \end{bmatrix}
\begin{bmatrix} I \\ (\ttA[T]_i^{dn})^{T} \end{bmatrix}
= \phi_i^{sk} + \ttA[T]_i^{up}\phi_i^{r \leftarrow s} + \phi_i^{s \leftarrow r}(\ttA[T]_i^{dn})^{T} + \ttA[T]_i^{up}\phi_i^{rs}(\ttA[T]_i^{dn})^{T},
\end{equation*}
we observe that the last three terms are low-rank. Using the Schur
complement formulae we obtain explicit factorizations for each one,
recompress as a single low-rank matrix and then convert it to the
dense-block HSS form using \textbf{LR\_to\_HSS1D}.
\begin{algorithm}[h!]
\noindent\textbf{Input.}
$F^{-1}$ as blocks:
$(\cA[F]_i^{rs})^{-1},\ttA[F]_i^{s \leftarrow
r},\ttA[F]_i^{r \leftarrow s}$,$(\cA[S]_i^{rs})^{-1}$;
$L_i$ and $R_i$ in low-rank form:
$\ttA[T]_i^{up}$,$J_i^{up}$,$\ttA[T]_i^{dn}$, and $J_i^{dn}$,
$\ttA[T]_i^{up} = U_i^{up}V_i^{up}$, $(\ttA[T]_i^{dn})^{T} = U_i^{dn}V_i^{dn}$.
\noindent\textbf{Output.} $\cA[E]_i$ in HSS form (on the curve $I_i^{sk}$)
\begin{center}
\begin{algorithmic}[1]
\STATE $V_{i,1}^{E} = V_i^{up}\phi^{r \leftarrow s}; \ U_{i,1}^{E} = U_i^{up}$
\STATE $U_{i,2}^{E} = \phi^{s \leftarrow r}U_i^{dn}; \ V_{i,2}^{E} = V_i^{dn}$
\STATE $U_{i,3}^{E} = U_i^{up}V_i^{up}\phi^{rs}U_i^{dn}; \ V_{i,3}^{E} = V_i^{dn}$
\STATE Recompress: $U_i^E(V_i^E)^{T} = \sum_{p=1}^{3} {U_{i,p}^{E}(V_{i,p}^{E})^{T}}$
\STATE $\cA[M]_{i} = \textbf{LR\_to\_HSS1D}(U_{i}^{E},V_{i}^{E})$
\STATE $\cA[E]_i^{-1} = \textbf{HSS1D\_Sum}((\cA[S]_i^{rs})^{-1} , \cA[M]_{i})$
\STATE $\cA[E]_i = \textbf{HSS1D\_Invert}(\cA[E]_i^{-1})$
\end{algorithmic}
\end{center}
\caption{BUILD\_E}
\label{alg:builde}
\end{algorithm}
The low-rank matrix products (of rank $q_i$) in
Algorithm~\ref{alg:builde} can be performed in $O(m_i q_i)$ or $O(m_i q_i^2)$ operations, since every factor involved
has one small dimension ($O(k_i)$ or $O(m_i-k_i)$, where $|I_i| = m_i$ and $|I_i^{sk}| = k_i$).
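The recompression in step 4 of Algorithm~\ref{alg:builde} can be sketched as follows. This Python sketch is our own illustration (the function name and tolerance handling are assumptions, not the paper's code): the stacked factors are orthogonalized by QR and the small core is truncated with an SVD, so the dense $m \times n$ sum is never formed.

```python
import numpy as np

def recompress_lowrank_sum(factors, tol=1e-10):
    """Recompress sum_p U_p V_p^T into a single pair (U, V), with U V^T
    equal to the sum up to `tol`, without forming the dense m-by-n matrix.

    `factors` is a list of (U_p, V_p) pairs, U_p of shape (m, q_p) and
    V_p of shape (n, q_p); the cost is O((m + n) q^2) with q = sum_p q_p.
    """
    U = np.hstack([U_p for U_p, _ in factors])    # (m, q) stacked factor
    V = np.hstack([V_p for _, V_p in factors])    # (n, q) stacked factor
    Qu, Ru = np.linalg.qr(U)                      # orthogonalize columns
    Qv, Rv = np.linalg.qr(V)
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)           # SVD of the small q-by-q core
    r = max(1, int(np.sum(s > tol * s[0])))       # numerical rank at tolerance tol
    return Qu @ (W[:, :r] * s[:r]), Qv @ Zt[:r].T
```

The resulting pair can then be handed to a routine in the role of \textbf{LR\_to\_HSS1D}.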
\subsection{Inverse matrix-vector multiplication}
Once the compressed form of the inverse is obtained, it can be
efficiently applied to right-hand-side vectors
with the algorithm of Section~\ref{sec:HSS-multiply},
using fast algorithms for our compressed block representations.
We present it here for completeness (Algorithm~\ref{alg:inv-matvec}).
\begin{algorithm}[h!]
\noindent\textbf{Input}
\begin{enumerate}
\item The binary tree $\mathcal{T}$ including skeleton set indices and $R_i,L_i$
associated with boxes;
\item HSS-compressed inverse: per-box blocks forming $F_i$ and $\cA[E]_i$;
\item the vector $f$ of field values defined at source points.
\end{enumerate}
\noindent\textbf{Output.} $\sigma = A^{-1}f$, where $A^{-1}$ is the compressed inverse.
\begin{center}
\begin{algorithmic}[1]
\COMMENT {Upward Pass, compute $u_i^{up}$}
\FOR{ each box $B_i$ in fine-to-coarse order}
\IF{$B_i$ is a leaf}
\STATE $u = f(I_i^{up})$
\ELSE
\STATE $u = \begin{bmatrix} u_{c_1(i)}^{up} \\ u_{c_2(i)}^{up} \end{bmatrix} $
\ENDIF
\STATE Multiplication by $\tA[R]_i = E_i R_i F_i^{-1}$:
\STATE $\varphi_i = \textbf{APPLY\_Finv}(u)$
\STATE $u_i^{up} = \cA[E]_i \ttA[R]_i \varphi_i$
\ENDFOR
\STATE \COMMENT {Downward Pass: compute $\phi^{dn}_i$; the result $A^{-1}f$ on leaf $B_i$ is $\phi^{dn}_i$}
\STATE $\phi^{dn}_{top} = 0$ \ , \ $\nu_{top}^{dn} = u$
\FOR{ each $B_i$ in coarse-to-fine order}
\STATE Multiplication by $\tA[D]_i = F_i^{-1}[I - L_i\tA[R]_i]$:
\STATE Define $u$ as above
\STATE $\nu_i^{dn} = u - L_i u_i^{up} $
\STATE Multiplication by $\tA[L]_i = F_i^{-1} L_i E_i$:
$\rho_i^{dn} = \ttA[L]_i \cA[E]_i \phi_i^{dn}$
\STATE Add both contributions multiplying by the common factor $F_i^{-1}$:
\IF {$B_i$ is a leaf}
\STATE $ \sigma(I_i^{dn}) = \textbf{APPLY\_Finv}(\nu_i^{dn} + \rho_i^{dn})$
\ELSE
\STATE $ \begin{bmatrix} \phi^{dn}_{c_1(i)} \\ \phi^{dn}_{c_2(i)} \end{bmatrix} = \textbf{APPLY\_Finv}(\nu_i^{dn} + \rho_i^{dn}) $
\ENDIF
\ENDFOR
\end{algorithmic}
\end{center}
\caption{HSS inverse matrix-vector multiplication.}
\label{alg:inv-matvec}
\end{algorithm}
We note that the application of $\tA[R]_i, \tA[D]_i$ and $\tA[L]_i$ is
replaced by fast matrix-vector multiplication with
$\ttA[L]_i$, $\ttA[R]_i$, $\cA[E]_i$ and the blocks comprising
$F_i^{-1}$. All of these have complexity $O(n_{\ell}^{1/2})$ or $O(n_{\ell}^{1/2}\log(n_{\ell}))$.
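The cost difference behind these fast applies can be illustrated with the low-rank case: a rank-$q$ block stored in factored form is applied by grouping the product, as in this minimal Python sketch (our own illustration, not the paper's code).

```python
import numpy as np

def lowrank_apply(U, V, x):
    """Apply the m-by-n block U V^T to x in O((m + n) q) operations
    by grouping as U (V^T x), never forming the dense m-by-n product."""
    return U @ (V.T @ x)
```

The same principle, applied recursively, gives the fast applies for the HSS 1D blocks.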
\subsubsection*{General formulation}
As we noted in the introduction, a considerable number of physical problems require solving an integral equation (or a system of integral equations) of the form
\begin{equation*}
\cA[A] [\sigma](x) = a(x)\sigma(x) + \int_{\Omega} {b(x) \cA[K] (||x - y||) c(y)\sigma(y) dy} = f(x),
\end{equation*}
where $a$, $b$ and $c$ are given smooth functions, and $\cA[K](r)$ is related to a free-space Green's function. Then, given points $\{ x_i \}_{i=1}^{N} \subset \Omega = [-1,1]^2$ on a regular grid with spacing $h$, we can perform a Nystr\"om discretization, obtaining a linear system $A\sigma = f$, with $A$ an $N \times N$ matrix with entries:
\begin{equation*}
A_{i,j} = a(x_i)\delta_{i,j} + h^2 b(x_i) \cA[K] (||x_i - x_j||) c(x_j).
\end{equation*}
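As an illustration of this discretization, the matrix $A$ can be assembled as in the Python sketch below. This is our own sketch (the routine name and cell-centered grid are assumptions); for simplicity the singular diagonal kernel value is simply zeroed here, whereas in practice a corrected quadrature rule supplies the diagonal.

```python
import numpy as np

def nystrom_matrix(n_side, a, b, c, kernel):
    """Assemble A_ij = a(x_i) delta_ij + h^2 b(x_i) K(|x_i - x_j|) c(x_j)
    on a cell-centered n_side x n_side grid over [-1, 1]^2.  The singular
    diagonal kernel value is zeroed (no quadrature correction applied)."""
    h = 2.0 / n_side
    g = -1.0 + h * (np.arange(n_side) + 0.5)           # cell-centered nodes
    X = np.stack(np.meshgrid(g, g, indexing="ij"), -1).reshape(-1, 2)
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Evaluate the kernel away from the diagonal, zero on the diagonal.
    K = np.where(r > 0, kernel(np.where(r > 0, r, 1.0)), 0.0)
    A = (h ** 2) * b(X)[:, None] * K * c(X)[None, :]
    A[np.diag_indices_from(A)] += a(X)
    return A
```

With the 2D Laplace kernel $\cA[K](r) = \log(r)/2\pi$ and $b \equiv c$, the assembled matrix is symmetric, matching the symmetric case discussed below.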
\subsection{High accuracy: performance and scaling}
Most of our tests are done for a target accuracy of $\varepsilon = 10^{-10}$ (referring to the local truncation error of all routines above); this is the ``stress test'' for our algorithm, as high accuracy requires
larger skeletons. We measure the wall-clock timings and memory usage for the following parts of the solution process:
\begin{enumerate}
\item building the tree and interpolation operators;
\item inverse construction and compression;
\item inverse matrix-vector multiplication (solve, timings only).
\end{enumerate}
We compare against the dense-block HSS inversion algorithm.
We take $\cA[K](r)$ to be the 2D Laplace free-space Green's
function: $ \cA[K](r) = \frac{1}{2\pi} \log(r) $ and $a \equiv 1$.
\begin{itemize}
\item If $b \equiv c$, $A$ is symmetric, which leads to some computational savings in the HSS algorithms.
\item The case $b \not \equiv 1$ is our example of a \textbf{non-translation-invariant (NTI)} kernel. Such an equation appears in forward scattering problems (the zero- or low-frequency Lippmann-Schwinger equation).
\item The case $b \equiv 1$ is one of our examples of a \textbf{translation-invariant (TI)} kernel. We recall that significant computational savings can be achieved in this case, since we only build one set of matrix blocks per level.
\end{itemize}
We note that these results are typical for non-oscillatory Green's
function kernels: we have performed the same tests for the 3D Laplace
single layer potential and the 2D and 3D Yukawa Green's functions,
and found behavior analogous to the one described below.
\subsubsection{Non-translation-invariant kernel}
We take $b(x)$ to be a smooth function with moderate variation:
\begin{equation*}
b(x) = 1 + 0.5e^{-(x_1-0.3)^2-(x_2-0.6)^2}
\end{equation*}
Although this may seem like a simplistic choice, the particular form of $b$ has little effect on computational performance
unless the problem is under-resolved or $b$ vanishes on a large subdomain.
For the NTI case, the ranks of blocks in dense-block HSS matrices and of low-rank blocks
vary both across levels and within each level. However, we observe that the rank growth of these blocks
matches our complexity assumptions.
We set the leaf box size to $n_{max}=7^2$, and for problem sizes $N=784$ to $N=3211264$ we compare total inversion time (tree build + inverse compression) and total memory usage for the dense-block (HSS-D) and compressed-block (HSS-C) inverse compression algorithms.
\begin{table}[h!]
\centering
\begin{tabular}{ | r | c | c | c | c | }
\hline
$N$ & HSS-D Time & HSS-C Time & HSS-D Memory & HSS-C Memory \\
& $O(N^{3/2})$ & $O(N)$ & $O(N \log N)$ & $O(N)$ \\
\hline
784 & 0.11 s & 0.17 s & 4.68 MB & 4.48 MB \\
3136 & 0.67 s & 1.70 s & 29.09 MB & 25.24 MB \\
12544 & 4.50 s & 8.32 s & 159.59 MB & 123.07 MB \\
50176 & 31.45 s & 40.43 s & 819.58 MB & 538.51 MB \\
200704 & 3.79 m & 3.23 m & 3.72 GB & 2.23 GB \\
802816 & 28.35 m & 13.66 m & 17.27 GB & 9.23 GB \\
3211264 & 3.58 hr & 54.795 m & 70.99 GB & 34.09 GB \\
\hline
\end{tabular}
\caption{Total inversion time and storage for the NTI inverse compression algorithms}
\label{tbl:NTI}
\end{table}
We observe a very close match between the experimental scaling and the
complexity estimates of Proposition \ref{prop:complexity}, which we recall in the second
row of Table~\ref{tbl:NTI}.
The slopes in a log-log plot (Figure \ref{fig:NTI_Inv}.A) show that
both inversion and tree build times are $O(N^{3/2})$ for the dense-block
version and $O(N)$ for our accelerated algorithm. The break-even point
is around $N = 10^5$, where both methods take about $1.5$
minutes. In the speedup plot
in Figure~\ref{fig:NTI_Inv}.B, we can clearly observe performance gains for
$N > 10^5$ due to the difference in scaling. By $N = 10^7$, our algorithm gains an order of magnitude in speed.
Additional speed gains may be obtained by adjusting the size of the fine-scale boxes.
Additional speed gains may be obtained by adjusting the size of the fine-scale boxes.
The slopes of the log-log plot (Figure \ref{fig:NTI_Inv}.C) confirm
that memory usage for these algorithms behaves like $O(N \log N)$ and
$O(N)$, respectively. Our algorithm uses less memory in all
cases (Figure~\ref{fig:NTI_Inv}.D).
By $N = 10^7$, it uses $2.5 \times$ less memory (150 GB, which amounts
to approximately 1800 doubles per degree of freedom).
\subsubsection*{Matrix compression} Figure \ref{fig:NTI_Inv}.A shows that
the tree build time is about $18 \%$ of the total inversion
time. The constructed tree can also be used to compress the original
matrix $\cA[A]$ in $O(N \log N)$ work, or even in $O(N)$ work by
incurring the small additional cost of compressing sibling interaction
matrices in HSS 1D form. Even though our focus is on building a fast direct
solver, this suggests that our accelerated method for compressing the interpolation
matrices $\ttA[L]$ and $\ttA[R]$ readily yields $O(N)$ matrix
compression. The compressed matrix can be used as a kernel-independent
FMM, with a significantly simplified algorithm structure.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale = 0.17]{NTI_L2D_InvComp}
\caption{\textit{NTI 2D Laplace Kernel: Inverse compression timings and memory usage. Total times and storage are represented in solid lines, Tree build and storage in dashed lines.}}
\label{fig:NTI_Inv}
\end{center}
\end{figure}
\subsubsection{Translation-invariant kernel}
Exploiting translation invariance yields a substantial improvement in performance, which stems from the fact that only one set of matrices per level of $\cA[T]$ needs to be computed and stored. The dense-block HSS inverse compression still scales as $O(N^{3/2})$, but is $3\times$ faster for all tested problem sizes, and memory usage becomes more
efficient as $N$ grows. The asymptotic behavior of inverse compression does not improve over the non-translation-invariant case because $O(N^{3/2})$ work is performed on the top boxes of the tree in both cases.
As one would expect, our compressed-block algorithm displays sublinear
scaling for both inverse compression time and storage, gaining an order
of magnitude in both for the largest problems when compared to the NTI case.
For $N = 10^7$, it has become about $11\times$ faster (29 minutes), using $30\times$ less memory (5 GB).
Building the binary tree, which as mentioned amounts to
compressing $\cA[A]$, displays similar performance gains: for $N = 10^7$, it takes only $4.5$ minutes to build, and storage is only 1.3 GB.
\begin{table}[h!]
\centering
\begin{tabular}{ | r | c | c | c | c | }
\hline
N & HSS-D Time & HSS-C Time & HSS-D Memory & HSS-C Memory \\
& $O(N^{3/2}) $ & $O(N)$ & $O(N)$ & $O(N)$ \\
\hline
784 & 0.05 s & 0.13 s & 1.94 MB & 1.75 MB\\
3136 & 0.21 s & 0.98 s & 9.04 MB & 6.19 MB\\
12544 & 1.40 s & 3.41 s & 39.16 MB & 19.03 MB\\
50176 & 9.68 s & 10.76 s & 163.19 MB & 52.09 MB\\
200704 & 1.21 m & 30.89 s & 666.39 MB & 151.41 MB\\
802816 & 9.20 m & 1.59 m & 2.61 GB & 474.74 MB\\
3211264 & 1.19 hr & 6.68 m & 9.94 GB & 1.56 GB \\
12845056 & 9.28 hr & 29.22 m & 39.74 GB & 5.29 GB \\
\hline
\end{tabular}
\caption{Total inversion time and storage for the TI inverse compression algorithms}
\end{table}
The algorithm achieves parity with the dense-block HSS version at lower
values of $N$ (around $N = 50000$), where both methods take about
$10$ seconds to produce the HSS inverse. By $N =10^7$, our algorithm
is about $20 \times$ faster than the dense-block approach
(Figure \ref{fig:TI_Inv}). We again observe that tree build time is consistently about $15\%$--$20\%$ of the total inversion time, and so our observation about matrix compression still holds.
Figure \ref{fig:TI_Inv}.C shows sublinear scaling for memory usage. It
remains true that our algorithm uses less memory in all cases: for $N
=10^7$, it uses $8 \times$ less memory
(5 GB, which amounts to 50 doubles per degree of freedom).
\begin{figure}[h!]
\begin{center}
\includegraphics[scale = 0.17]{TI_L2D_InvComp}
\caption{\textit{TI 2D Laplace kernel: Inverse compression timings and memory usage. Total times and storage are represented in solid lines, Tree build and storage in dashed lines.}}
\label{fig:TI_Inv}
\end{center}
\end{figure}
\subsubsection{Inverse matrix-vector multiplication}
To test the inverse matvec algorithm, we run it with ten
random right-hand sides and compute the time per solve (in seconds) for
both the NTI and TI examples. We note that, since the same matrix
blocks are applied to the same subsets of each right-hand side, the code
can easily be vectorized, leading to performance gains if multiple
solves are performed simultaneously. If the matrix is
translation-invariant, additional speedup results from the fact that the
same set of matrices is applied to the vectors corresponding to all boxes in a given level.
We note that in all cases the inverse apply is very fast, scaling
linearly and remaining well under a minute for sizes up to $N \sim
10^7$. As expected, the differences between the dense- and compressed-block apply are small, with the latter becoming marginally faster around $N \sim 3\times10^6$.
\begin{table}[h!]
\centering
\begin{tabular}{ | r | c | c | c | c | c | c |}
\hline
N & NTI HSS-D & NTI HSS-C & TI HSS-D & TI HSS-C & NTI HSS-C & TI HSS-C \\
& $O(N \log N) $ & $O(N)$ & $O(N \log N)$ & $O(N)$ & Error & Error \\
\hline
784 & 0.0014 & 0.0018 & 0.0007 & 0.0011 & 1.6e-14 & 9.2e-15 \\
3136 & 0.0064 & 0.0090 & 0.0031 & 0.0046 & 1.8e-14 & 1.8e-14 \\
12544 & 0.0292 & 0.0362 & 0.0137 & 0.0162 & 8.6e-11 & 5.7e-11 \\
50176 & 0.1320 & 0.1546 & 0.0590 & 0.0600 & 1.6e-10 & 1.7e-10 \\
200704 & 0.5993 & 0.6772 & 0.2819 & 0.2512 & 2.3e-10 & 1.6e-10 \\
802816 & 2.6611 & 2.8193 & 1.2709 & 1.0763 & 4.0e-10 & 3.8e-10 \\
3211264 & 11.816 & 11.737 & 5.77296 & 4.5650 & 5.1e-9 & 1.6e-9 \\
12845056 & \color{red}52.468 & \color{red}48.8641 & 25.8312 & 19.3619 & - & 3.2e-9 \\
\hline
\end{tabular}
\caption{Inverse apply timings (in seconds) for both NTI and TI algorithms. Numbers in red are extrapolated from previous data.}
\end{table}
For all cases, if $\mathcal{A}$ and $\mathcal{A}^{-1}$ denote our compressed approximate matrix and inverse, we use the following error measure:
\begin{equation*}
E = \frac{|| v - \mathcal{A}\mathcal{A}^{-1}v||}{||v||}
\end{equation*}
taking the maximum over a number of randomly generated right-hand sides $v$. In this measure, which can be thought of as an approximate residual, both algorithms achieve the desired target accuracy for the inverse apply.
We may then bound the exact residual by
\begin{equation*}
|| v - A\mathcal{A}^{-1}v || \leq E\,||v|| + || \mathcal{A} - A ||\, || \mathcal{A}^{-1} v||.
\end{equation*}
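The error measure $E$ can be estimated as in the following sketch (our own illustration; \texttt{A\_apply} and \texttt{Ainv\_apply} stand for the compressed matrix and inverse applies, and the names are ours).

```python
import numpy as np

def max_relative_residual(A_apply, Ainv_apply, n, trials=10, seed=0):
    """E = max over random v of ||v - A(Ainv(v))|| / ||v||, the approximate
    residual used to assess the accuracy of a compressed inverse."""
    rng = np.random.default_rng(seed)
    E = 0.0
    for _ in range(trials):
        v = rng.standard_normal(n)
        r = v - A_apply(Ainv_apply(v))        # residual for this right-hand side
        E = max(E, np.linalg.norm(r) / np.linalg.norm(v))
    return E
```

For an exact inverse pair, $E$ is at the level of rounding error; for the compressed pair it reflects the compression tolerance.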
\subsection{The effect of varying accuracy}
\label{sec:vary-accuracy}
Setting a lower target accuracy significantly speeds up both algorithms presented above: the number of boundary layers needed, the size of skeleton sets in $\cA[T]$, and the ranks of low-rank and HSS 1D blocks in the compressed-block algorithm all decrease, and low-rank factorizations can be performed faster.
In this set of tests, using the Laplace single layer potential in 2D, we compare their behavior for target accuracies $\varepsilon = 10^{-5}$ and $10^{-10}$.
\subsubsection*{Dense-block algorithm} In this case, only the change in skeleton set size is relevant. As in Section~\ref{sec:complexity}, we can bound $k_i \le k_{\ell} = C_{\varepsilon}n_{\ell}^{1/2}$. In particular,
$\varepsilon=10^{-5}$ requires one boundary layer and $\varepsilon=10^{-10}$ requires
two, and so $C_{10^{-10}} \sim 2C_{10^{-5}}$, as expected from standard multipole estimates ($C_{\varepsilon} \sim \log(1/\varepsilon)$).
Following our complexity analysis, both the tree build and inverse compression algorithms perform $O(k_i^3)$ work per box, and so the constant in the leading term is of the form $\hat{C}\log^3(1/\varepsilon)$. This implies a factor of $8$ between $\varepsilon=10^{-5}$ and $\varepsilon=10^{-10}$.
Analogously, inverse matvec work and memory usage are $O(k_i^2)$ per box, and so we expect a factor of $4$ between these two cases.
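These predicted factors follow from a line of arithmetic (our own back-of-envelope check, assuming $C_{\varepsilon} \sim \log(1/\varepsilon)$):

```python
import math

# Moving from eps = 1e-5 to eps = 1e-10 doubles the skeleton constant
# C_eps ~ log(1/eps); the per-box costs then scale by powers of that ratio.
c_ratio = math.log(1e10) / math.log(1e5)   # ratio of skeleton constants: 2
time_ratio = c_ratio ** 3                  # O(k^3) work per box: factor 8
memory_ratio = c_ratio ** 2                # O(k^2) storage per box: factor 4
```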
\subsubsection*{Compressed-block algorithm}
Performance of the compressed-block algorithm depends on all three quantities mentioned above:
the size of 2D HSS skeletons is again $O(\log(n_{\ell}/\varepsilon))$, but
fast operations are performed on these.
Hence, we expect at worst an additional factor of $\log(1/\varepsilon)$.
Assuming the sizes of dense blocks in low-rank and HSS 1D representations
are $O(\log(n_{\ell}/\varepsilon))$, the most expensive one-dimensional
dense-block HSS operations are linear with a constant $O(\log^3(1/\varepsilon))$, and again matvecs and storage get a factor of $O(\log^2(1/\varepsilon))$.
Hence, we expect the factors to behave as $O(\log^4(1/\varepsilon))$ ($16 \times$) for tree build and inverse compression, and as $O(\log^3(1/\varepsilon))$ ($8 \times$) for inverse apply and memory storage.
\begin{table}[h!]
\centering
\begin{tabular}{ | r | c | c | c | c | c | c | c | c |}
\hline
N & HSS-D Time & Ratio & HSS-C Time & Ratio & HSS-D Memory & Ratio & HSS-C Memory & Ratio \\
\hline
784 & 0.01 s & 3.1 & 0.03 s & 6.7 & 0.45 MB & 3.3 & 0.44 MB & 4.1 \\
3136 & 0.03 s & 5.2 & 0.07 s & 15 & 1.92 MB & 3.7 & 1.75 MB & 4.2 \\
12544 & 0.20 s & 6.4 & 0.75 s & 6 & 7.98 MB & 3.8 & 5.40 MB & 4.3 \\
50176 & 1.35 s & 6.9 & 2.83 s & 6 & 32.53 MB &3.9 & 13.67 MB & 4.4 \\
200704 & 9.77 s & 7.4 & 7.87 s & 6.4 & 131.39 MB & 4 & 36.59 MB & 4.4 \\
802816 & 1.22 m & 7.6 & 20.88 s & 7.3 & 528.19 MB & 4 & 107.38 MB & 4.4 \\
3211264 & 9.13 m & 7.9 & 1.19 m & 8 & 2.07 GB & 4 & 349.50 MB & 4.2 \\
12845056 & 1.14 hr & 8.1 & 4.56 m & 8.9 & 8.34 GB & 4 & 1.21 GB & 3.8 \\
\hline
\end{tabular}
\caption{Inverse compression time and storage of the TI algorithms
for $\varepsilon = 10^{-5}$, and the ratios between the corresponding quantities at $10^{-10}$ and $10^{-5}$ target accuracy}
\end{table}
Experimental results show that our estimates for the dense-block
algorithm (HSS-D) are accurate: as $N$ grows, the ratio
between compression times converges to $\mathbf{8}$, and that for
memory usage to $\mathbf{4}$, as expected. Although there is more variation in the compressed-block results, our estimates appear to be conservative, since the ratios are fairly similar to the dense-block case.
Our hypotheses also appear to be conservative for the inverse apply algorithms: ratios between these two accuracies tend to be between $\mathbf{2}$ and $\mathbf{3}$ in all cases tested.
We omit the results for the non-translation-invariant case, since the comparison and ratios between the two target accuracies are quite similar. Overall, this implies that the analysis and observations in Section 5.1 apply with little modification to the case $\varepsilon = 10^{-5}$: for both algorithms, a faster but less accurate inverse can be compressed $\mathbf{8}$ times faster, stored using $\mathbf{4}$ times less memory, and applied two to three times faster than in the high-accuracy case.
\subsection{Low Frequency Oscillatory Kernels}
Finally, we perform the same set of experiments as in Section 5.1, but this time taking $\mathcal{K}(r)$ in our general formulation to be the 2D Helmholtz free-space Green's function for wave number $k$:
\begin{equation}
\mathcal{G}_{k}(r) = -\frac{i}{4} H^{(1)}_0(k r),
\label{eqn:H2D_kernel}
\end{equation}
where $H^{(1)}_0$ is the Hankel function of the first kind of order zero. As we will see in Section 5.4, this is akin to solving the Lippmann-Schwinger equation for a given frequency $k$.
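For reference, the kernel of equation (\ref{eqn:H2D_kernel}) can be evaluated with SciPy's Hankel function, as in this sketch (our own illustration; the function name is an assumption):

```python
import numpy as np
from scipy.special import hankel1  # H^(1)_nu, Hankel function of the first kind

def helmholtz_kernel_2d(r, k):
    """2D Helmholtz free-space kernel G_k(r) = -(i/4) H_0^{(1)}(k r).
    Complex-valued, so each matrix entry stores two doubles."""
    return -0.25j * hankel1(0, k * r)
```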
As mentioned before, with some minor modifications our solver is able to handle oscillatory kernels at low frequencies. We briefly note the main differences from non-oscillatory problems:
\begin{itemize}
\item{\textbf{Skeleton Structure:}} For high accuracy, skeletons
consisting of more layers of points are needed than for non-oscillatory kernels such as Laplace. Moreover, as the wavenumber grows, it becomes necessary to also keep points inside the box in the skeleton set. Experimentally, we determine that keeping several points per wavelength is enough to maintain accuracy for matrix and inverse compression.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale = 0.09]{H2D_skeletons}
\caption{\textit{2D Helmholtz kernel skeleton set structure: Skeleton points are in black and residual points in blue. A few points per wavelength are kept on the interface between the two children boxes.}}
\label{fig:H2D_skeletons}
\end{center}
\end{figure}
\item{\textbf{Conditioning:}} The Lippmann-Schwinger equation is moderately
ill-conditioned, with condition number depending on $k$. This impacts
the matrix inversions in both levels of compression, resulting in the loss of
several digits of accuracy. We note that this loss is mild for the
wavenumbers tested, and that we can always recover the lost digits with one round of iterative refinement (matrix and inverse applies remain quite fast).
\item{\textbf{Rank growth assumptions:}} Ultimately, as $k$ grows, the assumptions stated in Section 4 will fail: the size of skeletons in compressed blocks will depend on $k$, as will the number of points needed in our discretization.
\end{itemize}
This indicates that a different approach is needed to build a fast direct solver that can handle moderate- and high-frequency regimes.
\subsubsection{Inverse compression results: TI and NTI cases}
We present experimental results for both the non-translation-invariant and the translation-invariant cases. Let $\kappa = k / 2 \pi$ be the number of wavelengths that fit in our domain, the unit box. For $\kappa = 4,8,16,32$, we set the leaf box size to $n_{max} = 9^2$ and test for increasing problem size $N$, comparing total inversion time and memory usage for the HSS-D and HSS-C inverse compression algorithms.
We observe that, while the cases $\kappa=4,8$ behave similarly to their Laplace counterparts (skeletons consist of 2 boundary layers), for $\kappa=16,32$ and higher at least 3 layers are needed, and accuracy in matrix compression deteriorates unless we also keep a few extra points per wavelength inside the box. As we will see, this change in skeleton set structure is the main cause of the differences in behavior between these two sets of examples.
We note that, since the Helmholtz kernel is complex-valued, two doubles are stored for each matrix entry. Memory usage can therefore be expected a priori to be at least twice that of real-valued kernels.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale = 0.15]{NTI_H2D_InvComp}
\caption{\textit{NTI 2D Helmholtz kernel: Inverse compression timings and memory usage for increasing values of $\kappa$}}
\label{fig:H2D_NTI_Inv}
\end{center}
\end{figure}
We first observe that in all cases the experimental scaling again coincides with the expected complexity (the $O(N^{3/2})$ HSS-D algorithm is not plotted in Figure \ref{fig:H2D_NTI_Inv} to avoid clutter). For the HSS-C algorithm, the slopes in Figure \ref{fig:H2D_NTI_Inv}.A quickly approach 1 as $N$ grows.
\textbf{Break-even points and speedup:} On the speedup plot in Figure \ref{fig:H2D_NTI_Inv}.B, we observe that the point where the compressed-block algorithm overtakes its dense-block counterpart grows slightly with $\kappa$: while it is around $10^5$ for $\kappa = 4,8$, it grows closer to $10^6$ for $\kappa=16,32$. A moderate speedup is gained past these break-even points due to better scaling, as in the Laplace case.
\textbf{Memory Usage:} In Figures \ref{fig:H2D_NTI_Inv}.C and \ref{fig:H2D_NTI_Inv}.D, we again observe that the HSS-C algorithm always provides extra compression in terms of storage, and that this advantage improves as $N$ grows. By $N \sim 10^6$, it uses $3$--$4$ times less memory than HSS-D.
For $N \sim 10^6$ and $\kappa = 4,8$, the HSS-C algorithm takes about $1.3$ hours to produce the HSS inverse, requiring 27 GB of storage. For $\kappa = 16,32$, it takes 7 and 10 hours, respectively, and storage requirements go up to $\sim 60$ GB. Comparing with the Laplace NTI kernel in Section 5.1 (which requires 0.5 hours and 10 GB at this size), we see that as the wavenumber increases, it takes considerably more time and storage to compress the inverse. The most dramatic change occurs for $\kappa = 16,32$, since the skeleton sets become much bigger.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale = 0.15]{TI_H2D_InvComp}
\caption{\textit{TI 2D Helmholtz kernel: Inverse compression timings and memory usage for increasing values of $\kappa$}}
\label{fig:H2D_TI_Inv}
\end{center}
\end{figure}
Exploiting translation invariance yields significant performance gains for all tested wavenumbers. The HSS-D algorithm is again $O(N^{3/2})$, and about $3 \times$ faster than in the NTI case. We also observe sublinear scaling of inverse compression time and storage for the HSS-C version, although a bit less dramatic than for the Laplace kernel. By $N \sim 10^6$, it has become $5$--$6 \times$ faster, using $15 \times$ less memory.
Since only one set of matrices is computed for each level of $\mathcal{T}$, the impact of adding an extra layer of skeleton points is seen more clearly: inverse compression time and storage become about an order of magnitude higher. There is also a more pronounced difference between $\kappa=16$ and $\kappa=32$ for bigger problem sizes.
\textbf{Break-even points and speedup:} On the speedup plot in
Figure \ref{fig:H2D_TI_Inv}.B, we see that the break-even points are somewhat
smaller than in the NTI case, and that they again grow with
$\kappa$: for $\kappa=4$ the break-even is below $10^5$, while for $\kappa = 32$ it
occurs around $3\times 10^5$. Speedups are again considerably better and faster growing due to sublinear scaling, especially for small wavenumbers.
\textbf{Memory Usage:} The gain in compression is more rapid, especially for low wavenumbers. By $N \sim 10^6$, the HSS-C algorithm uses $3$--$5$ times less memory than HSS-D. Also, in Figure \ref{fig:H2D_TI_Inv}.D, we see that the slopes of the memory ratio are higher than in the NTI case.
For $N \sim 10^6$ and $\kappa = 4,8$, the HSS-C algorithm takes about $15$ minutes to produce the HSS inverse, requiring 1.5 GB of storage. For $\kappa = 16,32$, these figures go up to 1.2 and 2.6 hours, taking 4.6 and 5 GB of storage. Comparing with the Laplace TI kernel (2 minutes and 0.5 GB at this size), we observe similar differences in storage as in the NTI case, but much more drastic ones in inverse compression time.
\subsubsection{Inverse matrix-vector multiplication}
Finally, we test the inverse apply algorithm as in Section 5.1.3. Since the differences between the dense- and compressed-block versions are quite small, we present results for the HSS-C apply only, for $\kappa=8$ and $\kappa=32$. The inverse apply remains considerably fast and retains optimal scaling.
\begin{table}[h!]
\centering
\begin{tabular}{ | r | c | c | c | c | c | c |}
\hline
N & NTI HSS-C & NTI HSS-C & TI HSS-C & TI HSS-C & TI HSS-C & TI HSS-C \\
& $\kappa=8$ & $\kappa=32$ & $\kappa=8$ & $\kappa=32$ & Error ($\kappa=8$) & Error ($\kappa=32$)\\
\hline
1296 & 0.0036 & 0.0074 & 0.0022 & 0.0048 & 1e-13 & 1e-12 \\
5184 & 0.0205 & 0.0596 & 0.0126 & 0.0365 & 5.5e-10 & 2.6e-8 \\
20736 & 0.1102 & 0.3046 & 0.0587 & 0.1663 & 1.6e-10 & 8.7e-9 \\
82944 & 0.5197 & 1.4962 & 0.2465 & 0.8148 & 2.3e-9 & 3.6e-8 \\
331776 & 2.5431 & 6.7812 & 1.0350 & 3.4867 & 1.0e-10 & 2.0e-8 \\
1327104 & \color{red}10.1684 & \color{red}27.1248 & 4.3536 & 15.4991 & 1.2e-9 & 4.9e-8 \\
\hline
\end{tabular}
\caption{Inverse apply timings (in seconds) for both NTI and TI algorithms, for different values of $\kappa$. Numbers in red are extrapolated from previous data.}
\end{table}
For $\kappa=16,32$, we usually lose two digits in our accuracy measure ($\sim 10^{-8}$) when compared to our target of $10^{-10}$. As mentioned above, this is an effect of conditioning, and can be addressed if necessary by increasing the target accuracy or by one round of iterative refinement.
\subsection{2D scattering problem: Lippmann-Schwinger equation}
\subsubsection*{Background on scattering} Consider an acoustic scattering problem in $\mathbb{R}^2$ involving a ``soft''
scatterer contained in a domain $\Omega$. For simplicity, we assume that the ``incoming field'' is generated by a point
source at a point $x_s \in \Omega^{\rm c}$. A typical mathematical model takes the form:
\begin{equation}
-\Delta u(x) - \frac{\omega^2}{v(x)^2} u(x) = \delta (x-x_s),\qquad x \in \mathbb{R}^2
\label{eqn:helmholtz_freespace}
\end{equation}
where $\omega$ is the frequency of the incoming wave, and where $v(x)$ is the wave speed at $x$.
We assume that the wave-speed is constant outside of $\Omega$, so that $v(x) = v_0$ for
$x \in \Omega^{\rm c}$. To make the equation well-posed, we assume that $u(x)$ satisfies
the natural ``radiation condition'' at infinity. We define the ``wave number'' $k$ via
\begin{equation*}
k = \frac{\omega}{v_0},
\end{equation*}
and a function $b = b(x)$ that quantifies the ``deviation'' in the wave-number from the free-space wave-number via
\begin{equation*}
b(x) = k^2 - \frac{\omega^2}{v(x)^2}.
\end{equation*}
Observe that $b(x) = 0$ outside $\Omega$. Equation (\ref{eqn:helmholtz_freespace}) then takes the form
\begin{equation}
-\Delta u(x) - k^2 u(x) + b(x)\,u(x) = \delta (x-x_s),\qquad x \in \mathbb{R}^2.
\label{eqn:scatter_local}
\end{equation}
Finally, let $G_k$ denote the Helmholtz fundamental solution as in equation (\ref{eqn:H2D_kernel}),
and split the solution $u$ of (\ref{eqn:scatter_local}) into ``incoming'' and ``outgoing'' fields
so that $u = u_{in} + u_{out}$, where
\begin{equation*}
u_{in}(x) = G_k(x,x_s).
\end{equation*}
Then $u_{out}$ satisfies:
\begin{equation}
-\Delta u_{out}(x) - k^2 u_{out}(x) + b(x)\,u_{out}(x) = -b(x)\,u_{in}(x),\qquad x \in \Omega.
\label{eqn:outgoing}
\end{equation}
When converting (\ref{eqn:outgoing}) into an integral equation, we assume that $b(x)$ is non-negative
and look for a solution of the form
\begin{equation}
u_{out}(x) = [S_k (\sqrt{b}\,\tau)](x) = \int_{\Omega} {G_k(x,y) \sqrt{b(y)}\,\tau(y)\,dy}.
\label{eqn:ansatz}
\end{equation}
Inserting (\ref{eqn:ansatz}) into (\ref{eqn:outgoing}), and dividing the result by $\sqrt{b(x)}$, we find the equation
\begin{equation}
\tau(x) + \int_{\Omega} {\sqrt{b(x)}\,G_k(x,y)\,\sqrt{b(y)}\, \tau(y)\, dy} = -\sqrt{b(x)}\, u_{in}(x),\qquad x \in \Omega.
\label{eqn:LS}
\end{equation}
\subsubsection*{Test problem} For our numerical tests, we take:
\begin{equation*}
b(x_1,x_2) = 0.25\, k^2 \left(1 + \tanh\!\big(D(1 - \epsilon -|x_1|)\big)\right) \left(1 + \tanh\!\big(D(1 - \epsilon -|x_2|)\big)\right)
\end{equation*}
Note that
$b$ is approximately equal to $k^2$ in a subdomain of $[-1,1]^2$ and transitions exponentially to $0$ close to the boundary of the square. The parameters $D$ and $\epsilon$ control the position and width of this transition layer. We note that if $b$ is close to $0$ up to machine precision, the tree build routine must be modified to pick skeleton points along the support of $b$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale = 0.05]{LS_bump_tanh}
\caption{\textit{Plot of the deviation $b$ for the scatterer in our Lippmann-Schwinger equation}}
\label{fig:LS_bump}
\end{center}
\end{figure}
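A direct evaluation confirms this behavior of $b$. In the sketch below (our own illustration), the values $D = 20$ and $\epsilon = 0.1$ are illustrative choices, not the parameters used in our tests:

```python
import numpy as np

def b_scatterer(x1, x2, k, D=20.0, eps=0.1):
    """Deviation b(x1, x2) = 0.25 k^2 (1 + tanh(D(1 - eps - |x1|)))
                                      (1 + tanh(D(1 - eps - |x2|))):
    approximately k^2 inside the square, decaying exponentially to 0
    in a layer of width ~eps near the boundary of [-1, 1]^2."""
    t1 = 1.0 + np.tanh(D * (1.0 - eps - np.abs(x1)))
    t2 = 1.0 + np.tanh(D * (1.0 - eps - np.abs(x2)))
    return 0.25 * k ** 2 * t1 * t2
```

At the center both $\tanh$ factors are essentially $1$, giving $b \approx k^2$; at the corner $(1,1)$ both factors are small, so $b$ is several orders of magnitude below $k^2$.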
We implement an $O(h^4)$-accurate corrected trapezoidal rule \cite{duan2009high} by adding a diagonal correction to our kernel. Since our algorithm compresses off-diagonal blocks, this has no bearing on the performance of our direct solver. We note that higher-order (up to $O(h^{10})$) quadratures of this form can be applied if needed.
\subsubsection*{Numerical results} For $\kappa = 4,8$ and increasing problem size $N$, we solve the Lippmann-Schwinger equation
(\ref{eqn:LS}) using our solver for non-translation invariant, symmetric operators.
For both the approximate matrix $\mathcal{A}$ and its inverse, we first measure the empirical order of convergence when applying them to the same right-hand side while refining the grid size $h$. We confirm that the order is approximately $O(h^4) = O(N^{-2})$ until the error is comparable to the target accuracy.
Since there is much higher contrast in the kernel entries, and they get close to $0$ in the rows and columns corresponding to the box boundary, we observe in preliminary experiments that $2$ layers yield moderate accuracy ($\sim 10^{-6}$), while $3$ are needed for high accuracy ($\sim 10^{-10}$). We note that in cases where there is little variation within the domain of interest $\Omega$, this approach could be optimized by giving special treatment to boxes that intersect its boundary.
We also observe that the inverse of the translation-invariant operator in Section 5.3 may be used as a right preconditioner for this problem. Using BiCGstab, we find that for a given wavenumber $\kappa$, the number of iterations needed to solve the preconditioned system is moderate and independent of the problem size $N$.
We test the direct solver and the preconditioned iterative solver proposed above (Table \ref{tbl:LS_inv}). For the latter, the compression step includes the tree-build (matrix compression) for the Lippmann-Schwinger kernel and the inverse compression for the corresponding translation-invariant Helmholtz kernel (our preconditioner).
\begin{table}[h!]
\centering
\begin{tabular}{ | r | c | c | c | c | c | c |}
\hline
N & HSS-D Time & HSS-C Time & Iter Time & HSS-D Memory & HSS-C Memory & Iter Memory \\
\hline
784 & 1.92 s & 1.86 s & 1.72 s & 18.74 MB & 18.74 MB & 9.63 MB \\
3136 & 11.81 s & 24.93 s & 9.30 s & 125.87 MB & 92.30 MB & 54.37 MB \\
12544 & 1.31 m & 2.68 m & 1.13 m & 720.90 MB & 464.81 MB & 258.97 MB \\
50176 & 8.03 m & 14.20 m & 5.65 m & 3.69 GB & 5.56 GB & 1.19 GB \\
200704 & 47.13 m & 1.17 h & 25.71 m & 18.34 GB & 13.22 GB & 5.53 GB \\
802816 & 4.78 h & 5.70 h & 1.93 h & 90.36 GB & 47.16 GB & 25.38 GB \\
\hline
\end{tabular}
\caption{ Total inverse compression time and storage for the Lippmann-Schwinger equation
for $\kappa = 8$ and $\varepsilon = 10^{-10}$ target accuracy}
\label{tbl:LS_inv}
\end{table}
Finally, in Table \ref{tbl:LS_apply} we compare the inverse apply (solve) stage for both direct algorithms and for the iterative approach.
\begin{table}[h!]
\centering
\begin{tabular}{ | r | c | c | c | c | c | c |}
\hline
N & HSS-D Solve & HSS-C Solve & HSS Error & Iter Solve & Iter $\#$ & Error \\
\hline
784 & 0.03 s & 0.02 s & 2.7e-11 & 0.59 s & 20 & 8.0e-9 \\
3136 & 0.08 s & 0.17 s & 5.2e-9 & 3.13 s & 30 & 9.2e-9 \\
12544 & 0.36 s & 0.82 s & 7.1e-8 & 14.34 s & 28 & 4.4e-9 \\
50176 & 1.74 s & 3.73 s & 3.6e-11 & 56.12 s & 28 & 3.6e-9 \\
200704 & 9.54 s & 20.21 s & 9.3e-11 & 236.23 s & 29.5 & 7.7e-9 \\
802816 & 52.50 s & 109.32 s & 1.2e-10 & 1010.1 s & 29.5 & 4.4e-9 \\
\hline
\end{tabular}
\caption{Inverse apply timings for the Lippmann-Schwinger equation for $\kappa = 8$ and $\varepsilon = 10^{-10}$ target accuracy}
\label{tbl:LS_apply}
\end{table}
We note that for the HSS-C algorithm, one round of adaptive refinement is needed in order to attain the target accuracy for the approximate residual. This means that two inverse applies and one matrix apply are used in the solve.
\section{Introduction}
\label{sec:introduction}
\input{2DHSS_CMZ12_intro.tex}
\section{Background}
\label{sec:background}
\input{2DHSS_CMZ12_background.tex}
\section{$O(N)$ Inverse Compression Algorithm}
\label{sec:inversion}
\input{2DHSS_CMZ12_inversion.tex}
\section{Complexity Estimates}
\label{sec:complexity}
\input{2DHSS_CMZ12_complexity.tex}
\section{Numerical Results}
\label{sec:numerical}
\input{2DHSS_CMZ12_results.tex}
\section{Conclusions and Future Work}
\label{sec:conclusions}
\input{2DHSS_CMZ12_conclusions.tex}
\paragraph{Acknowledgments:} We would like to thank Mark Tygert and Leslie Greengard's group for many helpful insights during the design and implementation of our algorithm.
| {
"timestamp": "2013-05-16T02:00:21",
"yymm": "1303",
"arxiv_id": "1303.5466",
"language": "en",
"url": "https://arxiv.org/abs/1303.5466",
"abstract": "An efficient direct solver for volume integral equations with O(N) complexity for a broad range of problems is presented. The solver relies on hierarchical compression of the discretized integral operator, and exploits that off-diagonal blocks of certain dense matrices have numerically low rank. Technically, the solver is inspired by previously developed direct solvers for integral equations based on \"recursive skeletonization\" and \"Hierarchically Semi-Separable\" (HSS) matrices, but it improves on the asymptotic complexity of existing solvers by incorporating an additional level of compression. The resulting solver has optimal O(N) complexity for all stages of the computation, as demonstrated by both theoretical analysis and numerical examples. The computational examples further display good practical performance in terms of both speed and memory usage. In particular, it is demonstrated that even problems involving 10^{7} unknowns can be solved to precision 10^{-10} using a simple Matlab implementation of the algorithm executed on a single core.",
"subjects": "Numerical Analysis (math.NA)",
    "title": "An O(N) Direct Solver for Integral Equations on the Plane"
} |
https://arxiv.org/abs/1507.00810 | A Refinement of Vietoris Inequality for Cosine Polynomials | The classical Vietoris cosine inequality is refined by establishing a positive polynomial lower bound. | \section{Introduction}
In 1958, Vietoris \cite{V} published the following ``surprising and quite deep result" \cite[p. 1]{K} on inequalities for a class of sine and cosine polynomials.
\vspace{0.3cm}
\noindent
{\bf{Proposition 1.}} \emph{If the real numbers
$a_k$ $(k=0,1,...,n)$ satisfy
$$
a_0\geq a_1 \geq \cdots \geq a_n>0
\quad{and}
\quad{2k a_{2k}\leq (2k-1) a_{2k-1} \,\,\, (k\geq 1)},
$$
then}
\begin{equation}
\sum_{k=1}^n a_k \sin(kx)>0
\quad{and}
\quad{\sum_{k=0}^n a_k \cos(kx)>0} \quad{(0<x<\pi)}.
\end{equation}
\vspace{0.3cm}
\noindent
In order to prove (1.1) it is enough to consider the special case $a_k=b_k$, where
$$
b_{2k}=b_{2k+1}=\frac{1}{4^k}{2k \choose k}\,\,\,
(k\geq 0)
$$
and to apply summation by parts; see Askey and Steinig \cite{AS}.
In fact, Vietoris proved that
\begin{equation}
S_n(x)>0 \quad\mbox{and}
\quad{T_n(x)>0}
\quad{(n\geq 1; 0<x< \pi)},
\end{equation}
where
$$
S_n(x)=\sum_{k=1}^n b_k \sin(kx)
\quad\mbox{and}
\quad{
T_n(x)=\sum_{k=0}^n b_k\cos(kx)}.
$$
In what follows, we maintain these notations.
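The coefficients $b_k$ and the polynomials $S_n$, $T_n$ are easy to evaluate numerically, and the positivity in (1.2) can be spot-checked on a grid; the following minimal sketch is of course a sanity check, not a proof:

```python
import math

def b(k):
    # Vietoris coefficients: b_{2j} = b_{2j+1} = binom(2j, j) / 4^j
    j = k // 2
    return math.comb(2 * j, j) / 4**j

def S(n, x):
    return sum(b(k) * math.sin(k * x) for k in range(1, n + 1))

def T(n, x):
    return sum(b(k) * math.cos(k * x) for k in range(n + 1))

# Spot-check the Vietoris inequalities (1.2) on a grid in (0, pi)
xs = [math.pi * i / 1000 for i in range(1, 1000)]
assert all(S(n, x) > 0 and T(n, x) > 0 for n in range(1, 12) for x in xs)
print(b(0), b(1), b(2), b(3), b(4))  # 1.0 1.0 0.5 0.5 0.375
```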
In 1974, Askey and Steinig \cite{AS}
offered
a simplified proof of (1.2) and showed that
these inequalities have remarkable applications in the theory of ultraspherical polynomials and that they can be used to find estimates for the location of zeros of trigonometric polynomials.
In the recent past, Vietoris' inequalities received attention from several authors, who offered new conditions on the coefficients $a_k$ such that (1.1) holds;
see Belov \cite{B}, Brown \cite{Br},
Brown and Dai \cite{BDW}, Brown and Hewitt \cite{BH}, Brown and Yin \cite{BY}, Koumandos \cite{K}, Mondal and Swaminathan \cite{MS}.
Interesting historical remarks on these inequalities were given by Askey
\cite{A2}.
Is it possible to replace the lower bound $0$ in (1.2)
by a positive expression? In 2010, this problem was solved
for the sine polynomial $S_n$. Alzer, Koumandos and Lamprecht
\cite{AKL} proved the following
\vspace{0.3cm}
{\bf{Proposition 2.}}
\emph{The inequalities}
\begin{equation}
S_n(x)>\sum_{k=0}^4 a_k x^k>0
\quad{(a_k\in\mathbf{R}, k=0,...,4)}
\end{equation}
\emph{hold for all $n\geq 1$ and $x\in (0,\pi)$ if and only if}
$$
a_0=0, \quad{a_1=-\pi^2 a_4},
\quad{a_2=3 \pi^2 a_4},
\quad{a_3=-3\pi a_4},
\quad{-1/\pi^3<a_4<0}.
$$
{\em Moreover, in} (1.3) {\em
the biquadratic polynomial cannot be replaced by an algebraic polynomial of degree smaller than $4$.
}
It is natural to ask for a counterpart
of Proposition 2 which holds for the cosine polynomial $T_n$. More precisely, we try to find algebraic polynomials $p$ of smallest degree such that
\begin{equation}
T_n(x)\geq p(x)>0 \quad{(n\geq 1; 0<x<\pi)}.
\end{equation}
It is the aim of this paper to determine
all quadratic polynomials $p$ satisfying
(1.4).
We remark that there is no linear
polynomial $p$ such that (1.4) is valid. Otherwise,
setting $p(x)=\gamma_0 +\gamma_1 x$
gives
$$
T_1(x)=1+\cos(x)\geq \gamma_0+\gamma_1 x>0.
$$
We let $x$ tend to $\pi$ and obtain
$\gamma_0=-\pi \gamma_1$. Thus,
$$
\frac{1+\cos(x)}{\pi-x}\geq -\gamma_1 >0.
$$
But, this contradicts
$$
\lim_{x\to\pi}\frac{1+\cos(x)}{\pi-x}=
0.
$$
In the next section, we collect twelve lemmas. Our main result is presented in Section 3. We conclude the paper with some remarks which
are given in Section 4. Among others,
we provide a new inequality for a sum of
Jacobi polynomials.
The numerical values in this paper
have been calculated via the computer program MAPLE 13. We point out
that
in four places we apply the classical Sturm theorem to determine the number of distinct
zeros of an algebraic polynomial in a given interval. Since the Sturm procedure requires lengthy technical computations
we omit the details. However, those details which we do not include in this paper
are compiled in the supplementary article
\cite{AK}.
Concerning
Sturm's theorem we also refer to
van der Waerden \cite[p. 248 ]{W} and Kwong \cite{Kw}.
\vspace{0.3cm}
\section{Lemmas}
Here, we collect lemmas
which play an important role in the proof of
our main result.
\vspace{0.3cm}
{\bf{Lemma 1.}} \emph{We have}
\begin{equation}
\min_{0\leq x<\pi}\frac{T_6(x)}{(x-\pi)^2}
=0.12290... .
\end{equation}
\emph{Proof.}
We define
$$
\eta(x)=10x^6+6x^5-12x^4-\frac{11}{2}x^3+\frac{29}{8}x^2+\frac{11}{8} x+\frac{9}{16}
$$
and
$$
\theta(x)=(\pi-\arccos(x))^2.
$$
Let $c=0.1229$.
First, we show that
\begin{equation}
\eta(x)-c \theta(x)>0 \quad\mbox{for}
\quad{x\in (-1,1]}.
\end{equation}
We distinguish six cases.
Case 1. $-1<x<0$.\\
Let
$$
\omega(x)=\frac{1}{3}x^4+\frac{\pi}{6}x^3+x^2+\pi x +\frac{\pi^2}{4}.
$$
Then we have
\begin{equation}
\omega(x)>0 \quad\mbox{and}
\quad{\omega'(x)>0.}
\end{equation}
Let
$$
\phi(x)=\sqrt{\omega(x)}-\sqrt{\theta(x)}.
$$
Differentiation gives
$$
4(1-x^2)\Bigl(\frac{\omega'(x)}{2\sqrt{\omega(x)}}+\frac{1}{\sqrt{1-x^2}}\Bigr)\omega(x)\phi'(x)=
-\frac{1}{36}x^4(8x+3\pi)(8x^3+3\pi x^2+16x+9\pi)<0,
$$
so that (2.3) leads to
$$
\phi'(x)<0\quad\mbox{and}
\quad{\phi(x)>\phi(0)=0}.
$$
Since
$$
\eta(x)-c\omega(x)>0,
$$
we obtain
$$
\eta(x)-c\theta(x)>c(\omega(x)-\theta(x))>0.
$$
Case 2. $0\leq x\leq 0.3$.\\
Using $\eta'(x)>0$ and $\theta'(x)>0$
gives
$$
\eta(x)- c\theta(x)\geq \eta(0)-c\theta(0.3)=0.13... .
$$
Case 3. $0.3\leq x\leq 0.5$.\\
We have $\eta''(x)<0$ and $\theta'(x)>0$. This yields
$$
\eta(x)-c\theta(x)\geq \min (\eta(0.3), \eta(0.5))-c \theta(0.5)=\eta(0.5)-c\theta(0.5)=0.52... .
$$
Case 4. $0.5\leq x\leq 0.65$.\\
Since $\eta'(x)<0$ and $\theta'(x)>0$, we get
$$
\eta(x)-c\theta(x)\geq \eta(0.65)-c\theta(0.65)=0.14... .
$$
Case 5. $0.65\leq x\leq 0.95$.\\
We have $\eta(x)>0$. Let
$$
\lambda(x)=\sqrt{\eta(x)}
-\sqrt{c\theta(x)}
\quad\mbox{and}
\quad{\mu(x)=\eta(x)\eta''(x)-\frac{1}{2}\eta'(x)^2}.
$$
Differentiation leads to
\begin{equation}
2(1-x^2)^3 \Bigl(\frac{\mu(x)}{\eta(x)^{3/2}}+\frac{2\sqrt{c}x}{(1-x^2)^{3/2}}\Bigr)\eta(x)^3\lambda''(x)
=(1-x^2)^{3}\mu(x)^2 -4c x^2 \eta(x)^3=\nu(x), \quad\mbox{say}.
\end{equation}
The functions $\mu$ and $\nu$ are polynomials. Applying
Sturm's theorem
reveals that $\mu$ and
$\nu$ have no zeros on $[0.65,0.95]$.
Since $\mu(3/4)>0$ and $\nu(3/4)>0$, we conclude that both functions are positive
on $[0.65,0.95]$. From (2.4) we find
that $\lambda''(x)>0$. Let
$x^*=0.74746$. Then, $\lambda'(x^*)>0$.
This implies
$$
\lambda(x)\geq \lambda(x^*)+(x-x^*)\lambda'(x^*)
\geq \lambda(x^*)+(0.65-x^*)\lambda'(x^*)=0.0000077... .
$$
Case 6. $0.95\leq x\leq 1$.\\
Since $\eta'(x)>0$ and $\theta'(x)>0$, we obtain
$$
\eta(x)-c\theta(x)\geq \eta(0.95)-c\theta(1)=1.43... .
$$
Thus, (2.2) is proved. Let
$$
T^*(x)=\frac{T_6(x)}{(\pi-x)^2}.
$$
We have
$\lim_{x\to\pi} T^*(x)=\infty$ and
$T_6(x)=\eta(\cos(x))$. From (2.2) we obtain
$$
0.1229<T^*(x) \quad{(0\leq x<\pi)}.
$$
Since $T^*(0.725)=0.122907...$, we conclude
that (2.1) is valid.
\vspace{0.3cm}
{\bf{Remark.}}
Numerical computation shows that the minimum value is
$$
\alpha = 0.12290390650... \label{alp}
$$
attained at the unique point $x=0.72656896349...$.
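The minimum in (2.1) can be reproduced by a direct numerical search; a small sketch (a grid search, not a rigorous verification):

```python
import math

def b(k):
    # Vietoris coefficients: b_{2j} = b_{2j+1} = binom(2j, j) / 4^j
    j = k // 2
    return math.comb(2 * j, j) / 4**j

def T6(x):
    # T_6(x) = sum_{k=0}^{6} b_k cos(kx)
    return sum(b(k) * math.cos(k * x) for k in range(7))

def ratio(x):
    return T6(x) / (x - math.pi)**2

# Grid search for the minimum of T_6(x) / (pi - x)^2 on [0, pi)
xs = [i * 1e-4 for i in range(30000)]
x_min = min(xs, key=ratio)
print(x_min, ratio(x_min))  # near 0.72657 and 0.1229039 (cf. the Remark)
```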
\vspace{0.3cm}
The following lemma is known as
l'H\^opital's rule for monotonicity. A slightly weaker version can be found in \cite[Proposition 148]{HLP}.
\vspace{0.3cm}
{\bf{Lemma 2.}} \emph{Let $u$ and $v$ be real-valued functions which are continuous
on $[a,b]$ and differentiable on $(a,b)$.
Furthermore, let $v' \neq 0$ on $(a,b)$. If $u'/v'$ is strictly increasing (resp. decreasing) on $(a,b)$, then
the functions
$$
x\mapsto \frac{u(x)-u(a)}{v(x)-v(a)}
\quad{and}
\quad{x\mapsto \frac{u(x)-u(b)}{v(x)-v(b)}
}
$$
are strictly increasing (resp.
decreasing) on $(a,b)$.}
\vspace{0.3cm}
A proof for the next lemma is given by Vietoris in \cite{V}.
\vspace{0.3cm}
{\bf{Lemma 3.}} \emph{Let $n\geq 2$ and $x\in (0,\pi)$.
Then,}
$$
1+\cos(x)-
\frac{1}{4\sin(x/2)}
\Bigl(1+\sin(3x/2)\Bigr)
\leq T_n(x).
$$
\vspace{0.3cm}
{\bf{Lemma 4.}} \emph{If $3\pi/8\leq x<\pi$, then}
\begin{equation}
\frac{123}{1000}
(\pi-x)^2<1+\cos(x)-
\frac{1}{4\sin(x/2)}
\Bigl(1+\sin(3x/2)\Bigr).
\end{equation}
\vspace{0.3cm}
\emph{Proof.}
Let
$$
g(x)=\frac{123}{1000}(\pi-x)^2
\quad\mbox{and}
\quad{h(x)=1+\cos(x)-
\frac{1}{4\sin(x/2)}
\Bigl(1+\sin(3x/2)\Bigr)}.
$$
Then we have
$$
g'(x)=
\frac{123}{500}(x-\pi)<0
\quad\mbox{and}
\quad{h'(x)=
\frac{\cos(x/2)}{8\sin^2(x/2)}
\Bigl(1-8 \sin^3(x/2)\Bigr)<0.}
$$
Let $3\pi/8\leq r\leq x\leq s\leq 2\pi/3$.
We obtain
$$
h(x)-g(x)\geq h(s)-g(r)=q(r,s),
\quad\mbox{say}.
$$
Since
$$
q(3\pi/8,1.27)=0.0024...,
\quad{q(1.27,1.45)=0.0024...},
\quad{q(1.45,1.7)=0.0008...},
$$
$$
q(1.70,1.95)=0.0072...,
\quad{q(1.95,2\pi/3)=0.0366...},
$$
we conclude that (2.5) holds for $x\in [3\pi/8, 2\pi/3]$.
Next, we define
$$
f(x)=g(x)-h(x).
$$
Let $y\in (0,\pi/3)$.
Using
$$
\frac{y}{2}<\frac{\sin(y/2)}{\cos(y/2)}
$$
yields
\begin{eqnarray*}
f'(\pi-y) &=&
\frac{\sin(y/2)}{8 \cos^2(y/2)}
\Bigl( 8 \cos^3(y/2)-1 \Bigr)-
\frac{123 y}{500} \\
&>&
\frac{\sin(y/2)}{8 \cos^2(y/2)}
\Bigl( 8 \cos^3(y/2)-4\cos(y/2)-1\Bigr) \\[1.2ex] &>& 0.
\end{eqnarray*}
Thus, $y\mapsto f(\pi-y)$ is strictly decreasing on $(0,\pi/3)$, so that
we get
$$
f(\pi-y)< f(\pi)=0.
$$
This proves (2.5) for $x\in (2\pi/3,\pi)$.
\vspace{0.3cm}
{\bf{Lemma 5.}} \emph{For $x\in[0,\pi]$,}
\begin{equation}
x^2<\frac{20000}{99}\Bigl(1-\cos\frac{x}{10}\Bigr).
\end{equation}
\vspace{0.3cm}
\emph{Proof.}
Let
$$
u_0(x)=1-\cos(x/10) \quad\mbox{and}
\quad{v_0(x)=x^2}.
$$
Since
$$
200\,\frac{u_0'(x)}{v_0'(x)}=\frac{\sin(x/10)}{x/10}
$$
is decreasing on $[0,\pi]$, we conclude from Lemma 2 that the function
$$
w(x)=\frac{1-\cos(x/10)}{x^2}
\quad{(0<x\leq \pi)},
\quad{w(0)=\frac{1}{200}}
$$
is also decreasing on $[0,\pi]$. Thus, for $x\in[0,\pi]$,
$$
w(x)\geq w(\pi)=0.004959... >0.00495=\frac{99}{20000}.
$$
This settles (2.6).
\vspace{0.3cm}
{\bf{Lemma 6.}} \emph{Let $a_k$, $\beta_k$ $(k=1,...,n)$, and $\alpha^*$ be real numbers such that}
$$
\sum_{k=1}^j a_k \geq \alpha^*
\quad{for} \quad{j=1,...,n}
\quad{and}
\quad{\beta_1\geq \beta_2 \geq \cdots \geq \beta_n\geq 0}.
$$
\emph{Then,}
$$
\sum_{k=1}^n a_k \beta_k\geq \alpha^* \beta_1.
$$
\vspace{0.3cm}
\emph{Proof.}
Let
$$
A_j=\sum_{k=1}^j a_k \quad\mbox{and}
\quad{\beta_{n+1}=0.}
$$
Summation by parts gives
$$
\sum_{k=1}^n {a_k \beta_k}=\sum_{k=1}^n
A_k(\beta_k -\beta_{k+1})
\geq \sum_{k=1}^n
\alpha^* (\beta_k -\beta_{k+1})=\alpha^* \beta_1.
$$
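Lemma 6 is a standard Abel-summation estimate, and its statement is easy to sanity-check on random data; a minimal sketch:

```python
import random

def lemma6_holds(a, beta, alpha_star):
    # Hypotheses of Lemma 6: every partial sum of a is >= alpha_star,
    # and beta is nonincreasing and nonnegative.
    partials = []
    s = 0.0
    for ak in a:
        s += ak
        partials.append(s)
    assert min(partials) >= alpha_star
    assert all(b1 >= b2 >= 0 for b1, b2 in zip(beta, beta[1:]))
    # Conclusion (with a small tolerance for floating-point rounding):
    return sum(ak * bk for ak, bk in zip(a, beta)) >= alpha_star * beta[0] - 1e-12

random.seed(0)
for _ in range(500):
    n = random.randint(1, 20)
    a = [random.uniform(-1.0, 1.0) for _ in range(n)]
    alpha_star = min(sum(a[:j + 1]) for j in range(n))  # sharpest admissible choice
    beta = sorted((random.uniform(0.0, 1.0) for _ in range(n)), reverse=True)
    assert lemma6_holds(a, beta, alpha_star)
```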
\vspace{0.3cm}
{\bf{Lemma 7.}} \emph{Let
$$
C_n(x)=\sum_{k=0}^n (-1)^k b_k \cos(kx).
$$
If $2\leq n\leq 21$ $(n\neq 6)$ and $x\in (5\pi/8,\pi)$, then}
\begin{equation}
\frac{820}{33}\Bigl(1-\cos\frac{x}{10}\Bigr)
\leq C_n(x).
\end{equation}
\vspace{0.3cm}
\emph{Proof.}
We
set $y=x/10$ and
$$
P_n(y)=C_n(10y)-\frac{820}{33}(1-\cos(y)).
$$
Putting $Y=\cos(y)$ reveals that $P_n(y)$ is
an algebraic polynomial in $Y$.
We denote this polynomial by $P_n^*(Y)$, where
$Y\in [\cos(\pi/10), \cos(\pi/16)]
=[0.951...,0.980...]$. Applying Sturm's
theorem gives that $P_n^*$ has
no zero on $[0.951, 0.981]$ and satisfies $P_n^*(0.97)>0$. It follows that $P_n$
is positive on $[\pi/16, \pi/10]$. This implies that (2.7) holds.
\vspace{0.3cm}
{\bf{Lemma 8.}} \emph{Let}
\begin{equation}
\Delta(x)=\sum_{k=0}^{21} (-1)^k (b_k -b_{22}) \cos(kx)-\frac{820}{33}\Bigl(1-\cos\frac{x}{10}\Bigr).
\end{equation}
\emph{If \, $5\pi/8\leq x\leq 2.68$, \, then}
\, $\Delta(x)> 0.29$;
\emph{if \, $2.68\leq x\leq 2.83$, \, then}
\, $\Delta(x)> 0.46$;
\emph{if \, $2.83\leq x\leq 2.908$, \, then}
\, $\Delta(x)> 0.64$;
\emph{if \, $2.908\leq x\leq 2.970$, \, then}
\, $\Delta(x)> 0.90$;
\emph{if \, $2.970\leq x\leq 3.021$, \,
then}
\, $\Delta(x)> 1.32$;
\emph{if \, $3.021\leq x\leq 3.051$, \, then}
\, $\Delta(x)> 1.78$.
\vspace{0.3cm}
\emph{Proof.}
Let $5\pi/8\leq x\leq 2.68$. We have
$\cos(\pi/16)=0.980...$ and $\cos(0.268)=0.964...$.
The function $\Delta -0.29$ is an algebraic
polynomial in $Y=\cos(x/10)$. An application of
Sturm's theorem gives that this function
is positive on $[0.964,0.981]$. This leads
to $\Delta(x)>0.29$ for $x\in [5\pi/8, 2.68]$. Using the same method of proof we obtain that the other estimates for $\Delta(x)$ are also valid.
\vspace{0.3cm}
{\bf{Lemma 9.}} \emph{Let $n\geq 22$,}
\begin{equation}
H_n(x)=\sum_{k=0}^n (-1)^k \cos(kx)
\quad{and}
\quad{D_n(x)=b_{22} H
_{22}(x)+\sum_{k=23}^n (-1)^k b_k\cos(kx).
}
\end{equation}
\emph{If \, $5\pi/8\leq x\leq 2.68$, \, then}
\, $D_n(x)\geq - 0.29$;
\emph{if \, $2.68\leq x\leq 2.83$, \, then}
\, $D_n(x)\geq - 0.46$;
\emph{if \, $2.83\leq x\leq 2.908$, \, then}
\, $D_n(x)\geq - 0.64$;
\emph{if \, $2.908\leq x\leq 2.970$, \, then}
\, $D_n(x)\geq - 0.90$;
\emph{if \, $2.970\leq x\leq 3.021$, \, then}
\, $D_n(x)\geq - 1.32$;
\emph{if \, $3.021\leq x\leq 3.051$, \, then}
\, $D_n(x)\geq - 1.78$.
\vspace{0.3cm}
\emph{Proof.}
We have
$$
H_n(x)
=\frac{1}{2}+(-1)^n\frac{\cos((n+1/2)x)}{2\cos(x/2)}.
$$
Let $5\pi/8\leq x\leq 2.68$. Then we obtain
\begin{equation}
H_n(x)\geq \frac{1}{2}-\frac{1}{2\cos(x/2)}
\geq \frac{1}{2}-\frac{1}{2\cos(1.34)}
=-1.68...>-1.72...=
-\frac{0.29 \cdot 4^{11}}{{22\choose 11}}
=-\frac{0.29}{b_{22}}.
\end{equation}
Using (2.10) and
$$
b_{22}\geq b_{23} \geq \cdots \geq b_n
$$
we conclude from Lemma 6 that
$D_n(x)\geq -0.29$.
Applying the same method we obtain the other estimates.
\vspace{0.3cm}
As usual, we set
$$
(a)_0=1, \quad{(a)_n=\prod_{k=0}^{n-1} (a+k)}=\frac{\Gamma(a+n)}{\Gamma(a)}
\quad{(n\geq 1)}.
$$
The following result is due to Koumandos
\cite{K}; see also Koumandos \cite{K1} for background information.
\vspace{0.3cm}
{\bf{Lemma 10.}} \emph{Let $0<\gamma<0.6915562$ be given and}
\begin{equation}
d_{2k} = d_{2k+1} = \frac{(\gamma )_k}{k! }
\qquad{(k=0,1,2,...)} .
\end{equation}
\emph{Then, for $n\geq 1$ and $x\in (0,\pi)$,}
$$
\sum_{k=0}^n d_k \cos(kx)>0.
$$
\vspace{0.3cm}
In what follows, we denote by $d_k$ $(k=0,1,2,...)$ the numbers defined in (2.11)
with $\gamma=0.69$.
\vspace{0.3cm}
{\bf{Lemma 11.}} \emph{Let}
\begin{equation}
I(x)=
\sum_{k=0}^{21}\Bigl(b_k-\frac{b_{22}}{d_{22}} d_k\Bigr)\cos(kx).
\end{equation}
\emph{If $0<x\leq 0.1$, then} $I(x)>1.5$.
\vspace{0.3cm}
\emph{Proof.}
Setting $Y=\cos(x)$ gives that $I-1.5$ is an algebraic polynomial in $Y$. We have $\cos(0.1)=0.995...$. Sturm's theorem reveals that this polynomial is positive
on $[0.995,1]$. It follows that
$I(x)>1.5$ for $x\in[0,0.1]$.
\vspace{0.3cm}
{\bf{Lemma 12.}} \emph{Let $n\geq 22$ and}
\begin{equation}
J_n(x)=\frac{b_{22}}{d_{22}}\sum_{k=0}^{21}
d_k\cos(kx)
+\sum_{k=22}^n
b_k\cos(kx).
\end{equation}
\emph{If $0<x<\pi$, then} $J_n(x)\geq 0$.
\vspace{0.3cm}
\emph{Proof.}
Let $x\in (0,\pi)$.
We set
$$
K_j(x)=\frac{b_{22}}{d_{22}}\sum_{k=0}^j
d_k\cos(kx)\quad{(j=0,1,2,...)}
$$
Then, $K_0(x) \equiv b_{22}/d_{22}$ and from Lemma 10 we obtain
$$
K_j(x)> 0 \quad\mbox{for} \quad{j\geq 1}.
$$
Let
$$
a_k=\frac{b_{22}}{d_{22}} d_k \cos(kx)
\quad{(k=0,1,...,n)},
\quad{\beta_0 = \cdots =\beta_{21}=1},
\quad{\beta_k=\frac{d_{22} b_k}{b_{22} d_k}} \quad{(k=22,...,n)}.
$$
Since
$$
\beta_0\geq \beta_1 \geq \cdots \geq \beta_n>0,
$$
we conclude from Lemma 6 that
$$
J_n(x)=\sum_{k=0}^n a_k \beta_k \geq 0.
$$
\vspace{0.3cm}
\section{Main result}
We are now in a position to present
positive lower bounds for the
cosine polynomial $T_n$.
\vspace{0.3cm}
\noindent
{\bf{Theorem.}} \emph{The inequalities
\begin{equation}
T_n(x)\geq c_0 + c_1 x + c_2 x^2 >0
\quad{(c_k\in\mathbf{R}, k=0,1,2)}
\end{equation}
hold for all natural numbers $n$ and real numbers $x\in (0,\pi)$ if and only if}
\begin{equation}
c_0=\pi^2 c_2,
\quad{c_1=-2\pi c_2,}
\quad{0<c_2\leq \alpha},
\end{equation}
\emph{where}
\begin{equation}
\alpha=\min_{0\leq t<\pi}
\frac{T_6(t)}{(t-\pi)^2}=0.12290... .
\end{equation}
\vspace{0.3cm}
\emph{Proof.}
We set
$$
Q(x)=c_0+c_1 x+c_2 x^2.
$$
If (3.1) is valid for all $n\geq 1$ and $x\in (0,\pi)$, then we get
$$
T_1(x)=1+\cos(x)\geq Q(x)>0.
$$
We let $x$ tend to $\pi$ and obtain $c_0+c_1\pi +c_2 \pi^2=0$. Thus,
$$
\frac{1+\cos(x)}{x-\pi}\leq\frac{Q(x)}{x-\pi} =c_1+c_2(x+\pi)<0.
$$
Again, we let $x$ tend to $\pi$. This gives
\begin{equation}
c_1=-2\pi c_2 \quad\mbox{and}
\quad{ c_0=-\pi c_1- \pi^2 c_2=\pi^2 c_2.}
\end{equation}
It follows that
\begin{equation}
Q(x)=c_2(x-\pi)^2 \quad\mbox{with}
\quad{c_2>0}.
\end{equation}
Moreover, from (3.1) (with $n=6$) we obtain
$$
\frac{T_6(x)}{(x-\pi)^2}\geq\frac{Q(x)}{(x-\pi)^2 }=c_2.
$$
Using (3.3) leads to
\begin{equation}
\alpha \geq c_2 .
\end{equation}
From (3.4) - (3.6) we conclude that (3.2) holds.
Next, we show that (3.2) and (3.3) lead to
(3.1). If (3.2) and (3.3) are valid, then
$$
0<c_0+c_1 x+c_2 x^2 =c_2(x-\pi)^2
\leq \alpha (x-\pi)^2.
$$
Hence, we have to prove that
\begin{equation}
\alpha(x-\pi)^2\leq T_n(x)
\quad{(n\geq 1; 0<x<\pi)}.
\end{equation}
Applying Lemma 2 we obtain that the function
$$
F(x)=\frac{T_1(x)}{(x-\pi)^2}=\frac{1+\cos(x)}{(x-\pi)^2}
$$
is strictly increasing on $(0,\pi)$. Thus,
$$
F(x)> F(0)=\frac{2}{\pi^2}=0.202... .
$$
This settles (3.7) for $n=1$. From (3.3) we
conclude that (3.7) is also valid for $n=6$.
In what follows we prove
\begin{equation}
\frac{123}{1000}(\pi-x)^2\leq T_n(x)
\end{equation}
for $n\geq 2$ $(n\neq 6)$ and $x\in (0,\pi)$. With regard to Lemma 3 and Lemma 4
we may assume that $x\in (0,3\pi/8)$.
We replace in (3.8) $x$ by $\pi-x$. It follows that it is enough to prove
\begin{equation}
\frac{123}{1000}x^2
\leq \sum_{k=0}^n (-1)^k b_k \cos(kx)=C_n(x)
\end{equation}
for $n\geq 2$ $(n\neq 6)$ and $x\in (5\pi/8, \pi)$.
Using Lemma 5 yields that
\begin{equation}
\frac{820}{33}\Bigl(1-\cos\frac{x}{10}\Bigr)\leq C_n(x)
\end{equation}
implies (3.9).
An application of Lemma 7 reveals that (3.10) is valid if $2\leq n\leq 21$ $(n\neq 6)$.
Now, let $n\geq 22$. We have the representation
\begin{equation}
C_n(x)-\frac{820}{33}\Bigl(1-\cos\frac{x}{10}\Bigr)=\Delta(x)+D_n(x),
\end{equation}
with $\Delta$ and $D_n$ as defined in
(2.8) and (2.9), respectively.
Applying Lemma 8 and Lemma 9
reveals that
\begin{equation}
\Delta(x)+D_n(x)\geq 0 \quad\mbox{for}
\quad{x \in [5\pi/8, 3.051].}
\end{equation}
From (3.11) and (3.12) we conclude
that
(3.10) is valid for $x\in [5\pi/8,3.051]$.
This implies that (3.8) holds for $x\in [\pi-3.051,3\pi/8]$. Hence, it remains to
prove (3.8) for $x\in (0,\pi-3.051)$.
Since
$$
\frac{123}{1000}(\pi-x)^2<1.22
\quad\mbox{for} \quad{x\in (0,\pi-3.051)}
$$
and $\pi-3.051=0.090...$, it suffices to show that
\begin{equation}
1.22\leq T_n(x) \quad\mbox{for}
\quad{x \in (0, 0.1]}.
\end{equation}
We have
\begin{equation}
T_n(x)=I(x)+J_n(x),
\end{equation}
where $I$ and $J_n$ are defined in (2.12) and (2.13), respectively. Applying Lemma 11 and Lemma 12
we conclude from (3.14) that (3.13) holds. This completes the proof of the Theorem.
\vspace{0.3cm}
\section{Concluding remarks}
(I) If we set $a_0=1$ and
$a_k=1/k$ $(k\geq 1)$
in (1.1), then we find
\begin{equation}
\sum_{k=1}^n \frac{\sin(kx)}{k}>0
\quad\mbox{and}
\quad{1+\sum_{k=1}^n \frac{\cos(kx)}{k}>0}
\quad{(n\geq 1; 0<x<\pi)}.
\end{equation}
The first inequality is the famous
Fej\'er-Jackson inequality, which was conjectured by Fej\'er in 1910 and proved
one year later by Jackson \cite{J}. Its analogue for the cosine sum was published by Young \cite{Y} in 1913. Both inequalities motivated the research of many
authors, who presented numerous refinements, extensions, and variants
of (4.1). We refer to Askey \cite{A1}, Askey and Gasper \cite{AG},
Milovanovi\'c, Mitrinovi\'c, and Rassias \cite[chapter 4]{MMR} and the references cited therein.
Applying our Theorem
we obtain an improvement of Young's inequality:
$$
1+\sum_{k=1}^n \frac{\cos(kx)}{k}
\geq \alpha (\pi-x)^2 \quad{(n\geq 1; 0<x<\pi)},
$$
where $\alpha$ is given in (3.3).
\vspace{0.2cm}
(II) Askey and Steinig \cite{AS}
used Proposition 1 to prove the following
interesting result.
\vspace{0.3cm}
{\bf{Proposition 3.}} \emph{Let $\gamma_k$
$(k=0,1,...,n)$ be positive real numbers such that}
\begin{equation}
2k \gamma_k \leq (2k-1) \gamma_{k-1}
\quad{(k\geq 1)}.
\end{equation}
\emph{Then, for $n\geq 0$ and $t\in (0,2\pi)$,}
$$
\sum_{k=0}^n \gamma_k \sin \bigl( (k+1/4)t\bigr)>0
\quad{and}
\quad{\sum_{k=0}^n \gamma_k \cos\bigl( (k+1/4)t\bigr)>0.}
$$
An application of the Theorem leads to a refinement of the second inequality:
\begin{equation}
\sum_{k=0}^n \gamma_k \cos \bigl( (k+1/4)t\bigr)\geq \frac{\alpha \gamma_0(2\pi-t)^2}{8 \cos(t/4)}
\quad{(n\geq 0; 0<t<2\pi)}.
\end{equation}
In order to prove (4.3) we set
$$
\gamma_k^*=\frac{1}{4^k}{2k\choose k}
=\frac{1}{k! }\Bigl(\frac{1}{2}\Bigr)_k
\quad{(k\geq 0)}.
$$
Then,
\begin{equation}
2\cos(t/4)\sum_{k=0}^j \gamma_k^* \cos \bigl( (k+1/4)t\bigr)=T_{2j+1}(t/2)
\geq \alpha (\pi-t/2)^2
\quad\mbox{for}
\quad{j=0,1,...,n}.
\end{equation}
We define
$$
\beta_k=\frac{\gamma_k}{\gamma_k^*}
\quad{(k\geq 0)}
$$
and apply (4.2). This yields
$$
\beta_0\geq \beta_1 \geq \cdots \geq \beta_n>0.
$$
Using (4.4) and Lemma 6 leads to
$$
\sum_{k=0}^n \gamma_k \cos \bigl( (k+1/4)t\bigr)=
\sum_{k=0}^n \beta_k\gamma_k^* \cos\bigl( (k+1/4)t\bigr)
\geq \beta_0 \frac{\alpha(\pi-t/2)^2}{2\cos(t/4)}.
$$
Since $\beta_0=\gamma_0$, we get (4.3).
\vspace{0.2cm}
(III) The classical Jacobi polynomials
$P_m^{(a,b)}(z)$ are given by
$$
P_m^{(a,b)}(z)=
\frac{(a+1)_m}{m! }\sum_{k=0}^m
\frac{(-m)_k (m+a+b+1)_k}{k! (a+1)_k}\Bigl(\frac{1-z}{2}\Bigr)^k.
$$
A collection of the main properties of these functions
can be found, for instance, in
\cite[chapter 1.2.7]{MMR}.
An application of (4.4) (with $t=4x$, $j=n$) and the
identity
$$
\frac{P_m^{(-1/2,-1/2)}(\cos(x))}{P_m^{(-1/2,-1/2)}(1)}=\cos(m x)
$$
(with $m=4k+1$) yields
$$
\sum_{k=0}^n \frac{1}{k! } \Bigl(\frac{1}{2}\Bigr)_k
\frac{P_{4k+1}^{(-1/2,-1/2)}(\cos(x))}{P_{4k+1}^{(-1/2,-1/2)}(1)}\geq
\frac{\alpha (\pi-2 x)^2}{2\cos(x)}
\quad{(n\geq 0; 0< x<\pi/2)}.
$$
For related inequalities we refer to
Askey \cite{As}, Askey and Gasper
\cite{AG1}, \cite{AG} and the references therein.
\vspace{0.7cm}
{\bf{Acknowledgement.}} We thank the referees for helpful comments.
\vspace{0.9cm}
| {
"timestamp": "2015-07-06T02:04:02",
"yymm": "1507",
"arxiv_id": "1507.00810",
"language": "en",
"url": "https://arxiv.org/abs/1507.00810",
"abstract": "The classical Vietoris cosine inequality is refined by establishing a positive polynomial lower bound.",
"subjects": "Classical Analysis and ODEs (math.CA)",
    "title": "A Refinement of Vietoris Inequality for Cosine Polynomials"
} |
https://arxiv.org/abs/math/0701149 | Sums of Consecutive Integers | A decomposition of a natural number n is a sequence of consecutive natural numbers that sums to n. We construct a one-to-one correspondence between the odd factors of a natural number and its decompositions. We study the decompositions by their lengths and introduce the concept of length spectrum. Also, we use diagrams to illustrate the ideas behind our proofs. | \section*{The Proof}
We start by defining a {\em decomposition of a natural number $ n$} to
be a sequence of consecutive natural numbers whose sum is $n$. The
number of terms is called the {\em length of the decomposition}, and a
decomposition of length $1$ is called {\em trivial}. Further, a
decomposition is called {\em odd} ({\em even}) if its length is odd
(even). For example, consider the number $15$. It has four
decompositions, $(15), (7, 8), (4, 5, 6)$ and $(1, 2, 3, 4, 5)$, one
even and three odd.
We will construct a one-to-one correspondence between the odd factors
of a number and its decompositions. This proves the conjecture since
the powers of $2$ are precisely those numbers with only one odd factor
(namely $1$) and thus they have only the trivial decomposition.
Here is our construction: Let $k$ be an odd factor of $n$. Then since
the sum of the $k$ integers from $-(k-1)/2$ to $(k-1)/2$ is $0$,
adding $n/k$ to each of them gives the sequence
\begin{equation} \label{eq:odd}
\frac{n}{k}-\frac{k-1}{2},\quad \frac{n}{k}-\frac{k-1}{2}+1, \quad
\cdots, \quad \frac{n}{k}+\frac{k-1}{2},
\end{equation}
whose sum is
\[ \sum_{j=-(k-1)/2}^{(k-1)/2} \Bigl(\frac{n}{k}+j\Bigr)
= \frac{n}{k}\cdot k + \sum_{j=-(k-1)/2}^{(k-1)/2} j = n+0 = n.\]
There are now two cases to consider:
\begin{itemize}
\item[(i)] $(k-1)/2 < n/k$. Then~\eqref{eq:odd} is already an odd
decomposition of $n$. The length of this decomposition is $k$.
\item[(ii)] $(k-1)/2 \ge n/k$. Then~\eqref{eq:odd} begins with 0 or a
negative number. After dropping the 0 and canceling the negative
terms with the corresponding positive ones, we are left with the
sequence
\begin{equation} \label{eq:even}
\frac{k-1}{2}- \frac{n}{k} +1, \quad \frac{k-1}{2}-\frac{n}{k} + 2,
\quad \cdots, \quad \frac{n}{k} +
\frac{k-1}{2},
\end{equation}
which is an even decomposition of $n$ of length $2n/k$.
\end{itemize}
We call the decomposition in either~\eqref{eq:odd} or~\eqref{eq:even}
the {\em decomposition of $ n$ associated with $ k$}.
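The case analysis above translates directly into code; a minimal sketch that returns, for each odd factor $k$ of $n$, the associated decomposition:

```python
def decompositions(n):
    # One decomposition of n per odd factor k, following cases (i) and (ii).
    result = {}
    for k in range(1, n + 1, 2):
        if n % k:
            continue
        half, q = (k - 1) // 2, n // k
        if half < q:                        # case (i): odd decomposition, length k
            run = list(range(q - half, q + half + 1))
        else:                               # case (ii): even decomposition, length 2n/k
            run = list(range(half - q + 1, q + half + 1))
        assert sum(run) == n
        result[k] = run
    return result

print(decompositions(15))
# {1: [15], 3: [4, 5, 6], 5: [1, 2, 3, 4, 5], 15: [7, 8]}
```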
To show that every decomposition of $n$ arises this way, suppose
$(a+i)_{i=1}^m$ is a decomposition of $n$. Since its sum $m(2a+m+1)/2$
is $n$, either $m$ or $2a + m+1$ (but not both), depending on the
parity of $m$, is an odd factor of $n$. It is then straightforward to
verify that the given decomposition is the one associated with that
odd factor.
We have in fact delivered more than we promised. Note that the
condition $(k-1)/2 < n/k$ is equivalent to $k-1 < 2n/k$; since $k$ is
odd while $2n/k$ is an even integer, this is the same as $k < 2n/k$, or
equivalently $k < \sqrt{2n}$. Likewise, $(k-1)/2 \ge n/k$ is
equivalent to $k > \sqrt{2n}$. Therefore, we have proved the
following:
\begin{thm}
There is a one-to-one correspondence between the odd factors of a
natural number $n$ and its decompositions. More precisely, for each
odd factor $k$ of $n$, if $k < \sqrt{2n}$, then~\eqref{eq:odd} is an
odd decomposition of $n$ of length $k$. If $k > \sqrt{2n}$,
then~\eqref{eq:even} is an even decomposition of $n$ of length
$2n/k$. Moreover, these are all the decompositions of $n$.
\end{thm}
An immediate consequence of the theorem is that {\em the number of
decompositions of $n$ is the number of odd factors of $n$}, which is
$\prod_p (e_p+1)$, where $p$ runs through the odd primes and $e_p$ is
the power of $p$ appearing in the prime factorization of $n$. Let us
illustrate both this formula and the theorem with the example $n=45$.
Since $45=3^2 \cdot 5$, there should be six decompositions, four odd
and two even, and indeed here they are:
\begin{table}[h]
\caption{\small The decompositions of 45} \label{t:decomp45}
\begin{center}
\begin{tabular}{c c c c c c}
\hline
$k$ & $(k-1)/2$ & $n/k$ & decomposition & length & parity\\
\hline
1 & 0 & 45 & (45) & 1 &odd\\
3 & 1 & 15 & (14, 15, 16) & 3 &odd\\
5 & 2 & 9 & (7, 8, 9, 10, 11) & 5 &odd\\
9 & 4 & 5 & (1, 2, 3, 4, 5, 6, 7, 8, 9) & 9 &odd\\
15& 7 & 3 & (5, 6, 7, 8, 9, 10) &6 &even\\
45& 22& 1 & (22, 23) & 2 &even\\
\hline
\end{tabular}
\end{center}
\end{table}
\section*{The Length Spectra}
We define the {\em length spectrum of $ n$}, denoted by
$\lspec(n)$, to be the set of lengths of the decompositions of $n$.
According to the theorem, $\lspec(n)$ is the set
\begin{equation*}
\left\{ k \colon k \ \text{odd},\ k\, |\, n,\ k < \sqrt{2n} \right\}
\cup \left\{ 2n/k \colon k\ \text{odd},\ k\, |\, n,\
k > \sqrt{2n} \right\}.
\end{equation*}
For example, we have seen that $\lspec(45) = \{1,3,5,9\} \cup \{2,6\}$
(Table~\ref{t:decomp45}). In the following, we record some simple
facts about length spectra. Most of them are direct consequences of
the theorem. An in-depth treatment of this notion is given
in~\cite{lspec}.
Let $(k_i)_{i=1}^s$ be the list of odd factors of $n$ in ascending
order. Thus, $k_1=1$, $k_2$ (if it exists) is the smallest odd prime
factor of $n$, and $n = 2^dk_s$ for some $d \ge 0$. Let $r$ be the
largest index ($1 \le r \le s$) such that $k_r < \sqrt{2n}$.
\begin{enumerate}
\item The length of an even decomposition of $n$ is of the form
  $2n/k_j = 2^{d+1}(k_s/k_j)$ with $k_s/k_j$ odd; hence every even
  element of $\lspec(n)$ is divisible by $2^{d+1}$ and by no higher
  power of $2$. This observation rules out, for example, the
  possibility of the set $\{1,2,3,4\}$ being a length spectrum.
\item Since odd and even lengths are themselves odd and even numbers,
  respectively, the two sets in the union above are disjoint, so the
  size of $\lspec(n)$ equals the number of odd factors of $n$. Hence
  the smallest number with a length spectrum of size $s$ is the
  smallest odd number with exactly $s$ factors: $3^{s-1}$ when $s$ is
  prime, but, for instance, $15$ rather than $27$ when $s=4$. The
  smallest number with $m$ in its length spectrum is clearly
\[ 1+2+\cdots + m = m(m+1)/2.\] In other words, $m \in \lspec(n)$
  implies $m(m+1)/2 \le n$. Hence $m \le (-1+\sqrt{1+8n})/2$. This
  gives an upper bound on the elements of $\lspec(n)$ in terms of $n$.
\item The longest decomposition of $n$ has length $\max\{k_r,
2n/k_{r+1}\}$ (the maximum is $k_r$ if $n$ has no even
decompositions). The shortest non-trivial decomposition of $n$ (if
any) has length $\min\{k_2, 2n/k_s\} = \min\{k_2, 2^{d+1}\}$.
\item The condition $k_s < \sqrt{2n}$, or equivalently $k_s <
  2^{d+1}$, is clearly both necessary and sufficient for $n$ to have
  only odd decompositions. The situation, however, is quite different
  for even decompositions: if $k_j > \sqrt{2n}$, then $k_s/k_j$ is an
  odd factor of $n$ with $k_s/k_j < 2k_s/k_j \le 2n/k_j < \sqrt{2n}$,
  so $k_j \mapsto k_s/k_j$ injects the odd factors above $\sqrt{2n}$
  into those below it, and the number of even decompositions of $n$
  is at most the number of odd decompositions of $n$. Consequently, if
  $n$ has a non-trivial decomposition and every non-trivial
  decomposition of $n$ is even, then $n$ has exactly one non-trivial
  decomposition. The numbers with this property are
  precisely the numbers of the form $2^dk$ with $k$ an odd prime $>
  2^{d+1}$.
\end{enumerate}
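These facts invite numerical experiments. A short sketch (the helper name below is ours) computes $\lspec(n)$ straight from the displayed formula:

```python
def lspec(n):
    """Length spectrum of n: odd factors k with k*k < 2n contribute
    the odd length k; those with k*k > 2n contribute the even
    length 2n/k."""
    odd = [k for k in range(1, n + 1, 2) if n % k == 0]
    return ({k for k in odd if k * k < 2 * n}
            | {2 * n // k for k in odd if k * k > 2 * n})

# lspec(45) == {1, 2, 3, 5, 6, 9}, matching Table 1.
```

The same function confirms fact 2 above ($m(m+1)/2$ is the smallest $n$ with $m$ in its spectrum) and that the size of $\lspec(n)$ equals the number of odd factors of $n$.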
\section*{Epilogue}
Finding the exact number of decompositions of $n$ can be hard. In
general, the formula $\prod_{p}(e_p+1)$ is impractical for large $n$
since it essentially calls for the prime factorization of $n$. What
LeVeque obtained in~\cite{lev}, among other things, was the average
order of the number of decompositions as a function of $n$. Moreover,
he discussed not only the sums of consecutive integers but the sums of
arithmetic progressions in general. Readers with a taste for analytic
number theory will find his article enjoyable.
Guy gave a very short proof of the theorem in~\cite{guy}, then deduced
from it a characterization of primes. He gave some rough estimates of
the number of decompositions and also remarked that finding this
number explicitly is not easy.
Weil's book~\cite{weil} is merely a collection of exercises for the
elementary number theory course given by him at the University of Chicago
in the summer of 1949. Maxwell Rosenlicht, an assistant of Weil's at
that time, was in charge of the ``laboratory'' section and responsible
for most of the exercises. It was a relief to learn from Weil that the
challenge of motivating students to work on problems is rather common.
The following is part of the foreword taken directly from that book:
\begin{quote}
The course consisted of two lectures a week, supplemented by a
weekly ``laboratory period'' where students were given exercises...
. The idea was borrowed from the ``Praktikum'' of German
universities. Being alien to the local tradition, it did not work
out as well as I had hoped, and student attendance at the problem
sessions soon became desultory.
\end{quote}
An obvious but crucial point in our proof of the conjecture is that
$0$ can be expressed as the sum of an odd number of consecutive
integers. Let me explain the intuition behind this trick here. The
sum of the first $n$ consecutive natural numbers is called the {\em
$n$-th triangular number} because $1, 2, \cdots, n$ can be
arranged to form a triangle (see Figure~\ref{f:tri}).
\begin{figure}[htp]
\begin{equation*}
\yng(1,2,3,4)
\end{equation*}
\caption{\small The $4$-th triangular number is $10$.}
\label{f:tri}
\end{figure}
Now a sum of consecutive numbers can be viewed as the difference of
two triangular numbers (or a ``trapezoidal number''). Also, a number
$n$, as long as it is not a power of 2, can be represented by an
$(n/k) \times k$ rectangle with $k>1$ an odd factor of $n$. So the
question is: how can you get a trapezoid from such a rectangle? Well,
it does not take a big leap of imagination to see that this can be
done by cutting off a corner of the rectangle and flipping it over.
For example, the diagrams in Figure~\ref{f:transf} illustrate how we
obtain the decomposition $(2,3,4,5,6)$ of $20$.
\begin{figure}[htp]
\begin{equation*}
\raise-.52in
\hbox{\young(\hfil\hfil\hfil\bullet\bullet,\hfil\hfil\hfil\hfil\bullet,\hfil\hfil\hfil\hfil\hfil,\hfil\hfil\hfil\hfil\hfil)}
\qquad \leadsto \qquad
\young(\bullet,\bullet\bullet,\hfil\hfil\hfil,\hfil\hfil\hfil\hfil,\hfil\hfil\hfil\hfil\hfil,\hfil\hfil\hfil\hfil\hfil)
\end{equation*}
\caption{\small Transforming a $4 \times 5$ rectangle into a trapezoid}
\label{f:transf}
\end{figure}
Of course, one has to worry about the case when $n/k < (k-1)/2$, but
this is exactly why the use of negative numbers comes in handy. The
diagrams in Figure~\ref{f:decomp} should be self-explanatory.
\begin{figure}[htp]
\begin{align*}
\yng(7,7) \quad &= \quad \phantom{+} \quad
\young(\hfil,\hfil\hfil,\hfil\hfil\hfil,::::---,:::::--,::::::-)
\quad \\ \\
&\phantom{=} \quad + \quad \yng(7,7)\\
&\phantom{=} \quad \underline{\phantom{111111111111111111111}} \\
&= \quad \phantom{+} \quad
\young(\hfil,\hfil\hfil,\hfil\hfil\hfil,\hfil\hfil\hfil\hfil,\hfil\hfil\hfil\hfil\hspace{-0.7em}\not,::::::\not\hspace{-0.2em}-)
\quad = \quad \raise-23.5pt \hbox{
\young(\hfil,\hfil\hfil,\hfil\hfil\hfil,\hfil\hfil\hfil\hfil,\hfil\hfil\hfil\hfil)}
\end{align*}
\caption{\small The decomposition (2,3,4,5) of 14.}
\label{f:decomp}
\end{figure}
{\em Acknowledgment.} I would like to thank Greg Kallo, the editor,
and the referees for many helpful suggestions on the presentation of
this article.
% arXiv:math/0701149 --- Sums of Consecutive Integers (math.HO; math.NT)
% https://arxiv.org/abs/1101.5652 --- Completeness of Ordered Fields
\begin{abstract}
The main goal of this project is to prove the equivalence of several characterizations of completeness of Archimedean ordered fields, some of which appear in most modern literature as theorems following from the Dedekind completeness of the real numbers, while a couple are not as well known and have to do with other areas of mathematics, such as nonstandard analysis. Continuing, we study the completeness of non-Archimedean fields and provide several examples of such fields with varying degrees of properties, using nonstandard analysis to produce some relatively ``nice'' (in particular, Cantor complete) final examples. As a small detour, we present a short construction of the real numbers using methods from nonstandard analysis.
\end{abstract}
\section*{Introduction}
In most textbooks, the set of real numbers $\mathbb{R}$ is commonly taken to be a totally ordered Dedekind complete field. Following from this definition, one can then establish the basic properties of $\mathbb{R}$ such as the Bolzano-Weierstrass property, the Monotone Convergence property, the Cantor completeness of $\mathbb{R}$ (Definition~\ref{D: completeness}), and the sequential (Cauchy) completeness of $\mathbb{R}$. The main goal of this project is to establish the equivalence of the preceding properties, in the setting of a totally ordered Archimedean field (Theorem~\ref{T: BIG ONE}), along with a less well-known algebraic form of completeness (Hilbert completeness, Definition~\ref{D: completeness}) and a property from non-standard analysis which roughly states that every finite element from the non-standard extension of the field is ``infinitely close'' to some element from the field (Leibniz completeness, Definition~\ref{D: completeness}). (The phrase infinitely close may be off-putting to some as it is sometimes associated with mathematics lacking in rigour, but in \S\ref{S: inf} we properly define this terminology and in \S\ref{S: examples 1} and \S\ref{S: examples 2} we provide several examples of fields with non-trivial infinitesimal elements.)
As is usual in mathematics, we continued our research past that of Archimedean fields to determine how removing the assumption that the field is Archimedean affects the equivalence of the properties listed in Theorem~\ref{T: BIG ONE}. What we found is that all of those properties fail in non-Archimedean fields except three: Hilbert completeness, Cantor completeness and sequential completeness. Furthermore, we found that the last two of these three properties are no longer equivalent; rather, the latter is a necessary, but not sufficient, condition for the former (see Theorem~\ref{T: non-arch completeness} and Example~\ref{E: robinson asymptotic}).
One application of Theorem~\ref{T: BIG ONE} is given in \S\ref{S: axioms}, where we establish eight different equivalent collections of axioms, each of which can be used as an axiomatic definition of the set of real numbers. Another application
is an alternative perspective on classical mathematics that results from the equivalence of Dedekind completeness (Definition~\ref{D: completeness}) and the non-standard property outlined in the first paragraph above (Leibniz completeness, also defined in Definition~\ref{D: completeness}).
\onehalfspacing
\pagenumbering{arabic}
\pagestyle{fancy}
\renewcommand{\sectionmark}[1]{\markright{\thesection.\ #1}}
\fancyhf{}
\fancyhead[L]{\rightmark}
\fancyhead[R]{\bfseries \thepage}
\renewcommand{\headrulewidth}{0.0pt}
\setcounter{page}{1}
\section{Orderable Fields}\label{S: orderable fields}
In this section we recall the main definitions and properties of totally ordered fields. For more details, we refer to Lang~\cite{lang} and van der Waerden~\cite{waerden}.
\begin{definition}[Orderable Field]\label{D: Orderable fields}
A field $\mathbb K$ is \emph{orderable} if there exists a non-empty $\mathbb{K}_+ \subset\mathbb{K}$ such that
\begin{def-enumerate}
\item $0\not\in\mathbb{K}_+$
\item $(\forall x,y\in\mathbb{K}_+)(x + y\in\mathbb{K}_+ \emph{ and } xy\in\mathbb{K}_+)$
\item $(\forall x\in\mathbb{K}\setminus\{0\})(x\in\mathbb{K}_+\emph{ or } -x\in\mathbb{K}_+)$
\end{def-enumerate}
\end{definition}
Provided that $\mathbb K$ is orderable, we can fix a set $\mathbb{K}_+$ that satisfies the properties given above and generate a strict order relation on $\mathbb K$ by $x<_{\mathbb{K}_+}y$ \emph{if and only if} $y-x\in\mathbb{K}_+$. Further, we can define a total ordering (i.e.\ reflexive, anti-symmetric, transitive, and total) on $\mathbb K$ by $x\le_{\mathbb K_+}y$ \emph{if and only if} $x<_{\mathbb K_+}y$ \emph{or} $x=y$.
\begin{definition}[Totally Ordered Field]
Let $\mathbb K$ be an orderable field and $\mathbb K_+\subset\mathbb K$. Then using the relation $\le_{\mathbb K_+}$ generated by $\mathbb K_+$, as noted above, we define $(\mathbb K,\le_{\mathbb K_+})$ to be a \emph{totally ordered field}. As well, with this ordering in mind, we define the \emph{absolute value} of $x\in\mathbb{K}$ as $|x|=:\max(-x,x)$.
\end{definition}
For simplicity, when $\mathbb{K_+}$ is clear from the context we shall use the more standard notation $x\le y$ and $x<y$ in place of $x\le_{\mathbb{K}_+}y$ and $x<_{\mathbb{K}_+}y$, respectively. As well, we shall refer to $\mathbb K$ as being a totally ordered field -- or the less precise, ordered field -- rather than the more cumbersome $(\mathbb K,\le_{\mathbb K_+})$.
\begin{lemma}[Triangle Inequality]\label{L: 1 tri ineq}
Let $\mathbb{K}$ be an ordered field and $a,b\in\mathbb{K}$, then $|a+b|\le|a|+|b|$.
\end{lemma}
\begin{proof}
As $-|a|-|b|\le a+ b\le |a|+|b|$, we have $|a+b|\le|a|+|b|$ because $a+b\le|a|+|b|$ and $-(a+b)\le|a|+|b|$.
\end{proof}
\begin{definition}
Let $\mathbb K$ be a totally ordered field and $A\subset \mathbb K$. We denote the set of upper bounds of $A$ by $\ensuremath{\mathcal{UB}}(A)$. More formally, $$\ensuremath{\mathcal{UB}}(A)=:\{x\in\mathbb K : (\forall a\in A)(a\le x)\}$$
\end{definition}
\begin{definition}[Ordered Field Homomorphism]
Let $\mathbb K$ and $\mathbb F$ be ordered fields and $\phi:\mathbb K\to\mathbb F$ be a field homomorphism. Then $\phi$ is called an \emph{ordered field homomorphism} if it preserves the order; that is, $x\le y$ implies $\phi(x) \le\phi(y)$ for all $x, y\in\mathbb K$.
Definitions of \emph{ordered field isomorphisms} and \emph{embeddings} follow similarly.
\end{definition}
\begin{remark}\label{R: q embedding}
Let $\mathbb K$ be an ordered field. Then we define the ordered field embedding $\sigma:\mathbb Q\to\mathbb K$ by $\sigma(0)=:0$, $\sigma(n)=:n\cdot1$, $\sigma(-n)=:-\sigma(n)$ and $\sigma(\frac{m}{k})=:\frac{\sigma(m)}{\sigma(k)}$ for $n\in\mathbb N$ and $m,k\in\mathbb Z$. We say that $\sigma$ is the \emph{canonical embedding} of $\mathbb Q$ into $\mathbb K$.
\end{remark}
\begin{definition}[Formally Real]
A field $\mathbb K$ is \emph{formally real} if, for every $n\in\mathbb N$, the equation $$\sum_{k=0}^n x_k^2 = 0$$ has only the trivial solution (that is, $x_k=0$ for each $k$) in $\mathbb K$.
\end{definition}
\begin{theorem}\label{T: orderable formally real}
A field $\mathbb K$ is orderable \emph{if and only if} $\mathbb K$ is formally real.
\end{theorem}
Further discussion, as well as the proof, of the preceding theorem can be found in van der Waerden~\cite{waerden} (chapter 11).
\begin{example}[Non-Orderable Field]
\mbox{}
\begin{enumerate}
\item The field of complex numbers $\mathbb C$ is not orderable. Indeed, suppose there exists a subset $\mathbb C_+\subset \mathbb C$ that satisfies the properties above. Thus $i\in\mathbb C_+$ or $-i\in\mathbb C_+$. However, either case implies that $-1=(\pm i)^2\in\mathbb C_+$ and $1=(-1)(-1)\in\mathbb C_+$. Thus $0=1-1\in\mathbb C_+$, a contradiction. Therefore, $\mathbb C$ is non-orderable.
\item The field of $p$-adic numbers $\mathbb Q_p$ is also non-orderable for similar reasons (see Ribenboim~\cite{riben}, pp.~144--145, and Todorov \& Vernaeve~\cite{todor}).
\end{enumerate}
\end{example}
\begin{definition}[Real Closed Field]\label{D: real closed}
Let $\mathbb K$ be a field. We say that $\mathbb K$ is a \emph{real closed field} if it satisfies the following.
\begin{def-enumerate}
\item $\mathbb K$ is formally real (or orderable).
\item $(\forall a\in\mathbb K)(\exists x\in\mathbb K)(a=x^2\emph{ or }a=-x^2)$.
\item $(\forall P\in\mathbb K[t])(\deg(P)\emph{ is odd}\Rightarrow(\exists x\in\mathbb K)(P(x)=0))$.
\end{def-enumerate}
\end{definition}
\begin{theorem}\label{T: 1 order real-closed}
Let $\mathbb K$ be a real closed totally ordered field and $x\in\mathbb K$. Then $x>0$ \emph{if and only if} $x=y^2$ for some non-zero $y\in\mathbb K$. Thus every real closed field is ordered in a unique way.
\end{theorem}
\begin{proof}
Suppose $x>0$. By part 2 of Definition~\ref{D: real closed}, there exists $y\in\mathbb K$ with $x=y^2$ or $x=-y^2$; since $-y^2\le0<x$, the latter is impossible, so $x=y^2$ with $y\neq0$.
Conversely, suppose $x=y^2$ for some non-zero $y\in\mathbb K$. By the definition of $\mathbb K_+$, we have $y^2\in\mathbb K_+$ for all $y\in\mathbb K\setminus\{0\}$. Thus $x>0$.
\end{proof}
\begin{remark}
If the field $\mathbb K$ is real closed, then we shall always assume that $\mathbb K$ is ordered by the unique ordering given above.
\end{remark}
\begin{lemma}\label{P: cont}
Let $\mathbb K$ be an ordered field and $a\in\mathbb K$ be fixed. The scaled identity function $a\cdot id(x)=:ax$ is uniformly continuous in the order topology on $\mathbb K$. Consequently, every polynomial in $\mathbb K$ is continuous.
\end{lemma}
\begin{proof}
If $a=0$, the function is constant and there is nothing to prove. Otherwise, given $\epsilon\in\mathbb K_+$, let $\delta = \frac{\epsilon}{|a|}$. Indeed, $(\forall x,y\in\mathbb K)(|x-y|<\delta\Rightarrow |ax-ay|=|a||x-y|<|a|\delta=\epsilon)$.
\end{proof}
\begin{lemma}[Intermediate Value Theorem]\label{L: ivt}
Let $\mathbb K$ be an ordered field. As well, let $f:H\to\mathbb K$ be a function that is continuous in the order topology on $\mathbb K$ and $[a,b]\subset H$. If $\mathbb K$ is Dedekind complete (in the sense of sup), then, for any $u\in\mathbb K$ such that $f(a)\le u\le f(b)$ or $f(b)\le u\le f(a)$, there exists a $c\in [a,b]$ such that $f(c)=u$.
\end{lemma}
\begin{proof}
We will only show the case when $f(a)\le f(b)$; the other case follows similarly.
Let $S=:\{x\in [a,b]:f(x)\le u\}$. We observe that $S$ is non-empty as $a\in S$ and that $S$ is bounded above by $b$; thus, $c=:\sup(S)$ exists by assumption. As well, we observe that $c\in[a,b]$ because $a\le c\le b$. To show that $f(c)=u$, suppose to the contrary that $f(c)\neq u$. Then, as $f$ is continuous, we can find $\delta\in\mathbb K_+$ such that $(\forall x\in\mathbb K)(|x-c|<\delta \Rightarrow |f(c)-f(x)|<|f(c)-u|)$.
If $f(c)>u$, then, from our observation, it follows that $f(x)>f(c)-(f(c)-u)=u$ for all $x\in(c-\delta, c+\delta)$. Thus $c-\delta\in\ensuremath{\mathcal{UB}}(S)$, which contradicts the minimality of $c$.
Similarly, if $u>f(c)$, then, from our observation, it follows that $f(x)<f(c)+(u-f(c))=u$ for all $x\in(c-\delta, c+\delta)$; hence every $x\in(c,c+\delta)\cap[a,b]$ belongs to $S$, which contradicts $c$ being an upper bound.
Therefore $f(c)=u$ as $\mathbb K$ is totally ordered.
\end{proof}
\begin{remark}
When dealing with polynomials, it follows from the Artin--Schreier Theorem that Dedekind completeness is not necessary to produce the results of the Intermediate Value Theorem. For a general reference, see Lang~\cite{lang}, Chapter XI.
\end{remark}
\begin{theorem}\label{T: order complete -> real closed}
Let $\mathbb K$ be a totally ordered field which is also Dedekind complete. Then $\mathbb K$ is real closed.
\end{theorem}
\begin{proof}
First observe that $\mathbb K$ is formally real because it is orderable.
Now let $a\in\mathbb K_+$ and $S=:\{x\in\mathbb K: x^2<a\}$. Observe that $0\in S$ and that $m=:\max\{1,a\}$ is an upper bound of $S$. Indeed, when $a\le 1$, we have $x^2<1$, which implies $x<1$ for all $x\in S$. On the other hand, when $1<a$, we have $x^2<a<a^2$; thus, $x<a$ for all $x\in S$. From this observation, it follows that $s=:\sup S$ exists. We intend to show that $s^2=a$.
\begin{description}
\item[Case $(s^2<a)$:] Let $h=:\frac{1}{2}\min\{\frac{a-s^2}{(s+1)^2}, 1\}$. From this definition, it follows that
\begin{equation}
2h\le \frac{a-s^2}{(s+1)^2}\emph{\;\;and\;\;}2h\le 1\label{Eq: h obs}
\end{equation}
We wish to show that $(s+h)^2<a$.
From $0<h\le\frac{1}{2}$ we have $h<s^2+1$, which implies that $h+2s<(s+1)^2$ and $h(h+2s) + s^2 < h(s+1)^2 + s^2$. Thus, we have $(s+h)^2<h(s+1)^2 + s^2$. By (\ref{Eq: h obs}), we know that $h(s+1)^2<2h(s+1)^2\le a-s^2$. Thus, we have $(s+h)^2<h(s+1)^2 + s^2 < a$. Therefore $(s+h)\in S$ which contradicts $s$ being an upper bound.
\item[Case $(s^2>a)$:] Let $h=:\frac{s^2-a}{2(s+1)^2}$. First we observe that, from the definition of $h$, $s^2-a=2h(s+1)^2>h(s+1)^2$. We intend to show that $(s-h)^2>a$. Indeed, we obviously have $s^2+1>-h$ which implies that $(s+1)^2>2s-h$ and $h(s+1)^2>h(2s-h)$. Thus $s^2-h(s+1)^2<s^2-h(2s-h)=(s-h)^2$ and by our observation, we find that $a<s^2-h(s+1)^2<(s-h)^2$. Therefore $(s-h)$ is an upper bound of $S$, which contradicts the minimality of $s$.
\end{description}
Finally, to show that every odd degree polynomial $P(x)\in\mathbb K[x]$ has a root, we observe that $\lim_{x\to-\infty}P(x)=-\lim_{x\to\infty}P(x)$, so $P$ changes sign. Combining this result with Lemma~\ref{P: cont} and Lemma~\ref{L: ivt}, we find that there exists $c\in\mathbb K$ such that $P(c)=0$.
\end{proof}
\section{Infinitesimals in Ordered Fields}\label{S: inf}
In this section we recall the definitions of infinitely small (infinitesimal), finite and infinitely large elements in totally ordered fields and study their basic properties. As well, we present a characterization of Archimedean fields in the language of infinitesimals and infinitely large elements.
\begin{definition}[Archimedean Property]\label{D: Archimedean Field}
A totally ordered field (ring) $\mathbb{K}$ is \emph{Archimedean} if for every $x\in\mathbb{K}$, there exists $n\in\mathbb{N}$ such that $|x|<n$. If $\mathbb K$ is Archimedean, we may also refer to $\mathbb K(i)$ as Archimedean. If $\mathbb K$ is not Archimedean, then we refer to $\mathbb K$ as \emph{non-Archimedean}.
\end{definition}
For the rest of the section we discuss the properties of Archimedean and non-Archimedean fields through the characteristics of infinitesimals.
\begin{definition}\label{D: infinitesimal}
Let $\mathbb{K}$ be a totally ordered field. We define
\begin{def-enumerate}
\item $\mathcal{I}(\mathbb{K})=:\{x\in\mathbb{K} : (\forall n\in\mathbb{N})(|x|<\frac{1}{n})\}$
\item $\mathcal{F}(\mathbb{K})=:\{x \in\mathbb{K} : (\exists n\in \mathbb{N})(|x|\le n)\}$
\item $\mathcal{L}(\mathbb{K})=:\{x \in\mathbb{K} : (\forall n\in \mathbb{N})(n<|x|)\}$
\end{def-enumerate}
The elements in $\mathcal I(\mathbb K), \mathcal F(\mathbb K), \textrm{ and } \mathcal L(\mathbb K)$ are referred to as \emph{infinitesimal (infinitely small), finite and infinitely large}, respectively. We sometimes write $x\approx0$ if $x\in\mathcal I(\mathbb K)$ and $x\approx y$ if $x-y\approx 0$, in which case we say that $x$ is \emph{infinitesimally close} to $y$.
\end{definition}
\begin{proposition} For a totally ordered field $\mathbb K$, we have the following properties for the sets given above.
\begin{thm-enumerate}
\item $\mathcal I(\mathbb K)\subset \mathcal F(\mathbb K)$.
\item $\mathbb K=\mathcal F(\mathbb K)\cup\mathcal L(\mathbb K)$.
\item $\mathcal F(\mathbb K)\cap\mathcal L(\mathbb K)=\emptyset$.
\item If $x\in \mathbb K\setminus \{0\}$ then $x\in\mathcal I(\mathbb K)$ iff $\frac{1}{x}\in\mathcal L(\mathbb K)$.
\end{thm-enumerate}
\end{proposition}
\begin{proof}
\begin{thm-enumerate}
\item Let $\alpha\in\mathcal I(\mathbb K)$. Then $|\alpha|<1$, hence $|\alpha|\le 1$ and therefore $\alpha\in\mathcal F(\mathbb K)$.
\item Suppose $x\in\mathbb K$, then either $(\exists n\in\mathbb N)(|x|<n)$ or $(\forall n\in\mathbb N)(n\le|x|)$. Thus $x\in\mathcal F(\mathbb K)$ or $x\in\mathcal L(\mathbb K)$. The other direction follows from the definition.
\item Suppose $x\in\mathcal F(\mathbb K)\cap\mathcal L(\mathbb K)$. Then there exists $n\in\mathbb N$ such that $|x|\le n$, but we also have $(\forall m\in\mathbb N)(m<|x|)$; thus $n<|x|\le n$, a contradiction.
\item Suppose $x\in\mathbb K\setminus\{0\}$. Then $x\in\mathcal L(\mathbb K)$ iff $(\forall n\in\mathbb N)(n<|x|)$ iff $(\forall n\in\mathbb N)(|\frac{1}{x}|<\frac{1}{n})$ iff $\frac{1}{x}\in\mathcal I(\mathbb K)$
\end{thm-enumerate}
\end{proof}
\begin{proposition}[Characterizations]\label{P: 1 arch}
Let $\mathbb{K}$ be a totally ordered field. Then the following are equivalent:
\begin{thm-enumerate}
\item $\mathbb{K}$ is Archimedean.
\item $\mathcal L(\mathbb K)=\emptyset$.
\item $\mathcal I(\mathbb K)=\{0\}$.
\item $\mathcal{F}(\mathbb{K})=\mathbb{K}$.
\end{thm-enumerate}
\end{proposition}
\begin{proof}
\begin{description}
\item[$(i)\Rightarrow(ii)$] Follows from the definition of Archimedean field.
\item[$(ii)\Rightarrow(iii)$] Suppose $d\alpha\in\mathcal{I}(\mathbb{K})$ with $d\alpha\neq0$. As $\mathbb{K}$ is a field, $d\alpha^{-1}$ exists. From $|d\alpha|<\frac{1}{n}$ for all $n\in\mathbb{N}$, we get $n<|d\alpha^{-1}|$ for all $n\in\mathbb{N}$, which means $d\alpha^{-1}\in \mathcal{L}(\mathbb{K})$, contradicting (ii). Hence $\mathcal{I}(\mathbb{K})=\{0\}$.
\item[$(iii)\Rightarrow(iv)$] Note that we clearly have $\mathcal{F}(\mathbb{K})\subseteq\mathbb{K}$. Suppose, to the contrary, there exists $\alpha\in\mathbb{K}\setminus\mathcal{F}(\mathbb{K})$. Then, by definition, $|\alpha|>n$ for all $n\in\mathbb{N}$; hence $\frac{1}{n}>\frac{1}{|\alpha|}$ for all $n\in\mathbb{N}$ because $\mathbb{K}$ is a field. Thus $\frac{1}{|\alpha|}\in\mathcal{I}(\mathbb{K})$ so that $\frac{1}{|\alpha|}=0$, a contradiction.
\item[$(iv)\Rightarrow(i)$] By the definition of $\mathcal{F}(\mathbb{K})$, for every $\alpha\in\mathbb{K}=\mathcal{F}(\mathbb{K})$ there exists an $n\in\mathbb{N}$ such that $|\alpha|\le n<n+1$; hence $\mathbb{K}$ is Archimedean.
\end{description}
\end{proof}
\begin{lemma}\label{L: finite arch ring}
Let $\mathbb{K}$ be a totally ordered field. Then
\begin{thm-enumerate}
\item $\mathcal{F}(\mathbb{K})$ is an Archimedean ring.
\item $\mathcal{I}(\mathbb{K})$ is a maximal ideal of $\mathcal{F}(\mathbb{K})$. Moreover, $\mathcal{I}(\mathbb{K})$ is a \emph{convex ideal} in the sense that $a\in\mathcal{F}(\mathbb{K})$ and $|a|\le|b|\in\mathcal{I}(\mathbb{K})$ implies $a\in\mathcal{I}(\mathbb{K})$.
\end{thm-enumerate}
Consequently $\mathcal{F}(\mathbb{K})/\mathcal{I}(\mathbb{K})$ is a totally ordered Archimedean field.
\end{lemma}
\begin{proof}
\mbox{}
\begin{thm-enumerate}
\item The fact that $\mathcal{F}(\mathbb{K})$ is Archimedean follows directly from its definition. Observe that $|-1|\le1$, therefore $-1\in\mathcal{F}(\mathbb{K})$. Suppose that $a,b,c\in\mathcal{F}(\mathbb{K})$, then $|a|\le n$, $|b|\le m$ and $|c|\le k$ for some $n,m,k\in\mathbb{N}$. Thus $|ab+c|\le|a||b| + |c|\le nm + k\in\mathbb{N}$ by Lemma~\ref{L: 1 tri ineq}, which implies $ab+c\in\mathcal{F}(\mathbb{K})$.
\item Let $x,y\in\mathcal{I}(\mathbb{K})$. Then, for any $n\in\mathbb{N}$, we have $|x +y|\le|x|+|y|<\frac{1}{2n}+\frac{1}{2n}=\frac{1}{n}$; thus $x+y\in\mathcal{I}(\mathbb{K})$.
Now suppose $a\in\mathcal{I}(\mathbb{K})$ and $b\in\mathcal{F}(\mathbb{K})$. Then $|b|\le n$ for some $n\in\mathbb{N}$. As $|a|<\frac{1}{nm}$ for all $m\in\mathbb{N}$, we have $|ab|\le\frac{n}{nm}=\frac{1}{m}$ for all $m\in\mathbb{N}$. Hence, $ab\in\mathcal{I}(\mathbb{K})$.
Suppose there exists an ideal $R\subseteq\mathcal{F}(\mathbb{K})$ that properly contains $\mathcal{I}(\mathbb{K})$ and let $k\in R\setminus\mathcal{I}(\mathbb{K})$. Then $\frac{1}{n}\le |k|$ for some $n\in\mathbb{N}$, hence $n\ge\frac{1}{|k|}\in\mathbb{K}$ which implies $\frac{1}{k}\in\mathcal{F}(\mathbb{K})$ and $1=\frac{k}{k}\in R$. Therefore $R=\mathcal{F}(\mathbb{K})$.
Finally, let $b\in\mathcal{I}(\mathbb{K})$. Suppose $a\in\mathcal{F}(\mathbb{K})$ such that $|a|<|b|$. Then $|a|<|b|< \frac{1}{n}$ for all $n\in\mathbb{N}$. Therefore $a\in\mathcal{I}(\mathbb{K})$.
\end{thm-enumerate}
\end{proof}
\begin{remark}
Archimedean rings (which are not fields) might have non-zero infinitesimals. For example, $\mathcal F(\mathbb K)$ is always an Archimedean ring, but it has non-zero infinitesimals when $\mathbb K$ is a non-Archimedean field.
\end{remark}
\begin{example}[Archimedean Fields]
The fields $\mathbb R$ and $\mathbb Q$ are Archimedean; so is $\mathbb C=\mathbb R(i)$, in the extended sense of Definition~\ref{D: Archimedean Field} (recall that $\mathbb C$ itself is not orderable).
\end{example}
For examples of non-Archimedean fields, we refer the reader to \S~\ref{S: examples 1} and \S~\ref{S: examples 2}.
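As a brief preview of those sections (a standard textbook example, not one of the constructions developed later), consider the field $\mathbb Q(x)$ of rational functions, ordered by declaring a non-zero $f\in\mathbb Q(x)$ positive whenever the quotient of the leading coefficients of its numerator and denominator is positive:

```latex
% In this ordering, f > 0 exactly when f(t) > 0 for all sufficiently
% large rational t, so x behaves as an infinitely large element:
\[
  n = n\cdot 1 < x
  \quad\text{and}\quad
  0 < \frac{1}{x} < \frac{1}{n}
  \qquad\text{for every } n\in\mathbb N .
\]
% Hence x belongs to L(Q(x)) and 1/x is a non-zero infinitesimal,
% so Q(x) is non-Archimedean by Proposition~\ref{P: 1 arch}.
```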
\section{Completeness of an Archimedean Field}\label{S: CAF}
In what follows, convergence is meant in reference to the order topology on $\mathbb K$. As well, the reader should recall that there is a natural embedding of the rationals (and, thus the natural numbers) into any totally ordered field (see remark~\ref{R: q embedding}).
In what follows, we provide several definitions of rather well known forms of completeness that will be used throughout the rest of this section.
\begin{definition}[Completeness]\label{D: completeness}
Let $\mathbb{K}$ be a totally ordered field.
\begin{def-enumerate}
\item Let $\kappa$ be an uncountable cardinal. Then $\mathbb{K}$ is \emph{Cantor $\kappa$-complete} if every family $\{[a_\gamma,b_\gamma]\}_{\gamma\in \Gamma}$ of fewer than $\kappa$ closed bounded intervals in $\mathbb{K}$ with the finite intersection property (F.I.P.) has a non-empty intersection, $\bigcap_{\gamma\in \Gamma} [a_\gamma,b_\gamma]\neq \emptyset$. If $\mathbb{K}$ is Cantor $\aleph_1$-complete, where $\aleph_1=\aleph_0^+$ (the successor of $\aleph_0=\ensuremath{{\rm{card}}}(\mathbb{N})$), then we say that $\mathbb{K}$ is \emph{Cantor complete}. The latter means that every nested sequence of bounded closed intervals in $\mathbb{K}$ has a non-empty intersection.
\item Let $^*\mathbb K$ be a non-standard extension of $\mathbb K$ (see either Lindstr\o m~\cite{lindstrom} or Davis~\cite{davis}) and let $\mathcal F(^*\mathbb K)$ and $\mathcal I(^*\mathbb K)$ be the sets of finite and infinitesimal elements in $^*\mathbb K$, respectively (see Definition~\ref{D: infinitesimal}). Then we say that $\mathbb{K}$ is \emph{Leibniz complete} if for every $\alpha\in\mathcal{F}(^*\mathbb{K})$, there exists unique $L\in\mathbb K$ and $dx\in\mathcal{I}(^*\mathbb{K})$ such that $\alpha=L+dx$; we will sometimes denote this by $\mathcal{F}(^*\mathbb{K})=\mathbb{K}\oplus \mathcal{I}(^*\mathbb{K})$ which is equivalent to saying $\mathcal{F}(^*\mathbb{K})/\mathcal{I}(^*\mathbb{K})=\mathbb{K}$.
\item $\mathbb{K}$ is \emph{Dedekind complete} if every non-empty subset of $\mathbb{K}$ that is bounded from above has a supremum.
\item $\mathbb{K}$ is \emph{sequentially complete} if every fundamental (Cauchy) sequence in $\mathbb{K}$ converges. Recall that a sequence $\{a_n\}$ in a totally ordered field $\mathbb{K}$ (not necessarily Archimedean) is called \emph{fundamental} if for all $\epsilon\in\mathbb{K}_+$, there exists an $N\in\mathbb{N}$ such that for all $n,m\in\mathbb{N}$, $n,m\ge N$ implies that $|a_n-a_m|<\epsilon$.
\item We say that $\mathbb{K}$ is \emph{Bolzano-Weierstrass complete} if every bounded sequence has a convergent subsequence.
\item We say that $\mathbb{K}$ is \emph{Bolzano complete} if every bounded infinite set has a cluster point.
\item We say that $\mathbb{K}$ is \emph{monotone complete} if every bounded monotonic sequence is convergent.
\item Suppose that $\mathbb{K}$ is Archimedean. Then $\mathbb{K}$ is \emph{Hilbert complete} if $\mathbb{K}$ has no proper totally ordered Archimedean field extensions.
\end{def-enumerate}
\end{definition}
\begin{remark}[Completeness of the Reals in History]
\mbox{}
\begin{def-enumerate}
\item Leibniz completeness (number 2 in Definition~\ref{D: completeness} above) appears in the early Leibniz-Euler Infinitesimal Calculus as the statement that ``every finite number is infinitesimally close to a unique usual quantity.'' Here the ``usual quantities'' are what we now refer to as the real numbers and can be identified with $\mathbb{K}$ in the definition above. This form of completeness was, more or less, always treated as an obvious fact; what was not obvious, and a possible reason for the demise of the infinitesimals, was the validity of what has come to be known as the Leibniz Principle (see H. J. Keisler~\cite{keisler} p. 42 and Stroyan \& Luxemburg~\cite{strolux76} p. 22), which roughly states that there is a non-Archimedean field extension $^*\mathbb{K}$ of $\mathbb{K}$ such that every function $f$ on $\mathbb{K}$ has an extension $^*f$ to $^*\mathbb{K}$ that ``preserves all the properties of $\mathbb{K}$.'' For example, $(x+y)^2=x^2+2xy+y^2$ and $^*\sin(x+y)={^*\sin(x)}{^*\cos(y)}+{^*\sin(y)}{^*\cos(x)}$ hold in $^*\mathbb{K}$ because the analogous statements hold in $\mathbb{K}$ (note that $^*\sin$ is usually written as $\sin$ for convenience). All attempts to construct a field with such properties failed until the 1960s, when A. Robinson developed the theory of non-standard analysis along with the Transfer Principle, which is analogous to the Leibniz Principle, and proved that every field $\mathbb{K}$ has a non-standard extension $^*\mathbb{K}$. For a detailed exposition, we refer to Lindstr\o m~\cite{lindstrom}, Davis~\cite{davis} and Keisler~\cite{keisler},~\cite{keisler2}.
\item Dedekind completeness was introduced by Dedekind (independently of many others, see O'Connor~\cite{oconnor}) at the end of the 19th century. From the point of view of modern mathematics, Dedekind proved the consistency of the axioms of the real numbers by constructing an example from Dedekind cuts.
\item Sequential completeness, listed as number 4 above, is a well known form of completeness of metric spaces, but it has also been used in constructions of the real numbers: Cantor's construction using Cauchy sequences (see O'Connor~\cite{oconnor}), an example of which can be found in Hewitt \& Stromberg~\cite{hewitt}.
\item Cantor completeness (also known as the ``nested interval property''), monotone completeness, Bolzano-Weierstrass completeness, and Bolzano completeness typically appear in real analysis as ``theorems'' or ``important principles'' rather than as forms of completeness; however, in non-standard analysis, Cantor completeness takes on a much more important role along with the concept of algebraic saturation, which is defined in Definition~\ref{D: algebraic saturation}.
\item Hilbert completeness, listed as number 8 above, is a less well-known form of completeness that was originally introduced by Hilbert in 1900 with his axiomatic definition of the real numbers (see Hilbert~\cite{hilbert} and O'Connor~\cite{oconnor}).
\end{def-enumerate}
\end{remark}
\begin{theorem}\label{T: exist ded}
There exists a Dedekind complete field.
\end{theorem}
\begin{proof}
For the proof of this we refer to either the construction by means of Dedekind cuts in Rudin~\cite{poma} or the construction using equivalence classes of Cauchy sequences in Hewitt \& Stromberg~\cite{hewitt}, or to \S~\ref{S: construct} of this text where we present a construction using non-standard analysis.
\end{proof}
\begin{lemma}\label{L: ded cuts properties}
Let $\mathbb{A}$ be an Archimedean field and $\mathbb{K}$ be a Dedekind complete field. Define $C_\alpha=:\{q\in\mathbb{Q}: q<\alpha\}$ for $\alpha\in\mathbb{A}$. Then
\begin{thm-enumerate}
\item For all $\alpha,\beta\in\mathbb{A}$ we have $\sup_\mathbb{K}(C_{\alpha+\beta})=\sup_\mathbb{K}(C_\alpha)+\sup_\mathbb{K}(C_\beta)$.
\item For all $\alpha,\beta\in\mathbb{A}$ we have $\sup_\mathbb{K}(C_{\alpha\beta})=\sup_\mathbb{K}(C_\alpha)\sup_\mathbb{K}(C_\beta)$.
\end{thm-enumerate}
\end{lemma}
\begin{proof}
First note that, as both $\mathbb{A}$ and $\mathbb{K}$ are ordered fields, they each contain a copy of $\mathbb{Q}$, so that our claim actually makes sense.
\begin{thm-enumerate}
\item We only need to show that $C_{\alpha+\beta}=C_\alpha+C_\beta$, where $C_\alpha+C_\beta=:\{q+p:q\in C_\alpha,p\in C_\beta\}$, and the rest will follow from properties of supremum. It should be clear that $C_\alpha+C_\beta\subseteq C_{\alpha+\beta}$, thus suppose $p\in C_{\alpha+\beta}$. Let $r\in\mathbb{Q}_+$ and $u\in\mathbb{Q}$ be such that $0<r<\alpha+\beta - p$ and $\alpha-r<u<\alpha$ (which is possible because the rationals are dense in any Archimedean field). Then $p-u<p+r-\alpha<\beta$, and as $p,u\in\mathbb{Q}$, we know that $p-u\in\mathbb{Q}$; so if we define $v=:p-u$, then $u\in C_\alpha$, $v\in C_\beta$ and $u+v=p$, hence $p\in C_\alpha+C_\beta$.
\item To show this, we will first prove the case when both $\alpha$ and $\beta$ are positive. Let $P_\gamma=:C_\gamma\cap \mathbb{Q}_+$ for $\gamma\in\mathbb{A}$ and suppose both $\alpha$ and $\beta$ are positive; then $\sup_\mathbb{K}(C_\alpha)=\sup_\mathbb{K}(P_\alpha)$ and $\sup_\mathbb{K}(C_\beta)=\sup_\mathbb{K}(P_\beta)$. As before, we will show that $P_{\alpha\beta}=P_\alpha\cdot P_\beta=:\{qp : q\in P_\alpha,p\in P_\beta\}$ from which the desired result will follow by basic properties of supremum. Note that we clearly have $P_\alpha\cdot P_\beta\subseteq P_{\alpha\beta}$. Suppose $p\in P_{\alpha\beta}$, let $u\in\mathbb{Q}$ be such that $\frac{p}{\beta}<u<\alpha$ and define $v=:\frac{p}{u}$. Then $v<\frac{p}{p/\beta}=\beta$ so that $u\in P_\alpha, v\in P_\beta$ and $uv=p$; thus $p\in P_\alpha\cdot P_\beta$. Therefore $\sup_\mathbb{K}(C_\alpha)\sup_\mathbb{K} (C_\beta)=\sup_\mathbb{K}(C_{\alpha\beta})$.
For the remaining cases, first note that if $q<\alpha$, then $-\alpha<-q$ so that $\sup_\mathbb{K}(C_{-\alpha})<-q$, thus $q<-\sup_\mathbb{K}(C_{-\alpha})$ which implies that $\sup_\mathbb{K}(C_{\alpha})\le -\sup_\mathbb{K}(C_{-\alpha})$; hence, by symmetry, $\sup_\mathbb{K}(C_{-\alpha})=-\sup_\mathbb{K}(C_\alpha)$. So, if either $\alpha$ or $\beta$ are zero, then $\sup_\mathbb{K}(C_0)=-\sup_\mathbb{K}(C_0)$ so that $\sup_\mathbb{K}(C_0)=0$ and thus our desired result holds. If instead, both $\alpha$ and $\beta$ are negative, then we have $\sup_\mathbb{K}(C_\alpha)\sup_\mathbb{K}(C_\beta)= \sup_\mathbb{K}(C_{-\alpha})\sup_\mathbb{K}(C_{-\beta}) =\sup_\mathbb{K}(C_{(-\alpha)(-\beta)}) =\sup_\mathbb{K}(C_{\alpha\beta})$. Otherwise, if $\alpha\beta<0$, then without loss of generality, we can assume that $\alpha>0$ and $\beta<0$ so that we have $\sup_\mathbb{K}(C_\alpha)\sup_\mathbb{K}(C_\beta)=-\sup_\mathbb{K}(C_\alpha)\sup_\mathbb{K}(C_{-\beta})=-\sup_\mathbb{K}(C_{-\alpha\beta})=\sup_\mathbb{K}(C_{\alpha\beta})$.
\end{thm-enumerate}
\end{proof}
\begin{theorem}\label{T: arch embed ded}
Let $\mathbb{A}$ be a totally ordered Archimedean field and let $\mathbb{K}$ be a totally ordered Dedekind complete field. Then the mapping $\sigma:\mathbb{A}\to\mathbb{K}$ given by $\sigma(\alpha)=:\sup_\mathbb{K}(C_\alpha)$, where $C_\alpha=:\{q\in\mathbb{Q}: q<\alpha\}$, is an order field embedding of $\mathbb{A}$ into $\mathbb{K}$.
\end{theorem}
\begin{proof}
Recall that $\mathbb{Q}$ can be embedded into both $\mathbb{A}$ and $\mathbb{K}$ as they are ordered fields. Note that for $\alpha\in\mathbb{A}$, there exist $q,p\in\mathbb{Q}$ such that $q<\alpha<p$ by the Archimedean property; thus the set $C_\alpha$ is both non-empty and bounded from above in $\mathbb{A}$ and $\mathbb{K}$. Now let $\alpha,\beta\in\mathbb{A}$. To show that $\sigma$ preserves order, suppose $\alpha<\beta$; then $C_\alpha\subseteq C_\beta$, and because $\mathbb{Q}$ is dense in every Archimedean field, $C_\alpha\neq C_\beta$, therefore $\sigma(\alpha)<\sigma(\beta)$. The fact that $\sigma$ is an isomorphism follows from Lemma~\ref{L: ded cuts properties} above.
\end{proof}
\begin{corollary}\label{C: ded order iso}
All Dedekind complete fields are mutually order-isomorphic. Consequently they have the same cardinality, which is usually denoted by $\mathfrak c$.
\end{corollary}
\begin{proof}
Let $\mathbb{K}$ and $\mathbb{F}$ be Dedekind complete fields. Using the mapping $\sigma:\mathbb{K}\to\mathbb{F}$ from the preceding theorem, it is easy to see that, for any $k\in\mathbb{F}$, the element $\sup_\mathbb{K} C_k$ of $\mathbb{K}$ maps to $k$; hence $\sigma$ is onto and thus an order-isomorphism.
\end{proof}
\begin{corollary}\label{C: arch card}
Every Archimedean field has cardinality at most $\mathfrak c$.
\end{corollary}
The next result shows that an Archimedean field can never be order-isomorphic to one of its proper subfields.
\begin{theorem}
Let $\mathbb{K}$ be a totally ordered Archimedean field and $\mathbb{F}$ be a subfield of $\mathbb{K}$. If $\sigma:\mathbb{K}\to\mathbb{F}$ is an order-isomorphism between $\mathbb{K}$ and $\mathbb{F}$, then $\mathbb{K}=\mathbb{F}$ and $\sigma=id_\mathbb{K}$.
\end{theorem}
\begin{proof}
Suppose $\sigma:\mathbb{K}\to\mathbb{F}$ is an order-preserving isomorphism. Note that, as an isomorphism, $\sigma$ fixes the rationals. Let $a\in\mathbb{K}$ and $A=:\{q\in\mathbb{Q}:q<a\}$. Recall that the rationals are dense in an Archimedean field, hence we have $a=\sup_\mathbb{K}A$. Then because $\sigma$ is order preserving and fixes $\mathbb{Q}$, we know that $\sigma(a)\in\ensuremath{\mathcal{UB}}(A)$ and thus $\sigma(a)\ge a$. To show that $\sigma(a)=a$, suppose, to the contrary that $\sigma(a)>a$. Then we can find a rational $q$ such that $a<q<\sigma(a)$, but $\sigma$ is order-preserving, so $\sigma(a)<\sigma(q)=q$, a contradiction. Therefore $\sigma=id_\mathbb{K}$ and $\mathbb{K}=\mathbb{F}$.
\end{proof}
As a counter-example showing that the preceding theorem may fail when $\mathbb{K}$ is non-Archimedean, we have the field of rational functions $\mathbb{R}(x)$.
\begin{example}
Let $\mathbb{R}(x)$ be the field of rational functions over $\mathbb{R}$ with indeterminate $x$ and supply the field with an ordering given by $f< g$ if and only if there exists an $N\in\mathbb{N}$ such that $g(x)-f(x)>0$ for all $x\in\mathbb{R}, x\ge N$. Then the field $\mathbb{R}(x^2)$ is a proper subfield of $\mathbb{R}(x)$ which is order-isomorphic to $\mathbb{R}(x)$ under the map $\sigma:\mathbb{R}(x)\to \mathbb{R}(x^2)$ given by $\sigma(f(x))=f(x^2)$ for all $f(x)\in\mathbb{R}(x)$.
\end{example}
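The eventual-dominance order of the example is decidable: $f<g$ holds exactly when the numerator and the denominator of $g-f$ have leading coefficients of the same sign. A minimal Python sketch (our own illustration; polynomials are coefficient lists in ascending degree, rational functions are numerator/denominator pairs, and all names are hypothetical):

```python
def lead(p):
    """Leading coefficient of a polynomial given as [c0, c1, ...]
    (lowest degree first); returns 0 for the zero polynomial."""
    for c in reversed(p):
        if c != 0:
            return c
    return 0

def pmul(p, q):
    """Polynomial product."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def psub(p, q):
    """Polynomial difference p - q."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) - (q[i] if i < len(q) else 0)
            for i in range(n)]

def less(f, g):
    """f < g in the order of the example: g - f is eventually positive,
    which holds iff the leading coefficients of the numerator and the
    denominator of g - f have the same sign."""
    (fn, fd), (gn, gd) = f, g
    num = psub(pmul(gn, fd), pmul(fn, gd))
    ln, ld = lead(num), lead(pmul(fd, gd))
    return ln != 0 and (ln > 0) == (ld > 0)

x = ([0, 1], [1])   # the rational function x
```

In particular, `less(([n], [1]), x)` holds for every integer $n$, so the function $x$ is an upper bound of $\mathbb{N}$ and this ordered field is non-Archimedean.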
\begin{lemma}\label{L: order -> arch}
If $\mathbb{K}$ is a Dedekind complete ordered field, then $\mathbb{K}$ is Archimedean.
\end{lemma}
\begin{proof}
Suppose, to the contrary, that $\mathbb{K}$ is non-Archimedean; then $\mathbb{N}\subset\mathbb{K}$ is bounded from above. Let $\alpha\in\mathbb{K}$ be the least upper bound of $\mathbb{N}$. Since $\alpha-1$ is not an upper bound of $\mathbb{N}$, we have $\alpha-1< n$ for some $n\in\mathbb{N}$, thus $\alpha< n+1\in\mathbb{N}$, a contradiction.
\end{proof}
The following theorem shows that, under the assumption that we are working with a totally ordered Archimedean field, all of the forms of completeness listed in Definition~\ref{D: completeness} are in fact equivalent. Later (in \S~\ref{S: non-arch completeness}) we will examine these properties without the assumption that the field is Archimedean, to find that this equivalence does not necessarily hold.
\begin{theorem}[Completeness of an Archimedean Field]\label{T: BIG ONE} Let $\mathbb{K}$ be a totally ordered Archimedean field. Then the following are equivalent.
\begin{thm-enumerate}
\item $\mathbb{K}$ is Cantor $\kappa$-complete for any infinite cardinal $\kappa$.
\item $\mathbb{K}$ is Leibniz complete (see remark~\ref{R: NSA Unique} for uniqueness).
\item $\mathbb{K}$ is monotone complete.
\item $\mathbb{K}$ is Cantor complete (i.e. Cantor $\aleph_1$-complete, not for all cardinals).
\item $\mathbb{K}$ is Bolzano-Weierstrass complete.
\item $\mathbb{K}$ is Bolzano complete.
\item $\mathbb{K}$ is sequentially complete.
\item $\mathbb{K}$ is Dedekind complete.
\item $\mathbb{K}$ is Hilbert complete.
\end{thm-enumerate}
\end{theorem}
\begin{proof}
\mbox{}
\begin{description}
\item[$(i)\Rightarrow(ii)$:] Let $\kappa$ be the successor of $\ensuremath{{\rm{card}}}(\mathbb{K})$. As well, let $\alpha\in\mathcal F(^*\mathbb K)$ and $S=:\{[a,b]:a,b\in\mathbb K\emph{ and } a\le \alpha\le b\textrm{ in }{^*\mathbb K}\}$. Clearly $S$ satisfies the finite intersection property and $\ensuremath{{\rm{card}}}(S)=\ensuremath{{\rm{card}}}(\mathbb{K}\times\mathbb{K})=\ensuremath{{\rm{card}}}(\mathbb{K})<\kappa$; thus, by assumption, there exists $L\in \bigcap_{[a,b]\in S} [a,b]$. To show $\alpha-L\in\mathcal I(^*\mathbb K)$, suppose, to the contrary, that $\alpha-L\not\in\mathcal I(^*\mathbb K)$, i.e. $\frac{1}{n}<|\alpha-L|$ for some $n\in\mathbb N$. Then either $\alpha<L-\frac{1}{n}$ or $L+\frac{1}{n}<\alpha$. However the former implies $L\le L-\frac{1}{n}$ and the latter implies $L+\frac{1}{n}\le L$. In either case we reach a contradiction, therefore $\alpha-L\in\mathcal I(^*\mathbb K)$.
\item[$(ii)\Rightarrow(iii)$:] Let $\{x_n\}_{n\in\mathbb N}$ be a bounded monotonic sequence in $\mathbb K$; without loss of generality, we can assume that $\{x_n\}$ is increasing. We denote the non-standard extension of $\{x_n\}_{n\in\mathbb N}$ by $\{^*x_{\nu}\}_{\nu \in {^*\mathbb N}}$. Observe that, by the Transfer Principle (see Davis~\cite{davis}), $\{^*x_{\nu}\}$ is increasing as $\{x_n\}$ is increasing. Also (by the Transfer Principle), for any $b\in\ensuremath{\mathcal{UB}}(\{x_n\})$ we have $(\forall \nu\in{^*\mathbb N})(^*x_{\nu}\le b)$. Now choose $\nu \in\mathcal L(^*\mathbb N)$. Then $^*x_{\nu}\in\mathcal F(^*\mathbb K)$, because $\{^*x_{\nu}\}$ is bounded by a standard number; thus, there exists $L\in\mathbb K$ such that $L\approx {^*x_{\nu}}$ by assumption. Since $\{^*x_{\nu}\}$ is increasing, it follows that $L\in\ensuremath{\mathcal{UB}}(\{x_n\})$. To show that $x_n\to L$, suppose that it does not. Then, there exists $\epsilon\in\mathbb K_+$ such that $(\forall n\in\mathbb N)(L-x_n\ge\epsilon)$. Thus, we have $L-\epsilon\in\ensuremath{\mathcal{UB}}(\{x_n\})$, which implies $^*x_{\nu}\le L-\epsilon$ (by the transfer principle) contradicting $^*x_{\nu}\approx L$.
\item[$(iii)\Rightarrow(iv)$:] Suppose that $\{[a_i,b_i]\}_{i\in\mathbb{N}}$ satisfies the finite intersection property. Let $\Gamma_n=:\cap_{i=1}^n[a_i,b_i]$ and observe that $\Gamma_n=[\alpha_n, \beta_n]$ where $\alpha_n=:\max_{i\le n} a_i$ and $\beta_n=:\min_{i\le n}b_i$. Then $\{\alpha_n\}_{n\in\mathbb N}$ is a bounded increasing sequence and $\{\beta_n\}_{n\in\mathbb N}$ is a bounded decreasing sequence; thus $\alpha=:\lim_{n\to\infty}\alpha_n$ and $\beta=:\lim_{n\to\infty}\beta_n$ exist by assumption. If $\beta<\alpha$, then for some $n$ we would have $\beta_n<\alpha_n$, a contradiction; hence, $\alpha\le\beta$. Therefore $\cap_{i=1}^{\infty}[a_i,b_i]=[\alpha,\beta]\neq\emptyset$.
\item[$(iv)\Rightarrow(v)$:] This is the familiar \textbf{Bolzano-Weierstrass Theorem} (Bartle \& Sherbert~\cite{bartle}, p. 79). Let $\{x_n\}_{n\in\mathbb{N}}$ be a bounded sequence in $\mathbb{K}$, then there exists $a,b\in\mathbb K$ such that $\{x_n:n\in\mathbb N\}\subset [a,b]$. Let $\Gamma_1=:[a,b]$, $n_1=:1$ and divide $\Gamma_1$ into two equal subintervals $\Gamma_1'$ and $\Gamma_1''$. Let $A_1=:\{n\in\mathbb N: n> n_1, x_n\in \Gamma_1'\}$ and $B_1=:\{n\in\mathbb N: n> n_1, x_n\in \Gamma_1''\}$. If $A_1$ is infinite, then take $\Gamma_2=:\Gamma_1'$ and $n_2=:\min A_1$; otherwise, $B_1$ is infinite, thus we take $\Gamma_2=: \Gamma_1''$ and $n_2=:\min B_1$. Continuing in this manner, by the Axiom of Choice, we can produce a nested sequence $\{\Gamma_n\}$ and a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k}\in \Gamma_k$ for $k\in\mathbb N$. As well, we observe that $|\Gamma_k|=\frac{b-a}{2^{k-1}}$. By assumption $(\exists L\in\mathbb K)(\forall k\in\mathbb N)(L\in \Gamma_k)$; thus $|x_{n_k}-L|\le \frac{b-a}{2^{k-1}}$. Therefore $\{x_{n_k}\}$ converges to $L$ as $\frac{b-a}{2^{k-1}}$ converges to 0 (see remark~\ref{R: 2 Converge}).
\item[$(v)\Rightarrow(vi)$:] Let $A\subset \mathbb{K}$ be a bounded infinite set. By the Axiom of Choice, $A$ has a denumerable subset -- that is, there exists an injection $\{x_n\}:\mathbb{N}\to A$. As $A$ is bounded, $\{x_n\}$ has a subsequence $\{x_{n_k}\}$ that converges to a point $x\in\mathbb{K}$ by assumption. Then $x$ must be a cluster point of $A$ because the sequence $\{x_{n_k}\}$ is injective, and thus not eventually constant.
\item[$(vi)\Rightarrow(vii)$:] Let $\{x_n\}$ be a Cauchy sequence in $\mathbb{K}$. Then $\{x_n\}$ is bounded, because we can find $N\in\mathbb{N}$ such that $n\ge N$ implies that $|x_N-x_n|<1$; hence $|x_n|<1+|x_N|$ so that $\{x_n\}$ is bounded by $\max\{|x_1|,\ldots,|x_{N-1}|,|x_N|+1\}$. Thus $\mathrm{range}(\{x_n\})$ is a bounded set. If $\mathrm{range}(\{x_n\})=\{a_1,\ldots, a_k\}$ is finite, then $\{x_n\}$ is eventually constant (and thus convergent) because for sufficiently large $n,m\in\mathbb{N}$, we have $|x_n-x_m|<\min_{p\neq q}|a_p-a_q|$, where $p$ and $q$ range from $1$ to $k$. Otherwise, $\mathrm{range}(\{x_n\})$ has a cluster point $L$ by assumption. To show that $\{x_n\}\to L$, let $\epsilon\in\mathbb{K}_+$ and $N\in\mathbb{N}$ be such that $n,m\ge N$ implies that $|x_n-x_m|<\frac{\epsilon}{2}$. Observe that the set $\{n\in\mathbb{N} : |x_n-L|<\frac{\epsilon}{2} \}$ is infinite because $L$ is a cluster point (see Theorem 2.20 in Rudin~\cite{poma}), so that $A=:\{n\in\mathbb{N} : |x_n-L|<\frac{\epsilon}{2} \}\cap\{n\in \mathbb{N} : n\ge N\}$ is non-empty. Let $M=:\min A$. Then, for $n\ge N$, we have $|x_n-L|\le|x_n-x_M|+|x_M-L|<\epsilon$.
\item[$(vii)\Rightarrow(viii)$:] (This proof can be found in Hewitt \& Stromberg~\cite{hewitt}, p. 44.) Let $S$ be a non-empty set bounded from above. We will construct a decreasing Cauchy sequence in $\ensuremath{\mathcal{UB}}( S)$ and show that the limit of the sequence is $\sup(S)$. Let $b\in\ensuremath{\mathcal{UB}}( S)$ and $a\in S$. By the Archimedean property, there exist $m\in\mathbb{Z}$ and $M\in\mathbb{N}$ such that $m<a\le b<M$. For each $p\in\mathbb N$, we define
\[S_p=:\left\{k\in\mathbb Z:\frac{k}{2^p}\in\ensuremath{\mathcal{UB}}(S)\emph{ and }k\le2^pM\right\}\]
Clearly $2^pm$ is a lower bound of $S_p$ and $2^pM\in S_p$; hence, $S_p$ is finite which implies that $k_p=:\min S_p$ exists. We define $a_p=:\frac{k_p}{2^p}$ for all $p\in\mathbb N$. From the definition of $k_p$, it follows that $\frac{2k_p}{2^{p+1}}=\frac{k_p}{2^p}$ is an upper bound of $S$ and $\frac{2k_p-2}{2^{p+1}}=\frac{k_p-1}{2^p}$ is not. Thus, either $k_{p+1}=2k_p$ or $k_{p+1}=2k_p-1$, so that either $a_{p+1}=a_p$ or $a_{p+1}=a_p-\frac{1}{2^{p+1}}$; in either case, we have $a_{p+1}\le a_p$ and $a_p-a_{p+1}\le\frac{1}{2^{p+1}}$. Now, if $q>p\ge 1$, then
\begin{align*}
0\le a_p-a_q&=(a_p-a_{p+1}) +(a_{p+1} -a_{p+2})+ \cdots+(a_{q-1}-a_q)\\
&\le \frac{1}{2^{p+1}} +\cdots +\frac{1}{2^q}
= \frac{1}{2^{p+1}} (2-\frac{1}{2^{q-p-1}})<\frac{1}{2^p}
\end{align*}
Therefore $a_p$ is a Cauchy sequence and $L=:\lim_{p\to\infty}a_p$ exists by assumption. To reach a contradiction, suppose $L\not\in\ensuremath{\mathcal{UB}}( S)$. Then there exists $x\in S$ such that $x>L$, and hence there exists $p\in\mathbb N$ such that $a_p-L=|a_p-L|< x-L$; thus $a_p<x$, which contradicts the fact that $a_p\in\ensuremath{\mathcal{UB}}( S)$. Now assume there exists $L'\in\ensuremath{\mathcal{UB}}( S)$ such that $L'<L$ and choose $p\in\mathbb N$ such that $\frac{1}{2^p}<L-L'$ (see remark~\ref{R: 2 Converge}). Then $a_p-\frac{1}{2^p}\ge L-\frac{1}{2^p}>L'$, thus $a_p-\frac{1}{2^p}=\frac{k_p-1}{2^p}\in\ensuremath{\mathcal{UB}}( S)$ which contradicts the minimality of $k_p$. Thus $L=\sup(S)$.
\item[$(viii)\Rightarrow(ix)$:] Suppose that $\mathbb{K}$ is Dedekind complete and that $\mathbb{A}$ is a totally ordered Archimedean field extension of $\mathbb{K}$. Recall that $\mathbb{Q}$ is dense in $\mathbb{A}$ as it is Archimedean; hence, the set $\{q\in \mathbb{Q} : q<a\}$ is non-empty and bounded above in $\mathbb{K}$ for all $a\in\mathbb{A}$. Define the mapping $\sigma:\mathbb{A}\to\mathbb{K}$ by
\[\sigma(a)=:\sup_\mathbb{K}\{q\in\mathbb{Q} : q<a\}\]
To show that $\mathbb{A}=\mathbb{K}$ we will show that $\sigma$ is just the identity map. Note that $\sigma$ fixes $\mathbb{K}$. To reach a contradiction, suppose that $\mathbb{A}\neq\mathbb{K}$ and let $a\in\mathbb{A}\setminus\mathbb{K}$. Then $\sigma(a)\neq a$ so that either $\sigma(a)>a$ or $\sigma(a)<a$. If it is the former, then there exists $q\in\mathbb{Q}$ such that $a<q<\sigma(a)$, and if it is the latter then there exists $q\in\mathbb{Q}$ such that $\sigma(a)<q<a$ (because $\mathbb{K}$ is Archimedean by assumption so that $\mathbb{Q}$ is dense in $\mathbb{K}$). In either case we reach a contradiction. Therefore $\mathbb{K}$ has no proper Archimedean field extensions.
\item[$(ix)\Rightarrow(i)$:] Suppose, to the contrary, there is an infinite cardinal $\kappa$ and a family $[a_i,b_i]_{i\in I}$ of fewer than $\kappa$ closed bounded intervals with the finite intersection property such that $\bigcap_{i\in I}[a_i,b_i]=\emptyset$. Let $\overline{\mathbb{K}}$ be a Dedekind complete field (see Theorem~\ref{T: exist ded}). As $\mathbb{K}$ is an Archimedean field, there is a natural embedding of $\mathbb{K}$ into $\overline{\mathbb{K}}$, so we can consider $\mathbb{K}\subseteq \overline{\mathbb{K}}$ (see Theorem~\ref{T: arch embed ded}). Because $[a_i,b_i]$ satisfies the finite intersection property, the set $A=:\{a_i :i\in I\}$ is bounded from above and non-empty so that $c=:\sup(A)$ exists in $\overline{\mathbb{K}}$, but then $a_i\le c\le b_i$ for all $i\in I$ so that $c\not\in\mathbb{K}$. Thus $\overline{\mathbb{K}}$ is a proper field extension of $\mathbb{K}$ which is Archimedean by Lemma~\ref{L: order -> arch}, a contradiction.
\end{description}
\end{proof}
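The dyadic scheme in step $(vii)\Rightarrow(viii)$ can be run numerically. The following Python sketch (our own illustration under stated assumptions: an integer upper bound $M$ and a decidable membership test for $\mathcal{UB}(S)$; here applied to $S=\{q\in\mathbb{Q}_+ : q^2<2\}$, whose supremum $\sqrt2$ is irrational) produces the decreasing sequence $a_p=k_p/2^p$ of upper bounds from the proof:

```python
from fractions import Fraction

def dyadic_upper_bounds(is_ub, M, steps):
    """The sequence a_p = k_p / 2**p from step (vii)=>(viii): k_0 = M is an
    integer upper bound of S, and k_{p+1} = 2*k_p - 1 if (2*k_p - 1)/2**(p+1)
    is still an upper bound, else k_{p+1} = 2*k_p.  The a_p are decreasing
    upper bounds of S with a_p - a_{p+1} <= 1/2**(p+1)."""
    k, seq = M, []
    for p in range(steps + 1):
        seq.append(Fraction(k, 2 ** p))
        k = 2 * k - 1 if is_ub(Fraction(2 * k - 1, 2 ** (p + 1))) else 2 * k
    return seq

# S = {q in Q_+ : q**2 < 2}: a positive rational q is an upper bound
# of S exactly when q**2 >= 2.
seq = dyadic_upper_bounds(lambda q: q * q >= 2, 2, 20)
```

The iterates are exact rationals approaching $\sqrt 2$ from above, mirroring the Cauchy estimate $0\le a_p-a_q<\frac{1}{2^p}$ established in the proof.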
\begin{remark}
It should be noted that the equivalence of $(ii)$ and $(vii)$ above was proved in Keisler (\cite{keisler}, pp.~17--18). Also, the equivalence of $(viii)$ and $(ix)$ was proved in Banaschewski~\cite{bana}, using a method different from ours that relies on the axiom of choice.
\end{remark}
\begin{remark}\label{R: NSA Unique}
Using the Archimedean property assumed of $\mathbb K$, we can actually show that the standard and infinitesimal parts of the decomposition of a finite number are unique: let $\alpha\in\mathcal{F}(^*\mathbb{K})$ and $a,b\in\mathbb{K}$ such that $\alpha-a,\alpha-b\in\mathcal{I}(^*\mathbb{K})$, then $\alpha -a - \alpha + b=b-a\in\mathcal{I}(^*\mathbb{K})$; however, $\mathbb{K}\cap\mathcal{I}( ^*\mathbb{K})=\{0\}$ because $\mathbb K$ is Archimedean. Therefore $b=a$.
\end{remark}
\begin{remark}\label{R: 2 Converge}
As $\mathbb K$ is Archimedean, for any $\epsilon\in\mathbb K_+$, there exists $n\in\mathbb N$ such that $\frac{1}{n}<\epsilon$. Thus, the fact that both of the sequences, $\frac{1}{n}$ and $\frac{1}{2^n}$, converge to $0$ depends only on the Archimedean property.
\end{remark}
\begin{corollary}
If $\mathbb{K}$ is an ordered Archimedean field that is not Dedekind complete (e.g. $\mathbb{Q}$), then $^*\mathbb{K}$ contains finite numbers that are not infinitesimally close to any number of $\mathbb{K}$.
\end{corollary}
\begin{proof}
This follows from Theorem~\ref{T: BIG ONE}, as we showed $(ii)\Leftrightarrow(viii)$.
\end{proof}
In Theorem~\ref{T: BIG ONE} we showed that the nine properties listed above are equivalent under the assumption that $\mathbb K$ is Archimedean. However, as Lemma~\ref{L: order -> arch} already illustrates, this assumption is not necessary for \emph{some} of the properties.
In the remainder of this section we show that, like property $(viii)$, each of the properties $(i), (ii), (iii), (v)$ and $(vi)$ implies the Archimedean property.
\begin{lemma}
Let $\mathbb{K}$ be an ordered field. If $\mathbb{K}$ is Bolzano complete, then $\mathbb{K}$ is Archimedean.
\end{lemma}
\begin{proof}
Suppose, to the contrary, that $\mathbb{K}$ is non-Archimedean. Then $\mathbb{N}\subset \mathbb{K}$ is bounded from above, and hence has a cluster point $L\in\mathbb{K}$. Then the set $A=:\{n\in \mathbb{N} : |L-n|<\frac{1}{2}, n\neq L\}$ is non-empty. If $A$ contains only one element $m\in\mathbb{N}$, then the set $\{n\in\mathbb{N}: |L-n|<|L-m|\}$ must be empty, which contradicts the fact that $L$ is a cluster point of $\mathbb{N}$. Otherwise, if $A$ contains two distinct elements $p,q\in\mathbb{N}$, then $|p-q|\le|p-L|+|q-L|<1$, but $p,q\in\mathbb{N}$, so this implies $p=q$, a contradiction.
\end{proof}
\begin{lemma}\label{L: bw -> mct -> arch}
Let $\mathbb K$ be an ordered field. If $\mathbb{K}$ is either Bolzano-Weierstrass complete or monotone complete, then $\mathbb K$ is Archimedean.
\end{lemma}
\begin{proof}
We first show that Bolzano-Weierstrass completeness implies monotone completeness, and then show that monotone completeness implies the Archimedean property.
Suppose that $\mathbb{K}$ is Bolzano-Weierstrass complete and let $\{x_n\}$ be a bounded monotonic sequence. Without loss of generality, we can assume that $\{x_n\}$ is increasing. Then $\{x_n\}$ has a subsequence $\{x_{n_k}\}$ that converges to some point $L\in\mathbb K$, by assumption. As $\{x_n\}$ is increasing, it follows that $\{x_{n_k}\}$ is increasing and that $L\in\ensuremath{\mathcal{UB}}(\{x_n\})$. Given $\epsilon\in\mathbb K_+$, there exists $N\in\mathbb N$ such that $(\forall k\in\mathbb N)(k\ge N\Rightarrow|x_{n_k}-L|<\epsilon)$; but for any $m\ge n_N$ we have $x_{n_N}\le x_m\le L$, hence $|x_m-L|\le|x_{n_N}-L|<\epsilon$. Therefore $\{x_n\}$ converges to $L$.
Now suppose that $\mathbb{K}$ is monotone complete. To reach a contradiction, suppose that $\mathbb K$ is non-Archimedean. From this, it follows that the sequence $\{n\}$ is bounded and, thus, converges to some point $L$ by assumption. It should be clear that $L\in\mathcal L(\mathbb K)$. As well, we have $L-n\in\mathcal L(\mathbb K)$ for any $n\in\mathbb N$ (because $L-n\not\in\mathcal L(\mathbb K)$ would imply $L<m+n\in\mathbb N$ for some $m\in\mathbb N$). However, this contradicts $\{n\}$ converging to $L$, as the difference between the sequence and the point $L$ remains infinitely large. Therefore $\mathbb K$ must be Archimedean.
\end{proof}
\begin{lemma}\label{L: gcantor -> arch}
Let $\mathbb K$ be an ordered field. If $\mathbb{K}$ is Cantor $\kappa$-complete for $\kappa=\ensuremath{{\rm{card}}}(\mathbb{K})^+$, then $\mathbb K$ is Archimedean. Consequently, if $\mathbb{K}$ is Cantor $\kappa$-complete for every cardinal $\kappa$, then $\mathbb{K}$ is Archimedean.
\end{lemma}
\begin{proof}
Let $S\subset\mathbb K$ be non-empty and bounded from above and let $\Gamma=:\{[a,b] : a\in S, b\in\ensuremath{\mathcal{UB}}(S)\}$. Observe that $\Gamma$ satisfies the finite intersection property and that $\ensuremath{{\rm{card}}}(\Gamma)\le\ensuremath{{\rm{card}}}(\mathbb K)\times\ensuremath{{\rm{card}}}(\mathbb K)=\ensuremath{{\rm{card}}}(\mathbb K)<\kappa$. Then there exists $\sigma\in\bigcap_{\gamma\in\Gamma} \gamma$ by assumption. Clearly $(\forall a\in S)(a\le\sigma)$, thus $\sigma\in\ensuremath{\mathcal{UB}}(S)$; but we also have $(\forall b\in\ensuremath{\mathcal{UB}}(S))(\sigma\le b)$, so $\sigma=\sup(S)$. Thus $\mathbb K$ is Dedekind complete and therefore Archimedean by Lemma~\ref{L: order -> arch}.
\end{proof}
\begin{lemma}\label{L: nsa -> arch}
Let $\mathbb K$ be a totally ordered field. If $\mathcal F(^*\mathbb K)=\mathbb K\oplus\mathcal I(^*\mathbb K)$, in the sense that every finite number can be decomposed uniquely into the sum of an element from $\mathbb K$ and an element from $\mathcal I(^*\mathbb K)$, then $\mathbb K$ is Archimedean.
\end{lemma}
\begin{proof}
Suppose that $\mathbb K$ is non-Archimedean. Then there exists a $dx\in\mathcal I(\mathbb K)$ such that $dx\neq0$ by Proposition~\ref{P: 1 arch}. Now take $\alpha\in\mathcal F(^*\mathbb K)$ arbitrarily. By assumption there exists unique $k\in\mathbb K$ and $d\alpha\in\mathcal I(^*\mathbb K)$ such that $\alpha=k+d\alpha$. However, we know that $dx\in\mathcal I(^*\mathbb K)$ as well because $\mathbb{K}\subset{^*\mathbb{K}}$ and the ordering in $^*\mathbb{K}$ extends that of $\mathbb{K}$. Thus $(k+dx)+(d\alpha-dx)=k+d\alpha=\alpha$ where $k+dx\in\mathbb K$ and $d\alpha-dx\in\mathcal I(^*\mathbb K)$. This contradicts the uniqueness of $k$ and $d\alpha$. Therefore $\mathbb K$ is Archimedean.
\end{proof}
\section{Axioms of the Reals}\label{S: axioms}
In modern mathematics, the set of real numbers is most commonly defined in an axiomatic fashion as a totally ordered, Dedekind complete field; however, the result of Theorem~\ref{T: BIG ONE} presents multiple options for an axiomatic definition of $\mathbb{R}$. What follows are several different axiomatic definitions of the set of real numbers: the first two are based on Cantor completeness, the next three are sequential approaches, the sixth and seventh are based on properties of subsets, the eighth is based on non-standard analysis, and the last is based on an algebraic characterization.
\subsubsection*{Axioms of the Reals based on Cantor Completeness}
\begin{enumerate}[label=\textbf{C\arabic*.},ref=\textbf{C\arabic*}]
\item \label{D: R ckc}
$\mathbb{R}$ is the set that satisfies the following axioms
\begin{axiom-enum}
\item $\mathbb R$ is a totally ordered field.
\item $\mathbb R$ is Cantor $\kappa$-complete for any infinite cardinal $\kappa$ (Theorem~\ref{T: BIG ONE} number $(i)$).
\end{axiom-enum}
\item\label{D: R cc}
$\mathbb{R}$ is the set that satisfies the following axioms
\begin{axiom-enum}
\item $\mathbb R$ is a totally ordered Archimedean field.
\item $\mathbb R$ is Cantor complete (Theorem~\ref{T: BIG ONE} number $(iv)$).
\end{axiom-enum}
\end{enumerate}
\subsubsection*{Sequential Axioms of the Reals}
\begin{enumerate}[label=\textbf{S\arabic*.},ref=\textbf{S\arabic*}]
\item \label{D: R bw}
$\mathbb{R}$ is the set that satisfies the following axioms
\begin{axiom-enum}
\item $\mathbb R$ is a totally ordered field.
\item $\mathbb R$ is Bolzano-Weierstrass complete (Theorem~\ref{T: BIG ONE} $(v)$).
\end{axiom-enum}
\item\label{D: R mct}
$\mathbb{R}$ is the set that satisfies the following axioms
\begin{axiom-enum}
\item $\mathbb R$ is a totally ordered field.
\item $\mathbb R$ is monotone complete (Theorem~\ref{T: BIG ONE} $(iii)$).
\end{axiom-enum}
\item\label{D: R seq}
$\mathbb{R}$ is the set that satisfies the following axioms
\begin{axiom-enum}
\item $\mathbb R$ is a totally ordered Archimedean field.
\item $\mathbb R$ is sequentially complete (Theorem~\ref{T: BIG ONE} $(vii)$).
\end{axiom-enum}
\end{enumerate}
\subsubsection*{Set-Based Axioms of the Reals}
\begin{enumerate}[label=\textbf{B\arabic*.},ref=\textbf{B\arabic*}]
\item \label{D: R bol}
$\mathbb{R}$ is the set that satisfies the following axioms
\begin{axiom-enum}
\item $\mathbb{R}$ is a totally ordered field.
\item $\mathbb{R}$ is Bolzano complete (Theorem~\ref{T: BIG ONE} $(vi)$).
\end{axiom-enum}
\item \label{D: R ded}
$\mathbb{R}$ is the set that satisfies the following axioms
\begin{axiom-enum}
\item $\mathbb{R}$ is a totally ordered field.
\item $\mathbb{R}$ is Dedekind complete (Theorem~\ref{T: BIG ONE} $(viii)$).
\end{axiom-enum}
\end{enumerate}
\subsubsection*{Axioms of the Reals from Non-standard Analysis}
\begin{enumerate}[label=\textbf{N\arabic*.},ref=\textbf{N\arabic*}]
\item \label{D: R nsa}
$\mathbb{R}$ is the set that satisfies the following axioms
\begin{axiom-enum}
\item $\mathbb{R}$ is a totally ordered field.
\item $\mathbb{R}$ is Leibniz complete (Theorem~\ref{T: BIG ONE} $(ii)$ and Remark~\ref{R: NSA Unique}).
\end{axiom-enum}
\end{enumerate}
\subsubsection*{Algebraic Axioms of the Reals}
\begin{enumerate}[label=\textbf{A\arabic*.},ref=\textbf{A\arabic*}]
\item\label{D: R alg}
$\mathbb{R}$ is the set that satisfies the following axioms
\begin{axiom-enum}
\item $\mathbb{R}$ is a totally ordered Archimedean field.
\item $\mathbb{R}$ is Hilbert complete (Theorem~\ref{T: BIG ONE} $(ix)$).
\end{axiom-enum}
\end{enumerate}
Notice that definitions \ref{D: R alg}, \ref{D: R cc} and \ref{D: R seq} explicitly assert that $\mathbb{R}$ is an \emph{Archimedean} field, while the others do not. From the preceding section, we know that properties of the remaining definitions above (definitions \ref{D: R ckc}, \ref{D: R bw}, \ref{D: R mct}, \ref{D: R nsa}, \ref{D: R bol}, \ref{D: R ded}) are sufficient to establish the Archimedean property, but it should be clear that every non-Archimedean field is Hilbert complete, and as we will see in the next section, sequential completeness and Cantor completeness can hold in non-Archimedean fields as well.
\section{Non-standard Construction of the Real Numbers}\label{S: construct}
This section presents, very quickly, one way to construct a Dedekind complete field using the techniques of non-standard analysis; however, this construction cannot be considered a ``proof of existence of Dedekind complete fields'' because we rely on the assumption that every Archimedean field has cardinality at most $\mathfrak c$, where $\mathfrak c$ is defined in Corollary~\ref{C: ded order iso}. An alternative approach using non-standard analysis, one that does not rely on the assumption that Dedekind complete fields exist, can be found in Davis~\cite{davis}, but his method is somewhat outdated as it uses the Concurrence Theorem rather than the more current concept of saturation that we employ. It is our opinion that our method can be modified to remove the assumption mentioned above, while still being simpler than the approach used in Davis.
As well, we note that a slight refinement of the construction given here can be found in Hall \& Todorov~\cite{HallTodDedekind11}.
To begin, let $^*\mathbb{Q}$ be a $\mathfrak c^+$-saturated non-standard extension of $\mathbb{Q}$ and recall that $^*\mathbb{Q}$ is a totally ordered non-Archimedean field extension of $\mathbb{Q}$.
Now define $\overline{^*\mathbb{Q}}=:\mathcal{F}(^*\mathbb{Q})/\mathcal{I}(^*\mathbb{Q})$ (see \S~\ref{S: inf} for the definition of $\mathcal{F}$ and $\mathcal{I}$). Notice that $\overline{^*\mathbb{Q}}$ is a totally ordered Archimedean field since $\mathcal{F}(^*\mathbb{Q})$ is a totally ordered Archimedean ring and $\mathcal{I}(^*\mathbb{Q})$ is a maximal convex ideal in $\mathcal{F}(^*\mathbb{Q})$ (see Lemma~\ref{L: finite arch ring}). Let $q:\mathcal{F}(^*\mathbb{Q})\to\overline{^*\mathbb{Q}}$ be the canonical homomorphism.
\begin{theorem}
$\overline{^*\mathbb{Q}}$ is Dedekind complete.
\end{theorem}
\begin{proof}
Let $A\subset\overline{^*\mathbb{Q}}$ be non-empty and bounded from above. If either $A$ is finite or $A\cap\ensuremath{\mathcal{UB}}(A)\neq\emptyset$, then we are done as $\max(A)$ exists; thus, suppose $A$ is infinite and $A\cap\ensuremath{\mathcal{UB}}(A)=\emptyset$. Let $B=:\ensuremath{\mathcal{UB}}(A)$. Then by the Axiom of Choice, there exist functions $f:A\to {^*\mathbb{Q}}$ and $g:B\to{^*\mathbb{Q}}$ such that $f(a)\in a$ for all $a\in A$ and $g(b)\in b$ for all $b\in B$. Observe that $f(a)< g(b)$ for all $a\in A$ and $b\in B$ because $a<b$ by assumption. Consequently, the family $\{[f(a),g(b)]\}_{a\in A,b\in B}$ has the finite intersection property, and because $\overline{^*\mathbb{Q}}$ is Archimedean, we have $\ensuremath{{\rm{card}}}(A\times B)\le\ensuremath{{\rm{card}}}(\overline{^*\mathbb{Q}})\le\mathfrak c$ (see Corollary~\ref{C: arch card}). Thus, by the Saturation Principle and Theorem~\ref{T: non-arch completeness}, there exists $\gamma\in\bigcap_{a\in A,b\in B}[f(a),g(b)]$, at which point we clearly have $\sup A=q(\gamma)$.
\end{proof}
Thus $\overline{^*\mathbb{Q}}$ is a Dedekind complete field, and from Corollary~\ref{C: ded order iso} we know that $\overline{^*\mathbb{Q}}$ is order field isomorphic to the field of Dedekind cuts $\mathbb{R}$ under the mapping $q(\alpha)\mapsto C_\alpha$, where $C_\alpha=:\{p\in\mathbb{Q} : p<\alpha\}$ for $\alpha\in\mathcal{F}({^*\mathbb{Q}})$. What is interesting about this observation is that the sets $C_\alpha$ provide an explicit representation of a Dedekind cut using only $\mathbb{Q}$ and its non-standard extension $^*\mathbb{Q}$.
\section{Completeness of a Non-Archimedean Field}\label{S: non-arch completeness}
In this section we present some basic results concerning completeness of non-Archimedean fields. Most of the results in this section are due to H. Vernaeve~\cite{vernaeve}.
As before, $\kappa^+$ stands for the successor of $\kappa$ and $\aleph_1=\aleph_0^+$.
\begin{theorem}
Let $\mathbb K$ be an ordered field. If $\mathbb K$ is non-Archimedean and Cantor $\kappa$-complete (see Definition~\ref{D: completeness}), then $\kappa\le\ensuremath{{\rm{card}}}(\mathbb K)$.
\end{theorem}
\begin{proof}
Suppose, to the contrary, that $\kappa>\ensuremath{{\rm{card}}}(\mathbb K)$. Then $\mathbb K$ is Cantor $\ensuremath{{\rm{card}}}(\mathbb K)^+$-complete, and it follows from Lemma~\ref{L: gcantor -> arch} that $\mathbb{K}$ is Archimedean, a contradiction.
\end{proof}
It should be noted that in non-standard analysis there is a generalization of the following definition to what are known as \emph{internal sets}; for more information we refer to Lindstr\o m~\cite{lindstrom}.
\begin{definition}[Algebraic Saturation]\label{D: algebraic saturation}
Let $\kappa$ be an infinite cardinal. A totally ordered field $\mathbb K$ is \emph{algebraically $\kappa$-saturated} if every family $\{(a_\gamma,b_\gamma)\}_{\gamma\in \Gamma}$ of fewer than $\kappa$ open intervals in $\mathbb K$ with the F.I.P. (finite intersection property) has a non-empty intersection, $\bigcap_{\gamma\in \Gamma} (a_\gamma,b_\gamma)\neq \emptyset$. If $\mathbb{K}$ is algebraically $\aleph_1$-saturated -- i.e. every sequence of open intervals with the F.I.P. has a non-empty intersection -- then we simply say that $\mathbb{K}$ is \emph{algebraically saturated}. As well, we say that $\mathbb K$ is \emph{algebraically $\kappa$-saturated at infinity} if every collection of fewer than $\kappa$ elements from $\mathbb K$ is bounded. Also, $\mathbb{K}$ is \emph{algebraically saturated at infinity} if $\mathbb K$ is algebraically $\aleph_1$-saturated at infinity -- i.e. every countable subset of $\mathbb{K}$ is bounded.
\end{definition}
Notice that every totally ordered field is algebraically $\aleph_0$-saturated and algebraically $\aleph_0$-saturated at infinity (in a trivial way).
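To make the failure of saturation in Archimedean fields concrete, the following Python sketch (our illustration, not part of the original development) checks the standard counterexample in $\mathbb{Q}$: the countable family $\{(0,1/n)\}_{n\in\mathbb{N}}$ of open intervals has the F.I.P., yet any fixed positive element is excluded from $(0,1/n)$ as soon as $n>1/\epsilon$, so the full intersection is empty and $\mathbb{Q}$ is not algebraically $\aleph_1$-saturated.

```python
from fractions import Fraction

def finite_intersection(intervals):
    """Intersect finitely many open intervals (a, b); return (lo, hi) or None if empty."""
    lo = max(a for a, _ in intervals)
    hi = min(b for _, b in intervals)
    return (lo, hi) if lo < hi else None

# The countable family {(0, 1/n)} has the finite intersection property:
family = [(Fraction(0), Fraction(1, n)) for n in range(1, 200)]
assert finite_intersection(family) is not None

# ...yet no fixed positive element survives: a candidate eps > 0 is
# excluded from (0, 1/n) once n > 1/eps (this is the Archimedean property).
eps = Fraction(1, 1000)
n = 1001
assert not (Fraction(0) < eps < Fraction(1, n))
```

Only finitely many intervals can ever be intersected mechanically, of course; the Archimedean property supplies the reason no element lies in all of them.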
\begin{theorem}\label{T: sat <-> cantor}
Let $\mathbb K$ be an ordered field and $\kappa$ be an uncountable cardinal. Then the following are equivalent:
\begin{thm-enumerate}
\item $\mathbb K$ is algebraically $\kappa$-saturated
\item $\mathbb K$ is Cantor $\kappa$-complete and algebraically $\kappa$-saturated at infinity.
\end{thm-enumerate}
\end{theorem}
\begin{proof}
\mbox{}
\begin{description}
\item[$(i)\Rightarrow(ii)$:] Let $\mathcal C=:\{[a_{\gamma},b_{\gamma}]\}_{\gamma\in \Gamma}$ and $\mathcal O=:\{(a_{\gamma}, b_{\gamma})\}_{\gamma\in \Gamma}$ be families of fewer than $\kappa$ bounded closed and open intervals, respectively, where $\mathcal C$ has the F.I.P. If $a_k=b_p$ for some $k,p\in\Gamma$, then $\bigcap_{\gamma\in \Gamma}[a_{\gamma},b_{\gamma}]=\{a_k\}$ by the F.I.P. in $\mathcal C$. Otherwise, $\mathcal O$ has the F.I.P.; thus, there exists $\alpha\in\bigcap_{\gamma\in \Gamma} (a_{\gamma}, b_{\gamma})\subseteq\bigcap_{\gamma\in \Gamma}[a_{\gamma}, b_{\gamma}]$ by algebraic $\kappa$-saturation. Hence $\mathbb K$ is Cantor $\kappa$-complete. To show that $\mathbb K$ is algebraically $\kappa$-saturated at infinity, let $A\subset\mathbb K$ be a set with $\ensuremath{{\rm{card}}}(A)<\kappa$. Then $\bigcap_{a\in A}(a,\infty)\neq\emptyset$ by algebraic $\kappa$-saturation.
\item[$(ii)\Rightarrow(i)$:] Let $\{(a_{\gamma},b_{\gamma})\}_{\gamma\in \Gamma}$ be a family of fewer than $\kappa$ open intervals with the F.I.P. Without loss of generality, we can assume that each interval is bounded. As $\mathbb K$ is algebraically $\kappa$-saturated at infinity, there exists $\frac{1}{\rho} \in \ensuremath{\mathcal{UB}}(\{ \frac{1}{b_l -a_k} : l, k\in \Gamma \})$ (that is, $\frac{1}{b_l-a_k}\le\frac{1}{\rho}$ for all $l,k\in \Gamma$), which implies that $\rho>0$ and that $\rho$ is a lower bound of $\{b_l-a_k : l, k\in \Gamma\}$. Next, we show that the family $\{[a_\gamma+\frac{\rho}{2},b_\gamma-\frac{\rho}{2}]\}_{\gamma\in\Gamma}$ satisfies the F.I.P. Let $\gamma_1,\ldots,\gamma_n\in\Gamma$ and $\zeta=:\max_{k\le n}\{a_{\gamma_k} + \frac{\rho}{2}\}$. Then, for all $m\in\mathbb N$ such that $m\le n$, we have $a_{\gamma_m} + \frac{\rho}{2}\le \zeta \le b_{\gamma_m} - \frac{\rho}{2}$ by the definition of $\rho$; thus, $\zeta\in[a_{\gamma_m}+\frac{\rho}{2}, b_{\gamma_m}-\frac{\rho}{2}]$ for $m\le n$. By Cantor $\kappa$-completeness, there exists $\alpha\in\bigcap_{\gamma\in\Gamma} [a_{\gamma}+\frac{\rho}{2}, b_{\gamma}-\frac{\rho}{2}]\subseteq\bigcap_{\gamma\in\Gamma}(a_{\gamma},b_{\gamma})$.
\end{description}
\end{proof}
\begin{corollary}\label{C: sat -> seq}
Let $\mathbb K$ be an ordered field. If $\mathbb K$ is algebraically saturated, then every convergent sequence is eventually constant. Consequently, $\mathbb K$ is sequentially complete.
\end{corollary}
\begin{proof}
Let $x_n\to L$ and assume that $\{x_n\}$ is not eventually constant. Then there exists a subsequence $\{x_{n_k}\}$ such that $\delta_k=:|x_{n_k}-L|>0$ for all $k\in\mathbb N$. Thus, there exists $\epsilon\in\bigcap_{k\in\mathbb N} (0,\delta_k)$ by algebraic saturation; hence $0<\epsilon<\delta_k$ for all $k\in\mathbb N$, which contradicts $\delta_k\to0$. Finally, suppose $\{x_n\}$ is a Cauchy sequence and observe that $|x_{n+1}-x_n|\to0$. Thus, by what we just proved, $|x_{n+1}-x_n|=0$ for sufficiently large $n\in\mathbb N$. Hence $\{x_n\}$ is eventually constant and, in particular, convergent.
\end{proof}
\begin{corollary}\label{C: cantor -> sequential}
Let $\mathbb K$ be an ordered field. If $\mathbb K$ is Cantor complete, but not algebraically saturated, then:
\begin{thm-enumerate}
\item $\mathbb{K}$ has an increasing unbounded sequence.
\item $\mathbb{K}$ is sequentially complete.
\end{thm-enumerate}
\end{corollary}
\begin{proof}
\mbox{}
\begin{thm-enumerate}
\item By Theorem~\ref{T: sat <-> cantor} we know that $\mathbb K$ has a subset $A\subset\mathbb K$ that is unbounded. Let $x_1\in A$ be arbitrary. Now assume that $x_n$ has been defined, then there exists $c\in A$ such that $x_n<c$ by assumption; define $x_{n+1}=:c$. Using this inductive definition (and the axiom of choice), we find that $\{x_n\}$ is an increasing unbounded sequence in $\mathbb K$.
\item By Part $(i)$ of this corollary, we know there exists an unbounded increasing sequence $\{\frac{1}{\epsilon_n}\}$. We observe that $\{\epsilon_n\}$ is a decreasing never-zero sequence that converges to zero. Let $\{x_n\}$ be a Cauchy sequence in $\mathbb K$. For all $n\in\mathbb N$, we define $S_n=:[x_{m_n} - \epsilon_n, x_{m_n} + \epsilon_n]$, where $m_n=:\min\{k\in\mathbb N : (\forall l,j\in\mathbb N)(k\le l,j\Rightarrow |x_l-x_j|<\epsilon_n)\}$ (which exists as $\{x_n\}$ is a Cauchy sequence). To show that the family $\{S_n\}_{n\in\mathbb{N}}$ satisfies the finite intersection property, let $A\subset \mathbb N$ be finite and $\rho=:\max(A)$; then we observe that $x_{m_{\rho}}\in S_k$ for any $k\in A$ because $m_k\le m_{\rho}$. Therefore there exists $L\in\bigcap_{k=1}^{\infty}S_k$ by Cantor completeness. To show that $x_n\to L$, we first observe that, given any $\delta\in\mathbb K_+$, we can find an $n\in\mathbb N$ such that $2\epsilon_n<\delta$ because $\{\epsilon_n\}$ converges to zero. As well, we note that $L\in S_n$ and that the width of $S_n$ is $2\epsilon_n$ for all $n\in\mathbb N$. Thus, given $\delta\in\mathbb K_+$ we can find $n\in\mathbb N$ such that $2\epsilon_n<\delta$, and, because $(\forall l\in\mathbb N)(m_n\le l\Rightarrow x_l\in S_n)$, we have $(\forall l\in\mathbb N)(m_n\le l\Rightarrow |L-x_l|<2\epsilon_n<\delta)$.
\end{thm-enumerate}
\end{proof}
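The construction in part $(ii)$ is concrete enough to run on an example. The Python sketch below (our illustration; the tail bound $m_n=n+1$ is specific to this particular sequence) builds the nested closed intervals $S_n=[x_{m_n}-\epsilon_n,\,x_{m_n}+\epsilon_n]$ for the Cauchy sequence of partial sums $x_k=\sum_{j=1}^{k+1}2^{-j}$ in $\mathbb{Q}$, with $\epsilon_n=2^{-(n+1)}$, and checks that the family has the finite intersection property and that the limit $L=1$ lies in every $S_n$.

```python
from fractions import Fraction as F

# Cauchy sequence in Q: x[k] = sum_{j=1}^{k+1} 2^{-j} = 1 - 2^{-(k+1)}, -> 1.
x = [sum(F(1, 2**j) for j in range(1, k + 2)) for k in range(60)]
eps = [F(1, 2**(n + 1)) for n in range(20)]   # decreasing, never zero, -> 0

def m(n):
    # For l, j >= n + 1 we have |x_l - x_j| < 2^{-(n+2)} < eps[n],
    # so the index m_n = n + 1 witnesses the Cauchy tail condition here.
    return n + 1

# the closed intervals S_n = [x_{m_n} - eps_n, x_{m_n} + eps_n]
S = [(x[m(n)] - eps[n], x[m(n)] + eps[n]) for n in range(len(eps))]

lo = max(a for a, _ in S)
hi = min(b for _, b in S)
assert lo <= hi                              # finite intersection property
assert all(a <= F(1) <= b for a, b in S)     # the limit L = 1 is in every S_n
```

In $\mathbb{Q}$ the common point of all the $S_n$ happens to be rational; Cantor completeness is exactly the guarantee that such a point exists in general.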
The previous two corollaries can be summarized in the following result.
\begin{corollary}\label{T: non-arch completeness}
Let $\mathbb K$ be an ordered field and let $\kappa$ be an uncountable cardinal. Then we have the following implications:
$$\mathbb K\textrm{ is algebraically }\kappa\textrm{-saturated}\implies\mathbb K\textrm{ is Cantor }\kappa\textrm{-complete}\implies \mathbb K\textrm{ is sequentially complete.}$$
\end{corollary}
\begin{proof}
The first implication follows from Theorem~\ref{T: sat <-> cantor}. For the second implication we have two cases: either $\mathbb{K}$ is algebraically saturated or it is not. The first case is covered by Corollary~\ref{C: sat -> seq} and the second by Corollary~\ref{C: cantor -> sequential}.
\end{proof}
\section{Valuation Fields}\label{S: val}
Before we begin with the examples in the next section, we quickly define \emph{ordered valuation fields} for later use; for readers interested in the subject we refer to P. Ribenboim~\cite{riben}, A.H. Lightstone \& A. Robinson~\cite{lightstone}, and Todorov~\cite{todor-inf}. In what follows, let $\mathbb K$ be an ordered field.
\begin{definition}[Ordered Valuation Field]
The mapping $v:\mathbb K\to\mathbb R\cup\{\infty\}$ is called a \emph{non-Archimedean valuation} on $\mathbb K$ if, for every $x,y\in\mathbb K$,
\begin{def-enumerate}
\item $v(x)=\infty$ if and only if $x=0$
\item $v(xy)=v(x)+v(y)$ (Logarithmic property)
\item $v(x + y)\ge\min\{v(x), v(y)\}$ (Non-Archimedean property)
\item $|x| < |y|$ implies $v(x) \ge v(y)$ (Convexity property)
\end{def-enumerate}
The structure $(\mathbb K,v)$ is called an \emph{ordered valuation field}. As well, a valuation $v$ is \emph{trivial} if $v(x)=0$ for all $x\in\mathbb K\setminus\{0\}$; otherwise, $v$ is \emph{non-trivial}.
\end{definition}
\begin{remark}[Krull's Valuation]
It is worth noting that the definition given above can be considered as a specialized version of that given by Krull: a valuation is a mapping $v:\mathbb K\to G\cup\{\infty\}$ where $G$ is an ordered abelian group.
\end{remark}
\section{Examples of Sequentially and Spherically Complete Fields}\label{S: examples 1}
Although they may not be as well known as Archimedean fields, non-Archimedean fields have been used in a variety of different settings: most notably to produce non-standard models in the model theory of fields and to provide a field for non-standard analysis. In an effort to make non-Archimedean fields seem less exotic than they may initially appear, we have compiled a very modest list of non-Archimedean fields that have been used in different areas of mathematics. Our first few examples are fields of formal power series, which should be more familiar to the reader, as the use of these fields as generalized scalars in analysis predates A. Robinson's work in non-standard analysis \cite{todor-asymp}, and they are used quite often as non-standard models in model theory. For more information on these fields, we direct the reader to D. Laugwitz~\cite{laugwitz} and Todorov \& Wolf~\cite{todor-wolf}. The last of the examples are based on A. Robinson's theory of non-standard extensions. The key distinction that we would like to emphasize between these two forms of non-Archimedean fields is that the fields of power series are at most spherically complete, while the fields from non-standard analysis are always Cantor complete, if not algebraically saturated.
In what follows, $\mathbb{K}$ is a totally ordered field. As well, in the next four examples we present sets of series which we assume have been supplied with the usual operations of \emph{polynomial-like} addition and multiplication. Under this assumption, each of the following sets is in fact a field.
\begin{example}[Hahn Series]
Let $\ensuremath{{\rm{supp}}}(f)$ denote the \emph{support} of $f$ (i.e. the values in the domain at which $f$ is non-zero). The field of \emph{Hahn series} is defined to be the set $$\mathbb K((t^{\mathbb R}))=: \left\{\sum_{r\in\mathbb R} a_rt^r : a_r\in\mathbb K\textrm{ and }\ensuremath{{\rm{supp}}}(r\mapsto a_r)\subseteq\mathbb R\textrm{ is well ordered}\right\}$$ which can be supplied with the \emph{canonical valuation} $\nu:\mathbb K((t^{\mathbb R}))\to\mathbb R\cup\{\infty\}$ defined by $\nu(0)=:\infty$ and $\nu(A)=:\min(\ensuremath{{\rm{supp}}}(r\mapsto a_r))$ for all non-zero $A=\sum_{r\in\mathbb R} a_rt^r\in\mathbb K((t^{\mathbb R}))$. As well, $\mathbb K((t^{\mathbb R}))$ has a natural ordering given by
\[\mathbb K((t^{\mathbb R}))_+=:\left\{A=\sum_{r\in\mathbb R} a_rt^r \in\mathbb K((t^{\mathbb R})) : a_{\nu(A)}>0\right\}\]
\end{example}
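The canonical valuation is easy to experiment with. The following Python sketch (ours, restricted to finite-support series, which suffices to exercise the axioms of the previous section) represents a series as a dictionary mapping exponents to coefficients, takes $\nu$ to be the minimum of the support, and spot-checks the logarithmic and non-Archimedean properties.

```python
from fractions import Fraction

def mul(A, B):
    """Product of finite-support series, each given as {exponent: coefficient}."""
    C = {}
    for r, a in A.items():
        for s, b in B.items():
            C[r + s] = C.get(r + s, 0) + a * b
    return {e: c for e, c in C.items() if c != 0}

def add(A, B):
    """Sum of finite-support series; cancelling terms are dropped."""
    C = dict(A)
    for e, c in B.items():
        C[e] = C.get(e, 0) + c
    return {e: c for e, c in C.items() if c != 0}

def nu(A):
    """Canonical valuation: the minimum of the support, with nu(0) = infinity."""
    return min(A) if A else float('inf')

x = {Fraction(1, 2): 3, Fraction(2): -1}   # 3 t^{1/2} - t^2
y = {Fraction(-1): 2, Fraction(0): 5}      # 2 t^{-1} + 5

assert nu(mul(x, y)) == nu(x) + nu(y)       # logarithmic property
assert nu(add(x, y)) >= min(nu(x), nu(y))   # non-Archimedean property
assert nu({}) == float('inf')               # nu(0) = infinity
```

The logarithmic property holds in general here because the coefficient field is an integral domain, so the leading coefficients of a product cannot cancel.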
\begin{remark}
For our purposes here, the preceding definition of the field of Hahn series is sufficient; however, there is a more general definition in which the additive group $\mathbb{R}$ is replaced with an abelian ordered group $G$.
\end{remark}
\begin{example}[Levi-Civita]
The field of Levi-Civita series is defined to be the set $$\mathbb K\langle t^{\mathbb R}\rangle=:\left\{\sum_{n=0}^{\infty}a_nt^{r_n} : a_n\in\mathbb K \textrm{ and }\{r_n\}\textrm{ is strictly increasing and unbounded in }\mathbb R\right\}.$$ As $\{r_n : n\in\mathbb N\}$ is well-ordered whenever $\{r_n\}$ is strictly increasing, we can embed $\mathbb K\langle t^{\mathbb R}\rangle$ into $\mathbb K((t^{\mathbb R}))$ in an obvious way: $\sum_{n=0}^{\infty}a_nt^{r_n}\mapsto\sum_{k\in\mathbb R}\beta_kt^k$, where $\beta_{r_n}=:a_n$ for all $n\in\mathbb N$ and $\beta_k=:0$ for $k\not\in\textrm{range}(\{r_n\})$. Thus, $\mathbb K\langle t^{\mathbb R}\rangle$ can be ordered by the ordering inherited from $\mathbb K((t^{\mathbb R}))$.
\end{example}
\begin{example}[Laurent Series]
The field
$$\mathbb K(t^{\mathbb Z})=:\left\{\sum_{n=m}^{\infty}a_nt^n : a_n\in\mathbb K \textrm{ and }m\in\mathbb Z\right\}$$ of formal \emph{Laurent series} has a very simple embedding into $\mathbb K\langle t^{\mathbb R}\rangle$, and, thus, $\mathbb K(t^{\mathbb Z})$ is an ordered field using the ordering inherited from $\mathbb K\langle t^{\mathbb R}\rangle$.
\end{example}
\begin{example}[Rational Functions]
The field
$$\mathbb K(t)=:\left\{\frac{P(t)}{Q(t)} : P, Q\in\mathbb K[t]\textrm{ and }Q\not\equiv0\right\}$$ of \emph{rational functions} can be embedded into $\mathbb K(t^{\mathbb Z})$ by associating each rational function with its Laurent expansion about zero; thus $\mathbb K(t)$ inherits the ordering from $\mathbb K(t^{\mathbb Z})$. As well, we can consider $\mathbb K\subset\mathbb K(t)$ under the canonical embedding given by $a\mapsto f_a$, where $f_a(t)\equiv a$ for all $a\in\mathbb K$.
\end{example}
As we mentioned above, the fields defined in Examples 1--4 fit into the chain $$\mathbb K\subset \mathbb K(t)\subset \mathbb K(t^{\mathbb Z})\subset \mathbb K\langle t^{\mathbb R}\rangle\subset \mathbb K((t^{\mathbb R})).$$ We claim that all of these extensions of $\mathbb K$ are in fact non-Archimedean (regardless of whether $\mathbb K$ is Archimedean or not). But this is simple: because each field is a subfield of those that follow it, all we have to show is that $\mathbb K(t)$ is non-Archimedean, and this follows from the observation that $\sum_{n=1}^{\infty}t^n=\frac{t}{1-t}\in\mathbb K(t)$ is a non-zero infinitesimal and $\sum_{n=-1}^{\infty} t^n=\frac{1}{t-t^2}\in\mathbb K(t)$ is infinitely large -- that is, $0<\frac{t}{1-t}<\frac{1}{n}$ and $\frac{1}{t-t^2}>n$ for all $n\in\mathbb{N}$.
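This observation can be checked mechanically. The Python sketch below (our illustration) implements the ordering on $\mathbb{Q}(t)$ induced by the Laurent expansion at $t=0$: a non-zero $P/Q$ is positive exactly when the lowest-order coefficients of $P$ and $Q$ have the same sign. With this ordering we verify that $t/(1-t)$ is a positive infinitesimal, i.e. $0<t/(1-t)<1/n$ for every tested $n$.

```python
from fractions import Fraction as F

def poly_sub(p, q):
    """Coefficient-wise difference of polynomials given as coefficient lists."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) - (q[i] if i < len(q) else 0)
            for i in range(n)]

def poly_mul(p, q):
    """Product of polynomials given as coefficient lists (index = power of t)."""
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def ord_and_lead(p):
    """Order of vanishing at t = 0 and the corresponding coefficient."""
    for i, c in enumerate(p):
        if c != 0:
            return i, c
    return None  # zero polynomial

def is_positive(P, Q):
    """P/Q > 0 iff the lowest-order coefficients of P and Q (Q nonzero) agree in sign."""
    oP, oQ = ord_and_lead(P), ord_and_lead(Q)
    return oP is not None and oP[1] * oQ[1] > 0

def rat_lt(P1, Q1, P2, Q2):
    """P1/Q1 < P2/Q2 iff (P2*Q1 - P1*Q2)/(Q1*Q2) is positive."""
    num = poly_sub(poly_mul(P2, Q1), poly_mul(P1, Q2))
    return is_positive(num, poly_mul(Q1, Q2))

# f = t/(1-t): numerator [0, 1], denominator [1, -1]
fP, fQ = [F(0), F(1)], [F(1), F(-1)]
assert rat_lt([F(0)], [F(1)], fP, fQ)          # 0 < f
for n in range(1, 100):
    assert rat_lt(fP, fQ, [F(1, n)], [F(1)])   # f < 1/n for every n tested
```

The ordering via the lowest-order Laurent coefficient is exactly the ordering inherited from $\mathbb K(t^{\mathbb Z})$ described in the example above.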
\begin{theorem}\label{T: series real-closed}
If $\mathbb K$ is real closed, then both $\mathbb K\langle t^{\mathbb R}\rangle$ and $\mathbb K((t^{\mathbb R}))$ are real closed.
\end{theorem}
\begin{proof}
See Prestel~\cite{prestel}.
\end{proof}
\begin{definition}[Valuation Metric]
Let $(\mathbb{K},v)$ be an ordered valuation field, then the mapping $d_v:\mathbb{K}\times\mathbb{K}\to\mathbb{R}$ given by $d_v(x,y)=e^{-v(x-y)}$, where $e^{-\infty}=0$, is the \emph{valuation metric} on $(\mathbb{K},v)$. We denote by $(\mathbb{K},d_v)$ the corresponding metric space. Further, if $c\in\mathbb{K}$ and $r\in\mathbb{R}_+$, then we define the corresponding sets of open and closed balls by
\begin{align*}
B(c, r)=:\{k\in\mathbb{K} : d_v(c,k)<r\}\\
\overline{B}(c,r)=:\{k\in\mathbb{K} : d_v(c,k)\le r\}
\end{align*}
respectively.
\end{definition}
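The non-Archimedean property of the valuation makes $d_v$ an ultrametric, i.e. $d_v(x,z)\le\max\{d_v(x,y),d_v(y,z)\}$. The Python sketch below (ours, using finite-support series and the order valuation as in the Hahn series example) spot-checks this strengthened triangle inequality on random examples.

```python
import math
import random

def v(A):
    """Order valuation on a finite-support series {exponent: coeff}; v(0) = +inf."""
    return min(A) if A else math.inf

def d(A, B):
    """Valuation metric d_v(x, y) = e^{-v(x - y)}, with e^{-inf} = 0."""
    diff = {e: A.get(e, 0) - B.get(e, 0) for e in set(A) | set(B)}
    diff = {e: c for e, c in diff.items() if c != 0}
    return math.exp(-v(diff)) if diff else 0.0

random.seed(0)
def rand_series():
    return {random.randint(-3, 3): random.randint(-5, 5) for _ in range(3)}

for _ in range(200):
    x, y, z = rand_series(), rand_series(), rand_series()
    # v(x - z) >= min(v(x - y), v(y - z)) translates into the ultrametric bound:
    assert d(x, z) <= max(d(x, y), d(y, z)) + 1e-12
    assert d(x, y) == d(y, x) and d(x, x) == 0.0
```

The small tolerance only guards against floating-point rounding in `math.exp`; the underlying inequality on valuations is exact.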
\begin{definition}[Spherically Complete]
A metric space is \emph{spherically complete} if every nested sequence of closed balls has a non-empty intersection.
\end{definition}
From the definition, it should be clear that every spherically complete metric space is sequentially complete.
\begin{example}
The metric space $(\mathbb{R}((t^{\mathbb{R}})),d_v)$ is spherically complete. A proof of this can be found in W. Krull~\cite{krull} and Theorem~2.12 of W.A.J. Luxemburg~\cite{luxemburg-valuation}.
\end{example}
\begin{example}
Both $\mathbb{R}(t^{\mathbb Z})$ and $\mathbb{R}\langle t^{\mathbb R}\rangle$ are sequentially complete fields. A proof of this can be found in D. Laugwitz~\cite{laugwitz}.
\end{example}
\section{Examples of Cantor Complete and Saturated Fields}\label{S: examples 2}
All the examples given from this point on are based on the non-standard analysis of A. Robinson; for an introduction to the area we refer the reader to either Robinson~\cite{robinson1} \cite{robinson2}, Luxemburg~\cite{luxemburg}, Davis~\cite{davis}, Lindstr\o m~\cite{lindstrom} or Cavalcante~\cite{cavalcante}. For axiomatic introductions in particular, we refer the reader to Lindstr\o m~\cite{lindstrom} (p. 81-83) and Todorov~\cite{todor-axiomatic} (p. 685-688).
As we remarked earlier, what is so particularly interesting about these fields is that they are all Cantor complete, a property that none of the fields of formal power series with real coefficients share (i.e. the fields obtained by replacing $\mathbb{K}$ with $\mathbb{R}$).
\begin{example}
Let $\kappa$ be an infinite cardinal and $^*\mathbb R$ be a $\kappa$-saturated non-standard extension of $\mathbb R$ (see Lindstr\o m~\cite{lindstrom}). Then $^*\mathbb R$ is a non-Archimedean, real closed, algebraically $\kappa$-saturated field (see Definition~\ref{D: algebraic saturation}) with $\mathbb R\subset{^*\mathbb R}$. It follows from Theorem~\ref{T: sat <-> cantor} that $^*\mathbb R$ is Cantor $\kappa$-complete.
\end{example}
\begin{definition}[Convex Subring]
Let $F$ be an ordered ring and $R\subset F$ be a ring. Then $R$ is a \emph{convex subring} if, for all $x\in F$ and $y\in R$, $0\le|x|\le|y|$ implies that $x\in R$. Similarly, an ideal $I$ in $R$ is called convex if, for every $x\in R$ and $y\in I$, $0\le|x|\le|y|$ implies that $x\in I$.
\end{definition}
\begin{example}[Robinson's Asymptotic Field]\label{E: robinson asymptotic}
Let $^*\mathbb R$ be a non-standard extension of $\mathbb R$ and $\rho$ be a positive infinitesimal in $^*\mathbb{R}$. We define the sets of non-standard $\rho$-\emph{moderate} and $\rho$-\emph{negligible} numbers to be
\begin{eqnarray*}
\mathcal M_{\rho}(^*\mathbb R)&=:&\{\zeta\in{^*\mathbb R} : |\zeta|\le \rho^{-m} \textrm{ for some } m\in\mathbb N\}\\
\mathcal N_{\rho}(^*\mathbb R)&=:&\{\zeta\in{^*\mathbb R} : |\zeta| < \rho^n \textrm{ for all } n\in\mathbb N\}
\end{eqnarray*}
respectively. The \emph{Robinson field of real $\rho$-asymptotic numbers} (A. Robinson~\cite{robinson1},\cite{robinson2}) is the factor ring ${^{\rho}\mathbb R}=:\mathcal M_{\rho} / \mathcal N_{\rho}$. As it is not hard to show that $\mathcal M_{\rho}$ is a convex subring and $\mathcal N_{\rho}$ is a maximal convex ideal, it follows that $^{\rho}\mathbb R$ is an orderable field. From Todorov \& Vernaeve~\cite{todor} (Theorem 7.3, p. 228) we know that $^\rho\mathbb{R}$ is real-closed. As well, $^{\rho}\mathbb R$ is not algebraically saturated, as the sequence $\{\rho^{-n}\}_{n\in\mathbb N}$ is unbounded and increasing (see Corollary~\ref{C: cantor -> sequential}). Combining this observation with the fact that $^{\rho}\mathbb R$ is Cantor complete (Todorov \& Vernaeve~\cite{todor-asymp}, Theorem 10.2, p. 24), we can apply Corollary~\ref{C: cantor -> sequential} to find that $^{\rho}\mathbb R$ is sequentially complete.
\end{example}
There are several interesting properties of $^\rho\mathbb{R}$ that distinguish it from other fields: it is non-Archimedean; it is not algebraically saturated at infinity (see Definition~\ref{D: algebraic saturation}); and it is first countable (the sequence of intervals $(-s^n, s^n)$, where $s$ is the image of $\rho$ under the quotient mapping from $\mathcal{M}_\rho$ to $^\rho\mathbb{R}$, forms a basis for the neighborhoods of $0$). As well, the field of Hahn series $\mathbb{R}((t^\mathbb{R}))$ can be embedded into $^\rho\mathbb{R}$ (see Todorov \& Wolf~\cite{todor-wolf}) by mapping a Hahn series $\sum_{r\in\mathbb{R}} a_rt^r$ to the series $\sum_{r\in\mathbb{R}} a_rs^r$ in $^\rho\mathbb{R}$ -- which converges in $^\rho\mathbb{R}$, but not in $^*\mathbb{R}$! To summarize, we present the following chain of inclusions, extending the one given in the preceding section.
\[\mathbb R\subset \mathbb R(t)\subset \mathbb R(t^{\mathbb Z})\subset \mathbb R\langle t^{\mathbb R}\rangle\subset \mathbb R((t^{\mathbb R}))\subset {^\rho\mathbb{R}}\]
Thus ${^{\rho}\mathbb R}$ is a totally ordered, non-Archimedean, real closed, sequentially complete, and Cantor complete field that is not algebraically saturated and contains all of the fields of formal power series listed above.
\begin{example}
Let $\mathcal M\subset{^*\mathbb R}$ be a convex subring of $^*\mathbb R$ and let $\mathcal I_{\mathcal M}$ be the set of non-invertible elements of $\mathcal M$. Then $\widehat{\mathcal M}=:\mathcal M/\mathcal I_{\mathcal M}$ is a real closed, Cantor complete field. If $\mathcal M=\mathcal F(^*\mathbb R)$, then $\widehat{\mathcal M}=\mathbb R$; otherwise, $\widehat{\mathcal M}$ is non-Archimedean. These fields $\widehat{\mathcal M}$ are referred to as $\mathcal M$-asymptotic fields. It should also be noted that for some $\mathcal M$ it might be that $\widehat{\mathcal M}$ is saturated. For more discussion of these fields we refer to Todorov~\cite{todor-lecturenotes}. Notice that $^\rho\mathbb{R}$ is the particular $\mathcal{M}$-asymptotic field given by $\mathcal{M}=\mathcal{M}_\rho$.
\end{example}
\begin{remark}
It may be worth noting that, in $^\rho\mathbb{R}$, the metric topology generated from the valuation metric, and the order topology are equivalent.
\end{remark}
\input{refs.tex}
\end{document}
| {
"timestamp": "2011-02-01T02:00:30",
"yymm": "1101",
"arxiv_id": "1101.5652",
"language": "en",
"url": "https://arxiv.org/abs/1101.5652",
"abstract": "The main goal of this project is to prove the equivalency of several characterizations of completeness of Archimedean ordered fields; some of which appear in most modern literature as theorems following from the Dedekind completeness of the real numbers, while a couple are not as well known and have to do with other areas of mathematics, such as nonstandard analysis. Continuing, we study the completeness of non-Archimedean fields, and provide several examples of such fields with varying degrees of properties, using nonstandard analysis to produce some relatively \"nice\" (in particular, they are Cantor complete) final examples. As a small detour, we present a short construction of the real numbers using methods from nonstandard analysis.",
"subjects": "Logic (math.LO)",
"title": "Completeness of Ordered Fields"
} |
https://arxiv.org/abs/0808.2160 | Local energy estimates for the finite element method on sharply varying grids | Local energy error estimates for the finite element method for elliptic problems were originally proved in 1974 by Nitsche and Schatz. These estimates show that the local energy error may be bounded by a local approximation term, plus a global "pollution" term that measures the influence of solution quality from outside the domain of interest and is heuristically of higher order. However, the original analysis of Nitsche and Schatz is restricted to quasi-uniform grids. We present local a priori energy estimates that are valid on shape regular grids, an assumption which allows for highly graded meshes and which much more closely matches the typical practical situation. Our chief technical innovation is an improved superapproximation result. | \section{Introduction}
In this note we prove local energy error estimates for the finite element method for second-order linear elliptic problems on highly refined triangulations. Most a priori error analyses for the finite element method in norms other than the global energy norm place severe restrictions on the mesh. In particular, such error analyses are most often carried out under the assumption that the grid is {\it quasi uniform}, that is, all simplices in the mesh are required to have diameter equivalent to some fixed parameter $h$. The typical practical situation is rather different. Many (especially adaptive) finite element codes enforce only {\it shape regularity} of elements, meaning that all elements in the mesh must have bounded aspect ratio. Though it places a weak restriction upon the rate with which the diameters of elements in the mesh may change, shape regularity allows for the locally refined meshes that are needed to resolve the singularities and other sharp local variations of the solution that occur in the majority of practical applications.
In the work \cite{NS74} of Nitsche and Schatz, local energy error estimates were established for interior subdomains under the assumption that the finite element grid is quasi-uniform. Such local energy estimates are helpful in understanding basic error behavior, especially ``pollution effects'' of global solution properties on local approximation quality, and they also provide an important technical tool in many proofs of pointwise bounds for the finite element method (cf. \cite{SW95}). In addition, the most relevant error notion in applications is often related to some {\it local} norm or functional instead of to the global energy error, as evidenced by the recent surge of interest in ensuring control of the error in calculating ``quantities of interest'' in adaptive finite element calculations instead of merely controlling the default global energy error (cf. \cite{BR01}). As a final example of the applicability of local energy estimates, we mention that the estimates of \cite{NS74} have been used to justify certain approaches to parallelization and adaptive meshing (cf. \cite{BH00}). Thus local energy estimates are of broad and fundamental importance in finite element theory.
Here we prove local energy error estimates under the assumption that the finite element triangulation is shape regular instead of under the more restrictive assumption of quasi uniformity required in \cite{NS74}. In other words, we essentially prove that the results of Nitsche and Schatz hold under the restrictions typically placed upon meshes in practical codes, which in particular allow for highly graded grids. Our main innovation is a novel ``superapproximation'' result which we state and prove in \S2. In \S3 we then prove a local energy bound that is valid on grids that are only assumed to be shape-regular. As in \cite{NS74}, our results are valid for operators that are only {\it locally} elliptic, so that the PDE under consideration may be degenerate or change type outside of the domain of interest. In contrast to \cite{NS74}, the results we present here are valid up to the domain boundary, allow for nonhomogeneous Neumann, Dirichlet, and mixed boundary conditions, and also require only $L_\infty$ regularity of the coefficients of the differential operator.
\section{An improved superapproximation result}
\label{sec2}
An essential feature of the proofs of local error estimates given in \cite{NS74}, and also of essentially all published proofs of local and maximum-norm a priori error estimates for finite element methods, is the use of superapproximation properties. In essence, superapproximation bounds establish that a function in the finite element space multiplied by any smooth function can be approximated exceptionally well by the finite element space.
In order to fix thoughts, we shall in this section assume for simplicity that $\Omega \subset \mathbb{R}^n$ is a polyhedral domain; a more general situation is considered in \S3 below. Let $\mathcal{T}_h$ be a simplicial decomposition of $\Omega$. Denote by $h_T$ the diameter of the element $T \in \mathcal{T}_h$. We assume throughout that the elements in $\mathcal{T}_h$ are shape-regular, that is, each simplex $T \in \mathcal{T}_h$ contains a ball of diameter $c_1 h_T$ and is contained in a ball of radius $C_1 h_T$, where $c_1$ and $C_1$ are fixed. Let also $S_h^r$ be a standard Lagrange finite element space consisting of continuous piecewise polynomials of degree $r-1$. We shall use standard notation for Sobolev spaces, norms, and seminorms, e.g., $\|u\|_{H^1(\Omega)}=(\int_\Omega (u^2+|\nabla u|^2) \hspace{2pt}{\rm d}x )^{1/2}$, $|u|_{W_p^k(\Omega)}=(\sum_{|\alpha|=k} \|D^\alpha u\|_{L_p(\Omega)}^p)^{1/p}$, etc.
A standard superapproximation result is as follows. Let $\omega \in C^\infty(\Omega)$ with $|\omega|_{W_\infty^j(\Omega)} \le C d^{-j}$, $0 \le j \le r$. Then for each $\chi \in S_h^r$, there exists $\eta \in S_h^r$ such that for each $T \in \mathcal{T}_h$ satisfying $d \ge h_T$,
\begin{equation}
\|\omega \chi-\eta\|_{H^1(T)} \le C( \frac{h_T}{d} \|\nabla \chi\|_{L_2(T)}+ \frac{h_T}{d^2} \|\chi\|_{L_2(T)}).
\label{eq2-1}
\end{equation}
Our modified result follows (cf. \cite{Guz06}).
\begin{theorem}
\label{t2-1}
Let $\omega \in C^\infty(\Omega)$ with $|\omega|_{W_\infty^j(\Omega)} \le C d^{-j}$ for $0 \le j \le r$. Then for each $\chi \in S_h^r$, there exists $\eta \in S_h^r$ such that for each $T \in \mathcal{T}_h$ satisfying $d \ge h_T$,
\begin{equation}
\| \omega^2 \chi - \eta\|_{H^1(T)} \le C (\frac{h_T}{d} \|\nabla (\omega \chi)\|_{L_2(T)}+ \frac{h_T}{d^2} \|\chi\|_{L_2(T)}).
\label{eq2-2}
\end{equation}
\end{theorem}
\begin{remark} {\rm
There are two differences between (\ref{eq2-1}) and (\ref{eq2-2}). First, in (\ref{eq2-1}) we consider approximation of $\omega \chi$, whereas in (\ref{eq2-2}) we consider approximation of $\omega^2 \chi$. Secondly, in (\ref{eq2-1}) the norms on the right hand side involve only $\chi$, whereas in (\ref{eq2-2}) the $H^1$ seminorm involves $\omega \chi$. If we think of $\omega$ as a cutoff function, this distinction becomes vitally important: $\omega \chi$ has the same support as $\omega^2 \chi$, whereas the support of $\chi$ is generally larger than that of $\omega \chi$. This seemingly minor difference will allow us to establish local energy estimates on grids that are only assumed to be shape regular.
}\end{remark}
\begin{proof} Let $I_h:C^0(\Omega) \rightarrow S_h^r$ be the standard Lagrange interpolant. We shall choose $\eta=I_h (\omega^2 \chi)$ in (\ref{eq2-2}). For $T \in \mathcal{T}_h$, we may use standard approximation theory (cf. \cite{BS02}) to calculate
\begin{equation}
\begin{split}
\|\omega^2 \chi-I_h(\omega^2\chi)\|_{H^1(T)} \le & C h_T^{n/2} \|\omega^2 \chi-I_h (\omega^2 \chi)\|_{W_\infty^1(T)}
\\ \le & C h_T^{n/2+r-1} | \omega^2 \chi|_{W_\infty^{r}(T)}.
\end{split}
\label{eq2-3}
\end{equation}
Noting that $D^\alpha \chi=0$ for all multiindices $\alpha$ with $|\alpha|=r$, recalling that $\frac{h_T}{d} \le 1$, and employing inverse estimates, we compute
\begin{equation}
\begin{split}
C h_T^{n/2+r-1} & | \omega^2 \chi|_{W_\infty^{r}(T)} \le C (\sum_{i=2}^r h_T^{i-1} |\omega^2|_{W_\infty^i(T)}) \|\chi\|_{L_2(T)}
\\ & + Ch_T^{n/2+r-1} \sum_{|\alpha|=1, |\beta|=r-1} \|D^\alpha \omega^2 D^\beta \chi\|_{L_\infty(T)}
\\ \le & C \frac{h_T}{d^2} \|\chi\|_{L_2(T)}+Ch_T^{n/2+r-1} \sum_{|\alpha|=1, |\beta|=r-1} \|D^\alpha \omega^2 D^\beta \chi\|_{L_\infty(T)}.
\end{split}
\label{eq2-4}
\end{equation}
We next consider the terms $\|D^\alpha \omega^2 D^\beta \chi\|_{L_\infty(T)}$ above. Since $|\alpha|=1$, we have $D^\alpha \omega^2=2 \omega D^\alpha \omega$. Let $\hat{\omega}=\frac{1}{|T|} \int_T \omega \hspace{2pt}{\rm d}x$ so that $\|\omega-\hat{\omega}\|_{L_\infty(T)} \le C h_T |\omega|_{W_\infty^1(T)} \le C \frac{h_T}{d}$. Employing inverse estimates, we thus have
\begin{equation}
\begin{split}
Ch_T^{n/2+r-1} & \sum_{|\alpha|=1, |\beta|=r-1} \|D^\alpha \omega^2 D^\beta \chi\|_{L_\infty(T)}
\\ \le & C d^{-1} h_T^{n/2+r-1} \sum_{|\beta|=r-1} \|\omega D^\beta \chi\|_{L_\infty(T)}
\\ \le & C d^{-1} h_T^{n/2+r-1} \sum_{|\beta|=r-1} (\|(\omega-\hat{\omega}) D^\beta \chi\|_{L_\infty(T)}+\|\hat{\omega} D^\beta \chi\|_{L_\infty(T)})
\\ \le & C (\frac{h_T}{d^2} \|\chi\|_{L_2(T)}+\frac{h_T}{d} |\hat{\omega} \chi|_{H^1(T)})
\\ \le & C (\frac{h_T}{d^2} \|\chi\|_{L_2(T)}+\frac{h_T}{d} |(\hat{\omega}-\omega) \chi|_{H^1(T)}+\frac{h_T}{d} |\omega \chi|_{H^1(T)}).
\end{split}
\label{eq2-5}
\end{equation}
Using an inverse inequality, we find that
\begin{equation}
\begin{split}
\frac{h_T}{d} |(\hat{\omega}-\omega) \chi|_{H^1(T)} \le &\frac{h_T}{d} (|\omega|_{W_\infty^1(T)} \|\chi\|_{L_2(T)}+ \|\hat{\omega}-\omega\|_{L_\infty(T)} |\chi|_{H^1(T)})
\\ \le & C\frac{h_T}{d} (\frac{1}{d} \|\chi\|_{L_2(T)}+\frac{h_T}{d} |\chi|_{H^1(T)})
\\ \le & C\frac{h_T}{d^2} \|\chi\|_{L_2(T)}.
\end{split}
\label{eq2-6}
\end{equation}
Inserting (\ref{eq2-6}) into (\ref{eq2-5}) and the result into (\ref{eq2-4}) and (\ref{eq2-3}) completes the proof of (\ref{eq2-2}).
\end{proof}
\section{Local $H^1$ estimates}
In this section we state and prove a local $H^1$ estimate that is valid on highly graded grids. We now let $\Omega$ be a domain in $\mathbb{R}^n$, and let $\Omega_0$ be a bounded subdomain of $\Omega$. We decompose $\partial \Omega \cap \partial \Omega_0$ (if it is nonempty) into a Dirichlet portion $\Gamma_D$ and a Neumann portion $\Gamma_N$. For the sake of simplicity, we assume that $\Gamma_D$ is polyhedral and that $\Gamma_N$ is either polyhedral or Lipschitz. Let $u$ satisfy
\begin{equation}
\begin{split}
-\mathop{\rm div}(A \nabla u)+b\cdot \nabla u+cu=&f \hbox{ in } \Omega_0,
\\u=&g_D \hbox{ on } \Gamma_D,
\\ \frac{\partial u}{\partial n_A}=&g_N \hbox{ on } \Gamma_N.
\end{split}
\label{eq3-1}
\end{equation}
Here $A$ is an $n \times n$ coefficient matrix that is uniformly bounded and positive definite in $\Omega$, $b \in L_\infty(\Omega_0)^n$, $c \in L_\infty(\Omega_0)$, and $\frac{\partial}{\partial n_A}$ is the conormal derivative with respect to $A$. We also assume that $\Omega \subset \mathbb{R}^n$. Note that we make no assumptions about the differential equation solved by $u$ outside of $\Omega_0$.
Let $H_{D,0}^1(\Omega_0)=\{ u \in H^1(\Omega_0):u|_{\Gamma_D}=0 \}$, and let $H_{D}^1(\Omega_0)=\{u \in H^1(\Omega_0):u|_{\Gamma_D}=g_D\}$. Also let $H_<^1(B)=\{u \in H^1(\Omega_0): u|_{\Omega \setminus B}=0\}$ for subsets $B$ of $\Omega_0$. Thus functions in $H_<^1(B)$ are zero on $\partial B \setminus \partial \Omega$, but may be nonzero on portions of $\partial B$ coinciding with $\partial \Omega$; put in other terms, functions in $H_<^1(B)$ are compactly supported in $B$ modulo $\partial \Omega$. Rewriting (\ref{eq3-1}) in its weak form, we find that $u \in H_{D}^1(\Omega_0)$ satisfies
\begin{equation}
\begin{split}
L(u,v):=&\int_\Omega (A \nabla u \nabla v +b\cdot \nabla u v+cuv)\hspace{2pt}{\rm d}x
\\ =&\int_\Omega fv \hspace{2pt}{\rm d}x+\int_{\Gamma_N} g_N v \hspace{2pt}{\rm d} \sigma, ~ v \in H_{D,0}^1(\Omega_0) \cap H_<^1(\Omega_0).
\end{split}
\label{eq3-2}
\end{equation}
Following \cite{NS74}, we do not assume that $L$ is coercive over $H^1(\Omega_0)$, but rather we make a {\it local} coercivity assumption:
\vspace{3pt}
\newline {\it R1: Local coercivity.} There exists a constant $d_0>0$ such that if $B$ is the intersection of any open sphere of diameter $d \le d_0$ with $\Omega_0$, then $L$ is coercive over $H_<^1(B)$, that is, for some constant $C_1>0$,
\begin{equation}
(C_1)^{-1} \|u\|_{H^1(B)}^2 \le L(u,u) \le C_1 \|u\|_{H^1(B)}^2, ~ u \in H_<^1(B).
\label{eq3-2-1}
\end{equation}
\begin{remark}
\label{rem3-1}
{\rm R1 may be satisfied in one of two ways. It may happen that $L$ is coercive over $H^1(\Omega_0)$, in which case no further argument is needed. Otherwise, R1 holds so long as a Poincar\'e inequality
\begin{equation}
\|u\|_{L_2 (B)} \le Cd\|u\|_{H^1(B)}
\label{eq3-2-1a}
\end{equation}
holds for balls $B$ as in R1 having small enough diameter (cf. Remark 1.2 of \cite{NS74}). Such Poincar\'e inequalities always hold for interior balls. If $B$ is the nontrivial intersection of an open ball with $\Omega$, then (\ref{eq3-2-1a}) holds for $d \le d_1$ small enough under the restrictions we have placed on $\partial \Omega \cap \partial \Omega_0$; here $d_1$ depends on the properties of $\partial \Omega \cap \partial \Omega_0$. } \end{remark}
Next we make assumptions concerning the finite element approximation $u_h$ of $u$. Let $\mathcal{T}_0$ be a triangulation such that $\Omega_0 \subset \cup_{T \in \mathcal{T}_0} \overline{T}$ and $T \cap \Omega_0 \neq \emptyset$ for all $T \in \mathcal{T}_0$. Let $h_T=\mathrm{diam}(T)$ for $T \in \mathcal{T}_0$. We denote our trial finite element space by $S_D$. We do not assume that $S_D \subset H_D^1(\Omega)$. In addition, we let $S_{D,0}=S_D \cap H_{D,0}^1(\Omega_0)$ be our test finite element space. We assume that $u_h$ is the local finite element approximation to $u$ on $\Omega_0$, that is, $u_h \in S_D$ and
\begin{equation}
L(u-u_h, v_h)=0 \hbox{ for all } v_h \in S_{D,0} \cap H_<^1(\Omega_0).
\label{eq3-2-3}
\end{equation}
We do not explicitly fix $u_h$ on the Dirichlet portion of the boundary, but rather implicitly assume that $u_h|_{\Gamma_D}$ is set equal to some appropriate interpolant or projection of $g_D$.
Next we state properties that $S_D$ and $S_{D,0}$ must possess in order to prove the desired local energy error estimate. Let $\tilde{d} \le d_0$ be a fixed parameter, and let $G_1$ and $G$ be arbitrary subsets of $\Omega_0$ with $G_1 \subset G$ and $dist(G_1, \partial G \setminus \partial \Omega) =\tilde{d}>0$. Then the following are assumed to hold:
\vspace{3pt}
\newline {\it A1: Local interpolant.} There exists a local interpolant $I$ such that for each $u \in H_<^1(G_1)$, $I u \in S_D \cap H_<^1(G)$, and for each $ u \in H_{D,0}^1(\Omega_0)$, $I u \in S_{D,0}$.
\vspace{3pt}
\newline {\it A2: Inverse properties.} For each $\chi \in S_D$, $T \in \mathcal{T}_0$, $1 \le p \le q \le \infty$, and $0 \le \nu \le s \le r$,
\begin{equation}
\|\chi\|_{W_q^s(T)} \le C h_T^{\nu-s +\frac{n}{q}-\frac{n}{p}} \|\chi\|_{W_p^\nu(T)}.
\label{eq3-2-4}
\end{equation}
\vspace{3pt}
\newline {\it A3: Superapproximation.} Let $\omega \in C^\infty(\Omega_0) \cap H_<^1(G_1)$ with $|\omega|_{W_\infty^j(\Omega_0)} \le C d^{-j}$ for integers $0 \le j \le r$ with $r$ sufficiently large. For each $\chi \in S_{D,0}$ and for each $T \in \mathcal{T}_0$ satisfying $d \ge h_T$,
\begin{equation}
\| \omega^2 \chi - I (\omega^2 \chi) \|_{H^1(T)} \le C (\frac{h_T}{d} \|\nabla (\omega \chi) \|_{L_2(T)}+ \frac{h_T}{d^2} \| \chi\|_{L_2(T)}),
\label{eq3-2-2}
\end{equation}
where the interpolant $I$ is as in A1 above.
\begin{remark}{\rm A1, A2, and A3 are satisfied by standard finite element spaces defined on shape-regular triangular grids. A1 also essentially requires that the finite element mesh resolve $G \setminus G_1$, i.e., that $\tilde{d} \ge K \max_{T \cap G \neq \emptyset} h_T$ with $K$ large enough.
} \end{remark}
We begin by proving a Caccioppoli-type estimate for ``discrete harmonic'' functions. Such a statement was also proved in \cite{NS74} as a preliminary to local energy estimates, though the proof we give below more closely follows \cite{SW77}.
\begin{lemma}
\label{lem3-1}
Let $G_0 \subset G \subset \Omega_0$ be given, and let $dist(G_0, \partial G \setminus \partial \Omega)=d$ with $d \le 2 d_0$ where $d_0$ is the parameter defined in the assumption R1. Let also A1, A2, and A3 hold with $\tilde{d}=\frac{d}{4}$, and assume that $u_h \in S_{D,0}$ satisfies
\begin{equation}
L(u_h, v_h)=0 \hbox{ for all } v_h \in S_{D,0} \cap H_<^1(\Omega_0).
\label{eq3-9-a}
\end{equation}
In addition let $\max_{T \cap G \neq \emptyset} \frac{h_T}{d} \le \frac{1}{4}$. Then
\begin{equation}
\|u_h\|_{H^1(G_0)} \le
C\frac{1}{d} \|u_h\|_{L_2(G)}.
\label{eq3-9-b}
\end{equation}
Here $C$ depends only on the constants in (\ref{eq3-2-4}) and (\ref{eq3-2-2}) and the coefficients of $L$.
\end{lemma}
\begin{proof} We assume that $G_0$ is the intersection of a ball $B_{\frac{d}{4}}$ of radius $\frac{d}{4}$ with $\Omega_0$; the general case may be proved using a covering argument. Let then $G_1$ and $G_2$ be the intersections with $\Omega_0$ of balls having the same center as $G_0$ and having radii $\frac{d}{2}$ and $\frac{3d}{4}$, respectively, and without loss of generality let $G$ be the corresponding ball of radius $d$. Let then $\omega \in C_0^\infty(G_1)$ be a cutoff function which is $1$ on $G_0$ and which satisfies $\|\omega\|_{W_\infty^j(G_1)} \le Cd^{-j}$, $0 \le j \le r$. We may then apply the assumptions A1 through A3 to the pairs $G_1$ and $G_2$, and $G_2$ and $G$.
Using (\ref{eq3-2-1}), we first compute that
\begin{equation}
\|u_h\|_{H^1(G_0)}^2 \le \|\omega u_h \|_{H^1(G)}^2 \le CL(\omega u_h ,\omega u_h).
\label{eq3-4}
\end{equation}
Using the fact that $\|\nabla \omega \|_{L_\infty(\Omega)} \le \frac{C}{d}$, we compute that for any $\epsilon>0$,
\begin{equation}
\begin{split}
L(\omega&u_h, \omega u_h)= L(u_h,\omega^2 u_h)
\\ & - \int_\Omega u_h [ A \nabla(\omega u_h) \nabla \omega +u_h A \nabla \omega \nabla \omega
+ A \nabla \omega \nabla (\omega u_h) +\omega u_h b \nabla \omega ]\hspace{2pt}{\rm d}x
\\ \le & |L(u_h, \omega^2 u_h)|+ C\frac{1}{d^2 \epsilon} \|u_h\|_{L_2(G)}^2+\epsilon \|\omega u_h \|_{H^1(G)}^2 .
\end{split}
\label{eq3-5}
\end{equation}
Next we use (\ref{eq3-9-a}), (\ref{eq3-2-2}), and the fact that $\|\omega^2 u_h\|_{H^1(G)} \le \|\omega u_h\|_{H^1(G)}+\frac{C}{d}\|u_h\|_{L_2(G)}$ to compute
\begin{equation}
\begin{split}
L( u_h, \omega^2 u_h) = & L(u_h, \omega^2 u_h-I (\omega^2 u_h))
\\ \le &
C \sum_{T \cap G_2 \neq \emptyset} h_T \|u_h\|_{H^1(T)} (\frac{1}{d} |\omega u_h|_{H^1(T)}+\frac{1}{d^2} \|u_h\|_{L_2(T)}).
\end{split}
\label{eq3-6}
\end{equation}
Using (\ref{eq3-2-4}) and the fact that $\frac{h_T}{d} \le 1$, we have for $\epsilon$ as above that
\begin{equation}
\begin{split}
C h_T \|u_h&\|_{H^1(T)} (\frac{1}{d} |\omega u_h|_{H^1(T)}+\frac{1}{d^2} \|u_h\|_{L_2(T)})
\\ \le & \frac{C}{\epsilon d^2} \|u_h\|_{L_2(T)}^2+\epsilon |\omega u_h|_{H^1(T)}^2.
\end{split}
\label{eq3-7}
\end{equation}
Inserting (\ref{eq3-7}) into (\ref{eq3-6}), noting that $T \cap G_2 \neq \emptyset$ implies that $T \subset G$ (since $\max_{T \cap G \neq \emptyset} h_T \le \frac{d}{4}$) and carrying out further elementary manipulations then yields that for $\epsilon>0$,
\begin{equation}
L(u_h, \omega^2 u_h ) \le \frac{C}{\epsilon d^2}\|u_h\|_{L_2(G)}^2+ \epsilon \|\omega u_h \|_{H^1(G)}^2.
\label{eq3-8}
\end{equation}
Inserting (\ref{eq3-8}) into (\ref{eq3-5}) and the result into (\ref{eq3-4}) yields
\begin{equation}
\|\omega u_h \|_{H^1(G)}^2
\le \frac{C}{\epsilon d^2} \|u_h\|_{L_2(G)}^2+2 \epsilon \|\omega u_h\|_{H^1(G)}^2.
\label{eq3-9}
\end{equation}
Taking $\epsilon=\frac{1}{4}$ so that we may kick back the last term above, employing the triangle inequality, and inserting the result into (\ref{eq3-4}) then completes the proof of (\ref{eq3-9-b}).
\end{proof}
We now prove a local energy error estimate. In our proof below we shall follow \cite{NS74} by using a local finite element projection in order to split the finite element error into an approximation error and a ``discrete harmonic'' term which may be bounded using Lemma \ref{lem3-1}. We note, however, that the use of a local finite element projection is not necessary, and our final local error estimate may in fact be proved with some simple modifications to the proof of Lemma \ref{lem3-1} above. These two styles of proof are essentially equivalent. Local finite element projections have been used for example in \cite{NS74}, \cite{SW77}, \cite{SW95}, and \cite{AL95} in order to prove local a priori error estimates. The methodology of Lemma \ref{lem3-1}, in which no local projections are used, has been employed for example in \cite{De04b} and \cite{Guz06} in order to prove local a priori error estimates and in \cite{LN03} and \cite{Dem07} in order to prove local a posteriori error estimates.
\begin{theorem}
\label{t3-3}
Let $G_0 \subset G \subset \Omega_0$ be given, and let $dist(G_0, \partial G \setminus \partial \Omega)=d$ with $d \le \min\{2 d_0, d_1\}$ where $d_0$ is the parameter defined in the assumption R1 and $d_1$ is defined in Remark \ref{rem3-1}. Let also A1, A2, and A3 hold with $\tilde{d}=\frac{d}{16}$. In addition let $\max_{T \cap G \neq \emptyset} \frac{h_T}{d} \le \frac{1}{16}$. Then
\begin{equation}
\begin{split}
\|u-u_h\|_{H^1(G_0)} \le & C \min_{u_h -\chi \in S_{D,0} } (\|u-\chi\|_{H^1(G)}+\frac{1}{d} \|u-\chi\|_{L_2(G)})
\\&+C \frac{1}{d} \|u-u_h\|_{L_2(G)}.
\label{eq3-12}
\end{split}
\end{equation}
Here $C$ depends only on the constant $C$ in (\ref{eq2-2}) and the coefficients of $L$.
\end{theorem}
\begin{proof}
We assume that $G_0$ is the intersection of a ball $B_{\frac{d}{2}}$ of radius $\frac{d}{2}$ with $\Omega_0$; the general case may be proved using a covering argument. Let $G_1$ be the intersection with $\Omega_0$ of a ball having the same center as $G_0$ and having radius $\frac{3d}{4}$, and without loss of generality let $G$ be the corresponding ball of radius $d$. Let then $\omega \in C_0^\infty(G)$ be a cutoff function which is $1$ on $G_1$ and which satisfies $\|\omega\|_{W_\infty^j(G)} \le Cd^{-j}$, $0 \le j \le r$. Note that we may apply Lemma \ref{lem3-1} with $G_0$ on the left hand side of the estimate (\ref{eq3-9-b}) and $G_1$ on the right hand side.
Next we let $P(\omega u)$ be a local finite element projection of $\omega u$. In particular, we let $P(\omega u) \in S_D \cap H_<^1(G)$ with $u_h-P(\omega u) =0 $ on $\Gamma_D \cap \partial G_1$ satisfy
\begin{equation}
L(\omega u-P(\omega u), v_h)=0, ~ v_h \in S_{D,0} \cap H_<^1(G).
\label{eq3-13}
\end{equation}
The local coercivity condition (\ref{eq3-2-1}) then implies the stability estimate
\begin{equation}
\|P(\omega u)\|_{H^1(G)} \le C \|\omega u \|_{H^1(G)}.
\label{eq3-14}
\end{equation}
Recalling that $u_h-P(\omega u)=0$ on $\Gamma_D \cap \partial G_1$ while employing (\ref{eq3-9-b}) and using (\ref{eq3-2-1a}) while recalling that $\omega \equiv 1$ on $G_1$, we compute that
\begin{equation}
\begin{split}
\|u-u_h&\|_{H^1(G_0)} \le \|\omega u- P(\omega u)\|_{H^1(G_0)}+\|P(\omega u)-u_h\|_{H^1(G_0)}
\\ \le & \|\omega u-P(\omega u)\|_{H^1(G)}+\frac{C}{d}\|P(\omega u)-u_h\|_{L_2(G_1)}
\\ \le & \|\omega u-P(\omega u)\|_{H^1(G)}+\frac{C}{d}(\|P(\omega u)-\omega u\|_{L_2(G_1)}+\|u-u_h\|_{L_2(G_1)})
\\ \le & C \|\omega u-P(\omega u)\|_{H^1(G)}+\frac{C}{d} \|u-u_h\|_{L_2(G_1)}.
\label{eq3-16}
\end{split}
\end{equation}
Next we employ the triangle inequality along with (\ref{eq3-14}) while recalling that $\|\omega \|_{W_\infty^j(G)} \le Cd^{-j}$ in order to find that
\begin{equation}
\begin{split}
\|\omega u-P(\omega u)\|_{H^1(G)} \le & C \|\omega u\|_{H^1(G)}
\\ \le & C(\|u\|_{H^1(G)}+\frac{1}{d} \|u\|_{L_2(G)}).
\end{split}
\label{eq3-17}
\end{equation}
In order to complete the proof of (\ref{eq3-12}), we first insert (\ref{eq3-17}) into (\ref{eq3-16}) and finally write $u-u_h=(u-\chi)+(\chi-u_h)$ with $u_h-\chi \in S_{D,0}$.
\end{proof}
% Source: arXiv:0808.2160, "Local energy estimates for the finite element method on sharply varying grids" (2008).
% Source: arXiv:2012.05138, "On the minimum value of the condition number of polynomials"
\begin{abstract}
In 1993, Shub and Smale posed the problem of finding a sequence of univariate polynomials of degree $N$ with condition number bounded above by $N$. In a previous paper by C. Beltr\'an, U. Etayo, J. Marzo and J. Ortega-Cerdà, it was proved that the optimal value of the condition number is of the form $O(\sqrt{N})$, and the sequence demanded by Shub and Smale was described by a closed formula (for large enough $N\geqslant N_0$ with $N_0$ unknown) and by a search algorithm for the rest of the cases. In this paper we find concrete estimates for the constant hidden in the $O(\sqrt{N})$ term and we describe a simple formula for a sequence of polynomials whose condition number is at most $N$, valid for all $N=4M^2$, with $M$ a positive integer.
\end{abstract}
\section{Introduction}
\subsection{Statement of the main problem}
The condition number of a polynomial at a root is a measure of the first order variation of the root under small perturbations of the polynomial. It has different formulas and properties depending on how these changes are measured; see for example \cite{Demmel,TB}. Among the most popular and useful definitions is the one given by Shub and Smale in \cite{SS93I,SS93III}, where polynomials are first homogenized (hence the zeros lie in $\mathbb P(\mathbb{C}^2)$) and the Bombieri norm is used to measure the perturbation of the polynomial. The concrete definition of Shub and Smale's {\em normalized condition number} $\mu_{\text{norm}}$ and some of its properties are recalled in a later section.
In \cite{SS93II,SS93III} it was proved that with probability at least $1/2$, (a certain choice of) random polynomials have condition number at most $N$, leading to the following:
\begin{problem}[Main Problem in \cite{SS93III}]\label{Prob_cond}
Find explicitly a family of polynomials of degree $N$ whose condition number is at most $N$.
\end{problem}
(The authors of \cite{SS93III} also relaxed the problem changing ``at most $N$'' to ``at most $N^c$ for any constant $c$, say $c=100$''.) By ``find explicitly'' they mean ``giving a handy description'' or describing a BSS algorithm --that essentially means an algorithm where exact real arithmetic is available, see \cite{BlCuShSm98}-- to solve the problem. During his plenary conference at the FoCM'14 meeting in Montevideo, Shub referred to this question as ``finding hay in the haystack'', since we know that a lot of such polynomials exist but it just turned out to be quite difficult to describe one!
The motivation of Shub and Smale was the search for a good starting polynomial to be used in homotopy methods for polynomial root finding, that is, the one--dimensional case of Smale's 17th problem. Smale's 17th problem was finally solved without finding the solution to Problem \ref{Prob_cond} nor its high--dimensional analogue, see \cite{BP, BuCu, Lairez} or the monograph \cite{BuCubook}, leaving these questions open (see the Open Problems section in \cite{BuCubook}). Problem \ref{Prob_cond} was finally solved in \cite{BEMO19} where it was proved that:
\begin{enumerate}
\item There exists a constant $a>0$ such that the condition number of any degree $N$ polynomial is at least $a\sqrt{N}$.
\item There exists an explicit construction of a polynomial of any degree, given by its zeros, and a constant $b>0$ such that the condition number of the $N$--th degree polynomial is at most $b\sqrt{N}$.
\end{enumerate}
From \cite{Ujue} we have the concrete value $a\geqslant e^{C_{\log}}/2$ where $C_{\log}$ is defined by \eqref{eq:Clog} and bounded in \eqref{eq:Clogbound}, but the value of $b$ is not known. As a consequence, one gets an algorithm to generate a degree $N$ polynomial whose condition number is at most $N$: run in parallel a search algorithm (based on enumeration of rational zeros) and the sequence of item (2). Since computing the condition number given the zeros is immediate, and since the polynomials in the sequence of (2) eventually have a condition number smaller than $N$, this produces a polynomial time algorithm for Problem \ref{Prob_cond}.
The solution of \cite{BEMO19} is thus an algorithm to generate the demanded sequence, and it certainly solves Problem \ref{Prob_cond}, but it leaves an open question behind:
\begin{problem}[Main Problem after \cite{BEMO19}]\label{Prob_cond2}
Find an explicit formula for a family of polynomials of degree $N$ whose condition number is at most $N$. Also, find asymptotic bounds for the minimum condition number of a degree $N$ polynomial, $N\to\infty$.
\end{problem}
\subsection{Main result}
In these pages we make some partial progress on Problem \ref{Prob_cond2}. More exactly, we prove the following for the first part of this problem:
\begin{theorem}[Main result]\label{th:mainintro}
Let $N=4M^2,$ with $M\geqslant 1$ a positive integer. Define
\begin{equation*}
r_j=4j, \quad h_j=1-\frac{4j^2}{N},
\end{equation*}
for $1\leqslant j \leqslant M$ and consider the polynomial of degree $N$ given by
\begin{equation*}
P_N(z)=\displaystyle(z^{r_M}-1)\prod_{j=1}^{M-1}(z^{r_j}-\rho(h_j)^{r_j})(z^{r_j}-\rho(h_j)^{-r_j}),
\end{equation*}
where $\rho(x)=\sqrt{\frac{1+x}{1-x}}$. Then $\mu_{\text{norm}}(P_N)\leqslant \min(N,(19/2)\sqrt{N+1})$.
\end{theorem}
The first three polynomials of our sequence can be seen in Table \ref{table:tabla_poly}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
$M$ & $N$ & Polynomial \\
\hline \hline
1 & 4 & $z^4-1$ \\ \hline
2 & 16 & $(z^8-1)(z^4-49)(z^4-1/49)$ \\ \hline
3 & 36 & $(z^{12}-1)(z^8-2401/16)(z^8-16/2401)(z^4-289)(z^4-1/289)$ \\ \hline
\end{tabular}
\caption{The first three polynomials of the sequence constructed in Theorem \ref{th:mainintro} corresponding to degrees 4, 16 and 36, respectively.}
\label{table:tabla_poly}
\end{center}
\end{table}
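As a sanity check (ours, with hypothetical helper names), the factor constants in Table \ref{table:tabla_poly} can be reproduced exactly from the formulas of Theorem \ref{th:mainintro} using rational arithmetic, since $r_j$ is even and hence $\rho(h_j)^{r_j}$ is an integer power of the rational number $\rho(h_j)^2$:

```python
from fractions import Fraction

# Reproduce the factor constants of Table 1 from the theorem's formulas:
# for N = 4M^2, r_j = 4j, h_j = 1 - 4j^2/N, rho(x) = sqrt((1+x)/(1-x)),
# and the j-th pair of factors uses rho(h_j)^{r_j} = (rho(h_j)^2)^{r_j/2}.

def factor_constants(M):
    N = 4 * M * M
    consts = []
    for j in range(1, M):
        r_j = 4 * j
        h_j = 1 - Fraction(4 * j * j, N)       # exact rational h_j
        rho_sq = (1 + h_j) / (1 - h_j)         # rho(h_j)^2, a rational number
        consts.append((r_j, rho_sq ** (r_j // 2)))
    return N, consts

print(factor_constants(2))   # degree 16: rho(h_1)^4 = 49
print(factor_constants(3))   # degree 36: rho(h_1)^4 = 289, rho(h_2)^8 = 2401/16
```

For $M=2$ this recovers the constant $49$ of the table, and for $M=3$ the constants $289$ and $2401/16$.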
The zeros of the above polynomial $P_N$ correspond, under the stereographic projection, to the spherical points of a set $\mathcal{P}_N$ described in Section \ref{Sect:PN}.
Modifying $\mathcal P_N$ slightly, one can very likely adapt our proof to similar subsequences such as, say, $N=4M^2+1$ or $N=4M^2+2M$, but solving the problem for general $N$ is still out of reach, since the explicit computations become too complicated.
Our method of proof also produces the upper bound in the following corollary, which is a first answer to the second part of Problem \ref{Prob_cond2} (the lower bound is proved by U. Etayo in \cite{Ujue} and uses a recent result by S. Steinerberger \cite{Stefan}):
\begin{corollary}\label{cor:limsinf}
The minimum condition number $\alpha_N=\inf\{\mu_{\text norm}(P):deg(P)=N\}$ of a degree $N$ polynomial satisfies
\[
0.454\ldots\leqslant \frac{e^{C_{\log}}}2\leqslant \liminf_{N\to\infty}\frac{\alpha_N}{\sqrt{N}}\leqslant \frac{\sqrt{3}}{2}e^{15/8}=5.647\ldots
\]
\end{corollary}
A related problem of great importance is that of finding the optimal constant $C_N$ in the multiterm Bombieri inequality
\[
\prod_{i=1}^N{\|z-z_i\|}\leqslant C_N \left\|\prod_{i=1}^N(z-z_i)\right\|,
\]
which compares the Bombieri-Weyl norm of a polynomial and the product of the norms of its linear factors. Finding the optimal value of $C_N$ is a formidable challenge! A recent breakthrough by U. Etayo \cite{Ujue} is that the optimal value of this constant is essentially $\sqrt{e^N/(N+1)}$. More precisely, define $K_N$ by
\[
C_N=K_N\sqrt{\frac{e^N}{N+1}}.
\]
Then, we have $\alpha\leqslant K_N\leqslant 1$ for some $\alpha>0$ which is independent of $N$. No lower bounds on $\alpha$ were known until now, but from \cite[Th. 4.5]{Ujue} and our main result above we deduce:
\begin{equation}\label{eq:KN}
\limsup_{N\to\infty}K_N\geqslant \frac{e^{C_{\log}}}{\sqrt{3}e^{15/8}}\geqslant 0.08.
\end{equation}
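The displayed decimal values can be verified by direct evaluation; below we plug in the lower bound $C_{\log}\geqslant -0.0954$ from (\ref{eq:Clogbound}) (a quick arithmetic check of ours, not part of any proof):

```python
import math

# Check the decimal constants appearing in the corollary and in the K_N bound,
# using the lower bound C_log >= -0.0954 quoted later in the text.
C_log = -0.0954
lower = math.exp(C_log) / 2                         # e^{C_log}/2 ~ 0.454
upper = math.sqrt(3) / 2 * math.exp(15 / 8)         # (sqrt(3)/2) e^{15/8} ~ 5.647
K_bound = math.exp(C_log) / (math.sqrt(3) * math.exp(15 / 8))
print(lower, upper, K_bound)                        # K_bound exceeds 0.08
```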
\subsection{Relation to well--distributed spherical points and the logarithmic energy}
We will define our sequence of polynomials by its zeros, which are in turn seen as points in the unit $2$--sphere $\mathbb{S}$ via the stereographic projection. It was noted in \cite{SS93III} that if a collection of spherical points $ p_1,\ldots, p_N\in\mathbb{S}$ is very well distributed in the sense that it quasi--minimizes the logarithmic energy
\begin{equation}\label{Logarithmic_energy}
\mathcal{E}( p_1,\ldots, p_N)=\displaystyle\sum_{i \neq j} \log\frac{1}{| p_i- p_j|},
\end{equation}
then the associated complex points are the zeros of a well--conditioned polynomial. More precisely, let us denote by $m_N$ the minimum possible value of the logarithmic energy,
$$
m_N=\min_{ p_1,\ldots, p_N\in\mathbb{S}}\mathcal{E}( p_1,\ldots, p_N).
$$
The main result of \cite{SS93III} is that if $\mathcal{E}( p_1,\ldots, p_N)\leqslant m_N+c\log N$ then the condition number of the corresponding polynomial is at most $\sqrt{N^{1+c}(N+1)}$, thus solving the relaxation of Problem \ref{Prob_cond}. (Note that our notation is slightly different from that of \cite{SS93III}: we use the unit sphere instead of the Riemann sphere and our definition of the log--energy ranges over $i\neq j$ instead of $i<j$. Our notation is the most frequent nowadays).
Inspired by this result, Shub and Smale posed the problem of finding collections of spherical points with quasioptimal $\log$--energy. This problem was later included in Smale's famous list of problems for the XXI century \cite{Smale2000}:
\begin{problem}[Smale's 7th Problem]\label{Prob7_Smale}
Can one find $ p_1,\ldots, p_N\in\mathbb{S}$ such that $\mathcal{E}( p_1,\ldots, p_N) \leqslant m_N+c\log N$ for some universal constant $c$?
\end{problem}
The value of $m_N$ is still not well known. After \cite{Wagner89,RSZ94,Dubickas96,Brauchart08,BS18}, we have
\begin{equation}\label{eq:Clog}
m_N=\kappa N^2-\frac{1}{2}N\log N+C_{\log}N+o(N),
\end{equation}
where $C_{\log}$ is a constant and, denoting by $d\sigma$ the normalized uniform measure in $\mathbb S$,
\begin{equation}\label{eq:continuousenergy}
\kappa=\int_{x,y\in\mathbb{S}}\log\frac{1}{|x-y|}d\sigma(x)d\sigma(y)=\frac{1}{2}-\log 2<0,
\end{equation}
is the continuous energy. From \cite{Stefan} and \cite{BS18}, we have that
\begin{equation}\label{eq:Clogbound}
-0.0954\ldots \leqslant C_{\log} \leqslant 2\log2 +\frac{1}{2}\log\frac{2}{3}+3\log\frac{\sqrt{\pi}}{\Gamma(1/3)}=-0.0556\ldots
\end{equation}
The upper bound has been conjectured to be an equality, see \cite{BHS12,BS18} and the monograph \cite{BHS19} for context.
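The value of $\kappa$ can be confirmed by elementary quadrature: by rotational symmetry the double integral in (\ref{eq:continuousenergy}) reduces to a single integral over the polar angle $t$ between $x$ and $y$, with $|x-y|^2=2-2\cos t$ and angular density $\frac12\sin t$. A short numerical sketch (ours):

```python
import math

# Quadrature check of the continuous energy kappa = 1/2 - log 2: by symmetry
#   kappa = int_0^pi  -0.5*log(2 - 2 cos t) * (1/2) sin t  dt,
# since |x - y|^2 = 2 - 2 cos t when y lies at angular distance t from x
# and the fraction of the sphere in [t, t + dt] is (1/2) sin t dt.

def kappa_midpoint(n=100_000):
    dt = math.pi / n
    return sum(-0.5 * math.log(2 - 2 * math.cos((k + 0.5) * dt))
               * 0.5 * math.sin((k + 0.5) * dt) * dt
               for k in range(n))

print(kappa_midpoint(), 0.5 - math.log(2))   # both ~ -0.193147
```

(The substitution $u=2-2\cos t$ even evaluates the integral in closed form, $-\frac18\int_0^4\log u\,{\rm d}u=\frac12-\log 2$.)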
We stress that our construction of the point set for Theorem \ref{th:mainintro} does not solve Problem \ref{Prob7_Smale}: its log--energy is of the form $\kappa N^2-\frac12N\log N+O(N)$ (this can be deduced directly from \eqref{Logarithmic_energy} and Corollary \ref{cor:D} or seen as a consequence of \cite[Th. 1.5]{Ujue}).
\subsection{Condition number of polynomials}
We now give the precise definition and some properties of the condition number of polynomials. Let us consider a bivariate homogeneous polynomial with complex coefficients of degree $N\geqslant 1$,
\begin{equation*}
h(x,y)=\displaystyle\sum_{i=0}^N a_ix^iy^{N-i}, \quad a_i\in\mathbb{C}, \,\, a_N\neq 0.
\end{equation*}
The zeros of $h$ lie in the complex projective space $\mathbb{P}(\mathbb{C}^2)$.
Following \cite{SS93I}, the normalized condition number of $h$ at a zero $\zeta\in\mathbb{P}(\mathbb{C}^2)$ is
\begin{equation*}
\mu_{\text{norm}}(h,\zeta)=
\left\{
\begin{array}{ll}
N^{1/2}\|(Dh(\zeta)|_{\zeta^{\perp}})^{-1}\|\|h\|\|\zeta\|^{N-1}, \quad &\text{if }\, \exists (Dh(\zeta)|_{\zeta^{\perp}})^{-1},\\
+\infty, \quad &\text{otherwise}.
\end{array}
\right.
\end{equation*}
Here, $Dh(\zeta)|_{\zeta^{\perp}}$ is the restriction of the derivative $Dh(\zeta)=\left(\frac{\partial}{\partial x}h\quad \frac{\partial}{\partial y}h\right)_{(x,y)=\zeta}$ to the orthogonal complement of $\zeta$ in $\mathbb{C}^2$, and $\|h\|$ is the Bombieri-Weyl norm (also known as Kostlan or Bombieri or Weyl norm) of $h$, defined as
\begin{equation*}
\|h\|= \left(\displaystyle\sum_{i=0}^N {N \choose i}^{-1} |a_i|^2\right)^{1/2}.
\end{equation*}
If $\zeta$ is a double root of $h$, then by definition $\mu_{\text{norm}}(h,\zeta)=\infty$.
On the other hand, when no concrete root of $h$ is mentioned, we define
\begin{equation*}
\mu_{\text{norm}}(h)=\max_{\zeta\in \mathbb{P}(\mathbb{C}^2):h(\zeta)=0}\mu_{\text{norm}}(h,\zeta).
\end{equation*}
Let
\begin{equation*}
f(z)=\displaystyle\sum_{i=0}^N a_iz^i, \quad a_N\neq 0,
\end{equation*}
be a univariate polynomial of degree $N$ with complex coefficients and $z\in\mathbb{C}$ a zero of
$f$. Consider the homogeneous counterpart of $f$,
\begin{equation*}
h(x,y)=\displaystyle\sum_{i=0}^N a_ix^iy^{N-i},
\end{equation*}
and define
\begin{equation*}
\mu_{\text{norm}}(f,z)=\mu_{\text{norm}}(h,(z,1)), \quad \mu_{\text{norm}}(f)=\max_{z\in\mathbb{C}:f(z)=0}\mu_{\text{norm}}(f,z).
\end{equation*}
Taking $\|f\|=\|h\|$ and expanding the derivative, it turns out that
\begin{equation}\label{eq:mu}
\mu_{\text{norm}}(f,z)=\frac{N^{1/2}(1+|z|^2)^{\frac{N-2}{2}}}{|f'(z)|}\|f\|,
\end{equation}
which allows us to easily compute the condition number for simple cases (see \cite{Beltran15} for an elementary proof of this last formula).
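As a quick illustration, formula \eqref{eq:mu} is easy to evaluate numerically. The following Python sketch (the helper names are ours, not from any library) computes the Bombieri-Weyl norm and $\mu_{\text{norm}}(f,z)$ from a coefficient list; for instance, for $f(z)=z^2-1$ one gets $\|f\|=\sqrt2$, $|f'(\pm1)|=2$, and hence $\mu_{\text{norm}}(f,\pm 1)=1$.

```python
import math

def bw_norm(coeffs):
    """Bombieri-Weyl norm of f(z) = sum a_i z^i, with coeffs = [a_0, ..., a_N]."""
    N = len(coeffs) - 1
    return math.sqrt(sum(abs(a) ** 2 / math.comb(N, i) for i, a in enumerate(coeffs)))

def mu_norm(coeffs, z):
    """Condition number mu_norm(f, z) at a root z of f, via the displayed formula."""
    N = len(coeffs) - 1
    dfz = sum(i * a * z ** (i - 1) for i, a in enumerate(coeffs) if i >= 1)
    if dfz == 0:  # double root: the condition number is infinite
        return math.inf
    return math.sqrt(N) * (1 + abs(z) ** 2) ** ((N - 2) / 2) * bw_norm(coeffs) / abs(dfz)
```

The same routine accepts complex roots, since Python's `abs` and powers handle `complex` arguments.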
\subsection{An alternative formula for the condition number and idea of our proof}
Since a polynomial is (up to a multiplicative constant) determined by its zeros, and these can be seen as spherical points, one can aim to give a formula for the condition number of a polynomial that depends only on the associated spherical points. Shub and Smale accomplished this task. Adapting the notation of \cite{SS93III} to ours, we have:
\begin{prop}
Let $P(z)=\prod_{i=1}^N(z-z_i)$ be a polynomial and denote by $ p_i$ the point in $\mathbb{S}$
obtained from the inverse stereographic projection of each $z_i$. Then the condition number of $P$ equals
\begin{equation}\label{exp_cond_S2}
\mu_{\text{norm}}(P)=\frac{1}{2}\sqrt{N(N+1)}\max_{1 \leqslant i \leqslant N}\frac{\left(\int_{\mathbb{S}} \prod_{j=1}^N |p- p_j|^2 d\sigma(p)\right)^{1/2}}{\prod_{j\neq i}| p_i- p_j|}.
\end{equation}
\end{prop}
As in \cite{BEMO19}, we will start from a geometrical construction of a point set $\mathcal P_N$ (see Figure \ref{fig:construction} for a graphical description). Its main features are:
\begin{enumerate}
\item The $N$ spherical points are distributed in $2M-1$ parallels of varying height in the sphere, with the $M$--th parallel being the equator.
\item The parallel at height $h_j$ contains $r_j$ points which are (up to a homothety and a translation) a set of $r_j$ roots of unity. They may carry an arbitrary phase; this is not important for our proof.
\item The values of $h_j$ are chosen in such a way that there is a {\em band} of relative area $r_j/N$ whose central height is $h_j$.
\item The construction is equatorially symmetric: $h_j=-h_{2M-j}$ and $r_j=r_{2M-j}$.
\end{enumerate}
Once we have defined our set of points, in order to prove our main result we proceed as follows:
\begin{enumerate}
\item[(A)] Given any $q\in\mathbb S$ and any band $B$ in the sphere, we consider the central parallel $Q$ of $B$, and we compare the integral $I_B$ of $\log|p-q|$ when $p$ lies in the band with the expected value $\Tilde{I}_{Q}$ of the same function when $p$ lies in $Q$. We conclude that $I_B\approx \nu(B) \Tilde{I}_Q$ where $\nu(B)$ is the normalized area of $B$.
\item[(B)] Given any $q\in\mathbb S$, we split the integral $I=-\kappa$ of $\log|p-q|$ with $p\in\mathbb S$ over the different bands associated with our point set. From (A), the value $I_j$ in each band $B_j$ is similar to $\nu(B_j)\Tilde{I}_j $ where $\Tilde{I}_j $ is the expected value of the function along the parallel $Q_j$, that is $I_j\approx (r_j/N)\Tilde{I}_j $ and $-\kappa N=I N=N\sum_j I_j\approx \sum_j r_j\Tilde I_j$. The difference between $-\kappa N$ and $\sum_j r_j\Tilde{I}_j$ turns out to be rather small unless $q$ is too close to the poles.
\item[(C)] We then compare the value of $r_j \Tilde{I}_j $ with that of $\sum_{k=0}^{r_j-1}\log|q-p_{j,k}|$ where the $p_{j,k}$ are the $r_j$ points in the corresponding parallel. Both quantities are again very similar (except for the parallel closest to $q$, where the discrete sum can diverge to $-\infty$). From this, we get
\[
\prod_{i=1}^N|q-p_i|=e^{\sum_{j=1}^{2M-1}\sum_{k=0}^{r_j-1}\log|q-p_{j,k}|}\lesssim e^{\sum_{j=1}^{2M-1}r_j \Tilde{I}_j}\approx e^{-\kappa N}.
\]
This essentially gives an upper bound for the numerator in \eqref{exp_cond_S2}, once the details are settled.
\item[(D)] The same kind of argument, using also that our point set is well--separated, produces a lower bound $\prod_{j\neq i}|p_i-p_j|\gtrsim \sqrt{N}e^{-N\kappa}$, valid for all fixed $i$, for the denominator of \eqref{exp_cond_S2}. This almost finishes the proof of our main result.
\end{enumerate}
This procedure is similar to that of \cite{BEMO19}, but in this paper all the appearances of $\approx, \lesssim,\gtrsim$ are estimated with concrete constants. One benefit is that our point set is simpler: in \cite{BEMO19} the points need to be distributed in the parallels of height $h_j$, with some of them also sent to the parallels delimiting the spherical bands, while we only need to allocate points in the central parallels.
\begin{remark}
The construction in \cite{BEMO19} has a property that ours does not: the associated discrete measure can be used to approximate the continuous integral of $\log|p-q|$ up to a constant order for any fixed $q$. This property (which is the reason to send part of the points to the parallels delimiting the bands) is a key point in the proof of the main result of \cite{BEMO19}. Our construction only gets this if $q$ is not too close to the north and south poles, yet we are able to prove our main theorem from this weaker property.
\end{remark}
\subsection*{Organization of the paper}
In the next section, we state the construction of the set of spherical points which will be, under the stereographic projection, the zeros of the sequence of well--conditioned polynomials. In Section \ref{sec:comparison} we prove (A); in Section \ref{sec:Proof1} we prove (B); Sections \ref{sec:numerador} and \ref{sec:denominador} are devoted to proving (C) and (D) respectively. Finally, the main result is proved in Section \ref{sec:final}.
\section{Geometrical description of the set of points $\mathcal{P}_{N}$ in $\mathbb{S}$}\label{Sect:PN}
We now construct our set of points $\mathcal{P}_N=\{p_1,\ldots,p_N\}$ in $\mathbb{S}$. We denote by $Q_h$ the parallel of height $h$,
\begin{equation*}
Q_h=\{(x,y,z)\in\mathbb{S} : z=h\}, \quad -1\leqslant h \leqslant 1.
\end{equation*}
Let $M$ be a positive integer, define $N=4M^2$, and let
\begin{equation*}
r_j=\begin{cases}4j, &1\leqslant j\leqslant M,\\4(2M-j), &M\leqslant j\leqslant 2M-1,\end{cases}
\quad
h_j=\begin{cases}1-\frac{j^2}{M^2}, &1\leqslant j\leqslant M, \\-1+\frac{(2M-j)^2}{M^2}, &M\leqslant j\leqslant 2M-1.\end{cases}
\end{equation*}
Note that $N=r_1+\cdots+r_{2M-1}$, that the claimed symmetry properties $r_j=r_{2M-j}$ and $h_j=-h_{2M-j}$ hold, and that $h_M=0$. Our point set is constructed by taking $r_j$ equally spaced points in each of the parallels $Q_j=Q_{h_j}$. We will refer to $Q_j$ simply as the $j$--th parallel.
For all $1 \leqslant j \leqslant 2M-1$, we define the $j$--th band as
\begin{equation*}
B_j=\{(x,y,z)\in\mathbb{S} : H_j \leqslant z \leqslant H_{j-1}\},
\end{equation*}
where
\begin{equation}\label{frontera_banda}
H_j=\begin{cases}1-\frac{j(j+1)}{M^2}, & 0 \leqslant j \leqslant M-1,\\-1+\frac{(2M-j-1)(2M-j)}{M^2}, &M\leqslant j\leqslant 2M-1.\end{cases}
\end{equation}
Observe that $Q_j$ is the central parallel of the band $B_j$, in the sense that
\begin{equation*}
h_j=\frac{H_{j-1}+H_j}{2}=H_{j-1}-\frac{r_j}{N}=H_j+\frac{r_j}{N},\quad 1\leqslant j \leqslant 2M-1.
\end{equation*}
Moreover, note that $B_1$ and $B_{2M-1}$ are just two spherical caps surrounding the north and south poles respectively, and that $\mathbb{S}= \displaystyle\cup_{j=1}^{2M-1}B_{j}$. The relative area of each band is (use for example Lemma \ref{int_S2}):
\begin{equation*}
\nu(B_j)=\frac{H_{j-1}-H_j}{2}=\frac{r_j}{N}, \quad 1\leqslant j \leqslant 2M-1.
\end{equation*}
See Figure \ref{fig:construction} for a graphical illustration of this construction.
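The defining relations of the construction can be verified mechanically in exact rational arithmetic. The following Python sketch (names ours) checks, for a given $M$, that $N=\sum_j r_j$, the symmetries $r_j=r_{2M-j}$ and $h_j=-h_{2M-j}$, the midpoint property $h_j=(H_{j-1}+H_j)/2$, and $\nu(B_j)=r_j/N$:

```python
from fractions import Fraction

def construction(M):
    """Point counts r_j, heights h_j and band boundaries H_j for N = 4M^2."""
    N = 4 * M * M
    r = {j: 4 * j if j <= M else 4 * (2 * M - j) for j in range(1, 2 * M)}
    h = {j: Fraction(M * M - j * j, M * M) if j <= M
         else Fraction((2 * M - j) ** 2 - M * M, M * M) for j in range(1, 2 * M)}
    H = {j: Fraction(M * M - j * (j + 1), M * M) if j <= M - 1
         else Fraction((2 * M - j - 1) * (2 * M - j) - M * M, M * M)
         for j in range(2 * M)}
    return N, r, h, H

N, r, h, H = construction(5)
```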
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Imagen.png}
\caption{Our construction of spherical points for $M=3$, that is $N=36$ points, from three different points of view (left: tilted; center: equatorial; right: north pole). The parallels $Q_j$ are the black circles, and the points are equidistributed among them. The red lines are the parallels $Q_{H_j}$ which delimit the bands (the north and south poles are marked with red dots and correspond to $H_0$ and $H_{2M-1}$, but they do not belong to $\mathcal P_N$.)}\label{fig:construction}
\end{figure}
\section{Comparison of the integrals in parallels and bands}\label{sec:comparison}
This section provides a comparison tool which is independent of the construction above. It will later be applied to each of the bands $B_j$ described in Section \ref{Sect:PN}.
Let $B$ be the band contained between the parallels of heights $h-\varepsilon$ and $h+\varepsilon$, with $Q=Q_h$ the central parallel. The relative area of $B$ is $\varepsilon$.
We now show that, for any {\em fixed} $q=(a,b,c)\in\mathbb S$, the integral $I_B(q)$ of $\log|p-q|$ with $p$ chosen in $B$ is approximately equal to the relative area of the band, times the expected value $\Tilde I_Q(q)$ of the same function in the central parallel. From \cite[Prop. 2.2]{Diamond}, for the parallel of height $t\in[-1,1]$ the expected value $\Tilde I_{Q_t}(q)$ satisfies
\begin{equation}\label{Valor_fp(h)}
\Tilde I_{Q_t}(q)=
\left\{\begin{array}{ll}
\frac{1}{2}(\log(1+t)+\log(1-c)), \quad &\mathrm{if }\,\, t \geqslant c,\\
&\\
\frac{1}{2}(\log(1-t)+\log(1+c)), \quad &\mathrm{if }\,\, t <c.
\end{array}\right.
\end{equation}
From Lemma \ref{int_S2} we have
\begin{equation}\label{rel1}
\frac{1}{\varepsilon}I_B(q)=\frac{1}{\varepsilon}\int_{p\in B}\log|q-p|d\sigma(p)=\frac{1}{2\varepsilon}\int_{h-\varepsilon}^{h+\varepsilon}\Tilde I_{Q_t}(q)dt.
\end{equation}
\begin{lemma}[Comparison when $q$ is outside of the band]\label{lem:nuevocotaptomedio}
With the notations above, if $c\geqslant h+\varepsilon$ we have
\begin{align}\label{eq:nueva}
0\leqslant \Tilde I_{Q}(q)-\frac{1}{\varepsilon}I_B(q)
-\frac{\varepsilon^2}{12(1-h)^2}&=\sum_{n=2}^{\infty}
\frac{\varepsilon^{2n}}{4n(2n+1)(1-h)^{2n}}\\
\nonumber & \leqslant \frac{1}{2}\left(\frac{5}{6}-\log2\right)\frac{\varepsilon^{4}}{(1-h)^{4}},
\end{align}
and if $c\leqslant h-\varepsilon$ we have
\begin{align}\label{eq:nueva2}
0\leqslant \Tilde I_{Q}(q)-\frac{1}{\varepsilon}I_B(q)
-\frac{\varepsilon^2}{12(1+h)^2}&=\sum_{n=2}^{\infty}
\frac{\varepsilon^{2n}}{4n(2n+1)(1+h)^{2n}}\\
\nonumber & \leqslant \frac{1}{2}\left(\frac{5}{6}-\log2\right)\frac{\varepsilon^{4}}{(1+h)^{4}}.
\end{align}
\end{lemma}
\begin{proof}
Assume first that $c\geqslant h+\varepsilon$, and let $U=\Tilde I_{Q}(q)-\frac{1}{\varepsilon}I_B(q)-\frac{\varepsilon^2}{12(1-h)^2}$ be the quantity bounded in \eqref{eq:nueva}. Then,
\begin{align*}
U=&\frac12\log(1-h)-\frac{1}{4\varepsilon}\int_{-\varepsilon}^\varepsilon \log(1-h+t)\,dt-\frac{\varepsilon^2}{12(1-h)^2}\\
=&-\frac{1}{4\varepsilon}
\int_{-\varepsilon}^{\varepsilon}\log\left(1+\frac{t}{1-h}\right)\,dt
-\frac{\varepsilon^2}{12(1-h)^2}\\
=&-\sum_{n=1}^\infty\frac{1}{4\varepsilon}\int_{-\varepsilon}^\varepsilon
\frac{(-1)^{n+1}t^n}{n(1-h)^n}\,dt-\frac{\varepsilon^2}{12(1-h)^2}.
\end{align*}
The terms with odd $n$ in the sum integrate to $0$ and hence we have
\begin{align*}
U=&\sum_{n=1}^\infty
\frac{\varepsilon^{2n}}{4n(2n+1)(1-h)^{2n}}-\frac{\varepsilon^2}{12(1-h)^2}
=\sum_{n=2}^\infty
\frac{\varepsilon^{2n}}{4n(2n+1)(1-h)^{2n}},
\end{align*}
as wanted. The final inequality follows from noting that $\varepsilon/(1-h)\leqslant 1$ and computing the sum (\cite[(0.234.8)]{GR15}). The other case ($c\leqslant h-\varepsilon$) is proved the same way.
\end{proof}
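Lemma \ref{lem:nuevocotaptomedio} can be confirmed numerically: when $c\geqslant h+\varepsilon$ the terms in $\log(1+c)$ cancel, so $\Tilde I_{Q}(q)-\frac{1}{\varepsilon}I_B(q)$ has the closed form $\frac12\log(1-h)-\frac{1}{4\varepsilon}\int_{h-\varepsilon}^{h+\varepsilon}\log(1-t)\,dt$. The sketch below (helper names ours) compares this closed form with a truncation of the series in \eqref{eq:nueva}:

```python
import math

def lhs(h, eps):
    """Tilde I_Q(q) - I_B(q)/eps for c >= h + eps (the log(1+c) terms cancel)."""
    A = lambda t: (1 - t) - (1 - t) * math.log(1 - t)  # antiderivative of log(1-t)
    return 0.5 * math.log(1 - h) - (A(h + eps) - A(h - eps)) / (4 * eps)

def series(h, eps, terms=60):
    """sum_{n>=1} eps^(2n) / (4n(2n+1)(1-h)^(2n))."""
    rho = (eps / (1 - h)) ** 2
    return sum(rho ** n / (4 * n * (2 * n + 1)) for n in range(1, terms + 1))

h, eps = 0.2, 0.05
```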
\begin{lemma}[Comparison when $q$ is inside of the band]\label{comp_paral_band_p}
With the notations above, assume now that $h-\varepsilon \leqslant c \leqslant h+\varepsilon.$ Then,
\begin{equation}
-\frac{\varepsilon/4}{1-h^2}\leqslant \Tilde I_{Q}(q)-\frac{1}{\varepsilon}I_{B}(q)\leqslant
\begin{cases}\frac{(1-\log 2)\varepsilon^2}{2(1-h)^2},&h\leqslant c\leqslant h+\varepsilon,\\
\frac{(1-\log 2)\varepsilon^2}{2(1+h)^2},&h-\varepsilon\leqslant c\leqslant h.\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
Assume first that $c\in[h,h+\varepsilon]$, and define
\begin{align*}
U(c)=&\Tilde I_{Q}(q)-\frac{1}{\varepsilon}
I_{B}(q)\\
=&\frac{1}{2}\log(1-h)+\frac12\log(1+c)-\frac{1}{4\varepsilon}\int_{h-\varepsilon}^c\log(1-t)+\log(1+c)\,dt
\\&-\frac{1}{4\varepsilon}\int_c^{h+\varepsilon}\log(1+t)+\log(1-c)\,dt.
\end{align*}
A little arithmetic shows that
\[
U'(c)=\frac{h+\varepsilon-c}{\varepsilon(2-2c^2)}\geqslant 0.
\]
The maximum and the minimum of $U$ in the interval $c\in[h,h+\varepsilon]$ are thus attained at the endpoints. The case $c=h+\varepsilon$ is covered by \eqref{eq:nueva}, which (using $\varepsilon/(1-h)\leqslant 1$) yields:
\begin{equation*}
U(c)\leqslant U(h+\varepsilon)=\sum_{n=1}^\infty
\frac{\varepsilon^{2n}}{4n(2n+1)(1-h)^{2n}}\leqslant \frac{\varepsilon^2}{(1-h)^2}\frac14\sum_{n=1}^\infty
\frac{1}{n(2n+1)},
\end{equation*}
and again from \cite[(0.234.8)]{GR15} we obtain the result.
For the minimum, we have
\begin{align*}
U(c)\geqslant U(h)=&\frac14(\log(1-h)+\log(1+h))-\frac{1}{4\varepsilon}\left(\int_{h-\varepsilon}^h\log(1-t)\,dt+\int_h^{h+\varepsilon}\log(1+t)\,dt\right)\\
=&-\frac{1}{4\varepsilon}\left(\int_{h-\varepsilon}^h\log\left(1+\frac{h-t}{1-h}\right)\,dt+\int_h^{h+\varepsilon}\log\left(1+\frac{t-h}{1+h}\right)\,dt\right)
\\
=&-\frac{1}{4\varepsilon}\left(\int_0^{\varepsilon}\log\left(1+\frac{t}{1-h}\right)\,dt+\int_0^{\varepsilon}\log\left(1+\frac{t}{1+h}\right)\,dt\right).
\end{align*}
Expanding the logarithms in power series and integrating termwise we get
\begin{align*}
U(c)\geqslant U(h)=&\frac{1}{4}\sum_{n= 1}^\infty\frac{(-1)^n\varepsilon^n}{n(n+1)}\frac{1}{(1-h)^n}+\frac{1}{4}\sum_{n= 1}^\infty\frac{(-1)^n\varepsilon^n}{n(n+1)}\frac{1}{(1+h)^n},
\end{align*}
and since $\varepsilon/(1-h)$ and $\varepsilon/(1+h)$ are both at most equal to $1$ we see that both alternating series have decreasing terms, thus concluding:
\[
U(c)\geqslant U(h)\geqslant-\frac{\varepsilon}{8(1-h)}-\frac{\varepsilon}{8(1+h)}=-\frac{\varepsilon/4}{1-h^2},
\]
and the lemma follows. The other case ($c\in[h-\varepsilon,h]$) is done the same way.
\end{proof}
\section{Comparison between $-\kappa N$ and $\sum_j r_j\Tilde{I}_j $}\label{sec:Proof1}
Recall that:
\begin{itemize}
\item $I_j(q)$ is the integral of $\log|p-q|$ when $p$ lies in the $j$--th band $B_j$, so that
\begin{equation}\label{eq:kap}
-\kappa=\int_{\mathbb S}\log|p-q|\,d\sigma(p)=\sum_{j=1}^{2M-1}I_j(q),\quad q\in\mathbb S.
\end{equation}
\item $\Tilde{I}_j (q)$ is the expected value of $\log|p-q|$ when $p$ lies in the $j$--th parallel $Q_j$. From \eqref{Valor_fp(h)} we have
\begin{equation}\label{Valor_Itildej}
\Tilde{I}_j (q)=
\left\{\begin{array}{ll}
\frac{1}{2}(\log(1+h_j)+\log(1-c)), \quad &\mathrm{if }\,\, h_j \geqslant c,\\
&\\
\frac{1}{2}(\log(1-h_j)+\log(1+c)), \quad &\mathrm{if }\,\, h_j <c.
\end{array}\right.
\end{equation}
\end{itemize}
The results in Section \ref{sec:comparison} yield $N I_j(q)\approx r_j\Tilde{I}_j(q)$, where the meaning of $\approx$ is made precise by different bounds according to whether $q$ lies outside or inside the band $B_j$.
\begin{lemma}\label{const_cond}
Let $q=(a,b,c)\in B_\ell\subseteq \mathbb{S}$ with $\ell\leqslant M$ and $M\geqslant 5$. Let
\begin{equation}\label{Sum_SN}
S_N=S_N(q)=\displaystyle\sum_{j=1}^{2M-1} r_j\Tilde{I}_j (q).
\end{equation}
Then,
\begin{equation*}
-1\leqslant S_N + N\kappa-T(\ell)\leqslant \frac{2(1-\log 2)}{\ell}+\frac{1}{15},
\end{equation*}
where
\begin{equation}\label{eq:T}
T(\ell)=\sum_{j=1}^{\ell-1}\frac{r_j^3}{12N^2(1+h_j)^2}+\sum_{j=\ell+1}^{2M-1}\frac{r_j^3}{12N^2(1-h_j)^2}.
\end{equation}
\end{lemma}
\begin{proof}
Let
\begin{align*}
(\star)=&\,S_N + N\kappa-T(\ell)\\
\stackrel{\text{\eqref{eq:kap}}}{=}&\,r_\ell \left(\Tilde{I}_\ell(q)-\frac{N}{r_\ell}I_\ell(q)\right)\\
&+\sum_{j=1}^{\ell-1}r_j\left(\Tilde{I}_j(q)-\frac{N}{r_j}I_j(q)-\frac{r_j^2}{12N^2(1+h_j)^2}\right)\\
&+\sum_{j=\ell+1}^{2M-1}r_j\left(\Tilde{I}_j(q)-\frac{N}{r_j}I_j(q)-\frac{r_j^2}{12N^2(1-h_j)^2}\right).
\end{align*}
From Lemmas \ref{lem:nuevocotaptomedio} (with $\varepsilon=r_j/N$) and \ref{comp_paral_band_p} (with $\varepsilon=r_\ell/N$) we have on the one hand
\[
(\star)\geqslant -\frac{ r_\ell^2}{4N(1-h_\ell^2)}=-\frac{1}{2-\frac{\ell^2}{M^2}}\geqslant -1,
\]
and on the other hand
\begin{align*}
(\star)\leqslant & \frac{(1-\log2)\,r_\ell^3}{2N^2(1-h_\ell)^2}+\frac{1}{2}\left(\frac{5}{6}-\log2\right)\left(\sum_{j=1}^{\ell-1}\frac{r_j^{5}}{N^4(1+h_j)^{4}}+
\sum_{j=\ell+1}^{2M-1}\frac{r_j^{5}}{N^4(1-h_j)^{4}}\right)\\
\leqslant &\frac{2(1-\log 2)}{\ell}+\frac{1}{2}\left(\frac{5}{6}-\log2\right)\left(2\sum_{j=1}^{M-1}\frac{r_j^{5}}{N^4(1+h_j)^{4}}+\sum_{j=\ell+1}^{M}\frac{r_j^{5}}{N^4(1-h_j)^{4}}\right)\\
\leqslant &\frac{2(1-\log 2)}{\ell}+\frac{1}{2}\left(\frac{5}{6}-\log2\right)\left(8\sum_{j=1}^{M-1}\frac{j^{5}}{(2M^2-j^2)^{4}}+4\sum_{j=\ell+1}^{M}\frac{1}{j^3}\right)\\
\leqslant &\frac{2(1-\log 2)}{\ell}+\frac{1}{2}\left(\frac{5}{6}-\log2\right)\left(8 \displaystyle\sum_{j=1}^{M-1} \frac{j^5}{M^8} + 4\displaystyle\sum_{j=2}^{\infty}\frac{1}{j^3}\right)\\
\leqslant &\frac{2(1-\log 2)}{\ell}+\frac{1}{2}\left(\frac{5}{6}-\log2\right)\left(\frac{1}{30}+4[\zeta(3)-1]\right),
\end{align*}
where $\zeta(3)$ denotes Ap\'ery's constant. Note that we have used \cite[(0.121.5)]{GR15} and $M\geqslant 5$ to deduce
$$
8\displaystyle\sum_{j=1}^{M-1} \frac{j^5}{M^8}=\frac{2(M-1)^2}{3M^5}\left(2(M-1)-\frac{1}{M}\right)\leqslant \frac1{30}.
$$
The lemma follows after some arithmetic bounding $\log 2\geqslant 0.69$ and $\zeta(3)\leqslant 1.203$.
\end{proof}
The following result provides lower and upper bounds for $T(\ell)$.
\begin{lemma}\label{lem:aux1}
Let $q\in B_{\ell}$ with $\ell\leqslant M$. Then,
\begin{equation*}
\frac{1}{3}\log\frac{M+1}{\ell+1}\leqslant T(\ell) \leqslant \frac{1}{3}\log\frac{M}{\ell}+\frac{1}{6}.
\end{equation*}
\end{lemma}
\begin{proof}
From the definition \eqref{eq:T} and the symmetry with respect to the equator, we have
\begin{align*}
T(\ell)=& \frac{1}{12}\displaystyle\sum_{j=1}^{\ell-1}\frac{r_j^3}{N^2(1+h_j)^2}+\frac{1}{12}\displaystyle\sum_{j=\ell+1}^M \frac{r_j^3}{N^2(1-h_j)^2}+\frac{1}{12}\displaystyle\sum_{j=1}^{M-1}\frac{r_j^3}{N^2(1+h_j)^2}\\
=& \frac{1}{3}\displaystyle\sum_{j=1}^{\ell-1}\frac{j^3}{(2M^2-j^2)^2}+\dfrac{1}{3}\displaystyle\sum_{j={\ell+1}}^M \frac{1}{j}+\frac{1}{3}\displaystyle\sum_{j=1}^{M-1}\frac{j^3}{(2M^2-j^2)^2}.
\end{align*}
Then,
\begin{equation*}
\frac{1}{3}\displaystyle\sum_{j=\ell+1}^M\frac{1}{j}\leqslant T(\ell) \leqslant
\frac{1}{3}\displaystyle\sum_{j=\ell+1}^M \frac{1}{j} +\frac{2}{3}\displaystyle\sum_{j=1}^{M-1}\frac{j^3}{(2M^2-j^2)^2}.
\end{equation*}
Note that $\sum_{j={\ell+1}}^M \frac{1}{j}$ vanishes for $\ell=M$.
The result follows from the following two bounds: on the one hand
\begin{equation*}
\frac{2}{3}\displaystyle\sum_{j=1}^{M-1}\frac{j^3}{(2M^2-j^2)^2} \leqslant \frac{2}{3}\displaystyle\sum_{j=1}^{M-1}\frac{j^3}{M^4}=\frac{(M-1)^2}{6M^2} \leqslant \frac{1}{6},
\end{equation*}
and on the other hand, from Lemma \ref{lem:sumalog} we have
\[
\frac{1}{3}\log\frac{M+1}{\ell+1}<\frac{1}{3}\displaystyle\sum_{j=\ell+1}^M \frac{1}{j}<\frac{1}{3}\log\frac{M}{\ell},
\]
and the proof is concluded.
\end{proof}
\begin{prop}\label{Cociente_Integral_cond_previo}
Let $q\in B_{\ell}$ with $\ell\leqslant M$ and $M \geqslant 5$. Then,
\begin{equation*}
-1\leqslant -1+\frac13\log \frac{M+1}{\ell+1}\leqslant S_N+N\kappa \leqslant \frac13\log \frac{M}{\ell}+\frac{2(1-\log 2)}{\ell}+\frac{1}{4}.
\end{equation*}
In other words, $-N\kappa\approx \sum_j r_j\Tilde I_j(q)$ where the symbol $\approx$ hides essentially $\frac13\log\frac M\ell$.
\end{prop}
\begin{proof}
Immediate from Lemmas \ref{const_cond} and \ref{lem:aux1}.
\end{proof}
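Proposition \ref{Cociente_Integral_cond_previo} also admits a numerical test. Integrating \eqref{Valor_fp(h)} over $t\in[-1,1]$ gives $-\kappa=\log 2-\frac12$, i.e. $\kappa=\frac12-\log 2$, and $S_N$ is computable directly from \eqref{Valor_Itildej}. The sketch below (names ours) takes $M=5$ and $c=0.5$, for which $q\in B_4$, and checks the sandwich:

```python
import math

M = 5
N = 4 * M * M
kappa = 0.5 - math.log(2.0)  # from integrating (Valor_fp(h)) over t in [-1, 1]

def r(j):
    return 4 * j if j <= M else 4 * (2 * M - j)

def h(j):
    return 1 - j * j / M ** 2 if j <= M else -1 + (2 * M - j) ** 2 / M ** 2

def H(j):
    return (1 - j * (j + 1) / M ** 2 if j <= M - 1
            else -1 + (2 * M - j - 1) * (2 * M - j) / M ** 2)

def S_N(c):
    """Discrete sum sum_j r_j * Tilde I_j(q) for q = (a, b, c), via (Valor_Itildej)."""
    return sum(r(j) * 0.5 * (math.log(1 + h(j)) + math.log(1 - c)) if h(j) >= c
               else r(j) * 0.5 * (math.log(1 - h(j)) + math.log(1 + c))
               for j in range(1, 2 * M))

c = 0.5
ell = next(j for j in range(1, M + 1) if H(j) <= c <= H(j - 1))  # q lies in B_ell
value = S_N(c) + N * kappa
lower = -1 + math.log((M + 1) / (ell + 1)) / 3
upper = math.log(M / ell) / 3 + 2 * (1 - math.log(2)) / ell + 0.25
```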
\section{The numerator of the condition number formula}\label{sec:numerador}
This section is devoted to obtaining an upper bound for the term
\begin{equation*}
\log\displaystyle\prod_{i=1}^N |p-p_i| - S_N,
\end{equation*}
with $S_N=S_N(p)$ the discrete sum defined in \eqref{Sum_SN}, thus producing an upper bound for the numerator in \eqref{exp_cond_S2}. We need some technical lemmata.
\begin{lemma}\label{lem:int}
Given $x,y\in\mathbb{R}$ and $\varphi \in [0,2\pi/r]$, we have
\[
\prod_{i=0}^{r-1}\left(x^2+y^2-2xy\cos\left(\varphi+\frac{2\pi i}{r}\right)\right)=x^{2r}+y^{2r}-2x^ry^r\cos(r\varphi).
\]
\end{lemma}
\begin{proof}
See \cite[eq. 1.394]{GR15}.
\end{proof}
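Since Lemma \ref{lem:int} underlies all the product estimates below, a quick numerical check may be reassuring. The sketch below (names ours) evaluates both sides of the identity at arbitrary values:

```python
import math

def lhs(x, y, r, phi):
    """Product over i of x^2 + y^2 - 2xy cos(phi + 2*pi*i/r)."""
    prod = 1.0
    for i in range(r):
        prod *= x * x + y * y - 2 * x * y * math.cos(phi + 2 * math.pi * i / r)
    return prod

def rhs(x, y, r, phi):
    """Closed form x^(2r) + y^(2r) - 2 x^r y^r cos(r*phi)."""
    return x ** (2 * r) + y ** (2 * r) - 2 * x ** r * y ** r * math.cos(r * phi)
```

The identity is transparent once one writes $x^2+y^2-2xy\cos\theta=|x-ye^{i\theta}|^2$ and pairs up the $r$-th roots.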
Next we are going to give an exact expression for the quantity
\begin{equation}\label{eq:1}
\Theta=\prod_{i=0}^{r-1}|p-q_i|^2,
\end{equation}
where $p=(\sqrt{1-c^2},0,c)$ is a spherical point and $q_0,\ldots, q_{r-1}$ are $r$ equidistributed points in the parallel of height $h$, that is
\[
q_i=\left(\sqrt{1-h^2}\cos\left(\varphi+\frac{2\pi i}{r}\right),\sqrt{1-h^2}\sin\left(\varphi+\frac{2\pi i}{r}\right),h\right),
\]
where $\varphi\in[0,2\pi/r]$ is an arbitrary phase accounting for the fact that the points can be in any rotational position.
\begin{lemma}\label{lem:xey}
We have $\Theta=x^{2r}+y^{2r}-2x^ry^r\cos(r\varphi)$, where
\begin{align*}
x=&\sqrt{1-c}\sqrt{1+h},\\
y=&\sqrt{1+c}\sqrt{1-h}.
\end{align*}
In particular, the minimum and maximum of $\Theta$ are attained at $\varphi=0$ and $\varphi=\pi/r$ respectively, and we get
\[
|x^r-y^r|^2 \leqslant \Theta\leqslant |x^r+y^r|^2.
\]
\end{lemma}
\begin{proof}
We write
\begin{align*}
\Theta=&\prod_{i=0}^{r-1}|p-q_i|^2\\
=&\prod_{i=0}^{r-1}(2-2\langle p,q_i\rangle)\\
=&\prod_{i=0}^{r-1}\left(2-2\sqrt{1-h^2}\sqrt{1-c^2}\cos\left(\varphi+\frac{2\pi i}{r}\right)-2hc\right).
\end{align*}
Taking $x$ and $y$ as in the statement above and applying Lemma \ref{lem:int} we deduce that
\begin{align*}
\Theta=&x^{2r}+y^{2r}-2x^ry^r\cos(r\varphi),
\end{align*}
which is the first assertion of the lemma. The second is immediate: since the extreme values of the cosine are $\pm1$, it is enough to notice that
\[
x^{2r}+y^{2r}\pm2x^ry^r=|x^r\pm y^r|^2.
\]
\end{proof}
\begin{prop}\label{ImpCond_cota_num}
Let $p\in \mathbb{S}$ and let $S_N$ be as in \eqref{Sum_SN}. Denote by $p_i \in \mathcal{P}_N$ the points in our collection $\mathcal{P}_N$. Then, for $M\geqslant 5$,
\begin{equation*}
\log\displaystyle\prod_{k=1}^N |p-p_k| \leqslant S_N + \log 2 + \frac12.
\end{equation*}
\end{prop}
\begin{proof}
Without loss of generality, we take $p=(\sqrt{1-c^2},0,c)$ belonging to the band $B_{\ell}$, with $1 \leqslant \ell \leqslant M$. Let us denote by $q_{j,0},\ldots,q_{j,r_j-1}$ the $r_j$ equidistributed points in the $j$--th parallel.
From Lemma \ref{lem:xey}, we have
\begin{equation}\label{cota_prod_s}
\log\prod_{k=1}^N| p-p_k|=\log\prod_{j=1}^{2M-1}\prod_{i=0}^{r_j-1}| p-q_{j,i}|\leqslant \sum_{j=1}^{2M-1}\log| x_{h_j,c}^{r_j}+y_{h_j,c}^{r_j}|,
\end{equation}
where
\begin{align*}
x_{h_j,c}=&\sqrt{1-c}\sqrt{1+h_j},\\
y_{h_j,c}=&\sqrt{1+c}\sqrt{1-h_j}.
\end{align*}
Note that the bound obtained in \eqref{cota_prod_s} can be rewritten as
\begin{align}
\nonumber \sum_{j=1}^{2M-1}\log & | x_{h_j,c}^{r_j}+y_{h_j,c}^{r_j}| \\
& = \displaystyle\sum_{j=1}^{\ell-1} r_j\log x_{h_j,c} + \displaystyle\sum_{j=\ell+1}^{2M-1}r_j\log y_{h_j,c}
+\log| x_{h_{\ell},c}^{r_{\ell}}+y_{h_{\ell},c}^{r_{\ell}}| \label{cota_ProPP}\\
\nonumber & \hspace*{0.3cm} + \sum_{j=1}^{\ell-1}\log| 1+y_{h_j,c}^{r_j}/x_{h_j,c}^{r_j}|+\sum_{j=\ell+1}^{2M-1}\log|1+x_{h_j,c}^{r_j}/y_{h_j,c}^{r_j}|.
\end{align}
We know that $c\in[H_{\ell},H_{\ell-1}]$. Then, for $1\leqslant j\leqslant \ell-1$ we have $h_j\geqslant c$ and hence $x_{h_j,c}\geqslant y_{h_j,c}$. Conversely, for $\ell+1\leqslant j\leqslant 2M-1$ we get $x_{h_j,c} \leqslant y_{h_j,c}$. Moreover, if $c\in[H_{\ell},h_{\ell}]$ then $h_{\ell} \geqslant c$ and $x_{h_{\ell},c}\geqslant y_{h_{\ell},c}$, so
$$
\log |x_{h_{\ell},c}^{r_{\ell}}+y_{h_{\ell},c}^{r_{\ell}}| = r_{\ell}\log x_{h_{\ell},c} + \log |1+y_{h_{\ell},c}^{r_{\ell}}/x_{h_{\ell},c}^{r_{\ell}}| \leqslant r_{\ell}\log x_{h_{\ell},c} +\log 2,
$$
and from \eqref{Valor_Itildej} we deduce that
$$
\eqref{cota_ProPP} \leqslant \log 2 + \displaystyle\sum_{j=1}^{\ell} r_j\log x_{h_j,c} +\displaystyle\sum_{j=\ell+1}^{2M-1}r_j\log y_{h_j,c}=\log 2 +\sum_{j=1}^{2M-1}r_j \Tilde{I}_j (p).
$$
It is easy to check that this inequality also holds for $c\in(h_{\ell},H_{\ell-1}]$.
In any case, we obtain
\begin{align*}
\sum_{j=1}^{2M-1}\log| x_{h_j,c}^{r_j}+y_{h_j,c}^{r_j}| &\leqslant \log 2 + \sum_{j=1}^{2M-1}r_j\Tilde{I}_j (p)\\
&\hspace*{0.25cm}+ \sum_{j=1}^{\ell-1}\log| 1+y_{h_j,c}^{r_j}/x_{h_j,c}^{r_j}|+\sum_{j=\ell+1}^{2M-1}\log|1+x_{h_j,c}^{r_j}/y_{h_j,c}^{r_j}|.
\end{align*}
By \eqref{Sum_SN} and using $\log(1+\alpha)\leqslant\alpha$ with $\alpha>0$, we have
\begin{align*}
\sum_{j=1}^{2M-1}\log| x_{h_j,c}^{r_j}+y_{h_j,c}^{r_j}|\leqslant & \,\,\log 2 + S_N\\
&+\sum_{j=1}^{\ell-1}\left(\frac{(1+c)(1-h_j)}{(1-c)(1+h_j)}\right)^{r_j/2}+\sum_{j=\ell+1}^{2M-1}\left(\frac{(1-c)(1+h_j)}{(1+c)(1-h_j)}\right)^{r_j/2},
\end{align*}
and using $H_{\ell}\leqslant c\leqslant H_{\ell-1}$ it is easy to see that
\begin{multline}
\log\prod_{k=1}^N| p-p_k| \leqslant \log 2+S_N\\
+\sum_{j=1}^{\ell-1}\left(\frac{(1+H_{\ell-1})(1-h_j)}{(1-H_{\ell-1})(1+h_j)}\right)^{r_j/2}+
\sum_{j=\ell+1}^{2M-1}\left(\frac{(1-H_{\ell})(1+h_j)}{(1+H_{\ell})(1-h_j)}\right)^{r_j/2} \label{acot_sum}.
\end{multline}
Next, we are going to bound the two sums in \eqref{acot_sum}.
Notice that for $\ell=1$, the first sum vanishes. We have
\begin{align*}
\eqref{acot_sum} &= \sum_{j=1}^{\ell-1}\left(\frac{(1+H_{\ell-1})(1-h_j)}{(1-H_{\ell-1})(1+h_j)}\right)^{r_j/2}+\sum_{j=\ell+1}^{M}\left(\frac{(1-H_{\ell})(1+h_j)}{(1+H_{\ell})(1-h_j)}\right)^{r_j/2}\\
& \hspace*{0.5cm}
+\sum_{j=1}^{M-1}\left(\frac{(1-H_{\ell})(1-h_j)}{(1+H_{\ell})(1+h_j)}\right)^{r_j/2}\\
& = \sum_{j=1}^{\ell-1}\left(\frac{(2M^2-\ell(\ell-1))j^2}{\ell(\ell-1)(2M^2-j^2)}\right)^{2j}+\sum_{j=\ell+1}^{M}\left(\frac{\ell(\ell+1)(2M^2-j^2)}{(2M^2-\ell(\ell+1))j^2}\right)^{2j}\\
&\hspace*{0.5cm}+\sum_{j=1}^{M-1}\left(\frac{\ell(\ell+1)j^2}{(2M^2-\ell(\ell+1))(2M^2-j^2)}\right)^{2j}\\
& \leqslant \displaystyle\sum_{j=1}^{\ell-1} \left(\frac{j^2}{\ell(\ell-1)}\right)^{2j}+\displaystyle\sum_{j=\ell+1}^{M} \left(\frac{\ell(\ell+1)}{j^2}\right)^{2j}+\frac{3}{2}\displaystyle\sum_{j=1}^{M-1} \left(\frac{j}{M}\right)^{4j}\\
& \leqslant \frac12,
\end{align*}
since by Lemmas \ref{L_acot_sum_alpha} and \ref{L_acot_sum_2alpha} one can deduce that
\begin{align*}
\displaystyle\sum_{j=1}^{\ell-1} \left(\frac{j^2}{\ell(\ell-1)}\right)^{2j} &= \displaystyle\sum_{j=1}^{\ell-2} \left(\frac{j^2}{\ell(\ell-1)}\right)^{2j} + \left(\frac{\ell-1}{\ell}\right)^{2(\ell-1)}
\leqslant \frac{1}{4},
\\
\displaystyle\sum_{j=\ell+1}^{M} \left(\frac{\ell(\ell+1)}{j^2}\right)^{2j} &= \displaystyle\sum_{j=\ell+2}^{M} \left(\frac{\ell(\ell+1)}{j^2}\right)^{2j} + \left(\frac{\ell}{\ell+1}\right)^{2(\ell+1)}\\&\leqslant \displaystyle\sum_{j=\ell+2}^{M} \left(\frac{\ell+1}{j}\right)^{4j} + e^{-2} \leqslant \frac{1}{6},\\
\\
\frac{3}{2}\displaystyle\sum_{j=1}^{M-1} \left(\frac{j}{M}\right)^{4j} &\leqslant \frac{1}{20}.
\end{align*}
\end{proof}
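The bound of Proposition \ref{ImpCond_cota_num} can be tested numerically by building $\mathcal P_N$ explicitly. The sketch below (names ours; $M=5$ and an arbitrary height $c=0.37$) checks $\log\prod_k|p-p_k|\leqslant S_N+\log 2+\frac12$:

```python
import math

M = 5
N = 4 * M * M

def r(j):
    return 4 * j if j <= M else 4 * (2 * M - j)

def h(j):
    return 1 - j * j / M ** 2 if j <= M else -1 + (2 * M - j) ** 2 / M ** 2

# The point set P_N: r_j equally spaced points on the parallel of height h_j.
points = []
for j in range(1, 2 * M):
    rad = math.sqrt(1 - h(j) ** 2)
    for i in range(r(j)):
        a = 2 * math.pi * i / r(j)
        points.append((rad * math.cos(a), rad * math.sin(a), h(j)))

def S_N(c):
    """sum_j r_j * Tilde I_j for q = (sqrt(1-c^2), 0, c), from (Valor_Itildej)."""
    return sum(r(j) * 0.5 * (math.log(1 + h(j)) + math.log(1 - c)) if h(j) >= c
               else r(j) * 0.5 * (math.log(1 - h(j)) + math.log(1 + c))
               for j in range(1, 2 * M))

c = 0.37
p = (math.sqrt(1 - c * c), 0.0, c)
log_prod = sum(math.log(math.dist(p, q)) for q in points)
```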
The final outcome of this section will be used in the proof of our main theorem:
\begin{corollary}\label{cor:C}
If $p\in B_\ell$ with $\ell\leqslant M$ and $M\geqslant 5$, then
\[
\prod_{k=1}^N |p-p_k|\leqslant 2e^{-\kappa N}\left(\frac{M}{\ell}\right)^{1/3}e^{3/4}\left(\frac{e}{2}\right)^{2/\ell}.
\]
\end{corollary}
\begin{proof}
Immediate from Propositions \ref{ImpCond_cota_num} and \ref{Cociente_Integral_cond_previo}.
\end{proof}
\section{The denominator of the condition number formula}\label{sec:denominador}
The results obtained in the previous section give us an upper bound for the numerator of \eqref{exp_cond_S2}.
Now, for any fixed $i=1,\ldots,N$, we need a lower bound for the denominator.
\begin{lemma}\label{prod_puntos_circunf_1}
Let $r$ be fixed and let $p_0,\ldots,p_{r-1}$ be $r$ equidistributed points on the unit circumference. Then
\begin{equation*}
\log \prod_{k=1}^{r-1} |p_k-p_0|=\log r.
\end{equation*}
\end{lemma}
\begin{proof}
This is a basic exercise in complex number theory: rotate the points in such a way that $p_{k}=e^{2i\pi k/r}$. Since the polynomial $z^r-1$ has the $r$ roots of unity as zeros, we have $\prod_{k=0}^{r-1}(z-p_{k})=z^r-1=(z-1)(1+z+\ldots+z^{r-1})$. Removing the factor $z-1$ from both terms and substituting $z=1$ we get the result.
\end{proof}
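This computation is easy to confirm numerically; the sketch below (names ours) also covers the rescaled version stated in Corollary \ref{prod_puntos_circunf_r}:

```python
import cmath
import math

def log_prod_to_first(r, s=1.0):
    """log of prod_{k=1}^{r-1} |p_k - p_0| for r equispaced points on a circle of radius s."""
    pts = [s * cmath.exp(2j * math.pi * k / r) for k in range(r)]
    return sum(math.log(abs(pts[k] - pts[0])) for k in range(1, r))
```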
The following is an immediate consequence:
\begin{corollary}\label{prod_puntos_circunf_r}
Let $r$ be fixed and let $p_0,\ldots,p_{r-1}$ be $r$ equidistributed points in a circumference of radius $s$. Then
\begin{equation*}
\log \prod_{k=1}^{r-1} |p_k-p_0|=\log r + (r-1)\log s.
\end{equation*}
\end{corollary}
\begin{prop}\label{ImpCond_cota_denom}
Let $p \in \mathcal{P}_N$ be fixed and let $S_N=S_N(p)$ be as in \eqref{Sum_SN}. For $M\geqslant 5$, the following inequality holds
\begin{equation*}
\log \displaystyle\prod_{\substack{p_i\in\mathcal{P}_N\\p_i\neq p}}^N |p_i-p| \geqslant S_N + \log (2\sqrt{2}M) -\frac18.
\end{equation*}
\end{prop}
\begin{proof}
We may assume, without loss of generality, that $p=q_{\ell,0}=(\sqrt{1-h_{\ell}^2},0,h_{\ell})$ is a point belonging to our set $\mathcal{P}_N$ located in the $\ell$--th parallel, with $1 \leqslant \ell \leqslant M$. As before, we denote by $q_{j,0},\ldots,q_{j,r_j-1}$ the $r_j$ equidistributed points in the $j$--th parallel. We write
\begin{align}
\nonumber\log \displaystyle\prod_{\substack{p_i\in\mathcal{P}_N\\p_i\neq p}}^N |p_i-p| =& \log \prod_{i=1}^{r_{\ell}-1}|q_{{\ell},0}-q_{\ell,i}|+
\nonumber \log\prod_{\substack{j=1 \\ j\neq \ell}}^{2M-1}\prod_{i=0}^{r_j-1}|q_{\ell,0}-q_{j,i}|\\
\geqslant & \log 2\sqrt{2}M + r_{\ell}\log x_{h_{\ell},h_{\ell}} + \sum_{\substack{j=1 \\ j \neq \ell}}^{2M-1}\log| x_{h_j,h_{\ell}}^{r_j}-y_{h_j,h_{\ell}}^{r_j}|, \label{cota_den_cond_1}
\end{align}
where we have used on the one hand that, by Lemma \ref{lem:xey},
$$
\log\prod_{\substack{j=1 \\ j\neq \ell}}^{2M-1}\prod_{i=0}^{r_j-1}|q_{\ell,0}-q_{j,i}| \geqslant \sum_{\substack{j=1 \\ j \neq \ell}}^{2M-1}\log| x_{h_j,h_{\ell}}^{r_j}-y_{h_j,h_{\ell}}^{r_j}|,
$$
with
\begin{align*}
x_{h_j,h_{\ell}}=&\sqrt{1-h_{\ell}}\sqrt{1+h_j},\\
y_{h_j,h_{\ell}}=&\sqrt{1+h_{\ell}}\sqrt{1-h_j},
\end{align*}
and on the other hand that, from Corollary \ref{prod_puntos_circunf_r}
\begin{align*}
\log \prod_{i=1}^{r_{\ell}-1}|q_{\ell,0}-q_{\ell,i}| =& \log r_{\ell} + (r_{\ell}-1)\log \sqrt{1-h_{\ell}^2}\\
=& \log \frac{r_{\ell}}{\sqrt{1-h_{\ell}^2}}+r_{\ell}\log x_{h_{\ell},h_{\ell}}\\
\geqslant & \log (2\sqrt{2}M) + r_{\ell}\log x_{h_{\ell},h_{\ell}}.
\end{align*}
For $1 \leqslant j \leqslant \ell-1$ we have $h_j > h_{\ell}$, so $x_{h_j,h_{\ell}} > y_{h_j,h_{\ell}}$. Conversely, $x_{h_j,h_{\ell}} < y_{h_j,h_{\ell}}$ for $\ell+1 \leqslant j \leqslant 2M-1$. Thus
\begin{align*}
\sum_{\substack{j=1 \\ j \neq \ell}}^{2M-1}\log| x_{h_j,h_{\ell}}^{r_j}-y_{h_j,h_{\ell}}^{r_j}|
=& \sum_{j=1}^{\ell-1}r_j\log x_{h_j,h_{\ell}}+ \sum_{j=\ell+1}^{2M-1}r_j\log y_{h_j,h_{\ell}}\\
& + \sum_{j=1}^{\ell-1}\log| 1-y_{h_j,h_{\ell}}^{r_j}/x_{h_j,h_{\ell}}^{r_j}|
+ \sum_{j=\ell+1}^{2M-1}\log| 1-x_{h_j,h_{\ell}}^{r_j}/y_{h_j,h_{\ell}}^{r_j}|,
\end{align*}
and substituting into \eqref{cota_den_cond_1} we obtain
\begin{align}
\nonumber \log \displaystyle\prod_{\substack{p_i\in\mathcal{P}_N\\p_i\neq p}}^N |p_i-p| \geqslant & \log (2\sqrt{2}M) + S_N + T_{\ell,j},\\
T_{\ell,j}& = \sum_{j=1}^{\ell-1}\log| 1-y_{h_j,h_{\ell}}^{r_j}/x_{h_j,h_{\ell}}^{r_j}|
+ \sum_{j=\ell+1}^{2M-1}\log| 1-x_{h_j,h_{\ell}}^{r_j}/y_{h_j,h_{\ell}}^{r_j}|, \label{log_cotden}
\end{align}
since by \eqref{Valor_Itildej} and \eqref{Sum_SN}
$$
\sum_{j=1}^{\ell}r_j\log x_{h_j,h_{\ell}}+ \sum_{j=\ell+1}^{2M-1}r_j\log y_{h_j,h_{\ell}} = \sum_{j=1}^{2M-1}r_j \Tilde{I}_j (q_{\ell,0}) = S_N(p).
$$
Next, we use
\begin{equation}\label{des_log_m}
\log(1-\alpha)\geqslant -\frac{16}{15}\alpha, \quad \alpha \in [0,1/16],
\end{equation}
to estimate \eqref{log_cotden}; note that for $\ell=1$ the first sum there vanishes. It is not difficult to check that \eqref{des_log_m} is applicable and yields the following bounds.
\begin{align*}
-T_{\ell,j} \leqslant &
\frac{16}{15}\displaystyle\sum_{j=1}^{\ell-1} \left(\frac{y_{h_{j},h_{\ell}}}{x_{h_{j},h_{\ell}}}\right)^{r_j} +\frac{16}{15}\displaystyle\sum_{j=\ell+1}^M\left(\frac{x_{h_{j},h_{\ell}}}{y_{h_{j},h_{\ell}}}\right)^{r_j}+\frac{16}{15}\displaystyle\sum_{j=M+1}^{2M-1}\left(\frac{x_{h_{j},h_{\ell}}}{y_{h_{j},h_{\ell}}}\right)^{r_j}\\
= & \frac{16}{15}\displaystyle\sum_{j=1}^{\ell-1}\left(\frac{(2M^2-\ell^2)j^2}{\ell^2(2M^2-j^2)}\right)^{2j} + \frac{16}{15}\displaystyle\sum_{j=\ell+1}^M
\left(\frac{\ell^2(2M^2-j^2)}{(2M^2-\ell^2)j^2} \right)^{2j}\\
& \hspace{0.5cm}+\frac{16}{15}\displaystyle\sum_{j=1}^{M-1} \left(\frac{\ell^2j^2}{(2M^2-\ell^2)(2M^2-j^2)}\right)^{2j}\\
\leqslant & \frac{16}{15}\displaystyle\sum_{j=1}^{\ell-1} \left(\frac{j}{\ell}\right)^{4j}+ \frac{16}{15}\displaystyle\sum_{j=\ell+1}^M \left(\frac{\ell}{j}\right)^{4j}+ \frac{16}{15}\displaystyle\sum_{j=1}^{M-1} \left(\frac{j}{M}\right)^{4j}\\
\leqslant & \frac1{8},
\end{align*}
where we have used lemmas \ref{L_acot_sum_alpha} and \ref{L_acot_sum_2alpha}. The proposition follows.
\end{proof}
We will use the following easy consequence in the last section:
\begin{corollary}\label{cor:D}
Let $p$ be any point of $\mathcal P_N$. Then
\[
\displaystyle\prod_{\substack{p_i\in\mathcal{P}_N\\p_i\neq p}}^N |p_i-p| \geqslant \sqrt{2N}e^{-\kappa N}e^{-9/8}.
\]
\end{corollary}
\begin{proof}
Immediate from propositions \ref{ImpCond_cota_denom} and \ref{Cociente_Integral_cond_previo}.
\end{proof}
\section{Proof of the main results}\label{sec:final}
If $M\leqslant 4$ our proof is computer assisted: we construct the polynomial $P_N$ (which has rational coefficients) and the point set (whose points are algebraic and can thus be represented exactly in a computer algebra package), which allows us to compute $\mu_{\text{norm}}$ from \eqref{eq:mu} exactly, showing that it is indeed bounded above by $N=4M^2$.
For $M\geqslant 5$, from \eqref{exp_cond_S2} and the symmetry of the construction, we have
\begin{align*}
\mu_{\text{norm}}(P)&=\frac{1}{2}\sqrt{N(N+1)}\max_{1 \leqslant i \leqslant N}\frac{\left(\int_{\mathbb{S}} \prod_{j=1}^N |p- p_j|^2 d\sigma(p)\right)^{1/2}}{\prod_{j\neq i}| p_i- p_j|}\\
&\stackrel{\text{Cor. \ref{cor:D}}}{\leqslant}\frac{1}{2}\sqrt{N(N+1)}\frac{\left(\sum_{\ell=1}^{2M-1}\int_{B_\ell} \prod_{j=1}^N |p- p_j|^2 d\sigma(p)\right)^{1/2}}{\sqrt{2N}e^{-\kappa N}e^{-9/8}}\\
&=\frac{1}{2}\sqrt{N(N+1)}\frac{\left(\int_{B_M} \prod_{j=1}^N |p- p_j|^2 d\sigma(p)+2\sum_{\ell=1}^{M-1}\int_{B_\ell} \prod_{j=1}^N |p- p_j|^2 d\sigma(p)\right)^{1/2}}{\sqrt{2N}e^{-\kappa N}e^{-9/8}}
\end{align*}
Using Corollary \ref{cor:C} and recalling that the relative area of $B_\ell$ is $r_\ell/N=\ell/M^2$, the term inside the parentheses is bounded above by
\begin{align*}
&4e^{-2\kappa N}e^{3/2}\left(\frac{1}{M}\left(\frac{e}{2}\right)^{4/M}+\frac{2}{M^{4/3}}\sum_{\ell=1}^{M-1}\ell^{1/3}\left(\frac{e}{2}\right)^{4/\ell}\right)\\
&\qquad\stackrel{\text{Lemma \ref{lem:sum3}}}{\leqslant} 4e^{-2\kappa N}e^{3/2}\left(\frac{1}{M}\left(\frac{e}{2}\right)^{4/M}+\frac32+\frac{24(1-\log 2)}{M}+\frac{6}{M^{4/3}}\right).
\end{align*}
We have thus proved:
\begin{align*}
\mu_{\text{norm}}(P)&\leqslant \sqrt{\frac{N+1}2}e^{3/4+9/8}\left(\frac{1}{M}\left(\frac{e}{2}\right)^{4/M}+\frac32+\frac{24(1-\log 2)}{M}+\frac{6}{M^{4/3}}\right)^{1/2}.
\end{align*}
This proves our Theorem \ref{th:mainintro}: the term inside the parentheses decreases with $M$, and after some arithmetic we conclude that
\[
\mu_{\text{norm}}(P)\leqslant\frac{19}{2}\sqrt{N+1},
\]
which is less than $N$ for $M\geqslant 5$. Moreover, we obtain a proof of Corollary \ref{cor:limsinf}, since in the limit $M\to\infty$ we have
\[
\mu_{\text{norm}}(P)\leqslant \frac{\sqrt{3}}{2}e^{3/4+9/8}\sqrt{N+1}.
\]
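The arithmetic behind the constant $19/2$ can be reproduced numerically; the helper name \texttt{factor} below is our own, and it evaluates the $M$-dependent coefficient $\sqrt{(N+1)/2}\,e^{3/4+9/8}(\cdots)^{1/2}/\sqrt{N+1}$ from the bound above:

```python
import math

def factor(M):
    """M-dependent coefficient of sqrt(N+1) in the final bound (our helper)."""
    term = (1 / M) * (math.e / 2) ** (4 / M) + 1.5 \
           + 24 * (1 - math.log(2)) / M + 6 / M ** (4 / 3)
    return math.sqrt(0.5) * math.exp(3 / 4 + 9 / 8) * math.sqrt(term)

vals = [factor(M) for M in range(5, 200)]
assert vals[0] <= 19 / 2                               # already below 19/2 at M = 5
assert all(x >= y for x, y in zip(vals, vals[1:]))     # and decreasing in M
```

In the limit $M\to\infty$ the parenthetical term tends to $3/2$, recovering the constant $\tfrac{\sqrt 3}{2}e^{3/4+9/8}$ of the corollary.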
| {
"timestamp": "2020-12-10T02:25:18",
"yymm": "2012",
"arxiv_id": "2012.05138",
"language": "en",
"url": "https://arxiv.org/abs/2012.05138",
"abstract": "In 1993, Shub and Smale posed the problem of finding a sequence of univariate polynomials of degree $N$ with condition number bounded above by $N$. In a previous paper by C. Beltán, U. Etayo, J. Marzo and J. Ortega-Cerdà, it was proved that the optimal value of the condition number is of the form $O(\\sqrt{N})$, and the sequence demanded by Shub and Smale was described by a closed formula (for large enough $N\\geqslant N_0$ with $N_0$ unknown) and by a search algorithm for the rest of the cases. In this paper we find concrete estimates for the constant hidden in the $O(\\sqrt{N})$ term and we describe a simple formula for a sequence of polynomials whose condition number is at most $N$, valid for all $N=4M^2$, with $M$ a positive integer.",
"subjects": "Complex Variables (math.CV)",
"title": "On the minimum value of the condition number of polynomials",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9896718496130619,
"lm_q2_score": 0.8459424314825852,
"lm_q1q2_score": 0.837205410831541
} |
https://arxiv.org/abs/1706.02606 | The Chain Group of a Forest | For every labeled forest $\mathsf{F}$ with set of vertices $[n]$ we can consider the subgroup $G$ of the symmetric group $S_n$ that is generated by all the cycles determined by all maximal paths of $\mathsf{F}$. We say that $G$ is the chain group of the forest $\mathsf{F}$. In this paper we study the relation between a forest and its chain group. In particular, we find the chain groups of the members of several families of forests. Finally, we prove that no copy of the dihedral group of cardinality $2n$ inside $S_n$ can be achieved as the chain group of any forest. | \section{Introduction} \label{sec:intro}
It is typical in Mathematics to use intrinsic information of discrete objects such as graphs, trees, and finite posets, to carry out algebraic and geometric constructions. For instance, such constructions include the fundamental group of a graph \cite[Chapter~11]{aH01}, the incidence algebra of a finite poset \cite[Chapter~3]{rS11}, and the forest polytope of a graph \cite[Chapter~50]{aS03}. In this paper we use the maximal paths of a forest $\ff$ to construct a finite group, which we call the \emph{chain group} of $\ff$. The method we use to produce the chain group of a given forest is motivated in part by the way Stanley defines a chain polytope from a locally finite poset (see \cite{rS86}).
Given a finite poset $P = \{x_1, \dots, x_n\}$, its corresponding \emph{chain polytope} $\mathcal{C}(P)$ is defined to be the set of points $(y_1, \dots, y_n) \in \rr_{\ge 0}^n$ satisfying the condition
\begin{equation} \label{eq:chain polytope condition}
y_{i_1} + \dots + y_{i_k} \le 1 \ \text{ whenever } \ x_{i_1} <_P \dots <_P x_{i_k} \ \text{ is a maximal chain of } P.
\end{equation}
In other words, the chain polytope $\mathcal{C}(P)$ is the intersection of the half-spaces determined by the maximal chains of $P$ as indicated in \eqref{eq:chain polytope condition}.
Let us see how to reuse the same method Stanley applies to build the chain polytope of a poset, to naturally associate a finite group $\mathcal{G}(\ff)$ to each forest $\ff$. Instead of taking $\rr^n$ as the universe containing the half-spaces utilized in \eqref{eq:chain polytope condition} to produce $\mathcal{C}(P)$, we can rather consider $S_n$ as the universe containing the generators of a group $\mathcal{G}(\ff)$, which is defined by
\[
\mathcal{G}(\ff) = \big\langle (i_1 \ \dots \ i_k) \in S_n \ \text{ whenever } \ i_1, \dots, i_k \ \text{ is a maximal path in} \ \ff \big\rangle.
\]
We call $\mathcal{G}(\ff)$ the \emph{chain group} of the forest $\ff$.
Chain polytopes, as introduced in \cite{rS86} by Stanley, have nice features. For example, if $\mathcal{C}(P)$ is the chain polytope of the poset $P$, then the number of vertices of $\mathcal{C}(P)$ equals the number of antichains of $P$; see \cite[Theorem~2.2]{rS86}. In addition, the volume of $\mathcal{C}(P)$ is determined by the combinatorial structure of $P$; see \cite[Corollary~4.2]{rS86}. We will see in Section~3 that the chain group of a forest has a nice behavior; for example, disjoint unions of forests become direct sums of groups (see Proposition~\ref{prop:chain group of disjoint graphs}). On the other hand, if we relabel a given forest $\ff$, the resulting forest has chain group conjugate to $\mathcal{G}(\ff)$ (see Proposition~\ref{prop:order groups of isomorphic and dual posets}).
There are a few natural questions we might ask about this assignment. How are the chain groups of two distinct labelings of the same forest related? If $G$ is the chain group of the forest $\ff$, can we determine whether $G$ satisfies certain properties only by studying $\ff$? It is our intention to answer such questions here.
In addition, we might wonder, for a fixed $n$, which subgroups of $S_n$ arise as the chain group of some forest labeled by $[n]$. Given that every finite group is a subgroup of $S_n$ for $n$ large enough, this is not a question that we expect to answer in its full generality. However, we might hope to decide whether relatively simple families of subgroups of $S_n$ can be realized as chain groups of some $n$-forest. For example, is the alternating group $A_n$ the chain group of an $n$-forest for every $n \in \nn$? This question, along with other similar ones, will be answered later in the sequel.
This paper is structured as follows. In Section~\ref{sec:Background and Notation} we review the definitions on graph theory we will be using later. Then, in Section~\ref{sec:general observations}, we prove that passing from a forest to its chain group behaves well with respect to relabeling and turns disjoint unions into direct products. In Section~\ref{sec:abelian case} we study the abelian chain groups. In Section~\ref{sec:the chain group of a tree} we compute the chain groups of members of several families of trees. We also provide some results useful to find the chain groups of some forests. Finally, in Section~\ref{sec:missing chain groups}, we show that the dihedral group $D_{2n}$ cannot be achieved as the chain group of any forest.
\section{Background and Notation} \label{sec:Background and Notation}
In this section, we fix notation and briefly recall the definitions of the main objects related to those being studied here. We also state the relevant properties of such objects necessary to follow the present paper. For background material in group theory, symmetric groups, and graph theory we refer the reader to Rotman \cite{jR94}, Sagan \cite{bS01}, and Bondy and Murty \cite{BM08}, respectively.
The double-struck symbols $\mathbb{N}$ and $\mathbb{N}_0$ denote the sets of positive integers and non-negative integers, respectively. For $n \in \nn$, we denote the set $\{1,\dots, n\}$ just by $[n]$. Following the standard notation of group theory, we let $S_n$ and $A_n$ denote the symmetric and the alternating group on $n$ letters, respectively. In addition, the dihedral group of order $2n$ is denoted by $D_{2n}$.
To settle our nomenclature, let us recall some basic definitions concerning graphs. A \emph{graph} is a pair $\mathsf{G} = (V,E)$, where $V$ is a finite set and $E$ is a collection of $2$-element subsets of $V$. The elements of $V$ are called \emph{vertices} of $\mathsf{G}$ while the elements of $E$ are called \emph{edges} of $\mathsf{G}$. It is often convenient to denote the set of vertices and the set of edges of $\mathsf{G}$ by $V(\mathsf{G})$ and $E(\mathsf{G})$, respectively. The \emph{degree} of a vertex $v$, denoted by $\deg(v)$, is the number of edges containing it. We say that a vertex is a \emph{leaf} if it has degree one. An edge $\{v,w\}$ is also denoted by $vw$. Distinct vertices $v$ and $w$ of $V$ are called \emph{adjacent} if $vw \in E$. In the context of this paper, a \emph{walk} $\omega$ in $\mathsf{G}$ is a sequence of vertices, say $v_0, \dots, v_\ell$, such that $v_{i-1}$ is adjacent to $v_i$ for each $i = 1, \dots, \ell$. If $v_\ell = v_0$, then the walk $\omega$ is said to be \emph{closed}. If, in addition, $v_i = v_j$ implies that $i = j$ or $\{i,j\} = \{0,\ell\}$, then $\omega$ is called a \emph{path}; in this case we say that the \emph{length} of $\omega$ is $\ell$. A path of $\mathsf{G}$ is \emph{maximal} if it is not strictly contained in another path. A closed path of length at least three is called a \emph{cycle}. A graph is said to be \emph{connected} if any two distinct vertices can be connected by a path. Every graph $G$ is a finite disjoint union of connected graphs, which are called the \emph{connected components} of $G$. On the other hand, a graph is called \emph{acyclic} provided it does not contain any cycle.
\begin{definition}
An acyclic connected graph is called a \emph{tree}. A finite disjoint union of trees is said to be a \emph{forest}.
\end{definition}
\begin{example}
The next figure illustrates a graph $\mathsf{G}$ having four connected components. The leftmost component is a \emph{chain}, the second component is a cycle, the third component is a star, and the fourth component is a tree. Notice that $\mathsf{G}$ is not a forest.
\begin{figure}[h]
\centering
\includegraphics[width = 10.0cm]{Math.png}
\caption{A graph with four connected components.}
\label{fig:a graph with four connected components}
\end{figure}
\end{example}
A \emph{labeled forest} is a forest $\mathsf{F}$ whose vertices are labeled by the set $\{1,\dots, |V(\mathsf{F})|\}$. All the forests we will be interested in throughout this paper are labeled.
\section{General Observations} \label{sec:general observations}
In this section we formally define the chain group of a forest and explore some general facts connecting them. We also present some examples to illustrate the connection.
\begin{definition}
For $n \in \nn$ let $\ff$ be a labeled forest with $n$ vertices. The \emph{chain group} of $\ff$, which we denote by $G_{\ff}$, is the subgroup of $S_n$ generated by all cycles $(i_1 \ \dots \ i_m)$ such that $i_1, \dots, i_m$ is a maximal path in $\ff$.
\end{definition}
\begin{example}
Figure~\ref{fig:chain groups of three forests} shows three forests $\ff_1$, $\ff_2$, and $\ff_3$.
\begin{figure}[h]
\centering
\includegraphics[width = 12.0cm]{ThreeForests.png}
\caption{Three labeled forests with their respective chain groups.}
\label{fig:chain groups of three forests}
\end{figure}
The forest $\ff_1$ consists of only two disjoint maximal paths, namely $(1,2,3)$ and $(4,5)$; therefore $G_{\ff_1} = \langle (1 \ 2 \ 3), (4 \ 5) \rangle \cong \zz_3 \times \zz_2$ (cf. Proposition~\ref{prop:chain group of disjoint graphs} below).
On the other hand, $\mathsf{F}_2$ has exactly three maximal paths, namely $(1,2,3,4)$, $(1,2,3,5)$, and $(4,3,5)$. As $(1 \ 2 \ 3 \ 4) \circ (1 \ 2 \ 3 \ 5) = (1 \ 3 \ 5 \ 2 \ 4)$, the chain group $G_{\mathsf{F}_2}$ contains a $5$-cycle. Moreover, since $(3 \ 4 \ 5) = (4 \ 3 \ 5)^{-1} \in G_{\mathsf{F}_2}$ and $(3 \ 4 \ 5) \circ (1 \ 2 \ 3 \ 4) = (1 \ 2 \ 4) (3 \ 5)$, taking the cube of this last element shows that $G_{\mathsf{F}_2}$ also contains the transposition $(3 \ 5)$. Hence $S_5 = \langle (1 \ 3 \ 5 \ 2 \ 4), (3 \ 5) \rangle \le G_{\mathsf{F}_2}$, and so $G_{\mathsf{F}_2} = S_5$.
Finally, $\ff_3$ has $\binom{5}{2}$ maximal paths. The chain group of $\ff_3$ is generated by the $3$-cycles $(1 \ a \ b)$ for all $a,b \in \{2,3,4,5,6\}$ with $a \neq b$. These $3$-cycles are enough to generate the whole alternating group (see the proof of Theorem~\ref{thm:chain group of the star graph} for more details). Hence $G_{\mathsf{F}_3} = A_6$.
\end{example}
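The group computations in this example can be double-checked by brute force; the following Python sketch (the helper names \texttt{cycle} and \texttt{generate} are our own) builds each chain group by closure under composition and verifies the claimed orders:

```python
def cycle(points, n):
    """Permutation of {0,...,n-1} (as a tuple) for the cycle through the
    1-based labels in `points`."""
    p = list(range(n))
    for a, b in zip(points, points[1:] + points[:1]):
        p[a - 1] = b - 1
    return tuple(p)

def generate(gens, n):
    """Subgroup of S_n generated by `gens`, by breadth-first closure."""
    group, frontier = set(gens), set(gens)
    while frontier:
        new = {tuple(g[h[i]] for i in range(n)) for g in frontier for h in gens}
        new -= group
        group |= new
        frontier = new
    return group

# F_2: maximal paths (1,2,3,4), (1,2,3,5) and (4,3,5); chain group S_5.
G2 = generate({cycle([1, 2, 3, 4], 5), cycle([1, 2, 3, 5], 5),
               cycle([4, 3, 5], 5)}, 5)
assert len(G2) == 120       # |S_5|

# F_3: star with center labeled 1 (our reading of the figure) and leaves 2..6;
# the paths a,1,b give 3-cycles, and the chain group is A_6.
G3 = generate({cycle([a, 1, b], 6)
               for a in range(2, 7) for b in range(2, 7) if a != b}, 6)
assert len(G3) == 360       # |A_6|
```

Since finite groups are closed under powers, the closure over products of generators automatically contains inverses and the identity.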
Note that $S_n$ acts on the set of labeled forests having exactly $n$ vertices by relabeling their vertices. We show now that this action conjugates the chain groups.
\begin{proposition} \label{prop:order groups of isomorphic and dual posets}
If $\ff$ and $\ff'$ are forests with $n$ vertices that are relabelings of each other, then their chain groups are conjugate in $S_n$.
\end{proposition}
\begin{proof}
Let $G$ and $G'$ denote the chain groups of $\ff$ and $\ff'$, respectively. Let $\pi \colon \ff \to \ff'$ be a graph isomorphism. In particular, we can interpret $\pi$ as an element in $S_n$. Consider the map $\varphi \colon G \to G'$ defined by $\varphi(\sigma) = \pi \sigma \pi^{-1}$. For every maximal path $i_1, \dots, i_m$ in $\ff$, we have that $\pi(i_1), \dots, \pi(i_m)$ is a maximal path in $\ff'$ and, therefore,
\[
\varphi((i_1 \ \dots \ i_m)) = \pi (i_1 \ \dots \ i_m) \pi^{-1} = (\pi(i_1) \ \dots \ \pi(i_m))
\]
is a generator of $G'$. So the map $\varphi$ is well defined. It follows immediately that $\varphi$ is a group homomorphism. Now we can define $\psi \colon G' \to G$ by $\psi(\sigma) = \pi^{-1} \sigma \pi$, and similarly verify that it is a well-defined group homomorphism. Since $\varphi$ and $\psi$ are inverses of each other, $\varphi$ is an isomorphism.
\end{proof}
Proposition~\ref{prop:order groups of isomorphic and dual posets} gives us the freedom to talk about the chain group of a not-necessarily-labeled forest, as long as we are not interested in the specific subgroup of the symmetric group we are dealing with but only in its isomorphism class.
Let us verify now that the chain group of a forest is the direct product of the chain groups of the trees of the given forest.
\begin{proposition} \label{prop:chain group of disjoint graphs}
If $\ff$ is a forest which is the disjoint union of the trees $\sft_1, \dots, \sft_m$, then $G_{\ff} \cong G_{\sft_1} \times \dots \times G_{\sft_m}$.
\end{proposition}
\begin{proof}
Let $n = |\ff|$. It suffices to assume that $m=2$. Let $\sigma_1, \dots, \sigma_r$ be the generating cycles induced by the maximal paths of $\sft_1$, and let $\rho_1, \dots, \rho_s$ be the generating cycles induced by the maximal paths of $\sft_2$. As $\sigma_i$ and $\rho_j$ are disjoint cycles in $S_n$ for each pair $(i,j) \in [r] \times [s]$, we can write every element of $G_{\ff}$ as $\sigma \rho$ for some $\sigma \in G_{\sft_1}$ and $\rho \in G_{\sft_2}$. Now it immediately follows that the assignment $(\sigma, \rho) \mapsto \sigma \rho$ is, indeed, an isomorphism from $G_{\sft_1} \times G_{\sft_2}$ to $G_{\ff}$.
\end{proof}
\section{Abelian Chain Groups associated to $n$-Forests} \label{sec:abelian case}
In this section we characterize the forests whose chain groups are abelian. In addition, we determine those abelian groups that show up as chain groups of some forest.
\begin{example} \label{ex:the cyclic group of order n is always a chain group}
Let $\sft$ be an $n$-tree with at most two leaves. Then there is only one maximal path, namely $\sigma(1), \dots, \sigma(n)$ for some bijection $\sigma \colon [n] \to [n]$. Thus, the chain group associated to $\sft$ is $G = \langle (\sigma(1) \ \dots \ \sigma(n)) \rangle \cong \zz_n$.
\end{example}
More generally, we have the following result.
\begin{proposition}
Let $n$ be a natural number, and let $\ff$ be an $n$-forest. Then the chain group of $\ff$ is abelian if and only if $\ff$ is a disjoint union of paths.
\end{proposition}
\begin{proof}
Example~\ref{ex:the cyclic group of order n is always a chain group}, along with Proposition~\ref{prop:chain group of disjoint graphs} in the previous section, immediately implies that if $\ff$ is the disjoint union of $k$ chains of lengths $n_1, \dots, n_k$, then $G_{\ff} \cong \zz_{n_1} \times \dots \times \zz_{n_k}$. In particular, $G_{\ff}$ is abelian. To prove the direct implication, suppose by contradiction that there are two distinct maximal paths $(i_1, \dots, i_r)$ and $(j_1, \dots, j_s)$ that are not disjoint. Set $\sigma = (i_1 \ \dots \ i_r)$ and $\tau = (j_1 \ \dots \ j_s)$. As $\ff$ has no cycles, $\{i_1, i_r\} \neq \{j_1, j_s\}$, and since a leaf of $\ff$ cannot be an interior vertex of a maximal path, after possibly reversing the orientation of $(i_1, \dots, i_r)$ (which replaces $\sigma$ by $\sigma^{-1}$ and does not affect commutativity) we may assume that $i_r \notin \{j_1, \dots, j_s\}$. Because the two paths intersect, there exists an index $1 \le p < r$ such that $i_p \in \{j_1, \dots, j_s\}$ and $i_{p+1} \notin \{j_1, \dots, j_s\}$; write $i_p = j_q$. In this case $(\sigma^{-1} \circ \tau \circ \sigma)(i_p) = i_p \neq j_{q+1} = \tau(i_p)$. Hence $\sigma$ and $\tau$ do not commute, contradicting the fact that $G_{\ff}$ is abelian.
\end{proof}
For $n \in \nn$, we study which abelian groups are chain groups associated to $n$-forests.
\begin{proposition} \label{prop:elementary abelian groups that are chain groups}
The elementary abelian group $(\zz/p\zz)^r$, where $p$ is prime and $r$ is a natural number, is the chain group associated to an $n$-forest if and only if $rp \le n$.
\end{proposition}
\begin{proof}
For the direct implication, suppose that $(\zz/p\zz)^r$ is the chain group of an $n$-forest. This implies that $S_n$ contains a copy $G$ of $(\zz/p\zz)^r$. Consider the set $S$ of $p$-cycles appearing in the disjoint-cycle decompositions of the elements of $G$. Take a maximal subset $\{\sigma_1, \dots, \sigma_s\}$ of $S$ such that no element is a power of another one. As $G$ is abelian, the $\sigma_i$'s are pairwise disjoint. In addition, $G' = \langle \sigma_1, \dots, \sigma_s \rangle$ is isomorphic to $(\zz/p\zz)^s$ and contains $G$, which yields that $r \le s$. As the $s$ $p$-cycles are disjoint, one finds that $rp \le sp \le n$.
Suppose, on the other hand, that $rp \le n$. Consider the forest $\ff$ having $r + n - rp$ connected components, $r$ of them being path graphs on $p$ vertices and $n - rp$ of them being $1$-vertex trees. The chain group $G$ of $\ff$ is generated then by $r$ disjoint $p$-cycles. Hence $G$ is a subgroup of $S_n$ isomorphic to the elementary abelian group $(\zz/p\zz)^r$, which completes the proof.
\end{proof}
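The construction in the converse direction can be illustrated on a small instance; the sketch below (helper names \texttt{cyc} and \texttt{generate} are our own) takes the forest on $6$ vertices made of two disjoint $3$-vertex paths and checks that its chain group is $(\zz/3\zz)^2$:

```python
def cyc(points, n):
    """Cycle through the 1-based labels in `points`, as a tuple on {0,...,n-1}."""
    p = list(range(n))
    for a, b in zip(points, points[1:] + points[:1]):
        p[a - 1] = b - 1
    return tuple(p)

def generate(gens, n):
    """Subgroup of S_n generated by `gens`, by breadth-first closure."""
    group, frontier = set(gens), set(gens)
    while frontier:
        new = {tuple(g[h[i]] for i in range(n)) for g in frontier for h in gens}
        new -= group
        group |= new
        frontier = new
    return group

# Two disjoint 3-vertex paths in a 6-vertex forest: chain group (Z/3Z)^2.
G = generate({cyc([1, 2, 3], 6), cyc([4, 5, 6], 6)}, 6)
assert len(G) == 9
mul = lambda g, h: tuple(g[h[i]] for i in range(6))
assert all(mul(g, h) == mul(h, g) for g in G for h in G)   # abelian
```

This is the case $p = 3$, $r = 2$, $n = 6$ of the proposition, with no isolated vertices needed since $rp = n$.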
Not every abelian subgroup of $S_n$ can be reached as the chain group of an $n$-forest. However, every abelian subgroup of maximal order is achieved, up to isomorphism, as we shall prove in Theorem~\ref{thm:abelian group of maximum order that are chain groups}. The following theorem describes the abelian subgroups of $S_n$ of maximum order.
\begin{theorem}\cite[Theorem 1]{BG89}
Let $G$ be an abelian subgroup of maximal order of the symmetric
group $S_n$. Then
\begin{enumerate}
\item $G \cong (\zz/3\zz)^k$ if $n = 3k$;
\item $G \cong \zz/2\zz \times (\zz/3\zz)^k$ if $n = 3k+2$;
\item either $G \cong \zz/4\zz \times (\zz/3\zz)^{k-1}$ or $G \cong (\zz/2\zz)^2 \times (\zz/3\zz)^{k-1}$ if $n = 3k+1$.
\end{enumerate}
\end{theorem}
We can use \cite[Theorem 1]{BG89} to argue the following theorem.
\begin{theorem} \label{thm:abelian group of maximum order that are chain groups}
Every maximum order abelian subgroup of $S_n$ is, up to isomorphism, the chain group of an $n$-forest.
\end{theorem}
\begin{proof}
Suppose first that $n=3k$. By \cite[Theorem 1]{BG89}, any maximum order abelian subgroup $G$ of $S_n$ is a copy of $(\zz/3\zz)^k$. It follows by Proposition~\ref{prop:elementary abelian groups that are chain groups} that $G$ is, up to isomorphism, the chain group of an $n$-forest.
Assume now that $n=3k+2$. Consider the $n$-forest $\ff$ consisting of the following $k+1$ connected components: $k$ $3$-vertex paths and one $2$-vertex path. The chain group associated to $\ff$ is isomorphic to $\zz/2\zz \times (\zz/3\zz)^k$, which is a maximum order abelian subgroup of $S_n$ by \cite[Theorem 1]{BG89}.
Lastly, assume that $n = 3k+1$. Then consider the $n$-forest $\ff_1$ having as connected components $k-1$ three-vertex paths and one $4$-vertex path, and also consider the $n$-forest $\ff_2$ having as connected components $k-1$ three-vertex paths and two $2$-vertex paths. Notice that the chain groups of $\ff_1$ and $\ff_2$ are isomorphic to $\zz/4\zz \times (\zz/3\zz)^{k-1}$ and $(\zz/2\zz)^2 \times (\zz/3\zz)^{k-1}$, respectively. Both groups have maximum order by \cite[Theorem 1]{BG89}, and the result follows.
\end{proof}
\section{Chain Groups of some Trees} \label{sec:the chain group of a tree}
In this section, we will only consider trees. The simplest family of trees consists of \emph{chains} (i.e., trees containing exactly one maximal path), and chain groups of chains are cyclic. Another very simple family consists of the trees all of whose vertices except one have degree $1$, that is, the star graphs (see the third graph in Figure~\ref{fig:chain groups of three forests}). It turns out that the chain group of such a tree is always $A_n$, as the next theorem indicates.
\begin{theorem} \label{thm:chain group of the star graph}
For every $n \in \nn_{\ge 3}$, there exists a labeled tree with $n$ vertices whose chain group is $A_n$.
\end{theorem}
\begin{proof}
When $n = 3$, the alternating group $A_n$ is isomorphic to $\zz_3$, and it is enough to take $\sft$ to be the only tree on $3$ vertices. Assume that $n \ge 4$. From the fact that every $3$-cycle $(i \ j \ k)$ in $S_n$ not containing $1$ satisfies $(i \ j \ k) = (1 \ i \ j)(1 \ j \ k)$ and the fact that every $3$-cycle $(1 \ i \ j)$ in $S_n$ not containing $2$ satisfies $(1 \ i \ j) = (1 \ 2 \ j)^2(1 \ 2 \ i)(1 \ 2 \ j)$, we can immediately deduce that $A_n$ is generated by the $3$-cycles of the form $(1 \ 2 \ i)$ for $i \in [n] \setminus \{1,2\}$. Now we just need to take $\mathsf{T}$ to be the star graph $K_{1,n-1}$ to have that $G_{\sft} = A_n$ (see, for an illustration, the rightmost forest in Figure~\ref{fig:chain groups of three forests}).
\end{proof}
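The two cycle identities used in the proof can be verified mechanically; the following sketch (our own helpers \texttt{three\_cycle} and \texttt{compose}, with composition read right to left) checks them for one choice of indices in $S_6$:

```python
def three_cycle(a, b, c, n=6):
    """The 3-cycle (a b c) on 1-based labels, as a tuple on {0,...,n-1}."""
    p = list(range(n))
    p[a - 1], p[b - 1], p[c - 1] = b - 1, c - 1, a - 1
    return tuple(p)

def compose(*perms):
    """Right-to-left composition: compose(f, g)(x) = f(g(x))."""
    out = tuple(range(len(perms[0])))
    for g in reversed(perms):
        out = tuple(g[out[i]] for i in range(len(out)))
    return out

i, j, k = 3, 4, 5
# (i j k) = (1 i j)(1 j k)
assert three_cycle(i, j, k) == compose(three_cycle(1, i, j), three_cycle(1, j, k))
# (1 i j) = (1 2 j)^2 (1 2 i)(1 2 j)
assert three_cycle(1, i, j) == compose(three_cycle(1, 2, j), three_cycle(1, 2, j),
                                       three_cycle(1, 2, i), three_cycle(1, 2, j))
```

The same assertions hold for any distinct $i, j, k \ge 3$, which is what the proof relies on.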
Now we turn to a large family of trees, each member $\sft$ of which has chain group $S_n$, where $n = |V(\sft)|$. First, let us introduce the following definition.
\begin{definition}
We say that a tree is an \emph{antenna} if it has exactly one vertex of degree three and exactly one maximal path of length two.
\end{definition}
It is not hard to verify that an antenna must look like $\ff_2$ in Figure~\ref{fig:chain groups of three forests}, with the vertical path possibly prolonged upward. In particular, an antenna has exactly three maximal paths.
\begin{proposition} \label{prop:chain group of an antenna}
Let $\sft$ be a labeled antenna with an odd number $n$ of vertices. Then the chain group of $\sft$ is $S_n$.
\end{proposition}
\begin{proof}
By Proposition~\ref{prop:order groups of isomorphic and dual posets}, we can relabel $\sft$ if necessary so that its labels look like the one in Figure~\ref{fig:horizontal antenna}. Let $G_{\sft}$ be the chain group of $\sft$.
\begin{figure}[h]
\centering
\includegraphics[width = 6cm]{HorizontalAntenna.png}
\caption{}
\label{fig:horizontal antenna}
\end{figure}
The maximal paths of $\sft$ are: the path $1,3,2$ with corresponding generator $\sigma = (1 \ 3 \ 2)$; the path $1,3,4, \dots, n$ with corresponding generator $\sigma_1 = (1 \ 3 \ 4 \ \dots \ n)$; and the path $2,3, \dots, n$ with corresponding generator $\sigma_2 = (2 \ 3 \ \dots \ n)$. Notice that $\sigma_1 \circ \sigma_2$ is a cycle of length $n$. In addition, the disjoint cycle decomposition of $\sigma \circ \sigma_2^{-1}$ consists of exactly one cycle of length two and one cycle of length $n-2$; since $n-2$ is odd, $(\sigma \circ \sigma_2^{-1})^{n-2}$ is a transposition. A direct computation shows that the two entries swapped by this transposition lie at distance coprime to $n$ along the full cycle $\sigma_1 \circ \sigma_2$, so these two elements generate $S_n$, and hence $G_{\sft} = S_n$.
\end{proof}
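Both facts used in the proof can be checked computationally for small odd $n$; the sketch below (helper names are ours, composition read right to left, antenna labeled as described in the proof) verifies that $\sigma_1\circ\sigma_2$ is an $n$-cycle and that $(\sigma\circ\sigma_2^{-1})^{n-2}$ is a transposition:

```python
def cyc(points, n):
    """Cycle through the 1-based labels in `points`, as a tuple on {0,...,n-1}."""
    p = list(range(n))
    for a, b in zip(points, points[1:] + points[:1]):
        p[a - 1] = b - 1
    return tuple(p)

def mul(g, h):
    """Right-to-left composition: apply h first, then g."""
    return tuple(g[x] for x in h)

def inv(g):
    p = [0] * len(g)
    for i, x in enumerate(g):
        p[x] = i
    return tuple(p)

def is_full_cycle(p):
    orbit, x = set(), 0
    while x not in orbit:
        orbit.add(x)
        x = p[x]
    return len(orbit) == len(p)

for n in (5, 7, 9):                         # odd antenna sizes
    sigma = cyc([1, 3, 2], n)
    sigma1 = cyc([1, 3] + list(range(4, n + 1)), n)
    sigma2 = cyc(list(range(2, n + 1)), n)
    assert is_full_cycle(mul(sigma1, sigma2))           # a full n-cycle
    g = mul(sigma, inv(sigma2))
    p = tuple(range(n))
    for _ in range(n - 2):
        p = mul(g, p)                                   # g^(n-2)
    assert sum(1 for i in range(n) if p[i] != i) == 2   # a transposition
```

Running the same loop with even $n$ shows the transposition step fails, consistent with the parity hypothesis of the proposition.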
Proposition~\ref{prop:chain group of an antenna} says in particular that for every odd $n \ge 5$ there is a tree whose chain group is $S_n$. In addition, we can use this proposition to find the chain groups of more complex forests. Before explaining how to do this, let us introduce the following definition.
\begin{definition}
Let $G$ be a graph, and let $G'$ be a subgraph of $G$. We say that $G'$ is an \emph{extended subgraph} of $G$ if every leaf of $G'$ is also a leaf of $G$.
\end{definition}
\emph{Extended subtrees} and \emph{extended subforests} are defined in a similar fashion. Notice that a connected component of a graph is always an extended subgraph. The next figure depicts a tree and two of its subgraphs (which happen to be forests), only one of which is extended.
\begin{figure}[h]
\centering
\includegraphics[width = 2.5cm]{ExtendedForest1.png} \hspace{2cm}
\includegraphics[width = 2.5cm]{ExtendedForest2.png} \hspace{2cm}
\includegraphics[width = 2.5cm]{ExtendedForest3.png}
\caption{A tree and two of its subforests; only the subforest in the center is extended.}
\label{fig:extended subforest}
\end{figure}
It follows immediately that if $\ff$ is a forest and $\ff'$ is an extended subforest of $\ff$, then the chain group of $\ff'$ is a subgroup of the chain group of $\ff$. Using this observation and Proposition~\ref{prop:chain group of an antenna}, it is not hard to argue the following result.
\begin{proposition} \label{prop:trees with full chain group}
Let $\sft$ be a tree with $n$ vertices and a maximal path $\alpha$ of length two. If the distance from any of the two leaves in $\alpha$ to any leaf that is not in $\alpha$ is odd, then the chain group of $\sft$ is $S_n$.
\end{proposition}
\begin{proof}
Left to the reader.
\end{proof}
Proposition~\ref{prop:trees with full chain group}, along with Proposition~\ref{prop:chain group of disjoint graphs}, allows us to easily determine the chain groups of relatively complex forests. For example, the chain group of the forest illustrated in Figure~\ref{fig:final forest} is $S_{12} \times S_{12} \times S_{17}$.
\begin{figure}[h]
\centering
\includegraphics[width = 2.5cm]{FinalTree2.png} \hspace{1cm}
\includegraphics[width = 4.5cm]{FinalTree1.png} \hspace{1cm}
\includegraphics[width = 2.5cm]{FinalTree2.png}
\caption{A forest with 41 vertices.}
\label{fig:final forest}
\end{figure}
We close this section by providing a sufficient condition for the chain group of an antenna-like tree to contain a full cycle.
\begin{proposition}
Let $\sft$ be a labeled tree with $n$ vertices having exactly one vertex $v$ of degree three and the rest of its vertices of degree at most two. If $\mathsf{d}(v,w)$ is even for some leaf $w$, then $G_{\sft}$ has a cycle of length $n$.
\end{proposition}
\begin{proof}
Note first that $\sft$ has exactly three leaves, say $w_1, w_2,$ and $w_3$. Suppose, without loss of generality, that $\mathsf{d}(v,w_1)$ is even. Let $w_1 = v_1, v_2, \dots, v_{2t+1} = v$ be the path from $w_1$ to $v$, let $v_{2t+2}, \dots, v_r = w_2$ be the continuation of this path toward $w_2$, and let $v'_1, \dots, v'_s = w_3$ be the continuation of $v$ toward $w_3$. Then notice that
\[
(v_1 \ \dots \ v_r) \circ (v_1 \ \dots \ v_{2t+1} \ v'_1 \ \dots \ v'_s) = (v_1 \ v_3 \ \dots \ v_{2t+1} \ v'_1 \ \dots \ v'_s \ v_2 \ v_4 \ \dots \ v_{2t} \ v_{2t+2} \ \dots v_r)
\]
is a cycle of length $n$.
\end{proof}
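The product computed in the proof can be tested on a concrete tree; the sketch below (labels are our own choice) takes the $7$-vertex tree given by the path $1\!-\!2\!-\!3\!-\!4\!-\!5$ with the extra branch $3\!-\!6\!-\!7$, so $v = 3$, and checks that the composition of the two path cycles is a single $7$-cycle:

```python
def cyc(points, n):
    """Cycle through the 1-based labels in `points`, as a tuple on {0,...,n-1}."""
    p = list(range(n))
    for a, b in zip(points, points[1:] + points[:1]):
        p[a - 1] = b - 1
    return tuple(p)

n = 7
g = cyc([1, 2, 3, 4, 5], n)     # maximal path from leaf 1 to leaf 5
h = cyc([1, 2, 3, 6, 7], n)     # maximal path from leaf 1 to leaf 7
prod = tuple(g[h[i]] for i in range(n))   # apply h first, then g
orbit, x = set(), 0
while x not in orbit:
    orbit.add(x)
    x = prod[x]
assert len(orbit) == n          # the product is a single n-cycle
```

Tracing the orbit reproduces exactly the odd-then-branch-then-even pattern of the displayed cycle.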
\section{The Dihedral is Missing} \label{sec:missing chain groups}
The symmetric group $S_n$ contains many copies of the dihedral group $D_{2n}$. However, none of these copies is the chain group of any labeled forest with $n$ vertices.
\begin{lemma} \label{lem:two vertices greater than two}
Let $\sft$ be a tree with at least two vertices whose degree is strictly greater than $2$. Then $\sft$ contains a maximal chain $C$ such that $|\mathsf{T} \! \setminus \! C| \ge 3$.
\end{lemma}
\begin{proof}
Let $v$ and $w$ be two distinct vertices of $\sft$ with degree strictly greater than $2$. Let $\rho$ be the unique path in $\sft$ from $v$ to $w$. As $\deg(v) \ge 3$, there exist two paths $\nu_1$ and $\nu_2$, maximal among those starting at $v$, such that $\nu_1 \cap \nu_2 = \nu_1 \cap \rho = \nu_2 \cap \rho = \{v\}$. Similarly, there are two paths $\omega_1$ and $\omega_2$, maximal among those starting at $w$, satisfying $\omega_1 \cap \omega_2 = \omega_1 \cap \rho = \omega_2 \cap \rho = \{w\}$. Let $v_1,v_2,w_1,w_2$ be the leaves contained in $\nu_1, \nu_2, \omega_1, \omega_2$, respectively. Now take $C$ to be the unique maximal chain from $v_1$ to $v_2$. Because $C$ does not contain any vertex in $\{w,w_1,w_2\}$, the lemma follows.
\end{proof}
\begin{theorem}
For every $n \in \nn$, the dihedral group $D_{2n}$ is not a chain group of any labeled forest with $n$ vertices.
\end{theorem}
\begin{proof}
The dihedral group $D_2 \cong \zz_2$ cannot be the chain group of the one-vertex forest, whose chain group is trivial. In addition, the possible chain groups of a $2$-forest are the trivial group and $\zz_2$, while $D_4 \cong V_4$ has order four; therefore the theorem is also true in the case $n=2$. Let $n \ge 3$ and assume, by way of contradiction, that $\ff$ is an $n$-forest whose chain group is the dihedral group $D_{2n}$.
First, let us consider the case in which $\ff$ is disconnected. Since the action of $D_{2n}$ on $[n]$ does not fix any point, $\ff$ cannot have trivial connected components (i.e., isolated vertices). If $\ff$ had a connected component $C$ with at least three vertices, then any element of $D_{2n}$ associated to a maximal path of a component $C' \neq C$ would fix at least three elements of $[n]$, namely the vertices of $C$, which is impossible because every nontrivial element of $D_{2n}$ fixes at most two elements of $[n]$. Therefore every connected component of $\ff$ contains exactly two vertices. But then $\ff$ is a disjoint union of paths, so its chain group is abelian, contradicting the fact that $D_{2n}$ is not abelian for $n \ge 3$. Hence $\ff$ cannot be disconnected.
Now let $\ff$ be a tree with $n$ vertices whose associated chain group is $D_{2n}$. Since $D_{2n}$ is not abelian, $\ff$ is not a path graph. If $\ff$ contains two vertices of degree strictly greater than $2$, then Lemma~\ref{lem:two vertices greater than two} guarantees the existence of a maximal chain $C$ such that $\ff \! \setminus \! C$ contains at least three vertices. Thus, the generator of $D_{2n}$ associated to the chain $C$ would fix at least three elements of $[n]$, which is impossible. Hence $\ff$ contains at most one vertex $v$ with $\deg(v) > 2$.
Suppose first that $\deg(v) \ge 4$. If $\deg(v) \ge 5$, then it is not hard to see that for every maximal chain $C$ of $\ff$ containing $v$ one has $|\ff \setminus C| \ge 3$, which would imply that the generator of $D_{2n}$ associated to $C$ fixes at least $3$ elements of $[n]$, a contradiction. Thus, assume $\deg(v) = 4$. If $|\ff| > 5$, then it follows as before that there are at least $3$ vertices in the complement of any maximal chain of $\ff$ having minimum size among those containing $v$. On the other hand, $|\ff| = 5$ implies that $\ff$ is isomorphic to $K_{1,4}$. By Theorem~\ref{thm:chain group of the star graph}, the chain group of $K_{1,4}$ is $A_5$, which is not isomorphic to $D_{10}$ (for instance, $|A_5| = 60 > 10 = |D_{10}|$), a contradiction.
Finally, suppose that $\deg(v) = 3$. To handle this case, let $C$ be a maximal chain of $\ff$ with maximum cardinality among those containing $v$, and let $\rho$ be the generator of the copy of $D_{2n}$ in $S_n$ induced by $C$. Being a $|C|$-cycle, $\rho$ has order $|C| < n$. On the other hand, the maximality of $|C|$ implies that $|C| > n/2$: the chain $C$ consists of $v$ together with the two largest of the three branches at $v$, hence $|C| \ge 1 + \frac{2(n-1)}{3} > n/2$. But every element of $D_{2n}$ has order that either divides $n$ or equals $2$, and no such order lies strictly between $n/2$ and $n$, so we obtain a contradiction. The theorem now follows.
\end{proof}
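The order comparison invoked above ($|A_5| = 60 \neq 10 = |D_{10}|$) is easy to confirm by brute force. The following sketch (Python, not from the paper; it assumes, as in the definition of the chain group, that a maximal path $v_1\!-\!\cdots\!-\!v_m$ determines the $m$-cycle $(v_1\,\cdots\,v_m)$) computes the chain group of $K_{1,4}$ by closing its generators under composition:

```python
from itertools import combinations

def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations are stored as tuples over range(n)
    return tuple(p[q[i]] for i in range(len(p)))

def closure(gens):
    # BFS closure under composition; a finite set of permutations closed
    # under composition is automatically a group
    group = set(gens)
    frontier = list(gens)
    while frontier:
        new = []
        for g in frontier:
            for h in list(group):
                for prod in (compose(g, h), compose(h, g)):
                    if prod not in group:
                        group.add(prod)
                        new.append(prod)
        frontier = new
    return group

# K_{1,4}: center 0, leaves 1..4.  Maximal paths are i-0-j for leaves i < j,
# each inducing the 3-cycle (i 0 j) on the vertex set {0,...,4}.
n = 5
gens = []
for i, j in combinations(range(1, n), 2):
    p = list(range(n))
    p[i], p[0], p[j] = 0, j, i   # i -> 0, 0 -> j, j -> i
    gens.append(tuple(p))

G = closure(gens)
print(len(G))  # 60 = |A_5|
```

Since $A_5$ is the unique subgroup of $S_5$ of order $60$, the order check alone identifies the group.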
| {
"timestamp": "2017-06-09T02:06:16",
"yymm": "1706",
"arxiv_id": "1706.02606",
"language": "en",
"url": "https://arxiv.org/abs/1706.02606",
"abstract": "For every labeled forest $\\mathsf{F}$ with set of vertices $[n]$ we can consider the subgroup $G$ of the symmetric group $S_n$ that is generated by all the cycles determined by all maximal paths of $\\mathsf{F}$. We say that $G$ is the chain group of the forest $\\mathsf{F}$. In this paper we study the relation between a forest and its chain group. In particular, we find the chain groups of the members of several families of forests. Finally, we prove that no copy of the dihedral group of cardinality $2n$ inside $S_n$ can be achieved as the chain group of any forest.",
"subjects": "Combinatorics (math.CO)",
"title": "The Chain Group of a Forest",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9896718453483273,
"lm_q2_score": 0.8459424295406088,
"lm_q1q2_score": 0.8372054053019017
} |
https://arxiv.org/abs/1704.05486

The convexification effect of Minkowski summation

Abstract: Let us define for a compact set $A \subset \mathbb{R}^n$ the sequence $$ A(k) = \left\{\frac{a_1+\cdots +a_k}{k}: a_1, \ldots, a_k\in A\right\}=\frac{1}{k}\Big(\underset{k\ {\rm times}}{\underbrace{A + \cdots + A}}\Big). $$ It was independently proved by Shapley, Folkman and Starr (1969) and by Emerson and Greenleaf (1969) that $A(k)$ approaches the convex hull of $A$ in the Hausdorff distance induced by the Euclidean norm as $k$ goes to $\infty$. We explore in this survey how exactly $A(k)$ approaches the convex hull of $A$, and more generally, how a Minkowski sum of possibly different compact sets approaches convexity, as measured by various indices of non-convexity. The non-convexity indices considered include the Hausdorff distance induced by any norm on $\mathbb{R}^n$, the volume deficit (the difference of volumes), a non-convexity index introduced by Schneider (1975), and the effective standard deviation or inner radius. After first clarifying the interrelationships between these various indices of non-convexity, which were previously either unknown or scattered in the literature, we show that the volume deficit of $A(k)$ does not monotonically decrease to 0 in dimension 12 or above, thus falsifying a conjecture of Bobkov et al. (2011), even though their conjecture is proved to be true in dimension 1 and for certain sets $A$ with special structure. On the other hand, Schneider's index possesses a strong monotonicity property along the sequence $A(k)$, and both the Hausdorff distance and effective standard deviation are eventually monotone (once $k$ exceeds $n$). Along the way, we obtain new inequalities for the volume of the Minkowski sum of compact sets, falsify a conjecture of Dyn and Farkhi (2004), demonstrate applications of our results to combinatorial discrepancy theory, and suggest some questions worthy of further investigation.

\section{Introduction}
\label{sec:intro}
Minkowski summation is a basic and ubiquitous operation on sets. Indeed, the Minkowski sum
$A+B = \{a+b: a \in A, b \in B \}$ of sets $A$ and $B$ makes sense as long as $A$ and $B$ are subsets
of an ambient set in which the operation + is defined. In particular, this notion makes sense
in any group, and there are multiple fields of mathematics that are preoccupied with
studying what exactly this operation does. For example, much of classical additive combinatorics
studies the cardinality of Minkowski sums (called sumsets in this context) of finite subsets of a group and their interaction
with additive structure of the concerned sets, while the study of the Lebesgue measure of Minkowski sums in ${\bf R}^n$
is central to much of convex geometry and geometric functional analysis.
In this survey paper, which also contains a number of original results, our goal is to understand
better the qualitative effect of Minkowski summation in ${\bf R}^n$-- specifically, the ``convexifying''
effect that it has. Somewhat surprisingly, while the existence of such an effect has long been known,
several rather basic questions about its nature do not seem to have been addressed, and we undertake
to fill the gap.
The fact that Minkowski summation produces sets that look ``more convex'' is easy to visualize
by drawing a non-convex set\footnote{The simplest nontrivial example is three non-collinear points
in the plane, so that $A(k)$ is the original set $A$ of vertices of a triangle together with those
convex combinations of the vertices formed by rational coefficients with denominator $k$.} in the plane and its self-averages $A(k)$ defined by
\begin{eqnarray}\label{defAk}
A(k) = \left\{\frac{a_1+\cdots +a_k}{k} : a_1, \ldots, a_k\in A\right\}=\frac{1}{k}\Big(\underset{k\ {\rm times}}{\underbrace{A + \cdots + A}}\Big).
\end{eqnarray}
This intuition was first made precise in the late 1960's independently\footnote{Both the papers
of Starr \cite{Sta69} and Emerson and Greenleaf \cite{EG69} were submitted in 1967 and published in 1969,
but in very different communities (economics and algebra); so it is not surprising that the authors of these papers were
unaware of each other. Perhaps more surprising is that the relationship between these papers does not seem
to have ever been noticed in the almost 5 decades since. The fact that $A(k)$ converges
to the convex hull of $A$, at an $O(1/k)$ rate in the Hausdorff metric when dimension $n$ is fixed,
should perhaps properly be called the Emerson-Folkman-Greenleaf-Shapley-Starr theorem,
but in keeping with the old mathematical tradition of not worrying too much about names of theorems (cf. Arnold's principle),
we will simply use the nomenclature that has become standard.} by Starr \cite{Sta69} (see also \cite{Sta81}), who credited
Shapley and Folkman for the main result, and by Emerson and Greenleaf \cite{EG69}. Denoting by $\mathrm{conv}(A)$ the convex hull of $A$, by $B_2^n$ the $n$-dimensional Euclidean ball of radius $1$, and by $d(A)= \inf\{r>0: \mathrm{conv}(A)\subset A+rB_2^n\}$ the Hausdorff distance between a
set $A$ and its convex hull, it follows from the Shapley-Folkman-Starr theorem that
if $A_1, \ldots, A_k$ are compact sets in ${\bf R}^n$ contained inside some ball, then
\begin{eqnarray*}
d(A_1 +\cdots + A_k) = O\big( \sqrt{\min\{k,n\}} \big) .
\end{eqnarray*}
By considering $A_1=\cdots=A_k=A$, one concludes that
$d(A(k))=O \big( \frac{\sqrt{n}}{k} \big)$. In other words,
when $A$ is a compact subset of ${\bf R}^n$ for fixed dimension $n$,
$A(k)$ converges in Hausdorff distance to $\mathrm{conv}(A)$ as $k\ra\infty$,
at rate at least $O(1/k)$.
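For the three-point set of the footnote above, $A=\{(0,0),(1,0),(0,1)\}$, this $O(1/k)$ rate can be observed directly. The sketch below (Python, not from the original sources; the supremum defining $d$ is approximated by sampling $\mathrm{conv}(A)$ on a fine grid) computes $d(A(k))$ for a few values of $k$:

```python
from itertools import combinations_with_replacement
from math import hypot

A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # vertices of a triangle

def A_k(k):
    # A(k) = {(a_1 + ... + a_k)/k : a_i in A}
    pts = set()
    for combo in combinations_with_replacement(A, k):
        pts.add((sum(p[0] for p in combo) / k, sum(p[1] for p in combo) / k))
    return pts

def d_approx(pts, m=64):
    # approximate d(A(k)) = sup_{x in conv(A)} dist(x, A(k)) by sampling
    # conv(A) (here the triangle with vertices A) on a grid of mesh 1/m
    worst = 0.0
    for s in range(m + 1):
        for t in range(m + 1 - s):
            x, y = s / m, t / m
            worst = max(worst, min(hypot(x - p, y - q) for p, q in pts))
    return worst

for k in (1, 2, 4, 8):
    print(k, round(d_approx(A_k(k)), 3))
```

For this example one can check by hand that $d(A(k)) = \sqrt{2}/(2k)$, the covering radius of the grid $A(k)$ inside the triangle, in line with the computed values.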
Our geometric intuition would suggest that in some sense, as $k$ increases,
the set $A(k)$ is getting progressively more convex, or in other words, that
the convergence of $A(k)$ to $\mathrm{conv}(A)$ is, in some sense, monotone.
The main goal of this paper is to examine this intuition, and explore whether it can
be made rigorous.
One motivation for our goal of exploring monotonicity in the Shapley-Folkman-Starr theorem is that it
was the key tool allowing Starr \cite{Sta69} to prove that in an economy
with a sufficiently large number of traders, there are (under some natural
conditions) configurations arbitrarily close
to equilibrium even without making any convexity assumptions on preferences of the traders;
thus investigations of monotonicity in this theorem speak to the question
of whether these quasi-equilibrium configurations in fact get ``closer'' to a true equilibrium
as the number of traders increases.
A related result is the core convergence result of Anderson \cite{And78},
which states under very general conditions that
the discrepancy between a core allocation
and the corresponding competitive equilibrium price vector in a
pure exchange economy becomes arbitrarily small as the number
of agents gets large. These results are central results in mathematical economics,
and continue to attract attention (see, e.g., \cite{Sch12}).
Our original motivation, however, came from a conjecture made
by Bobkov, Madiman and Wang \cite{BMW11}. To state it,
let us introduce the volume deficit $\Delta(A)$ of a compact set $A$ in ${\bf R}^n$:
$\Delta(A):= \mathrm{Vol}_n(\mathrm{conv}(A)\setminus A) = \mathrm{Vol}_n(\mathrm{conv}(A))-\mathrm{Vol}_n(A)$,
where $\mathrm{Vol}_n$ denotes the Lebesgue measure in ${\bf R}^n$.
\begin{conj}[Bobkov-Madiman-Wang \cite{BMW11}]\label{weakconj}
Let $A$ be a compact set in ${\bf R}^n$ for some $n\in\mathbb{N}$, and let $A(k)$
be defined as in \eqref{defAk}.
Then the sequence $\{\Delta(A(k))\}_{k \geq 1}$ is non-increasing in $k$,
or equivalently, $\{\mathrm{Vol}_n(A(k))\}_{k \geq 1}$ is non-decreasing.
\end{conj}
In fact, the authors of \cite{BMW11} proposed a number of related conjectures,
of which Conjecture~\ref{weakconj} is the weakest. Indeed,
they conjectured a monotonicity property in a probabilistic limit theorem, namely the law of
large numbers for random sets due to Z.~Artstein and Vitale \cite{AV75};
when this conjectured monotonicity property of \cite{BMW11} is restricted to deterministic
(i.e., non-random) sets, one obtains Conjecture \ref{weakconj}. They showed
in turn that this conjectured monotonicity property in the law of large numbers for
random sets is implied by the following volume inequality for Minkowski sums.
For an integer $k\ge1$, we set $[k]=\{1, \ldots, k\}$.
\begin{conj}[Bobkov-Madiman-Wang \cite{BMW11}]\label{strongconj}
Let $n\ge1$, $k\ge2$ be integers and let $A_1, \dots, A_k$ be $k$ compact sets in ${\bf R}^n$. Then
\begin{eqnarray}\label{conjdimn}
\mathrm{Vol}_n\left(\sum_{i=1}^kA_i\right)^\frac{1}{n}\ge \frac{1}{k-1}\sum_{i=1}^k\mathrm{Vol}_n\left(\sum_{j\in[k]\setminus\{i\}}A_j\right)^\frac{1}{n}.
\end{eqnarray}
\end{conj}
Apart from the fact that Conjecture~\ref{strongconj} implies Conjecture~\ref{weakconj} (which can be seen simply by applying the former to $A_1=\cdots=A_k=A$, where $A$ is a fixed compact set),
Conjecture~\ref{strongconj} is particularly interesting because of its close connections to
an important inequality in Geometry, namely the Brunn-Minkowski inequality,
and a fundamental inequality in Information Theory, namely the entropy power inequality.
Since the conjectures in \cite{BMW11} were largely motivated by these connections,
we now briefly explain them.
The Brunn-Minkowski inequality (or strictly speaking, the Brunn-Minkowski-Lyusternik inequality)
states that for all compact sets $A,B$ in ${\bf R}^n$,
\begin{eqnarray}\label{BMI}
\mathrm{Vol}_n(A+B)^{1/n} \geq \mathrm{Vol}_n(A)^{1/n} + \mathrm{Vol}_n(B)^{1/n} .
\end{eqnarray}
It is, of course, a cornerstone of Convex Geometry, and has beautiful relations to many areas
of Mathematics (see, e.g., \cite{Gar02, Sch14:book}).
The case $k=2$ of Conjecture~\ref{strongconj} is exactly the Brunn-Minkowski inequality (\ref{BMI}). Whereas Conjecture~\ref{strongconj} yields the monotonicity described in Conjecture~\ref{weakconj},
the Brunn-Minkowski inequality only allows one to deduce that the subsequence
$\{\mathrm{Vol}_n(A(2^k))\}_{k \in \mathbb{N}}$ is non-decreasing
(one may also deduce this fact from the trivial inclusion $A \subset \frac{A+A}{2}$).
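In dimension $1$ this doubling monotonicity is easy to watch numerically. The following sketch (Python, not from the paper; sets are represented as finite unions of closed intervals, for which Minkowski sums can be computed exactly) evaluates $\mathrm{Vol}_1(A(2^j))$ for $A=[0,0.1]\cup[0.9,1]$:

```python
def merge(intervals):
    # normalize a list of closed intervals into a disjoint union
    intervals = sorted(intervals)
    out = []
    for a, b in intervals:
        if out and a <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out

def minkowski_sum(U, V):
    return merge([(a + c, b + d) for a, b in U for c, d in V])

def scale(U, t):
    return [(t * a, t * b) for a, b in U]

def measure(U):
    return sum(b - a for a, b in U)

A = [(0.0, 0.1), (0.9, 1.0)]
S = A               # running sum A + ... + A (k summands)
k = 1
vols = [measure(scale(S, 1 / k))]
for _ in range(3):  # k = 2, 4, 8
    S = minkowski_sum(S, S)
    k *= 2
    vols.append(measure(scale(S, 1 / k)))
print(vols)  # measures of A(1), A(2), A(4), A(8)
```

The measures $0.2, 0.3, 0.5, 0.9$ (up to floating point) increase toward $\mathrm{Vol}_1(\mathrm{conv}(A))=1$, as guaranteed by $A(2^j) \subset \frac{A(2^j)+A(2^j)}{2} = A(2^{j+1})$.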
The entropy power inequality states that for all independent random vectors $X, Y$ in ${\bf R}^n$,
\begin{eqnarray}\label{EPI}
N(X+Y) \geq N(X) + N(Y),
\end{eqnarray}
where
$$N(X) = \frac{1}{2\pi e} e^{\frac{2h(X)}{n}}$$
denotes the entropy power of $X$. Let us recall that the entropy of a random vector $X$ with density function $f_X$
(with respect to Lebesgue measure $dx$) is $h(X) = -\int f_X(x) \log f_X(x) dx$ if the integral exists and $- \infty$ otherwise (see, e.g., \cite{CT91:book}). As a consequence, one may deduce that for independent and identically distributed random vectors $X_i$, $i \geq 0$, the sequence
$$ \left\{ N \left( \frac{X_1 + \cdots + X_{2^k}}{\sqrt{2^k}} \right) \right\}_{k \in \mathbb{N}} $$
is non-decreasing.
S.~Artstein, Ball, Barthe and Naor \cite{ABBN04:1}
generalized the entropy power inequality (\ref{EPI}) by proving that for any independent random vectors $X_1, \dots, X_k$,
\begin{eqnarray}\label{newEPI}
N\left(\sum_{i=1}^k X_i\right) \geq \frac{1}{k-1} \sum_{i=1}^k N\left(\sum_{j \in [k] \setminus \{i\}} X_j\right).
\end{eqnarray}
In particular, if all $X_i$ in the above inequality are identically distributed, then one may deduce that the sequence
$$ \left\{ N \left( \frac{X_1 + \cdots + X_{k}}{\sqrt{k}} \right) \right\}_{k \geq 1} $$
is non-decreasing. This fact is usually referred to as ``the monotonicity of entropy in the Central Limit Theorem'',
since the sequence of entropies of these normalized sums converges to that of a Gaussian distribution as shown earlier
by Barron \cite{Bar86}.
Later, simpler proofs of the inequality \eqref{newEPI} were given by \cite{MB06:isit, TV06}; more general inequalities
were developed in \cite{MB07, Shl07, MG17}.
There is a formal resemblance between inequalities (\ref{EPI}) and (\ref{BMI}) that was noticed in a pioneering work of Costa and Cover \cite{CC84} and later explained by Dembo, Cover and Thomas \cite{DCT91} (see also \cite{SV00, WM14}
for other aspects of this connection). In the last decade, several further developments have been made that link
Information Theory to the Brunn-Minkowski theory, including entropy analogues of the Blaschke-Santal\'o inequality \cite{LYZ04},
the reverse Brunn-Minkowski inequality \cite{BM11:cras, BM12:jfa}, the Rogers-Shephard inequality \cite{BM13:goetze, MK18} and the Busemann inequality \cite{BNT16}.
Indeed, volume inequalities and entropy inequalities (and also certain small ball inequalities \cite{MMX17:1}) can be unified using
the framework of R\'enyi entropies; this framework and the relevant literature is surveyed in \cite{MMX17:0}.
On the other hand, natural analogues in the Brunn-Minkowski theory of Fisher information inequalities
hold sometimes but not always \cite{FGM03, AFO14, FM14}.
In particular, it is now well understood that the functional $A\mapsto \mathrm{Vol}_n(A)^{1/n}$ in the geometry of compact subsets of ${\bf R}^n$, and the functional $f_X\mapsto N(X)$ in probability are analogous to each other in many (but not all) ways. Thus, for example, the monotonicity property desired in Conjecture~\ref{weakconj}
is in a sense analogous to the monotonicity property in the Central Limit Theorem implied by inequality \eqref{newEPI},
and Conjecture \ref{strongconj} from \cite{BMW11} generalizes the Brunn-Minkowski inequality (\ref{BMI})
exactly as inequality (\ref{newEPI}) generalizes the entropy power inequality (\ref{EPI}).
The starting point of this work was the observation that although Conjecture \ref{strongconj} holds for
certain special classes of sets (namely, one-dimensional compact sets, convex sets and their Cartesian products, as shown in subsection \ref{sec:1d}),
both Conjecture~\ref{weakconj} and Conjecture~\ref{strongconj} fail to hold in general even for moderately high dimension
(Theorem~\ref{thm:counter} constructs a counterexample in dimension 12). These results,
which consider the question of the monotonicity of $\Delta(A(k))$ are stated and proved
in Section~\ref{sec:Delta}. We also discuss there the question of when one has
convergence of $\Delta(A(k))$ to 0, and at what rate, drawing on the work of Emerson and Greenleaf
\cite{EG69} (which seems not to be well known in the contemporary literature on convexity).
Section~\ref{sec:vol} is devoted to developing some new volume inequalities for Minkowski sums.
In particular, we observe in Theorem~\ref{thm:fsa} that if the exponents of $1/n$ in Conjecture~\ref{strongconj} are removed,
then the modified inequality is true for general compact sets
(though unfortunately one can no longer directly relate this to
a law of large numbers for sets). Furthermore, in the case of convex sets, Theorem~\ref{thm:smod}
proves an even stronger fact, namely that the volume of the Minkowski sum of
convex sets is supermodular. Various other facts surrounding these observations are also
discussed in Section~\ref{sec:vol}.
Even though the conjecture about $A(k)$ becoming progressively more convex in the sense
of $\Delta$ is false thanks to Theorem~\ref{thm:counter}, one can ask the same question when we measure the extent of non-convexity
using functionals other than $\Delta$. In Section~\ref{sec:meas}, we survey the existing literature
on measures of non-convexity of sets, also making some possibly new observations about
these various measures and the relations between them. The functionals we consider
include a non-convexity index $c(A)$ introduced by Schneider \cite{Sch75}, the notion of
inner radius $r(A)$ introduced by Starr \cite{Sta69} (and studied in an equivalent form
as the effective standard deviation $v(A)$ by Cassels \cite{Cas75}, though the equivalence was only understood later by Wegmann \cite{Weg80}),
and the Hausdorff distance $d(A)$ to the convex hull, which we already introduced when
describing the Shapley-Folkman-Starr theorem. We also consider the generalized Hausdorff distance $d^{(K)}(A)$
corresponding to using a non-Euclidean norm whose unit ball is the convex body $K$.
The rest of the paper is devoted to
the examination of whether $A(k)$ becomes progressively more convex as $k$ increases,
when measured through these other functionals.
In Section~\ref{sec:c}, we develop the main positive result of this paper, Theorem~\ref{thm:c-quant-mono},
which shows that $c(A(k))$ is monotonically (strictly) decreasing in $k$, unless
$A(k)$ is already convex. Various other properties of Schneider's non-convexity index
and its behavior for Minkowski sums are also established here, including the optimal
$O(1/k)$ convergence rate for $c(A(k))$. We remark that even the question of convergence of $c(A(k))$
to 0 does not seem to have been explored in the literature.
Section~\ref{sec:r} considers the behavior of $v(A(k))$ (or equivalently $r(A(k))$).
For this sequence, we show that monotonicity holds in dimensions 1 and 2,
and in general dimension, monotonicity holds eventually (in particular, once $k$ exceeds $n$).
The convergence rate of $r(A(k))$ to 0 was already established
in Starr's original paper \cite{Sta69}; we review the classical proof of Cassels \cite{Cas75}
of this result.
Section~\ref{sec:d} considers the question of monotonicity of $d(A(k))$,
as well as its generalizations $d^{(K)}(A(k))$ when we consider ${\bf R}^n$ equipped with norms
other than the Euclidean norm (indeed, following \cite{BG81}, we even consider
so-called ``nonsymmetric norms''). Again here, we show that
monotonicity holds in dimensions 1 and 2,
and in general dimension, monotonicity holds eventually (in particular, once $k$ exceeds $n$).
In fact, more general inequalities are proved that hold for Minkowski sums of different sets.
The convergence rate of $d(A(k))$ to 0 was already established
in Starr's original paper \cite{Sta69}; we review both a classical proof,
and also provide a new very simple proof of a rate result
that is suboptimal in dimension for the Euclidean norm but sharp in both
dimension and number $k$ of summands given that it holds for arbitrary norms. In 2004 Dyn and Farkhi \cite{DF04} conjectured that
$d^2(A+B) \leq d^2(A)+d^2(B).$ We show that this conjecture is false in ${\bf R}^n$, $n \ge 3$.
In Section~\ref{sec:discrep}, we show that a number of results from combinatorial discrepancy theory
can be seen as consequences of the convexifying effect of Minkowski summation.
In particular, we obtain a new bound on the discrepancy for finite-dimensional Banach spaces
in terms of the Banach-Mazur distance of the space from a Euclidean one.
Finally, in Section~\ref{sec:disc}, we make various additional remarks, including on notions
of non-convexity not considered in this paper.
\vspace{.1in}
\noindent
{\bf Acknowledgments.}
Franck Barthe had independently observed that Conjecture~\ref{strongconj} holds in dimension 1, using the same proof, by 2011.
We are indebted to Fedor Nazarov for valuable discussions, in particular for the help in the construction of the
counterexamples in Theorem~\ref{thm:counter} and Theorem \ref{thm:DF}. We would like to thank Victor Grinberg for many enlightening
discussions on the connections with discrepancy theory, which were an enormous help with putting Section~\ref{sec:discrep}
together. We also thank Franck Barthe, Dario Cordero-Erausquin, Uri Grupel,
Bo'az Klartag, Joseph Lehec, Paul-Marie Samson, Sreekar Vadlamani, and Murali Vemuri for interesting discussions.
Some of the original results developed in this work were announced in \cite{FMMZ16}; we are grateful to Gilles Pisier
for curating that announcement. Finally we are grateful to the anonymous referee for a careful reading of the
paper and constructive comments.
\section{Measures of non-convexity}
\label{sec:meas}
\subsection{Preliminaries and Definitions}
\label{sec:defns}
Throughout this paper, we only deal with compact sets, since several of the measures
of non-convexity we consider can have rather unpleasant behavior if we do not make this assumption.
The convex hull operation interacts nicely with Minkowski summation.
\begin{lem}\label{lem:conv-sum}
Let $A,B$ be nonempty subsets of ${\bf R}^n$. Then,
$$ \mathrm{conv}(A+B)=\mathrm{conv}(A)+\mathrm{conv}(B). $$
\end{lem}
\begin{proof}
Let $x \in \mathrm{conv}(A) + \mathrm{conv}(B) $. Then $x = \sum_{i=1}^k \lambda_i a_i + \sum_{j=1}^l \mu_j b_j$, where $a_i \in A$, $b_j \in B$, $\lambda_i \geq 0$, $\mu_j \geq 0$ and $\sum_{i=1}^k \lambda_i=1$, $\sum_{j=1}^l \mu_j=1$. Thus, using $\sum_{j=1}^l \mu_j = 1$ and $\sum_{i=1}^k \lambda_i = 1$, we may write $x = \sum_{i=1}^k \sum_{j=1}^l \lambda_i \mu_j (a_i + b_j)$, a convex combination of points of $A+B$. Hence $x \in \mathrm{conv}(A+B)$. The other inclusion follows since $A+B \subset \mathrm{conv}(A)+\mathrm{conv}(B)$ and the latter set is convex.
\end{proof}
Lemma \ref{lem:conv-sum} will be used throughout the paper without necessarily referring to it.
A useful consequence of Lemma \ref{lem:conv-sum} is the following remark.
\begin{rem}\label{rk:referee}
If $A+\lambda\mathrm{conv}(A)$ is convex then
\[
A+\lambda\mathrm{conv}(A)=\mathrm{conv}(A+\lambda\mathrm{conv}(A))=\mathrm{conv}(A)+\lambda\mathrm{conv}(A)=(1+\lambda)\mathrm{conv}(A).
\]
\end{rem}
The Shapley-Folkman lemma, which is closely related to the classical Carath\'eodory theorem, is key to our development.
\begin{lem}[Shapley-Folkman]\label{lem:SF}
Let $A_1, \dots, A_k$ be nonempty subsets of ${\bf R}^n$, with $k \geq n + 1$. Let $a\in \sum_{i \in [k]} \mathrm{conv}(A_i)$. Then there exists a set $I$ of cardinality at most $n$ such that
$$ a\in \sum_{i \in I} \mathrm{conv}(A_i) + \sum_{i \in [k] \setminus I} A_i. $$
\end{lem}
\begin{proof}
We present below a proof taken from Proposition 5.7.1 of \cite{Ber09:book}.
Let $a \in \sum_{i \in [k]} \mathrm{conv}(A_i)$. Then
$$ a = \sum_{i \in [k]} a_i = \sum_{i \in [k]}\sum_{j=1}^{t_i} \lambda_{ij} a_{ij}, $$
where $\lambda_{ij} \geq 0$, $\sum_{j=1}^{t_i} \lambda_{ij} = 1$, and $a_{ij} \in A_i$. Let us consider the following vectors of ${\bf R}^{n+k}$,
\begin{eqnarray*}
z & = & (a, 1, \cdots, 1), \\ z_{1j} & = & (a_{1j}, 1, 0, \cdots, 0), \quad j \in [t_1], \\ & \vdots & \\ z_{kj} & = & (a_{kj}, 0, \cdots, 0, 1), \quad j \in [t_k].
\end{eqnarray*}
Notice that $z = \sum_{i=1}^k \sum_{j=1}^{t_i} \lambda_{ij} z_{ij}$. Using Carath\'eodory's theorem in the positive cone generated by $z_{ij}$ in ${\bf R}^{n+k}$, one has
$$ z = \sum_{i=1}^k \sum_{j=1}^{t_i} \mu_{ij} z_{ij}, $$
for some nonnegative scalars $\mu_{ij}$, of which at most $n+k$ are nonzero. This implies that $a = \sum_{i=1}^k \sum_{j=1}^{t_i} \mu_{ij} a_{ij}$ and that $\sum_{j=1}^{t_i} \mu_{ij} = 1$ for all $i \in [k]$. Thus for each $i \in [k]$, there exists $j_i \in [t_i]$ such that $\mu_{ij_i} > 0$. But at most $n+k$ scalars $\mu_{ij}$ are positive, so at most $n$ indices $i$ can have more than one positive $\mu_{ij}$. One deduces that there are at least $k-n$ indices $i$ such that $\mu_{i\ell_i}=1$ for some $\ell_i \in [t_i]$, and thus $\mu_{ij} = 0$ for $j \neq \ell_i$; for these indices, $\sum_{j=1}^{t_i}\mu_{ij}a_{ij} = a_{i\ell_i} \in A_i$. Taking $I$ to be the set of the remaining indices, which has cardinality at most $n$, completes the proof.
\end{proof}
The Shapley-Folkman lemma may alternatively be written as the statement that, for $k \geq n + 1$,
\begin{eqnarray}
\mathrm{conv}(\sum_{i \in [k]} A_i) = \bigcup_{I \subset [k]: |I| \leq n} \bigg[ \sum_{i \in I} \mathrm{conv}(A_i) + \sum_{i \in [k] \setminus I} A_i \bigg],
\end{eqnarray}
where $|I|$ denotes the cardinality of $I$.
When all the sets involved are identical, and $k >n$, this reduces to the identity
\begin{eqnarray}
k\,\mathrm{conv}(A) = n\, \mathrm{conv}(A) + (k- n) \,A(k-n).
\end{eqnarray}
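In dimension $n=1$ the lemma has a particularly concrete content: any point of $\sum_{i}\mathrm{conv}(A_i)$ can be written using genuine elements of all but at most one of the sets. A brute-force sanity check of this $n=1$ case (Python, not from the paper; finite subsets of ${\bf R}$, with a small tolerance for floating point):

```python
from itertools import product
import random

random.seed(0)

def conv1(A):
    # conv of a finite subset of R is the interval [min, max]
    return (min(A), max(A))

def shapley_folkman_witness(sets, a, tol=1e-9):
    # look for an index i and points s_j in A_j (j != i) with
    # a - sum_j s_j in conv(A_i); the lemma with n = 1 guarantees |I| <= 1
    for i in range(len(sets)):
        others = [s for j, s in enumerate(sets) if j != i]
        lo, hi = conv1(sets[i])
        for choice in product(*others):
            r = a - sum(choice)
            if lo - tol <= r <= hi + tol:
                return i, choice
    return None

k = 4
sets = [[random.uniform(0, 1) for _ in range(3)] for _ in range(k)]
ok = all(
    shapley_folkman_witness(sets, sum(random.uniform(*conv1(A)) for A in sets))
    is not None
    for _ in range(200)   # 200 random points of sum_i conv(A_i)
)
print(ok)  # the lemma predicts a witness always exists
```

When the witness has $I=\emptyset$ the point already lies in the sumset; the search above absorbs that case into $|I|=1$, since $A_i \subset \mathrm{conv}(A_i)$.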
It should be noted that the Shapley-Folkman lemma is in the center of a rich vein of investigation in convex analysis
and its applications. As explained by Z.~Artstein \cite{Art80}, it may be seen as a discrete manifestation of a key lemma about extreme points that is related
to a number of ``bang-bang'' type results. It also plays an important role in the theory of vector-valued measures;
for example, it can be used as an ingredient in the proof of Lyapunov's theorem on the range of vector measures (see \cite{KR13}, \cite{DU77:book}
and references therein).
For a compact set $A$ in ${\bf R}^n$, denote by
$$R(A)=\min_{x}\{r>0: A\subset x+rB_2^n\}$$
the radius of the smallest ball containing $A$. By Jung's theorem \cite{Jun01}, this parameter is close to the diameter, namely one has
\begin{eqnarray*}
\frac{\mathrm{diam}(A)}{2}\le R(A)\le \mathrm{diam}(A)\sqrt{\frac{n}{2(n+1)}}\le\frac{\mathrm{diam}(A)}{\sqrt{2}},
\end{eqnarray*}
where $\mathrm{diam}(A) = \sup_{x,y \in A} |x-y|$ is the Euclidean diameter of $A$. We also denote by
\begin{eqnarray*}
\mathrm{inr}(A)=\max_{x}\{r\ge0: x+rB_2^n\subset A\}
\end{eqnarray*}
the inradius of $A$, i.e. the radius of a largest Euclidean ball included in $A$.
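Jung's bound above can be checked numerically: for a finite planar set, the minimal enclosing ball is determined by at most three points, so $R(A)$ can be computed by enumerating candidate centers (midpoints of pairs and circumcenters of triples). A sketch (Python, not from the paper) verifying $\mathrm{diam}(A)/2 \le R(A) \le \mathrm{diam}(A)/\sqrt{3}$ in ${\bf R}^2$:

```python
import random
from itertools import combinations
from math import hypot, sqrt

random.seed(1)

def circumcenter(a, b, c):
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # (nearly) collinear triple
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy)

def R(A):
    # minimal enclosing ball of a finite planar set: its center is a
    # midpoint of two points of A or the circumcenter of three points of A
    centers = [((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
               for p, q in combinations(A, 2)]
    centers += [c for t in combinations(A, 3)
                if (c := circumcenter(*t)) is not None]
    return min(max(hypot(p[0] - c[0], p[1] - c[1]) for p in A)
               for c in centers)

def diam(A):
    return max(hypot(p[0] - q[0], p[1] - q[1]) for p, q in combinations(A, 2))

for _ in range(100):
    A = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(6)]
    assert diam(A) / 2 - 1e-9 <= R(A) <= diam(A) / sqrt(3) + 1e-9
print("Jung's inequality verified on 100 random 6-point sets")
```

The upper bound $\mathrm{diam}(A)/\sqrt{3}$ is the planar case $n=2$ of Jung's inequality, attained by the vertices of an equilateral triangle.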
There are several ways of measuring non-convexity of a set:
\begin{enumerate}
\item The Hausdorff distance from the convex hull is perhaps the most obvious measure to consider:
\begin{eqnarray*}
d(A)= d_H(A, \mathrm{conv}(A))= \inf\{r>0: \mathrm{conv}(A)\subset A+rB_2^n\} .
\end{eqnarray*}
A variant of this is to consider the Hausdorff distance when the ambient metric space is ${\bf R}^n$ equipped with a norm different from the Euclidean norm. If $K$ is the closed unit ball of this norm
(i.e., any symmetric\footnote{We always use ``symmetric'' to mean centrally symmetric, i.e., $x\in K$ if and only if $-x\in K$.}, compact, convex set with nonempty interior), we define
\begin{eqnarray}\label{eq:dK}
d^{(K)}(A)= \inf\{r>0: \mathrm{conv}(A)\subset A+rK\} .
\end{eqnarray}
In fact, the quantity \eqref{eq:dK} makes sense for any compact convex set containing 0 in its interior --
then it is sometimes called the Hausdorff distance with respect to a ``nonsymmetric norm''.
\item Another natural measure of non-convexity is the ``volume deficit'':
\begin{eqnarray*}
\Delta(A)=\mathrm{Vol}_n(\mathrm{conv}(A)\setminus A) = \mathrm{Vol}_n(\mathrm{conv}(A))- \mathrm{Vol}_n(A).
\end{eqnarray*}
Of course, this notion is interesting only when $\mathrm{Vol}_n(\mathrm{conv}(A))\not=0$.
There are many variants of this that one could consider, such as
$\log\mathrm{Vol}_n(\mathrm{conv}(A))- \log\mathrm{Vol}_n(A)$, or relative versions such as
$\Delta(A)/\mathrm{Vol}_n(\mathrm{conv}(A))$ that are automatically bounded.
\item The ``inner radius'' of a compact set was defined by Starr \cite{Sta69} as follows:
\begin{eqnarray*}
r(A) = \sup_{x\in\mathrm{conv}(A)} \inf \{R(T): T\subset A, x\in \mathrm{conv}(T) \} .
\end{eqnarray*}
\item The ``effective standard deviation'' was defined by Cassels \cite{Cas75}. For a random vector $X$ in ${\bf R}^n$, let $V(X)$ be the trace of its covariance matrix. Then the effective standard deviation of a compact set $A$ of ${\bf R}^n$ is
\begin{eqnarray*}
v^2(A) = \sup_{x\in\mathrm{conv}(A)} \inf \{ V(X) : {\rm supp\,}(X)\subset A, |{\rm supp\,}(X)|<\infty, \mathbb{E} X=x \}.
\end{eqnarray*}
Let us notice the equivalent geometric definition of $v$; the second expression below follows from the first by expanding $|a_i-x|^2 = |a_i|^2 - 2\langle a_i, x\rangle + |x|^2$, summing against $p_i$, and using $x=\sum p_i a_i$:
\begin{eqnarray*}
v^2(A) &=& \sup_{x\in\mathrm{conv}(A)}\inf\{\sum p_i |a_i-x|^2: x=\sum p_i a_i; p_i >0; \sum p_i=1, a_i \in A \}\\
&=& \sup_{x\in\mathrm{conv}(A)}\inf\{\sum p_i |a_i|^2-|x|^2: x=\sum p_i a_i; p_i >0; \sum p_i=1, a_i \in A \}.
\end{eqnarray*}
\item In analogy with the effective standard deviation, we define the ``effective absolute deviation'' by
\begin{eqnarray*}
w(A) &=& \sup_{x\in\mathrm{conv}(A)}\inf\bigg\{\sum p_i |a_i-x|: \, x=\sum p_i a_i; p_i >0; \sum p_i=1, a_i \in A \bigg\}\\
&=& \sup_{x\in\mathrm{conv}(A)} \inf \{ \mathbb{E}|X-x| : {\rm supp\,}(X)\subset A, |{\rm supp\,}(X)|<\infty, \mathbb{E} X=x \}.
\end{eqnarray*}
\item Another non-convexity measure was defined by Cassels \cite{Cas75} as follows:
\begin{eqnarray*}
\rho(A) = \sup_{x \in \mathrm{conv}(A)} \inf_{a \in A_x} |x-a|,
\end{eqnarray*}
where $A_x = \{a \in A : \exists b \in \mathrm{conv}(A), \exists \theta \in (0,1) \mbox{ such that } x = (1-\theta)a + \theta b\}$.
\item The ``non-convexity index'' was defined by Schneider \cite{Sch75} as follows:
\begin{eqnarray*}
c(A) = \inf \{ \lambda\geq 0: A+\lambda\, \mathrm{conv}(A) \text{ is convex} \}.
\end{eqnarray*}
\end{enumerate}
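As a toy calibration of these functionals, consider the two-point set $A=\{0,1\}\subset{\bf R}$. The sketch below (Python, not from the paper; suprema are approximated on a grid, and we use the fact that the only probability measure on $\{0,1\}$ with mean $x$ is Bernoulli($x$)) recovers $d(A)=\tfrac12$, $v(A)=\tfrac12$, $w(A)=\tfrac12$ and $c(A)=1$:

```python
A = [0.0, 1.0]
grid = [x / 1000 for x in range(1001)]       # sample conv(A) = [0, 1]

# d(A) = sup_{x in conv(A)} dist(x, A): farthest point is the midpoint
d = max(min(abs(x - a) for a in A) for x in grid)

# v(A)^2 = sup_x Var(X) and w(A) = sup_x E|X - x|, where X is the unique
# distribution on {0, 1} with mean x, namely Bernoulli(x)
v2 = max(x * (1 - x) for x in grid)
w = max(2 * x * (1 - x) for x in grid)

def is_interval(ivs):
    # a finite union of closed intervals is convex iff consecutive
    # pieces (after sorting) overlap or touch
    ivs = sorted(ivs)
    return all(b1 >= a2 for (a1, b1), (a2, b2) in zip(ivs, ivs[1:]))

# c(A) = inf{lam >= 0 : A + lam*conv(A) convex}; here
# A + lam*conv(A) = [0, lam] U [1, 1 + lam]
lo, hi = 0.0, 2.0
for _ in range(50):                          # bisection on the threshold
    mid = (lo + hi) / 2
    if is_interval([(0.0, mid), (1.0, 1.0 + mid)]):
        hi = mid
    else:
        lo = mid

print(d, v2 ** 0.5, w, hi)  # 0.5 0.5 0.5 1.0
```

Note that $c(A)=1$ here attains Schneider's general upper bound $c\le n$ in dimension $n=1$.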
\subsection{Basic properties of non-convexity measures}
\label{sec:properties}
All of these functionals are 0 when $A$ is a convex set; this justifies calling them ``measures of non-convexity''.
In fact, we have the following stronger statement
since we restrict our attention to compact sets.
\begin{lem}\label{lem:meas0}
Let $A$ be a compact set in ${\bf R}^n$. Then:
\begin{enumerate}
\item $c(A)=0$ if and only if $A$ is convex.
\item $d(A)=0$ if and only if $A$ is convex.
\item $r(A)=0$ if and only if $A$ is convex.
\item $\rho(A)=0$ if and only if $A$ is convex.
\item $v(A)=0$ if and only if $A$ is convex.
\item $w(A)=0$ if and only if $A$ is convex.
\item Under the additional assumption that $\mathrm{conv}(A)$ has nonempty interior, $\Delta(A)=0$ if and only if $A$ is convex.
\end{enumerate}
\end{lem}
\begin{proof} Directly from the definition of $c(A)$ we get that $c(A)=0$ if $A$ is convex (just select $\lambda=0$). Now assume that $c(A)=0$. Note that if $A+\lambda\,\mathrm{conv}(A)$ is convex, then so is $A+\lambda'\,\mathrm{conv}(A)=\big(A+\lambda\,\mathrm{conv}(A)\big)+(\lambda'-\lambda)\,\mathrm{conv}(A)$ for every $\lambda'>\lambda$; hence $\{A+\frac{1}{m} \mathrm{conv}(A)\}_{m=1}^\infty$
is a sequence of compact convex sets, converging in the Hausdorff metric to $A$, so $A$ must be convex. Notice that this observation is due to Schneider \cite{Sch75}.
The assertion about $d(A)$ follows immediately from the definition and a limiting argument similar to the one above.
If $A$ is convex then, clearly $r(A)=0$, indeed we can always take $T=(rB_2^n+x) \cap A \not =\emptyset$ with $r \to 0$. Next, if $r(A)=0$, then using Theorem \ref{thm:weg} below we have $d(A)\le r(A)=0$ thus $d(A)=0$ and therefore $A$ is convex.
The statements about $\rho(A)$, $v(A)$ and $w(A)$ can be deduced from the definitions, but they will also follow immediately from the Theorem \ref{thm:weg} below.
Assume that $A$ is convex, then $\mathrm{conv}(A)=A$ and $\Delta(A)=0$. Next, assume that $\Delta(A)=0$. Assume, towards a contradiction, that $\mathrm{conv}(A) \not =A$. Then there exists $x \in \mathrm{conv}(A)$ and $r>0$ such that $(x + r B_2^n) \cap A =\emptyset$. Since $\mathrm{conv}(A)$ is convex and has nonempty interior, there exists a ball $y+sB_2^n\subset\mathrm{conv}(A)$ and one has
$$
\Delta(A)\ge \mathrm{Vol}_n(\mathrm{conv}(A) \cap (x + r B_2^n))\ge \mathrm{Vol}_n(\mathrm{conv}(x, y+sB_2^n) \cap (x + r B_2^n)) >0,
$$
which contradicts $\Delta(A)=0$.
\end{proof}
The following lemmata capture some basic properties of all these measures of non-convexity
(note that we need not separately discuss $v$, $w$ and $\rho$ henceforth owing to Theorem~\ref{thm:weg}).
The first lemma concerns the behavior of these functionals on scaling of the argument set.
\begin{lem}\label{lem:scaling}
Let $A$ be a compact subset of ${\bf R}^n$, $x \in {\bf R}^n$, and $\lambda\in (0,\infty)$.
\begin{enumerate}
\item $c(\lambda A + x)= c(A)$. In fact, $c$ is affine-invariant.
\item $d(\lambda A + x)= \lambda d(A)$.
\item $r(\lambda A + x)=\lambda r(A)$.
\item $\Delta(\lambda A + x)= \lambda^n \Delta(A)$. In fact, if $T(x)=Mx+b$, where
$M$ is an invertible linear transformation and $b \in {\bf R}^n$, then $\Delta(T(A))=|{\mathop{\rm det}}(M)| \Delta(A)$.
\end{enumerate}
\end{lem}
\begin{proof}
To see that $c$ is affine-invariant, we first notice that $\mathrm{conv}(TA)=T\mathrm{conv}(A)$. Moreover writing $Tx=Mx+b$, where
$M$ is an invertible linear transformation and $b \in {\bf R}^n$, we get that
$$
TA+ \lambda \mathrm{conv}(TA)=M(A+\lambda \mathrm{conv}(A))+(1+\lambda)b,
$$
which is convex if and only if $A+\lambda \mathrm{conv}(A)$ is convex.
It is easy to see from the definitions that $d$, $r$ and $\Delta$ are translation-invariant, and that $d$ and $r$ are 1-homogeneous and $\Delta$ is $n$-homogeneous with respect to dilation.
\end{proof}
The next lemma concerns the monotonicity of non-convexity measures with respect to the inclusion relation.
\begin{lem}\label{lem:dincl}
Let $A, B$ be compact sets in ${\bf R}^n$ such that $A\subset B$ and $\mathrm{conv}(A)=\mathrm{conv}(B)$. Then:
\begin{enumerate}
\item $c(A)\geq c(B)$.
\item $d(A)\geq d(B)$.
\item $r(A)\geq r(B)$.
\item $\Delta(A)\geq \Delta(B)$.
\end{enumerate}
\end{lem}
\begin{proof}
For the first part, observe that if $\lambda=c(A)$,
\begin{eqnarray*}
(1+\lambda)\mathrm{conv}(B) \supset B+\lambda\mathrm{conv}(B) = B+\lambda\mathrm{conv}(A) \supset A+\lambda\mathrm{conv}(A) = (1+\lambda)\mathrm{conv}(B),
\end{eqnarray*}
where in the last equation we used that $A + \lambda \mathrm{conv}(A)$ is convex and Remark \ref{rk:referee}.
Hence all relations in the above display must be equalities, and $B+\lambda\mathrm{conv}(B)$ must be convex, which means $c(A) = \lambda \geq c(B)$.
For the second part, observe that
\begin{eqnarray*}
d(A)=\sup_{x\in \mathrm{conv}(A)}d(x,A)=\sup_{x\in \mathrm{conv}(B)}d(x,A)\ge\sup_{x\in \mathrm{conv}(B)}d(x,B)=d(B).
\end{eqnarray*}
For the third part, observe that
\begin{eqnarray*}
\inf \{R(T): T\subset A, x\in \mathrm{conv}(T) \} \geq \inf \{R(T): T\subset B, x\in \mathrm{conv}(T) \}.
\end{eqnarray*}
Hence $r(A) \geq r(B)$.
For the fourth part, observe that
\begin{eqnarray*}
\Delta(A) = \mathrm{Vol}_n(\mathrm{conv}(B)) - \mathrm{Vol}_n(A) \geq \mathrm{Vol}_n(\mathrm{conv}(B)) - \mathrm{Vol}_n(B) = \Delta(B).
\end{eqnarray*}
\end{proof}
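To illustrate the lemma on a concrete pair of sets (an example of ours), take $A=\{0,1\}\subset B=\{0,\frac{1}{2},1\}$ in ${\bf R}$, so that $\mathrm{conv}(A)=\mathrm{conv}(B)=[0,1]$.

```latex
% Illustrative example (ours): A = {0,1} is a subset of B = {0,1/2,1}, same hull [0,1].
\begin{eqnarray*}
d(A)=r(A)=\frac{1}{2} &\geq& d(B)=r(B)=\frac{1}{4},\\
c(A)=1 &\geq& c(B)=\frac{1}{2},\\
\Delta(A)=1 &\geq& \Delta(B)=1.
\end{eqnarray*}
```

Note that the last inequality holds with equality: both sets have one-dimensional Lebesgue measure zero, so refining $A$ does not decrease $\Delta$ here.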
As a consequence of Lemma \ref{lem:dincl}, we deduce that $A(k)$ is monotone along the
subsequence of powers of 2, as measured by each of these measures of non-convexity.
Finally we discuss topological aspects of these non-convexity functionals,
specifically, whether they have continuity properties with respect to the topology on the
class of compact sets induced by Hausdorff distance.
\begin{lem}\label{lem:semicont}
Suppose $A_k\xrightarrow{d_H} A$, where all the sets involved are compact subsets of ${\bf R}^n$. Then:
\begin{enumerate}
\item $\lim_{k\ra\infty} d(A_k)= d(A)$, i.e., $d$ is continuous.
\item $\mathop{\rm lim\ inf}_{k\ra\infty} \Delta(A_k)\geq \Delta(A)$, i.e., $\Delta$ is lower semicontinuous.
\item $\mathop{\rm lim\ inf}_{k\ra\infty} c(A_k) \geq c(A)$, i.e., $c$ is lower semicontinuous.
\item $\mathop{\rm lim\ inf}_{k\ra\infty} r(A_k) \geq r(A)$, i.e., $r$ is lower semicontinuous.
\end{enumerate}
\end{lem}
\begin{proof}
Let us first observe that for any compact sets $A,B$
\begin{eqnarray}\label{eq:lip}
d_H(\mathrm{conv}(A),\mathrm{conv}(B))\le d_H(A,B),
\end{eqnarray}
by applying the convex hull operation to the inclusions
$B \subset A+ d_H(A,B)\, B_2^n$ and $A \subset B+ d_H(A,B)\, B_2^n$,
and invoking Lemma~\ref{lem:conv-sum}.
Thus $A_k\xrightarrow{d_H} A$ implies $\mathrm{conv}(A_k)\xrightarrow{d_H} \mathrm{conv}(A)$.
\\
1. Observe that by the triangle inequality for the Hausdorff metric, we have the inequality
\begin{eqnarray*}
d(B) = d_H(B, \mathrm{conv}(B))
\leq d_H(B, A) + d_H(A, \mathrm{conv}(A)) + d_H(\mathrm{conv}(A), \mathrm{conv}(B)).
\end{eqnarray*}
Using \eqref{eq:lip} one deduces that $d(B)-d(A) \leq 2d_H(B, A)$.
Changing the role of $A$ and $B$, we get
\begin{eqnarray*}
|d(B)-d(A)| \leq 2d_H(B, A).
\end{eqnarray*}
This proves the continuity of $d$.
2. Recall that, with respect to the Hausdorff distance, the volume is upper semicontinuous on the class of compact sets (see, e.g., \cite[Theorem 12.3.6]{SW08:book})
and continuous on the class of compact convex sets (see, e.g., \cite[Theorem 1.8.20]{Sch14:book}). Thus
\begin{eqnarray*}
\mathop{\rm lim\ sup}_{k\ra\infty} \mathrm{Vol}_n(A_k) \leq \mathrm{Vol}_n(A)
\end{eqnarray*}
and
\begin{eqnarray*}
\lim_{k\ra\infty} \mathrm{Vol}_n(\mathrm{conv}(A_k)) = \mathrm{Vol}_n(\mathrm{conv}(A)) ,
\end{eqnarray*}
so that subtracting the former from the latter yields the desired semicontinuity of $\Delta$.
3. Observe that by definition,
\begin{eqnarray*}
A_k+\lambda_k \mathrm{conv}(A_k)= (1+\lambda_k) \mathrm{conv}(A_k) ,
\end{eqnarray*}
where $\lambda_k=c(A_k)$.
Note that from Theorem \ref{thm:sch75} below, due to Schneider \cite{Sch75}, one has $\lambda_k \in [0, n]$. Passing to a subsequence along which $c(A_{k_m})$ converges to $\mathop{\rm lim\ inf}_{k\ra\infty} c(A_k)=:\lambda_*$, and taking limits in the identity above, we get
\begin{eqnarray*}
A+\lambda_* \mathrm{conv}(A)= (1+\lambda_*) \mathrm{conv}(A) .
\end{eqnarray*}
Thus $\lambda_* \ge c(A)$, which is the desired semicontinuity of $c$.
4. Using $A_k\xrightarrow{d_H} A$ we get that $R(A_k)$ is bounded, hence $r(A_k)$ is bounded, and we may pass to a subsequence with $r(A_{k_m}) \to l := \mathop{\rm lim\ inf}_{k\ra\infty} r(A_k)$.
Our goal is to show that $r(A) \le l$. Let $x \in \mathrm{conv}(A)$. Since $\mathrm{conv}(A_{k_m})\xrightarrow{d_H} \mathrm{conv}(A)$, there exist $x_m \in \mathrm{conv}(A_{k_m})$ such that $x_m \to x$. From the definition of $r(A_{k_m})$ we get that there exists $T_m \subset A_{k_m}$ such that $x_m \in \mathrm{conv}(T_m)$ and $R(T_m) \le r(A_{k_m})$. We can select a convergent subsequence $T_{m_i} \to T$, where $T$ is compact (see \cite[Theorem 1.8.4]{Sch14:book}); then $T\subset A$, $x \in \mathrm{conv} (T)$ and $R(T_{m_i}) \to R(T)$, therefore $R(T)\le l$. Thus $r(A) \le l$.
\end{proof}
We emphasize that the semicontinuity assertions in Lemma~\ref{lem:semicont} cannot in general be
improved to continuity; even adding the assumption that the sets are nested would not help.
\begin{ex}\label{ex:sch}
Schneider \cite{Sch75} observed that $c$ is not continuous with respect to the Hausdorff distance,
even if restricted to the compact sets with nonempty interior. His example consists of taking a triangle in the plane,
and replacing one of its edges by the two segments which join the endpoints of the edge to an interior point (see Figure \ref{fig:sch}).
More precisely, let $a_k=(\frac{1}{2}-\frac{1}{k},\frac{1}{2}-\frac{1}{k})$, $A_k=\mathrm{conv}((0,0); (1,0); a_k)\cup \mathrm{conv}((0,0); (0,1); a_k)$, and $A= \mathrm{conv}((0,0) ; (0,1); (1,0))=\mathrm{conv}(A_k)$. Then $d_H(A_k,A)\to 0$, while $r(A)=c(A)=0$ since $A$ is convex. Moreover, $c(A_k)=1$: on one hand $A\subset \frac{A+A_k}{2}$, which implies that $c(A_k)\le1$; on the other hand, for every $\lambda<1$ the point $(\frac{1}{2},\frac{1}{2})$ belongs to $A\setminus\frac{A_k+\lambda A}{1+\lambda}$, so $c(A_k)=1$. Notice also that $r(A_k)=1/\sqrt{2}$: indeed, $A_k\subset (\frac{1}{2},\frac{1}{2})+\frac{1}{\sqrt{2}}B_2^2$, hence $r(A_k)\le\frac{1}{\sqrt{2}}$, and the opposite inequality is not difficult to see since the supremum in the definition of $r$ is attained at the point $(\frac{1}{2},\frac{1}{2})$.
\end{ex}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.4]{Schnider}
\end{center}
\caption{Discontinuity of $c$ and $r$ with respect to Hausdorff distance (Example \ref{ex:sch}). }\label{fig:sch}
\end{figure}
\begin{ex}\label{ex:dis}
To see that there is no continuity for $\Delta$, consider a sequence of discrete nested sets converging in the Hausdorff metric to $[0,1]$; more precisely, $A_k=\{\frac{m}{2^k}; 0\le m\le 2^k\}$.
\end{ex}
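Spelling out this example (our computations): $\mathrm{conv}(A_k)=[0,1]$ for every $k$, and

```latex
% Illustrative computations (ours) for A_k = {m/2^k : 0 <= m <= 2^k}.
\begin{eqnarray*}
d_H(A_k,[0,1]) = d(A_k) &=& 2^{-k-1} \to 0,\\
c(A_k) &=& 2^{-k} \to 0 = c([0,1]),\\
\Delta(A_k) &=& \mathrm{Vol}_1([0,1]) - \mathrm{Vol}_1(A_k) = 1 \not\to 0 = \Delta([0,1]).
\end{eqnarray*}
```

Indeed $A_k+\lambda[0,1]$ is a union of intervals of length $\lambda$ starting at consecutive points at distance $2^{-k}$, which is convex exactly when $\lambda\ge 2^{-k}$; thus $d$ and $c$ see the convergence while $\Delta$ stays at $1$.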
\subsection{Special properties of Schneider's index}
\label{sec:schneider}
All these functionals other than $c$ can be unbounded. The boundedness of $c$
follows from the following nice inequality due to Schneider \cite{Sch75}.
\begin{thm}\label{thm:sch75}\cite{Sch75}
For any subset $A$ of ${\bf R}^n$,
\begin{eqnarray*}
c(A) \leq n.
\end{eqnarray*}
\end{thm}
\begin{proof}
Applying the Shapley-Folkman lemma (Lemma \ref{lem:SF}) to $A_1 = \cdots = A_{n+1} = A$, where $A \subset {\bf R}^n$ is a fixed compact set, one deduces that $(n+1)\mathrm{conv}(A) = A + n\mathrm{conv}(A)$. Thus $c(A) \leq n$.
\end{proof}
Schneider \cite{Sch75} showed that $c(A)=n$ if and only if $A$ consists of $n+1$ affinely independent points. Schneider also showed that if $A$ is unbounded or connected, one has the sharp bound $c(A)\leq n-1$.
Let us note some alternative representations of Schneider's non-convexity index. First, let us recall the definition of the {\it Minkowski functional} of a compact convex set $K$ containing zero:
$$
\|x\|_K=\inf\{t > 0: x \in tK\},
$$
with the usual convention that $\|x\|_K=+\infty$ if $\{t > 0: x \in tK\}=\emptyset$. Note that $K=\{x\in {\bf R}^n: \|x\|_K\le 1\}$ and that $\|\cdot\|_K$ is a norm if $K$ is symmetric with nonempty interior.
For any compact set $A\subset{\bf R}^n$, define
\begin{eqnarray*}
A_\lambda= \frac{1}{1+\lambda} [A+\lambda\, \mathrm{conv}(A)],
\end{eqnarray*}
and observe that
\begin{eqnarray*}
\mathrm{conv}(A_\lambda) = \frac{1}{1+\lambda} \mathrm{conv}(A+\lambda\, \mathrm{conv}(A)) = \frac{1}{1+\lambda} [\mathrm{conv}(A) + \lambda \mathrm{conv}(A)] = \mathrm{conv}(A).
\end{eqnarray*}
Hence, we can express
\begin{eqnarray}\label{eq:c-alam}
c(A)= \inf \{ \lambda\geq 0: A_\lambda \text{ is convex} \} = \inf \{ \lambda\geq 0: A_\lambda = \mathrm{conv}(A) \} .
\end{eqnarray}
Rewriting this yet another way, we see that if $c(A)<t$, then for each $x\in\mathrm{conv}(A)$ there
exist $a\in A$ and $b\in\mathrm{conv}(A)$ such that
\begin{eqnarray*}
x=\frac{a+tb}{1+t} ,
\end{eqnarray*}
or equivalently, $x-a=t(b-x)$. In other words,
$x-a\in t K_x$ where $K_x=\mathrm{conv}(A)-x$,
which can be written as $\|x-a\|_{K_x} \leq t$ using the Minkowski functional.
Thus
\begin{eqnarray*}
c(A)=\sup_{x\in\mathrm{conv}(A)} \inf_{a\in A} \|x-a\|_{K_x} .
\end{eqnarray*}
This representation is nice since it allows for comparison with the representation of $d(A)$
in the same form but with $K_x$ replaced by the Euclidean unit ball.
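As a quick illustrative check of this representation (our computation), take again $A=\{0,1\}\subset{\bf R}$. For $x\in(0,1)$ one has $K_x=[-x,1-x]$, and

```latex
% Illustrative check (ours) of c(A) = sup_x inf_a ||x-a||_{K_x} for A = {0,1}.
\begin{eqnarray*}
\|x-0\|_{K_x} = \frac{x}{1-x}, \qquad \|x-1\|_{K_x} = \frac{1-x}{x},
\end{eqnarray*}
so that
\begin{eqnarray*}
c(A) = \sup_{0<x<1} \min\Big(\frac{x}{1-x},\frac{1-x}{x}\Big) = 1,
\end{eqnarray*}
```

with the supremum attained at $x=\frac{1}{2}$; this agrees with the value $c(\{0,1\})=1$ computed directly from the definition.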
\begin{rem}
Schneider \cite{Sch75} observed that there are many closed {\it unbounded} sets $A\subset {\bf R}^n$ that satisfy $c(A)=0$,
but are not convex. Examples he gave include the set of integers in ${\bf R}$, or a parabola in the plane.
This makes it very clear that if we are to use $c$ as a measure of non-convexity, we should
restrict attention to compact sets.
\end{rem}
\subsection{Unconditional relationships}
\label{sec:uncond}
It is natural to ask how these various measures of non-convexity are related. First we note that $d$ and $d^{(K)}$ are equivalent. To prove this, we begin with an elementary but useful observation:
\begin{lem}\label{lem:d^K-d^L}
Let $K\subset {\bf R}^n$ be an arbitrary convex body containing $0$ in its interior.
Consider a convex body $L \subset {\bf R}^n$ such that $K \subset L$ and $t>0$. Then for any compact set $A \subset {\bf R}^n$,
$$
d^{(K)} (A) \geq d^{(L)} (A)
$$
and
$$
d^{(tK)}(A)=\frac{1}{t}d^{(K)}(A).
$$
\end{lem}
\begin{proof}
Notice that
$$ A+d^{(K)}(A) L \supset A+d^{(K)}(A)K \supset \mathrm{conv}(A). $$
Hence, $d^{(K)} (A) \geq d^{(L)} (A)$. In addition, one has
$$ A + d^{(K)}(A) K = A + \frac{1}{t}d^{(K)}(A) t K. $$
Hence, $d^{(tK)}(A)=\frac{1}{t}d^{(K)}(A)$.
\end{proof}
The next lemma follows immediately from Lemma \ref{lem:d^K-d^L}:
\begin{lem}\label{lem:d-d^K}
Let $K$ be an arbitrary convex body containing 0 in its interior. For any compact set $A \subset {\bf R}^n$, one has
$$ r d^{(K)}(A) \leq d(A) \leq R d^{(K)}(A), $$
where $r,R>0$ are such that $rB_2^n \subset K \subset RB_2^n$.
\end{lem}
It is also interesting to note a special property of $d^{(\mathrm{conv}(A))}(A)$:
\begin{lem}\label{lem:d^conv}
Let $A$ be a compact set in ${\bf R}^n$. If $0 \in \mathrm{conv}(A)$, then
$$ d^{(\mathrm{conv}(A))}(A)\le c(A). $$
If $0 \in A$, then
$$ d^{(\mathrm{conv}(A))}(A)\le \min\{1, c(A)\}. $$
\end{lem}
\begin{proof}
If $0 \in \mathrm{conv}(A)$, then $\mathrm{conv}(A) \subset (1+c(A)) \mathrm{conv}(A)$. But,
$$
(1+c(A)) \mathrm{conv}(A) = A+c(A)\mathrm{conv}(A),
$$
where we used the fact that by definition of $c(A)$, $A + c(A) \mathrm{conv}(A)$ is convex. Hence, $d^{(\mathrm{conv}(A))}(A)\le c(A)$.
If $0 \in A$, in addition to the above argument, we also have
$$ \mathrm{conv}(A) \subset A + \mathrm{conv}(A). $$
Hence, $d^{(\mathrm{conv}(A))}(A)\le 1$.
\end{proof}
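As an illustration (our computation), for $A=\{0,1\}\subset{\bf R}$, so that $0\in A$ and $\mathrm{conv}(A)=[0,1]$, the second bound is attained:

```latex
% Illustrative computation (ours): A = {0,1}, conv(A) = [0,1],
% A + t conv(A) = [0,t] u [1,1+t] covers [0,1] exactly when t >= 1.
\begin{eqnarray*}
d^{(\mathrm{conv}(A))}(A) = \inf\big\{t\geq 0:\ [0,1] \subset [0,t]\cup[1,1+t]\big\} = 1 = \min\{1, c(A)\},
\end{eqnarray*}
```

since $c(\{0,1\})=1$.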
Note that the inequality in the above lemma cannot be reversed, even at the cost of an additional multiplicative constant. Indeed, take the sets $A_k$ from Example \ref{ex:sch}; then $c(A_k)=1$ but $d^{(\mathrm{conv}(A_k))}(A_k)$ tends to $0$.
Observe that $d, r, \rho$ and $v$ have some similarity in their definitions. Let us introduce point-wise versions of the above notions: for $x\in \mathrm{conv}(A)$, define
\begin{itemize}
\item $d_A(x)=\inf\limits_{a\in A} |x-a|.$ \\
More generally, if $K$ is a compact convex set in ${\bf R}^n$ containing the origin,
\item $d^{(K)}_A(x)=\inf\limits_{a\in A} \|x-a\|_K.$
\item $r_A(x) = \inf \{R(T): T\subset A, x\in \mathrm{conv}(T) \} .$
\item $v^2_A (x)= \inf\{\sum p_i |a_i|^2-|x|^2: x=\sum p_i a_i; p_i >0; \sum p_i=1, a_i \in A \}.$
\item $w_A(x) =\inf\bigg\{\sum p_i |a_i-x|: \, x=\sum p_i a_i; p_i >0; \sum p_i=1, a_i \in A \bigg\}$.
\item $\rho_A(x) = \inf_{a \in A_x} |x-a|, $
where $$A_x = \{a \in A : \exists b \in \mathrm{conv}(A), \exists \theta \in (0,1) \mbox{ such that } x = (1-\theta)a + \theta b\}.$$
\end{itemize}
Below we present a theorem due to Wegmann \cite{Weg80} which shows that $r$, $\rho$, $w$ and $v$ are all equal for compact sets, and that they also coincide with $d$ under an additional assumption. For the sake of completeness we present the proof of Wegmann \cite{Weg80}, simplified here for the case of compact sets.
\begin{thm}[Wegmann \cite{Weg80}]\label{thm:weg}
Let $A$ be a compact set in ${\bf R}^n$, then
\begin{eqnarray*}
d(A)\le \rho(A) =w(A)= v(A)= r(A).
\end{eqnarray*}
Moreover if $v_A(x_0)=v(A)$, for some $x_0$ in the relative interior of $\mathrm{conv}(A)$, then $d(A)=v(A)=w(A)=r(A)=\rho(A)$.
\end{thm}
\begin{proof}
1) First observe that $ d(A)\le \rho(A) \leq w(A)\leq v(A)\leq r(A)$ by easy arguments; in fact, this relation holds point-wise, i.e. $d_A(x) \le \rho_A(x)\le w_A(x)\leq v_A(x)\le r_A(x)$.
Indeed the first inequality follows directly from the definitions, because $A_x \subset A$.
To prove the second inequality, consider any convex decomposition of $x\in \mathrm{conv}(A)$, i.e. $x=\sum_{i=1}^m p_i a_i$
with $p_i >0$, $\sum p_i=1$, $a_i \in A$.
Without loss of generality we may assume that $|x-a_1|\le |x-a_i|$ for all $i\le m$. Then
$$
\sum p_i |x-a_i| \ge |x-a_1| \ge \rho_A(x),
$$
because $a_1 \in A_x$ (indeed, $x=p_1 a_1 + (1-p_1) \sum\limits_{i\ge 2} \frac{p_i}{1-p_1} a_i$).
The third inequality $w_A(x)\leq v_A(x)$ immediately follows from the Cauchy-Schwarz inequality.
To prove the fourth inequality, let $T=\{a_1,\dots, a_m\} \subset A$ be such that $x\in \mathrm{conv}(T)$. Let $p_1,\dots,p_m>0$ be such that $\sum p_i=1$ and $x=\sum p_ia_i$. Let $c$ be the center of the smallest Euclidean ball containing $T$. Notice that, as a function of $y$, the quantity $\sum p_i |y-a_i|^2$ is minimized at $y=\sum p_i a_i=x$, thus
$$
v_A^2(x)\le \sum p_i |x-a_i|^2 \le \sum p_i |c-a_i|^2 \le R^2(T),
$$
and we take infimum over all $T$ to finish the proof of the inequality.\\
2) Consider $x_0\in\mathrm{conv}(A)$. To prove the theorem we will first show that $r_A(x_0) \le v(A)$. After this we will show that $v_A(x_0) \le \rho(A)$, and finally we will prove that if $x_0$ lies in the relative interior of $\mathrm{conv}(A)$ and maximizes $v_A$ over $\mathrm{conv}(A)$, then $d_A(x_0) \ge v(A)$. \\
2.1) Let us prove that $r_A(x_0) \le v(A)$. Assume first that $x_0$ is an interior point of $\mathrm{conv} (A)$.
Let us define the compact convex set $Q \subset {\bf R}^{n+1}$ by
$$
Q=\mathrm{conv}\{ (a,|a|^2); a\in A\}.
$$
Next we define the function $f: \mathrm{conv}(A) \to {\bf R}^+$ by $f(x)=\min\{y: (x,y) \in Q\}$, note that
\begin{eqnarray*}
f(x)&=&\min\{y: (x,y)=\sum\lambda_i (a_i, |a_i|^2); \lambda_1,\dots,\lambda_m >0, \sum\lambda_i=1, \mbox{ and } a_1, \dots, a_m \in A\}\\
&=&\min\{\sum\lambda_i|a_i|^2: \lambda_1,\dots,\lambda_m >0 \mbox{ and } a_1, \dots, a_m \in A, \sum \lambda_i=1; x=\sum\lambda_i a_i\}\\
&=&v^2_A(x)+|x|^2.
\end{eqnarray*}
Note that $(x_0, f(x_0))$ is a boundary point of $Q$, hence there exists a support hyperplane $H$ of $Q$ at $(x_0, f(x_0))$. Since $x_0$ is an interior point of $\mathrm{conv} (A)$, the hyperplane $H$ cannot be vertical: a vertical support hyperplane would project onto a hyperplane of ${\bf R}^n$ passing through $x_0$, and since $x_0$ is interior to $\mathrm{conv}(A)$, the set $\mathrm{conv}(A)$, and hence $Q$, would have points strictly on both sides of $H$, contradicting the support property. Thus there exist $b \in {\bf R}^n$ and $\alpha \in {\bf R}$ such that $H=\{(x,t)\in{\bf R}^{n+1}: t=2 \langle b, x\rangle +\alpha\}$. Since $(x_0,f(x_0))\in H$ one has
\begin{equation}\label{eq:ba}
f(x_0) = 2 \langle b, x_0\rangle +\alpha
\end{equation}
and
$$
f(x) \ge 2 \langle b, x\rangle +\alpha, \mbox{ for all } x \in \mathrm{conv} (A).
$$
By definition of $f$, there exist $a_1,\dots, a_m\in A$ and
$\lambda_1, \dots, \lambda_m>0$, $\sum \lambda_i=1$ such that $x_0=\sum \lambda_i a_i$ and
$$
f(x_0)=\sum\lambda_i |a_i|^2 =\sum\lambda_i f(a_i).
$$
From the convexity of $Q$ we get that $(a_i, f(a_i))\in H\cap Q$, for any $i$; indeed we note that
$$
f(x_0)=2 \langle b, x_0\rangle +\alpha=\sum_i\lambda_i(2 \langle b, a_i\rangle +\alpha)\le \sum_i \lambda_if(a_i)=f(x_0).
$$
Thus $2 \langle b, a_i\rangle +\alpha=f(a_i)$ for all $i$.
Let $T=\{a_1, \dots, a_m\}$ and $W=\mathrm{conv} (T)$. Note that for any $x\in W \cap A$ we have
$$
|x|^2=f(x)=2\langle b, x \rangle +\alpha
$$
thus $\alpha+|b|^2 =|x-b|^2 \ge 0$. Define
\begin{eqnarray}\label{def:weg-R}
R^2 = \alpha+|b|^2.
\end{eqnarray}
Notice that for any $x\in \mathrm{conv}(A)$
\begin{equation}\label{eq:va}
v^2_A(x)=f(x)-|x|^2 \ge 2\langle b, x \rangle +\alpha - |x|^2=R^2-|b-x|^2,
\end{equation}
with equality if $x \in W$, in particular, $0\le v^2_A(x)=R^2-|b-x|^2 \le R^2$, for every $x\in W$. Consider the point $w \in W$ such that
$$
v_A^2(w)=\max\limits_{x \in W} v^2_A(x) = \max\limits_{x \in W}(R^2-|b-x|^2) =R^2-\inf_{x\in W}|b-x|^2.
$$
Then one has $|b-w|=\inf_{x\in W}|b-x|$, which means $w$ is the projection of the point $b$ on the convex set $W$. This implies that, for every $x\in W$, one has
$\langle x-b,w-b\rangle\ge|w-b|^2$, thus
$$
|x-w|^2=|x-b|^2-2\langle x-b,w-b\rangle+|w-b|^2\le |x-b|^2-|w-b|^2\le R^2-|w-b|^2=v_A^2(w).
$$
We get $T\subset W \subset w+v_A(w)B_2^n$ and
$$
R(T) \le v_A(w) = \max\limits_{x \in W} v_A(x).
$$
Using that $x_0\in W =\mathrm{conv} (T)$ and $T\subset A$, we conclude from the definition of $r_A$ that
$$r_A(x_0)\le R(T)\le\max\limits_{x \in W} v_A(x)\le v(A).$$
If $x_0$ is a boundary point of $\mathrm{conv} (A)$, then using the boundary structure of the compact convex set $\mathrm{conv}(A)$ (see \cite[Theorem 2.1.2, p. 75 and Remark 3, p. 78]{Sch14:book}),
$x_0$ belongs to the relative interior of a face $F$ of $\mathrm{conv}( A)$. By the definition of the notion of
face (see \cite[p. 75]{Sch14:book}) we get that if $x_0=\sum \lambda_i a_i$ for $a_i \in A$ and $\lambda_i>0$ with $\sum\lambda_i=1$, then $a_i \in A \cap F$. Thus
\begin{equation}\label{eq:face}
v_A(x_0)=v_{A \cap F}(x_0), r_A(x_0)=r_{A \cap F}(x_0) \mbox{ and } \rho_A(x_0)=\rho_{A \cap F}(x_0).
\end{equation}
If $\mbox{dim}(F) =0$ then $x_0 \in A$ and thus all proposed inequalities are trivial, otherwise we can reproduce the above argument for $A \cap F$ instead of $A$.\\
2.2) Now we will prove that $v_A(x_0) \le \rho(A)$. Consider $b, \alpha$ and $R$ defined in \eqref{eq:ba} and \eqref{def:weg-R}.
Using that $v_A(a)=0$ for every $a\in A$, together with (\ref{eq:va}), we get $|b-a|\ge R$ for all $a\in A$. We will need to consider two cases:
\begin{enumerate}
\item If $b\in \mathrm{conv}(A)$, then from the above $d_A(b)=\inf\limits_{a\in A} |b-a| \ge R$ thus
\begin{equation}\label{eq:vd}
v_A(x_0) \le R\le d_A(b) \le \rho_A(b)\le \rho(A).
\end{equation}
\item If $b\not\in \mathrm{conv}(A)$, then there exists $y \in \partial(\mathrm{conv}(A)) \cap [w, b]$, thus $|b-y| \le |b-w|$. So, from (\ref{eq:va}) we have
$$
v_A^2(y) \ge R^2 -|b-y|^2 \ge R^2-|b-w|^2=v^2_A(w) \ge v^2_A(x_0),
$$
so it is enough to prove $v_A(y) \le \rho(A)$, where $y \in \partial(\mathrm{conv}(A))$. Let $F$ be the face of $\mathrm{conv}(A)$ containing $y$ in its relative interior. We can then use the approach from (\ref{eq:face}) and reproduce the above argument for $A\cap F$ instead of $A$, at the end of which we again encounter the same two cases. In the first case we get $v_A(y)=v_{A\cap F}(y) \le \rho(A\cap F)\le \rho(A)$. In the second case, there exists $z \in \partial(\mathrm{conv}(A\cap F))$ such that $v_{A\cap F}(z) \ge v_{A\cap F}(y)$, and we again reduce the dimension of the set under consideration. Repeating this argument, we arrive at dimension $1$, where the proof can be completed by verifying that $b \in \mathrm{conv}(A)$ (indeed, in this case $W=[a_1, a_2]$ with $a_1, a_2 \in A$ and $|a_1 -b|=|a_2-b|$, so $b=(a_1+a_2)/2 \in \mathrm{conv}(A)$); thus $v_A(x_0) \le \rho(A)$.
\end{enumerate}
2.3) Finally, assume $v_A(x_0)=v(A)$, where $x_0$ is in the relative interior of $\mathrm{conv}(A)$. We may assume that $\mathrm{conv}(A)$ is $n$-dimensional (otherwise we would work in the affine subspace generated by $A$). Then using (\ref{eq:va}) we get that $v_A^2(x_0)=R^2- |b-x_0|^2$ and $v_A^2(a) \ge R^2- |b-a|^2,$ for all $a\in \mathrm{conv}(A)$, thus
$$
0 \le v_A^2(x_0)-v_A^2(a) \le |b-a|^2 - |b-x_0|^2,
$$
for all $a \in \mathrm{conv}(A)$. So $|b-x_0| \le |b-a|$ for all $a \in \mathrm{conv}(A)$, this means that the minimal distance between $b$ and $a \in \mathrm{conv}(A)$ is reached at $a=x_0$. Notice that if $b \not\in \mathrm{conv}(A)$ then $x_0$ must belong to $\partial(\mathrm{conv}(A))$, which contradicts our hypothesis. Thus $b \in \mathrm{conv}(A)$ and $x_0=b$, and we can use (\ref{eq:vd}) to conclude that
$v(A)=v_A(x_0) \le d_A(x_0) \le d(A)$.
\end{proof}
\begin{rem}\label{delaunay}
The method used in the proof of Theorem \ref{thm:weg} is reminiscent of the classical approach to Voronoi diagrams and Delaunay triangulation
(see, e.g., \cite[section 5.7]{Mat02:book}). Moreover the point $b$ constructed above is exactly the center of the ball
circumscribed to the simplex of the Delaunay triangulation to which the point $x_0$ belongs.
\end{rem}
Next we present a different proof of $r(A)=v(A)$ from Theorem \ref{thm:weg}, which essentially uses Remark \ref{delaunay}
and is more geometric. The proof will be deduced from the following proposition that better describes the geometric properties of the function $v_A$.
\begin{prop}\label{prop:vgeom} Let $A$ be a compact set in ${\bf R}^n$ and $x \in \mathrm{conv}(A)$.
\begin{enumerate}
\item Then there exists an integer $1\le m\le n+1$, $m$ affinely independent points $a_1,\dots, a_m \in A$ and $m$ real numbers $p_1, \dots, p_m>0$ such that $\sum_{i=1}^{m} p_i=1$, $x=\sum_{i=1}^{m} p_ia_i$ and
\begin{eqnarray*}
v^2_A (x)= \sum_{i=1}^{m} p_i |a_i|^2-|x|^2.
\end{eqnarray*}
\item Let $S=\{a_1, \dots, a_m\}$. Then there exists $c \in {\rm aff}S$ and $R_c>0$, such that $|a_i-c|=R_c$, for all $1\le i\le m$ and
\begin{eqnarray*}
v^2_A (x)=R_c^2-|x-c|^2.
\end{eqnarray*}
Moreover $|a-c|\ge R_c$, for all $a\in A\cap {\rm aff}S$.
\item For every $y\in\mathrm{conv}(S)$ there exists $q_1,\dots, q_m\ge0$ such that $\sum_{i=1}^{m} q_i=1$, $y=\sum_{i=1}^{m} q_ia_i$ and
\begin{eqnarray*}
v^2_A (y)= \sum_{i=1}^{m} q_i |a_i|^2-|y|^2=R_c^2-|y-c|^2.
\end{eqnarray*}
\end{enumerate}
\end{prop}
\begin{proof} {\it 1.} Recall that
$$v^2_A (x)= \inf\left\{\sum_{i=1}^{m} \lambda_i |a_i|^2-|x|^2: m\in\mathbb{N}, x=\sum_{i=1}^{m}\lambda_i a_i; \lambda_i >0; \sum_{i=1}^{m} \lambda_i=1, a_i \in A \right\}.$$
Following the standard proof of Carath\'eodory's theorem, we will show that for any decomposition of $x$ in the form $x=\sum\lambda_i a_i$, with $a_1, \dots, a_m$ being affinely dependent, the quantity $\sum \lambda_i |a_i|^2$ is not minimal. Thus the infimum in the definition of $v^2_A (x)$ may be reduced to affinely independent decompositions of $x$, thus with $m\le n+1$ points. Hence the infimum is taken on a compact set and is reached.
So let $x=\sum\lambda_i a_i$ and assume that the points $a_1, \dots, a_m$ are affinely dependent. Then there exists a sequence of real numbers $\{\mu_i\}_{i=1}^m$, not all zero, such that $\sum \mu_i a_i=0$ and $\sum \mu_i=0$. We note that (by multiplying, if needed, all $\mu_i$ by $-1$) we may also assume that
\begin{equation}\label{eq:poss}
\sum \mu_i|a_i|^2\ge0.
\end{equation}
Since $\sum \mu_i=0$ and the $\mu_i$ are not all zero, there is some $i$ such that $\mu_i>0$. Consider $k \in \{1,\dots, m\}$ such that
$$
\frac{\lambda_k}{\mu_k}=\min\{\frac{\lambda_i}{\mu_i}:\,\,\,\, \mu_i>0\}.
$$
Next, using that $a_k=-\sum\limits_{i\not=k} \frac{\mu_i}{\mu_k} a_i$ we get
$$
x=\sum\limits_{i\not=k}\lambda_i a_i - \lambda_k \sum\limits_{i\not=k} \frac{\mu_i}{\mu_k} a_i=\sum\limits_{i\not=k}\left(\lambda_i -\lambda_k \frac{\mu_i}{\mu_k}\right) a_i,
$$
where $\left(\lambda_i -\lambda_k \frac{\mu_i}{\mu_k}\right) \ge 0$ for all $i\neq k$ and $\sum_{i\neq k} \left(\lambda_i -\lambda_k \frac{\mu_i}{\mu_k}\right) =1$, so we have reduced the number of elements in the sequence $\{a_i\}$. Thus, the only thing left is to show that
$$
\sum_{i\not=k}\left(\lambda_i -\lambda_k \frac{\mu_i}{\mu_k}\right) |a_i|^2 \le \sum_{i=1}^m \lambda_i |a_i|^2.
$$
Using that $\mu_k>0$, the above is equivalent to $\sum \mu_i|a_i|^2\ge0$, which is exactly (\ref{eq:poss}). Therefore, we may assume that the infimum in the definition of $v_A^2(x)$ is attained at affinely independent points and is actually a minimum. Hence, there exist an integer $1\le m\le n+1$, $m$ affinely independent points $a_1,\dots, a_m \in A$ and $m$ real numbers $p_1, \dots, p_m>0$ such that $\sum_{i=1}^{m} p_i=1$, $x=\sum_{i=1}^{m} p_ia_i$ and
$v^2_A (x)= \sum_{i=1}^{m} p_i |a_i|^2-|x|^2.$\\
{\it 2.} One has $x=\sum_{i=1}^{m} p_ia_i$, with $p_i>0$ and $\sum_{i=1}^{m} p_i=1$, thus $x$ is in the relative interior of $\mathrm{conv}(S)$. Since $a_1,\dots, a_m$ are affinely independent, $\mathrm{conv}(S)$ is an $(m-1)$-dimensional simplex, and there exist $c \in {\rm aff}S$ and $R_c>0$ such that $|a_i-c|=R_c$ for all $1\le i\le m$ (the center and radius of the sphere in ${\rm aff}S$ circumscribed about the simplex). Thus $|a_i|^2=R_c^2+2\langle c,a_i\rangle-|c|^2$, for all $1\le i\le m$. Hence
\begin{eqnarray*}
v^2_A (x)&=& \sum_{i=1}^{m} p_i |a_i|^2-|x|^2=\sum_{i=1}^{m} p_i (R_c^2-|c|^2+2\langle c,a_i\rangle)-|x|^2\\&=&R_c^2-|c|^2+2\langle c,x\rangle-|x|^2=R_c^2-|c-x|^2.
\end{eqnarray*}
Assume now that there is $a\in A\cap {\rm aff}S$ such that $|a-c|< R_c$. Notice that we can select $k\in \{1, \dots, m\}$ such that $x \in \mathrm{conv} \{ a, \{a_i\}_{i\not=k}\}$. Indeed, consider $a'=a+e_{n+1}\in{\bf R}^{n+1}$ and note that
the orthogonal projection of $\mathrm{conv}\{S, a'\}$ on ${\rm aff}S$ is equal to $\mathrm{conv}\{S, a\}$
and thus
$$
\mathrm{conv}\{S\} \subseteq \mathrm{conv}\{S, a\}= \bigcup_{k=1}^m \mathrm{conv}\{a, \{a_i\}_{i\not=k}\}.
$$
Thus, there exist $\lambda_1,\dots, \lambda_m \ge0$, with $\sum_{i=1}^m \lambda_i=1$, such that $x = \sum_{i=1}^m \lambda_i \tilde{a}_i$, where $\tilde{a}_i=a_i$ for $i\neq k$ and $\tilde{a}_k=a$. Moreover, since $x$ is in the relative interior of $\mathrm{conv}(S)$ one has $\lambda_k>0$. Then
\begin{eqnarray*}
\sum_{i=1}^{m} \lambda_i |\tilde{a}_i|^2-|x|^2 &=& \sum\limits_{i=1}^{m} \lambda_i |\tilde{a}_i-c+c|^2-|x|^2= \sum\limits_{i=1}^{m} \lambda_i |\tilde{a}_i-c|^2+2 \langle x-c, c\rangle +|c|^2-|x|^2\\
&=& \sum\limits_{i=1}^{m} \lambda_i |\tilde{a}_i-c|^2-|x-c|^2 < R_c^2 -|x-c|^2=v_A^2(x),
\end{eqnarray*}
which contradicts the minimality of the sequence $a_1, \dots, a_m$.\\
{\it 3.} Let $y \in \mathrm{conv}\{S\}$, then there exists $q_i\ge0, \sum q_i=1$ such that $y=\sum_{i=1}^{m} q_i a_i$. Consider another sequence $\{b_i\} \subset A$, with $y=\sum \lambda_i b_i$, and $\lambda_i>0, \sum \lambda_i=1$.
Using the fact that $x, y \in {\rm aff} S$ we get,
as in {\it 2.} that $x=\mu_k y +\sum_{i\not=k} \mu_i a_i$, for some $\mu_i\ge0, \sum \mu_i=1$. Note that $\mu_k \not=0$, because $x$ is in the relative interior of $\mathrm{conv} S$. Thus
$$
x=\mu_k \sum \lambda_i b_i +\sum_{i\not=k} \mu_i a_i.
$$
The minimality of the sequence $S$ with respect to $v^2_A(x)$ implies that for any other convex combination $x=\sum \tilde{p}_i \tilde{a}_i$, $\{\tilde{a}_i\} \subset A$, we get
$
\sum \tilde{p}_i|\tilde{a}_i - c|^2 \ge R_c^2.
$
Thus
$$
\sum \mu_k \lambda_i |b_i-c|^2 +\sum\limits_{i\not=k} \mu_i |a_i-c|^2\ge R_c^2.
$$
Using that $|a_i-c|=R_c$ and the fact that $\sum\limits_{i\not=k} \mu_i =1- \mu_k$ we get
$$
\sum \lambda_i |b_i-c|^2 \ge R_c^2,
$$
which is exactly what we need to finish the proof. Indeed, again
$$
\sum\lambda_i|b_i|^2-|y|^2 =\sum \lambda_i|b_i-c|^2 - |c-y|^2 \ge R_c^2 -|c-y|^2=\sum q_i|a_i|^2-|y|^2,
$$
and thus $v^2_A (y)= \sum q_i |a_i|^2-|y|^2=R_c^2-|y-c|^2$ and $S$ is a minimizing sequence for $v^2_A(y)$.
\end{proof}
Now we are ready to use the above proposition to show that $v(A) \ge r(A)$.
For every $x \in \mathrm{conv}(A)$, let $S=\{a_1,\dots, a_m\}$ be the vertex set of the simplex obtained from Proposition \ref{prop:vgeom}, and let $c$ and $R_c$ denote the center and the radius of the circumscribed ball of $S$. Then
$$
\sup\limits_{y\in \mathrm{conv}(S)} v^2_A(y)=R^2_c - \inf\limits_{y\in \mathrm{conv}(S)} |y-c|^2 =R^2_c-|c- w|^2,
$$
where $w=P_{\mathrm{conv} S}(c)$ denotes the projection of $c$ onto the convex set $\mathrm{conv} S$, i.e. the nearest point to $c$ from $\mathrm{conv}(S)$.
For every $i$, one has $\langle a_i-w,c-w\rangle\le0$ thus $\langle a_i-c,c-w\rangle\le-|c-w|^2$, hence
$$
|a_i-w|^2=|a_i-c+c-w|^2=|a_i-c|^2+2\langle a_i-c,c-w\rangle+|c-w|^2\le R_c^2-|c-w|^2.
$$
Thus $S$ is contained in the ball centered at $w$ of radius $r_0=\sqrt{R_c^2-|c-w|^2}=\sup_{y\in \mathrm{conv}(S)} v_A(y)\le v(A)$.
Hence $r_A(x)\le r_0\le v(A)$, which finishes the proof of $r(A) \le v(A)$.
\qed
\par\vspace{.1in}
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|l|l|l|l|}
\hline
{$\Rightarrow$} & {$d$} & $r$ & $c$ & $\Delta$ \\
\hline
$d$ & = & N (Ex. \ref{ex:sch}, \ref{ex:delta-r}) & N (Ex. \ref{ex:sch}, \ref{ex:c}) & N (Ex. \ref{ex:dis}) \\
\hline
$r$ & Y (Th. \ref{thm:weg}) & = & N (Ex. \ref{ex:c}, \ref{ex:rc}) & N (Ex. \ref{ex:dis}) \\
\hline
$c$ & N (Ex. \ref{ex:c}, \ref{ex:delta-cd}) & N (Ex. \ref{ex:c}) & = & N (Ex. \ref{ex:dis}, \ref{ex:c})\\
\hline
$\Delta$ & N (Ex. \ref{ex:delta-d}, \ref{ex:delta-cd}) & N (Ex. \ref{ex:sch}, \ref{ex:delta-r}) & N (Ex. \ref{ex:c}, \ref{ex:rc}) & = \\
\hline
\end{tabular}
\end{center}
\caption{When does convergence to 0 for one measure of non-convexity unconditionally imply the same for another?}\label{tab:1}
\end{table}
The above relationships (summarized in Table~\ref{tab:1}) are the only unconditional relationships that exist
between these notions in general dimension. To see this, we list below some examples that show why no other
relationships can hold in general.
\begin{ex}\label{ex:c}
By Lemma~\ref{lem:scaling}, we can scale a non-convex set to get examples
where $c$ is fixed but $d,r$ and $\Delta$ converge to 0 (for example, take $A_k=\{0, \frac{1}{k}\}$),
or examples where $c$ goes to 0 but $d,r$ are fixed and $\Delta$ diverges (for example, take $A_k=\{0, 1, \dots, k\}$).
\end{ex}
\begin{ex}\label{ex:delta-r}
An example where $\Delta(A_k) \ra 0$, $d(A_k)\ra 0$ but $r(A_k)$ is bounded away
from 0 is given by a right triangle from which a piece is shaved off leaving a protruding edge, see Figure \ref{fig:dcr}.
\end{ex}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.4]{triangle1}
\end{center}
\caption{$\Delta(A_k) \ra 0$ but $r(A_k)>\sqrt{2}/2$ (Example \ref{ex:delta-r}).}\label{fig:dcr}
\end{figure}
\begin{ex}\label{ex:delta-d}
An example where $\Delta(A_k)\ra 0$ but both $c(A_k)$ and $d(A_k)$ are bounded away from 0
is given by taking a 3-point set with 2 of the points getting arbitrarily closer but staying away from the third, see Figure \ref{fig:DcD}.
\end{ex}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.4]{triangle2}
\end{center}
\caption{$\Delta(A_k)\ra 0$ but $c(A_k) \ge 1$ and $d(A_k)\ge 1/2$ (Example \ref{ex:delta-d}).}\label{fig:DcD}
\end{figure}
\begin{ex}\label{ex:delta-cd}
An example where $\Delta(A_k) \ra 0$ and $c(A_k)\ra 0$ but $d(A_k)>1/2$ can be found in Figure \ref{fig:dcr-same}.
\end{ex}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.8]{VS}
\end{center}
\caption{$\mathrm{Vol}_2(A_k) \ge 1$, $\Delta(A_k) \ra 0$ and $c(A_k)\ra 0$ but $d(A_k)>1/2$ (Example \ref{ex:delta-cd}).}\label{fig:dcr-same}
\end{figure}
\subsection{Conditional relationships}
\label{sec:cond}
There are some more relationships between different notions of non-convexity that emerge if we impose some natural conditions on the sequence of sets (such as ruling out escape to infinity, or vanishing to almost nothing).
A first observation of this type is that Hausdorff distance to convexity is dominated by Schneider's index of non-convexity if $A$ is contained in a ball of known radius.
\begin{lem}\label{lem:c-d}
For any compact set $A \subset {\bf R}^n$,
\begin{eqnarray}\label{eq:d-c}
d(A)\leq R(A) c(A) .
\end{eqnarray}
\end{lem}
\begin{proof}
By translation invariance, we may assume that $A\subset R(A) B_2^n$. Then $0\in \mathrm{conv}(A)$, and it follows that
\begin{eqnarray*}
\mathrm{conv}(A) \subset \mathrm{conv}(A) + c(A)\mathrm{conv}(A) = A + c(A)\mathrm{conv}(A) \subset A + c(A)R(A)B_2^n.
\end{eqnarray*}
Hence $d(A)\leq R(A) c(A)$.
\end{proof}
This bound is useful only if $c(A)$ is smaller than $1$, because we already know that $d(A)\le r(A)\le R(A)$.
In dimension 1, all of the non-convexity measures are tightly connected.
\begin{lem}\label{lem:meas-1d}
Let $A$ be a compact set in ${\bf R}$. Then
\begin{eqnarray}\label{eq:c-d-1d}
r(A) = d(A) = R(A)c(A)\le \frac{\Delta(A)}{2}.
\end{eqnarray}
\end{lem}
\begin{proof}
We already know that $d(A)\le r(A)$. Let us prove that $r(A)\le d(A)$. From the definition of $r(A)$ and $d(A)$, we have
$$r(A)=\sup_{x\in\mathrm{conv}(A)}\inf\left\{\frac{\beta-\alpha}{2}; \alpha, \beta\in A, \alpha\le x\le \beta\right\},
\quad d(A)=\sup_{y\in\mathrm{conv}(A)}\inf_{\alpha\in A}|y-\alpha|.$$
Thus we only need to show that for every $x\in\mathrm{conv}(A)$, there exists $y\in\mathrm{conv}(A)$ such that
$$\inf\left\{\frac{\beta-\alpha}{2}; \alpha, \beta\in A, \alpha\le x\le \beta\right\}\le\inf_{\alpha\in A}|y-\alpha|.$$
By compactness, there exist $\alpha, \beta\in A$ with $\alpha\le x\le \beta$ achieving the infimum on the left-hand side. Then we only need to choose $y=\frac{\alpha+\beta}{2}$ on the right-hand side to conclude that $r(A)\le d(A)$. In addition, we get $(\alpha,\beta)\subset \mathrm{conv}(A)\setminus A$, thus $2r(A)=\beta-\alpha\le\Delta(A)$.\\
Now we prove that $d(A)=R(A)c(A)$. From Lemma \ref{lem:c-d}, we have $d(A) \leq R(A) c(A)$. Let us prove that $R(A) c(A)\le d(A)$. By an affine transform, we may reduce to the case where $\mathrm{conv}(A)=[-1,1]$, thus $-1=\min(A)\in A$ and $1=\max(A)\in A$. Notice that $R(A)=1$ and denote $d:=d(A)$. By the definition of $d(A)$, one has $[-1,1]=\mathrm{conv}(A)\subset A+[-d,d]$. Thus using that $-1\in A$ and $1\in A$, we get
$$
A + d(A) \mathrm{conv}(A) = A+[-d,d] \supset (-1+[-d,d]) \cup [-1,1]\cup (1+[-d,d])=[-1-d, 1+d],
$$
we conclude that $A + d(A) \mathrm{conv}(A)\supset (1+d(A))\mathrm{conv}(A)$ and thus $R(A) c(A)=c(A)\le d(A)$.
\end{proof}
Notice that the inequality on $\Delta$ of Lemma \ref{lem:meas-1d} cannot be reversed as shown by Example \ref{ex:dis}.
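In dimension one all four quantities can be computed by brute force, so the identities in Lemma~\ref{lem:meas-1d} lend themselves to a quick numerical sanity check. The sketch below (our illustration, not part of the proof) uses the set $A=\{0\}\cup[1,2]$, for which $\mathrm{conv}(A)=[0,2]$, $R(A)=1$, $\Delta(A)=1$, and $r(A)=d(A)=\frac12$.

```python
import numpy as np

# Brute-force check of r(A) = d(A) <= Delta(A)/2 for A = {0} U [1,2],
# an illustrative set (not from the paper); conv(A) = [0,2].
xs = np.linspace(0.0, 2.0, 20001)                     # grid on conv(A)
dist_to_A = np.minimum(np.abs(xs),                    # distance to the point {0}
                       np.clip(np.maximum(1.0 - xs, xs - 2.0), 0.0, None))
d = dist_to_A.max()                                   # Hausdorff distance from convexity

def half_gap(x):
    # inf{(beta - alpha)/2 : alpha, beta in A, alpha <= x <= beta}
    if x == 0.0 or 1.0 <= x <= 2.0:
        return 0.0                                    # x is already in A
    return 0.5                                        # alpha = 0, beta = 1 is optimal here

r = max(half_gap(x) for x in xs)
Delta = 2.0 - 1.0                                     # Vol(conv(A)) - Vol(A)
assert abs(d - r) < 1e-9 and 2.0 * r <= Delta + 1e-9  # r = d = 1/2 = Delta/2
```

By Lemma~\ref{lem:meas-1d} the common value $d(A)=r(A)=\frac12$ also equals $R(A)c(A)$, so $c(A)=\frac12$ for this set.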
The next lemma provides a connection between $r$ and $c$ in ${\bf R}^n$.
\begin{lem}\label{lem:c-r}
For any compact set $A \subset {\bf R}^n$,
\begin{eqnarray}\label{eq:r-c}
r(A) \leq 2 \frac{c(A)}{1+c(A)} R(A).
\end{eqnarray}
\end{lem}
\begin{proof}
Consider the point $x^* \in \mathrm{conv}(A)$ that realizes the maximum in the definition of $\rho(A)$ (it exists since $\mathrm{conv}(A)$ is closed). Then, for every $a \in A_{x^*}$, one has $\rho(A) \leq |x^*-a|$. By definition,
$$ c(A) = \inf \{\lambda \geq 0 : \mathrm{conv}(A) = \frac{A + \lambda \mathrm{conv}(A)}{1+\lambda} \}. $$
Hence,
$$ x^* = \frac{1}{1+c(A)}a + \frac{c(A)}{1+c(A)}b, $$
for some $a \in A$ and $b \in \mathrm{conv}(A)$. Since $\frac{1}{1+c(A)} + \frac{c(A)}{1+c(A)} = 1$, one deduces that $a \in A_{x^*}$. Thus,
$$ \rho(A) \leq |x^*-a|. $$
But,
$$ x^* - a = \frac{1}{1+c(A)}a + \frac{c(A)}{1+c(A)}b - a = \frac{c(A)}{1+c(A)} (b-a). $$
It follows that
$$ \rho(A) \leq |x^*-a| = \frac{c(A)}{1+c(A)} |b-a| \leq \frac{c(A)}{1+c(A)} \mathrm{diam}(A) \leq 2 \frac{c(A)}{1+c(A)} R(A). $$
As shown by Wegmann (cf. Theorem \ref{thm:weg}), if $A$ is closed then $\rho(A) = r(A)$. We conclude that
$$ r(A) \leq 2 \frac{c(A)}{1+c(A)} R(A). $$
\end{proof}
Our next result says that the only reason for which we can find examples
where the volume deficit goes to 0, but the Hausdorff distance from convexity does not,
is because we allow the sets either to shrink to something of zero volume, or run off to infinity.
\begin{thm}\label{thm:delta-d}
Let $A$ be a compact set in ${\bf R}^n$ with nonempty interior. Then
\begin{eqnarray}\label{eq:counter-size}
d(A)\le \left(\frac{n}{\mathrm{Vol}_{n-1}(B_2^{n-1})}\right)^\frac{1}{n}\left(\frac{2R(A)}{\mathrm{inr}(\mathrm{conv}(A))}\right)^\frac{n-1}{n}\Delta(A)^\frac{1}{n}.
\end{eqnarray}
\end{thm}
\begin{proof}
From the definition of $d(A)$ there exists $x\in\mathrm{conv}(A)$ such that $\mathrm{Vol}_n((x+d(A)B_2^n)\cap A)=0$. Thus
$\Delta(A)\ge \mathrm{Vol}_n(\mathrm{conv}(A) \cap (x + d(A) B_2^n))$. Let us denote $r=\mathrm{inr}(\mathrm{conv}(A))$. From the definition of $\mathrm{inr}(\mathrm{conv}(A))$, there exists $y\in\mathrm{conv}(A)$ such that $y+rB_2^n\subset \mathrm{conv}(A)$. Hence
$$\Delta(A)\ge \mathrm{Vol}_n(\mathrm{conv}(x, y+rB_2^n) \cap (x + d(A) B_2^n))\ge \frac{d(A)}{n}\mathrm{Vol}_{n-1}(B_2^{n-1})\left(\frac{rd(A)}{2R(A)}\right)^{n-1}. $$
Let $\{z\}=[x,y]\cap (x + d(A) S^{n-1})$ be the intersection point of the sphere centered at $x$ and the segment $[x,y]$ and let $h$ be the radius of the $(n-1)$-dimensional sphere $S_h=\partial(\mathrm{conv}(x, y+rB_2^n)) \cap (x + d(A) S^{n-1}))$. Then $h=\frac{d(A)r}{|x-y|}$ and $\mathrm{conv}(x, y+rB_2^n) \cap (x + d(A) B_2^n)\supset\mathrm{conv}(x,S_h,z)$. Thus
$$\Delta(A)\ge \mathrm{Vol}_n(\mathrm{conv}(x,S_h,z))=\frac{d(A)}{n}\mathrm{Vol}_{n-1}(B_2^{n-1})h^{n-1}\ge\frac{d(A)^n}{n}\mathrm{Vol}_{n-1}(B_2^{n-1})\left(\frac{r}{|x-y|}\right)^{n-1}.$$
Since $|x-y|\le\mathrm{diam}(\mathrm{conv}(A))\le 2R(A)$, this rearranges to \eqref{eq:counter-size}.
\end{proof}
Observe that the first term on the right side of inequality \eqref{eq:counter-size} is just a dimension-dependent constant,
while the second term depends only on the ratio of the radius of the smallest Euclidean ball containing $A$
to that of the largest Euclidean ball contained in $\mathrm{conv}(A)$.
The next lemma enables to compare the inradius, the outer radius and the volume of convex sets. Such estimates were studied in \cite{BHS03}, \cite {San88} where, in some cases, optimal inequalities were proved in dimension 2 and 3.
\begin{lem}
Let $K$ be a convex body in ${\bf R}^n$. Then
$$
\mathrm{Vol}_n(K)\le(n+1)\mathrm{Vol}_{n-1}(B_2^{n-1})\mathrm{inr}(K)(2R(K))^{n-1}.
$$
\end{lem}
\begin{proof}
From the definition of $\mathrm{inr}(K)$, there exists $y\in K$ such that $y+\mathrm{inr}(K)B_2^n\subset K$. Without loss of generality, we may assume that $y=0$ and that $\mathrm{inr}(K)=1$, which means that $B_2^n$ is the Euclidean ball of maximal radius inside $K$. This implies that $0$ must be in the convex hull of the contact points of $S^{n-1}$ and $\partial(K)$, because if it were not, there would exist a hyperplane separating $0$ from these contact points, and one could construct a larger Euclidean ball inside $K$.
Hence, by Carath\'eodory's theorem,
there exist $1\le k\le n$ and $k+1$ contact points $a_1,\dots,a_{k+1}$ such that $0\in\mathrm{conv}(a_1,\dots,a_{k+1})$ and $K\subset S=\{x: \langle x, a_i\rangle\le 1, \forall i\in\{1,\dots, k+1\}\}$. Since $0\in\mathrm{conv}(a_1,\dots,a_{k+1})$, there exist $\la_1, \dots, \la_{k+1}\ge0$ such that $\sum_{i=1}^{k+1}\la_ia_i=0$. Thus for every $x\in{\bf R}^n$, $\sum_{i=1}^{k+1}\la_i\langle x,a_i\rangle=0$, hence there exists $i$ such that $\langle x,a_i\rangle\ge0$. Therefore
$$
S\subset\bigcup_{i=1}^{k+1}[0,a_i]\times\{x : \langle x, a_i\rangle=0\}.
$$
Moreover $K\subset\mathrm{diam}(K)B_2^n$ thus
$$
K\subset S\cap \mathrm{diam}(K)B_2^n\subset\bigcup_{i=1}^{k+1}[0,a_i]\times\{x\in\mathrm{diam}(K)B_2^n : \langle x, a_i\rangle=0\}.
$$
Passing to volumes and using that $a_i\in S^{n-1}$, we get
$$\mathrm{Vol}_n(K)\le (k+1)\mathrm{Vol}_{n-1}(B_2^{n-1})(\mathrm{diam}(K))^{n-1}\le(n+1)\mathrm{Vol}_{n-1}(B_2^{n-1})(2R(K))^{n-1}.
$$
\end{proof}
An immediate corollary of the above theorem and lemma is the following.
\begin{coro}\label{co:d-del}
Let $A$ be a compact set in ${\bf R}^n$. Then
$$
d(A)\le c_n\frac{R(A)^{n-1}}{\mathrm{Vol}_n(\mathrm{conv}(A))^\frac{n-1}{n}}\Delta(A)^\frac{1}{n},
$$
where $c_n$ is a constant depending only on $n$. Thus, for any sequence of compact sets $(A_k)$ in ${\bf R}^n$ such that
$\sup_k R(A_k)<\infty$ and $\inf_k \mathrm{Vol}_n(A_k) >0$,
the convergence $\Delta(A_k)\ra 0$ implies that $d(A_k)\ra 0$.
\end{coro}
\begin{table}[!h]
\begin{center}
\begin{tabular}{|c|l|l|l|l|}
\hline
{$\Rightarrow$} & {$d$} & $r$ & $c$ & $\Delta$ \\
\hline
$d$ & = & N (Ex. \ref{ex:sch}, \ref{ex:ak}) & N (Ex. \ref{ex:sch}, \ref{ex:ak}) & N (Ex. \ref{ex:hw}) \\
\hline
$r$ & Y & = & N (Ex. \ref{ex:rc}) & N (Ex. \ref{ex:hw}) \\
\hline
$c$ & Y (Lem. \ref{lem:c-d}) & Y (Lem. \ref{lem:c-r})& = & N (Ex. \ref{ex:hw})\\
\hline
$\Delta$ & Y (Cor. \ref{co:d-del})& N (Ex. \ref{ex:sch}, \ref{ex:ak}) & N (Ex. \ref{ex:sch}, \ref{ex:ak})& = \\
\hline
\end{tabular}
\end{center}
\caption{When does convergence to 0 for one measure of non-convexity imply the same for another
when we assume the sequence lives in a big ball and has positive limiting volume?}
\end{table}
From the preceding discussion, it is clear that $d(A_k)\ra 0$ is a much weaker statement than
either $c(A_k)\ra 0$ or $r(A_k)\ra 0$.
\par\vspace{.1in}
\begin{ex}\label{ex:hw} Consider a unit square together with a set of points in the neighboring unit square, where the set of points becomes denser as $k \to \infty$ (see Figure \ref{fig:dD}). This example shows that convergence in the Hausdorff sense
is weaker than convergence in the volume deficit sense, even when the volume
of the sequence of sets is bounded away from 0.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.6]{ps1} \hskip 2cm \includegraphics[scale=0.6]{ps2}
\end{center}
\caption{$d(A_k) \ra 0$ and $\mathrm{Vol}_2(A_k)> c$ but $\Delta(A_k)>c$ (Example \ref{ex:hw}).}\label{fig:dD}
\end{figure}
\end{ex}
The following example shows that convergence in $\Delta$ implies convergence in neither $r$ nor $c$:
\begin{ex} \label{ex:ak}
Consider the set $A_k = \{ (1-\frac{1}{k}, 0)\} \cup ([1,2]\times[-1,1])$ in the plane.
\end{ex}
Note that Example \ref{ex:ak} also shows that convergence in $d$ implies convergence in neither $r$ nor $c$.
The following example shows that convergence in $r$ does not imply convergence in $c$:
\begin{ex} \label{ex:rc}
Consider the set $A_k=B_2^2\cup\{(1+1/k, 1/k) ; (1+1/k, -1/k)\}$ in the plane, the union of the Euclidean ball and
two points close to it and close to each other (see Figure \ref{fig:cr}).
Then we have $c(A_k)=1$, by applying the same argument as in Example \ref{ex:sch} to the point $(1+1/k,0)$.
On the other hand, because of the roundness of the ball, one has $r(A_k)=\frac{\sqrt{k+1}}{\sqrt{2}k} \to 0$ as $k \to \infty$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.6]{Mat_circle}
\end{center}
\caption{$c(A_k)=1$ but $r(A_k) \to 0$ as $k \to \infty$ (Example \ref{ex:rc}). }\label{fig:cr}
\end{figure}
\end{ex}
\section{The behavior of volume deficit}
\label{sec:Delta}
In this section we study the volume deficit. Recall its definition: for $A$ compact in ${\bf R}^n$,
\begin{eqnarray*}
\Delta(A)=\mathrm{Vol}_n(\mathrm{conv}(A)\setminus A) = \mathrm{Vol}_n(\mathrm{conv}(A))- \mathrm{Vol}_n(A).
\end{eqnarray*}
\subsection{Monotonicity of volume deficit in dimension one and for Cartesian products}
\label{sec:1d}
In this section, we observe that Conjecture \ref{strongconj} holds in dimension one and also for
products of one-dimensional compact sets. In fact, more generally, we prove that Conjecture \ref{strongconj} passes to Cartesian products.
\begin{thm}\label{thm:1d}
Conjecture \ref{strongconj} holds in dimension one. In other words, if $k\ge2$ is an integer and $A_1, \dots, A_k$ are compact sets in ${\bf R}$, then
\begin{eqnarray}\label{eq:thm-fsa_0}
\mathrm{Vol}_1\left(\sum_{i=1}^kA_i\right) \ge \frac{1}{k-1}\sum_{i=1}^k\mathrm{Vol}_1\left(\sum_{j\in[k]\setminus\{i\}}A_j\right).
\end{eqnarray}
\end{thm}
\begin{proof}
We adapt a proof of Gyarmati, Matolcsi and Ruzsa \cite[Theorem 1.4]{GMR10}, who established the same kind of inequality for finite subsets of the integers, with cardinality in place of volume. The proof is based on set inclusions. Let $k \geq 2$. Set $S=A_1+ \cdots +A_k$ and for $i \in[k]$, let $a_i=\min A_i$, $b_i=\max A_i$,
$$S_i = \sum_{j\in[k]\setminus\{i\}}A_j,$$
$s_i=\sum_{j<i}a_j+\sum_{j>i}b_j$, $S_i^-=\{x\in S_i ; x\le s_i\}$ and $S_i^+=\{x\in S_i ; x> s_i\}$. For all $i \in [k-1]$, one has
$$ S\supset (a_i+S_i^-)\cup(b_{i+1}+S_{i+1}^+).$$
Since $a_i+s_i=\sum_{j\le i}a_j+\sum_{j>i}b_j=b_{i+1}+s_{i+1}$, the above union is a disjoint union. Thus for $i \in [k-1]$
$$\mathrm{Vol}_1(S) \ge \mathrm{Vol}_1(a_i+S_i^-)+\mathrm{Vol}_1(b_{i+1}+S_{i+1}^+)
= \mathrm{Vol}_1(S_i^-) + \mathrm{Vol}_1(S_{i+1}^+).$$
Notice that $S_1^-=S_1$ and $S_k^+=S_k\setminus\{s_k\}$, thus adding the above $k-1$ inequalities we obtain
\begin{eqnarray*}
(k-1)\mathrm{Vol}_1(S) & \ge & \sum_{i=1}^{k-1}\left(\mathrm{Vol}_1(S_i^-) + \mathrm{Vol}_1(S_{i+1}^+) \right) \\
& = & \mathrm{Vol}_1(S_1^-) + \mathrm{Vol}_1(S_k^+) + \sum_{i=2}^{k-1}\mathrm{Vol}_1(S_i) \\
&=& \sum_{i=1}^k\mathrm{Vol}_1(S_i).
\end{eqnarray*}
We have thus established Conjecture \ref{strongconj} in dimension 1.
\end{proof}
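Inequality \eqref{eq:thm-fsa_0} is easy to test numerically. The sketch below (our illustration; the interval endpoints are an arbitrary choice) discretizes three unions of intervals on a grid of mesh $h$, forms the Minkowski sums by brute force, and checks the case $k=3$ up to $O(h)$ grid error.

```python
import numpy as np

h = 0.01  # grid mesh; each volume is exact up to O(h) per connected component

def grid(intervals):
    """Represent a finite union of closed intervals as a set of grid points."""
    pts = set()
    for a, b in intervals:
        pts.update(round(float(x), 6) for x in np.arange(a, b + h / 2, h))
    return pts

def msum(X, Y):
    """Minkowski sum of two finite grid sets."""
    return {round(x + y, 6) for x in X for y in Y}

def vol(X):
    """Approximate Vol_1 of the union of intervals represented by X."""
    return h * (len(X) - 1) if X else 0.0

A1 = grid([(0.0, 1.0)])
A2 = grid([(0.0, 0.5), (2.0, 2.5)])
A3 = grid([(0.0, 0.3)])
S = msum(msum(A1, A2), A3)                  # A1 + A2 + A3
S1, S2, S3 = msum(A2, A3), msum(A1, A3), msum(A1, A2)
lhs = vol(S)                                # true value 3.6
rhs = (vol(S1) + vol(S2) + vol(S3)) / 2     # true value 2.95
assert lhs + 3 * h >= rhs                   # the k = 3 case of the theorem
```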
\begin{rem}
As mentioned in the proof, Gyarmati, Matolcsi and Ruzsa \cite{GMR10} earlier obtained a discrete version of Theorem~\ref{thm:1d}
for cardinalities of sums of subsets of the integers. There are also interesting upper bounds on cardinalities of sumsets
in the discrete setting that have similar combinatorial structure, see, e.g., \cite{GMR10, BB12, MMT12} and references therein.
Furthermore, as discussed in the introduction for the continuous domain, there are also discrete entropy analogues of these
cardinality inequalities, explored in depth in \cite{Ruz09:1, Tao10, MK10:isit, BB12, MMT12, HAT14, WWM14:isit, MWW17:1, MWW17:2} and references therein.
We do not discuss discrete analogues further in this paper.
\end{rem}
Now we prove that Conjecture \ref{strongconj} passes to Cartesian products.
\begin{thm}\label{thm:product}
Let $k,m\ge 2$ and $n_1, \dots, n_m\ge 1$ be integers. Let $n=n_1+\cdots +n_m$. For $1\le i\le k$ and $1\le l\le m$, let $A_i^l$ be some compact sets in ${\bf R}^{n_l}$. Assume that for any $1\le l\le m$ the $k$ compact sets $A_1^l,\dots, A_k^l\subset {\bf R}^{n_l}$ satisfy Conjecture \ref{strongconj}.
For $1\le i\le k$, let $A_i=A_i^1\times \cdots\times A_i^m\subset{\bf R}^n={\bf R}^{n_1}\times\cdots\times{\bf R}^{n_m}$. Then Conjecture \ref{strongconj} holds for $A_1,\dots, A_k$.
\end{thm}
\begin{proof}
Let $S=\sum_{i=1}^kA_i$ and $S_i = \sum_{j\neq i} A_j$. We prove that
$$(k-1)\mathrm{Vol}_n(S)^\frac{1}{n}\ge \sum_{i=1}^k \mathrm{Vol}_n(S_i)^\frac{1}{n}.
$$
For all $1\le i\le k$, one has
$$S_i=\sum_{j\neq i} A_j=\sum_{j\neq i}\prod_{l=1}^mA_j^l=\prod_{l=1}^m\left(\sum_{j\neq i}A_j^l\right).$$
For $1\le i\le k$, denote $\sigma_i=(\mathrm{Vol}_{n_l}(\sum_{j\neq i}A_j^l)^\frac{1}{n_l})_{1\le l\le m}\in{\bf R}^m$, and for $x=(x_l)_{1\le l\le m}\in{\bf R}^m$, denote $\|x\|_0=\prod_{l=1}^m |x_l|^\frac{n_l}{n}$. Then, using Minkowski's inequality for $\|\cdot\|_0$ (see, for example, Theorem 10 in \cite{HLP88:book}), we deduce that
$$
\sum_{i=1}^k \mathrm{Vol}_n(S_i)^\frac{1}{n}=\sum_{i=1}^k \prod_{l=1}^m\mathrm{Vol}_{n_l}\left(\sum_{j\neq i}A_j^l\right)^\frac{1}{n}=\sum_{i=1}^k\|\sigma_i\|_0\le\left\| \sum_{i=1}^k\sigma_i\right\|_0=\prod_{l=1}^m\left(\sum_{i=1}^k\sigma_i^l\right)^\frac{n_l}{n}.
$$
Using that for any $1\le l\le m$ the $k$ compact sets $A_1^l,\dots, A_k^l\subset {\bf R}^{n_l}$ satisfy Conjecture \ref{strongconj}, we obtain
$$
\sum_{i=1}^k\sigma_i^l=\sum_{i=1}^k\mathrm{Vol}_{n_l}\left(\sum_{j\neq i}A_j^l\right)^\frac{1}{n_l}\le (k-1)\mathrm{Vol}_{n_l}\left(\sum_{i=1}^kA_i^l\right)^\frac{1}{n_l}.
$$
Thus
$$
\sum_{i=1}^k \mathrm{Vol}_n(S_i)^\frac{1}{n}\le \prod_{l=1}^m\left((k-1)\mathrm{Vol}_{n_l}\left(\sum_{i=1}^kA_i^l\right)^\frac{1}{n_l}\right)^\frac{n_l}{n}=(k-1)\mathrm{Vol}_n(S)^\frac{1}{n}.
$$
\end{proof}
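The analytic ingredient above is the superadditivity $\sum_i\|\sigma_i\|_0 \le \|\sum_i \sigma_i\|_0$ of the functional $\|x\|_0=\prod_l |x_l|^{n_l/n}$ on vectors with nonnegative coordinates, a consequence of the weighted AM--GM inequality. A quick randomized spot check (the weights below are an arbitrary choice of ours):

```python
import random

random.seed(0)
n_l = [2, 3, 1]                     # arbitrary "dimensions" n_1, ..., n_m
n = sum(n_l)

def phi(x):
    """||x||_0 = prod_l x_l^(n_l/n) for nonnegative x; the exponents sum to 1."""
    p = 1.0
    for xl, nl in zip(x, n_l):
        p *= xl ** (nl / n)
    return p

for _ in range(1000):
    x = [random.uniform(0.0, 10.0) for _ in n_l]
    y = [random.uniform(0.0, 10.0) for _ in n_l]
    s = [a + b for a, b in zip(x, y)]
    assert phi(x) + phi(y) <= phi(s) + 1e-9     # superadditivity of ||.||_0
```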
From Theorems \ref{thm:1d} and \ref{thm:product}, and the fact that Conjecture \ref{strongconj} holds for convex sets,
we deduce that Conjecture \ref{strongconj} holds for Cartesian products of one-dimensional compact sets and convex sets.
\subsection{A counterexample in dimension $\ge 12$}
\label{sec:counter}
In contrast to the positive results for compact product sets, both the conjectures of
Bobkov, Madiman and Wang \cite{BMW11} fail in general for even moderately high dimension.
\begin{thm}\label{thm:counter}
For every $k \geq 2$, there exists $n_k \in \mathbb{N}$ such that for every $n\ge n_k$ there is a compact set $A \subset {\bf R}^n$ such that $\mathrm{Vol}_n(A(k+1))<\mathrm{Vol}_n(A(k))$. Moreover, one may take
$$ n_k =\min\left\{n\in k\mathbb{Z}: n> \frac{\log(k)}{\log\left(1+\frac{1}{k}\right) - \frac{\log(2)}{k}}\right\}. $$
In particular, one has $n_2=12$, whence Conjectures~\ref{weakconj} and \ref{strongconj} are false in ${\bf R}^n$ for $n\ge12$.
\end{thm}
\begin{proof}
Let $k \geq 2$ be fixed and let $n_k$ be defined as in the statement of Theorem \ref{thm:counter} so that
$$n_k> \frac{\log(k)}{\log\left(1+\frac{1}{k}\right) - \frac{\log(2)}{k}}$$
and $n_k=kd$, for a certain $d \in \mathbb{N}$. Let $F_1, \dots, F_k$ be $k$ linear subspaces of ${\bf R}^{n_k}$ of dimension $d$ orthogonal to each other such that ${\bf R}^{n_k} = F_1 \oplus \cdots \oplus F_k$. Set $A = I_1 \cup \cdots \cup I_k$, where for every $i \in [k]$, $I_i$ is a convex body in $F_i$. Notice that for every $l \geq 1$,
$$ \underset{l\ {\rm times}}{\underbrace{A + \cdots + A}} = \bigcup_{m_i \in \{0, \cdots, l\}, \sum_{i=1}^k m_i = l} (m_1 I_1 + \cdots + m_k I_k), $$
where we used the convexity of each $I_i$ to write the Minkowski sum of $m_i$ copies of $I_i$ as $m_i I_i$.
Thus
$$ k^{n_k} \mathrm{Vol}_{n_k}(A(k)) = \mathrm{Vol}_{n_k}(I_1 + \cdots + I_k) = \mathrm{Vol}_{n_k}(I_1 \times \cdots \times I_k), $$
and
\begin{eqnarray*}
(k+1)^{n_k} \mathrm{Vol}_{n_k}(A(k+1)) & = & \mathrm{Vol}_{n_k}((2I_1 + I_2 + \cdots + I_k) \cup \cdots \cup (I_1 + \cdots + I_{k-1} + 2I_k)) \\ & = & \mathrm{Vol}_{n_k}((2I_1 \times I_2 \times \cdots \times I_k) \cup \cdots \cup (I_1 \times \cdots \times I_{k-1} \times 2I_k)) \\ & \leq & \mathrm{Vol}_{n_k}(2I_1 \times I_2 \times \cdots \times I_k) + \cdots + \mathrm{Vol}_{n_k}(I_1 \times \cdots \times I_{k-1} \times 2I_k) \\ & = & k2^d \mathrm{Vol}_{n_k}(I_1 \times \cdots \times I_k) \\ & = & k^{n_k+1}2^d \mathrm{Vol}_{n_k}(A(k)).
\end{eqnarray*}
The hypothesis on $n_k$ enables us to conclude that $\mathrm{Vol}_{n_k}(A(k+1)) < \mathrm{Vol}_{n_k}(A(k))$. Now for $n\ge n_k$, we define $\tilde{A}=A\times[0,1]^{n-n_k}$. For every $l$, one has $\tilde{A}(l)=A(l)\times[0,1]^{n-n_k}$, thus $\mathrm{Vol}_n(\tilde{A}(l)) = \mathrm{Vol}_{n_k}(A(l))$. Therefore $\mathrm{Vol}_n(\tilde{A}(k+1)) < \mathrm{Vol}_n(\tilde{A}(k))$, which establishes that $\tilde{A}$ gives a counterexample in ${\bf R}^n$.
The sequence $\left\{\frac{\log(k)}{\log\left(1+\frac{1}{k}\right) - \frac{\log(2)}{k}}\right\}_{k \geq 2}$ is increasing and $\frac{\log(2)}{\log\left(1+\frac{1}{2}\right) - \frac{\log(2)}{2}} \approx 11.77$. Hence, Conjecture~\ref{weakconj} is false for $n \geq 12$.
\end{proof}
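The numerical claims in Theorem~\ref{thm:counter} are easy to reproduce. The sketch below recomputes the threshold defining $n_k$ and the decisive integer comparison $(k+1)^{n_k} > k^{n_k+1}2^{n_k/k}$ for $k=2$, $n_2=12$, $d=6$ (a sanity check, not part of the proof):

```python
import math

def threshold(k):
    # right-hand side of the condition n > log(k) / (log(1 + 1/k) - log(2)/k)
    return math.log(k) / (math.log(1 + 1 / k) - math.log(2) / k)

def n_k(k):
    # smallest multiple of k strictly greater than threshold(k)
    return k * (int(threshold(k) // k) + 1)

assert abs(threshold(2) - 11.77) < 0.01    # the value ~11.77 quoted in the proof
assert n_k(2) == 12
# decisive comparison for k = 2, n_2 = 12, d = 6:  3^12 = 531441 > 2^13 * 2^6 = 524288
assert 3 ** 12 > 2 ** 13 * 2 ** 6
```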
\begin{rem}
\begin{enumerate}
\item It is instructive to visualize the counterexample for $k=2$, which is done in Figure~\ref{fig:counter} by representing
each of the two orthogonal copies of ${\bf R}^6$ by a line.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.45]{sum_1} \,
\includegraphics[scale=0.45]{sum_2}\,
\includegraphics[scale=0.45]{sum_3}
\end{center}
\caption{A counterexample in ${\bf R}^{12}$.}\label{fig:counter}
\end{figure}
\item It was shown by Bobkov, Madiman and Wang \cite{BMW11} that Conjecture \ref{strongconj} is true for convex sets.
The constructed counterexample is a union of convex sets and is symmetric and star-shaped.
\item Notice that in the above example one has $\mathrm{Vol}_n(A(k-1))=0$. By adding to $A$ a ball with sufficiently small radius, one obtains a counterexample satisfying $\mathrm{Vol}_n(A(k)) > \mathrm{Vol}_n(A(k-1)) > 0$ and $\mathrm{Vol}_n(A(k)) > \mathrm{Vol}_n(A(k+1))$.
\item The counterexample also implies that Conjecture 1.1 in \cite{BMW11}, which suggests a fractional version
of Young's inequality for convolution with sharp constant, is false. It is still possible that it may be true for a
restricted class of functions (like the log-concave functions).
\item Conjectures \ref{strongconj} and \ref{weakconj} are still open in dimension $n \in \{2, \ldots, 11\}$.
\end{enumerate}
\end{rem}
\subsection{Convergence rates for $\Delta$}
\label{sec:Delta-rate}
The asymptotic behavior of $\Delta(A(k))$ has been extensively studied by Emerson and Greenleaf \cite{EG69}.
In analyzing $\Delta(A(k))$, the following lemma about convergence of $A(k)$ to 0 in Hausdorff distance is useful.
\begin{lem}\label{lem:advance-sfs}
If $A$ is a compact set in ${\bf R}^n$,
\begin{eqnarray}\label{emerson}
\mathrm{conv}(A) \subset A(k) + \frac{n \,\mathrm{diam}(A)}{k}B_2^n.
\end{eqnarray}
\end{lem}
\begin{proof}
Using the invariance of (\ref{emerson}) under shifts of $A$, we may assume that $0 \in \mathrm{conv}(A)$. Then
$$ \mathrm{conv}(A) = \mathrm{conv}(A(k)) \subset (1+c(A(k))) \mathrm{conv}(A) = A(k) + c(A(k)) \mathrm{conv}(A). $$
Using $c(A(k)) \leq \frac{c(A)}{k}$ (see Theorem \ref{thm:c-rate} in Section~\ref{sec:c}), as well as $c(A) \leq n$ (see Theorem \ref{thm:sch75}), we deduce that
$$ \mathrm{conv}(A) \subset A(k) + \frac{n}{k} \mathrm{conv}(A). $$
To conclude, we note that since $0 \in \mathrm{conv}(A)$, one has $|x| \leq \mathrm{diam}(A)$ for every $x \in \mathrm{conv}(A)$. Hence, $\mathrm{conv}(A) \subset \mathrm{diam}(A) B_2^n$.
Finally, we obtain
$$ \mathrm{conv}(A) \subset A(k) + \frac{n \,\mathrm{diam}(A)}{k}B_2^n. $$
\end{proof}
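For intuition, the containment \eqref{emerson} can be verified directly in the simplest case $A=\{0,1\}\subset{\bf R}$ (our illustration): here $A(k)=\{0,\frac1k,\dots,1\}$, and every point of $\mathrm{conv}(A)=[0,1]$ lies within $\frac{1}{2k}\le\frac{n\,\mathrm{diam}(A)}{k}$ of $A(k)$.

```python
import numpy as np

n, diam = 1, 1.0                                # A = {0, 1} in R
for k in (2, 5, 50):
    Ak = np.arange(k + 1) / k                   # A(k) = {j/k : 0 <= j <= k}
    xs = np.linspace(0.0, 1.0, 1001)            # sample conv(A) = [0, 1]
    gap = np.abs(xs[:, None] - Ak[None, :]).min(axis=1).max()
    assert gap <= n * diam / k + 1e-12          # in fact gap = 1/(2k) here
```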
Note that Lemma~\ref{lem:advance-sfs} is
similar to, but weaker than, the Shapley-Folkman-Starr theorem
discussed in the introduction, which we will prove in Section~\ref{sec:d-rate}.
Lemma~\ref{lem:advance-sfs} was contained in \cite{EG69}, but with an extra factor of 2.
One clearly needs an assumption beyond compactness to have asymptotic vanishing of $\Delta(A(k))$.
Indeed, a simple counterexample is a finite set $A$ of points with $\mathrm{Vol}_n(\mathrm{conv}(A))>0$,
for which $\Delta(A(k))$ always remains at $\mathrm{Vol}_n(\mathrm{conv}(A))$ and fails
to converge to 0. Once such an assumption is made, however, one has the following result.
\begin{thm}[Emerson-Greenleaf \cite{EG69}]
Let $A$ be a compact set in ${\bf R}^n$ with nonempty interior. Then
\begin{eqnarray*}
\Delta(A(k)) \leq \frac{C}{k} \mathrm{Vol}_n(\mathrm{conv}(A)),
\end{eqnarray*}
for some constant $C$ possibly depending on $n$.
\end{thm}
\begin{proof}
By translation-invariance, we may assume that $\delta B_2^n\subset A$ for some $\delta>0$. Then
$\delta B_2^n\subset A(k_0)$, and by taking $k_0 \geq \frac{n\, \mathrm{diam}(A)}{\delta}$,
we have
$$\frac{n \,\mathrm{diam}(A)}{k_0}B_2^n\subset A(k_0).$$
Hence using (\ref{emerson}) we get
\begin{eqnarray*}
\mathrm{conv}(A)\subset A(k) + \frac{k_0}{k} A(k_0)=\frac{k+k_0}{k}A(k+k_0) ,
\end{eqnarray*}
so that by taking the volume we have
\begin{eqnarray*}
\mathrm{Vol}_n(\mathrm{conv}(A))\leq \bigg( 1+ \frac{k_0}{k}\bigg)^n \mathrm{Vol}_n(A(k+k_0)) ,
\end{eqnarray*}
and
\begin{eqnarray*}
\Delta(A(k+k_0)) \leq \bigg[ \bigg( 1+ \frac{k_0}{k}\bigg)^n -1 \bigg] \mathrm{Vol}_n(A(k+k_0)) = O\bigg(\frac{1}{k}\bigg) \mathrm{Vol}_n(\mathrm{conv}(A)).
\end{eqnarray*}
\end{proof}
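The $O(1/k)$ rate is sharp, for instance, for $A=\{0\}\cup[1,2]$ (our illustrative set): the $k$-fold sumset is $\{0\}\cup[1,2k]$, hence $A(k)=\{0\}\cup[\frac1k,2]$ and $\Delta(A(k))=\frac1k$ exactly. The sketch below confirms this on a grid, computing sumsets by discrete convolution of indicator vectors.

```python
import numpy as np

h = 0.01                                       # grid mesh
K = 5
N = int(2 * K / h) + 1                         # grid covering [0, 2K]
i = np.arange(N) * h

base = np.zeros(N)
base[0] = 1.0                                  # the isolated point {0}
base[(i >= 1.0 - h / 2) & (i <= 2.0 + h / 2)] = 1.0   # the interval [1, 2]

S = base.copy()
for k in range(2, K + 1):
    S = (np.convolve(S, base)[:N] > 0).astype(float)  # k-fold sumset A + ... + A
    vol_Ak = h * (S.sum() - 1) / k             # Vol(A(k)) = Vol(sumset) / k
    Delta_k = 2.0 - vol_Ak                     # Vol(conv(A)) = 2
    assert abs(Delta_k - 1.0 / k) < 2 * h      # Delta(A(k)) = 1/k up to grid error
```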
\section{Volume inequalities for Minkowski sums}
\label{sec:vol}
\subsection{A refined superadditivity of the volume for compact sets}
\label{sec:fsa}
In this section, we observe that if the exponents of $1/n$ in Conjecture~\ref{strongconj} are removed,
then the modified inequality is true (though unfortunately one can no longer directly relate this to
a law of large numbers for sets).
\begin{thm}\label{thm:fsa}
Let $n\ge1$, $k\ge2$ be integers and let $A_1, \dots, A_k$ be $k$ compact sets in ${\bf R}^n$. Then
\begin{eqnarray}\label{eq:thm-fsa}
\mathrm{Vol}_n\left(\sum_{i=1}^kA_i\right) \ge \frac{1}{k-1}\sum_{i=1}^k\mathrm{Vol}_n\left(\sum_{j\in[k]\setminus\{i\}}A_j\right).
\end{eqnarray}
\end{thm}
\begin{proof}
We use arguments similar to the proof of Theorem~\ref{thm:1d}.
Indeed, let us define the sets $S$ and $S_i$ in the same way as in the proof of Theorem~\ref{thm:1d}.
Let $\theta\in S^{n-1}$ be any fixed unit vector and let us define $a_i=\min\{\langle x,\theta\rangle ; x\in A_i\}$,
$b_i=\max \{\langle x,\theta\rangle ; x\in A_i\}$, $s_i=\sum_{j<i}a_j+\sum_{j>i}b_j$, $S_i^-=\{x\in S_i ; \langle x,\theta\rangle\le s_i\}$
and $S_i^+=\{x\in S_i ; \langle x,\theta\rangle> s_i\}$. Then, the same inclusions hold true and thus we obtain
\begin{eqnarray*}
(k-1)\mathrm{Vol}_n(S)
& \ge & \sum_{i=1}^{k-1}\left(\mathrm{Vol}_n(S_i^-) + \mathrm{Vol}_n(S_{i+1}^+)\right) \\
& = & \mathrm{Vol}_n(S_1^-) + \mathrm{Vol}_n(S_k^+) + \sum_{i=2}^{k-1}\mathrm{Vol}_n(S_i) \\
&=& \sum_{i=1}^k\mathrm{Vol}_n(S_i).
\end{eqnarray*}
\end{proof}
Applying Theorem~\ref{thm:fsa} to $A_1=\cdots=A_k=A$ yields the following positive result.
\begin{cor}\label{cor:fsa-weak}
Let $A$ be a compact set in ${\bf R}^n$ and $A(k)$ be defined as in (\ref{defAk}). Then
\begin{eqnarray}\label{weak}
\mathrm{Vol}_n(A(k)) \ge \left(\frac{k-1}{k}\right)^{n-1}\mathrm{Vol}_n(A(k-1)).
\end{eqnarray}
\end{cor}
In the following proposition, we improve Corollary~\ref{cor:fsa-weak} under additional assumptions on the set $A \subset {\bf R}^n$, for $n \geq 2$.
\begin{prop}\label{asymptote}
Let $A$ be a compact subset of ${\bf R}^n$ and $A(k)$ be defined as in (\ref{defAk}). If there exists a hyperplane $H$ such that $\mathrm{Vol}_{n-1}(P_H(A)) = \mathrm{Vol}_{n-1}(P_H(\mathrm{conv}(A)))$, where $P_H(A)$ denotes the orthogonal projection of $A$ onto $H$, then
$$ \mathrm{Vol}_n(A(k)) \ge \frac{k-1}{k}\mathrm{Vol}_n(A(k-1)). $$
\end{prop}
\begin{proof}
By assumption, $\mathrm{Vol}_{n-1}(P_H(A))=\mathrm{Vol}_{n-1}(P_H(\mathrm{conv}(A)))$. Thus, for every $k \geq 1$, $\mathrm{Vol}_{n-1}(P_H(A(k)))=\mathrm{Vol}_{n-1}(P_H(\mathrm{conv}(A)))$. Indeed, one has $A \subset A(k) \subset \mathrm{conv}(A)$. Thus, $P_H(A) \subset P_H(A(k)) \subset P_H(\mathrm{conv}(A))$. Hence,
$$ \mathrm{Vol}_{n-1}(P_H(A)) \leq \mathrm{Vol}_{n-1}(P_H(A(k))) \leq \mathrm{Vol}_{n-1}(P_H(\mathrm{conv}(A))) = \mathrm{Vol}_{n-1}(P_H(A)). $$
It follows by the Bonnesen inequality (concave Brunn-Minkowski inequality, see \cite{BF87:book, Ohm55}) that for every $k \geq 2$,
\begin{eqnarray*}
\mathrm{Vol}_n(A(k)) & = & \mathrm{Vol}_n\left( \frac{k-1}{k} A(k-1) + \frac{1}{k} A \right) \\ & \geq & \frac{k-1}{k} \mathrm{Vol}_n(A(k-1)) + \frac{1}{k} \mathrm{Vol}_n(A) \geq \frac{k-1}{k} \mathrm{Vol}_n(A(k-1)).
\end{eqnarray*}
\end{proof}
\begin{rem}
\begin{enumerate}
\item By considering the set $A=\{0, 1\}$ and $\delta_{\frac{1}{2}}$ the Dirac measure at $\frac{1}{2}$, one has
$$ \delta_{\frac{1}{2}}(A(2)) = 1 > 0 = \delta_{\frac{1}{2}}(A(3)). $$
Hence Conjecture~\ref{weakconj} does not hold in general for log-concave measures in dimension~1.
\item If $A$ is countable, then for every $k \geq 1$, $\mathrm{Vol}_n(A(k)) = 0$, thus the sequence $\{\mathrm{Vol}_n(A(k))\}_{k \geq 1}$ is constant and equal to $0$.
\item If there exists $k_0 \geq 1$ such that $A(k_0) = \mathrm{conv}(A)$, then for every $k \geq k_0$, $A(k) = \mathrm{conv}(A)$. Indeed,
\begin{eqnarray*}
(k_0+1)A(k_0+1) &=& k_0 A(k_0) + A = k_0 \mathrm{conv}(A) + A \\
& \supset & \mathrm{conv}(A) + k_0 A(k_0) = (k_0+1)\mathrm{conv}(A).
\end{eqnarray*}
It follows that $A(k_0+1)=\mathrm{conv}(A)$. We conclude by induction. Thus, in this case, the sequence $\{\mathrm{Vol}_n(A(k))\}_{k \geq 1}$ is stationary to $\mathrm{Vol}_n(\mathrm{conv}(A))$, for $k \geq k_0$.
\item It is natural to ask if the refined superadditivity of volume
can be strengthened to fractional superadditivity as defined in Definition~\ref{def:fsa} below.
While this appears to be a difficult question in general,
it was shown recently in \cite{BMW18}
that fractional superadditivity is true in the case of compact subsets of ${\bf R}$.
\end{enumerate}
\end{rem}
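The first item of the remark is a finite computation, which can be carried out exactly in rational arithmetic (a sketch of the check, for $A=\{0,1\}$):

```python
from fractions import Fraction
from itertools import product

A = (Fraction(0), Fraction(1))

def Ak(k):
    """A(k) = (A + ... + A)/k for a finite set A, computed exactly."""
    return {sum(t, Fraction(0)) / k for t in product(A, repeat=k)}

half = Fraction(1, 2)
assert Ak(2) == {Fraction(0), half, Fraction(1)}
assert half in Ak(2) and half not in Ak(3)   # delta_{1/2}(A(2)) = 1 > 0 = delta_{1/2}(A(3))
```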
\subsection{Supermodularity of volume for convex sets}
\label{sec:smod}
If we restrict to convex sets, an even stronger inequality is true from which
we can deduce Theorem~\ref{thm:fsa} for convex sets.
\begin{thm}\label{thm:smod}
Let $n\in\mathbb{N}$. For compact convex subsets $B_1, B_2, B_3$ of ${\bf R}^{n}$, one has
\begin{eqnarray}\label{set}
\mathrm{Vol}_n(B_1+B_2+B_3) + \mathrm{Vol}_n(B_1) \geq \mathrm{Vol}_n(B_1+B_2) + \mathrm{Vol}_n(B_1+B_3) .
\end{eqnarray}
\end{thm}
We first observe that Theorem~\ref{thm:smod} is actually equivalent to a
formal strengthening of it, namely Theorem~\ref{thm:smod2} below.
Let us first recall the notion of a supermodular set function.
\begin{defn}
A set function $f:2^{[k]}\ra{\bf R}$ is {\it supermodular} if
\begin{eqnarray}\label{supmod:defn}
f(s\cup t)+f(s\cap t) \geq f(s) + f(t)
\end{eqnarray}
for all subsets $s, t$ of $[k]$.
\end{defn}
\begin{thm}\label{thm:smod2}
Let $B_1, \ldots, B_k$ be compact convex subsets of ${\bf R}^n$, and
define
\begin{eqnarray}\label{def:setfn-v}
v(s)=\mathrm{Vol}_n\bigg(\sum_{i\in s} B_i \bigg)
\end{eqnarray}
for each $s\subset [k]$.
Then $v:2^{[k]}\ra [0,\infty)$ is a supermodular set function.
\end{thm}
Theorem~\ref{thm:smod2} implies Theorem~\ref{thm:smod}, namely
\begin{eqnarray}\label{vset}
\mathrm{Vol}_n(B_1+B_2+B_3) + \mathrm{Vol}_n(B_1) \geq \mathrm{Vol}_n(B_1+B_2) + \mathrm{Vol}_n(B_1+B_3)
\end{eqnarray}
for compact convex subsets $B_1, B_2, B_3$ of ${\bf R}^{n}$,
since the latter is the special case of Theorem~\ref{thm:smod2} obtained by taking $k=3$, $s=\{1,2\}$ and $t=\{1,3\}$.
To see the reverse, apply the inequality \eqref{vset} to
\begin{eqnarray*}
B_1=\sum_{i\in s\cap t} A_i , \quad B_2=\sum_{i\in s\setminus t} A_i, \quad B_3=\sum_{i\in t\setminus s} A_i .
\end{eqnarray*}
Our proof of Theorem~\ref{thm:smod} combines a property of determinants that seems to have been
first explicitly observed by Ghassemi and Madiman \cite{MG17} with
a use of optimal transport inspired by Alesker, Dar and Milman \cite{ADM99}.
Let us prepare the ground by stating these results.
\begin{lem}\label{lem:det}\cite{MG17}
Let $K_1, K_2$ and $K_3$ be $n\times n$ positive-semidefinite matrices. Then
\begin{equation*}
{\mathop{\rm det}}(K_1 + K_2 + K_3) + {\mathop{\rm det}}(K_1) \geq {\mathop{\rm det}}(K_1 + K_2) +{\mathop{\rm det}}(K_1 + K_3) .
\end{equation*}
\end{lem}
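Since Lemma~\ref{lem:det} is stated here without proof, a quick numerical sanity check may be reassuring. The following Python sketch (ours, not part of the formal development; it assumes NumPy is available) samples positive-semidefinite matrices of the form $MM^T$ and verifies the inequality up to floating-point tolerance.

```python
import numpy as np

def supermodular_gap(K1, K2, K3):
    # det(K1+K2+K3) + det(K1) - det(K1+K2) - det(K1+K3); the lemma says this is >= 0
    det = np.linalg.det
    return det(K1 + K2 + K3) + det(K1) - det(K1 + K2) - det(K1 + K3)

rng = np.random.default_rng(0)

def random_psd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T  # M M^T is always positive semidefinite

for _ in range(500):
    K1, K2, K3 = random_psd(4), random_psd(4), random_psd(4)
    lhs = np.linalg.det(K1 + K2 + K3) + np.linalg.det(K1)
    rhs = np.linalg.det(K1 + K2) + np.linalg.det(K1 + K3)
    assert lhs >= rhs - 1e-6 * (1.0 + abs(lhs) + abs(rhs))
```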
We state the deep result of \cite{ADM99} directly for $k$ sets instead of for two sets as in \cite{ADM99}
(the proof is essentially the same, with obvious modifications).
\begin{thm}[Alesker-Dar-Milman \cite{ADM99}]\label{thm:ADM}
Let $A_1,\ldots,A_{k} \subset {\bf R}^{n}$ be open, convex sets
with $|A_i|=1$ for each $i\in [k]$.
Then there exist $C^1$-diffeomorphisms $\psi_i:A_1\ra A_i$
preserving Lebesgue measure, such that
\begin{eqnarray*}
\sum_{i\in [k]} \lambda_i A_i = \bigg\{ \sum_{i\in [k]} \lambda_i \psi_i(x): x\in A_1 \bigg\} ,
\end{eqnarray*}
for any $\lambda_1,\ldots,\lambda_{k} >0$.
\end{thm}
\noindent{\it Proof of Theorem~\ref{thm:smod}.}
By adding a small multiple of the
Euclidean ball $B_2^n$ and then using the continuity of $\varepsilon\mapsto \mathrm{Vol}_n(B_i+\varepsilon B_2^n)$
as $\varepsilon\ra 0$, we may assume that each $B_i$ satisfies $\mathrm{Vol}_n(B_i)>0$.
Then choose $\lambda_i$ such that $B_i=\lambda_i A_i$ with $|A_i|=1$, so that
\begin{eqnarray*}\begin{split}
\mathrm{Vol}_n(B_1+B_2+B_3) \, &=\, \mathrm{Vol}_n\bigg( \sum_{i \in [3]} \lambda_i A_i \bigg)
\,=\, \int 1_{\sum_{i \in [3]} \lambda_i A_i}(x) dx \\
&=\, \int 1_{\big\{ \sum_{i \in [3]} \lambda_i \psi_i(y): y\in A_1 \big\} }(x) dx ,
\end{split}\end{eqnarray*}
using Theorem~\ref{thm:ADM}.
Applying a change of coordinates using the diffeomorphism $x=\sum_{i\in [3]} \lambda_i \psi_i(y)$,
\begin{eqnarray*}\begin{split}
V &:=\mathrm{Vol}_n(B_1+B_2+B_3) = \int 1_{A_1}(y) {\mathop{\rm det}}\bigg(\sum_{i \in [3]} \lambda_i D\psi_i\bigg)(y) dy \\
&\geq \int_{A_1} {\mathop{\rm det}}[(\lambda_1 D\psi_1+\lambda_2 D\psi_2)(y)] + {\mathop{\rm det}}[(\lambda_1 D\psi_1+\lambda_3 D\psi_3)(y)] - {\mathop{\rm det}}[\lambda_1 D\psi_1(y)] dy \\
&= \int 1_{A_1}(y) d[(\lambda_1 \psi_1+\lambda_2 \psi_2)(y)] + \int 1_{A_1}(y) d[(\lambda_1 \psi_1+\lambda_3 \psi_3)(y)] - \int 1_{A_1}(y) d[\lambda_1 \psi_1(y)] \\
&= \int 1_{\{ \lambda_1 \psi_1(y)+\lambda_2 \psi_2(y): y\in A_1\} }(z) dz + \int 1_{\{ \lambda_1 \psi_1(y)+\lambda_3 \psi_3(y): y\in A_1\} }(z') dz' \\
& \quad\quad\quad\quad\quad - \int 1_{\{ \lambda_1 \psi_1(y): y\in A_1\} }(z'') dz''
\end{split}\end{eqnarray*}
where the inequality follows from Lemma~\ref{lem:det},
and the last equality is obtained by
making multiple appropriate coordinate changes.
Using Theorem~\ref{thm:ADM} again,
\begin{eqnarray*}\begin{split}
\mathrm{Vol}_n(B_1+B_2+B_3) \,
&\geq \int 1_{\lambda_1 A_1+\lambda_2 A_2}(z) dz + \int 1_{\lambda_1 A_1+\lambda_3 A_3}(z) dz - \int 1_{\lambda_1 A_1}(z) dz \\
&= \mathrm{Vol}_n(B_1+B_2) + \mathrm{Vol}_n(B_1+B_3) - \mathrm{Vol}_n(B_1) .
\end{split}\end{eqnarray*}
\qed
For the purposes of discussion below, it is useful to collect some well-known
facts from the theory of supermodular set functions.
Observe that if $v$ is supermodular and $v(\emptyset)=0$, then considering disjoint
$s$ and $t$ in \eqref{supmod:defn} implies that
$v$ is superadditive. In fact, a more general structural result
is true. To describe it, we need some terminology.
\begin{defn}\label{def:frac}
Given a collection $\mathcal{C}$ of subsets of $[k]$, a function $\alpha:\mathcal{C} \to {\bf R}^+$
is called a {\em fractional partition} if for each $i\in [k]$, we have
$\sum_{s\in \mathcal{C}:i\in s} \alpha_{s} = 1$.
\end{defn}
The reason for the terminology is that this notion extends the familiar notion of
a partition of a set (whose indicator function is obtained
precisely as in Definition~\ref{def:frac} but with the range restricted to $\{0,1\}$)
by allowing fractional values.
An important example of a fractional partition of $[k]$ is the collection
$\mathcal{C}_m=\binom{[k]}{m}$ of all subsets of size $m$,
together with the coefficients $\alpha_{s}=\binom{k-1}{m-1}^{-1}$.
\begin{defn}\label{def:fsa}
A function $f:2^{[k]}\ra{\bf R}$ is {\it fractionally superadditive} if
for any fractional partition $(\mathcal{C}, \beta)$,
\begin{eqnarray*}
f([k]) \geq \sum_{s \in \mathcal{C}} \beta_{s} f(s) .
\end{eqnarray*}
\end{defn}
The following theorem has a long history and is implicit in results from
cooperative game theory in the 1960s, but to our knowledge it was first explicitly stated
by Moulin Ollagnier and Pinchon \cite{MP82}.
\begin{thm}\label{prop:BS}\cite{MP82}
If $f:2^{[k]}\ra{\bf R}$ is supermodular and $f(\emptyset)=0$, then $f$ is fractionally superadditive.
\end{thm}
A survey of the history of Theorem~\ref{prop:BS}, along with various
strengthenings of it and their proofs, and discussion of several applications,
can be found in \cite{MT10}.
If $\{B_i, i\in [k]\}$ are compact convex sets and $u(s)=\mathrm{Vol}_n(\sum_{i\in s} B_i)$ is the set function defined in \eqref{def:setfn-v},
then $u(\emptyset)=0$ and Theorem~\ref{thm:smod2} says that $u$ is supermodular,
whence Theorem~\ref{prop:BS} immediately implies
that $u$ is fractionally superadditive.
\begin{cor}\label{cor:vol}
Let $B_1, \ldots, B_k$ be compact convex subsets of ${\bf R}^n$ and let $\beta$
be any fractional partition using a collection $\mathcal{C}$ of subsets of $[k]$. Then
\begin{eqnarray*}
\mathrm{Vol}_n\bigg(\sum_{i\in [k]} B_i\bigg) \geq \sum_{s \in \mathcal{C}} \beta_{s} \mathrm{Vol}_n\bigg(\sum_{i\in s} B_i\bigg) .
\end{eqnarray*}
\end{cor}
Corollary~\ref{cor:vol} implies that for each $m<k$,
\begin{eqnarray}\label{eq:vol-deg}
\mathrm{Vol}_n\bigg(\sum_{i\in [k]} B_i\bigg) \geq \binom{k-1}{m-1}^{-1} \sum_{|s|=m} \mathrm{Vol}_n\bigg(\sum_{i\in s} B_i\bigg) .
\end{eqnarray}
Let us discuss whether these inequalities contain anything novel. On the one hand,
if we consider the case $m=1$ of inequality \eqref{eq:vol-deg}, the resulting inequality is not new;
indeed, it is implied by the Brunn-Minkowski inequality:
\begin{eqnarray*}
\mathrm{Vol}_n\bigg(\sum_{i\in [k]} B_i\bigg) \geq \big[ \sum_{i\in [k]} \mathrm{Vol}_n(B_i)^\frac{1}{n} \big]^n
\geq \sum_{i\in [k]} \mathrm{Vol}_n(B_i) .
\end{eqnarray*}
On the other hand, applying the inequality \eqref{eq:vol-deg} to $m=k-1$ yields
precisely Theorem~\ref{thm:fsa} for convex sets $B_i$, i.e.,
\begin{eqnarray}\label{f-sup-appl}\begin{split}
\mathrm{Vol}_n \bigg(\sum_{i\in [k]} B_i\bigg) &\geq \frac{1}{k-1} \sum_{i\in [k]} \mathrm{Vol}_n\bigg(\sum_{j\neq i} B_j\bigg).
\end{split}\end{eqnarray}
Let us compare this with what is obtainable
from the refined Brunn-Minkowski inequality for convex sets proved in \cite{BMW11},
which says that
\begin{eqnarray}\label{bm-sup-appl}\begin{split}
\mathrm{Vol}_n\bigg(\sum_{i\in [k]} B_i\bigg) &\geq \bigg(\frac{1}{k-1}\bigg)^n \bigg[\sum_{i\in [k]} \mathrm{Vol}_n\big(\sum_{j\neq i} B_j\big)^\frac{1}{n} \bigg]^n .
\end{split}\end{eqnarray}
Denote the right hand sides of \eqref{f-sup-appl} and \eqref{bm-sup-appl} by $R_{\eqref{f-sup-appl}}$
and $R_{\eqref{bm-sup-appl}}$. Also set
\begin{eqnarray*}
c_i=\mathrm{Vol}_n\bigg(\sum_{j\neq i} B_j\bigg)^\frac{1}{n} ,
\end{eqnarray*}
and write $c=(c_1, \ldots, c_k)\in [0,\infty)^k$, so that
$R_{\eqref{f-sup-appl}}^{\frac{1}{n}} =(k-1)^{-\frac{1}{n}} \|c\|_n$
and
$R_{\eqref{bm-sup-appl}}^{\frac{1}{n}} = (k-1)^{-1} \|c\|_1$.
Here, for $m\geq 1$, $\|c\|_m = \left( \sum_{i=1}^k c_i^m \right)^{\frac{1}{m}}$. In other words,
\begin{eqnarray*}
\bigg[\frac{R_{\eqref{f-sup-appl}}}{R_{\eqref{bm-sup-appl}}} \bigg]^\frac{1}{n}
= (k-1)^{1-\frac{1}{n}} \frac{\|c\|_n}{\|c\|_1} .
\end{eqnarray*}
Let us consider $n=2$ for illustration. Then we have
\begin{eqnarray*}
\bigg[\frac{R_{\eqref{f-sup-appl}}}{R_{\eqref{bm-sup-appl}}} \bigg]^\half
= \sqrt{k-1} \frac{\|c\|_2}{\|c\|_1} ,
\end{eqnarray*}
which ranges between $\sqrt{1-\frac{1}{k}}$ and $\sqrt{k-1}$,
since $\|c\|_2/\|c\|_1 \in [k^{-\half} ,1]$. In particular, neither bound is uniformly better;
so the inequality \eqref{eq:vol-deg} and Corollary~\ref{cor:vol} do indeed have
some potentially useful content.
Motivated by the results of this section, it is natural to ask if the volume of Minkowski
sums is supermodular even without the convexity assumption on the sets involved,
as this would strengthen Theorem~\ref{thm:fsa}. In fact, this is not the case.
\begin{prop}\label{prop:smod-counter}
There exist compact sets $A, B, C \subset {\bf R}$ such that
\begin{eqnarray*}
\mathrm{Vol}_1(A+B+C) + \mathrm{Vol}_1(A) < \mathrm{Vol}_1(A+B) + \mathrm{Vol}_1(A+C) .
\end{eqnarray*}
\end{prop}
\begin{proof}
Consider $A=\{0,1\}$ and $B=C=[0,1]$. Then,
$$ \mathrm{Vol}_1(A+B+C) + \mathrm{Vol}_1(A) = 3 < 4 = \mathrm{Vol}_1(A+B) + \mathrm{Vol}_1(A+C). $$
\end{proof}
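The counterexample is simple enough to verify mechanically. In the following Python sketch (ours, purely illustrative), compact subsets of ${\bf R}$ are represented as finite unions of closed intervals (points being degenerate intervals), Minkowski sums are computed exactly, and the strict inequality is confirmed.

```python
def merge(ivs):
    # merge a list of closed intervals (lo, hi) into a disjoint sorted union
    ivs = sorted(ivs)
    out = [list(ivs[0])]
    for lo, hi in ivs[1:]:
        if lo <= out[-1][1]:
            out[-1][1] = max(out[-1][1], hi)
        else:
            out.append([lo, hi])
    return [tuple(iv) for iv in out]

def msum(U, V):
    # Minkowski sum of two unions of closed intervals
    return merge([(a + c, b + d) for (a, b) in U for (c, d) in V])

def vol(U):
    return sum(b - a for a, b in U)

A = [(0, 0), (1, 1)]      # the two-point set {0, 1}
B = C = [(0, 1)]          # the interval [0, 1]

lhs = vol(msum(msum(A, B), C)) + vol(A)   # Vol(A+B+C) + Vol(A) = 3 + 0
rhs = vol(msum(A, B)) + vol(msum(A, C))   # Vol(A+B) + Vol(A+C) = 2 + 2
assert lhs < rhs
```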
On the other hand, the desired inequality is true in dimension 1 if the set $A$ is convex.
More generally, in dimension 1, one has the following result.
\begin{prop}\label{prop:smod-1d}
If $A, B, C\subset {\bf R}$ are compact, then
\begin{eqnarray*}
\mathrm{Vol}_1(A+B+C) +\mathrm{Vol}_1(\mathrm{conv}(A))\ge \mathrm{Vol}_1(A+B) + \mathrm{Vol}_1(A+C) .
\end{eqnarray*}
\end{prop}
\begin{proof}
Assume, as one typically does in the proof of the one-dimensional Brunn-Minkowski inequality,
that $\max B=0=\min C$. (We can do this without loss of generality since translation does not
affect volumes.) This implies that $B\cup C\subset B+C$, whence
\begin{eqnarray*}
(A+B)\cup(A+C)=A+(B\cup C)\subset A+B+C.
\end{eqnarray*}
Hence
\begin{eqnarray*}\begin{split}
\mathrm{Vol}_1(A+B+C) &\ge \mathrm{Vol}_1((A+B)\cup(A+C))\\
&= \mathrm{Vol}_1(A+B)+\mathrm{Vol}_1(A+C)-\mathrm{Vol}_1((A+B)\cap(A+C)) .
\end{split}\end{eqnarray*}
We will show that $(A+B)\cap(A+C)\subset \mathrm{conv}(A)$, which together with the preceding
inequality yields the desired conclusion
$\mathrm{Vol}_1(A+B+C)\ge \mathrm{Vol}_1(A+B)+\mathrm{Vol}_1(A+C)- \mathrm{Vol}_1(\mathrm{conv}(A))$.
To see that $(A+B)\cap(A+C)\subset \mathrm{conv}(A)$, consider $x\in(A+B)\cap(A+C)$.
One may write $x=a_1+b=a_2+c$, with $a_1,a_2\in A$, $b\in B$ and $c\in C$.
Since $\max B=0=\min C$ one has $b\le 0\le c$ and one deduces that $ a_2\le x\le a_1$
and thus $x\in \mathrm{conv}(A)$. This completes the proof.
\end{proof}
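Proposition~\ref{prop:smod-1d} can also be tested numerically. The following self-contained Python sketch (ours, purely illustrative) draws random finite unions of intervals, computes the relevant Minkowski sums by exact interval arithmetic, and checks the inequality up to floating-point tolerance.

```python
import random

def merge(ivs):
    # merge closed intervals (lo, hi) into a disjoint sorted union
    ivs = sorted(ivs)
    out = [list(ivs[0])]
    for lo, hi in ivs[1:]:
        if lo <= out[-1][1]:
            out[-1][1] = max(out[-1][1], hi)
        else:
            out.append([lo, hi])
    return [tuple(iv) for iv in out]

def msum(U, V):
    return merge([(a + c, b + d) for (a, b) in U for (c, d) in V])

def vol(U):
    return sum(b - a for a, b in U)

random.seed(0)
violations = 0
for _ in range(300):
    A, B, C = (merge([(lo, lo + random.random())
                      for lo in (random.uniform(0, 3) for _ in range(3))])
               for _ in range(3))
    conv_A = max(hi for _, hi in A) - min(lo for lo, _ in A)  # Vol(conv(A))
    lhs = vol(msum(msum(A, B), C)) + conv_A
    rhs = vol(msum(A, B)) + vol(msum(A, C))
    if lhs < rhs - 1e-9:
        violations += 1
assert violations == 0
```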
\begin{rem}
\begin{enumerate}
\item One may wonder if Proposition \ref{prop:smod-1d} extends to higher dimensions. In particular, we do
not know whether the supermodularity inequality
$$\mathrm{Vol}_n(A+B+C) +\mathrm{Vol}_n(A)\ge \mathrm{Vol}_n(A+B) + \mathrm{Vol}_n(A+C) $$
holds in the case where $A$ is convex and $B$ and $C$ are arbitrary compact sets.
\item It is also natural to ask in view of the results of this section whether the fractional superadditivity
\eqref{conjdimn} of $\mathrm{Vol}_n^{1/n}$ for convex sets proved in \cite{BMW11} follows from a more general supermodularity
property, i.e., whether
\begin{eqnarray}\label{eq:q-supmod}
\mathrm{Vol}_n^{1/n}(A+B+C) + \mathrm{Vol}_n^{1/n}(A) \geq \mathrm{Vol}_n^{1/n}(A+B) + \mathrm{Vol}_n^{1/n}(A+C)
\end{eqnarray}
for convex sets $A, B, C\subset {\bf R}^n$. It follows from results of \cite{MG17} that such a result
does not hold (their counterexample to the determinant version of \eqref{eq:q-supmod} corresponds in our context
to choosing ellipsoids in ${\bf R}^2$). Another simple explicit counterexample is the following:
Let
$A = [0,2] \times [0, 1/2]$,
$B= [0,1/2] \times [0,2]$,
and $C=\epsilon B_2^2$, with $\epsilon > 0$. Then,
$$ \mathrm{Vol}_2(A)^{1/2} = 1, \quad \mathrm{Vol}_2(A+B+C)^{1/2} = \sqrt{25/4 + 10\epsilon + \pi \epsilon^2}, $$
$$ \mathrm{Vol}_2(A+B)^{1/2} = 5/2, \quad \mathrm{Vol}_2(A+C)^{1/2} = \sqrt{1 + 5\epsilon + \pi \epsilon^2}. $$
Hence,
\begin{eqnarray*}\begin{split}
\mathrm{Vol}_2(A+B+C)^{1/2} + \mathrm{Vol}_2(A)^{1/2} &= 1 + 5/2 + 2\epsilon + o(\epsilon) \\
\mathrm{Vol}_2(A+B)^{1/2} + \mathrm{Vol}_2(A+C)^{1/2} &= 1 + 5/2 + (5/2)\epsilon + o(\epsilon) .
\end{split}\end{eqnarray*}
For $\epsilon$ small enough, this yields a counterexample to \eqref{eq:q-supmod}.
\item It is shown in \cite{MG17} that the entropy analogue of Theorem \ref{thm:smod}
does not hold, i.e., there exist independent real-valued random variables $X, Y, Z$ with log-concave distributions
such that
\begin{eqnarray*}
e^{2h(X+Y+Z)}+e^{2h(Z)} < e^{2h(X+Z)}+e^{2h(Y+Z)}.
\end{eqnarray*}
\end{enumerate}
\end{rem}
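The $\epsilon$-counterexample to \eqref{eq:q-supmod} in item 2 above involves only closed-form areas, so it can be checked by direct evaluation. The following Python sketch (ours, purely illustrative) does so for the concrete value $\epsilon = 0.01$.

```python
import math

eps = 0.01
# closed-form areas for A = [0,2] x [0,1/2], B = [0,1/2] x [0,2], C = eps*B_2^2:
# adding eps*B_2^2 to a convex body adds perimeter*eps + pi*eps^2 to its area
vol_A = 1.0                                   # area of the 2 x 1/2 rectangle
vol_AB = 25 / 4                               # A+B = [0,5/2]^2
vol_ABC = 25 / 4 + 10 * eps + math.pi * eps**2    # [0,5/2]^2 plus eps-disc
vol_AC = 1 + 5 * eps + math.pi * eps**2           # rectangle plus eps-disc

lhs = math.sqrt(vol_ABC) + math.sqrt(vol_A)
rhs = math.sqrt(vol_AB) + math.sqrt(vol_AC)
assert lhs < rhs                              # eq:q-supmod fails for small eps
# first-order expansions stated in the text
assert abs(lhs - (1 + 5/2 + 2 * eps)) < 1e-3
assert abs(rhs - (1 + 5/2 + (5/2) * eps)) < 1e-3
```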
\section{The behavior of Schneider's non-convexity index}
\label{sec:c}
In this section we study Schneider's non-convexity index. Recall its definition: for $A$ compact in ${\bf R}^n$,
\begin{eqnarray*}
c(A) = \inf \{ \lambda\geq 0: A+\lambda\, \mathrm{conv}(A) \text{ is convex} \}.
\end{eqnarray*}
\subsection{The refined monotonicity of Schneider's non-convexity index}
\label{sec:c-mono}
In this section, our main result is that Schneider's non-convexity index $c$ satisfies a strong kind of monotonicity in any dimension.
We state the main theorem of this section, and will subsequently deduce corollaries
asserting monotonicity in the Shapley-Folkman-Starr theorem from it.
\begin{thm}\label{thm:fracsubofc}
Let $n\ge1$ and let $A, B, C$ be subsets of ${\bf R}^n$. Then
\begin{eqnarray*}
c(A+B+C) \leq \max\{c(A+B), c(B+C)\}.
\end{eqnarray*}
\end{thm}
\begin{proof}
Let us denote $\lambda=\max\{c(A+B), c(B+C)\}$. Then
\begin{eqnarray*}
A+B+C+\lambda\mathrm{conv}(A+B+C)&=&A+B+\lambda\mathrm{conv}(A+B)+C+\lambda\mathrm{conv}(C)\\
&=&(1+\lambda)\mathrm{conv}(A+B)+C+\lambda\mathrm{conv}(C)\\
&\supset& (1+\lambda)\mathrm{conv}(A)+B+\lambda\mathrm{conv}(B)+C+\lambda\mathrm{conv}(C)\\
&=&(1+\lambda)\mathrm{conv}(A)+ (1+\lambda)\mathrm{conv}(B+C)\\
&=& (1+\lambda)\mathrm{conv}(A+B+C).
\end{eqnarray*}
Since the opposite inclusion is clear, we deduce that $A+B+C+\lambda\mathrm{conv}(A+B+C)$ is convex, which means that $c(A+B+C) \leq \lambda=\max\{c(A+B), c(B+C)\}.$
\end{proof}
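In dimension one, Schneider's index admits a closed form on finite sets: if $\mathrm{conv}(A)=[m,M]$, then $A+\lambda\,\mathrm{conv}(A)$ is an interval precisely when every gap between consecutive points of $A$ has length at most $\lambda(M-m)$, so $c(A)$ equals the largest gap divided by $M-m$. This makes Theorem~\ref{thm:fracsubofc} easy to test numerically; the following Python sketch (ours, purely illustrative) does so on random finite sets.

```python
import random

def c_1d(pts):
    # Schneider's index of a finite subset of R: largest gap / length of conv hull
    pts = sorted(set(pts))
    span = pts[-1] - pts[0]
    if span == 0:
        return 0.0
    return max(b - a for a, b in zip(pts, pts[1:])) / span

def msum(P, Q):
    return {p + q for p in P for q in Q}

random.seed(1)
violations = 0
for _ in range(300):
    A, B, C = ({round(random.uniform(0, 4), 3) for _ in range(4)} for _ in range(3))
    lhs = c_1d(msum(msum(A, B), C))
    rhs = max(c_1d(msum(A, B)), c_1d(msum(B, C)))
    if lhs > rhs + 1e-9:
        violations += 1
assert violations == 0
```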
Notice that the same kind of proof also shows that if $A+B$ and $B+C$ are convex then $A+B+C$ is also convex. Moreover, Theorem~\ref{thm:fracsubofc} has an equivalent formulation for $k\ge2$
subsets of ${\bf R}^n$, say $A_1, \dots, A_k$: if $s, t\subset [k]$
with $s\cup t=[k]$, then
\begin{eqnarray}\label{eq:c-multi}
c\left(\sum_{i\in[k]}A_i\right) \leq \max \bigg\{ c\left(\sum_{i\in s}A_i\right), c\left(\sum_{i\in t}A_i\right) \bigg\}.
\end{eqnarray}
To see this, apply Theorem~\ref{thm:fracsubofc} to
\begin{eqnarray*}
B=\sum_{i\in s\cap t} A_i , \quad A=\sum_{i\in s\setminus t} A_i, \quad C=\sum_{i\in t\setminus s} A_i .
\end{eqnarray*}
From the inequality \eqref{eq:c-multi}, the following corollary, expressed in a more
symmetric fashion, immediately follows.
\begin{cor}\label{cor:fracsubofc}
Let $n\ge1$ and $k\ge2$ be integers and let $A_1, \dots, A_k$ be $k$ sets in ${\bf R}^n$. Then
$$
c\left(\sum_{l\in[k]}A_l\right) \leq \max_{i\in[k]}c\left(\sum_{l\in[k]\setminus\{i\}}A_l\right).
$$
\end{cor}
The $k=2$ case of Corollary~\ref{cor:fracsubofc} follows directly from the definition of $c$
and was observed by Schneider in \cite{Sch75}.
Applying Corollary~\ref{cor:fracsubofc} for $A_1=\cdots =A_k=A$, where $A$ is a fixed subset of ${\bf R}^n$,
and using the scaling invariance of $c$, one deduces that the sequence $c(A(k))$ is non-increasing.
In fact, for identical sets, we prove something even stronger in the following theorem.
\begin{thm}\label{thm:c-quant-mono}
Let $A$ be a subset of ${\bf R}^n$ and $k\ge2$ be an integer. Then
\begin{eqnarray*}
c\left(A(k)\right)\le\frac{k-1}{k}c\left(A(k-1)\right).
\end{eqnarray*}
\end{thm}
\begin{proof}
Denote $\lambda=c\left(A(k-1)\right)$. Since $\mathrm{conv}(A(k-1))=\mathrm{conv}(A)$, from the definition of $c$, one knows that
$A(k-1)+\lambda\mathrm{conv}(A)=\mathrm{conv}(A)+\lambda\mathrm{conv}(A)=(1+\lambda)\mathrm{conv}(A)$. Using that $A(k)=\frac{A}{k}+\frac{k-1}{k}A(k-1)$, one has
\begin{eqnarray*}
A(k)+\frac{k-1}{k}\lambda\mathrm{conv}(A) &=& \frac{A}{k}+\frac{k-1}{k}A(k-1)+\frac{k-1}{k}\lambda\mathrm{conv}(A)\\
&=&\frac{A}{k}+\frac{k-1}{k}\mathrm{conv}(A)+\frac{k-1}{k}\lambda\mathrm{conv}(A)\\
&\supset&\frac{\mathrm{conv}(A)}{k}+\frac{k-1}{k}A(k-1)+\frac{k-1}{k}\lambda\mathrm{conv}(A)\\
&=&\frac{\mathrm{conv}(A)}{k}+\frac{k-1}{k}(1+\lambda)\mathrm{conv}(A)\\
&=&\left(1+\frac{k-1}{k}\lambda\right)\mathrm{conv}(A).
\end{eqnarray*}
Since the other inclusion is trivial, we deduce that $A(k)+\frac{k-1}{k}\lambda\mathrm{conv}(A)$ is convex which proves that
$$c(A(k))\le \frac{k-1}{k}\lambda=\frac{k-1}{k}c\left(A(k-1)\right).$$
\end{proof}
\begin{rem}
\begin{enumerate}
\item We do not know if $c$ is fractionally subadditive; for example, we do not know if
$2\,c(A+B+C)\le c(A+B)+c(A+C)+c(B+C)$. We know it with a better constant if $A=B=C$,
as a consequence of Theorem \ref{thm:c-quant-mono}. We also know it if we take a large enough number of sets; this is a consequence of the Shapley-Folkman lemma (Lemma~\ref{lem:SF}).
\item The Schneider index $c$ (as well as any other measure of non-convexity) cannot be submodular.
This is because, if we consider $A=\{0,1\}$, $B=C=[0,1]$, then $c(A+B)=c(A+C)=c(A+B+C)=0$ but $c(A)>0$, hence
\begin{eqnarray*}
c(A+B+C) + c(A) > c(A+B) + c(A+C).
\end{eqnarray*}
\end{enumerate}
\end{rem}
\subsection{Convergence rates for Schneider's non-convexity index}
\label{sec:c-rate}
We were unable to find any examination in the literature of rates, or indeed, even of
sufficient conditions for convergence as measured by $c$.
Let us discuss convergence in the Shapley-Folkman-Starr theorem
using the Schneider non-convexity index. In dimension 1, we can get an
$O(1/k)$ bound on $c(A(k))$ by using the close relation \eqref{eq:c-d-1d} between
$c$ and $d$ in this case. In general dimension, the same bound also holds: by applying Theorem~\ref{thm:c-quant-mono} inductively, we get the following theorem.
\begin{thm}\label{thm:c-rate}
Let $A$ be a compact set in ${\bf R}^n$. Then
\begin{eqnarray*}
c(A(k)) \leq \frac{c(A)}{k} .
\end{eqnarray*}
In particular, $c(A(k))\ra 0$ as $k\ra\infty$.
\end{thm}
Let us observe that the $O(1/k)$ rate of convergence cannot be improved,
either for $d$ or for $c$. To see this, simply consider the case where $A= \{0,1\}\subset {\bf R}$.
Then $A(k)$ consists of the $k+1$ equispaced points $j/k$, where $j\in \{0, 1, \ldots, k\}$,
and $c(A(k))=2d(A(k))= 1/k$ for every $k\in\mathbb{N}$.
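For this example the computation can be carried out exactly in rational arithmetic. The following Python sketch (ours, purely illustrative) builds $A(k)$ by iterated Minkowski summation and uses the dimension-one formulas $c(A)=\mathrm{maxgap}(A)/\mathrm{Vol}_1(\mathrm{conv}(A))$ and $d(A)=\mathrm{maxgap}(A)/2$, valid for finite subsets of ${\bf R}$.

```python
from fractions import Fraction

A = [0, 1]
S = {0}
cs, ds = [], []
for k in range(1, 11):
    S = {s + a for s in S for a in A}         # k-fold Minkowski sum A + ... + A
    Ak = sorted(Fraction(s, k) for s in S)    # A(k) = (A + ... + A)/k
    gap = max(b - a for a, b in zip(Ak, Ak[1:]))
    cs.append(gap / (Ak[-1] - Ak[0]))         # c(A(k)) = maxgap / |conv(A(k))|
    ds.append(gap / 2)                        # d(A(k)) = maxgap / 2

assert cs == [Fraction(1, k) for k in range(1, 11)]
assert all(c == 2 * d for c, d in zip(cs, ds))
```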
\section{The behavior of the effective standard deviation $v$}
\label{sec:r}
In this section we study the effective standard deviation $v$. Recall its definition: for $A$ compact in ${\bf R}^n$,
\begin{eqnarray*}
v^2(A) = \sup_{x\in\mathrm{conv}(A)}\inf\{\sum p_i |a_i-x|^2: x=\sum p_i a_i; p_i >0; \sum p_i=1, a_i \in A \}.
\end{eqnarray*}
\subsection{Subadditivity of $v^2$}
Cassels \cite{Cas75} showed that $v^2$ is subadditive.
\begin{thm}[\cite{Cas75}]\label{thm:v-subadd}
Let $A,B$ be compact sets in ${\bf R}^n$. Then,
\begin{eqnarray*}
v^2(A+B) \leq v^2(A)+v^2(B).
\end{eqnarray*}
\end{thm}
\begin{proof}
Recall that $v(A)=\sup_{x\in\mathrm{conv}(A)}v_A(x)$, where
$$v_A^2(x)=\inf\{\sum_{i\in I} \lambda_i |a_i-x|^2: (\lambda_i,a_i)_{i\in I}\in\Theta_A(x)\}, $$
and $\Theta_A(x)=\{(\lambda_i,a_i)_{i\in I}: I\ \hbox{finite},\ x=\sum \lambda_i a_i; \lambda_i>0; \sum \lambda_i=1, a_i \in A \}.$
Thus
$$v(A+B)=\sup_{x\in\mathrm{conv}(A+B)}v_{A+B}(x)=\sup_{x_1\in\mathrm{conv}(A)}\sup_{x_2\in\mathrm{conv}(B)}v_{A+B}(x_1+x_2).$$
And one has
$$v_{A+B}^2(x_1+x_2)=\inf\{\sum_{i\in I} \nu_i |c_i-x_1-x_2|^2: (\nu_i,c_i)_{i\in I}\in\Theta_{A+B}(x_1+x_2)\}.$$
For $(\lambda_i,a_i)_{i\in I}\in\Theta_A(x_1)$ and $(\mu_j,b_j)_{j\in J}\in\Theta_B(x_2)$ one has
$$(\lambda_i\mu_j,a_i+b_j)_{(i,j)\in I\times J}\in\Theta_{A+B}(x_1+x_2), $$
and
\begin{equation}\label{eq:var-add}
\begin{aligned}
&\sum_{(i,j)\in I\times J}\lambda_i\mu_j |a_i+b_j-x_1-x_2|^2 \\
&=\sum_{i\in I} \lambda_i |a_i-x_1|^2+\sum_{j\in J} \mu_j |b_j-x_2|^2 + 2\sum_{(i,j)\in I\times J}\lambda_i\mu_j \langle a_i-x_1, b_j-x_2\rangle \\
&=\sum_{i\in I} \lambda_i |a_i-x_1|^2+\sum_{j\in J} \mu_j |b_j-x_2|^2 + 2 \langle \sum_{i\in I} \lambda_i a_i-x_1, \sum_{j\in J}\mu_j b_j-x_2\rangle \\
&= \sum_{i\in I} \lambda_i |a_i-x_1|^2+\sum_{j\in J} \mu_j |b_j-x_2|^2 .
\end{aligned}
\end{equation}
Thus
\begin{eqnarray*}\begin{split}
v_{A+B}^2(x_1+x_2)&\le \inf_{(\lambda_i,a_i)_{i\in I}\in\Theta_A(x_1)}\inf_{(\mu_j,b_j)_{j\in J}\in\Theta_B(x_2)}\sum_{i\in I} \lambda_i |a_i-x_1|^2+\sum_{j\in J} \mu_j |b_j-x_2|^2 \\
&=v_A^2(x_1)+v_B^2(x_2).
\end{split}\end{eqnarray*}
Taking the supremum in $x_1\in\mathrm{conv}(A)$ and $x_2\in\mathrm{conv}(B)$, we conclude.
\end{proof}
Observe that we may interpret the proof probabilistically. Indeed, a key point in the proof is the identity \eqref{eq:var-add},
which is just the fact that the variance of a sum of independent random variables is the sum of the individual variances
(written out explicitly for readability).
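For a finite set $A\subset{\bf R}$, $v(A)$ admits a closed form that makes Theorem~\ref{thm:v-subadd} easy to test: for $x$ in a gap $(a,b)$ between consecutive points of $A$, a minimizing measure may be taken with at most two atoms (a basic solution of the underlying linear program), the bracketing pair is optimal, and the resulting variance is $(x-a)(b-x)$; maximizing over $x$ gives $v(A)=\mathrm{maxgap}(A)/2$. The following Python sketch (ours, purely illustrative) uses this formula to check subadditivity of $v^2$ on random finite sets.

```python
import random

def v_1d(pts):
    # v(A) for a finite A in R: for x in a gap (a, b) the minimal variance of a
    # measure on A with barycenter x is (x-a)(b-x), maximized at the midpoint,
    # so v(A) = (largest gap)/2.
    pts = sorted(set(pts))
    return max((b - a for a, b in zip(pts, pts[1:])), default=0.0) / 2

def msum(P, Q):
    return {p + q for p in P for q in Q}

random.seed(2)
violations = 0
for _ in range(300):
    A, B = ({round(random.uniform(0, 4), 3) for _ in range(5)} for _ in range(2))
    if v_1d(msum(A, B)) ** 2 > v_1d(A) ** 2 + v_1d(B) ** 2 + 1e-9:
        violations += 1
assert violations == 0
```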
\subsection{Strong fractional subadditivity for large $k$}
In this section, we prove that the effective standard deviation $v$ satisfies a strong fractional subadditivity property when the number of sets considered is sufficiently large.
\begin{thm}\label{v-large-k}
Let $A_1, \dots, A_k$ be compact sets in ${\bf R}^n$, with $k \geq n + 1$. Then,
$$ v\left(\sum_{i \in [k]} A_i\right) \leq \max_{I \subset [k]: |I| \leq n} \min_{i \in [k] \setminus I} v\left(\sum_{j \in [k] \setminus \{i\}} A_j \right). $$
\end{thm}
\begin{proof}
Let $x \in \mathrm{conv}(\sum_{i\in [k]} A_i)$, where $k \geq n + 1$. By using the Shapley-Folkman lemma (Lemma~\ref{lem:SF}), there exists a set $I$ of at most $n$ indices such that
$$
x \in \sum_{i\in I} \mathrm{conv}(A_i) + \sum_{i \in [k] \setminus I} A_i.
$$
Let $i_0 \in [k] \setminus I$. In particular, we have
$$ x \in \mathrm{conv}\bigg(\sum_{i \in [k] \setminus \{i_0\}} A_i\bigg) + A_{i_0}. $$
Hence, by definition of the convex hull,
$$ x = \sum_m p_m a_m + a_{i_0} = z + a_{i_0}, $$
where $z=\sum_m p_m a_m$, $\sum_m p_m =1$, $a_m \in \sum_{i \in [k] \setminus \{i_0\}} A_i$ and $a_{i_0} \in A_{i_0}$. Thus, by denoting $A_{\{i_0\}} = \sum_{i \in [k] \setminus \{i_0\}} A_i$, we have
\begin{eqnarray*}
v_{A_{\{i_0\}}}^2(z) & = & \inf \bigg\{ \sum_m p_m |a_m - z|^2 : z = \sum_m p_m a_m; \sum_m p_m = 1; a_m \in A_{ \{i_0\} } \bigg\} \\
& = & \inf \bigg\{ \sum_m p_m |a_m + a_{i_0} - (z + a_{i_0})|^2 : z = \sum_m p_m a_m; \sum_m p_m = 1; a_m \in A_{ \{i_0\}} \bigg\} \\
& \geq & \inf \bigg\{ \sum_m p_m |a_m^* - (z + a_{i_0})|^2 : z + a_{i_0} = \sum_m p_m a_m^* ; \sum_m p_m = 1 ; a_m^* \in \sum_{i \in [k]} A_i \bigg\} \\
& = & v_{\sum_{i \in [k]} A_i}^2(x).
\end{eqnarray*}
Since $v_{A_{\{i_0\}}}(z) \leq \sup_{z' \in \mathrm{conv}(A_{\{i_0\}})} v_{A_{\{i_0\}}}(z') = v(A_{\{i_0\}})$, we deduce that
$$ v_{\sum_{i \in [k]} A_i}(x) \leq v\bigg(\sum_{i \in [k] \setminus \{i_0\}} A_i\bigg). $$
Since this is true for every $i_0 \in [k] \setminus I$, we deduce that
$$ v_{\sum_{i \in [k]} A_i}(x) \leq \min_{i \in [k] \setminus I} v\bigg(\sum_{j \in [k] \setminus \{i\}} A_j\bigg). $$
Bounding the minimum by the maximum over all sets $I \subset [k]$ of cardinality at most $n$ yields
$$ v_{\sum_{i \in [k]} A_i}(x) \leq \max_{I \subset [k]: |I| \leq n} \min_{i \in [k] \setminus I} v\bigg(\sum_{j \in [k] \setminus \{i\}} A_j\bigg). $$
We conclude by taking the supremum over all $x \in \mathrm{conv}(\sum_{i\in [k]} A_i)$.
\end{proof}
An immediate consequence of Theorem~\ref{v-large-k} is that if $k \geq n + 1$, then
$$ v\left(\sum_{i \in [k]} A_i\right) \leq \max_{i \in [k]} v\left(\sum_{j \in [k] \setminus \{i\}} A_j \right). $$
By iterating this fact as many times as possible (i.e., as long as the number of sets is at least $n+1$), we obtain
the following corollary.
\begin{cor}\label{cor:v-large-k}
Let $A_1, \dots, A_k$ be compact sets in ${\bf R}^n$, with $k \geq n + 1$. Then,
$$ v\left(\sum_{i \in [k]} A_i\right) \leq \max_{I \subset [k]: |I| = n} v\left(\sum_{j \in I} A_j \right). $$
\end{cor}
In the case where $A_1 = \cdots = A_k = A$, we can repeat the above argument with $k\ge c(A)+1$ to prove that in this case,
$$ v(A(k)) \leq \frac{k-1}{k} v(A(k-1)), $$
where $c(A)$ is the Schneider non-convexity index of $A$. Since $c(A) \leq n$, and $c(A) \leq n-1$ when $A$ is connected, we deduce the following monotonicity property for the effective standard deviation.
\begin{cor}
\begin{enumerate}
\item In dimension 1 and 2, the sequence $v(A(k))$ is non-increasing for every compact set $A$.
\item In dimension 3, the sequence $v(A(k))$ is non-increasing for every compact and connected set $A$.
\end{enumerate}
\end{cor}
\begin{rem}
It follows from the above study that if a compact set $A \subset {\bf R}^n$ satisfies $c(A) \leq 2$, then the sequence $v(A(k))$ is non-increasing. One can see that if a compact set $A \subset {\bf R}^n$ contains the boundary of its convex hull, then $c(A) \leq 1$; for such a set $A$, the sequence $v(A(k))$ is non-increasing.
\end{rem}
\subsection{Convergence rates for $v$}
It is classical that one has convergence in $v$ at good rates.
\begin{thm}[\cite{Cas75}]\label{thm:cassels-v-gen}
Let $A_1, \ldots, A_k$ be compact sets in ${\bf R}^n$. Then
\begin{eqnarray*}
v(A_1 +\cdots + A_k) \leq \sqrt{\min\{k,n\}} \, \max_{i\in [k]} v(A_i) .
\end{eqnarray*}
\end{thm}
\begin{proof}
Firstly, by using subadditivity of $v^2$ (Theorem \ref{thm:v-subadd}), one has
$$ v^2(A_1 + \cdots + A_k) \leq k \max_{i \in [k]} v^2(A_i). $$
Hence, $v(A_1 + \cdots + A_k) \leq \sqrt{k} \max_{i \in [k]} v(A_i)$.
If $k\geq n+1$, we can improve this bound using Corollary~\ref{cor:v-large-k}, which gives us
\begin{eqnarray*}\begin{split}
v^2\left(\sum_{i \in [k]} A_i\right)
&\leq \max_{I \subset [k]: |I| = n} v^2\left(\sum_{j \in I} A_j \right)\\
&\leq \max_{I \subset [k]: |I| = n} \sum_{j \in I} v^2(A_j)\\
&\leq n \max_{i \in I} v^2(A_i) \leq n \max_{i \in [k]} v^2(A_i) ,
\end{split}\end{eqnarray*}
again using subadditivity of $v^2$ for the second inequality.
\end{proof}
By considering $A_1=\cdots=A_k=A$, one obtains the following convergence rate.
\begin{cor}\label{cor:vsfs}
Let $A$ be a compact set in ${\bf R}^n$. Then,
\begin{eqnarray*}
v(A(k))\leq \min\bigg\{ \frac{1}{\sqrt{k}}, \frac{\sqrt{n}}{k} \bigg\} v(A).
\end{eqnarray*}
\end{cor}
\section{The behavior of the Hausdorff distance from the convex hull}
\label{sec:d}
In this section we study the Hausdorff distance from the convex hull. Recall its definition: for a compact convex set $K \subset {\bf R}^n$ containing $0$ in its interior and a compact set $A \subset {\bf R}^n$,
\begin{eqnarray*}
d^{(K)}(A)= \inf\{r>0: \mathrm{conv}(A)\subset A+rK\} .
\end{eqnarray*}
\subsection{Some basic properties of the Hausdorff distance}
\label{ss:d-mono-gen}
The Hausdorff distance is subadditive.
\begin{thm}\label{d-additive}
Let $A, B$ be compact sets in ${\bf R}^n$, and $K$ be an arbitrary convex body containing 0 in its interior. Then
\begin{eqnarray*}
d^{(K)}(A+B) \leq d^{(K)}(A)+d^{(K)}(B) .
\end{eqnarray*}
\end{thm}
\begin{proof}
The convexity of $K$ implies that
\begin{eqnarray*}
A+B + (d^{(K)}(A)+d^{(K)}(B)) K = A + d^{(K)}(A)K + B + d^{(K)}(B)K,
\end{eqnarray*}
but since $A + d^{(K)}(A)K \supset \mathrm{conv}(A)$ and $B + d^{(K)}(B)K \supset \mathrm{conv}(B)$ by definition, we have
\begin{eqnarray*}
A+B + (d^{(K)}(A)+d^{(K)}(B)) K \supset \mathrm{conv}(A) +\mathrm{conv}(B) = \mathrm{conv}(A+B).
\end{eqnarray*}
\end{proof}
We can provide a slight further strengthening of Theorem \ref{d-additive} when dealing with Minkowski sums of more than 2 sets, by following an argument similar to that used for Schneider's non-convexity index.
\begin{thm}\label{thm:d-3sum}
Let $A, B, C$ be compact sets in ${\bf R}^n$, and $K$ be an arbitrary convex body containing 0 in its interior. Then
\begin{eqnarray*}
d^{(K)}(A+B+C) \leq d^{(K)}(A+B)+d^{(K)}(B+C).
\end{eqnarray*}
\end{thm}
\begin{proof}
Notice that
\begin{eqnarray*}\begin{split}
&A+B+C+\big(d^{(K)}(A+B)+d^{(K)}(B+C)\big)K \\
&= A+B+d^{(K)}(A+B) K + C + d^{(K)}(B+C) K \\
&\supset \mathrm{conv}(A+B)+C+d^{(K)}(B+C) K\\
&\supset \mathrm{conv}(A)+B+C+d^{(K)}(B+C) K\\
&\supset \mathrm{conv}(A)+ \mathrm{conv}(B+C)\\
&= \mathrm{conv}(A+B+C).
\end{split}\end{eqnarray*}
\end{proof}
In particular, Theorem~\ref{thm:d-3sum} implies that
\begin{eqnarray*}
d^{(K)}\left(\sum_{l\in[k]}A_l\right) \leq 2 \max_{i\in[k]}d^{(K)}\left(\sum_{l\in[k]\setminus\{i\}}A_l\right) ,
\end{eqnarray*}
and, when the sets are the same,
\begin{eqnarray}\label{eq:d-part-mono}
d^{(K)}(A(k))\leq 2\frac{k-1}{k} d^{(K)}(A(k-1)).
\end{eqnarray}
While not proving monotonicity of $d^{(K)}(A(k))$, the inequality \eqref{eq:d-part-mono} does provide a bound on
the extent of non-monotonicity of the sequence in general dimension.
\subsection{The Dyn--Farkhi conjecture}
Dyn and Farkhi \cite{DF04} conjectured that
\begin{eqnarray}\label{eq:sub-d}
d^2(A+B) \leq d^2(A)+d^2(B).
\end{eqnarray}
The next theorem shows that the above conjecture is false in ${\bf R}^n$ for $n\ge 3$.
\begin{thm}\label{thm:DF}
Let $q\geq 0$. The inequality
$$
d^q(A+B) \le d^q(A)+d^q(B),
$$
holds for all compact sets $A, B \subset {\mathbb R}^3$ if and only if $q \le 1$.
\end{thm}
\begin{proof}
We have already seen that the inequality holds for $q=1$ and thus the inequality holds when $0\le q\le1$.
Let $q\geq 0$ be such that the inequality holds for all compact sets $A$ and $B$. Let $A=A_1 \cup A_2$,
where $A_1$ and $A_2$ are intervals such that $A_1=[(0,0,0), (1,0,-f)]$ and $A_2=[(0,0,0), (1,0, f)]$, and $f>0$ is a large number to be selected.
Let $B=B_1 \cup B_2$, where $B_1$ and $B_2$ are intervals such that $B_1=[(0,0,0), (1,-f,0)]$ and $B_2=[(0,0,0), (1,f, 0)]$.
Note that $(1,0,0)$ belongs to both ${\rm conv}(A)$ and ${\rm conv}(B)$. It is easy to see, using two-dimensional considerations, that
$$d(B)=d(A)=d_A(1,0,0)=\frac{f}{\sqrt{1+f^2}}\le 1.$$
Next we notice that
$$
A+B=\bigcup \limits_{i,j \in \{1,2\}} (A_i + B_j).
$$
Thus, the points in $A+B$ can be parametrized by
$$
t (1, \pm f, 0) + s(1, 0, \pm f),
$$
where $t,s \in [0,1]$. We note that $(2,0,0) \in {\rm conv}(A+B)$ and
$$
d(A+B)\ge d_{A+B}(2,0,0)=\min\limits_{t,s \in [0,1]}\sqrt{ (2-(t+s))^2 +f^2(s^2+t^2)}=\frac{2f}{\sqrt{f^2+2}} .
$$
Note that as $f \to \infty$, this lower bound tends to $2$, while $d(A)=d(B)\to 1$. So if the inequality
$$
d^q(A+B) \le d^q(A)+d^q(B)
$$
holds, then letting $f \to \infty$ yields $2^q\le 2$, and thus $q \le 1$.
\end{proof}
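The two closed-form distances used in this proof can be double-checked by brute-force minimization over a grid. The following Python sketch (ours, purely illustrative) does so for a moderate value of $f$ and confirms that the exponent $q=2$ already fails for this pair of sets.

```python
import math

f = 50.0
# d(A) = d(B): distance from (1,0,0) to the nearest segment t*(1,0,f), t in [0,1]
dA = min(math.hypot(1 - t, t * f) for t in (i / 10000 for i in range(10001)))
assert abs(dA - f / math.sqrt(1 + f**2)) < 1e-3

# lower bound d(A+B) >= dist((2,0,0), A+B), with A+B parametrized by
# t(1, +-f, 0) + s(1, 0, +-f), t, s in [0, 1]
dAB = min(math.sqrt((2 - t - s)**2 + f**2 * (s**2 + t**2))
          for t in (i / 400 for i in range(401))
          for s in (j / 400 for j in range(401)))
assert abs(dAB - 2 * f / math.sqrt(f**2 + 2)) < 1e-2

# d^2(A+B) > d^2(A) + d^2(B): the exponent q = 2 is impossible
assert dAB**2 > dA**2 + dA**2
```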
\begin{rem}
\begin{enumerate}
\item
Note that the above example remains valid if we consider the $\ell_p$ metric, $p\ge 1$, instead of the $\ell_2$ metric.
Indeed, $d^{(B_p^3)}(A)=d^{(B_p^3)}(B)\le 1$, and we may compute the $\ell_p$ distance from $(2,0,0)$ to $A+B$ as
$$
\min\limits_{t,s \in [0,1]}\left((2-(t+s))^p +f^p(s^p+t^p)\right)^\frac{1}{p} .
$$
If $f \to \infty$, then to minimize the above, we must again select $s, t$ close to zero,
and thus the distance tends to $2$. This shows that if the inequality
$$
(d^{(B_p^3)})^q(A+B) \le (d^{(B_p^3)})^q(A)+(d^{(B_p^3)})^q(B)
$$
holds for all $A, B \subset {\bf R}^3$, then $q \le 1$.
\item As shown by Wegmann \cite{Weg80}, if the set $A$ is such that the supremum in the definition of $v(A)$
is achieved at a point in the relative interior of $\mathrm{conv}(A)$, then $d(A)=v(A)$. Thus Theorem~\ref{thm:v-subadd} implies the following statement:
If $A, B$ are compact sets in ${\bf R}^n$ such that the supremum in the definition of $v(A)$
is achieved at a point in the relative interior of $\mathrm{conv}(A)$, and likewise for $B$, then
\begin{eqnarray*}
d^2(A+B) \leq d^2(A)+d^2(B) .
\end{eqnarray*}
\item We emphasize that the conjecture is still open in the case $A=B$.
In this case, the Dyn-Farkhi conjecture is equivalent to
\begin{eqnarray*}
d\left(\frac{A+A}{2}\right) \leq \frac{d(A)}{\sqrt{2}}.
\end{eqnarray*}
If $c_n$ is the best constant such that $d(\frac{A+A}{2}) \leq c_n d(A)$ for all compact sets $A$ in dimension $n$,
then one has
\begin{eqnarray*}
c_n\ge \sqrt{\frac{n-1}{2n}}
\end{eqnarray*}
for $n\ge2$. This can be seen from the example where $A=\{a_1,\cdots, a_{n+1}\}$ is a set of $n+1$ vertices of a regular simplex in ${\bf R}^n$, $n\ge2$.
For this example, it is not difficult to see that $d(A)=|g-a_1|$, where $g=(a_1+\cdots +a_{n+1})/(n+1)$ is the center of mass of $A$ and $d(\frac{A+A}{2})=|g-\frac{a_1+a_2}{2}|$. Then, one easily concludes that
\begin{eqnarray*}
\frac{d(\frac{A+A}{2})}{d(A)}=\frac{|g-\frac{a_1+a_2}{2}|}{|g-a_1|}=\sqrt{\frac{n-1}{2n}}.
\end{eqnarray*}
Thus we get $\sup_n c_n \ge\frac{1}{\sqrt{2}}$, while the Dyn-Farkhi conjecture amounts to $\sup_n c_n \le\frac{1}{\sqrt{2}}$.
\item Notice that there is another interpretation of $d(A)$ as the radius of the largest empty circle of $A$,
i.e., the radius of the largest circle centered at a point of $\mathrm{conv}(A)$ and containing no point of $A$ in its interior (see \cite{Sch08},
where the relevance of this notion to planning new store locations and toxic waste dump locations is explained). Indeed, this radius is equal to
\begin{eqnarray*}
\sup\{R; \exists x\in\mathrm{conv}(A); |x-a|\ge R, \forall a\in A\}=\sup\{R; \sup_{x\in\mathrm{conv}(A)} \inf_{a\in A}|x-a|\ge R\}=d(A).
\end{eqnarray*}
\end{enumerate}
\end{rem}
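The simplex ratio in item (3) of the remark is easy to verify numerically. The following sketch (Python; it uses the standard realization of the regular simplex with vertices $e_1,\dots,e_{n+1}\subset{\bf R}^{n+1}$) checks the displayed identity $d(\frac{A+A}{2})/d(A)=\sqrt{(n-1)/(2n)}$ for small $n$:

```python
import math

def simplex_ratio(n):
    # vertices of a regular simplex in R^n, realized as e_1,...,e_{n+1} in R^{n+1}
    verts = [[1.0 if i == j else 0.0 for i in range(n + 1)] for j in range(n + 1)]
    g = [1.0 / (n + 1)] * (n + 1)                  # center of mass of the vertices
    d_A = math.dist(g, verts[0])                   # = |g - a_1| = d(A)
    mid = [(a + b) / 2 for a, b in zip(verts[0], verts[1])]
    d_half = math.dist(g, mid)                     # = |g - (a_1+a_2)/2| = d((A+A)/2)
    return d_half / d_A

for n in range(2, 10):
    assert abs(simplex_ratio(n) - math.sqrt((n - 1) / (2 * n))) < 1e-12
```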
\subsection{Strong fractional subadditivity for large $k$}
In this section, as we did for the effective standard deviation $v$, we prove that the Hausdorff distance from the convex hull, $d^{(K)}$,
satisfies a strong fractional subadditivity when sufficiently many sets are involved.
\begin{thm}\label{thm:d-large-k}
Let $K$ be an arbitrary convex body containing 0 in its interior. Let $A_1, \dots, A_k$ be compact sets in ${\bf R}^n$, with $k \geq n + 1$. Then,
$$ d^{(K)}\left(\sum_{i \in [k]} A_i \right) \leq \max_{I \subset [k]: |I| \leq n} \min_{i \in [k] \setminus I} d^{(K)}\left(\sum_{j \in [k] \setminus \{i\}} A_j \right). $$
\end{thm}
\begin{proof}
Let $x \in \mathrm{conv}(\sum_{i \in [k]} A_i)$. By using the Shapley-Folkman lemma (Lemma~\ref{lem:SF}), there exists a set $I \subset [k]$ of cardinality at most $n$ such that
$$ x \in \sum_{i \in I} \mathrm{conv}(A_i) + \sum_{i \in [k] \setminus I} A_i. $$
Let $i_0 \in [k] \setminus I$. In particular, we have
$$ x \in \sum_{i \in [k] \setminus \{i_0\}} \mathrm{conv}(A_i) + A_{i_0}. $$
Thus,
$$ x = \sum_{i \in [k] \setminus \{i_0\}} x_i + x_{i_0} = z + x_{i_0}, $$
for some $x_i \in \mathrm{conv}(A_i)$, $i \in [k] \setminus \{i_0\}$, and some $x_{i_0} \in A_{i_0}$, where $z = \sum_{i \in [k] \setminus \{i_0\}} x_i$. Hence,
\begin{eqnarray*}
d^{(K)}_{\sum_{i \in [k] \setminus \{i_0\}} A_i}(z) & = & \inf_{a \in \sum_{i \in [k] \setminus \{i_0\}} A_i} \|z-a\|_K \\ & = & \inf_{a \in \sum_{i \in [k] \setminus \{i_0\}} A_i} \|z + x_{i_0} - (a + x_{i_0}) \|_K \\ & \geq & \inf_{a^* \in \sum_{i \in [k]} A_i} \|z + x_{i_0} - a^* \|_K \\ & = & d^{(K)}_{\sum_{i \in [k]} A_i}(x).
\end{eqnarray*}
Taking supremum over all $z \in \mathrm{conv}(\sum_{i \in [k] \setminus \{i_0\}} A_i)$, we deduce that
$$ d^{(K)}_{\sum_{i \in [k]} A_i}(x) \leq d^{(K)}(\sum_{i \in [k] \setminus \{i_0\}} A_i). $$
Since this is true for every $i_0 \in [k] \setminus I$, we deduce that
$$ d^{(K)}_{\sum_{i \in [k]} A_i}(x) \leq \min_{i \in [k] \setminus I} d^{(K)}(\sum_{j \in [k] \setminus \{i\}} A_j). $$
Taking the maximum over all sets $I \subset [k]$ of cardinality at most $n$ yields
$$ d^{(K)}_{\sum_{i \in [k]} A_i}(x) \leq \max_{I \subset [k]: |I| \leq n} \min_{i \in [k] \setminus I} d^{(K)}(\sum_{j \in [k] \setminus \{i\}} A_j). $$
We conclude by taking the supremum over all $x \in \mathrm{conv}(\sum_{i\in [k]} A_i)$.
\end{proof}
In the case where $A_1 = \cdots = A_k = A$, we can use the above argument to prove that for $k \geq c(A) + 1$,
$$ d^{(K)}(A(k)) \leq \frac{k-1}{k} d^{(K)}(A(k-1)), $$
where $c(A)$ is the Schneider non-convexity index of $A$. Since $c(A) \leq n$, and $c(A) \leq n-1$ when $A$ is connected,
we deduce the following monotonicity property for the Hausdorff distance to the convex hull.
\begin{cor}
Let $K$ be an arbitrary convex body containing 0 in its interior. Then,
\begin{enumerate}
\item In dimensions 1 and 2, the sequence $d^{(K)}(A(k))$ is non-increasing for every compact set $A$.
\item In dimension 3, the sequence $d^{(K)}(A(k))$ is non-increasing for every compact and connected set $A$.
\end{enumerate}
\end{cor}
\begin{rem}
It follows from the above study that if a compact set $A \subset {\bf R}^n$ satisfies $c(A) \leq 2$, then the sequence $d^{(K)}(A(k))$ is non-increasing.
One can see that if a compact set $A \subset {\bf R}^n$ contains the boundary of its convex hull, then $c(A) \leq 1$; for such a set $A$,
the sequence $d^{(K)}(A(k))$ is non-increasing.
\end{rem}
It is useful to also record a simplified version of Theorem~\ref{thm:d-large-k}.
\begin{cor}\label{cor:d-large-k}
Let $K$ be an arbitrary convex body containing 0 in its interior. Let $A_1, \dots, A_k$ be compact sets in ${\bf R}^n$, with $k \geq n + 1$. Then,
$$ d^{(K)}\left(\sum_{i \in [k]} A_i\right) \leq \max_{I \subset [k]: |I| = n} d^{(K)}\left(\sum_{i \in I} A_i \right) \leq n \max_{i\in [k]} d^{(K)}(A_i) .$$
\end{cor}
\begin{proof}
By Theorem~\ref{thm:d-large-k}, provided $k>n$, we have in particular
\begin{eqnarray*}
d^{(K)}\left(\sum_{i \in [k]} A_i\right) \leq \max_{i\in [k]} d^{(K)}\left(\sum_{j \neq i} A_j \right) .
\end{eqnarray*}
Iterating the same argument as long as possible, we have that
\begin{eqnarray*}
d^{(K)}\left(\sum_{i \in [k]} A_i\right) \leq \max_{I \subset [k]: |I| = n} d^{(K)}\left(\sum_{j \in I} A_j \right) ,
\end{eqnarray*}
which is the first desired inequality. Applying the subadditivity property of $d^{(K)}$ (namely, Theorem~\ref{d-additive}),
we immediately have the second desired inequality.
\end{proof}
While Corollary~\ref{cor:d-large-k} does not seem to have been explicitly written down before, it appears to have
been first discovered by V. Grinberg (personal communication).
\subsection{Convergence rates for $d$}
\label{sec:d-rate}
Let us first note that having proved convergence rates for $v(A(k))$, we automatically
inherit convergence rates for $d^{(K)}(A(k))$ as a consequence of Lemma \ref{lem:d-d^K}, Theorem \ref{thm:weg} and Corollary \ref{cor:vsfs}.
\begin{cor}\label{d-rate-v-rate} Let $K$ be an arbitrary convex body containing 0 in its interior. For any compact set $A\subset {\bf R}^n$,
\begin{eqnarray*}
d^{(K)}(A(k))\leq \frac{1}{r}\min\bigg\{ \frac{1}{\sqrt{k}}, \frac{\sqrt{n}}{k} \bigg\} v(A),
\end{eqnarray*}
where $r>0$ is such that $rB_2^n \subset K$.
\end{cor}
For the Euclidean norm (i.e., $K=B_2^n$), this goes back to \cite{Sta69, Cas75}.
Although we have a strong convergence result for $d^{(K)}(A(k))$ as a consequence of that for $v(A(k))$, we give below another estimate of $d^{(K)}(A(k))$ in terms of $d^{(K)}(A)$, instead of $v(A)$.
\begin{thm}\label{d-easy-rate} For any compact set $A\subset {\bf R}^n$,
\begin{eqnarray*}
d^{(K)}(A(k)) \leq \min\left\{1,\frac{\lceil c(A) \rceil}{k}\right\}d^{(K)}(A).
\end{eqnarray*}
\end{thm}
\begin{proof}
As a consequence of Theorem \ref{d-additive}, we always have $d^{(K)}(A(k)) \leq d^{(K)}(A)$. Now consider $k \geq c(A) + 1$, and notice that
\begin{eqnarray*}
kA(k) + \lceil c(A) \rceil d^{(K)}(A)K \supset (k- \lceil c(A) \rceil )A(k - \lceil c(A) \rceil) + \lceil c(A) \rceil \mathrm{conv}(A) = \mathrm{conv}(kA(k)).
\end{eqnarray*}
Hence $d^{(K)}(kA(k)) \leq \lceil c(A) \rceil d^{(K)}(A)$, or equivalently, $d^{(K)}(A(k)) \leq \frac{\lceil c(A) \rceil d^{(K)}(A)}{k}$.
\end{proof}
Using the fact that $c(A) \leq n$ for every compact set $A \subset {\bf R}^n$, we deduce that
$$ d^{(K)}(A(k)) \leq \min\left\{1,\frac{n}{k}\right\}d^{(K)}(A). $$
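As a concrete check of this rate, take $A=\{0,1\}\subset{\bf R}$ and $K=[-1,1]$ (so $d^{(K)}=d$, $n=1$, and $c(A)\le 1$): then $A(k)=\{0,1/k,\dots,1\}$ and $d(A(k))=1/(2k)$, so the bound holds with equality. A short numerical sketch (Python; the grid approximation of the supremum is our shortcut):

```python
def d_of_average(k, grid=10000):
    # A = {0,1} in R, so A(k) = {0, 1/k, ..., 1} and conv(A(k)) = [0,1];
    # d(A(k)) = sup over x in [0,1] of min_j |x - j/k|, estimated on a fine grid
    pts = [j / k for j in range(k + 1)]
    return max(min(abs(x / grid - p) for p in pts) for x in range(grid + 1))

d_A = d_of_average(1)                            # = 1/2
for k in range(1, 8):
    dk = d_of_average(k)
    assert abs(dk - 1 / (2 * k)) < 1e-3          # equals ceil(c(A)) * d(A) / k
    assert dk <= min(1, 1 / k) * d_A + 1e-9      # the bound d(A(k)) <= min(1, n/k) d(A)
```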
\section{Connections to discrepancy theory}
\label{sec:discrep}
The ideas in this section have close connections to the area sometimes known as ``discrepancy theory'',
which has arisen independently in the theory of Banach spaces, combinatorics, and computer science.
It should be emphasized that there are two distinct but related areas that go by the name of discrepancy theory.
The first, discussed in this section and sometimes called
``combinatorial discrepancy theory'' for clarity, was likely originally motivated by questions related to absolute versus unconditional
versus conditional convergence for series in Banach spaces. The second, sometimes called
``geometric discrepancy theory'' for clarity, is related to how well a finite set of points can approximate a uniform distribution on
(say) a cube in ${\bf R}^n$. Our discussion here concerns the former; the interested reader may consult \cite{Tra14:book} for more on the latter.
When looked at deeper, however, combinatorial discrepancy theory is also related to the ability to discretely approximate ``continuous''
objects. For example, a famous result of Spencer \cite{Spe85} says that given any collection $\{S_1,\ldots,S_n\}$ of subsets of $[n]$, it is possible to color the
elements of $[n]$ with two colors (say, red and blue) such that
\begin{eqnarray*}
\bigg| |S_i\cap R| - \frac{|S_i|}{2} \bigg|\leq 3\sqrt{n},
\end{eqnarray*}
for each $i\in [n]$, where $R\subset [n]$ is the set of red elements. As explained, for example, by Srivastava \cite{Sri13:blog}:
\begin{quote}
In other words, it is possible to partition $[n]$ into two subsets
so that this partition is very close to balanced on {\it each one} of the test sets $S_i$.
Note that a ``continuous'' partition which splits each element exactly in half will be exactly balanced on each $S_i$;
the content of Spencer's theorem is that we can get very close to this ideal situation with an actual, discrete partition
which respects the wholeness of each element.
\end{quote}
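Spencer's bound is non-constructive in its original form, but even a purely random coloring illustrates the statement for moderate $n$, since it typically achieves imbalance $O(\sqrt{n\log n})$. The following sketch (Python; the random instance, the seed, and the best-of-$200$ heuristic are ours, and this is not Spencer's partial-coloring argument) checks the $3\sqrt{n}$ bound on a random set system:

```python
import random, math

random.seed(0)
n = 100
# a random collection of n subsets of [n]
sets = [random.sample(range(n), random.randint(1, n)) for _ in range(n)]

def max_imbalance(red):
    # worst deviation | |S_i ∩ R| - |S_i|/2 | over all test sets
    return max(abs(sum(1 for j in S if j in red) - len(S) / 2) for S in sets)

# best of a few random colorings; a random coloring typically achieves
# O(sqrt(n log n)), already within Spencer's 3*sqrt(n) for moderate n
best = min(max_imbalance({j for j in range(n) if random.random() < 0.5})
           for _ in range(200))
assert best <= 3 * math.sqrt(n)
```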
Indeed, Srivastava also explains how the recent celebrated results of Marcus, Spielman and Srivastava \cite{MSS15:1, MSS15:2}
that resulted in the solution of the Kadison-Singer conjecture may be seen from a discrepancy point of view.
For any $n$-dimensional Banach space $E$ with norm $\|\cdot\|_E$, define the functional
\begin{eqnarray*}
V(k, E)= \max_{x_1, \ldots, x_k: \|x_i\|=1 \,\forall i\in [k]} \,\, \min_{(\epsilon_1, \ldots, \epsilon_k)\in \{-1,1\}^k} \, \bigg\| \sum_{i\in [k]} \epsilon_i x_i\bigg\|_E.
\end{eqnarray*}
In other words, $V(k,E)$ answers the question: for any choice of $k$ unit vectors in $E$, how small are we guaranteed to be able to make the
signed sum of the unit vectors by appropriately choosing signs? The question of what can be said about the numbers $V(k,E)$
was first asked\footnote{See \cite[p. 496]{Kle63:book} where this question is stated as one in a collection of then-unsolved problems.} by A. Dvoretzky in 1963.
Let us note that the same definition also makes sense when $\|\cdot\|$ is a nonsymmetric norm (i.e., satisfies $\|ax\|=a\|x\|$ for $a>0$,
positive-definiteness and the triangle inequality), and we will discuss it in this more general setting.
It is a central result of discrepancy theory \cite{GS80, BG81} that when $E$ has dimension $n$, it always holds\footnote{The fact
that $V(k,E)\leq n$ appears to be folklore and the first explicit mention of it we could find is in \cite{GS80}.} that $V(k,E) \leq n$.
To make the connection to our results, we observe that this fact actually follows from Corollary~\ref{cor:d-large-k}.
\begin{thm}\label{thm:discrep-main}
Suppose $A_1, \ldots, A_k \subset K$, where $K$ is a convex body in ${\bf R}^n$ containing 0 in its interior (i.e., the unit ball of a non-symmetric norm $\|\cdot \|_K$),
and suppose $0\in \mathrm{conv}(A_i)$ and $\dim(A_i)=1$ for each $i\in [k]$. Then there exist vectors $a_i\in A_i$ ($i\in [k]$) such that
\begin{eqnarray*}
\bigg\| \sum_{i\in [k]} a_i\bigg\|_K \leq n .
\end{eqnarray*}
In particular, if $K$ is symmetric, then by choosing $A_i=\{x_i, -x_i\}$, with $\|x_i\|_K=1$, one immediately has $V(k, E_K)\leq n$
for $E_K=({\bf R}^n, \|\cdot\|_K)$.
\end{thm}
\begin{proof}
We simply observe that since $0\in \mathrm{conv}(\sum_{i\in [k]} A_i)$, there exists a point $a_0\in\sum_{i\in [k]} A_i$ such that
$$
\|a_0\|_K\leq \sup\limits_{x \in \mathrm{conv}(\sum\limits_{i\in [k]} A_i)} \inf\limits_{a \in \sum_{i\in [k]} A_i} \|a-x \|_K=d^{(K)}(\sum\limits_{i\in [k]} A_i) \leq n \max_{i\in [k]} d^{(K)}(A_i),$$
where the last inequality follows from Corollary \ref{cor:d-large-k}. Moreover, using that for each $i \in [k]$,
$A_i \subseteq K$ and $K$ is convex, we get $\mathrm{conv}(A_i) \subseteq K$. Thus by Lemmata~\ref{lem:d^K-d^L} and \ref{lem:d^conv}, $ d^{(K)}(A_i) \le d^{(\mathrm{conv}(A_i))}(A_i) \leq c(A_i)\leq 1$,
where the last inequality uses Theorem~\ref{thm:sch75} and the assumption that $\dim(A_i)=1$.
\end{proof}
\begin{rem}
B\'ar\'any and Grinberg \cite{BG81} proved Theorem~\ref{thm:discrep-main} without the condition $\dim(A_i)=1$.
They also proved it for symmetric bodies $K$ under the weaker condition that
$0\in \mathrm{conv}(\sum_{i\in [k]} A_i)$; we will recover this fact for symmetric bodies, without restriction on the dimension,
as a consequence of Theorem \ref{thm:discrep-super} below.
\end{rem}
\begin{rem}\label{rem:sharp}
As pointed out in \cite{BG81}, Theorem~\ref{thm:discrep-main} is sharp. By taking $E=\ell_1^n$ and $x_i$ to be the $i$-th standard basis vector $e_i$ of ${\bf R}^n$,
we see that for any choice of signs, $\big\| \sum_{i\in [n]} \epsilon_i x_i\big\|= n$, which implies that $V(n, \ell_1^n)=n$.
\end{rem}
\begin{rem}
It is natural to think that the sequence $V(k,E)$ may be monotone with respect to $k$. Unfortunately, this is not true.
Swanepoel \cite{Swa00} showed that $V(k,E)\leq 1$ for every {\it odd} $k$ and every $2$-dimensional Banach space $E$.
Consequently, we have $V(1, \ell_1^2)=1$ and $V(3, \ell_1^2)\leq 1$,
whereas we know from Remark~\ref{rem:sharp} that $V(2, \ell_1^2)=2$.
\end{rem}
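Both remarks can be spot-checked by brute force over sign patterns. The sketch below (Python; restricted to $\ell_1^2$ for simplicity) verifies $V(2,\ell_1^2)=2$ on the standard basis and Swanepoel's bound $V(3,E)\le 1$ on random $\ell_1$-unit vectors:

```python
import itertools, random

def min_signed_sum_l1(vectors):
    # min over sign patterns of the l1 norm of the signed sum (vectors in R^2)
    best = float("inf")
    for eps in itertools.product((-1, 1), repeat=len(vectors)):
        s = [sum(e * v[i] for e, v in zip(eps, vectors)) for i in range(2)]
        best = min(best, abs(s[0]) + abs(s[1]))
    return best

# sharpness: x_1 = e_1, x_2 = e_2 in l_1^2 gives min = 2, so V(2, l_1^2) = 2
assert min_signed_sum_l1([(1, 0), (0, 1)]) == 2

# Swanepoel: V(k, E) <= 1 for odd k and any 2-dimensional norm;
# spot-check k = 3 with random l1-unit vectors
random.seed(1)
for _ in range(50):
    vecs = []
    for _ in range(3):
        a = random.uniform(-1, 1)
        b = (1 - abs(a)) * random.choice((-1, 1))   # |a| + |b| = 1
        vecs.append((a, b))
    assert min_signed_sum_l1(vecs) <= 1 + 1e-9
```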
Not surprisingly, for special norms, better bounds can be obtained. In particular (see, e.g., \cite[Theorem 2.4.1]{AS00:book} or \cite[Lemma 2.2]{Bec83:2}), $V(k, \ell_2^n)\leq \sqrt{n}$.
We will present a proof of this and more general facts in Theorem \ref{thm:discrep-super}.
But first let us discuss a quite useful observation about the quantity $V(k,E)$: it is an isometric invariant,
i.e., invariant under nonsingular linear transformations of the unit ball. A way to measure the extent of isometry is
using the Banach-Mazur distance $d_{BM}$: Let $E$, $E'$ be two $n$-dimensional normed spaces. The Banach-Mazur distance between them is defined as
$$
d_{BM} (E, E')=\inf\{\|T\| \cdot \|T^{-1}\|; T:E \to E' \mbox{ isomorphism}\}.
$$
Thus $d_{BM}(E,E') \ge 1$, and $d_{BM}(E,E') = 1$ if and only if $E$ and $E'$ are isometric. We also recall that this notion has a geometric interpretation. Indeed, if we denote by $B(X)$ the unit ball of a
Banach space $X$, then $d_{BM}(E,E')$ is the smallest positive number such that there exists a linear transformation $T$ with:
$$
B(E) \subseteq T(B(E')) \subseteq d_{BM}(E,E') B(E).
$$
\begin{lem}\label{lem: V-BM}
If $d_{BM}(E,E')=1$, then
\begin{eqnarray*}
V(k,E)=V(k,E').
\end{eqnarray*}
\end{lem}
\begin{proof} Consider an invertible linear transformation $T$ such that $T(B(E))=B(E')$, and thus $\| y\|_E = \|Ty\|_{E'}$ for all $y$. Then
\begin{eqnarray*}
V(k, E)&= & \max_{x_1, \ldots, x_k: \|x_i\|_E=1 \,\forall i\in [k]} \,\, \min_{(\epsilon_1, \ldots, \epsilon_k)\in \{-1,1\}^k} \, \bigg\| \sum_{i\in [k]} \epsilon_i x_i\bigg\|_E\\
&= & \max_{x_1, \ldots, x_k: \|T x_i\|_{E'}=1 \,\forall i\in [k]} \,\, \min_{(\epsilon_1, \ldots, \epsilon_k)\in \{-1,1\}^k} \, \bigg\| T \left(\sum_{i\in [k]} \epsilon_i x_i \right)\bigg\|_{E'}\\
&= & \max_{y_1, \ldots, y_k: \|y_i\|_{E'}=1 \,\forall i\in [k]} \,\, \min_{(\epsilon_1, \ldots, \epsilon_k)\in \{-1,1\}^k} \, \bigg\| \sum_{i\in [k]} \epsilon_i y_i \bigg\|_{E'} = V(k, E').
\end{eqnarray*}
\end{proof}
We now combine the ideas of the proof of Theorem \ref{thm:discrep-main} with Lemma \ref{lem: V-BM} to prove the following statement, which yields sharper bounds on $V(k,E)$ for intermediate norms.
\begin{thm}\label{thm:discrep-super}
Suppose $A_1, \ldots, A_k \subset K$, where $K$ is a symmetric convex body in ${\bf R}^n$ (i.e., the unit ball of a norm $\|\cdot \|_K$),
and suppose $0\in \mathrm{conv}(\sum_{i\in [k]} A_i)$. Then there exist vectors $a_i\in A_i$ ($i\in [k]$) such that
\begin{eqnarray*}
\bigg\| \sum_{i\in [k]} a_i\bigg\|_K \leq \sqrt{n}\, d_{BM}(E,\ell_2^n),
\end{eqnarray*}
where $E=({\bf R}^n, \|\cdot\|_K)$. In particular, by choosing $A_i=\{x_i, -x_i\}$, with $\|x_i\|_K=1$, one immediately has
\begin{eqnarray*}
V(k,E)\leq \sqrt{n} \, d_{BM}(E,\ell_2^n).
\end{eqnarray*}
\end{thm}
\begin{proof}
Let $d=d_{BM}(E,\ell_2^n)$. By Lemma \ref{lem: V-BM}, we may assume that $B_2^n \subset K \subset d B_2^n$. Next, as in the proof of Theorem \ref{thm:discrep-main}, we observe that since $0\in \mathrm{conv}(\sum_{i\in [k]} A_i)$, there exists a point $a\in\sum_{i\in [k]} A_i$ such that
$$
\|a\|_K \leq d^{(K)} \left(\sum\limits_{i\in [k]} A_i\right) \le \max_{I \subset [k]: |I| = n} d^{(K)}\left(\sum_{j \in I} A_j \right),
$$
where the last inequality follows from Corollary \ref{cor:d-large-k}. Next, we apply Lemma \ref{lem:d^K-d^L} together with $B_2^n \subset K$ to get
$$
\max_{I \subset [k]: |I| = n} d^{(K)}\left(\sum_{j \in I} A_j \right) \le \max_{I \subset [k]: |I| = n} d\left(\sum_{j \in I} A_j \right).
$$
Now we can apply Theorems \ref{thm:weg} and \ref{thm:v-subadd} to get
$$
\max_{I \subset [k]: |I| = n} d\left(\sum_{j \in I} A_j \right) \le \max_{I \subset [k]: |I| = n} v\left(\sum_{j \in I} A_j \right) \le \max_{I \subset [k]: |I| = n} \sqrt{ \sum_{j \in I} v^2( A_j) } \le d\sqrt{n},
$$
where the last inequality follows from the fact that $v(A_i)=r(A_i)$ is bounded by $d$ since $A_i\subset K \subset d B_2^n$.
\end{proof}
We note that it follows from F. John's theorem (see, e.g., \cite[page 10]{MS86:book}) that
$d_{BM}(E, \ell_2^n) \le \sqrt{n}$ for any $n$-dimensional Banach space $E$.
Thus we have the following corollary, which recovers a result of \cite{BG81}.
\begin{cor}\label{cor:discrep-super}
Suppose $A_1, \ldots, A_k \subset K$, where $K$ is a convex symmetric body in ${\bf R}^n$,
and suppose $0\in \mathrm{conv}(\sum_{i\in [k]} A_i)$. Then there exist vectors $a_i\in A_i$ ($i\in [k]$) such that
\begin{eqnarray*}
\bigg\| \sum_{i\in [k]} a_i\bigg\|_K \leq n.
\end{eqnarray*}
In particular, by choosing $A_i=\{x_i, -x_i\}$, with $\|x_i\|_K=1$, one immediately has
\begin{eqnarray*}
V(k,E)\leq n,
\end{eqnarray*}
where $E=({\bf R}^n, \|\cdot\|_K)$.
\end{cor}
It is well known that $d_{BM}(\ell_p^n, \ell_2^n)=n^{|\frac{1}{p}-\frac{1}{2}|}$ for $p\ge 1$ (see, e.g., \cite[page 20]{MS86:book}).
Thus Theorem \ref{thm:discrep-super} gives:
\begin{cor}
For any $p\geq 1$ and any $k, n\in\mathbb{N}$,
$$
V(k, \ell_p^n) \le n^{\frac{1}{2}+|\frac{1}{p}-\frac{1}{2}|}.
$$
\end{cor}
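The corollary can be spot-checked on the standard basis of $\ell_p^n$, where every signed sum has norm exactly $n^{1/p}$; this meets the bound with equality for $1\le p\le 2$. A brute-force sketch (Python; small $n$ and $k=n$ only):

```python
import itertools

def lp_norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1 / p)

def min_signed_sum(vectors, p):
    # min over sign patterns of the l_p norm of the signed sum
    n = len(vectors[0])
    return min(lp_norm([sum(e * v[i] for e, v in zip(eps, vectors)) for i in range(n)], p)
               for eps in itertools.product((-1, 1), repeat=len(vectors)))

for n in (2, 3, 4):
    basis = [tuple(1.0 if i == j else 0.0 for i in range(n)) for j in range(n)]
    for p in (1, 1.5, 2, 3):
        val = min_signed_sum(basis, p)
        bound = n ** (0.5 + abs(1 / p - 0.5))        # the corollary's bound
        assert abs(val - n ** (1 / p)) < 1e-9        # every signed sum has norm n^{1/p}
        assert val <= bound + 1e-9
```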
In particular, we recover the classical fact that $V(k, \ell_2^n)\leq \sqrt{n}$, which can be found, e.g., in \cite[Theorem 2.4.1]{AS00:book}.
V. Grinberg (personal communication) informed us of the following elegant and sharp bound generalizing this fact that he obtained in unpublished work:
if $A_i$ are subsets of ${\bf R}^n$ and $D=\max_i \mathrm{diam}(A_i)$, then
\begin{eqnarray}\label{eq:grinberg}
d\bigg(\sum_{i\in [k]} A_i\bigg) \leq \frac{D}{2} \sqrt{n}.
\end{eqnarray}
The special case of this when each $A_i$ has cardinality 2 is due to Beck \cite{Bec83:2}. Let us note that the inequality \eqref{eq:grinberg} improves upon
the bound of $\sqrt{n}\max_i v(A_i)$ that is obtained in the Shapley-Folkman theorem by combining Theorems~\ref{thm:weg} and \ref{thm:cassels-v-gen}.
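Inequality \eqref{eq:grinberg} is attained, for example, by $A_i=\{0,e_i\}$, $i\in[n]$: then $\sum_i A_i=\{0,1\}^n$, $D=1$, and the distance from the center of the cube to the vertex set is $\sqrt{n}/2$. A quick numerical check (Python; the coarse grid is our approximation and happens to contain the maximizing center):

```python
import itertools, math

def d_hull_of_cube(n, grid=6):
    # A_i = {0, e_i}, so sum A_i = {0,1}^n and conv(sum A_i) = [0,1]^n;
    # estimate d(sum A_i) = sup over the cube of the distance to the vertex set
    verts = list(itertools.product((0.0, 1.0), repeat=n))
    best = 0.0
    for x in itertools.product([i / grid for i in range(grid + 1)], repeat=n):
        best = max(best, min(math.dist(x, v) for v in verts))
    return best

for n in (1, 2, 3):
    # diam(A_i) = 1, so Grinberg's bound reads d(sum A_i) <= sqrt(n)/2,
    # attained at the center of the cube (which lies on the grid)
    est = d_hull_of_cube(n)
    assert est <= math.sqrt(n) / 2 + 1e-9
    assert abs(est - math.sqrt(n) / 2) < 1e-9
```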
Finally, let us note that the fact that the quantities $V(k,E)$ are $O(n)$ for general norms and $O(\sqrt{n})$ for the Euclidean norm
is consistent with the observations in Section~\ref{sec:d-rate} that the rate of convergence of $d^{(K)}(A(k))$ for a compact set $A\subset {\bf R}^n$
is $O(n/k)$ for general norms and $O(\sqrt{n}/k)$ for the Euclidean norm (i.e., $K=B_2^n$).
We do not comment further on the relationship of our study with discrepancy theory, which contains many interesting results and questions
when one uses {\it different} norms to pick the original unit vectors, and to measure the length of the signed sum (see, e.g., \cite{BF81:1, Gia97, Nik13}). The interested reader
may consult the books \cite{Cha00:book, Mat10:book, CST14:book} for more in this direction, including discussion of algorithmic issues and applications
to theoretical computer science. There are also connections to the Steinitz lemma \cite{Bar08}, which was originally discovered
in the course of extending the Riemann series theorem (on the real line being the set of possible limits by rearrangements of
a conditionally convergent sequence of real numbers) to sequences of vectors (where it is called the L\'evy-Steinitz theorem,
and now understood in quite general settings, see, e.g., \cite{Sof08}).
\section{Discussion}
\label{sec:disc}
Finally we mention some notions of non-convexity that we do not take up in this paper:
\begin{enumerate}
\item Inverse reach: The notion of reach was defined by Federer \cite{Fed59},
and plays a role in geometric measure theory. For a set $A$ in ${\bf R}^n$, the reach of $A$ is defined as
\begin{eqnarray*}
\text{reach}(A)=\sup \{ r>0: \forall y\in A+rB_2^n,\, \text{there exists a unique}\, x\in A\, \text{nearest to}\, y \}.
\end{eqnarray*}
A key property of reach is that $\text{reach}(A)=\infty$ if and only if $A$ is convex; consequently one may think
of
\begin{eqnarray*}
\iota(A)=\text{reach}(A)^{-1}
\end{eqnarray*}
as a measure of non-convexity.
Th\"ale \cite{Tha08} presents a comprehensive survey of the study of
sets with positive reach (however, one should take into account the
cautionary note in the review of this article on MathSciNet).
\item Beer's index of convexity: First defined and studied by Beer \cite{Bee73:1}, this quantity
is defined for a compact set $A$ in ${\bf R}^n$ as the probability that two points drawn independently and uniformly at random from $A$
``see'' each other (i.e., the probability that the line segment connecting them lies in $A$). Clearly this
probability is 1 for convex sets, and 0 for finite sets consisting of more than 1 point. Since our study has
been framed in terms of measures of non-convexity, it is more natural to consider
\begin{eqnarray*}
b(A)= 1-{\bf P} \{ [X,Y]\subset A\} ,
\end{eqnarray*}
where $X, Y$ are i.i.d. from the uniform measure on $A$, and $[x,y]$ denotes the line segment connecting
$x$ and $y$.
\item Convexity ratio: The convexity ratio of a set $A$ in ${\bf R}^n$ is defined as the ratio of the volume
of a largest convex subset of $A$ to the volume of $A$; it is clearly 1 for convex sets and can be arbitrarily
close to 0 otherwise. For dimension 2, this has been studied, for example,
by Goodman \cite{Goo81}. Balko et al. \cite{BJVW14} discuss this notion in general dimension, and also
give some inequalities relating the convexity ratio and Beer's index of convexity. Once again, to get
a measure of non-convexity, it is more natural to consider
\begin{eqnarray*}
\kappa(A)=1-\frac{\mathrm{Vol}_n(L(A))}{\mathrm{Vol}_n(A)} ,
\end{eqnarray*}
where $L(A)$ denotes a largest convex subset of $A$.
\end{enumerate}
These notions of non-convexity are certainly very interesting, but they behave quite
differently from the notions we have explored thus far.
For example, if $b(A)=0$ or $\kappa(A)=0$, the compact set $A$ need not be convex; it may differ from a convex set
by a set of measure zero. For example, if $A$ is the union of a unit Euclidean ball and a point separated from it,
then
\begin{eqnarray}\label{eq:bk0}
b(A)=\kappa(A)=0 ,
\end{eqnarray}
even though $A$ is compact but non-convex.
Even restricting to compact {\it connected} sets does not help: just connect the disc with a point by a segment,
and we retain \eqref{eq:bk0} even though $A$ remains non-convex.
It is possible that further restricting to connected open sets
is the right thing to do here: this may yield a characterization of convex sets using $b$ and $\kappa$,
but it still is not enough to ensure stability of such a characterization. For example, $b(A)$ being small
would not imply that $A$ is close to its convex hull even for this restricted class of sets, because we can take
the previous example of a point connected to a disc by a segment and slightly fatten the segment.
Generalizing this example leads to a curious phenomenon. Consider $A=B_2^n \cup \{x_1,\ldots,x_N\}$,
where $x_1, \ldots, x_N$ are points in ${\bf R}^n$ well separated
from each other and the origin.
Then $b(A)=\kappa(A)=0$, but we can send $b(\frac{A+A}{2})$ and $\kappa(\frac{A+A}{2})$
arbitrarily close to 1 by making $N$ go to infinity
(since isolated points are never seen for $A$ but become very important for the sumset).
This is remarkably bad behavior indeed, since it indicates an extreme violation of the monotone decreasing
property of $b(A(k))$ or $\kappa(A(k))$ that one might wish to explore, already in dimension 2.
Based on the above discussion, it is clear that the measures $\iota, b, \kappa$ of non-convexity are
more sensitive to the topology of the set than the functionals we considered in most of this paper.
Thus it is natural that the behavior of these additional measures for Minkowski sums should be
studied with a different global assumption than in this paper (which has focused on what can be said for compact sets). We hope to investigate this question in future work.
% Source: ``The convexification effect of Minkowski summation'', arXiv:1704.05486.
% Source: ``The Fourier dimension spectrum and sumset type problems'', arXiv:2210.07019.
\section{The Fourier dimension spectrum: definition and basic properties}
The Hausdorff dimension (of a set or measure) is a fundamental geometric notion describing fine scale structure. The Fourier dimension, on the other hand, is an analytic notion which captures rather different features. Both the Hausdorff and Fourier dimensions have numerous applications in, for example, ergodic theory, number theory, harmonic analysis and probability theory. The Fourier dimension of a set is bounded above by its Hausdorff dimension and is much more sensitive to, for example, arithmetic resonance and curvature. Indeed, the middle third Cantor set has Fourier dimension 0 because it possesses too much arithmetic structure, and a line segment embedded in the plane has Fourier dimension 0 because it does not possess enough curvature. We note that the Hausdorff dimension of both of these sets is strictly positive. The line segment example also shows that the Fourier dimension is sensitive to the ambient space in a way that the Hausdorff dimension is not since a line segment in $\mathbb{R}$ has Fourier dimension 1.
The purpose of this paper is to introduce, study, and motivate a continuously parametrised family of dimensions which vary between the Fourier and Hausdorff dimensions. The hope is that the resulting `Fourier dimension spectrum' will reveal more analytic and geometric information than the two notions considered in isolation and thus be amenable to applications in areas where both notions play a role.
We begin by defining the Fourier dimension spectrum and deriving several fundamental properties including continuity (Theorems \ref{cty0} and \ref{ctyX}) and how it depends on the ambient space (Theorem \ref{embedding}) noting that the `endpoints' must behave differently. We go on to put the work in a wider context, especially in relation to average Fourier dimensions and Strichartz type bounds (Theorems \ref{stric3} and \ref{stric1}). During the above analysis we also derive (or estimate) the Fourier dimension spectrum explicitly for several examples including Riesz products (Theorem \ref{riesz1} and Corollary \ref{riesz2}), certain self-similar measures (Corollaries \ref{selfsim1} and \ref{selfsim2}), and measures on various curves (Corollary \ref{hyperplane} and Theorem \ref{xp}).
After establishing some fundamental theory, we move towards applications of the Fourier dimension spectrum, especially concerning sumsets, convolutions, distance sets and certain random sets. A rough heuristic which emerges is that when the Fourier dimension spectrum is not (the restriction of) an affine function, it provides more information than the Hausdorff and Fourier dimension on their own and this leads to new estimates in various contexts. For example, the Sobolev dimension of a measure increases under convolution with itself if and only if the Fourier dimension spectrum is not (the restriction of) a linear function (Corollary \ref{conv2}) and when the Fourier dimension spectrum of a measure is not an affine function, it provides better estimates for the Hausdorff dimension of the distance set of its support than the Hausdorff and Fourier dimension provide on their own via Mattila integrals (Theorem \ref{maindistance}). As a result we solve the distance set problem for sets satisfying certain Fourier analytic conditions. A simple special case shows that if $\mu$ is a finite Borel measure on $\mathbb{R}^d$ with $\int |\hat \mu |^4 <\infty$, then the distance set of the support of $\mu$ has positive Lebesgue measure (Corollary \ref{cute}). We note that the exponent 4 is sharp. We also use the Fourier dimension spectrum to give conditions ensuring a measure is `Sobolev improving' (Corollary \ref{sobolevimproving}), to give estimates for the Hausdorff dimension of certain random constructions where the Fourier dimension alone provides only trivial estimates (Corollary \ref{randomsum}), and to provide a one line proof (and extension) of a well-known connection between moment analysis and Fourier dimension in random settings (Lemma \ref{moment}).
The idea to introduce a continuum of dimensions in-between a given pair of `fractal dimensions' is part of a growing programme sometimes referred to as `dimension interpolation'. Previous examples include the \emph{Assouad spectrum} which lives in-between the (upper) box-counting and Assouad dimensions \cite{assouad} and the \emph{intermediate dimensions} \cite{intermediate} which live in-between the Hausdorff and box-counting dimensions. The Fourier dimension spectrum is of a rather different flavour since the aforementioned notions are defined via coverings. Despite their recent inception, the Assouad spectrum and intermediate dimensions are proving useful tools in a growing range of (often unexpected) areas, for example, in quasi-conformal mapping theory \cite{tyson} and in analysis of spherical maximal functions \cite{joris}. We believe this will also be the case for the Fourier dimension spectrum.
\subsection{Background: Energy, Fourier transforms, and dimension}
Throughout the paper, we write $A \lesssim B$ to mean there exists a constant $c >0$ such that $A \leq cB$. The implicit constants $c$ are suppressed to improve exposition. If we wish to emphasise that these constants depend on another parameter $\lambda$, then we will write $A \lesssim_\lambda B$. We also write $A \gtrsim B$ if $B \lesssim A$ and $A \approx B$ if $A \lesssim B$ and $A \gtrsim B$.
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ with support denoted by $\textup{spt}(\mu)$.
For $s \geq 0$, the $s$-energy of $\mu$ is given by
\[
\mathcal{I}_s(\mu) = \int \int \frac{d \mu(x) \, d \mu(y)}{|x-y|^s}
\]
and can be used to estimate the Hausdorff dimension of $\mu$ and its support (the so-called `potential theoretic method'). Indeed, if $s \geq 0$ is such that $\mathcal{I}_s(\mu) <\infty$, then
\[
\dim_{\textup{H}} \textup{spt}(\mu) \geq \dim_{\textup{H}} \mu \geq s
\]
where $\dim_{\textup{H}}$ denotes Hausdorff dimension. In fact, this is a precise characterisation of Hausdorff dimension since for all Borel sets $X$ and $s < \dim_{\textup{H}} X$, there exists a finite Borel measure $\mu$ on $X$ such that $\mathcal{I}_s(\mu) <\infty$. See \cite{falconer, mattila} for more on Hausdorff dimension, energy and the potential theoretic method.
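To illustrate the potential theoretic method, let $\mu$ be Lebesgue measure restricted to $[0,1]$. For $0<s<1$, a direct computation gives
\[
\mathcal{I}_s(\mu) = \int_0^1 \int_0^1 \frac{dx \, dy}{|x-y|^s} = 2\int_0^1 \frac{x^{1-s}}{1-s} \, dx = \frac{2}{(1-s)(2-s)} < \infty
\]
and so $\dim_{\textup{H}} [0,1] \geq s$ for all $s<1$, recovering $\dim_{\textup{H}} [0,1] = 1$. Note that $\mathcal{I}_s(\mu)$ blows up as $s \uparrow 1$.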
There is an elegant connection between energy (and thus Hausdorff dimension) and the Fourier transform. The Fourier transform of $\mu$ is the function $\hat \mu : \mathbb{R}^d \to \mathbb{C}$ given by
\[
\hat \mu (z) = \int e^{-2\pi i z \cdot x} \, d \mu(x).
\]
In fact, for $0<s<d$
\[
\mathcal{I}_s(\mu) \approx_{s,d} \int_{\mathbb{R}^d} |\hat \mu(z) |^{2} |z|^{s-d} \, dz,
\]
see \cite[Theorem 3.1]{mattila}. Therefore, if $|\hat \mu(z) |\lesssim |z|^{-s/2}$ for some $s \in (0,d)$, then $\mathcal{I}_s(\mu) <\infty$ and $\dim_{\textup{H}} \mu \geq s$. This motivates the \emph{Fourier dimension}, defined by
\[
\dim_{\mathrm{F}} \mu = \sup\{ s \geq 0 : |\hat \mu(z) |\lesssim |z|^{-s/2}\}
\]
for measures and
\[
\dim_{\mathrm{F}} X = \sup \{ \dim_{\mathrm{F}} \mu \leq d : \textup{spt}(\mu) \subseteq X \}
\]
for sets $X \subseteq \mathbb{R}^d$. Here the supremum is taken over finite Borel measures $\mu$ supported by $X$. See \cite{modfourier} for some interesting alternative formulations of the Fourier dimension, including the modified Fourier dimension and the compact Fourier dimension. For non-empty sets $X \subseteq \mathbb{R}^d$
\[
0 \leq \dim_{\mathrm{F}} X \leq \dim_{\textup{H}} X \leq d
\]
and $X$ is a \emph{Salem set} if and only if $\dim_{\mathrm{F}} X = \dim_{\textup{H}} X$. See \cite{mattila} for more on the Fourier dimension. There are many random constructions giving rise to Salem sets but non-trivial deterministic examples are much harder to come by and in general it is rather easy for a set to fail to be Salem. For example, a line segment in $\mathbb{R}$ is Salem but in $\mathbb{R}^2$ it is not. Further, the middle third Cantor set, many self-affine sets, the cone in $\mathbb{R}^3$, and the graph of Brownian motion all fail to be Salem sets. In this article we are interested in sets which are \emph{not} Salem and wish to explore the difference between $\dim_{\mathrm{F}} X$ and $\dim_{\textup{H}} X$ in a novel and meaningful way.
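To make the failure of the middle third Cantor set to be Salem concrete, let $\mu$ be the natural (Cantor--Lebesgue) measure on it. Self-similarity yields the classical product formula $|\hat \mu (z)| = \prod_{k=1}^{\infty} |\cos(2 \pi z/3^{k})|$, and at $z=3^n$ the first $n$ factors equal $1$, so $|\hat \mu(3^n)| = |\hat \mu(1)| \neq 0$ for every $n$ and therefore $\dim_{\mathrm{F}} \mu = 0$. The following sketch (not part of the formal development) verifies this non-decay numerically by truncating the product:

```python
import math

def cantor_ft_mag(z, terms=200):
    """Modulus of the Fourier transform of the middle-third Cantor
    measure at frequency z, via the classical product formula
    |mu-hat(z)| = prod_{k>=1} |cos(2*pi*z/3^k)|, truncated after
    `terms` factors (the tail factors are essentially 1)."""
    out = 1.0
    for k in range(1, terms + 1):
        out *= abs(math.cos(2 * math.pi * z / 3**k))
    return out

# mu-hat does not tend to zero along the lacunary sequence z = 3^n,
# so the Cantor measure (and set) has Fourier dimension zero.
values = [cantor_ft_mag(3**n) for n in range(8)]
```

Each entry of \texttt{values} agrees with $|\hat\mu(1)| \approx 0.371$ up to rounding, exhibiting the non-decay of $\hat\mu$.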
\subsection{The Fourier dimension spectrum}
We exploit the connection between Fourier dimension and Hausdorff dimension via energy to define a continuum of `dimensions' lying in-between the Fourier and Hausdorff dimensions. Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$. For $\theta \in [0,1]$ and $s \geq 0$, we define energies
\[
\mathcal{J}_{s,\theta}(\mu) = \left( \int_{\mathbb{R}^d} |\hat \mu(z) |^{2/\theta} |z|^{s/\theta-d} \, dz \right)^{\theta},
\]
where we adopt the convention that
\[
\mathcal{J}_{s,0}(\mu) = \sup_{z \in \mathbb{R}^d} |\hat \mu(z) |^{2} |z|^{s}.
\]
We define the \emph{Fourier dimension spectrum} (or simply the \emph{Fourier spectrum}) of $\mu$ at $\theta$ by
\[
\dim^\theta_{\mathrm{F}} \mu = \sup\{s \geq 0 : \mathcal{J}_{s,\theta}(\mu) < \infty\}
\]
where we write $\sup \emptyset = 0$. Note that
\[
\mathcal{J}_{s,1}(\mu) = \int_{\mathbb{R}^d} |\hat \mu(z) |^{2} |z|^{s-d} \, dz
\]
is the familiar \emph{Sobolev energy} and, therefore, $\dim^1_{\mathrm{F}} \mu = \dim_{\mathrm{S}} \mu$ where $\dim_{\mathrm{S}} \mu$ is the \emph{Sobolev dimension} of $\mu$, see \cite[Section 5.2]{mattila}. Moreover, $\dim^0_{\mathrm{F}} \mu = \dim_{\mathrm{F}} \mu$ is the Fourier dimension of $\mu$.
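To illustrate these definitions, let $\mu$ be Lebesgue measure restricted to $[0,1] \subseteq \mathbb{R}$. Then $\hat \mu(z) = e^{-\pi i z} \sin(\pi z)/(\pi z)$, so $|\hat \mu(z)| \lesssim |z|^{-1}$ with this decay attained along half-integers, giving $\dim_{\mathrm{F}} \mu = 2$. Moreover,
\[
\mathcal{J}_{s,1}(\mu) = \int_{\mathbb{R}} \frac{\sin^2 (\pi z)}{(\pi z)^2} \, |z|^{s-1} \, dz < \infty
\]
precisely for $0<s<2$, so $\dim_{\mathrm{S}} \mu = 2$ as well. Since the spectrum always lies between these two values (Theorem \ref{concave} below), $\dim^\theta_{\mathrm{F}} \mu = 2$ for all $\theta \in [0,1]$. Theorem \ref{embedding} below shows that this constancy is destroyed upon embedding $\mu$ in a higher-dimensional space.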
For sets $X \subseteq \mathbb{R}^d$, we define the \emph{Fourier dimension spectrum} (or simply the \emph{Fourier spectrum}) of $X$ at $\theta$ by
\[
\dim^\theta_{\mathrm{F}} X = \sup \{ \dim^\theta_{\mathrm{F}} \mu \leq d : \textup{spt}(\mu) \subseteq X \}.
\]
Here the supremum is again taken over finite Borel measures $\mu$ supported by $X$. One immediately sees that $\dim^0_{\mathrm{F}} X = \dim_{\mathrm{F}} X$ is the Fourier dimension of $X$. Moreover, using $\mathcal{J}_{s,1}(\mu) \approx \mathcal{I}_s(\mu)$ for $0<s<d$ where $\mathcal{I}_s(\mu)$ is the standard energy (see \cite[Theorem 3.1]{mattila}),
$\dim^1_{\mathrm{F}} X = \dim_{\textup{H}} X$ returns the Hausdorff dimension for Borel sets $X$. The quantity $\min\{\dim_{\mathrm{S}} \mu , d\} = \min\{ \dim^1_{\mathrm{F}} \mu, d\}$ is the \emph{energy dimension} or \emph{lower correlation dimension} of $\mu$. It is easy to see (e.g. Theorem \ref{concave} below) that
\[
\dim_{\mathrm{F}} \mu \leq \dim^\theta_{\mathrm{F}} \mu \leq \dim_{\mathrm{S}} \mu
\]
and
\[
\dim_{\mathrm{F}} X \leq \dim^\theta_{\mathrm{F}} X \leq \dim_{\mathrm{H}} X
\]
for all $\theta \in (0,1)$.
The Fourier dimension spectrum can be defined in terms of $L^p$ spaces in a convenient way. Define a measure $m_d$ on $\mathbb{R}^d$ by $d m_d =\min\{|z|^{-d}, 1\} \, dz$ and a family of functions $f^s_\mu : \mathbb{R}^d \to \mathbb{R}$ by $f_\mu^s(z) =|\hat \mu(z)|^{2} |z|^{s}$. Then
\[
\mathcal{J}_{s,\theta}(\mu) \approx \| f^s_\mu \|_{L^{1/\theta}(m_d)}
\]
and so
\[
\dim^\theta_{\mathrm{F}} \mu = \sup\left\{ s: f^s_\mu \in L^{1/\theta}(m_d) \right\}.
\]
Finally, it could be interesting to consider a \emph{modified Fourier spectrum}, following the modified Fourier dimension defined in \cite{modfourier}, but we do not pursue this here. We will be more focused on the Fourier dimension spectrum of measures and many of our results for sets would also hold for the modified or compact variants considered by \cite{modfourier}. We leave details to the reader.
\begin{figure}[H]
\includegraphics[width=\textwidth]{graphs.png} \\ \vspace{3mm}
\caption{\emph{Three examples.} Left: the Fourier spectrum of a measure with positive Fourier dimension in the plane embedded into $\mathbb{R}^3$, see Theorem \ref{embedding}. Centre: the Fourier spectrum of a Riesz product from Corollary \ref{riesz2} with $a=0.8, \lambda=3$. Right: the Fourier spectrum of Lebesgue measure lifted onto the graph of $x \mapsto x^4$, see Theorem \ref{xp}.}
\end{figure}
\subsection{Analytic properties: continuity, concavity} \label{basicsection}
The first task is to examine fundamental properties of the function $\theta \mapsto \dim^\theta_{\mathrm{F}} \mu$. The results we prove in this section will be used throughout the paper as we develop the theory towards more sophisticated applications.
\begin{thm} \label{concave}
Let $\mu$ be a finite Borel measure and $X$ a non-empty set. Then $\dim^\theta_{\mathrm{F}} \mu$ is a non-decreasing concave function of $\theta \in [0,1]$. In particular, $\dim^\theta_{\mathrm{F}} \mu$ is continuous on $(0,1]$. Further, $\dim^\theta_{\mathrm{F}} X$ is non-decreasing on $[0,1]$ and continuous on $(0,1]$ but may not be concave.
\end{thm}
\begin{proof}
The claims for $X$ are clear once the claims for $\mu$ are established. We first prove concavity. Fix $0 \leq \theta_0 < \theta_1 \leq 1$ and let $\theta \in (\theta_0, \theta_1)$. Let $s_0<\dim_\mathrm{F}^{\theta_0} \mu$, $s_1<\dim_\mathrm{F}^{\theta_1} \mu$ and
\[
s= s_0 \frac{\theta_1-\theta}{\theta_1-\theta_0} + s_1\frac{\theta-\theta_0}{\theta_1-\theta_0}.
\]
Define $m_d$ by $dm_d =\min\{|z|^{-d}, 1\} \, dz$. Then
\begin{align*}
\mathcal{J}_{s,\theta}(\mu)^{1/\theta} &\approx \int_{\mathbb{R}^d} |\hat \mu(z) |^{2/\theta} |z|^{s/\theta} \, dm_d(z) \\
& = \int_{\mathbb{R}^d} \left( |\hat \mu(z) |^{2/\theta_0} |z|^{s_0/\theta_0}\right)^{ \frac{\theta_0(\theta_1-\theta)}{\theta(\theta_1-\theta_0)} } \, \left( |\hat \mu(z) |^{2/\theta_1} |z|^{s_1/\theta_1}\right)^{ \frac{\theta_1(\theta-\theta_0)}{\theta(\theta_1-\theta_0)} } \, dm_d(z) \\
& \leq \left( \int_{\mathbb{R}^d} |\hat \mu(z) |^{2/\theta_0} |z|^{s_0/\theta_0} \, dm_d(z) \right)^{\frac{\theta_0(\theta_1-\theta)}{\theta(\theta_1-\theta_0)}} \ \left( \int_{\mathbb{R}^d} |\hat \mu(z) |^{2/\theta_1} |z|^{s_1/\theta_1} \, dm_d(z) \right)^{\frac{\theta_1(\theta-\theta_0)}{\theta(\theta_1-\theta_0)}} \\
&<\infty
\end{align*}
by H\"older's inequality and choice of $s_0$ and $s_1$. This establishes $\dim^\theta_{\mathrm{F}} \mu \geq s$, proving concavity of $\dim^\theta_{\mathrm{F}} \mu$.
Next we prove that $\dim^\theta_{\mathrm{F}} \mu$ is non-decreasing. Fix $0 \leq \theta_0 < \theta_1 \leq 1$. Let $\varepsilon>0$ and define $m_d^\varepsilon$ by $dm_d^\varepsilon =c \min\{|z|^{-(d+\varepsilon)}, 1\} \, dz$ where $c$ is chosen such that $m_d^\varepsilon$ is a probability measure. Then, using Jensen's inequality,
\begin{align*}
\mathcal{J}_{s,\theta_1}(\mu)^{1/\theta_0} &\approx \left(\int_{\mathbb{R}^d} |\hat \mu(z) |^{2/\theta_1} |z|^{s/\theta_1} |z|^{\varepsilon} \, dm_d^\varepsilon(z) \right)^{\theta_1/\theta_0} \\
&\leq \int_{\mathbb{R}^d} |\hat \mu(z) |^{2/\theta_0} |z|^{s/\theta_0} |z|^{\varepsilon\theta_1/\theta_0} \, dm_d^\varepsilon(z) \\
&= \int_{\mathbb{R}^d} |\hat \mu(z) |^{2/\theta_0} |z|^{(s +\varepsilon( \theta_1-\theta_0))/\theta_0} |z|^{\varepsilon} \, dm_d^\varepsilon(z) \\
&\approx \mathcal{J}_{s+\varepsilon( \theta_1-\theta_0),\theta_0}(\mu)^{1/\theta_0} \\
&<\infty
\end{align*}
provided $s< \dim_\mathrm{F}^{\theta_0} \mu -\varepsilon(\theta_1-\theta_0)$. Letting $\varepsilon$ tend to 0 proves $\dim_\mathrm{F}^{\theta_1} \mu \geq \dim_\mathrm{F}^{\theta_0} \mu $ as required.
\end{proof}
Later we will show that the Fourier dimension spectrum is not necessarily continuous at $\theta = 0$, see Proposition \ref{discont}. However, continuity can be established across the whole range $[0,1]$ assuming only very mild conditions. In order to prove continuity of the Fourier dimension spectrum at $\theta=0$ (and thus over the full range $[0,1]$) we need to assume H\"older continuity of the Fourier transform. First we show that this holds assuming only a mild decay condition on the measure. For compactly supported measures the Fourier transform is Lipschitz (see \cite[(3.19)]{mattila}) but this is true for many non-compactly supported measures too.
\begin{lma} \label{holder}
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ such that $|z|^\alpha \in L^1(\mu)$ for some $\alpha \in (0,1]$. Then $\hat \mu$ is $\alpha$-H\"older.
\end{lma}
\begin{proof}
For $x, y \in \mathbb{R}^d$ with $|x-y| \geq 1$, it is immediate that $|\hat \mu(x) - \hat\mu(y) | \leq 2\mu(\mathbb{R}^d) \leq 2\mu(\mathbb{R}^d)|x-y|^\alpha$ and so we may assume $|x-y| \leq 1$. Then,
\begin{align*}
|\hat \mu(x) - \hat\mu(y) | \leq \int \left\lvert 1-e^{-2\pi i (x-y) \cdot z } \right\rvert \, d \mu(z)& \lesssim \int \left\lvert 1-e^{-2\pi i (x-y) \cdot z } \right\rvert^\alpha \, d \mu(z) \\
&\lesssim \int |x-y|^\alpha|z|^\alpha \, d \mu(z) \\
&\lesssim |x-y|^\alpha
\end{align*}
as required.
\end{proof}
\begin{thm} \label{cty0}
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ such that $\hat \mu$ is $\alpha$-H\"older. Then
\[
\dim^\theta_{\mathrm{F}} \mu \leq \dim_{\mathrm{F}} \mu + d\left(1+\frac{\dim_{\mathrm{F}} \mu}{2\alpha} \right)\theta.
\]
In particular, $\dim^\theta_{\mathrm{F}} \mu$ is Lipschitz continuous at $\theta=0$ and therefore Lipschitz continuous on $[0,1]$.
\end{thm}
\begin{proof}
Let $t>\dim_{\mathrm{F}} \mu$, which guarantees the existence of a sequence $z_k \in \mathbb{R}^d$ with $|z_k| \geq 1$ and $|z_k| \to \infty$ such that for all $k$
\[
|\hat \mu (z_k)| \geq |z_k|^{-t/2}.
\]
Since $\hat \mu$ is $\alpha$-H\"older, there exists $c=c(\mu) \in (0,1)$ such that
\[
|\hat \mu (z)| \geq |z_k|^{-t/2}/2
\]
for all $z \in B(z_k,c|z_k|^{-t/(2\alpha)})$. By passing to a subsequence if necessary we may assume that the balls $B(z_k,c|z_k|^{-t/(2\alpha)})$ are pairwise disjoint. Therefore
\begin{align*}
\mathcal{J}_{s,\theta}(\mu)^{1/\theta} &\geq \sum_k \int_{B(z_k,c|z_k|^{-t/(2\alpha)})} |\hat \mu(z) |^{2/\theta} |z|^{s/\theta-d} \, dz \\
&\gtrsim \sum_k |z_k|^{-td/(2\alpha)} |z_k|^{-t/\theta} |z_k|^{s/\theta - d} = \infty
\end{align*}
whenever $s\geq d \theta + t+td\theta/(2\alpha)$. This proves
\[
\dim^\theta_{\mathrm{F}} \mu \leq \dim_{\mathrm{F}} \mu + d\left(1+\frac{\dim_{\mathrm{F}} \mu}{2\alpha} \right)\theta
\]
as required.
\end{proof}
Later we provide an example showing that the bounds from Theorem \ref{cty0} are sharp in the case $\dim_{\mathrm{F}} \mu = 0$ and $\alpha = 1$, see Corollary \ref{firstsharp}. We believe them to be sharp in general but do not pursue this question further.
\begin{ques}
Are the bounds from Theorem \ref{cty0} sharp? Are they sharp when $\mu$ is compactly supported and $\dim_{\mathrm{F}} \mu >0$?
\end{ques}
One benefit of Theorem \ref{cty0} is the demonstration that positive Fourier dimension of a measure can always be observed by averaging (in the case when $\hat \mu$ is H\"older). That is, provided $\hat \mu$ is H\"older, $\dim_{\mathrm{F}} \mu >0$ if and only if $\dim^\theta_{\mathrm{F}} \mu>d \theta$ for some $\theta >0$. This is potentially useful since positive Fourier dimension requires uniform estimates on $|\hat \mu|$ which are \emph{a priori} harder to obtain than estimates on the `averages' $\mathcal{J}_{s,\theta}(\mu)$.
Using Theorem \ref{cty0} we immediately get continuity of the Fourier dimension spectrum for compact sets. However, using a trick inspired by \cite[Lemma 1]{modfourier} we can upgrade this to continuity of the Fourier dimension spectrum for \emph{all} sets.
\begin{thm} \label{ctyX}
If $ X\subseteq \mathbb{R}^d $ is a non-empty set, then
\[
\dim^\theta_{\mathrm{F}} X \leq \dim_{\mathrm{F}} X + d\left(1+\frac{\dim_{\mathrm{F}} X}{2} \right)\theta
\]
for all $\theta \in [0,1]$. In particular, $\dim^\theta_{\mathrm{F}} X$ is Lipschitz continuous on the whole range $[0,1]$ with $\dim_{\mathrm{F}}^0 X = \dim_{\mathrm{F}} X$ and, if $X$ is Borel, $\dim_{\mathrm{F}}^1 X = \dim_{\textup{H}} X$.
\end{thm}
\begin{proof}
Let $\mu$ be a finite Borel measure supported by $X$. Let $f: \mathbb{R}^d \to [0,\infty)$ be a smooth function with compact support such that the Borel measure $\nu$ defined by $d \nu = f \, d \mu$ satisfies $\nu(X)>0$. Then $\nu$ is supported on a compact subset of $X$. We claim that
\[
\dim^\theta_{\mathrm{F}} \nu \geq \dim^\theta_{\mathrm{F}} \mu
\]
for all $\theta \in [0,1]$. Together with Theorem \ref{cty0} and Lemma \ref{holder}, this claim proves the result. Since $f$ is smooth and has compact support, for every integer $n \geq 1$ we have $|\hat f (t)| \lesssim_n |t|^{-n}$ for $|t| \geq 1$. In particular, $\hat f$ and $f$ are $L^1$ functions. Therefore, by the Fourier inversion formula,
\begin{equation} \label{inversionuse}
\hat \nu (z) = \int_{\mathbb{R}^d} \hat \mu(z-t) \hat f(t) \, dt.
\end{equation}
The claim for $\theta=0$ is \cite[Lemma 1]{modfourier} and so we may assume $\theta \in (0,1]$. Using \eqref{inversionuse}
\begin{align*}
\mathcal{J}_{s,\theta}(\nu)^{1/\theta} &= \int_{\mathbb{R}^d} \left\lvert \int_{\mathbb{R}^d} \hat \mu(z-t) \hat f(t) \, dt \right\rvert^{2/\theta} |z|^{s/\theta-d} \, dz\\
& \lesssim \int_{\mathbb{R}^d}\int_{\mathbb{R}^d} \left\lvert \hat \mu(z-t)\right\rvert^{2/\theta} |\hat f(t) | \, dt \, |z|^{s/\theta-d} \, dz \qquad \text{(by Jensen's inequality)}\\
& = \int_{\mathbb{R}^d}|\hat f(t) | \int_{\mathbb{R}^d} \left\lvert \hat \mu(z-t)\right\rvert^{2/\theta}|z|^{s/\theta-d} \, dz \, dt \qquad \text{(by Fubini's theorem)}\\
& = \int_{\mathbb{R}^d}|\hat f(t) | \int_{\mathbb{R}^d} \left\lvert \hat \mu(z)\right\rvert^{2/\theta}|z+t|^{s/\theta-d} \, dz \, dt \\
& = \int_{\mathbb{R}^d}|\hat f(t) | \int_{\mathbb{R}^d \setminus B(0,5 |t|)} \left\lvert \hat \mu(z)\right\rvert^{2/\theta}|z+t|^{s/\theta-d} \, dz \, dt \\
&\, \hspace{4cm} + \int_{\mathbb{R}^d}|\hat f(t) | \int_{B(0,5 |t|)} \left\lvert \hat \mu(z)\right\rvert^{2/\theta}|z+t|^{s/\theta-d} \, dz \, dt \\
&\lesssim \int_{\mathbb{R}^d}|\hat f(t) | \, dt \int_{\mathbb{R}^d} \left\lvert \hat \mu(z)\right\rvert^{2/\theta}|z|^{s/\theta-d} \, dz + \int_{\mathbb{R}^d}|\hat f(t) | |t|^d |t|^{s/\theta-d} \, dt \\
&\lesssim \mathcal{J}_{s,\theta}(\mu)^{1/\theta} + 1
\end{align*}
using the rapid decay of $|\hat f(t)|$. This proves the claim and the theorem.
\end{proof}
It is useful to keep the following simple bounds in mind. These are immediate from Theorems \ref{concave} and \ref{cty0}.
\begin{cor} \label{bounds}
Let $\mu$ be a compactly supported finite Borel measure on $\mathbb{R}^d$. Then
\[
\dim_{\mathrm{F}} \mu + \theta\left(\dim_{\mathrm{S}} \mu - \dim_{\mathrm{F}} \mu\right) \leq \dim^\theta_{\mathrm{F}} \mu \leq \min \left\{ \dim_{\mathrm{S}} \mu, \, \dim_{\mathrm{F}} \mu + d\left(1+\frac{\dim_{\mathrm{F}} \mu}{2} \right)\theta \right\}.
\]
\end{cor}
In certain extremal situations the Fourier dimension spectrum is determined by the Fourier and Sobolev dimensions.
\begin{cor} \label{collapse}
If $\mu$ is a finite Borel measure on $\mathbb{R}^d$ such that $\hat \mu$ is $\alpha$-H\"older and
\[
\dim_{\mathrm{S}} \mu = \left(1+\frac{d}{2\alpha} \right) \dim_{\mathrm{F}} \mu + d,
\]
then
\[
\dim^\theta_{\mathrm{F}} \mu = \dim_{\mathrm{F}} \mu + d\left(1+\frac{\dim_{\mathrm{F}} \mu}{2\alpha} \right)\theta
\]
for all $\theta \in [0,1]$.
\end{cor}
Another simple consequence of Theorem \ref{cty0} is that the Sobolev dimension can be controlled by the Fourier dimension. We are unaware if the following estimates were known previously.
\begin{cor} \label{sobolev}
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ such that $\hat \mu$ is $\alpha$-H\"older. Then
\[
\dim_{\mathrm{S}} \mu \leq \left(1+\frac{d}{2\alpha} \right) \dim_{\mathrm{F}} \mu + d.
\]
In particular, if
\[
\dim_{\mathrm{F}} \mu \leq \frac{d}{1+d/(2\alpha)}
\]
then $\dim_{\mathrm{S}} \mu \leq 2d$, and if $\dim_{\mathrm{F}} \mu =0$, then $\dim_{\mathrm{S}} \mu \leq d$. These are relevant thresholds for Sobolev dimension because if $\dim_{\mathrm{S}} \mu > 2d$, then $\mu$ has a continuous density and if $\dim_{\mathrm{S}} \mu >d$, then $\mu$ has a density in $L^2(\mathbb{R}^d)$, see \cite[Theorem 5.4]{mattila}.
\end{cor}
\section{Dependence on ambient space}
An elementary but striking observation on the Fourier dimension is that it depends on the ambient space in a way that Hausdorff dimension, for example, does not. Consider integers $1 \leq k<d$ and let $f^{k,d} : \mathbb{R}^k \to \mathbb{R}^d$ be an isometric embedding defined by identifying $\mathbb{R}^k$ with a $k$-dimensional affine subspace of $\mathbb{R}^d$. Then it is well-known and easily seen that
\[
\dim_{\mathrm{F}} f_\#^{k,d} \mu = 0
\]
for all finite Borel measures $\mu$ on $\mathbb{R}^k$ but, provided $\dim_{\mathrm{S}} \mu \leq k$,
\[
\dim_{\mathrm{S}} f_\#^{k,d} \mu = \dim_{\mathrm{S}} \mu.
\]
Here $f_\#^{k,d} \mu $ is the pushforward of $\mu$ under $f^{k,d}$. The conclusion for Sobolev dimension follows by observing that the standard energy $\mathcal{I}_s(\mu)$ does not depend on the ambient space. Moreover,
\[
\dim_{\mathrm{F}} f^{k,d}(X) = 0
\]
and
\[
\dim_{\textup{H}} f^{k,d}(X) = \dim_{\textup{H}} X
\]
for all $X \subseteq \mathbb{R}^k$. Since the Fourier dimension spectrum interpolates between the Fourier and Sobolev/Hausdorff dimensions, it is natural to ask what happens to the Fourier dimension spectrum under the embeddings $f^{k,d}$. In particular, the answer must encapsulate both of the rather distinct behaviours seen above.
\begin{thm} \label{embedding}
Let $1 \leq k<d$ be integers, $\mu$ be a finite Borel measure on $\mathbb{R}^k$ and $X \subseteq \mathbb{R}^k$ be a non-empty set. Then
\[
\dim^\theta_{\mathrm{F}} f_\#^{k,d} \mu = \min\{\theta k, \dim^\theta_{\mathrm{F}} \mu\}
\]
and
\[
\dim^\theta_{\mathrm{F}} f ^{k,d} (X) = \min\{\theta k, \dim^\theta_{\mathrm{F}} X\}
\]
for all $\theta \in [0,1]$.
\end{thm}
\begin{proof}
The claim for sets follows immediately from the claim for measures and so we just prove the result for $\mu$. Fix $\theta \in (0,1]$; the case $\theta=0$ follows from the discussion preceding the theorem. Write $V = f^{k,d}(\mathbb{R}^k)$ and $\pi$ for orthogonal projection from $\mathbb{R}^d$ onto $V$ identified with $\mathbb{R}^k$. We may assume without loss of generality that $V$ is a subspace of $\mathbb{R}^d$. We begin with the upper bound. Since $\hat \mu (0) = \mu(\mathbb{R}^k)>0$ and $\hat \mu $ is continuous, there exists $\varepsilon=\varepsilon(\mu)>0$ such that $|\hat \mu (z)| \geq \mu(\mathbb{R}^k)/2>0$ for all $z \in B_V(0,\varepsilon)$, where $B_V$ denotes open balls in $V$. Write $0 \neq z \in \mathbb{R}^d$ in spherical coordinates $z=r v$ where $r>0$ and $v \in S^{d-1}$ with $\sigma_{d-1}$ the surface measure on $S^{d-1}$. Then
\begin{align*}
\mathcal{J}_{s,\theta}( f_\#^{k,d} \mu)^{1/\theta} &=\int_{0}^\infty \int_{S^{d-1}} | \hat \mu(r \pi(v)) |^{2/\theta} r^{s/\theta-d} \, r^{d-1} d\sigma_{d-1}(v) dr \\
& \geq \int_{0}^\infty r^{s/\theta-1} \int_{\substack{v \in S^{d-1}: \\ r \pi(v) \in B_V(0,\varepsilon)}} |\hat \mu(r \pi(v)) |^{2/\theta} \, d\sigma_{d-1}(v) dr \\
& \gtrsim_\theta \int_{0}^\infty r^{s/\theta-1}\sigma_{d-1}\left(\left\{v \in S^{d-1}: \pi(v) \in B_V(0,\varepsilon/r) \right\}\right) dr \\
& \gtrsim \int_{0}^\infty r^{s/\theta-1} \left( \varepsilon/r\right)^k dr \\
&=\infty
\end{align*}
whenever $s \geq k\theta$, proving $\dim^\theta_{\mathrm{F}} f_\#^{k,d} \mu \leq k \theta$.
For the remaining upper bound, we may assume $\theta k >\dim^\theta_{\mathrm{F}} \mu$ and let $\theta k> s >\dim^\theta_{\mathrm{F}} \mu$ and fix $1< c <2$ and $\max\{c/2,1/c\} < r <1$. In what follows the implicit constants may depend on $r$ and $c$ (and other fixed parameters as usual). Then, writing $z=(x,y)$ for $x \in V^\perp$ and $y \in V$,
\begin{align*}
\int_{\mathbb{R}^d} |\widehat{ f_\#^{k,d} \mu}(z) |^{2/\theta} |z|^{s/\theta-d} \, dz & \gtrsim \int_{x \in V^\perp } \int_{\substack{y \in V : \\ |x| \leq |y| \leq 2 |x|}} |\hat \mu(y) |^{2/\theta} |y|^{s/\theta-k} |x|^{k-d} \, dy dx \\
& \geq \sum_{n=1}^\infty \int_{\substack{x \in V^\perp: \\ r c^n \leq |x| \leq c^n} } \int_{\substack{y \in V : \\ c^n \leq |y| \leq c^{n+1}}} |\hat \mu(y) |^{2/\theta} |y|^{s/\theta-k} |x|^{k-d} \, dy dx \\
& \gtrsim \sum_{n=1}^\infty c^{n(k-d)}\int_{\substack{x \in V^\perp: \\ r c^n \leq |x| \leq c^n} } \, dx \int_{\substack{y \in V : \\ c^n \leq |y| \leq c^{n+1}}} |\hat \mu(y) |^{2/\theta} |y|^{s/\theta-k} \, dy \\
& \gtrsim \sum_{n=1}^\infty \int_{\substack{y \in V : \\ c^n \leq |y| \leq c^{n+1}}} |\hat \mu(y) |^{2/\theta} |y|^{s/\theta-k} \, dy \\
& = \int_{\substack{y \in V : \\ c \leq |y| }} |\hat \mu(y) |^{2/\theta} |y|^{s/\theta-k} \, dy \\
&=\infty
\end{align*}
proving $\dim^\theta_{\mathrm{F}} f_\#^{k,d} \mu \leq \dim^\theta_{\mathrm{F}} \mu$.
We turn our attention to the lower bound. Let $0< s <\min\{\theta k , \dim^\theta_{\mathrm{F}} \mu\}$ (the case $s=0$ being trivial). Then, again writing $z=(x,y)$ for $x \in V^\perp$ and $y \in V$, and noting $\widehat{ f_\#^{k,d} \mu}(z) = \hat \mu (\pi(z)) = \hat \mu (y)$, Fubini's theorem gives
\begin{align*}
\int_{\mathbb{R}^d} |\widehat{ f_\#^{k,d} \mu}(z) |^{2/\theta} |z|^{s/\theta-d} \, dz & = \int_{y \in V } |\hat \mu (y) |^{2/\theta} \int_{x \in V^\perp } \left( |x|^2+|y|^2 \right)^{(s/\theta-d)/2} \, dx \, dy \\
& \approx \int_{y \in V } |\hat \mu (y) |^{2/\theta} |y|^{s/\theta-k} \, dy \\
& = \mathcal{J}_{s,\theta}(\mu)^{1/\theta} \\
&<\infty,
\end{align*}
where the second line follows by substituting $x=|y|u$ in the inner integral: since $s < \theta k$ we have $s/\theta - d < k - d$ and so $\int_{V^\perp} (|u|^2+1)^{(s/\theta-d)/2} \, du < \infty$, whence the inner integral is a constant multiple of $|y|^{s/\theta - k}$. The final expression is finite since $0<s<\dim^\theta_{\mathrm{F}} \mu$: choosing $s<s'<\dim^\theta_{\mathrm{F}} \mu$ with $\mathcal{J}_{s',\theta}(\mu)<\infty$ and splitting the integral at $|y|=1$, the part over $|y| \leq 1$ is finite since $s>0$ and the part over $|y| \geq 1$ is dominated by $\mathcal{J}_{s',\theta}(\mu)^{1/\theta}$. This proves $\dim^\theta_{\mathrm{F}} f_\#^{k,d} \mu \geq \min\{\theta k, \dim^\theta_{\mathrm{F}} \mu\}$, as required.
\end{proof}
One can see from Theorem \ref{embedding} together with Theorem \ref{cty0} that the Fourier dimension spectrum of a measure is preserved upon embedding in a Euclidean space with strictly larger dimension if and only if the Fourier dimension of the measure is zero to begin with.
Using Theorem \ref{embedding} we get our first examples where the Fourier dimension spectrum can be derived explicitly.
\begin{cor} \label{hyperplane}
Let $X$ be an isometric embedding of $[0,1]^k$ in $\mathbb{R}^d$ for integers $1 \leq k < d$ and let $\mu$ be the restriction of $k$-dimensional Hausdorff measure to $X$. Then
\[
\dim^\theta_{\mathrm{F}} X = \dim^\theta_{\mathrm{F}} \mu = k \theta
\]
for all $\theta \in [0,1]$.
\end{cor}
\section{Fourier coefficients, energy, and Riesz products}
There is a convenient representation for the energy in terms of the Fourier coefficients of a measure, see \cite[Theorem 3.21]{mattila}. Indeed, for $\mu$ a finite Borel measure on $\mathbb{R}^d$ with support contained in $[0,1]^d$
\begin{equation} \label{fouriercoeff}
\mathcal{J}_{s,1}(\mu) \approx 1 + \sum_{z\in \mathbb{Z}^d \setminus\{0\}} |\hat{\mu}(z)|^2 |z|^{s-d}
\end{equation}
for $0<s<d$.
Using the convolution formula, \eqref{fouriercoeff} gives information about $\mathcal{J}_{s,\theta}(\mu)$ for $0<s<d \theta$ when $\theta$ is the reciprocal of an integer. However, we need to sum over the finer grid $\theta\mathbb{Z}^d$ because the convolution $\mu^{*1/\theta}$ is no longer supported on $[0,1]^d$ (but rather on $[0,1/\theta]^d$), and so we must rescale before applying \eqref{fouriercoeff}.
\begin{prop} \label{coeffs}
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ with support contained in $[0,1]^d$. For $\theta =1/k$ with $k \in \mathbb{N}$ and $0<s<d \theta$,
\[
\mathcal{J}_{s,\theta}(\mu)^{1/\theta} \approx 1 + \sum_{z\in \theta\mathbb{Z}^d \setminus\{0\}} |\hat{\mu}(z)|^{2/\theta} |z|^{s/\theta-d} .
\]
\end{prop}
\begin{proof}
Writing $\theta \mu^{*1/\theta}$ for the pushforward of $\mu^{*1/\theta}$ under the dilation $x \mapsto \theta x$, we get that $\theta \mu^{*1/\theta}$ is supported on $[0,1]^d$ and that
\begin{equation} \label{temp}
\widehat{\theta \mu^{*1/\theta}}(z) = \widehat{ \mu^{*1/\theta}}(\theta z)
\end{equation}
for $z \in \mathbb{R}^d$. Then
\begin{align*}
\mathcal{J}_{s,\theta}(\mu)^{1/\theta} = \int_{\mathbb{R}^d} |\hat \mu (z)|^{2/\theta} |z|^{s/\theta - d} \, dz & = \int_{\mathbb{R}^d} |\widehat{ \mu^{*1/\theta}} (z)|^{2} |z|^{s/\theta - d} \, dz \qquad \text{(convolution formula)}\\
& \approx_\theta \int_{\mathbb{R}^d} | \widehat{\theta \mu^{*1/\theta}}(z/\theta)|^{2} |z/\theta|^{s/\theta - d} \, dz \qquad \text{(by \eqref{temp})} \\
& \approx_\theta \int_{\mathbb{R}^d} | \widehat{\theta \mu^{*1/\theta}}(z)|^{2} |z|^{s/\theta - d} \, dz\\
& = \mathcal{J}_{s/\theta,1}(\theta \mu^{*1/\theta}) \\
&\approx 1 + \sum_{z\in \mathbb{Z}^d \setminus\{0\}} |\widehat{\theta \mu^{*1/\theta}}(z)|^2 |z|^{s/\theta-d} \qquad \text{(by \eqref{fouriercoeff})} \\
&\approx_\theta 1 + \sum_{z\in \mathbb{Z}^d \setminus\{0\}} |\widehat{ \mu^{*1/\theta}}(\theta z)|^2 |\theta z|^{s/\theta-d} \qquad \text{(by \eqref{temp})} \\
& = 1 + \sum_{z\in \theta\mathbb{Z}^d \setminus\{0\}} |\hat{\mu}(z)|^{2/\theta} |z|^{s/\theta-d} \qquad \text{(convolution formula)}
\end{align*}
as required.
\end{proof}
It would be useful to relax the assumptions in Proposition \ref{coeffs} but we do not attempt this here.
\begin{ques}
In Proposition \ref{coeffs}, can the assumption that $\theta$ is the reciprocal of an integer be removed? Can the assumption that $0<s<d\theta$ be weakened?
\end{ques}
Being able to estimate the energy in terms of the Fourier coefficients often simplifies calculations. To demonstrate this we give an easy example of a measure showing the upper bound from Theorem \ref{cty0} is sharp in the case $\dim_{\mathrm{F}} \mu = 0$ and $\alpha = 1$.
\begin{cor} \label{firstsharp}
Define $f:[0,1] \to \mathbb{R}$ by
\[
f(x) = 2+ \sum_{n=1}^\infty n^{-2} \sin(2\pi2^{n}x)
\]
and $f_d: [0,1]^d \to \mathbb{R}$ by $f_d(x_1, \dots, x_d) = f(x_1) \cdots f(x_d)$.
Then $f$ and $f_d$ are non-negative (since $\sum_{n=1}^\infty n^{-2} = \pi^2/6 < 2$) and we may define a measure $\mu$ supported on $[0,1]^d$ by $d\mu = f_d \, dx$. Then
\[
\dim^\theta_{\mathrm{F}} \mu = \theta d
\]
for all $\theta \in [0,1]$.
\end{cor}
\begin{proof}
For integers $n \geq 1$, expanding the sines in exponentials gives
\[
|\hat f(2^{n})| = |\hat f(-2^{n})| = \frac{n^{-2}}{2},
\]
while $\hat f(0) = 2$ and $\hat f(z) = 0$ for all other integers $z \in \mathbb{Z}$. Moreover, for all $z = (z_1, \dots, z_d) \in \mathbb{Z}^d$
\[
|\hat \mu(z)| = |\hat f_d(z)| = |\hat f(z_1)| \cdots |\hat f(z_d)| .
\]
Therefore $|\hat \mu(z)| \neq 0$ precisely when each $z_i$ is either $0$ or of the form $\pm 2^{n_i}$ for an integer $n_i \geq 1$, in which case
\[
|\hat \mu(z)| \leq 2^d \prod_{i \, : \, z_i \neq 0} n_i^{-2} \lesssim n^{-2},
\]
where $2^{n} = \max_i |z_i| \approx |z|$. Considering $z = (2^n, 0, \dots, 0)$ shows, in particular, that $\dim_{\mathrm{F}} \mu = 0$. Further, using the Fourier series representation \eqref{fouriercoeff} of the energy and noting that there are $\lesssim n^{d-1}$ non-zero frequencies $z$ with $|z| \approx 2^n$,
\[
\mathcal{J}_{s,1}(\mu) \approx 1 + \sum_{z\in \mathbb{Z}^d \setminus\{0\}} |\hat{\mu}(z)|^2 |z|^{s-d} \lesssim 1 + \sum_{n=1}^{\infty} n^{d-1} n^{-4} 2^{n(s-d)} <\infty
\]
for $s<d$ and so $\dim_{\mathrm{S}} \mu = d$. Since $\dim^\theta_{\mathrm{F}} \mu$ is concave, the result follows from Theorem \ref{cty0}.
\end{proof}
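As a numerical sanity check of the coefficients used above (and not part of the formal development): expanding $\sin(2\pi 2^n x) = (e^{2\pi i 2^n x}-e^{-2\pi i 2^n x})/(2i)$ gives $|\hat f(\pm 2^n)| = n^{-2}/2$, with $\hat f(0)=2$ and all other integer coefficients vanishing; the harmless factor $1/2$ does not affect the dimension computations. Truncating the series at $n=10$ does not change the Fourier coefficients at frequencies up to $2^{10}$, so they can be recovered exactly by a discrete Fourier transform:

```python
import math, cmath

N = 4096  # sample points; the truncated series has top frequency 2**10 < N/2

def f_trunc(x, terms=10):
    """Truncation of f(x) = 2 + sum_n n^{-2} sin(2*pi*2^n*x)."""
    return 2 + sum(math.sin(2 * math.pi * 2**n * x) / n**2
                   for n in range(1, terms + 1))

samples = [f_trunc(m / N) for m in range(N)]

def fourier_coeff(k):
    """k-th Fourier coefficient of the sampled (truncated) function."""
    total = 0j
    for m in range(N):
        total += samples[m] * cmath.exp(-2j * math.pi * k * m / N)
    return total / N

# |c(2^n)| = 1/(2 n^2) at dyadic frequencies; c vanishes at other non-zero integers
dyadic = [abs(fourier_coeff(2**n)) for n in range(1, 6)]
```

Here \texttt{dyadic} returns $[1/2,\, 1/8,\, 1/18,\, 1/32,\, 1/50]$ up to rounding, while, for instance, the coefficient at the non-dyadic frequency $k=3$ vanishes.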
\subsection{Riesz products}
Proposition \ref{coeffs} gives useful information about the Fourier dimension spectrum for $\theta$ which are the reciprocal of an integer. However, if more information about the Fourier transform is available sometimes this can be pushed to all $\theta$. This is the case for Riesz products which we use below to provide explicit examples with a more complicated Fourier dimension spectrum than the examples we have met thus far. Riesz products are a well-studied family of measures defined by
\[
\mu_{a,\lambda} = \prod_{j=1}^\infty (1+a_j \cos(2\pi \lambda_j x))
\]
where $a=(a_j)$ and $\lambda = (\lambda_j)$ with $a_j \in [-1,1]$ and $\lambda_j \in \mathbb{N}$ with $ \lambda_{j+1} \geq 3\lambda_j$ are given sequences. Here $\mu_{a,\lambda}$ is the weak limit of the sequence of absolutely continuous measures associated to the truncated products. It is well-known that $\mu_{a,\lambda}$ is absolutely continuous if and only if $\sum_j a_j^2 < \infty$, and otherwise it is singular with respect to Lebesgue measure, see \cite[Theorem 13.2]{mattila}. The dimension theory of Riesz products, especially in the singular case, is well-studied, see \cite{hare, mattila}. The Fourier coefficients of $\mu_{a,\lambda}$ are easily computed, giving
\[
|\widehat{\mu_{a,\lambda}}(k)| = \prod_{j \, : \, \varepsilon_j \neq 0} (|a_j|/2)
\]
for integers $k \neq 0$ with (unique) representation
\[
k = \sum_j \varepsilon_j \lambda_j \qquad (\varepsilon_j \in \{-1,0,1\})
\]
and $|\widehat{\mu_{a,\lambda}}(k)| = 0$ for integers without such a representation. Therefore
\[
|\widehat{\mu_{a,\lambda}}(\lambda_k)| = |a_k|/2
\]
and so
\[
\dim_{\mathrm{F}} \mu_{a,\lambda} \leq \liminf_{k \to \infty} \frac{-2 \log |a_k|}{\log \lambda_k}.
\]
Therefore, if $\dim_{\mathrm{F}} \mu_{a,\lambda} >0$, then $\sum_j a_j^2 < \infty $ and $ \mu_{a,\lambda}$ is absolutely continuous. Moreover, if $\lambda_{j+1}/\lambda_{j} \to \infty$, then $\dim_{\textup{H}} \mu_{a,\lambda} = 1$ and $\dim_{\mathrm{S}} \mu_{a,\lambda} \geq 1$, see \cite[Corollary 3.3]{hare}.
\begin{thm} \label{riesz1}
Suppose $ \lambda_{j+1} \leq C \lambda_j$ for some fixed $C \geq 3$ and $\liminf_{k \to \infty} \frac{-2 \log |a_k|}{\log \lambda_k} = 0$. Let
\[
\mathcal{S}_\theta(s) =\sum_{k=1}^\infty \lambda_k^{s/\theta-1} \, \prod_{j=1}^{k-1} (1+|a_j|^{2/\theta}/2) .
\]
Then, for $\theta \in (0,1]$,
\[
\dim^\theta_{\mathrm{F}} \mu_{a,\lambda} = \sup\{s \leq \theta : \mathcal{S}_\theta(s) < \infty\}.
\]
\end{thm}
\begin{proof}
We first prove the upper bound. Note that $\dim_{\mathrm{F}} \mu_{a,\lambda} = \liminf_{k \to \infty} \frac{-2 \log |a_k|}{\log \lambda_k} = 0$ and so $\dim^\theta_{\mathrm{F}} \mu_{a,\lambda} \leq \theta$ by Corollary \ref{bounds}. Observe that
\[
|\widehat{\mu_{a,\lambda}}(z)| \geq |\widehat{\mu_{a,\lambda}}(k)|/2
\]
for all $z \in \mathbb{R}$ with $|z-k| \leq 1/2$ and where $k \neq 0$ has a (unique) representation
\begin{equation} \label{unirep}
k = \sum_j \varepsilon_j \lambda_j \qquad (\varepsilon_j \in \{-1,0,1\}).
\end{equation}
Summing over such $k$,
\[
\mathcal{J}_{s,\theta}(\mu_{a,\lambda})^{1/\theta} \gtrsim \sum_{k} |\widehat{\mu_{a,\lambda}}(k)|^{2/\theta} |k|^{s/\theta-1}.
\]
Then, by following the proof of \cite[Theorem 13.3]{mattila}, one obtains that the right hand side is finite if and only if $\mathcal{S}_\theta(s) < \infty$. This proves the upper bound.
For the lower bound, observe that
\[
|\widehat{\mu_{a,\lambda}}(z)| \lesssim \sum_k |\widehat{\mu_{a,\lambda}}(k)|\min\{ 1 , |z-k|^{-1}\}
\]
for $z \in \mathbb{R}$, where the sum is over integers $k \neq 0$ with (unique) representation \eqref{unirep}. Note that $\mathcal{S}_\theta(s) = \infty$ for $s > \theta$, so we may assume $s < \theta$. Fix $\theta \in (0,1]$, $0<s<\theta$ and $0<\varepsilon< (\theta-s)/(2-\theta)$. By Jensen's inequality,
\begin{align*}
|\widehat{\mu_{a,\lambda}}(z)|^{2/\theta} &\lesssim_\varepsilon \sum_k |\widehat{\mu_{a,\lambda}}(k)|^{2/\theta}\min\left\{ 1 , \frac{1}{|z-k|^{1-\varepsilon(2/\theta-1)}}\right\}.
\end{align*}
Then, applying Fubini's theorem and setting $y=z-k$,
\begin{align*}
\mathcal{J}_{s,\theta}(\mu_{a,\lambda})^{1/\theta} &=\int_\mathbb{R} |\widehat{\mu_{a,\lambda}}(z)|^{2/\theta} |z|^{s/\theta-1} \, dz\\
&\lesssim_\varepsilon \sum_k |\widehat{\mu_{a,\lambda}}(k)|^{2/\theta}\int_\mathbb{R} \min\left\{ |z|^{s/\theta-1} , \frac{|z|^{s/\theta-1} }{|z-k|^{1-\varepsilon(2/\theta-1)}}\right\} \, dz \\
&\lesssim \sum_k |\widehat{\mu_{a,\lambda}}(k)|^{2/\theta} \bigg(\int_{|y| \leq 2} |y+k|^{s/\theta-1} \, dy \\
&\, \qquad \qquad \qquad + \int_{2 \leq |y| \leq 2k} \frac{|k|^{s/\theta-1} }{|y|^{1-\varepsilon(2/\theta-1)}} \, dy + \int_{ |y| \geq 2k} \frac{1}{|y|^{2-s/\theta -\varepsilon(2/\theta-1)}} \, dy \bigg)\\
&\lesssim \sum_k |\widehat{\mu_{a,\lambda}}(k)|^{2/\theta} |k|^{s/\theta-1+\varepsilon(2/\theta-1)} \\
& <\infty
\end{align*}
provided $s +\varepsilon(2-\theta)< \sup\{s \leq \theta : \mathcal{S}_\theta(s) < \infty\}$. Since $\varepsilon>0$ can be chosen arbitrarily small, the lower bound follows.
\end{proof}
It would be interesting to investigate the Fourier dimension spectrum of Riesz products in greater generality, but we do not pursue this here. We note a pleasant explicit formula in the following simple case.
\begin{cor} \label{riesz2}
If $\lambda_j=\lambda^j$ and $a_j=a$ for constants $a \in [-1,1]$ with $a \neq 0$ and an integer $\lambda\geq 3$, then, for $\theta \in [0,1]$,
\[
\dim^\theta_{\mathrm{F}} \mu_{a,\lambda} = \theta-\theta \frac{\log(1+|a|^{2/\theta}/2)}{\log \lambda}.
\]
\end{cor}
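A short calculation shows how Corollary \ref{riesz2} follows from Theorem \ref{riesz1}; we sketch it here. With these choices, $\mathcal{S}_\theta(s)$ is a geometric series:
\[
\mathcal{S}_\theta(s) = \sum_{k=1}^\infty \lambda^{k(s/\theta-1)} \left(1+|a|^{2/\theta}/2\right)^{k-1},
\]
which converges if and only if $\lambda^{s/\theta-1} \left(1+|a|^{2/\theta}/2\right) < 1$, that is, if and only if
\[
s < \theta - \theta \frac{\log(1+|a|^{2/\theta}/2)}{\log \lambda},
\]
and the stated formula follows from Theorem \ref{riesz1}. For instance, for $a=1$ and $\lambda=3$ the corollary gives the linear spectrum $\dim^\theta_{\mathrm{F}} \mu_{a,\lambda} = \theta \log 2/\log 3$, whereas for $0<|a|<1$ the term $|a|^{2/\theta}$ vanishes rapidly as $\theta \to 0$ and the spectrum satisfies $\dim^\theta_{\mathrm{F}} \mu_{a,\lambda}/\theta \to 1$.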
\subsection{An example with discontinuity at $\theta=0$}
Here we construct a finite measure, necessarily of unbounded support, for which the Fourier dimension spectrum is discontinuous at $\theta = 0$. The construction is similar to those above, but uses the unbounded support to achieve rapid Fourier decay away from a sparse set of peaks.
\begin{prop} \label{discont}
There exists a finite Borel measure $\mu$ on $\mathbb{R}$ for which $\dim^\theta_{\mathrm{F}} \mu$ is not continuous at $\theta=0$.
\end{prop}
\begin{proof}
Define $f: \mathbb{R} \to [0,\infty)$ by
\[
f(x) = \sum_{n=1}^\infty n^{-2} n^{-n}\left(2+ \sin(2\pi2^nx) \right) \textbf{1}_{[0,n^n]}(x)
\]
where $\textbf{1}_{[0,n^n]}$ is the indicator function of $[0,n^n]$. Define a measure $\mu$ with unbounded support by $\mu = f \, dx$, noting that
\[
\mu(\mathbb{R}) \leq \sum_n 3(n^{-2} n^{-n}) n^n = 3\sum_n n^{-2} < \infty.
\]
For integers $n \geq 1$,
\[
|\hat \mu(2^n) | \approx (n^{-2} n^{-n})n^n = n^{-2}
\]
and so $\dim_{\mathrm{F}} \mu = \dim_\textup{F}^0 \mu = 0$. Moreover, for all $z \in \mathbb{R}$,
\[
|\hat \mu(z) | \lesssim \max_n \min\left\{ n^{-2} , \frac{1}{n^n |z-2^n|} \right\} \lesssim \begin{cases}
n^{-2} & \text{if } \frac{n^n 2^n}{n^n+1} \leq |z| \leq \frac{n^n 2^n}{n^n-1} \text{ for some } n \geq 3, \\
|z|^{-1}& \text{otherwise.}
\end{cases}
\]
Therefore, for $\theta \in (0,1]$ and $0<s<2$,
\begin{align*}
\mathcal{J}_{s, \theta} (\mu)^{1/\theta} &\lesssim 1+ \int_{|z| \geq 1} |z|^{-2/\theta} |z|^{s/\theta-1} \, dz + \sum_n n^{-4/\theta} \int_{\frac{n^n 2^n}{n^n+1} \leq |z| \leq \frac{n^n 2^n}{n^n-1}} |z|^{s/\theta-1} \, dz \\
&\lesssim 1+ \int_{|z| \geq 1} |z|^{(s-2)/\theta-1} \, dz + \sum_n n^{-4/\theta} 2^{n(s/\theta-1)}2^n n^{-n} \\
&<\infty,
\end{align*}
where the first term accounts for the trivially bounded contribution from $|z| \leq 1$. Therefore, $\dim^\theta_{\mathrm{F}} \mu \geq 2$ for $\theta \in (0,1]$ and $\dim^\theta_{\mathrm{F}} \mu$ is not continuous at $\theta = 0$.
\end{proof}
It is easy to adapt the above calculation to show that $\dim^\theta_{\mathrm{F}} \mu = 2$ for $\theta \in (0,1]$. Further, this example can be modified easily to obtain different behaviour at the discontinuity, including positive Fourier dimension and arbitrarily large jumps.
\section{Connection to Strichartz bounds and average Fourier dimensions}
Strichartz \cite{stric1, stric2} considered bounds for averages of the Fourier transform of the form
\[
R^{d-\beta_k} \lesssim \int_{|z| \leq R} |\hat \mu (z)|^{2k} \, dz \lesssim R^{d-\alpha_k}
\]
for integers $k \geq 1$ and $0 \leq \alpha_k \leq \beta_k$. Motivated by this, for $\theta \in (0,1]$, let
\[
\overline{F}_\mu(\theta) = \limsup_{R \to \infty} \frac{\theta \log \left(R^{-d} \int_{|z| \leq R} |\hat \mu (z)|^{2/\theta} \, dz \right)}{-\log R}
\]
and
\[
\underline{F}_\mu(\theta) = \liminf_{R \to \infty} \frac{\theta \log \left(R^{-d} \int_{|z| \leq R} |\hat \mu (z)|^{2/\theta}\, dz \right)}{-\log R}.
\]
In particular, $\overline{F}_\mu(\theta)$ is the infimum of $\beta \geq 0$ for which, for all sufficiently large $R$,
\[
R^{d-\beta/\theta} \lesssim \int_{|z| \leq R} |\hat \mu (z)|^{2/\theta} \, dz
\]
and $\underline{F}_\mu(\theta)$ is the supremum of $\alpha\geq 0$ for which, for all sufficiently large $R$,
\[
\int_{|z| \leq R} |\hat \mu (z)|^{2/\theta} \, dz\lesssim R^{d-\alpha/\theta}.
\]
One can interpret $\underline{F}_\mu(\theta)$ as a `$\theta$-averaged Fourier dimension'. Note that
\[
\int_{\mathbb{R}^d } |\hat \mu (z)|^{2/\theta} \, dz= \lim_{R \to \infty} \int_{|z| \leq R} |\hat \mu (z)|^{2/\theta}\, dz
\]
exists and is either a positive finite number or $+\infty$. In the former case $\underline{F}_\mu(\theta) = \overline{F}_\mu(\theta) = \theta d$ and in the latter case $0 \leq \underline{F}_\mu(\theta) \leq \overline{F}_\mu(\theta) \leq \theta d$. There is a connection between the Fourier dimension spectrum and these average Fourier dimensions.
\begin{thm} \label{stric3}
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ and $\theta \in (0,1]$. Then
\[
\dim^\theta_{\mathrm{F}} \mu \geq \underline{F}_\mu(\theta).
\]
\end{thm}
\begin{proof}
We may assume $\underline{F}_\mu(\theta) >0$. Let $c>1$ and $0<s <\alpha<\underline{F}_\mu(\theta).$ Then
\begin{align*}
\mathcal{J}_{s,\theta}(\mu)^{1/\theta} = \int_{\mathbb{R}^d} |\hat \mu(z) |^{2/\theta} |z|^{s/\theta-d} \, dz
& \lesssim 1+ \sum_{k =1}^\infty \int_{c^{k-1} <|z| \leq c^k} |\hat \mu(z) |^{2/\theta} |z|^{s/\theta-d} \, dz \\
& \approx 1+ \sum_{k =1}^\infty c^{k(s/\theta-d)} \int_{c^{k-1} <|z| \leq c^k} |\hat \mu(z) |^{2/\theta} \, dz \\
& \leq 1+ \sum_{k =1}^\infty c^{k(s/\theta-d)} \int_{|z| \leq c^k} |\hat \mu(z) |^{2/\theta} \, dz \\
& \lesssim 1+ \sum_{k =1}^\infty c^{k(s/\theta-d)} c^{k(d-\alpha/\theta)} \\
&< \infty
\end{align*}
which shows $\dim^\theta_{\mathrm{F}} \mu \geq \alpha$. Since $\alpha < \underline{F}_\mu(\theta)$ was arbitrary, the result follows.
\end{proof}
The connection becomes stronger, and in fact $\dim^\theta_{\mathrm{F}} \mu$ and $\underline{F}_\mu(\theta)$ coincide, in the following special case.
\begin{thm} \label{stric1}
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ and $\theta \in (0,1]$. If $ \underline{F}_\mu(\theta) < \theta d$, then
\[
\dim^\theta_{\mathrm{F}} \mu = \underline{F}_\mu(\theta).
\]
\end{thm}
\begin{proof}
The lower bound $\dim^\theta_{\mathrm{F}} \mu \geq \underline{F}_\mu(\theta)$ comes from Theorem \ref{stric3} and so we prove the upper bound. Choose $\underline{F}_\mu(\theta) <\beta < s< \theta d$. Then there exist arbitrarily large $R>0$ satisfying
\[
\int_{|z| \leq R} |\hat \mu(z) |^{2/\theta} \, dz \gtrsim R^{d-\beta/\theta}
\]
and therefore, taking the supremum over such $R$ and using that $s < \theta d$,
\begin{align*}
\mathcal{J}_{s,\theta}(\mu)^{1/\theta} = \int_{\mathbb{R}^d} |\hat \mu(z) |^{2/\theta} |z|^{s/\theta-d} \, dz
& \geq \sup_{R} \int_{|z| \leq R} |\hat \mu(z) |^{2/\theta} |z|^{s/\theta-d} \, dz \\
& \geq \sup_{R} R^{s/\theta-d} \int_{|z| \leq R} |\hat \mu(z) |^{2/\theta} \, dz \\
& \gtrsim \sup_{R} R^{s/\theta-d} R^{d-\beta/\theta} \\
&=\infty
\end{align*}
since $s > \beta$. This shows $\dim^\theta_{\mathrm{F}} \mu \leq s$ for all $s \in (\beta, \theta d)$ and, letting $s \to \beta$ and then $\beta \to \underline{F}_\mu(\theta)$, the result follows.
\end{proof}
\subsection{Self-similar measures}
Combining \cite[Theorem 2]{bisbas} for $\theta \in (0,1)$ and \cite[Theorem 4.4]{stric1} for $\theta=1$ with Theorem \ref{stric3}, we get the following.
\begin{cor} \label{selfsim1}
For $p \in (0,1)$ with $p \neq 1/2$, let $\mu_p$ be the self-similar measure on $[0,1]$ given by the distribution of the random series
\[
\sum_{n=0}^\infty X_n (1/2)^n
\]
where $\mathbb{P}(X_n=0)=p$ and $\mathbb{P}(X_n=1)=(1-p)$. Then
\[
\dim^\theta_{\mathrm{F}} \mu_p \geq \underline{F}_{\mu_p}(\theta) \geq \theta - \theta \log_2(1+|2p-1|^{2/\theta})
\]
for all $\theta \in (0,1)$ and
\[
\dim_{\mathrm{S}} \mu_p = \underline{F}_{\mu_p}(1) = 1 - \log_2(1+|2p-1|^{2}).
\]
In particular, $\dim^\theta_{\mathrm{F}} \mu_p>\theta \dim_{\mathrm{S}} \mu_p$ for all $\theta \in (0,1)$,
\[
\lim_{\theta \to 0} \frac{\dim^\theta_{\mathrm{F}} \mu_p}{\theta} \geq 1
\]
and
\[
\dim_{\mathrm{S}} \mu_p < \dim_{\textup{H}} \mu_p = \frac{p \log p + (1-p)\log (1-p)}{-\log 2}.
\]
\end{cor}
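To illustrate Corollary \ref{selfsim1} with a concrete instance (all numerical values rounded to three decimal places), take $p=3/4$, so that $|2p-1|=1/2$. Then
\[
\dim_{\mathrm{S}} \mu_{3/4} = 1 - \log_2(5/4) \approx 0.678 < 0.811 \approx \frac{\tfrac{3}{4} \log \tfrac{3}{4} + \tfrac{1}{4}\log \tfrac{1}{4}}{-\log 2} = \dim_{\textup{H}} \mu_{3/4},
\]
while at $\theta = 1/2$ the lower bound reads
\[
\dim^{1/2}_{\mathrm{F}} \mu_{3/4} \geq \tfrac{1}{2} - \tfrac{1}{2}\log_2(1+2^{-4}) \approx 0.456 > 0.339 \approx \tfrac{1}{2} \dim_{\mathrm{S}} \mu_{3/4}.
\]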
In the above, if $p=1/2$, then $\mu_p$ is Lebesgue measure restricted to $[0,1]$ and
\[
\dim^\theta_{\mathrm{F}} \mu_p = \dim_{\mathrm{F}} \mu_p = \dim_{\mathrm{S}} \mu_p = 1
\]
for all $\theta \in (0,1)$. We can also apply Strichartz's work \cite{stric1, stric2} to obtain partial information for other self-similar measures. For example, we get non-trivial information about the Fourier dimension spectrum of self-similar measures on the middle third Cantor set. Similar estimates, sometimes for more choices of $\theta$, can also be deduced from \cite{stric1, stric2} for other self-similar measures, but we leave the details to the reader.
\begin{cor} \label{selfsim2}
For $p \in (0,1)$, let $\mu_p$ be the self-similar measure on the middle third Cantor set corresponding to Bernoulli weights $p, (1-p)$. Then
\[
\dim_{\mathrm{F}} \mu_p = \dim_\textup{F}^0 \mu_p = 0,
\]
\[
\dim_\textup{F}^{1/2} \mu_p = \frac{\log (p^4+4p^2(1-p)^2+(1-p)^4)}{-2\log 3}
\]
and
\[
\dim_{\mathrm{S}} \mu_p = \dim_\textup{F}^1 \mu_p = \frac{\log (p^2+(1-p)^2)}{-\log 3} < 2 \dim_\textup{F}^{1/2} \mu_p.
\]
In particular, $\dim^\theta_{\mathrm{F}} \mu_p$ is not a linear function of $\theta$.
\end{cor}
\begin{proof}
The formulae for $ \dim_\textup{F}^{1/2} \mu_p$ and $ \dim_\textup{F}^{1} \mu_p$ come from Theorem \ref{stric3} combined with \cite[Corollary 4.4]{stric1}. The fact that $\dim_{\mathrm{F}} \mu_p = 0$ follows from the well-known and easily proved fact that the Fourier dimension of the middle third Cantor set is 0.
\end{proof}
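For example, for the uniform measure on the middle third Cantor set, that is, $p=1/2$, the formulae in Corollary \ref{selfsim2} evaluate (rounding to three decimal places) to
\[
\dim_\textup{F}^{1/2} \mu_{1/2} = \frac{\log(3/8)}{-2\log 3} \approx 0.446 \qquad \text{and} \qquad \dim_{\mathrm{S}} \mu_{1/2} = \frac{\log 2}{\log 3} \approx 0.631 < 0.893 \approx 2\dim_\textup{F}^{1/2} \mu_{1/2},
\]
making the failure of linearity explicit in this case.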
It would be interesting to investigate the Fourier dimension spectrum of self-similar measures more generally, and also for other dynamically invariant measures, but we do not pursue this here. In particular, there is currently a lot of interest in the Fourier dimension of invariant measures in various contexts, see for example \cite{li, stevens, solomyak}.
\begin{ques}
What is $\dim^\theta_{\mathrm{F}} \mu$ when $\mu$ is a self-similar measure on the middle third Cantor set? What about more general self-similar measures, self-affine measures and other dynamically invariant measures?
\end{ques}
\section{Another example: measures on curves}
To bolster our collection of examples, here we provide a simple family where the Fourier dimension spectrum can be computed explicitly and exhibits some non-trivial behaviour. Let $p >1$ and let $\mu_p$ be the lift of Lebesgue measure on $[0,1]$ to the curve $\{(x,x^p) : x \in [0,1]\} \subseteq \mathbb{R}^2$ via the map $x \mapsto (x,x^p)$.
\begin{thm} \label{xp}
For $p >1$ and $\theta \in [0,1]$,
\[
\dim^\theta_{\mathrm{F}} \mu_p = \min\{ 2/p+\theta (1-1/p), \, 1\}.
\]
\end{thm}
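Before giving the proof, we note two instances of the formula. For the parabola, $p=2$, the spectrum is constant: $\dim^\theta_{\mathrm{F}} \mu_2 = \min\{1+\theta/2, \, 1\} = 1$ for all $\theta \in [0,1]$. For $p=4$, however,
\[
\dim^\theta_{\mathrm{F}} \mu_4 = \min\{ 1/2+3\theta/4, \, 1\},
\]
which exhibits a phase transition at $\theta = 2/3$: the spectrum increases linearly on $[0,2/3]$ and is constant thereafter. In particular, $\dim_{\mathrm{F}} \mu_4 = 1/2$ and $\dim_{\mathrm{S}} \mu_4 = 1$.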
\begin{proof}
Note that
\[
\widehat{\mu_p}(z) = \int_0^1 e^{-2\pi i z \cdot (x, x^p)} \, dx.
\]
Using polar coordinates $z = re^{i \alpha}$,
\[
\mathcal{J}_{s,\theta}(\mu_p)^{1/\theta} =\int_{r=0}^\infty r^{s/\theta-1} \int_{\alpha=0}^{2\pi} \left\lvert \widehat{\mu_p}(z) \right\rvert ^{2/\theta} \, d\alpha \, d r .
\]
We split this integral into three regions, which are handled separately. Define $\varepsilon(p) \in (0,\pi/2)$ by $\tan \varepsilon(p) = 1/(2p)$.
First, for $\alpha$ such that $|\cos \alpha | \geq \varepsilon(p)$, it is a simple consequence of van der Corput's lemma (see \cite[Theorem 14.2]{mattila}) that
\[
\left\lvert \widehat{\mu_p}(z) \right\rvert \lesssim r^{-1/2} = |z|^{-1/2}
\]
and so
\begin{equation} \label{estxp1}
\int_{r=0}^\infty r^{s/\theta-1} \int_{|\cos \alpha | \geq \varepsilon(p)} \left\lvert \widehat{\mu_p}(z) \right\rvert ^{2/\theta} \, d\alpha \, d r < \infty
\end{equation}
for $s<1$.
Second, for $z=re^{i\alpha}$ such that $r \geq 10$ and $r^{-(p-1)/p}\leq |\cos \alpha| \leq \varepsilon(p) $,
\begin{align*}
\left\lvert \widehat{\mu_p}(z) \right\rvert &=\left\lvert \int_0^1 e^{-2\pi i(rx\cos\alpha+ rx^p\sin\alpha)} \, dx\right\rvert = \left\lvert \int_0^1 e^{2\pi i r|\cos\alpha|\phi(x)} \, dx\right\rvert
\end{align*}
for
\[
\phi(x) = - \frac{\cos \alpha}{|\cos \alpha|} x - \frac{\sin \alpha}{|\cos \alpha|}x^p.
\]
Since
\[
|\phi'(x)| \geq 1-p \tan \varepsilon(p) = 1/2,
\]
van der Corput's lemma (see \cite[Theorem 14.2]{mattila}) gives
\[
\left\lvert \widehat{\mu_p}(z) \right\rvert \lesssim \frac{1}{r | \cos \alpha |}.
\]
Therefore,
\begin{align}\label{estxp2}
& \hspace{-10mm} \int_{r=10}^\infty r^{s/\theta-1} \int_{r^{-(p-1)/p}\leq |\cos \alpha| \leq \varepsilon(p)} \left\lvert \widehat{\mu_p}(z) \right\rvert ^{2/\theta} \, d\alpha \, d r \nonumber \\
&\lesssim \int_{r=10}^\infty r^{s/\theta-1-2/\theta} \int_{\alpha=0}^{\arccos(r^{-(p-1)/p})} \left(\frac{1}{|\cos \alpha|}\right)^{2/\theta} \, d\alpha \, d r \nonumber \\
&\lesssim \int_{r=10}^\infty r^{s/\theta-1-2/\theta} r^{\left(\frac{2}{\theta}-1\right)\left(\frac{p-1}{p}\right)}\, d r \nonumber \\
&<\infty
\end{align}
provided $s <2/p+\theta (1-1/p)$.
Finally, for $z=re^{i\alpha}$ such that $r \geq 10$ and $0 \leq |\cos \alpha| \leq r^{-(p-1)/p} $ and using the substitution $y= rx\cos\alpha+ rx^p\sin\alpha$,
\begin{align*}
\left\lvert \widehat{\mu_p}(z) \right\rvert &=\left\lvert \int_0^1 e^{-2\pi i(rx\cos\alpha+ rx^p\sin\alpha)} \, dx\right\rvert \\
& = \left\lvert \int_0^{r(\cos\alpha +\sin\alpha)} \frac{e^{-2\pi i y}}{r\cos\alpha+rp x^{p-1}\sin \alpha} \, dy\right\rvert \\
& \lesssim \left\lvert \int_0^{r|\cos\alpha |^{\frac{p}{p-1}}} \frac{e^{-2\pi i y}}{r\cos\alpha} \, dy \right\rvert + \left\lvert \int_{r|\cos\alpha|^{\frac{p}{p-1}}}^{r(\cos\alpha +\sin\alpha)} \frac{e^{-2\pi i y}}{rp x^{p-1}\sin \alpha} \, dy\right\rvert \\
& \lesssim \frac{ r|\cos\alpha |^{\frac{p}{p-1}}}{r|\cos\alpha|} + \frac{1}{r^{1/p}}\left\lvert \int_{ r|\cos\alpha|^{\frac{p}{p-1}}}^{r(\cos\alpha +\sin\alpha)}e^{-2\pi i y} y^{(1-p)/p} \, dy\right\rvert \\
& \lesssim |\cos\alpha |^{{\frac{1}{p-1}}} + \frac{1}{r^{1/p}} \\
& \lesssim \frac{1}{r^{1/p}}.
\end{align*}
Therefore,
\begin{align}\label{estxp3}
& \hspace{-10mm} \int_{r=10}^\infty r^{s/\theta-1} \int_{0 \leq |\cos \alpha| \leq r^{-(p-1)/p}} \left\lvert \widehat{\mu_p}(z) \right\rvert ^{2/\theta} \, d\alpha \, d r \nonumber \\
&\lesssim \int_{r=10}^\infty r^{s/\theta-1} \int_{\alpha=\arccos(r^{-(p-1)/p})}^{\pi/2} \left( \frac{1}{r^{1/p}} \right)^{2/\theta} \, d\alpha \, d r \nonumber \\
&\lesssim \int_{r=10}^\infty r^{s/\theta-1-\frac{2}{\theta p}} r^{-(p-1)/p} \, d r \nonumber \\
&<\infty
\end{align}
provided $s <2/p+\theta (1-1/p)$. Together, \eqref{estxp1}, \eqref{estxp2} and \eqref{estxp3} establish the desired lower bound noting that the integral over $|z| \leq 10$ is trivially finite.
For the upper bound, let $1 >q>(p-1)/p$. Then, similar to above, for $z$ such that $0 \leq |\cos \alpha| \leq r^{-q} $,
\begin{align*}
\left\lvert \widehat{\mu_p}(z) \right\rvert & = \left\lvert \int_0^{r(\cos\alpha +\sin\alpha)} \frac{e^{-2\pi i y}}{r\cos\alpha+rp x^{p-1}\sin \alpha} \, dy\right\rvert \\
& \geq \left\lvert \int_{r|\cos\alpha|^{1/q}}^{r(\cos\alpha +\sin\alpha)} \frac{e^{-2\pi i y}}{rp x^{p-1}\sin \alpha} \, dy\right\rvert \ - \ \left\lvert \int_0^{r|\cos\alpha |^{1/q}} \frac{e^{-2\pi i y}}{r\cos\alpha} \, dy \right\rvert\\
& \geq \frac{1}{r^{1/p}}\left\lvert \int_{ r|\cos\alpha|^{1/q}}^{r(\cos\alpha +\sin\alpha)}e^{-2\pi i y} y^{(1-p)/p} \, dy\right\rvert \ - \ \frac{ r|\cos\alpha |^{1/q}}{r|\cos\alpha|} \\
& \gtrsim \frac{1}{r^{1/p}} - |\cos\alpha |^{{1/q-1}} \\
& \geq \frac{1}{2 r^{1/p}}
\end{align*}
for all $r \geq r_0$, where $r_0$ is a constant depending only on $p$ and $q$. Therefore,
\begin{align*}
& \hspace{-10mm} \int_{r=0}^\infty r^{s/\theta-1} \int_{0 \leq |\cos \alpha| \leq r^{-(p-1)/p}} \left\lvert \widehat{\mu_p}(z) \right\rvert ^{2/\theta} \, d\alpha \, d r \nonumber \\
&\gtrsim \int_{r=r_0}^\infty r^{s/\theta-1} \int_{\alpha=\arccos(r^{-q})}^{\pi/2} \left( \frac{1}{r^{1/p}} \right)^{2/\theta} \, d\alpha \, d r \nonumber \\
&\gtrsim \int_{r=r_0}^\infty r^{s/\theta-1-\frac{2}{\theta p}} r^{-q} \, d r \nonumber \\
&=\infty
\end{align*}
provided $s \geq 2/p+\theta q$. Combined with the trivial fact that $\dim^\theta_{\mathrm{F}} \mu_p \leq \dim_{\mathrm{S}} \mu_p \leq 1$, this proves the desired upper bound by letting $q \to (p-1)/p$.
\end{proof}
\section{Convolutions and sumsets}
Given sets $X, Y \subseteq \mathbb{R}^d$, the \emph{sumset} of $X$ and $Y$ is
\[
X+Y = \{ x+y : x \in X, y \in Y\} \subseteq \mathbb{R}^d.
\]
Such sets arise naturally in many contexts and a question of particular interest in additive combinatorics is to understand, for example, how the `size' of $X+X$ is related to the `size' of $X$. There is interest in determining conditions which ensure $\dim_{\textup{H}} (X+X) > \dim_{\textup{H}} X$ or perhaps $\dim_{\textup{H}} (kX) \to d$ as $k \to \infty$ where $kX$ is the $k$-fold sumset for integers $k \geq 1$, see for example \cite{linden}. Convolutions are natural measures to consider in this context: if $\mu$ and $\nu$ are measures on $X$ and $Y$, respectively, the convolution $\mu*\nu$ is supported on the sumset $X+Y$. It turns out that the Fourier dimension spectrum precisely characterises when the Sobolev dimension increases under convolution, see Corollary \ref{conv2}, and gives many partial results about the dimensions of sumsets and convolutions more generally.
\subsection{Convolutions}
First we give some general lower bounds for the Fourier dimension spectrum of a convolution. The special case when $d = \theta= \lambda= 1$, $\dim_{\mathrm{S}} \mu = 1$ and $\dim_{\mathrm{F}} \nu>0$ is essentially \cite[Lemma 2.1 (1)]{shmerkin}.
\begin{thm} \label{convolution}
Let $\mu$ and $\nu$ be finite Borel measures on $\mathbb{R}^d$. Then, for all $\theta \in (0,1]$ and $s,t \geq 0$,
\[
\mathcal{J}_{s+t,\theta}(\mu * \nu) \lesssim \inf_{\lambda \in [0,1]} \mathcal{J}_{s,\lambda\theta}(\mu ) \mathcal{J}_{t,(1-\lambda)\theta}(\nu ).
\]
In particular,
\[
\dim^\theta_{\mathrm{F}} (\mu * \nu) \geq \sup_{\lambda \in [0,1]} \left( \dim_\mathrm{F}^{\lambda \theta } \mu + \dim_\mathrm{F}^{(1-\lambda)\theta} \nu \right).
\]
\end{thm}
\begin{proof}
Let $\lambda \in [0,1]$ and $s,t \geq 0$. Let $p=1/\lambda \in [1, \infty]$ and $q=1/(1-\lambda) \in [1,\infty]$ be H\"older conjugates and write $d m_d =\min\{|z|^{-d}, 1\} \, dz$. Then
\begin{align*}
\mathcal{J}_{s+t,\theta}(\mu * \nu)^{1/\theta} & \approx \int_{\mathbb{R}^d} |\widehat{\mu * \nu}(z) |^{2/\theta} |z|^{(s+t)/\theta} \, d m_d(z) \\
& = \int_{\mathbb{R}^d} |\hat{\mu }(z) |^{2/\theta} |z|^{s/\theta} \, |\hat{ \nu}(z) |^{2/\theta} |z|^{t/\theta} \, d m_d(z) \qquad \text{(by convolution formula)} \\
& \leq \left( \int_{\mathbb{R}^d} |\hat{\mu }(z) |^{2p/\theta} |z|^{ps/\theta} \, d m_d(z) \right)^{1/p} \left( \int_{\mathbb{R}^d} |\hat{ \nu}(z) |^{2q/\theta} |z|^{qt/\theta} \, d m_d(z)\right)^{1/q} \\
&\hspace{5cm} \text{(by H\"older's inequality)}\\
&= \mathcal{J}_{s,\lambda\theta}(\mu )^{1/\theta}\mathcal{J}_{t,(1-\lambda)\theta}(\nu )^{1/\theta} .
\end{align*}
This establishes the first claim. It follows that $\dim^\theta_{\mathrm{F}} (\mu * \nu) \geq s+t$ for every $\lambda \in [0,1]$, $s < \dim_\mathrm{F}^{\lambda\theta} \mu$ and $t < \dim_\mathrm{F}^{(1-\lambda)\theta} \nu$, which proves the second claim.
\end{proof}
One is often interested in how dimension grows (or how smoothness increases) under iterated convolution. For integers $k \geq 1$, we write $\mu^{*k}$ for the $k$-fold convolution of $\mu$ and $kX$ for the $k$-fold sumset. The following is a generalisation of the simple fact that $\dim_{\mathrm{F}} (\mu^{*k}) = k \dim_{\mathrm{F}} \mu$.
\begin{lma} \label{conv1}
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$. Then
\[
\dim^\theta_{\mathrm{F}} (\mu^{*k}) = k \dim_\mathrm{F}^{\theta/k} \mu
\]
for all $\theta \in [0,1]$ and all integers $k \geq 1$. In particular,
\[
\dim_{\mathrm{S}} (\mu^{*k}) = k \dim_\mathrm{F}^{1/k} \mu
\]
and, provided $\dim^\theta_{\mathrm{F}} \mu$ is continuous at $\theta=0$,
\[
\dim_{\mathrm{F}} \mu = \lim_{k \to \infty} \frac{\dim_{\mathrm{S}} (\mu^{*k})}{k}.
\]
\end{lma}
\begin{proof}
By the convolution formula $\widehat{\mu^{*k}} = (\hat\mu)^k$,
\[
\int_{\mathbb{R}^d} \lvert\widehat{\mu^{*k}}(z) \rvert^{2/\theta} |z|^{s/\theta-d} \, dz = \int_{\mathbb{R}^d} |\hat{\mu}(z) |^{2/(\theta/k)} |z|^{(s/k)/(\theta/k)-d} \, dz
\]
which proves the result.
\end{proof}
It is not possible to express $\dim^\theta_{\mathrm{F}} (\mu * \nu) $ in terms of the Fourier dimension spectra of $\mu$ and $\nu$ in general if $\mu$ and $\nu$ are distinct. For example, consider $\mu$ and $\nu$ given by 1-dimensional Hausdorff measure restricted to distinct unit line segments in the plane. Then, by Corollary \ref{hyperplane}, $\dim^\theta_{\mathrm{F}} \mu = \dim^\theta_{\mathrm{F}} \nu = \theta$ for all $\theta$. However, if the line segments are not contained in a common line, then the convolution $\mu * \nu$ is 2-dimensional Lebesgue measure restricted to a parallelogram and therefore $\dim^\theta_{\mathrm{F}} (\mu * \nu) = 2$ for all $\theta$. On the other hand, if the line segments are contained in a common line, then $\dim^\theta_{\mathrm{F}} (\mu * \nu) = \theta$ for all $\theta$.
Lemma \ref{conv1} shows that the Fourier dimension of a measure can be expressed in terms of the Sobolev dimension of convolutions of the measure with itself. This may be of use in applications since the Fourier dimension is usually harder to compute than the Sobolev dimension. Recall that continuity of $\dim^\theta_{\mathrm{F}} \mu$ at $\theta=0$ is a very mild assumption and holds, for example, provided $|z|^\alpha \in L^1(\mu)$ for some $\alpha>0$, see Lemma \ref{holder} and Theorem \ref{cty0}.
Lemma \ref{conv1} plus concavity (Theorem \ref{concave}) imply that the Fourier dimension spectrum cannot decrease under convolution. However, we can say much more and in fact the Fourier dimension spectrum necessarily increases unless it has a very restricted form. We also get a precise characterisation of when the Sobolev dimension increases under convolution.
\begin{cor} \label{conv2}
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ and $\theta \in (0,1]$. Then
\[
\dim^\theta_{\mathrm{F}}(\mu * \mu) > \dim^\theta_{\mathrm{F}} \mu
\]
if and only if $\dim_\mathrm{F}^{\lambda \theta} \mu > \lambda \dim^\theta_{\mathrm{F}} \mu$ for some $\lambda \in [0,1)$. In particular,
\[
\dim_{\mathrm{S}}(\mu * \mu) > \dim_{\mathrm{S}} \mu.
\]
if and only if $\dim_\mathrm{F}^{\lambda} \mu > \lambda \dim_{\mathrm{S}} \mu$ for some $\lambda \in [0,1)$.
\end{cor}
\begin{proof}
One direction is Lemma \ref{conv1}. Indeed, if $\dim_\mathrm{F}^{\lambda \theta} \mu = \lambda \dim^\theta_{\mathrm{F}} \mu$ for all $\lambda \in [0,1)$, then $\dim^\theta_{\mathrm{F}} (\mu * \mu) = 2\dim_\mathrm{F}^{\theta/2} \mu = \dim^\theta_{\mathrm{F}} \mu$.
To prove the other direction, let $\lambda \in [0,1)$ be such that $\dim_\mathrm{F}^{\lambda \theta} \mu > \lambda \dim^\theta_{\mathrm{F}} \mu$. By Theorem \ref{concave}, $\dim^\theta_{\mathrm{F}} \mu$ is concave and therefore $ \dim_\mathrm{F}^{\theta/2} \mu> (\dim^\theta_{\mathrm{F}} \mu)/2$. Then, by Lemma \ref{conv1},
\[
\dim^\theta_{\mathrm{F}} (\mu * \mu) = 2\dim_\mathrm{F}^{\theta/2} \mu > \dim^\theta_{\mathrm{F}} \mu.
\]
The special case concerning Sobolev dimension is obtained by setting $\theta=1$.
\end{proof}
The previous result characterises when the Sobolev dimension increases under convolution. In fact, using Lemma \ref{conv1}, the Fourier dimension spectrum also characterises the limiting behaviour of the Sobolev dimension of iterated convolutions.
\begin{cor} \label{iterated}
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ such that $\hat \mu$ is $\alpha$-H\"older. Then
\[
\lim_{k \to \infty} \left( \dim_{\mathrm{S}} (\mu^{*k}) - k \dim_{\mathrm{F}} \mu \right) = D
\]
where $D = \partial_+ \dim^\theta_{\mathrm{F}} \mu \vert_{\theta=0}$ is the right semi-derivative of $\dim^\theta_{\mathrm{F}} \mu$ at $0$. Moreover, by Theorem \ref{cty0},
\[
\dim_{\mathrm{S}} \mu- \dim_{\mathrm{F}} \mu \leq D \leq d\left(1+\frac{\dim_{\mathrm{F}} \mu}{2\alpha}\right).
\]
In particular, if $\dim_{\mathrm{F}} \mu>0$, then $\dim_{\mathrm{S}} (\mu^{*k}) \sim k \dim_{\mathrm{F}} \mu$ and, if $\dim_{\mathrm{F}} \mu=0$, then $\dim_{\mathrm{S}} (\mu^{*k})$ is a bounded monotonic sequence converging to $D$.
\end{cor}
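As an illustration of Corollary \ref{iterated}, consider the Riesz products from Corollary \ref{riesz2} with $0<|a|<1$ and an integer $\lambda \geq 3$. These measures have compact support, so their Fourier transforms are Lipschitz, and $\dim_{\mathrm{F}} \mu_{a,\lambda} = 0$. By Lemma \ref{conv1} and Corollary \ref{riesz2},
\[
\dim_{\mathrm{S}} (\mu_{a,\lambda}^{*k}) = k \dim_\mathrm{F}^{1/k} \mu_{a,\lambda} = 1 - \frac{\log(1+|a|^{2k}/2)}{\log \lambda},
\]
which is a bounded increasing sequence converging to $D=1$, in keeping with the corollary.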
\subsection{Sumsets and iterated sumsets}
Next we give sufficient conditions for the Hausdorff dimension of $X+Y$ to exceed the Hausdorff dimension of $Y$.
\begin{cor} \label{X+Y}
Let $X, Y \subseteq \mathbb{R}^d$ be non-empty sets with $Y$ Borel. If
\[
\lambda \dim_{\textup{H}} Y< \dim_\mathrm{F}^{\lambda} X \leq d- (1-\lambda) \dim_{\textup{H}} Y
\]
for some $\lambda \in [0,1)$, then
\[
\dim_{\textup{H}}(X+Y) \geq \dim_{\textup{H}} Y +(\dim_\mathrm{F}^{\lambda} X-\lambda \dim_{\textup{H}} Y) > \dim_{\textup{H}} Y.
\]
If
\[
\dim_\mathrm{F}^{\lambda} X > d- (1-\lambda) \dim_{\textup{H}} Y
\]
for some $\lambda \in [0,1)$, then $X+Y$ has positive $d$-dimensional Lebesgue measure and if $X$ supports a measure $\mu$ with
\[
\dim_\mathrm{F}^{\lambda} \mu > 2d- (1-\lambda) \dim_{\textup{H}} Y
\]
for some $\lambda \in [0,1)$, then $X+Y$ has non-empty interior.
\end{cor}
\begin{proof}
Let $\varepsilon>0$. By definition of $\dim^\theta_{\mathrm{F}} X$, there exists a finite Borel measure $\mu$ supported on a closed set $X_0 \subseteq X$ with $\dim_\mathrm{F}^{\lambda} \mu \geq \dim_\mathrm{F}^{\lambda} X -\varepsilon$. Moreover, there exists a finite Borel measure $\nu$ supported on a closed set $Y_0 \subseteq Y$ with $\dim_{\mathrm{S}} \nu \geq \dim_{\textup{H}} Y - \varepsilon$ and therefore, by concavity of the Fourier dimension spectrum for measures, $\dim_\mathrm{F}^{\theta} \nu \geq \theta (\dim_{\textup{H}} Y- \varepsilon )$ for all $\theta \in (0,1)$. Then $\mu * \nu $ is supported on $X_0+Y_0 \subseteq X+Y$ and, by Theorem \ref{convolution},
\begin{align*}
\dim_{\mathrm{S}} (\mu * \nu ) &\geq \dim_\mathrm{F}^\lambda \mu + \dim_\mathrm{F}^{(1-\lambda)} \nu \\
& \geq \dim_\mathrm{F}^{\lambda} X -\varepsilon + (1-\lambda)\left(\dim_{\textup{H}} Y-\varepsilon \right) \\
&= \dim_{\textup{H}} Y +(\dim_\mathrm{F}^{\lambda} X-\lambda \dim_{\textup{H}} Y) -\varepsilon(2-\lambda)
\end{align*}
which proves the first two claims upon letting $\varepsilon \to 0$. The final claim (giving non-empty interior) is proved similarly by establishing that $\dim_{\mathrm{S}} (\mu * \nu ) > 2d$. We omit the details.
\end{proof}
If $X$ and $Y$ coincide, then the above result simplifies and we get a succinct sufficient condition for dimension increase under addition.
\begin{cor} \label{sumsetcharacter}
Let $X \subseteq \mathbb{R}^d$ be a Borel set with $\dim_{\textup{H}} X < d$. If $\dim_\mathrm{F}^{\lambda} X > \lambda \dim_{\textup{H}} X$ for some $\lambda \in [0,1)$, then
\[
\dim_{\textup{H}}(X+X) > \dim_{\textup{H}} X.
\]
\end{cor}
We note that $\dim^\theta_{\mathrm{F}} X$ does not characterise precisely when $\dim_{\textup{H}}(X+X) > \dim_{\textup{H}} X$ (compare with Corollary \ref{conv2}). For example, let $X \subseteq \mathbb{R}^2$ be the union of two unit line segments. Then $\dim^\theta_{\mathrm{F}} X = \theta$. However, if the two line segments lie in a common line, then $\dim_{\textup{H}} (X+X) = \dim_{\textup{H}} X = 1$, but if the two line segments do not lie in a common line, then $X+X$ has non-empty interior.
Finally, we consider dimension growth of the iterated sumset $kX$. If $\dim_{\mathrm{F}} X >0$, then $kX$ has non-empty interior for some explicit finite $k$. Therefore, in the following we restrict to sets with $\dim_{\mathrm{F}} X = 0$. The Fourier dimension spectrum gives a lower bound for the dimension growth in terms of the right semi-derivative at $0$.
\begin{cor}
Let $X \subseteq \mathbb{R}^d$ be a non-empty set with $\dim_{\mathrm{F}} X = 0$. Then
\[
\lim_{k \to \infty} \dim_{\textup{H}} (kX) \geq \sup_{\theta \in (0,1)} \frac{\dim^\theta_{\mathrm{F}} X}{\theta} = \partial_+ \dim^\theta_{\mathrm{F}} X \vert_{\theta=0}.
\]
\end{cor}
\begin{proof}
Fix $\theta \in (0,1)$, let $\varepsilon>0$ and let $\mu$ be a finite Borel measure on $X$ with $\dim^\theta_{\mathrm{F}} \mu > \dim^\theta_{\mathrm{F}} X-\theta\varepsilon$. Then, for integers $k \geq 1/\theta$, by Lemma \ref{conv1} and concavity of $\dim^\theta_{\mathrm{F}} \mu$ (Theorem \ref{concave}),
\[
\dim_{\textup{H}} (kX) \geq \dim_{\mathrm{S}} (\mu^{*k}) = k \dim_\mathrm{F}^{1/k} \mu \geq \frac{ \dim_\mathrm{F}^{\theta} \mu}{\theta} > \frac{ \dim^\theta_{\mathrm{F}} X}{\theta} -\varepsilon.
\]
Since this holds for all $\varepsilon>0$ and all $\theta \in (0,1)$ and all sufficiently large $k$, the result follows.
\end{proof}
\subsection{Measures and sets which improve dimension}
Motivated by the well-known problem of Stein to classify measures which are $L^p$-improving, one can ask when a measure $\mu$ simultaneously increases the dimension of all measures $\nu$ from some class under convolution. Theorem \ref{convolution} clearly gives some information about this. We state one version, leaving further analysis to the reader. See \cite{rossi} for a related question. The main interest here is in measures with Fourier dimension 0 since, if $\mu$ has positive Fourier dimension, then it increases the Sobolev dimension of all measures.
\begin{cor} \label{sobolevimproving}
Let $\mu$ be a finite Borel measure on $\mathbb{R}^d$ such that
\[
\sup_{\theta \in (0,1)} \frac{\dim^\theta_{\mathrm{F}} \mu}{\theta} \geq s.
\]
Then $\mu$ is Sobolev improving in the sense that for all finite Borel measures $\nu$ on $\mathbb{R}^d$ with $\dim_{\mathrm{S}} \nu < s$
\[
\dim_{\mathrm{S}} (\mu * \nu) > \dim_{\mathrm{S}} \nu.
\]
\end{cor}
\begin{proof}
Let $\nu$ be a finite Borel measure with $s> t>\dim_{\mathrm{S}} \nu$. By assumption there exists $\theta \in (0,1)$ such that
\[
\dim^\theta_{\mathrm{F}} \mu \geq t \theta.
\]
Then, by Theorem \ref{convolution}, $\dim_{\mathrm{S}} (\mu * \nu) \geq \dim_\mathrm{F}^{ \theta } \mu + \dim_\mathrm{F}^{ 1-\theta} \nu \geq t \theta + (1-\theta) \dim_{\mathrm{S}} \nu > \dim_{\mathrm{S}} \nu.$
\end{proof}
The utility of the previous result is that the Sobolev dimension of $\mu$ itself may be arbitrarily close to 0, that is, much smaller than $s$. Next we state a version for sumsets, which follows immediately from Corollary \ref{X+Y}. Again, the Hausdorff dimension of $X$ can be arbitrarily close to 0.
\begin{cor}
Let $X \subseteq \mathbb{R}^d$ be a non-empty set such that
\[
\sup_{\theta \in (0,1)} \frac{\dim^\theta_{\mathrm{F}} X}{\theta} \geq s.
\]
Then $X$ is Hausdorff improving in the sense that for all non-empty Borel $Y \subseteq \mathbb{R}^d$ with $\dim_{\textup{H}} Y<s$
\[
\dim_{\textup{H}} (X+Y) > \dim_{\textup{H}} Y.
\]
\end{cor}
\section{Distance sets}
Given a set $X \subseteq \mathbb{R}^d$, the associated \emph{distance set} is
\[
D(X) = \{ |x-y| : x,y \in X\}.
\]
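For concreteness, the definition is easy to work out for finite configurations. The following sketch (the helper name `distance_set` is ours, not from the text) lists $D(X)$ for a finite $X \subset \mathbb{R}^d$ by brute force.

```python
import itertools
import math

def distance_set(points, ndigits=9):
    """Brute-force D(X) = {|x - y| : x, y in X} for a finite X in R^d."""
    return sorted({round(math.dist(p, q), ndigits)
                   for p, q in itertools.product(points, repeat=2)})

# Vertices of the unit square: distance 0, the side 1 and the diagonal sqrt(2).
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(distance_set(square))  # [0.0, 1.0, 1.414213562]
```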
Following \cite{distance}, there has been a lot of interest in the so-called `distance set problem', which is to relate the size of $X$ with the size of $D(X)$. For example, it is conjectured that (for $d \geq 2$ and $X$ Borel) $\dim_{\textup{H}} X >d/2$ guarantees $\mathcal{L}^1(D(X)) >0$ and $\dim_{\textup{H}} X \geq d/2$ guarantees $\dim_{\textup{H}} D(X) = 1$. Here $\mathcal{L}^1$ is the 1-dimensional Lebesgue measure on $\mathbb{R}$. Both of these conjectures are open for all $d \geq 2$, despite much recent interest and progress, e.g. \cite{guth, keletishmerkin,shmerkinwang}. For example, the measure version of the conjecture holds for Salem sets, that is, when $\dim_{\mathrm{F}} X = \dim_{\textup{H}} X >d/2$, see \cite[Corollary 15.4]{mattila}, and the dimension version holds when $\dim_{\textup{P}} X = \dim_{\textup{H}} X \geq d/2$ \cite{shmerkinwang}. Here $\dim_{\textup{P}}$ denotes the packing dimension, which is bounded below by the Hausdorff dimension in general. We are able to obtain estimates for the size of distance sets in terms of the Fourier dimension spectrum, including the provision of new families of sets for which the above conjectures hold. Our main result on distance sets is the following. This result will follow from the more general Theorem \ref{maindistance2} given below.
\begin{thm} \label{maindistance}
If $X \subseteq \mathbb{R}^d$ satisfies
\[
\sup_{\theta \in [0,1]} \left( \dim_\textup{F}^\theta X + \dim_\textup{F}^{1-\theta} X \right) > d,
\]
then $\mathcal{L}^1(D(X)) >0$. Otherwise
\[
\dim_{\textup{H}} D(X) \geq 1-d+\sup_{\theta \in [0,1]} \left( \dim_\textup{F}^\theta X + \dim_\textup{F}^{1-\theta} X \right) .
\]
\end{thm}
We note that it is possible for
\[
\sup_{\theta \in [0,1]} \left(\dim_\textup{F}^\theta X + \dim_\textup{F}^{1-\theta} X \right) > d,
\]
to hold for sets with $\dim_{\mathrm{F}} X = 0$ and $\dim_{\textup{H}} X < \dim_{\textup{P}} X$, even with $\dim_{\textup{H}} X$ arbitrarily close to $d/2$.
If we set $\theta = 0$ in Theorem \ref{maindistance} then we find that if $X \subseteq \mathbb{R}^d$ is a Borel set with $ \dim_{\mathrm{F}} X+\dim_{\textup{H}} X > d$, then $\mathcal{L}^1(D(X)) >0$, with a corresponding dimension bound. These results were obtained by Mattila in \cite[Theorem 5.3]{mattiladistance}. Note that this implies the measure (and dimension) version of the original conjecture for Salem sets. Setting $\theta=1/2$ in Theorem \ref{maindistance} yields a pleasant corollary which appears more similar to the original conjecture.
\begin{cor}
Let $X \subseteq \mathbb{R}^d$. If $\dim_\textup{F}^{1/2} X > d/2$, then $\mathcal{L}^1(D(X)) >0$ and, otherwise, $\dim_{\textup{H}} D(X) \geq 1-d+ 2 \dim_\textup{F}^{1/2} X$.
\end{cor}
Since for $\theta \neq 1/2$ different values of $\theta$ are used simultaneously in Theorem \ref{maindistance}, we are led naturally to consider \emph{two} measures supported on $X$, one for $\theta$ and one for $1-\theta$. This leads us to consider mixed distance sets. Given sets $X,Y \subseteq \mathbb{R}^d$, the \emph{mixed distance set} of $X$ and $Y$ is
\[
D(X,Y) = \{ |x-y| : x \in X, y \in Y\}.
\]
Of course $D(X,X)$ recovers $D(X)$, but $D(X,Y)$ is (typically) a strict subset of $D(X \cup Y)$. Theorem \ref{maindistance} follows from the following more general result which considers mixed distance sets and directly uses the energies.
\begin{thm} \label{maindistance2}
Suppose $\mu$ and $\nu$ are finite Borel measures on $\mathbb{R}^d$ with
\[
\mathcal{J}_{s,\theta}(\mu) < \infty
\]
and
\[
\mathcal{J}_{t,1-\theta}(\nu) < \infty
\]
for some $s,t \geq 0$. If $s+t \geq d$, then
\[
\mathcal{L}^1(D(\textup{spt}(\mu), \textup{spt}(\nu))) >0
\]
and if $s+t < d$, then
\[
\dim_{\textup{H}} D(\textup{spt}(\mu), \textup{spt}(\nu)) \geq 1-d+s+t.
\]
\end{thm}
We defer the proof of Theorem \ref{maindistance2} (as well as the simple deduction of Theorem \ref{maindistance}) to the following subsection. Another consequence of Theorem \ref{maindistance2} (choosing $\theta=1/2$) can be stated just in terms of the Fourier transform. Again, the assumption can be satisfied even when $\dim_{\mathrm{F}} \mu = 0$.
\begin{cor} \label{cute}
If $\mu$ is a finite Borel measure on $\mathbb{R}^d$ with
\[
\int |\hat \mu(z) |^4 \, dz <\infty
\]
then $\mathcal{L}^1(D(\textup{spt}(\mu))) >0$.
\end{cor}
\begin{proof}
Since
\[
\mathcal{J}_{d/2,1/2}(\mu) = \int |\hat \mu(z) |^4 \, dz
\]
this is an immediate consequence of Theorem \ref{maindistance2} with $\nu=\mu$.
\end{proof}
The above corollary is sharp in the sense that for all $d \geq 1$ and $\varepsilon \in (0,1)$ there exists a finite Borel measure $\mu$ on $\mathbb{R}^d$ with
\[
\int |\hat \mu(z) |^{4+\varepsilon} \, dz <\infty
\]
but $\dim_{\textup{H}} D(\textup{spt}(\mu)) < 1$. For example, suppose $X \subseteq \mathbb{R}^d $ is a compact Salem set with
\[
\dim_{\textup{H}} D(X) <1
\]
and $\dim_{\textup{H}} X = \dim_{\mathrm{F}} X = d/2-\varepsilon/11$. Such $X$ can be constructed by randomising the construction from \cite[Theorem 2.4]{distance} giving compact sets $E$ with $\dim_{\textup{H}} E$ arbitrarily close to (but smaller than) $d/2$ and $\dim_{\textup{H}} D(E)<1$. Let $\mu$ be a finite Borel measure on $X$ satisfying $\dim_{\mathrm{F}} \mu > d/2-\varepsilon/10$. Then
\[
\int |\hat \mu(z) |^{4+\varepsilon} \, dz \lesssim \int |z|^{-(d/2-\varepsilon/10)(4+\varepsilon)/2} \, dz =\int |z|^{-d-d\varepsilon/4+\varepsilon/5+\varepsilon^2/20} \, dz < \infty.
\]
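The exponent bookkeeping in the last display is easy to mis-copy, so here is a quick numerical sanity check (plain Python; function names ours) of the expansion and of integrability at infinity for the sampled parameters.

```python
def exponent(d, eps):
    # power of |z| in |z|^{-(d/2 - eps/10)(4 + eps)/2}
    return -(d / 2 - eps / 10) * (4 + eps) / 2

def expanded(d, eps):
    # the expanded form -d - d*eps/4 + eps/5 + eps^2/20 from the display
    return -d - d * eps / 4 + eps / 5 + eps ** 2 / 20

for d in (1, 2, 3, 4):
    for eps in (0.1, 0.25, 0.5, 0.75, 0.9):
        assert abs(exponent(d, eps) - expanded(d, eps)) < 1e-12
        assert exponent(d, eps) < -d  # strictly below -d: integrable at infinity
```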
\subsection{Proof of Theorem \ref{maindistance2}}
We first prove the claim in the case when $s+t \geq d$. Write $\nu_0$ for the measure defined by $ \nu_0(E) = \nu(-E)$ for Borel sets $E$ where $-E = \{ -x : x \in E\}$. Then, by Theorem \ref{convolution},
\[
\int |\widehat{\mu * \nu_0}(z)|^2 \, dz \lesssim \mathcal{J}_{s+t,1}(\mu * \nu_0) \lesssim \mathcal{J}_{s, \theta}(\mu ) \mathcal{J}_{t,1 -\theta}(\nu_0 ) = \mathcal{J}_{s, \theta}(\mu ) \mathcal{J}_{t,1 -\theta}(\nu ) < \infty.
\]
It follows that $\widehat{\mu * \nu_0} \in L^2(\mathbb{R}^d)$ and therefore $\mu * \nu_0 \in L^2(\mathbb{R}^d)$ by \cite[Theorem 3.3]{mattila}. Since $\mu * \nu_0$ is supported by the sumset $\textup{spt}(\mu)+ \textup{spt}(\nu_0) = \textup{spt}(\mu)- \textup{spt}(\nu)$, we conclude that the \emph{difference} set $A= \textup{spt}(\mu)- \textup{spt}(\nu)$ has positive $d$-dimensional Lebesgue measure. Therefore
\[
0< \int_{\mathbb{R}^d} \textbf{1}_{A}(z) \, dz = \int_{S^{d-1}} \int_{0}^\infty \textbf{1}_{A}(rv) \, r^{d-1}dr d\sigma_{d-1}(v)
\]
where $\sigma_{d-1}$ is the surface measure on $S^{d-1}$. This implies the existence of (many) $v \in S^{d-1}$ such that $\mathcal{L}^1(\{ r : rv \in A\})>0$. Moreover, for all $v \in S^{d-1}$, $\{ r : rv \in A\} \subseteq D(\textup{spt}(\mu), \textup{spt}(\nu))$, proving the result.
The proof in the sub-critical case $s+t<d$ is morally similar but rather more complicated because we cannot disintegrate $s$-dimensional Hausdorff measure $\mathcal{H}^s$ for $s<d$ since it is not a $\sigma$-finite measure. Fortunately, we can appeal to a deep result of Mattila which connects (quadratic) spherical averages and distance sets, see, for example, \cite[Proposition 15.2 (b)]{mattila}. Following \cite{mattiladistance}, we need a more general statement than \cite[Proposition 15.2 (b)]{mattila} where we associate a `distance measure' to two measures rather than a single measure (repeated twice). Define the mixed quadratic spherical average of $\mu$ and $\nu$ for $r>0$ by
\[
\sigma_{\mu,\nu}(r) = \int_{S^{d-1}} \hat \mu(r v) \overline{\hat \nu(r v)}\, d\sigma_{d-1}(v)
\]
noting that $\sigma_{\mu,\mu}(r) $ is the usual quadratic spherical average from \cite[Section 15.2]{mattila}. The mutual energy of $\mu$ and $\nu$ may then be expressed as
\[
\mathcal{I}_s(\mu,\nu) : = \int \int \frac{d \mu(x) \, d \nu (y)}{|x-y|^s} \approx \int_{\mathbb{R}^d} \hat \mu(z) \overline{\hat \nu(z)} |z|^{s-d} \, dz = \int_{0}^\infty \sigma_{\mu,\nu}(r) r^{s-1} \, dr
\]
for $s\in (0,d)$. Define a `mixed distance measure' $\delta_{\mu, \nu}$ by
\[
\int \phi(x) \, d\delta_{\mu, \nu}(x) = \int \int \phi(|x-y|) \, d\mu(x) \, d \nu(y)
\]
for continuous functions $\phi$ on $\mathbb{R}$. This recovers the distance measure $\delta(\mu)$ defined in \cite{mattila} for $\mu=\nu$. Then $\delta_{\mu, \nu}$ is a finite Borel measure supported on the mixed distance set $D(\textup{spt}(\mu), \textup{spt}(\nu))$. Finally, following \cite{mattila}, define the weighted mixed distance measure $\Delta_{\mu, \nu}$ by
\[
\int \phi(x) \, d\Delta_{\mu, \nu}(x) = \int u^{(1-d)/2} \phi(u) \, d\delta_{\mu, \nu}(u)
\]
for continuous functions $\phi$ on $\mathbb{R}$ and the weighted mixed quadratic spherical average of $\mu$ and $\nu$ for $r>0$ by
\[
\Sigma_{\mu, \nu}(r) = r^{(d-1)/2} \sigma_{\mu,\nu}(r) .
\]
The following is the key technical tool in proving Theorem \ref{maindistance2} and recovers \cite[Proposition 15.2 (b)]{mattila} when $\mu=\nu$. One can find the details in \cite{mattiladistance} where mixed spherical averages are considered explicitly, but one can also follow the proof of \cite[Proposition 15.2]{mattila}. The proof follows verbatim as in \cite[Proposition 15.2]{mattila} replacing $\sigma(\mu)$, $\delta(\mu)$, $\Delta(\mu)$ and $\Sigma(\mu)$ by $\sigma_{\mu,\nu}$, $\delta_{\mu, \nu}$, $\Delta_{\mu, \nu}$, and $\Sigma_{\mu, \nu}$, respectively, and remembering to mollify both $\mu$ and $\nu$ to smooth measures $f^\mu_\varepsilon$ and $f^\nu_\varepsilon$. We leave the details to the reader.
\begin{prop} \label{technical}
Suppose $\mu$ and $\nu$ are finite Borel measures on $\mathbb{R}^d$ and $\beta \geq 0$ is such that $\mathcal{I}_\beta(\mu,\nu) < \infty$. If $\beta>(d+\alpha - 1)/2$ and
\[
\int_1^\infty \sigma_{\mu,\nu}(r) ^2 r^{d+\alpha-2} \, dr < \infty,
\]
for some $\alpha \in (0,1)$, then $\mathcal{I}_\alpha(\Delta_{\mu,\nu}) < \infty$.
\end{prop}
We are ready to prove the sub-critical bound from Theorem \ref{maindistance2}. We aim to apply Proposition \ref{technical} with $\beta = (s+t)/2$ where $s$ and $t$ are from the statement of Theorem \ref{maindistance2}. First, note that
\begin{align*}
\mathcal{I}_{(s+t)/2}(\mu,\nu) &\approx \int_{\mathbb{R}^d} \hat \mu(z) \overline{\hat \nu(z)} |z|^{(s+t)/2-d} \, dz\\
& \leq \left(\int_{\mathbb{R}^d} |\hat \mu(z)|^{2/\theta} |z|^{s/\theta-d}\, dz\right)^{\theta/2} \left( \int_{\mathbb{R}^d} |\hat \nu(z)|^{2/(1-\theta)}|z|^{t/(1-\theta)-d} \, dz\right)^{(1-\theta)/2}\\
& \hspace{2cm}\text{(H\"older's inequality integrating against $|z|^{-d} \, dz$)}\\
&= \mathcal{J}_{s,\theta}(\mu)^{\theta/2} \mathcal{J}_{t,1-\theta}(\nu)^{(1-\theta)/2} \\
&<\infty.
\end{align*}
Choose $\alpha \in (0,1)$ such that $(s+t)/2>(d+\alpha - 1)/2$. Then
\begin{align*}
& \hspace{-0.5cm} \int_1^\infty \sigma_{\mu,\nu}(r) ^2 r^{d+\alpha-2} \, dr \leq \int_1^\infty \left( \int_{S^{d-1}} |\hat \mu(r v)| |\hat \nu(r v)|\, d\sigma_{d-1}(v)\right)^2 r^{s+t-1} \, dr \\
&\lesssim \int_1^\infty \int_{S^{d-1}} |\hat \mu(r v)|^2 |\hat \nu(r v)|^2 r^{s+t-1} \, d\sigma_{d-1}(v)\, dr \qquad \text{(Jensen's inequality)} \\
&= \int_1^\infty \int_{S^{d-1}} |\hat \mu(r v)|^2 r^{s-\theta} |\hat \nu(r v)|^2 r^{t+\theta-1} \, d\sigma_{d-1}(v)\, dr \\
&\leq \left(\int_1^\infty \int_{S^{d-1}} |\hat \mu(r v)|^{2/\theta} r^{s/\theta-1} \, d\sigma_{d-1}(v)\, dr \right)^\theta \\
&\, \hspace{1cm} \cdot \left( \int_1^\infty \int_{S^{d-1}} |\hat \nu(r v)|^{2/(1-\theta)} r^{t/(1-\theta) - 1} \, d\sigma_{d-1}(v)\, dr \right)^{1-\theta} \qquad \text{(H\"older's inequality)}\\
&\leq \left(\int_{\mathbb{R}^d} |\hat \mu(z) |^{2/\theta} |z|^{s/\theta-d} \, dz\right)^{\theta} \left(\int_{\mathbb{R}^d} |\hat \nu(z) |^{2/(1-\theta)} |z|^{t/(1-\theta)-d} \, dz\right)^{1-\theta}\\
&= \mathcal{J}_{s,\theta}(\mu) \mathcal{J}_{t,1-\theta}(\nu) \\
& < \infty.
\end{align*}
Apply Proposition \ref{technical} to deduce that
\[
\dim_{\textup{H}} D(\textup{spt}(\mu), \textup{spt}(\nu)) \geq \dim_{\textup{H}} \textup{spt}(\Delta_{\mu, \nu} ) \geq \alpha
\]
and then letting $\alpha \to 1-d+s+t$ proves the result.
Finally, Theorem \ref{maindistance} follows easily from Theorem \ref{maindistance2}. Since $\dim_\textup{F}^\theta X$ is continuous in $\theta \in [0,1]$ by Theorem \ref{ctyX}, we may choose $\theta \in [0,1]$ such that
\[
\dim_\textup{F}^\theta X + \dim_\textup{F}^{1-\theta} X
\]
attains its supremum. Then, by definition, for all $s< \dim_\textup{F}^\theta X $ and $t<\dim_\textup{F}^{1-\theta} X $, there exist finite Borel measures $\mu$ and $\nu$ with supports contained in $X$ such that
\[
\mathcal{J}_{s,\theta}(\mu) < \infty
\]
and
\[
\mathcal{J}_{t,1-\theta}(\nu) < \infty.
\]
The result then follows by applying Theorem \ref{maindistance2}.
\section{Random sets and measures}
There are many interesting connections between Fourier analysis and random processes, see \cite{kahane} especially. In part as an invitation to further investigation along these lines, we close by presenting two applications of the Fourier dimension spectrum to problems concerning random sets and measures.
There is a well-known connection between Fourier decay of random measures and decay of the moments $\mathbb{E}(|\hat \mu(z)|^{k})$, see \cite{kahane}. In particular, a method of Kahane allows one to transfer quantitative information about the moments to an almost sure lower bound on the Fourier dimension. This technique has been used to show that many random sets are almost surely Salem, including fractional Brownian images. Kahane's approach was formalised in a general setting by Ekstr\"om \cite[Lemma 6]{ekstrom}. We recover and generalise Ekstr\"om's lemma with a very simple proof.
\begin{lma} \label{moment}
Suppose $\mu$ is a random finite Borel measure on $\mathbb{R}^d$ such that for $\theta \in (0,1]$ and all $z \in \mathbb{R}^d$
\[
\mathbb{E}(|\hat \mu(z)|^{2/\theta}) \lesssim |z|^{-s/\theta}.
\]
Then $\dim^\theta_{\mathrm{F}} \mu \geq s$ almost surely. In particular, if $\hat \mu$ is almost surely H\"older and, for a sequence of $k \in \mathbb{N}$ tending to infinity, for all $z \in \mathbb{R}^d$
\[
\mathbb{E}(|\hat \mu(z)|^{2k}) \lesssim |z|^{-sk},
\]
then $\dim_{\mathrm{F}} \mu \geq s$ almost surely. Further, if $\hat \mu$ is $\alpha$-H\"older almost surely and there exist $\varepsilon>0$ and $\theta \in (0,1)$ such that
\[
\mathbb{E}(|\hat \mu(z)|^{2/\theta}) \lesssim |z|^{-(d+\varepsilon)}
\]
then
\[
\dim_{\mathrm{F}} \mu \geq \frac{2 \alpha \varepsilon \theta}{2 \alpha+ d\theta }>0
\]
almost surely.
\end{lma}
\begin{proof}
Let $t < s$. By Fubini's theorem
\begin{align*}
\mathbb{E} ( \mathcal{J}_{t,\theta}(\mu)) \leq \int_{\mathbb{R}^d} |z|^{t/\theta-d} \mathbb{E}(|\hat \mu(z)|^{2/\theta}) \, dz \lesssim \int_{\mathbb{R}^d} |z|^{(t-s)/\theta-d} \, dz < \infty
\end{align*}
which proves the first claim. The second claim then follows since $\hat \mu$ is almost surely continuous and the third claim follows by combining the first claim with the bounds from Theorem \ref{cty0}.
\end{proof}
To bound the Fourier dimension from below using Lemma \ref{moment}, we only require control of an unbounded sequence of moments or a sufficiently good bound for a single moment, whereas \cite[Lemma 6]{ekstrom} (and \cite[Lemma 1, page 252]{kahane}, on which its proof relies) requires control of all the moments simultaneously. Moreover, \cite[Lemma 6]{ekstrom} requires the measures to be almost surely supported in a fixed compact set, whereas our random measures can be unbounded as long as $\hat \mu$ is H\"older almost surely (with a random H\"older exponent). For both of these improvements, our gain comes from continuity of the Fourier dimension spectrum. Among other things, Ekstr\"om uses the above approach to construct random images of compactly supported measures with large Fourier dimension almost surely \cite[Theorem 2]{ekstrom}. Using Lemma \ref{moment} this can be extended to include non-compactly supported measures. We leave the details to the reader.
For our second application we consider an explicit random model based on fractional Brownian motion in a setting where the Fourier dimension alone yields only trivial results. Given $\alpha \in (0,1)$ and integers $n,d \geq 1$, let $B^\alpha: \mathbb{R}^n \to \mathbb{R}^d$ be index $\alpha$ fractional Brownian motion. See \cite[Chapter 18]{kahane} for the construction and a detailed analysis of this process ($B^\alpha$ is Kahane's $(n,d,\gamma)$ Gaussian process with $\gamma=2\alpha$). In particular, \cite[page 267, Theorem 1]{kahane} gives that for a compact set $Y \subseteq \mathbb{R}^n$ with $\dim_{\textup{H}} Y > s$ the image $B^\alpha(Y)$ almost surely supports a measure $\mu$ satisfying
\begin{equation} \label{kahaneeq}
\dim_{\mathrm{S}} \mu \geq \dim_{\mathrm{F}} \mu \geq s/\alpha.
\end{equation}
Combining this with Theorem \ref{convolution} immediately gives the following.
\begin{cor} \label{randomeasy}
Let $X \subseteq \mathbb{R}^d$ and $Y \subseteq \mathbb{R}^n$ be non-empty Borel sets. If $\dim_{\textup{H}} Y >0$ and
\[
\alpha < \frac{\dim_{\textup{H}} Y}{d-\dim_{\textup{H}} X},
\]
then $X+ B^\alpha(Y)$ has positive $d$-dimensional Lebesgue measure almost surely. Otherwise,
\[
\dim_{\textup{H}} (X+ B^\alpha(Y)) \geq \dim_{\textup{H}} X + \frac{\dim_{\textup{H}} Y}{\alpha}
\]
almost surely.
\end{cor}
Crucial to the above result is that the image $B^\alpha(Y)$ is a Salem set almost surely. We are interested in the following variant where this is no longer the case. Let $V$ be a $k$-dimensional subspace of $\mathbb{R}^d$ for integers $1 \leq k < d$ and $X\subseteq \mathbb{R}^d$ and $Y \subseteq \mathbb{R}^n$ be non-empty Borel sets. Let $B^\alpha: \mathbb{R}^n \to V$ be index $\alpha$ fractional Brownian motion with $V$ identified with $\mathbb{R}^k$. We are interested in almost sure lower bounds for the dimension of $X+ B^\alpha(Y)$. However, this time $B^\alpha(Y)$ has Fourier dimension 0 and so non-trivial estimates are not possible using only the Fourier dimension. If $\dim_{\textup{H}} X \geq k$, then we cannot improve on the trivial estimate $\dim_{\textup{H}} (X+ B^\alpha(Y)) \geq \dim_{\textup{H}} X$ since, for example, $X$ could be a subset of $V$ with dimension $k$. However, if $\dim_{\textup{H}} X < k$, then we can use the Fourier dimension spectrum to derive non-trivial bounds. Moreover, no matter what values of $\alpha$, $\dim_{\textup{H}} X$ and $\dim_{\textup{H}} Y$ we assume, we can never conclude that $X+ B^\alpha(Y)$ has positive $d$-dimensional Lebesgue measure in general. For example, consider $X \subseteq V \times F$ for a set $F\subseteq V^\perp$ with zero $(d-k)$-dimensional Lebesgue measure. In this case $X+ B^\alpha(Y) \subseteq V \times F$ has zero $d$-dimensional Lebesgue measure.
\begin{cor} \label{randomsum}
Let $V$ be a $k$-dimensional subspace of $\mathbb{R}^d$ for integers $1 \leq k < d$ and $X\subseteq \mathbb{R}^d$ and $Y \subseteq \mathbb{R}^n$ be non-empty Borel sets. Let $B^\alpha: \mathbb{R}^n \to V$ be index $\alpha$ fractional Brownian motion with $V$ identified with $\mathbb{R}^k$. If $\dim_{\textup{H}} X < k$, then almost surely
\[
\dim_{\textup{H}} (X+ B^\alpha(Y)) \geq \min\left\{ \dim_{\textup{H}} X + \frac{\dim_{\textup{H}} Y}{\alpha}-\frac{\dim_{\textup{H}} X \dim_{\textup{H}} Y}{k\alpha} , k\right\}.
\]
\end{cor}
\begin{proof}
By Theorem \ref{convolution}, the definition of the Fourier dimension spectrum for sets, \eqref{kahaneeq} and Theorem \ref{embedding},
\begin{align*}
\dim_{\textup{H}} (X+ B^\alpha(Y)) &\geq \min\left\{ \sup_{\theta \in (0,1)} \left(\dim_\textup{F}^{1-\theta} X + \dim^\theta_{\mathrm{F}} B^\alpha(Y) \right), \ d \right\} \\
&\geq \sup_{\theta \in (0,1)} \left( (1-\theta) \dim_{\textup{H}} X + \min\left\{ k \theta , \frac{\dim_{\textup{H}} Y}{\alpha}\right\} \right) \\
&= \min\left\{ \dim_{\textup{H}} X + \frac{\dim_{\textup{H}} Y}{\alpha}-\frac{\dim_{\textup{H}} X \dim_{\textup{H}} Y}{k\alpha} , k\right\}
\end{align*}
as required.
\end{proof}
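The final equality in the proof is an elementary optimisation over $\theta$. As a sanity check, a crude grid search (helper names ours) agrees with the closed form; here $x$ stands for $\dim_{\textup{H}} X$ and $y$ for $\dim_{\textup{H}} Y/\alpha$.

```python
def sup_value(x, y, k, steps=100_000):
    # numerically maximise (1 - t) * x + min(k * t, y) over t in (0, 1)
    best = 0.0
    for i in range(1, steps):
        t = i / steps
        best = max(best, (1 - t) * x + min(k * t, y))
    return best

def closed_form(x, y, k):
    # the claimed supremum: min{x + y - x*y/k, k}
    return min(x + y - x * y / k, k)

for x, y, k in [(0.5, 0.7, 1), (1.2, 0.3, 2), (0.9, 3.0, 2)]:
    assert abs(sup_value(x, y, k) - closed_form(x, y, k)) < 1e-3
```

The supremum is attained at $k\theta = y$ when $y \leq k$, and at $\theta \to 1$ (giving the value $k$) when $y \geq k$.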
In the above proof, we were forced to make the `worst case' estimate $\dim_\textup{F}^{1-\theta} X \geq (1-\theta) \dim_{\textup{H}} X$ since we made no assumptions about $X$. Clear improvements are possible if we make assumptions about the Fourier dimension spectrum of $X$ (for example, if $X$ is Salem) but we leave the details to the reader. We do not know if the almost sure lower bounds given above are the best possible in general.
\begin{ques}
Is the almost sure lower bound from Corollary \ref{randomsum} sharp?
\end{ques}
The bounds are sharp for $\alpha \leq \dim_{\textup{H}} Y/ k$ as the following example will show. Let $X = E \times F$ where $E \subseteq V$ and $F \subseteq V^\perp$ with $\dim_{\textup{H}} F =0$. It is possible to arrange for $\dim_{\textup{H}} X = \dim_{\textup{P}} E$ and for this to assume any value in the interval $[0,k]$. Here $\dim_{\textup{P}}$ is again the packing dimension. Let $Y \subseteq \mathbb{R}^n$ satisfy $\dim_{\textup{H}} Y = \dim_{\textup{P}} Y$. Then $X+B^\alpha(Y) = (E+B^\alpha(Y)) \times F$ and (for all realisations of $B^\alpha$)
\begin{align*}
\dim_{\textup{H}} (X+B^\alpha(Y)) \leq \dim_{\textup{P}} (E+B^\alpha(Y)) + \dim_{\textup{H}} F &= \dim_{\textup{P}} (E+B^\alpha(Y)) \\
&\leq \min\left\{ \dim_{\textup{P}} E+\frac{\dim_{\textup{P}} Y}{\alpha}, \, k\right\} \\
&= \min\left\{ \dim_{\textup{H}} X+\frac{\dim_{\textup{H}} Y}{\alpha}, \, k\right\}
\end{align*}
which coincides with the general almost sure lower bound for $\alpha \leq \dim_{\textup{H}} Y / k$.
One might hope that this problem could be reduced to the simpler setting of Corollary \ref{randomeasy} by decomposing $X+ B^\alpha(Y)$ into slices parallel to $V$. We have some doubt over this approach due to the following example. Consider the case where $X$ is a `graph' in the sense that $X \subseteq V \times F$ with the property that $ X \cap (V+x)$ is a single point for all $x \in F$. Despite the minimal fibre structure, there is no restriction on $\dim_{\textup{H}} X$ and it can take any value in the interval $[\dim_{\textup{H}} F,\dim_{\textup{H}} F+ k]$. Then $(X+B^\alpha(Y)) \cap (V+x)$ is a translation of $B^\alpha(Y)$ for all $x \in F$ and estimating the dimension of $X+B^\alpha(Y)$ using standard slicing methods, e.g. Marstrand's slice theorem, see \cite{mattila, falconer}, cannot do better than
\[
\dim_{\textup{H}} (X+ B^\alpha(Y)) \geq \min\left\{ \frac{\dim_{\textup{H}} Y}{\alpha}, \, k\right\} + \dim_{\textup{H}} F
\]
almost surely. For $\alpha > \dim_{\textup{H}} Y/k$ this estimate can be much poorer than the estimate from Corollary \ref{randomsum} and can even be worse than the trivial lower bound $\dim_{\textup{H}} X$.
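To see the gap quantitatively, one can compare the slicing estimate with the bound from Corollary \ref{randomsum} in the graph example (a plain numerical sketch; here $f$ stands for $\dim_{\textup{H}} F$, $x$ for $\dim_{\textup{H}} X$, $y$ for $\dim_{\textup{H}} Y/\alpha$, and the function names are ours).

```python
def randomsum_bound(x, y, k):
    # Corollary bound: min{x + y - x*y/k, k}, with y = dim_H Y / alpha
    return min(x + y - x * y / k, k)

def slicing_bound(f, y, k):
    # slicing estimate: min{y, k} + f, with f = dim_H F
    return min(y, k) + f

# graph example with f = 0 and x close to k, in the regime y < k
x, f, k = 0.9, 0.0, 1
for y in (0.2, 0.5, 0.8):
    assert randomsum_bound(x, y, k) > slicing_bound(f, y, k)
    assert slicing_bound(f, y, k) < x  # below even the trivial bound dim_H X
```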
\section*{Acknowledgements}
I am grateful to Pertti Mattila, Amlan Banaji and Natalia Jurga for helpful comments.
% arXiv:2210.07019 --- ``The Fourier dimension spectrum and sumset type problems''
% arXiv:1309.6710 --- ``On the number of spanning trees in random regular graphs''
\section{Introduction}\label{s:intro}
In this paper, $d$ denotes a fixed integer which is at least 2 (and usually
at least 3). All asymptotics are taken as $n\to\infty$, with $n$ restricted to
even integers when $d$ is odd.
The number of spanning trees in a graph, also called the \emph{complexity}
of the graph,
is of interest for a number of reasons. The complexity of a graph
is an evaluation of
the Tutte polynomial (see for example~\cite{welsh}).
The Merino-Welsh conjecture~\cite{MW} relates
the complexity of a graph with two other graph
parameters, namely, the number of acyclic orientations and the number of
totally cyclic orientations of a graph.
(Noble and Royle~\cite{NR} recently proved that the Merino-Welsh conjecture
is true for series-parallel graphs.) The complexity of a graph also plays
a role in the theory of electrical networks (see for example~\cite{myers}).
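As a concrete handle on the complexity, Kirchhoff's matrix-tree theorem computes it as any cofactor of the graph Laplacian. The sketch below (exact arithmetic via `fractions`; the function name is ours) does this for small graphs.

```python
from fractions import Fraction

def spanning_tree_count(n, edges):
    """Matrix-tree theorem: t(G) = det of the Laplacian with row/column 0 deleted."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    M = [row[1:] for row in L[1:]]          # delete row and column 0
    m, det = n - 1, Fraction(1)
    for k in range(m):                       # Gaussian elimination, exact pivots
        piv = next((r for r in range(k, m) if M[r][k] != 0), None)
        if piv is None:
            return 0                         # singular minor: G is disconnected
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            det = -det
        det *= M[k][k]
        for r in range(k + 1, m):
            f = M[r][k] / M[k][k]
            for c in range(k, m):
                M[r][c] -= f * M[k][c]
    return int(det)

# K_4, the complete (3-regular) graph on 4 vertices: Cayley gives 4^{4-2} = 16.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(spanning_tree_count(4, k4))  # 16
```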
We are interested in the number of spanning trees in random regular graphs.
The first significant result in this area is due to McKay~\cite{McKay-0},
who proved that for $d\ge3$, the
$n$th root of the number of spanning trees of a random $d$-regular
graph with $n$ vertices converges to
\begin{equation}
\frac{\left(d-1\right)^{d-1}}{\left(d^{2}-2d\right)^{d/2-1}}\label{eq:mckay}
\end{equation}
as $n\to\infty$, with probability one. An alternative proof of this
was later given
by Lyons~\cite[Example 3.16]{Lyons}.
McKay~\cite[Theorem 4.2]{McKay} gave an asymptotic expression for the
expected
number of spanning trees in a random graph with specified degrees,
up to some unknown constant. His result holds when the maximum degree
is bounded
and the average degree is bounded away from 2 (independently of $n$).
When specialised to regular degree sequences,~\cite[Theorem 4.2]{McKay}
states that the
expected number of spanning trees in $\mathcal{G}_{n,d}$ is asymptotic to
\begin{equation}
\frac{c_{d}}{n}\left(\frac{\left(d-1\right)^{d-1}}{\left(d^{2}-2d\right)^{d/2-1}}\right)^{n},\label{eq:cd}
\end{equation}
for some unknown constant $c_{d}$.
Other work on asymptotics for the number of spanning trees has focussed on
circulant graphs, grid graphs and tori (see for example~\cite{GYZ} and the
references therein).
Our first result, Theorem~\ref{thm:expectation}, provides the value of the
constant $c_{d}$ from (\ref{eq:cd}), proving that
\[
c_{d}=
\exp\left(\frac{6d^{2}-14d+7}{4\left(d-1\right)^{2}}\right)\,
\frac{\left(d-1\right)^{1/2}}{\left(d-2\right)^{3/2}}.
\]
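For orientation, both constants are easy to evaluate numerically; the snippet below (function names ours) computes the exponential growth rate and $c_d$ for small $d$.

```python
import math

def growth_constant(d):
    # (d-1)^(d-1) / (d^2 - 2d)^(d/2 - 1): the a.s. limit of the n-th root
    return (d - 1) ** (d - 1) / (d * d - 2 * d) ** (d / 2 - 1)

def c_d(d):
    # the constant identified above
    return (math.exp((6 * d * d - 14 * d + 7) / (4 * (d - 1) ** 2))
            * math.sqrt(d - 1) / (d - 2) ** 1.5)

print(growth_constant(3))  # 4/sqrt(3) ~ 2.3094
print(c_d(3))              # exp(19/16) * sqrt(2) ~ 4.637
```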
For our second result we investigate
the distribution of the number of spanning trees in random $d$-regular
graphs using the small subgraph conditioning method, and obtain
the asymptotic distribution in the case of cubic graphs, presented in
Theorem~\ref{thm:distribution-3}. We provide partial calculations for
arbitrary fixed degrees, which lead us to conjecture that the corresponding
result holds in general (see Conjecture~\ref{cnj:distribution}).
In order to precisely state our main results we must introduce some notation
and terminology.
\subsection{Notation and our main results}\label{ss:notation}
Let $\mathbb{N}$ denote the natural numbers (which includes 0).
For integers $n$, $k$ let $(n)_{k}$ denote the falling factorial
$n(n-1)\cdots (n-k+1)$. Square brackets without subscripts denote
extraction of coefficients of a generating function.
We use $\boldsymbol{1}\left(\cdot \right)$ to denote both the indicator variable of
an event and the characteristic function of a set (the particular set will
appear as a subscript). We use standard asymptotic notation throughout,
with the exception that $\rightsquigarrow$ indicates convergence in
distribution of a sequence of random variables.
(We use this notation rather than $\stackrel{d}{\rightarrow}$
to avoid overloading the symbol ``$d$'',
which we use for the degree of the graph.)
Let $\mathcal{G}_{n,d}$ denote the uniform model of $d$-regular simple graphs
on the vertex set $\{ 1,\ldots, n\}$.
Define the random variable $Y_{\mathcal{G}}$ to be
the number of spanning trees in a random $G\in\mathcal{G}_{n,d}$.
Clearly $Y_{\mathcal{G}}$ is identically zero if $n\geq 3$ and
$d < 2$.
A 2-regular graph has a spanning tree if and only if
it is connected (that is, forms a Hamilton cycle), in which case it has exactly $n$ spanning
trees. Hence the distribution of $Y_{\mathcal{G}}$
can be inferred from~\cite[Equation (11)]{Wormald}.
For the remainder of the paper we assume that $d\geq 3$.
Our first result gives an asymptotic expression for the expectation
of $Y_{\mathcal{G}}$.
\begin{theorem}
\label{thm:expectation}
Let $d\geq 3$ be a fixed integer. Then
\[
\mathbb{E} Y_{\mathcal{G}} \sim
\exp\left(\frac{6d^{2}-14d+7}{4\left(d-1\right)^{2}}\right)\,
\frac{\left(d-1\right)^{1/2}}{n\left(d-2\right)^{3/2}}\,
\left(\frac{\left(d-1\right)^{d-1}}{\left(d^{2}-2d\right)^{d/2-1}}
\right)^{n}\ .
\]
\end{theorem}
This theorem is proved at the end of Section~\ref{sec:E_YX}.
Next, for fixed $d\geq 3$ and for each positive integer $j$, define
\begin{equation}
\lambda_{j}\left(d\right) = \frac{\left(d-1\right)^{j}}{2j},\qquad
\zeta_{j}\left(d\right) =
-\frac{2\left(d-1\right)^{j}-1}{\left(d-1\right)^{2j}}.
\label{eq:janson-parameters}
\end{equation}
Our second theorem gives the asymptotic distribution of the number of
spanning trees in the case of cubic graphs.
\begin{theorem}
\label{thm:distribution-3}
Let $Z_{j}\sim\mathrm{Poisson}\left(\lambda_{j}\left(3\right)\right)$,
with each $Z_{j}$ independent. Consider the number of spanning trees
in a random cubic graph, normalized by the expectation given in Theorem
\ref{thm:expectation} for $d=3$. The asymptotic distribution of
this quantity is given by
\[
\frac{Y_\mathcal{G}}{\mathbb{E} Y_{\mathcal{G}}} \,\,
\rightsquigarrow\,\,
\prod_{j=3}^{\infty}\, \left(1+\zeta_{j}\left(3\right)\right)^{Z_{j}}
\, e^{-\lambda_{j}\left(3\right)\zeta_{j}\left(3\right)}.
\]
\end{theorem}
This theorem is proved in Section~\ref{sec:EY2}.
We conjecture that an analogous result holds for arbitrary (fixed) $d \geq 3$.
\begin{conjecture}
\label{cnj:distribution}
Let $d\geq 3$ be fixed. Then
\[
\frac{Y_\mathcal{G}}{\mathbb{E} Y_{\mathcal{G}}} \,\,
\rightsquigarrow
\,\, \prod_{j=3}^{\infty}\,
\left(1+\zeta_{j}\left(d\right)\right)^{Z_{j}}\,
e^{-\lambda_{j}\left(d\right)\zeta_{j}\left(d\right)},
\]
where $Z_{j}\sim\mathrm{Poisson}\left(\lambda_{j}\left(d\right)\right)$
and each $Z_{j}$ is independent.
\end{conjecture}
We present numerical evidence which supports this conjecture in
Section~\ref{ss:numerical}.
\subsection{Plan of attack}\label{ss:plan}
From now on, we omit explicit mention
of $d$ in the constants $\zeta_{j}=\zeta_{j}\left(d\right)$ and
$\lambda_{j}=\lambda_{j}\left(d\right)$ from (\ref{eq:janson-parameters}).
As is standard in this area, most of our calculations
will be performed in the uniform probability space $\mathcal{P}_{n,\k}$ of
\textit{pairings} (also called the
\textit{configuration model}~\cite{Bollobas,McKay-pairings,Wormald}).
Let $d$ and $n$ be positive integers such that $d n$ is even.
Consider a set of $d n$ \emph{prevertices} distributed evenly into
$n$ sets, called
\emph{buckets}.
(We prefer the terminology ``prevertices'' to ``points''.)
A \emph{pairing} is a partition of the prevertices into
$d n/2$ sets of size 2, called \emph{pairs}.
Then, writing $\#P\left(M\right)$ for the number of perfect matchings
of a set of $M$ points, we have
\begin{equation}
|\mathcal{P}_{n,\k}| = \#P\left(d n\right) = \frac{(d n)!}{(d n/2)!\, 2^{d n/2}}
\sim\sqrt{2}\left(\frac{d n}{e}\right)^{d n/2},
\label{eq:num-pairings-asymptotic}
\end{equation}
using Stirling's formula.
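Equivalently, $\#P\left(M\right)=\left(M-1\right)!!$ for even $M$. The closed form can be checked against direct enumeration for small $M$; a Python sketch (our own verification aid, with function names chosen for illustration):

```python
import math

def num_pairings(M):
    """Closed form #P(M) = M! / ((M/2)! 2^{M/2}) for even M."""
    return math.factorial(M) // (math.factorial(M // 2) * 2**(M // 2))

def count_pairings(points):
    """Count perfect matchings of a list of points by direct recursion:
    match the first point with each remaining point in turn."""
    if not points:
        return 1
    rest = points[1:]
    return sum(count_pairings(rest[:i] + rest[i + 1:]) for i in range(len(rest)))
```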
By contracting the prevertices in each bucket to a vertex, each pairing
projects to a labelled $d$-regular multigraph, with loops permitted.
Let $\Omega_{n,\k}$ denote the set of such multigraphs, and denote the projection
of a pairing $P$ by $G\left(P\right)$.
(We will occasionally informally refer to ``partial'' pairings, where
only a subset of the prevertices are paired. The projection of a partial
pairing is defined in the same way.)
Each $G\in\mathcal{G}_{n,\k}$ is the projection of $\left(d!\right)^{n}$ different
pairings (permuting the prevertices in each bucket), so we can recover
the uniform model $\mathcal{G}_{n,\k}$ from $\mathcal{P}_{n,\k}$ by conditioning on the event that
the projected multigraph of a random pairing is simple.
We will apply the small subgraph conditioning method in the
form given by Janson~\cite[Theorem 1]{Janson}.
\begin{theorem}
\label{thm:janson}Let $\lambda_{j}>0$ and $\zeta_{j}\ge-1$, $j=1,2,\dots,$
be constants. Suppose that for each $n$ we have a sequence
$\boldsymbol{X} =(X_1,X_2,X_3,\ldots)$ of
non-negative integer-valued random variables and a random variable
$Y$ with $\mathbb{E} Y\ne0$ (at least for large $n$). Further suppose the
following conditions are satisfied:
\begin{enumerate}[label=\textup{(A\arabic*)}]
\item\label{cond:A1}
For $m\ge1$, $\left(X_{1},\dots,X_{m}\right)\rightsquigarrow\left(Z_{1},\dots,Z_{m}\right)$,
where $Z_{j}$ are independent Poisson random variables with means
$\lambda_{j}$;
\item\label{cond:A2}
For any $m\ge0$, $\delta^{1}\in\mathbb{N}^{m}$,
\[
\frac{\mathbb{E}\left[Y|X_{1}=\delta^{1}_{1},\dots,X_{m}=\delta^{1}_{m}\right]}{\mathbb{E} Y} \longrightarrow \prod_{j=1}^{m}\left(1+\zeta_{j}\right)^{\delta^{1}_{j}}e^{-\lambda_{j}\zeta_{j}};
\]
\item\label{cond:A3}
${\displaystyle \sum_{j=1}^{\infty}\lambda_{j}\zeta_{j}^{2}<\infty}$;
\item\label{cond:A4}
$
{\displaystyle \frac{\mathbb{E} Y^{2}}{\left(\mathbb{E} Y\right)^{2}}\rightarrow\exp\left(\sum_{j=1}^{\infty}\lambda_{j}\zeta_{j}^{2}\right)}$.\end{enumerate}
Then
\begin{equation}
\frac{Y}{\mathbb{E} Y}\rightsquigarrow W:=\prod_{j=1}^{\infty}\left(1+\zeta_{j}\right)^{Z_{j}}e^{-\lambda_{j}\zeta_{j}}.\label{eq:W}
\end{equation}
Moreover, this and the convergence in \ref{cond:A1} hold jointly.
\end{theorem}
We also need a related lemma:
\begin{lemma}
\label{lem:janson}\textup{(\cite[Lemma 1]{Janson})} Let $\lambda_{j}'>0$,
$j=1,2,\dots,$ be constants. Suppose that \ref{cond:A1} holds and
that $Y\ge0$. Suppose:
\begin{enumerate}[label=\textup{(A2$'$)}]
\item\label{cond:A2'}
For any $m\ge0$, $\delta^{1}\in\mathbb{N}^{m}$,
\[
\frac{\mathbb{E}\left[Y\prod_{j=1}^{m}\falling{X_{j,n}}{\delta^{1}_{j}}\right]}{\mathbb{E} Y}\longrightarrow\prod_{j=1}^{m}\left(\lambda_{j}'\right)^{\delta^{1}_{j}}.
\]
\end{enumerate}
Then \ref{cond:A2} holds with $\lambda_{j}\left(1+\zeta_{j}\right)=\lambda_{j}'$
for all positive integers $j$.
\end{lemma}
We now define the random variables $X_{j}$ and $Y$ to which these
results will be applied.
For each $j\geq 1$, let $\gamma_{j}:\Omega_{n,\k}\to\mathbb{N}$ give the number
of cycles of length $j$ in a multigraph. (A loop is a 1-cycle, and a pair
of edges on the same two vertices is a 2-cycle.)
Then the random variable $X_{j}=\gamma_{j}\circ G$
is the number of $j$-cycles in the projection of a random pairing $P\in\mathcal{P}_{n,\k}$. Write
$\boldsymbol{X} = \left(X_{j}\right)_{j\geq 1}$ for the
sequence of all cycle counts.
It is well known~\cite{Bollobas} that for any positive integer $m$,
the random variables $X_{1},\ldots,X_{m}$
are asymptotically independent Poisson random variables, and that
the mean of $X_{j}$ tends to the quantity $\lambda_{j} = \lambda_{j}(d)$
given in (\ref{eq:janson-parameters}).
Hence Condition~\ref{cond:A1} of Theorem~\ref{thm:janson} holds.
Let $\tau:\Omega_{n,\k}\to\mathbb{N}$ be the function which counts
spanning trees in $d$-regular multigraphs. Define $Y_{\mathcal{G}}$
as the restriction of $\tau$ to $\mathcal{G}_{n,\k}$, and define $Y=\tau\circ G$.
Then, $Y_{\mathcal{G}}$ is the number of spanning trees in a random
$G\in\mathcal{G}_{n,\k}$, as in Section~\ref{ss:notation}. Accordingly, $Y$ is
the number of spanning trees in the projection of a random pairing
$P\in\mathcal{P}_{n,\k}$. We will
investigate the asymptotic distribution of $Y_{\mathcal{G}}$
through analysis of $Y$.
In Section~\ref{sec:EY} we obtain an asymptotic formula for the expected
value of $Y$. In Section~\ref{sec:E_YX} we analyse the interaction of the
number of spanning trees with short cycles, establishing that \ref{cond:A2}
holds for $\lambda_{j}$ and $\zeta_{j}$ as given in (\ref{eq:janson-parameters}).
This enables us to prove Theorem~\ref{thm:expectation} and to prove
that \ref{cond:A3} holds. Then in Section~\ref{sec:EY2} we investigate
the second moment of $Y$. We can prove that Condition~\ref{cond:A4} holds
when $d=3$, leading to a proof of Theorem~\ref{thm:distribution-3}.
Using our partial calculations for general degrees, we provide numerical
evidence that strongly supports Conjecture~\ref{cnj:distribution}.
\section{Expected number of spanning trees }\label{sec:EY}
In this section we compute $\mathbb{E} Y$. Let $\mathcal{T}_{n}$ denote the set of labelled
trees on $n$ vertices, so that $\left|\mathcal{T}_{n}\right|=n^{n-2}$ by Cayley's
formula (see, for example, \cite{Moon}).
Recalling the definition of $Y$,
we have
\[
\mathbb{E}Y = \sum_{P\in\mathcal{P}_{n,\k}}\frac{1}{\left|\mathcal{P}_{n,\k}\right|}\,\tau\left(G\left(P\right)\right),
\]
and hence
\begin{equation}
\left|\mathcal{P}_{n,\k}\right|\:\mathbb{E}Y = \sum_{P\in\mathcal{P}_{n,\k}}\sum_{T\in\mathcal{T}_{n}}M_{T,P},\label{eq:|P|EY-sum-0}
\end{equation}
where $M_{T,P}$ is the number of ways to embed $T$ into the multigraph
$G\left(P\right)$. (When $G\left(P\right)$ is simple, $M_{T,P}$
is zero or one.)
Now, we want to condition on the degree of each of the $n$ vertices
in $T$. Define the set of possible degree sequences
\[
\mathcal{D}_{n}=\left\{ \delta\in\mathbb{N}^{n}:\quad\sum_{j=1}^{n}\delta_{j}=2\left(n-1\right)\right\} .
\]
We can decompose $\left|\mathcal{P}_{n,\k}\right|\:\mathbb{E}Y$ as
\begin{equation}
\sum_{\delta\in\mathcal{D}_{n}}\sum_{T\in\mathcal{T}_{n}}\boldsymbol{1}\left(T\sim\delta\right)\,
\sum_{P\in\mathcal{P}_{n,\k}}\, M_{T,P},\label{eq:|P|EY-sum-1}
\end{equation}
where $T\sim\delta$ denotes the event that vertex $j$ has degree $\delta_{j}$
in $T$, for all $j=1,\ldots,n$.
To evaluate the innermost sum in (\ref{eq:|P|EY-sum-1}), fix some
$\delta\in\mathcal{D}_{n}$ and some $T\in\mathcal{T}_{n}$ with $T\sim\delta$. We need to count the
number of pairings that include $T$, with the embedding of $T$ identified.
That is, if for some pairing $P$, the tree $T$ can be embedded in
$G\left(P\right)$ in multiple ways, then we count each different
way separately.
Now, exactly $\delta_{j}$ of the prevertices in bucket $j$ must contribute
to $T$, and there are $\falling{d}{\delta_{j}}$ ways to choose and
order these prevertices. So, there are
\[
\prod_{j=1}^{n}\falling{d}{\delta_{j}}
\]
ways to pair up the $n-1$ edges corresponding to a copy of $T$.
Then, there are
\[
d n-2\left(n-1\right)=\left(d-2\right)n+2
\]
prevertices remaining, which can be paired in $\#P\left(\left(d-2\right)n+2\right)$
ways. This yields
\begin{equation}
\left|\mathcal{P}_{n,\k}\right|\,\mathbb{E}Y=
\#P\left(\left(d-2\right)n+2\right)\,
\sum_{\delta\in\mathcal{D}_{n}}\prod_{j=1}^{n}\, \falling{d}{\delta_{j}} \,
\sum_{T\in\mathcal{T}_{n}}
\, \boldsymbol{1}\left(T\sim\delta\right).
\label{eq:|P|EY-sum-2}
\end{equation}
The inner sum in (\ref{eq:|P|EY-sum-2}) is the number of trees with
degree sequence $\delta$, which is the multinomial
\begin{equation}
\binom{n-2}{\delta_{1}-1,\ldots,\delta_{n}-1}
=\frac{\left(n-2\right)!}{\prod_{j=1}^{n}\left(\delta_{j}-1\right)!}.
\label{eq:moon}
\end{equation}
(See, for example, Moon~\cite[Theorem 3.1]{Moon}.)
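The count (\ref{eq:moon}) can be checked for small $n$ via the Pr\"ufer bijection, under which a tree $T\sim\delta$ corresponds to a code of length $n-2$ in which vertex $j$ appears exactly $\delta_{j}-1$ times. A brute-force Python sketch (our own verification aid, not part of the argument):

```python
import math
from itertools import product

def moon(delta):
    """Number of labelled trees with degree sequence delta (the multinomial above)."""
    n = len(delta)
    return math.factorial(n - 2) // math.prod(math.factorial(x - 1) for x in delta)

def trees_with_degrees(delta):
    """Same count via brute force over Pruefer codes:
    vertex j must appear exactly delta_j - 1 times in the code."""
    n = len(delta)
    target = [x - 1 for x in delta]
    return sum(1 for code in product(range(n), repeat=n - 2)
               if [code.count(v) for v in range(n)] == target)
```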
Hence
\[
\left|\mathcal{P}_{n,\k}\right|\:\mathbb{E}Y = \left(n-2\right)!\,\#P\left(nd-2\left(n-1\right)\right)\sum_{\delta\in\mathcal{D}_{n}}\prod_{j=1}^{n}\frac{\falling{d}{\delta_{j}}}{\left(\delta_{j}-1\right)!}.
\]
It follows that the total number of ways to choose a spanning tree on $n$
vertices and choose a partial pairing that projects to that tree is
\begin{align}
(n-2)!\, \sum_{\delta\in\mathcal{D}_{n}}\, \prod_{j=1}^{n}\,
\frac{\falling{d}{\delta_{j}}}{\left(\delta_{j}-1\right)!}
&= (n-2)!\, \left[x^{2\left(n-1\right)}\right]\,
\left(\sum_{j=1}^{\infty}\,
\frac{\falling{d}{j}}{\left(j-1\right)!}\, x^{j}\right)^{n}\nonumber \\
&= (n-2)!\, \left[x^{2\left(n-1\right)}\right]\,
\left(d x\left(1+x\right)^{d-1}\right)^{n}\nonumber\\
&= (n-2)!\, d^{n}\, \binom{(d-1)n}{n-2}. \label{useful}
\end{align}
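Identity (\ref{useful}) can be verified by direct enumeration for small parameters, using $\falling{d}{\delta}/\left(\delta-1\right)!=\delta\binom{d}{\delta}$ and the fact that only $1\le\delta_{j}\le d$ contributes; a Python sketch (our own check, not part of the paper):

```python
from itertools import product
from math import comb

def lhs(n, d):
    """Sum over degree sequences in D_n of prod_j falling(d, delta_j)/(delta_j - 1)!.
    Terms with delta_j > d vanish, so each delta_j ranges over 1..d."""
    total = 0
    for delta in product(range(1, d + 1), repeat=n):
        if sum(delta) == 2 * (n - 1):
            term = 1
            for dj in delta:
                term *= dj * comb(d, dj)   # falling(d, dj)/(dj - 1)! = dj * C(d, dj)
            total += term
    return total

def rhs(n, d):
    """Closed form d^n * binom((d-1)n, n-2)."""
    return d**n * comb((d - 1) * n, n - 2)
```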
Hence, by Stirling's approximation and (\ref{eq:num-pairings-asymptotic})
we conclude that
\begin{align}
\left|\mathcal{P}_{n,\k}\right|\,\mathbb{E} Y & = \left(n-2\right)!\,\#P\left(nd-2\left(n-1\right)\right)\,d^{n}\, \binom{(d-1)n}{n-2}\nonumber \\
& \sim \frac{\sqrt{2}\left(d-1\right)^{1/2}}{n\left(d-2\right)^{3/2}}\left(d\left(d-2\right)\left(d-1\right)^{d-1}\left(\frac{n}{\left(d-2\right)e}\right)^{d/2}\right)^{n}.\label{eq:|P|EY-asymptotic}
\end{align}
It follows that
\begin{align}
\mathbb{E} Y & = \frac{\left(n-2\right)!\,\#P\left(nd-2\left(n-1\right)\right)}{\#P\left(nd\right)}\,d^{n}\, \binom{(d-1)n}{n-2} \nonumber \\
& \sim \frac{\left(d-1\right)^{1/2}}{n\left(d-2\right)^{3/2}}\left(\frac{\left(d-1\right)^{d-1}}{\left(d^{2}-2d\right)^{d/2-1}}\right)^{n}.
\label{eq:EY-asymptotic}
\end{align}
Hence for $d\ge3$ and $n$ sufficiently large, we have $\mathbb{E}Y\ne0$.
\section{Interaction with short cycles}\label{sec:E_YX}
\global\long\defX_{\r}{X_{\delta^{1}}}
\global\long\def\mathcal{P}_{n,\r}{\mathcal{P}_{n,\delta^{1}}}
\global\long\defs{s}
\global\long\defn_{\r}{n_{\delta^{1}}}
\global\long\def\left|\r\right|{\left|\delta^{1}\right|}
\global\long\def\delta'{\delta'}
\global\long\def\mathcal{D}_{Q}{\mathcal{D}_{Q}}
\global\long\deft{t}
\global\long\defE{E}
\global\long\def\mathcal{D}_{\dI}{\mathcal{D}_{\delta'}}
\global\long\def\mu{\mu}
\global\long\def\kappa{\kappa}
\global\long\def\Lambda{\Lambda}
\global\long\defg{g}
\global\long\def\mathcal{R}_{n,\r}{\mathcal{R}_{n,\delta^{1}}}
\global\long\def\mathcal{I}_{\i}{\mathcal{I}_{j}}
\global\long\def\mathcal{Q}_{\r}{\mathcal{Q}_{\delta^{1}}}
Recall that $X_{j}=\gamma_{j}\circ G$ is the number of cycles of
length $j$ in the projection of a random pairing $P\in\mathcal{P}_{n,\k}$.
For some fixed $m\ge0$, $\delta^{1}\in\mathbb{N}^{m}$, let $X_{\r}=\prod_{j=1}^{m}\falling{X_{j}}{\delta^{1}_{j}}$.
In this section we will compute an asymptotic formula for $\mathbb{E}\left[YX_{\r}\right]/\mathbb{E}Y$,
in the form required by Condition \ref{cond:A2'}.
We have
\[
\mathbb{E}\left[YX_{\r}\right]=\sum_{P\in\mathcal{P}_{n,\k}}\frac{1}{\left|\mathcal{P}_{n,\k}\right|}\,\tau\left(G\left(P\right)\right)\prod_{j=1}^{m}\falling{\gamma_{j}\left(G\left(P\right)\right)}{\delta^{1}_{j}}.
\]
Note that
\[
\prod_{j=1}^{m}\falling{\gamma_{j}\left(G\left(P\right)\right)}{\delta^{1}_{j}}
\]
is the number of ways to choose, for each $j\in\{ 1,\ldots, m\}$,
an ordered set of $\delta^{1}_{j}$ cycles
of length $j$.
This will result in an ordered set of
\[
\left|\r\right|=\sum_{j=1}^{m}\delta^{1}_{j}
\]
cycles.
We make the decomposition
\[
\prod_{j=1}^{m}\falling{\gamma_{j}\left(G\left(P\right)\right)}{\delta^{1}_{j}}
=\gamma_{\delta^{1}}^{\left(0\right)}+\gamma_{\delta^{1}}',
\]
where $\gamma_{\delta^{1}}^{\left(0\right)}$ is the number of ordered sets of cycles
in which each cycle is disjoint, and $\gamma_{\delta^{1}}'$ is the number
of ordered sets in which some vertices are shared between multiple cycles.
We can further decompose $\gamma_{\delta^{1}}'$ by the structure of the interaction
between the cycles. That is, according to the multigraph that is the
union of the cycles, and the specification of which edges of this
union belong to which cycle. This expresses $\gamma_{\delta^{1}}'$ as a sum
of terms $\gamma_{\delta^{1}}^{\left(j\right)}$. The number of terms $J$
in this decomposition depends on $\delta^{1}$, but is $O\left(1\right)$
as $n\to\infty$.
Define
\[
E^{\left(j\right)}=\sum_{P\in\mathcal{P}_{n,\k}}\,
\tau\left(G\left(P\right)\right)\, \gamma_{\delta^{1}}^{\left(j\right)},
\]
so that we have $\left|\mathcal{P}_{n,\k}\right|\,\mathbb{E}\left[YX_{\r}\right]=\sum_{j=0}^{J}E^{\left(j\right)}$.
We proceed to calculate $E^{\left(0\right)}$. As is standard when
applying this method (see for example, \cite[Theorem 9.6]{JLR}),
$\mathbb{E}\left[YX_{\r}\right]$ is asymptotically dominated by $E^{\left(0\right)}$
(the contribution due to disjoint cycles). See Lemma \ref{lem:Ej}
for some justification for this fact.
Let $\mathcal{C}_{n,j}$ be the set of all $j$-cycles on the vertex
set $\left\{ 1,\dots,n\right\} $, and define the Cartesian product
\[
\mathcal{C}_{n,\delta^{1}}=\prod_{j=1}^{m}\mathcal{C}_{n,j}^{\delta^{1}_{j}}.
\]
Each $R\in\mathcal{C}_{n,\delta^{1}}$ is an ordered set of $\left|\delta^{1}\right|$
cycles. We use the notation $R_{j,k}$ for the $k$th cycle of
length $j$ in $R$. Next, define
\[
\mathcal{R}_{n,\r} = \left\{ R\in\mathcal{C}_{n,\delta^{1}}:\mbox{ the cycles in $R$ are pairwise
disjoint} \right\} .
\]
Similarly to (\ref{eq:|P|EY-sum-0}), we have
\[
E^{\left(0\right)} = \sum_{R\in\mathcal{R}_{n,\r}}\sum_{T\in\mathcal{T}_{n}}\sum_{P\in\mathcal{P}_{n,\k}}M_{\left(T,R\right),P},
\]
where $M_{\left(T,R\right),P}$ is the number of ways to embed the
tree $T$ and the cycles in $R$ into the multigraph $G\left(P\right)$.
We further condition on the edge intersection between the embedding
of $T$ and the cycles in $R$. We use a binary sequence of length
$j$ to encode the intersection of a $j$-cycle with a spanning
tree. Picking an arbitrary start vertex and direction for the cycle,
if the $k$th edge of the cycle is to be included in the intersection,
then the $k$th element of the corresponding sequence is one; otherwise
it is zero. All sequences $q\in\left\{ 0,1\right\} ^{j}$ represent
possible intersections, except the sequence $\left(1,\dots,1\right)$,
because a tree contains no cycles.
Define the set of all possible intersection sequences for a cycle
of length $j$ by
\[
\mathcal{I}_{\i}=\left\{ 0,1\right\} ^{j}\setminus\left\{ \left(1,\dots,1\right)\right\} .
\]
Also, define the Cartesian product
\[
\mathcal{Q}_{\r}=\prod_{j=1}^{m}\mathcal{I}_{\i}^{\delta^{1}_{j}}.
\]
So, for each $R\in \mathcal{R}_{n,\r}$, specifying some $Q\in\mathcal{Q}_{\r}$ fully determines the
intersection between the cycles in $R$ and a tree $T$.
We have
\begin{equation}
E^{\left(0\right)}=\sum_{Q\in\mathcal{Q}_{\r}}\sum_{R\in\mathcal{R}_{n,\r}}\sum_{T\in\mathcal{T}_{n}}\sum_{P\in\mathcal{P}_{n,\k}}M_{\left(T,R,Q\right),P},\label{eq:|P|E[YX]}
\end{equation}
where $M_{\left(T,R,Q\right),P}$ is the number of ways to embed $T$
and the cycles in $R$ in $P$, such that the intersection between
the embedding of $T$ and the cycle $R_{j,k}$ is consistent with
$Q_{j,k}$, for $j = 1,\ldots, m$ and $k = 1,\ldots, \delta^{1}_{j}$.
Fixing $Q\in\mathcal{Q}_{\r}$, we will now evaluate the innermost triple sum in
(\ref{eq:|P|E[YX]}). Consider the following process:
\renewcommand{\theenumi}{\arabic{enumi}}
\renewcommand{\labelenumi}{\theenumi.}
\setenumerate{itemsep=5pt}
\begin{enumerate}
\item \label{step:3}Choose some $R\in\mathcal{R}_{n,\r}$.
\item \label{step:4} Choose a partial pairing that projects to $R$.
\item \label{step:6} Extend this to a pairing of a spanning tree consistent
with $Q$.
\item \label{step:7}Pair the remaining prevertices arbitrarily.
\end{enumerate}
We will find that the number of ways to complete each step is independent
of the other steps. Then, $E^{\left(0\right)}$ is a product of the
number of ways to complete each step, summed over all $Q\in\mathcal{Q}_{\r}$.
First let $n_{\r}=\sum_{j=1}^{m}j\delta^{1}_{j}$ be the total number of vertices
in each $R\in\mathcal{R}_{n,\r}$. The number of ways to choose the vertices for some
$R\in\mathcal{R}_{n,\r}$ is
\[
\binom{n}{n_{\r}}
\]
and the number of different arrangements of disjoint cycles on those
vertices is
\[
\frac{n_{\r}!}{\prod_{j=1}^{m}\left(j\chi\left(j\right)\right)^{\delta^{1}_{j}}},
\]
where $j\chi\left(j\right)$ is the size of the automorphism group
of a $j$-cycle:
\[
\chi\left(j\right)=\begin{cases}
1 & \mbox{for }j\le2\\
2 & \mbox{for }j>2.
\end{cases}
\]
That is, the number of ways to complete Step \ref{step:3} is
\[
s_{\ref{step:3}}^{\left(0\right)}=\frac{n!}{\left(n-n_{\r}\right)!\prod_{j=1}^{m}\left(j\chi\left(j\right)\right)^{\delta^{1}_{j}}}.
\]
Next, the number of ways to complete Step \ref{step:4} is
\[
s_{\ref{step:4}}^{\left(0\right)}=\prod_{j=1}^{m}\left(\frac{\chi\left(j\right)\left(d\left(d-1\right)\right)^{j}}{2}\right)^{\delta^{1}_{j}}.
\]
Note for future reference that we have
\begin{equation}
s_{\ref{step:3}}^{\left(0\right)}s_{\ref{step:4}}^{\left(0\right)}\sim n^{n_{\r}}\prod_{j=1}^{m}\left(\frac{\left(d\left(d-1\right)\right)^{j}}{2j}\right)^{\delta^{1}_{j}}.\label{eq:f_3f_4}
\end{equation}
Next, we count the number of ways to extend this pairing to
a tree $T$ consistent with $Q$. We do this by constructing a new
irregular pairing model $\mathcal{P}_{n,\r}$ from the prevertices still unpaired after
Step \ref{step:4}. Recall that $Q$ describes a union of disjoint
paths; for each of these paths, combine the unpaired prevertices remaining
in each constituent vertex of the path to form a \emph{super-bucket}. If the
path has $k$ vertices then the resulting super-bucket
has $k\left(d-2\right)$ prevertices. Let $\left|Q\right|$ be the number
of super-buckets formed in this way, so the total number of buckets in
$\mathcal{P}_{n,\r}$ is $n':=n-n_{\r}+\left|Q\right|$.
Now, consider an extension of a pairing of cycles from Step \ref{step:4},
to a (partial) pairing of the edges of a tree $T$ consistent with
$Q$, as per Step \ref{step:6}. The pairs from this extension correspond
uniquely to a (partial) pairing $P'$ in the pairing model $\mathcal{P}_{n,\r}$.
By the construction of $\mathcal{P}_{n,\r}$, the projection $T'=G\left(P'\right)$
of this pairing is simply $T$ with some subpaths contracted
to single vertices. Since contracting edges of a tree cannot create
cycles, $G\left(P'\right)$ is itself a (spanning) tree. Similarly,
every pairing of a tree in $\mathcal{P}_{n,\r}$ corresponds to an extension of
a pairing of cycles to a pairing of a tree in $\mathcal{P}_{n,\k}$ consistent with
$Q$. So the number of ways $s_{\ref{step:6}}^{\left(0\right)}$
to complete Step \ref{step:6} equals the number of ways to
choose and pair up a spanning tree in $\mathcal{P}_{n,\r}$.
We will perform this count as in Section \ref{sec:EY}, by conditioning
on the degree in $T'$ of each bucket in $\mathcal{P}_{n,\r}$. Put an arbitrary
ordering on the $\left|Q\right|$ super-buckets, and let $d_{j}$
be the number of prevertices in the $j$th super-bucket. For a degree
sequence $\delta$, let $\left|\delta\right|$ be its degree sum. Define the
sets
\begin{align*}
\mathcal{D}_{Q} & = \left\{ \delta'\in\mathbb{N}^{\left|Q\right|}:\quad\delta'_{j}\le d_{j}\mbox{ for all }j\right\} ,\\
\mathcal{D}_{\dI} & = \left\{ \delta\in\mathbb{N}^{n-n_{\r}}:\quad\sum_{j=1}^{n-n_{\r}}\delta_{j}=2\left(n'-1\right)-\left|\delta'\right|\right\} .
\end{align*}
The set $\mathcal{D}_{Q}$ contains all possible degree-in-$T'$ sequences for
the $\left|Q\right|$ super-buckets. For some $\delta'\in\mathcal{D}_{Q}$, the set
$\mathcal{D}_{\dI}$ contains all possible degree sequences for the $n-n_{\r}$ remaining
ordinary buckets. So, we have
\[
s_{\ref{step:6}}^{\left(0\right)}=
\sum_{\delta'\in\mathcal{D}_{Q}} \sum_{\delta\in\mathcal{D}_{\dI}}\, \sum_{T'\in\mathcal{T}_{n'}}\,
\boldsymbol{1}\left(T'\sim\left(\delta',\delta\right)\right)\, \sum_{P'\in\mathcal{P}_{n,\r}}M_{T',P'},
\]
where $T'\sim\left(\delta',\delta\right)$ denotes the event that the super-buckets
have degree-in-$T'$ sequence $\delta'$ and the remaining vertices have
degree-in-$T'$ sequence $\delta$. Proceeding as before, after fixing
some $\left(T',\delta',\delta\right)$, there are
\[
\left(\prod_{j=1}^{n-n_{\r}}\falling d{\delta_{j}}\right)\left(\prod_{j=1}^{\left|Q\right|}\falling{d_{j}}{\delta'_{j}}\right)
\]
ways to pair the edges of $T'$. There are
\[
\frac{\left(n'-2\right)!}{\left(\prod_{j=1}^{n-n_{\r}}\left(\delta_{j}-1\right)!\right)\left(\prod_{j=1}^{\left|Q\right|}\left(\delta'_{j}-1\right)!\right)}
\]
trees with $T'\sim\left(\delta',\delta\right)$, by (\ref{eq:moon}). So,
we have
\begin{equation}
s_{\ref{step:6}}^{\left(0\right)}=\sum_{\delta'\in\mathcal{D}_{Q}}A_{\delta'}\prod_{j=1}^{\left|Q\right|}\frac{\falling{d_{j}}{\delta'_{j}}}{\left(\delta'_{j}-1\right)!},\label{eq:f_6}
\end{equation}
where
\begin{align}
A_{\delta'} & = \left(n'-2\right)!\sum_{\delta\in\mathcal{D}_{\dI}}\prod_{j=1}^{n-n_{\r}}\frac{\falling d{\delta_{j}}}{\left(\delta_{j}-1\right)!}\nonumber \\
& = \left(n'-2\right)!\,\left[x^{2\left(n'-1\right)-\left|\delta'\right|}\right]\left(d x\left(1+x\right)^{d-1}\right)^{n-n_{\r}}\nonumber \\
& = \left(n'-2\right)!\,d^{n-n_{\r}}\,
\binom{(d-1)(n-n_{\r})}{2(n'-1)-|\delta'|-(n-n_{\r})}\nonumber \\
& \sim \frac{\left(d-2\right)^{2\left|Q\right|-\left|\delta'\right|-5/2}\,\left(d-1\right)^{1/2}}{n^{n_{\r}-\left|Q\right|+2}}\left(\frac{\left(d-2\right)^{d-2}}{d\left(d-1\right)^{d-1}}\right)^{n_{\r}}\left(\frac{d\left(d-1\right)^{d-1}n}{e\left(d-2\right)^{d-2}}\right)^{n}.\label{eq:A_d'-asymptotic}
\end{align}
Finally, for Step \ref{step:7} there are $d n-2n_{\r}-2\left(n'-1\right)=\left(d-2\right)n-2\left(\left|Q\right|-1\right)$
prevertices remaining, which can be paired in
\begin{equation}
s_{\ref{step:7}}^{\left(0\right)}=\#P\left(\left(d-2\right)n-2\left(\left|Q\right|-1\right)\right)\sim\sqrt{2}\left(\frac{\left(d-2\right)n}{e}\right)^{\left(d/2-1\right)n}\left(\left(d-2\right)n\right)^{1-\left|Q\right|}\label{eq:f_7}
\end{equation}
ways.
Combining (\ref{eq:|P|EY-asymptotic}), (\ref{eq:f_3f_4}), (\ref{eq:f_6}),
(\ref{eq:A_d'-asymptotic}) and (\ref{eq:f_7}), we have
\begin{align}
\frac{E^{\left(0\right)}}{\left|\mathcal{P}_{n,\k}\right|\,\mathbb{E}Y}
& = \sum_{Q\in\mathcal{Q}_{\r}}\,
\frac{s_{\ref{step:3}}^{\left(0\right)}s_{\ref{step:4}}^{\left(0\right)}s_{\ref{step:6}}^{\left(0\right)}s_{\ref{step:7}}^{\left(0\right)}}
{\left|\mathcal{P}_{n,\k}\right|\,\mathbb{E}Y}\nonumber \\
& \to \left(\prod_{j=1}^{m}\left(\frac{\left(d\left(d-1\right)\right)^{j}}{2j}\right)^{\delta^{1}_{j}}\right)\nonumber \\
& \quad\times \sum_{Q\in \mathcal{Q}_{\r}}\, \sum_{\delta'\in\mathcal{D}_{Q}}\left(d-2\right)^{\left|Q\right|-\left|\delta'\right|}\left(\frac{\left(d-2\right)^{d-2}}{d\left(d-1\right)^{d-1}}\right)^{n_{\r}}\prod_{j=1}^{\left|Q\right|}\frac{\falling{d_{j}}{\delta'_{j}}}{\left(\delta'_{j}-1\right)!}.\label{eq:E0}
\end{align}
As is standard in these arguments, the only significant contribution to
$\mathbb{E}\left[Y X_{\r}\right]$ comes from $E^{\left(0\right)}$, where the cycles
do not overlap. For completeness we sketch a proof of this below.
\begin{lemma}
\label{lem:Ej}$\mathbb{E}\left[YX_{\r}\right]$ is dominated by the contribution
from $E^{\left(0\right)}$. That is,
\[
\frac{\mathbb{E}\left[YX_{\r}\right]}{\mathbb{E}Y}\sim\frac{E^{\left(0\right)}}{\left|\mathcal{P}_{n,\k}\right|\,\mathbb{E}Y}.
\]
\end{lemma}
\begin{proof}
We can estimate a general $E^{\left(j\right)}$ with some slight modifications
to the above calculations. We would need a different $\mathcal{R}_{n,\r}'\subseteq\mathcal{C}_{n,\delta^{1}}$
that contains all possible ways to embed an ordered set of cycles
with a particular union $U$ into the vertex set $\left\{ 1,\dots,n\right\} $.
The intersection between a spanning tree and the cycles in some $R\in\mathcal{R}_{n,\r}$
would then be a subforest of $U$, so it would be more complicated
to explicitly define a set $\mathcal{Q}_{\r}$ that encodes all possibilities for
the intersection. However, the number of possible intersections is
still independent of $n$.
Using the same four steps, the decomposition $E^{\left(j\right)}=s_{\ref{step:3}}^{\left(j\right)}s_{\ref{step:4}}^{\left(j\right)}s_{\ref{step:6}}^{\left(j\right)}s_{\ref{step:7}}^{\left(j\right)}$
is still valid. Let $\left|U\right|$ and $\left\Vert U\right\Vert $
be the number of vertices and edges in the multigraph $U$, respectively.
Carefully adjusting the calculations for $E^{\left(0\right)}$, we
have $s_{\ref{step:3}}^{\left(j\right)}/s_{\ref{step:3}}^{\left(0\right)}=O\left(n^{\left|U\right|-n_{\r}}\right)$,
$s_{\ref{step:4}}^{\left(j\right)}/s_{\ref{step:4}}^{\left(0\right)}=O\left(1\right)$
and $s_{\ref{step:6}}^{\left(j\right)}/s_{\ref{step:6}}^{\left(0\right)}=O\left(1\right)$.
For Step \ref{step:7} there would be $d n-2\left\Vert U\right\Vert -2\left(n'-1\right)$
prevertices remaining, so $s_{\ref{step:7}}^{\left(j\right)}/s_{\ref{step:7}}^{\left(0\right)}=O\left(n^{n_{\r}-\left\Vert U\right\Vert }\right)$.
We conclude that $E^{\left(j\right)}/E^{\left(0\right)}=O\left(n^{\left|U\right|-\left\Vert U\right\Vert }\right)$.
Any non-disjoint union between distinct cycles has more edges than
vertices, so the lemma is proved.
\end{proof}
We now want to express (\ref{eq:E0}) in the form required by \ref{cond:A2'}.
So, we consider each cycle independently. For a sequence $q\in\mathcal{I}_{\i}$,
let $q\left[k\right]$ be the number of paths with $k$ vertices
in the intersection encoded by $q$. So, for $Q\in\mathcal{Q}_{\r}$ we have
\[
\left|Q\right|=\sum_{j=1}^{m}\sum_{\ell=1}^{\delta^{1}_{j}}\sum_{k=1}^{j}Q_{j,\ell}\left[k\right].
\]
Also, let $\left|q\right|=\sum_{k=1}^{j}q\left[k\right]$ be
the total number of paths in the intersection encoded by $q$.
Now, recall that if the $j$th super-bucket was collapsed from a
path with $k$ vertices, then $d_{j}=k\left(d-2\right)$. Also recall
that $n_{\r}=\sum_{j=1}^{m}j\delta^{1}_{j}$ and note that
\[
\left(d-2\right)^{\left|Q\right|-\left|\delta'\right|}=\prod_{j=1}^{\left|Q\right|}\left(d-2\right)^{1-\delta'_{j}}.
\]
As a result, we have
\[
\frac{\mathbb{E}\left[YX_{\r}\right]}{\mathbb{E}Y}\rightarrow \prod_{j=1}^{m}\left(\lambda_{j}'\right)^{\delta^{1}_{j}},
\]
where
\[
\lambda_{j}' = \frac{\left(d\left(d-1\right)\right)^{j}}{2j}\left(\frac{\left(d-2\right)^{d-2}}{d\left(d-1\right)^{d-1}}\right)^{j}\sum_{q\in\mathcal{I}_{\i}}\prod_{k=1}^{j}\left(\sum_{\ell=1}^{k\left(d-2\right)}\frac{\falling{k\left(d-2\right)}{\ell}}{\left(\ell-1\right)!\,\left(d-2\right)^{\ell-1}}\right)^{q\left[k\right]}.
\]
Note that $\ell$ takes the role of $\delta'_{j}$ for the $j$th super-bucket.
We have proved that Condition \ref{cond:A2'} is satisfied. It remains
to simplify our expression for $\lambda_{j}'$. We have
\begin{align*}
\lambda_{j}' & = \frac{1}{2j}\,
\left(\frac{d-2}{d-1}\right)^{j\left(d-2\right)}\,
\sum_{q\in\mathcal{I}_{\i}}\, \prod_{k=1}^{j}\, \left(k\left(d-2\right)\,
\sum_{\ell=0}^{k\left(d-2\right)-1}\,
\binom{k(d-2)-1}{\ell}\,
\left(d-2\right)^{-\ell}\right)^{q\left[k\right]}\\
& = \frac{1}{2j}\left(\frac{d-2}{d-1}\right)^{j\left(d-2\right)}\sum_{q\in\mathcal{I}_{\i}}\prod_{k=1}^{j}\left(k\left(d-2\right)\left(\frac{1}{d-2}+1\right)^{k\left(d-2\right)-1}\right)^{q\left[k\right]}\\
& = \frac{1}{2j}\sum_{q\in\mathcal{I}_{\i}}\left(\frac{\left(d-2\right)^{2}}{d-1}\right)^{\left|q\right|}\prod_{k=1}^{j}k^{q\left[k\right]}.
\end{align*}
Now, recall that $\left(1,\dots,1\right)\notin\mathcal{I}_{\i}$, so each
$q\in\mathcal{I}_{\i}$ contains at least one zero entry. To evaluate the sum over
$q$, we first restrict to sequences with a zero in one particular position;
by cyclic symmetry, we arbitrarily choose the last. We also condition
on $\left|q\right|$: define
\begin{equation}
\mu = \frac{\left(d-2\right)^{2}}{d-1},\qquad
\Lambda_{j,t} = \sum_{\substack{q\in\mathcal{I}_{\i}\\
\left|q\right|=t\\
q_{j}=0
}
}\mu^{t}\prod_{k=1}^{j}k^{q\left[k\right]}.
\end{equation}
Note that $\Lambda_{j,1}=j\mu$ for all $j$, because the only sequence
$q\in\mathcal{I}_{\i}$ with $\left|q\right|=1$ and $q_{j}=0$ is $\left(1,\dots,1,0\right)$.
For $\left|q\right|>1$, the first path in (the intersection encoded
by) a sequence can contain anywhere between 1 and $j-1$ vertices.
Ranging over the possibilities, we have
\[
\Lambda_{j,t}=\sum_{k=1}^{j-1}k\mu\Lambda_{j-k,t-1}
\]
for $t>1$. To solve this recurrence, define the generating function
\[
\Lambda\left(x,y\right) = \sum_{j=1}^{\infty}\sum_{t=1}^{\infty}\Lambda_{j,t}x^{j}y^{t}.
\]
We have
\[
\Lambda\left(x,y\right)-\sum_{j=1}^{\infty}\Lambda_{j,1}x^{j}y=\sum_{j=1}^{\infty}\sum_{t=2}^{\infty}\sum_{k=1}^{j-1}k\mu\Lambda_{j-k,t-1}x^{j}y^{t},
\]
so
\begin{align*}
\Lambda\left(x,y\right) & = \sum_{k=1}^{\infty}k x^{k}y\mu\sum_{j=k+1}^{\infty}\sum_{t'=1}^{\infty}\Lambda_{j-k,t'}x^{j-k}y^{t'}+\sum_{j=1}^{\infty}j x^{j}y\mu\\
& = \left(\Lambda\left(x,y\right)+1\right)\sum_{k=1}^{\infty}k x^{k}y\mu.
\end{align*}
Now, defining
\[
g\left(x\right)=\sum_{k=1}^{\infty}k x^{k}\mu=\frac{x\mu}{\left(1-x\right)^{2}},
\]
we have
\[
\Lambda\left(x,y\right)=\frac{g\left(x\right)y}{1-g\left(x\right)y}.
\]
Each $q\in\mathcal{I}_{\i}$ with $\left|q\right|=t$ encodes an intersection with
$j-t$ edges and hence contains exactly $t$ zeros, so summing $\Lambda_{j,t}$
over the $j$ possible positions of the restricted zero counts each such $q$
exactly $t$ times. Hence
\begin{align*}
\lambda_{j}' & = \frac{1}{2j}\sum_{t=1}^{j}\frac{j\Lambda_{j,t}}{t}\\
&= \frac{1}{2}\left[x^{j}\right]\sum_{t=1}^{j}\frac{1}{t}\left[y^{t-1}\right]\frac{g\left(x\right)}{1-g\left(x\right)y}\\
& = \frac{1}{2}\left[x^{j}\right]\int_{0}^{1}\frac{g\left(x\right)}{1-g\left(x\right)y}\mathrm{d}y\\
& = -\frac{1}{2}\left[x^{j}\right]\log\left(1-g\left(x\right)\right).
\end{align*}
Now, defining $\kappa=\sqrt{d-1}$ we
have
\[
1-g\left(x\right)=\frac{1+x^{2}-\left(2+\mu\right)x}{\left(1-x\right)^{2}}=\frac{\left(1-\kappa^{2}x\right)\left(1-\kappa^{-2}x\right)}{\left(1-x\right)^{2}},
\]
so
\begin{align}
\lambda_{j}' & = \frac{1}{2}\left[x^{j}\right]\left(2\log\left(1-x\right)-\log\left(1-\kappa^{2}x\right)-\log\left(1-\kappa^{-2}x\right)\right) \nonumber\\
& = \frac{1}{2}\left[x^{j}\right]\sum_{k=1}^{\infty}\frac{-2x^{k}+\left(\kappa^{2}x\right)^{k}+\left(\kappa^{-2}x\right)^{k}}{k}\nonumber\\
& = \frac{1}{2j}\left(\kappa^{j}-\kappa^{-j}\right)^{2}\nonumber\\
& = \frac{\left(\left(d-1\right)^{j}-1\right)^{2}}{2j\left(d-1\right)^{j}}
\label{eq:lambda'}
\end{align}
for $j \geq 1$.
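As a check on this derivation, $\lambda_{j}'$ can also be computed directly from the expression $\lambda_{j}'=\frac{1}{2j}\sum_{q\in\mathcal{I}_{\i}}\mu^{\left|q\right|}\prod_{k}k^{q\left[k\right]}$ obtained above, by enumerating all sequences $q$. In the Python sketch below (our own verification aid, not part of the paper), a maximal cyclic run of $m$ one-entries contributes a path on $m+1$ vertices, and every uncovered vertex contributes a $1$-vertex path:

```python
from fractions import Fraction
from itertools import product
from math import prod

def lambda_prime_enum(j, d):
    """lambda'_j = (1/2j) sum over q in I_j of mu^{|q|} prod_k k^{q[k]}."""
    mu = Fraction((d - 2)**2, d - 1)
    total = Fraction(0)
    for q in product((0, 1), repeat=j):
        if all(q):
            continue                      # (1,...,1) is excluded from I_j
        z = q.index(0)                    # rotate so the sequence ends in a zero;
        rot = q[z + 1:] + q[:z + 1]       # cyclic runs of ones become linear runs
        sizes, run = [], 0
        for bit in rot:
            if bit:
                run += 1
            else:
                if run:
                    sizes.append(run + 1) # a run of m one-edges: a path on m+1 vertices
                run = 0
        sizes += [1] * (j - sum(sizes))   # uncovered vertices are 1-vertex paths
        total += mu**len(sizes) * prod(sizes)
    return total / (2 * j)

def lambda_prime_closed(j, d):
    """Closed form ((d-1)^j - 1)^2 / (2j (d-1)^j) from eq:lambda'."""
    return Fraction(((d - 1)**j - 1)**2, 2*j*(d - 1)**j)
```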
To complete this section we will
establish that conditions \ref{cond:A2} and \ref{cond:A3} of
Theorem~\ref{thm:janson} hold, and prove
Theorem~\ref{thm:expectation}.
\begin{lemma}
Let $d\geq 3$ be a fixed integer. Then
Conditions~\ref{cond:A2} and~\ref{cond:A3} of Theorem~\ref{thm:janson} are
satisfied, and
\[
\exp\left(\sum_{j=1}^{\infty}\lambda_{j}\zeta_{j}^{2}\right)=\frac{d^{2}}{\sqrt{\left(d-1\right)\left(d-2\right)\left(d^{2}-d+1\right)}}.
\]
\label{EY2/EY2-target}
\end{lemma}
\begin{proof}
The calculations of this section show that Condition~\ref{cond:A2'}
of Lemma~\ref{lem:janson} is satisfied with $\lambda_{j}'$ given by
(\ref{eq:lambda'}). Then Lemma~\ref{lem:janson} guarantees that \ref{cond:A2}
is satisfied.
Using the Taylor expansion of $\log\left(1-z\right)$,
it follows from (\ref{eq:janson-parameters}) that
\begin{align*}
& \sum_{j=1}^{\infty}\lambda_{j}\zeta_{j}^{2} \\
&=
\sum_{j=1}^{\infty} \, \dfrac{1}{2j}\left(4\left(d-1\right)^{-j}-4\left(d-1\right)^{-2j}+\left(d-1\right)^{-3j}\right)\\
&=\dfrac{1}{2}\left(-4\log\left(1-\left(d-1\right)^{-1}\right)+4\log\left(1-\left(d-1\right)^{-2}\right)-\log\left(1-\left(d-1\right)^{-3}\right)\right).
\end{align*}
Taking the exponential of both sides and rearranging establishes
the stated expression for
$\exp\left(\sum_{j = 1}^\infty \lambda_{j}\zeta_{j}^2\right)$,
which is finite for $d\geq 3$. Hence Condition~\ref{cond:A3} holds,
as required.
\end{proof}
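The closed form in the lemma can also be confirmed numerically by exponentiating a truncated version of the series, using the summand displayed in the proof; a minimal sketch:

```python
import math

def series_value(d, terms=400):
    # partial sum of sum_j lambda_j zeta_j^2, using the summand
    # (1/(2j)) (4(d-1)^{-j} - 4(d-1)^{-2j} + (d-1)^{-3j})
    u = float(d - 1)
    return sum((4 * u ** -j - 4 * u ** (-2 * j) + u ** (-3 * j)) / (2 * j)
               for j in range(1, terms + 1))

def closed_form(d):
    return d ** 2 / math.sqrt((d - 1) * (d - 2) * (d ** 2 - d + 1))
```

For $d=3$ the closed form evaluates to $9/\sqrt{14}$, the constant that reappears in Section~\ref{sec:EY2}.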
\begin{proof}[Proof of Theorem \ref{thm:expectation}]
We know that condition~\ref{cond:A2} holds (as proved above), and hence
\[
\mathbb{E} Y_{\mathcal{G}} = \mathbb{E}\left[Y|X_{1}=X_{2}=0\right]\\
\rightarrow \mathbb{E} Y\,\exp\left(-\lambda_{1}\zeta_{1}-\lambda_{2}\zeta_{2}\right).
\]
Substituting using (\ref{eq:janson-parameters}) and (\ref{eq:EY-asymptotic})
completes the proof.
\end{proof}
\section{The second moment}\label{sec:EY2}
\global\long\def\floor#1{\left\lfloor #1\right\rfloor }
\global\long\def\ceil#1{\left\lceil #1\right\rceil }
\global\long\def\Xk#1{\mathcal{X}^{\left(#1\right)}}
\global\long\def\Hk#1{F_n^{\left(#1\right)}}
We now want to calculate $\mathbb{E}Y^{2}$.
As a first step we transform this problem into one of evaluating
the coefficient of a certain generating function.
\begin{lemma}
Let $d\geq 3$ be fixed and define
\[ N(n,d) = \begin{cases} n & \text{ if $d\geq 4$,}\\
n/2 + 2 & \text{ if $d=3$}.
\end{cases}\]
Then
\begin{align*}
& \left|\mathcal{P}_{n,\k}\right|\,\mathbb{E}Y^{2} \\
&= \frac{n!\, \left((d-2)n\right)!\, d^n}{2^{(d/2-1)n+2}}\,
\sum_{b=1}^{N(n,d)}\, \frac{2^b}
{b!\,\left(\left(d/2-1\right)n-b+2\right)!}\,
\left[z^{n}\right]\,\left(\sum_{j=1}^{\infty}
\binom{(d-1)j}{j}\, z^{j}\right)^{b}.
\end{align*}
\label{lem:|P|EY2}
\end{lemma}
\begin{proof}
We write
\[
\left|\mathcal{P}_{n,\k}\right|\,\mathbb{E}Y^{2} = \sum_{P\in\mathcal{P}_{n,\k}}\, \sum_{T_{1}\in\mathcal{T}_{n}}\,
\sum_{T_{2}\in\mathcal{T}_{n}}\, M_{\left(T_{1},T_{2}\right),P},
\]
where $M_{(T_{1},T_{2}),P}$ is the number of ways to embed the ordered
pair of trees $\left(T_{1},T_{2}\right)$ into the multigraph $G\left(P\right)$.
We will estimate this sum by choosing some $T_{1},T_{2}\in\mathcal{T}_{n}$ and
counting the ways to pair up their edges, then counting the ways to
complete the pairing. We break up this process in a similar way to
Section \ref{sec:E_YX}:
\begin{enumerate}
\item Choose $b\in\{1,\ldots, n\}$, which will be the number of
connected components in the
intersection of the embeddings of $T_{1}$ and $T_{2}$.
(As we will see later, when $d=3$ we must restrict to $b\in \{1,\ldots,
n/2 + 2\}$.)
\item \label{step:EY2-1}
Choose a partition $(\nu_1,\ldots, \nu_b)$ of $n$ into positive parts.
That is, $\nu_{j}$ is a positive integer for $j=1,\ldots, b$ and
$\sum_{j=1}^b \nu_{j} = n$. Here $\nu_{j}$ will be the number of
vertices in the $j$th connected component of the intersection.
(We should divide by
$b!$ to account for our assumption that the connected components
are labelled).
\item \label{step:EY2-2}
Choose a partition of the $n$ vertices into $b$ groups, where the
size of the $j$th group is $\nu_{j}$.
\item \label{step:EY2-3} In each group, choose a spanning tree on that
group and choose a partial pairing that projects to that tree.
This specifies a component of the intersection.
\end{enumerate}
Now, collapse the buckets in each group into a single super-bucket,
giving exactly $b$ super-buckets. The
$j$th super-bucket has $d\nu_{j}-2\left(\nu_{j}-1\right)$
unpaired prevertices. We now want to pair up two pair-disjoint spanning
trees $T_{1}',T_{2}'$ in the collapsed pairing model. These will
extend to $T_{1}$ and $T_{2}$ using the intersection subtrees chosen
in Step \ref{step:EY2-3}.\vspace{-5pt}
\begin{enumerate}
\setcounter{enumi}{4}
\item
\label{step:EY2-4}
For $j=1,\ldots, b$, choose $\delta^{1}_{j}$ and
$\delta^{2}_{j}$, the degree of vertex $j$ in $T_{1}'$ and $T_{2}'$
respectively, in such a way that
$\delta^{1}_{j}+\delta^{2}_{j}\le d\nu_{j}-2\left(\nu_{j}-1\right)$.
These must also satisfy
\[ \sum_{j=1}^b \delta^{1}_{j} = \sum_{j=1}^b \delta^{2}_{j} = 2(b-1),\]
as they are the degree sequence of a spanning tree on $b$ vertices.
\item \label{step:EY2-5}
Choose two trees $T_{1}',T_{2}'$ on the $b$ vertices
that are consistent with the degree sequences chosen in Step \ref{step:EY2-4}.
\item \label{step:EY2-6}
Pair up these two trees in a pair-disjoint way.
\item \label{step:EY2-7}
Pair all remaining prevertices to complete a $d$-regular pairing.
\end{enumerate}
Given $\nu$, the number of ways to complete Step \ref{step:EY2-2}
is
\[
s_{\ref{step:EY2-2}}= \binom{n}{\nu_{1},\ldots,\nu_{b}}.
\]
By (\ref{useful}),
the number of ways to complete Step \ref{step:EY2-3} is
\[
s_{\ref{step:EY2-3}}=
d^{n}\, \prod_{j=1}^{b}\, \left(\nu_{j}-2\right)!\,
\binom{(d-1)\nu_{j}}{\nu_{j}-2}.
\]
The number of ways to complete Step \ref{step:EY2-5} is
\[
s_{\ref{step:EY2-5}}= \binom{b-2}{\delta^{1}_{1}-1,\ldots,\delta^{1}_{b}-1}\,
\binom{b-2}{\delta^{2}_{1}-1,\ldots,\delta^{2}_{b}-1},
\]
by (\ref{eq:moon}), and the number of ways to complete Step \ref{step:EY2-6}
is
\[
s_{\ref{step:EY2-6}}=\prod_{j=1}^{b}\falling{(d-2)\nu_{j}+2}{\delta^{1}_{j}+\delta^{2}_{j}}.
\]
Finally, for Step \ref{step:EY2-7} there are
\[
\sum_{j=1}^{b}\left(d\nu_{j}-2\left(\nu_{j}-1\right)-\delta^{1}_{j}-\delta^{2}_{j}\right)
= \left(d-2\right)n-2\left(b-2\right)
\]
prevertices remaining, so the number of ways to complete Step \ref{step:EY2-7}
is
\[
s_{\ref{step:EY2-7}}=\#P\left(\left(d-2\right)n-2\left(b-2\right)\right).
\]
For this construction to make sense, the quantity $(d-2)n-2(b-2)$ must be
nonnegative. This is certainly true when $d\geq 4$, but when $d=3$ this
imposes the constraint that $b\leq n/2+2$. (This explains the definition
of $N(n,d)$ in the lemma statement.)
It will be convenient to work with the nonnegative variables
\begin{equation}
\label{translate} \eta^{1}_{j}=\delta^{1}_{j}-1, \quad \eta^{2}_{j}=\delta^{2}_{j}-1 \quad \text{ and }
\quad \eta^{3}_{j}=\left(d-2\right)\nu_{j}-\eta^{1}_{j}-\eta^{2}_{j}
\end{equation}
defined for $j=1,\ldots, b$.
Let
\[ \mathcal{S}_{\ref{step:EY2-1}}(b) = \{ \nu \in \{ 1,\ldots, n\}^b :
\sum_{j=1}^b \nu_{j} = n \}
\]
be the set of possible sequences $\nu$
from Step \ref{step:EY2-1} and let
\begin{align*} \mathcal{S}_{\ref{step:EY2-4}}(\nu)
= \left\{ \left(\eta^{1},\eta^{2},\eta^{3}\right)\in\left(\mathbb{N}^{b}\right)^{3}: \right. &
\left.
\quad\eta^{1}_{j}+\eta^{2}_{j}+\eta^{3}_{j}=\left(d-2\right)\nu_{j} \, \mbox{ for }\,
j=1,\ldots, b,\right.\\
& \left. {} \quad
\sum_{j=1}^b \eta^{1}_{j} = \sum_{j=1}^b \eta^{2}_{j} = b-2\right\}
\end{align*}
be the set of sequences arising from Step~\ref{step:EY2-4} using (\ref{translate}).
Combining all of the above gives
\begin{align*}
\left|\mathcal{P}_{n,\k}\right|\,\mathbb{E}Y^{2} & = \sum_{b=1}^{N(n,d)}\,
\frac{1}{b!}\, \sum_{\nu\in\mathcal{S}_{\ref{step:EY2-1}}(b)}\,
s_{\ref{step:EY2-2}}s_{\ref{step:EY2-3}}\,
\sum_{\left(\eta^{1},\eta^{2},\eta^{3}\right)\in\mathcal{S}_{\ref{step:EY2-4}}(\nu)}\,
s_{\ref{step:EY2-5}}s_{\ref{step:EY2-6}}s_{\ref{step:EY2-7}}\\
& = \sum_{b=1}^{N(n,d)}\, \frac{n!\, d^{n}}
{b!\,\left(\left(d/2-1\right)n-b+2\right)!\,2^{\left(d/2-1\right)n-b+2}}\,
\sum_{\nu\in\mathcal{S}_{\ref{step:EY2-1}}(b)}\,
\prod_{j=1}^{b}\, \frac{\left(\left(d-1\right)\nu_{j}\right)!}{\nu_{j}!}\\
& \quad\times\sum_{\left(\eta^{1},\eta^{2},\eta^{3}\right)\in\mathcal{S}_{\ref{step:EY2-4}}(\nu)}
\, \binom{b-2}{\eta^{1}_{1},\ldots,\eta^{1}_{b}}\, \binom{b-2}{\eta^{2}_{1},\ldots,\eta^{2}_{b}}
\,\binom{(d-2)n-2(b-2)}{\eta^{3}_{1},\dots,\eta^{3}_{b}}.
\end{align*}
Now
\begin{align*}
& \sum_{\left(\eta^{1},\eta^{2},\eta^{3}\right)\in\mathcal{S}_{\ref{step:EY2-4}}(\nu)}\,
\binom{b-2}{\eta^{1}_{1},\ldots,\eta^{1}_{b}}\,
\binom{b-2}{\eta^{2}_{1},\ldots,\eta^{2}_{b}}\,
\binom{(d-2)n-2(b-2)}{\eta^{3}_{1},\ldots,\eta^{3}_{b}}\\
& = \sum_{\left(\eta^{1},\eta^{2},\eta^{3}\right)\in\mathcal{S}_{\ref{step:EY2-4}}(\nu)}\,
\left[z_{1}^{\eta^{1}_{1}}\dots z_{b}^{\eta^{1}_{b}}\right]\,
\left(\sum_{j=1}^{b}z_{j}\right)^{b-2}\\
& \quad\times\left[z_{1}^{\eta^{2}_{1}}\dots z_{b}^{\eta^{2}_{b}}\right]\left(\sum_{j=1}^{b}z_{j}\right)^{b-2}\left[z_{1}^{\eta^{3}_{1}}\dots z_{b}^{\eta^{3}_{b}}\right]\left(\sum_{j=1}^{b}z_{j}\right)^{\left(d-2\right)n-2\left(b-2\right)}\\
& = \left[ z_1^{(d-2)\nu_1}\, z_2^{(d-2)\nu_2} \cdots z_b^{(d-2)\nu_b}
\right]\, \left(\sum_{j=1}^{b}z_{j}\right)^{\left(d-2\right)n}\\
& = \binom{(d-2)n}{(d-2)\nu_{1},\ldots,(d-2)\nu_{b}}.
\end{align*}
It follows that
\begin{align*}
& \left|\mathcal{P}_{n,\k}\right|\,\mathbb{E}Y^{2} \\
&= \frac{n!\, \left((d-2)n\right)!\, d^n}{2^{(d/2-1)n+2}}\,
\sum_{b=1}^{N(n,d)} \, \frac{2^b}{b!\,\left(\left(d/2-1\right)n-b+2\right)!}
\, \sum_{\nu\in\mathcal{S}_{\ref{step:EY2-1}}(b)}\,
\prod_{j=1}^{b}\, \binom{(d-1)\nu_{j}}{\nu_{j}}.
\end{align*}
This is equal to the expression in the lemma statement, since summing
$\prod_{j=1}^{b}\binom{(d-1)\nu_{j}}{\nu_{j}}$ over $\nu\in\mathcal{S}_{\ref{step:EY2-1}}(b)$
extracts the coefficient $\left[z^{n}\right]$ of the $b$th power of the series
$\sum_{j=1}^{\infty}\binom{(d-1)j}{j}z^{j}$.
\end{proof}
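The multinomial identity in the middle of the proof can be spot-checked by brute force for small parameters. The sketch below (our notation, not the paper's) enumerates all admissible $(\eta^{1},\eta^{2},\eta^{3})$ for a given $d$ and composition $\nu$ and compares the two sides:

```python
from itertools import product
from math import factorial, prod

def multinomial(total, parts):
    if any(p < 0 for p in parts) or sum(parts) != total:
        return 0
    return factorial(total) // prod(factorial(p) for p in parts)

def check_identity(d, nu):
    """Check sum over eta of
    binom(b-2; eta1) binom(b-2; eta2) binom((d-2)n - 2(b-2); eta3)
    against binom((d-2)n; (d-2)nu), where eta1_j + eta2_j + eta3_j
    = (d-2) nu_j for each j and sum(eta1) = sum(eta2) = b - 2."""
    b, n = len(nu), sum(nu)
    caps = [(d - 2) * v for v in nu]
    ranges = [range(c + 1) for c in caps]
    total = 0
    for eta1 in product(*ranges):
        if sum(eta1) != b - 2:
            continue
        for eta2 in product(*ranges):
            if sum(eta2) != b - 2:
                continue
            eta3 = [caps[j] - eta1[j] - eta2[j] for j in range(b)]
            total += (multinomial(b - 2, eta1) * multinomial(b - 2, eta2)
                      * multinomial((d - 2) * n - 2 * (b - 2), eta3))
    return total == multinomial((d - 2) * n, caps)
```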
We now seek to evaluate
\[ [z^n] \left( \sum_{j=1}^\infty \binom{(d-1)j}{j} z^{j}\right)^b.\]
By Stirling's approximation and the ratio test, the radius of
convergence of the series $\sum_{j=1}^{\infty}\, \binom{(d-1)j}{j}\, z^{j}$
equals $\frac{\left(d-2\right)^{d-2}}{\left(d-1\right)^{d-1}}$.
Hence,
\[
f\left(z\right):=\sum_{j=1}^{\infty}\,
\binom{(d-1)j}{j}\, \left(\frac{\left(d-2\right)^{d-2}}{\left(d-1\right)^{d-1}}\right)^{j}z^{j}
\]
is analytic in the disk $\left\{ z:\left|z\right|<1\right\} $. Define
$\beta=b/n$ and let $r_{\beta}\in\left(0,1\right)$ be fixed
for each $\beta$ (we will determine this later). Then, with the contour
$\Gamma:\left[-\pi,\pi\right]\to\mathbb{C}$ defined by
$\theta\mapsto r_{\beta}e^{i\theta}$, we have
\begin{align}
\left[z^{n}\right]f\left(z\right)^{b} & =
\frac{1}{2\pi i}\, \int_{\Gamma}\frac{f\left(z\right)^{b}}{z^{n+1}}\,
\mathrm{d}z\nonumber \\
& = \frac{1}{2\pi}\int_{-\pi}^{\pi}\left(\frac{f\left(r_{\beta}e^{i\theta}\right)^{\beta}}{r_{\beta}e^{i\theta}}\right)^{n}\mathrm{d}\theta,\label{eq:contour-integral}
\end{align}
by Cauchy's coefficient formula.
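The radius-of-convergence claim above can be spot-checked via the ratio test: the coefficient ratio $\binom{(d-1)(j+1)}{j+1}\big/\binom{(d-1)j}{j}$ should approach $(d-1)^{d-1}/(d-2)^{d-2}$. A quick sketch, using exact big-integer ratios to avoid float overflow:

```python
from fractions import Fraction
from math import comb

def coeff_ratio(d, j):
    # a_{j+1} / a_j for a_j = binom((d-1)j, j)
    return Fraction(comb((d - 1) * (j + 1), j + 1), comb((d - 1) * j, j))

def inverse_radius(d):
    # conjectured limit of the ratio, i.e. 1/R
    return (d - 1) ** (d - 1) / (d - 2) ** (d - 2)
```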
Let
\[ \mathcal{X}_{n}=\left\{ \dfrac{1}{n},\dfrac{2}{n},\dots,\dfrac{N(n,d)}{n}\right\}
\times\left[-\pi,\pi\right]
\]
be the sample space for pairs $\left(\beta,\theta\right)$,
and define
$a_{n}:\mathcal{X}_{n}\to\mathbb{C}$ by
\begin{equation}
\label{andef}
a_{n}\left(\beta,\theta\right)=\frac{2^{\beta n}}{\left(\beta n\right)!\,\left(\left(d/2-1\right)n-\beta n+2\right)!}\left(\frac{f\left(r_{\beta}e^{i\theta}\right)^{\beta}}{r_{\beta}e^{i\theta}}\right)^{n}.
\end{equation}
Finally, let
\begin{equation}
\label{eq:E}
F_n=\sum_{b=1}^{N(n,d)}\int_{-\pi}^{\pi}a_{n}\left(b/n,\theta\right)\,\mathrm{d}\theta.
\end{equation}
Then, by Lemma~\ref{lem:|P|EY2} and (\ref{eq:contour-integral}),
\begin{equation}
\left|\mathcal{P}_{n,\k}\right|\,\mathbb{E}Y^{2} =
\frac
{\left(d-1\right)^{n\left(d-1\right)}n!\,\left(\left(d-2\right)n\right)!\,d^{n}}
{2\pi\left(d-2\right)^{n\left(d-2\right)}2^{\left(d/2-1\right)n+2}}\, F_n .
\label{eq:E2}
\end{equation}
We now apply the saddle point method to estimate the sum in
(\ref{eq:E}) in the case that $d=3$.
Our proof is adapted from that of~\cite[Theorem 2.3]{GJR}.
When $d=3$ the function $f$ satisfies
\[
f\left(z\right)=\sum_{j=1}^{\infty}\, \binom{2j}{j}\,
\left(\frac{z}{4}\right)^{j}=\left(1-z\right)^{-1/2}-1.
\]
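This identity is the classical central-binomial generating function $\sum_{j\ge 0}\binom{2j}{j}x^{j}=(1-4x)^{-1/2}$ with $x=z/4$; it can be confirmed coefficient by coefficient in exact arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def lhs(j):
    # [z^j] of sum_{j>=1} binom(2j, j) (z/4)^j
    return Fraction(comb(2 * j, j), 4 ** j)

def rhs(j):
    # [z^j] of (1 - z)^{-1/2}: rising factorial (1/2)(3/2)...((2j-1)/2) / j!
    num = Fraction(1)
    for k in range(j):
        num *= Fraction(2 * k + 1, 2)
    return num / factorial(j)
```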
We note for later that if $\theta\in [-\pi,\pi]$ is nonzero then
\begin{equation}
\left| f\left( r_\beta e^{i\theta}\right)\right| =
\left| \, \sum_{j=1}^\infty
\binom{2j}{j}\, \left(\frac{r_\beta}4\right)^j
e^{ij\theta}\, \right|
<
\sum_{j=1}^\infty
\binom{2j}{j}\, \left(\frac{r_\beta}4\right)^j
= |f(r_\beta)|,
\label{lem:technical}
\end{equation}
using the triangle inequality.
Hence for each $\beta$ the function $\theta \mapsto |f( r_\beta e^{i\theta})|$
on $[-\pi,\pi]$ is uniquely maximised at $\theta =0$.
Define $\mathcal{X}=\left(0,1\right]\times\left[-\pi,\pi\right]$
and let $\mathcal{X}^{*}\subset\mathcal{X}$ be a set (to be determined) such that for $\left(\beta,\theta\right)\in\mathcal{X}^{*}$,
both $\beta$ and $d/2-1-\beta=1/2-\beta$ are bounded below by some
positive constant.
Then Stirling's approximation gives, for $\beta\in\mathcal{X}^*\cap\mathcal{X}_n$,
\begin{align}
& a_{n}\left(\beta,\theta\right) \nonumber\\
& \sim \frac{e^{n/2}}{2\pi\, n^{n/2+3}\, \sqrt{\beta\left(1/2-\beta\right)}\, \left(1/2-\beta\right)^{2}}\left(\frac{\left(2f\left(r_{\beta}e^{i\theta}\right)\right)^{\beta}}{r_{\beta}e^{i\theta}\beta^{\beta}\left(1/2-\beta\right)^{\left(1/2-\beta\right)}}\right)^{n}.\label{eq:g-asymptotic-working}
\end{align}
Next, define the half-spaces $\mathcal{X}^{1/2}=\left(0,1/2\right]\times\left[-\pi,\pi\right]$
and $\bar{\mathcal{X}}^{1/2}=\left[0,1/2\right]\times\left[-\pi,\pi\right]$.
Define the real-valued sequence $\left(c_{n}\right)_{n\in\mathbb{N}}$
and the functions $\psi:\bar{\mathcal{X}}^{1/2}\to\mathbb{R}$ and $\phi:\mathcal{X}^{1/2}\to\mathbb{C}$
by
\begin{align*}
c_{n} & = \frac{e^{n/2}}{2\pi n^{n/2+3}},\\
\psi\left(\beta,\theta\right) & = \beta^{-1/2}\left(1/2-\beta\right)^{-5/2},\\
\phi\left(\beta,\theta\right) & = \beta\log\left(2f\left(r_{\beta}e^{i\theta}\right)\right)-\log r_{\beta}-i\theta-\beta\log\beta-\left(1/2-\beta\right)\log\left(1/2-\beta\right),
\end{align*}
so that we have
\begin{equation}
a_{n}\left(\beta,\theta\right)\sim c_{n}\, \psi\left(\beta,\theta\right)\,
e^{n\phi\left(\beta,\theta\right)}\label{eq:g-asymptotic-statement}
\end{equation}
uniformly for $\left(\beta,\theta\right)\in\mathcal{X}_{n}\cap\mathcal{X}^{*}$.
Let $D$ denote the differential operator
\[ \left(D\phi\left(x\right)\right)_{j}=
\frac{\partial\phi\left(x\right)}{\partial x_{j}}.\]
We seek a stationary point of $\phi$. The condition
$\frac{\partial\phi\left(\beta,0\right)}{\partial\theta}=0$
is equivalent to the condition $\beta r_\beta f'(r_\beta) = f(r_\beta)$.
Solving for $r_\beta$ gives
\[
r_{\beta}=\frac{1}{8}\left(8-4\beta-\beta^{2}\pm\sqrt{\beta^{3}\left(8+\beta\right)}\right).
\]
We choose
$r_{\beta}=\frac{1}{8}\left(8-4\beta-\beta^{2}-
\sqrt{\beta^{3}\left(8+\beta\right)}\right)\in\left(0,1\right)$,
which ensures that
$\frac{\partial\phi\left(\beta,0\right)}{\partial\theta}=0$.
Next, we calculate that with this choice of $r_\beta$,
\[ \frac{\partial\phi}{\partial \beta}(\beta,0) =
\log\left(\frac{\left(4-\beta-\sqrt{\beta(8+\beta)}\right)(1-2\beta)}
{\beta\left(\beta + \sqrt{\beta(8+\beta)}\right)}\right).
\]
Setting this equal to 0 and solving for $\beta$ gives the equation
$(3\beta-1)(\beta^2-4\beta+2)=0$.
The only solution with $\beta\in\left(0,\frac12\right]$ is $\beta=\frac13$ so we choose
$x^{*}=\left(\frac{1}{3},0\right)$ and check that
$D\phi\left(x^{*}\right)=0$.
Note that $\phi\left(x^{*}\right)=\log\left(4\sqrt{\frac{2}{3}}\right)$,
and
\[
H=-\begin{pmatrix}\frac{63}{5} & 0\\
0 & \frac{5}{2}
\end{pmatrix}
\]
is the Hessian matrix of $\phi$ at $x^{*}$. Define $C_{1}=5/8$,
so that $-4C_{1}$ is the largest eigenvalue of $H$.
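None of the following is needed for the proof, but the stationary point and Hessian can be checked numerically from the displayed formulas alone: for $d=3$ one gets $r_{1/3}=3/4$, the saddle condition $\beta r_{\beta}f'(r_{\beta})=f(r_{\beta})$, the value $\phi(x^{*})=\log\left(4\sqrt{2/3}\right)$, and central differences recover $H=-\operatorname{diag}(63/5,\,5/2)$. A sketch:

```python
import cmath, math

def f(z):
    # f(z) = (1 - z)^{-1/2} - 1 in the case d = 3
    return (1 - z) ** -0.5 - 1

def r(beta):
    # the chosen root r_beta from the stationarity condition
    return (8 - 4 * beta - beta ** 2 - math.sqrt(beta ** 3 * (8 + beta))) / 8

def phi(beta, theta):
    # phi as defined above, with r_beta substituted in
    z = r(beta) * cmath.exp(1j * theta)
    return (beta * cmath.log(2 * f(z)) - cmath.log(r(beta)) - 1j * theta
            - beta * math.log(beta) - (0.5 - beta) * math.log(0.5 - beta))

beta_star = 1 / 3
```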
Now, define $\hat{\phi}$ by $\hat{\phi}\left(x\right)=\phi\left(x\right)-\phi\left(x^{*}\right)$,
and define $\hat{a}_{n}:\mathcal{X}_{n}\to\mathbb{C}$ by
\[
\hat{a}_{n}\left(x\right)=
c_n^{-1}\, e^{-n\phi\left(x^{*}\right)}\,
a_{n}\left(x\right).
\]
With a Taylor expansion about $x^{*}$, for $x\in\mathcal{X}^{1/2}$ we have
\begin{equation}
\hat{\phi}\left(x\right) = \frac{1}{2}\left(x-x^{*}\right)^{T}H\left(x-x^{*}\right)+h\left(x\right)\left|x-x^{*}\right|^{2},\label{eq:taylor}
\end{equation}
where $h\left(x\right)$ is complex and $h\left(x\right)\to0$ as
$x\to x^{*}$. For all $v\in\mathbb{R}^{2}$ we have $v^{T}Hv\le-2C_{1}\left|v\right|^{2}$,
so we can choose $\xi<\frac{1}{6}$ such that $\Re\hat{\phi}\left(x\right)\le-C_{1}\left|x-x^{*}\right|^{2}$
for $\left|x-x^{*}\right|<\xi$. Define $\mathcal{X}^{*}=\left\{ x\in\mathcal{X}^{1/2}:\left|x-x^{*}\right|<\xi\right\} $,
satisfying the requirement for (\ref{eq:g-asymptotic-working}).
Next, define the sets
\begin{align*}
\Xk 1 & = \left\{ x\in\mathcal{X}^{*}:\left|x-x^{*}\right|<n^{-1/3}\right\} ,\\
\Xk 2 & = \mathcal{X}^{*}\setminus \Xk 1,\\
\Xk 3 & = \mathcal{X}^{1/2}\setminus \mathcal{X}^{*},\\
\Xk 4 & = \mathcal{X}\setminus \mathcal{X}^{1/2},
\end{align*}
so that with
\[
\Hk{j}=\sum_{b=1}^{n/2+2}\int_{-\pi}^{\pi}\hat{a}_{n}\left(b/n,\theta\right)\,
\boldsymbol{1}_{\Xk{j}}(b/n,\theta)\, \mathrm{d}\theta
\]
we have
\begin{equation}
\label{split}
c_n^{-1}\, e^{-n\phi\left(x^{*}\right)} F_n=\Hk 1+\Hk 2+\Hk 3+\Hk 4.
\end{equation}
\begin{lemma}
With notation as above, we have
\[ \Hk 1 + \Hk 2 + \Hk 3 + \Hk 4 \sim \Hk 1
\sim \frac{144\pi}{\sqrt{7}}.\]
\label{lem:dominates}
\end{lemma}
\begin{proof}
Note that $\psi$ is continuous on the closure of $\mathcal{X}^{*}$, which is compact.
So, $\psi$ is absolutely bounded there, by $C_{2}$ say.
By (\ref{eq:g-asymptotic-statement}), it follows that $\left|\hat{a}_{n}\left(x\right)\right|=O\left(e^{n\Re\hat{\phi}\left(x\right)}\right)$
uniformly for $x\in\mathcal{X}^{*}$. For $x\in\Xk 2$ we have
\[
n\Re\hat{\phi}\left(x\right)\le-nC_{1}\left|x-x^{*}\right|^{2}\le-C_{1}n^{1/3}\to-\infty
\]
and consequently
\begin{equation}
\left|\Hk 2\right| \, \leq \,
\sum_{b=1}^{n} \int_{-\pi}^{\pi}\, \left|\hat{a}_{n}\left(b/n,\theta\right)\right|\,
\boldsymbol{1}_{\Xk 2}(b/n,\theta)\, \mathrm{d}\theta \,
= \, O\left(ne^{-C_{1}n^{1/3}}\right) = o(1).
\label{F2bound}
\end{equation}
Now (\ref{lem:technical}) implies that
for each $\beta$, $\Re\hat{\phi}\left(\beta,\theta\right)$
is uniquely maximized when $\theta=0$.
Also
$\frac{\partial\Re\hat{\phi}}{\partial\beta}\left(\beta,0\right)=\frac{\partial\hat{\phi}}{\partial\beta}\left(\beta,0\right)=0$ only for $\left(\beta,0\right)=x^*$, since $\hat{\phi}$ is real along the line $\theta=0$.
Checking the values of $\Re\hat{\phi}(\beta,0)$ in the limit as $\beta\to 0$
and $\beta\to \frac{1}{2}$,
it follows that $\Re\hat{\phi}$ attains a unique
maximum on $\mathcal{X}^{1/2}$ at $x^{*}$.
Let $-C_{3}<0$ be the maximum value of $\Re\hat{\phi}$ on $\Xk 3$.
Let $u\lor w=\max\left\{ u,w\right\} $ for real numbers $u,w$.
We now redo the calculations of (\ref{eq:g-asymptotic-working}) using an
alternate form of Stirling's inequality which holds for all $k\geq 0$,
namely $\sqrt{k\lor1}\left(\frac{k}{e}\right)^{k}\le k!$.
For $\left(\beta,\theta\right) \in \mathcal{X}^{1/2}\cap \mathcal{X}_n$,
\begin{align*}
\left|\hat{a}_{n}\left(\beta,\theta\right)\right| & \le
\frac{ne^{2}}{\sqrt{\left(\beta n\lor 1\right)
\left(\left(n/2-\beta n+2\right)\lor 1\right)}
\left(1/2-\beta\right)^{2}}\, e^{n\Re\hat{\phi}\left(x\right)}\\
& = e^{n\Re\hat{\phi}\left(x\right)+o\left(n\right)}.
\end{align*}
It follows that
\begin{equation}
\label{F3bound}
\left|\Hk 3\right|=O\left(ne^{-C_{3}n/2}\right) = o(1).
\end{equation}
Next suppose that $\left(\beta,\theta\right)\in\Xk 4\cap \mathcal{X}_n$.
Then we have $\frac{1}{2}<\beta\le\frac{1}{2}+o(1)$ and $\left(n/2-b+2\right)!=1$.
By the alternate form of Stirling's inequality and (\ref{lem:technical}),
\begin{align*}
\left|a_{n}\left(\beta,\theta\right)\right| & \le \frac{n^{3}e^{2}}{\sqrt{\left(\beta n\lor 1\right)}}\, c_{n}\left|\frac{\left(2f\left(r_{\beta}e^{i\theta}\right)\right)^{\beta}}{r_{\beta}e^{i\theta}\beta^{\beta}}\right|^{n}\\
& \le e^{o\left(n\right)}c_{n}\left(\frac{\left(2f\left(r_{1/2}\right)\right)^{1/2}}{r_{1/2}\left(\frac12\right)^{1/2}}\right)^{n}.
\end{align*}
By direct computation,
\[
\log\frac{\left(2f\left(r_{1/2}\right)\right)^{1/2}}{r_{1/2}\left(\frac12\right)^{1/2}}=\phi(x^*)+C_4
\]
for some $C_4>0$. It follows that
\begin{equation}
\label{F4bound}
\left|\Hk 4\right|=O\left(ne^{-C_{4}n/2}\right) = o(1).
\end{equation}
It remains to consider $\Hk 1$. Define $\ceil{\left(\beta,\theta\right)}=\left(\frac{\ceil{\beta n}}{n},\theta\right)$,
so that $c_n^{-1}\, e^{-n\phi\left(x^{*}\right)}F_n=n\int_{\mathcal{X}}\hat{a}_{n}\left(\ceil{x}\right)\mathrm{d}x$. For
any $y\in\mathbb{R}^{2}$, define $x_{y}=x^{*}+y/\sqrt{n}$ and
$B_{n}=\left\{ y:\ceil{x_{y}}\in\Xk 1\right\} $, so that we can
make the change of variables
\[
\Hk 1=\int_{B_{n}}\hat{a}_{n}\left(\ceil{x_{y}}\right)\mathrm{d}y.
\]
Note that
\[
\left|y/\sqrt{n}\right|=\left|x_{y}-x^{*}\right|\le\left|x_{y}-\ceil{x_{y}}\right|+\left|\ceil{x_{y}}-x^{*}\right|=O\left(n^{-1/3}+n^{-1}\right)=O\left(n^{-1/3}\right)
\]
for $y\in B_{n}$, so that $B_{n}$ is approximately a ball of radius
$O\left(n^{1/6}\right)$.
Next, a first-order Taylor expansion of $D\phi$ about $x^{*}$ gives
\[
\left|D\phi\left(x_{y}\right)\right|=O\left(\left|y/\sqrt{n}\right|\right)=O\left(n^{-1/3}\right).
\]
Another first-order Taylor expansion of $\phi$ about $x_{y}$ gives
\[
\phi\left(\ceil{x_{y}}\right)-\phi\left(x_{y}\right)=O\left(\left|D\phi\left(x_{y}\right)\right|\left|\ceil{x_{y}}-x_{y}\right|\right)=O\left(n^{-4/3}\right),
\]
so that
\begin{equation}
e^{n\phi\left(x_{y}\right)}\sim e^{n\phi\left(\ceil{x_{y}}\right)}\label{eq:ceil-asymptotic}
\end{equation}
uniformly. Now, for each $y\in\mathbb{R}^{2}$, we have $\ceil{x_{y}}\to x^{*}$.
For $n$ large enough so that $y\in B_{n}$, we have $\psi\left(\ceil{x_{y}}\right)\to\psi\left(x^{*}\right)$
by continuity and $e^{n\hat{\phi}\left(x_{y}\right)}\to e^{\frac{1}{2}y^{T}Hy}$
by (\ref{eq:taylor}). We therefore have $\boldsymbol{1}_{B_{n}}\left(y\right)\hat{a}_{n}\left(\ceil{x_{y}}\right)\to\psi\left(x^{*}\right)e^{\frac{1}{2}y^{T}Hy}$
for all $y$.
Recalling that $C_{2}$ and $2C_{1}$ are bounds involving $\psi$
and $\phi$ respectively, with (\ref{eq:ceil-asymptotic}) we have
$\left|\boldsymbol{1}_{B_{n}}\left(y\right)\hat{a}_{n}\left(\ceil{x_{y}}\right)\right|\le2C_{2}e^{-C_{1}\left|y\right|^{2}}$
for sufficiently large $n$.
Since
\[
\left(\det\left(-H\right)\right)^{-1/2} = \frac{1}{3}\sqrt{\frac{2}{7}},\quad
\psi\left(x^{*}\right) = 108\sqrt{2}
\]
we obtain, by the dominated convergence theorem,
\[
\Hk 1\to\psi\left(x^{*}\right)\int_{\mathbb{R}^{2}}\, e^{\frac{1}{2}y^{T}Hy}\,
\mathrm{d}y=2\pi\, \psi\left(x^{*}\right)\,
\left(\det\left(-H\right)\right)^{-1/2} = \frac{144\pi}{\sqrt{7}}.
\]
Combining this with (\ref{F2bound}--\ref{F4bound})
completes the proof.
\end{proof}
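The constants in the final display can be verified directly: $\psi(x^{*})=108\sqrt{2}$, $\det(-H)=\frac{63}{5}\cdot\frac{5}{2}=\frac{63}{2}$, and $2\pi\,\psi(x^{*})\,(\det(-H))^{-1/2}=144\pi/\sqrt{7}$. A one-screen check:

```python
import math

beta_star = 1 / 3
# psi(x*) = beta^{-1/2} (1/2 - beta)^{-5/2} at beta = 1/3
psi_star = beta_star ** -0.5 * (0.5 - beta_star) ** -2.5
det_negH = (63 / 5) * (5 / 2)  # det(-H) for H = -diag(63/5, 5/2)
# Gaussian integral: 2 pi psi(x*) det(-H)^{-1/2}
gaussian_mass = 2 * math.pi * psi_star * det_negH ** -0.5
```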
We now pull these calculations together to prove the following.
\begin{lemma}
Let $d=3$. Then
\[ \mathbb{E}[Y^2] \sim \frac{18}{\sqrt{14}}\, \left(\frac{16}{3}\right)^n,\]
and hence
\[ \frac{\mathbb{E}[Y^2]}{(\mathbb{E} Y)^2} \rightarrow \frac{9}{\sqrt{14}}.
\]
It follows that Condition~\ref{cond:A4} holds when $d=3$.
\label{lem:secondmoment}
\end{lemma}
\begin{proof}
Lemma~\ref{lem:dominates} and (\ref{split}) prove that
\[ F_n \sim \frac{72}{n^3\sqrt{7}}\, \left( 4\sqrt{\frac{2e}{3n}}\right)^n.\]
Substituting $d=3$ into (\ref{eq:E2}) and applying
(\ref{eq:num-pairings-asymptotic}) gives
\[
\mathbb{E}Y^{2} = \frac{(6\sqrt{2})^n\, (n!)^2}{4\pi\, \#P(3n)}\, F_n
\sim\frac{18}{\sqrt{14}}\left(\frac{16}{3}\right)^{n},
\]
using Stirling's approximation.
Then, with (\ref{eq:EY-asymptotic}) and Lemma~\ref{EY2/EY2-target},
we conclude that
\[
\frac{\mathbb{E}Y^{2}}{(\mathbb{E}Y)^{2}}\to\frac{9}{\sqrt{14}}=\exp\left(\sum_{j=1}^{\infty}\lambda_{j}\zeta_{j}^{2}\right).
\]
This establishes Condition~\ref{cond:A4}, as required.
\end{proof}
We can now complete the proof of Theorem~\ref{thm:distribution-3}.
\begin{proof}[Proof of Theorem \ref{thm:distribution-3}]
We will prove that for general $d \geq 3$, if Condition~\ref{cond:A4}
holds then Conjecture~\ref{cnj:distribution} is true.
In particular, this will prove Theorem~\ref{thm:distribution-3},
using Lemma~\ref{lem:secondmoment}.
Suppose that Condition \ref{cond:A4} is satisfied for some fixed
integer $d\geq 3$. Then by Lemma~\ref{EY2/EY2-target} we may apply
Theorem~\ref{thm:janson} to conclude that
(\ref{eq:W}) holds for $Y$. Therefore, for all real numbers $y$ we have
\begin{alignat*}{2}
\mathbb{P}\left(Y_{\mathcal{G}}/\mathbb{E} Y_{\mathcal{G}}<y\right)
&=\enspace&& \mathbb{P}\left(Y/\mathbb{E} Y_{\mathcal{G}}<y|X_{1}=X_{2}=0\right)\\
& \rightarrow &&
\mathbb{P}\left(W\exp\left(\lambda_{1}\zeta_{1}+\lambda_{2}\zeta_{2}\right)<y|
Z_{1}=Z_{2}=0\right)\\
&=&& \mathbb{P}\left(\prod_{j=3}^{\infty}\left(1+\zeta_{j}\right)^{Z_{j}}
e^{-\lambda_{j}\zeta_{j}}<y\right).
\end{alignat*}
Hence Conjecture \ref{cnj:distribution} is a consequence
of \ref{cond:A4}.
\end{proof}
\subsection{Support for Conjecture~\ref{cnj:distribution}}\label{ss:numerical}
Let $p_{d}\left(n\right)$ denote the quotient
\[
\frac{\mathbb{E} Y^{2}}{\left(\mathbb{E} Y\right)^{2}}\left/\exp\left(\sum_{j=1}^{\infty}\lambda_{j}\zeta_{j}^{2}\right)\right. .
\]
For any fixed integer $d\geq 4$, Conjecture~\ref{cnj:distribution} holds
if and only if Condition~\ref{cond:A4} from Theorem~\ref{thm:janson} is satisfied;
that is, if and only if $p_{d}(n)\sim 1$.
Using Lemma~\ref{lem:|P|EY2} we can compute
$p_{d}\left(100\right)$ for various values of $d$:
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$d$ & 3 & 4 & 5 & 6 & 100\tabularnewline
\hline
$p_{d}\left(100\right)$ & 0.9761 & 0.9881 & 0.9921 & 0.9942 & 0.9998\tabularnewline
\hline
\end{tabular}
\par\end{center}
Figure \ref{fig:EY2-plot} is a plot of
$p_{d}\left(n\right)$ for $d\in \{ 3,4,5,6,100\}$ and $n\leq 50$.
\begin{figure}[ht!]
\begin{center}
\centerline{\includegraphics{spanningtree-arxiv-figure0}}
\caption{\label{fig:EY2-plot} A plot of $p_d(n)$ for $d \in \{ 3,4,5,6,100\}$}
\end{center}
\end{figure}
This plot supports our conjecture that
$p_{d}(n)\sim 1$ for all $d\geq 4$.
Indeed, the rate of convergence to 1 appears to increase as $d$ increases.
We now give an asymptotic result which is equivalent to
Conjecture~\ref{cnj:distribution}.
Combining (\ref{eq:num-pairings-asymptotic}), \ref{cond:A4}, (\ref{eq:EY-asymptotic}), Lemma~\ref{EY2/EY2-target}, Lemma~\ref{lem:|P|EY2} and applying Stirling's
formula shows that for a fixed integer $d\geq 4$,
Conjecture~\ref{cnj:distribution} holds if and only if
\begin{align}
\sum_{b=1}^{n}\,&
\frac{2^{b}}{b!\,\left(\left(d/2-1\right)n-b+2\right)!}\,
\left[z^{n}\right]\,\left(\sum_{j=1}^{\infty}\binom{(d-1)j}{j}\, z^{j}\right)^{b} \notag\\
&\sim
\frac{2d^{2}}{\pi\left(d-2\right)^{4} \, n^3}\,\,
\sqrt{\frac{2d-2}{d^{2}-d+1}}\,\,
\left(\frac{\left(d-1\right)^{2(d-1)}}{\left(d-2\right)^{2(d-2)}}\left(\frac{2e}{d n}\right)^{d/2-1}\right)^{n}.
\label{sufficient}
\end{align}
\section*{Acknowledgements}
We would like to thank the referee for their helpful comments.
https://arxiv.org/abs/1408.2005 | On the Meeting Time for Two Random Walks on a Regular Graph | We provide an analysis of the expected meeting time of two independent random walks on a regular graph. For 1-D circle and 2-D torus graphs, we show that the expected meeting time can be expressed as the sum of the inverse of non-zero eigenvalues of a suitably defined Laplacian matrix. We also conjecture based on empirical evidence that this result holds more generally for simple random walks on arbitrary regular graphs. Further, we show that the expected meeting time for the 1-D circle of size $N$ is $\Theta(N^2)$, and for a 2-D $N \times N$ torus it is $\Theta(N^2 \log N)$. | \section{Introduction}
Consider a system of discrete-time random walks on a graph $G(V,E)$ with two walkers. Each time, they each independently move to a nearby vertex or stay still with given probabilities. Denote the transition matrix of a single walker by $P$, where $P(i,j)$ is the probability that one walker moves from $v_i$ to $v_j$ in a time slot. This process is assumed to start at steady state (i.e. uniform distribution) for each walker, and terminates when they meet at the same vertex. We denote this meeting time by $\tau$, which is a random variable with the expectation \textbf{E}$[\tau]$. Our objective is to analyze this quantity on $d$-regular graphs.
\begin{figure}[hbp]
\centering
\includegraphics[width=0.2\textwidth]{Graph_2}
\caption{4 walkers on a 3-regular graph}
\label{fig 1}
\end{figure}
It is instructive to consider the problem on the one-dimensional circle first. We study a circle with $N$ nodes, denoted by $V=\{0,1,2, \cdots ,N-1 \}$. The two walkers start from arbitrary positions according to the initial distribution. At every step, the walker on $i$ moves to $i-1$ or $i+1$ (for simplicity of notation, assume that if $i=N-1$ then $i+1 = 0$, and similarly that if $i=0$ then $i-1 = N-1$), or stays still at $i$, with probabilities $p_1,p_2,p_3$ respectively.
\begin{figure}[hbp]
\centering
\includegraphics[width=0.6\textwidth]{circle}
\caption{1-D circle}
\label{fig 2}
\end{figure}
Since we are only concerned about the meeting time, the relative position of the two walkers is enough to describe that random variable. So we fix one walker at '$0$'. Then in this new equivalent model, the transition matrix of the other walker before the encounter is
$M = P \cdot P^T$.
A similar equivalent model can be defined for an $N \times N$ torus. Let $V = \{(x,y)\mid x,y=1,2,\ldots,N\}$. At every step, the walker on $(x,y)$ moves to one of the neighbouring vertices $(x\pm 1,y)$, $(x,y\pm 1)$ (coordinates taken modulo $N$) or stays still at $(x,y)$ with given probabilities. Define the index of $(x,y)$ to be \textbf{Ind}$(x,y)=(x-1)N+y$; we then obtain an $N^2$-order matrix $P$. Let $i,j$ denote the indices of two vertices $(x_i,y_i),(x_j,y_j)$. Then $P(i,j)$ denotes the probability that a walker moves from $(x_i,y_i)$ to $(x_j,y_j)$ in each step. $P$ is a ``block-circulant matrix'' as defined in Section~3.1.2.
Similar to the 1-D case, we fix one walker at the lower-right cell; the transition matrix of the other walker before the encounter is again given by $M = P \cdot P^T$, which is symmetric.
\begin{figure}[hbp]
\centering
\includegraphics[width=0.6\textwidth]{Graph_4}
\caption{2-D Torus}
\label{fig:3}
\end{figure}
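To make the reduction concrete, the following sketch (an illustration under the model above, not code from the paper) builds the circulant matrix $P$ for the 1-D circle with probabilities $p_1=p_2=p$ and $p_3=1-2p$, forms $M=P\cdot P^{T}$, and computes \textbf{E}$[\tau]$ from the uniform initial distribution by solving the standard hitting-time system $(I-Q)h=\mathbf{1}$ on the non-zero relative positions:

```python
from fractions import Fraction

def circle_M(N, p1, p2, p3):
    """Relative-position transition matrix M = P P^T for the 1-D circle."""
    P = [[Fraction(0)] * N for _ in range(N)]
    for i in range(N):
        P[i][(i - 1) % N] += p1
        P[i][(i + 1) % N] += p2
        P[i][i] += p3
    return [[sum(P[i][k] * P[j][k] for k in range(N)) for j in range(N)]
            for i in range(N)]

def expected_meeting_time(N, p=Fraction(1, 4)):
    """E[tau] from a uniform start: h_0 = 0 and (I - Q) h = 1 on states
    1..N-1, where Q drops row/column 0 of M; then E[tau] = mean of h."""
    M = circle_M(N, p, p, 1 - 2 * p)
    m = N - 1
    # augmented matrix [I - Q | 1], solved by exact Gauss-Jordan elimination
    A = [[Fraction(int(i == j)) - M[i + 1][j + 1] for j in range(m)]
         + [Fraction(1)] for i in range(m)]
    for c in range(m):
        piv = next(r for r in range(c, m) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(m):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return sum(A[i][m] for i in range(m)) / N
```

Doubling $N$ roughly quadruples \textbf{E}$[\tau]$, consistent with the $\Theta(N^{2})$ result stated in the abstract.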
Our main result is as follows: by suitably defining a Laplacian matrix $L$, the expected meeting time \textbf{E}$[\bm{\tau}]$ of the two walkers (i.e., the expectation of the first time that they occupy the same cell, starting from the steady-state uniform distribution) on a ring or torus can be expressed explicitly as the sum of the reciprocals of the non-zero eigenvalues of $L$. We further conjecture, based on empirical evidence, that the result holds more generally for simple random walks (i.e., with equal transition probabilities) on arbitrary regular graphs.
\section{Method and Key Results}
\subsection{Preliminary}
Recall the standard definition of a Circulant Matrix:
\begin{definition}[Circulant Matrix]
A circulant matrix is a matrix where each row vector is rotated one element to the right relative to the preceding row vector. A circulant matrix $A$ is fully specified by one vector, $\bm{a}$, which appears as the first row of $A$.
\end{definition}
\subsubsection{Properties of Circulant Matrix}
For an arbitrary real circulant matrix $A$ of order $n$ generated by $\{a_0,a_1,\cdots,a_{n-1}\}$, we can find its eigenvalues in a general way following the approach indicated in~\cite{Kleinberg11}. First define the vector $\bm{\xi_i}$ whose $j^{th}$ component is
\begin{equation}
\xi_i(j) = \frac{1}{\sqrt{n}}w^{ij}, \quad \text{where} \quad w=e^{2\pi \sqrt{-1}/n} \text{ is a primitive $n^{th}$ root of unity.}
\end{equation}
We can prove the following properties:
(a) $<\xi_i,\xi_j>=\delta_{ij}$;\\
(b) $\bm{A\xi_i} = \lambda_i \bm{\xi_i}$ for $i = 0,1,\ldots,n-1$.\\
Property (a) shows that $\{\xi_i \mid i=0,1,\cdots,n-1\}$ is an orthonormal set of eigenvectors of $A$. The eigenvalue $\lambda_i$ of $A$ can be calculated by
\begin{eqnarray}
(A\xi_i)(j)& = &\sum_{k=1}^{n} A(j,k)\xi_i(k) \\
& = & \frac{a_0}{\sqrt{n}}w^{ij}+\frac{a_1}{\sqrt{n}}w^{i(j+1)}+\cdots+ \frac{a_{n-1}}{\sqrt{n}}w^{i(j+n-1)}\\
& = & \xi_i(j)(\sum_{k=0}^{n-1}a_k w^{ik})
\end{eqnarray}
Letting $\lambda_i = \sum_{k=0}^{n-1}a_k w^{ik}$, we obtain property (b).
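As a numerical sanity check of property (b), the sketch below (Python with NumPy; the generating vector and the order $n$ are arbitrary choices) builds a circulant matrix from its generator and compares $\sum_{k}a_k w^{ik}$ with the eigenvalues returned by a standard solver.

```python
import numpy as np

n = 6
a = np.array([5.0, 1.0, 2.0, 0.0, 2.0, 1.0])   # symmetric generator -> real spectrum

# circulant matrix: row j is the generator rotated j places to the right
A = np.array([[a[(k - j) % n] for k in range(n)] for j in range(n)])

w = np.exp(2j * np.pi / n)                      # primitive n-th root of unity
lam = np.array([sum(a[k] * w**(i * k) for k in range(n)) for i in range(n)])

# the analytic eigenvalues coincide, as a multiset, with the numerical spectrum
assert np.allclose(sorted(lam.real), sorted(np.linalg.eigvalsh(A)))
```

Because the generator here is symmetric ($a_k = a_{n-k}$), the spectrum is real and the comparison against `eigvalsh` is direct.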
\begin{definition}[Block-Circulant Matrix]
Let $A$ be an $n^2$-order partitioned circulant matrix generated by $A_0,A_1,\cdots,A_{n-1}$, where the $A_i$ are all $n$-order circulant matrices generated by $\{a_{i,0},a_{i,1},\cdots,a_{i,n-1}\}$ (see the illustration below for a $9$-order block-circulant matrix). Then $A$ is called a block-circulant matrix.
\end{definition}
$
A = \begin{bmatrix}
A_0 & A_1 & A_2 \\
A_2 & A_0 & A_1 \\
A_1 & A_2 & A_0
\end{bmatrix},
\quad \text{where} \quad
A_i = \begin{bmatrix}
a_{i,0} & a_{i,1} & a_{i,2} \\
a_{i,2} & a_{i,0} & a_{i,1} \\
a_{i,1} & a_{i,2} & a_{i,0}
\end{bmatrix} \quad \text{for} \quad i=0,1,2.
$
\subsubsection{Properties of Block-Circulant Matrix}
Given an index $i$, the coordinates of $i$ are $x_i=\text{quotient}(i-1,n)$, $y_i=\text{remainder}(i-1,n)$. Then we modify the definition of $\bm{\xi_i}$ to
\begin{equation}
\xi_i(j) = \frac{1}{n}w^{x_ix_j+y_iy_j}, \quad \text{where} \quad w=e^{2\pi \sqrt{-1}/n} \text{ is a primitive $n^{th}$ root of unity.}
\end{equation}
The properties given above in section 3.1.1 still hold, and \\ $\lambda_i = \sum_{l=0}^{n-1}\sum_{k=0}^{n-1} a_{l,k} w^{x_i l+y_i k}$ is the $i^{th}$ eigenvalue of $A$.
\subsection{Results on Circle}
\subsubsection{The Expected Meeting Time}
Let us first discuss the problem on the simplest graph, a 1-D circle.
\begin{theorem}
If two particles perform independent random walks on a circle with a uniform initial distribution, then the expected meeting time is $\sum_{\lambda_i \ne 0} \lambda_i^{-1}$, where $\lambda_i$ is the $i^{th}$ eigenvalue of $L = I - PP^T$ and $P$ is the transition matrix for a single walker.
\end{theorem}
Treating the transition probabilities in $M$ as edge weights, we obtain the Laplacian matrix,
\begin{equation}
L = I - M = I - PP^T
\end{equation}
which is a circulant matrix generated by $\{1-q_0,-q_1,-q_2,0,\cdots,0,-q_2,-q_1\}$, where $q_0 = p_1^2+p_2^2+p_3^2, q_1 = p_3(p_1+p_2), q_2 = p_1p_2$.
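The structure of $L$ can be verified numerically. The sketch below (Python with NumPy; the probabilities $p_1,p_2,p_3$ are arbitrary values summing to one) builds the single-walker transition matrix $P$ on the cycle and checks that the first row of $L = I - PP^T$ matches the stated generator.

```python
import numpy as np

N = 8
p1, p2, p3 = 0.3, 0.2, 0.5             # arbitrary move-left / move-right / stay probabilities

# one-step transition matrix of a single walker on the N-cycle
P = np.zeros((N, N))
for i in range(N):
    P[i, (i - 1) % N] = p1
    P[i, (i + 1) % N] = p2
    P[i, i] = p3

L = np.eye(N) - P @ P.T

q0 = p1**2 + p2**2 + p3**2
q1 = p3 * (p1 + p2)
q2 = p1 * p2
row = np.array([1 - q0, -q1, -q2] + [0.0] * (N - 5) + [-q2, -q1])
assert np.allclose(L[0], row)           # L is circulant with the stated generator
```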
Let $T_i$, the $i^{th}$ component of the vector $\bm{T}$, denote the expected meeting time when the relative position starts at vertex $i$. Obviously, $T_0 = 0$. Let the initial distribution be $\bm{\pi}$; then \textbf{E}$[\tau] = \bm{\pi}^T \bm{T}$. We can obtain a set of equations by first-step analysis:
\begin{equation}
T_i=
q_2 T_{i-2}+q_1T_{i-1}+q_0T_{i}+q_1T_{i+1}+q_2T_{i+2}+1 \quad i \ne 0
\end{equation}
Notice that the coefficients satisfy $q_0+2q_1+2q_2 = p_1^2+p_2^2+p_3^2 + 2p_3(p_1+p_2)+2p_1p_2 = (p_1+p_2+p_3)^2=1$. By summing up the above equations, we have:
\begin{equation}
T_0=
q_2 T_{N-2}+q_1T_{N-1}+q_0T_{0}+q_1T_{1}+q_2T_{2}-(N-1)
\end{equation}
Thus, the Laplacian matrix $L$ is the coefficient matrix of (4),(5).
\begin{equation}
L\bm{T}=\bm{\Delta t},\quad \text{where} \quad \Delta t = (1,1,1,\cdots,1,-(N-1))^T
\end{equation}
Since $L$ is a real symmetric circulant matrix, we can use the conclusion in section 3.1.1. Taking the inner product with $\xi_i$ on both sides of (9), from the symmetry of $L$ we have
\begin{equation}
<\bm{LT},\bm{\xi_i}> = <\bm{T},\bm{L\xi_i}> = <\bm{T},\lambda_i\bm{\xi_i}> = \lambda_i \left(\sum_{k=1}^{N-1}T_k \frac{w^{ik}}{\sqrt{N}}+\frac{T_0}{\sqrt{N}}\right)
\end{equation}
\begin{equation}
<\bm{\Delta t},\bm{\xi_i}> = \frac{1}{\sqrt{N}}( \sum_{k=1}^{N-1}w^{ik}-(N-1)) =
\left\{
\begin{array}{lr}
-\sqrt{N} & i\ne 0\\
0 & i=0
\end{array}
\right.
\end{equation}
Notice that $\sum_{k=1}^{N-1}w^{ik} = -1$ for $i \ne 0$. Combined with (9), for $i \ne 0$,
\begin{equation}
\sum_{k=1}^{N-1}\frac{T_k}{\sqrt{N}}w^{ik} = -\sqrt{N}(\lambda_i)^{-1}
\end{equation}
Summing over $i$, we have:
\begin{equation}
\sum_{i=1}^{N-1}\sum_{k=1}^{N-1}\frac{T_k}{\sqrt{N}}w^{ik} = - \sqrt{N} \sum_{i=1}^{N-1} \lambda_i^{-1}
\end{equation}
\begin{equation}
\sum_{i=1}^{N-1}\sum_{k=1}^{N-1}T_k w^{ik} = -N\sum_{i=1}^{N-1} \lambda_i^{-1}
\end{equation}
Changing the order of summation,
\begin{equation}
\sum_{k=1}^{N-1}T_k\sum_{i=1}^{N-1} w^{ik} = -N\sum_{i=1}^{N-1} \lambda_i^{-1}
\end{equation}
\begin{equation}
\frac{1}{N}\sum_{k=1}^{N-1}T_k = \sum_{i=1}^{N-1} \lambda_i^{-1}
\end{equation}
We assume the initial distribution is the steady-state distribution, which for any regular graph is the uniform distribution. The expected meeting time is then given as:
\begin{equation}
\textbf{E}[\tau] = \bm{\pi}^T \bm{T} = \frac{1}{N}\sum_{k=1}^{N-1}T_k = \sum_{i=1}^{N-1} \lambda_i^{-1}
\end{equation}
Note that this is the sum of the reciprocals of non-zero eigenvalues of $L$.
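Theorem 1 can be checked numerically. The sketch below (Python with NumPy; $N$ and the transition probabilities are arbitrary choices) computes the spectral expression $\sum_{\lambda_i\ne 0}\lambda_i^{-1}$ and compares it with the expected meeting time obtained by solving the hitting-time equations of the relative-position chain directly.

```python
import numpy as np

N, (p1, p2, p3) = 9, (0.3, 0.2, 0.5)

P = np.zeros((N, N))
for i in range(N):
    P[i, (i - 1) % N] = p1
    P[i, (i + 1) % N] = p2
    P[i, i] = p3
L = np.eye(N) - P @ P.T

# spectral side: sum of reciprocals of the non-zero eigenvalues of L
spectral = sum(1/x for x in np.linalg.eigvalsh(L) if x > 1e-10)

# direct side: hitting time of the origin for the relative-position chain M = P P^T
M = P @ P.T
free = list(range(1, N))                # T_0 = 0; solve for the remaining states
T = np.linalg.solve(np.eye(N - 1) - M[np.ix_(free, free)], np.ones(N - 1))
direct = T.sum() / N                    # E[tau] under the uniform initial distribution

assert np.isclose(spectral, direct)
```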
\subsubsection{The Order Estimation of $\textbf{E}[\tau]$}
For simplicity, we estimate the order of $\textbf{E}[\tau]$ for the simple random walk (i.e., $p_1=p_2=p_3=\frac{1}{3}$), for which $q_0=\frac{1}{3}$, $q_1=\frac{2}{9}$, $q_2=\frac{1}{9}$:
\begin{equation}
\begin{split}
\textbf{E}[\tau]=&\sum_{i=1}^{N-1} \left(\frac{2}{3}-\frac{4}{9}\cos{\frac{2\pi i}{N}}-\frac{2}{9}\cos{\frac{4 \pi i}{N}}\right)^{-1}\\
=&\sum_{i=1}^{N-1} \frac{9}{4}\left(2-\cos{\frac{2\pi i}{N}}-\left(\cos{\frac{2\pi i}{N}}\right)^2\right)^{-1}\\
=&\frac{9}{4}\sum_{i=1}^{N-1} \frac{1}{(2+t_i)(1-t_i)},
\end{split}
\end{equation}
where $t_i = \cos{\frac{2\pi i}{N}}$ and we used $\cos\frac{4\pi i}{N}=2\cos^2\frac{2\pi i}{N}-1$. Thus $(2+t_i)^{-1} \in [1/3,1]$, which is bounded by constants. From~\cite{Montroll1969}, the summation $\sum_{i=1}^{N-1} (1-t_i)^{-1}$ is $O(N^2)$, so $\textbf{E}[\tau]$ is $O(N^2)$. On the other hand, for $i = 1$, applying Taylor's theorem we have
\begin{equation}
\frac{1}{1-t_1}=\frac{1}{1-\cos{\frac{2\pi}{N}}} =\frac{1}{\Theta(1/N^2)} = \Theta(N^2).
\end{equation}
Thus $\textbf{E}[\tau]$ is also $\Omega(N^2)$, so for the 1-D circle $\textbf{E}[\tau]$ grows with the size of the graph as $\Theta(N^2)$.
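The $\Theta(N^2)$ growth can also be observed numerically. The sketch below (Python with NumPy) evaluates $\textbf{E}[\tau]$ for the simple walk from the factored form of the circulant eigenvalues, $\lambda_i = \frac{4}{9}(2+t_i)(1-t_i)$ with $t_i = \cos\frac{2\pi i}{N}$, and checks that $\textbf{E}[\tau]/N^2$ stabilizes.

```python
import numpy as np

def expected_meeting_time(N):
    """E[tau] for the lazy simple walk (p1 = p2 = p3 = 1/3) on the N-cycle,
    computed from the closed-form circulant eigenvalues of L."""
    i = np.arange(1, N)
    t = np.cos(2 * np.pi * i / N)
    lam = (4/9) * (2 + t) * (1 - t)     # the non-zero eigenvalues of L
    return np.sum(1 / lam)

r64 = expected_meeting_time(64) / 64**2
r128 = expected_meeting_time(128) / 128**2
# the normalized values stay of the same order, consistent with Theta(N^2)
assert 0.10 < r64 < 0.17 and 0.10 < r128 < 0.17
```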
\subsection{Results on Torus}
\subsubsection{The Expected Meeting Time}
\begin{theorem}
If two particles perform independent random walks on a torus with a uniform initial distribution, then the expected meeting time is $\sum_{\lambda_i \ne 0} \lambda_i^{-1}$, where $\lambda_i$ is the $i^{th}$ eigenvalue of $L = I - PP^T$ and $P$ is the transition matrix for a single walker.
\end{theorem}
Similarly, treating the transition probabilities in $M$ as edge weights, we obtain the Laplacian matrix.
\begin{equation}
L = I - M =I - PP^T
\end{equation}
Let $T_i$, the $i^{th}$ component of the vector $\bm{T}$, denote the expected meeting time when the relative position starts at the point with index $i$. Obviously, $T_{N^2} = 0$. If the initial distribution is $\bm{\pi}$, then \textbf{E}$[\tau] = \bm{\pi}^T \bm{T}$. We can obtain a set of equations by first-step analysis (for more readable notation we write $T_{x,y} = T_{\textbf{Ind}(x,y)}$).
For ease of exposition, we illustrate this recurrence below for a simple random walk, i.e., the walker in the original model moves to each of its neighbours or stays still with the same probability $\frac{1}{5}$:
\begin{equation}
\begin{split}
T_{x,y} =
& \frac{1}{25} T_{x\pm 2,y}+\frac{1}{25}T_{x,y\pm 2}+\frac{2}{25}T_{x\pm 1,y}+\frac{2}{25}T_{x,y\pm 1}\\
+&\frac{2}{25}T_{x\pm 1,y\pm 1}+\frac{1}{5}T_{x,y}+1 \quad (x,y) \ne (N,N)
\end{split}
\end{equation}
Note that such a recurrence equation for $T_{x,y}$ could also be written for any random walk that moves to neighboring nodes with different probabilities.
We also have:
\begin{equation}
L\bm{T}=\bm{\Delta t},\quad \text{where} \quad \Delta t = (1,1,1,\cdots,1,-(N^2-1))^T
\end{equation}
Using the same approach as in 3.2, we have
\begin{equation}
<\bm{LT},\bm{\xi_i}> = <\bm{T},\bm{L\xi_i}> = <\bm{T},\lambda_i\bm{\xi_i}> = \lambda_i (\sum_{k=1}^{N} \sum_{l=1}^{N}\frac{T_{k,l}}{N}w^{x_i k+y_i l})
\end{equation}
\begin{equation}
<\bm{\Delta t},\bm{\xi_i}> = \frac{1}{N}( \sum_{(k,l)\ne (N,N)}w^{x_ik+y_il}-(N^2-1)) =
\left\{
\begin{array}{lr}
-N & i\ne 0\\
0 & i=0
\end{array}
\right.
\end{equation}
Combining with (20) and $T_{N,N}=0$, and summing over $i$ for $i \ne 0$, we have
\begin{equation}
\sum_{i=1}^{N^2-1}\sum_{(k,l)\ne (N,N)}\frac{T_{k,l}}{N}w^{x_i k+y_i l} = - N \sum_{i=1}^{N^2-1} \lambda_i^{-1}
\end{equation}
\begin{equation}
\sum_{(x,y)\ne (N,N)}\sum_{(k,l)\ne (N,N)}T_{k,l}w^{x k+y l} = - N^2 \sum_{i=1}^{N^2-1} \lambda_i^{-1}
\end{equation}
Changing the order of summation, we finally have
\begin{equation}
\frac{1}{N^2}\sum_{(k,l)\ne (N,N)}T_{k,l} = \sum_{i=1}^{N^2-1} \lambda_i^{-1}
\end{equation}
Note that we actually obtain the same expression as for the 1-D circle. Given the uniform initial distribution, the expected meeting time $\textbf{E}[\tau]$ is the sum of the reciprocals of the non-zero eigenvalues of $L$.
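The torus result admits the same numerical check as the circle. The sketch below (Python with NumPy; a small arbitrary $N$) constructs the single-walker transition matrix via Kronecker products of cyclic shifts and compares the spectral expression with the directly computed expected meeting time of the relative-position chain.

```python
import numpy as np

N = 5
S = np.roll(np.eye(N), 1, axis=1)           # cyclic shift on the N-cycle
I = np.eye(N)
# single-walker transition matrix on the N x N torus (5 moves, prob 1/5 each)
P = (np.eye(N * N) + np.kron(S, I) + np.kron(S.T, I)
     + np.kron(I, S) + np.kron(I, S.T)) / 5

M = P @ P.T                                  # relative-position chain
L = np.eye(N * N) - M

spectral = sum(1/x for x in np.linalg.eigvalsh(L) if x > 1e-10)

free = list(range(1, N * N))                 # relative position 0 is absorbing
T = np.linalg.solve(np.eye(N * N - 1) - M[np.ix_(free, free)],
                    np.ones(N * N - 1))
direct = T.sum() / (N * N)                   # E[tau] from the uniform distribution

assert np.isclose(spectral, direct)
```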
\subsubsection{The Order Estimation of $\textbf{E}[\tau]$}
Applying (6) to (25), we have
\[\textbf{E}[\tau]=
\sum_{\begin{subarray}{l} i,j=0 \\ (i,j)\ne(0,0)\end{subarray}}^{N-1}
\left( \frac{1}{25}
(20-2(\cos{\frac{4\pi i}{N}}+\cos{\frac{4\pi j}{N}})-4(\cos{\frac{2\pi i}{N}}+\cos{\frac{2\pi j}{N}})
-8\cos{\frac{2\pi i}{N}}\cos{\frac{2\pi j}{N}})
\right)^{-1}\]
which can be rewritten as
\begin{equation}
\textbf{E}[\tau]=
\frac{25}{8}\sum_{\begin{subarray}{l} i,j=0 \\ (i,j)\ne(0,0)\end{subarray}}^{N-1}
\frac{1}{2t_{ij}s_{ij}+3}\cdot\frac{1}{1-t_{ij}s_{ij}}
\end{equation}
where $t_{ij}=\cos{\frac{\pi(i+j)}{N}}$ and $s_{ij}=\cos{\frac{\pi(i-j)}{N}}$. By applying the following lemma (proved in \textbf{Appendix A}):
\begin{lemma}
If $\theta_1,\theta_2\in [0,\frac{\pi}{4}]$, then
\[\frac{1}{1-\cos{\theta_1}\cos{\theta_2}}\le \frac{4}{1-\cos{2\theta_1}\cos{2\theta_2}}\]
\end{lemma}
we can separate the summation into $\Theta(\log N)$ parts and prove that each part is $\Theta(N^2)$.
Thus finally we obtain that
\begin{equation}
\textbf{E}[\tau] = \Theta(N^2 \log N).
\end{equation}
The complete proof is given in the \textbf{Appendix A}.
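The claimed growth rate can be observed numerically. The sketch below (Python with NumPy) evaluates $\textbf{E}[\tau]$ from the closed-form eigenvalues displayed above and checks that $\textbf{E}[\tau]/(N^2\log N)$ is stable across two sizes.

```python
import numpy as np

def torus_meeting_time(N):
    """E[tau] on the N x N torus for the 5-move lazy walk, via the eigenvalues of L."""
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    a, b = 2 * np.pi * i / N, 2 * np.pi * j / N
    lam = (20 - 2 * (np.cos(2*a) + np.cos(2*b))
              - 4 * (np.cos(a) + np.cos(b))
              - 8 * np.cos(a) * np.cos(b)) / 25
    lam = lam.ravel()[1:]                # drop the zero eigenvalue at (i, j) = (0, 0)
    return np.sum(1 / lam)

r1 = torus_meeting_time(64) / (64**2 * np.log(64))
r2 = torus_meeting_time(128) / (128**2 * np.log(128))
# the two normalized values stay of the same order, consistent with Theta(N^2 log N)
assert 0.5 < r1 / r2 < 2.0
```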
\section{Discussion}
We have proved that on the circle and the torus, the expected meeting time of the two walkers equals the sum of the reciprocals of the non-zero eigenvalues of $L = I - PP^T$. In fact, if the graph has strong symmetry properties which guarantee $M = PP^T$ and that $L$ is (block-)circulant, then the proof still holds. The simulation results shown in Figure~\ref{fig:4} match the conclusions of section 3.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{torus}\\
\caption{Simulation Results on 2-D Torus}
\label{fig:4}
\end{figure}
Moreover, we find empirically that the expression works even for simple random walks on arbitrary regular graphs. This is not a trivial observation, since the symmetry of vertices does not hold for arbitrary regular graphs; see the examples of 4-regular graphs in Figure~\ref{fig:5}. In this case, the equivalent-model approach of fixing one of the walkers at a particular location and defining the transition matrix of the other walker does not work.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.15\textwidth]{g5-4}
\quad
\includegraphics[width=0.17\textwidth]{g6-4}
\quad
\includegraphics[width=0.17\textwidth]{g7-4}
\quad
\includegraphics[width=0.15\textwidth]{g8-4} \\
\caption{Special Cases for 4-regular Graph }
\label{fig:5}
\end{figure}
\begin{conjecture}[Expected Meeting Time on Regular Graph]
If two particles make independent \textbf{simple} random walks on a connected $d$-regular graph, and the initial distribution is uniform, then the expected meeting time $\textbf{E}[\tau]$ is $\sum_{\lambda_i \ne 0} \lambda_i^{-1}$, where $\lambda_i$ is the $i^{th}$ eigenvalue of $L = I - PP^T$, and $P$ is the transition matrix for a single walker.
\end{conjecture}
Our conjecture is supported by empirical evidence, which we present here. Figure~\ref{fig:6} shows simulation results as well as the relevant numerical calculations for simple random walks on arbitrary regular graphs. The left figure shows results on 10-regular graphs, while the right one shows results on graphs with 30 vertices. For each horizontal point, a single random graph is generated and fixed, and we average over multiple random initial conditions drawn from a uniform distribution. Each blue mark indicates the average meeting time over 500 independent runs of the experiment, and each green mark over 10000 runs. The red mark indicates the conjectured value of the expected meeting time (i.e., the sum of the reciprocals of the non-zero eigenvalues of $L$). The black mark indicates the exact value of $\textbf{E}[\tau]$, which can be calculated from the definition of expectation once the transition probabilities are given (see Appendix B). In each case we see that the conjecture is valid.
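This kind of experiment is easy to reproduce in a few lines. The sketch below (Python with NumPy; the particular 3-regular graph, an 8-cycle plus a perfect matching of chords, is an arbitrary choice and not one of the graphs from the figures) computes the conjectured spectral value and the exact value via the absorbing product chain, so that the two can be compared.

```python
import numpy as np

# a fixed 3-regular graph on 8 vertices: an 8-cycle plus a perfect matching of chords
edges = [(0,1),(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(7,0),
         (0,3),(1,6),(2,5),(4,7)]
n, d = 8, 3
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1

P = (np.eye(n) + A) / (d + 1)           # simple walk: move or stay, prob 1/(d+1) each
L = np.eye(n) - P @ P.T
conjectured = sum(1/x for x in np.linalg.eigvalsh(L) if x > 1e-10)

# exact value via the absorbing product chain on ordered pairs (u, v)
Q = np.kron(P, P)                       # pair (u, v) is encoded as state u*n + v
free = [u * n + v for u in range(n) for v in range(n) if u != v]
T = np.zeros(n * n)                     # T is zero on the absorbing diagonal
T[free] = np.linalg.solve(np.eye(len(free)) - Q[np.ix_(free, free)],
                          np.ones(len(free)))
exact = T.mean()                        # uniform initial distribution over pairs

print(conjectured, exact)               # compare the two values
```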
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth]{f_1_1}
\quad
\includegraphics[width=0.45\textwidth]{f_2_2}
\\
\caption{Simulation Results on General Regular Graphs }
\label{fig:6}
\end{figure}
One way to prove the conjecture may be to use the method in section 3; but for this approach we would need an additional conjecture.
\begin{conjecture}
If $A$ is the adjacency matrix of a connected $d$-regular graph $G$ with $n$ vertices, then $A$ has a set of orthogonal eigenvectors \{$\xi_1, \xi_2, \cdots, \xi_n$\} satisfying
$(a).\quad \xi_n = (1,1,\cdots, 1)^T$;
$(b).\quad \xi_i(n) = 1,$ for all $i$;
$(c). \quad \sum_{j=1}^n \xi_i(j) = 0, $for all $i \ne n$;
$(d). \quad \sum_{i=1}^n \xi_i(j) = 0, $for all $j \ne n$;
$(e). \quad <\xi_i,\xi_j>=n \delta_{ij}$
\end{conjecture}
\begin{proposition}
\textbf{Conjecture 2} is a sufficient condition for \textbf{Conjecture 1}.
\end{proposition}
\textit{Proof.}
Suppose $\mu_1, \ldots,\mu_n$ are the eigenvalues of $A$.
We define a matrix $\tilde{L}$ as follows:
\begin{equation}
\tilde{L} = I - P \otimes P
\end{equation}
where $P \otimes P$ is the Kronecker product of $P$ with itself. From $P = (I+A)/(d+1)$, the eigenvalues of $P$ are $\beta_i = (\mu_i+1)/(d+1)$. Thus, from the properties of the Kronecker product, the eigenvalues and eigenvectors of $\tilde{L}$ are $\lambda_{i,j} =1-\beta_i \beta_j$ and $\xi_{i,j} = \xi_i \otimes \xi_j$.
We can similarly construct a recurrence for $T_{i,j}$, the expected meeting time with the walkers on vertices $i$ and $j$. Obviously, $T_{i,i} = 0$. We can show that $\tilde{L}\bm{T} = \bm{\Delta t}$, where $\Delta t_{i,j} = 1$ if $i \ne j$, and $\Delta t_{i,i} = -(n-1)$. Then
\begin{equation}
<\bm{\tilde{L} T},\bm{\xi_{i,j}}> = <\bm{T},\bm{\tilde{L}\xi_{i,j}}> = <\bm{T},\lambda_{i,j}\bm{\xi_{i,j}}> = \lambda_{i,j} \sum_{(k,l)=(1,1)}^{(n,n)}T_{k,l}\xi_i(k) \xi_j(l)
\end{equation}
Combining with (c) and (e) in \textbf{Conjecture 2}, for $(i,j) \ne (n,n)$ we have
\begin{equation}
\begin{split}
<\bm{\Delta t},\bm{\xi_{i,j}}> &=\sum_{k=1} ^n \left(-(n-1)\xi_i(k)\xi_j(k) + \xi_i(k)\sum_{l \ne k}\xi_j(l)\right)\\
& = \sum_{k=1} ^n \left(-(n-1)\xi_i(k)\xi_j(k) - \xi_i(k)\xi_j(k)\right)\\
& = -n <\xi_i,\xi_j> = -n^2 \delta_{ij}
\end{split}
\end{equation}
Thus we have
\begin{equation}
\begin{split}
\sum_{(k,l)=(1,1)}^{(n,n)}T_{k,l}\xi_i(k) \xi_j(l) = \frac{1}{\lambda_{i,j}} <\bm{\Delta t},\bm{\xi_{i,j}}>= -n^2 \delta_{ij}
\end{split}
\end{equation}
Summing over $(i,j) \ne (n,n)$ and applying (d), we finally get the expression
\begin{equation}
\begin{split}
\textbf{E}[{\tau}] & = \frac{1}{n^2} \sum_{(i,j)=(1,1)}^{(n,n)}T_{i,j}
= \sum_{(i,j)\ne(n,n)} \delta_{ij} \frac{1}{\lambda_{i,j} }
= \sum_{i=1}^{n-1} \frac{1}{\lambda_{i,i} }
\end{split}
\end{equation}
Notice that the $\lambda_{i,i}$ are exactly the eigenvalues of $L = I - PP^T$ in our original definition of $L$. Thus we have proved that if \textbf{Conjecture 2} holds, then \textbf{Conjecture 1} is true.
\begin{remark}
If we let $\xi_n$ be the eigenvector with eigenvalue $\mu=d$, then (a) holds.
\end{remark}
\begin{remark}
Since $\sum_{j=1}^n \xi_i(j) = (1,1,\cdots,1)^T \xi_i$, multiplying $(1,1,\cdots,1)^T$ on the left of $L \xi_i = \lambda_i \xi_i$ gives
\begin{equation}
\sum_{j=1}^n \xi_i(j) = \frac{1}{\lambda_i}\left((1,1,\cdots,1)^T L \right) \xi_i = 0 \quad \text{for } \lambda_i \ne 0,
\end{equation}
since each column (and row) sum of $L$ is equal to 0. Thus we have (c).
\end{remark}
| {
"timestamp": "2014-08-12T02:02:57",
"yymm": "1408",
"arxiv_id": "1408.2005",
"language": "en",
"url": "https://arxiv.org/abs/1408.2005",
"abstract": "We provide an analysis of the expected meeting time of two independent random walks on a regular graph. For 1-D circle and 2-D torus graphs, we show that the expected meeting time can be expressed as the sum of the inverse of non-zero eigenvalues of a suitably defined Laplacian matrix. We also conjecture based on empirical evidence that this result holds more generally for simple random walks on arbitrary regular graphs. Further, we show that the expected meeting time for the 1-D circle of size $N$ is $\\Theta(N^2)$, and for a 2-D $N \\times N$ torus it is $\\Theta(N^2 log N)$.",
"subjects": "Probability (math.PR); Discrete Mathematics (cs.DM); Combinatorics (math.CO)",
"title": "On the Meeting Time for Two Random Walks on a Regular Graph"
} |
https://arxiv.org/abs/2107.02579 | Numerical Matrix Decomposition | In 1954, Alston S. Householder published \textit{Principles of Numerical Analysis}, one of the first modern treatments on matrix decomposition that favored a (block) LU decomposition-the factorization of a matrix into the product of lower and upper triangular matrices. And now, matrix decomposition has become a core technology in machine learning, largely due to the development of the back propagation algorithm in fitting a neural network. The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in numerical linear algebra and matrix analysis in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we clearly realize our inability to cover all the useful and interesting results concerning matrix decomposition and given the paucity of scope to present this discussion, e.g., the separated analysis of the Euclidean space, Hermitian space, Hilbert space, and things in the complex domain. We refer the reader to literature in the field of linear algebra for a more detailed introduction to the related fields.Keywords: Existence and computing of matrix decompositions, Floating point operations (flops), Low-rank approximation, Pivot, LU/PLU decomposition, CR/CUR/Skeleton decomposition, Coordinate transformation, ULV/URV decomposition, Rank decomposition, Rank revealing decomposition, Update/downdate, Tensor decomposition. | \part{Gaussian Elimination}
\section*{Introduction}
In linear algebra, \textit{Gaussian elimination} is often referred to as \textit{row reduction}; it is an algorithm for solving systems of linear equations that converts the original linear system into \textit{row echelon} form (or, in some cases, an upper triangular one) whilst reducing the computational complexity.
In the row reduction process, Gaussian elimination performs elementary row operations that can be divided into two phases. The first phase, known as \textit{forward elimination}, reduces the linear system to an upper triangular one, or to row echelon form in general, while preserving properties of the system such as its rank, from which we can tell whether solutions exist, whether the solution is unique, and so on. In the second phase, the process performs a \textit{back substitution} such that the linear system is converted into \textit{reduced row echelon form} and the solution is found.
In this part, we will introduce the related LU and Cholesky decompositions. Another highly relevant decomposition, the CR decomposition, will be delayed to Part~\ref{part:data-interation}, as it skeletons the matrix and compresses it into a thin one whilst preserving sparsity and nonnegativity.
\chapter{LU Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{LU Decomposition}
Perhaps the best known, and the first, matrix decomposition we should learn about is the LU decomposition. We now state the result in the following theorem; the proof of its existence will be delayed to the next sections.
\begin{theoremHigh}[LU Decomposition with Permutation]\label{theorem:lu-factorization-with-permutation}
Every nonsingular $n\times n$ square matrix $\bA$ can be factored as
\begin{equation}
\bA = \bP\bL\bU, \nonumber
\end{equation}
where $\bP$ is a permutation matrix, $\bL$ is a unit lower triangular matrix (i.e., lower triangular matrix with all 1's on the diagonal), and $\bU$ is a nonsingular upper triangular matrix.
\end{theoremHigh}
The LU decomposition is the matrix formulation of Gauss's elimination algorithm, which he sketched in 1809 \citep{gauss1877theoria} and presented in full in 1810 \citep{gauss1828disquisitio}. See also the discussion in \citep{stewart2000decompositional}. Note that, in the remainder of this text, we will put decomposition-related results in blue boxes, and other claims in gray boxes. This rule applies for the rest of the survey without special mention.
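For readers who wish to experiment, the theorem can be illustrated numerically. The sketch below (Python, assuming NumPy and SciPy are available; the matrix is an arbitrary random example, which is nonsingular almost surely) factors $\bA=\bP\bL\bU$ with a standard routine and verifies the stated structure of each factor.

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))          # a generic random matrix is nonsingular a.s.

P, L, U = lu(A)                          # SciPy's convention: A = P @ L @ U

assert np.allclose(A, P @ L @ U)
assert np.allclose(L, np.tril(L)) and np.allclose(np.diag(L), 1)  # unit lower triangular
assert np.allclose(U, np.triu(U))                                 # upper triangular
assert np.allclose(P @ P.T, np.eye(5))                            # P is a permutation
```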
\begin{remark}[Decomposition Notation]
The above decomposition applies to any nonsingular matrix $\bA$. We will see that this decomposition arises from the elimination steps, in which row operations of subtraction and exchange of two rows are allowed: the subtractions are recorded in the matrix $\bL$ and the row exchanges in the matrix $\bP$. To make the row exchanges explicit, the common form of the above decomposition is $\bQ\bA=\bL\bU$, where $\bQ=\bP^\top$ records the exact row exchanges of the rows of $\bA$; otherwise, $\bP$ records the row exchanges of $\bL\bU$. In our case, we state the decomposition for the matrix $\bA$ itself rather than for $\bQ\bA$. For this reason, we will put the permutation matrix on the right-hand side of the equation for the remainder of the text without special mention.
\end{remark}
Specifically, in some cases we will not need the permutation matrix. This decomposition relies on the leading principal minors, whose definitions we now provide.
\begin{definition}[Principal Minors\index{Principal minors}]\label{definition:principle-minors}
Let $\bA$ be an $n\times n$ square matrix. A $k \times k$ submatrix of $\bA$ obtained by deleting any $n-k$ columns and the same $n-k$ rows from $\bA$ is called a $k$-th order \textbf{principal submatrix} of $\bA$. The determinant of a $k \times k$ principal submatrix is called a $k$-th order \textbf{principal minor} of $\bA$.
\end{definition}
Under mild conditions on the selected indices for the submatrix, we may obtain a specific kind of principal minors.
\begin{definition}[Leading Principal Minors\index{Leading principal minors}]\label{definition:leading-principle-minors}
Let $\bA$ be an $n\times n$ square matrix. A $k \times k$ submatrix of $\bA$ obtained by deleting the \textbf{last} $n-k$ columns and the \textbf{last} $n-k$ rows from $\bA$ is called a $k$-th order \textbf{leading principal submatrix} of $\bA$, that is, the $k\times k$ submatrix taken from the top left corner of $\bA$. The determinant of the $k \times k$ leading principal submatrix is called a $k$-th order \textbf{leading principal minor} of $\bA$.
\end{definition}
Given an $n\times n$ matrix $\bA$ with $(i,j)$-th entry being $a_{ij}$, let $\bA_{1:k,1:k}$ denote the $k\times k$ submatrix taken from the top left corner of $\bA$. That is,
$$
\bA_{1:k,1:k} =
\begin{bmatrix}
a_{11} & a_{12} & \ldots & a_{1k}\\
a_{21} & a_{22} & \ldots & a_{2k}\\
\vdots & \vdots & \ddots & \vdots \\
a_{k1} & a_{k2} & \ldots & a_{kk}\\
\end{bmatrix}.
$$
Then $\Delta_k=\det(\bA_{1:k,1:k} )$ is the $k$-th order leading principal minor of $\bA$.
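As a small illustration (Python with NumPy; the matrix is an arbitrary example), the leading principal minors are obtained by taking determinants of the top-left submatrices. For this particular matrix they are $2, 2, 4$, all nonzero, so Theorem~\ref{theorem:lu-factorization-without-permutation} guarantees an LU decomposition without permutation.

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])

# k-th order leading principal minor: determinant of the top-left k x k submatrix
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]
print(minors)                            # all nonzero for this example
```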
Under specific conditions on the leading principal minors of matrix $\bA$, the LU decomposition will not involve the permutation matrix.
\begin{theoremHigh}[LU Decomposition without Permutation]\label{theorem:lu-factorization-without-permutation}
For any $n\times n$ square matrix $\bA$, if all the leading principal minors are nonzero, i.e., $\det(\bA_{1:k,1:k})\neq 0$, for all $k\in \{1,2,\ldots, n\}$, then $\bA$ can be factored as
\begin{equation}
\bA = \bL\bU, \nonumber
\end{equation}
where $\bL$ is a unit lower triangular matrix (i.e., lower triangular matrix with all 1's on the diagonal), and $\bU$ is a \textbf{nonsingular} upper triangular matrix.
Moreover, this decomposition is \textbf{unique}; see Corollary~\ref{corollary:unique-lu-without-permutation}.
\end{theoremHigh}
\begin{remark}[Other Forms of the LU Decomposition without Permutation]
In other words, the leading principal minors being nonzero means that the leading principal submatrices are nonsingular.
\paragraph{Singular $\bA$} In the above theorem, $\bA$ is automatically nonsingular. The LU decomposition can also exist for a singular matrix $\bA$; however, the matrix $\bU$ will then be singular as well. As will be shown in the following section, if the matrix $\bA$ is singular, some pivots will be zero, and the corresponding diagonal values of $\bU$ will be zero.
\paragraph{Singular leading principal submatrices} If $\bA$ is nonsingular but some of its leading principal minors are zero, then $\bA$ admits no LU decomposition without permutation: in any factorization $\bA=\bL\bU$, the $k$-th leading principal minor equals the product of the first $k$ diagonal entries of $\bU$, which are all nonzero when $\bA$ is nonsingular. For a singular $\bA$ with some zero leading principal minors, an LU decomposition may still exist, but if so, it is not unique.
\end{remark}
We will discuss where this decomposition comes from in the next section. There are also generalizations of LU decomposition to non-square or singular matrices, such as rank-revealing LU decomposition. Please refer to \citep{pan2000existence, miranian2003strong, dopico2006multiple} or we will have a short discussion in Section~\ref{section:rank-reveal-lu-short}.
\section{Relation to Gaussian Elimination}\label{section:gaussian-elimination}
Solving linear system equation $\bA\bx=\bb$ is the basic problem in linear algebra.
Gaussian elimination transforms a linear system into an upper triangular one by applying simple \textit{elementary row transformations} on the left of the linear system in $n-1$ stages if $\bA\in \real^{n\times n}$. As a result, it is much easier to solve by a backward substitution. The elementary transformation is defined rigorously as follows.
\begin{definition}[Elementary Transformation\index{Elementary transformation}]
For square matrix $\bA$, the following three transformations are referred as \textbf{elementary row/column transformations}:
\item 1. Interchanging two rows (or columns) of $\bA$;
\item 2. Multiplying all elements of a row (or a column) of $\bA$ by some nonzero number;
\item 3. Adding any row (or column) of $\bA$ multiplied by a nonzero number to any other row (or column);
\end{definition}
Specifically, in Gaussian elimination the elementary row transformations are realized by multiplying $\bA$ on the left by unit lower triangular matrices, and the elementary column transformations by multiplying $\bA$ on the right by unit upper triangular matrices.
Gaussian elimination is described by the third type of elementary row transformation above. Suppose the upper triangular matrix obtained by Gaussian elimination is given by $\bU = \bE_{n-1}\bE_{n-2}\ldots\bE_1\bA$, and in the $k$-th stage, the $k$-th column of $\bE_{k-1}\bE_{k-2}\ldots\bE_1\bA$ is $\bx\in \real^n$. Gaussian elimination introduces zeros below the diagonal in this column via
$$
\bE_k = \bI - \bz_k \be_k^\top,
$$
where $\be_k \in \real^n$ is the $k$-th unit basis vector, and $\bz_k\in \real^n$ is given by
$$
\bz_k = [0, \ldots, 0, z_{k+1}, \ldots, z_n]^\top, \qquad z_i= \frac{x_{i}}{x_{k}}, \gap\forall i \in \{k+1,\ldots, n\}.
$$
We realize that $\bE_k$ is a unit lower triangular matrix (with $1$'s on the diagonal) with only the $k$-th column of the lower submatrix being nonzero,
$$
\bE_k=
\begin{bmatrix}
1 & \ldots & 0& 0 & \ldots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \ldots & 1 & 0 & \ldots & 0 \\
0 & \ldots & -z_{k+1} & 1 & \ldots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \ldots & -z_n & 0 & \ldots & 1
\end{bmatrix},
$$
and multiplying on the left by $\bE_k$ will introduce zeros below the diagonal:
$$
\bE_k \bx =
\begin{bmatrix}
1 & \ldots & 0& 0 & \ldots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \ldots & 1 & 0 & \ldots & 0 \\
0 & \ldots & -z_{k+1} & 1 & \ldots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \ldots & -z_n & 0 & \ldots & 1
\end{bmatrix}
\begin{bmatrix}
x_1 \\
\vdots \\
x_k\\
x_{k+1} \\
\vdots \\
x_n
\end{bmatrix}=
\begin{bmatrix}
x_1 \\
\vdots \\
x_k\\
0 \\
\vdots \\
0
\end{bmatrix}.
$$
For example, we write out the Gaussian elimination steps for a $4\times 4$ matrix. For simplicity, we assume there are no row permutations. And in the following matrix, $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
\begin{tcolorbox}[title={A Trivial Gaussian Elimination For a $4\times 4$ Matrix}]
\begin{equation}\label{equation:elmination-steps}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & \bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bE_3}{\longrightarrow}
\begin{sbmatrix}{\bE_3\bE_2\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & 0 & \textcolor{blue}{\boxtimes} & \boxtimes \\
0 & 0 & \bm{0} & \textcolor{blue}{\bm{\boxtimes}}
\end{sbmatrix},
\end{equation}
\end{tcolorbox}
\noindent where $\bE_1, \bE_2, \bE_3$ are lower triangular matrices. Specifically, as discussed above, the Gaussian transformation matrices $\bE_i$ are unit lower triangular matrices with $1$'s on the diagonal. This can be explained as follows: the $k$-th transformation $\bE_k$, acting on the matrix $\bE_{k-1}\ldots\bE_1\bA$, subtracts multiples of the $k$-th row from rows $\{k+1, k+2, \ldots, n\}$ to obtain zeros below the diagonal in the $k$-th column of the matrix, and it never uses rows $\{1, 2, \ldots, k-1\}$.
For the transformation example above, at step $1$ we multiply on the left by $\bE_1$ so that multiples of the $1$-st row are subtracted from rows $2, 3, 4$, and the first entries of rows $2, 3, 4$ are set to zero. The situation is similar for steps 2 and 3. By setting $\bL=\bE_1^{-1}\bE_2^{-1}\bE_3^{-1}$ and letting the matrix after elimination be $\bU$,\footnote{The inverses of unit lower triangular matrices are also unit lower triangular matrices, and products of unit lower triangular matrices are unit lower triangular matrices.} we get $\bA=\bL\bU$. Thus we obtain an LU decomposition of this $4\times 4$ matrix $\bA$.
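The elimination procedure just described can be sketched directly. The code below (Python with NumPy, 0-based indexing, assuming no pivoting is needed; the matrix is an arbitrary example with nonzero leading principal minors) forms the Gauss transforms $\bE_k = \bI - \bz_k\be_k^\top$, accumulates $\bL$ from their inverses, and verifies $\bA=\bL\bU$.

```python
import numpy as np

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
n = len(A)

U = A.copy()
L = np.eye(n)
for k in range(n - 1):
    # Gauss transform E_k = I - z e_k^T zeroes the entries below the k-th pivot
    z = np.zeros(n)
    z[k+1:] = U[k+1:, k] / U[k, k]
    E = np.eye(n) - np.outer(z, np.eye(n)[k])
    U = E @ U
    L[k+1:, k] = z[k+1:]                 # E_k^{-1} = I + z e_k^T, so L accumulates the z's

assert np.allclose(A, L @ U)             # A = L U
assert np.allclose(U, np.triu(U))        # U is upper triangular
```

Note that the update of $\bL$ uses the fact, stated in the footnote above, that the inverses and products of the unit lower triangular $\bE_k$ remain unit lower triangular.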
\begin{definition}[Pivot\index{Pivot}]\label{definition:pivot}
The first nonzero entry in a row after each elimination step is called a \textbf{pivot}. For example, the \textcolor{blue}{blue} crosses in Equation~\eqref{equation:elmination-steps} are pivots.
\end{definition}
However, it can happen that $\bA_{11}$ is zero, in which case no $\bE_1$ can make the first elimination step succeed. We then need to interchange the first row and the second row via a permutation matrix $\bP_1$. This is known as \textbf{pivoting}, or \textbf{permutation}:
\begin{tcolorbox}[title={Gaussian Elimination With a Permutation In the Beginning}]
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
0 & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\bP_1}{\longrightarrow}
\begin{sbmatrix}{\bP_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bP_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
&\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bE_1\bP_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & \bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bE_3}{\longrightarrow}
\begin{sbmatrix}{\bE_3\bE_2\bE_1\bP_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & 0 & \textcolor{blue}{\boxtimes} & \boxtimes \\
0 & 0 & \bm{0} & \textcolor{blue}{\bm{\boxtimes}}
\end{sbmatrix}.
\end{aligned}
$$
\end{tcolorbox}
\noindent By setting $\bL=\bE_1^{-1}\bE_2^{-1}\bE_3^{-1}$ and $\bP=\bP_1^{-1}$, we get $\bA=\bP\bL\bU$. Therefore we obtain a full LU decomposition with permutation for this $4\times 4$ matrix $\bA$.
In some situations, other permutation matrices $\bP_2, \bP_3, \ldots$ will appear in between the lower triangular $\bE_i$'s. An example is shown as follows:
\begin{tcolorbox}[title={Gaussian Elimination With a Permutation In Between}]
$$
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bP_1}{\longrightarrow}
\begin{sbmatrix}{\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & 0 & \textcolor{blue}{\boxtimes} & \boxtimes \\
0 & \bm{0} & \bm{0} & \textcolor{blue}{\bm{\boxtimes}}
\end{sbmatrix}.
$$
\end{tcolorbox}
\noindent In this case, we find $\bU=\bE_2\bP_1\bE_1\bA$. In Section~\ref{section:lu-perm}, Section~\ref{sec:compute-lu-pivoting}, and Section~\ref{section:partial-pivot-lu}, we will show that permutations in between still yield the same form $\bA=\bP\bL\bU$, where $\bP$ accounts for all the permutations.
The above examples extend readily to any $n\times n$ matrix, provided no row permutations are needed in the process; in that case there are $n-1$ such lower triangular transformations.
The $k$-th transformation $\bE_k$ introduces zeros below the diagonal in the $k$-th column of $\bA$ by subtracting multiples of the $k$-th row from rows $\{k+1, k+2, \ldots, n\}$.
Finally, by setting $\bL=\bE_1^{-1}\bE_2^{-1}\ldots \bE_{n-1}^{-1}$ we obtain the LU decomposition $\bA=\bL\bU$ (without permutation).
\section{Existence of the LU Decomposition without Permutation}\label{section:exist-lu-without-perm}
Gaussian elimination, via the Gaussian transformations above, reveals the origin of the LU decomposition. We now prove Theorem~\ref{theorem:lu-factorization-without-permutation}, the existence of the LU decomposition without permutation, rigorously by induction.
\begin{proof}[\textbf{of Theorem~\ref{theorem:lu-factorization-without-permutation}: LU Decomposition without Permutation}]
We prove by induction that every $n\times n$ square matrix $\bA$ with nonzero leading principal minors has a decomposition $\bA=\bL\bU$. The $1\times 1$ case is trivial: setting $L=1$ and $U=A$ gives $A=LU$.
Suppose that every $k\times k$ matrix with all leading principal minors nonzero has an LU decomposition without permutation. If we can show that every $(k+1)\times(k+1)$ matrix $\bA_{k+1}$ with the same property can also be factored in this way, the proof is complete.
For any $(k+1)\times(k+1)$ matrix $\bA_{k+1}$, suppose the $k$-th order leading principal submatrix of $\bA_{k+1}$ is $\bA_k$ with size $k\times k$. Then $\bA_k$ can be factored as $\bA_k = \bL_k\bU_k$ with $\bL_k$ being a unit lower triangular matrix and $\bU_k$ being a nonsingular upper triangular matrix from the assumption. Write out $\bA_{k+1}$ as
$
\bA_{k+1} = \begin{bmatrix}
\bA_k & \bb \\
\bc^\top & d
\end{bmatrix}.
$
Then it admits the factorization:
$$
\bA_{k+1} = \begin{bmatrix}
\bA_k & \bb \\
\bc^\top & d
\end{bmatrix}
=
\begin{bmatrix}
\bL_k &\bzero \\
\bx^\top & 1
\end{bmatrix}
\begin{bmatrix}
\bU_k & \by\\
\bzero & z
\end{bmatrix} = \bL_{k+1}\bU_{k+1},
$$
where $\bb = \bL_k\by$, $\bc^\top = \bx^\top\bU_k$, $d = \bx^\top\by + z$, $\bL_{k+1}=\begin{bmatrix}
\bL_k &\bzero \\
\bx^\top & 1
\end{bmatrix}$, and $\bU_{k+1}=\begin{bmatrix}
\bU_k & \by\\
\bzero & z
\end{bmatrix}$.
From the assumption, $\bL_k$ and $\bU_k$ are nonsingular. Therefore
$$
\by = \bL_k^{-1}\bb, \qquad \bx^\top=\bc^\top\bU_k^{-1}, \qquad z=d - \bx^\top\by.
$$
If, further, we can show that $z$ is nonzero, so that $\bU_{k+1}$ is nonsingular, the proof is complete.
Since all the leading principal minors of $\bA_{k+1}$ are nonzero,
we have $\det(\bA_{k+1})=$
\footnote{By the fact that if matrix $\bM$ has a block formulation: $\bM=\begin{bmatrix}
\bA & \bB \\
\bC & \bD
\end{bmatrix}$, then $\det(\bM) = \det(\bA)\det(\bD-\bC\bA^{-1}\bB)$.}
$\det(\bA_k)\cdot$ $\det(d-\bc^\top\bA_k^{-1}\bb)
=\det(\bA_k)\cdot(d-\bc^\top\bA_k^{-1}\bb) \neq 0$, where $d-\bc^\top\bA_k^{-1}\bb$ is a scalar.
As $\det(\bA_k)\neq 0$ by assumption, we obtain $d-\bc^\top\bA_k^{-1}\bb \neq 0$. Substituting $\bb = \bL_k\by$ and $\bc^\top = \bx^\top\bU_k$ into this expression, we have $d-\bx^\top\bU_k\bA_k^{-1}\bL_k\by =d-\bx^\top\bU_k(\bL_k\bU_k)^{-1}\bL_k\by =d-\bx^\top\by \neq 0$, which is exactly $z\neq 0$. Thus we obtain $\bL_{k+1}$ with all diagonal values equal to 1 and $\bU_{k+1}$ with all diagonal values nonzero, so that $\bL_{k+1}$ and $\bU_{k+1}$ are nonsingular, \footnote{A triangular matrix (upper or lower) is nonsingular if and only if all the entries on its main diagonal are nonzero.}
from which the result follows.
\end{proof}
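The bordering construction in this proof is directly computable: $\by$ comes from one forward substitution, $\bx$ from one transposed back substitution, and $z$ from an inner product. Below is a minimal Python sketch under the proof's notation; the $2\times 2$ factors and the bordering data $\bb, \bc, d$ are our own illustrative numbers.

```python
def forward_sub(L, b):
    """Solve L y = b for unit lower triangular L (no divisions needed)."""
    y = []
    for i in range(len(b)):
        y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))
    return y

def transposed_back_sub(U, c):
    """Solve x^T U = c^T, i.e. U^T x = c, for nonsingular upper triangular U."""
    x = []
    for i in range(len(c)):
        x.append((c[i] - sum(U[j][i] * x[j] for j in range(i))) / U[i][i])
    return x

# Extend A_k = L_k U_k to a factorization of the bordered (k+1) x (k+1) matrix.
Lk = [[1.0, 0.0], [2.0, 1.0]]   # unit lower triangular
Uk = [[2.0, 1.0], [0.0, 1.0]]   # nonsingular upper triangular
b = [1.0, 2.0]                  # new last column (top part)
c = [6.0, 4.0]                  # new last row (left part)
d = 10.0                        # new corner entry

y = forward_sub(Lk, b)                           # y = L_k^{-1} b
x = transposed_back_sub(Uk, c)                   # x^T = c^T U_k^{-1}
z = d - sum(xi * yi for xi, yi in zip(x, y))     # z = d - x^T y
```

The bordered matrix then factors with last row $[\bx^\top\ \ 1]$ in $\bL_{k+1}$ and last column $[\by;\ z]$ in $\bU_{k+1}$, exactly as in the proof.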
We further prove that, if no permutation is involved, the LU decomposition is unique.
\begin{corollary}[Uniqueness of the LU Decomposition without Permutation]\label{corollary:unique-lu-without-permutation}
Suppose the $n\times n$ square matrix $\bA$ has nonzero leading principal minors. Then the LU decomposition $\bA=\bL\bU$ is unique.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:unique-lu-without-permutation}]
Suppose the LU decomposition is not unique; then we can find two decompositions $\bA=\bL_1\bU_1 = \bL_2\bU_2$, which implies $\bL_2^{-1}\bL_1=\bU_2\bU_1^{-1}$. The left-hand side is a unit lower triangular matrix and the right-hand side is an upper triangular matrix.
Hence both sides must be diagonal. Since the inverse of a unit lower triangular matrix is again unit lower triangular, and the product of unit lower triangular matrices is again unit lower triangular, it follows that $\bL_2^{-1}\bL_1 = \bI$.
Therefore both sides equal the identity, so that $\bL_1=\bL_2$ and $\bU_1=\bU_2$, a contradiction.
\end{proof}
In the proof of Theorem~\ref{theorem:lu-factorization-without-permutation}, we have shown that the diagonal values of the upper triangular matrix are all nonzero if the leading principal minors of $\bA$ are all nonzero. We then can formulate this decomposition in another form if we divide each row of $\bU$ by each diagonal value of $\bU$. This is called the \textit{LDU decomposition}.
\begin{corollaryHigh}[LDU Decomposition]\label{corollary:ldu-decom}
For any $n\times n$ square matrix $\bA$, if all the leading principal minors are nonzero, i.e., $\det(\bA_{1:k,1:k})\neq 0$, for all $k\in \{1,2,\ldots, n\}$, then $\bA$ can be \textbf{uniquely} factored as
\begin{equation}
\bA = \bL\bD\bU, \nonumber
\end{equation}
where $\bL$ is a unit lower triangular matrix, $\bU$ is a \textbf{unit} upper triangular matrix, and $\bD$ is a diagonal matrix.
\end{corollaryHigh}
The proof is straightforward: from the LU decomposition $\bA=\bL\bR$, we can find a diagonal matrix $\bD=\mathrm{diag}(\bR_{11}, \bR_{22}, \ldots, \bR_{nn})$ such that $\bU = \bD^{-1}\bR$ is a unit upper triangular matrix, giving $\bA=\bL\bD\bU$. The uniqueness follows from the uniqueness of the LU decomposition.
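Extracting $\bD$ from the LU factors is a one-line operation per row; the following small Python sketch (helper name ours) makes the corollary concrete:

```python
def ldu_from_lu(L, U):
    """Split A = L U into A = L D U' with D = diag(U) and U' unit upper
    triangular, by dividing each row of U by its diagonal entry."""
    n = len(U)
    D = [[U[i][i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    U_unit = [[U[i][j] / U[i][i] for j in range(n)] for i in range(n)]
    return L, D, U_unit

# Example: U = [[2, 4], [0, 3]] splits into D = diag(2, 3) and a unit
# upper triangular factor [[1, 2], [0, 1]].
L, D, U_unit = ldu_from_lu([[1.0, 0.0], [2.0, 1.0]],
                           [[2.0, 4.0], [0.0, 3.0]])
```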
\section{Existence of the LU Decomposition with Permutation}\label{section:lu-perm}
In Theorem~\ref{theorem:lu-factorization-without-permutation}, we required that matrix $\bA$ have nonzero leading principal minors. However, this condition is not necessary: even when some leading principal minors are zero, a nonsingular matrix still has an LU decomposition, at the cost of an additional permutation. The proof is again by induction.
\begin{proof}[\textbf{of Theorem~\ref{theorem:lu-factorization-with-permutation}: LU Decomposition with Permutation}]
We note that any $1\times 1$ nonsingular matrix has a full LU decomposition $A=PLU$ by simply setting $P=1$, $L=1$, $U=A$.
We will show that if every $(n-1)\times (n-1)$ nonsingular matrix has a full LU decomposition, then this is also true for every $n\times n$ nonsingular matrix. By induction, we prove that every nonsingular matrix has a full LU decomposition.
We formulate the proof in the following order: if $\bA$ is nonsingular, then a suitably row-permuted matrix $\bB$ is also nonsingular; the Schur complement of $\bB_{11}$ in $\bB$ is also nonsingular; finally, we assemble the decomposition of $\bA$ from that of $\bB$.
We notice that at least one element in the first column of $\bA$ must be nonzero; otherwise $\bA$ would be singular. We can then apply a row permutation that makes the $(1,1)$ entry nonzero. That is, there exists a permutation $\bP_1$ such that $\bB = \bP_1 \bA$ satisfies $\bB_{11} \neq 0$. Since $\bA$ and $\bP_1$ are both nonsingular and the product of nonsingular matrices is nonsingular, $\bB$ is also nonsingular.
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10,frametitle={Schur complement of $\bB$ is also nonsingular:}]
Now consider the Schur complement of $\bB_{11}$ in $\bB$ with size $(n-1)\times (n-1)$
$$
\bar{\bB} = \bB_{2:n,2:n} -\frac{1}{\bB_{11}} \bB_{2:n,1} \bB_{1,2:n}.
$$
Suppose an $(n-1)$-vector $\bx$ satisfies
\begin{equation}\label{equ:lu-pivot1}
\bar{\bB} \bx = 0.
\end{equation}
Then the scalar $y=-\frac{1}{\bB_{11}}\bB_{1,2:n} \bx$ and $\bx$ satisfy
$$
\bB
\left[
\begin{matrix}
y \\
\bx
\end{matrix}
\right]
=
\left[
\begin{matrix}
\bB_{11} & \bB_{1,2:n} \\
\bB_{2:n,1} & \bB_{2:n,2:n}
\end{matrix}
\right]
\left[
\begin{matrix}
y \\
\bx
\end{matrix}
\right]
=
\left[
\begin{matrix}
0 \\
\bzero
\end{matrix}
\right].
$$
Since $\bB$ is nonsingular, $y$ and $\bx$ must be zero. Hence, Equation~\eqref{equ:lu-pivot1} holds only if $\bx=\bzero$, which means the null space of $\bar{\bB}$ has dimension 0, and thus $\bar{\bB}$ is nonsingular with size $(n-1)\times(n-1)$.
\end{mdframed}
By the induction hypothesis, any $(n-1)\times(n-1)$ nonsingular matrix can be factored in the full LU decomposition form
$$
\bar{\bB} = \bP_2\bL_2\bU_2.
$$
We then factor $\bA$ as
\begin{equation*}
\begin{aligned}
\bA &= \bP_1^\top
\left[
\begin{matrix}
\bB_{11} & \bB_{1,2:n} \\
\bB_{2:n,1} & \bB_{2:n,2:n}
\end{matrix}
\right]\\
&= \bP_1^\top
\left[
\begin{matrix}
1 & 0 \\
0 & \bP_2
\end{matrix}
\right]
\left[
\begin{matrix}
\bB_{11} & \bB_{1,2:n} \\
\bP_2^\top \bB_{2:n,1} &\bP_2^\top \bB_{2:n,2:n}
\end{matrix}
\right]\\
&= \bP_1^\top
\left[
\begin{matrix}
1 & 0 \\
0 & \bP_2
\end{matrix}
\right]
\left[
\begin{matrix}
\bB_{11} & \bB_{1,2:n} \\
\bP_2^\top \bB_{2:n,1} & \textcolor{blue}{\bL_2\bU_2}+\bP_2^\top \textcolor{blue}{\frac{1}{\bB_{11}} \bB_{2:n,1} \bB_{1,2:n}}
\end{matrix}
\right]\\
&= \bP_1^\top
\left[
\begin{matrix}
1 & 0 \\
0 & \bP_2
\end{matrix}
\right]
\left[
\begin{matrix}
1 & 0 \\
\frac{1}{\bB_{11}}\bP_2^\top \bB_{2:n,1} & \bL_2
\end{matrix}
\right]
\left[
\begin{matrix}
\bB_{11} & \bB_{1,2:n} \\
\bzero & \bU_2
\end{matrix}
\right].\\
\end{aligned}
\end{equation*}
Therefore, we find the full LU decomposition of $\bA=\bP\bL\bU$ by defining
$$
\bP = \bP_1^\top
\left[
\begin{matrix}
1 & 0 \\
0 & \bP_2
\end{matrix}
\right], \qquad
\bL=\left[
\begin{matrix}
1 & 0 \\
\frac{1}{\bB_{11}}\bP_2^\top \bB_{2:n,1} & \bL_2
\end{matrix}
\right], \qquad
\bU=
\left[
\begin{matrix}
\bB_{11} & \bB_{1,2:n} \\
\bzero & \bU_2
\end{matrix}
\right],
$$
from which the result follows. We formulate this process as Algorithm~\ref{alg:lu-with-pivoting} to compute the decomposition.
\end{proof}
\section{Computing the LU without Pivoting Recursively: A=LU}\label{section:compute-lu-without-pivot}
As a start, we stay with the most frequent and simplest case, which involves no row exchanges:
\begin{equation}
\bA = \bL\bU, \nonumber
\end{equation}
where $\bL$ is a unit lower triangular matrix and $\bU$ is a nonsingular upper triangular matrix. We refer to this decomposition as the LU decomposition without pivoting (or without permutation).
In Section~\ref{section:gaussian-elimination}, we mentioned the connection of the LU decomposition to the Gaussian elimination. We could find the LU decomposition of a matrix by first applying Gaussian elimination to $\bA$ to get $\bU$, and then examine the multipliers in the Gaussian elimination process to determine the entries below the main diagonal of $\bL$. We will now look at another method for finding the LU decomposition without going through the process of Gaussian elimination.
Again, we define $\bA_{i:j,m:n}$ to be the $(j-i+1)\times(n-m+1)$ submatrix of $\bA$ consisting of rows $i, i+1, \ldots, j$ and columns $m, m+1, \ldots, n$, and $\bA_{ij}$ to be the $(i,j)$-th entry of $\bA$.
Assume $\bA$ has an LU decomposition $\bA = \bL\bU$. We will see that carrying out the computation requires $\bA_{11}$ (and the $(1,1)$ entries of the successive Schur complements) to be nonzero, which is equivalent to all leading principal minors of $\bA$ being nonzero.
From the property of lower triangular matrices and upper triangular matrices, we suppose $\bA$ can be factored as
$$
\bA=\left[
\begin{matrix}
\bA_{11} & \bA_{1,2:n} \\
\bA_{2:n,1} & \bA_{2:n,2:n}
\end{matrix}
\right]
=\left[
\begin{matrix}
1 & 0 \\
\bL_{2:n,1} & \bL_{2:n,2:n}
\end{matrix}
\right]
\left[
\begin{matrix}
\bU_{11} & \bU_{1,2:n} \\
0 & \bU_{2:n,2:n}
\end{matrix}
\right] = \bL\bU.
$$
Writing out the product on the right-hand side of the above equation, we obtain
$$
\bA=\left[
\begin{matrix}
\bA_{11} & \bA_{1,2:n} \\
\bA_{2:n,1} & \bA_{2:n,2:n}
\end{matrix}
\right]
=\left[
\begin{matrix}
\bU_{11} & \bU_{1,2:n} \\
\bU_{11} \bL_{2:n,1} & \bL_{2:n,1}\bU_{1,2:n} +\bL_{2:n,2:n} \bU_{2:n,2:n}
\end{matrix}
\right] ,
$$
which helps us decide the values of $\bL$ and $\bU$ by
$$
\begin{aligned}
\bU_{11} &= \bA_{11} \\
\bU_{1,2:n} &= \bA_{1,2:n}
\end{aligned}
\bigg\}
\qquad \mathrm{i.e.,} \qquad \bU_{1,1:n} = \bA_{1,1:n},
$$
$$
\bL_{2:n,1} = \frac{1}{\bA_{11}} \bA_{2:n,1},
$$
and
$$
\bL_{2:n,2:n}\bU_{2:n,2:n} = \bA_{2:n,2:n} - \bL_{2:n,1}\bU_{1,2:n} = \bA_{2:n,2:n} -\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{1,2:n}.
$$
Here $\bL_{2:n,2:n}\in \real^{(n-1)\times(n-1)}$ is again a unit lower triangular matrix and $\bU_{2:n,2:n}\in \real^{(n-1)\times(n-1)}$ is a nonsingular upper triangular matrix. Let $\bA_2 = \bA_{2:n,2:n} -\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{1,2:n}$. We can then calculate $\bL_{2:n,2:n}$ and $\bU_{2:n,2:n}$ by factoring $\bA_2$ as
$$
\bA_2 = \bL_{2:n,2:n}\bU_{2:n,2:n},
$$
which is an LU decomposition of a matrix of size $(n-1)\times(n-1)$. This suggests a recursive algorithm: to factor an $n\times n$ matrix, we calculate the first column of $\bL$ (whose first entry is implicitly 1) and the first row of $\bU$ (whose first entry is $\bA_{11}$), leaving the remaining $n-1$ columns of $\bL$ and $n-1$ rows of $\bU$ to the next round. Continuing recursively, we arrive at the decomposition of a $1\times 1$ matrix.
\paragraph{A word on the leading principal minors} In this process, we only assume that the $(1,1)$ entries of $\bA, \bA_2, \bA_3, \ldots, \bA_n$ are nonzero. This is the same as assuming that the leading principal minors of $\bA$ are all nonzero. The recursive process is formulated in Algorithm~\ref{alg:lu-without-pivoting}.
\begin{algorithm}[H]
\caption{LU Decomposition without Pivoting Recursively}
\label{alg:lu-without-pivoting}
\begin{algorithmic}[1]
\Require
Matrix $\bA$ is nonsingular and square with size $n\times n $;
\State Calculate the first row of $\bU$: $\bU_{1,1:n} = \bA_{1,1:n}$; \Comment{0 flops}
\State Calculate the first column of $\bL$: $\bL_{11}=1$ and $\bL_{2:n,1} = \frac{1}{\bA_{11}} \bA_{2:n,1}$; \Comment{$n-1$ flops}
\State Calculate the LU decomposition
$$\bA_2=\bA_{2:n,2:n} -\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{1,2:n} = \bL_{2:n,2:n}\bU_{2:n,2:n};$$ \Comment{$2(n-1)^2$ flops}
\end{algorithmic}
\end{algorithm}
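Assuming nonzero pivots throughout (i.e., nonzero leading principal minors), Algorithm~\ref{alg:lu-without-pivoting} can be sketched recursively in Python as follows. This is a minimal illustration on nested lists, not an optimized implementation:

```python
def lu_recursive(A):
    """Recursive LU without pivoting: peel off the first row of U and the
    first column of L, then recurse on the Schur complement A_2."""
    n = len(A)
    if n == 1:
        return [[1.0]], [[A[0][0]]]          # base case: 1 x 1 matrix
    a11 = A[0][0]
    # First column of L (below the implicit leading 1).
    l_col = [A[i][0] / a11 for i in range(1, n)]
    # Schur complement A_2 = A_{2:n,2:n} - (1/a11) A_{2:n,1} A_{1,2:n}.
    S = [[A[i][j] - l_col[i - 1] * A[0][j] for j in range(1, n)]
         for i in range(1, n)]
    L2, U2 = lu_recursive(S)
    # Assemble: first row of U is the first row of A; pad the recursion.
    L = [[1.0] + [0.0] * (n - 1)]
    U = [A[0][:]]
    for i in range(n - 1):
        L.append([l_col[i]] + L2[i])
        U.append([0.0] + U2[i])
    return L, U
```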
\textbf{Schur complement}: Assuming $\bA_{11}$ is nonzero, the matrix $\bA_2 = \bA_{2:n,2:n} -\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{1,2:n}$ is called the Schur complement of $\bA_{11}$ in $\bA$. More details about the Schur complement can be found in Appendix~\ref{appendix:schur-complement}.
\textbf{Operation count}: The LU decomposition algorithm without pivoting is the first algorithm we have presented in this survey. It is important to assess its cost. To do so, we follow the classical route and count the number of floating-point operations (flops) that the algorithm requires. Each addition, subtraction, multiplication, division, and square root counts as one flop. Note that we have the convention that an assignment operation does not count as one flop.
\begin{theorem}[Algorithm Complexity: LU without Pivoting Recursively]\label{theorem:lu-complexity}
Algorithm~\ref{alg:lu-without-pivoting} requires $\sim (2/3)n^3$ flops to compute the LU decomposition of an $n\times n$ matrix. Note that the theorem gives only the leading term of the flop count; the symbol ``$\sim$" has the usual asymptotic meaning
\begin{equation*}
\lim_{n \to +\infty} \frac{\mathrm{number\, of\, flops}}{(2/3)n^3} = 1.
\end{equation*}
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:lu-complexity}]
Step 1 in Algorithm~\ref{alg:lu-without-pivoting} costs 0 flops, and step 2 involves $(n-1)$ divisions.
In step 3, we can compute $\frac{1}{\bA_{11}} \bA_{2:n,1}$ first, which costs $0$ flops since it was already calculated in step 2; the outer product with $\bA_{1,2:n}$ then costs $(n-1)^2$ multiplications. So computing $\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{1,2:n}$ involves $(n-1)^2$ flops.
(If we instead computed $\bA_{2:n,1}\bA_{1,2:n}$ first, the cost of $\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{1,2:n}$ would be $2(n-1)^2$ in total, so we choose the first ordering.)
The subtraction of the two matrices requires another $(n-1)^2$ flops. As a result, the cost of step 3 is $2(n-1)^2$ flops in total.
Hence the cost of one recursion step on an $n\times n$ matrix is $2(n-1)^2 + (n-1) = 2n^2-3n+1$ flops. Let $f(n)=2n^2-3n+1$; the final cost can then be calculated by
$$
\mathrm{cost}=f(n)+f(n-1)+\ldots+f(1).
$$
Simple calculation\footnote{By the fact that $1^2+2^2+\ldots+n^2 = \frac{2n^3+3n^2+n}{6}$ and $1+2+\ldots+n=\frac{n(n+1)}{2}$.} can show that the complexity is $(2/3)n^3-(1/2)n^2-(1/6)n$ flops, or $(2/3)n^3$ flops if we keep only the leading term.
\end{proof}
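The arithmetic in this proof is easy to verify mechanically. The following small Python check (function names ours) confirms that the per-step costs $f(k)=2k^2-3k+1$ sum to the stated closed form $(2/3)n^3-(1/2)n^2-(1/6)n$:

```python
def lu_flops(n):
    """Total flops of the recursive algorithm: sum of the per-step cost
    f(k) = 2k^2 - 3k + 1 over k = 1, ..., n."""
    return sum(2 * k * k - 3 * k + 1 for k in range(1, n + 1))

def closed_form(n):
    """(2/3)n^3 - (1/2)n^2 - (1/6)n, written over the common denominator 6
    so the arithmetic stays exact in integers."""
    return (4 * n**3 - 3 * n**2 - n) // 6

# The two expressions agree for every n.
assert all(lu_flops(n) == closed_form(n) for n in range(1, 200))
```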
\subsection{Complexity of Matrix and Vector Operations}
Complexity calculations rely extensively on the cost of matrix and vector products, so we record these in the following lemmas.
\begin{lemma}[Vector Inner Product Complexity]
Given two vectors $\bv,\bw\in \real^{n}$, the inner product $\bv^\top\bw=v_1w_1+v_2w_2+\ldots+v_nw_n$ involves $n$ scalar multiplications and $n-1$ scalar additions. Therefore, the complexity of the inner product is $2n-1$ flops.
\end{lemma}
The complexity of matrix multiplication thus follows from that of the inner product.
\begin{lemma}[Matrix Multiplication Complexity]\label{lemma:matrix-multi-complexity}
For matrix $\bA\in\real^{m\times n}$ and $\bB\in \real^{n\times k}$, the complexity of the multiplication $\bC=\bA\bB$ is $mk(2n-1)$ flops.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:matrix-multi-complexity}]
We notice that each entry of $\bC$ involves a vector inner product, which requires $n$ multiplications and $n-1$ additions, and there are $mk$ such entries, which leads to the conclusion.
\end{proof}
\section{Computing the LU without Pivoting Element-Wise: A=LU}\label{section:compute-lu-without-pivot-doolittle}
We notice that computing the LU decomposition is equivalent to solving the following equations
$$
\bA_{ij} = \sum_{s=1}^{\min(i,j)} \bL_{is} \bU_{sj}, \qquad \forall i,j\in \{1,2,\ldots, n\}.
$$
Furthermore, when $i\leq j$, the above equation can be decomposed into
$$
\bA_{ij} = \sum_{s=1}^{i-1} \bL_{is} \bU_{sj} + \bU_{ij}, \qquad \text{since $\bL_{ii}=1$}.
$$
Suppose we know the first $k-1$ columns of $\bL$ and the first $k-1$ rows of $\bU$, we have the following observations:
$$
\begin{aligned}
\bA_{kj} &= \sum_{s=1}^{k-1} \bL_{ks} \bU_{sj} + \bU_{kj}, \qquad &\text{for all $j\in \{k,k+1,\ldots, n\}$,} \qquad \text{since $k\leq j$}.\\
\bA_{ik} &= \sum_{s=1}^{k-1} \bL_{is} \bU_{sk}+\bL_{ik} \bU_{kk}, \qquad &\text{for all $i\in \{k+1,k+2,\ldots,n\}$},\qquad \text{since $i\geq k$}.
\end{aligned}
$$
Therefore, the $k$-th row of $\bU$ and $k$-th column of $\bL$ can be obtained by
$$
\begin{aligned}
\bU_{kj} &= \bA_{kj} - \sum_{s=1}^{k-1} \bL_{ks} \bU_{sj} , \qquad &\text{for all $j\in \{k,k+1,\ldots, n\}$,} \qquad \text{since $k\leq j$}.\\
\bL_{ik} &=(\bA_{ik} - \sum_{s=1}^{k-1} \bL_{is} \bU_{sk})/\bU_{kk}, \qquad &\text{for all $i\in \{k+1,k+2,\ldots,n\}$},\qquad \text{since $i\geq k$}.
\end{aligned}
$$
This is known as \textit{Doolittle's method}: the values of $\bU$ and $\bL$ are computed element-wise, and the process is formulated in Algorithm~\ref{alg:compute-lu-element-level}. We note that, mathematically, Doolittle's method is equivalent to the recursive algorithm, just viewed from a different perspective.
\begin{algorithm}[H]
\caption{LU Decomposition without Pivoting Element-Wise}
\label{alg:compute-lu-element-level}
\begin{algorithmic}[1]
\Require
Matrix $\bA$ with size $n\times n$;
\State Initialize $\bL$ and $\bU$ as $n\times n$ zero matrices, and set $\bL_{kk}=1$ for $k\in\{1,2,\ldots,n\}$;
\For{$k=1$ to $n$}
\State //i.e., compute the $k$-th row of $\bU$ and the $k$-th column of $\bL$;
\For{$j=k$ to $n$}
\State $\bU_{kj} = \bA_{kj} - \sum_{s=1}^{k-1} \bL_{ks} \bU_{sj}$;
\EndFor
\For{$i=k+1$ to $n$}
\State$\bL_{ik} =(\bA_{ik} - \sum_{s=1}^{k-1} \bL_{is} \bU_{sk})/\bU_{kk}$;
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
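In Python, Doolittle's formulas translate almost verbatim. The sketch below (ours, under the no-pivoting assumption: $\bU_{kk}$ must be nonzero at each step) fills the $k$-th row of $\bU$ and the $k$-th column of $\bL$ in turn:

```python
def lu_doolittle(A):
    """Doolittle's method: compute L (unit lower triangular) and U
    (upper triangular) element-wise, assuming no pivoting is needed."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        L[k][k] = 1.0
        for j in range(k, n):                    # k-th row of U
            U[k][j] = A[k][j] - sum(L[k][s] * U[s][j] for s in range(k))
        for i in range(k + 1, n):                # k-th column of L
            L[i][k] = (A[i][k]
                       - sum(L[i][s] * U[s][k] for s in range(k))) / U[k][k]
    return L, U
```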
\begin{theorem}[Algorithm Complexity: LU without Pivoting Elementwise]\label{theorem:lu-complexity-doolittle}
Algorithm~\ref{alg:compute-lu-element-level} requires $\sim (2/3)n^3$ flops to compute the LU decomposition of an $n\times n$ matrix.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:lu-complexity-doolittle}]
Step 5 in Algorithm~\ref{alg:compute-lu-element-level} requires $(k-1)$ multiplications, $(k-2)$ additions, and 1 subtraction for each pair $(k,j)$. There are $n-k+1$ such values of $j$, giving $(2k-2)(n-k+1)=-2k^2+(2n+4)k-2(n+1)$ flops from step 5 for each $k$.
Similarly, step 8 requires $(k-1)$ multiplications, $(k-2)$ additions, $1$ subtraction, and 1 division for each pair $(k,i)$. There are $n-k$ such values of $i$, giving $(2k-1)(n-k)=-2k^2 + (2n+1)k-n$ flops from step 8 for each $k$.
Thus, for each $k$, the total complexity is $(2k-2)(n-k+1)+(2k-1)(n-k) = -4k^2 + (4n+5)k -(3n+2)$ flops. Let $f(k)=-4k^2 + (4n+5)k -(3n+2)$; the final complexity can be computed by
$$
\mathrm{cost}= f(1)+f(2) +\ldots+ f(n).
$$
A simple calculation can show that the complexity is $(2/3)n^3$ flops if we keep only the leading term.
\end{proof}
\subsection{Extension to Thin Matrices}
We notice that the complexity of Algorithm~\ref{alg:compute-lu-element-level} is the same as that of Algorithm~\ref{alg:lu-without-pivoting}; Doolittle's method is mathematically equivalent to the recursive algorithm. However, Doolittle's method can be extended to compute the LU decomposition of $\bA\in\real^{m\times n}$ with $m\geq n$, i.e., a thin matrix. The decomposition is given by $\bA=\bL\bU$, where $\bU\in \real^{n\times n}$ is upper triangular and $\bL\in\real^{m\times n}$ is unit lower \textit{trapezoidal}: $\bL_{ij}=0$ for $i<j$ and $\bL_{ii}=1$. The procedure is similar and is formulated in Algorithm~\ref{alg:compute-lu-element-level-thin}, where the differences are highlighted in blue.
\begin{algorithm}[H]
\caption{Thin LU Decomposition without Pivoting Element-Wise}
\label{alg:compute-lu-element-level-thin}
\begin{algorithmic}[1]
\Require
Matrix $\bA$ with size $\textcolor{blue}{m}\times n$;
\State Initialize $\bL$ as an $m\times n$ zero matrix and $\bU$ as an $n\times n$ zero matrix, and set $\bL_{kk}=1$ for $k\in\{1,2,\ldots,n\}$;
\For{$k=1$ to $n$}
\State //i.e., compute the $k$-th row of $\bU$ and the $k$-th column of $\bL$;
\For{$j=k$ to $n$}
\State $\bU_{kj} = \bA_{kj} - \sum_{s=1}^{k-1} \bL_{ks} \bU_{sj}$;
\EndFor
\For{$i=k+1$ to $\textcolor{blue}{m}$}
\State$\bL_{ik} =(\bA_{ik} - \sum_{s=1}^{k-1} \bL_{is} \bU_{sk})/\bU_{kk}$;
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
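A Python sketch of the thin variant follows; the only change from the square case is that the column loop for $\bL$ runs to $m$ rather than $n$ (function name ours):

```python
def lu_thin(A):
    """Doolittle's method for a thin m x n matrix (m >= n): L is m x n
    unit lower trapezoidal, U is n x n upper triangular."""
    m, n = len(A), len(A[0])
    L = [[0.0] * n for _ in range(m)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        L[k][k] = 1.0
        for j in range(k, n):                    # k-th row of U
            U[k][j] = A[k][j] - sum(L[k][s] * U[s][j] for s in range(k))
        for i in range(k + 1, m):                # rows run to m, not n
            L[i][k] = (A[i][k]
                       - sum(L[i][s] * U[s][k] for s in range(k))) / U[k][k]
    return L, U
```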
\begin{theorem}[Algorithm Complexity: LU Thin Matrix ]\label{theorem:lu-complexity-doolittle-thin}
Algorithm~\ref{alg:compute-lu-element-level-thin} requires $\sim n^2(m-n/3)$ flops to compute the LU decomposition of an $m\times n$ matrix. When $m=n$, the complexity is $(2/3)n^3$ flops, the same as that of Algorithm~\ref{alg:compute-lu-element-level}.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:lu-complexity-doolittle-thin}]
The complexity of step 5 is the same as in Algorithm~\ref{alg:compute-lu-element-level}: $(2k-2)(n-k+1)=-2k^2+(2n+4)k-2(n+1)$ flops from step 5 for each $k$.
The complexity of step 8 differs slightly in that $n$ is replaced by $m$: it requires $(2k-1)(m-k)=-2k^2 + (2m+1)k-m$ flops from step 8 for each $k$.
Thus, for each $k$, the total complexity is $(2k-2)(n-k+1)+(2k-1)(m-k) = -4k^2 + (2m+2n+5)k -(2n+m+2)$ flops. Let $f(k)=-4k^2 + (2m+2n+5)k -(2n+m+2)$; the final complexity can be calculated by
$$
\mathrm{cost}= f(1)+f(2) +\ldots+ f(n).
$$
A simple calculation can show that the complexity is $n^2(m-n/3)$ flops if we keep only the leading term.
\end{proof}
\section{Computing the LU with Pivoting: A=PLU}\label{sec:compute-lu-pivoting}
Further, we extend Algorithm~\ref{alg:lu-without-pivoting} to the full LU decomposition $\bA=\bP\bL\bU$. Note that Algorithm~\ref{alg:lu-without-pivoting} assumes $\bA_{11}$ is nonzero, which is not necessarily true; we avoid this assumption by introducing a permutation matrix. The following algorithm is formulated directly from the proof of Theorem~\ref{theorem:lu-factorization-with-permutation}.
\begin{algorithm}[H]
\caption{LU Decomposition with Pivoting}
\label{alg:lu-with-pivoting}
\begin{algorithmic}[1]
\Require
matrix $\bA$ is nonsingular and square with size $n\times n$;
\State Choose permutation matrix $\bP_1$ such that $\bB = \bP_1 \bA$ and $\bB_{11}\neq 0$; \Comment{0 flops}
\State Calculate the $\bar{\bB}$ for next round: $\bar{\bB}=\bB_{2:n,2:n} -\frac{1}{\bB_{11}} \bB_{2:n,1} \bB_{1,2:n} = \bP_2\bL_2 \bU_2$; \Comment{$2(n-1)^2+(n-1)$ flops}
\State Calculate the full LU decomposition of $\bA=\bP\bL\bU$ with
$$
\bP = \bP_1^\top
\left[
\begin{matrix}
1 & 0 \\
0 & \bP_2
\end{matrix}
\right], \qquad
\bL=\left[
\begin{matrix}
1 & 0 \\
\frac{1}{\bB_{11}}\bP_2^\top \bB_{2:n,1} & \bL_2
\end{matrix}
\right], \qquad
\bU=
\left[
\begin{matrix}
\bB_{11} & \bB_{1,2:n} \\
\bzero & \bU_2
\end{matrix}
\right]
$$
\gap \gap \Comment{$n-1$ flops}
\end{algorithmic}
\end{algorithm}
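The proof and Algorithm~\ref{alg:lu-with-pivoting} translate into the recursive Python sketch below. We represent $\bP$ by a row ordering \texttt{perm}, so that row \texttt{perm[i]} of $\bA$ equals row $i$ of $\bL\bU$, and, as one concrete choice of $\bP_1$, bring the row with the largest first entry to the top; both conventions are ours, not prescribed by the text.

```python
def lu_pivot(A):
    """Recursive full LU with row exchanges: returns (perm, L, U) with
    A[perm[i]][j] == (L U)[i][j].  Assumes A is nonsingular."""
    n = len(A)
    if n == 1:
        return [0], [[1.0]], [[A[0][0]]]
    # P_1: bring a row with a nonzero (here: largest) first entry to the top.
    p = max(range(n), key=lambda i: abs(A[i][0]))
    order = [p] + [i for i in range(n) if i != p]
    B = [A[i] for i in order]                     # B = P_1 A
    b11 = B[0][0]
    l_col = [B[i][0] / b11 for i in range(1, n)]
    # Schur complement of B_11 in B.
    S = [[B[i][j] - l_col[i - 1] * B[0][j] for j in range(1, n)]
         for i in range(1, n)]
    perm2, L2, U2 = lu_pivot(S)                   # S = P_2 L_2 U_2
    # Compose the two permutations and assemble the bordered factors.
    perm = [order[0]] + [order[1 + k] for k in perm2]
    L = [[1.0] + [0.0] * (n - 1)]
    U = [B[0][:]]
    for i in range(n - 1):
        L.append([l_col[perm2[i]]] + L2[i])       # P_2^T reorders l_col
        U.append([0.0] + U2[i])
    return perm, L, U
```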
\begin{theorem}[Algorithm Complexity: LU with Pivoting]\label{theorem:lu-complexity-with-pivoting}
Algorithm~\ref{alg:lu-with-pivoting} requires $\sim (2/3)n^3$ flops to compute a full LU decomposition of an $n\times n$ nonsingular matrix.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:lu-complexity-with-pivoting}]
Step 1 costs 0 flops as it only involves assignment operations. Step 2 involves $(n-1)$ divisions, $(n-1)^2$ multiplications, and $(n-1)^2$ subtractions, costing $2(n-1)^2+(n-1)$ flops to compute $\bar{\bB}=\bB_{2:n,2:n} -\frac{1}{\bB_{11}} \bB_{2:n,1} \bB_{1,2:n}$, as shown in the proof of Theorem~\ref{theorem:lu-complexity}.
The computation of step 3 reduces to $\frac{1}{\bB_{11}}\bP_2^\top \bB_{2:n,1}$, which costs $n-1$ flops since the permutation itself does not count.
So one recursion step on an $n\times n$ matrix costs $2(n-1)^2+(n-1)+(n-1)=2n^2-2n$ flops. Let $f(n) = 2n^2-2n$; the final complexity can be calculated by
$$
\mathrm{cost}= f(n)+f(n-1)+\ldots+f(1).
$$
Simple calculations can show that the complexity is $(2/3)n^3-(2/3)n$ flops, or $(2/3)n^3$ flops if we keep only the leading term.
\end{proof}
\section{Bandwidth Preserving in the LU Decomposition without Permutation}
The bandwidth of a matrix is defined as follows.
\begin{definition}[Matrix Bandwidth\index{Matrix bandwidth}]\label{defin:matrix-bandwidth}
Let $\bA\in \real^{n\times n}$ with $(i,j)$-th entry denoted by $\bA_{ij}$. Then $\bA$ has \textbf{upper bandwidth $q$} if $\bA_{ij} =0$ for $j>i+q$, and \textbf{lower bandwidth $p$} if $\bA_{ij}=0$ for $i>j+p$.
An example of a $6\times 6$ matrix with upper bandwidth $2$ and lower bandwidth $3$ is shown as follows:
$$
\begin{bmatrix}
\boxtimes & \boxtimes & \boxtimes & 0& 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes & 0\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes & \boxtimes\\
0 & 0 & \boxtimes & \boxtimes& \boxtimes & \boxtimes\\
\end{bmatrix}.
$$
\end{definition}
We now prove that the LU decomposition without permutation preserves bandwidth.
\begin{lemma}[Bandwidth Preserving]\label{lemma:lu-bandwidth-presev}
Let $\bA\in \real^{n\times n}$ have upper bandwidth $q$ and lower bandwidth $p$. If $\bA$ has an LU decomposition $\bA=\bL\bU$, then $\bU$ has upper bandwidth $q$ and $\bL$ has lower bandwidth $p$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:lu-bandwidth-presev}]
Following from the computation of the LU decomposition without permutation in Section~\ref{section:compute-lu-without-pivot}, and
from the property of lower triangular matrices and upper triangular matrices, we have the decomposition for $\bA$ as follows
$$
\bA=\left[
\begin{matrix}
\bA_{11} & \bA_{1,2:n} \\
\bA_{2:n,1} & \bA_{2:n,2:n}
\end{matrix}
\right]
=\left[
\begin{matrix}
1 & 0 \\
\frac{1}{\bA_{11}} \bA_{2:n,1} & \bI_{n-1}
\end{matrix}
\right]
\left[
\begin{matrix}
\bA_{11} & \bA_{1,2:n}\\
0 & \bS
\end{matrix}
\right] = \bL_1 \bU_1,
$$
where $\bS =\bA_{2:n,2:n} - \frac{1}{\bA_{11}}\bA_{2:n,1}\bA_{1,2:n}$ is the Schur complement of $\bA_{11}$ in $\bA$.
We can name this decomposition of $\bA$ as the $s$-decomposition of $\bA$.
The first column of $\bL_1$ and the first row of $\bU_1$ have the required structure (lower bandwidth $p$ and upper bandwidth $q$, respectively). Moreover, the Schur complement $\bS$ of $\bA_{11}$ again has upper bandwidth $q$ and lower bandwidth $p$: the outer product $\frac{1}{\bA_{11}}\bA_{2:n,1}\bA_{1,2:n}$ is nonzero only in its upper-left $p\times q$ block, which lies inside the band of $\bA_{2:n,2:n}$. The result follows by induction on the $s$-decomposition of $\bS$.
\end{proof}
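The lemma can be illustrated numerically: the sketch below (a hedged illustration, not part of the text's algorithms) runs Gaussian elimination without pivoting on a randomly generated banded matrix, shifted to keep the pivots away from zero, and checks the bandwidths of the computed factors:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU decomposition without pivoting: A = L @ U."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]              # multipliers
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])  # Schur complement update
    return L, np.triu(U)

rng = np.random.default_rng(0)
n, p, q = 8, 3, 2                                  # lower / upper bandwidths
A = rng.standard_normal((n, n)) + 10 * np.eye(n)   # keep leading minors nonzero
i, j = np.indices((n, n))
A[(i > j + p) | (j > i + q)] = 0.0                 # enforce the band structure

L, U = lu_no_pivot(A)
assert np.allclose(L @ U, A)
assert np.allclose(L[i > j + p], 0)                # L has lower bandwidth p
assert np.allclose(U[j > i + q], 0)                # U has upper bandwidth q
```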
\section{Block LU Decomposition}
Another form of the LU decomposition is to factor the matrix into block triangular matrices.
\begin{theoremHigh}[Block LU Decomposition without Permutation]\label{theorem:block-lu-factorization-without-permutation}
For any $n\times n$ square matrix $\bA$, if the first $m$ leading principal block submatrices are nonsingular, then $\bA$ can be factored as
\begin{equation}
\bA = \bL\bU
=
\begin{bmatrix}
\bI & & & \\
\bL_{21} & \bI & & \\
\vdots & & \ddots & \\
\bL_{m1} & \ldots & \bL_{m,m-1} & \bI
\end{bmatrix}
\begin{bmatrix}
\bU_{11} &\bU_{12} & \ldots & \bU_{1m}\\
& \bU_{22} & & \vdots \\
& & \ddots & \bU_{m-1,m}\\
& & & \bU_{mm}\\
\end{bmatrix}
, \nonumber
\end{equation}
where $\bL_{i,j}$'s and $\bU_{ij}$'s are some block matrices.
Specifically, this decomposition is unique.
\end{theoremHigh}
Note that the $\bU$ in the above theorem is not necessarily upper triangular. An example can be shown as follows:
$$
\bA =
\left[\begin{array}{cc;{2pt/2pt}cc}
0& 1 & 1 & 1\\
-1& 2 & -1 & 2\\\hdashline[2pt/2pt]
2& 1 & 4 & 2\\
1& 2 & 3 & 3\\
\end{array}\right]
=
\left[\begin{array}{cc;{2pt/2pt}cc}
1& 0 & 0& 0\\
0& 1 & 0 & 0\\\hdashline[2pt/2pt]
5& -2 & 1 & 0\\
4& -1 & 0 & 1\\
\end{array}\right]
\left[\begin{array}{cc;{2pt/2pt}cc}
0& 1 & 1 & 1\\
-1& 2 & -1 & 2\\\hdashline[2pt/2pt]
0& 0 & -3 & 1\\
0& 0 & -2 & 1\\
\end{array}\right].
$$
The trivial non-block LU decomposition fails on $\bA$ since the entry $(1,1)$ is zero. However, the block LU decomposition exists.
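The block factors in this example can be reproduced directly from the Schur complement formula; a minimal numpy sketch:

```python
import numpy as np

A = np.array([[0., 1., 1., 1.],
              [-1., 2., -1., 2.],
              [2., 1., 4., 2.],
              [1., 2., 3., 3.]])
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

L21 = A21 @ np.linalg.inv(A11)   # block multiplier (A11 is nonsingular)
S = A22 - L21 @ A12              # Schur complement of A11 in A
L = np.block([[np.eye(2), np.zeros((2, 2))], [L21, np.eye(2)]])
U = np.block([[A11, A12], [np.zeros((2, 2)), S]])

assert np.allclose(L @ U, A)
assert np.allclose(L21, [[5., -2.], [4., -1.]])   # matches the factor above
assert np.allclose(S, [[-3., 1.], [-2., 1.]])
```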
\section{Application: Linear System via the LU Decomposition}\label{section:lu-linear-sistem}
Consider the well-determined linear system $\bA\bx = \bb$, where $\bA$ is nonsingular of size $n\times n$. Rather than solving the system by computing the inverse of $\bA$, we solve the linear system via the LU decomposition. Suppose $\bA$ admits the LU decomposition $\bA = \bP\bL\bU$. The solution is given by the following algorithm.
\begin{algorithm}[H]
\caption{Solving Linear Equations by LU Decomposition}
\label{alg:linear-equation-by-LU}
\begin{algorithmic}[1]
\Require
matrix $\bA$ is nonsingular and square with size $n\times n $, solve $\bA\bx=\bb$;
\State LU Decomposition: factor $\bA$ as $\bA=\bP\bL\bU$; \Comment{(2/3)$n^3$ flops}
\State Permutation: $\bw = \bP^\top\bb$; \Comment{0 flops }
\State Forward substitution: solve $\bL\bv = \bw$; \Comment{$1+3+... + (2n-1)=n^2$ flops}
\State Backward substitution: solve $\bU\bx= \bv$; \Comment{$1+3+... + (2n-1)=n^2$ flops}
\end{algorithmic}
\end{algorithm}
The complexity of the decomposition step is $(2/3)n^3$ flops, and the forward and backward substitution steps each cost $1+3+\ldots + (2n-1)=n^2$ flops. Therefore, the total cost for solving the linear system via the LU factorization is $(2/3)n^3 + 2n^2$ flops. If we keep only the leading term, Algorithm~\ref{alg:linear-equation-by-LU} costs $(2/3)n^3$ flops, where most of the cost comes from the LU decomposition.
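The factor-then-substitute pattern above is exactly what library routines expose; a sketch using scipy (the $3\times 3$ matrix and right-hand side are arbitrary illustrations):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[2., 10., 5.],
              [1., 4., -2.],
              [6., 8., 4.]])
b = np.array([1., 2., 3.])

lu, piv = lu_factor(A)        # one O(n^3) factorization (with partial pivoting)
x = lu_solve((lu, piv), b)    # two O(n^2) triangular substitutions

assert np.allclose(A @ x, b)
```

Factoring once and reusing `(lu, piv)` pays off when the same $\bA$ is solved against many right-hand sides.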
\paragraph{Linear system via the block LU decomposition} For a block LU decomposition of $\bA=\bL\bU$, we need to solve $\bL\bv = \bw$ and $\bU\bx = \bv$. But the latter system is not triangular and requires some extra computations.
\section{Application: Computing the Inverse of Nonsingular Matrices}\label{section:inverse-by-lu}
By Theorem~\ref{theorem:lu-factorization-with-permutation}, for any nonsingular matrix $\bA\in \real^{n\times n}$, we have a full LU factorization $\bA=\bP\bL\bU$. Then the inverse can be obtained by solving the matrix equation
$$
\bA\bX = \bI,
$$
which amounts to $n$ linear systems: $\bA\bx_i = \be_i$ for all $i \in \{1, 2, \ldots, n\}$, where $\bx_i$ is the $i$-th column of $\bX$ and $\be_i$ is the $i$-th column of $\bI$ (i.e., the $i$-th unit vector).
\begin{theorem}[Inverse of Nonsingular Matrix by Linear System]
Computing the inverse of a nonsingular matrix $\bA \in \real^{n\times n}$ by $n$ linear systems needs $\sim (2/3)n^3 + n(2n^2)=(8/3)n^3$ flops where $(2/3)n^3$ comes from the computation of the LU decomposition of $\bA$.
\end{theorem}
The proof is trivial by using Algorithm~\ref{alg:linear-equation-by-LU}.
However, the complexity can be reduced by taking advantage of the structure of $\bU$ and $\bL$.
Notice that the inverse of the nonsingular matrix is $\bA^{-1} = \bU^{-1}\bL^{-1}\bP^{-1}=\bU^{-1}\bL^{-1}\bP^{\top}$.
\begin{theorem}[Inverse of Nonsingular Matrix by LU Factorization]\label{theorem:inverse-by-lu2}
Computing the inverse of a nonsingular matrix $\bA \in \real^{n\times n}$ by $\bA^{-1} = \bU^{-1}\bL^{-1}\bP^{T}$ needs $\sim (2/3)n^3 + (4/3)n^3=2n^3$ flops where $(2/3)n^3$ comes from the computation of the LU decomposition of $\bA$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:inverse-by-lu2}]
We notice that the computation of $\bU^{-1}\bL^{-1}\bP^{\top}$ reduces to that of $\bY=\bU^{-1}\bL^{-1}$, where $\bU^{-1}$ is an upper triangular matrix and $\bL^{-1}$ is a unit lower triangular matrix.
Suppose
$$\bU^{-1} = \bZ=\begin{bmatrix}
-\bz_1^\top-\\
-\bz_2^\top-\\
\vdots \\
-\bz_n^\top-
\end{bmatrix}\qquad
\text{ and }\qquad
\bU = [\bu_1, \bu_2, \ldots, \bu_n],
$$
are the row partition of $\bU^{-1}$ and the column partition of $\bU$, respectively.
Since both $\bZ$ and $\bU$ are upper triangular matrices, we have
$$
\bI=
\begin{bmatrix}
\bz_1^\top\bu_1=1 & \bz_1^\top\bu_2=0 & \bz_1^\top\bu_3=0 & \bz_1^\top\bu_4=0 & \ldots & \bz_1^\top\bu_n=0 \\
\bz_2^\top\bu_1=0 & \bz_2^\top\bu_2=1 & \bz_2^\top\bu_3=0 & \bz_2^\top\bu_4=0 & \ldots & \bz_2^\top\bu_n=0 \\
\bz_3^\top\bu_1=0 & \bz_3^\top\bu_2=0 & \bz_3^\top\bu_3=1 &\bz_3^\top\bu_4=0 & \ldots & \bz_3^\top\bu_n=0 \\
\bz_4^\top\bu_1=0 & \bz_4^\top\bu_2=0 & \bz_4^\top\bu_3=0 &\bz_4^\top\bu_4=1 & \ldots & \bz_4^\top\bu_n=0 \\
\vdots &\vdots & \vdots &\vdots & \ldots & \vdots \\
\bz_n^\top\bu_1=0 & \bz_n^\top\bu_2=0 & \bz_n^\top\bu_3=0&\bz_n^\top\bu_4=0 & \ldots & \bz_n^\top\bu_n=1
\end{bmatrix}.
$$
By $\bz_1^\top\bu_1=1$, we can compute the first component of $\bz_1$ with 1 flop; by $\bz_1^\top\bu_2=0$, we can compute the second component of $\bz_1$ with 3 flops, as the first component is already calculated; and so on.
Then we list the complexity of each inner product in an $n\times n$ matrix, where each entry ($i,j$) denotes the cost to calculate the $(i,j)$ element of $\bU^{-1}$:
$$
\mathrm{cost}=
\begin{bmatrix}
\bz_1^\top\bu_1=1 & \bz_1^\top\bu_2=3 & \bz_1^\top\bu_3=5 & \bz_1^\top\bu_4=7 & \ldots & \bz_1^\top\bu_n=2n-1 \\
0 & \bz_2^\top\bu_2=1 & \bz_2^\top\bu_3=3 & \bz_2^\top\bu_4=5 & \ldots & \bz_2^\top\bu_n=2n-3 \\
0 & 0 & \bz_3^\top\bu_3=1 &\bz_3^\top\bu_4=3 & \ldots & \bz_3^\top\bu_n=2n-5 \\
0 & 0 & 0 &\bz_4^\top\bu_4=1 & \ldots & \bz_4^\top\bu_n=2n-7 \\
\vdots &\vdots & \vdots &\vdots & \ldots & \vdots \\
0 & 0 & 0& 0 & \ldots & \bz_n^\top\bu_n=1
\end{bmatrix}=
\begin{bmatrix}
n^2 \\
(n-1)^2 \\
(n-2)^2 \\
(n-3)^2 \\
\vdots \\
1
\end{bmatrix},
$$
which is a sum of $n$ arithmetic sequences; the sum of each sequence is shown in the last equality above. Thus the total cost to compute $\bU^{-1}$ is \fbox{$\frac{2n^3+3n^2+n}{6}$} flops. Similarly, calculating $\bL^{-1}$ also costs \fbox{$\frac{2n^3+3n^2+n}{6}$} flops.
A moment of reflection on the multiplication $\bY=\bU^{-1}\bL^{-1}$ reveals that:
$\bullet$ The entry $(1,1)$ of $\bY$ involves an inner product of dimension $n$, which takes $2n-1$ flops (i.e., $n$ multiplications and $n-1$ additions).
$\bullet$ The entry $(1,2)$ of $\bY$ involves an inner product of dimension $n-1$, which takes $2n-3$ flops.
$\bullet$ The process goes on; we write the flops in an $n\times n$ matrix, with each entry $(i,j)$ denoting the number of flops to calculate the $(i,j)$ element of $\bY$:
$$
\text{costs of $\bY=\bU^{-1}\bL^{-1}$}=
\begin{bmatrix}
\{ \textcolor{red}{2n-1} & \textcolor{red}{2n-3} & \textcolor{red}{2n-5} & \textcolor{red}{2n-7} & \textcolor{red}{\ldots} & \textcolor{red}{1}\} \\
\overbrace{\textcolor{cyan}{2n-3}} & \{ \textcolor{green}{2n-3} & \textcolor{green}{2n-5} & \textcolor{green}{2n-7} & \textcolor{green}{\ldots} & \textcolor{green}{1}\} \\
\textcolor{cyan}{2n-5} & \overbrace{\textcolor{magenta}{2n-5}} & \{ \textcolor{blue}{2n-5} & \textcolor{blue}{2n-7} & \textcolor{blue}{\ldots} & \textcolor{blue}{1}\} \\
\textcolor{cyan}{2n-7} & \textcolor{magenta}{2n-7} & \overbrace{\textcolor{olive}{2n-7}} & \{\textcolor{orange}{2n-7} &\textcolor{orange}{\ldots} & \textcolor{orange}{1}\} \\
\textcolor{cyan}{\vdots} & \textcolor{magenta}{\vdots} & \textcolor{olive}{\vdots} & \textcolor{purple}{\vdots} & \ldots & \vdots \\
\underbrace{\textcolor{cyan}{1}} & \underbrace{\textcolor{magenta}{1}} & \underbrace{\textcolor{olive}{1}} & \underbrace{\textcolor{purple}{1}} & \vdots & \textcolor{teal}{1} \\
\end{bmatrix},
$$
which is a symmetric matrix, and the total complexity can be calculated as the sum of several arithmetic sequences, each denoted in a different color. To make it clearer, each arithmetic sequence is also marked by a brace in the above matrix. Simple calculations show that the complexity is \fbox{$(2/3)n^3 + (1/3)n$} flops.
As a result, keeping only the leading terms, the total cost is $(2/3)n^3 + (1/3)n^3 + (1/3)n^3 + (2/3)n^3=2n^3$ flops, where the first $(2/3)n^3$ comes from the computation of the LU decomposition of $\bA$.
\end{proof}
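The route $\bA^{-1} = \bU^{-1}\bL^{-1}\bP^\top$ can be sketched with scipy's triangular solver, where applying `solve_triangular` to the identity yields the triangular inverse (the matrix is an arbitrary nonsingular example):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[2., 10., 5.],
              [1., 4., -2.],
              [6., 8., 4.]])
P, L, U = lu(A)                                     # A = P @ L @ U
n = A.shape[0]
Uinv = solve_triangular(U, np.eye(n))               # upper triangular inverse
Linv = solve_triangular(L, np.eye(n), lower=True)   # unit lower triangular inverse
Ainv = Uinv @ Linv @ P.T                            # A^{-1} = U^{-1} L^{-1} P^T

assert np.allclose(Ainv @ A, np.eye(n))
```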
\section{Application: Computing the Determinant via the LU Decomposition}
We can find the determinant easily by using the LU decomposition.
If $\bA=\bL\bU$, then $\det(\bA) = \det(\bL\bU) = \det(\bL)\det(\bU) = \bU_{11}\bU_{22}\ldots\bU_{nn}$, where $\bU_{ii}$ is the $i$-th diagonal entry of $\bU$ for $i\in \{1,2,\ldots,n\}$.\footnote{The determinant of a lower triangular matrix (or an upper triangular matrix) is the product of the diagonal entries.}
Further, for the LU decomposition with permutation $\bA=\bP\bL\bU$, $\det(\bA) = \det(\bP\bL\bU) = \det(\bP)\bU_{11}\bU_{22}\ldots\bU_{nn}$.
The determinant of a permutation matrix is either $1$ or $-1$, because after
changing rows around (which changes the sign of the determinant\footnote{The determinant changes sign when two rows are exchanged (sign reversal).}) a permutation matrix
becomes the identity matrix $\bI$, whose determinant is one.
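A minimal sketch of this determinant rule, recovering $\det(\bP)$ from the swap count recorded in scipy's pivot array (`piv[k]` is the row interchanged with row $k$):

```python
import numpy as np
from scipy.linalg import lu_factor

A = np.array([[2., 10., 5.],
              [1., 4., -2.],
              [6., 8., 4.]])
lu, piv = lu_factor(A)                   # PA = LU; piv records the row swaps
sign = (-1.0) ** np.count_nonzero(piv != np.arange(len(piv)))  # det(P) = +-1
det = sign * np.prod(np.diag(lu))        # det(A) = det(P) * U_11 * ... * U_nn

assert np.isclose(det, np.linalg.det(A))
```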
\section{Pivoting}
\subsection{Partial Pivoting}\label{section:partial-pivot-lu}
In practice, it is desirable to pivot even when it is not strictly necessary. When solving a linear system via the LU decomposition as shown in Algorithm~\ref{alg:linear-equation-by-LU}, small diagonal entries of $\bU$ can lead to inaccurate solutions of the linear system. Thus, it is common to pick the largest entry as the pivot, which is known as \textit{partial pivoting}. For example,
\begin{tcolorbox}[title={Partial Pivoting For a $4\times 4$ Matrix}]
\begin{equation}\label{equation:elmination-steps2}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{2} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{5} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{7} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bP_1}{\longrightarrow}
\begin{sbmatrix}{\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{7} & \textcolor{blue}{\boxtimes} & \textcolor{blue}{\boxtimes} \\
0 & 5 & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{2} & \textcolor{blue}{\boxtimes} & \textcolor{blue}{\boxtimes}
\end{sbmatrix}
\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 7 & \boxtimes & \boxtimes \\
0 & \bm{0} & \textcolor{red}{\bm{\boxtimes}} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{0} & \textcolor{red}{\bm{\boxtimes}}
\end{sbmatrix},
\end{equation}
\end{tcolorbox}
\noindent in which case, we pick $7$ as the pivot after the transformation by $\bE_1$, even when it is not strictly necessary. This interchange guarantees that no multiplier is greater than $1$ in absolute value during the Gaussian elimination.
A specific example is provided as follows.
\begin{example}[Partial Pivoting]
Suppose
$$
\bA=
\begin{bmatrix}
2 & 10 & 5\\
1 & 4 & -2 \\
6 & 8 & 4
\end{bmatrix}
$$
To get the smallest possible multipliers in the first Gaussian transformation, we need to interchange the largest value in the first column into entry $(1,1)$. The permutation matrix $\bP_1$ that does so is
$$
\bP_1 =
\begin{bmatrix}
&& 1\\
&1& \\
1& &
\end{bmatrix}
, \qquad \text{such that} \qquad
\bU=\bP_1\bA =
\begin{bmatrix}
6 & 8 & 4\\
1 & 4 & -2 \\
2 & 10 & 5
\end{bmatrix}.
$$
Then, it follows that $\bE_1$ will introduce zero below the entry $(1,1)$,
$$
\bE_1 =
\begin{bmatrix}
1 & &\\
-1/6 & 1 \\
-1/3 & 0 & 1
\end{bmatrix},
\qquad \text{and} \qquad
\bU=\bE_1\bP_1\bA=
\begin{bmatrix}
6 & 8 & 4\\
0 & 8/3 & -8/3 \\
0 & 22/3 & 11/3
\end{bmatrix}
$$
Now, we interchange $22/3$ into the pivot position ahead of $8/3$; the permutation is given by
$$
\bP_2 =
\begin{bmatrix}
1 && \\
& &1 \\
&1 &
\end{bmatrix}
, \qquad \text{such that} \qquad
\bU=\bP_2\bE_1\bP_1\bA =
\begin{bmatrix}
6 & 8 & 4\\
0 & 22/3 & 11/3\\
0 & 8/3 & -8/3 \\
\end{bmatrix}.
$$
Finally, the Gaussian transformation to introduce zero below entry ($2,2$) is given by
$$
\bE_2 =
\begin{bmatrix}
1 & & \\
& 1 & \\
& -4/11 & 1
\end{bmatrix}
\qquad \text{and} \qquad
\bU=\bE_2 \bP_2\bE_1\bP_1\bA=
\begin{bmatrix}
6 & 8 & 4\\
0 & 22/3 & 11/3\\
0 & 0 & -4\\
\end{bmatrix}.
$$
The final output is $\bU$.
\exampbar
\end{example}
As discussed above, the Gaussian transformation $\bE_k$ in the $k$-th step of partial pivoting is given by
$$
\bE_k = \bI - \bz_k \be_k^\top,
$$
where $\be_k \in \real^n$ is the $k$-th unit vector, and $\bz_k\in \real^n$ is given by
$$
\bz_k = [0, \ldots, 0, z_{k+1}, \ldots, z_n]^\top, \qquad z_i= \frac{\bU_{ik}}{\bU_{kk}}, \forall i \in \{k+1,\ldots, n\}.
$$
We realize that $\bE_k$ is a unit lower triangular matrix with only the $k$-th column of the lower submatrix being nonzero,
$$
\begin{blockarray}{ccccccc}
\begin{block}{c[ccccc]c}
& \bI & \bzero & 0 & \bzero & \bzero & \\
& \bzero & \ddots & 0 & \bzero & \bzero & \\
\bE_k= & \bzero&\bzero & 1 &\bzero & \bzero & \\
& \bzero & \bzero & \boxtimes & \bI & \bzero & \\
& \bzero & \bzero & \boxtimes & \bzero & \bI & \\
\end{block}
& & & k& & & \\
\end{blockarray}.
$$
More generally, the procedure for computing the LU decomposition with partial pivoting of $\bA\in\real^{n\times n}$ is given in Algorithm~\ref{alg:lu-partial-pivot}.
\begin{algorithm}[H]
\caption{LU Decomposition with Partial Pivoting}
\label{alg:lu-partial-pivot}
\begin{algorithmic}[1]
\Require
Matrix $\bA$ with size $n\times n$;
\State Let $\bU = \bA$;
\For{$k=1$ to $n-1$} \Comment{i.e., get the $k$-th column of $\bU$}
\State Find a row permutation matrix $\bP_k$ that swaps $\bU_{kk}$ with the largest element in $|\bU_{k:n,k}|$;
\State $\bU =\bP_k\bU$;
\State Determine the Gaussian transformation $\bE_k $ to introduce zeros below the diagonal of the $k$-th column of $\bU$;
\State $\bU = \bE_k\bU$;
\EndFor
\State Output $\bU$;
\end{algorithmic}
\end{algorithm}
The algorithm requires $\sim(2/3)n^3$ flops and $(n-1)+(n-2)+\ldots + 1 = O(n^2)$ comparisons resulting from the pivoting procedure.
Upon completion, the upper triangular matrix $\bU$ is given by
$$
\bU = \bE_{n-1}\bP_{n-1} \ldots \bE_2\bP_2\bE_1\bP_1\bA.
$$
\paragraph{Computing the final $\bL$} We now show that Algorithm~\ref{alg:lu-partial-pivot} computes an LU decomposition of the form
$$
\bA = \bP\bL\bU,
$$
where $\bP=\bP_1 \bP_2\ldots\bP_{n-1}$ accounts for all the interchanges, $\bU$ is the upper triangular matrix that results directly from the algorithm, and $\bL$ is unit lower triangular with $|\bL_{ij}|\leq 1$ for all $1 \leq i,j\leq n$; $\bL_{k+1:n,k}$ is a permuted version of $\bE_k$'s multipliers. To see this, we notice that the permutation matrices used in the algorithm form a special class, since each of them only interchanges two rows. \textit{This implies the $\bP_k$'s are symmetric and $\bP_k^2 = \bI$ for $k\in \{1,2,\ldots,n-1\}$}.
Suppose
$$
\bM_k = (\bP_{n-1} \ldots \bP_{k+1}) \bE_k (\bP_{k+1} \ldots \bP_{n-1}).
$$
Then, $\bU$ can be written as
$$
\bU = \bM_{n-1}\ldots \bM_2\bM_1 \bP^\top \bA.
$$
To see what $\bM_k$ is, we realize that each of $\bP_{k+1}, \ldots, \bP_{n-1}$ is a permutation matrix whose upper-left $k\times k$ block is the identity. Thus we have
$$
\begin{aligned}
\bM_k &= (\bP_{n-1} \ldots \bP_{k+1}) (\bI_n-\bz_k\be_k^\top) (\bP_{k+1} \ldots \bP_{n-1})\\
&=\bI_n - (\bP_{n-1} \ldots \bP_{k+1})(\bz_k\be_k^\top)(\bP_{k+1} \ldots \bP_{n-1}) \\
&=\bI_n - (\bP_{n-1} \ldots \bP_{k+1}\bz_k) (\be_k^\top\bP_{k+1} \ldots \bP_{n-1}) \\
&=\bI_n - (\bP_{n-1} \ldots \bP_{k+1}\bz_k)\be_k^\top. \qquad &\text{(since $\be_k^\top\bP_{k+1} \ldots \bP_{n-1} = \be_k^\top$)}
\end{aligned}
$$
This implies that $\bM_k$ is unit lower triangular, with the $k$-th column containing a permuted version of $\bE_k$'s multipliers. The final unit lower triangular $\bL$ is thus given by
$$
\bL = \bM_1^{-1}\bM_2^{-1} \ldots \bM_{n-1}^{-1}.
$$
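Algorithm~\ref{alg:lu-partial-pivot} together with this bookkeeping can be sketched as follows (a hedged illustration; the function name is ours). Previously computed multipliers are permuted along with the rows, so the returned factors satisfy $\bA = \bP\bL\bU$:

```python
import numpy as np

def lu_partial_pivot(A):
    """LU with partial pivoting: returns P, L, U with A = P @ L @ U."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    perm = np.arange(n)
    for k in range(n - 1):
        m = k + np.argmax(np.abs(U[k:, k]))     # row of the largest pivot candidate
        U[[k, m], k:] = U[[m, k], k:]           # interchange the rows of U
        L[[k, m], :k] = L[[m, k], :k]           # permute earlier multipliers
        perm[[k, m]] = perm[[m, k]]
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]   # multipliers, all <= 1 in magnitude
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    P = np.eye(n)[:, perm]                      # accumulates the interchanges
    return P, L, np.triu(U)

A = np.array([[2., 10., 5.], [1., 4., -2.], [6., 8., 4.]])
P, L, U = lu_partial_pivot(A)
assert np.allclose(P @ L @ U, A)
assert np.max(np.abs(L)) <= 1.0 + 1e-12         # no multiplier exceeds 1
```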
\subsection{Complete Pivoting}\label{section:complete-pivoting}
In partial pivoting, when introducing zeros below the diagonal of the $k$-th column of $\bU$, the $k$-th pivot is determined by scanning the current subcolumn $\bU_{k:n,k}$. In complete pivoting, the largest absolute entry in the current submatrix $\bU_{k:n,k:n}$ is interchanged into the entry $(k,k)$ of $\bU$. Therefore, an additional \textit{column permutation} $\bQ_k$ is needed in each step. The final upper triangular matrix $\bU$ is obtained by
$$
\bU = \bE_{n-1}\bP_{n-1}\ldots (\bE_2\bP_2(\bE_1\bP_1\bA\bQ_1)\bQ_2) \ldots \bQ_{n-1}.
$$
Similarly, the complete pivoting algorithm is formulated in Algorithm~\ref{alg:lu-complete-pivot}.
\begin{algorithm}[H]
\caption{LU Decomposition with Complete Pivoting}
\label{alg:lu-complete-pivot}
\begin{algorithmic}[1]
\Require
Matrix $\bA$ with size $n\times n$;
\State Let $\bU = \bA$;
\For{$k=1$ to $n-1$} \Comment{the value $k$ is to get the $k$-th column of $\bU$}
\State Find a row permutation matrix $\bP_k$, and a column permutation $\bQ_k$ that swaps $\bU_{kk}$ with the largest element in $|\bU_{k:n,k:n}|$, say $\bU_{u,v} = \max{|\bU_{k:n,k:n}|}$;
\State $\bU =\bP_k\bU\bQ_k$;
\State Determine the Gaussian transformation $\bE_k $ to introduce zeros below the diagonal of the $k$-th column of $\bU$;
\State $\bU = \bE_k\bU$;
\EndFor
\State Output $\bU$;
\end{algorithmic}
\end{algorithm}
The algorithm requires $\sim(2/3)n^3$ flops and $n^2+(n-1)^2+\ldots +1^2 = O(n^3)$ comparisons resulting from the pivoting procedure. Again, let $\bP=\bP_1 \bP_2\ldots\bP_{n-1}$, $\bQ = \bQ_1 \bQ_2\ldots\bQ_{n-1}$,
$$
\bM_k = (\bP_{n-1} \ldots \bP_{k+1}) \bE_k (\bP_{k+1} \ldots \bP_{n-1}), \qquad \text{for all $k\in \{1,2,\ldots,n-1\}$}
$$
and
$$
\bL = \bM_1^{-1}\bM_2^{-1} \ldots \bM_{n-1}^{-1}.
$$
We have $\bA = \bP\bL\bU \bQ^\top$ or $\bP^\top\bA\bQ = \bL\bU$ as the final decomposition.
\subsection{Rook Pivoting}
\textit{Rook pivoting} provides an alternative to partial and complete pivoting. Instead of choosing the largest value in $|\bU_{k:n,k:n}|$ in the $k$-th step, it searches for an element of $\bU_{k:n,k:n}$ that is maximal in absolute value in both its row and its column. The rook pivot is not unique in general; several entries may satisfy the criterion. For example, consider the submatrix
$$
\bU_{k:n,k:n} =
\begin{bmatrix}
1 & 2 & 3 & 4 \\
2 & 3 & 7 & 3 \\
5 & 2 & 1 & 2 \\
2 & 1 & 2 & 1 \\
\end{bmatrix},
$$
where complete pivoting would choose $7$, whereas rook pivoting may identify any of $4$, $5$, or $7$.
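A brute-force search for the rook pivots in this $4\times 4$ example (a sketch for illustration; practical implementations instead alternate row and column scans):

```python
import numpy as np

U = np.array([[1., 2., 3., 4.],
              [2., 3., 7., 3.],
              [5., 2., 1., 2.],
              [2., 1., 2., 1.]])
B = np.abs(U)
# a rook pivot is maximal in absolute value in both its row and its column
rook = [(i, j) for i in range(4) for j in range(4)
        if B[i, j] == B[i, :].max() and B[i, j] == B[:, j].max()]
values = sorted(U[i, j] for i, j in rook)

assert values == [4.0, 5.0, 7.0]   # the candidates named in the text
```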
\section{Rank-Revealing LU Decomposition}\label{section:rank-reveal-lu-short}
In many applications, when $\bA$ has rank $r$, a factorization produced by Gaussian elimination with pivoting will reveal the rank in the following form\index{Rank-revealing}\index{Rank-revealing LU}
$$
\bP\bA\bQ =
\begin{bmatrix}
\bL_{11} & \bzero \\
\bL_{21}^\top & \bI
\end{bmatrix}
\begin{bmatrix}
\bU_{11} & \bU_{12} \\
\bzero & \bzero
\end{bmatrix},
$$
where $\bL_{11}\in \real^{r\times r}$ and $\bU_{11}\in \real^{r\times r}$ are nonsingular, $\bL_{21}, \bU_{12}\in \real^{r\times (n-r)}$, and $\bP,\bQ$ are permutations. Gaussian elimination with rook pivoting or complete pivoting can result in such a decomposition \citep{hwang1992rank, higham2002accuracy}.
\section{Rate of Change of L and U*}
Suppose $\bA$ has a unique LU decomposition, and let $\Delta\bA$ be a small perturbation such that the LU decomposition of $\bA+\Delta\bA$ also exists and is given by $\bA+\Delta\bA = (\bL+\Delta\bL)(\bU+\Delta\bU)$. Then, to first order (neglecting the term $\Delta\bL\cdot\Delta\bU$), $\Delta\bA$ satisfies
$$
\Delta\bA = \Delta\bL \cdot \bU + \bL \cdot \Delta\bU.
$$
Multiplying on the left by $\bL^{-1}$ and on the right by $\bU^{-1}$, we have
$$
\bL^{-1}\cdot \Delta\bA \cdot \bU^{-1} = \bL^{-1}\cdot \Delta\bL + \Delta\bU \cdot \bU^{-1}.
$$
Since both $\bL$ and $\bL+\Delta\bL$ are unit lower triangular matrices, $\Delta \bL$ is strictly lower triangular, i.e., lower triangular with zeros on the diagonal. Similarly, since both $\bU$ and $\bU+\Delta\bU$ are upper triangular matrices, $\Delta\bU$ is upper triangular. Thus
$$
\Delta\bL = \bL \cdot slt(\bL^{-1}\cdot \Delta\bA \cdot \bU^{-1}),
\qquad
\Delta\bU = ut(\bL^{-1}\cdot \Delta\bA \cdot \bU^{-1}) \bU,
$$
where $slt(\bB)$ denotes the strictly lower triangular part of $\bB$, and $ut(\bB)$ the upper triangular part of $\bB$. Clearly, the sensitivity, i.e., the rate of change, of $\bL$ and $\bU$ depends on the inverses of $\bL$ and $\bU$. More generally, we have the following result.
\begin{theorem}[Rate of Change of $\bL$ and $\bU$]\label{theorem:rate-lu-decomposition}
Suppose the matrix $\bA \in \real^{n\times n}$ has nonzero leading principal minors, i.e., $\det(\bA_{1:k,1:k})\neq 0$ for all $k\in \{1,2,\ldots, n\}$, with LU decomposition $\bA = \bL\bU$. Let $\bG$ be a real $n\times n$ matrix, set $\Delta\bA = \epsilon \bG$ for some $\epsilon \geq 0$, and suppose $\epsilon$ is sufficiently small that all the leading principal minors of $\bA+t\bG$ are nonzero for all $|t|\leq \epsilon$. Then $\bA+\Delta\bA$ has the LU decomposition
$$
\bA+\Delta\bA = (\bL+\Delta\bL)(\bU+\Delta\bU),
$$
with $\Delta\bL$ and $\Delta\bU$ satisfying
$$
\begin{aligned}
\Delta \bL &= \epsilon \dotL(0) + O(\epsilon^2), \\
\Delta\bU &= \epsilon \dotU(0) + O(\epsilon^2),
\end{aligned}
$$
where $\dotL(0) = \dotL(t)|_{t=0}$ and $\dotU(0) = \dotU(t)|_{t=0}$, and $\dotL(t)$ and $\dotU(t)$ are defined by the unique LU decomposition
\begin{equation}\label{equation:partial-eq}
\bA + t\bG = \bL(t)\bU(t), \qquad |t| \leq \epsilon.
\end{equation}
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:rate-lu-decomposition}]
By the uniqueness of the LU decomposition in Equation~\eqref{equation:partial-eq}, we notice that
\begin{equation}\label{equation:rage-lu-eq1}
\bL(0) = \bL
\qquad \text{and} \qquad
\bL(\epsilon) = \bL+\Delta\bL.
\end{equation}
By Taylor expansion of $\bL(t)$ around $t=0$, we have
$$
\bL(\epsilon) = \bL(0) + \epsilon\dotL(0) + O(\epsilon^2),
$$
which, combined with Equation~\eqref{equation:rage-lu-eq1}, results in
$$
\Delta\bL = \bL(\epsilon) - \bL(0) = \epsilon \dotL(0) + O(\epsilon^2).
$$
Similarly, we can also obtain
$$
\Delta\bU = \epsilon \dotU(0) + O(\epsilon^2).
$$
This completes the proof.
\end{proof}
Following from Theorem~\ref{theorem:rate-lu-decomposition} above, we can further claim that
\begin{align}
\bL\dotU(0) &+ \dotL(0)\bU = \bG, \label{equation:partial-lu-rate-1}\\
\dotL(0) &= \bL \cdot slt(\bL^{-1}\bG\bU^{-1}), \label{equation:partial-lu-rate-2}\\
\dotU(0) &= ut(\bL^{-1}\bG\bU^{-1}) \bU. \label{equation:partial-lu-rate-3}
\end{align}
Note that $\bL(0) = \bL$, $\bL(\epsilon) =\bL+\Delta\bL$, $\bU(0) = \bU$, and $\bU(\epsilon)=\bU +\Delta\bU$. If we differentiate Equation~\eqref{equation:partial-eq} and set $t=0$, we obtain the result in Equation~\eqref{equation:partial-lu-rate-1}. Further, multiplying Equation~\eqref{equation:partial-lu-rate-1} on the left by $\bL^{-1}$ and on the right by $\bU^{-1}$, we have
$$
\dotU(0)\bU^{-1} + \bL^{-1}\dotL(0) =\bL^{-1}\bG\bU^{-1},
$$
where $\bL^{-1}\dotL(0)$ is strictly lower triangular and $\dotU(0)\bU^{-1}$ is upper triangular, so that the results in Equations~\eqref{equation:partial-lu-rate-2} and \eqref{equation:partial-lu-rate-3} follow.
In the above theorem, we proved that the sensitivity, i.e., the rate of change, of $\bL$ and $\bU$ is governed by $\dotL(0)$ and $\dotU(0)$ for sufficiently small $\epsilon$. More results on the perturbation of the LU decomposition can be found in \citep{baarland1991perturbation,sun1992rounding, sun1992componentwise, stewart1993perturbation, stewart1997perturbation, chang1997pertubation, chang1998sensitivity, higham2002accuracy, bueno2004stability}; similar results exist for the QR decomposition \citep{stewart1993perturbation, chang1997perturbationqr, chang1997pertubation} and the Cholesky decomposition \citep{sun1992rounding, stewart1997perturbation, chang1997pertubation}.
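The first-order formulas can be checked numerically; the sketch below (an illustration under our own random test data) factors a perturbed matrix without pivoting and compares $\Delta\bL$, $\Delta\bU$ against $\bL\cdot slt(\bL^{-1}\Delta\bA \bU^{-1})$ and $ut(\bL^{-1}\Delta\bA \bU^{-1})\bU$, which should agree up to $O(\epsilon^2)$:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU decomposition without pivoting: A = L @ U."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        L[k + 1:, k] = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(L[k + 1:, k], U[k, k:])
    return L, np.triu(U)

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # nonzero leading minors
dA = 1e-6 * rng.standard_normal((5, 5))           # small perturbation eps*G

L, U = lu_no_pivot(A)
L2, U2 = lu_no_pivot(A + dA)

M = np.linalg.inv(L) @ dA @ np.linalg.inv(U)
dL_pred = L @ np.tril(M, -1)    # L * slt(L^{-1} dA U^{-1})
dU_pred = np.triu(M) @ U        # ut(L^{-1} dA U^{-1}) * U

assert np.allclose(L2 - L, dL_pred, atol=1e-9)    # error is O(||dA||^2)
assert np.allclose(U2 - U, dU_pred, atol=1e-9)
```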
\chapter{Nonnegative Matrix Factorization (NMF)}\index{NMF}\label{section:nmf}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Nonnegative Matrix Factorization}
Following from the matrix factorization via the ALS, we now consider algorithms for solving the nonnegative matrix factorization (NMF) problem:
\begin{itemize}
\item Given a nonnegative matrix $\bA\in \real^{M\times N}$, find nonnegative matrix factors $\bW\in \real^{M\times K}$ and $\bZ\in \real^{K\times N}$ such that:
$$
\bA\approx\bW\bZ.
$$
\end{itemize}
To measure the approximation, the loss is again the Frobenius norm of the difference between the two matrices:
$$
L(\bW,\bZ) = ||\bW\bZ-\bA||^2.
$$
\section{NMF via Multiplicative Update}
Following from Section~\ref{section:als-netflix}, given $\bW\in \real^{M\times K}$, we want to update $\bZ\in \real^{K\times N}$, the gradient with respect to $\bZ$ is given by Equation~\eqref{equation:givenw-update-z-allgd}:
$$
\begin{aligned}
\frac{\partial L(\bZ|\bW)}{\partial \bZ} =2 \bW^\top(\bW\bZ-\bA) \in \real^{K\times N}.
\end{aligned}
$$
Applying the gradient descent idea in Section~\ref{section:als-gradie-descent}, the trivial update on $\bZ$ can be done by
$$
(\text{GD on $\bZ$})\gap \bZ \leftarrow \bZ - \eta \left(\frac{\partial L(\bZ|\bW)}{\partial \bZ}\right)=\bZ - \eta \left(2 \bW^\top\bW\bZ-2\bW^\top\bA\right),
$$
where $\eta$ is a small positive step size. Now if we suppose a different step size for each entry of $\bZ$ and incorporate the constant 2 into the step size, the update can be obtained by
$$
(\text{GD$^\prime$ on $\bZ$})\gap
\begin{aligned}
\bZ_{kn} &\leftarrow \bZ_{kn} - \frac{\eta_{kn}}{2} \left(\frac{\partial L(\bZ|\bW)}{\partial \bZ}\right)_{kn}\\
&=\bZ_{kn} - \eta_{kn}(\bW^\top\bW\bZ-\bW^\top\bA)_{kn}, \gap k\in [1,K], n\in[1,N],
\end{aligned}
$$
where $\bZ_{kn}$ is the $(k,n)$-th entry of $\bZ$. Now if we rescale the step size:
$$
\eta_{kn} = \frac{\bZ_{kn}}{(\bW^\top\bW\bZ)_{kn}},
$$
then we obtain the update rule:
$$
(\text{Multiplicative update on $\bZ$})\gap
\bZ_{kn} \leftarrow \bZ_{kn}\frac{(\bW^\top\bA)_{kn}}{(\bW^\top\bW\bZ)_{kn}}, \gap k\in [1,K], n\in[1,N],
$$
which is known as the \textit{multiplicative update}, first developed in \citep{lee2001algorithms} and further discussed in \citep{pauca2006nonnegative}. Analogously, the multiplicative update on $\bW$ can be obtained by
\begin{equation}\label{equation:multi-update-w}
(\text{Multiplicative update on $\bW$})\gap
\bW_{mk} \leftarrow \bW_{mk} \frac{(\bA\bZ^\top)_{mk}}{(\bW\bZ\bZ^\top)_{mk}}, \gap m\in[1,M], k\in[1,K].
\end{equation}
\begin{theorem}[Convergence of Multiplicative Update]
The loss $L(\bW,\bZ)=||\bW\bZ-\bA||^2$ is non-increasing under the multiplicative update rules:
$$
\left\{
\begin{aligned}
\bZ_{kn} &\leftarrow \bZ_{kn}\frac{(\bW^\top\bA)_{kn}}{(\bW^\top\bW\bZ)_{kn}}, \gap k\in [1,K], n\in[1,N];\\
\bW_{mk} &\leftarrow \bW_{mk} \frac{(\bA\bZ^\top)_{mk}}{(\bW\bZ\bZ^\top)_{mk}}, \gap m\in[1,M], k\in[1,K].
\end{aligned}
\right.
$$
\end{theorem}
We refer the proof of the above theorem to \citep{lee2001algorithms}. Clearly, the factors $\bW$ and $\bZ$ remain nonnegative during the updates.
It is generally best to update $\bW$ and $\bZ$ ``simultaneously'', instead of updating each matrix fully before the other. In this case, after updating a row of $\bZ$, we update the corresponding column of $\bW$.
In the implementation, a small positive quantity, say the square root of the machine precision, should be added to the denominators in the updates of $\bW$ and $\bZ$ at each iteration step; a trivial $\epsilon=10^{-9}$ can do the job. The full procedure is shown in Algorithm~\ref{alg:nmf-multiplicative}.
\begin{algorithm}[h]
\caption{NMF via Multiplicative Updates}
\label{alg:nmf-multiplicative}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ randomly with nonnegative entries.
\State choose a stop criterion on the approximation error $\delta$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{$|| \bA- (\bW\bZ)||^2>\delta $ and $iter<C$}
\State $iter=iter+1$;
\For{$k=1$ to $K$}
\For{$n=1$ to $N$} \Comment{update $k$-th row of $\bZ$}
\State $\bZ_{kn} \leftarrow \bZ_{kn}\frac{(\bW^\top\bA)_{kn}}{(\bW^\top\bW\bZ)_{kn}+\textcolor{blue}{\epsilon}}$;
\EndFor
\For{$m=1$ to $M$} \Comment{update $k$-th column of $\bW$}
\State $\bW_{mk} \leftarrow \bW_{mk} \frac{(\bA\bZ^\top)_{mk}}{(\bW\bZ\bZ^\top)_{mk}+\textcolor{blue}{\epsilon}}$;
\EndFor
\EndFor
\EndWhile
\State Output $\bW,\bZ$;
\end{algorithmic}
\end{algorithm}
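A compact numpy sketch of the multiplicative updates (note: this version sweeps over the full matrices per iteration rather than row-by-row as in Algorithm~\ref{alg:nmf-multiplicative}; the function name and test data are illustrative):

```python
import numpy as np

def nmf_multiplicative(A, K, iters=200, eps=1e-9, seed=0):
    """NMF A ~ W @ Z via multiplicative updates (full-matrix sweeps)."""
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = rng.random((M, K)) + 0.1      # positive initialization
    Z = rng.random((K, N)) + 0.1
    for _ in range(iters):
        Z *= (W.T @ A) / (W.T @ W @ Z + eps)   # multiplicative update on Z
        W *= (A @ Z.T) / (W @ Z @ Z.T + eps)   # multiplicative update on W
    return W, Z

rng = np.random.default_rng(42)
A = rng.random((20, 15))                       # nonnegative data
W, Z = nmf_multiplicative(A, K=5)
assert (W >= 0).all() and (Z >= 0).all()       # factors stay nonnegative
err = np.linalg.norm(W @ Z - A) / np.linalg.norm(A)
```

Since the updates only multiply by nonnegative ratios, nonnegativity of the factors is preserved automatically, which is the key design point of the scheme.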
\section{Regularization}
\begin{algorithm}[h]
\caption{NMF via Regularized Multiplicative Updates}
\label{alg:nmf-multiplicative-regularization}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ randomly with nonnegative entries.
\State choose a stop criterion on the approximation error $\delta$;
\State choose maximal number of iterations $C$;
\State choose regularization parameter $\lambda_z, \lambda_w$;
\State $iter=0$;
\While{$|| \bA- (\bW\bZ)||^2>\delta $ and $iter<C$}
\State $iter=iter+1$;
\For{$k=1$ to $K$}
\For{$n=1$ to $N$} \Comment{update $k$-th row of $\bZ$}
\State $\bZ_{kn} \leftarrow \bZ_{kn}\frac{(\bW^\top\bA)_{kn}-
\textcolor{blue}{\lambda_z\bZ_{kn}}}{(\bW^\top\bW\bZ)_{kn}+\textcolor{blue}{\epsilon}}$;
\EndFor
\For{$m=1$ to $M$} \Comment{update $k$-th column of $\bW$}
\State $\bW_{mk} \leftarrow \bW_{mk} \frac{(\bA\bZ^\top)_{mk} -\textcolor{blue}{\lambda_w\bW_{mk}}}{(\bW\bZ\bZ^\top)_{mk}+\textcolor{blue}{\epsilon}}$;
\EndFor
\EndFor
\EndWhile
\State Output $\bW,\bZ$;
\end{algorithmic}
\end{algorithm}
Similar to the ALS with regularization in Section~\ref{section:regularization-extention-general}, where the regularization extends the ALS to general matrices, we can also add a regularization term in the context of NMF:
$$
L(\bW,\bZ) =||\bW\bZ-\bA||^2 +\lambda_w ||\bW||^2 + \lambda_z ||\bZ||^2, \qquad \lambda_w>0, \lambda_z>0,
$$
where the induced matrix norm is still the Frobenius norm. The gradient with respect to $\bZ$ given $\bW$ is the same as that in Equation~\eqref{equation:als-regulari-gradien}:
$$
\begin{aligned}
\frac{\partial L(\bZ|\bW)}{\partial \bZ} =2\bW^\top(\bW\bZ-\bA) + \textcolor{blue}{2\lambda_z\bZ} \in \real^{K\times N}.
\end{aligned}
$$
The trivial gradient descent update can be obtained by
$$
(\text{GD on }\bZ) \gap \bZ \leftarrow \bZ - \eta \left(\frac{\partial L(\bZ|\bW)}{\partial \bZ}\right)=\bZ - \eta \left(2 \bW^\top\bW\bZ-2\bW^\top\bA+\textcolor{blue}{2\lambda_z\bZ}\right).
$$
Analogously, if we suppose a different step size for each entry of $\bZ$ and incorporate the constant 2 into the step size, the update can be obtained by
$$
(\text{GD$^\prime$ on $\bZ$})\gap
\begin{aligned}
\bZ_{kn} &\leftarrow \bZ_{kn} - \frac{\eta_{kn}}{2} \left(\frac{\partial L(\bZ|\bW)}{\partial \bZ}\right)_{kn}\\
&=\bZ_{kn} - \eta_{kn}(\bW^\top\bW\bZ-\bW^\top\bA+\textcolor{blue}{\lambda_z\bZ})_{kn}, \gap k\in [1,K], n\in[1,N].
\end{aligned}
$$
Now if we rescale the step size:
$$
\eta_{kn} = \frac{\bZ_{kn}}{(\bW^\top\bW\bZ)_{kn}},
$$
then we obtain the update rule:
$$
(\text{Multiplicative update on $\bZ$})\gap
\bZ_{kn} \leftarrow \bZ_{kn}\frac{(\bW^\top\bA)_{kn}-
\textcolor{blue}{\lambda_z\bZ_{kn}}
}{(\bW^\top\bW\bZ)_{kn}}, \gap k\in [1,K], n\in[1,N].
$$
Similarly, the multiplicative update on $\bW$ can be obtained by
$$
(\text{Multiplicative update on $\bW$})\gap
\bW_{mk} \leftarrow \bW_{mk} \frac{(\bA\bZ^\top)_{mk}
-\textcolor{blue}{\lambda_w\bW_{mk}}
}{(\bW\bZ\bZ^\top)_{mk}}, \gap m\in[1,M], k\in[1,K].
$$
The procedure is then formulated in Algorithm~\ref{alg:nmf-multiplicative-regularization}.
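The regularized updates can be sketched as follows, again vectorized. Note that for large $\lambda_z, \lambda_w$ the numerators $(\bW^\top\bA)-\lambda_z\bZ$ and $(\bA\bZ^\top)-\lambda_w\bW$ can turn negative, so we clip them at zero to preserve nonnegativity; this safeguard is our own addition, not part of Algorithm~\ref{alg:nmf-multiplicative-regularization}.

```python
import numpy as np

def nmf_mu_regularized(A, K, lam_w=0.01, lam_z=0.01,
                       max_iter=200, eps=1e-9, seed=0):
    """Sketch of NMF via regularized multiplicative updates.

    The np.maximum(..., 0) clipping keeps the factors nonnegative even
    when the regularized numerator turns negative (an added safeguard).
    """
    rng = np.random.default_rng(seed)
    M, N = A.shape
    W = rng.random((M, K))
    Z = rng.random((K, N))
    for _ in range(max_iter):
        # Z_kn <- Z_kn * [(W^T A)_kn - lam_z Z_kn]_+ / ((W^T W Z)_kn + eps)
        Z *= np.maximum(W.T @ A - lam_z * Z, 0) / (W.T @ W @ Z + eps)
        # W_mk <- W_mk * [(A Z^T)_mk - lam_w W_mk]_+ / ((W Z Z^T)_mk + eps)
        W *= np.maximum(A @ Z.T - lam_w * W, 0) / (W @ Z @ Z.T + eps)
    return W, Z
```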
\section{Initialization}
In the above discussion, we initialize $\bW$ and $\bZ$ randomly. However, there are alternative strategies designed to obtain better initial estimates in the hope of converging more rapidly to a good solution \citep{boutsidis2008svd, gillis2014and}. We sketch these methods below for reference:
\begin{itemize}
\item \textit{Clustering techniques.} Apply a clustering method to the columns of $\bA$, use the cluster means of the top $K$ clusters as the columns of $\bW$, and initialize $\bZ$ as a proper scaling of
the cluster indicator matrix (that is, $\bZ_{kn}\neq 0$ indicates that $\ba_n$ belongs to the $k$-th cluster);
\item \textit{Subset selection.} Pick $K$ columns of $\bA$ and set those as the initial columns for $\bW$, and analogously, $K$ rows of $\bA$ are selected to form the rows of $\bZ$;
\item \textit{SVD-based.} Suppose the SVD of $\bA=\sum_{i=1}^{r}\sigma_i\bu_i\bv_i^\top$, where each factor $\sigma_i\bu_i\bv_i^\top$ is a rank-one matrix with possibly negative values in $\bu_i, \bv_i$ and nonnegative $\sigma_i$. Denoting $[x]_+=\max(x, 0)$, we notice
$$
\bu_i\bv_i^\top = [\bu_i]_+[\bv_i]_+^\top+[-\bu_i]_+[-\bv_i]_+^\top-[-\bu_i]_+[\bv_i]_+^\top-[\bu_i]_+[-\bv_i]_+^\top.
$$
Either $[\bu_i]_+[\bv_i]_+^\top$ or $[-\bu_i]_+[-\bv_i]_+^\top$ (scaled by $\sigma_i$) can then be selected to contribute a column of $\bW$ and a row of $\bZ$.
\end{itemize}
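The SVD-based strategy can be sketched as follows. Keeping, for each $i$, whichever of $[\bu_i]_+[\bv_i]_+^\top$ and $[-\bu_i]_+[-\bv_i]_+^\top$ carries more energy, and splitting $\sigma_i$ symmetrically between the two factors, are our own illustrative choices.

```python
import numpy as np

def svd_init(A, K):
    """Sketch of SVD-based NMF initialization: split each rank-one
    term sigma_i * u_i v_i^T into nonnegative parts and keep the
    larger of [u]_+[v]_+^T and [-u]_+[-v]_+^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    M, N = A.shape
    W, Z = np.zeros((M, K)), np.zeros((K, N))
    for i in range(K):
        up, vp = np.maximum(U[:, i], 0), np.maximum(Vt[i, :], 0)
        un, vn = np.maximum(-U[:, i], 0), np.maximum(-Vt[i, :], 0)
        # keep whichever nonnegative part of u_i v_i^T carries more energy
        if np.linalg.norm(up) * np.linalg.norm(vp) >= \
           np.linalg.norm(un) * np.linalg.norm(vn):
            W[:, i], Z[i, :] = np.sqrt(s[i]) * up, np.sqrt(s[i]) * vp
        else:
            W[:, i], Z[i, :] = np.sqrt(s[i]) * un, np.sqrt(s[i]) * vn
    return W, Z
```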
\section{Movie Recommender Context}
Both the NMF and the ALS approximate the matrix and reconstruct the entries in the matrix with a set of basis vectors. The basis in the NMF is composed of vectors with nonnegative elements while the basis in the ALS can have positive or negative values.
The difference is that the NMF reconstructs each vector as a nonnegative combination of the basis vectors, with a relatively small component in the direction of each basis vector.
In the ALS, by contrast, the data is modeled as a linear combination of the basis: vectors can be added or subtracted as needed, and the components in the direction of each basis vector can be large positive or negative values. Therefore, depending on the application, one or the other factorization can be utilized to describe the data with different meanings.
In the context of a movie recommender system then the rows of $\bW$ represent the feature of the movies and columns of $\bZ$ represent the features of a user.
In the NMF you can say that a movie is 0.5 comedy, 0.002 action, and 0.09 romantic. In the ALS, however, you can get combinations such as 4 comedy, $-0.05$ action, and $-3$ drama, i.e., a positive or negative component on each feature.
The ALS and NMF are similar in the sense that the importance of each basis vector is not hierarchical. The key difference between the ALS and the SVD, however, is that in the SVD the importance of each basis vector is determined by the associated singular value. For the SVD $\bA=\sum_{i=1}^{r}\sigma_i\bu_i\bv_i^\top$,
this usually means that the reconstruction $\sigma_1\bu_1\bv_1^\top$ from the first set of basis vectors dominates and contributes most to the reconstruction, followed by the second set, and so on. The SVD basis thus carries an implicit hierarchy that does not arise in the ALS or the NMF.
Recall the low-rank approximation on the flag image in Section~\ref{section:als-low-flag} where we find the second component $\bw_2\bz_2^\top$ via the ALS in Figure~\ref{fig:als52} plays an important role in the reconstruction of the original figure, whereas the second component $\sigma_2\bu_2\bv_2^\top$ via the SVD in Figure~\ref{fig:svd22} plays a small role in the reconstruction.
\chapter{Alternating Least Squares}\label{section:als}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Preliminary: Least Squares Approximations}
The linear model is the main technique in regression problems, and the primary
tool for it is the least squares approximation, which minimizes a sum of squared errors. This is a natural choice when we are interested in finding the regression function that minimizes the
corresponding expected squared error. Over recent decades, linear models have been used in a wide range of applications, e.g., decision making \citep{dawes1974linear} and time series \citep{christensen1991linear, lu2017machine}, and in many fields of study, such as production science, social science, and soil science \citep{fox1997applied, lane2002generalized, schaeffer2004application, mrode2014linear}.
Let's consider the overdetermined system $\bb = \bA\bx$, where $\bA\in \real^{m\times n}$ is the input data matrix, $\bb\in \real^m$ is the observation vector (target vector), and the number of samples $m$ is larger than the number of features $n$. The vector $\bx$ contains the weights of the linear model. Normally $\bA$ has full column rank, since real-world data are unlikely to be linearly dependent. In practice, a bias term is incorporated by prepending a column of ones to $\bA$, so that the least squares problem is to find the solution of
\begin{equation}\label{equation:ls-bias}
\widetildebA \widetildebx =
[\bm{1} ,\bA ]
\begin{bmatrix}
x_0\\
\bx
\end{bmatrix}
= \bb .
\end{equation}
It often happens that $\bb = \bA\bx$ has no solution. The usual reason is that there are too many equations, i.e., the matrix has more rows than columns.
Define the column space of $\bA$ as $\{\bA\bgamma: \,\, \bgamma \in \real^n\}$, denoted by $\cspace(\bA)$.
Thus, saying that $\bb = \bA\bx$ has no solution means that $\bb$ lies outside the column space of $\bA$. In other words, the error $\be = \bb -\bA\bx$ cannot be reduced to zero. When the error $\be$ is as small as possible in the sense of mean squared error, $\bx_{LS}$ is a least squares solution, i.e., $||\bb-\bA\bx_{LS}||^2$ is minimal.
\paragraph{Least squares by calculus}
When $||\bb-\bA\bx||^2$ is differentiable and the parameter space of $\bx$ is an open set (the least achievable value is attained inside the parameter space), the least squares estimator must be a root of the derivative of $||\bb-\bA\bx||^2$. We thus arrive at the following lemma.
\begin{lemma}[Least Squares by Calculus]\label{lemma:ols}
Assume $\bA \in \real^{m\times n}$ is fixed and has full rank (i.e., the columns of $\bA$ are linearly independent) with $m>n$. Consider the overdetermined system $\bb = \bA\bx$. The least squares solution, obtained by setting the derivative of $||\bb-\bA\bx||^2$ in every direction to zero, is $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$. The value $\bx_{LS}$ is known as the ordinary least squares (OLS) estimator, or simply the least squares (LS) estimator, of $\bx$.
\end{lemma}
To prove the lemma above, we must show that $\bA^\top\bA$ is invertible. Since we assume $\bA$ has full rank and $m>n$, the matrix $\bA^\top\bA \in \real^{n\times n}$ is invertible because it has rank $n$, the same as the rank of $\bA$. This has been proved in Lemma~\ref{lemma:rank-of-ata} (p.~\pageref{lemma:rank-of-ata}).
Applying the same observation to $\bA^\top$, we can also prove that $\bA\bA^\top$ and $\bA$ have the same rank. This result gives rise to the ordinary least squares estimator as follows.
\begin{proof}[of Lemma \ref{lemma:ols}]
Recall from calculus that a minimum of a function $f(\bx)$ occurs at a value $\bx_{LS}$ such that the derivative $\frac{\partial}{\partial \bx} f(\bx)=\bzero$. The derivative of $||\bb-\bA\bx||^2$ is $2\bA^\top\bA\bx -2\bA^\top\bb$, and $\bA^\top\bA$ is invertible since we assume $\bA$ is fixed and has full rank with $m>n$ (Lemma~\ref{lemma:rank-of-ata}, p.~\pageref{lemma:rank-of-ata}). So the OLS solution of $\bx$ is $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$, which completes the proof.
\end{proof}
\begin{definition}[Normal Equation]\label{definition:normal-equation-als}
We can write the zero derivative of $||\bb-\bA\bx||^2$ as $\bA^\top\bA \bx_{LS} = \bA^\top\bb$. The equation is also known as the \textit{normal equation}. In the assumption, $\bA$ has full rank with $m>n$. So $\bA^\top\bA$ is invertible which implies $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$.
\end{definition}
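As a quick numerical sanity check on random data, solving the normal equation $\bA^\top\bA\bx = \bA^\top\bb$ matches a generic least squares solver:

```python
import numpy as np

# The OLS estimator x_LS = (A^T A)^{-1} A^T b from the normal equation
# agrees with a library least squares solver.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))   # m > n; generically full column rank
b = rng.standard_normal(10)

x_normal = np.linalg.solve(A.T @ A, A.T @ b)       # normal equation
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)    # library solver
assert np.allclose(x_normal, x_lstsq)
```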
\begin{figure}[h!]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[A convex function]{\label{fig:convex-1}
\includegraphics[width=0.31\linewidth]{./imgs/convex.pdf}}
\subfigure[A concave function]{\label{fig:convex-2}
\includegraphics[width=0.31\linewidth]{./imgs/concave.pdf}}
\subfigure[A random function]{\label{fig:convex-3}
\includegraphics[width=0.31\linewidth]{./imgs/convex-none.pdf}}
\caption{Three functions.}
\label{fig:convex-concave-none}
\end{figure}
However, we do not yet know whether the least squares estimator obtained in Lemma~\ref{lemma:ols} attains the smallest achievable value, the largest, or neither. An example is shown in Figure~\ref{fig:convex-concave-none}. All we know so far is that there exists only one root of the derivative of $||\bb-\bA\bx||^2$. The following remark addresses this concern.
\begin{remark}[Verification of Least Squares Solution]
Why does the zero derivative imply least mean squared error? The usual reason is from convex analysis, as we shall see shortly. But here we verify directly that the OLS solution finds the least squares. For any $\bx \neq \bx_{LS}$, we have
\begin{equation}
\begin{aligned}
||\bb - \bA\bx ||^2 &= ||\bb - \bA\bx_{LS} + \bA\bx_{LS} - \bA\bx||^2 \\
&= ||\bb-\bA\bx_{LS} + \bA (\bx_{LS} - \bx)||^2 \\
&=||\bb-\bA\bx_{LS}||^2 + ||\bA(\bx_{LS} - \bx)||^2 + 2(\bA(\bx_{LS} - \bx))^\top(\bb-\bA\bx_{LS}) \\
&=||\bb-\bA\bx_{LS}||^2 + ||\bA(\bx_{LS} - \bx)||^2 + 2(\bx_{LS} - \bx)^\top(\bA^\top\bb - \bA^\top\bA\bx_{LS}), \nonumber
\end{aligned}
\end{equation}
where the third term is zero from the normal equation and $||\bA(\bx_{LS} - \bx)||^2 \geq 0$. Therefore,
\begin{equation}
||\bb - \bA\bx ||^2 \geq ||\bb-\bA\bx_{LS}||^2. \nonumber
\end{equation}
Thus we show that the OLS estimator indeed gives the minimum, not the maximum or a
saddle point via the calculus approach.
\end{remark}
A further question may be posed: why does this normal equation magically produce a solution for $\bx$? A simple example gives the answer: $x^2=-1$ has no real solution, but $x\cdot x^2 = x\cdot (-1)$ has a real solution $\hat{x} = 0$, in which case $\hat{x}$ makes $x^2$ and $-1$ as close as possible.
\begin{example}[Multiplying From the Left Can Change the Solution Set]
Consider the matrix
$$
\bA=\left[
\begin{matrix}
-3 & -4 \\
4 & 6 \\
1 & 1
\end{matrix}
\right] \mathrm{\,\,\,and\,\,\,}
\bb=\left[
\begin{matrix}
1 \\
-1 \\
0
\end{matrix}
\right].
$$
It can be easily verified that $\bA\bx = \bb$ has no solution for $\bx$. However, if we multiply on the left by
$$
\bB=\left[
\begin{matrix}
0 & -1 & 6\\
0 & 1 & -4
\end{matrix}
\right].
$$
Then $\hat{\bx} = [1/2, -1/2]^\top$ is the solution of $\bB\bA\bx= \bB\bb$, although it differs from the least squares solution, which corresponds to the particular choice $\bB=\bA^\top$. This specific example shows why the normal equation can give rise to the least squares solution: multiplying a linear system on the left can change its solution set.\exampbar
\end{example}
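The example can be checked numerically; the solution of $\bB\bA\bx=\bB\bb$ for this particular $\bB$ differs from the least squares solution obtained with $\bB=\bA^\top$:

```python
import numpy as np

# BAx = Bb is solvable although Ax = b is not, and its solution differs
# from the least squares solution (the choice B = A^T).
A = np.array([[-3., -4.], [4., 6.], [1., 1.]])
b = np.array([1., -1., 0.])
B = np.array([[0., -1., 6.], [0., 1., -4.]])

x = np.linalg.solve(B @ A, B @ b)
assert np.allclose(x, [0.5, -0.5])           # solution of BAx = Bb

x_ls = np.linalg.solve(A.T @ A, A.T @ b)     # B = A^T: least squares
assert not np.allclose(x, x_ls)              # a different left factor, a different solution
```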
\begin{mdframed}[hidealllines=true,backgroundcolor=gray!10,frametitle={Rank Deficiency}]
Note that we assume $\bA\in \real^{m\times n}$ has full rank with $m>n$ so that $\bA^\top\bA$ is invertible. But when two or more columns of $\bA$ are perfectly correlated, the matrix $\bA$ is rank deficient and $\bA^\top\bA$ is singular. In that case, among all $\bx$ satisfying the normal equation, choosing the one that minimizes $\bx^\top\bx$, i.e., the minimum-norm least squares solution, resolves the ambiguity. This, however, is not the main interest of this text, and we leave the topic to the reader. In Sections~\ref{section:ls-utv} and~\ref{section:application-ls-svd}, we briefly discuss how to use the UTV decomposition and the SVD to tackle the rank deficient least squares problem.
\end{mdframed}
\section{Netflix Recommender and Matrix Factorization}\label{section:als-netflix}
In the Netflix prize \citep{bennett2007netflix}, the goal was to predict ratings of users for different movies, given the existing ratings of those users for other movies.
We index $M$ movies with $m= 1, 2,\ldots,M$ and $N$ users
by $n = 1, 2,\ldots,N$. We denote the rating of the $n$-th user for the $m$-th movie by $a_{mn}$. Define $\bA$ to be an $M \times N$ rating matrix with columns $\ba_n \in \real^M$ containing ratings of the $n$-th user. Note that many ratings $a_{mn}$ are missing and our goal is to predict those missing ratings accurately.
We formally consider algorithms for solving the following problem: The matrix $\bA$ is approximately factorized into an $M\times K$ matrix $\bW$ and a $K \times N$ matrix $\bZ$. Usually $K$ is chosen to be smaller than $M$ or $N$, so that $\bW$ and $\bZ$ are smaller than the original
matrix $\bA$. This results in a compressed version of the original data matrix.
An
appropriate decision on the value of $K$ is critical in practice, but the choice of $K$ is very
often problem dependent.
The factorization is significant in the following sense. Suppose $\bA=[\ba_1, \ba_2, \ldots, \ba_N]$ and $\bZ=[\bz_1, \bz_2, \ldots, \bz_N]$ are the column partitions of $\bA$ and $\bZ$, respectively; then $\ba_n \approx \bW\bz_n$, i.e., each column $\ba_n$ is approximated by a linear combination of the columns of $\bW$, weighted by the components in $\bz_n$. Therefore, the columns of $\bW$ can be thought of as a basis for the columns of $\bA$. This is similar to the factorizations in the data interpretation part (Part~\ref{part:data-interation}, p.~\pageref{part:data-interation}); the difference is that we do not restrict $\bW$ to consist of exact columns of $\bA$.
To find the approximation $\bA\approx\bW\bZ$, we need a loss function that measures the distance between $\bA$ and $\bW\bZ$. The loss function is chosen to be the Frobenius norm of the difference between the two matrices, which vanishes when $\bA=\bW\bZ$; the advantage of this choice will be seen shortly.
To simplify the problem, let us first assume that there are no missing ratings. We project the data vectors $\ba_n$ to a smaller dimension $\bz_n \in \real^K$ with $K<M$, such that the \textit{reconstruction error}, measured by the Frobenius norm, is minimized (assuming $K$ is known):
\begin{equation}\label{equation:als-per-example-loss}
\mathop{\min}_{\bW,\bZ} \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bw_m^\top\bz_n\right)^2,
\end{equation}
where $\bW=[\bw_1^\top; \bw_2^\top; \ldots; \bw_M^\top]\in \real^{M\times K}$ and $\bZ=[\bz_1, \bz_2, \ldots, \bz_N] \in \real^{K\times N}$ containing $\bw_m$'s and $\bz_n$'s as rows and columns respectively. The loss form in Equation~\eqref{equation:als-per-example-loss} is known as the \textit{per-example loss}. It can be equivalently written as
$$
L(\bW,\bZ) = \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bw_m^\top\bz_n\right)^2 = ||\bW\bZ-\bA||^2.
$$
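The equivalence of the per-example loss and the squared Frobenius norm is easy to confirm numerically:

```python
import numpy as np

# The per-example loss sum_{m,n} (a_mn - w_m^T z_n)^2 equals ||WZ - A||_F^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))
W = rng.standard_normal((5, 2))   # rows w_m^T
Z = rng.standard_normal((2, 4))   # columns z_n

per_example = sum((A[m, n] - W[m] @ Z[:, n]) ** 2
                  for m in range(5) for n in range(4))
frobenius = np.linalg.norm(W @ Z - A) ** 2
assert np.isclose(per_example, frobenius)
```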
Moreover, the loss $L(\bW,\bZ)=\sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bw_m^\top\bz_n\right)^2$ is convex with respect to $\bZ$ given $\bW$, and vice versa. Therefore, we can first minimize with respect to $\bZ$
given $\bW$ and then minimize with respect to $\bW$
given $\bZ$:
$$
\left\{
\begin{aligned}
\bZ &\leftarrow \mathop{\arg \min}_{\bZ} L(\bW,\bZ); \qquad \text{(ALS1)} \\
\bW &\leftarrow \mathop{\arg \min}_{\bW} L(\bW,\bZ). \qquad \text{(ALS2)}
\end{aligned}
\right.
$$
This is known as the \textit{coordinate descent algorithm}; since each subproblem is solved by least squares, it is also called the \textit{alternating least squares (ALS)} algorithm \citep{comon2009tensor, takacs2012alternating, giampouras2018alternating}. Convergence is guaranteed if the loss function $L(\bW,\bZ)$ decreases at each iteration, and we shall discuss this further in the sequel.\index{ALS}
\begin{remark}[Convexity and Global Minimum]
Although the loss function defined by the Frobenius norm $|| \bW\bZ-\bA||^2$ is convex in $\bW$ given $\bZ$, and vice versa, it is not jointly convex in both variables. Therefore, we are not guaranteed to find the global minimum; the iterations are only assured to converge to a local minimum.
\end{remark}
\paragraph{Given $\bW$, Optimizing $\bZ$}
Now, let us examine the subproblem $\bZ \leftarrow \mathop{\arg \min}_{\bZ} L(\bW,\bZ)$. When there exists a unique minimum of the loss function $L(\bW,\bZ)$ with respect to $\bZ$, we speak of the \textit{least squares} minimizer of $\mathop{\arg \min}_{\bZ} L(\bW,\bZ)$.
Given $\bW$, $L(\bW,\bZ)$ can be written as $L(\bZ|\bW)$ to emphasize the variable $\bZ$:
$$
\begin{aligned}
L(\bZ|\bW) &= ||\bW\bZ-\bA||^2= \left\Vert\bW[\bz_1,\bz_2,\ldots, \bz_N]-[\ba_1,\ba_2,\ldots,\ba_N]\right\Vert^2=\left\Vert
\begin{bmatrix}
\bW\bz_1 - \ba_1 \\
\bW\bz_2 - \ba_2\\
\vdots \\
\bW\bz_N - \ba_N
\end{bmatrix}
\right\Vert^2.
\end{aligned}\footnote{The matrix norm used here is the Frobenius norm (Definition~\ref{definition:frobenius}, p.~\pageref{definition:frobenius}) such that $|| \bA ||= \sqrt{\sum_{i=1,j=1}^{m,n} (\bA_{ij})^2}$ if $\bA\in \real^{m\times n}$. And the vector norm used here is the $l_2$ norm (Section~\ref{section:vector-norm}, p.~\pageref{section:vector-norm}) such that $||\bx||_2 = \sqrt{\sum_{i=1}^{n}x_i^2}$ if $\bx\in \real^n$.}
$$
Now, if we define
$$
\widetildebW =
\begin{bmatrix}
\bW & \bzero & \ldots & \bzero\\
\bzero & \bW & \ldots & \bzero\\
\vdots & \vdots & \ddots & \vdots \\
\bzero & \bzero & \ldots & \bW
\end{bmatrix}
\in \real^{MN\times KN},
\gap
\widetildebz=
\begin{bmatrix}
\bz_1 \\ \bz_2 \\ \vdots \\ \bz_N
\end{bmatrix}
\in \real^{KN},
\gap
\widetildeba=
\begin{bmatrix}
\ba_1 \\ \ba_2 \\ \vdots \\ \ba_N
\end{bmatrix}
\in \real^{MN},
$$
then the (ALS1) problem reduces to an ordinary least squares problem, minimizing $||\widetildebW \widetildebz - \widetildeba||^2$ with respect to $\widetildebz$, with solution given by
$$
\widetildebz = (\widetildebW^\top\widetildebW)^{-1} \widetildebW^\top\widetildeba.
$$
The construction may seem reasonable at first glance, and since $\widetildebW$ is block diagonal with full-rank blocks, $\widetildebW^\top\widetildebW$ is indeed invertible. However, forming and inverting this $KN\times KN$ matrix is unnecessarily expensive; the block structure simply decouples the problem into $N$ independent least squares problems.
A direct way to solve (ALS1) is to find the differential of $L(\bZ|\bW)$ with respect to $\bZ$:
\begin{equation}\label{equation:givenw-update-z-allgd}
\begin{aligned}
\frac{\partial L(\bZ|\bW)}{\partial \bZ} &=
\frac{\partial \,\,tr\left((\bW\bZ-\bA)(\bW\bZ-\bA)^\top\right)}{\partial \bZ}\\
&=\frac{\partial \,\,tr\left((\bW\bZ-\bA)(\bW\bZ-\bA)^\top\right)}{\partial (\bW\bZ-\bA)}
\frac{\partial (\bW\bZ-\bA)}{\partial \bZ}\\
&\stackrel{\star}{=}2 \bW^\top(\bW\bZ-\bA) \in \real^{K\times N},
\end{aligned}
\end{equation}
where the first equality is from the definition of the Frobenius norm (Definition~\ref{definition:frobenius}, p.~\pageref{definition:frobenius}) such that $|| \bA || = \sqrt{\sum_{i=1,j=1}^{m,n} (\bA_{ij})^2}=\sqrt{tr(\bA\bA^\top)}$, and equality ($\star$) comes from the fact that $\frac{\partial \,tr(\bA\bA^\top)}{\partial \bA} = 2\bA$. When the loss function is a differentiable function of $\bZ$, we may determine the least squares solution by differential calculus: a minimum of the function
$L(\bZ|\bW)$ must be a root of the equation:
$$
\frac{\partial L(\bZ|\bW)}{\partial \bZ} = \bzero.
$$
By finding the root of the above equation, we obtain the ``candidate'' update on $\bZ$ that minimizes $L(\bZ|\bW)$:
\begin{equation}\label{equation:als-z-update}
\bZ = (\bW^\top\bW)^{-1} \bW^\top \bA \leftarrow \mathop{\arg \min}_{\bZ} L(\bZ|\bW).
\end{equation}
Before we can declare that a root of the above equation is actually a minimizer rather than a maximizer (which is why we called the update a ``candidate'' update), we need to verify that the function is convex. If the function is twice differentiable, this can equivalently be done by verifying
$$
\frac{\partial^2 L(\bZ|\bW)}{\partial \bZ^2} > 0,
$$
i.e., the Hessian matrix is positive definite (recall the definition of positive definiteness, Definition~\ref{definition:psd-pd-defini}, p.~\pageref{definition:psd-pd-defini}). To see this, we write out the twice differential
$$
\frac{\partial^2 L(\bZ|\bW)}{\partial \bZ^2}= 2\bW^\top\bW \in \real^{K\times K},
$$
which has full rank if $\bW\in \real^{M\times K}$ has full rank (Lemma~\ref{lemma:rank-of-ata}, p.~\pageref{lemma:rank-of-ata}) and $K<M$. We claim that if $\bW$ has full rank, then $\frac{\partial^2 L(\bZ|\bW)}{\partial \bZ^2}$ is positive definite. This can be verified by checking that when $\bW$ has full rank, $\bW\bx=\bzero$ holds only when $\bx=\bzero$, since the null space of $\bW$ has dimension 0. Therefore,
$$
\bx^\top (2\bW^\top\bW)\bx >0, \qquad \text{for any nonzero vector $\bx\in \real^K$}.
$$
Now, the point is that we need to check \textcolor{blue}{whether $\bW$ has full rank, so that the Hessian of $L(\bZ|\bW)$ is positive definite}; otherwise, we cannot claim that the update of $\bZ$ in Equation~\eqref{equation:als-z-update} decreases the loss, i.e., that the matrix decomposition is proceeding in the right direction to better approximate the original matrix $\bA$ by $\bW\bZ$.
We will come back to the positive definiteness of the Hessian matrix shortly; it relies on the following lemma.
\begin{lemma}[Rank of $\bZ$ after Updating]\label{lemma:als-update-z-rank}
Suppose $\bA\in \real^{M\times N}$ has full rank with $M\leq N$ and $\bW\in \real^{M\times K}$ has full rank with $K<M$, then the update of $\bZ=(\bW^\top\bW)^{-1} \bW^\top \bA \in \real^{K\times N}$ in Equation~\eqref{equation:als-z-update} has full rank.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:als-update-z-rank}]
Since $\bW$ has full rank, $\bW^\top\bW\in \real^{K\times K}$ has full rank (Lemma~\ref{lemma:rank-of-ata}, p.~\pageref{lemma:rank-of-ata}), so that $(\bW^\top\bW)^{-1}$ has full rank as well.
Suppose $\bW^\top\bx=\bzero$; this implies $(\bW^\top\bW)^{-1} \bW^\top\bx=\bzero$. Thus
$$
\nspace(\bW^\top) \subseteq \nspace\left((\bW^\top\bW)^{-1} \bW^\top\right).
$$
Moreover, suppose $(\bW^\top\bW)^{-1} \bW^\top\bx=\bzero$. Since $(\bW^\top\bW)$ is invertible, multiplying by it on the left yields $ \bW^\top\bx=(\bW^\top\bW)\bzero=\bzero$, and
$$
\nspace\left((\bW^\top\bW)^{-1} \bW^\top\right)\subseteq \nspace(\bW^\top).
$$
As a result, by ``sandwiching", it follows that
\begin{equation}\label{equation:als-z-sandiwch1}
\nspace(\bW^\top) = \nspace\left((\bW^\top\bW)^{-1} \bW^\top\right).
\end{equation}
Therefore, $(\bW^\top\bW)^{-1} \bW^\top$ has full rank $K$. Let $\bT=(\bW^\top\bW)^{-1} \bW^\top\in \real^{K\times M}$, and suppose $\bT^\top\bx=\bzero$. This implies $\bA^\top\bT^\top\bx=\bzero$, and
$$
\nspace(\bT^\top) \subseteq \nspace(\bA^\top\bT^\top).
$$
Similarly, suppose $\bA^\top(\bT^\top\bx)=\bzero$. Since $\bA$ has full rank $M$, the row space of $\bA^\top$ equals the column space of $\bA$ with $dim\left(\cspace(\bA)\right)=M$, so that $dim\left(\nspace(\bA^\top)\right) = M-dim\left(\cspace(\bA)\right)=0$; hence $(\bT^\top\bx)$ must be zero. Therefore, $\bx$ is in the null space of $\bT^\top$ whenever $\bx$ is in the null space of $\bA^\top\bT^\top$:
$$
\nspace(\bA^\top\bT^\top)\subseteq \nspace(\bT^\top).
$$
By ``sandwiching" again,
\begin{equation}\label{equation:als-z-sandiwch2}
\nspace(\bT^\top) = \nspace(\bA^\top\bT^\top).
\end{equation}
Since $\bT^\top$ has full rank $K<M<N$, $dim\left(\nspace(\bT^\top) \right) = dim\left(\nspace(\bA^\top\bT^\top)\right)=0$.
Therefore,
$\bZ^\top=\bA^\top\bT^\top$ has full rank $K$.
We complete the proof.
\end{proof}
\paragraph{Given $\bZ$, Optimizing $\bW$}
Given $\bZ$, $L(\bW,\bZ)$ can be written as $L(\bW|\bZ)$ to emphasize the variable $\bW$:
$$
\begin{aligned}
L(\bW|\bZ) &= ||\bW\bZ-\bA||^2.
\end{aligned}
$$
A direct way to solve (ALS2) is to find the differential of $L(\bW|\bZ)$ with respect to $\bW$:
$$
\begin{aligned}
\frac{\partial L(\bW|\bZ)}{\partial \bW} &=
\frac{\partial \,\,tr\left((\bW\bZ-\bA)(\bW\bZ-\bA)^\top\right)}{\partial \bW}\\
&=\frac{\partial \,\,tr\left((\bW\bZ-\bA)(\bW\bZ-\bA)^\top\right)}{\partial (\bW\bZ-\bA)}
\frac{\partial (\bW\bZ-\bA)}{\partial \bW}\\
&= 2(\bW\bZ-\bA)\bZ^\top \in \real^{M\times K}.
\end{aligned}
$$
The ``candidate'' update on $\bW$ is similarly obtained by finding the root of the differential $\frac{\partial L(\bW|\bZ)}{\partial \bW}$:
\begin{equation}\label{equation:als-w-update}
\bW^\top = (\bZ\bZ^\top)^{-1}\bZ\bA^\top \leftarrow \mathop{\arg\min}_{\bW} L(\bW|\bZ).
\end{equation}
Again, we emphasize that the update is only a ``candidate" update. We need to further check whether the Hessian is positive definite or not.
The Hessian matrix is given by
$$
\begin{aligned}
\frac{\partial^2 L(\bW|\bZ)}{\partial \bW^2} =2\bZ\bZ^\top \in \real^{K\times K}.
\end{aligned}
$$
Therefore, by analogous analysis, if $\bZ$ has full rank with $K<N$, the Hessian matrix is positive definite.
\begin{lemma}[Rank of $\bW$ after Updating]\label{lemma:als-update-w-rank}
Suppose $\bA\in \real^{M\times N}$ has full rank with $M\leq N$ and $\bZ\in \real^{K\times N}$ has full rank with $K<N$, then the update of $\bW^\top = (\bZ\bZ^\top)^{-1}\bZ\bA^\top$ in Equation~\eqref{equation:als-w-update} has full rank.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:als-update-w-rank}]
The proof differs from that of Lemma~\ref{lemma:als-update-z-rank}; note in particular that $\bZ\in \real^{K\times N}$ and $\bA^\top\in \real^{N\times M}$ are not square, so a determinant argument does not apply. Since $(\bZ\bZ^\top)^{-1}$ is invertible, $\bW^\top = (\bZ\bZ^\top)^{-1}\bZ\bA^\top$ has the same rank as $\bZ\bA^\top$, which equals the rank of $(\bZ\bA^\top)^\top=\bA\bZ^\top$. Suppose $\bA\bZ^\top\bx=\bzero$ for some $\bx\in \real^K$. Since $\bZ^\top$ has full column rank $K$, $\bZ^\top\bx$ is nonzero whenever $\bx$ is nonzero, so rank deficiency can occur only if $\cspace(\bZ^\top)$, the row space of $\bZ$, intersects $\nspace(\bA)$ nontrivially. As $dim\left(\cspace(\bZ^\top)\right)=K<M$ and $dim\left(\nspace(\bA)\right)=N-M$, the two subspaces intersect trivially for a generic (e.g., randomly initialized) $\bZ$; in that case $\bA\bZ^\top\bx=\bzero$ implies $\bx=\bzero$, and $\bW^\top$ has full rank $K$.
\end{proof}
Combining the observations in Lemma~\ref{lemma:als-update-z-rank} and Lemma~\ref{lemma:als-update-w-rank}, as long as we \textcolor{blue}{initialize $\bZ, \bW$ to have full rank}, the updates in Equation~\eqref{equation:als-z-update} and Equation~\eqref{equation:als-w-update} are well defined. \textcolor{blue}{The requirement $M\leq N$ is reasonable, in that there are typically more users than movies}. We summarize the process in Algorithm~\ref{alg:als}.
\begin{algorithm}[H]
\caption{Alternating Least Squares}
\label{alg:als}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$ \textcolor{blue}{with $M\leq N$};
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ \textcolor{blue}{with full rank and $K<M\leq N$};
\State choose a stop criterion on the approximation error $\delta$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{$||\bA-\bW\bZ||>\delta $ and $iter<C$}
\State $iter=iter+1$;
\State $\bZ = (\bW^\top\bW)^{-1} \bW^\top \bA \leftarrow \mathop{\arg \min}_{\bZ} L(\bZ|\bW)$;
\State $\bW^\top = (\bZ\bZ^\top)^{-1}\bZ\bA^\top \leftarrow \mathop{\arg\min}_{\bW} L(\bW|\bZ)$;
\EndWhile
\State Output $\bW,\bZ$;
\end{algorithmic}
\end{algorithm}
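Algorithm~\ref{alg:als} can be sketched in a few lines of NumPy. We use \texttt{np.linalg.solve} on the normal equations rather than forming the inverses explicitly (a standard numerical practice), and we assume the iterates keep full rank, as discussed above; the function name and default parameters are illustrative choices.

```python
import numpy as np

def als(A, K, max_iter=50, delta=1e-8, seed=0):
    """Sketch of alternating least squares for A ~ W Z.

    Assumes M <= N and that the iterates keep full rank, as in the text.
    """
    rng = np.random.default_rng(seed)
    M, N = A.shape
    W = rng.standard_normal((M, K))     # full rank with probability 1
    for _ in range(max_iter):
        # Z = (W^T W)^{-1} W^T A
        Z = np.linalg.solve(W.T @ W, W.T @ A)
        # W^T = (Z Z^T)^{-1} Z A^T
        W = np.linalg.solve(Z @ Z.T, Z @ A.T).T
        if np.linalg.norm(A - W @ Z) <= delta:
            break
    return W, Z
```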
\section{Regularization: Extension to General Matrices}\label{section:regularization-extention-general}
We can add a regularization to minimize the following loss:
\begin{equation}\label{equation:als-regularion-full-matrix}
L(\bW,\bZ) =||\bW\bZ-\bA||^2 +\lambda_w ||\bW||^2 + \lambda_z ||\bZ||^2, \qquad \lambda_w>0, \lambda_z>0,
\end{equation}
where the differential with respect to $\bZ, \bW$ are given respectively by
\begin{equation}\label{equation:als-regulari-gradien}
\left\{
\begin{aligned}
\frac{\partial L(\bW,\bZ) }{\partial \bZ} &= 2\bW^\top(\bW\bZ-\bA) + 2\lambda_z\bZ \in \real^{K\times N};\\
\frac{\partial L(\bW,\bZ) }{\partial \bW} &= 2(\bW\bZ-\bA)\bZ^\top + 2\lambda_w\bW \in \real^{M\times K}.
\end{aligned}
\right.
\end{equation}
The Hessian matrices are given respectively by
$$
\left\{
\begin{aligned}
\frac{\partial^2 L(\bW,\bZ) }{\partial \bZ^2} &= 2\bW^\top\bW+ 2\lambda_z\bI \in \real^{K\times K};\\
\frac{\partial^2 L(\bW,\bZ) }{\partial \bW^2} &= 2\bZ\bZ^\top + 2\lambda_w\bI \in \real^{K\times K}, \\
\end{aligned}
\right.
$$
which are positive definite due to the perturbation by the regularization. To see this,
$$
\left\{
\begin{aligned}
\bx^\top (2\bW^\top\bW +2\lambda_z\bI)\bx &= \underbrace{2\bx^\top\bW^\top\bW\bx}_{\geq 0} + 2\lambda_z ||\bx||^2>0, \gap \text{for nonzero $\bx$};\\
\bx^\top (2\bZ\bZ^\top +2\lambda_w\bI)\bx &= \underbrace{2\bx^\top\bZ\bZ^\top\bx}_{\geq 0} + 2\lambda_w ||\bx||^2>0,\gap \text{for nonzero $\bx$}.
\end{aligned}
\right.
$$
\textcolor{blue}{The regularization makes the Hessian matrices positive definite even if $\bW, \bZ$ are rank deficient}. The matrix decomposition can thus be extended to any matrix, even when $M>N$. In rare cases, $K$ can be chosen with $K>\max\{M, N\}$ so that a high-rank approximation of $\bA$ is obtained. However, in most scenarios, we want to find a low-rank approximation of $\bA$ with $K<\min\{M, N\}$. For example, the ALS can be utilized to find low-rank neural networks, reducing the memory footprint of the networks while improving performance (Section~\ref{section:low-rank-neural}, p.~\pageref{section:low-rank-neural}).
Therefore, the minimizers are given by finding the roots of the differential:
\begin{equation}\label{equation:als-regular-final-all}
\left\{
\begin{aligned}
\bZ &= (\bW^\top\bW+ \lambda_z\bI)^{-1} \bW^\top \bA ;\\
\bW^\top &= (\bZ\bZ^\top+\lambda_w\bI)^{-1}\bZ\bA^\top .
\end{aligned}
\right.
\end{equation}
The regularization parameters $\lambda_z, \lambda_w\in \real$ are used to balance the trade-off
between the accuracy of the approximation and the smoothness of the computed solution. The selection of the parameters is typically problem dependent and can be done by \textit{cross-validation}. Again, we summarize the process in Algorithm~\ref{alg:als-regularizer}.
\begin{algorithm}[H]
\caption{Alternating Least Squares with Regularization}
\label{alg:als-regularizer}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$};
\State choose a stop criterion on the approximation error $\delta$;
\State choose regularization parameters $\lambda_w, \lambda_z$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{$||\bA-\bW\bZ||>\delta $ and $iter<C$}
\State $iter=iter+1$;
\State $\bZ = (\bW^\top\bW+ \lambda_z\bI)^{-1} \bW^\top \bA \leftarrow \mathop{\arg \min}_{\bZ} L(\bZ|\bW)$;
\State $\bW^\top = (\bZ\bZ^\top+\lambda_w\bI)^{-1}\bZ\bA^\top \leftarrow \mathop{\arg\min}_{\bW} L(\bW|\bZ)$;
\EndWhile
\State Output $\bW,\bZ$;
\end{algorithmic}
\end{algorithm}
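To make the procedure concrete, here is a minimal NumPy sketch of Algorithm~\ref{alg:als-regularizer} (the function name \texttt{als} and the default hyperparameters are illustrative choices, not part of the algorithm):

```python
import numpy as np

def als(A, K, lam_w=0.1, lam_z=0.1, delta=1e-6, max_iter=500, seed=0):
    """Regularized ALS: find W (M x K), Z (K x N) with A ~= W @ Z."""
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, K))   # random init, no condition on rank
    Z = rng.standard_normal((K, N))
    I = np.eye(K)
    for _ in range(max_iter):
        # Z <- (W^T W + lam_z I)^{-1} W^T A
        Z = np.linalg.solve(W.T @ W + lam_z * I, W.T @ A)
        # W^T <- (Z Z^T + lam_w I)^{-1} Z A^T
        W = np.linalg.solve(Z @ Z.T + lam_w * I, Z @ A.T).T
        if np.linalg.norm(A - W @ Z) <= delta:
            break
    return W, Z
```

Since the regularized normal matrices $\bW^\top\bW+\lambda_z\bI$ and $\bZ\bZ^\top+\lambda_w\bI$ are positive definite, the linear solves never fail, regardless of the ranks of $\bW, \bZ$.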
\section{Missing Entries}\label{section:alt-columb-by-column}
The matrix decomposition via the ALS is extensively used on the Netflix recommender data, where many entries are missing because many users have not watched some movies or will not rate them for some reason. We can introduce an additional mask matrix $\bM\in \real^{M\times N}$, where $\bM_{mn}\in \{0,1\}$ indicates whether user $n$ has rated movie $m$ or not. The loss function can then be defined as
$$
L(\bW,\bZ) = ||\bM\circledast \bA- \bM\circledast (\bW\bZ)||^2,
$$
where $\circledast$ denotes the \textit{Hadamard product} between matrices. For example, the Hadamard product of a $3 \times 3$ matrix $\bA$ with a $3\times 3$ matrix $\bB$ is
$$
\bA\circledast \bB =
\begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}
\circledast
\begin{bmatrix}
b_{11} & b_{12} & b_{13} \\
b_{21} & b_{22} & b_{23} \\
b_{31} & b_{32} & b_{33}
\end{bmatrix}
=
\begin{bmatrix}
a_{11}b_{11} & a_{12}b_{12} & a_{13}b_{13} \\
a_{21}b_{21} & a_{22}b_{22} & a_{23}b_{23} \\
a_{31}b_{31} & a_{32}b_{32} & a_{33}b_{33}
\end{bmatrix}.
$$
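As a quick numerical check of the masked loss, consider a toy example (the numbers below are made up for illustration):

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)   # [[1,2,3],[4,5,6],[7,8,9]]
B = 2 * np.ones((3, 3))
H = A * B                                # Hadamard product: elementwise
M_mask = np.array([[1., 0., 1.],
                   [1., 1., 0.],
                   [0., 1., 1.]])        # 1 = rated, 0 = missing
WZ = np.full((3, 3), 5.0)                # some candidate reconstruction W @ Z
# only observed entries contribute: masked-out entries are zeroed on both sides
loss = np.linalg.norm(M_mask * A - M_mask * WZ) ** 2
```

In NumPy, `*` on arrays is exactly the Hadamard product, so the masked loss is a one-liner.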
To find the solution of the problem, let's decompose the updates in Equation~\eqref{equation:als-regular-final-all} into:
\begin{equation}\label{equation:als-ori-all-wz}
\left\{
\begin{aligned}
\bz_n &= (\bW^\top\bW+ \lambda_z\bI)^{-1} \bW^\top \ba_n, &\gap& \text{for $n\in \{1,2,\ldots, N\}$} ;\\
\bw_m &= (\bZ\bZ^\top+\lambda_w\bI)^{-1}\bZ\bb_m, &\gap& \text{for $m\in \{1,2,\ldots, M\}$} ,
\end{aligned}
\right.
\end{equation}
where $\bZ=[\bz_1, \bz_2, \ldots, \bz_N]$ and $\bA=[\ba_1,\ba_2, \ldots, \ba_N]$ are the column partitions of $\bZ$ and $\bA$, respectively, and $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M]$ and $\bA^\top=[\bb_1,\bb_2, \ldots, \bb_M]$ are the column partitions of $\bW^\top$ and $\bA^\top$, respectively. This factorization of the updates indicates that the update can be done in a column-by-column fashion.
\paragraph{Given $\bW$}
Let $\bo_n\in \real^M$ denote the movies rated by user $n$, where $o_{nm}=1$ if user $n$ has rated movie $m$, and $o_{nm}=0$ otherwise. Then the $n$-th column of $\bA$ restricted to its observed entries can be denoted, in Matlab-style notation, by $\ba_n[\bo_n]$. We want to approximate the observed part of the $n$-th column by $\ba_n[\bo_n] \approx \bW[\bo_n, :]\bz_n$, which is a (regularized) least squares problem:
\begin{equation}\label{equation:als-ori-all-wz-modif-z}
\begin{aligned}
\bz_n &= \left(\bW[\bo_n, :]^\top\bW[\bo_n, :]+ \lambda_z\bI\right)^{-1} \bW[\bo_n, :]^\top \ba_n[\bo_n], &\gap& \text{for $n\in \{1,2,\ldots, N\}$} .
\end{aligned}
\end{equation}
Moreover, the loss function with respect to $\bz_n$ is
$$
L(\bz_n|\bW) =\sum_{m\in \bo_n} \left(a_{mn} - \bw_m^\top\bz_n\right)^2
$$
and the loss over all users is
$$
L(\bZ|\bW) =\sum_{n=1}^N\ \sum_{m\in \bo_n} \left(a_{mn} - \bw_m^\top\bz_n\right)^2.
$$
\paragraph{Given $\bZ$}
Similarly, let $\bp_m \in \real^{N}$ denote the users that have rated movie $m$, with $p_{mn}=1$ if movie $m$ has been rated by user $n$, and $p_{mn}=0$ otherwise. Then the $m$-th row of $\bA$ restricted to its observed entries can be denoted, in Matlab-style notation, by $\bb_m[\bp_m]$. We want to approximate the observed part of the $m$-th row by $\bb_m[\bp_m] \approx \bZ[:, \bp_m]^\top\bw_m$,
\footnote{Note that $\bZ[:, \bp_m]^\top$ is the transpose of $\bZ[:, \bp_m]$, which is equal to $\bZ^\top[\bp_m,:]$, i.e., transposing first and then selecting.}
which again is a (regularized) least squares problem:
\begin{equation}\label{equation:als-ori-all-wz-modif-w}
\begin{aligned}
\bw_m &= (\bZ[:, \bp_m]\bZ[:, \bp_m]^\top+\lambda_w\bI)^{-1}\bZ[:, \bp_m]\bb_m[\bp_m], &\gap& \text{for $m\in \{1,2,\ldots, M\}$} .
\end{aligned}
\end{equation}
Moreover, the loss function with respect to $\bw_m$ is
$$
L(\bw_m|\bZ) =\sum_{n\in \bp_m} \left(a_{mn} - \bw_m^\top\bz_n\right)^2
$$
and the loss over all movies is
$$
L(\bW|\bZ) =\sum_{m=1}^M \sum_{n\in \bp_m} \left(a_{mn} - \bw_m^\top\bz_n\right)^2.
$$
The procedure is again formulated in Algorithm~\ref{alg:als-regularizer-missing-entries}.
\begin{algorithm}[H]
\caption{Alternating Least Squares with Missing Entries and Regularization}
\label{alg:als-regularizer-missing-entries}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$};
\State choose a stop criterion on the approximation error $\delta$;
\State choose regularization parameters $\lambda_w, \lambda_z$;
\State compute the mask matrix $\bM$ from $\bA$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{\textcolor{blue}{$||\bM\circledast \bA- \bM\circledast (\bW\bZ)||^2>\delta $} and $iter<C$}
\State $iter=iter+1$;
\For{$n=1,2,\ldots, N$}
\State $\bz_n = \left(\bW[\bo_n, :]^\top\bW[\bo_n, :]+ \lambda_z\bI\right)^{-1} \bW[\bo_n, :]^\top \ba_n[\bo_n]$; \Comment{$n$-th column of $\bZ$}
\EndFor
\For{$m=1,2,\ldots, M$}
\State $\bw_m = (\bZ[:, \bp_m]\bZ[:, \bp_m]^\top+\lambda_w\bI)^{-1}\bZ[:, \bp_m]\bb_m[\bp_m]$;\Comment{$m$-th column of $\bW^\top$}
\EndFor
\EndWhile
\State Output $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M],\bZ=[\bz_1, \bz_2, \ldots, \bz_N]$;
\end{algorithmic}
\end{algorithm}
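A minimal NumPy sketch of the column-by-column updates in Algorithm~\ref{alg:als-regularizer-missing-entries} follows (the function name \texttt{als\_masked} and the dense boolean mask are illustrative choices; a production recommender would use sparse storage):

```python
import numpy as np

def als_masked(A, M_mask, K, lam=0.1, n_iter=100, seed=0):
    """ALS with missing entries: only entries with M_mask == 1 contribute."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((m, K))
    Z = rng.standard_normal((K, n))
    I = np.eye(K)
    for _ in range(n_iter):
        for j in range(n):                 # z_j from the rows rated by user j
            o = M_mask[:, j].astype(bool)  # o_j: observed movies for user j
            Wo = W[o, :]
            Z[:, j] = np.linalg.solve(Wo.T @ Wo + lam * I, Wo.T @ A[o, j])
        for i in range(m):                 # w_i from the users who rated movie i
            p = M_mask[i, :].astype(bool)  # p_i: observing users for movie i
            Zp = Z[:, p]
            W[i, :] = np.linalg.solve(Zp @ Zp.T + lam * I, Zp @ A[i, p])
    return W, Z
```

Note that an entirely unobserved row or column is harmless here: the normal matrix degenerates to $\lambda\bI$ and the update simply returns the zero vector.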
\section{Vector Inner Product}\label{section:als-vector-product}
We have seen that the ALS finds matrices $\bW, \bZ$ such that $\bW\bZ$ approximates $\bA$, i.e., $\bA\approx \bW\bZ$, in terms of the minimum squared loss:
$$
\mathop{\min}_{\bW,\bZ} \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bw_m^\top\bz_n\right)^2,
$$
that is, each entry $a_{mn}$ of $\bA$ is approximated by the inner product $\bw_m^\top\bz_n$ between two vectors. The geometric definition of the vector inner product is given by
$$
\bw_m^\top\bz_n = ||\bw_m||\cdot ||\bz_n|| \cos \theta,
$$
where $\theta$ is the angle between $\bw_m$ and $\bz_n$. So once the norms of $\bw_m, \bz_n$ are fixed, the smaller the angle, the larger the inner product.
Returning to the Netflix data, where the ratings range from 0 to 5 and larger is better: if $\bw_m$ and $\bz_n$ are ``close'' enough, then $\bw_m^\top\bz_n$ has a large value. This reveals the meaning behind the ALS: $\bw_m$ represents the features of movie $m$, whilst $\bz_n$ contains the features of user $n$, and corresponding elements of $\bw_m$ and $\bz_n$ represent the same feature. For example, it could be that the second feature $w_{m2}$ \footnote{$w_{m2}$ is the second element of the vector $\bw_m$.} represents whether the movie is an action movie, and $z_{n2}$ denotes whether user $n$ likes action movies. If this is the case, then $\bw_m^\top\bz_n$ is large and approximates $a_{mn}$ well.
Note that, in the decomposition $\bA\approx \bW\bZ$, the rows of $\bW$ contain the hidden features of the movies and the columns of $\bZ$ contain the hidden features of the users. However, we cannot identify the meanings of these features. They could be something like categories or genres of the movies that provide underlying connections between the users and the movies, but we cannot be sure what exactly they are. This is where the terminology ``hidden'' comes from.
\section{Gradient Descent}\label{section:als-gradie-descent}
In Equation~\eqref{equation:als-ori-all-wz}, we obtained the column-by-column update directly from the full-matrix form in Equation~\eqref{equation:als-regular-final-all} (with regularization considered). Now let's see the idea behind it. Following from Equation~\eqref{equation:als-regularion-full-matrix}, the loss under regularization is
\begin{equation}
L(\bW,\bZ) =||\bW\bZ-\bA||^2 +\lambda_w ||\bW||^2 + \lambda_z ||\bZ||^2, \qquad \lambda_w>0, \lambda_z>0.
\end{equation}
Since we are now considering the minimization of the above loss with respect to $\bz_n$, we can decompose the loss into
\begin{equation}\label{als:gradient-regularization-zn}
\begin{aligned}
L(\bz_n) &=||\bW\bZ-\bA||^2 +\lambda_w ||\bW||^2 + \lambda_z ||\bZ||^2\\
&= ||\bW\bz_n-\ba_n||^2 + \lambda_z ||\bz_n||^2 +
\underbrace{\sum_{i\neq n} ||\bW\bz_i-\ba_i||^2 + \lambda_z \sum_{i\neq n}||\bz_i||^2 + \lambda_w ||\bW||^2 }_{C_{z_n}},
\end{aligned}
\end{equation}
where $C_{z_n}$ is a constant with respect to $\bz_n$, and $\bZ=[\bz_1, \bz_2, \ldots, \bz_N], \bA=[\ba_1,\ba_2, \ldots, \ba_N]$ are the column partitions of $\bZ, \bA$, respectively. Taking the differential,
$$
\frac{\partial L(\bz_n)}{\partial \bz_n} = 2\bW^\top\bW\bz_n - 2\bW^\top\ba_n + 2\lambda_z\bz_n,
$$
whose root is exactly the first column-wise update in Equation~\eqref{equation:als-ori-all-wz}:
$$
\bz_n = (\bW^\top\bW+ \lambda_z\bI)^{-1} \bW^\top \ba_n, \gap \text{for $n\in \{1,2,\ldots, N\}$}.
$$
Similarly, we can decompose the loss with respect to $\bw_m$,
\begin{equation}\label{als:gradient-regularization-wd}
\begin{aligned}
L(\bw_m ) &=
||\bW\bZ-\bA||^2 +\lambda_w ||\bW||^2 + \lambda_z ||\bZ||^2\\
&=||\bZ^\top\bW^\top-\bA^\top||^2 +\lambda_w ||\bW^\top||^2 + \lambda_z ||\bZ||^2\\
&= ||\bZ^\top\bw_m-\bb_m||^2 + \lambda_w ||\bw_m||^2 +
\underbrace{\sum_{i\neq m} ||\bZ^\top\bw_i-\bb_i||^2 + \lambda_w \sum_{i\neq m}||\bw_i||^2 + \lambda_z ||\bZ||^2 }_{C_{w_m}},
\end{aligned}
\end{equation}
where $C_{w_m}$ is a constant with respect to $\bw_m$, and $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M], \bA^\top=[\bb_1,\bb_2, \ldots,$ $\bb_M]$ are the column partitions of $\bW^\top, \bA^\top$ respectively.
Analogously, taking the differential with respect to $\bw_m$, it follows that
$$
\frac{\partial L(\bw_m)}{\partial \bw_m} = 2\bZ\bZ^\top\bw_m - 2\bZ\bb_m + 2\lambda_w\bw_m,
$$
whose root is exactly the second column-wise update in Equation~\eqref{equation:als-ori-all-wz}:
$$
\bw_m = (\bZ\bZ^\top+\lambda_w\bI)^{-1}\bZ\bb_m, \gap \text{for $m\in \{1,2,\ldots, M\}$} .
$$
Now suppose we denote the iteration number by a superscript, and we want to find the updates $\{\bz^{(k+1)}_n, \bw^{(k+1)}_m\}$ based on $\{\bZ^{(k)}, \bW^{(k)}\}$:
$$
\left\{
\begin{aligned}
\bz^{(k+1)}_n &\leftarrow \mathop{\arg \min}_{\bz_n^{(k)}} L(\bz_n^{(k)});\\
\bw_m^{(k+1)} &\leftarrow \mathop{\arg\min}_{\bw_m^{(k)}} L(\bw_m^{(k)}).
\end{aligned}
\right.
$$
For simplicity, we will focus on $\bz^{(k+1)}_n \leftarrow \mathop{\arg \min}_{\bz_n^{(k)}} L(\bz_n^{(k)})$; the derivation for the update on $\bw_m^{(k+1)}$ is the same. Suppose we want to approximate $\bz^{(k+1)}_n$ by a linear update on $\bz^{(k)}_n$:
$$
\bz^{(k+1)}_n = \bz^{(k)}_n + \eta \bv.
$$
The problem now turns into finding a solution $\bv$ such that
$$
\bv=\mathop{\arg \min}_{\bv} L(\bz^{(k)}_n + \eta \bv) .
$$
\begin{algorithm}[h]
\caption{Alternating Least Squares with Full Entries and Gradient Descent}
\label{alg:als-regularizer-missing-stochas-gradient}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$};
\State choose a stop criterion on the approximation error $\delta$;
\State choose regularization parameters $\lambda_w, \lambda_z$, and step size $\eta_w, \eta_z$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{$|| \bA- (\bW\bZ)||^2>\delta $ and $iter<C$}
\State $iter=iter+1$;
\For{$n=1,2,\ldots, N$}
\State $\bz^{(k+1)}_n =\bz^{(k)}_n - \eta_z \frac{\nabla L(\bz^{(k)}_n )}{||\nabla L(\bz^{(k)}_n )||}$; \Comment{$n$-th column of $\bZ$}
\EndFor
\For{$m=1,2,\ldots, M$}
\State $\bw^{(k+1)}_m = \bw^{(k)}_m - \eta_w \frac{\nabla L(\bw^{(k)}_m )}{||\nabla L(\bw^{(k)}_m )||}$;\Comment{$m$-th column of $\bW^\top$}
\EndFor
\EndWhile
\State Output $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M],\bZ=[\bz_1, \bz_2, \ldots, \bz_N]$;
\end{algorithmic}
\end{algorithm}
By Taylor's formula (Appendix~\ref{appendix:taylor-expansion}, p.~\pageref{appendix:taylor-expansion}), $L(\bz^{(k)}_n + \eta \bv)$ can be approximated by
$$
L(\bz^{(k)}_n + \eta \bv) \approx L(\bz^{(k)}_n ) + \eta \bv^\top \nabla L(\bz^{(k)}_n ),
$$
when $\eta$ is small enough. Then a search under the condition $||\bv||=1$ for a given positive $\eta$ proceeds as follows:
$$
\bv=\mathop{\arg \min}_{||\bv||=1} L(\bz^{(k)}_n + \eta \bv) \approx\mathop{\arg \min}_{||\bv||=1}
\left\{L(\bz^{(k)}_n ) + \eta \bv^\top \nabla L(\bz^{(k)}_n )\right\}.
$$
This is known as the \textit{greedy search}. The optimal $\bv$ can be obtained by
$$
\bv = -\frac{\nabla L(\bz^{(k)}_n )}{||\nabla L(\bz^{(k)}_n )||},
$$
i.e., $\bv$ is in the opposite direction of $\nabla L(\bz^{(k)}_n )$. Therefore, it is reasonable to take the update of $\bz_n^{(k+1)}$ as
$$
\bz^{(k+1)}_n =\bz^{(k)}_n + \eta \bv = \bz^{(k)}_n - \eta \frac{\nabla L(\bz^{(k)}_n )}{||\nabla L(\bz^{(k)}_n )||},
$$
which is usually called \textit{gradient descent}. Similarly, the gradient descent update of $\bw_m^{(k+1)}$ is given by
$$
\bw^{(k+1)}_m =\bw^{(k)}_m + \eta \bv = \bw^{(k)}_m - \eta \frac{\nabla L(\bw^{(k)}_m )}{||\nabla L(\bw^{(k)}_m )||}.
$$
The gradient-descent variant of Algorithm~\ref{alg:als-regularizer} is then formulated in Algorithm~\ref{alg:als-regularizer-missing-stochas-gradient}.
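A minimal NumPy sketch of Algorithm~\ref{alg:als-regularizer-missing-stochas-gradient}, with the closed-form solves replaced by normalized gradient steps (the function name \texttt{als\_gd} and the hyperparameter defaults are illustrative choices):

```python
import numpy as np

def als_gd(A, K, lam_w=0.001, lam_z=0.001, eta=0.05, n_iter=500, seed=0):
    """ALS where each column update is a normalized gradient step."""
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, K))
    Z = rng.standard_normal((K, N))
    for _ in range(n_iter):
        for n in range(N):  # grad of ||W z_n - a_n||^2 + lam_z ||z_n||^2
            g = 2 * W.T @ (W @ Z[:, n] - A[:, n]) + 2 * lam_z * Z[:, n]
            Z[:, n] -= eta * g / (np.linalg.norm(g) + 1e-12)
        for m in range(M):  # grad of ||Z^T w_m - b_m||^2 + lam_w ||w_m||^2
            g = 2 * Z @ (Z.T @ W[m, :] - A[m, :]) + 2 * lam_w * W[m, :]
            W[m, :] -= eta * g / (np.linalg.norm(g) + 1e-12)
    return W, Z
```

Since the step length is fixed at $\eta$, the iterates settle into a small neighborhood of the minimizer rather than converging exactly; a decaying $\eta$ would remove this floor.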
\paragraph{Geometrical Interpretation of Gradient Descent}
\begin{lemma}[Direction of Gradients]\label{lemm:direction-gradients}
An important fact is that gradients are orthogonal to level curves (a.k.a., level surfaces).
\end{lemma}
\begin{proof}[of Lemma~\ref{lemm:direction-gradients}]
This is equivalent to proving that the gradient is orthogonal to the tangent of the level curve. For simplicity, let's first look at the 2-dimensional case. Suppose the level curve has the form $f(x,y)=c$. This implicitly gives a relation between $x$ and $y$ such that $y=y(x)$, where $y$ can be thought of as a function of $x$. Therefore, the level curve can be written as
$$
f(x, y(x)) = c.
$$
The chain rule indicates
$$
\frac{\partial f}{\partial x} \underbrace{\frac{dx}{dx}}_{=1} + \frac{\partial f}{\partial y} \frac{dy}{dx}=0.
$$
Therefore, the gradient is perpendicular to the tangent:
$$
\left\langle \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right\rangle
\cdot
\left\langle \frac{dx}{dx}, \frac{dy}{dx}\right\rangle=0.
$$
Let us now treat the problem in full generality. Suppose the level curve of a vector $\bx\in \real^n$ is given by $f(\bx) = f(x_1, x_2, \ldots, x_n)=c$. Each variable $x_i$ can be regarded as a function of a parameter $t$ along the level curve $f(\bx)=c$: $f(x_1(t), x_2(t), \ldots, x_n(t))=c$. Differentiating the equation with respect to $t$ by the chain rule gives
$$
\frac{\partial f}{\partial x_1} \frac{dx_1}{dt} + \frac{\partial f}{\partial x_2} \frac{dx_2}{dt}
+\ldots + \frac{\partial f}{\partial x_n} \frac{dx_n}{dt}
=0.
$$
Therefore, the gradient is perpendicular to the tangent in the $n$-dimensional case:
$$
\left\langle \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n}\right\rangle
\cdot
\left\langle \frac{dx_1}{dt}, \frac{dx_2}{dt}, \ldots, \frac{dx_n}{dt}\right\rangle=0.
$$
This completes the proof.
\end{proof}
The lemma above reveals the geometrical interpretation of gradient descent: to minimize a convex function $L(\bz)$, gradient descent moves in the negative gradient direction, which decreases the loss. Figure~\ref{fig:alsgd-geometrical} depicts a $2$-dimensional case, where $-\nabla L(\bz)$ pushes the loss to decrease for the convex function $L(\bz)$.
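The lemma can also be verified numerically on a concrete level curve (a toy example, not from the text): take $f(x,y)=x^2+2y^2$ and parameterize the level curve $f=1$ by $t$.

```python
import numpy as np

# Level curve f(x, y) = x^2 + 2 y^2 = 1, parameterized by
# x(t) = cos(t), y(t) = sin(t) / sqrt(2).
t = 0.7
x, y = np.cos(t), np.sin(t) / np.sqrt(2)
grad = np.array([2 * x, 4 * y])                           # (df/dx, df/dy)
tangent = np.array([-np.sin(t), np.cos(t) / np.sqrt(2)])  # (dx/dt, dy/dt)
print(grad @ tangent)  # zero up to rounding: gradient is orthogonal to tangent
```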
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[A 2-dimensional convex function $L(\bz)$]{\label{fig:alsgd1}
\includegraphics[width=0.47\linewidth]{./imgs/alsgd1.pdf}}
\subfigure[$L(\bz)=c$ is a constant]{\label{fig:alsgd2}
\includegraphics[width=0.44\linewidth]{./imgs/alsgd2.pdf}}
\caption{Figure~\ref{fig:alsgd1} shows a function ``density" and a contour plot (\textcolor{bluepigment}{blue}=low, \textcolor{canaryyellow}{yellow}=high) where the upper graph is the ``density", and the lower one is the projection of it (i.e., contour). Figure~\ref{fig:alsgd2}: $-\nabla L(\bz)$ pushes the loss to decrease for the convex function $L(\bz)$.}
\label{fig:alsgd-geometrical}
\end{figure}
\section{Regularization: A Geometrical Interpretation}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{./imgs/alsgd3.pdf}
\caption{Constrained gradient descent with $\bz^\top\bz\leq C$. The \textcolor{green}{green} vector $\bw$ is the projection of $\bv_1$ into $\bz^\top\bz\leq C$ where $\bv_1$ is the component of $-\nabla l(\bz)$ perpendicular to $\bz_1$. The right picture is the next step after the update in the left picture. $\bz^\star$ denotes the optimal solution of \{$\min l(\bz)$\}.}
\label{fig:alsgd3}
\end{figure}
We have seen in Section~\ref{section:regularization-extention-general} that regularization extends the ALS to general matrices. Gradient descent can reveal the geometric meaning of the regularization. To avoid confusion, we denote the loss function without regularization by $l(\bz)$ and the loss with regularization by $L(\bz) = l(\bz)+\lambda_z ||\bz||^2$, where $l(\bz): \real^n \rightarrow \real$. When minimizing $l(\bz)$, a descent method searches in $\real^n$ for a solution. However, in machine learning, searching in the whole space can cause overfitting. A partial solution is to search in a subset of the vector space, e.g., in $\bz^\top\bz \leq C$ for some constant $C$. That is,
$$
\mathop{\arg\min}_{\bz} \gap l(\bz), \qquad s.t., \gap \bz^\top\bz\leq C.
$$
As shown above, a vanilla gradient descent method moves further in the direction of $-\nabla l(\bz)$, i.e., updates $\bz$ by $\bz\leftarrow \bz-\eta \nabla l(\bz)$ for a small step size $\eta$. When the level curve is $l(\bz)=c_1$ and the current position is $\bz=\bz_1$, where $\bz_1$ is the intersection of $\bz^\top\bz=C$ and $l(\bz)=c_1$, the descent direction $-\nabla l(\bz_1)$ is perpendicular to the level curve $l(\bz)=c_1$, as shown in the left picture of Figure~\ref{fig:alsgd3}. However, if we further restrict the solution to lie in $\bz^\top\bz\leq C$, the vanilla descent direction $-\nabla l(\bz_1)$ will lead $\bz_2=\bz_1-\eta \nabla l(\bz_1)$ outside of $\bz^\top\bz\leq C$. A solution is to decompose the step $-\nabla l(\bz_1)$ into
$$
-\nabla l(\bz_1) = a\bz_1 + \bv_1,
$$
where $a\bz_1$ is the component perpendicular to the curve $\bz^\top\bz=C$, and $\bv_1$ is the component parallel to it. Keeping only the step $\bv_1$, the update
$$
\bz_2 = \text{project}(\bz_1+\eta \bv_1) = \text{project}\left(\bz_1 + \eta
\underbrace{(-\nabla l(\bz_1) -a\bz_1)}_{\bv_1}\right)\footnote{Here, project($\bx$)
maps the vector $\bx$ to the closest point inside $\bz^\top\bz\leq C$. Notice that the direct update $\bz_2 = \bz_1+\eta \bv_1$ can still leave $\bz_2$ outside the region $\bz^\top\bz\leq C$.}
$$
will lead to a smaller loss from $l(\bz_1)$ to $l(\bz_2)$ while still satisfying $\bz^\top\bz\leq C$. This is known as \textit{projected gradient descent}. It is not hard to see that the update $\bz_2 = \text{project}(\bz_1+\eta \bv_1)$ is equivalent to finding a vector $\bw$ (shown by the \textcolor{green}{green} vector in the left picture of Figure~\ref{fig:alsgd3}) such that $\bz_2=\bz_1+\bw$ lies inside the curve $\bz^\top\bz\leq C$. Mathematically, $\bw$ can be obtained as $-\nabla l(\bz_1) -2\lambda \bz_1$ for some $\lambda$, as shown in the middle picture of Figure~\ref{fig:alsgd3}. This is exactly the negative gradient of $L(\bz)=l(\bz)+\lambda||\bz||^2$, such that
$$
\nabla L(\bz) = \nabla l(\bz) + 2\lambda \bz,
$$
and
$$
\bw = -\nabla L(\bz) \leadto \bz_2 = \bz_1+ \bw =\bz_1 - \nabla L(\bz).
$$
In practice, a small step size $\eta$ can avoid going outside the curve $\bz^\top\bz\leq C$:
$$
\bz_2 =\bz_1 - \eta\nabla L(\bz),
$$
which is exactly the regularization term we discussed in Section~\ref{section:regularization-extention-general}.
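The projection step can be sketched for the ball constraint $\bz^\top\bz\leq C$ on a toy quadratic loss (the names \texttt{project\_ball}, \texttt{projected\_gd} and the step size are illustrative choices):

```python
import numpy as np

def project_ball(z, C):
    """Project z onto {z : z^T z <= C}: scale back to the sphere if outside."""
    sq = float(z @ z)
    return z if sq <= C else z * np.sqrt(C / sq)

def projected_gd(grad_l, z0, C, eta=0.05, n_iter=500):
    """Projected gradient descent: plain step, then project back into the set."""
    z = project_ball(np.asarray(z0, dtype=float), C)
    for _ in range(n_iter):
        z = project_ball(z - eta * grad_l(z), C)
    return z

# toy loss l(z) = ||z - (3, 4)||^2, whose unconstrained minimizer lies outside
# the unit ball; the constrained minimizer is its radial projection (0.6, 0.8)
z = projected_gd(lambda z: 2 * (z - np.array([3.0, 4.0])), np.zeros(2), C=1.0)
```

For this convex loss the iterates converge to $(0.6, 0.8)$, the closest feasible point to the unconstrained minimizer.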
\paragraph{Sparsity}
In some cases, we want to find a sparse solution $\bz$ such that $l(\bz)$ is minimized. The constraint $||\bz||_1 \leq C$ serves this purpose, where $||\cdot||_1$ is the $l_1$ norm of a vector or a matrix. Illustrations of the $l_1$ norm in 2-dimensional and 3-dimensional space are shown in Figures~\ref{fig:p-norm-2d} and \ref{fig:p-norm-comparison-3d}. Similar to the previous case, the $l_1$-constrained optimization pushes gradient descent towards the boundary $||\bz||_1=C$. The 2-dimensional situation is shown in Figure~\ref{fig:alsgd4}. In a high-dimensional case, many elements of $\bz$ are pushed onto the breakpoints of $||\bz||_1=C$, as shown in the right picture of Figure~\ref{fig:alsgd4}.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{./imgs/alsgd4.pdf}
\caption{Constrained gradient descent with $||\bz||_1\leq C$, where the \textcolor{red}{red} dot denotes the breakpoint in $l_1$ norm. The right picture is the next step after the update in the left picture. $\bz^\star$ denotes the optimal solution of \{$\min l(\bz)$\}.}
\label{fig:alsgd4}
\end{figure}
\section{Stochastic Gradient Descent}
Now let's come back to the per-example loss:
$$
L(\bW,\bZ)= \sum_{n=1}^N \sum_{m=1}^{M} \left[\left(a_{mn} - \bw_m^\top\bz_n\right)^2 + \lambda_w||\bw_m||^2 +\lambda_z||\bz_n||^2\right].
$$
When we iteratively decrease the per-example loss term $l(\bw_m, \bz_n)=\left(a_{mn} - \bw_m^\top\bz_n\right)^2 + \lambda_w||\bw_m||^2 +\lambda_z||\bz_n||^2$ for all $m\in \{1,2,\ldots,M\}$, $n\in\{1,2,\ldots,N\}$, the full loss $L(\bW,\bZ)$ also decreases. This is known as \textit{stochastic coordinate descent}. The differentials with respect to $\bz_n, \bw_m$, and their roots, are given by
$$
\left\{
\begin{aligned}
\nabla l(\bz_n)=\frac{\partial l(\bw_m,\bz_n)}{\partial \bz_n} &= 2\bw_m\bw_m^\top \bz_n + 2\lambda_z\bz_n -2a_{mn} \bw_m \\
&\qquad \leadtosmall \bz_n= a_{mn}(\bw_m\bw_m^\top+\lambda_z\bI)^{-1}\bw_m;\\
\nabla l(\bw_m)=\frac{\partial l(\bw_m, \bz_n)}{\partial \bw_m} &= 2\bz_n\bz_n^\top\bw_m +2\lambda_w\bw_m - 2a_{mn}\bz_n\\
&\qquad \leadtosmall \bw_m= a_{mn}(\bz_n\bz_n^\top+\lambda_w\bI)^{-1}\bz_n.
\end{aligned}
\right.
$$
Analogously, the update can be done by gradient descent; since we update via the per-example loss, this is also known as \textit{stochastic gradient descent}:
$$
\left\{
\begin{aligned}
\bz_n&= \bz_n - \eta_z \frac{\nabla l(\bz_n)}{||\nabla l(\bz_n)||}; \\
\bw_m&= \bw_m - \eta_w \frac{\nabla l(\bw_m)}{||\nabla l(\bw_m)||}.
\end{aligned}
\right.
$$
The stochastic gradient descent update for the ALS is formulated in Algorithm~\ref{alg:als-regularizer-missing-stochas-gradient-realstoch}. In practice, the $m,n$ in the algorithm can be randomly drawn, which is where the name \textit{stochastic} comes from.
\begin{algorithm}[h]
\caption{Alternating Least Squares with Full Entries and Stochastic Gradient Descent}
\label{alg:als-regularizer-missing-stochas-gradient-realstoch}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$};
\State choose a stop criterion on the approximation error $\delta$;
\State choose regularization parameters $\lambda_w, \lambda_z$, and step size $\eta_w, \eta_z$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{$|| \bA- (\bW\bZ)||^2>\delta $ and $iter<C$}
\State $iter=iter+1$;
\For{$n=1,2,\ldots, N$}
\For{$m=1,2,\ldots, M$} \Comment{in practice, $m,n$ can be randomly produced}
\State $\bz_n= \bz_n - \eta_z \frac{\nabla l(\bz_n)}{||\nabla l(\bz_n)||}$;\Comment{$n$-th column of $\bZ$}
\State $\bw_m= \bw_m - \eta_w \frac{\nabla l(\bw_m)}{||\nabla l(\bw_m)||}$;\Comment{$m$-th column of $\bW^\top$}
\EndFor
\EndFor
\EndWhile
\State Output $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M],\bZ=[\bz_1, \bz_2, \ldots, \bz_N]$;
\end{algorithmic}
\end{algorithm}
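A minimal NumPy sketch of Algorithm~\ref{alg:als-regularizer-missing-stochas-gradient-realstoch}, using plain (unnormalized) per-example gradient steps for simplicity (the function name \texttt{als\_sgd} and the hyperparameter defaults are illustrative choices):

```python
import numpy as np

def als_sgd(A, K, lam_w=0.01, lam_z=0.01, eta=0.01, n_epochs=500, seed=0):
    """SGD on the per-example loss
    l(w_m, z_n) = (a_mn - w_m^T z_n)^2 + lam_w ||w_m||^2 + lam_z ||z_n||^2."""
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((M, K))
    Z = 0.1 * rng.standard_normal((K, N))
    idx = [(m, n) for m in range(M) for n in range(N)]
    for _ in range(n_epochs):
        rng.shuffle(idx)                      # visit the entries in random order
        for m, n in idx:
            e = A[m, n] - W[m, :] @ Z[:, n]   # per-example residual
            w_old = W[m, :].copy()            # use old w_m in the z_n step
            W[m, :] += eta * (2 * e * Z[:, n] - 2 * lam_w * W[m, :])
            Z[:, n] += eta * (2 * e * w_old - 2 * lam_z * Z[:, n])
    return W, Z
```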
\section{Bias Term}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{./imgs/als-bias.pdf}
\caption{Bias terms in alternating least squares where the \textcolor{canaryyellow}{yellow} entries denote ones (which are fixed) and \textcolor{cyan}{cyan} entries denote the added features to fit the bias terms. The dotted boxes give an example on how the bias terms work.}
\label{fig:als-bias}
\end{figure}
In ordinary least squares, a bias term is added to the raw matrix, as shown in Equation~\eqref{equation:ls-bias}. A similar idea can be applied to the ALS problem. We can append a fixed column of all 1's as the last column of $\bW$; an extra row is then added as the last row of $\bZ$ to fit the bias introduced by this column. Analogously, a fixed row of all 1's can be added as the first row of $\bZ$, with an extra column prepended as the first column of $\bW$ to fit the corresponding bias. The situation is shown in Figure~\ref{fig:als-bias}.
Following from the loss with respect to the columns of $\bZ$ in Equation~\eqref{als:gradient-regularization-zn}, suppose $\widetildebz_n
=
\begin{bmatrix}
1\\
\bz_n
\end{bmatrix}
$ is the $n$-th column of $\widetildebZ$. Then we have
\begin{equation}
\begin{aligned}
L(\bz_n) &=||\widetildebW\widetildebZ-\bA||^2 +\lambda_w ||\widetildebW||^2 + \lambda_z ||\widetildebZ||^2\\
&= \left\Vert
\widetildebW
\begin{bmatrix}
1 \\
\bz_n
\end{bmatrix}-\ba_n
\right\Vert^2
+
\underbrace{\lambda_z ||\widetildebz_n||^2}_{=\lambda_z ||\bz_n||^2+\lambda_z}
+
\sum_{i\neq n} ||\widetildebW\widetildebz_i-\ba_i||^2 + \lambda_z \sum_{i\neq n}||\widetildebz_i||^2 + \lambda_w ||\widetildebW||^2 \\
&= \left\Vert
\begin{bmatrix}
\widebarbw_0 & \widebarbW
\end{bmatrix}
\begin{bmatrix}
1 \\
\bz_n
\end{bmatrix}-\ba_n
\right\Vert^2
+ \lambda_z ||\bz_n||^2 + C_{z_n}
= \left\Vert
\widebarbW \bz_n -
\underbrace{(\ba_n-\widebarbw_0)}_{\widebarba_n}
\right\Vert^2
+ \lambda_z ||\bz_n||^2 + C_{z_n},
\end{aligned}
\end{equation}
where $\widebarbw_0$ is the first column of $\widetildebW$, and $C_{z_n}$ is a constant with respect to $\bz_n$. Letting $\widebarba_n = \ba_n-\widebarbw_0$, the update of $\bz_n$ is just like the one in Equation~\eqref{als:gradient-regularization-zn}, where the differential is given by
$$
\frac{\partial L(\bz_n)}{\partial \bz_n} = 2\widebarbW^\top\widebarbW\bz_n - 2\widebarbW^\top\widebarba_n + 2\lambda_z\bz_n.
$$
Therefore the update on $\bz_n$ is given by the root of the above differential:
$$
\text{update on $\widetildebz_n$}=
\left\{
\begin{aligned}
\bz_n &= (\widebarbW^\top\widebarbW+ \lambda_z\bI)^{-1} \widebarbW^\top \widebarba_n, \gap \text{for $n\in \{1,2,\ldots, N\}$};\\
\widetildebz_n &= \begin{bmatrix}
1\\\bz_n
\end{bmatrix}.
\end{aligned}
\right.
$$
Similarly, following from the loss with respect to each row of $\bW$ in Equation~\eqref{als:gradient-regularization-wd}, suppose $\widetildebw_m =
\begin{bmatrix}
\bw_m \\
1
\end{bmatrix}$ is the $m$-th row of $\widetildebW$ (or the $m$-th column of $\widetildebW^\top$). Then we have
\begin{equation}
\begin{aligned}
L(\bw_m )
&=||\widetildebZ^\top\widetildebW^\top-\bA^\top||^2 +\lambda_w ||\widetildebW^\top||^2 + \lambda_z ||\widetildebZ||^2\\
&=
||\widetildebZ^\top\widetildebw_m-\bb_m||^2 +
\underbrace{\lambda_w ||\widetildebw_m||^2}_{=\lambda_w ||\bw_m||^2+\lambda_w}
+
\sum_{i\neq m} ||\widetildebZ^\top\widetildebw_i-\bb_i||^2 + \lambda_w \sum_{i\neq m}||\widetildebw_i||^2 + \lambda_z ||\widetildebZ||^2 \\
&=
\left\Vert
\begin{bmatrix}
\widebarbZ^\top&
\widebarbz_0
\end{bmatrix}
\begin{bmatrix}
\bw_m \\
1
\end{bmatrix}
-\bb_m\right\Vert^2 +
\lambda_w ||\bw_m||^2
+
C_{w_m}\\
&=
\left\Vert
\widebarbZ^\top\bw_m
-(\bb_m-\widebarbz_0) \right\Vert^2+
\lambda_w ||\bw_m||^2
+
C_{w_m},
\end{aligned}
\end{equation}
where $\widebarbz_0$ is the last column of $\widetildebZ^\top$ and $\widebarbZ^\top$ consists of the remaining columns: $\widetildebZ^\top=\begin{bmatrix}
\widebarbZ^\top & \widebarbz_0
\end{bmatrix}$,
$C_{w_m}$ is a constant with respect to $\bw_m$, and $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M], \bA^\top=[\bb_1,\bb_2, \ldots, \bb_M]$ are the column partitions of $\bW^\top, \bA^\top$, respectively. Letting $\widebarbb_m = \bb_m-\widebarbz_0$, the update of $\bw_m$ is again just like the one in Equation~\eqref{als:gradient-regularization-wd}, where the differential is given by
$$
\frac{\partial L(\bw_m)}{\partial \bw_m} = 2\widebarbZ\cdot \widebarbZ^\top\bw_m - 2\widebarbZ\cdot \widebarbb_m + 2\lambda_w\bw_m.
$$
Therefore the update on $\bw_m$ is given by the root of the above differential
$$
\text{update on $\widetildebw_m$}=
\left\{
\begin{aligned}
\bw_m&=(\widebarbZ\cdot \widebarbZ^\top+\lambda_w\bI)^{-1}\widebarbZ\cdot \widebarbb_m, \gap \text{for $m\in \{1,2,\ldots, M\}$} ;\\
\widetildebw_m &= \begin{bmatrix}
\bw_m \\ 1
\end{bmatrix}.
\end{aligned}
\right.
$$
Similar updates by gradient descent with bias terms, or with treatment of missing entries, can be deduced analogously; we shall not repeat the details (see Sections~\ref{section:als-gradie-descent} and \ref{section:alt-columb-by-column} for reference).
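A minimal NumPy sketch of the augmented updates above (the function name \texttt{als\_bias} and its defaults are illustrative choices): the factors are stored as the augmented matrices, with the all-ones column of $\widetildebW$ and the all-ones row of $\widetildebZ$ held fixed, and only the free blocks updated.

```python
import numpy as np

def als_bias(A, K, lam=0.1, n_iter=100, seed=0):
    """ALS with bias terms: A ~= Wt @ Zt, where Wt = [w0 | W | 1] (M x (K+2))
    and Zt = [1; Z; z0] ((K+2) x N); the ones blocks stay fixed."""
    M, N = A.shape
    rng = np.random.default_rng(seed)
    Wt = 0.1 * rng.standard_normal((M, K + 2))
    Zt = 0.1 * rng.standard_normal((K + 2, N))
    Wt[:, -1] = 1.0                        # fixed all-ones column of Wt
    Zt[0, :] = 1.0                         # fixed all-ones row of Zt
    I = np.eye(K + 1)
    for _ in range(n_iter):
        # update the free rows of Zt (rows 1..K+1), keeping row 0 at ones:
        # minimize ||Wbar z - (a_n - w0)||^2 + lam ||z||^2 per column
        w0, Wbar = Wt[:, 0], Wt[:, 1:]
        R = A - np.outer(w0, np.ones(N))
        Zt[1:, :] = np.linalg.solve(Wbar.T @ Wbar + lam * I, Wbar.T @ R)
        # update the free columns of Wt (cols 0..K), keeping the last at ones
        z_last, Zbar = Zt[-1, :], Zt[:-1, :]
        R = A - np.outer(np.ones(M), z_last)
        Wt[:, :-1] = np.linalg.solve(Zbar @ Zbar.T + lam * I, Zbar @ R.T).T
    return Wt, Zt
```

Here $\widetildebW\widetildebZ = \widebarbw_0\mathbf{1}^\top + \bW\bZ + \mathbf{1}\bz_0^\top$, i.e., a movie bias plus a low-rank part plus a user bias.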
\section{Applications}
\subsection{Low-Rank Approximation}\label{section:als-low-flag}
We discussed and compared the effects of the SVD and the pseudoskeleton decomposition on low-rank approximation in Section~\ref{section:svd-low-rank-approxi} (p.~\pageref{section:svd-low-rank-approxi}). The image to be compressed is shown in Figure~\ref{fig:eng300}, with size $600\times 1200$ and rank 402. Figure~\ref{fig:svdd-by-parts} shows that the image reconstructed from the first singular value already approximates the original image very well. Figure~\ref{fig:svdd-pseudoskeleton} shows the difference of each compression with rank 90, 60, 30, 10. We find the SVD does well with ranks 90, 60, and 30. The pseudoskeleton compresses the black horizontal and vertical lines in the image well, but performs poorly on the details of the flag.
\noindent
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[$\sigma_1\bu_1\bv_1^\top$\protect\newline$F_1=60217$]{\label{fig:svd12}
\includegraphics[width=0.15\linewidth]{./imgs/svd_pic1.png}}
\subfigure[$\sigma_2\bu_2\bv_2^\top$\protect\newline$F_2=120150$]{\label{fig:svd22}
\includegraphics[width=0.15\linewidth]{./imgs/svd_pic2.png}}
\subfigure[$\sigma_3\bu_3\bv_3^\top$\protect\newline$F_3=124141$]{\label{fig:svd32}
\includegraphics[width=0.15\linewidth]{./imgs/svd_pic3.png}}
\subfigure[$\sigma_4\bu_4\bv_4^\top$\protect\newline$F_4=125937$]{\label{fig:svd42}
\includegraphics[width=0.15\linewidth]{./imgs/svd_pic4.png}}
\subfigure[$\sigma_5\bu_5\bv_5^\top$\protect\newline$F_5=126127$]{\label{fig:svd52}
\includegraphics[width=0.15\linewidth]{./imgs/svd_pic5.png}}
\subfigure[All 5 singular values: $\sum_{i=1}^{5}\sigma_i\bu_i\bv_i^\top$,\protect\newline$F=$\textbf{44379}]{\label{fig:svd62}
\includegraphics[width=0.15\linewidth]{./imgs/svd_pic6_all.png}}
\quad
\subfigure[$\bc_1\br_1^\top$\protect\newline$G_1=60464$]{\label{fig:skeleton51}
\includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_1.png}}
\subfigure[$\bc_2\br_2^\top$\protect\newline$G_2=122142$]{\label{fig:skeleton52}
\includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_2.png}}
\subfigure[$\bc_3\br_3^\top$\protect\newline$G_3=123450$]{\label{fig:skeleton53}
\includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_3.png}}
\subfigure[$\bc_4\br_4^\top$\protect\newline$G_4=125975$]{\label{fig:skeleton54}
\includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_5.png}}
\subfigure[$\bc_5\br_5^\top$\protect\newline$G_5=124794$]{\label{fig:skeleton55}
\includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_4.png}}
\subfigure[Pseudoskeleton Rank 5 $\sum_{i=1}^{5}\bc_i\br_i^\top$,\protect\newline$G=45905$.]{\label{fig:skeleton5_all}
\includegraphics[width=0.15\linewidth]{./imgs/skeleton_5_all.png}}
\quad
\subfigure[$\bw_1\bz_1^\top$\protect\newline$S_1=82727$]{\label{fig:als51}
\includegraphics[width=0.15\linewidth]{./imgs/als_rank5_3.png}}
\subfigure[$\bw_2\bz_2^\top$\protect\newline$S_2=107355$]{\label{fig:als52}
\includegraphics[width=0.15\linewidth]{./imgs/als_rank5_2.png}}
\subfigure[$\bw_3\bz_3^\top$\protect\newline$S_3=119138$]{\label{fig:als53}
\includegraphics[width=0.15\linewidth]{./imgs/als_rank5_5.png}}
\subfigure[$\bw_4\bz_4^\top$\protect\newline$S_4=120022$]{\label{fig:als54}
\includegraphics[width=0.15\linewidth]{./imgs/als_rank5_1.png}}
\subfigure[$\bw_5\bz_5^\top$\protect\newline$S_5=120280$]{\label{fig:als55}
\includegraphics[width=0.15\linewidth]{./imgs/als_rank5_4.png}}
\subfigure[ALS Rank 5 $\sum_{i=1}^{5}\bw_i\bz_i^\top$,\protect\newline$S=52157$.]{\label{fig:als5_all}
\includegraphics[width=0.15\linewidth]{./imgs/als_rank3.png}}
\caption{Image compression of the gray flag image into a rank-5 matrix via the SVD, decomposed into 5 parts with $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_{5}$, i.e., $F_1\leq F_2\leq \ldots \leq F_5$, where $F_i = ||\sigma_i\bu_i\bv_i^\top - \bA||_F$ for $i\in \{1,2,\ldots, 5\}$. The images are reconstructed from a single singular value and its corresponding left and right singular vectors, and analogously from the rank-one terms $\bc_i\br_i^\top$ and $\bw_i\bz_i^\top$ of the pseudoskeleton and ALS decompositions, respectively.}
\label{fig:svdd-by-parts-als}
\end{figure}
Similar results can be observed for the low-rank approximation via the ALS decomposition. The ALS approximation is given by $\bA\approx\bW\bZ$, where $\bW\in \real^{m\times \gamma}$ and $\bZ\in \real^{\gamma\times n}$ for $\bA\in \real^{m\times n}$, so that $\bW$ and $\bZ$ have rank at most $\gamma$. Suppose $\gamma=5$, and
$$
\bW=[\bw_1, \bw_2, \ldots, \bw_5],
\qquad
\text{and}
\qquad
\bZ =
\begin{bmatrix}
\bz_1^\top \\
\bz_2^\top \\
\vdots \\
\bz_5^\top
\end{bmatrix},
$$
are the column and row partitions of $\bW$ and $\bZ$, respectively.\footnote{For simplicity, note that this definition differs from the one in Section~\ref{section:als-netflix}, where we define $\bw_i$ as the rows of $\bW$.} Then $\bA$ can be approximated by $\sum_{i=1}^{5}\bw_i\bz_i^\top$. The partitions are ordered such that
$$
\underbrace{||\bw_1\bz_1^\top-\bA||_F}_{S_1} \leq
\underbrace{||\bw_2\bz_2^\top-\bA||_F}_{S_2}
\leq \ldots \leq
\underbrace{||\bw_5\bz_5^\top-\bA||_F}_{S_5}.
$$
We observe that $\bw_1\bz_1^\top$ behaves somewhat \textbf{differently} from $\sigma_1\bu_1\bv_1^\top$: the reconstruction errors measured by the Frobenius norm are not close either (82,727 in the ALS case compared to 60,217 in the SVD case). As mentioned previously, $\bc_1\br_1^\top$ behaves similarly to $\sigma_1\bu_1\bv_1^\top$ since the pseudoskeleton decomposition relies on the SVD. In ALS, however, the reconstruction comes entirely from least squares optimization.
The key difference between ALS and the SVD is that, in the SVD, the importance of each basis vector is proportional to its singular value. The first basis vector therefore usually dominates the reconstruction, followed by the second, and so on; the SVD basis carries an implicit hierarchy. No such hierarchy exists in ALS: the second component $\bw_2\bz_2^\top$ obtained via ALS in Figure~\ref{fig:als52} plays an important role in the reconstruction of the original figure, whereas the second component $\sigma_2\bu_2\bv_2^\top$ of the SVD in Figure~\ref{fig:svd22} plays only a small role.
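The regularized ALS iteration used throughout this section can be sketched in a few lines. The following is a minimal NumPy version on a fully observed random matrix, assuming the objective $\|\bA-\bW\bZ\|_F^2+\lambda_w\|\bW\|_F^2+\lambda_z\|\bZ\|_F^2$ with $\lambda_w=\lambda_z$; each update is a ridge-regression subproblem with the other factor held fixed:

```python
import numpy as np

def als(A, rank, lam=0.15, iters=50, seed=0):
    """Alternating least squares for A ~ W @ Z with Tikhonov regularization."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.standard_normal((m, rank))
    Z = rng.standard_normal((rank, n))
    I = lam * np.eye(rank)
    for _ in range(iters):
        # Fix Z and solve the regularized least squares problem for W, then swap.
        W = A @ Z.T @ np.linalg.inv(Z @ Z.T + I)
        Z = np.linalg.inv(W.T @ W + I) @ W.T @ A
    return W, Z

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 6)) @ rng.standard_normal((6, 30))  # rank-6 matrix
W, Z = als(A, rank=5, lam=0.01)
print(np.linalg.norm(W @ Z - A) / np.linalg.norm(A))  # close to the best rank-5 error
```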
\begin{SCfigure}
\caption{Comparison of reconstruction errors measured by the Frobenius norm among the SVD, pseudoskeleton, and ALS approximations, where the approximation rank ranges from 3 to 100. ALS with well-chosen parameters behaves similarly to the SVD.}
\includegraphics[width=0.5\textwidth]{./imgs/svd_skeleton_als_fnorm.pdf}
\label{fig:svd_skeleton_als_fnorm}
\end{SCfigure}
We finally compare the low-rank approximations obtained by the SVD, pseudoskeleton, and ALS methods at different ranks. Figure~\ref{fig:svdd-pseudoskeleton-als} shows the result of each compression with rank 90, 60, 30, and 10. We observe that the SVD does well with ranks 90, 60, and 30. The pseudoskeleton approximation compresses the black horizontal and vertical lines in the image well, but performs poorly on the details of the flag. ALS behaves similarly to the SVD, both visually and in reconstruction error measured by the Frobenius norm.
Figure~\ref{fig:svd_skeleton_als_fnorm} compares the reconstruction errors of the SVD, pseudoskeleton, and ALS approximations measured by the Frobenius norm for ranks from 3 to 100. In all cases, the truncated SVD does best in terms of the Frobenius norm; similar results are observed for the spectral norm. ALS outperforms the pseudoskeleton decomposition when $\lambda_w=\lambda_z=0.15$. An interesting cutoff occurs for $\lambda_w=\lambda_z\in\{0.03, 0.08, 0.15\}$: as the rank increases, the ALS approximation becomes very close to the SVD in the sense of low-rank approximation error.
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[SVD with rank 90\protect\newline Frobenius norm=\textbf{6,498}]{\label{fig:svd902}
\includegraphics[width=0.3\linewidth]{./imgs/svd90.png}}
\quad
\subfigure[Pseudoskeleton with rank 90\protect\newline Frobenius norm=13,751]{\label{fig:skeleton902}
\includegraphics[width=0.3\linewidth]{./imgs/skeleton90.png}}
\subfigure[ALS with rank 90\protect\newline Frobenius norm=6,622]{\label{fig:als_90}
\includegraphics[width=0.3\linewidth]{./imgs/als_rank90.png}}\\
\subfigure[SVD with rank 60\protect\newline Frobenius norm=\textbf{8,956}]{\label{fig:svd602}
\includegraphics[width=0.3\linewidth]{./imgs/svd60.png}}
\quad
\subfigure[Pseudoskeleton with rank 60\protect\newline Frobenius norm=14,217]{\label{fig:skeleton602}
\includegraphics[width=0.3\linewidth]{./imgs/skeleton60.png}}
\subfigure[ALS with rank 60\protect\newline Frobenius norm=9,028]{\label{fig:als_60}
\includegraphics[width=0.3\linewidth]{./imgs/als_rank60.png}}\\
\subfigure[SVD with rank 30\protect\newline Frobenius norm=\textbf{14,586}]{\label{fig:svd302}
\includegraphics[width=0.3\linewidth]{./imgs/svd30.png}}
\quad
\subfigure[Pseudoskeleton with rank 30\protect\newline Frobenius norm=17,853]{\label{fig:skeleton302}
\includegraphics[width=0.3\linewidth]{./imgs/skeleton30.png}}
\subfigure[ALS with rank 30\protect\newline Frobenius norm=18,624]{\label{fig:als_30}
\includegraphics[width=0.3\linewidth]{./imgs/als_rank30.png}}
\subfigure[SVD with rank 10\protect\newline Frobenius norm=\textbf{31,402}]{\label{fig:svd102}
\includegraphics[width=0.3\linewidth]{./imgs/svd10.png}}
\quad
\subfigure[Pseudoskeleton with rank 10\protect\newline Frobenius norm=33,797]{\label{fig:skeleton102}
\includegraphics[width=0.3\linewidth]{./imgs/skeleton10.png}}
\subfigure[ALS with rank 10\protect\newline Frobenius norm=33,449]{\label{fig:als_10}
\includegraphics[width=0.3\linewidth]{./imgs/als_rank10.png}}
\caption{Image compression of the gray flag image with different ranks.}
\label{fig:svdd-pseudoskeleton-als}
\end{figure}
\subsection{Movie Recommender}
ALS has been developed extensively for movie recommender systems. To see how it works, we use
the ``movielens100k" dataset from MovieLens \citep{harper2015movielens}\footnote{http://grouplens.org}. It consists of 100,000 ratings from 943 users on 1,682 movies, with rating values from 0 to 5. The data was collected through the MovieLens website
during the seven-month period from September 19th,
1997 through April 22nd, 1998. The data has been cleaned up: users
who had fewer than 20 ratings or did not have complete demographic
information were removed, so that simple demographic information for the users (age, gender, occupation, zip code) is available. However, we will only work on the plain rating matrix.
The dataset is split into training and validation sets of about 95,015 and 4,985 ratings, respectively.
The error is measured by the root mean squared error (RMSE), which is frequently used as a measure of the differences between values. For a set of values $\{x_1, x_2, \ldots, x_n\}$ and its predictions $\{\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n\}$, the RMSE is
$$
\text{RMSE}(\bx, \hat{\bx}) = \sqrt{\frac{1}{n} \sum_{i=1}^{n}(x_i-\hat{x}_i)^2}.
$$
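A minimal implementation of this error measure (the example ratings are made up for illustration):

```python
import numpy as np

def rmse(x, x_hat):
    """Root mean squared error between observations and predictions."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return np.sqrt(np.mean((x - x_hat) ** 2))

# Four hypothetical ratings and their predictions.
print(rmse([4, 5, 3, 1], [3.5, 4.5, 3.0, 2.0]))  # ~ 0.6124
```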
The minimal validation RMSE is obtained at $K=185$ and $\lambda_w=\lambda_z=0.15$, where it equals $0.806$, as shown in Figure~\ref{fig:movie100k}. Therefore, with ratings ranging from 0 to 5, ALS can at least predict whether the user would like to watch the movie (ratings 4 to 5) or not so much (ratings 0 to 2).
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Training]{\label{fig:movie100k1}
\includegraphics[width=0.44\linewidth]{./imgs/movielen100k.pdf}}
\quad
\subfigure[Validation]{\label{fig:movie100k2}
\includegraphics[width=0.44\linewidth]{./imgs/movielen100k_val.pdf}}
\caption{Comparison of training and validation errors for the ``movielens100k" dataset with different reduced dimensions and regularization parameters.}
\label{fig:movie100k}
\end{figure}
A recommender system can work simply by recommending movie $m$ to user $n$ when $a_{mn}\geq4$ and user $n$ has not yet rated movie $m$. In rare cases, user $n$ has already rated all the movies he likes (say with ratings $\geq4$). A partial solution is then to find movies similar to the high-rating ones. Suppose user $n$ likes movie $m$ very much and has rated it with $5$: $a_{mn}=5$. Under the ALS approximation $\bA=\bW\bZ$, each row of $\bW$ represents the hidden features of a movie (see Section~\ref{section:als-vector-product} on the vector product). The solution is to find the movies most similar to movie $m$ among those that user $n$ has not rated (or watched). In mathematical language,
$$
\mathop{\arg \max}_{\bw_i} \gap \text{similarity}(\bw_i, \bw_m), \qquad \text{for all} \gap i \notin \bo_n,
$$
where the $\bw_i$'s are the rows of $\bW$ representing the hidden features of movie $i$, and $\bo_n$ is the index set of movies that user $n$ has rated.
The method above relies on a similarity function between two vectors. The \textit{cosine similarity} is the most commonly used measure; it is defined as the cosine of the angle between the two vectors:
$$
\cos(\bx, \by) = \frac{\bx^\top\by}{||\bx||\cdot ||\by||},
$$
where the value ranges from $-1$ to $1$, with $-1$ meaning perfectly dissimilar and $1$ perfectly similar. It follows from the definition that the cosine similarity depends only on the angle between the two nonzero vectors, not on their magnitudes, since it can be regarded as the inner product of the normalized vectors. A second similarity measure is the \textit{Pearson similarity}:
$$
\text{Pearson}(\bx,\by) =\frac{\Cov(\bx,\by)}{\sigma_x \cdot \sigma_y}
= \frac{\sum_{i=1}^{n} (x_i - \bar{x} ) (y_i -\bar{y})}{ \sqrt{\sum_{i=1}^{n} (x_i-\bar{x})^2}\sqrt{ \sum_{i=1}^{n} (y_i-\bar{y})^2 }},
$$
whose value also ranges from $-1$ to $1$, with $-1$ meaning perfectly dissimilar and $1$ perfectly similar. The Pearson similarity is usually used to measure the linear correlation between two sets of data: it is the ratio between the covariance of the two variables and the product of their standard deviations.
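Both measures can be sketched in a few lines; note that the Pearson similarity is simply the cosine similarity of the mean-centered vectors:

```python
import numpy as np

def cosine(x, y):
    """Cosine of the angle between two nonzero vectors."""
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

def pearson(x, y):
    """Pearson similarity: cosine similarity of the mean-centered vectors."""
    return cosine(x - x.mean(), y - y.mean())

x = np.array([1.0, 2.0, 3.0])
print(cosine(x, 2 * x))       # ~ 1: cosine is scale-invariant
print(pearson(x, x + 5.0))    # ~ 1: Pearson is also shift-invariant
```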
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Cosine Bin Plot]{\label{fig:als-cosine}%
\includegraphics[width=0.33\linewidth]{./imgs/als-bin-cosine.pdf}}%
\subfigure[Pearson Bin Plot]{\label{fig:als-pearson}%
\includegraphics[width=0.33\linewidth]{./imgs/als-bin-pearson.pdf}}%
\subfigure[PR Curve]{\label{fig:als-prcurve}%
\includegraphics[width=0.33\linewidth]{./imgs/als-prcurve.pdf}}%
\caption{Distribution of the insample and outsample similarities under the cosine and Pearson measures, and the precision-recall curve derived from them.}
\label{fig:als-prcurive-bin}
\end{figure}
Following the example above on the movielens100k dataset, we choose $\lambda_w=\lambda_z=0.15$ for the regularization and rank $62$ to minimize the RMSE. We now examine the similarity between the hidden vectors of different movies.
Define the ``insample" as the similarity between movies rated $5$ by each user, and the ``outsample" as the similarity between movies rated $5$ and movies rated $1$ by each user. Figures~\ref{fig:als-cosine} and \ref{fig:als-pearson} depict bin plots of the insample and outsample distributions under the cosine and Pearson similarities. Figure~\ref{fig:als-prcurve} shows the corresponding precision-recall (PR) curves, where we find that the cosine similarity works better: it recovers more than $73\%$ of the potential high-rating movies at $90\%$ precision, whereas the Pearson similarity separates out only about $64\%$ of the high-rating movies at $90\%$ precision. In practice, other measures can also be explored, such as the negative Euclidean distance: the Euclidean distance measures the ``dissimilarity" between two vectors, so its negative represents their similarity.
\chapter{Acknowledgments}
We thank Gilbert Strang for raising the question formulated in Corollary~\ref{corollary:invertible-intersection}, checking the writing of the survey, for a stream of ideas and references about the three factorizations from the steps of elimination, and for the generous sharing of the manuscript of \citep{strang2021three}.
\newpage
\chapter{Appendix}
\chapter{Modern Applications}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Low-Rank Neural Networks}\label{section:low-rank-neural}
We start with the basic LeNet5 neural network to illustrate the idea of low-rank neural networks, modifying the fully connected layers to obtain the $LenetModified$ structure shown in the gray box below. Notice that a layer with 120 input features and 100 output features is just a matrix of size $120\times 100$. We can put a ``regularizer" on this matrix, for example by restricting its rank to 50: a $120\times 100$ matrix and the product of two matrices of sizes $120\times 50$ and $50\times 100$ are similar in the sense that the product also has size $120\times 100$. Thus we obtain a low-rank version of the fully connected layer, which we call the $LenetDecom$ structure. The fully connected layers of $LenetModified$ and $LenetDecom$ are shown as follows:\index{Low-rank neural networks}
\begin{svgraybox}
\begin{lstlisting}
LenetModified{
Convolutional Layers Omitted;
(Fully Connected Layers):
(0): Linear(in_features=120, out_features=100)
(1): Tanh()
(2): Linear(in_features=100, out_features=80)
(3): Tanh()
(4): Linear(in_features=80, out_features=60)
(5): Tanh()
(6): Linear(in_features=60, out_features=40)
(7): Tanh()
(8): Linear(in_features=40, out_features=10)
(9): LogSoftmax()
}
\end{lstlisting}
\end{svgraybox}
\begin{svgraybox}
\begin{lstlisting}
LenetDecom{
Convolutional Layers Omitted;
(Fully Connected Layers):
(0): Linear(in_features=120, out_features=50)
(1): Linear(in_features=50, out_features=100)
(2): Tanh()
(3): Linear(in_features=100, out_features=40)
(4): Linear(in_features=40, out_features=80)
(5): Tanh()
(6): Linear(in_features=80, out_features=30)
(7): Linear(in_features=30, out_features=60)
(8): Tanh()
(9): Linear(in_features=60, out_features=20)
(10): Linear(in_features=20, out_features=40)
(11): Tanh()
(12): Linear(in_features=40, out_features=10)
(13): LogSoftmax()
}
\end{lstlisting}
\end{svgraybox}
We realize that replacing a fully connected layer of size $m\times n$ by a low-rank pair of layers of sizes $m\times r$ and $r\times n$ reduces the storage of the model from $mn$ values to $r(m+n)$ values. In the specific example above, we reduce $120\times 100=12000$ values to $50\times(120+100)=11000$ values. The same reduction applies to the matrix multiplication operations.
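The storage arithmetic can be checked directly (plain Python, using the layer sizes from the example above):

```python
def dense_params(m, n):
    """Number of weights in a dense m x n layer."""
    return m * n

def factored_params(m, n, r):
    """Number of weights when the layer is split into m x r and r x n factors."""
    return r * (m + n)

print(dense_params(120, 100))         # 12000
print(factored_params(120, 100, 50))  # 11000
# The factorization saves space exactly when r < mn / (m + n).
```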
After training for 100 epochs, we find that the minimal training loss of $LenetModified$ is smaller than that of $LenetDecom$, as shown in Figure~\ref{fig:lenetLoss_train}. But the validation loss of $LenetModified$ (larger than 0.04) is larger than that of $LenetDecom$ (smaller than 0.04), as shown in Figure~\ref{fig:lenetLoss_val}. This simple example already exhibits the ``regularization" property of low-rank neural networks.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Training loss]{\label{fig:lenetLoss_train}
\includegraphics[width=0.47\linewidth]{./imgs/lenetLoss_train.pdf}}
\quad
\subfigure[Validation loss]{\label{fig:lenetLoss_val}
\includegraphics[width=0.47\linewidth]{./imgs/lenetLoss_val.pdf}}
\caption{Comparison of full neural networks and low-rank neural networks.}
\label{fig:lenetLoss}
\end{figure}
Further, suppose we have already trained $LenetModified$ and want to use $LenetDecom$ since it saves storage and matrix operations. To avoid training from scratch, we can load the weights from $LenetModified$ into $LenetDecom$. One method is to apply the SVD to each weight matrix and keep only the first $r$ singular values, i.e., set the singular values $\sigma_{r+1}= \sigma_{r+2}= \ldots=0$. That is, a weight matrix is approximated by $\bW \approx \bU_r\bSigma_r\bV_r^\top = \bW_1\bW_2$, where we can set $\bW_1=\bU_r\bSigma_r$ and $\bW_2=\bV_r^\top$. We then load the convolutional layers of $LenetDecom$ from those of $LenetModified$, and the fully connected layers of $LenetDecom$ from this decomposition of the $LenetModified$ weights. Fine-tuning the network improves the result further. We denote this method by $LenetDecomSVD$. The training and validation losses of $LenetDecomSVD$ are shown in Figure~\ref{fig:lenetLoss}. We find that the training loss of $LenetDecomSVD$ approaches 0 already in epoch 1, and its validation loss is also smaller than 0.04, better than the original $LenetModified$.
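The SVD-based weight splitting can be sketched as follows (NumPy, with a random matrix standing in for a trained weight matrix; by the Eckart--Young theorem $\bW_1\bW_2$ is the best rank-$r$ approximation of $\bW$ in the Frobenius norm):

```python
import numpy as np

def factor_weight(W, r):
    """Split a dense weight matrix into two low-rank factors via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    W1 = U[:, :r] * s[:r]   # m x r factor: U_r Sigma_r
    W2 = Vt[:r, :]          # r x n factor: V_r^T
    return W1, W2

rng = np.random.default_rng(1)
W = rng.standard_normal((120, 100))   # stand-in for a trained weight matrix
W1, W2 = factor_weight(W, 50)
print(W1.shape, W2.shape)             # (120, 50) (50, 100)
```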
This is just one simple example of how low-rank neural networks work; we do not claim that low-rank neural networks work better in all scenarios. The same matrix decomposition can also be applied to convolutional layers via the equivalence of convolutional and fully connected layers in \citep{ma2017equivalence}. A more detailed exploration of going deeper in neural networks is given in \citep{chen2015net2net, wei2016network, lu2018compnet}.
\section{One More Step: Adding a Nonlinear Function Layer}
In the section above, we approximated a fully connected layer by the product of two low-rank matrices. A further step is to put a nonlinear function between the two low-rank matrices, i.e., the layer computes $f(\bA_{in}\bW_1) \bW_2$ for layer input $\bA_{in}$, where $f(\cdot)$ is a nonlinear function. We denote this method for the above example by $LenetDecomNonlinear$.\index{Nonlinear function layer}
Moreover, if we have already trained $LenetModified$, we can again modify the structure of the previously trained model into the new structure to avoid training from scratch, by factoring the fully connected layers via matrix decomposition.
\begin{SCfigure}
\centering
\includegraphics[width=0.5\textwidth]{imgs/tanh.pdf}
\caption{Demonstration of $y=Tanh(x)$ vs $y=x$. Notice that for inputs in $(-0.25, 0.25)$, $Tanh$ is very close to a linear function.}
\label{fig:tanh}
\end{SCfigure}
Specifically, suppose we use the $Tanh$ function, $Tanh(x) = \frac{e^x-e^{-x}}{e^x + e^{-x}}$, as the nonlinearity in the network. For inputs in $(-0.25, 0.25)$, $Tanh$ is very close to a linear function, as shown in Figure~\ref{fig:tanh}, where the orange line is the difference between $y=Tanh(x)$ and $y=x$; the difference is almost 0 when $x\in (-0.25, 0.25)$.
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Training loss]{\label{fig:nonlinear_lenetLoss_train}
\includegraphics[width=0.47\linewidth]{./imgs/lenetLoss_train_Tanh.pdf}}
\quad
\subfigure[Validation loss]{\label{fig:nonlinear_lenetLoss_val}
\includegraphics[width=0.47\linewidth]{./imgs/lenetLoss_val_Tanh.pdf}}
\caption{Comparison of full neural networks and $Tanh$ in low-rank neural networks.}
\label{fig:nonlinear_lenetLoss}
\end{figure}
Given the factorization $\bW=\bW_1\bW_2$, if we can keep all the values of $\bA_{in}\bW_1$ in the range $(-0.25, 0.25)$, where $\bA_{in}$ is the output of the previous layer, then we can insert the $Tanh$ without any loss since we stay in the (almost) linear region of $Tanh$. Note that the factorization has the equivalence $\bW = \bW_1\bW_2 = (\sigma \bW_1)(\frac{1}{\sigma}\bW_2)$ for any nonzero scalar $\sigma$. We can therefore choose $\sigma$ so that the maximal absolute value of $\bA_{in}(\sigma \bW_1)$ is 0.25. Following the specific example of the previous section, we denote this method by $LenetDecomNonlinear\_SVD$, where we put an extra $Tanh$ function between the factored matrices as follows:
\begin{svgraybox}
\begin{lstlisting}
LenetDecomNonlinear{
Convolutional Layers Omitted;
(Fully Connected Layers):
(0): Linear(in_features=120, out_features=50)
(1): Tanh() [Difference]
(2): Linear(in_features=50, out_features=100)
(3): Tanh()
(4): Linear(in_features=100, out_features=40)
(5): Tanh() [Difference]
(6): Linear(in_features=40, out_features=80)
(7): Tanh()
(8): Linear(in_features=80, out_features=30)
(9): Tanh() [Difference]
(10): Linear(in_features=30, out_features=60)
(11): Tanh()
(12): Linear(in_features=60, out_features=20)
(13): Tanh() [Difference]
(14): Linear(in_features=20, out_features=40)
(15): Tanh()
(16): Linear(in_features=40, out_features=10)
(17): LogSoftmax()
}
\end{lstlisting}
\end{svgraybox}
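The rescaling step described above can be sketched as follows (random matrices stand in for trained weights and activations; $\sigma$ is chosen so that all pre-activations fall in the near-linear region of $Tanh$, leaving the overall map almost unchanged):

```python
import numpy as np

rng = np.random.default_rng(2)
A_in = rng.standard_normal((32, 120))   # output of the previous layer
W1 = rng.standard_normal((120, 50))     # factored weights, W = W1 @ W2
W2 = rng.standard_normal((50, 100))

# Choose sigma so that max |A_in (sigma W1)| = 0.25; the product
# (sigma W1)(W2 / sigma) = W1 W2 is unchanged.
sigma = 0.25 / np.abs(A_in @ W1).max()
H = np.tanh(A_in @ (sigma * W1)) @ (W2 / sigma)

# Inside (-0.25, 0.25), tanh(x) ~ x, so the inserted nonlinearity
# barely perturbs the original linear map A_in W1 W2.
rel = np.abs(H - A_in @ W1 @ W2).max() / np.abs(A_in @ W1 @ W2).max()
print(rel)
```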
After training for 100 epochs, we find that the minimal training losses of $LenetModified$, $LenetDecomNonlinear$, and $LenetDecomNonlinear\_SVD$ are similar, as shown in Figure~\ref{fig:nonlinear_lenetLoss_train}. The validation losses of $LenetModified$ and $LenetDecomNonlinear$ are also similar, both larger than 0.04. However, the minimal validation loss of $LenetDecomNonlinear\_SVD$ is smaller than 0.04, a promising result for this matrix decomposition method in neural networks, as shown in Figure~\ref{fig:nonlinear_lenetLoss_val}. Again, we refer the reader to \citep{chen2015net2net, wei2016network, lu2018compnet} for more details on these neural architecture search methods.
\chapter{Biconjugate Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Existence of the Biconjugate Decomposition}
The biconjugate decomposition was proposed in \citep{chu1995rank} and discussed in \citep{yang2000matrix}.
Its existence relies on the rank-one reduction theorem shown below, and a variety of matrix decomposition methods can be unified via the biconjugate decomposition.
\begin{theorem}[Rank-One Reduction\index{Rank-one reduction}]\label{theorem:rank-1-reduction}
Let $\bA\in \real^{m\times n}$ be a matrix with rank $r$, and let $\bx\in \real^n$ and $\by\in \real^m$ be a pair of vectors such that $w=\by^\top\bA\bx \neq 0$. Then the matrix $\bB=\bA-w^{-1}\bA\bx\by^\top\bA$ has rank $r-1$, exactly one less than the rank of $\bA$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:rank-1-reduction}]
We show that the dimension of $\nspace(\bB)$ is exactly one larger than the dimension of $\nspace(\bA)$. By the rank-nullity theorem, this implies that the rank of $\bB$ is exactly one less than the rank of $\bA$.
For any vector $\bn \in \nspace(\bA)$, i.e., $\bA\bn=\bzero$, we have $\bB\bn =\bA\bn-w^{-1}\bA\bx\by^\top\bA\bn=\bzero$, which means $\nspace(\bA)\subseteq \nspace(\bB)$. Moreover, $\bB\bx = \bA\bx - w^{-1}\bA\bx(\by^\top\bA\bx) = \bA\bx - \bA\bx = \bzero$, so $\bx \in \nspace(\bB)$; and $\bx \notin \nspace(\bA)$ since $\bA\bx \neq \bzero$ by the definition of $w$.
Conversely, for any vector $\bmm \in \nspace(\bB)$, we have $\bB\bmm = \bA\bmm-w^{-1}\bA\bx\by^\top\bA\bmm =\bzero$.
Let $k=w^{-1}\by^\top\bA\bmm$, which is a scalar; then $\bA(\bmm - k\bx)=\bzero$, i.e., every $\bmm \in \nspace(\bB)$ decomposes as $\bmm = \bn + k\bx$ with $\bn = \bmm - k\bx \in \nspace(\bA)$. Thus the null space of $\bB$ is obtained from the null space of $\bA$ by adding $\bx$ to its basis, which increases the dimension of the space by 1. Hence the dimension of $\nspace(\bB)$ is larger than the dimension of $\nspace(\bA)$ by exactly 1, which completes the proof.
\end{proof}
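As a quick numerical sanity check of the theorem, the following sketch applies one Wedderburn rank-one reduction step to a randomly generated rank-3 matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))  # rank 3 by construction
x, y = rng.standard_normal(5), rng.standard_normal(6)
w = y @ A @ x
assert abs(w) > 1e-10              # the theorem requires w != 0

# One rank-one reduction step: B = A - w^{-1} (A x)(y^T A).
B = A - np.outer(A @ x, y @ A) / w
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # rank drops by one
```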
Suppose the matrix $\bA\in \real^{m\times n}$ has rank $r$. We can define a rank-reducing process generating a sequence of Wedderburn matrices $\{\bA_k\}$:
$$
\bA_1 = \bA, \qquad \text{and}\qquad \bA_{k+1} = \bA_k-w_k^{-1}\bA_k\bx_k\by_k^\top\bA_k,
$$
where $\bx_k \in \real^n$ and $\by_k\in \real^m$ are any vectors satisfying $w_k = \by_k^\top\bA_k\bx_k \neq 0$. The sequence terminates in $r$ steps since the rank of $\bA_k$ decreases by exactly one at each step. Writing out the sequence:
$$
\begin{aligned}
\bA_1 &= \bA, \\
\bA_1-\bA_{2} &= w_1^{-1}\bA_1\bx_1\by_1^\top\bA_1,\\
\bA_2-\bA_3 &=w_2^{-1}\bA_2\bx_2\by_2^\top\bA_2, \\
\bA_3-\bA_4 &=w_3^{-1}\bA_3\bx_3\by_3^\top\bA_3, \\
\vdots &=\vdots\\
\bA_{r-1}-\bA_{r} &=w_{r-1}^{-1}\bA_{r-1}\bx_{r-1}\by_{r-1}^\top\bA_{r-1}, \\
\bA_r-\bzero &=w_{r}^{-1}\bA_{r}\bx_{r}\by_{r}^\top\bA_{r}.
\end{aligned}
$$
Adding up the telescoping sequence, we get
$$
(\bA_1-\bA_2)+(\bA_2-\bA_3)+\ldots+(\bA_{r-1}-\bA_{r})+(\bA_r-\bzero ) =\bA= \sum_{i=1}^{r}w_i^{-1}\bA_i\bx_i\by_i^\top\bA_i.
$$
\begin{theoremHigh}[Biconjugate Decomposition: Form 1]\label{theorem:biconjugate-form1}
The equality obtained from the rank-reducing process implies the following matrix decomposition:
$$
\bA = \bPhi \bOmega^{-1} \bPsi^\top,
$$
where $\bOmega=diag(w_1, w_2, \ldots, w_r)$, $\bPhi=[\bphi_1,\bphi_2, \ldots, \bphi_r]\in \real^{m\times r}$, and $\bPsi=[\bpsi_1, \bpsi_2, \ldots, \bpsi_r]\in \real^{n\times r}$ with
$$
\bphi_k = \bA_k\bx_k, \qquad \text{and}\qquad \bpsi_k=\bA_k^\top \by_k.
$$
\end{theoremHigh}
Obviously, different choices of the $\bx_k$'s and $\by_k$'s result in different factorizations, so this factorization is rather general; we will show its connection to several well-known decomposition methods.\index{Wedderburn sequence}
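The rank-reducing process and the resulting Form 1 factorization can be verified numerically. The following sketch uses the particular choice $\by_k = \bA_k\bx_k$, which guarantees $w_k = \|\bA_k\bx_k\|^2 > 0$ for generic $\bx_k$, on a random rank-3 matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
r = 3
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))  # rank r = 3

Ak = A.copy()
Phi, Psi, w = [], [], []
for _ in range(r):
    xk = rng.standard_normal(5)
    yk = Ak @ xk                    # choice guaranteeing w_k = ||Ak xk||^2 > 0
    wk = yk @ Ak @ xk
    Phi.append(Ak @ xk)             # phi_k = A_k x_k
    Psi.append(Ak.T @ yk)           # psi_k = A_k^T y_k
    w.append(wk)
    Ak = Ak - np.outer(Ak @ xk, yk @ Ak) / wk   # rank drops by one each step

# After r steps the sequence terminates (A_{r+1} = 0), and
# A = Phi Omega^{-1} Psi^T with Omega = diag(w_1, ..., w_r).
Phi, Psi = np.column_stack(Phi), np.column_stack(Psi)
print(np.linalg.norm(Ak), np.linalg.norm(Phi @ np.diag(1 / np.array(w)) @ Psi.T - A))
```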
\begin{remark}
For the vectors $\bx_k, \by_k$ in the Wedderburn sequence, we have the following property
$$
\begin{aligned}
\bx_k &\in \nspace(\bA_{k+1}) \bot \cspace(\bA_{k+1}^\top), \\
\by_k &\in \nspace(\bA_{k+1}^\top) \bot \cspace(\bA_{k+1}).
\end{aligned}
$$
\end{remark}
\begin{lemma}[General Term Formula of Wedderburn Sequence: V1]\label{lemma:wedderburn-sequence-general}
For each matrix $\bA_{k+1} = \bA_k -w_k^{-1} \bA_k\bx_k\by_k^\top \bA_k$ in the Wedderburn sequence, $\bA_{k+1}$ can be written as
$$
\bA_{k+1} = \bA - \sum_{i=1}^{k}w_{i}^{-1} \bA\bu_i \bv_i^\top \bA,
$$
where
$$
\bu_k=\bx_k -\sum_{i=1}^{k-1}\frac{\bv_i^\top \bA\bx_k}{w_i}\bu_i,\qquad \text{and}\qquad \bv_k=\by_k -\sum_{i=1}^{k-1}\frac{\by_k^\top \bA\bu_i}{w_i}\bv_i.
$$
\end{lemma}
The proof of this lemma is provided in Appendix~\ref{appendix:wedderburn-general-term}. Notice that $w_i =\by_i^\top\bA_i\bx_i$ in the general term formula still depends on $\bA_i$, so this is not yet a true general term formula. We will later express $w_i$ in terms of $\bA$ rather than $\bA_i$.
From the general term formula of Wedderburn sequence, we have
$$
\begin{aligned}
\bA_{k+1} &= \bA - \sum_{i=1}^{k}w_{i}^{-1} \bA\bu_i \bv_i^\top \bA \\
\bA_{k} &= \bA - \sum_{i=1}^{k-1}w_{i}^{-1} \bA\bu_i \bv_i^\top \bA.
\end{aligned}
$$
Thus, $\bA_{k+1} - \bA_{k} = -w_{k}^{-1} \bA\bu_k \bv_k^\top \bA$. Since the sequence is defined by $\bA_{k+1} = \bA_k -w_k^{-1} \bA_k\bx_k\by_k^\top \bA_k$, we find $w_{k}^{-1} \bA\bu_k \bv_k^\top \bA = w_k^{-1} \bA_k\bx_k\by_k^\top \bA_k$, from which it follows that
\begin{equation}\label{equation:wedderburn-au-akxk}
\begin{aligned}
\bA\bu_k &=\bA_k\bx_k,\\
\bv_k^\top \bA&=\by_k^\top \bA_k.
\end{aligned}
\end{equation}
Let $z_{k,i} = \frac{\bv_i^\top \bA\bx_k}{w_i}$, which is a scalar. From the definition of $\bu_k$ and $\bv_k$ in the above lemma, we have
\begin{itemize}
\item $\bu_1=\bx_1$;
\item $\bu_2 = \bx_2 - z_{2,1}\bu_1$;
\item $\bu_3 = \bx_3 - z_{3,1}\bu_1-z_{3,2}\bu_2$;
\item $\ldots$.
\end{itemize}
This process is similar to the Gram-Schmidt process, except that we no longer project $\bx_2$ onto $\bx_1$ with the smallest distance: the component of $\bx_2$ along $\bx_1$ is now determined by $z_{2,1}$. This process is shown in Figure~\ref{fig:projection-wedd}. In Figure~\ref{fig:project-line-wedd}, $\bu_2$ is not perpendicular to $\bu_1$, but $\bu_2$ does not lie on the line spanned by $\bu_1$, so $\bu_1, \bu_2$ still span a two-dimensional subspace. Similarly, in Figure~\ref{fig:project-space-wedd}, $\bu_3= \bx_3 - z_{3,1}\bu_1-z_{3,2}\bu_2$ does not lie in the space spanned by $\bu_1, \bu_2$, so $\bu_1, \bu_2, \bu_3$ still span a three-dimensional subspace.
A moment of reflection reveals that the span of $\bx_1, \bx_2$ is the same as the span of $\bu_1, \bu_2$, and similarly for the $\bv_i$'s. We have the following property:
\begin{equation}\label{equation:wedderburn-span-same}
\left\{
\begin{aligned}
\textrm{span}\{\bx_1, \bx_2, \ldots, \bx_j\} &= \textrm{span}\{\bu_1, \bu_2, \ldots, \bu_j\};\\
\textrm{span}\{\by_1, \by_2, \ldots, \by_j\} &= \textrm{span}\{\bv_1, \bv_2, \ldots, \bv_j\}.\\
\end{aligned}
\right.
\end{equation}
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[``Project" onto a line]{\label{fig:project-line-wedd}
\includegraphics[width=0.47\linewidth]{./imgs/projectline_wedderburn.pdf}}
\quad
\subfigure[``Project" onto a space]{\label{fig:project-space-wedd}
\includegraphics[width=0.47\linewidth]{./imgs/projectspace_wedderburn.pdf}}
\caption{``Project" a vector onto a line and onto a space.}
\label{fig:projection-wedd}
\end{figure}
Further, from the rank-reducing property in the Wedderburn sequence, we have
$$
\left\{
\begin{aligned}
\cspace(\bA_1) &\supset \cspace(\bA_2) \supset \cspace(\bA_3) \supset \ldots;\\
\nspace(\bA_1^\top) &\subset \nspace(\bA_2^\top) \subset \nspace(\bA_3^\top) \subset \ldots.
\end{aligned}
\right.
$$
Since $\by_k \in \nspace(\bA_{k+1}^\top)$ and the null spaces are nested as above, it follows that $\by_j \in \nspace(\bA_{k+1}^\top)$ for all $j<k+1$, i.e., $\bA_{k+1}^\top \by_j=\bzero$ for all $j<k+1$. In particular, $\bx_{k+1}^\top \bA_{k+1}^\top \by_j=0$ for all $j<k+1$. From Equation~\eqref{equation:wedderburn-au-akxk}, we also have $\bu_{k+1}^\top \bA^\top \by_j=0$ for all $j<k+1$. Following Equation~\eqref{equation:wedderburn-span-same}, we obtain
$$
\bv_j^\top\bA\bu_{k+1}=0 \text{ for all } j<k+1.
$$
Similarly, we can prove
$$
\bv_{k+1}^\top\bA\bu_{j}=0 \text{ for all } j<k+1.
$$
Moreover, we defined $w_k = \by_k^\top \bA_k\bx_k$. By Equation~\eqref{equation:wedderburn-au-akxk}, we can write $w_k$ as:
$$
\begin{aligned}
w_k &= \by_k^\top \bA_k\bx_k\\
&=\bv_k^\top \bA\bx_k \\
&=\bv_k^\top\bA (\bu_k +\sum_{i=1}^{k-1}\frac{\bv_i^\top \bA\bx_k}{w_i}\bu_i) \qquad &(\text{by the definition of }\bu_k \text{ in Lemma~\ref{lemma:wedderburn-sequence-general}})\\
&=\bv_k^\top\bA \bu_k,\qquad &(\text{by } \bv_{k}^\top\bA\bu_{j}=0 \text{ for all } j<k)
\end{aligned}
$$
which can be used to substitute for $w_k$ in Lemma~\ref{lemma:wedderburn-sequence-general}. We then have the full version of the general term formula of the Wedderburn sequence, in which the formula no longer depends on the $\bA_k$'s (through the $w_k$'s):
\begin{equation}\label{equation:uk-vk-to-mimic-gram-process}
\bu_k=\bx_k -\sum_{i=1}^{k-1}\frac{\bv_i^\top \bA\bx_k}{\textcolor{blue}{\bv_i^\top\bA\bu_i}}\bu_i,\qquad \text{and}\qquad \bv_k=\by_k -\sum_{i=1}^{k-1}\frac{\by_k^\top \bA\bu_i}{\textcolor{blue}{\bv_i^\top\bA\bu_i}}\bv_i.
\end{equation}
\textbf{Gram-Schmidt Process from Wedderburn Sequence}: Suppose $\bX=[\bx_1,
\bx_2, \ldots, \bx_r]\in \real^{n\times r}$ and $\bY=[\by_1, \by_2, \ldots, \by_r]\in \real^{m\times r}$ effect a rank-reducing process for $\bA$. If $\bA$ is the identity matrix and $(\bX, \bY)$ are identical, containing the vectors for which an orthogonal basis is desired, then $\bU = \bV$ gives the resultant orthogonal basis.
This form of $\bu_k$ and $\bv_k$ in Equation~\eqref{equation:uk-vk-to-mimic-gram-process} closely parallels the projection onto the orthogonal complement in the Gram-Schmidt process of Equation~\eqref{equation:gram-schdt-eq2}. We therefore define \fbox{$<\bx, \by>:=\by^\top\bA\bx$} to explicitly mimic the form of projection in Equation~\eqref{equation:gram-schdt-eq2}.
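To make the parallel concrete, the following is a minimal numerical sketch (in Python; an illustration, not from the text) of the recursion in Equation~\eqref{equation:uk-vk-to-mimic-gram-process} with $\bA = \bI$ and $\by_k = \bx_k$, in which case $\bv_k = \bu_k$ and the recursion collapses to the classical Gram-Schmidt process:

```python
# Minimal sketch: the Wedderburn recursion
#   u_k = x_k - sum_{i<k} (<x_k, v_i> / <u_i, v_i>) u_i
# with A = I and y_k = x_k (so that v_k = u_k) is exactly Gram-Schmidt.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def wedderburn_orthogonalize(X):
    """Orthogonalize the vectors in X via the Wedderburn u_k recursion (A = I)."""
    U = []
    for x in X:
        u = list(x)
        for ui in U:
            coef = dot(x, ui) / dot(ui, ui)  # <x_k, v_i> / <u_i, v_i> with v_i = u_i
            u = [uj - coef * uij for uj, uij in zip(u, ui)]
        U.append(u)
    return U

X = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
U = wedderburn_orthogonalize(X)
```

The resulting vectors are mutually orthogonal, as the Gram-Schmidt interpretation predicts.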
We summarize the results so far in the following lemma, which gives a clear view of the structure developed above; we will use these results extensively in the sequel:
\begin{lemma}[Properties of Wedderburn Sequence]\label{lemma:wedderburn-sequence-general-v2}
Given the Wedderburn sequence $\bA_{k+1} = \bA_k -w_k^{-1} \bA_k\bx_k\by_k^\top \bA_k$, each $\bA_{k+1}$ can be written as
$$
\bA_{k+1} = \bA - \sum_{i=1}^{k}w_{i}^{-1} \bA\bu_i \bv_i^\top \bA,
$$
where
\begin{equation}\label{equation:properties-of-wedderburn-ukvk}
\bu_k=\bx_k -\sum_{i=1}^{k-1}\frac{\textcolor{blue}{<\bx_k, \bv_i>}}{\textcolor{blue}{<\bu_i,\bv_i>}}\bu_i,\qquad
\text{and}\qquad \bv_k=\by_k -\sum_{i=1}^{k-1}\frac{\textcolor{blue}{<\bu_i,\by_k>}}{\textcolor{blue}{<\bu_i,\bv_i>}}\bv_i.
\end{equation}
Further, we have the following properties:
\begin{equation}\label{equation:wedderburn-au-akxk-2}
\begin{aligned}
\bA\bu_k &=\bA_k\bx_k,\\
\bv_k^\top \bA&=\by_k^\top \bA_k.
\end{aligned}
\end{equation}
\begin{equation}
<\bu_k, \bv_j>=<\bu_j, \bv_k>=0 \text{ for all } j<k.
\end{equation}
\begin{equation}\label{equation:wk-by-ukvk}
w_k = \by_k^\top\bA_k\bx_k = <\bu_k, \bv_k>
\end{equation}
\end{lemma}
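The properties in Lemma~\ref{lemma:wedderburn-sequence-general-v2} can be verified numerically. Below is a small sketch (plain Python on a $3\times 3$ example matrix chosen for illustration) that runs the Wedderburn sequence with $\bx_k = \by_k = \be_k$, builds $\bu_k$ and $\bv_k$ by the recursion, and checks the biconjugacy relations and the rank-reducing property (after $n$ steps the sequence terminates at the zero matrix):

```python
# Wedderburn sequence A_{k+1} = A_k - (1/w_k) (A_k x_k)(y_k^T A_k)
# with x_k = y_k = e_k, checked against the lemma's properties.

n = 3
A = [[4.0, 2.0, 1.0], [2.0, 3.0, 1.0], [1.0, 1.0, 2.0]]  # all leading minors nonzero

def bform(y, x):
    """The bilinear form <x, y> := y^T A x used in the text."""
    return sum(y[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

Ak = [row[:] for row in A]
U, V, W = [], [], []
for k in range(n):
    e = [1.0 if i == k else 0.0 for i in range(n)]        # x_k = y_k = e_k
    u, v = e[:], e[:]
    for ui, vi, wi in zip(U, V, W):                       # recursion for u_k, v_k
        cu, cv = bform(vi, e) / wi, bform(e, ui) / wi     # <x_k,v_i>/w_i, <u_i,y_k>/w_i
        u = [a - cu * b for a, b in zip(u, ui)]
        v = [a - cv * b for a, b in zip(v, vi)]
    wk = sum(e[i] * Ak[i][j] * e[j] for i in range(n) for j in range(n))  # y_k^T A_k x_k
    col = [sum(Ak[i][j] * e[j] for j in range(n)) for i in range(n)]      # A_k x_k
    row = [sum(e[i] * Ak[i][j] for i in range(n)) for j in range(n)]      # y_k^T A_k
    Ak = [[Ak[i][j] - col[i] * row[j] / wk for j in range(n)] for i in range(n)]
    U.append(u); V.append(v); W.append(wk)
```

After the loop, `Ak` holds $\bA_{n+1} = \bzero$, the cross terms $<\bu_k, \bv_j>$ vanish for $j \neq k$, and each $w_k$ equals $<\bu_k, \bv_k>$.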
By substituting Equation~\eqref{equation:wedderburn-au-akxk-2} into Form 1 of the biconjugate decomposition, and using Equation~\eqref{equation:wk-by-ukvk}, which implies $w_k = \bv_k^\top\bA\bu_k$, we obtain Form 2 and Form 3 of this decomposition:
\begin{theoremHigh}[Biconjugate Decomposition: Form 2 and Form 3]\label{theorem:biconjugate-form2}
The equality from rank-reducing process implies the following matrix decomposition
$$
\bA = \bA\bU_r \bOmega_r^{-1} \bV_r^\top\bA,
$$
where $\bOmega_r=diag(w_1, w_2, \ldots, w_r)$, $\bU_r=[\bu_1,\bu_2, \ldots, \bu_r]\in \real^{n\times r}$ and $\bV_r=[\bv_1, \bv_2,$ $\ldots,$ $\bv_r] \in \real^{m\times r}$ with
\begin{equation}\label{equation:properties-of-wedderburn-ukvk2-inform2}
\bu_k=\bx_k -\sum_{i=1}^{k-1}\frac{<\bx_k, \bv_i>}{<\bu_i,\bv_i>}\bu_i,\qquad
\text{and}\qquad \bv_k=\by_k -\sum_{i=1}^{k-1}\frac{<\bu_i,\by_k>}{<\bu_i,\bv_i>}\bv_i.
\end{equation}
We also have the following decomposition:
\begin{equation}\label{equation:wedderburn-vgamma-ugamma}
\bV_\gamma^\top \bA \bU_\gamma = \bOmega_\gamma,
\end{equation}
where $\bOmega_\gamma=diag(w_1, w_2, \ldots, w_\gamma)$, $\bU_\gamma=[\bu_1,\bu_2, \ldots, \bu_\gamma]\in \real^{n\times \gamma}$ and $\bV_\gamma=[\bv_1, \bv_2,$ $\ldots,$ $\bv_\gamma]\in \real^{m\times \gamma}$. Note the difference between the subscripts $r$ and $\gamma$ used here, with $\gamma \leq r$.
\end{theoremHigh}
Notice that these two forms of the biconjugate decomposition are independent of the Wedderburn matrices $\{\bA_k\}$.
\textbf{A word on the notation}: in the sequel, we will use the subscript to indicate the number of columns of a matrix to avoid confusion, e.g., the $r$ and $\gamma$ in the above theorem.
\section{Properties of the Biconjugate Decomposition}
\begin{corollary}[Connection of $\bU_\gamma$ and $\bX_\gamma$]\label{corollary:biconjugate-connection-u-x}
If $(\bX_\gamma, \bY_\gamma) \in \real^{n\times \gamma}\times \real^{m\times \gamma}$ effects a rank-reducing process for $\bA$, then there are unique unit upper triangular matrices $\bR_\gamma^{(x)}\in\real^{\gamma\times \gamma}$ and $\bR_\gamma^{(y)}\in\real^{\gamma\times \gamma}$ such that
$$
\bX_\gamma = \bU_\gamma \bR_\gamma^{(x)}, \qquad \text{and} \qquad \bY_\gamma=\bV_\gamma\bR_\gamma^{(y)},
$$
where $\bU_\gamma$ and $\bV_\gamma$ are matrices with columns resulting from the Wedderburn sequence as in Equation~\eqref{equation:wedderburn-vgamma-ugamma}.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:biconjugate-connection-u-x}]
The proof is trivial from the definition of $\bu_k$ and $\bv_k$ in Equation~\eqref{equation:properties-of-wedderburn-ukvk} or Equation~\eqref{equation:properties-of-wedderburn-ukvk2-inform2} by setting the $j$-th column of $\bR_\gamma^{(x)}$ and $\bR_\gamma^{(y)}$ as
$$
\left[\frac{<\bx_j,\bv_1>}{<\bu_1,\bv_1>}, \frac{<\bx_j,\bv_2>}{<\bu_2,\bv_2>}, \ldots, \frac{<\bx_j,\bv_{j-1}>}{<\bu_{j-1},\bv_{j-1}>}, 1, 0, 0, \ldots, 0 \right]^\top,
$$
and
$$
\left[\frac{<\bu_1, \by_j>}{<\bu_1,\bv_1>},\frac{<\bu_2, \by_j>}{<\bu_2,\bv_2>}, \ldots, \frac{<\bu_{j-1}, \by_j>}{<\bu_{j-1},\bv_{j-1}>}, 1, 0, 0, \ldots, 0 \right]^\top.
$$
This completes the proof.
\end{proof}
The pair $(\bU_\gamma, \bV_\gamma) \in \real^{n\times \gamma}\times \real^{m\times \gamma}$ in Theorem~\ref{theorem:biconjugate-form2} is called a \textbf{biconjugate pair} with respect to $\bA$ if $\bOmega_\gamma$ is nonsingular and diagonal. Let $(\bX_\gamma, \bY_\gamma) \in \real^{n\times \gamma}\times \real^{m\times \gamma}$ effect a rank-reducing process for $\bA$; then $(\bX_\gamma, \bY_\gamma)$ is said to be \textbf{biconjugatable} and \textbf{biconjugated into a biconjugate pair} of matrices $(\bU_\gamma, \bV_\gamma)$ if there exist unit upper triangular matrices $\bR_\gamma^{(x)},\bR_\gamma^{(y)}$ such that $\bX_\gamma = \bU_\gamma \bR_\gamma^{(x)}$ and $\bY_\gamma=\bV_\gamma\bR_\gamma^{(y)}$.
\section{Connection to Well-Known Decomposition Methods}
\subsection{LDU Decomposition}
\begin{theorem}[LDU, \cite{chu1995rank} Theorem 2.4]\label{theorem:biconjugate-ldu}
Let $(\bX_\gamma, \bY_\gamma) \in \real^{n\times \gamma}\times \real^{m\times \gamma}$
and $\bA\in \real^{m\times n}$ with $\gamma \in \{1, 2, \ldots, r\}$. Then $(\bX_\gamma, \bY_\gamma)$ can be biconjugated if and only if $\bY_\gamma^\top\bA\bX_\gamma$ has an LDU decomposition.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:biconjugate-ldu}]
Suppose $\bX_\gamma$ and $\bY_\gamma$ are biconjugatable. Then there exist unit upper triangular matrices $\brx$ and $\bry$ such that $\bxgamma = \bugamma\brx$, $\bygamma = \bvgamma\bry$, and $\bvgamma^\top\bA\bugamma = \bomegagamma$ is a nonsingular diagonal matrix. It follows that
$$
\bygamma^\top\bA\bxgamma = \bryt \bvgamma^\top \bA \bugamma\brx = \bryt \bomegagamma \brx
$$
is the unique unit triangular LDU decomposition of $\bygamma^\top\bA\bxgamma$. The form above can be seen as the \textbf{fourth form of the biconjugate decomposition}.
Conversely, suppose $\bygamma^\top\bA\bxgamma = \bR_2^\top \bD\bR_1$ is an LDU decomposition with both $\bR_1$ and $\bR_2$ being unit upper triangular matrices. Since $\bR_1^{-1}$ and $\bR_2^{-1}$ are also unit upper triangular, $(\bX_\gamma, \bY_\gamma)$ biconjugates into $(\bX_\gamma\bR_1^{-1}, \bY_\gamma\bR_2^{-1})$.
\end{proof}
\begin{corollary}[Determinant]\label{corollary:lu-determinant}
Suppose $(\bX_\gamma, \bY_\gamma) \in \real^{n\times \gamma}\times \real^{m\times \gamma}$ are biconjugatable. Then
$$
\det(\bygamma^\top \bA\bxgamma) = \prod_{i=1}^{\gamma} w_i.
$$
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:lu-determinant}]
By Theorem~\ref{theorem:biconjugate-ldu}, since $(\bX_\gamma, \bY_\gamma)$ are biconjugatable, there are unit upper triangular matrices $\brx$ and $\bry$ such that $\bygamma^\top\bA\bxgamma = \bryt \bomegagamma \brx$. Since the determinant of a unit triangular matrix is 1, taking determinants on both sides yields $\det(\bygamma^\top \bA\bxgamma) = \det(\bomegagamma) = \prod_{i=1}^{\gamma} w_i$.
\end{proof}
\begin{lemma}[Biconjugatable in Principal Minors]\label{lemma:Biconjugatable-in-Principal-Minors}
Let $r=rank(\bA) \geq \gamma$ with $\bA\in \real^{m\times n}$. In the Wedderburn sequence, take $\bx_i$ as the $i$-th standard basis vector of $\real^n$ for $i \in \{1, 2, \ldots, \gamma\}$ (i.e., $\bx_i = \be_i \in \real^n$) and $\by_i$ as the $i$-th standard basis vector of $\real^m$ for $i \in \{1, 2, \ldots, \gamma\}$ (i.e., $\by_i=\be_i \in \real^m$). Then $\bygamma^\top \bA\bxgamma$ is the $\gamma$-th leading principal submatrix of $\bA$, i.e., $\bygamma^\top \bA\bxgamma = \bA_{1:\gamma, 1:\gamma}$. In this setting, $(\bX_\gamma, \bY_\gamma)$ is biconjugatable if and only if the $\gamma$-th leading principal minor of $\bA$ is nonzero. In this case, the $\gamma$-th leading principal minor of $\bA$ is given by $\prod_{i=1}^{\gamma} w_i$.\index{Leading principal minors}
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:Biconjugatable-in-Principal-Minors}]
The proof follows from the fact that nonzero leading principal minors imply $w_i \neq 0$ for all $i\leq \gamma$, so that the Wedderburn sequence can be successfully obtained. The converse holds since Corollary~\ref{corollary:lu-determinant} implies that $\det(\bygamma^\top \bA\bxgamma) = \prod_{i=1}^{\gamma} w_i$ is nonzero.
\end{proof}
We thus finally come to the LDU decomposition for square matrices.
\begin{theorem}[LDU: Biconjugate Decomposition for Square Matrices]\label{theorem:biconjugate-square-ldu}
For any matrix $\bA\in \real^{n\times n}$, $(\bI_n, \bI_n)$ is biconjugatable if and only if all the leading principal minors of $\bA$ are nonzero. In this case, $\bA$ can be factored as
$$
\bA = \bV_n^{-\top} \bOmega_n \bU_n^{-1} = \bL\bD\bU,
$$
where $\bOmega_n = \bD$ is a diagonal matrix with nonzero values on the diagonal, $\bV_n^{-\top} = \bL$ is a unit lower triangular matrix and $\bU_n^{-1} = \bU$ is a unit upper triangular matrix.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:biconjugate-square-ldu}]
From Lemma~\ref{lemma:Biconjugatable-in-Principal-Minors}, it is trivial that $(\bI_n, \bI_n)$ is biconjugatable. From Corollary~\ref{corollary:biconjugate-connection-u-x}, we have $\bU_n \bR_n^{(x)} = \bI_n$ and $\bI_n=\bV_n\bR_n^{(y)}$, thus $\bR_n^{(x)} = \bU_n^{-1}$ and $\bR_n^{(y)} = \bV_n^{-1}$ are well defined and we complete the proof.
\end{proof}
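As a concrete numerical sketch (plain Python; the matrix is made up for illustration), the LDU factors of a square matrix with nonzero leading principal minors can be obtained by Gaussian elimination, and the resulting pivots coincide with the $w_k$'s of the Wedderburn sequence, each pivot being a ratio of consecutive leading principal minors:

```python
def ldu(A):
    """LDU of a square matrix via Gaussian elimination (no pivoting).
    Assumes all leading principal minors are nonzero."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]          # elimination multiplier
            L[i][k] = m
            for j in range(n):
                U[i][j] -= m * U[k][j]
    D = [U[k][k] for k in range(n)]        # the pivots w_1, ..., w_n
    U = [[U[k][j] / D[k] for j in range(n)] for k in range(n)]  # unit upper triangular
    return L, D, U

A = [[4.0, 2.0, 1.0], [2.0, 3.0, 1.0], [1.0, 1.0, 2.0]]
L, D, U = ldu(A)
```

For this matrix the pivots are $4$, $2$, and $13/8$, matching the ratios of the leading principal minors $4$, $8$, $13$.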
\subsection{Cholesky Decomposition}
For symmetric positive definite matrices, the leading principal minors are guaranteed to be positive. The proof is provided in Section~\ref{appendix:leading-minors-pd}.
\begin{theorem}[Cholesky: Biconjugate Decomposition for PD Matrices]\label{theorem:biconjugate-square-cholesky}
For any symmetric and positive definite matrix $\bA\in \real^{n\times n}$, the Cholesky decomposition of $\bA$ can be obtained from the Wedderburn sequence applied to $(\bI_n, \bI_n)$ as $(\bX_n, \bY_n)$.
In this case, $\bA$ can be factored as
$$
\bA = \bU_n^{-\top} \bOmega_n \bU_n^{-1} = (\bU_n^{-\top} \bOmega_n^{1/2})( \bOmega_n^{1/2} \bU_n^{-1}) =\bR^\top\bR,
$$
where $\bOmega_n$ is a diagonal matrix with positive values on the diagonal, and $\bU_n^{-1}$ is a unit upper triangular matrix.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:biconjugate-square-cholesky}]
Since the leading principal minors of positive definite matrices are positive, we have $w_i>0$ for all $i\in \{1, 2,\ldots, n\}$. From the LDU form of the biconjugate decomposition and the symmetry of $\bA$, it can easily be verified that $\bA = \bU_n^{-\top} \bOmega_n \bU_n^{-1}$. Since the $w_i$'s are positive, $\bOmega_n$ is positive definite and can be factored as $\bOmega_n = \bOmega_n^{1/2}\bOmega_n^{1/2}$, which implies that $\bOmega_n^{1/2} \bU_n^{-1}$ is the Cholesky factor.
\end{proof}
\subsection{QR Decomposition}
Without loss of generality, we shall assume that $\bA\in \real^{n\times n}$ has full column rank, so that $\bA$ can be factored as $\bA=\bQ\bR$ with $\bQ, \bR \in \real^{n\times n}$.
\begin{theorem}[QR: Biconjugate Decomposition for Nonsingular Matrices]\label{theorem:biconjugate-square-qr}
For any nonsingular matrix $\bA\in \real^{n\times n}$, the QR decomposition of $\bA$ can be obtained from the Wedderburn sequence applied to $(\bI_n, \bA)$ as $(\bX_n, \bY_n)$.
In this case, $\bA$ can be factored as
$$
\bA = \bQ\bR,
$$
where $\bQ=\bV_n \bOmega_n^{-1/2}$ is an orthogonal matrix and $\bR = \bOmega_n^{1/2}\bR_n^{(x)}$ is an upper triangular matrix. Here, \textcolor{blue}{Form 4} in Theorem~\ref{theorem:biconjugate-ldu} with $\gamma=n$ reads
$$
\bY_n^\top\bA\bX_n = \bR_n^{(y)\top} \bV_n^\top \bA \bU_n\bR_n^{(x)}=\bR_n^{(y)\top} \bOmega_n \bR_n^{(x)}.
$$
which holds for $\gamma=n$ since $\gamma$ can be any value with $\gamma\leq r$ and the rank $r=n$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:biconjugate-square-qr}]
Let $(\bX_n, \bY_n) = (\bI_n, \bA)$. By Theorem~\ref{theorem:biconjugate-ldu}, we have the decomposition
$$
\bY_n^\top\bA\bX_n = \bR_n^{(y)\top} \bV_n^\top \bA \bU_n\bR_n^{(x)}=\bR_n^{(y)\top} \bOmega_n \bR_n^{(x)}.
$$
Substituting $(\bI_n, \bA)$ into the above decomposition, we have
\begin{equation}\label{equation:biconjugate-qr-ata1}
\begin{aligned}
\bY_n^\top\bA\bX_n &= \bR_n^{(y)\top} \bV_n^\top \bA \bU_n\bR_n^{(x)} = \bR_n^{(y)\top} \bOmega_n \bR_n^{(x)}\\
\bA^\top \bA &= \bR_n^{(y)\top} \bOmega_n \bR_n^{(x)} \\
\bA^\top \bA &= \bR_1^\top \bOmega_n \bR_1 \qquad (\text{$\bA^\top\bA$ is symmetric and let $\bR_1=\bR_n^{(x)}=\bR_n^{(y)}$})\\
\bA^\top \bA &= (\bR_1^\top \bOmega_n^{1/2\top}) (\bOmega_n^{1/2}\bR_1) \\
\bA^\top \bA &= \bR^\top\bR. \qquad (\text{Let $\bR = \bOmega_n^{1/2}\bR_1$})
\end{aligned}
\end{equation}
We now show why $\bOmega_n$ can be factored as $\bOmega_n = \bOmega_n^{1/2\top}\bOmega_n^{1/2}$.
Suppose $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$. Since $\bA$ is nonsingular, each $w_i > 0$ (e.g., $w_1 = \ba_1^\top \ba_1$). Thus $\bOmega_n=diag(w_1, w_2, \ldots, w_n)$ is positive definite and can be factored as
\begin{equation}\label{equation:omega-half-qr}
\bOmega_n = \bOmega_n^{1/2}\bOmega_n^{1/2}= \bOmega_n^{1/2\top}\bOmega_n^{1/2}.
\end{equation}
By $\bxgamma = \bugamma\brx$ in Theorem~\ref{theorem:biconjugate-ldu} for all $\gamma\in \{1, 2, \ldots, n\}$, we have
$$
\begin{aligned}
\bX_n &= \bU_n\bR_1 \\
\bI_n &= \bU_n\bR_1, \qquad (\text{Since $\bX_n = \bI_n$}) \\
\bU_n &= \bR_1^{-1}
\end{aligned}
$$
By $\bygamma = \bvgamma\bry$ in Theorem~\ref{theorem:biconjugate-ldu} for all $\gamma\in \{1, 2, \ldots, n\}$, we have
\begin{equation}\label{equation:biconjugate-qr-ata2}
\begin{aligned}
\bY_n &= \bV_n\bR_1\\
\bA &= \bV_n\bR_1, \qquad &(\text{$\bA=\bY_n$}) \\
\bA^\top \bA &= \bR_1^\top\bV_n^\top \bV_n\bR_1 \\
\bR_1^\top \bOmega_n \bR_1&=\bR_1^\top\bV_n^\top \bV_n\bR_1, \qquad &(\text{From Equation~\eqref{equation:biconjugate-qr-ata1}})\\
(\bR_1^\top \bOmega_n^{1/2\top}) (\bOmega_n^{1/2}\bR_1) &= (\bR_1^\top \bOmega_n^{1/2\top} \bOmega_n^{-1/2\top})\bV_n^\top \bV_n (\bOmega_n^{-1/2}\bOmega_n^{1/2}\bR_1), \qquad &\text{(From Equation~\eqref{equation:omega-half-qr})} \\
\bR^\top\bR &= \bR^\top (\bOmega_n^{-1/2\top} \bV_n^\top) (\bV_n \bOmega_n^{-1/2}) \bR
\end{aligned}
\end{equation}
Since $\bR$ is invertible, this implies $(\bV_n \bOmega_n^{-1/2})^\top(\bV_n \bOmega_n^{-1/2}) = \bI_n$. Thus, $\bQ=\bV_n \bOmega_n^{-1/2}$ is an orthogonal matrix.
\end{proof}
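The net effect of applying the Wedderburn sequence to $(\bI_n, \bA)$ is classical Gram-Schmidt on the columns of $\bA$: the unnormalized orthogonal columns play the role of the $\bv_k$'s, and normalizing by $w_k^{1/2}$ gives $\bQ$. Below is a minimal sketch (plain Python, with a made-up $2\times 2$ example) under this interpretation:

```python
import math

def qr_gram_schmidt(A):
    """QR of a nonsingular matrix via classical Gram-Schmidt on its columns;
    this is the net effect of the Wedderburn sequence applied to (I, A)."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j, a in enumerate(cols):
        u = a[:]
        for i, q in enumerate(Q):
            R[i][j] = sum(qk * ak for qk, ak in zip(q, a))   # projection coefficient
            u = [uk - R[i][j] * qk for uk, qk in zip(u, q)]
        R[j][j] = math.sqrt(sum(uk * uk for uk in u))        # plays the role of w_j^{1/2}
        Q.append([uk / R[j][j] for uk in u])
    return [[Q[j][i] for j in range(n)] for i in range(n)], R  # Q as a matrix, and R

A = [[3.0, 1.0], [4.0, 2.0]]
Q, R = qr_gram_schmidt(A)
```

One can then check that $\bA=\bQ\bR$ and $\bQ^\top\bQ=\bI$ hold to floating-point accuracy.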
\subsection{SVD}
To differentiate the notation, let $\bA=\bU^\mathrm{svd} \bSigma^\mathrm{svd} \bV^{\mathrm{svd}\top}$ be the SVD of $\bA$ where $\bU^\mathrm{svd} = [\bu_1^\mathrm{svd}, \bu_2^\mathrm{svd}, \ldots, \bu_n^\mathrm{svd}]$, $\bV^\mathrm{svd} = [\bv_1^\mathrm{svd}, \bv_2^\mathrm{svd}, \ldots, \bv_n^\mathrm{svd}]$ and $\bSigma^\mathrm{svd} = diag(\sigma_1, \sigma_2, \ldots, \sigma_n)$. Without loss of generality, we assume $\bA\in \real^{n\times n}$ and $rank(\bA)=n$. Readers can prove the equivalence for $\bA\in \real^{m\times n}$.
Let $\bX_n=\bV^\mathrm{svd}$ and $\bY_n=\bU^\mathrm{svd}$ effect a rank-reducing process for $\bA$.
From the definition of $\bu_k$ and $\bv_k$ in Equation~\eqref{equation:properties-of-wedderburn-ukvk} or Equation~\eqref{equation:properties-of-wedderburn-ukvk2-inform2}, we have
$$
\bu_k = \bv_k^\mathrm{svd} \qquad \text{and} \qquad \bv_k = \bu_k^\mathrm{svd} \qquad \text{and} \qquad w_k = \by_k^\top \bA \bx_k=\sigma_k.
$$
That is, $\bV_n = \bU^\mathrm{svd}$, $\bU_n = \bV^\mathrm{svd}$, and $\bOmega_n = \bSigma^\mathrm{svd}$, where we set $\gamma=n$ since $\gamma$ can be any value with $\gamma\leq r$ and the rank $r=n$.
By $\bX_n= \bU_n\bR_n^{(x)}$ in Theorem~\ref{theorem:biconjugate-ldu}, we have
$$
\bX_n = \bU_n\bR_n^{(x)} \leadto
\bV^\mathrm{svd} = \bV^\mathrm{svd}\bR_n^{(x)} \leadto
\bI_n = \bR_n^{(x)}
$$
By $\bY_n = \bV_n\bR_n^{(y)}$ in Theorem~\ref{theorem:biconjugate-ldu}, we have
$$
\bY_n = \bV_n\bR_n^{(y)}\leadto
\bU^\mathrm{svd} = \bU^\mathrm{svd}\bR_n^{(y)} \leadto
\bI_n=\bR_n^{(y)}
$$
Again, from Theorem~\ref{theorem:biconjugate-ldu} and let $\gamma=n$, we have
$$
\bY_n^\top\bA\bX_n = \bR_n^{(y)\top} \bV_n^\top \bA \bU_n\bR_n^{(x)}=\bR_n^{(y)\top} \bOmega_n \bR_n^{(x)}.
$$
That is
$$
\bU^{\mathrm{svd}\top}\bA \bV^\mathrm{svd} = \bSigma^\mathrm{svd},
$$
which is exactly the form of the SVD. This proves the equivalence of the SVD and the biconjugate decomposition when the Wedderburn sequence is applied to $(\bV^\mathrm{svd}, \bU^\mathrm{svd})$ as $(\bX_n, \bY_n)$.
\chapter{Cholesky Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Cholesky Decomposition}
Positive definiteness or positive semidefiniteness is one of the highest accolades to which a matrix can aspire.
In this section, we introduce decompositional approaches for these two special kinds of matrices, starting with the most famous one, the Cholesky decomposition.
\begin{theoremHigh}[Cholesky Decomposition]\label{theorem:cholesky-factor-exist}
Every positive definite matrix $\bA\in \real^{n\times n}$ can be factored as
$$
\bA = \bR^\top\bR,
$$
where $\bR \in \real^{n\times n}$ is an upper triangular matrix \textbf{with positive diagonal elements}. This decomposition is known as the \textbf{Cholesky decomposition} of $\bA$. $\bR$ is known as the \textbf{Cholesky factor} or \textbf{Cholesky triangle} of $\bA$.
\begin{itemize}
\item Alternatively, $\bA$ can be factored as $\bA=\bL\bL^\top$, where $\bL=\bR^\top$ is a lower triangular matrix \textbf{with positive diagonals}.
\item Specifically, the Cholesky decomposition is unique (Corollary~\ref{corollary:unique-cholesky-main}, p.~\pageref{corollary:unique-cholesky-main}).
\end{itemize}
\end{theoremHigh}
The Cholesky decomposition is named after a French military officer and mathematician, Andr\'{e}-Louis Cholesky (1875--1918), who developed it in his surveying work. Like the LU decomposition for solving general linear systems, the Cholesky decomposition is used primarily to solve positive definite linear systems. The development of the solution is similar to that for the LU decomposition in Section~\ref{section:lu-linear-sistem} (p.~\pageref{section:lu-linear-sistem}), and we shall not repeat the details.
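As the details mirror the LU case, we only sketch the solve in code (plain Python; the matrix and right-hand side are made up for illustration): factor $\bA=\bR^\top\bR$, then solve $\bR^\top\bz=\bb$ by forward substitution and $\bR\bx=\bz$ by back substitution:

```python
import math

def cholesky(A):
    """Upper-triangular R with A = R^T R, for symmetric PD A."""
    n = len(A)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = A[i][j] - sum(R[k][i] * R[k][j] for k in range(i))
            R[i][j] = math.sqrt(s) if i == j else s / R[i][i]
    return R

def solve_pd(A, b):
    """Solve A x = b for PD A via R^T z = b (forward), then R x = z (back)."""
    n = len(A)
    R = cholesky(A)
    z = [0.0] * n
    for i in range(n):                 # forward substitution with R^T
        z[i] = (b[i] - sum(R[k][i] * z[k] for k in range(i))) / R[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):       # back substitution with R
        x[i] = (z[i] - sum(R[i][k] * x[k] for k in range(i + 1, n))) / R[i][i]
    return x

A = [[4.0, 2.0, 2.0], [2.0, 5.0, 3.0], [2.0, 3.0, 6.0]]
b = [8.0, 10.0, 11.0]
x = solve_pd(A, b)   # here the exact solution is x = (1, 1, 1)
```

The two triangular solves cost $O(n^2)$ each, so once the factor $\bR$ is available, additional right-hand sides are cheap.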
\section{Existence of the Cholesky Decomposition via Recursive Calculation}
In this section, we will prove the existence of the Cholesky decomposition via recursive calculation. In Section~\ref{section:cholesky-by-qr-spectral} (p.~\pageref{section:cholesky-by-qr-spectral}), we will also prove the existence of the Cholesky decomposition via the QR decomposition and spectral decomposition.
Before showing the existence of the Cholesky decomposition, we need the following definitions and lemmas.
\begin{definition}[Positive Definite and Positive Semidefinite\index{Positive definite}\index{Positive semidefinite}]\label{definition:psd-pd-defini}
A matrix $\bA\in \real^{n\times n}$ is positive definite (PD) if $\bx^\top\bA\bx>0$ for all nonzero $\bx\in \real^n$.
And a matrix $\bA\in \real^{n\times n}$ is positive semidefinite (PSD) if $\bx^\top\bA\bx \geq 0$ for all $\bx\in \real^n$.
\end{definition}
The definition of positive definiteness above is a prerequisite for the Cholesky decomposition. We sketch several properties of PD matrices as follows:
\begin{tcolorbox}[title={Positive Definite Matrix Property 1 of 6}]
We will show that an equivalent characterization of the positive definiteness of a matrix $\bA$ is that $\bA$ has only positive eigenvalues, and of positive semidefiniteness that $\bA$ has only nonnegative eigenvalues. The proof is provided in Section~\ref{section:equivalent-pd-psd} (p.~\pageref{section:equivalent-pd-psd}) as a consequence of the spectral theorem.
\end{tcolorbox}
\begin{tcolorbox}[title={Positive Definite Matrix Property 2 of 6}]
\begin{lemma}[Positive Diagonals of Positive Definite Matrices]\label{lemma:positive-in-pd}
The diagonal elements of a positive definite matrix $\bA$ are all \textbf{positive}. And similarly, the diagonal elements of a positive semidefinite matrix $\bB$ are all \textbf{nonnegative}.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:positive-in-pd}]
From the definition of positive definite matrices, we have $\bx^\top\bA \bx >0$ for all nonzero $\bx$. In particular, let $\bx=\be_i$ where $\be_i$ is the $i$-th unit vector with the $i$-th entry being equal to 1 and other entries being equal to 0. Then,
$$
\be_i^\top\bA \be_i = a_{ii}>0, \qquad \forall i \in \{1, 2, \ldots, n\},
$$
where $a_{ii}$ is the $i$-th diagonal component. The proof for the second part follows similarly. This completes the proof.
\end{proof}
The complete pivoting in the Cholesky decomposition is simpler than that in the LU decomposition in the sense that the maximal value of a positive definite matrix lies on the diagonal.
\begin{tcolorbox}[title={Positive Definite Matrix Property 3 of 6}]
\begin{lemma}[Maximal Value of Positive Definite Matrices]\label{lemma:positive-maximal-diagonal}
The maximum element of a positive definite matrix lies on the diagonal. A similar argument applies to positive semidefinite matrices.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:positive-maximal-diagonal}]
Let $\be_i, \be_j$ be the $i$-th and $j$-th unit vectors, and let $a_{ij}$ denote the $(i,j)$-th entry of the positive definite matrix $\bA$. Then, it follows that
$$
(\be_i-\be_j)^\top\bA(\be_i-\be_j) = a_{ii}+a_{jj}-2a_{ij}>0.
$$
Therefore, $\max(a_{ii}, a_{jj}) > a_{ij}$ for every pair $i \neq j$, and looping over all the entries yields the result. For positive semidefinite matrices, the largest element still appears on the diagonal, with the possibility that some off-diagonal elements are equal to the largest element.
\end{proof}
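A quick numerical illustration of the lemma (Python; the PD matrix is made up):

```python
# For a positive definite matrix, the overall maximum entry lies on the diagonal.
A = [[4.0, 2.0, 1.0], [2.0, 3.0, 1.0], [1.0, 1.0, 2.0]]  # leading minors: 4, 8, 13 > 0
overall_max = max(max(row) for row in A)
diagonal_max = max(A[i][i] for i in range(len(A)))
```

Here both maxima equal $4$, consistent with Lemma~\ref{lemma:positive-maximal-diagonal}.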
\begin{tcolorbox}[title={Positive Definite Matrix Property 4 of 6}]
\begin{lemma}[Schur Complement of Positive Definite Matrices\index{Schur complement}]\label{lemma:pd-of-schur}
For any positive definite matrix $\bA\in \real^{n\times n}$, its Schur complement of $\bA_{11}$ is given by $\bS_{n-1}=\bA_{2:n,2:n}-\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top$ which is also positive definite.
\paragraph{A word on the notation} Note that the subscript $n-1$ of $\bS_{n-1}$ means it is of size $(n-1)\times (n-1)$ and it is a Schur complement of an $n\times n$ positive definite matrix. We will use this notation in the following sections.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:pd-of-schur}]
For any nonzero vector $\bv\in \real^{n-1}$, we can construct a vector $\bx\in \real^n$ by the following equation:
$$
\bx =
\begin{bmatrix}
-\frac{1}{\bA_{11}} \bA_{2:n,1}^\top \bv \\
\bv
\end{bmatrix},
$$
which is nonzero. Then
$$
\begin{aligned}
\bx^\top\bA\bx
&= \left[-\frac{1}{\bA_{11}} \bv^\top \bA_{2:n,1}\qquad \bv^\top\right]
\begin{bmatrix}
\bA_{11} & \bA_{2:n,1}^\top \\
\bA_{2:n,1} & \bA_{2:n,2:n}
\end{bmatrix}
\begin{bmatrix}
-\frac{1}{\bA_{11}} \bA_{2:n,1}^\top \bv \\
\bv
\end{bmatrix} \\
&= \left[-\frac{1}{\bA_{11}} \bv^\top \bA_{2:n,1}\qquad \bv^\top\right]
\begin{bmatrix}
0 \\
\bS_{n-1}\bv
\end{bmatrix} \\
&= \bv^\top\bS_{n-1}\bv.
\end{aligned}
$$
Since $\bA$ is positive definite, we have $\bx^\top\bA\bx = \bv^\top\bS_{n-1}\bv >0$ for all nonzero $\bv$. Thus, the Schur complement $\bS_{n-1}$ is positive definite as well.
\end{proof}
The above argument extends to PSD matrices as well: if $\bA$ is PSD with $\bA_{11}>0$, then the Schur complement $\bS_{n-1}$ is also PSD.
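A small numerical check (Python; the PD matrix is made up): form the Schur complement $\bS_{n-1}$ of $\bA_{11}$ for a concrete positive definite matrix and verify that the quadratic form $\bv^\top\bS_{n-1}\bv$ is positive on a few nonzero test vectors, mirroring the construction in the proof of Lemma~\ref{lemma:pd-of-schur}:

```python
A = [[4.0, 2.0, 1.0], [2.0, 3.0, 1.0], [1.0, 1.0, 2.0]]  # symmetric PD
a11 = A[0][0]
b = [A[1][0], A[2][0]]  # A_{2:n,1}
# Schur complement of A_{11}: S = A_{2:n,2:n} - (1/A_{11}) * b b^T
S = [[A[1 + i][1 + j] - b[i] * b[j] / a11 for j in range(2)] for i in range(2)]
# Quadratic form v^T S v stays positive for every nonzero test vector.
tests = [[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [2.0, 3.0]]
forms = [sum(v[i] * S[i][j] * v[j] for i in range(2) for j in range(2)) for v in tests]
```

For this matrix, $\bS_{n-1}$ works out to $\begin{bmatrix} 2 & 0.5 \\ 0.5 & 1.75 \end{bmatrix}$, and every test quadratic form is positive.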
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10]
\paragraph{A word on the Schur complement} In the proof of Theorem~\ref{theorem:lu-factorization-with-permutation}, we have shown that this Schur complement $\bS_{n-1}=\bA_{2:n,2:n}-\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top$ is also nonsingular if $\bA$ is nonsingular and $\bA_{11}\neq 0$. Similarly, the Schur complement of $\bA_{nn}$ in $\bA$ is $\bar{\bS}_{n-1} =\bA_{1:n-1,1:n-1} - \frac{1}{\bA_{nn}}\bA_{1:n-1,n} \bA_{1:n-1,n}^\top$, which is also positive definite if $\bA$ is positive definite. This property can help prove the fact that the leading principal minors of positive definite matrices are all positive. See Section~\ref{appendix:leading-minors-pd} for more details.
\end{mdframed}
We then prove the existence of the Cholesky decomposition using these lemmas.
\begin{proof}[\textbf{of Theorem~\ref{theorem:cholesky-factor-exist}: Existence of Cholesky Decomposition Recursively}]
For any positive definite matrix $\bA$, we can write out (since $\bA_{11}$ is positive by Lemma~\ref{lemma:positive-in-pd})
$$
\begin{aligned}
\bA &=
\begin{bmatrix}
\bA_{11} & \bA_{2:n,1}^\top \\
\bA_{2:n,1} & \bA_{2:n,2:n}
\end{bmatrix} \\
&=\begin{bmatrix}
\sqrt{\bA_{11}} &\bzero\\
\frac{1}{\sqrt{\bA_{11}}} \bA_{2:n,1} &\bI
\end{bmatrix}
\begin{bmatrix}
\sqrt{\bA_{11}} & \frac{1}{\sqrt{\bA_{11}}}\bA_{2:n,1}^\top \\
\bzero & \bA_{2:n,2:n}-\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top
\end{bmatrix}\\
&=\begin{bmatrix}
\sqrt{\bA_{11}} &\bzero\\
\frac{1}{\sqrt{\bA_{11}}} \bA_{2:n,1} &\bI
\end{bmatrix}
\begin{bmatrix}
1 & \bzero \\
\bzero & \bA_{2:n,2:n}-\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top
\end{bmatrix}
\begin{bmatrix}
\sqrt{\bA_{11}} & \frac{1}{\sqrt{\bA_{11}}}\bA_{2:n,1}^\top \\
\bzero & \bI
\end{bmatrix}\\
&=\bR_1^\top
\begin{bmatrix}
1 & \bzero \\
\bzero & \bS_{n-1}
\end{bmatrix}
\bR_1,
\end{aligned}
$$
where
$$\bR_1 =
\begin{bmatrix}
\sqrt{\bA_{11}} & \frac{1}{\sqrt{\bA_{11}}}\bA_{2:n,1}^\top \\
\bzero & \bI
\end{bmatrix}.
$$
Since we proved the Schur complement $\bS_{n-1}$ is positive definite in Lemma~\ref{lemma:pd-of-schur}, then we can factor it in the same way as
$$
\bS_{n-1}=
\hat{\bR}_2^\top
\begin{bmatrix}
1 & \bzero \\
\bzero & \bS_{n-2}
\end{bmatrix}
\hat{\bR}_2.
$$
Therefore, we have
$$
\begin{aligned}
\bA &= \bR_1^\top
\begin{bmatrix}
1 & \bzero \\
\bzero & \hat{\bR}_2^\top
\begin{bmatrix}
1 & \bzero \\
\bzero & \bS_{n-2}
\end{bmatrix}
\hat{\bR}_2
\end{bmatrix}
\bR_1\\
&=
\bR_1^\top
\begin{bmatrix}
1 &\bzero \\
\bzero &\hat{\bR}_2^\top
\end{bmatrix}
\begin{bmatrix}
1 &\bzero \\
\bzero &\begin{bmatrix}
1 & \bzero \\
\bzero & \bS_{n-2}
\end{bmatrix}
\end{bmatrix}
\begin{bmatrix}
1 &\bzero \\
\bzero &\hat{\bR}_2
\end{bmatrix}
\bR_1\\
&=
\bR_1^\top \bR_2^\top
\begin{bmatrix}
1 &\bzero \\
\bzero &\begin{bmatrix}
1 & \bzero \\
\bzero & \bS_{n-2}
\end{bmatrix}
\end{bmatrix}
\bR_2 \bR_1.
\end{aligned}
$$
The same formula can be applied recursively. This process continues down to the bottom-right corner, giving us the decomposition
$$
\begin{aligned}
\bA &= \bR_1^\top\bR_2^\top\ldots \bR_n^\top \bR_n\ldots \bR_2\bR_1\\
&= \bR^\top \bR,
\end{aligned}
$$
where $\bR_1, \bR_2, \ldots, \bR_n$ are upper triangular matrices with positive diagonal elements and $\bR=\bR_1\bR_2\ldots\bR_n$ is also an upper triangular matrix with positive diagonal elements from which the result follows.
\end{proof}
The process in the proof can also be used to calculate the Cholesky decomposition and to derive the complexity of the algorithm. In Section~\ref{section:compute-cholesky}, we will carry out the computation in a similar way but from a different point of view.
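The recursion in the proof translates directly into code. Below is a minimal sketch (plain Python; the sample matrix is made up): peel off $\sqrt{\bA_{11}}$ together with the scaled first row, recurse on the Schur complement $\bS_{n-1}$, and stack the factors into $\bR$:

```python
import math

def cholesky_recursive(A):
    """Upper-triangular R with A = R^T R for symmetric PD A, following the
    proof: factor out sqrt(A11) and the scaled first row, then recurse on
    the Schur complement S_{n-1}."""
    n = len(A)
    if n == 0:
        return []
    a11 = A[0][0]                                      # positive for PD A
    r11 = math.sqrt(a11)
    first_row = [A[0][j] / r11 for j in range(1, n)]   # (1/sqrt(A11)) * A_{1,2:n}
    S = [[A[1 + i][1 + j] - A[1 + i][0] * A[0][1 + j] / a11 for j in range(n - 1)]
         for i in range(n - 1)]                        # Schur complement S_{n-1}
    R_sub = cholesky_recursive(S)
    R = [[r11] + first_row]
    for i in range(n - 1):
        R.append([0.0] + R_sub[i])                     # pad with the leading zero column
    return R

A = [[4.0, 2.0, 2.0], [2.0, 5.0, 3.0], [2.0, 3.0, 6.0]]
R = cholesky_recursive(A)
```

For this matrix the recursion gives $\bR$ with rows $(2,1,1)$, $(0,2,1)$, $(0,0,2)$, and one can check $\bR^\top\bR = \bA$.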
\begin{lemma}[$\bR^\top\bR$ is PD]\label{lemma:r-to-pd}
For any upper triangular matrix $\bR$ with positive diagonal elements, the matrix
$
\bA = \bR^\top\bR
$
is positive definite.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:r-to-pd}]
If an upper triangular matrix $\bR$ has positive diagonals, it has full column rank, and the null space of $\bR$ is of dimension 0 by the fundamental theorem of linear algebra (discussed in Appendix~\ref{appendix:fundamental-rank-nullity}, p.~\pageref{appendix:fundamental-rank-nullity}). As a result, $\bR\bx \neq \bzero$ for any nonzero vector $\bx$. Thus $\bx^\top\bA\bx = ||\bR\bx||^2 >0$ for any nonzero vector $\bx$.
\end{proof}
The lemma above holds not only for upper triangular matrices $\bR$; it extends to any $\bR$ with linearly independent columns.
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10]
\paragraph{A word on the two claims} Combining Theorem~\ref{theorem:cholesky-factor-exist} and Lemma~\ref{lemma:r-to-pd}, we can claim that a matrix $\bA$ is positive definite if and only if $\bA$ can be factored as $\bA=\bR^\top\bR$, where $\bR$ is an upper triangular matrix with positive diagonals.
\end{mdframed}
\section{Sylvester's Criterion: Leading Principal Minors of PD Matrices}\label{appendix:leading-minors-pd}
In Lemma~\ref{lemma:pd-of-schur}, we proved for any positive definite matrix $\bA\in \real^{n\times n}$, its Schur complement of $\bA_{11}$ is $\bS_{n-1}=\bA_{2:n,2:n}-\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top$ and it is also positive definite.
This is also true for its Schur complement of $\bA_{nn}$, i.e., $\bS_{n-1}^\prime = \bA_{1:n-1,1:n-1} -\frac{1}{\bA_{nn}} \bA_{1:n-1,n}\bA_{1:n-1,n}^\top$ is also positive definite.
We then claim that all the leading principal minors (Definition~\ref{definition:leading-principle-minors}, p.~\pageref{definition:leading-principle-minors}) of a positive definite matrix $\bA \in \real^{n\times n}$ are positive. This is also known as Sylvester's criterion \citep{swamy1973sylvester, gilbert1991positive}. Recall that these positive leading principal minors imply the existence of the LU decomposition of a positive definite matrix $\bA$ by Theorem~\ref{theorem:lu-factorization-without-permutation} (p.~\pageref{theorem:lu-factorization-without-permutation}).
To show Sylvester's criterion, we need the following lemma.
\begin{tcolorbox}[title={Positive Definite Matrix Property 5 of 6}]
\begin{lemma}[Quadratic PD]\label{lemma:quadratic-pd}
Let $\bE$ be any invertible matrix. Then $\bA$ is positive definite if and only if $\bE^\top\bA\bE$ is also positive definite.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:quadratic-pd}]
We will prove by forward implication and reverse implication separately as follows.
\paragraph{Forward implication}
Suppose $\bA$ is positive definite. Then for any nonzero vector $\bx$, we have $\bx^\top \bE^\top\bA\bE \bx = \by^\top\bA\by > 0$, where $\by=\bE\bx$ is nonzero since $\bE$ is invertible.\footnote{Since the null space of $\bE$ is of dimension 0, the only solution of $\bE\bx=\bzero$ is the trivial solution $\bx=\bzero$.} This implies $\bE^\top\bA\bE$ is PD.
\paragraph{Reverse implication}
Conversely, suppose $\bE^\top\bA\bE$ is positive definite, so that $\bx^\top \bE^\top\bA\bE \bx>0$ for any nonzero $\bx$. For any nonzero $\by$, there exists a nonzero $\bx$ such that $\by =\bE\bx$ since $\bE$ is invertible, and hence $\by^\top\bA\by = \bx^\top \bE^\top\bA\bE \bx>0$. This implies $\bA$ is PD as well.
\end{proof}
We then provide the rigorous proof for Sylvester's criterion.
\begin{tcolorbox}[title={Positive Definite Matrix Property 6 of 6}]
\begin{theorem}[Sylvester's Criterion\index{Sylvester's criterion}]\label{lemma:sylvester-criterion}
The real symmetric matrix $\bA\in \real^{n\times n}$ is positive definite if and only if all the leading principal minors of $\bA$ are positive.
\end{theorem}
\end{tcolorbox}
\begin{proof}[of Theorem~\ref{lemma:sylvester-criterion}]
We will prove by forward implication and reverse implication separately as follows.
\paragraph{Forward implication: } We prove the forward implication by induction. Suppose $\bA$ is positive definite. The case $n=1$ is trivial: $\det(\bA)=a_{11}>0$, since the diagonal elements of a positive definite matrix are positive (Lemma~\ref{lemma:positive-in-pd}, p.~\pageref{lemma:positive-in-pd}).
Suppose now that the leading principal minors of every $k\times k$ positive definite matrix are positive. If we can prove the same for $(k+1)\times (k+1)$ PD matrices, the proof is complete.
For a $(k+1)\times (k+1)$ matrix with the block form $\bM=\begin{bmatrix}
\bA & \bb\\
\bb^\top & d
\end{bmatrix}$, where $\bA$ is a $k\times k$ submatrix. Then its Schur complement of $d$, $\bS_{k} = \bA - \frac{1}{d} \bb\bb^\top$ is also positive definite and its determinant is positive from the assumption. Therefore, $\det(\bM) = \det(d)\det( \bA - \frac{1}{d} \bb\bb^\top) $=
\footnote{By the fact that if matrix $\bM$ has a block formulation: $\bM=\begin{bmatrix}
\bA & \bB \\
\bC & \bD
\end{bmatrix}$, then $\det(\bM) = \det(\bD)\det(\bA-\bB\bD^{-1}\bC)$.}
$d\cdot \det( \bA - \frac{1}{d} \bb\bb^\top)>0$, which completes the proof.
\paragraph{Reverse implication:} Conversely, suppose all the leading principal minors of $\bA\in \real^{n\times n}$ are positive; in particular, all the leading principal submatrices are nonsingular. Let $a_{ij}$ denote the $(i,j)$-th entry of $\bA$; then $a_{11}>0$ by the assumption.
Subtract multiples of the first row of $\bA$ from the rows below it to zero out the entries in the first column of $\bA$ below the first diagonal $a_{11}$. That is, with the unit lower triangular elimination matrix
$$
\bE_1 =
\begin{bmatrix}
1 & 0 & \ldots &0\\
-\frac{a_{21}}{a_{11}} & 1 & \ldots &0\\
\vdots & \vdots & \ddots &\vdots\\
-\frac{a_{n1}}{a_{11}} & 0 & \ldots &1\\
\end{bmatrix},
$$
we have
$$
\bA =
\begin{bmatrix}
a_{11} & a_{12} & \ldots &a_{1n}\\
a_{21} & a_{22} & \ldots &a_{2n}\\
\vdots & \vdots & \ddots &\vdots\\
a_{n1} & a_{n2} & \ldots &a_{nn}\\
\end{bmatrix}
\stackrel{\bE_1 \bA}{\longrightarrow}
\begin{bmatrix}
a_{11} & a_{12} & \ldots &a_{1n}\\
0 & a_{22}' & \ldots &a_{2n}'\\
\vdots & \vdots & \ddots &\vdots\\
0 & a_{n2}' & \ldots &a_{nn}'\\
\end{bmatrix},
\qquad
a_{ij}' = a_{ij} - \frac{a_{i1}}{a_{11}}a_{1j}.
$$
This operation preserves the values of the leading principal minors of $\bA$: since $\bE_1$ is lower triangular, the leading $k\times k$ submatrix of $\bE_1\bA$ is the product of the leading $k\times k$ submatrices of $\bE_1$ and $\bA$, and the former has determinant 1.
Now subtract multiples of the first column of $\bE_1\bA$ from the other columns of $\bE_1\bA$ to zero out the entries in the first row of $\bE_1\bA$ to the right of the first column. Since $\bA$ is symmetric, we can multiply on the right by $\bE_1^\top$ to get what we want. Because the first column of $\bE_1\bA$ below the diagonal is already zero, this second step changes only the first row:
$$
\bE_1\bA =
\begin{bmatrix}
a_{11} & a_{12} & \ldots &a_{1n}\\
0 & a_{22}' & \ldots &a_{2n}'\\
\vdots & \vdots & \ddots &\vdots\\
0 & a_{n2}' & \ldots &a_{nn}'\\
\end{bmatrix}
\stackrel{\bE_1 \bA\bE_1^\top}{\longrightarrow}
\begin{bmatrix}
a_{11} & 0 & \ldots &0\\
0 & a_{22}' & \ldots &a_{2n}'\\
\vdots & \vdots & \ddots &\vdots\\
0 & a_{n2}' & \ldots &a_{nn}'\\
\end{bmatrix}.
$$
By the same argument, this column operation also preserves the leading principal minors: the leading principal minors of $\bE_1 \bA\bE_1^\top$ are exactly the same as those of $\bA$.
Note that $\bE_1 \bA\bE_1^\top$ is again symmetric, and its next pivot satisfies $a_{22}' = \Delta_2/\Delta_1 > 0$, where $\Delta_k$ denotes the $k$-th leading principal minor of $\bA$, so the elimination never breaks down.
Continuing this process on the trailing blocks, we transform $\bA$ into a diagonal matrix $\bD = \bE_n \ldots \bE_1 \bA\bE_1^\top\ldots\bE_n^\top$. Since the leading principal minors are preserved at every step, the diagonal values $d_1, d_2, \ldots, d_n$ of $\bD$ satisfy $d_1 d_2\cdots d_k = \Delta_k$, i.e., $d_k = \Delta_k/\Delta_{k-1}>0$ (with $\Delta_0 = 1$), so $\bD$ is positive definite. Let $\bE = \bE_n \ldots \bE_1$, which is an invertible matrix. Then $\bE \bA \bE^\top=\bD$ is PD, which implies $\bA$ is PD as well from Lemma~\ref{lemma:quadratic-pd}.
\end{proof}
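Sylvester's criterion is easy to check numerically. The sketch below (NumPy is assumed; the test matrices are arbitrary examples of mine) verifies both directions on small cases:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)  # positive definite by construction

# Forward direction: all leading principal minors of a PD matrix are positive.
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 6)]
assert all(d > 0 for d in minors)

# A symmetric matrix that is not PD fails the criterion.
B = np.diag([1.0, -1.0, 2.0])            # eigenvalue -1 < 0, so not PD
minors_B = [np.linalg.det(B[:k, :k]) for k in range(1, 4)]
assert not all(d > 0 for d in minors_B)  # Delta_2 = -1 is negative
```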
\section{Existence of the Cholesky Decomposition via the LU Decomposition without Permutation}
By Theorem~\ref{lemma:sylvester-criterion} on Sylvester's criterion and the existence of the LU decomposition without permutation in Theorem~\ref{theorem:lu-factorization-without-permutation} (p.~\pageref{theorem:lu-factorization-without-permutation}), there is a unique LU decomposition for a positive definite matrix, $\bA=\bL\bU_0$, where $\bL$ is a unit lower triangular matrix and $\bU_0$ is an upper triangular matrix. Recall that \textit{the signs of the pivots of a symmetric matrix are the same as the signs of the eigenvalues} \citep{strang1993introduction}:
$$
\text{number of positive pivots = number of positive eigenvalues. \index{Pivot} }
$$
The decomposition $\bA = \bL\bU_0$ has the following form:
$$
\begin{aligned}
\bA = \bL\bU_0 &=
\begin{bmatrix}
1 & 0 & \ldots & 0 \\
l_{21} & 1 & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
l_{n1} & l_{n2} & \ldots & 1
\end{bmatrix}
\begin{bmatrix}
u_{11} & u_{12} & \ldots & u_{1n} \\
0 & u_{22} & \ldots & u_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & u_{nn}
\end{bmatrix}.\\
\end{aligned}
$$
This implies that the diagonals of $\bU_0$ are the pivots of $\bA$. Moreover, all the eigenvalues of a PD matrix are positive (see Lemma~\ref{lemma:eigens-of-PD-psd}, p.~\pageref{lemma:eigens-of-PD-psd}, which is a consequence of the spectral decomposition). Thus the diagonals of $\bU_0$ are positive.
Factoring the diagonal of $\bU_0$ out into a diagonal matrix $\bD$, we can rewrite $\bU_0=\bD\bU$ as shown in the following equation:
$$
\begin{aligned}
\bA = \bL\bU_0 =
\begin{bmatrix}
1 & 0 & \ldots & 0 \\
l_{21} & 1 & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
l_{n1} & l_{n2} & \ldots & 1
\end{bmatrix}
\begin{bmatrix}
u_{11} & 0 & \ldots & 0 \\
0 & u_{22} & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & u_{nn}
\end{bmatrix}
\begin{bmatrix}
1 & u_{12}/u_{11} & \ldots & u_{1n}/u_{11} \\
0 & 1 & \ldots & u_{2n}/u_{22}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & 1
\end{bmatrix}=\bL\bD\bU,
\end{aligned}
$$
where $\bU$ is a \textit{unit} upper triangular matrix.
By the uniqueness of the LU decomposition without permutation in Corollary~\ref{corollary:unique-lu-without-permutation} (p.~\pageref{corollary:unique-lu-without-permutation}) and the symmetry of $\bA$, it follows that $\bU = \bL^\top$, and $\bA = \bL\bD\bL^\top$. Since the diagonals of $\bD$ are positive, we can set $\bR = \bD^{1/2}\bL^\top$ where $\bD^{1/2}=\diag(\sqrt{u_{11}}, \sqrt{u_{22}}, \ldots, \sqrt{u_{nn}})$ such that $\bA = \bR^\top\bR$ is the Cholesky decomposition of $\bA$, and $\bR$ is upper triangular with positive diagonals.
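The $\bA=\bL\bD\bL^\top$ route above can be checked numerically. Below is a minimal sketch (the helper name \texttt{ldl} and the random test matrix are my own; NumPy is assumed): it computes $\bL$ and $\bD$ by the standard elimination recurrences and then forms $\bR=\bD^{1/2}\bL^\top$.

```python
import numpy as np

def ldl(A):
    """LDL^T factorization of a symmetric positive definite matrix (no pivoting)."""
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        # d_j = a_jj - sum_{k<j} L_jk^2 d_k
        d[j] = A[j, j] - L[j, :j] ** 2 @ d[:j]
        for i in range(j + 1, n):
            # L_ij = (a_ij - sum_{k<j} L_ik L_jk d_k) / d_j
            L[i, j] = (A[i, j] - L[i, :j] * L[j, :j] @ d[:j]) / d[j]
    return L, d

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M.T @ M + np.eye(4)  # positive definite test matrix

L, d = ldl(A)
R = np.sqrt(d)[:, None] * L.T          # R = D^{1/2} L^T, upper triangular
assert np.allclose(R.T @ R, A)          # A = R^T R, the Cholesky decomposition
assert np.allclose(R, np.linalg.cholesky(A).T)  # agrees with NumPy (uniqueness)
```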
\subsection{Diagonal Values of the Upper Triangular Matrix}\label{section:cholesky-diagonals}
Suppose $\bA$ is a PD matrix, take $\bA$ as a block matrix $\bA = \begin{bmatrix}
\bA_{11} & \bA_{12} \\
\bA_{21} & \bA_{22}
\end{bmatrix}$ where $\bA_{11}\in \real^{k\times k}$, and its block LU decomposition is given by
$$
\begin{aligned}
\bA &= \begin{bmatrix}
\bA_{11} & \bA_{12} \\
\bA_{21} & \bA_{22}
\end{bmatrix}
=\bL\bU_0=
\begin{bmatrix}
\bL_{11} & \bzero \\
\bL_{21} & \bL_{22}
\end{bmatrix}
\begin{bmatrix}
\bU_{11} & \bU_{12} \\
\bzero & \bU_{22}
\end{bmatrix} \\
&=\begin{bmatrix}
\bL_{11}\bU_{11} & \bL_{11}\bU_{12} \\
\bL_{21}\bU_{11} & \bL_{21}\bU_{12}+\bL_{22}\bU_{22}
\end{bmatrix}.
\end{aligned}
$$
Then the leading principal minor (Definition~\ref{definition:leading-principle-minors}, p.~\pageref{definition:leading-principle-minors}), $\Delta_k=\det(\bA_{1:k,1:k} ) = \det(\bA_{11})$ is given by
$$
\Delta_k = \det(\bA_{11}) = \det(\bL_{11}\bU_{11} ) = \det(\bL_{11} )\det(\bU_{11}).
$$
We notice that $\bL_{11}$ is a unit lower triangular matrix and $\bU_{11}$ is an upper triangular matrix. By the fact that the determinant of a lower triangular matrix (or an upper triangular matrix) is the product of the diagonal entries, we obtain
$$
\Delta_k = \det(\bU_{11})= u_{11} u_{22}\ldots u_{kk},
$$
i.e., the $k$-th leading principal minor of $\bA$ is the determinant of the leading $k\times k$ submatrix of $\bU_0$. That is also the product of the first $k$ diagonals of $\bD$ ($\bD$ is the matrix from $\bA = \bL\bD\bL^\top$). Let $\bD = \diag(d_1, d_2, \ldots, d_n)$; therefore, we have
$$
\Delta_k = d_1 d_2\ldots d_k = \Delta_{k-1}d_k.
$$
This gives us an alternative form of $\bD$, whose diagonal values are the \textbf{squared} diagonal values of $\bR$ (where $\bR$ is the Cholesky factor from $\bA=\bR^\top\bR$):
$$
\bD = \diag\left(\Delta_1, \frac{\Delta_2}{\Delta_1}, \ldots, \frac{\Delta_n}{\Delta_{n-1}}\right),
$$
where $\Delta_k$ is the $k$-th leading principal minor of $\bA$, for all $k\in \{1,2,\ldots, n\}$. That is, the diagonal values of $\bR$ are given by
$$
\diag\left(\sqrt{\Delta_1}, \sqrt{\frac{\Delta_2}{\Delta_1}}, \ldots, \sqrt{\frac{\Delta_n}{\Delta_{n-1}}}\right).
$$
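The relation $d_k = \Delta_k/\Delta_{k-1}$ and the resulting diagonal of $\bR$ can be verified numerically (a NumPy sketch; the test matrix is an arbitrary example of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)  # positive definite test matrix

R = np.linalg.cholesky(A).T  # upper triangular factor, A = R^T R
n = A.shape[0]

# Leading principal minors Delta_1, ..., Delta_n (with Delta_0 = 1).
Delta = np.array([np.linalg.det(A[:k, :k]) for k in range(1, n + 1)])
ratios = Delta / np.concatenate(([1.0], Delta[:-1]))

assert np.allclose(np.diag(R) ** 2, ratios)   # d_k = Delta_k / Delta_{k-1}
assert np.allclose(np.diag(R), np.sqrt(ratios))
```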
\subsection{Block Cholesky Decomposition}
Following from the last section, suppose $\bA$ is a PD matrix, take $\bA$ as a block matrix $\bA = \begin{bmatrix}
\bA_k & \bA_{12} \\
\bA_{21} & \bA_{22}
\end{bmatrix}$ where $\bA_k\in \real^{k\times k}$, and its block LU decomposition is given by
$$
\begin{aligned}
\bA &= \begin{bmatrix}
\bA_k & \bA_{12} \\
\bA_{21} & \bA_{22}
\end{bmatrix}
=\bL\bU_0=
\begin{bmatrix}
\bL_k & \bzero \\
\bL_{21} & \bL_{22}
\end{bmatrix}
\begin{bmatrix}
\bU_k & \bU_{12} \\
\bzero & \bU_{22}
\end{bmatrix} \\
&=\begin{bmatrix}
\bL_k\bU_k & \bL_k\bU_{12} \\
\bL_{21}\bU_k & \bL_{21}\bU_{12}+\bL_{22}\bU_{22}
\end{bmatrix},
\end{aligned}
$$
where the $k$-th leading principal submatrix $\bA_k$ of $\bA$ has its own LU decomposition $\bA_k=\bL_k\bU_k$. It follows that the Cholesky decomposition of an $n\times n$ matrix contains $n-1$ other Cholesky decompositions within it: $\bA_k = \bR_k^\top\bR_k$, for all $k\in \{1,2,\ldots, n-1\}$. This is because any leading principal submatrix $\bA_k$ of the positive definite matrix $\bA$ is also positive definite. To see this, for a positive definite matrix $\bA_{k+1} \in \real^{(k+1)\times(k+1)}$, append a zero element to any nonzero vector $\bx_k\in\real^k$ to form $\bx_{k+1} = \begin{bmatrix}
\bx_k\\
0
\end{bmatrix}$. It follows that
$$
\bx_k^\top\bA_k\bx_k = \bx_{k+1}^\top\bA_{k+1}\bx_{k+1} >0,
$$
and $\bA_k$ is positive definite. Starting from $\bA\in \real^{n\times n}$, we recursively obtain that $\bA_{n-1}, \bA_{n-2}, \ldots$ are all PD, and each of them admits a Cholesky decomposition.
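The nesting of Cholesky factors can be observed numerically. A NumPy sketch (note that \texttt{numpy.linalg.cholesky} returns the lower triangular factor $\bL=\bR^\top$; the test matrix is an arbitrary example of mine):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))
A = M.T @ M + np.eye(6)  # positive definite test matrix

L = np.linalg.cholesky(A)  # lower triangular, A = L L^T

for k in range(1, 6):
    Ak = A[:k, :k]
    # Leading principal submatrices of a PD matrix are PD ...
    assert np.all(np.linalg.eigvalsh(Ak) > 0)
    # ... and their Cholesky factors are nested inside L.
    assert np.allclose(np.linalg.cholesky(Ak), L[:k, :k])
```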
\section{Existence of the Cholesky Decomposition via Induction}
In the last section, we proved the existence of the Cholesky decomposition via the LU decomposition without permutation. Following the proof of the LU decomposition in Section~\ref{section:exist-lu-without-perm}, we realize that the existence of the Cholesky decomposition also follows directly by induction.
\begin{proof}[\textbf{of Theorem~\ref{theorem:cholesky-factor-exist}: Existence of Cholesky Decomposition by Induction\index{Induction}}]
We will prove by induction that every $n\times n$ positive definite matrix $\bA$ has a decomposition $\bA=\bR^\top\bR$. The $1\times 1$ case is trivial: set $R=\sqrt{A}$, so that $A=R^2$.
Suppose any $k\times k$ PD matrix $\bA_k$ has a Cholesky decomposition. If we can prove that any $(k+1)\times(k+1)$ PD matrix $\bA_{k+1}$ can also be factored in this way, then we complete the proof.
For any $(k+1)\times(k+1)$ PD matrix $\bA_{k+1}$, write out $\bA_{k+1}$ as
$$
\bA_{k+1} = \begin{bmatrix}
\bA_k & \bb \\
\bb^\top & d
\end{bmatrix}.
$$
We note that $\bA_k$ is PD, so by the inductive hypothesis it admits a Cholesky decomposition $\bA_k = \bR_k^\top\bR_k$. We can construct the upper triangular matrix
$$
\bR_{k+1}=\begin{bmatrix}
\bR_k & \br\\
0 & s
\end{bmatrix},
$$
so that
$$
\bR_{k+1}^\top\bR_{k+1} =
\begin{bmatrix}
\bR_k^\top\bR_k & \bR_k^\top \br\\
\br^\top \bR_k & \br^\top\br+s^2
\end{bmatrix}.
$$
Therefore, if we can show that $\bR_{k+1}^\top \bR_{k+1} = \bA_{k+1}$ with $s>0$, then $\bR_{k+1}$ is the Cholesky factor of $\bA_{k+1}$ and we complete the proof. That is, we need to find $\br$ and $s>0$ such that
$$
\begin{aligned}
\bb &= \bR_k^\top \br, \\
d &= \br^\top\br+s^2.
\end{aligned}
$$
Since $\bR_k$ is nonsingular, we obtain the unique solution
$$
\begin{aligned}
\br &= \bR_k^{-\top}\bb, \\
s &= \sqrt{d - \br^\top\br} = \sqrt{d - \bb^\top\bA_k^{-1}\bb},
\end{aligned}
$$
where we take the nonnegative square root for $s$. However, we need to further prove that $s$ is not only nonnegative but positive. Since $\bA_{k+1}$ and $\bA_k$ are PD, Sylvester's criterion implies $\det(\bA_{k+1})>0$ and $\det(\bA_k)>0$. By the fact that if matrix $\bM$ has a block formulation $\bM=\begin{bmatrix}
\bA & \bB \\
\bC & \bD
\end{bmatrix}$ with $\bA$ invertible, then $\det(\bM) = \det(\bA)\det(\bD-\bC\bA^{-1}\bB)$, we have
$$
\det(\bA_{k+1}) = \det(\bA_k)\det(d- \bb^\top\bA_k^{-1}\bb) = \det(\bA_k)(d- \bb^\top\bA_k^{-1}\bb)>0.
$$
Since $ \det(\bA_k)>0$, we then obtain that $(d- \bb^\top\bA_k^{-1}\bb)>0$ and this implies $s>0$.
We complete the proof.
\end{proof}
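The inductive step above is itself an algorithm: given $\bR_k$, the factor of $\bA_{k+1}$ costs one triangular solve for $\br$ and one square root for $s$. A minimal NumPy sketch (the function name \texttt{bordered\_cholesky} is my own):

```python
import numpy as np

def bordered_cholesky(A):
    """Build R with A = R^T R by growing the factor one row/column at a time."""
    n = A.shape[0]
    R = np.zeros((n, n))
    R[0, 0] = np.sqrt(A[0, 0])
    for k in range(1, n):
        b, d = A[:k, k], A[k, k]
        r = np.linalg.solve(R[:k, :k].T, b)  # solve R_k^T r = b (triangular system)
        s2 = d - r @ r                       # s^2 = d - b^T A_k^{-1} b > 0 for PD A
        R[:k, k], R[k, k] = r, np.sqrt(s2)
    return R

rng = np.random.default_rng(5)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)  # positive definite test matrix
R = bordered_cholesky(A)
assert np.allclose(R.T @ R, A)
```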
%
\section{Uniqueness of the Cholesky Decomposition}
\begin{corollary}[Uniqueness of Cholesky Decomposition\index{Uniqueness}]\label{corollary:unique-cholesky-main}
The Cholesky decomposition $\bA=\bR^\top\bR$ for any positive definite matrix $\bA\in \real^{n\times n}$ is unique.
\end{corollary}
The uniqueness of the Cholesky decomposition is an immediate consequence of the uniqueness of the LU decomposition without permutation. Alternatively, a direct rigorous proof is provided as follows.
\begin{proof}[of Corollary~\ref{corollary:unique-cholesky-main}]
Suppose the Cholesky decomposition is not unique; then we can find two decompositions such that $\bA=\bR_1^\top\bR_1 = \bR_2^\top\bR_2$, which implies
$$
\bR_1\bR_2^{-1}= \bR_1^{-\top} \bR_2^\top.
$$
From the fact that the inverse of an upper triangular matrix is also upper triangular, and the product of two upper triangular matrices is also upper triangular,\footnote{The same holds for lower triangular matrices: the inverse of a lower triangular matrix is lower triangular, and the product of two lower triangular matrices is lower triangular.} we realize that the left-hand side of the above equation is an upper triangular matrix while the right-hand side is a lower triangular matrix. This implies $\bR_1\bR_2^{-1}= \bR_1^{-\top} \bR_2^\top$ is a diagonal matrix, and $\bR_1^{-\top} \bR_2^\top= (\bR_1^{-\top} \bR_2^\top)^\top = \bR_2\bR_1^{-1}$.
Let $\bLambda = \bR_1\bR_2^{-1}= \bR_2\bR_1^{-1}$ be the diagonal matrix. We notice that the diagonal value of $\bLambda$ is the product of the corresponding diagonal values of $\bR_1$ and $\bR_2^{-1}$ (or $\bR_2$ and $\bR_1^{-1}$). That is, for
$$
\bR_1=\begin{bmatrix}
r_{11} & r_{12} & \ldots & r_{1n} \\
0 & r_{22} & \ldots & r_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & r_{nn}
\end{bmatrix},
\qquad
\bR_2=
\begin{bmatrix}
s_{11} & s_{12} & \ldots & s_{1n} \\
0 & s_{22} & \ldots & s_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & s_{nn}
\end{bmatrix},
$$
we have,
$$
\begin{aligned}
\bR_1\bR_2^{-1}=
\begin{bmatrix}
\frac{r_{11}}{s_{11}} & 0 & \ldots & 0 \\
0 & \frac{r_{22}}{s_{22}} & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & \frac{r_{nn}}{s_{nn}}
\end{bmatrix}
=
\begin{bmatrix}
\frac{s_{11}}{r_{11}} & 0 & \ldots & 0 \\
0 & \frac{s_{22}}{r_{22}} & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & \frac{s_{nn}}{r_{nn}}
\end{bmatrix}
=\bR_2\bR_1^{-1}.
\end{aligned}
$$
Since both $\bR_1$ and $\bR_2$ have positive diagonals, this implies $r_{11}=s_{11}, r_{22}=s_{22}, \ldots, r_{nn}=s_{nn}$. And $\bLambda = \bR_1\bR_2^{-1}= \bR_2\bR_1^{-1} =\bI$.
That is, $\bR_1=\bR_2$ and this leads to a contradiction. The Cholesky decomposition is thus unique.
\end{proof}
\section{Computing the Cholesky Decomposition Recursively}\label{section:compute-cholesky}
Similar to computing the LU decomposition,
to compute the Cholesky decomposition, we write out the equality $\bA=\bR^\top\bR$:
$$
\begin{aligned}
\bA=\left[
\begin{matrix}
\bA_{11} & \bA_{1,2:n} \\
\bA_{2:n,1} & \bA_{2:n,2:n}
\end{matrix}
\right]
&=\left[
\begin{matrix}
\bR_{11} & 0 \\
\bR_{1,2:n}^\top & \bR_{2:n,2:n}^\top
\end{matrix}
\right]
\left[
\begin{matrix}
\bR_{11} & \bR_{1,2:n} \\
0 & \bR_{2:n,2:n}
\end{matrix}
\right]\\
&=
\left[
\begin{matrix}
\bR_{11}^2 & \bR_{11}\bR_{1,2:n} \\
\bR_{11}\bR_{1,2:n}^\top & \bR_{1,2:n}^\top\bR_{1,2:n} + \bR_{2:n,2:n}^\top\bR_{2:n,2:n}
\end{matrix}
\right],
\end{aligned}
$$
which allows us to determine the first row of $\bR$ by
$$
\bR_{11} = \sqrt{\bA_{11}}, \qquad \bR_{1,2:n} = \frac{1}{\bR_{11}}\bA_{1,2:n}.
$$
Let $\bA_2=\bR_{2:n,2:n}^\top\bR_{2:n,2:n}$. The equality $\bA_{2:n,2:n} = \bR_{1,2:n}^\top\bR_{1,2:n} + \bR_{2:n,2:n}^\top\bR_{2:n,2:n}$ indicates
$$
\begin{aligned}
\bA_2=\bR_{2:n,2:n}^\top\bR_{2:n,2:n} &= \bA_{2:n,2:n} - \bR_{1,2:n}^\top\bR_{1,2:n} \\
&= \bA_{2:n,2:n} - \frac{1}{\bA_{11}} \bA_{1,2:n}^\top\bA_{1,2:n} \\
&= \bA_{2:n,2:n} - \frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{1,2:n}, \qquad &(\bA \mbox{ is symmetric})
\end{aligned}
$$
where $\bA_2$ is the Schur complement of $\bA_{11}$ in $\bA$ of size $(n-1)\times (n-1)$. And to get $\bR_{2:n,2:n}$ we must compute the Cholesky decomposition of matrix $\bA_2$ of shape $(n-1)\times (n-1)$. Again, this is a recursive algorithm and is formulated in Algorithm~\ref{alg:compute-choklesky}.
\begin{algorithm}[H]
\caption{Cholesky Decomposition via Recursive Algorithm}
\label{alg:compute-choklesky}
\begin{algorithmic}[1]
\Require
Positive definite matrix $\bA$ with size $n\times n$;
\State Calculate first row of $\bR$ by $\bR_{11} = \sqrt{\bA_{11}}, \bR_{1,2:n} = \frac{1}{\bR_{11}}\bA_{1,2:n}$; \Comment{$n$ flops}
\State Compute the Cholesky decomposition of the $(n-1)\times (n-1)$ matrix
$$
\bA_2=\bR_{2:n,2:n}^\top\bR_{2:n,2:n}=\bA_{2:n,2:n} - \frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{1,2:n};
$$
\Comment{$n^2-n$ flops}
\end{algorithmic}
\end{algorithm}
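A minimal NumPy sketch of Algorithm~\ref{alg:compute-choklesky} (the function name \texttt{cholesky\_recursive} and the test matrix are my own); it mirrors the two steps of the algorithm:

```python
import numpy as np

def cholesky_recursive(A):
    """Peel off the first row of R, then recurse on the Schur complement."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    R = np.zeros((n, n))
    R[0, 0] = np.sqrt(A[0, 0])             # step 1: R_11 = sqrt(A_11)
    if n == 1:
        return R
    R[0, 1:] = A[0, 1:] / R[0, 0]          # step 1: rest of the first row
    # Step 2: Schur complement A_2 = A_{2:n,2:n} - A_{2:n,1} A_{1,2:n} / A_11.
    A2 = A[1:, 1:] - np.outer(A[1:, 0], A[0, 1:]) / A[0, 0]
    R[1:, 1:] = cholesky_recursive(A2)
    return R

rng = np.random.default_rng(6)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)  # positive definite test matrix
R = cholesky_recursive(A)
assert np.allclose(R.T @ R, A)
assert np.allclose(R, np.linalg.cholesky(A).T)  # agrees with NumPy (uniqueness)
```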
\begin{theorem}[Algorithm Complexity: Cholesky Recursively]\label{theorem:cholesky-complexity}
Algorithm~\ref{alg:compute-choklesky} requires $\sim(1/3)n^3$ flops to compute the Cholesky decomposition of an $n\times n$ positive definite matrix.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:cholesky-complexity}]
Step 1 takes 1 square root and $(n-1)$ divisions which take $n$ flops totally.
For step 2, note that $\frac{1}{\bA_{11}} \bA_{2:n,1}\bA_{1,2:n} = (\frac{1}{\sqrt{\bA_{11}}} \bA_{2:n,1})(\frac{1}{\sqrt{\bA_{11}}}\bA_{1,2:n}) = \bR_{1,2:n}^\top \bR_{1,2:n}$. If we calculated the complexity directly from the equation in step 2, we would get the same complexity as the LU decomposition. But the symmetry of $\bR_{1,2:n}^\top \bR_{1,2:n}$ can be exploited: the cost of forming $\bR_{1,2:n}^\top \bR_{1,2:n}$ reduces from $(n-1)\times(n-1)$ multiplications to $1+2+\ldots+(n-1)=\frac{n^2-n}{2}$ multiplications, almost half of the original complexity. The cost of the matrix subtraction reduces from $(n-1)\times(n-1)$ to $\frac{n^2-n}{2}$ flops as well. So the total cost for step 2 is $n^2-n$ flops.
Let $f(n) = n^2-n + n = n^2$; then the total complexity is
$$
\mathrm{cost} = f(n)+f(n-1)+\ldots +f(1).
$$
Simple calculations show that the total complexity over all the recursive steps is $\frac{2n^3+3n^2+n}{6}$ flops, which is $\sim(1/3)n^3$ flops if we keep only the leading term.
\end{proof}
An important use of the Cholesky decomposition computation above is for testing whether a symmetric matrix is positive definite. The test is simply to run the algorithm above and declare the matrix positive definite if the algorithm completes without encountering any negative or zero pivots (in step 1 above) and not positive definite otherwise.
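This test can be sketched as follows (a NumPy sketch; \texttt{numpy.linalg.cholesky} raises \texttt{LinAlgError} exactly when the factorization encounters a nonpositive pivot):

```python
import numpy as np

def is_positive_definite(A):
    """Test a symmetric matrix for positive definiteness by attempting Cholesky."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:  # a nonpositive pivot was encountered
        return False

assert is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]]))      # eigenvalues 1, 3
assert not is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]]))  # eigenvalue -1
```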
To conclude this section, we provide the full pseudo code for Algorithm~\ref{alg:compute-choklesky} in Algorithm~\ref{alg:compute-choklesky11} (compare the two algorithms).
\begin{algorithm}[H]
\caption{Cholesky Decomposition via Recursive Algorithm: Full Pseudo Code}
\label{alg:compute-choklesky11}
\begin{algorithmic}[1]
\Require
Positive definite matrix $\bA$ with size $n\times n$;
\For{$k=1$ to $n$} \Comment{compute the $k$th row of $\bR$}
\State $\bR_{kk} = \sqrt{\bA_{kk}}$; \Comment{first element of $k$-th row, 1 flop}
\State $\bR_{k,k+1:n} = \frac{1}{\bR_{kk}} \bA_{k,k+1:n}$; \Comment{the rest elements of $k$-th row, $n-k$ flops}
\State $\bA_{k+1:n,k+1:n} = \bA_{k+1:n,k+1:n} - \bR_{k,k+1:n}^\top\bR_{k,k+1:n}$; \Comment{$2(1+2+\ldots+(n-k))$ flops}
\EndFor
\end{algorithmic}
\end{algorithm}
\section{Computing the Cholesky Decomposition Element-Wise}
It is also common to compute the Cholesky decomposition via element-level equations, which come from directly solving the matrix equation $\bA=\bR^\top\bR$. We notice that the entry $(i,j)$ of $\bA$ is $\bA_{ij} = \bR_{:,i}^\top \bR_{:,j} = \sum_{k=1}^{i} \bR_{ki}\bR_{kj}$ if $i\leq j$. This further implies,
$$
\begin{aligned}
\bA_{ij} &= \bR_{:,i}^\top \bR_{:,j} = \sum_{k=1}^{i} \bR_{ki}\bR_{kj} \\
&= \sum_{k=1}^{i-1} \bR_{ki}\bR_{kj} + \bR_{ii}\bR_{ij},
\end{aligned}
$$
and
$$
\bR_{ij} = (\bA_{ij} - \sum_{k=1}^{i-1} \bR_{ki}\bR_{kj})/\bR_{ii},
$$
if $i<j$.
If we equate elements of $\bR$ by taking a column at a time and start with $\bR_{11} = \sqrt{\bA_{11}}$, the element-level algorithm is formulated in Algorithm~\ref{alg:compute-choklesky-element-level}.
\begin{algorithm}[H]
\caption{Cholesky Decomposition Element-Wise}
\label{alg:compute-choklesky-element-level}
\begin{algorithmic}[1]
\Require
Positive definite matrix $\bA$ with size $n\times n$;
\State Calculate the first element of $\bR$ by $\bR_{11} = \sqrt{\bA_{11}}$;
\For{$j=1$ to $n$}
\For{$i=1$ to $j-1$}
\State $\bR_{ij} = (\bA_{ij} - \sum_{k=1}^{i-1} \bR_{ki}\bR_{kj})/\bR_{ii}$, since $i<j$;
\EndFor
\State $\bR_{jj} = \sqrt{\bA_{jj}- \sum_{k=1}^{j-1}\bR_{kj}^2}$;
\EndFor
\end{algorithmic}
\end{algorithm}
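A minimal NumPy sketch of Algorithm~\ref{alg:compute-choklesky-element-level} (the function name \texttt{cholesky\_elementwise} and the test matrix are my own):

```python
import numpy as np

def cholesky_elementwise(A):
    """Fill R one column at a time from the element-level equations of A = R^T R."""
    n = A.shape[0]
    R = np.zeros((n, n))
    for j in range(n):
        for i in range(j):
            # R_ij = (A_ij - sum_{k<i} R_ki R_kj) / R_ii   for i < j
            R[i, j] = (A[i, j] - R[:i, i] @ R[:i, j]) / R[i, i]
        # R_jj = sqrt(A_jj - sum_{k<j} R_kj^2)
        R[j, j] = np.sqrt(A[j, j] - R[:j, j] @ R[:j, j])
    return R

rng = np.random.default_rng(7)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)  # positive definite test matrix
R = cholesky_elementwise(A)
assert np.allclose(R.T @ R, A)
```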
\begin{theorem}[Algorithm Complexity: Cholesky Element-wise]\label{theorem:cholesky-complexity-element}
Algorithm~\ref{alg:compute-choklesky-element-level} requires $\sim(1/3)n^3$ flops to compute the Cholesky decomposition of an $n\times n$ positive definite matrix.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:cholesky-complexity-element}]
For step 4 in Algorithm~\ref{alg:compute-choklesky-element-level}: for each $j$ and $i$, it involves $(i-1)$ multiplications, $(i-2)$ additions, $1$ subtraction, and $1$ division, i.e., $2i-1$ flops. Let $f(k)=2k-1$; the total flops required by step 4 for each loop $j$ is given by
$$
f(1)+f(2)+\ldots +f(j-1) = j^2-2j+1 \,\,\,\,\mathrm{flops}.
$$
Further, for each loop $j$, step 6 requires $j-1$ multiplications, $j-2$ additions, $1$ subtraction, and $1$ square root. That is, step 6 involves $2j-1$ flops for each loop $j$. Combined with the flops needed in step 4, each loop $j$ needs $j^2$ flops in total. Let $g(k)=k^2$; the total complexity is thus given by
$$
g(1)+g(2)+\ldots +g(n) = \frac{2n^3+3n^2+n}{6} \,\,\,\,\mathrm{flops},
$$
which is $(1/3)n^3$ flops if we keep only the leading term.
\end{proof}
The complexity of Algorithm~\ref{alg:compute-choklesky-element-level} is the same as that of Algorithm~\ref{alg:compute-choklesky}. And indeed, the ideas behind them are similar as well.
\section{More Properties of Positive Definite Matrices}
\begin{lemma}[PD Properties\index{Positive definite}]\label{lemma:pd-more-properties}
For positive definite matrix $\bA\in \real^{n\times n}$, we have the following properties:
\begin{enumerate}
\item All principal minors of $\bA$ are positive (see Definition~\ref{definition:principle-minors}, p.~\pageref{definition:principle-minors}; not necessarily only the leading principal minors);
\item Suppose the diagonal values of $\bA$ are $a_{ii}$, for all $i\in \{1,2,\ldots, n\}$; then $\det(\bA) \leq \prod_{i=1}^{n}a_{ii}$, with equality when $\bA$ is a diagonal matrix.
\end{enumerate}
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:pd-more-properties}]
For 1). Similar to the permutation matrix (Definition~\ref{definition:permutation-matrix}, p.~\pageref{definition:permutation-matrix}), we can define a selection matrix: a diagonal matrix $\bP$ whose $i$-th diagonal is 1 if index $i$ is selected and 0 otherwise, so that $\bP\bA\bP^\top=\bP\bA\bP$ agrees with $\bA$ on the selected rows and columns and is zero elsewhere. For any vector $\bx\in \real^n$ whose selected $k$ entries are not all zero (such that $\bP^\top\bx$ is nonzero), we have
$$
\bx^\top \bP\bA\bP^\top \bx = (\bP^\top\bx)^\top \bA\, (\bP^\top\bx) >0.
$$
Since the $(n-k)$ unselected rows of $\bP\bA\bP^\top$ and the corresponding $(n-k)$ columns are zero, they contribute nothing to the quadratic form above, and we can simply remove them. This shows that the corresponding $k\times k$ principal submatrix $\bA_k \in \real^{k\times k}$ of $\bA$ satisfies
$$
\bx_k^\top\bA_k \bx_k >0
$$
for every nonzero $\bx_k \in \real^{k}$, obtained from $\bx$ by keeping the selected entries (since $\bx$ is an arbitrary vector in $\real^n$, $\bx_k$ ranges over all of $\real^k$).
Hence $\bA_k$ is PD, and its determinant, which is the corresponding principal minor, is positive by Theorem~\ref{lemma:sylvester-criterion}.
For 2). The LU decomposition $\bA = \bL\bU_0$ of $\bA$ has the following form
$$
\begin{aligned}
\bA = \bL\bU_0 &=
\begin{bmatrix}
1 & 0 & \ldots & 0 \\
l_{21} & 1 & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
l_{n1} & l_{n2} & \ldots & 1
\end{bmatrix}
\begin{bmatrix}
u_{11} & u_{12} & \ldots & u_{1n} \\
0 & u_{22} & \ldots & u_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & u_{nn}
\end{bmatrix}\\
&=
\begin{bmatrix}
1 & 0 & \ldots & 0 \\
l_{21} & 1 & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
l_{n1} & l_{n2} & \ldots & 1
\end{bmatrix}
\begin{bmatrix}
u_{11} & 0 & \ldots & 0 \\
0 & u_{22} & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & u_{nn}
\end{bmatrix}
\begin{bmatrix}
1 & u_{12}/u_{11} & \ldots & u_{1n}/u_{11} \\
0 & 1 & \ldots & u_{2n}/u_{22}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & 1
\end{bmatrix}\\
&=
\begin{bmatrix}
1 & 0 & \ldots & 0 \\
l_{21} & 1 & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
l_{n1} & l_{n2} & \ldots & 1
\end{bmatrix}
\begin{bmatrix}
u_{11} & 0 & \ldots & 0 \\
0 & u_{22} & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & u_{nn}
\end{bmatrix}
\begin{bmatrix}
1 & l_{21} & \ldots & l_{n1} \\
0 & 1 & \ldots & l_{n2}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & 1
\end{bmatrix}=\bL\bD\bL^\top.
\end{aligned}
$$
We have discussed in Section~\ref{section:cholesky-diagonals} that $\det(\bA) = \det(\bL)\det(\bU_0) = \prod_{i=1}^{n}u_{ii}\leq \prod_{i=1}^{n}a_{ii}$, where the last inequality comes from the fact that $\bA=\bL\bD\bL^\top$ is symmetric with $u_{ji} = l_{ij}u_{jj}$, such that
$$
a_{ii} = \Big(\sum_{j=1}^{i-1} l_{ij}u_{ji}\Big) +u_{ii} = \Big(\sum_{j=1}^{i-1} l_{ij}^2\, u_{jj}\Big) +u_{ii} \geq u_{ii},
$$
since every $u_{jj}>0$. The equality is obtained when $\bA$ is a diagonal matrix.
\end{proof}
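Both properties can be checked numerically (a NumPy sketch with arbitrary test matrices of mine):

```python
import numpy as np

rng = np.random.default_rng(8)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)  # positive definite test matrix

# 1). Any principal minor is positive, e.g., the one selecting rows/columns {0, 2, 4}.
idx = [0, 2, 4]
assert np.linalg.det(A[np.ix_(idx, idx)]) > 0

# 2). det(A) <= product of the diagonal entries.
assert np.linalg.det(A) <= np.prod(np.diag(A))

# Equality holds for a diagonal PD matrix.
D = np.diag([1.0, 2.0, 3.0])
assert np.isclose(np.linalg.det(D), np.prod(np.diag(D)))
```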
\section{Last Words on Positive Definite Matrices}
In Section~\ref{section:equivalent-pd-psd} (p.~\pageref{section:equivalent-pd-psd}), we will prove that a matrix $\bA$ is PD if and only if $\bA$ can be factored as $\bA=\bP^\top\bP$ where $\bP$ is nonsingular. And in Section~\ref{section:unique-posere-pd} (p.~\pageref{section:unique-posere-pd}), we will prove that PD matrix $\bA$ can be uniquely factored as $\bA =\bB^2$ where $\bB$ is also PD. The two results are both consequences of the spectral decomposition of PD matrices.
To conclude, for PD matrix $\bA$, we can factor it into $\bA=\bR^\top\bR$ where $\bR$ is an upper triangular matrix with positive diagonals as shown in Theorem~\ref{theorem:cholesky-factor-exist} by Cholesky decomposition, $\bA = \bP^\top\bP$ where $\bP$ is nonsingular in Theorem~\ref{lemma:nonsingular-factor-of-PD} (p.~\pageref{lemma:nonsingular-factor-of-PD}), and $\bA = \bB^2$ where $\bB$ is PD in Theorem~\ref{theorem:unique-factor-pd} (p.~\pageref{theorem:unique-factor-pd}). For clarity, the different factorizations of positive definite matrix $\bA$ are summarized in Figure~\ref{fig:pd-summary}.
\begin{figure}[htbp]
\centering
\centering
\resizebox{0.9\textwidth}{!}{%
\begin{tikzpicture}[>=latex]
\tikzstyle{state} = [draw, very thick, fill=white, rectangle, minimum height=3em, minimum width=6em, node distance=8em, font={\sffamily\bfseries}]
\tikzstyle{stateEdgePortion} = [black,thick];
\tikzstyle{stateEdge} = [stateEdgePortion,->];
\tikzstyle{stateEdge2} = [stateEdgePortion,<->];
\tikzstyle{edgeLabel} = [pos=0.5, text centered, font={\sffamily\small}];
\node[ellipse, name=pdmatrix, draw,font={\sffamily\bfseries}, node distance=7em, xshift=-9em, yshift=-1em,fill={colorals}] {PD Matrix $\bA$};
\node[state, name=bsqure, below of=pdmatrix, xshift=0em, yshift=1em, fill={colorlu}] {$\bB^2$};
\node[state, name=ptp, right of=bsqure, xshift=3em, fill={colorlu}] {$\bP^\top\bP$};
\node[state, name=rsqure, left of=bsqure, xshift=-3em, fill={colorlu}] {$\bR^\top\bR$};
\node[ellipse, name=utv, below of=pdmatrix,draw, node distance=7em, xshift=0em, yshift=-4em,font={\tiny},fill={coloruppermiddle}] {PD $\bB$};
\node[ellipse, name=upperr, left of=utv, draw, node distance=8em, xshift=-3em,font={\tiny},fill={coloruppermiddle}] {\parbox{6em}{Upper \\Triangular $\bR$}};
\node[ellipse, name=nonp, right of=utv,draw, node distance=8em, xshift=3em, font={\tiny},fill={coloruppermiddle}] {Nonsingular $\bP$};
\coordinate (lq2inter3) at ($(pdmatrix.east -| ptp.north) + (-0em,0em)$);
\draw (pdmatrix.east) edge[stateEdgePortion] (lq2inter3);
\draw (lq2inter3) edge[stateEdge]
node[edgeLabel, text width=7.25em, yshift=0.8em]{\parbox{5em}{Spectral\\Decomposition}} (ptp.north);
\coordinate (rqr2inter1) at ($(pdmatrix.west) + (0,0em)$);
\coordinate (rqr2inter3) at ($(rqr2inter1-| rsqure.north) + (-0em,0em)$);
\draw (rqr2inter1) edge[stateEdgePortion] (rqr2inter3);
\draw (rqr2inter3) edge[stateEdge]
node[edgeLabel, text width=8em, yshift=0.8em]{\parbox{2em}{LU/\\Spectral/\\Recursive}} (rsqure.north);
\draw (pdmatrix.south)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{\parbox{5em}{Spectral\\Decomposition} }
(bsqure.north);
\draw (upperr.north)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{}
(rsqure.south);
\draw (utv.north)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{}
(bsqure.south);
\draw (nonp.north)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{}
(ptp.south);
\begin{pgfonlayer}{background}
\draw [join=round,cyan,dotted,fill={colormiddle}] ($(upperr.south west) + (-1.6em, -1em)$) rectangle ($( pdmatrix.east-|ptp.north east) + (1.6em, +1.8em)$);
\end{pgfonlayer}
\end{tikzpicture}
}
\caption{Demonstration of different factorizations on positive definite matrix $\bA$.}
\label{fig:pd-summary}
\end{figure}
\section{Pivoted Cholesky Decomposition}\label{section:volum-picot-cholesjy}
If $\bP$ is a permutation matrix and $\bA$ is positive definite, then $\bP^\top\bA\bP$ is said to be a diagonal permutation of $\bA$ (among other things, it permutes the diagonal entries of $\bA$). Any diagonal permutation of $\bA$
is positive definite and has a Cholesky factor. Such a factorization is called a pivoted Cholesky factorization. There are many ways to pivot a
Cholesky decomposition, but the most common one is complete pivoting (see Section~\ref{section:complete-pivoting}, p.~\pageref{section:complete-pivoting}), such that
$$
\bP\bA\bP^\top = \bR^\top\bR
$$
is the (complete) pivoted Cholesky decomposition of $\bA$, where $\bP$ is a permutation matrix, and $\bR$ is upper triangular.
Following the recursive computation of the Cholesky decomposition in Algorithm~\ref{alg:compute-choklesky11}, we notice from Lemma~\ref{lemma:positive-maximal-diagonal} that the maximal element of a PD matrix lies on the diagonal. Therefore, the complete pivoting algorithm for the Cholesky decomposition need only search the diagonal. The procedure is shown in Algorithm~\ref{alg:cholesky-complete-pivot22}.
\begin{algorithm}[h]
\caption{Cholesky Decomposition via Recursive Algorithm: Complete Pivoting}
\label{alg:cholesky-complete-pivot22}
\begin{algorithmic}[1]
\Require
Positive definite matrix $\bA$ with size $n\times n$;
\State $\bR \in \real^{n\times n}$ is initialized with all zeros;
\For{$k=1$ to $n$} \Comment{compute the $k$th row of $\bR$}
\State Search the diagonal of $\bA_{k:n,k:n}$ for the index $v$ such that $\bA_{vv} = \max_{k\leq j \leq n} \bA_{jj}$;
\State Swap the $k$-th and $v$-th column of $\bA$: $\bA_{:,k}\leftrightarrow\bA_{:,v}$ by column permutation;
\State Swap the $k$-th and $v$-th column of $\bR$: $\bR_{:,k}\leftrightarrow\bR_{:,v}$ by column permutation;
\State Swap the $k$-th and $v$-th row of $\bA$: $\bA_{k,:}\leftrightarrow\bA_{v,:}$ by row permutation;
\State $\bR_{kk} = \sqrt{\bA_{kk}}$; \Comment{first element of $k$-th row, 1 flop}
\State $\bR_{k,k+1:n} = \frac{1}{\bR_{kk}} \bA_{k,k+1:n}$; \Comment{remaining elements of the $k$-th row, $n-k$ flops}
\State $\bA_{k+1:n,k+1:n} = \bA_{k+1:n,k+1:n} - \bR_{k,k+1:n}^\top\bR_{k,k+1:n}$; \Comment{$2(1+2+\ldots+(n-k))$ flops}
\EndFor
\end{algorithmic}
\end{algorithm}
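Algorithm~\ref{alg:cholesky-complete-pivot22} can be sketched in a few lines of NumPy. The sketch below is illustrative and unoptimized; the function name \texttt{pivoted\_cholesky} and the returned pivot vector (used in place of the permutation matrix $\bP$) are conventions of our own.

```python
import numpy as np

def pivoted_cholesky(A):
    """Cholesky with complete (diagonal) pivoting: returns (R, piv) such
    that A[np.ix_(piv, piv)] == R.T @ R with R upper triangular."""
    A = A.astype(float).copy()
    n = A.shape[0]
    R = np.zeros((n, n))
    piv = np.arange(n)
    for k in range(n):
        # search the diagonal of the trailing block for the largest entry
        v = k + int(np.argmax(np.diag(A)[k:]))
        # swap rows/columns k and v of A, columns of R, and record the pivot
        A[:, [k, v]] = A[:, [v, k]]
        A[[k, v], :] = A[[v, k], :]
        R[:, [k, v]] = R[:, [v, k]]
        piv[[k, v]] = piv[[v, k]]
        # standard Cholesky step on the pivoted matrix
        R[k, k] = np.sqrt(A[k, k])
        R[k, k+1:] = A[k, k+1:] / R[k, k]
        A[k+1:, k+1:] -= np.outer(R[k, k+1:], R[k, k+1:])
    return R, piv
```

A side effect of complete pivoting is that the diagonal of $\bR$ is nonincreasing, which is what makes this factorization useful for rank detection.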
\section{Decomposition for Semidefinite Matrices}
For positive semidefinite matrices, the Cholesky decomposition also exists with slight modification.
\begin{theoremHigh}[Semidefinite Decomposition\index{Positive semidefinite}]\label{theorem:semidefinite-factor-exist}
Every positive semidefinite matrix $\bA\in \real^{n\times n}$ can be factored as
$$
\bA = \bR^\top\bR,
$$
where $\bR \in \real^{n\times n}$ is an upper triangular matrix with possible \textbf{zero} diagonal elements and the factorization is \textbf{not unique} in general.
\end{theoremHigh}
For such decomposition, the diagonal of $\bR$ may not display the rank of $\bA$ \citep{higham2009cholesky}.
\begin{example}[\citep{higham2009cholesky}]
Suppose
$$
\bA =
\begin{bmatrix}
1 & -1 & 1 \\
-1 & 1 & -1 \\
1 & -1 & 2
\end{bmatrix}.
$$
The semidefinite decomposition is given by
$$
\bA =
\begin{bmatrix}
1 & 0 & 0 \\
-1 & 0 & 0 \\
1 & 1 & 0
\end{bmatrix}
\begin{bmatrix}
1 & -1 & 1 \\
0 & 0 & 1 \\
0 & 0 & 0
\end{bmatrix}
=\bR^\top\bR.
$$
$\bA$ has rank 2, but $\bR$ has only one nonzero diagonal element.
\exampbar
\end{example}
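The factorization in the example is easy to verify numerically; a quick NumPy check (the matrix values are taken from the example above):

```python
import numpy as np

A = np.array([[1., -1., 1.],
              [-1., 1., -1.],
              [1., -1., 2.]])
R = np.array([[1., -1., 1.],
              [0., 0., 1.],
              [0., 0., 0.]])

assert np.allclose(R.T @ R, A)            # A = R^T R holds
assert np.linalg.matrix_rank(A) == 2      # A has rank 2 ...
assert np.count_nonzero(np.diag(R)) == 1  # ... but R has one nonzero diagonal
```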
We notice that all PD matrices have full rank, a fact that permeates many of the proofs discussed above. This can be proved by Sylvester's criterion (Theorem~\ref{lemma:sylvester-criterion}, p.~\pageref{lemma:sylvester-criterion}), which states that all the leading principal minors of a PD matrix are positive. Alternatively, suppose a PD matrix $\bA$ were rank deficient; then its null space would have positive dimension, so there would exist a \textbf{nonzero} vector $\bx$ with $\bA\bx=\bzero$, hence $\bx^\top\bA\bx=0$, contradicting the definition of a PD matrix.
However, this is not necessarily true for PSD matrices, whose null spaces can have positive dimension.
Therefore, and more generally, a rank-revealing decomposition for semidefinite matrices is provided as follows.\index{Rank-revealing}\index{Semidefinite rank-revealing}
\begin{theoremHigh}[Semidefinite Rank-Revealing Decomposition\index{Rank-revealing}]\label{theorem:semidefinite-factor-rank-reveal}
Every positive semidefinite matrix $\bA\in \real^{n\times n}$ with rank $r$ can be factored as
$$
\bP^\top \bA\bP = \bR^\top\bR, \qquad \mathrm{with} \qquad
\bR = \begin{bmatrix}
\bR_{11} & \bR_{12}\\
\bzero &\bzero
\end{bmatrix} \in \real^{n\times n},
$$
where $\bR_{11} \in \real^{r\times r}$ is an upper triangular matrix with positive diagonal elements, and $\bR_{12}\in \real^{r\times (n-r)}$.
\end{theoremHigh}
The proof for the existence of the above rank-revealing decomposition for semidefinite matrices is deferred to Section~\ref{section:semi-rank-reveal-proof} (p.~\pageref{section:semi-rank-reveal-proof}) as a consequence of the spectral decomposition (Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}) and the column-pivoted QR decomposition (Theorem~\ref{theorem:rank-revealing-qr-general}, p.~\pageref{theorem:rank-revealing-qr-general}). In contrast, the proof of the plain Semidefinite Decomposition Theorem~\ref{theorem:semidefinite-factor-exist} is a direct consequence of the spectral decomposition and the ordinary QR decomposition (Theorem~\ref{theorem:qr-decomposition}, p.~\pageref{theorem:qr-decomposition}).
\section{Application: Rank-One Update/Downdate}\label{section:cholesky-rank-one-update}
Updating linear systems after low-rank modifications of the system matrix is widespread in machine learning, statistics, and many other fields. However, it is well known that this update can lead to serious instabilities in the presence
of roundoff error \citep{seeger2004low}. If the system matrix is positive definite, it is almost always
possible to use a representation based on the Cholesky decomposition which is
much more numerically stable. We will shortly provide the proof for this rank-one update/downdate via the Cholesky decomposition in this section.
\subsection{Rank-One Update}\index{Rank-one update}
A rank-one update $\bA^\prime$ of a matrix $\bA$ by a vector $\bv$ is of the form \citep{gill1974methods, bojanczyk1987note, chang1997pertubation, davis1999modifying, seeger2004low, chen2008algorithm, davis2008user, higham2009cholesky}:
\begin{equation*}
\begin{aligned}
\bA^\prime &= \bA + \bv \bv^\top\\
\bR^{\prime\top}\bR^\prime &= \bR^\top\bR + \bv \bv^\top.
\end{aligned}
\end{equation*}
If we have already calculated the Cholesky factor $\bR$ of $\bA \in \real^{n\times n}$, then the Cholesky factor $\bR^\prime$ of $\bA^\prime$ can be calculated efficiently. Note that $\bA^\prime$ differs from $\bA$ only by a symmetric rank-one matrix. Hence, knowing $\bR$ up front, we can compute $\bR^\prime$ from $\bR$ using the rank-one Cholesky update in $O(n^2)$ operations rather than the $O(n^3)$ of a fresh decomposition; that is, we compute the Cholesky decomposition of $\bA^\prime$ via that of $\bA$.
To see this,
suppose there is a set of orthogonal matrices $\bQ_n \bQ_{n-1}\ldots \bQ_1$ such that
$$
\bQ_n \bQ_{n-1}\ldots \bQ_1
\begin{bmatrix}
\bv^\top \\
\bR
\end{bmatrix}
=
\begin{bmatrix}
\bzero \\
\bR^\prime
\end{bmatrix}.
$$
Then $\bR^\prime$ is the desired Cholesky factor of $\bA^\prime$.
Specifically, multiplying the left-hand side (l.h.s.) of the above equation by its transpose,
$$
\begin{bmatrix}
\bv & \bR^\top
\end{bmatrix}
\bQ_1 \ldots \bQ_{n-1}\bQ_n
\bQ_n \bQ_{n-1}\ldots \bQ_1
\begin{bmatrix}
\bv^\top \\
\bR
\end{bmatrix}
= \bR^\top\bR + \bv \bv^\top.
$$
And multiplying the right-hand side (r.h.s.) by its transpose,
$$
\begin{bmatrix}
\bzero & \bR^{\prime\top}
\end{bmatrix}
\begin{bmatrix}
\bzero \\
\bR^\prime
\end{bmatrix}=\bR^{\prime\top}\bR^\prime,
$$
which agrees with the l.h.s. equation. Givens rotations are orthogonal matrices that can transform $\bR,\bv$ into $\bR^\prime$ in this way. We will discuss the intrinsic meaning of the Givens rotation shortly, when proving the existence of the QR decomposition in Section~\ref{section:qr-givens} (p.~\pageref{section:qr-givens}). Here, we only introduce its definition and state the results directly. Feel free to skip this discussion on a first reading.
\begin{definition}[$n$-th Order Givens Rotation]
A Givens rotation is represented by a matrix of the following form
$$
\bG_{kl}=
\begin{bmatrix}
1 & & & & & & & & &\\
& \ddots & & & & && & &\\
& & 1 & & & & && &\\
& & & c & & & & s & &\\
&& & & 1 & & && &\\
&& & & &\ddots & && &\\
&& & & & & 1&& &\\
&& & -s & & & &c& &\\
&& & & & & & &1 & \\
&& & & & & & & &\ddots
\end{bmatrix}_{n\times n},
$$
where the $(k,k), (k,l), (l,k), (l,l)$ entries are $c, s, -s, c$, respectively, with $c=\cos \theta$ and $s = \sin \theta$ for some $\theta$.
Let $\bdelta_k \in \real^n$ be the zero vector except that the entry $k$ is 1. Then mathematically, the Givens rotation defined above can be denoted by
$$
\bG_{kl}= \bI + (c-1)(\bdelta_k\bdelta_k^\top + \bdelta_l\bdelta_l^\top) + s(\bdelta_k\bdelta_l^\top -\bdelta_l\bdelta_k^\top ).
$$
\end{definition}
It can be easily verified that the $n$-th order Givens rotation is an orthogonal matrix and its determinant is 1. For any vector $\bx =[x_1, x_2, \ldots, x_n]^\top \in \real^n$, we have $\by = \bG_{kl}\bx$, where
$$
\left\{
\begin{aligned}
&y_k = c \cdot x_k + s\cdot x_l, \\
&y_l = -s\cdot x_k +c\cdot x_l, \\
&y_j = x_j , & (j\neq k,l)
\end{aligned}
\right.
$$
That is, a Givens rotation applied to $\bx$ rotates two components of $\bx$ by some angle $\theta$ and leaves all other components the same.
Now, suppose we have an $(n+1)$-th order Givens rotation indexed from $0$ to $n$, and it is given by
$$
\bG_k = \bI + (c_k-1)(\bdelta_0\bdelta_0^\top + \bdelta_k\bdelta_k^\top) + s_k(\bdelta_0\bdelta_k^\top -\bdelta_k\bdelta_0^\top ),
$$
where $c_k = \cos \theta_k, s_k=\sin\theta_k$ for some $\theta_k$, $\bG_k \in \real^{(n+1)\times (n+1)}$, and $\bdelta_k\in \real^{n+1}$ is the zero vector except that its $(k+1)$-th entry is 1.
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10]
Taking out the $k$-th column on both sides of the equation
$$
\bG_n \bG_{n-1}\ldots \bG_1
\begin{bmatrix}
\bv^\top \\
\bR
\end{bmatrix}
=
\begin{bmatrix}
\bzero \\
\bR^\prime
\end{bmatrix},
$$
where we let the $k$-th element of $\bv$ be $v_k$, and the $k$-th diagonal of $\bR$ be $r_{kk}$.
Since $r_{kk} > 0$, we have $\sqrt{v_k^2 + r_{kk}^2} \neq 0$;
let $c_k = \frac{r_{kk}}{\sqrt{v_k^2 + r_{kk}^2}}$ and $s_k=-\frac{v_k}{\sqrt{v_k^2 + r_{kk}^2}}$. Then,
$$
\left\{
\begin{aligned}
&v_k \rightarrow c_kv_k+s_kr_{kk}=0; \\
&r_{kk}\rightarrow -s_k v_k +c_kr_{kk}= \sqrt{v_k^2 + r_{kk}^2} = r^\prime_{kk} . \\
\end{aligned}
\right.
$$
That is, $\bG_k$ introduces a zero value into the $k$-th element of $\bv$ and a nonzero value into $r_{kk}$.
\end{mdframed}
The finding above is essential for the rank-one update, and we obtain
$$
\bG_n \bG_{n-1}\ldots \bG_1
\begin{bmatrix}
\bv^\top \\
\bR
\end{bmatrix}
=
\begin{bmatrix}
\bzero \\
\bR^\prime
\end{bmatrix}.
$$
Each Givens rotation takes $6n$ flops, and there are $n$ such rotations, requiring $6n^2$ flops if we keep only the leading term. The complexity of calculating the Cholesky factor of $\bA^\prime$ is thus reduced from $\frac{1}{3} n^3$ to $6n^2$ flops by the rank-one update if we already know the Cholesky factor of $\bA$. The above algorithm is essential for reducing the complexity of the posterior calculation in Bayesian inference for the Gaussian mixture model \citep{lu2021bayes}.
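The update above can be sketched as follows, assuming NumPy. The helper name \texttt{chol\_update} is ours, and the signs of $c_k, s_k$ below follow the common convention $s_k = v_k/\sqrt{v_k^2+r_{kk}^2}$ (the opposite sign choice in the text works equally well):

```python
import numpy as np

def chol_update(R, v):
    """Given upper-triangular R with A = R^T R, return R' such that
    R'^T R' = A + v v^T, using n Givens rotations (O(n^2) flops)."""
    R = R.copy()
    v = v.astype(float).copy()
    n = R.shape[0]
    for k in range(n):
        r = np.hypot(R[k, k], v[k])       # sqrt(r_kk^2 + v_k^2) = r'_kk
        c, s = R[k, k] / r, v[k] / r
        # rotate row k of R together with the remaining part of v
        Rk = R[k, k:].copy()
        R[k, k:] = c * Rk + s * v[k:]     # new row k of R'
        v[k:] = -s * Rk + c * v[k:]       # v_k is zeroed out here
    return R
```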
\subsection{Rank-One Downdate}
Now suppose we have calculated the Cholesky factor of $\bA$, and $\bA^\prime$ is the downdate of $\bA$ as follows:
\begin{equation*}
\begin{aligned}
\bA^\prime &= \bA - \bv \bv^\top\\
\bR^{\prime\top}\bR^\prime &= \bR^\top\bR - \bv \bv^\top.
\end{aligned}
\end{equation*}
The algorithm is similar by proceeding as follows:
\begin{equation}\label{equation:rank-one-downdate}
\bG_1 \bG_{2}\ldots \bG_n
\begin{bmatrix}
\bzero \\
\bR
\end{bmatrix}
=
\begin{bmatrix}
\bv^\top \\
\bR^\prime
\end{bmatrix}.
\end{equation}
Again, $
\bG_k = \bI + (c_k-1)(\bdelta_0\bdelta_0^\top + \bdelta_k\bdelta_k^\top) + s_k(\bdelta_0\bdelta_k^\top -\bdelta_k\bdelta_0^\top ),
$ can be constructed as follows:
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10]
Taking out the $k$-th column on both sides of the equation
$$
\bG_1 \bG_{2}\ldots \bG_n
\begin{bmatrix}
\bzero \\
\bR
\end{bmatrix}
=
\begin{bmatrix}
\bv^\top \\
\bR^\prime
\end{bmatrix}.
$$
We realize that $r_{kk} \neq 0$;
let $c_k=\frac{\sqrt{r_{kk}^2 - v_k^2}}{r_{kk}}$ and $s_k = \frac{v_k}{r_{kk}}$. Then,
$$
\left\{
\begin{aligned}
& 0 \rightarrow s_kr_{kk}=v_k; \\
&r_{kk}\rightarrow c_k r_{kk}= \sqrt{r_{kk}^2-v_k^2 }=r^\prime_{kk} . \\
\end{aligned}
\right.
$$
This requires $r^2_{kk} > v_k^2$ for $\bA^\prime$ to be positive definite. Otherwise, $c_k$ above will not exist.
\end{mdframed}
Again, one can check that multiplying the l.h.s. of Equation~\eqref{equation:rank-one-downdate} by its transpose gives
$$
\begin{bmatrix}
\bzero & \bR^\top
\end{bmatrix}
\bG_n \ldots \bG_{2}\bG_1
\bG_1 \bG_{2}\ldots \bG_n
\begin{bmatrix}
\bzero \\
\bR
\end{bmatrix} =\bR^\top\bR.
$$
And multiplying the r.h.s. by its transpose gives
$$
\begin{bmatrix}
\bv & \bR^{\prime\top}
\end{bmatrix}
\begin{bmatrix}
\bv^\top \\
\bR^\prime
\end{bmatrix}=\bv\bv^\top + \bR^{\prime\top}\bR^\prime.
$$
This results in $\bR^{\prime\top}\bR^\prime = \bR^\top\bR - \bv \bv^\top$.
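A minimal NumPy sketch of the downdate, following the sequential scheme above; the helper name \texttt{chol\_downdate} is ours, and the routine fails loudly when $r_{kk}^2 \leq v_k^2$ at some step:

```python
import numpy as np

def chol_downdate(R, v):
    """Given upper-triangular R with A = R^T R, return R' such that
    R'^T R' = A - v v^T (requires A - v v^T to stay positive definite)."""
    R = R.copy()
    v = v.astype(float).copy()
    n = R.shape[0]
    for k in range(n):
        if R[k, k] ** 2 <= v[k] ** 2:
            raise ValueError("downdate would lose positive definiteness")
        r = np.sqrt(R[k, k] ** 2 - v[k] ** 2)    # downdated diagonal r'_kk
        c, s = r / R[k, k], v[k] / R[k, k]
        R[k, k] = r
        # invert the Givens rotation on the rest of row k, then carry v along
        R[k, k+1:] = (R[k, k+1:] - s * v[k+1:]) / c
        v[k+1:] = c * v[k+1:] - s * R[k, k+1:]
    return R
```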
\section{Application: Indefinite Rank Two Update}\index{Rank-two update}
Let $\bA = \bR^\top\bR$ be the Cholesky decomposition of $\bA$. \citet{goldfarb1976factorized, seeger2004low} give a stable method for the indefinite rank-two update of the form
$$
\bA^\prime = (\bI+\bv\bu^\top)\bA(\bI+\bu\bv^\top).
$$
Let
$$
\bigg\{
\begin{aligned}
\bz &= \bR^{-\top}\bv, \\
\bw &= \bR\bu,
\end{aligned}
\qquad
\rightarrow
\qquad
\bigg\{
\begin{aligned}
\bv &= \bR^{\top}\bz, \\
\bu &= \bR^{-1}\bw.
\end{aligned}
$$
And suppose the LQ decomposition\footnote{We will introduce it shortly in Theorem~\ref{theorem:lq-decomposition} (p.~\pageref{theorem:lq-decomposition}).} of $\bI+\bz\bw^\top$ is given by $\bI+\bz\bw^\top =\bL\bQ$, where $\bL$ is lower triangular and $\bQ$ is orthogonal. Thus, we have
$$
\begin{aligned}
\bA^\prime &= (\bI+\bv\bu^\top)\bA(\bI+\bu\bv^\top)\\
&= (\bI+\bR^{\top}\bz \bw^\top \bR^{-\top })\bA(\bI+\bR^{-1}\bw \bz^\top\bR)\\
&= \bR^\top (\bI+\bz\bw^\top)(\bI+\bw\bz^\top)\bR\\
&= \bR^\top\bL \bQ\bQ^\top \bL^\top\bR \\
&= \bR^\top\bL \bL^\top\bR.
\end{aligned}
$$
Letting $\bR^\prime = \bR^\top\bL$, which is lower triangular, we obtain the Cholesky decomposition $\bA^\prime = \bR^\prime\bR^{\prime\top}$.
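The derivation can be checked numerically. The sketch below assumes NumPy and obtains the LQ decomposition of $\bI+\bz\bw^\top$ from the QR decomposition of its transpose:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
R = np.linalg.cholesky(A).T            # A = R^T R, with R upper triangular
v, u = rng.standard_normal(n), rng.standard_normal(n)

z = np.linalg.solve(R.T, v)            # z = R^{-T} v
w = R @ u                              # w = R u
# LQ decomposition of I + z w^T via QR of its transpose: X = (Rq^T)(Q_^T)
Q_, Rq = np.linalg.qr((np.eye(n) + np.outer(z, w)).T)
L, Q = Rq.T, Q_.T                      # I + z w^T = L Q, L lower triangular
Rp = R.T @ L                           # lower-triangular factor of A'

Aprime = (np.eye(n) + np.outer(v, u)) @ A @ (np.eye(n) + np.outer(u, v))
assert np.allclose(Rp @ Rp.T, Aprime)  # A' = R' R'^T
```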
\part{Special Topics}
\newpage
\chapter{Coordinate Transformation in Matrix Decomposition}\label{section:coordinate-transformation}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
Suppose a vector $\bv\in \real^3$ has elements $\bv = [3;7;2]$. But what do the values 3, 7, and 2 mean? In the Cartesian coordinate system, $\bv$ has a component of 3 on the $x$-axis, a component of 7 on the $y$-axis, and a component of 2 on the $z$-axis.\index{Coordinate transformation}
\section{An Overview of Matrix Multiplication}
\paragraph{Coordinate defined by a nonsingular matrix} Suppose further a $3\times 3$ nonsingular matrix $\bB$, which means $\bB$ is invertible and the columns of $\bB$ are linearly independent. Thus the 3 columns of $\bB$ form a basis for the space $\real^{3}$. Going one step further, we can take the 3 columns of $\bB$ as a basis for a \textcolor{blue}{new coordinate system}, which we call the \textcolor{blue}{$B$ coordinate system}. Going back to the Cartesian coordinate system, we also have three basis vectors, $\be_1, \be_2, \be_3$. If we put these three vectors into the columns of a matrix, the matrix is the identity matrix. So $\bI\bv = \bv$ means \textcolor{blue}{transferring $\bv$ from the Cartesian coordinate system into the Cartesian coordinate system}, i.e., the same coordinates. Similarly, \textcolor{blue}{$\bB\bv=\bu$ transfers $\bv$ from the Cartesian coordinate system into the $B$ system}. Specifically, for $\bv = [3;7;2]$ and $\bB=[\bb_1, \bb_2, \bb_3]$, we have $\bu=\bB\bv = 3\bb_1+7\bb_2+2\bb_3$, i.e., $\bu$ contains 3 of the first basis vector $\bb_1$ of $\bB$, 7 of the second basis vector $\bb_2$, and 2 of the third basis vector $\bb_3$.
If, again, we want to transfer the vector $\bu$ from the $B$ coordinate system back to the Cartesian coordinate system, we just need to multiply by $\bB^{-1}$: $\bB^{-1}\bu = \bv$.
\paragraph{Coordinate defined by an orthogonal matrix} A $3\times 3$ orthogonal matrix $\bQ$ defines a ``better" coordinate system since the three columns (i.e., basis) are orthonormal to each other. $\bQ\bv$ is to transfer $\bv$ from the Cartesian to the coordinate system defined by the orthogonal matrix. Since the basis vectors from the orthogonal matrix are orthonormal, just like the three vectors $\be_1, \be_2, \be_3$ in the Cartesian coordinate system, the transformation defined by the orthogonal matrix just rotates or reflects the Cartesian system.
$\bQ^\top$ can help transfer back to the Cartesian coordinate system.
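A small NumPy illustration of these transfers; the matrix $\bB$ below is an arbitrary nonsingular example of our own choosing:

```python
import numpy as np

v = np.array([3., 7., 2.])
B = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])       # nonsingular: its columns form a basis

u = B @ v                          # transfer v into the B coordinate system
assert np.allclose(u, 3*B[:, 0] + 7*B[:, 1] + 2*B[:, 2])
assert np.allclose(np.linalg.solve(B, u), v)   # B^{-1} u transfers back

Q, _ = np.linalg.qr(B)             # an orthogonal matrix: Q^T transfers back
assert np.allclose(Q.T @ (Q @ v), v)
```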
\begin{figure}[h]
\centering
\includegraphics[width=0.99\textwidth]{imgs/eigenRotate.pdf}
\caption{Eigenvalue Decomposition: $\bX^{-1}$ transforms to a different coordinate system. $\bLambda$ stretches and $\bX$ transforms back. $\bX^{-1}$ and $\bX$ are nonsingular, which will change the basis of the system, and the angle between the vectors $\bv_1$ and $\bv_2$ will \textbf{not} be preserved, that is, the angle between $\bv_1$ and $\bv_2$ is \textbf{different} from the angle between $\bv_1^\prime$ and $\bv_2^\prime$. The lengths of $\bv_1$ and $\bv_2$ are also \textbf{not} preserved, that is, $||\bv_1|| \neq ||\bv_1^\prime||$ and $||\bv_2|| \neq ||\bv_2^\prime||$.}
\label{fig:eigen-rotate}
\end{figure}
\section{Eigenvalue Decomposition}
A square matrix $\bA$ with linearly independent eigenvectors can be factored as $\bA = \bX\bLambda\bX^{-1}$, where $\bX$ and $\bX^{-1}$ are nonsingular so that they define a system transformation intrinsically. $\bA\bu = \bX\bLambda\bX^{-1}\bu$ firstly transfers $\bu$ into the system defined by $\bX^{-1}$. Let's call this system the \textbf{eigen coordinate system}. $\bLambda$ then stretches each component of the vector in the eigen system by the corresponding eigenvalue. And then $\bX$ helps to transfer the resulting vector back to the Cartesian coordinate system. A demonstration of how the eigenvalue decomposition transforms between coordinate systems is shown in Figure~\ref{fig:eigen-rotate}, where $\bv_1, \bv_2$ are two linearly independent eigenvectors of $\bA$ such that they form a basis for $\real^2$.
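The three steps can be traced explicitly in NumPy; the matrix $\bA$ below is a small example of our own with linearly independent eigenvectors:

```python
import numpy as np

A = np.array([[3., 1.],
              [0., 2.]])               # distinct eigenvalues 3 and 2
lam, X = np.linalg.eig(A)              # A = X diag(lam) X^{-1}
u = np.array([1., 1.])

c = np.linalg.solve(X, u)              # step 1: into the eigen system
c_stretched = lam * c                  # step 2: stretch by eigenvalues
result = X @ c_stretched               # step 3: back to Cartesian
assert np.allclose(result, A @ u)
```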
\begin{figure}[h]
\centering
\includegraphics[width=0.99\textwidth]{imgs/spectralrotate.pdf}
\caption{Spectral Decomposition $\bQ\bLambda \bQ^\top$: $\bQ^\top$ rotates or reflects, $\bLambda$ stretches the circle to an ellipse, and $\bQ$ rotates or reflects back. Orthogonal matrices $\bQ^\top$ and $\bQ$ only change the basis of the system. However, they preserve the angle between the vectors $\bq_1$ and $\bq_2$, and their lengths.}
\label{fig:spectral-rotate}
\end{figure}
\section{Spectral Decomposition}
A symmetric matrix $\bA$ can be factored as $\bA = \bQ\bLambda\bQ^\top$, where $\bQ$ and $\bQ^\top$ are orthogonal so that they define a system transformation intrinsically. $\bA\bu = \bQ\bLambda\bQ^\top\bu$ firstly rotates or reflects $\bu$ into the system defined by $\bQ^\top$. Let's call this system the \textbf{spectral coordinate system}. $\bLambda$ then stretches each component of the vector in the spectral system by the corresponding eigenvalue. And then $\bQ$ helps to rotate or reflect the resulting vector back to the original coordinate system. A demonstration of how the spectral decomposition transforms between coordinate systems is shown in Figure~\ref{fig:spectral-rotate}, where $\bq_1, \bq_2$ are two linearly independent eigenvectors of $\bA$ such that they form a basis for $\real^2$. The coordinate transformation in the spectral decomposition is similar to that of the eigenvalue decomposition, except that in the spectral decomposition, orthogonal vectors transferred by $\bQ^\top$ remain orthogonal. This is a property of orthogonal matrices: they change the basis of the system while preserving the angle (inner product) between vectors
$$
\bu^\top \bv = (\bQ\bu)^\top(\bQ\bv).
$$
This invariance of inner products, and hence of the angles between vectors, also entails the invariance of their lengths:
$$
||\bQ\bu|| = ||\bu||.
$$
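Both invariances are easy to confirm numerically for a random orthogonal $\bQ$ (obtained here from a QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal Q
u, v = rng.standard_normal(3), rng.standard_normal(3)

assert np.isclose(u @ v, (Q @ u) @ (Q @ v))        # inner products preserved
assert np.isclose(np.linalg.norm(Q @ u), np.linalg.norm(u))  # lengths too
```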
\section{SVD}
\begin{figure}[H]
\centering
\includegraphics[width=0.99\textwidth]{imgs/svdrotate.pdf}
\caption{SVD: $\bV^\top$ and $\bU$ rotate or reflect, $\bSigma$ stretches the circle to an ellipse. Orthogonal matrices $\bV^\top$ and $\bU$ only change the basis of the system. However, they preserve the angle between the vectors $\bv_1$ and $\bv_2$, and their lengths.}
\label{fig:svd-rotate}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.99\textwidth]{imgs/polarrotate.pdf}
\caption{$\bV\bSigma \bV^\top$ from SVD or Polar decomposition: $\bV^\top$ rotates or reflects, $\bSigma$ stretches the circle to an ellipse, and $\bV$ rotates or reflects back. Orthogonal matrices $\bV^\top$ and $\bV$ only change the basis of the system. However, they preserve the angle between the vectors $\bv_1$ and $\bv_2$, and their lengths.}
\label{fig:polar-rotate}
\end{figure}
Any $m\times n$ matrix can be factored as $\bA=\bU\bSigma\bV^\top$. $\bA\bu=\bU\bSigma\bV^\top\bu$ then firstly rotates or reflects $\bu$ into the system defined by $\bV^\top$, which we call the \textbf{$V$ coordinate system}. $\bSigma$ stretches the first $r$ components of the resulting vector in the $V$ system by the corresponding singular values. If $n\geq m$, then $\bSigma$ keeps $m-r$ additional components that are stretched to zero while removing the final $n-m$ components. If $m>n$, then $\bSigma$ stretches $n-r$ components to zero and also appends $m-n$ zero components. Finally, $\bU$ rotates or reflects the resulting vector into the \textbf{$U$ coordinate system} defined by $\bU$. A demonstration of how the SVD transforms in a $2\times 2$ example is shown in Figure~\ref{fig:svd-rotate}. Further, Figure~\ref{fig:polar-rotate} demonstrates the transformation of $\bV\bSigma \bV^\top$ in a $2\times 2$ example.
Similar to the spectral decomposition, orthogonal matrices $\bV^\top$ and $\bU$ only change the basis of the system. However, they preserve the angle between the vectors $\bv_1$ and $\bv_2$.
\begin{figure}[H]
\centering
\includegraphics[width=0.99\textwidth]{imgs/polarrotate2.pdf}
\caption{Polar decomposition: $\bV^\top$ rotates or reflects, $\bSigma$ stretches the circle to an ellipse, and $\bV$ rotates or reflects back. Orthogonal matrices $\bV^\top$, $\bV$, $\bQ_l$ only change the basis of the system. However, they preserve the angle between the vectors $\bv_1$ and $\bv_2$, and their lengths.}
\label{fig:polar-rotate2}
\end{figure}
\section{Polar Decomposition}
Any $n\times n$ square matrix $\bA$ can be factored as the left polar decomposition $\bA = (\bU\bV^\top)( \bV\bSigma \bV^\top) = \bQ_l\bS$. Similarly, $\bA\bu = \bQ_l( \bV\bSigma \bV^\top)\bu$ is to transfer $\bu$ into the system defined by $\bV^\top$ and stretch each component by the corresponding singular value. Then the resulting vector is transferred back into the Cartesian coordinate system by $\bV$. Finally, $\bQ_l$ rotates or reflects the resulting vector from the Cartesian coordinate system into the $Q$ system defined by $\bQ_l$. The right polar decomposition admits a similar description. As with the spectral decomposition, the orthogonal matrices $\bV^\top$ and $\bV$ only change the basis of the system while preserving the angle between the vectors $\bv_1$ and $\bv_2$.
\part{Data Interpretation and Information Distillation}\label{part:data-interation}
\section*{Introduction}
For matrix $\bA\in \real^{m\times n}$ with rank $r$, there exist $r$ linearly independent columns and rows respectively by the fundamental theorem of linear algebra (Theorem~\ref{theorem:fundamental-linear-algebra}, p.~\pageref{theorem:fundamental-linear-algebra}). Then $\bA$ admits
$$
\text{(DI1)} \qquad \underset{m\times n}{\bA} = \underset{m\times r}{\bC} \gap \underset{r\times n}{\bF},
$$
where $\bC$ contains $r$ linearly independent columns of $\bA$, and $\bF$ reconstructs all the columns of $\bA$ since every column of $\bC\bF$ is a combination of the columns of $\bC$. To see this, suppose $\bF=[\bff_1, \bff_2, \ldots, \bff_n]$ is the column partition of $\bF$; we have
$$
\bA = \bC\bF=[\bC\bff_1, \bC\bff_2, \ldots, \bC\bff_n],
$$
where each column $\bC\bff_i$ is a combination of the columns of $\bC$. And the columns of $\bC$ are known as the \textit{spanning columns} of $\bA$.
Or $\bA$ admits
$$
\text{(DI2)} \qquad \underset{m\times n}{\bA} = \underset{m\times r}{\bD} \gap \underset{r\times n}{\bR},
$$
where $\bR$ contains $r$ linearly independent rows of $\bA$, and $\bD$ reconstructs all the rows of $\bA$ since every row of $\bD\bR$ is a combination of the rows of $\bR$. To see this, suppose $\bD=[\bd_1^\top; \bd_2^\top; \ldots; \bd_m^\top]$ is the row partition of $\bD$; we have
$$
\bA = \bD\bR=
\begin{bmatrix}
\bd_1^\top\bR \\
\bd_2^\top\bR \\
\vdots \\
\bd_m^\top\bR
\end{bmatrix},
$$
where each row $\bd_i^\top\bR$ is a combination of the rows of $\bR$. And the rows of $\bR$ are known as the \textit{spanning rows} of $\bA$.
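A toy NumPy example of (DI1); the matrices below are our own, with the third column of $\bA$ chosen as the sum of the first two:

```python
import numpy as np

# rank-2 matrix: third column is the sum of the first two
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [2., 3., 5.]])
C = A[:, :2]                       # two linearly independent columns of A
F = np.array([[1., 0., 1.],
              [0., 1., 1.]])       # reconstructs every column of A from C
assert np.allclose(C @ F, A)
assert np.linalg.matrix_rank(A) == 2
```

Note that only $r(m+n) = 2(3+3) = 12$ numbers are stored in $\bC$ and $\bF$ here, versus $mn = 9$ for $\bA$ itself; the saving kicks in when $r \ll \min(m,n)$.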
This factorization in (DI1) is similar to the QR decomposition where the first factor spans the column space of $\bA$ (orthogonal in the latter), whilst (DI2) is similar to the LQ decomposition. However (DI1) has several advantages, as compared to, e.g., the QR:
\begin{itemize}
\item It is sometimes advantageous to work with a basis that consists of a subset of the
columns of $\bA$ itself. In this case, one typically has to give up on the requirement that the basis
vectors are orthonormal;
\item If $\bA$ is sparse and nonnegative, then $\bC$ shares these properties;
\item The (DI1) requires less memory to store and is more efficient to calculate than the QR in general. One can see from the above representation that only $r(m+n)$ entries have to be
stored instead of $mn$ entries of the original matrix $\bA$ or $mn+\frac{(n+1)n}{2}$ entries in the \textit{reduced} QR decomposition (Figure~\ref{fig:qr-comparison}, p.~\pageref{fig:qr-comparison});
\item Finding the indices associated with the spanning columns is often helpful for the purpose of data interpretation and analysis; it can be very useful to identify a subset of the columns that distills the information in the matrix;
\item Finding the least squares solution of $\bA\bx=\bb$ can be done by calculating the least squares solution of $\bC\widetildebx=\bb$, where the former projects $\bb$ onto the column space of $\bA$ and the latter projects it onto the column space of $\bC$. This factorization (DI1) can be regarded as the first phase of variable selection in least squares or linear models in general. A short review of the variable selection procedure is given in Section~\ref{section:append-column-qr} (p.~\pageref{section:append-column-qr}) as an application of the QR decomposition, and more details can be found in \citep{lu2021rigorous};
\item The (DI1) often preserves ``the physics" of a problem in a way that the QR does not.
\end{itemize}
For the rest of this part, we will discuss several variations of the factorization above with different focuses, e.g., ease of description, interpretability, or conditioning, and how they select the basis vectors for the column space, for the row space, or for both. The big picture is shown in Figure~\ref{fig:data-interpretation-world-picture}.
\begin{figure}[htbp]
\centering
\begin{widepage}
\centering
\resizebox{0.65\textwidth}{!}{%
\begin{tikzpicture}[>=latex]
\tikzstyle{state} = [draw, very thick, fill=white, rectangle, minimum height=3em, minimum width=6em, node distance=8em, font={\sffamily\bfseries}]
\tikzstyle{stateEdgePortion} = [black,thick];
\tikzstyle{stateEdge} = [stateEdgePortion,->];
\tikzstyle{stateEdge2} = [stateEdgePortion,<->];
\tikzstyle{edgeLabel} = [pos=0.5, text centered, font={\sffamily\small}];
\node[state, name=cr, fill={colorcr}] {CR};
\node[state, name=rank, above of=cr,yshift=-3.2em, fill={colorcr}] {Rank};
\node[state, name=interpolative, left of=cr, yshift=-2em, fill={colorcr}] {Interpolative};
\node[state, name=skeleton, right of=cr, yshift=-2em, fill={colorcr}] {\parbox{3.5em}{ Skeleton (CUR)}};
\draw (rank.south)
edge[stateEdge] node[edgeLabel, yshift=0em, xshift=-0em]{Special Case}
(cr.north) ;
\draw ($(cr.east) + (0,1em)$)
edge[stateEdge, bend left=12.5] node[edgeLabel, xshift=0.5em, yshift=0.8em]{}
(skeleton.north);
\draw[decoration={text along path,
text={|\sffamily|Same C},text align={center}},decorate] ($(cr.east) + (-1.8em,1.2em)$) to [bend left=20.5] ($(skeleton.north) + (2.5em,-1.5em)$);
\draw (rank.west)
edge[stateEdge, bend left=-30] node[edgeLabel, yshift=0em, xshift=-0em]{}
($(interpolative.north) + (-2.5em,0em)$) ;
\draw[decoration={text along path,
text={|\sffamily|Special Case},text align={center}},decorate] ($(interpolative.north) + (-3em,0.3em)$) to [bend left=30] ($(rank.west) + (-0em,0.em)$) ;
\draw (rank.east)
edge[stateEdge, bend left=30] node[edgeLabel, yshift=0em, xshift=-0em]{}
($(skeleton.north) + (2.5em,0em)$) ;
\draw[decoration={text along path,
text={|\sffamily|``Special" Case},text align={center}},decorate] ($(rank.east) + (0.7em,0.2em)$) to [bend left=30] ($(skeleton.north) + (2.5em,0.5em)$);
\draw ($(cr.west) + (0,1em)$)
edge[stateEdge, bend left=-12.5] node[edgeLabel, xshift=-0.5em, yshift=0.8em]{}
(interpolative.north);
\draw[decoration={text along path,
text={|\sffamily|Independent},text align={center}},decorate] ($(interpolative.north) + (-0.5em,0.1em)$) to [bend left=12.5] ($(cr.west) + (.5em,1.2em)$);
\draw[decoration={text along path,
text={|\sffamily|Columns},text align={center}},decorate] ($(interpolative.north) + (-0.8em,-1.1em)$) to [bend left=12.5] ($(cr.west) + (2.5em,0.3em)$);
\begin{pgfonlayer}{background}
\draw [join=round,cyan,dotted,fill={coloruppermiddle}] ($(rank.north west -| interpolative.north west) + (-0.8em, +0.8em)$) rectangle ($(skeleton.south east) + (0.8em, -0.8em)$);
\end{pgfonlayer}
\end{tikzpicture}
}
\end{widepage}
\caption{Data interpretation relationship. See also its place in the matrix decomposition world map in Figure~\ref{fig:matrix-decom-world-picture}.}
\label{fig:data-interpretation-world-picture}
\end{figure}
\subsection*{\textbf{A Brief Introduction for SVD}}
For the analysis of the decompositions in this part, especially the interpolative decomposition, we need the SVD and the Eckart-Young-Mirsky theorem, which we briefly introduce here. More details are deferred to Section~\ref{section:SVD} (p.~\pageref{section:SVD}). For any matrix $\bA\in \real^{m\times n}$ with rank $r$, it admits
$$
\bA = \bU \bSigma \bV^\top = \sum_{i=1}^r \sigma_i \bu_i \bv_i^\top,
$$
where $\bu_i$ and $\bv_i$ are left and right singular vectors respectively. The matrix $\bSigma\in \real^{m\times n}$ is a rectangular diagonal matrix whose entries are singular values in descending
order, $\sigma_1\geq \sigma_2\geq \ldots \geq \sigma_r \textcolor{blue}{>} \sigma_{r+1}=\ldots=0$, along the main diagonal with only $r$ nonzero singular values.
Given further $1\leq k\leq r$, let $\bA_k$ be the \textit{truncated SVD (TSVD)} of $\bA$ with the largest $k$ terms, i.e., $\bA_k = \sum_{i=1}^{k} \sigma_i\bu_i\bv_i^\top$, obtained from the SVD $\bA=\sum_{i=1}^{r} \sigma_i\bu_i\bv_i^\top$ by zeroing out the $r-k$ trailing singular values of $\bA$. Then $\bA_k$ is the best rank-$k$ approximation to $\bA$ in terms of both the spectral and the Frobenius norm. That is, for any matrix $\bB$ of rank at most $k$, we have $||\bA-\bA_k||_2 \leq ||\bA-\bB||_2$ and $||\bA-\bA_k||_F \leq ||\bA-\bB||_F$, measured by the spectral norm and the Frobenius norm, respectively. And the distance between $\bA$ and $\bA_k$ is given by
$$
||\bA-\bA_k||_F = \sqrt{\sigma_{k+1}^2+\ldots +\sigma_{r}^2},
$$
or
$$
||\bA-\bA_k||_2 = \sigma_{r}.
$$
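To make the statements above concrete, here is a small numerical sketch using NumPy (the $6\times 4$ test matrix is an arbitrary assumption, not from the text): the truncated SVD is formed by keeping the $k$ largest singular triplets, and its errors match the two formulas.

```python
import numpy as np

# Arbitrary test matrix (any matrix works here).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

# Reduced SVD: U is 6x4, s holds the singular values in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
# Truncated SVD: keep the k largest singular triplets.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Spectral-norm error equals sigma_{k+1}; Frobenius-norm error equals
# sqrt(sigma_{k+1}^2 + ... + sigma_r^2).
err_2 = np.linalg.norm(A - A_k, 2)
err_F = np.linalg.norm(A - A_k, 'fro')
print(np.isclose(err_2, s[k]))                       # True
print(np.isclose(err_F, np.sqrt(np.sum(s[k:]**2))))  # True
```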
\subsection*{\textbf{Subset Selection Problem}}
Subset selection is a method for selecting a subset of columns from a real matrix so that the subset represents the entire matrix well and is far from being rank deficient. (DI1) above is just an example of column selection with all $r$ linearly independent columns. Given an integer $k<r$, however, we attempt to find the $k$ linearly independent columns that best represent the information in the matrix. The mathematical formulation of the subset selection problem is: determine a permutation matrix $\bP$ such that
$$
\bA\bP = [\bA_1, \bA_2],
$$
where $\bA_1\in \real^{m\times k}$ contains $k$ linearly independent columns of $\bA$ such that
\begin{enumerate}
\item The smallest singular value is as large as possible (similar to SVD!). That is, there exists a value $\eta$ such that the $k$-th singular value of $\bA_1$ is bounded by that of $\bA$:
$$
\frac{\sigma_k(\bA)}{\eta} \leq \sigma_k(\bA_1) \leq \sigma_k(\bA).
$$
\item The remaining $n-k$ redundant columns in $\bA_2$ are well represented by the $k$ columns of $\bA_1$. That is,
$$
\mathop{\min}_{\bW \in \real^{k\times (n-k)}} ||\bA_1\bW-\bA_2||_2
$$
is small, i.e., there exists a value $\eta$ such that the distance is bounded by the $(k+1)$-th singular value of $\bA$:
$$
\sigma_{k+1}(\bA) \leq \mathop{\min}_{\bW \in \real^{k\times (n-k)}} ||\bA_1\bW-\bA_2||_2 \leq \eta \sigma_{k+1}(\bA).
$$
\end{enumerate}
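As an illustration of the second requirement, the following NumPy sketch selects $k$ columns by a greedy pivoting heuristic (a hand-rolled stand-in for column-pivoted QR; the test matrix and the greedy rule are assumptions for illustration) and checks the lower bound $\sigma_{k+1}(\bA)\leq \min_{\bW}||\bA_1\bW-\bA_2||_2$, which holds for any column selection:

```python
import numpy as np

# Arbitrary test matrix and target number of columns.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 6))
k = 3

# Greedy column pivoting (the idea behind column-pivoted QR, hand-rolled
# here): repeatedly pick the column with the largest residual norm after
# projecting out the columns already chosen.
R = A.copy()
chosen = []
for _ in range(k):
    j = int(np.argmax(np.linalg.norm(R, axis=0)))
    chosen.append(j)
    q = R[:, j] / np.linalg.norm(R[:, j])
    R = R - np.outer(q, q @ R)      # project the pivot direction out

A1 = A[:, chosen]
A2 = A[:, [j for j in range(A.shape[1]) if j not in chosen]]

# Best least-squares reconstruction of the remaining columns by A1.
W, *_ = np.linalg.lstsq(A1, A2, rcond=None)
resid = np.linalg.norm(A1 @ W - A2, 2)
sigma = np.linalg.svd(A, compute_uv=False)
# The lower bound sigma_{k+1}(A) <= min_W ||A1 W - A2||_2 always holds.
print(sigma[k] <= resid + 1e-10)    # True
```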
\newpage
\chapter{CR Decomposition}\label{section:cr-decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{CR Decomposition}
The CR decomposition is proposed in \citep{strang2021every, stranglu}. As usual, we first give the result and then discuss the existence and origin of this decomposition in the following sections.
\begin{theoremHigh}[CR Decomposition]\label{theorem:cr-decomposition}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\underset{m\times n}{\bA} = \underset{m\times r}{\bC} \gap \underset{r\times n}{\bR}
$$
where $\bC$ consists of the first $r$ linearly independent columns of $\bA$, and $\bR$ is an $r\times n$ matrix that reconstructs the columns of $\bA$ from the columns of $\bC$. In particular, $\bR$ is the reduced row echelon form (RREF) of $\bA$ without the zero rows.
The storage for the decomposition is then reduced (or potentially increased) from $mn$ to $r(m+n)$.
\end{theoremHigh}
\section{Existence of the CR Decomposition}
Since matrix $\bA$ is of rank $r$, there exist $r$ linearly independent columns in $\bA$. We then choose $r$ linearly independent columns from $\bA$ and put them into $\bC$:
\begin{tcolorbox}[title={Find $r$ linearly Independent Columns From $\bA$}]
1. If column 1 of $\bA$ is not zero, put it into the column of $\bC$;
2. If column 2 of $\bA$ is not a multiple of column 1, put it into the column of $\bC$;
3. If column 3 of $\bA$ is not a combination of columns 1 and 2, put it into the column of $\bC$;
4. Continue this process until we find $r$ linearly independent columns (or all the linearly independent columns if we do not know the rank $r$ beforehand).
\end{tcolorbox}
When we have the $r$ linearly independent columns from $\bA$, we can prove the existence of CR decomposition by the column space view of matrix multiplication.
\paragraph{Column space view of matrix multiplication} The product of two matrices $\bD\in \real^{m\times k}$ and $\bE\in \real^{k\times n}$ is $\bA=\bD\bE=\bD[\be_1, \be_2, \ldots, \be_n] = [\bD\be_1, \bD\be_2, \ldots, \bD\be_n]$, i.e., each column of $\bA$ is a combination of the columns of $\bD$.
\begin{proof}[of Theorem~\ref{theorem:cr-decomposition}]
As the rank of matrix $\bA$ is $r$ and $\bC$ contains $r$ linearly independent columns of $\bA$, the column space of $\bC$ equals the column space of $\bA$.
Hence any column $\ba_i$ of $\bA$ can be represented as a linear combination of the columns of $\bC$, i.e., there exists a vector $\br_i$ such that $\ba_i = \bC \br_i$ for all $i\in \{1, 2, \ldots, n\}$. Putting these $\br_i$'s into the columns of a matrix $\bR$, we obtain
$$
\bA = [\ba_1, \ba_2, \ldots, \ba_n] = [\bC \br_1, \bC \br_2, \ldots, \bC \br_n]= \bC \bR,
$$
from which the result follows.
\end{proof}
\section{Reduced Row Echelon Form (RREF)}\label{section:rref-cr}
In Section~\ref{section:gaussian-elimination} on Gaussian elimination, we introduced elimination matrices (lower triangular matrices) and permutation matrices to transform $\bA$ into an upper triangular form. We revisit Gaussian elimination for a $4\times 4$ square matrix, where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates a value that has just been changed:
\begin{tcolorbox}[title={Gaussian Elimination for a Square Matrix}]
$$
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bP_1}{\longrightarrow}
\begin{sbmatrix}{\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & 0 & \textcolor{blue}{\boxtimes} & \boxtimes \\
0 & \bm{0} & \bm{0} & \textcolor{blue}{\bm{\boxtimes}}
\end{sbmatrix}.
$$
\end{tcolorbox}
Furthermore, Gaussian elimination can also be applied to a rectangular matrix; we give an example for a $4\times 5$ matrix as follows:
\begin{tcolorbox}[title={Gaussian Elimination for a Rectangular Matrix}]
$$
\begin{sbmatrix}{\bA}
\textcolor{blue}{2} & \boxtimes & 10 & 9 & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & 10 & 9 & \boxtimes\\
\bm{0} & \bm{0} & \textcolor{blue}{\bm{5}} & \bm{6} & \bm{\boxtimes}\\
\bm{0} & \bm{0} & \bm{2} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}
\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & 10 & 9 & \boxtimes\\
0 & 0 & \textcolor{blue}{5} & 6 & \boxtimes\\
0 & 0 & \bm{0} & \textcolor{blue}{\bm{3}} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{0} & \bm{0}\\
\end{sbmatrix},
$$
\end{tcolorbox}
\noindent where the \textcolor{blue}{blue}-colored numbers are \textbf{pivots}, as we defined previously, and we call the last matrix above the \textbf{row echelon form}. Note that the 4-th row becomes a zero row in this specific example. Going further, we subtract from each row suitable multiples of the rows below it to make the entries above the pivots zero:
\begin{tcolorbox}[title={Reduced Row Echelon Form: Get Zero Above Pivots}]
$$
\begin{sbmatrix}{\bE_2\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & 10 & 9 & \boxtimes\\
0 & 0 & \textcolor{blue}{5} & 6 & \boxtimes\\
0 & 0 & 0 & \textcolor{blue}{3} & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
\end{sbmatrix}
\stackrel{\bE_3}{\longrightarrow}
\begin{sbmatrix}{\bE_3\bE_2\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & \bm{0} & \bm{-3} & \bm{\boxtimes} \\
0 & 0 & \textcolor{blue}{5} & 6 & \boxtimes\\
0 & 0 & 0 & \textcolor{blue}{3} & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
\end{sbmatrix}
\stackrel{\bE_4}{\longrightarrow}
\begin{sbmatrix}{\bE_4\bE_3\bE_2\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & 0 & \bm{0} & \bm{\boxtimes}\\
0 & 0 & \textcolor{blue}{5} & \bm{0} & \bm{\boxtimes}\\
0 & 0 & 0 & \textcolor{blue}{3} & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
\end{sbmatrix},
$$
\end{tcolorbox}
\noindent where $\bE_3$ subtracts 2 times the 2-nd row from the 1-st row, and $\bE_4$ adds the 3-rd row to the 1-st row and subtracts 2 times the 3-rd row from the 2-nd row. Finally, we obtain the full reduced row echelon form by making the pivots 1:
\begin{tcolorbox}[title={Reduced Row Echelon Form: Make The Pivots To Be 1}]
$$
\begin{sbmatrix}{\bE_4\bE_3\bE_2\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & 0 & 0 & \boxtimes\\
0 & 0 & \textcolor{blue}{5} & 0 & \boxtimes\\
0 & 0 & 0 & \textcolor{blue}{3} & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
\end{sbmatrix}
\stackrel{\bE_5}{\longrightarrow}
\begin{sbmatrix}{\bE_5\bE_4\bE_3\bE_2\bE_1\bA}
\textcolor{blue}{\bm{1}} & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{\boxtimes}\\
\bm{0} & \bm{0} & \textcolor{blue}{\bm{1}} & \bm{0} & \bm{\boxtimes}\\
\bm{0} & \bm{0} & \bm{0} & \textcolor{blue}{\bm{1}} & \bm{\boxtimes}\\
0 & 0 & 0 & 0 & 0\\
\end{sbmatrix},
$$
\end{tcolorbox}
\noindent where $\bE_5$ makes the pivots 1. Note that the transformation matrices $\bE_1, \bE_2, \ldots, \bE_5$ are not necessarily lower triangular as they are in the LU decomposition; they can also be permutation matrices or other matrices. We call this final matrix the \textbf{reduced row echelon form} of $\bA$: it has 1's as pivots and zeros above the pivots.
\begin{lemma}[Rank and Pivots]\label{lemma:rank-is-pivots}
The rank of $\bA$ is equal to the number of pivots.
\end{lemma}
\begin{lemma}[RREF in CR\index{Reduced row echelon form}]\label{lemma:r-in-cr-decomposition}
The reduced row echelon form of the matrix $\bA$ without zero rows is the matrix $\bR$ in the CR decomposition.
\end{lemma}
\begin{proof}[Informal Proof of Lemma~\ref{lemma:r-in-cr-decomposition} and Lemma~\ref{lemma:rank-is-pivots}]
Following the steps of Gaussian elimination, the number of pivot elements equals the number of linearly independent columns in matrix $\bA$, which is, in turn, exactly the rank of the matrix.
Following the example above, we have
$$
\bE_5\bE_4\bE_3\bE_2\bE_1\bA = \bR \quad \longrightarrow \quad \bA = (\bE_5\bE_4\bE_3\bE_2\bE_1)^{-1}\bR.
$$
We notice that columns 1, 3, 4 of $\bR$ each contain a single element 1 (the pivots), which means we can construct a matrix $\bC$ (exactly the column matrix in the CR decomposition) whose columns equal columns 1, 3, 4 of $\bA$, i.e., $\bC=[\ba_1, \ba_3, \ba_4]$. Furthermore, since the last row of $\bR$ is all zero, it does not enter any computation, so we can simply drop it (together with the corresponding 4-th column of $(\bE_5\bE_4\bE_3\bE_2\bE_1)^{-1}$). And since the pivots of $\bR$ are all 1, this $\bC$ is the only matrix that reconstructs columns 1, 3, 4 of $\bA$. We obtain
$$
\bA=\bC\bR.
$$
This completes the proof.
\end{proof}
In short, we first compute the reduced row echelon form $rref(\bA)$ of the matrix $\bA$. Then $\bC$ is obtained by removing from $\bA$ all the non-pivot columns (determined by looking for the columns of $rref(\bA)$ that do not contain a pivot), and $\bR$ is obtained by eliminating the zero rows of $rref(\bA)$. This is actually a special case of the \textbf{rank decomposition} of $\bA$; however, the CR decomposition is special in that it involves the reduced row echelon form, which is why we introduce it here separately.
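The whole procedure can be sketched in a few lines of NumPy (the `rref` helper below is a hypothetical minimal implementation with partial pivoting, written only for this illustration):

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduced row echelon form of A and its pivot-column indices,
    computed by Gaussian elimination with partial pivoting."""
    R = A.astype(float).copy()
    m, n = R.shape
    pivots, row = [], 0
    for col in range(n):
        if row == m:
            break
        p = row + int(np.argmax(np.abs(R[row:, col])))  # partial pivoting
        if abs(R[p, col]) < tol:
            continue                                    # no pivot in this column
        R[[row, p]] = R[[p, row]]                       # swap rows
        R[row] /= R[row, col]                           # make the pivot 1
        others = [i for i in range(m) if i != row]
        R[others] -= np.outer(R[others, col], R[row])   # zeros above and below
        pivots.append(col)
        row += 1
    return R, pivots

# Rank-2 example: column 3 equals column 1 plus column 2.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
R0, pivots = rref(A)
C = A[:, pivots]              # pivot columns of A
R = R0[:len(pivots), :]       # rref(A) without its zero rows
print(pivots)                 # [0, 1]
print(np.allclose(C @ R, A))  # True
```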
$\bR$ has a remarkable form: its $r$ columns containing the pivots form an $r\times r$ identity matrix. Note again that we can simply remove the zero rows from the reduced row echelon form to obtain this matrix $\bR$. In \citep{strang2021every}, the authors use the notation $\bR_0$ for the reduced row echelon form without the zero rows removed:
$$
\bR_0 = rref(\bA)=
\begin{bmatrix}
\bR \\
\bzero
\end{bmatrix}=
\begin{bmatrix}
\bI_r & \bF \\
\bzero & \bzero
\end{bmatrix}\bP, \footnote{A permutation matrix $\bP$ on the right side of a matrix permutes the columns of that matrix.}
$$
where the $n\times n$ permutation matrix $\bP$ puts the columns of $r\times r$ identity matrix $\bI_r$ into the correct positions, matching the first $r$ linearly independent columns of the original matrix $\bA$.
Previously we proved the important theorem in linear algebra that the row rank equals the column rank of any matrix by the UTV framework (Theorem~\ref{lemma:equal-dimension-rank}, p.~\pageref{lemma:equal-dimension-rank}).
The CR decomposition also provides a proof of this theorem.
\begin{proof}[\textbf{of Theorem~\ref{lemma:equal-dimension-rank}, p.~\pageref{lemma:equal-dimension-rank}, A Second Way}]
For the CR decomposition $\bA=\bC\bR$, we have $\bR = [\bI_r, \bF ]\bP$, where $\bP$ is an $n\times n$ permutation matrix that puts the columns of the $r\times r$ identity matrix $\bI_r$ into the correct positions, as shown above. Because $\bR$ contains the nonsingular submatrix $\bI_r$, its $r$ rows are linearly independent, so the row rank of $\bR$ is $r$.
Firstly, from the definition of the CR decomposition, the $r$ columns of $\bC$ are $r$ linearly independent columns of $\bA$, so the column rank of $\bA$ is $r$. Further,
$\bullet$ Since $\bA=\bC\bR$, all rows of $\bA$ are combinations of the rows of $\bR$. That is, the row rank of $\bA$ is no larger than the row rank of $\bR$;
$\bullet$ From $\bA=\bC\bR$, we also have $(\bC^\top\bC)^{-1}\bC^\top\bC\bR = (\bC^\top\bC)^{-1}\bC^\top\bA$, that is, $\bR = (\bC^\top\bC)^{-1}\bC^\top\bA$; here $\bC^\top\bC$ is nonsingular since $\bC$ has full column rank $r$. Then all rows of $\bR$ are also combinations of the rows of $\bA$. That is, the row rank of $\bR$ is no larger than the row rank of $\bA$;
$\bullet$ By ``sandwiching", the row rank of $\bA$ is equal to the row rank of $\bR$ which is $r$.
Therefore, both the row rank and column rank of $\bA$ are equal to $r$ from which the result follows.
\end{proof}
In the proof above, we use CR decomposition to show that the row rank of a matrix is equal to its column rank. An elementary proof without using CR decomposition or Gaussian elimination is provided in Appendix~\ref{append:row-equal-column} (p.~\pageref{append:row-equal-column}). Moreover, we also discuss the special form of pseudo-inverse from CR decomposition in Appendix~\ref{appendix:pseudo-inverse} (p.~\pageref{appendix:pseudo-inverse}).
\section{Computing the CR Decomposition via the Gaussian Elimination}
The central step in computing the CR decomposition is to find the reduced row echelon form of the matrix $\bA$. Suppose $\bA$ is of size $m\times n$:
$\bullet$ Get row echelon form (REF)\index{Row echelon form}:
A. Use row 1 of $\bA$ to make the values below $\bA_{11}$ zero in the first column (a permutation is involved if $\bA_{11}=0$), and put the result into $\bR$ ($2(m-1)n+(m-1)$ flops);
B. Use row 2 of the resulting matrix $\bR$ to make the values below $\bR_{22}$ zero (a permutation is involved if $\bR_{22}=0$) ($2(m-2)(n-1)+(m-2)$ flops);
C. Continue this process until we reach the row echelon form.
$\bullet$ Get the reduced row echelon form (RREF)\index{Row reduced echelon form}:
D. Use the last nonzero row to make the values above its pivot zero;
E. Use the penultimate nonzero row to make the values above its pivot zero;
F. Continue this process until we reach the reduced row echelon form, and divide each row to make its pivot 1. Note that there are $m-1$ such steps if $m\leq n$ and $n-1$ such steps if $m>n$.
This process is formulated in Algorithm~\ref{alg:cr-decomposition-alg}.
\begin{algorithm}[H]
\caption{CR Decomposition via the Gaussian Elimination}
\label{alg:cr-decomposition-alg}
\begin{algorithmic}[1]
\Require
rank-$r$ matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ with size $m\times n $;
\State Initially set $\bR=\bA$;
\State If $\bA_{11}=0$, permute the rows so that the pivot of the first column is nonzero; \Comment{$0$ flops}
\For{$k=1$ to $m-1$}
\State Use row $k$ of $\bR$ to make the values under $\bR_{kk}$ to be 0 (permutation involved if $\bR_{kk}=0$);
\EndFor
\For{($k=m$ to $2$ if $m\leq n$), or ($k=n$ to $2$ if $m > n$)}
\State Use row $k$ to make the values above its pivot zero (skip if the row is all zero);
\EndFor
\State Make the pivots 1 by dividing each nonzero row by its pivot;
\State Set $\bC$ to the columns of $\bA$ corresponding to the pivot columns of $\bR$, and remove the zero rows of $\bR$.
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Algorithm Complexity: CR via the Gaussian Elimination\index{Gaussian elimination complexity}]\label{theorem:cr-decomposition-alg}
Algorithm~\ref{alg:cr-decomposition-alg} requires
$$
\text{cost}=\left\{
\begin{aligned}
&\sim 2m^2n-m^3 \text{ flops}, \qquad &\text{if }&m\leq n; \\
&\sim mn^2 \text{ flops},\qquad &\text{if }&m>n.
\end{aligned}
\right.
$$
to compute the CR decomposition of an $m\times n$ matrix.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:cr-decomposition-alg} ]
Procedure A in the above discussion needs $m-1$ divisions to get the multipliers and it takes $m-1$ times (rows) $n$ multiplications with the multipliers. To make the values under $\bA_{11}$ to be zero, it involves $m-1$ times $n$ subtractions. As a result, procedure A costs \fbox{$2(m-1)n+(m-1)$} flops.
Procedure B needs $m-2$ divisions to get the multipliers and it takes $m-2$ times (rows) $n-1$ multiplications with the multipliers. To make the values under $\bR_{22}$ to be zero, it involves $m-2$ times $n-1$ subtractions. As a result, procedure B costs \fbox{$2(m-2)(n-1)+(m-2)$} flops.
The procedure can go on, and we can thus summarize the costs for each loop of step 4 in the following table:
\begin{table}[H]
\begin{tabular}{l|l|l|l}
$k$ & Get multipliers & Multipliers multiply each row & Rows subtraction \\ \hline
1 & $2:m=m-1$ rows & $(m-1)(n)$ & $(m-1)(n)$ \\
2 & $3:m=m-2$ rows & $(m-2)(n-1)$ & $(m-2)(n-1)$ \\
3 & $4:m=m-3$ rows & $(m-3)(n-2)$ & $(m-3)(n-2)$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
$k$ & $k+1:m=m-k$ rows & $(m-k)(n-k+1)$ & $(m-k)(n-k+1)$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
$m-1$ & $m:m=1$ row & $(1)(n-m+2)$ & $(1)(n-m+2)$
\end{tabular}
\end{table}
We notice that the $n-m+2$ in the last row of the above table may not be positive, so we separate the discussion of the complexity of computing the row echelon form (REF) into two cases.
\textbf{Get REF, case 1}: $m\leq n$, the procedure can go on until the loop $m-1$.
Thus in step 4 to get the row echelon form, we need
$$\sum_{i=1}^{m-1}\left[ 2(m-i)(n-i+1) +(m-i)\right] =\sum_{i=1}^{m-1} \left[2i^2-(2m+2n+3)i+(2mn+3m)\right],
$$
or \fbox{$m^2n-\frac{1}{3}m^3$} flops if we keep only the leading terms.
\textbf{Get REF, case 2}: $m>n$, the procedure must stop at loop $n$; otherwise, $n-k+1$ would be zero for $k=n+1$. Thus, in step 4 to get the row echelon form, we need
$$
\sum_{i=1}^{n} \left[2(m-i)(n-i+1) +(m-i)\right]=\sum_{i=1}^{n} \left[2i^2-(2m+2n+3)i+(2mn+3m)\right],
$$
or \fbox{$mn^2-\frac{1}{3}n^3$} flops if we keep only the leading terms.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[RREF for $m\leq n$]{\label{fig:cr-crcomputation1}
\includegraphics[width=0.47\linewidth]{./imgs/crcomputation1.pdf}}
\quad
\subfigure[RREF for $m> n$]{\label{fig:cr-crcomputation2}
\includegraphics[width=0.47\linewidth]{./imgs/crcomputation2.pdf}}
\caption{Get RREF from row echelon form.}
\label{fig:cr-crcomputation12}
\end{figure}
To get the reduced row echelon form (RREF) from the row echelon form (REF), we again separate the discussion into two cases, $m\leq n$ and $m>n$, as shown in Figure~\ref{fig:cr-crcomputation12}, where blank entries indicate zeros, blue entries indicate values that are not necessarily zero, and the pivots lie on the diagonal in the ``worst" case.
\textbf{Get RREF, case 1}: $m\leq n$
Procedure D needs $m-1$ divisions to get the multipliers, and it takes $m-1$ times $n-m+1$ multiplications with the multipliers. Then, to get the zeros above the pivots, it involves $m-1$ times $n-m+1$ subtractions. As a result, procedure D costs \fbox{$2(m-1)(n-m+1)+m-1$} flops.
Procedure E needs $m-2$ divisions to get the multipliers, and it takes $m-2$ times $n-m+2$ multiplications with the multipliers. Then, to get the zeros above the pivots, it involves $m-2$ times $n-m+2$ subtractions. As a result, procedure E costs \fbox{$2(m-2)(n-m+2)+m-2$} flops.
The procedure can go on, and we can thus again summarize the cost for each loop in the following table:
\begin{table}[H]
\begin{tabular}{l|l|l|l}
$k$ & Get multipliers & Multipliers multiply each row & Rows subtraction \\ \hline
$m$ & $1:m-1=m-1$ rows & $(m-1)(n-m+1)$ & $(m-1)(n-m+1)$ \\
$m-1$ & $1:m-2=m-2$ rows & $(m-2)(n-m+2)$ & $(m-2)(n-m+2)$ \\
$m-2$ & $1:m-3=m-3$ rows & $(m-3)(n-m+3)$ & $(m-3)(n-m+3)$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
$k$ & $1:k-1=k-1$ rows & $(k-1)(n-k+1)$ & $(k-1)(n-k+1)$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
$2$ & $1:1=1$ row & $(1)(n-2+1)$ & $(1)(n-2+1)$
\end{tabular}
\end{table}
In the second loop, to get the reduced row echelon form from the row echelon form, we need
$$
\begin{aligned}
\sum_{k=2}^{m} \left[2(k-1)(n-k+1)+k-1\right]&=\sum_{i=1}^{m-1} \left[2i(n-i)+i\right] \qquad (i=k-1)\\
&= \sum_{i=1}^{m-1} \left[-2i^2+(2n+1)i\right],
\end{aligned}
$$
or \fbox{$-\frac{2}{3}m^3+m^2n$} flops if we keep only the leading terms.
\textbf{Get RREF, case 2}: $m>n$
Procedure D needs $n-1$ divisions to get the multipliers and it takes $n-1$ times $1$ multiplications with the multipliers. Then to get the zeros above the pivots, it involves $n-1$ times $1$ subtractions. As a result, procedure D costs \fbox{$2(n-1)(1)+n-1$} flops.
Procedure E needs $n-2$ divisions to get the multipliers and it takes $n-2$ times $2$ multiplications with the multipliers. Then to get the zeros above the pivots, it involves $n-2$ times $2$ subtractions. As a result, procedure E costs \fbox{$2(n-2)(2)+n-2$} flops.
The procedure can go on, and we can thus again summarize the cost for each loop in the following table:
\begin{table}[H]
\begin{tabular}{l|l|l|l}
$k$ & Get multipliers & Multipliers multiply each row & Rows subtraction \\ \hline
$n$ & $1:n-1=n-1$ rows & $(n-1)(1)$ & $(n-1)(1)$ \\
$n-1$ & $1:n-2=n-2$ rows & $(n-2)(2)$ & $(n-2)(2)$ \\
$n-2$ & $1:n-3=n-3$ rows & $(n-3)(3)$ & $(n-3)(3)$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
$k$ & $1:k-1=k-1$ rows & $(k-1)(n-k+1)$ & $(k-1)(n-k+1)$ \\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\
$2$ & $1:1=1$ row & $(1)(n-2+1)$ & $(1)(n-2+1)$
\end{tabular}
\end{table}
In the second loop, to get the reduced row echelon form from the row echelon form, we need
$$
\begin{aligned}
\sum_{k=2}^{n} \left[2(k-1)(n-k+1)+k-1\right]&=\sum_{i=1}^{n-1} \left[2i(n-i)+i\right] \qquad (i=k-1)\\
&= \sum_{i=1}^{n-1} \left[-2i^2+(2n+1)i\right],
\end{aligned}
$$
or \fbox{$\frac{1}{3}n^3$} flops if we keep only the leading terms.
\textbf{Total cost:}
Step 9 involves at most $mn$ flops (divisions) to make the pivots 1. So the total cost is:
$\bullet$ the total cost for $m\leq n$ is then $m^2n-\frac{1}{3}m^3-\frac{2}{3}m^3+m^2n = $\fbox{$2m^2n-m^3$} flops if we keep only the leading terms.
$\bullet$ the total cost for $m> n$ is then $mn^2-\frac{1}{3}n^3+\frac{1}{3}n^3 = $\fbox{$mn^2$} flops if we keep only the leading term.
To conclude, the total cost is
$$
\text{cost}=\left\{
\begin{aligned}
&2m^2n-m^3 \text{ flops}, \qquad &\text{if }m\leq n; \\
&mn^2 \text{ flops},\qquad &\text{if }m>n.
\end{aligned}
\right.
$$
And this completes the proof.
\end{proof}
In the CUR decomposition, we will use the Gram-Schmidt process to find the linearly independent columns of $\bA$, which has a complexity similar to that of this Gaussian elimination for the reduced row echelon form. The Gram-Schmidt process, however, is clearer from the perspective of the equations to be computed, i.e., we can write specific mathematical forms for the entries of the matrices.
\section{Rank Decomposition}
We previously mentioned that the CR decomposition is a special case of the rank decomposition. We now prove the existence of the rank decomposition rigorously in the following theorem.
\begin{theoremHigh}[Rank Decomposition]\label{theorem:rank-decomposition}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\underset{m\times n}{\bA }= \underset{m\times r}{\bD}\gap \underset{r\times n}{\bF},
$$
where $\bD \in \real^{m\times r}$ has rank $r$, and $\bF \in \real^{r\times n}$ also has rank $r$, i.e., $\bD,\bF$ have full rank $r$.
The storage for the decomposition is then reduced or potentially increased from $mn$ to $r(m+n)$.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:rank-decomposition}]
By ULV decomposition in Theorem~\ref{theorem:ulv-decomposition} (p.~\pageref{theorem:ulv-decomposition}), we can decompose $\bA$ by
$$
\bA = \bU \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV.
$$
Let $\bU_0 = \bU_{:,1:r}$ and $\bV_0 = \bV_{1:r,:}$, i.e., $\bU_0$ contains only the first $r$ columns of $\bU$, and $\bV_0$ contains only the first $r$ rows of $\bV$. Then, we still have $\bA = \bU_0 \bL\bV_0$, where $\bU_0 \in \real^{m\times r}$ and $\bV_0\in \real^{r\times n}$. This is also known as the reduced ULV decomposition, as shown in Figure~\ref{fig:ulv-comparison}. Setting either \{$\bD = \bU_0\bL$, $\bF =\bV_0$\} or \{$\bD = \bU_0$, $\bF =\bL\bV_0$\} yields such a rank decomposition.
\end{proof}
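A minimal numerical sketch of this construction, using the reduced SVD in place of the ULV decomposition (the $5\times 4$ rank-$2$ test matrix is an arbitrary assumption):

```python
import numpy as np

# Build a 5x4 matrix of rank 2 explicitly (an arbitrary example).
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10 * s[0]))   # numerical rank
D = U[:, :r] * s[:r]                # D = U_0 L with L = diag(s_1, ..., s_r)
F = Vt[:r, :]                       # F = V_0

print(r)                            # 2
print(np.allclose(D @ F, A))        # True
print(np.linalg.matrix_rank(D), np.linalg.matrix_rank(F))  # 2 2
```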
The rank decomposition is not unique. Indeed, by elementary transformations we have
$$
\bA =
\bE_1
\begin{bmatrix}
\bZ & \bzero \\
\bzero & \bzero
\end{bmatrix}
\bE_2,
$$
where $\bE_1 \in \real^{m\times m}$ and $\bE_2\in \real^{n\times n}$ represent elementary row and column operations respectively, and $\bZ\in \real^{r\times r}$ is nonsingular. The transformation is rather general, and there are many possible choices of $\bE_1,\bE_2,\bZ$. By a construction similar to the one shown in the proof above, each such form yields another rank decomposition.
Analogously, we can find such $\bD,\bF$ by the SVD, URV, CR, CUR, and many other decompositional algorithms. Moreover, any two rank decompositions are connected by the following lemma.
\begin{lemma}[Connection Between Rank Decompositions]\label{lemma:connection-rank-decom}
For any two rank decompositions of $\bA=\bD_1\bF_1=\bD_2\bF_2$, there exists a nonsingular matrix $\bP$ such that
$$
\bD_1 = \bD_2\bP
\qquad
\text{and}
\qquad
\bF_1 = \bP^{-1}\bF_2.
$$
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:connection-rank-decom}]
Since $\bD_1\bF_1=\bD_2\bF_2$, we have $\bD_1\bF_1\bF_1^\top=\bD_2\bF_2\bF_1^\top$. Since $rank(\bF_1\bF_1^\top)=rank(\bF_1)=r$, the $r\times r$ matrix $\bF_1\bF_1^\top$ has full rank and is thus nonsingular. This implies $\bD_1=\bD_2\bF_2\bF_1^\top(\bF_1\bF_1^\top)^{-1}$. Let $\bP=\bF_2\bF_1^\top(\bF_1\bF_1^\top)^{-1}$ such that $\bD_1=\bD_2\bP$; since $\bD_1$ has full column rank $r$, the matrix $\bP$ must be nonsingular. Substituting $\bD_1=\bD_2\bP$ into $\bD_1\bF_1=\bD_2\bF_2$ gives $\bD_2\bP\bF_1=\bD_2\bF_2$, and multiplying on the left by $(\bD_2^\top\bD_2)^{-1}\bD_2^\top$ (which exists since $\bD_2$ has full column rank) yields $\bP\bF_1=\bF_2$, i.e., $\bF_1 = \bP^{-1}\bF_2$.
\end{proof}
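The lemma and its proof can be checked numerically; the sketch below builds two different rank decompositions of the same matrix from its SVD (an assumed example, not from the text) and forms $\bP=\bF_2\bF_1^\top(\bF_1\bF_1^\top)^{-1}$ as in the proof:

```python
import numpy as np

# An arbitrary rank-2 matrix.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))

# Two different rank decompositions of the same A, both from the SVD.
U, s, Vt = np.linalg.svd(A)
D1, F1 = U[:, :2] * s[:2], Vt[:2, :]        # D1 F1 = A
D2, F2 = U[:, :2], s[:2, None] * Vt[:2, :]  # D2 F2 = A

# P = F2 F1^T (F1 F1^T)^{-1}, exactly as constructed in the proof.
P = F2 @ F1.T @ np.linalg.inv(F1 @ F1.T)
print(np.allclose(D1, D2 @ P))                 # True
print(np.allclose(F1, np.linalg.inv(P) @ F2))  # True
```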
To conclude this section, we present another form of the rank decomposition that will be useful for making the connection to tensor decompositions (Theorem~\ref{theorem:cp-decomp}, p.~\pageref{theorem:cp-decomp}).
\begin{theoremHigh}[Rank Decomposition, An Alternative Form]\label{theorem:rank-decomposition-alternative}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\underset{m\times n}{\bA }= \underset{m\times r}{\bD}\gap \underset{n\times r}{\bE^\top},
$$
where $\bD=[\bd_1, \bd_2, \ldots, \bd_r] \in \real^{m\times r}$ has rank $r$, and $\bE=[\be_1, \be_2, \ldots, \be_r] \in \real^{n\times r}$ also has rank $r$, i.e., $\bD,\bE$ have full rank $r$. Equivalently, the rank decomposition can be written as
$$
\bA = \sum_{i=1}^{r} \bd_i \be_i^\top.
$$
\end{theoremHigh}
\section{Application: Rank and Trace of an Idempotent Matrix}
The CR decomposition is quite useful for proving results about the rank of an idempotent matrix. See also the orthogonal projection in Appendix~\ref{appendix:orthogonal}.
\begin{lemma}[Rank and Trace of an Idempotent Matrix\index{Trace}]\label{lemma:rank-of-symmetric-idempotent2_tmp}
For any $n\times n$ idempotent matrix $\bA$ (i.e., $\bA^2 = \bA$), the rank of $\bA$ equals the trace of $\bA$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:rank-of-symmetric-idempotent2_tmp}]
Any $n\times n$ rank-$r$ matrix $\bA$ has CR decomposition $\bA = \bC\bR$, where $\bC\in\real^{n\times r}$ and $\bR\in \real^{r\times n}$ with $\bC, \bR$ having full rank $r$.
Then,
$$
\begin{aligned}
\bA^2 &= \bA, \\
\bC\bR\bC\bR &= \bC\bR, \\
\bR\bC\bR &=\bR, \\
\bR\bC &=\bI_r,
\end{aligned}
$$
where $\bI_r$ is the $r\times r$ identity matrix; the third line follows by multiplying on the left by $(\bC^\top\bC)^{-1}\bC^\top$, and the fourth by multiplying on the right by $\bR^\top(\bR\bR^\top)^{-1}$, both of which exist since $\bC$ and $\bR$ have full rank $r$. Thus
$$
trace(\bA) = trace(\bC\bR) =trace(\bR\bC) = trace(\bI_r) = r,
$$
which equals the rank of $\bA$. The equality $trace(\bC\bR) =trace(\bR\bC)$ follows from the invariance of the trace under cyclic permutations.
\end{proof}
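A quick numerical sketch of the lemma, using an orthogonal projection matrix (a standard example of an idempotent matrix; the $6\times 3$ factor is an arbitrary assumption):

```python
import numpy as np

# An orthogonal projection onto the column space of an arbitrary 6x3
# matrix X: P = X (X^T X)^{-1} X^T is idempotent with rank 3.
rng = np.random.default_rng(4)
X = rng.standard_normal((6, 3))
P = X @ np.linalg.inv(X.T @ X) @ X.T

print(np.allclose(P @ P, P))      # idempotent: True
print(round(np.trace(P)))         # trace = 3
print(np.linalg.matrix_rank(P))   # rank  = 3
```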
\section{Other Applications}
The CR decomposition or rank decomposition is essential for the proofs of many important theorems, such as the existence of the pseudo-inverse in Lemma~\ref{lemma:existence-of-pseudo-inverse} (p.~\pageref{lemma:existence-of-pseudo-inverse}) and finding the bases of the four subspaces in the fundamental theorem of linear algebra in Appendix~\ref{appendix:cr-decomposition-four-basis} (p.~\pageref{appendix:cr-decomposition-four-basis}).
The CR factorization can also be used for data interpretation
or to solve computational problems, such as least squares where a reduced linear system can be considered to remove redundant variables.
Readers will find the usage of the CR decomposition throughout the text.
\chapter{Eigenvalue Problem}\label{section:eigenvalue-problem}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Background}
The decompositional methods discussed in the above sections are all related to the eigenvalues and eigenvectors of matrices. Thus, the eigenvalue problem merits independent consideration.
In this section, we present some classical eigenvalue algorithms that are often useful for computing eigenvalue-related decompositions. We simplify the discussion by considering only matrices that are real and symmetric. When $\bA$ is real and symmetric, it can be factored (by the spectral theorem, Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}) into
\begin{equation*}
\bA = \bQ \bLambda \bQ^\top = \bQ \bLambda \bQ^{-1},
\end{equation*}
where the columns of $\bQ = [\bq_1, \bq_2, \ldots, \bq_n]$ are mutually orthonormal eigenvectors of $\bA$, and the entries of $\bLambda=diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$ are the corresponding eigenvalues of $\bA$, which are real. We suppose further that the eigenvalues are ordered by magnitude such that
$$
|\lambda_1| \geq |\lambda_2| \geq \ldots \geq |\lambda_n| \geq 0,
$$
where in some discussions strict inequality will be required, with special considerations. One further thing to note is that, since $\bA$ is diagonalizable, the eigenvectors $\{\bq_1, \bq_2, \ldots, \bq_n\}$ span the whole space $\real^n$, so any vector $\bv\in\real^n$ can be written as a combination of the eigenvectors
$$
\bv = x_1\bq_1 +x_2\bq_2+\ldots+x_n\bq_n, \qquad \text{for all}\gap \bv\in\real^n.
$$
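In NumPy this background can be reproduced directly with `numpy.linalg.eigh` (a sketch on an arbitrary symmetric test matrix; note that `eigh` returns eigenvalues in ascending order rather than by magnitude):

```python
import numpy as np

# An arbitrary real symmetric test matrix.
rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

# eigh returns real eigenvalues (in ascending order) and an orthonormal
# matrix of eigenvectors Q, so that A = Q diag(lam) Q^T.
lam, Q = np.linalg.eigh(A)
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))   # True
print(np.allclose(Q.T @ Q, np.eye(4)))          # True

# Any vector expands in the eigenvector basis: v = sum_i x_i q_i, x = Q^T v.
v = rng.standard_normal(4)
x = Q.T @ v
print(np.allclose(Q @ x, v))                    # True
```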
\section{Rate of Convergence}
Before we talk about specific algorithms for computing the eigenvalues, we need criteria to evaluate how fast the algorithms converge, as most of them are iterative methods. We define the convergence of a sequence as follows. Note that the $k$-th element of a sequence is denoted by a superscript in parentheses, e.g., $\bA^{(k)}$ denotes the $k$-th matrix in a sequence, and $\ba^{(k)}$ denotes the $k$-th vector in a sequence. \index{Rate of convergence}
\begin{definition}[Convergence of a Sequence]
Let $\alpha^{(1)}, \alpha^{(2)}, \ldots \in \real$ be an infinite sequence of scalars. Then $\alpha^{(k)}$ is said to converge to $\alpha^\star$ if
$$
\mathop{\lim}_{k\rightarrow \infty} |\alpha^{(k)} - \alpha^\star| = 0.
$$
Similarly, let $\balpha^{(1)}, \balpha^{(2)}, \ldots \in \real^n$ be an infinite sequence of vectors. Then $\balpha^{(k)}$ is said to converge to $\balpha^\star$ if
$$
\mathop{\lim}_{k\rightarrow \infty} ||\balpha^{(k)} - \balpha^\star|| = 0.
$$
\end{definition}
The convergence of a sequence of vectors or matrices relies on the choice of norm. One should note that by the equivalence of vector norms (Theorem~\ref{theorem:equivalence-vector-norm}, p.~\pageref{theorem:equivalence-vector-norm}), if a sequence of vectors converges in one norm, then it converges in all norms. This will prove important for the analysis of the convergence of eigenvectors in the sequel.
\begin{definition}[Linear Convergence]\label{definition:linear-convergence}
A sequence $\alpha^{(k)}$ with limit $\alpha^\star$ is \textit{linearly convergent} if there exists a constant \textcolor{blue}{$c\in (0,1)$} such that
$$
|\alpha^{(k+1)} - \alpha^\star| \leq c |\alpha^{(k)} - \alpha^\star|.
$$
In other words, the \textit{linearly convergent sequence} has the following property:
$$
\mathop{\lim}_{k\rightarrow \infty} \frac{|\alpha^{(k+1)} - \alpha^\star|}{|\alpha^{(k)} - \alpha^\star|} = c \in (0,1).
$$
\end{definition}
For example, the sequence $\alpha^{(k)} = 4+ (1/4)^k$ converges linearly to $\alpha^\star = 4$ since
$$
\mathop{\lim}_{k\rightarrow \infty} \frac{|\alpha^{(k+1)} - \alpha^\star|}{|\alpha^{(k)} - \alpha^\star|} = \frac{1}{4} \in (0,1).
$$
\begin{definition}[Superlinear Convergence]
A sequence $\alpha^{(k)}$ with limit $\alpha^\star$ is \textit{superlinearly convergent} if there exists a sequence of constants \textcolor{blue}{$c_k >0$ with $c_k \rightarrow 0$} such that
$$
|\alpha^{(k+1)} - \alpha^\star| \leq c_k |\alpha^{(k)} - \alpha^\star|.
$$
In other words, the \textit{superlinearly convergent sequence} has the following property:
$$
\mathop{\lim}_{k\rightarrow \infty} \frac{|\alpha^{(k+1)} - \alpha^\star|}{|\alpha^{(k)} - \alpha^\star|} =0.
$$
\end{definition}
For example, the sequence $\alpha^{(k)} = 4+\left(\frac{1}{k+4}\right)^{k+3}$ converges superlinearly to $\alpha^\star = 4$ since
$$
\mathop{\lim}_{k\rightarrow \infty} \frac{|\alpha^{(k+1)} - \alpha^\star|}{|\alpha^{(k)} - \alpha^\star|}=
\left(\frac{k+4}{k+5}\right)^{k+3} \frac{1}{k+5}
= 0.
$$
\begin{definition}[Quadratic Convergence]\label{definition:quadratic-convergence}
A sequence $\alpha^{(k)}$ with limit $\alpha^\star$ is \textit{quadratically convergent} if there exists a constant \textcolor{blue}{$c>0$} such that
$$
|\alpha^{(k+1)} - \alpha^\star| \leq c |\alpha^{(k)} - \alpha^\star|^2.
$$
In other words, the \textit{quadratically convergent sequence} has the following property:
$$
\mathop{\lim}_{k\rightarrow \infty} \frac{|\alpha^{(k+1)} - \alpha^\star|}{|\alpha^{(k)} - \alpha^\star|^2} = c .
$$
\end{definition}
For example, the sequence $\alpha^{(k)} = 4+ (1/4)^{2^k}$ converges quadratically to $\alpha^\star = 4$ since
$$
\mathop{\lim}_{k\rightarrow \infty} \frac{|\alpha^{(k+1)} - \alpha^\star|}{|\alpha^{(k)} - \alpha^\star|^2} = 1.
$$
\begin{definition}[Cubic Convergence]
A sequence $\alpha^{(k)}$ with limit $\alpha^\star$ is \textit{cubically convergent} if there exists a constant \textcolor{blue}{$c>0$} such that
$$
|\alpha^{(k+1)} - \alpha^\star| \leq c |\alpha^{(k)} - \alpha^\star|^3.
$$
In other words, the \textit{cubically convergent sequence} has the following property:
$$
\mathop{\lim}_{k\rightarrow \infty} \frac{|\alpha^{(k+1)} - \alpha^\star|}{|\alpha^{(k)} - \alpha^\star|^3} = c .
$$
\end{definition}
For example, the sequence $\alpha^{(k)} = 4+ (1/4)^{3^k}$ converges cubically to $\alpha^\star = 4$ since
$$
\mathop{\lim}_{k\rightarrow \infty} \frac{|\alpha^{(k+1)} - \alpha^\star|}{|\alpha^{(k)} - \alpha^\star|^3} = 1.
$$
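The four rates above can be observed numerically. Below is a minimal sketch in Python using the example sequences from this section (iteration counts are kept small to stay within double precision):

```python
# Example sequences from this section, all with limit alpha* = 4.
linear    = [4 + (1/4) ** k         for k in range(1, 8)]   # linear, c = 1/4
superlin  = [4 + (1/(k+4)) ** (k+3) for k in range(1, 8)]   # superlinear
quadratic = [4 + (1/4) ** (2 ** k)  for k in range(1, 5)]   # quadratic

# Linear: the error ratio |a^{(k+1)} - 4| / |a^{(k)} - 4| stays at c = 1/4.
ratios = [(linear[k+1] - 4) / (linear[k] - 4) for k in range(len(linear) - 1)]

# Superlinear: the same ratio tends to 0 as k grows.
sratios = [(superlin[k+1] - 4) / (superlin[k] - 4) for k in range(len(superlin) - 1)]

# Quadratic: the error is squared each step, so |a^{(k+1)} - 4| / |a^{(k)} - 4|^2 = 1.
qratios = [(quadratic[k+1] - 4) / (quadratic[k] - 4) ** 2 for k in range(len(quadratic) - 1)]
```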
\section{Eigenvalues as Optimization}
For real and symmetric matrix $\bA$, we consider the following constrained optimization
$$
\mathop{\max}_{\bx \in \real^n} \bx^\top\bA\bx, \qquad s.t., \quad ||\bx||_2=1.
$$
By forming the Lagrangian, the optimization can be transformed into
$$
\mathcal{L}(\bx, \lambda)= \bx^\top\bA\bx - \lambda\bx^\top\bx,
$$
where $\lambda$ is called the \textit{Lagrange multiplier}\index{Lagrange multiplier}. At the solution $\bx^\star$, the gradient of the Lagrangian must vanish:
$$
\nabla_{\bx} \mathcal{L}(\bx, \lambda) = 2\bA\bx-2\lambda\bx = \bzero.
$$
This implies $\bA\bx = \lambda\bx$ and shows that the optimal point $\bx^\star$ is an eigenvector of $\bA$, with $\lambda$ the corresponding eigenvalue.
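The Lagrangian argument can be checked numerically. The following sketch (a hypothetical example; the $5\times 5$ matrix is arbitrary) verifies with NumPy that the quadratic form $\bx^\top\bA\bx$ on the unit sphere attains the largest eigenvalue exactly at the corresponding eigenvector, and that random unit vectors never exceed it:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                    # a real symmetric test matrix

lam, Q = np.linalg.eigh(A)           # eigh: eigenvalues in ascending order
lam_max, q_max = lam[-1], Q[:, -1]

# The quadratic form at the top eigenvector equals the largest eigenvalue.
val_at_eigvec = q_max @ A @ q_max

# No random unit vector exceeds lam_max, consistent with the Lagrangian argument.
samples = []
for _ in range(1000):
    x = rng.standard_normal(5)
    x /= np.linalg.norm(x)
    samples.append(x @ A @ x)
```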
\section{Rayleigh Quotient}\index{Rayleigh quotient}
The Rayleigh quotient of a vector $\bx\in \real^n$ associated with the matrix $\bA$ is the scalar given by the quadratic form:
$$
r(\bx) = \frac{\bx^\top\bA\bx}{\bx^\top\bx} = \left(\frac{\bx}{||\bx||}\right)^\top \bA \left(\frac{\bx}{||\bx||}\right),
$$
where $\left(\frac{\bx}{||\bx||}\right)$ is a normalized vector. When $\bx$ is an eigenvector of $\bA$, $r(\bx)$ is the eigenvalue $\lambda$ associated with $\bx$. To see this, suppose $\bA\bx=\lambda\bx$; it follows that
$$\bx^\top\bA\bx = \lambda\bx^\top\bx \leadto \lambda = \frac{\bx^\top\bA\bx}{\bx^\top\bx}=r(\bx).
$$
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Function ``density" of $r(\bx)$]{\label{fig:project-rayleigh1}
\includegraphics[width=0.47\linewidth]{./imgs/rayleigh1.pdf}}
\quad
\subfigure[Function ``density" of $\nabla r(\bx)$]{\label{fig:project-rayleigh2}
\includegraphics[width=0.47\linewidth]{./imgs/rayleigh2.pdf}}
\caption{Function ``density" and contour plots (\textcolor{bluepigment}{blue}=low, \textcolor{canaryyellow}{yellow}=high) where in Figure~\ref{fig:project-rayleigh1}, the upper graph is the ``density", and the lower one is the projection of it (i.e., contour). The example is drawn by setting $\bA=\begin{bmatrix}
6 & 0\\ 0 & 2
\end{bmatrix}$ where the input $\bx=[x_1,x_2]^\top$ lies in $\real^2$. The eigenvalues are $6$ and $2$, where the corresponding eigenvectors lie in the lines of
$z\cdot \begin{bmatrix}
1 \\ 0
\end{bmatrix}$ and
$z\cdot \begin{bmatrix}
0 \\ 1
\end{bmatrix}$ with $z\in (-\infty, \infty)$ being a scalar.}
\label{fig:rayleigh-projection}
\end{figure}
However, the most important property of the Rayleigh quotient is that when $\bx$ is not an eigenvector of $\bA$, the scalar $r(\bx)$ acts most like an eigenvalue in the sense that the squared norm $||\bA\bx-r(\bx)\bx||^2$ is minimized over all scalar multipliers. To see this, suppose we want to find $\lambda$ such that $||\bA\bx-\lambda\bx||^2$ is minimized. Write out the expression as a quadratic in $\lambda$:
$$
||\bA\bx-\lambda\bx||^2 = \bx^\top\bx \lambda^2 - 2\bx^\top\bA\bx \lambda + \bx^\top\bA^\top\bA\bx.
$$
Since $\bx^\top\bx > 0$ for $\bx\neq\bzero$, the above quadratic is minimized by setting its derivative with respect to $\lambda$ to zero, which leads to $\lambda=r(\bx)$, i.e., the Rayleigh quotient of $\bx$. Therefore, the Rayleigh quotient is a natural eigenvalue estimate to consider if $\bx$ is close to, but not necessarily equal to, an eigenvector. To see this, it is reasonable to take the vector $\bx\in \real^n$ as an input variable and $r(\bx)$ as an output. Let $a = \bx^\top\bA\bx$ and $b=\bx^\top\bx$; the gradient of $r(\bx)$ with respect to $\bx$ is given by
$$
\nabla r(\bx) = \frac{b\,\nabla a - a\,\nabla b}{b^2} = \frac{\bx^\top\bx (\bA+\bA^\top)\bx - 2\bx^\top\bA\bx \bx}{||\bx||^4}. \qquad (\text{$\bA$ is any square matrix})
$$
If we further restrict $\bA$ to be symmetric as we will consider mostly in this section, the gradient of the Rayleigh quotient reduces to
$$
\nabla r(\bx) = \frac{2\bx^\top\bx \bA\bx - 2\bx^\top\bA\bx \bx}{||\bx||^4}
=\frac{2}{||\bx||^2}\left(\bA\bx - r(\bx)\bx\right).
\qquad (\text{$\bA$ is symmetric})
$$
We observe that the gradient is a zero vector if and only if $\bx$ is an eigenvector of $\bA$, in which case $r(\bx)$ is the corresponding eigenvalue. This finding has a geometric meaning: when viewing $\bx$ as the input variable, the \textit{stationary points} of the function $r(\bx)$ are exactly the eigenvectors of $\bA$, and the function values there are the corresponding eigenvalues (the eigenvectors of the intermediate eigenvalues are saddle points). An example of these stationary points is shown in Figure~\ref{fig:rayleigh-projection}, where $r(\bx)$ and $\nabla r(\bx)$ are drawn for the matrix $\bA=\begin{bmatrix}
6 & 0\\ 0 & 2
\end{bmatrix}$.
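Both properties of $r(\bx)$, namely that it returns the eigenvalue (with vanishing gradient) at an eigenvector, and that it minimizes $||\bA\bx-\lambda\bx||$ over $\lambda$ otherwise, can be verified in a few lines. A sketch with NumPy, using the $2\times 2$ example matrix from Figure~\ref{fig:rayleigh-projection}:

```python
import numpy as np

def rayleigh(A, x):
    """Rayleigh quotient r(x) = x^T A x / x^T x."""
    return (x @ A @ x) / (x @ x)

def rayleigh_grad(A, x):
    """Gradient 2/||x||^2 * (A x - r(x) x), valid for symmetric A."""
    return 2.0 / (x @ x) * (A @ x - rayleigh(A, x) * x)

A = np.array([[6.0, 0.0], [0.0, 2.0]])   # the example matrix from the figure

# At an eigenvector, the quotient is the eigenvalue and the gradient vanishes.
q = np.array([1.0, 0.0])

# At a non-eigenvector x, r(x) minimizes ||A x - lam x|| over scalars lam:
# scan a grid of lam values and locate the minimizer.
x = np.array([1.0, 1.0])
lams = np.linspace(-10.0, 10.0, 2001)
residuals = [np.linalg.norm(A @ x - lam * x) for lam in lams]
best = lams[int(np.argmin(residuals))]   # close to r(x)
```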
\section{Power Method, Inverse Power Method, and Rayleigh Quotient Method}
We start computing the eigenvalues with several partial methods, which compute the extremal eigenvalues of $\bA$, i.e., those of maximum or minimum magnitude.
\subsection{The Power Method}
The power method will produce a sequence $\bv^{(k)}$ that converges linearly to an eigenvector corresponding to the largest eigenvalue of $\bA$. To obtain the corresponding eigenvalue, the Rayleigh quotient can be utilized afterwards.
Our first attempt to find an eigenvector of $\bA$ assumes that the value of the largest eigenvalue (in magnitude), $\lambda_1$, is known. As we will see in this and the following sections, this ``theoretical" algorithm sheds light on the convergence analysis of the practical algorithms.
\begin{algorithm}[H]
\caption{Power Iteration (A Theoretical but Impossible One: For Convergence Analysis Only)}
\label{alg:power-iteration-theoretical}
\begin{algorithmic}[1]
\Require matrix $\bA\in \real^{n\times n}$ that is real and symmetric;
\State $\bv^{(0)}$= some vector with $||\bv^{(0)}||=1$;
\For{$k=1,2,\ldots$}
\State $\bw = \bA \bv^{(k-1)}$;
\State $\bv^{(k)} = \bw / \textcolor{blue}{\lambda_1}$;
\State $\lambda^{(k)} = \frac{(\bv^{(k)})^\top\bA\bv^{(k)}}{(\bv^{(k)})^\top\bv^{(k)}}$; \Comment{i.e., Rayleigh quotient}
\EndFor
\end{algorithmic}
\end{algorithm}
Suppose the eigenvalue $\lambda_1$ with largest magnitude is given up front; Algorithm~\ref{alg:power-iteration-theoretical} then provides an iterative way to find the corresponding eigenvector.
Write $\bv^{(0)}$ as a linear combination of the orthonormal eigenvectors $\{\bq_1, \bq_2, \ldots, \bq_n\}$ (since they span the whole space $\real^n$):
\begin{equation}\label{equation:power-inprac-eq1}
\begin{aligned}
\bv^{(0)} = x_1 \bq_1 + x_2\bq_2 + \ldots + x_n\bq_n = \bQ\bx \\
\leadto \bx = \bQ^{-1}\bv^{(0)},
\end{aligned}
\end{equation}
where $\bx = [x_1, x_2, \ldots, x_n]^\top$.
Since matrix $\bA\in \real^{n\times n}$ is real and symmetric, the $k$-th power of $\bA$ is given by
\begin{equation}\label{equation:power-inprac-eq2}
\bA^k = \bQ \bLambda^k \bQ^{-1} \leadto
\left\{\begin{aligned}
\bA^k \bQ &= \bQ\bLambda^k;\\
\bA^k \bq_i &= \lambda_i^k \bq_i,\\
\end{aligned}
\right.
\end{equation}
where $\bA = \bQ \bLambda \bQ^{-1}$ is the spectral decomposition of $\bA$ (Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}; Remark~\ref{remark:power-spectral}, p.~\pageref{remark:power-spectral}).
Clearly, we can obtain the $k$-th element $\bv^{(k)}$ in the sequence by
$$
\begin{aligned}
\bv^{(k)} &= \bA\bv^{(k-1)} /\lambda_1 = \bA^k\bv^{(0)} / \lambda_1^k\\
&=1/ \lambda_1^k(x_1 \lambda_1^k\bq_1 + x_2\lambda_2^k\bq_2 + \ldots + x_n\lambda_n^k\bq_n) \qquad \text{(Equation~\eqref{equation:power-inprac-eq1} and \eqref{equation:power-inprac-eq2})} \\
&=x_1 \bq_1 + x_2\left(\frac{\lambda_2}{\lambda_1}\right)^k\bq_2 + \ldots + x_n\left(\frac{\lambda_n}{\lambda_1}\right)^k\bq_n.
\end{aligned}
$$
Then, it follows that
$$
\begin{aligned}
\bv^{(k)} - x_1\bq_1 &= \bA^k\bv^{(0)} / \lambda_1^k- x_1\bq_1\\
&= (\bQ \bLambda^k \bQ^{-1}) \bv^{(0)}/\lambda_1^k - x_1\bq_1 \\
&\stackrel{\bx = \bQ^{-1}\bv^{(0)}}{=\joinrel=} \bQ
\begin{bmatrix}
1 & & & \\
& \left(\frac{\lambda_2}{\lambda_1}\right)^k & & \\
& & \ddots & \\
& & & \left(\frac{\lambda_n}{\lambda_1}\right)^k
\end{bmatrix}
\bx- x_1\bq_1
= \bQ
\begin{bmatrix}
0 & & & \\
& \left(\frac{\lambda_2}{\lambda_1}\right)^k & & \\
& & \ddots & \\
& & & \left(\frac{\lambda_n}{\lambda_1}\right)^k
\end{bmatrix}
\bx.
\end{aligned}
$$
Therefore,
$$
\bQ^{-1}(\bv^{(k)} - x_1\bq_1) =
\begin{bmatrix}
0 & & & \\
& \left(\frac{\lambda_2}{\lambda_1}\right)^k & & \\
& & \ddots & \\
& & & \left(\frac{\lambda_n}{\lambda_1}\right)^k
\end{bmatrix}
\bx,
$$
and
$$
\bQ^{-1}(\bv^{(k+1)} - x_1\bq_1) =
\begin{bmatrix}
0 & & & \\
& \left(\frac{\lambda_2}{\lambda_1}\right)^{k+1} & & \\
& & \ddots & \\
& & & \left(\frac{\lambda_n}{\lambda_1}\right)^{k+1}
\end{bmatrix}
\bx
=
\begin{bmatrix}
0 & & & \\
& \left(\frac{\lambda_2}{\lambda_1}\right) & & \\
& & \ddots & \\
& & & \left(\frac{\lambda_n}{\lambda_1}\right)
\end{bmatrix}
\bQ^{-1}(\bv^{(k)} - x_1\bq_1).
$$
Define $||\bB||_{\bQ^{-1}}$ by $||\bQ^{-1} \bB||_2$ for matrix $\bB$,
this results in
\begin{equation}\label{equation:power-iteration-analysis}
\begin{aligned}
\bigg\Vert\bv^{(k+1)} - x_1\bq_1\bigg\Vert_{\bQ^{-1}} &= \bigg\Vert\bQ^{-1}(\bv^{(k+1)} - x_1\bq_1)\bigg\Vert_2\\
&\leq \left|\frac{\lambda_2}{\lambda_1}\right| \cdot \bigg\Vert\bQ^{-1}(\bv^{(k)} - x_1\bq_1)\bigg\Vert_2
= \left|\frac{\lambda_2}{\lambda_1}\right| \cdot \bigg\Vert\bv^{(k)} - x_1\bq_1\bigg\Vert_{\bQ^{-1}},
\end{aligned}
\end{equation}
where the inequality is from the matrix-vector inequality (Remark~\ref{remark:2norm-properties}, p.~\pageref{remark:2norm-properties}).
The above deduction shows that if we can prove that $||\bv||_{\bX^{-1}} = ||\bX^{-1}\bv||_2$ is a legitimate vector norm, i.e., that it satisfies the three criteria of a vector norm (Definition~\ref{definition:matrix-norm}, p.~\pageref{definition:matrix-norm}), then by the equivalence of vector norms (Theorem~\ref{theorem:equivalence-vector-norm}, p.~\pageref{theorem:equivalence-vector-norm}), the sequence $\bv^{(k)}$ in Algorithm~\ref{alg:power-iteration-theoretical} converges to $x_1\bq_1$ linearly (Definition~\ref{definition:linear-convergence}, p.~\pageref{definition:linear-convergence}). This is indeed the case.
However, the problem with Algorithm~\ref{alg:power-iteration-theoretical} is that $\lambda_1$ is unknown to us, making the algorithm impractical for computing the eigenvector. To tame this problem, we instead scale the iterate to unit length at each iteration. The resulting method is formulated in Algorithm~\ref{alg:power-iteration}, where the difference is shown in \textcolor{blue}{blue} text.
\begin{algorithm}[H]
\caption{Power Iteration (A Practical One, Compare to Algorithm~\ref{alg:power-iteration-theoretical})}
\label{alg:power-iteration}
\begin{algorithmic}[1]
\Require matrix $\bA\in \real^{n\times n}$ that is real and symmetric;
\State $\bv^{(0)}$= some vector with $||\bv^{(0)}||=1$;
\For{$k=1,2,\ldots$}
\State $\bw = \bA \bv^{(k-1)}$;
\State $\bv^{(k)} = \bw / \textcolor{blue}{||\bw||}$;
\State $\lambda^{(k)} = (\bv^{(k)})^\top\bA\bv^{(k)}$; \Comment{i.e., Rayleigh quotient}
\EndFor
\end{algorithmic}
\end{algorithm}
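Algorithm~\ref{alg:power-iteration} translates directly into code. A minimal NumPy sketch; the diagonal test matrix, seed, and iteration count are arbitrary illustrative choices:

```python
import numpy as np

def power_iteration(A, num_iters=200, seed=0):
    """Practical power iteration: normalize the iterate at each step and
    return the Rayleigh quotient together with the final iterate."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)                # ||v^{(0)}|| = 1
    for _ in range(num_iters):
        w = A @ v
        v = w / np.linalg.norm(w)         # v^{(k)} = w / ||w||
    lam = v @ A @ v                       # Rayleigh quotient
    return lam, v

A = np.diag([5.0, 2.0, 1.0])   # hypothetical test matrix: lambda_1 = 5, q_1 = e_1
lam, v = power_iteration(A)
```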
\paragraph{Convergence Analysis}
Clearly, $\bv^{(k)}$ is still a multiple of $\bA^k \bv^{(0)}$ such that $\bv^{(k)} = c_k \bA^k \bv^{(0)} = \frac{\bA^k \bv^{(0)}}{||\bA^k \bv^{(0)}||}$. We have
\begin{equation}\label{equation:converge-equa-of-power-ite}
\begin{aligned}
\bv^{(k)} &= c_k \bA^k\bv^{(0)} \\
&=c_k (x_1 \lambda_1^k\bq_1 + x_2\lambda_2^k\bq_2 + \ldots + x_n\lambda_n^k\bq_n) \\
&= c_k \lambda_1^k\left(x_1 \bq_1 + \underbrace{x_2\left(\frac{\lambda_2}{\lambda_1}\right)^k\bq_2 + \ldots + x_n\left(\frac{\lambda_n}{\lambda_1}\right)^k\bq_n}_{\by^{(k)}}\right).
\end{aligned}
\end{equation}
The sequence $\by^{(k)}$ in the above equation vanishes as $k\rightarrow \infty$ if $|\lambda_1| \textcolor{blue}{>} |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n| \geq 0$. Further, if $x_1\neq 0$, then as $k\rightarrow \infty$, the vector sequence $\bv^{(k)}$ converges to $\pm \bq_1$ where the sign is decided by $\lambda_1^k x_1$.
Since $\bv^{(k)}$ converges to $\pm \bq_1$, it follows that $c_k\lambda_1^k x_1 \rightarrow \pm 1$. Following the discussion in \citep{quarteroni2010numerical}, we here provide an improved result on the convergence of the eigenvector $\bv^{(k)}$, together with a lower bound, so that a statement in the form of linear convergence can be obtained (Definition~\ref{definition:linear-convergence}, p.~\pageref{definition:linear-convergence}).
This observation leads to the following theorem.
\begin{theorem}[Convergence of Power Iteration: Eigenvector]\label{theorem:power-convergence}
Suppose $\bA\in \real^{n\times n}$ is real and symmetric. Suppose further that the following two conditions are met
\begin{itemize}
\item $|\lambda_1| \textcolor{blue}{>} |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n| \geq 0$, i.e., $\lambda_1$ has algebraic multiplicity being equal to 1;
\item $\bq_1^\top \bv^{(0)} \neq 0$, i.e., the initial guess $\bv^{(0)}$ has a component in the direction of the eigenvector $\bq_1$ associated with the eigenvalue $\lambda_1$.
\end{itemize}
Then there exists a constant $c$ such that
$$
||\widetildebv^{(k)}-\bq_1|| \leq c\cdot \left|\frac{\lambda_2}{\lambda_1}\right|^k,
$$
where the sequence $\widetildebv^{(k)} = \frac{\bv^{(k)}}{c_k\lambda_1^k x_1} = \bq_1+ \sum_{i=2}^n \frac{x_i}{x_1}\left(\frac{\lambda_i}{\lambda_1}\right)^k \bq_i $.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:power-convergence}]
We notice that $||\widetildebv^{(k)}-\bq_1|| = \left\Vert\sum_{i=2}^n \frac{x_i}{x_1}\left(\frac{\lambda_i}{\lambda_1}\right)^k \bq_i\right\Vert$. Let $\bz=\left[0, \frac{x_2}{x_1}\left(\frac{\lambda_2}{\lambda_1}\right)^k, \frac{x_3}{x_1}\left(\frac{\lambda_3}{\lambda_1}\right)^k, \ldots, \frac{x_n}{x_1}\left(\frac{\lambda_n}{\lambda_1}\right)^k\right]^\top$ and $\bQ=[\bq_1,\bq_2, \bq_3, \ldots, \bq_n]$; it follows that
$$
\begin{aligned}
||\widetildebv^{(k)}-\bq_1|| &= ||\bQ\bz|| \stackrel{\star}{=} ||\bz|| \\
&=\left(\sum_{i=2}^{n}\left(\frac{x_i}{x_1}\right)^2\left(\frac{\lambda_i}{\lambda_1}\right)^{2k} \right)^{1/2}
\leq \left|\frac{\lambda_2}{\lambda_1}\right|^k
\left(\sum_{i=2}^{n}\left(\frac{x_i}{x_1}\right)^2 \right)^{1/2},
\end{aligned}
$$
where the inequality comes from the matrix-vector product inequality (Remark~\ref{remark:2norm-properties}, p.~\pageref{remark:2norm-properties}; or simply from the assumption $|\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n|$), and the equality ($\star$) comes from the fact that length is preserved under orthogonal transformations (similar to Lemma~\ref{lemma:frobenius-orthogonal-equi}, p.~\pageref{lemma:frobenius-orthogonal-equi}). Let $c=\left(\sum_{i=2}^{n}\left(\frac{x_i}{x_1}\right)^2 \right)^{1/2}$; the result follows. Note that in the discussion of \citep{quarteroni2010numerical}, the equality ($\star$) is replaced by an inequality from the matrix-vector product inequality (Remark~\ref{remark:2norm-properties}, p.~\pageref{remark:2norm-properties}), i.e., $||\bQ\bz||\leq ||\bQ||\cdot ||\bz||$; the problem with this is that it does not yield a lower bound. The same equation also tells us (by the assumption $|\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n|$):
$$
\begin{aligned}
||\widetildebv^{(k)}-\bq_1|| &= ||\bQ\bz|| \stackrel{\star}{=} ||\bz|| \\
&=\left(\sum_{i=2}^{n}\left(\frac{x_i}{x_1}\right)^2\left(\frac{\lambda_i}{\lambda_1}\right)^{2k} \right)^{1/2}\geq \left|\frac{\lambda_n}{\lambda_1}\right|^k
\left(\sum_{i=2}^{n}\left(\frac{x_i}{x_1}\right)^2 \right)^{1/2}.
\end{aligned}
$$
Therefore, a bound on the convergence is given by
$$
\left.
\begin{aligned}
||\widetildebv^{(k+1)}-\bq_1||
&\leq c\cdot \left|\frac{\lambda_2}{\lambda_1}\right|^{k+1}\\
||\widetildebv^{(k)}-\bq_1||
&\geq c\cdot \left|\frac{\lambda_n}{\lambda_1}\right|^k
\end{aligned}
\right\}
\leadto
\frac{||\widetildebv^{(k+1)}-\bq_1||}{||\widetildebv^{(k)}-\bq_1||}
\leq
\left|
\frac{\lambda_2^{k+1}}{\lambda_1 \cdot \lambda_n^k}
\right|.
$$
However, since we do not know the values of $\lambda_n, \lambda_2, \lambda_1$, the above bound can explode; only when $|\lambda_1| \gg |\lambda_2|$ will it be tight.
\end{proof}
Going further, following the discussion in \citep{golub2013matrix}, we here provide an improved result on the convergence of the eigenvalue sequence $\lambda^{(k)}$, with a tighter bound showing that $\lambda^{(k)}$ converges to $\lambda_1$ \textbf{quadratically} with respect to the ratio $\left|\frac{\lambda_2}{\lambda_1}\right|$.
\begin{theorem}[Convergence of Power Iteration: Eigenvalue]\label{Theorem:power-convergence-eigenvalue}
(Under the same condition as Theorem~\ref{theorem:power-convergence}) Suppose $\bA\in \real^{n\times n}$ is real and symmetric. Suppose further that the following two conditions are met
\begin{itemize}
\item $|\lambda_1| \textcolor{blue}{>} |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n| \geq 0$, i.e., $\lambda_1$ has algebraic multiplicity being equal to 1;
\item $\bq_1^\top \bv^{(0)} \neq 0$, i.e., the initial guess $\bv^{(0)}$ has a component in the direction of the eigenvector $\bq_1$ associated with the eigenvalue $\lambda_1$.
\end{itemize}
Then define $\theta_k \in [0, \pi/2]$ by
$$
c_k = \cos\theta_k = |\bq_1^\top \bv^{(k)}|.
$$
The quantity $c_k$ is well defined since $||\bq_1||=||\bv^{(k)}||=1$ implies $|\bq_1^\top \bv^{(k)}|\leq 1$, and the assumption $\bq_1^\top \bv^{(0)} \neq 0$ ensures $|\bq_1^\top \bv^{(k)}| > 0$.
Then the sequences $s_k=\sin \theta_k$ and $t_k = \tan \theta_k$ satisfy:
\begin{itemize}
\item Convergence of $s_k$:
\begin{itemize}
\item $|s_k| \leq \textcolor{blue}{s_0} \left|\frac{\lambda_2}{\lambda_1}\right|^k$;
\item $\text{or } |s_k| \leq \textcolor{blue}{t_0} \left|\frac{\lambda_2}{\lambda_1}\right|^k$;
\end{itemize}
\item Convergence of $\lambda^{(k)}$:
\begin{itemize}
\item
$
\begin{aligned}
&|\lambda^{(k)} - \lambda_1| \leq \mathop{\max}_{2\leq i \leq n} |\lambda_1-\lambda_i| \cdot \textcolor{blue}{s_0^2} \left( \frac{\lambda_2}{\lambda_1}\right)^{2k};
\end{aligned}
$
\item
$
\text{or } |\lambda^{(k)} - \lambda_1| \leq \mathop{\max}_{2\leq i \leq n} |\lambda_1-\lambda_i| \cdot \textcolor{blue}{t_0^2} \left( \frac{\lambda_2}{\lambda_1}\right)^{2k}.
$
\end{itemize}
\end{itemize}
\end{theorem}
\begin{proof}[of Theorem~\ref{Theorem:power-convergence-eigenvalue}]
Since $s_k^2 = 1-c_k^2 = 1- (\bq_1^\top \bv^{(k)})^2 = 1- \left(\frac{\bq_1^\top \bA^k \bv^{(0)}}{||\bA^k \bv^{(0)}||} \right)^2$ by Equation~\eqref{equation:converge-equa-of-power-ite}, where $\bv^{(0)}$ can be written as a linear combination of the orthonormal eigenvectors:
$$
\bv^{(0)} = x_1 \bq_1 + x_2\bq_2 + \ldots + x_n\bq_n = \bQ\bx \leadto \bx = \bQ^{-1}\bv^{(0)}.
$$
Since $||\bx||^2 = ||\bQ^{-1}\bv^{(0)}||^2$ and $\bv^{(0)}$ has unit length, the invariance of length under orthogonal transformations (similar to Lemma~\ref{lemma:frobenius-orthogonal-equi}, p.~\pageref{lemma:frobenius-orthogonal-equi}) indicates
$$
x_1^2+x_2^2+\ldots+x_n^2=||\bx||^2=1.
$$
And we have also shown in Equation~\eqref{equation:converge-equa-of-power-ite} that
$$
\bA^k \bv^{(0)} = x_1 \lambda_1^k\bq_1 + x_2\lambda_2^k\bq_2 + \ldots + x_n\lambda_n^k\bq_n.
$$
The above findings imply that
\begin{equation}\label{equation:power-lambda-1}
\begin{aligned}
s_k^2 &=1- \left(\frac{\bq_1^\top \bA^k \bv^{(0)}}{||\bA^k \bv^{(0)}||} \right)^2
=1- \left(\frac{x_1^2 \lambda_1^{2k}}{\sum_{i=1}^{n} x_i^2 \lambda_i^{2k}} \right)\\
&= \frac{\sum_{i=2}^{n} x_i^2 \lambda_i^{2k}}{\sum_{i=1}^{n} x_i^2 \lambda_i^{2k}}
\leq \textcolor{blue}{\frac{\sum_{i=2}^{n} x_i^2 \lambda_i^{2k}}{x_1^2 \lambda_1^{2k}}} \\
&=\frac{1}{x_1^2} \sum_{i=2}^{n} x_i^2 \left( \frac{\lambda_i}{\lambda_1}\right)^{2k}
\leq \frac{1}{x_1^2} \left( \sum_{i=2}^{n} x_i^2 \right) \left( \frac{\lambda_i}{\lambda_1}\right)^{2k}\\
&= \frac{1-x_1^2}{x_1^2}\left( \frac{\lambda_i}{\lambda_1}\right)^{2k} = t_0^2 \left( \frac{\lambda_i}{\lambda_1}\right)^{2k},
\end{aligned}
\end{equation}
where the last equality follows from the definition of $c_k$: when $k=0$, we have $c_0 = |\bq_1^\top\bv^{(0)}|=|x_1| \neq 0$, so that $t_0^2 = (1-x_1^2)/x_1^2$.
Therefore, the first result follows:
$$
|s_k| \leq t_0 \left|\frac{\lambda_2}{\lambda_1}\right|^k.
$$
The above result is exactly the one derived in \citep{golub2013matrix}; however, the step in blue gives a loose bound since $0<x_1^2 \leq 1$ under our assumption. An improved result is shown as follows (where the difference is marked in blue text):
\begin{equation}\label{equation:power-lambda-2}
\begin{aligned}
s_k^2 &=1- \left(\frac{\bq_1^\top \bA^k \bv^{(0)}}{||\bA^k \bv^{(0)}||} \right)^2
=1- \left(\frac{x_1^2 \lambda_1^{2k}}{\sum_{i=1}^{n} x_i^2 \lambda_i^{2k}} \right)\\
&= \frac{\sum_{i=2}^{n} x_i^2 \lambda_i^{2k}}{\sum_{i=1}^{n} x_i^2 \lambda_i^{2k}}
\leq \textcolor{blue}{\frac{\sum_{i=2}^{n} x_i^2 \lambda_i^{2k}}{\sum_{i=1}^{n} x_i^2 \lambda_1^{2k}}} =\frac{\sum_{i=2}^{n} x_i^2 \lambda_i^{2k}}{ \lambda_1^{2k}}\\
&= \sum_{i=2}^{n} x_i^2 \left( \frac{\lambda_i}{\lambda_1}\right)^{2k}
\leq \left( \sum_{i=2}^{n} x_i^2 \right) \left( \frac{\lambda_2}{\lambda_1}\right)^{2k}\\
&= (1-x_1^2)\left( \frac{\lambda_2}{\lambda_1}\right)^{2k} =s_0^2 \left( \frac{\lambda_2}{\lambda_1}\right)^{2k}.
\end{aligned}
\end{equation}
Therefore, the result follows that
$$
|s_k| \leq s_0 \left|\frac{\lambda_2}{\lambda_1}\right|^k.
$$
Further, since $\bv^{(k)}=\frac{\bA^k \bv^{(0)}}{||\bA^k \bv^{(0)}||}$ from Equation~\eqref{equation:converge-equa-of-power-ite}, we have
$$
\begin{aligned}
\lambda^{(k)} &= \bv^{(k)\top}\bA\bv^{(k)} =
\left(\frac{\bA^k \bv^{(0)}}{||\bA^k \bv^{(0)}||}\right)^\top
\bA
\left(\frac{\bA^k \bv^{(0)}}{||\bA^k \bv^{(0)}||}\right)\\
&=
\frac{\bv^{(0)\top} \bA^{2k+1}\bv^{(0)}}
{\bv^{(0)\top} \bA^{2k}\bv^{(0)}}
=
\frac{\sum_{i=1}^{n}x_i^2 \lambda_i^{2k+1}}{\sum_{i=1}^{n}x_i^2 \lambda_i^{2k}}.
\end{aligned}
$$
Therefore,
$$
\begin{aligned}
|\lambda^{(k)} - \lambda_1|
&=
\left|
\frac{\sum_{i=\textcolor{blue}{2}}^{n}x_i^2 \lambda_i^{2k}(\lambda_i-\lambda_1)}
{\sum_{i=1}^{n}x_i^2 \lambda_i^{2k}}
\right|
\leq \mathop{\max}_{2\leq i \leq n} |\lambda_1-\lambda_i|
\left(
\frac{\sum_{i=\textcolor{blue}{2}}^{n}x_i^2 \lambda_i^{2k}}
{\sum_{i=1}^{n}x_i^2 \lambda_i^{2k}}
\right)\\
&\leq \mathop{\max}_{2\leq i \leq n} |\lambda_1-\lambda_i| \cdot s_0^2 \left( \frac{\lambda_2}{\lambda_1}\right)^{2k},
\end{aligned}
$$
where the last inequality comes from Equation~\eqref{equation:power-lambda-1} or Equation~\eqref{equation:power-lambda-2} shown above.
\end{proof}
From the above deduction, we conclude the following convergence result:
\begin{theorem}[Convergence of Power Iteration]\label{theorem:convergence-power-iteration}
(Under the same condition as Theorem~\ref{theorem:power-convergence}) Suppose $\bA\in \real^{n\times n}$ is real and symmetric. Suppose further that the following two conditions are met
\begin{itemize}
\item $|\lambda_1| \textcolor{blue}{>} |\lambda_2| \geq |\lambda_3| \geq \ldots \geq |\lambda_n| \geq 0$, i.e., $\lambda_1$ has algebraic multiplicity being equal to 1;
\item $\bq_1^\top \bv^{(0)} \neq 0$, i.e., the initial guess $\bv^{(0)}$ has a component in the direction of the eigenvector $\bq_1$ associated to the eigenvalue $\lambda_1$.
\end{itemize}
Then the iterates of Algorithm~\ref{alg:power-iteration} satisfy
$$
||\bv^{(k)} - (\pm \bq_1)||= O\left(\left|\frac{\lambda_2}{\lambda_1}\right|^k\right), \qquad |\lambda^{(k)}-\lambda_1|=O\left(\left|\frac{\lambda_2}{\lambda_1}\right|^{2k}\right)
$$
as $k\rightarrow \infty$.
\end{theorem}
The $\pm$ sign of $\bq_1$ means that one of $\bq_1$ and $-\bq_1$ is taken in the result. This shows that $\bv^{(k)}$ converges linearly to $\pm\bq_1$ in Algorithm~\ref{alg:power-iteration}.
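The two rates in Theorem~\ref{theorem:convergence-power-iteration} can be observed numerically. In the sketch below (a hypothetical diagonal test matrix with $\left|\lambda_2/\lambda_1\right| = 1/2$), the per-step contraction of the eigenvector error approaches $1/2$, while that of the eigenvalue error approaches $1/4$:

```python
import numpy as np

A = np.diag([4.0, 2.0, 1.0])     # lambda_1 = 4, lambda_2 = 2: ratio 1/2
q1 = np.array([1.0, 0.0, 0.0])

v = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)   # x_1 != 0: valid initial guess
vec_errs, val_errs = [], []
for _ in range(15):
    w = A @ v
    v = w / np.linalg.norm(w)
    vec_errs.append(np.linalg.norm(v - q1))
    val_errs.append(abs(v @ A @ v - 4.0))

# Per-step contraction factors approach the predicted rates.
vec_rate = vec_errs[-1] / vec_errs[-2]   # tends to |lambda_2/lambda_1| = 0.5
val_rate = val_errs[-1] / val_errs[-2]   # tends to (lambda_2/lambda_1)^2 = 0.25
```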
\paragraph{Eigenvalue Assumptions}
The assumption $\bq_1^\top \bv^{(0)} \neq 0$ in the above algorithm requires the initial guess of the eigenvector to have a component in the direction of the eigenvector $\bq_1$ we want to find. Otherwise, the iteration will not converge to $\pm \bq_1$.
We also carefully notice that the equality sign does not appear in the assumption $|\lambda_1|>|\lambda_2|$, in which case $\lambda_1$ is known as the \textit{dominant} eigenvalue of matrix $\bA$. Nevertheless, when \{$|\lambda_1|\textcolor{blue}{=|}\lambda_2| \textcolor{blue}{>} |\lambda_3| \geq |\lambda_4| \geq \ldots $, $\bq_1^\top \bv^{(0)} \neq 0$, and $\bq_2^\top \bv^{(0)} \neq 0 $\}, $\bv^{(k)}$ will converge to a multiple of $x_1\bq_1 \pm x_2\bq_2$, i.e., a vector lying in the subspace spanned by $\{\bq_1, \bq_2\}$. To see this, consider the following cases:
\begin{enumerate}
\item $\lambda_1=\lambda_2$, i.e., the two dominant eigenvalues are coincident. Equation~\eqref{equation:converge-equa-of-power-ite} shows $\bv^{(k)}, \lambda^{(k)}$ converges to
$$
\left\{
\begin{aligned}
\bv^{(k)} &\stackrel{k\rightarrow \infty}{\longrightarrow} \bbeta_1= \frac{ x_1 \bq_1 + x_2\bq_2}{|| x_1 \bq_1 + x_2\bq_2||}\in span\{\bq_1, \bq_2\}
\qquad \text{i.e.,} \gap
\bA\bbeta_1 = \lambda_1 \bbeta_1,\\
\lambda^{(k)} &\stackrel{k\rightarrow \infty}{\longrightarrow} \bbeta_1^\top \bA\bbeta_1 = \lambda_1.
\end{aligned}
\right.
$$
Therefore, the vector sequence $\bv^{(k)}$ still converges to an eigenvector of $\bA$, which lies in the space spanned by $\{\bq_1,\bq_2\}$, and the scalar sequence $\lambda^{(k)}$ still converges to $\lambda_1=\lambda_2$. We carefully notice that when $\lambda_1 = \lambda_2$, any vector in $span\{\bq_1, \bq_2\}$ is an eigenvector of $\bA$.
\item $\lambda_1=-\lambda_2$, i.e., the two dominant eigenvalues are opposite. Equation~\eqref{equation:converge-equa-of-power-ite} shows that $\bv^{(k)}$ approaches
$$
\begin{aligned}
\bv^{(k)} &\stackrel{k\rightarrow \infty}{\longrightarrow} \bbeta_2= \frac{ \lambda_1^k x_1 \bq_1 + \lambda_2^k x_2\bq_2}{||\lambda_1^k x_1 \bq_1 + \lambda_2^k x_2\bq_2||}\in span\{\bq_1, \bq_2\},
\end{aligned}
$$
which alternates between two directions according to the parity of $k$.
Then, we have
$$
\left\{
\begin{aligned}
\bA\bbeta_2 &= \lambda_1 \frac{x_1\bq_1+x_2\bq_2}{||x_1\bq_1-x_2\bq_2||}, \qquad \text{when $k$ is odd;}\\
\bA\bbeta_2 &= \lambda_1 \frac{x_1\bq_1-x_2\bq_2}{||x_1\bq_1+x_2\bq_2||}, \qquad \text{when $k$ is even.}
\end{aligned}
\right.
$$
Therefore, $\bv^{(k)}$ does not converge to an eigenvector of $\bA$, nor does $\lambda^{(k)}$ converge to an eigenvalue. In this case, we observe that $\bA\bx=\lambda\bx \rightarrow \bA^2\bx=\lambda^2\bx$, so that the eigenvalues of $\bA^2$ are nonnegative, and $\lambda_1^2=\lambda_2^2$ is a repeated eigenvalue of $\bA^2$. Applying the power method to $\bA^2$ then converges to an eigenvector and eigenvalue of $\bA^2$, exactly as analyzed in case (1).
\item $\lambda_1=\bar{\lambda}_2$, i.e., the two dominant eigenvalues are complex conjugate (this can only occur for nonsymmetric matrices). The power method is not convergent \citep{wilkinson1971algebraic, quarteroni2010numerical} and we shall not give the details.
\end{enumerate}
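Case (2) above can be reproduced directly: with $\lambda_1 = -\lambda_2$, the iterates oscillate between two directions, while power iteration applied to $\bA^2$ converges as in case (1). A sketch under these assumptions:

```python
import numpy as np

A = np.diag([3.0, -3.0, 1.0])   # lambda_1 = 3 = -lambda_2: opposite dominant pair

v = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
iterates = []
for _ in range(6):
    w = A @ v
    v = w / np.linalg.norm(w)
    iterates.append(v.copy())
# The sign of the second component flips every step: v^{(k)} has no limit.
flip = iterates[-1][1] * iterates[-2][1]   # negative: consecutive signs differ

# Power iteration on A^2 = diag(9, 9, 1) converges as in case (1): the limit
# lies in span{q_1, q_2}, and the Rayleigh quotient tends to lambda_1^2 = 9.
A2 = A @ A
u = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
for _ in range(50):
    w = A2 @ u
    u = w / np.linalg.norm(w)
lam_sq = u @ A2 @ u
```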
\paragraph{What if $\bq_1^\top \bv^{(0)} = 0$?} All the convergence results are made under the assumption that $\bq_1^\top \bv^{(0)} \neq 0$. But since we do not know $\bq_1$ up front, this requirement cannot be verified in advance. When it happens that $\bq_1^\top \bv^{(0)} = 0$, Equation~\eqref{equation:converge-equa-of-power-ite} shows that the vector sequence $\bv^{(k)}$ converges to $\bq_2$, so that $\lambda^{(k)}\rightarrow \lambda_2$. Therefore, failing the requirement on the initial guess does not break the convergence of the power method, but the limit will be different.
\paragraph{Why do we start from the ``theoretical" one?} In the theoretical power method, Algorithm~\ref{alg:power-iteration-theoretical}, we assume $\lambda_1$ is known and show in Equation~\eqref{equation:power-iteration-analysis} that the vector sequence converges linearly to the eigenvector. This result matches the convergence of the ``practical" power method, Algorithm~\ref{alg:power-iteration}, as shown in Theorem~\ref{theorem:convergence-power-iteration}. The ``theoretical" one thus can be employed as a first analysis of the power method, and we shall shortly find its counterpart for the \textit{inverse power method} in the next section.
\subsection{The Inverse Power Method}
The \textit{power method} homes in on an eigenvector associated with the largest eigenvalue (in magnitude). In contrast, the \textit{inverse power method} homes in on an eigenvector associated with the smallest eigenvalue (in magnitude). To see this, we first provide a lemma on the eigenpairs of the inverse of a matrix.
\begin{lemma}[Eigenpair of Inverse Matrix]\label{lemma:inverse-eigenpair}
Suppose the matrix $\bA\in\real^{n\times n}$ is nonsingular, and $(\lambda, \bx)$ is an eigenpair of $\bA$. Then $(1/\lambda, \bx)$ is an eigenpair of $\bA^{-1}$.
\end{lemma}
Furthermore, we assumed previously that the matrix $\bA$ is real and symmetric. We have shown in Section~\ref{section:spectral-decomposition} that the rank of a real and symmetric matrix equals the number of its nonzero eigenvalues. Since the inverse power method involves the inverses of the eigenvalues, we suppose further that the matrix in this section is nonsingular, i.e., all the eigenvalues are nonzero and the matrix is invertible.
\begin{algorithm}[h]
\caption{Inverse Power Iteration (Compare to Algorithm~\ref{alg:power-iteration})}
\label{alg:inverse-power-iteration}
\begin{algorithmic}[1]
\Require \textcolor{blue}{nonsingular} matrix $\bA\in \real^{n\times n}$ that is real and symmetric;
\State $\bv^{(0)}$= some vector with $||\bv^{(0)}||=1$;
\For{$k=1,2,\ldots$}
\State $\bw = \textcolor{blue}{\bA^{-1}} \bv^{(k-1)}$;
\State $\bv^{(k)} = \bw / ||\bw||$;
\State $\lambda^{(k)} = (\bv^{(k)})^\top\bA\bv^{(k)}$; \Comment{i.e., Rayleigh quotient}
\EndFor
\end{algorithmic}
\end{algorithm}
The idea behind the algorithm is that an eigenvector associated with the smallest eigenvalue (in magnitude) of $\bA$ is an eigenvector
associated with the largest eigenvalue (in magnitude) of $\bA^{-1}$ by Lemma~\ref{lemma:inverse-eigenpair}.
From the power method, it follows immediately that the sequence $\bv^{(k)}$ in Algorithm~\ref{alg:inverse-power-iteration} converges linearly to the eigenvector of $\bA$ corresponding to the smallest eigenvalue $\lambda_n$ (equivalently, to the eigenvector of $\bA^{-1}$ corresponding to its largest eigenvalue $1/\lambda_n$).
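A minimal NumPy sketch of Algorithm~\ref{alg:inverse-power-iteration} follows (the symmetric test matrix is an illustrative choice); note that in practice one solves the linear system $\bA\bw = \bv^{(k-1)}$ rather than forming $\bA^{-1}$ explicitly.

```python
import numpy as np

# Inverse power iteration on an illustrative symmetric matrix; we solve
# A w = v instead of forming A^{-1} explicitly.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3)

v = np.ones(3) / np.sqrt(3.0)
for _ in range(100):
    w = np.linalg.solve(A, v)    # w = A^{-1} v
    v = w / np.linalg.norm(w)
lam = v @ A @ v                  # Rayleigh quotient: estimates lambda_n

print(lam)                       # close to 3 - sqrt(3), the smallest eigenvalue
```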
To reuse the analysis of the convergence of the power method, we can again employ the theoretical (but impractical) device of Algorithm~\ref{alg:power-iteration-theoretical}, where we assume $\lambda_1$ is known and the convergence result is given in Equation~\eqref{equation:power-iteration-analysis}. Returning to the inverse power method, suppose $\lambda_n$ is known so that a ``theoretical" inverse power method can be formulated; its convergence is then given by (analogous to Equation~\eqref{equation:power-iteration-analysis}):
\begin{equation}\label{equation:-inverse-power-iteration-analysis1}
\begin{aligned}
||\bv^{(k+1)} - x_n \bq_n||_{\bQ^{-1}} &= ||\bQ^{-1}(\bv^{(k+1)} - x_n\bq_n)||_2\\
&\leq \left|\frac{\lambda_n}{\lambda_{n-1}}\right| \cdot ||\bQ^{-1}(\bv^{(k)} - x_n\bq_n)||_2
= \left|\frac{\lambda_n}{\lambda_{n-1}}\right| \cdot ||\bv^{(k)} - x_n\bq_n||_{\bQ^{-1}},
\end{aligned}
\end{equation}
where we use the fact that if the spectral decomposition of $\bA$ is $\bA=\bQ\bLambda\bQ^{-1}$, then the spectral decomposition of $\bA^{-1}$ is $\bA^{-1} =\bQ\bLambda^{-1} \bQ^{-1}$. Since $\lambda_n, \lambda_{n-1}$ are the two smallest eigenvalues of $\bA$ (in magnitude), it is possible that they are very close to each other, in which case the bound $\left|\frac{\lambda_n}{\lambda_{n-1}}\right|$ is close to 1 and convergence is slow.
\subsection{The Shifted Inverse Power Method}
Now we suppose that we know that one of the eigenvalues of $\bA$ is close to a value $\mu\in \real$. \textcolor{blue}{For any value $\mu$ that is not an eigenvalue of $\bA$, $\bA-\mu\bI$ is nonsingular even if $\bA$ is singular}. \textit{The matrix $\bA-\mu\bI$ is referred to as the matrix $\bA$ that has been ``shifted" by $\mu$}, and $\mu$ is called a \textit{shift}.
The following lemma reveals that a shifted version of the inverse power method can be employed to find the eigenvector associated with the eigenvalue that is closest to $\mu$.
\begin{lemma}[Eigenpair of Shifted Matrix]\label{lemma:shifted-eigenpair}
Suppose $(\lambda, \bx)$ is an eigenpair of $\bA\in \real^{n\times n}$ and $\mu\in \real $ is not an eigenvalue of $\bA$. Then $(\lambda-\mu, \bx)$ is an eigenpair of $\bA-\mu\bI$.
Notice that the eigenvectors of $\bA-\mu\bI$ are the same as those of $\bA$, since $\bA\bx =\lambda\bx$ implies $(\bA-\mu\bI)\bx=(\lambda-\mu)\bx$.
\end{lemma}
\begin{algorithm}[h]
\caption{Shifted Inverse Power Iteration (Compare to Algorithm~\ref{alg:inverse-power-iteration})}
\label{alg:inverse-power-iteration-shifted}
\begin{algorithmic}[1]
\Require matrix $\bA\in \real^{n\times n}$ that is real and symmetric, \textcolor{blue}{$\mu$ is not an eigenvalue of $\bA$};
\State $\bv^{(0)}$= some vector with $||\bv^{(0)}||=1$;
\For{$k=1,2,\ldots$}
\State $\bw = \textcolor{blue}{(\bA-\mu\bI)^{-1}} \bv^{(k-1)}$;
\State $\bv^{(k)} = \bw / ||\bw||$;
\State $\lambda^{(k)} = (\bv^{(k)})^\top\bA\bv^{(k)}$; \Comment{i.e., Rayleigh quotient}
\EndFor
\end{algorithmic}
\end{algorithm}
The procedure is formulated in Algorithm~\ref{alg:inverse-power-iteration-shifted}. Reusing the convergence analysis once more, suppose $\mu$ is closest to the smallest eigenvalue $\lambda_n$ (in magnitude); the convergence is then (again, analogous to Equation~\eqref{equation:power-iteration-analysis}):
\begin{equation}\label{equation:-inverse-power-iteration-analysis-shifted}
\begin{aligned}
||\bv^{(k+1)} - x_n \bq_n||_{\bQ^{-1}} &= ||\bQ^{-1}(\bv^{(k+1)} - x_n\bq_n)||_2\\
&\leq \left|\frac{\lambda_n\textcolor{blue}{-\mu}}{\lambda_{n-1}\textcolor{blue}{-\mu}}\right| \cdot ||\bQ^{-1}(\bv^{(k)} - x_n\bq_n)||_2
= \left|\frac{\lambda_n \textcolor{blue}{-\mu}}{\lambda_{n-1}\textcolor{blue}{-\mu}}\right| \cdot ||\bv^{(k)} - x_n\bq_n||_{\bQ^{-1}}.
\end{aligned}
\end{equation}
When $\mu$ is close to $\lambda_n$, the bound $\left|\frac{\lambda_n \textcolor{blue}{-\mu}}{\lambda_{n-1}\textcolor{blue}{-\mu}}\right|$ is small, so the convergence is faster than that of the (naive) inverse power method (see Equation~\eqref{equation:-inverse-power-iteration-analysis1}), although it is still linear.
The formal convergence result is stated in the following theorem (one can follow the deduction in Theorem~\ref{theorem:convergence-power-iteration} to obtain it).
\begin{theorem}[Convergence of Shifted Inverse Power Iteration]
Suppose $\lambda_J$ is the closest eigenvalue to $\mu$ and $\lambda_K$ is the second closest. Moreover, $\bq_J^\top \bv^{(0)}\neq 0$. Then the iterates of Algorithm~\ref{alg:inverse-power-iteration-shifted} satisfy
$$
||\bv^{(k)} - (\pm \bq_J)||= O\left(\left|\frac{\lambda_J-\mu}{\lambda_K-\mu}\right|^k\right), \qquad |\lambda^{(k)}-\lambda_J|=O\left(\left|\frac{\lambda_J-\mu}{\lambda_K-\mu}\right|^{2k}\right)
$$
as $k\rightarrow \infty$.
\end{theorem}
This shows that the iterates $\bv^{(k)}$ of Algorithm~\ref{alg:inverse-power-iteration-shifted} converge to $\pm\bq_J$ linearly.
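A minimal NumPy sketch of Algorithm~\ref{alg:inverse-power-iteration-shifted} follows (the matrix and the shift are illustrative choices):

```python
import numpy as np

# Shifted inverse iteration: converges to the eigenvector whose eigenvalue is
# closest to the shift mu.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3)
mu = 2.9                          # close to the middle eigenvalue 3
n = A.shape[0]

v = np.ones(n) / np.sqrt(n)
for _ in range(50):
    w = np.linalg.solve(A - mu * np.eye(n), v)   # apply (A - mu I)^{-1}
    v = w / np.linalg.norm(w)
lam = v @ A @ v

print(lam)   # close to 3, the eigenvalue nearest mu
```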
\subsection{The Rayleigh Quotient Method}
We know that the \textit{inverse power iteration} converges to the eigenvector corresponding to the smallest eigenvalue of $\bA$, and the \textit{shifted inverse power iteration} converges to the eigenvector corresponding to the eigenvalue closest to $\mu$, possibly with faster convergence. Both methods are \textit{inverse} power iterations in some sense. Combining the ideas behind the two algorithms, i.e., using the Rayleigh quotient of the estimated eigenvector in each iteration as the shift, yields an even faster algorithm.
\begin{algorithm}[h]
\caption{Rayleigh Quotient Iteration (Compare to Algorithm~\ref{alg:inverse-power-iteration-shifted})}
\label{alg:rayleigh-quotient-iteration2}
\begin{algorithmic}[1]
\Require matrix $\bA\in \real^{n\times n}$ that is real and symmetric;
\State $\bv^{(0)}$= some vector with $||\bv^{(0)}||=1$;
\State $\lambda^{(0)}=(\bv^{(0)})^\top\bA\bv^{(0)}$; \Comment{i.e., Rayleigh quotient}
\For{$k=1,2,\ldots$}
\State $\bw = \textcolor{blue}{(\bA-\lambda^{(k-1)}\bI)^{-1}} \bv^{(k-1)}$;
\State $\bv^{(k)} = \bw / ||\bw||$;
\State $\lambda^{(k)} = (\bv^{(k)})^\top\bA\bv^{(k)}$; \Comment{i.e., Rayleigh quotient}
\EndFor
\end{algorithmic}
\end{algorithm}
The Rayleigh quotient iteration finds an eigenvector, but with which eigenvalue it is associated
is not clear from the start.
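The procedure is formalized in Algorithm~\ref{alg:rayleigh-quotient-iteration2}; a minimal NumPy sketch follows (the matrix and start vector are illustrative choices, and the singular-solve guard is a practical safeguard rather than part of the stated algorithm):

```python
import numpy as np

# Rayleigh quotient iteration: the shift is refreshed to the current Rayleigh
# quotient at every step. Which eigenpair it finds depends on the start vector.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]

v = np.array([1.0, 0.5, -0.5])
v /= np.linalg.norm(v)
lam = v @ A @ v
for _ in range(10):
    try:
        w = np.linalg.solve(A - lam * np.eye(n), v)
    except np.linalg.LinAlgError:
        break                     # shift hit an eigenvalue exactly: converged
    v = w / np.linalg.norm(w)
    lam = v @ A @ v

residual = np.linalg.norm(A @ v - lam * v)
print(lam, residual)
```

A handful of sweeps already drives the residual to roundoff level, reflecting the locally very fast convergence of the method.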
\section{QR Algorithm}
The QR algorithm for computing the eigenvalues and eigenvectors of a matrix has been named one of the ten most important algorithms of the twentieth century \citep{dongarra2000guest, cipra2000best}. It was published by John G. F. Francis \citep{francis1961qr, francis1962qr} and has been called one of the jewels of numerical analysis \citep{trefethen1997numerical}. We will introduce the QR algorithm from the simplest case up to a shifted version with implicit calculation.
The QR algorithm goes further by \textit{simultaneously} calculating all the eigenvalues of a given matrix $\bA$. The central idea is to reduce the matrix $\bA$ by a sequence of \textit{similarity transformations} (Definition~\ref{definition:similar-matrices}, p.~\pageref{definition:similar-matrices}), under which the eigenvalues are unchanged (Lemma~\ref{lemma:eigenvalue-similar-matrices}, p.~\pageref{lemma:eigenvalue-similar-matrices}),
into a form from which the eigenvalues are easier to read off. The net result turns out to be simple to state: all we do in the QR algorithm is take a QR decomposition, multiply the computed factors $\bQ$ and $\bR$ together in the reverse order $\bR\bQ$, and repeat the procedure. Hence the name \textit{QR algorithm}.
\subsection{Preliminary: Power Iteration with Eigenvector Known}
We first show how the QR algorithm evolves from the power iteration algorithm.
Let's attach a subscript to the iterates $\bv^{(k)}$ to emphasize which eigenvector they converge to: e.g., $\bv^{(k)}_{\textcolor{blue}{i}}$ will be shown to converge to the eigenvector associated with $\lambda_{\textcolor{blue}{i}}$. Suppose further that we know the normalized eigenvector $\bq_1$ associated with $\lambda_1$ up front, and that a second initial vector, $\bv_2^{(0)}$, has no component in the direction of $\bq_1$. The latter can be enforced by making $\bv_2^{(0)}$ orthogonal
to $\bq_1$ via the projection introduced in the Gram--Schmidt process (Section~\ref{section:project-onto-a-vector}, p.~\pageref{section:project-onto-a-vector}):
$$
\bv_2^{(k+1)} \leftarrow \bv_2^{(k)} -\bq_1^\top \bv_2^{(k)}\bq_1
$$
such that $\bq_1^\top \bv_2^{(k+1)}=0$ (recall $\bq_1^\top \bv_2^{(k)}\bq_1$ above is the component of $ \bv_2^{(k)}$ in the direction of $\bq_1$ since $\bq_1$ has unit length). With $\bq_1$ known beforehand, we consider the method in Algorithm~\ref{alg:power-iteration-in-qr-preliminary1}.
\begin{algorithm}[H]
\caption{Power Iteration ($\bq_1$ is Known Up Front)}
\label{alg:power-iteration-in-qr-preliminary1}
\begin{algorithmic}[1]
\Require matrix $\bA\in \real^{n\times n}$ that is real and symmetric;
\State \textcolor{blue}{$\bq_1$ is known up front};
\State $\bv_2^{(0)}$= some vector in $\real^n$;
\State $\bv_2^{(0)} = \bv_2^{(0)} -\textcolor{blue}{\bq_1}^\top \bv_2^{(0)} \textcolor{blue}{\bq_1}$; \Comment{i.e., project along $\bq_1$: $\bq_1^\top \bv_2^{(0)}=0$}
\State $\bv_2^{(0)} = \bv_2^{(0)} /||\bv_2^{(0)}||$; \Comment{normalize to have length one}
\For{$k=1,2,\ldots$}
\State $\bv_2^{(k)}= \bA \bv_2^{(k-1)}$;
\State $\bv_2^{(k)} = \bv_2^{(k)} -\textcolor{blue}{\bq_1}^\top \bv_2^{(k)} \textcolor{blue}{\bq_1}$; \Comment{make sure $\bv_2^{(k)}$ is orthogonal to $\bq_1$}
\State $\bv_2^{(k)} = \bv_2^{(k)}/ ||\bv_2^{(k)}||$;
\State $\lambda_2^{(k)} = (\bv_2^{(k)})^\top\bA\bv_2^{(k)}$; \Comment{i.e., Rayleigh quotient}
\EndFor
\end{algorithmic}
\end{algorithm}
Write again $\bv_2^{(0)}$ as a linear combination of the orthonormal eigenvectors $\bq_i$:
\begin{equation}\label{equation:power-qr-premi-1}
\bv_2^{(0)} = x_1 \bq_1 + x_2\bq_2 + \ldots + x_n\bq_n.
\end{equation}
Since in step 3 of the above algorithm we project $\bv_2^{(0)}$ away from $\bq_1$, the component $x_1$ in Equation~\eqref{equation:power-qr-premi-1} equals 0.
Similarly, $\bv_2^{(k)}$ is a multiple of $\bA^k \bv_2^{(0)}$ such that $\bv_2^{(k)} = c_{2k} \bA^k \bv_2^{(0)} = \frac{\bA^k \bv_2^{(0)}}{||\bA^k \bv_2^{(0)}||}$. We have
$$
\begin{aligned}
\bv_2^{(k)} &= c_{2k} \bA^k\bv_2^{(0)} \\
&=c_{2k} (x_1 \lambda_1^k\bq_1 + x_2\lambda_2^k\bq_2+x_3\lambda_3^k\bq_3 + \ldots + x_n\lambda_n^k\bq_n) \\
&=c_{2k} (x_2\lambda_2^k\bq_2 + x_3\lambda_3^k\bq_3+ \ldots + x_n\lambda_n^k\bq_n) \\
&= c_{2k} \lambda_2^k\left(x_2\bq_2 + x_3\left(\frac{\lambda_3}{\lambda_2}\right)^k\bq_3 + \ldots + x_n\left(\frac{\lambda_n}{\lambda_2}\right)^k\bq_n\right).
\end{aligned}
$$
Therefore, following Theorem~\ref{theorem:convergence-power-iteration}, if we assume $|\lambda_2| \textcolor{blue}{>} |\lambda_3| \geq |\lambda_4| \geq \ldots \geq |\lambda_n| \geq 0$ and $\bq_2^\top \bv_2^{(0)}\neq 0$, then $\bv_2^{(k)}$ in Algorithm~\ref{alg:power-iteration-in-qr-preliminary1} converges linearly to $\pm \bq_2$, and $\lambda_2^{(k)}$ converges to $\lambda_2$.
Analogously, when \{$|\lambda_2|\textcolor{blue}{=|}\lambda_3| \textcolor{blue}{>} |\lambda_4| \geq |\lambda_5| \geq \ldots $, $\bq_2^\top \bv_2^{(0)} \neq 0$, and $\bq_3^\top \bv_2^{(0)} \neq 0$\}, $\bv_2^{(k)}$ will converge to a multiple of $x_2\bq_2 \pm x_3\bq_3$, i.e., it lies in the space spanned by $\{\bq_2, \bq_3\}$.
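A minimal NumPy sketch of Algorithm~\ref{alg:power-iteration-in-qr-preliminary1} follows; a diagonal test matrix (an illustrative choice) makes $\bq_1$ known exactly, so the deflated iteration converges to the second eigenpair.

```python
import numpy as np

# Deflated power iteration with q1 known up front: project q1 out of the
# iterate each sweep so it converges to q2. A diagonal matrix makes the true
# eigenvectors (the standard basis vectors) known exactly.
A = np.diag([5.0, 3.0, 1.0])
q1 = np.array([1.0, 0.0, 0.0])    # eigenvector of the dominant eigenvalue 5

v = np.array([1.0, 1.0, 1.0])
v -= (q1 @ v) * q1                # remove the q1 component
v /= np.linalg.norm(v)
for _ in range(60):
    v = A @ v
    v -= (q1 @ v) * q1            # re-orthogonalize against q1
    v /= np.linalg.norm(v)
lam2 = v @ A @ v

print(lam2)   # close to 3, the second eigenvalue
```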
\subsection{Preliminary: Power Iteration with Eigenvector Unknown}
However, the method presented in Algorithm~\ref{alg:power-iteration-in-qr-preliminary1} is not practical since we usually do not know $\bq_1$ up front. But since the plain power iteration computes a sequence $\bv_1^{(k)}$ converging to $\pm \bq_1$ (Algorithm~\ref{alg:power-iteration}, p.~\pageref{alg:power-iteration}), a simultaneous algorithm can be constructed: instead of projecting $\bv_2^{(k)}$ along $\bq_1$, one projects it along $\bv_1^{(k)}$. A method to find both $\bq_1$ and $\bq_2$ simultaneously is then given in Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-q1}.
\begin{algorithm}[H]
\caption{Power Iteration ($\bq_1$ is Unknown Up Front, Compare to Algorithm~\ref{alg:power-iteration-in-qr-preliminary1})}
\label{alg:power-iteration-in-qr-preliminary2-unknown-q1}
\begin{algorithmic}[1]
\Require matrix $\bA\in \real^{n\times n}$ that is real and symmetric;
\State $\bv_1^{(0)}, \bv_2^{(0)}$= two random vectors in $\real^n$;
\State $\bv_1^{(0)} = \bv_1^{(0)} /||\bv_1^{(0)}||$; \Comment{normalize to have length one}
\State $\bv_2^{(0)} = \bv_2^{(0)} -\textcolor{blue}{\bv_1^{(0)}}^\top \bv_2^{(0)} \textcolor{blue}{\bv_1^{(0)}}$; \Comment{i.e., project along $\bv_1^{(0)}$: \textcolor{blue}{$\bv_1^{(0)\top} \bv_2^{(0)}=0$}}
\State $\bv_2^{(0)} = \bv_2^{(0)} /||\bv_2^{(0)}||$; \Comment{normalize to have length one}
\For{$k=1,2,\ldots$}
\State // update $\bv_1^{(k)}$
\State $\bv_1^{(k)} = \bA\bv_1^{(k-1)}$;
\State $\bv_1^{(k)} = \bv_1^{(k)} / ||\bv_1^{(k)} ||$;
\State // update $\bv_2^{(k)}$
\State $\bv_2^{(k)} = \bA \bv_2^{(k-1)}$;
\State $\bv_2^{(k)} = \bv_2^{(k)} -\textcolor{blue}{\bv_1^{(k)}}^\top \bv_2^{(k)} \textcolor{blue}{\bv_1^{(k)}}$; \Comment{make sure $\bv_2^{(k)}$ is orthogonal to \textcolor{blue}{$\bv_1^{(k)}$}}
\State $\bv_2^{(k)} = \bv_2^{(k)}/ ||\bv_2^{(k)}||$;
\State // compute the corresponding eigenvalues
\State $\lambda_1^{(k)} = (\bv_1^{(k)})^\top\bA\bv_1^{(k)}$; \Comment{i.e., Rayleigh quotient}
\State $\lambda_2^{(k)} = (\bv_2^{(k)})^\top\bA\bv_2^{(k)}$; \Comment{i.e., Rayleigh quotient}
\EndFor
\end{algorithmic}
\end{algorithm}
Combining the findings above, when $\bq_1^\top \bv_1^{(0)} \neq 0$ and $\bq_2^\top \bv_2^{(0)}\neq 0$, the iterates of Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-q1} satisfy the following:
\begin{itemize}
\item If $|\lambda_1| > |\lambda_2|$, the vector sequence $\bv_1^{(k)}$ will converge linearly to $\pm \bq_1$ at a rate of $|\frac{\lambda_2}{\lambda_1}|$;
\item If $|\lambda_1| > |\lambda_2|> |\lambda_3|$, the vector sequence $\bv_2^{(k)}$ will converge linearly to $\pm \bq_2$ at a rate of $|\frac{\lambda_3}{\lambda_2}|$;
\item If $|\lambda_1| = |\lambda_2| > |\lambda_3|$, the vector sequence $\bv_1^{(k)}$ will converge to a multiple of $x_1\bq_1 \pm x_2\bq_2$, and the vector sequence $\bv_2^{(k)}$ will converge to a vector in $span\{\bq_1, \bq_2\}$ orthogonal to the limit of $\bv_1^{(k)}$. I.e., the span of $\{\bq_1, \bq_2\}$ can be approximated by the span of $\{\bv_1^{(k)}, \bv_2^{(k)}\}$.
\end{itemize}
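A minimal NumPy sketch of Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-q1} follows (the diagonal test matrix and start vectors are illustrative choices):

```python
import numpy as np

# Simultaneous iteration on two vectors: v1 runs plain power iteration, while
# v2 is kept orthogonal to the current v1 in every sweep.
A = np.diag([5.0, 3.0, 1.0])      # eigenvectors are e1, e2, e3

v1 = np.array([1.0, 1.0, 1.0])
v1 /= np.linalg.norm(v1)
v2 = np.array([1.0, -1.0, 2.0])
v2 -= (v1 @ v2) * v1
v2 /= np.linalg.norm(v2)
for _ in range(80):
    v1 = A @ v1
    v1 /= np.linalg.norm(v1)
    v2 = A @ v2
    v2 -= (v1 @ v2) * v1          # keep v2 orthogonal to v1
    v2 /= np.linalg.norm(v2)
lam1, lam2 = v1 @ A @ v1, v2 @ A @ v2

print(lam1, lam2)   # close to 5 and 3
```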
\subsection{Preliminary: Power Iteration with Eigenvector Unknown and QR Decomposition}\label{section:power-eigen-unknown-prelimise}
We notice carefully that steps 2 to 4 of Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-q1} are equivalent to applying a QR decomposition to the $n\times 2$ matrix $[\bv_1^{(0)}, \bv_2^{(0)}]$, that is,
$$
\underbrace{[\bv_1^{(0)}, \bv_2^{(0)}] }_{\widehat{\bV}^{(0)}}, \bR \leftarrow QR([\bv_1^{(0)}, \bv_2^{(0)}]),
$$
where $QR(\bA)$ denotes the function that returns the QR decomposition of the matrix $\bA$.
Moreover, steps 6 to 12 of Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-q1} can also be rephrased as a QR decomposition of the $n\times 2$ matrix $\bA[\bv_1^{(k-1)}, \bv_2^{(k-1)}]$. A further simplification is to collect the eigenvalue estimates $\lambda_1^{(k)}, \lambda_2^{(k)}$ as
$$
\underbrace{\begin{bmatrix}
\lambda_1^{(k)} & 0\\
0 & \lambda_2^{(k)}
\end{bmatrix}}_{\widehat{\bA}^{(k)}}
\approx
\underbrace{\begin{bmatrix}
\bv_1^{(k)} & \bv_2^{(k)}
\end{bmatrix}^\top }_{\widehat{\bV}^{(k)\top}}
\bA
\underbrace{\begin{bmatrix}
\bv_1^{(k)} & \bv_2^{(k)}
\end{bmatrix}}_{\widehat{\bV}^{(k)}},
$$
where the off-diagonal entries of $\widehat{\bV}^{(k)\top} \bA \widehat{\bV}^{(k)}$ tend to zero as the columns converge to eigenvectors.
\begin{algorithm}[H]
\caption{Power Iteration (On 2 Vectors, Equivalent to Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-q1})}
\label{alg:power-iteration-in-qr-preliminary2-unknown-qr-function}
\begin{algorithmic}[1]
\Require matrix $\bA\in \real^{n\times n}$ that is real and symmetric;
\State $\widehat{\bV}^{(0)}= [\bv_1^{(0)}, \bv_2^{(0)} ]\in \real^{n\times 2}$= two random vectors in $\real^n$;
\State $\widehat{\bV}^{(0)}, \bR= QR(\widehat{\bV}^{(0)})$;
\For{$k=1,2,\ldots$}
\State $\widehat{\bV}^{(k)}, \bR= QR(\bA\widehat{\bV}^{(k-1)})$;
\State $\widehat{\bA}^{(k)}=\widehat{\bV}^{(k)\top} \bA \widehat{\bV}^{(k)}$; \Comment{ compute the corresponding eigenvalues}
\EndFor
\end{algorithmic}
\end{algorithm}
Note that the hat $\widehat{(\cdot)}$ above the matrices in Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-qr-function} will prove useful to differentiate these quantities from their counterparts in the QR algorithm.
With hindsight, it is natural to extend the algorithm from two vectors to $p\leq n$ vectors. The full algorithm is formulated in Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-qr-function-mn}.
\begin{algorithm}[H]
\caption{Power Iteration (On $p$ Vectors, Compare to Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-qr-function})}
\label{alg:power-iteration-in-qr-preliminary2-unknown-qr-function-mn}
\begin{algorithmic}[1]
\Require matrix $\bA\in \real^{n\times n}$ that is real and symmetric;
\State $\widehat{\bV}^{(0)}$= random matrix in $\real^{n\times \textcolor{blue}{p}}$;
\State $\widehat{\bV}^{(0)}, \bR= QR(\widehat{\bV}^{(0)})$;
\For{$k=1,2,\ldots$}
\State $\widehat{\bV}^{(k)}, \bR= QR(\bA\widehat{\bV}^{(k-1)})$;
\State $\widehat{\bA}^{(k)}=\widehat{\bV}^{(k)\top} \bA \widehat{\bV}^{(k)}$; \Comment{ compute the corresponding eigenvalues}
\EndFor
\end{algorithmic}
\end{algorithm}
Again, we observe that, when $\bq_i^\top \bv_i^{(0)} \neq 0$ for all $i \in \{1,2,\ldots, p\}$,\footnote{That is, each initial guess $\bv_i^{(0)}$ is not orthogonal to the eigenvector $\bq_i$, i.e., it has a component in the direction of that eigenvector.} the iterates of Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-qr-function-mn} satisfy the following:
\begin{enumerate}
\item If $|\lambda_1| > |\lambda_2| > \ldots > |\lambda_p| > |\lambda_{p+1}| \geq |\lambda_{p+2}| \geq \ldots $, then each column $i$ of $\widehat{\bV}^{(k)}$ (i.e., $\bv_i^{(k)}$) will converge linearly to $\pm \bq_i$, where the component in the direction of $\bq_j$ ($i<j\leq n$) is removed at the rate $|\frac{\lambda_j}{\lambda_i}|$ for $0<i\leq p$;
\item If some of the eigenvalues have equal magnitude, then the subspace spanned by the corresponding columns of $\widehat{\bV}^{(k)}$ will approximate the subspace spanned by the corresponding eigenvectors associated with those eigenvalues;
\item If $p=n$, and $|\lambda_1| > |\lambda_2| > \ldots >|\lambda_n|$, then we will find all the eigenvectors of $\bA$.
\end{enumerate}
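A minimal NumPy sketch of Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-qr-function-mn}, using NumPy's \texttt{qr} for the $QR(\cdot)$ step (the matrix and $p$ are illustrative choices):

```python
import numpy as np

# Block power iteration on p vectors with QR re-orthonormalization: the
# columns of V converge to the leading eigenvectors, and V^T A V to a
# diagonal of the leading eigenvalues.
rng = np.random.default_rng(0)
A = np.diag([6.0, 4.0, 2.0, 1.0])
p = 3

V, _ = np.linalg.qr(rng.standard_normal((4, p)))   # orthonormal start
for _ in range(100):
    V, _ = np.linalg.qr(A @ V)
A_hat = V.T @ A @ V               # approximately diag(6, 4, 2)

print(np.round(A_hat, 6))
```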
\subsection{A Simple QR Algorithm from Power Iteration: without Shifts}\label{section:qralg-without-shifts}
So far, we have transferred power iteration into an algorithm finding all the eigenvectors (under mild conditions) which employs QR decomposition.
Now, let's consider the power iteration with a slight modification in Algorithm~\ref{alg:qr-algorithm-simple1}, which keeps the $\bR$ factors from the QR decompositions as a sequence and initializes $\widehat{\bA}^{(0)}$ to the matrix $\bA$ (previously we ignored the value indexed by 0). Compare Algorithms~\ref{alg:qr-algorithm-simple1} and \ref{alg:qr-algorithm-simple2}.
\noindent
\begin{minipage}[t]{0.495\linewidth}
\begin{algorithm}[H]
\caption{Power Iteration}
\label{alg:qr-algorithm-simple1}
\begin{algorithmic}[1]
\Require $\bA\in \real^{n\times n}$ is real and symmetric;
\State Same as Algorithm~\ref{alg:power-iteration-in-qr-preliminary2-unknown-qr-function-mn};
\State $\widehat{\bA}^{(0)} = \bA$;
\State $\widehat{\bV}^{(0)}=\bI_n$; \Comment{initial eigenvector guess}
\State $\widehat{\bR}^{(0)} = \bI_n$; \Comment{compensate the sequence}
\For{$k=1,2,\ldots$}
\State $\widehat{\bV}^{(k)}, \widehat{\bR}^{(k)}= QR(\bA\widehat{\bV}^{(k-1)})$;
\State $\widehat{\bA}^{(k)}=\widehat{\bV}^{(k)\top} \bA \widehat{\bV}^{(k)}$;
\EndFor
\end{algorithmic}
\end{algorithm}
\end{minipage}%
\hfil
\begin{minipage}[t]{0.495\linewidth}
\begin{algorithm}[H]
\caption{Simple QR Algorithm}
\label{alg:qr-algorithm-simple2}
\begin{algorithmic}[1]
\Require $\bA\in \real^{n\times n}$ is real and symmetric;
\State $\bA^{(0)} = \bA$;
\State $\bV^{(0)}=\bI_n$; \Comment{initial eigenvector guess}
\State $\bR^{(0)} = \bI_n$; \Comment{compensate the sequence}
\State $\bQ^{(0)} = \bI_n$; \Comment{compensate the sequence}
\For{$k=1,2,\ldots$}
\State $\bQ^{(k)}, \bR^{(k)}= QR(\bA^{(k-1)})$;
\State $\bA^{(k)}=\bR^{(k)} \bQ^{(k)}$;
\State $\bV^{(k)} = \bV^{(k-1)}\bQ^{(k)} $;
\EndFor
\end{algorithmic}
\end{algorithm}
\end{minipage}
We firstly show the equivalence of the two algorithms by the following lemma.
\begin{lemma}[QR Algorithm from Power Iteration]\label{lemma:qr-algo-from-power}
We can show Algorithm~\ref{alg:qr-algorithm-simple1} and Algorithm~\ref{alg:qr-algorithm-simple2} are equivalent in the sense that for all iterates $k\in \{0,1,2,\ldots\}$, we have
$$
\left\{
\begin{aligned}
\widehat{\bA}^{(k)} &= \bA^{(k)}; \qquad \text{(diagonals that will converge to eigenvalues)}\\
\widehat{\bR}^{(k)} &= \bR^{(k)}; \\
\widehat{\bV}^{(k)} &= \bV^{(k)}. \qquad \text{(columns that will converge to eigenvectors)}
\end{aligned}
\right.
$$
\end{lemma}
For clarity, the proof is delayed in Section~\ref{section:proofs-qralgorithms}.
You might well ask: how could one discover the reduction of Algorithm~\ref{alg:qr-algorithm-simple1} to Algorithm~\ref{alg:qr-algorithm-simple2}? The answer is unglamorous! It was found by trial and error.
\paragraph{What's in the QR algorithm}
All we do in Algorithm~\ref{alg:qr-algorithm-simple2} is to take a QR decomposition, multiply the computed factors $\bQ$ and $\bR$
together in the reverse order $\bR\bQ$, and repeat. For convergence to diagonal form to be useful for finding eigenvalues, of course, the operations from $\bA^{(k-1)}$ to $\bA^{(k)}$ should be similarity transformations (Definition~\ref{definition:similar-matrices}, p.~\pageref{definition:similar-matrices}).
By Algorithm~\ref{alg:qr-algorithm-simple2}, it can be shown that
\begin{tcolorbox}[title={Simple QR Algorithm Property 1}]
\begin{equation}\label{equation:simple-qr-find}
\text{(SQR 1)} \qquad \bA^{(k)} = \bQ^{(k)\top } \bA^{(k-1)} \bQ^{(k)},
\end{equation}
\end{tcolorbox}
\noindent since $\bA^{(k)}=\bR^{(k)} \bQ^{(k)} = (\bQ^{(k)\top} \bA^{(k-1)}) \bQ^{(k)} $. Therefore, $\bA^{(k)}$ is a \textit{similarity transformation}\footnote{When the nonsingular transformation matrix is orthogonal, this is also known as an \textit{orthogonal similarity transformation}.} of $\bA^{(k-1)}$, which is in turn similar to $\bA$. It follows that $\bA^{(k)}$ and $\bA$ have the \textit{same eigenvalues, trace, and rank} (Lemma~\ref{lemma:eigenvalue-similar-matrices}, p.~\pageref{lemma:eigenvalue-similar-matrices}). Unrolling the recursion in Equation~\eqref{equation:simple-qr-find}, it follows that
\begin{tcolorbox}[title={Simple QR Algorithm Property 2}]
\begin{equation}\label{equation:simple-qr-find-root}
\begin{aligned}
\text{(SQR 2)} \qquad \bA^{(k)} &= \bQ^{(k)\top } \bA^{(k-1)} \bQ^{(k)} \\
&=\bQ^{(k)\top } \left(\bQ^{(k-1)\top } \bA^{(k-2)} \bQ^{(k-1)} \right) \bQ^{(k)} \\
&=\ldots \\
&=
\underbrace{\bQ^{(k)\top } \bQ^{(k-1)\top }\ldots \bQ^{(0)\top } }_{\bV^{(k)\top}}
\bA
\underbrace{\bQ^{(0)} \ldots \bQ^{(k-1)} \bQ^{(k)} }_{\bV^{(k)}}
\\
&=\bV^{(k)\top}\bA \bV^{(k)}.
\end{aligned}
\end{equation}
\end{tcolorbox}
The lemma above tells us that the simple QR algorithm is equivalent to the simultaneous power iteration, so that the diagonal entries of $\bA^{(k)}$ converge to the eigenvalues of $\bA$ (under mild conditions).
$\bA^{(k)} =\bV^{(k)\top}\bA \bV^{(k)} $ is not only a similarity transformation, but also an \textit{orthogonal similarity transformation}, since $\bV^{(k)}$ is orthogonal. This is particularly important for the stability of the iterative method, as the conditioning of $\bA^{(k)}$ is no worse than that of the original matrix $\bA$.
Then, the equivalence indicates that the $i$-th column of $\bV^{(k)}$ will converge linearly to the $i$-th eigenvector of $\bA$, i.e., $\pm \bq_i$.
Delving further into the process, Algorithms~\ref{alg:qr-algorithm-simple1} and~\ref{alg:qr-algorithm-simple2} show that
\begin{tcolorbox}[title={Simple QR Algorithm Property 3}]
\begin{equation}\label{equation:simple-qr-find2}
\begin{aligned}
&\left.
\begin{aligned}
\bV^{(k)} &= \bV^{(0)}\bQ^{(1)}\bQ^{(2)}\ldots \bQ^{(k)} \\
\bA^{\textcolor{blue}{k}}&=\bV^{(k)} \bR^{(k)} \bR^{(k-1)} \ldots \bR^{(0)}
\end{aligned}
\right\}
\leadtosmall \\
&\text{(SQR 3)} \qquad \bA^k= \underbrace{\bQ^{(0)}\bQ^{(1)}\bQ^{(2)}\ldots \bQ^{(k)}}_{=\bV^{(k)}, \text{orthogonal}}
\underbrace{\bR^{(k)} \bR^{(k-1)} \ldots \bR^{(0)}}_{=\bU^{(k)}, \text{upper triangular}}
,
\end{aligned}
\end{equation}
\end{tcolorbox}
\noindent where the proof of the identities on the left-hand side of the above equation is deferred to Section~\ref{section:proofs-qralgorithms}, and where $\bQ^{(0)} = \bV^{(0)} = \bI$ for simplicity.
That is, $\bA^{k}$ admits the QR decomposition $\bA^{k}=\bV^{(k)}\bU^{(k)}$.\footnote{Note the difference between the notations $\bA^k$ and $\bA^{(k)}$: $\bA^k$ here is the $k$-th power of $\bA$.}
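Property (SQR 3) can be checked numerically; the sketch below (on an illustrative $2\times 2$ matrix) accumulates the $\bQ$ factors from the right and the $\bR$ factors from the left, and compares the product against $\bA^k$.

```python
import numpy as np

# Numeric check of (SQR 3): accumulating the Q factors (right to left in the
# loop) and the R factors (left to right) reproduces the k-th power of A.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
k = 5

Vk = np.eye(2)                    # Q^{(1)} Q^{(2)} ... Q^{(k)}  (Q^{(0)} = I)
Uk = np.eye(2)                    # R^{(k)} ... R^{(1)}          (R^{(0)} = I)
A_cur = A.copy()
for _ in range(k):
    Q, R = np.linalg.qr(A_cur)
    A_cur = R @ Q
    Vk = Vk @ Q
    Uk = R @ Uk                   # note: the new R multiplies from the left

err = np.linalg.norm(Vk @ Uk - np.linalg.matrix_power(A, k))
print(err)   # essentially zero
```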
The equivalence of the simultaneous power method and the simple QR algorithm (Algorithms~\ref{alg:qr-algorithm-simple1} and \ref{alg:qr-algorithm-simple2}) tells us a lot about convergence.
As in Section~\ref{section:power-eigen-unknown-prelimise}, it follows that
\begin{enumerate}
\item If $|\lambda_1| > |\lambda_2| > \ldots > |\lambda_p| > |\lambda_{p+1}| \geq |\lambda_{p+2}| \geq \ldots \geq |\lambda_n|$, then each column $i$ of $\bV^{(k)}$ (i.e., $\bv_i^{(k)}$) will converge linearly to $\pm \bq_i$, where the component in the direction of $\bq_j$ ($i<j\leq n$) is removed at the rate $|\frac{\lambda_j}{\lambda_i}|$ for $0<i\leq p$;
\item If some of the eigenvalues have equal magnitude, then the subspace spanned by the corresponding columns of $\bV^{(k)}$ will approximate the subspace spanned by the corresponding eigenvectors associated with those eigenvalues;
\item If $p=n$ and $|\lambda_1| > |\lambda_2| > \ldots >|\lambda_n|$, then we will find all the eigenvectors of $\bA$. Specifically, suppose $$
\bA^{(k)} =
\begin{bmatrix}
\lambda_1^{(k)} & a_{12} & a_{13} & \ldots & a_{1n}\\
a_{12} & \lambda_2^{(k)} & a_{23} & \ldots & a_{2n}\\
\vdots & \vdots & \ddots & \ddots & \vdots\\
a_{1,n-1} & a_{2,n-1} & \ldots & \lambda_{n-1}^{(k)} & a_{n-1,n}\\
a_{1n} & a_{2n} &a_{3n} & \ldots & \lambda_n^{(k)}\\
\end{bmatrix},
$$
which is symmetric; the off-diagonal entries converge to zero at the rate
$$
|a_{i,i-1}| = O\left(\left|\frac{\lambda_i}{\lambda_{i-1}}\right|^k\right).
$$
\end{enumerate}
Under different conditions, say, $|\lambda_1|\geq |\lambda_2| \geq \ldots \geq |\lambda_p| > |\lambda_{p+1}| \geq \ldots \geq |\lambda_{n}|$, it follows that
\begin{enumerate}
\item The span of the first $p$ columns of $\bV^{(k)}$ will converge to the subspace spanned by the first $p$ orthonormal eigenvectors $span\{\bq_1, \bq_2, \ldots, \bq_p\}$;
\item The span of the last $n-p$ columns of $\bV^{(k)}$ will converge to the subspace spanned by the last $n-p$ orthonormal eigenvectors $span\{\bq_{p+1}, \bq_{p+2}, \ldots, \bq_n\}$;
\item Since $\bA$ is assumed to be real and symmetric, it follows that
$$span\{\bq_1, \bq_2, \ldots, \bq_p\} \perp span\{\bq_{p+1}, \bq_{p+2}, \ldots, \bq_n\} .$$
Therefore, the first $p$ columns and the last $n-p$ columns of $\bV^{(k)}$ span mutually orthogonal subspaces. The rate at which the two subspaces become orthogonal to each other is linear,
with constant $|\lambda_{p+1}/\lambda_p|$;
\item Furthermore, when $p=n$ above, it follows that the last column of $\bV^{(k)}$ converges linearly to $\pm \bq_n$ with constant $|\lambda_n/\lambda_{n-1}|$.
\end{enumerate}
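A minimal NumPy sketch of the simple QR algorithm (Algorithm~\ref{alg:qr-algorithm-simple2}; the symmetric test matrix is an illustrative choice):

```python
import numpy as np

# Minimal unshifted QR algorithm: A^(k) converges to a diagonal matrix of the
# eigenvalues, and V accumulates the orthogonal similarity transformation.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3)

Ak = A.copy()
V = np.eye(3)
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q                    # similarity transform: same eigenvalues as A
    V = V @ Q

print(np.round(np.sort(np.diag(Ak)), 8))
```

Note that $\bA^{(k)} = \bV^{(k)\top}\bA\bV^{(k)}$ holds at every sweep, which the test below also checks.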
By the findings on the shifted inverse power method (Algorithm~\ref{alg:inverse-power-iteration-shifted}), shifting by an estimate of the smallest eigenvalue (in magnitude) in each iteration can accelerate the convergence, and this leads to a ``practical" QR algorithm.
\begin{remark}[Asymmetric Matrix $\bA$]
In the above discussions, we assume $\bA$ is real and symmetric. If $\bA$ is asymmetric with real eigenvalues that are distinct in modulus, it can be shown that the $\bA^{(k)}$ in the QR algorithm converges to an upper triangular matrix whose diagonal holds the eigenvalues (see the second form of the Schur decomposition, Corollary~\ref{corollary:schur-second-form}, p.~\pageref{corollary:schur-second-form}). For more general matrices $\bA$, the sequence converges to an upper \textit{quasi-triangular} matrix. See \citep{quarteroni2010numerical, golub2013matrix} for more details.
\end{remark}
\paragraph{LU Algorithm} We shall only briefly discuss the LU algorithm here. Instead of factoring the matrix in each iteration by a QR decomposition, an LU decomposition can be applied as well. To see this, we consider the procedure in Algorithm~\ref{alg:lu-algorithm-simple2}, and suppose further that $\bA$ is nonsingular (so that its LU decomposition has nonsingular factors). We then have
\begin{equation}\label{equation:simple-qr-find-lu-simple}
\boxed{\begin{aligned}
\text{(SLU 1)} &\qquad \bA^{(k)} = \bU^{(k)}\bL^{(k)}
=((\bL^{(k)})^{-1} \bL^{(k)})\bU^{(k)}\bL^{(k)}=(\bL^{(k)})^{-1}\bA^{(k-1)}\bL^{(k)}, \\
\text{(SLU 2)} &\qquad \bA^{(k)} =
(\bL^{(k) } )^{-1} (\bL^{(k-1) })^{-1}\ldots (\bL^{(0) } )^{-1}
\bA
\bL^{(0)} \ldots \bL^{(k-1)} \bL^{(k)} .
\end{aligned}}
\end{equation}
From (SLU 1), the update from $\bA^{(k-1)}$ to $\bA^{(k)}$ is a similarity transformation (no longer an orthogonal one). Therefore, the accuracy depends on the conditioning of each $\bL^{(k)}$, which may be out of control. This explains the rare use of the LU algorithm. See \citep{rutishauser1958solution, francis1961qr} for more details (the method is named the LR algorithm in the original papers).
\begin{algorithm}[H]
\caption{Simple LU Algorithm (Compare to Algorithm~\ref{alg:qr-algorithm-simple2})}
\label{alg:lu-algorithm-simple2}
\begin{algorithmic}[1]
\Require $\bA\in \real^{n\times n}$ is real and symmetric;
\State $\bA^{(0)} = \bA$;
\State $\bV^{(0)}=\bI_n$; \Comment{initial eigenvector guess}
\State $\textcolor{blue}{\bU}^{(0)} = \bI_n$; \Comment{compensate the sequence}
\State $\textcolor{blue}{\bL}^{(0)} = \bI_n$; \Comment{compensate the sequence}
\For{$k=1,2,\ldots$}
\State $\textcolor{blue}{\bL}^{(k)}, \textcolor{blue}{\bU}^{(k)}= \textcolor{blue}{LU}(\bA^{(k-1)})$;
\State $\bA^{(k)}=\textcolor{blue}{\bU}^{(k)} \textcolor{blue}{\bL}^{(k)}$;
\State $\bV^{(k)} = \bV^{(k-1)}\textcolor{blue}{\bL}^{(k)} $;
\EndFor
\end{algorithmic}
\end{algorithm}
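To make the procedure concrete, the LU (LR) iteration can be sketched in a few lines of NumPy. This is an illustrative sketch (the helper names \textit{lu\_nopivot} and \textit{lu\_algorithm} are ours): the factorization is computed without pivoting, as the similarity argument in (SLU 1) requires, and the numerical caveats discussed above still apply.

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU factorization without pivoting: A = L @ U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    return L, U

def lu_algorithm(A, iters=200):
    """Simple LU (LR) iteration: A_k = U_k @ L_k where A_{k-1} = L_k @ U_k."""
    Ak = A.astype(float).copy()
    for _ in range(iters):
        L, U = lu_nopivot(Ak)
        Ak = U @ L           # = inv(L) @ A_{k-1} @ L, a similarity transform
    return Ak                # diagonal approximates the eigenvalues
```

For a matrix with eigenvalues distinct in modulus (and factorable without pivoting at every step), the diagonal of the returned matrix approximates the eigenvalues.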
\subsection{A Practical QR Algorithm: with Shifts}
From the above discussion,
we could compute the Rayleigh quotient of the last column of $\bV^{(k-1)}$ as a shift. Alternatively, recall that $\bA^{(k-1)}$ estimates the eigenvalues on its diagonal, so that $a_{nn}^{(k-1)}$ (the last diagonal entry of $\bA^{(k-1)}$) can be used as a shift. The procedure is shown in Algorithm~\ref{alg:qr-algorithm-practical}.
\begin{algorithm}[H]
\caption{Practical QR Algorithm (The Final Algorithm! Compare to Algorithm~\ref{alg:qr-algorithm-simple2})}
\label{alg:qr-algorithm-practical}
\begin{algorithmic}[1]
\Require $\bA\in \real^{n\times n}$ is real and symmetric;
\State $\bA^{(0)} = \bA$;
\State $\bV^{(0)}=\bI_n$;\Comment{initial eigenvector guess}
\State $\bR^{(0)} = \bI_n$; \Comment{compensate the sequence}
\State $\bQ^{(0)} = \bI_n$; \Comment{compensate the sequence}
\For{$k=1,2,\ldots$}
\State Pick a shift $\mu^{(k)}$; \Comment{e.g., $\mu^{(k)}=a_{nn}^{(k-1)}$}
\State $\bQ^{(k)}, \bR^{(k)}= QR\left(\bA^{(k-1)} - \textcolor{blue}{\mu^{(k)}\bI}\right)$; \Comment{QR decomposition}
\State $\bA^{(k)}=\bR^{(k)} \bQ^{(k)} + \textcolor{blue}{\mu^{(k)}\bI}$; \Comment{diagonals converge to eigenvalues}
\State $\bV^{(k)} = \bV^{(k-1)}\bQ^{(k)} $; \Comment{columns converge to eigenvectors}
\EndFor
\end{algorithmic}
\end{algorithm}
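The shifted iteration above can be sketched in NumPy as follows; this is an illustrative sketch (the function name is ours) using the simple shift $\mu^{(k)}=a_{nn}^{(k-1)}$:

```python
import numpy as np

def shifted_qr(A, iters=50):
    """Practical QR iteration with the simple shift mu = a_nn (a sketch)."""
    n = A.shape[0]
    Ak = A.astype(float).copy()
    V = np.eye(n)                        # accumulates the Q^{(k)} factors
    for _ in range(iters):
        mu = Ak[-1, -1]                  # shift by the last diagonal entry
        Q, R = np.linalg.qr(Ak - mu * np.eye(n))
        Ak = R @ Q + mu * np.eye(n)      # diagonals converge to eigenvalues
        V = V @ Q                        # columns converge to eigenvectors
    return Ak, V
```

By (PQR 2), $\bA = \bV^{(k)}\bA^{(k)}\bV^{(k)\top}$ holds throughout the iteration, which gives a simple sanity check on the accumulated factors.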
A similar observation applies:
\begin{tcolorbox}[title={Practical QR Algorithm Property 1}]
\begin{equation}\label{equation:practical-qr-find1}
\left.
\begin{aligned}
\bR^{(k)}&= \bQ^{(k)\top} \left(\bA^{(k-1)} - \mu^{(k)}\bI\right)\\
\bA^{(k)}&=\bR^{(k)} \bQ^{(k)} + \mu^{(k)}\bI
\end{aligned}
\right\} \leadtosmall
\underbrace{\bA^{(k)} = \bQ^{(k)\top } \bA^{(k-1)} \bQ^{(k)}}_{\text{(PQR 1)=(SQR 1)}},
\end{equation}
\end{tcolorbox}
\noindent which is the same \textit{similarity transformation} as in the simple QR algorithm, shown in Equation~\eqref{equation:simple-qr-find}.
Similar to Equation~\eqref{equation:simple-qr-find-root}, the same observation applies:
\begin{tcolorbox}[title={Practical QR Algorithm Property 2}]
\begin{equation}\label{equation:practical-qr-find-root}
\begin{aligned}
\bA^{(k)} =\bV^{(k)\top}\bA \bV^{(k)} \leadto \underbrace{\bA =\bV^{(k)} \bA^{(k)} \bV^{(k)\top}}_{\text{(PQR 2)=(SQR 2)}},
\end{aligned}
\end{equation}
\end{tcolorbox}
\noindent where $\bV^{(k)}=\bQ^{(0)}\bQ^{(1)}\bQ^{(2)}\ldots \bQ^{(k)}$ (same as (SQR 2)).
Again, $\bA^{(k)} =\bV^{(k)\top}\bA \bV^{(k)} $ is an \textit{orthogonal similarity transformation} since $\bV^{(k)}$ is orthogonal, so the condition of $\bA^{(k)}$ is still no worse than that of the original matrix $\bA$.
Further, the third property of the practical QR algorithm differs slightly from that of the simple QR algorithm:
\begin{tcolorbox}[title={Practical QR Algorithm Property 3}]
\begin{equation}\label{equation:practical-qr-find2}
\begin{aligned}
&\text{(PQR 3)} \neq \text{(SQR 3)}: \\
&(\bA-\mu^{(k)}\bI)(\bA-\mu^{(k-1)}\bI)\ldots (\bA-\mu^{(1)}\bI)= \underbrace{\bQ^{(0)}\bQ^{(1)}\bQ^{(2)}\ldots \bQ^{(k)}}_{=\bV^{(k)}, \text{orthogonal}}
\underbrace{\bR^{(k)} \bR^{(k-1)} \ldots \bR^{(0)}}_{=\bU^{(k)}, \text{upper triangular}}.
\end{aligned}
\end{equation}
\end{tcolorbox}
\noindent Again, for clarity, the proof of Equation~\eqref{equation:practical-qr-find2} is deferred to Section~\ref{section:proofs-qralgorithms}. Similar to the analysis of the simple QR algorithm (Section~\ref{section:qralg-without-shifts}), assume first that $\mu^{(k)}=\mu$ is fixed in Algorithm~\ref{alg:qr-algorithm-practical}, and that the eigenvalues are ordered such that $|\lambda_1-\mu| > |\lambda_2-\mu| > \ldots >|\lambda_n-\mu|$. Then the $(i,i-1)$-th entry of $\bA^{(k)}$ converges linearly to zero like
$$
\left|\frac{\lambda_i-\mu}{\lambda_{i-1}-\mu}\right|^k.
$$
This implies if $\mu^{(k)} = a_{nn}^{(k-1)}$, $|\lambda_n - \mu^{(k)}|$ tends to be much smaller than $|\lambda_i - \mu^{(k)}|$ for $i\in \{1,2,\ldots, n-1\}$:
$$
|\lambda_n - \mu^{(k)}| \ll |\lambda_i - \mu^{(k)}|, \gap i\in \{1,2,\ldots, n-1\}.
$$
This will make the off-diagonal entries in the last row and column of $\bA^{(k)}$ converge to zero rapidly, i.e., $a_{nn}^{(k)}$ converges rapidly to an eigenvalue of $\bA$.
\section{Apply the Practical QR Algorithm to Tridiagonal Matrices}
We observe that the practical QR algorithm obtains the eigenvectors and eigenvalues via a sequence of \textit{orthogonal similarity transformations}, by property (PQR 1) in Equation~\eqref{equation:practical-qr-find1}: $\bA^{(k)} = \bQ^{(k)\top } \bA^{(k-1)} \bQ^{(k)}$. When $\bA$ is symmetric, a tridiagonal decomposition $\bA=\bQ^{(0)}\bT^{(0)}\bQ^{(0)\top}$ exists (Theorem~\ref{theorem:tridiagonal-decom}, p.~\pageref{theorem:tridiagonal-decom}), which is an orthogonal similarity transformation as well. This decomposition can be employed as phase 1 of the QR algorithm, since the tridiagonal matrix $\bT^{(0)}$ is close to diagonal form and converges to the diagonal eigenvalue matrix more easily. The procedure is shown in Algorithm~\ref{alg:qr-algorithm-practical-tridiagonal}.
\subsection{Explicit Shifted QR Algorithm}
\begin{algorithm}[H]
\caption{Practical QR Algorithm (Two Phases, Compare to Algorithm~\ref{alg:qr-algorithm-practical})}
\label{alg:qr-algorithm-practical-tridiagonal}
\begin{algorithmic}[1]
\Require $\bA\in \real^{n\times n}$ is real and symmetric;
\State \textcolor{blue}{$\bA=\bQ^{(0)}\bT^{(0)}\bQ^{(0)\top}$}; \Comment{tridiagonal decomposition of $\bA$}
\State $\bV^{(0)}=\textcolor{blue}{\bQ^{(0)}}$; \Comment{previously $\bV^{(0)}=\bI_n$}
\State $\bR^{(0)} = \bI_n$;
\For{$k=1,2,\ldots$}
\State Pick a shift $\mu^{(k)}$; \Comment{e.g., $\mu^{(k)}=\textcolor{blue}{t}_{nn}^{(k-1)}$}
\State $\bQ^{(k)}, \bR^{(k)}= QR\left(\textcolor{blue}{\bT}^{(k-1)} - \mu^{(k)}\bI\right)$;
\State $\textcolor{blue}{\bT}^{(k)}=\bR^{(k)} \bQ^{(k)} + \mu^{(k)}\bI$;
\State $\bV^{(k)} = \bV^{(k-1)}\bQ^{(k)} $;
\EndFor
\end{algorithmic}
\end{algorithm}
The properties of the practical QR algorithm still hold:
\begin{equation}\label{equation:tpqr-1-2}
\boxed{
\begin{aligned}
\text{(TPQR 1)} \qquad \bT^{(k)} &= \bQ^{(k)\top } \bT^{(k-1)} \bQ^{(k)};\\
\text{(TPQR $2^\prime$)} \qquad \bT^{(k)}
&= \bQ^{(k)\top } \bQ^{(k-1)\top }\ldots \bQ^{(1)\top } \bT^{(0)}\bQ^{(1)} \ldots \bQ^{(k-1)} \bQ^{(k)}\\
&= \underbrace{\bQ^{(k)\top } \ldots \bQ^{(1)\top } \bQ^{(0)\top} }_{\bV^{(k)\top}}
\bA
\underbrace{\bQ^{(0)} \bQ^{(1)} \ldots \bQ^{(k)}}_{\bV^{(k)}},
\end{aligned}}
\end{equation}
except that now the $\bT^{(k)}$'s are tridiagonal matrices (see the discussion in the next section).
Suppose $\bT^{(k-1)}$ has the following form where we want to decide the shift $\mu^{(k)}$:
\begin{equation}\label{equation:implicit-matrix-qr}
\bT^{(k-1)} =
\begin{bmatrix}
a_1 & b_1 & & \ldots & 0 \\
b_1 & a_2 & \ddots & & \vdots \\
& \ddots & \ddots & \ddots & \\
\vdots & & \ddots & a_{n-1} & b_{n-1} \\
0 & \ldots & & b_{n-1} & a_n
\end{bmatrix}.
\end{equation}
As we have discussed, one reasonable choice for the shift is $\mu^{(k)} = a_n$. However, \citep{wilkinson1968global} shows that a more effective choice is to shift by an eigenvalue of
$$
\bT^{(k-1)}_{n-1:n, n-1:n}=
\begin{bmatrix}
a_{n-1} & b_{n-1}\\
b_{n-1} & a_n
\end{bmatrix}.
$$
The above matrix has two eigenvalues; the one closer to $a_n$ is chosen, and it is given by
$$
\mu^{(k)} = \frac{a_{n-1}+a_n }{2} + \text{sign}(d) \frac{1}{2}\sqrt{(a_{n-1}-a_n)^2+4b_{n-1}^2},
$$
where $d=a_n-a_{n-1}$. This is known as the \textit{Wilkinson shift}. \citep{wilkinson1968global} shows that the algorithm converges \textit{cubically} with either shift strategy, and the Wilkinson shift is preferred for heuristic reasons.
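The Wilkinson shift can be computed directly from the trailing $2\times 2$ block. The following sketch implements the formula above verbatim (the function name is ours; production codes use an algebraically equivalent, more numerically robust rearrangement):

```python
import numpy as np

def wilkinson_shift(a_prev, a_last, b):
    """Eigenvalue of [[a_prev, b], [b, a_last]] closest to a_last."""
    d = a_last - a_prev
    s = 1.0 if d >= 0 else -1.0   # sign(d); sign(0) taken as +1 here
    return (a_prev + a_last) / 2.0 + s * 0.5 * np.sqrt(d**2 + 4.0 * b**2)
```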
\subsection{Implicit Shifted QR Algorithm}\label{section:implifit-shift-qr}
It can happen that the shift $\mu^{(k)}$ in Algorithm~\ref{alg:qr-algorithm-practical-tridiagonal} is much larger than some of the diagonal values $a_i$ of $\bT^{(k-1)}$, so that explicitly forming $\bT^{(k-1)}-\mu^{(k)}\bI$ can lose accuracy. It is then preferable to update $\bT^{(k)}$ from $\bT^{(k-1)}$ implicitly, in which case there is no sequence $\bV^{(k)}$ converging to the eigenvectors of $\bA$, and an extra ``explicit" computation of the eigenvectors is needed. To see this, we collect some observations that are necessary for the implicit shift QR algorithm (make sure to recap the properties of the tridiagonal decomposition in Section~\ref{section:tridiagonal-decomposition}, p.~\pageref{section:tridiagonal-decomposition} before going forward).
\begin{itemize}
\item \textit{Preservation of Form in Simple QR Algorithm.} An orthogonal similarity transformation of a tridiagonal matrix need not be tridiagonal, i.e., $\bT_+ = \bQ\bT\bQ^\top$ may not be tridiagonal for a general orthogonal $\bQ$, even if $\bT$ is tridiagonal. However, in our case, if $\bT=\bQ\bR$ is the QR decomposition of a symmetric tridiagonal matrix $\bT\in \real^{n\times n}$, then $\bQ$ has lower bandwidth 1 and $\bR$ has upper bandwidth 2 (Definition~\ref{defin:matrix-bandwidth}, p.~\pageref{defin:matrix-bandwidth}). Then the reverse QR update $\bT_+ = \bR\bQ = \bQ^\top (\bQ\bR)\bQ = \bQ^\top \bT \bQ$ is also symmetric and tridiagonal.
\item \textit{Positive Lower Subdiagonals of the Tridiagonal Matrix.} The QR decomposition is not unique (Section~\ref{section:nonunique-qr}, p.~\pageref{section:nonunique-qr}). However, when restricting the diagonals of $\bR$ to be positive, the QR decomposition is unique (Corollary~\ref{corollary:unique-qr}, p.~\pageref{corollary:unique-qr}). Then, if $\bT$ has \textit{positive lower subdiagonals} (which implies $\bT$ is \textit{unreduced}\footnote{See Definition~\ref{definition:tridiagonal-hessenbert}, p.~\pageref{definition:tridiagonal-hessenbert}.}, i.e., has nonzero lower subdiagonals, and further that these subdiagonals are \textcolor{blue}{positive}), then $\bT_+=\bQ^\top\bT\bQ$ also has positive lower subdiagonals.\footnote{This is an important claim that is usually ignored in many texts.}
\item \textit{Preservation of Form in Practical QR Algorithm.} If $\mu\in \real$, and $\bT-\mu\bI=\bQ\bR$ is the QR decomposition of shifted $\bT$, then $\bT_+=\bR\bQ +\mu\bI$ is also symmetric and tridiagonal.
\item \textit{Implicit Q Theorem.} We observe that if we restrict the elements in the lower sub-diagonal of the tridiagonal matrix $\bT$ to be \textcolor{blue}{positive} (if possible), i.e., \textit{unreduced} with positive lower subdiagonals, then the tridiagonal decomposition $\bT_+=\bQ\bT\bQ^\top$ is uniquely determined by $\bT$ and the first column of $\bQ$ (Theorem~\ref{theorem:implicit-q-tridiagonal}, p.~\pageref{theorem:implicit-q-tridiagonal}); \index{Implicit Q theorem}
\item \textit{Tridiagonal Update.} By property (TPQR 1) in Equation~\eqref{equation:tpqr-1-2}, when the QR algorithm is applied to a tridiagonal matrix, the update of the ``eigenvalue matrix" forms a tridiagonal decomposition: $\bT^{(k)} = \bQ^{(k)\top } \bT^{(k-1)} \bQ^{(k)}$ implies $\bT^{(k-1)}=\bQ^{(k) }\bT^{(k)}\bQ^{(k)\top } $, i.e., the tridiagonal decomposition of $\bT^{(k-1)}$ is given by $\bT^{(k-1)}=\bQ^{(k) }\bT^{(k)}\bQ^{(k)\top } $. We notice that if $\bT^{(k-1)}$ is \textit{unreduced with positive lower subdiagonals}, then $\bT^{(k)}$ is also \textit{unreduced with positive lower subdiagonals}. By the implicit Q theorem above, the tridiagonal decomposition is \textbf{uniquely} determined by $\bT^{(k-1)}$ itself and the first column of $\bQ^{(k) }$.
\item If $\bT^{(k-1)}$ has positive lower subdiagonals, then $\bT^{(k)}$ will also have positive lower subdiagonals, which in turn results in positive lower subdiagonals in $\bT^{(k+1)}$. However, at some point they will converge to zero.
\item \textit{Connection to the ``simple" QR algorithm.} The ``simple" QR algorithm without shifts (Algorithm~\ref{alg:qr-algorithm-simple2}, p.~\pageref{alg:qr-algorithm-simple2}), when applied to a tridiagonal matrix, also has this tridiagonal update; the difference lies in the first column of $\bQ^{(k) }$.
\end{itemize}
We then introduce the implicit update on the practical QR algorithm.
\paragraph{Step 1: Introducing the bulge}
To illustrate the implicit shift algorithm, a $5\times 5$ example is given step by step. Suppose at the $(k-1)$-th iteration, the elements of $\bT^{(k-1)}$ are given by Equation~\eqref{equation:implicit-matrix-qr}. Find the $2\times 2$ Givens rotation $\widetildebG_1^\top$ where $c=\cos(\theta), s=\sin(\theta)$ are computed such that\index{Bulge}
$$
\underbrace{\begin{bmatrix}
c & s \\
-s & c
\end{bmatrix}}_{\widetildebG_1^\top}
\begin{bmatrix}
a_1 - \mu^{(k)}\\
b_1
\end{bmatrix}
=
\begin{bmatrix}
\boxtimes \\
0
\end{bmatrix}.
$$
An $n\times n$ Givens rotation is then constructed as
$$
\bG_{12}^\top =
\begin{bmatrix}
\widetildebG_1^\top & \\
& \bI_{n-2}
\end{bmatrix},
$$
where $n=5$ in our example, and the subscript ``$12$" of $\bG_{12}^\top$ denotes the position where the rotation happens (Definition~\ref{definition:givens-rotation-in-qr}, p.~\pageref{definition:givens-rotation-in-qr}). Up to this point, this is equivalent to the first step of the QR decomposition of $(\bT^{(k-1)}-\mu^{(k)}\bI)$ via Givens rotations (Section~\ref{section:qr-givens}, p.~\pageref{section:qr-givens}).
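The construction of the $2\times 2$ rotation can be sketched as follows (an illustrative helper, named \textit{givens} to match the notation used later in this section):

```python
import numpy as np

def givens(x1, x2):
    """Return G^T = [[c, s], [-s, c]] such that G^T @ [x1, x2] = [r, 0]."""
    r = np.hypot(x1, x2)
    if r == 0.0:
        return np.eye(2)          # nothing to rotate
    c, s = x1 / r, x2 / r
    return np.array([[c, s], [-s, c]])
```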
For the $5\times 5$ example, we realize that $\bG_{12}^\top$ working on $(\bT^{(k-1)}-\textcolor{blue}{\mu^{(k)}\bI})$ will introduce a zero in entry (2,1) and destroy the zero in entry $(1,3)$. And the process is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed:
$$
\begin{aligned}
\begin{sbmatrix}{(\bT^{(k-1)}-\textcolor{blue}{\mu^{(k)}\bI})}
\boxtimes & \boxtimes & 0 & 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & 0 & 0\\
0 & \boxtimes & \boxtimes & \boxtimes & 0\\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12}^\top\times }{\rightarrow}
\begin{sbmatrix}{\bG_{12}^\top(\bT^{(k-1)}-\textcolor{blue}{\mu^{(k)}\bI})}
\bm{\boxtimes} & \bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}} & \bm{0} & \bm{0}\\
\textcolor{blue}{\bm{0}} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0}\\
0 & \boxtimes & \boxtimes & \boxtimes & 0 \\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
\end{aligned}
$$
And $\bG_{12}^\top$ working on $\bT^{(k-1)}$ will destroy the zero in entry (1,3):
$$
\begin{aligned}
\begin{sbmatrix}{\bT^{(k-1)}}
\boxtimes & \boxtimes & 0 & 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & 0 & 0\\
0 & \boxtimes & \boxtimes & \boxtimes & 0\\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12}^\top\times }{\rightarrow}
\begin{sbmatrix}{\bG_{12}^\top\bT^{(k-1)}}
\bm{\boxtimes} & \bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}} & \bm{0} & \bm{0}\\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0}\\
0 & \boxtimes & \boxtimes & \boxtimes & 0 \\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\times\bG_{12}}{\rightarrow}
\mathop{\left[\begin{array}{ccccc}
\bm{\boxtimes} & \bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}} & \bm{0} & \bm{0}\\\cline{1-1}
\multicolumn{1}{|c|}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0}\\
\multicolumn{1}{|c|}{\textcolor{blue}{\bm{\boxtimes}}} & \bm{\boxtimes} & \boxtimes & \boxtimes & 0 \\\cline{1-1}
\bm{0} & \bm{0} & \boxtimes & \boxtimes & \boxtimes\\
\bm{0} & \bm{0} & 0 & \boxtimes & \boxtimes
\end{array}\right]}_{\textstyle\mathstrut{\bG_{12}^\top\bT^{(k-1)}\bG_{12}}},
\end{aligned}
$$
where we see that the Givens rotation $\bG_{12}^\top$ multiplying on the left of a matrix modifies its first two rows, and $\bG_{12}$ multiplying on the right modifies its first two columns. The blue $\textcolor{blue}{\bm{\boxtimes}}$'s indicate the zero entries of the tridiagonal matrix destroyed by the orthogonal similarity transformation; this is known as ``\textit{introducing the bulge}".
\paragraph{Step 2: Chasing the bulge}
Now the problem becomes ``\textit{chasing the bulge}", i.e., restoring the ``bulge" entries back to zero. Meanwhile, the final tridiagonal decomposition is determined by the first column, which the subsequent rotations leave untouched. The second Givens rotation can be constructed by calculating $c=\cos(\theta), s=\sin(\theta)$ such that
$$
\underbrace{\begin{bmatrix}
c & s \\
-s & c
\end{bmatrix}}_{\widetildebG_2^\top}
\underbrace{(\bG_{12}^\top\bT^{(k-1)}\bG_{12})_{1:2,1}}_{\text{the vector in the \fbox{box} of above matrix}}
=
\begin{bmatrix}
\boxtimes \\
0
\end{bmatrix}.
$$
A Givens rotation is then constructed as
$$
\bG_{23}^\top =
\begin{bmatrix}
1 & & \\
& \widetildebG_2^\top & \\
& & \bI_{n-3}
\end{bmatrix}.
$$
Following the above example, we have
$$
\begin{aligned}
\begin{sbmatrix}{\bG_{12}^\top\bT^{(k-1)}\bG_{12}}
\boxtimes & \boxtimes & \textcolor{blue}{\boxtimes} & 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & 0 & 0\\
\textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes & \boxtimes & 0 \\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{23}^\top\times }{\rightarrow}
\begin{sbmatrix}{\bG_{23}^\top(\bG_{12}^\top\bT^{(k-1)}\bG_{12})}
\boxtimes & \boxtimes & \textcolor{blue}{\boxtimes} & 0 & 0\\
\bm{\boxtimes} & \bm{\boxtimes} &\bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes} } & \bm{0}\\
\textcolor{brown}{\bm{0}} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} \\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\times\bG_{23} }{\rightarrow}
\mathop{\left[\begin{array}{ccccc}
\boxtimes & \bm{\boxtimes} & \textcolor{brown}{\bm{0}} & 0 & 0\\
\bm{\boxtimes} & \bm{\boxtimes} &\bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes} } & \bm{0}\\\cline{2-2}
\textcolor{brown}{\bm{0}} & \multicolumn{1}{|c|}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} \\
0 & \multicolumn{1}{|c|}{\textcolor{blue}{\bm{\boxtimes}}} & \bm{\boxtimes} & \boxtimes & \boxtimes\\\cline{2-2}
0 & \bm{0} & \bm{0} & \boxtimes & \boxtimes
\end{array}\right]}_{\textstyle\mathstrut{\bG_{23}^\top(\bG_{12}^\top\bT^{(k-1)}\bG_{12})\bG_{23}}},
\end{aligned}
$$
where we see that $\bG_{23}^\top$ multiplying on the left of a matrix modifies its rows $2,3$, and $\bG_{23}$ multiplying on the right modifies its columns $2,3$. This restores the zeros in entries (3,1) and (1,3) from step 1, but introduces a ``bulge" in entries (4,2) and (2,4) again.
The same process goes on; the example as a whole is shown below, where the blue $\textcolor{blue}{\bm{\boxtimes}}$ indicates a bulge introduced, the brown $\textcolor{brown}{\bm{0}}$ indicates a bulge chased out, and \textbf{boldface} indicates the value has just been changed:
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10, frametitle={A Complete Example of Implicit QR Algorithm}]
\begin{equation}\label{equation:tridia-update-implicit}
\begin{aligned}
\begin{sbmatrix}{\bT^{(k-1)}}
\boxtimes & \boxtimes & 0 & 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & 0 & 0\\
0 & \boxtimes & \boxtimes & \boxtimes & 0\\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12} }{\rightarrow}
\mathop{\left[\begin{array}{ccccc}
\bm{\boxtimes} & \bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}} & \bm{0} & \bm{0}\\\cline{1-1}
\multicolumn{1}{|c|}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0}\\
\multicolumn{1}{|c|}{\textcolor{blue}{\bm{\boxtimes}}} & \bm{\boxtimes} & \boxtimes & \boxtimes & 0 \\\cline{1-1}
\bm{0} & \bm{0} & \boxtimes & \boxtimes & \boxtimes\\
\bm{0} & \bm{0} & 0 & \boxtimes & \boxtimes
\end{array}\right]}_{\textstyle\mathstrut{\bG_{12}^\top(\cdot)\bG_{12}}}
\stackrel{\bG_{23} }{\rightarrow}
\mathop{\left[\begin{array}{ccccc}
\boxtimes & \bm{\boxtimes} & \textcolor{brown}{\bm{0}} & 0 & 0\\
\bm{\boxtimes} & \bm{\boxtimes} &\bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes} } & \bm{0}\\\cline{2-2}
\textcolor{brown}{\bm{0}} & \multicolumn{1}{|c|}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} \\
0 & \multicolumn{1}{|c|}{\textcolor{blue}{\bm{\boxtimes}}} & \bm{\boxtimes} & \boxtimes & \boxtimes\\\cline{2-2}
0 & \bm{0} & \bm{0} & \boxtimes & \boxtimes
\end{array}\right]}_{\textstyle\mathstrut{\bG_{23}^\top(\cdot)\bG_{23}}}\\
\end{aligned}
\end{equation}
$$
\begin{aligned}
\qquad \qquad \qquad \gap &\stackrel{\bG_{34} }{\rightarrow}
\mathop{\left[\begin{array}{ccccc}
\boxtimes & \boxtimes & \bm{0} & \bm{0} & 0\\
\boxtimes & \boxtimes &\bm{\boxtimes} & \textcolor{brown}{\bm{0} }& 0\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} &\bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes} } \\\cline{3-3}
\bm{0} & \textcolor{brown}{\bm{0} } & \multicolumn{1}{|c|}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \multicolumn{1}{|c|}{\textcolor{blue}{\bm{\boxtimes}}} & \bm{\boxtimes} & \boxtimes \\\cline{3-3}
\end{array}\right]}_{\textstyle\mathstrut{\bG_{34}^\top(\cdot)\bG_{34}}}
\stackrel{\bG_{45} }{\rightarrow}
\mathop{\left[\begin{array}{ccccc}
\boxtimes & \boxtimes & 0 & \bm{0} &\bm{0}\\
\boxtimes & \boxtimes &\boxtimes & \bm{0} & \bm{0}\\
0 & \boxtimes & \boxtimes & \bm{\boxtimes} & \textcolor{brown}{\bm{0} } \\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{0} & \textcolor{brown}{\bm{0} } & \bm{\boxtimes} & \bm{\boxtimes} \\
\end{array}\right]}_{\textstyle\mathstrut{\bT^{(k)}=\bG_{45}^\top(\cdot)\bG_{45}}}.
\end{aligned}
$$
\end{mdframed}
For a general $n\times n$ matrix, one can compute rotations $\bG_{12}, \bG_{23}, \ldots, \bG_{n-1,n}$ with the property that if $\bZ = \bG_{12}\bG_{23} \ldots\bG_{n-1,n}$, then $\bT^{(k-1)} = \bZ \bT^{(k)}\bZ^\top$ is a tridiagonal decomposition whose first column is given by $\bZ\be_1 = \bG_{12}\be_1 = \bQ^{(k)}\be_1$. The first columns of $\bZ$ and $\bQ^{(k)}$ are thus identical, and therefore $\bZ=\bQ^{(k)}$ by the implicit Q theorem under the following conditions:
\begin{itemize}
\item $\bT^{(k-1)}$ is unreduced with positive lower subdiagonals;
\item the QR decomposition used in the QR algorithm is the unique one in which the upper triangular factor has positive diagonals.
\end{itemize}
\paragraph{The complete algorithm}
For simplicity, we denote the construction of $\widetildebG_i$ such that $\widetildebG_i^\top \bx =\widetildebG_i^\top
\begin{bmatrix}
x_1\\x_2
\end{bmatrix}= \begin{bmatrix}
\boxtimes \\ 0
\end{bmatrix}$ by
$$
\widetildebG_i^\top =\text{givens}(x_1,x_2).
$$
In all iterations, $\widetildebG_i^\top $ is of size $2\times 2$. The $n\times n$ Givens rotation $\bG_{i,i+1}^\top$ is then constructed by
$$
\bG_{i,i+1}^\top=
G(\widetildebG_i^\top )=
\begin{bmatrix}
\bI_{i-1} & & \\
& \widetildebG_i^\top& \\
& & \bI_{n-i-1}
\end{bmatrix}.
$$
For further simplicity, we will denote $\bG_{i,i+1}^\top$ by $\bG_i^\top$, which, multiplying on the left of another matrix, modifies its $i$-th and $(i+1)$-th rows. The full procedure is formulated in Algorithm~\ref{alg:qr-algorithm-practical-tridiagonal-implicit-shift}, where $t_{ij}^{(k-1)}$ is the $(i,j)$-th entry of $\bT^{(k-1)}$.
\begin{algorithm}[H]
\caption{Practical QR Algorithm with Implicit Shift}
\label{alg:qr-algorithm-practical-tridiagonal-implicit-shift}
\begin{algorithmic}[1]
\Require $\bA\in \real^{n\times n}$ is real and symmetric;
\State $\bA=\bQ^{(0)}\bT^{(0)}\bQ^{(0)\top}$; \Comment{tridiagonal decomposition of $\bA$}
\For{$k=1,2,\ldots$}
\State Pick a shift $\mu^{(k)}$; \Comment{e.g., $\mu^{(k)}=t_{nn}^{(k-1)}$}
\State $x_1 = t_{11}-\mu^{(k)}, x_2=t_{21}$; \Comment{$t_{ij} = t^{(k-1)}_{ij}$}
\State $\bT^{(k)} = \bT^{(k-1)}$; \Comment{initialize $\bT^{(k)} $}
\For{$i=1:n-1$}
\State $\widetildebG_i^\top = \text{givens}(x_1,x_2)$;
\State $\bG_{i}^\top = G(\widetildebG_i^\top)$;
\State $\bT^{(k)} = \bG_{i}^\top\bT^{(k)}\bG_{i}$;
\If{$i<n-1$}
\State $x_1=t_{i+1,i}, x_2=t_{i+2,i}$;\Comment{entries of the current $\bT^{(k)}$, i.e., the bulge}
\EndIf
\EndFor
\State $\bQ^{(k)\top} = \bG_{n-1}^\top\ldots\bG_1^\top$; \Comment{this results in $\bT^{(k)} =\bQ^{(k)\top} \bT^{(k-1)}\bQ^{(k)} $}
\EndFor
\end{algorithmic}
\end{algorithm}
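A single implicit sweep can be sketched in NumPy, with the matrices stored dense for clarity rather than efficiency (the function name is ours; only the \textit{direction} determined by the shift enters through the first rotation, and the shift is never subtracted from the matrix explicitly):

```python
import numpy as np

def implicit_qr_sweep(T, mu):
    """One implicit shifted QR sweep (bulge chasing) on symmetric tridiagonal T.

    Returns (T_new, Q) with T_new = Q.T @ T @ Q.
    """
    n = T.shape[0]
    T = T.astype(float).copy()
    Q = np.eye(n)
    # the shift only determines the direction of the first rotation
    x1, x2 = T[0, 0] - mu, T[1, 0]
    for i in range(n - 1):
        r = np.hypot(x1, x2)
        c, s = ((1.0, 0.0) if r == 0.0 else (x1 / r, x2 / r))
        Gt = np.eye(n)                       # Gt plays the role of G_{i,i+1}^T
        Gt[i:i+2, i:i+2] = [[c, s], [-s, c]]
        T = Gt @ T @ Gt.T                    # orthogonal similarity transform
        Q = Q @ Gt.T                         # accumulate Q = G_1 G_2 ...
        if i < n - 2:
            x1, x2 = T[i+1, i], T[i+2, i]    # chase the bulge downward
    return T, Q
```

One sweep returns a matrix that is orthogonally similar to the input and again tridiagonal, in accordance with the bulge-chasing picture above.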
Suppose that at iteration $p$, $\bT^{(p)}$ has converged to a diagonal matrix (within machine precision). Writing out the updates of each iteration:
$$
\left.
\begin{aligned}
\bT^{(p)}&=\bQ^{(p)\top}\bT^{(p-1)} \bQ^{(p)}\\
\bT^{(p-1)}&=\bQ^{(p-1)\top}\bT^{(p-2)} \bQ^{(p-1)}\\
\vdots &= \vdots \\
\bT^{(1)}&=\bQ^{(1)\top}\bT^{(0)} \bQ^{(1)}\\
\bT^{(0)}&=\bQ^{(0)\top}\bA\bQ^{(0)}
\end{aligned}
\right\}
\leadtosmall
\bA =
\underbrace{\bQ^{(0)}\ldots \bQ^{(p)}}_{\bQ}
\bT^{(p)}
\underbrace{(\bQ^{(0)}\ldots \bQ^{(p)})^\top}_{\bQ^\top}
$$
is the approximate spectral decomposition of the real symmetric matrix $\bA$, where $\bQ$ is orthogonal and contains the eigenvectors of $\bA$, and $\bT^{(p)}$ is diagonal and contains the eigenvalues of $\bA$ (Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}).
\paragraph{Decouple}
However, while we can ensure that $\bT^{(k-1)}$ has nonnegative lower subdiagonals (via the specific QR decomposition favored above), it may happen that $\bT^{(k-1)}$ is \textit{reduced}, i.e., some lower subdiagonals are 0. In this case, the eigenproblem splits into a pair of smaller problems. For example, when $\bT^{(k-1)}_{i+1,i}=0$, the ``practical" QR algorithm can be applied separately to the submatrices
$$
\bT^{(k-1)}_{1:i,1:i} \qquad \text{and} \qquad \bT^{(k-1)}_{i+1:n,i+1:n}.
$$
And the eigenvalues can be obtained by
$$
\Lambda\left(\bT^{(k-1)} \right) = \Lambda\left(\bT^{(k-1)}_{1:i,1:i} \right) \cup \Lambda\left(\bT^{(k-1)}_{i+1:n,i+1:n}\right),
$$
where $\Lambda(\cdot)$ is the spectrum of a matrix (Definition~\ref{definition:spectrum}, p.~\pageref{definition:spectrum}).
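In code, detecting where the problem decouples amounts to scanning the subdiagonal for negligible entries. The following sketch uses a relative tolerance (the helper name and the tolerance rule are our own choices):

```python
import numpy as np

def split_points(T, tol=1e-12):
    """Indices i where the subdiagonal t_{i+1,i} is negligible, so that the
    tridiagonal eigenproblem decouples into independent diagonal blocks."""
    sub = np.abs(np.diag(T, -1))
    scale = np.abs(np.diag(T))
    return [i for i in range(len(sub))
            if sub[i] <= tol * (scale[i] + scale[i + 1])]
```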
\section{Jacobi's Method}\label{section:jacobi-spectral}
Jacobi's method is one of the oldest methods to compute the eigenvalues of a matrix; it was introduced in 1846 by \citep{jacobi1846theory}. The idea is to diagonalize a small submatrix at a time so that the full matrix is diagonalized eventually. The mathematical measure of the quantity of the reduction is the \textit{off-diagonal norm}\index{Jacobi's rotation}\index{Off-diagonal norm}
$$
\off(\bA) = \sqrt{\sum_{i=1}^{n} \sum_{j=1, j\neq i}^{n} a_{ij}^2 },
$$
i.e., the Frobenius norm of the off-diagonal entries.
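The off-diagonal norm is straightforward to compute; a minimal sketch (the function name is ours):

```python
import numpy as np

def off(A):
    """Frobenius norm of the off-diagonal part of A."""
    return np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))
```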
The method iteratively reduces this off-diagonal quantity and relies on the Jacobi rotation
$$
\begin{blockarray}{cccccccccccc}
\begin{block}{c[cccccccccc]c}
&1 & & & & & & & & & &\\
&& \ddots & & & & && & & &\\
&& & 1 & & & & && & & \\
&& & & c & & & & s & & &k\\
&&& & & 1 & & && & &\\
\bJ_{kl}=&&& & & &\ddots & && & &\\
&&& & & & & 1&& & &\\
&&& & -s & & & &c& & & l\\
&&& & & & & & &1 & &\\
&&& & & & & & & &\ddots &\\
\end{block}
&& & & k & & & & l & & &\\
\end{blockarray},
$$
which has the same form as the Givens rotation (Definition~\ref{definition:givens-rotation-in-qr}, p.~\pageref{definition:givens-rotation-in-qr}). The difference lies in the usage: in the Jacobi rotation, the angle $\theta$ for $s = \sin \theta$ and $c=\cos \theta$ is chosen to make a $2\times 2$ submatrix of $\bJ^\top \bA\bJ$ diagonal.
\subsection{The 2 by 2 Case}
To see how the Jacobi rotation works, suppose we are looking at the $2\times 2$ submatrix of a symmetric matrix
$$
\bA(k,l):=
\begin{bmatrix}
a_{kk} & a_{kl} \\
a_{lk} & a_{ll}
\end{bmatrix}
=
\begin{bmatrix}
a_{kk} & a_{kl} \\
a_{kl} & a_{ll}
\end{bmatrix}.
$$
Then $\theta$ can be computed such that
$$
\begin{aligned}
\bJ^\top \bA(k,l)\bJ
&=
\begin{bmatrix}
c & -s \\
s & c
\end{bmatrix}
\begin{bmatrix}
a_{kk} & a_{kl} \\
a_{kl} & a_{ll}
\end{bmatrix}
\begin{bmatrix}
c & s \\
-s & c
\end{bmatrix}\\
&=
\begin{bmatrix}
c^2 a_{kk}+s^2 a_{ll} -2cs \cdot a_{kl} & (c^2-s^2)a_{kl}+cs (a_{kk}-a_{ll}) \\
(c^2-s^2)a_{kl} +cs (a_{kk}-a_{ll}) & s^2 a_{kk} +c^2 a_{ll}+2cs \cdot a_{kl}
\end{bmatrix}
=
\begin{bmatrix}
\neq 0 & 0 \\
0 & \neq 0
\end{bmatrix}.
\end{aligned}
$$
If $a_{kl}=0$, then we can just set $c=1,s=0$, and the submatrix $\bA(k,l)$ remains unchanged. Otherwise, both $c\neq 0$ and $s\neq 0$ must hold. Dividing the off-diagonal element $(c^2-s^2)a_{kl} +cs (a_{kk}-a_{ll})$ by $c^2 a_{kl}$, it follows that
$$
1- \frac{s^2}{c^2} + \frac{s}{c}\cdot \frac{a_{kk}-a_{ll}}{a_{kl}} = 0.
$$
Let
$$
\tan \theta = t = \frac{\sin \theta}{\cos \theta} = \frac{s}{c}
\qquad
\text{and}
\qquad
\tau = \frac{a_{ll}-a_{kk}}{2a_{kl}},
$$
it suffices to solve the equation
$$
t^2+2\tau t -1=0.
$$
This gives
$$
t = -\tau \pm \sqrt{\tau^2+1}.
$$
We notice that
$$
||\bA - \bJ_{kl}^\top \bA\bJ_{kl}||_F^2 = 4(1-c) \sum_{i\neq k,l}(a_{ik}^2+a_{il}^2) + 2a_{kl}^2 /c^2,
$$
and
$$
c = \frac{1}{\sqrt{1+t^2}}.
$$
Therefore, the larger $c$ is, the smaller $||\bA - \bJ_{kl}^\top \bA\bJ_{kl}||_F^2$ becomes. This implies we should choose the root $t$ with the smaller magnitude:
$$
t_{min}=
\left\{
\begin{aligned}
&-\tau + \sqrt{\tau^2+1}, \qquad \text{if $\tau\geq 0$};\\
&-\tau - \sqrt{\tau^2+1}, \qquad \text{if $\tau< 0$}.\\
\end{aligned}
\right.
$$
From the discussion above, we can define the \textit{computeJacobiRotation} function that computes the Jacobi rotation from a submatrix of $\bA$.
\begin{algorithm}[H]
\caption{Compute Jacobi Rotation Given the Submatrix}
\label{alg:jacobi-submatrix}
\begin{algorithmic}[1]
\Require $\bA\in \real^{n\times n}$ is real and symmetric, ($k,l$) such that $1\leq k< l\leq n$;
\Function{computeJacobiRotation}{$\bA$, $k,l$}
\If{$a_{kl}\neq 0$}
\State $\tau = (a_{ll} - a_{kk})/(2a_{kl})$
\If{$\tau \geq 0$}
\State $t=-\tau + \sqrt{\tau^2+1}$;
\Else
\State $t=-\tau - \sqrt{\tau^2+1}$;
\EndIf
\State $c=1/\sqrt{1+t^2}, s=tc$;
\Else
\State $c=1,s=0$;
\EndIf
\State Output $c,s$;
\EndFunction
\end{algorithmic}
\end{algorithm}
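Algorithm~\ref{alg:jacobi-submatrix} translates directly into NumPy (an illustrative sketch; the function name is ours):

```python
import numpy as np

def compute_jacobi_rotation(A, k, l):
    """Return (c, s) so that J^T @ A @ J is zero at (k, l) and (l, k)."""
    if A[k, l] != 0.0:
        tau = (A[l, l] - A[k, k]) / (2.0 * A[k, l])
        # root of t^2 + 2*tau*t - 1 = 0 with the smaller magnitude
        t = -tau + np.sqrt(tau**2 + 1) if tau >= 0 else -tau - np.sqrt(tau**2 + 1)
        c = 1.0 / np.sqrt(1 + t**2)
        s = t * c
    else:
        c, s = 1.0, 0.0
    return c, s
```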
\subsection{The Complete Jacobi's Method}
The name ``complete" comes from the terminology of complete pivoting, which searches for the largest element in the matrix to pivot (Section~\ref{section:complete-pivoting}, p.~\pageref{section:complete-pivoting}). At each iteration, we need to decide the submatrix $\bA(k,l)$ to diagonalize. In the complete Jacobi's method, we choose $(k,l)$ such that $a_{kl}^2$ is maximal, hoping that the reduction of the off-diagonal quantity is maximal. The complete search for the largest magnitude defines Algorithm~\ref{alg:jacobi-complete-alg}.
\begin{algorithm}[H]
\caption{Complete Jacobi's Method}
\label{alg:jacobi-complete-alg}
\begin{algorithmic}[1]
\Require $\bA\in \real^{n\times n}$ is real and symmetric, a positive tolerance $tol$ such that $\delta = tol \cdot ||\bA||_F$;
\State $\bQ = \bI_n$;
\State $i=0$;
\State $\bLambda = \bA$;
\While{$\off(\bLambda)> \delta$}
\State Choose $(k,l)$, $k\neq l$, so that $a_{kl}^2 = \max_{i\neq j} a_{ij}^2$; \Comment{$a_{ij}$ is the $(i,j)$-th entry of $\bLambda$}
\State Compute $c,s$ from \textit{computeJacobiRotation}($\bLambda, k,l$);
\State Decide the $n\times n$ Jacobi's rotation $\bJ_{kl}$ by $c,s$;
\State $\bLambda = \bJ_{kl}^\top \bLambda \bJ_{kl}$;
\State $\bQ = \bQ\bJ_{kl}$;
\State Compute $\off(\bLambda^{(i)})^2 $; \Comment{which we will show converges linearly to 0.}
\State $i=i+1$;
\EndWhile
\State Output the sequence $\off(\bLambda^{(i)})^2 $, approximated diagonal $\bLambda$ and orthogonal $\bQ$;
\end{algorithmic}
\end{algorithm}
The algorithm computes the spectral decomposition by outputting $\bA = \bQ\bLambda\bQ^\top$.
It is not hard to see that each iteration requires $O(n^2)$ flops to search for the largest-magnitude entry $(k,l)$ and $O(n)$ flops to update the iterate. We shall notice that although a symmetric pair of zeros is introduced into the matrix at each iteration,
previous zeros may be destroyed. What matters, however, is that the off-diagonal quantity decreases steadily.
Suppose the index $(k,l)$ is chosen and the Jacobi's rotation $\bJ_{kl}$ is constructed such that
$$
\bLambda_+ = \bJ_{kl}^\top\bLambda \bJ_{kl}.
$$
Since the Jacobi's rotation is orthogonal, it follows that
$$
||\bLambda_+||_F^2 = ||\bLambda||_F^2
$$
and
$$
\off(\bLambda_+)^2 = \off(\bLambda)^2 -2a_{kl}^2.
$$
As $a_{kl}$ has largest magnitude in $\bLambda$, we also have
$$
\off(\bLambda)^2 \leq n(n-1)a_{kl}^2.
$$
Therefore,
$$
\off(\bLambda_+)^2= \off(\bLambda)^2 -2a_{kl}^2 \leq \off(\bLambda)^2 - \frac{2}{n(n-1)} \off(\bLambda)^2 = \left(1-\frac{2}{n(n-1)}\right)\off(\bLambda)^2.
$$
By induction, this implies that the Jacobi's method reduces the off-diagonal quantity after $i$ iterations by
$$
\off(\bLambda^{(i)})^2 \leq \left(1-\frac{2}{n(n-1)}\right)^i \off(\bLambda^{(0)})^2
$$
and
$$
\frac{|\off(\bLambda^{(i)})^2 - 0|}{|\off(\bLambda^{(i-1)})^2 -0|} \leq 1-\frac{2}{n(n-1)} \in (0,1),
$$
such that the sequence $\off(\bLambda^{(i)})^2 $ converges at least linearly to 0 (Definition~\ref{definition:linear-convergence}, p.~\pageref{definition:linear-convergence}). However, in practice, \citep{henrici1958speed, schonhage1964quadratic, van1966quadratic} show that quadratic convergence (Definition~\ref{definition:quadratic-convergence}, p.~\pageref{definition:quadratic-convergence}) can be obtained such that
$$
\mathop{\lim}_{i\rightarrow \infty} \frac{|\off(\bLambda^{(i+\frac{n(n-1)}{2})})^2|}{|\off(\bLambda^{(i)})^2|^2} = c,
$$
where $c$ is a constant. We shall not give details.
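A minimal NumPy sketch may clarify the procedure; the function names are our own, and the rotation follows the construction of \textit{computeJacobiRotation} above:

```python
import numpy as np

def off(A):
    """Frobenius norm of the off-diagonal part of A."""
    return np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))

def jacobi_rotation(A, k, l):
    """c, s such that the (k,l)-plane rotation zeroes A[k,l]."""
    if A[k, l] != 0.0:
        tau = (A[l, l] - A[k, k]) / (2.0 * A[k, l])
        if tau >= 0:
            t = 1.0 / (tau + np.sqrt(1.0 + tau * tau))
        else:
            t = -1.0 / (-tau + np.sqrt(1.0 + tau * tau))
        c = 1.0 / np.sqrt(1.0 + t * t)
        s = t * c
    else:
        c, s = 1.0, 0.0
    return c, s

def complete_jacobi(A, tol=1e-12):
    """Complete (pivoted) Jacobi's method: returns Q, Lam with A = Q Lam Q^T."""
    A = A.astype(float).copy()
    n = A.shape[0]
    Q = np.eye(n)
    delta = tol * np.linalg.norm(A, 'fro')
    while off(A) > delta:
        # complete pivoting: pick the largest off-diagonal magnitude
        M = np.abs(A - np.diag(np.diag(A)))
        k, l = np.unravel_index(np.argmax(M), M.shape)
        c, s = jacobi_rotation(A, k, l)
        J = np.eye(n)
        J[k, k] = J[l, l] = c
        J[k, l] = s
        J[l, k] = -s
        A = J.T @ A @ J      # off(A)^2 drops by 2 * a_kl^2
        Q = Q @ J
    return Q, A

rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5)); S = (S + S.T) / 2
Q, Lam = complete_jacobi(S)
print(np.allclose(Q @ Lam @ Q.T, S))  # True
```

The diagonal of the returned $\bLambda$ matches the eigenvalues of the input up to ordering, as the convergence analysis above guarantees.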
\subsection{The Cyclic-by-Row Jacobi's Method}
We mentioned in the last section that most of the cost of the complete Jacobi's method comes from the search for the element with the largest magnitude, which takes $O(n^2)$ flops per iteration.
To avoid this overhead, it is reasonable to update in a row-by-row fashion that cyclically sweeps over the $n(n-1)/2$ upper triangular entries. This is known as the \textit{cyclic-by-row} algorithm and the procedure is shown in Algorithm~\ref{alg:jacobi-cyclicrow-alg}.
\begin{algorithm}[H]
\caption{Cyclic-by-Row Jacobi's Method}
\label{alg:jacobi-cyclicrow-alg}
\begin{algorithmic}[1]
\Require $\bA\in \real^{n\times n}$ is real and symmetric, a positive tolerance $tol$ such that $\delta = tol \cdot ||\bA||_F$;
\State $\bQ = \bI_n$;
\State $i=0$;
\State $\bLambda=\bA$;
\While{$\off(\bLambda)> \delta$}
\For{$k=1:n-1$}
\For{$l=k+1:n$}
\State Compute $c,s$ from \textit{computeJacobiRotation}($\bLambda, k,l$);
\State Decide the $n\times n$ Jacobi's rotation $\bJ_{kl}$ by $c,s$;
\State $\bLambda = \bJ_{kl}^\top \bLambda \bJ_{kl}$;
\State $\bQ = \bQ\bJ_{kl}$;
\State Compute $\off(\bLambda^{(i)})^2 $;
\State $i=i+1$;
\EndFor
\EndFor
\EndWhile
\State Output the sequence $\off(\bLambda^{(i)})^2 $, approximated diagonal $\bLambda$ and orthogonal $\bQ$;
\end{algorithmic}
\end{algorithm}
Similarly, \citep{wilkinson1962note, van1966quadratic} show that the cyclic-by-row algorithm converges \textit{quadratically}. Moreover, since it does not need to search for the largest element in the matrix, the per-iteration cost is reduced.
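A sketch of the cyclic-by-row variant under the same conventions; one sweep applies a rotation to every upper-triangular pair $(k,l)$ in row order (function name is our own):

```python
import numpy as np

def cyclic_jacobi(A, tol=1e-12):
    """Cyclic-by-row Jacobi's method: returns Q, Lam with A = Q Lam Q^T."""
    A = A.astype(float).copy()
    n = A.shape[0]
    Q = np.eye(n)
    delta = tol * np.linalg.norm(A, 'fro')
    off = lambda M: np.sqrt(np.sum(M**2) - np.sum(np.diag(M)**2))
    while off(A) > delta:
        for k in range(n - 1):          # sweep the upper triangle row by row
            for l in range(k + 1, n):
                if A[k, l] == 0.0:
                    continue            # this pair is already zero
                tau = (A[l, l] - A[k, k]) / (2.0 * A[k, l])
                t = 1.0 / (tau + np.sqrt(1 + tau * tau)) if tau >= 0 \
                    else -1.0 / (-tau + np.sqrt(1 + tau * tau))
                c = 1.0 / np.sqrt(1 + t * t)
                s = t * c
                J = np.eye(n)
                J[k, k] = J[l, l] = c
                J[k, l] = s
                J[l, k] = -s
                A = J.T @ A @ J
                Q = Q @ J
    return Q, A

rng = np.random.default_rng(1)
S = rng.standard_normal((6, 6)); S = (S + S.T) / 2
Q, Lam = cyclic_jacobi(S)
print(np.allclose(Q @ Lam @ Q.T, S))  # True
```

The only change from the complete variant is the pivot order: no $O(n^2)$ search is needed before each rotation.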
\subsection{Other Issues}
In practice, when computing on a $p$-processor machine, it is reasonable to carry out the Jacobi's algorithm in a block fashion so that a parallelizable method can be obtained. We refer this issue to \citep{bischof1986two, shroff1989convergence, golub2013matrix} and also its counterpart in SVD computation \citep{van1985block}.
\section{Computing the SVD}
\subsection{Implicit Shifted QR Algorithm}
We previously have shown that any symmetric matrix can be reduced to tridiagonal form via a sequence of Householder reflectors applied on the left and the right of the matrix, which is a special case of the Hessenberg decomposition (Section~\ref{section:compute-tridiagonal}, p.~\pageref{section:compute-tridiagonal}). This reduces the cost of the QR algorithm applied to computing the spectral decomposition of a matrix. This two-phase computation is not unique to the spectral decomposition but has its counterpart for the SVD. Since the work of Golub, Kahan, and others on the bidiagonalization in the 1960s (Theorem~\ref{theorem:Golub-Kahan-Bidiagonalization-decom}, p.~\pageref{theorem:Golub-Kahan-Bidiagonalization-decom}), an analogous two-phase approach has been standard for the SVD. For computing the SVD, the matrix is first reduced to bidiagonal form \footnote{The bidiagonal matrix we discuss in this section is implicitly an upper bidiagonal matrix. We will suppress the terminology for simplicity.} and then the bidiagonal matrix is diagonalized:
$$
\begin{bmatrix}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\end{bmatrix}
\mathop{\longrightarrow}^{\text{phase 1} }_{\text{bidiagonalize} }
\begin{bmatrix}
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes & 0 & 0\\
0 & 0 & \boxtimes & \boxtimes & 0\\
0 & 0 & 0 & \boxtimes & \boxtimes\\
0 & 0 & 0 & 0 & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0
\end{bmatrix}
\mathop{\longrightarrow}^{\text{phase 2} }_{\text{diagonalize} }
\begin{bmatrix}
\boxtimes & 0 & 0 & 0 & 0\\
0 & \boxtimes & 0 & 0 & 0\\
0 & 0 & \boxtimes & 0 & 0\\
0 & 0 & 0 & \boxtimes & 0\\
0 & 0 & 0 & 0 & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0
\end{bmatrix}.
$$
The bidiagonalization differs from the tridiagonalization in that it does not require the two orthogonal matrices on the left and on the right to be transposes of each other; even the shapes of the two orthogonal matrices can be different. This is exactly what we want for the SVD. For the second phase of the SVD computation, we will go directly to the solution via the implicit shift QR algorithm; the development of this final procedure is similar to what we have developed for the spectral decomposition.
\paragraph{Phase 2 of SVD}
Recall that $\bT=\bB^\top\bB$ is tridiagonal if $\bB$ is bidiagonal (Lemma~\ref{lemma:construct-triangular-from-bidia}, p.~\pageref{lemma:construct-triangular-from-bidia}). Following the tridiagonal update in the QR algorithm (Section~\ref{section:implifit-shift-qr}), suppose in the $k$-th iteration, we have the bidiagonal matrix $\bB^{(k-1)}$ and its tridiagonal companion \textcolor{blue}{$\bT^{(k-1)} = \bB^{(k-1)\top}\bB^{(k-1)}$}, for $k=1,2,\ldots$. The tridiagonal update is carried out by a set of Givens rotations applied on the left and right iteratively, as shown in Equation~\eqref{equation:tridia-update-implicit} by a $5\times 5$ example, where the blue $\textcolor{blue}{\bm{\boxtimes}}$ indicates the bulge introduced and the \fbox{boxed} vector indicates how the Givens matrix is constructed:
$$
\begin{aligned}
\bT^{(k-1)}= \begin{sbmatrix}{\bT^{(k-1)}}
\boxtimes & \boxtimes & 0 & 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & 0 & 0\\
0 & \boxtimes & \boxtimes & \boxtimes & 0\\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12} }{\rightarrow}
\mathop{\left[\begin{array}{ccccc}
\bm{\boxtimes} & \bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}} & \bm{0} & \bm{0}\\\cline{1-1}
\multicolumn{1}{|c|}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0}\\
\multicolumn{1}{|c|}{\textcolor{blue}{\bm{\boxtimes}}} & \bm{\boxtimes} & \boxtimes & \boxtimes & 0 \\\cline{1-1}
\bm{0} & \bm{0} & \boxtimes & \boxtimes & \boxtimes\\
\bm{0} & \bm{0} & 0 & \boxtimes & \boxtimes
\end{array}\right]}_{\textstyle\mathstrut{\bG_{12}^\top(\cdot)\bG_{12}}}
\stackrel{\bG_{23} }{\rightarrow}
\mathop{\left[\begin{array}{ccccc}
\boxtimes & \bm{\boxtimes} & \textcolor{brown}{\bm{0}} & 0 & 0\\
\bm{\boxtimes} & \bm{\boxtimes} &\bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes} } & \bm{0}\\\cline{2-2}
\textcolor{brown}{\bm{0}} & \multicolumn{1}{|c|}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} \\
0 & \multicolumn{1}{|c|}{\textcolor{blue}{\bm{\boxtimes}}} & \bm{\boxtimes} & \boxtimes & \boxtimes\\\cline{2-2}
0 & \bm{0} & \bm{0} & \boxtimes & \boxtimes
\end{array}\right]}_{\textstyle\mathstrut{\bG_{23}^\top(\cdot)\bG_{23}}}\\
&\stackrel{\bG_{34} }{\rightarrow}
\mathop{\left[\begin{array}{ccccc}
\boxtimes & \boxtimes & \bm{0} & \bm{0} & 0\\
\boxtimes & \boxtimes &\bm{\boxtimes} & \textcolor{brown}{\bm{0} }& 0\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} &\bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes} } \\\cline{3-3}
\bm{0} & \textcolor{brown}{\bm{0} } & \multicolumn{1}{|c|}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \multicolumn{1}{|c|}{\textcolor{blue}{\bm{\boxtimes}}} & \bm{\boxtimes} & \boxtimes \\\cline{3-3}
\end{array}\right]}_{\textstyle\mathstrut{\bG_{34}^\top(\cdot)\bG_{34}}}
\stackrel{\bG_{45} }{\rightarrow}
\mathop{\left[\begin{array}{ccccc}
\boxtimes & \boxtimes & 0 & \bm{0} &\bm{0}\\
\boxtimes & \boxtimes &\boxtimes & \bm{0} & \bm{0}\\
0 & \boxtimes & \boxtimes & \bm{\boxtimes} & \textcolor{brown}{\bm{0} } \\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{0} & \textcolor{brown}{\bm{0} } & \bm{\boxtimes} & \bm{\boxtimes} \\
\end{array}\right]}_{\textstyle\mathstrut{\bG_{45}^\top(\cdot)\bG_{45}}}=\bT^{(k)}.
\end{aligned}
$$
That is,
$$
\begin{aligned}
\bT^{(k)} &=
(\bG_{45}^\top\bG_{34}^\top\bG_{23}^\top\bG_{12}^\top)
\textcolor{blue}{\bT^{(k-1)}}
(\bG_{12}\bG_{23}\bG_{34}\bG_{45})\\
&=
(\bG_{45}^\top\bG_{34}^\top\bG_{23}^\top\bG_{12}^\top)
\textcolor{blue}{(\bB^{(k-1)\top}\bB^{(k-1)})}
(\bG_{12}\bG_{23}\bG_{34}\bG_{45}).
\end{aligned}
$$
If we can show that $\bT^{(k)}$ can also be decomposed into a product of bidiagonal matrices, then the ``tridiagonal update" can be replaced by a ``bidiagonal update". Two problems must then be addressed:
\begin{itemize}
\item Decompose $\bT^{(k)}$ into the bidiagonal form if $\bT^{(k-1)}$ has the bidiagonal form: $\bT^{(k-1)} = \bB^{(k-1)\top}\bB^{(k-1)}$;
\item Find the Givens rotations from the bidiagonal matrix $\bB^{(k-1)}$ rather than the tridiagonal one $\bT^{(k-1)}$.
\end{itemize}
It is not hard to see that what we want to prove is either
$$
\text{Choice 1:} \qquad \bB^{(k)\top} = (\bG_{45}^\top\bG_{34}^\top\bG_{23}^\top\bG_{12}^\top)\bB^{(k-1)\top}
$$
or
$$
\text{Choice 2:} \qquad \bB^{(k)\top} = (\bG_{45}^\top\bG_{34}^\top\bG_{23}^\top\bG_{12}^\top)\bB^{(k-1)\top}
\textcolor{blue}{\bV_{12}\bV_{23}\bV_{34}\bV_{45}}.
$$
I.e., one of the above choices is also lower bidiagonal if $ \bB^{(k-1)\top}$ is lower bidiagonal. The $\bV_{i,i+1}$'s are orthogonal such that they cancel out in $\bB^{(k)\top} \bB^{(k)} $. We will see that the second form of $\bB^{(k)\top} $ will be used (to chase the bulge).
Let
$$
\bT^{(k-1)} =
\begin{bmatrix}
a_1 & b_1 & & \ldots & 0 \\
b_1 & a_2 & \ddots & & \vdots \\
& \ddots & \ddots & \ddots & \\
\vdots & & \ddots & \ddots & b_{4} \\
0 & \ldots & & b_{4} & a_5
\end{bmatrix}=
\begin{sbmatrix}{\bB^{(k-1)\top}}
c_1 & 0 & & \ldots & 0 \\
d_1 & c_2 & \ddots & & \vdots \\
& \ddots & \ddots & \ddots & \\
\vdots & & \ddots & \ddots & 0 \\
0 & \ldots & & d_{4} & c_5
\end{sbmatrix}
\begin{sbmatrix}{\bB^{(k-1)} }
c_1 & d_1 & & \ldots & 0 \\
0 & c_2 & \ddots & & \vdots \\
& \ddots & \ddots & \ddots & \\
\vdots & & \ddots & \ddots & d_{4} \\
0 & \ldots & & 0 & c_5
\end{sbmatrix}.
$$
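As a quick numeric sanity check of this factorization (a NumPy sketch with arbitrary entries), $\bB^\top\bB$ is indeed symmetric tridiagonal, with $a_1 = c_1^2$ and $b_1 = c_1d_1$ as used below:

```python
import numpy as np

n = 5
rng = np.random.default_rng(2)
c = rng.standard_normal(n)          # diagonal of B^{(k-1)}
d = rng.standard_normal(n - 1)      # superdiagonal of B^{(k-1)}
B = np.diag(c) + np.diag(d, 1)      # upper bidiagonal
T = B.T @ B

# T vanishes outside the three central diagonals
i, j = np.indices(T.shape)
print(np.allclose(T[np.abs(i - j) > 1], 0.0))                          # True
# its leading entries match a_1 = c_1^2 and b_1 = c_1 d_1
print(np.isclose(T[0, 0], c[0]**2), np.isclose(T[1, 0], c[0] * d[0]))  # True True
```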
\paragraph{Introducing and chasing the bulge in the bidiagonal update}
Following the Givens rotations constructed in Section~\ref{section:implifit-shift-qr}, a Givens rotation is constructed by
$$
\bG_{12}^\top =
\begin{bmatrix}
\widetildebG_1^\top & \\
& \bI_{n-2}
\end{bmatrix},
\qquad
\text{where}
\qquad
\underbrace{\begin{bmatrix}
c & s \\
-s & c
\end{bmatrix}}_{\widetildebG_1^\top }
\begin{bmatrix}
c_1^2 - \mu^{(k)}\\
c_1d_1
\end{bmatrix}
=
\begin{bmatrix}
\boxtimes \\
0
\end{bmatrix}.
$$
Here, $\mu^{(k)}$ is the shift in the $k$-th iteration ($n=5$, $a_1 = c_1^2$, and $b_1=c_1d_1$ in our example). This will \textit{introduce a bulge} in $\bB^{(k-1)\top}$:
$$
\begin{aligned}
\begin{sbmatrix}{\bB^{(k-1)\top}}
\boxtimes & 0 & 0 & 0 & 0\\
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes & 0 & 0\\
0 & 0 & \boxtimes & \boxtimes & 0\\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12}^\top\times }{\rightarrow}
\begin{sbmatrix}{\bG_{12}^\top\bB^{(k-1)\top}}
\bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}} &\bm{0} & \bm{0} & \bm{0}\\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{0}\\
0 & \boxtimes & \boxtimes & 0 & 0 \\
0 & 0 & \boxtimes & \boxtimes & 0\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}.
\end{aligned}
$$
Then, we observe that $\bG_{23}$ rotating on the left will not help chase out the bulge\footnote{$\bG_{23}$ working on the left of $\bG_{12}^\top\bB^{(k-1)\top}$ will modify rows 2 and 3 of it, which will not introduce a zero back into the first row.}; instead, a right Givens rotation must be constructed such that
$$
\bV_{12}^\top =
\begin{bmatrix}
\widetildebV_1^\top & \\
& \bI_{n-2}
\end{bmatrix},
\qquad
\text{where}
\qquad
\underbrace{\begin{bmatrix}
c & s \\
-s & c
\end{bmatrix}}_{\widetildebV_1^\top}
\begin{bmatrix}
\Big(\bG_{12}^\top\bB^{(k-1)\top}\Big)_{11}\\
\Big(\bG_{12}^\top\bB^{(k-1)\top}\Big)_{12}
\end{bmatrix}
=
\begin{bmatrix}
\boxtimes \\
0
\end{bmatrix}.
$$
This results in
$$
\begin{aligned}
\begin{sbmatrix}{\bB^{(k-1)\top}}
\boxtimes & 0 & 0 & 0 & 0\\
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes & 0 & 0\\
0 & 0 & \boxtimes & \boxtimes & 0\\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12}^\top\times }{\rightarrow}
\begin{sbmatrix}{\bG_{12}^\top\bB^{(k-1)\top}}
\bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}} &\bm{0} & \bm{0} & \bm{0}\\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{0}\\
0 & \boxtimes & \boxtimes & 0 & 0 \\
0 & 0 & \boxtimes & \boxtimes & 0\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\times\bV_{12} }{\rightarrow}
\begin{sbmatrix}{\bG_{12}^\top\bB^{(k-1)\top} \bV_{12}}
\bm{\boxtimes} & \textcolor{brown}{\bm{0}} &\bm{0} & \bm{0} & \bm{0}\\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{0}\\
\textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \boxtimes & 0 & 0 \\
\bm{0} & \bm{0} & \boxtimes & \boxtimes & 0\\
\bm{0} & \bm{0}& 0 & \boxtimes & \boxtimes
\end{sbmatrix}.
\end{aligned}
$$
As long as $\bG_{12}^\top$ is constructed in the same way as in the ``tridiagonal update", then
$\bG_{12}^\top \bT^{(k-1)}\bG_{12}$
is equal to
$\bG_{12}^\top \bB^{(k-1)\top}\underbrace{\bV_{12}\bV_{12}^\top}_{\bI} \bB^{(k-1)}\bG_{12}$.
\paragraph{Step 2} Step 2 differs from what we did in the ``tridiagonal update". In the ``tridiagonal update", we construct a Givens matrix $\bG_{23}$ by chasing the bulge in $\bT^{(k-1)}$. Applying the implicit Q theorem again (Theorem~\ref{theorem:implicit-q-tridiagonal}, p.~\pageref{theorem:implicit-q-tridiagonal}), we can decide the Givens rotation from $\bB^{(k-1)}$ directly (\textit{there is no need to construct $\bT^{(k-1)}=\bB^{(k-1)\top}\bB^{(k-1)}$ explicitly}), now denoted by $\bU_{23}$:
$$
\bU_{23}^\top =
\begin{bmatrix}
1 & & \\
& \widetildebU_2^\top & \\
& & \bI_{n-3}
\end{bmatrix},
\qquad
\text{where}
\qquad
\underbrace{\begin{bmatrix}
c & s \\
-s & c
\end{bmatrix}}_{\widetildebU_2^\top }
\begin{bmatrix}
\Big(\bG_{12}^\top\bB^{(k-1)\top} \bV_{12}\Big)_{21}\\
\Big(\bG_{12}^\top\bB^{(k-1)\top} \bV_{12}\Big)_{31}
\end{bmatrix}
=
\begin{bmatrix}
\boxtimes \\
0
\end{bmatrix}.
$$
The process can go on, and the full example of chasing the bulge is shown as follows, where the blue $\textcolor{blue}{\bm{\boxtimes}}$ indicates the bulge introduced, the brown $\textcolor{brown}{\bm{0}}$ indicates the bulge that was chased out, and \textbf{boldface} indicates the values that have just been changed:
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10, frametitle={A Complete Example of Implicit QR Algorithm for SVD}]
$$
\begin{aligned}
\begin{sbmatrix}{\bB^{(k-1)\top}}
\boxtimes & 0 & 0 & 0 & 0\\
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes & 0 & 0\\
0 & 0 & \boxtimes & \boxtimes & 0\\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12}^\top\times }{\rightarrow}
\begin{sbmatrix}{\bG_{12}^\top\bB^{(k-1)\top}}
\bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}} &\bm{0} & \bm{0} & \bm{0}\\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{0}\\
0 & \boxtimes & \boxtimes & 0 & 0 \\
0 & 0 & \boxtimes & \boxtimes & 0\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\times\bV_{12} }{\rightarrow}
\begin{sbmatrix}{\bG_{12}^\top\bB^{(k-1)\top} \bV_{12}}
\bm{\boxtimes} & \textcolor{brown}{\bm{0}} &\bm{0} & \bm{0} & \bm{0}\\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{0}\\
\textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \boxtimes & 0 & 0 \\
\bm{0} & \bm{0} & \boxtimes & \boxtimes & 0\\
\bm{0} & \bm{0}& 0 & \boxtimes & \boxtimes
\end{sbmatrix}\\
&\stackrel{\bU_{23}^\top \times}{\rightarrow}
\begin{sbmatrix}{\bU_{23}^\top (\cdot) }
\boxtimes & 0 & 0 & 0 & 0\\
\bm{\boxtimes} & \bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}} & \bm{0} & \bm{0}\\
\textcolor{brown}{\bm{0}} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0} \\
0 & 0 & \boxtimes & \boxtimes & 0\\
0 & 0& 0 & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\times\bV_{23} }{\rightarrow}
\begin{sbmatrix}{\bU_{23}^\top(\cdot ) \bV_{23}}
\boxtimes & \bm{0} & \bm{0} & 0 & 0\\
\bm{\boxtimes} & \bm{\boxtimes} & \textcolor{brown}{\bm{0}} & \bm{0} & \bm{0}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0} & \bm{0} \\
0 & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \boxtimes & 0\\
0 & \bm{0}& \bm{0}& \boxtimes & \boxtimes
\end{sbmatrix}\\
\end{aligned}
$$
$$
\begin{aligned}
\qquad \qquad \qquad\qquad\,\,\,\,\,&\stackrel{\bU_{34}^\top \times}{\rightarrow}
\begin{sbmatrix}{\bU_{34}^\top(\cdot)}
\boxtimes & 0 & 0 & 0 & 0\\
\boxtimes & \boxtimes & 0 & 0 & 0\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}} & \bm{0} \\
\bm{0} & \textcolor{brown}{\bm{0}} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0}\\
0 & 0& 0& \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\times\bV_{34} }{\rightarrow}
\begin{sbmatrix}{\bU_{34}^\top(\cdot )\bV_{34}}
\boxtimes & 0 & \bm{0} & \bm{0} & 0\\
\boxtimes & \boxtimes & \bm{0} & \bm{0} & 0\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \textcolor{brown}{\bm{0}} & \bm{0} \\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{0}\\
0 & 0& \textcolor{blue}{\bm{\boxtimes}}& \bm{\boxtimes} & \boxtimes
\end{sbmatrix}\\
&\stackrel{\bU_{45}^\top \times}{\rightarrow}
\begin{sbmatrix}{\bU_{45}^\top(\cdot)}
\boxtimes & 0 & 0 & 0& 0\\
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes& 0 & 0\\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \textcolor{blue}{\bm{\boxtimes}}\\
\bm{0} & \bm{0} & \textcolor{brown}{\bm{0}} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
&\stackrel{\times\bV_{45} }{\rightarrow}
\begin{sbmatrix}{\bU_{45}^\top(\cdot) \bV_{45}}
\boxtimes & 0 & 0 & \bm{0}& \bm{0}\\
\boxtimes & \boxtimes & 0 & \bm{0}& \bm{0}\\
0 & \boxtimes & \boxtimes& \bm{0}& \bm{0}\\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \textcolor{brown}{\bm{0}}\\
\bm{0} & \bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix},
\end{aligned}
$$
where $\bG_{12}$ is used in the first step to indicate the same construction as in the ``tridiagonal update", and the $\bU_{i,i+1}$'s are used to indicate the different construction via the implicit Q theorem.
For clarity, we will denote $\bG_{12}$ by $\bU_{12}$ and $\widetildebG_1$ by $\widetildebU_1$ in the following algorithm.
\end{mdframed}
\paragraph{The complete algorithm}
Again, for simplicity, we denote the construction of $\widetildebU_i$ such that $\widetildebU_i^\top \bx =\widetildebU_i^\top
\begin{bmatrix}
x_1\\x_2
\end{bmatrix}= \begin{bmatrix}
\boxtimes \\ 0
\end{bmatrix}$ by
$$
\widetildebU_i^\top =\text{givens}(x_1,x_2).
$$
In all iterations, $\widetildebU_i^\top $ will be of size $2\times 2$. And we denote the construction of the $n\times n$ Givens rotation $\bU_{i,i+1}^\top$ by
$$
\bU_{i,i+1}^\top=
G(\widetildebU_i^\top )=
\begin{bmatrix}
\bI_{i-1} & & \\
& \widetildebU_i^\top& \\
& & \bI_{n-i-1}
\end{bmatrix}.
$$
For further simplicity, we will denote $\bU_{i,i+1}^\top$ by $\bU_i^\top$; multiplying it on the left of another matrix modifies the $i$-th and $(i+1)$-th rows implicitly. The procedure is then shown in Algorithm~\ref{alg:qr-svd-implifit}, where the shift $\mu^{(k)} = t_{nn}^{(k-1)}$ (i.e., the last diagonal of $\bT^{(k-1)}$) can be obtained directly from the entries of $\bB^{(k-1)\top}$ without forming $\bB^{(k-1)\top}\bB^{(k-1)}$ explicitly, i.e., $\mu^{(k)} =b_{n,n-1}^2+b_{nn}^2$.
\begin{algorithm}[H]
\caption{Golub-Kahan SVD: Practical QR Algorithm with Implicit Shift (Compare to Algorithm~\ref{alg:qr-algorithm-practical-tridiagonal-implicit-shift})}
\label{alg:qr-svd-implifit}
\begin{algorithmic}[1]
\Require $\bA\in \real^{n\times n}$ is real and symmetric;
\State \textcolor{blue}{$\bA^\top=\bV^{(0)}\bB^{(0)}\bQ^{(0)\top}$}; \Comment{bidiagonal decomposition of $\bA^\top$}
\For{$k=1,2,\ldots$}
\State Pick a shift $\mu^{(k)}$; \Comment{e.g., $\mu^{(k)}=t_{nn}^{(k-1)}=b_{n,n-1}^2+b_{nn}^2$}
\State $x_1 = t_{11}-\mu^{(k)}, x_2=t_{21}$; \Comment{$t_{ij} = t^{(k-1)}_{ij}$}
\State $\bB^{(k)} = \bB^{(k-1)}$; \Comment{initialize $\bB^{(k)} $}
\For{$i=1:n-1$}
\State $\bU_{i}^\top = G(\widetildebU_i^\top)$, where $\widetildebU_i^\top = \text{givens}(x_1,x_2)$;
\State $\bB^{(k)\top} = \bU_{i}^\top\bB^{(k)\top}$; \Comment{left update}
\State $x_1 = b_{ii}, x_2=b_{i,i+1}$; \Comment{$b_{ij} = b^{(k)\top}_{ij}$}
\State $\bV_{i}\textcolor{blue}{^\top} = G(\widetildebV_i^\top)$ where $\widetildebV_i\textcolor{blue}{^\top} = \text{givens}(x_1,x_2) \leadtosmall \bV_i = (\bV_i^\top)^\top$;
\State $\bB^{(k)\top} = \bU_{i}^\top\bB^{(k)\top}\bV_i$; \Comment{right update}
\If{$i<n-1$}
\State $x_1=b_{i+1,i}, x_2=b_{i+2,i}$; \Comment{$b_{ij} = b^{(k)\top}_{ij}$}
\EndIf
\EndFor
\State $\bU^{(k)\top} = \bU_{n-1}^\top\ldots \bU_1^\top$;
\State $\bV^{(k)} = \bV_{1}\ldots\bV_{n-1}$; \Comment{this results in $\bB^{(k)^\top} =\bU^{(k)\top} \bB^{(k-1)^\top}\bV^{(k)} $}
\EndFor
\end{algorithmic}
\end{algorithm}
Again, suppose that at iteration $p$, $\bB^{(p)}$ has converged to a diagonal matrix (within the machine error). Then, writing out the updates of each iteration:
$$
\left.
\begin{aligned}
\bB^{(p)\top}&=\bU^{(p)\top}\bB^{(p-1)\top} \bV^{(p)}\\
\bB^{(p-1)\top}&=\bU^{(p-1)\top}\bB^{(p-2)\top} \bV^{(p-1)}\\
\vdots &= \vdots \\
\bB^{(1)\top}&=\bU^{(1)\top}\bB^{(0)\top} \bV^{(1)}\\
\bB^{(0)\top}&=\bU^{(0)\top}\bA\bV^{(0)}
\end{aligned}
\right\}
\longleftrightarrow
\left\{
\begin{aligned}
\bT^{(p)}&=\bB^{(p)\top}\bB^{(p)}&=&\bU^{(p)\top}\bT^{(p-1)} \bU^{(p)}\\
\bT^{(p-1)}&=\bB^{(p-1)\top}\bB^{(p-1)}&=&\bU^{(p-1)\top}\bT^{(p-2)} \bU^{(p-1)}\\
\vdots &= \vdots \\
\bT^{(1)}&=\bB^{(1)\top}\bB^{(1)}&=&\bU^{(1)\top}\bT^{(0)} \bU^{(1)}\\
\bT^{(0)}&=\bB^{(0)\top}\bB^{(0)}&=&\bU^{(0)\top}\textcolor{blue}{\bA\bA^\top} \bU^{(0)}
\end{aligned}
\right.
$$
This yields
$$
\bB^{(p)\top} \approx \bB^{(p)} =
\underbrace{\bU^{(p)\top} \ldots \bU^{(0)\top}}_{\bU^\top}
\bA
\underbrace{\bV^{(0)}\bV^{(1)}\ldots\bV^{(p)}}_{\bV}
\leadto
\left\{
\begin{aligned}
\bA &=\bU \bB^{(p)}\bV^\top\\
\bA\bA^\top &= \bU \bB^{(p)}\bB^{(p)}\bU^\top
\end{aligned}
\right.
$$
which approximates the SVD of $\bA$.
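The bulge-introducing and bulge-chasing procedure can be sketched numerically. The following is a minimal NumPy sketch of one implicit-shift step, working on the upper bidiagonal $\bB$ rather than its transpose (so left rotations play the role of the $\bU_i^\top$'s and right rotations the role of the $\bV_i$'s); helper names are our own:

```python
import numpy as np

def givens(x1, x2):
    """c, s with [[c, s], [-s, c]] @ [x1, x2] = [r, 0]."""
    r = np.hypot(x1, x2)
    return (1.0, 0.0) if r == 0.0 else (x1 / r, x2 / r)

def rot(n, i, c, s):
    """n x n rotation acting on the (i, i+1) plane."""
    G = np.eye(n)
    G[i, i] = G[i + 1, i + 1] = c
    G[i, i + 1] = s
    G[i + 1, i] = -s
    return G

def golub_kahan_step(B, mu):
    """One implicit-shift step on an upper-bidiagonal B (n >= 3)."""
    B = B.copy()
    n = B.shape[0]
    # first rotation from the first column of T - mu*I, with T = B^T B
    y1, y2 = B[0, 0]**2 - mu, B[0, 0] * B[0, 1]
    for i in range(n - 1):
        c, s = givens(y1, -y2)
        B = B @ rot(n, i, c, s)                # right rotation: introduces a bulge
        c, s = givens(B[i, i], B[i + 1, i])
        B = rot(n, i, c, s) @ B                # left rotation: chases the bulge
        if i < n - 2:
            y1, y2 = B[i, i + 1], B[i, i + 2]  # next bulge to be zeroed
    return B

rng = np.random.default_rng(0)
B = np.diag(rng.uniform(1, 2, 5)) + np.diag(rng.uniform(1, 2, 4), 1)
mu = B[3, 4]**2 + B[4, 4]**2                   # shift: the last diagonal of B^T B
B1 = golub_kahan_step(B, mu)
print(np.allclose(np.tril(B1, -1), 0.0))       # True: B1 is still upper bidiagonal
```

Since each step applies only orthogonal factors on the left and right, the singular values of $\bB$ are preserved exactly while the matrix remains bidiagonal.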
\subsection{Jacobi's SVD Method}
We have discussed the Jacobi's method for computing the spectral decomposition of a matrix, where a pair of orthogonal matrices are applied on the left and right iteratively to reduce the off-diagonal quantity. The orthogonal matrices applied on the left and right are equal (up to a transpose: $\bQ$ and $\bQ^\top$). It is then straightforward to apply the Jacobi's method with two different sequences of orthogonal matrices, hoping the off-diagonal quantity can also be reduced. The problem becomes
$$
\begin{aligned}
\bJ_1^\top \bA(k,l)\bJ_2
&=
\begin{bmatrix}
c_1 & -s_1 \\
s_1 & c_1
\end{bmatrix}
\begin{bmatrix}
a_{kk} & a_{kl} \\
a_{kl} & a_{ll}
\end{bmatrix}
\begin{bmatrix}
c_2 & s_2 \\
-s_2 & c_2
\end{bmatrix}
=
\begin{bmatrix}
\neq 0 & 0 \\
0 & \neq 0
\end{bmatrix}.
\end{aligned}
$$
Such a problem can be decomposed into two parts:
\begin{itemize}
\item Find a Jacobi's rotation such that $\widetilde{\bJ}_1\bA(k,l)$ is symmetric;
\item Apply the Jacobi's method on the symmetric $\widetilde{\bJ}_1\bA(k,l)$: $\bJ_2^{\top}(\widetilde{\bJ}_1\bA(k,l)) \bJ_2$ is diagonal;
\end{itemize}
Therefore, letting $\bJ_1^\top = \bJ_2^\top\widetilde{\bJ}_1 $, we diagonalize the submatrix inside $\bA$. We shall not discuss the details.
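The two-part construction can be sketched for a single $2\times 2$ block; the following NumPy sketch uses our own function name and closed-form angles (the symmetrizing angle solves $s(a_{kk}+a_{ll}) = c(a_{lk}-a_{kl})$, and the second angle is the usual Jacobi rotation):

```python
import numpy as np

def jacobi_svd_2x2(a):
    """Orthogonal J1, J2 with J1.T @ a @ J2 diagonal, for a 2x2 block a."""
    # part 1: a rotation J~1 making J~1 @ a symmetric
    th = np.arctan2(a[1, 0] - a[0, 1], a[0, 0] + a[1, 1])
    c, s = np.cos(th), np.sin(th)
    Jt1 = np.array([[c, s], [-s, c]])
    S = Jt1 @ a                                   # symmetric by construction
    # part 2: a Jacobi rotation J2 diagonalizing the symmetric S
    ph = 0.5 * np.arctan2(2.0 * S[0, 1], S[1, 1] - S[0, 0])
    c2, s2 = np.cos(ph), np.sin(ph)
    J2 = np.array([[c2, s2], [-s2, c2]])
    J1 = Jt1.T @ J2                               # so that J1.T = J2.T @ J~1
    return J1, J2

a = np.array([[2.0, 1.0], [0.5, 3.0]])
J1, J2 = jacobi_svd_2x2(a)
D = J1.T @ a @ J2
print(np.allclose(D - np.diag(np.diag(D)), 0.0))  # True: off-diagonals vanish
```

Sweeping this $2\times 2$ solver over all pairs $(k,l)$, in the same fashion as the symmetric Jacobi's method, yields the one-sided/two-sided Jacobi SVD.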
\section{Proof of Results}\label{section:proofs-qralgorithms}
\begin{proof}[of Lemma~\ref{lemma:qr-algo-from-power}]
It is trivial that the three sequences agree at iteration $0$ by the same initialization. We will show that if \{$\widehat{\bA}^{(k-1)} = \bA^{(k-1)};
\widehat{\bR}^{(k-1)} = \bR^{(k-1)};
\widehat{\bV}^{(k-1)} = \bV^{(k-1)}$\} holds, then this is also true for the $k$-th iteration:
\{$\widehat{\bA}^{(k)} = \bA^{(k)};
\widehat{\bR}^{(k)} = \bR^{(k)};
\widehat{\bV}^{(k)} = \bV^{(k)}$\}. Then we complete the proof by induction.
By Algorithm~\ref{alg:qr-algorithm-simple1},
$$
\left\{
\begin{aligned}
\widehat{\bV}^{(k)}, \widehat{\bR}^{(k)}= QR(\bA\widehat{\bV}^{(k-1)}) &\leadto
\widehat{\bV}^{(k)}\widehat{\bR}^{(k)} = \bA\widehat{\bV}^{(k-1)}. \\
\widehat{\bA}^{(k-1)}=\widehat{\bV}^{(k-1)\top} \underbrace{\bA \widehat{\bV}^{(k-1)}}_{\widehat{\bV}^{(k)}\widehat{\bR}^{(k)}} &\leadto
\widehat{\bA}^{(k-1)}= \underbrace{\widehat{\bV}^{(k-1)\top} \widehat{\bV}^{(k)}}_{\text{orthogonal}}\widehat{\bR}^{(k)}.
\end{aligned}
\right.
$$
Since $\widehat{\bV}^{(k-1)\top} \widehat{\bV}^{(k)}$ is a multiplication of two orthogonal matrices, it is an orthogonal matrix as well. Therefore, \textcolor{blue}{$\widehat{\bA}^{(k-1)}= \widehat{\bV}^{(k-1)\top} \widehat{\bV}^{(k)}\widehat{\bR}^{(k)}$} is the QR decomposition of $\widehat{\bA}^{(k-1)}$.
By Algorithm~\ref{alg:qr-algorithm-simple2},
$$
\bQ^{(k)}, \bR^{(k)}= QR(\bA^{(k-1)}) \leadto \bA^{(k-1)}=\bQ^{(k)} \bR^{(k)}.
$$
Therefore, a second QR decomposition of $\bA^{(k-1)}$ is given by \textcolor{blue}{$\bA^{(k-1)}=\bQ^{(k)} \bR^{(k)}$}. Although the QR decomposition is in general not unique, if we take the diagonal of the upper triangular matrix to be nonnegative, the QR decomposition is unique (Corollary~\ref{corollary:unique-qr}, p.~\pageref{corollary:unique-qr}). Provided $\widehat{\bA}^{(k-1)} = \bA^{(k-1)}$, this shows
$$
\widehat{\bR}^{(k)} = \bR^{(k)}
$$
and
$$
\begin{aligned}
\bQ^{(k)} = \widehat{\bV}^{(k-1)\top} \widehat{\bV}^{(k)} \leadto \widehat{\bV}^{(k-1)}\bQ^{(k)} &= \widehat{\bV}^{(k)} \\
\underbrace{\bV^{(k-1)}\bQ^{(k)}}_{\bV^{(k)}} &= \widehat{\bV}^{(k)} .
\end{aligned}
$$
Therefore, it follows that
$$
\widehat{\bV}^{(k)} = \bV^{(k)}.
$$
To see $\widehat{\bA}^{(k)} = \bA^{(k)} $, it follows that
$$
\begin{aligned}
\widehat{\bA}^{(k)} &=\widehat{\bV}^{(k)\top} \bA \widehat{\bV}^{(k)} &\text{(By Algorithm~\ref{alg:qr-algorithm-simple1})}\\
&=\bV^{(k)\top} \bA \widehat{\bV}^{(k)} &\text{($\widehat{\bV}^{(k)} = \bV^{(k)}$)} \\
&=(\bV^{(k-1)}\bQ^{(k)} )^\top \bA (\bV^{(k-1)}\bQ^{(k)} ) &\text{(By Algorithm~\ref{alg:qr-algorithm-simple2})}\\
&= \bQ^{(k)\top} \widehat{\bA}^{(k-1)} \bQ^{(k)}=\bQ^{(k)\top} \bA^{(k-1)} \bQ^{(k)} &\text{(By Algorithm~\ref{alg:qr-algorithm-simple1})}\\
&=\bA^{(k)}. &\text{(By Equation~\eqref{equation:simple-qr-find})}
\end{aligned}
$$
This completes the proof.
\end{proof}
\begin{proof}[of Equation~\eqref{equation:simple-qr-find2}]
The first equality of Equation~\eqref{equation:simple-qr-find2} comes from step 7 of Algorithm~\ref{alg:qr-algorithm-simple2}.
The second equality follows from the fact of Algorithm~\ref{alg:qr-algorithm-simple1} that
$$
\left.
\begin{aligned}
\bA &= \widehat{\bV}^{(k)} \widehat{\bR}^{(k)} \widehat{\bV}^{(k-1)\top}
=\bV^{(k)} \bR^{(k)} \bV^{(k-1)\top} \\
&= \widehat{\bV}^{(k-1)} \widehat{\bR}^{(k-1)} \widehat{\bV}^{(k-2)\top}
=\bV^{(k-1)} \bR^{(k-1)} \bV^{(k-2)\top} \\
& \ldots
\end{aligned}
\right\} \leadtosmall
\bA^{k}=\bV^{(k)} \bR^{(k)} \bR^{(k-1)} \ldots \bR^{(0)} .
$$
This completes the proof.
\end{proof}
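The identity $\bA^{k}=\bV^{(k)} \bR^{(k)} \bR^{(k-1)} \ldots \bR^{(0)}$ can also be checked numerically with the plain (unshifted) QR iteration; a minimal NumPy sketch, with our own variable names:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
A = A + A.T                      # a symmetric test matrix

k = 8
V = np.eye(4)                    # accumulates Q^(1) Q^(2) ... Q^(k)
P = np.eye(4)                    # accumulates R^(k) ... R^(1)
Ak = A.copy()
for _ in range(k):
    Q, R = np.linalg.qr(Ak)      # A^(i-1) = Q^(i) R^(i)
    Ak = R @ Q                   # A^(i)   = R^(i) Q^(i)
    V = V @ Q
    P = R @ P

# the accumulated QR factors reproduce the k-th matrix power
print(np.allclose(np.linalg.matrix_power(A, k), V @ P))  # True
```

The check holds for any valid QR factorizations at each step, since the telescoping argument in the proof does not rely on a sign convention.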
\begin{proof}[of Equation~\eqref{equation:practical-qr-find2}]
For $k=1$, it is trivial to see $(\bA-\mu^{(1)}\bI) = \bQ^{(1)}\bR^{(1)} = \bQ^{(0)}\bQ^{(1)}\bR^{(1)} \bR^{(0)}$. Suppose it is true for $k-1$ that
$$
(\bA-\mu^{(k-1)}\bI)(\bA-\mu^{(k-2)}\bI)\ldots (\bA-\mu^{(1)}\bI)= \underbrace{\bQ^{(0)}\bQ^{(1)}\bQ^{(2)}\ldots \bQ^{(k-1)}}_{=\bV^{(k-1)}, \text{orthogonal}}
\underbrace{\bR^{(k-1)} \bR^{(k-2)} \ldots \bR^{(0)}}_{=\bU^{(k-1)}, \text{upper triangular}}.
$$
If we can prove it is also true for $k$, then we complete the proof by induction. To see this, we have
$$
\begin{aligned}
&\gap (\bA-\mu^{(k)}\bI)(\bA-\mu^{(k-1)}\bI)\ldots (\bA-\mu^{(1)}\bI)\\
&=(\bA-\mu^{(k)}\bI)\bV^{(k-1)}\left(\bR^{(k-1)} \bR^{(k-2)} \ldots \bR^{(0)}\right)\\
&=\left(\underbrace{\bV^{(k) } \bA^{(k)} \bV^{(k)\top}}_{\text{Equation}~\eqref{equation:practical-qr-find-root}} -\mu^{(k)}\bI\right) \bV^{(k-1)}\left(\bR^{(k-1)} \bR^{(k-2)} \ldots \bR^{(0)}\right)\\
&=\left(\bV^{(k) } \bA^{(k)} \bV^{(k)\top} -\mu^{(k)} \underbrace{\bV^{(k)}\bV^{(k)\top} }_{\bI} \right) \bV^{(k-1)}\left(\bR^{(k-1)} \bR^{(k-2)} \ldots \bR^{(0)}\right)\\
&=\bV^{(k) }\left( \bA^{(k)} -\mu^{(k)} \bI \right)\bV^{(k)\top} \bV^{(k-1)}\left(\bR^{(k-1)} \bR^{(k-2)} \ldots \bR^{(0)}\right)\\
&=\bV^{(k)}\left( \underbrace{\bR^{(k)}\bQ^{(k)} }_{\text{step 7}}\right)
\underbrace{( \bV^{(k-1)}\bQ^{(k)} )^\top}_{\text{step 8}}
\bV^{(k-1)}\left(\bR^{(k-1)} \bR^{(k-2)} \ldots \bR^{(0)}\right)\\
&=\bV^{(k)}\left(\bR^{(k)}\bR^{(k-1)} \bR^{(k-2)} \ldots \bR^{(0)}\right).
\end{aligned}
$$
This completes the proof.
\end{proof}
\part{Eigenvalue Problem}
\newpage
\chapter{Eigenvalue and Jordan Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Eigenvalue Decomposition}
\begin{theoremHigh}[Eigenvalue Decomposition]\label{theorem:eigenvalue-decomposition}
Any square matrix $\bA\in \real^{n\times n}$ with $n$ linearly independent eigenvectors can be factored as
$$
\bA = \bX\bLambda\bX^{-1},
$$
where $\bX$ contains the eigenvectors of $\bA$ as columns, $\bLambda = diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$ is a diagonal matrix, and $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $\bA$.
\end{theoremHigh}
Eigenvalue decomposition is also known as the diagonalization of the matrix $\bA$. When no eigenvalues of $\bA$ are repeated, the eigenvectors are guaranteed to be linearly independent; then $\bA$ can be diagonalized. Note that without $n$ linearly independent eigenvectors, we cannot diagonalize. In Section~\ref{section:otherform-spectral} (p.~\pageref{section:otherform-spectral}), we will further discuss the conditions under which the matrix has linearly independent eigenvectors.
\section{Existence of the Eigenvalue Decomposition}
\begin{proof}[of Theorem~\ref{theorem:eigenvalue-decomposition}]
Let $\bX=[\bx_1, \bx_2, \ldots, \bx_n]$ be the matrix whose columns are the $n$ linearly independent eigenvectors of $\bA$. Clearly, we have
$$
\bA\bx_1=\lambda_1\bx_1,\qquad \bA\bx_2=\lambda_2\bx_2, \qquad \ldots, \qquad\bA\bx_n=\lambda_n\bx_n.
$$
In the matrix form,
$$
\bA\bX = [\bA\bx_1, \bA\bx_2, \ldots, \bA\bx_n] = [\lambda_1\bx_1, \lambda_2\bx_2, \ldots, \lambda_n\bx_n] = \bX\bLambda.
$$
Since we assume the eigenvectors are linearly independent, $\bX$ has full rank and is invertible. We obtain
$$
\bA = \bX\bLambda \bX^{-1}.
$$
This completes the proof.
\end{proof}
We will discuss some similar forms of eigenvalue decomposition in the spectral decomposition section, where the matrix $\bA$ is required to be symmetric, and $\bX$ is not only nonsingular but also orthogonal. Alternatively, the matrix $\bA$ is required to be a \textit{simple matrix}, that is, the algebraic multiplicity and geometric multiplicity are the same for $\bA$, and $\bX$ will be a nonsingular matrix that may not contain the eigenvectors of $\bA$. The decomposition also has a geometric meaning, which we will discuss in Section~\ref{section:coordinate-transformation} (p.~\pageref{section:coordinate-transformation}).
A matrix decomposition of the form $\bA =\bX\bLambda\bX^{-1}$ has the nice property that the $m$-th power can be computed efficiently.
\begin{remark}[$m$-th Power]\label{remark:power-eigenvalue-decom}
The $m$-th power of $\bA$ is $\bA^m = \bX\bLambda^m\bX^{-1}$ if the matrix $\bA$ can be factored as $\bA=\bX\bLambda\bX^{-1}$.
\end{remark}
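As a numerical illustration (a minimal sketch, assuming NumPy is available; the matrix below is an arbitrary example, not from the text), we can check both the factorization $\bA=\bX\bLambda\bX^{-1}$ and the $m$-th power rule:

```python
import numpy as np

# A small matrix with distinct eigenvalues (5 and 2), hence diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors as columns of X.
lam, X = np.linalg.eig(A)
Lam = np.diag(lam)

# Verify A = X Lam X^{-1}.
assert np.allclose(A, X @ Lam @ np.linalg.inv(X))

# Verify the m-th power rule: A^m = X Lam^m X^{-1}.
m = 5
assert np.allclose(np.linalg.matrix_power(A, m),
                   X @ np.diag(lam**m) @ np.linalg.inv(X))
```

Raising the diagonal factor to the $m$-th power costs only $O(n)$ multiplications, which is the point of the remark above.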
Notice that we required $\bA$ to have $n$ linearly independent eigenvectors in order to prove the existence of the eigenvalue decomposition. Under specific conditions, this requirement is automatically satisfied.
\begin{lemma}[Different Eigenvalues]\label{lemma:diff-eigenvec-decompo}
Suppose the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of $\bA\in \real^{n\times n}$ are all different. Then the corresponding eigenvectors are automatically linearly independent. In other words, any square matrix with distinct eigenvalues can be diagonalized.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:diff-eigenvec-decompo}]
Suppose, for contradiction, that the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ are all different but the eigenvectors $\bx_1,\bx_2, \ldots, \bx_n$ are linearly dependent. Without loss of generality, assume $\bx_1, \bx_2, \ldots, \bx_{n-1}$ are linearly independent (otherwise, work with a minimal dependent subset) and that there exists a nonzero vector $\bc = [c_1,c_2,\ldots,c_{n-1}]^\top$ such that
$$
\bx_n = \sum_{i=1}^{n-1} c_i\bx_{i}.
$$
Then we have
$$
\begin{aligned}
\bA \bx_n &= \bA (\sum_{i=1}^{n-1} c_i\bx_{i}) \\
&=c_1\lambda_1 \bx_1 + c_2\lambda_2 \bx_2 + \ldots + c_{n-1}\lambda_{n-1}\bx_{n-1}.
\end{aligned}
$$
and
$$
\begin{aligned}
\bA \bx_n &= \lambda_n\bx_n\\
&=\lambda_n (c_1\bx_1 +c_2\bx_2+\ldots +c_{n-1} \bx_{n-1}).
\end{aligned}
$$
Combining the above two equations, we have
$$
\sum_{i=1}^{n-1} (\lambda_n - \lambda_i)c_i \bx_i = \bzero .
$$
Since $\bc$ is nonzero and $\lambda_n \neq \lambda_i$ for all $i\in \{1,2,\ldots,n-1\}$, this is a nontrivial vanishing linear combination of $\bx_1, \ldots, \bx_{n-1}$, contradicting their linear independence, from which the result follows.
\end{proof}
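The lemma can be illustrated numerically (a sketch assuming NumPy; the example matrix is our own): a matrix with distinct eigenvalues yields a full-rank eigenvector matrix.

```python
import numpy as np

# A 3x3 matrix with distinct eigenvalues 1, 2, 3 (upper triangular, so the
# eigenvalues can be read off the diagonal).
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

lam, X = np.linalg.eig(A)

# Distinct eigenvalues imply linearly independent eigenvectors,
# i.e., X has full rank and A is diagonalizable.
assert len(set(np.round(lam, 8))) == 3   # the eigenvalues are distinct
assert np.linalg.matrix_rank(X) == 3     # the eigenvectors are independent
```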
\begin{remark}[Limitation of Eigenvalue Decomposition]
The limitations of the eigenvalue decomposition are the following:
$\bullet$ The eigenvectors in $\bX$ are usually not orthogonal, and there are not always enough linearly independent eigenvectors (e.g., when some eigenvalues are repeated).
$\bullet$ Computing the eigenvalues and eigenvectors from $\bA\bx = \lambda\bx$ requires $\bA$ to be square. Rectangular matrices cannot be diagonalized by the eigenvalue decomposition.
\end{remark}
\section{Computing the Eigenvalue Decomposition}
The computation of the eigenvalues and eigenvectors amounts to finding the roots of the characteristic polynomial. Algorithms such as the Rayleigh quotient iteration can solve this problem, and we will further discuss algorithms for finding the eigenvalues and eigenvectors of a matrix in Section~\ref{section:eigenvalue-problem} (p.~\pageref{section:eigenvalue-problem}).
Once we have the eigenvalues and eigenvectors, we only need to compute the inverse of the nonsingular matrix $\bX$. As shown in Theorem~\ref{theorem:inverse-by-lu2} (p.~\pageref{theorem:inverse-by-lu2}), the complexity is $2n^3$ flops.
\section{Jordan Decomposition}
In the eigenvalue decomposition, we assume that the matrix $\bA$ has $n$ linearly independent eigenvectors. However, this is not true for all square matrices. We therefore introduce a generalized version of the eigenvalue decomposition, called the Jordan decomposition after Camille Jordan \citep{jordan1870traite}.
We first introduce the definitions of Jordan blocks and the Jordan form, which are needed for the description of the Jordan decomposition.
\begin{definition}[Jordan Block]
An $m\times m$ upper triangular matrix $B(\lambda, m)$ is called a Jordan block provided that all $m$ diagonal elements equal the same eigenvalue $\lambda$ and all elements on the upper subdiagonal are ones:
$$
B(\lambda, m)=
\begin{bmatrix}
\lambda & 1 & 0 & \ldots & 0 & 0 & 0 \\
0 & \lambda & 1 & \ldots & 0 & 0 & 0 \\
0 & 0 & \lambda & \ldots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \ldots & \lambda & 1& 0\\
0 & 0 & 0 & \ldots & 0 &\lambda & 1\\
0 & 0 & 0 & \ldots & 0 & 0 & \lambda
\end{bmatrix}_{m\times m}
$$
\end{definition}
\begin{definition}[Jordan Form\index{Jordan block}]
Given an $n\times n$ matrix $\bA$, a Jordan form $\bJ$ for $\bA$ is a block diagonal matrix defined as
$$
\bJ=diag(B(\lambda_1, m_1), B(\lambda_2, m_2), \ldots, B(\lambda_k, m_k))
$$
where $\lambda_1, \lambda_2, \ldots, \lambda_k$ are eigenvalues of $\bA$ (duplicates are possible) and $m_1+m_2+\ldots+m_k=n$.
\end{definition}
Then, the Jordan decomposition follows:
\begin{theoremHigh}[Jordan Decomposition]\label{theorem:jordan-decomposition}
Any square matrix $\bA\in \real^{n\times n}$ can be factored as
$$
\bA = \bX\bJ\bX^{-1},
$$
where $\bX$ is a nonsingular matrix containing the generalized eigenvectors of $\bA$ as columns, and $\bJ$ is a Jordan form matrix $diag(\bJ_1, \bJ_2, \ldots, \bJ_k)$ where
$$
\bJ_i =
\begin{bmatrix}
\lambda_i & 1 & 0 & \ldots & 0 & 0 & 0 \\
0 & \lambda_i & 1 & \ldots & 0 & 0 & 0 \\
0 & 0 & \lambda_i & \ldots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \ldots & \lambda_i & 1& 0\\
0 & 0 & 0 & \ldots & 0 &\lambda_i & 1\\
0 & 0 & 0 & \ldots & 0 & 0 & \lambda_i
\end{bmatrix}_{m_i\times m_i}
$$
is an $m_i\times m_i$ square matrix, where $m_i$ is the size of the $i$-th Jordan block associated with eigenvalue $\lambda_i$, and $m_1+m_2+\ldots +m_k = n$. The $\bJ_i$'s are referred to as Jordan blocks.
Further, nonsingular matrix $\bX$ is called the \textbf{matrix of generalized eigenvectors} of $\bA$.
\end{theoremHigh}
As an example, a Jordan form can have the following structure:
$$
\begin{aligned}
\bJ&=diag(B(\lambda_1, m_1), B(\lambda_2, m_2), \ldots, B(\lambda_k, m_k))\\
&=
\begin{bmatrix}
\begin{bmatrix}
\lambda_1 & 1 & 0 \\
0 & \lambda_1 & 1 \\
0 & 0 & \lambda_1
\end{bmatrix} & & & &\\
& \begin{bmatrix}
\lambda_2
\end{bmatrix} & & &\\
& &\begin{bmatrix}
\lambda_3 & 1 \\
0 & \lambda_3
\end{bmatrix} & &\\
& & & \ddots & &\\
& & & & &\begin{bmatrix}
\lambda_k & 1 \\
0 & \lambda_k
\end{bmatrix}\\
\end{bmatrix}.
\end{aligned}
$$
\textbf{Decoding a Jordan Decomposition}: Note that zeros can appear on the upper subdiagonal of $\bJ$, and in each block, the diagonal contains only eigenvalues of $\bA$. To decode one block, without loss of generality, take the first block $\bJ_1$. Reading off columns $1, 2, \ldots, m_1$ of $\bA\bX = \bX\bJ$ with $\bX=[\bx_1, \bx_2, \ldots, \bx_n]$ gives:
$$
\begin{aligned}
\bA\bx_1 &= \lambda_1 \bx_1 \\
\bA\bx_2 &= \lambda_1 \bx_2 + \bx_1 \\
\vdots &= \vdots \\
\bA\bx_{m_1} &= \lambda_1 \bx_{m_1} + \bx_{m_1-1}.
\end{aligned}
$$
For more details about Jordan decomposition, please refer to \citep{gohberg1996simple, hales1999jordan}.
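As an illustration (a sketch assuming SymPy is available; the defective example matrix is our own), the Jordan form of a non-diagonalizable matrix can be computed exactly with `Matrix.jordan_form`, which returns the pair $(\bX, \bJ)$ with $\bA = \bX\bJ\bX^{-1}$:

```python
from sympy import Matrix

# A defective matrix: the eigenvalue 2 is repeated but admits only a single
# eigenvector, so A is not diagonalizable.
A = Matrix([[3, 1],
            [-1, 1]])

X, J = A.jordan_form()   # returns (X, J) with A = X * J * X**(-1)

assert J == Matrix([[2, 1],
                    [0, 2]])      # a single Jordan block B(2, 2)
assert X * J * X.inv() == A       # exact reconstruction over the rationals
```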
The Jordan decomposition is not particularly useful in practice, as it is extremely sensitive to perturbation: even the smallest random change can make a matrix diagonalizable \citep{van2020advanced}. As a result, no practical mathematical software library or tool computes it. Moreover, the proof takes dozens of pages to develop. For these reasons, we leave the proof to interested readers.
\section{Application: Computing Fibonacci Numbers}
We now use the eigenvalue decomposition to compute Fibonacci numbers. This example is drawn from \citep{strang1993introduction}. Every new Fibonacci number $F_{k+2}$ is the sum of the two previous Fibonacci numbers, $F_{k+1}+F_{k}$. The sequence is $0, 1, 1, 2, 3, 5, 8, \ldots$. Now, the problem is: what is the value of $F_{100}$?\index{Fibonacci number}
Let $\bu_{k}=\begin{bmatrix}
F_{k+1}\\
F_k
\end{bmatrix}$.
Then $\bu_{k+1}=\begin{bmatrix}
F_{k+2}\\
F_{k+1}
\end{bmatrix}=
\begin{bmatrix}
1&1\\
1&0
\end{bmatrix}
\bu_k
$ by the rule that $F_{k+2}=F_{k+1}+F_k$ and $F_{k+1}=F_{k+1}$.
Let $\bA=\begin{bmatrix}
1&1\\
1&0
\end{bmatrix}$
, we then have $\bu_{100} = \bA^{100}\bu_0$, where $\bu_0=
\begin{bmatrix}
1\\
0
\end{bmatrix}
$.
We will see in Lemma~\ref{lemma:determinant-intermezzo} that $\det(\bA-\lambda\bI)=0$, where $\lambda$ is an eigenvalue of $\bA$. A simple calculation shows that $\det(\bA-\lambda\bI) = \lambda^2-\lambda-1=0$ and
$$
\lambda_1 = \frac{1+\sqrt{5}}{2}, \qquad \lambda_2 = \frac{1-\sqrt{5}}{2}.
$$
The corresponding eigenvectors are
$$
\bx_1 =
\begin{bmatrix}
\lambda_1\\
1
\end{bmatrix}, \qquad
\bx_2 =
\begin{bmatrix}
\lambda_2\\
1
\end{bmatrix}.
$$
By Remark~\ref{remark:power-eigenvalue-decom}, $\bA^{100} = \bX\bLambda^{100}\bX^{-1} = \bX
\begin{bmatrix}
\lambda_1^{100}&0\\
0&\lambda_2^{100}
\end{bmatrix}\bX^{-1}$ where $\bX^{-1}$ can be easily calculated as $\bX^{-1} =
\begin{bmatrix}
\frac{1}{\lambda_1-\lambda_2} & \frac{-\lambda_2}{\lambda_1-\lambda_2} \\
-\frac{1}{\lambda_1-\lambda_2} & \frac{\lambda_1}{\lambda_1-\lambda_2}
\end{bmatrix}
=\begin{bmatrix}
\frac{\sqrt{5}}{5} & \frac{5-\sqrt{5}}{10} \\
-\frac{\sqrt{5}}{5} & \frac{5+\sqrt{5}}{10}
\end{bmatrix}$. Since $\bu_0$ is the first standard basis vector, $\bu_{100} = \bA^{100}\bu_0$ is just the first column of $\bA^{100}$, which is
$$
\bu_{100} =
\begin{bmatrix}
F_{101}\\
F_{100}
\end{bmatrix}=
\begin{bmatrix}
\frac{\lambda_1^{101}-\lambda_2^{101}}{\lambda_1-\lambda_2}\\
\frac{\lambda_1^{100}-\lambda_2^{100}}{\lambda_1-\lambda_2}
\end{bmatrix}.
$$
Carrying out the calculation, we find $F_{100}=354{,}224{,}848{,}179{,}261{,}915{,}075\approx 3.5422\times 10^{20}$. More generally,
$$
\bu_{K} =
\begin{bmatrix}
F_{K+1}\\
F_{K}
\end{bmatrix}=
\begin{bmatrix}
\frac{\lambda_1^{K+1}-\lambda_2^{K+1}}{\lambda_1-\lambda_2}\\
\frac{\lambda_1^{K}-\lambda_2^{K}}{\lambda_1-\lambda_2}
\end{bmatrix},
$$
where the general form of $F_K$ is given by $F_K=\frac{\lambda_1^{K}-\lambda_2^{K}}{\lambda_1-\lambda_2}$.
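The closed form above can be cross-checked in exact integer arithmetic (a sketch in plain Python; the helper names `mat_mul` and `mat_pow` are our own):

```python
def mat_mul(M, N):
    """Multiply two 2x2 integer matrices exactly."""
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

def mat_pow(M, k):
    """Compute M^k by repeated squaring."""
    result = [[1, 0], [0, 1]]        # 2x2 identity
    while k:
        if k & 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        k >>= 1
    return result

A = [[1, 1], [1, 0]]
P = mat_pow(A, 100)
F_100 = P[1][0]                      # A^100 u0 is the first column of A^100

# Cross-check with the recurrence F_{k+2} = F_{k+1} + F_k.
a, b = 0, 1
for _ in range(100):
    a, b = b, a + b
assert a == F_100 == 354224848179261915075
```

Repeated squaring uses only $O(\log K)$ matrix multiplications, while the eigenvalue route evaluates the closed form $F_K=(\lambda_1^{K}-\lambda_2^{K})/(\lambda_1-\lambda_2)$ directly.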
\newpage
\chapter{Schur Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Schur Decomposition}
\begin{theoremHigh}[Schur Decomposition]\label{theorem:schur-decomposition}
Any square matrix $\bA\in \real^{n\times n}$ with real eigenvalues can be factored as
$$
\bA = \bQ\bU\bQ^\top,
$$
where $\bQ$ is an orthogonal matrix, and $\bU$ is an upper triangular matrix. That is, every square matrix $\bA$ with real eigenvalues can be orthogonally triangularized.
\end{theoremHigh}
\paragraph{A close look at the Schur decomposition} The first columns of $\bA\bQ$ and $\bQ\bU$ are $\bA\bq_1$ and $\bU_{11}\bq_1$, respectively. Hence $\bU_{11}$ is an eigenvalue of $\bA$ with eigenvector $\bq_1$. The other columns of $\bQ$, however, need not be eigenvectors of $\bA$.
\paragraph{Schur decomposition for symmetric matrices} For a symmetric matrix $\bA=\bA^\top$, we have $\bQ\bU\bQ^\top = \bQ\bU^\top\bQ^\top$, so $\bU=\bU^\top$ and $\bU$ is a diagonal matrix. This diagonal matrix actually contains the eigenvalues of $\bA$, and all the columns of $\bQ$ are eigenvectors of $\bA$. We conclude that all symmetric matrices are diagonalizable, even with repeated eigenvalues.
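Numerically (a sketch assuming SciPy is available; `scipy.linalg.schur` returns the pair $(\bU, \bQ)$ with $\bA = \bQ\bU\bQ^\top$), the symmetric case indeed produces a diagonal $\bU$:

```python
import numpy as np
from scipy.linalg import schur

# A symmetric matrix: its real Schur factor U is diagonal, and the columns
# of Q are orthonormal eigenvectors.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

U, Q = schur(A)

assert np.allclose(A, Q @ U @ Q.T)                  # A = Q U Q^T
assert np.allclose(U, np.diag(np.diag(U)))          # U is diagonal here
assert np.allclose(sorted(np.diag(U)), [1.0, 3.0])  # the eigenvalues of A
```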
\section{Existence of the Schur Decomposition}
To prove Theorem~\ref{theorem:schur-decomposition}, we need to use the following lemmas.
\begin{lemma}[Determinant Intermezzo]\label{lemma:determinant-intermezzo}
We have the following properties for determinants of matrices:
$\bullet$ The determinant of a product of two matrices is $\det(\bA\bB)=\det (\bA)\det(\bB)$;
$\bullet$ The determinant of the transpose is $\det(\bA^\top) = \det(\bA)$;
$\bullet$ If matrix $\bA$ has an eigenvalue $\lambda$, then $\det(\bA-\lambda\bI) =0$;
$\bullet$ The determinant of any identity matrix is $1$;
$\bullet$ The determinant of an orthogonal matrix $\bQ$ satisfies
$$
\det(\bQ) = \det(\bQ^\top) = \pm 1, \qquad \text{since } \det(\bQ^\top)\det(\bQ)=\det(\bQ^\top\bQ)=\det(\bI)=1;
$$
$\bullet$ For any square matrix $\bA$ and any orthogonal matrix $\bQ$, we have
$$
\det(\bA) = \det(\bQ^\top) \det(\bA)\det(\bQ) =\det(\bQ^\top\bA\bQ).
$$
\end{lemma}
\begin{lemma}[Submatrix with Same Eigenvalue]\label{lemma:submatrix-same-eigenvalue}
Suppose square matrix $\bA_{k+1}\in \real^{(k+1)\times (k+1)}$ has real eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{k+1}$. Then we can construct a $k\times k$ matrix $\bA_{k}$ with eigenvalues $\lambda_2, \lambda_3, \ldots, \lambda_{k+1}$ by
$$
\bA_{k} =
\begin{bmatrix}
-\bp_2^\top- \\
-\bp_3^\top- \\
\vdots \\
-\bp_{k+1}^\top-
\end{bmatrix}
\bA_{k+1}
\begin{bmatrix}
\bp_2 & \bp_3 &\ldots &\bp_{k+1}
\end{bmatrix},
$$
where $\bp_1$ is a unit-norm eigenvector of $\bA_{k+1}$ corresponding to the eigenvalue $\lambda_1$, and $\bp_2, \bp_3, \ldots, \bp_{k+1}$ are any orthonormal vectors orthogonal to $\bp_1$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:submatrix-same-eigenvalue}]
Let $\bP_{k+1} = [\bp_1, \bp_2, \ldots, \bp_{k+1}]$. Then $\bP_{k+1}^\top\bP_{k+1}=\bI$, and since $\bA_{k+1}\bp_1 = \lambda_1\bp_1$, the first column of $\bP_{k+1}^\top \bA_{k+1} \bP_{k+1}$ is $\lambda_1$ followed by zeros:
$$
\bP_{k+1}^\top \bA_{k+1} \bP_{k+1} =
\begin{bmatrix}
\lambda_1 & \bb^\top \\
\bzero & \bA_{k}
\end{bmatrix},
$$
where $\bb^\top = \bp_1^\top \bA_{k+1} [\bp_2, \ldots, \bp_{k+1}]$ is in general nonzero.
By Lemma~\ref{lemma:determinant-intermezzo}, writing $\bb^\top$ for the first-row block $\bp_1^\top\bA_{k+1}[\bp_2, \ldots, \bp_{k+1}]$, we have, for any $\lambda$,
$$
\begin{aligned}
\det(\bA_{k+1} -\lambda\bI) &= \det(\bP_{k+1}^\top (\bA_{k+1}-\lambda\bI) \bP_{k+1}) \\
&=\det(\bP_{k+1}^\top \bA_{k+1}\bP_{k+1} - \lambda\bP_{k+1}^\top\bP_{k+1}) \\
&= \det\left(
\begin{bmatrix}
\lambda_1-\lambda & \bb^\top \\
\bzero & \bA_k - \lambda\bI
\end{bmatrix}
\right)\\
&=(\lambda_1-\lambda)\det(\bA_k-\lambda\bI),
\end{aligned}
$$
where the last equality follows from the fact that the determinant of a block upper triangular matrix $\bM=\begin{bmatrix}
\bE & \bF \\
\bzero & \bH
\end{bmatrix}$ is $\det(\bM) = \det(\bE)\det(\bH)$.
As an identity between polynomials in $\lambda$, this shows that the characteristic polynomial of $\bA_{k+1}$ equals $(\lambda_1-\lambda)$ times the characteristic polynomial of $\bA_{k}$. Hence the eigenvalues of $\bA_{k}$ are exactly $\lambda_2, \lambda_3, \ldots, \lambda_{k+1}$.
\end{proof}
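The construction in the lemma can be verified numerically (a sketch assuming NumPy; the test matrix and the QR-based completion of $\bp_1$ to an orthonormal basis are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# An upper triangular matrix: its real eigenvalues 1, 2, 3 sit on the diagonal.
A = np.array([[1.0, 4.0, 2.0],
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 3.0]])

# A unit eigenvector p1 for the eigenvalue 1.
lam, vecs = np.linalg.eig(A)
i = np.argmin(np.abs(lam - 1.0))
p1 = vecs[:, i]

# Complete p1 to an orthonormal basis [p1, p2, p3]: the QR decomposition of
# [p1 | random columns] returns an orthogonal P whose first column is +-p1.
P, _ = np.linalg.qr(np.column_stack([p1, rng.normal(size=(3, 2))]))

# A_k = [p2 p3]^T A [p2 p3] carries the remaining eigenvalues 2 and 3.
A_k = P[:, 1:].T @ A @ P[:, 1:]
assert np.allclose(sorted(np.linalg.eigvals(A_k).real), [2.0, 3.0])
```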
We then prove the existence of the Schur decomposition by induction.
\begin{proof}[\textbf{of Theorem~\ref{theorem:schur-decomposition}: Existence of Schur Decomposition}]
The theorem is trivial for $n=1$: set $\bQ=[1]$ and $\bU=\bA$. Suppose the theorem is true for $n=k$ for some $k\geq 1$; if we can prove that it also holds for $n=k+1$, the proof is complete.
Let $\bA_{k+1}\in\real^{(k+1)\times(k+1)}$ have real eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{k+1}$, and let $\bP_{k+1} = [\bp_1, \bp_2, \ldots, \bp_{k+1}]$ be constructed as in Lemma~\ref{lemma:submatrix-same-eigenvalue}, where $\bp_1$ is a unit-norm eigenvector of $\bA_{k+1}$ corresponding to the eigenvalue $\lambda_1$, and $\bp_2, \ldots, \bp_{k+1}$ are orthonormal vectors orthogonal to $\bp_1$. By Lemma~\ref{lemma:submatrix-same-eigenvalue}, the $k\times k$ matrix $\bA_{k}$ so constructed has eigenvalues $\lambda_2, \lambda_3, \ldots, \lambda_{k+1}$, and by the induction hypothesis it admits a Schur decomposition $\bA_k =\bQ_k \bU_k \bQ_k^\top$. We also have:
$$
\bP_{k+1}^\top \bA_{k+1} \bP_{k+1} =
\begin{bmatrix}
\lambda_1 &\bb^\top \\
\bzero & \bA_k
\end{bmatrix} \qquad
\text{and} \qquad
\bA_{k+1} \bP_{k+1} =
\bP_{k+1}
\begin{bmatrix}
\lambda_1 &\bb^\top \\
\bzero & \bA_k
\end{bmatrix},
$$
where $\bb^\top = \bp_1^\top \bA_{k+1}[\bp_2, \ldots, \bp_{k+1}]$.
Let
$
\bQ_{k+1} = \bP_{k+1}
\begin{bmatrix}
1 &\bzero \\
\bzero & \bQ_k
\end{bmatrix}.
$
Then, writing $\bb^\top = \bp_1^\top \bA_{k+1}[\bp_2, \ldots, \bp_{k+1}]$ as above, it follows that
$$
\begin{aligned}
\bA_{k+1} \bQ_{k+1} &=
\bA_{k+1}
\bP_{k+1}
\begin{bmatrix}
1 &\bzero \\
\bzero & \bQ_k
\end{bmatrix}\\
&=
\bP_{k+1}
\begin{bmatrix}
\lambda_1 &\bb^\top \\
\bzero & \bA_k
\end{bmatrix}
\begin{bmatrix}
1 &\bzero \\
\bzero & \bQ_k
\end{bmatrix} \\
&=
\bP_{k+1}
\begin{bmatrix}
\lambda_1 & \bb^\top\bQ_k \\
\bzero & \bA_k\bQ_k
\end{bmatrix}\\
&=
\bP_{k+1}
\begin{bmatrix}
\lambda_1 & \bb^\top\bQ_k \\
\bzero & \bQ_k \bU_k
\end{bmatrix}\qquad &\text{(By the assumption for $n=k$)} \\
&=\bP_{k+1}
\begin{bmatrix}
1 &\bzero \\
\bzero & \bQ_k
\end{bmatrix}
\begin{bmatrix}
\lambda_1 &\bb^\top\bQ_k \\
\bzero & \bU_k
\end{bmatrix}\\
&=\bQ_{k+1}\bU_{k+1}. \qquad &\text{($\bU_{k+1} = \begin{bmatrix}
\lambda_1 &\bb^\top\bQ_k \\
\bzero & \bU_k
\end{bmatrix}$ is upper triangular)}
\end{aligned}
$$
We then have $\bA_{k+1} = \bQ_{k+1}\bU_{k+1}\bQ_{k+1}^\top$, where $\bU_{k+1}$ is an upper triangular matrix, and $\bQ_{k+1}$ is an orthogonal matrix since $\bP_{k+1}$ and
$\begin{bmatrix}
1 &\bzero \\
\bzero & \bQ_k
\end{bmatrix}$ are both orthogonal matrices; this block form of orthogonal matrices is discussed in Appendix~\ref{appendix:orthogonal-matrix} (p.~\pageref{appendix:orthogonal-matrix}).
\end{proof}
\section{Other Forms of the Schur Decomposition}\label{section:other-form-schur-decom}
From the proof of the Schur decomposition, we obtain the upper triangular matrix $\bU_{k+1}$ by appending the eigenvalue $\lambda_1$ to $\bU_k$; through this process, the values on the diagonal are always the eigenvalues of $\bA$. Therefore, we can split the upper triangular matrix into two parts.
\begin{corollaryHigh}[Form 2 of Schur Decomposition]\label{corollary:schur-second-form}
Any square matrix $\bA\in \real^{n\times n}$ with real eigenvalues can be factored as
$$
\bQ^\top\bA\bQ = \bLambda +\bT, \qquad \textbf{or} \qquad \bA = \bQ(\bLambda +\bT)\bQ^\top,
$$
where $\bQ$ is an orthogonal matrix, $\bLambda=diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$ is a diagonal matrix containing the eigenvalues of $\bA$, and $\bT$ is a \textit{strictly upper triangular} matrix (with zeros on the diagonal).
\end{corollaryHigh}
A strictly upper triangular matrix is an upper triangular matrix with zeros along the diagonal as well. Another proof of this decomposition observes that $\bA$ and $\bU$ (where $\bU = \bQ^\top\bA\bQ$) are similar matrices, so they have the same eigenvalues (Lemma~\ref{lemma:eigenvalue-similar-matrices}, p.~\pageref{lemma:eigenvalue-similar-matrices}), and the eigenvalues of any upper triangular matrix lie on its diagonal.
To see this, note that for any upper triangular matrix $\bR \in \real^{n\times n}$ with diagonal values $r_{ii}$, $i\in \{1,2,\ldots,n\}$, the matrix $\bR - \lambda\bI$ is again upper triangular, so
$$
\det(\bR - \lambda\bI) = \prod_{i=1}^{n}(r_{ii} - \lambda),
$$
which vanishes exactly when $\lambda$ equals one of the diagonal values $r_{ii}$.
Hence, we can decompose $\bU$ into $\bLambda$ and $\bT$, and the first part of Lemma~\ref{lemma:eigenvalue-similar-matrices} (p.~\pageref{lemma:eigenvalue-similar-matrices}) proves the existence of the second form of the Schur decomposition.
A final observation on the second form of the Schur decomposition is shown as follows. From $\bA\bQ = \bQ(\bLambda +\bT)$, it follows that
$$
\bA \bq_k = \lambda_k\bq_k + \sum_{i=1}^{k-1}t_{ik}\bq_i,
$$
where $t_{ik}$ is the ($i,k$)-th entry of $\bT$. The form is quite close to the eigenvalue decomposition. The columns $\bq_k$ are now orthonormal, but at the price of being coupled: $\bA\bq_k$ involves not only $\bq_k$ but also all previous columns $\bq_1, \ldots, \bq_{k-1}$.
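This column-by-column reading can be checked directly (a sketch assuming SciPy; the matrix $\bA$ below is our own, built with real eigenvalues $1, 2, 3$ so that the real Schur factor is genuinely triangular):

```python
import numpy as np
from scipy.linalg import schur

# A nonsymmetric matrix with real eigenvalues 1, 2, 3, built as S D S^{-1}.
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
A = S @ np.diag([1.0, 2.0, 3.0]) @ np.linalg.inv(S)

U, Q = schur(A)               # A = Q U Q^T with U upper triangular here
Lam = np.diag(np.diag(U))     # the eigenvalues of A on the diagonal
T = U - Lam                   # the strictly upper triangular part

# Column k of A Q = Q (Lam + T):  A q_k = lam_k q_k + sum_{i<k} t_{ik} q_i.
for k in range(3):
    rhs = Lam[k, k] * Q[:, k] + sum(T[i, k] * Q[:, i] for i in range(k))
    assert np.allclose(A @ Q[:, k], rhs)
```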
\part{Reduction to Hessenberg, Tridiagonal, and Bidiagonal Form}
\section*{Introduction}
In real applications, we often want to factor a matrix $\bA$ as $\bA =\bQ\bLambda\bQ^\top$, where $\bQ$ is orthogonal and $\bLambda$ is diagonal or upper triangular, e.g., eigenvalue analysis via the Schur decomposition and principal component analysis (PCA) via the spectral decomposition. This can be approached via a sequence of \textit{orthogonal similarity transformations}:
$$
\underbrace{\bQ_k^\top\ldots \bQ_2^\top \bQ_1^\top}_{\bQ^\top} \bA \underbrace{\bQ_1\bQ_2\ldots\bQ_k}_{\bQ}
$$
which ideally converges to $\bLambda$. However, this is very hard to achieve directly in practice, for example via Householder reflectors. Following the Householder-based QR decomposition example,\footnote{Here $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.} suppose we try to construct the sequence of orthogonal similarity transformations via Householder reflectors:
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bH_1\times }{\rightarrow}
\begin{sbmatrix}{\bH_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times \bH_1^\top }{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix},
\end{aligned}
$$
where the left Householder reflector introduces zeros in the first column below the main diagonal (see Section~\ref{section:qr-via-householder}), but unfortunately the right Householder reflector destroys the zeros just introduced.
However, we can be less ambitious and split the algorithm into two phases: the first phase transforms $\bA$ into a Hessenberg matrix (Definition~\ref{definition:upper-hessenbert}, p.~\pageref{definition:upper-hessenbert}) or a tridiagonal matrix (Definition~\ref{definition:tridiagonal-hessenbert}, p.~\pageref{definition:tridiagonal-hessenbert}). If we then find a second-phase algorithm that transforms the result of the first phase into the desired form, the algorithm is complete:
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bH_1\times }{\rightarrow}
\begin{sbmatrix}{\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times \bH_1^\top }{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix},
\end{aligned}
$$
where the left Householder reflector does not affect the first row, and the right Householder reflector does not affect the first column.
A phase-2\footnote{which is usually an iterative algorithm.} algorithm to find the triangular matrix proceeds as follows:
$$
\begin{aligned}
\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bH_1^\top\bH_2^\top \bH_3^\top}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\text{Phase 2} }{\longrightarrow}
\begin{sbmatrix}{\bLambda}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & 0 & \bm{0} & \bm{\boxtimes}
\end{sbmatrix}
\end{aligned}
$$
From the discussion above, to compute the spectral decomposition, Schur decomposition, or singular value decomposition (SVD), we usually compromise by calculating the Hessenberg, tridiagonal, or bidiagonal form in a first phase and leaving a second phase to finish the rest \citep{van2012families, van2014restructuring, trefethen1997numerical}.
\newpage
\chapter{Hessenberg Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Hessenberg Decomposition}
We first give a rigorous definition of the upper Hessenberg matrix.
\begin{definition}[Upper Hessenberg Matrix\index{Hessenberg matrix}]\label{definition:upper-hessenbert}
An \textit{upper Hessenberg matrix} is a square matrix in which all the entries below the first subdiagonal (the diagonal immediately below the \textit{main diagonal}) are zero. Similarly, a lower Hessenberg matrix is a square matrix in which all the entries above the first superdiagonal are zero.
The definition of the upper Hessenberg matrix can also be extended to rectangular matrices, in which case the form is implied by the context.
In matrix language: let $\bH\in \real^{n\times n}$ have entries $h_{ij}$ for $i,j\in \{1,2,\ldots, n\}$. Then $\bH$ with $h_{ij}=0$ for all $i\geq j+2$ is known as an upper Hessenberg matrix.
Moreover, $\bH$ is called \textit{unreduced} if no subdiagonal entry vanishes, i.e., $h_{i+1, i}\neq 0$ for all $i\in \{1,2,\ldots, n-1\}$.
\end{definition}
Take a $5\times 5$ matrix as an example: in an upper Hessenberg matrix, the entries below the subdiagonal are zero:
$$
\begin{sbmatrix}{possibly\,\, unreduced}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
\qquad
\text{or}
\qquad
\begin{sbmatrix}{reduced}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & \textcolor{blue}{0} & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}.
$$
Then we have the following Hessenberg decomposition:
\begin{theoremHigh}[Hessenberg Decomposition]\label{theorem:hessenberg-decom}
Every $n\times n$ square matrix $\bA$ can be factored as
$$
\bA = \bQ\bH\bQ^\top \qquad \text{or} \qquad \bH = \bQ^\top \bA\bQ,
$$
where $\bH$ is an upper Hessenberg matrix, and $\bQ$ is an orthogonal matrix.
\end{theoremHigh}
It is not hard to see that if $\bA$ has the Hessenberg decomposition $\bA = \bQ\bH\bQ^\top$, then a lower Hessenberg decomposition of $\bA^\top$ is given by $\bA^\top = \bQ\bH^\top\bQ^\top$. The Hessenberg decomposition is similar in form to the QR decomposition in that both reduce a matrix to a sparse form whose lower part is zero.
\begin{remark}[Why Hessenberg Decomposition]
We will see that the zeros introduced into $\bH$ from $\bA$ are accomplished by the left orthogonal factor (as in the QR decomposition), while the right orthogonal factor does not transform the matrix into any simpler form. Why, then, do we want the Hessenberg decomposition rather than a QR decomposition, which has an even simpler structure with zeros on the lower subdiagonal as well? As explained in the previous section, the Hessenberg decomposition is usually used by other algorithms as a phase-1 step toward a decomposition that factors the matrix with orthogonal matrices on both sides, e.g., the SVD, UTV, and so on. If we employ a more aggressive algorithm that also zeroes the lower subdiagonal (again, as in the QR decomposition), the right orthogonal transform destroys those zeros, as we will see shortly.
On the other hand, the form $\bA = \bQ\bH\bQ^\top$ is an \textit{orthogonal similarity transformation} (Definition~\ref{definition:similar-matrices}, p.~\pageref{definition:similar-matrices}) of $\bA$, so the eigenvalues, rank, and trace of $\bA$ and $\bH$ are the same (Lemma~\ref{lemma:eigenvalue-similar-matrices}, p.~\pageref{lemma:eigenvalue-similar-matrices}). Then if we want to study the properties of $\bA$, working with $\bH$ can be a relatively simpler task than working with the original matrix $\bA$.
\end{remark}
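As an illustration (a sketch assuming SciPy is available; `scipy.linalg.hessenberg` with `calc_q=True` returns the pair $(\bH, \bQ)$ with $\bA = \bQ\bH\bQ^\top$):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))          # an arbitrary dense square matrix

H, Q = hessenberg(A, calc_q=True)    # A = Q H Q^T

assert np.allclose(A, Q @ H @ Q.T)            # the factorization holds
assert np.allclose(Q.T @ Q, np.eye(5))        # Q is orthogonal
# All entries below the first subdiagonal of H are (numerically) zero.
assert np.allclose(np.tril(H, k=-2), 0.0)
```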
\section{Similarity Transformation and Orthogonal Similarity Transformation}
As mentioned previously, the Hessenberg decomposition introduced in this section, the tridiagonal decomposition in the next section, the Schur decomposition (Theorem~\ref{theorem:schur-decomposition}, p.~\pageref{theorem:schur-decomposition}), and the spectral decomposition (Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}) share a similar form that transforms the matrix into a similar matrix. We now give rigorous definitions of similar matrices and similarity transformations. \index{Similar matrices}\index{Similarity transformation}
\begin{definition}[Similar Matrices and Similarity Transformation]\label{definition:similar-matrices}
$\bA$ and $\bB$ are called \textit{similar matrices} if there exists a nonsingular matrix $\bP$ such that $\bB = \bP\bA\bP^{-1}$.
In words, for any nonsingular matrix $\bP$, the matrices $\bA$ and $\bP\bA\bP^{-1}$ are similar matrices.
And in this sense, given the nonsingular matrix $\bP$, $\bP\bA\bP^{-1}$
is called a \textit{similarity transformation} applied to matrix $\bA$.
Moreover, when $\bP$ is orthogonal, $\bP\bA\bP^\top$ is also known as an \textit{orthogonal similarity transformation} of $\bA$. The orthogonal similarity transformation is important in the sense that the conditioning of the transformed matrix $\bP\bA\bP^\top$ is no worse than that of the original matrix $\bA$.
\end{definition}
The difference between the similarity transformation and orthogonal similarity transformation is partly explained in the sense of coordinate transformation (Section~\ref{section:coordinate-transformation}, p.~\pageref{section:coordinate-transformation}). Now we prove the important properties of similar matrices in the following lemma.
\begin{lemma}[Eigenvalue, Trace and Rank of Similar Matrices\index{Trace}]\label{lemma:eigenvalue-similar-matrices}
Any eigenvalue of $\bA$ is also an eigenvalue of $\bB = \bP\bA\bP^{-1}$, and conversely any eigenvalue of $\bP\bA\bP^{-1}$ is also an eigenvalue of $\bA$. That is, $\Lambda(\bA) = \Lambda(\bB)$, where $\Lambda(\bX)$ denotes the spectrum of matrix $\bX$ (Definition~\ref{definition:spectrum}, p.~\pageref{definition:spectrum}).
Moreover, the trace and rank of $\bA$ are equal to those of $\bP\bA\bP^{-1}$ for any nonsingular matrix $\bP$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:eigenvalue-similar-matrices}]
For any eigenvalue $\lambda$ of $\bA$ with eigenvector $\bx$, we have $\bA\bx =\lambda \bx$. Then $\bP\bA\bP^{-1} (\bP\bx) = \lambda \bP\bx$, so that $\bP\bx$ is an eigenvector of $\bP\bA\bP^{-1}$ corresponding to $\lambda$.
Similarly, for any eigenvalue $\lambda$ of $\bP\bA\bP^{-1}$, we have $\bP\bA\bP^{-1} \bx = \lambda \bx$. Then $\bA\bP^{-1} \bx = \lambda \bP^{-1}\bx$ such that $\bP^{-1}\bx$ is an eigenvector of $\bA$ corresponding to $\lambda$.
For the trace of $\bP\bA\bP^{-1}$, we have $trace(\bP\bA\bP^{-1}) = trace(\bA\bP^{-1}\bP) = trace(\bA)$, where the first equality comes from the fact that
trace of a product is invariant under cyclical permutations of the factors:
\begin{equation}
trace(\bA\bB\bC) = trace(\bB\bC\bA) = trace(\bC\bA\bB), \nonumber
\end{equation}
if all $\bA\bB\bC$, $\bB\bC\bA$, and $\bC\bA\bB$ exist.
For the rank of $\bP\bA\bP^{-1}$, we separate it into two claims as follows.
\paragraph{Rank claim 1: $rank(\bZ\bA)=rank(\bA)$ if $\bZ$ is nonsingular}
We first show that $rank(\bZ\bA)=rank(\bA)$ if $\bZ$ is nonsingular. For any vector $\bn$ in the null space of $\bA$, i.e., $\bA\bn = \bzero$, we have $\bZ\bA\bn = \bzero$, so $\bn$ is also in the null space of $\bZ\bA$. This implies $\nspace(\bA)\subseteq \nspace(\bZ\bA)$.
Conversely, for any vector $\bmm$ in the null space of $\bZ\bA$, i.e., $\bZ\bA\bmm = \bzero$, we have $\bA\bmm = \bZ^{-1} \bzero=\bzero$, so $\bmm$ is also in the null space of $\bA$. This gives $\nspace(\bZ\bA)\subseteq \nspace(\bA)$.
The two inclusions together imply
$$
\nspace(\bA) = \nspace(\bZ\bA)\quad \longrightarrow \quad rank(\bZ\bA)=rank(\bA),
$$
where the implication follows from the rank-nullity theorem, since $\bA$ and $\bZ\bA$ have the same number of columns.
\paragraph{Rank claim 2: $rank(\bA\bZ)=rank(\bA)$ if $\bZ$ is nonsingular}
We notice that the row rank is equal to the column rank of any matrix (Corollary~\ref{lemma:equal-dimension-rank}, p.~\pageref{lemma:equal-dimension-rank}). Then $rank(\bA\bZ) = rank(\bZ^\top\bA^\top)$. Since $\bZ^\top$ is nonsingular, by claim 1, we have $rank(\bZ^\top\bA^\top) = rank(\bA^\top) = rank(\bA)$ where the last equality is again from the fact that the row rank is equal to the column rank of any matrix. This results in $rank(\bA\bZ)=rank(\bA)$ as claimed.
Since $\bP, \bP^{-1}$ are nonsingular, we then have $rank(\bP\bA\bP^{-1}) = rank(\bA\bP^{-1}) = rank(\bA)$ where the first equality is from claim 1 and the second equality is from claim 2. We complete the proof.
\end{proof}
The lemma above will prove very useful in the sequel (see Lemma~\ref{lemma:rank-of-symmetric-idempotent}, p.~\pageref{lemma:rank-of-symmetric-idempotent}, where it is used to obtain the trace and rank of symmetric idempotent matrices).
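As a quick numerical illustration of the lemma (a NumPy sketch, not part of the formal development), the following checks that a random similarity transform preserves the spectrum, trace, and rank:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
P = rng.standard_normal((n, n))      # a generic random matrix is nonsingular (a.s.)
B = P @ A @ np.linalg.inv(P)         # the similar matrix P A P^{-1}

# Spectra agree (compare sorted eigenvalues).
eigA = np.sort_complex(np.linalg.eigvals(A))
eigB = np.sort_complex(np.linalg.eigvals(B))
assert np.allclose(eigA, eigB, atol=1e-7)

# Trace and rank agree as well.
assert np.isclose(np.trace(A), np.trace(B))
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)
```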
\section{Existence of the Hessenberg Decomposition}
We will prove that any $n\times n$ matrix can be reduced to Hessenberg form via a sequence of Householder transformations applied to the matrix from the left and the right.
Previously, we utilized Householder reflectors to triangularize matrices, introducing zeros below the diagonal to obtain the QR decomposition. A similar approach can be applied to introduce zeros below the subdiagonal; to see this, it helps to revisit the idea behind the Householder reflector in Definition~\ref{definition:householder-reflector} (p.~\pageref{definition:householder-reflector}).
Before presenting the mathematical construction of this decomposition, we state the following remark, which will be very useful in deriving the decomposition.
\begin{remark}[Left and Right Multiplied by a Matrix with Block Identity]\label{remark:left-right-identity}
Let $\bA\in \real^{n\times n}$ be a square matrix, and let
$$
\bB = \begin{bmatrix}
\bI_k &\bzero \\
\bzero & \bB_{n-k}
\end{bmatrix},
$$
where $\bI_k$ is a $k\times k$ identity matrix and $\bB_{n-k}$ is any $(n-k)\times (n-k)$ block. Then $\bB\bA$ does not change the first $k$ rows of $\bA$, and $\bA\bB$ does not change the first $k$ columns of $\bA$.
\end{remark}
The proof of this remark is trivial.
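The remark can be verified directly in a couple of lines (a minimal NumPy sketch with an arbitrary lower-right block):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.standard_normal((n, n))

# B = [[I_k, 0], [0, B_{n-k}]] with an arbitrary (n-k) x (n-k) lower-right block.
B = np.eye(n)
B[k:, k:] = rng.standard_normal((n - k, n - k))

assert np.allclose((B @ A)[:k, :], A[:k, :])   # B A keeps the first k rows of A
assert np.allclose((A @ B)[:, :k], A[:, :k])   # A B keeps the first k columns of A
```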
\subsubsection*{\textbf{First Step: Introduce Zeros for the First Column}}
Let $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ be the column partition of $\bA$ with each $\ba_i \in \real^{n}$, and let $\bar{\ba}_1, \bar{\ba}_2, \ldots, \bar{\ba}_n \in \real^{n-1}$ be the vectors obtained by removing the first component of each $\ba_i$. Let
$$
r_1 = ||\bar{\ba}_1||, \qquad \bu_1 = \frac{\bar{\ba}_1 - r_1 \be_1}{||\bar{\ba}_1 - r_1 \be_1||}, \qquad \text{and}\qquad \widetilde{\bH}_1 = \bI - 2\bu_1\bu_1^\top \in \real^{(n-1)\times (n-1)},
$$
where $\be_1$ here is the first standard basis vector of $\real^{n-1}$, i.e., $\be_1=[1;0;0;\ldots;0]\in \real^{n-1}$. To introduce zeros below the subdiagonal, i.e., to operate on the submatrix $\bA_{2:n,1:n}$, we embed the Householder reflector into
$$
\bH_1 = \begin{bmatrix}
1 &\bzero \\
\bzero & \widetilde{\bH}_1
\end{bmatrix},
$$
in which case $\bH_1\bA$ introduces zeros in the first column of $\bA$ below entry $(2,1)$. By Remark~\ref{remark:left-right-identity}, the first row of $\bA$ is not affected and is kept unchanged. One can easily verify that both $\bH_1$ and $\widetilde{\bH}_1$ are orthogonal and symmetric (by the definition of the Householder reflector). To obtain the form in Theorem~\ref{theorem:hessenberg-decom}, we multiply $\bH_1\bA$ on the right by $\bH_1^\top$, which yields $\bH_1\bA\bH_1^\top$. The $\bH_1^\top$ on the right does not change the first column of $\bH_1\bA$ and thus keeps the zeros introduced in the first column.
An example of a $5\times 5$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_1\times}{\rightarrow}
&\begin{sbmatrix}{\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_1^\top}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\end{aligned}
$$
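The first step above can be sketched concretely (a minimal NumPy illustration of $\bH_1\bA\bH_1^\top$, not the full algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))

a1 = A[1:, 0]                           # \bar{a}_1: first column with its first entry removed
r1 = np.linalg.norm(a1)
e1 = np.zeros(n - 1); e1[0] = 1.0
u1 = a1 - r1 * e1
u1 /= np.linalg.norm(u1)

H1 = np.eye(n)
H1[1:, 1:] -= 2.0 * np.outer(u1, u1)    # embed H~_1 below the 1x1 identity block

B = H1 @ A @ H1.T
# Zeros below entry (2,1) survive the right multiplication by H1^T.
assert np.allclose(B[2:, 0], 0.0)
# The (2,1) entry equals r1 = ||a1||.
assert np.isclose(B[1, 0], r1)
```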
\subsubsection*{\textbf{Second Step: Introduce Zeros for the Second Column}}
Let $\bB = \bH_1\bA\bH_1^\top$, in which the entries of the first column below entry $(2,1)$ are all zeros. The goal is now to introduce zeros in the second column below entry $(3,2)$. Let $\bB_2 = \bB_{2:n,2:n}=[\bb_1, \bb_2, \ldots, \bb_{n-1}]$, and let $\bar{\bb}_1, \bar{\bb}_2, \ldots, \bar{\bb}_{n-1} \in \real^{n-2}$ again be the vectors obtained by removing the first component of each $\bb_i$. We can again construct a Householder reflector
\begin{equation}\label{equation:householder-qr-lengthr}
r_2 = ||\bar{\bb}_1||, \qquad \bu_2 = \frac{\bar{\bb}_1 - r_2 \be_1}{||\bar{\bb}_1 - r_2 \be_1||}, \qquad \text{and}\qquad \widetilde{\bH}_2 = \bI - 2\bu_2\bu_2^\top\in \real^{(n-2)\times (n-2)},
\end{equation}
where $\be_1$ now is the first standard basis vector of $\real^{n-2}$. To introduce zeros below the subdiagonal, i.e., to operate on the submatrix $\bB_{3:n,1:n}$, we embed the Householder reflector into
$$
\bH_2 = \begin{bmatrix}
\bI_2 &\bzero \\
\bzero & \widetilde{\bH}_2
\end{bmatrix},
$$
where $\bI_2$ is a $2\times 2$ identity matrix. We can see that $\bH_2\bH_1\bA\bH_1^\top$ does not change the first two rows of $\bH_1\bA\bH_1^\top$; moreover, since the entries of the first column below row 2 are already zero, $\widetilde{\bH}_2$ acts on a zero subvector there, so the zeros in the first column are kept. Again, putting $\bH_2^\top$ on the right of $\bH_2\bH_1\bA\bH_1^\top$ does not change the first two columns, so these zeros are also kept.
Following the example of a $5\times 5$ matrix, the second step is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_2\times}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bH_1^\top}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_2^\top}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bH_1^\top\bH_2^\top}
\boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\end{aligned}
$$
The same process can go on, and there are $n-2$ such steps in total. We finally obtain the upper Hessenberg matrix
$$
\bH = \bH_{n-2} \bH_{n-3}\ldots\bH_1 \bA\bH_1^\top\bH_2^\top\ldots\bH_{n-2}^\top.
$$
Since the $\bH_i$'s are symmetric and orthogonal, the above equation simplifies to
$$
\bH =\bH_{n-2} \bH_{n-3}\ldots\bH_1 \bA\bH_1\bH_2\ldots\bH_{n-2}.
$$
Note that only $n-2$ such stages exist, rather than $n-1$ or $n$. We verify this number of steps in the example below.
The example of a $5\times 5$ matrix as a whole is shown as follows where again $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10, frametitle={A Complete Example of Hessenberg Decomposition}]
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_1\times}{\rightarrow}
&\begin{sbmatrix}{\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_1^\top}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\stackrel{\bH_2\times}{\rightarrow}
&\begin{sbmatrix}{\bH_2\bH_1\bA\bH_1^\top}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_2^\top}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bH_1^\top\bH_2^\top}
\boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\stackrel{\bH_3\times}{\rightarrow}
&\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bH_1^\top\bH_2^\top}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_3^\top}{\rightarrow}
\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bH_1^\top\bH_2^\top\bH_3^\top}
\boxtimes & \boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix},
\end{aligned}
$$
\end{mdframed}
where we find
\begin{itemize}
\item when multiplying by $\bH_1$ on the left, we operate on a $(5-1)\times (5-1+1)$ submatrix;
\item when multiplying by $\bH_2$ on the left, we operate on a $(5-2)\times (5-2+1)$ submatrix;
\item and when multiplying by $\bH_3$ on the left, we operate on a $(5-3)\times(5-3+1)$ submatrix.
\end{itemize}
Similarly,
\begin{itemize}
\item when multiplying by $\bH_1^\top$ on the right, we operate on a $5\times (5-1)$ submatrix;
\item when multiplying by $\bH_2^\top$ on the right, we operate on a $5\times (5-2)$ submatrix;
\item and when multiplying by $\bH_3^\top$ on the right, we operate on a $5\times (5-3)$ submatrix.
\end{itemize}
These two observations are important for organizing the computation and reducing the complexity of the Hessenberg decomposition.
\section{Computing the Hessenberg Decomposition}
Let $\bA\in \real^{n\times n}$. When multiplying by $\bH_i$ on the left in each step $i\in \{1, 2, \ldots, n-2\}$, we operate on a submatrix of size $(n-i)\times(n-i+1)$. Furthermore, the first column of this $(n-i)\times(n-i+1)$ submatrix can simply be set to the reflected vector obtained from the Householder reflector, so that we only need to operate on an $(n-i)\times(n-i)$ submatrix.
When multiplying by $\bH_i^\top$ on the right in each step $i\in \{1, 2, \ldots, n-2\}$, we operate on a submatrix of size $n\times(n-i)$.
After the Householder transformations, we obtain the final upper Hessenberg matrix $\bH$, and the process is formulated in Algorithm~\ref{alg:hessenbert-decomposition-householder}.
Similar to computing the QR decomposition via Householder reflectors, in Algorithm~\ref{alg:hessenbert-decomposition-householder}, to compute $\bH = \bH_{n-2} \bH_{n-3}\ldots\bH_1 \bA\bH_1^\top\bH_2^\top\ldots\bH_{n-2}^\top$, we write out the equation
$$
\begin{aligned}
\bH &= \cleft[black](\bH_{n-2} \ldots\cleft[green](\bH_3\cleft[red](\bH_2\cleft[blue](\bH_1 \bA\bH_1\cright[blue])\bH_2\cright[red])\bH_3\cright[green])\ldots\bH_{n-2}\cright[black]) \\
&=
\begin{bmatrix}
\bI_{n-2}& \bzero \\
\bzero& \bI - 2\bu_{n-2}\bu_{n-2}^\top
\end{bmatrix}
\ldots
\begin{bmatrix}
\bI_3& \bzero \\
\bzero& \bI - 2\bu_3\bu_3^\top
\end{bmatrix}
\begin{bmatrix}
\bI_2& \bzero \\
\bzero& \bI - 2\bu_2\bu_2^\top
\end{bmatrix}
\begin{bmatrix}
\bI_1 &\bzero \\
\bzero & \bI - 2\bu_1\bu_1^\top
\end{bmatrix}\\
&\bA
\begin{bmatrix}
\bI_1 &\bzero \\
\bzero & \bI - 2\bu_1\bu_1^\top
\end{bmatrix}
\begin{bmatrix}
\bI_2& \bzero \\
\bzero& \bI - 2\bu_2\bu_2^\top
\end{bmatrix}
\begin{bmatrix}
\bI_3& \bzero \\
\bzero& \bI - 2\bu_3\bu_3^\top
\end{bmatrix}
\ldots
\begin{bmatrix}
\bI_{n-2}& \bzero \\
\bzero& \bI - 2\bu_{n-2}\bu_{n-2}^\top
\end{bmatrix},
\end{aligned}
$$
where the different colors for the parentheses indicate the computational order, and
\begin{itemize}
\item the upper-left of $\bH_1$ is a $1\times 1$ identity matrix, multiplying on the left will not change the \textbf{first row} of $\bA$, then multiplying on the right will not change the \textbf{first column} of $\bH_1\bA$;
\item the upper-left of $\bH_2$ is a $2\times 2$ identity matrix, multiplying on the left will not change the \textbf{first 2 rows} of $\bH_1\bA\bH_1$, then multiplying on the right will not change the \textbf{first 2 columns} of $\bH_2\bH_1\bA\bH_1$;
\item the upper-left of $\bH_3$ is a $3\times 3$ identity matrix, multiplying on the left will not change the \textbf{first 3 rows} of $\bH_2\bH_1\bA\bH_1\bH_2$, then multiplying on the right will not change the \textbf{first 3 columns} of $\bH_3\bH_2\bH_1\bA\bH_1\bH_2$;
\item the process can go on, and this property yields steps 8 and 9 in the algorithm.
\end{itemize}
Similarly, to get the final orthogonal matrix $\bQ=\bH_1 \bH_2\ldots\bH_{n-2}$, we write out the equation:
$$
\begin{aligned}
\bQ
&=\bH_1 \bH_2\bH_3\ldots\bH_{n-2} \\
&= \begin{bmatrix}
\bI_1 &\bzero \\
\bzero & \bI - 2\bu_1\bu_1^\top
\end{bmatrix}
\begin{bmatrix}
\bI_2& \bzero \\
\bzero& \bI - 2\bu_2\bu_2^\top
\end{bmatrix}
\begin{bmatrix}
\bI_3& \bzero \\
\bzero& \bI - 2\bu_3\bu_3^\top
\end{bmatrix}
\ldots
\begin{bmatrix}
\bI_{n-2}& \bzero \\
\bzero& \bI - 2\bu_{n-2}\bu_{n-2}^\top
\end{bmatrix},
\end{aligned}
$$
where the upper-left of $\bH_1$ is a $1\times 1$ identity matrix; the upper-left of $\bH_2$ is a $2\times 2$ identity matrix, so it does not change the first 2 columns of $\bH_1$; the upper-left of $\bH_3$ is a $3\times 3$ identity matrix, which does not modify the first 3 columns of $\bH_1\bH_2$; and so on. This property yields step 15 in the algorithm.
\begin{algorithm}[h]
\caption{Hessenberg Decomposition via the Householder Reflector}
\label{alg:hessenbert-decomposition-householder}
\begin{algorithmic}[1]
\Require matrix $\bA$ with size $n\times n $;
\State Initially set $\bH = \bA$; (\textcolor{blue}{note $\bH$ is the Hessenberg, $\bH_i$'s are Householders})
\For{$i=1$ to $n-2$}
\State $\ba = \bH_{i+1:n,i}$, i.e., first column of $\bH_{i+1:n,i:n}\in \real^{(n-i)\times(n-i+1)}$;
\State $r = ||\ba||$;\Comment{$2(n-i)$ flops}
\State $\bu_i = \ba-r\be_1$; \Comment{1 flop}
\State $\bu_i = \bu_i / ||\bu_i||$; \Comment{$3(n-i)$ flops}
\State $\bH_{i+1,i}=r$, $\bH_{i+2:n,i}=\bzero$, i.e., set the value of first column of $\bH_{i+1:n,i:n}$; \Comment{0 flops}
\State Left: set the value of columns $2$ to $n$ of $\bH_{i+1:n,i:n}$,
$$
\begin{aligned}
\bH_{i+1:n,i+1:n} &= (\bI-2\bu_i\bu_i^\top)\bH_{i+1:n,i+1:n} \\
&= \bH_{i+1:n,i+1:n} - 2\bu_i\bu_i^\top\bH_{i+1:n,i+1:n} \in \real^{(n-i)\times(n-i)}\\
&\text{\qquad\qquad($4(n-i)^2$ flops);}
\end{aligned}
$$
\State Right:
$$
\begin{aligned}
\bH_{1:n,i+1:n} &= \bH_{1:n,i+1:n}(\bI-2\bu_i\bu_i^\top) \\
&=\bH_{1:n,i+1:n} - \bH_{1:n,i+1:n} 2\bu_i\bu_i^\top\in \real^{n\times(n-i)}\\
&\text{\qquad\,\,\,\,\,($4n(n-i)-n$ flops);}
\end{aligned}
$$
\EndFor
\State Output $\bH$ as the upper Hessenberg matrix;
\State Get $\bQ=\bH_1\bH_2\ldots\bH_{n-2}$, where $\bH_i$'s are Householder reflectors.
\State Initially set $\bQ = \bH_1$;
\For{$i=1$ to $n-3$}
\State Compute $\bQ$:
$$\begin{aligned}
\bQ_{1:n,i+2:n} &= \bQ_{1:n,i+2:n}\bH_{i+1}\\
&= \bQ_{1:n,i+2:n}(\bI - 2\bu_{i+1}\bu_{i+1}^\top)\\
&=\bQ_{1:n,i+2:n}-\bQ_{1:n,i+2:n}2\bu_{i+1}\bu_{i+1}^\top \in \real^{n\times (n-i-1)}\\
&\text{\qquad\,\,\,\,\,\gap($4n(n-i-1) - n$ flops);}
\end{aligned}$$
\EndFor
\State Output $\bQ$ as the orthogonal matrix.
\end{algorithmic}
\end{algorithm}
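Algorithm~\ref{alg:hessenbert-decomposition-householder} can be sketched directly in NumPy (a minimal implementation following the steps above, not a production routine; dense outer-product updates are used instead of the in-place blocking a tuned library would apply):

```python
import numpy as np

def hessenberg_householder(A):
    """Reduce A to upper Hessenberg form, returning Q, H with A = Q @ H @ Q.T."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    Q = np.eye(n)
    for i in range(n - 2):               # n-2 stages, as in the text
        a = H[i + 1:, i].copy()          # first column of the working submatrix
        r = np.linalg.norm(a)
        u = a.copy()
        u[0] -= r                        # u = a - r * e1
        norm_u = np.linalg.norm(u)
        if norm_u == 0.0:                # column already in the desired form
            continue
        u /= norm_u
        H[i + 1, i] = r                  # set the reflected first column directly
        H[i + 2:, i] = 0.0
        # Left multiply: only the trailing (n-i-1) x (n-i-1) block changes.
        H[i + 1:, i + 1:] -= 2.0 * np.outer(u, u @ H[i + 1:, i + 1:])
        # Right multiply: all n rows of the trailing columns change.
        H[:, i + 1:] -= 2.0 * np.outer(H[:, i + 1:] @ u, u)
        # Accumulate Q = H_1 H_2 ... H_{n-2}.
        Q[:, i + 1:] -= 2.0 * np.outer(Q[:, i + 1:] @ u, u)
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
Q, H = hessenberg_householder(A)
assert np.allclose(np.tril(H, -2), 0.0)      # H is upper Hessenberg
assert np.allclose(Q @ Q.T, np.eye(6))       # Q is orthogonal
assert np.allclose(Q @ H @ Q.T, A)           # A = Q H Q^T
```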
\begin{theorem}[Algorithm Complexity: Hessenberg via Householder]\label{theorem:hessenbert-householder}
Algorithm~\ref{alg:hessenbert-decomposition-householder} requires $\sim \frac{10}{3}n^3$ flops to compute a Hessenberg decomposition of an $n\times n$ square matrix. Further, if $\bQ$ is needed explicitly, an additional $\sim 2n^3$ flops are required.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:hessenbert-householder}] We separate the proof into obtaining the Hessenberg matrix and the orthogonal matrix.
\paragraph{To obtain the Hessenberg matrix: }
For loop $i$, $\bH_{i+1:n,i:n}\in \real^{(n-i)\times(n-i+1)}$. Thus $\ba$ is in $\real^{n-i}$.
In step 4, computing $r = ||\ba||$ involves $n-i$ multiplications, $n-i-1$ additions, and 1 square root operation, i.e., \fbox{$2(n-i)$} flops.
In step 5, $\bu_i = \ba-r\be_1$ requires 1 subtraction, i.e., \fbox{$1$} flop, due to the special structure of $\be_1$.
In step 6, as in step 4, it requires $2(n-i)$ flops to compute the norm, plus $(n-i)$ additional divisions, i.e., \fbox{$3(n-i)$} flops.
In step 8, for loop $i$, computing $\bu_i^\top \bH_{i+1:n,i+1:n}$ requires $n-i$ inner products ($n-i$ multiplications and $n-i-1$ additions each), i.e., \fbox{$(n-i)(2(n-i)-1)$} flops. Computing $2\bu_i$ requires \fbox{$n-i$} multiplications. Further, $2\bu_i (\bu_i^\top \bH_{i+1:n,i+1:n})$ requires \fbox{$(n-i)^2$} multiplications to form an $(n-i)\times (n-i)$ matrix. The final matrix subtraction needs \fbox{$(n-i)^2$} subtractions. Thus the total complexity of step 8 in loop $i$ is \fbox{$4(n-i)^2$} flops for each iteration $i$.
In step 9, the computation of $2\bu_i$ needs 0 flops since it has already been calculated in step 8. Similarly, $\bH_{1:n,i+1:n} 2\bu_i$ involves $n$ inner products ($n-i$ multiplications and $n-i-1$ additions each), i.e., \fbox{$n(2(n-i)-1)$} flops. $\bH_{1:n,i+1:n} 2\bu_i\bu_i^\top$ takes \fbox{$n(n-i)$} multiplications to form an $n\times (n-i)$ matrix. The final matrix subtraction needs an additional \fbox{$n(n-i)$} subtractions. This makes the complexity of step 9 \fbox{$4n(n-i)-n$} flops for each iteration $i$.
So for loop $i$ the total complexity is $4(n-i)^2 + 4n(n-i) + 5(n-i) + 1 - n = 4i^2 - (12n+5)i + (8n^2+4n+1)$ flops. Letting $f(i)$ denote this count, the complexity for steps 2 to 10 can be obtained by
$$
\mathrm{cost}=f(1)+f(2)+\ldots+f(n-2).
$$
A simple calculation shows that the sum over the $n-2$ loops is $\sim\frac{10}{3}n^3$ flops if we keep only the leading term.
\paragraph{To obtain the orthogonal matrix: }
For the additional computation of $\bQ$ in step 15, the situation is similar to step 9. The computation of $2\bu_{i+1}$ needs 0 flops since it has already been calculated in step 8. $\bQ_{1:n,i+2:n}2\bu_{i+1}$ involves $n$ inner products ($n-i-1$ multiplications and $n-i-2$ additions each), i.e., \fbox{$n(2(n-i-1)-1)$} flops. $\bQ_{1:n,i+2:n}2\bu_{i+1}\bu_{i+1}^\top$ takes \fbox{$n(n-i-1)$} multiplications to form an $n\times (n-i-1)$ matrix. The final matrix subtraction needs an additional \fbox{$n(n-i-1)$} subtractions. This makes the complexity of step 15 \fbox{$4n(n-i-1) - n$} flops for each iteration $i$. Letting $g(i)=4n(n-i-1) - n$, the complexity for steps 14 to 16 is given by
$$
\mathrm{cost}=g(1)+g(2)+\ldots +g(n-3).
$$
A trivial calculation shows that the complexity is $\sim 2n^3$ flops if we keep only the leading term.
\end{proof}
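The leading-order claims can be sanity-checked by summing the dominant per-iteration costs from the proof (a quick script under the proof's own flop model; lower-order terms in $f(i)$ are dropped, which does not affect the leading term):

```python
n = 2000

# Steps 8 and 9: 4(n-i)^2 + 4n(n-i) - n flops in loop i, for i = 1, ..., n-2.
cost_H = sum(4 * (n - i) ** 2 + 4 * n * (n - i) - n for i in range(1, n - 1))
assert abs(cost_H / n**3 - 10 / 3) < 0.01     # ~ (10/3) n^3

# Step 15: g(i) = 4n(n-i-1) - n flops in loop i, for i = 1, ..., n-3.
cost_Q = sum(4 * n * (n - i - 1) - n for i in range(1, n - 2))
assert abs(cost_Q / n**3 - 2) < 0.01          # ~ 2 n^3
```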
\section{Properties of the Hessenberg Decomposition}\label{section:hessenberg-decomposition}
The Hessenberg decomposition is not unique, since the Householder reflectors can be constructed in different ways (see, e.g., Equation~\eqref{equation:householder-qr-lengthr}, p.~\pageref{equation:householder-qr-lengthr}). However, under mild conditions, we can claim a similar structure across different decompositions.
\begin{theorem}[Implicit Q Theorem for Hessenberg Decomposition\index{Implicit Q theorem}]\label{theorem:implicit-q-hessenberg}
Suppose two Hessenberg decompositions of matrix $\bA\in \real^{n\times n}$ are given by $\bA=\bU\bH\bU^\top=\bV\bG\bV^\top$ where $\bU=[\bu_1, \bu_2, \ldots, \bu_n]$ and $\bV=[\bv_1, \bv_2, \ldots, \bv_n]$ are the column partitions of $\bU,\bV$. Suppose further that $k$ is the smallest positive integer for which $h_{k+1,k}=0$ where $h_{ij}$ is the entry $(i,j)$ of $\bH$. Then
\begin{itemize}
\item If $\bu_1=\bv_1$, then $\bu_i = \pm \bv_i$ and $|h_{i,i-1}| = |g_{i,i-1}|$ for $i\in \{2,3,\ldots,k\}$.
\item When $k=n-1$, the Hessenberg matrix $\bH$ is known as \textit{unreduced} (Definition~\ref{definition:upper-hessenbert}, p.~\pageref{definition:upper-hessenbert}). However, if $k<n-1$, then $g_{k+1,k}=0$.
\end{itemize}
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:implicit-q-hessenberg}]
Define the orthogonal matrix $\bQ=\bV^\top\bU$ and we have
$$
\left.
\begin{aligned}
\bG\bQ &= \bV^\top\bA\bV \bV^\top\bU = \bV^\top\bA\bU \\
\bQ\bH &= \bV^\top\bU \bU^\top\bA\bU = \bV^\top\bA\bU
\end{aligned}
\right\}
\leadto
\bG\bQ = \bQ\bH,
$$
the $(i-1)$-th column of each can be represented as
$$
\bG\bq_{i-1} = \bQ\bh_{i-1},
$$
where $\bq_{i-1}$ and $\bh_{i-1}$ are the $(i-1)$-th column of $\bQ$ and $\bH$ respectively. Since $h_{l,i-1}=0$ for $l\geq i+1$ (by the definition of upper Hessenberg matrices), $\bQ\bh_{i-1}$ can be represented as
$$
\bQ\bh_{i-1} = \sum_{j=1}^{i} h_{j,i-1} \bq_j = h_{i,i-1}\bq_i + \sum_{j=1}^{i-1} h_{j,i-1} \bq_j.
$$
Combining the two findings above, it follows that
$$
h_{i,i-1}\bq_i = \bG\bq_{i-1} - \sum_{j=1}^{i-1} h_{j,i-1} \bq_j.
$$
A moment of reflection reveals that $[\bq_1, \bq_2, \ldots,\bq_k]$ is upper triangular. Since $\bQ$ is orthogonal, this submatrix must be diagonal with each diagonal value in $\{-1, 1\}$. Then $\bq_1=\be_1$ and $\bq_i = \pm \be_i$ for $i\in \{2,\ldots, k\}$. Further, since $\bq_i =\bV^\top\bu_i$, we have $\bu_i = \bV\bq_i = \pm\bv_i$, and $h_{i,i-1}=\bq_i^\top (\bG\bq_{i-1} - \sum_{j=1}^{i-1} h_{j,i-1} \bq_j)=\bq_i^\top \bG\bq_{i-1}$. For $i\in \{2,\ldots,k\}$, $\bq_i^\top \bG\bq_{i-1}$ is just $\pm g_{i,i-1}$. It follows that
$$
\begin{aligned}
|h_{i,i-1}| &= |g_{i,i-1}|, \qquad &\forall i\in \{2,\ldots,k\},\\
\bu_i &= \pm\bv_i, \qquad &\forall i\in \{2,\ldots,k\}.
\end{aligned}
$$
This proves the first part. For the second part, if $k<n-1$,
$$
\begin{aligned}
g_{k+1,k} &= \be_{k+1}^\top\bG\be_{k} = \pm \be_{k+1}^\top\underbrace{\bG\bQ}_{\bQ\bH} \be_{k} = \pm \be_{k+1}^\top\underbrace{\bQ\bH \be_{k}}_{\text{$k$-th column of $\bQ\bH$}} \\
&= \pm \be_{k+1}^\top \bQ\bh_k = \pm \be_{k+1}^\top \sum_{j=1}^{k+1} h_{jk}\bq_j
=\pm \be_{k+1}^\top \sum_{j=1}^{\textcolor{blue}{k}} h_{jk}\bq_j=0,
\end{aligned}
$$
where the penultimate equality is from the assumption that $h_{k+1,k}=0$. This completes the proof.
\end{proof}
We observe from the above theorem, when two Hessenberg decompositions of matrix $\bA$ are both unreduced and have the same first column in the orthogonal matrices, then the Hessenberg matrices $\bH, \bG$ are similar matrices such that $\bH = \bD\bG\bD^{-1}$ where $\bD=\diag(\pm 1, \pm 1, \ldots, \pm 1)$. \textit{Moreover, and most importantly, if we restrict the elements in the lower sub-diagonal of the Hessenberg matrix $\bH$ to be positive (if possible), then the Hessenberg decomposition $\bA=\bQ\bH\bQ^\top$ is uniquely determined by $\bA$ and the first column of $\bQ$.} This is similar to what we have claimed on the uniqueness of the QR decomposition (Corollary~\ref{corollary:unique-qr}, p.~\pageref{corollary:unique-qr}).
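The sign ambiguity can be illustrated numerically. The sketch below (NumPy; the decomposition is constructed by hand rather than by a reduction algorithm) builds one unreduced Hessenberg decomposition $\bA=\bQ\bH\bQ^\top$ and a second one $(\bQ\bD)(\bD\bH\bD)(\bQ\bD)^\top$ with $\bD=\diag(1,\pm 1,\ldots,\pm 1)$, then checks the conclusions of the theorem:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
# One unreduced Hessenberg decomposition A = Q H Q^T, built by hand:
H = np.triu(rng.standard_normal((n, n)), -1)
sub = (np.arange(1, n), np.arange(n - 1))           # subdiagonal index pairs
H[sub] = np.abs(H[sub]) + 0.5                        # strictly nonzero subdiagonal
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))     # a random orthogonal matrix
A = Q @ H @ Q.T

# A second decomposition with the SAME first column of the orthogonal factor:
# D = diag(1, ±1, ..., ±1), so Q D has first column q_1.
d = np.concatenate(([1.0], rng.choice([-1.0, 1.0], n - 1)))
D = np.diag(d)
Q2, H2 = Q @ D, D @ H @ D

assert np.allclose(Q2 @ H2 @ Q2.T, A)                # still a valid decomposition
assert np.allclose(np.abs(H2[sub]), np.abs(H[sub]))  # |h_{i,i-1}| = |g_{i,i-1}|
assert np.allclose(np.abs(Q2), np.abs(Q))            # columns agree up to sign
```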
The next finding involves a Krylov matrix defined as follows:
\begin{definition}[Krylov Matrix\index{Krylov matrix}]\label{definition:krylov-matrix}
Given a matrix $\bA\in \real^{n\times n}$, a vector $\bq\in \real^n$, and a positive integer $k$, the \textit{Krylov matrix} is defined to be
$$
\bK(\bA, \bq, k) =
\begin{bmatrix}
\bq & \bA\bq & \ldots & \bA^{k-1}\bq
\end{bmatrix}
\in \real^{n\times k}.
$$
\end{definition}
\begin{theorem}[Reduced Hessenberg]\label{theorem:implicit-q-hessenberg-v2}
Suppose there exists an orthogonal matrix $\bQ$ such that $\bA\in \real^{n\times n}$ can be factored as $\bA = \bQ\bH\bQ^\top$. Then $\bQ^\top\bA\bQ=\bH$ is an unreduced upper Hessenberg matrix if and only if $\bR=\bQ^\top \bK(\bA, \bq_1, n)$ is nonsingular and upper triangular, where $\bq_1$ is the first column of $\bQ$.
Moreover, if $\bR$ is singular and $k$ is the smallest index such that $r_{kk}=0$, then $k$ is also the smallest index such that $h_{k,k-1}=0$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:implicit-q-hessenberg-v2}] We prove by forward implication and converse implication separately as follows:
\paragraph{Forward implication}
Suppose $\bH$ is unreduced, and write out the matrix
$$
\bR = \bQ^\top \bK(\bA, \bq_1, n) = [\be_1, \bH\be_1, \ldots, \bH^{n-1}\be_1],
$$
which is upper triangular with $r_{11}=1$, since $\bH^{j}\be_1$ has zeros below entry $j+1$. Observe further that $r_{ii} = h_{21}h_{32}\ldots h_{i,i-1}$ for $i\in \{2,3,\ldots, n\}$. When $\bH$ is unreduced, all these diagonal entries are nonzero, so $\bR$ is nonsingular as well.
\paragraph{Converse implication}
Now suppose $\bR$ is upper triangular and nonsingular. Since $\bR = [\be_1, \bH\be_1, \ldots, \bH^{n-1}\be_1]$, we have $\br_{k+1} = \bH\br_{k}$, where $\br_k$ has zeros below entry $k$ and $r_{kk}\neq 0$. Comparing entry $k+1$ on both sides gives $r_{k+1,k+1} = h_{k+1,k}\, r_{kk} \neq 0$, so $h_{k+1,k}\neq 0$ for $k\in \{1,2,\ldots, n-1\}$. Then $\bH$ is unreduced.
If $\bR$ is singular and $k$ is the smallest index so that $r_{kk}=0$, then
$$
\left.
\begin{aligned}
r_{k-1,k-1}&=h_{21}h_{32}\ldots h_{k-1,k-2}&\neq 0 \\
r_{kk}&=h_{21}h_{32}\ldots h_{k-1,k-2} h_{k,k-1}&= 0
\end{aligned}
\right\}
\leadto
h_{k,k-1} =0,
$$
from which the result follows.
\end{proof}
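The forward implication can be checked numerically (a NumPy sketch; an unreduced decomposition is again constructed by hand, with subdiagonal entries set to $1$ so that $r_{ii}=1$):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
# An unreduced Hessenberg decomposition A = Q H Q^T, built by hand:
H = np.triu(rng.standard_normal((n, n)), -1)
H[np.arange(1, n), np.arange(n - 1)] = 1.0        # nonzero subdiagonal => unreduced
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random orthogonal matrix
A = Q @ H @ Q.T
q1 = Q[:, 0]

# Krylov matrix K(A, q1, n) = [q1, A q1, ..., A^{n-1} q1].
K = np.empty((n, n))
K[:, 0] = q1
for k in range(1, n):
    K[:, k] = A @ K[:, k - 1]

R = Q.T @ K
# R is upper triangular, with r_ii = h_21 h_32 ... h_{i,i-1} = 1 here.
assert np.abs(np.tril(R, -1)).max() < 1e-8 * np.abs(R).max()
assert np.allclose(np.diag(R), 1.0)
```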
\newpage
\chapter{Tridiagonal Decomposition: Hessenberg in Symmetric Matrices}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Tridiagonal Decomposition}
We first give the formal definition of a tridiagonal matrix.
\begin{definition}[Tridiagonal Matrix\index{Tridiagonal matrix}]\label{definition:tridiagonal-hessenbert}
A tridiagonal matrix is a square matrix where all the entries below the lower sub-diagonal and the entries above the upper sub-diagonal are zeros. I.e., the tridiagonal matrix is a \textit{band matrix}.
The definition of the tridiagonal matrix can also be extended to rectangular matrices, and the form can be implied from the context.
In matrix language: for a matrix $\bT\in \real^{n\times n}$ with entry $(i,j)$ denoted by $t_{ij}$ for all $i,j\in \{1,2,\ldots, n\}$, $\bT$ is known as a tridiagonal matrix if $t_{ij}=0$ whenever $i\geq j+2$ or $i \leq j-2$.
Let $i$ denote the smallest positive integer for which $t_{i+1, i}=0$, where $i\in \{1,2,\ldots, n-1\}$; then $\bT$ is \textit{unreduced} if $i=n-1$.
\end{definition}
Taking a $5\times 5$ matrix as an example, the entries below the lower sub-diagonal and above the upper sub-diagonal are zero in a tridiagonal matrix:
$$
\begin{sbmatrix}{possibly\,\, unreduced}
\boxtimes & \boxtimes & 0 & 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & 0 & 0\\
0 & \boxtimes & \boxtimes & \boxtimes & 0\\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
\qquad
\begin{sbmatrix}{reduced}
\boxtimes & \boxtimes & 0 & 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & 0 & 0\\
0 & \boxtimes & \boxtimes & \boxtimes & 0\\
0 & 0 & \textcolor{blue}{0} & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}.
$$
Obviously, a tridiagonal matrix is a special case of an upper Hessenberg matrix.
Then we have the following tridiagonal decomposition:
\begin{theoremHigh}[Tridiagonal Decomposition]\label{theorem:tridiagonal-decom}
Every $n\times n$ symmetric matrix $\bA$ can be factored as
$$
\bA = \bQ\bT\bQ^\top \qquad \text{or} \qquad \bT = \bQ^\top \bA\bQ,
$$
where $\bT$ is a \textit{symmetric} tridiagonal matrix, and $\bQ$ is an orthogonal matrix.
\end{theoremHigh}
The existence of the tridiagonal decomposition follows directly by applying the Hessenberg decomposition to the symmetric matrix $\bA$: since $\bQ^\top\bA\bQ$ is both symmetric and upper Hessenberg, it is tridiagonal.
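This can be observed numerically with SciPy's `scipy.linalg.hessenberg` (assuming SciPy is available; this routine wraps the LAPACK Hessenberg reduction): applied to a symmetric matrix, the resulting Hessenberg matrix is, up to rounding, a symmetric tridiagonal matrix.

```python
import numpy as np
from scipy.linalg import hessenberg   # LAPACK-backed Hessenberg reduction

rng = np.random.default_rng(6)
n = 6
S = rng.standard_normal((n, n))
A = S + S.T                           # a symmetric test matrix
T, Q = hessenberg(A, calc_q=True)     # A = Q T Q^T

assert np.allclose(np.tril(T, -2), 0.0, atol=1e-10)   # upper Hessenberg
assert np.allclose(T, T.T, atol=1e-10)                # symmetry kills the upper part too
assert np.allclose(Q @ T @ Q.T, A, atol=1e-10)
```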
\section{Computing the Tridiagonal Decomposition}\label{section:compute-tridiagonal}
Since, by symmetry, zeros are now introduced in rows as well as columns, additional
arithmetic can be avoided by exploiting these extra zeros.
An example of a $5\times 5$ matrix is shown as follows where $\boxtimes$ or a letter represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_1\times}{\rightarrow}
&\begin{sbmatrix}{\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{a} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_1^\top}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\boxtimes & \bm{a} & \bm{0} & \bm{0} & \bm{0} \\
a & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\stackrel{\bH_2\times}{\rightarrow}
&\begin{sbmatrix}{\bH_2\bH_1\bA\bH_1^\top}
\boxtimes & a & 0 & 0 & 0 \\
a & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{b} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_2^\top}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bH_1^\top\bH_2^\top}
\boxtimes & a & 0 & 0 & 0 \\
a & \boxtimes & \bm{b} & \bm{0} & \bm{0} \\
0 & b & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\stackrel{\bH_3\times}{\rightarrow}
&\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bH_1^\top\bH_2^\top}
\boxtimes & a & 0 & 0 & 0 \\
a & \boxtimes & b & 0 & 0 \\
0 & b & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{c} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_3^\top}{\rightarrow}
\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bH_1^\top\bH_2^\top\bH_3^\top}
\boxtimes & a & 0 & 0 & 0 \\
a & \boxtimes & b & 0 & 0 \\
0 & b & \boxtimes & \bm{c} & \bm{0}\\
0 & 0 & c & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix},
\end{aligned}
$$
\begin{algorithm}[h]
\caption{Tridiagonal Decomposition via the Householder Reflector}
\label{alg:tridiagonal-decomposition-householder}
\begin{algorithmic}[1]
\Require matrix $\bA$ with size $n\times n $;
\State Initially set $\bT = \bA$;
\For{$i=1$ to $n-2$}
\State $\ba = \bT_{i+1:n,i}$, i.e., first column of $\bT_{i+1:n,i:n}\in \real^{(n-i)\times(n-i+1)}$;
\State $r = ||\ba||$; \Comment{$2(n-i)$ flops}
\State $\bu_i = \ba-r\be_1$; \Comment{1 flop}
\State $\bu_i = \bu_i / ||\bu_i||$; \Comment{$3(n-i)$ flops}
\State $\bT_{i+1,i}=r$, $\bT_{i+2:n,i}=\bzero$, i.e., set the value of first column of $\bT_{\textcolor{blue}{i+1}:n,i:n}$; \Comment{0 flops}
\State $\bT_{i,i+1}=r$, $\bT_{i,i+2:n}=\bzero$,\gap \, i.e., set the value of first row of $\bT_{i:n,\textcolor{blue}{i+1}:n}$; \Comment{0 flops}
\State Left and Right: let $\bZ = \bT_{i+1:n,i+1:n} \in \real^{(n-i)\times (n-i)}$,
$$
\begin{aligned}
\bZ &= \bH_i \bZ \bH_i \qquad \text{($\bH_i$ is the $i$-th Householder reflector)}\\
&= (\bI-2\bu_i\bu_i^\top)\bZ (\bI-2\bu_i\bu_i^\top)\\
&= \bZ - 2\bZ\bu_i\bu_i^\top - 2\bu_i\bu_i^\top\bZ+ 4\bu_i\bu_i^\top\bZ \bu_i\bu_i^\top \\
\end{aligned}
$$
\Comment{$4(n-i)^2$ flops}
\EndFor
\State Output $\bT$ as the tridiagonal matrix;
\State Get $\bQ=\bH_1\bH_2\ldots\bH_{n-2}$, where $\bH_i$'s are Householder reflectors.
\State Initially set $\bQ = \bH_1$;
\For{$i=1$ to $n-3$}
\State Compute $\bQ$:
$$\begin{aligned}
\bQ_{1:n,i+2:n} &= \bQ_{1:n,i+2:n}\bH_{i+1}\\
&= \bQ_{1:n,i+2:n}(\bI - 2\bu_{i+1}\bu_{i+1}^\top)\\
&=\bQ_{1:n,i+2:n}-\bQ_{1:n,i+2:n}2\bu_{i+1}\bu_{i+1}^\top \in \real^{n\times (n-i-1)}\\
\end{aligned}$$
\Comment{$4n(n-i-1) - n$ flops}
\EndFor
\State Output $\bQ$ as the orthogonal matrix.
\end{algorithmic}
\end{algorithm}
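To make the procedure concrete, here is a minimal NumPy sketch of the tridiagonalization above (an illustrative translation, not a tuned LAPACK-style routine; the function and variable names are our own). It uses the symmetric rank-two update $\bZ - (\bu\bx^\top + \bx\bu^\top)$ on the trailing block:

```python
import numpy as np

def tridiagonalize(A):
    """Householder tridiagonalization Q T Q^T = A for symmetric A.

    A readable sketch of the algorithm above: step i zeros column i
    below the subdiagonal, and the trailing block Z is updated with
    the symmetric rank-two formula Z - (u x^T + x u^T).
    """
    T = np.array(A, dtype=float)
    n = T.shape[0]
    Q = np.eye(n)
    for i in range(n - 2):
        u = T[i + 1:, i].copy()
        r = np.linalg.norm(u)
        u[0] -= r                      # u = a - r e_1
        nu = np.linalg.norm(u)
        if nu < 1e-14:                 # column already reduced; skip
            continue
        u /= nu
        T[i + 1, i] = r                # set first column/row of the block
        T[i + 2:, i] = 0.0
        T[i, i + 1] = r
        T[i, i + 2:] = 0.0
        Z = T[i + 1:, i + 1:]          # view: updates write back into T
        y = Z @ u                      # y = Z u (Z symmetric)
        beta = y @ u                   # beta = u^T Z u
        x = 2.0 * (y - beta * u)       # symmetric rank-two combination
        Z -= np.outer(u, x) + np.outer(x, u)
        Q[:, i + 1:] -= 2.0 * np.outer(Q[:, i + 1:] @ u, u)  # Q <- Q H_i
    return Q, T
```

On a random symmetric matrix, `Q @ T @ Q.T` reproduces `A`, `Q` is orthogonal, and `T` has zeros outside the three central diagonals.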
\begin{theorem}[Algorithm Complexity: Tridiagonalization via Householder]\label{theorem:tridiagonal-householder}
Algorithm~\ref{alg:tridiagonal-decomposition-householder} requires $\sim \frac{4}{3}n^3$ flops to compute a tridiagonal decomposition of an $n\times n$ symmetric matrix. Further, if $\bQ$ is needed explicitly, an additional $\sim 2n^3$ flops are required.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:tridiagonal-householder}]
The complexity of step 9 deserves close scrutiny. It follows from the left and right Householder updates. Since we have set the first column of $\bT_{\textcolor{blue}{i+1}:n,i:n}$ and the first row of $\bT_{i:n,\textcolor{blue}{i+1}:n}$ explicitly, the left and right Householder updates take the following form (where the \textcolor{red}{red}-colored text marks the difference from the corresponding updates in the Hessenberg decomposition of non-symmetric matrices):
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10,frametitle={Where Does the Step 9 Come From: A Splitting Way}]
Left: update columns $2$ to $n$ of $\bT_{\textcolor{blue}{i+1}:n,i:n}$, i.e., working on $\bT_{i+1:n,i+1:n}\in \real^{(n-i)\times (n-i)}$:
$$
\begin{aligned}
\bT_{i+1:n,i+1:n} &= (\bI-2\bu_i\bu_i^\top)\bT_{i+1:n,i+1:n} \\
&= \bT_{i+1:n,i+1:n} - 2\bu_i\bu_i^\top\bT_{i+1:n,i+1:n} \in \real^{(n-i)\times(n-i)}\\
&\text{\qquad\qquad($4(n-i)^2$ flops);}
\end{aligned}
$$
Right: update rows $2$ to $n$ of $\bT_{i:n,\textcolor{blue}{i+1}:n}$, i.e., working on $\bT_{i+1:n,i+1:n}\in \real^{(n-i)\times (n-i)}$:
$$
\begin{aligned}
\bT_{\textcolor{red}{i+1}:n,i+1:n} &= \bT_{\textcolor{red}{i+1}:n,i+1:n}(\bI-2\bu_i\bu_i^\top) \\
&=\bT_{\textcolor{red}{i+1}:n,i+1:n} - \bT_{\textcolor{red}{i+1}:n,i+1:n} 2\bu_i\bu_i^\top\in \real^{(n-i)\times(n-i)}\\
&\text{\qquad\,\,\,\,\,($4(n-i)^2$ flops);}
\end{aligned}
$$
\end{mdframed}
Since the two updates now operate on the same submatrix $\bT_{i+1:n,i+1:n}$, we can combine them, which results in step 9 of the algorithm. Let $\bZ = \bT_{i+1:n,i+1:n} \in \real^{(n-i)\times (n-i)}$. A closer look at how step 9 reduces the complexity goes as follows:
$$
\begin{aligned}
\bZ &\leftarrow \bH_i \bZ \bH_i \qquad \text{($\bH_i$ is the $i$-th Householder reflector)}\\
&= (\bI-2\bu_i\bu_i^\top)\bZ (\bI-2\bu_i\bu_i^\top)\\
&= (\bZ - 2\bu_i\underbrace{\bu_i^\top \bZ}_{\by^\top })(\bI-2\bu_i\bu_i^\top) \\
&= (\bZ - 2\bu_i\by^\top )(\bI-2\bu_i\bu_i^\top) & \text{(let $\by^\top = \bu_i^\top \bZ$, i.e., $\by = \bZ\bu_i$ by symmetry)}\\
&= \bZ - 2\bu_i\by^\top - 2\bZ\bu_i\bu_i^\top + 4\beta \bu_i\bu_i^\top & \text{(let $\beta = \by^\top\bu_i$)}\\
&= \bZ - (2\bu_i\by^\top-2\beta \bu_i\bu_i^\top) - (2\underbrace{\bZ\bu_i}_{\by}\bu_i^\top -2\beta \bu_i\bu_i^\top ) \\
&= \bZ - (2\bu_i\by^\top-2\beta \bu_i\bu_i^\top) - (2\by\bu_i^\top -2\beta \bu_i\bu_i^\top ) &\text{($\by = \bZ\bu_i$)}\\
&= \bZ - \{\bu_i\underbrace{(2\by^\top- 2\beta \bu_i^\top)}_{\bx^\top } +\underbrace{ (2\by^\top-2\beta \bu_i^\top)^\top}_{\bx } \bu_i^\top \}\\
&= \bZ - \{\bu_i\bx^\top +( \bu_i\bx^\top)^\top \} &(\text{let $\bx=(2\by- 2\beta \bu_i)$})
\end{aligned}
$$
The costs come from
\begin{itemize}
\item $\by = \bZ\bu_i$: $[2(n-i)-1](n-i) = 2(n-i)^2 - (n-i)$ flops from Lemma~\ref{lemma:matrix-multi-complexity} (on the complexity of a matrix multiplication);
\item $\beta = \by^\top\bu_i$: $2(n-i)-1$ flops;
\item $\bx=(2\by- 2\beta \bu_i)$: $(n-i)+1+(n-i)=2(n-i)+1$ flops;
\item $\bu_i\bx^\top$: $(n-i)^2$ flops;
\item $( \bu_i\bx^\top)^\top $: 0 flops;
\item $\bu_i\bx^\top +( \bu_i\bx^\top)^\top$: $1+2+\ldots+(n-i)= \frac{(n-i)^2+(n-i)}{2}$ additions since it results in a symmetric matrix;
\item $\underbrace{\bZ}_{\text{symmetric}} - \underbrace{\{\bu_i\bx^\top +( \bu_i\bx^\top)^\top \}}_{\text{symmetric}} $: $1+2+\ldots+(n-i)= \frac{(n-i)^2+(n-i)}{2}$ subtractions since both matrices are symmetric;
\end{itemize}
If we keep only the leading terms of steps 4 to 9, the total complexity of loop $i$ is given by
$f(i) = 4(n-i)^2$ flops. By summation, we find the final cost
$$
\text{cost} = f(1)+f(2)+\ldots +f(n-2)\sim \frac{4}{3}n^3 \text{ flops}.
$$
This completes the proof.
\end{proof}
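As a sanity check, the final rank-two form in the derivation above can be verified numerically against the explicit two-sided reflection; a minimal NumPy sketch with hypothetical data:

```python
import numpy as np

# Hypothetical data: a symmetric block Z and a unit Householder vector u.
rng = np.random.default_rng(1)
k = 6
M = rng.standard_normal((k, k))
Z = M + M.T
u = rng.standard_normal(k)
u /= np.linalg.norm(u)
H = np.eye(k) - 2 * np.outer(u, u)   # Householder reflector

y = Z @ u                            # y = Z u; since Z = Z^T, u^T Z = y^T
beta = y @ u                         # beta = y^T u = u^T Z u
x = 2 * (y - beta * u)               # rank-two combination vector
update = Z - (np.outer(u, x) + np.outer(x, u))

assert np.allclose(update, H @ Z @ H)  # identical to the two-sided reflection
```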
\section{Properties of the Tridiagonal Decomposition}\label{section:tridiagonal-decomposition}
Similar to the Hessenberg decomposition, the tridiagonal decomposition is not unique. Most importantly, however, if we restrict the elements in the lower sub-diagonal of the tridiagonal matrix $\bT$ to be positive (when possible), then the tridiagonal decomposition $\bA=\bQ\bT\bQ^\top$ is uniquely determined by $\bA$ and the first column of $\bQ$.
\begin{theorem}[Implicit Q Theorem for Tridiagonal\index{Implicit Q theorem}]\label{theorem:implicit-q-tridiagonal}
Suppose two tridiagonal decompositions of the symmetric matrix $\bA\in \real^{n\times n}$ are given by $\bA=\bU\bT\bU^\top=\bV\bG\bV^\top$, where $\bU=[\bu_1, \bu_2, \ldots, \bu_n]$ and $\bV=[\bv_1, \bv_2, \ldots, \bv_n]$ are the column partitions of $\bU$ and $\bV$. Suppose further that $k$ is the smallest positive integer for which $t_{k+1,k}=0$, where $t_{ij}$ is the entry $(i,j)$ of $\bT$. Then
\begin{itemize}
\item if $\bu_1=\bv_1$, then $\bu_i = \pm \bv_i$ and $|t_{i,i-1}| = |g_{i,i-1}|$ for $i\in \{2,3,\ldots,k\}$.
\item When $k=n-1$, the tridiagonal matrix $\bT$ is known as \textit{unreduced}; if $k<n-1$, then $g_{k+1,k}=0$.
\end{itemize}
\end{theorem}
From the above theorem, we observe that if we restrict the elements in the lower sub-diagonal of the tridiagonal matrix $\bT$ to be positive (if possible), i.e., \textit{unreduced}, then the tridiagonal decomposition $\bA=\bQ\bT\bQ^\top$ is uniquely determined by $\bA$ and the first column of $\bQ$. This again is similar to what we have claimed on the uniqueness of the QR decomposition (Corollary~\ref{corollary:unique-qr}, p.~\pageref{corollary:unique-qr}).
Similarly, a reduced tridiagonal decomposition can be obtained from the implication of the Krylov matrix (Definition~\ref{definition:krylov-matrix}, p.~\pageref{definition:krylov-matrix}).
\begin{theorem}[Reduced Tridiagonal]\label{theorem:implicit-q-tridiagonal-v2}
Suppose there exists an orthogonal matrix $\bQ$ such that $\bA\in \real^{n\times n}$ can be factored as $\bA = \bQ\bT\bQ^\top$. Then $\bQ^\top\bA\bQ=\bT$ is an unreduced tridiagonal matrix if and only if $\bR=\bQ^\top \bK(\bA, \bq_1, n)$ is nonsingular and upper triangular where $\bq_1$ is the first column of $\bQ$.
If $\bR$ is singular and $k$ is the smallest index such that $r_{kk}=0$, then $k$ is also the smallest index such that $t_{k,k-1}=0$.
\end{theorem}
The proofs of the two theorems above are essentially the same as those of Theorem~\ref{theorem:implicit-q-hessenberg} and Theorem~\ref{theorem:implicit-q-hessenberg-v2} (p.~\pageref{theorem:implicit-q-hessenberg} and p.~\pageref{theorem:implicit-q-hessenberg-v2}).
\newpage
\chapter{Bidiagonal Decomposition}\label{section:bidiagonal-decompo}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Bidiagonal Decomposition}
We first give a rigorous definition of the upper bidiagonal matrix:
\begin{definition}[Upper Bidiagonal Matrix\index{Bidiagonal matrix}]\label{definition:bidiagonal-matrix}
An upper bidiagonal matrix is a square banded matrix with non-zero entries only along the \textit{main diagonal} and the \textit{upper subdiagonal} (i.e., the diagonal above the main diagonal). That is, at most two diagonals of the matrix contain nonzero entries.
Analogously, when the nonzero entries lie on the main diagonal and the diagonal below it, the matrix is called lower bidiagonal.
The definition of bidiagonal matrices extends to rectangular matrices as well, and the intended form can be inferred from the context.
\end{definition}
Take a $7\times 5$ matrix as an example: in an upper bidiagonal matrix, the entries below the main diagonal and the entries above the upper subdiagonal are all zero:
$$
\begin{bmatrix}
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes & 0 & 0\\
0 & 0 & \boxtimes & \boxtimes & 0\\
0 & 0 & 0 & \boxtimes & \boxtimes\\
0 & 0 & 0 & 0 & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0
\end{bmatrix}.
$$
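The pattern above is easy to materialize; a small NumPy sketch that builds a $7\times 5$ upper bidiagonal matrix from its two diagonals (the numeric values are illustrative only):

```python
import numpy as np

m, n = 7, 5
d = np.array([1., 2., 3., 4., 5.])        # main-diagonal entries
f = np.array([6., 7., 8., 9.])            # upper-subdiagonal entries
B = np.zeros((m, n))
B[np.arange(n), np.arange(n)] = d         # entries (i, i)
B[np.arange(n - 1), np.arange(1, n)] = f  # entries (i, i+1)

# Only the two diagonals are populated:
assert np.count_nonzero(B) == len(d) + len(f)
```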
Then we have the following bidiagonal decomposition:
\begin{theoremHigh}[Bidiagonal Decomposition]\label{theorem:Golub-Kahan-Bidiagonalization-decom}
Every $m\times n$ matrix $\bA$ can be factored as
$$
\bA = \bU\bB\bV^\top \qquad \text{or} \qquad \bB= \bU^\top \bA\bV,
$$
where $\bB$ is an upper bidiagonal matrix, and $\bU, \bV$ are orthogonal matrices.
\end{theoremHigh}
We will see that the bidiagonalization resembles the form of a singular value decomposition; the only difference is that the bidiagonal matrix $\bB$ has nonzero entries on the upper sub-diagonal. For this reason, the bidiagonalization plays an important role in the calculation of the singular value decomposition.
\section{Existence of the Bidiagonal Decomposition: Golub-Kahan Bidiagonalization}
Previously, we utilized a Householder reflector to triangularize matrices and introduce zeros below the diagonal to obtain the QR decomposition, and to introduce zeros below the sub-diagonal to obtain the Hessenberg decomposition. A similar approach can be employed to find the bidiagonal decomposition. To see this, it helps to recall the ideas behind the Householder reflector in Definition~\ref{definition:householder-reflector} (p.~\pageref{definition:householder-reflector}).\index{Golub-Kahan}
\subsubsection*{\textbf{First Step 1.1: Introduce Zeros for the First Column}}
Let $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ be the column partitions of $\bA$, and each $\ba_i \in \real^{m}$.
We can construct the Householder reflector as follows:
$$
r_1 = ||\ba_1||, \qquad \bu_1 = \frac{\ba_1 - r_1 \be_1}{||\ba_1 - r_1 \be_1||} ,\qquad \text{and}\qquad \bH_1 = \bI - 2\bu_1\bu_1^\top \in \textcolor{blue}{\real^{m\times m}},
$$
where $\be_1$ here is the first standard basis vector of $\textcolor{blue}{\real^{m}}$, i.e., $\be_1=[1;0;0;\ldots;0]\in \textcolor{blue}{\real^{m}}$.
In this case, $\bH_1\bA$ will introduce zeros in the first column of $\bA$ below entry (1,1), i.e., reflect $\ba_1$ to $r_1\be_1$.
From the definition of the Householder reflector, we can easily verify that $\bH_1$ is both symmetric and orthogonal.
An example of a $7\times 5$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_1\times}{\rightarrow}
&\begin{sbmatrix}{\bH_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}.
\end{aligned}
$$
Up to this point, this is exactly what we did for the QR decomposition via the Householder reflector in Section~\ref{section:qr-via-householder} (p.~\pageref{section:qr-via-householder}).
Going further, introducing zeros above the upper sub-diagonal of $\bH_1\bA$ is equivalent to introducing zeros below the lower sub-diagonal of $(\bH_1\bA)^\top$.
\subsubsection*{\textbf{First Step 1.2: Introduce Zeros for the First Row}}
Now suppose we are looking at the \textit{transpose} of $\bH_1\bA$, that is $(\bH_1\bA)^\top =\bA^\top\bH_1^\top \in \real^{n\times m}$ and the column partition is given by $\bA^\top\bH_1^\top = [\bz_1, \bz_2, \ldots, \bz_m]$ where each $\bz_i \in \real^n$.
Let $\bar{\bz}_1, \bar{\bz}_2, \ldots, \bar{\bz}_m \in \real^{n-1}$ be the vectors obtained by removing the first component of each $\bz_i$.
Let
$$
r_1 = ||\bar{\bz}_1||, \qquad \bv_1 = \frac{\bar{\bz}_1 - r_1 \be_1}{||\bar{\bz}_1 - r_1 \be_1||}, \qquad \text{and}\qquad \widetilde{\bL}_1 = \bI - 2\bv_1\bv_1^\top \in\textcolor{blue}{\real^{(n-1)\times (n-1)}},
$$
where $\be_1$ now is the first standard basis vector of $\textcolor{blue}{\real^{n-1}}$, i.e., $\be_1=[1;0;0;\ldots;0]\in \textcolor{blue}{\real^{n-1}}$. To introduce zeros below the sub-diagonal while operating on the submatrix $(\bA^\top\bH_1^\top)_{2:n,1:m}$, we embed the Householder reflector as
$$
\bL_1 = \begin{bmatrix}
1 &\bzero \\
\bzero & \widetilde{\bL}_1
\end{bmatrix},
$$
in which case $\bL_1(\bA^\top\bH_1^\top)$ will introduce zeros in the first column of $(\bA^\top\bH_1^\top)$ below entry (2,1), i.e., reflect $\bar{\bz}_1$ to $r_1\be_1$. The first row of $(\bA^\top\bH_1^\top)$ is not affected at all by Remark~\ref{remark:left-right-identity} (p.~\pageref{remark:left-right-identity}), so the zeros introduced in Step 1.1 are kept. Moreover, from the definition of the Householder reflector, we can easily verify that both $\bL_1$ and $\widetilde{\bL}_1$ are symmetric and orthogonal.
Coming back to the original \textit{untransposed} matrix $\bH_1\bA$, multiplying on the right by $\bL_1^\top$ introduces zeros in the first row to the right of entry (1,2).
Again, following the example above, a $7\times 5$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bA^\top\bH_1^\top}
\boxtimes & 0 & 0 & 0 & 0 & 0 & 0 \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\end{sbmatrix}
\stackrel{\bL_1\times}{\rightarrow}
\begin{sbmatrix}{\bL_1 \bA^\top\bH_1^\top}
\boxtimes & 0 & 0 & 0 & 0 & 0 & 0 \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}
\stackrel{(\cdot)^\top}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bL_1^\top }
\boxtimes & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{0} \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}.
\end{aligned}
$$
In short, $\bH_1\bA\bL_1^\top$ finishes the first step to introduce zeros for the first column and the first row of $\bA$.
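The first step above can be sketched numerically; a minimal NumPy illustration (the helper `reflector` and the random matrix are our own):

```python
import numpy as np

def reflector(a):
    """Householder reflector H with H @ a = ||a|| e1 (generic a assumed)."""
    r = np.linalg.norm(a)
    u = a.astype(float).copy()
    u[0] -= r
    u /= np.linalg.norm(u)
    return np.eye(len(a)) - 2 * np.outer(u, u)

rng = np.random.default_rng(2)
A = rng.standard_normal((7, 5))

H1 = reflector(A[:, 0])           # 7x7 left reflector
B = H1 @ A                        # zeros in column 1 below entry (1,1)
L1 = np.eye(5)                    # embed a 4x4 reflector below a leading 1
L1[1:, 1:] = reflector(B[0, 1:])
B = B @ L1.T                      # zeros in row 1 to the right of entry (1,2)

assert np.allclose(B[1:, 0], 0)   # first column reduced, untouched by L1
assert np.allclose(B[0, 2:], 0)   # first row reduced
```

Because $\bL_1$ carries a leading $1$, multiplying by $\bL_1^\top$ on the right leaves the first column (and its new zeros) intact, exactly as argued above.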
\subsubsection*{\textbf{Second Step 2.1: Introduce Zeros for the Second Column}}
Let $\bB = \bH_1\bA\bL_1^\top$, where the entries in the first column below entry (1,1) and the entries in the first row to the right of entry (1,2) are all zeros.
The goal now is to introduce zeros in the second column below entry (2,2).
Let $\bB_2 = \bB_{2:m,2:n}=[\bb_1, \bb_2, \ldots, \bb_{n-1}] \in \real^{(m-1)\times (n-1)}$.
We can again construct a Householder reflector
$$
r_1 = ||\bb_1||,\qquad \bu_2 = \frac{\bb_1 - r_1 \be_1}{||\bb_1 - r_1 \be_1||}, \qquad \text{and}\qquad \widetilde{\bH}_2 = \bI - 2\bu_2\bu_2^\top\in \textcolor{blue}{\real^{(m-1)\times (m-1)}},
$$
where $\be_1$ now is the first standard basis vector of $\textcolor{blue}{\real^{m-1}}$, i.e., $\be_1=[1;0;0;\ldots;0]\in \textcolor{blue}{\real^{m-1}}$. To introduce zeros below the main diagonal while operating on the submatrix $\bB_{2:m,2:n}$, we embed the Householder reflector as
$$
\bH_2 = \begin{bmatrix}
1 &\bzero \\
\bzero & \widetilde{\bH}_2
\end{bmatrix},
$$
in which case we can see that $\bH_2(\bH_1\bA\bL_1^\top)$ will not change the first row of $(\bH_1\bA\bL_1^\top)$ by Remark~\ref{remark:left-right-identity} (p.~\pageref{remark:left-right-identity}); and since a Householder reflector maps the zero vector to itself, the zeros in the first column are kept as well.
Following the above example, a $7\times 5$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bH_1\bA\bL_1^\top }
\boxtimes & \boxtimes & 0 & 0 & 0 \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\end{sbmatrix}
\stackrel{\bH_2\times }{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bL_1^\top }
\boxtimes & \boxtimes & 0& 0 & 0 \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}.
\end{aligned}
$$
\subsubsection*{\textbf{Second Step 2.2: Introduce Zeros for the Second Row}}
As in Step 1.2, we now look at the \textit{transpose} of $\bH_2\bH_1\bA\bL_1^\top$, that is, $(\bH_2\bH_1\bA\bL_1^\top)^\top =\bL_1\bA^\top\bH_1^\top\bH_2^\top \in \real^{n\times m}$, whose column partition is given by $\bL_1\bA^\top\bH_1^\top\bH_2^\top = [\bx_1, \bx_2, \ldots, \bx_m]$ with each $\bx_i \in \real^n$.
Let $\bar{\bx}_1, \bar{\bx}_2, \ldots, \bar{\bx}_m \in \real^{n-2}$ be the vectors obtained by removing the first two components of each $\bx_i$.
Construct the Householder reflector as follows:
$$
r_1 = ||\bar{\bx}_1||,\qquad \bv_2 = \frac{\bar{\bx}_1 - r_1 \be_1}{||\bar{\bx}_1 - r_1 \be_1||}, \qquad \text{and}\qquad \widetilde{\bL}_2 = \bI - 2\bv_2\bv_2^\top \in \textcolor{blue}{\real^{(n-2)\times (n-2)}},
$$
where $\be_1$ now is the first standard basis vector of $\textcolor{blue}{\real^{n-2}}$, i.e., $\be_1=[1;0;0;\ldots;0]\in \textcolor{blue}{\real^{n-2}}$. To introduce zeros below the sub-diagonal while operating on the submatrix $(\bL_1\bA^\top\bH_1^\top\bH_2^\top)_{3:n,1:m}$, we embed the Householder reflector as
$$
\bL_2 = \begin{bmatrix}
\bI_2 &\bzero \\
\bzero & \widetilde{\bL}_2
\end{bmatrix},
$$
where $\bI_2$ is a $2\times 2$ identity matrix.
In this case, $\bL_2(\bL_1\bA^\top\bH_1^\top\bH_2^\top)$ will introduce zeros in the second column of $(\bL_1\bA^\top\bH_1^\top\bH_2^\top)$ below entry (3,2). The first two rows of $(\bL_1\bA^\top\bH_1^\top\bH_2^\top)$ are not affected at all by Remark~\ref{remark:left-right-identity} (p.~\pageref{remark:left-right-identity}). \textbf{Further, its first column is kept unchanged as well}. And from the definition of the Householder reflector, we can easily verify that both $\bL_2$ and $\widetilde{\bL}_2$ are symmetric and orthogonal.
Coming back to the original \textit{untransposed} matrix $\bH_2\bH_1\bA\bL_1^\top$, multiplying on the right by $\bL_2^\top$ introduces zeros in the second row to the right of entry (2,3). Following the above example, a $7\times 5$ matrix is shown as follows, where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bL_1\bA^\top\bH_1^\top\bH_2^\top }
\boxtimes & 0 & 0 & 0 & 0 & 0 & 0 \\
\boxtimes & \boxtimes & 0 & 0 & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\end{sbmatrix}
\stackrel{\bL_2\times }{\rightarrow}
\begin{sbmatrix}{\bL_2\bL_1\bA^\top\bH_1^\top\bH_2^\top }
\boxtimes & 0 & 0 & 0 & 0 & 0 & 0 \\
\boxtimes & \boxtimes & 0 & 0 & 0 & 0 & 0\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}
\stackrel{(\cdot)^\top }{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bL_1^\top\bL_2^\top}
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \bm{\boxtimes} & \bm{0} & \bm{0} \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}.\\
\end{aligned}
$$
In short, $\bH_2(\bH_1\bA\bL_1^\top)\bL_2^\top$ finishes the second step, introducing zeros in the second column and the second row of $\bA$.
The same process can go on; note that there are $n$ such $\bH_i$ Householder reflectors applied on the left and $n-2$ such $\bL_i$ Householder reflectors applied on the right (assuming $m>n$ for simplicity). This interleaved Householder factorization is known as the \textit{Golub-Kahan Bidiagonalization} \citep{golub1965calculating}. We will finally bidiagonalize
$$
\bB = \bH_{n} \bH_{n-1}\ldots\bH_1 \bA\bL_1^\top\bL_2^\top\ldots\bL_{n-2}^\top.
$$
And since the $\bH_i$'s and $\bL_i$'s are symmetric and orthogonal, we have
$$
\bB =\bH_{n} \bH_{n-1}\ldots\bH_1 \bA\bL_1\bL_2\ldots\bL_{n-2}.
$$
A full example of a $7\times 5$ matrix is shown as follows where again $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10,frametitle={A Complete Example of Golub-Kahan Bidiagonalization}]
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_1\times}{\rightarrow}
&\begin{sbmatrix}{\bH_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}
\stackrel{\times\bL_1^\top}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bL_1^\top}
\boxtimes & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{0} \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}\\
\stackrel{\bH_2\times}{\rightarrow}
&\begin{sbmatrix}{\bH_2\bH_1\bA\bL_1^\top}
\boxtimes & \boxtimes & 0 & 0 & 0 \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bL_2^\top}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bL_1^\top\bL_2^\top}
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \bm{\boxtimes} & \bm{0} & \bm{0} \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\stackrel{\bH_3\times}{\rightarrow}
&\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bL_1^\top\bL_2^\top}
\boxtimes & \boxtimes & 0 & 0 & 0 \\
0 & \boxtimes & \boxtimes & 0 & 0 \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bL_3^\top}{\rightarrow}
\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bL_1^\top\bL_2^\top\bL_3^\top}
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes & 0 & 0\\
0 & 0 & \boxtimes & \bm{\boxtimes} & \bm{0}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\stackrel{\bH_4\times}{\rightarrow}
&\begin{sbmatrix}{\bH_4\bH_3\bH_2\bH_1\bA\bL_1^\top\bL_2^\top\bL_3^\top}
\boxtimes & \boxtimes & 0 & 0 & 0 \\
0 & \boxtimes & \boxtimes & 0 & 0 \\
0 & 0 & \boxtimes& \boxtimes & 0\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{0} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{0} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{0} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bH_5\times}{\rightarrow}
\begin{sbmatrix}{\bH_5\bH_4\bH_3\bH_2\bH_1\bA\bL_1^\top\bL_2^\top\bL_3^\top}
\boxtimes & \boxtimes & 0 & 0 & 0 \\
0 & \boxtimes & \boxtimes & 0 & 0 \\
0 & 0 & \boxtimes& \boxtimes & 0\\
0 & 0 & 0 & \boxtimes & \boxtimes\\
0 & 0 & 0 & 0 & \bm{\boxtimes}\\
0 & 0 & 0 & 0 & \bm{0}\\
0 & 0 & 0 & 0 & \bm{0}
\end{sbmatrix}.
\end{aligned}
$$
\end{mdframed}
We have presented the procedure so that each right Householder reflector $\bL_i$ follows its corresponding left one $\bH_i$. A tempting alternative is to apply all the left reflectors first and let all the right ones follow; that is, to treat the bidiagonal decomposition as a QR decomposition followed by a Hessenberg-style reduction. However, this is problematic: the right Householder reflector $\bL_1$ would destroy the zeros introduced by the left ones. Therefore, the left and right reflectors must be applied in an interleaved manner so that the zeros, once introduced, are preserved.
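The fill-in problem can be seen directly: applying a right reflector to an already-triangularized matrix destroys the zeros below the diagonal. A small NumPy sketch (the random matrix is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
R = np.linalg.qr(A)[1]                 # all left reflectors first: upper triangular
assert np.allclose(np.tril(R, -1), 0)

# First right reflector L1 = diag(1, L~), acting on columns 2..5 of R:
z = R[0, 1:].copy()
u = z - np.linalg.norm(z) * np.eye(4)[0]
u /= np.linalg.norm(u)
L1 = np.eye(5)
L1[1:, 1:] -= 2 * np.outer(u, u)
RL = R @ L1.T

assert np.allclose(RL[0, 2:], 0)            # the desired zeros in row 1 appear,
assert not np.allclose(np.tril(RL, -1), 0)  # but the triangular zeros are destroyed
```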
\section{Computing the Bidiagonal Decomposition: Golub-Kahan Bidiagonalization}
\begin{algorithm}[H]
\caption{Golub-Kahan Bidiagonal Decomposition}
\label{alg:bidiagonal-decomposition-householder}
\begin{algorithmic}[1]
\Require matrix $\bA$ with size $m\times n $ with $m>n$;
\State Initially set $\bB = \bA^\top$;
\For{$i=1$ to $n$}
\State \textcolor{winestain}{// do the left Householder reflector;}
\If{$i\leq n-1$}
\State \textcolor{black}{$\bB = \bB^\top$ such that $\bB\in \textcolor{blue}{\real^{m\times n}}$;}
\EndIf
\State \textcolor{black}{$\ba = \bB_{i:m,i}$, i.e., first column of $\bB_{i:m,i:n} \in \real^{(m-i+1)\times(n-i+1)}$;}
\State $r = ||\ba||$; \Comment{$2(m-i+1)$ flops}
\State $\bu_i = \ba-r\be_1 \in \real^{m-i+1}$; \Comment{1 flop}
\State $\bu_i = \bu_i / ||\bu_i||$; \Comment{$3(m-i+1)$ flops}
\State $\bB_{i,i} = r$, $\bB_{i+1:m,i}=\bzero$; \Comment{update the first column of $\bB_{i:m,i:n}$, 0 flops}
\State $\bB_{i:m,i+1:n} = \bB_{i:m,i+1:n} - 2\bu_i (\bu_i^\top \bB_{i:m,i+1:n})$; \Comment{update the rest columns of $\bB_{i:m,i:n}$, $4(m-i+1)(n-i) +(m-n+1)$ flops}
\If{$i\leq n-2$} \textcolor{winestain}{// do the right Householder reflector;}
\State $\bB = \bB^\top$ such that $\bB\in \textcolor{blue}{\real^{n\times m}}$;
\State $\bz = \bB_{i+1:n,i}$, i.e., first column of $\bB_{i+1:n,i:\textcolor{red}{m}}\in \real^{(n-i)\times(\textcolor{red}{m}-i+1)}$;
\State $s = ||\bz||$; \Comment{$2(n-i)$ flops}
\State $\bv_i = \bz-s\be_1 \in \real^{n-i}$; \Comment{1 flop}
\State $\bv_i = \bv_i / ||\bv_i||$; \Comment{$3(n-i)$ flops}
\State $\bB_{i+1,i}=s$, $\bB_{i+2:n,i}=\bzero$; \Comment{update the first column of $\bB_{i+1:n,i:\textcolor{red}{m}}$, 0 flops}
\State update columns $i+1$ to $m$ of $\bB_{i+1:n,i:\textcolor{red}{m}}$:
$$
\begin{aligned}
\bB_{i+1:n,i+1:\textcolor{red}{m}} &= (\bI-2\bv_i\bv_i^\top)\bB_{i+1:n,i+1:\textcolor{red}{m}}\\
&= \bB_{i+1:n,i+1:\textcolor{red}{m}} - 2\bv_i(\bv_i^\top\bB_{i+1:n,i+1:\textcolor{red}{m}}) \in \real^{(n-i)\times(\textcolor{red}{m}-i)}\\
& \text{\gap($4(n-i)(m-i) +(n-m)$ flops)}
\end{aligned}
$$
\EndIf
\EndFor
\State Output $\bB$ as the bidiagonal matrix;
\State Compute $\bU=\bH_1 \bH_2\ldots\bH_n$ as follows:
\State Initially set $\bU = \bH_1$;
\For{$i=1$ to $n-1$}
\State $\bU_{1:m,i+1:m} = \bU_{1:m,i+1:m}(\bI - 2\bu_{i+1}\bu_{i+1}^\top)=\bU_{1:m,i+1:m}-\bU_{1:m,i+1:m}2\bu_{i+1}\bu_{i+1}^\top$.
\EndFor
\State Initially set $\bV = \bL_1$;
\For{$i=1$ to $n-3$}
\State $\bV_{1:n,i+2:n} = \bV_{1:n,i+2:n}(\bI - 2\bv_{i+1}\bv_{i+1}^\top)=\bV_{1:n,i+2:n}-\bV_{1:n,i+2:n}2\bv_{i+1}\bv_{i+1}^\top \in \real^{n\times (n-i-1)}$;
\EndFor
\State Output $\bU$, $\bV$ as the orthogonal matrices;
\end{algorithmic}
\end{algorithm}
From the QR decomposition and the Hessenberg decomposition via Householder reflectors, it is straightforward to obtain the procedure formulated in Algorithm~\ref{alg:bidiagonal-decomposition-householder}, where the red-colored text marks the difference between the right Householder reflectors
in the bidiagonal decomposition and the reflectors in the Hessenberg decomposition (Algorithm~\ref{alg:hessenbert-decomposition-householder}).
\begin{theorem}[Algorithm Complexity: Golub-Kahan Bidiagonalization]\label{theorem:bidiagonal-full-householder}
Algorithm~\ref{alg:bidiagonal-decomposition-householder} requires $\sim 4mn^2-\frac{4}{3}n^3$ flops to compute a bidiagonal decomposition of an $m\times n$ matrix with $m>n$. Further, if $\bU, \bV$ are needed explicitly, additional $\sim 4m^2n-2mn^2 + 2n^3$ flops are required.
\end{theorem}
The proof follows by noting that the procedure requires roughly twice the work of a QR decomposition via the Householder reflector, since it resembles two Householder QR decompositions interleaved: one operating on the $m\times n$ matrix $\bA$ and the other on the $n\times m$ matrix $\bA^\top$. The complexity to obtain the orthogonal matrix $\bU$ is $4m^2n-2mn^2$ flops, the same as in the QR decomposition. Similarly, the complexity to obtain the orthogonal matrix $\bV$ is $2n^3$ flops, the same as in the Hessenberg decomposition.
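The interleaved procedure can be sketched in a few lines of NumPy (a minimal dense-matrix illustration, not the in-place Algorithm~\ref{alg:bidiagonal-decomposition-householder}; the reflectors are formed explicitly as full matrices for clarity):

```python
import numpy as np

def householder(a):
    """Householder matrix H (symmetric, orthogonal) with H @ a = ||a|| * e_1."""
    r = np.linalg.norm(a)
    u = a.astype(float).copy()
    u[0] -= r                      # u = a - r * e_1
    norm_u = np.linalg.norm(u)
    if norm_u < 1e-14:             # a is already a nonnegative multiple of e_1
        return np.eye(len(a))
    u /= norm_u
    return np.eye(len(a)) - 2.0 * np.outer(u, u)

def golub_kahan(A):
    """Return U, B, V with A = U @ B @ V.T and B upper bidiagonal (m >= n)."""
    m, n = A.shape
    U, V, B = np.eye(m), np.eye(n), A.astype(float).copy()
    for i in range(n):
        # left reflector H_i: zero out column i below the diagonal
        H = np.eye(m)
        H[i:, i:] = householder(B[i:, i])
        B, U = H @ B, U @ H
        if i < n - 2:
            # right reflector L_i: zero out row i beyond the superdiagonal;
            # applied immediately so the zeros just created are preserved
            L = np.eye(n)
            L[i + 1:, i + 1:] = householder(B[i, i + 1:])
            B, V = B @ L, V @ L
    return U, B, V

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
U, B, V = golub_kahan(A)
assert np.allclose(U @ B @ V.T, A)              # reconstruction
assert np.allclose(B, np.triu(np.tril(B, 1)))   # B is upper bidiagonal
```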
\section{Computing the Bidiagonal Decomposition: LHC Bidiagonalization}
We mentioned in the previous section that the left and right Householder reflectors must be applied in an interleaved manner; otherwise, the zeros introduced by the left reflectors would be destroyed. Nevertheless, when $m\gg n$, we can first extract the square triangular matrix via a QR decomposition and then apply the Golub-Kahan bidiagonalization to the resulting square $n\times n$ matrix. This is known as the \textit{Lawson-Hanson-Chan (LHC) bidiagonalization} \citep{lawson1995solving, chan1982improved}, and the procedure is shown in Figure~\ref{fig:lhc-bidiagonal}.\index{LHC bidiagonalization}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{imgs/LHC-bidiagonal.pdf}
\caption{Demonstration of LHC-bidiagonalization of a matrix}
\label{fig:lhc-bidiagonal}
\end{figure}
The LHC bidiagonalization starts by computing the QR decomposition $\bA = \bQ\bR$. It then applies the Golub-Kahan process to the square $n\times n$ triangular submatrix $\widetilde{\bR}$ inside $\bR$, yielding $\widetilde{\bR} = \widetilde{\bU} \widetilde{\bB} \bV^\top$. Embedding $\widetilde{\bU}$ into
$$
\bU_0 =
\begin{bmatrix}
\widetilde{\bU} & \\
& \bI_{m-n}
\end{bmatrix},
$$
which results in $\bR=\bU_0\bB \bV^\top$ and $\bA = \bQ\bU_0\bB \bV^\top$. Letting $\bU=\bQ\bU_0$, we obtain the bidiagonal decomposition. The QR decomposition requires $2mn^2-\frac{2}{3}n^3$ flops, and the Golub-Kahan process now requires $\frac{8}{3}n^3$ flops (operating on an $n\times n$ submatrix). Thus, the total complexity to obtain the bidiagonal matrix $\bB$ is
$$
\text{LHC bidiagonalization: } \sim 2mn^2 + 2n^3 \text{ flops}.
$$
The LHC process creates zeros and then destroys them again in the lower triangle of the upper $n\times n$ square of $\bR$, but the zeros in the lower $(m-n)\times n$ rectangular block of $\bR$ are preserved. Thus, when $m-n$ is large enough (i.e., $m\gg n$), there is a net gain. A simple calculation shows that the LHC bidiagonalization costs less than the Golub-Kahan bidiagonalization when $m>\frac{5}{3}n$.
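The crossover point $m=\frac{5}{3}n$ can be verified directly from the two leading-order flop counts (a quick numeric sanity check of the formulas above):

```python
def golub_kahan_flops(m, n):
    return 4 * m * n**2 - (4 / 3) * n**3

def lhc_flops(m, n):
    return 2 * m * n**2 + 2 * n**3

n = 300
m_star = 5 * n / 3                                         # predicted crossover m = (5/3) n
assert abs(golub_kahan_flops(m_star, n) - lhc_flops(m_star, n)) < 1.0
assert lhc_flops(2 * n, n) < golub_kahan_flops(2 * n, n)   # m > (5/3) n: LHC wins
assert lhc_flops(n + 1, n) > golub_kahan_flops(n + 1, n)   # m near n: Golub-Kahan wins
```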
\section{Computing the Bidiagonal Decomposition: Three-Step Bidiagonalization}
The LHC procedure is advantageous only when $m>\frac{5}{3}n$. A further trick is to apply the QR decomposition not at the beginning of the computation, but at a suitable point in the middle \citep{trefethen1997numerical}. In particular, the procedure is shown in Figure~\ref{fig:lhc-bidiagonal2}: we apply the first $k$ steps of left and right Householder reflectors as in the Golub-Kahan process, leaving the bottom-right $(m-k)\times(n-k)$ submatrix ``unreflected". We then apply the LHC process to this submatrix to obtain the final bidiagonal decomposition. Doing so reduces the complexity when $n<m<2n$.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{imgs/LHC-bidiagonal2.pdf}
\caption{Demonstration of Three-Step bidiagonalization of a matrix}
\label{fig:lhc-bidiagonal2}
\end{figure}
The complexity of the Three-Step bidiagonalization can be decomposed into three parts.
The complexity of $k$ loops in Algorithm~\ref{alg:bidiagonal-decomposition-householder} can be shown to be
$$
\text{Step 1: } \qquad f_1 = 8mnk - (4m+4n)k^2 + \frac{8}{3}k^3 \text{ flops},
$$
which amounts to $4mn^2 - \frac{4}{3}n^3$ flops when $k=n$, recovering the Golub-Kahan count.
The complexity of the QR decomposition via the Householder reflector for $\widetilde{\bR} \in \real^{(m-k)\times (n-k)}$ is
$$
\text{Step 2: }\qquad f_2 = 2(m-k)(n-k)^2 - \frac{2}{3} (n-k)^3\text{ flops}.
$$
And the complexity of the Golub-Kahan diagonalization for $\widetilde{\bT} \in \real^{(n-k)\times (n-k)}$ is
$$
\text{Step 3: }\qquad f_3 = \frac{8}{3}(n-k)^3\text{ flops}.
$$
Thus, the total complexity of the three steps is given by
$$
g(k) =f_1 +f_2+f_3 = -\frac{4}{3}k^3 + (6n-2m)k^2 +(4mn-8n^2)k + 2mn^2+2n^3 .
$$
The problem now becomes finding a $k<n$ such that $g(k)$ is minimized. Taking the derivative of the above function, we obtain
$$
g^\prime(k) = -4k^2 +(12n-4m)k + (4mn-8n^2),
$$
whose roots are $k=n$ and $k=2n-m$. Since $0<k<n$, when $2n-m>0$ (i.e., $m<2n$), the minimum is attained at one of $\{0, n, 2n-m\}$. A direct calculation shows that
the optimal value is $k=2n-m$, and the final complexity reduces to
$$
g(2n-m) = 2mn^2 + 2m^2n -\frac{2}{3}m^3 -\frac{2}{3}n^3 \,\, \text{ flops}, \qquad \text{when $m<2n$}.
$$
An example of the complexity of the Three-Step method when $n=70, m=100$ is shown in Figure~\ref{fig:bidiagonal-gk-sample}, where the roots of the derivative are found to be $2n-m=40$ and $n=70$ such that $g^\prime(40)=g^\prime(70)=0$. In this specific case, the function $g(k)$ is decreasing on $(0, 2n-m]$ and increasing on $(2n-m, n]$.
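The cost function $g(k)$ and its minimizer can be checked numerically, reproducing the $n=70$, $m=100$ example above (a brute-force search over integer $k$):

```python
def g(k, m, n):
    """Total flops of the Three-Step bidiagonalization for crossover index k."""
    f1 = 8 * m * n * k - (4 * m + 4 * n) * k**2 + (8 / 3) * k**3
    f2 = 2 * (m - k) * (n - k)**2 - (2 / 3) * (n - k)**3
    f3 = (8 / 3) * (n - k)**3
    return f1 + f2 + f3

m, n = 100, 70
best_k = min(range(n + 1), key=lambda k: g(k, m, n))
assert best_k == 2 * n - m                            # minimizer k = 2n - m = 40
# minimum value matches the closed form 2mn^2 + 2m^2 n - (2/3)m^3 - (2/3)n^3
expected = 2 * m * n**2 + 2 * m**2 * n - (2 / 3) * m**3 - (2 / 3) * n**3
assert abs(g(best_k, m, n) - expected) < 1e-6
```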
\noindent
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[$g(k)$]{\label{fig:bidiagonal-gk-sample1}
\includegraphics[width=0.475\linewidth]{./imgs/bidiagonal-gk-sample.pdf}}
\quad
\subfigure[$g^\prime(k)$]{\label{fig:bidiagonal-gk-sample2}
\includegraphics[width=0.475\linewidth]{./imgs/bidiagonal-gk-sample2.pdf}}
\caption{An example of the complexity when $n=70, m=100$.}
\label{fig:bidiagonal-gk-sample}
\end{figure}
To conclude, the costs of the three methods are shown as follows:
$$
\left\{
\begin{aligned}
&\text{Golub-Kahan: } \sim 4mn^2-\frac{4}{3}n^3 \,\, \text{ flops}, \\
&\text{LHC: } \sim 2mn^2 + 2n^3 \,\, \text{ flops}, \\
&\text{Three-Step: } \sim 2mn^2 + 2m^2n -\frac{2}{3}m^3 -\frac{2}{3}n^3 \,\, \text{ flops} .
\end{aligned}
\right.
$$
When $m>2n$, LHC is preferred; when $n<m<2n$, the Three-Step method is preferred, though the improvement is modest, as shown in Figure~\ref{fig:bidiagonal-loss-compare}, where the operation counts of the three methods are plotted as functions of $\frac{m}{n}$.
\begin{SCfigure}
\centering
\includegraphics[width=0.6\textwidth]{imgs/bidiagonal-loss.pdf}
\caption{Comparison of the complexity among the three bidiagonal methods. When $m>2n$, LHC is preferred; when $n<m<2n$, the Three-Step method is preferred, though the improvement is modest.}
\label{fig:bidiagonal-loss-compare}
\end{SCfigure}
Notice that the complexities discussed here do not include the extra computation of $\bU, \bV$; we omit this for simplicity.
\section{Connection to Tridiagonal Decomposition}
We first illustrate the connection with the following lemma, which reveals how to construct a tridiagonal matrix from a bidiagonal one.
\begin{lemma}[Construct Tridiagonal From Bidiagonal]\label{lemma:construct-triangular-from-bidia}
Suppose $\bB\in \real^{n\times n}$ is upper bidiagonal, then $\bT_1=\bB^\top\bB$ and $\bT_2=\bB\bB^\top$ are \textit{symmetric tridiagonal} matrices.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:construct-triangular-from-bidia}]
Suppose $\bB$ has the following form
$$
\bB=
\begin{bmatrix}
b_{11} & b_{12} & 0 & 0 &\ldots \\
0 & b_{22} & b_{23} & 0 & \ldots \\
0 & 0 & b_{33} & b_{34} & \ldots \\
\vdots & \vdots & \vdots & \ddots & \ddots \\
\ldots & \ldots & \ldots & \ldots & b_{nn}
\end{bmatrix}.
$$
Then $ \bT_1=\bB^\top \bB$ is given by
$$
\begin{aligned}
&\gap \bT_1=\bB^\top\bB =\\
&\begin{bmatrix}
b_{11} & 0 & 0 & 0 &\ldots \\
b_{12} & b_{22} & 0 & 0 & \ldots \\
0 & b_{23} & b_{33} & 0 & \ldots \\
0 & 0 & b_{34} & \vdots & \ddots \\
\vdots & \vdots & \vdots & \vdots & \vdots
\end{bmatrix}
\begin{bmatrix}
b_{11} & b_{12} & 0 & 0 &\ldots \\
0 & b_{22} & b_{23} & 0 & \ldots \\
0 & 0 & b_{33} & b_{34} & \ldots \\
\vdots & \vdots & \vdots & \ddots & \ddots \\
\ldots & \ldots & \ldots & \ldots & b_{nn}
\end{bmatrix}
=
\begin{bmatrix}
b_{11}^2 & b_{11}b_{12} & 0 &\ldots \\
b_{11}b_{12} &b_{12}^2+ b_{22}^2 & b_{22}b_{23} & \ldots \\
0 & b_{22}b_{23} & b_{23}^2+b_{33}^2 & \ldots \\
\vdots & \vdots & \ddots & \ddots \\
\end{bmatrix},
\end{aligned}
$$
which is symmetric and tridiagonal as claimed. Similarly, we can prove $\bT_2=\bB\bB^\top$ is also symmetric and tridiagonal:
$$
\begin{aligned}
&\gap \bT_2=\bB\bB^\top =\\
&
\begin{bmatrix}
b_{11} & b_{12} & 0 & 0 &\ldots \\
0 & b_{22} & b_{23} & 0 & \ldots \\
0 & 0 & b_{33} & b_{34} & \ldots \\
\vdots & \vdots & \vdots & \ddots & \ddots \\
\ldots & \ldots & \ldots & \ldots & b_{nn}
\end{bmatrix}
\begin{bmatrix}
b_{11} & 0 & 0 & 0 &\ldots \\
b_{12} & b_{22} & 0 & 0 & \ldots \\
0 & b_{23} & b_{33} & 0 & \ldots \\
0 & 0 & b_{34} & \vdots & \ddots \\
\vdots & \vdots & \vdots & \vdots & \vdots
\end{bmatrix}
=
\begin{bmatrix}
b_{11}^2 +b_{12}^2 & b_{12}b_{22} & 0 &\ldots \\
b_{12}b_{22} &b_{22}^2+ b_{23}^2 & b_{23}b_{33} & \ldots \\
0 & b_{23}b_{33} & b_{33}^2+b_{34}^2 & \ldots \\
\vdots & \vdots & \ddots & \ddots \\
\end{bmatrix},
\end{aligned}
$$
\end{proof}
The lemma above reveals an important property. Suppose $\bA=\bU\bB\bV^\top$ is the bidiagonal decomposition of $\bA$, then the symmetric matrix $\bA\bA^\top$ has a tridiagonal decomposition
$$
\bA\bA^\top=\bU\bB\bV^\top \bV\bB^\top\bU^\top = \bU\bB\bB^\top\bU^\top.
$$
And the symmetric matrix $\bA^\top\bA$ has a tridiagonal decomposition
$$
\bA^\top\bA=\bV\bB^\top\bU^\top \bU\bB\bV^\top=\bV\bB^\top\bB\bV^\top.
$$
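Lemma~\ref{lemma:construct-triangular-from-bidia} and the two identities above are easy to confirm numerically (a small NumPy check with a random upper bidiagonal $\bB$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# random upper bidiagonal B: main diagonal plus first superdiagonal
B = np.diag(rng.standard_normal(n)) + np.diag(rng.standard_normal(n - 1), k=1)

def is_tridiagonal(T):
    """True if T vanishes outside the band |i - j| <= 1."""
    return np.allclose(T, np.triu(np.tril(T, 1), -1))

T1, T2 = B.T @ B, B @ B.T
assert is_tridiagonal(T1) and is_tridiagonal(T2)
assert np.allclose(T1, T1.T) and np.allclose(T2, T2.T)   # both symmetric
```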
As a final result in this section, we state a theorem giving the tridiagonal decomposition of a symmetric matrix with special eigenvalues.
\begin{theoremHigh}[Tridiagonal Decomposition for Nonnegative Eigenvalues]\label{theorem:tri-nonnegative-eigen}
Suppose $n\times n$ symmetric matrix $\bA$ has nonnegative eigenvalues, then there exists a matrix $\bZ$ such that
$$
\bA=\bZ\bZ^\top.
$$
Moreover, the tridiagonal decomposition of $\bA$ can be reduced to a problem to find the bidiagonal decomposition of $\bZ =\bU\bB\bV^\top$ such that the tridiagonal decomposition of $\bA$ is given by
$$
\bA = \bZ\bZ^\top = \bU\bB\bB^\top\bU^\top.
$$
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:tri-nonnegative-eigen}]
The eigenvectors of symmetric matrices can be chosen to be orthogonal (Lemma~\ref{lemma:orthogonal-eigenvectors}, p.~\pageref{lemma:orthogonal-eigenvectors}) such that symmetric matrix $\bA$ can be decomposed into $\bA=\bQ\bLambda\bQ^\top$ (spectral theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}) where $\bLambda$ is a diagonal matrix containing the eigenvalues of $\bA$. When eigenvalues are nonnegative, $\bLambda$ can be factored as $\bLambda=\bLambda^{1/2} \bLambda^{1/2}$. Let $\bZ = \bQ\bLambda^{1/2}$, $\bA$ can be factored as $\bA=\bZ\bZ^\top$. Thus, combining our findings yields the result.
\end{proof}
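The construction $\bA=\bZ\bZ^\top$ with $\bZ=\bQ\bLambda^{1/2}$ from the proof can be carried out directly (a sketch; the remaining step, bidiagonalizing $\bZ$, is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
G = rng.standard_normal((n, n))
A = G @ G.T                            # symmetric with nonnegative eigenvalues
lam, Q = np.linalg.eigh(A)             # spectral decomposition A = Q diag(lam) Q^T
lam = np.clip(lam, 0.0, None)          # guard against tiny negative round-off
Z = Q @ np.diag(np.sqrt(lam))          # Z = Q * Lambda^{1/2}
assert np.allclose(Z @ Z.T, A)         # A = Z Z^T
```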
\chapter{Interpolative Decomposition (ID)}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Interpolative Decomposition}
The column interpolative decomposition (ID) factors a matrix as the product of two matrices, one of which consists of columns selected from the original matrix, while the other contains an identity submatrix and has all entries no greater than 1 in absolute value. Formally, we have the following theorem describing the details of the column ID.
\begin{theoremHigh}[Column Interpolative Decomposition]\label{theorem:interpolative-decomposition}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\underset{m \times n}{\bA} = \underset{m\times r}{\bC} \gap \underset{r\times n}{\bW},
$$
where $\bC\in \real^{m\times r}$ contains some $r$ linearly independent columns of $\bA$, and $\bW\in \real^{r\times n}$ is the matrix used to reconstruct $\bA$, which contains an $r\times r$ identity submatrix (under a mild column permutation). Specifically, the entries of $\bW$ have values no larger than 1 in magnitude:
$$
\max |w_{ij}|\leq 1, \,\, \forall \,\, i\in [1,r], j\in [1,n].
$$
The storage for the decomposition is then reduced (or potentially increased) from $mn$ floats to $mr$ and $(n-r)r$ floats for storing $\bC$ and $\bW$ respectively, and extra $r$ integers are required to record the position of each column of $\bC$ within $\bA$.
\end{theoremHigh}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{imgs/id-column.pdf}
\caption{Demonstration of the column ID of a matrix where the \textcolor{canaryyellow}{yellow} vector denotes the linearly independent columns of $\bA$, white entries denote zero, and \textcolor{canarypurple}{purple} entries denote one.}
\label{fig:column-id}
\end{figure}
The illustration of the column ID is shown in Figure~\ref{fig:column-id}, where the \textcolor{canaryyellow}{yellow} vectors denote the linearly independent columns of $\bA$, and the \textcolor{canarypurple}{purple} vectors in $\bW$ form an $r\times r$ identity submatrix. The positions of the \textcolor{canarypurple}{purple} vectors inside $\bW$ are exactly the same as the positions of the corresponding \textcolor{canaryyellow}{yellow} vectors inside $\bA$. The column ID is very similar to the CR decomposition (Theorem~\ref{theorem:cr-decomposition}, p.~\pageref{theorem:cr-decomposition}): both select $r$ linearly independent columns into the first factor, and the second factor contains an $r\times r$ identity submatrix. The difference is that the CR decomposition chooses exactly the \textit{first} $r$ linearly independent columns for the first factor, and the identity submatrix appears in the pivots (Definition~\ref{definition:pivot}, p.~\pageref{definition:pivot}). More importantly, the second factor in the CR decomposition comes from the RREF (Lemma~\ref{lemma:r-in-cr-decomposition}, p.~\pageref{lemma:r-in-cr-decomposition}).
Therefore, the column ID can also be utilized in the applications of the CR decomposition, e.g., proving that rank equals trace for idempotent matrices (Lemma~\ref{lemma:rank-of-symmetric-idempotent2_tmp}, p.~\pageref{lemma:rank-of-symmetric-idempotent2_tmp}), and proving the elementary theorem of linear algebra that the column rank of a matrix equals its row rank (Corollary~\ref{lemma:equal-dimension-rank}, p.~\pageref{lemma:equal-dimension-rank}).
Moreover, the column ID is also a special case of rank decomposition (Theorem~\ref{theorem:rank-decomposition}, p.~\pageref{theorem:rank-decomposition}) and is apparently not unique. The connection between different column IDs is given by Lemma~\ref{lemma:connection-rank-decom} (p.~\pageref{lemma:connection-rank-decom}).
\paragraph{Notations that will be extensively used in the sequel} Following again the Matlab-style notation, if $J_s$ is an index vector with size $r$ that contains the indices of columns selected from $\bA$ into $\bC$, then $\bC$ can be denoted as $\bC=\bA[:,J_s]$ (Definition~\ref{definition:matlabnotation}, p.~\pageref{definition:matlabnotation}).
The matrix $\bC$ contains ``skeleton" columns of $\bA$, hence the subscript $s$ in $J_s$. From the ``skeleton" index vector $J_s$, the $r\times r$ identity matrix inside $\bW$ can be recovered by
$$
\bW[:,J_s] = \bI_r \in \real^{r\times r}.
$$
Suppose further we put the remaining indices of $\bA$ into an index vector $J_r$ where
$$
J_s\cap J_r=\varnothing \qquad \text{and}\qquad J_s\cup J_r = \{1,2,\ldots, n\}.
$$
The remaining $n-r$ columns of $\bW$ form an $r\times (n-r)$ \textit{expansion matrix}, since this matrix contains \textit{expansion coefficients} used to reconstruct the columns of $\bA$ from $\bC$:
$$
\bE = \bW[:,J_r] \in \real^{r\times (n-r)},
$$
where the entries of $\bE$ are known as the \textit{expansion coefficients}. Moreover, let $\bP\in \real^{n\times n}$ be a (column) permutation matrix (Definition~\ref{definition:permutation-matrix}, p.~\pageref{definition:permutation-matrix}) defined by $\bP=\bI_n[:,(J_s, J_r)]$ so that
$$
\bA\bP = \bA[:,(J_s, J_r)] = \left[\bC, \bA[:,J_r]\right],
$$
and
\begin{equation}\label{equation:interpolatibve-w-ep}
\bW\bP = \bW[:,(J_s, J_r)] =\left[\bI_r, \bE \right] \leadto \bW = \left[\bI_r, \bE \right] \bP^\top.
\end{equation}
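The bookkeeping in Equation~\eqref{equation:interpolatibve-w-ep} can be sketched in NumPy; the index set $J_s$ and the expansion matrix $\bE$ below are made-up illustrative values, not computed from any particular $\bA$:

```python
import numpy as np

n, r = 5, 2
Js = [1, 3]                                   # hypothetical skeleton indices
Jr = [j for j in range(n) if j not in Js]     # remaining indices
P = np.eye(n)[:, Js + Jr]                     # permutation P = I_n[:, (Js, Jr)]
E = np.array([[0.5, -1.0, 0.25],
              [-0.5, 1.0, 0.75]])             # hypothetical expansion coefficients
W = np.hstack([np.eye(r), E]) @ P.T           # W = [I_r, E] P^T
assert np.allclose(W[:, Js], np.eye(r))       # identity submatrix sits at J_s
assert np.allclose(W[:, Jr], E)               # expansion block sits at J_r
```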
\section{Existence of the Column Interpolative Decomposition}\label{section:proof-column-id}
\paragraph{Cramer's rule}
The proof of the existence of the column ID relies on Cramer's rule, which we briefly discuss here. Consider a system of $n$ linear equations in $n$ unknowns, represented in matrix form as follows \index{Cramer's rule}:
$$
\bM \bx = \bl,
$$
where $\bM\in \real^{n\times n}$ is nonsingular and $\bx,\bl \in \real^n$. Cramer's rule states that the system has a unique solution, whose individual unknowns are given by:
$$
x_i = \frac{\det(\bM_i)}{\det(\bM)}, \qquad \text{for all}\gap i\in \{1,2,\ldots, n\},
$$
where $\bM_i$ is the matrix formed by replacing the $i$-th column of $\bM$ with the column vector $\bl$. In full generality, Cramer's rule considers the matrix equation
$$
\bM\bX = \bL,
$$
where $\bM\in \real^{n\times n}$ is nonsingular and $\bX,\bL\in \real^{n\times m}$. Let $I=[i_1, i_2, \ldots, i_k]$ and $J=[j_1,j_2,\ldots, j_k]$ be two index vectors, where $1\leq i_1\leq i_2\leq \ldots\leq i_k\leq n$ and $1\leq j_1\leq j_2\leq \ldots\leq j_k\leq n$. Then $\bX[I,J]$ is a $k\times k$ submatrix of $\bX$. Let further $\bM_{\bL}(I,J)$ be the $n\times n$ matrix formed by replacing the $i_s$-th column of $\bM$ by the $j_s$-th column of $\bL$, for all $s\in \{1,2,\ldots, k\}$. Then
$$
\det(\bX[I,J]) = \frac{\det\left(\bM_{\bL}(I,J)\right)}{\det(\bM)}.
$$
When $I,J$ are of size 1, it follows that
\begin{equation}\label{equation:cramer-rule-general}
x_{ij} = \frac{\det\left(\bM_{\bL}(i,j)\right)}{\det(\bM)}.
\end{equation}
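The general form of Cramer's rule in Equation~\eqref{equation:cramer-rule-general} can be verified numerically entry by entry (a small NumPy sanity check with a random nonsingular $\bM$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3
M = rng.standard_normal((n, n))
L = rng.standard_normal((n, m))
X = np.linalg.solve(M, L)                     # the unique solution of M X = L

for i in range(n):
    for j in range(m):
        Mi = M.copy()
        Mi[:, i] = L[:, j]                    # replace column i of M by column j of L
        assert np.isclose(X[i, j], np.linalg.det(Mi) / np.linalg.det(M))
```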
Now we are ready to prove the existence of the column ID.
\begin{proof}[of Theorem~\ref{theorem:interpolative-decomposition}]
We have mentioned above that the proof relies on Cramer's rule. If we can show that the entries of $\bW$ can be expressed via the Cramer's rule equality in Equation~\eqref{equation:cramer-rule-general}, with the numerator no larger in magnitude than the denominator, then the proof is complete. However, notice that the matrix in the denominator of Equation~\eqref{equation:cramer-rule-general} is square. Here comes the trick.
\paragraph{Step 1: column ID for full row rank matrix}
For a start, we first consider a matrix $\bA$ with full row rank (which implies $r=m$, $m\leq n$, and $\bA\in \real^{r\times n}$, so that the matrix $\bC\in \real^{r\times r}$ in the desired column ID $\bA=\bC\bW$ is square). Determine the ``skeleton" index vector $J_s$ by
\begin{equation}\label{equation:interpolative-choose-js}
\boxed{ J_s = \mathop{\arg\max}_{J} \left\{|\det(\bA[:,J])|: \text{$J$ is a subset of $\{1,2,\ldots, n\}$ with size $r=m$} \right\},}
\end{equation}
i.e., $J_s$ is the index vector that is determined by maximizing the magnitude of the determinant of $\bA[:,J]$. As we have discussed in the last section, there exists a (column) permutation matrix such that
$$
\bA\bP =
\begin{bmatrix}
\bA[:,J_s]&\bA[:,J_r]
\end{bmatrix}.
$$
Since $\bC=\bA[:,J_s]$ has full column rank $r=m$, it is then nonsingular. The above equation can be rewritten as
$$
\begin{aligned}
\bA
&=\begin{bmatrix}
\bA[:,J_s]&\bA[:,J_r]
\end{bmatrix}\bP^\top\\
&=
\bA[:,J_s]
\bigg[
\bI_r \gap \bA[:,J_s]^{-1}\bA[:,J_r]
\bigg]
\bP^\top\\
&= \bC
\underbrace{\begin{bmatrix}
\bI_r & \bC^{-1}\bA[:,J_r]
\end{bmatrix}
\bP^\top}_{\bW}
\end{aligned},
$$
where the matrix $\bW$ is given by
$
\begin{bmatrix}
\bI_r & \bC^{-1}\bA[:,J_r]
\end{bmatrix}\bP^\top
=
\begin{bmatrix}
\bI_r & \bE
\end{bmatrix}\bP^\top
$ by Equation~\eqref{equation:interpolatibve-w-ep}. Proving the claim that the entries of $\bW$ are no larger than 1 in magnitude is equivalent to proving that the entries of $\bE=\bC^{-1}\bA[:,J_r]\in \real^{r\times (n-r)}$ are no greater than 1 in absolute value.
Define the index vector $[j_1,j_2,\ldots, j_n]$ as a permutation of $[1,2,\ldots, n]$ such that
$$
[j_1,j_2,\ldots, j_n] = [1,2,\ldots, n] \bP = [J_s, J_r].\footnote{Note here $[j_1,j_2,\ldots, j_n] $, $[1,2,\ldots, n]$, $J_s$, and $J_r$ are row vectors.}
$$
Thus, it follows from $\bC\bE=\bA[:,J_r]$ that
$$
\begin{aligned}
\underbrace{ [\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}]}_{=\bC=\bA[:,J_s]} \bE &=
\underbrace{[\ba_{j_{r+1}}, \ba_{j_{r+2}}, \ldots, \ba_{j_n}]}_{=\bA[:,J_r]:=\bB},
\end{aligned}
$$
where $\ba_i$ is the $i$-th column of $\bA$, and we let $\bB=\bA[:,J_r]$.
Therefore, by Cramer's rule in Equation~\eqref{equation:cramer-rule-general}, we have
\begin{equation}\label{equation:column-id-expansionmatrix}
\bE_{kl} =
\frac{\det\left(\bC_{\bB}(k,l)\right)}
{\det\left(\bC\right)},
\end{equation}
where $\bE_{kl}$ is the entry ($k,l$) of $\bE$ and $\bC_{\bB}(k,l)$ is the $r\times r$ matrix formed by replacing the $k$-th column of $\bC$ by the $l$-th column of $\bB$. For example,
$$
\begin{aligned}
\bE_{11} &=
\frac{\det\left([\textcolor{blue}{\ba_{j_{r+1}}}, \ba_{j_2}, \ldots, \ba_{j_r}]\right)}
{\det\left([\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}]\right)},
\qquad
&\bE_{12} &=
\frac{\det\left([\textcolor{blue}{\ba_{j_{r+2}}}, \ba_{j_2},\ldots, \ba_{j_r}]\right)}
{\det\left([\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}]\right)},\\
\bE_{21} &=
\frac{\det\left([\ba_{j_1},\textcolor{blue}{\ba_{j_{r+1}}}, \ldots, \ba_{j_r}]\right)}
{\det\left([\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}]\right)},
\qquad
&\bE_{22} &=
\frac{\det\left([\ba_{j_1},\textcolor{blue}{\ba_{j_{r+2}}}, \ldots, \ba_{j_r}]\right)}
{\det\left([\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}]\right)}.
\end{aligned}
$$
Since $J_s$ is chosen to maximize the magnitude of $\det(\bC)$ in Equation~\eqref{equation:interpolative-choose-js}, it follows that
$$
|\bE_{kl}|\leq 1, \qquad \text{for all}\gap k\in \{1,2,\ldots, r\}, l\in \{1,2,\ldots, n-r\}.
$$
\paragraph{Step 2: apply to general matrices}
To summarize what we have proved above (with a slight abuse of notation): for any matrix $\bF\in \real^{r\times n}$ with \textbf{full} rank $r\leq n$, the column ID $\bF=\bC_0\bW$ exists, where the values in $\bW$ are no greater than 1 in absolute value.
Applying this finding to a general matrix $\bA\in \real^{m\times n}$ with rank $r\leq \min\{m,n\}$, the matrix $\bA$ admits a rank decomposition (Theorem~\ref{theorem:rank-decomposition}, p.~\pageref{theorem:rank-decomposition}):
$$
\underset{m\times n}{\bA} = \underset{m\times r}{\bD}\gap \underset{r\times n}{\bF},
$$
where $\bD$ and $\bF$ have full column rank $r$ and full row rank $r$, respectively. Consider the column ID $\bF=\bC_0\bW$, where $\bC_0=\bF[:,J_s]$ contains $r$ linearly independent columns of $\bF$. Since $\bA=\bD\bF$, we notice that
$$
\bA[:,J_s]=\bD\bF[:,J_s],
$$
i.e., the columns of $\bD\bF$ indexed by $J_s$ can be obtained as $\bD\bF[:,J_s]$, which in turn are the columns of $\bA$ indexed by $J_s$. This makes
$$
\underbrace{\bA[:,J_s]}_{\bC}= \underbrace{\bD\bF[:,J_s]}_{\bD\bC_0},
$$
and
$$
\bA = \bD\bF =\bD\bC_0\bW = \underbrace{\bD\bF[:,J_s]}_{\bC}\bW=\bC\bW.
$$
This completes the proof.
\end{proof}
The above proof reveals an intuitive way to compute the optimal column ID of matrix $\bA$ as shown in Algorithm~\ref{alg:column-id-intuitive}. However, any algorithm that is guaranteed to find such an optimally-conditioned factorization must have combinatorial complexity \citep{martinsson2019randomized}. In the next sections, we will consider alternative ways to find a relatively well-conditioned factorization.
\begin{algorithm}[h]
\caption{An \textcolor{blue}{Intuitive} Method to Compute the Column ID}
\label{alg:column-id-intuitive}
\begin{algorithmic}[1]
\Require
Rank-$r$ matrix $\bA$ with size $m\times n $;
\State Compute the rank decomposition $\underset{m\times n}{\bA} = \underset{m\times r}{\bD}\gap \underset{r\times n}{\bF}$ (Theorem~\ref{theorem:rank-decomposition}, p.~\pageref{theorem:rank-decomposition}) such as from UTV (Section~\ref{section:ulv-urv-decomposition}, p.~\pageref{section:ulv-urv-decomposition});
\State Compute column ID of $\bF$: $\bF=\bF[:,J_s]\bW = \widetildebC\bW$:
$$
\begin{aligned}
2.1. &\left\{
\begin{aligned}
J_s &= \mathop{\arg\max}_{J} \left\{|\det(\bF[:,J])|: \text{$J$ is a subset of $\{1,2,\ldots, n\}$ with size $r$} \right\};&\\
J_r &= \{1,2,\ldots, n\} \setminus J_s;&\\
\end{aligned}
\right.\\
2.2.&\left\{
\begin{aligned}
\widetildebC &= \bF[:,J_s]; \\
\bM &= \bF[:,J_r];
\end{aligned}
\right.\\
2.3. &\bF\bP = \bF[:,(J_s,J_r)] \text{ to obtain $\bP$};\\
2.4. &\bE_{kl} =
\frac{\det\left(\widetildebC_{\bM}(k,l)\right)}
{\det\left(\widetildebC\right)}, \qquad \text{for all}\gap k\in [1, r], l\in [1,n-r] \text{ ~(Equation~\eqref{equation:column-id-expansionmatrix})};\\
2.5. &\bW= [\bI_r, \bE]\bP^\top \text{ ~(Equation~\eqref{equation:interpolatibve-w-ep})}.
\end{aligned}
$$
\State $\bC=\bA[:,J_s]$;
\State Output the column ID $\bA=\bC\bW$;
\end{algorithmic}
\end{algorithm}
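A minimal NumPy sketch of Algorithm~\ref{alg:column-id-intuitive} for tiny matrices follows. Two simplifications are assumed: the squared volume $\det(\bA[:,J]^\top\bA[:,J])$ replaces the determinant magnitude for rectangular $\bA$ (so no explicit rank decomposition is needed), and least squares replaces Cramer's rule for computing $\bW$; the exhaustive search remains combinatorial:

```python
import numpy as np
from itertools import combinations

def column_id(A, r):
    """Column ID A = C @ W via exhaustive maximal-volume search (tiny matrices only)."""
    m, n = A.shape
    # squared volume of the columns indexed by J
    vol = lambda J: abs(np.linalg.det(A[:, list(J)].T @ A[:, list(J)]))
    Js = list(max(combinations(range(n), r), key=vol))
    C = A[:, Js]
    W = np.linalg.lstsq(C, A, rcond=None)[0]  # exact when rank(A) = r
    return Js, C, W

A = np.array([[56., 41., 30.],
              [32., 23., 18.],
              [80., 59., 42.]])               # the rank-2 matrix of Example below
Js, C, W = column_id(A, 2)
assert np.allclose(C @ W, A)                  # A = C W
assert np.abs(W).max() <= 1 + 1e-9            # entries bounded by 1 in magnitude
```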
\begin{example}[Compute the Column ID]\label{example:column-id-a}
For matrix
$$
\bA=
\begin{bmatrix}
56 & 41 & 30\\
32 & 23 & 18\\
80 & 59 & 42
\end{bmatrix}
$$
with rank 2, the process for computing the column ID of $\bA$ is as follows. We first find a rank decomposition
$$
\bA = \bD\bF=
\begin{bmatrix}
1 & 0 \\
0 & 1 \\
2 &-1
\end{bmatrix}
\begin{bmatrix}
56 & 41 & 30 \\
32 & 23 & 18
\end{bmatrix}.
$$
Since the rank $r=2$, $J_s$ is one of $[1,2], [0,2], [0,1]$ (using zero-based column indices), for which the absolute determinants of $\bF[:,J_s]$ are $48, 48, 24$, respectively. We proceed by choosing $J_s=[0,2]$:
$$
\begin{aligned}
\widetildebC &= \bF[:,J_s]=
\begin{bmatrix}
56 & 30 \\
32 & 18
\end{bmatrix},\qquad
\bM &= \bF[:,J_r]=\begin{bmatrix}
41 \\
23
\end{bmatrix}.
\end{aligned}
$$
And
$$
\bF\bP = \bF[:,(J_s,J_r)] = \bF[:,(0,2,1)]
\leadto
\bP =
\begin{bmatrix}
1 & & \\
& &1\\
& 1 &
\end{bmatrix}.
$$
In this example, $\bE\in \real^{2\times 1}$:
$$
\begin{aligned}
\bE_{11} &=
\det\left(
\begin{bmatrix}
41 & 30 \\
23 & 18
\end{bmatrix}\right)\bigg/
\det\left(
\begin{bmatrix}
56 & 30 \\
32 & 18
\end{bmatrix}\right)=1;\\
\bE_{21} &=
\det\left(
\begin{bmatrix}
56 & 41 \\
32 & 23
\end{bmatrix}\right)\bigg/
\det\left(
\begin{bmatrix}
56 & 30 \\
32 & 18
\end{bmatrix}\right)=-\frac{1}{2}.
\end{aligned}
$$
This makes
$$
\bE =
\begin{bmatrix}
1\\-\frac{1}{2}
\end{bmatrix}
\leadto
\bW = [\bI_2, \bE]\bP^\top =
\begin{bmatrix}
1 & 1 & 0\\
0 & -\frac{1}{2} & 1
\end{bmatrix}.
$$
The final selected columns are
$$
\bC = \bA[:,J_s] =
\begin{bmatrix}
56 & 30\\
32 & 18\\
80 & 42
\end{bmatrix}.
$$
The net result is given by
$$
\bA=\bC\bW =
\begin{bmatrix}
56 & 30\\
32 & 18\\
80 & 42
\end{bmatrix}
\begin{bmatrix}
1 & 1 & 0\\
0 & -\frac{1}{2} & 1
\end{bmatrix},
$$
where the entries of $\bW$ are no greater than 1 in absolute value, as desired.
\exampbar
\end{example}
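The factors computed in Example~\ref{example:column-id-a} can be verified directly (a quick NumPy check):

```python
import numpy as np

A = np.array([[56., 41., 30.],
              [32., 23., 18.],
              [80., 59., 42.]])
C = A[:, [0, 2]]                       # skeleton columns, J_s = [0, 2]
W = np.array([[1., 1., 0.],
              [0., -0.5, 1.]])
assert np.linalg.matrix_rank(A) == 2
assert np.allclose(C @ W, A)           # A = C W
assert np.abs(W).max() <= 1.0          # entries of W bounded by 1 in magnitude
```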
To conclude this section, we discuss where the non-uniqueness of the column ID comes from.
\begin{remark}[Non-uniqueness of the Column ID]
In Example~\ref{example:column-id-a} above, we notice that $\bF[:,(1,2)]$ and $\bF[:,(0,2)]$ both attain the maximal absolute determinant. Therefore, both of them can yield a column ID of $\bA$. Moreover, we only selected $J_s$ from the ordered candidates $[1,2], [0,2], [0,1]$; once $J_s$ is fixed by the maximal absolute determinant search, any permutation of it can also be selected, e.g., $J_s=[0,2]$ and $J_s=[2,0]$ are both valid. These two choices in the column index search yield the non-uniqueness of the column ID.
\end{remark}
\section{Row ID and Two-Sided ID}
We have termed the decomposition above the \textit{column} ID. This naming is no coincidence, since the decomposition has siblings:
\begin{theoremHigh}[The Whole Interpolative Decomposition]\label{theorem:interpolative-decomposition-row}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\begin{aligned}
\text{Column ID: }&\gap \underset{m \times n}{\bA} &=& \boxed{\underset{m\times r}{\bC}} \gap \underset{r\times n}{\bW} ; \\
\text{Row ID: } &\gap &=&\underset{m\times r}{\bZ} \gap \boxed{\underset{r\times n}{\bR}}; \\
\text{Two-Sided ID: } &\gap &=&\underset{m\times r}{\bZ} \gap \boxed{\underset{r\times r}{\bU}} \gap \underset{r\times n}{\bW}, \\
\end{aligned}
$$
where
\begin{itemize}
\item $\bC=\bA[:,J_s]\in \real^{m\times r}$ is some $r$ linearly independent columns of $\bA$, $\bW\in \real^{r\times n}$ is the matrix to reconstruct $\bA$ which contains an $r\times r$ identity submatrix (under a mild column permutation): $\bW[:,J_s]=\bI_r$;
\item $\bR=\bA[I_s,:]\in \real^{r\times n}$ is some $r$ linearly independent rows of $\bA$, $\bZ\in \real^{m\times r}$ is the matrix to reconstruct $\bA$ which contains an $r\times r$ identity submatrix (under a mild row permutation): $\bZ[I_s,:]=\bI_r$;
\item Entries in $\bW, \bZ$ have values no larger than 1 in magnitude: $\max |w_{ij}|\leq 1$ and $\max |z_{ij}|\leq 1$;
\item $\bU=\bA[I_s,J_s] \in \real^{r\times r}$ is the nonsingular submatrix on the intersection of $\bC,\bR$;
\item The three matrices $\bC,\bR,\bU$ in the $\boxed{\text{boxed}}$ texts share the same notation as in the skeleton decomposition (Theorem~\ref{theorem:skeleton-decomposition}, p.~\pageref{theorem:skeleton-decomposition}), and indeed have the same meanings: together they form the skeleton decomposition of $\bA$: $\bA=\bC\bU^{-1}\bR$.
\end{itemize}
\end{theoremHigh}
The proof of the row ID is similar to that of the column ID. Suppose the column ID of $\bA^\top$ is given by $\bA^\top=\bC_0\bW_0$, where $\bC_0$ contains $r$ linearly independent columns of $\bA^\top$ (i.e., $r$ linearly independent rows of $\bA$). Letting $\bR=\bC_0^\top$ and $\bZ=\bW_0^\top$, the row ID is obtained as $\bA=\bZ\bR$.
For the two-sided ID, recall the skeleton decomposition (Theorem~\ref{theorem:skeleton-decomposition}, p.~\pageref{theorem:skeleton-decomposition}): when $\bU$ is the intersection of $\bC$ and $\bR$, it follows that $\bA=\bC\bU^{-1}\bR$. By the row ID, $\bZ=\bC\bU^{-1}$, which implies $\bC=\bZ\bU$. By the column ID, it then follows that $\bA=\bC\bW=\bZ\bU\bW$, which proves the existence of the two-sided ID.
\paragraph{Data storage} The data storage required by each ID is summarized as follows:
\begin{itemize}
\item \textit{Column ID.} It requires $mr$ and $(n-r)r$ floats to store $\bC$ and $\bW$, respectively, and $r$ integers to store the indices of the selected columns in $\bA$;
\item \textit{Row ID.} It requires $nr$ and $(m-r)r$ floats to store $\bR$ and $\bZ$ respectively, and $r$ integers to store the indices of the selected rows in $\bA$;
\item \textit{Two-Sided ID.} It requires $(m-r)r$, $(n-r)r$, and $r^2$ floats to store $\bZ$, $\bW$, and $\bU$, respectively. In addition, $2r$ integers are required to store the indices of the selected rows and columns in $\bA$.
\end{itemize}
\paragraph{Further reduction on the storage of the two-sided ID for a sparse matrix $\bA$}
Suppose the column ID of $\bA$ is $\bA=\bC\bW$, where $\bC=\bA[:,J_s]$, and suppose a good set of spanning row indices $I_s$ for $\bC$ can be found:
$$
\bA[I_s,:] = \bC[I_s,:]\bW.
$$
We observe that $\bC[I_s,:] = \bA[I_s,J_s]\in \real^{r\times r}$, which is nonsingular (since it has full rank $r$, in the sense of both row rank and column rank). It follows that
$$
\bW = (\bA[I_s,J_s])^{-1} \bA[I_s,:].
$$
Therefore, there is no need to store the matrix $\bW$ explicitly; we only need to store $\bA[I_s,:]$ and $(\bA[I_s,J_s])^{-1}$. Alternatively, when we can compute the inverse of $\bA[I_s,J_s]$ on the fly, it only requires $r$ integers to store $J_s$, from which $\bA[I_s,J_s]$ can be recovered from $\bA[I_s,:]$. Storing $\bA[I_s,:]$ is cheap if $\bA$ is sparse.
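As a quick numerical illustration of this storage trick, the following NumPy sketch (with hypothetical sizes and index sets, assuming an exactly rank-$r$ matrix, so that generic index sets suffice) recovers $\bW$ on the fly from $\bA[I_s,:]$ and $\bA[I_s,J_s]$:

```python
import numpy as np

# Hypothetical setup: an exactly rank-r matrix, generic skeleton index sets.
rng = np.random.default_rng(0)
m, n, r = 8, 10, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exact rank r

Js = np.arange(r)   # column skeleton indices (generic columns suffice here)
Is = np.arange(r)   # row indices such that A[Is, Js] is nonsingular

# W never needs to be stored: W = (A[Is, Js])^{-1} A[Is, :]
W = np.linalg.solve(A[np.ix_(Is, Js)], A[Is, :])
assert np.allclose(A, A[:, Js] @ W)        # column ID: A = C W
assert np.allclose(W[:, Js], np.eye(r))    # W contains an identity submatrix
```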
\section{Computing the Column ID via the CPQR}
The method used in the proof of the last section can be utilized to compute the ``optimal" column ID. However, any algorithm
that is guaranteed to find such an optimally-conditioned factorization must have combinatorial complexity. An inexpensive alternative is to require that $\bW$ be small in norm rather than bounding each entry in modulus by one.
Recall the column-pivoted QR decomposition (CPQR, Theorem~\ref{theorem:rank-revealing-qr-general}, p.~\pageref{theorem:rank-revealing-qr-general}) such that for matrix $\bA\in \real^{m\times n}$, the \textit{reduced} CPQR is given by:
$$
\bA\bP = \bQ_r
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\end{bmatrix}
=
\begin{bmatrix}
\bQ_r\bR_{11} & \bQ_r\bR_{12} \\
\end{bmatrix},
$$
where $\bR_{11} \in \real^{r\times r}$ is upper triangular, $\bR_{12} \in \real^{r\times (n-r)}$, $\bQ_r\in \real^{m\times r}$ has orthonormal columns, and $\bP$ is a permutation matrix. The complexity of the CPQR is $O(mnr)$ flops (Theorem~\ref{theorem:qr-reduced-rank-revealing}, p.~\pageref{theorem:qr-reduced-rank-revealing}). The permutation $\bA\bP$
moves $r$ linearly independent columns of $\bA$ into the first $r$ columns of $\bA\bP$:
$$
\bA\bP = \bA[:, (J_s, J_r)]=
\begin{bmatrix}
\bQ_r\bR_{11} & \bQ_r\bR_{12} \\
\end{bmatrix}.
$$
In the ``practical" CPQR via CGS\footnote{As a recap, CGS is short for the classical Gram-Schmidt process.} introduced in Section~\ref{section:practical-cpqr-cgs} (p.~\pageref{section:practical-cpqr-cgs}), $\bQ_r\bR_{11}$ contains $r$ linearly independent columns of $\bA$ with the largest norms, and $\bQ_r\bR_{12}$ is small in norm. This matters for our aim that the column ID be \textit{well-conditioned}, in the sense that the entries of $\bW$ are small in magnitude.
Therefore $\bQ_r \bR_{11}$ contains $r$ linearly independent columns of $\bA$. Let $\bC=\bQ_r\bR_{11}$ and solve the linear system $\bC\bE=\bQ_r\bR_{12}$; the column ID then follows:
$$
\bA = \bC \underbrace{[\bI_r, \bE]\bP^\top}_{\bW}.
$$
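The construction above can be sketched in NumPy. The pivoted Gram-Schmidt loop below is a simplified stand-in for a library CPQR, the helper name is ours, and the test matrix is an illustrative assumption with exact rank $r$:

```python
import numpy as np

def cpqr_column_id(A, r):
    """Column ID A = C @ W of an (exactly) rank-r matrix via a simple
    column-pivoted Gram-Schmidt (a sketch of the CPQR-based approach)."""
    m, n = A.shape
    Res = A.astype(float).copy()
    Q = np.zeros((m, r))
    piv = []
    for k in range(r):
        j = int(np.argmax(np.sum(Res**2, axis=0)))   # largest residual column
        piv.append(j)
        q = Res[:, j] / np.linalg.norm(Res[:, j])
        Q[:, k] = q
        Res -= np.outer(q, q @ Res)                  # deflate chosen direction
    C = A[:, piv]
    # With C = Q (Q^T C), the coefficients W = (Q^T C)^{-1} Q^T A satisfy
    # A = C W and W[:, piv] = I (well defined: columns of A lie in span(Q)).
    W = np.linalg.solve(Q.T @ C, Q.T @ A)
    return C, W, piv

rng = np.random.default_rng(1)
A = rng.standard_normal((9, 4)) @ rng.standard_normal((4, 12))  # rank 4
C, W, piv = cpqr_column_id(A, 4)
assert np.allclose(A, C @ W)
assert np.allclose(W[:, piv], np.eye(4))
```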
\paragraph{Calculation of the linear system}
The linear system $\bC\bE=\bQ_r\bR_{12}$ is well defined and need not be solved in a least-squares sense as we have shown in Section~\ref{section:application-ls-qr} (p.~\pageref{section:application-ls-qr}), since every column of $\bQ_r\bR_{12}$ is in the column space of $\bC$ (i.e., the column space of $\bA$). The solution can be obtained via the \textit{normal equation}\footnote{We shall briefly discuss this in Definition~\ref{definition:normal-equation-als} (p.~\pageref{definition:normal-equation-als}).}: $\bE=(\bC^\top\bC)^{-1} \bC^\top\bQ_r\bR_{12}$.
An extra cost for computing the column ID thus comes from the calculation of $\bE$. The calculation of $(\bC^\top\bC)^{-1}$ takes $\boxed{r^2(2m-1)+2r^3}$ flops, where $2r^3$ comes from the computation of the inverse of an $r\times r$ matrix (Theorem~\ref{theorem:inverse-by-lu2}, p.~\pageref{theorem:inverse-by-lu2}). Let's see what's left:
$$
\text{step 2: }\qquad
\bE=\underbrace{(\bC^\top\bC)^{-1} }_{r\times r}
\gap
\underbrace{ \bC^\top}_{r\times m}
\gap
\underbrace{\bQ_r}_{m\times r}
\gap
\underbrace{ \bR_{12}}_{r\times (n-r)}.
$$
Since $r<m$, computing $\bC^\top\bQ_r$ should be the next step so as to form a smaller $r\times r$ matrix, which takes $\boxed{(2m-1)r^2}$ flops:
$$
\text{step 3: }\qquad
\bE=\underbrace{(\bC^\top\bC)^{-1} }_{r\times r}
\gap
\underbrace{ \bC^\top\bQ_r}_{r\times r}
\gap
\underbrace{ \bR_{12}}_{r\times (n-r)}.
$$
When $r<(n-r)$, the calculation of $(\bC^\top\bC)^{-1} \bC^\top\bQ_r$ in the above equation should be performed first, which makes the remaining complexity $\boxed{(2r-1)r^2+ (2r-1)r(n-r)}$. Otherwise, $\bC^\top\bQ_r\bR_{12}$ should be computed first, which makes the remaining complexity $\boxed{2(2r-1)r(n-r)}$. To conclude, the final complexity of the normal equation approach is summarized as follows:
$$
\text{cost=}
\left\{
\begin{aligned}
&\{r^2(2m-1)+2r^3 + (2m-1)r^2\}+(2r-1)r^2+ (2r-1)r(n-r), \qquad &r&<(n-r); \\
&\{r^2(2m-1)+2r^3 + (2m-1)r^2\}+ 2(2r-1)r(n-r), \qquad &r&\geq (n-r).\\
\end{aligned}
\right.
$$
The normal equation is just one straightforward way to solve the above linear system. Iterative methods can also be employed, such as gradient descent (Section~\ref{section:als-gradie-descent}, p.~\pageref{section:als-gradie-descent}).
\subsubsection*{\textbf{Partial factorization via the CPQR}}
The complexity of computing the column ID of a matrix via the CPQR relies on the complexity of the CPQR decomposition. When the matrix has ``exact" rank $r$, the complexity of the CPQR is $O(mr^2)$ flops for a matrix $\bA\in \real^{m\times n}$. However, when the matrix $\bA$ is rank-deficient, the complexity becomes $O(mn^2)$. The partial CPQR via MGS\footnote{As a recap, MGS is short for the modified Gram-Schmidt process.} (Section~\ref{section:partial-cpqr-mgs}, p.~\pageref{section:partial-cpqr-mgs}) can be employed to attack this problem, where the partial CPQR decomposition is given by
$$
\bA\bP =
\bQ\bR=
\bQ
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bR_{22}
\end{bmatrix},
$$
where $\bR_{22}$ is small in norm. Such a partial factorization can either take a rank $k$ specified up front, or a tolerance $\delta$ such that we stop whenever $r_{kk}<\delta$ in the upper triangular matrix $\bR$ of the CPQR decomposition. This is similar to the low-rank column ID via the rank-revealing QR (RRQR) decomposition that we will introduce shortly. In the partial CPQR, however, the norm of $\bR_{22}$ is not guaranteed to be minimal, which is assured in the RRQR.
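A toy version of the tolerance-based stopping rule can be sketched as follows. The function name and test matrix are illustrative assumptions, and the residual-norm test stands in for the $r_{kk}<\delta$ test on the triangular factor:

```python
import numpy as np

def partial_cpqr_rank(A, delta):
    """A sketch of the tolerance-based stopping rule: pivoted Gram-Schmidt
    that stops once the largest remaining residual column norm drops below
    delta, returning the detected numerical rank."""
    Res = A.astype(float).copy()
    rank = 0
    while True:
        norms = np.linalg.norm(Res, axis=0)
        j = int(np.argmax(norms))
        if norms[j] < delta:
            return rank
        q = Res[:, j] / norms[j]
        Res -= np.outer(q, q @ Res)   # deflate the chosen direction
        rank += 1

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 30))
A += 1e-10 * rng.standard_normal((20, 30))   # tiny noise: numerical rank 5
assert partial_cpqr_rank(A, delta=1e-6) == 5
```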
\section{Low-Rank Column ID via the RRQR}
An approximate rank $\gamma$ ID of a matrix $\bA\in \real^{m\times n}$ is the approximate
factorization:
$$
\begin{aligned}
\underset{m\times n}{\bA} &=& \underset{m\times \gamma}{\bC} &\gap\underset{ \gamma \times n}{ \bW} ;\\
\bA \bP &=& \bC &\gap[\bI, \bE] ,\\
\end{aligned}
$$
where the partial column skeleton $\bC\in \real^{m\times \gamma}$
is given by a subset of the columns of $\bA$, $\gamma$ is known as the \textit{numerical rank}, the entries of $\bE$ are known as the \textit{expansion coefficients} as we have shown previously, and $\bW$
is well-conditioned in a sense that we will make precise shortly.
The low-rank approximation of column ID has been studied extensively in the context of rank-revealing QR decomposition
\citep{voronin2017efficient, martinsson2019randomized, martinsson2002randomized, halko2011finding}.
By the rank-revealing QR (RRQR) (Equation~\eqref{equation:rankr-reval-qr}, p.~\pageref{equation:rankr-reval-qr}\index{Rank-revealing QR}), there exists a permutation $\bP$ such that the linearly independent columns of $\bA$ can be permuted to the left, and
$$
\begin{aligned}
\bA\bP =
\bQ\bR
&=
\begin{bmatrix}
\bQ_1 & \bQ_2
\end{bmatrix}
\begin{bmatrix}
\bL & \bM \\
\bzero & \bN
\end{bmatrix}\\
&=
\begin{bmatrix}
\bQ_1\bL & \bQ_1\bM+\bQ_2\bN
\end{bmatrix}\\
&=
\bQ_1\bL
\begin{bmatrix}
\bI_{\gamma} & \bY
\end{bmatrix},
\end{aligned}
$$
where $\bN \in \real^{(n-\gamma)\times (n-\gamma)}$ and $||\bN||$ is small. Here, $\bY$ is defined to be the solution of the linear system $(\bQ_1\bL) \bY = \bQ_1\bM+\bQ_2\bN$.
We observe that $\bQ_1 \bL$ is the first $\gamma$ columns of $\bA\bP$.
Let $\bC=\bQ_1\bL$, $\bE=\bY$, then the low-rank column ID approximation can be found by solving the linear system
\begin{equation}\label{equation:rrqr-columnid-linear}
(\bQ_1\bL) \bE = (\bQ_1\bM+\bQ_2\bN).
\end{equation}
Since $\bN$ is small in norm, the system can be approximated by the solution of
$$
\begin{aligned}
\underset{\gamma\times \gamma}{\bL} & &\underset{\gamma \times (n-\gamma)}{\bE}= &\gap \underset{\gamma \times (n-\gamma) }{\bM} .\\
\end{aligned}
$$
The problem is well defined since $\bL$ is nonsingular (upper triangular with nonzero diagonal entries). If $\bL$ were singular, then
one can show that $\bA$ must necessarily have rank $\gamma^\prime$
less than $\gamma$, and the bottom $\gamma-\gamma^\prime$
rows in the above linear system would consist of all zeros, so a solution exists in this case as well.
The approximation error of the column ID obtained via RRQR with pivoting is the same as that of
the RRQR:
$$
||\bA-\bC\bW|| = ||\bQ_2\bN||,
$$
where $\bW=[\bI_{\gamma}, \bE]\bP^\top$.
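A sketch of this error behavior: on a matrix with rapidly decaying singular values (the test matrix and decay rate are illustrative assumptions), the error of a rank-$\gamma$ column ID is bounded below by $\sigma_{\gamma+1}$ and stays within a modest factor of it. The pivoted Gram-Schmidt selection below is a simplified stand-in for RRQR pivoting:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, gamma = 40, 50, 5
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(m)          # rapidly decaying singular values
A = (U * s) @ V[:, :m].T           # A = U diag(s) V^T, shape m x n

# Select gamma columns by pivoted Gram-Schmidt (stand-in for RRQR pivoting)
Res = A.copy()
Js = []
for _ in range(gamma):
    j = int(np.argmax(np.sum(Res**2, axis=0)))
    Js.append(j)
    q = Res[:, j] / np.linalg.norm(Res[:, j])
    Res -= np.outer(q, q @ Res)

C = A[:, Js]
W = np.linalg.lstsq(C, A, rcond=None)[0]   # expansion coefficients
err = np.linalg.norm(A - C @ W, 2)
# Any rank-gamma approximation has error >= sigma_{gamma+1}; here it stays close.
assert 0.9 * s[gamma] < err < 1000.0 * s[gamma]
```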
\begin{remark}[Condition of the Linear System]
Unfortunately, $\bL$ in the linear Equation~\eqref{equation:rrqr-columnid-linear} is typically quite ill-conditioned. However, the linear system in Equation~\eqref{equation:rrqr-columnid-linear} still has a solution $\bE$ whose entries are of moderate size. Informally, the directions
in which $\bL$ and $\bM$ point are ``lined up" \citep{cheng2005compression}. We shall not give the details.
\end{remark}
\section{Computing the ID via Randomized Algorithm}\label{section:randomi-id}
Suppose matrix $\bA$ admits the rank decomposition:
$$
\underset{m\times n}{\bA} = \underset{m\times r}{\bD}\gap \underset{r\times n}{\bF}.
$$
Upon finding the rank decomposition, suppose further that the row ID of $\bD$ is obtained by
$$
\text{row ID of $\bD$: }\qquad \underset{m\times r}{\bD} = \underset{m\times r}{\bZ}\gap \underset{r\times r}{\bR_0}
=\bZ\bD[I_s,:].
$$
Similar to the proof of the column ID in Section~\ref{section:proof-column-id}, we observe that $\bZ \bA[I_s,:]$ is automatically a row ID of $\bA$: $\bA=\bZ\bA[I_s,:]$. This can be shown as follows:
\begin{equation}\label{equation:row-id-sub-d}
\begin{aligned}
\bZ\bA[I_s,:] &= \bZ (\bD[I_s,:]\bF) &\qquad &\text{(since $\bA=\bD\bF$)}\\
&=\bD \bF &\qquad &\text{(since $\bD=\bZ\bD[I_s,:]$)}\\
&=\bA.
\end{aligned}
\end{equation}
We notice that $\bD$ spans the same column space as $\bA$: $\cspace(\bD)=\cspace(\bA)$.
The finding above tells us that as long as we find an $m\times r$ matrix $\bD$ spanning the same column space as $\bA$, a row ID can be applied to $\bD$ to find the row ID of $\bA$. This reveals the randomized algorithm for computing the row ID of $\bA$. To show this, we need a few facts that will be useful.
\begin{lemma}[Subspace of $\bA^\top \bA$ and $\bA\bA^\top$]\label{lemma:rank-of-ttt}
For matrix $\bA\in \real^{m\times n}$, we have
\begin{itemize}
\item The column space of $\bA^\top \bA$ is equal to the column space of $\bA^\top$ (i.e., row space of $\bA$): $\cspace(\bA^\top\bA)=\cspace(\bA^\top)$;
\item The column space of $\bA\bA^\top$ is equal to the column space of $\bA$: $\cspace(\bA\bA^\top)=\cspace(\bA)$.
\end{itemize}
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:rank-of-ttt}]
Let $\bx\in \nspace(\bA)$, we have
$$
\bA\bx = \bzero \leadto \bA^\top\bA \bx =\bzero,
$$
i.e., $\bx\in \nspace(\bA) \leadtosmall \bx \in \nspace(\bA^\top \bA)$, therefore $\nspace(\bA) \subseteq \nspace(\bA^\top\bA)$.
Further, let $\bx \in \nspace(\bA^\top\bA)$, we have
$$
\bA^\top \bA\bx = \bzero\leadtosmall \bx^\top \bA^\top \bA\bx = 0\leadtosmall ||\bA\bx||^2 = 0 \leadtosmall \bA\bx=\bzero,
$$
i.e., $\bx\in \nspace(\bA^\top \bA) \leadtosmall \bx\in \nspace(\bA)$, therefore $\nspace(\bA^\top\bA) \subseteq\nspace(\bA) $.
As a result, by ``sandwiching", it follows that
$$\nspace(\bA) = \nspace(\bA^\top\bA).
$$
By the fundamental theorem of linear algebra in Appendix~\ref{appendix:fundamental-rank-nullity} (p.~\pageref{appendix:fundamental-rank-nullity}),
$$
\cspace(\bA^\top)=\cspace(\bA^\top\bA).
$$
The second half of the lemma follows by applying the same argument to $\bA^\top$.
\end{proof}
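The lemma can also be checked numerically on a small example (the rank-3 test matrix here is an illustrative assumption); equality of the column spaces is verified by noting that stacking $\bA^\top\bA$ next to $\bA^\top$ adds no new directions:

```python
import numpy as np

# Numerical check of the lemma on a hypothetical rank-3 matrix.
rng = np.random.default_rng(3)
A = rng.standard_normal((7, 3)) @ rng.standard_normal((3, 6))  # rank 3

rank = np.linalg.matrix_rank
assert rank(A.T @ A) == rank(A.T) == 3
assert rank(A @ A.T) == rank(A) == 3
# cspace(A^T A) = cspace(A^T): stacking adds no new directions
assert rank(np.hstack([A.T, A.T @ A])) == rank(A.T)
assert rank(np.hstack([A, A @ A.T])) == rank(A)
```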
The above lemma tells us that if a matrix $\bA\in \real^{m\times n}$ has rank $r$, and a matrix $\bB\in \real^{m\times k}$ with $k>r$ satisfies $\cspace(\bB)=\cspace(\bA)$, then a moment of reflection reveals that $\bA\bA^\top\bB$ spans the same column space as $\bA$:
\begin{equation}\label{equation:random-row-id-column-space0}
\cspace(\bA\bA^\top\bB)=\cspace(\bA), \qquad \text{if }\gap \cspace(\bB)=\cspace(\bA).
\end{equation}
Further, by Lemma~\ref{lemma:column-basis-from-row-basis} (p.~\pageref{lemma:column-basis-from-row-basis}), for any matrix $\bA\in \real^{m\times n}$, suppose that $\{\bg_1, \bg_2, \ldots, \bg_r\}$ is a set of vectors in $\real^n$ which forms a basis for the row space, then $\{\bA\bg_1, \bA\bg_2, \ldots, \bA\bg_r\}$ is a basis for the column space of $\bA$:
\begin{equation}\label{equation:random-row-id-column-space-eq2}
\cspace(\bA\bG) =\cspace(\bA).
\end{equation}
Then, the matrix $\bB$ in Equation~\eqref{equation:random-row-id-column-space0} can be constructed by $\bA\bG$, where the columns of $\bG$ contain a row basis of $\bA$:
\begin{equation}\label{equation:random-row-id-column-space2}
\cspace(\bA\bA^\top \bA\bG) =\cspace(\bA).
\end{equation}
\begin{algorithm}[h]
\caption{A \textcolor{blue}{Randomized} Method to Compute the \textcolor{blue}{Row} ID}
\label{alg:row-id-randomize1}
\begin{algorithmic}[1]
\Require
Rank-$r$ matrix $\bA$ with size $m\times n $;
\State Decide the over-sampling parameter $k$ (e.g., $k=10$)$\rightarrow$ let $z=r+k$;
\State Decide the iteration number: $\eta$ (e.g., $\eta=0,1$ or $2$);
\State Generate $r+k$ Gaussian random vectors in $\real^n$ into columns of matrix $\bG\in \real^{n\times (r+k)}$;\gap\Comment{i.e., probably contain the row basis of $\bA$}
\State Initialize $\bD=\bA\bG\in \real^{m\times (r+k)}$; \Comment{i.e., probably $\cspace(\bD)=\cspace(\bA)$, $(2n-1)mz$ flops}
\For{$i=1$ to $\eta$}
\State $\bD = \bA\bA^\top\bD$; \Comment{$(2m-1)nz+(2n-1)mz$ flops}
\EndFor
\State Calc. the row ID of small matrix $\bD$: $\bD=\bZ\bR_0=\bZ\bD[I_s,:]$;\Comment{$O(mz^2)$ flops by CPQR}
\State Output row ID of $\bA$: $\bA=\bZ\bA[I_s,:]$;
\end{algorithmic}
\end{algorithm}
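Algorithm~\ref{alg:row-id-randomize1} can be sketched in NumPy as follows, assuming an exactly rank-$r$ input. The helper name is ours, and the pivoted Gram-Schmidt row selection is a simplified stand-in for the CPQR-based row ID of $\bD$:

```python
import numpy as np

def randomized_row_id(A, r, k=10, eta=1, rng=None):
    """Sketch of the randomized row ID: returns Z, Is with
    A = Z @ A[Is, :] for an exactly rank-r matrix A."""
    rng = rng if rng is not None else np.random.default_rng()
    m, n = A.shape
    G = rng.standard_normal((n, r + k))   # random row-space probes
    D = A @ G                             # cspace(D) = cspace(A) w.h.p.
    for _ in range(eta):
        D = A @ (A.T @ D)                 # shrink small singular values
    # Row ID of the small matrix D: pick r rows via pivoting on D^T
    Res = D.T.copy()
    Is = []
    for _ in range(r):
        j = int(np.argmax(np.sum(Res**2, axis=0)))
        Is.append(j)
        q = Res[:, j] / np.linalg.norm(Res[:, j])
        Res -= np.outer(q, q @ Res)
    # Z solves Z @ D[Is, :] = D (exact here since rank(D) = r)
    Z = np.linalg.lstsq(D[Is, :].T, D.T, rcond=None)[0].T
    return Z, Is

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 40))  # rank 5
Z, Is = randomized_row_id(A, r=5, rng=rng)
assert np.allclose(A, Z @ A[Is, :])
```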
Normally, we could simply stop at $\bA\bG$ since $\cspace(\bA\bG) =\cspace(\bA)$, and the row ID of $\bA\bG$ would reveal the row ID of $\bA$ by Equation~\eqref{equation:row-id-sub-d}. Computing the row ID of $\bA\bG$ is a relatively simple task compared to that of $\bA$ directly, since $\bA\bG\in \real^{m\times (r+k)}$ and $r+k\ll n$.
However, in some situations, the matrix $\bA$ is not exactly rank-$r$, i.e., some singular values $\sigma_k$ (with $k>r$) of $\bA$ may be small enough to be regarded as zero (truncated). We will introduce the SVD in Section~\ref{section:SVD} (p.~\pageref{section:SVD}), by which any matrix $\bA$ admits the factorization $\bA=\bU\bSigma\bV^\top$, where $\bU,\bV$ are orthogonal and, roughly speaking, $\bSigma$ is a diagonal matrix containing the singular values, with the number of nonzero singular values being the rank of $\bA$. The columns of $\bU$ contain a column basis of $\bA$ and the columns of $\bV$ contain a row basis of $\bA$ (Lemma~\ref{section:four-space-svd}, p.~\pageref{section:four-space-svd}). Therefore, $\bA\bA^\top \bA\bG$ results in
$$
\bA\bA^\top \bA\bG = (\bU\bSigma\bV^\top) (\bV\bSigma\bU^\top) (\bU\bSigma\bV^\top)\bG=\bU\bSigma^3\bV^\top\bG,
$$
where $\bSigma^3$ will \textbf{shrink the small singular values towards 0}. This is reasonable in that they do not count towards the \textit{numerical rank}. For matrices whose singular values decay slowly, we should consider applying the process two or three times:
$$
\bA\bA^\top(\bA\bA^\top \bA\bG ) =\bU\bSigma^5\bV^\top\bG.
$$
The procedure to compute the row ID of a matrix is shown in Algorithm~\ref{alg:row-id-randomize1}, where a parameter $k$ is used to ensure that, with high probability, $\bG=[\bg_1, \ldots,\bg_r, \bg_{r+1}, \ldots, \bg_{r+k}]$ contains a row basis of $\bA$ (the choice $k=10$ is often good), and each $\bg_i$ is generated as a \textit{random Gaussian vector}. The iteration parameter $\eta$ is usually picked as 1 or 2 to improve accuracy (to shrink the small singular values). The complexity of the algorithm is shown in the comment of each step, which makes it
\begin{equation}\label{equation:rnadom-rowid-complexity}
\boxed{O(mn(r+k))} \qquad \text{ flops.}
\end{equation}
\begin{remark}[A Word on the Source of Row Basis]\label{remark:source-row-basis}
In step 4 of Algorithm~\ref{alg:row-id-randomize1}, we want $\bD$ to contain linearly independent vectors that span the column space of $\bA$, i.e., we want to find a column basis. We have just shown one way that transforms the row basis matrix $\bG$ into the column basis matrix $\bD=\bA\bG$, and this is known as the \textit{random projection method}.
The matrix $\bD$ formed by these columns is expected to capture $\bA$ well,
in the sense that the basis of the range of $\bD$ covers the range of $\bA$ well. The probability of failure is negligible.
However, a further question can be posed: what if $\bG$ does not contain enough row basis vectors?
The column basis matrix $\bD$ can be chosen in different ways: by \textit{subsampling of the input matrix} directly (in the sense of a row basis or a column basis, as we will show in the next paragraph). The orthonormal basis
consisting of $r$ linearly independent vectors can then be obtained using exact methods, since the size of $\bD$ or $\bG$ is very small. These techniques are relatively insensitive to the quality of randomness and produce highly accurate results.
\end{remark}
\paragraph{Subsampling method for constructing basis}
We have shown in the Gram-Schmidt process (Section~\ref{section:project-onto-a-vector}, p.~\pageref{section:project-onto-a-vector}) that the projection of a vector $\ba$ onto $\bb$ results in the projection $\hat{\ba} = \frac{\ba^\top\bb}{\bb^\top\bb}\bb$, which is in the direction of $\bb$ since it is a multiple of $\bb$. The component of $\ba$ orthogonal to $\bb$ thus can be obtained by $\ba-\hat{\ba} = \ba-\frac{\ba^\top\bb}{\bb^\top\bb}\bb$, which is $\bzero$ if $\ba$ and $\bb$ are linearly dependent (i.e., in the same direction). When $\ba,\bb\in \real^n$, the complexity is $O(n)$ flops. Therefore, when we generate the vectors $\bg_i$ in Algorithm~\ref{alg:row-id-randomize1}, we can project each of them onto the rows of $\bA$ to check whether it is in the row space of $\bA$, and this costs $O(mn)$ flops. There are $\sim (r+k)$ such generations, which makes it $\boxed{O(mn(r+k))}$ flops to ensure $\{\bg_1, \bg_2, \ldots, \bg_{r+k}\}$ can span the row space of $\bA$.
This is the same order of cost compared to the total cost of the algorithm in Equation~\eqref{equation:rnadom-rowid-complexity} when $n\leq m$.
Analogously, we can directly generate the columns of $\bD=[\bd_1, \bd_2, \ldots, \bd_{r+k}]$ as Gaussian random vectors and check whether each $\bd_i$ is in the column space of $\bA$. This costs $\boxed{O(mn(r+k))}$ flops, which again is equal to the cost of the algorithm in Equation~\eqref{equation:rnadom-rowid-complexity}. So the complexity of the algorithm would not blow up.
\begin{algorithm}[h]
\caption{A \textcolor{blue}{Randomized} Method to Compute the \textcolor{blue}{Column} ID}
\label{alg:column-id-randomize1}
\begin{algorithmic}[1]
\Require
Rank-$r$ matrix $\bA$ with size $m\times n $;
\State Decide the over-sampling parameter $k$ (e.g., $k=10$)$\rightarrow$ let $z=r+k$;
\State Decide the iteration number: $\eta$ (e.g., $\eta=0,1$ or $2$);
\State Generate $r+k$ Gaussian random vectors in $\textcolor{blue}{\real^m}$ into columns of matrix $\bG\in \real^{\textcolor{blue}{m}\times (r+k)}$;\Comment{i.e., probably contain the column basis of $\bA$}
\State Initialize $\bF=\textcolor{blue}{\bA^\top}\bG\in \real^{n\times (r+k)}$; \Comment{i.e., probably $\cspace(\bF)=\cspace(\bA^\top)$, $\textcolor{blue}{(2m-1)nz}$ flops}
\For{$i=1$ to $\eta$}
\State $\bF = \textcolor{blue}{\bA^\top\bA}\bF$; \Comment{$(2m-1)nz+(2n-1)mz$ flops}
\EndFor
\State Calc. the \textcolor{blue}{column} ID of small $\bF^\top$: $\bF^\top=\bC_0\bW=\bF^\top[:,J_s]\bW$;\Comment{$O\textcolor{blue}{(nz^2)}$ flops by CPQR}
\State Output column ID of $\bA$: $\bA=\bA[:,J_s]\bW$;
\end{algorithmic}
\end{algorithm}
\paragraph{Compute the column ID by randomized method}
Analogously, the above procedure on row ID can be easily applied to compute the column ID of matrix $\bA\in \real^{m\times n}$ by calculating the row ID of $\bA^\top$. Or
from the proof of the column ID. If $\bA$ admits a rank decomposition (Theorem~\ref{theorem:rank-decomposition}, p.~\pageref{theorem:rank-decomposition})
$$
\underset{m\times n}{\bA} = \underset{m\times r}{\bD}\gap \underset{r\times n}{\bF},
$$
where $\bD,\bF$ have full column rank $r$ and full row rank $r$ respectively.
Upon finding the rank decomposition, suppose further that the column ID of $\bF$ is obtained by
$$
\text{column ID of $\bF$: }\qquad \underset{r\times n}{\bF} = \underset{r\times r}{\bC_0}\gap \underset{r\times n}{\bW}
=\bF[:,J_s]\bW.
$$
We observe that $ \bA[:,J_s]\bW$ is automatically a column ID of $\bA$: $\bA=\bA[:,J_s]\bW$. This can be shown as follows:
\begin{equation}\label{equation:column-id-sub-d}
\begin{aligned}
\bA[:,J_s]\bW &= (\bD\bF[:,J_s])\bW &\qquad &\text{(since $\bA=\bD\bF$)}\\
&=\bD \bF &\qquad &\text{(since $\bF=\bF[:,J_s]\bW$)}\\
&=\bA.
\end{aligned}
\end{equation}
We notice that $\bF$ spans the same row space as $\bA$: $\cspace(\bF^\top)=\cspace(\bA^\top)$.
The finding above tells us that as long as we find an $r\times n$ matrix $\bF$ spanning the same row space as $\bA$, a column ID can be applied to $\bF$ to find the column ID of $\bA$. This reveals the randomized algorithm for computing the column ID of $\bA$. This is actually the second part of the proof of the column ID in Section~\ref{section:proof-column-id}.
We may further notice that if the columns of $\bG$ contain a column basis of $\bA$, then $\bA^\top\bG$ will contain a row basis of $\bA$, by Lemma~\ref{lemma:column-basis-from-row-basis} (p.~\pageref{lemma:column-basis-from-row-basis}) again. Moreover, $\bA^\top\bA\bA^\top\bG$ will span the row space of $\bA$, where the factor $\bA^\top\bA$ on the left helps shrink the small singular values:
$$
\bA^\top\bA\bA^\top\bG = (\bV\bSigma\bU^\top) (\bU\bSigma\bV^\top)(\bV\bSigma\bU^\top)\bG=\bV\bSigma^3\bU^\top\bG.
$$
The randomized procedure to compute the column ID is formulated in Algorithm~\ref{alg:column-id-randomize1}, where the difference compared to the row ID is marked in \textcolor{blue}{blue} text.
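Mirroring the row case, the column variant can be sketched in NumPy as follows, again assuming an exactly rank-$r$ input; the helper name is ours, and the pivoted Gram-Schmidt column selection stands in for a CPQR:

```python
import numpy as np

def randomized_column_id(A, r, k=10, eta=1, rng=None):
    """Sketch of the randomized column ID: returns Js, W with
    A = A[:, Js] @ W for an exactly rank-r matrix A."""
    rng = rng if rng is not None else np.random.default_rng()
    m, n = A.shape
    G = rng.standard_normal((m, r + k))   # random column-space probes
    F = A.T @ G                           # cspace(F) = cspace(A^T) w.h.p.
    for _ in range(eta):
        F = A.T @ (A @ F)                 # shrink small singular values
    S = F.T                               # (r+k) x n; its column ID selects columns of A
    Res = S.copy()
    Js = []
    for _ in range(r):
        j = int(np.argmax(np.sum(Res**2, axis=0)))
        Js.append(j)
        q = Res[:, j] / np.linalg.norm(Res[:, j])
        Res -= np.outer(q, q @ Res)
    # W solves S[:, Js] @ W = S (exact here since rank(S) = r)
    W = np.linalg.lstsq(S[:, Js], S, rcond=None)[0]
    return Js, W

rng = np.random.default_rng(5)
A = rng.standard_normal((25, 6)) @ rng.standard_normal((6, 35))  # rank 6
Js, W = randomized_column_id(A, r=6, rng=rng)
assert np.allclose(A, A[:, Js] @ W)
```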
\section*{Introduction and Background}
\addcontentsline{toc}{section}{Introduction and Background}
Matrix decomposition has become a core technology in statistics \citep{banerjee2014linear, gentle1998numerical}, optimization \citep{gill2021numerical}, recommender systems \citep{symeonidis2016matrix}, and machine learning \citep{goodfellow2016deep, bishop2006pattern}, largely due to the development of the back propagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in numerical linear algebra and matrix analysis in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we clearly realize our inability to cover all the useful and interesting results concerning matrix decomposition, given the paucity of scope of this discussion, e.g., a separate analysis of the Euclidean space, Hermitian space, and Hilbert space. We refer the reader to the literature in the field of linear algebra for a more detailed introduction to the related fields. Some excellent examples include \citep{householder2006principles, trefethen1997numerical, strang1993introduction, stewart2000decompositional, gentle2007matrix, higham2002accuracy, quarteroni2010numerical, golub2013matrix, beck2017first, gallier2017fundamentals, boyd2018introduction, strang2019linear, van2020advanced, strang2021every}. Moreover,
this survey will cover the calculation and complexity of the decompositional methods with details on the cost reduction. For a compact text of only rigorous proof, one can refer to \citep{lu2022matrix}.
A matrix decomposition is a way of reducing a complex matrix into constituent parts that are in simpler forms.
The underlying principle of the decompositional approach to matrix computation is that it is not the
business of matrix algorithmists to solve particular problems; rather,
the approach can simplify complex matrix operations, which can then be performed on the decomposed parts rather than on the original matrix itself. At a general level, a matrix decomposition task on a matrix $\bA$ can be cast as
\begin{itemize}
\item $\bA=\bQ\bU$: where $\bQ$ is an orthogonal matrix that contains the same column space as $\bA$ and $\bU$ is a relatively simple and sparse matrix to reconstruct $\bA$.
\item $\bA=\bQ\bT\bQ^\top$: where $\bQ$ is orthogonal such that $\bA$ and $\bT$ are \textit{similar matrices} that share the same properties, such as the same eigenvalues and sparsity. Moreover, working on $\bT$ is an easier task compared to working on $\bA$.
\item $\bA=\bU\bT\bV$: where $\bU, \bV$ are orthogonal matrices such that the columns of $\bU$ and the rows of $\bV$ constitute an orthonormal basis of the column space and row space of $\bA$ respectively.
\item $\underset{m\times n}{\bA}=\underset{m\times r}{\bB}\gap \underset{r\times n}{\bC}$: where $\bB,\bC$ are full rank matrices that can reduce the memory storage of $\bA$. In practice, a low-rank approximation $\underset{m\times n}{\bA}\approx \underset{m\times k}{\bD}\gap \underset{k\times n}{\bF}$ can be employed where $k<r$ is called the \textit{numerical rank} of the matrix such that the matrix can be stored much more inexpensively and can be multiplied rapidly with
vectors or other matrices.
An approximation of the form $\bA\approx\bD\bF$ is useful for storing the matrix $\bA$ more frugally (we can store $\bD$ and $\bF$ using $k(m+n)$ floats, as opposed to $mn$ floats for storing $\bA$), for efficiently computing a matrix-vector product $\bb = \bA\bx$ (via $\bc = \bF\bx$ and $\bb = \bD\bc$), for data interpretation, and much more.
\item A matrix decomposition, which though is usually expensive to
compute, can be reused to solve new problems involving the
original matrix in different scenarios, e.g., as long as the factorization of $\bA$ is obtained, it can be reused to solve the set of linear systems $\{\bb_1=\bA\bx_1, \bb_2=\bA\bx_2, \ldots, \bb_k=\bA\bx_k\}$.
\item More generally, a matrix decomposition can help to understand the internal meaning of what happens when multiplied by the matrix such that each constituent has a geometrical transformation (see Section~\ref{section:coordinate-transformation}, p.~\pageref{section:coordinate-transformation}).
\end{itemize}
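The storage and mat-vec savings of the low-rank form mentioned above can be made concrete with a small NumPy sketch (the sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, k = 1000, 800, 10
D = rng.standard_normal((m, k))
F = rng.standard_normal((k, n))
A = D @ F                      # rank-k matrix, stored densely for comparison
x = rng.standard_normal(n)

# Factored storage: k(m+n) floats instead of mn
assert k * (m + n) < m * n
# Factored mat-vec b = D (F x): ~2k(m+n) flops instead of ~2mn
b = D @ (F @ x)
assert np.allclose(b, A @ x)
```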
The matrix decomposition algorithms fall into many categories. Nonetheless, six categories hold the center, and we sketch them here:
\begin{enumerate}
\item Factorizations arising from Gaussian elimination, including the LU decomposition and its positive definite alternative, the Cholesky decomposition;
\item Factorizations obtained when orthogonalizing the columns or the rows of a matrix such that the data can be explained well in an orthonormal basis;
\item Factorizations where the matrices are skeletoned such that a subset of the columns or the rows can represent the whole data with a small reconstruction error, while the sparsity and nonnegativity of the matrices are kept;
\item Reduction to Hessenberg, tridiagonal, or bidiagonal form, as a result of which properties of the matrices, such as rank and eigenvalues, can be explored in these reduced forms;
\item Factorizations resulting from the computation of the eigenvalues of matrices;
\item Finally, the rest can be cast as a special kind of decomposition that involves optimization methods or high-level ideas, where the category may not be straightforward to determine.
\end{enumerate}
The world pictures of decomposition in Figures~\ref{fig:matrix-decom-world-picture} and \ref{fig:matrix-decom-world-picture2} connect the decomposition methods by their internal relations and also separate different methods by their criteria or prerequisites. Readers will get more information about the two pictures after reading the text.
\paragraph{Notation and preliminaries} In the rest of this section, we will introduce and recap some basic knowledge of linear algebra. Other important concepts are defined and discussed as needed for clarity.
The readers with enough background in matrix analysis can skip this section.
In the text, we simplify matters by considering only real matrices. Without special mention, the eigenvalues of the discussed matrices are also real. We also assume throughout that $||\cdot||=||\cdot||_2$.
In all cases, scalars will be denoted in a non-bold font possibly with subscripts (e.g., $a$, $\alpha$, $\alpha_i$). We will use \textbf{boldface} lower case letters possibly with subscripts to denote vectors (e.g., $\bmu$, $\bx$, $\bx_n$, $\bz$) and
\textbf{boldface} upper case letters possibly with subscripts to denote matrices (e.g., $\bA$, $\bL_j$). The $i$-th element of a vector $\bz$ will be denoted by $\bz_i$ in bold font (or $z_i$ in the non-bold font).
The $n$-th element in a sequence is denoted by a superscript in parentheses, e.g., $\bA^{(n)}$ denotes the $n$-th matrix in a sequence, and $\ba^{(k)}$ denotes the $k$-th vector in a sequence.
Subarrays are formed when a subset of the indices is fixed.
\textit{The $i$-th row and $j$-th column value of matrix $\bA$ (entry ($i,j$) of $\bA$) will be denoted by $\bA_{ij}$ if block submatrices are involved, or by $a_{ij}$ alternatively if block submatrices are not involved}. Furthermore, it will be helpful to utilize the \textbf{Matlab-style notation}, the $i$-th row to the $j$-th row and the $k$-th column to the $m$-th column submatrix of the matrix $\bA$ will be denoted by $\bA_{i:j,k:m}$. A colon is used to indicate all elements of a dimension, e.g., $\bA_{:,k:m}$ denotes the $k$-th column to the $m$-th column of the matrix $\bA$, and $\bA_{:,k}$ denote the $k$-th column of $\bA$. Alternatively, the $k$-th column of $\bA$ may be denoted more compactly by $\ba_k$.
When the index is not continuous, given ordered subindex sets $I$ and $J$, $\bA[I, J]$ denotes the submatrix of $\bA$ obtained by extracting the rows and columns of $\bA$ indexed by $I$ and $J$, respectively; and $\bA[:, J]$ denotes the submatrix of $\bA$ obtained by extracting the columns of $\bA$ indexed by $J$.
\begin{definition}[Matlab Notation]\label{definition:matlabnotation}
Suppose $\bA\in \real^{m\times n}$, and $I=[i_1, i_2, \ldots, i_k]$ and $J=[j_1, j_2, \ldots, j_l]$ are two index vectors, then $\bA[I,J]$ denotes the $k\times l$ submatrix
$$
\bA[I,J]=
\begin{bmatrix}
\bA_{i_1,j_1} & \bA_{i_1,j_2} &\ldots & \bA_{i_1,j_l}\\
\bA_{i_2,j_1} & \bA_{i_2,j_2} &\ldots & \bA_{i_2,j_l}\\
\vdots & \vdots&\ddots & \vdots\\
\bA_{i_k,j_1} & \bA_{i_k,j_2} &\ldots & \bA_{i_k,j_l}\\
\end{bmatrix}.
$$
Similarly, $\bA[I,:]$ denotes the $k\times n$ submatrix, and $\bA[:,J]$ denotes the $m\times l$ submatrix analogously.
We note that it does not matter whether the index vectors $I,J$ are row vectors or column vectors; what matters is which axis they index (rows of $\bA$ or columns of $\bA$). We should also notice the range of the indices:
$$
\left\{
\begin{aligned}
1&\leq \min(I) \leq \max(I)\leq m;\\
1&\leq \min(J) \leq \max(J)\leq n.
\end{aligned}
\right.
$$
\end{definition}
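The Matlab-style index sets above have a direct analogue in NumPy (a minimal illustrative sketch, not part of the text's notation; note that NumPy is 0-based while the notation above is 1-based, and `np.ix_` plays the role of $\bA[I,J]$):

```python
import numpy as np

# A 3x4 matrix; NumPy indices are 0-based, unlike the 1-based text notation.
A = np.arange(12).reshape(3, 4)          # [[0,1,2,3],[4,5,6,7],[8,9,10,11]]

# A[I, J] for index sets I, J: np.ix_ builds the cross product of indices.
I, J = [0, 2], [1, 3]
sub = A[np.ix_(I, J)]                    # rows 0 and 2, columns 1 and 3
assert sub.tolist() == [[1, 3], [9, 11]]

# Contiguous Matlab-style slices A_{i:j, k:m} map to A[i-1:j, k-1:m].
assert A[0:2, 1:3].tolist() == [[1, 2], [5, 6]]

# A colon selects an entire axis: A[:, 1] is the second column.
assert A[:, 1].tolist() == [1, 5, 9]
```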
In all cases, vectors are written as columns rather than rows. A row vector will be denoted by the transpose of a column vector, such as $\ba^\top$. A specific column vector with values is separated by the symbol $``;"$, e.g., $\bx=[1;2;3]$ is a column vector in $\real^3$. Similarly, a specific row vector with values is separated by the symbol $``,"$, e.g., $\by=[1,2,3]$ is a row vector with 3 values. Further, a column vector can be denoted by the transpose of a row vector, e.g., $\by=[1,2,3]^\top$ is a column vector.
The transpose of a matrix $\bA$ will be denoted by $\bA^\top$ and its inverse will be denoted by $\bA^{-1}$. We will denote the $p \times p$ identity matrix by $\bI_p$. A vector or matrix of all zeros will be denoted by a \textbf{boldface} zero $\bzero$, whose size should be clear from context, or we denote by $\bzero_p$ the vector of all zeros with $p$ entries.
\begin{definition}[Eigenvalue]
Given any vector space $E$ and any linear map $\bA: E \rightarrow E$, a scalar $\lambda \in K$ is called an eigenvalue, or proper value, or characteristic value of $\bA$ if there is some nonzero vector $\bu \in E$ such that
\begin{equation*}
\bA \bu = \lambda \bu.
\end{equation*}
\end{definition}
\begin{definition}[Spectrum and Spectral Radius]\label{definition:spectrum}
The set of all eigenvalues of $\bA$ is called the spectrum of $\bA$ and denoted by $\Lambda(\bA)$. The largest magnitude of the eigenvalues is known as the spectral radius $\rho(\bA)$:
$$
\rho(\bA) = \mathop{\max}_{\lambda\in \Lambda(\bA)} |\lambda|.
$$
\end{definition}
\begin{definition}[Eigenvector]
A vector $\bu \in E$ is called an eigenvector, or proper vector, or characteristic vector of $\bA$ if $\bu \neq 0$ and if there is some $\lambda \in K$ such that
\begin{equation*}
\bA \bu = \lambda \bu,
\end{equation*}
where the scalar $\lambda$ is then an eigenvalue. And we say that $\bu$ is an eigenvector associated with $\lambda$.
\end{definition}
Moreover, the tuple $(\lambda, \bu)$ above is said to be an \textbf{eigenpair}. Intuitively, these definitions mean that multiplying matrix $\bA$ by the vector $\bu$ results in a new vector that is in the same direction as $\bu$, but scaled by a factor $\lambda$. For any eigenvector $\bu$, we can scale it by a nonzero scalar $s$ such that $s\bu$ is still an eigenvector of $\bA$; this is why we speak of \textit{an} eigenvector of $\bA$ associated with eigenvalue $\lambda$ rather than \textit{the} eigenvector. To avoid ambiguity, we usually assume that the eigenvector is normalized to have length $1$ and that its first entry is positive (or negative), since both $\bu$ and $-\bu$ are eigenvectors.
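The defining relation $\bA\bu=\lambda\bu$ and the scaling invariance of eigenvectors can be checked numerically; a minimal NumPy sketch (the symmetric test matrix is an arbitrary choice):

```python
import numpy as np

# A small symmetric matrix, so its eigenvalues are guaranteed real.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, U = np.linalg.eigh(A)   # eigenvalues ascending; columns of U are eigenvectors

# Each (lam[i], U[:, i]) is an eigenpair: A u = lambda u.
for i in range(2):
    assert np.allclose(A @ U[:, i], lam[i] * U[:, i])

# Scaling an eigenvector by a nonzero scalar keeps it an eigenvector.
u = 3.0 * U[:, 0]
assert np.allclose(A @ u, lam[0] * u)

# eigh returns eigenvectors normalized to unit length.
assert np.allclose(np.linalg.norm(U, axis=0), 1.0)
```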
In this context, we will make extensive use of the notion of linear independence of a set of vectors. Two equivalent definitions are given as follows.
\begin{definition}[Linearly Independent]
A set of vectors $\{\ba_1, \ba_2, \ldots, \ba_m\}$ is called linearly independent if the only linear combination with $x_1\ba_1+x_2\ba_2+\ldots+x_m\ba_m=\bzero$ is the trivial one, i.e., all $x_i$'s are zero. An equivalent definition is that $\ba_1\neq \bzero$, and for every $k>1$, the vector $\ba_k$ does not belong to the span of $\{\ba_1, \ba_2, \ldots, \ba_{k-1}\}$.
\end{definition}
In the study of linear algebra, every vector space has a basis and every vector is a linear combination of members of the basis. We then define the span and dimension of a subspace via the basis.
\begin{definition}[Span]
If every vector $\bv$ in subspace $\mathcal{V}$ can be expressed as a linear combination of $\{\ba_1, \ba_2, \ldots,$ $\ba_m\}$, then $\{\ba_1, \ba_2, \ldots, \ba_m\}$ is said to span $\mathcal{V}$.
\end{definition}
\begin{definition}[Subspace]
A nonempty subset $\mathcal{V}$ of $\real^n$ is called a subspace if $x\ba+y\bb\in \mathcal{V}$ for every $\ba,\bb\in \mathcal{V}$ and every $x,y\in \real$.
\end{definition}
\begin{definition}[Basis and Dimension]
A set of vectors $\{\ba_1, \ba_2, \ldots, \ba_m\}$ is called a basis of $\mathcal{V}$ if they are linearly independent and span $\mathcal{V}$. Every basis of a given subspace has the same number of vectors, and the number of vectors in any basis is called the dimension of the subspace $\mathcal{V}$. By convention, the subspace $\{\bzero\}$ is said to have dimension zero. Furthermore, every subspace of nonzero dimension has a basis that is orthogonal, i.e., the basis of a subspace can be chosen orthogonal.
\end{definition}
\begin{definition}[Column Space (Range)]
If $\bA$ is an $m \times n$ real matrix, we define the column space (or range) of $\bA$ to be the set spanned by its columns:
\begin{equation*}
\mathcal{C} (\bA) = \{ \by\in \mathbb{R}^m: \exists \bx \in \mathbb{R}^n, \, \by = \bA \bx \}.
\end{equation*}
And the row space of $\bA$ is the set spanned by its rows, which is equal to the column space of $\bA^\top$:
\begin{equation*}
\mathcal{C} (\bA^\top) = \{ \bx\in \mathbb{R}^n: \exists \by \in \mathbb{R}^m, \, \bx = \bA^\top \by \}.
\end{equation*}
\end{definition}
\begin{definition}[Null Space (Nullspace, Kernel)]
If $\bA$ is an $m \times n$ real matrix, we define the null space (or kernel, or nullspace) of $\bA$ to be the set:
\begin{equation*}
\nspace (\bA) = \{\by \in \mathbb{R}^n: \, \bA \by = \bzero \}.
\end{equation*}
And the null space of $\bA^\top$ is defined as \begin{equation*}
\nspace (\bA^\top) = \{\bx \in \mathbb{R}^m: \, \bA^\top \bx = \bzero \}.
\end{equation*}
\end{definition}
Both the column space of $\bA$ and the null space of $\bA^\top$ are subspaces of $\real^m$. In fact, every vector in $\nspace(\bA^\top)$ is perpendicular to every vector in $\cspace(\bA)$ and vice versa.\footnote{Similarly, every vector in $\nspace(\bA)$ is perpendicular to every vector in $\cspace(\bA^\top)$ and vice versa.}
\begin{definition}[Rank]
The \textit{rank} of a matrix $\bA\in \real^{m\times n}$ is the dimension of the column space of $\bA$. That is, the rank of $\bA$ is equal to the maximal number of linearly independent columns of $\bA$, which is also the maximal number of linearly independent rows of $\bA$. The matrix $\bA$ and its transpose $\bA^\top$ have the same rank. We say that $\bA$ has full rank if its rank is equal to $\min\{m,n\}$. In other words, this is true if and only if either all the columns of $\bA$ are linearly independent, or all the rows of $\bA$ are linearly independent. Specifically, given a vector $\bu \in \real^m$ and a vector $\bv \in \real^n$, the $m\times n$ matrix $\bu\bv^\top$ obtained by the outer product of vectors has rank 1. In short, the rank of a matrix is equal to:
\begin{itemize}
\item number of linearly independent columns;
\item number of linearly independent rows;
\item and remarkably, these are always the same (see Appendix~\ref{append:row-equal-column}, p.~\pageref{append:row-equal-column}).
\end{itemize}
\end{definition}
\begin{definition}[Orthogonal Complement in General]
The orthogonal complement $\mathcalV^\perp$ of a subspace $\mathcalV$ contains every vector that is perpendicular to $\mathcalV$. That is,
$$
\mathcalV^\perp = \{\bv | \bv^\top\bu=0, \,\,\, \forall \bu\in \mathcalV \}.
$$
The two subspaces intersect only in the zero vector and together span the entire space: the dimensions of $\mathcalV$ and $\mathcalV^\perp$ add up to the dimension of the whole space. Furthermore, $(\mathcalV^\perp)^\perp=\mathcalV$.
\end{definition}
\begin{definition}[Orthogonal Complement of Column Space]
If $\bA$ is an $m \times n$ real matrix, the orthogonal complement of $\mathcal{C}(\bA)$, $\mathcal{C}^{\bot}(\bA)$ is the subspace defined as:
\begin{equation*}
\begin{aligned}
\mathcal{C}^{\bot}(\bA) &= \{\by\in \mathbb{R}^m: \, \by^\top \bA \bx=\bzero, \, \forall \bx \in \mathbb{R}^n \} \\
&=\{\by\in \mathbb{R}^m: \, \by^\top \bv = \bzero, \, \forall \bv \in \mathcal{C}(\bA) \}.
\end{aligned}
\end{equation*}
\end{definition}
Then we have the four fundamental spaces for any matrix $\bA\in \real^{m\times n}$ with rank $r$:
\begin{description}
\item $\bullet$ $\cspace(\bA)$: Column space of $\bA$, i.e., linear combinations of columns with dimension $r$;
\item $\bullet$ $\nspace(\bA)$: Null space of $\bA$, i.e., all $\bx$ with $\bA\bx=0$ with dimension $n-r$;
\item $\bullet$ $\cspace(\bA^\top)$: Row space of $\bA$, i.e., linear combinations of rows with dimension $r$;
\item $\bullet$ $\nspace(\bA^\top)$: Left null space of $\bA$, i.e., all $\by$ with $\bA^\top \by=0$ with dimension $m-r$,
\end{description}
where $r$ is the rank of the matrix. Furthermore, $\nspace(\bA)$ is the orthogonal complement to $\cspace(\bA^\top)$, and $\cspace(\bA)$ is the orthogonal complement to $\nspace(\bA^\top)$. The proof can be found in Appendix~\ref{appendix:fundamental-rank-nullity}.
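The dimension counts and orthogonality relations of the four fundamental subspaces can be verified on a concrete matrix; a minimal NumPy sketch (the rank-2 example matrix is an arbitrary choice, and the null-space bases are read off the SVD):

```python
import numpy as np

# A rank-2 matrix in R^{3x4}: the third row is the sum of the first two.
A = np.array([[1., 0., 2., 1.],
              [0., 1., 1., 1.],
              [1., 1., 3., 2.]])
m, n = A.shape
r = np.linalg.matrix_rank(A)
assert r == 2

# dim C(A) = dim C(A^T) = r; dim N(A) = n - r; dim N(A^T) = m - r.
# The rows of Vt (resp. columns of U) beyond the first r singular values
# form bases of N(A) (resp. N(A^T)).
U, s, Vt = np.linalg.svd(A)
N_A = Vt[r:].T            # basis of N(A),   shape (n, n - r)
N_At = U[:, r:]           # basis of N(A^T), shape (m, m - r)
assert N_A.shape == (n, n - r) and N_At.shape == (m, m - r)
assert np.allclose(A @ N_A, 0) and np.allclose(A.T @ N_At, 0)

# N(A) is orthogonal to the row space C(A^T): every row of A kills N(A).
assert np.allclose(A @ N_A, 0)
```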
\begin{definition}[Orthogonal Matrix]
A real square matrix $\bQ$ is an orthogonal matrix if the inverse of $\bQ$ equals the transpose of $\bQ$, that is, $\bQ^{-1}=\bQ^\top$ and $\bQ\bQ^\top = \bQ^\top\bQ = \bI$. In other words, suppose $\bQ=[\bq_1, \bq_2, \ldots, \bq_n]$ where $\bq_i \in \real^n$ for all $i \in \{1, 2, \ldots, n\}$, then $\bq_i^\top \bq_j = \delta(i,j)$ with $\delta(i,j)$ being the Kronecker delta function. If $\bQ$ contains only $\gamma$ of these columns with $\gamma<n$, then $\bQ^\top\bQ = \bI_\gamma$ still holds with $\bI_\gamma$ being the $\gamma\times \gamma$ identity matrix, but $\bQ\bQ^\top=\bI$ will not be true. For any vector $\bx$, the orthogonal matrix preserves the length: $||\bQ\bx|| = ||\bx||$.
\end{definition}
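These properties, including the partial-column case $\bQ^\top\bQ=\bI_\gamma$ without $\bQ\bQ^\top=\bI$, can be checked numerically; a minimal NumPy sketch (the random seed and matrix size are arbitrary):

```python
import numpy as np

# Build an orthogonal matrix from the QR decomposition of a random matrix.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

assert np.allclose(Q.T @ Q, np.eye(4))
assert np.allclose(Q @ Q.T, np.eye(4))
assert np.allclose(np.linalg.inv(Q), Q.T)     # Q^{-1} = Q^T

# Keep only gamma = 2 columns: Q2^T Q2 = I_2 still holds ...
Q2 = Q[:, :2]
assert np.allclose(Q2.T @ Q2, np.eye(2))
# ... but Q2 Q2^T is a rank-2 projection, not the identity.
assert not np.allclose(Q2 @ Q2.T, np.eye(4))

# Orthogonal matrices preserve length.
x = rng.standard_normal(4)
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```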
\begin{definition}[Permutation Matrix]\label{definition:permutation-matrix}
A permutation matrix $\bP$ is a square binary matrix that has exactly one entry of 1 in each row and each column and 0's elsewhere.
\paragraph{Row Point} That is, the permutation matrix $\bP$ has the rows of the identity $\bI$ in any order, and that order determines the sequence of the row permutation. Suppose we want to permute the rows of matrix $\bA$; we just multiply on the left by $\bP$, forming $\bP\bA$.
\paragraph{Column Point} Or, equivalently, the permutation matrix $\bP$ has the columns of the identity $\bI$ in any order, and that order determines the sequence of the column permutation. And now, the column permutation of $\bA$ is obtained by multiplying on the right by $\bP$, forming $\bA\bP$.
\end{definition}
The permutation matrix $\bP$ can be more efficiently represented via a vector $J \in \integer_+^n$ of indices such that $\bP = \bI[:, J]$, where $\bI$ is the $n\times n$ identity matrix; notably, the elements of $J$ sum to $1+2+\ldots+n= \frac{n^2+n}{2}$.
\begin{example}[Permutation]
Suppose,
$$\bA=\begin{bmatrix}
1 & 2&3\\
4&5&6\\
7&8&9
\end{bmatrix}
,\qquad \text{and} \qquad
\bP=\begin{bmatrix}
&1&\\
&&1\\
1&&
\end{bmatrix}.
$$
The row permutation is given by
$$
\bP\bA = \begin{bmatrix}
4&5&6\\
7&8&9\\
1 & 2&3\\
\end{bmatrix},
$$
where the order of the rows of $\bA$ appearing in $\bP\bA$ matches the order of the rows of $\bI$ in $\bP$. And the column permutation is given by
$$
\bA\bP = \begin{bmatrix}
3 & 1 & 2 \\
6 & 4 & 5\\
9 & 7 & 8
\end{bmatrix},
$$
where the order of the columns of $\bA$ appearing in $\bA\bP$ matches the order of the columns of $\bI$ in $\bP$. \exampbar
\end{example}
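The example above can be reproduced numerically; a minimal NumPy sketch (0-based indices replace the text's 1-based ones, and the permutation matrix is built by reordering rows of the identity):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# The permutation matrix from the example: rows of I in the order 2, 3, 1.
J = [1, 2, 0]                       # 0-based version of that order
P = np.eye(3, dtype=int)[J, :]      # P = I[J, :]

# Row permutation: rows of A appear in the order given by J.
assert (P @ A == np.array([[4, 5, 6],
                           [7, 8, 9],
                           [1, 2, 3]])).all()

# Column permutation: multiply on the right instead.
assert (A @ P == np.array([[3, 1, 2],
                           [6, 4, 5],
                           [9, 7, 8]])).all()
```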
\begin{definition}[Selection Matrix]\label{definition:selection-matrix}
A selection matrix $\bS$ is a square diagonal matrix whose diagonal entries are 1 or 0. The unit entries correspond to the rows or columns that will be selected.
\paragraph{Row Point} That is, the selection matrix $\bS$ keeps the rows of the identity $\bI$ corresponding to the rows we want to select, and masks the remaining rows of $\bI$ by zero.
Suppose we want to select rows of matrix $\bA$; we just multiply on the left by $\bS$, forming $\bS\bA$.
\paragraph{Column Point} Or, equivalently, the selection matrix $\bS$ keeps the columns of the identity $\bI$ corresponding to the columns we want to select, and masks the remaining columns of $\bI$ by zero. And now, the column selection of $\bA$ is obtained by multiplying on the right by $\bS$, forming $\bA\bS$.
\end{definition}
\begin{example}[Selection and Permutation]
Suppose,
$$\bA=\begin{bmatrix}
1 & 2&3\\
4&5&6\\
7&8&9
\end{bmatrix}
,\qquad \text{and} \qquad
\bS=\begin{bmatrix}
1&&\\
& 0&\\
&&1
\end{bmatrix}.
$$
The row selection is given by
$$
\bS\bA = \begin{bmatrix}
1&2&3\\
0&0&0\\
7 & 8&9\\
\end{bmatrix},
$$
where the rows of $\bA$ appearing in $\bS\bA$ match the unit diagonal entries of $\bS$. And the column selection is given by
$$
\bA\bS = \begin{bmatrix}
1& 0 & 3 \\
4 & 0 & 6\\
7 & 0 & 9
\end{bmatrix},
$$
where the columns of $\bA$ appearing in $\bA\bS$ match the unit diagonal entries of $\bS$. If we now want to reorder the selected rows or columns into the upper-left corner of the final matrix, we can construct a permutation as follows
$$
\bP=\begin{bmatrix}
1&&\\
& & 1\\
&1&
\end{bmatrix},
$$
such that
$$
\bP\bS\bA = \begin{bmatrix}
1&2&3\\
7 & 8&9\\
0&0&0\\
\end{bmatrix},
$$
and
$$
\bA\bS\bP = \begin{bmatrix}
1 & 3& 0 \\
4 & 6& 0\\
7 & 9& 0
\end{bmatrix}.
$$
This trick is essential in some mathematical proofs, e.g., the properties of positive definite matrices in Lemma~\ref{lemma:pd-more-properties}. \exampbar
\end{example}
From an introductory course on linear algebra, we have the following remark on equivalent characterizations of nonsingular matrices.
\begin{remark}[List of Equivalence of Nonsingularity for a Matrix]
For a square matrix $\bA\in \real^{n\times n}$, the following claims are equivalent:
\begin{itemize}
\item $\bA$ is nonsingular;
\item $\bA$ is invertible, i.e., $\bA^{-1}$ exists;
\item $\bA\bx=\bb$ has a unique solution $\bx = \bA^{-1}\bb$;
\item $\bA\bx = \bzero$ has a unique, trivial solution: $\bx=\bzero$;
\item Columns of $\bA$ are linearly independent;
\item Rows of $\bA$ are linearly independent;
\item $\det(\bA) \neq 0$;
\item $\dim(\nspace(\bA))=0$;
\item $\nspace(\bA) = \{\bzero\}$, i.e., the null space is trivial;
\item $\cspace(\bA)=\cspace(\bA^\top) = \real^n$, i.e., the column space or row space span the whole $\real^n$;
\item $\bA$ has full rank $r=n$;
\item The reduced row echelon form is $\bR=\bI$;
\item $\bA^\top\bA$ is symmetric positive definite;
\item $\bA$ has $n$ nonzero (positive) singular values;
\item All eigenvalues are nonzero;
\end{itemize}
\end{remark}
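Several of the equivalent claims above can be verified on a single example; a minimal NumPy sketch (the $2\times 2$ test matrix is an arbitrary nonsingular choice):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])          # nonsingular: det = 5

# A few of the equivalent claims, checked numerically.
assert abs(np.linalg.det(A)) > 1e-12                   # det(A) != 0
assert np.linalg.matrix_rank(A) == 2                   # full rank r = n
assert np.all(np.linalg.eigvals(A) != 0)               # no zero eigenvalue
assert np.all(np.linalg.svd(A, compute_uv=False) > 0)  # n positive singular values

# A x = b has the unique solution x = A^{-1} b.
b = np.array([1., 2.])
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)

# A^T A is symmetric positive definite: Cholesky succeeds.
np.linalg.cholesky(A.T @ A)
```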
It is important to keep the above equivalences in mind; otherwise, one can easily get lost. On the other hand, the following remark shows the equivalent claims for singular matrices.
\begin{remark}[List of Equivalence of Singularity for a Matrix]
For a square matrix $\bA\in \real^{n\times n}$ with eigenpair $(\lambda, \bu)$, the following claims are equivalent:
\begin{itemize}
\item $(\bA-\lambda\bI)$ is singular;
\item $(\bA-\lambda\bI)$ is not invertible;
\item $(\bA-\lambda\bI)\bx = \bzero$ has nonzero $\bx\neq \bzero$ solutions, and $\bx=\bu$ is one of such solutions;
\item $(\bA-\lambda\bI)$ has linearly dependent columns;
\item $\det(\bA-\lambda\bI) = 0$;
\item $\dim(\nspace(\bA-\lambda\bI))>0$;
\item Null space of $(\bA-\lambda\bI)$ is nontrivial;
\item Columns of $\bA$ are linearly dependent;
\item Rows of $\bA$ are linearly dependent;
\item $\bA$ has rank $r<n$;
\item Dimension of column space = dimension of row space = $r<n$;
\item $\bA^\top\bA$ is symmetric positive semidefinite, but not positive definite;
\item $\bA$ has $r<n$ nonzero (positive) singular values;
\item Zero is an eigenvalue of $\bA$.
\end{itemize}
\end{remark}
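In particular, the singularity of $(\bA-\lambda\bI)$ for each eigenpair is easy to confirm numerically; a minimal NumPy sketch (the test matrix is an arbitrary choice with real eigenvalues):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
lam = np.linalg.eigvals(A)       # eigenvalues 5 and 2

for l in lam:
    B = A - l * np.eye(2)
    # (A - lambda I) is singular: zero determinant and deficient rank.
    assert np.isclose(np.linalg.det(B), 0.0, atol=1e-9)
    assert np.linalg.matrix_rank(B) < 2
```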
\subsection*{Matrix Decomposition in a Nutshell}
We briefly overview the different decompositional methods that we will cover in this text as follows:
\begin{enumerate}[leftmargin=*]
\item \textit{LU.} $\bA=\bL\bU =
\left(\parbox{8em}{lower triangular $\bL$\\ 1's on the diagonal}\right)
\left(\parbox{9.5em}{upper triangular $\bU$\\pivots on the diagonal}\right)$
\textbf{Requirements:} $\bA$ has nonzero leading principal minors, i.e., no row permutations are involved in reducing $\bA$ to upper triangular form via Gaussian elimination (Theorem~\ref{theorem:lu-factorization-without-permutation}, p.~\pageref{theorem:lu-factorization-without-permutation}).
\item \textit{LU.} $\bA=\bL\bD\bU = \left(\parbox{8em}{lower triangular $\bL$\\ 1's on the diagonal}\right)
\left(\parbox{6em}{pivot matrix \\$\bD$ is diagonal}\right)
\left(\parbox{8.5em}{upper triangular $\bU$\\1's on the diagonal}\right)$
\textbf{Requirements:} $\bA$ has nonzero leading principal minors, i.e., no row permutations are involved in reducing $\bA$ to upper triangular form via Gaussian elimination. $\bD$ contains the pivots such that $\bU$ has 1's on the diagonal. When $\bA$ is symmetric, it follows that $\bA=\bL\bD\bL^\top$ (Corollary~\ref{corollary:ldu-decom}, p.~\pageref{corollary:ldu-decom}).
\item \textit{PLU.} $\bA=\bP\bL\bU =\left(\parbox{7em}{permutation \\matrix $\bP$ avoids\\ zeros for\\ eliminations}\right) \left(\parbox{8em}{lower triangular $\bL$\\ 1's on the diagonal}\right)\left(\parbox{9.5em}{upper triangular $\bU$\\pivots on the diagonal}\right)$
\textbf{Requirements:} $\bA$ is nonsingular; then $\bP,\bL,\bU$ are nonsingular as well. $\bP^\top$ performs the row permutation in advance such that $\bP^\top\bA$ has nonzero leading principal minors (Theorem~\ref{theorem:lu-factorization-with-permutation}, p.~\pageref{theorem:lu-factorization-with-permutation}).
\item \textit{RRLU.} $\bP\bA\bQ = \begin{bmatrix}
\bL_{11} & \bzero \\
\bL_{21}^\top & \bI
\end{bmatrix}
\begin{bmatrix}
\bU_{11} & \bU_{12} \\
\bzero & \bzero
\end{bmatrix}$
\textbf{Requirements}: Any matrix with rank $r$ such that $\bP,\bQ$ are permutations, $\bL_{11}$ is lower triangular with 1's on the diagonal, $\bU_{11}$ is upper triangular with pivots on the diagonal where both of them reveal the rank of matrix $\bA$ (Section~\ref{section:rank-reveal-lu-short}, p.~\pageref{section:rank-reveal-lu-short}).
\item \textit{Complete Pivoting LU.} $\bP\bA\bQ = \bL\bU=
\left(\parbox{8em}{lower triangular $\bL$\\ 1's on the diagonal}\right)\left(\parbox{9.5em}{upper triangular $\bU$\\pivots on the diagonal}\right)$
\textbf{Requirements}: $\bA$ is nonsingular such that $\bP,\bQ$ are permutations, $\bL$ is unit lower triangular with 1's on the diagonal, $\bU$ is upper triangular with pivots on the diagonal (Section~\ref{section:complete-pivoting}, p.~\pageref{section:complete-pivoting}).
\item \textit{Cholesky.} $\bA=\bR^\top\bR$=(lower triangular)(upper triangular)
\textbf{Requirements:} $\bA$ is positive definite (symmetric) such that the diagonal of $\bR$ contains the diagonals of $\sqrt{\bD}$ which are positive from the LU decomposition. Trivially, $\bR^\top$ can be set as $\bL\sqrt{\bD}$ from the LU decomposition. When $\bA$ is positive semidefinite, $\bR$ is upper triangular with possible zeros on the diagonal (Theorem~\ref{theorem:cholesky-factor-exist}, p.~\pageref{theorem:cholesky-factor-exist}; Theorem~\ref{theorem:semidefinite-factor-exist}, p.~\pageref{theorem:semidefinite-factor-exist}).
\item \textit{Pivoted Cholesky.} $\bP\bA\bP^\top = \bR^\top\bR $=(lower triangular)(upper triangular)
\textbf{Requirements:} $\bA$ is positive definite (symmetric) such that $\bR$ is upper triangular (Section~\ref{section:volum-picot-cholesjy}, p.~\pageref{section:volum-picot-cholesjy}).
\item \textit{Semidefinite Rank-Revealing.} $\bP^\top \bA\bP = \bR^\top\bR, \qquad \mathrm{with} \qquad
\bR = \begin{bmatrix}
\bR_{11} & \bR_{12}\\
\bzero &\bzero
\end{bmatrix} $
\textbf{Requirements:} $\bA$ is positive semidefinite with rank $r$ such that after permuting by $\bP$, $\bR_{11}$ is upper triangular with rank $r$ (Theorem~\ref{theorem:semidefinite-factor-rank-reveal}, p.~\pageref{theorem:semidefinite-factor-rank-reveal}).
\item \textit{CR.} $\bA=\bC\bR$=(first independent columns of $\bA$)(basis for row space of $\bA$)
\textbf{Requirements:} Any matrix $\bA$ with rank $r$; $\bC$ contains the first $r$ linearly independent columns of $\bA$, and $\bR$ is the reduced row echelon form with zero rows removed. $\bR$ contains an $r\times r$ identity submatrix (Theorem~\ref{theorem:cr-decomposition}, p.~\pageref{theorem:cr-decomposition}).
\item \textit{Rank.} $\bA=\bD\bF$=(column basis of $\bA$)(row basis of $\bA$)
\textbf{Requirements:} Any matrix $\bA$ with rank $r$, $\bD$ contains $r$ columns that span the column space of $\bA$, $\bF$ contains $r$ rows that span the row space of $\bA$ (Theorem~\ref{theorem:rank-decomposition}, p.~\pageref{theorem:rank-decomposition}).
\item \textit{Skeleton.} $\bA=\bC\bU^{-1}\bR$=($r$ columns of $\bA$)(intersection of $\bC,\bR$)$^{-1}$($r$ rows of $\bA$)
\textbf{Requirements:} Any matrix $\bA$ with rank $r$ such that $\bC, \bR$ come directly from the columns and rows of $\bA$, and $\bU$ is the mixing matrix on the intersection of $\bC$ and $\bR$ (Theorem~\ref{theorem:skeleton-decomposition}, p.~\pageref{theorem:skeleton-decomposition}).
\item \textit{Interpolative.} $\bA=\bC\bW$=(independent columns of $\bA$)(columns consisting of identity matrix)
\textbf{Requirements:} Any matrix $\bA$ with rank $r$ such that $\bW$ contains an $r\times r$ identity submatrix (in the sense of permutation). The elements of $\bW$ are no greater than 1 in absolute value (Theorem~\ref{theorem:interpolative-decomposition}, p.~\pageref{theorem:interpolative-decomposition}; Theorem~\ref{theorem:interpolative-decomposition-row}, p.~\pageref{theorem:interpolative-decomposition-row}).
\item \textit{QR.} $\bA = \bQ\bR$=(orthonormal columns in $\bQ$)(upper triangular $\bR$)
\textbf{Requirements:} Rectangular matrix $\bA$ has linearly independent columns; otherwise $\bR$ will be singular (its diagonal contains at least one zero). In the full QR decomposition, $\bQ$ is orthogonal such that $\bQ^{-1}=\bQ^\top$ (Theorem~\ref{theorem:qr-decomposition}, p.~\pageref{theorem:qr-decomposition}).
\item \textit{CPQR.}
$\bA = \bQ
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix}\bP^\top
=\left(\parbox{5.9em}{orthogonal $\bQ$}\right)
\left(\parbox{8.em}{upper triangular $\bR_{11}$ $|$ full matrix $\bR_{12}$}\right)
\left(\parbox{5.2em}{permutation $\bP^\top$}\right)$
\textbf{Requirements:} Any rectangular matrix $\bA$ with rank $r$ such that $\bR_{11}$ is $r\times r$ upper triangular (Theorem~\ref{theorem:rank-revealing-qr-general}, p.~\pageref{theorem:rank-revealing-qr-general}).
\item \textit{RRQR.}
$\bA = \bQ
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bR_{22}
\end{bmatrix}\bP^\top
=\left(\parbox{5.9em}{orthogonal $\bQ$}\right)
\left(\parbox{8.em}{upper triangular $\bR_{11}$ $|$ full matrix $\bR_{12}$ $|$ $\bR_{22}$ small in norm}\right)
\left(\parbox{5.2em}{permutation $\bP^\top$}\right)$
\textbf{Requirements:} Any rectangular matrix $\bA$ with rank $r$ such that $\bR_{11}$ is $r\times r$ upper triangular and $\bR_{22}$ is small in norm (Section~\ref{section:rank-r-qr}, p.~\pageref{section:rank-r-qr}).
\item \textit{LQ.} $\bA=\bL\bQ$=(lower triangular $\bL$)(orthonormal rows in $\bQ$)
\textbf{Requirements:} Rectangular matrix $\bA$ has linearly independent rows; otherwise $\bL$ will be singular (its diagonal contains at least one zero). In the full LQ decomposition, $\bQ$ is orthogonal such that $\bQ^{-1}=\bQ^\top$ (Theorem~\ref{theorem:lq-decomposition}, p.~\pageref{theorem:lq-decomposition}).
\item \textit{RPLQ.}
$\bA = \bP^\top
\begin{bmatrix}
\bL_{11} & \bzero \\
\bL_{21} & \bzero
\end{bmatrix}\bQ
=\left(\text{permutation $\bP^\top$}\right)
\left(\parbox{8.em}{lower triangular $\bL_{11}$ $|$ full matrix $\bL_{21}$}\right)
\left(\parbox{5.9em}{orthogonal $\bQ$}\right)$
\textbf{Requirements:} Any rectangular matrix $\bA$ with rank $r$ such that $\bL_{11}$ is $r\times r$ lower triangular (Section~\ref{section:lq-decomp}, p.~\pageref{section:lq-decomp}).
\item \textit{Two-Sided Orthogonal.} $\bA=\bU\bF\bV^\top=\left(\parbox{5.3em}{orthonormal \\ basis in $\bU$}\right)
\left(\parbox{4.5em}{Upper-left \\$r\times r$\\ submatrix\\ is nonzero}\right)
\left(\parbox{5.3em}{orthonormal \\ basis in $\bV$}\right)$
\textbf{Requirements:} Square matrix with rank $r$ such that the first $r$ columns of $\bU$ span the column space of $\bA$, and the remaining columns span the null space of $\bA^\top$; while the first $r$ columns of $\bV$ span the row space of $\bA$ and the remaining columns span the null space of $\bA$ (Theorem~\ref{theorem:two-sided-orthogonal}, p.~\pageref{theorem:two-sided-orthogonal}).
\item \textit{UTV.} $\bA = \bU\bT\bV$=(orthogonal $\bU$)
$\left(\parbox{8.4em}{upper triangular $\bR$\\lower triangular $\bL$}\right)$
(orthogonal $\bV$)
\textbf{Requirements:} Any matrix $\bA$ with rank $r$ such that $\bT$ is lower or upper triangular with rank $r$ (Theorem~\ref{theorem:ulv-decomposition}, p.~\pageref{theorem:ulv-decomposition}).
\item \textit{Hessenberg.} $\bA=\bQ\bH\bQ^\top$=(orthogonal matrix $\bQ$)(Hessenberg matrix $\bH$)($\bQ^\top=\bQ^{-1}$)
\textbf{Requirements:} Any square matrix $\bA$ such that $\bA,\bH$ are orthogonal similar matrices with the same rank, trace, eigenvalues (Theorem~\ref{theorem:hessenberg-decom}, p.~\pageref{theorem:hessenberg-decom}).
\item \textit{Tridiagonal.} $\bA=\bQ\bT\bQ^\top$=(orthogonal $\bQ$)(Symmetric tridiagonal $\bT$)($\bQ^\top=\bQ^{-1}$)
\textbf{Requirements:} Any symmetric matrix $\bA$ such that $\bA,\bT$ are orthogonal similar matrices with the same rank, trace, and eigenvalues (Theorem~\ref{definition:tridiagonal-hessenbert}, p.~\pageref{definition:tridiagonal-hessenbert}).
\item \textit{Bidiagonal.} $\bA=\bU\bB\bV^\top$=(orthogonal matrix $\bU$)(Bidiagonal $\bB$)(orthogonal matrix $\bV$)
\textbf{Requirements:} Any rectangular matrix $\bA$ such that $\bB$ is upper bidiagonal. $\bT=\bB^\top\bB$ is tridiagonal if $\bB$ is bidiagonal such that $\bA^\top\bA = \bV\bB^\top\bB\bV^\top$ is the tridiagonal decomposition of $\bA^\top\bA$ (Theorem~\ref{theorem:Golub-Kahan-Bidiagonalization-decom}, p.~\pageref{theorem:Golub-Kahan-Bidiagonalization-decom}).
\item \textit{Eigenvalue.} $\bA = \bX\bLambda\bX^{-1}
=\left(\parbox{6.5em}{eigenvectors in\\ columns of $\bX$}\right)
\left(\parbox{6.1em}{eigenvalues in \\diagonal of $\bLambda$}\right)
\left(\parbox{7em}{left eigenvectors \\in $\bX^{-1}$}\right)$
\textbf{Requirements:} Square matrix $\bA$ has $n$ linearly independent eigenvectors (Theorem~\ref{theorem:eigenvalue-decomposition}, p.~\pageref{theorem:eigenvalue-decomposition}).
\item \textit{Schur.} $\bA = \bQ\bU\bQ^\top
=\left(\parbox{6.6em}{orthonormal \\columns in $\bQ$}\right)
\left(\parbox{8.5em}{Upper triangular $\bU$}\right)
\left(\parbox{4.8em}{$\bQ^\top=\bQ^{-1}$}\right)$
\textbf{Requirements:} $\bA$ is any real matrix with real eigenvalues (Theorem~\ref{theorem:schur-decomposition}, p.~\pageref{theorem:schur-decomposition}).
\item \textit{Spectral.} $\bA=\bQ\bLambda\bQ^\top
=\left(\parbox{7.6em}{orthonormal \\eigenvectors in $\bQ$}\right)
\left(\parbox{7.8em}{real eigenvalues in \\diagonal of $\bLambda$}\right)
\left(\parbox{7em}{left eigenvectors \\in $\bQ^{\top}$}\right)$
\textbf{Requirements:} $\bA$ is real and symmetric, which is a special case of the Schur decomposition (Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}).
\item \textit{Jordan.} $\bA = \bX\bJ\bX^{-1}
=\left(\parbox{7.3em}{generalized eigenvectors \\in columns of $\bX$}\right)
\left(\parbox{6.4em}{Jordan blocks \\in $\bJ$}\right)
\left(\parbox{8.8em}{left generalized \\eigenvectors in $\bX^{-1}$}\right)$
\textbf{Requirements:} $\bA$ is any square matrix, $\bJ$ is a Jordan form matrix which has several diagonal blocks that contains identical eigenvalues in each block (Theorem~\ref{theorem:jordan-decomposition}, p.~\pageref{theorem:jordan-decomposition}).
\item \textit{SVD.} $\bA=\bU \bSigma \bV^\top
=\left(\parbox{5.5em}{left singular \\vectors in $\bU$}\right)
\left(\parbox{7.6em}{singular values in \\diagonal of $\bSigma$}\right)
\left(\parbox{5.8em}{right singular \\vectors in $\bV$}\right)$
\textbf{Requirements:} Any matrix $\bA$ such that the diagonals of $\bSigma^2$ contain the eigenvalues of $\bA^\top\bA$ or $\bA\bA^\top$. In the full SVD, both $\bU$ and $\bV$ are orthogonal (Theorem~\ref{theorem:full_svd_rectangular}, p.~\pageref{theorem:full_svd_rectangular}).
\item \textit{Polar.} $\bA=\bQ_l\bS_l
=\left(\parbox{4.6em}{orthogonal matrix $\bQ_l$}\right)
\left(\parbox{6.5em}{positive \\semidefinite $\bS_l$}\right)$
\textbf{Requirements:} $\bA$ is any square matrix. The polar decomposition indicates $\bS_l^2 = \bA^\top\bA$. When $\bA$ is nonsingular, $\bS_l$ is positive definite. Moreover, there exists a right polar decomposition $\bA=\bS_r\bQ_r$ such that $\bS_r^2 = \bA\bA^\top$ (Theorem~\ref{theorem:polar-decomposition}, p.~\pageref{theorem:polar-decomposition}).
\item \textit{ALS.} $\bA \approx \bW\bZ$=$\left(s=rank(\bW)<rank(\bA)=r\right)$
$\left(s=rank(\bZ)<rank(\bA)=r\right)$
\textbf{Requirements:} For any rectangular matrix $\bA$, $\bW\bZ$ approximates $\bA$ in the sense of Frobenius norm (Section~\ref{section:als}, p.~\pageref{section:als}).
\item \textit{NMF.} $\bA \approx \bW\bZ$=$\left(s=rank(\bW)<rank(\bA)=r\right)$
$\left(s=rank(\bZ)<rank(\bA)=r\right)$
\textbf{Requirements:} For any rectangular matrix $\bA$, $\bW\bZ$ approximates $\bA$ in the sense of Frobenius norm with the entries of $\bA,\bW,\bZ$ being nonnegative (Section~\ref{section:nmf}, p.~\pageref{section:nmf}).
\item \textit{Biconjugate.} $\bA = \bPhi \bOmega^{-1} \bPsi^\top$
\textbf{Requirements:} Any rectangular matrix $\bA$ with rank $r$ such that $\bOmega$ is $r\times r$ diagonal. The columns of $\bPhi, \bPsi$ come from the Wedderburn sequence (Theorem~\ref{theorem:biconjugate-form1}, p.~\pageref{theorem:biconjugate-form1}; Theorem~\ref{theorem:biconjugate-form2}, p.~\pageref{theorem:biconjugate-form2}).
\item \textit{CP.} $\eX \approx \llbracket \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\sum_{r=1}^{R} \ba_r^{(1)} \circ \ba_r^{(2)} \circ \ldots \circ \ba_r^{(N)} $
\textbf{Requirements:} Any tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ such that the CP decomposition is a low-rank approximation of the original tensor (Theorem~\ref{theorem:cp-decomp}, p.~\pageref{theorem:cp-decomp}).
\item \textit{Tucker.} $\eX \approx \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \ldots \sum_{r_N=1}^{R_N}
g_{r_1r_2\ldots r_N}
\ba_{r_1}^{(1)} \circ \ba_{r_2}^{(2)} \circ \ldots \circ \ba_{r_N}^{(N)} $
\textbf{Requirements:} Any tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ such that the Tucker decomposition extracts the principal components in the high-order dimensions (Theorem~\ref{theorem:tucker-decomp}, p.~\pageref{theorem:tucker-decomp}).
\item \textit{HOSVD.} $\eX \approx \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \ldots \sum_{r_N=1}^{R_N}
g_{r_1r_2\ldots r_N}
\ba_{r_1}^{(1)} \circ \ba_{r_2}^{(2)} \circ \ldots \circ \ba_{r_N}^{(N)} $
\textbf{Requirements:} Any tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ such that the HOSVD extracts the principal components in the high-order dimensions, where further the slices of $\eG$ are mutually orthogonal and ordered in a descending manner (Theorem~\ref{theorem:hosvd-decomp}, p.~\pageref{theorem:hosvd-decomp}).
\item \textit{TT.} $\eX \approx \eG^{(1)} \boxtimes \eG^{(2)} \boxtimes \ldots \boxtimes \eG^{(N)}$
\textbf{Requirements:} Any tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ such that the TT decomposition factors the tensor into $N$ third-order tensors (Theorem~\ref{theorem:ttrain-decomp}, p.~\pageref{theorem:ttrain-decomp}).
\end{enumerate}
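A few of the factorizations listed above are available directly in numerical libraries and can be sampled on a random matrix; a minimal NumPy sketch (the random seed and sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# QR: orthonormal columns times upper triangular.
Q, R = np.linalg.qr(A)
assert np.allclose(Q.T @ Q, np.eye(3))
assert np.allclose(Q @ R, A)
assert np.allclose(R, np.triu(R))

# SVD: the diagonal of Sigma^2 contains the eigenvalues of A^T A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(U @ np.diag(s) @ Vt, A)
eig = np.sort(np.linalg.eigvalsh(A.T @ A))
assert np.allclose(np.sort(s**2), eig)

# Spectral decomposition of a symmetric matrix: S = Q Lambda Q^T.
S = A.T @ A
lam, V = np.linalg.eigh(S)
assert np.allclose(V @ np.diag(lam) @ V.T, S)
```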
\part{Triangularization, Orthogonalization and Gram-Schmidt Process}
\section*{Introduction}
Given an $m\times l$ matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_l]$, with $m\geq l$, the \textit{orthonormalization} produces $\bQ_l=[\bq_1, \bq_2, \ldots, \bq_l]$ such that
$$
span([\bq_1, \bq_2, \ldots, \bq_k]) = span([\ba_1, \ba_2, \ldots, \ba_k]), \qquad \text{for all } k\in \{1,2,\ldots, l\}.
$$
Meanwhile, the columns of $\bQ_l$ are mutually orthonormal:
$$
\bq_i^\top\bq_j =
\left\{
\begin{aligned}
&1, \qquad \text{if $i=j$};\\
&0, \qquad \text{if $i\neq j$}.
\end{aligned}
\right\} \leadto \bQ_l^\top\bQ_l = \bI_l,
$$
where $\bI_l$ is an $l\times l$ identity matrix. When we complete $\bQ_l$ into $m$ mutually orthonormal columns $\bQ=[\bq_1, \bq_2, \ldots, \bq_m]$, $\bQ$ is a square matrix and it follows that $\bQ\bQ^\top=\bQ^\top\bQ=\bI_m$. Sometimes, this \textit{column completion} is done by the Gram-Schmidt process that we will introduce in the sequel.
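The orthonormalization with the span-preserving property can be sketched via the classical Gram-Schmidt process; a minimal NumPy implementation (the random test matrix is an arbitrary choice, and columns are assumed linearly independent):

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent)."""
    m, l = A.shape
    Q = np.zeros((m, l))
    for k in range(l):
        # Subtract the projections onto the already-built q_1, ..., q_{k-1}.
        q = A[:, k] - Q[:, :k] @ (Q[:, :k].T @ A[:, k])
        Q[:, k] = q / np.linalg.norm(q)
    return Q

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
Q = gram_schmidt(A)

# Columns are mutually orthonormal: Q^T Q = I_l.
assert np.allclose(Q.T @ Q, np.eye(3), atol=1e-10)

# span([q_1..q_k]) = span([a_1..a_k]): projecting a_1..a_k onto the first
# k columns of Q recovers them exactly.
for k in range(1, 4):
    P = Q[:, :k] @ Q[:, :k].T
    assert np.allclose(P @ A[:, :k], A[:, :k], atol=1e-10)
```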
For the rest of this part, we will discuss several factorization methods in the sense of the orthogonalization above with different focuses, e.g., on the column space, the row space, or both. The big picture is shown in Figure~\ref{fig:orthgonal-world-picture}.
\begin{figure}[htbp]
\centering
\begin{widepage}
\centering
\resizebox{0.6\textwidth}{!}{%
\begin{tikzpicture}[>=latex]
\tikzstyle{state} = [draw, very thick, fill=white, rectangle, minimum height=3em, minimum width=6em, node distance=8em, font={\sffamily\bfseries}]
\tikzstyle{stateEdgePortion} = [black,thick];
\tikzstyle{stateEdge} = [stateEdgePortion,->];
\tikzstyle{stateEdge2} = [stateEdgePortion,<->];
\tikzstyle{edgeLabel} = [pos=0.5, text centered, font={\sffamily\small}];
\node[state, name=qr, node distance=7em, xshift=-9em, yshift=-1em, fill={colorqr}] {QR};
\node[state, name=lq, above of=qr, xshift=-10em, yshift=-4em, fill={colorqr}] {LQ};
\node[state, name=utv, above of=qr, xshift=10em, yshift=-4em, fill={colorqr}] {UTV};
\node[state, name=twosidedortho, draw, above of=qr,xshift=0em, yshift=0em,fill={colorqr}] {\parbox{5em}{Two-Sided \\Orthogonal}};
\draw (lq.east)
edge[stateEdge] node[edgeLabel, yshift=0em, xshift=-0.7em]{\parbox{3em}{Rank\\Estimation}}
(utv.west) ;
\draw ($(qr.east)$)
edge[stateEdge, bend left=-12.5] node[edgeLabel, xshift=-0.3em,yshift=0.5em]{Rank Estimation}
(utv.south) ;
\draw ($(lq.north) + (0em,0em)$)
edge[stateEdge, bend left=12.5] node[edgeLabel, xshift=1.5em,yshift=-0.5em]{Phase 1}
(twosidedortho.west) ;
\draw (twosidedortho.east)
edge[stateEdge, bend left=+12.5] node[edgeLabel, xshift=-1.5em,yshift=-0.5em]{Phase 2}
($(utv.north) + (0em,0em)$);
\draw ($(qr.west)$)
edge[stateEdge, bend left=+12.5] node[edgeLabel, yshift=0.5em, xshift=1em]{Row Space}
(lq.south) ;
\begin{pgfonlayer}{background}
\draw [join=round,cyan,dotted,fill={colormiddleleft}] ($(qr.south west -| lq.west) + (-0.5em, -0.5em)$) rectangle ($(twosidedortho.north east -| utv.north east) + (0.6em, 0.5em)$);
\end{pgfonlayer}
\end{tikzpicture}
}
\end{widepage}
\caption{Orthogonalization World Map. See also where it lies in the matrix decomposition world map, Figure~\ref{fig:matrix-decom-world-picture}.}
\label{fig:orthgonal-world-picture}
\end{figure}
\newpage
\chapter{QR Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{QR Decomposition}
In many applications, we are interested in the column space of a matrix $\bA=[\ba_1, \ba_2, ..., \ba_n] \in \real^{m\times n}$. The successive spaces spanned by the columns $\ba_1, \ba_2, \ldots$ of $\bA$ are
$$
\cspace([\ba_1])\,\,\,\, \subseteq\,\,\,\, \cspace([\ba_1, \ba_2]) \,\,\,\,\subseteq\,\,\,\, \cspace([\ba_1, \ba_2, \ba_3])\,\,\,\, \subseteq\,\,\,\, \ldots,
$$
where $\cspace([\ldots])$ is the subspace spanned by the vectors included in the brackets. The idea of QR decomposition is the construction of a sequence of orthonormal vectors $\bq_1, \bq_2, \ldots$ that span the same successive subspaces.
$$
\bigg\{\cspace([\bq_1])=\cspace([\ba_1])\bigg\}\subseteq
\bigg\{\cspace([\bq_1, \bq_2])=\cspace([\ba_1, \ba_2])\bigg\}\subseteq
\bigg\{\cspace([\bq_1, \bq_2, \bq_3])=\cspace([\ba_1, \ba_2, \ba_3])\bigg\}
\subseteq \ldots,
$$
We present the result of the QR decomposition in the following theorem and delay the discussion of its existence to the following sections.
\begin{theoremHigh}[QR Decomposition]\label{theorem:qr-decomposition}
Every $m\times n$ matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ (with linearly independent or dependent columns) with $m\geq n$ can be factored as
$$
\bA = \bQ\bR,
$$
where
\begin{enumerate}
\item \textbf{Reduced}: $\bQ$ is $m\times n$ with orthonormal columns and $\bR$ is an $n\times n$ upper triangular matrix which is known as the \textbf{reduced QR decomposition};
\item \textbf{Full}: $\bQ$ is $m\times m$ with orthonormal columns and $\bR$ is an $m\times n$ upper triangular matrix, which is known as the \textbf{full QR decomposition}. If we further restrict the upper triangular matrix to be a square matrix, the full QR decomposition can be written as
$$
\bA = \bQ\begin{bmatrix}
\bR_0\\
\bzero
\end{bmatrix},
$$
where $\bR_0$ is an $n\times n$ upper triangular matrix.
\end{enumerate}
Specifically, when $\bA$ has full rank, i.e., linearly independent columns, $\bR$ is nonsingular in the \textit{reduced} case, which implies that the diagonal entries of $\bR$ are nonzero. Under this condition, if we further restrict the diagonal entries of $\bR$ to be positive, the \textit{reduced} QR decomposition is \textbf{unique}. The \textit{full} QR decomposition is normally not unique since the right-most $(m-n)$ columns of $\bQ$ can be any orthonormal completion.
\end{theoremHigh}
\section{Project a Vector Onto Another Vector}\label{section:project-onto-a-vector}
Projecting a vector $\ba$ onto a vector $\bb$ means finding the vector closest to $\ba$ on the line through $\bb$. The projection vector $\widehat{\ba}$ is some multiple of $\bb$; let $\widehat{\ba} = \widehat{x} \bb$, so that $\ba-\widehat{\ba}$ is perpendicular to $\bb$, as shown in Figure~\ref{fig:project-line}. We then obtain the following result:
\begin{tcolorbox}[title={Project Vector $\ba$ Onto Vector $\bb$}]
Since $\ba^\perp =\ba-\widehat{\ba}$ is perpendicular to $\bb$, we have $(\ba-\widehat{x}\bb)^\top\bb=0$, so that $\widehat{x} = \frac{\ba^\top\bb}{\bb^\top\bb}$ and $\widehat{\ba} = \frac{\ba^\top\bb}{\bb^\top\bb}\bb = \frac{\bb\bb^\top}{\bb^\top\bb}\ba$.
\end{tcolorbox}
\begin{figure}[h!]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Project onto a line]{\label{fig:project-line}
\includegraphics[width=0.47\linewidth]{./imgs/projectline.pdf}}
\quad
\subfigure[Project onto a space]{\label{fig:project-space}
\includegraphics[width=0.47\linewidth]{./imgs/projectspace.pdf}}
\caption{Project a vector onto a line and a space.}
\label{fig:projection-qr}
\end{figure}
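As a minimal numerical sketch of the boxed formula (the sample vectors and helper functions below are our own, not part of the text), we can compute $\widehat{x}$ and $\widehat{\ba}$ and verify that the residual is perpendicular to $\bb$:

```python
def dot(u, v):
    # inner product u^T v
    return sum(ui * vi for ui, vi in zip(u, v))

def project_onto_vector(a, b):
    # xhat = (a^T b) / (b^T b);  ahat = xhat * b
    xhat = dot(a, b) / dot(b, b)
    return [xhat * bi for bi in b]

a = [1.0, 2.0]
b = [3.0, 0.0]
a_hat = project_onto_vector(a, b)
residual = [ai - hi for ai, hi in zip(a, a_hat)]
print(a_hat)             # projection of a on the line of b
print(dot(residual, b))  # 0.0, i.e., (a - ahat) is perpendicular to b
```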
\section{Project a Vector Onto a Plane}\label{section:project-onto-a-plane}
Projecting a vector $\ba$ onto the space spanned by $\bb_1, \bb_2, \ldots, \bb_n$ means finding the vector closest to $\ba$ in the column space of $[\bb_1, \bb_2, \ldots, \bb_n]$. The projection vector $\widehat{\ba}$ is a combination of $\bb_1, \bb_2, \ldots, \bb_n$: $\widehat{\ba} = \widehat{x}_1\bb_1+ \widehat{x}_2\bb_2+\ldots+\widehat{x}_n\bb_n$. This is actually a least squares problem. To find the projection, we just solve the normal equation $\bB^\top\bB\widehat{\bx} = \bB^\top\ba$, where $\bB=[\bb_1, \bb_2, \ldots, \bb_n]$ and $\widehat{\bx}=[\widehat{x}_1, \widehat{x}_2, \ldots, \widehat{x}_n]^\top$. We refer to \citep{strang1993introduction, trefethen1997numerical, yang2000matrix, golub2013matrix, lu2021rigorous} for the details of this projection view in least squares, as it is not the main interest of this survey. For each vector $\bb_i$, the projection of $\ba$ in the direction of $\bb_i$ can be analogously obtained by
$$
\widehat{\ba}_i = \frac{\bb_i\bb_i^\top}{\bb_i^\top\bb_i}\ba, \gap \forall i \in \{1,2,\ldots, n\}.
$$
Let $\widehat{\ba}=\sum_{i=1}^{n}\widehat{\ba}_i$. When the vectors $\bb_1, \bb_2, \ldots, \bb_n$ are mutually orthogonal, this results in
$$
\ba^\perp = (\ba-\widehat{\ba}) \perp \cspace(\bB),
$$
i.e., $(\ba-\widehat{\ba})$ is perpendicular to the column space of $\bB=[\bb_1, \bb_2, \ldots, \bb_n]$ as shown in Figure~\ref{fig:project-space}.
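A small sketch (with our own toy vectors) for the case where the $\bb_i$'s are mutually orthogonal: summing the per-direction projections yields $\widehat{\ba}$ with $(\ba - \widehat{\ba}) \perp \cspace(\bB)$:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project_onto_vector(a, b):
    # projection of a in the direction of b: (b b^T / b^T b) a
    c = dot(a, b) / dot(b, b)
    return [c * bi for bi in b]

# two mutually orthogonal directions spanning the xy-plane of R^3
b1, b2 = [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]
a = [3.0, 4.0, 5.0]

# sum the per-direction projections (valid here because b1 is orthogonal to b2)
p1, p2 = project_onto_vector(a, b1), project_onto_vector(a, b2)
a_hat = [x + y for x, y in zip(p1, p2)]
residual = [ai - hi for ai, hi in zip(a, a_hat)]
print(a_hat)                                 # [3.0, 4.0, 0.0]
print(dot(residual, b1), dot(residual, b2))  # 0.0 0.0
```

For non-orthogonal $\bb_i$'s, one must instead solve the normal equation above, since the individual projections then overlap.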
\section{Existence of the QR Decomposition via the Gram-Schmidt Process}\label{section:gram-schmidt-process}
\subsubsection*{\textbf{First View by Projection Directly}}
Consider three linearly independent vectors $\{\ba_1, \ba_2, \ba_3\}$ and the space spanned by them, $\cspace{([\ba_1, \ba_2, \ba_3])}$, i.e., the column space of the matrix $[\ba_1, \ba_2, \ba_3]$. We intend to construct three mutually orthogonal vectors $\{\bb_1, \bb_2, \bb_3\}$ such that $\cspace{([\bb_1, \bb_2, \bb_3])}$ = $\cspace{([\ba_1, \ba_2, \ba_3])}$. Then we divide each orthogonal vector by its length to normalize it. This process produces three mutually orthonormal vectors $\bq_1 = \frac{\bb_1}{||\bb_1||}$, $\bq_2 = \frac{\bb_2}{||\bb_2||}$, $\bq_3 = \frac{\bb_3}{||\bb_3||}$.
For the first vector, we choose $\bb_1 = \ba_1$ directly. The second vector $\bb_2$ must be perpendicular to the first one; it is the vector $\ba_2$ minus its projection onto $\bb_1$:
\begin{equation}
\begin{aligned}
\bb_2 &= \ba_2- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} \ba_2 = (\bI- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} )\ba_2 \qquad &(\text{Projection view})\\
&= \ba_2- \underbrace{\frac{ \bb_1^\top \ba_2}{\bb_1^\top\bb_1} \bb_1}_{\widehat{\ba}_2}, \qquad &(\text{Combination view}) \nonumber
\end{aligned}
\end{equation}
where the first equation shows that $\bb_2$ is the product of the matrix $(\bI- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} )$ and the vector $\ba_2$, i.e., it projects $\ba_2$ onto the orthogonal complement of $\cspace{([\bb_1])}$. The second equality shows that $\ba_2$ is a combination of $\bb_1$ and $\bb_2$.
Clearly, the space spanned by $\bb_1, \bb_2$ is the same space spanned by $\ba_1, \ba_2$. The situation is shown in Figure~\ref{fig:gram-schmidt1}, in which we choose \textbf{the direction of $\bb_1$ as the $x$-axis of the Cartesian coordinate system}. $\widehat{\ba}_2$ is the projection of $\ba_2$ onto the line of $\bb_1$. The figure shows that the component of $\ba_2$ perpendicular to $\bb_1$ is $\bb_2 = \ba_2 - \widehat{\ba}_2$.
The third vector $\bb_3$ must be perpendicular to both $\bb_1$ and $\bb_2$; it is the vector $\ba_3$ minus its projection onto the plane spanned by $\bb_1$ and $\bb_2$:
\begin{equation}\label{equation:gram-schdt-eq2}
\begin{aligned}
\bb_3 &= \ba_3- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} \ba_3 - \frac{\bb_2 \bb_2^\top}{\bb_2^\top\bb_2} \ba_3 = (\bI- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} - \frac{\bb_2 \bb_2^\top}{\bb_2^\top\bb_2} )\ba_3 \qquad &(\text{Projection view})\\
&= \ba_3- \underbrace{\frac{ \bb_1^\top\ba_3}{\bb_1^\top\bb_1} \bb_1}_{\widehat{\ba}_3} - \underbrace{\frac{ \bb_2^\top\ba_3}{\bb_2^\top\bb_2} \bb_2}_{\bar{\ba}_3}, \qquad &(\text{Combination view})
\end{aligned}
\end{equation}
where the first equation shows that $\bb_3$ is the product of the matrix $(\bI- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} - \frac{\bb_2 \bb_2^\top}{\bb_2^\top\bb_2} )$ and the vector $\ba_3$, i.e., it projects $\ba_3$ onto the orthogonal complement of $\cspace{([\bb_1, \bb_2])}$. The second equality shows that $\ba_3$ is a combination of $\bb_1, \bb_2, \bb_3$. We will see that this property is essential in the idea of the QR decomposition.
Again, it can be shown that the space spanned by $\bb_1, \bb_2, \bb_3$ is the same space spanned by $\ba_1, \ba_2, \ba_3$. The situation is shown in Figure~\ref{fig:gram-schmidt2}, in which we choose \textbf{the direction of $\bb_2$ as the $y$-axis of the Cartesian coordinate system}. $\widehat{\ba}_3$ is the projection of $\ba_3$ onto the line of $\bb_1$, and $\bar{\ba}_3$ is the projection of $\ba_3$ onto the line of $\bb_2$. The figure shows that the component of $\ba_3$ perpendicular to both $\bb_1$ and $\bb_2$ is $\bb_3=\ba_3-\widehat{\ba}_3-\bar{\ba}_3$.
Finally, we normalize each vector by dividing it by its length, which produces three mutually orthonormal vectors $\bq_1 = \frac{\bb_1}{||\bb_1||}$, $\bq_2 = \frac{\bb_2}{||\bb_2||}$, $\bq_3 = \frac{\bb_3}{||\bb_3||}$.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Project $\ba_2$ onto the space perpendicular to $\bb_1$.]{\label{fig:gram-schmidt1}
\includegraphics[width=0.47\linewidth]{./imgs/gram-schmidt1.pdf}}
\quad
\subfigure[Project $\ba_3$ onto the space perpendicular to $\bb_1, \bb_2$.]{\label{fig:gram-schmidt2}
\includegraphics[width=0.47\linewidth]{./imgs/gram-schmidt2.pdf}}
\caption{The Gram-Schmidt process.}
\label{fig:gram-schmidt-12}
\end{figure}
This idea can be extended to any number of vectors rather than only three, and the procedure is known as the \textit{Gram-Schmidt process}. After this process, the matrix $\bA$ will be triangularized. The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but it appeared earlier in the work of Pierre-Simon Laplace; in the theory of Lie groups, it is generalized by the Iwasawa decomposition.
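The three-vector construction above translates almost directly into code. The following is a minimal sketch (the sample vectors and function names are our own), together with a check that the outputs are mutually orthonormal:

```python
from math import sqrt

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: subtract from each a_k its projections
    onto the previously computed q's, then normalize to unit length."""
    qs = []
    for a in vectors:
        b = list(a)
        for q in qs:
            c = dot(q, a)                        # component of a along q
            b = [bi - c * qi for bi, qi in zip(b, q)]
        norm = sqrt(dot(b, b))                   # nonzero for independent input
        qs.append([bi / norm for bi in b])
    return qs

q1, q2, q3 = gram_schmidt([[1.0, 1.0, 0.0],
                           [1.0, 0.0, 1.0],
                           [0.0, 1.0, 1.0]])
# q_i^T q_j should be (approximately) the Kronecker delta
print(dot(q1, q1), dot(q1, q2), dot(q2, q3))
```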
As we mentioned previously, the idea of the QR decomposition is the construction of a sequence of orthonormal vectors $\bq_1, \bq_2, \ldots$ that span the same successive subspaces.
$$
\bigg\{\cspace([\bq_1])=\cspace([\ba_1])\bigg\} \subseteq
\bigg\{\cspace([\bq_1, \bq_2])=\cspace([\ba_1, \ba_2])\bigg\} \subseteq
\bigg\{\cspace([\bq_1, \bq_2, \bq_3])=\cspace([\ba_1, \ba_2, \ba_3])\bigg\}
\subseteq \ldots,
$$
This implies that any $\ba_k$ lies in $\cspace([\bq_1, \bq_2, \ldots, \bq_k])$. \footnote{Likewise, any $\bq_k$ lies in $\cspace([\ba_1, \ba_2, \ldots, \ba_k])$.} Once we have found these orthonormal vectors, an upper triangular matrix $\bR$ is needed to reconstruct the $\ba_i$'s from the orthonormal matrix $\bQ=[\bq_1, \bq_2, \ldots, \bq_n]$, such that $\bA = \bQ\bR$.
The Gram–Schmidt process is not the only algorithm for finding the QR decomposition.
Several others exist, such as Householder reflections and Givens rotations, which are more reliable in the presence of round-off errors. These QR decomposition methods may also change the order in which the columns of $\bA$ are processed.
\subsubsection*{\textbf{Another View by Inner Product (with Projection Implicitly)}}
In the direct projection view above, we first find orthogonal vectors and then normalize them to unit length. The projection relies on the finding in Section~\ref{section:project-onto-a-vector}, where the projection of the vector $\ba$ onto the vector $\bb$ is given by $\widehat{\ba} = \frac{\ba^\top\bb}{\bb^\top\bb}\bb = \frac{\bb\bb^\top}{\bb^\top\bb}\ba$. Now suppose $\bb$ is of unit length; then
\begin{equation}\label{equation:qr-projection-unit}
\widehat{\ba} = (\ba^\top\bb) \bb.
\end{equation}
Again, consider three linearly independent vectors $\{\ba_1, \ba_2, \ba_3\}$ and the space spanned by them, $\cspace{([\ba_1, \ba_2, \ba_3])}$. Abusing notation, we will write the $\bb_k$ of the first view as $\ba_k^\perp$ in this view to emphasize that $\ba_k^\perp$ is orthogonal to $\{\bq_1, \ldots, \bq_{k-1}\}$. The process proceeds as follows:
\begin{itemize}
\item Compute $\bq_1$ of unit length so that $\cspace([\bq_1]) = \cspace([\ba_1])$:
\begin{itemize}
\item Compute the length of $\ba_1$: $r_{11} = ||\ba_1||$;
\item Set $\bq_1$ to a unit vector in the direction of $\ba_1$: $\bq_1=\frac{\ba_1}{||\ba_1||}=\frac{\ba_1}{r_{11}}$;
\end{itemize}
\item Compute $\bq_2$ of unit length so that $\cspace([\bq_1,\bq_2]) = \cspace([\ba_1,\ba_2])$:
\begin{itemize}
\item Compute $r_{12}$ so that $r_{12}\bq_1 = (\ba_2^\top\bq_1) \bq_1$ equals the component of $\ba_2$ in the direction of $\bq_1$ by Equation~\eqref{equation:qr-projection-unit};
\item Compute the component of $\ba_2$ that is orthogonal to $\bq_1$: $\ba_2^\perp = \ba_2 - r_{12}\bq_1$;
\item Compute the length of vector $\ba_2^\perp$: $r_{22}=||\ba_2^\perp||$;
\item Set $\bq_2$ to a unit vector in the direction of $\ba_2^\perp$: $\bq_2 = \frac{\ba_2^\perp}{||\ba_2^\perp||}=\frac{\ba_2^\perp}{r_{22}}$;
\item This results in:
$$
\begin{bmatrix}
\ba_1 & \ba_2
\end{bmatrix}=
\begin{bmatrix}
\bq_1 & \bq_2
\end{bmatrix}
\begin{bmatrix}
r_{11}& r_{12}\\
0 & r_{22}
\end{bmatrix};
$$
\end{itemize}
\item Compute $\bq_3$ of unit length so that $\cspace([\bq_1,\bq_2,\bq_3]) = \cspace([\ba_1,\ba_2,\ba_3])$:
\begin{itemize}
\item Compute $r_{13}$ so that $r_{13}\bq_1 = (\ba_3^\top\bq_1) \bq_1$ equals the component of $\ba_3$ in the direction of $\bq_1$ by Equation~\eqref{equation:qr-projection-unit};
\item Compute $r_{23}$ so that $r_{23}\bq_2 = (\ba_3^\top\bq_2) \bq_2$ equals the component of $\ba_3$ in the direction of $\bq_2$ by Equation~\eqref{equation:qr-projection-unit};
\item Compute the component of $\ba_3$ that is orthogonal to $\bq_1$ and $\bq_2$: $\ba_3^\perp = \ba_3 - r_{13}\bq_1 - r_{23}\bq_2$;
\item Compute the length of vector $\ba_3^\perp$: $r_{33}=||\ba_3^\perp||$;
\item Set $\bq_3$ to a unit vector in the direction of $\ba_3^\perp$: $\bq_3 = \frac{\ba_3^\perp}{||\ba_3^\perp||}=\frac{\ba_3^\perp}{r_{33}}$;
\item This results in:
$$
\begin{bmatrix}
\ba_1 & \ba_2 & \ba_3
\end{bmatrix}=
\begin{bmatrix}
\bq_1 & \bq_2 & \bq_3
\end{bmatrix}
\begin{bmatrix}
r_{11}& r_{12} & r_{13}\\
0 & r_{22} & r_{23} \\
0 & 0 & r_{33}
\end{bmatrix}.
$$
\end{itemize}
\end{itemize}
Again, this idea can be extended to any number of vectors rather than only three. The process above reveals the meaning of the entry $r_{ij}$ of the triangular matrix $\bR$: it represents the component of $\ba_j$ in the direction of $\bq_i$. This matches the matrix multiplication result:
$$
\ba_j = \sum_{i=1}^{j} r_{ij} \bq_i.
$$
\subsubsection*{\textbf{Main Proof}}
Though the existence of the QR decomposition is conceptually intuitive from the two views of the Gram-Schmidt process above, the formal proof is somewhat involved and requires an inductive argument. We now prove it rigorously.
\begin{proof}[of Theorem~\ref{theorem:qr-decomposition}]
We will prove by induction that every $m\times n$ matrix $\bA$ with linearly independent columns admits a \textit{reduced} QR decomposition. The \textit{full} QR decomposition can be done by completing the orthonormal columns in $\bQ$. The $1\times 1$ case is trivial by setting $\bQ=\frac{\ba_1}{||\ba_1||}, \bR=||\ba_1||$, thus, $\bA=\ba_1=\bQ\bR$.
Suppose any $m\times k$ matrix $\bA_k$ with linearly independent columns admits a reduced QR decomposition. If we can prove that any $m\times (k+1)$ matrix $\bA_{k+1}$ can also be factored in this way, then the proof is complete. Write $\bA_{k+1}=[\bA_k, \ba_{k+1}]$, where $\bA_k$ admits the reduced QR decomposition by the inductive hypothesis
$$
\bA_k = \bQ_k\bR_k,
$$
where $\bQ_k$ contains orthonormal columns $\bQ_k^\top\bQ_k=\bI_k$ and $\bR_k \in \real^{k\times k}$ is upper triangular. Also, by the induction hypothesis,
if the values on the diagonal of $\bR_k$ are chosen to be positive, then the reduced QR decomposition of
$\bA_k = \bQ_k\bR_k$ is \textbf{unique}.
Suppose further $\bA_{k+1}$ can be factored as
\begin{equation}\label{equation:qr-rigorous-proof}
\bA_{k+1} =
\begin{bmatrix}
\bA_k& \ba_{k+1}
\end{bmatrix}
=
\begin{bmatrix}
\widetildebQ_k& \bq_{k+1}
\end{bmatrix}
\begin{bmatrix}
\widetildebR_k&
\begin{matrix}
\br_{k}\\
r_{k+1}
\end{matrix}
\end{bmatrix}
,
\end{equation}
where apparently $\bA_k = \widetildebQ_k \widetildebR_k$ is a reduced QR decomposition of $\bA_k$, and if we restrict the diagonal values of $\widetildebR_k$ to be positive, the factorization is \textbf{unique}, which indicates $\widetildebQ_k = \bQ_k$ and $\widetildebR_k=\bR_k$. Moreover, Equation~\eqref{equation:qr-rigorous-proof} implies
$$
\begin{aligned}
&\ba_{k+1} = \bQ_k\br_k + \bq_{k+1}r_{k+1} \\
\leadto \,\, &\bQ_k^\top \ba_{k+1} = \bQ_k^\top ( \bQ_k\br_k + \bq_{k+1}r_{k+1})=\br_k + \bQ_k^\top\bq_{k+1}r_{k+1}.
\end{aligned}
$$
Since the columns of $\bQ_k$ are assumed orthogonal to $\bq_{k+1}$, it follows that $\bQ_k^\top\bq_{k+1}=\bzero$ and $\br_k=\bQ_k^\top \ba_{k+1}$. When $\bQ_k$ is fixed, $\br_k$ is thus \textbf{uniquely} determined.
Let $\ba_{k+1}^\perp = \ba_{k+1}-\bQ_k\br_k$; then $\ba_{k+1}^\perp$ is orthogonal to the columns of $\bQ_k$. To see this, $\bQ_k^\top \ba_{k+1}^\perp = \bQ_k^\top(\ba_{k+1}-\bQ_k\br_k) =\bzero$, since we construct $\br_k$ by $\br_k=\bQ_k^\top \ba_{k+1}$. Since $\ba_{k+1}$ is linearly independent of the columns of $\bA_k$, and hence of the columns of $\bQ_k$, $\ba_{k+1}^\perp$ is nonzero. Therefore, letting $r_{k+1}=||\ba_{k+1}^\perp||$ and $\bq_{k+1} = \ba_{k+1}^\perp / r_{k+1}$, we obtain the \textbf{unique} reduced QR decomposition of $\bA_{k+1}$ with positive diagonals in the upper triangular matrix. This completes the proof.
\end{proof}
\section{Orthogonal vs Orthonormal}\index{Orthogonal}\index{Orthonormal}\label{section:orthogonal-orthonormal-qr}
The vectors $\bq_1, \bq_2, \ldots, \bq_n\in \real^m$ are mutually orthogonal when their dot products $\bq_i^\top\bq_j$ are zero whenever $i \neq j$. When each vector is divided by its length, the vectors become orthogonal unit vectors. Then the vectors $\bq_1, \bq_2, \ldots, \bq_n$ are mutually orthonormal. We put the orthonormal vectors into a matrix $\bQ$.
\begin{itemize}
\item When $m> n$: the matrix $\bQ$ is easy to work with because $\bQ^\top\bQ=\bI \in \real^{n\times n}$. Such $\bQ$ with $m> n$ is sometimes referred to as a \textbf{semi-orthogonal} matrix.
\item When $m= n$: the matrix $\bQ$ is square, $\bQ^\top\bQ=\bI$ means that $\bQ^\top=\bQ^{-1}$, i.e., the transpose of $\bQ$ is also the inverse of $\bQ$. Then we also have $\bQ\bQ^\top=\bI$, i.e., $\bQ^\top$ is the \textbf{two-sided inverse} of $\bQ$. We call this $\bQ$ an \textbf{orthogonal matrix}. \footnote{Note here we use the term \textit{orthogonal matrix} to mean the matrix $\bQ$ has orthonormal columns. The term \textit{orthonormal matrix} is \textbf{not} used for historical reasons.}
\end{itemize}
To see this, we have
$$
\begin{bmatrix}
\bq_1^\top \\
\bq_2^\top\\
\vdots \\
\bq_n^\top
\end{bmatrix}
\begin{bmatrix}
\bq_1 &\bq_2 & \ldots & \bq_n
\end{bmatrix}
=
\begin{bmatrix}
1 & & & \\
& 1 & & \\
& & \ddots & \\
& & & 1
\end{bmatrix}.
$$
In other words, $\bq_i^\top \bq_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta. The columns of an orthogonal matrix $\bQ\in \real^{n\times n}$ form an \textbf{orthonormal basis} of $\real^n$. \footnote{Notice that the \textit{orthogonal matrix} $\bQ$ contains an \textit{orthonormal basis}, \textbf{not} an orthogonal basis.}
Orthogonal matrices can be viewed as matrices that change the basis in which other vectors are represented. Hence they preserve the inner product (and thus the angle) between vectors:
$$
\text{inner product: } \qquad \bu^\top \bv = (\bQ\bu)^\top(\bQ\bv).
$$
In particular, setting $\bv=\bu$ shows that multiplication by $\bQ$ also preserves lengths:
$$
\text{length: } \qquad ||\bQ\bu|| = ||\bu||.
$$
In the real case, multiplication by an orthogonal matrix $\bQ$ rotates (if $\det(\bQ)=1$) or reflects (if $\det(\bQ)=-1$) the original vector space. Many decomposition algorithms result in two orthogonal matrices, so such rotations or reflections happen twice. See Section~\ref{section:coordinate-transformation} (p.~\pageref{section:coordinate-transformation}) for a discussion on the coordinate transformation in matrix decomposition.
\begin{example}[Rotation and Reflection in Orthogonal Matrices]
To see the rotation and reflection in orthogonal matrices, suppose
$$
\bQ_1 = \begin{bmatrix}
-1 & \\
& -1
\end{bmatrix}
\qquad
\text{and}
\qquad
\bQ_2 = \begin{bmatrix}
1 & \\
& -1
\end{bmatrix},
$$
where $\det(\bQ_1)=1$ and $\det(\bQ_2)=-1$. For vector $\bv=[1,1]^\top$, we have
$$
\bQ_1\bv = \begin{bmatrix}
-1 \\
-1
\end{bmatrix}
\qquad
\text{and}
\qquad
\bQ_2\bv = \begin{bmatrix}
1 \\
-1
\end{bmatrix}.
$$
Thus, $\bQ_1$ rotates $\bv$ about the origin $\bzero$ (by $180^\circ$), and $\bQ_2$ reflects $\bv$ across the $x$-axis.
The illustration of the rotation and reflection is shown in Figure~\ref{fig:orthogonal-rotate-reflect}.
\exampbar
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[$\bQ_1$ rotates $\bv$ along the $\bzero$ point.]{\label{fig:orthogonal-rotate1}
\includegraphics[width=0.47\linewidth]{./imgs/orthogonal-rotate.pdf}}
\quad
\subfigure[$\bQ_2$ reflects $\bv$ along the $x$-axis.]{\label{fig:orthogonal-rotate2}
\includegraphics[width=0.47\linewidth]{./imgs/orthogonal-rotate2.pdf}}
\caption{Rotation and reflection in orthogonal matrices.}
\label{fig:orthogonal-rotate-reflect}
\end{figure}
\end{example}
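The example above can be checked numerically; the following sketch (with our own helper functions for $2\times 2$ matrices) confirms the determinants and the mapped vectors:

```python
def matvec2(Q, v):
    # 2x2 matrix-vector product
    return [Q[0][0] * v[0] + Q[0][1] * v[1],
            Q[1][0] * v[0] + Q[1][1] * v[1]]

def det2(Q):
    return Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]

Q1 = [[-1.0, 0.0], [0.0, -1.0]]   # det = +1: rotation (here by 180 degrees)
Q2 = [[1.0, 0.0], [0.0, -1.0]]    # det = -1: reflection across the x-axis
v = [1.0, 1.0]

print(det2(Q1), matvec2(Q1, v))   # 1.0 [-1.0, -1.0]
print(det2(Q2), matvec2(Q2, v))   # -1.0 [1.0, -1.0]
```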
\section{Properties of the QR Decomposition}
For any matrix $\bA\in\real^{m\times n}$, $\nspace(\bA^\top)$ is the orthogonal complement of the column space $\cspace(\bA)$ in $\real^m$: $\dim(\nspace(\bA^\top))+\dim(\cspace(\bA))=m$.
This is a consequence of the rank-nullity theorem, and the proof can be found in Appendix~\ref{appendix:fundamental-rank-nullity} (p.~\pageref{appendix:fundamental-rank-nullity}).
In particular, from the QR decomposition, we can find orthonormal bases for the corresponding subspaces. In the singular value decomposition (SVD), we will also find orthonormal bases for $\nspace(\bA)$ and $\cspace(\bA^\top)$.
\begin{lemma}[Orthonormal Basis in $\real^m$]\label{lemma:qr-four-orthonormal-Basis}
For full QR decomposition of $\bA\in \real^{m\times n}$ with full rank $n$ and $m\geq n$, we have the following property:
\begin{itemize}
\item $\{\bq_1,\bq_2, \ldots,\bq_n\}$ is an orthonormal basis of $\cspace(\bA)$;
\item $\{\bq_{n+1},\bq_{n+2}, \ldots,\bq_m\}$ is an orthonormal basis of $\nspace(\bA^\top)$.
\end{itemize}
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:qr-four-orthonormal-Basis}]
From the Gram-Schmidt process, it is straightforward that $span\{\ba_1, \ba_2, \ldots, \ba_k\}$ is equal to $span\{\bq_1, \bq_2, \ldots, \bq_k\}$ for all $k\in\{1, 2, \ldots, n\}$. Thus $\cspace(\bA) = span\{\ba_1, \ba_2, \ldots, \ba_n\}=span\{\bq_1, \bq_2, \ldots, \bq_n\}$, and $\{\bq_1, \bq_2, \ldots, \bq_n\}$ is an orthonormal basis for the column space of $\bA$. Since $ \nspace(\bA^\top) \bot\cspace(\bA)$, we have $\dim(\nspace(\bA^\top))=m-\dim(\cspace(\bA))=m-n$. The space spanned by $\{\bq_{n+1},\bq_{n+2}, \ldots,\bq_m\}$ is also perpendicular to $\cspace(\bA)$ and has dimension $m-n$. Thus, $\{\bq_{n+1},\bq_{n+2}, \ldots,\bq_m\}$ is an orthonormal basis for $\nspace(\bA^\top)$.
\end{proof}
\section{Computing the Reduced QR Decomposition via the Gram-Schmidt Process}\label{section:qr-gram-compute}
We write out this form of the reduced QR Decomposition such that $\bA=\bQ\bR$ where $\bQ\in \real^{m\times n}$ and $\bR\in \real^{n\times n}$:
\begin{equation}
\bA=\left[
\begin{matrix}
\ba_1 & \ba_2 & ... & \ba_n
\end{matrix}
\right]
=\left[
\begin{matrix}
\bq_1 & \bq_2 & ... & \bq_n
\end{matrix}
\right]
\begin{bmatrix}
r_{11} & r_{12}& \dots & r_{1n}\\
& r_{22}& \dots & r_{2n}\\
& & \ddots & \vdots \\
\multicolumn{2}{c}{\raisebox{1.3ex}[0pt]{\Huge0}} & & r_{nn} \nonumber
\end{bmatrix}.
\end{equation}
The orthonormal matrix $\bQ$ can be easily calculated by the Gram-Schmidt process. To see why we have the upper triangular matrix $\bR$, we write out these equations
\begin{equation*}
\begin{aligned}
\ba_1 & = r_{11}\bq_1 &= \sum_{i=1}^{1} r_{i1}\bq_i, \\
\ba_2 & = r_{12}\bq_1 + r_{22}\bq_2&= \sum_{i=1}^{2} r_{i2}\bq_i, \\
\ba_3 &= r_{13}\bq_1 + r_{23}\bq_2 + r_{33} \bq_3&= \sum_{i=1}^{3}r_{i3} \bq_i, \\
&\vdots& \\
\ba_k &= r_{1k}\bq_1 + r_{2k}\bq_2 + \ldots + r_{kk}\bq_k &= \sum_{i=1}^{k} r_{ik} \bq_i,\\
&\vdots& \\
\ba_n &= r_{1n}\bq_1 + r_{2n}\bq_2 + \ldots + r_{nn}\bq_n &= \sum_{i=1}^{n} r_{in} \bq_i ,
\end{aligned}
\end{equation*}
which coincides with the second equation of Equation~\eqref{equation:gram-schdt-eq2} and conforms to the form of an upper triangular matrix $\bR$. If we extend the idea of Equation~\eqref{equation:gram-schdt-eq2} to the $k$-th term, we get
$$
\begin{aligned}
\ba_k &= \sum_{i=1}^{k-1}(\bq_i^\top\ba_k)\bq_i + \ba_k^\perp \\
&= \sum_{i=1}^{k-1}(\bq_i^\top\ba_k)\bq_i + ||\ba_k^\perp||\cdot \bq_k,
\end{aligned}
$$
which implies we can gradually orthonormalize $\bA$ to an orthonormal set $\bQ=[\bq_1, \bq_2, \ldots, \bq_n]$ by
\begin{equation}\label{equation:qr-gsp-equation}
\left\{
\begin{aligned}
r_{ik} &= \bq_i^\top\ba_k, \,\,\,\,\forall i \in \{1,2,\ldots, k-1\};\\
\ba_k^\perp&= \ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i;\\
r_{kk} &= ||\ba_k^\perp||;\\
\bq_k &= \ba_k^\perp/r_{kk}.
\end{aligned}
\right.
\end{equation}
The procedure is formulated in Algorithm~\ref{alg:reduced-qr}.
\begin{algorithm}[h]
\caption{Reduced QR Decomposition via Gram-Schmidt Process}
\label{alg:reduced-qr}
\begin{algorithmic}[1]
\Require Matrix $\bA$ has linearly independent columns with size $m\times n $ and $m\geq n$;
\For{$k=1$ to $n$} \Comment{compute $k$-th column of $\bQ,\bR$}
\For{$i=1$ to $k-1$}
\State $r_{ik} =\bq_i^\top\ba_k$; \Comment{entry ($i,k$) of $\bR$, $2m-1$ flops}
\EndFor \Comment{all $k-1$ iterations: $(k-1)(2m-1)$ flops}
\State $\ba_k^\perp= \ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i$;\Comment{$2m(k-1)$ flops}
\State $r_{kk} = ||\ba_k^\perp||$; \Comment{main diagonal of $\bR$, $2m$ flops}
\State $\bq_k = \ba_k^\perp/r_{kk}$; \Comment{$m$ flops}
\EndFor
\State Output $\bQ=[\bq_1, \ldots, \bq_n]$ and $\bR$ with entry $(i,k)$ being $r_{ik}$;
\end{algorithmic}
\end{algorithm}
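As a minimal sketch of Algorithm~\ref{alg:reduced-qr} (the function and variable names below are our own), the steps $r_{ik}=\bq_i^\top\ba_k$, $\ba_k^\perp=\ba_k-\sum_{i}r_{ik}\bq_i$, $r_{kk}=||\ba_k^\perp||$, and $\bq_k=\ba_k^\perp/r_{kk}$ translate almost line by line:

```python
from math import sqrt

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def reduced_qr(a_cols):
    """Reduced QR via the Gram-Schmidt steps; `a_cols` holds the
    columns of A (assumed linearly independent)."""
    n = len(a_cols)
    q_cols = []
    R = [[0.0] * n for _ in range(n)]
    for k, a in enumerate(a_cols):
        perp = list(a)
        for i in range(k):
            R[i][k] = dot(q_cols[i], a)               # r_ik = q_i^T a_k
            perp = [p - R[i][k] * q for p, q in zip(perp, q_cols[i])]
        R[k][k] = sqrt(dot(perp, perp))               # r_kk = ||a_k^perp||
        q_cols.append([p / R[k][k] for p in perp])    # q_k = a_k^perp / r_kk
    return q_cols, R

A_cols = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]]
Q, R = reduced_qr(A_cols)
# reconstruct each column: a_k = sum_i r_ik q_i, recovering A
recon = [[sum(R[i][k] * Q[i][j] for i in range(len(Q))) for j in range(3)]
         for k in range(len(A_cols))]
print(recon)
```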
\begin{theorem}[Algorithm Complexity: Reduced QR via Gram-Schmidt]\label{theorem:qr-reduced}
Algorithm~\ref{alg:reduced-qr} requires $\sim 2mn^2$ flops to compute a reduced QR decomposition of an $m\times n$ matrix with linearly independent columns and $m\geq n$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-reduced}]
In step 3, the computation of $r_{ik}$ is a vector inner product, which requires $m$ multiplications and $m-1$ additions, i.e., $2m-1$ flops. This makes it $\boxed{(k-1)(2m-1)}$ flops over all $k-1$ iterations.
In step 5, the computation of $r_{ik}\bq_i$ needs $m$ flops and there are $k-1$ such scalar-vector multiplications, which makes it $m(k-1)$ flops. The vector subtraction and additions require another $m(k-1)$ flops. Thus step $5$ costs $\boxed{2m(k-1)}$ flops.
In step 6, the vector norm involves a vector inner product plus a square root that takes $\boxed{2m}$ flops.
Step 7 costs $\boxed{m}$ flops for the divisions.
Therefore, computing the $k$-th column of $\bQ,\bR$ requires $(k-1)(2m-1)+ 2m(k-1)+2m+m=4mk-m-k+1$ flops. Letting $f(k) = 4mk-m-k+1$, the total complexity is
$$
\mathrm{cost}= f(n) +f(n-1) + \ldots +f(1).
$$
Simple calculations show that the total complexity is $2mn^2+mn-\frac{n^2-n}{2}$ flops, or $\sim 2mn^2$ flops if we keep only the leading term.
\end{proof}
\subsubsection*{\textbf{Orthogonal Projection}}
We notice again from Equation~\eqref{equation:qr-gsp-equation}, i.e., steps 2 to 6 in Algorithm~\ref{alg:reduced-qr}, that the first two equalities imply
\begin{equation}\label{equation:qr-gsp-equation2}
\left.
\begin{aligned}
r_{ik} &= \bq_i^\top\ba_k, \,\,\,\,\forall i \in \{1,2,\ldots, k-1\}\\
\ba_k^\perp&= \ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i\\
\end{aligned}
\right\}
\rightarrow
\ba_k^\perp= \ba_k- \bQ_{k-1}\bQ_{k-1}^\top \ba_k=(\bI-\bQ_{k-1}\bQ_{k-1}^\top )\ba_k,
\end{equation}
where $\bQ_{k-1}=[\bq_1,\bq_2,\ldots, \bq_{k-1}]$. This implies $\bq_k$ can be obtained by
$$
\bq_k = \frac{\ba_k^\perp}{||\ba_k^\perp||} = \frac{(\bI-\bQ_{k-1}\bQ_{k-1}^\top )\ba_k}{||(\bI-\bQ_{k-1}\bQ_{k-1}^\top )\ba_k||}.
$$
The matrix $(\bI-\bQ_{k-1}\bQ_{k-1}^\top )$ in the above equation is known as an \textit{orthogonal projection matrix} \footnote{More details can be found in Appendix~\ref{section:by-geometry-hat-matrix} (p.~\pageref{section:by-geometry-hat-matrix}).} that projects $\ba_k$ onto the orthogonal complement of the column space of $\bQ_{k-1}$, i.e., it projects the vector so that the result is perpendicular to the column space of $\bQ_{k-1}$. The net result is that the $\ba_k^\perp$ or $\bq_k$ calculated in this way is orthogonal to $\cspace(\bQ_{k-1})$, i.e., it lies in the null space of $\bQ_{k-1}^\top$: $\nspace(\bQ_{k-1}^\top)$, by the fundamental theorem of linear algebra (Theorem~\ref{theorem:fundamental-linear-algebra}, p.~\pageref{theorem:fundamental-linear-algebra}).
Let $\bP_1=(\bI-\bQ_{k-1}\bQ_{k-1}^\top )$; we claimed above that $\bP_1$ is an orthogonal projection matrix such that $\bP_1\bv$ projects $\bv$ onto the null space $\nspace(\bQ_{k-1}^\top)$. Similarly, let $\bP_2=\bQ_{k-1}\bQ_{k-1}^\top$; then $\bP_2$ is also an orthogonal projection matrix such that $\bP_2\bv$ projects $\bv$ onto the column space of $\bQ_{k-1}$.
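A tiny numerical sketch (our own toy $\bQ_{k-1}$ with a single orthonormal column in $\real^3$) verifying that both $\bP_2=\bQ_{k-1}\bQ_{k-1}^\top$ and $\bP_1=\bI-\bP_2$ are idempotent, and that $\bP_1\bv$ is perpendicular to $\cspace(\bQ_{k-1})$:

```python
def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Q_{k-1}: one orthonormal column of R^3
Q = [[1.0], [0.0], [0.0]]
Qt = [[row[0] for row in Q]]               # Q^T
P2 = matmul(Q, Qt)                         # projects onto C(Q)
I3 = [[float(i == j) for j in range(3)] for i in range(3)]
P1 = [[I3[i][j] - P2[i][j] for j in range(3)] for i in range(3)]

v = [[3.0], [4.0], [5.0]]
p1v = matmul(P1, v)
print(matmul(P2, P2) == P2, matmul(P1, P1) == P1)   # idempotent: True True
print(sum(p1v[i][0] * Q[i][0] for i in range(3)))   # 0.0: P1 v is orthogonal to C(Q)
```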
\begin{figure}[h!]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Orthogonal projection]{\label{fig:project-oblique}
\includegraphics[width=0.47\linewidth]{./imgs/project-orthogonal.pdf}}
\quad
\subfigure[Oblique projection]{\label{fig:project-orthogonal}
\includegraphics[width=0.47\linewidth]{./imgs/project-oblique.pdf}}
\caption{Demonstration of the difference between orthogonal projection and oblique projection.}
\label{fig:projection-oblie-and-orthogonal}
\end{figure}
But why can the matrices $\bP_1, \bP_2$ magically project a vector onto the corresponding subspaces? We will show in Lemma~\ref{lemma:rank-of-ttt} that the column space of $\bQ_{k-1}$ is equal to the column space of $\bQ_{k-1}\bQ_{k-1}^\top$:
$$
\cspace(\bQ_{k-1})=\cspace(\bQ_{k-1}\bQ_{k-1}^\top)=\cspace(\bP_2).
$$
Therefore, the result of $\bP_2\bv$ is a linear combination of the columns of $\bP_2$, which lies in the column space of $\bP_2$, i.e., the column space of $\bQ_{k-1}$. The formal definition of a \textit{projection matrix} $\bP$ is that it is idempotent, $\bP^2=\bP$, so that projecting twice is equal to projecting once \footnote{See also Definition~\ref{definition:projection-matrix} (p.~\pageref{definition:projection-matrix}).}. What makes the above $\bP_2=\bQ_{k-1}\bQ_{k-1}^\top $ special is that the projection $\widehat{\bv}$ of any vector $\bv$ is perpendicular to $\bv-\widehat{\bv}$:
$$
(\widehat{\bv}=\bP_2\bv) \perp (\bv-\widehat{\bv}).
$$
This recovers the definition we gave above: the \textit{orthogonal projection matrix} \footnote{See also Definition~\ref{definition:orthogonal-projection-matrix} (p.~\pageref{definition:orthogonal-projection-matrix}).}. To
avoid confusion, one may use the term \textit{oblique projection matrix} in the nonorthogonal
case; the difference is shown in Figure~\ref{fig:projection-oblie-and-orthogonal}. When $\bP_2$ is an orthogonal projection matrix, $\bP_1=\bI-\bP_2$ is also an orthogonal projection matrix that projects any vector onto the space perpendicular to $\cspace(\bQ_{k-1})$, i.e., $\nspace(\bQ_{k-1}^\top)$. Therefore, we conclude the two orthogonal projections:
$$
\left\{
\begin{aligned}
\bP_1: &\gap \text{project onto $\nspace(\bQ_{k-1}^\top)$;} \\
\bP_2: &\gap \text{project onto $\cspace(\bQ_{k-1})$} .
\end{aligned}
\right.
$$
A further result that is important to notice is that, when the columns of $\bQ_{k-1}$ are mutually orthonormal, we have the following decomposition:
\begin{equation}\label{equation:qr-orthogonal-equality}
\boxed{\bP_1 = \bI - \bQ_{k-1}\bQ_{k-1}^\top = (\bI-\bq_1\bq_1^\top)(\bI-\bq_2\bq_2^\top)\ldots (\bI-\bq_{k-1}\bq_{k-1}^\top),}
\end{equation}
where $\bQ_{k-1}=[\bq_1,\bq_2,\ldots, \bq_{k-1}]$ and each $(\bI-\bq_i\bq_i^\top)$ projects a vector onto the space perpendicular to $\bq_i$.
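As a quick numerical check of Equation~\eqref{equation:qr-orthogonal-equality}, the following Python sketch (our own illustration; the orthonormal pair and the helper routines are chosen arbitrarily, not taken from the text) verifies that $\bI-\bQ_{k-1}\bQ_{k-1}^\top$ agrees with the product of the rank-one projectors:

```python
import math

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def matsub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# an orthonormal pair in R^3 (chosen for illustration)
s = 1.0 / math.sqrt(2.0)
q1, q2 = [s, s, 0.0], [s, -s, 0.0]

# P1 = I - Q Q^T with Q = [q1 q2]
QQt = [[q1[i] * q1[j] + q2[i] * q2[j] for j in range(3)] for i in range(3)]
P1 = matsub(identity(3), QQt)

# product of rank-one projectors: (I - q1 q1^T)(I - q2 q2^T)
P_prod = matmul(matsub(identity(3), outer(q1, q1)),
                matsub(identity(3), outer(q2, q2)))

# the two forms agree (up to round-off)
assert max(abs(P1[i][j] - P_prod[i][j]) for i in range(3) for j in range(3)) < 1e-12
```

Note that the product form only equals $\bI-\bQ_{k-1}\bQ_{k-1}^\top$ because the $\bq_i$ are mutually orthonormal; for non-orthogonal directions the two expressions differ.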
\subsubsection*{\textbf{Modified Gram-Schmidt (MGS) Process}}
To emphasize the modified Gram-Schmidt process and make a connection to the equivalent projection in Equation~\eqref{equation:qr-orthogonal-equality}, we first present a lemma showing how the entries in the upper triangular $\bR$ of the QR decomposition can be obtained in an alternative way.
\begin{lemma}[Modified Gram-Schmidt Process]
Suppose we are given $[\ba_1, \ba_2, \ldots,\ba_{k-1}, \ba_k]$, where the first $k-1$ columns are spanned by $k-1$ orthonormal vectors $[\bq_1, \bq_2, \ldots, \bq_{k-1}]$:
$$
\cspace([\ba_1, \ba_2, \ldots,\ba_{i}]) = \cspace([\bq_1, \bq_2, \ldots, \bq_{i}]), \gap \forall i\in \{1,2,\ldots, k-1\}.
$$
Therefore, $r_{ik} = \bq_i^\top\ba_k$ is the projection of $\ba_k$ onto the vector $\bq_i$. Then it follows that
$$
\begin{aligned}
\bq_i^\top\ba_k &= \bq_i^\top (\ba_k \underbrace{- r_{1k}\bq_1 - r_{2k}\bq_2 - \ldots - r_{i-1,k}\bq_{i-1}}_{\text{orthogonal to $\bq_i$}}), \gap \forall i\in \{1,2,\ldots, k-1\}\\
&= \bq_i^\top (\ba_k - \sum_{j=1}^{i-1}r_{jk}\bq_j ).
\end{aligned}
$$
This can be easily checked since $\bq_i$ is orthogonal to $\{\bq_1, \bq_2, \ldots, \bq_{i-1}\}$. This observation yields another way to update the $k$-th column of $\bR$.
\end{lemma}
The lemma above reveals a second algorithm to compute the reduced QR decomposition of a matrix, as shown in Algorithm~\ref{alg:qr-mgs-right}; the algorithm on the left is exactly the same as Algorithm~\ref{alg:reduced-qr} (with slight modification) so as to emphasize the difference.
\noindent
\begin{minipage}[t]{0.495\linewidth}
\begin{algorithm}[H]
\caption{CGS (=Algorithm~\ref{alg:reduced-qr} )}
\label{alg:qr-mgs-left}
\begin{algorithmic}[1]
\Require $\bA\in \real^{m\times n}$ with full column rank;
\For{$k=1$ to $n$}
\State $\ba_k^\perp=\ba_k$;
\For{$i=1$ to $k-1$}
\State $r_{ik} =\bq_i^\top\ba_k$;
\State $\ba_k^\perp= \ba_k^\perp-r_{ik}\bq_i$;
\EndFor
\State $r_{kk} = ||\ba_k^\perp||$;
\State $\bq_k = \ba_k^\perp/r_{kk}$;
\EndFor
\end{algorithmic}
\end{algorithm}
\end{minipage}%
\hfil
\begin{minipage}[t]{0.495\linewidth}
\begin{algorithm}[H]
\caption{MGS}
\label{alg:qr-mgs-right}
\begin{algorithmic}[1]
\Require $\bA\in \real^{m\times n}$ with full column rank;
\For{$k=1$ to $n$}
\State $\ba_k^\perp=\ba_k$;
\For{$i=1$ to $k-1$}
\State $r_{ik} =\bq_i^\top\textcolor{blue}{\ba_k^\perp}$;
\State $\ba_k^\perp= \ba_k^\perp-r_{ik}\bq_i$; (*)
\EndFor
\State $r_{kk} = ||\ba_k^\perp||$;
\State $\bq_k = \ba_k^\perp/r_{kk}$;
\EndFor
\end{algorithmic}
\end{algorithm}
\end{minipage}
The above process is known as the \textit{modified Gram-Schmidt (MGS) process}, whereas the previous one is known as the \textit{classical Gram-Schmidt (CGS) process}. In theory, CGS and MGS are equivalent in the
sense that they compute exactly the same QR decomposition when exact arithmetic is employed. In
practice, in the presence of round-off error, the orthonormal columns of $\bQ$ computed by MGS are
often ``more orthonormal" than those computed by CGS.
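The two processes differ in a single line of code; the following Python sketch (an illustration of ours, with column vectors stored as plain lists) implements both loops and confirms that they produce orthonormal columns on a well-conditioned example:

```python
import math

def cgs(cols):
    """Classical Gram-Schmidt: project the ORIGINAL column onto each q_i."""
    Q = []
    for a in cols:
        v = a[:]
        for q in Q:
            r = sum(qi * ai for qi, ai in zip(q, a))   # r_ik = q_i^T a_k
            v = [vi - r * qi for vi, qi in zip(v, q)]
        nrm = math.sqrt(sum(vi * vi for vi in v))
        Q.append([vi / nrm for vi in v])
    return Q

def mgs(cols):
    """Modified Gram-Schmidt: project the UPDATED residual onto each q_i."""
    Q = []
    for a in cols:
        v = a[:]
        for q in Q:
            r = sum(qi * vi for qi, vi in zip(q, v))   # r_ik = q_i^T a_k^perp
            v = [vi - r * qi for vi, qi in zip(v, q)]
        nrm = math.sqrt(sum(vi * vi for vi in v))
        Q.append([vi / nrm for vi in v])
    return Q

# a well-conditioned 3x3 example, stored as a list of columns
A = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
for Q in (cgs(A), mgs(A)):
    for i in range(3):
        for j in range(3):
            dot = sum(Q[i][k] * Q[j][k] for k in range(3))
            assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
```

On well-conditioned inputs both variants return essentially the same $\bQ$; the difference only becomes visible for nearly dependent columns.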
We notice that the equality (*) in Algorithm~\ref{alg:qr-mgs-right} can be rewritten as (via steps 4 and 5 of the algorithm)
$$
\begin{aligned}
\ba_k^\perp &:= \ba_k^\perp-r_{ik}\bq_i\\
&= \ba_k^\perp-\bq_i^\top\ba_k^\perp\bq_i\\
&=\ba_k^\perp-\bq_i\bq_i^\top\ba_k^\perp\\
&=(\bI-\bq_i\bq_i^\top)\ba_k^\perp.
\end{aligned}
$$
That is, $\ba_k^\perp$ will be updated by
$$
\left\{(\bI-\bq_{k-1}\bq_{k-1}^\top)\ldots\left[(\bI-\bq_2\bq_2^\top)\left((\bI-\bq_1\bq_1^\top) \ba_k\right)\right]\right\},
$$
where the parentheses indicate the order of the computation, and
which matches the orthogonal projection matrix equality in Equation~\eqref{equation:qr-orthogonal-equality} that
$$
\begin{aligned}
\bP_1 &= \bI-\bQ_{k-1}\bQ_{k-1}^\top\\
&=(\bI-\bq_1\bq_1^\top)(\bI-\bq_2\bq_2^\top)\ldots (\bI-\bq_{k-1}\bq_{k-1}^\top)\\
&=\prod_{i=1}^{k-1}(\bI-\bq_i\bq_i^\top),
\end{aligned}
$$
where $\bQ_{k-1}=[\bq_1,\bq_2,\ldots, \bq_{k-1}]$.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[CGS, step 1: \textcolor{blue}{blue} vector; step 2: \textcolor{green}{green} vector; step 3: \textcolor{purple}{purple} vector.]{\label{fig:projection-mgs-demons-cgs}
\includegraphics[width=0.47\linewidth]{./imgs/projectqr-cgs.pdf}}
\quad
\subfigure[MGS, step 1: \textcolor{blue}{blue} vector; step 2: \textcolor{purple}{purple} vector.]{\label{fig:projection-mgs-demons-mgs}
\includegraphics[width=0.47\linewidth]{./imgs/projectqr-mgs.pdf}}
\caption{CGS vs MGS in 3-dimensional space where $\bq_2^\prime$ is parallel to $\bq_2$ so that projecting on $\bq_2$ is equivalent to projecting on $\bq_2^\prime$.}
\label{fig:projection-mgs-demons-3d}
\end{figure}
\paragraph{What's the difference?}
Take a three-column matrix $\bA=[\ba_1, \ba_2, \ba_3]$ as an example. Suppose we have computed $\{\bq_1, \bq_2\}$ such that $span\{\bq_1, \bq_2\}=span\{\ba_1, \ba_2\}$, and we want to proceed to compute $\bq_3$.
In CGS, the orthogonalization of column $\ba_3$ against $\{\bq_1, \bq_2\}$ is performed by projecting the original column $\ba_3$ of $\bA$ onto $\bq_1$ and $\bq_2$ respectively and subtracting both components at once:
\begin{equation}\label{equation:cgs-3d-exmp}
\left\{
\begin{aligned}
\ba_3^\perp &= \ba_3 - (\bq_1^\top\ba_3)\bq_1 - (\bq_2^\top\ba_3)\bq_2\\
&= \ba_3 - (\bq_1\bq_1^\top)\ba_3 - \boxed{(\bq_2\bq_2^\top)\ba_3}\\
\bq_3 &= \frac{\ba_3^\perp}{||\ba_3^\perp||},
\end{aligned}
\right.
\end{equation}
as shown in Figure~\ref{fig:projection-mgs-demons-cgs}.
In MGS, on the other hand, the component along each of $\{\bq_1, \bq_2\}$ is subtracted from the remainder of the column $\ba_3$ as soon as the corresponding $\bq_i$ is computed. Therefore, the orthogonalization of column $\ba_3$ against $\{\bq_1, \bq_2\}$ is not performed by projecting the original column $\ba_3$ against $\{\bq_1, \bq_2\}$ as in CGS, but rather against a vector obtained by successively subtracting from $\ba_3$ the components in the directions of $\bq_1$ and $\bq_2$. This is important because the error components in $span\{\bq_1, \bq_2\}$ will be smaller (we will discuss this further in the next paragraphs).
More precisely, in the MGS the orthogonalization of column $\ba_3$ against $\bq_1$ is performed by subtracting the component of $\bq_1$ from the vector $\ba_3$:
$$
\ba_3^{(1) }= (\bI-\bq_1\bq_1^\top)\ba_3 = \ba_3 - (\bq_1\bq_1^\top)\ba_3,
$$
where $\ba_3^{(1) }$ is the component of $\ba_3$ that lies in the space perpendicular to $\bq_1$. A further step is performed by
\begin{equation}\label{equation:mgs-3d-exmp}
\begin{aligned}
\ba_3^{(2) }= (\bI-\bq_2\bq_2^\top)\ba_3^{(1) }&=\ba_3^{(1) }-(\bq_2\bq_2^\top)\ba_3^{(1) }\\
&=\ba_3 - (\bq_1\bq_1^\top)\ba_3-\boxed{(\bq_2\bq_2^\top)\textcolor{blue}{\ba_3^{(1) }}}
\end{aligned}
\end{equation}
where $\ba_3^{(2) }$ is the component of $\ba_3^{(1) }$ that lies in the space perpendicular to $\bq_2$, and we highlight the difference from CGS in Equation~\eqref{equation:cgs-3d-exmp} in \textcolor{blue}{blue} text. The net result is that $\ba_3^{(2) }$ is the component of $\ba_3$ that lies in the space perpendicular to $\{\bq_1, \bq_2\}$, as shown in Figure~\ref{fig:projection-mgs-demons-mgs}.
\subsubsection*{\textbf{Main difference and catastrophic cancellation}}
The key difference is that $\ba_3$ can in general have large components in $span\{\bq_1, \bq_2\}$, in which case one starts with large
values and ends up with small values carrying large relative errors. This is known as the problem of \textit{catastrophic cancellation}. In contrast, $\ba_3^{(1) }$ lies in the direction perpendicular to $\bq_1$ and has only a small ``error" component in the direction of $\bq_1$. Comparing the \fbox{boxed} terms in Equations~\eqref{equation:cgs-3d-exmp} and \eqref{equation:mgs-3d-exmp}, it is not hard to see that $(\bq_2\bq_2^\top)\ba_3^{(1) }$ in Equation~\eqref{equation:mgs-3d-exmp} is more accurate by the above argument. Thus, because of the much smaller error in this projection factor, MGS introduces less orthogonalization error at each subtraction step than CGS does. In fact, it can be shown that the final $\bQ$ obtained by CGS satisfies
$$
||\bI-\bQ^\top\bQ|| \leq O(\epsilon \kappa^2(\bA)),
$$
where $\kappa(\bA)$ is the condition number of $\bA$ (a value no smaller than 1).
In MGS, on the other hand, the error satisfies
$$
||\bI-\bQ^\top\bQ|| \leq O(\epsilon \kappa(\bA)).
$$
That is, the $\bQ$ obtained by MGS is more orthogonal.
Therefore we summarize the difference between the CGS and MGS processes for obtaining $\bq_k$ via the $k$-th column $\ba_k$ of $\bA$ and the orthonormalized vectors $\{\bq_1, \bq_2, \ldots, \bq_{k-1}\}$:
$$
\begin{aligned}
\text{(CGS)}: &\, \text{obtain $\bq_k$ by normalizing $\ba_k^\perp=(\bI-\bQ_{k-1}\bQ_{k-1}^\top)\ba_k$;} \\
\text{(MGS)}: &\, \text{obtain $\bq_k$ by normalizing $\ba_k^\perp=\left\{(\bI-\bq_{k-1}\bq_{k-1}^\top)\ldots\left[(\bI-\bq_2\bq_2^\top)\left((\bI-\bq_1\bq_1^\top) \ba_k\right)\right]\right\}$.}
\end{aligned}
$$
\subsubsection*{\textbf{Triangular Orthogonalization in CGS and MGS}}
We illustrate here that in CGS or MGS, the orthogonal matrix $\bQ$ is obtained via a set of triangular matrices. For simplicity, we only discuss the situation in CGS and continue with the three-column example where $\bA=[\ba_1, \ba_2, \ba_3]\in \real^{3\times 3}$. The above discussion shows that the mutually orthonormal vectors $\{\bq_1, \bq_2, \bq_3\}$ can be obtained as follows:
$$
\left\{
\begin{aligned}
\bq_1 &= \frac{\ba_1}{r_{11}};\\
\bq_2 &= \frac{\ba_2 - r_{12}\bq_1}{r_{22}};\\
\bq_3 &= \frac{\ba_3-r_{13}\bq_1-r_{23}\bq_2}{r_{33}}.
\end{aligned}
\right.
$$
Whilst, the three mutually orthonormal vectors can be equivalently obtained by
$$
\bQ\bR_3\bR_2\bR_1= \bA \leadto \bQ=\bA\bR_1^{-1}\bR_2^{-1}\bR_3^{-1},
$$
where
$$
\bR_3=
\begin{bmatrix}
1 & 0 & r_{13} \\
0 & 1 & r_{23}\\
0 & 0 & r_{33}
\end{bmatrix},
\gap
\bR_2=
\begin{bmatrix}
1 & r_{12} & 0 \\
0 & r_{22} &0\\
0 & 0 & 1
\end{bmatrix},
\gap
\bR_1=
\begin{bmatrix}
r_{11} & 0 & 0 \\
0 & 1 &0\\
0 & 0 & 1
\end{bmatrix},
$$
such that
$$
\bR_3\bR_2\bR_1 = \bR=
\begin{bmatrix}
r_{11}& r_{12} & r_{13}\\
0 & r_{22} & r_{23}\\
0 & 0 & r_{33}
\end{bmatrix}.
$$
The above procedure $\bA\bR_1^{-1}\bR_2^{-1}\bR_3^{-1}$ obtains $\{\bq_1, \bq_2, \bq_3\}$ in a successive manner: $\bA\bR_1^{-1}$ places $\bq_1$ in the first column of $\bQ$; $(\bA\bR_1^{-1})\bR_2^{-1}$ places $\bq_2$ in the second column of $\bQ$; and $(\bA\bR_1^{-1}\bR_2^{-1})\bR_3^{-1}$ places $\bq_3$ in the third column of $\bQ$. This is called \textit{triangular orthogonalization} in the Gram-Schmidt process. Triangular orthogonalization is problematic in the sense that the condition number of a triangular matrix ($\bR_1^{-1}, \bR_2^{-1}, \bR_3^{-1}$ in the above three-column example) can be arbitrarily large. And the Gram-Schmidt process applies a series of them, so the condition numbers can grow very large and the orthogonalization is not numerically stable.
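As a small numerical check of this triangular factorization (a Python sketch with arbitrarily chosen, purely illustrative values for the $r_{ij}$), the product $\bR_3\bR_2\bR_1$ indeed assembles the full upper triangular $\bR$:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# arbitrary (illustrative) values for the entries of R
r11, r12, r13, r22, r23, r33 = 2.0, 3.0, 5.0, 7.0, 11.0, 13.0
R1 = [[r11, 0, 0], [0, 1, 0], [0, 0, 1]]
R2 = [[1, r12, 0], [0, r22, 0], [0, 0, 1]]
R3 = [[1, 0, r13], [0, 1, r23], [0, 0, r33]]

# R3 R2 R1 places r11 in (1,1), {r12, r22} in column 2, {r13, r23, r33} in column 3
R = matmul(R3, matmul(R2, R1))
assert R == [[r11, r12, r13], [0.0, r22, r23], [0.0, 0.0, r33]]
```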
\subsubsection*{\textbf{More to go, preliminaries for Householder and Givens methods}}
Although we claimed here that MGS usually works better than CGS in practice (an example will be given in the sequel), MGS can still fall victim to the \textit{catastrophic cancellation} problem. Suppose in iteration $k$ of the MGS Algorithm~\ref{alg:qr-mgs-right}, $\ba_k$ is almost in the span of $\{\bq_1, \bq_2, \ldots, \bq_{k-1}\}$. Then $\ba_k^\perp$ has only a small component that is perpendicular to $span\{\bq_1, \bq_2, \ldots, \bq_{k-1}\}$, whereas the ``error" component in $span\{\bq_1, \bq_2, \ldots, \bq_{k-1}\}$ will be amplified, and the net result is that $\bQ$ will be less orthonormal. As discussed above, the main disadvantage of both CGS and MGS is that the algorithms find the orthogonal matrix $\bQ$ via the upper triangular $\bR$, i.e., if $\bA\in \real^{m\times n}$, one obtains $\bQ$ by
$$
\bQ=\bA\underbrace{\bR_1^{-1}\bR_2^{-1}\ldots \bR_n^{-1}}_{\bR^{-1}}.
$$
In this case, if we can find a successive set of orthogonal matrices $\{\bQ_1, \bQ_2, \ldots, \bQ_l\}$ such that $\bQ_l\ldots\bQ_2\bQ_1\bA$ is triangular, then $\bQ=(\bQ_l\ldots\bQ_2\bQ_1)^\top$ will be ``more" orthogonal than in CGS or MGS, since the condition numbers of orthogonal matrices are all 1. We will discuss this method in Sections~\ref{section:qr-via-householder} and \ref{section:qr-givens} via Householder reflectors and Givens rotations.
\subsubsection*{\textbf{Example for MGS vs CGS}}
For better understanding, we will demonstrate with a $4\times 3$ \textit{Lauchli matrix}, where a
Lauchli matrix is an $(n+1) \times n$ rectangular matrix that has ones on the top row and the parameter $\epsilon=\sqrt{\epsilon_{mach}}$ on the subdiagonal starting at entry $(2,1)$.\index{Lauchli matrix}
\begin{example}[MGS vs CGS]
Let $\epsilon = \sqrt{\epsilon_{mach}}$ and consider the QR decomposition of the following matrix by CGS and MGS:
$$
\bA =
\begin{bmatrix}
1& 1 & 1 \\
\epsilon & 0 & 0 \\
0 & \epsilon & 0 \\
0 & 0 & \epsilon
\end{bmatrix}
=
\begin{bmatrix}
\ba_1 & \ba_2 & \ba_3
\end{bmatrix}.
$$
Note that we will round $1 +\epsilon_{mach}$ to 1
whenever encountered in the calculation.
The CGS proceeds as follows:
\begin{itemize}
\item Compute $\bq_1$ of unit length so that $\cspace([\bq_1]) = \cspace([\ba_1])$:
\begin{itemize}
\item Compute $r_{11}$: $r_{11} = ||\ba_1||=\sqrt{1+\epsilon_{mach}}\approx1$;
\item Compute $\bq_1$: $\bq_1=\frac{\ba_1}{||\ba_1||}=\ba_1$;
\end{itemize}
\item Compute $\bq_2$ of unit length so that $\cspace([\bq_1,\bq_2]) = \cspace([\ba_1,\ba_2])$:
\begin{itemize}
\item Compute $r_{12}$: $r_{12} = \ba_2^\top\bq_1=1$;
\item Compute $\ba_2^\perp$: $\ba_2^\perp = \ba_2 - r_{12}\bq_1=\ba_2-\ba_1 = [0,-\epsilon, \epsilon, 0]^\top$;
\item Compute $r_{22}$: $r_{22}=||\ba_2^\perp||=\sqrt{2\epsilon_{mach}}=\sqrt{2}\epsilon$;
\item Compute $\bq_2$: $\bq_2 = \frac{\ba_2^\perp}{||\ba_2^\perp||}=\frac{\ba_2^\perp}{r_{22}}=[0,-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0]^\top$;
\end{itemize}
\item Compute $\bq_3$ of unit length so that $\cspace([\bq_1,\bq_2,\bq_3]) = \cspace([\ba_1,\ba_2,\ba_3])$:
\begin{itemize}
\item Compute $r_{13}$: $r_{13}= \ba_3^\top\bq_1=\ba_3^\top\ba_1=1$;
\item Compute $r_{23}$: $r_{23}= \ba_3^\top\bq_2=0$;
\item Compute $\ba_3^\perp$: $\ba_3^\perp = \ba_3 - r_{13}\bq_1 - r_{23}\bq_2=[0,-\epsilon, 0, \epsilon]^\top$;
\item Compute $r_{33}$: $r_{33}=||\ba_3^\perp||=\sqrt{2\epsilon_{mach}}=\sqrt{2}\epsilon$;
\item Compute $\bq_3$: $\bq_3 = \frac{\ba_3^\perp}{||\ba_3^\perp||}=\frac{\ba_3^\perp}{r_{33}}=[0,-\frac{1}{\sqrt{2}},0, \frac{1}{\sqrt{2}}]^\top$;
\end{itemize}
\item This results in
$$
\bA=
\begin{bmatrix}
1& 0& 0 \\
\epsilon & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\
0 & \frac{1}{\sqrt{2}} & 0 \\
0 & 0 & \frac{1}{\sqrt{2}}
\end{bmatrix}
\begin{bmatrix}
1& 1 & 1 \\
0 & \sqrt{2}\epsilon & 0 \\
0 & 0 & \sqrt{2}\epsilon
\end{bmatrix}=
\bQ_1\bR_1
$$
\end{itemize}
Whilst, the MGS proceeds as follows:
\begin{itemize}
\item Compute $\bq_1$ of unit length so that $\cspace([\bq_1]) = \cspace([\ba_1])$:
\begin{itemize}
\item Compute $r_{11}$: $r_{11} = ||\ba_1||=\sqrt{1+\epsilon_{mach}}\approx1$;
\item Compute $\bq_1$: $\bq_1=\frac{\ba_1}{||\ba_1||}=\ba_1$;
\end{itemize}
\item Compute $\bq_2$ of unit length so that $\cspace([\bq_1,\bq_2]) = \cspace([\ba_1,\ba_2])$:
\begin{itemize}
\item Compute $r_{12}$: $r_{12} = \ba_2^\top\bq_1=1$;
\item Compute $\ba_2^\perp$: $\ba_2^\perp = \ba_2 - r_{12}\bq_1=\ba_2-\ba_1 = [0,-\epsilon, \epsilon, 0]^\top$;
\item Compute $r_{22}$: $r_{22}=||\ba_2^\perp||=\sqrt{2\epsilon_{mach}}=\sqrt{2}\epsilon$;
\item Compute $\bq_2$: $\bq_2 = \frac{\ba_2^\perp}{||\ba_2^\perp||}=\frac{\ba_2^\perp}{r_{22}}=[0,-\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}, 0]^\top$;
\item \textcolor{blue}{Up to this point, the computation is identical to CGS};
\end{itemize}
\item Compute $\bq_3$ of unit length so that $\cspace([\bq_1,\bq_2,\bq_3]) = \cspace([\ba_1,\ba_2,\ba_3])$:
\begin{itemize}
\item Compute $r_{13}$: $r_{13}= \ba_3^\top\bq_1=\ba_3^\top\ba_1=1$;
\item Compute temporary $\ba_3^\perp=\ba_3 - r_{13}\bq_1=[0,-\epsilon, 0, \epsilon]^\top$;
\item Compute $r_{23}$: $r_{23}= \bq_2^\top \textcolor{blue}{\ba_3^\perp}=\textcolor{blue}{\frac{\epsilon}{\sqrt{2}}}$;
\item Compute final $\ba_3^\perp$: $\ba_3^\perp = \underbrace{\ba_3 - r_{13}\bq_1}_{\text{the old $\ba_3^\perp$}} - r_{23}\bq_2=\textcolor{blue}{[0,-\epsilon/2, -\epsilon/2, \epsilon]^\top}$;
\item Compute $r_{33}$: $r_{33}=||\ba_3^\perp||=\sqrt{\frac{3}{2}\epsilon_{mach}}=\textcolor{blue}{\frac{\sqrt{6}}{2}\epsilon}$;
\item Compute $\bq_3$: $\bq_3 = \frac{\ba_3^\perp}{||\ba_3^\perp||}=\frac{\ba_3^\perp}{r_{33}}=\textcolor{blue}{[0,-\frac{1}{\sqrt{6}}, -\frac{1}{\sqrt{6}}, \frac{2}{\sqrt{6}}]^\top}$;
\end{itemize}
\item This results in
$$
\bA=
\begin{bmatrix}
1& 0& 0 \\
\epsilon & -\frac{1}{\sqrt{2}} & \textcolor{blue}{-\frac{1}{\sqrt{6}}}\\
0 & \frac{1}{\sqrt{2}} & \textcolor{blue}{-\frac{1}{\sqrt{6}}}\\
0 & 0 & \textcolor{blue}{\frac{2}{\sqrt{6}}}\\
\end{bmatrix}
\begin{bmatrix}
1& 1 & 1 \\
0 & \sqrt{2}\epsilon & \textcolor{blue}{\frac{\epsilon}{\sqrt{2}}} \\
0 & 0 & \frac{\sqrt{6}}{2}\epsilon
\end{bmatrix}=
\bQ_2\bR_2.
$$
\end{itemize}
We notice that
$$
\bQ_1^\top\bQ_1=
\begin{bmatrix}
1+\epsilon_{mach} & -\frac{1}{\sqrt{2}}\epsilon&-\frac{1}{\sqrt{2}}\epsilon\\
-\frac{1}{\sqrt{2}}\epsilon & 1 & \frac{1}{2}\\
-\frac{1}{\sqrt{2}}\epsilon & \frac{1}{2} & 1
\end{bmatrix}
\gap \text{and}\gap
\bQ_2^\top\bQ_2=
\begin{bmatrix}
1+\epsilon_{mach} & -\frac{1}{\sqrt{2}}\epsilon & -\frac{1}{\sqrt{6}}\epsilon\\
-\frac{1}{\sqrt{2}}\epsilon & 1 & 0\\
-\frac{1}{\sqrt{6}}\epsilon & 0 & 1
\end{bmatrix},
$$
which shows that $\bQ_2$ is better in the sense of orthogonality.
\exampbar
\end{example}
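The example above can be reproduced numerically. In the following Python sketch (our own illustration, taking $\epsilon = 10^{-8}$ so that $1+\epsilon^2$ rounds to $1$ in double precision), CGS loses orthogonality between $\bq_2$ and $\bq_3$ while MGS does not:

```python
import math

def gram_schmidt(cols, modified):
    """CGS (modified=False) vs MGS (modified=True); columns as plain lists."""
    Q = []
    for a in cols:
        v = a[:]
        for q in Q:
            src = v if modified else a   # the ONLY difference between MGS and CGS
            r = sum(qi * si for qi, si in zip(q, src))
            v = [vi - r * qi for vi, qi in zip(v, q)]
        nrm = math.sqrt(sum(vi * vi for vi in v))
        Q.append([vi / nrm for vi in v])
    return Q

eps = 1e-8                               # roughly sqrt(machine epsilon)
A = [[1.0, eps, 0.0, 0.0],               # the 4x3 Lauchli matrix, stored by columns
     [1.0, 0.0, eps, 0.0],
     [1.0, 0.0, 0.0, eps]]

def max_offdiag(Q):
    """Largest |q_i^T q_j| for i != j: measures the loss of orthogonality."""
    return max(abs(sum(x * y for x, y in zip(Q[i], Q[j])))
               for i in range(len(Q)) for j in range(len(Q)) if i != j)

err_cgs = max_offdiag(gram_schmidt(A, modified=False))
err_mgs = max_offdiag(gram_schmidt(A, modified=True))
assert err_mgs < 1e-6 < err_cgs   # CGS: q_2^T q_3 is about 1/2; MGS stays near eps
```

The single `src` line is exactly the CGS/MGS distinction: projecting against the original column versus the updated residual.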
\subsubsection*{\textbf{Row-Wise MGS, Recursive Algorithm}}
The algorithms introduced above in Algorithms~\ref{alg:qr-mgs-left} and \ref{alg:qr-mgs-right} compute the entries of the upper triangular matrix $\bR$ element-wise and column-by-column. Suppose $\bA$ has the column partition $\bA=[\ba_1, \bA_2]$, where $\bA_2=[\ba_2, \ba_3, \ldots, \ba_n]\in \real^{m\times (n-1)}$. Notice that in the CGS Algorithm~\ref{alg:qr-mgs-left}, the first row of $\bR$ can be obtained by
$$
\left.
\begin{aligned}
r_{11} &= ||\ba_1||\\
r_{1k} & = \bq_1^\top\ba_k, \,\,\, \forall k\in\{2,3,\ldots, n\}.
\end{aligned}\right\}
\leadto
\left\{
\begin{aligned}
r_{11} &= ||\ba_1||\\
\br_{12}^\top & = \bq_1^\top\bA_2, \,\,\, \br_{12}=[r_{12}, r_{13}, \ldots, r_{1n}].
\end{aligned}\right.
$$
Therefore, the QR decomposition of $\bA$ is given by
$$
\bA =
\begin{bmatrix}
\ba_1 & \bA_2
\end{bmatrix}=
\begin{bmatrix}
\bq_1 & \bQ_2
\end{bmatrix}
\begin{bmatrix}
r_{11} & \br_{12}^\top\\
\bzero & \bR_{22}
\end{bmatrix}
=
\begin{bmatrix}
r_{11}\bq_1 & \bq_1\br_{12}^\top +\bQ_2\bR_{22}
\end{bmatrix},
$$
where the columns of $\bQ_2\in \real^{m\times (n-1)}$ are mutually orthonormal and $\bR_{22}\in \real^{(n-1)\times (n-1)}$ is upper triangular. This implies that $\bQ_2\bR_{22} $ is the reduced QR decomposition of $\bA_2-\bq_1\br_{12}^\top$, which reveals a recursive algorithm for the reduced QR decomposition of $\bA$. This is actually the same as the MGS that subtracts each component in the span of $\{\bq_1, \bq_2, \ldots, \bq_{k-1}\}$ when computing column $k$ of $\bQ$ (i.e., equality (*) in Algorithm~\ref{alg:qr-mgs-right}). The process is formulated in Algorithm~\ref{alg:qr-mgs-fulll-rowwise-recursive}.
\begin{algorithm}[H]
\caption{MGS (\textcolor{blue}{Row-Wise and Recursively})=Algorithm~\ref{alg:qr-mgs-right} }
\label{alg:qr-mgs-fulll-rowwise-recursive}
\begin{algorithmic}[1]
\Require $\bA\in \real^{m\times n}$ with full column rank;
\For{$k=1$ to $n$} \Comment{i.e., compute $k$-th column of $\bQ$ and $k$-th row of $\bR$}
\State $\ba_1=\bA[:,1]$; \Comment{$1$-st column of $\bA\in \real^{m\times (n-k+1)}$}
\State $r_{kk}=||\ba_1||$;\Comment{$\ba_1\in \real^{m\times 1}$}
\State $\bq_k = \ba_1/r_{kk}$;
\State $\br_{k2}^\top=\bq_k^\top\bA_2$; \Comment{$\bA_2=\bA[:,2:n]\in \real^{m\times (n-k)}$, $\br_{k2}^\top\in \real^{1\times (n-k)}$}
\State $\bA=\bA_2-\bq_k\br_{k2}^\top$; \Comment{$\bA \in \real^{m\times (n-k)}$}
\EndFor
\State Output $\bQ=[\bq_1, \ldots, \bq_n]$ and $\bR$ with entry $(i,k)$ being $r_{ik}$;
\end{algorithmic}
\end{algorithm}
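A Python sketch of this row-wise, recursive MGS (our own illustration; the matrix is stored column-wise and the trailing block is updated in place) may look as follows:

```python
import math

def mgs_rowwise(A):
    """Row-wise MGS: peel off q_k = a_1/||a_1||, then update the trailing
    block A <- A_2 - q_k r_{k2}^T (the matrix is stored column-wise)."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for k in range(n):
        rkk = math.sqrt(sum(x * x for x in cols[k]))
        R[k][k] = rkk
        q = [x / rkk for x in cols[k]]
        Q.append(q)
        for j in range(k + 1, n):   # r_{k2}^T = q_k^T A_2; then A_2 -= q_k r_{k2}^T
            r = sum(qi * ci for qi, ci in zip(q, cols[j]))
            R[k][j] = r
            cols[j] = [ci - r * qi for ci, qi in zip(cols[j], q)]
    return Q, R

A = [[1.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
Q, R = mgs_rowwise(A)
# reconstruct A = Q R (columns of Q are stored as lists)
for i in range(3):
    for j in range(3):
        aij = sum(Q[k][i] * R[k][j] for k in range(3))
        assert abs(aij - A[i][j]) < 1e-12
```

Each outer iteration produces one full row of $\bR$ at once, matching the row-wise formulation of the algorithm above.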
\section{Computing the Full QR Decomposition via the Gram-Schmidt Process}
A full QR decomposition of an $m\times n$ matrix with linearly independent columns goes further by appending additional $m-n$ orthonormal columns to $\bQ$ so that it becomes an $m\times m$ orthogonal matrix. In addition, rows of zeros are appended to $\bR$ so that it becomes an $m\times n$ upper triangular matrix. We call the additional columns in $\bQ$ \textbf{silent columns} and additional rows in $\bR$ \textbf{silent rows}. The comparison between the reduced QR decomposition and the full QR decomposition is shown in Figure~\ref{fig:qr-comparison} where silent columns in $\bQ$ are denoted in \textcolor{gray}{gray}, blank entries are zero and \textcolor{blue}{blue} entries are elements that are not necessarily zero.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Reduced QR decomposition]{\label{fig:gphalf}
\includegraphics[width=0.47\linewidth]{./imgs/qrreduced.pdf}}
\quad
\subfigure[Full QR decomposition]{\label{fig:gpall}
\includegraphics[width=0.47\linewidth]{./imgs/qrfull.pdf}}
\caption{Comparison between the reduced and full QR decompositions.}
\label{fig:qr-comparison}
\end{figure}
\section{Dependent Columns}\label{section:dependent-gram-schmidt-process}
Previously, we assumed matrix $\bA$ has linearly independent columns. However, this is not always necessary. Suppose in step $k$ of Algorithm~\ref{alg:reduced-qr}, $\ba_k$ is in the subspace spanned by $\bq_1, \bq_2, \ldots, \bq_{k-1}$, which is equal to the space spanned by $\ba_1, \ba_2, \ldots, \ba_{k-1}$, i.e., the vectors $\ba_1, \ba_2, \ldots, \ba_k$ are dependent. Then $r_{kk}$ will be zero and $\bq_k$ does not exist because of the division by zero. In this case, we simply pick $\bq_k$ arbitrarily to be any normalized vector that is orthogonal to $\cspace([\bq_1, \bq_2, \ldots, \bq_{k-1}])$ and continue the Gram-Schmidt process. Again, for a matrix $\bA$ with dependent columns, we have both reduced and full QR decomposition algorithms. We reformulate step $k$ of the algorithm as follows:
$$
\bq_k=\left\{
\begin{aligned}
&(\ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i)/r_{kk}, \qquad r_{ik}=\bq_i^\top\ba_k, r_{kk}=||\ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i||, &\mathrm{if\,} r_{kk}\neq0, \\
&\mathrm{pick \, one\, in\,}\cspace^{\bot}([\bq_1, \bq_2, \ldots, \bq_{k-1}]),\qquad &\mathrm{if\,} r_{kk}=0.
\end{aligned}
\right.
$$
This idea can be further extended: when $\bq_k$ does not exist, we can instead skip the current step and add the silent columns at the end. In this sense, the QR decomposition for a matrix with dependent columns is not unique. However, as long as one sticks to a systematic process, the resulting QR decomposition of any matrix is uniquely determined.
This finding can also help to decide whether a set of vectors is linearly independent or not: whenever $r_{kk}$ in Algorithm~\ref{alg:reduced-qr} is zero, we report that the vectors $\ba_1, \ba_2, \ldots, \ba_k$ are dependent and stop the algorithm for the purpose of ``independence checking".
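The ``independence checking" idea can be sketched in Python as follows (our own illustration; the tolerance \texttt{tol} replaces the exact test $r_{kk}=0$, which rarely holds in floating point):

```python
import math

def gs_rank(cols, tol=1e-12):
    """Run Gram-Schmidt on the columns; a (near-)zero r_kk means column k
    depends on the previous ones. Returns the number of independent columns."""
    Q = []
    for a in cols:
        v = a[:]
        for q in Q:
            r = sum(qi * vi for qi, vi in zip(q, v))
            v = [vi - r * qi for vi, qi in zip(v, q)]
        nrm = math.sqrt(sum(vi * vi for vi in v))
        if nrm <= tol:
            continue   # dependent column: contribute no new q
        Q.append([vi / nrm for vi in v])
    return len(Q)

assert gs_rank([[1.0, 0.0], [0.0, 1.0]]) == 2
assert gs_rank([[1.0, 2.0], [2.0, 4.0]]) == 1   # second column is 2x the first
```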
\section{QR with Column Pivoting: Column-Pivoted QR (CPQR)}\label{section:cpqr}
When $\bA$ has dependent columns, a column-pivoted QR (CPQR) decomposition can be found as follows.
\begin{theoremHigh}[Column-Pivoted QR Decomposition\index{Column-pivoted QR (CPQR)}]\label{theorem:rank-revealing-qr-general}
Every $m\times n$ matrix $\bA=[\ba_1, \ba_2, ..., \ba_n]$ with $m\geq n$ and rank $r$ can be factored as
$$
\bA\bP = \bQ
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix},
$$
where $\bR_{11} \in \real^{r\times r}$ is upper triangular, $\bR_{12} \in \real^{r\times (n-r)}$, $\bQ\in \real^{m\times m}$ is an orthogonal matrix, and $\bP$ is a permutation matrix. This is also known as the \textbf{full} CPQR decomposition. Similarly, the \textbf{reduced} version is given by
$$
\bA\bP = \bQ_r
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\end{bmatrix},
$$
where $\bR_{11} \in \real^{r\times r}$ is upper triangular, $\bR_{12} \in \real^{r\times (n-r)}$, $\bQ_r\in \real^{m\times r}$ contains orthonormal columns, and $\bP$ is a permutation matrix.
\end{theoremHigh}
\subsection{A Simple CPQR via CGS}
The classical Gram-Schmidt process can compute this CPQR decomposition. Following the QR decomposition for dependent columns, when $r_{kk}=0$, column $k$ of $\bA$ is dependent on the previous $k-1$ columns. Whenever this happens, we permute this column to the last column and continue the Gram-Schmidt process. We notice that $\bP$ is the permutation matrix that moves the dependent columns into the last $n-r$ columns. Suppose the first $r$ columns of $\bA\bP$ are $[\widehat{\ba}_1, \widehat{\ba}_2, \ldots, \widehat{\ba}_r]$; their span is the same as the span of $\bQ_r$ (in the reduced version), or the span of $\bQ_{:,:r}$ (in the full version):
$$
\cspace([\widehat{\ba}_1, \widehat{\ba}_2, \ldots, \widehat{\ba}_r]) = \cspace(\bQ_r) = \cspace(\bQ_{:,:r}).
$$
And $\bR_{12}$ is a matrix that recovers the dependent $n-r$ columns from the column space of $\bQ_r$ or the column space of $\bQ_{:,:r}$. The comparison of the reduced and full CPQR decompositions is shown in Figure~\ref{fig:qr-comparison-rank-reveal}, where silent columns in $\bQ$ are denoted in \textcolor{gray}{grey}, blank entries are zero, and \textcolor{blue}{blue}/\textcolor{orange}{orange} entries are elements that are not necessarily zero.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Reduced CPQR decomposition]{\label{fig:gphalf-rank-reveal}
\includegraphics[width=0.475\linewidth]{./imgs/qrreduced-revealing.pdf}}
\quad
\subfigure[Full CPQR decomposition]{\label{fig:gpall-rank-reveal}
\includegraphics[width=0.475\linewidth]{./imgs/qrfull-revealing.pdf}}
\caption{Comparison between the reduced and full CPQR decompositions.}
\label{fig:qr-comparison-rank-reveal}
\end{figure}
\begin{algorithm}[h]
\caption{\textcolor{blue}{Simple} Reduced CPQR Decomposition via CGS}
\label{alg:reduced-qr-rank-revealing}
\begin{algorithmic}[1]
\Require
Matrix $\bA$ with size $m\times n$ and $m\geq n$;
\State $cnt = 0$; \Comment{i.e., the count for the permutations}
\State $\bq_1 = \ba_1/r_{11}, r_{11}=||\ba_1||$; \Comment{i.e., the first column of $\bR_{11}$}
\For{$k=2$ to $n$} \Comment{i.e., compute column $k$ of $\bR_{11}$}
\State Set the initial value $r_{kk}=0$;
\While{$r_{kk} ==0$} \Comment{the column is dependent if $r_{kk}$ is equal to 0}
\State $r_{ik}=\bq_i^\top\ba_k$, $\forall i \in \{1, 2, \ldots, k-1\}$;\Comment{first $k-1$ elements in column $k$ of $\bR_{11}$}
\State $r_{kk}=||\ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i||$;\Comment{$k$-th elements in column $k$ of $\bR_{11}$}
\State $\bq_k = (\ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i)/r_{kk}$;
\If{$r_{kk}==0$}
\State $cnt = cnt+1$;
\State Permute the column $k$ to last column;
\State i.e., $[\ba_k, \ba_{k+1}, \ldots, \ba_n] \leftarrow [\ba_{k+1}, \ba_{k+2}, \ldots, \ba_n, \ba_k]$;
\EndIf
\EndWhile
\If{$k+cnt == n$}
\State rank $r=k$;
\For{$k=r+1$ to $n$} \Comment{i.e., compute column $k$ of $\bR_{12}$}
\State $r_{ik}=\bq_i^\top\ba_k$, $\forall i \in \{1, 2, \ldots, \textcolor{blue}{r}\}$;
\EndFor
\EndIf
\State Output rank $r$, output $\bR_{11}, \bR_{12}$, $\bQ_r=[\bq_1, \bq_2, \ldots, \bq_r]$. And exit the loop;
\EndFor
\end{algorithmic}
\end{algorithm}
The reduced algorithm is formulated in Algorithm~\ref{alg:reduced-qr-rank-revealing}. It is then trivial to find the last $n-r$ columns of $\bQ$ in the orthogonal complement of $\cspace(\bQ_r)$.
Note step 6 of Algorithm~\ref{alg:reduced-qr-rank-revealing} can be rewritten as $\bQ_{k-1}^\top \ba_k$, where $\bQ_{k-1}=[\bq_1, \ldots, \bq_{k-1}]$.
\begin{theorem}[Algorithm Complexity: Reduced CPQR]\label{theorem:qr-reduced-rank-revealing}
Algorithm~\ref{alg:reduced-qr-rank-revealing} requires $\sim 6mnr-4mr^2$ flops to compute a CPQR decomposition of an $m\times n$ matrix with $m\geq n$ and rank $r$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-reduced-rank-revealing}]
From Theorem~\ref{theorem:qr-reduced}, to compute $\bR_{11}$, we just need to replace $n$ in Theorem~\ref{theorem:qr-reduced} by $r$, so that the complexity of computing $\bR_{11}$ is \fbox{$2mr^2$} flops if we keep only the leading term.
To compute $\bR_{12}$, there are $r\times (n-r)$ values, each taking $2m-1$ flops ($m$ multiplications and $m-1$ additions) from step 18. That is \fbox{$r(n-r)(2m-1)$} flops.
An upper bound for steps 6, 7, and 8 is obtained by setting $k-1=r$ in these steps. This gives \fbox{$(2m-1)r$} flops for step 6 ($mr$ multiplications and $(m-1)r$ additions), \fbox{$2m(r+1)$} flops for step 7 ($mr$ multiplications, $(r-1)m$ additions, $m$ subtractions, and $2m$ for the norm), and \fbox{$m$} flops for step 8. The total complexity of steps 6, 7, and 8 is thus \fbox{$4mr+3m-r$} flops. And there are $n-r$ such iterations, which implies \fbox{$(4mr+3m-r)(n-r)$} flops are needed to find the dependent columns.
Therefore, the final complexity is
$$
2mr^2 + r(n-r)(2m-1)+ (4mr+3m-r)(n-r).
$$
Keeping only the leading terms, this is $6mnr-4mr^2$ flops.
\end{proof}
We notice that when $r=n$, the complexity of Algorithm~\ref{alg:reduced-qr-rank-revealing} is $2mn^2$ flops, which agrees with the complexity of Algorithm~\ref{alg:reduced-qr}.
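As a concrete illustration, the following numpy sketch mimics the pivoting strategy above: independent columns are kept in order, dependent columns are rotated to the back, and $[\bR_{11}, \bR_{12}]$ is recovered as $\bQ_r^\top$ applied to the permuted columns. The function name and tolerance are our choices, not from the text.

```python
import numpy as np

def cpqr_cgs(A, tol=1e-12):
    """Illustrative CPQR via classical Gram-Schmidt: dependent columns are
    pivoted to the back, yielding A[:, perm] = Q_r @ [R11, R12]."""
    A = np.asarray(A, dtype=float)
    Q, indep, dep = [], [], []
    for j in range(A.shape[1]):
        a = A[:, j]
        v = a - sum((q @ a) * q for q in Q)   # CGS orthogonalization step
        if np.linalg.norm(v) > tol:           # independent column found
            Q.append(v / np.linalg.norm(v))
            indep.append(j)
        else:                                 # dependent column: pivot to back
            dep.append(j)
    perm = indep + dep
    Qr = np.column_stack(Q)
    R = Qr.T @ A[:, perm]     # r x n matrix [R11, R12]; R11 upper triangular
    return Qr, R, perm, len(indep)

A = np.array([[1., 1., 0., 1.],
              [0., 0., 1., 1.],
              [0., 0., 0., 0.]])             # rank 2: columns 2 and 4 dependent
Qr, R, perm, rank = cpqr_cgs(A)
```

On this example the second and fourth columns are detected as dependent and pivoted to the back, so $\bA\bP$ is reconstructed exactly by $\bQ_r[\bR_{11}, \bR_{12}]$.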
\subsection{A Practical CPQR via CGS}\label{section:practical-cpqr-cgs}
\paragraph{A Practical CPQR via CGS}
We notice that the simple CPQR algorithm pivots the first $r$ independent columns into the first $r$ columns of $\bA\bP$. Let $\bA_1$ be the first $r$ columns of $\bA\bP$, and $\bA_2$ be the remaining $n-r$ columns. Then, from the full CPQR, we have
$$
[\bA_1, \bA_2] =
\bQ \begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix}
=\left[
\bQ \begin{bmatrix}
\bR_{11} \\
\bzero
\end{bmatrix}
,
\bQ \begin{bmatrix}
\bR_{12} \\
\bzero
\end{bmatrix}
\right]
.
$$
It is not hard to see that
$$
||\bA_2|| = \left\Vert\bQ \begin{bmatrix}
\bR_{12} \\
\bzero
\end{bmatrix}\right\Vert
=
\left\Vert\begin{bmatrix}
\bR_{12} \\
\bzero
\end{bmatrix}\right\Vert
=
\left\Vert \bR_{12} \right\Vert,
$$
where the penultimate equality comes from the orthogonal equivalence under the matrix norm (Lemma~\ref{lemma:frobenius-orthogonal-equi}, p.~\pageref{lemma:frobenius-orthogonal-equi}). Therefore, the norm of $\bR_{12}$ is determined by the norm of $\bA_2$. When favoring a well-conditioned CPQR, $\bR_{12}$ should be small in norm. A practical CPQR decomposition therefore first permutes the columns of the matrix $\bA$ so that they are ordered by decreasing vector norm:
$$
\widetildebA = \bA\bP_0 = [\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_n}],
$$
where $\{j_1, j_2, \ldots, j_n\}$ is a permuted index set of $\{1,2,\ldots, n\}$ and
$$
||\ba_{j_1}||\geq ||\ba_{j_2}||\geq \ldots\geq ||\ba_{j_n}||.
$$
Then apply the ``simple'' reduced CPQR decomposition on $\widetildebA$ such that $\widetildebA \bP_1= \bQ_r[\bR_{11}, \bR_{12}]$. The ``practical'' reduced CPQR of $\bA$ is then recovered as
$$
\bA\underbrace{\bP_0\bP_1}_{\bP} =\bQ_r[\bR_{11}, \bR_{12}].
$$
When the $l_2$ vector norm (i.e., the inner product, Appendix~\ref{section:vector-norm}, p.~\pageref{section:vector-norm}) is applied, an extra $n(2m-1)$ flops are required to compute the $n$ norms of the column vectors, and at most $\frac{n(n-1)}{2}$ comparisons are needed to determine the order of the norms.
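For instance, the pre-permutation $\bP_0$ can be computed with numpy as follows (a sketch; the variable names are ours):

```python
import numpy as np

A = np.array([[1., 0., 3.],
              [0., 2., 0.],
              [0., 0., 4.]])
norms = np.linalg.norm(A, axis=0)      # n column norms: n(2m-1) flops
order = np.argsort(-norms)             # indices j_1, ..., j_n by decreasing norm
P0 = np.eye(A.shape[1])[:, order]      # permutation matrix: A @ P0 = A[:, order]
A_tilde = A @ P0                       # columns now ordered by decreasing norm
```

Any ``simple'' reduced CPQR can then be applied to `A_tilde`, and the combined permutation $\bP_0\bP_1$ gives the practical CPQR of $\bA$.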
\subsection{A Practical CPQR via MGS}
\paragraph{A Practical CPQR via MGS}
Now, based on the recursive MGS in Algorithm~\ref{alg:qr-mgs-fulll-rowwise-recursive}, we can also develop a practical CPQR. The algorithm is formulated in Algorithm~\ref{alg:cpqr-partial-fact-cpqr}, where the only difference from Algorithm~\ref{alg:qr-mgs-fulll-rowwise-recursive} is highlighted in the \textcolor{blue}{blue} text: we permute the column with the largest norm into the first column.
\begin{algorithm}[H]
\caption{\textcolor{blue}{Practical} CPQR via MGS (\textcolor{blue}{Row-Wise and Recursively}) }
\label{alg:cpqr-partial-fact-cpqr}
\begin{algorithmic}[1]
\Require $\bA\in \real^{m\times n}$ with \textcolor{blue}{exact} rank $r$;
\For{$k=1$ to $n$} \Comment{i.e., compute $k$-th column of $\bQ$ and $k$-th row of $\bR$}
\State \textcolor{blue}{Find the column with the largest norm in $\bA$, and permute it to the first column;}
\State $\ba_1=\bA[:,1]$; \Comment{$1$-st column of $\bA\in \real^{m\times (n-k+1)}$}
\State $r_{kk}=||\ba_1||$;\Comment{$\ba_1\in \real^{m\times 1}$}
\State $\bq_k = \ba_1/r_{kk}$;
\State $\br_{k2}^\top=\bq_k^\top\bA_2$; \Comment{$\bA_2=\bA[:,2:n]\in \real^{m\times (n-k)}$, $\br_{k2}^\top\in \real^{1\times (n-k)}$}
\State $\bA=\bA_2-\bq_k\br_{k2}^\top$; \Comment{$\bA \in \real^{m\times (n-k)}$}
\State \textcolor{blue}{Exit when $r_{kk}=0$;}
\EndFor
\State Output $\bQ=[\bq_1, \ldots, \bq_n]$ and $\bR$ with entry $(i,k)$ being $r_{ik}$;
\end{algorithmic}
\end{algorithm}
The difference is that, in each iteration, we need to compute the norms of all the columns of the current $\bA$, rather than computing all the norms at once as in CGS. Suppose in iteration $k$, we compute the reduced QR decomposition of a matrix of size $m\times (n-k+1)$ if the original matrix $\bA$ is of size $m\times n$. That is, an extra $(n-k+1)(2m-1)$ flops are needed for the CPQR via MGS. Let $f(k)=(n-k+1)(2m-1)$; a simple calculation shows that the additional complexity for the CPQR via MGS is:
\begin{equation}\label{equation:mgs-cpqr-extra1}
\text{extra cost = }f(1)+f(2)+\ldots +f(n)\sim mn^2 \text{ flops},
\end{equation}
if we keep only the leading term. This costs much more than the $n(2m-1)$ flops in the ``practical'' CPQR via CGS.
Fortunately, this extra cost in the CPQR via MGS can be partially avoided. Suppose the column partition of $\bA\in \real^{m\times n}$ is $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$, and the squared norms of the columns are given in the vector
$$
\bl =
\begin{bmatrix}
l_1 \\
l_2 \\
\vdots \\
l_n
\end{bmatrix}=
\begin{bmatrix}
||\ba_1||^2\\
||\ba_2||^2\\
\vdots \\
||\ba_n||^2\\
\end{bmatrix}.
$$
Suppose further $\bq\in \real^m$ is a unit-length vector such that $\bq^\top\bq=1$, and $\br\in \real^n$ is a vector given by
$$
\br = \bA^\top\bq
=
\begin{bmatrix}
r_1 \\
r_2 \\
\vdots \\
r_n
\end{bmatrix}
. \gap \text{(similar to step 6 of Algorithm~\ref{alg:cpqr-partial-fact-cpqr})}
$$
Let further $\bB=\bA-\bq\br^\top=[\bb_1, \bb_2, \ldots, \bb_n]$ (similar to step 7 of Algorithm~\ref{alg:cpqr-partial-fact-cpqr}). Then the squared-length vector of $\bB$ is given by
$$
\bs =
\begin{bmatrix}
s_1 \\
s_2 \\
\vdots \\
s_n
\end{bmatrix}=
\begin{bmatrix}
||\bb_1||^2\\
||\bb_2||^2\\
\vdots \\
||\bb_n||^2\\
\end{bmatrix}
=
\begin{bmatrix}
l_1 -r_1^2\\
l_2 -r_2^2\\
\vdots \\
l_n-r_n^2
\end{bmatrix}.
$$
This can easily be checked: since $\bb_i = \ba_i -r_{i}\bq =\ba_i-(\ba_i^\top\bq) \bq$ and $\bq^\top\bq=1$, we have
$$
||\bb_i||^2 = || \ba_i -r_{i}\bq||^2=( \ba_i -r_{i}\bq)^\top( \ba_i -r_{i}\bq)=l_i-r_i^2.
$$
Coming back to step 2 of Algorithm~\ref{alg:cpqr-partial-fact-cpqr}, suppose we have computed the squared norms of the columns of the original matrix $\bA\in \real^{m\times n}$ (which takes $n(2m-1)$ flops, the same as in the ``practical'' CPQR via CGS). The squared norms of the columns of $\bA_2-\bq_1\br_{12}^\top$ (suppose $k=1$ in step 7 of Algorithm~\ref{alg:cpqr-partial-fact-cpqr}) can then be obtained with an extra $2(n-1)$ flops. Over all $n$ iterations, the cost is $2(n-1)+2(n-2)+\ldots +2(1)=n^2-n$ flops. This is much less than the $\sim mn^2$ flops in Equation~\eqref{equation:mgs-cpqr-extra1}.
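The downdating identity $s_i = l_i - r_i^2$ is easy to check numerically; the sketch below (variable names are ours) compares it against recomputing the norms from scratch after one deflation step:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
l = np.sum(A * A, axis=0)            # squared column norms of A

q = rng.standard_normal(6)
q /= np.linalg.norm(q)               # unit-length q with q^T q = 1
r = A.T @ q                          # r_i = a_i^T q
B = A - np.outer(q, r)               # B = A - q r^T (one deflation step)

s_direct = np.sum(B * B, axis=0)     # recompute from scratch: ~2mn flops
s_update = l - r**2                  # downdate: only 2n flops
```

Both routes give the same squared norms, but the downdate touches only $n$ scalars instead of the full $m\times n$ matrix.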
\subsection{Partial Factorization for CPQR: Extra Bonus of CPQR via MGS}\label{section:partial-cpqr-mgs}
\paragraph{Partial Factorization for CPQR}
The extra bonus of the CPQR via MGS is that we can stop the factorization partway. We notice that step 2 of Algorithm~\ref{alg:cpqr-partial-fact-cpqr} permutes the column with the largest norm into the first column, and step 4 of Algorithm~\ref{alg:cpqr-partial-fact-cpqr} computes that norm into the main diagonal of the upper triangular $\bR$. When $\bA$ has full rank $n$, $r_{kk}$ will be positive in all iterations. When $\bA$ has an ``exact'' rank $r$, the algorithm will stop after iteration $r$. However, when $\bA$ has an ``effective'' rank $r$ with rank deficiency
\footnote{\textit{Effective rank}, also known as the \textit{numerical rank}. Assume the $i$-th largest singular value of $\bA$ is denoted by $\sigma_i(\bA)$. Then if $\sigma_r(\bA)\gg \sigma_{r+1}(\bA)\approx 0$, $r$ is known as the numerical rank of $\bA$. The singular values of a matrix $\bA$ will be introduced in the SVD section (Section~\ref{section:SVD}, p.~\pageref{section:SVD}). Whereas, when $\sigma_r(\bA)>\sigma_{r+1}(\bA)=0$, $\bA$ is known as having \textit{exact rank} $r$, as we have used in most of our discussions.}, the algorithm proceeds well in the first $r$ iterations since $\{r_{11}, r_{22}, \ldots, r_{rr}\}$ take relatively large values, far from 0. When it comes to iterations $k=r+1, \ldots$, the value $r_{kk}$ is small, which means column $k$ of $\bA\bP$ has only a small component in the direction of $\bq_k$ and is ``almost'' dependent on the previous $\{\bq_1, \bq_2, \ldots, \bq_{k-1}\}$. The situation is the same for the remaining columns $\{k+1, k+2, \ldots\}$ since $r_{kk}\geq r_{k+1, k+1}\geq \ldots \geq r_{nn}$ in iteration $k$. The partial-factorization CPQR via MGS is formulated in Algorithm~\ref{alg:cpqr-partial-fact-cpqr-partial}. This is related to the \textit{rank-revealing QR decomposition (RRQR)} that we will introduce in the next sections. The algorithm results in the factorization
\begin{equation}
\bA\bP =
\bQ\bR=
\bQ
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bR_{22}
\end{bmatrix},
\end{equation}
where $\bR_{22}$ is small in norm.
However, in the ``practical'' CPQR via CGS (Algorithm~\ref{alg:reduced-qr-rank-revealing} with $\bA$ permuted once at the beginning of the procedure), this early stopping is not possible, which is wasteful when $r \ll \min(m, n)$: since we only order the columns by norm at the beginning, a value $r_{kk}$ close to 0 only means that column $k$ of $\bA\bP$ is almost dependent on $\{\bq_1, \bq_2, \ldots, \bq_k\}$; it does not mean the remaining columns $\{k+1, k+2, \ldots\}$ of $\bA\bP$ are also almost dependent on $\{\bq_1, \bq_2, \ldots, \bq_k\}$.
\begin{algorithm}[H]
\caption{\textcolor{blue}{Practical} and \textcolor{blue}{Partial} CPQR via MGS (\textcolor{blue}{Row-Wise and Recursively}) }
\label{alg:cpqr-partial-fact-cpqr-partial}
\begin{algorithmic}[1]
\Require $\bA\in \real^{m\times n}$ with rank deficiency;
\State Select a stopping criterion $\delta$;
\For{$k=1$ to $n$} \Comment{i.e., compute $k$-th column of $\bQ$ and $k$-th row of $\bR$}
\State \textcolor{blue}{Find the column with the largest norm in $\bA$, and permute it to the first column;}
\State $\ba_1=\bA[:,1]$; \Comment{$1$-st column of $\bA\in \real^{m\times (n-k+1)}$}
\State $r_{kk}=||\ba_1||$;\Comment{$\ba_1\in \real^{m\times 1}$}
\State $\bq_k = \ba_1/r_{kk}$;
\State $\br_{k2}^\top=\bq_k^\top\bA_2$; \Comment{$\bA_2=\bA[:,2:n]\in \real^{m\times (n-k)}$, $\br_{k2}^\top\in \real^{1\times (n-k)}$}
\State $\bA=\bA_2-\bq_k\br_{k2}^\top$; \Comment{$\bA \in \real^{m\times (n-k)}$}
\State \textcolor{blue}{Exit when $r_{kk}<\delta$, set effective rank $r=k$;}
\EndFor
\State Output $\bQ=[\bq_1, \ldots, \bq_n]$, $\bR$ with entry $(i,k)$ being $r_{ik}$, and \textcolor{blue}{effective rank $r$};
\end{algorithmic}
\end{algorithm}
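A numpy sketch of this practical, partial CPQR via MGS, combined with the squared-norm downdating from the previous subsection, might look as follows (the function name and the tolerance `delta` are our choices):

```python
import numpy as np

def cpqr_mgs(A, delta=1e-10):
    """Illustrative practical/partial CPQR via MGS with norm downdating.
    Returns Q_r, the r x n matrix [R11, R12], the permutation, and rank r."""
    A = np.asarray(A, dtype=float).copy()
    m, n = A.shape
    perm = np.arange(n)
    l = np.sum(A * A, axis=0)               # squared column norms (downdated)
    Q, R = np.zeros((m, n)), np.zeros((n, n))
    r = n
    for k in range(n):
        j = k + np.argmax(l[k:])            # column of largest remaining norm
        for X in (A, R):                    # permute it to the front
            X[:, [k, j]] = X[:, [j, k]]
        l[[k, j]], perm[[k, j]] = l[[j, k]], perm[[j, k]]
        rkk = np.linalg.norm(A[:, k])
        if rkk < delta:                     # effective rank found: stop early
            r = k
            break
        R[k, k] = rkk
        Q[:, k] = A[:, k] / rkk
        R[k, k+1:] = Q[:, k] @ A[:, k+1:]              # k-th row of R (MGS)
        A[:, k+1:] -= np.outer(Q[:, k], R[k, k+1:])    # deflate the rest
        l[k+1:] -= R[k, k+1:] ** 2                     # downdate squared norms
    return Q[:, :r], R[:r], perm, r

A = np.outer([1., 2., 3.], [1., 0., 2., 1.]) \
    + np.outer([0., 1., 1.], [0., 1., 1., 0.])         # a 3 x 4 rank-2 matrix
Qr, R, perm, r = cpqr_mgs(A)
```

On this rank-2 example, the loop exits after two iterations, returning the effective rank together with the pivoted factorization $\bA\bP=\bQ_r[\bR_{11}, \bR_{12}]$.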
\section{QR with Column Pivoting: Revealing Rank One Deficiency}\label{section:rank-one-qr-revealing}\index{Rank-revealing}
We notice that Algorithm~\ref{alg:reduced-qr-rank-revealing} is just one method to find a column permutation when $\bA$ is rank deficient: we interchange the first $r$ linearly independent columns of $\bA$ into the first $r$ columns of $\bA\bP$. Now suppose $\bA$ is nearly rank-one deficient, and we would like to find a column permutation of $\bA$ such that the resulting pivotal element $r_{nn}$ of the QR decomposition is small. This is known as the \textit{revealing rank-one deficiency} problem.
\begin{theoremHigh}[Revealing Rank One Deficiency, \citep{chan1987rank}]\label{theorem:finding-good-qr-ordering}
If $\bA\in \real^{m\times n}$ and $\bv\in \real^n$ is a unit 2-norm vector (i.e., $||\bv||=1$), then there exists a permutation $\bP$ such that the reduced QR decomposition
$$
\bA\bP = \bQ\bR
$$
satisfies $r_{nn} \leq \sqrt{n} \epsilon$ where $\epsilon = ||\bA\bv||$ and $r_{nn}$ is the $n$-th diagonal of $\bR$. Note that $\bQ\in \real^{m\times n}$ and $\bR\in \real^{n\times n}$ in the reduced QR decomposition.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:finding-good-qr-ordering}]
Suppose $\bP\in \real^{n\times n}$ is a permutation matrix such that $\bw=\bP^\top\bv$ satisfies
$$
|w_n| = \max |v_i|, \,\,\,\, \forall i \in \{1,2,\ldots,n\},
$$
i.e., the component of $\bv$ with the largest magnitude is interchanged into the last entry of $\bw$. Since $||\bv||=1$, it follows that $|w_n| \geq 1/\sqrt{n}$. Suppose the QR decomposition of $\bA\bP$ is $\bA\bP = \bQ\bR$; then
$$
\epsilon = ||\bA\bv|| = ||(\bQ^\top\bA\bP) (\bP^\top\bv)|| = ||\bR\bw|| =
\left\Vert\begin{bmatrix}
\vdots \\
r_{nn} w_n
\end{bmatrix}\right\Vert
\geq |r_{nn} w_n| \geq |r_{nn}|/\sqrt{n},
$$
where the second equality follows from $\bP\bP^\top=\bI$ and the length preservation under the transformation $\bQ$, whose columns are orthonormal. Rearranging gives $r_{nn}\leq \sqrt{n}\epsilon$, which completes the proof.
\end{proof}
The following discussion is based on the existence of the singular value decomposition (SVD) which will be introduced in Section~\ref{section:SVD} (p.~\pageref{section:SVD}). Feel free to skip at a first reading. Suppose the SVD of $\bA$ is given by $\bA = \sum_{i=1}^{n} \sigma_i \bu_i\bv_i^\top$, where $\sigma_i$'s are the singular values with $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_n$, i.e., $\sigma_n$ is the smallest singular value, and $\bu_i$'s, $\bv_i$'s are left and right singular vectors respectively. Then, if we let $\bv = \bv_n$ such that $\bA\bv_n = \sigma_n \bu_n$, \footnote{We will prove that the right singular vector of $\bA$ is equal to the right singular vector of $\bR$ if the $\bA$ has QR decomposition $\bA=\bQ\bR$ in Lemma~\ref{lemma:svd-for-qr} (p.~\pageref{lemma:svd-for-qr}). The claim can also be applied to the singular values. So $\bv_n$ here is also the right singular vector of $\bR$.} we have
$$
||\bA\bv|| = \sigma_n.
$$
By constructing a permutation matrix $\bP$ such that
$$
|\bP^\top \bv|_n = \max |v_i|, \,\,\,\, \forall i \in \{1,2,\ldots,n\},
$$
we will find a QR decomposition $\bA\bP=\bQ\bR$ with a pivot $r_{nn}$ no larger than $\sqrt{n}\sigma_n$. If $\bA$ is rank-one deficient, then $\sigma_n$ will be close to 0, and $r_{nn}$ is thus bounded by a value close to 0 in magnitude.
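The bound $r_{nn}\leq\sqrt{n}\,\sigma_n$ can be checked numerically. The sketch below builds a nearly rank-one-deficient matrix with a prescribed tiny $\sigma_n$ (the construction and names are ours) and verifies the bound:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 5
U, _ = np.linalg.qr(rng.standard_normal((m, n)))     # orthonormal columns
V, _ = np.linalg.qr(rng.standard_normal((n, n)))     # orthogonal matrix
sigma = np.array([3.0, 2.5, 2.0, 1.0, 1e-8])         # sigma_n is tiny
A = U @ np.diag(sigma) @ V.T                         # nearly rank-one deficient

v = np.linalg.svd(A)[2][-1]          # right singular vector for sigma_n
j = np.argmax(np.abs(v))             # component of largest magnitude
P = np.eye(n)
P[:, [j, n - 1]] = P[:, [n - 1, j]]  # interchange it into the last position
Q, R = np.linalg.qr(A @ P)
eps = np.linalg.norm(A @ v)          # equals sigma_n up to rounding
```

The permutation moves the largest-magnitude component of $\bv_n$ into the last slot, after which the trailing pivot of the QR decomposition of $\bA\bP$ is tiny, as the theorem predicts.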
\section{QR with Column Pivoting: Revealing Rank r Deficiency*}\label{section:rank-r-qr}\index{Rank-revealing QR}
Following from the last section, suppose now we want to compute the reduced QR decomposition where $\bA\in \real^{m\times n}$ is nearly rank $r$ deficient \footnote{Note that rank $r$ here does not mean the matrix has rank $r$, but rather it has rank $(\min\{m,n\}-r)$.} with $r>1$. Our goal now is to find a permutation $\bP$ such that
\begin{equation}\label{equation:rankr-reval-qr}
\bA\bP =
\bQ\bR=
\bQ
\begin{bmatrix}
\bL & \bM \\
\bzero & \bN
\end{bmatrix},
\end{equation}\footnote{Abusing notation, we use $\bL,\bM,\bN$ for clarity in the derivation. It is better to replace $\bL,\bM,\bN$ by $\bR_{11}, \bR_{12}, \bR_{22}$ to match other contexts.}
where $\bN \in \real^{r\times r}$ and $||\bN||$ is small in some norm (and $\bL\in \real^{(n-r)\times (n-r)}, \bM\in \real^{(n-r)\times r}$ that can be inferred from context).
A recursive algorithm can be applied to do so. Suppose we have already isolated a small $k\times k$ block $\bN_k$; if, based on this, we can isolate a small $(k+1)\times (k+1)$ block $\bN_{k+1}$, then we can find the permutation matrix recursively. To repeat, suppose we have a permutation $\bP_k$ such that $\bN_k \in \real^{k\times k}$ has a small norm,
$$
\bA\bP_k = \bQ_k \bR_k=
\bQ_k
\begin{bmatrix}
\bL_k & \bM_k \\
\bzero & \bN_k
\end{bmatrix}.
$$
We want to find a permutation $\bP_{k+1}$, such that $\bN_{k+1} \in \real^{(k+1)\times (k+1)}$ also has a small norm,
$$
\boxed{
\bA\bP_{k+1} = \bQ_{k+1} \bR_{k+1}=
\bQ_{k+1}
\begin{bmatrix}
\bL_{k+1} & \bM_{k+1} \\
\bzero & \bN_{k+1}
\end{bmatrix}}.
$$
From the algorithm introduced in the last section, there is an $(n-k)\times (n-k)$ permutation matrix $\widetilde{\bP}_{k+1}$ such that $\bL_k \in \real^{(n-k)\times (n-k)}$ has the QR decomposition $\bL_k \widetilde{\bP}_{k+1} = \widetilde{\bQ}_{k+1}\widetilde{\bL}_k$, where the entry $(n-k, n-k)$ of $\widetilde{\bL}_k$ is small. By constructing
$$
\bP_{k+1} = \bP_k
\begin{bmatrix}
\widetilde{\bP}_{k+1} & \bzero \\
\bzero & \bI
\end{bmatrix},
\qquad
\bQ_{k+1} = \bQ_k
\begin{bmatrix}
\widetilde{\bQ}_{k+1} & \bzero \\
\bzero & \bI
\end{bmatrix},
$$
we have
$$
\boxed{
\bA \bP_{k+1} = \bQ_{k+1}
\begin{bmatrix}
\widetilde{\bL}_k & \widetilde{\bQ}_{k+1}^\top \bM_k \\
\bzero & \bN_k
\end{bmatrix}}.
$$
We know that the entry $(n-k, n-k)$ of $\widetilde{\bL}_k$ is small; if we can further prove that the last row of $\widetilde{\bQ}_{k+1}^\top \bM_k$ is small in norm, then we have found a QR decomposition revealing rank $k+1$ deficiency (see \citep{chan1987rank} for a proof). The procedure is formulated in Algorithm~\ref{alg:qr-reveal-rank-r}.
\begin{algorithm}[h]
\caption{Reveal Rank $r$ Deficiency}
\label{alg:qr-reveal-rank-r}
\begin{algorithmic}[1]
\Require
Matrix $\bA$ with size $m\times n$ and $m\geq n$, and $rank(\bA)=n-r$;
\State Initialize $\bW \in \real^{n\times r}$ to zero; \Comment{store the singular vectors}
\State Initial QR decomposition by $\bA = \bQ\bR$;
\For{$i=n$ to $n-r+1$}
\State $\bL \leftarrow $ leading $i\times i$ block of $\bR$;
\State Compute the singular vector $\bv\in \real^i$ corresponding to the min singular value of $\bL$;
\State Compute a permutation $\widetilde{\Pi} \in \real^{i\times i}$ such that $|\widetilde{\Pi}^\top \bv|_i = \max |v_j|, \forall j \in \{1,2,\ldots, i\}$;
\State Assign $\begin{bmatrix}
\bv \\
\bzero
\end{bmatrix}$ to the $i$-th column of $\bW$;
\State Compute $\bW \leftarrow \Pi^\top \bW$, where $\Pi = \begin{bmatrix}
\widetilde{\Pi} & \bzero \\
\bzero & \bI
\end{bmatrix}$;
\State Compute the QR decomposition: $\bL\widetilde{\Pi} = \widetilde{\bQ} \widetilde{\bL}$;
\State $\bP \leftarrow \bP \Pi$;
\State $\bQ \leftarrow \bQ \begin{bmatrix}
\widetilde{\bQ} & \bzero \\
\bzero & \bI
\end{bmatrix}$;
\State $\bR \leftarrow \begin{bmatrix}
\widetilde{\bL} & \widetilde{\bQ}^\top \bB \\
\bzero & \bC
\end{bmatrix}$, where $\bB =\bR_{1:i, i+1:n}$, and $\bC = \bR_{i+1:n, i+1:n}$;
\EndFor
\end{algorithmic}
\end{algorithm}
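A numpy sketch of Algorithm~\ref{alg:qr-reveal-rank-r} is given below, with `np.linalg.qr` and `np.linalg.svd` standing in for the QR and singular-vector subroutines (the function name and test matrix are our choices):

```python
import numpy as np

def reveal_rank_r(A, r):
    """Illustrative rank-r revealing QR: repeatedly apply the rank-one
    revealing step to the shrinking leading block of R."""
    m, n = A.shape
    Q, R = np.linalg.qr(A)                   # initial QR: Q is m x n, R is n x n
    P = np.eye(n)
    for i in range(n, n - r, -1):            # i = n, n-1, ..., n-r+1
        L = R[:i, :i]                        # leading i x i block
        v = np.linalg.svd(L)[2][-1]          # right singular vector, min sigma
        j = np.argmax(np.abs(v))
        Pi_s = np.eye(i)
        Pi_s[:, [j, i - 1]] = Pi_s[:, [i - 1, j]]    # small permutation
        Qt, Lt = np.linalg.qr(L @ Pi_s)
        Pi = np.eye(n)
        Pi[:i, :i] = Pi_s
        P = P @ Pi                           # accumulate the permutation
        Q[:, :i] = Q[:, :i] @ Qt             # accumulate the orthogonal factor
        R[:i, :i] = Lt                       # update the leading block
        R[:i, i:] = Qt.T @ R[:i, i:]         # update the coupling block B
    return Q, R, P

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((7, 5)))
V, _ = np.linalg.qr(rng.standard_normal((5, 5)))
A = U @ np.diag([3.0, 2.0, 1.0, 1e-9, 5e-10]) @ V.T  # nearly rank-2 deficient
Q, R, P = reveal_rank_r(A, 2)
```

After two passes the trailing $2\times 2$ block of $\bR$ is small in norm, while $\bA\bP=\bQ\bR$ is maintained throughout.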
\section{Existence of the QR Decomposition via the Householder Reflector}\label{section:qr-via-householder}
\subsubsection*{\textbf{Householder Reflectors}}
We first give the formal definition of a Householder reflector and we will take a look at its properties.
\begin{definition}[Householder Reflector\index{Householder reflector}]\label{definition:householder-reflector}
Let $\bu \in \real^n$ be a vector of unit length (i.e., $||\bu||=1$). Then $\bH = \bI - 2\bu\bu^\top$ is said to be a \textit{Householder reflector}, a.k.a., a \textit{Householder transformation}. We call this $\bH$ the Householder reflector associated with the unit vector $\bu$ where the unit vector $\bu$ is also known as \textit{Householder vector}. If a vector $\bx$ is multiplied by $\bH$, then it is reflected in the hyperplane $span\{\bu\}^\perp$.
Note that if $||\bu|| \neq 1$, we can define $\bH = \bI - 2 \frac{\bu\bu^\top}{\bu^\top\bu} $ as the Householder reflector.
\end{definition}
From the definition of the Householder reflector, we have the following corollary: a special kind of vector remains unchanged under the Householder reflector.
\begin{corollary}[Unreflected by Householder]
Suppose $||\bu||=1$, and define the Householder reflector $\bH=\bI-2\bu\bu^\top$. Then any vector $\bv$ that is perpendicular to $\bu$ is left unchanged by the Householder transformation, that is, $\bH\bv=\bv$ if $\bu^\top\bv=0$.
\end{corollary}
The proof is trivial: since $\bu^\top\bv=0$, we have $\bH\bv=(\bI - 2\bu\bu^\top)\bv = \bv - 2\bu(\bu^\top\bv)=\bv$.
Suppose $\bu$ is a unit vector with $||\bu||=1$, and a vector $\bv$ is perpendicular to $\bu$. Then any vector $\bx$ on the plane can be decomposed into two parts
$$
\bx = \bx_{\bv} + \bx_{\bu},
$$
where the first one $\bx_{\bu}$ is parallel to $\bu$ and the second one $\bx_{\bv}$ is perpendicular to $\bu$ (i.e., parallel to $\bv$). From Section~\ref{section:project-onto-a-vector} on the projection of a vector onto another one, $\bx_{\bu}$ can be computed by $\bx_{\bu} = \frac{\bu\bu^\top}{\bu^\top\bu} \bx = \bu\bu^\top\bx$, i.e., the projection of $\bx$ onto the vector $\bu$. We then transform this $\bx$ by the Householder reflector associated with $\bu$,
$$\bH\bx = (\bI - 2\bu\bu^\top)(\bx_{\bv} + \bx_{\bu}) = \bx_{\bv} -\bu\bu^\top \bx = \bx_{\bv} - \bx_{\bu},
$$
i.e., the Householder reflector transforms $\bx_{\bv} + \bx_{\bu}$ into $\bx_{\bv} - \bx_{\bu}$.
That is, the space perpendicular to $\bu$ acts as a mirror and any vector $\bx$ is reflected by the Householder reflector associated with $\bu$ (i.e., reflected in the hyperplane $span\{\bu\}^\perp$). The situation is shown in Figure~\ref{fig:householder}.
\begin{SCfigure}
\centering
\includegraphics[width=0.5\textwidth]{imgs/householder.pdf}
\caption{Demonstration of the Householder reflector. The Householder reflector obtained by $\bH=\bI-2\bu\bu^\top$ where $||\bu||=1$ will reflect vector $\bx$ along the plane perpendicular to $\bu$: $\bx=\bx_{\bv} + \bx_{\bu} \rightarrow \bx_{\bv} - \bx_{\bu}$.}
\label{fig:householder}
\end{SCfigure}
The above discussion tells us how to find the reflected vector given the Householder reflector. A further question can be posed: if we know up front that two vectors are reflections of each other, how do we find the corresponding Householder reflector? The next corollary answers this question. The property is important for computing the QR decomposition when we want to reflect a column into a specific form.
\begin{corollary}[Finding the Householder Reflector]\label{corollary:householder-reflect-finding}
Suppose $\bx$ is reflected to $\by$ by a Householder reflector with $||\bx|| = ||\by||$, then the Householder reflector is obtained by
$$
\bH = \bI - 2 \bu\bu^\top, \text{ where } \bu = \frac{\bx-\by}{||\bx-\by||}.
$$
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:householder-reflect-finding}]
Writing out the equation, we have
$$
\begin{aligned}
\bH\bx &= \bx - 2 \bu\bu^\top\bx =\bx - 2\frac{(\bx-\by)(\bx^\top-\by^\top)}{(\bx-\by)^\top(\bx-\by)} \bx\\
&= \bx - (\bx-\by) = \by,
\end{aligned}
$$
where the second line uses $2(\bx-\by)^\top\bx = (\bx-\by)^\top(\bx-\by)$, which holds if and only if $\bx^\top\bx=\by^\top\by$. This is why the condition $||\bx|| = ||\by||$ is required to prove the result.
\end{proof}
Householder reflectors are useful for setting a block of components of a given vector to zero. In particular, we usually would like to zero out all entries of a vector $\ba\in \real^n$ except the $i$-th element. The Householder vector can then be chosen to be
$$
\bu = \frac{\ba - r\be_i}{||\ba - r\be_i||}, \qquad \text{where } r = \pm||\ba||,
$$
which is a valid Householder vector since $||\ba|| = ||r\be_i|| = |r|$. We carefully note that when $r=||\ba||$, $\ba$ is reflected to $||\ba||\be_i$ via the Householder reflector $\bH = \bI - 2 \bu\bu^\top$; otherwise, when $r=-||\ba||$, $\ba$ is reflected to $-||\ba||\be_i$.
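For example, with $i=1$ and $r=+||\ba||$ (a sketch; the test vector is arbitrary):

```python
import numpy as np

a = np.array([3.0, 1.0, 5.0, 1.0])
i = 0                                  # zero out every entry except entry i
r = np.linalg.norm(a)                  # choose r = +||a||; here r = 6
e_i = np.zeros_like(a)
e_i[i] = 1.0
u = (a - r * e_i) / np.linalg.norm(a - r * e_i)   # Householder vector
H = np.eye(a.size) - 2.0 * np.outer(u, u)         # Householder reflector
```

Applying `H` to `a` then gives $||\ba||\be_1=(6,0,0,0)$ up to rounding: all entries below the first are annihilated.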
Recall that in Section~\ref{section:qr-gram-compute} (p.~\pageref{section:qr-gram-compute}), we claimed that the Householder or Givens method employs a set of orthogonal matrices to triangularize the matrix such that the QR decomposition is obtained, and the orthogonal matrix is ``more'' orthogonal in this sense. The Householder reflector is such an orthogonal matrix. We now provide some more properties of the Householder reflector in the following remark.
\begin{remark}[Householder Properties]\label{remark:householder-propes}
If $\bH$ is a Householder reflector, then it has the following properties:
$\bullet$ $\bH\bH = \bI$;
$\bullet$ $\bH = \bH^\top$;
$\bullet$ $\bH^\top\bH = \bH\bH^\top = \bI$ such that Householder reflector is an orthogonal matrix;
$\bullet$ $\bH\bu = -\bu$, if $\bH = \bI - 2 \bu\bu^\top$.
\end{remark}
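These properties are easy to verify numerically (a quick sketch with an arbitrary unit Householder vector):

```python
import numpy as np

u = np.array([2.0, -1.0, 2.0])
u /= np.linalg.norm(u)                  # unit-length Householder vector
H = np.eye(3) - 2.0 * np.outer(u, u)    # the associated Householder reflector
```

Here `H` is involutory ($\bH\bH=\bI$), symmetric, orthogonal, and sends $\bu$ to $-\bu$.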
\subsubsection*{\textbf{Orthogonal Triangularization}}
To repeat, we see in the Gram-Schmidt section that QR decomposition is to use a triangular matrix to orthogonalize a matrix $\bA$. The further idea is that, if we have a set of orthogonal matrices that can make $\bA$ to be triangular step by step, then we can also recover the QR decomposition.
Specifically, if we have an orthogonal matrix $\bQ_1$ that can introduce zeros to the $1$-st column of $\bA$ except the entry $(1,1)$; and an orthogonal matrix $\bQ_2$ that can introduce zeros to the $2$-nd column except the entries $(1,2)$, $(2,2)$; $\ldots$. Then, we can also find the QR decomposition.
For the way to introduce zeros, we could reflect the columns of the matrix to a basis vector $\be_1$ whose entries are all zero except the first entry.
Let $\bA=[\ba_1, \ba_2, \ldots, \ba_n]\in \real^{m\times n}$ be the column partition of $\bA$, and let further
\begin{equation}\label{equation:qr-householder-to-chooose-r-numeraically}
r_1 = ||\ba_1||,\qquad \bu_1 = \frac{\ba_1 - r_1 \be_1}{||\ba_1 - r_1 \be_1||}, \qquad \text{and}\qquad \bH_1 = \bI - 2\bu_1\bu_1^\top,
\end{equation}
where $\be_1$ here is the first basis for $\real^m$, i.e., $\be_1 = [1;0;0;\ldots;0]\in \real^m$.
Then
\begin{equation}\label{equation:householder-qr-projection-step1}
\bH_1\bA = [\bH_1\ba_1, \bH_1\ba_2, \ldots, \bH_1\ba_n] =
\begin{bmatrix}
r_1 & \bR_{1,2:n} \\
\bzero& \bB_2
\end{bmatrix},
\end{equation}
which reflects $\ba_1$ to $r_1\be_1$ and introduces zeros below the diagonal in the $1$-st column. We observe that the entries below $r_1$ are all zero under this specific reflection. Notice that we reflect $\ba_1$ to $||\ba_1||\be_1$, which has the same length, rather than reflecting $\ba_1$ to $\be_1$ directly. This is for the purpose of \textbf{numerical stability} and matches the requirement in Corollary~\ref{corollary:householder-reflect-finding}.
\textbf{Choice of $r_1$:} moreover, the choice of $r_1$ is \textbf{not unique}. For \textbf{numerical stability}, it is often desirable to choose $r_1 =-\text{sign}(a_{11}) ||\ba_1||$, where $a_{11}$ is the first component of $\ba_{1}$. Even $r_1 =\text{sign}(a_{11}) ||\ba_1||$ is possible, as long as $||\ba_1||$ is equal to $||r_1\be_1||$. However, we will not cover this topic here.
We can then apply this process to $\bB_2$ in Equation~\eqref{equation:householder-qr-projection-step1} to make the entries below the entry $(2,2)$ all zeros. Note that we do not apply this process to the entire $\bH_1\bA$ but rather to the submatrix $\bB_2$ inside it, because we have already introduced zeros in the first column, and reflecting again would introduce nonzero values back and destroy what we have accomplished.
Suppose $\bB_2 = [\bb_2, \bb_3, \ldots, \bb_n]$ is the column partition of $\bB_2$, and let
$$
r_2 = ||\bb_2||,\qquad \bu_2 = \frac{\bb_2 - r_2 \be_1}{||\bb_2 - r_2 \be_1||}, \qquad \qquad \widetilde{\bH}_2 = \bI - 2\bu_2\bu_2^\top, \qquad \text{and}\qquad \bH_2 =
\begin{bmatrix}
1 & \bzero \\
\bzero & \widetilde{\bH}_2
\end{bmatrix},
$$
where $\be_1$ is now the first basis vector for $\real^{m-1}$, and $\bH_2$ is also an orthogonal matrix since $\widetilde{\bH}_2$ is an orthogonal matrix. Then it follows that
$$
\widetilde{\bH}_2\bB_2 = [\widetilde{\bH}_2\bb_2, \widetilde{\bH}_2\bb_3, \ldots, \widetilde{\bH}_2\bb_n]=
\begin{bmatrix}
r_2 & \bR_{2,3:n} \\
\bzero &\bC_3
\end{bmatrix},
$$
and
$$
\bH_2\bH_1\bA = [\bH_2\bH_1\ba_1, \bH_2\bH_1\ba_2, \ldots, \bH_2\bH_1\ba_n] =
\begin{bmatrix}
r_1 & \bR_{12} & \bR_{1,3:n} \\
0 & r_2 & \bR_{2,3:n} \\
\bzero & \bzero &\bC_3
\end{bmatrix}.
$$
The same process can go on, and if $\bA\in \real^{m\times n}$, after $n$ stages we will finally triangularize: $\bH_n \bH_{n-1}\ldots\bH_1\bA = \bR$, i.e., $\bA = (\bH_n \bH_{n-1}\ldots\bH_1)^{-1} \bR = \bQ\bR$. Since the $\bH_i$'s are symmetric and orthogonal (Remark~\ref{remark:householder-propes}), we have the orthogonal matrix $\bQ=(\bH_n \bH_{n-1}\ldots\bH_1)^{-1} = \bH_1 \bH_2\ldots\bH_n$.
An example of a $5\times 4$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\bH_1}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bH_2}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
&\stackrel{\bH_3}{\rightarrow}
\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bH_4}{\rightarrow}
\begin{sbmatrix}{\bH_4\bH_3\bH_2\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes \\
0 & 0 & 0 & \bm{\boxtimes} \\
0 & 0 & 0 & \bm{0}
\end{sbmatrix}
\end{aligned}
$$
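The whole triangularization, together with the accumulation of $\bQ$, can be sketched in numpy as follows. We always take $r=+||\ba||$ as in Equation~\eqref{equation:qr-householder-to-chooose-r-numeraically}, ignoring the sign choice for stability; the function name is ours:

```python
import numpy as np

def householder_qr(A):
    """Illustrative full QR via Householder reflectors:
    R = H_n ... H_1 A and Q = H_1 H_2 ... H_n, each H_i applied implicitly."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    R = A.copy()
    Q = np.eye(m)
    for i in range(n):
        a = R[i:, i]                       # first column of the active block
        r = np.linalg.norm(a)
        u = a.copy()
        u[0] -= r                          # u = a - r e_1
        nu = np.linalg.norm(u)
        if nu > 0:                         # otherwise a is already r e_1
            u /= nu
            R[i:, i+1:] -= 2.0 * np.outer(u, u @ R[i:, i+1:])  # reflect block
            Q[:, i:] -= 2.0 * np.outer(Q[:, i:] @ u, u)        # Q <- Q H_i
        R[i, i] = r                        # reflected first column is r e_1
        R[i+1:, i] = 0.0
    return Q, R

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 4))
Q, R = householder_qr(A)
```

Note that each $\bH_i$ is never formed explicitly: applying $\bI-2\bu\bu^\top$ to a block costs only one matrix-vector product and one rank-one update, which is exactly what makes the Householder approach efficient.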
\paragraph{A closer look at the QR factorization} The Householder algorithm is a process that makes a matrix triangular by a sequence of orthogonal matrix operations. In the Gram-Schmidt process (both CGS and MGS), we use a triangular matrix to orthogonalize the matrix. However, in the Householder algorithm, we use orthogonal matrices to triangularize. The difference between the two approaches is then summarized as follows:
\begin{itemize}
\item Gram-Schmidt: triangular orthogonalization;
\item Householder: orthogonal triangularization.
\end{itemize}
We further notice that, in the Householder algorithm or the Givens algorithm that we will shortly see, a set of orthogonal matrices is applied so that the QR decomposition obtained is a \textit{full} QR decomposition, whereas the direct QR decomposition obtained by CGS or MGS is a \textit{reduced} one (although the silent columns or rows can be further added to find the full one).
\section{Computing the Full QR Decomposition via the Householder Reflector}
Since $\bA$ has $n$ columns, for every step $i\in \{1, 2, \ldots, n\}$, to introduce zeros in the $i$-th column below the diagonal, we operate on a submatrix of size $(m-i+1)\times(n-i+1)$.
To compute the upper triangular matrix $\bR = \bH_n\bH_{n-1}\ldots\bH_1 \bA$, we notice that
$$
\begin{aligned}
\bR
&= (\bH_n\ldots(\bH_3(\bH_2(\bH_1 \bA))))\\
&=
\begin{bmatrix}
\bI_{n-1}& \bzero \\
\bzero& \bI - 2\bu_n \bu_n^\top
\end{bmatrix}
\ldots
\begin{bmatrix}
\bI_2& \bzero \\
\bzero& \bI - 2\bu_3\bu_3^\top
\end{bmatrix}
\begin{bmatrix}
\bI_1& \bzero \\
\bzero& \bI - 2\bu_2\bu_2^\top
\end{bmatrix}
\begin{bmatrix}
\bI - 2\bu_1\bu_1^\top
\end{bmatrix}\bA,
\end{aligned}
$$
where the parentheses indicate the order of the computation. The upper-left block of $\bH_2$ is a $1\times 1$ identity matrix, so it will not change the \textbf{first row} and \textbf{first column} of $\bH_1\bA$, by the ``triangular property''; the upper-left block of $\bH_3$ is a $2\times 2$ identity matrix, which will not change the \textbf{first 2 rows} and \textbf{first 2 columns} of $\bH_2\bH_1\bA$; $\ldots$. This property yields step 8 in Algorithm~\ref{alg:qr-decomposition-householder}, which operates only on rows $i:m$ and columns $i+1:n$ of $\bR$ in step $i$ (the $i$-th column is handled explicitly in step 7, though this hardly reduces the complexity).
After the Householder transformation, we output the final triangular matrix $\bR$, and the process is shown in Algorithm~\ref{alg:qr-decomposition-householder}.
Furthermore, in Algorithm~\ref{alg:qr-decomposition-householder}, to get the final orthogonal matrix $\bQ=\bH_1 \bH_2\ldots\bH_n$, we notice that
$$
\begin{aligned}
\bQ
&=(((\bH_1 \bH_2)\bH_3)\ldots\bH_n) \\
&= \begin{bmatrix}
\bI - 2\bu_1\bu_1^\top
\end{bmatrix}
\begin{bmatrix}
\bI_1& \bzero \\
\bzero& \bI - 2\bu_2\bu_2^\top
\end{bmatrix}
\begin{bmatrix}
\bI_2& \bzero \\
\bzero& \bI - 2\bu_3\bu_3^\top
\end{bmatrix}
\ldots
\begin{bmatrix}
\bI_{n-1}& \bzero \\
\bzero& \bI - 2\bu_n\bu_n^\top
\end{bmatrix},
\end{aligned}
$$
where the parentheses indicate the order of the computation. The upper-left block of $\bH_2$ is a $1\times 1$ identity matrix, so right-multiplying by $\bH_2$ does not change the \textbf{first column} of $\bH_1$; the upper-left block of $\bH_3$ is a $2\times 2$ identity matrix, so it does not change the \textbf{first 2 columns} of $\bH_1\bH_2$; $\ldots$. This property yields step 14 in the algorithm.
\begin{algorithm}[H]
\caption{Full QR Decomposition via the Householder Reflector}
\label{alg:qr-decomposition-householder}
\begin{algorithmic}[1]
\Require matrix $\bA$ with size $m\times n $ and $m\geq n$;
\State Initially set $\bR = \bA$;
\For{$i=1$ to $n$}
\State $\ba = \bR_{i:m,i}$, i.e., first column of $\bR_{i:m,i:n} \in \real^{(m-i+1)\times(n-i+1)}$;
\State $r = ||\ba||$; \Comment{$2(m-i+1)$ flops;}
\State $\bu_i = \ba-r\be_1 \in \real^{m-i+1}$; \Comment{ $1$ flop;}
\State $\bu_i = \bu_i / ||\bu_i||$ ; \Comment{$3(m-i+1)$ flops;}
\State $\bR_{i,i} = r$, $\bR_{i+1:m,i}=\bzero$; \Comment{0 flops, update first column of $\bR_{i:m,i:n}$}
\State $\bR_{i:m,i+1:n} = \bR_{i:m,i+1:n} - 2\bu_i (\bu_i^\top \bR_{i:m,i+1:n})$; \Comment{update $i+1:n$ columns of $\bR_{i:m,i:n}$}
\EndFor
\State Output $\bR$ as the triangular matrix;
\State Compute $\bQ=\bH_1 \bH_2\ldots\bH_n$ as follows:
\State Initially set $\bQ = \bH_1$;
\For{$i=1$ to $n-1$}
\State $\bQ_{1:m,i+1:m} = \bQ_{1:m,i+1:m}(\bI - 2\bu_{i+1}\bu_{i+1}^\top)=\bQ_{1:m,i+1:m}-\bQ_{1:m,i+1:m}2\bu_{i+1}\bu_{i+1}^\top$;
\EndFor
\State Output $\bQ$ as the orthogonal matrix;
\end{algorithmic}
\end{algorithm}
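As a minimal NumPy sketch of the procedure above (the function name \texttt{householder\_qr} is our own, and we follow the text's sign convention $\bu = \ba - r\be_1$ rather than the numerically safer sign choice):

```python
import numpy as np

def householder_qr(A):
    """Full QR via Householder reflectors: returns Q (m x m) and R (m x n)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    R = A.copy()
    Q = np.eye(m)
    for i in range(n):
        a = R[i:, i]                      # first column of the trailing block
        r = np.linalg.norm(a)
        u = a.copy()
        u[0] -= r                         # u = a - r * e1 (text's convention)
        norm_u = np.linalg.norm(u)
        if norm_u == 0.0:                 # column is already triangularized
            continue
        u /= norm_u
        R[i, i] = r                       # first column of the block becomes r*e1
        R[i + 1:, i] = 0.0
        R[i:, i + 1:] -= 2.0 * np.outer(u, u @ R[i:, i + 1:])  # step 8
        Q[:, i:] -= 2.0 * np.outer(Q[:, i:] @ u, u)  # accumulate Q = H1 H2 ... Hn
    return Q, R
```

For the $2\times 2$ example used later in this chapter, $\bA = \begin{bmatrix} 4&1\\3&2\end{bmatrix}$, this sketch returns $\bR$ with positive diagonal.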
\begin{theorem}[Algorithm Complexity: QR via Householder]\label{theorem:qr-full-householder}
Algorithm~\ref{alg:qr-decomposition-householder} requires $\sim 2mn^2-\frac{2}{3}n^3$ flops to compute a full QR decomposition of an $m\times n$ matrix with linearly independent columns and $m\geq n$. Further, if $\bQ$ is needed explicitly, additional $\sim 4m^2n-2mn^2$ flops are required.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-full-householder}]
For loop $i$, $\bR_{i:m,i:n}$ is of size $(m-i+1)\times(n-i+1)$. Thus $\ba$ is in $\real^{m-i+1}$.
In step 4, computing $r = ||\ba||$ involves $m-i+1$ multiplications, $m-i$ additions, and 1 square root operation: \fbox{$2(m-i+1)$} flops in total.
In step 5, computing $\bu_i = \ba-r\be_1$ involves 1 subtraction, i.e., \fbox{$1$} flop, due to the special structure of $\be_1$.
In step 6, as in step 4, computing the norm $||\bu_i||$ requires $2(m-i+1)$ flops ($m-i+1$ multiplications, $m-i$ additions, and 1 square root), followed by $m-i+1$ divisions: \fbox{$3(m-i+1)$} flops in total.
In step 8, consider loop $i=1$: $\bu_1^\top \bR_{1:m,2:n}$ requires $n-1$ inner products ($m$ multiplications and $m-1$ additions each), i.e., \fbox{$(n-1)(2m-1)$} flops; $2\bu_1$ requires \fbox{$m$} multiplications; $2\bu_1 (\bu_1^\top \bR_{1:m,2:n})$ requires \fbox{$m(n-1)$} multiplications to form an $m\times (n-1)$ matrix; and the final matrix subtraction needs \fbox{$m(n-1)$} subtractions. Thus the total complexity for step 8 when $i=1$ is \fbox{$4m(n-1)+m-(n-1)$} flops. The same analysis applies to any loop $i$: the complexity of step 8 for loop $i$ is \fbox{$4(m-i+1)(n-i)+m-n+1$} flops.
So for loop $i$, the total complexity from step 3 to step 8 can be defined as $f(i)$ flops.
To compute $\bR$, the final complexity is
$$
\mathrm{cost}=f(1)+f(2)+\ldots +f(n).
$$
Simple calculation shows that the sum over the $n$ loops is \fbox{$2mn^2-\frac{2}{3}n^3$} flops if we keep only the leading terms.
To get the final orthogonal matrix $\bQ$: since $2\bu_{i+1}$ has already been computed in step 8, this incurs no additional cost. Computing $\bQ_{1:m,i+1:m}2\bu_{i+1}$ involves $m$ inner products ($m-i$ multiplications and $m-i-1$ additions each): \fbox{$m(2(m-i)-1)$} flops. Multiplying the result by $\bu_{i+1}^\top$ takes \fbox{$m(m-i)$} multiplications, and the final matrix subtraction requires \fbox{$m(m-i)$} subtractions. So in loop $i$, the complexity of step 14 is $g(i)=4m(m-i)-m = 4m^2-4mi-m$ flops. To compute $\bQ$, the final complexity is
$$
\mathrm{cost}=g(1)+g(2)+\ldots +g(n-1).
$$
Simple calculation shows that the sum over the $n-1$ loops is $4m^2n-2mn^2-4m^2+mn+m$ flops, or $\sim 4m^2n-2mn^2$ flops if we keep only the leading terms.
\end{proof}
After computing the full QR decomposition via the Householder algorithm, it is trivial to recover the reduced QR decomposition by just removing the silent columns in $\bQ$ and the silent rows in $\bR$. However, there is no direct way to compute the reduced QR decomposition without first forming the full one.
In \citep{golub2013matrix}, a Householder method for the rank-revealing QR decomposition is discussed; its complexity is $4mnr-2r^2(m+n)+4r^3/3$ flops for a rank-$r$ matrix $\bA$. If $r=n$, this agrees with the result in Theorem~\ref{theorem:qr-full-householder}.
\section{Existence of the QR Decomposition via the Givens Rotation}\label{section:qr-givens}
We have seen that the Givens rotation can be utilized to find the rank-one update/downdate of the Cholesky decomposition in Section~\ref{section:cholesky-rank-one-update} (p.~\pageref{section:cholesky-rank-one-update}). Now let's take a look at what the Givens rotation accomplishes through specific examples. Consider the following $2\times 2$ orthogonal matrices
$$
\bF =
\begin{bmatrix}
-c & s\\
s & c
\end{bmatrix},
\qquad
\bJ=
\begin{bmatrix}
c & -s \\
s & c
\end{bmatrix},
\qquad
\bG=
\begin{bmatrix}
c & s \\
-s & c
\end{bmatrix},
$$
where $s = \sin \theta$ and $c=\cos \theta$ for some $\theta$. The first matrix has $\det(\bF)=-1$ and is a special case of a Householder reflector in dimension 2 such that $\bF=\bI-2\bu\bu^\top$ where $\bu=\begin{bmatrix}
\sqrt{\frac{1+c}{2}}, &\sqrt{\frac{1-c}{2}}
\end{bmatrix}^\top$ or $\bu=\begin{bmatrix}
-\sqrt{\frac{1+c}{2}}, &-\sqrt{\frac{1-c}{2}}
\end{bmatrix}^\top$. The latter two matrices have $\det(\bJ)=\det(\bG)=1$ and effect rotations instead of reflections. Such matrices are called \textbf{\textit{Givens rotations}}.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[$\by = \bJ\bx$, counter-clockwise rotation.]{\label{fig:rotation1}
\includegraphics[width=0.47\linewidth]{imgs/rotation.pdf}}
\quad
\subfigure[$\by = \bG\bx$, clockwise rotation.]{\label{fig:rotation2}
\includegraphics[width=0.47\linewidth]{imgs/rotation2.pdf}}
\caption{Demonstration of two Givens rotations.}
\label{fig:rotation}
\end{figure}
Figure~\ref{fig:rotation} demonstrates the rotation of $\bx$ under $\bJ$, where $\by = \bJ\bx$ such that
$$
\left\{
\begin{aligned}
&y_1 = c\cdot x_1 - s\cdot x_2, \\
&y_2 = s \cdot x_1 + c\cdot x_2.
\end{aligned}
\right.
$$
We want to verify that the angle between $\bx$ and $\by$ is indeed $\theta$ (with a counter-clockwise rotation) after applying the Givens rotation $\bJ$, as shown in Figure~\ref{fig:rotation1}. Firstly, we have
$$
\left\{
\begin{aligned}
&\cos(\alpha) =\frac{x_1}{\sqrt{x_1^2+x_2^2}}, \\
&\sin (\alpha) =\frac{x_2}{\sqrt{x_1^2+x_2^2}}.
\end{aligned}
\right.
\qquad
\text{and }\qquad
\left\{
\begin{aligned}
&\cos(\theta) =c, \\
&\sin (\theta) =s.
\end{aligned}
\right.
$$
By the angle-addition formula, $\cos(\theta+\alpha) = \cos(\theta)\cos(\alpha)-\sin(\theta)\sin(\alpha)$.
If we can show that this quantity equals $\frac{y_1}{\sqrt{y_1^2+y_2^2}}$, then we complete the proof.
For the former, $\cos(\theta+\alpha) = \cos(\theta)\cos(\alpha)-\sin(\theta)\sin(\alpha)=\frac{c\cdot x_1 - s\cdot x_2}{\sqrt{x_1^2+x_2^2}}$. For the latter, it can be verified that $\sqrt{y_1^2+y_2^2}=\sqrt{x_1^2+x_2^2}$, so $\frac{y_1}{\sqrt{y_1^2+y_2^2}} = \frac{c\cdot x_1 - s\cdot x_2}{\sqrt{x_1^2+x_2^2}}$. This completes the proof. Similarly, we can show that the angle between $\by=\bG\bx$ and $\bx$ is also $\theta$ in Figure~\ref{fig:rotation2}, with the rotation being clockwise.
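The argument above can be checked numerically with a quick NumPy sketch (the particular values of $\theta$ and $\bx$ are arbitrary choices of ours): the rotation $\bJ$ preserves the norm and increases the polar angle by exactly $\theta$.

```python
import numpy as np

theta = 0.7                          # arbitrary rotation angle
c, s = np.cos(theta), np.sin(theta)
J = np.array([[c, -s],
              [s,  c]])              # counter-clockwise Givens rotation
x = np.array([3.0, 1.0])             # arbitrary test vector
y = J @ x                            # y1 = c*x1 - s*x2, y2 = s*x1 + c*x2

alpha = np.arctan2(x[1], x[0])       # polar angle of x
beta = np.arctan2(y[1], y[0])        # polar angle of y = alpha + theta
```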
More generally, we define the $n$-th order Givens rotation as follows.
\begin{definition}[$n$-th Order Givens Rotation\index{Givens rotation}]\label{definition:givens-rotation-in-qr}
An $n\times n$ Givens rotation is represented by a matrix of the following form
$$
\begin{blockarray}{cccccccccccc}
\begin{block}{c[cccccccccc]c}
&1 & & & & & & & & & &\\
&& \ddots & & & & && & & &\\
&& & 1 & & & & && & & \\
&& & & c & & & & s & & &k\\
&&& & & 1 & & && & &\\
\bG_{kl}=&&& & & &\ddots & && & &\\
&&& & & & & 1&& & &\\
&&& & -s & & & &c& & & l\\
&&& & & & & & &1 & &\\
&&& & & & & & & &\ddots &\\
\end{block}
&& & & k & & & & l & & &\\
\end{blockarray},
$$
where the $(k,k), (k,l), (l,k), (l,l)$ entries are $c, s, -s, c$ respectively, and $s = \sin \theta$ and $c=\cos \theta$ for some $\theta$.
Let $\bdelta_k \in \real^n$ be the vector whose entries are all zero except the $k$-th entry, which is 1. Then mathematically, the Givens rotation defined above can be denoted by
$$
\bG_{kl}= \bI + (c-1)(\bdelta_k\bdelta_k^\top + \bdelta_l\bdelta_l^\top) + s(\bdelta_k\bdelta_l^\top -\bdelta_l\bdelta_k^\top ),
$$
where the subscripts $k,l$ indicate the \textbf{rotation is in plane $k$ and $l$}.
Specifically, one can also define the $n$-th order Givens rotation where $(k,k),$ $(k,l),$ $(l,k),$ $(l,l)$ entries are $c, \textcolor{blue}{-s, s}, c$ respectively (note the difference in the sign of $s$). The ideas are the same.
\end{definition}
It can be easily verified that the $n$-th order Givens rotation is an orthogonal matrix and its determinant is 1. For any vector $\bx =[x_1, x_2, \ldots, x_n]^\top \in \real^n$, we have $\by = \bG_{kl}\bx$, where
$$
\left\{
\begin{aligned}
&y_k = c \cdot x_k + s\cdot x_l, \\
&y_l = -s\cdot x_k +c\cdot x_l, \\
&y_j = x_j , & (j\neq k,l)
\end{aligned}
\right.
$$
That is, a Givens rotation applied to $\bx$ rotates two components of $\bx$ by some angle $\theta$ and leaves all other components the same.
When $\sqrt{x_k^2 + x_l^2} \neq 0$,
let $c = \frac{x_k}{\sqrt{x_k^2 + x_l^2}}$, $s=\frac{x_l}{\sqrt{x_k^2 + x_l^2}}$. Then,
$$
\left\{
\begin{aligned}
&y_k = \sqrt{x_k^2 + x_l^2}, \\
&y_l = 0, \\
&y_j = x_j . & (j\neq k,l)
\end{aligned}
\right.
$$
This finding above is essential for the QR decomposition via the Givens rotation.
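This zeroing step can be sketched in NumPy as follows (the helper name \texttt{givens\_pair} is our own):

```python
import numpy as np

def givens_pair(xk, xl):
    """Return (c, s) with c = xk/r, s = xl/r, where r = sqrt(xk^2 + xl^2)."""
    r = np.hypot(xk, xl)
    if r == 0.0:                     # nothing to rotate
        return 1.0, 0.0
    return xk / r, xl / r

# Rotate in the (1, 3) plane to zero out the 3rd entry of x.
x = np.array([3.0, 4.0, 12.0])
c, s = givens_pair(x[0], x[2])
G13 = np.eye(3)
G13[0, 0], G13[0, 2] = c, s          # (k,k), (k,l) entries
G13[2, 0], G13[2, 2] = -s, c         # (l,k), (l,l) entries
y = G13 @ x                          # [sqrt(3^2 + 12^2), 4, 0]
```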
\begin{corollary}[Basis From Givens Rotations Forwards]\label{corollary:basis-from-givens}
For any vector $\bx \in \real^n$, there exists a set of Givens rotations $\{\bG_{12}, \bG_{13}, \ldots, \bG_{1n}\}$ such that $\bG_{1n}\ldots \bG_{13}\bG_{12}\bx = ||\bx||\be_1$ where $\be_1\in \real^n$ is the first unit basis in $\real^n$.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:basis-from-givens}]
From the finding above, we can find $\bG_{12}, \bG_{13}, \bG_{14}$ such that
$$
\bG_{12}\bx = \left[\sqrt{x_1^2 + x_2^2}, 0, x_3, \ldots, x_n \right]^\top,
$$
$$
\bG_{13}\bG_{12}\bx = \left[\sqrt{x_1^2 + x_2^2+x_3^2}, 0, 0, x_4, \ldots, x_n \right]^\top,
$$
and
$$
\bG_{14}\bG_{13}\bG_{12}\bx = \left[\sqrt{x_1^2 + x_2^2+x_3^2+x_4^2},0, 0, 0, x_5, \ldots, x_n \right]^\top.
$$
Continuing this process, we will obtain $\bG_{1n}\ldots \bG_{13}\bG_{12}\bx = ||\bx||\be_1$.
\end{proof}
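The forward construction in this proof can be illustrated with a short NumPy sketch (the function name \texttt{rotate\_to\_e1} is our own; we exploit that with $c = y_1/r$ and $s = y_l/r$ the rotated pair becomes exactly $(r, 0)$):

```python
import numpy as np

def rotate_to_e1(x):
    """Apply G_{12}, G_{13}, ..., G_{1n} in turn so that x becomes ||x|| * e1."""
    y = np.asarray(x, dtype=float).copy()
    for l in range(1, len(y)):       # zero out entries 2, ..., n in forward order
        r = np.hypot(y[0], y[l])
        if r == 0.0:
            continue
        # With c = y[0]/r and s = y[l]/r, the rotation maps (y[0], y[l]) to (r, 0).
        y[0], y[l] = r, 0.0
    return y
```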
\begin{remark}[Basis From Givens Rotations Backwards]\label{remark:basis-from-givens2}
In Corollary~\ref{corollary:basis-from-givens}, we find the Givens rotations that introduce zeros from the $2$-nd entry to the $n$-th entry (i.e., forwards). Sometimes we want the reverse order, i.e., to introduce zeros from the $n$-th entry to the $2$-nd entry such that $\bG_{12}\bG_{13}\ldots \bG_{1n}\bx = ||\bx||\be_1$ where $\be_1\in \real^n$ is the first unit basis in $\real^n$.
The procedure is similar: we can find $\bG_{1n},\bG_{1,(n-1)}, \bG_{1,(n-2)}$ such that
$$
\bG_{1n}\bx = \left[\sqrt{x_1^2 + x_n^2}, x_2, x_3, \ldots, x_{n-1}, 0 \right]^\top,
$$
$$
\bG_{1,(n-1)}\bG_{1n}\bx = \left[\sqrt{x_1^2 + x_{n-1}^2+x_n^2}, x_2, x_3, \ldots, x_{n-2}, 0, 0 \right]^\top,
$$
and
$$
\bG_{1,(n-2)}\bG_{1,(n-1)}\bG_{1n}\bx = \left[\sqrt{x_1^2 + x_{n-2}^2+x_{n-1}^2+x_n^2}, x_2, x_3, \ldots,x_{n-3},0, 0, 0 \right]^\top.
$$
Continuing this process, we will obtain $\bG_{12}\bG_{13}\ldots \bG_{1n}\bx = ||\bx||\be_1$.
\paragraph{An alternative form} Alternatively, there are rotations $\{\bG_{12}, \bG_{23}, \ldots,\bG_{(n-1),n}\}$ such that $\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}\bx = ||\bx||\be_1$ where $\be_1\in \real^n$ is the first unit basis in $\real^n$ and where
$$
\bG_{(n-1),n}\bx = \left[x_1, x_2, \ldots, x_{n-2},\sqrt{x_{n-1}^2 + x_n^2}, 0 \right]^\top,
$$
$$
\bG_{(n-2),(n-1)}\bG_{(n-1),n}\bx = \left[x_1, x_2, \ldots,x_{n-3}, \sqrt{x_{n-2}^2+x_{n-1}^2 + x_n^2}, 0, 0 \right]^\top,
$$
and
$$
\begin{aligned}
\bG_{(n-3),(n-2)}\bG_{(n-2),(n-1)}\bG_{(n-1),n}\bx &= \\
&\,\left[x_1, x_2, \ldots ,x_{n-4}, \sqrt{x_{n-3}^2+x_{n-2}^2+x_{n-1}^2 + x_n^2},0, 0, 0 \right]^\top.
\end{aligned}
$$
Continuing this process, we will obtain $\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}\bx = ||\bx||\be_1$.
The above backward Givens rotation basis update will be proved useful in the rank-one changes of the QR decomposition (Section~\ref{section:qr-rank-one-changes}, p.~\pageref{section:qr-rank-one-changes}).
\end{remark}
Following Corollary~\ref{corollary:basis-from-givens}, to introduce zeros we can \textbf{rotate} each column of the matrix towards a basis vector $\be_1$, whose entries are all zero except the first entry.
Let $\bA=[\ba_1, \ba_2, \ldots, \ba_n] \in \real^{m\times n}$ be the column partition of $\bA$, and let
\begin{equation}\label{equation:qr-rotate-to-chooose-r-numeraically}
\bG_1 = \bG_{1m}\ldots \bG_{13}\bG_{12},
\end{equation}
such that $\bG_1\ba_1 = ||\ba_1||\be_1$, where $\be_1$ here is the first basis vector of $\real^m$, i.e., $\be_1 = [1;0;0;\ldots;0]\in \real^m$.
Then
\begin{equation}\label{equation:rotate-qr-projection-step1}
\begin{aligned}
\bG_1\bA &= [\bG_1\ba_1, \bG_1\ba_2, \ldots, \bG_1\ba_n] \\
&=
\begin{bmatrix}
||\ba_1|| & \bR_{1,2:n} \\
\bzero& \bB_2
\end{bmatrix},
\end{aligned}
\end{equation}
which rotates $\ba_1$ to $||\ba_1||\be_1$ and introduces zeros below the diagonal in the $1$-st column of $\bA$. Bear in mind that the $\bG_1$ above will sequentially change the ($1$-st, $2$-nd), ($1$-st, $3$-rd), ($1$-st, $4$-th), \ldots, ($1$-st, $m$-th) elements in pairs for any vector $\bv\in \real^m$.
We can then apply this process to $\bB_2$ in Equation~\eqref{equation:rotate-qr-projection-step1} to make the entries below the $(2,2)$-th entry all zero.
Suppose $\bB_2 = [\bb_2, \bb_3, \ldots, \bb_n]$ is the column partition of $\bB_2$, and let
$$
\bG_2 = \bG_{2m}\ldots\bG_{24}\bG_{23},
$$
where $\bG_{2m}, \ldots, \bG_{24}, \bG_{23}$ can be inferred from context.
Then
$$
\begin{aligned}
\bG_2
\begin{bmatrix}
\bR_{1,2:n} \\
\bB_2
\end{bmatrix}
&=
\begin{bmatrix}
\bR_{12} & \bR_{1,3:n} \\
||\bb_2|| & \bR_{2,3:n} \\
\bzero &\bC_3
\end{bmatrix},
\end{aligned}
$$
and
$$
\begin{aligned}
\bG_2\bG_1\bA &= [\bG_2\bG_1\ba_1, \bG_2\bG_1\ba_2, \ldots, \bG_2\bG_1\ba_n]=
\begin{bmatrix}
||\ba_1|| & \bR_{12} & \bR_{1,3:n} \\
0 & ||\bb_2|| & \bR_{2,3:n} \\
\bzero & \bzero &\bC_3
\end{bmatrix}.
\end{aligned}
$$
The same process can go on, and we will finally triangularize $\bA$: $\bA = (\bG_n \bG_{n-1}\ldots\bG_1)^{-1} \bR = \bQ\bR$. Since the $\bG_i$'s are orthogonal, the orthogonal matrix $\bQ$ can be obtained by $\bQ=(\bG_n \bG_{n-1}\ldots\bG_1)^{-1} = \bG_1^\top \bG_2^\top\ldots\bG_n^\top $, and
\begin{equation}\label{equation:givens-q}
\begin{aligned}
\bG_1^\top \bG_2^\top\ldots\bG_n^\top &=(\bG_n \ldots \bG_2 \bG_1)^\top \\
&=\left\{(\bG_{nm} \ldots \bG_{n,(n+1)}) \ldots (\bG_{2m}\ldots \bG_{23}) ( \bG_{1m} \ldots \bG_{12} )\right\}^\top .
\end{aligned}
\end{equation}
\paragraph{When do Givens rotations work better?} In practice, compared to the Householder algorithm, the Givens rotation algorithm works better when $\bA$ already has many zeros below the main diagonal. Therefore, Givens rotations are well suited to the rank-one changes of the QR decomposition, since a rank-one change introduces only a small number of nonzero values (Section~\ref{section:qr-rank-one-changes}, p.~\pageref{section:qr-rank-one-changes}).
An example of a $5\times 4$ matrix is shown as follows, where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates a value that has just been changed.
\paragraph{Givens rotations in $\bG_1$} For the $5\times 4$ example, we have $\bG_1 = \bG_{15}\bG_{14}\bG_{13}\bG_{12}$. The process is shown as follows:
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12}}{\rightarrow}
\begin{sbmatrix}{\bG_{12}\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bG_{13}}{\rightarrow}
\begin{sbmatrix}{\bG_{13}\bG_{12}\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes &\boxtimes \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}\\
&\stackrel{\bG_{14}}{\rightarrow}
\begin{sbmatrix}{\bG_{14}\bG_{13}\bG_{12}\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes &\boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bG_{15}}{\rightarrow}
\begin{sbmatrix}{\bG_{15}\bG_{14}\bG_{13}\bG_{12}\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes &\boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\end{sbmatrix}
\end{aligned}
$$
\paragraph{Givens rotations as a big picture} Treating each of $\bG_1, \bG_2, \bG_3, \bG_4$ as a single matrix, we have
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\bG_1}{\rightarrow}
\begin{sbmatrix}{\bG_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bG_2}{\rightarrow}
\begin{sbmatrix}{\bG_2\bG_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
&\stackrel{\bG_3}{\rightarrow}
\begin{sbmatrix}{\bG_3\bG_2\bG_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bG_4}{\rightarrow}
\begin{sbmatrix}{\bG_4\bG_3\bG_2\bG_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes \\
0 & 0 & 0 & \bm{\boxtimes} \\
0 & 0 & 0 & \bm{0}
\end{sbmatrix}
\end{aligned}
$$
\paragraph{Orders to introduce the zeros} With Givens rotations for the QR decomposition, it is flexible to choose different orders in which to introduce the zeros of $\bR$. In our case, we introduce zeros column by column; it is also possible to introduce them row by row.
\section{Computing the Full QR Decomposition via the Givens Rotation}
The algorithm to compute the full QR decomposition via the Givens rotation is straightforward from the example shown above and is illustrated in Algorithm~\ref{alg:qr-decomposition-givens}.
\begin{algorithm}[H]
\caption{Full QR Decomposition via the Givens Rotation}
\label{alg:qr-decomposition-givens}
\begin{algorithmic}[1]
\Require matrix $\bA$ with size $m\times n $ and $m\geq n$;
\State Initially set $\bR = \bA$, $\bQ=\bI$;
\For{$i=1$ to $n$}
\For{$j=i+1$ to $m$}
\State Get Givens rotation $\bG_{i,j}$ with the following parameters $c, s$:
\State $c = \frac{x_k}{\sqrt{x_k^2 + x_l^2}}$, $s=\frac{x_l}{\sqrt{x_k^2 + x_l^2}}$ where $x_k = \bR_{i,i}$, $x_l = \bR_{j,i}$;
\State Calculate $\bR = \bG_{i,j}\bR $ in the following two steps:
\State $i$-th row: $\bR_{i,:} = c\cdot \bR_{i,:} + s \cdot\bR_{j,:} $; \Comment{save the old $\bR_{i,:}$ before overwriting;}
\State $j$-th row: $\bR_{j,:} = -s\cdot \bR_{i,:} + c \cdot\bR_{j,:} $; \Comment{here $\bR_{i,:}$ refers to the old $i$-th row;}
\EndFor
\EndFor
\State Output $\bR$ as the triangular matrix;
\For{$i=1$ to $n$}
\For{$j=i+1$ to $m$}
\State Calculate $\bQ = \bG_{i,j}\bQ $ (with the $c, s$ stored for this $(i,j)$ pair in the first loop) in the following two steps:
\State $i$-th row: $\bQ_{i,:} = c\cdot \bQ_{i,:} + s \cdot\bQ_{j,:} $; \Comment{save the old $\bQ_{i,:}$ before overwriting;}
\State $j$-th row: $\bQ_{j,:} = -s\cdot \bQ_{i,:} + c \cdot\bQ_{j,:} $; \Comment{here $\bQ_{i,:}$ refers to the old $i$-th row;}
\EndFor
\EndFor
\State Output $\bQ=\bQ^\top$ from Equation~\eqref{equation:givens-q};
\end{algorithmic}
\end{algorithm}
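A compact NumPy sketch of the algorithm above (the function name \texttt{givens\_qr} is our own; note that the old $i$-th row is copied before it is overwritten, since the $j$-th row update needs the original values):

```python
import numpy as np

def givens_qr(A):
    """Full QR decomposition via Givens rotations: returns Q (m x m), R (m x n)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    R = A.copy()
    QT = np.eye(m)                   # accumulates G_n ... G_2 G_1
    for i in range(n):
        for j in range(i + 1, m):    # zero out R[j, i]
            r = np.hypot(R[i, i], R[j, i])
            if r == 0.0:
                continue
            c, s = R[i, i] / r, R[j, i] / r
            for M in (R, QT):        # rotate rows i and j of R and of Q^T
                ti = M[i, :].copy()  # save the old i-th row
                M[i, :] = c * ti + s * M[j, :]
                M[j, :] = -s * ti + c * M[j, :]
    return QT.T, R                   # Q = (G_n ... G_1)^T
```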
\begin{theorem}[Algorithm Complexity: QR via Givens]\label{theorem:qr-full-givens}
Algorithm~\ref{alg:qr-decomposition-givens} requires $\sim 3mn^2-n^3$ flops to compute a full QR decomposition of an $m\times n$ matrix with linearly independent columns and $m\geq n$. Further, if $\bQ$ is needed explicitly, additional $\sim 3mn^2-n^3$ flops are required.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-full-givens}]
For step 5, each iteration $(i,j)$ requires $6$ flops (2 squares, 1 addition, 1 square root, and 2 divisions). There are $m-i$ iterations for each $i$, i.e., $(m-1)+(m-2)+\ldots+(m-n)=mn-\frac{n^2+n}{2}$ iterations in total. Therefore, the complexity of all the step 5's is \fbox{$6(mn-\frac{n^2+n}{2})$} flops.
For each iteration $i$, steps 7 and 8 operate on two vectors of length $n-i+1$. The two steps take $6(n-i+1)$ flops per iteration ($4(n-i+1)$ multiplications and $2(n-i+1)$ additions). For each $i$, there are $m-i$ such iterations, which take $(m-i)\times 6(n-i+1)$ flops. Let $f(i) = 6(m-i)(n-i+1) = 6\left[m(n+1)-(m+n+1)i +i^2\right]$. The total complexity of the two steps is equal to
$$
\mathrm{cost} =f(1)+ f(2) +\ldots +f(n),
$$
or \fbox{$3mn^2-n^3$} flops if we keep only the leading terms.
Similarly, we can obtain the complexity of steps 15 and 16: \fbox{$3mn^2-n^3$} flops if we keep only the leading terms.
\end{proof}
As with the Householder algorithm, after computing the full QR decomposition via the Givens algorithm, it is trivial to recover the reduced QR decomposition by just removing the silent columns in $\bQ$ and the silent rows in $\bR$.
\section{Uniqueness of the QR Decomposition}\label{section:nonunique-qr}
The results of the QR decomposition from the Gram-Schmidt process, the Householder algorithm, and the Givens algorithm differ. Even within the Householder algorithm, we have different ways to choose the sign of $r_1$ in Equation~\eqref{equation:qr-householder-to-chooose-r-numeraically}. Thus, in this sense, the QR decomposition is not unique.
\begin{example}[Non-Uniqueness of the QR Decomposition\index{Uniqueness}]
Suppose the matrix $\bA$ is given by
$$
\bA =
\begin{bmatrix}
4 & 1 \\
3 & 2
\end{bmatrix}.
$$
The QR decomposition of $\bA$ can be obtained by
$$
\begin{aligned}
\bA &= \bQ_1\bR_1=
\begin{bmatrix}
0.8 & 0.6 \\
0.6 & -0.8
\end{bmatrix}
\begin{bmatrix}
5 & 2 \\
0 & -1
\end{bmatrix}\\
&= \bQ_2\bR_2=
\begin{bmatrix}
0.8 & -0.6 \\
0.6 & 0.8
\end{bmatrix}
\begin{bmatrix}
5 & 2 \\
0 & 1
\end{bmatrix}\\
&=\bQ_3\bR_3=
\begin{bmatrix}
-0.8 & -0.6 \\
-0.6 & 0.8
\end{bmatrix}
\begin{bmatrix}
-5 & -2 \\
0 & 1
\end{bmatrix}\\
&= \bQ_4\bR_4=
\begin{bmatrix}
-0.8 & 0.6 \\
-0.6 & -0.8
\end{bmatrix}
\begin{bmatrix}
-5 & -2 \\
0 & -1
\end{bmatrix}.
\end{aligned}
$$
Thus the QR decomposition of $\bA$ is not unique.
\exampbar
\end{example}
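The four factorizations in this example can be checked numerically (all values are taken directly from the example above):

```python
import numpy as np

A = np.array([[4.0, 1.0], [3.0, 2.0]])
factorizations = [
    (np.array([[0.8, 0.6], [0.6, -0.8]]),  np.array([[5.0, 2.0], [0.0, -1.0]])),
    (np.array([[0.8, -0.6], [0.6, 0.8]]),  np.array([[5.0, 2.0], [0.0, 1.0]])),
    (np.array([[-0.8, -0.6], [-0.6, 0.8]]), np.array([[-5.0, -2.0], [0.0, 1.0]])),
    (np.array([[-0.8, 0.6], [-0.6, -0.8]]), np.array([[-5.0, -2.0], [0.0, -1.0]])),
]
# Each Q is orthogonal, each R is upper triangular, yet all products recover A.
ok = all(np.allclose(Q @ R, A) and np.allclose(Q.T @ Q, np.eye(2))
         for Q, R in factorizations)
```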
However, if we use just the procedure described in the Gram-Schmidt process, or systematically choose the sign in the Householder algorithm, then the decomposition is unique.
The uniqueness of the \textit{reduced} QR decomposition for a full column rank matrix $\bA$ is assured when $\bR$ has positive diagonals, as shown in the ``main proof" of Section~\ref{section:gram-schmidt-process} by inductive analysis.
We here provide another proof of the uniqueness of the \textit{reduced} QR decomposition for matrices whose $\bR$ has positive diagonal values, which will shed light on the implicit Q theorem in the Hessenberg decomposition (Section~\ref{section:hessenberg-decomposition}, p.~\pageref{section:hessenberg-decomposition}) and the tridiagonal decomposition (Section~\ref{section:tridiagonal-decomposition}, p.~\pageref{section:tridiagonal-decomposition}).
\begin{corollary}[Uniqueness of the reduced QR Decomposition]\label{corollary:unique-qr}
Suppose matrix $\bA$ is an $m\times n$ matrix with full column rank $n$ and $m\geq n$. Then, the \textit{reduced} QR decomposition is unique if the main diagonal values of $\bR$ are positive.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:unique-qr}]
Suppose the \textit{reduced} QR decomposition is not unique. Completing it into a \textit{full} QR decomposition, we can find two such full decompositions $\bA=\bQ_1\bR_1 = \bQ_2\bR_2$, which implies $\bR_1 = \bQ_1^{-1}\bQ_2\bR_2 = \bV \bR_2$, where $\bV= \bQ_1^{-1}\bQ_2$ is an orthogonal matrix. Writing out the equation, we have
$$
\begin{aligned}
\bR_1 &=
\begin{bmatrix}
r_{11} & r_{12}& \dots & r_{1n}\\
& r_{22}& \dots & r_{2n}\\
& & \ddots & \vdots \\
\multicolumn{2}{c}{\raisebox{1.3ex}[0pt]{\Huge0}} & & r_{nn} \\
\bzero & \bzero &\ldots & \bzero
\end{bmatrix}=
\begin{bmatrix}
v_{11}& v_{12} & \ldots & v_{1m}\\
v_{21} & v_{22} & \ldots & v_{2m}\\
\vdots & \vdots & \ddots & \vdots\\
v_{m1} & v_{m2} & \ldots & v_{\textcolor{black}{mm}}
\end{bmatrix}
\begin{bmatrix}
s_{11} & s_{12}& \dots & s_{1n}\\
& s_{22}& \dots & s_{2n}\\
& & \ddots & \vdots \\
\multicolumn{2}{c}{\raisebox{1.3ex}[0pt]{\Huge0}} & & s_{nn} \\
\bzero & \bzero &\ldots & \bzero
\end{bmatrix}= \bV\bR_2.
\end{aligned}
$$
This implies
$$
r_{11} = v_{11} s_{11}, \qquad v_{21}=v_{31}=v_{41}=\ldots=v_{m1}=0.
$$
Since $\bV$ has mutually orthonormal columns, the first column of $\bV$ has norm 1; combined with $v_{21}=v_{31}=\ldots=v_{m1}=0$, this gives $v_{11} = \pm 1$.
By assumption, $r_{ii}> 0$ and $s_{ii}> 0$ for $i\in \{1,2,\ldots,n\}$; in particular $r_{11}> 0$ and $s_{11}> 0$, so $v_{11}$ can only be $1$. Since $\bV$ is an orthogonal matrix, we also have
$$
v_{12}=v_{13}=v_{14}=\ldots=v_{1m}=0.
$$
Applying this process to the submatrices of $\bR_1, \bV, \bR_2$, we find that the upper-left submatrix of $\bV$ is an identity: $\bV[1:n,1:n]=\bI_n$, so that $\bR_1=\bR_2$. This implies $\bQ_1[:,1:n]=\bQ_2[:,1:n]$, which leads to a contradiction; hence the reduced QR decomposition is unique.
\end{proof}
We notice that the uniqueness of the reduced QR decomposition shown above relies on the diagonal entries of $\bR$ being positive. Therefore, if we restrict the QR decomposition so that the diagonal values of $\bR$ are positive in the Householder or Givens algorithms, then the decomposition will be unique. In fact, the Gram-Schmidt process in Algorithm~\ref{alg:reduced-qr} is such a decomposition: the diagonal values of $\bR$ will be positive if $\bA$ has full column rank. If $\bA$ has dependent columns, the diagonal entries of $\bR$ can only be nonnegative, and the factorization may not be unique.
\section{LQ Decomposition}\label{section:lq-decomp}
We previously proved the existence of the QR decomposition via the Gram-Schmidt process, in which case we are interested in the column space of a matrix $\bA=[\ba_1, \ba_2, ..., \ba_n] \in \real^{m\times n}$. The successive spaces spanned by the columns $\ba_1, \ba_2, \ldots$ of $\bA$ are
$$
\cspace([\ba_1])\,\,\,\, \subseteq\,\,\,\, \cspace([\ba_1, \ba_2]) \,\,\,\,\subseteq\,\,\,\, \cspace([\ba_1, \ba_2, \ba_3])\,\,\,\, \subseteq\,\,\,\, \ldots,
$$
The idea of QR decomposition is the construction of a sequence of orthonormal vectors $\bq_1, \bq_2, \ldots$ that span the same successive subspaces:
$$
\left\{\cspace([\bq_1])=\cspace([\ba_1]) \right\}\,\,\,\, \subseteq\,\,\,\, \{\cspace([\bq_1, \bq_2])=\cspace([\ba_1, \ba_2])\} \,\,\,\, \subseteq\,\,\,\, \ldots,
$$
However, in many applications (see \citep{schilders2009solution}), we are also interested in the row space of a matrix $\bB=[\bb_1^\top; \bb_2^\top; ...;\bb_m^\top] \in \real^{m\times n}$, where $\bb_i$ is the $i$-th row of $\bB$. The successive spaces spanned by the rows $\bb_1, \bb_2, \ldots$ of $\bB$ are
$$
\cspace([\bb_1])\,\,\,\, \subseteq\,\,\,\, \cspace([\bb_1, \bb_2]) \,\,\,\,\subseteq\,\,\,\, \cspace([\bb_1, \bb_2, \bb_3])\,\,\,\, \subseteq\,\,\,\, \ldots.
$$
The QR decomposition thus has a sibling that finds an orthogonal basis of the row space.
By applying the QR decomposition to $\bB^\top = \bQ_0\bR$, we recover the LQ decomposition of the matrix: $\bB = \bL \bQ$, where $\bQ = \bQ_0^\top$ and $\bL = \bR^\top$.
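This reduction to QR is one line in practice; a hedged NumPy sketch (using \texttt{np.linalg.qr}, whose default reduced mode matches the reduced QR here, and a hypothetical $2\times 3$ matrix):

```python
import numpy as np

def lq(B):
    """Reduced LQ decomposition of B (m x n, m <= n) via QR of B^T."""
    Q0, R = np.linalg.qr(B.T)        # reduced QR: B^T = Q0 R
    return R.T, Q0.T                 # B = L Q with L = R^T, Q = Q0^T

B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
L, Q = lq(B)                         # L is 2x2 lower triangular, Q is 2x3
```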
\begin{theoremHigh}[LQ Decomposition]\label{theorem:lq-decomposition}
Every $m\times n$ matrix $\bB$ (with linearly independent or dependent rows) with $n\geq m$ can be factored as
$$
\bB = \bL\bQ,
$$
where
1. \textbf{Reduced}: $\bL$ is an $m\times m$ lower triangular matrix and $\bQ$ is $m\times n$ with orthonormal rows which is known as the \textbf{reduced LQ decomposition};
2. \textbf{Full}: $\bL$ is an $m\times n$ lower triangular matrix and $\bQ$ is $n\times n$ with orthonormal rows, which is known as the \textbf{full LQ decomposition}. If we further restrict the lower triangular matrix to be square, the full LQ decomposition can be written as
$$
\bB = \begin{bmatrix}
\bL_0 & \bzero
\end{bmatrix}\bQ,
$$
where $\bL_0$ is an $m\times m$ square lower triangular matrix.
\end{theoremHigh}
Similarly, a comparison between the reduced and full LQ decomposition is shown in Figure~\ref{fig:lq-comparison}.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Reduced LQ decomposition]{\label{fig:lqhalf}
\includegraphics[width=0.47\linewidth]{./imgs/qrreduced-LR.pdf}}
\quad
\subfigure[Full LQ decomposition]{\label{fig:lqall}
\includegraphics[width=0.47\linewidth]{./imgs/qrfull-LR.pdf}}
\caption{Comparison between the reduced and full LQ decomposition.}
\label{fig:lq-comparison}
\end{figure}
\paragraph{Row-pivoted LQ (RPLQ)\index{Row-pivoted}\index{RPLQ}} Similar to the column-pivoted QR in Section~\ref{section:cpqr}, there exists a row-pivoted LQ decomposition:
$$
\left\{
\begin{aligned}
\text{Reduced RPLQ: }&\qquad
\bP\bB &=&
\underbrace{\begin{bmatrix}
\bL_{11} \\
\bL_{21}
\end{bmatrix}}_{m\times r}
\underbrace{\bQ_r }_{r\times n};\\
\text{Full RPLQ: }&\qquad
\bP\bB &=&
\underbrace{\begin{bmatrix}
\bL_{11} & \bzero \\
\bL_{21} & \bzero
\end{bmatrix}}_{m\times m}
\underbrace{\bQ }_{m\times n},\\
\end{aligned}
\right.
$$
where $\bL_{11}\in \real^{r\times r}$ is lower triangular, $\bQ_r$ or $\bQ_{1:r,:}$ spans the same row space as $\bB$, and $\bP$ is a permutation matrix that interchanges the rows so that the independent rows appear first.
\section{Two-Sided Orthogonal Decomposition}
\begin{theoremHigh}[Two-Sided Orthogonal Decomposition]\label{theorem:two-sided-orthogonal}
Let $\bA\in \real^{n\times n}$ be a square matrix with rank $r$, and let the full CPQR and RPLQ of $\bA$ be given by
$$\bA\bP_1=\bQ_1
\begin{bmatrix}
\bR_{11} & \bR_{12}\\
\bzero & \bzero
\end{bmatrix},\qquad
\bP_2\bA=
\begin{bmatrix}
\bL_{11} & \bzero \\
\bL_{21} & \bzero
\end{bmatrix}
\bQ_2
$$
respectively. Then it follows that
$$
\bA\bP\bA = \bQ_1
\underbrace{\begin{bmatrix}
\bR_{11}\bL_{11}+\bR_{12}\bL_{21} & \bzero \\
\bzero & \bzero
\end{bmatrix}}_{\text{rank $r$}}
\bQ_2,
$$
where the first $r$ columns of $\bQ_1$ span the column space of $\bA$, the first $r$ rows of $\bQ_2$ span the row space of $\bA$, and $\bP = \bP_1\bP_2$ is a permutation matrix. We call this decomposition the \textbf{two-sided orthogonal decomposition}.
\end{theoremHigh}
This decomposition is very similar to the corresponding property of the SVD $\bA=\bU\bSigma\bV^\top$: the first $r$ columns of $\bU$ span the column space of $\bA$, and the first $r$ columns of $\bV$ span the row space of $\bA$ (as we shall see in Lemma~\ref{lemma:svd-four-orthonormal-Basis}, p.~\pageref{lemma:svd-four-orthonormal-Basis}). Therefore, the two-sided orthogonal decomposition can be regarded as an inexpensive alternative to the SVD in this sense.
\begin{lemma}[Four Orthonormal Bases]
Given the two-sided orthogonal decomposition of a matrix $\bA\in \real^{n\times n}$ with rank $r$, $\bA\bP\bA = \bU \bF\bV^\top$, let $\bU=[\bu_1, \bu_2, \ldots,\bu_n]$ and $\bV=[\bv_1, \bv_2, \ldots, \bv_n]$ be the column partitions of $\bU$ and $\bV$. Then, we have the following properties:
$\bullet$ $\{\bv_1, \bv_2, \ldots, \bv_r\} $ is an orthonormal basis of $\cspace(\bA^\top)$;
$\bullet$ $\{\bv_{r+1},\bv_{r+2}, \ldots, \bv_n\}$ is an orthonormal basis of $\nspace(\bA)$;
$\bullet$ $\{\bu_1,\bu_2, \ldots,\bu_r\}$ is an orthonormal basis of $\cspace(\bA)$;
$\bullet$ $\{\bu_{r+1}, \bu_{r+2},\ldots,\bu_n\}$ is an orthonormal basis of $\nspace(\bA^\top)$.
\end{lemma}
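To make the construction concrete, the following NumPy/SciPy sketch builds the two-sided orthogonal decomposition of a rank-deficient matrix; here \verb|scipy.linalg.qr| with \verb|pivoting=True| supplies the CPQR, and the RPLQ is obtained from the CPQR of $\bA^\top$ (the data is purely illustrative):

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
n, r = 5, 2
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2, 5x5

# Full CPQR: A P1 = Q1 R, with piv1 encoding the column permutation.
Q1, R, piv1 = qr(A, pivoting=True)
P1 = np.eye(n)[:, piv1]            # A @ P1 == A[:, piv1]

# RPLQ via the CPQR of A^T: P2 A = L Q2.
Q2t, Lt, piv2 = qr(A.T, pivoting=True)
P2 = np.eye(n)[piv2, :]            # P2 @ A == A[piv2, :]
L, Q2 = Lt.T, Q2t.T

# Two-sided orthogonal decomposition: A (P1 P2) A = Q1 (R L) Q2.
F = R @ L                          # middle factor, rank r
```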
\section{Applications}
\subsection{Application: Least Squares via the Full QR Decomposition}\label{section:application-ls-qr}
Let's consider the overdetermined system $\bA\bx = \bb$, where $\bA\in \real^{m\times n}$ with $m>n$ is the data matrix, and $\bb\in \real^m$ is the observation vector. In practice, $\bA$ usually has full column rank, since the columns of real-world data are rarely linearly dependent. The least squares (LS) solution minimizing $||\bA\bx-\bb||^2$ is given by $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$, where $\bA^\top\bA$ is invertible since $\bA$ has full column rank and $\mathrm{rank}(\bA^\top\bA)=\mathrm{rank}(\bA)$.\index{Least squares}
However, the inverse of a matrix is expensive to compute; we can instead use the QR decomposition to find the least squares solution, as illustrated in the following theorem.
\begin{theorem}[LS via QR for Full Column Rank Matrix]\label{theorem:qr-for-ls}
Let $\bA\in \real^{m\times n}$ have full column rank with $m\geq n$, and let $\bA=\bQ\bR$ be its full QR decomposition, where $\bQ\in\real^{m\times m}$ is an orthogonal matrix and $\bR\in \real^{m\times n}$ is an upper triangular matrix appended with $m-n$ additional zero rows. Suppose $\bR = \begin{bmatrix}
\bR_1 \\
\bzero
\end{bmatrix}$, where $\bR_1 \in \real^{n\times n}$ is the square upper triangular submatrix of $\bR$, and let $\bb\in \real^m$. Then the LS solution to $\bA\bx=\bb$ is given by
$$
\bx_{LS} = \bR_1^{-1}\bc,
$$
where $\bc$ is the first $n$ components of $\bQ^\top\bb$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-for-ls}]
Since $\bA=\bQ\bR$ is the full QR decomposition of $\bA$ and $m\geq n$, the last $m-n$ rows of $\bR$ are zero, as shown in Figure~\ref{fig:qr-comparison}. Then $\bR_1 \in \real^{n\times n}$ is the square upper triangular submatrix of $\bR$, and
$
\bQ^\top \bA = \bR = \begin{bmatrix}
\bR_1 \\
\bzero
\end{bmatrix}.
$
Thus,
$$
\begin{aligned}
||\bA\bx-\bb||^2 &= (\bA\bx-\bb)^\top(\bA\bx-\bb)\\
&=(\bA\bx-\bb)^\top\bQ\bQ^\top (\bA\bx-\bb) \qquad &(\text{Since $\bQ$ is an orthogonal matrix})\\
&=||\bQ^\top \bA \bx-\bQ^\top\bb||^2 \qquad &(\text{Invariance under orthogonal transformations})\\
&=\left\Vert\begin{bmatrix}
\bR_1 \\
\bzero
\end{bmatrix} \bx-\bQ^\top\bb\right\Vert^2\\
&=||\bR_1\bx - \bc||^2+||\bd||^2,
\end{aligned}
$$
where $\bc$ is the first $n$ components of $\bQ^\top\bb$ and $\bd$ is the last $m-n$ components of $\bQ^\top\bb$. The term $||\bd||^2$ does not depend on $\bx$, so the LS solution can be calculated by back substitution of the upper triangular system $\bR_1\bx = \bc$, i.e., $\bx_{LS} = \bR_1^{-1}\bc$.
\end{proof}
To verify Theorem~\ref{theorem:qr-for-ls}, consider the full QR decomposition $\bA = \bQ\bR$, where $\bQ\in \real^{m\times m}$ and $\bR\in \real^{m\times n}$. Combining it with the LS solution $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$, we obtain
\begin{equation}\label{equation:qr-for-ls-1}
\begin{aligned}
\bx_{LS} &= (\bA^\top\bA)^{-1}\bA^\top\bb \\
&= (\bR^\top\bQ^\top\bQ\bR)^{-1} \bR^\top\bQ^\top \bb\\
&= (\bR^\top\bR)^{-1} \bR^\top\bQ^\top \bb \\
&= (\bR_1^\top\bR_1)^{-1} \bR^\top\bQ^\top \bb \\
&=\bR_1^{-1} \bR_1^{-\top} \bR^\top\bQ^\top \bb\\
&= \bR_1^{-1} \bR_1^{-\top} \bR_1^\top\bQ_1^\top \bb \\
&=\bR_1^{-1} \bQ_1^\top \bb,
\end{aligned}
\end{equation}
where $\bR = \begin{bmatrix}
\bR_1 \\
\bzero
\end{bmatrix}$ and $\bR_1\in \real^{n\times n}$ is an upper triangular matrix, and $\bQ_1 =\bQ_{1:m,1:n}\in \real^{m\times n}$ is the first $n$ columns of $\bQ$ (i.e., $\bQ_1\bR_1$ is the reduced QR decomposition of $\bA$). Then the result of Equation~\eqref{equation:qr-for-ls-1} agrees with Theorem~\ref{theorem:qr-for-ls}.
To conclude, using the QR decomposition, we first derived the least squares solution directly, which yields the argument in Theorem~\ref{theorem:qr-for-ls}. Moreover, we also verified the calculus-based LS result indirectly via the QR decomposition, and the two results coincide. For those who are interested in LS in linear algebra, a pictorial view of least squares for full column rank $\bA$ in the fundamental theorem of linear algebra is provided in Appendix~\ref{appendix:ls-fundation-theorem}, and a detailed discussion can be found in \citep{lu2021revisit}.
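The procedure in Theorem~\ref{theorem:qr-for-ls} can be sketched in a few lines of NumPy (an illustration with random data; production code would use a dedicated triangular solver for the back substitution):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 3
A = rng.standard_normal((m, n))   # full column rank with probability one
b = rng.standard_normal(m)

# Full QR: Q is m x m orthogonal, R is m x n with the last m-n rows zero.
Q, R = np.linalg.qr(A, mode="complete")
R1 = R[:n, :]                     # square upper triangular block
c = (Q.T @ b)[:n]                 # first n components of Q^T b

x_ls = np.linalg.solve(R1, c)     # back substitution on R1 x = c

# Agrees with the normal-equations solution (A^T A)^{-1} A^T b.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
```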
\subsection{Application: Rank-One Changes}\label{section:qr-rank-one-changes}
We previously discussed the rank-one update/downdate of the Cholesky decomposition in Section~\ref{section:cholesky-rank-one-update} (p.~\pageref{section:cholesky-rank-one-update}). The rank-one change $\bA^\prime$ of matrix $\bA$ in the QR decomposition is defined in a similar form:\index{Rank-one update}
$$
\begin{aligned}
\bA^\prime &= \bA + \bu\bv^\top, \\
\downarrow &\gap \downarrow\\
\bQ^\prime\bR^\prime &=\bQ\bR + \bu\bv^\top,
\end{aligned}
$$
where setting $\bu \leftarrow -\bu$ recovers the downdate form, so updates and downdates in the QR decomposition are handled identically. Letting $\bw = \bQ^\top\bu$, we have
$$
\bA^\prime = \bQ(\bR + \bw\bv^\top).
$$
From the second form in Remark~\ref{remark:basis-from-givens2} (p.~\pageref{remark:basis-from-givens2}) on introducing zeros backwards, there exists a set of Givens rotations $\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}$ such that
$$
\bG_{12} \bG_{23} \ldots \bG_{(n-1),n} \bw = \pm ||\bw|| \be_1,
$$
where $\bG_{(k-1),k}$ is the Givens rotation in planes $k-1$ and $k$ that introduces a zero in the $k$-th entry of $\bw$. Applying these rotations to $\bR$, we have
$$
\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}\bR = \bH_0 ,
$$
where the Givens rotations in this \textit{reverse order} transform the upper triangular $\bR$ into a ``simple'' upper Hessenberg matrix, which is close to upper triangular (see Definition~\ref{definition:upper-hessenbert}, p.~\pageref{definition:upper-hessenbert}, which we will introduce in the Hessenberg decomposition). If the rotations instead transform $\bw$ into $\pm ||\bw||\be_1$ in \textit{forward order}, as in Corollary~\ref{corollary:basis-from-givens} (p.~\pageref{corollary:basis-from-givens}), we will not obtain this upper Hessenberg $\bH_0$. To see this, suppose $\bR\in \real^{5\times 5}$; an example is shown as follows, where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed. The backward rotations result in the upper Hessenberg $\bH_0$, which is relatively simple to handle:
$$
\begin{aligned}
\text{\parbox{7em}{Backwards\\(Right Way)}: }
\begin{sbmatrix}{\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{45}}{\rightarrow}
\begin{sbmatrix}{\bG_{45}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & 0 & \bm{\boxtimes}& \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bG_{34}}{\rightarrow}
\begin{sbmatrix}{\bG_{34}\bG_{45}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}\\
&\stackrel{\bG_{23}}{\rightarrow}
\begin{sbmatrix}{\bG_{23}\bG_{34}\bG_{45}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
\stackrel{\bG_{12}}{\rightarrow}
\begin{sbmatrix}{\bG_{12}\bG_{23}\bG_{34}\bG_{45}\bR}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}.
\end{aligned}
$$
And the forward rotations result in a full matrix:
$$
\begin{aligned}
\text{\parbox{7em}{Forwards\\(Wrong Way)}: }
\begin{sbmatrix}{\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12}}{\rightarrow}
\begin{sbmatrix}{\bG_{12}\bR}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes
\end{sbmatrix}
\stackrel{\bG_{23}}{\rightarrow}
\begin{sbmatrix}{\bG_{23}\bG_{12}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes
\end{sbmatrix}\\
&\stackrel{\bG_{34}}{\rightarrow}
\begin{sbmatrix}{\bG_{34}\bG_{23}\bG_{12}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & 0 & 0& \boxtimes
\end{sbmatrix}
\stackrel{\bG_{45}}{\rightarrow}
\begin{sbmatrix}{\bG_{45}\bG_{34}\bG_{23}\bG_{12}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\end{sbmatrix}.
\end{aligned}
$$
That is, the backward rotations keep most of the zeros intact, whereas the forward rotations destroy them.
In general, the backward rotations result in
$$
\bG_{12} \bG_{23} \ldots \bG_{(n-1),n} (\bR+\bw\bv^\top) = \bH_0 \pm ||\bw|| \be_1 \bv^\top = \bH,
$$
which is also upper Hessenberg. Similar to triangularization via the Givens rotation in Section~\ref{section:qr-givens} (p.~\pageref{section:qr-givens}), there exists a set of rotations $\bJ_{12}, \bJ_{23}, \ldots, \bJ_{(n-1),n}$ such that
$$
\bJ_{(n-1),n} \ldots \bJ_{23}\bJ_{12}\bH = \bR^\prime
$$
is upper triangular. Following the $5\times 5$ example above, the triangularization proceeds as follows:
$$
\begin{aligned}
\underbrace{\bH_0 \pm ||\bw|| \be_1 \bv^\top}_{\bH} =
\begin{sbmatrix}{\bH}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bJ_{12}}{\rightarrow}
\begin{sbmatrix}{\bJ_{12}\bH}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
\stackrel{\bJ_{23}}{\rightarrow}
\begin{sbmatrix}{\bJ_{23}\bJ_{12}\bH}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}\\
&\stackrel{\bJ_{34}}{\rightarrow}
\begin{sbmatrix}{\bJ_{34}\bJ_{23}\bJ_{12}\bH}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
\stackrel{\bJ_{45}}{\rightarrow}
\begin{sbmatrix}{\bJ_{45}\bJ_{34}\bJ_{23}\bJ_{12}\bH}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & 0 & \bm{0} & \bm{\boxtimes} \\
\end{sbmatrix}.
\end{aligned}
$$
And the QR decomposition of $\bA^\prime$ is thus given by
$$
\bA^\prime = \bQ^\prime \bR^\prime,
$$
where
\begin{equation}\label{equation:qr-rank-one-update}
\left\{
\begin{aligned}
\bR^\prime &=(\bJ_{(n-1),n} \ldots \bJ_{23}\bJ_{12}) (\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}) (\bR+\bw\bv^\top);\\
\bQ^\prime &= \bQ\left\{(\bJ_{(n-1),n} \ldots \bJ_{23}\bJ_{12}) (\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}) \right\}^\top; \\
\text{(or) }\bQ^{\prime\top}&= \left\{(\bJ_{(n-1),n} \ldots \bJ_{23}\bJ_{12}) (\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}) \right\}\bQ^\top .
\end{aligned}
\right.
\end{equation}
The procedure is then formulated in Algorithm~\ref{alg:qr-rankoneChange}.
\begin{algorithm}[h]
\caption{QR Rank-One Changes}
\label{alg:qr-rankoneChange}
\begin{algorithmic}[1]
\Require Matrix $\bA \in \real^{n\times n}$ with QR decomposition $\bA=\bQ\bR$, and $\bA^\prime = \bA+\bu\bv^\top$;
\State Calculate $\bw\leftarrow\bQ^\top\bu$; \Comment{$2n^2-n$ flops}
\State Calculate $\bH \leftarrow \bR$;
\For{$i=n-1$ to $1$}
\State Get Givens rotation $\bG_{i,i+1}$ with the following parameters $c, s$:
\State $c = \frac{x_k}{\sqrt{x_k^2 + x_l^2}}$, $s=\frac{x_l}{\sqrt{x_k^2 + x_l^2}}$, where $x_k = \bw_i$, $x_l = \bw_{i+1}$; then update $\bw_i \leftarrow \sqrt{x_k^2 + x_l^2}$ and $\bw_{i+1}\leftarrow 0$; \Comment{6 flops}
\State Calculate $\bH = \bG_{i,i+1}\bH $ in following two steps:
\State $i$-th row: $\bH_{i,:} = c\cdot \bH_{i,:} + s \bH_{j,:} $, where $j=i+1$; \Comment{$3(n-i+1)$ flops}
\State $(i+1)$-th row: $\bH_{i+1,:} = -s\cdot \bH_{i,:} + c \bH_{j,:} $, where $j=i+1$ and the pre-update value of $\bH_{i,:}$ is used; \Comment{$3(n-i+1)$ flops}
\EndFor
\State Set $\bR^\prime =\bH \pm ||\bw|| \be_1 \bv^\top$; \Comment{$\bH, \bR^\prime$ are both upper Hessenberg}
\For{$i=1$ to $n-1$}
\State Get Givens rotation $\bJ_{i,i+1}$ with the following parameters $c, s$:
\State $c = \frac{x_k}{\sqrt{x_k^2 + x_l^2}}$, $s=\frac{x_l}{\sqrt{x_k^2 + x_l^2}}$, where $x_k = \bR^\prime_{i,i}$, $x_l = \bR^\prime_{i+1,i}$ (the current values);
\State Calculate $\bR^\prime = \bJ_{i,i+1}\bR^\prime $ in following two steps:
\State $i$-th row: $\bR^\prime_{i,:} = c\cdot \bR^\prime_{i,:} + s \bR^\prime_{j,:} $, where $j=i+1$;
\State $(i+1)$-th row: $\bR^\prime_{i+1,:} = -s\cdot \bR^\prime_{i,:} + c \bR^\prime_{j,:} $, where $j=i+1$ and the pre-update value of $\bR^\prime_{i,:}$ is used;
\EndFor
\State Output $\bR^\prime$;
\State Set $\bQ^{\prime\top} = \bQ^\top$;
\For{$i=n-1$ to $1$} \Comment{The following $c,s$ are from step 5}
\State $i$-th row: $\bQ^{\prime\top}_{i,:} = c\cdot \bQ^{\prime\top}_{i,:} + s \bQ^{\prime\top}_{j,:} $, where $j=i+1$; \Comment{$6n$ flops}
\State $(i+1)$-th row: $\bQ^{\prime\top}_{i+1,:} = -s\cdot \bQ^{\prime\top}_{i,:} + c \bQ^{\prime\top}_{j,:} $, where $j=i+1$ and the pre-update value of $\bQ^{\prime\top}_{i,:}$ is used;\Comment{$6n$ flops}
\EndFor
\For{$i=1$ to $n-1$} \Comment{The following $c,s$ are from step 13}
\State $i$-th row: $\bQ^{\prime\top}_{i,:} = c\cdot \bQ^{\prime\top}_{i,:} + s \bQ^{\prime\top}_{j,:} $, where $j=i+1$; \Comment{$6n$ flops}
\State $(i+1)$-th row: $\bQ^{\prime\top}_{i+1,:} = -s\cdot \bQ^{\prime\top}_{i,:} + c \bQ^{\prime\top}_{j,:} $, where $j=i+1$ and the pre-update value of $\bQ^{\prime\top}_{i,:}$ is used;\Comment{$6n$ flops}
\EndFor
\State Output $\bQ^\prime$;
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Algorithm Complexity: QR Rank-One Change]\label{theorem:qr-full-givens-rank1}
Algorithm~\ref{alg:qr-rankoneChange} requires $\sim 8n^2$ flops to compute a full QR decomposition of $\bA^\prime \in \real^{n\times n}$, a rank-one change of $\bA$, given the full QR decomposition of $\bA$. Further, if $\bQ^\prime$ is needed explicitly, an additional $\sim 12n^2$ flops are required.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-full-givens-rank1}]
It is trivial that step 1 needs \fbox{(*). $n(2n-1)=2n^2-n$} flops to calculate $\bw=\bQ^\top\bu$.
For step 5, each iteration $i$ requires $6$ flops (which are 2 square operations, 1 addition, 1 square root, and 2 divisions). And there are $n-1$ such iterations so that the complexity for all the step 5's is \fbox{$6(n-1)$} flops.
For each iteration $i$, step 7 and step 8 operate on two length-$(n-i+1)$ vectors. The two steps take $6(n-i+1)$ flops for each iteration $i$ (which are $4(n-i+1)$ multiplications and $2(n-i+1)$ additions).
Let $f(i) = 6(n-i+1)$, the total complexity for the two steps is equal to
$$
\mathrm{cost} =f(1)+ f(2) +\ldots +f(n-1) = \boxed{3n^2-3n} \,\, \mathrm{flops}.
$$
Therefore, the complexity for step 3 to step 9 is \fbox{(*). $6(n-1) + 3n^2-3n = 3n^2+3n-6$} flops. Similarly, the complexity for step 11 to step 16 is again \fbox{(*). $3n^2+3n-6$} flops.
In contrast, for each iteration $i$, step 21 and step 22 operate on two length-$n$ vectors. The two steps take $6n$ flops for each iteration $i$, and the total complexity for step 20 to step 23 is \fbox{(*). $6n(n-1) =6n^2-6n$} flops. Again, step 24 to step 27 take another \fbox{(*). $6n^2-6n$} flops.
Therefore, summing up the quantities marked with $(*)$ and keeping only the leading terms, the final complexity is \fbox{$20n^2$} flops ($8n^2$ flops for calculating $\bR^\prime$, and $12n^2$ flops for calculating $\bQ^\prime$).
Note that for each iteration $i$, step 7 and step 8 operate on two length-$(n-i+1)$ vectors since $\bR$ is upper triangular. If we do not exploit this structure, the final complexity is $26n^2$ flops, as stated in \citep{golub2013matrix}.
\end{proof}
The algorithm can be easily extended to a rectangular matrix $\bA\in \real^{m\times n}$, or to $\bA+\bU\bV^\top$, where $\bU\in \real^{m\times k}$ and $\bV\in \real^{n\times k}$.
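The following NumPy sketch mirrors Algorithm~\ref{alg:qr-rankoneChange} for a square matrix (an illustration only; it applies each rotation to $\bQ$ on the fly rather than counting flops):

```python
import numpy as np

def _givens(a, b):
    """c, s with [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def qr_rank_one_update(Q, R, u, v):
    """Given full QR A = Q R (A square), return the QR of A + u v^T."""
    n = Q.shape[0]
    Q, H = Q.copy(), R.astype(float).copy()
    w = Q.T @ u
    # Backward rotations: reduce w to ||w|| e1; R becomes upper Hessenberg H0.
    for i in range(n - 2, -1, -1):
        c, s = _givens(w[i], w[i + 1])
        G = np.array([[c, s], [-s, c]])
        w[i], w[i + 1] = np.hypot(w[i], w[i + 1]), 0.0
        H[[i, i + 1], :] = G @ H[[i, i + 1], :]
        Q[:, [i, i + 1]] = Q[:, [i, i + 1]] @ G.T
    H += np.outer(w, v)   # H = H0 + ||w|| e1 v^T, still upper Hessenberg
    # Forward rotations: eliminate the subdiagonal of H to obtain R'.
    for i in range(n - 1):
        c, s = _givens(H[i, i], H[i + 1, i])
        G = np.array([[c, s], [-s, c]])
        H[[i, i + 1], :] = G @ H[[i, i + 1], :]
        Q[:, [i, i + 1]] = Q[:, [i, i + 1]] @ G.T
    return Q, H  # Q', R'
```

For a random $5\times 5$ matrix $\bA$ with $\bA=\bQ\bR$, the returned pair satisfies $\bQ^\prime\bR^\prime = \bA+\bu\bv^\top$ with $\bR^\prime$ upper triangular and $\bQ^\prime$ orthogonal.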
\subsection{Application: Appending or Deleting a Column}\label{section:append-column-qr}
\paragraph{Deleting a column}
Suppose the QR decomposition of $\bA\in \real^{m\times n}$ is given by $\bA=\bQ\bR$, where the column partition of $\bA$ is $\bA=[\ba_1,\ba_2,\ldots,\ba_n]$. Now, suppose we delete the $k$-th column of $\bA$ so that $\bA^\prime = [\ba_1,\ldots,\ba_{k-1},\ba_{k+1},\ldots,\ba_n] \in \real^{m\times (n-1)}$. We want to find the QR decomposition of $\bA^\prime$ efficiently. Suppose further that $\bR$ has the following form:
$$
\begin{aligned}
\begin{blockarray}{ccccc}
\begin{block}{c[ccc]c}
& \bR_{11} & \ba & \bR_{12} & k-1 \\
\bR = & \bzero & r_{kk} & \bb^\top & 1 \\
& \bzero &\bzero& \bR_{22} & m-k \\
\end{block}
& k-1 & 1 & n-k & \\
\end{blockarray}.\\
\end{aligned}
$$
Clearly,
$$
\bQ^\top \bA^\prime =
\begin{bmatrix}
\bR_{11} &\bR_{12} \\
\bzero & \bb^\top \\
\bzero & \bR_{22}
\end{bmatrix} = \bH
$$
is upper Hessenberg. A $6\times 5$ example is shown as follows where $k=3$:
$$
\begin{aligned}
\begin{sbmatrix}{\bR = \bQ^\top\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes \\
0 & 0 & 0 & 0& 0
\end{sbmatrix}
&\longrightarrow
\begin{sbmatrix}{\bH = \bQ^\top\bA^\prime}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0& \boxtimes \\
0 & 0 & 0& 0
\end{sbmatrix}.
\end{aligned}
$$
Again, for columns $k$ to $n-1$ of $\bH$, there exists a set of rotations $\bG_{k,k+1}$, $\bG_{k+1,k+2}$, $\ldots$, $\bG_{n-1,n}$ that introduce zeros for the elements $h_{k+1,k}$, $h_{k+2,k+1}$, $\ldots$, $h_{n,n-1}$ of $\bH$. Then the upper triangular matrix $\bR^\prime$ is given by
$$
\bR^\prime = \bG_{n-1,n}\ldots \bG_{k+1,k+2}\bG_{k,k+1}\bQ^\top \bA^\prime.
$$
And the orthogonal matrix is given by
\begin{equation}\label{equation:qr-delete-column-finalq}
\bQ^\prime = (\bG_{n-1,n}\ldots \bG_{k+1,k+2}\bG_{k,k+1}\bQ^\top )^\top = \bQ \bG_{k,k+1}^\top \bG_{k+1,k+2}^\top \ldots \bG_{n-1,n}^\top,
\end{equation}
such that $\bA^\prime = \bQ^\prime\bR^\prime$. The procedure is formulated in Algorithm~\ref{alg:qr-delete-a-column}. The $6\times 5$ example is shown as follows, where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed:
$$
\begin{aligned}
\begin{sbmatrix}{\bR = \bQ^\top\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes \\
0 & 0 & 0 & 0& 0
\end{sbmatrix}
&\stackrel{k=3}{\rightarrow}
\begin{sbmatrix}{\bH = \bQ^\top\bA^\prime}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0& \boxtimes \\
0 & 0 & 0& 0
\end{sbmatrix}
\stackrel{\bG_{34}}{\rightarrow}
\begin{sbmatrix}{\bG_{34}\bH }
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & \bm{0}& \bm{\boxtimes} \\
0 & 0 & 0& \boxtimes \\
0 & 0 & 0& 0
\end{sbmatrix}
\stackrel{\bG_{45}}{\rightarrow}
\begin{sbmatrix}{\bG_{45}\bG_{34}\bH}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes \\
0 & 0 &0 & \bm{\boxtimes} \\
0 & 0 & 0& \bm{0} \\
0 & 0 & 0& 0
\end{sbmatrix}.
\end{aligned}
$$
\begin{algorithm}[h]
\caption{QR Deleting a Column}
\label{alg:qr-delete-a-column}
\begin{algorithmic}[1]
\Require Matrix $\bA \in \real^{m\times n}$ with full QR decomposition $\bA=\bQ\bR$, and $\bA^\prime \in \real^{m\times (n-1)}$ by deleting column $k$ of $\bA$;
\State Obtain $\bH$ by deleting column $k$ of $\bR$, that is, $\bH=\bQ^\top\bA^\prime$;
\For{$i=k$ to $n-1$}
\State Get Givens rotation $\bG_{i,i+1}$ with the following parameters $c, s$:
\State $c = \frac{x_k}{\sqrt{x_k^2 + x_l^2}}$, $s=\frac{x_l}{\sqrt{x_k^2 + x_l^2}}$ where $x_k = h_{ii}$, $x_l = h_{i+1,i}$;
\State Calculate $\bH = \bG_{i,i+1}\bH $ in following two steps:
\State $i$-th row: $\bH_{i,:} = c\cdot \bH_{i,:} + s \bH_{j,:} $, where $j=i+1$;
\State $(i+1)$-th row: $\bH_{i+1,:} = -s\cdot \bH_{i,:} + c \bH_{j,:} $, where $j=i+1$ and the pre-update value of $\bH_{i,:}$ is used;
\EndFor
\State Set $\bR^\prime \leftarrow \bH$ and output $\bR^\prime$;
\State Set $\bQ^\prime \leftarrow \bQ^\top $;
\For{$i=k$ to $n-1$}
\State $c = \frac{x_k}{\sqrt{x_k^2 + x_l^2}}$, $s=\frac{x_l}{\sqrt{x_k^2 + x_l^2}}$ where $x_k$, $x_l$ are from step 4;
\State Calculate $\bQ^\prime = \bG_{i,i+1}\bQ^\prime $ in following two steps:
\State $i$-th row: $\bQ^\prime_{i,:} = c\cdot \bQ^\prime_{i,:} + s \bQ^\prime_{j,:} $, where $j=i+1$;
\State $(i+1)$-th row: $\bQ^\prime_{i+1,:} = -s\cdot \bQ^\prime_{i,:} + c \bQ^\prime_{j,:} $, where $j=i+1$ and the pre-update value of $\bQ^\prime_{i,:}$ is used;
\EndFor
\State Output $\bQ^\prime \leftarrow \bQ^{\prime\top}$ from Equation~\eqref{equation:qr-delete-column-finalq};
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Algorithm Complexity: QR Deleting Column]\label{theorem:qr-full-givens-delete-column}
Algorithm~\ref{alg:qr-delete-a-column} requires $\sim 3n^2-6nk+3k^2$ flops to compute a full QR decomposition of the matrix $\bA^\prime \in \real^{m\times (n-1)}$ obtained by deleting column $k$ of $\bA\in \real^{m\times n}$, given the full QR decomposition of $\bA$. Further, if $\bQ^\prime$ is needed explicitly, an additional $\sim 6m(n-k)$ flops are required.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-full-givens-delete-column}]
For step 4, each iteration $i$ requires $6$ flops (which are 2 square operations, 1 addition, 1 square root, and 2 divisions). And there are $n-k$ such iterations, so the complexity for all the step 4's is \fbox{$6(n-k)$} flops.
For each iteration $i$, step 6 and step 7 operate on two length-$(n-i)$ vectors since $\bH$ is upper Hessenberg. The two steps take $6(n-i)$ flops for each iteration $i$ (which are $4(n-i)$ multiplications and $2(n-i)$ additions).
Let $f(i) = 6(n-i)$, the total complexity for the two steps is equal to
$$
\mathrm{cost} =f(k)+ f(k+1) +\ldots + f(n-1) = \boxed{3n^2-6nk+3k^2+3n-3k} \,\, \mathrm{flops}.
$$
Therefore, the complexity for step 2 to step 8 is \fbox{(*). $3n^2-6nk+3k^2$} flops if we keep only the leading terms.
In contrast, for each iteration $i$, step 14 and step 15 operate on two length-$m$ vectors. The two steps take $6m$ flops for each $i$, and there are $n-k$ such iterations, so the total complexity for step 11 to step 16 is \fbox{(*). $6m(n-k)$} flops.
\end{proof}
Note that the column index $k$ plays a role in the complexity: when $k=n$, the complexity is zero; and when $k=1$, the complexity attains its maximum.
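A NumPy sketch of Algorithm~\ref{alg:qr-delete-a-column}, using 0-based indexing (an illustration only):

```python
import numpy as np

def qr_delete_column(Q, R, k):
    """Given full QR A = Q R with A m x n, return the full QR of A with
    column k removed (k is 0-based)."""
    m, n = R.shape
    H = np.delete(R.astype(float), k, axis=1)   # H = Q^T A', upper Hessenberg
    Q = Q.copy()
    for i in range(k, n - 1):                   # zero out H[i+1, i]
        r = np.hypot(H[i, i], H[i + 1, i])
        c, s = (1.0, 0.0) if r == 0.0 else (H[i, i] / r, H[i + 1, i] / r)
        G = np.array([[c, s], [-s, c]])
        H[[i, i + 1], :] = G @ H[[i, i + 1], :]
        Q[:, [i, i + 1]] = Q[:, [i, i + 1]] @ G.T
    return Q, H  # Q', R'
```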
\paragraph{Appending a column}
Similarly, suppose $\widetilde{\bA} = [\ba_1,\ldots,\ba_k,\bw,\ba_{k+1},\ldots,\ba_n]$, where we insert $\bw$ as the $(k+1)$-th column of $\bA$. We can obtain
$$
\bQ^\top \widetilde{\bA} = [\bQ^\top\ba_1,\ldots, \bQ^\top\ba_k, \bQ^\top\bw, \bQ^\top\ba_{k+1}, \ldots,\bQ^\top\ba_n] = \widetilde{\bH}.
$$
A set of Givens rotations $\bJ_{m-1,m}, \bJ_{m-2,m-1}, \ldots, \bJ_{k+1,k+2}$ can introduce zeros for the $\widetilde{h}_{m,k+1}$, $\widetilde{h}_{m-1,k+1}$, $\ldots$, $\widetilde{h}_{k+2,k+1}$ elements of $\widetilde{\bH}$ such that
$$
\widetilde{\bR} =\bJ_{k+1,k+2}\ldots \bJ_{m-2,m-1}\bJ_{m-1,m} \bQ^\top \widetilde{\bA}
$$
is upper triangular.
Suppose $\widetilde{\bH}$ is of size $6\times 5$ with $k=2$; an example is shown as follows, where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\widetilde{\bH}}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & 0& \boxtimes \\
0 & 0 & \boxtimes & 0& 0 \\
0 & 0 & \boxtimes & 0& 0
\end{sbmatrix}
&\stackrel{\bJ_{56}}{\rightarrow}
\begin{sbmatrix}{\bJ_{56}\widetilde{\bH} \rightarrow \widetilde{h}_{63}=0}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & 0& \boxtimes \\
0 & 0 & \bm{\boxtimes} & 0& 0 \\
0 & 0 & \bm{0} & 0& 0
\end{sbmatrix}
\stackrel{\bJ_{45}}{\rightarrow}
\begin{sbmatrix}{\bJ_{45}\bJ_{56}\widetilde{\bH}\rightarrow \widetilde{h}_{53}=0}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \bm{\boxtimes} & 0& \bm{\boxtimes} \\
0 & 0 & \bm{0} & 0& \bm{\boxtimes}\\
0 & 0 & 0 & 0& 0
\end{sbmatrix}\\
&\stackrel{\bJ_{34}}{\rightarrow}
\begin{sbmatrix}{\bJ_{34}\bJ_{45}\bJ_{56}\widetilde{\bH}\rightarrow \widetilde{h}_{43}=0}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & 0 & 0& \boxtimes\\
0 & 0 & 0 & 0& 0
\end{sbmatrix} = \widetilde{\bR}.
\end{aligned}
$$
And finally, the orthogonal matrix is given by
\begin{equation}\label{equation:qr-add-column-finalq}
\widetilde{\bQ} = (\bJ_{k+1,k+2}\ldots \bJ_{m-2,m-1}\bJ_{m-1,m} \bQ^\top )^\top = \bQ \bJ_{m-1,m}^\top \bJ_{m-2,m-1}^\top \ldots \bJ_{k+1,k+2}^\top,
\end{equation}
such that $\widetilde{\bA} = \widetilde{\bQ}\widetilde{\bR}$. The procedure is again formulated in Algorithm~\ref{alg:qr-adding-a-column}.
\begin{algorithm}[h]
\caption{QR Adding a Column}
\label{alg:qr-adding-a-column}
\begin{algorithmic}[1]
\Require Matrix $\bA \in \real^{m\times n}$ with full QR decomposition $\bA=\bQ\bR$, and $\widetilde{\bA}\in \real^{m\times (n+1)}$ by inserting $\bw$ as the $(k+1)$-th column of $\bA$;
\State Calculate $\bQ^\top\bw$;
\State Obtain $\widetilde{\bH}$ by inserting $\bQ^\top \bw$ into $(k+1)$-th column of $\bR$;
\For{$i=m-1$ to $k+1$}
\State Get Givens rotation $\bJ_{i,i+1}$ with the following parameters $c, s$:
\State $c = \frac{x_k}{\sqrt{x_k^2 + x_l^2}}$, $s=\frac{x_l}{\sqrt{x_k^2 + x_l^2}}$ where $x_k = \widetilde{h}_{i,k+1}$, $x_l = \widetilde{h}_{i+1,k+1}$;
\State Calculate $\widetilde{\bH} = \bJ_{i,i+1}\widetilde{\bH} $ in following two steps:
\State $i$-th row: $\widetilde{\bH}_{i,:} = c\cdot \widetilde{\bH}_{i,:} + s \widetilde{\bH}_{j,:} $, where $j=i+1$;
\State $(i+1)$-th row: $\widetilde{\bH}_{i+1,:} = -s\cdot \widetilde{\bH}_{i,:} + c \widetilde{\bH}_{j,:} $, where $j=i+1$ and the pre-update value of $\widetilde{\bH}_{i,:}$ is used;
\EndFor
\State Set $\widetilde{\bR} \leftarrow \widetilde{\bH}$ and output $\widetilde{\bR}$;
\State Set $\widetilde{\bQ} \leftarrow \bQ^\top $;
\For{$i=m-1$ to $k+1$}
\State $c = \frac{x_k}{\sqrt{x_k^2 + x_l^2}}$, $s=\frac{x_l}{\sqrt{x_k^2 + x_l^2}}$ where $x_k$, $x_l$ are from step 5;
\State Calculate $\widetilde{\bQ} = \bJ_{i,i+1}\widetilde{\bQ} $ in following two steps:
\State $i$-th row: $\widetilde{\bQ}_{i,:} = c\cdot \widetilde{\bQ}_{i,:} + s \widetilde{\bQ}_{j,:} $, where $j=i+1$;
\State $(i+1)$-th row: $\widetilde{\bQ}_{i+1,:} = -s\cdot \widetilde{\bQ}_{i,:} + c \widetilde{\bQ}_{j,:} $, where $j=i+1$ and the pre-update value of $\widetilde{\bQ}_{i,:}$ is used;
\EndFor
\State Output $\widetilde{\bQ} \leftarrow \widetilde{\bQ}^{\top}$ from Equation~\eqref{equation:qr-add-column-finalq};
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Algorithm Complexity: QR Adding Column]\label{theorem:qr-full-givens-add-column}
Algorithm~\ref{alg:qr-adding-a-column} requires $\sim 2m^2+6(mn+k^2-nk-mk)$ flops to compute a full QR decomposition of the matrix $\widetilde{\bA} \in \real^{m\times (n+1)}$ obtained by inserting a column as the $(k+1)$-th column of $\bA\in \real^{m\times n}$, given the full QR decomposition of $\bA$. Further, if $\widetilde{\bQ}$ is needed explicitly, an additional $\sim 6m(m-k)$ flops are required.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-full-givens-add-column}]
It is trivial that step 1 requires \fbox{$m(2m-1)$} flops to calculate $\bQ^\top\bw$.
For step 5, each iteration $i$ requires $6$ flops (which are 2 square operations, 1 addition, 1 square root, and 2 divisions). And there are $m-k-1$ such iterations so that the complexity for all the step 5's is \fbox{$6(m-k-1)$} flops.
For each iteration $i$, step 7 and step 8 operate on two length-$(n-k+1)$ vectors, since the rows involved have nonzeros only from column $k+1$ onward. The two steps take $6(n-k+1)$ flops for each iteration $i$ (which are $4(n-k+1)$ multiplications and $2(n-k+1)$ additions).
There are $m-k-1$ such iterations, so the total complexity of steps 7 and 8 is \fbox{$6(n-k+1)(m-k-1)$} flops.
Therefore, the complexity for step 1 to step 9 is
$$
m(2m-1)+6(m-k-1)+6(n-k+1)(m-k-1)=\boxed{2m^2+6(n-k+2)(m-k-1)},
$$
or \fbox{$2m^2+6(mn+k^2-nk-mk)$} flops if we keep only the leading terms.
However, for each iteration $i$, step 15 and step 16 operate on two length-$m$ vectors. The two steps take $6m$ flops for each $i$, and there are $m-k-1$ such iterations so that the total complexity for step 12 to step 17 is \fbox{$6m(m-k-1)$} flops or \fbox{$6m(m-k)$} flops if we keep only the leading terms.
\end{proof}
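SciPy implements the same Givens-based column-insertion update as \texttt{scipy.linalg.qr\_insert}; the following is a minimal numerical check of the update (assuming SciPy is available), verifying that the updated factors reproduce the enlarged matrix:

```python
import numpy as np
from scipy.linalg import qr, qr_insert

rng = np.random.default_rng(0)
m, n, k = 6, 4, 2
A = rng.standard_normal((m, n))
w = rng.standard_normal(m)

Q, R = qr(A)                                  # full QR of A (Q is m x m)
Q1, R1 = qr_insert(Q, R, w, k, which='col')   # insert w as column k (0-based)

A_new = np.insert(A, k, w, axis=1)
assert np.allclose(Q1 @ R1, A_new)            # still a valid factorization
assert np.allclose(Q1.T @ Q1, np.eye(m))      # Q1 remains orthogonal
assert np.allclose(np.tril(R1, -1), 0)        # R1 remains upper triangular
```

The update costs $O(m(n-k))$ rotations instead of the $O(mn^2)$ flops of a fresh factorization, which is the point of the theorem above.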
\paragraph{Real world application} The method introduced above is useful for efficient variable selection in least squares problems via the QR decomposition. Each time, we delete a column of the data matrix $\bA$ and apply an $F$-test to see whether the corresponding variable is significant. If it is not, we delete the variable and favor the simpler model. A short review follows; more details can be found in \citep{lu2021rigorous}.
Following the setup in Section~\ref{section:application-ls-qr},
consider the overdetermined system $\bA\bx = \bb$, where $\bA\in \real^{m\times n}$ with $m>n$ is the data matrix and $\bb\in \real^m$ is the observation vector. The LS solution minimizing $||\bA\bx-\bb||^2$ is given by $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$, where $\bA^\top\bA$ is invertible since $\bA$ has full column rank and $\mathrm{rank}(\bA^\top\bA)=\mathrm{rank}(\bA)$.
Suppose we delete a column of $\bA$ to obtain $\widehat{\bA}$; the LS solution then changes from $\bx_{LS}$ to $\widehat{\bx}_{LS}$.
Define
$$
\begin{aligned}
RSS(\widehat{\bx}_{LS}) &= ||\bb - \widehat{\bb}_{LS}||^2, \qquad \text{where } \widehat{\bb}_{LS} = \widehat{\bA}\widehat{\bx}_{LS}, \\
RSS(\bx_{LS})&= ||\bb - \bb_{LS}||^2, \qquad \text{where } \bb_{LS} = \bA\bx_{LS},\\
\bH & = \bA(\bA^\top\bA)^{-1}\bA^\top, \\
\widehat{\bH} &= \widehat{\bA}(\widehat{\bA}^\top \widehat{\bA})^{-1}\widehat{\bA}^\top. \\
\end{aligned}
$$
Suppose the \textit{reduced} QR decomposition of $\bA,\widehat{\bA}$ are given by $\bA=\bQ\bR$, $\widehat{\bA}=\widehat{\bQ}\widehat{\bR}$.
Thus $RSS(\bx_{LS}) = \bb^\top (\bI-\bH)\bb = \bb^\top\bb - (\bb^\top\bQ)(\bQ^\top\bb)$
and $RSS(\widehat{\bx}_{LS}) - RSS(\bx_{LS}) = ||\bb_{LS} - \widehat{\bb}_{LS}||^2 = \bb^\top(\bH-\widehat{\bH})\bb=(\bb^\top\bQ)(\bQ^\top\bb)-(\bb^\top\widehat{\bQ})(\widehat{\bQ}^\top\bb)$, which are the differences of two inner products.
It can be shown that $RSS(\bx_{LS})\sim \sigma^2 \chi^2_{(m-n)}$ which is a Chi-square distribution and $\sigma$ is the noise level. Under the hypothesis that the deleted column is not significant, we could conclude that
$$
T=\frac{\frac{1}{n-q}\left(RSS(\widehat{\bx}_{LS}) - RSS(\bx_{LS})\right) }{\frac{1}{m-n}RSS(\bx_{LS})} \sim F_{n-q,m-n},
$$
which is the \textbf{test statistic for $F$-test} with $q=n-1$.
Suppose we have the data set $(\ba_1, b_1)$, $(\ba_2, b_2)$, $\ldots$, $(\ba_m, b_m)$, and we observe $T=t$ for this specific data set. Then
$$
p=P[T((\ba_1, b_1), (\ba_2, b_2), \ldots, (\ba_m, b_m)) \geq t] = P[F_{n-q,m-n} \geq t].
$$
The quantity $p$ is known as the \textit{p-value}; we reject the hypothesis if $p<\alpha$ for some small significance level $\alpha$, say $0.05$.
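The selection procedure can be sketched numerically as follows; the data are synthetic, the last column of $\bA$ is a candidate for deletion and is unrelated to $\bb$ by construction, and SciPy is assumed available for the $F$ distribution:

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(1)
m, n = 50, 4
A = rng.standard_normal((m, n))
# The response depends only on the first three columns; column 4 is noise.
b = A[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(m)

Q, _ = np.linalg.qr(A)                  # reduced QR of the full model
A_hat = np.delete(A, 3, axis=1)         # model with the column deleted
Q_hat, _ = np.linalg.qr(A_hat)

# RSS via inner products with Q, as derived above.
rss_full = b @ b - (b @ Q) @ (Q.T @ b)
rss_red = b @ b - (b @ Q_hat) @ (Q_hat.T @ b)

q = n - 1
T = ((rss_red - rss_full) / (n - q)) / (rss_full / (m - n))
p = f_dist.sf(T, n - q, m - n)          # p-value: P[F_{n-q, m-n} >= T]
# Keep the column only if p < 0.05; otherwise favor the simpler model.
```

Note that $RSS(\widehat{\bx}_{LS}) \geq RSS(\bx_{LS})$ always holds, since the reduced model is nested inside the full one.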
\subsection{Application: Appending or Deleting a Row}
\paragraph{Appending a row}
Suppose the full QR decomposition of $\bA\in \real^{m\times n}$ is given by $\bA= \begin{bmatrix}
\bA_1 \\
\bA_2
\end{bmatrix}=\bQ\bR$ where $\bA_1\in \real^{k\times n}$ and $\bA_2 \in \real^{(m-k)\times n}$. Now, if we add a row such that $\bA^\prime = \begin{bmatrix}
\bA_1 \\
\bw^\top\\
\bA_2
\end{bmatrix} \in \real^{(m+1)\times n}$. We want to find the full QR decomposition of $\bA^\prime$ efficiently. Construct a permutation matrix
$$
\bP=
\begin{bmatrix}
\bzero & 1 & \bzero \\
\bI_k & \bzero & \bzero \\
\bzero & \bzero & \bI_{m-k}
\end{bmatrix}
\longrightarrow
\bP
\begin{bmatrix}
\bA_1 \\
\bw^\top\\
\bA_2
\end{bmatrix}
=
\begin{bmatrix}
\bw^\top\\
\bA_1 \\
\bA_2
\end{bmatrix}
.
$$
Then,
$$
\begin{bmatrix}
1 & \bzero \\
\bzero & \bQ^\top
\end{bmatrix}
\bP
\bA^\prime
=
\begin{bmatrix}
\bw^\top \\
\bR
\end{bmatrix}=\bH
$$
is upper Hessenberg. Similarly, a set of rotations $\bG_{12}, \bG_{23}, \ldots, \bG_{n,n+1}$ can be applied to introduce zeros for the elements $h_{21}$, $h_{32}$, $\ldots$, $h_{n+1,n}$ of $\bH$. The triangular matrix $\bR^\prime$ is given by
$$
\bR^\prime = \bG_{n,n+1}\ldots \bG_{23}\bG_{12}\begin{bmatrix}
1 & \bzero \\
\bzero & \bQ^\top
\end{bmatrix}
\bP \bA^\prime.
$$
And the orthogonal matrix
$$
\bQ^\prime = \left(\bG_{n,n+1}\ldots \bG_{23}\bG_{12}\begin{bmatrix}
1 & \bzero \\
\bzero & \bQ^\top
\end{bmatrix}
\bP \right)^\top
=
\bP^\top
\begin{bmatrix}
1 & \bzero \\
\bzero & \bQ
\end{bmatrix}
\bG_{12}^\top \bG_{23}^\top \ldots \bG_{n,n+1}^\top,
$$
such that $\bA^\prime = \bQ^\prime\bR^\prime$.
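The row-appending scheme above can be sketched directly in NumPy. The helper name \texttt{qr\_append\_row} below is our own, and $\bA$ is assumed to have full column rank so that every rotation is well defined:

```python
import numpy as np

def qr_append_row(Q, R, w, k):
    """Given a full QR decomposition A = Q R (Q is m x m, R is m x n),
    return the full QR of A with the row w^T inserted at position k
    (0-based), via the permutation-plus-Givens scheme above."""
    m, n = Q.shape[0], R.shape[1]
    H = np.vstack([w, R])          # [w^T; R] = [1 0; 0 Q^T] P A', Hessenberg
    Qt = np.zeros((m + 1, m + 1))
    Qt[0, 0] = 1.0
    Qt[1:, 1:] = Q.T               # accumulates G... [1 0; 0 Q^T]
    for i in range(n):             # zero the subdiagonal entry h_{i+1, i}
        a, b = H[i, i], H[i + 1, i]
        r = np.hypot(a, b)
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])
        H[i:i + 2, :] = G @ H[i:i + 2, :]
        Qt[i:i + 2, :] = G @ Qt[i:i + 2, :]
    # Q' = P^T Qt^T: move row 0 of Qt^T back to position k.
    perm = np.concatenate([np.arange(1, k + 1), [0], np.arange(k + 1, m + 1)])
    return Qt.T[perm, :], H

rng = np.random.default_rng(2)
m, n, k = 5, 3, 2
A = rng.standard_normal((m, n))
w = rng.standard_normal(n)
Q, R = np.linalg.qr(A, mode='complete')
Qn, Rn = qr_append_row(Q, R, w, k)
assert np.allclose(Qn @ Rn, np.insert(A, k, w, axis=0))
assert np.allclose(np.tril(Rn, -1), 0)
```

Only $n$ rotations are needed because $[\bw^\top; \bR]$ is already upper Hessenberg, which is what makes the update cheap compared with refactorizing.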
\paragraph{Deleting a row} Suppose $\bA = \begin{bmatrix}
\bA_1 \\
\bw^\top\\
\bA_2
\end{bmatrix} \in \real^{m\times n}$ where $\bA_1\in\real^{k\times n}$, $\bA_2 \in \real^{(m-k-1)\times n}$ with the full QR decomposition given by $\bA=\bQ\bR$ where $\bQ\in \real^{m\times m}, \bR\in \real^{m\times n}$. We want to compute the full QR decomposition of $\widetilde{\bA} = \begin{bmatrix}
\bA_1 \\
\bA_2
\end{bmatrix}$ efficiently (assume $m-1\geq n$). Analogously, we can construct a permutation matrix
$$
\bP =
\begin{bmatrix}
\bzero & 1 & \bzero \\
\bI_k & \bzero & \bzero \\
\bzero & \bzero & \bI_{m-k-1}
\end{bmatrix}
$$
such that
$$
\bP\bA =
\begin{bmatrix}
\bzero & 1 & \bzero \\
\bI_k & \bzero & \bzero \\
\bzero & \bzero & \bI_{m-k-1}
\end{bmatrix}
\begin{bmatrix}
\bA_1 \\
\bw^\top\\
\bA_2
\end{bmatrix}=
\begin{bmatrix}
\bw^\top \\
\bA_1\\
\bA_2
\end{bmatrix} = \bP\bQ\bR =\bM\bR ,
$$
where $\bM = \bP\bQ$ is an orthogonal matrix. Let $\bmm^\top$ be the first row of $\bM$, and let Givens rotations $\bG_{m-1,m}, \bG_{m-2,m-1}, \ldots, \bG_{1,2}$ introduce zeros for the elements $m_m, m_{m-1}, \ldots, m_2$ of $\bmm$ respectively, such that $\bG_{1,2}\ldots \bG_{m-2,m-1}\bG_{m-1,m}\bmm = \alpha \be_1$, where $\alpha = \pm 1$ since the rotations preserve the unit norm of $\bmm$. Therefore, we have
$$
\bG_{1,2}\ldots \bG_{m-2,m-1}\bG_{m-1,m} \bR =
\begin{blockarray}{cc}
\begin{block}{[c]c}
\bv^\top & 1 \\
\bR_1& m-1 \\
\end{block}
\end{blockarray},
$$
which is upper Hessenberg with $\bR_1\in \real^{(m-1)\times n}$ being upper triangular. And
$$
\bM \bG_{m-1,m}^\top \bG_{m-2,m-1}^\top \ldots \bG_{1,2}^\top =
\begin{bmatrix}
\alpha & \bzero \\
\bzero & \bQ_1
\end{bmatrix},
$$
where $\bQ_1\in \real^{(m-1)\times (m-1)}$ is an orthogonal matrix. The bottom-left block of the above matrix is a zero vector since $\alpha=\pm 1$ and $\bM$ is orthogonal. To see this, let $\bG=\bG_{m-1,m}^\top \bG_{m-2,m-1}^\top $ $\ldots \bG_{1,2}^\top $ with first column $\bg$, and let $\bM = [\bmm^\top; \bmm_2^\top; \bmm_3^\top; \ldots; \bmm_{m}^\top]$ be the row partition of $\bM$. We have
$$
\begin{aligned}
\bmm^\top\bg &= \pm 1 \qquad \rightarrow \qquad \bg = \pm \bmm, \\
\bmm_i^\top \bmm &=0, \qquad \forall i \in \{2,3,\ldots,m\}.
\end{aligned}
$$
This results in
$$
\begin{aligned}
\bP\bA&=\bM\bR\\
&=(\bM \bG_{m-1,m}^\top \bG_{m-2,m-1}^\top \ldots \bG_{1,2}^\top ) (\bG_{1,2}\ldots \bG_{m-2,m-1}\bG_{m-1,m} \bR ) \\
&=
\begin{bmatrix}
\alpha & \bzero \\
\bzero & \bQ_1
\end{bmatrix}
\begin{bmatrix}
\bv^\top \\
\bR_1
\end{bmatrix} =
\begin{bmatrix}
\alpha \bv^\top \\
\bQ_1\bR_1
\end{bmatrix}
=
\begin{bmatrix}
\bw^\top \\
\widetilde{\bA}
\end{bmatrix}
.
\end{aligned}
$$
This implies $\bQ_1\bR_1$ is the full QR decomposition of $\widetilde{\bA}=\begin{bmatrix}
\bA_1\\
\bA_2
\end{bmatrix}$.
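SciPy implements this row-deletion update as \texttt{scipy.linalg.qr\_delete}; a quick numerical check (assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import qr, qr_delete

rng = np.random.default_rng(3)
m, n, k = 6, 3, 2
A = rng.standard_normal((m, n))

Q, R = qr(A)                                  # full QR (Q is m x m)
Q1, R1 = qr_delete(Q, R, k, which='row')      # remove row k (0-based)

A_new = np.delete(A, k, axis=0)
assert np.allclose(Q1 @ R1, A_new)            # QR of the reduced matrix
assert np.allclose(Q1.T @ Q1, np.eye(m - 1))  # Q1 remains orthogonal
```

As in the derivation, the returned factors have sizes $(m-1)\times(m-1)$ and $(m-1)\times n$.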
\subsection{Application: Reducing the Ill-Condition via the QR decomposition}
\paragraph{Well-determined linear system}
Consider the well-determined linear equation $\bA\bx = \bb$, where $\bA\in \real^{n\times n}$ is nonsingular. The solution $\bx = \bA^{-1}\bb$ exists and is unique. Now suppose the vector $\bb$ is perturbed by $\delta\bb$, the solution is now given by
$$
\bx + \delta\bx = \bA^{-1}(\bb+\delta\bb)=\bA^{-1}\bb+\bA^{-1}\delta\bb = \bx + \bA^{-1}\delta\bb.
$$
That is, $\delta\bx = \bA^{-1}\delta\bb $. By the compatibility of the matrix 2-norm with the vector norm (see Appendix~\ref{appendix:matrix-norm-sect2}), we have
$$
||\delta\bx|| = ||\bA^{-1}\delta\bb || \leq ||\bA^{-1} ||_2 ||\delta\bb||.
$$
This is known as the \textbf{absolute error bound} of $||\delta\bx||$. If $||\bA^{-1} ||_2$ is small, then small changes in $\bb$ (i.e., small $||\delta\bb||$) result in small changes in $\bx$. However, if $||\bA^{-1} ||_2$ is large, the change in $\bx$ may be large.
Now we divide the above equation by $||\bx||$
$$
\frac{||\delta\bx|| }{||\bx||} \leq ||\bA^{-1} ||_2 \frac{||\delta\bb||}{||\bx||}.
$$
By $ ||\bb|| = ||\bA\bx|| \leq ||\bA||_2 ||\bx||$, which implies $||\bx|| \geq \frac{||\bb||}{||\bA||_2}$, we have
$$
\frac{||\delta\bx|| }{||\bx||}
\leq ||\bA ||_2 ||\bA^{-1} ||_2 \frac{||\delta\bb||}{||\bb||}.
$$
This is known as the \textbf{relative error bound} of $\frac{||\delta\bx|| }{||\bx||}$. The product $||\bA ||_2 ||\bA^{-1} ||_2 $ is called the \textbf{condition number} of $\bA$, and is denoted as $\kappa(\bA)$:
$$
\kappa(\bA) = ||\bA ||_2 ||\bA^{-1} ||_2 .
$$
Similarly, by the Frobenius norm inequality $||\bb|| = ||\bA\bx|| \leq ||\bA||_F ||\bx||$, we can also define the condition number as
$$
\kappa(\bA)= ||\bA ||_F ||\bA^{-1} ||_F.
$$
We will only consider the first definition of the condition number in this text. If the relative error in $\bx$ is not much larger than the relative error in $\bb$, the matrix is said to be \textbf{well-conditioned}. That is, if the condition number $\kappa(\bA)$ is small, the matrix $\bA$ is well-conditioned. Otherwise, the matrix is called \textbf{ill-conditioned}.
\paragraph{Over-determined linear system} Now, we further consider the over-determined linear equation $\bA\bx = \bb$, where $\bA\in \real^{m\times n}$ with $m>n$. Suppose $\bA$ has full column rank such that $\bA^\top\bA$ is invertible. Then the unique least squares solution is given by
$$
\bx = (\bA^\top\bA)^{-1}\bA^\top\bb,
$$
which results from the normal equation
$$
(\bA^\top\bA)\bx = \bA^\top\bb.
$$
It can be shown that the condition number is given by
$$
\kappa(\bA^\top\bA) = \kappa(\bA)^2.
$$
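A quick NumPy check of this squaring effect (\texttt{np.linalg.cond} computes the 2-norm condition number by default):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((10, 4))        # full column rank (almost surely)

kappa = np.linalg.cond(A)               # 2-norm condition number of A
kappa_normal = np.linalg.cond(A.T @ A)  # conditioning of the normal equations
assert np.allclose(kappa_normal, kappa ** 2)
```

This is why solving the normal equations directly is discouraged for ill-conditioned $\bA$: the effective conditioning is squared.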
\begin{example}[Reducing ill-condition]
Consider the matrix
$$
\bA =
\begin{bmatrix}
1+\delta & 1\\
1 & 1+\delta
\end{bmatrix},
$$
where $\delta$ is small. The condition number of $\bA$ is of order $\delta^{-1}$. If we use the QR decomposition $\bA = \bQ\bR$ to solve the linear equation, then
$$
\kappa(\bQ)=1, \qquad
\kappa(\bA) \rightarrow \kappa(\bQ^\top\bA) = \kappa(\bR).
$$
If the condition number of $\bR$ were smaller than that of $\bA$, we would overcome the ill-conditioning problem. Specifically, four sign-equivalent QR decompositions of $\bA$ are given by
$$
\begin{aligned}
\bA &= \bQ_1\bR_1\\
&=
\begin{bmatrix}
q_{11} & q_{12} \\
q_{21} & q_{22}
\end{bmatrix}
\begin{bmatrix}
r_{11} & r_{12} \\
0 & r_{22}
\end{bmatrix}\\
&=
\begin{bmatrix}
\frac{1+\delta}{\sqrt{\delta^2+2\delta+2}} & \frac{1}{\sqrt{\delta^2+2\delta+2}}\\
\frac{1}{\sqrt{\delta^2+2\delta+2}} & -\frac{1+\delta}{\sqrt{\delta^2+2\delta+2}}
\end{bmatrix}
\begin{bmatrix}
\sqrt{\delta^2+2\delta+2} & \frac{\delta^2+\delta}{(1+\delta)\sqrt{\delta^2+2\delta+2}} + \frac{\sqrt{\delta^2+2\delta+2}}{1+\delta}\\
0 & - \frac{\delta^2+2\delta}{\sqrt{\delta^2+2\delta+2}}
\end{bmatrix}
\end{aligned}
$$
or
$$
\begin{aligned}
\bA &= \bQ_2\bR_2\\
&=
\begin{bmatrix}
q_{11} & \textcolor{blue}{-}q_{12} \\
q_{21} & \textcolor{blue}{-}q_{22}
\end{bmatrix}
\begin{bmatrix}
r_{11} & r_{12} \\
0 & \textcolor{blue}{-}r_{22}
\end{bmatrix}\\
&=
\begin{bmatrix}
\frac{1+\delta}{\sqrt{\delta^2+2\delta+2}} & -\frac{1}{\sqrt{\delta^2+2\delta+2}}\\
\frac{1}{\sqrt{\delta^2+2\delta+2}} & \frac{1+\delta}{\sqrt{\delta^2+2\delta+2}}
\end{bmatrix}
\begin{bmatrix}
\sqrt{\delta^2+2\delta+2} & \frac{\delta^2+\delta}{(1+\delta)\sqrt{\delta^2+2\delta+2}} + \frac{\sqrt{\delta^2+2\delta+2}}{1+\delta}\\
0 & \frac{\delta^2+2\delta}{\sqrt{\delta^2+2\delta+2}}
\end{bmatrix}
\end{aligned}
$$
or
$$
\begin{aligned}
\bA &= \bQ_3\bR_3\\
&=
\begin{bmatrix}
\textcolor{blue}{-}q_{11} & q_{12} \\
\textcolor{blue}{-}q_{21} & q_{22}
\end{bmatrix}
\begin{bmatrix}
\textcolor{blue}{-}r_{11} & \textcolor{blue}{-}r_{12} \\
0 & r_{22}
\end{bmatrix}\\
&=
\begin{bmatrix}
-\frac{1+\delta}{\sqrt{\delta^2+2\delta+2}} & \frac{1}{\sqrt{\delta^2+2\delta+2}}\\
-\frac{1}{\sqrt{\delta^2+2\delta+2}} & -\frac{1+\delta}{\sqrt{\delta^2+2\delta+2}}
\end{bmatrix}
\begin{bmatrix}
-\sqrt{\delta^2+2\delta+2} & -\frac{\delta^2+\delta}{(1+\delta)\sqrt{\delta^2+2\delta+2}} - \frac{\sqrt{\delta^2+2\delta+2}}{1+\delta}\\
0 & - \frac{\delta^2+2\delta}{\sqrt{\delta^2+2\delta+2}}
\end{bmatrix}
\end{aligned}
$$
or
$$
\begin{aligned}
\bA &= \bQ_4\bR_4\\
&=
\begin{bmatrix}
\textcolor{blue}{-}q_{11} & \textcolor{blue}{-}q_{12} \\
\textcolor{blue}{-}q_{21} & \textcolor{blue}{-}q_{22}
\end{bmatrix}
\begin{bmatrix}
\textcolor{blue}{-}r_{11} & \textcolor{blue}{-}r_{12} \\
0 & \textcolor{blue}{-}r_{22}
\end{bmatrix}\\
&=
\begin{bmatrix}
-\frac{1+\delta}{\sqrt{\delta^2+2\delta+2}} & -\frac{1}{\sqrt{\delta^2+2\delta+2}}\\
-\frac{1}{\sqrt{\delta^2+2\delta+2}} & \frac{1+\delta}{\sqrt{\delta^2+2\delta+2}}
\end{bmatrix}
\begin{bmatrix}
-\sqrt{\delta^2+2\delta+2} & -\frac{\delta^2+\delta}{(1+\delta)\sqrt{\delta^2+2\delta+2}} - \frac{\sqrt{\delta^2+2\delta+2}}{1+\delta}\\
0 & \frac{\delta^2+2\delta}{\sqrt{\delta^2+2\delta+2}}
\end{bmatrix}
\end{aligned}
$$
Suppose $\delta=0.01$; then the condition number of $\bA$ is $\kappa(\bA)=(2+\delta)/\delta=201$. Since the 2-norm is invariant under orthogonal transformations, $\kappa(\bR_1)=\kappa(\bR_2)=\kappa(\bA)$; the real benefit of the QR approach is that it solves the system without forming the normal equations, whose condition number is $\kappa(\bA)^2\approx 4\times 10^4$.
\exampbar
\end{example}
As shown in the example above, solving linear equations via the QR decomposition can mitigate ill-conditioning; more details on numerical stability can be found in \citep{higham2002accuracy, zhang2017matrix, golub2013matrix, boyd2018introduction}.
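The numbers in the example can be checked numerically; the following also verifies that, in the 2-norm, the triangular factor has the same condition number as $\bA$, while the normal equations square it:

```python
import numpy as np

delta = 0.01
A = np.array([[1 + delta, 1.0], [1.0, 1 + delta]])

# A is symmetric with eigenvalues 2 + delta and delta, so
# kappa(A) = (2 + delta) / delta = 201 here.
assert np.isclose(np.linalg.cond(A), (2 + delta) / delta)

# The 2-norm is invariant under orthogonal transformations...
Q, R = np.linalg.qr(A)
assert np.isclose(np.linalg.cond(R), np.linalg.cond(A))
# ...while the normal equations square the condition number.
assert np.isclose(np.linalg.cond(A.T @ A), np.linalg.cond(A) ** 2)
```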
\chapter{Skeleton/CUR Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Skeleton Decomposition}
\begin{theoremHigh}[Skeleton Decomposition]\label{theorem:skeleton-decomposition}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\underset{m\times n}{\bA }=
\underset{m\times r}{\bC} \gap \underset{r\times r}{\bU^{-1} }\gap \underset{r\times n}{\bR},
$$
where $\bC$ is some $r$ linearly independent columns of $\bA$, $\bR$ is some $r$ linearly independent rows of $\bA$ and $\bU$ is the nonsingular submatrix on the intersection.
\begin{itemize}
\item The storage for the decomposition is then reduced (or potentially increased, when $r$ is large) from $mn$ floats to $r(m+n)+r^2$ floats.
\item Alternatively, if we record the positions of the selected indices, it requires $mr$ and $nr$ floats for storing $\bC$ and $\bR$ respectively, plus $2r$ extra integers to record the position of each column of $\bC$ in $\bA$ and each row of $\bR$ in $\bA$ (so that $\bU$ can be reconstructed from $\bC,\bR$).
\end{itemize}
\end{theoremHigh}
The skeleton decomposition is also known as the \textit{CUR decomposition}, a name that follows from the symbols in the factorization. The skeleton decomposition is illustrated in Figure~\ref{fig:skeleton}, where the \textcolor{canaryyellow}{yellow} vectors denote the linearly independent columns of $\bA$ and the \textcolor{green}{green} vectors denote the linearly independent rows of $\bA$. Specifically, if $I,J$ are index vectors, both of size $r$, containing the indices of the rows and columns selected from $\bA$ into $\bR$ and $\bC$ respectively, then $\bU$ can be denoted by $\bU=\bA[I,J]$ (see Definition~\ref{definition:matlabnotation}, p.~\pageref{definition:matlabnotation}).
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{imgs/skeleton.pdf}
\caption{Demonstration of skeleton decomposition of a matrix.}
\label{fig:skeleton}
\end{figure}
\section{Existence of the Skeleton Decomposition}
In Corollary~\ref{lemma:equal-dimension-rank} (p.~\pageref{lemma:equal-dimension-rank}), we proved that the row rank and the column rank of a matrix are equal. In other words, the dimension of the column space and the dimension of the row space are equal. This property is essential for the existence of the skeleton decomposition.
We are then ready to prove the existence of the skeleton decomposition. The proof is rather elementary.
\begin{proof}[of Theorem~\ref{theorem:skeleton-decomposition}]
The proof relies on the existence of such nonsingular matrix $\bU$ which is central to this decomposition method.
\paragraph{Existence of such a nonsingular matrix $\bU$} Since matrix $\bA$ has rank $r$, we can pick $r$ linearly independent columns from $\bA$. Put these $r$ independent columns $\ba_{i1}, \ba_{i2}, \ldots, \ba_{ir}$ into an $m\times r$ matrix $\bN=[\ba_{i1}, \ba_{i2}, \ldots, \ba_{ir}] \in \real^{m\times r}$. The dimension of the column space of $\bN$ is $r$, so the dimension of the row space of $\bN$ is also $r$ by Corollary~\ref{lemma:equal-dimension-rank} (p.~\pageref{lemma:equal-dimension-rank}). Again, we can pick $r$ linearly independent rows $\bn_{j1}^\top,\bn_{j2}^\top, \ldots, \bn_{jr}^\top $ from $\bN$ and put them into an $r\times r$ matrix $\bU = [\bn_{j1}^\top; \bn_{j2}^\top; \ldots; \bn_{jr}^\top]\in \real^{r\times r}$. Using Corollary~\ref{lemma:equal-dimension-rank} once more, the dimension of the column space of $\bU$ is also $r$, which means the $r$ columns of $\bU$ are linearly independent. Hence $\bU$ is a nonsingular matrix of size $r\times r$.
\paragraph{Main proof}
Once we have found the nonsingular $r\times r$ matrix $\bU$ inside $\bA$, we can establish the existence of the skeleton decomposition as follows.
Suppose $\bU=\bA[I,J]$, where $I,J$ are index vectors of size $r$. Since $\bU$ is nonsingular, the columns of $\bU$ are linearly independent. Thus the columns of the matrix $\bC$ containing the columns of $\bU$ are also linearly independent (i.e., select the $r$ columns of $\bA$ containing the entries of $\bU$; here $\bC$ equals the $\bN$ constructed above and $\bC=\bA[:,J]$).
As the rank of the matrix $\bA$ is $r$, any column $\ba_i$ of $\bA$ can be represented as a linear combination of the columns of $\bC$; i.e., there exists a vector $\bx$ such that $\ba_i = \bC \bx$ for all $ i\in \{1, 2, \ldots, n\}$. Let the $r$ entries of $\ba_i$ corresponding to the row indices of $\bU$ be $\br_i \in \real^r$ for all $i\in \{1, 2, \ldots, n\}$. That is, select the $r$ entries of each $\ba_i$ corresponding to the rows of $\bU$ as follows:
$$
\bA = [\ba_1,\ba_2, \ldots, \ba_n]\in \real^{m\times n} \qquad \longrightarrow \qquad
\bA[I,:]=[\br_1, \br_2, \ldots, \br_n] \in \real^{r\times n}.
$$
Since $\ba_i = \bC\bx$, $\bU$ is a submatrix inside $\bC$, and $\br_i$ is a subvector of $\ba_i$, we have $\br_i = \bU \bx$, which is equivalent to $\bx = \bU^{-1} \br_i$. Thus for every $i$, we have $\ba_i = \bC \bU^{-1} \br_i$. Combining the $n$ such columns $\br_i$ into $\bR=[\br_1, \br_2, \ldots, \br_n]$, we obtain
$$
\bA = [\ba_1, \ba_2, \ldots, \ba_n] = \bC \bU^{-1} \bR,
$$
from which the result follows.
In short, we first gather $r$ linearly independent columns of $\bA$ into $\bC\in \real^{m\times r}$. From $\bC$, we find an $r\times r$ nonsingular submatrix $\bU$. The $r$ rows of $\bA$ corresponding to the rows of $\bU$ then suffice to reconstruct the columns of $\bA$. Again, the situation is shown in Figure~\ref{fig:skeleton}.
\end{proof}
In case $\bA$ is square and invertible, the skeleton decomposition $\bA=\bC\bU^{-1} \bR$ holds with $\bC=\bR=\bU=\bA$, so the decomposition reduces to $\bA = \bA\bA^{-1}\bA$.
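As a numerical illustration of the factorization (using a different selection rule than the ``first independent columns'' of the proof): here the $r$ independent columns and rows are chosen via column-pivoted QR, assuming SciPy is available; any $r$ independent columns and rows yield a nonsingular intersection $\bU$:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(5)
m, n, r = 7, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r

_, _, piv = qr(A, pivoting=True)
J = np.sort(piv[:r])                 # r linearly independent columns of A
_, _, piv = qr(A.T, pivoting=True)
I = np.sort(piv[:r])                 # r linearly independent rows of A

C, Rr, U = A[:, J], A[I, :], A[np.ix_(I, J)]
# A = C U^{-1} R, computed via a linear solve instead of an explicit inverse.
assert np.allclose(C @ np.linalg.solve(U, Rr), A)
```

Solving $\bU\bX=\bR$ rather than forming $\bU^{-1}$ explicitly is the usual numerically safer choice.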
\paragraph{CR decomposition vs skeleton decomposition} We note that the CR decomposition and the skeleton decomposition share a similar form, even in the symbols used: $\bA=\bC\bR$ for the CR decomposition and $\bA=\bC\bU^{-1}\bR$ for the skeleton decomposition.
In both the CR decomposition and the skeleton decomposition, we \textbf{can}\footnote{Here, we highlight that we can, but need not, as we will see in the randomized algorithm for the skeleton decomposition.} select the first $r$ linearly independent columns to obtain the matrix $\bC$ (the same symbol in both decompositions), so the $\bC$'s in the two decompositions are exactly the same. On the contrary, $\bR$ in the CR decomposition is the reduced row echelon form without the zero rows, whereas $\bR$ in the skeleton decomposition consists of actual rows of $\bA$; hence the two $\bR$'s have different meanings. We will formally show that $\bU^{-1}\bR$ in the skeleton decomposition is the reduced row echelon form without the zero rows in Theorem~\ref{theorem:cur-row-reduced-form}.
\paragraph{A word on the uniqueness of CR decomposition and skeleton decomposition} As mentioned above, both in the CR decomposition and the skeleton decomposition, we select the first $r$ linearly independent columns to obtain the matrix $\bC$. In this sense, the CR and skeleton decompositions have a unique form.
However, if we select the last $r$ linearly independent columns, we will get a different CR decomposition or skeleton decomposition. We will not discuss this situation here as it is not the main interest of this text.
To repeat, in the above proof of the existence of the skeleton decomposition, we first gather $r$ linearly independent columns of $\bA$ into the matrix $\bC$; from $\bC$, we find an $r\times r$ nonsingular submatrix $\bU$; and from $\bU$, we then obtain the row submatrix $\bR\in \real^{r\times n}$. A further question arises: if matrix $\bA$ has rank $r$, matrix $\bC$ contains $r$ linearly independent columns, and matrix $\bR$ contains $r$ linearly independent rows, is the $r\times r$ ``intersection" of $\bC$ and $\bR$ necessarily invertible?\footnote{We thank Gilbert Strang for raising this interesting question.}
\begin{corollary}[Nonsingular Intersection]\label{corollary:invertible-intersection}
If matrix $\bA \in \real^{m\times n}$ has rank $r$, matrix $\bC$ contains $r$ linearly independent columns, and matrix $\bR$ contains $r$ linearly independent rows, then the $r\times r$ ``intersection" matrix $\bU$ of $\bC$ and $\bR$ is invertible.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:invertible-intersection}]
If $I,J$ are the indices of rows and columns selected from $\bA$ into $\bR$ and $\bC$ respectively, then, $\bR$ can be denoted as $\bR=\bA[I, :]$, $\bC$ can be represented as $\bC = \bA[:,J]$, and $\bU$ can be denoted as $\bU=\bA[I,J]$.
Since $\bC$ contains $r$ linearly independent columns of $\bA$, any column $\ba_i$ of $\bA$ can be represented as $\ba_i = \bC\bx_i = \bA[:,J]\bx_i$ for all $i \in \{1,2,\ldots, n\}$. This implies the $r$ entries of $\ba_i$ corresponding to the indices $I$ can be represented by the columns of $\bU$ such that $\ba_i[I] = \bU\bx_i \in \real^{r}$ for all $i \in \{1,2,\ldots, n\}$, i.e.,
$$
\ba_i = \bC\bx_i = \bA[:,J]\bx_i \in \real^{m} \qquad \longrightarrow \qquad
\ba_i[I] =\bA[I,J]\bx_i= \bU\bx_i \in \real^{r}.
$$
Since $\bR$ contains $r$ linearly independent rows of $\bA$, the row rank and column rank of $\bR$ are equal to $r$. Combining the facts above, the $r$ columns of $\bR$ corresponding to indices $J$ (i.e., the $r$ columns of $\bU$) are linearly independent.
Again, by applying Corollary~\ref{lemma:equal-dimension-rank} (p.~\pageref{lemma:equal-dimension-rank}), the dimension of the row space of $\bU$ is also equal to $r$, which means the $r$ rows of $\bU$ are linearly independent, and $\bU$ is invertible.
\end{proof}
\section{Computing the Skeleton Decomposition via the Gram-Schmidt Process}
In Section~\ref{section:dependent-gram-schmidt-process}, we discussed how to tackle dependent columns when orthogonalizing a matrix. We can thus use the Gram-Schmidt process to select $r$ linearly independent columns from $\bA$, resulting in $\bC$, and then use the Gram-Schmidt process again to select $r$ linearly independent columns from $\bC^\top$, which yields $\bU$ and $\bR$. The process is shown in Algorithm~\ref{alg:skeleton-decomposition}.
\begin{algorithm}[h]
\caption{Skeleton Decomposition via Gram-Schmidt Process}
\label{alg:skeleton-decomposition}
\begin{algorithmic}[1]
\Require
Rank-$r$ matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ with size $m\times n $;
\State Initially set column count $ck=0$ and row count $rk=0$;
\State Set $\bq_1 = \ba_1/r_{11}, r_{11}=||\ba_1||$; \Comment{Suppose the first column is nonzero and $r_{11}\neq 0$}
\For{$k=2$ to $n$}
\State $\bq_k = (\ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i)/r_{kk}, r_{ik}=\bq_i^\top\ba_k, r_{kk}=||\ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i||$, $\forall i \in \{1,\ldots, k-1\}$;
\If{$r_{kk}\neq 0$}
\State Select the $k$-th column of $\bA$ into $ck$-th column of $\bC$;
\State $ck=ck+1$;
\EndIf
\EndFor
\State $\bC=[\bc_1^\top; \bc_2^\top; \ldots; \bc_m^\top]$, where $\bc_i^\top$ is the $i$-th row of $\bC$;
\State Set $\bq_1 = \bc_1/r_{11}, r_{11}=||\bc_1||$;\Comment{Suppose the first row is nonzero and $r_{11}\neq 0$}
\For{$k=2$ to $m$}
\State $\bq_k = (\bc_k-\sum_{i=1}^{k-1}r_{ik}\bq_i)/r_{kk}, r_{ik}=\bq_i^\top\bc_k, r_{kk}=||\bc_k-\sum_{i=1}^{k-1}r_{ik}\bq_i|| $, $\forall i \in \{1,\ldots, k-1\}$;
\If{$r_{kk}\neq 0$}
\State Select the $k$-th row of $\bC$ into $rk$-th row of $\bU$;
\State $rk=rk+1$;
\EndIf
\EndFor
\State Select the rows from $\bA$ corresponding to the rows of $\bU$ into $\bR$.
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Algorithm Complexity: Skeleton via Gram-Schmidt]\label{theorem:comp-skeleton-decom1}
Algorithm~\ref{alg:skeleton-decomposition} requires $\sim 2(mn^2+rm^2+r^3)$ flops to compute a skeleton decomposition of an $m\times n$ matrix. We will show this algorithm is not the best way to get the skeleton decomposition.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:comp-skeleton-decom1}]
This is just applying Theorem~\ref{theorem:qr-reduced} twice to get matrix $\bU$ (to get $\bC$, it costs $2mn^2$ flops. And to get $\bU$, it costs $2rm^2$ flops.) and applying Theorem~\ref{theorem:inverse-by-lu2} to compute the inverse of $\bU$ ($2r^3$ flops).
\end{proof}
Note that choosing the first linearly independent columns from $\bA$ can make the inversion of $\bU$ unstable. A maxvol procedure is used in \citep{goreinov2010find}. We will introduce the pseudoskeleton decomposition to overcome this problem at the end of the section.
\section{Computing the Skeleton Decomposition via Modified Gram-Schmidt Process}
In the Gram-Schmidt process, we want to find orthonormal vectors. However, this orthogonal property is not useful for skeleton decomposition. We just need to decide whether the vectors are linearly independent or not.
We can thus use the Gram-Schmidt process with a slight modification to select $r$ linearly independent columns from $\bA$ to get $\bC$, and use it again (with the same modification) to select $r$ linearly independent columns from $\bC^\top$ to get $\bU$ and $\bR$.
For a set of $n$ vectors $\ba_1, \ba_2, \ldots, \ba_n$ from the columns of $\bA$, if we want to project a vector $\ba_k$ onto the space perpendicular to the space spanned by $\ba_1, \ba_2, \ldots, \ba_{k-1}$ (suppose these $k-1$ vectors are linearly independent), we can write out this projection from Equation~\eqref{equation:gram-schdt-eq2}
$$
\begin{aligned}
\bb_k &= \ba_k - \left(\frac{ \bb_1^\top\ba_k}{\bb_1^\top\bb_1} \bb_1 + \frac{ \bb_2^\top\ba_k}{\bb_2^\top\bb_2} \bb_2+\ldots + \frac{ \bb_{k-1}^\top\ba_k}{\bb_{k-1}^\top\bb_{k-1}} \bb_{k-1} \right)\\
&=\ba_k - \sum_{i=1}^{k-1}\frac{ \bb_i^\top\ba_k}{\bb_i^\top\bb_i} \bb_i,
\end{aligned}
$$
where $\bb_k$ is the projection of $\ba_k$ onto the space perpendicular to the space spanned by $\{\ba_1, \ba_2, \ldots, \ba_{k-1}\}$, $\bb_{k-1}$ is the projection of $\ba_{k-1}$ onto the space perpendicular to the space spanned by $\{\ba_1, \ba_2, \ldots, \ba_{k-2}\}$, and so on.
If $\bb_k$ is nonzero, then $\ba_k$ is linearly independent of $\ba_1, \ba_2, \ldots, \ba_{k-1}$ and is selected into the columns of $\bC$.
Similarly, we can choose linearly independent rows from $\bC$ to produce $\bU$. The process is shown in Algorithm~\ref{alg:skeleton-decomposition-modified}.
\begin{algorithm}[h]
\caption{Skeleton Decomposition via \textbf{Modified Gram-Schmidt Process}}
\label{alg:skeleton-decomposition-modified}
\begin{algorithmic}[1]
\Require
Rank-$r$ matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ with size $m\times n $;
\State Initially set column count $ck=0$ and row count $rk=0$;
\State Set $\bb_1= \ba_1$; \Comment{Suppose the first column is nonzero, $0$ flops}
\For{$k=2$ to $n$}
\State $\bb_k=\ba_k - \sum_{i=1}^{k-1}\frac{ \bb_i^\top\ba_k}{\bb_i^\top\bb_i} \bb_i$, (Skip the zero terms $\bb_i$);
\If{$\bb_k\neq \bzero$}
\State Select the $k$-th column of $\bA$ into the $ck$-th column of $\bC$;
\State $ck=ck+1$;
\EndIf
\EndFor
\State $\bC=[\bc_1^\top; \bc_2^\top; \ldots; \bc_m^\top]$, where $\bc_i^\top$ is the $i$-th row of $\bC$;
\State Set $\bb_1 = \bc_1$, (Suppose the first row is nonzero);
\For{$k=2$ to $m$}
\State $\bb_k = \bc_k - \sum_{i=1}^{k-1}\frac{ \bb_i^\top\bc_k}{\bb_i^\top\bb_i} \bb_i$, (Skip the zero terms $\bb_i$);
\If{$\bb_k\neq \bzero$}
\State Select the $k$-th row of $\bC$ into the $rk$-th row of $\bU$;
\State $rk=rk+1$;
\EndIf
\EndFor
\State Select the rows from $\bA$ corresponding to the rows of $\bU$ into $\bR$.
\end{algorithmic}
\end{algorithm}
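Algorithm~\ref{alg:skeleton-decomposition-modified} can be sketched in NumPy as follows. The helper name \texttt{independent\_columns} and the tolerance \texttt{tol} are our choices, with the exact zero test of the algorithm replaced by a threshold on the residual norm:

```python
import numpy as np

def independent_columns(A, tol=1e-10):
    """Greedily select linearly independent columns of A by the modified
    Gram-Schmidt projections above; returns their indices."""
    basis, idx = [], []
    for k in range(A.shape[1]):
        b = A[:, k].astype(float)
        for q in basis:                  # subtract projections onto kept b_i
            b -= (q @ b) / (q @ q) * q
        if np.linalg.norm(b) > tol:      # nonzero residual => independent
            basis.append(b)
            idx.append(k)
    return idx

rng = np.random.default_rng(6)
m, n, r = 6, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r

J = independent_columns(A)        # independent columns of A -> C
C = A[:, J]
I = independent_columns(C.T)      # independent rows of C -> U; rows of A -> R
U, R = A[np.ix_(I, J)], A[I, :]
assert len(J) == r
assert np.allclose(C @ np.linalg.solve(U, R), A)   # A = C U^{-1} R
```

As in the text, orthonormality is never needed: only the zero/nonzero test on the residual $\bb_k$ decides independence.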
\begin{theorem}[Algorithm Complexity: Skeleton via Modified Gram-Schmidt]\label{theorem:comp-skeleton-decom2}
Algorithm~\ref{alg:skeleton-decomposition-modified} requires $\sim 2(mn^2+rm^2+ r^3)$ flops to compute a skeleton decomposition of an $m\times n$ matrix.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:comp-skeleton-decom2}]
For the final iteration $k=n$ of the first loop, step 4 computes $\frac{ \bb_i^\top\ba_n}{\bb_i^\top\bb_i} \bb_i$, which involves:
a. $m$ multiplications and $m-1$ additions for the calculation of $\bb_i^\top\ba_n$;
b. $m$ multiplications for the calculation of $\bb_i^\top\ba_n *\bb_i $;
c. $m$ multiplications and $m-1$ additions for the calculation of $\bb_i^\top\bb_i$;
d. 1 division;
There are $n-1$ such terms requiring procedures $a, b, d$ above. However, for procedure $c$, the quantity $\bb_i^\top\bb_i$ for $i \in \{1, 2, \ldots, n-2\}$ was already calculated in previous iterations. As a result, we need \fbox{$3m(n-1)+2m-1$} flops to compute $\frac{ \bb_i^\top\ba_n}{\bb_i^\top\bb_i} \bb_i$ for $i\in \{1, 2, \ldots, n-1\}$, where the last $2m-1$ flops compute $\bb_{n-1}^\top\bb_{n-1}$.
The final $\ba_k - \sum_{i=1}^{k-1}\frac{ \bb_i^\top\ba_k}{\bb_i^\top\bb_i} \bb_i$ then takes $(n-2)m$ additions and $m$ subtractions which cost \fbox{$(n-1)m$} flops. So the total cost of step 4 in the last loop $n$ is $4m(n-1)+2m-1$ flops. Let $f(i)=4m(i-1)+2m-1$, the total cost for step 3 to step 9 can be obtained by
$$
\mathrm{cost}=f(2)+f(3)+\ldots+f(n).
$$
Simple calculation shows this loop takes $4m \frac{n^2-n}{2} + (2m-1)(n-1)$ flops, or \fbox{$2mn^2$} flops if we keep only the leading term.
Similarly, for step 13, the total cost of the second loop is $4r \frac{m^2-m}{2} + (2r-1)(m-1)$ flops, or \fbox{$2rm^2$} flops if we keep only the leading term.
As a result, the total cost is $2(mn^2+rm^2+r^3)$ flops if we keep only the leading terms, where $2r^3$ results from the computation of the inverse of $\bU$ by Theorem~\ref{theorem:inverse-by-lu2} (p.~\pageref{theorem:inverse-by-lu2}).
\end{proof}
The complexity of Algorithm~\ref{alg:skeleton-decomposition-modified} is the same as that of Algorithm~\ref{alg:skeleton-decomposition}, as the two are mathematically equivalent. However, the interpretation of Algorithm~\ref{alg:skeleton-decomposition-modified} is much clearer.
\section{Computing the Skeleton Decomposition via the Gaussian Elimination}
In Algorithm~\ref{alg:cr-decomposition-alg}, we discussed how to obtain the row reduced echelon form by Gaussian elimination, where the columns containing the pivots are the linearly independent ones. We can thus use this row reduced echelon form to find the skeleton decomposition.
\begin{algorithm}[H]
\caption{Skeleton Decomposition via Gaussian Elimination}
\label{alg:skeleton-by-row-reduced-echelon}
\begin{algorithmic}[1]
\Require
Rank-$r$ matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ with size $m\times n $;
\State Use Algorithm~\ref{alg:cr-decomposition-alg} to get the row reduced echelon form of $\bA$ and choose the columns containing pivots from $\bA$ into $\bC$;
\State Use Algorithm~\ref{alg:cr-decomposition-alg} to get the row reduced echelon form of $\bC^\top$ and choose the columns containing pivots from $\bC^\top$ into the columns of $\bU^\top$;
\State Select the rows from $\bA$ corresponding to the rows of $\bU$ into $\bR$.
\end{algorithmic}
\end{algorithm}
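As a minimal numerical sketch of the idea above (using NumPy; the function \texttt{pivot\_columns} and the greedy rank test are illustrative assumptions, not the book's implementation), we can mimic the pivot-column selection of Gaussian elimination and verify $\bA = \bC\bU^{-1}\bR$ on a small rank-2 example:

```python
import numpy as np

def pivot_columns(A, tol=1e-10):
    """Greedily select the leftmost linearly independent columns of A,
    mimicking the pivot columns found by Gaussian elimination."""
    cols = []
    for j in range(A.shape[1]):
        if np.linalg.matrix_rank(A[:, cols + [j]], tol=tol) == len(cols) + 1:
            cols.append(j)
    return cols

# A rank-2 example: column 2 = 2 * column 1; column 4 = column 1 + column 3.
A = np.array([[1., 2., 0., 1.],
              [0., 0., 1., 1.],
              [1., 2., 1., 2.]])
J = pivot_columns(A)          # pivot columns of A        -> C
C = A[:, J]
I = pivot_columns(C.T)        # pivot columns of C^T, i.e., row indices -> R
R = A[I, :]
U = A[np.ix_(I, J)]           # intersection of C and R
A_rec = C @ np.linalg.inv(U) @ R
print(J, I)                   # [0, 2] [0, 1]
print(np.allclose(A, A_rec))  # True
```

The rank test plays the role of the pivot search in the row reduced echelon form; for large matrices an actual elimination (or a rank-revealing QR) would be preferred.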
\begin{theorem}[Algorithm Complexity: Skeleton via Gaussian Elimination]\label{theorem:comp-skeleton-decom-row-reduced}
Algorithm~\ref{alg:skeleton-by-row-reduced-echelon} requires
$$
\text{cost}=\left\{
\begin{aligned}
&(2m^2n-m^3) + (2r^2m-r^3) + 2r^3=2m^2n-m^3 + 2r^2m+r^3 \text{}, \qquad &\text{if }m\leq n; \\
&(mn^2)+ (2r^2m-r^3)+2r^3 = mn^2+ 2r^2m+r^3\text{},\qquad &\text{if }m>n.
\end{aligned}
\right.
$$
flops to compute a skeleton decomposition of an $m\times n$ matrix. The cost of $2r^3$ above is again from the calculation of the inverse of $\bU$.
\end{theorem}
The proof of the above theorem is trivial since $r\leq m$ by Theorem~\ref{theorem:cr-decomposition-alg}.
\section{Recover Reduced Row Echelon Form from Skeleton Decomposition}
In this section, we formally show that $\bU^{-1}\bR$ is the row reduced echelon form without zero rows.
\begin{theorem}[Recover RREF from Skeleton Decomposition]\label{theorem:cur-row-reduced-form}
Suppose we have skeleton decomposition via Algorithm~\ref{alg:skeleton-decomposition}, Algorithm~\ref{alg:skeleton-decomposition-modified}, or Algorithm~\ref{alg:skeleton-by-row-reduced-echelon}
that $\bA = \bC\bU^{-1}\bR$, then $\bU^{-1}\bR$ is the row reduced echelon form without zero rows.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:cur-row-reduced-form}]
Without loss of generality, we assume matrix $\bA\in \real^{4\times 5}$ and that the linearly independent columns of $\bA$ are columns $1, 3, 5$; that is, column 2 is a multiple $m_1$ of column 1, and column 4 is a linear combination of columns 1 and 3: column 4 = $n_1\cdot$ column 1 + $n_2\cdot$ column 3.
When we use $\bU$ to recover column 1, we have $\bx_1 = \bU^{-1}\br_1 = [1;0;0]$;
When we use $\bU$ to recover column 2, we have $\bx_2 = \bU^{-1}\br_2 = [m_1;0;0]$;
When we use $\bU$ to recover column 3, we have $\bx_3 = \bU^{-1}\br_3 = [0;1;0]$;
When we use $\bU$ to recover column 4, we have $\bx_4 = \bU^{-1}\br_4 = [n_1;n_2;0]$;
When we use $\bU$ to recover column 5, we have $\bx_5 = \bU^{-1}\br_5 = [0;0;1]$.
This clearly gives this row reduced echelon form without zero rows:
$$
\bX = \begin{bmatrix}
1 & m_1 & 0 & n_1 & 0\\
0 & 0 & 1 & n_2 & 0\\
0 & 0 & 0 & 0 & 1
\end{bmatrix},
$$
where the pivots are all $1$'s and the entries above the pivots are all $0$'s.
Now we go back to the general matrix $\bA\in \real^{m\times n}$. Algorithm~\ref{alg:skeleton-decomposition}, Algorithm~\ref{alg:skeleton-decomposition-modified}, and Algorithm~\ref{alg:skeleton-by-row-reduced-echelon} find the first linearly independent columns from left to right, say columns $k, l, m, n, \ldots$ with $k< l < m< n< \ldots$. We will get similar results: the pivots are located at row 1, column $k$; row 2, column $l$; row 3, column $m$; row 4, column $n$; $\ldots$.
The entries above the pivots are all $0$'s.
For columns $k+1, k+2, \ldots, l-1$, they are multiples of column $k$, so the entries below row 1 in columns $k+1, k+2, \ldots, l-1$ are zero.
For columns $l+1, l+2, \ldots, m-1$, they are combinations of columns $k$ and $l$, so the entries below row 2 in columns $l+1, l+2, \ldots, m-1$ are zero.
The process can go on and it produces the reduced row echelon form without zero rows.
\end{proof}
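The $4\times 5$ example in the proof can be checked numerically. The following NumPy sketch instantiates the construction with illustrative values $m_1=2$, $n_1=3$, $n_2=4$ (these specific numbers are assumptions for the demonstration, not part of the proof) and confirms that $\bU^{-1}\bR$ reproduces the row reduced echelon form without zero rows:

```python
import numpy as np

# The 4 x 5 example from the proof: independent columns 1, 3, 5;
# column 2 = m1 * column 1 and column 4 = n1 * column 1 + n2 * column 3,
# with illustrative values m1 = 2, n1 = 3, n2 = 4.
c1 = np.array([1., 0., 1., 2.])
c3 = np.array([0., 1., 1., 0.])
c5 = np.array([0., 0., 1., 1.])
A = np.column_stack([c1, 2 * c1, c3, 3 * c1 + 4 * c3, c5])

J = [0, 2, 4]                 # the pivot (linearly independent) columns
I = [0, 1, 2]                 # rows whose intersection U is invertible here
U = A[np.ix_(I, J)]
R = A[I, :]
X = np.linalg.inv(U) @ R      # claimed: RREF of A without zero rows
print(X)                      # rows: [1 2 0 3 0], [0 0 1 4 0], [0 0 0 0 1]
```

The pivots come out as $1$'s in columns 1, 3, 5 and the coefficients $m_1, n_1, n_2$ appear exactly where the proof predicts.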
\section{Randomized Algorithms}
The methods for computing the skeleton decomposition described here choose the first linearly independent columns of $\bA$ into $\bC$, just as in the CR decomposition. However, this is not necessary. Randomized algorithms that do not choose the linearly independent columns from left to right are discussed in \citep{mahoney2009cur, boutsidis2009improved, drineas2012fast, kishore2017literature}.
In \citep{goreinov1997pseudo, goreinov2001maximal}, an approximation of the skeleton decomposition was developed, called the \textit{pseudoskeleton approximation}. The $k$ columns in $\bC$ and $k$ rows in $\bR$ with $k<r$ are chosen such that their intersection $\bU_{k\times k}$ has maximum volume (i.e., the maximum determinant in absolute value among all $k\times k$ submatrices of $\bA$).
\section{Pseudoskeleton Decomposition via the SVD}\label{section:pseudoskeleton}
We will discuss the singular value decomposition (SVD) extensively in Section~\ref{section:SVD}. For now, we assume familiarity with the SVD and show how to use it to approximate the skeleton decomposition. Feel free to skip this section on a first reading.
For a matrix $\bA\in\real^{m\times n}$, we want to construct an approximation of $\bA$ with rank $\gamma \leq \min(m,n)$ in the form of a skeleton decomposition. That is, $\bA$ is approximated by $\bA\approx \bC\bG\bR$, where $\bC$ and $\bR$ contain $\gamma$ selected columns and rows respectively, and $\bG = \bU^{-1}$ with $\bU$ being the intersection of $\bC$ and $\bR$. Specifically, if $I, J$ are the indices of the selected rows and columns respectively, then $\bU=\bA[I,J]$. Note that $\gamma$ is not necessarily equal to the rank $r$ of $\bA$.
Instead of choosing the $r$ linearly independent columns of $\bA$ (as in the skeleton decomposition), we choose $k$ random columns, with $k>r$ or even $k=\min(m,n)$, indexed by $J$, into the matrix $\bC$. That is, $\bC = \bA[:,J]\in \real^{m\times k}$. We then select $k$ rows from $\bA$, indexed by $I$, into the matrix $\bR=\bA[I,:]$ such that the volume of the intersection matrix $\bU=\bA[I,J]$ is maximized, i.e., $|\det(\bU)|$ is maximized given the randomly chosen $\bC$. Note that $\bC$ is chosen randomly, but $\bR$ is not. Now the decomposition becomes
$$
\bA = \bC_{m\times k}\bU_{k\times k}^{-1}\bR_{k\times n}.
$$
Again, the inverse of $\bU_{k\times k}$ is not stable due to the random choice; indeed, since the rank of $\bU_{k\times k}$ is at most $r<k$, the matrix can even be singular. To overcome this, we decompose $\bU_{k\times k}$ by the full SVD (refer to Section~\ref{section:SVD}, p.~\pageref{section:SVD} for the difference between the reduced SVD and the full SVD):
$$
\bU_{k\times k} = \bU_k\bSigma_k\bV_k^\top,
$$
where $\bU_k, \bV_k\in \real^{k\times k}$ are orthogonal matrices, $\bSigma_k$ is a diagonal matrix containing $k$ singular values $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_k$ with zeros allowed. Now we choose $\gamma$ singular values that are greater than some value $\epsilon$ and truncate the $\bU_k, \bV_k, \bSigma_k$ according to the $\gamma$ selected singular values such that $\bU_{k\times k}$ is approximated by a rank-$\gamma$ matrix $\bU_{k\times k} \approx \bU_\gamma\bSigma_\gamma\bV_\gamma^\top$, where $\bU_\gamma,\bV_\gamma\in \real^{k\times \gamma}$, and $\bSigma_\gamma \in \real^{\gamma\times \gamma}$. Therefore, the pseudoinverse of $\bU_{k\times k}$ is \footnote{See Section~\ref{section:application-ls-svd} (p.~\pageref{section:application-ls-svd}) and Appendix~\ref{appendix:pseudo-inverse} (p.\pageref{appendix:pseudo-inverse}) for the detailed discussion of pseudoinverse.}
$$
\bU^+ = (\bU_\gamma\bSigma_\gamma\bV_\gamma^\top)^{+} =\bV_\gamma \bSigma_\gamma^{-1}\bU_\gamma^\top.
$$
As a result, the matrix $\bA$ is approximated by a rank-$\gamma$ matrix
\begin{equation}\label{equation:skeleton-low-rank}
\begin{aligned}
\bA &\approx \bC\bV_\gamma \bSigma_\gamma^{-1}\bU_\gamma^\top \bR\\
&=\bC_2 \bR_2, \qquad (\text{Let $\bC_2=\bC\bV_\gamma \bSigma_\gamma^{-1/2}$ and $\bR_2=\bSigma_\gamma^{-1/2}\bU_\gamma^\top \bR$})
\end{aligned}
\end{equation}
where $\bC_2$ and $\bR_2$ are rank-$\gamma$ matrices. Please refer to \citep{goreinov1997pseudo, kishore2017literature} for further details on how to choose such an $\epsilon$ systematically. In the method described above, we only choose $\bC$ randomly. The algorithms introduced in \citep{zhu2011randomised} choose both $\bC$ and $\bR$ randomly and obtain more stable results.
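The procedure above can be sketched numerically. In this hedged NumPy example both the rows and the columns are chosen randomly (as in the fully randomized variants just mentioned) rather than by maximizing the volume, and the seed, sizes, and threshold $\epsilon$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 6, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-3 matrix

k = 5                                        # oversample: k > r
J = rng.choice(n, size=k, replace=False)     # random column indices
I = rng.choice(m, size=k, replace=False)     # random row indices (for brevity)
C, R, U = A[:, J], A[I, :], A[np.ix_(I, J)]  # U is k x k but rank <= r: singular

# Truncated SVD of U: keep the singular values above a threshold eps.
Uk, s, Vt = np.linalg.svd(U)
eps = 1e-8
gamma = int(np.sum(s > eps))
U_pinv = Vt[:gamma].T @ np.diag(1.0 / s[:gamma]) @ Uk[:, :gamma].T

A_approx = C @ U_pinv @ R
rel_err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
print(rel_err)   # tiny: the truncated pseudoinverse recovers the rank-3 matrix
```

Inverting $\bU$ directly would fail here since $\bU$ is exactly singular; the truncated pseudoinverse is what makes the reconstruction stable.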
\chapter{Spectral Decomposition (Theorem)}\label{section:spectral-decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Spectral Decomposition}
\begin{theoremHigh}[Spectral Decomposition]\label{theorem:spectral_theorem}
A real matrix $\bA \in \real^{n\times n}$ is symmetric if and only if there exists an orthogonal matrix $\bQ$ and a diagonal matrix $\bLambda$ such that
\begin{equation*}
\bA = \bQ \bLambda \bQ^\top,
\end{equation*}
where the columns of $\bQ = [\bq_1, \bq_2, \ldots, \bq_n]$ are eigenvectors of $\bA$ and are mutually orthonormal, and the entries of $\bLambda=diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$ are the corresponding eigenvalues of $\bA$, which are real. And the rank of $\bA$ is the number of nonzero eigenvalues. This is known as the \textbf{spectral decomposition} or \textbf{spectral theorem} of real symmetric matrix $\bA$. Specifically, we have the following properties:
1. A symmetric matrix has only \textbf{real eigenvalues};
2. The eigenvectors are orthogonal such that they can be chosen \textbf{orthonormal} by normalization;
3. The rank of $\bA$ is the number of nonzero eigenvalues;
4. If the eigenvalues are distinct, the eigenvectors are unique (up to sign) as well.
\end{theoremHigh}
The above decomposition is called the spectral decomposition for real symmetric matrices and is often known as the \textit{spectral theorem}.
\paragraph{Spectral theorem vs eigenvalue decomposition} In the eigenvalue decomposition, we require the matrix $\bA$ to be square and the eigenvectors to be linearly independent, whereas in the spectral theorem, any symmetric matrix can be diagonalized, and the eigenvectors can be chosen to be orthonormal.
\paragraph{A word on the spectral decomposition} In Lemma~\ref{lemma:eigenvalue-similar-matrices} (p.~\pageref{lemma:eigenvalue-similar-matrices}), we proved that the eigenvalues of similar matrices are the same. From the spectral decomposition, we notice that $\bA$ and $\bLambda$ are similar matrices such that their eigenvalues are the same. For any diagonal matrix, the eigenvalues are the diagonal components.\footnote{Actually, we have shown in the last section that the diagonal values of a triangular matrix are its eigenvalues.} To see this, we realize that
$$
\bLambda \be_i = \lambda_i \be_i,
$$
where $\be_i$ is the $i$-th basis vector. Therefore, the matrix $\bLambda$ contains the eigenvalues of $\bA$.
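The decomposition can be observed numerically; this minimal NumPy sketch (the particular matrix is an illustrative assumption) uses the symmetric eigensolver to recover $\bA=\bQ\bLambda\bQ^\top$ with an orthogonal $\bQ$:

```python
import numpy as np

A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])                   # a real symmetric matrix
lam, Q = np.linalg.eigh(A)                     # real eigenvalues, orthonormal Q
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))  # A = Q Lambda Q^T -> True
print(np.allclose(Q.T @ Q, np.eye(3)))         # Q is orthogonal  -> True
```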
\section{Existence of the Spectral Decomposition}\label{section:existence-of-spectral}
We prove the theorem in several steps.
\begin{tcolorbox}[title={Symmetric Matrix Property 1 of 4}]
\begin{lemma}[Real Eigenvalues]\label{lemma:real-eigenvalues-spectral}
The eigenvalues of any symmetric matrix are all real.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:real-eigenvalues-spectral}]
Suppose an eigenvalue $\lambda$ is a complex number $\lambda=a+ib$, where $a,b$ are real. Its complex conjugate is $\bar{\lambda}=a-ib$. Similarly, for a complex eigenvector $\bx = \bc+i\bd$, its complex conjugate is $\bar{\bx}=\bc-i\bd$, where $\bc, \bd$ are real vectors. We then have the following property:
$$
\bA \bx = \lambda \bx\qquad \underrightarrow{\text{ leads to }}\qquad \bA \bar{\bx} = \bar{\lambda} \bar{\bx}\qquad \underrightarrow{\text{ transpose, } \bA^\top=\bA }\qquad \bar{\bx}^\top \bA =\bar{\lambda} \bar{\bx}^\top.
$$
We take the dot product of the first equation with $\bar{\bx}$ and the last equation with $\bx$:
$$
\bar{\bx}^\top \bA \bx = \lambda \bar{\bx}^\top \bx, \qquad \text{and } \qquad \bar{\bx}^\top \bA \bx = \bar{\lambda}\bar{\bx}^\top \bx.
$$
Then we have the equality $\lambda\bar{\bx}^\top \bx = \bar{\lambda} \bar{\bx}^\top\bx$. Since $\bar{\bx}^\top\bx = (\bc-i\bd)^\top(\bc+i\bd) = \bc^\top\bc+\bd^\top\bd$ is a positive real number (as $\bx\neq \bzero$), it follows that $\lambda = \bar{\lambda}$. Therefore, the imaginary part of $\lambda$ is zero and $\lambda$ is real.
\end{proof}
\begin{tcolorbox}[title={Symmetric Matrix Property 2 of 4}]
\begin{lemma}[Orthogonal Eigenvectors]\label{lemma:orthogonal-eigenvectors}
The eigenvectors corresponding to distinct eigenvalues of any symmetric matrix are orthogonal. Consequently, we can normalize the eigenvectors to make them orthonormal, since $\bA\bx = \lambda \bx \underrightarrow{\text{ leads to } } \bA\frac{\bx}{||\bx||} = \lambda \frac{\bx}{||\bx||}$, which corresponds to the same eigenvalue.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:orthogonal-eigenvectors}]
Suppose eigenvalues $\lambda_1, \lambda_2$ correspond to eigenvectors $\bx_1, \bx_2$ so that $\bA\bx_1=\lambda_1 \bx_1$ and $\bA\bx_2 = \lambda_2\bx_2$. We have the following equality:
$$
\bA\bx_1=\lambda_1 \bx_1 \leadto \bx_1^\top \bA =\lambda_1 \bx_1^\top \leadto \bx_1^\top \bA \bx_2 =\lambda_1 \bx_1^\top\bx_2,
$$
and
$$
\bA\bx_2 = \lambda_2\bx_2 \leadto \bx_1^\top\bA\bx_2 = \lambda_2\bx_1^\top\bx_2,
$$
which implies $\lambda_1 \bx_1^\top\bx_2=\lambda_2\bx_1^\top\bx_2$. Since the eigenvalues satisfy $\lambda_1\neq \lambda_2$, we must have $\bx_1^\top\bx_2=0$, i.e., the eigenvectors are orthogonal.
\end{proof}
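A quick numerical check of the lemma (an illustrative NumPy sketch, not part of the proof): even the generic eigensolver, which does not assume symmetry, returns orthogonal eigenvectors for a symmetric matrix with distinct eigenvalues.

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])       # symmetric with distinct eigenvalues 1 and 3
lam, X = np.linalg.eig(A)      # generic eigensolver, no symmetry assumed
v1, v2 = X[:, 0], X[:, 1]
print(abs(v1 @ v2))            # ~0: eigenvectors of distinct eigenvalues are orthogonal
```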
In the above Lemma~\ref{lemma:orthogonal-eigenvectors}, we prove that the eigenvectors corresponding to distinct eigenvalues of symmetric matrices are orthogonal. More generally, we prove the important theorem that eigenvectors corresponding to distinct eigenvalues of any matrix are linearly independent.
\begin{theorem}[Independent Eigenvector Theorem]\label{theorem:independent-eigenvector-theorem}
If a matrix $\bA\in \real^{n\times n}$ has $k$ distinct eigenvalues, then any set of $k$ corresponding eigenvectors are linearly independent.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:independent-eigenvector-theorem}]
We will prove by induction. First, we show that any two eigenvectors corresponding to distinct eigenvalues are linearly independent. Suppose $\bv_1,\bv_2$ correspond to distinct eigenvalues $\lambda_1$ and $\lambda_2$ respectively. Suppose further, for contradiction, that there exists a nonzero vector $\bx=[x_1,x_2] \neq \bzero $ such that
\begin{equation}\label{equation:independent-eigenvector-eq1}
x_1\bv_1+x_2\bv_2=\bzero.
\end{equation}
That is, we suppose $\bv_1,\bv_2$ are linearly dependent.
Multiply Equation~\eqref{equation:independent-eigenvector-eq1} on the left by $\bA$, we get
\begin{equation}\label{equation:independent-eigenvector-eq2}
x_1 \lambda_1\bv_1 + x_2\lambda_2\bv_2 = \bzero.
\end{equation}
Multiply Equation~\eqref{equation:independent-eigenvector-eq1} on the left by $\lambda_2$, we get
\begin{equation}\label{equation:independent-eigenvector-eq3}
x_1\lambda_2\bv_1 + x_2\lambda_2\bv_2 = \bzero.
\end{equation}
Subtract Equation~\eqref{equation:independent-eigenvector-eq2} from Equation~\eqref{equation:independent-eigenvector-eq3} to find
$$
x_1(\lambda_2-\lambda_1)\bv_1 = \bzero.
$$
Since $\lambda_2\neq \lambda_1$ and $\bv_1\neq \bzero$, we must have $x_1=0$. From Equation~\eqref{equation:independent-eigenvector-eq1}, since $\bv_2\neq \bzero$, we must also have $x_2=0$, which contradicts the assumption $\bx\neq\bzero$. Thus $\bv_1,\bv_2$ are linearly independent.
Now, suppose any $j<k$ of the eigenvectors are linearly independent; if we can prove that any $j+1$ of them are also linearly independent, we finish the proof. Suppose, for contradiction, that $\bv_1, \bv_2, \ldots, \bv_j$ are linearly independent while $\bv_{j+1}$ depends on the first $j$ eigenvectors. That is, there exists a vector $\bx=[x_1,x_2,\ldots, x_{j}]\neq \bzero$ such that
\begin{equation}\label{equation:independent-eigenvector-zero}
\bv_{j+1}= x_1\bv_1+x_2\bv_2+\ldots+x_j\bv_j .
\end{equation}
Suppose the $j+1$ eigenvectors correspond to distinct eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_j,\lambda_{j+1}$.
Multiply Equation~\eqref{equation:independent-eigenvector-zero} on the left by $\bA$, we get
\begin{equation}\label{equation:independent-eigenvector-zero2}
\lambda_{j+1} \bv_{j+1} = x_1\lambda_1\bv_1+x_2\lambda_2\bv_2+\ldots+x_j \lambda_j\bv_j .
\end{equation}
Multiply Equation~\eqref{equation:independent-eigenvector-zero} on the left by $\lambda_{j+1}$, we get
\begin{equation}\label{equation:independent-eigenvector-zero3}
\lambda_{j+1} \bv_{j+1} = x_1\lambda_{j+1}\bv_1+x_2\lambda_{j+1}\bv_2+\ldots+x_j \lambda_{j+1}\bv_j .
\end{equation}
Subtracting Equation~\eqref{equation:independent-eigenvector-zero2} from Equation~\eqref{equation:independent-eigenvector-zero3}, we find
$$
x_1(\lambda_{j+1}-\lambda_1)\bv_1+x_2(\lambda_{j+1}-\lambda_2)\bv_2+\ldots+x_j (\lambda_{j+1}-\lambda_j)\bv_j = \bzero.
$$
By assumption, $\lambda_{j+1} \neq \lambda_i$ for all $i\in \{1,2,\ldots,j\}$, and $\bv_1, \bv_2, \ldots, \bv_j$ are linearly independent, so we must have $x_1=x_2=\ldots=x_j=0$. Substituting into Equation~\eqref{equation:independent-eigenvector-zero} then gives $\bv_{j+1}=\bzero$, which contradicts the fact that eigenvectors are nonzero. Hence $\bv_1,\bv_2,\ldots,\bv_j,\bv_{j+1}$ are linearly independent. This completes the proof.
\end{proof}
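The theorem can be illustrated numerically (a NumPy sketch with an assumed example matrix): for a matrix with $n$ distinct eigenvalues, the matrix of eigenvectors has full rank, i.e., the eigenvectors are linearly independent.

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 3., 1.],
              [0., 0., 5.]])    # triangular: distinct eigenvalues 1, 3, 5
lam, X = np.linalg.eig(A)
# Three distinct eigenvalues -> the three eigenvectors are linearly independent.
print(np.linalg.matrix_rank(X))   # 3
```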
A direct consequence of the above theorem is as follows:
\begin{corollary}[Independent Eigenvector Theorem, CNT.]\label{theorem:independent-eigenvector-theorem-basis}
If a matrix $\bA\in \real^{n\times n}$ has $n$ distinct eigenvalues, then any set of $n$ corresponding eigenvectors form a basis for $\real^n$.
\end{corollary}
\begin{tcolorbox}[title={Symmetric Matrix Property 3 of 4}]
\begin{lemma}[Orthonormal Eigenvectors for Duplicate Eigenvalue]\label{lemma:eigen-multiplicity}
If $\bA$ has a duplicate eigenvalue $\lambda_i$ with multiplicity $k\geq 2$, then there exist $k$ orthonormal eigenvectors corresponding to $\lambda_i$.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:eigen-multiplicity}]
We note that there is at least one eigenvector $\bx_{i1}$ corresponding to $\lambda_i$. And for such eigenvector $\bx_{i1}$, we can always find additional $n-1$ orthonormal vectors $\by_2, \by_3, \ldots, \by_n$ so that $\{\bx_{i1}, \by_2, \by_3, \ldots, \by_n\}$ forms an orthonormal basis in $\real^n$. Put the $\by_2, \by_3, \ldots, \by_n$ into matrix $\bY_1$ and $\{\bx_{i1}, \by_2, \by_3, \ldots, \by_n\}$ into matrix $\bP_1$
$$
\bY_1=[\by_2, \by_3, \ldots, \by_n] \qquad \text{and} \qquad \bP_1=[\bx_{i1}, \bY_1].
$$
We then have
$$
\bP_1^\top\bA\bP_1 = \begin{bmatrix}
\lambda_i &\bzero \\
\bzero & \bY_1^\top \bA\bY_1
\end{bmatrix},
$$
where the off-diagonal blocks vanish since $\bx_{i1}^\top \bA\bY_1 = (\bA\bx_{i1})^\top\bY_1 = \lambda_i\bx_{i1}^\top \bY_1 = \bzero$ by the symmetry of $\bA$.
As a result, $\bA$ and $\bP_1^\top\bA\bP_1$ are similar matrices such that they have the same eigenvalues since $\bP_1$ is nonsingular (even orthogonal here, see Lemma~\ref{lemma:eigenvalue-similar-matrices}, p.~\pageref{lemma:eigenvalue-similar-matrices}). We obtain
$$
\det(\bP_1^\top\bA\bP_1 - \lambda\bI_n) =
(\lambda_i - \lambda )\det(\bY_1^\top \bA\bY_1 - \lambda\bI_{n-1}),
$$
since the determinant of a block-diagonal matrix is the product of the determinants of its diagonal blocks.\footnote{More generally, if matrix $\bM$ has a block formulation $\bM=\begin{bmatrix}
\bA & \bB \\
\bC & \bD
\end{bmatrix}$ with $\bA$ nonsingular, then $\det(\bM) = \det(\bA)\det(\bD-\bC\bA^{-1}\bB)$.}
If $\lambda_i$ has multiplicity $k\geq 2$, then the term $(\lambda_i-\lambda)$ occurs $k$ times in the polynomial from the determinant $\det(\bP_1^\top\bA\bP_1 - \lambda\bI_n)$, i.e., the term occurs $k-1$ times in the polynomial from $\det(\bY_1^\top \bA\bY_1 - \lambda\bI_{n-1})$. In other words, $\det(\bY_1^\top \bA\bY_1 - \lambda_i\bI_{n-1})=0$ and $\lambda_i$ is an eigenvalue of $\bY_1^\top \bA\bY_1$.
Let $\bB=\bY_1^\top \bA\bY_1$. Since $\det(\bB-\lambda_i\bI_{n-1})=0$, the null space of $\bB-\lambda_i\bI_{n-1}$ is nontrivial. Let $\bn\neq \bzero$ satisfy $(\bB-\lambda_i\bI_{n-1})\bn = \bzero$, i.e., $\bB\bn=\lambda_i\bn$, so that $\bn$ is an eigenvector of $\bB$.
From $
\bP_1^\top\bA\bP_1 = \begin{bmatrix}
\lambda_i &\bzero \\
\bzero & \bB
\end{bmatrix},
$
we have $
\bA\bP_1
\begin{bmatrix}
z \\
\bn
\end{bmatrix}
=
\bP_1
\begin{bmatrix}
\lambda_i &\bzero \\
\bzero & \bB
\end{bmatrix}
\begin{bmatrix}
z \\
\bn
\end{bmatrix}$, where $z$ is any scalar. From the left side of this equation, we have
\begin{equation}\label{equation:spectral-pro4-right}
\begin{aligned}
\bA\bP_1
\begin{bmatrix}
z \\
\bn
\end{bmatrix}
&=
\begin{bmatrix}
\lambda_i\bx_{i1} & \bA\bY_1
\end{bmatrix}
\begin{bmatrix}
z \\
\bn
\end{bmatrix} \\
&=\lambda_iz\bx_{i1} + \bA\bY_1\bn.
\end{aligned}
\end{equation}
And from the right side of the equation, we have
\begin{equation}\label{equation:spectral-pro4-left}
\begin{aligned}
\bP_1
\begin{bmatrix}
\lambda_i &\bzero \\
\bzero & \bB
\end{bmatrix}
\begin{bmatrix}
z \\
\bn
\end{bmatrix}
&=
\begin{bmatrix}
\bx_{i1} & \bY_1
\end{bmatrix}
\begin{bmatrix}
\lambda_i &\bzero \\
\bzero & \bB
\end{bmatrix}
\begin{bmatrix}
z \\
\bn
\end{bmatrix}\\
&=
\begin{bmatrix}
\lambda_i\bx_{i1} & \bY_1\bB
\end{bmatrix}
\begin{bmatrix}
z \\
\bn
\end{bmatrix}\\
&= \lambda_i z \bx_{i1} + \bY_1\bB \bn \\
&=\lambda_i z \bx_{i1} + \lambda_i \bY_1 \bn. \qquad (\text{Since $\bB \bn=\lambda_i\bn$})\\
\end{aligned}
\end{equation}
Combining Equation~\eqref{equation:spectral-pro4-right} and Equation~\eqref{equation:spectral-pro4-left}, we obtain
$$
\bA\bY_1\bn = \lambda_i\bY_1 \bn,
$$
which means $\bY_1\bn$ is an eigenvector of $\bA$ corresponding to the eigenvalue $\lambda_i$ (the same eigenvalue as for $\bx_{i1}$). Since $\bY_1\bn$ is a combination of $\by_2, \by_3, \ldots, \by_n$, which are orthogonal to $\bx_{i1}$, the vector $\bY_1\bn$ is orthogonal to $\bx_{i1}$ and can be normalized to be orthonormal to it.
To conclude, if we have one eigenvector $\bx_{i1}$ corresponding to $\lambda_i$ whose multiplicity is $k\geq 2$, we could construct the second eigenvector by choosing one vector from the null space of $(\bB-\lambda_i\bI_{n-1})$ constructed above. Suppose now, we have constructed the second eigenvector $\bx_{i2}$ which is orthonormal to $\bx_{i1}$.
For such eigenvectors $\bx_{i1}, \bx_{i2}$, we can always find additional $n-2$ orthonormal vectors $\by_3, \by_4, \ldots, \by_n$ so that $\{\bx_{i1},\bx_{i2}, \by_3, \by_4, \ldots, \by_n\}$ forms an orthonormal basis in $\real^n$. Put the $\by_3, \by_4, \ldots, \by_n$ into matrix $\bY_2$ and $\{\bx_{i1},\bx_{i2}, \by_3, \by_4, \ldots, \by_n\}$ into matrix $\bP_2$:
$$
\bY_2=[\by_3, \by_4, \ldots, \by_n] \qquad \text{and} \qquad \bP_2=[\bx_{i1}, \bx_{i2},\bY_2].
$$
We then have
$$
\bP_2^\top\bA\bP_2 =
\begin{bmatrix}
\lambda_i & 0 &\bzero \\
0& \lambda_i &\bzero \\
\bzero & \bzero & \bY_2^\top \bA\bY_2
\end{bmatrix}
=
\begin{bmatrix}
\lambda_i & 0 &\bzero \\
0& \lambda_i &\bzero \\
\bzero & \bzero & \bC
\end{bmatrix},
$$
where $\bC=\bY_2^\top \bA\bY_2$ such that $\det(\bP_2^\top\bA\bP_2 - \lambda\bI_n) = (\lambda_i-\lambda)^2 \det(\bC - \lambda\bI_{n-2})$. If the multiplicity of $\lambda_i$ is $k\geq 3$, then $\det(\bC - \lambda_i\bI_{n-2})=0$ and the null space of $\bC - \lambda_i\bI_{n-2}$ is nontrivial, so we can find a nonzero vector $\bn$ in this null space with $\bC\bn = \lambda_i \bn$. Now we can construct a vector $\begin{bmatrix}
z_1 \\
z_2\\
\bn
\end{bmatrix}\in \real^n $, where $z_1, z_2$ are any scalar values, such that
$$
\bA\bP_2\begin{bmatrix}
z_1 \\
z_2\\
\bn
\end{bmatrix} = \bP_2
\begin{bmatrix}
\lambda_i & 0 &\bzero \\
0& \lambda_i &\bzero \\
\bzero & \bzero & \bC
\end{bmatrix}
\begin{bmatrix}
z_1 \\
z_2\\
\bn
\end{bmatrix}.
$$
Similarly, from the left side of the above equation, we will get $\lambda_iz_1\bx_{i1} +\lambda_iz_2\bx_{i2}+\bA\bY_2\bn$. From the right side of the above equation, we will get $\lambda_iz_1\bx_{i1} +\lambda_i z_2\bx_{i2}+\lambda_i\bY_2\bn$. As a result,
$$
\bA\bY_2\bn = \lambda_i\bY_2\bn,
$$
where $\bY_2\bn$ is an eigenvector of $\bA$ orthogonal to $\bx_{i1}$ and $\bx_{i2}$, and it can be normalized to be orthonormal to the first two.
The process can go on, and finally, we will find $k$ orthonormal eigenvectors corresponding to $\lambda_i$.
In fact, the dimension of the null space of $\bP_1^\top\bA\bP_1 -\lambda_i\bI_n$ is equal to the multiplicity $k$. It also follows that if the multiplicity of $\lambda_i$ is $k$, there cannot be more than $k$ orthogonal eigenvectors corresponding to $\lambda_i$; otherwise, we could find more than $n$ orthogonal eigenvectors in $\real^n$, which is a contradiction.
\end{proof}
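As a numerical illustration of the lemma (a NumPy sketch; the construction of the test matrix via a QR factorization is an assumption for the demonstration), a symmetric matrix with a duplicate eigenvalue still admits a full set of orthonormal eigenvectors:

```python
import numpy as np

# Build a symmetric matrix with eigenvalue 1 of multiplicity 2 and eigenvalue 4.
M = np.arange(1., 10.).reshape(3, 3) + np.eye(3)   # any nonsingular matrix
Q0, _ = np.linalg.qr(M)                            # -> an orthogonal matrix
A = Q0 @ np.diag([1., 1., 4.]) @ Q0.T
lam, Q = np.linalg.eigh(A)
print(np.allclose(np.sort(lam), [1., 1., 4.]))     # duplicate eigenvalue -> True
print(np.allclose(Q.T @ Q, np.eye(3)))             # 3 orthonormal eigenvectors -> True
```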
The existence of the spectral decomposition follows readily from the lemmas above. Alternatively, we can use the Schur decomposition to prove its existence.
\begin{proof}[\textbf{of Theorem~\ref{theorem:spectral_theorem}: Existence of Spectral Decomposition}]
From the Schur decomposition in Theorem~\ref{theorem:schur-decomposition} (p.~\pageref{theorem:schur-decomposition}), the symmetry $\bA=\bA^\top$ leads to $\bQ\bU\bQ^\top = \bQ\bU^\top\bQ^\top$, i.e., $\bU=\bU^\top$. Since $\bU$ is upper triangular, it must be diagonal, and this diagonal matrix actually contains the eigenvalues of $\bA$. All the columns of $\bQ$ are then eigenvectors of $\bA$. We conclude that all symmetric matrices are diagonalizable, even with repeated eigenvalues.
\end{proof}
For any matrix product, the rank of the result is no larger than the rank of either factor. However, the symmetric matrix $\bA^\top \bA$ is rather special in that the rank of $\bA^\top \bA$ is equal to that of $\bA$, which will be used in the proof of the singular value decomposition in the next section.
\begin{lemma}[Rank of $\bA\bB$]\label{lemma:rankAB}
For any matrices $\bA\in \real^{m\times n}$ and $\bB\in \real^{n\times k}$, the product $\bA\bB\in \real^{m\times k}$ has $rank$($\bA\bB$)$\leq$min($rank$($\bA$), $rank$($\bB$)).
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:rankAB}]
For matrix multiplication $\bA\bB$, we have
$\bullet$ All rows of $\bA\bB$ are combinations of the rows of $\bB$, so the row space of $\bA\bB$ is a subspace of the row space of $\bB$. Thus $rank$($\bA\bB$)$\leq$$rank$($\bB$).
$\bullet$ All columns of $\bA\bB$ are combinations of the columns of $\bA$, so the column space of $\bA\bB$ is a subspace of the column space of $\bA$. Thus $rank$($\bA\bB$)$\leq$$rank$($\bA$).
Therefore, $rank$($\bA\bB$)$\leq$min($rank$($\bA$), $rank$($\bB$)).
\end{proof}
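A quick numerical check of the rank inequality (a NumPy sketch with randomly generated example matrices, assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))                  # generically rank 3
B = np.outer(rng.standard_normal(3),
             rng.standard_normal(5))             # rank-1 matrix
rank = np.linalg.matrix_rank
print(rank(A @ B), rank(A), rank(B))             # rank(AB) <= min(rank(A), rank(B))
```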
\begin{tcolorbox}[title={Symmetric Matrix Property 4 of 4}]
\begin{lemma}[Rank of Symmetric Matrices]\label{lemma:rank-of-symmetric}
If $\bA$ is an $n\times n$ real symmetric matrix, then rank($\bA$) =
the total number of nonzero eigenvalues of $\bA$.
In particular, $\bA$ has full rank if and only if $\bA$ is nonsingular. Further, $\cspace(\bA)$ is the linear space spanned by the eigenvectors of $\bA$ that correspond to nonzero eigenvalues.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:rank-of-symmetric}]
For any symmetric matrix $\bA$, we can write $\bA$ in spectral form as $\bA = \bQ \bLambda\bQ^\top$, and also $\bLambda = \bQ^\top\bA\bQ$. By Lemma~\ref{lemma:rankAB}, the rank of a product satisfies $rank$($\bA\bB$)$\leq$min($rank$($\bA$), $rank$($\bB$)).
$\bullet$ From $\bA = \bQ \bLambda\bQ^\top$, we have $rank(\bA) \leq rank(\bQ \bLambda) \leq rank(\bLambda)$;
$\bullet$ From $\bLambda = \bQ^\top\bA\bQ$, we have $rank(\bLambda) \leq rank(\bQ^\top\bA) \leq rank(\bA)$,
The two inequalities together force $rank(\bA) = rank(\bLambda)$, which is the total number of nonzero eigenvalues.
Since $\bA$ is nonsingular if and only if all of its eigenvalues are nonzero, $\bA$ has full rank if and only if $\bA$ is nonsingular.
\end{proof}
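The lemma can be verified numerically; this NumPy sketch (the rank-2 matrix built from two outer products is an assumed example) counts the nonzero eigenvalues of a singular symmetric matrix and compares with the rank:

```python
import numpy as np

v = np.array([1., 2., 0.])
w = np.array([0., 1., 1.])
A = np.outer(v, v) + np.outer(w, w)          # symmetric with rank 2
lam = np.linalg.eigvalsh(A)
n_nonzero = int(np.sum(np.abs(lam) > 1e-10))
print(n_nonzero, np.linalg.matrix_rank(A))   # 2 2
```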
Similar to the eigenvalue decomposition, we can compute the $m$-th power of a matrix $\bA$ more efficiently via the spectral decomposition.
\begin{remark}[$m$-th Power]\label{remark:power-spectral}
The $m$-th power of $\bA$ is $\bA^m = \bQ\bLambda^m\bQ^\top$ if the matrix $\bA$ can be factored as $\bA=\bQ\bLambda\bQ^\top$.
\end{remark}
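The remark translates directly into a short computation; in this NumPy sketch (with an assumed example matrix), the spectral route agrees with repeated matrix multiplication:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])
lam, Q = np.linalg.eigh(A)
m = 5
A_m = Q @ np.diag(lam ** m) @ Q.T     # A^m via the spectral decomposition
print(np.allclose(A_m, np.linalg.matrix_power(A, m)))   # True
```

Powering the diagonal $\bLambda$ costs only $n$ scalar powers, versus $m-1$ full matrix products for the naive approach.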
\section{Uniqueness of Spectral Decomposition}\label{section:uniqueness-spectral-decomposition}
Clearly, the spectral decomposition is not unique, essentially because of the multiplicity of eigenvalues. One can imagine that eigenvalues $\lambda_i$ and $\lambda_j$ are the same for some $1\leq i<j\leq n$; interchanging the corresponding eigenvectors in $\bQ$ yields an equally valid factorization of $\bA$, yet the two decompositions are different.
But the \textit{eigenspaces} (i.e., the null space $\nspace(\bA - \lambda_i\bI)$ for eigenvalue $\lambda_i$) corresponding to each eigenvalue are fixed. So there is a unique decomposition in terms
of eigenspaces and then any orthonormal basis of these eigenspaces can be chosen.
\section{Other Forms, Connecting Eigenvalue Decomposition*}\label{section:otherform-spectral}
In this section, we discuss other forms of the spectral decomposition under different conditions.
\begin{definition}[Characteristic Polynomial\index{Characteristic polynomial}]
For any square matrix $\bA \in \real^{n\times n}$, the \textbf{characteristic polynomial} $\det(\lambda\bI - \bA)$ is given by
$$
\begin{aligned}
\det(\lambda\bI-\bA ) &=\lambda^n + \gamma_{n-1} \lambda^{n-1} + \ldots + \gamma_1 \lambda + \gamma_0\\
&=(\lambda-\lambda_1)^{k_1} (\lambda-\lambda_2)^{k_2} \ldots (\lambda-\lambda_m)^{k_m},
\end{aligned}
$$
where $\lambda_1, \lambda_2, \ldots, \lambda_m$ are the distinct roots of $\det( \lambda\bI-\bA)$ and also the eigenvalues of $\bA$, and $k_1+k_2+\ldots +k_m=n$, i.e., $\det(\lambda\bI-\bA)$ is a polynomial of degree $n$ for any matrix $\bA\in \real^{n\times n}$ (see proof of Lemma~\ref{lemma:eigen-multiplicity}, p.~\pageref{lemma:eigen-multiplicity}).
\end{definition}
An important notion of multiplicity arising from the characteristic polynomial of a matrix is defined as follows:
\begin{definition}[Algebraic Multiplicity and Geometric Multiplicity\index{Algebraic multiplicity}\index{Geometric multiplicity}]
Given the characteristic polynomial of matrix $\bA\in \real^{n\times n}$:
$$
\begin{aligned}
\det(\lambda\bI-\bA ) =(\lambda-\lambda_1)^{k_1} (\lambda-\lambda_2)^{k_2} \ldots (\lambda-\lambda_m)^{k_m}.
\end{aligned}
$$
The integer $k_i$ is called the \textbf{algebraic multiplicity} of the eigenvalue $\lambda_i$, i.e., the algebraic multiplicity of eigenvalue $\lambda_i$ is equal to the multiplicity of the corresponding root of the characteristic polynomial.
The \textbf{eigenspace associated to eigenvalue $\lambda_i$} is defined by the null space of $(\bA - \lambda_i\bI)$, i.e., $\nspace(\bA - \lambda_i\bI)$.
And the dimension of the eigenspace associated to $\lambda_i$, $\nspace(\bA - \lambda_i\bI)$, is called the \textbf{geometric multiplicity} of $\lambda_i$.
In short, we denote the algebraic multiplicity of $\lambda_i$ by $alg(\lambda_i)$, and its geometric multiplicity by $geo(\lambda_i)$.
\end{definition}
\begin{remark}[Geometric Multiplicity]\label{remark:geometric-mul-meaning}
Note that for a matrix $\bA$ and the eigenspace $\nspace(\bA-\lambda_i\bI)$, the dimension of the eigenspace is also the maximal number of linearly independent eigenvectors of $\bA$ associated with $\lambda_i$, which form a basis for the eigenspace. This implies that
while there are an infinite number of eigenvectors associated with each eigenvalue $\lambda_i$, the fact that they form a subspace (provided the zero vector is added) means that they can be described by a finite number of vectors.
\end{remark}
By definition, the sum of the algebraic multiplicities is equal to $n$, but the sum of the geometric multiplicities can be strictly smaller.
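The gap between the two multiplicities is visible already for a $2\times 2$ Jordan block; this NumPy sketch (an assumed textbook example) computes the geometric multiplicity as the dimension of the eigenspace:

```python
import numpy as np

# Eigenvalue 2 is a double root of det(lambda I - A) = (lambda - 2)^2,
# so its algebraic multiplicity is 2.
A = np.array([[2., 1.],
              [0., 2.]])
# Geometric multiplicity: dim N(A - 2I) = n - rank(A - 2I).
geo = 2 - np.linalg.matrix_rank(A - 2 * np.eye(2))
print(geo)   # 1: strictly smaller than the algebraic multiplicity 2
```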
\begin{corollary}[Multiplicity in Similar Matrices\index{Similar matrices}]\label{corollary:multipli-similar-matrix}
Similar matrices have the same algebraic and geometric multiplicities.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:multipli-similar-matrix}]
In Lemma~\ref{lemma:eigenvalue-similar-matrices} (p.~\pageref{lemma:eigenvalue-similar-matrices}), we proved that the eigenvalues of similar matrices are the same, therefore, the algebraic multiplicities of similar matrices are the same as well.
Suppose $\bA$ and $\bB= \bP\bA\bP^{-1}$ are similar matrices, where $\bP$ is nonsingular, and suppose the geometric multiplicity of an eigenvalue $\lambda$ of $\bA$ is $k$. Then there exists a set of linearly independent vectors $\bv_1, \bv_2, \ldots, \bv_k$ that form a basis for the eigenspace $\nspace(\bA-\lambda\bI)$, i.e., $\bA\bv_i = \lambda \bv_i$ for all $i\in \{1, 2, \ldots, k\}$. Then the vectors $\bw_i = \bP\bv_i$ are eigenvectors of $\bB$ associated with eigenvalue $\lambda$. Further, the $\bw_i$'s are linearly independent since $\bP$ is nonsingular. Thus, the dimension of the eigenspace $\nspace(\bB-\lambda\bI)$ is at least $k$, that is, $dim(\nspace(\bA-\lambda\bI)) \leq dim(\nspace(\bB-\lambda\bI)) $.
Similarly, if a set of linearly independent vectors $\bw_1, \bw_2, \ldots, \bw_k$ forms a basis for the eigenspace $\nspace(\bB-\lambda\bI)$, then the vectors $\bv_i = \bP^{-1}\bw_i$ for all $i \in \{1, 2, \ldots, k\}$ are eigenvectors of $\bA$ associated to $\lambda$. This results in $dim(\nspace(\bB-\lambda\bI)) \leq dim(\nspace(\bA-\lambda\bI)) $.
Therefore, by ``sandwiching", we get $dim(\nspace(\bA-\lambda\bI)) = dim(\nspace(\bB-\lambda\bI)) $, which is the equality of the geometric multiplicities, and the claim follows.
\end{proof}
\begin{lemma}[Bounded Geometric Multiplicity]\label{lemma:bounded-geometri}
For any matrix $\bA\in \real^{n\times n}$ and any eigenvalue $\lambda_i$, the geometric multiplicity is bounded by the algebraic multiplicity:
$$
geo(\lambda_i) \leq alg(\lambda_i).
$$
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:bounded-geometri}]
The idea is to find a matrix $\bB$ similar to $\bA$ whose characteristic polynomial exhibits the desired factor explicitly.
Suppose $\bP_1 = [\bv_1, \bv_2, \ldots, \bv_k]$ contains linearly independent eigenvectors of $\bA$ associated with $\lambda_i$. That is, the $k$ vectors form a basis for the eigenspace $\nspace(\bA-\lambda_i\bI)$, so the geometric multiplicity of $\lambda_i$ is $k$. We can extend the set to $n$ linearly independent vectors such that
$$
\bP = [\bP_1, \bP_2] = [\bv_1, \bv_2, \ldots, \bv_k, \bv_{k+1}, \ldots, \bv_n],
$$
where $\bP$ is nonsingular. Then $\bA\bP = [\lambda_i\bP_1, \bA\bP_2]$.
Construct a matrix $\bB = \begin{bmatrix}
\lambda_i \bI_k & \bC \\
\bzero & \bD
\end{bmatrix}$ where $\bA\bP_2 = \bP_1\bC + \bP_2\bD$; then $\bP^{-1}\bA\bP = \bB$, so that $\bA$ and $\bB$ are similar matrices. We can always find such $\bC$ and $\bD$, since the $\bv_i$'s are linearly independent and span the whole space $\real^n$, so that every column of $\bA\bP_2$ lies in the column space of $\bP=[\bP_1,\bP_2]$.
Therefore,
$$
\begin{aligned}
\det(\bA-\lambda\bI) &= \det(\bP^{-1})\det(\bA-\lambda\bI)\det(\bP) \qquad &(\text{$\det(\bP^{-1}) = 1/\det(\bP)$})\\
&= \det(\bP^{-1}(\bA-\lambda\bI)\bP) \qquad &(\det(\bA)\det(\bB) = \det(\bA\bB))\\
&= \det(\bB-\lambda\bI) \\
&= \det(\begin{bmatrix}
(\lambda_i-\lambda) \bI_k & \bC \\
\bzero & \bD - \lambda \bI
\end{bmatrix})\\
&= (\lambda_i-\lambda)^k \det(\bD-\lambda\bI),
\end{aligned}
$$
where the last equality follows from the fact that the determinant of a block upper-triangular matrix is the product of the determinants of its diagonal blocks: if $\bM=\begin{bmatrix}
\bM_{11} & \bM_{12} \\
\bzero & \bM_{22}
\end{bmatrix}$, then $\det(\bM) = \det(\bM_{11})\det(\bM_{22})$.
Hence the characteristic polynomial of $\bA$ contains the factor $(\lambda_i-\lambda)^k$, so the algebraic multiplicity of $\lambda_i$ is at least the geometric multiplicity $k$. This implies
$$
geo(\lambda_i) \leq alg(\lambda_i).
$$
This completes the proof.
\end{proof}
Following from the proof of Lemma~\ref{lemma:eigen-multiplicity}, we notice that the algebraic multiplicity and the geometric multiplicity coincide for symmetric matrices. We call such matrices simple matrices.
\begin{definition}[Simple Matrix]
When the algebraic multiplicity and the geometric multiplicity coincide for every eigenvalue of a matrix, we call it a simple matrix.
\end{definition}
\begin{definition}[Diagonalizable]
A matrix $\bA$ is diagonalizable if there exists a nonsingular matrix $\bP$ and a diagonal matrix $\bD$ such that $\bA = \bP\bD\bP^{-1}$.
\end{definition}
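Numerically, a diagonalization $\bA=\bP\bD\bP^{-1}$ can be sketched with NumPy (the matrix here is an illustrative choice with distinct eigenvalues, hence diagonalizable):

```python
import numpy as np

# Diagonalize a matrix with distinct eigenvalues (5 and 2): A = P D P^{-1}.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)

# Reconstruct A from its eigenvalue decomposition.
A_rebuilt = P @ D @ np.linalg.inv(P)
assert np.allclose(A, A_rebuilt)
```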
Matrices admitting the eigenvalue decomposition in Theorem~\ref{theorem:eigenvalue-decomposition} or the spectral decomposition in Theorem~\ref{theorem:spectral_theorem} are diagonalizable.
\begin{lemma}[Simple Matrices are Diagonalizable]\label{lemma:simple-diagonalizable}
A matrix is a simple matrix if and only if it is diagonalizable.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:simple-diagonalizable}]
We will show by forward implication and backward implication separately as follows.
\paragraph{Forward implication} Suppose that $\bA\in \real^{n\times n}$ is a simple matrix, so that the algebraic and geometric multiplicities of each eigenvalue are equal. For a specific eigenvalue $\lambda_i$, let $\{\bv_1^i, \bv_2^i, \ldots, \bv_{k_i}^i\}$ be a basis for the eigenspace $\nspace(\bA - \lambda_i\bI)$, that is, a set of linearly independent eigenvectors of $\bA$ associated to $\lambda_i$, where ${k_i}$ is the common algebraic and geometric multiplicity of $\lambda_i$: $alg(\lambda_i)=geo(\lambda_i)=k_i$. Suppose there are $m$ distinct eigenvalues; since $k_1+k_2+\ldots +k_m = n$, the union of these bases consists of $n$ vectors. Suppose there is a set of scalars $x_j^i$ such that
\begin{equation}\label{equation:proof-simple-diagonalize}
\bz = \sum_{j=1}^{k_1} x_j^1 \bv_j^1+ \sum_{j=1}^{k_2} x_j^2 \bv_j^2 + \ldots + \sum_{j=1}^{k_m} x_j^m \bv_j^m = \bzero.
\end{equation}
Let $\bw^i = \sum_{j=1}^{k_i} x_j^i \bv_j^i$. Then $\bw^i$ is either an eigenvector associated to $\lambda_i$ or the zero vector. That is, $\bz = \sum_{i=1}^{m} \bw^i$ is a sum of vectors each of which is either the zero vector or an eigenvector associated with a different eigenvalue of $\bA$. Since eigenvectors associated with different eigenvalues are linearly independent, we must have $\bw^i = \bzero$ for all $i\in \{1, 2, \ldots, m\}$. That is,
$$
\bw^i = \sum_{j=1}^{k_i} x_j^i \bv_j^i = \bzero, \qquad \text{for all $i\in \{1, 2, \ldots, m\}$}.
$$
Since we assume the eigenvectors $\bv_j^i$'s associated to each $\lambda_i$ are linearly independent, we must have $x_j^i=0$ for all $i \in \{1,2,\ldots, m\}$ and $j\in \{1,2,\ldots,k_i\}$. Thus, the $n$ vectors are linearly independent:
$$
\{\bv_1^1, \bv_2^1, \ldots, \bv_{k_1}^1\},\{\bv_1^2, \bv_2^2, \ldots, \bv_{k_2}^2\},\ldots,\{\bv_1^m, \bv_2^m, \ldots, \bv_{k_m}^m\}.
$$
By the eigenvalue decomposition in Theorem~\ref{theorem:eigenvalue-decomposition}, matrix $\bA$ is diagonalizable.
\paragraph{Backward implication} Suppose $\bA$ is diagonalizable. That is, there exists a nonsingular matrix $\bP$ and a diagonal matrix $\bD$ such that $\bA =\bP\bD\bP^{-1}$. Then $\bA$ and $\bD$ are similar matrices, so they have the same eigenvalues (Lemma~\ref{lemma:eigenvalue-similar-matrices}, p.~\pageref{lemma:eigenvalue-similar-matrices}) and the same algebraic and geometric multiplicities (Corollary~\ref{corollary:multipli-similar-matrix}, p.~\pageref{corollary:multipli-similar-matrix}). It can be easily verified that a diagonal matrix has equal algebraic and geometric multiplicities, so that $\bA$ is a simple matrix.
\end{proof}
\begin{remark}[Equivalence on Diagonalization]
By Theorem~\ref{theorem:independent-eigenvector-theorem}, eigenvectors corresponding to different eigenvalues are linearly independent, and by Remark~\ref{remark:geometric-mul-meaning}, the geometric multiplicity is the dimension of the eigenspace. We realize that if the geometric multiplicity equals the algebraic multiplicity for every eigenvalue of $\bA\in \real^{n\times n}$, the eigenspaces together span the whole space $\real^n$. So the above lemma is equivalent to the claim that if the eigenspaces span the whole space $\real^n$, then $\bA$ is diagonalizable.
\end{remark}
\begin{corollary}
A square matrix $\bA\in\real^{n\times n}$ with $n$ linearly independent eigenvectors is a simple matrix. In particular, if $\bA$ is symmetric, it is a simple matrix.
\end{corollary}
From the eigenvalue decomposition in Theorem~\ref{theorem:eigenvalue-decomposition} (p.~\pageref{theorem:eigenvalue-decomposition}) and the spectral decomposition in Theorem~\ref{theorem:spectral_theorem} (p.~\pageref{theorem:spectral_theorem}), the proof of the corollary is immediate.
Now we are ready to show the second form of the spectral decomposition.
\begin{theoremHigh}[Spectral Decomposition: The Second Form]\label{theorem:spectral_theorem_secondForm}
A \textbf{simple matrix} $\bA \in \real^{n\times n}$ can be factored as a sum of a set of idempotent matrices
$$
\bA = \sum_{i=1}^{n} \lambda_i \bA_i,
$$
where the $\lambda_i$'s for $i\in \{1,2,\ldots, n\}$ are the eigenvalues of $\bA$ (duplicates possible), also known as the \textbf{spectral values} of $\bA$. Specifically, we have the following properties:
1. Idempotent: $\bA_i^2 = \bA_i$ for all $i\in \{1,2,\ldots, n\}$;
2. Orthogonal: $\bA_i\bA_j = \bzero$ for all $i \neq j$;
3. Additivity: $\sum_{i=1}^{n} \bA_i = \bI_n$;
4. Rank-Additivity: $rank(\bA_1) + rank(\bA_2) + \ldots + rank(\bA_n) = n$.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:spectral_theorem_secondForm}]
Since $\bA$ is a simple matrix, from Lemma~\ref{lemma:simple-diagonalizable}, there exists a nonsingular matrix $\bP$ and a diagonal matrix $\bLambda$ such that $\bA=\bP\bLambda\bP^{-1}$ where $\bLambda=\diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$, and $\lambda_i$'s are eigenvalues of $\bA$ and columns of $\bP$ are eigenvectors of $\bA$. Suppose
$$
\bP = \begin{bmatrix}
\bv_1 & \bv_2&\ldots & \bv_n
\end{bmatrix}
\qquad
\text{and }
\qquad
\bP^{-1} =
\begin{bmatrix}
\bw_1^\top \\
\bw_2^\top \\
\vdots \\
\bw_n^\top
\end{bmatrix}
$$
are the column and row partitions of $\bP$ and $\bP^{-1}$ respectively. Then, we have
$$
\bA= \bP\bLambda\bP^{-1} =
\begin{bmatrix}
\bv_1 & \bv_2&\ldots & \bv_n
\end{bmatrix}
\bLambda
\begin{bmatrix}
\bw_1^\top \\
\bw_2^\top \\
\vdots \\
\bw_n^\top
\end{bmatrix}=
\sum_{i=1}^{n}\lambda_i \bv_i\bw_i^\top.
$$
Letting $\bA_i = \bv_i\bw_i^\top$, we have $\bA = \sum_{i=1}^{n} \lambda_i \bA_i$.
We realize that $\bP^{-1}\bP = \bI$ such that
$$
\left\{
\begin{aligned}
&\bw_i^\top\bv_j = 1 ,& \mathrm{\,\,if\,\,} i = j. \\
&\bw_i^\top\bv_j = 0 ,& \mathrm{\,\,if\,\,} i \neq j.
\end{aligned}
\right.
$$
Therefore,
$$
\bA_i\bA_j =\bv_i\bw_i^\top\bv_j\bw_j^\top = \left\{
\begin{aligned}
&\bv_i\bw_i^\top = \bA_i ,& \mathrm{\,\,if\,\,} i = j. \\
& \bzero ,& \mathrm{\,\,if\,\,} i \neq j.
\end{aligned}
\right.
$$
This implies the idempotency and orthogonality of the $\bA_i$'s. We also notice that $\sum_{i=1}^{n}\bA_i = \bP\bP^{-1}=\bI$, which is the additivity of the $\bA_i$'s. The rank-additivity of the $\bA_i$'s is trivial since $rank(\bA_i)=1$ for all $i\in \{1,2,\ldots, n\}$.
\end{proof}
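The four properties established in the proof above can be checked numerically. The following NumPy sketch (with an illustrative matrix, not taken from the text) builds the rank-one matrices $\bA_i = \bv_i\bw_i^\top$ from $\bP$ and $\bP^{-1}$ and verifies the decomposition, idempotency, orthogonality, and additivity:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, P = np.linalg.eig(A)
W = np.linalg.inv(P)            # row i of W is w_i^T

# rank-one pieces A_i = v_i w_i^T, so that A = sum_i lam_i * A_i
parts = [np.outer(P[:, i], W[i, :]) for i in range(2)]

assert np.allclose(A, sum(l * Ai for l, Ai in zip(lam, parts)))
assert np.allclose(parts[0] @ parts[0], parts[0])          # idempotent
assert np.allclose(parts[0] @ parts[1], np.zeros((2, 2)))  # orthogonal
assert np.allclose(parts[0] + parts[1], np.eye(2))         # additivity
```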
The decomposition is highly related to Cochran's theorem in Appendix~\ref{appendix:cochran-theorem} and its application in the distribution theory of linear models \citep{lu2021rigorous}.
\begin{theoremHigh}[Spectral Decomposition: The Third Form]\label{Corollary:spectral_theorem_3Form}
A \textbf{simple matrix} $\bA \in \real^{n\times n}$ \textcolor{blue}{with $k$ distinct eigenvalues} can be factored as a sum of a set of idempotent matrices
$$
\bA = \sum_{i=1}^{\textcolor{blue}{k}} \lambda_i \bA_i,
$$
where the $\lambda_i$'s for $i\in \{1,2,\ldots, \textcolor{blue}{k}\}$ are the distinct eigenvalues of $\bA$, also known as the \textbf{spectral values} of $\bA$. Specifically, we have the following properties:
1. Idempotent: $\bA_i^2 = \bA_i$ for all $i\in \{1,2,\ldots, \textcolor{blue}{k}\}$;
2. Orthogonal: $\bA_i\bA_j = \bzero$ for all $i \neq j$;
3. Additivity: $\sum_{i=1}^{\textcolor{blue}{k}} \bA_i = \bI_n$;
4. Rank-Additivity: $rank(\bA_1) + rank(\bA_2) + \ldots + rank(\bA_{\textcolor{blue}{k}}) = n$.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{Corollary:spectral_theorem_3Form}]
From Theorem~\ref{theorem:spectral_theorem_secondForm}, we can decompose $\bA$ as $\bA =\sum_{j=1}^{n} \beta_j \bB_j$. Without loss of generality,
the eigenvalues $\beta_j$'s are ordered such that $\beta_1 \leq \beta_2 \leq \ldots \leq \beta_n$, where duplicates are possible. Let the $\lambda_i$'s be the distinct eigenvalues.
Suppose the multiplicity of $\lambda_i$ is $m_i$, and denote the $\bB_j$'s associated to $\lambda_i$ by $\{\bB_{1}^i, \bB_{2}^i, \ldots, \bB_{m_i}^i\}$. Let $\bA_i = \sum_{j=1}^{m_i} \bB_{j}^i$ be the sum of the $\bB_j$'s associated with $\lambda_i$. Apparently $\bA = \sum_{i=1}^{k} \lambda_i \bA_i$.
\paragraph{Idempotency} $\bA_i^2 = (\bB_1^i + \bB_2^i+\ldots +\bB_{m_i}^i)(\bB_1^i + \bB_2^i+\ldots +\bB_{m_i}^i)= \bB_1^i + \bB_2^i+\ldots +\bB_{m_i}^i = \bA_i$ from the idempotency and orthogonality of the $\bB_j^i$'s.
\paragraph{Orthogonality} $\bA_i\bA_j = (\bB_1^i + \bB_2^i+\ldots +\bB_{m_i}^i)(\bB_1^j + \bB_2^j+\ldots +\bB_{m_j}^j)=\bzero$ for $i\neq j$ from the orthogonality of the $\bB_j^i$'s.
\paragraph{Additivity} It is trivial that $\sum_{i=1}^{k} \bA_i = \bI_n$.
\paragraph{Rank-Additivity} $rank(\bA_i ) = rank(\sum_{j=1}^{m_i} \bB_{j}^i) = m_i$ such that $rank(\bA_1) + rank(\bA_2) + \ldots + rank(\bA_{k}) = m_1+m_2+\ldots+m_k=n$.
\end{proof}
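The grouping of rank-one pieces by distinct eigenvalue in the proof above can also be verified numerically. The sketch below (NumPy, with an illustrative symmetric matrix having a repeated eigenvalue) groups the rank-one projectors by eigenvalue and checks all four properties:

```python
import numpy as np

# A symmetric matrix with a repeated eigenvalue: A = I + v v^T
v = np.array([1.0, 1.0, 1.0])
A = np.eye(3) + np.outer(v, v)      # eigenvalues: 1, 1, and 4

lam, Q = np.linalg.eigh(A)
# group rank-one projectors q_i q_i^T by (numerically) distinct eigenvalue
A1 = sum(np.outer(Q[:, i], Q[:, i]) for i in range(3) if np.isclose(lam[i], 1.0))
A2 = sum(np.outer(Q[:, i], Q[:, i]) for i in range(3) if np.isclose(lam[i], 4.0))

assert np.allclose(A, 1.0 * A1 + 4.0 * A2)                 # A = sum lam_i A_i
assert np.allclose(A1 @ A1, A1) and np.allclose(A2 @ A2, A2)   # idempotent
assert np.allclose(A1 @ A2, np.zeros((3, 3)))              # orthogonal
assert np.allclose(A1 + A2, np.eye(3))                     # additivity
assert np.linalg.matrix_rank(A1) + np.linalg.matrix_rank(A2) == 3
```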
\begin{theoremHigh}[Spectral Decomposition: Backward Implication]\label{Corollary:spectral_theorem_4Form}
If a matrix $\bA \in \real^{n\times n}$ with $k$ distinct eigenvalues can be factored as a sum of a set of idempotent matrices
$$
\bA = \sum_{i=1}^{k} \lambda_i \bA_i,
$$
where $\lambda_i$ for all $i\in \{1,2,\ldots, k\}$ are the distinct eigenvalues of $\bA$, and
1. Idempotent: $\bA_i^2 = \bA_i$ for all $i\in \{1,2,\ldots, k\}$;
2. Orthogonal: $\bA_i\bA_j = \bzero$ for all $i \neq j$;
3. Additivity: $\sum_{i=1}^{k} \bA_i = \bI_n$;
4. Rank-Additivity: $rank(\bA_1) + rank(\bA_2) + \ldots + rank(\bA_{k}) = n$.
Then, the matrix $\bA$ is a simple matrix.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{Corollary:spectral_theorem_4Form}]
Suppose $rank(\bA_i) = r_i$ for all $i \in \{1,2,\ldots, k\}$. By ULV decomposition in Theorem~\ref{theorem:ulv-decomposition}, $\bA_i$ can be factored as
$$
\bA_i = \bU_i \begin{bmatrix}
\bL_i & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV_i,
$$
where $\bL_i \in \real^{r_i \times r_i}$, $\bU_i \in \real^{n \times n}$ and $\bV_i\in \real^{n \times n}$ are orthogonal matrices. Let
$$
\bX_i =
\bU_i \begin{bmatrix}
\bL_i \\
\bzero
\end{bmatrix}
\qquad
\text{and}
\qquad
\bV_i =
\begin{bmatrix}
\bY_i \\
\bZ_i
\end{bmatrix},
$$
where $\bX_i \in \real^{n\times r_i}$, and $\bY_i \in \real^{r_i \times n}$ consists of the first $r_i$ rows of $\bV_i$. Then, we have
$$
\bA_i = \bX_i \bY_i.
$$
This can be seen as a \textbf{reduced} ULV decomposition of $\bA_i$. Appending the $\bX_i$'s and $\bY_i$'s into $\bX$ and $\bY$,
$$
\bX = [\bX_1, \bX_2, \ldots, \bX_k],
\qquad
\bY =
\begin{bmatrix}
\bY_1\\
\bY_2\\
\vdots \\
\bY_k
\end{bmatrix},
$$
where $\bX\in \real^{n\times n}$ and $\bY\in \real^{n\times n}$ (from rank-additivity). By block matrix multiplication and the additivity of $\bA_i$'s, we have
$$
\bX\bY = \sum_{i=1}^{k} \bX_i\bY_i = \sum_{i=1}^{k} \bA_i = \bI.
$$
Therefore $\bY$ is the inverse of $\bX$, and
$$
\bY\bX =
\begin{bmatrix}
\bY_1\\
\bY_2\\
\vdots \\
\bY_k
\end{bmatrix}
[\bX_1, \bX_2, \ldots, \bX_k]
=
\begin{bmatrix}
\bY_1\bX_1 & \bY_1\bX_2 & \ldots & \bY_1\bX_k\\
\bY_2\bX_1 & \bY_2\bX_2 & \ldots & \bY_2\bX_k\\
\vdots & \vdots & \ddots & \vdots\\
\bY_k\bX_1 & \bY_k\bX_2 & \ldots & \bY_k\bX_k\\
\end{bmatrix}
=\bI,
$$
such that
$$
\bY_i\bX_j = \left\{
\begin{aligned}
&\bI_{r_i} ,& \mathrm{\,\,if\,\,} i = j; \\
& \bzero ,& \mathrm{\,\,if\,\,} i \neq j.
\end{aligned}
\right.
$$
This implies
$$
\bA_i\bX_j = \left\{
\begin{aligned}
&\bX_i ,& \mathrm{\,\,if\,\,} i = j; \\
& \bzero ,& \mathrm{\,\,if\,\,} i \neq j,
\end{aligned}
\right.
\qquad
\text{and}
\qquad
\bA \bX_i = \lambda_i\bX_i.
$$
Finally, we have
$$
\begin{aligned}
\bA\bX &= \bA[\bX_1, \bX_2, \ldots, \bX_k] = [\lambda_1\bX_1, \lambda_2\bX_2, \ldots, \lambda_k\bX_k] = \bX\bLambda,
\end{aligned}
$$
where
$$\bLambda =
\begin{bmatrix}
\lambda_1 \bI_{r_1} & \bzero & \ldots & \bzero \\
\bzero & \lambda_2 \bI_{r_2} & \ldots & \bzero \\
\vdots & \vdots & \ddots & \vdots \\
\bzero & \bzero & \ldots & \lambda_k \bI_{r_k} \\
\end{bmatrix}
$$
is a diagonal matrix. This implies $\bA$ can be diagonalized and from Lemma~\ref{lemma:simple-diagonalizable}, $\bA$ is a simple matrix.
\end{proof}
\begin{corollary}[Forward and Backward Spectral]
Combining Theorem~\ref{Corollary:spectral_theorem_3Form} and Theorem~\ref{Corollary:spectral_theorem_4Form}, we can claim that a matrix $\bA \in \real^{n\times n}$ is a simple matrix with $k$ distinct eigenvalues if and only if it can be factored as a sum of a set of idempotent matrices
$$
\bA = \sum_{i=1}^{k} \lambda_i \bA_i,
$$
where $\lambda_i$ for all $i\in \{1,2,\ldots, k\}$ are the distinct eigenvalues of $\bA$, and
1. Idempotent: $\bA_i^2 = \bA_i$ for all $i\in \{1,2,\ldots, k\}$;
2. Orthogonal: $\bA_i\bA_j = \bzero$ for all $i \neq j$;
3. Additivity: $\sum_{i=1}^{k} \bA_i = \bI_n$;
4. Rank-Additivity: $rank(\bA_1) + rank(\bA_2) + \ldots + rank(\bA_{k}) = n$.
\end{corollary}
\section{Skew-Symmetric Matrices and Their Properties*}
We have introduced the spectral decomposition for symmetric matrices. A special class of matrices closely related to symmetric matrices is the class of skew-symmetric matrices.
\begin{definition}[Skew-Symmetric Matrix\index{Skew-symmetric matrix}]
If a matrix $\bA\in \real^{n\times n}$ has the following property, then it is known as a \textbf{skew-symmetric matrix}:
$$
\bA^\top = -\bA.
$$
Note that under this definition, the diagonal values $a_{ii}$ for all $i \in \{1,2,\ldots, n\}$ satisfy $a_{ii} = -a_{ii}$, which implies that all the diagonal components are 0.
\end{definition}
We proved in Lemma~\ref{lemma:real-eigenvalues-spectral} that all the eigenvalues of symmetric matrices are real. Similarly, we can show that all the eigenvalues of skew-symmetric matrices are purely imaginary or zero.
\begin{lemma}[Imaginary Eigenvalues]\label{lemma:real-eigenvalues-spectral-skew}
The eigenvalues of any skew-symmetric matrix are all purely imaginary or zero.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:real-eigenvalues-spectral-skew}]
Suppose an eigenvalue $\lambda$ is a complex number $\lambda=a+ib$, where $a,b$ are real. Its complex conjugate is $\bar{\lambda}=a-ib$. Similarly, write the corresponding complex eigenvector as $\bx = \bc+i\bd$ with complex conjugate $\bar{\bx}=\bc-i\bd$, where $\bc, \bd$ are real vectors. We then have the following property
$$
\bA \bx = \lambda \bx\qquad \underrightarrow{\text{ leads to }}\qquad \bA \bar{\bx} = \bar{\lambda} \bar{\bx}\qquad \underrightarrow{\text{ transpose to }}\qquad \bar{\bx}^\top \bA^\top =\bar{\lambda} \bar{\bx}^\top.
$$
We take the dot product of the first equation with $\bar{\bx}$ and the last equation with $\bx$:
$$
\bar{\bx}^\top \bA \bx = \lambda \bar{\bx}^\top \bx, \qquad \text{and } \qquad \bar{\bx}^\top \bA^\top \bx = \bar{\lambda}\bar{\bx}^\top \bx.
$$
Then we have the equality $-\lambda\bar{\bx}^\top \bx = \bar{\lambda} \bar{\bx}^\top\bx$ (since $\bA^\top=-\bA$). Since $\bar{\bx}^\top\bx = (\bc-i\bd)^\top(\bc+i\bd) = \bc^\top\bc+\bd^\top\bd$ is a positive real number, it follows that $\bar{\lambda}=-\lambda$, i.e., $a-ib=-a-ib$. Therefore the real part of $\lambda$ is zero, and $\lambda$ is either purely imaginary or zero.
\end{proof}
\begin{lemma}[Odd Skew-Symmetric Determinant]\label{lemma:skew-symmetric-determinant}
For a skew-symmetric matrix $\bA\in \real^{n\times n}$, if $n$ is odd, then $\det(\bA)=0$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:skew-symmetric-determinant}]
When $n$ is odd, we have
$$
\det(\bA) = \det(\bA^\top) = \det(-\bA) = (-1)^n \det(\bA) = -\det(\bA).
$$
This implies $\det(\bA)=0$.
\end{proof}
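Both lemmas can be checked numerically on a random skew-symmetric matrix (a NumPy sketch; the seed and size are arbitrary illustrative choices):

```python
import numpy as np

# Random real skew-symmetric matrix of odd order n = 5: S = B - B^T.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
S = B - B.T
assert np.allclose(S.T, -S)

eigvals = np.linalg.eigvals(S)
assert np.allclose(eigvals.real, 0.0)          # purely imaginary or zero
assert np.isclose(np.linalg.det(S), 0.0)       # odd order forces det = 0
```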
\begin{theoremHigh}[Block-Diagonalization of Skew-Symmetric Matrices]\label{theorem:skew-block-diagonalization_theorem}
A real skew-symmetric matrix $\bA \in \real^{n\times n}$ can be factored as
\begin{equation*}
\bA = \bZ \bD \bZ^\top,
\end{equation*}
where $\bZ$ is an $n\times n$ nonsingular matrix, and $\bD$ is a block-diagonal matrix with the following form
$$
\bD =
\diag\left(\begin{bmatrix}
0 & 1 \\
-1 & 0
\end{bmatrix},
\ldots,
\begin{bmatrix}
0 & 1 \\
-1 & 0
\end{bmatrix},
0, \ldots, 0\right).
$$
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:skew-block-diagonalization_theorem}]
We will prove the result by a recursive construction.
As usual, we will denote the entry ($i,j$) of matrix $\bA$ by $\bA_{ij}$.
\paragraph{Case 1).} Suppose the first row of $\bA$ is nonzero. We notice that $\bE\bA\bE^\top$ is skew-symmetric whenever $\bA$ is skew-symmetric, for any matrix $\bE$. Hence the diagonals of both $\bA$ and $\bE\bA\bE^\top$ are zero, and the upper-left $2\times 2$ submatrix of $\bE\bA\bE^\top$ has the following form
$$
(\bE\bA\bE^\top)_{1:2,1:2}=
\begin{bmatrix}
0 & x \\
-x & 0
\end{bmatrix}.
$$
Since we suppose the first row of $\bA$ is nonzero, there exists a permutation matrix $\bP$ (Definition~\ref{definition:permutation-matrix}, p.~\pageref{definition:permutation-matrix}) that moves a nonzero value of the first row, say $a$, into the second column of $\bP\bA\bP^\top$. As discussed above, the upper-left $2\times 2$ submatrix of $\bP\bA\bP^\top$ then has the following form
$$
(\bP\bA\bP^\top)_{1:2,1:2}=
\begin{bmatrix}
0 & a \\
-a & 0
\end{bmatrix}.
$$
Construct a nonsingular matrix $\bM = \begin{bmatrix}
1/a & \bzero \\
\bzero & \bI_{n-1}
\end{bmatrix}$ such that the upper left $2\times 2$ submatrix of $\bM\bP\bA\bP^\top\bM^\top$ has the following form
$$
(\bM\bP\bA\bP^\top\bM^\top)_{1:2,1:2}=
\begin{bmatrix}
0 & 1 \\
-1 & 0
\end{bmatrix}.
$$
Now we have diagonalized the upper-left $2\times 2$ block. Suppose now $(\bM\bP\bA\bP^\top\bM^\top)$ has a nonzero value, say $b$, in the first row at entry $(1,j)$ for some $j>2$. We can construct a nonsingular matrix $\bL = \bI - b\cdot\bE_{j2}$, where $\bE_{j2}$ is the all-zero matrix except that its entry ($j,2$) is 1, such that $(\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top)$ has a 0 in place of the entry with value $b$.
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10,frametitle={A Trivial Example}]
For example, suppose $\bM\bP\bA\bP^\top\bM^\top$ is a $3\times 3$ matrix with the following value
$$
\bM\bP\bA\bP^\top\bM^\top =
\begin{bmatrix}
0 & 1 & b \\
-1 & 0 & \times \\
\times & \times & 0
\end{bmatrix},
\qquad \text{and}\qquad
\bL =\bI - b\cdot\bE_{j2}=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 &-b & 1
\end{bmatrix},
$$
where $j=3$ for this specific example. This results in
$$
\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 &-b & 1
\end{bmatrix}
\begin{bmatrix}
0 & 1 & \textcolor{blue}{b} \\
-1 & 0 & \times \\
\times & \times & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & -b \\
0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
0 & 1 & \textcolor{blue}{0} \\
-1 & 0 & \times \\
\times & \times & 0
\end{bmatrix}.
$$
\end{mdframed}
Similarly, if the second row of $\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top$ contains a nonzero value beyond the second column, say $c$ at entry $(2,j)$ for some $j>2$, we can construct a nonsingular matrix $\bK = \bI+c\cdot \bE_{j1}$, where $\bE_{j1}$ is the all-zero matrix except that its entry $(j,1)$ is 1, such that $\bK\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top\bK^\top$ has a 0 in place of the entry with value $c$.
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10,frametitle={A Trivial Example}]
For example, suppose $\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top$ is a $3\times 3$ matrix with the following value
$$
\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top =
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & c \\
\times & \times & 0
\end{bmatrix},
\qquad \text{and}\qquad
\bK =\bI + c\cdot\bE_{j1}=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
c &0 & 1
\end{bmatrix},
$$
where $j=3$ for this specific example. This results in
$$
\bK\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top\bK^\top =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
c & 0 & 1
\end{bmatrix}
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & \textcolor{blue}{c} \\
\times & \times & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 & c \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & \textcolor{blue}{0} \\
\times & \times & 0
\end{bmatrix}.
$$
Since we have shown that $\bK\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top\bK^\top$ is also skew-symmetric, it is actually
$$
\bK\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top\bK^\top=
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & \textcolor{blue}{0} \\
\textcolor{red}{0} & \textcolor{red}{0} & 0
\end{bmatrix},
$$
so that we do not need to treat the first two rows and columns further.
\end{mdframed}
Applying this process recursively to the bottom-right $(n-2)\times(n-2)$ submatrix completes the proof.
\paragraph{Case 2).} Suppose the first row of $\bA$ is zero. Use a permutation matrix to move the first row into the last row (and the first column into the last column), and apply the process in Case 1) to finish the proof.
\end{proof}
From the block-diagonalization of skew-symmetric matrices above, we can easily see that the rank of a skew-symmetric matrix is even. We can also prove that the determinant of a skew-symmetric matrix of even order is nonnegative, as follows.
\begin{lemma}[Even Skew-Symmetric Determinant]\label{lemma:skew-symmetric-determinant-even}
For a skew-symmetric matrix $\bA\in \real^{n\times n}$, if $n$ is even, then $\det(\bA)\geq 0$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:skew-symmetric-determinant-even}]
By Theorem~\ref{theorem:skew-block-diagonalization_theorem}, we can block-diagonalize $\bA = \bZ\bD\bZ^\top$. Since $\det(\bD)$ is either 0 or 1, we have
$$
\det(\bA) = \det(\bZ\bD\bZ^\top) = \det(\bZ)^2 \det(\bD) \geq 0.
$$
This completes the proof.
\end{proof}
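A quick numerical check of the even-order case (NumPy sketch; the random matrix is an arbitrary illustrative choice):

```python
import numpy as np

# Random real skew-symmetric matrix of even order n = 4.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
S = B - B.T

# even order: det(S) = det(Z)^2 det(D) >= 0
d = np.linalg.det(S)
assert d >= 0.0
```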
\section{Applications}
\subsection{Application: Eigenvalue of Projection Matrix}
In Section~\ref{section:application-ls-qr} (p.~\pageref{section:application-ls-qr}), we introduced how the QR decomposition can be applied to solve the least squares problem, where we consider the overdetermined system $\bA\bx = \bb$, with $\bA\in \real^{m\times n}$ being the data matrix and $\bb\in \real^m$ with $m>n$ being the observation vector. Normally $\bA$ will have full column rank, since real-world data columns are unlikely to be linearly dependent. The least squares solution is given by $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$, minimizing $||\bA\bx-\bb||^2$, where $\bA^\top\bA$ is invertible since $\bA$ has full column rank and $rank(\bA^\top\bA)=rank(\bA)$. The recovered observation vector is then $\hat{\bb} = \bA\bx_{LS} = \bA(\bA^\top\bA)^{-1}\bA^\top\bb$. The vector $\bb$ may not be in the column space of $\bA$, but the recovered $\hat{\bb}$ is in this column space. We then define the matrix $\bH=\bA(\bA^\top\bA)^{-1}\bA^\top$ to be a projection matrix,\footnote{A detailed analysis of orthogonal projection is provided in Appendix~\ref{section:by-geometry-hat-matrix} (p.~\pageref{section:by-geometry-hat-matrix}).} i.e., it projects $\bb$ onto the column space of $\bA$. It is also known as the hat matrix, since it puts a hat on $\bb$. It can be easily verified that the projection matrix is symmetric and idempotent (i.e., $\bH^2=\bH$). \index{Projection matrix}
\begin{remark}[Column Space of Projection Matrix]
We notice that the hat matrix $\bH = \bA(\bA^\top\bA)^{-1}\bA^\top$ projects any vector in $\real^m$ onto the column space of $\bA$. That is, $\bH\by \in \cspace(\bA)$. Notice again that $\bH\by$ is nothing but a linear combination of the columns of $\bH$; thus $\cspace(\bH) = \cspace(\bA)$.
In general, for any projection matrix $\bH$ that projects vectors onto a subspace $\mathcalV$, we have $\cspace(\bH) = \mathcalV$. More formally, this property can be proved by the SVD.
\end{remark}
We now show that for any projection matrix, it has specific eigenvalues. See Appendix~\ref{appendix:orthogonal} for a detailed discussion on the orthogonal projection.
\begin{proposition}[Eigenvalue of Projection Matrix]\label{proposition:eigen-of-projection-matrix}
The only possible eigenvalues of a projection matrix are 0 and 1.
\end{proposition}
\begin{proof}[of Proposition~\ref{proposition:eigen-of-projection-matrix}]
Since $\bH$ is symmetric, we have spectral decomposition $\bH =\bQ\bLambda\bQ^\top$. From the idempotent property, we have
$$
\begin{aligned}
(\bQ\bLambda\bQ^\top)^2 &= \bQ\bLambda\bQ^\top \\
\bQ\bLambda^2\bQ^\top &= \bQ\bLambda\bQ^\top \\
\bLambda^2 &=\bLambda \\
\lambda_i^2 &=\lambda_i.
\end{aligned}
$$
Therefore, the only possible eigenvalues for $\bH$ are 0 and 1.
\end{proof}
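The symmetry, idempotency, and $\{0,1\}$ eigenvalues of the hat matrix can be verified numerically (a NumPy sketch; the data matrix is a random illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))                 # full column rank (generic)
H = A @ np.linalg.inv(A.T @ A) @ A.T            # hat matrix

assert np.allclose(H, H.T)                      # symmetric
assert np.allclose(H @ H, H)                    # idempotent

# eigenvalues are 0 (multiplicity m - n = 3) and 1 (multiplicity n = 3)
eigvals = np.linalg.eigvalsh(H)                 # ascending order
assert np.allclose(eigvals, [0, 0, 0, 1, 1, 1])
```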
This property of the projection matrix is important for the analysis of distribution theory for linear models. See \citep{lu2021rigorous} for more details.
Following from the eigenvalues of the projection matrix, we can also obtain the perpendicular projection $\bI-\bH$.
\begin{proposition}[Project onto $\mathcalV^\perp$]\label{proposition:orthogonal-projection_tmp}
Let $\mathcalV$ be a subspace and $\bH$ be a projection onto $\mathcalV$. Then $\bI-\bH$ is the projection matrix onto $\mathcalV^\perp$.
\end{proposition}
\begin{proof}[of Proposition~\ref{proposition:orthogonal-projection_tmp}]
First, $(\bI-\bH)$ is symmetric: $(\bI-\bH)^\top = \bI - \bH^\top = \bI-\bH$ since $\bH$ is symmetric. And
$$
(\bI-\bH)^2 = \bI^2 -\bI\bH -\bH\bI +\bH^2 = \bI-\bH.
$$
Thus $\bI-\bH$ is a projection matrix. By spectral theorem again, let $\bH =\bQ\bLambda\bQ^\top$. Then $\bI-\bH = \bQ\bQ^\top - \bQ\bLambda\bQ^\top = \bQ(\bI-\bLambda)\bQ^\top$. Hence the column space of $\bI-\bH$ is spanned by the eigenvectors of $\bH$ corresponding to the zero eigenvalues of $\bH$ (by Proposition~\ref{proposition:eigen-of-projection-matrix}, p.~\pageref{proposition:eigen-of-projection-matrix}), which coincides with $\mathcalV^\perp$.
\end{proof}
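Numerically, $\bI-\bH$ is indeed a projection that annihilates the column space of $\bA$ (NumPy sketch with random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))
H = A @ np.linalg.inv(A.T @ A) @ A.T

# I - H projects onto the orthogonal complement of C(A):
# it is itself a projection and sends every column of A to zero.
M = np.eye(6) - H
assert np.allclose(M, M.T)
assert np.allclose(M @ M, M)
assert np.allclose(M @ A, np.zeros((6, 3)))
```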
Again, for a detailed analysis of the origin of the projection matrix and the results behind it, we recommend that readers refer to Appendix~\ref{appendix:orthogonal}, although it is not the main interest of the matrix decomposition results.
\subsection{Application: An Alternative Definition on PD and PSD of Matrices}\label{section:equivalent-pd-psd}
In Definition~\ref{definition:psd-pd-defini} (p.~\pageref{definition:psd-pd-defini}), we defined the positive definite matrices and positive semidefinite matrices by the quadratic form of the matrices. We here prove that a symmetric matrix is positive definite if and only if all eigenvalues are positive.
\begin{lemma}[Eigenvalues of PD and PSD Matrices\index{Positive definite}\index{Positive semidefinite}]\label{lemma:eigens-of-PD-psd}
A matrix $\bA\in \real^{n\times n}$ is positive definite (PD) if and only if $\bA$ has only positive eigenvalues.
And a matrix $\bA\in \real^{n\times n}$ is positive semidefinite (PSD) if and only if $\bA$ has only nonnegative eigenvalues.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:eigens-of-PD-psd}]
We will prove by forward implication and reverse implication separately as follows.
\paragraph{Forward implication:}
Suppose $\bA$ is PD, then for any eigenvalue $\lambda$ and its corresponding eigenvector $\bv$ of $\bA$, we have $\bA\bv = \lambda\bv$. Thus
$$
\bv^\top \bA\bv = \lambda||\bv||^2 > 0.
$$
This implies $\lambda>0$.
\paragraph{Reverse implication:}
Conversely, suppose all the eigenvalues $\lambda_i$ are positive. By the spectral decomposition, $\bA =\bQ\bLambda \bQ^\top$. For any nonzero vector $\bx$, let $\by=\bQ^\top\bx$; then $\by\neq\bzero$ since $\bQ$ is nonsingular, and we have
$$
\bx^\top \bA \bx = \bx^\top (\bQ\bLambda \bQ^\top) \bx = (\bx^\top \bQ) \bLambda (\bQ^\top\bx) = \by^\top\bLambda\by = \sum_{i=1}^{n} \lambda_i y_i^2>0.
$$
That is, $\bA$ is PD.
Analogously, we can prove the second part of the claim.
\end{proof}
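The eigenvalue characterization suggests a simple numerical test for definiteness (a NumPy sketch; the helper name and tolerance are illustrative choices, not a library routine):

```python
import numpy as np

# Definiteness test via eigenvalues of a symmetric matrix.
def is_pd(A, tol=1e-12):
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

A_pd  = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1 and 3
A_psd = np.array([[1.0, 1.0], [1.0, 1.0]])     # eigenvalues 0 and 2

assert is_pd(A_pd)
assert not is_pd(A_psd)        # PSD but not PD
```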
\begin{theoremHigh}[Nonsingular Factor of PSD and PD Matrices]\label{lemma:nonsingular-factor-of-PD}
A real symmetric matrix $\bA$ is PSD if and only if $\bA$ can be factored as $\bA=\bP^\top\bP$, and is PD if and only if $\bP$ is nonsingular.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{lemma:nonsingular-factor-of-PD}]
For the first part, we will prove by forward implication and reverse implication separately as follows.
\paragraph{Forward implication: }Suppose $\bA$ is PSD; its spectral decomposition is given by $\bA = \bQ\bLambda\bQ^\top$. Since the eigenvalues of PSD matrices are nonnegative, we can decompose $\bLambda=\bLambda^{1/2}\bLambda^{1/2}$. Letting $\bP = \bLambda^{1/2}\bQ^\top$, we can decompose $\bA$ as $\bA=\bP^\top\bP$.
\paragraph{Reverse implication: } If $\bA$ can be factored as $\bA=\bP^\top\bP$, then all eigenvalues of $\bA$ are nonnegative, since for any eigenvalue $\lambda$ and its corresponding eigenvector $\bv$ of $\bA$, we have
$$
\lambda = \frac{\bv^\top\bA\bv}{\bv^\top\bv} = \frac{\bv^\top\bP^\top\bP\bv}{\bv^\top\bv}=\frac{||\bP\bv||^2}{||\bv||^2} \geq 0.
$$
This implies that $\bA$ is PSD by Lemma~\ref{lemma:eigens-of-PD-psd}.
Similarly, we can prove the second part for PD matrices: positive definiteness forces $\bP$ to be nonsingular, and a nonsingular $\bP$ forces the eigenvalues to be positive. \footnote{See also wiki page: https://en.wikipedia.org/wiki/Sylvester's\_criterion.}
\end{proof}
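The construction in the forward implication, $\bP=\bLambda^{1/2}\bQ^\top$, can be carried out directly in NumPy. A minimal sketch (matrix and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B.T @ B + np.eye(4)                  # symmetric PD test matrix

# Spectral decomposition A = Q Lambda Q^T, then P = Lambda^{1/2} Q^T.
lam, Q = np.linalg.eigh(A)
P = np.diag(np.sqrt(lam)) @ Q.T

reconstructed = P.T @ P                  # should recover A
P_is_nonsingular = abs(np.linalg.det(P)) > 1e-10
```

Since $\bA$ is PD here, all eigenvalues are at least $1$, so $|\det(\bP)|=\prod_i \sqrt{\lambda_i}\geq 1$ and $\bP$ is nonsingular, matching the theorem.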
\subsection{Proof for Semidefinite Rank-Revealing Decomposition}\label{section:semi-rank-reveal-proof}
In this section, we provide a proof for Theorem~\ref{theorem:semidefinite-factor-rank-reveal} (p.~\pageref{theorem:semidefinite-factor-rank-reveal}), the existence of the rank-revealing decomposition for positive semidefinite matrix.\index{Rank-revealing}\index{Semidefinite rank-revealing}
\begin{proof}[of Theorem~\ref{theorem:semidefinite-factor-rank-reveal}]
The proof is a consequence of the nonsingular factor of PSD matrices (Theorem~\ref{lemma:nonsingular-factor-of-PD}, p.~\pageref{lemma:nonsingular-factor-of-PD}) and the existence of column-pivoted QR decomposition (Theorem~\ref{theorem:rank-revealing-qr-general}, p.~\pageref{theorem:rank-revealing-qr-general}).
By Theorem~\ref{lemma:nonsingular-factor-of-PD}, the nonsingular factor of PSD matrix $\bA$ is given by $\bA = \bZ^\top\bZ$, where $\bZ=\bLambda^{1/2}\bQ^\top$ and $\bA=\bQ\bLambda\bQ^\top$ is the spectral decomposition of $\bA$.
By Lemma~\ref{lemma:rank-of-symmetric}, the rank of matrix $\bA$ is the number of nonzero eigenvalues (here, the number of positive eigenvalues, since $\bA$ is PSD). Therefore, only $r$ diagonal entries of $\bLambda^{1/2}$ are nonzero, and $\bZ=\bLambda^{1/2}\bQ^\top$ contains only $r$ linearly independent columns, i.e., $\bZ$ is of rank $r$. By the column-pivoted QR decomposition, we have
$$
\bZ\bP = \bQ_z
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix},
$$
where $\bP$ is a permutation matrix, $\bR_{11}\in \real^{r\times r}$ is upper triangular with positive diagonals, and $\bR_{12}\in \real^{r\times (n-r)}$. Therefore
$$
\bP^\top\bA\bP =
\bP^\top\bZ^\top\bZ\bP =
\begin{bmatrix}
\bR_{11}^\top & \bzero \\
\bR_{12}^\top & \bzero
\end{bmatrix}
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix}.
$$
Let
$$
\bR = \begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix},
$$
we obtain the rank-revealing decomposition of the semidefinite matrix: $\bP^\top\bA\bP = \bR^\top\bR$.
\end{proof}
This decomposition is produced by using complete pivoting, which at each stage permutes the largest diagonal element
in the active submatrix into the pivot position. The procedure is similar to the partial pivoting discussed in Section~\ref{section:partial-pivot-lu} (p.~\pageref{section:partial-pivot-lu}).
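The construction in the proof above can be reproduced numerically. The sketch below assumes SciPy is available for the column-pivoted QR (`scipy.linalg.qr` with `pivoting=True`); the rank-deficient PSD test matrix and seed are illustrative choices:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
C = rng.standard_normal((5, 3))
A = C @ C.T                               # 5x5 PSD matrix with rank 3

# Factor step: A = Z^T Z with Z = Lambda^{1/2} Q^T (Z has rank 3).
lam, Q = np.linalg.eigh(A)
lam = np.clip(lam, 0.0, None)             # clip tiny negative round-off
Z = np.diag(np.sqrt(lam)) @ Q.T

# Column-pivoted QR of Z: Z[:, piv] = Qz R, hence P^T A P = R^T R.
Qz, R, piv = qr(Z, pivoting=True)
P = np.eye(5)[:, piv]                     # permutation matrix from pivot indices
```

Only the leading $r\times n$ block of $R$ is (numerically) nonzero, which is exactly the rank-revealing structure claimed in the theorem.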
\subsection{Application: Cholesky Decomposition via the QR Decomposition and the Spectral Decomposition}\label{section:cholesky-by-qr-spectral}
In this section, we provide another proof for the existence of the Cholesky decomposition.
\begin{theoremHigh}[Cholesky Decomposition: A Simpler Version of Theorem~\ref{theorem:cholesky-factor-exist}]\label{lemma:cholesky-former-one}
Every positive definite matrix $\bA\in \real^{n\times n}$ can be factored as
$$
\bA = \bR^\top\bR,
$$
where $\bR$ is an upper triangular matrix with positive diagonals.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{lemma:cholesky-former-one}]
From Theorem~\ref{lemma:nonsingular-factor-of-PD}, the PD matrix $\bA$ can be factored as $\bA=\bP^\top\bP$ where $\bP$ is a nonsingular matrix. Then, the QR decomposition of $\bP$ is given by $\bP = \bQ\bR$. This implies
$$
\bA = \bP^\top\bP = \bR^\top\bQ^\top\bQ\bR = \bR^\top\bR,
$$
where we notice that the form is very similar to the Cholesky decomposition, except that we have not yet claimed that $\bR$ has only positive diagonal values. From Algorithm~\ref{alg:reduced-qr}, the existence of the QR decomposition via the Gram-Schmidt process, the diagonals of $\bR$ are nonnegative; and since $\bP$ is nonsingular here, the diagonals of $\bR$ are in fact positive.
\end{proof}
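This two-step construction (spectral factor, then QR) can be sketched in NumPy. One caveat: `numpy.linalg.qr` does not promise positive diagonals in $\bR$, so we normalize the signs rowwise; uniqueness of the Cholesky factor then forces agreement with `numpy.linalg.cholesky`. The test matrix and seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = B.T @ B + np.eye(4)                   # PD test matrix

# Step 1: nonsingular factor A = P^T P from the spectral decomposition.
lam, Q = np.linalg.eigh(A)
P = np.diag(np.sqrt(lam)) @ Q.T

# Step 2: QR decomposition of P; then A = R^T Q^T Q R = R^T R.
_, R = np.linalg.qr(P)

# Flip row signs so that R has positive diagonals (R^T R is unchanged).
signs = np.sign(np.diag(R))
R = np.diag(signs) @ R

L = np.linalg.cholesky(A)                 # lower triangular, A = L L^T
```

By uniqueness of the Cholesky decomposition, the sign-normalized $R$ coincides with $L^\top$.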
The proof for the above theorem is a consequence of the existence of both the QR decomposition and the spectral decomposition. Thus, the existence of Cholesky decomposition can be proved via the QR decomposition and the spectral decomposition in this sense.
\subsection{Application: Unique Power Decomposition of Positive Definite Matrices}\label{section:unique-posere-pd}
\begin{theoremHigh}[Unique Power Decomposition of PD Matrices]\label{theorem:unique-factor-pd}
Every $n\times n$ positive definite matrix $\bA$ can be \textbf{uniquely} factored as $\bA =\bB^2$, where $\bB$ is a positive definite matrix.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:unique-factor-pd}]
We first prove that there exists a positive definite matrix $\bB$ such that $\bA = \bB^2$.
\paragraph{Existence} Since $\bA$ is PD and hence symmetric, the spectral decomposition of $\bA$ is given by $\bA = \bQ\bLambda\bQ^\top$. Since the eigenvalues of PD matrices are positive by Lemma~\ref{lemma:eigens-of-PD-psd}, the square root of $\bLambda$ exists. We can then define $\bB = \bQ\bLambda^{1/2}\bQ^\top$ such that $\bA = \bB^2$, where $\bB$ is clearly PD.
\paragraph{Uniqueness} Suppose the factorization is not unique; then there exist two such decompositions,
$$
\bA = \bB_1^2 = \bB_2^2,
$$
where $\bB_1$ and $\bB_2$ are both PD. The spectral decompositions of them are given by
$$
\bB_1 = \bQ_1 \bLambda_1\bQ_1^\top, \qquad \text{and} \qquad \bB_2 = \bQ_2 \bLambda_2\bQ_2^\top.
$$
We notice that $\bLambda_1^2$ and $\bLambda_2^2$ both contain the eigenvalues of $\bA$, and the eigenvalues of $\bB_1$ and $\bB_2$ contained in $\bLambda_1$ and $\bLambda_2$ are positive (since $\bB_1$ and $\bB_2$ are both PD). Since positive square roots are unique, it follows that $\bLambda_1=\bLambda_2=\bLambda^{1/2}$, where $\bLambda=\diag(\lambda_1,\lambda_2, \ldots, \lambda_n)$ contains the eigenvalues of $\bA$ arranged so that $\lambda_1\geq \lambda_2 \geq \ldots \geq \lambda_n$. By $\bB_1^2 = \bB_2^2$, we have
$$
\bQ_1 \bLambda \bQ_1^\top = \bQ_2 \bLambda \bQ_2^\top \leadto \bQ_2^\top\bQ_1 \bLambda = \bLambda \bQ_2^\top\bQ_1.
$$
Let $\bZ = \bQ_2^\top\bQ_1$. Then $\bLambda$ and $\bZ$ commute, so $\bZ$ must be a block diagonal matrix whose partitioning conforms to the block structure of $\bLambda$ (one block per distinct eigenvalue). This results in $\bLambda^{1/2} = \bZ\bLambda^{1/2}\bZ^\top$ and
$$
\bB_2 = \bQ_2 \bLambda^{1/2}\bQ_2^\top = \bQ_2 \bQ_2^\top\bQ_1\bLambda^{1/2} \bQ_1^\top\bQ_2 \bQ_2^\top=\bB_1.
$$
This completes the proof.
\end{proof}
Similarly, we can prove the unique decomposition of a PSD matrix $\bA = \bB^2$, where $\bB$ is PSD. A more detailed discussion of this topic can be found in \citep{koeber2006unique}.
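The square root $\bB = \bQ\bLambda^{1/2}\bQ^\top$ constructed in the existence part is straightforward to compute. A minimal NumPy sketch (test matrix and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
C = rng.standard_normal((4, 4))
A = C.T @ C + np.eye(4)                   # PD test matrix

# B = Q Lambda^{1/2} Q^T, the PD square root from the existence proof.
lam, Q = np.linalg.eigh(A)
B = Q @ np.diag(np.sqrt(lam)) @ Q.T

B_is_symmetric = bool(np.allclose(B, B.T))
B_is_pd = bool(np.all(np.linalg.eigvalsh(B) > 0))
```

The computed $\bB$ is symmetric and PD, and squaring it recovers $\bA$, exactly as the theorem asserts.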
\paragraph{Decomposition for PD matrices} To conclude, for PD matrix $\bA$, we can factor it into $\bA=\bR^\top\bR$ where $\bR$ is an upper triangular matrix with positive diagonals as shown in Theorem~\ref{theorem:cholesky-factor-exist} by Cholesky decomposition, $\bA = \bP^\top\bP$ where $\bP$ is nonsingular in Theorem~\ref{lemma:nonsingular-factor-of-PD}, and $\bA = \bB^2$ where $\bB$ is PD in Theorem~\ref{theorem:unique-factor-pd}.
\chapter{Singular Value Decomposition (SVD)}\label{section:SVD}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Singular Value Decomposition}
In the eigenvalue decomposition, we factor a matrix into a diagonal matrix via its eigenvector matrix. However, this is not always possible: if $\bA$ does not have linearly independent eigenvectors, such a diagonalization does not exist. The singular value decomposition (SVD) fills this gap. Instead of factoring the matrix via a single eigenvector matrix, the SVD involves two orthogonal matrices. We state the result of the SVD in the following theorem and discuss its existence in the next sections.
\begin{theoremHigh}[Reduced SVD for Rectangular Matrices]\label{theorem:reduced_svd_rectangular}
Every real $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU \bSigma \bV^\top,
$$
where $\bSigma\in \real^{r\times r}$ is a diagonal matrix $\bSigma=\diag(\sigma_1, \sigma_2, \ldots, \sigma_r)$ with $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_r$ and
\begin{itemize}
\item $\sigma_i$'s are the nonzero \textbf{singular values} of $\bA$; they are the (positive) square roots of the nonzero \textbf{eigenvalues} of $\trans{\bA} \bA$ and $ \bA \trans{\bA}$.
\item Columns of $\bU\in \real^{m\times r}$ contain the $r$ eigenvectors of $\bA\bA^\top$ corresponding to the $r$ nonzero eigenvalues of $\bA\bA^\top$.
\item Columns of $\bV\in \real^{n\times r}$ contain the $r$ eigenvectors of $\bA^\top\bA$ corresponding to the $r$ nonzero eigenvalues of $\bA^\top\bA$.
\item Moreover, the columns of $\bU$ and $\bV$ are called the \textbf{left and right singular vectors} of $\bA$, respectively.
\item Further, the columns of $\bU$ and $\bV$ are orthonormal (by Spectral Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}).
\end{itemize}
In particular, we can write out the matrix decomposition by the sum of outer products of vectors $\bA = \bU \bSigma \bV^\top = \sum_{i=1}^r \sigma_i \bu_i \bv_i^\top$, which is a sum of $r$ rank-one matrices.
\end{theoremHigh}
If we append $m-r$ additional silent columns that are orthonormal to the $r$ eigenvectors of $\bA\bA^\top$, just like the silent columns in the QR decomposition, we obtain an orthogonal matrix $\bU\in \real^{m\times m}$. The same applies to the columns of $\bV$.
We then illustrate the full SVD for matrices in the following theorem. We formulate the difference between reduced and full SVD in the \textcolor{blue}{blue} text.
\begin{theoremHigh}[Full SVD for Rectangular Matrices]\label{theorem:full_svd_rectangular}
Every real $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU \bSigma \bV^\top,
$$
where the upper-left block of $\bSigma\in $\textcolor{blue}{$\real^{m\times n}$} is diagonal, that is, $\bSigma=\begin{bmatrix}
\bSigma_1 & \bzero \\
\bzero & \bzero
\end{bmatrix}$, where $\bSigma_1=\diag(\sigma_1, \sigma_2, \ldots, \sigma_r)\in \real^{r\times r}$ with $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_r$ and
\begin{itemize}
\item $\sigma_i$'s are the nonzero \textbf{singular values} of matrix $\bA$; they are the (positive) square roots of the nonzero \textbf{eigenvalues} of $\trans{\bA} \bA$ and $ \bA \trans{\bA}$.
\item $\bU\in \textcolor{blue}{\real^{m\times m}}$ contains the $r$ eigenvectors of $\bA\bA^\top$ corresponding to the $r$ nonzero eigenvalues of $\bA\bA^\top$ \textcolor{blue}{and $m-r$ extra orthonormal vectors from $\nspace(\bA^\top)$}.
\item $\bV\in \textcolor{blue}{\real^{n\times n}}$ contains the $r$ eigenvectors of $\bA^\top\bA$ corresponding to the $r$ nonzero eigenvalues of $\bA^\top\bA$ \textcolor{blue}{and $n-r$ extra orthonormal vectors from $\nspace(\bA)$}.
\item Moreover, the columns of $\bU$ and $\bV$ are called the \textbf{left and right singular vectors} of $\bA$, respectively.
\item Further, the columns of $\bU$ and $\bV$ are orthonormal (by Spectral Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}), and \textcolor{blue}{$\bU$ and $\bV$ are orthogonal matrices}.
\end{itemize}
In particular, we can write the matrix decomposition by the sum of outer products of vectors $ \bA = \bU \bSigma \bV^\top = \sum_{i=1}^r \sigma_i \bu_i \bv_i^\top$, which is a sum of $r$ rank-one matrices.
\end{theoremHigh}
The comparison between the reduced and the full SVD is shown in Figure~\ref{fig:svd-comparison} where white entries are zero and blue entries are not necessarily zero.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Reduced SVD decomposition]{\label{fig:svdhalf}
\includegraphics[width=0.47\linewidth]{./imgs/svdreduced.pdf}}
\quad
\subfigure[Full SVD decomposition]{\label{fig:svdall}
\includegraphics[width=0.47\linewidth]{./imgs/svdfull.pdf}}
\caption{Comparison between the reduced and full SVD.}
\label{fig:svd-comparison}
\end{figure}
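The difference between the reduced and full SVD is directly visible in NumPy's `full_matrices` flag. A minimal sketch verifying the shapes, the eigenvalue connection, and the rank-one expansion (test matrix and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))

# Full SVD: U is 5x5; reduced SVD: U is 5x3. Sigma and V^T coincide here
# because A has full column rank.
U_full, s, Vt_full = np.linalg.svd(A, full_matrices=True)
U_red, s_red, Vt_red = np.linalg.svd(A, full_matrices=False)

# Singular values are the (positive) square roots of the eigenvalues of A^T A.
eig_AtA = np.clip(np.sort(np.linalg.eigvalsh(A.T @ A))[::-1], 0.0, None)

# Rank-one expansion: A = sum_i sigma_i u_i v_i^T.
A_rebuilt = sum(s_red[i] * np.outer(U_red[:, i], Vt_red[i, :])
                for i in range(len(s_red)))
```

The extra $m-r$ columns of the full $\bU$ are exactly the silent columns discussed above; they multiply zero rows of $\bSigma$ and do not affect the product.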
\section{Existence of the SVD}
To prove the existence of the SVD, we need the following lemmas. We mentioned that the singular values are the square roots of the eigenvalues of $\bA^\top\bA$. Since negative values do not have real square roots, these eigenvalues must be nonnegative.
\begin{lemma}[Nonnegative Eigenvalues of $\bA^\top \bA$]\label{lemma:nonneg-eigen-ata}
For any matrix $\bA\in \real^{m\times n}$, $\bA^\top \bA$ has nonnegative eigenvalues.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:nonneg-eigen-ata}]
For any eigenvalue $\lambda$ and its corresponding eigenvector $\bx$ of $\bA^\top \bA$, we have
$$
\bA^\top \bA \bx = \lambda \bx \leadto \bx^\top \bA^\top \bA \bx = \lambda \bx^\top\bx.
$$
Since $\bx^\top \bA^\top \bA \bx = ||\bA \bx||^2 \geq 0$ and $\bx^\top\bx > 0$, we then have $\lambda \geq 0$.
\end{proof}
Since $\bA^\top\bA$ has nonnegative eigenvalues, we can then define the singular value $\sigma\geq 0$ of $\bA$ such that $\sigma^2$ is an eigenvalue of $\bA^\top\bA$, i.e., \fbox{$\bA^\top\bA \bv = \sigma^2 \bv$}. This is essential to the existence of the SVD.
We have shown in Lemma~\ref{lemma:rankAB} (p.~\pageref{lemma:rankAB}) that $rank(\bA\bB)\leq \min\{rank(\bA), rank(\bB)\}$.
However, the symmetric matrix $\bA^\top \bA$ is rather special in that the rank of $\bA^\top \bA$ is equal to $rank(\bA)$, which we now prove.
\begin{lemma}[Rank of $\bA^\top \bA$]\label{lemma:rank-of-ata}
$\bA^\top \bA$ and $\bA$ have the same rank.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:rank-of-ata}]
Let $\bx\in \nspace(\bA)$, we have
$$
\bA\bx = \bzero \leadto \bA^\top\bA \bx =\bzero,
$$
i.e., $\bx\in \nspace(\bA) \leadtosmall \bx \in \nspace(\bA^\top \bA)$, therefore $\nspace(\bA) \subseteq \nspace(\bA^\top\bA)$.
Further, let $\bx \in \nspace(\bA^\top\bA)$, we have
$$
\bA^\top \bA\bx = \bzero\leadtosmall \bx^\top \bA^\top \bA\bx = 0\leadtosmall ||\bA\bx||^2 = 0 \leadtosmall \bA\bx=\bzero,
$$
i.e., $\bx\in \nspace(\bA^\top \bA) \leadtosmall \bx\in \nspace(\bA)$, therefore $\nspace(\bA^\top\bA) \subseteq\nspace(\bA) $.
As a result, by ``sandwiching'', it follows that
$$\nspace(\bA) = \nspace(\bA^\top\bA) \qquad
\text{and} \qquad
dim(\nspace(\bA)) = dim(\nspace(\bA^\top\bA)).
$$
By the fundamental theorem of linear algebra in Appendix~\ref{appendix:fundamental-rank-nullity} (p.~\pageref{appendix:fundamental-rank-nullity}), $\bA^\top \bA$ and $\bA$ have the same rank.
\end{proof}
Applying this observation to $\bA^\top$, we can also prove that $\bA\bA^\top$ and $\bA$ have the same rank:
$$
rank(\bA) = rank(\bA^\top \bA) = rank(\bA\bA^\top).
$$
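These rank equalities are easy to confirm numerically on a deliberately rank-deficient matrix. A minimal sketch (shapes and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
C = rng.standard_normal((6, 2))
D = rng.standard_normal((2, 4))
A = C @ D                                  # 6x4 matrix with rank 2

r = np.linalg.matrix_rank(A)
r_AtA = np.linalg.matrix_rank(A.T @ A)     # 4x4 Gram matrix
r_AAt = np.linalg.matrix_rank(A @ A.T)     # 6x6 Gram matrix
```

All three ranks agree, matching $rank(\bA) = rank(\bA^\top \bA) = rank(\bA\bA^\top)$.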
In the form of the SVD, we claimed the matrix $\bA$ is a sum of $r$ rank-one matrices where $r$ is the number of nonzero singular values. And the number of nonzero singular values is actually the rank of the matrix.
\begin{lemma}[The Number of Nonzero Singular Values Equals the Rank]\label{lemma:rank-equal-singular}
The number of nonzero singular values of matrix $\bA$ equals the rank of $\bA$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:rank-equal-singular}]
The rank of any symmetric matrix (here $\bA^\top\bA$) equals the number of nonzero eigenvalues (with repetitions) by Lemma~\ref{lemma:rank-of-symmetric} (p.~\pageref{lemma:rank-of-symmetric}). So the number of nonzero singular values equals the rank of $\bA^\top \bA$. By Lemma~\ref{lemma:rank-of-ata}, the number of nonzero singular values equals the rank of $\bA$.
\end{proof}
We are now ready to prove the existence of the SVD.
\begin{proof}[\textbf{of Theorem~\ref{theorem:reduced_svd_rectangular}: Existence of the SVD}]
Since $\bA^\top \bA$ is a symmetric matrix, by Spectral Theorem~\ref{theorem:spectral_theorem} (p.~\pageref{theorem:spectral_theorem}) and Lemma~\ref{lemma:nonneg-eigen-ata}, there exists an orthogonal matrix $\bV$ such that
$$
\boxed{\bA^\top \bA = \bV \bSigma^2 \bV^\top},
$$
where $\bSigma$ is a diagonal matrix containing the singular values of $\bA$, i.e., $\bSigma^2$ contains the eigenvalues of $\bA^\top \bA$.
Specifically, $\bSigma=\diag(\sigma_1, \sigma_2, \ldots, \sigma_r)$, and $\{\sigma_1^2, \sigma_2^2, \ldots, \sigma_r^2\}$ are the nonzero eigenvalues of $\bA^\top \bA$, with $r$ being the rank of $\bA$. That is, $\{\sigma_1, \ldots, \sigma_r\}$ are the singular values of $\bA$. In this case, we keep only the eigenvectors corresponding to the nonzero eigenvalues, so that $\bV\in \real^{n\times r}$.
Now we are into the central part.
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10]
Start from \fbox{$\bA^\top\bA \bv_i = \sigma_i^2 \bv_i$}, $\forall i \in \{1, 2, \ldots, r\}$, i.e., the eigenvector $\bv_i$ of $\bA^\top\bA$ corresponding to $\sigma_i^2$:
1. Multiply both sides by $\bv_i^\top$:
$$
\bv_i^\top\bA^\top\bA \bv_i = \sigma_i^2 \bv_i^\top \bv_i \leadto ||\bA\bv_i||^2 = \sigma_i^2 \leadto ||\bA\bv_i||=\sigma_i
$$
2. Multiply both sides by $\bA$:
$$
\bA\bA^\top\bA \bv_i = \sigma_i^2 \bA \bv_i \leadto \bA\bA^\top \frac{\bA \bv_i }{\sigma_i}= \sigma_i^2 \frac{\bA \bv_i }{\sigma_i} \leadto \bA\bA^\top \bu_i = \sigma_i^2 \bu_i
$$
where we notice that this yields the eigenvector of $\bA\bA^\top$ corresponding to $\sigma_i^2$, namely $\bA \bv_i$. Since the length of $\bA \bv_i$ is $\sigma_i$, we define the unit vector $\bu_i = \frac{\bA \bv_i }{\sigma_i}$.
\end{mdframed}
These $\bu_i$'s are orthogonal because, for $i\neq j$, $(\bA\bv_i)^\top(\bA\bv_j)=\bv_i^\top\bA^\top\bA\bv_j=\sigma_j^2 \bv_i^\top\bv_j=0$. That is,
$$
\boxed{\bA \bA^\top = \bU \bSigma^2 \bU^\top}.
$$
Since \fbox{$\bA\bv_i = \sigma_i\bu_i$}, we have
$$
[\bA\bv_1, \bA\bv_2, \ldots, \bA\bv_r] = [ \sigma_1\bu_1, \sigma_2\bu_2, \ldots, \sigma_r\bu_r]\leadto
\bA\bV = \bU\bSigma,
$$
which completes the proof.
\end{proof}
By appending silent columns in $\bU$ and $\bV$, we can easily find the full SVD. A byproduct of the above proof is that the spectral decomposition of $\bA^\top\bA = \bV \bSigma^2 \bV^\top$ will result in the spectral decomposition of $\bA \bA^\top = \bU \bSigma^2 \bU^\top$ with the same eigenvalues.
\begin{corollary}[Eigenvalues of $\bA^\top\bA$ and $\bA\bA^\top$]
The nonzero eigenvalues of $\bA^\top\bA$ and $\bA\bA^\top$ are the same.
\end{corollary}
We have shown in Lemma~\ref{lemma:nonneg-eigen-ata} that the eigenvalues of $\bA^\top \bA$ are nonnegative, so the eigenvalues of $\bA\bA^\top$ are nonnegative as well.
\begin{corollary}[Nonnegative Eigenvalues of $\bA^\top\bA$ and $\bA\bA^\top$]
The eigenvalues of $\bA^\top\bA$ and $\bA\bA^\top$ are nonnegative.
\end{corollary}
The existence of the SVD is important for defining the effective rank of a matrix.
\begin{definition}[Effective Rank vs Exact Rank]\label{definition:effective-rank-in-svd}
The \textit{effective rank} is also known as the \textit{numerical rank}.
Following Lemma~\ref{lemma:rank-equal-singular}, the number of nonzero singular values equals the rank of a matrix.
Denote the $i$-th largest singular value of $\bA$ by $\sigma_i(\bA)$. If $\sigma_r(\bA)\gg \sigma_{r+1}(\bA)\approx 0$, then $r$ is known as the numerical rank of $\bA$. Whereas, when $\sigma_r(\bA)>\sigma_{r+1}(\bA)=0$, $\bA$ is said to have \textit{exact rank} $r$, as in most of our discussions.
\end{definition}
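The distinction matters in floating-point arithmetic, where a singular value of, say, $10^{-12}$ is indistinguishable from round-off noise. A minimal sketch constructing such a matrix (the singular values $3$, $2$, $10^{-12}$ and the threshold are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
# Construct A with singular values 3, 2, and 1e-12: exact rank 3,
# but numerical (effective) rank 2 under any reasonable threshold.
U, _ = np.linalg.qr(rng.standard_normal((5, 3)))   # orthonormal columns
V, _ = np.linalg.qr(rng.standard_normal((4, 3)))
A = U @ np.diag([3.0, 2.0, 1e-12]) @ V.T           # 5x4 matrix

sv = np.linalg.svd(A, compute_uv=False)            # descending singular values
numerical_rank = int(np.sum(sv > 1e-6))            # illustrative threshold
```

With the threshold at $10^{-6}$, the tiny third singular value is discarded and the effective rank is $2$, even though the exact rank of the construction is $3$.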
\section{Properties of the SVD}\label{section:property-svd}
\subsection{Four Subspaces in SVD}\label{section:four-space-svd}
For any matrix $\bA\in \real^{m\times n}$, we have the following property:
$\bullet$ $\nspace(\bA)$ is the orthogonal complement of the row space $\cspace(\bA^\top)$ in $\real^n$: $dim(\nspace(\bA))+dim(\cspace(\bA^\top))=n$;
$\bullet$ $\nspace(\bA^\top)$ is the orthogonal complement of the column space $\cspace(\bA)$ in $\real^m$: $dim(\nspace(\bA^\top))+dim(\cspace(\bA))=m$;
This is called the fundamental theorem of linear algebra and is also known as the rank-nullity theorem. And the proof can be found in Appendix~\ref{appendix:fundamental-rank-nullity}. Furthermore, we find the basis for the four subspaces via the CR decomposition in Appendix~\ref{appendix:cr-decomposition-four-basis}.
In specific, from the SVD, we can find an orthonormal basis for each subspace.
\begin{figure}[h!]
\centering
\includegraphics[width=0.98\textwidth]{imgs/lafundamental3-SVD.pdf}
\caption{Orthonormal bases that diagonalize $\bA$ from SVD.}
\label{fig:lafundamental3-SVD}
\end{figure}
\begin{lemma}[Four Orthonormal Bases]\label{lemma:svd-four-orthonormal-Basis}
Given the full SVD of matrix $\bA = \bU \bSigma \bV^\top$, where $\bU=[\bu_1, \bu_2, \ldots,\bu_m]$ and $\bV=[\bv_1, \bv_2, \ldots, \bv_n]$ are the column partitions of $\bU$ and $\bV$, we have the following properties:
$\bullet$ $\{\bv_1, \bv_2, \ldots, \bv_r\} $ is an orthonormal basis of $\cspace(\bA^\top)$;
$\bullet$ $\{\bv_{r+1},\bv_{r+2}, \ldots, \bv_n\}$ is an orthonormal basis of $\nspace(\bA)$;
$\bullet$ $\{\bu_1,\bu_2, \ldots,\bu_r\}$ is an orthonormal basis of $\cspace(\bA)$;
$\bullet$ $\{\bu_{r+1}, \bu_{r+2},\ldots,\bu_m\}$ is an orthonormal basis of $\nspace(\bA^\top)$.
The relationship of the four subspaces is demonstrated in Figure~\ref{fig:lafundamental3-SVD}, where $\bA$ transforms the row-space basis vector $\bv_i$ into the column-space basis vector $\bu_i$ via $\sigma_i\bu_i=\bA\bv_i$ for all $i\in \{1, 2, \ldots, r\}$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:svd-four-orthonormal-Basis}]
From Lemma~\ref{lemma:rank-of-symmetric}, for symmetric matrix $\bA^\top\bA$, $\cspace(\bA^\top\bA)$ is spanned by the eigenvectors, thus $\{\bv_1,\bv_2, \ldots, \bv_r\}$ is an orthonormal basis of $\cspace(\bA^\top\bA)$.
Since,
1. $\bA^\top\bA$ is symmetric, then the row space of $\bA^\top\bA$ equals the column space of $\bA^\top\bA$.
2. All rows of $\bA^\top\bA$ are the combinations of the rows of $\bA$, so the row space of $\bA^\top\bA$ $\subseteq$ the row space of $\bA$, i.e., $\cspace(\bA^\top\bA) \subseteq \cspace(\bA^\top)$.
3. Since $rank(\bA^\top\bA) = rank(\bA)$ by Lemma~\ref{lemma:rank-of-ata}, we then have
The row space of $\bA^\top\bA$ = the column space of $\bA^\top\bA$ = the row space of $\bA$, i.e., $\cspace(\bA^\top\bA) = \cspace(\bA^\top)$. Thus $\{\bv_1, \bv_2,\ldots, \bv_r\}$ is an orthonormal basis of $\cspace(\bA^\top)$.
Further, the space spanned by $\{\bv_{r+1}, \bv_{r+2},\ldots, \bv_n\}$ is an orthogonal complement to the space spanned by $\{\bv_1,\bv_2, \ldots, \bv_r\}$, so $\{\bv_{r+1},\bv_{r+2}, \ldots, \bv_n\}$ is an orthonormal basis of $\nspace(\bA)$.
If we apply this process to $\bA\bA^\top$, we will prove the remaining claims in the lemma. Also, we can see that $\{\bu_1,\bu_2, \ldots,\bu_r\}$ is a basis for the column space of $\bA$ by Lemma~\ref{lemma:column-basis-from-row-basis} \footnote{For any matrix $\bA$, let $\{\br_1, \br_2, \ldots, \br_r\}$ be a set of vectors in $\real^n$ which forms a basis for the row space; then $\{\bA\br_1, \bA\br_2, \ldots, \bA\br_r\}$ is a basis for the column space of $\bA$.}, since $\bu_i = \frac{\bA\bv_i}{\sigma_i},\, \forall i \in\{1, 2, \ldots, r\}$.
\end{proof}
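Lemma~\ref{lemma:svd-four-orthonormal-Basis} can be checked numerically by slicing the full SVD of a rank-deficient matrix. A minimal sketch (shapes, seed, and the rank threshold are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)
C = rng.standard_normal((5, 2))
D = rng.standard_normal((2, 4))
A = C @ D                                   # 5x4 matrix with rank r = 2

U, s, Vt = np.linalg.svd(A)                 # full SVD
r = int(np.sum(s > 1e-10))

V_row   = Vt[:r, :].T                       # basis of the row space C(A^T)
V_null  = Vt[r:, :].T                       # basis of the null space N(A)
U_col   = U[:, :r]                          # basis of the column space C(A)
U_lnull = U[:, r:]                          # basis of the left null space N(A^T)

null_maps_to_zero = bool(np.allclose(A @ V_null, 0))
left_null_maps_to_zero = bool(np.allclose(A.T @ U_lnull, 0))
```

The trailing right singular vectors are annihilated by $\bA$, the trailing left singular vectors by $\bA^\top$, and $\bA\bV_r = \bU_r\bSigma_r$ links the two bases exactly as in Figure~\ref{fig:lafundamental3-SVD}.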
\subsection{SVD-Related Orthogonal Projections}\index{Orthogonal projection}
Following from the four subspaces in the SVD, there are several important orthogonal projections associated with the SVD. A detailed discussion of orthogonal projection is provided in Appendix~\ref{section:by-geometry-hat-matrix} (p.~\pageref{section:by-geometry-hat-matrix}). In words, orthogonal projection matrices are symmetric and idempotent, and a projection matrix projects any vector onto its column space. The idempotency has a geometric meaning: projecting twice is equivalent to projecting once. The symmetry also has a geometric meaning: the distance between the original vector and the projected vector (which lies in the column space of the projection matrix) is minimal. Suppose $\bA=\bU\bSigma\bV^\top$ is the SVD of $\bA$ with rank $r$, and we have the following column partitions
\[
\begin{blockarray}{ccc}
\begin{block}{c[cc]}
\bU=& \bU_r & \bU_m \\
\end{block}
&m\times r & m\times (m-r) \\
\end{blockarray}
,\qquad
\begin{blockarray}{ccc}
\begin{block}{c[cc]}
\bV= & \bV_r & \bV_n \\
\end{block}
&n\times r & n\times (n-r) \\
\end{blockarray},
\]
where $\bU_r$ and $\bV_r$ are the first $r$ columns of $\bU$ and $\bV$.
Then, the four orthogonal projections can be obtained by
$$
\begin{aligned}
\bV_r\bV_r^\top &= \text{projection onto $\cspace(\bA^\top)$},\\
\bV_n\bV_n^\top &=\text{projection onto $\nspace(\bA)$},\\
\bU_r\bU_r^\top &= \text{projection onto $\cspace(\bA)$},\\
\bU_m\bU_m^\top &= \text{projection onto $\nspace(\bA^\top)$}.\\
\end{aligned}
$$
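Two of these projections can be built and tested directly from the SVD factors. A minimal sketch checking symmetry, idempotency, and the projection behavior (test matrix and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(9)
C = rng.standard_normal((5, 2))
A = C @ rng.standard_normal((2, 4))          # 5x4 matrix, rank 2

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
P_row = Vt[:r, :].T @ Vt[:r, :]              # V_r V_r^T: projection onto C(A^T)
P_col = U[:, :r] @ U[:, :r].T                # U_r U_r^T: projection onto C(A)

# Projections are symmetric and idempotent.
sym  = np.allclose(P_row, P_row.T) and np.allclose(P_col, P_col.T)
idem = np.allclose(P_row @ P_row, P_row) and np.allclose(P_col @ P_col, P_col)

# The rows of A already lie in the row space, so A P_row = A.
fixes_rows = bool(np.allclose(A @ P_row, A))
```

Since every row of $\bA$ lies in $\cspace(\bA^\top)$, projecting the rows onto the row space leaves $\bA$ unchanged.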
\subsection{Relationship between Singular Values and Determinant}
Let $\bA\in \real^{n\times n}$ be a square matrix with singular value decomposition $\bA = \bU\bSigma\bV^\top$. It follows that
$$
|\det(\bA)| = |\det(\bU\bSigma\bV^\top)| = |\det(\bSigma)| = \sigma_1 \sigma_2\ldots \sigma_n.
$$
If all the singular values $\sigma_i$ are nonzero, then $\det(\bA)\neq 0$. That is, $\bA$ is \textbf{nonsingular}. If there is at least one singular value such that $\sigma_i =0$, then $\det(\bA)=0$, and $\bA$ does not have full rank, and is not invertible. Then the matrix is called \textbf{singular}. This is why $\sigma_i$'s are known as the singular values.
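The identity $|\det(\bA)| = \sigma_1\sigma_2\cdots\sigma_n$ is one line to verify numerically (test matrix and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((4, 4))

s = np.linalg.svd(A, compute_uv=False)    # singular values of A
abs_det_from_svd = np.prod(s)             # product of singular values
abs_det_direct = abs(np.linalg.det(A))    # |det(A)| computed directly
```

The orthogonal factors contribute only a sign ($\det(\bU), \det(\bV) = \pm 1$), which is why the identity holds for the absolute value of the determinant.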
\subsection{Orthogonal Equivalence}
We have defined in Definition~\ref{definition:similar-matrices} (p.~\pageref{definition:similar-matrices}) that $\bA$ and $\bP\bA\bP^{-1}$ are similar matrices for any nonsingular matrix $\bP$ and square matrix $\bA$. The orthogonal equivalence is defined in a similar way for rectangular matrices.
\begin{definition}[Orthogonal Equivalent Matrices\index{Orthogonal equivalent matrices}]
For any orthogonal matrices $\bU$ and $\bV$, the matrices $\bA$ and $\bU\bA\bV$ are called \textit{orthogonal equivalent matrices}. Or \textit{unitary equivalent} in complex domain when $\bU$ and $\bV$ are unitary.
\end{definition}
Then, we have the following property for orthogonal equivalent matrices.
\begin{lemma}[Orthogonal Equivalent Matrices]\label{lemma:orthogonal-equivalent-matrix}
For any orthogonal equivalent matrices $\bA$ and $\bB$, their singular values are the same.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:orthogonal-equivalent-matrix}]
Since $\bA$ and $\bB$ are orthogonal equivalent, there exist orthogonal matrices $\bU$ and $\bV$ such that $\bB = \bU\bA\bV$. We then have
$$
\bB\bB^\top = (\bU\bA\bV)(\bV^\top\bA^\top\bU^\top) = \bU\bA\bA^\top\bU^\top.
$$
This implies $\bB\bB^\top$ and $\bA\bA^\top$ are similar matrices. By Lemma~\ref{lemma:eigenvalue-similar-matrices} (p.~\pageref{lemma:eigenvalue-similar-matrices}), the eigenvalues of similar matrices are the same, which proves the singular values of $\bA$ and $\bB$ are the same.
\end{proof}
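Lemma~\ref{lemma:orthogonal-equivalent-matrix} can be demonstrated by generating random orthogonal factors via QR (the shapes and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(11)
A = rng.standard_normal((4, 3))

# Random orthogonal U (4x4) and V (3x3) from QR of Gaussian matrices.
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = U @ A @ V                              # orthogonal equivalent to A

s_A = np.linalg.svd(A, compute_uv=False)
s_B = np.linalg.svd(B, compute_uv=False)
```

The two singular value vectors coincide up to round-off, as the lemma asserts.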
\subsection{SVD for QR}
\begin{lemma}[Singular Values of the QR Factor]\label{lemma:svd-for-qr}
Suppose the full QR decomposition for matrix $\bA\in \real^{m\times n}$ with $m\geq n$ is given by $\bA=\bQ\bR$ where $\bQ\in \real^{m\times m}$ is orthogonal and $\bR\in \real^{m\times n}$ is upper triangular. Then $\bA$ and $\bR$ have the same singular values and right singular vectors.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:svd-for-qr}]
We notice that $\bA^\top\bA = \bR^\top\bR$ such that $\bA^\top\bA$ and $\bR^\top\bR$ have the same eigenvalues and eigenvectors, i.e., $\bA$ and $\bR$ have the same singular values and right singular vectors (i.e., the eigenvectors of $\bA^\top\bA$ or $\bR^\top\bR$).
\end{proof}
The above lemma is important for showing the existence and properties of the rank-revealing QR decomposition (Section~\ref{section:rank-one-qr-revealing}, p.~\pageref{section:rank-one-qr-revealing}).
It also implies that an SVD of a matrix can be constructed from its QR decomposition. Suppose the QR decomposition of $\bA$ is given by $\bA=\bQ\bR$ and the SVD of $\bR$ is given by $\bR=\bU_0 \bSigma\bV^\top$. Then the SVD of $\bA$ can be obtained by
$$
\bA =\underbrace{ \bQ\bU_0}_{\bU} \bSigma\bV^\top.
$$
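This QR-then-SVD construction is easy to carry out in NumPy (the shapes and seed are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(12)
A = rng.standard_normal((6, 3))             # m >= n

Q, R = np.linalg.qr(A)                      # reduced QR: Q is 6x3, R is 3x3
U0, s, Vt = np.linalg.svd(R)                # SVD of the small 3x3 factor R

U = Q @ U0                                  # assemble the left factor of A's SVD
```

The product $\bQ\bU_0$ has orthonormal columns, and the singular values of $\bR$ coincide with those of $\bA$, consistent with Lemma~\ref{lemma:svd-for-qr}.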
\subsection{Interlacing Property}
The interlacing property of the SVD follows from that of symmetric matrices. We state the theorem directly; the proof can be found in \citep{wilkinson1971algebraic} and is further discussed in \citep{golub2013matrix} (Theorem 8.1.7, p.~443). \index{Interlacing property}
\begin{theorem}[Interlacing Property for Symmetric Matrix]
Suppose $\bA\in \real^{n\times n}$ is symmetric, and $\bA_r$ is the upper-left $r\times r$ submatrix of $\bA$, i.e., $\bA_r=\bA[1:r, 1:r]$. Define $\lambda_i(\bB)$ as the $i$-th largest eigenvalue of matrix $\bB$. Let $k=r+1$ and $\bB_k=\bA_{k}$; then we have
$$
\lambda_{r+1}(\bB_{k})\leq \lambda_r(\bA_r) \leq \lambda_r(\bB_k)\leq \ldots \leq \lambda_2(\bB_{k})\leq \lambda_1 (\bA_r)\leq \lambda_1(\bB_k).
$$
\end{theorem}
The interlacing property for singular values can be derived directly from that of symmetric matrices:
\begin{theorem}[Interlacing Property for Singular Values]\label{theorem:interlacing-singular}
Suppose $\bA=[\ba_1, \ba_2, \ldots, \ba_n]\in \real^{m\times n}$ with $m\geq n$, and $\bA_r=[\ba_1, \ba_2, \ldots, \ba_r]$. Define $\sigma_i(\bB)$ as the $i$-th largest singular value of matrix $\bB$.
Let $k=r+1$ and $\bB_k=\bA_{k}$; then we have
$$
\sigma_{k}(\bB_{k})\leq \sigma_r(\bA_r) \leq \sigma_r(\bB_k)\leq \ldots \leq \sigma_2(\bB_{k})\leq \sigma_1 (\bA_r)\leq \sigma_1(\bB_k).
$$
\end{theorem}
\section{Computing the SVD}\label{section:comput-svd-in-svd}
Suppose again that we have an oracle algorithm to compute the eigenvalues and eigenvectors of $\bA^\top\bA$ at a cost of $f(m,n)$ flops. Then the computation of the SVD follows directly from the steps shown above. The procedure is formulated in Algorithm~\ref{alg:svd-oracle}.
\begin{algorithm}[H]
\caption{A Simple SVD}
\label{alg:svd-oracle}
\begin{algorithmic}[1]
\Require
Rank-$r$ matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ with size $m\times n $;
\State Compute the eigenpairs $\bA^\top\bA \bx_i = \sigma_i^2 \bx_i$, $\forall i\in \{1, 2, \ldots, r\}$; \Comment{ $f(m,n)$ flops}
\State Normalize each eigenvector: $\bv_i = \frac{\bx_i}{||\bx_i||}$;\Comment{$r\times 3n = 3nr$ flops}
\State Compute each left singular vector: $\bu_i = \frac{\bA \bv_i}{\sigma_i}$; \Comment{$r(m(2n-1)+m)$ flops}
\end{algorithmic}
\end{algorithm}
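Algorithm~\ref{alg:svd-oracle} can be sketched in NumPy, using `numpy.linalg.eigh` as the ``oracle'' for the eigenpairs of $\bA^\top\bA$ (the function name `simple_svd`, the tolerance, and the test matrix are illustrative choices):

```python
import numpy as np

def simple_svd(A, tol=1e-10):
    """Reduced SVD of A via the eigendecomposition of A^T A."""
    lam, V = np.linalg.eigh(A.T @ A)        # eigh returns ascending eigenvalues
    idx = np.argsort(lam)[::-1]             # reorder descending
    lam, V = lam[idx], V[:, idx]
    r = int(np.sum(lam > tol))              # keep only the nonzero part
    sigma = np.sqrt(lam[:r])
    V_r = V[:, :r]
    U_r = A @ V_r / sigma                   # u_i = A v_i / sigma_i, columnwise
    return U_r, sigma, V_r

rng = np.random.default_rng(13)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5x4, rank 3
U_r, sigma, V_r = simple_svd(A)
```

Note that forming $\bA^\top\bA$ squares the condition number, so this sketch is for exposition; production SVD routines use the bidiagonalization approach described below.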
By appending $\{\bv_{r+1}, \bv_{r+2}, \ldots, \bv_n\}$ via the Gram-Schmidt process to complete a full orthonormal basis, we have
\begin{equation*}
\begin{aligned}
&\bv_i^\top \bv_i = 1, \gap\forall r+1\leq i \leq n,\\
&\bv_i^\top \bv_j = 0, \gap\forall 1\leq i \leq n, r+1\leq j \leq n.
\end{aligned}
\end{equation*}
Similarly, appending $\{\bu_{r+1}, \bu_{r+2}, \ldots, \bu_m\}$ via Gram-Schmidt to complete a full orthonormal basis, we have
\begin{equation*}
\begin{aligned}
&\bu_i^\top \bu_i = 1, \gap\forall r+1\leq i \leq m,\\
&\bu_i^\top \bu_j = 0,\gap \forall 1\leq i \leq m, r+1\leq j \leq m.
\end{aligned}
\end{equation*}
The detailed analysis of computational complexity for SVD is complicated. It is typically computed numerically by a two-step procedure. In the first step, the matrix is reduced to a bidiagonal matrix. This takes $O(mn^2)$ flops (Section~\ref{section:bidiagonal-decompo}, p.~\pageref{section:bidiagonal-decompo}) and the second step is to compute the SVD of the bidiagonal matrix which takes $O(n)$ iterations with each involving $O(n)$ flops. Thus, the overall cost is $O(mn^2)$ flops. For those who are interested in the computation of SVD, please refer to Section~\ref{section:eigenvalue-problem} or \citep{trefethen1997numerical, golub2013matrix, kishore2017literature} for more details.
\subsection{Randomized Method for Computing the SVD Approximately}
Suppose now the matrix $\bA$ admits rank decomposition (Theorem~\ref{theorem:rank-decomposition}, p.~\pageref{theorem:rank-decomposition}):
$$
\underset{m\times n}{\bA} = \underset{m\times r}{\bD}\gap \underset{r\times n}{\bF},
$$
where the columns of $\bD$ span the column space of $\bA$: $\cspace(\bD)=\cspace(\bA)$. We orthogonalize the columns of $\bD=[\bd_1, \bd_2, \ldots, \bd_r]$ into $\bQ_r=[\bq_1, \bq_2, \ldots, \bq_r]$ such that
$$
span([\bq_1, \bq_2, \ldots, \bq_k]) = span([\bd_1, \bd_2, \ldots, \bd_k]), \qquad \text{for $k\in \{1,2,\ldots,r\}$}.
$$
This can be done by the \textit{reduced} QR decomposition via the Gram-Schmidt process,
whose complexity is $O(mr^2)$ (Section~\ref{section:qr-gram-compute}, p.~\pageref{section:qr-gram-compute}) if $\bD\in \real^{m\times r}$ (for the analysis, we complete $\bQ_r$ into an orthogonal matrix $\bQ=[\bQ_r, \bQ_2]$ such that $\bQ\bQ^\top=\bI$, but note that $\bQ_2$ is not needed for the final algorithm!). One can show that if the SVD of the $m\times n$ matrix $\widetildebE=\bQ^\top \bA \in \real^{m\times n}$ is given by $\widetildebE = \widetildebU\bSigma\bV^\top$, the SVD of $\bA$ can be obtained by
\begin{equation}\label{equation:svd-random-au}
\bA=\bQ\widetildebE=\underbrace{(\bQ\widetildebU)}_{\bU}\bSigma\bV^\top.
\end{equation}
Now let's expand the above result blockwise:
$$
\begin{aligned}
\widetildebE &=
\begin{bmatrix}
\bQ_r^\top \bA \\
\bQ_2^\top \bA
\end{bmatrix}
=
\widetildebU\bSigma\bV^\top
=
\begin{bmatrix}
\widetildebU_r & \widetildebU_{12} \\
\widetildebU_{21} & \widetildebU_{22}
\end{bmatrix}
\begin{bmatrix}
\bSigma_r & \bzero \\
\bzero & \bzero
\end{bmatrix}
\begin{bmatrix}
\bV_r^\top\\
\bV_2^\top
\end{bmatrix}
=
\begin{bmatrix}
\widetildebU_r \bSigma_r \\
\widetildebU_{21}\bSigma_r
\end{bmatrix}
\bV_r^\top \\
&\leadtosmall
\left\{
\begin{aligned}
\bQ_r^\top \bA &= \widetildebU_r \bSigma_r\bV_r^\top;\\
\bQ_2^\top \bA &= \widetildebU_{21}\bSigma_r \bV_r^\top.\\
\end{aligned}\right.
\end{aligned}
$$
By Equation~\eqref{equation:svd-random-au}, we have
$$
\widetildebU = \bQ^\top \bU =
\begin{bmatrix}
\bQ_r^\top \\
\bQ_2^\top
\end{bmatrix}
\begin{bmatrix}
\bU_r & \bU_2
\end{bmatrix}
=
\begin{bmatrix}
\bQ_r^\top \bU_r & \bQ_r^\top\bU_2 \\
\bQ_2^\top \bU_r & \bQ_2^\top\bU_2 \\
\end{bmatrix}
=
\begin{bmatrix}
\widetildebU_r & \widetildebU_{12} \\
\widetildebU_{21} & \widetildebU_{22}
\end{bmatrix}.
$$
Since $\cspace(\bQ_r) = \cspace(\bA)=\cspace(\bU_r)$, it follows that $\bQ_2^\top \bU_r=\bzero $ since $\bQ_2$ lies in the orthogonal complement of $\bQ_r$ (which is also the orthogonal complement of $\bU_r$). Therefore,
$$
\begin{aligned}
\widetildebE &=
\begin{bmatrix}
\bQ_r^\top \bA \\
\bzero
\end{bmatrix}.
\end{aligned}
$$
By Equation~\eqref{equation:svd-random-au} again,
$$
\bA=\bQ\widetildebE=
\begin{bmatrix}
\bQ_r & \bQ_2
\end{bmatrix}
\begin{bmatrix}
\bQ_r^\top \bA \\
\bzero
\end{bmatrix}
=
\bQ_r\bQ_r^\top \bA.
$$
We carefully notice that $\bQ_r^\top \bA = \widetildebU_r \bSigma_r\bV_r^\top$ above is exactly the (reduced) SVD of $\bQ_r^\top \bA \in\real^{r\times n}$.
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10]
To conclude the observation: suppose we find a matrix $\bD\in \real^{m\times r}$ that spans the column space of $\bA\in \real^{m\times n}$. Compute the \textit{reduced} QR decomposition $\bD=\bQ_r\bR$, so that $\bQ_r$ also spans the column space of $\bA$. Then calculate the SVD of the small matrix $\bE=\bQ_r^\top\bA \in\real^{r\times n} $, which costs $O(nr^2)$ flops: $\bE=\bQ_r^\top\bA=\widetildebU_r\bSigma_r\bV_r^\top$. The \textit{reduced} SVD of the large matrix can be obtained by
$$
\bA = \bQ_r(\bQ_r^\top\bA) = \underbrace{\bQ_r\widetildebU_r}_{\bU_r}\bSigma_r\bV_r^\top.
$$
By completing the matrices $\bU_r\in \real^{m\times r}, \bV_r\in \real^{n\times r}$ into full orthogonal matrices, we find the \textit{full} SVD of $\bA$.
\end{mdframed}
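The boxed observation can be sketched as follows, assuming NumPy is available; the rank factors $\bD$ and $\bF$ are generated synthetically for illustration:

```python
import numpy as np

def svd_from_column_basis(A, D):
    # Reduced QR of D: Q_r spans the column space of A.
    Q_r, _ = np.linalg.qr(D, mode='reduced')
    E = Q_r.T @ A                                 # small r x n matrix
    U_t, s, Vt = np.linalg.svd(E, full_matrices=False)
    return Q_r @ U_t, s, Vt                       # A = (Q_r U_t) Sigma V^T

rng = np.random.default_rng(1)
D = rng.standard_normal((8, 3))
F = rng.standard_normal((3, 6))
A = D @ F                                         # rank decomposition of A
U, s, Vt = svd_from_column_basis(A, D)
assert np.allclose(U @ np.diag(s) @ Vt, A)        # reduced SVD of A recovered
```

Only the $m\times r$ factor is ever orthogonalized, and only the small $r\times n$ matrix $\bE$ is passed to the dense SVD routine.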
Further, similar to the randomized algorithm for the interpolative decomposition in Section~\ref{section:randomi-id} (p.~\pageref{section:randomi-id}), by Lemma~\ref{lemma:column-basis-from-row-basis} (p.~\pageref{lemma:column-basis-from-row-basis}), for any matrix $\bA\in \real^{m\times n}$, if $\{\bg_1, \bg_2, \ldots, \bg_r\}$ is a set of vectors in $\real^n$ that forms a basis for the row space of $\bA$, then $\{\bA\bg_1, \bA\bg_2, \ldots, \bA\bg_r\}$ is a basis for the column space of $\bA$:
\begin{equation}\label{equation:random-svd-column-space1}
\cspace(\bA\bG) =\cspace(\bA).
\end{equation}
A small integer $k$ (say $k=10$) should be picked for over-sampling such that, with high probability, $\bG=[\bg_1, \ldots,\bg_r, \bg_{r+1}, \ldots, \bg_{r+k}]$ contains a basis for the row space of $\bA$. Other methods to ensure that $\bG$ spans the row space of $\bA$, or that $\bA\bG$ spans the column space of $\bA$, are discussed in Remark~\ref{remark:source-row-basis} (p.~\pageref{remark:source-row-basis}). Again, the choice $k=10$ is often good. The procedure is formulated in Algorithm~\ref{alg:svd-randomized}, which costs $O(mn(r+k))$ flops compared to the original $O(mn^2)$ flops. We notice that the leading term of $O(mn(r+k))$ flops comes from the matrix products $\bA\bG$ and $\bQ_r^\top\bA$ in steps 4 and 6. A structured choice of the random matrix $\bG$ can reduce the cost to $O(mn\log(r+k))$ flops \citep{martinsson2019randomized, ailon2006approximate}. We shall not give the details.
\begin{algorithm}[h]
\caption{A Randomized Method to Compute the SVD}
\label{alg:svd-randomized}
\begin{algorithmic}[1]
\Require
Rank-$r$ matrix $\bA$ with size $m\times n $;
\State Decide the over-sampling parameter $k$ (e.g., $k=10$), and let $z=r+k$;
\State Decide the iteration number: $\eta$ (e.g., $\eta=0,1$ or $2$);
\State Generate $r+k$ Gaussian random vectors in $\real^n$ into columns of matrix $\bG\in \real^{n\times (r+k)}$;\Comment{i.e., probably contain the row basis of $\bA$}
\State Initialize $\bD=\bA\bG \in \real^{m\times (r+k)}$; \Comment{probably $\cspace(\bD)=\cspace(\bA)$, $m(2n-1)(r+k)$ flops}
\State Compute the \textit{reduced} QR decomposition $\bD = \underbrace{\bQ_r}_{m\times (r+k)}\bR$;\Comment{$O(mz^2)$ flops}
\State Form matrix $\bE=\bQ_r^\top \bA \in\real^{(r+k)\times n}$; \Comment{$nz(2m-1)$ flops}
\State Compute the SVD of the small matrix $\bE$: $\bE = \bU_0\bSigma\bV^\top$; \Comment{$O(nz^2)$ flops}
\State Form $\bU=\bQ_r\bU_0$ such that the \textit{reduced} SVD of $\bA=\bU\bSigma\bV^\top$; \Comment{$mz(2z-1)$ flops}
\end{algorithmic}
\end{algorithm}
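A minimal sketch of Algorithm~\ref{alg:svd-randomized}, assuming NumPy is available (without the power-iteration option, i.e., $\eta=0$):

```python
import numpy as np

def randomized_svd(A, r, k=10, seed=42):
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((A.shape[1], r + k))   # Gaussian test matrix
    D = A @ G                                      # C(D) = C(A) w.h.p.
    Q_r, _ = np.linalg.qr(D, mode='reduced')       # reduced QR, m x (r+k)
    E = Q_r.T @ A                                  # small (r+k) x n matrix
    U0, s, Vt = np.linalg.svd(E, full_matrices=False)
    return (Q_r @ U0)[:, :r], s[:r], Vt[:r]        # keep the leading r terms

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 30))  # rank 4
U, s, Vt = randomized_svd(A, r=4)
assert np.allclose(U @ np.diag(s) @ Vt, A)         # exact up to roundoff here
```

Since the test matrix has exact rank $4 < r+k$, the sketch $\bA\bG$ captures the full column space almost surely and the reconstruction is exact up to roundoff; for a general matrix, the result is an approximation whose quality depends on the decay of the trailing singular values.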
\section{Polar Decomposition}
\begin{theoremHigh}[Polar Decomposition]\label{theorem:polar-decomposition}
For every real $n\times n$ square matrix $\bA$ with rank $r$, the matrix $\bA$ can be factored as
$$
\bA = \bQ_l \bS,
$$
where $\bQ_l$ is an orthogonal matrix and $\bS$ is a positive semidefinite matrix. This form is called the \textbf{left polar decomposition}. The matrix $\bA$ can also be factored as
$$
\bA = \bS\bQ_r,
$$
where $\bQ_r$ is an orthogonal matrix and $\bS$ is a positive semidefinite matrix. This form is called the \textbf{right polar decomposition}.
In both decompositions, the positive semidefinite factor $\bS$ is \textbf{unique}; when $\bA$ is nonsingular, the orthogonal factor is unique as well.
\end{theoremHigh}
Since every $n\times n$ square matrix $\bA$ has a full SVD $\bA = \bU \bSigma \bV^\top$, where both $\bU$ and $\bV$ are $n\times n$ orthogonal matrices, we have $\bA = (\bU\bV^\top)( \bV\bSigma \bV^\top) = \bQ_l\bS$, where it can be easily verified that $\bQ_l = \bU\bV^\top$ is an orthogonal matrix and $\bS = \bV\bSigma \bV^\top$ is a symmetric matrix. Since the singular values in $\bSigma$ are nonnegative, we can write $\bS=\bV\bSigma \bV^\top = (\bV\bSigma^{1/2})(\bV\bSigma^{1/2})^\top$, showing that $\bS$ is PSD.
Similarly, we have $\bA = \bU \bSigma \bU^\top \bU \bV^\top = (\bU \bSigma \bU^\top)( \bU \bV^\top)=\bS\bQ_r$, where $\bS=\bU \bSigma \bU^\top = (\bU\bSigma^{1/2})(\bU\bSigma^{1/2})^\top$ such that $\bS$ is PSD as well.
For the uniqueness of the PSD factor in the right polar decomposition, suppose two such decompositions are given by
$$
\bA = \bS_1\bQ_1 = \bS_2\bQ_2,
$$
such that
$$
\bS_1= \bS_2\bQ_2\bQ_1^\top.
$$
Since $\bS_1$ and $\bS_2$ are symmetric, we have
$$
\bS_1^2 = \bS_1\bS_1^\top = \bS_2\bQ_2\bQ_1^\top\bQ_1 \bQ_2^\top\bS_2 = \bS_2^2.
$$
This implies $\bS_1=\bS_2$ by the uniqueness of the positive semidefinite square root (Theorem~\ref{theorem:unique-factor-pd}, p.~\pageref{theorem:unique-factor-pd}). When $\bA$ is nonsingular, $\bS_1$ is invertible and $\bQ_1 = \bS_1^{-1}\bA = \bS_2^{-1}\bA = \bQ_2$, so the decomposition is unique. The uniqueness of the PSD factor in the left polar decomposition follows analogously.
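As a numerical check of the SVD-based construction above, assuming NumPy is available:

```python
import numpy as np

def polar_decompositions(A):
    # Full SVD A = U Sigma V^T of the square matrix A.
    U, s, Vt = np.linalg.svd(A)
    Q = U @ Vt                        # orthogonal factor U V^T
    S_left = Vt.T @ np.diag(s) @ Vt   # PSD factor of A = Q_l S  (left form)
    S_right = U @ np.diag(s) @ U.T    # PSD factor of A = S Q_r  (right form)
    return Q, S_left, S_right

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
Q, S_l, S_r = polar_decompositions(A)
assert np.allclose(Q @ S_l, A)                    # left polar decomposition
assert np.allclose(S_r @ Q, A)                    # right polar decomposition
assert np.allclose(Q.T @ Q, np.eye(4))            # Q is orthogonal
assert np.all(np.linalg.eigvalsh(S_l) >= -1e-10)  # S is PSD (up to roundoff)
```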
\begin{corollary}[Full Rank Polar Decomposition]
When $\bA\in \real^{n\times n}$ has full rank, then the $\bS$ in both the left and right polar decomposition above is a symmetric positive definite matrix.
\end{corollary}
\section{Generalized Singular Value Decomposition (GSVD)*}
Following from \citep{golub2013matrix}, we here give a short review for generalized singular value decomposition (GSVD). See also \citep{zhang2017matrix, bai1993computing, zha1989restricted, golub2013matrix, paige1981towards} for a more detailed discussion on GSVD.
\subsection{CS Decomposition}
\begin{theoremHigh}[CS Decomposition]
Suppose
$$
\begin{blockarray}{cccc}
\begin{block}{c[cc]c}
& \bQ_{11} & \bQ_{12} & m_1 \\
\bQ= & \bQ_{21} & \bQ_{22} & m_2 \\
\end{block}
& n_1 & n_2 & \\
\end{blockarray},
$$
is an orthogonal matrix with $m_1 \geq n_1$ and $m_1 \geq m_2$. Define the nonnegative integers $p$ and $q$ by $p=\max\{0, n_1-m_2\}$, and $q=\max\{0, m_2-n_1\}$. There exist orthogonal matrices $\bU_1 \in \real^{m_1\times m_1}$, $\bU_2\in \real^{m_2\times m_2}$, $\bV_1\in \real^{n_1\times n_1}$, and $\bV_2\in \real^{n_2\times n_2}$ such that if
$$
\bU = \begin{bmatrix}
\bU_1 & \bzero \\
\bzero & \bU_2
\end{bmatrix}
, \qquad
\text{and}
\qquad
\bV = \begin{bmatrix}
\bV_1 & \bzero \\
\bzero & \bV_2
\end{bmatrix},
$$
then
$$
\begin{blockarray}{ccccccc}
\begin{block}{c[ccccc]c}
& \bI & 0 & 0 & 0 & 0 & p \\
& 0 & \bC & \bS & 0 & 0 & n_1-p \\
\bU^\top\bQ\bV= & 0 & 0 & 0 &0 & \bI & m_1-n_1 \\
& 0 & \bS & -\bC & 0 & 0 & n_1-p \\
& 0 & 0 & 0 & \bI & 0 & q \\
\end{block}
& p & n_1-p & n_1-p& q & m_1-n_1& \\
\end{blockarray},
$$
where
$$
\begin{aligned}
\bC &= \diag(\cos(\theta_{p+1}), \ldots, \cos(\theta_{n_1})) = \diag(c_{p+1}, \ldots, c_{n_1}),\\
\bS &= \diag(\sin(\theta_{p+1}), \ldots, \sin(\theta_{n_1})) = \diag(s_{p+1}, \ldots, s_{n_1}),
\end{aligned}
$$
and $0\leq \theta_{p+1} \leq \ldots \leq \theta_{n_1} \leq \pi/2$.
\end{theoremHigh}
The proof can be found in \citep{paige1981towards} and a thin version of CS decomposition is provided in \citep{golub2013matrix}.
\subsection{Generalized Singular Value Decomposition (GSVD)}
\begin{theoremHigh}[Generalized Singular Value Decomposition]\label{theorem:gsvd-main}
Assume that $\bA \in \real^{m_1\times n_1}$ and $\bB \in \real^{m_2\times n_1}$ with $m_1 \geq n_1$ and
$$
r = rank(\begin{bmatrix}
\bA \\
\bB
\end{bmatrix}).
$$
Then there exist orthogonal matrices $\bU_1 \in \real^{m_1\times m_1}$ and $\bU_2\in \real^{m_2\times m_2}$, and invertible matrix $\bX \in \real^{n_1\times n_1}$ such that
$$
\begin{aligned}
&\begin{blockarray}{ccccc}
\begin{block}{c[ccc]c}
& \bI & \bzero & \bzero & p \\
\bU_1^\top \bA \bX = \bD_A = & \bzero & \bS_A & \bzero & r-p \\
& \bzero &\bzero& \bzero & m_1-r \\
\end{block}
& p & r-p & n_1-r & \\
\end{blockarray}\\
&\begin{blockarray}{ccccc}
\begin{block}{c[ccc]c}
& \bzero & \bzero & \bzero & p \\
\bU_2^\top \bB \bX = \bD_B = & \bzero & \bS_B & \bzero & r-p \\
& \bzero &\bzero& \bzero & m_2-r \\
\end{block}
& p & r-p & n_1-r & \\
\end{blockarray},
\end{aligned}
$$
where $p = \max\{r-m_2, 0\}$, $\bS_A = \diag(\alpha_{p+1}, \ldots, \alpha_r)$, and
$\bS_B = \diag(\beta_{p+1}, \ldots, \beta_r)$, and
$$
\alpha_i^2 + \beta_i^2 = 1, \qquad \forall i \in \{p+1, \ldots, r\}.
$$
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:gsvd-main}]
Suppose the SVD of $\begin{bmatrix}
\bA \\
\bB
\end{bmatrix}$ is given by
$$
\begin{bmatrix}
\bA \\
\bB
\end{bmatrix}
=
\begin{bmatrix}
\bQ_{11} & \bQ_{12} \\
\bQ_{21} &\bQ_{22}
\end{bmatrix}
\begin{bmatrix}
\bSigma_r & \bzero \\
\bzero & \bzero
\end{bmatrix}
\bP^\top,
$$
where $\bSigma_r \in \real^{r\times r}$ is nonsingular, $\bQ_{11}\in \real^{m_1\times r}$, and $\bQ_{21}\in \real^{m_2\times r}$. Applying the CS decomposition to $\bQ$, we obtain orthogonal matrices $\bU_1\in \real^{m_1\times m_1}$, $\bU_2\in \real^{m_2\times m_2}$, and $\bV_1 \in \real^{r\times r}$ such that
$$
\begin{bmatrix}
\bU_1 & \bzero \\
\bzero & \bU_2
\end{bmatrix}^\top
\begin{bmatrix}
\bQ_{11}\\
\bQ_{21}
\end{bmatrix}\bV_1
=
\begin{bmatrix}
\bD_A^r \\
\bD_B^r
\end{bmatrix},
$$
where $\bD_A^r$ and $\bD_B^r$ are the first $r$ columns of $\bD_A$ and $\bD_B$, respectively (note that a row permutation is needed here). It then follows that
$$
\begin{aligned}
\begin{bmatrix}
\bU_1 & \bzero \\
\bzero & \bU_2
\end{bmatrix}^\top
\begin{bmatrix}
\bA\\
\bB
\end{bmatrix}\bP
&=
\begin{bmatrix}
\bD_A^r & \bU_1^\top\bQ_{12}\\
\bD_B^r & \bU_2^\top\bQ_{22}
\end{bmatrix}
\begin{bmatrix}
\bV_1^\top \bSigma_r & \bzero \\
\bzero & \bzero
\end{bmatrix} \\
&= \begin{bmatrix}
\bD_A^r & \bzero\\
\bD_B^r & \bzero
\end{bmatrix}
\begin{bmatrix}
\bV_1^\top \bSigma_r & \bzero \\
\bzero & \bI_{n_1-r}
\end{bmatrix} \\
&= \begin{bmatrix}
\bD_A\\
\bD_B
\end{bmatrix}
\begin{bmatrix}
\bV_1^\top \bSigma_r & \bzero \\
\bzero & \bI_{n_1-r}
\end{bmatrix}. \\
\end{aligned}
$$
By setting
$$
\bX = \bP \begin{bmatrix}
\bV_1^\top \bSigma_r & \bzero \\
\bzero & \bI_{n_1-r}
\end{bmatrix}^{-1},
$$
we complete the proof.
\end{proof}
Note that if $\bB =\bI_{n_1}$, and we set $\bX = \bU_2$, then we obtain the SVD of $\bA$.
\section{Applications}
\subsection{Application: Least Squares via SVD for Rank Deficient Matrices}\label{section:application-ls-svd}
The least squares problem is described in Section~\ref{section:application-ls-qr} (p.~\pageref{section:application-ls-qr}). As a recap,
let's consider the overdetermined system $\bA\bx = \bb$, where $\bA\in \real^{m\times n}$ is the data matrix and $\bb\in \real^m$ with $m>n$ is the observation vector. Normally, $\bA$ has full column rank, since real-world data are very likely to be unrelated. The least squares (LS) solution minimizing $||\bA\bx-\bb||^2$ is then given by $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$, where $\bA^\top\bA$ is invertible since $\bA$ has full column rank and $rank(\bA^\top\bA)=rank(\bA)$.\index{Least squares}
However, if $\bA$ does not have full column rank, $\bA^\top\bA$ is not invertible. We can then use the SVD of $\bA$ to solve the LS problem, as we illustrate in the following theorem.
\begin{theorem}[LS via SVD for Rank Deficient Matrix]\label{theorem:svd-deficient-rank}
Let $\bA\in \real^{m\times n}$ with $rank(\bA)=r$ and let $\bA=\bU\bSigma\bV^\top$ be its full SVD, where $\bU\in\real^{m\times m}$ and $\bV\in \real^{n\times n}$ are orthogonal matrices. Suppose $\bU=[\bu_1, \bu_2, \ldots, \bu_m]$, $\bV=[\bv_1, \bv_2, \ldots, \bv_n]$, and $\bb\in \real^m$. Then the LS solution with the minimal 2-norm to $\bA\bx=\bb$ is given by
\begin{equation}\label{equation:svd-ls-solution}
\bx_{LS} = \sum_{i=1}^{r}\frac{\bu_i^\top \bb}{\sigma_i}\bv_i = \bV\bSigma^+\bU^\top \bb,
\end{equation}
where $\bSigma^+ \in \real^{n\times m}$ is diagonal in its upper-left block, $\bSigma^+ = \begin{bmatrix}
\bSigma_1^+ & \bzero \\
\bzero & \bzero
\end{bmatrix}$, where $\bSigma_1^+=\diag(\frac{1}{\sigma_1}, \frac{1}{\sigma_2}, \ldots, \frac{1}{\sigma_r})$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:svd-deficient-rank}]
Write out the loss to be minimized
$$
\begin{aligned}
||\bA\bx-\bb||^2 &= (\bA\bx-\bb)^\top(\bA\bx-\bb)\\
&=(\bA\bx-\bb)^\top\bU\bU^\top (\bA\bx-\bb) \qquad &(\text{Since $\bU$ is an orthogonal matrix})\\
&=||\bU^\top \bA \bx-\bU^\top\bb||^2 \qquad &(\text{Invariant under orthogonal})\\
&=||\bU^\top \bA \bV\bV^\top \bx-\bU^\top\bb||^2 \qquad &(\text{Since $\bV$ is an orthogonal matrix})\\
&=||\bSigma\balpha - \bU^\top\bb||^2 \qquad &(\text{Let $\balpha=\bV^\top \bx$})\\
&=\sum_{i=1}^{r}(\sigma_i\balpha_i - \bu_i^\top\bb)^2 +\sum_{i=r+1}^{m}(\bu_i^\top \bb)^2. \qquad &(\text{Since $\sigma_{r+1}=\sigma_{r+2}= \ldots= \sigma_m=0$})
\end{aligned}
$$
Since $\bx$ appears only through $\balpha$, we just need to set $\balpha_i = \frac{\bu_i^\top\bb}{\sigma_i}$ for all $i\in \{1, 2, \ldots, r\}$ to minimize the above quantity. The values of $\balpha_{r+1}, \balpha_{r+2}, \ldots, \balpha_{n}$ do not change the result; from the regularization point of view (or here, since we want the minimal 2-norm), we set them to 0. This gives us the LS solution via the SVD:
$$
\bx_{LS} = \sum_{i=1}^{r}\frac{\bu_i^\top \bb}{\sigma_i}\bv_i=\bV\bSigma^+\bU^\top \bb = \bA^+\bb,
$$
where $\bA^+=\bV\bSigma^+\bU^\top\in \real^{n\times m}$ is known as the \textbf{pseudo-inverse} of $\bA$. Please refer to Appendix~\ref{appendix:pseudo-inverse} (p.~\pageref{appendix:pseudo-inverse}) for a detailed discussion about pseudo-inverse where we also prove that the column space of $\bA^+$ is equal to the row space of $\bA$, and the row space of $\bA^+$ is equal to the column space of $\bA$.\index{Pseudo-inverse}
\end{proof}
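The minimum-norm solution of Theorem~\ref{theorem:svd-deficient-rank} can be sketched as follows, assuming NumPy is available; in floating point, a small tolerance replaces the exact rank $r$:

```python
import numpy as np

def lstsq_svd(A, b, tol=1e-10):
    # x_LS = sum_{i<=r} (u_i^T b / sigma_i) v_i, using the numerical rank.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol))                   # numerical rank of A
    return Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 5))  # rank deficient
b = rng.standard_normal(8)
x = lstsq_svd(A, b)
assert np.allclose(x, np.linalg.pinv(A) @ b)   # matches the pseudo-inverse solution
assert np.allclose(A.T @ (b - A @ x), 0)       # normal equations hold
```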
\begin{figure}[h!]
\centering
\includegraphics[width=0.98\textwidth]{imgs/lafundamental4-LS-SVD.pdf}
\caption{$\bA^+$: Pseudo-inverse of $\bA$.}
\label{fig:lafundamental4-LS-SVD}
\end{figure}
Let $\bA^+ = \bV\bSigma^+\bU^\top$ be the pseudo-inverse of $\bA$. The pseudo-inverse $\bA^+$ agrees with $\bA^{-1}$ when $\bA$ is invertible. The least squares solution makes the error $\bb-\bA\bx$ as small as possible in the mean square sense. Since $\bA\bx$ can never leave the column space of $\bA$, we should choose the closest point to $\bb$ in the column space \citep{strang1993fundamental}. This point is the projection $\bp$ of $\bb$. Then the error vector $\be=\bb-\bp$ has minimal length. In other words, the best combination $\bp = \bA\bx_{LS}$ is the projection of $\bb$ onto the column space. The error $\be$ is perpendicular to the column space. Therefore, $\be=\bb-\bA\bx_{LS}$ is in the null space of $\bA^\top$:
$$
\bA^\top(\bb-\bA\bx_{LS}) = \bzero \qquad \text{or} \qquad \bA^\top\bb=\bA^\top\bA\bx_{LS},
$$
which is also known as the normal equation of least squares. The relationship between $\be$ and $\bp$ is shown in Figure~\ref{fig:lafundamental4-LS-SVD}, where $\bb$ is split into $\bp+\be$. Since $\be$ is in $\nspace(\bA^\top)$ and perpendicular to $\cspace(\bA)$, and since we have shown in Section~\ref{section:property-svd} that $\{\bu_1,\bu_2, \ldots,\bu_r\}$ is an orthonormal basis of $\cspace(\bA)$, the first $r$ components of $\bU^\top\be$ are all zeros. Therefore, $\bA^+\be = \bV\bSigma^+\bU^\top\be=\bzero$. Moreover, $\bx_{LS}=\bA^+\bb = \bA^+(\bp+\be) = \bA^+\bp$.
Further, we have also shown in Section~\ref{section:property-svd} that $\{\bv_1, \bv_2, \ldots, \bv_r\} $ is an orthonormal basis of $\cspace(\bA^\top)$; thus $\bx_{LS} = \sum_{i=1}^{r}\frac{\bu_i^\top \bb}{\sigma_i}\bv_i$ lies entirely in the row space of $\bA$, i.e., it has no component in the null space of $\bA$.
Apart from this LS solution via the SVD, in practice a direct solution of the normal equations can lead to numerical difficulties when $\bA^\top\bA$ is close to singular. In particular, when two or more of the columns of $\bA$ are nearly co-linear, the resulting parameter values can have large magnitudes. Such near degeneracies are not uncommon when dealing with real data sets. The resulting numerical difficulties can be addressed using the SVD as well \citep{bishop2006pattern}.
\subsection{Application: Least Squares with Norm Ratio Method}\label{section:application-ls-svd-norm-ratio}
We first define the Frobenius norm as follows; a detailed discussion of matrix norms can be found in Appendix~\ref{appendix:matrix-norm} (p.~\pageref{appendix:matrix-norm}).
\begin{definition}[Frobenius Norm\index{Frobenius norm}]\label{definition:frobernius-in-svd}
The Frobenius norm of a matrix $\bA\in \real^{m\times n}$ is defined as
$$
|| \bA ||_F = \sqrt{\sum_{i=1,j=1}^{m,n} (\bA_{ij})^2}=\sqrt{tr(\bA\bA^\top)}=\sqrt{tr(\bA^\top\bA)} = \sqrt{\sigma_1^2+\sigma_2^2+\ldots+\sigma_r^2}.
$$
\end{definition}
Following the setup in the last section, let $\bA_k \in \real^{m\times n}$ be the rank-$k$ approximation to the original $m\times n$ matrix $\bA$. Define the \textit{Frobenius norm ratio} \citep{zhang2017matrix} as
$$
\nu(k) = \frac{||\bA_k||_F}{||\bA||_F} = \frac{\sqrt{\sigma_1^2+\sigma_2^2+\ldots +\sigma_k^2}}{\sqrt{\sigma_1^2+\sigma_2^2+\ldots +\sigma_h^2}}, \quad h = \min\{m,n\},
$$
where
$\bA_k$ is the truncated SVD of $\bA$ with the largest $k$ terms, i.e., $\bA_k = \sum_{i=1}^{k} \sigma_i\bu_i\bv_i^\top$ from SVD of $\bA=\sum_{i=1}^{r} \sigma_i\bu_i\bv_i^\top$.
We choose the minimum integer $k$ satisfying
$$
\nu(k) \geq \alpha
$$
as the \textit{effective rank estimate} $\hat{r}$, where $\alpha\leq 1$ is a threshold, usually chosen as $\alpha=0.997$.
After the effective rank $\hat{r}$ has been determined, we replace the $\hat{r}$ in Equation~\eqref{equation:svd-ls-solution},
$$
\hat{\bx}_{LS} = \sum_{i=1}^{\textcolor{blue}{\hat{r}}}\frac{\bu_i^\top \bb}{\sigma_i}\bv_i ,
$$
which can be regarded as an approximation to the LS solution $\bx_{LS}$. This solution is the LS solution of the linear equation $\bA_{\hat{r}}\bx = \bb$, where
$$
\bA_{\hat{r}} = \sum_{i=1}^{\hat{r}} \sigma_i \bu_i\bv_i^\top.
$$
The filtering method introduced above is useful when the matrix $\bA$ is noisy.
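A minimal sketch of the norm-ratio method, assuming NumPy is available; the test matrix and its singular values are synthetic choices for illustration:

```python
import numpy as np

def effective_rank(A, alpha=0.997):
    # nu(k) = ||A_k||_F / ||A||_F, computed from the singular values.
    s = np.linalg.svd(A, compute_uv=False)
    nu = np.sqrt(np.cumsum(s**2) / np.sum(s**2))   # nu(1), nu(2), ...
    return int(np.searchsorted(nu, alpha) + 1)     # smallest k with nu(k) >= alpha

rng = np.random.default_rng(5)
# Matrix with 4 dominant singular values plus a tiny noise floor.
U, _ = np.linalg.qr(rng.standard_normal((30, 20)))
V, _ = np.linalg.qr(rng.standard_normal((20, 20)))
s_true = np.array([10.0, 5.0, 2.0, 1.5] + [1e-6] * 16)
A = U @ np.diag(s_true) @ V.T
assert effective_rank(A) == 4                      # the noise floor is filtered out
```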
\subsection{Application: Principal Component Analysis (PCA) via the Spectral Decomposition and the SVD}
Given a data set of $n$ observations $\{\bx_1,\bx_2,\ldots,\bx_n\}$ where $\bx_i\in \real^p$ for all $i\in \{1,2,\ldots,n\}$. Our goal is to project the data onto a low-dimensional space, say $m<p$. Define the sample mean vector and sample covariance matrix\index{Principal component analysis}
$$
\overline{\bx} = \frac{1}{n}\sum_{i=1}^{n}\bx_i
\qquad
\text{and}
\qquad
\bS = \frac{1}{n-1}\sum_{i=1}^{n} (\bx_i - \overline{\bx})(\bx_i-\overline{\bx})^\top,
$$
where the $n-1$ term in the covariance matrix makes it an unbiased and consistent estimator of the covariance \citep{lu2021rigorous}. The covariance matrix can also be defined as $\bS = \frac{1}{\textcolor{blue}{n}}\sum_{i=1}^{n} (\bx_i - \overline{\bx})(\bx_i-\overline{\bx})^\top$, which is also a consistent estimator of the covariance matrix.\footnote{Consistency: an estimator $\theta_n $ of $\theta$ constructed on the basis of a sample of size $n$ is said to be consistent if $\theta_n\stackrel{p}{\rightarrow} \theta$ as $n \rightarrow \infty $.}
Each data point $\bx_i$ is then projected onto a scalar value $\bu_1^\top\bx_i$ by a unit vector $\bu_1$. The mean of the projected data is $\Exp[\bu_1^\top\bx_i] = \bu_1^\top \overline{\bx}$, and the variance of the projected data is given by
$$
\begin{aligned}
\Cov[\bu_1^\top\bx_i] &= \frac{1}{n-1} \sum_{i=1}^{n}( \bu_1^\top \bx_i - \bu_1^\top\overline{\bx})^2=
\frac{1}{n-1} \sum_{i=1}^{n}\bu_1^\top ( \bx_i -\overline{\bx})( \bx_i -\overline{\bx})^\top\bu_1\\
&=\bu_1^\top\bS\bu_1.
\end{aligned}
$$
We want to maximize the projected variance $\bu_1^\top\bS\bu_1$ with respect to $\bu_1$, where we must constrain $||\bu_1||$ to prevent $||\bu_1|| \rightarrow \infty$; we set $\bu_1^\top\bu_1=1$. Introducing a Lagrange multiplier (see \citep{bishop2006pattern, boyd2004convex}), we maximize
$$
\bu_1^\top\bS\bu_1 + \lambda_1 (1 - \bu_1^\top\bu_1).
$$
Setting the derivative with respect to $\bu_1$ to zero leads to
$$
\bS\bu_1 = \lambda_1\bu_1 \leadto \bu_1^\top\bS\bu_1 = \lambda_1.
$$
That is, $\bu_1$ is an eigenvector of $\bS$ corresponding to the eigenvalue $\lambda_1$, and the maximum variance projection $\bu_1$ corresponds to the largest eigenvalue of $\bS$. This eigenvector is known as the \textit{first principal axis}.
Defining the other principal axes by the eigenvectors of decreasing eigenvalues until we have $m$ such principal components brings about the dimension reduction. This is known as the \textit{maximum variance formulation} of PCA \citep{hotelling1933analysis, bishop2006pattern, shlens2014tutorial}. A \textit{minimum-error formulation} of PCA is discussed in \citep{pearson1901liii, bishop2006pattern}.
\paragraph{PCA via the spectral decomposition}
Now let's assume the data are centered such that $\overline{\bx}$ is zero, or we can set $\bx_i = \bx_i-\overline{\bx}$ to centralize the data. Let the data matrix $\bX \in \real^{n\times p}$ contain the data observations as rows. The covariance matrix is given by
$$
\bS= \frac{\bX^\top\bX}{n-1},
$$
which is a symmetric matrix, and its spectral decomposition is given by
\begin{equation}\label{equation:pca-equ1}
\bS = \bU\bLambda\bU^\top,
\end{equation}
where $\bU$ is an orthogonal matrix of eigenvectors (columns of $\bU$ are eigenvectors of $\bS$), and $\bLambda=\diag(\lambda_1, \lambda_2,\ldots, \lambda_p)$ is a diagonal matrix with eigenvalues (ordered such that $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_p$). The eigenvectors are called \textit{principal axes} of the data, and they \textit{decorrelate} the covariance matrix. Projections of the data on the principal axes are called the \textit{principal components}. The $i$-th principal component is given by the $i$-th column of $\bX\bU$. If we want to reduce the dimension from $p$ to $m$, we just select the first $m$ columns of $\bX\bU$.
\paragraph{PCA via the SVD}
If the SVD of $\bX$ is given by $\bX = \bP\bSigma\bQ^\top$, then the covariance matrix can be written as
\begin{equation}\label{equation:pca-equ2}
\bS= \frac{\bX^\top\bX}{n-1} = \bQ \frac{\bSigma^2}{n-1}\bQ^\top,
\end{equation}
where $\bQ\in \real^{p\times p}$ is an orthogonal matrix containing the right singular vectors of $\bX$, and the upper-left part of $\bSigma$ is a diagonal matrix containing the singular values $\diag(\sigma_1,\sigma_2,\ldots)$ with $\sigma_1\geq \sigma_2\geq \ldots$. The number of singular values equals $\min\{n,p\}$, which is not larger than $p$, and some of them may be zero.
Comparing Equation~\eqref{equation:pca-equ2} with Equation~\eqref{equation:pca-equ1} shows that Equation~\eqref{equation:pca-equ2} is also a spectral decomposition of $\bS$, since the eigenvalues in $\bLambda$ and the singular values in $\bSigma$ are both ordered descendingly and the spectral decomposition is unique in terms of its eigenspaces (Section~\ref{section:uniqueness-spectral-decomposition}, p.~\pageref{section:uniqueness-spectral-decomposition}).
It follows that the right singular vectors $\bQ$ are also the principal axes that decorrelate the covariance matrix,
and the singular values are related to the eigenvalues of the covariance matrix via $\lambda_i = \frac{\sigma_i^2 }{n-1}$. To reduce the dimensionality of the data from $p$ to $m$, we select the largest $m$ singular values and the corresponding right singular vectors. This is also related to the truncated SVD (TSVD) $\bX_m = \sum_{i=1}^{m}\sigma_i \bp_i\bq_i^\top$ that will be shown in the next section, where the $\bp_i$'s and $\bq_i$'s are the columns of $\bP$ and $\bQ$.
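The agreement between the two routes, including the identity $\lambda_i = \sigma_i^2/(n-1)$, can be checked numerically; a minimal sketch assuming NumPy is available (the data are synthetic, and principal components agree only up to column signs):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, m = 100, 5, 2
X = rng.standard_normal((n, p))
X = X - X.mean(axis=0)                        # center the data

# PCA via the spectral decomposition of S = X^T X / (n-1).
S = X.T @ X / (n - 1)
lam, U = np.linalg.eigh(S)                    # ascending eigenvalues
lam, U = lam[::-1], U[:, ::-1]                # reorder descending

# PCA via the SVD of X.
P, sigma, Qt = np.linalg.svd(X, full_matrices=False)

assert np.allclose(lam, sigma**2 / (n - 1))   # lambda_i = sigma_i^2 / (n-1)
pc_spec = X @ U[:, :m]                        # first m principal components
pc_svd = X @ Qt[:m].T
assert np.allclose(np.abs(pc_spec), np.abs(pc_svd))  # agree up to sign
```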
\paragraph{A byproduct of PCA via the SVD for high-dimensional data} For a principal axis $\bu_i$ of $\bS = \frac{\bX^\top\bX}{n-1}$, we have
$$
\frac{\bX^\top\bX}{n-1} \bu_i = \lambda_i \bu_i.
$$
Left-multiplying by $\bX$, we obtain
$$
\frac{\bX\bX^\top}{n-1} (\bX\bu_i) = \lambda_i (\bX\bu_i),
$$
which implies that $\lambda_i$ is also an eigenvalue of $\frac{\bX\bX^\top}{n-1} \in \real^{n\times n}$, with corresponding eigenvector $\bX\bu_i$. This is also stated in the proof of Theorem~\ref{theorem:reduced_svd_rectangular}, the existence of the SVD. If $p \gg n$, instead of finding the eigenvectors of $\bS$, i.e., the principal axes of $\bS$, we can find the eigenvectors of $\frac{\bX\bX^\top}{n-1}$. This reduces the complexity from $O(p^3)$ to $O(n^3)$. Suppose now that $\bv_i$ is the eigenvector of $\frac{\bX\bX^\top}{n-1}$ corresponding to a nonzero eigenvalue $\lambda_i$,
$$
\frac{\bX\bX^\top}{n-1} \bv_i = \lambda_i \bv_i.
$$
Left-multiplying by $\bX^\top$, we obtain
$$
\frac{\bX^\top\bX}{n-1} (\bX^\top\bv_i) = \bS(\bX^\top\bv_i) = \lambda_i (\bX^\top\bv_i),
$$
i.e., the eigenvector $\bu_i$ of $\bS$ is proportional to $\bX^\top\bv_i$, where $\bv_i$ is the eigenvector of $\frac{\bX\bX^\top}{n-1}$ corresponding to the same eigenvalue $\lambda_i$. A further normalization step is needed to make $||\bu_i||=1$.
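The trick above can be sketched as follows, assuming NumPy is available; the data matrix and the sizes $n$, $p$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 20, 500                                  # p >> n
X = rng.standard_normal((n, p))
X = X - X.mean(axis=0)

# Solve the small n x n problem instead of the p x p covariance problem.
K = X @ X.T / (n - 1)
lam, V = np.linalg.eigh(K)
lam, V = lam[::-1], V[:, ::-1]                  # descending order

# Recover the leading principal axis: u_1 proportional to X^T v_1, then normalize.
u1 = X.T @ V[:, 0]
u1 = u1 / np.linalg.norm(u1)

S = X.T @ X / (n - 1)                           # the p x p covariance matrix
assert np.allclose(S @ u1, lam[0] * u1)         # u1 is an eigenvector of S
```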
\subsection{Application: Low-Rank Approximation}\label{section:svd-low-rank-approxi}
For a low-rank approximation problem, there are basically two types, arising from the interplay of rank and error: the \textit{fixed-precision approximation problem} and the \textit{fixed-rank approximation problem}. In the fixed-precision approximation problem, for a given matrix $\bA$ and a given tolerance $\epsilon$, one wants to find a matrix $\bB$
with rank $r = r(\epsilon)$ such that $||\bA-\bB|| \leq \epsilon$ in an appropriate matrix norm. On the contrary, in the fixed-rank approximation problem, one looks for a matrix $\bB$ with a fixed rank $k$ and an error $||\bA-\bB||$ as small as possible. In this section, we will consider the latter. Some excellent examples can also be found in \citep{kishore2017literature, martinsson2019randomized}.
Suppose we want to approximate matrix $\bA\in \real^{m\times n}$ with rank $r$ by a rank $k<r$ matrix $\bB$. The approximation is measured by spectral norm:\index{Low-rank approximation}
$$
\bB = \mathop{\arg\min}_{\bB} \ \ || \bA - \bB||_2,
$$
where the spectral norm is defined as follows:
\begin{definition}[Spectral Norm]\label{definition:spectral_norm}
The spectral norm of a matrix $\bA\in \real^{m\times n}$ is defined as
$$
||\bA||_2 = \mathop{\max}_{\bx\neq\bzero} \frac{||\bA\bx||_2}{||\bx||_2} =\mathop{\max}_{\bx\in \real^n: ||\bx||_2=1} ||\bA\bx||_2 ,
$$
which is also the maximal singular value of $\bA$, i.e., $||\bA||_2 = \sigma_1(\bA)$.
\end{definition}
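The characterization $||\bA||_2 = \sigma_1(\bA)$ is easy to check numerically; a small sketch assuming NumPy is available (\texttt{numpy.linalg.norm} with \texttt{ord=2} computes the spectral norm of a matrix):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((7, 4))
sigma = np.linalg.svd(A, compute_uv=False)     # singular values, descending
# The spectral norm equals the largest singular value.
assert np.isclose(np.linalg.norm(A, 2), sigma[0])
# And it dominates ||Ax||_2 for any unit vector x.
x = rng.standard_normal(4)
x = x / np.linalg.norm(x)
assert np.linalg.norm(A @ x) <= sigma[0] + 1e-12
```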
Then, we can recover the best rank-$k$ approximation by the following theorem.
\begin{theorem}[Eckart-Young-Misky Theorem w.r.t. Spectral Norm\index{Eckart-Young-Misky theorem}]\label{theorem:young-theorem-spectral}
Given matrix $\bA\in \real^{m\times n}$ and $1\leq k\leq rank(\bA)=r$, and let $\bA_k$ be the truncated SVD (TSVD) of $\bA$ with the largest $k$ terms, i.e., $\bA_k = \sum_{i=1}^{k} \sigma_i\bu_i\bv_i^\top$ from SVD of $\bA=\sum_{i=1}^{r} \sigma_i\bu_i\bv_i^\top$ by zeroing out the $r-k$ trailing singular values of $\bA$. Then $\bA_k$ is the best rank-$k$ approximation to $\bA$ in terms of the spectral norm.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:young-theorem-spectral}]
We need to show for any matrix $\bB$, if $rank(\bB)=k$, then $||\bA-\bB||_2 \geq ||\bA-\bA_k||_2$.
Since $rank(\bB)=k$, we have $dim (\nspace(\bB))=n-k$. As a result, any $(k+1)$-dimensional subspace of $\real^n$ intersects $\nspace(\bB)$ nontrivially. As shown in Lemma~\ref{lemma:svd-four-orthonormal-Basis}, $\{\bv_1,\bv_2, \ldots, \bv_r\}$ is an orthonormal basis of $\cspace(\bA^\top)\subset \real^n$, so we can take the $(k+1)$-dimensional subspace spanned by the first $k+1$ of the $\bv_i$'s. Let $\bV_{k+1} = [\bv_1, \bv_2, \ldots, \bv_{k+1}]$; then there is a vector $\bx$ with
$$
\bx \in \nspace(\bB) \cap \cspace(\bV_{k+1}),\qquad s.t.\,\,\,\, ||\bx||_2=1.
$$
That is, $\bx = \sum_{i=1}^{k+1} a_i \bv_i$ with $||\bx||_2^2 = \sum_{i=1}^{k+1}a_i^2=1$.
Thus,
$$
\begin{aligned}
||\bA-\bB||_2^2 &\geq \frac{||(\bA-\bB)\bx||_2^2}{||\bx||_2^2}, \qquad &(\text{From definition of spectral norm}) \\
&= ||\bA\bx||_2^2, \qquad &(\text{$\bx$ in null space of $\bB$, $||\bx||_2=1$}) \\
&=\sum_{i=1}^{k+1} \sigma_i^2 (\bv_i^\top \bx)^2, \qquad &(\text{$\bx$ orthogonal to $\bv_{k+2}, \ldots, \bv_r$}) \\
&\geq \sigma_{k+1}^2\sum_{i=1}^{k+1} (\bv_i^\top \bx)^2, \qquad &(\sigma_{k+1}\leq \sigma_{k}\leq\ldots\leq \sigma_{1}) \\
&= \sigma_{k+1}^2\sum_{i=1}^{k+1} a_i^2, \qquad &(\bv_i^\top \bx = a_i) \\
&= \sigma_{k+1}^2.
\end{aligned}
$$
On the other hand, $||\bA-\bA_k||_2^2 = ||\sum_{i=k+1}^{r}\sigma_i\bu_i\bv_i^\top||_2^2=\sigma_{k+1}^2$. Thus, $||\bA-\bB||_2 \geq \sigma_{k+1} = ||\bA-\bA_k||_2$, which completes the proof.
\end{proof}
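The theorem is easy to verify numerically. Below is a minimal NumPy sketch (the matrix, the rank $k$, and the rank-$k$ competitor are arbitrary illustrative choices): it checks that the truncated SVD attains the spectral-norm error $\sigma_{k+1}$, and that a random rank-$k$ competitor does no better.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
k = 3

# Truncated SVD: keep the k largest singular triplets.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] * s[:k] @ Vt[:k, :]

# The spectral-norm error of the truncated SVD equals sigma_{k+1}.
err = np.linalg.norm(A - A_k, 2)
assert np.isclose(err, s[k])

# Any other rank-k matrix does at least as badly, e.g., a random one.
B = rng.standard_normal((8, k)) @ rng.standard_normal((k, 6))
assert np.linalg.norm(A - B, 2) >= err - 1e-12
```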
\begin{SCfigure}
\centering
\includegraphics[width=0.6\textwidth]{./imgs/eng300.png}
\caption{A gray flag image to be compressed.}
\label{fig:eng300}
\end{SCfigure}
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[$\sigma_1\bu_1\bv_1^\top$, $F_1 = 60,217$.]{\label{fig:svd1}%
\includegraphics[width=0.32\linewidth]{./imgs/svd_pic1.png}}
\subfigure[$\sigma_2\bu_2\bv_2^\top$, $F_2 = 120,150$.]{\label{fig:svd2}%
\includegraphics[width=0.32\linewidth]{./imgs/svd_pic2.png}}
\subfigure[$\sigma_3\bu_3\bv_3^\top$, $F_3 = 124,141$.]{\label{fig:svd3}%
\includegraphics[width=0.32\linewidth]{./imgs/svd_pic3.png}}\\
\subfigure[$\sigma_4\bu_4\bv_4^\top$, $F_4 = 125,937$.]{\label{fig:svd4}%
\includegraphics[width=0.32\linewidth]{./imgs/svd_pic4.png}}
\subfigure[$\sigma_5\bu_5\bv_5^\top$, $F_5 = 126,127$.]{\label{fig:svd5}%
\includegraphics[width=0.32\linewidth]{./imgs/svd_pic5.png}}
\subfigure[All 5 singular values: $\sum_{i=1}^{5}\sigma_i\bu_i\bv_i^\top$, \protect\newline $F=44,379$.]{\label{fig:svd6}%
\includegraphics[width=0.32\linewidth]{./imgs/svd_pic6_all.png}}
\caption{Image compression of the gray flag image into a rank-5 matrix via the SVD, decomposed into 5 parts with $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_{5}$, i.e., $F_1\leq F_2\leq \ldots \leq F_5$, where $F_i = ||\sigma_i\bu_i\bv_i^\top - \bA||_F$ for $i\in \{1,2,\ldots, 5\}$. Each image is reconstructed from a single singular value and its corresponding left and right singular vectors.}
\label{fig:svdd-by-parts}
\end{figure}
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[$\bc_1\br_1^\top$, $G_1 = 60,464$.]{\label{fig:skeleton1}%
\includegraphics[width=0.32\linewidth]{./imgs/skeleton_5_1.png}}
\subfigure[$\bc_2\br_2^\top$, $G_2 = 122,142$.]{\label{fig:skeleton2}%
\includegraphics[width=0.32\linewidth]{./imgs/skeleton_5_2.png}}
\subfigure[$\bc_3\br_3^\top$, $G_3 = 123,450$.]{\label{fig:skeleton3}%
\includegraphics[width=0.32\linewidth]{./imgs/skeleton_5_3.png}}\\
\subfigure[$\bc_4\br_4^\top$, $G_4 = 125,975$.]{\label{fig:skeleton4}%
\includegraphics[width=0.32\linewidth]{./imgs/skeleton_5_5.png}}
\subfigure[$\bc_5\br_5^\top$, $G_5 = 124,794$.]{\label{fig:skeleton5}%
\includegraphics[width=0.32\linewidth]{./imgs/skeleton_5_4.png}}
\subfigure[All 5 parts: $\sum_{i=1}^{5}\bc_i\br_i^\top$, \protect\newline $G=45,905$.]{\label{fig:skeleton6}%
\includegraphics[width=0.32\linewidth]{./imgs/svd_pic6_all.png}}
\caption{Image compression of the gray flag image into a rank-5 matrix via the pseudoskeleton decomposition, decomposed into 5 parts with $G_i=||\bc_i\br_i^\top-\bA||_F$ for $i\in \{1,2,\ldots, 5\}$ and $G_1\leq G_2 \leq \ldots \leq G_5$. Each image is reconstructed from $\bc_i\br_i^\top$.}
\label{fig:skeletond-by-parts}
\end{figure}
Moreover, readers can prove that $\bA_k$ is also the best rank-$k$ approximation to $\bA$ in terms of the Frobenius norm, where the minimal error is given by the Euclidean norm of the zeroed-out singular values: $||\bA-\bA_k||_F =\sqrt{\sigma_{k+1}^2 +\sigma_{k+2}^2+\ldots +\sigma_{r}^2}$.
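This Frobenius-norm identity can likewise be checked numerically; a short sketch (random matrix and rank chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 7))
k = 4

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] * s[:k] @ Vt[:k, :]  # truncated SVD, rank k

# ||A - A_k||_F equals the Euclidean norm of the zeroed-out singular values.
err_F = np.linalg.norm(A - A_k, 'fro')
assert np.isclose(err_F, np.sqrt(np.sum(s[k:] ** 2)))
```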
The SVD thus gives the best low-rank approximation of a matrix. As mentioned in \citep{stewart1998matrix, kishore2017literature}, \emph{the singular value decomposition is the creme de la creme of rank-reducing decompositions---the decomposition that all others try to beat}; and in the words of \citep{strang1993introduction}, \emph{the SVD is the climax of this linear algebra course}.
Figure~\ref{fig:eng300} shows an example of a gray image to be compressed. The size of the image is $600\times 1200$, with a rank of 402.
In Figure~\ref{fig:svdd-by-parts}, we approximate the image by a rank-5 matrix via the truncated SVD: $\bA\approx \sum_{i=1}^{5}\sigma_i\bu_i\bv_i^\top$. It is known that the singular values carry the spectral information of the image, with larger singular values corresponding to lower-frequency components, and the low-frequency components carry most of the useful information \citep{leondes1995multidimensional}. We find that the image $\sigma_1\bu_1\bv_1^\top$, reconstructed from the first singular value $\sigma_1$, the first left singular vector $\bu_1$, and the first right singular vector $\bv_1$, is already very close to the original flag image, while the second through fifth images, reconstructed from the corresponding singular values and singular vectors, contribute finer details of the flag.
Similar results can be observed for the low-rank approximation via the pseudoskeleton decomposition (Section~\ref{section:pseudoskeleton}, p.~\pageref{section:pseudoskeleton}). In Equation~\eqref{equation:skeleton-low-rank} (p.~\pageref{equation:skeleton-low-rank}), we derived the low-rank approximation $\bA\approx \bC_2\bR_2$, where $\bC_2\in \real^{m\times \gamma}$ and $\bR_2\in \real^{\gamma\times n}$ for $\bA\in \real^{m\times n}$, such that $\bC_2$ and $\bR_2$ are rank-$\gamma$ matrices. Suppose $\gamma=5$, and
$$
\bC_2=[\bc_1, \bc_2, \ldots, \bc_5],
\qquad
\text{and}
\qquad
\bR_2 =
\begin{bmatrix}
\br_1^\top \\
\br_2^\top \\
\vdots \\
\br_5^\top
\end{bmatrix},
$$
are the column and row partitions of $\bC_2$ and $\bR_2$, respectively. Then $\bA$ can be approximated by $\sum_{i=1}^{5}\bc_i\br_i^\top$. The partitions are ordered such that
$$
\underbrace{||\bc_1\br_1^\top-\bA||_F}_{G_1} \leq
\underbrace{||\bc_2\br_2^\top-\bA||_F}_{G_2}
\leq \ldots \leq
\underbrace{||\bc_5\br_5^\top-\bA||_F}_{G_5}.
$$
We observe that $\bc_1\br_1^\top$ behaves similarly to $\sigma_1\bu_1\bv_1^\top$: the reconstruction errors measured by the Frobenius norm are very close (60,464 in the pseudoskeleton case compared to 60,217 in the SVD case). This is partly because the pseudoskeleton decomposition relies on the SVD internally (Section~\ref{section:pseudoskeleton}, p.~\pageref{section:pseudoskeleton}), so that $\bc_1\br_1^\top$ captures the largest ``singular value" in this sense.
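For readers who want to experiment, here is a minimal sketch of a pseudoskeleton (CUR-type) approximation in NumPy. It simply samples $\gamma$ rows and columns uniformly at random, which is only a crude stand-in for the construction of Section~\ref{section:pseudoskeleton}:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 120))
gamma = 5

# Sample gamma column and row indices uniformly at random.
cols = rng.choice(A.shape[1], size=gamma, replace=False)
rows = rng.choice(A.shape[0], size=gamma, replace=False)

C = A[:, cols]             # m x gamma: selected columns
R = A[rows, :]             # gamma x n: selected rows
W = A[np.ix_(rows, cols)]  # gamma x gamma intersection block

# Pseudoskeleton approximation A ~= C W^+ R, a matrix of rank <= gamma.
A_ps = C @ np.linalg.pinv(W) @ R
assert np.linalg.matrix_rank(A_ps) <= gamma
```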
\begin{SCfigure}
\centering
\includegraphics[width=0.5\textwidth]{./imgs/svd_skeleton_fnorm.pdf}
\caption{Comparison of reconstruction errors measured by Frobenius norm between the SVD and the pseudoskeleton approximation.}
\label{fig:svd_skeleton_fnorm}
\end{SCfigure}
We finally compare the low-rank approximations via the SVD and the pseudoskeleton decomposition for different ranks. Figure~\ref{fig:svdd-pseudoskeleton} shows each compression with ranks of 90, 60, 30, and 10. We observe that the SVD does well with ranks of 90, 60, and 30. The pseudoskeleton approximation compresses the black horizontal and vertical lines in the image well, but it performs poorly on the details of the flag. Figure~\ref{fig:svd_skeleton_fnorm} compares the reconstruction errors of the SVD and the pseudoskeleton approximation, measured by the Frobenius norm, for ranks from $1$ to $100$; in all cases, the truncated SVD does better in terms of the Frobenius norm. Similar results can be observed for the spectral norm.
\newpage
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[SVD with rank 90,\gap\gap \protect\newline Frobenius norm=\textbf{6,498}]{\label{fig:svd90}
\includegraphics[width=0.47\linewidth]{./imgs/svd90.png}}
\quad
\subfigure[Pseudoskeleton with rank 90,\protect\newline Frobenius norm=13,751]{\label{fig:skeleton90}
\includegraphics[width=0.47\linewidth]{./imgs/skeleton90.png}}\\
\subfigure[SVD with rank 60,\gap\gap \protect\newline Frobenius norm=\textbf{8,956}]{\label{fig:svd60}
\includegraphics[width=0.47\linewidth]{./imgs/svd60.png}}
\quad
\subfigure[Pseudoskeleton with rank 60,\protect\newline Frobenius norm=14,217]{\label{fig:skeleton60}
\includegraphics[width=0.47\linewidth]{./imgs/skeleton60.png}}\\
\subfigure[SVD with rank 30,\gap\gap \protect\newline Frobenius norm=\textbf{14,586}]{\label{fig:svd30}
\includegraphics[width=0.47\linewidth]{./imgs/svd30.png}}
\quad
\subfigure[Pseudoskeleton with rank 30,\protect\newline Frobenius norm=17,853]{\label{fig:skeleton30}
\includegraphics[width=0.47\linewidth]{./imgs/skeleton30.png}}
\subfigure[SVD with rank 10,\gap\gap \protect\newline Frobenius norm=\textbf{31,402}]{\label{fig:svd10}
\includegraphics[width=0.47\linewidth]{./imgs/svd10.png}}
\quad
\subfigure[Pseudoskeleton with rank 10,\protect\newline Frobenius norm=33,797]{\label{fig:skeleton10}
\includegraphics[width=0.47\linewidth]{./imgs/skeleton10.png}}
\caption{Image compression for gray flag image with different ranks.}
\label{fig:svdd-pseudoskeleton}
\end{figure}
\chapter{CP Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{CP Decomposition}
\begin{theoremHigh}[CP Decomposition]\label{theorem:cp-decomp}
The CP decomposition factorizes a tensor into a sum of rank-one component tensors. A general Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ admits the CP decomposition
$$
\eX \approx \llbracket \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\sum_{r=1}^{R} \ba_r^{(1)} \circ \ba_r^{(2)} \circ \ldots \circ \ba_r^{(N)} ,
$$
where $\bA^{(n)}=[\ba_1^{(n)}, \ba_2^{(n)}, \ldots, \ba_R^{(n)}] \in \real^{I_n \times R}$ is the column partition of the $n$th factor matrix for all $n\in \{1,2,\ldots, N\}$.
Alternatively, it is often useful to assume that the columns of $\bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)}$ are normalized to unit length, with the weights absorbed into the vector $\blambda\in \real^R$, so that
$$
\eX \approx \llbracket \blambda; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=\sum_{r=1}^{R} \blambda_r \cdot \ba_r^{(1)} \circ \ba_r^{(2)} \circ \ldots \circ \ba_r^{(N)} ,
$$
where we follow the notation $\llbracket \ldots \rrbracket$ from \citep{kruskal1977three, kolda2009tensor}.
\end{theoremHigh}
The idea behind the CP decomposition is thus to express the tensor as a sum of rank-one tensors, i.e., a sum of outer products of vectors (Section~\ref{section:rank-one-tensor}, p.~\pageref{section:rank-one-tensor}). The CP decomposition is also known as the \textit{Canonical Polyadic Decomposition}, \textit{CANDECOMP/PARAFAC}, or simply the \textit{PARAFAC decomposition}.
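To make the sum-of-rank-one structure concrete, the following NumPy sketch builds a third-order CP tensor from random factor matrices (all sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, K, R = 4, 5, 6, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# X = sum_r a_r o b_r o c_r, written as a single einsum over the shared index r.
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Element-wise this is x_ijk = sum_r a_ir * b_jr * c_kr.
i, j, k = 1, 2, 3
assert np.isclose(X[i, j, k], np.sum(A[i, :] * B[j, :] * C[k, :]))
```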
\begin{figure}[h]
\centering
\resizebox{1.0\textwidth}{!}{%
\begin{tikzpicture}
\draw [very thick] (5.8+0.5,-3.8+0.5) rectangle (7+0.5,-2.6+0.5);
\filldraw [fill=gray!60!white,draw=green!40!black] (5.8+0.5,-3.8+0.5) rectangle (7+0.5,-2.6+0.5);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.5,-3.8+0.5) grid (7+0.5,-2.6+0.5);
\draw [very thick] (5.8+0.4,-3.8+0.4) rectangle (7+0.4,-2.6+0.4);
\filldraw [fill=gray!50!white,draw=green!40!black] (5.8+0.4,-3.8+0.4) rectangle (7+0.4,-2.6+0.4);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.4,-3.8+0.4) grid (7+0.4,-2.6+0.4);
\draw [very thick] (5.8+0.3,-3.8+0.3) rectangle (7+0.3,-2.6+0.3);
\filldraw [fill=gray!40!white,draw=green!40!black] (5.8+0.3,-3.8+0.3) rectangle (7+0.3,-2.6+0.3);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.3,-3.8+0.3) grid (7+0.3,-2.6+0.3);
\draw [very thick] (5.8+0.2,-3.8+0.2) rectangle (7+0.2,-2.6+0.2);
\filldraw [fill=gray!30!white,draw=green!40!black] (5.8+0.2,-3.8+0.2) rectangle (7+0.2,-2.6+0.2);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.2,-3.8+0.2) grid (7+0.2,-2.6+0.2);
\draw [very thick] (5.8+0.1,-3.8+0.1) rectangle (7+0.1,-2.6+0.1);
\filldraw [fill=gray!20!white,draw=green!40!black] (5.8+0.1,-3.8+0.1) rectangle (7+0.1,-2.6+0.1);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.1,-3.8+0.1) grid (7+0.1,-2.6+0.1);
\draw [very thick] (5.8,-3.8) rectangle (7,-2.6);
\filldraw [fill=gray!10!white,draw=green!40!black] (5.8,-3.8) rectangle (7,-2.6);
\draw [step=0.4/2, very thin, color=gray] (5.8,-3.8) grid (7,-2.6);
\draw [very thick] (5.8-0.1,-3.8-0.1) rectangle (7-0.1,-2.6-0.1);
\filldraw [fill=gray!6!white,draw=green!40!black] (5.8-0.1,-3.8-0.1) rectangle (7-0.1,-2.6-0.1);
\draw [step=0.4/2, very thin, color=gray] (5.8-0.1,-3.8-0.1) grid (7-0.1,-2.6-0.1);
\draw[->] (5.6,-2.6) -- ++(0.6,0.6);
\draw[->] (5.55,-2.7) -- ++(0,-1.2);
\draw[->] (5.66,-4.04) -- ++(1.25,0);
\draw (6.35,-4.25) node {{\color{black}\scriptsize{spatial $y$}}};
\draw (5.2-0.3,-3.2) node {{\color{black}\scriptsize{spatial $x$}}};
\draw (5.3,-2.2) node {{\color{black}\scriptsize{temporal}}};
\draw (6.2,-4.6) node {{\color{black}\scriptsize{$\eX\in\real^{I\times J \times K}$}}};
\draw (8,-3.2) node { {\color{black}\large{$\approx$}}};
\draw [very thick] (8.6,-4.4+0.2) rectangle (8.8,-3.2+0.2);
\filldraw [fill=WildStrawberry!40!white,draw=green!40!black] (8.6,-4.4+0.2) rectangle (8.8,-3.2+0.2);
\draw (8.7,-4.8+0.2) node {{\color{black}\scriptsize{$\ba_1 \in \real^I$}}};
\draw [very thick] (9,-3-0.2+0.2) rectangle (10.2,-2.8-0.2+0.2);
\filldraw [fill=RubineRed!60!white,draw=green!40!black] (9,-3-0.2+0.2) rectangle (10.2,-2.8-0.2+0.2);
\draw (9.6,-3.3-0.2+0.2) node {{\color{black}\scriptsize{$\bb_1 \in \real^J$}}};
\draw[fill=RedOrange!40!white, line width=0.8pt] (9.2,-2.4+0.2) -- (9.4,-2.4+0.2) -- (8.8,-3+0.2) -- (8.6,-3+0.2) -- cycle;
\draw (10-0.2,-1.9) node {{\color{black}\scriptsize{$\bc_1\in \real^K$}}};
\draw (10.8,-3.2) node {{\color{black}\large{$+$}}};
\draw (11.6,-3.2) node {{\color{black}\large{$\dots$}}};
\draw (12.4,-3.2) node {{\color{black}\large{$+$}}};
\draw [very thick] (8.6+4.4,-4.4+0.2) rectangle (8.8+4.4,-3.2+0.2);
\filldraw [fill=WildStrawberry!40!white,draw=green!40!black] (8.6+4.4,-4.4+0.2) rectangle (8.8+4.4,-3.2+0.2);
\draw (8.7+4.4,-4.8+0.2) node {{\color{black}\scriptsize{$\ba_R\in \mathbb{R}^I $}}};
\draw [very thick] (9+4.4,-3-0.2+0.2) rectangle (10.2+4.4,-2.8-0.2+0.2);
\filldraw [fill=RubineRed!60!white,draw=green!40!black] (9+4.4,-3-0.2+0.2) rectangle (10.2+4.4,-2.8-0.2+0.2);
\draw (9.6+4.4,-3.3-0.2+0.2) node {{\color{black}\scriptsize{$\bb_R\in \real^J$}}};
\draw[fill=RedOrange!40!white, line width=0.8pt] (9.2+4.4,-2.4+0.2) -- (9.4+4.4,-2.4+0.2) -- (8.8+4.4,-3+0.2) -- (8.6+4.4,-3+0.2) -- cycle;
\draw (10.1+4,-1.9) node {{\color{black}\scriptsize{$\bc_R\in \real^{K}$}}};
\end{tikzpicture}
}
\caption{The CP decomposition of a third-order tensor: $\eX \approx \llbracket \bA, \bB, \bC \rrbracket
=
\sum_{r=1}^{R} \ba_r\circ \bb_r \circ \bc_r$ where $\eX\in \real^{I\times J\times K}$, $\bA\in \real^{I\times R}, \bB\in \real^{J\times R}$ and $\bC\in \real^{K\times R}$. Compare to the third-order rank-one tensor in Figure~\ref{fig:rank-one-tensor}.}
\label{fig:cp-decom-third}
\end{figure}
\begin{figure}[H]
\centering
\resizebox{0.8\textwidth}{!}{%
\begin{tikzpicture}
\coordinate (O) at (0,0,0);
\coordinate (A) at (0,2,0);
\coordinate (B) at (0,2,2);
\coordinate (C) at (0,0,2);
\coordinate (D) at (2,0,0);
\coordinate (E) at (2,2,0);
\coordinate (F) at (2,2,2);
\coordinate (G) at (2,0,2);
\draw[red!60!black,fill=red!5] (O) -- (C) -- (G) -- (D) -- cycle
\draw[red!60!black,fill=red!5] (O) -- (A) -- (E) -- (D) -- cycle
\draw[red!60!black,fill=red!5] (O) -- (A) -- (B) -- (C) -- cycle
\draw[red!60!black,fill=red!5,opacity=0.8] (D) -- (E) -- (F) -- (G) -- cycle
\draw[red!60!black,fill=red!5,opacity=0.6] (C) -- (B) -- (F) -- (G) -- cycle
\draw[red!60!black,fill=red!5,opacity=0.8] (A) -- (B) -- (F) -- (E) -- cycle
\coordinate (O) at (0+1,0+1,0+1);
\coordinate (A) at (0+1,0.252+1,0+1);
\coordinate (B) at (0+1,0.252+1,0.252+1);
\coordinate (C) at (0+1,0+1,0.252+1);
\coordinate (D) at (0.252+1,0+1,0+1);
\coordinate (E) at (0.252+1,0.252+1,0+1);
\coordinate (F) at (0.252+1,0.252+1,0.252+1);
\coordinate (G) at (0.252+1,0+1,0.252+1);
\draw[green!80!black,fill=green!10] (O) -- (C) -- (G) -- (D) -- cycle
\draw[green!80!black,fill=green!10] (O) -- (A) -- (E) -- (D) -- cycle
\draw[green!80!black,fill=green!10] (O) -- (A) -- (B) -- (C) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.8] (D) -- (E) -- (F) -- (G) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.6] (C) -- (B) -- (F) -- (G) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.8] (A) -- (B) -- (F) -- (E) -- cycle
\draw (0.2,-1.2,0) node {\scriptsize{\color{gray}$J$}};
\draw (0.2,-0.95,0) node[rotate = 0] {{\color{gray!65}$\underbrace{\hspace{2cm}}$}};
\draw (-0.4,1,2) node[rotate = 90] {\scriptsize{\color{gray}$I$}};
\draw (-0.15,1,2) node[rotate = 270] {{\color{gray!65}$\underbrace{\hspace{2cm}}$}};
\draw (2.5,-0.2,1.4) node[rotate = 45] {\scriptsize{\color{gray}$K$}};
\draw (2.2,0,1.2) node[rotate = 45] {{\color{gray!65}$\underbrace{\hspace{1.1cm}}$}};
\draw [draw=gray!75,thick,->] (0.8,0.1,1) -- (1,0.8,1) node [right] {{\color{green!50!black}$x_{ijk}$}};
\draw (0.8,0,1) node {\scriptsize{\color{gray}$(i,j,k)$-th}};
\draw (1,-1.2,1) node {\color{black}$\eX\in\real^{I\times J\times K}$};
\draw (3,0.5,0) node[rotate = 0] {{\color{black}\LARGE{$\approx$}}};
\coordinate (O) at (0+4.5,0+-0.3,0+1);
\coordinate (A) at (0+4.5,2+-0.3,0+1);
\coordinate (B) at (0+4.5,2+-0.3,0.252+1);
\coordinate (C) at (0+4.5,0+-0.3,0.252+1);
\coordinate (D) at (0.52+4.5,0+-0.3,0+1);
\coordinate (E) at (0.52+4.5,2+-0.3,0+1);
\coordinate (F) at (0.52+4.5,2+-0.3,0.252+1);
\coordinate (G) at (0.52+4.5,0+-0.3,0.252+1);
\draw[red!60,fill=yellow!25] (O) -- (C) -- (G) -- (D) -- cycle
\draw[red!60,fill=yellow!25] (O) -- (A) -- (E) -- (D) -- cycle
\draw[red!60,fill=yellow!25] (O) -- (A) -- (B) -- (C) -- cycle
\draw[red!60,fill=yellow!25,opacity=0.8] (D) -- (E) -- (F) -- (G) -- cycle
\draw[red!60,fill=yellow!25,opacity=0.6] (C) -- (B) -- (F) -- (G) -- cycle
\draw[red!60,fill=yellow!25,opacity=0.8] (A) -- (B) -- (F) -- (E) -- cycle
\coordinate (O) at (0+4.5,0+1-0.1,0+1);
\coordinate (A) at (0+4.5,0.252+1-0.1,0+1);
\coordinate (B) at (0+4.5,0.252+1-0.1,0.252+1);
\coordinate (C) at (0+4.5,0+1-0.1,0.252+1);
\coordinate (D) at (0.52+4.5,0+1-0.1,0+1);
\coordinate (E) at (0.52+4.5,0.252+1-0.1,0+1);
\coordinate (F) at (0.52+4.5,0.252+1-0.1,0.252+1);
\coordinate (G) at (0.52+4.5,0+1-0.1,0.252+1);
\draw[green!80!black,fill=green!10] (O) -- (C) -- (G) -- (D) -- cycle
\draw[green!80!black,fill=green!10] (O) -- (A) -- (E) -- (D) -- cycle
\draw[green!80!black,fill=green!10] (O) -- (A) -- (B) -- (C) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.8] (D) -- (E) -- (F) -- (G) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.6] (C) -- (B) -- (F) -- (G) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.8] (A) -- (B) -- (F) -- (E) -- cycle
\draw[red!60] (0.52+4.5,2+-0.3,0.252+1) -- (0.52+4.5,0+-0.3,0.252+1);
\draw (4.5+0.2,1-1.9,1) node {\color{black}$\bA\in\real^{I\times R}$};
\draw (4.5-0.3,1-0.1,1) node {\color{green!50!black}$\boldsymbol{a}_{i}$};
\coordinate (O) at (0+4.5+1.2,0+1.1,0+1);
\coordinate (A) at (0+4.5+1.2,0.52+1.1,0+1);
\coordinate (B) at (0+4.5+1.2,0.52+1.1,0.252+1);
\coordinate (C) at (0+4.5+1.2,0+1.1,0.252+1);
\coordinate (D) at (2+4.5+1.2,0+1.1,0+1);
\coordinate (E) at (2+4.5+1.2,0.52+1.1,0+1);
\coordinate (F) at (2+4.5+1.2,0.52+1.1,0.252+1);
\coordinate (G) at (2+4.5+1.2,0+1.1,0.252+1);
\draw[red!60,fill=blue!15] (O) -- (C) -- (G) -- (D) -- cycle
\draw[red!60,fill=blue!15] (O) -- (A) -- (E) -- (D) -- cycle
\draw[red!60,fill=blue!15] (O) -- (A) -- (B) -- (C) -- cycle
\draw[red!60,fill=blue!15,opacity=0.8] (D) -- (E) -- (F) -- (G) -- cycle
\draw[red!60,fill=blue!15,opacity=0.6] (C) -- (B) -- (F) -- (G) -- cycle
\draw[red!60,fill=blue!15,opacity=0.8] (A) -- (B) -- (F) -- (E) -- cycle
\coordinate (O) at (0+4.5+2.5,0+1.1,0+1);
\coordinate (A) at (0+4.5+2.5,0.52+1.1,0+1);
\coordinate (B) at (0+4.5+2.5,0.52+1.1,0.252+1);
\coordinate (C) at (0+4.5+2.5,0+1.1,0.252+1);
\coordinate (D) at (0.252+4.5+2.5,0+1.1,0+1);
\coordinate (E) at (0.252+4.5+2.5,0.52+1.1,0+1);
\coordinate (F) at (0.252+4.5+2.5,0.52+1.1,0.252+1);
\coordinate (G) at (0.252+4.5+2.5,0+1.1,0.252+1);
\draw[green!80!black,fill=green!10] (O) -- (C) -- (G) -- (D) -- cycle
\draw[green!80!black,fill=green!10] (O) -- (A) -- (E) -- (D) -- cycle
\draw[green!80!black,fill=green!10] (O) -- (A) -- (B) -- (C) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.8] (D) -- (E) -- (F) -- (G) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.6] (C) -- (B) -- (F) -- (G) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.8] (A) -- (B) -- (F) -- (E) -- cycle
\draw[red!60] (0+4.5+1.2,0.52+1.1,0.252+1) -- (2+4.5+1.2,0.52+1.1,0.252+1);
\draw (4.5+2.0,1-0.8,1) node {\color{black}$\bB\in\real^{J\times R}$};
\draw (4.5+2.6,1-0.3,1) node {\color{green!50!black}$\bb_{j}$};
\coordinate (O) at (0+4.5+1.2,0+1.1+1.5,0+1);
\coordinate (A) at (0+4.5+1.2,0.252+1.1+1.5,0+1);
\coordinate (B) at (0+4.5+1.2,0.252+1.1+1.5,2+1);
\coordinate (C) at (0+4.5+1.2,0+1.1+1.5,2+1);
\coordinate (D) at (0.52+4.5+1.2,0+1.1+1.5,0+1);
\coordinate (E) at (0.52+4.5+1.2,0.252+1.1+1.5,0+1);
\coordinate (F) at (0.52+4.5+1.2,0.252+1.1+1.5,2+1);
\coordinate (G) at (0.52+4.5+1.2,0+1.1+1.5,2+1);
\draw[red!60,fill=red!15] (O) -- (C) -- (G) -- (D) -- cycle
\draw[red!60,fill=red!15] (O) -- (A) -- (E) -- (D) -- cycle
\draw[red!60,fill=red!15] (O) -- (A) -- (B) -- (C) -- cycle
\draw[red!60,fill=red!15,opacity=0.8] (D) -- (E) -- (F) -- (G) -- cycle
\draw[red!60,fill=red!15,opacity=0.6] (C) -- (B) -- (F) -- (G) -- cycle
\draw[red!60,fill=red!15,opacity=0.8] (A) -- (B) -- (F) -- (E) -- cycle
\coordinate (O) at (0+4.5+0.8,0+1+1.2,0+1);
\coordinate (A) at (0+4.5+0.8,0.252+1+1.2,0+1);
\coordinate (B) at (0+4.5+0.8,0.252+1+1.2,0.252+1);
\coordinate (C) at (0+4.5+0.8,0+1+1.2,0.252+1);
\coordinate (D) at (0.52+4.5+0.8,0+1+1.2,0+1);
\coordinate (E) at (0.52+4.5+0.8,0.252+1+1.2,0+1);
\coordinate (F) at (0.52+4.5+0.8,0.252+1+1.2,0.252+1);
\coordinate (G) at (0.52+4.5+0.8,0+1+1.2,0.252+1);
\draw[green!80!black,fill=green!10] (O) -- (C) -- (G) -- (D) -- cycle
\draw[green!80!black,fill=green!10] (O) -- (A) -- (E) -- (D) -- cycle
\draw[green!80!black,fill=green!10] (O) -- (A) -- (B) -- (C) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.8] (D) -- (E) -- (F) -- (G) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.6] (C) -- (B) -- (F) -- (G) -- cycle
\draw[green!40!black,fill=green!10,opacity=0.8] (A) -- (B) -- (F) -- (E) -- cycle
\draw[red!60] (0.52+4.5+1.2,0.252+1.1+1.5,0+1) -- (0.52+4.5+1.2,0.252+1.1+1.5,2+1);
\draw (4.5+2.5,1+1.3,1) node {\color{black}$\bC\in\real^{K\times R}$};
\draw (4.5+0.5,1+1.4,1) node {\color{green!50!black}$\bc_{k}$};
\end{tikzpicture}
}
\caption{Index of the CP decomposition of a third-order tensor: $\eX\approx\llbracket \bA,\bB,\bC\rrbracket$.}
\label{fig:cp-decom-third-index}
\end{figure}
\begin{figure}[H]
\centering
\resizebox{0.7\textwidth}{!}{%
\begin{tikzpicture}
\pgfmathsetmacro{\bxx}{0.3}
\pgfmathsetmacro{\begupwhite}{0.2}
\pgfmathsetmacro{\bcwd}{0.3}
\pgfmathsetmacro{\bbwd}{0.3}
\pgfmathsetmacro{\bbright}{0.9}
\pgfmathsetmacro{\bbup}{0.225}
\pgfmathsetmacro{\distrightdown}{4.2}
\pgfmathsetmacro{\distl}{1.2}
\pgfmathsetmacro{\aprimeright}{0.4}
\draw [very thick] (5.8+0.5,-3.8+0.5) rectangle (7+0.5,-2.6+0.5);
\filldraw [fill=gray!60!white,draw=green!40!black] (5.8+0.5,-3.8+0.5) rectangle (7+0.5,-2.6+0.5);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.5,-3.8+0.5) grid (7+0.5,-2.6+0.5);
\draw [very thick] (5.8+0.4,-3.8+0.4) rectangle (7+0.4,-2.6+0.4);
\filldraw [fill=gray!50!white,draw=green!40!black] (5.8+0.4,-3.8+0.4) rectangle (7+0.4,-2.6+0.4);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.4,-3.8+0.4) grid (7+0.4,-2.6+0.4);
\draw [very thick] (5.8+0.3,-3.8+0.3) rectangle (7+0.3,-2.6+0.3);
\filldraw [fill=gray!40!white,draw=green!40!black] (5.8+0.3,-3.8+0.3) rectangle (7+0.3,-2.6+0.3);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.3,-3.8+0.3) grid (7+0.3,-2.6+0.3);
\draw [very thick] (5.8+0.2,-3.8+0.2) rectangle (7+0.2,-2.6+0.2);
\filldraw [fill=gray!30!white,draw=green!40!black] (5.8+0.2,-3.8+0.2) rectangle (7+0.2,-2.6+0.2);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.2,-3.8+0.2) grid (7+0.2,-2.6+0.2);
\draw [very thick] (5.8+0.1,-3.8+0.1) rectangle (7+0.1,-2.6+0.1);
\filldraw [fill=gray!20!white,draw=green!40!black] (5.8+0.1,-3.8+0.1) rectangle (7+0.1,-2.6+0.1);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.1,-3.8+0.1) grid (7+0.1,-2.6+0.1);
\draw [very thick] (5.8,-3.8) rectangle (7,-2.6);
\filldraw [fill=gray!10!white,draw=green!40!black] (5.8,-3.8) rectangle (7,-2.6);
\draw [step=0.4/2, very thin, color=gray] (5.8,-3.8) grid (7,-2.6);
\draw [very thick] (5.8-0.1,-3.8-0.1) rectangle (7-0.1,-2.6-0.1);
\filldraw [fill=gray!6!white,draw=green!40!black] (5.8-0.1,-3.8-0.1) rectangle (7-0.1,-2.6-0.1);
\draw [step=0.4/2, very thin, color=gray] (5.8-0.1,-3.8-0.1) grid (7-0.1,-2.6-0.1);
\draw[->] (5.6,-2.6) -- ++(0.6,0.6);
\draw[->] (5.55,-2.7) -- ++(0,-1.2);
\draw[->] (5.66,-4.04) -- ++(1.25,0);
\draw (6.35,-4.25) node {{\color{black}\scriptsize{spatial $y$}}};
\draw (5.2-0.3,-3.2) node {{\color{black}\scriptsize{spatial $x$}}};
\draw (5.3,-2.2) node {{\color{black}\scriptsize{temporal}}};
\draw (6.2,-4.6) node {{\color{black}\scriptsize{$\eX\in\real^{I\times J \times K}$}}};
\draw (8,-3.2) node { {\color{black}\large{$\approx$}}};
\draw [very thick] (8.6+\distl,-5.0+0.2) rectangle (9.2+\distl,-3.2+0.2);
\filldraw [fill=RubineRed!60!white,draw=green!40!black] (8.6+\distl,-5.0+0.2) rectangle (9.2+\distl,-3.2+0.2);
\draw (8.7+\distl,-5.3+0.2) node {{\color{black}\scriptsize{$\bB \in \real^{J\times R}$}}};
\draw [very thick]
(8.8+\bbright+\distl,-2.9+\bbup) rectangle (10.7+\bbright+\distl,-2.3+\bbup);
\filldraw [fill=WildStrawberry!40!white,draw=green!40!black]
(8.8+\bbright+\distl,-2.9+\bbup) rectangle (10.7+\bbright+\distl,-2.3+\bbup);
\draw (9.6+\bbright+\distl,-3.2+\bbup) node {{\color{black}\scriptsize{$\bA^\top \in \real^{R\times I}$}}};
\draw[fill=RedOrange!40!white, line width=0.8pt]
(8.45+\distl,-2.0-\bxx) -- (8.8+\distl,-1.8-\bxx) -- (8.25+\distl-0.7,-1.6-\bxx+0.6) -- (7.9+\distl-0.7,-1.8-\bxx+0.6) -- cycle;
\draw (8.8+\distl,-1.5) node {{\color{black}\scriptsize{$\bC\in \real^{K\times R}$}}};
\draw[fill=RedOrange!10!white, line width=0.5pt]
(9.2+\distl,-2.0-\bxx) -- (9.3+\bcwd+\distl-0.02,-1.8+0.01-\bxx) -- (8.7+\bcwd+\distl,-1.8-\bxx) -- (8.6+\distl+0.02,-2.0+0.01-\bxx) -- cycle;
\draw[fill=RedOrange!10!white, line width=0.8pt]
(8.6+\distl,-2.0-\bxx) -- (9.2+\distl,-2.0-\bxx) -- (9.2+\distl,-2.6-\bxx) -- (8.6+\distl,-2.6-\bxx) -- cycle;
\draw[fill=RedOrange!10!white, line width=0.8pt]
(9.3+\bcwd+\distl,-1.8-\bxx) -- (9.2+\distl,-2.1+0.1-\bxx) -- (9.2+\distl,-3+0.4-\bxx) -- (9.3+\bcwd+\distl,-2.4-\bxx) -- cycle;
\draw (10-0.2+\distl,-1.5-\bxx) node {{\color{black}\scriptsize{$\eL\in \real^{R\times R\times R}$}}};
\end{tikzpicture}
}
\caption{The CP decomposition of a third-order tensor: $\eX \approx \llbracket\eL; \bA, \bB, \bC \rrbracket
=
\sum_{r=1}^{R} \lambda_r \cdot \ba_r\circ \bb_r \circ \bc_r$, where $\eX\in \real^{I\times J\times K}$, $\eL\in \real^{R\times R\times R}$ is a diagonal core tensor with $l_{rrr}=\lambda_r$, and the columns of $\bA\in \real^{I\times R}, \bB\in \real^{J\times R}$, and $\bC\in \real^{K\times R}$ are of \textcolor{blue}{unit length}.
Compare to the first form of the CP decomposition in Figure~\ref{fig:cp-decom-third}.
}
\label{fig:cp-decom-thirr-2ndform}
\end{figure}
\paragraph{Rank of a tensor} From the CP decomposition, the rank of a tensor $\eX$ can be defined as the smallest $R$ for which the CP decomposition holds exactly.
\paragraph{Matricization}
When $\eX$ is a $2$nd-order tensor (i.e., a matrix), the CP decomposition is just the rank decomposition (Theorem~\ref{theorem:rank-decomposition-alternative}, p.~\pageref{theorem:rank-decomposition-alternative}). For simplicity, we consider the CP decomposition for the third-order tensor $\eX\in \real^{I\times J\times K}$ where the situation is illustrated in Figure~\ref{fig:cp-decom-third}:
\begin{equation}\label{equation:third-fact-1}
\eX \approx \llbracket \bA, \bB, \bC \rrbracket
=
\sum_{r=1}^{R} \ba_r\circ \bb_r \circ \bc_r ,
\end{equation}
where $\bA\in \real^{I\times R}, \bB\in \real^{J\times R}$ and $\bC\in \real^{K\times R}$.
Element-wise, the above Equation~\eqref{equation:third-fact-1} can be written as
$$
x_{ijk} = \sum_{r=1}^{R}a_{ir}b_{jr}c_{kr},
$$
as shown in Figure~\ref{fig:cp-decom-third-index}. Therefore, the matricized form of the third-order CP decomposition can be written as
\begin{equation}\label{equation:cp-matriciza-third-1}
\left\{
\begin{aligned}
\bX_{(1)} &\approx \bA (\bC\odot \bB )^\top \in \real^{I\times (JK)}; \\
\bX_{(2)} &\approx \bB (\bC\odot \bA )^\top \in \real^{J\times (IK)};\\
\bX_{(3)} &\approx \bC(\bB\odot \bA)^\top \in \real^{K\times (IJ)},
\end{aligned}
\right.
\end{equation}
where $\bX_{(n)}$ is the mode-$n$ matricization of the tensor $\eX$ for $n\in \{1,2,3\}$, and ``$\odot$" is the Khatri-Rao product of matrices (Definition~\ref{definition:khatri-rao-product}, p.~\pageref{definition:khatri-rao-product}) such that
$$
\left\{
\begin{aligned}
\bC\odot \bB&=[\bc_1\otimes \bb_1, \bc_2\otimes \bb_2, \ldots, \bc_R\otimes \bb_R]\in \real^{JK\times R};\\
\bC\odot \bA&=[\bc_1\otimes \ba_1, \bc_2\otimes \ba_2, \ldots, \bc_R\otimes \ba_R]\in \real^{IK\times R};\\
\bB\odot \bA&=[\bb_1\otimes \ba_1, \bb_2\otimes \ba_2, \ldots, \bb_R\otimes \ba_R]\in \real^{IJ\times R}.\\
\end{aligned}
\right.
$$
In full generality, returning to the Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$, the mode-$n$ matricized form of $\eX \approx \llbracket \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket $ is given by
$$
\boxed{\underbrace{\bX_{(n)}}_{I_n\times (I_{-n})}\approx
\underbrace{\bA^{(n)}}_{{I_n\times R}} \underbrace{\left(\bA^{(N)} \odot \bA^{(N-1)} \odot \ldots \odot \bA^{(n+1)}\odot \bA^{(n-1)}\odot \ldots \odot \bA^{(2)}\odot \bA^{(1)} \right)^\top}_{R\times (I_{-n})}}
$$
where $I_{-n}=I_1 I_2\ldots I_{n-1}I_{n+1}\ldots I_N$.
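The matricized forms can be verified numerically. The sketch below assumes the standard Kolda--Bader unfolding convention (column-major ordering of the remaining modes) and implements the Khatri--Rao product directly; both helper functions are illustrative, not the book's code:

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization with column-major (Fortran) ordering."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def khatri_rao(P, Q):
    """Column-wise Kronecker product of P (I x R) and Q (J x R)."""
    return np.einsum('ir,jr->ijr', P, Q).reshape(-1, P.shape[1])

rng = np.random.default_rng(4)
I, J, K, R = 3, 4, 5, 2
A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
X = np.einsum('ir,jr,kr->ijk', A, B, C)  # exact rank-R CP tensor

# X_(1) = A (C ⊙ B)^T, X_(2) = B (C ⊙ A)^T, X_(3) = C (B ⊙ A)^T.
assert np.allclose(unfold(X, 0), A @ khatri_rao(C, B).T)
assert np.allclose(unfold(X, 1), B @ khatri_rao(C, A).T)
assert np.allclose(unfold(X, 2), C @ khatri_rao(B, A).T)
```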
\paragraph{Vectorization} Similarly, the vectorization of $\eX \approx \llbracket \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket $ is given by
$$
\boxed{
\begin{aligned}
\underbrace{vec(\eX)}_{(I_1 I_2\ldots I_N)\times 1}
\approx
\underbrace{\left(\bA^{(N)} \odot \bA^{(N-1)} \odot \ldots \odot \bA^{(2)}\odot \bA^{(1)} \right)}_{(I_1 I_2\ldots I_N)\times R} \cdot \underbrace{\bm{1}}_{R\times 1},
\end{aligned}
}
$$
where $\bm{1}=[1,1,\ldots, 1]^\top \in \real^{R\times 1}$.
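The vectorized form can be checked the same way; the sketch below assumes the column-major (Fortran) vectorization convention, under which $vec$ stacks the mode-1 fibers:

```python
import numpy as np

def khatri_rao(P, Q):
    """Column-wise Kronecker product of P (I x R) and Q (J x R)."""
    return np.einsum('ir,jr->ijr', P, Q).reshape(-1, P.shape[1])

rng = np.random.default_rng(5)
I, J, K, R = 3, 4, 5, 2
A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# vec(X) = (C ⊙ B ⊙ A) 1 for a third-order tensor.
vecX = X.reshape(-1, order='F')
assert np.allclose(vecX, khatri_rao(C, khatri_rao(B, A)) @ np.ones(R))
```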
\paragraph{Equivalent forms on the CP decomposition}
We mentioned in Theorem~\ref{theorem:cp-decomp} that when the columns of the $\bA^{(n)}$'s are of unit length, with the weights absorbed into the vector $\blambda\in \real^R$, the CP decomposition of the Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ can be written as
$$
\eX \approx \llbracket \blambda; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=\sum_{r=1}^{R} \blambda_r \cdot \ba_r^{(1)} \circ \ba_r^{(2)} \circ \ldots \circ \ba_r^{(N)} .
$$
Suppose further that the Nth-order tensor $\eL\in \real^{R\times R\times \ldots \times R}$ is a diagonal tensor (Definition~\ref{definition:identity-tensors}, p.~\pageref{definition:identity-tensors}) with
\begin{equation}\label{equation:cp-oth3er-fuorm-nth1}
\left\{
\begin{aligned}
l_{i,i,\ldots,i} &= \lambda_i, \gap \text{if }i\in \{1,2,\ldots, R\};\\
l_{i_1,i_2,\ldots,i_N}&= 0, \gap \text{otherwise (i.e., if the indices $i_1, i_2, \ldots, i_N$ are not all equal)},
\end{aligned}
\right.
\end{equation}
then the CP decomposition of the Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ can be written as
\begin{equation}\label{equation:cp-oth3er-fuorm-nth2}
\eX \approx \llbracket \eL; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\sum_{r_1=1}^{R} \sum_{r_2=1}^{R} \ldots \sum_{r_N=1}^{R}
l_{r_1r_2\ldots r_N}
\ba_{\textcolor{blue}{r_1}}^{(1)} \circ \ba_{\textcolor{blue}{r_2}}^{(2)} \circ \ldots \circ \ba_{\textcolor{blue}{r_N}}^{(N)}.
\end{equation}
By the mode-$n$ tensor multiplication in Equation~\eqref{equation:moden-tensor-multi}, the CP decomposition in Equation~\eqref{equation:cp-oth3er-fuorm-nth2} can also be written as:
\begin{equation}\label{equation:cp-oth3er-fuorm-nth3}
\eX \approx \llbracket\eL; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\eL \times_1 \bA^{(1)} \times_2 \bA^{(2)} \ldots \times_N \bA^{(N)}.
\end{equation}
The CP decomposition in the form of Equation~\eqref{equation:cp-oth3er-fuorm-nth3} for a third-order tensor is then shown in Figure~\ref{fig:cp-decom-thirr-2ndform}.
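For a third-order tensor, the equivalence between the sum-of-rank-one form and the mode-$n$ product form with a diagonal core can be checked numerically. In this sketch, `mode_n_product` is our own helper implementing $\times_n$, and the sizes are illustrative.

```python
import numpy as np

def mode_n_product(G, U, n):
    """Mode-n product G ×_n U: contract mode n of G against the rows of U."""
    # tensordot puts the new axis first; move it back to position n.
    return np.moveaxis(np.tensordot(U, G, axes=(1, n)), 0, n)

rng = np.random.default_rng(1)
I, J, K, R = 4, 3, 5, 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
lam = rng.standard_normal(R)

# Diagonal core tensor L with l_{rrr} = lambda_r and zeros elsewhere.
L = np.zeros((R, R, R))
L[np.arange(R), np.arange(R), np.arange(R)] = lam

# The two equivalent CP forms agree.
X_sum = np.einsum('r,ir,jr,kr->ijk', lam, A, B, C)
X_mode = mode_n_product(mode_n_product(mode_n_product(L, A, 0), B, 1), C, 2)
assert np.allclose(X_sum, X_mode)
```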
\section{Computing the CP Decomposition}\index{ALS}
\begin{algorithm}[h]
\caption{CP Decomposition via ALS}
\label{alg:cp-decomposition-full-gene}
\begin{algorithmic}[1]
\Require Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$;
\State Pick a rank $R$;
\State Initialize $\bA^{(n)}\in \real^{I_n\times R}$ for all $n\in \{1,2,\ldots, N\}$ randomly;
\State Choose maximal number of iterations $C$;
\State $iter=0$;
\While{$iter<C$}
\State $iter=iter+1$;
\For{$n=1,2,\ldots, N$}
\State $\bV= \bA^{(N)\top} \bA^{(N)}\circledast \ldots \circledast \bA^{(n+1)\top} \bA^{(n+1)}\circledast \bA^{(n-1)\top} \bA^{(n-1)}\circledast \ldots \circledast \bA^{(1)\top} \bA^{(1)} \in \real^{R\times R}$;
\State $\bW=\left(\bA^{(N)} \odot \bA^{(N-1)} \odot \ldots \odot \bA^{(n+1)}\odot \bA^{(n-1)}\odot \ldots \odot \bA^{(2)}\odot \bA^{(1)} \right)\in \real^{I_{-n}\times R}$;
\State $\bX_{(n)} \in \real^{I_n\times I_{-n}}$; \Comment{Calculate the mode-$n$ matricization of tensor $\eX$}
\State $\bA^{(n)} = \bX_{(n)} \bW\bV^+\in \real^{I_n\times R}$;
\EndFor
\EndWhile
\State Output $\bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)}$;
\end{algorithmic}
\end{algorithm}
There is no finite algorithm for determining the rank of a tensor; the problem is NP-hard \citep{haastad1989tensor}. Therefore, the rank $R$ in the CP decomposition cannot be set up front. Instead, $R$ can be viewed as a hyperparameter, and the CP decomposition is thus approximated by a low-rank tensor decomposition.
In the third-order case, $
\eX \approx \llbracket \bA, \bB, \bC \rrbracket
=
\sum_{r=1}^{R} \ba_r\circ \bb_r \circ \bc_r
$. Fixing the tensor rank $R$, we want to find a rank-$R$ tensor $\weX$ such that
$$
\mathop{\min }_{\weX} ||\eX - \weX||, \qquad \text{with $\weX=\sum_{r=1}^{R} \ba_r\circ \bb_r \circ \bc_r $}.
$$
Having fixed all but one factor matrix, the problem reduces to a linear least squares problem in the matricized form, so the ALS method can be employed.
The ALS approach fixes $\bB$ and $\bC$ to update $\bA$; then fixes $\bA$ and $\bC$ to update $\bB$;
then fixes $\bA$ and $\bB$ to update $\bC$; and repeats the entire procedure until
some convergence criterion is satisfied or the maximal number of iterations is reached.
\paragraph{Given $\bB$ and $\bC$, update for $\bA$}
For example, suppose that $\bB$ and $\bC$ are fixed. Then we want to solve
$$
\mathop{\min }_{\widehatbA} ||\bX_{(1)} - \widehatbA (\bC\odot\bB)^\top||.
$$
For simplicity, we only consider the ALS without regularization and bias terms. Recall the ALS update in the Netflix recommender problem: the update of $\widehatbA$ takes the same form as that of $\bW$ in Equation~\eqref{equation:als-w-update}. For fixed matrices $\bZ,\bD$, we want to find the $\bW$ that minimizes $||\bD-\bW\bZ||^2$; if $\bZ\bZ^\top$ is invertible, the update is given by
$$
\bW = \bD\bZ^\top (\bZ\bZ^\top)^{-1} \leftarrow \mathop{\arg\min}_{\bW} ||\bD-\bW\bZ||^2.
$$
Therefore, returning to the update in the third-order CP decomposition, it follows that
$$
\begin{aligned}
\widehatbA &\leftarrow \bX_{(1)} (\bC\odot\bB) \left( (\bC\odot\bB)^\top (\bC\odot\bB)\right)^{-1}\\
&=\bX_{(1)} (\bC\odot\bB)\left( (\bC^\top\bC)\circledast (\bB^\top\bB)\right)^{-1},
\end{aligned}
$$
where the last equality comes from Equation~\eqref{equation:two-khatri-rao-pro-equi}. When $\left( (\bC^\top\bC)\circledast (\bB^\top\bB)\right)$ is not invertible, a pseudo-inverse can be applied instead (Appendix~\ref{appendix:pseudo-inverse}, p.~\pageref{appendix:pseudo-inverse}):
$$
\begin{aligned}
\widehatbA\leftarrow \bX_{(1)} (\bC\odot\bB)\left( (\bC^\top\bC)\circledast (\bB^\top\bB)\right)^{+},
\end{aligned}
$$
where $\bA^+$ denotes the pseudo-inverse of the matrix $\bA$. The full procedure for computing the CP decomposition of a general Nth-order tensor is formulated in Algorithm~\ref{alg:cp-decomposition-full-gene}. An analogous update for the \textit{nonnegative CP decomposition} follows immediately from the ``similarity" between the ALS and NMF: one can simply replace the ALS update with the multiplicative update (see Equation~\eqref{equation:multi-update-w}, p.~\pageref{equation:multi-update-w}).
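A minimal NumPy sketch of the third-order ALS updates above, using the pseudo-inverse form. The sizes are illustrative, the data is synthetic with an exactly rank-$R$ ground truth, `unfold` and `khatri_rao` are our own helpers, and a fixed iteration budget stands in for a convergence criterion.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product: (m x R) and (n x R) -> (mn x R)."""
    m, n = U.shape[0], V.shape[0]
    return (U[:, None, :] * V[None, :, :]).reshape(m * n, -1)

def unfold(X, n):
    """Mode-n matricization: remaining indices flattened, lower modes fastest."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1, order='F')

rng = np.random.default_rng(1)
I, J, K, R = 6, 5, 4, 3
# Synthetic exactly-rank-R tensor as ground truth.
X = np.einsum('ir,jr,kr->ijk',
              rng.standard_normal((I, R)),
              rng.standard_normal((J, R)),
              rng.standard_normal((K, R)))

# Random initialization, then alternating pseudo-inverse updates:
# A <- X_(1) (C ⊙ B) ((C^T C) * (B^T B))^+, and cyclically for B, C.
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
for _ in range(200):  # fixed iteration budget instead of a convergence test
    A = unfold(X, 0) @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
    B = unfold(X, 1) @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
    C = unfold(X, 2) @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))

X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

On this synthetic exact-rank problem the relative error typically becomes very small, though ALS offers no global guarantee.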
\chapter{High-Order SVD (HOSVD)}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{High-Order SVD (HOSVD)}
We mentioned that the HOSVD can be utilized as the initialization for the calculation of the Tucker decomposition. We now consider the properties of the HOSVD.
\begin{theoremHigh}[High-Order SVD (HOSVD)]\label{theorem:hosvd-decomp}
The HOSVD factorizes a tensor into a core tensor multiplied by a semi-orthogonal factor matrix along each mode. For a general Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$, it admits the HOSVD
$$
\eX \approx \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \ldots \sum_{r_N=1}^{R_N}
g_{r_1r_2\ldots r_N}
\ba_{r_1}^{(1)} \circ \ba_{r_2}^{(2)} \circ \ldots \circ \ba_{r_N}^{(N)} ,
$$
where
\begin{itemize}
\item $R_1< I_1, R_2<I_2, \ldots, R_N<I_N$;
\item $\eG$ of size ${R_1\times R_2\times \ldots \times R_N}$ is called the \textit{core tensor} so that $\eG$ can be thought of as the compressed version of $\eX$;
\item $\bA^{(n)}=[\ba_1^{(n)}, \ba_2^{(n)}, \ldots, \ba_{R_n}^{(n)}] \in \real^{I_n \times R_n}$ for all $n\in \{1,2,\ldots, N\}$ is the column partition of the matrix $\bA^{(n)}\in \real^{I_n \times R_n}$;
\item The $\bA^{(n)}$'s have mutually orthonormal columns and can be thought of as the principal component of each mode. In this sense, the $\bA^{(n)}$'s are \textit{semi-orthogonal matrices} (see the definition in Section~\ref{section:orthogonal-orthonormal-qr}, p.~\pageref{section:orthogonal-orthonormal-qr});
\item We can complete the semi-orthogonal matrices into \textit{full orthogonal matrices} by adding \textit{silent columns} into $\bA^{(n)}$'s so that $\bA^{(n)}\in \real^{I_n\times I_n}$ is an orthogonal matrix, in which case, $\eG$ will be expanded to a tensor of size $\real^{I_1\times I_2\times \ldots \times I_N}$ where $g_{r_1r_2\ldots r_N} =0$ when either one of $r_n>R_n$ for $n\in \{1,2,\ldots, N\}$. This is known as the \textit{full HOSVD}, and the previous one is also called the \textit{reduced} one to avoid confusion; And we shall only consider the reduced case in most of our discussions.
\end{itemize}
\textbf{Up to this point, the HOSVD is the same as the Tucker decomposition}. The differences are as follows:
\begin{itemize}
\item \textit{All orthogonality.} The slices in each mode are mutually orthogonal. Suppose $\eG_{r_n=\alpha}$ is the slice of $\eG$ where the $n$-th index is set to $\alpha$, i.e., an $(N-1)$th-order subtensor: $\eG_{r_n=\alpha}=\eG_{:,\ldots,:,r_n=\alpha,:, \ldots,:}$, then it follows that
\begin{equation}\label{equation:hosvd-conditio1}
\langle \eG_{r_n=\alpha}, \eG_{r_n=\beta}\rangle =0, \gap \alpha\neq \beta \in \{1,2,\ldots, R_n\},
\end{equation}
for all possible values of $n\in \{1,2,\ldots, N\}$.
\item \textit{Ordering.} The Frobenius norms of slices in each mode are decreasing with the increase in the running index:
\begin{equation}\label{equation:hosvd-conditio2}
||\eG_{r_n=1}||_F \geq ||\eG_{r_n=2}||_F \geq \ldots \geq ||\eG_{r_n=R_n}||_F \geq 0,
\end{equation}
for all possible values of $n\in \{1,2,\ldots, N\}$.
\end{itemize}
\end{theoremHigh}
The Frobenius norm $||\eG_{r_n=i}||_F$ is usually denoted by $\sigma_i^{(n)}$ and is known as the \textit{$i$-th mode-$n$ singular value of $\eX$}; the vector $\ba_i^{(n)}$ is the corresponding \textit{$i$-th mode-$n$ singular vector}. A comparison of the matrix and tensor SVD reveals a clear analogy between
the two cases. In matrix language, given the reduced SVD $\bA=\bU\bSigma\bV^\top\in \real^{M\times N}$, the singular values in the matrix $\bSigma\in \real^{R\times R}$ are the norms of its rows or columns (since $\bSigma$ is diagonal). The left and right singular vectors in the columns of $\bU$ and $\bV$, respectively, are now generalized to the mode-$n$ singular vectors.
\paragraph{Equivalent forms on the HOSVD} The equivalent forms of the HOSVD are the same as those of the Tucker decomposition. For the Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$, by the mode-$n$ tensor multiplication in Equation~\eqref{equation:moden-tensor-multi}, the HOSVD can also be written as:
\begin{equation}\label{equation:tucker-decom-in-hosvd}
\eX \approx \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\eG \times_1 \bA^{(1)} \times_2 \bA^{(2)} \ldots \times_N \bA^{(N)}.
\end{equation}
Note the analogy of SVD for a matrix in Equation~\eqref{equation:svd-by-tensor-multi0}:
\begin{equation}\label{equation:svd-by-tensor-multi1}
\bA = \bU\bSigma\bV^\top = \bSigma\times_1 \bU \times_2 \bV\in \real^{M\times N}.
\end{equation}
By the result in Lemma~\ref{lemma:tensor-multi2}~\eqref{equationbracketlast}, since the $\bA^{(n)}$'s are semi-orthogonal, it also follows that
\begin{equation}\label{equation:tucker-decom-in-hosvd2}
\eG = \llbracket\eX; \bA^{(1)\top}, \bA^{(2)\top}, \ldots, \bA^{(N)\top}\rrbracket = \eX \times_1 \bA^{(1)\top} \times_2 \bA^{(2)\top} \ldots \times_N \bA^{(N)\top}.
\end{equation}
Element-wise, the $(i_1,i_2,\ldots,i_N)$-th element of $\eX$ can be obtained by
$$
\eX_{i_1,i_2,\ldots,i_N} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \ldots \sum_{r_N=1}^{R_N}
g_{r_1r_2\ldots r_N}
a_{i_1r_1}^{(1)} a_{i_2r_2}^{(2)} \ldots a_{i_Nr_N}^{(N)}.
$$
\paragraph{Matricization}
In full generality, for the Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$, the mode-$n$ matricized form is given by
$$
\boxed{\underbrace{\bX_{(n)}}_{I_n\times (I_{-n})}\approx
\underbrace{\bA^{(n)}}_{{I_n\times R_n}}
\underbrace{\bG_{(n)}}_{R_n\times (R_{-n})}
\underbrace{\left(\bA^{(N)} \otimes \bA^{(N-1)} \otimes \ldots \otimes \bA^{(n+1)}\otimes \bA^{(n-1)}\otimes \ldots \otimes \bA^{(2)}\otimes \bA^{(1)} \right)^\top}_{(R_{-n})\times (I_{-n})}
}
$$
where $I_{-n}=I_1 I_2\ldots I_{n-1}I_{n+1}\ldots I_N$ and $R_{-n}=R_1 R_2\ldots R_{n-1}R_{n+1}\ldots R_N$. \textbf{Moreover}, by the conditions in Equations~\eqref{equation:hosvd-conditio1} and \eqref{equation:hosvd-conditio2}, $\bG_{(n)}$ has mutually orthogonal rows (not orthonormal), whose Frobenius norms equal $\sigma_1^{(n)}, \sigma_2^{(n)},\ldots, \sigma_{R_n}^{(n)}$.
\paragraph{Matrix SVD in HOSVD}
Define the diagonal matrix
$$
\begin{aligned}
\bSigma^{(n)} = diag\left(\sigma_1^{(n)}, \sigma_2^{(n)}, \ldots, \sigma_{R_n}^{(n)}\right) \in \real^{R_n\times R_n},
\end{aligned}
$$
where $\sigma_i^{(n)} = ||\eG_{r_n=i}||_F$. Then, for the row-normalized version $\widetildebG_{(n)}$ of $\bG_{(n)}$, we have
$$
\underbrace{\bG_{(n)}}_{R_n\times R_{-n}}=
\underbrace{\bSigma^{(n)}}_{R_n\times R_n} \underbrace{\widetildebG_{(n)}}_{R_n\times R_{-n}}.
$$
Define further
$$
\underbrace{\bV^{(n)\top}}_{R_n \times I_{-n}}
=
\underbrace{\widetildebG_{(n)}}_{R_n\times R_{-n}}
\underbrace{\left(\bA^{(N)} \otimes \bA^{(N-1)} \otimes \ldots \otimes \bA^{(n+1)}\otimes \bA^{(n-1)}\otimes \ldots \otimes \bA^{(2)}\otimes \bA^{(1)} \right)^\top}_{(R_{-n})\times (I_{-n})},
$$
where the columns of $\bV^{(n)}$ are mutually orthonormal by Equation~\eqref{equation:orthogonal-in-kronecker1} and \eqref{equation:orthogonal-in-kronecker2}.
This reveals the (matrix) reduced SVD of $\bX_{(n)}$:
\begin{equation}\label{equation:hosvd-initia-for-tucker-1}
\boxed{
\underbrace{\bX_{(n)}}_{I_n\times (I_{-n})} \approx
\underbrace{\bA^{(n)}}_{{I_n\times R_n}}
\underbrace{\bSigma^{(n)}}_{R_n\times R_n}
\underbrace{\bV^{(n)\top}}_{R_n \times I_{-n}}.
}
\end{equation}
If \{$\eG, \bA^{(N)}, \ldots, \bA^{(n+1)}, \bA^{(n-1)}, \ldots, \bA^{(1)}$\} are fixed, the update of $\bA^{(n)}$ is obtained by setting its columns to the first $R_n$ left singular vectors of $\bX_{(n)}$. This matches the update in the subproblem of Equation~\eqref{equation:hosvd-initia-for-tucker-0}. The uniqueness of the mode-$n$ singular values thus follows from the uniqueness of the (matrix) reduced SVD.
\paragraph{Vectorization} Going further from the matricization for the Nth-order tensor $\eX$, the vectorization is given by
\begin{equation}\label{equation:hosvd-vec-in-theorem}
\boxed{\underbrace{vec(\eX) }_{(I_1\ldots I_N)\times 1}
\approx
\underbrace{\left(\bA^{(N)} \otimes \bA^{(N-1)} \otimes \ldots\otimes \bA^{(1)} \right)}_{(I_1\ldots I_N)\times (R_1\ldots R_N)}
\underbrace{vec(\eG)}_{(R_1\ldots R_N)\times 1}.
}
\end{equation}
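The vectorization identity above can be checked numerically for a third-order Tucker-form tensor, where it holds exactly. The NumPy sketch below uses our own helper `mode_n_product` and illustrative sizes; `vec(·)` again lets the first index vary fastest.

```python
import numpy as np

def mode_n_product(G, U, n):
    """Mode-n product G ×_n U."""
    return np.moveaxis(np.tensordot(U, G, axes=(1, n)), 0, n)

rng = np.random.default_rng(3)
(I, J, K), (P, Q, R) = (4, 3, 5), (2, 2, 3)
G = rng.standard_normal((P, Q, R))            # core tensor
A = rng.standard_normal((I, P))
B = rng.standard_normal((J, Q))
C = rng.standard_normal((K, R))

# X = G ×_1 A ×_2 B ×_3 C.
X = mode_n_product(mode_n_product(mode_n_product(G, A, 0), B, 1), C, 2)

# vec(X) = (A^(3) ⊗ A^(2) ⊗ A^(1)) vec(G), with Fortran-order flattening.
vecX = X.reshape(-1, order='F')
vecG = G.reshape(-1, order='F')
assert np.allclose(vecX, np.kron(np.kron(C, B), A) @ vecG)
```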
\paragraph{Third-order case}
For concreteness and a better understanding, we consider the HOSVD of a third-order tensor $\eX\in \real^{I\times J\times K}$:
\begin{equation}\label{equation:tucker-third-fact-1}
\eX \approx \llbracket \eG; \bA, \bB, \bC \rrbracket
=
\sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R}
g_{pqr} \cdot \ba_p\circ \bb_q \circ \bc_r,
\end{equation}
where $\bA\in \real^{I\times P}, \bB\in \real^{J\times Q}$, $\bC\in \real^{K\times R}$, and $\eG\in \real^{P\times Q\times R}$. Then, it follows that
\begin{itemize}
\item \textit{All orthogonality.}
$$
\langle \eG_{:,\alpha,:}, \eG_{:,\beta,:}\rangle =0, \gap \alpha\neq \beta \in \{1,2,\ldots, Q\}.
$$
\item \textit{Ordering.} The Frobenius norms of slices in each mode are decreasing with the increase in the running index:
$$
||\eG_{:,1,:}||_F \geq ||\eG_{:,2,:}||_F \geq \ldots \geq ||\eG_{:,Q,:}||_F \geq 0.
$$
\end{itemize}
The illustration of this third-order HOSVD (reduced and full versions) is similar to that of the Tucker decomposition and is shown in Figure~\ref{fig:tucker-decom-third}.
\section{Computing the HOSVD}
The calculation of the HOSVD amounts to the (matrix) reduced SVD of the matricized form of the HOSVD in Equation~\eqref{equation:hosvd-initia-for-tucker-1}. The procedure is shown in Algorithm~\ref{alg:hosvd-decomposition-full-gene}, and we shall not repeat the details. See also \citep{de2000multilinear} for a more detailed discussion.
\begin{algorithm}[h]
\caption{HOSVD via ALS}
\label{alg:hosvd-decomposition-full-gene}
\begin{algorithmic}[1]
\Require Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$;
\State Pick ranks $R_1, R_2, \ldots, R_N$;
\State Initialize $\bA^{(n)}\in \real^{I_n\times R_n}$ for all $n\in \{1,2,\ldots, N\}$ randomly;
\State Choose maximal number of iterations $C$;
\State $iter=0$;
\While{$iter<C$}
\State $iter=iter+1$;
\For{$n=1,2,\ldots, N$}
\State Compute the mode-$n$ matricization $\bX_{(n)}$;
\State Set the columns of $\bA^{(n)}$ to the first $R_n$ leading left singular vectors of $\bX_{(n)}$;
\EndFor
\EndWhile
\State $\eG = \llbracket\eX; \bA^{(1)\top}, \bA^{(2)\top}, \ldots, \bA^{(N)\top}\rrbracket$;
\State Output $\bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)}, \eG$;
\end{algorithmic}
\end{algorithm}
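A single-pass NumPy sketch of the HOSVD: each factor matrix is taken from the leading left singular vectors of the mode-$n$ matricization, and the core follows from $\eG = \eX \times_1 \bA^{(1)\top} \ldots \times_N \bA^{(N)\top}$. This is a simplified, non-iterative version of the procedure (helper names are our own); with full ranks the reconstruction is exact.

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization: remaining indices flattened, lower modes fastest."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1, order='F')

def mode_n_product(G, U, n):
    """Mode-n product G ×_n U."""
    return np.moveaxis(np.tensordot(U, G, axes=(1, n)), 0, n)

def hosvd(X, ranks):
    """Truncated HOSVD: factors from leading left singular vectors of each
    unfolding; core via G = X ×_n A^(n)T over all modes."""
    factors = []
    for n, Rn in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
        factors.append(U[:, :Rn])
    G = X
    for n, An in enumerate(factors):
        G = mode_n_product(G, An.T, n)
    return G, factors

rng = np.random.default_rng(4)
X = rng.standard_normal((5, 4, 3))
G, (A1, A2, A3) = hosvd(X, (5, 4, 3))   # full ranks: exact reconstruction
X_rec = mode_n_product(mode_n_product(mode_n_product(G, A1, 0), A2, 1), A3, 2)
assert np.allclose(X, X_rec)
assert np.allclose(A1.T @ A1, np.eye(5))  # semi-orthogonal columns
```

With ranks smaller than the mode sizes, the same function yields the truncated (reduced) HOSVD, and the reconstruction becomes an approximation.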
\section{Properties of the HOSVD}
\subsection{Frobenius Norm}
\begin{lemma}[Frobenius Norm of a Tensor]\label{lemma:frobenius-hosvd}
Suppose the HOSVD of the Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ is given by $ \eX \approx \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket $, then it follows that
$$
\begin{aligned}
||\eX||_F^2 &= ||\eG||_F^2 \\
&=\sum_{i=1}^{R_1} (\sigma_i^{(1)})^2
=\sum_{i=1}^{R_2} (\sigma_i^{(2)})^2
=\ldots
=\sum_{i=1}^{R_N} (\sigma_i^{(N)})^2 .
\end{aligned}
$$
\end{lemma}
The squared Frobenius norm of a matrix equals the sum of squares of the singular values of the matrix (Definition~\ref{definition:frobernius-in-svd}, p.~\pageref{definition:frobernius-in-svd}). The above lemma tells us that the mode-$n$ singular values of a tensor have a similar property. The proof of the lemma follows immediately from the (matrix) reduced SVD of the matricized form of the HOSVD in Equation~\eqref{equation:hosvd-initia-for-tucker-1}.
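Lemma~\ref{lemma:frobenius-hosvd} can be verified numerically with a full-rank HOSVD. In this NumPy sketch (helper names our own), the mode-$n$ singular values are recovered as the Frobenius norms of the mode-$n$ slices of the core.

```python
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1, order='F')

def mode_n_product(G, U, n):
    return np.moveaxis(np.tensordot(U, G, axes=(1, n)), 0, n)

rng = np.random.default_rng(5)
X = rng.standard_normal((5, 4, 3))

# Full-rank HOSVD: orthonormal factors from each unfolding, core G = X ×_n A^(n)T.
factors = [np.linalg.svd(unfold(X, n), full_matrices=False)[0] for n in range(3)]
G = X
for n, An in enumerate(factors):
    G = mode_n_product(G, An.T, n)

# ||X||_F = ||G||_F, and for every mode n the squared slice norms
# (squared mode-n singular values) sum to ||G||_F^2.
assert np.isclose(np.linalg.norm(X), np.linalg.norm(G))
for n in range(3):
    slice_norms_sq = [np.linalg.norm(np.take(G, i, axis=n))**2
                      for i in range(G.shape[n])]
    assert np.isclose(sum(slice_norms_sq), np.linalg.norm(G)**2)
```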
\subsection{Low-Rank Approximation}
We discussed that the best rank-$k$ approximation of a matrix is obtained from the truncated SVD (Theorem~\ref{theorem:young-theorem-spectral}, p.~\pageref{theorem:young-theorem-spectral}), where
$$
||\bA - \bA_k||_F^2 = \sigma_{k+1}^2+\sigma_{k+2}^2 +\ldots+\sigma_{\min\{M,N\}}^2
$$
for $\bA\in \real^{M\times N}$. A similar result holds for low-rank tensor approximation.
\begin{lemma}[Low-Rank Approximation]\label{lemma:low-rank-hosvd}
Suppose the HOSVD of the Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ is given by $ \eX \approx \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket $. Define a tensor $\weX$ by discarding the smallest mode-$n$ singular values $\sigma_{K_n+1}^{(n)}, \sigma_{K_n+2}^{(n)}, \ldots, \sigma_{R_n}^{(n)}$ for given values of $K_n\leq R_n$ (for all $n\in \{1,2,\ldots, N\}$). Then, it follows that
$$
||\eX-\weX||_F^2 \leq
\sum_{r_1=K_1+1}^{R_1} (\sigma_{r_1}^{(1)})^2+
\sum_{r_2=K_2+1}^{R_2} (\sigma_{r_2}^{(2)})^2+
\ldots
+
\sum_{r_N=K_N+1}^{R_N} (\sigma_{r_N}^{(N)})^2.
$$
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:low-rank-hosvd}]
From Lemma~\ref{lemma:frobenius-hosvd}, we have the following
$$
\begin{aligned}
||\eX-\weX||_F^2&=
\sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \ldots \sum_{r_N=1}^{R_N}
g_{r_1r_2\ldots r_N}^2
-
\sum_{r_1=1}^{K_1} \sum_{r_2=1}^{K_2} \ldots \sum_{r_N=1}^{K_N}
g_{r_1r_2\ldots r_N}^2 \\
&=\sum_{\substack{(r_1,r_2,\ldots, r_N):\\ r_n>K_n \text{ for some } n}}
g_{r_1r_2\ldots r_N}^2 \\
&\leq
\sum_{\textcolor{blue}{r_1=K_1+1}}^{R_1} \sum_{r_2=1}^{R_2} \ldots \sum_{r_N=1}^{R_N}
g_{r_1r_2\ldots r_N}^2 \\
&+
\sum_{r_1=1}^{R_1} \sum_{\textcolor{blue}{r_2=K_2+1}}^{R_2} \ldots \sum_{r_N=1}^{R_N}
g_{r_1r_2\ldots r_N}^2 \\
&+
\ldots
+
\sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \ldots \sum_{\textcolor{blue}{r_N=K_N+1}}^{R_N}
g_{r_1r_2\ldots r_N}^2 \\
&=\sum_{r_1=K_1+1}^{R_1} (\sigma_{r_1}^{(1)})^2+
\sum_{r_2=K_2+1}^{R_2} (\sigma_{r_2}^{(2)})^2+
\ldots
+
\sum_{r_N=K_N+1}^{R_N} (\sigma_{r_N}^{(N)})^2.
\end{aligned}
$$
This completes the proof.
\end{proof}
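The bound of Lemma~\ref{lemma:low-rank-hosvd} can be checked numerically by truncating a HOSVD. The NumPy sketch below uses illustrative truncation ranks $K_n$ and our own helper names; the discarded mode-$n$ singular values are read off the SVDs of the unfoldings.

```python
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1, order='F')

def mode_n_product(G, U, n):
    return np.moveaxis(np.tensordot(U, G, axes=(1, n)), 0, n)

rng = np.random.default_rng(6)
X = rng.standard_normal((6, 5, 4))
Ks = (3, 3, 2)   # truncation ranks K_n (illustrative choice)

# Keep only the leading K_n left singular vectors per mode.
svds = [np.linalg.svd(unfold(X, n), full_matrices=False) for n in range(3)]
factors = [U[:, :K] for (U, _, _), K in zip(svds, Ks)]
G = X
for n, An in enumerate(factors):
    G = mode_n_product(G, An.T, n)
X_hat = G
for n, An in enumerate(factors):
    X_hat = mode_n_product(X_hat, An, n)

# The squared error is bounded by the sum of discarded squared
# mode-n singular values, as the lemma states.
err_sq = np.linalg.norm(X - X_hat)**2
bound = sum((s[K:]**2).sum() for (_, s, _), K in zip(svds, Ks))
assert err_sq <= bound + 1e-9
```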
\part{Tensor Decomposition}
\chapter{Notations and Background}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Matrices to Tensors}
In this section, we briefly review the background of tensor analysis, largely following the notation of \citep{kolda2006multilinear, kolda2009tensor, cichocki2016tensor} and \citep{golub2013matrix}.
A tensor is a multidimensional array; when the number of dimensions is $N$, we call it an \textit{Nth-order tensor}. For example, a first-order tensor is a vector and a second-order tensor is a matrix. In this sense,
tensors are multidimensional extensions of matrices, used to represent ubiquitous multidimensional data
such as RGB images, hyperspectral images, and videos.
Previously, we used \textbf{boldface} lowercase letters, possibly with subscripts, to denote vectors (e.g., $\bmu$, $\bx$, $\bx_n$, $\bz$) and
\textbf{boldface} uppercase letters, possibly with subscripts, to denote matrices (e.g., $\bA$, $\bL_j$). To avoid confusion, we will use \textbf{boldface} Euler script letters to denote tensors of order larger than 2, e.g., $\eX, \eY,\eA,\eB,\eG \in \real^{I\times J\times K}$ for third-order tensors.
In a second-order tensor (i.e., a matrix), the first axis is usually taken as the \textbf{coordinate of spatial $x$} or \textbf{height}, and the second axis is then taken as the \textbf{coordinate of spatial $y$} or \textbf{width}, e.g., Figure~\ref{fig:tensor-2dd} is an example of a second-order tensor $\bX\in \real^{I\times J}$. A third-order tensor goes further by adding a third dimension which is usually known as a \textbf{``temporal" coordinate} (or a \textbf{``channel" coordinate} in an RGB picture). The situation of a third-order tensor $\eX\in \real^{I\times J\times K}$ is shown in Figure~\ref{fig:tensor-3dd} in which case $K=3$ in an RGB picture scenario.
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[2nd-order tensor]{\label{fig:tensor-2dd}%
\includegraphics[width=0.33\linewidth]{./imgs/tensormatrix1.pdf}}%
\subfigure[3rd-order tensor]{\label{fig:tensor-3dd}%
\includegraphics[width=0.5\linewidth]{./imgs/tensormatrix2.pdf}}%
\caption{From a 2nd-order tensor to a 3rd-order tensor.}
\label{fig:tensor-2d-2-3d}
\end{figure}
\section{Tensor Indexing}\index{Tensor indexing}
\textit{Slices} are two-dimensional sections of a tensor, obtained by fixing all but two indices. Figure~\ref{fig:thifd-slices} shows the slices of a third-order tensor $\eX\in \real^{I\times J\times K}$, where the \textit{horizontal ones} are the slices obtained by varying the first index and fixing the last two: $\eX_{i,:,:}$ for all $i\in \{1,2,\ldots, I\}$\footnote{The indices typically range from 1 to their capital version; here, $i\in \{1,2,\ldots, I\}$.}; the \textit{lateral ones} are the slices obtained by varying the second index: $\eX_{:,j,:}$ for all $j\in \{1,2,\ldots, J\}$; and the \textit{frontal ones} are the slices obtained by varying the third index: $\eX_{:,:,k}$ for all $k\in \{1,2,\ldots, K\}$.
\index{Tensor slices}
\index{Tensor fibers}
Similarly, \textit{fibers} are the higher-order analogue of matrix rows and columns. A fiber is obtained by fixing every index but one. Figure~\ref{fig:thifd-fibers} shows the fibers of a third-order tensor $\eX\in \real^{I\times J\times K}$, where the \textit{column fibers} are obtained by varying only the first index: $\eX_{:,j,k}$; the \textit{row fibers} by varying only the second index: $\eX_{i,:,k}$; and the \textit{tube fibers} by varying only the third index: $\eX_{i,j,:}$.
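In NumPy, slices and fibers correspond directly to basic indexing. The following small sketch uses illustrative sizes.

```python
import numpy as np

# A third-order tensor X in R^{I x J x K}.
I, J, K = 3, 4, 5
X = np.arange(I * J * K).reshape(I, J, K)

# Slices: fix one index, keep two.
horizontal = X[0, :, :]   # X_{i,:,:}, shape (J, K)
lateral    = X[:, 1, :]   # X_{:,j,:}, shape (I, K)
frontal    = X[:, :, 2]   # X_{:,:,k}, shape (I, J)

# Fibers: fix all indices but one.
column = X[:, 1, 2]       # mode-1 (column) fiber X_{:,j,k}, length I
row    = X[0, :, 2]       # mode-2 (row) fiber X_{i,:,k}, length J
tube   = X[0, 1, :]       # mode-3 (tube) fiber X_{i,j,:}, length K

assert horizontal.shape == (J, K) and column.shape == (I,)
```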
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Horizontal slices $\eX_{i,:,:}$]{\label{fig:third-slice1}%
\centering
\begin{minipage}[b]{0.32\linewidth}
\centering
\begin{tikzpicture}
\pgfmathsetmacro{\cubex}{3}
\pgfmathsetmacro{\cubey}{0.1}
\pgfmathsetmacro{\cubez}{3}
\draw[black,fill=gray!70!white] (0,-6*0.5+0.1,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (0,-6*0.5+0.1,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!70!white] (0,-6*0.5+0.1,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (0,-5*0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (0,-5*0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!60!white] (0,-5*0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (0,-4*0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (0,-4*0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!50!white] (0,-4*0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (0,-3*0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (0,-3*0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!40!white] (0,-3*0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (0,-2*0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (0,-2*0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!30!white] (0,-2*0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (0,-0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (0,-0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!20!white] (0,-0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (0,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (0,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!10!white] (0,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\end{tikzpicture}
\end{minipage}%
}
\subfigure[Lateral slices $\eX_{:,j,:}$]{\label{fig:third-slice2}%
\centering
\begin{minipage}[b]{0.32\linewidth}
\centering
\begin{tikzpicture}
\pgfmathsetmacro{\cubex}{0.1}
\pgfmathsetmacro{\cubey}{3}
\pgfmathsetmacro{\cubez}{3}
\draw[black,fill=gray!10!white] (3-6*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (3-6*0.5,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!10!white] (3-6*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (3-5*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (3-5*0.5,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!20!white] (3-5*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (3-4*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (3-4*0.5,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!30!white] (3-4*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (3-3*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (3-3*0.5,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!40!white] (3-3*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (3-2*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (3-2*0.5,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!50!white] (3-2*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (3-0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (3-0.5,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!60!white] (3-0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (3,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (3,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!70!white] (3,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\end{tikzpicture}
\end{minipage}%
}
\subfigure[Frontal slices $\eX_{:,:,k}$]{\label{fig:thirdslice3}%
\centering
\begin{minipage}[b]{0.32\linewidth}
\centering
\begin{tikzpicture}
\pgfmathsetmacro{\cubex}{3}
\pgfmathsetmacro{\cubey}{3}
\pgfmathsetmacro{\cubez}{0.1}
\draw[black,fill=gray!70!white] (6,0,-6*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6,0,-6*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!70!white] (6,0,-6*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (6,0,-5*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (6,0,-5*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!60!white] (6,0,-5*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (6,0,-4*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (6,0,-4*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!50!white] (6,0,-4*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6,0,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6,0,-3*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!40!white] (6,0,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (6,0,-2*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (6,0,-2*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!30!white] (6,0,-2*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (6,0,-1*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (6,0,-1*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!20!white] (6,0,-1*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (6,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (6,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!10!white] (6,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\end{tikzpicture}
\end{minipage}%
}
\caption{Slices of a third-order tensor $\eX\in \real^{I\times J\times K}$. The \textbf{darker}, the larger the index.}
\label{fig:thifd-slices}
\end{figure}
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Mode-1 fibers: columns, $\eX_{:,j,k}$]{\label{fig:third-fibdr1}%
\centering
\begin{minipage}[b]{0.32\linewidth}
\centering
\begin{tikzpicture}
\pgfmathsetmacro{\cubex}{0.3}
\pgfmathsetmacro{\cubey}{3}
\pgfmathsetmacro{\cubez}{0.3}
% 5 x 7 grid of mode-1 fibers (columns), drawn back to front, left to right;
% the column index determines the gray shade.
\foreach \dep in {6,4.5,3,1.5,0} {
  \foreach \col [evaluate=\col as \shade using int(10*(7-\col))] in {6,5,...,0} {
    \draw[black,fill=gray!\shade!white] (3-\col*0.5,\dep*0.008,-\dep*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
    \draw[black,fill=gray!\shade!white] (3-\col*0.5,\dep*0.008,-\dep*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
    \draw[black,fill=gray!\shade!white] (3-\col*0.5,\dep*0.008,-\dep*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
  }
}
\end{tikzpicture}
\end{minipage}%
}
\subfigure[Mode-2 fibers: rows, $\eX_{i,:,k}$]{\label{fig:thirds-fiber2}%
\centering
\begin{minipage}[b]{0.32\linewidth}
\centering
\begin{tikzpicture}
\pgfmathsetmacro{\cubex}{3}
\pgfmathsetmacro{\cubey}{0.3}
\pgfmathsetmacro{\cubez}{0.3}
% 5 x 7 grid of mode-2 fibers (rows), drawn bottom to top, back to front;
% the depth index determines the gray shade.
\foreach \row/\off in {0/0,1.5/0.1,3/0.17,4.5/0.24,6/0.3} {
  \foreach \dep [evaluate=\dep as \shade using int(10*(\dep+1))] in {6,5,...,0} {
    \draw[black,fill=gray!\shade!white] (6-\row*0.008,\row*0.5-\off,-\dep*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
    \draw[black,fill=gray!\shade!white] (6-\row*0.008,\row*0.5-\off,-\dep*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
    \draw[black,fill=gray!\shade!white] (6-\row*0.008,\row*0.5-\off,-\dep*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
  }
}
\end{tikzpicture}
\end{minipage}%
}
\subfigure[Mode-3 fibers: tubes, $\eX_{i,j,:}$]{\label{fig:third-fiber3}%
\centering
\begin{minipage}[b]{0.32\linewidth}
\centering
\begin{tikzpicture}
\pgfmathsetmacro{\cubex}{0.3}
\pgfmathsetmacro{\cubey}{0.3}
\pgfmathsetmacro{\cubez}{3}
% Mode-3 fibers (tubes), drawn column by column, bottom to top;
% the row index determines the gray shade.
\foreach \col in {0,1.5,3,4.5} {
  \foreach \row [evaluate=\row as \shade using int(10*(\row+1))] in {6,5,...,0} {
    \draw[black,fill=gray!\shade!white] (\col*0.5,-\row*0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
    \draw[black,fill=gray!\shade!white] (\col*0.5,-\row*0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
    \draw[black,fill=gray!\shade!white] (\col*0.5,-\row*0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
  }
}
\draw[black,fill=gray!70!white] (6*0.5,-6*0.5+0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6*0.5,-6*0.5+0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!70!white] (6*0.5,-6*0.5+0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (6*0.5,-5*0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (6*0.5,-5*0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!60!white] (6*0.5,-5*0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (6*0.5,-4*0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (6*0.5,-4*0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!50!white] (6*0.5,-4*0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6*0.5,-3*0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6*0.5,-3*0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!40!white] (6*0.5,-3*0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (6*0.5,-2*0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (6*0.5,-2*0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!30!white] (6*0.5,-2*0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (6*0.5,-0.5,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (6*0.5,-0.5,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!20!white] (6*0.5,-0.5,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (6*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (6*0.5,0,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!10!white] (6*0.5,0,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\end{tikzpicture}
\end{minipage}%
}
\caption{Fibers of a third-order tensor $\eX\in \real^{I\times J\times K}$.}
\label{fig:thifd-fibers}
\end{figure}
More generally, the $(i,j,k)$-th element of a third-order tensor $\eX\in \real^{I\times J\times K}$ is denoted by $x_{ijk}$, and the indexing of a general Nth-order tensor follows analogously.
\section{Inner Product and Frobenius Norm}
The inner product between two Nth-order tensors of the same size, $\eX,\eY\in \real^{I_1\times I_2\times \ldots \times I_N}$, is the sum of the element-wise products of their entries:
$$
\langle \eX,\eY\rangle = \sum_{i_1=1}^{I_1}\sum_{i_2=1}^{I_2}\ldots \sum_{i_N=1}^{I_N} (x_{i_1,i_2,\ldots, i_N})\cdot (y_{i_1,i_2,\ldots, i_N}).
$$
Similarly, the Frobenius norm of an Nth-order tensor $\eX\in \real^{I_1\times I_2\times \ldots \times I_N}$ is given by $\sqrt{\langle \eX,\eX\rangle}$:
$$
||\eX||_F = \sqrt{\sum_{i_1=1}^{I_1}\sum_{i_2=1}^{I_2}\ldots \sum_{i_N=1}^{I_N} (x_{i_1,i_2,\ldots, i_N})^2}.
$$
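As a quick numerical check, both quantities can be computed in a few lines of NumPy; this is a minimal sketch with small random third-order arrays (the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 3, 4))  # two third-order tensors of the same size
Y = rng.standard_normal((2, 3, 4))

inner = np.sum(X * Y)         # element-wise products, summed over all modes
fro = np.sqrt(np.sum(X * X))  # Frobenius norm: the square root of <X, X>
```

Both agree with the corresponding vectorized forms, e.g., `inner` equals `X.ravel() @ Y.ravel()` up to rounding.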
\section{Outer Product and Rank-One Tensor}\label{section:rank-one-tensor}\index{Rank-one tensor}
Given two vectors $\ba\in \real^I$ and $\bb\in \real^J$, their outer product is the matrix $\ba\bb^\top \in \real^{I\times J}$. In tensor notation, we use the symbol $``\circ"$ to denote the outer product, e.g., $\ba\bb^\top = \ba \circ\bb$ for two vectors. More generally, the outer product of $N$ vectors $\ba^{(1)}\in \real^{I_1}, \ba^{(2)}\in \real^{I_2}, \ldots, \ba^{(N)}\in \real^{I_N}$ is given by
$$
\ba^{(1)}\circ \ba^{(2)}\circ \ldots \circ \ba^{(N)},
$$
where the $(i_1,i_2,\ldots, i_N)$-th element can be obtained by
$$
(\ba^{(1)}\circ \ba^{(2)}\circ \ldots \circ \ba^{(N)})_{i_1,i_2,\ldots, i_N}=
a^{(1)}_{i_1}a^{(2)}_{i_2}\ldots a^{(N)}_{i_N}.
$$
An Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ is of rank one if it
can be written as the outer product of $N$ vectors, i.e.,
$$
\eX = \ba^{(1)}\circ \ba^{(2)}\circ \ldots \circ \ba^{(N)}.
$$
The $(i_1,i_2,\ldots, i_N)$-th element of $\eX$ is thus given by
$$
\eX_{i_1,i_2,\ldots, i_N} = x_{i_1,i_2,\ldots, i_N} = a_{i_1}^{(1)}a_{i_2}^{(2)}\ldots a_{i_N}^{(N)}.
$$
The situation for a third-order rank-one tensor is illustrated in Figure~\ref{fig:rank-one-tensor}.
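A rank-one tensor can be assembled directly from its factor vectors. The following NumPy sketch (with arbitrary small vectors) builds $x_{ijk}=a_ib_jc_k$ via `einsum`:

```python
import numpy as np

a = np.array([1.0, 2.0])        # a in R^2
b = np.array([1.0, 0.0, -1.0])  # b in R^3
c = np.array([2.0, 3.0])        # c in R^2

# Rank-one tensor: x_{ijk} = a_i * b_j * c_k, the outer product a o b o c
X = np.einsum('i,j,k->ijk', a, b, c)
```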
\begin{figure}[htbp]
\centering
\resizebox{0.65\textwidth}{!}{%
\begin{tikzpicture}
\draw [very thick] (5.8+0.5,-3.8+0.5) rectangle (7+0.5,-2.6+0.5);
\filldraw [fill=gray!60!white,draw=green!40!black] (5.8+0.5,-3.8+0.5) rectangle (7+0.5,-2.6+0.5);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.5,-3.8+0.5) grid (7+0.5,-2.6+0.5);
\draw [very thick] (5.8+0.4,-3.8+0.4) rectangle (7+0.4,-2.6+0.4);
\filldraw [fill=gray!50!white,draw=green!40!black] (5.8+0.4,-3.8+0.4) rectangle (7+0.4,-2.6+0.4);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.4,-3.8+0.4) grid (7+0.4,-2.6+0.4);
\draw [very thick] (5.8+0.3,-3.8+0.3) rectangle (7+0.3,-2.6+0.3);
\filldraw [fill=gray!40!white,draw=green!40!black] (5.8+0.3,-3.8+0.3) rectangle (7+0.3,-2.6+0.3);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.3,-3.8+0.3) grid (7+0.3,-2.6+0.3);
\draw [very thick] (5.8+0.2,-3.8+0.2) rectangle (7+0.2,-2.6+0.2);
\filldraw [fill=gray!30!white,draw=green!40!black] (5.8+0.2,-3.8+0.2) rectangle (7+0.2,-2.6+0.2);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.2,-3.8+0.2) grid (7+0.2,-2.6+0.2);
\draw [very thick] (5.8+0.1,-3.8+0.1) rectangle (7+0.1,-2.6+0.1);
\filldraw [fill=gray!20!white,draw=green!40!black] (5.8+0.1,-3.8+0.1) rectangle (7+0.1,-2.6+0.1);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.1,-3.8+0.1) grid (7+0.1,-2.6+0.1);
\draw [very thick] (5.8,-3.8) rectangle (7,-2.6);
\filldraw [fill=gray!10!white,draw=green!40!black] (5.8,-3.8) rectangle (7,-2.6);
\draw [step=0.4/2, very thin, color=gray] (5.8,-3.8) grid (7,-2.6);
\draw [very thick] (5.8-0.1,-3.8-0.1) rectangle (7-0.1,-2.6-0.1);
\filldraw [fill=gray!6!white,draw=green!40!black] (5.8-0.1,-3.8-0.1) rectangle (7-0.1,-2.6-0.1);
\draw [step=0.4/2, very thin, color=gray] (5.8-0.1,-3.8-0.1) grid (7-0.1,-2.6-0.1);
\draw[->] (5.6,-2.6) -- ++(0.6,0.6);
\draw[->] (5.55,-2.7) -- ++(0,-1.2);
\draw[->] (5.66,-4.04) -- ++(1.25,0);
\draw (6.35,-4.25) node {{\color{black}\scriptsize{spatial $y$}}};
\draw (5.2-0.3,-3.2) node {{\color{black}\scriptsize{spatial $x$}}};
\draw (5.3,-2.2) node {{\color{black}\scriptsize{temporal}}};
\draw (6.2,-4.6) node {{\color{black}\scriptsize{$\eX\in\real^{I\times J \times K}$}}};
\draw (8,-3.2) node {{\color{black}\large{$=$}}};
\draw [very thick] (8.6,-4.4+0.2) rectangle (8.8,-3.2+0.2);
\filldraw [fill=WildStrawberry!40!white,draw=green!40!black] (8.6,-4.4+0.2) rectangle (8.8,-3.2+0.2);
\draw (8.7,-4.8+0.2) node {{\color{black}\scriptsize{$\ba \in \real^I$}}};
\draw [very thick] (9,-3-0.2+0.2) rectangle (10.2,-2.8-0.2+0.2);
\filldraw [fill=RubineRed!60!white,draw=green!40!black] (9,-3-0.2+0.2) rectangle (10.2,-2.8-0.2+0.2);
\draw (9.6,-3.3-0.2+0.2) node {{\color{black}\scriptsize{$\bb \in \real^J$}}};
\draw[fill=RedOrange!40!white, line width=0.8pt] (9.2,-2.4+0.2) -- (9.4,-2.4+0.2) -- (8.8,-3+0.2) -- (8.6,-3+0.2) -- cycle;
\draw (10-0.2,-1.9) node {{\color{black}\scriptsize{$\bc\in \real^K$}}};
\end{tikzpicture}
}
\caption{A third-order tensor with rank-one, $\eX=\ba\circ\bb\circ\bc$ where the $(i, j, k)$-th element of $\eX$ is given by $x_{ijk} = a_ib_j c_k$.}
\label{fig:rank-one-tensor}
\end{figure}
\section{Diagonal and Identity Tensors}
Diagonal matrices and identity matrices have their counterparts in tensor language:
\begin{definition}[Diagonal and Identity Tensors]\label{definition:identity-tensors}
A tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ is called a \textit{diagonal tensor} if $x_{i_1i_2\ldots i_N}= 0$ unless $i_1=i_2=\ldots=i_N$. When, in addition, all diagonal entries satisfy $x_{ii\ldots i}=1$, the tensor is known as the \textit{Nth-order identity tensor}.
\end{definition}
\section{Matricization: Matrix Representation of a Higher-Order Tensor}\label{section:matricization-tensor-original}
There are several kinds of matrix representations of an Nth-order tensor $\eX\in \real^{I_1\times I_2\times \ldots \times I_N}$, but the representation along the $n$-th mode always has size $I_n\times (I_1\ldots I_{n-1} I_{n+1} \ldots I_N)$; it is called the \textbf{mode-$n$ matricization of
the tensor} $\eX$ and is denoted by $\bX_{(n)}$.
To see this, we consider a specific example $\eY\in \real^{2\times 4\times 3}$ containing $\{1,2,\ldots, 24\}$ as entries where each number is stored in ascending order from the first index (height), then the second index (width), and finally the third index (channel) as shown in Figure~\ref{fig:3rd-example}:
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{imgs/tensormatrix3.pdf}
\caption{An example of a 3rd-order tensor: $\eY\in \real^{2\times 4\times 3}$ where $y_{ijk}=1+(i-1)+2(j-1)+8(k-1)$.}
\label{fig:3rd-example}
\end{figure}
\noindent Mathematically, each entry of $\eY$ can be described by
$$
y_{ijk} = 1+(i-1)+2(j-1)+8(k-1).
$$
Now, suppose we want the matricized form along the 1st mode, $\bY_{(1)}\in \real^{2\times 12}$. The number will be \textbf{\textcolor{blue}{fetched}} first from the 1st index of $\eY$ (height), then the 2nd index (width), and finally the 3rd index (channel). And the number is \textbf{\textcolor{blue}{stored}} into $\bY_{(1)}$ from the first index, and then the second:
$$
\bY_{(1)}=
\begin{bmatrix}
1 & 3 & 5 & 7 & 9 & 11 & 13 & 15 & 17 & 19 & 21 & 23\\
2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 & 22 & 24\\
\end{bmatrix}.
$$
If we want the mode-2 matricization of the tensor $\eY$, i.e., $\bY_{(2)}\in \real^{4\times 6}$, the number will be \textbf{\textcolor{blue}{fetched}} first from the 2nd index (width), then the 1st index (height), and finally the 3rd index (channel). \textbf{The storing of the number will always be the same:} the number is \textbf{\textcolor{blue}{stored}} into $\bY_{(2)}$ from the first index, and then the second:
$$
\bY_{(2)}=
\begin{bmatrix}
1& 2& 9& 10& 17& 18 \\
3& 4& 11& 12& 19& 20\\
5& 6& 13& 14& 21& 22\\
7& 8& 15& 16& 23& 24\\
\end{bmatrix}.
$$
If we want the mode-3 matricization of the tensor $\eY$, i.e., $\bY_{(3)}\in \real^{3\times 8}$, the number will be \textbf{\textcolor{blue}{fetched}} first from the 3rd index (channel), then the 1st index (height), and finally the 2nd index (width). \textbf{The storing of the number will still be the same}:
$$
\bY_{(3)}=
\begin{bmatrix}
1& 2& 3& 4& 5& 6& 7& 8\\
9& 10& 11& 12& 13& 14& 15& 16 \\
17& 18& 19& 20& 21& 22& 23& 24\\
\end{bmatrix}.
$$
More generally, given the Nth-order tensor $\eX\in \real^{I_1\times I_2\times \ldots \times I_N}$, in the mode-$n$ matricization, the tensor element $(i_1,i_2,\ldots, i_N)$ is mapped to the matrix element $(i_n, j)$:
$$
j= 1+\sum_{k=1,k\neq n}^{N} (i_k-1) J_k, \qquad \text{where }\gap J_k = \prod_{m=1,m\neq n}^{k-1} I_m.
$$
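The three unfoldings of $\eY$ above can be reproduced in NumPy. The helper `unfold` below is an illustrative implementation (the name is ours, not from the text): it moves mode $n$ to the front and reshapes in column-major order, so that among the remaining modes the earlier ones vary fastest, matching the index map above:

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization: move mode n to the front, then reshape in
    Fortran (column-major) order so earlier remaining modes vary fastest."""
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

# The running example, with 0-based indices: Y[i,j,k] = 1 + i + 2j + 8k
I, J, K = 2, 4, 3
i, j, k = np.meshgrid(np.arange(I), np.arange(J), np.arange(K), indexing='ij')
Y = 1 + i + 2 * j + 8 * k

Y1 = unfold(Y, 0)  # 2 x 12
Y2 = unfold(Y, 1)  # 4 x 6
Y3 = unfold(Y, 2)  # 3 x 8
```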
In particular, the \textit{vectorization} of a tensor is defined in a similar way: we fetch the entries in order, first from the first index, then the second, \ldots, and finally the last index. For the example tensor $\eY$ above,
$$
vec(\eY) =
\begin{bmatrix}
1\\2\\ \vdots\\24
\end{bmatrix}
$$
However, the matricization and vectorization of a tensor are not unique; as long as we keep the ordering of the entries consistent, any choice serves equally well in the analysis. See also \citep{de2000multilinear, kiers2000towards, kolda2009tensor}.
\paragraph{Norm of the difference of two tensors} The matricization or vectorization can help derive the properties of the tensors. For example, given $\eX,\eY\in \real^{I_1\times I_2\times \ldots \times I_N}$, it follows that
\begin{equation}\label{equation:differe-tensor-norm}
\begin{aligned}
||\eX-\eY||^2 &= ||vec(\eX)-vec(\eY)||^2 = ||vec(\eX)||^2 - 2vec(\eX)^\top vec(\eY) + ||vec(\eY)||^2\\
&=||\eX||^2 -2\langle \eX,\eY\rangle + ||\eY||^2.
\end{aligned}
\end{equation}
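This identity is straightforward to confirm numerically (a minimal sketch with small random tensors):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((3, 4, 2))
Y = rng.standard_normal((3, 4, 2))

# ||X - Y||^2 expanded via the inner product, as in the identity above
lhs = np.sum((X - Y) ** 2)
rhs = np.sum(X * X) - 2 * np.sum(X * Y) + np.sum(Y * Y)
```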
\section{Tensor Multiplication}
We consider the \textit{mode-$n$ tensor multiplication}: multiplying the tensor along the $n$-th mode. Given an Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ and a matrix $\bA\in \real^{M\times I_n}$, the mode-$n$ tensor multiplication of $\eX$ and $\bA$ transforms the $n$-th dimension of $\eX$ from $I_n$ to $M$. The mode-$n$ tensor multiplication is denoted by $\eX \times_n \bA \in \real^{I_1\times \ldots \times I_{n-1}\times \textcolor{blue}{M}\times I_{n+1}\times \ldots \times I_N}$, and is given by
\begin{equation}\label{equation:moden-tensor-multi}
( \eX \times_n \bA)_{i_1, \ldots, i_{n-1}, \textcolor{blue}{m}, i_{n+1}, \ldots, i_N}=
\sum_{i_n=1}^{I_n} (x_{i_1i_2\ldots i_N}) (a_{\textcolor{blue}{m} i_n}).
\end{equation}
\paragraph{Matrix multiplication as tensor multiplication} In matrix language, given two matrices $\bA\in \real^{I\times K}$ and $\bB \in \real^{K\times J}$, the matrix multiplication can be equivalently denoted by
$$
\bA\bB = \bB\times_1 \bA.
$$
Similarly, given $\bA\in \real^{I\times K}$ and $\bB \in \real^{J\times K}$, the matrix multiplication can be equivalently denoted by
$$
\bA\bB^\top = \bA\times_2 \bB.
$$
Going further, suppose the reduced SVD (Theorem~\ref{theorem:reduced_svd_rectangular}, p.~\pageref{theorem:reduced_svd_rectangular}) of $\bA\in \real^{M\times N}$ is given by
$$
\underset{M\times N}{\bA} = \underset{M\times R}{\bU}
\gap
\underset{R\times R}{\bSigma}
\gap
\underset{R\times N}{\bV^\top}.
$$
Then, the reduced SVD of $\bA$ can be equivalently denoted as
\begin{equation}\label{equation:svd-by-tensor-multi0}
\bA = \bU\bSigma\bV^\top = \bSigma\times_1 \bU \times_2 \bV.
\end{equation}
The full SVD of $\bA$ can be represented in a similar way.
The matricization of a tensor shows what happens in tensor multiplication.
\begin{lemma}[Tensor Multiplication in Matricization]\label{lemma:tensor-multi-matriciz1}
Given the Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ and the matrix $\bA\in \real^{M\times I_n}$, it follows that
$$
\begin{aligned}
\eY = \eX\times_n\bA\in \real^{I_1\times \ldots \times I_{n-1}\times \textcolor{blue}{M}\times I_{n+1}\times \ldots \times I_N} \\
\leadto \bY_{(n)} = \bA\bX_{(n)} \in \real^{M\times I_{-n}},
\end{aligned}
$$
where $I_{-n}=I_1 I_2\ldots I_{n-1}I_{n+1}\ldots I_N$.
\end{lemma}
From the above lemma, conversely, suppose the columns of $\bA$ are mutually orthonormal with $I_n\leq M$ (i.e., $\bA$ is semi-orthogonal; see the definition in Section~\ref{section:orthogonal-orthonormal-qr}, p.~\pageref{section:orthogonal-orthonormal-qr}). Then it follows that
$$
\bA^\top \bY_{(n)} = \underbrace{\bA^\top \bA}_{\bI}\bX_{(n)} = \bX_{(n)}.
$$
This reveals an important property of tensor multiplication: $\eX = \eY \times_n \bA^\top $. That is, if $\bA$ is semi-orthogonal, we have
\begin{equation}\label{equation:semiorthogonal-in-tensor-multi}
\eY = \eX\times_n\bA \leadto \eX = \eY \times_n \bA^\top.
\end{equation}
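The mode-$n$ multiplication via the matricization $\bY_{(n)}=\bA\bX_{(n)}$, together with the semi-orthogonal recovery in Equation~\eqref{equation:semiorthogonal-in-tensor-multi}, can be checked numerically. The helpers `unfold`, `fold`, and `mode_n_product` below are illustrative names of ours, not from the text:

```python
import numpy as np

def unfold(X, n):
    # mode-n matricization (earlier remaining modes vary fastest)
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order='F')

def fold(M, n, shape):
    # inverse of unfold: rebuild the tensor from its mode-n matricization
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(np.reshape(M, full, order='F'), 0, n)

def mode_n_product(X, A, n):
    # Y = X x_n A, implemented through Y_(n) = A X_(n)
    shape = list(X.shape)
    shape[n] = A.shape[0]
    return fold(A @ unfold(X, n), n, shape)

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 4, 5))
A = np.linalg.qr(rng.standard_normal((6, 3)))[0]  # semi-orthogonal: A^T A = I

Y = mode_n_product(X, A, 0)         # shape (6, 4, 5)
X_back = mode_n_product(Y, A.T, 0)  # recovers X since A is semi-orthogonal
```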
\begin{lemma}[Tensor Multiplication]\label{lemma:tensor-multi2}
Given the Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$, we have the following properties.
\begin{enumerate}
\item \textit{Distributive law.} Given further the matrices $\bA\in \real^{J_m\times I_m}$ and $\bB\in \real^{J_n\times I_n}$ where $m\neq n$, it follows that
$$
\eX \times_m \bA \times_n \bB = (\eX \times_m \bA) \times_n \bB = (\eX \times_n \bB) \times_m \bA.
$$
\item Given the matrices $\bA\in \real^{P\times I_n}$ and $\bB\in \real^{Q\times P}$, it follows that
$$
\eX \times_n\bA \times_n \bB = \eX\times_n (\bB\bA) \in \real^{I_1\times \ldots \times I_{n-1}\times \textcolor{blue}{Q}\times I_{n+1}\times \ldots \times I_N}.
$$
\item \label{equationbracket} Given $\eX\in \real^{I_1\times \ldots \times I_{n-1}\times \textcolor{blue}{M}\times I_{n+1}\times \ldots \times I_N} $, $\eY\in \real^{I_1\times \ldots \times I_{n-1}\times \textcolor{blue}{K}\times I_{n+1}\times \ldots \times I_N} $, and $\bA\in \real^{M\times K}$, it follows that
$$
\langle\eX, \eY\times_n\bA \rangle = \langle \eX\times_n\bA^\top, \eY\rangle .
$$
\item \label{equationbracketlast} Given a semi-orthogonal matrix $\bA\in \real^{P\times I_n}$, it follows that
$$
\eY = \eX\times_n\bA \leadto \eX = \eY \times_n \bA^\top
$$
and
$$
||\eY|| = ||\eX||,
$$
i.e., the norm is preserved under multiplication by a semi-orthogonal matrix.
\end{enumerate}
\end{lemma}
\section{Special Matrix Products} \index{Matrix products}
Several matrix products will prove important in the illustration of the algorithms in the sequel.
The \textit{Kronecker product} of vectors $\ba \in \real^{I}$ and $\bb\in \real^{K}$ is denoted by $\ba\otimes \bb$:
$$
\ba\otimes \bb=
\begin{bmatrix}
a_1\bb \\
a_2\bb \\
\vdots \\
a_I\bb
\end{bmatrix},
$$
which is a column vector of size $(IK)$.
\begin{definition}[Kronecker Product]\label{definition:kronecker-product}
Similarly, the \textit{Kronecker product} of matrices $\bA \in \real^{I\times J}$ and $\bB\in \real^{K\times L}$ is denoted by $\bA\otimes \bB$:
$$
\begin{aligned}
\bA\otimes \bB &=
\begin{bmatrix}
a_{11} \bB & a_{12}\bB & \ldots & a_{1J}\bB \\
a_{21} \bB & a_{22}\bB & \ldots & a_{2J}\bB \\
\vdots & \vdots & \ddots & \vdots \\
a_{I1} \bB & a_{I2}\bB & \ldots & a_{IJ}\bB \\
\end{bmatrix}\\
&=
\begin{bmatrix}
\ba_1 \otimes \bb_1 & \ldots & \ba_1\otimes \bb_L &
\ba_2 \otimes \bb_1 & \ldots & \ba_2\otimes \bb_L &
\ldots &
\ba_J \otimes \bb_1 & \ldots & \ba_J\otimes \bb_L
\end{bmatrix},
\end{aligned}
$$
which is a matrix of size $(IK)\times (JL)$.
\end{definition}
Specifically, we notice that for four vectors $\ba, \bc \in \real^{I}$ and $\bb, \bd\in \real^{K}$, we have
\begin{equation}\label{equation:kronecker-vector-find2}
(\ba\otimes \bb)^\top(\bc\otimes \bd)=
\begin{bmatrix}
a_1\bb^\top &
a_2\bb^\top &
\ldots &
a_I\bb^\top
\end{bmatrix}
\begin{bmatrix}
c_1\bd \\
c_2\bd \\
\vdots \\
c_I\bd
\end{bmatrix}
=\sum_{i=1}^{I}a_ic_i \bb^\top\bd=(\ba^\top\bc)(\bb^\top\bd).
\end{equation}
In particular, when $\bc=\ba, \bd=\bb$, it follows that
$$
(\ba\otimes \bb)^\top(\ba\otimes \bb) =||\ba||^2 \cdot ||\bb||^2.
$$
Similarly, for $\bA,\bC\in \real^{I\times J}$ and $\bB,\bD\in \real^{K\times L}$, it follows that
\begin{equation}
(\bA\otimes \bB)^\top (\bC\otimes \bD) = (\bA^\top\bC) \otimes (\bB^\top\bD).
\end{equation}
Note also that for $\bA \in \real^{I\times J}$, $\bB\in \real^{K\times L}$, $\bC\in \real^{J\times I}$, and $\bD\in \real^{L\times K}$, the analogous mixed-product property holds:
\begin{equation}
(\bA\otimes \bB) (\bC\otimes \bD) = (\bA\bC) \otimes (\bB\bD).
\end{equation}
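The Kronecker identities above are easy to verify numerically with `np.kron`; a minimal sketch with random matrices of compatible shapes:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 2)); C = rng.standard_normal((3, 2))
B = rng.standard_normal((4, 5)); D = rng.standard_normal((4, 5))

# (A (x) B)^T (C (x) D) = (A^T C) (x) (B^T D)
lhs = np.kron(A, B).T @ np.kron(C, D)
rhs = np.kron(A.T @ C, B.T @ D)

# (A (x) B)^+ = A^+ (x) B^+
pinv_lhs = np.linalg.pinv(np.kron(A, B))
pinv_rhs = np.kron(np.linalg.pinv(A), np.linalg.pinv(B))
```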
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10,frametitle={Pseudo Inverse in Kronecker Product}]
Moreover, following from \citep{van2000ubiquitous}, the pseudo inverse of $(\bA\otimes \bB)$ is given by
\begin{equation}\label{equation:otimes-psesudo}
(\bA\otimes \bB)^+ = \bA^+ \otimes \bB^+,
\end{equation}
where $\bA^+$ is the pseudo inverse of matrix $\bA$. Recall that the pseudo inverse of a full column rank matrix $\bA$ is simply $\bA^+=(\bA^\top\bA)^{-1}\bA^\top$.
When $\bA,\bB$ are both semi-orthogonal (see definition in Section~\ref{section:orthogonal-orthonormal-qr}, p.~\pageref{section:orthogonal-orthonormal-qr}), the pseudo inverse is $\bA^+=\bA^\top, \bB^+=\bB^\top$, and it follows that
\begin{equation}\label{equation:otimes-psesudo-semi}
(\bA\otimes \bB)^+ = \bA^\top \otimes \bB^\top.
\end{equation}
Analogously, the above pseudo inverse identity extends to the Kronecker product of a sequence of matrices.
\end{mdframed}
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10,frametitle={Orthogonality in Kronecker Product}]
Suppose further $\ba\in \real^I$, and $\bb_1, \bb_2\in \real^K$ with $\bb_1^\top\bb_2=0$, then
\begin{equation}\label{equation:orthogonal-in-kronecker1}
(\ba\otimes \bb_1) \perp (\ba\otimes \bb_2).
\end{equation}
Or suppose $\ba_1,\ba_2\in \real^I$ with $\ba_1^\top\ba_2=0$, and $\bb\in \real^K$, then
\begin{equation}\label{equation:orthogonal-in-kronecker2}
(\ba_1\otimes\bb) \perp (\ba_2 \otimes \bb).
\end{equation}
The above two findings imply that
$
\bA\otimes \bB
$ has mutually orthogonal (orthonormal) columns whenever both $\bA \in \real^{I\times J}$ and $\bB\in \real^{K\times L}$ have mutually orthogonal (orthonormal) columns.
\end{mdframed}
\begin{definition}[Khatri-Rao Product]\label{definition:khatri-rao-product}
The \textit{Khatri-Rao product} of matrices $\bA \in \real^{I\times K}$ and $\bB\in \real^{J\times K}$ is denoted by $\bA\odot \bB$:
$$
\bA\odot\bB =
\begin{bmatrix}
\ba_1\otimes \bb_1 & \ba_2\otimes \bb_2 & \ldots & \ba_K\otimes \bb_K
\end{bmatrix},
$$
which is a matrix of size $(IJ)\times K$. It is also known as the ``matching columnwise" Kronecker product.
\end{definition}
From the above definition of the Khatri-Rao product, for two vectors $\ba$ and $\bb$, it follows that
$$
\ba \odot \bb = \ba\otimes\bb.
$$
Note also that the Khatri-Rao product is associative:
$$
\bA\odot \bB\odot \bC = (\bA\odot \bB)\odot \bC = \bA\odot (\bB\odot \bC).
$$
\begin{definition}[Hadamard Product]
The \textit{Hadamard product} of matrices $\bA, \bB\in \real^{I\times J}$ is denoted by $\bA\circledast\bB$:
$$
\bA\circledast\bB =
\begin{bmatrix}
a_{11}b_{11} & a_{12}b_{12} & \ldots & a_{1J}b_{1J}\\
a_{21}b_{21} & a_{22}b_{22} & \ldots & a_{2J}b_{2J}\\
\vdots & \vdots & \ddots & \vdots\\
a_{I1}b_{I1} & a_{I2}b_{I2} & \ldots & a_{IJ}b_{IJ}\\
\end{bmatrix},
$$
which is a matrix of size $I\times J$.
\end{definition}
Specifically, we further notice that, for two matrices $\bA \in \real^{I\times K}$ and $\bB\in \real^{J\times K}$, it follows that
\begin{equation}
\bZ = (\bA\odot \bB )^\top (\bA\odot \bB )=
\begin{bmatrix}
(\ba_1\otimes \bb_1)^\top \\
(\ba_2\otimes \bb_2)^\top \\
\vdots \\
(\ba_K\otimes \bb_K)^\top
\end{bmatrix}
\begin{bmatrix}
\ba_1\otimes \bb_1 & \ba_2\otimes \bb_2 & \ldots & \ba_K\otimes \bb_K
\end{bmatrix},
\end{equation}
where $\bZ\in \real^{K\times K}$ and the $(i,j)$-th entry $z_{ij}$ is given by
$$
z_{ij} = (\ba_i\otimes \bb_i)^\top (\ba_j\otimes \bb_j) = (\ba_i^\top\ba_j)(\bb_i^\top\bb_j),
$$
where the last equality comes from Equation~\eqref{equation:kronecker-vector-find2}. Therefore, $\bZ$ can be equivalently written as
\begin{equation}\label{equation:two-khatri-rao-pro-equi}
\bZ = (\bA\odot \bB )^\top (\bA\odot \bB ) =
(\bA^\top\bA) \circledast (\bB^\top\bB).
\end{equation}
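This Khatri-Rao identity can be verified numerically. The helper `khatri_rao` below is an illustrative implementation of ours, building the column-wise Kronecker product by broadcasting:

```python
import numpy as np

def khatri_rao(A, B):
    # column-wise Kronecker product: the k-th column is a_k (x) b_k
    I, K = A.shape
    J, K2 = B.shape
    assert K == K2, "A and B must have the same number of columns"
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, K)

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((5, 4))

Z = khatri_rao(A, B)  # shape (15, 4)
gram = Z.T @ Z        # equals (A^T A) * (B^T B), with * the Hadamard product
```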
To conclude, it follows that
\begin{equation}\label{equation:matrix-prod-in-tensor}
\boxed{\left.
\begin{aligned}
& (\ba\otimes \bb)^\top(\bc\otimes \bd)&=&(\ba^\top\bc)(\bb^\top\bd);\\
& (\ba\otimes \bb)^\top(\ba\otimes \bb) &=&||\ba||^2 \cdot ||\bb||^2;\\
& (\bA\otimes \bB)^\top (\bC\otimes \bD) &=& (\bA^\top\bC) \otimes (\bB^\top\bD), \gap \left(\text{\parbox{11em}{where $\bA,\bC$ same shape, \\$\bB,\bD$ same shape}}\right);\\
& (\bA\otimes \bB)^+ &=& \bA^+ \otimes \bB^+;\\
& \bA\odot \bB\odot \bC &=& (\bA\odot \bB)\odot \bC \\
&\gap &=& \bA\odot (\bB\odot \bC);\\
& (\bA\odot \bB )^\top (\bA\odot \bB ) &=& (\bA^\top\bA) \circledast (\bB^\top\bB).
\end{aligned}
\right.}
\end{equation}
\chapter{Tensor-Train (TT) Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Tensor-Train (TT) Decomposition}
\begin{theoremHigh}[Tensor-Train Decomposition \citep{oseledets2011tensor}]\label{theorem:ttrain-decomp}
A general Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ admits the tensor-train decomposition
$$
\eX \approx \eG^{(1)} \boxtimes \eG^{(2)} \boxtimes \ldots \boxtimes \eG^{(N)},
$$
where the symbol ``$\boxtimes$" means the elements of $\eX$ can be obtained by a multiplication of $N$ matrices
$$
x_{i_1,i_2,\ldots,i_N} = \eG^{(1)}_{:,:,i_1} \eG^{(2)}_{:,:,i_2}\ldots \eG^{(N)}_{:,:,i_N}.
$$
Note here
\begin{itemize}
\item Each $\eG^{(n)} \in \real^{R_{n-1}\times R_{n}\times I_n}$ for all $n\in \{1,2,\ldots, N\}$ is a third-order tensor, and is referred to as a \textit{TT core};
\item Each $\eG^{(n)}_{:,:,i_n} \in \real^{R_{n-1}\times R_n}$ for all $i_n\in \{1,2,\ldots, I_n\}$ is a frontal slice of $\eG^{(n)}$ (see Figure~\ref{fig:thifd-slices});
\item $R_0, R_N$ are imposed to be 1 for boundary conditions;
\item $R_1,R_2,\ldots, R_{N-1}$ are known as the \textit{tensor ranks} of the corresponding dimensions.
\end{itemize}
The illustration of how to extract each element of the decomposition is shown in Figure~\ref{fig:tensortrain-decom}.
\end{theoremHigh}
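A minimal numerical sketch of the element-wise evaluation (with small random cores; the helper name `tt_element` is ours): each entry is the product of one frontal slice per core, and the chain collapses to a $1\times 1$ matrix thanks to the boundary ranks.

```python
import numpy as np

rng = np.random.default_rng(4)
dims  = (4, 5, 6)      # I_1, I_2, I_3
ranks = (1, 3, 2, 1)   # R_0, R_1, R_2, R_3 (boundary ranks equal 1)

# TT cores G^{(n)} of shape (R_{n-1}, R_n, I_n)
cores = [rng.standard_normal((ranks[n], ranks[n + 1], dims[n]))
         for n in range(3)]

def tt_element(cores, index):
    """x_{i_1,...,i_N} as the product of the frontal slices G^{(n)}[:, :, i_n]."""
    out = np.eye(1)
    for G, i_n in zip(cores, index):
        out = out @ G[:, :, i_n]
    return out.item()  # the chain is 1-by-1 thanks to the boundary ranks

# Cross-check: contract all three cores at once into the full tensor
X = np.einsum('abi,bcj,cdk->ijk', *cores)
```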
\begin{figure}[H]
\centering
\resizebox{1.0\textwidth}{!}{%
\begin{tikzpicture}
\pgfmathsetmacro{\bxx}{0.3}
\pgfmathsetmacro{\begupwhite}{0.2}
\pgfmathsetmacro{\bcwd}{0.3}
\pgfmathsetmacro{\bbwd}{0.3}
\pgfmathsetmacro{\bbright}{0.9}
\pgfmathsetmacro{\bbup}{0.225}
\pgfmathsetmacro{\distrightdown}{4.2}
\pgfmathsetmacro{\distl}{1.2}
\pgfmathsetmacro{\aprimeright}{0.4}
\pgfmathsetmacro{\abvd}{3.3}
\pgfmathsetmacro{\abvdd}{3.9}
\draw (6.6,-3.2+\abvd) node {{\color{black}\large{$\eX_{i_1, i_2,\ldots, i_N}$}}};
\draw (8,-3.2+\abvd) node { {\color{black}\large{$\approx$}}};
\draw [very thick] (5.8+0.5,-3.8+0.5) rectangle (7+0.5,-2.6+0.5);
\filldraw [fill=gray!60!white,draw=green!40!black] (5.8+0.5,-3.8+0.5) rectangle (7+0.5,-2.6+0.5);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.5,-3.8+0.5) grid (7+0.5,-2.6+0.5);
\draw [very thick] (5.8+0.4,-3.8+0.4) rectangle (7+0.4,-2.6+0.4);
\filldraw [fill=gray!50!white,draw=green!40!black] (5.8+0.4,-3.8+0.4) rectangle (7+0.4,-2.6+0.4);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.4,-3.8+0.4) grid (7+0.4,-2.6+0.4);
\draw [very thick] (5.8+0.3,-3.8+0.3) rectangle (7+0.3,-2.6+0.3);
\filldraw [fill=gray!40!white,draw=green!40!black] (5.8+0.3,-3.8+0.3) rectangle (7+0.3,-2.6+0.3);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.3,-3.8+0.3) grid (7+0.3,-2.6+0.3);
\draw [very thick] (5.8+0.2,-3.8+0.2) rectangle (7+0.2,-2.6+0.2);
\filldraw [fill=gray!30!white,draw=green!40!black] (5.8+0.2,-3.8+0.2) rectangle (7+0.2,-2.6+0.2);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.2,-3.8+0.2) grid (7+0.2,-2.6+0.2);
\draw [very thick] (5.8+0.1,-3.8+0.1) rectangle (7+0.1,-2.6+0.1);
\filldraw [fill=gray!20!white,draw=green!40!black] (5.8+0.1,-3.8+0.1) rectangle (7+0.1,-2.6+0.1);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.1,-3.8+0.1) grid (7+0.1,-2.6+0.1);
\draw [very thick] (5.8,-3.8) rectangle (7,-2.6);
\filldraw [fill=gray!10!white,draw=green!40!black] (5.8,-3.8) rectangle (7,-2.6);
\draw [step=0.4/2, very thin, color=gray] (5.8,-3.8) grid (7,-2.6);
\draw [very thick] (5.8-0.1,-3.8-0.1) rectangle (7-0.1,-2.6-0.1);
\filldraw [fill=gray!6!white,draw=green!40!black] (5.8-0.1,-3.8-0.1) rectangle (7-0.1,-2.6-0.1);
\draw [step=0.4/2, very thin, color=gray] (5.8-0.1,-3.8-0.1) grid (7-0.1,-2.6-0.1);
\draw (6.2,-4.6) node {{\color{black}\scriptsize{$\eX\in\real^{I_1\times I_2 \times \ldots \times I_N}$}}};
\draw (8,-3.2) node { {\color{black}\large{$\approx$}}};
\pgfmathsetmacro{\cubex}{1}
\pgfmathsetmacro{\cubey}{1}
\pgfmathsetmacro{\cubez}{0.1}
\pgfmathsetmacro{\xa}{3.7}
\pgfmathsetmacro{\ya}{3.1}
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-6*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-6*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-6*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (6+\xa,0-\ya,-5*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (6+\xa,0-\ya,-5*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!60!white] (6+\xa,0-\ya,-5*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (6+\xa,0-\ya,-4*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (6+\xa,0-\ya,-4*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!50!white] (6+\xa,0-\ya,-4*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya,-3*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (6+\xa,0-\ya,-2*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (6+\xa,0-\ya,-2*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!30!white] (6+\xa,0-\ya,-2*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (6+\xa,0-\ya,-1*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (6+\xa,0-\ya,-1*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!20!white] (6+\xa,0-\ya,-1*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (6+\xa,0-\ya,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (6+\xa,0-\ya,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!10!white] (6+\xa,0-\ya,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw (7+\xa,0-\ya) node { {\color{black}\large{$\boxtimes$}}};
\draw (5.7+\xa,-4.8) node {{\color{black}\scriptsize{$\eG^{(1)}\in\real^{R_0\times R_1 \times I_1}$}}};
\draw[->] (4.9+\xa,-3) -- ++(0.8,0.8);
\draw[->] (4.85+\xa,-3.1) -- ++(0,-1.);
\draw[->] (4.95+\xa,-4.2) -- ++(1.25,0);
\draw (5.5+\xa,-4.45) node {{\color{black}\scriptsize{$R_1$}}};
\draw (4.6+\xa,-3.5) node {{\color{black}\scriptsize{$R_0$}}};
\draw (5.2+\xa,-2.35) node {{\color{black}\scriptsize{$I_1$}}};
\draw[black,fill=gray!40!white] (6+\xa,0-\ya+\abvd,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya+\abvd,-3*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya+\abvd,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw (5.8+\xa,-4.7+\abvdd) node {{\color{black}\scriptsize{$\eG^{(1)}_{:,:,i_1}\in\real^{R_0\times R_1}$}}};
\draw (7+\xa,0-\ya+\abvd) node { {\color{black}\large{$\times$}}};
\draw[-{Computer Modern Rightarrow}] (5.8+\xa,-2.) -- ++(0,0.8);
\draw (5.8+\xa,-1.65) node {{\color{black}\scriptsize{\textcolor{blue}{$i_1$-th slice}}}};
\pgfmathsetmacro{\xa}{6.2}
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-8*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-8*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-8*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-7*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-7*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-7*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-6*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-6*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-6*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (6+\xa,0-\ya,-5*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (6+\xa,0-\ya,-5*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!60!white] (6+\xa,0-\ya,-5*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (6+\xa,0-\ya,-4*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (6+\xa,0-\ya,-4*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!50!white] (6+\xa,0-\ya,-4*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya,-3*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (6+\xa,0-\ya,-2*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (6+\xa,0-\ya,-2*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!30!white] (6+\xa,0-\ya,-2*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (6+\xa,0-\ya,-1*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (6+\xa,0-\ya,-1*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!20!white] (6+\xa,0-\ya,-1*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (6+\xa,0-\ya,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (6+\xa,0-\ya,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!10!white] (6+\xa,0-\ya,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw (7.2+\xa,0-\ya) node { {\color{black}\large{$\boxtimes$}}};
\draw (5.9+\xa,-4.8) node {{\color{black}\scriptsize{$\eG^{(2)}\in\real^{R_1\times R_2 \times I_2}$}}};
\draw[->] (4.9+\xa,-3) -- ++(0.85,0.85);
\draw[->] (4.85+\xa,-3.1) -- ++(0,-1.);
\draw[->] (4.95+\xa,-4.2) -- ++(1.25,0);
\draw (5.5+\xa,-4.45) node {{\color{black}\scriptsize{$R_2$}}};
\draw (4.6+\xa,-3.6) node {{\color{black}\scriptsize{$R_1$}}};
\draw (5.2+\xa,-2.35) node {{\color{black}\scriptsize{$I_2$}}};
\draw[black,fill=gray!40!white] (6+\xa,0-\ya+\abvd,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya+\abvd,-3*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya+\abvd,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw (5.9+\xa,-4.7+\abvdd) node {{\color{black}\scriptsize{$\eG^{(2)}_{:,:,i_2}\in\real^{R_1\times R_2}$}}};
\draw (7+\xa,0-\ya+\abvd) node { {\color{black}\large{$\times$}}};
\draw[-{Computer Modern Rightarrow}] (5.8+\xa,-2.) -- ++(0,0.8);
\draw (5.8+\xa,-1.65) node {{\color{black}\scriptsize{\textcolor{blue}{$i_2$-th slice}}}};
\draw (7.9+\xa,0-\ya) node { {\color{black}\large{$\ldots \boxtimes$}}};
\draw (7.9+\xa,0-\ya+\abvd) node { {\color{black}\large{$\ldots \times$}}};
\pgfmathsetmacro{\xa}{9.9}
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-7*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-7*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-7*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-6*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-6*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!70!white] (6+\xa,0-\ya,-6*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (6+\xa,0-\ya,-5*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!60!white] (6+\xa,0-\ya,-5*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!60!white] (6+\xa,0-\ya,-5*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (6+\xa,0-\ya,-4*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!50!white] (6+\xa,0-\ya,-4*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!50!white] (6+\xa,0-\ya,-4*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya,-3*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (6+\xa,0-\ya,-2*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!30!white] (6+\xa,0-\ya,-2*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!30!white] (6+\xa,0-\ya,-2*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (6+\xa,0-\ya,-1*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!20!white] (6+\xa,0-\ya,-1*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!20!white] (6+\xa,0-\ya,-1*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (6+\xa,0-\ya,0) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!10!white] (6+\xa,0-\ya,0) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!10!white] (6+\xa,0-\ya,0) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw (5.9+\xa,-4.8) node {{\color{black}\scriptsize{$\eG^{(N)}\in\real^{R_{N-1}\times R_N \times I_N}$}}};
\draw[->] (4.9+\xa,-3) -- ++(0.8,0.8);
\draw[->] (4.85+\xa,-3.1) -- ++(0,-1.);
\draw[->] (4.95+\xa,-4.2) -- ++(1.25,0);
\draw (5.5+\xa,-4.45) node {{\color{black}\scriptsize{$R_N$}}};
\draw (4.48+\xa,-3.7) node {{\color{black}\scriptsize{$R_{N-1}$}}};
\draw (5.2+\xa,-2.35) node {{\color{black}\scriptsize{$I_N$}}};
\draw[black,fill=gray!40!white] (6+\xa,0-\ya+\abvd,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,-\cubey,0) -- ++(\cubex,0,0) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya+\abvd,-3*0.5) -- ++(0,0,-\cubez) -- ++(0,-\cubey,0) -- ++(0,0,\cubez) -- cycle;
\draw[black,fill=gray!40!white] (6+\xa,0-\ya+\abvd,-3*0.5) -- ++(-\cubex,0,0) -- ++(0,0,-\cubez) -- ++(\cubex,0,0) -- cycle;
\draw (5.9+\xa,-4.7+\abvdd) node {{\color{black}\scriptsize{$\eG^{(N)}_{:,:,i_N}\in\real^{R_{N-1}\times R_N}$}}};
\draw[-{Computer Modern Rightarrow}] (5.8+\xa,-2.) -- ++(0,0.8);
\draw (5.8+\xa,-1.65) node {{\color{black}\scriptsize{\textcolor{blue}{$i_N$-th slice}}}};
\end{tikzpicture}
}
\caption{Tensor-train decomposition of an Nth-order tensor: $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}\approx \eG^{(1)} \boxtimes \eG^{(2)} \boxtimes \ldots \boxtimes \eG^{(N)}$ where each $\eG^{(n)} \in \real^{R_{n-1}\times R_{n}\times I_n}$ for $n\in \{1,2,\ldots,N\}$. Each $(i_1,i_2,\ldots,i_N)$-th element is obtained by the matrix multiplication of the corresponding frontal slices of $\eG^{(n)}$'s.
Note here $R_0=R_N=1$.
}
\label{fig:tensortrain-decom}
\end{figure}
The illustration of the TT decomposition for an Nth order tensor is shown in Figure~\ref{fig:tensortrain-decom}.
In other words, the TT format approximates every entry of the tensor $\eX$ by a product of $N$ matrices: a sequence of $R_{n-1} \times R_{n}$ matrices, the $n$-th of which is indexed by the parameter $i_{n}$. The diagram resembles a
train with links between the carriages, hence the name ``train". Suppose that the TT-ranks are all equal, $R_1=R_2=\ldots=R_{N-1} = R$, and that $I_1=I_2=\ldots=I_N = I$; then the TT decomposition
requires the storage of $O(N I R^2)$ floats. Thus the memory complexity of the TT decomposition scales linearly with the order $N$.
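To make the storage claim concrete, the following minimal numpy sketch (illustrative code, not from this book; all names are our own) stores a small tensor in TT format and evaluates a single entry as a product of core slices:

```python
import numpy as np

# Illustrative sketch: a 4th-order tensor in TT format, with cores
# G^(n) of shape (R_{n-1}, R_n, I_n) and boundary ranks R_0 = R_N = 1.
N, I, R = 4, 5, 3
ranks = [1, R, R, R, 1]
cores = [np.random.randn(ranks[n], ranks[n + 1], I) for n in range(N)]

def tt_entry(cores, idx):
    """Entry x[i_1,...,i_N] = G1[:,:,i_1] @ G2[:,:,i_2] @ ... @ GN[:,:,i_N]."""
    out = np.eye(1)
    for core, i in zip(cores, idx):
        out = out @ core[:, :, i]        # (R_{n-1} x R_n) frontal slice
    return out.item()                    # the final product is 1 x 1

storage = sum(c.size for c in cores)     # O(N * I * R^2) floats
print(tt_entry(cores, (0, 1, 2, 3)), storage)
```

For $N=4$, $I=5$, $R=3$ the cores hold $120$ floats (the boundary cores are smaller), in line with the $O(NIR^2)$ bound.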
\paragraph{The ``$\boxtimes$" notation} The symbol ``$\boxtimes $" is defined to be a special tensor product. Suppose $\eA\in \real^{R_1\times R_2 \times I_1\times I_2\times \ldots\times I_M}, \eB\in \real^{R_2\times R_3\times J_1\times J_2\times \ldots \times J_N}$, then it follows that
$$
\eA\boxtimes \eB \in \real^{R_1\times R_3 \times I_1\times I_2\times \ldots\times I_M \times J_1\times J_2\times \ldots \times J_N},
$$
where each element is given by
$$
\begin{aligned}
(\eA\boxtimes \eB)_{r_1, r_3 , i_1, i_2, \ldots, i_M , j_1, j_2, \ldots , j_N}\\
=\sum_{r_2=1}^{R_2}
(a_{r_1,\textcolor{blue}{r_2},i_1, i_2, \ldots, i_M})
(b_{\textcolor{blue}{r_2},r_3,j_1, j_2, \ldots , j_N}).
\end{aligned}
$$
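As a sanity check, the ``$\boxtimes$'' contraction can be realized with numpy's \texttt{tensordot}; the sketch below (illustrative, not from the book) verifies one element against the defining sum over $r_2$:

```python
import numpy as np

def boxtimes(A, B):
    """Contract the second axis of A (size R2) with the first axis of B,
    matching the elementwise definition of the boxtimes product."""
    # tensordot yields axes (R1, I1..IM, R3, J1..JN); move R3 next to R1.
    C = np.tensordot(A, B, axes=([1], [0]))
    return np.moveaxis(C, A.ndim - 1, 1)

A = np.random.randn(2, 3, 4)       # R1=2, R2=3, I1=4
B = np.random.randn(3, 5, 6)       # R2=3, R3=5, J1=6
C = boxtimes(A, B)
assert C.shape == (2, 5, 4, 6)     # R1 x R3 x I1 x J1
# Spot-check one element against the defining sum over r2.
val = sum(A[1, r2, 2] * B[r2, 4, 3] for r2 in range(3))
assert np.isclose(C[1, 4, 2, 3], val)
```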
\section{Computing the TT Decomposition}
\begin{algorithm}[h]
\caption{TT-SVD}
\label{alg:tt-decomposi}
\begin{algorithmic}[1]
\Require Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$;
\State Set the initial matrix $\bC = \abovebX{1} \in \real^{I_1 \times I_2I_3\ldots I_N}$;
\State Set $R_0=R_N=1$;
\For{$n=1,2,\ldots, N-1$}
\State $\bC=\text{reshape}\bigg(\bC, (I_nR_{n-1}), (I_{n+1}I_{n+2}\ldots I_N)\bigg)$;
\State Compute the $\delta$-truncated SVD $\bC = \bU\bSigma\bV^\top$ with numerical rank $R_n=\text{rank}(\bC)$;
\State Set the rows of $\bG^{(n)}_{(2)}\in \real^{R_n \times I_nR_{n-1}}$ to the first $R_n$ left singular vectors of $\bC$ (transposed);
\State Un-matricization: $\eG^{(n)}=\text{reshape}(\bG^{(n)}_{(2)})\in \real^{R_{n-1}\times R_n\times I_n}$;
\State Get the new matrix $\bC = \bSigma\bV^\top\in \real^{R_n\times I_{n+1}I_{n+2}\ldots I_N}$;
\EndFor
\State Get the last core tensor: $\eG^{(N)}=\text{reshape}(\bC)\in \real^{R_{N-1}\times R_N\times I_N}$;
\State Output $\eG^{(1)}, \eG^{(2)}, \ldots, \eG^{(N)}$;
\end{algorithmic}
\end{algorithm}
Define the \textit{tensor unfolding} for an Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$ in the following way
$$
\abovebX{n} \in \real^{(I_1I_2\ldots I_n) \times (I_{n+1}I_{n+2}\ldots I_N)},
$$
where the $\bigg(\{i_1\ldots i_n\},\{i_{n+1}\ldots i_N\}\bigg)$-th element of $\abovebX{n}$ is obtained by $\abovebX{n}_{i_1\ldots i_n,i_{n+1}\ldots i_N} = \eX_{i_1,i_2,\ldots,i_N}$.
The tensor unfolding can be denoted by a reshape operator:
$$
\abovebX{n} = \text{reshape}\bigg(\eX, (I_1I_2\ldots I_n), (I_{n+1}I_{n+2}\ldots I_N) \bigg).
$$
This unfolding reveals a recursive algorithm to calculate the TT decomposition of $\eX$.
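In numpy, this unfolding is a single reshape; the sketch below (using C-order flattening as one consistent choice of the multi-index convention) checks the element mapping:

```python
import numpy as np

# Illustrative sketch of the unfolding X^{<n>} via numpy's reshape.
I1, I2, I3, I4 = 2, 3, 4, 5
X = np.random.randn(I1, I2, I3, I4)

Xn = X.reshape(I1 * I2, I3 * I4)   # rows indexed by (i1,i2), cols by (i3,i4)
# Element check: X^{<2>}_{ {i1 i2}, {i3 i4} } = X_{i1,i2,i3,i4}
i1, i2, i3, i4 = 1, 2, 3, 4
assert Xn[i1 * I2 + i2, i3 * I4 + i4] == X[i1, i2, i3, i4]
```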
Consider first the tensor unfolding along mode-$1$:
$$
\abovebX{1} \in \real^{I_1 \times I_2I_3\ldots I_N}.
$$
Recall that the mode-$2$ matricization of $\eG^{(1)}$ is given by (Section~\ref{section:matricization-tensor-original}, p.~\pageref{section:matricization-tensor-original}):
$$
\bG^{(1)}_{(2)} \in \real^{ R_1\times R_0I_1 }=\real^{ R_1 \times I_1 }.
$$
Then $\bG^{(1)}_{(2)}$ can be thought of as a distilled version of $\abovebX{1}$: the rows of $\bG^{(1)}_{(2)}$ span the same space as the columns of $\abovebX{1}$. Therefore, $R_1$ can be determined as the numerical rank (Definition~\ref{definition:effective-rank-in-svd}, p.~\pageref{definition:effective-rank-in-svd}) in the SVD of $\abovebX{1}$, i.e., by discarding the singular values of $\abovebX{1}$ smaller than $\delta$. As in the Tucker decomposition and the HOSVD, the rows of $\bG^{(1)}_{(2)}$ can be obtained from the first $R_1$ left singular vectors of $\abovebX{1}$. For the (truncated) SVD of $\abovebX{1}$:
$$
\begin{aligned}
\abovebX{1} &= \bU_1\bSigma_1\bV_1^\top\\
&\leadto
\boxed{
\bG^{(1)}_{(2)} = \bU_1^\top\in \real^{R_1\times I_1}
}.
\end{aligned}
$$
Now, what remains is $\bSigma_1\bV_1^\top \in \real^{R_1\times I_2I_3\ldots I_N}$. By a similar ``matrix unfolding", suppose we reshape $\bSigma_1\bV_1^\top$ into an $I_2R_1 \times I_3I_4\ldots I_N$ matrix $\bC$:
$$
\bC \in \real^{I_2R_1 \times I_3I_4\ldots I_N} = \text{reshape}\bigg(\bSigma_1\bV_1^\top, (I_2R_1 ), (I_3I_4\ldots I_N) \bigg).
$$
The second core tensor $\eG^{(2)}$ can be obtained in the same way via the (truncated) SVD of $\bC$:
$$
\begin{aligned}
\bC &= \bU_2 \bSigma_2\bV_2^\top \\
&\leadto
\boxed{
\bG^{(2)}_{(2)} = \bU_2^\top\in \real^{ R_2\times I_2R_1}
}.
\end{aligned}
$$
The same process continues until eventually all $N$ core tensors $\{\eG^{(1)}, \eG^{(2)}, \ldots, \eG^{(N)}\}$ are obtained via this sequence of (truncated) SVDs. This is known as the TT-SVD algorithm \citep{oseledets2011tensor}. The full procedure is shown in Algorithm~\ref{alg:tt-decomposi}.
Moreover, if the rank satisfies $R_n\leq \text{rank}(\bC)$ in each iteration, a best low-rank TT approximation $\eX_{best}$ to $\eX$ in the Frobenius norm always exists.
If, in addition, the truncation tolerance for the SVD of each unfolding is set to $\delta = \frac{\epsilon}{\sqrt{N-1}} ||\eX||_F$, the TT-SVD constructs a quasi-optimal approximation $\eX_{SVD}$ such that
$$
||\eX-\eX_{SVD}||_F \leq \sqrt{N-1} ||\eX-\eX_{best}||_F.
$$
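The sequential-SVD procedure above can be sketched as a short numpy routine. This is an illustrative implementation of the TT-SVD loop (\texttt{tt\_svd} is our own name, and C-order reshapes are an assumed multi-index convention), followed by an entrywise reconstruction check:

```python
import numpy as np

def tt_svd(X, delta=1e-10):
    """Illustrative sketch of TT-SVD: sequential truncated SVDs of the
    unfoldings; returns cores G^(n) of shape (R_{n-1}, R_n, I_n)."""
    dims = X.shape
    N = len(dims)
    cores = []
    R_prev = 1                                   # R_0 = 1
    C = X.reshape(dims[0], -1)
    for n in range(N - 1):
        C = C.reshape(R_prev * dims[n], -1)      # (I_n R_{n-1}) x (I_{n+1}...I_N)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        R = max(1, int(np.sum(s > delta)))       # delta-truncated numerical rank
        # reshape the leading left singular vectors into the core G^(n)
        cores.append(U[:, :R].reshape(R_prev, dims[n], R).transpose(0, 2, 1))
        C = s[:R, None] * Vt[:R]                 # Sigma V^T, shape R_n x (I_{n+1}...I_N)
        R_prev = R
    cores.append(C.reshape(R_prev, dims[-1], 1).transpose(0, 2, 1))
    return cores

# Reconstruction check: each entry is a product of core frontal slices.
X = np.random.randn(3, 4, 5)
cores = tt_svd(X)
Y = np.zeros_like(X)
for idx in np.ndindex(*X.shape):
    M = np.eye(1)
    for core, i in zip(cores, idx):
        M = M @ core[:, :, i]
    Y[idx] = M.item()
assert np.allclose(X, Y)
```

With no truncation (generic singular values above $\delta$), the reconstruction is exact up to floating-point error.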
\paragraph{Complexity and curse of dimensionality} Suppose again that the TT-ranks are all equal, $R_1=R_2=\ldots=R_{N-1} = R$, and that $I_1=I_2=\ldots=I_N = I$, and recall that the complexity of the SVD of an $M\times K$ matrix is $O(MK^2)$ (Section~\ref{section:comput-svd-in-svd}, p.~\pageref{section:comput-svd-in-svd}). The complexity of step $n$ in Algorithm~\ref{alg:tt-decomposi} can then be shown to be
$$
f(n) = (R I) (I^{(N-n)})^2.
$$
A simple summation shows that the complexity of the TT-SVD algorithm is
$$
\text{cost}= f(1)+f(2)+\ldots+f(N-1)= RI \left(I^{2(N-1)} + I^{2(N-2)}+ \ldots + I^{2}\right) = O\left(RI^{2N-1}\right).
$$
So the complexity grows exponentially with the order $N$, and thus the curse of dimensionality is not resolved by the computation itself, only by the storage of the resulting TT format.
\paragraph{Further calculation methods} We notice that the calculation of the TT decomposition relies on a rank-revealing decomposition, the SVD. Other methods, such as the rank-revealing QR (Section~\ref{section:rank-r-qr}, p.~\pageref{section:rank-r-qr}), column-pivoted QR (Theorem~\ref{theorem:rank-revealing-qr-general}, p.~\pageref{theorem:rank-revealing-qr-general}), CUR (Theorem~\ref{theorem:skeleton-decomposition}, p.~\pageref{theorem:skeleton-decomposition}), UTV (Theorem~\ref{theorem:ulv-decomposition}, p.~\pageref{theorem:ulv-decomposition}), and column ID (Theorem~\ref{theorem:interpolative-decomposition}, p.~\pageref{theorem:interpolative-decomposition}), can be applied in each iteration to find the spanning columns. We shall not go into the details; see also \citep{oseledets2011tensor, bigoni2016spectral}.
\chapter{Tucker Decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{Tucker Decomposition}
\begin{theoremHigh}[Tucker Decomposition]\label{theorem:tucker-decomp}
The Tucker decomposition factorizes a tensor into a core tensor multiplied by a factor
matrix along each mode. For a general Nth-order tensor, $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$, it admits the Tucker decomposition
$$
\eX \approx \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \ldots \sum_{r_N=1}^{R_N}
g_{r_1r_2\ldots r_N}
\ba_{r_1}^{(1)} \circ \ba_{r_2}^{(2)} \circ \ldots \circ \ba_{r_N}^{(N)} ,
$$
where
\begin{itemize}
\item $R_1< I_1, R_2<I_2, \ldots, R_N<I_N$;
\item $\eG$ of size ${R_1\times R_2\times \ldots \times R_N}$ is called the \textit{core tensor} so that $\eG$ can be thought of as the compressed version of $\eX$;
\item $\bA^{(n)}=[\ba_1^{(n)}, \ba_2^{(n)}, \ldots, \ba_{R_n}^{(n)}] \in \real^{I_n \times R_n}$ for all $n\in \{1,2,\ldots, N\}$ denotes the column partition of the $n$-th factor matrix;
\item The $\bA^{(n)}$'s usually have mutually orthonormal columns and can be thought of as the principal components of each mode. In this sense, the $\bA^{(n)}$'s are \textit{semi-orthogonal matrices} (see the definition in Section~\ref{section:orthogonal-orthonormal-qr}, p.~\pageref{section:orthogonal-orthonormal-qr});
\item We can complete the semi-orthogonal matrices into \textit{full orthogonal matrices} by adding \textit{silent columns} to the $\bA^{(n)}$'s so that each $\bA^{(n)}\in \real^{I_n\times I_n}$ is an orthogonal matrix. In this case, $\eG$ is expanded to a tensor of size ${I_1\times I_2\times \ldots \times I_N}$ with $g_{r_1r_2\ldots r_N} =0$ whenever $r_n>R_n$ for some $n\in \{1,2,\ldots, N\}$. This is known as the \textit{full Tucker decomposition}, and the previous one is called the \textit{reduced} Tucker decomposition to avoid confusion. We shall only consider the reduced case in most of our discussions.
\end{itemize}
\end{theoremHigh}
\paragraph{Equivalent forms of the Tucker decomposition} By the mode-$n$ tensor multiplication in Equation~\eqref{equation:moden-tensor-multi}, the Tucker decomposition can also be written as:
\begin{equation}\label{equation:tucker-decom-in-tuckeropera}
\eX \approx \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\eG \times_1 \bA^{(1)} \times_2 \bA^{(2)} \ldots \times_N \bA^{(N)}.
\end{equation}
By the result in Lemma~\ref{lemma:tensor-multi2}~\eqref{equationbracketlast}, since the $\bA^{(n)}$'s are semi-orthogonal, it also follows that
\begin{equation}\label{equation:tucker-decom-in-tuckeropera2}
\eG = \llbracket\eX; \bA^{(1)\top}, \bA^{(2)\top}, \ldots, \bA^{(N)\top}\rrbracket = \eX \times_1 \bA^{(1)\top} \times_2 \bA^{(2)\top} \ldots \times_N \bA^{(N)\top}.
\end{equation}
The operator defined in Equation~\eqref{equation:tucker-decom-in-tuckeropera} is sometimes referred to as the \textit{Tucker operator} \citep{kolda2006multilinear}.
Element-wise, the $(i_1,i_2,\ldots,i_N)$-th element of $\eX$ can be obtained by
$$
x_{i_1,i_2,\ldots,i_N} = \sum_{r_1=1}^{R_1} \sum_{r_2=1}^{R_2} \ldots \sum_{r_N=1}^{R_N}
g_{r_1r_2\ldots r_N}
a_{i_1r_1}^{(1)} a_{i_2r_2}^{(2)} \ldots a_{i_Nr_N}^{(N)}.
$$
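The element-wise formula translates directly into an \texttt{einsum} call; the sketch below (illustrative names, third-order case) spot-checks one entry of a Tucker model against the triple sum:

```python
import numpy as np

# Illustrative sketch: x_{ijk} = sum_{pqr} g_{pqr} a_{ip} b_{jq} c_{kr}.
P, Q, R = 2, 3, 2
I, J, K = 4, 5, 6
G = np.random.randn(P, Q, R)
A, B, C = np.random.randn(I, P), np.random.randn(J, Q), np.random.randn(K, R)

X = np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)
# Spot-check one entry against the triple sum.
val = sum(G[p, q, r] * A[1, p] * B[2, q] * C[3, r]
          for p in range(P) for q in range(Q) for r in range(R))
assert np.isclose(X[1, 2, 3], val)
```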
\begin{figure}[htbp]
\centering
\resizebox{1.0\textwidth}{!}{%
\begin{tikzpicture}
\pgfmathsetmacro{\bxx}{0.3}
\pgfmathsetmacro{\begupwhite}{0.2}
\pgfmathsetmacro{\bcwd}{0.3}
\pgfmathsetmacro{\bbwd}{0.3}
\pgfmathsetmacro{\bbright}{0.9}
\pgfmathsetmacro{\bbup}{0.225}
\pgfmathsetmacro{\distrightdown}{4.2}
\pgfmathsetmacro{\dist}{4}
\pgfmathsetmacro{\distl}{0}
\pgfmathsetmacro{\aprimeright}{0.4}
\draw [very thick] (5.8+0.5,-3.8+0.5) rectangle (7+0.5,-2.6+0.5);
\filldraw [fill=gray!60!white,draw=green!40!black] (5.8+0.5,-3.8+0.5) rectangle (7+0.5,-2.6+0.5);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.5,-3.8+0.5) grid (7+0.5,-2.6+0.5);
\draw [very thick] (5.8+0.4,-3.8+0.4) rectangle (7+0.4,-2.6+0.4);
\filldraw [fill=gray!50!white,draw=green!40!black] (5.8+0.4,-3.8+0.4) rectangle (7+0.4,-2.6+0.4);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.4,-3.8+0.4) grid (7+0.4,-2.6+0.4);
\draw [very thick] (5.8+0.3,-3.8+0.3) rectangle (7+0.3,-2.6+0.3);
\filldraw [fill=gray!40!white,draw=green!40!black] (5.8+0.3,-3.8+0.3) rectangle (7+0.3,-2.6+0.3);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.3,-3.8+0.3) grid (7+0.3,-2.6+0.3);
\draw [very thick] (5.8+0.2,-3.8+0.2) rectangle (7+0.2,-2.6+0.2);
\filldraw [fill=gray!30!white,draw=green!40!black] (5.8+0.2,-3.8+0.2) rectangle (7+0.2,-2.6+0.2);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.2,-3.8+0.2) grid (7+0.2,-2.6+0.2);
\draw [very thick] (5.8+0.1,-3.8+0.1) rectangle (7+0.1,-2.6+0.1);
\filldraw [fill=gray!20!white,draw=green!40!black] (5.8+0.1,-3.8+0.1) rectangle (7+0.1,-2.6+0.1);
\draw [step=0.4/2, very thin, color=gray] (5.8+0.1,-3.8+0.1) grid (7+0.1,-2.6+0.1);
\draw [very thick] (5.8,-3.8) rectangle (7,-2.6);
\filldraw [fill=gray!10!white,draw=green!40!black] (5.8,-3.8) rectangle (7,-2.6);
\draw [step=0.4/2, very thin, color=gray] (5.8,-3.8) grid (7,-2.6);
\draw [very thick] (5.8-0.1,-3.8-0.1) rectangle (7-0.1,-2.6-0.1);
\filldraw [fill=gray!6!white,draw=green!40!black] (5.8-0.1,-3.8-0.1) rectangle (7-0.1,-2.6-0.1);
\draw [step=0.4/2, very thin, color=gray] (5.8-0.1,-3.8-0.1) grid (7-0.1,-2.6-0.1);
\draw[->] (5.6,-2.6) -- ++(0.6,0.6);
\draw[->] (5.55,-2.7) -- ++(0,-1.2);
\draw[->] (5.66,-4.04) -- ++(1.25,0);
\draw (6.35,-4.25) node {{\color{black}\scriptsize{spatial $y$}}};
\draw (5.2-0.3,-3.2) node {{\color{black}\scriptsize{spatial $x$}}};
\draw (5.3,-2.2) node {{\color{black}\scriptsize{temporal}}};
\draw (6.2,-4.6) node {{\color{black}\scriptsize{$\eX\in\real^{I\times J \times K}$}}};
\draw (8,-3.2) node { {\color{black}\large{$\approx$}}};
\draw [very thick] (8.6+\distl,-4+0.2) rectangle (9.2+\distl,-3.2+0.2);
\filldraw [fill=RubineRed!60!white,draw=green!40!black] (8.6+\distl,-4+0.2) rectangle (9.2+\distl,-3.2+0.2);
\draw (8.7+\distl,-4.4+0.2) node {{\color{black}\scriptsize{$\bB \in \real^{J\times Q}$}}};
\draw [very thick]
(9+\bbright+\distl,-2.9+\bbup) rectangle (9.5+\bbright+\distl,-2.6+\bbup);
\filldraw [fill=WildStrawberry!40!white,draw=green!40!black]
(9+\bbright+\distl,-2.9+\bbup) rectangle (9.5+\bbright+\distl,-2.6+\bbup);
\draw (9.6+\bbright+\distl,-3.2+\bbup) node {{\color{black}\scriptsize{$\bA^\top \in \real^{P\times I}$}}};
\draw[fill=RedOrange!40!white, line width=0.8pt]
(8.45+\distl,-2.3-\bxx) -- (8.9+\distl,-2.1-\bxx) -- (8.35+\distl,-1.9-\bxx) -- (7.9+\distl,-2.1-\bxx) -- cycle;
\draw (8.4+\distl,-1.8) node {{\color{black}\scriptsize{$\bC\in \real^{K\times R}$}}};
\draw[fill=RedOrange!10!white, line width=0.5pt]
(9.2+\distl,-2.4+0.1-\bxx) -- (9.4+\bcwd+\distl-0.02,-2.1+0.01-\bxx) -- (8.8+\bcwd+\distl,-2.1-\bxx) -- (8.6+\distl+0.02,-2.4+0.1+0.01-\bxx) -- cycle;
\draw[fill=RedOrange!10!white, line width=0.8pt]
(8.6+\distl,-2.4+0.1-\bxx) -- (9.2+\distl,-2.4+0.1-\bxx) -- (9.2+\distl,-3+0.4-\bxx) -- (8.6+\distl,-3+0.4-\bxx) -- cycle;
\draw[fill=RedOrange!10!white, line width=0.8pt]
(9.4+\bcwd+\distl,-2.1-\bxx) -- (9.2+\distl,-2.4+0.1-\bxx) -- (9.2+\distl,-3+0.4-\bxx) -- (9.4+\bcwd+\distl,-2.4-\bxx) -- cycle;
\draw (10-0.2+\distl,-1.8-\bxx) node {{\color{black}\scriptsize{$\eG\in \real^{P\times Q\times R}$}}};
\draw (11.6,-3.2) node {{\color{black}\large{$=$}}};
\draw [very thick]
(8.6+\dist,-4+0.2) rectangle (9.4+\dist,-3.2+0.2);
\filldraw [fill=blue!70,draw=green!40!black]
(8.6+\dist,-4+0.2) rectangle (9.4+\dist,-3.2+0.2);
\draw [very thick] (8.6+\dist,-4+0.2) rectangle (9.2+\dist,-3.2+0.2);
\filldraw [fill=RubineRed!60!white,draw=green!40!black] (8.6+\dist,-4+0.2) rectangle (9.2+\dist,-3.2+0.2);
\draw (8.7+\dist,-4.4+0.2) node {{\color{black}\scriptsize{$\bB^\prime \in \real^{J\times \textcolor{blue}{J}}$}}};
\draw [very thick]
(9+\bbright+\dist+\aprimeright,-2.8+\bbup) rectangle (9.5+\bbright+\dist+\aprimeright,-2.3+\bbup);
\filldraw [fill=blue!70,draw=green!40!black]
(9+\bbright+\dist+\aprimeright,-2.8+\bbup) rectangle (9.5+\bbright+\dist+\aprimeright,-2.3+\bbup);
\draw [very thick]
(9+\bbright+\dist+\aprimeright,-2.8+\bbup) rectangle (9.5+\bbright+\dist+\aprimeright,-2.5+\bbup);
\filldraw [fill=WildStrawberry!40!white,draw=green!40!black]
(9+\bbright+\dist+\aprimeright,-2.8+\bbup) rectangle (9.5+\bbright+\dist+\aprimeright,-2.5+\bbup);
\draw (9.6+\bbright+\dist+\aprimeright,-3.1+\bbup) node {{\color{black}\scriptsize{$\bA^{\prime\top }\in \real^{ \textcolor{blue}{I}\times I}$}}};
\draw[fill=blue!70, line width=0.8pt]
(8.45+\dist,-2.3-\bxx+\begupwhite) -- (8.9+\dist+0.3,-2.1-\bxx+\begupwhite+0.14) -- (8.35+\dist+0.3,-1.9-\bxx+\begupwhite+0.15) -- (7.9+\dist,-2.1-\bxx+\begupwhite) -- cycle;
\draw[fill=RedOrange!40!white, line width=0.8pt]
(8.45+\dist,-2.3-\bxx+\begupwhite) -- (8.9+\dist,-2.1-\bxx+\begupwhite) -- (8.35+\dist,-1.9-\bxx+\begupwhite) -- (7.9+\dist,-2.1-\bxx+\begupwhite) -- cycle;
\draw (8.4+\dist,-1.8+\begupwhite) node {{\color{black}\scriptsize{$\bC^\prime\in \real^{K\times \textcolor{blue}{K}}$}}};
\draw[fill=RedOrange!1!white, line width=0.8pt,opacity=0.5]
(8.6+\dist,-2.4+0.1-\bxx) -- (9.4+\dist,-2.4+0.1-\bxx) -- (9.4+\dist,-3+0.4-\bxx) -- (8.6+\dist,-3+0.4-\bxx) -- cycle;
\draw[fill=RedOrange!10!white, line width=0.8pt]
(9.2+\dist,-2.4+0.1-\bxx+\begupwhite) -- (9.4+\bcwd+\dist,-2.1-\bxx+\begupwhite) -- (8.8+\bcwd+\dist+0.04,-2.1-\bxx+\begupwhite+0.01) -- (8.6+\dist,-2.4+0.1-\bxx+\begupwhite) -- cycle;
\draw[fill=RedOrange!10!white, line width=0.8pt]
(8.6+\dist,-2.4+0.1-\bxx+\begupwhite) -- (9.2+\dist,-2.4+0.1-\bxx+\begupwhite) -- (9.2+\dist,-3+0.4-\bxx+\begupwhite) -- (8.6+\dist,-3+0.4-\bxx+\begupwhite) -- cycle;
\draw[fill=RedOrange!10!white, line width=0.8pt]
(9.4+\bcwd+\dist,-2.1-\bxx+\begupwhite) -- (9.2+\dist,-2.4+0.1-\bxx+\begupwhite) -- (9.2+\dist,-3+0.4-\bxx+\begupwhite) -- (9.4+\bcwd+\dist,-2.4-\bxx+\begupwhite) -- cycle;
\draw (10+\dist,-1.7-\bxx+\begupwhite) node {{\color{black}\scriptsize{$\eG^\prime\in \real^{\textcolor{blue}{I\times J\times K}}$}}};
\draw[fill=RedOrange!1!white, line width=0.8pt,opacity=0.5]
(9.4+\dist,-2.4+0.1-\bxx+\begupwhite) -- (9.6+\bcwd+\dist+0.24,-2.1-\bxx+\begupwhite+0.12) -- (8.8+\bcwd+\dist+0.26,-2.1-\bxx+\begupwhite+0.12) -- (8.6+\dist,-2.4+0.1-\bxx+\begupwhite) -- cycle;
\draw[fill=RedOrange!1!white, line width=0.8pt,opacity=0.5]
(9.6+\bcwd+\dist+0.25,-2.1-\bxx+\begupwhite+0.12) -- (9.4+\dist,-2.4+0.1-\bxx+\begupwhite) -- (9.4+\dist,-3+0.4-\bxx) -- (9.6+\bcwd+\dist+0.25,-2.4-\bxx+0.12) -- cycle;
\end{tikzpicture}
}
\caption{The Tucker decomposition of a third-order tensor: $\eX \approx \llbracket \eG; \bA, \bB, \bC \rrbracket
=
\sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R}
g_{pqr} \cdot \ba_p\circ \bb_q \circ \bc_r$. Middle: \textbf{reduced Tucker decomposition}. Right: \textbf{full Tucker decomposition} where the \textcolor{blue}{blue} entries are silent \textit{orthogonal} columns and the white entries of $\eG^\prime$ are zero.}
\label{fig:tucker-decom-third}
\end{figure}
\paragraph{Matricization}
For simplicity, we consider the Tucker decomposition for the third-order tensor $\eX\in \real^{I\times J\times K}$ where the situation is illustrated in Figure~\ref{fig:tucker-decom-third}:
\begin{equation}\label{equation:tucker-third-fact-1}
\eX \approx \llbracket \eG; \bA, \bB, \bC \rrbracket
=
\sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R}
g_{pqr} \cdot \ba_p\circ \bb_q \circ \bc_r,
\end{equation}
where $\bA\in \real^{I\times P}, \bB\in \real^{J\times Q}$, $\bC\in \real^{K\times R}$, and $\eG\in \real^{P\times Q\times R}$. Analogously, following the matricization of the third-order CP decomposition in Equation~\eqref{equation:cp-matriciza-third-1}, we have
\begin{equation}\label{equation:tucker-matriciza-third-1}
\left\{
\begin{aligned}
\bX_{(1)} &\approx \bA\bG_{(1)} (\bC\otimes \bB )^\top \in \real^{I\times (JK)}; \\
\bX_{(2)} &\approx \bB\bG_{(2)} (\bC\otimes \bA )^\top \in \real^{J\times (IK)};\\
\bX_{(3)} &\approx \bC\bG_{(3)}(\bB\otimes \bA)^\top \in \real^{K\times (IJ)},
\end{aligned}
\right.
\end{equation}
where now ``$\otimes$" is the Kronecker product (Definition~\ref{definition:kronecker-product}, p.~\pageref{definition:kronecker-product}).
In full generality, returning to the Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$, the mode-$n$ matricized form is given by
$$
\boxed{\underbrace{\bX_{(n)}}_{I_n\times (I_{-n})}\approx
\underbrace{\bA^{(n)}}_{{I_n\times R_n}}
\underbrace{\bG_{(n)}}_{R_n\times (R_{-n})}
\underbrace{\left(\bA^{(N)} \otimes \bA^{(N-1)} \otimes \ldots \otimes \bA^{(n+1)}\otimes \bA^{(n-1)}\otimes \ldots \otimes \bA^{(2)}\otimes \bA^{(1)} \right)^\top}_{(R_{-n})\times (I_{-n})}}
$$
where $I_{-n}=I_1 I_2\ldots I_{n-1}I_{n+1}\ldots I_N$ and $R_{-n}=R_1 R_2\ldots R_{n-1}R_{n+1}\ldots R_N$.
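The boxed matricization identity can be verified numerically for a small third-order example. The sketch below (illustrative, not from the book) uses the Kolda--Bader unfolding convention (mode-$1$ index varying fastest in the columns), which matches the Kronecker ordering above:

```python
import numpy as np

def unfold(T, n):
    """Mode-n matricization with the first remaining index varying fastest."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1, order='F')

# Small third-order Tucker model: X = G x1 A x2 B x3 C.
G = np.random.randn(2, 3, 2)
A, B, C = np.random.randn(4, 2), np.random.randn(5, 3), np.random.randn(6, 2)
X = np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)

# Check X_(1) = A G_(1) (C kron B)^T, and similarly for the other modes.
assert np.allclose(unfold(X, 0), A @ unfold(G, 0) @ np.kron(C, B).T)
assert np.allclose(unfold(X, 1), B @ unfold(G, 1) @ np.kron(C, A).T)
assert np.allclose(unfold(X, 2), C @ unfold(G, 2) @ np.kron(B, A).T)
```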
\paragraph{Vectorization} Going further from the matricization for the Nth-order tensor $\eX$, the vectorization is given by
\begin{equation}\label{equation:tucker-vec-in-theorem}
\boxed{\underbrace{vec(\eX) }_{(I_1\ldots I_N)\times 1}
\approx
\underbrace{\left(\bA^{(N)} \otimes \bA^{(N-1)} \otimes \ldots\otimes \bA^{(1)} \right)}_{(I_1\ldots I_N)\times (R_1\ldots R_N)}
\underbrace{vec(\eG)}_{(R_1\ldots R_N)\times 1}.
}
\end{equation}
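Likewise, the vectorization identity can be checked with column-major (Fortran-order) vectorization, so that $i_1$ varies fastest, matching the Kronecker ordering $\bA^{(N)}\otimes\ldots\otimes\bA^{(1)}$ (illustrative sketch, third-order case):

```python
import numpy as np

# Illustrative check of vec(X) = (A3 kron A2 kron A1) vec(G), using
# column-major vectorization so that the first index varies fastest.
G = np.random.randn(2, 3, 2)
A1, A2, A3 = np.random.randn(4, 2), np.random.randn(5, 3), np.random.randn(6, 2)
X = np.einsum('pqr,ip,jq,kr->ijk', G, A1, A2, A3)

vecX = X.reshape(-1, order='F')
vecG = G.reshape(-1, order='F')
assert np.allclose(vecX, np.kron(A3, np.kron(A2, A1)) @ vecG)
```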
\paragraph{Counterpart in $\eG$} For the problem in Equation~\eqref{equation:tucker-decom-in-tuckeropera2}, it also has the matricization and vectorization in a similar way:
$$
\boxed{
\begin{aligned}
\bG_{(n)}\approx
\bA^{(n)\top}
\bX_{(n)}\left(\bA^{(N)\top} \otimes \bA^{(N-1)\top} \otimes \ldots \otimes \bA^{(n+1)\top}\otimes \bA^{(n-1)\top}\otimes \ldots \otimes \bA^{(2)\top}\otimes \bA^{(1)\top} \right)^\top
\end{aligned}
}
$$
\begin{equation}\label{equation:tucker-vec-in-theorem-eg}
\boxed{\underbrace{vec(\eG) }_{(R_1\ldots R_N)\times 1 }
\approx
\underbrace{\left(\bA^{(N)\top} \otimes \bA^{(N-1)\top} \otimes \ldots\otimes \bA^{(1)\top} \right)}_{ (R_1\ldots R_N) \times (I_1\ldots I_N)}
\underbrace{vec(\eX)}_{(I_1\ldots I_N)\times 1}.
}
\end{equation}
\paragraph{Connection to the CP decomposition} We notice that when $\eG$ is the identity tensor (Definition~\ref{definition:identity-tensors}, p.~\pageref{definition:identity-tensors}), $R_1=R_2=\ldots=R_N=R$, and we do not further restrict the columns of the $\bA^{(n)}$'s to be mutually orthonormal, then the Tucker decomposition reduces to a CP decomposition. Note that $\eG$ has to be an identity tensor in this case; it cannot simply be treated as a tensor whose elements are all 1's, for which the Tucker decomposition would read:
$$
\eX \approx \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket
=
\sum_{r_1=1}^{R} \sum_{r_2=1}^{R} \ldots \sum_{r_N=1}^{R}
1\cdot
\ba_{r_1}^{(1)} \circ \ba_{r_2}^{(2)} \circ \ldots \circ \ba_{r_N}^{(N)} ,
$$
which differs from the CP decomposition, where there is only a single summation over a shared index.
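The distinction between the identity core and an all-ones core can be verified directly in NumPy (our illustration; the helper \texttt{mode\_n\_product} is our own name):

```python
import numpy as np

def mode_n_product(X, M, n):
    # n-mode product X x_n M, where M has shape (J, I_n)
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)

rng = np.random.default_rng(1)
I, R = (4, 5, 6), 3
A = [rng.standard_normal((I[n], R)) for n in range(3)]

# Identity (superdiagonal) core: G[r, r, r] = 1
G = np.zeros((R, R, R))
for r in range(R):
    G[r, r, r] = 1.0
X = G
for n in range(3):
    X = mode_n_product(X, A[n], n)

# CP form: a single summation over the shared index r
X_cp = np.einsum("ir,jr,kr->ijk", A[0], A[1], A[2])
assert np.allclose(X, X_cp)

# An all-ones core instead couples every index combination: not the CP tensor
X_ones = np.ones((R, R, R))
for n in range(3):
    X_ones = mode_n_product(X_ones, A[n], n)
assert not np.allclose(X_ones, X_cp)
```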
\section{Computing the Tucker Decomposition}
\begin{algorithm}[h]
\caption{Tucker Decomposition via ALS}
\label{alg:tucker-decomposition-full-gene}
\begin{algorithmic}[1]
\Require Nth-order tensor $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$;
\State Pick ranks $R_1, R_2, \ldots, R_N$;
\State Initialize $\bA^{(n)}\in \real^{I_n\times R_n}$ for all $n\in \{1,2,\ldots, N\}$ randomly;
\State Choose maximal number of iterations $C$;
\State $iter=0$;
\While{$iter<C$}
\State $iter=iter+1$;
\For{$n=1,2,\ldots, N$}
\State Set $\eY= \llbracket\eX; \bA^{(1)\top}, \bA^{(2)\top}, \ldots,\bA^{(n-1)\top}, \textcolor{blue}{\bI},\bA^{(n+1)\top},\ldots, \bA^{(N)\top}\rrbracket$;
\State Find the matricization along mode-$n$: $\bY_{(n)}$;
\State Set the columns of $\bA^{(n)}$ to the first $R_n$ leading left singular vectors of $\bY_{(n)}$;
\EndFor
\EndWhile
\State $\eG = \llbracket\eX; \bA^{(1)\top}, \bA^{(2)\top}, \ldots, \bA^{(N)\top}\rrbracket$;\Comment{by vectorize and un-vectorize, Eq.~\eqref{equation:tucker-eg-vector-undo}}
\State Output $\bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)}, \eG$;
\end{algorithmic}
\end{algorithm}
To compute the Tucker decomposition of $\eX \in \real^{I_1\times I_2\times \ldots \times I_N}$, we now consider algorithms for solving the problem:\index{ALS}
$$
\boxed{\{{\eG,\bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)}}\}
=
\mathop{\arg \gap \min}_{\eG,\bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)}}
\left\Vert\eX - \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket \right\Vert^2,}
$$
where $\eG$ is a tensor of size ${R_1\times R_2\times \ldots \times R_N}$, and the $\bA^{(n)}$'s are semi-orthogonal matrices of size ${I_n\times R_n}$ for $n\in \{1,2,\ldots, N\}$. Similar to the CP decomposition, an \textit{alternating descent} algorithm can be employed to find an approximate solution.
\subsubsection*{\textbf{Given $\bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)}$, Update $\eG$:}}
The update on $\eG$ follows immediately since we mentioned in Equation~\eqref{equation:tucker-decom-in-tuckeropera2} that
$$
\eG = \llbracket\eX; \bA^{(1)\top}, \bA^{(2)\top}, \ldots, \bA^{(N)\top}\rrbracket = \eX \times_1 \bA^{(1)\top} \times_2 \bA^{(2)\top} \ldots \times_N \bA^{(N)\top}.
$$
To simplify matters, from the vectorized form in Equation~\eqref{equation:tucker-vec-in-theorem}, we have
$$
\begin{aligned}
&\left\Vert\eX - \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket \right\Vert^2\\
&=
\left\Vert
vec(\eX)
-
\left(\bA^{(N)} \otimes \bA^{(N-1)} \otimes \ldots\otimes \bA^{(1)} \right)
vec(\eG)
\right\Vert^2,
\end{aligned}
$$
which is just a least squares problem, whose solution is given by
\begin{equation}\label{equation:tucker-eg-vector-undo}
\begin{aligned}
vec(\eG) &\leftarrow \left(\bA^{(N)} \otimes \bA^{(N-1)} \otimes \ldots\otimes \bA^{(1)} \right)^+ vec(\eX)\\
&=\left(\bA^{(N)\top} \otimes \bA^{(N-1)\top} \otimes \ldots\otimes \bA^{(1)\top} \right)vec(\eX),
\end{aligned}
\end{equation}
where the last equality comes from Equation~\eqref{equation:otimes-psesudo-semi} since the $\bA^{(n)}$'s are semi-orthogonal. Since we update the vectorized form, an un-vectorization (reshaping) operation can then be applied to recover the updated $\eG$.
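Equation~\eqref{equation:tucker-eg-vector-undo} relies on the pseudo-inverse of a Kronecker product of semi-orthogonal matrices collapsing to its transpose; a quick NumPy check (our illustration, with semi-orthogonal factors generated via thin QR):

```python
import numpy as np

rng = np.random.default_rng(2)
I, R = (6, 5, 4), (3, 2, 2)
# Semi-orthogonal factors obtained from the thin QR of random matrices
A = [np.linalg.qr(rng.standard_normal((I[n], R[n])))[0] for n in range(3)]

K = np.kron(A[2], np.kron(A[1], A[0]))   # (I1 I2 I3) x (R1 R2 R3)
# For semi-orthogonal factors the pseudo-inverse is simply the transpose,
# so the least-squares core update reduces to one matrix-vector product.
assert np.allclose(np.linalg.pinv(K), K.T)

x = rng.standard_normal(K.shape[0])      # stand-in for vec(X)
assert np.allclose(np.linalg.pinv(K) @ x, K.T @ x)
```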
\subsubsection*{\textbf{Given $\bA^{(1)},\ldots, \bA^{(n-1)}, \bA^{(n+1)}, \ldots, \bA^{(N)}$ and $\eG$, Update $\bA^{(n)}$:}}
When all but $\bA^{(n)}$ are fixed, by Equation~\eqref{equation:differe-tensor-norm}, the Frobenius norm of the difference is given by
$$
\begin{aligned}
&\gap \left\Vert\eX - \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket \right\Vert^2\\
&=\left\Vert\eX \right\Vert^2
-2\langle \eX,\llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket\rangle
+ \left\Vert\llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket\right\Vert^2,
\end{aligned}
$$
where by Lemma~\ref{lemma:tensor-multi2} (\ref{equationbracket}) and Equation~\eqref{equation:tucker-decom-in-tuckeropera}~\eqref{equation:tucker-decom-in-tuckeropera2}, it follows that
$$
\begin{aligned}
&\gap\langle \eX,\llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket\rangle\\
&=
\langle \llbracket\eX; \bA^{(1)\top}, \bA^{(2)\top}, \ldots, \bA^{(N)\top} \rrbracket, \eG\rangle \gap &\text{(Lemma~\ref{lemma:tensor-multi2} (\ref{equationbracket}), Equation~\eqref{equation:tucker-decom-in-tuckeropera})}\\
&= \langle \eG, \eG \rangle=\left\Vert\eG \right\Vert^2, &\gap \text{(Equation~\eqref{equation:tucker-decom-in-tuckeropera2})}
\end{aligned}
$$
and where, by Equation~\eqref{equation:tucker-decom-in-tuckeropera} and Lemma~\ref{lemma:tensor-multi2}~\eqref{equationbracketlast} (the length is preserved under multiplication by semi-orthogonal matrices), we obtain
$$
\left\Vert\llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket\right\Vert^2 = \left\Vert\eG\right\Vert^2.
$$
Combining all the findings, we obtain
$$
\begin{aligned}
\left\Vert\eX - \llbracket\eG; \bA^{(1)}, \bA^{(2)}, \ldots, \bA^{(N)} \rrbracket \right\Vert^2
=\left\Vert\eX \right\Vert^2- \left\Vert\eG \right\Vert^2.
\end{aligned}
$$
Hence, minimizing the left-hand side of the above equation is equivalent to maximizing $\left\Vert\eG \right\Vert^2$. Therefore, to update $\bA^{(n)}$, the problem becomes
$$
\boxed{\begin{aligned}
&\mathop{\max}_{\bA^{(n)}} \,\,\,\,\left\Vert\eG\right\Vert = \mathop{\max}_{\bA^{(n)}} \,\,\,\, \left\Vert\llbracket\eX; \bA^{(1)\top}, \bA^{(2)\top}, \ldots,\bA^{(n-1)\top},\bA^{(n)\top},\bA^{(n+1)\top},\ldots, \bA^{(N)\top}\rrbracket\right\Vert\\
&\text{subject to \gap $\bA^{(n)} \in \real^{I_n\times R_n}$ is semi-orthogonal.}
\end{aligned} }
$$
Define $\eY= \llbracket\eX; \bA^{(1)\top}, \bA^{(2)\top}, \ldots,\bA^{(n-1)\top}, \textcolor{blue}{\bI},\bA^{(n+1)\top},\ldots, \bA^{(N)\top}\rrbracket$. From Lemma~\ref{lemma:tensor-multi-matriciz1}, this, again, is equivalent to finding the solution of
$$
\boxed{
\begin{aligned}
&\mathop{\max}_{\bA^{(n)}} \,\,\,\,||\eY \times_n \bA^{(n)\top}||= \mathop{\max}_{\bA^{(n)}} \,\,\,\, ||\bA^{(n)\top}\bY_{(n)}||_F\\
&\text{subject to \gap $\bA^{(n)} \in \real^{I_n\times R_n}$ is semi-orthogonal,}
\end{aligned}
}
$$
where $\bY_{(n)} \in \real^{I_n\times R_{1}\ldots R_{n-1}R_{n+1}\ldots R_N}$ is the mode-$n$ matricization of $\eY$. The solution is obtained by setting the columns of $\bA^{(n)}$ to the first $R_n$ leading left singular vectors of $\bY_{(n)}$ \citep{kolda2006multilinear, kolda2009tensor}. Notice that the above update of $\bA^{(n)}$ does not depend on $\eG$; thus we can update $\eG$ once, after the $\bA^{(n)}$'s have converged. The procedure is shown in Algorithm~\ref{alg:tucker-decomposition-full-gene}.
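The whole procedure can be sketched in NumPy; the helpers \texttt{mode\_n\_product}, \texttt{unfold}, and \texttt{hooi} are our own names (not from the text), and we verify the identity $\Vert\eX-\llbracket\eG; \bA^{(1)},\ldots,\bA^{(N)}\rrbracket\Vert^2=\Vert\eX\Vert^2-\Vert\eG\Vert^2$ derived above:

```python
import numpy as np

def mode_n_product(X, M, n):
    # n-mode product X x_n M, where M has shape (J, I_n)
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)

def unfold(X, n):
    # mode-n matricization, column-major ordering of the remaining modes
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order="F")

def hooi(X, ranks, iters=20, seed=0):
    # Tucker decomposition by alternating updates (a HOOI-style sketch)
    rng = np.random.default_rng(seed)
    N = X.ndim
    A = [np.linalg.qr(rng.standard_normal((X.shape[n], ranks[n])))[0]
         for n in range(N)]
    for _ in range(iters):
        for n in range(N):
            Y = X
            for m in range(N):
                if m != n:
                    Y = mode_n_product(Y, A[m].T, m)
            U, _, _ = np.linalg.svd(unfold(Y, n), full_matrices=False)
            A[n] = U[:, :ranks[n]]          # leading left singular vectors
    G = X
    for n in range(N):
        G = mode_n_product(G, A[n].T, n)    # core update after convergence
    return G, A

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 7, 8))
G, A = hooi(X, (2, 3, 2))
X_hat = G
for n in range(3):
    X_hat = mode_n_product(X_hat, A[n], n)

err2 = np.sum((X - X_hat) ** 2)
assert np.allclose(err2, np.sum(X ** 2) - np.sum(G ** 2))   # the identity above
for An in A:
    assert np.allclose(An.T @ An, np.eye(An.shape[1]))      # semi-orthogonal
```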
\paragraph{Initialization by High-Order SVD (HOSVD)}\index{HOSVD}
In Algorithm~\ref{alg:tucker-decomposition-full-gene}, we initialize the $\bA^{(n)}$'s randomly. Alternatively, the higher-order singular value decomposition (HOSVD) can be utilized to provide the initial point. The problem is given by
$$
\boxed{\begin{aligned}
&\mathop{\max}_{\bA^{(n)}} \,\,\,\, \left\Vert\llbracket\eX; \bI, \bI, \ldots,\bI,\bA^{(n)\top},\bI,\ldots, \bI\rrbracket\right\Vert\\
&\text{subject to \gap $\bA^{(n)} \in \real^{I_n\times R_n}$ is semi-orthogonal.}
\end{aligned} }
$$
Similarly, the problem is equivalent to finding the solution of
\begin{equation}\label{equation:hosvd-initia-for-tucker-0}
\boxed{\begin{aligned}
&\mathop{\max}_{\bA^{(n)}}||\eX\times_n \bA^{(n)\top}||=\mathop{\max}_{\bA^{(n)}}||\bA^{(n)\top} \bX_{(n)}||_F\\
&\text{subject to \gap $\bA^{(n)} \in \real^{I_n\times R_n}$ is semi-orthogonal,}
\end{aligned} }
\end{equation}
which is solved by setting the columns of $\bA^{(n)}$ to the first $R_n$ leading left singular vectors of $\bX_{(n)}$. We shall shortly see an equivalent derivation via the matricization of the HOSVD in the next section; see Equation~\eqref{equation:hosvd-initia-for-tucker-1}.
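As a sketch of why the HOSVD is a good initial point: for a tensor whose multilinear rank exactly matches the chosen ranks, the truncated HOSVD is already exact. The helper names below (\texttt{unfold}, \texttt{hosvd}) are ours:

```python
import numpy as np

def mode_n_product(X, M, n):
    # n-mode product X x_n M, where M has shape (J, I_n)
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)

def unfold(X, n):
    # mode-n matricization, column-major ordering of the remaining modes
    return np.reshape(np.moveaxis(X, n, 0), (X.shape[n], -1), order="F")

def hosvd(X, ranks):
    # factor n = leading R_n left singular vectors of the mode-n unfolding
    A = []
    for n in range(X.ndim):
        U, _, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
        A.append(U[:, :ranks[n]])
    G = X
    for n in range(X.ndim):
        G = mode_n_product(G, A[n].T, n)
    return G, A

# A tensor with exact multilinear rank (2, 3, 2) is recovered exactly.
rng = np.random.default_rng(4)
R, I = (2, 3, 2), (6, 7, 8)
X = rng.standard_normal(R)
for n in range(3):
    X = mode_n_product(X, rng.standard_normal((I[n], R[n])), n)

G, A = hosvd(X, R)
X_hat = G
for n in range(3):
    X_hat = mode_n_product(X_hat, A[n], n)
assert np.allclose(X, X_hat)
```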
\chapter{UTV Decomposition: ULV and URV Decomposition}\label{section:ulv-urv-decomposition}
\begingroup
\hypersetup{linkcolor=winestain}
\minitoc \newpage
\endgroup
\section{UTV Decomposition}
The UTV decomposition goes one step further by factoring a matrix between two orthogonal matrices, $\bA=\bU\bT\bV$, where $\bU, \bV$ are orthogonal, whilst $\bT$ is (upper/lower) triangular.\footnote{These decompositions fall into a category known as the \textit{double-sided orthogonal decomposition}. As we will see, the UTV decomposition, the complete orthogonal decomposition, and the singular value decomposition all belong to this category.}
The resulting $\bT$ supports rank estimation. The matrix $\bT$ can be lower triangular, which results in the ULV decomposition, or upper triangular, which results in the URV decomposition. The UTV framework shares a similar form with the singular value decomposition (SVD, see Section~\ref{section:SVD}, p.~\pageref{section:SVD}) and can be regarded as an inexpensive alternative to the SVD.
\begin{theoremHigh}[Full ULV Decomposition]\label{theorem:ulv-decomposition}
Every $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV,
$$
where $\bU\in \real^{m\times m}$ and $\bV\in \real^{n\times n}$ are two orthogonal matrices, and $\bL\in \real^{r\times r}$ is a lower triangular matrix.
\end{theoremHigh}
The existence of the ULV decomposition follows from the QR and LQ decompositions.
\begin{proof}[of Theorem~\ref{theorem:ulv-decomposition}]
For any rank $r$ matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$, we can use a column permutation matrix $\bP$ (Definition~\ref{definition:permutation-matrix}, p.~\pageref{definition:permutation-matrix}) such that the linearly independent columns of $\bA$ appear in the first $r$ columns of $\bA\bP$. Without loss of generality, we assume $\bb_1, \bb_2, \ldots, \bb_r$ are the $r$ linearly independent columns of $\bA$ and
$$
\bA\bP = [\bb_1, \bb_2, \ldots, \bb_r, \bb_{r+1}, \ldots, \bb_n].
$$
Let $\bZ = [\bb_1, \bb_2, \ldots, \bb_r] \in \real^{m\times r}$. Since each of the remaining columns $\bb_i$ lies in the column space of $\bZ$, we can find an $\bE\in \real^{r\times (n-r)}$ such that
$$
[\bb_{r+1}, \bb_{r+2}, \ldots, \bb_n] = \bZ \bE.
$$
That is,
$$
\bA\bP = [\bb_1, \bb_2, \ldots, \bb_r, \bb_{r+1}, \ldots, \bb_n] = \bZ
\begin{bmatrix}
\bI_r & \bE
\end{bmatrix},
$$
where $\bI_r$ is an $r\times r$ identity matrix. Moreover, $\bZ\in \real^{m\times r}$ has full column rank such that its full QR decomposition is given by $\bZ = \bU\begin{bmatrix}
\bR \\
\bzero
\end{bmatrix}$, where $\bR\in \real^{r\times r}$ is an upper triangular matrix with full rank and $\bU$ is an orthogonal matrix. This implies
\begin{equation}\label{equation:ulv-smpl}
\bA\bP = \bZ
\begin{bmatrix}
\bI_r & \bE
\end{bmatrix}
=
\bU\begin{bmatrix}
\bR \\
\bzero
\end{bmatrix}
\begin{bmatrix}
\bI_r & \bE
\end{bmatrix}
=
\bU\begin{bmatrix}
\bR & \bR\bE \\
\bzero & \bzero
\end{bmatrix}.
\end{equation}
Since $\bR$ has full rank, this means
$\begin{bmatrix}
\bR & \bR\bE
\end{bmatrix}$ also has full rank such that its full LQ decomposition is given by
$\begin{bmatrix}
\bL & \bzero
\end{bmatrix} \bV_0$, where $\bL\in \real^{r\times r}$ is a lower triangular matrix and $\bV_0$ is an orthogonal matrix. Substituting into Equation~\eqref{equation:ulv-smpl}, we obtain
$$
\bA = \bU \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV_0 \bP^{-1}.
$$
Let $\bV =\bV_0 \bP^{-1}$, which, as a product of two orthogonal matrices, is itself orthogonal. This completes the proof.
\end{proof}
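The constructive proof above can be replayed numerically; the following NumPy sketch (our illustration, not part of the text) builds a rank-$r$ matrix $\bA=\bZ[\bI_r, \bE]$ and recovers $\bU$, $\bL$, $\bV$ via one full QR and one LQ (computed as the QR of the transpose):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 7, 5, 3
# Rank-r matrix built exactly as in the proof: A = Z [I_r, E]
Z = rng.standard_normal((m, r))                # full column rank
E = rng.standard_normal((r, n - r))
A = Z @ np.hstack([np.eye(r), E])

# Step 1: full QR of Z, i.e., Z = U [R; 0]
U, Rz = np.linalg.qr(Z, mode="complete")       # U: m x m, Rz: m x r
Rtop = Rz[:r, :]                               # r x r upper triangular

# Step 2: LQ of the full-row-rank block [R, RE], via QR of its transpose
M = np.hstack([Rtop, Rtop @ E])                # r x n
Q, Rt = np.linalg.qr(M.T, mode="complete")     # M = Rt.T @ Q.T = [L, 0] V
L = Rt[:r, :].T                                # r x r lower triangular
V = Q.T                                        # n x n orthogonal

T = np.zeros((m, n))
T[:r, :r] = L
assert np.allclose(A, U @ T @ V)               # A = U [[L, 0], [0, 0]] V
assert np.allclose(L, np.tril(L))              # L is lower triangular
assert np.allclose(U.T @ U, np.eye(m)) and np.allclose(V.T @ V, np.eye(n))
```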
A second proof of the ULV decomposition will be discussed in the proof of Theorem~\ref{theorem:complete-orthogonal-decom} shortly, via the rank-revealing QR decomposition and a plain QR decomposition.
Now suppose the ULV decomposition of matrix $\bA$ is
$$
\bA = \bU \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV.
$$
Let $\bU_0 = \bU_{:,1:r}$ and $\bV_0 = \bV_{1:r,:}$, i.e., $\bU_0$ contains only the first $r$ columns of $\bU$, and $\bV_0$ contains only the first $r$ rows of $\bV$. Then, we still have $\bA = \bU_0 \bL\bV_0$. This is known as the \textbf{reduced ULV decomposition}.
The comparison between the reduced and the full ULV decomposition is shown in Figure~\ref{fig:ulv-comparison} where white entries are zero and blue entries are not necessarily zero.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Reduced ULV decomposition]{\label{fig:ulvhalf}
\includegraphics[width=0.47\linewidth]{./imgs/ulvreduced.pdf}}
\quad
\subfigure[Full ULV decomposition]{\label{fig:ulvall}
\includegraphics[width=0.47\linewidth]{./imgs/ulvfull.pdf}}
\quad
\subfigure[Reduced URV decomposition]{\label{fig:urvhalf}
\includegraphics[width=0.47\linewidth]{./imgs/urvreduced.pdf}}
\quad
\subfigure[Full URV decomposition]{\label{fig:urvall}
\includegraphics[width=0.47\linewidth]{./imgs/urvfull.pdf}}
\caption{Comparison between the reduced and full ULV, and between the reduced and full URV.}
\label{fig:ulv-comparison}
\end{figure}
Similarly, we can state the URV decomposition as follows.
\begin{theoremHigh}[Full URV Decomposition]\label{theorem:urv-decomposition}
Every $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU \begin{bmatrix}
\bR & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV,
$$
where $\bU\in \real^{m\times m}$ and $\bV\in \real^{n\times n}$ are two orthogonal matrices, and $\bR\in \real^{r\times r}$ is an upper triangular matrix.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:urv-decomposition}]
For any rank-$r$ matrix $\bA=[\ba_1^\top; \ba_2^\top; \ldots; \ba_m^\top]$, where $\ba_1^\top, \ba_2^\top, \ldots$ are the rows of $\bA$, we can again construct a row permutation matrix $\bP$ such that the linearly independent rows of $\bA$ appear in the first $r$ rows of $\bP\bA$. Without loss of generality, we assume $\bb_1^\top, \bb_2^\top, \ldots, \bb_r^\top$ are the $r$ linearly independent rows of $\bA$ and
$$
\bP\bA = \begin{bmatrix}
\bb_1^\top \\
\bb_2^\top\\
\vdots\\
\bb_r^\top\\
\bb_{r+1}^\top\\
\vdots \\
\bb_m^\top
\end{bmatrix}.
$$
Let $\bZ = [\bb_1^\top; \bb_2^\top; \ldots; \bb_r^\top] \in \real^{r\times n}$. Since each of the remaining rows $\bb_i^\top$ lies in the row space of $\bZ$, we can find an $\bE\in \real^{(m-r)\times r}$ such that
$$
\begin{bmatrix}
\bb_{r+1}^\top\\
\bb_{r+2}^\top \\
\vdots\\
\bb_m^\top
\end{bmatrix} = \bE\bZ.
$$
That is,
$$
\bP\bA = \begin{bmatrix}
\bb_1^\top \\
\bb_2^\top\\
\vdots\\
\bb_r^\top\\
\bb_{r+1}^\top\\
\vdots \\
\bb_m^\top
\end{bmatrix}=
\begin{bmatrix}
\bI_r \\
\bE
\end{bmatrix}
\bZ,
$$
where $\bI_r$ is an $r\times r$ identity matrix. Moreover, $\bZ\in \real^{r\times n}$ has full row rank such that its full LQ decomposition is given by $\bZ = [\bL, \bzero]\bV$, where $\bL\in \real^{r\times r}$ is a lower triangular matrix with full rank and $\bV$ is an orthogonal matrix. This implies
\begin{equation}\label{equation:urv-smpl}
\bP\bA = \begin{bmatrix}
\bI_r \\
\bE
\end{bmatrix}
\bZ
=
\begin{bmatrix}
\bI_r \\
\bE
\end{bmatrix}
\begin{bmatrix}
\bL & \bzero
\end{bmatrix}
\bV
=
\begin{bmatrix}
\bL & \bzero \\
\bE\bL & \bzero
\end{bmatrix}\bV.
\end{equation}
Since $\bL$ has full rank, this means
$\begin{bmatrix}
\bL \\
\bE\bL
\end{bmatrix}$ also has full rank such that its full QR decomposition is given by
$\bU_0\begin{bmatrix}
\bR \\
\bzero
\end{bmatrix}$, where $\bR\in \real^{r\times r}$ is upper triangular and $\bU_0$ is an orthogonal matrix. Substituting into Equation~\eqref{equation:urv-smpl}, we obtain
$$
\bA =\bP^{-1} \bU_0 \begin{bmatrix}
\bR & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV .
$$
Let $\bU =\bP^{-1} \bU_0$, which, as a product of two orthogonal matrices, is itself orthogonal, and the result follows.
\end{proof}
Again, there is a reduced version of the URV decomposition, and the difference between the full and reduced URV can be inferred from the context, as shown in Figure~\ref{fig:ulv-comparison}. The ULV and URV are sometimes jointly referred to as the UTV decomposition framework \citep{fierro1997low, golub2013matrix}.
\begin{figure}[htbp]
\centering
\centering
\resizebox{1.0\textwidth}{!}{%
\begin{tikzpicture}[>=latex]
\tikzstyle{state} = [draw, very thick, fill=white, rectangle, minimum height=3em, minimum width=6em, node distance=8em, font={\sffamily\bfseries}]
\tikzstyle{stateEdgePortion} = [black,thick];
\tikzstyle{stateEdge} = [stateEdgePortion,->];
\tikzstyle{stateEdge2} = [stateEdgePortion,<->];
\tikzstyle{edgeLabel} = [pos=0.5, text centered, font={\sffamily\small}];
\node[ellipse, name=subspace, draw,font={\sffamily\bfseries}, node distance=7em, xshift=-9em, yshift=-1em,fill={colorals}] {Subspaces};
\node[state, name=qr, below of=subspace, xshift=-9em, yshift=0em, fill={colorlu}] {Full QR};
\node[state, name=lq, below of=subspace, xshift=9em, yshift=0em, fill={colorlu}] {Full LQ};
\node[state, name=rqr, left of=qr, xshift=-3em, fill={colorqr}] {Reduced QR};
\node[state, name=rlq, right of=lq, xshift=3em, fill={colorqr}] {Reduced LQ};
\node[state, name=utv, below of=subspace, xshift=0em, yshift=-9em, fill={colorlu}] {Full URV/ULV};
\node[state, name=rutv, draw, right of=utv,xshift=7em, yshift=0em,fill={colorqr}]
{Reduced URV/ULV};
\coordinate (qr2inter3) at ($(subspace.west -| qr.north) + (-0em,0em)$);
\draw (subspace.west) edge[stateEdgePortion] (qr2inter3);
\draw (qr2inter3) edge[stateEdge]
node[edgeLabel, text width=8em, yshift=0.8em]{\parbox{2em}{Column\\Space}} (qr.north);
\coordinate (lq2inter3) at ($(subspace.east -| lq.north) + (-0em,0em)$);
\draw (subspace.east) edge[stateEdgePortion] (lq2inter3);
\draw (lq2inter3) edge[stateEdge]
node[edgeLabel, text width=7.25em, yshift=0.8em]{\parbox{2em}{Row\\Space}} (lq.north);
\coordinate (rqr2inter1) at ($(subspace.north) + (0,1em)$);
\coordinate (rqr2inter3) at ($(rqr2inter1-| rqr.north) + (-0em,0em)$);
\draw (subspace.north) edge[stateEdgePortion] (rqr2inter1);
\draw (rqr2inter1) edge[stateEdgePortion] (rqr2inter3);
\draw (rqr2inter3) edge[stateEdge]
node[edgeLabel, text width=8em, yshift=0.8em]{\parbox{2em}{Column\\Space}} (rqr.north);
\coordinate (rlr2inter3) at ($(rqr2inter1-| rlq.north) + (-0em,0em)$);
\draw (rqr2inter1) edge[stateEdgePortion] (rlr2inter3);
\draw (rlr2inter3) edge[stateEdge]
node[edgeLabel, text width=8em, yshift=0.8em]{\parbox{2em}{Row\\Space}} (rlq.north);
\draw (rqr.east)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{Complete}
(qr.west) ;
\draw (rlq.west)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{Complete}
(lq.east) ;
\draw (utv.east)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{Deleting}
(rutv.west) ;
\coordinate (qr2utv) at ($(qr.south) + (-0em,-3em)$);
\coordinate (qr2utv2) at ($(qr2utv -| utv.north) + (-0em,0em)$);
\coordinate (qr2utv3) at ($(lq.south) + (-0em,-3em)$);
\draw (qr.south) edge[stateEdgePortion] (qr2utv);
\draw (qr2utv) edge[stateEdgePortion] (qr2utv2);
\draw (lq.south) edge[stateEdgePortion] (qr2utv3);
\draw (qr2utv3) edge[stateEdgePortion] (qr2utv2);
\draw (qr2utv2) edge[stateEdge] (utv.north);
\begin{pgfonlayer}{background}
\draw [join=round,cyan,dotted,fill={colormiddleleft}] ($(subspace.north west -|rqr.north west) + (-0.6em, 2em)$) rectangle ($(rutv.east -| rlq.south east) + (0.6em, -2.0em)$);
\end{pgfonlayer}
\end{tikzpicture}
}
\caption{Deriving the ULV/URV from the QR and LQ decompositions.}
\label{fig:decomQR}
\end{figure}
The relationship between the QR and the ULV/URV is depicted in Figure~\ref{fig:decomQR}. Furthermore, the ULV and URV decompositions can be utilized to prove an important property in linear algebra that we shall shortly see: the row rank and column rank of any matrix are equal. An alternative elementary construction proving this claim is provided in Appendix~\ref{append:row-equal-column} (p.~\pageref{append:row-equal-column}).
We will shortly see that the forms of the ULV and URV are very close to that of the singular value decomposition (SVD). All three sandwich a middle factor between two orthogonal matrices. In particular, the ULV and URV exhibit orthonormal bases for the four subspaces of $\bA$ in the fundamental theorem of linear algebra. Taking the ULV as an example, the first $r$ columns of $\bU$ form an orthonormal basis of $\cspace(\bA)$, and the last $(m-r)$ columns of $\bU$ form an orthonormal basis of $\nspace(\bA^\top)$; the first $r$ rows of $\bV$ form an orthonormal basis for the row space $\cspace(\bA^\top)$, and the last $(n-r)$ rows form an orthonormal basis for $\nspace(\bA)$ (similar to the two-sided orthogonal decomposition):
$$
\begin{aligned}
\cspace(\bA) &= span\{\bu_1, \bu_2, \ldots, \bu_r\}, \\
\nspace(\bA) &= span\{\bv_{r+1}, \bv_{r+2}, \ldots, \bv_n\}, \\
\cspace(\bA^\top) &= span\{ \bv_1, \bv_2, \ldots,\bv_r \} ,\\
\nspace(\bA^\top) &=span\{\bu_{r+1}, \bu_{r+2}, \ldots, \bu_m\}.
\end{aligned}
$$
The SVD goes further in that it connects the two pairs of orthonormal bases, i.e., it transforms the column-space basis into the row-space basis, and the left null space basis into the right null space basis. More details are given in the SVD section.
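As an illustration (our own, using the SVD as one concrete member of the UTV family, since its diagonal middle factor is trivially triangular), the four-subspace claims can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, r = 6, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r

U, s, Vt = np.linalg.svd(A)   # A = U diag(s) Vt: a UTV with diagonal middle
V = Vt                        # matches the A = U T V convention used here

# Last m-r columns of U lie in N(A^T); last n-r rows of V lie in N(A)
assert np.allclose(U[:, r:].T @ A, 0, atol=1e-10)
assert np.allclose(A @ V[r:, :].T, 0, atol=1e-10)
# First r columns of U span C(A): projecting onto them leaves A unchanged
assert np.allclose(U[:, :r] @ (U[:, :r].T @ A), A)
```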
\section{Complete Orthogonal Decomposition}
Related to the UTV decomposition is the \textit{complete orthogonal decomposition}, which also factors the matrix between two orthogonal matrices.
\begin{theoremHigh}[Complete Orthogonal Decomposition]\label{theorem:complete-orthogonal-decom}
Every $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU \begin{bmatrix}
\bT & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV,
$$
where $\bU\in \real^{m\times m}$ and $\bV\in \real^{n\times n}$ are two orthogonal matrices, and $\bT\in \real^{r\times r}$ is a rank-$r$ matrix.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:complete-orthogonal-decom}]
By rank-revealing QR decomposition (Theorem~\ref{theorem:rank-revealing-qr-general}, p.~\pageref{theorem:rank-revealing-qr-general}), $\bA$ can be factored as
$$
\bQ_1^\top \bA\bP =
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix},
$$
where $\bR_{11} \in \real^{r\times r}$ is upper triangular, $\bR_{12} \in \real^{r\times (n-r)}$, $\bQ_1\in \real^{m\times m}$ is an orthogonal matrix, and $\bP$ is a permutation matrix.
\begin{mdframed}[hidealllines=true,backgroundcolor=yellow!10]
Then, it is not hard to find a decomposition such that
\begin{equation}\label{equation:orthogonal-complete-qr-or-not}
\begin{bmatrix}
\bR_{11}^\top \\
\bR_{12}^\top
\end{bmatrix}
=
\bQ_2
\begin{bmatrix}
\bS \\
\bzero
\end{bmatrix},
\end{equation}
where $\bQ_2$ is an orthogonal matrix and $\bS$ is a rank-$r$ matrix. The decomposition is reasonable in the sense that the matrix $\begin{bmatrix}
\bR_{11}^\top \\
\bR_{12}^\top
\end{bmatrix} \in \real^{n\times r}$ has rank $r$, so its columns span an $r$-dimensional subspace of $\real^{n}$. The columns of $\bQ_2$, on the other hand, span the whole of $\real^n$, and we can assume that the first $r$ columns of $\bQ_2$ span the same subspace as the columns of $\begin{bmatrix}
\bR_{11}^\top \\
\bR_{12}^\top
\end{bmatrix}$. The matrix $\begin{bmatrix}
\bS \\
\bzero
\end{bmatrix}$ then transforms $\bQ_2$ into $\begin{bmatrix}
\bR_{11}^\top \\
\bR_{12}^\top
\end{bmatrix}$.
\end{mdframed}
Then, it follows that
$$
\bQ_1^\top \bA\bP \bQ_2 =
\begin{bmatrix}
\bS^\top & \bzero \\
\bzero & \bzero
\end{bmatrix}.
$$
Letting $\bU = \bQ_1$, $\bV=\bQ_2^\top\bP^\top$, and $\bT = \bS^\top$ completes the proof.
\end{proof}
We can see that when Equation~\eqref{equation:orthogonal-complete-qr-or-not} is taken to be the full QR decomposition of $ \begin{bmatrix}
\bR_{11}^\top \\
\bR_{12}^\top
\end{bmatrix}$ (so that $\bS$ is upper triangular), then the complete orthogonal decomposition reduces to the ULV decomposition.
\section{Applications}
\subsection{Application: Least Squares via ULV/URV for Rank Deficient Matrices}\label{section:ls-utv}
In Section~\ref{section:application-ls-qr} (p.~\pageref{section:application-ls-qr}), we introduced the LS solution via the full QR decomposition for full-rank matrices.
However, it often happens that the matrix is rank-deficient. If $\bA$ does not have full column rank, $\bA^\top\bA$ is not invertible. We can then use the ULV/URV decomposition to find the least squares solution, as illustrated in the following theorem.\index{Least squares}\index{Rank deficient}
\begin{theorem}[LS via ULV/URV for Rank-Deficient Matrices]\label{theorem:qr-for-ls-urv}
Let $\bA\in \real^{m\times n}$ with rank $r$ and $m\geq n$. Suppose $\bA=\bU\bT\bV$ is its full ULV/URV decomposition with $\bU\in\real^{m\times m}, \bV\in \real^{n\times n}$ being orthogonal matrices, and
$$
\bT = \begin{bmatrix}
\bT_{11} & \bzero \\
\bzero & \bzero
\end{bmatrix}
$$
where $\bT_{11} \in \real^{r\times r}$ is a lower triangular matrix or an upper triangular matrix.
For $\bb\in \real^m$, the LS solution with minimal 2-norm to $\bA\bx=\bb$ is given by
$$
\bx_{LS} = \bV^\top
\begin{bmatrix}
\bT_{11}^{-1}\bc\\
\bzero
\end{bmatrix},
$$
where $\bc$ is the first $r$ components of $\bU^\top\bb$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-for-ls-urv}]
Since $\bA = \bU\bT\bV$ is the full ULV/URV decomposition of $\bA$ and $\bU$ is an orthogonal matrix, we have
$$
\begin{aligned}
||\bA\bx-\bb||^2 &= (\bA\bx-\bb)^\top(\bA\bx-\bb)\\
&=(\bA\bx-\bb)^\top\bU\bU^\top (\bA\bx-\bb) \qquad &(\text{Since $\bU$ is an orthogonal matrix})\\
&=||\bU^\top \bA \bx-\bU^\top\bb||^2 \qquad &(\text{Invariant under orthogonal})\\
&=|| \bU^\top\bU\bT\bV \bx-\bU^\top\bb||^2\\
&=|| \bT\bV \bx-\bU^\top\bb||^2\\
&=||\bT_{11}\be - \bc||^2+||\bd||^2,
\end{aligned}
$$
where $\bc$ denotes the first $r$ components and $\bd$ the last $m-r$ components of $\bU^\top\bb$, while $\be$ denotes the first $r$ components and $\bff$ the last $n-r$ components of $\bV\bx$:
$$
\bU^\top\bb
= \begin{bmatrix}
\bc \\
\bd
\end{bmatrix},
\qquad
\bV\bx
= \begin{bmatrix}
\be \\
\bff
\end{bmatrix}.
$$
The LS solution can then be calculated by back/forward substitution of the upper/lower triangular system $\bT_{11}\be = \bc$, i.e., $\be = \bT_{11}^{-1}\bc$. For $\bx$ to have minimal 2-norm, $\bff$ must be zero. That is,
$$
\bx_{LS} = \bV^\top
\begin{bmatrix}
\bT_{11}^{-1}\bc\\
\bzero
\end{bmatrix}.
$$
This completes the proof.
\end{proof}
Moreover, we will shortly find that a similar argument can be made via the singular value decomposition (SVD) in Section~\ref{section:application-ls-svd}, since the SVD shares a similar form with the ULV/URV: both are sandwiched between two orthogonal matrices, and the SVD goes further by diagonalizing the middle matrix. Readers can also derive the LS solution via the rank-revealing QR decomposition.
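A hedged numerical sketch of the theorem, again using the SVD as a concrete UTV instance with $\bT_{11}=\mathrm{diag}(\sigma_1,\ldots,\sigma_r)$ (our illustration); we check that the computed $\bx_{LS}$ satisfies the normal equations and lies in the row space, the two properties characterizing the minimal 2-norm LS solution:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 8, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r, m >= n
b = rng.standard_normal(m)

# SVD as a concrete UTV: A = U T V with V := Vt and T11 = diag(s[:r])
U, s, Vt = np.linalg.svd(A)
V = Vt
c = (U.T @ b)[:r]                     # first r components of U^T b
e = c / s[:r]                         # solve T11 e = c (diagonal system)
x_ls = V.T @ np.concatenate([e, np.zeros(n - r)])   # f = 0 gives minimal norm

# x_ls solves the normal equations ...
assert np.allclose(A.T @ (A @ x_ls - b), 0, atol=1e-8)
# ... and has no component in N(A), hence it is the minimal 2-norm solution
assert np.allclose(V[r:, :] @ x_ls, 0, atol=1e-10)
```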
\paragraph{A word on the minimal 2-norm LS solution} For the least squares problem, the set of all minimizers
$$
\mathcal{X} = \{\bx\in \real^n: ||\bA\bx-\bb|| =\min \}
$$
is convex \citep{golub2013matrix}: if $\bx_1, \bx_2 \in \mathcal{X}$ and $\lambda \in [0,1]$, then
$$
||\bA(\lambda\bx_1 + (1-\lambda)\bx_2) -\bb|| \leq \lambda||\bA\bx_1-\bb|| +(1-\lambda)||\bA\bx_2-\bb|| = \mathop{\min}_{\bx\in \real^n} ||\bA\bx-\bb||.
$$
Thus $\lambda\bx_1 + (1-\lambda)\bx_2 \in \mathcal{X}$. In the above proof, if we do not set $\bff=\bzero$, we obtain further least squares solutions. However, the minimal 2-norm least squares solution is unique. For the full-rank case in the previous section, the least squares solution is unique and thus trivially has minimal 2-norm. See also \citep{foster2003solving, golub2013matrix} for a more detailed discussion of this topic.
\subsection{Application: Row Rank equals Column Rank Again via UTV}
As mentioned above, the UTV framework can prove the important theorem in linear algebra that the row rank and column rank of a matrix are equal.
Notice that, to apply the UTV framework in the proof, a slight modification of the statement of the UTV existence theorem is needed. For example, in Theorem~\ref{theorem:ulv-decomposition}, the matrix $\bA$ is assumed to have rank $r$; but asserting a rank already presumes the fact that the row rank equals the column rank. A better formulation for this purpose is to assume that $\bA$ has \textit{column} rank $r$ in Theorem~\ref{theorem:ulv-decomposition}. See
\citep{lu2021column} for a detailed discussion.
\begin{theorem}[Row Rank Equals Column Rank\index{Matrix rank}]\label{lemma:equal-dimension-rank}
The dimension of the column space of a matrix $\bA\in \real^{m\times n}$ is equal to the dimension of its
row space, i.e., the row rank and the column rank of a matrix $\bA$ are equal.
\end{theorem}
\begin{proof}[\textbf{of Theorem~\ref{lemma:equal-dimension-rank}, p.~\pageref{lemma:equal-dimension-rank}, A First Way}]
Any $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU_0 \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV_0,
$$
where $\bU_0\in \real^{m\times m}$ and $\bV_0\in \real^{n\times n}$ are two orthogonal matrices, and $\bL\in \real^{r\times r}$ is a lower triangular matrix \footnote{Instead of using the ULV decomposition, in some texts, the authors use elementary transformations $\bE_1, \bE_2$ such that $
\bA = \bE_1 \begin{bmatrix}
\bI_r & \bzero \\
\bzero & \bzero
\end{bmatrix}\bE_2,
$ to prove the result.}. Let
$$\bD = \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix},
$$
the row rank and column rank of $\bD$ are apparently the same. If we can prove that the column rank of $\bA$ equals the column rank of $\bD$, and that the row rank of $\bA$ equals the row rank of $\bD$, then the proof is complete.
Let $\bU = \bU_0^\top$ and $\bV=\bV_0^\top$, so that $\bD = \bU\bA\bV$. Decomposing this idea into two steps, a moment of reflection reveals that if we first prove that the row rank and column rank of $\bA$ equal those of $\bU\bA$, and then prove that the row rank and column rank of $\bU\bA$ equal those of $\bU\bA\bV$, the proof is likewise complete.
\paragraph{Row rank and column rank of $\bA$ are equal to those of $\bU\bA$} Let $\bB = \bU\bA$, and let further $\bA=[\ba_1,\ba_2,\ldots, \ba_n]$ and $\bB=[\bb_1,\bb_2,\ldots,\bb_n]$ be the column partitions of $\bA$ and $\bB$ respectively. Therefore, $[\bb_1,\bb_2,\ldots,\bb_n] = [\bU\ba_1,\bU\ba_2,\ldots, \bU\ba_n]$. If $x_1\ba_1+x_2\ba_2+\ldots+x_n\ba_n=0$, then we also have
$$
\bU(x_1\ba_1+x_2\ba_2+\ldots+x_n\ba_n) = x_1\bb_1+x_2\bb_2+\ldots+x_n\bb_n = 0.
$$
Let $j_1, j_2, \ldots, j_r$ be distinct indices between 1 and $n$. If the set $\{\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}\}$ is linearly dependent, the set $\{\bb_{j_1}, \bb_{j_2}, \ldots, \bb_{j_r}\}$ must also be linearly dependent; equivalently, if $\{\bb_{j_1}, \bb_{j_2}, \ldots, \bb_{j_r}\}$ is linearly independent, then $\{\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}\}$ must be linearly independent as well. This implies
$$
dim(\cspace(\bB)) \leq dim(\cspace(\bA)).
$$
Similarly, by $\bA = \bU^\top\bB$, it follows that
$$
dim(\cspace(\bA)) \leq dim(\cspace(\bB)).
$$
This implies
$$
dim(\cspace(\bB)) = dim(\cspace(\bA)).
$$
Applying the same process to $\bB^\top$ and $\bA^\top$, we have
$$
dim(\cspace(\bB^\top)) = dim(\cspace(\bA^\top)).
$$
This implies the row rank and column rank of $\bA$ and $\bB=\bU\bA$ are the same.
\paragraph{Row rank and column rank of $\bU\bA$ are equal to those of $\bU\bA\bV$}
Similarly, by applying the above discussion to $\bU\bA$ and $\bU\bA\bV$, we can also show that the row rank and column rank of $\bU\bA$ and $\bU\bA\bV$ are the same. This completes the proof.
\end{proof}
\chapter*{\centering \begin{normalsize}Abstract\end{normalsize}}
In 1954, Alston S. Householder published \textit{Principles of Numerical Analysis}, one of the first modern treatments of matrix decomposition, favoring the (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices.
And now, matrix decomposition has become a core technology in machine learning, largely due to the development of the back propagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to the concepts and mathematical tools of numerical linear algebra and matrix analysis, in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we clearly realize our inability to cover all the useful and interesting results concerning matrix decomposition within the limited scope of this discussion, e.g., the separate analysis of Euclidean spaces, Hermitian spaces, Hilbert spaces, and results in the complex domain. We refer the reader to the literature in the field of linear algebra for a more detailed introduction to these related topics.
This survey is primarily a summary of the purpose and significance of important matrix decomposition methods, e.g., the LU, QR, and SVD, and of their origins and computational complexity, which shed light on their modern applications.
Most importantly, this article presents improved procedures for most of the calculations in the decomposition algorithms, which can potentially reduce their complexity.
Again, this is a decomposition-based text, so we introduce the related background where it is needed. In many other textbooks on linear algebra, the principal ideas are discussed and the matrix decomposition methods serve as a ``byproduct''. Here, in contrast, we focus on the decomposition methods themselves, and the principal ideas serve as fundamental tools for them. The mathematical prerequisite is a first course in linear algebra. Other than this modest background, the development is self-contained, with rigorous proofs provided throughout.
\paragraph{Keywords: }
Existence and computing of matrix decompositions, Complexity, Floating point operations (flops), Low-rank approximation, Pivot, LU decomposition for nonzero leading principal minors, Data distillation, CR decomposition, CUR/Skeleton decomposition, Interpolative decomposition, Biconjugate decomposition, Coordinate transformation, Hessenberg decomposition, ULV decomposition, URV decomposition, Rank decomposition, Gram-Schmidt process, Householder reflector, Givens rotation, Rank revealing decomposition, Cholesky decomposition and update/downdate, Eigenvalue problems, Alternating least squares, Randomized algorithm, Tensor decomposition, CP decomposition, Tucker decomposition, High-order SVD.
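Among the keywords above is the LU decomposition for matrices with nonzero leading principal minors. As a minimal sketch of that idea (an illustration only, not the survey's optimized procedure), a Doolittle-style elimination without pivoting can be written as:

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU factorization A = L @ U without pivoting.

    Assumes all leading principal minors of A are nonzero, so no
    zero pivot is encountered during elimination.
    """
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # multiplier
            U[i, k:] -= L[i, k] * U[k, k:]   # eliminate below the pivot
    return L, U

A = np.array([[4.0, 3.0, 2.0],
              [6.0, 3.0, 1.0],
              [2.0, 5.0, 7.0]])  # all leading principal minors nonzero
L, U = lu_nopivot(A)
assert np.allclose(L @ U, A)                              # A = LU
assert np.allclose(L, np.tril(L)) and np.allclose(U, np.triu(U))
```

When a leading principal minor vanishes, a permutation (PLU) is required; that case is treated separately in the survey.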
\input{chapter-worldmap.tex}
\input{chapter-intro.tex}
\input{chapter-LU.tex}
\input{chapter-cholesky.tex}
\input{chapter-qr.tex}
\input{chapter-utv.tex}
\input{chapter-cr.tex}
\input{chapter-skeleton.tex}
\input{chapter-id.tex}
\input{chapter-hessenberg.tex}
\input{chapter-eigenvalue.tex}
\input{chapter-spectral.tex}
\input{chapter-svd.tex}
\input{chapter-eigencalc.tex}
\input{chapter-coordinate.tex}
\input{chapter-als.tex}
\input{chapter-NMF.tex}
\input{chapter-biconjugate.tex}
\input{chapter-applications.tex}
\input{chapter-tensornotation.tex}
\input{chapter-tensorcp.tex}
\input{chapter-tensortucker.tex}
\input{chapter-tensorhosvd.tex}
\input{chapter-tensorttdecom.tex}
\input{chapter-append.tex}
\newpage
\vskip 0.2in
% Source: https://arxiv.org/abs/2107.02579, ``Numerical Matrix Decomposition'' (math.HO; math.NA), 2021.
% Source: https://arxiv.org/abs/1712.04411, ``Betti Table Stabilization of Homogeneous Monomial Ideals.''
\begin{abstract}
Given a homogeneous monomial ideal $I$, we provide a question- and example-based investigation of the stabilization patterns of the Betti table shapes of $I^d$ as we vary $d$. We build off Whieldon's definition of the stabilization index of $I$, Stab$(I)$, to define the stabilization sequence of $I$, StabSeq$(I)$, and use it to explore changes in the shapes of the Betti tables of $I^d$ as we vary $d$. We also present the stabilization indices and sequences of the collection of ideals $\{I_{n}\}$ where $I_{n}=(a^{2n}b^{2n}c^{2n},b^{4n}c^{2n},a^{3n}c^{3n},a^{6n-1}b)\subseteq \Bbbk[a,b,c]$.
\end{abstract}
\section{Introduction}
Little is known about the relationship among the Betti tables $\beta(I^d)$ of $I^d$ as we vary $d$. Elena Guardo and Adam Van Tuyl compute the Betti numbers of homogeneous complete intersection ideals in \cite{EG}. In \cite{GW}, Whieldon proved that the shapes of the Betti tables of equigenerated ideals stabilize, and provided a conjectured formula for the power of stabilization for edge ideals $I_{G}$. By the ``shape'' of the Betti tables of a given ideal $I$, we mean the arrangement of their nonzero entries, and by ``stabilize'' we mean that there exists some $D$ for which the Betti tables of $I^d$ share the same shape as that of $I^D$ for all $d \geq D$. Though Whieldon's proof ensures that the $D$ at which the Betti table shapes of $\{I^d\}$ stabilize is finite, there is currently no formula to predict the value of $D$.
The lack of a general formula for the power $D$ of $I$ at which we see a stabilized shape of $\beta (I^d)$ presents a problem for computational investigations. Though we might be confident, having seen no evidence to the contrary, that the Betti tables of $\{I^d\}$ have stabilized in shape at $D$, we often have no way of knowing. Though this lack of knowledge is problematic, it also opens the door to exciting questions, such as: If the Betti tables of $I^D$ through $I^{D+x}$ share the same shape, for some $x$, can we feel confident that the shape of the Betti tables of $\{I^d\}$ has stabilized by $D$? If so, what are the bounds on this $x$? In what ways can we characterize the Betti table shape stabilization patterns of a given ideal? Are there ideals for which we do not see a stabilized Betti table shape until $I^d$ for $d$ arbitrarily large?
In this paper we focus on the Betti table shape stabilization patterns of homogeneous monomial ideals and provide an example-driven narrative to unearth the potential of these questions.
\hyperref[B:Background]{Section~\ref*{B:Background}} provides background on Betti numbers, Betti tables, and the stabilization index. In this section we also define the \textit{stabilization sequence} of a given ideal, and present Whieldon's result guaranteeing Betti table shape stabilization.
\hyperref[Q: Questions and Results]{Section~\ref*{Q: Questions and Results}} is framed around questions and examples concerning patterns in the stabilization indices and sequences of homogeneous monomial ideals. In this section we also define a \textit{linearly connected family} of ideals and investigate the linearly connected family of ideals $\{I_{n}\}$ where $I_{n}=(a^{2n}b^{2n}c^{2n},b^{4n}c^{2n},a^{3n}c^{3n},a^{6n-1}b)\subseteq \mathbbm{k}[a,b,c]$ which demonstrates nice relationships between the stabilization indices and sequences of $I_{n}$ as we vary $n$.
\hyperref[Appendix A]{Section~\ref*{Appendix A}} presents the Macaulay2 \cite{M2} commands used in this paper. In this section we also provide the Macaulay2 \cite{M2} session, with commentary, used in Example\autoref{E: Example 2.1} and Example\autoref{E:Example 2.3}.
\hyperref[Appendix B]{Section~\ref*{Appendix B}} provides the source code of the Macaulay2 \cite{M2} package \texttt{StabSeq.m2}. This package, written by the author, is used to produce the stabilization sequences presented in this paper.
\section{Background} \label{B:Background}
Let $R=\mathbbm{k}[x_{1},x_{2},...,x_{n}]$, where $\mathbbm{k}$ is an algebraically closed field of characteristic zero. For $j \in \mathbbm{Z}$, let $R(-j)$ denote the graded $R$-module such that $R(-j)_{i}=R_{i-j}$. We say that $R(-j)$ is shifted by $j$ degrees: an element of degree $i$ in $R$ has degree $i+j$ in $R(-j)$. Following the convention in \cite{Peeva2011}, the element $1 \in R(-j)$ has degree $j$ and is called the \textit{1-generator} of $R(-j)$.
\subsection{Minimal graded free resolutions} \label{B: Minimal graded free resolutions}
Let $I \subseteq R$ be a homogeneous monomial ideal. Given $I$ we can construct a \textit{free resolution of $I$}
$$
0 \xrightarrow{d_{\ell+1}} F_{\ell} \xrightarrow{d_{\ell}} F_{\ell-1} \xrightarrow{d_{\ell-1}} \cdots \xrightarrow{d_{2}} F_{1} \xrightarrow{d_{1}} F_{0} \xrightarrow{d_{0}} I \xrightarrow{d} 0
$$
such that: \begin{enumerate}
\item for all $i$, $F_{i}$ is a free finitely generated graded R-module of the form
$$
F_{i} = \oplus_{j \in \mathbbm{Z}}R(-j)^{\beta_{i,j}(I)}
$$
for some graded Betti numbers $\beta_{i,j}(I)$,
\item the sequence of homomorphisms is \textit{exact}, that is, Ker$(d_{i})=\text{Im}(d_{i+1})$ for all $i$, and Ker$(d)=\text{Im}(d_{0})$.
\end{enumerate}
\begin{defn}
A free resolution is \textit{minimal} if $d_{i+1}(F_{i+1})\subseteq(x_{1},x_{2},...,x_{n})F_{i}$, for all $i$.
\end{defn}
Informally, this condition means that the differential matrix, $D_{i}$, given by the map $d_{i}$, contains no nonzero constants.
\begin{example}
The homomorphism
$$
R(-5) \oplus R(-4) \xrightarrow{
\begin{pmatrix}
1 & 0\\
0 & x^2\\
\end{pmatrix}} R(-5) \oplus R(-2)
$$
is not minimal since the differential matrix of the map from $R(-5) \oplus R(-4)$ to $R(-5) \oplus R(-2)$ contains the nonzero constant 1. Note that the map from $R(-5)$ to $R(-5)$ is trivial, being multiplication by the identity. In general, we can construct a minimal free resolution from a free resolution by removing all trivial homomorphisms.
\end{example}
\begin{defn}
A free resolution is \textit{graded} if:
\begin{enumerate}
\item each R-module $F_{i}$ is graded,
\item $R_{j}F_{i} \subseteq F_{i+j}$ for all $i, j \in \mathbbm{Z}$,
\item the map $d_{i}$, for all $i$, is a homomorphism of degree 0.
\end{enumerate}
\end{defn}
We say that a homomorphism $d_{i}:F_{i} \rightarrow F_{i-1}$ has \textit{degree} $j$ if deg($d_{i}(n))=j+\text{deg}(n)$ for each $n \in F_{i}$.
\begin{example}
Consider the homomorphism
$$
R(-3) \oplus R(-2) \xrightarrow{
\begin{pmatrix}
x_{1}^3 & x_{2}^2\\
\end{pmatrix}} R.
$$
Let $(f,g) \in R(-3) \oplus R(-2)$. Under this homomorphism, $(f,g) \mapsto fx_{1}^3+gx_{2}^2$. Let $f$ and $g$ have degrees $a_{1}$ and $a_{2}$, respectively, as polynomials. Then deg$(fx_{1}^3)=a_{1}+3$ and deg$(gx_{2}^2)=a_{2}+2$ in $R$. Since the degrees of $f \in R(-3)$ and $g \in R(-2)$ are shifted by 3 and 2, respectively, deg$(f)=a_{1}+3$ in $R(-3)$ and deg$(g)=a_{2}+2$ in $R(-2)$. Since deg$(fx_{1}^3)=\text{deg}(f)$ and deg$(gx_{2}^2)=\text{deg}(g)$, the homomorphism has degree 0.
\end{example}
We now provide a step-by-step construction of a minimal graded free resolution in the following example.
\begin{example} \label{E: Example 2.1}
Let $I=(x_{1}x_{2}^2,x_{1}x_{3}^2,x_{2}^3,x_{1}^3) \subseteq \mathbbm{k}[x_{1},x_{2},x_{3}]$. We begin our minimal graded free resolution of $I$ with
$$
F_{0} \xrightarrow{d_{0}} I \xrightarrow{d} 0.
$$
The map $d$ is surjective, therefore the generators of $I$, all of which have degree 3, generate Ker$(d)$. Since Ker$(d)=\text{Im}(d_{0})$, the generators of $I$ also generate Im$(d_{0})$. We therefore set $F_{0}=R(-3) \oplus R(-3) \oplus R(-3) \oplus R(-3)$. Denote $f_{1},f_{2},f_{3},f_{4}$ to be the homogeneous 1-generator of the 4 $R$ modules of $F_{0}$. Note that, for all $i$, deg$(f_{i})=3$. We now define $d_{0}$ by
$$
\begin{tabular}{cccc}
$f_{1} \mapsto x_{1}x_{2}^2,$ & $f_{2} \mapsto x_{1}x_{3}^2,$ & $f_{3} \mapsto x_{2}^3,$ & $f_{4} \mapsto x_{1}^3$.\\
\end{tabular}
$$
Our resolution therefore begins
$$
F_{1} \xrightarrow{d_{1}} R^4(-3) \xrightarrow{
\begin{pmatrix}
x_{1}x_{2}^2 & x_{1}x_{3}^2 & x_{2}^3 & x_{1}^3\\
\end{pmatrix}
} I \rightarrow 0.
$$
Since Ker$(d_{0})=\text{Im}(d_{1})$, we can determine the generating set of Im$(d_{1})$ by determining the generating set of Ker$(d_{0})$. Let $\alpha_{1}f_{1}+\alpha_{2}f_{2}+\alpha_{3}f_{3}+\alpha_{4}f_{4} \in \text{Ker}(d_{0})$ where, for all $i$, $\alpha_{i} \in R$. To determine the generators of Ker$(d_{0})$, we must solve the equation
$$
\alpha_{1}x_{1}x_{2}^2+\alpha_{2}x_{1}x_{3}^2+\alpha_{3}x_{2}^3+\alpha_{4}x_{1}^3=0
$$
for $(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})$. The generating solutions to the above equation are
$$
\begin{tabular}{cccc}
$\sigma_{1}=(-x_{2},0,x_{1},0)$, & $\sigma_{2}=(x_{1}^2,0,0,-x_{2}^2)$, & $\sigma_{3}=(0,x_{1}^2,0,-x_{3}^2)$, & $\sigma_{4}=(-x_{3}^2,x_{2}^2,0,0)$ \\
\end{tabular}
$$
Thus
$
\begin{tabular}{cccc}
$-x_{2}f_{1}+x_{1}f_{3},$ & $x_{1}^2f_{1}-x_{2}^2f_{4},$ & $x_{1}^2f_{2}-x_{3}^2f_{4},$ & $-x_{3}^2f_{1}+x_{2}^2f_{2}$ \\
\end{tabular}
$
generate Ker$(d_{0})$. Their degrees are $4,5,5,5$, respectively. We therefore set $F_{1}=R(-4) \oplus R(-5) \oplus R(-5) \oplus R(-5)$. Denote by $g_{1}, g_{2}, g_{3}, g_{4}$ the homogeneous 1-generators of the 4 $R$-modules of $F_{1}$, respectively. Note that deg$(g_{1})=4$ and deg$(g_{i})=5$ for $2 \leq i \leq 4$. We now define $d_{1}$ by
$$
\begin{tabular}{cccc}
$g_{1} \mapsto -x_{2}f_{1}+x_{1}f_{3},$ & $g_{2} \mapsto x_{1}^2f_{1}-x_{2}^2f_{4},$ & $g_{3} \mapsto x_{1}^2f_{2}-x_{3}^2f_{4},$ & $g_{4} \mapsto -x_{3}^2f_{1}+x_{2}^2f_{2}$\\
\end{tabular}
$$
with the resulting next step of our resolution
$$
F_{2} \xrightarrow{d_{2}}
R(-4) \oplus R^3(-5) \xrightarrow{
\begin{pmatrix}
-x_{2} & x_{1}^2 & 0 & -x_{3}^2\\
0 & 0 & x_{1}^2 & x_{2}^2\\
x_{1} & 0 & 0 & 0\\
0 & -x_{2}^2 & -x_{3}^2 & 0\\
\end{pmatrix}} R^4(-3) \xrightarrow{
\begin{pmatrix}
x_{1}x_{2}^2 & x_{1}x_{3}^2 & x_{2}^3 & x_{1}^3\\
\end{pmatrix}
} I \rightarrow 0.
$$
Recall Ker$(d_{1})=\text{Im}(d_{2})$. Let $\beta_{1}g_{1}+\beta_{2}g_{2}+\beta_{3}g_{3}+\beta_{4}g_{4} \in \text{Ker}(d_{1})$ where, for all $i$, $\beta_{i} \in R$. To determine the generators of Ker$(d_{1})$, we must solve the equation
$$
\beta_{1}(-x_{2}f_{1}+x_{1}f_{3})+\beta_{2}(x_{1}^2f_{1}-x_{2}^2f_{4})+\beta_{3}(x_{1}^2f_{2}-x_{3}^2f_{4})+\beta_{4}(-x_{3}^2f_{1}+x_{2}^2f_{2})=0
$$
for $(\beta_{1},\beta_{2},\beta_{3},\beta_{4})$. The generating solution to the above equation is
$$
\eta_{1}=(0,x_{3}^2,-x_{2}^2,x_{1}^2).
$$
Thus $x_{3}^2g_{2}-x_{2}^2g_{3}+x_{1}^2g_{4}$ generates Ker$(d_{1})$. The degree of this generator of Ker$(d_{1})$ is 7. We therefore set $F_{2}=R(-7)$. Denote by $h_{1}$ the homogeneous 1-generator of $R(-7)$. Note that deg($h_{1})=7$. We now define $d_{2}$ by
$$
h_{1} \mapsto x_{3}^2g_{2}-x_{2}^2g_{3}+x_{1}^2g_{4}
$$
with the resulting next step of our resolution
$$
F_{3} \xrightarrow{d_{3}}
R(-7) \xrightarrow{
\begin{pmatrix}
0\\
x_{3}^2\\
-x_{2}^2\\
x_{1}^2\\
\end{pmatrix}
}
R(-4) \oplus R^3(-5) \xrightarrow{
\begin{pmatrix}
-x_{2} & x_{1}^2 & 0 & -x_{3}^2\\
0 & 0 & x_{1}^2 & x_{2}^2\\
x_{1} & 0 & 0 & 0\\
0 & -x_{2}^2 & -x_{3}^2 & 0\\
\end{pmatrix}}
$$
$$
R^4(-3) \xrightarrow{
\begin{pmatrix}
x_{1}x_{2}^2 & x_{1}x_{3}^2 & x_{2}^3 & x_{1}^3\\
\end{pmatrix}
} I \rightarrow 0.
$$
Recall Ker$(d_{2})=\text{Im}(d_{3})$. Let $\gamma_{1}h_{1} \in \text{Ker}(d_{2})$, where $\gamma_{1} \in R$. To determine the generators of Ker$(d_{2})$, we must solve the equation
$$
\gamma_{1}(x_{3}^2g_{2}-x_{2}^2g_{3}+x_{1}^2g_{4})=0.
$$
The only solution to the above equation is $\gamma_{1}=0$. Thus Ker$(d_{2})=0=\text{Im}(d_{3})$. We conclude by setting $F_{3}=0$. We now obtain our complete minimal graded free resolution of $I$.
$$
0 \rightarrow
R(-7) \xrightarrow{
\begin{pmatrix}
0\\
x_{3}^2\\
-x_{2}^2\\
x_{1}^2\\
\end{pmatrix}
}
R(-4) \oplus R^3(-5) \xrightarrow{
\begin{pmatrix}
-x_{2} & x_{1}^2 & 0 & -x_{3}^2\\
0 & 0 & x_{1}^2 & x_{2}^2\\
x_{1} & 0 & 0 & 0\\
0 & -x_{2}^2 & -x_{3}^2 & 0\\
\end{pmatrix}}
$$
$$
R^4(-3) \xrightarrow{
\begin{pmatrix}
x_{1}x_{2}^2 & x_{1}x_{3}^2 & x_{2}^3 & x_{1}^3\\
\end{pmatrix}
} I \rightarrow 0.
$$
\end{example}
By construction, our resolution is graded and free. It is also minimal, since no nonzero constants appear in any of the differential matrices $D_{i}$. All remaining minimal graded free resolutions in this paper were computed in Macaulay2 \cite{M2}. See \hyperref[Appendix A]{Section~\ref*{Appendix A}} for the Macaulay2 \cite{M2} commands used in this paper.
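As an independent check (a SymPy sketch, not part of the paper's Macaulay2 workflow), one can verify symbolically that consecutive differential matrices of the resolution above compose to zero. This checks only the complex property, which is necessary for exactness, and says nothing about minimality:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# Differential matrices of the resolution of I = (x1*x2^2, x1*x3^2, x2^3, x1^3):
# D0 maps F_0 onto I, D1 maps F_1 to F_0, D2 maps F_2 = R(-7) to F_1.
D0 = sp.Matrix([[x1*x2**2, x1*x3**2, x2**3, x1**3]])
D1 = sp.Matrix([[-x2, x1**2, 0,      -x3**2],
                [0,   0,     x1**2,   x2**2],
                [x1,  0,     0,       0],
                [0,  -x2**2, -x3**2,  0]])
D2 = sp.Matrix([[0], [x3**2], [-x2**2], [x1**2]])  # image of h1 in terms of g1..g4

# Consecutive maps of a complex must compose to zero.
assert (D0 * D1).expand() == sp.zeros(1, 4)
assert (D1 * D2).expand() == sp.zeros(4, 1)
```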
\subsection{Graded Betti numbers and Betti numbers} \label{Graded Betti numbers and Betti numbers}
Below we use the same notation as in \hyperref[B: Minimal graded free resolutions]{Section~\ref*{B: Minimal graded free resolutions}}.
\begin{defn}
The \textit{graded Betti numbers} of $I$ are the numbers $$\beta_{i,j}(I)= \text{number of copies of}\ R(-j)\ \text{in}\ F_{i}.$$
\end{defn}
\begin{defn}
The $i^{th}$ \textit{total Betti number} of $I$ is the sum of the graded Betti numbers in homological degree $i$: $$\beta_{i}(I)= \sum_{j \in \mathbbm{Z}}\beta_{i,j}(I).$$
\end{defn}
\begin{example} \label{E:Example 2.3}
In Example\autoref{E: Example 2.1}, we saw that the minimal graded free resolution of $I=(x_{1}x_{2}^2,x_{1}x_{3}^2,x_{2}^3,x_{1}^3) \subseteq \mathbbm{k}[x_{1},x_{2},x_{3}]$ was
$$
0 \rightarrow R(-7) \rightarrow R(-4) \oplus R^3(-5) \rightarrow R^4(-3) \rightarrow I \rightarrow 0.
$$
In the above minimal graded free resolution of $I$ we have that $F_{0}=R^4(-3)$, $F_{1}=R(-4) \oplus R^3(-5)$, and $F_{2}=R(-7)$. The graded Betti numbers of $I$ are therefore
$$
\begin{tabular}{cccc}
$\beta_{0,3}(I)=4$ & $\beta_{1,4}(I)=1$ & $\beta_{1,5}(I)=3$ & $\beta_{2,7}(I)=1$\\
\end{tabular}
$$
and the total Betti numbers of $I$ are
$$
\begin{tabular}{ccc}
$\beta_{0}(I)=4$ & $\beta_{1}(I)=4$ & $\beta_{2}(I)=1$.\\
\end{tabular}
$$
\end{example}
\subsection{Betti tables}
\begin{defn}
Once we have determined the graded Betti numbers of a given ideal, we can express them in a \textit{Betti table}. The Betti tables in this paper are styled as computed in Macaulay2 \cite{M2}: the entry in the $i^{th}$ column and $j^{th}$ row of a Betti table is the graded Betti number $\beta_{i,i+j}$. A graded Betti number equal to 0 will be denoted in a Betti table by a $\cdot$ .
\end{defn}
\begin{example} \label{E:Example 2.5}
The Betti table of the ideal in Example\autoref{E:Example 2.3} would be expressed as follows.
$$
\begin{tabular}{ccccc}
$I$ & & & $I$ &\\
&
\begin{tabular}{cccc}
- & 0 & 1 & 2\\
total: & $\beta_{0}(I)$ & $\beta_{1}(I)$ & $\beta_{2}(I)$\\
1: & $\beta_{0,1}(I)$ & $\beta_{1,2}(I)$ & $\beta_{2,3}(I)$\\
2: & $\beta_{0,2}(I)$ & $\beta_{1,3}(I)$ & $\beta_{2,4}(I)$\\
3: & $\beta_{0,3}(I)$ & $\beta_{1,4}(I)$ & $\beta_{2,5}(I)$\\
4: & $\beta_{0,4}(I)$ & $\beta_{1,5}(I)$ & $\beta_{2,6}(I)$\\
5: & $\beta_{0,5}(I)$ & $\beta_{1,6}(I)$ & $\beta_{2,7}(I)$\\
\end{tabular}
& $\rightarrow$ & &
\begin{tabular}{cccc}
- & 0 & 1 & 2\\
total: & 4 & 4 & 1\\
1: & $\cdot$ & $\cdot$ & $\cdot$\\
2: & $\cdot$ & $\cdot$ & $\cdot$\\
3: & 4 & 1 & $\cdot$\\
4: & $\cdot$ & 3 & $\cdot$\\
5: & $\cdot$ & $\cdot$ & 1\\
\end{tabular}\\
\end{tabular}
$$
\end{example}
The Betti tables in the rest of this paper will be shifted to only show nonzero Betti numbers in the resolution of $I^d$. The Betti table in Example\autoref{E:Example 2.5} would therefore be expressed as follows.
$$
\begin{tabular}{cc}
$I$ & \\
&
\begin{tabular}{cccc}
- & 0 & 1 & 2\\
total: & 4 & 4 & 1\\
3: & 4 & 1 & $\cdot$\\
4: & $\cdot$ & 3 & $\cdot$\\
5: & $\cdot$ & $\cdot$ & 1\\
\end{tabular}\\
\end{tabular}
$$
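The row-and-column convention above can be mirrored in a few lines of Python. The helper below is a hypothetical illustration (not tied to Macaulay2): it lays out a dictionary of graded Betti numbers $(i,j) \mapsto \beta_{i,j}$ so that column $i$, row $r$ holds $\beta_{i,i+r}$, reproducing the table just shown:

```python
# Hypothetical helper: arrange graded Betti numbers {(i, j): beta_ij}
# Macaulay2-style, so that column i of row r holds beta_{i, i+r}.
def betti_table(betti):
    cols = sorted({i for i, _ in betti})
    lo = min(j - i for i, j in betti)
    hi = max(j - i for i, j in betti)
    return {r: [betti.get((i, i + r), 0) for i in cols]
            for r in range(lo, hi + 1)}

# Graded Betti numbers of I = (x1*x2^2, x1*x3^2, x2^3, x1^3) from the example above
B = {(0, 3): 4, (1, 4): 1, (1, 5): 3, (2, 7): 1}
T = betti_table(B)
assert T[3] == [4, 1, 0]  # row 3: beta_{0,3}, beta_{1,4}, beta_{2,5}
assert T[4] == [0, 3, 0]  # row 4: beta_{0,4}, beta_{1,5}, beta_{2,6}
assert T[5] == [0, 0, 1]  # row 5: beta_{0,5}, beta_{1,6}, beta_{2,7}
```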
\subsection{Betti table shape}
\begin{defn}
We say that the Betti tables of $I^x$ and $I^y$ share the same \textit{shape} if there exists an integer $r$ such that, for all $i$ and $j$,
$$
\beta_{i,j+rx}(I^x) \neq 0 \Longleftrightarrow \beta_{i,j+ry}(I^y) \neq 0.
$$
\end{defn}
Informally, this condition means that the Betti tables of $I^x$ and $I^y$ exhibit the same structure of nonzero entries.
Notice that the $r$ in this definition corresponds to the lowest degree of a generator of $I$: if $g$ is a generator of lowest degree $r$ in $I$, then $g^x$ and $g^y$ are generators of lowest degree in $I^x$ and $I^y$, respectively, and correspond to the first nonzero entries in the $0^{th}$ columns of the Betti tables of $I^x$ and $I^y$. These entries occur in rows $rx$ and $ry$, respectively. Therefore $r$, in the condition
$$
\beta_{i,j+rx}(I^x) \neq 0 \Longleftrightarrow \beta_{i,j+ry}(I^y) \neq 0
$$
can be considered as a shift of the rows of the Betti tables of $I^x$ and $I^y$, dependent upon the generator of lowest degree of $I$.
The following example will provide clarity to this concept of Betti table shape.
\begin{example} \label{E:Example 2.7}
Let $I=(x_{1}^2,x_{2}^2,x_{3}^2,x_{4}^2) \subseteq \mathbbm{k}[x_{1},x_{2},x_{3},x_{4}]$. The Betti tables $\beta (I)$ and $\beta (I^2)$ are given by:
$$
\begin{tabular}{ccccc}
$I$ & & & $I^2$ & \\
&
\begin{tabular}{ccccc}
- & 0 & 1 & 2 & 3\\
total: & 4 & 6 & 4 & 1\\
2: & 4 & $\cdot$ & $\cdot$ & $\cdot$\\
3: & $\cdot$ & 6 & $\cdot$ & $\cdot$\\
4: & $\cdot$ & $\cdot$ & 4 & $\cdot$\\
5: & $\cdot$ & $\cdot$ & $\cdot$ & 1\\
\end{tabular}
& & &
\begin{tabular}{ccccc}
- & 0 & 1 & 2 & 3\\
total: & 10 & 20 & 15 & 4\\
4: & 10 & $\cdot$ & $\cdot$ & $\cdot$\\
5: & $\cdot$ & 20 & $\cdot$ & $\cdot$\\
6: & $\cdot$ & $\cdot$ & 15 & $\cdot$\\
7: & $\cdot$ & $\cdot$ & $\cdot$ & 4\\
\end{tabular}\\
\end{tabular}
$$
We say that the Betti tables of $I^1$ and $I^2$ share the same shape since they both exhibit the same diagonal structure of nonzero entries. In the above Betti tables we see that
$$
\begin{tabular}{cccc}
$\beta_{0,2}(I) \neq 0 \Longleftrightarrow \beta_{0,4}(I^2) \neq 0$, & $\beta_{1,4}(I) \neq 0 \Longleftrightarrow \beta_{1,6}(I^2) \neq 0$, & $\beta_{2,6}(I) \neq 0 \Longleftrightarrow \beta_{2,8}(I^2) \neq 0$, & $\beta_{3,8}(I) \neq 0 \Longleftrightarrow \beta_{3,10}(I^2) \neq 0$.\\
\end{tabular}
$$
Thus, for all $i$ and $j$,
$$\beta_{i,j+2(1)}(I^1) \neq 0 \Longleftrightarrow \beta_{i,j+2(2)}(I^2) \neq 0.$$
\end{example}
Though the Betti tables in Example\autoref{E:Example 2.7} share the same shape, not all Betti tables of different powers of a given ideal do. There are many simple examples of ideals whose powers exhibit different Betti table shapes, several of which will be explored in the rest of this paper. We now develop a language to better handle different Betti table shapes.
\subsection{Betti table shape stabilization}
\begin{defn} \textnormal{(Definition 1.2 in \cite{GW})}
We say that an ideal $I=(f_{0},f_{1},...,f_{k}) \subseteq R$ is \textit{equigenerated in degree $r$} if $\deg(f_{i})=r$ for all $f_{i}$.
\end{defn}
The following result by Whieldon shows that, given an equigenerated ideal $I \subseteq R$ of degree $r$, the Betti tables of $I^d$ will \textit{stabilize} in shape for all $d$ greater than some index $D$. That is, though the Betti tables of $I^d$ for $d < D$ may differ in shape, all Betti tables of $I^d$ with $d \geq D$ will share the same shape as that of $I^D$.
\begin{thm} \label{T: Theorem 2.9}
\textnormal{(Theorem 4.1 in \cite{GW})}
Let $I=(f_{0},f_{1},...,f_{k}) \subseteq R$ be an equigenerated ideal of degree $r$. Then there exists a $D$ such that for all $d \geq D$, we have
$$\beta_{i,j+rd}(I^d) \neq 0 \Longleftrightarrow \beta_{i,j+rD}(I^D) \neq 0.$$
\end{thm}
\subsection{Stabilization Index}
The power $D$ of $I$ at which we see a stabilized Betti table shape in Theorem\autoref{T: Theorem 2.9} is referred to as the \textit{stabilization index} of $I$, also defined by Whieldon.
\begin{defn}
[Definition 5.1 in \cite{GW}]
Let $I=(f_{0},f_{1},...,f_{k}) \subseteq R$ be an equigenerated ideal of degree $r$. Let the \textit{stabilization index Stab$(I)$} of $I$ be the smallest $D$ such that for all $d \geq D$,
$$\beta_{i,j+rd}(I^d) \neq 0 \Longleftrightarrow \beta_{i,j+rD}(I^D) \neq 0.$$
\end{defn}
Below is an example to aid in the understanding of these concepts.
\begin{example} \label{E: Example 2.11}
Let $I=(x_{1}x_{2}x_{3}x_{4},x_{2}^4,x_{1}x_{4}^3) \subseteq \mathbbm{k}[x_{1},x_{2},x_{3},x_{4}]$. If we consider the Betti tables of the first few powers of $I$,
$$
\begin{tabular}{cccccccc}
$I^1$ & & & $I^2$ & & & $I^3$ & \\
&
\begin{tabular}{cccc}
- & 0 & 1 & 2\\
total: & 3 & 3 & 1\\
4: & 3 & $\cdot$ & $\cdot$\\
5: & $\cdot$ & 1 & $\cdot$\\
6: & $\cdot$ & 1 & $\cdot$\\
7: & $\cdot$ & 1 & 1\\
\end{tabular}
& & &
\begin{tabular}{cccc}
- & 0 & 1 & 2\\
total: & 6 & 9 & 4\\
8: & 6 & $\cdot$ & $\cdot$\\
9: & $\cdot$ & 3 & $\cdot$\\
10: & $\cdot$ & 4 & 2\\
11: & $\cdot$ & 2 & 2\\
\end{tabular}
& & &
\begin{tabular}{cccc}
- & 0 & 1 & 2\\
total: & 10 & 18 & 9\\
12: & 10 & $\cdot$ & $\cdot$\\
13: & $\cdot$ & 6 & $\cdot$\\
14: & $\cdot$ & 9 & 6\\
15: & $\cdot$ & 3 & 3\\
\end{tabular}\\
\end{tabular}
$$
we see two distinct Betti table shapes, first expressed in the Betti tables of $I^1$ and $I^2$. One can test higher powers of $I$ and see that it appears that $\beta_{i,j+4(2)}(I^2) \neq 0 \Longleftrightarrow \beta_{i,j+4(d)}(I^d) \neq 0$, for all $i$ and $j$, and $d \geq 2$. Given Theorem\autoref{T: Theorem 2.9}, it appears that all Betti tables of $I^d$, where $d \geq 2$, will have the same shape as $I^2$. Therefore, we predict that Stab$(I)=2$.
\end{example}
Though Whieldon's proof of Theorem\autoref{T: Theorem 2.9} ensures that Stab$(I)$ is finite, there is no known general formula for Stab$(I)$. Whieldon provides a conjectured formula for the stabilization index of edge ideals; see Conjecture 5.2 in \cite{GW}. Below we extend Whieldon's concept of Stab$(I)$ to distinguish the different powers $d$ of $I$ at which we see changes in the shape of the Betti table of $I^d$, up to Stab$(I)$.
\subsection{Stabilization Sequence}
\begin{defn}
Let $I \subseteq R$. Let the \textit{stabilization sequence StabSeq$(I)$} of $I$ be the sequence of powers for which we see new shapes of the Betti tables of $I$.
$$\text{StabSeq}(I)=\{d : I^d\ \text{does not share the same Betti table shape as}\ I^{d-1}, d \in \mathbbm{Z}^+\}.$$
\end{defn}
In Example\autoref{E: Example 2.11}, we saw two distinct Betti table shapes, first expressed at $I^1$ and $I^2$, with a predicted Stab$(I)=2$. Therefore, $1,2 \in \text{StabSeq}(I)$ and $3 \not\in \text{StabSeq}(I)$. Our predicted stabilization sequence is therefore
$$
\text{StabSeq}(I)=\{1,2\}.
$$
By definition all stabilization sequences will contain 1, since 1 is the first power at which we can observe a Betti table of $I^d$. All stabilization sequences in this paper were determined using the Macaulay2 \cite{M2} package \texttt{StabSeq.m2}, created by the author. The source code for this package is displayed in \hyperref[Appendix B]{Section~\ref*{Appendix B}}.
\section{Questions and Results} \label{Q: Questions and Results}
There are many questions we can ask about the stabilization index and sequence of a given ideal. The rest of this paper focuses on exploring these questions through an example-based investigation.
\subsection{Question 1. \textnormal{\textit{If $\beta (I^d)$ and $\beta (I^{d+1})$ have the same shape, is Stab$(I)=d$?}}}\
One might hope that the shape of the Betti tables of $\{I^d\}$ will stabilize when consecutive powers share the same shape. That is, Stab$(I)=d$ where $d$ is the smallest power such that, for all $i$ and $j$, and some $r$,
$$
\beta_{i,j+rd}(I^d) \neq 0 \Longleftrightarrow \beta_{i,j+r(d+1)}(I^{d+1}) \neq 0.
$$
This would make determining the stabilization index very straightforward, for we would only have to check the Betti table shapes of powers of $I$ until consecutive powers share the same shape. This appears to be the case for the ideals in Example\autoref{E:Example 2.7} and Example\autoref{E: Example 2.11}. It also follows from Theorem 2.1 of \cite{EG} that the Betti tables of all complete intersection monomial equigenerated ideals of the form
$$
I=(x_{1}^s,x_{2}^s,\dots,x_{n}^s) \subseteq R
$$
will stabilize at $I^1$ and have the following shape, for all $d \in \mathbbm{Z^+}:$
$$
\begin{tabular}{cc}
$I^d$ & \\
&
\begin{tabular}{ccccccc}
- & 0 & 1 & $\cdots$ & $n-2$ & $n-1$\\
total: & $\ast$ & $\ast$ & $\cdots$ & $\ast$& $\ast$\\ \cline{2-2}
$ds$: & \multicolumn{1}{|c|}{$\ast$} & $\cdot$ & $\cdots$ & $\cdot$ & $\cdot$\\ \cline{2-3}
$(d+1)s-1$: & $\cdot$ & \multicolumn{1}{|c|}{$\ast$} & $\cdots$ & $\cdot$ & $\cdot$\\ \cline{3-3}
$\vdots$ & $\cdot$ & $\cdot$ & $\ddots$ & $\cdot$ & $\cdot$\\ \cline{5-5}
$(d+(n-2))s-(n-2)$: & $\cdot$ & $\cdot$ & $\cdots$ & \multicolumn{1}{|c|}{$\ast$} & $\cdot$\\ \cline{5-6}
$(d+(n-1))s-(n-1)$: & $\cdot$ & $\cdot$ & $\cdots$ & $\cdot$ & \multicolumn{1}{|c|}{$\ast$}\\ \cline{6-6}
\end{tabular}\\
\end{tabular}
$$
Therefore the ideal in Example\autoref{E:Example 2.7}, $I=(x_{1}^2,x_{2}^2,x_{3}^2,x_{4}^2) \subseteq \mathbbm{k}[x_{1},x_{2},x_{3},x_{4}]$, will have the following Betti table shape, for all $d \in \mathbbm{Z^+}$,
$$
\begin{tabular}{cc}
$I^d$ & \\
&
\begin{tabular}{cccccc}
- & 0 & 1 & 2 & 3\\
total: & $\ast$ & $\ast$ & $\ast$ & $\ast$\\ \cline{2-2}
$2d$: & \multicolumn{1}{|c|}{$\ast$} & $\cdot$ & $\cdot$ & $\cdot$\\ \cline{2-3}
$2d+1$: & $\cdot$ & \multicolumn{1}{|c|}{$\ast$} & $\cdot$ & $\cdot$\\ \cline{3-4}
$2d+2$: & $\cdot$ & $\cdot$ & \multicolumn{1}{|c|}{$\ast$} & $\cdot$\\ \cline{4-5}
$2d+3$: & $\cdot$ & $\cdot$ & $\cdot$ & \multicolumn{1}{|c|}{$\ast$}\\ \cline{5-5}
\end{tabular}\\
\end{tabular}
$$
and stabilization index and sequence
$$
\begin{tabular}{ccc}
Stab$(I)=1$, & StabSeq$(I)=\{1\}.$\\
\end{tabular}
$$
However, the following example shows that we are not guaranteed Betti table shape stabilization when consecutive powers share the same shape.
\begin{example} \label{E:Example 3.1}
Let $I=(x_{1}^3x_{2},x_{2}^4,x_{1}^2x_{3}^2,x_{2}^3x_{3}) \subseteq \mathbbm{k}[x_{1},x_{2},x_{3}]$. Below are the Betti tables of the first few powers of $I$.
$$
\begin{tabular}{cccccccc}
$I^1$ & & & $I^2$ & & & $I^3$ & \\
&
\begin{tabular}{cccc}
- & 0 & 1 & 2 \\
total: & 4 & 5 & 2 \\
4: & 4 & 1 & $\cdot$ \\
5: & $\cdot$ & 1 & $\cdot$ \\
6: & $\cdot$ & 3 & 2 \\
\end{tabular}
& & &
\begin{tabular}{cccc}
- & 0 & 1 & 2 \\
total: & 10 & 15 & 6 \\
8: & 10 & 5 & $\cdot$ \\
9: & $\cdot$ & 3 & $\cdot$ \\
10: & $\cdot$ & 7 & 6 \\
\end{tabular}
& & &
\begin{tabular}{cccc}
- & 0 & 1 & 2 \\
total: & 20 & 32 & 13 \\
12: & 20 & 14 & 1 \\
13: & $\cdot$ & 7 & 1 \\
14: & $\cdot$ & 11 & 11 \\
\end{tabular}\\
\end{tabular}
$$
In the above tables we see that $I^1$ and $I^2$ share the same shape, that is, for all $i$ and $j$,
$$
\beta_{i,j+4(1)}(I^1) \neq 0 \Longleftrightarrow \beta_{i,j+4(2)}(I^2) \neq 0.
$$
However, Stab$(I) \neq 1$ since the Betti table of $I^3$ differs in shape from that of $I^1$. When we check the shapes of the Betti tables of $I^d$ for $d \geq 3$, we see that, for all $i$ and $j$,
$$
\beta_{i,j+4(3)}(I^3) \neq 0 \Longleftrightarrow \beta_{i,j+4(d)}(I^d) \neq 0.
$$
Therefore, we predict the stabilized shape of the Betti tables of $I^d$ for $d \geq 3$ to be
$$
\begin{tabular}{cc}
$I^d$ & \\
&
\begin{tabular}{cccc}
- & 0 & 1 & 2 \\
total: & $\ast$ & $\ast$ & $\ast$ \\ \cline{2-4}
$4d$: & \multicolumn{1}{|c}{$\ast$} & $\ast$ & \multicolumn{1}{c|}{$\ast$} \\ \cline{2-2}
$4d+1$: & $\cdot$ & \multicolumn{1}{|c}{$\ast$} & \multicolumn{1}{c|}{$\ast$} \\
$4d+2$: & $\cdot$ & \multicolumn{1}{|c}{$\ast$} & \multicolumn{1}{c|}{$\ast$} \\ \cline{3-4}
\end{tabular}\\
\end{tabular}
$$
with the following stabilization index and sequence
$$
\begin{tabular}{ccc}
Stab$(I)=3$ & & StabSeq$(I)=\{1,3\}.$\\
\end{tabular}
$$
\end{example}
From this example, we know that we are not guaranteed to have found the stabilized Betti table shape of a given ideal when the Betti tables of two consecutive powers share the same shape. In the many cases where we do not have the tools to generalize the Betti tables of a given ideal, we must instead form hypotheses about the stabilization index.
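The notion of two Betti tables ``sharing the same shape'' used above, namely having zero and nonzero entries in exactly the same positions, is simple to make concrete. The following Python sketch is purely illustrative (it is not part of the Macaulay2 package given in the appendices); the entry arrays are copied from the tables of the example above, with $\cdot$ read as $0$.

```python
def same_shape(b1, b2):
    """True when two Betti tables (2D lists: rows by shifted degree,
    columns by homological degree) have the same zero/nonzero pattern."""
    if len(b1) != len(b2) or any(len(r) != len(s) for r, s in zip(b1, b2)):
        return False
    return all((x != 0) == (y != 0)
               for r, s in zip(b1, b2) for x, y in zip(r, s))

# Entries of the Betti tables of I, I^2, I^3 above (dots read as 0).
betti_1 = [[4, 1, 0], [0, 1, 0], [0, 3, 2]]
betti_2 = [[10, 5, 0], [0, 3, 0], [0, 7, 6]]
betti_3 = [[20, 14, 1], [0, 7, 1], [0, 11, 11]]

print(same_shape(betti_1, betti_2))  # True:  I and I^2 share a shape
print(same_shape(betti_2, betti_3))  # False: I^3 has a new shape
```

The comparison ignores the magnitudes of the graded Betti numbers entirely; only their supports matter, exactly as in the definition of shape stabilization.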
\subsection{Question 2. \textnormal{\textit{Does there exist some $x$ such that if $\beta_{i,j+r(D)}(I^D) \neq 0 \Longleftrightarrow \beta_{i,j+r(d)}(I^{d}) \neq 0,$ for all $d \in \{(D+1),\dots,(D+x)\},$ then Stab$(I) \leq D$?}}}\
Can we be confident that the shapes of the Betti tables of $\{I^d\}$ stabilize when a large enough number of consecutive Betti tables in this collection share the same shape? Determining a bound on this $x$ would allow for far more straightforward computational investigation into stabilization indices and sequences, as there would be a recognizable point marking the stabilization. It is likely that this $x$ is dependent upon characteristics of the ideal in question, though we cannot say here which specific characteristics these may be. From the following example we know that such an $x$ must be at least 7.
\begin{example} \label{E: Example 3.4}
Let $I=(x_{1}^3x_{2}^3x_{3}^3,x_{2}^6x_{3}^3,x_{1}^4x_{3}^5,x_{1}^8x_{2}) \subseteq \mathbbm{k}[x_{1},x_{2},x_{3}]$. Below are the Betti tables of $I^d$ for which $I^d$ does not share the same Betti table shape as $I^{d-1}$.
$$
\begin{tabular}{cccccccc}
$I^1$ & & & $I^2$ & & & $I^3$ &\\
&
\begin{tabular}{cccc}
- & 0 & 1 & 2 \\
total: & 4 & 4 & 1 \\
9: & 4 & $\cdot$ & $\cdot$ \\
10: & $\cdot$ & $\cdot$ & $\cdot$ \\
11: & $\cdot$ & 2 & $\cdot$ \\
12: & $\cdot$ & $\cdot$ & $\cdot$ \\
13: & $\cdot$ & 2 & $\cdot$ \\
14: & $\cdot$ & $\cdot$ & 1 \\
\end{tabular}
& & &
\begin{tabular}{cccc}
- & 0 & 1 & 2 \\
total: & 10 & 15 & 6 \\
18: & 10 & $\cdot$ & $\cdot$ \\
19: & $\cdot$ & 1 & $\cdot$ \\
20: & $\cdot$ & 8 & $\cdot$ \\
21: & $\cdot$ & $\cdot$ & 1 \\
22: & $\cdot$ & 6 & 2 \\
23: & $\cdot$ & $\cdot$ & 3 \\
\end{tabular}
& & &
\begin{tabular}{cccc}
- & 0 & 1 & 2 \\
total: & 20 & 35 & 16 \\
27: & 20 & $\cdot$ & $\cdot$ \\
28: & $\cdot$ & 4 & $\cdot$ \\
29: & $\cdot$ & 21 & 1 \\
30: & $\cdot$ & $\cdot$ & 5 \\
31: & $\cdot$ & 10 & 5 \\
32: & $\cdot$ & $\cdot$ & 5 \\
\end{tabular}\\
\end{tabular}
$$
$$
\begin{tabular}{cccccccc}
$I^4$ & & & $I^6$ & & & $I^{13}$ &\\
&
\begin{tabular}{cccc}
- & 0 & 1 & 2 \\
total: & 35 & 66 & 32 \\
36: & 35 & 1 & $\cdot$ \\
37: & $\cdot$ & 10 & $\cdot$ \\
38: & $\cdot$ & 41 & 4 \\
39: & $\cdot$ & $\cdot$ & 13 \\
40: & $\cdot$ & 14 & 8 \\
41: & $\cdot$ & $\cdot$ & 7 \\
\end{tabular}
& & &
\begin{tabular}{cccc}
- & 0 & 1 & 2 \\
total: & 84 & 166 & 83 \\
54: & 84 & 10 & $\cdot$ \\
55: & $\cdot$ & 39 & 1 \\
56: & $\cdot$ & 95 & 20 \\
57: & $\cdot$ & $\cdot$ & 37 \\
58: & $\cdot$ & 22 & 14 \\
59: & $\cdot$ & $\cdot$ & 11 \\
\end{tabular}
& & &
\begin{tabular}{cccc}
- & 0 & 1 & 2 \\
total: & 560 & 1101 & 542 \\
117: & 560 & 255 & 1 \\
118: & $\cdot$ & 430 & 130 \\
119: & $\cdot$ & 336 & 216 \\
120: & $\cdot$ & $\cdot$ & 135 \\
121: & $\cdot$ & 50 & 35 \\
122: & $\cdot$ & $\cdot$ & 25 \\
\end{tabular}\\
\end{tabular}
$$
We have checked the Betti table shapes of $I^d$ for $d \leq 100$ and found the same shape as that of $I^{13}$. Thus we hypothesize the stabilization index and sequence of $I$ to be
$$
\begin{tabular}{ccc}
Stab$(I)=13$ & & StabSeq$(I)=\{1,2,3,4,6,13\}.$\\
\end{tabular}
$$
\end{example}
From this example, which is derived from a relatively simple homogeneous ideal, we see a more complicated stabilization sequence, one containing 6 elements, and a relatively large stabilization index occurring at $I^d$ where $d>10$. Another interesting feature of this example is that the Betti table shape of $I^d$ appeared to have stabilized at $I^6$, until we checked $I^{6+x}$ where $x=7$. Though checking the Betti table shape a further 7 powers after each new shape is not computationally intensive, this example shows how little we know about the bound on $x$. If such a simple ideal can produce a relatively complicated stabilization index and sequence, with a bound $x\geq 7$, what can we expect from more complicated ideals? We expect that ideals with more variables and generators, of higher degree, may produce these more complicated stabilization indices and sequences.
\subsection{Question 3. \textnormal{\textit{In what ways can we characterize stabilization sequences?}}} \label{Q: Question 3}\
We acknowledge that this question is quite general and could guide investigations of families of ideals $\{I_{i}\}$ whose Betti table shapes do not necessarily stabilize. We focus on exploring this question for collections of ideals $\{I^d\}$ where $I$ is an equigenerated ideal of fixed degree or with a fixed number of generators. In \autoref{E: Example 3.4}, we saw an ideal with a stabilization sequence containing 6 elements, whereas all the previous examples contained at most 2 elements. Given these examples, we may wonder about the cardinality of other stabilization sequences. Are there ideals whose stabilization sequences are of arbitrarily large cardinality? We may also ask whether some stabilization sequences follow a polynomial pattern. The following example presents a surprising stabilization sequence.
\begin{example} \label{E: Example 3.6}
Let $I=(x_{1}^8x_{2}^8x_{3}^8,x_{1}^4x_{2}^{16}x_{3}^4,x_{1}x_{2}^{23},x_{2}^{12}x_{3}^{12}) \subseteq \mathbbm{k}[x_{1},x_{2},x_{3}]$. One can test the shapes of the Betti tables of $I^d$ for $1 \leq d \leq 20$ and find the following stabilization sequence:
$$
\text{StabSeq}(I)=\{1,2,3,5,7,9,15,17,19\}.
$$
\end{example}
We have checked the Betti table shapes of $I^d$ for $d \leq 100$ and found the same shape as that of $I^{19}$. Though we do not see a stabilization sequence of arbitrarily large cardinality for this ideal, we do see one containing far more elements than in previous examples, $|\text{StabSeq}(I)|=9$. In this example we also see rather disconcerting behaviour of the elements in the stabilization sequence of $I$: the elements $3,5,7,9$ grow in increments of 2, yet the element that follows is 15, an increment of 6. This behaviour is problematic because it invites us to assume that elements will continue to appear in increments of 2, only to then produce an element incremented by 6. This example provides further evidence that we should be cautious when dealing with stabilization sequences of ideals; in the many cases where we do not currently have the tools to determine them definitively, there is no guarantee that any structure or pattern they appear to exhibit will continue.
\subsection{Question 4. \textnormal{\textit{For equigenerated ideals, of fixed degree or a fixed number of generators, can the stabilization index be arbitrarily large?}}} \label{Q: Question 4}\
In \autoref{Q: Question 3} we ask whether there exist stabilization sequences of arbitrarily large cardinality. A natural extension of this question is whether there exist equigenerated ideals, of fixed degree or with a fixed number of generators, whose stabilization index is arbitrarily large. In \autoref{E: Example 3.4} and \autoref{E: Example 3.6}, we saw stabilization indices of 13 and 19 respectively. One might expect, given the relative simplicity of the ideals in these examples, that more complicated ideals will stabilize at higher powers, perhaps even at $I^d$ where $d \gg 0$. We predict that the existence of such indices depends upon the number and degree of the variables and generators of the ideal.
\subsection{Linearly connected family of ideals}
Until now we have been working with collections of ideals $\{I^d\}$ as we vary $d$. These collections are examples of \textit{graded families} of ideals.
\begin{defn}
A collection of ideals $\{I_{i}\} \subseteq R$ is a \textit{graded family} if, for all $i$ and $j$, $I_{i}I_{j} \subset I_{i+j}$.
\end{defn}
It is understood that there is inherent structure between ideals in a graded family. We now define a \textit{linearly connected family} of ideals, which in general is not a graded family. Recall that we may write a monomial in $R$ as $x^{\alpha}=x_{1}^{{\alpha}_{1}}x_{2}^{{\alpha}_{2}}\cdots x_{n}^{{\alpha}_{n}}$.
\begin{defn}
A collection of monomial ideals $\{I_{j}\} \subseteq R$ is a \textit{linearly connected family} if there exist linear functions $\alpha_{k}^i(j)$ such that
$$
I_{j}=(x^{\alpha^1(j)},x^{\alpha^2(j)},...,x^{\alpha^l(j) })\ \text{where}\ \alpha^i(j)=(\alpha_{1}^i(j),\alpha_{2}^i(j),...,\alpha_{n}^i(j))
$$
$\text{for}\ 1 \leq i \leq l\ \text{and}\ j \in \mathbbm{Z}^+.$
\end{defn}
\begin{example}
The family of homogeneous ideals $\{I_{b}\}$, where $I_{b}=(x_{1}^bx_{2}^{3b},x_{2}^{4b},x_{1}^{2b}x_{3}^{2b}) \subseteq \mathbbm{k}[x_{1},x_{2},x_{3}]$, is a \textit{linearly connected family}, as the exponents of the variables in the generators of $I_{b}$ are all linear functions of $b$. The $\alpha^i(b)$ are listed below:
$$
\begin{tabular}{ccc}
$\alpha^1(b)=(b,3b,0)$, & $\alpha^2(b)=(0,4b,0)$, & $\alpha^3(b)=(2b,0,2b)$.\\
\end{tabular}
$$
\end{example}
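Concretely, the exponent vectors of such a family can be generated programmatically. The helper below is hypothetical (not part of the paper's code) and merely makes the linearity in $b$ explicit for the family of this example.

```python
def alpha(b):
    """Exponent vectors alpha^i(b) of the generators of
    I_b = (x1^b x2^(3b), x2^(4b), x1^(2b) x3^(2b))."""
    return [(b, 3 * b, 0), (0, 4 * b, 0), (2 * b, 0, 2 * b)]

# Each coordinate scales linearly with b:
print(alpha(1))  # [(1, 3, 0), (0, 4, 0), (2, 0, 2)]
print(alpha(3))  # [(3, 9, 0), (0, 12, 0), (6, 0, 6)]
```

Since every coordinate is a linear function of $b$ with zero constant term here, $\alpha^i(3b) = 3\,\alpha^i(b)$ componentwise, which is exactly the linear connectedness of the family.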
We will now discuss the linearly connected family of homogeneous ideals $\{I_{n}\}$, where
$$I_{n}=(a^{2n}b^{2n}c^{2n},b^{4n}c^{2n},a^{3n}c^{3n},a^{6n-1}b)\subseteq \mathbbm{k}[a,b,c],$$
as we vary $n$ and the powers of $I_{n}$. For $n=1$, $I_{1}=(a^2b^2c^2,b^4c^2,a^3c^3,a^5b)$, we observe three distinct Betti table shapes. Below are the Betti tables of $I_{1}^d$ for which $I_{1}^d$ does not share the same Betti table shape as $I_{1}^{d-1}$.
$$
\begin{tabular}{cccccccc}
$I_{1}^1$ & & & $I_{1}^2$ & & & $I_{1}^6$\\
&
\begin{tabular}{cccc}
- & 1 & 2 & 3 \\
total: & 4 & 4 & 1 \\
4: & 4 & $\cdot$ & $\cdot$ \\
5: & $\cdot$ & 2 & $\cdot$ \\
6: & $\cdot$ & 2 & 1 \\
\end{tabular}
& & &
\begin{tabular}{cccc}
- & 1 & 2 & 3 \\
total: & 10 & 15 & 1 \\
11: & 10 & 1 & $\cdot$ \\
12: & $\cdot$ & 8 & 1 \\
13: & $\cdot$ & 6 & 5 \\
\end{tabular}
& & &
\begin{tabular}{cccc}
- & 1 & 2 & 3 \\
total: & 84 & 162 & 79 \\
35: & 84 & 49 & 1 \\
36: & $\cdot$ & 91 & 53 \\
37: & $\cdot$ & 22 & 25 \\
\end{tabular}\\
\end{tabular}
$$
The above result is not surprising, especially when considering the stabilization indices and sequences of the ideals in \autoref{E: Example 3.4} and \autoref{E: Example 3.6}. However, when we increase the value of $n$ and test the resulting stabilization sequences of $I_{n}$, we see very surprising results. Below we list the stabilization sequences of $I_{n}$ for $ 1 \leq n \leq 8$, determined by testing the shapes of the Betti tables of $I_{n}^d$ for $d\leq100$.
$$
\begin{tabular}{l}
$\text{StabSeq}(I_{1})= \{1,2,6\}$\\
\\
$\text{StabSeq}(I_{2})= \{1,2,3,5,6,11\}$\\
\\
$\text{StabSeq}(I_{3})=\{1,2,3,5,6,11,17,23\}$\\
\\
$\text{StabSeq}(I_{4})=\{1,2,3,5,6,11,17,23,29,35\}$\\
\\
$\text{StabSeq}(I_{5})=\{1,2,3,5,6,11,17,23,29,35,41,47\}$\\
\\
$\text{StabSeq}(I_{6})=\{1,2,3,5,6,11,17,23,29,35,41,47,53,59\}$\\
\\
$\text{StabSeq}(I_{7})=\{1,2,3,5,6,11,17,23,29,35,41,47,53,59,65,71\}$\\
\\
$\text{StabSeq}(I_{8})=\{1,2,3,5,6,11,17,23,29,35,41,47,53,59,65,71,77,83\}$\\
\end{tabular}
$$
We had no reason to expect that the stabilization indices or sequences of a linearly connected family of ideals should be related. Yet we see a very clear pattern and apparent structure arising in this particular linearly connected family of homogeneous ideals. We see that the stabilization index of $I_{n}$, for $n \geq 2$, appears to be given by the linear function
$$
\text{Stab}(I_{n})=12n-13
$$
and, for $n \geq 3$, the stabilization sequence of $I_{n}$ appears to be given by
$$
\text{StabSeq}(I_{n})=\text{StabSeq}(I_{n-1}) \cup \{12n-13,12n-19\}.
$$
These are entirely unexpected relationships, which raise more questions than answers. In \autoref{Q: Question 3} we ask in what ways we can characterize stabilization sequences. From this connected collection we see that there are in fact stabilization sequences of potentially large cardinality, for the cardinality of $\text{StabSeq}(I_{n})$, for $n\geq2$, appears to be given by the linear function
$$
|\text{StabSeq}(I_{n})|=2n+2.
$$
We also see that the stabilization sequences appear to be somewhat linear; that is, the stabilization sequence of $I_{n}$ not only appears to be the union of the stabilization sequence of $I_{n-1}$ with $\{12n-19,12n-13\}$, but, for $n\geq2$, is also expressible as
$$
\text{StabSeq}(I_{n})=\{1,2,3,5,6\} \cup \{11+6m : m \in \mathbbm{Z},\ 0\leq m\leq 2n-4\}.
$$
(The upper bound $2n-4$ is forced by the data above: the largest element, $11+6(2n-4)=12n-13$, matches the apparent stabilization index, and the resulting cardinality, $5+(2n-3)=2n+2$, matches the count above.) In \autoref{Q: Question 4} we ask whether there exists a collection of equigenerated ideals, of fixed degree or with a fixed number of generators, whose Betti table shapes stabilize at an arbitrarily large power. If the structure in this connected collection continues, it appears that as $n$ becomes arbitrarily large, so does the stabilization index of $I_{n}$, being given by
$$
\text{Stab}(I_{n})=12n-13.
$$
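These conjectured relationships are easy to check mechanically against the sequences listed above. The following Python sketch is purely illustrative; the arithmetic part of each sequence consists of the terms $11+6m$, with the index bound $0\le m\le 2n-4$ read off from the data (its largest element $11+6(2n-4)=12n-13$ matches the apparent stabilization index).

```python
observed = {
    2: [1, 2, 3, 5, 6, 11],
    3: [1, 2, 3, 5, 6, 11, 17, 23],
    4: [1, 2, 3, 5, 6, 11, 17, 23, 29, 35],
    5: [1, 2, 3, 5, 6, 11, 17, 23, 29, 35, 41, 47],
    6: [1, 2, 3, 5, 6, 11, 17, 23, 29, 35, 41, 47, 53, 59],
    7: [1, 2, 3, 5, 6, 11, 17, 23, 29, 35, 41, 47, 53, 59, 65, 71],
    8: [1, 2, 3, 5, 6, 11, 17, 23, 29, 35, 41, 47, 53, 59, 65, 71, 77, 83],
}

def conjectured(n):
    """Closed form read off from the data: {1,2,3,5,6} plus 11+6m, 0 <= m <= 2n-4."""
    return sorted({1, 2, 3, 5, 6} | {11 + 6 * m for m in range(2 * n - 3)})

for n, seq in observed.items():
    assert conjectured(n) == seq       # closed form matches the listed data
    assert len(seq) == 2 * n + 2       # cardinality |StabSeq(I_n)| = 2n + 2
    assert max(seq) == 12 * n - 13     # stabilization index Stab(I_n) = 12n - 13
print("all checks pass for n = 2, ..., 8")
```

Of course, this only confirms internal consistency of the observed data with the conjectured formulas; it says nothing about powers beyond those computed.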
\section{Future Research}
The unexpected structure of the stabilization indices and sequences found in the linearly connected family of homogeneous ideals above presents a wealth of questions. Below we list some questions which we believe to be fruitful avenues for future research.
\begin{question}
Do all linearly connected families of homogeneous ideals exhibit structure in their stabilization indices and sequences? If so, in what ways can we characterize this structure?
\end{question}
\begin{question}
What sorts of functions govern the stabilization indices and sequences of other linearly connected families of homogeneous ideals? Are these functions always linear?
\end{question}
\begin{question}
Is there a formula for Stab$(I)$ and StabSeq$(I)$ for certain classes of ideals? If so, what characteristics of the ideals does it depend upon?
\end{question}
\section{Appendix A.} \label{Appendix A}
Macaulay2 \cite{M2} commands used in this paper.
$$
\begin{tabular}{ll} \cline{1-2}
\multicolumn{1}{|c|}{Operation} & \multicolumn{1}{|c|}{M2 Command}\\ \cline{1-2}
\multicolumn{1}{|l|}{Defining Field:} & \multicolumn{1}{|l|}{
\begin{minipage}{2.9in}
\verbatiminput{SFM2Fieldgeneral.txt}
\end{minipage}}
\\
\multicolumn{1}{|l|}{Defining ideal:} & \multicolumn{1}{|l|}{
\begin{minipage}{2.9in}
\verbatiminput{SFM2Idealgeneral.txt}
\end{minipage}}
\\
\multicolumn{1}{|l|}{Define ideal as a module:} & \multicolumn{1}{|l|}{
\begin{minipage}{2.9in}
\verbatiminput{SFM2Module.txt}
\end{minipage}}
\\
\multicolumn{1}{|l|}{Define resolution of ideal:} & \multicolumn{1}{|l|}{
\begin{minipage}{2.9in}
\verbatiminput{SFM2Res.txt}
\end{minipage}}
\\
\multicolumn{1}{|l|}{View resolution of ideal:} & \multicolumn{1}{|l|}{
\begin{minipage}{2.9in}
\verbatiminput{SFM2Cdd.txt}
\end{minipage}}
\\
\multicolumn{1}{|l|}{View Betti table of ideal:} & \multicolumn{1}{|l|}{
\begin{minipage}{2.9in}
\verbatiminput{SFM2Bettires.txt}
\end{minipage}}
\\
\multicolumn{1}{|l|}{Load stabilization sequence package from} & \multicolumn{1}{|l|}{
\texttt{load "StabSeq.m2"}}
\\
\multicolumn{1}{|l|}{Macaulay2 code/ directory:} & \multicolumn{1}{|l|}{}
\\
\multicolumn{1}{|l|}{Determine stabilization sequence of ideal:} & \multicolumn{1}{|l|}{
\texttt{StabSeq(I, input max power to check,}}\\
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l|}{
\texttt{IncludeBettis => true)}}
\\ \cline{1-2}
\end{tabular}
$$
Below we provide the Macaulay2 \cite{M2} session used in finding the minimal graded free resolution and Betti table of the ideal in \autoref{E: Example 2.1} and \autoref{E:Example 2.3}, with commentary. We begin by defining \texttt{R} as the polynomial ring that we will be working in. We then construct the ideal \texttt{I} in \texttt{R} and define \texttt{M} as the module of \texttt{I}.\\
\verbatiminput{M2-examples/SFM2code-1.txt}
$ $
Given the module \texttt{M} of \texttt{I}, we define \texttt{C} as the minimal graded free resolution of \texttt{M}. The command \texttt{C.dd} allows us to view the minimal graded free resolution of \texttt{M} in its entirety.\\
\break
\verbatiminput{M2-examples/SFM2code-2.txt}
$ $
Lastly, we view the Betti table of the module of \texttt{I}, \texttt{M}.\\
\verbatiminput{M2-examples/SFM2code-3.txt}
$ $
Below we provide the Macaulay2 \cite{M2} session used in finding the stabilization sequence of the ideal in \autoref{E:Example 3.1} and \autoref{E: Example 3.6}, with commentary. As in the previous session, we begin by defining \texttt{R} as the polynomial ring and then construct the ideal \texttt{I} in \texttt{R}.\\
\verbatiminput{M2-examples/SFM2code-SS1.txt}
$ $
We load the stabilization sequence package, provided in \hyperref[Appendix B]{Section~\ref*{Appendix B}}, from the /Library/Application Support/Macaulay2/code/ directory.\\
\verbatiminput{M2-examples/SFM2code-SS2.txt}
$ $
For full documentation of the stabilization package, see \hyperref[Appendix B]{Section~\ref*{Appendix B}}. The stabilization sequence package computes the stabilization sequence of the input ideal (\texttt{I} in the following example) up to the indicated maximum power (\texttt{25} in the following example). By including the optional argument \texttt{IncludeBettis => true}, the \texttt{StabSeq} function also outputs the Betti tables at which a new Betti table shape appears. Below are the stabilization sequence and Betti tables found in \autoref{E:Example 3.1}.\\
\verbatiminput{M2-examples/SFM2code-SS3.txt}
$ $
Without the optional argument \texttt{IncludeBettis => true}, the \texttt{StabSeq} function simply outputs the stabilization sequence of the input ideal (\texttt{J} in the following example) up to the indicated maximum power (\texttt{25} in the following example). Below is the stabilization sequence found in \autoref{E: Example 3.6}.\\
\verbatiminput{M2-examples/SFM2code-SS4.txt}
\section{Appendix B.} \label{Appendix B}
Below we provide the source code for \texttt{StabSeq.m2}. To use this algorithm, save it as a .m2 file in /Library/Application Support/Macaulay2/code/. Note that the lines beginning with \texttt{--} in the source code below correspond to comments intended to aid in the understanding of this package.\\
\begin{lstlisting}
StabSeq = method(Options => {IncludeBettis => false})
needsPackage "BoijSoederberg"
-- By Aaron Slobodin, September 2017, open source.
-- Algorithm returns the stabilization sequence (up to the input max power) of the input ideal.
-- The algorithm includes an optional argument to print the Betti tables of all powers of the input ideal that appear in the determined stabilization sequence.
-- Note that the stabilization sequence of a given ideal, I, is {1} together with {i+1 : I^(i+1) does not share the same Betti table shape as I^i, i in ZZ+}.
StabSeq(Ideal,ZZ) := o -> (I, PowerCap) -> (
StabSeqList := {1};
CurrentBetti := betti res minimalPresentation module I;
BettiLister := {CurrentBetti};
CurrentMatrix := matrix CurrentBetti;
for i from 1 to (PowerCap-1) do (
NewBetti := betti res minimalPresentation module I^(i+1);
NewMatrix := matrix NewBetti;
-- Check to see if dimensions of the Betti tables of I^i and I^(i+1) are the same.
if (numgens source CurrentMatrix - 1) != (numgens source NewMatrix - 1) or (numgens target CurrentMatrix - 1) != (numgens target NewMatrix - 1) then (
-- If the dimensions are not equal, the tables have different Betti table shapes, so
-- 1. the stabilization sequence (StabSeqList) gains the element i+1, and
-- 2. the list of Betti tables (BettiLister) gains the element NewBetti.
StabSeqList = append(StabSeqList, i+1);
BettiLister = append(BettiLister, NewBetti);
)
-- Check to see if Betti table of I^i and I^(i+1) exhibit the same shape.
else (
for j from 0 to (numgens source CurrentMatrix - 1) do (
for k from 0 to (numgens target CurrentMatrix - 1) do (
if CurrentMatrix_j_k != 0 and NewMatrix_j_k == 0 or CurrentMatrix_j_k == 0 and NewMatrix_j_k != 0 then (
-- If the Betti tables of I^i and I^(i+1) differ in shape then
-- 1. the stabilization sequence (StabSeqList) gains the element i+1, and
-- 2. the list of Betti tables (BettiLister) gains the element NewBetti.
-- (Duplicate appends from multiple differing entries are removed by unique below.)
StabSeqList = append(StabSeqList, i+1);
BettiLister = append(BettiLister, NewBetti);
);
);
);
);
-- Either the Betti table of I^i and I^(i+1) shared the same shape or differed in shape. Regardless, the new reference Betti table should therefore be the Betti table of I^(i+1).
CurrentMatrix = NewMatrix;
);
-- Optional command to print the Betti tables of elements in the determined stabilization sequence.
if o.IncludeBettis then (<< unique BettiLister);
-- Return determined stabilization sequence.
return unique StabSeqList;
);
\end{lstlisting}
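The shape-tracking loop at the heart of \texttt{StabSeq.m2} can be mimicked outside Macaulay2 given precomputed Betti tables. The following Python sketch is an analogue for illustration, not a port of the package; it applies the same zero/nonzero comparison to the tables of \autoref{E:Example 3.1} and recovers $\text{StabSeq}(I)=\{1,3\}$.

```python
def stab_seq(betti_tables):
    """betti_tables[d-1] is the Betti table of I^d as a 2D list of integers.
    Returns the powers d at which a new zero/nonzero pattern (shape) appears,
    mirroring the logic of StabSeq.m2."""
    def shape(t):
        return tuple(tuple(e != 0 for e in row) for row in t)
    seq, current = [1], shape(betti_tables[0])
    for d, table in enumerate(betti_tables[1:], start=2):
        new = shape(table)
        if new != current:      # dimensions or supports differ: new shape
            seq.append(d)
        current = new
    return seq

# Tables of I, I^2, I^3 from Example 3.1; I^2 repeats the shape of I, I^3 does not.
tables = [
    [[4, 1, 0], [0, 1, 0], [0, 3, 2]],
    [[10, 5, 0], [0, 3, 0], [0, 7, 6]],
    [[20, 14, 1], [0, 7, 1], [0, 11, 11]],
]
print(stab_seq(tables))  # [1, 3], matching StabSeq(I) = {1, 3}
```

Unlike the Macaulay2 package, this sketch does no resolution computations itself; it assumes the Betti tables have already been computed and normalized to rectangular arrays of integers.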
\section{Acknowledgments}
I would like to thank the Quest Summer Fellows Committee for providing me the funding and opportunity for this research and Dr. Sarah Mayes-Tang for her endless support as my host faculty advisor. Calculations in this paper were performed using the computer software Macaulay2 \cite{M2}.
| {
"timestamp": "2017-12-13T02:15:10",
"yymm": "1712",
"arxiv_id": "1712.04411",
"language": "en",
"url": "https://arxiv.org/abs/1712.04411",
"abstract": "Given an homogeneous monomial ideal $I$, we provide a question- and example-based investigation of the stabilization patterns of the Betti tables shapes of $I^d$ as we vary $d$. We build off Whieldon's definition of the stabilization index of $I$, Stab$(I)$, to define the stabilization sequence of $I$, StabSeq$(I)$, and use it to explore changes in the shapes of the Betti tables of $I^d$ as we vary $d$. We also present the stabilization indices and sequences of the collection of ideals $\\{I_{n}\\}$ where $I_{n}=(a^{2n}b^{2n}c^{2n},b^{4n}c^{2n},a^{3n}c^{3n},a^{6n-1}b)\\subseteq \\Bbbk[a,b,c]$.",
"subjects": "Commutative Algebra (math.AC)",
"title": "Betti table Stabilization of Homogeneous Monomial Ideals"
} |
https://arxiv.org/abs/1407.1901 | On Solving a Curious Inequality of Ramanujan | Ramanujan proved that the inequality $\pi(x)^2 < \frac{e x}{\log x} \pi\Big(\frac{x}{e}\Big)$ holds for all sufficiently large values of $x$. Using an explicit estimate for the error in the prime number theorem, we show unconditionally that it holds if $x \geq \exp(9658)$. Furthermore, we solve the inequality completely on the Riemann Hypothesis, and show that $x=38, 358, 837, 682$ is the largest integer counterexample. | \section{Introduction}
We let $\pi(x)$ denote the number of primes which are less than or equal to $x$. In one of his notebooks, Ramanujan (see the account given by Berndt \cite[Ch.24]{Berndt}) proved that the inequality
\begin{equation} \label{inequality}
\pi(x)^2 < \frac{e x}{\log x} \pi\Big(\frac{x}{e}\Big)
\end{equation}
holds for all sufficiently large values of $x$. Berndt \cite{Berndt} states that Wheeler, Keiper and Galway used \textsc{Mathematica} in an attempt to determine an $x_0$ such that (\ref{inequality}) holds for all $x \geq x_0$. They were unsuccessful, but independently Galway was able to establish that the largest prime counter-example below $10^{11}$ occurs at $x = 38{,}358{,}837{,}677$.
\begin{comment}
Indeed, one can run a straightforward check to see that $x = 38, 358, 837, 682$ is the greatest integer counterexample for $x<10^{11}$.
\end{comment}
Hassani looked at the problem in $2012$ \cite{Hassani} and established (inter alia):
\begin{thm}[Hassani]\label{thm:hassani}
If one assumes the Riemann Hypothesis, then inequality (\ref{inequality}) holds for all $x\geq 138,766,146,692,471,228$.
\end{thm}
\begin{proof}
This is Theorem 1.2 of \cite{Hassani}.
\end{proof}
\begin{comment}
The first purpose of this paper is to show that Ramanujan's inequality holds without condition for all $x \geq \exp(9658)$. We then solve the inequality completely on the assumption of the Riemann hypothesis and show that $x=38, 358, 837, 682$ is the greatest integer counterexample, and so (\ref{inequality}) holds conditionally for all real numbers $x \geq 38, 358, 837, 683$.
\end{comment}
The purpose of this paper is to establish the following two theorems:
\begin{thm}\label{thm:uncon}
Inequality (\ref{inequality}) holds unconditionally for all $x\geq\exp(9658)$.
\end{thm}
\begin{thm}\label{thm:con}
Assuming the Riemann Hypothesis, the largest integer counter-example to inequality (\ref{inequality}) is that at $x=38{,}358{,}837{,}682$.
\end{thm}
We will look at the unconditional result first, before considering that contingent on the Riemann Hypothesis.
\section{The unconditional result}
\subsection{Ramanujan's original proof}
We start by giving Ramanujan's original and, we think, rather fetching proof, which is based on the prime number theorem, or more specifically that
\begin{equation} \label{piexpansion}
\pi(x) = x \sum_{k=0}^{4} \frac{k!}{\log^{k+1}x}+O\Big(\frac{x}{\log^6 x}\Big)
\end{equation}
as $x \rightarrow \infty$. As such we have the two estimates
\begin{equation*} \label{pisquared}
\pi^2 (x) = x^2 \Big\{ \frac{1}{\log^2 x} + \frac{2}{\log^3 x} + \frac{5}{\log^4 x} + \frac{16}{\log^5 x} + \frac{64}{\log^6 x} \Big\} + O\Big( \frac{x^2}{\log^7 x}\Big)
\end{equation*}
and
\begin{eqnarray*} \label{pimult}
\frac{e x}{\log x} \pi \Big(\frac{x}{e}\Big) & = & \frac{x^2}{\log x} \Big\{ \sum_{k=0}^{4} \frac{k!}{(\log x-1)^{k+1}} \Big\} + O\Big( \frac{x^2}{\log^7 x} \Big) \\
& = & x^2 \Big\{ \frac{1}{\log^2 x} + \frac{2}{\log^3 x} + \frac{5}{\log^4 x} + \frac{16}{\log^5 x} + \frac{65}{\log^6 x} \Big\} + O\Big( \frac{x^2}{\log^7 x}\Big).
\end{eqnarray*}
Subtracting the above two expressions gives
\begin{equation} \label{makemenegative}
\pi^2 (x) - \frac{e x}{\log x} \pi \Big(\frac{x}{e}\Big) = - \frac{x^2}{\log^6 x} + O \Big(\frac{x^2}{\log^7 x} \Big)
\end{equation}
which is negative for sufficiently large values of $x$. This completes the proof.
The proof serves as a tribute to the workings of Ramanujan's mind, for surely one would not calculate the asymptotic expansions of such functions without the knowledge that doing so would be fruitful.
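The coefficient lists $1,2,5,16,64$ and $1,2,5,16,65$ driving the cancellation can be verified mechanically: writing $u = 1/\log x$, squaring the series for $\pi(x)/x$ is a convolution of coefficients, and $1/(\log x-1)^{k+1} = \sum_{n\geq 0}\binom{n+k}{k}u^{n+k+1}$. The following Python sketch (purely illustrative) recovers both lists; their disagreement in the final coefficient is exactly the $-x^2/\log^6 x$ term in (\ref{makemenegative}).

```python
from math import comb, factorial

# Coefficients of pi(x)/x ~ sum_{k=0}^{4} k! u^{k+1}, written in u = 1/log x.
N = 7
pi_series = [0] * N
for k in range(5):
    pi_series[k + 1] = factorial(k)

# pi(x)^2 / x^2: square the truncated series (convolution of coefficients).
sq = [sum(pi_series[i] * pi_series[j - i] for i in range(j + 1)) for j in range(N)]

# (e x / log x) pi(x/e) / x^2 ~ u * sum_k k!/(log x - 1)^(k+1), using
# 1/(log x - 1)^(k+1) = sum_{n>=0} C(n+k, k) u^(n+k+1).
shift = [0] * N
for k in range(5):
    for n in range(N):
        if n + k + 2 < N:          # the extra +1 comes from the prefactor u
            shift[n + k + 2] += factorial(k) * comb(n + k, k)

print(sq[2:7])     # [1, 2, 5, 16, 64]
print(shift[2:7])  # [1, 2, 5, 16, 65]
```

Both truncations are exact through the $u^6$ coefficient, which is all that Ramanujan's argument uses.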
Note that if one were to work through the above proof using explicit estimates on the asymptotic expansion of the prime-counting function, then one would be able to make precise what is meant by ``sufficiently large''. The following lemma shows how one might do this.
\begin{lem} \label{lem1}
Let $m_a, M_a \in \mathbb{R}$ and suppose that for $x>x_a$ we have
$$ x \sum_{k=0}^{4} \frac{k!}{\log^{k+1}x}+ \frac{m_a x}{\log^6 x} < \pi(x) < x \sum_{k=0}^{4} \frac{k!}{\log^{k+1}x}+\frac{M_a x}{\log^6 x}.$$
Then Ramanujan's inequality is true if
$$x > \max( e x_{a},x_{a}' )$$
where a value for $x_{a}'$ can be obtained in the proof and is completely determined by $m_a, M_a$ and $x_{a}$.
\end{lem}
\begin{proof}
Following along the lines of Ramanujan's proof we have for $x > x_{a}$
\begin{equation} \label{pipi}
\pi^2(x) < x^2 \Big\{ \frac{1}{\log^2 x}+ \frac{2}{\log^3 x}+ \frac{5}{\log^4 x}+ \frac{16}{\log^5 x}+ \frac{64}{\log^6 x} + \frac{\epsilon_{M_a}(x)}{\log^7 x} \Big\}
\end{equation}
where
$$\epsilon_{M_a} (x) = 72 + 2 M_a + \frac{2M_a+132}{\log x} + \frac{4M_a+288}{\log^2 x} + \frac{12 M_a+576}{\log^3 x}+\frac{48M_a}{\log^4 x} + \frac{M_a^2}{\log^5 x}.$$
The other term requires slightly more trickery; we have for $x > e x_{a}$
$$\frac{ex}{\log x} \pi \Big(\frac{x}{e} \Big) > \frac{x^2}{\log x} \Big( \sum_{k=0}^{4} \frac{k!}{(\log x - 1)^{k+1}}\Big) + \frac{m_a x}{(\log x-1)^{6}}. $$
We make use of the inequality
\begin{eqnarray*}
\frac{1}{(\log x - 1)^{k+1}} & = & \frac{1}{\log^{k+1} x} \Big(1+ \frac{1}{\log x} + \frac{1}{\log^2 x} + \frac{1}{\log^3 x} + \cdots \Big)^{k+1} \\ \\
& > & \frac{1}{\log^{k+1} x} \Big(1+ \frac{1}{\log x}+ \cdots + \frac{1}{\log^{5-k} x} \Big)^{k+1}
\end{eqnarray*}
to get
\begin{equation} \label{epi}
\frac{ex}{\log x} \pi \Big(\frac{x}{e} \Big) > x^2 \Big\{ \frac{1}{\log^2 x}+ \frac{2}{\log^3 x}+ \frac{5}{\log^4 x}+ \frac{16}{\log^5 x}+ \frac{64}{\log^6 x} + \frac{\epsilon_{m_a}(x)}{\log^7 x} \Big\},
\end{equation}
where
$$\epsilon_{m_a}(x) = 206+m_a+\frac{364}{\log x} + \frac{381}{\log^2 x}+\frac{238}{\log^3 x} + \frac{97}{\log^4 x} + \frac{30}{\log^5 x} + \frac{8}{\log^6 x}.$$
Now, subtracting $(\ref{epi})$ from $(\ref{pipi})$ we have
$$\pi^2(x) - \frac{ex}{\log x} \pi \Big( \frac{x}{e} \Big) < \frac{x^2}{\log^6 x} \Big(-1 + \frac{\epsilon_{M_a} (x) - \epsilon_{m_a} (x)}{\log x} \Big).$$
The right hand side is negative if
$$\log x > \epsilon_{M_a} (x_{a}) - \epsilon_{m_a} (x_{a}),$$
and so we can then choose $x_a' $ as some value which satisfies this.
\end{proof}
The aim is to reduce $\max(ex_{a},x_{a}' )$ so as to get the sharpest bound available using this method and modern estimates involving the prime counting function. The next two subsections deal with deriving the explicit bounds on $\pi(x)$ that are required to invoke Lemma \ref{lem1}.
\subsection{An estimate for Chebyshev's function}
We define Chebyshev's $\theta$-function for some $x \in \mathbb{R}$ to be
$$\theta(x) = \sum_{p \leq x} \log p$$
where the sum is over prime numbers. We now call on Theorem 1 of Trudgian \cite{trudgian}, which explicitly bounds the error in approximating $\theta(x)$ with $x$.
\begin{lem}
Let
$$\epsilon_0 (x) = \sqrt{\frac{8}{17 \pi}} X^{1/2} e^{-X}, \hspace{0.2in} X= \sqrt{(\log x)/R}, \hspace{0.2in} R = 6.455.$$
Then
$$|\theta(x) - x | \leq x \epsilon_0(x), \hspace{0.2in} x \geq 149$$
\end{lem}
This is another form of the prime number theorem, though explicit and able to give us the estimates required to use Lemma \ref{lem1}. For any choice of $a>0$, it is possible to use the above lemma to find some $x_a >0$ such that
\begin{equation} \label{chebyshevbound}
| \theta(x) - x | < a \frac{x}{\log^5 x}
\end{equation}
for all $x>x_a$; we simply need to find the range of $x$ for which
$$\sqrt{\frac{8}{17 \pi}} \bigg( \frac{\log x}{R} \bigg)^{1/4} e^{-\sqrt{(\log x)/R}} < \frac{a}{\log^5 x}. $$
As this may yield large values of $x_a$, we write $x = e^y$ (also $x_a = e^{y_a}$) and take logarithms to get the equivalent inequality
\begin{equation} \label{solve}
\log c + \frac{21}{4} \log y \leq \sqrt{\frac{y}{R}}, \hspace{0.2in} \text{where} \hspace{0.2in} c = \sqrt{\frac{8}{17\pi}} \cdot \frac{1}{a R^{1/4}}.
\end{equation}
In the next subsection, we will see how bounds of the form (\ref{chebyshevbound}) can be manipulated to give the estimates on $\pi(x)$ required to use Lemma \ref{lem1}.
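Inequality (\ref{solve}) is easy to solve numerically for any given $a$. The sketch below is illustrative; it takes $c = \sqrt{8/(17\pi)}/(a R^{1/4})$, which is what taking logarithms produces, and uses bisection with the choice $a=3223$ made in the numerical estimates later, locating the crossing point within about a unit of the value $y_a = 9656.8$ used there.

```python
import math

R = 6.455
a = 3223
c = math.sqrt(8 / (17 * math.pi)) / (a * R ** 0.25)

def gap(y):
    # positive while inequality (solve) still fails, negative once it holds
    return math.log(c) + 5.25 * math.log(y) - math.sqrt(y / R)

lo, hi = 9000.0, 10000.0          # gap(9000) > 0 > gap(10000)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if gap(mid) > 0 else (lo, mid)

print(round(hi, 1))  # crossing point; compare y_a = 9656.8 from the text
```

For $y \gtrsim 712$ the left-hand side of (\ref{solve}) grows more slowly than the right, so there is a single crossing in this range and bisection is justified.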
\subsection{Upper and lower bounds for $\pi(x)$}
Suppose that, for any $a>0$ and some corresponding $x_a>0$ we have
$$\theta(x) < x + a \frac{x}{\log^5 x}$$
for all $x > x_a$. The technique of partial summation gives us that
\begin{eqnarray*}
\pi(x) & < & \frac{x}{\log x} + \int_2^x \frac{dt}{\log^2 t} + a \frac{x}{\log^6 x} + a \int_2^x \frac{dt}{\log^7 t} \\ \\
& < & x \Big( \sum_{k=0}^{4} \frac{k!}{\log^{k+1} x} \Big) + (120+a) \frac{x}{\log^6 x} + (720 +a) \int_2^x \frac{dt}{\log^7 t}.
\end{eqnarray*}
We can estimate the remaining integral here by
\begin{eqnarray*}
\int_2^x \frac{dt}{\log^7 t} & < & \frac{x}{\log^7 x} + 7 \int_2^x \frac{dt}{\log^8 t} \\ \\
& < & \frac{x}{\log^7 x} + 7 \bigg( \int_2^{\sqrt{x}} \frac{dt}{\log^8 t} + \int_{\sqrt{x}}^{x} \frac{dt}{\log^8 t}\bigg) \\ \\
& < & \frac{x}{\log^7 x} + 7 \Big( \frac{\sqrt{x}}{\log^8 2} + \frac{2^8 x}{\log^8 x} \Big).
\end{eqnarray*}
Putting it all together we have that
$$\pi(x) < x \Big( \sum_{k=0}^{4} \frac{k!}{\log^{k+1} x} \Big) + M_a \frac{x}{\log^6 x}$$
for all $x>x_a$, where
\begin{equation} \label{M}
M_a = 120 + a +\frac{a+720}{\log x_a} + \frac{1792 a + 1290240}{\log^2 x_a} + \Big( \frac{5040+7a}{\log^8 2} \Big) \frac{\log^6 x_a}{\sqrt{x_a}}.
\end{equation}
In an almost identical way, we can obtain for $x>x_a$ that
$$\pi(x) > x \Big( \sum_{k=0}^{4} \frac{k!}{\log^{k+1} x} \Big) + m_a \frac{x}{\log^6 x}$$
where
\begin{equation} \label{m}
m_a = 120 -a - \frac{a}{\log x_a} - \frac{1792}{\log^2 x_a} - 2 A \frac{\log^6 x_a}{x_a} - \frac{7 a \log^6 x_a}{\log^8 2 \sqrt{x_a}}
\end{equation}
and
$$A = \sum_{k=1}^{5} \frac{k!}{\log^{k+1} 2} \approx 1266.10.$$
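The elementary integral estimate above leaves enormous slack; as a quick numerical sanity check (a sketch, not part of the proof), one can compare a direct quadrature of $\int_2^x dt/\log^7 t$ with the closed-form bound $x/\log^7 x + 7\big(\sqrt{x}/\log^8 2 + 2^8 x/\log^8 x\big)$ at a few sample values of $x$:

```python
import math

def integral_log7(x, n=100_000):
    """Trapezoid estimate of the integral of dt / log(t)^7 over [2, x],
    computed after the substitution t = e^u (integrand e^u / u^7)."""
    a, b = math.log(2.0), math.log(x)
    h = (b - a) / n
    f = lambda u: math.exp(u) / u**7
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def closed_form_bound(x):
    """The explicit bound derived above for the same integral."""
    L, L2 = math.log(x), math.log(2.0)
    return x / L**7 + 7 * (math.sqrt(x) / L2**8 + 2**8 * x / L**8)

for x in (10**3, 10**4, 10**5, 10**6):
    assert integral_log7(x) < closed_form_bound(x)
```

The dominant term in the bound is the crude $7\sqrt{x}/\log^8 2$ contribution from $[2,\sqrt{x}]$, so the comparison is far from tight, but it confirms the direction of the inequality.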
\subsection{Numerical estimates}
Our method is as follows. We choose some $a>0$ for which we want
$$|\theta(x) - x| < a \frac{x}{\log^5 x} $$
to hold for $x>x_a = e^{y_a}$. We simply plug our desired value of $a$ into (\ref{solve}) and use \textsc{Mathematica} to search for some value of $y_a$ such that the inequality holds for all $x > e^{y_a}$. We then use (\ref{M}) and (\ref{m}) to calculate two values $m_a$ and $M_a$ such that
$$ x \sum_{k=0}^{4} \frac{k!}{\log^{k+1}x}+ \frac{m_a x}{\log^6 x} < \pi(x) < x \sum_{k=0}^{4} \frac{k!}{\log^{k+1}x}+\frac{M_a x}{\log^6 x}$$
holds for $x>e^{y_a}$. Then by Lemma \ref{lem1}, we find some value $x_{a}' = e^{y_a'}$ (dependent on $a$, $m_a$ and $M_a$, and thus really only on $a$) such that Ramanujan's inequality is true for $x > \max(ex_a, x_a')$.
One finds that small values of $a$ give rise to large values of $x_a$, yet small values of $x_a'$. Similarly, large values of $a$ will yield small $x_a$ yet large values of $x_a'$. Of course, we want $x_a$ and $x_a'$ to be comparable, so that we might lower their maximum as much as possible. Thus, the idea is to select $a$ so that $ex_a$ and $x_a'$ are as close as possible.
It can be verified that choosing $a=3223$ gives $x_a = \exp(9656.8)$ with the values
$$m_a = -3103.33, \hspace{0.2in} M_a = 3343.48.$$
One then computes, using Lemma \ref{lem1}, that $x_a' = \exp(9657.8)$ will work. This gives us Theorem \ref{thm:uncon}.
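A quick Python check (standing in for the \textsc{Mathematica} computation) reproduces these constants from (\ref{M}) and (\ref{m}). Since $x_a = e^{9656.8}$ overflows a double, every term with $x_a$ or $\sqrt{x_a}$ in the denominator is evaluated via $e^{-y_a}$ or $e^{-y_a/2}$, which underflow harmlessly to zero at this scale:

```python
import math

a = 3223.0
y = 9656.8                   # y_a = log(x_a), i.e. x_a = exp(9656.8)
log2 = math.log(2.0)

# M_a from equation (M); the final term carries exp(-y/2) ~ exp(-4828.4),
# which underflows to 0.0, so it contributes nothing numerically.
M_a = (120 + a + (a + 720) / y + (1792 * a + 1290240) / y**2
       + (5040 + 7 * a) / log2**8 * y**6 * math.exp(-y / 2))

# m_a from equation (m), with A = sum_{k=1}^{5} k!/log^{k+1} 2.
A = sum(math.factorial(k) / log2**(k + 1) for k in range(1, 6))
m_a = (120 - a - a / y - 1792 / y**2
       - 2 * A * y**6 * math.exp(-y)
       - 7 * a * y**6 / log2**8 * math.exp(-y / 2))

assert round(M_a, 2) == 3343.48 and round(m_a, 2) == -3103.33
```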
\section{Estimates on the Riemann hypothesis}
We now assume the Riemann Hypothesis and can therefore rely on Schoenfeld's conditional bound for the prime counting function:
\begin{thm}[Schoenfeld]\label{thm:schoen}
For $x\geq 2657$ we have
\begin{equation*} \label{schoen}
|\pi(x) - \text{li}(x) | < \frac{1}{8 \pi} \sqrt{x} \log x.
\end{equation*}
\end{thm}
\begin{proof}
See \cite{schoenfeld}.
\end{proof}
We now aim to improve on Theorem \ref{thm:hassani} of Hassani to the extent that a numerical computation to check the remaining cases becomes feasible. We have
\begin{lem}\label{lem:lowlim}
Assuming the Riemann Hypothesis, we have
$$\pi^2(x)<\frac{ex}{\log x}\pi\left(\frac{x}{e}\right)$$
for all $x\geq 1.15\cdot 10^{16}$.
\end{lem}
\begin{proof}
Platt and Trudgian \cite{platttrudgian} have recently confirmed that $\pi(x)<\text{li}(x)$ holds for $x \leq 1.2 \cdot10^{17}$. Together with Theorem \ref{thm:schoen} we see that
$$f(x)=\pi^2(x) - \frac{ex}{\log x} \pi \Big( \frac{x}{e} \Big)$$
is bounded above by
$$g(x) = \text{li}^2(x) - \frac{ex}{\log x} \bigg( \text{li}\Big(\frac{x}{e} \Big) - \frac{1}{8 \pi} \sqrt{\frac{x}{e}} (\log x - 1) \bigg) $$
for all $x \geq 2657 e$. Berndt \cite[Ch.24]{Berndt} uses some elementary calculus to show that a similar function to the above is monotonically increasing over some range. One can use that same technique here to show that $g(x)$ is monotonically decreasing for all $x \geq 10^{16}$. Then, \textsc{Mathematica} can be used to show that
$$g(1.15 \cdot 10^{16}) \approx -3.211 \cdot 10^{19} <0$$
and thus $g(x)$ is negative for all $x \geq 1.15 \cdot 10^{16}$ and the lemma follows.
\end{proof}
\subsection{Computation}
It was stated in the introduction that the largest integer counterexample of (\ref{inequality}) up to $x=10^{11}$ occurs at $x = 38{,}358{,}837{,}682$. In this subsection, we wish to show by computation that there are no counterexamples in the interval $[10^{11}, 1.15 \cdot 10^{16}]$.
As before, we write
$$f(x) = \pi^2(x) - \frac{ex}{\log x} \pi\bigg( \frac{x}{e} \bigg).$$
Note that $f$ is strictly decreasing between primes, so we could simply check that $f(p)<0$ for all primes $p$ in the required range. However, there are roughly $3.2\cdot 10^{14}$ primes to consider\footnote{Or precisely $319,870,505,122,591$.} and this many evaluations of $f$ would be computationally too expensive. Instead we employ a simple stepping argument.
\begin{lem}\label{lem:step}
Let $x_0$ be in the interval $[10^{11},1.15 \cdot 10^{16}]$ with $f(x_0) < 0$. Set $\epsilon=\sqrt{\pi^2(x_0)-f(x_0)}-\pi(x_0)$. Then $f(x) < 0$ for all $x\in[x_0, x_0 + \epsilon]$.
\end{lem}
\begin{proof}
We have for $x_0\geq e$ and $\epsilon>0$
\begin{eqnarray*}
f(x_0+\epsilon) & = & \pi^2(x_0+\epsilon)-\frac{e(x_0+\epsilon)}{\log(x_0+\epsilon)}\pi\left(\frac{x_0+\epsilon}{e}\right)\\
& \leq & \pi^2(x_0)+2\pi(x_0)\epsilon+\epsilon^2-\frac{ex_0}{\log x_0}\pi\left(\frac{x_0}{e}\right)\\
& = & f(x_0)+2\pi(x_0)\epsilon+\epsilon^2.
\end{eqnarray*}
Setting the final upper bound equal to zero and solving the resulting quadratic in $\epsilon$ gives us our lemma.
\end{proof}
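In code, the step of Lemma \ref{lem:step} is the positive root of the quadratic $\epsilon^2 + 2\pi(x_0)\epsilon + f(x_0) = 0$, which is approximately $-f(x_0)/(2\pi(x_0))$ when $|f(x_0)| \ll \pi(x_0)^2$. The sketch below uses hypothetical values of $\pi(x_0)$ and $f(x_0)$, chosen only to exercise the formula:

```python
import math

def step(pi_x0, f_x0):
    """Step size from the lemma: the positive root of
    eps^2 + 2*pi(x0)*eps + f(x0) = 0, valid whenever f(x0) < 0."""
    assert f_x0 < 0
    return math.sqrt(pi_x0 ** 2 - f_x0) - pi_x0

# Hypothetical values (not actual values of pi and f), for illustration:
p, fv = 10_000.0, -5.0e6
eps = step(p, fv)
assert abs(fv + 2 * p * eps + eps ** 2) < 1e-6    # lands on the root
assert abs(eps - (-fv) / (2 * p)) / eps < 0.05    # close to -f/(2*pi)
```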
Suppose we have access to a table of values of $\pi(x_i)$ with $x_{i+1}>x_i$ for all $i$. Then we can compute an interval containing $\pi(x)$ simply by looking up $\pi(x_i)$ and $\pi(x_{i+1})$ where $x_i\leq x \leq x_{i+1}$. Repeating this for $x/e$ we can determine an interval $[a,b]$ for $f(x)$. Assuming $b$ is negative, we can use Lemma \ref{lem:step} to step to a new $x$ and repeat.
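One step of this interval computation might look as follows (a sketch with a tiny exact table of $\pi$, standing in for the actual tables). Note that when the resulting bracket straddles zero, as in this toy example, the table is too coarse to determine the sign of $f$:

```python
import bisect
import math

def pi_bracket(x, xs, pis):
    """Bracket pi(x) from a table pis[i] = pi(xs[i]) with xs increasing,
    using monotonicity of pi.  Requires xs[0] <= x <= xs[-1]."""
    i = bisect.bisect_right(xs, x) - 1
    return pis[i], pis[min(i + 1, len(pis) - 1)]

def f_bracket(x, xs, pis):
    """Interval [lo, hi] guaranteed to contain f(x)."""
    lo1, hi1 = pi_bracket(x, xs, pis)
    lo2, hi2 = pi_bracket(x / math.e, xs, pis)
    c = math.e * x / math.log(x)
    return lo1 * lo1 - c * hi2, hi1 * hi1 - c * lo2

# Tiny exact table: pi(10)=4, pi(20)=8, pi(30)=10, pi(40)=12.
xs, pis = [10, 20, 30, 40], [4, 8, 10, 12]
lo, hi = f_bracket(35, xs, pis)
f_true = 11**2 - (math.e * 35 / math.log(35)) * 5   # pi(35)=11, pi(35/e)=5
assert lo <= f_true <= hi
assert lo < 0 < hi   # bracket straddles zero: table too coarse here
```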
Oliveira e Silva has produced extensive tables of $\pi(x)$ \cite{Oliviera2012}. Unfortunately, these are not of a sufficiently fine granularity to support the algorithm outlined above. In other words, the estimates on $\pi(x)$ and $\pi(x/e)$ we can derive from these tables alone are too imprecise and do not determine the sign of $f(x)$ uniquely. We looked at the possibility of refining the coarse intervals provided by these tables using Montgomery and Vaughan's explicit version of the Brun--Titchmarsh theorem \cite{MV}, but to no avail. Instead, we re-sieved the range $[1\cdot 10^{10},1.15\cdot 10^{16}]$ to produce exact values for $\pi(x_i)$ where the $x_i$ were more closely spaced. Table \ref{tab:pixs} provides the details.\footnote{At the same time we double-checked Oliveira e Silva's computations and, as expected, we found no discrepancies.}
\begin{table}[ht]
\caption{The Prime Sieving Parameters}
\label{tab:pixs}
\centering
\begin{tabular}{c c c}
\hline\hline
From & To & Spacing\\[0.5 ex] \hline
$10^{10}$ & $10^{11}$ & $10^3$\\
$10^{11}$ & $10^{12}$ & $10^4$\\
$10^{12}$ & $10^{13}$ & $10^5$\\
$10^{13}$ & $10^{14}$ & $10^6$\\
$10^{14}$ & $10^{15}$ & $10^7$\\
$10^{15}$ & $10^{16}$ & $10^8$\\
$10^{16}$ & $1.15\cdot 10^{16}$ & $10^9$\\
\hline\hline
\end{tabular}
\end{table}
We used Kim Walisch's ``primesieve'' package \cite{Walisch2012} to perform the sieving and it required a total of about $300$ hours running on nodes of the University of Bristol's Bluecrystal cluster \cite{ACRC2014}.\footnote{Each node comprises two $8$ core Intel\textsuperscript{\circledR} Xeon\textsuperscript{\circledR} E5-2670 CPUs running at $2.6$ GHz and we ran with one thread per core.}
Running the stepping algorithm outlined above against these tables, confirming that $f(x)<0$ for all $x\in[1\cdot 10^{11},1.15\cdot 10^{16}]$ took less than $5$ minutes on a single core. We had to step (and therefore compute $f(x)$) about $5.3\cdot 10^8$ times to span this range. We sampled at every $100,000$th step and Figure \ref{fig:plot} shows a log/log plot of $x$ against $-f(x)$ for these samples.\footnote{Actually we use the midpoint of the interval computed for $-f(x)$.}
\begin{figure}[tbp]
\centering
\fbox{\includegraphics[width=0.65\linewidth,angle=270]{ram-eps-converted-to.pdf}}
\caption{$\log(x)$ vs. $\log(-f(x))$.}
\label{fig:plot}
\end{figure}
No counterexamples to (\ref{inequality}) were uncovered by this computation and so we can now state
\begin{lem}\label{lem:comp}
For $x\in[10^{11},1.15\cdot 10^{16}]$ we have
$$\pi^2(x)<\frac{ex}{\log x}\pi\left(\frac{x}{e}\right).$$
\end{lem}
Lemmas \ref{lem:lowlim} and \ref{lem:comp} now give us Theorem \ref{thm:con}.
\begin{comment}
\begin{lem}
Let $x_0 \in [10^{11},1.15 \cdot 10^{16})$ with $f(x_0) < 0$. It follows that $f(x) < 0$ for all $x$ in the range
$$[x_0, x_0 + \epsilon(x_0)]$$
where $ \epsilon(x_0) < 0.19 x$ is the solution to the equation
$$\frac{\epsilon}{\log \epsilon} = \frac{-f(x_0)}{5 \text{li}(x_0)}$$
\end{lem}
\begin{proof}
Suppose that $x \in [10^{11},1.15 \cdot 10^{16})$ is such that $f(x) < 0$. Further, suppose that $\epsilon > 0$ is such that $f(x + \epsilon) >0$, that is;
$$0 < \pi^2 (x+\epsilon) - \frac{e (x+\epsilon)}{\log(x+\epsilon)} \pi\Big( \frac{x+\epsilon}{e} \Big).$$
We will use known bounds so as to obtain a lower bound on $\epsilon$ in terms of $x$. We can use Montgomery and Vaughan's \cite{mvlargesieve} elegant version of the Brun--Titchmarsh Theorem on the first term as follows.
\begin{eqnarray*}
\pi^2(x+\epsilon) & = & \Big[ \pi(x) + (\pi(x+\epsilon) -\pi(x)) \Big]^2 \\
& \leq & \bigg[ \pi(x) + \frac{2 \epsilon}{\log \epsilon} \bigg]^2.
\end{eqnarray*}
The second term can be estimated trivially as
$$\frac{e (x+\epsilon)}{\log(x+\epsilon)} \pi\Big( \frac{x+\epsilon}{e} \Big) > \frac{ex}{\log x} \pi\Big( \frac{x}{e} \Big) $$
due to the eventually increasing nature of $x/\log x$. Thus, putting these estimates together, we have that $\epsilon$ must satisfy
$$\frac{4 \epsilon}{\log \epsilon} \pi(x) + \frac{4 \epsilon^2}{\log^2 \epsilon} > - f(x),$$
or rather:
\begin{equation} \label{banana}
\frac{\epsilon}{\log \epsilon} \bigg( 4 \pi(x) + \frac{4\epsilon}{\log \epsilon} \bigg) > -f(x).
\end{equation}
By Theorem 1 of Rosser and Schoenfeld \cite{rosserschoenfeld1962}, we have that the bound
$$\pi(x) < \frac{x}{\log x} \bigg(1 + \frac{3}{2 \log x} \bigg)$$
holds for all $x>1$. In our range we have that $x > 10^{11}$ and so one can get that
$$\pi(x) < 1.06 \frac{x}{\log x}.$$
Moreover, the restriction $\epsilon < 0.19 x$ is included so that we may estimate the bracketed term in (\ref{banana}) by
$$4 \pi(x) + \frac{4\epsilon}{\log \epsilon} < 5 \frac{x}{\log x}.$$
Inserting this into (\ref{banana}) and rearranging completes the proof.
\end{proof}
In the computation, we will also make good use of the tables of Silva \cite{silva}. Note that he has given the value of
\end{comment}
\clearpage
\bibliographystyle{plain}
| {
"timestamp": "2014-07-09T02:04:23",
"yymm": "1407",
"arxiv_id": "1407.1901",
"language": "en",
"url": "https://arxiv.org/abs/1407.1901",
"abstract": "Ramanujan proved that the inequality $\\pi(x)^2 < \\frac{e x}{\\log x} \\pi\\Big(\\frac{x}{e}\\Big)$ holds for all sufficiently large values of $x$. Using an explicit estimate for the error in the prime number theorem, we show unconditionally that it holds if $x \\geq \\exp(9658)$. Furthermore, we solve the inequality completely on the Riemann Hypothesis, and show that $x=38, 358, 837, 682$ is the largest integer counterexample.",
"subjects": "Number Theory (math.NT)",
"title": "On Solving a Curious Inequality of Ramanujan",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9924227600767888,
"lm_q2_score": 0.8418256472515683,
"lm_q1q2_score": 0.8354469323488305
} |
https://arxiv.org/abs/1503.07281 | Note on vanishing power sums of roots of unity | For a given positive integers $m$ and $\ell$, we give a complete list of positive integers $n$ for which their exist $m$th roots of unity $x_1,\dots,x_n \in \mathbb{C}$ such that $x_1^{\ell} + \cdots + x_n^{\ell}=0$. This extends the earlier result of Lam and Leung on vanishing sums of roots of unity. Furthermore, we prove that for which integers $n$ with $2 \leq n \leq m$, there are distinct $m$th roots of unity $x_1,\dots,x_n \in \mathbb{C}$ such that $x_1^{\ell} + \cdots + x_n^{\ell}=0$. | \section{Introduction}
Let $m$ be a positive integer. By an $m$th root of unity, we mean a complex number $\zeta$ such that $\zeta^m=1$, that is, a root of the polynomial $X^m-1.$ One can easily see that the roots of $X^m-1$ are distinct; in fact, there are exactly $m$ $m$th roots of unity. Using the relationship between the roots and the coefficients of a polynomial, we see that the sum of all $m$th roots of unity, which is the negative of the coefficient of $X^{m-1}$ in $X^m-1,$ is zero. A natural question is: What are all
the positive integers $n$ for which there exist $m$th roots of unity $x_1,\ldots,x_n$ (repetition is allowed) such that
$x_1+\cdots+x_n=0$. A beautiful result of T. Y. Lam and K. H. Leung \cite{lam} gives a complete classification of all such integers.
Suppose $m$ has prime factorization $p_1^{a_1}\ldots p_r^{a_r}$, where $a_i >0$, then we have the following theorem due to
Lam and Leung:
\begin{theorem}\label{thm:1}
Let $n$ be a positive integer. Then there are $m$th roots of unity $x_1,\ldots,x_n$ such that
$x_1+\cdots+x_n=0$ if and only if $n$ is of the form $n_1p_1+\cdots+n_rp_r$ where each $n_i$ is a non-negative integer
for $1\leq i \leq r.$
\end{theorem}
\noindent Theorem \ref{thm:1} motivates us to ask the following:
\begin{question}\label{lam-ext-quest}
Let $m$ and $\ell$ be positive integers. What are all the positive integers $n$ for
which there exist $m$th roots of unity $x_1,\ldots,x_n$ such that $x_1^{\ell} + \cdots + x_n^{\ell}=0$ ?
\end{question}
Note that when $\ell =1$, the complete answer to Question \ref{lam-ext-quest} is given by Theorem \ref{thm:1}.
However, for $ \ell \geq 2$, we could not find any results in this direction in the literature. Our objective here is to study the case $ \ell \geq 2$. First, we fix some notation. Let $m$ be a positive integer, and let $\Omega_m$ denote the set of all $m$th roots of unity.
For a positive integer $\ell,$ $ W_\ell(m)$ denotes the set of all positive integers $n$ for which
there exist $n$ elements $x_1,\dots,x_n \in \Omega_m$ such that $x_1^{\ell} + \cdots + x_n^{\ell}=0$.
When ${\ell}=1$, we simply denote $ W_\ell(m)$ by $W(m)$. With this notation, Question \ref{lam-ext-quest} can be
reformulated as follows: { \it Let $m$ and $\ell$ be positive integers. What are all the positive integers in the set $W_\ell(m)$ ?}
It is clear that if $m$ divides ${\ell}$ then $W_\ell(m)$ is an empty set. Suppose that there are $m$th complex
roots of unity, say, $x_1,\dots,x_n$ such that $x_1^\ell+\cdots+x_n^\ell=0.$ Since the $\ell$th power of an $m$th root of unity is
still an $m$th root of unity, the equation $x_1^\ell+\cdots+x_n^\ell=0$ with $x_i \in \Omega_m$ can be written in the
form $y_1+\cdots+y_n=0$ with $y_i \in \Omega_m$. This shows that for any positive integers $m$ and $\ell$, $W_\ell(m)$ is a subset of $W(m)$. It follows from Theorem \ref{thm:1} that any positive integer in the set $W_\ell(m)$ must be of the form $n_1p_1+\cdots+n_rp_r$ where each $n_i$ is a non-negative integer for $1\leq i \leq r.$ In Section $2$, we give a complete list of the integers in the set $W_\ell(m)$ (see Theorem \ref{thm:2}). Moreover, in Section $3$ we find all positive integers $n \in W_\ell(m)$ for which there are distinct $m$th complex roots of unity $x_1,\dots,x_n$ such that $x_1^\ell+\cdots+x_n^\ell=0$ (see Theorem \ref{thm:3}).
There are algebraic reasons why Question \ref{lam-ext-quest} is important.
For instance, for a positive integer $a$, denote by $p_a$
the power sum polynomial $X_1^a+\cdots+X_n^a$ of degree $a$. Let $ \ell < k$ be two positive integers. In commutative algebra,
one encounters the following situation: To show that the ideal $\langle p_{\ell}, p_k \rangle$ generated by the polynomials
$p_{\ell}$ and $p_k$ is a prime ideal in $\mathbb{C}[X_1,\dots,X_n]$, one needs to show that the power sum
polynomial $X_1^{\ell} + \cdots + X_n^{\ell}$ does not vanish when one allows the $X_i$'s to take values among
the $(k-\ell)$th roots of unity \cite[see proof of Theorem 3.8]{k}.
\vspace{2mm}
\paragraph{{\bf Acknowledgment:}} We are grateful to Professor Ram Murty for his valuable suggestions regarding the paper.
We also thank the referee for many useful comments for improving the manuscript.
This project was funded by the Department of Atomic Energy (DAE), Government of India.
\section{Vanishing of power sums of roots of unity}\label{main-thm}
Let $m$ and ${\ell}$ be positive integers. In this section, we completely characterize all the positive integers in the set $W_\ell(m)$.
More precisely, we prove the following theorem:
\begin{theorem}\label{thm:2}
Let $m$ and ${\ell}$ be positive integers. Let $d=(m,{\ell})$ be the greatest common divisor of $m$ and ${\ell}.$
Then $W_\ell(m)=W(m/d).$
\end{theorem}
In other words, Theorem \ref{thm:2} says the following: for any positive integer $n,$ there are $x_1,\dots,x_n\in\Omega_m$ with $x_1^{\ell} + \cdots + x_n^{\ell}=0$ if and only if there are $y_1,\dots,y_n\in\Omega_{m/d}$ with $y_1+\cdots+y_n=0.$
\begin{proof} It is well known that $\Omega_m$, that is, the set of all $m$th roots of unity, form a group with respect
to the multiplication of complex numbers.
In fact, it is a cyclic group of order $m$, generated by the complex number $\zeta_m=\cos 2\pi/m+\mathfrak{i}\sin 2\pi/m.$ There is a
remarkable property of finite cyclic groups. Namely, if $G$ is a finite cyclic group and $\ell$ is a positive integer
relatively prime to the order of $G,$ then the map
\begin{eqnarray}\label{equ:1}
x \longmapsto x^{\ell} \quad ( x \; \in \; G)
\end{eqnarray}
is an automorphism of $G$ (In fact, all the automorphisms of $G$ are of the form (\ref{equ:1}) for some integer
${\ell}$ which is relatively prime to the order of $G$). It follows that, if ${\ell}$ is a positive integer which is relatively
prime to $m$, then every element of $\Omega_m$ is an ${\ell}$th power of some element of
$\Omega_m.$ Thus, for an integer $\ell$ which is relatively prime to $m,$ the equation $x_1^{\ell}+\cdots+x_n^{\ell}=0$ with $x_i\in \Omega_m$ can be replaced by $y_1+\cdots+y_n=0$
with $y_i\in \Omega_m,$ and vice versa. This discussion proves Theorem \ref{thm:2} for the case when $\ell$ is relatively prime to $m.$
Now assume that $d>1$. Consider the map
\begin{eqnarray}\label{psi-map}
\psi_d: \Omega_m \longrightarrow \Omega_{m/d}
\end{eqnarray}
defined by $x\mapsto x^d$ for $x\in\Omega_m.$ This map is clearly onto, and the kernel is
exactly $\Omega_d.$ Thus, $\Omega_m/\Omega_d\cong \Omega_{m/d}.$
Now suppose that there are elements $x_1,\ldots,x_n\in\Omega_m$ such that $x_1^{\ell}+\cdots+x_n^{\ell}=0.$ Writing $x_i^{\ell}=\left(x_i^{d}\right)^{\ell/d}$ and setting $z_i=\psi_d(x_i)=x_i^d\in\Omega_{m/d},$ this sum can be rewritten as $z_1^{\ell/d}+\cdots+z_n^{\ell/d}=0.$ Since ${\ell}/d$ and $m/d$ are relatively prime, by the above discussion the latter equation can be rewritten in the form $y_1+\cdots+y_n=0$ where $y_i\in\Omega_{m/d}$ for $1\leq i\leq n.$ In fact, all these steps can be reversed.
This completes the proof of Theorem \ref{thm:2}.
\end{proof}
Combining Theorems \ref{thm:1} and \ref{thm:2}, we have the following corollary:
\begin{cor}\label{cor:1}
Let $m$, $n$ and ${\ell}$ be positive integers. Let $d=(m,{\ell})$ be the greatest common divisor of $m$ and ${\ell}.$ Then there
are $m$th roots of unity $x_1,\ldots,x_n$ such that $x_1^\ell+\cdots+x_n^\ell=0$ if and only if $n$ is of the form $n_1q_1+\cdots+n_sq_s$ where each $n_i$ is a non-negative integer for $1\leq i \leq s$ and $q_1,\dots,q_s$ are the distinct prime divisors of $m/d$.
\end{cor}
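For small parameters, Corollary \ref{cor:1} can be checked directly by brute force over multisets of $m$th roots of unity. The Python sketch below verifies the case $m=12$, $\ell=4$ (so $d=4$ and $m/d=3$), where the corollary predicts that a vanishing sum of $\ell$th powers of $n$ roots exists exactly when $3$ divides $n$:

```python
import cmath
from itertools import combinations_with_replacement
from math import gcd

def vanishing_sum_exists(m, ell, n, tol=1e-9):
    """Brute force: do there exist n (not necessarily distinct)
    m-th roots of unity whose ell-th powers sum to zero?"""
    roots = [cmath.exp(2j * cmath.pi * k / m) for k in range(m)]
    return any(abs(sum(x ** ell for x in combo)) < tol
               for combo in combinations_with_replacement(roots, n))

# m = 12, ell = 4: d = gcd(12, 4) = 4 and m/d = 3, so a vanishing
# sum of 4th powers should exist iff n is a multiple of 3.
assert gcd(12, 4) == 4
for n in range(2, 7):
    assert vanishing_sum_exists(12, 4, n) == (n % 3 == 0)
```

The tolerance is safe here because a nonzero sum of elements of $\Omega_3$ is a nonzero Eisenstein integer and hence has absolute value at least $1$.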
\begin{example} Let $m=60$, and let $\ell$ be an integer with $1 \leq \ell < 60$.
By Theorem \ref{thm:2}, $W_\ell(m)=W(m/d)$ where $d$ is the greatest common divisor of $m$ and $\ell$.
When $d$ varies over the divisors of $m$, $m/d$ also varies over the divisors of $m$. Thus $W_\ell(m)$ coincides with $W(d)$ for some divisor $d$ of $m$.
On the other hand, by Theorem \ref{thm:1}, $W(d)=\sum_{i=1}^{s}q_i\mathbb{N}$
where $d=q_1^{b_1}\dots q_{s}^{b_s}$ is the prime factorization of $d$. Here $\mathbb{N}$ denotes the set of non-negative integers. We thus have the following table, which describes $W(d)$ for all positive divisors $d$ of $m=60$.
\end{example}
{ \footnotesize
\begin{table}[H]
\centering
\begin{tabular}{| c | l |}
\hline
$d$ & $W(d)$ \\ \hline
$1$ & $\emptyset$ \\
$2$ & $2 \mathbb{N}$ \\
$3$ & $3 \mathbb{N}$ \\
$4$ & $2 \mathbb{N}$ \\
$5$ & $ 5 \mathbb{N}$ \\
$6$ & $ 2 \mathbb{N}$ + $3 \mathbb{N} = \mathbb{N} \setminus \{ 1 \}$ \\
$10$ & $ 2 \mathbb{N}$ + $5 \mathbb{N} = \mathbb{N} \setminus \{1,3 \}$ \\
$12$ & $ 2 \mathbb{N}$ + $3 \mathbb{N} = \mathbb{N} \setminus \{1 \}$ \\
$15$ & $ 3 \mathbb{N}$ + $5 \mathbb{N} = \mathbb{N} \setminus\{ 1,2,4,7 \}$ \\
$20$ & $ 2 \mathbb{N}$ + $5 \mathbb{N} = \mathbb{N} \setminus \{ 1,3 \}$ \\
$30$ & $ 2 \mathbb{N}$ + $3 \mathbb{N} $ + $5 \mathbb{N} = \mathbb{N} \setminus \{ 1 \}$ \\
$60$ & $ 2 \mathbb{N}$ + $3 \mathbb{N} $ + $5 \mathbb{N} = \mathbb{N} \setminus \{ 1 \}$ \\
\hline
\end{tabular}
\label{table}
\end{table}
}
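Each row of the table is an instance of the coin problem for the prime divisors of $d$, so it can be reproduced by a short dynamic program; the Python sketch below re-derives, for example, the rows for $d=15$ and $d=20$:

```python
def prime_divisors(d):
    """Distinct prime divisors of d, by trial division."""
    out, p = [], 2
    while p * p <= d:
        if d % p == 0:
            out.append(p)
            while d % p == 0:
                d //= p
        p += 1
    if d > 1:
        out.append(d)
    return out

def W(d, N=60):
    """Positive n <= N expressible as a nonnegative integer combination
    of the prime divisors of d (Theorem 1 of Lam and Leung)."""
    reach = [False] * (N + 1)
    reach[0] = True
    for p in prime_divisors(d):       # unbounded-coin DP
        for n in range(p, N + 1):
            reach[n] = reach[n] or reach[n - p]
    return {n for n in range(1, N + 1) if reach[n]}

# Two rows of the table above:
assert set(range(1, 61)) - W(15) == {1, 2, 4, 7}
assert set(range(1, 61)) - W(20) == {1, 3}
```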
\section{Vanishing of power sums of distinct roots of unity}
Let $m$ and $\ell$ be two positive integers. For an integer $n \in W_{\ell}(m)$, the {\it height} $H(n;\ell,m)$ of $n$ is defined to be the smallest positive integer $h$ for which there are $m$th roots of unity $x_1,\dots,x_n$ such that $x_1^\ell+\cdots+x_n^\ell=0$ and no root is repeated more than $h$ times in the list $x_1,\dots,x_n$. When $\ell =1$, we denote $H(n;\ell,m)$ by $H(n;m)$. Note that $H(n;m)=1$ means that the roots $x_1,\dots,x_n$ can be chosen to be distinct, which is possible only when $2 \leq n \leq m$. Gary Sivek \cite{Gary} refined the work of Lam and Leung by proving that for any integers $m \geq 2$ and $2 \leq n \leq m$, $H(n;m)=1$ if and only if both $n$ and $m-n$ are expressible as a linear combination of the prime factors of $m$ with non-negative integer coefficients. Here we extend Sivek's result to the vanishing of power sums of distinct roots of unity:
\begin{theorem}\label{thm:3}
Let $m$ and $\ell$ be positive integers, and let $n$ be an integer such that $2 \leq n\leq m.$ Let $d$ be the greatest common divisor of $m$ and $\ell$.
Then $H(n;\ell,m)=1$ if and only if $H(n; m/d) \leq d$.
\end{theorem}
\begin{proof}
Let $\Omega_{m/d} = \{ z_1,\dots,z_{m/d} \}$. Suppose that there are distinct $m$th roots of unity $x_1,\dots,x_n$ such
that $x_1^\ell+\cdots+x_n^\ell=0$. Since $d$ is the greatest common divisor
of $\ell$ and $m$, this equation can be rewritten in the form $y_1^d+\cdots+y_n^d=0$ where $y_1,\dots,y_n$ are $m$th roots of unity.
Using the map $\psi_d$, the latter equation can be written as $\sum_{i=1}^{m/d} a_iz_i =0$ where $a_i$ is the cardinality
of the set $ \{ y_1,\dots,y_n \} \cap \psi_d^{-1}(z_i)$
for $1 \leq i \leq m/d$. On the other hand, $\psi_d^{-1}(z)$ has exactly $d$ elements for each $z \in \Omega_{m/d}$.
It follows that $H(n;m/d) \leq \max \{ a_1,\; \dots, a_{m/d} \} \leq d$. This proves that if $H(n;\ell,m) = 1$ then $H(n;m/d) \leq d$.
Conversely, suppose that $H(n;m/d) \leq d$. Then there is a partition $( a_1,\dots,a_{m/d} )$ of $n$ into non-negative
integers $a_i$ with $a_i \leq d$ for $ 1 \leq i \leq m/d$ and $\sum_{i=1}^{m/d} a_iz_i =0$. Let $y_i$ be any element
of $\psi_d^{-1}(z_i)$ for $1 \leq i \leq m/d$. Then $ \psi_d^{-1}(z_i)= \; y_i \Omega_d \; = \{ \; y_i x \; | \; x \in \Omega_d \}$. Since $a_i \leq d$,
one can replace $a_iz_i$ by $y_i^d ( x_1^d+\cdots+x_{a_i}^d )$ where $x_1,\dots,x_{a_i}$ are distinct elements of $\Omega_d$.
Hence $H(n;\ell,m)=H(n;d,m)=1$ since $\sum_{i=1}^{m/d} a_i=n$. This completes the proof of Theorem \ref{thm:3}.
\end{proof}
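Theorem \ref{thm:3} can be spot-checked by brute force for small parameters. Taking $m=8$ and $\ell=2$ (so $d=2$ and $m/d=4$), a partition of $n$ over $\Omega_4$ with multiplicities at most $2$ sums to zero exactly when the multiplicities of $1,-1$ agree and those of $i,-i$ agree, i.e. exactly when $n$ is even; the exhaustive search below agrees:

```python
import cmath
from itertools import combinations

def distinct_vanishing(m, ell, n, tol=1e-9):
    """Are there n DISTINCT m-th roots of unity whose ell-th
    powers sum to zero?  (Brute force over all n-subsets.)"""
    roots = [cmath.exp(2j * cmath.pi * k / m) for k in range(m)]
    return any(abs(sum(x ** ell for x in subset)) < tol
               for subset in combinations(roots, n))

# m = 8, ell = 2: d = 2 and m/d = 4, so a distinct-root solution
# should exist iff H(n; 4) <= 2, i.e. iff n is even (2 <= n <= 8).
for n in range(2, 9):
    assert distinct_vanishing(8, 2, n) == (n % 2 == 0)
```

The tolerance is safe because the sums in question are Gaussian integers, so any nonzero sum has absolute value at least $1$.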
| {
"timestamp": "2016-03-04T02:07:29",
"yymm": "1503",
"arxiv_id": "1503.07281",
"language": "en",
"url": "https://arxiv.org/abs/1503.07281",
"abstract": "For a given positive integers $m$ and $\\ell$, we give a complete list of positive integers $n$ for which their exist $m$th roots of unity $x_1,\\dots,x_n \\in \\mathbb{C}$ such that $x_1^{\\ell} + \\cdots + x_n^{\\ell}=0$. This extends the earlier result of Lam and Leung on vanishing sums of roots of unity. Furthermore, we prove that for which integers $n$ with $2 \\leq n \\leq m$, there are distinct $m$th roots of unity $x_1,\\dots,x_n \\in \\mathbb{C}$ such that $x_1^{\\ell} + \\cdots + x_n^{\\ell}=0$.",
"subjects": "Number Theory (math.NT)",
"title": "Note on vanishing power sums of roots of unity",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9896718458052632,
"lm_q2_score": 0.843895100591521,
"lm_q1q2_score": 0.8351792218684289
} |
https://arxiv.org/abs/2205.04629 | Symmetry of hypersurfaces and the Hopf Lemma | A classical theorem of A.D. Alexandrov says that a connected compact smooth hypersurface in Euclidean space with constant mean curvature must be a sphere. We give exposition to some results on symmetry properties of hypersurfaces with ordered mean curvature and associated variations of the Hopf Lemma. Some open problems will be discussed. | \section{Introduction}
H. Hopf established in \cite{H}
that an immersion of a topological $2$-sphere in $\mathbb R^3$
with constant mean curvature must be a standard sphere.
He also made
the conjecture that the conclusion holds for all immersed connected closed hypersurfaces in $\mathbb R^{n+1}$ with constant mean curvature.
A.D. Alexandrov proved in \cite{A} that if
$M$ is an embedded connected closed hypersurface with constant mean
curvature, then $M$
must be a standard sphere.
If $M$ is immersed instead of embedded, the conclusion does not hold
in general,
as shown by W.-Y. Hsiang in \cite{Hs}
for $n\ge 3$ and by Wente in \cite{W} for $n=2$.
A.
Ros in \cite{R} gave a different proof for the theorem of Alexandrov
making use of the variational properties
of the mean curvature.
In this note, we give exposition to some
results in
\cite{Li}-\cite{LYY}.
It is suggested that the reader
read the introductions of \cite{LN1}, \cite{LN2} and \cite{LYY}.
Throughout the paper $M$ is a smooth compact connected embedded hypersurface in $\mathbb R^{n+1}$,
$k(X)=(k_1(X), \cdots, k_n(X))$ denotes the principal curvatures of $M$ at $X$ with respect to the inner normal, and
the mean curvature of $M$ is
$$
H(X) := \frac 1n\left[ k_1(X)+\cdots + k_n(X)\right].
$$
We use $G$ to denote the open bounded set bounded by $M$.
Li
proved in \cite{Li}
the following result saying
that if the mean curvature $H: M\to \mathbb R$
has a Lipschitz extension $K: \mathbb R^{n+1} \to \mathbb R$ which is
monotone in the $X_{n+1}$ direction, then
$M$ is symmetric about a hyperplane
$X_{n+1}=c$.
\begin{thm} (\cite{Li})
Let $M$ be a smooth compact connected hypersurface without boundary embedded in
$\mathbb R^{n+1}$, and let $K$ be a Lipschitz function in $\mathbb R^{n+1}$
satisfying
\begin{equation}
K(X', B)\le K(X', A),\quad \forall\
X'\in \mathbb R^n, \ A\le B.
\label{1new}
\end{equation}
Suppose that at each point $X$ of $M$ the mean curvature $H(X)$ equals $K(X)$. Then $M$ is symmetric about a hyperplane
$X_{n+1}= c$.
\label{TheoremA}
\end{thm}
In \cite{Li}, $K$ was assumed to be $C^1$ for the above result, but the
proof there only needs $K$ being Lipschitz.
Li and Nirenberg then considered
in \cite{LN1} and \cite{LN2}
the more general question in which the condition
$H(X)=K(X)$ with $K$ satisfying (\ref{1new}) is replaced by the weaker,
more natural, condition:
\noindent
{\bf Main Assumption.}
\ For any two points $(X', A), (X', B) \in M$ satisfying
$A\le B$
and that $ \{(X',\theta A+(1 - \theta )B):0 \le \theta \le 1\}$
lies in $\overline G$, we have
\begin{equation}
H(X', B)\le H(X',A).
\label{main}
\end{equation}
They
showed in \cite{LN1}
that this assumption alone is not enough
to guarantee the symmetry of $M$
about some hyperplane $X_{n+1}=c$.
The mean curvature $H: M\to \mathbb R$ of the
counterexample
constructed in [\cite{LN1}, Fig. 4] has a monotone extension
$K: \mathbb R^{n+1} \to \mathbb R$ which is $C^\alpha $
for every $0<\alpha<1$, but fails to be Lipschitz.
The counterexample actually satisfies (\ref{main}) with an equality.
They also constructed a counterexample [\cite{LN1}, Section 6]
showing that the inequality (\ref{main}) does not imply a pairwise equality.
A conjecture was made in \cite{LN2}
after the introduction of
\noindent {\bf
Condition S.}\
$ M $ stays on one side of any hyperplane parallel to
the $X_{n+1}$ axis that is tangent to $M$.
\medskip
\begin{conjecture} (\cite{LN2})\
\label{ConjectureA}
Any smooth compact connected embedded hypersurface $M$ in $\mathbb R^{n+1}$
satisfying
the Main Assumption and Condition S must be symmetric about a hyperplane
$X_{n+1}=c$.
\end{conjecture}
The conjecture for $n=1$ was proved in \cite{LN1}.
For
$n\ge 2$, they introduced
the following
condition:
\medskip
\noindent {\bf Condition T.}\
Every line parallel to the
$X_{n+1}$-axis that is tangent to $M$ has contact of finite order.
\medskip
Note that if $M$ is real analytic then Condition T is
automatically satisfied.
They proved in [\cite{LN2}, Theorem 1] that
$M$ is symmetric about a hyperplane $X_{n+1}=c$
under the Main Assumption, Conditions S and T, and a
local convexity condition
near points where the tangent planes are parallel to the $X_{n+1}$-axis.
For convex $M$, their
result is
\medskip
\begin{thm} (\cite{LN2})
\label{TheoremB}
Let $M$ be a smooth compact convex hypersurface in $\mathbb R^{n+1}$ satisfying the Main Assumption and Condition T.
Then $M$ must be symmetric about a hyperplane $X_{n+1}=c$.
\end{thm}
\medskip
The theorem of Alexandrov is more general in that one can replace the mean curvature by a wide class of symmetric functions of the principal curvatures.
Similarly, Theorem \ref{TheoremA} and
Theorem \ref{TheoremB}
(as well as the more general [\cite{LN2}, Theorem 1]) still hold
when the mean curvature function is replaced by
more general curvature functions.
Consider a triple $(M, \Gamma, g)$: Let $M$
be a compact connected $C^2$
hypersurface without boundary embedded in
$\mathbb R^{n+1}$,
and let $g(k_1, \cdots, k_n)$ be a $C^3$
function,
symmetric in $(k_1, \cdots, k_n)$, defined in an open convex
neighborhood $\Gamma$ of
$\{ (k_1(X), \cdots, k_n(X))\ |\
X\in M\}$, and satisfy
\begin{equation}
\frac {\partial g}{\partial k_i}(k)>0,\ \ \ 1\le i\le n\ \ \ \
\mbox{and}\ \ \ \ \frac {\partial ^2 g}{ \partial k_i\partial k_j}(k)\eta^i\eta^j
\le 0,\qquad \forall\ k\in \Gamma \ \mbox{and} \ \eta\in \mathbb R^n.
\label{general}
\end{equation}
For convex $M$, their
result ([\cite{LN2}, Theorem 2]) is
as follows.
\medskip
\begin{thm} (\cite{LN2})\
\label{TheoremC} Let the triple $(M,\Gamma,g)$ satisfy (\ref{general}).
In addition, we assume that $M$ is convex and
satisfies
Condition T and
the Main Assumption with inequality (\ref{main}) replaced by
\begin{equation}
g(k(X', B))\le g(k(X',A)).
\label{main2}
\end{equation}
Then $M$ must be symmetric about a hyperplane $X_{n+1}=c$.
\end{thm}
For $1\le m\le n$, let
$$
\sigma_m(k_1, \cdots, k_n)= \sum_{ 1\le i_1<\cdots<i_m\le n} k_{i_1}\cdots k_{i_m}
$$
be the $m$-th elementary symmetric function, and let
$$
g_m:= (\sigma_m)^{\frac 1m}.
$$
It is known that $g=g_m$ satisfies the above properties in
$$
\Gamma_m := \{ (k_1, \cdots, k_n)\in \mathbb R^n\ |\
\sigma_j(k_1, \cdots, k_n)>0\ \mbox{for}\
1\le j\le m\}.
$$
Moreover,
$\Gamma_1=\{ k\in \mathbb R^n\ |\ k_1+\cdots + k_n>0\}$,
$\Gamma_n=\{ k\in \mathbb R^n\ |\ k_1, \cdots, k_n>0\}$,
$\Gamma_{m+1}\subset \Gamma_m$, and
$\Gamma_m$ is the connected component of
$\{ k\in \mathbb R^n\ |\ \sigma_m(k)>0\}$ containing
$\Gamma_n$.
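As a concrete illustration (not part of the argument), the functions $\sigma_m$ and the cones $\Gamma_m$ are straightforward to evaluate pointwise; the Python sketch below checks, for example, that a curvature vector can lie in $\Gamma_1$ but not in $\Gamma_2$:

```python
from itertools import combinations
from math import prod

def sigma(m, k):
    """m-th elementary symmetric function of k = (k_1, ..., k_n)."""
    return sum(prod(c) for c in combinations(k, m))

def in_Gamma(m, k):
    """Membership in the cone Gamma_m: sigma_j(k) > 0 for 1 <= j <= m."""
    return all(sigma(j, k) > 0 for j in range(1, m + 1))

k1 = (1.0, 1.0, 1.0)    # umbilic point of a sphere: lies in every Gamma_m
k2 = (3.0, 1.0, -1.0)   # positive mean curvature, but sigma_2 = -1 < 0
assert in_Gamma(3, k1) and in_Gamma(2, k1)
assert in_Gamma(1, k2) and not in_Gamma(2, k2)
```

By definition, membership in $\Gamma_{m+1}$ implies membership in $\Gamma_m$, which is the inclusion $\Gamma_{m+1}\subset\Gamma_m$ quoted above.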
\medskip
The method of proof of Theorems \ref{TheoremB} and \ref{TheoremC}
(as well as
the more general [\cite{LN2}, Theorems 1 and 2])
begins as in that of the theorem of
Alexandrov, using the method of moving planes.
Then, as indicated in the introduction of
\cite{LN1}, one is led to the need for
variations
of the classical Hopf Lemma.
The Hopf Lemma is a local result. The needed variant
of the Hopf Lemma to prove Theorem \ref{TheoremB} (and Conjecture
\ref{ConjectureA})
was raised as an open problem ([\cite{LN2}, Open Problem 2])
which remains open.
The proof of Theorems \ref{TheoremB} and \ref{TheoremC}
(as well as
the more general [\cite{LN2}, Theorems 1 and 2]) was
based on the maximum principle, but also used
a global argument.
In a recent paper \cite{LYY},
Li, Yan and Yao proved
Conjecture \ref{ConjectureA}
using a method
different from that of \cite{LN1} and \cite{LN2}, exploiting
the variational properties of
the mean curvature.
In fact, they proved the symmetry result under a
slightly weaker assumption than Condition S:
\medskip
\noindent {\bf Condition S'.}\
There exists some constant $r>0$,
such that for every
$\overline X=(\overline X', \overline X_{n+1})\in M$ with a horizontal
unit outer normal (denote it by $\bar \nu =(\bar \nu', 0)$),
the vertical cylinder
$|X'-(\overline X'+r\bar \nu')|=r$
has an empty intersection with $G$.
($G$ is the bounded open set in
$\mathbb R^{n+1}$ bounded by the hypersurface $M$.)
\begin{thm} (\cite{LYY})\
\label{TheoremD}
Let $M$
be a compact connected $C^2$
hypersurface without boundary embedded in
$\mathbb R^{n+1}$,
which satisfies both the Main Assumption and Condition
S'.
Then $M$
must be symmetric about a hyperplane $X_{n+1}=c$.
\end{thm}
Here are two conjectures, in increasing strength.
\begin{conjecture}
For $n\ge 2$ and $2\le m\le n$, let
$M$
be a compact connected $C^2$
hypersurface without boundary embedded in
$\mathbb R^{n+1}$ satisfying Condition
S (or the slightly weaker Condition S') and
$\{ (k_1(X), \cdots, k_n(X))\ |\
X\in M\}\subset \Gamma_m$.
We assume that $M$ satisfies the Main Assumption with inequality (\ref{main}) replaced by
\begin{equation}
\sigma_m(k(X', B))\le \sigma_m(k(X',A)).
\label{main3}
\end{equation}
Then $M$ must be symmetric about a hyperplane $X_{n+1}=c$.
\label{open1}
\end{conjecture}
The next one is for more general curvature functions.
\begin{conjecture}
For $n\ge 2$, let the triple $(M, \Gamma, g)$ satisfy (\ref{general}).
In addition, we assume that
$M$
satisfies Condition
S (or the slightly weaker Condition S') and the Main Assumption
with
inequality (\ref{main}) replaced by (\ref{main2}).
Then $M$ must be symmetric about a hyperplane $X_{n+1}=c$.
\label{open1new}
\end{conjecture}
The above two conjectures are open even for convex $M$.
Conjecture \ref{open1} can be approached in two ways. One is by the
method of moving planes, which leads to the study of variations of the
Hopf Lemma.
Such variations of the Hopf Lemma are of
independent interest.
A number of open problems and conjectures
on such variations of the Hopf Lemma have been discussed in
\cite{LN1}-\cite{LN3}.
For related works, see \cite{PWY} and \cite{SB}.
We will give some discussion on this in Section 1.
Conjecture \ref{open1} can also be approached by using the
variational properties of the higher order mean curvature (i.e. the $\sigma_m$-curvature).
If the answer to Conjecture \ref{open1}
is affirmative, then the inequality in (\ref{main3}) must be an equality.
This curvature equality
was proved in \cite{LYY2}, using the variational properties
of the $\sigma_m$-curvature:
\begin{lem} (\cite{LYY2})
\label{lemma2new}
For $n\ge 2$ and $2\le m\le n$, let
$M$
be a compact connected $C^2$
hypersurface without boundary embedded in
$\mathbb R^{n+1}$ satisfying Condition
S'.
We assume that $M$ satisfies the Main Assumption, with inequality (\ref{main}) replaced by
(\ref{main3}).
Then (\ref{main3}) must be an equality for every pair of points.
\end{lem}
The proof of Theorem
\ref{TheoremD} and Lemma \ref{lemma2new} will be sketched in Section 2.
\medskip
We have discussed in the above symmetry properties of
hypersurfaces in the Euclidean space.
It is also
interesting to study
symmetry properties of hypersurfaces under ordered curvature assumptions
in the
hyperbolic space, including
the study of
the counterpart
of Theorem \ref{TheoremA}, Theorem
\ref{TheoremD}, and Conjecture
\ref{open1} in the hyperbolic space.
Extensions of the Alexandrov-Bernstein theorems in the hyperbolic space
were given by Do Carmo and Lawson in \cite{DL};
see also Nelli \cite{N}
for a survey on Alexandrov-Bernstein-Hopf theorems.
\section{Discussion on Conjecture \ref{open1} and
the proof of Theorem \ref{TheoremC}
}
Let
\begin{equation}
\Omega=\{(t,y)\ |\ y\in \mathbb R^{n-1}, |y|<1,
0<t<1\},
\label{D1-1new}
\end{equation}
$$
u, v\in C^\infty(\overline \Omega),
$$
$$
u\ge v\ge 0,\qquad \mbox{in}\ \Omega,
$$
$$
u(0,y)=v(0,y),\quad \forall\ |y|<1;
\qquad u(0,0)=v(0,0)=0,
$$
$$
u_t(0,0)=0,
$$
$$
u_t>0,\qquad \mbox{in}\ \Omega.
$$
We use $k^u(t,y)=(k_1^u(t,y), \cdots, k_n^u(t,y))$ to
denote the principal curvatures
of the graph of $u$ at $(t,y)$.
Similarly, $k^v=(k_1^v, \cdots, k_n^v)$ denotes the principal
curvatures
of the graph of $v$.
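As a concrete handle on the principal curvatures $k^u$ of a graph: they are the eigenvalues of the Weingarten (shape) operator, which can be assembled from $\nabla u$ and $D^2 u$ at a point. The following numerical sketch is our own illustration (with the sign convention of the upward unit normal), not part of the source argument:

```python
import numpy as np

def graph_principal_curvatures(grad, hess):
    """Principal curvatures of the graph of u at one point, with respect to
    the upward unit normal, given grad = nabla u and hess = D^2 u there."""
    g = np.asarray(grad, dtype=float)
    H = np.asarray(hess, dtype=float)
    w2 = 1.0 + g @ g                            # 1 + |nabla u|^2
    P = np.eye(len(g)) - np.outer(g, g) / w2    # inverse of the induced metric
    S = P @ H / np.sqrt(w2)                     # Weingarten (shape) operator
    return np.sort(np.linalg.eigvals(S).real)

# Sanity check on a hemisphere of radius R: both curvatures equal -1/R.
R, x = 2.0, np.array([0.5, 0.3])
u = np.sqrt(R**2 - x @ x)
grad = -x / u
hess = -(np.eye(2) / u + np.outer(x, x) / u**3)
assert np.allclose(graph_principal_curvatures(grad, hess), [-1.0 / R] * 2)
```

In particular, at a critical point of $u$ the formula reduces to the eigenvalues of the Hessian of $u$.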
Here are two plausible variations of the Hopf Lemma.
\begin{openproblem}
For $n\ge 2$ and $1\le m\le n$, let $u$ and $v$ satisfy the above.
Assume
$$
\left\{
\begin{array}{l}
\mbox{whenever}\ u(t,y)=v(s,y),\ 0<s<1,\ |y|<1,\ \mbox{then}\\
\sigma_m(k^u)(t,y)\le \sigma_m(k^v)(s,y).
\end{array}
\right.
$$
Is it true that either
\begin{equation}
u\equiv v\ \ \mbox{near}\ (0,0)
\label{openproblem}
\end{equation}
or
\begin{equation}
v\equiv 0\ \ \mbox{near}\ (0,0)?
\label{open2}
\end{equation}
\label{OP3}
\end{openproblem}
A weaker version is
\medskip
\begin{openproblem}
In addition to the assumption
in Open Problem 1, we further assume that
$$
w(t,y):=
\left\{
\begin{array}{ll}
v(t,y),& t\ge 0, |y|<1\\
u(-t, y),& t<0, |y|<1
\end{array}
\right.
\ \mbox{is}\ C^\infty\ \mbox{in}\
\{(t,y)\ |\ |t|<1, |y|<1\}.
$$
Is it true that either (\ref{openproblem}) or (\ref{open2}) holds?
\label{OP4}
\end{openproblem}
Open Problems \ref{OP3} and \ref{OP4} for $m=1$ are exactly the same as
[\cite{LN2}, Open Problems 1 and 2],
where it was pointed out that an affirmative
answer to Open Problem \ref{OP4} for $m=1$ would yield
a proof of Conjecture \ref{ConjectureA}
by modification of the arguments in \cite{LN1} and \cite{LN2}.
This applies to $2\le m\le n$ as well:
An affirmative
answer to Open Problem \ref{OP4} for some $2\le m\le n$ would yield a proof of
Conjecture \ref{open1} (with Condition S)
for that $m$.
As mentioned earlier,
the answer to Open Problem \ref{OP3} for $n=1$ is yes, and was proved in \cite{LN1}.
For $n\ge 2$, a number of conjectures and open problems
on plausible variations
to the Hopf Lemma were given in
\cite{LN1}-\cite{LN3}.
The study of such variations of the Hopf Lemma can first be made for the
Laplace operator instead of the curvature operators.
The following was studied in \cite{LN3}.
Let $u\ge v$ be in $C^\infty(\overline \Omega)$ where $\Omega$ is given by (\ref{D1-1new}).
Assume that
$$
u>0, \ v>0,\ u_t>0\quad
\mbox{in}\ \Omega
$$
and
$$
u(0, y)=0\quad\mbox{for}\ |y|<1.
$$
We impose a main condition for the Laplace operator:
$$
\mbox{whenever}\
u(t,y)=v(s,y)\ \mbox{for}\
0<t\le s<1,\ \mbox{then}\
\Delta u(t,y)\le \Delta v(s, y).
$$
Under some conditions we wish to conclude that
\begin{equation}
u\equiv v\ \ \mbox{in}\ \Omega.
\label{7}
\end{equation}
The following two conjectures, in decreasing strength, were given
in \cite{LN3}.
\begin{conjecture}
Assume, in addition to the above, that
\begin{equation}
u_t(0, 0)=0.
\label{8}
\end{equation}
Then (\ref{7}) holds:
$$
u\equiv v\ \ \mbox{in}\ \Omega.
$$
\end{conjecture}
\begin{conjecture}
In addition to (\ref{8}) assume that
$$
u(t,0)\ \mbox{and}\
v(t,0)\ \mbox{vanish at}\
t=0\ \mbox{of finite order}.
$$
Then
$$
u\equiv v\ \ \mbox{in}\ \Omega.
$$
\end{conjecture}
Partial results were given in \cite{LN3} concerning these
conjectures.
On the other hand, the conjectures remain largely open.
\section{Discussion on
Conjecture \ref{open1} and
the proof of Theorem \ref{TheoremD}
}
Theorem \ref{TheoremD} was proved in \cite{LYY} by making use of
the variational properties
of the mean curvature operator.
We sketch the proof of Theorem \ref{TheoremD} below; see \cite{LYY} for
details.
For any smooth, closed hypersurface $M$ embedded
in $\mathbb R^{n+1}$, let
$V:
\mathbb R^{n+1}\to \mathbb R^{n+1}$ be a smooth vector field.
Consider, for $|t|<1$,
\begin{equation}
M(t):= \{ x+tV(x) \ |\ x\in M\},
\label{M}
\end{equation}
and
$$
S(t):= \int_{ M(t) }d\sigma= \mbox{area of}\ M(t).
$$
It is well known that
\begin{equation}
\frac{d}{dt}S(t)\bigg|_{ t=0}=-\int_M
V(x)\cdot \nu(x) H(x) d\sigma(x),
\label{var}
\end{equation}
where $H(x)$ is the mean curvature of $M$ at $x$ with respect to the inner
unit normal $\nu$.
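The first variation formula (\ref{var}) can be sanity-checked in closed form on a round sphere: for the dilation field $V(x)=x$ on the $2$-sphere of radius $R$ in $\mathbb R^3$, both sides equal $8\pi R^2$. A small numerical version of this check (our own illustration, not from the source):

```python
import math

# S(t) for the dilated spheres M(t) = {x + t x : x in M}, where M is the
# round 2-sphere of radius R in R^3 centred at the origin.
R = 2.0
def S(t):
    return 4.0 * math.pi * ((1.0 + t) * R) ** 2

# Left side of (var): dS/dt at t = 0, by a centred difference.
h = 1e-6
lhs = (S(h) - S(-h)) / (2.0 * h)

# Right side of (var): -int_M V.nu H dsigma with V(x) = x, inner unit normal
# nu = -x/R, and H = 2/R (sum of the two principal curvatures 1/R + 1/R).
# Then V.nu = -R is constant on M, so the integral is elementary:
rhs = 2.0 * (4.0 * math.pi * R ** 2)   # = 2 * area(M) = 8 pi R^2

assert abs(lhs - rhs) < 1e-3
```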
Define the projection map $\pi: (x', x_{n+1}) \to x'$, and set
$R:=\pi (M)$.
Condition S' assures that
$\nu(\bar x)$, $\bar x\in M$, is horizontal
iff $\bar x'\in \partial R$;
$\partial R$ is $C^{1,1}$
(with $C^{1,1}$ normal under control); and
$$
M=M_1\cup M_2\cup \widehat M,$$
where
$M_1, M_2$ are respectively graphs of functions $f_1, f_2: R^\circ\to \mathbb R$,
$f_1, f_2\in C^2(R^\circ), f_1>f_2$ in
$R^\circ$, and
$\widehat M:=\{ (x', x_{n+1})\in M\ |\ x'\in \partial R\}\equiv M\cap \pi^{-1}(\partial R).$
Note that $f_1, f_2$ are not in $C^0(R)$ in general.
\begin{lem}
\begin{equation}
H(x', f_1(x'))= H(x', f_2(x'))\quad \forall\ x'\in R^\circ.
\label{equality}
\end{equation}
\label{lemma1a}
\end{lem}
\noindent {\bf Proof.}\
Take $V(x)=e_{n+1}=(0, ..., 0, 1)$,
and let $M(t)$ and
$
S(t)$
be defined as above with this choice of $V(x)$.
Clearly, $S(t)$ is independent of $t$.
So
we have, using (\ref{var}) and the order assumption on the mean curvature,
that
\begin{eqnarray}
0&=&\frac{d}{dt}S(t)\bigg|_{ t=0}=-\sum_{i=1}^2
\int_{M_i}
e_{n+1} \cdot \nu(x) H(x) d\sigma(x)\nonumber\\
&=& -\int_{R^\circ}
\left[H(x', f_1(x'))-H(x', f_2(x'))\right]dx'\ge 0.
\label{3.5}
\end{eqnarray}
Using again the order assumption on the mean curvature, we obtain
the curvature equality (\ref{equality}).
\medskip
For any $v\in
C_c^\infty(R^\circ)$, let
$V(x):=v(x')e_{n+1}$, and let $M(t)$ and
$
S(t)$
be defined as above with this choice of $V(x)$.
We have, using (\ref{var}) and (\ref{equality}), that
\begin{eqnarray}
0&=&\frac{d}{dt}S(t)\bigg|_{ t=0}=-\sum_{i=1}^2
\int_{M_i}
v(x') e_{n+1} \cdot \nu(x) H(x) d\sigma(x)\nonumber\\
&=& -\int_{R^\circ}
v(x') \left[H(x', f_1(x'))-H(x', f_2(x'))\right]dx'= 0.
\label{One}
\end{eqnarray}
Theorem \ref{TheoremD} is proved by contradiction as follows:
If $M$ is not symmetric about a hyperplane, then
$\nabla (f_1+f_2)$ is not identically zero. We will find a
particular $V(x)=v(x') e_{n+1}$, $v\in C^2_{loc}(R^\circ)
$,
to make
$$
\frac{d}{dt}S(t)\bigg|_{ t=0}\ne 0,
$$
which contradicts (\ref{One}).
Write
$$
S(t)= \sum_{i=1}^2
\int_{ R^\circ}
\sqrt{
1+ |\nabla f_i(x') + t\nabla v(x')|^2 } dx'+\widehat S,
$$
where $\widehat S$, the area of the vertical part of $M$,
is independent of $t$ (since $v$ is zero near
$\partial R$, so the vertical part of $M$ is not moved).
\medskip
A calculation gives
$$
\frac{d}{dt}S(t)\bigg|_{ t=0}
=
\int_{ R^\circ}
\left[\nabla A(\nabla f_1(x'))- \nabla A(-\nabla f_2(x'))\right]
\cdot \nabla v(x')dx',
$$
where
$$
A(q):=\sqrt{ 1+|q|^2},\ \ q\in \mathbb R^n.
$$
We know
that $$
\nabla A(q)= \frac q{ \sqrt{ 1+|q|^2} }\quad \mbox{and}
\quad \nabla^2 A(q)\ge (1+|q|^2)^{ -3/2} I>0\ \ \forall q.
$$
So
$[\nabla A(q_1)-\nabla A(q_2)] \cdot (q_1-q_2)>0$
for any $q_1\ne q_2$.
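This strict monotonicity of $\nabla A$, a consequence of the uniform convexity bound on $\nabla^2 A$, is easy to confirm numerically; the following sketch (ours, purely illustrative) tests it on random pairs $q_1\ne q_2$:

```python
import numpy as np

def grad_A(q):
    """Gradient of A(q) = sqrt(1 + |q|^2), i.e. q / sqrt(1 + |q|^2)."""
    q = np.asarray(q, dtype=float)
    return q / np.sqrt(1.0 + q @ q)

rng = np.random.default_rng(0)
for _ in range(1000):
    q1, q2 = rng.normal(size=3), rng.normal(size=3)
    # Strict monotonicity: [grad A(q1) - grad A(q2)] . (q1 - q2) > 0, q1 != q2.
    assert (grad_A(q1) - grad_A(q2)) @ (q1 - q2) > 0.0
```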
\medskip
If $\nabla (f_1+f_2)\not\equiv 0$, we would like to take
$v=f_1+f_2$ and obtain
\begin{eqnarray*}
\frac{d}{dt}S(t)\bigg|_{ t=0}
&=&
\int_{ R^\circ}
\left[\nabla A(\nabla f_1(x'))- \nabla A(-\nabla f_2(x'))\right]
\cdot \nabla v(x')dx'\\
&=&
\int_{ R^\circ}
\left[\nabla A(\nabla f_1(x'))- \nabla A(-\nabla f_2(x'))\right]
\cdot [\nabla f_1(x')+ \nabla f_2(x')]dx'\\
&>&
0.
\end{eqnarray*}
In general, however, $f_1+f_2$ is not an admissible choice of $v$, since $v$ must vanish near $\partial R$.
It turns out that Condition S' allows us to do a smooth
cutoff near
$\partial R$, and conclude the proof.
We skip the crucial details, which can be found in the last few pages of
\cite{LYY}.
\medskip
\bigskip
\noindent{\bf Proof of Lemma \ref{lemma2new}.}\ The proof is similar to that of Lemma \ref{lemma1a}; see also the proof of [\cite{LYY}, Proposition 3] for more details.
We still take $V(X)=e_{n+1}$
and let $M(t)$ be as in (\ref{M}). Consider
$$
S_{m-1}(t):=
\int_{ M(t) }
\sigma_{m-1} (x) d\sigma.
$$
Clearly, $S_{m-1}(t)$ is independent of $t$.
The variational properties of the higher order curvatures
([\cite{Reilly}, Theorem B]) give
\begin{eqnarray*}
\frac{d}{dt}S_{m-1}(t)\bigg|_{ t=0}=-m\int_M
V(x)\cdot \nu(x) \sigma_m(x) d\sigma(x),
\end{eqnarray*}
thus the same argument as (\ref{3.5}) yields
\begin{eqnarray*}
0&=&\frac{d}{dt}S_{m-1}(t)\bigg|_{ t=0}=-m\int_M
V(x)\cdot \nu(x) \sigma_m(x) d\sigma(x)\\
&=& -m\int_{R^\circ}
\left[\sigma_m(x', f_1(x'))-\sigma_m(x', f_2(x'))\right]dx'\ge 0.
\end{eqnarray*}
We deduce from the above, using the curvature inequality (\ref{main3}),
that the equality in (\ref{main3}) must hold for every pair of points.
Lemma \ref{lemma2new} is proved.
% End of source: arXiv:2205.04629, "Symmetry of hypersurfaces and the Hopf Lemma" (math.AP, math.DG), https://arxiv.org/abs/2205.04629
% Source: arXiv:1507.03764, "The number of additive triples in subsets of abelian groups", https://arxiv.org/abs/1507.03764
\begin{abstract}
A set of elements of a finite abelian group is called sum-free if it contains no Schur triple, i.e., no triple of elements $x,y,z$ with $x+y=z$. The study of how large the largest sum-free subset of a given abelian group is had started more than thirty years before it was finally resolved by Green and Ruzsa a decade ago. We address the following more general question. Suppose that a set $A$ of elements of an abelian group $G$ has cardinality $a$. How many Schur triples must $A$ contain? Moreover, which sets of $a$ elements of $G$ have the smallest number of Schur triples? In this paper, we answer these questions for various groups $G$ and ranges of $a$.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
A typical problem in extremal combinatorics has the following form: What is the largest size of a structure which does not contain any forbidden configurations? Once this extremal value is known, it is very natural to ask how many forbidden configurations one is guaranteed to find in every structure of a certain size that is larger than the extremal value. There are many results of this kind. Most notably, there is a very large body of work on the problem of determining the smallest number of $k$-vertex cliques in a graph with $n$ vertices and $m$~edges, attributed to Erd{\H{o}}s and Rademacher; see~\cite{Er62, Er69, ErSi83, LoSi83, Ni11, Ra08, Re12}. In extremal set theory, there is an extension of the celebrated Sperner's theorem, where one asks for the minimum number of chains in a family of subsets of $\{1, \ldots, n\}$ with more than $\binom{n}{\lfloor n/2 \rfloor}$ members; see~\cite{DaGaSu13, DoGrKaSe13, ErKl74, Kl68}. Another example is a recent work in \cite{DaGaSu14}, motivated by the classical theorem of Erd\H{o}s, Ko, and Rado. It studies how many disjoint pairs must appear in a $k$-uniform set system of a certain size.
Analogous questions have been studied in the context of Ramsey theory. Once we know the maximum size of a~structure which does not contain some unavoidable pattern, we may ask how many such patterns are bound to appear in every structure whose size exceeds this maximum. For example, a well-known problem posed by Erd{\H{o}}s is to determine the minimum number of monochromatic $k$-vertex cliques in a $2$-colouring of the edges of~$K_n$; see, e.g.,~\cite{Co12, FrRo93, Th89}. This may be viewed as an extension of Ramsey's theorem. Another example is an extension of the famous theorem of Erd\H{o}s and Szekeres~\cite{ErSz35}, which states that any sequence of more than $k^2$ numbers contains a monotone (that is, monotonically increasing or monotonically decreasing) subsequence of length $k+1$. Here, one may ask what the minimum number of monotone subsequences of length $k+1$ contained in a sequence of $n$ numbers is; see \cite{BaHuLiPiUdVo, My02, SaSu}.
In this paper, we consider a similar Erd\H{o}s--Rademacher-type generalisation of a classical problem in additive combinatorics. Recall that a \emph{Schur triple} in an abelian group $G$ is a triple of elements $x,y,z$ of $G$, not necessarily distinct, satisfying $x+y=z$. A set $A$ of elements of $G$ is called \emph{sum-free} if it contains no Schur triples. The study of sum-free sets in abelian groups goes back to the work of Erd\H{o}s~\cite{E65}. In 1965, he proved that any set of $n$ non-zero integers contains a sum-free subset of size at least $n/3$ and asked whether the fraction $1/3$ could be improved. Despite significant interest in this problem, the matching upper bound was proved only recently by Eberhard, Green, and Manners~\cite{EbGrMa14}, who constructed a sequence of sets showing that Erd\H{o}s' result is asymptotically tight.
A related question, which is also more than forty years old, is to determine how large the largest sum-free subset of a given finite abelian group is. The following two simple observations provide strong lower bounds for this quantity. First, note that by considering the `middle' interval of an appropriate length, one sees that the cyclic group $\mathbb{Z}_m$ contains a sum-free subset with $\lfloor\frac{m+1}{3}\rfloor$ elements. Second, if $G$ is an abelian group, $H$ is a subgroup of $G$, $\pi \colon G \to G / H$ is the canonical homomorphism, and $B$ is a sum-free subset of $G/H$, then the set $\pi^{-1}(B) \subseteq G$ is also sum-free. The appearance of the expression $\lfloor \frac{m+1}{3} \rfloor$ above explains why the following nomenclature is commonly used in this context.
\begin{dfn}
Let $G$ be an abelian group of order $n$. We say that $G$ is of: (i) \emph{type~\Rom{1}} if $n$ has a prime factor $p$ satisfying $p \equiv 2 \pmod 3$; (ii) \emph{type~\Rom{2}} if $n$ has no such prime factor but $3$ divides $n$; (iii) \emph{type~\Rom{3}} otherwise, i.e., if each prime factor $p$ of $n$ satisfies $p \equiv 1 \pmod 3$.
\end{dfn}
Using the above two observations, one can check that if $G$ is an abelian group with $n$ elements, then the largest sum-free set in $G$ has size at least
\begin{itemize}
\item
$(\frac{1}{3}+\frac{1}{3p})n$ if $G$ is of type~\Rom{1}\ and $p$ is the smallest prime factor of $n$ with $p \equiv 2 \pmod 3$,
\item
$\frac{n}{3}$ if $G$ is of type~\Rom{2},
\item
$(\frac{1}{3}-\frac{1}{3m})n$ if $G$ is of type~\Rom{3}\ and $m$ is the largest order of an element in $G$.
\end{itemize}
It turns out that these simple lower bounds are actually tight, but the task of showing that this is indeed the case took more than thirty-five years. This was first proved by Diananda and Yap~\cite{DiYa69} for groups of types \Rom{1}\ and \Rom{2}\ and in~\cite{RS, Y1, Y2} for some groups of type~\Rom{3}. Only many years later, Green and Ruzsa~\cite{GrRu05} established it for all groups.
Motivated by these results on the size of the largest sum-free sets, we consider the following more general questions.
\begin{prob}
\label{prob:main}
Let $A$ be an $a$-element subset of a finite abelian group $G$. How many Schur triples must $A$ contain? Which sets of $a$ elements of $G$ have the minimum number of Schur triples?
\end{prob}
In this paper, we answer these questions for various groups $G$ and ranges of $a$. Some estimates for the number of Schur triples in large subsets of abelian groups appeared already in~\cite{GrRu05, LeLuSc01}, but to the best of our knowledge, we are the first to explicitly consider these questions and obtain exact results.
Given a subset $A$ of an abelian group, we shall denote by $\mathrm{ST}(A)$ the number of Schur triples contained in $A$. More precisely, we let
\[
\mathrm{ST}(A) = \left|\left\{ (x,y,z) \in A^3 \colon x+y=z \right\}\right|,
\]
so that if $x+y=z$ and $x \neq y$, then we consider $(x,y,z)$ and $(y,x,z)$ as different triples.
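A direct brute-force implementation of this count, convenient for experimenting with small groups (our own illustration, not from the paper):

```python
def schur_triples(A, n):
    """Number of ordered triples (x, y, z) in A^3 with x + y = z in Z_n."""
    S = set(A)
    return sum(1 for x in S for y in S if (x + y) % n in S)

# The 'middle' interval {4, 5, 6, 7} is sum-free in Z_11 ...
assert schur_triples({4, 5, 6, 7}, 11) == 0
# ... while {1, 2, 3} in Z_7 contains (1,1,2), (1,2,3) and (2,1,3).
assert schur_triples({1, 2, 3}, 7) == 3
```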
Our first result concerns cyclic groups of prime order. In this case, we derive a complete answer to both parts of Problem~\ref{prob:main} from a classical result of Pollard~\cite{Po74} and its stability counterpart due to Nazarewicz, O'Brien, O'Neill, and Staples~\cite{NaOBrONeSt07}.
\begin{thm}
\label{thm:Zp}
Suppose that $p$ is an odd prime and order the elements of the $p$-element cyclic group $\mathbb{Z}_p$ as $x_1, \ldots, x_p$, where
\[
x_{2i} = \frac{p-1}{2} + i \qquad \text{and} \qquad x_{2i+1} = \frac{p-1}{2} - i.
\]
For every $A \subseteq \mathbb{Z}_p$ with $a$ elements,
\begin{equation}
\label{eq:STA-Zp}
\mathrm{ST}(A) \ge \mathrm{ST}(\{x_1, \ldots, x_a\}) =
\begin{cases}
0, & \text{if $a \le \frac{p+1}{3}$,} \\
\left\lfloor \frac{3a-p}{2} \right\rfloor \left\lceil \frac{3a-p}{2} \right\rceil, & \text{if $a > \frac{p+1}{3}$}.
\end{cases}
\end{equation}
Moreover, if $\mathrm{ST}(\{x_1, \ldots, x_a\}) > 0$, then equality holds above only if $A = \varphi(\{x_1, \ldots, x_a\})$ for some $\varphi \in \mathrm{Aut}(\mathbb{Z}_p)$, that is, if $A = \xi \cdot \{x_1, \ldots, x_a\}$ for some nonzero $\xi \in \mathbb{Z}_p$.
\end{thm}
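For a small prime such as $p=11$, both the closed formula in (\ref{eq:STA-Zp}) and the minimality of the initial segments $\{x_1, \ldots, x_a\}$ can be confirmed by exhaustive search. The following sketch is our own illustration; the helper names are not from the paper:

```python
from itertools import combinations

def st(A, p):
    """Ordered Schur triples of A inside Z_p."""
    A = set(A)
    return sum(1 for x in A for y in A if (x + y) % p in A)

def theorem_order(p):
    """The ordering x_1, ..., x_p from the statement of the theorem."""
    xs = [(p - 1) // 2]                      # x_1 = (p-1)/2
    for i in range(1, (p + 1) // 2):
        xs.append(((p - 1) // 2 + i) % p)    # x_{2i}
        xs.append(((p - 1) // 2 - i) % p)    # x_{2i+1}
    return xs

p = 11
xs = theorem_order(p)
for a in range(1, p + 1):
    t = 3 * a - p
    expected = 0 if t <= 1 else (t // 2) * ((t + 1) // 2)  # floor * ceil
    assert st(xs[:a], p) == expected
    # Exhaustive minimality check over all a-element subsets of Z_p.
    assert all(st(B, p) >= expected for B in combinations(range(p), a))
```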
Our second result concerns groups of type~\Rom{1}. We shall say that a group $G$ is of \emph{type \tI$(p)$} if $p$ is the smallest prime factor of $|G|$ among those satisfying $p \equiv 2 \pmod 3$. Suppose that $G$ is of type \Rom{1}$(p)$. It was proved by Diananda and Yap~\cite{DiYa69} that the largest sum-free set in $G$ has $(\frac{1}{3} + \frac{1}{3p})|G|$ elements. We shall generalise this result by answering both questions in Problem~\ref{prob:main} under the assumption that $|A| \le (\frac{1}{3} + \frac{1+\delta}{3p})|G|$ for some absolute constant $\delta$.
\begin{thm}
\label{thm:type-I}
There exists a positive constant $\delta$ such that the following holds. Suppose that $p$ is a prime satisfying $p \equiv 2 \pmod 3$ and let $G$ be a group of type \tI$(p)$. If $0 \le t \le \delta|G|/p$, then for every $A \subseteq G$ with $(\frac{1}{3} + \frac{1}{3p})|G| + t$ elements,
\begin{equation}
\label{eq:type-I}
\mathrm{ST}(A) \ge \frac{3t|G|}{p} + \mathbbm{1}[p \neq 2] \cdot t^2.
\end{equation}
\end{thm}
Our proof of Theorem~\ref{thm:type-I} will also yield the following characterisation of all sets achieving equality in~\eqref{eq:type-I}. Let $p$ and $G$ be as in the statement of the theorem and suppose that $p = 3k+2$. Let $\varphi \colon G \to \mathbb{Z}_p$ be an arbitrary surjective homomorphism, let $A_0 = \varphi^{-1}(\{k+1, \ldots, 2k+1\})$, and note that $A_0$ is a sum-free set with $(\frac{1}{3} + \frac{1}{3p})|G|$ elements. Given a $t$ with $0 \le t \le 2|G|/(7p)$, let $A_t'$ be an arbitrary sum-free subset of $\varphi^{-1}(\{k\})$ with $t$ elements\footnote{Such a set exists: if $k > 0$, then the set $\varphi^{-1}(\{k\})$ itself is sum-free and has $|G|/p$ elements; if $k = 0$, then $\varphi^{-1}(\{k\})$ is a subgroup of $G$ with index $2$ and every nontrivial abelian group $H$ contains a sum-free set with at least $2|H|/7$ elements.} and let $A_t = A_0 \cup A_t'$. Then $\mathrm{ST}(A_t) = \frac{3t|G|}{p} + \mathbbm{1}[p \neq 2] \cdot t^2$. Moreover, every set $A \subseteq G$ that achieves equality in~\eqref{eq:type-I} is of this form.
Our third result is a complete solution to Problem~\ref{prob:main} for the `hypercube', i.e., the group~$\mathbb{Z}_2^n$. Here, there is a very elegant way of describing a sequence of sets minimising the number of Schur triples among all subsets of $\mathbb{Z}_2^n$ of the same cardinality.
\begin{thm}
\label{thm:Z_2n}
Let $n$ be a positive integer. For each $a \in \{1, \ldots, 2^n-1\}$, let $A_a$ be the set of vectors in $\{0,1\}^n$ that are binary representations of the numbers $2^n-1, \ldots, 2^n-a$. Let $k$ be the unique integer satisfying $2^n - 2^k \le a < 2^n - 2^{k-1}$. Then for every $A \subseteq \mathbb{Z}_2^n$ with $a$ elements,
\begin{equation}
\label{eq:Z_2n}
\mathrm{ST}(A) \ge \mathrm{ST}(A_a) = (3a+2^k-2^{n+1})(2^n-2^k).
\end{equation}
\end{thm}
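For $n=3$ the statement can be verified exhaustively, encoding elements of $\mathbb{Z}_2^n$ as integers with XOR as the group operation. A hedged sketch (our own illustration):

```python
from itertools import combinations

def st_xor(A):
    """Ordered Schur triples in a subset of Z_2^n, elements as ints, + = XOR."""
    S = set(A)
    return sum(1 for x in S for y in S if x ^ y in S)

n = 3
for a in range(1, 2 ** n):
    # The unique k with 2^n - 2^k <= a < 2^n - 2^(k-1).
    k = next(k for k in range(n + 1)
             if 2 ** n - 2 ** k <= a and 2 * a < 2 ** (n + 1) - 2 ** k)
    bound = (3 * a + 2 ** k - 2 ** (n + 1)) * (2 ** n - 2 ** k)
    A_a = set(range(2 ** n - a, 2 ** n))   # binary reps of 2^n-1, ..., 2^n-a
    assert st_xor(A_a) == bound
    # Exhaustive minimality check over all a-element subsets of Z_2^n.
    assert all(st_xor(B) >= bound for B in combinations(range(2 ** n), a))
```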
Our proof of Theorem~\ref{thm:Z_2n} will also yield the following characterisation of sets achieving equality in~\eqref{eq:Z_2n}. Let $a$, $k$, and $n$ be as in the statement of the theorem. For every $a$-element $A \subseteq \mathbb{Z}_2^n$ satisfying $\mathrm{ST}(A) = \mathrm{ST}(A_a)$, the following holds. There is a subgroup $K < \mathbb{Z}_2^n$ with $2^k$ elements such that $\mathbb{Z}_2^n \setminus K \subseteq A$ and $A \cap K$ is sum-free. One may check that each $A$ of this form satisfies $\mathrm{ST}(A) = \mathrm{ST}(A_a)$. As clearly each such $K$ is isomorphic to~$\mathbb{Z}_2^k$, one may say a little more about the structure of $A \cap K$ for certain ranges of $a$. In particular, it was proved in~\cite{ClDuRo90, ClPe92} that each sum-free subset of $\mathbb{Z}_2^k$ with more than $5 \cdot 2^{k-4}$ elements is contained in some maximum-size sum-free subset of $\mathbb{Z}_2^k$, i.e., the odd coset of some subgroup of index two. (Smaller sum-free sets of $\mathbb{Z}_2^k$ do not admit such an elegant structural description. For example, if $k \ge 4$, then the set $\{e_1, e_2, e_3, e_4, e_1+e_2+e_3+e_4\} + \mathrm{span}\{e_5, \ldots, e_k\}$, where $e_1, \ldots, e_k$ is a basis of $\mathbb{Z}_2^k$ as a vector space over $\mathbb{Z}_2$, is sum-free, has $5 \cdot 2^{k-4}$ elements, and is not contained in any maximum-size sum-free subset of $\mathbb{Z}_2^k$.)
Finally, we consider Problem~\ref{prob:main} for groups of type~\Rom{2}, i.e., groups whose order is divisible by three. As it turns out, here the answer is much less `uniform' among all groups in this class. To be more precise, given an abelian group $G$, let $f_G$ be the function defined by
\begin{equation}
\label{eq:fG}
f_G(a) = \min\{\mathrm{ST}(A) \colon A \subseteq G \text{ and } |A| = a\}
\end{equation}
and let $a_G$ be the largest cardinality of a sum-free set in $G$, i.e., $a_G = \max\{a \colon f_G(a) = 0\}$. On the one hand, if $G = \mathbb{Z}_3^n$, then $f_G$ `behaves' similarly as in the case when $G$ is of type \tI$(p)$\ for some fixed prime $p$, namely, $f_G(a+1) - f_G(a)$ is of order $|G|$ for all $a$ in an interval of length $\Omega(|G|/p)$ starting at $a_G$. On the other hand, if $G = \mathbb{Z}_3 \times \mathbb{Z}_p$, where $p$ is a prime with $p \equiv 1 \pmod 3$, then $f_G(a)$ is merely of order $(a-a_G)^2$ for all $a \ge a_G$. (As $G$ is of type~\Rom{2}, $a_G = p$.)
\begin{thm}
\label{thm:Z_3n}
There exists a positive constant $\delta$ such that the following holds. Let $n$ be a positive integer and suppose that $0 \le t \le \delta 3^{n-1}$. Then for every $A \subseteq \mathbb{Z}_3^n$ with $3^{n-1}+t$ elements,
\begin{equation}
\label{eq:Z_3n}
\mathrm{ST}(A) \ge 3^{n-1}t + t^2.
\end{equation}
Moreover, \eqref{eq:Z_3n} holds with equality when $A$ is the union of $\{x \in \mathbb{Z}_3^n \colon x_1 = 1\}$ and an arbitrary $t$-element sum-free subset of $\{x \in \mathbb{Z}_3^n \colon x_1 = 2\}$.
\end{thm}
\begin{prop}
\label{prop:Z3Zp}
Let $p$ be a prime. Then for every $a \in \{p+1, \ldots, 3p\}$, there exists an $a$-element set $A \subseteq \mathbb{Z}_3 \times \mathbb{Z}_p$ with
\[
\mathrm{ST}(A) \le 21(a-p)^2.
\]
\end{prop}
Last but not least, the argument we use in our proof of Theorem~\ref{thm:type-I} can be adapted to show that every sufficiently large set $A$ of elements of an arbitrary finite abelian group that is nearly sum-free must necessarily contain a genuinely sum-free set $B$ such that $|A \setminus B|$ is very small.
\begin{prop}
\label{prop:Green-Ruzsa-optimal}
Suppose that $\varepsilon > 0$ and let $G$ be a finite abelian group. If some $A \subseteq G$ has at least $(\frac{1}{3}+\varepsilon)|G|$ elements and $\mathrm{ST}(A) \le \varepsilon^2|G|^2/2$, then $A$ contains a sum-free set $B$ with $|A \setminus B| \le \varepsilon|G|$.
\end{prop}
A slightly weaker version of this useful fact was first proved by Green and Ruzsa~\cite{GrRu05} (their statement has a stronger requirement on the number of Schur triples). Observe that our assumption on $\mathrm{ST}(A)$ in the above proposition is optimal up to an absolute multiplicative constant. Indeed, Proposition~\ref{prop:Z3Zp} implies that for every $\varepsilon \in (0,\frac{2}{3})$, there are a group $G$ with no sum-free set larger than $|G|/3$ and a set $A$ with $\mathrm{ST}(A) \le 22 \varepsilon^2 |G|^2$ and more than $(\frac{1}{3} + \varepsilon)|G|$ elements (and hence no sum-free subset $B$ with $|A \setminus B| \le \varepsilon |G|$).
\section{Cyclic groups of prime order}
\label{sec:Zp}
In this section, we prove Theorem~\ref{thm:Zp}. The first part of the theorem, a lower bound on the number of Schur triples in an arbitrary set of $a$ elements of $\mathbb{Z}_p$ is a fairly straightforward consequence of the following result of Pollard~\cite{Po74}, which generalises the well-known theorem of Cauchy~\cite{Ca13} and Davenport~\cite{Da35}.
\begin{thm}
\label{thm:Pollard}
Let $p$ be a prime and let $A, B \subseteq \mathbb{Z}_p$. For an integer $r$, denote by $N_r$ the number of elements of $\mathbb{Z}_p$ which are expressible in at least $r$ ways as $x+y$ with $x \in A$ and $y \in B$. Then for every $r$ with $1 \le r \le \min\{|A|,|B|\}$,
\begin{equation}
\label{eq:Pollard}
N_1 + \ldots + N_r \ge r \cdot \min\{p, |A|+|B|-r\}.
\end{equation}
\end{thm}
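Pollard's inequality is convenient to test numerically via the identity $N_1 + \ldots + N_r = \sum_{z} \min\{\mathrm{rep}(z), r\}$, where $\mathrm{rep}(z)$ is the number of representations $z = x+y$ with $x \in A$, $y \in B$ (since $N_j$ counts the $z$ with at least $j$ representations). A small randomized check (our own sketch, names ours):

```python
from random import Random

def pollard_sum(A, B, p, r):
    """N_1 + ... + N_r, i.e. the sum over z in Z_p of min(#representations, r)."""
    reps = [0] * p
    for x in A:
        for y in B:
            reps[(x + y) % p] += 1
    return sum(min(c, r) for c in reps)

rng = Random(1)
p = 13
for _ in range(200):
    A = set(rng.sample(range(p), rng.randint(1, p)))
    B = set(rng.sample(range(p), rng.randint(1, p)))
    for r in range(1, min(len(A), len(B)) + 1):
        # Pollard's inequality (eq:Pollard).
        assert pollard_sum(A, B, p, r) >= r * min(p, len(A) + len(B) - r)
```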
The second part of Theorem~\ref{thm:Zp} will be derived from the following stability counterpart of Pollard's result due to Nazarewicz, O'Brien, O'Neill, and Staples~\cite{NaOBrONeSt07}.
\begin{thm}
\label{thm:Pollard-stability}
Let $p$ be a prime, let $A, B \subseteq \mathbb{Z}_p$, and let $r$ be an integer with $1 \le r \le \min\{|A|,|B|\}$. Then equality holds in~\eqref{eq:Pollard} of Theorem~\ref{thm:Pollard} if and only if at least one of the following conditions holds:
\begin{enumerate}
\renewcommand{\theenumi}{\textit{(\roman{enumi})}}
\item
\label{item:Pollard-stability-i}
$\min\{|A|, |B|\} = r$,
\item
\label{item:Pollard-stability-ii}
$|A| + |B| \ge p + r$,
\item
\label{item:Pollard-stability-iii}
$|A| = |B| = r + 1$ and $B = x - A$ for some $x \in \mathbb{Z}_p$, or
\item
\label{item:Pollard-stability-iv}
$A$ and $B$ are arithmetic progressions with the same common difference.
\end{enumerate}
\end{thm}
\begin{proof}[{Proof of Theorem~\textup{\ref{thm:Zp}}}.]
Fix an arbitrary set $A$ of $a$ elements of $\mathbb{Z}_p$. Given an $r \ge 1$, let $S_r$ denote the set of all elements in $\mathbb{Z}_p$ that are expressible in at least $r$~ways as $x+y$ with $x, y \in A$. Recall that $N_r=|S_r|$. Moreover, let $N_r' = |S_r \cap A|$. Clearly, $N_r' \ge N_r + a - p$ and hence by Theorem~\ref{thm:Pollard}, for any $R \ge 0$,
\begin{equation}
\label{eq:STA}
\mathrm{ST}(A) = \sum_{r \ge 1} N_r' \ge \sum_{r = 1}^R N_r' \ge \sum_{r=1}^R N_r + R(a-p) \ge R \cdot \big( \min\{p, 2a-R\} + a - p \big).
\end{equation}
Let $R = \max\big\{0, \lceil \frac{3a-p}{2} \rceil\big\}$. Note that $a \le p$ implies that $2a \le p + \frac{3a-p}{2}$ and consequently
\[
2a \le p + \left\lceil \frac{3a-p}{2} \right\rceil \le p + R.
\]
In particular, the minimum in the right-hand side of~\eqref{eq:STA} is equal to $2a-R$ and therefore,
\begin{equation}
\label{eq:STA-final}
\mathrm{ST}(A) \ge R(3a-R-p) = \max\left\{0, \left\lceil \frac{3a-p}{2} \right\rceil\right\} \cdot \min\left\{ 3a-p, \left\lfloor \frac{3a-p}{2} \right\rfloor \right\}.
\end{equation}
It is straightforward to check that the right-hand side of~\eqref{eq:STA-final} is equal to the right-hand side of~\eqref{eq:STA-Zp}. In order to complete the proof of~\eqref{eq:STA-Zp}, we still need to verify the equality there. Assume first that $a$ is even. Then $\{x_1, \ldots, x_a\} = \big\{ \frac{p+1-a}{2}, \ldots, \frac{p+a-1}{2}\big\}$. A Schur triple $(x,y,z)$ in $\{x_1, \ldots, x_a\}$ may be of one of the following two types: either $x+y=z$ or $x+y = z+p$, where the equalities hold in $\mathbb{Z}$. Let $R' = \max\big\{0, \frac{3a-p-1}{2}\big\} = \max\big\{0, \lfloor\frac{3a-p}{2}\rfloor\big\}$. It is easy to check that $\frac{p+1-a}{2}$ plays the role of $x$ in exactly $R'$ triples of the first type, as $y$ ranges over the $R'$ smallest elements of the interval $\big\{ \frac{p+1-a}{2}, \ldots, \frac{p+a-1}{2}\big\}$. More generally, the element $\frac{p+1-a}{2} + i$ plays the role of $x$ in exactly $R'-i$ triples of the first type. By symmetry, $\frac{p+a-1}{2}$ plays the role of $x$ in exactly $R'$ triples of the second type, as $y$ ranges over the $R'$ largest elements of the interval $\big\{ \frac{p+1-a}{2}, \ldots, \frac{p+a-1}{2}\big\}$, and more generally, $\frac{p+a-1}{2} - i$ plays the role of $x$ in exactly $R'-i$ triples of the second type. It follows that
\begin{equation}
\label{eq:STxi-a-even}
\begin{split}
\mathrm{ST}(\{x_1, \ldots, x_a\}) & = 2 \cdot \sum_{r = 1}^{R'} r = 2 \cdot \binom{R'+1}{2} = R'(R'+1) \\
& = \max\left\{0, \left\lfloor \frac{3a-p}{2} \right\rfloor\right\} \cdot \max\left\{1, \left\lceil\frac{3a-p}{2}\right\rceil\right\}.
\end{split}
\end{equation}
It is straightforward to verify that the right-hand side of~\eqref{eq:STxi-a-even} is equal to the right-hand side of~\eqref{eq:STA-Zp}. When $a$ is odd, then $\{x_1, \ldots, x_a\} = \big\{ \frac{p-a}{2}, \ldots, \frac{p+a-2}{2}\big\}$ and, letting $R'' = \max\big\{0, \frac{3a-p}{2}\big\} = \max\big\{0, \lfloor \frac{3a-p}{2} \rfloor\big\} = \max\big\{0, \lceil\frac{3a-p}{2}\rceil\big\}$, analogous considerations yield
\begin{equation}
\label{eq:STxi-a-odd}
\begin{split}
\mathrm{ST}(\{x_1, \ldots, x_a\}) & = \binom{R''+1}{2} + \binom{R''}{2} = (R'')^2 \\
& = \max\left\{0, \left\lceil \frac{3a-p}{2} \right\rceil\right\} \cdot \max\left\{0, \left\lfloor \frac{3a-p}{2} \right\rfloor \right\}.
\end{split}
\end{equation}
It is easy to check that the right-hand side of~\eqref{eq:STxi-a-odd} is equal to the right-hand side of~\eqref{eq:STA-Zp}.
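For odd $p$, the right-hand sides of~\eqref{eq:STxi-a-even} and~\eqref{eq:STxi-a-odd} both reduce to $\max\{0, \lceil \frac{3a-p}{2} \rceil\} \cdot \max\{0, \lfloor \frac{3a-p}{2} \rfloor\}$. The following brute-force sanity check (purely illustrative and not part of the proof; the helper names are ours) confirms for small primes that the centred interval attains this value, and, for $p \equiv 2 \pmod 3$, that no other $a$-element set does better:

```python
from itertools import combinations

def schur_triples(A, p):
    """Number of ordered triples (x, y, z) in A^3 with x + y = z (mod p)."""
    S = set(A)
    return sum((x + y) % p in S for x in A for y in A)

def interval_set(a, p):
    """The centred interval {x_1, ..., x_a} from the proof; for odd p its
    first element is floor((p + 1 - a) / 2) in both parity cases."""
    start = (p + 1 - a) // 2
    return [(start + i) % p for i in range(a)]

def st_formula(a, p):
    """max{0, ceil((3a - p) / 2)} * max{0, floor((3a - p) / 2)}."""
    d = 3 * a - p
    return max(0, (d + 1) // 2) * max(0, d // 2)

# the centred interval attains the closed form for every odd prime p
for p in (5, 7, 11, 13):
    for a in range(1, p + 1):
        assert schur_triples(interval_set(a, p), p) == st_formula(a, p)

# exhaustive minimisation over all a-element subsets (checked for p = 5, 11)
for p in (5, 11):
    for a in range(1, p + 1):
        assert min(schur_triples(A, p)
                   for A in combinations(range(p), a)) == st_formula(a, p)
```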
We now characterise all sets $A$ with $a$ elements that achieve the lower bound in~\eqref{eq:STA-Zp} whenever the right-hand side of~\eqref{eq:STA-Zp} is nonzero, that is, when $a > \lfloor \frac{p+1}{3} \rfloor$. To this end, let us analyse when all inequalities in~\eqref{eq:STA} hold with equality. In particular, equality must hold in~\eqref{eq:Pollard} of Theorem~\ref{thm:Pollard} invoked with $A \leftarrow A$, $B \leftarrow A$, and $r \leftarrow R$, where $R = \max\big\{0, \lceil \frac{3a-p}{2} \rceil\big\} \ge 1$ and the inequality follows from our assumption that $a > \lfloor \frac{p+1}{3} \rfloor$. Theorem~\ref{thm:Pollard-stability} tells us that~\eqref{eq:STA} can hold with equality only if one of the following conditions is satisfied:
\begin{enumerate}
\renewcommand{\theenumi}{(\roman{enumi})}
\item
\label{item:Zp-stab-1}
$a = R$,
\item
$2a \ge p + R$,
\item
$a = R + 1$ and $A = x - A$ for some $x \in \mathbb{Z}_p$,
\item
\label{item:Zp-stab-4}
$A$ is an arithmetic progression.
\end{enumerate}
Now observe that
\begin{align*}
a = R=\lceil a-\frac{p-a}{2} \rceil & \Longleftrightarrow \left\lfloor \frac{p-a}{2} \right\rfloor = 0 \Longleftrightarrow a \ge p-1, \\
2a \ge p+R & \Longleftrightarrow a \ge \left\lceil \frac{a+p}{2} \right\rceil \Longleftrightarrow a \ge p, \\
a = R+1 & \Longleftrightarrow \left\lfloor \frac{p-a}{2} \right\rfloor = 1 \Longleftrightarrow a \in \{p-3, p-2\},
\end{align*}
and that every set $A \subseteq \mathbb{Z}_p$ that satisfies either $|A| \ge p-2$ or both $|A| = p-3$ and $A = x - A$ for some $x \in \mathbb{Z}_p$ is an arithmetic progression (to see this, note that $A^c$ is an arithmetic progression and that the complement of an arithmetic progression in $\mathbb{Z}_p$ is again one). We deduce that each of \ref{item:Zp-stab-1}--\ref{item:Zp-stab-4} implies that $A$ must be an arithmetic progression with some common difference $d$. We may assume that $d = 1$ as otherwise we may replace $A$ by its automorphic image $d^{-1} \cdot A$. Hence, $A = \{x, \ldots, x+a-1\}$ for some $x \in \mathbb{Z}_p$.
In order for~\eqref{eq:STA} to hold with equality, it must also be that
\[
|A \cap S_R| = N_R' = N_R + a - p = |S_R| - |A^c|,
\]
that is, $A^c \subseteq S_R$. One easily checks that $S_R = \{2x+R-1, \ldots, 2x +2a-R-1\}$ and hence $|S_R| = 2(a-R)+1 = 2\lfloor \frac{p-a}{2} \rfloor + 1$. If $a$ is even, then $|A^c| = p-a = |S_R|$ and hence $x + a = 2x+R-1$, which yields $x = \frac{p-a+1}{2}$, that is,
\[
A = \left\{ \frac{p+1}{2} - \frac{a}{2}, \ldots, \frac{p-1}{2} + \frac{a}{2} \right\} = \{x_1, \ldots, x_a\}.
\]
If $a$ is odd, then $|A^c| = |S_R| - 1$ and hence either $x+a = 2x+R-1$ or $x+a = 2x+R$, yielding $x = \frac{p-a}{2}$ or $x = \frac{p-a}{2} + 1$, that is,
\[
A = \pm\left\{ \frac{p-a}{2} + 1, \ldots, \frac{p+a}{2} \right\} = \pm\{x_1, \ldots, x_a\}.\qedhere
\]
\end{proof}
\section{Groups of type~\Rom{1}}
\label{sec:type-I}
In this section, we prove Theorem~\ref{thm:type-I}. Our argument uses some ideas from~\cite{GrRu05,LeLuSc01}. Actually, one may adapt the arguments of these two papers to establish Theorem~\ref{thm:type-I} under the stronger assumption that $t \le \delta n/p^4$ for some positive constant $\delta$. Our main tool will be the following classical result of Kneser~\cite{Kn53,Kn55}. The version stated below is~\cite[Theorem~3.1]{Ke60}. Recall that the \emph{stabiliser} of a set $A$ of elements of an abelian group $G$, denoted by $\mathrm{Stab}(A)$, is defined by
\[
\mathrm{Stab}(A) = \{x \in G \colon A + x = A\}.
\]
\begin{thm}
\label{thm:Kneser}
Let $A$ and $B$ be finite non-empty subsets of an abelian group $G$ satisfying $|A+B| \le |A| + |B| - 1$. Then $H = \mathrm{Stab}(A+B)$ satisfies
\[
|A+B| = |A+H| + |B+H| - |H|.
\]
\end{thm}
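Kneser's theorem is easy to confirm by exhaustive search in a small group in which nontrivial stabilisers genuinely occur, e.g.\ $\mathbb{Z}_{12}$. The snippet below (an illustrative check only; the helper names are ours) verifies the identity for all pairs of $3$-element sets:

```python
from itertools import combinations

def sumset(A, B, n):
    """A + B inside Z_n."""
    return {(a + b) % n for a in A for b in B}

def stabiliser(S, n):
    """Stab(S) = {x in Z_n : S + x = S}."""
    return {x for x in range(n) if {(s + x) % n for s in S} == S}

n = 12
for A in combinations(range(n), 3):
    # by translation invariance of the identity we may assume 0 in B
    for B0 in combinations(range(1, n), 2):
        B = (0,) + B0
        AB = sumset(A, B, n)
        if len(AB) <= len(A) + len(B) - 1:   # hypothesis of Kneser's theorem
            H = stabiliser(AB, n)
            assert len(AB) == len(sumset(A, H, n)) + len(sumset(B, H, n)) - len(H)
```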
\begin{proof}[{Proof of Theorem~\textup{\ref{thm:type-I}}}.]
Let $\delta = 1/82$ and let $p$, $G$, and $t$ be as in the statement of the theorem. Denote the order of $G$ by $n$ and let $A$ be an arbitrary set of $(\frac{1}{3} + \frac{1}{3p})n + t$ elements of $G$. Let $n' = n/(3p)$ and define
\[
C_- = \big\{x \in A \colon |(x-A) \cap A| \ge n'\big\} \quad \text{and} \quad
C_+ = \big\{x \in A \colon |(x+A) \cap A| \ge n'\big\}.
\]
Using the inclusion-exclusion principle (Bonferroni's inequality) to count Schur triples $(x,y,z) \in A^3$ such that $x \in C_+$, $y \in C_+$, or $z \in C_-$ yields
\[
\mathrm{ST}(A) \ge (2|C_+| + |C_-|) \cdot n' - |C_+|^2 - 2|C_-||C_+|.
\]
Therefore, if $|C_-| \ge 4t$ and $|C_+| \ge 4t$, then, passing to subsets of $C_-$ and $C_+$ of size exactly $4t$, we have
\[
\mathrm{ST}(A) \ge 12tn' - 48t^2 = \frac{4tn}{p} - 48t^2 > \frac{3tn}{p} + t^2,
\]
where the last inequality follows as $49t^2 \le 49\delta tn/p < tn/p$. Hence, we may assume that either $|C_-| < 4t$ or $|C_+| < 4t$.
Given a $* \in \{-, +\}$, let $B = A \setminus C_*$ and observe that $|(x*A) \cap A^c| \ge |A| - n'$ for each $x \in B$ and hence for every $x, y \in B$,
\begin{multline*}
|(x*A) \cap (y * A)| \ge |(x*A) \cap (y*A) \cap A^c| \ge 2(|A|-n') - (n-|A|) \\
= 3|A| - n - 2n' = n/p + 3t - 2n' = n/(3p) + 3t \ge n/(3p).
\end{multline*}
In other words, for every pair $x,y \in B$, there are at least $n/(3p)$ pairs $a,b \in A$ such that $x-y = a - b$.
Fix a $* \in \{-,+\}$ such that $|C_*| < 4t$ and let $B = A \setminus C_*$. Using our observation above to count Schur triples $(x,y,z) \in A^3$ such that $x \in B - B$ yields
\begin{equation}
\label{eq:STA-B-B-lower}
\mathrm{ST}(A) \ge |(B - B) \cap A| \cdot n/(3p).
\end{equation}
Hence, we may assume that $|(B-B) \cap A| \le 9t + 3t^2p/n \le 10t$, where the last inequality holds as $t \le \delta n / p$. In particular,
\begin{equation}
\label{eq:B-B-upper}
|B-B| \le n - |A| + 10t = 2|A|-n/p+7t \le 2|B| - n/p + 15t,
\end{equation}
where the last inequality follows as $|B| = |A| - |C_*| \ge |A| - 4t$. Letting $H = \mathrm{Stab}(B-B)$, Kneser's theorem (Theorem~\ref{thm:Kneser}) implies that
\begin{equation}
\label{eq:B-B-lower}
|B-B| = 2|B+H|-|H| \ge 2|B| - |H|.
\end{equation}
Putting~\eqref{eq:B-B-upper} and~\eqref{eq:B-B-lower} together yields
\begin{equation}
\label{eq:H-properties}
|H| \ge n/p -15t \quad \text{and} \quad |B+H| - |B| \le \frac{|H|-n/p+15t}{2}.
\end{equation}
Let $m = |G/H|$ and observe that~\eqref{eq:H-properties} yields
\begin{equation}
\label{eq:m-lower}
m \le \frac{n}{n/p - 15t} \le \frac{p}{1-15\delta}.
\end{equation}
Note also that by our assumption that $|C_*| < 4t$, we have
\begin{equation}
\label{eq:B-lower}
|B| > |A|-4t = n/3+n/(3p)-3t \ge n/3 + (1/3 - 3\delta) n/p > n/3.
\end{equation}
Now, let $B_H = (B+H) / H$. As $|B_H| > m/3$ by~\eqref{eq:B-lower}, we must have $|B_H| \ge \lceil \frac{m+1}{3} \rceil$. We shall now consider two cases.
\medskip
\textsc{Case 1.} $m \not\equiv 2 \pmod 3$.
\smallskip
In this case, $|B_H| \ge \frac{m+2}{3}$ and it follows from~\eqref{eq:B-B-lower} that $|B-B|/|H| = 2|B_H| - 1 \ge \frac{2m+1}{3}$. This implies that
\[
|(B-B) \cap A| \ge \left(\frac{2}{3}+\frac{1}{3m}\right)n + \left(\frac{1}{3}+\frac{1}{3p}\right)n - n > \frac{n}{3p} \ge 10t,
\]
contradicting our assumption, cf.~\eqref{eq:STA-B-B-lower}.
\medskip
\textsc{Case 2.} $m \equiv 2 \pmod 3$.
\smallskip
Let $q$ be the smallest prime factor of $m$ satisfying $q \equiv 2 \pmod 3$; $m$ has such a prime factor as otherwise $m \not\equiv 2 \pmod 3$. Since $m$ divides $n$ and $p$ is the smallest prime factor of $n$ satisfying $p \equiv 2 \pmod 3$, we must have $q \ge p$. But $m < 2p$ by~\eqref{eq:m-lower}, so necessarily $m = q$, that is, $m$ is a prime satisfying $m \equiv 2 \pmod 3$ and $m \ge p$. This means that $G/H$ is the cyclic group $\mathbb{Z}_m$.
We now claim that $B_H \cap (B_H - B_H) = \emptyset$. Indeed, otherwise we would have $|(B+H) \cap (B-B)| \ge |H|$ (since $B-B$ is a union of cosets of $H$) and, since $B \subseteq A$, by~\eqref{eq:H-properties},
\begin{equation}
\label{eq:A-structure}
|(B+H) \setminus A| \le |B+H|-|B| \le \frac{n/m-n/p+15t}{2} \le 7.5t,
\end{equation}
which in turn would yield
\[
|A \cap (B-B)| \ge |H| - 7.5t = n/m - 7.5t \ge (1-15\delta)n/p - 7.5t > 10t,
\]
contradicting our assumption, cf.~\eqref{eq:STA-B-B-lower}.
Therefore, it must be that $B_H \cap (B_H - B_H) = \emptyset$, that is, $B_H \subseteq \mathbb{Z}_m$ is a sum-free set. But $|B_H| > m/3$, which means that $|B_H| = \frac{m+1}{3}$ and it follows from the results of Diananda and Yap~\cite{DiYa69} that, up to an automorphism of $\mathbb{Z}_m$, $B_H = \{\ell+1, \ldots, 2\ell+1\}$, where $m = 3\ell+2$.
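The Diananda--Yap classification invoked here can be confirmed exhaustively for small $m$. The sketch below (illustrative only; the helper names are ours) checks that every sum-free subset of $\mathbb{Z}_m$ of the maximum size $\frac{m+1}{3}$ is a dilate of the middle interval:

```python
from itertools import combinations

def is_sum_free(B, m):
    """No solution to x + y = z (mod m) with x, y, z in B."""
    S = set(B)
    return all((x + y) % m not in S for x in S for y in S)

for m in (5, 11):                          # primes m = 3l + 2
    l = (m - 2) // 3
    mid = set(range(l + 1, 2 * l + 2))     # {l+1, ..., 2l+1}, size (m+1)/3
    assert is_sum_free(mid, m)
    for B in combinations(range(m), (m + 1) // 3):
        if is_sum_free(B, m):
            # B = d * mid for some unit d, i.e. an automorphic image of mid
            assert any({(d * x) % m for x in mid} == set(B) for d in range(1, m))
```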
\medskip
Let $A_0 = B+H$ and let $\varphi \colon G \to \mathbb{Z}_m$ be a homomorphism that maps $A_0$ to $\{\ell+1, \ldots, 2\ell+1\}$; in particular $H = \varphi^{-1}(0)$ and $B_H = \{\ell+1, \ldots, 2\ell+1\}$. As $-\varphi$ is also such a homomorphism and $-\ell = 2\ell+2$ in $\mathbb{Z}_m$, we may assume that $|A \cap \varphi^{-1}(\ell)| \ge |A \cap \varphi^{-1}(2\ell+2)|$. It follows from~\eqref{eq:A-structure} that $|A_0 \setminus A| \le 7.5t$. We shall now perform a stability analysis of $A$ and prove that~\eqref{eq:type-I} holds and the inequality there is strict unless $A_0 \subseteq A$, $m = p$, and $A \setminus A_0$ is a sum-free subset of $\varphi^{-1}(\{\ell\})$.
We first claim that replacing an element from $A \setminus A_0$ with an element of $A_0 \setminus A$ only decreases the number of Schur triples in $A$. Indeed, as $A_0$ is sum-free, a given element of $A_0$ participates only in Schur triples that contain an element of $A \setminus A_0$; clearly, the number of such triples is at most $6|A \setminus A_0|$, which is at most $51t$ as
\[
|A \setminus A_0| = |A| - |A \cap A_0| = (|A_0| + t) - (|A_0| - |A_0 \setminus A|) \le 8.5t.
\]
On the other hand, for every $j \not\in \{\ell+1, \ldots, 2\ell+1\}$, there are $h, i \in \{\ell+1, \ldots, 2\ell+1\}$ such that $h + i = j$. Therefore if $x \in A \setminus A_0$, then, letting $j = \varphi(x) \not\in \{\ell+1, \ldots, 2\ell+1\}$, the number $\mathrm{ST}_x(A)$ of Schur triples in $A$ that contain $x$ satisfies
\begin{multline*}
\mathrm{ST}_x(A) \ge \left|\big(x - (\varphi^{-1}(i) \cap A)\big) \cap (\varphi^{-1}(h) \cap A)\right| \ge |\varphi^{-1}(i) \cap A| + |\varphi^{-1}(h) \cap A| - |H| \\
\ge 2(|H| - 7.5t) - |H| \ge n/m - 15t > (1-15\delta)n/p - 15t > 51t.
\end{multline*}
Therefore, it suffices to prove~\eqref{eq:type-I}, and characterise all cases of equality there, under the assumption that $A_0 \subseteq A$.
Now, note that each of $\ell$ and $2\ell+2$ participates in exactly three Schur triples with two elements of $\{\ell+1, \ldots, 2\ell+1\}$, namely $(\ell, \ell+1, 2\ell+1)$, $(\ell+1, \ell, 2\ell+1)$, and $(2\ell+1,2\ell+1,\ell)$ for $\ell$, and $(\ell+1,\ell+1,2\ell+2)$, $(2\ell+1,2\ell+2,\ell+1)$, and $(2\ell+2,2\ell+1,\ell+1)$ for $2\ell+2$. On the other hand, every element of $\mathbb{Z}_m \setminus \{\ell, \ldots, 2\ell+2\}$ participates in at least four such triples. It follows that:
\begin{enumerate}
\renewcommand{\theenumi}{(\roman{enumi})}
\item
\label{item:stability-1}
Every element of $\varphi^{-1}(\{\ell,2\ell+2\})$ forms $3n/m$ Schur triples with two elements of $A_0$ and at most $6|A \setminus A_0|$ additional Schur triples with two elements of $A$ (one of which is not in $A_0$).
\item
\label{item:stability-2}
Every element of $\varphi^{-1}(\mathbb{Z}_m \setminus \{\ell, \ldots, 2\ell+2\})$ forms at least $4n/m$ Schur triples with two elements of $A_0$.
\end{enumerate}
Since $|A_0| = (\frac{1}{3} + \frac{1}{3m})n$, our assumption that $A_0 \subseteq A$ and~\eqref{eq:m-lower} imply that
\[
|A \setminus A_0| = \frac{n}{3p} + t - \frac{n}{3m} \le \left(\frac{1}{3} +\delta\right)\frac{n}{p} - \frac{n}{3m} \le \left[\frac{1}{1-15\delta}\left(\frac{1}{3} + \delta\right) - \frac{1}{3}\right] \frac{n}{m} < \frac{n}{6m}.
\]
It therefore follows from~\ref{item:stability-1} and~\ref{item:stability-2} that moving elements from $A \cap \varphi^{-1}(\mathbb{Z}_m \setminus \{\ell, \ldots, 2\ell+2\})$ to $\varphi^{-1}(\{\ell,2\ell+2\})$ only decreases $\mathrm{ST}(A)$. Therefore, we may restrict our attention to sets $A$ satisfying
\[
\varphi^{-1}(\{\ell+1, \ldots, 2\ell+1\}) = A_0 \subseteq A \subseteq A_0 \cup \varphi^{-1}(\{\ell, 2\ell+2\}) = \varphi^{-1}(\{\ell, \ldots, 2\ell+2\}).
\]
Observe that if $\ell > 0$, then every ordered pair of elements $(x,y) \in \varphi^{-1}(\ell)^2 \cup \varphi^{-1}(2\ell+2)^2$ satisfying $(x,y) \in A^2$ participates in a unique Schur triple (in $A$) in which $x$ precedes $y$, the triple $(x,y,x+y)$. On the other hand, if $\ell > 0$, then every pair $(x,y) \in \varphi^{-1}(\ell) \times \varphi^{-1}(2\ell+2)$ satisfying $(x,y) \in A^2$ participates in four Schur triples in $A$: the triples $(x,y-x,y)$, $(y-x,x,y)$, $(y,x-y,x)$, and $(x-y,y,x)$. Counting separately Schur triples in $A$ that contain two (using~\ref{item:stability-1}), one (using the above observation), and no elements of $A_0$ yields
\[
\begin{split}
\mathrm{ST}(A) & \ge |A \setminus A_0| \cdot \frac{3n}{m} + \mathbbm{1}[m > 2] \cdot |A \setminus A_0|^2 + \mathrm{ST}(A \setminus A_0) \\
& = \left( \frac{n}{3p} + t - \frac{n}{3m} \right) \cdot \frac{3n}{m} + \mathbbm{1}[m > 2] \cdot \left( \frac{n}{3p} + t - \frac{n}{3m} \right)^2 + \mathrm{ST}(A \setminus A_0) \\
& \ge \frac{3nt}{p} + \mathbbm{1}[p > 2] \cdot t^2 + \mathrm{ST}(A \setminus A_0),
\end{split}
\]
where the first inequality is strict unless $\ell = 0$, $A \cap \varphi^{-1}(\ell) = \emptyset$, or $A \cap \varphi^{-1}(2\ell+2) = \emptyset$ and the last inequality is strict unless $m = p$ (recall that $p \le m \le 2p$). This completes the proof.
\end{proof}
\section{The hypercube $\mathbb{Z}_2^n$}
\label{sec:Z_2n}
In this section, we prove Theorem~\ref{thm:Z_2n}. One way of obtaining lower bounds on $\mathrm{ST}(A)$ in our proof will be using eigenvalue analysis of the \emph{Cayley graph} of $G$ generated by $A$. Recall that given an abelian group $G$ and an $A \subseteq G \setminus \{0\}$ satisfying $A = -A$, we define $\mathcal{G}_A$ to be the graph with vertex set $G$ whose edges are all pairs $\{x,y\}$ such that $x-y \in A$. It follows from this definition that for every $A \subseteq G \setminus \{0\}$ with $A = -A$,
\begin{equation}
\label{eq:STA-ecGA}
\mathrm{ST}(A) = 2e(\mathcal{G}_A[A]),
\end{equation}
where $\mathcal{G}_A[A]$ denotes the subgraph of $\mathcal{G}_A$ induced by $A$. We shall derive lower bounds on $e(\mathcal{G}_A[A])$ using the following well-known result of Alon and Chung~\cite{AlCh88}.
\begin{thm}
\label{thm:AlCh}
Let $\mathcal{G}$ be an $N$-vertex $D$-regular graph and let $\lambda$ be the smallest eigenvalue of its adjacency matrix. Then for every $U \subseteq V(\mathcal{G})$,
\[
2e(\mathcal{G}[U]) \ge \frac{D}{N} |U|^2 + \frac{\lambda}{N} |U| (N - |U|).
\]
\end{thm}
Precise eigenvalue analysis of $\mathcal{G}_A$ will be possible in our setting due to the fact that the \emph{characters} of any abelian group $G$ form a basis of eigenvectors of $\mathcal{G}_A$ for every $A$. Moreover, in the case $G = \mathbb{Z}_2^n$, there is a one-to-one correspondence between nontrivial characters of $G$ and subgroups of $G$ with index two. More precisely, for each nontrivial character $\chi \in \hat{G}$ there is a subgroup $H < G$ of index $2$ such that
\[
\chi(x) =
\begin{cases}
1, & \text{if $x \in H$}, \\
-1, & \text{if $x \not\in H$}.
\end{cases}
\]
In particular, the smallest eigenvalue $\lambda$ of $\mathcal{G}_A$ satisfies
\[
\lambda = \min\big\{|A \cap H| - |A \cap H^c| \colon \text{$H < G$ with $[G:H]=2$} \big\}.
\]
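These observations are easy to stress-test numerically. The sketch below (an illustrative sanity check, not part of the proof; the helper names are ours) computes the spectrum of $\mathcal{G}_A$ for $G = \mathbb{Z}_2^4$ directly from the characters and verifies~\eqref{eq:STA-ecGA} together with the bound of Theorem~\ref{thm:AlCh} on random sets $A$:

```python
import random
from fractions import Fraction

def char_eig(A, c):
    """Eigenvalue of the Cayley graph generated by A attached to the character
    chi_c(x) = (-1)^{<c, x>}; for c != 0 it equals |A ∩ H| - |A ∩ H^c|,
    where H = ker(chi_c) is a subgroup of index 2."""
    return sum(1 if bin(c & s).count('1') % 2 == 0 else -1 for s in A)

n = 4
N = 1 << n
random.seed(1)
for _ in range(100):
    a = random.randrange(2, N)
    A = set(random.sample(range(1, N), a))        # 0 excluded; A = -A is automatic
    D = len(A)
    lam = min(char_eig(A, c) for c in range(1, N))  # smallest eigenvalue
    st = sum((x ^ y) in A for x in A for y in A)    # ST(A) = 2 e(G_A[A])
    # theorem of Alon and Chung applied to U = A, in exact arithmetic:
    assert st >= Fraction(D**3 + lam * D * (N - D), N)
```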
\begin{proof}[{Proof of Theorem~\textup{\ref{thm:Z_2n}}}.]
Let $n$ be a positive integer, let $G = \mathbb{Z}_2^n$, and let $a$ and $k$ be as in the statement of the theorem. We may assume that $k \le n-1$ as otherwise both~\eqref{eq:Z_2n} and the characterisation of sets achieving equality are vacuous. We shall first count Schur triples in the set $A_a$ and establish the equality in~\eqref{eq:Z_2n}. Given an $x \in G$, let us denote by $\overline{x}$ the integer with binary representation $x$ (viewed as a $\{0,1\}$-vector), so that
\[
\{\overline{x} \colon x \in A_a \} = \{2^n-1, \ldots, 2^n-a\}.
\]
Fix some $x \in A_a$ and let $j$ be the unique integer such that $2^j \le \overline{x} < 2^{j+1}$. We claim that the number $\mathrm{ST}_x(A_a)$ of (ordered) Schur triples containing $x$ such that $\overline{x}$ is the smallest element is $3(2^n-2^{j+1})$. (As $0 \not\in A_a$, each Schur triple in $A_a$ contains three distinct nonzero elements.) It is enough to show that the number of pairs $\{y,z\} \subseteq A_a$ satisfying $x + y + z = 0$ and $\overline{x} < \overline{y} < \overline{z}$ is $2^{n-1} - 2^j$; indeed, since the relation $x+y+z=0$ is symmetric, each such pair gives rise to exactly six ordered Schur triples, and $6(2^{n-1}-2^j) = 3(2^n-2^{j+1})$. To this end, note first that for every such pair, $\overline{z} \ge 2^{j+1}$ since otherwise $2^j \le \overline{x}, \overline{y}, \overline{z} < 2^{j+1}$ and then $\overline{x+y+z} \ge 2^j$; consequently, also $\overline{y} \ge 2^{j+1}$ as otherwise $\overline{x+y+z} \ge 2^{j+1}$. Conversely, given an arbitrary element $z$ such that $\overline{z} \ge 2^{j+1}$, we have $\overline{z+x} \ge 2^{j+1} > \overline{x}$ and hence $\{z, z+x\} \subseteq A_a$ is such a pair. Thus the number of these pairs is $\frac{1}{2}(2^n-2^{j+1}) = 2^{n-1} - 2^j$, as claimed.
Now, recall that $k$ satisfies $2^n - 2^k \le a < 2^n - 2^{k-1}$. We may assume that $k \le n-1$ as otherwise $A_a \subseteq A_{2^{n-1}}$ is sum-free. Let $t = a - 2^n + 2^k$. Then $0 \le t < 2^{k-1}$ and
\[
\begin{split}
\mathrm{ST}(A_a) & = \sum_{x \in A_a} \mathrm{ST}_x(A_a) = 3 \cdot \left[\sum_{j = k}^{n-1} 2^j (2^n-2^{j+1}) + t (2^n - 2^k)\right] \\
& = 3(2^n-2^k)2^n - 2(4^n-4^k) + 3(a-2^n+2^k)(2^n-2^k) \\
& = (3a - 2^{n+1} + 2^k)(2^n-2^k).
\end{split}
\]
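The closed form just obtained can be checked directly for small $n$. The following brute-force computation (illustrative only; not part of the proof) confirms it for all admissible $a$, with addition in $\mathbb{Z}_2^n$ implemented as bitwise XOR:

```python
def ST(A):
    """Ordered Schur triples (x, y, z) in A^3 with x + y = z in Z_2^n (XOR)."""
    S = set(A)
    return sum((x ^ y) in S for x in A for y in A)

for n in range(1, 8):
    N = 1 << n
    for a in range(1, N):
        A = range(N - a, N)   # A_a: the a elements with largest binary representation
        # k is determined by 2^n - 2^k <= a < 2^n - 2^{k-1}
        k = next(j for j in range(n + 1) if N - (1 << j) <= a)
        assert ST(A) == (3 * a - 2 * N + (1 << k)) * (N - (1 << k))
```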
Let us now fix an arbitrary $a$-element set $A \subseteq G$. We shall prove that $\mathrm{ST}(A) \ge \mathrm{ST}(A_a)$ by induction on $n$. We may assume that $0 \not\in A$ as one may easily check that replacing $0$ with an arbitrary element of $G \setminus A$ decreases the number of Schur triples by at least one (as $x + x \neq x$ unless $x = 0$). The case $n=1$ is trivial, including the characterisation of sets achieving equality in~\eqref{eq:Z_2n}, so let us assume that $n \ge 2$. Let $H < G$ be the subgroup of index $2$ that minimises $|A \cap H|$, let $A_e = A \cap H$ (the set of `even' elements of $A$), and let $A_o = A \cap H^c$ (the set of `odd' elements of $A$). Moreover, let $a_e = |A_e|$ and $a_o = |A_o|$.
Observe that each Schur triple in $A$ contains an even number of `odd' elements (elements of $A_o$). As $H \cong \mathbb{Z}_2^{n-1}$, the number of Schur triples that contain only elements of $A_e$ satisfies
\begin{equation}
\label{eq:ST-Ae}
\mathrm{ST}(A_e) \ge \mathrm{ST}^{(n-1)}\left(A_{a_e}^{(n-1)}\right),
\end{equation}
where the superscript `$(n-1)$' signifies the fact that we are referring to elements and Schur triples in $\mathbb{Z}_2^{n-1}$. The number of Schur triples that contain one element of $A_e$ and two elements of $A_o$ may be estimated as follows. Fix some $x \in A_e$. The number of Schur triples in $A$ that contain $x$ and two elements of $A_o$ is precisely $3|(x+A_o) \cap A_o|$, as the elements of $(x+A_o) \cap A_o$ are in one-to-one correspondence with ordered pairs $(y,z) \in A_o^2$ such that $x+y+z=0$. Since $x+A_o, A_o \subseteq H^c$, we have
\begin{equation}
\label{eq:AoxAo}
|(x+A_o) \cap A_o| \ge |x+A_o| + |A_o| - |H^c| = 2|A_o| - |H| = 2a_o - 2^{n-1}
\end{equation}
and consequently, recalling~\eqref{eq:ST-Ae},
\begin{equation}
\label{eq:STA-lower}
\mathrm{ST}(A) \ge \mathrm{ST}(A_e) + 3a_e(2a_o - 2^{n-1}) \ge \mathrm{ST}^{(n-1)}\left(A_{a_e}^{(n-1)}\right) + 3a_e(2(a-a_e) - 2^{n-1}).
\end{equation}
We claim that the right-hand side of~\eqref{eq:STA-lower} is greater than or equal to $\mathrm{ST}(A_a)$ as long as
\begin{equation}
\label{eq:Aes-case1}
a_e < \max\left\{ 2^{n-1}-2^{k-1} , a - 2^{n-1} + 2^{k-2} \right\},
\end{equation}
and equality holds only when $a_e = a - 2^{n-1}$ or when $a_e = 2^{n-1} - 2^{k-1}$ and $a > 2^n - 2^{k-1} - 2^{k-2}$.
To see this, note first that $a_e \ge a - |H^c| = a - 2^{n-1} \ge 2^{n-1} - 2^k$. If moreover $a_e < 2^{n-1} - 2^{k-1}$, then by our inductive assumption,
\begin{equation}
\label{eq:STAe-lower}
\mathrm{ST}^{(n-1)}\left(A_{a_e}^{(n-1)}\right) = (3a_e + 2^k - 2^n)(2^{n-1}-2^k).
\end{equation}
Substituting the right-hand side of~\eqref{eq:STAe-lower} into~\eqref{eq:STA-lower}, we verify that if $a - 2^{n-1} \le a_e < 2^{n-1} - 2^{k-1}$, then the right-hand side of~\eqref{eq:STA-lower} is at least as large as $\mathrm{ST}(A_a)$, with equality holding only if $a_e = a - 2^{n-1}$. To see this, observe that the difference of these functions is
\[
6\left(a_e-2^{n-1}+2^{k-1}\right) \left(a-a_e-2^{n-1}\right),
\]
which is quadratic in~$a_e$, with the coefficient of $a_e^2$ negative, and equal to zero if $a_e = a - 2^{n-1}$ or $a_e = 2^{n-1} - 2^{k-1}$.
Assume now that $2^{n-1} - 2^{k-1} \le a_e < a - 2^{n-1} + 2^{k-2}$. In particular, $a_e < 2^{n-1} - 2^{k-2}$ and hence by the inductive assumption,
\begin{equation}
\label{eq:STAe-lower-2}
\mathrm{ST}^{(n-1)}\left(A_{a_e}^{(n-1)}\right) = (3a_e + 2^{k-1} - 2^n)(2^{n-1}-2^{k-1}).
\end{equation}
Substituting the right-hand side of~\eqref{eq:STAe-lower-2} into~\eqref{eq:STA-lower}, we verify that if $2^{n-1} - 2^{k-1} \le a_e < a - 2^{n-1} + 2^{k-2}$, then the right-hand side of~\eqref{eq:STA-lower} is at least as large as $\mathrm{ST}(A_a)$, with equality holding only if $a_e = 2^{n-1} - 2^{k-1}$. To see this, observe that the difference of these functions is
\[
6\left(a_e-2^{n-1}+2^{k-1}\right)\left(a-a_e-2^{n-1}+2^{k-2}\right),
\]
which is quadratic in $a_e$, with the coefficient of $a_e^2$ negative, and equal to zero if $a_e = 2^{n-1} - 2^{k-1}$ or $a_e = a - 2^{n-1} + 2^{k-2}$.
For the remainder of the proof, we may and shall assume that~\eqref{eq:Aes-case1} fails, i.e., that $a_e \ge 2^{n-1} - 2^{k-1}$ and $a_e \ge a - 2^{n-1} + 2^{k-2}$. By our choice of $H$, this means that the smallest eigenvalue $\lambda$ of $\mathcal{G}_A$ satisfies
\begin{multline*}
\lambda = |A \cap H| - |A \cap H^c| = a_e - a_o = 2a_e - a \\
\ge 2\max\left\{ 2^{n-1} - 2^{k-1}, a - 2^{n-1} + 2^{k-2} \right\} - a,
\end{multline*}
that is,
\[
\lambda \ge
\begin{cases}
-2^{k-2} & \text{if $a \le 2^n - 2^{k-1} - 2^{k-2}$,} \\
a - 2^n + 2^{k-1} & \text{if $a \ge 2^n - 2^{k-1} - 2^{k-2}$.}
\end{cases}
\]
As every element of $G$ is its own inverse, we trivially have $A = -A$, and thus it follows from~\eqref{eq:STA-ecGA} and Theorem~\ref{thm:AlCh} that (recall that $0 \not\in A$)
\begin{equation}
\label{eq:STA-lower-eigenvalues}
\mathrm{ST}(A) \ge 2^{-n}a^3 + \begin{cases}
-2^{k-2-n}a(2^n-a) & \text{if $a \le 2^n - 2^{k-1} - 2^{k-2}$,} \\
2^{-n}(a - 2^n + 2^{k-1})a(2^n-a) & \text{if $a \ge 2^n - 2^{k-1} - 2^{k-2}$.}
\end{cases}
\end{equation}
In order to finish the proof, we shall now show that the right-hand side of~\eqref{eq:STA-lower-eigenvalues} is greater than $\mathrm{ST}(A_a)$ for every $a$ with $2^n - 2^k \le a < 2^n - 2^{k-1}$.
First, denote by $f_1(a)$ the difference between the right-hand side of~\eqref{eq:STA-lower-eigenvalues} and $\mathrm{ST}(A_a)$ when $2^n - 2^k \le a \le 2^n - 2^{k-1} - 2^{k-2}$. That is, let
\[
f_1(a) = 2^{-n} a^3 + 2^{k-2-n} a^2 - 2^{k-2}a - (3a+2^k-2^{n+1})(2^n-2^k).
\]
Let $a_\ell = 2^n - 2^k$ and $a_r = 2^n - 2^{k-1} - 2^{k-2}$. We need to show that $f_1(a) > 0$ for each $a$ satisfying $a_\ell \le a \le a_r$. This follows because $f_1$ is cubic in $a$, with the coefficient of $a^3$ positive, and
\begin{itemize}
\item
$f_1(a_r) = 2^{2k-2} - 9 \cdot 2^{3k-n-5} \ge 2^{2k-2} - 9 \cdot 2^{2k-6} = 7 \cdot 2^{2k-6} > 0$, since $k \le n-1$.
\item
$f_1'(a_\ell) = 2^{k-2} (5 \cdot 2^{k-n+1} - 11) \le -6 \cdot 2^{k-2} < 0$, since $k \le n-1$.
\item
$f_1'(a_r) = 2^{k-4} (21 \cdot 2^{k-n} - 20) \le -19 \cdot 2^{k-5} < 0$, since $k \le n-1$.
\end{itemize}
Indeed, since $f'_1$ is quadratic in $a$, with the coefficient of $a^2$ positive, it has only one continuous interval where it takes negative values. Therefore $f'_1(a)$ is negative for all $a \in [a_\ell, a_r]$ and thus $f_1$ is decreasing in this interval, attaining its minimum at $a=a_r$.
Similarly, denote by $f_2(a)$ the difference between the right-hand side of~\eqref{eq:STA-lower-eigenvalues} and $\mathrm{ST}(A_a)$ when $2^n - 2^{k-1} - 2^{k-2} \le a < 2^n - 2^{k-1}$. That is, let
\begin{multline*}
f_2(a) = 2^{-n} a^3 + 2^{-n} (a-2^n+2^{k-1})a(2^n-a) - (3a+2^k-2^{n+1})(2^n-2^k) \\
= (2-2^{k-n-1}) a^2 + (7 \cdot 2^{k-1} - 2^{n+2}) a + 2^{2n+1} +2^{2k} - 3 \cdot 2^{k+n}.
\end{multline*}
Let $a_\ell = 2^n - 2^{k-1} - 2^{k-2}$ and $a_r = 2^n - 2^{k-1}$. We need to show that $f_2(a) > 0$ for each $a$ satisfying $a_\ell \le a \le a_r$. This follows because $f_2$ is quadratic in $a$, with the coefficient of $a^2$ positive, and
\begin{itemize}
\item
$f_2'(a_m) = 0$, where $a_m = \frac{2^{n+3} - 7 \cdot 2^k}{8 - 2^{k+1-n}}$.
\item
$f_2(a_m) = \frac{7 \cdot 2^{n+2k-3} - 2^{3k}}{2^{n+2} - 2^k} > 0$, since $k \le n-1$.
\end{itemize}
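Both positivity claims, together with the closed forms for $f_1(a_r)$, $f_1'(a_\ell)$, $f_1'(a_r)$, and $f_2(a_m)$ stated in the bullet points, can be confirmed in exact rational arithmetic. A sketch (illustrative only; not part of the proof):

```python
from fractions import Fraction as F

def f1(a, n, k):
    # 2^{-n} a^3 + 2^{k-2-n} a^2 - 2^{k-2} a - (3a + 2^k - 2^{n+1})(2^n - 2^k)
    return (F(a**3, 2**n) + F(a**2 * 2**k, 2**(n + 2))
            - F(a * 2**k, 4) - (3*a + 2**k - 2**(n + 1)) * (2**n - 2**k))

def f2(a, n, k):
    # 2^{-n} a^3 + 2^{-n} (a - 2^n + 2^{k-1}) a (2^n - a) - ST(A_a)
    return (F(a**3, 2**n) + F(a * (2**n - a) * (a - 2**n + 2**(k - 1)), 2**n)
            - (3*a + 2**k - 2**(n + 1)) * (2**n - 2**k))

for n in range(2, 11):
    for k in range(1, n):                        # k <= n - 1
        lo, hi = 2**n - 2**k, 2**n - 2**(k - 1)  # range of a for this k
        ar = F(2**(n + 2) - 2**(k + 1) - 2**k, 4)  # a_r = 2^n - 2^{k-1} - 2^{k-2}
        for a in range(lo, hi):
            if a <= ar:
                assert f1(a, n, k) > 0
            if a >= ar:
                assert f2(a, n, k) > 0
```

Note that $f_1$ and $f_2$ agree at $a = a_r$, as they must.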
Finally, we characterise sets achieving equality in~\eqref{eq:Z_2n}. To this end, suppose that $\mathrm{ST}(A) = \mathrm{ST}(A_a)$. This means, in particular, that~\eqref{eq:Aes-case1} holds (as otherwise $\mathrm{ST}(A) > \mathrm{ST}(A_a)$), the two inequalities in~\eqref{eq:STA-lower} hold with equality, and the right-hand side of~\eqref{eq:STA-lower} is equal to $\mathrm{ST}(A_a)$. As noted above, this may happen only in the following two cases.
\medskip
\textsc{Case 1.} $a_e = a - 2^{n-1}$.
\smallskip
In this case, $a_o = a - a_e = 2^{n-1}$ and thus $A \supseteq H^c$. Moreover, $\mathrm{ST}(A_e) = \mathrm{ST}^{(n-1)}\left(A_{a_e}^{(n-1)}\right)$ and hence we may appeal to our inductive assumption. Since
\[
2^{n-1} - 2^k \le a_e = a - 2^{n-1} < 2^{n-1} - 2^{k-1},
\]
there is a $K < H$ with $2^k$ elements such that $H \setminus K \subseteq A_e$ and $A_e \cap K$ is sum-free. But then the set $A \cap K = A_e \cap K$ is sum-free, and $\mathbb{Z}_2^n \setminus K = H^c \cup (H \setminus K) \subseteq A$.
\medskip
\textsc{Case 2.} $a_e = 2^{n-1} - 2^{k-1}$ and $a > 2^n - 2^{k-1} - 2^{k-2}$.
\smallskip
Since $\mathrm{ST}(A_e) = \mathrm{ST}^{(n-1)}\left(A_{a_e}^{(n-1)}\right)$, we may appeal to the inductive assumption and deduce that $A_e = H \setminus K$ for some $K < H$ with $2^{k-1}$ elements. Let $L$ be an arbitrary subgroup satisfying $K < L < H$ and $[H : L] = 2$. Such a subgroup exists as $H / K \cong \mathbb{Z}_2^{n-k}$ and we have assumed that $k \le n-1$.
Note that $\mathbb{Z}_2^n / L \cong \mathbb{Z}_2^2$ and hence there are two subgroups $H_1, H_2 < \mathbb{Z}_2^n$ such that $[\mathbb{Z}_2^n : H_i] = 2$ and $H \cap H_i = L$ for each $i \in \{1,2\}$ and $H^c \cap H_1 = H^c \setminus H_2$. Define $A_o^1 = A_o \cap H_1 = A_o \setminus H_2$ and $A_o^2 = A_o \cap H_2 = A_o \setminus H_1$. We claim that $A_o^{3-i} = H^c \cap H_i^c$ for some $i \in \{1,2\}$. Before we prove the claim, let us show that its statement contradicts the assumption that $a < 2^n - 2^{k-1}$, which in turn implies that $\mathrm{ST}(A) = \mathrm{ST}(A_a)$ cannot hold in Case 2. As $H \setminus L \subseteq H \setminus K = A_e \subseteq A$, the claim implies that $H_i^c = (H_i^c \cap H^c) \cup (H \setminus L) \subseteq A$. But $H_i$ is a subgroup of $\mathbb{Z}_2^n$ of index $2$ and therefore by our choice of $H$, we have
\[
2^{n-1} - 2^{k-1} = a_e = |A \cap H| \le |A \cap H_i| = |A| - |A \cap H_i^c| = |A| - |H_i^c| = a - 2^{n-1},
\]
a contradiction. Therefore, in order to complete the proof, it suffices to prove the claim. To this end, suppose that it is not true, i.e., there are $y_1 \in (H^c \cap H_2^c) \setminus A_o$ and $y_2 \in (H^c \cap H_1^c) \setminus A_o$. Let $x \in H \setminus L$ be such that $x + y_1 = y_2$; such an $x$ exists as $(H^c \cap H_1^c) - (H^c \cap H_2^c) = H \setminus L$. It is easy to see that this $x$ satisfies
\[
|(x + A_o) \cap A_o| = |(x+A_o) \cap A_o \cap (H^c \setminus \{y_2\})| \ge 2|A_o| - |H^c \setminus \{y_2\}| > 2|A_o| - |H^c|.
\]
Since $x \in H \setminus L \subseteq H \setminus K = A_e$, the first inequality in~\eqref{eq:STA-lower} is strict, see~\eqref{eq:AoxAo}, contradicting our assumption that $\mathrm{ST}(A) = \mathrm{ST}(A_a)$.
\end{proof}
\section{Groups of type~\Rom{2}}
\label{sec:type-II}
In this section, we prove Theorem~\ref{thm:Z_3n} and Proposition~\ref{prop:Z3Zp}. Our proof of Theorem~\ref{thm:Z_3n} will again employ simple eigenvalue analysis. Given an abelian group $G$ and an $A \subseteq G$, we define $\vec{\mathcal{G}}_A$ to be the directed graph with vertex set $G$ whose arcs are all ordered pairs $(x,y)$ such that $y - x \in A$. It follows from this definition that for each $A \subseteq G$,
\[
\mathrm{ST}(A) = e(\vec{\mathcal{G}}_A[A]).
\]
A straightforward adaptation of the proof of Theorem~\ref{thm:AlCh} yields the following proposition. Here, the adjacency matrix of a directed graph $\vec{\mathcal{G}}$ with vertex set $V$ is the $\{0,1\}$-valued $V$-by-$V$ matrix $\big(\mathbbm{1}[(x,y) \in \vec{\mathcal{G}}]\big)_{x,y \in V}$.
\begin{prop}
\label{prop:AlCh-directed}
Let $\vec{\mathcal{G}}$ be an $N$-vertex directed graph each of whose vertices has both indegree and outdegree equal to $D$. If the adjacency matrix of $\vec{\mathcal{G}}$ has an orthogonal basis of eigenvectors with eigenvalues satisfying $\Re(\lambda) \ge r$, then for every $U \subseteq V(\vec{\mathcal{G}})$,
\[
e(\vec{\mathcal{G}}[U]) \ge \frac{D}{N} |U|^2 + \frac{r}{N} |U|(N-|U|).
\]
\end{prop}
As was the case with undirected Cayley graphs, the characters of $G$ form a basis of eigenvectors of $\vec{\mathcal{G}}_A$ for every $A \subseteq G$. Moreover, the eigenvalues of $\vec{\mathcal{G}}_A$ are $\sum_{a \in A} \chi(a)$, where $\chi$ ranges over all $|G|$ characters of $G$. In the case $G = \mathbb{Z}_3^n$, as each nonzero element of $G$ has order $3$, all characters of $G$ take values in the set $\{1, e^{\frac{2\pi i}{3}}, e^{-\frac{2\pi i}{3}}\} \subseteq \mathbb{C}$ of cube roots of unity. Therefore, letting $r_A$ be the smallest real part of an eigenvalue of the adjacency matrix of $\vec{\mathcal{G}}_A$, we have
\begin{equation}
\label{eq:rA}
r_A = \min\big\{|A \cap H| - |A \setminus H|/2 \colon H < G \text{ with } [G:H] = 3\big\},
\end{equation}
where we used the fact that $\Re(e^{\pm\frac{2\pi i}{3}}) = \cos\frac{2\pi}{3} = -\frac{1}{2}$. Proposition~\ref{prop:AlCh-directed} now implies that for every $A \subseteq G$,
\begin{equation}
\label{eq:STA-Z3n-eigenvalues}
\mathrm{ST}(A) \ge 3^{-n}|A|^3 + r_A|A|\left(1-3^{-n}|A|\right),
\end{equation}
where $r_A$ is the quantity defined in~\eqref{eq:rA}.
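Before turning to the proof, note that~\eqref{eq:STA-Z3n-eigenvalues} is easy to stress-test numerically. The sketch below (an illustrative check, not part of the proof; the helper names are ours) encodes $\mathbb{Z}_3^n$ in base $3$, computes $r_A$ via~\eqref{eq:rA}, and verifies the bound on random sets:

```python
import random
from fractions import Fraction as F

n = 3
N = 3 ** n

def add(x, y):
    """Coordinatewise addition in Z_3^n, elements encoded in base 3."""
    z, p = 0, 1
    for _ in range(n):
        z += (((x // p) % 3 + (y // p) % 3) % 3) * p
        p *= 3
    return z

def dot(c, x):
    """<c, x> mod 3; for c != 0, H_c = {x : <c, x> = 0} has index 3."""
    return sum(((c // 3**i) % 3) * ((x // 3**i) % 3) for i in range(n)) % 3

random.seed(2)
for _ in range(50):
    a = random.randrange(1, N + 1)
    A = random.sample(range(N), a)
    S = set(A)
    st = sum(add(x, y) in S for x in A for y in A)
    # r_A = min over index-3 subgroups H of |A ∩ H| - |A \ H| / 2
    rA = min(sum(F(1) if dot(c, x) == 0 else F(-1, 2) for x in A)
             for c in range(1, N))
    assert st >= F(a**3, N) + rA * a * F(N - a, N)
```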
\begin{proof}[{Proof of Theorem~\textup{\ref{thm:Z_3n}}}.]
Let $\delta = 1/1000$, let $n$ and $t$ be as in the statement of the theorem, and denote $\mathbb{Z}_3^n$ by $G$. Finally, fix some $A \subseteq G$ with $a = 3^{n-1} + t$ elements. Let us first consider the case when $|A \cap H| \ge t$ for every subgroup $H < G$ of index $3$. In this case, the quantity $r_A$ defined in~\eqref{eq:rA} satisfies
\[
r_A \ge -\frac{a-t}{2} + t = -\frac{3^{n-1}}{2} + t
\]
and consequently~\eqref{eq:STA-Z3n-eigenvalues} yields
\begin{multline*}
\mathrm{ST}(A) \ge 3^{-n} a^3 + \left(t - \frac{3^{n-1}}{2}\right) a \left(1 - 3^{-n}a\right) = a\left( t + 3^{-n}a(a-t) + \frac{a}{6} - \frac{3^{n-1}}{2} \right) \\
= a \left(t + \frac{a}{3} + \frac{a}{6} - \frac{a-t}{2}\right) = \frac{3at}{2} \ge at = 3^{n-1}t + t^2.
\end{multline*}
Hence, for the remainder of the proof we may assume that $|A \cap H| < t$ for some $H < G$ of index~$3$.
Fix an arbitrary $H$ with this property and let $\varphi \colon G \to \mathbb{Z}_3$ be a homomorphism with $\varphi^{-1}(0) = H$. For each $i \in \{0,1,2\}$, let $A_i = A \cap \varphi^{-1}(i)$ and let $a_i = |A_i|$. As $-\varphi$ is also a homomorphism, we may assume that $a_2 \ge a_1$. Considering only the Schur triples $(x,y,z) \in A^3$ that $\varphi$ maps to $(0,2,2)$, $(2,0,2)$, $(2,2,1)$, and $(1,1,2)$, we obtain
\begin{equation}
\label{eq:STA-Z3n-lower}
\begin{split}
\mathrm{ST}(A) & \ge 2 \sum_{x \in A_0} |(x + A_2) \cap A_2| + \sum_{x \in A_1} |(x-A_2) \cap A_2| + \sum_{x \in A_1} |(x+A_1) \cap A_2| \\
& \ge 2a_0(2a_2 - 3^{n-1}) +a_1 (2a_2 - 3^{n-1}) + a_1(a_1+a_2-3^{n-1}) \\
& = (a-a_2)(2a_2 - 3^{n-1} + t) + a_0(3a_2+a_0-2a),
\end{split}
\end{equation}
where we used the identity $3^{n-1} + t = a = a_0 + a_1 + a_2$. Treating $a_0$ as fixed, denote the right-hand side of~\eqref{eq:STA-Z3n-lower} by $f_{a_0}(a_2)$. Observe that the function $f_{a_0}$ is quadratic in $a_2$, with the coefficient of $a_2^2$ negative. Thus, if $2 \cdot 3^{n-2} \le a_2 \le 3^{n-1}$, then
\[
\mathrm{ST}(A) \ge \min\{f_{a_0}(2\cdot3^{n-2}), f_{a_0}(3^{n-1})\}.
\]
Note that
\[
f_{a_0}(3^{n-1}) = t(3^{n-1}+t) + a_0(3^{n-1}+a_0-2t) \ge t(3^{n-1}+t),
\]
as $t \le \delta 3^{n-1} \le 3^{n-1}/2$. Moreover,
\[
f_{a_0}(2 \cdot 3^{n-2}) = (3^{n-2} + t)^2 + a_0(a_0-2t) \ge (3^{n-2}+t)^2 - t^2 \ge t(3^{n-1}+t),
\]
as $t \le \delta 3^{n-1}$ and $(\frac{1}{3}+\tau)^2 - \tau^2 \ge \tau(1+\tau)$ for each $\tau \in [0,\delta]$. Therefore, it remains to consider the case $a_1 \le a_2 < 2 \cdot 3^{n-2}$ and $a_0 < t$.
We let $\varepsilon = 1/30$ and define
\[
C = \{x \in A_2 \colon |(x+A_2) \cap A_1| \ge \varepsilon 3^{n-1}\}.
\]
Since clearly $\mathrm{ST}(A) \ge |C| \cdot \varepsilon 3^{n-1}$, we may further assume (for otherwise $\mathrm{ST}(A) \ge at = 3^{n-1}t + t^2$ and we are done) that
\[
|C| \le \frac{at}{\varepsilon 3^{n-1}} \le \frac{(1+\delta)}{\varepsilon} \cdot t \le \frac{(1+\delta)\delta}{\varepsilon} \cdot 3^{n-1} \le \varepsilon 3^{n-1}.
\]
In the remainder of the proof, we show that this is impossible.
Let $B = A_2 \setminus C$. By definition, for every $x \in B$, we have $x + B \subseteq A_2 + A_2 \subseteq \varphi^{-1}(1)$ and $|(x+B) \setminus A_1| \ge |B| - \varepsilon 3^{n-1}$. Hence, for any two $x, y \in B$, we have
\begin{multline*}
|(x+B) \cap (y+B)| \ge |(x + B) \cap (y+B) \cap (\varphi^{-1}(1) \setminus A_1)| \\
\ge 2(|B| - \varepsilon 3^{n-1}) - (3^{n-1} - a_1) \ge |B| - 3\varepsilon 3^{n-1},
\end{multline*}
as $|B| + a_1 = a_1 + a_2 - |C| \ge a - a_0 - \varepsilon 3^{n-1} > (1-\varepsilon) 3^{n-1}$ since $a_0 < t$. This means that every element of $B-B$ has at least $|B| - 3\varepsilon 3^{n-1}$ representations as a difference of two elements of~$B$ and hence
\begin{equation}
\label{eq:Z3-B-B-upper}
|B-B| \le \frac{|B|^2}{|B| - 3\varepsilon 3^{n-1}} < 3^{n-1},
\end{equation}
where the last inequality follows as $|B|$ satisfies
\[
\left(\frac{1}{2} - \varepsilon\right) \cdot 3^{n-1} < \frac{a-a_0}{2} - \varepsilon 3^{n-1} \le a_2 - |C| = |B| \le a_2 < \frac{2}{3} \cdot 3^{n-1}
\]
and $\frac{\beta^2}{\beta - 3\varepsilon} < 1$ for all $\beta \in (\frac{1}{2}-\varepsilon, \frac{2}{3})$. Since $\mathrm{Stab}(B-B)$ is a subgroup of $G$, its order is a power of $3$; as $|\mathrm{Stab}(B-B)| \le |B-B| < 3^{n-1}$, necessarily $|\mathrm{Stab}(B-B)| \le 3^{n-2}$. It now follows from Kneser's theorem that
\begin{equation}
\label{eq:Z3-B-B-lower}
|B-B| \ge 2|B| - |\mathrm{Stab}(B-B)| \ge 2|B| - 3^{n-2}.
\end{equation}
But now~\eqref{eq:Z3-B-B-upper} and~\eqref{eq:Z3-B-B-lower} yield
\[
2|B| - 3^{n-2} \le \frac{|B|^2}{|B|-3\varepsilon 3^{n-1}},
\]
which is impossible as $|B| \ge (\frac{1}{2}-\varepsilon)3^{n-1}$ and $2\beta - \frac{1}{3} > \frac{\beta^2}{\beta - 3\varepsilon}$ if $\beta > \frac{1}{2}-\varepsilon$.
\end{proof}
\begin{proof}[{Proof of Proposition~\textup{\ref{prop:Z3Zp}}}.]
Fix an $a \in \{p+1, \ldots, 3p\}$, let $b = 3\lceil \frac{a}{3} \rceil$, and note that $b \ge a$ and $b-p \le 3(a-p)$. It suffices to show that there is a set $B \subseteq \mathbb{Z}_3 \times \mathbb{Z}_p$ with $b$ elements satisfying $\mathrm{ST}(B) \le \frac{21}{9}(b-p)^2$. One such set is $B = \mathbb{Z}_3 \times \{x_1, \ldots, x_{b/3}\}$, where $x_1, \ldots, x_p \in \mathbb{Z}_p$ are as in the statement of Theorem~\ref{thm:Zp}. Clearly,
\[
\mathrm{ST}(B) = \mathrm{ST}(\mathbb{Z}_3) \cdot \mathrm{ST}(\{x_1, \ldots, x_{b/3}\}) = 9 \cdot \left\lfloor \frac{b-p}{2} \right\rfloor \left\lceil \frac{b-p}{2} \right\rceil \le \frac{9}{4} (b-p)^2.\qedhere
\]
\end{proof}
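The multiplicativity of $\mathrm{ST}$ over product sets, used in the display above, is easy to confirm numerically. A minimal sketch (helper names ours; the subset $X$ of $\mathbb{Z}_7$ is an arbitrary illustrative choice):

```python
def st(A, m):
    """Schur triples of A inside Z_m: number of (x, y, z) in A^3 with x + y = z."""
    S = set(A)
    return sum((x + y) % m in S for x in A for y in A)

def st_prod(A1, m1, A2, m2):
    """Schur triples of the product set A1 x A2 inside Z_m1 x Z_m2."""
    B = [(a, b) for a in A1 for b in A2]
    S = set(B)
    return sum(((x1 + y1) % m1, (x2 + y2) % m2) in S
               for (x1, x2) in B for (y1, y2) in B)

X = [0, 1, 4]  # arbitrary subset of Z_7
assert st(range(3), 3) == 9                        # ST(Z_3) = 9, as used above
assert st_prod(range(3), 3, X, 7) == 9 * st(X, 7)  # ST(Z_3 x X) = ST(Z_3) ST(X)
```

Membership in a product set factorizes coordinatewise, which is exactly why the triple count multiplies.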
\section{A removal-type lemma of Green and Ruzsa}
\label{sec:removal-type-lemma}
In this section, we prove Proposition~\ref{prop:Green-Ruzsa-optimal}.
\begin{proof}[{Proof of Proposition~\textup{\ref{prop:Green-Ruzsa-optimal}}}.]
Suppose that $\varepsilon > 0$, let $G$ be an abelian group of order $n$, and let $A$ be an arbitrary set of at least $(1/3+\varepsilon)n$ elements of $G$ with $\mathrm{ST}(A) \le \varepsilon^2n^2/2$. As in the proof of Theorem~\ref{thm:type-I}, define
\[
C = \{x \in A \colon |(x-A) \cap A| \ge \varepsilon n\}.
\]
As clearly $\mathrm{ST}(A) \ge |C| \cdot \varepsilon n$, we have $|C| \le \varepsilon n / 2$. Let $A' = A \setminus C$. We claim that for every $x,y \in A'$, there are at least $\varepsilon n$ representations of $x-y$ as $a-b$ with $a,b \in A$. Indeed, for each $x,y \in A'$,
\begin{multline*}
|(x-A) \cap (y-A)| \ge |(x-A) \cap (y-A) \cap A^c| \\
\ge 2(|A| - \varepsilon n) - |A^c| = 3|A| - (1+2\varepsilon)n \ge \varepsilon n.
\end{multline*}
In particular, $\mathrm{ST}(A) \ge |(A'-A') \cap A| \cdot \varepsilon n$ and therefore
\[
|(A' - A') \cap A'| \le |(A' - A') \cap A| \le \varepsilon n /2.
\]
The set $B = A' \setminus (A' - A')$ is sum-free and
\[
|A \setminus B| = |C| + |(A'-A') \cap A'| \le \varepsilon n.\qedhere
\]
\end{proof}
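The construction in this proof is entirely explicit and can be exercised on a toy example. In the sketch below (helper name ours), note that $B = A' \setminus (A'-A')$ is sum-free by construction for \emph{any} input, while the quantitative bound $|A \setminus B| \le \varepsilon n$ is only guaranteed under the hypotheses of the proposition; in this particular instance it holds trivially because the chosen $A$ is already sum-free in $\mathbb{Z}_{101}$.

```python
def large_sumfree_subset(A, n, eps):
    """The construction from the proof: strip C, then the difference set A' - A'."""
    A = set(A)
    C = {x for x in A if sum((x - a) % n in A for a in A) >= eps * n}
    A1 = A - C                                   # A' in the proof
    D = {(x - y) % n for x in A1 for y in A1}    # A' - A'
    return A1 - D                                # B = A' \ (A' - A')

n, eps = 101, 0.1
A = set(range(34, 68))   # the middle-third interval, sum-free in Z_101
B = large_sumfree_subset(A, n, eps)
assert all((x + y) % n not in B for x in B for y in B)  # B is sum-free
assert len(A - B) <= eps * n
```

Sum-freeness of $B$ is immediate: if $x+y=z$ with $x,y,z\in B\subseteq A'$, then $y=z-x\in A'-A'$, contradicting $y\in B$.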
\section{Concluding remarks}
In this paper, we have determined the minimum number of Schur triples in a set of $a$ elements of a finite abelian group $G$ for various $a$ and $G$. We have been able to resolve this problem completely in the cases when $G$ is a cyclic group of prime order and when $G = \mathbb{Z}_2^n$. We have also obtained some partial results for groups of type~\Rom{1}, that is, groups whose order is divisible by a prime $p$ satisfying $p \equiv 2 \pmod 3$. In this case, we have determined the minimum number of Schur triples for all $a$ in a short interval starting from $(\frac{1}{3} + \frac{1}{3p})|G|$, which is the largest size of a sum-free set in $G$.
We believe that solving Problem~\ref{prob:main} completely would be rather difficult, for several reasons. First, determining merely the largest size of a sum-free set in a general group of type~\Rom{3}\ requires considerable effort, see~\cite{GrRu05}. Second, the `behaviour' of the function $f_G$ defined in~\eqref{eq:fG} already becomes highly `non-uniform' when $G$ ranges over groups of type~\Rom{2}. Third, even the seemingly modest task of determining $f_G$ for groups of even order, say, would most likely entail understanding $f_G$ for general $G$; simply consider the group $\mathbb{Z}_2 \times G$ for some `difficult' $G$.
In view of this, it could be interesting to resolve Problem~\ref{prob:main} for particular families of $G$. One natural candidate would be the cyclic groups $\mathbb{Z}_{2^n}$. Here, we are tempted to guess that, similarly to the cases $G = \mathbb{Z}_p$ and $G = \mathbb{Z}_2^n$, there exists an ordering of the elements of $\mathbb{Z}_{2^n}$ as $x_1^n, \ldots, x_{2^n}^n$ such that for every $a$, the set $\{x_1^n, \ldots, x_a^n\}$ minimises $\mathrm{ST}(A)$ among all $a$-element $A \subseteq \mathbb{Z}_{2^n}$. It is likely that one such family of sequences $(x_i^n)$ is the one defined as follows: $x_1^0 = 0$ and $x_i^{n+1} = 2x_i^n + 1$ and $x_{2^n+i}^{n+1} = 2x_i^n$ for all $n\ge 0$ and $i \in [2^n]$.
\medskip
\noindent
\textbf{Acknowledgement.} Parts of this work were carried out when the first author visited the Institute for Mathematical Research (FIM) of ETH Z\"urich, and also when the second author visited the School of Mathematical Sciences of Tel Aviv University. We would like to thank both institutions for their hospitality and for creating a stimulating research environment. We are indebted to B\'ela Bajnok for pointing out an error in the statement of Theorem~\ref{thm:Zp} in the previous version of this paper.
\bibliographystyle{amsplain}
\bigskip
\noindent
[arXiv:1507.03764, \emph{The number of additive triples in subsets of abelian groups}, math.CO.]

\bigskip\hrule\bigskip

\noindent
\textbf{Weak Convergence of Probability Measures} (arXiv:2007.10293). Lecture notes based on the book \emph{Convergence of Probability Measures} by Patrick Billingsley.

\section*{Introduction}
Throughout these lecture notes we use the following notation for the standard normal distribution function:
\[\Phi(z)={1\over\sqrt{2\pi}}\int_{-\infty}^z e^{-u^2/2}\,du.\]
Consider a symmetric simple random walk $S_n=\xi_1+\ldots+\xi_n$ with $\mathbb P(\xi_i=1)=\mathbb P(\xi_i=-1)=1/2$. The random sequence $S_n$ has no limit in the usual sense. However, by de Moivre's theorem (1733),
\[\mathbb P(S_n\le z\sqrt{n})\to\Phi(z) \mbox{ as }n\to\infty \mbox{ for any }z\in\boldsymbol R.\]
This is an example of convergence in distribution ${S_n\over \sqrt{n}}\Rightarrow Z$ to a normally distributed random variable.
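De Moivre's statement can be checked numerically: $S_n=2K-n$ with $K\sim\mathrm{Binomial}(n,1/2)$, so the left-hand side is an exact binomial sum. A quick sketch (an aside, not part of the notes; the tolerance $0.02$ is an ad hoc choice covering the $O(n^{-1/2})$ error at $n=10^4$):

```python
import math

def Phi(z):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def srw_cdf(n, z):
    """Exact P(S_n <= z sqrt(n)) for the symmetric simple random walk:
    S_n = 2K - n with K ~ Binomial(n, 1/2)."""
    bound = z * math.sqrt(n)
    total = sum(math.comb(n, k) for k in range(n + 1) if 2 * k - n <= bound)
    return total / 2 ** n

for z in (-1.0, 0.0, 0.5, 1.0):
    assert abs(srw_cdf(10_000, z) - Phi(z)) < 0.02
```

Python's exact integer arithmetic makes the binomial sum reliable even at $n=10^4$, where the individual terms are astronomically large.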
Define a sequence of stochastic processes $X^n=(X^n_t)_{t\in[0,1]}$ by linear interpolation between its values $X^n_{i/n}(\omega)={S_i(\omega)\over\sigma\sqrt n}$ at the points $t=i/n$, where $\sigma^2=\operatorname{Var}(\xi_i)=1$; see Figure \ref{frw}.
The much more powerful functional CLT (Donsker's theorem) asserts convergence in distribution towards the Wiener process:
$X^n\Rightarrow W$.
\begin{figure}
\centering
\includegraphics[width=12cm,height=4cm]{Donsker.pdf}
\caption{Scaled symmetric simple random walk $X^n_t(\omega)$ for a fixed $\omega\in\Omega$ and $n=4,16,64$.}
\label{frw}
\end{figure}
This course deals with weak convergence of probability measures on Polish spaces $({\boldsymbol S},\mathcal S)$. For us, the principal examples of Polish spaces (complete separable metric spaces) are
the space $\boldsymbol C=\boldsymbol C[0,1]$ of continuous trajectories $x:[0,1]\to\boldsymbol R$ (Section \ref{secC}),
the space $\boldsymbol D=\boldsymbol D[0,1]$ of cadlag trajectories $x:[0,1]\to\boldsymbol R$ (Section \ref{secD}),
the space $\boldsymbol D[0,\infty)$ of cadlag trajectories $x:[0,\infty)\to\boldsymbol R$ (Section \ref{secDi}).\\
To prove the functional CLT $X^n\Rightarrow W$, we have to check that $\mathbb E f(X^n)\to\mathbb Ef(W)$ for all bounded continuous functions $f:{\boldsymbol C}[0,1]\to \boldsymbol R$, which is not practical to do straightforwardly. Instead, one starts with the finite-dimensional distributions
$$(X^n_{t_1},\ldots,X^n_{t_k})\Rightarrow (W_{t_1},\ldots,W_{t_k}).$$
To prove the weak convergence of the finite-dimensional distributions, it is enough to check the convergence of moment generating functions, thus allowing us to focus on a special class of continuous functions $f_{\lambda_1,\ldots,\lambda_k}: \boldsymbol R^k\to \boldsymbol R$, where $\lambda_i\ge0$ and
\[f_{\lambda_1,\ldots,\lambda_k}(x_1,\ldots,x_k)=\exp(\lambda_1x_1+\ldots+\lambda_kx_k).\]
For the weak convergence in the infinite-dimensional space $\boldsymbol C[0,1]$, the usual additional step is to verify tightness of the distributions of the family of processes $(X^n)$. Loosely speaking, tightness means that no probability mass escapes to infinity.
By Prokhorov's theorem (Section \ref{secP}), tightness implies relative compactness, which means that each subsequence of $X^n$ contains a further subsequence converging weakly. Since all possible limits have the finite-dimensional distributions of $W$, we conclude that all subsequences converge to the same limit $W$, and thereby establish the convergence $X^n\Rightarrow W$.
This approach makes it crucial to find tightness criteria in $\boldsymbol C[0,1]$, $\boldsymbol D[0,1]$, and then in $\boldsymbol D[0,\infty)$.
\section{The Portmanteau and mapping theorems}
\subsection{Metric spaces}
Consider a metric space ${\boldsymbol S}$ with metric $\rho(x,y)$. For subsets $A\subset {\boldsymbol S}$, denote the closure by $A^-$, the interior by $A^\circ$, and the boundary by $\partial A=A^--A^\circ$. We write
\[\rho(x,A)=\inf\{\rho(x,y):y\in A\},\qquad A^\epsilon=\{x:\rho(x,A)<\epsilon\}.\]
\begin{definition}
Open balls $B(x,r)=\{y\in {\boldsymbol S}: \rho(x,y)<r\}$ form a {\it base} for ${\boldsymbol S}$: each {\it open set} in ${\boldsymbol S}$ is a union of open balls. Complements of the open sets are called {\it closed sets}. The {\it Borel $\sigma$-algebra} $\mathcal S$ is the smallest $\sigma$-algebra containing the open (equivalently, the closed) sets of ${\boldsymbol S}$.
\end{definition}
\begin{definition}
A collection $\mathcal A$ of ${\boldsymbol S}$-subsets is called a $\pi$-system if it is closed under intersection, that is if $A,B\in\mathcal A$, then $A\cap B\in\mathcal A$.
We say that $\mathcal L$ is a $\lambda$-system if: (i) ${\boldsymbol S}\in\mathcal L$, (ii) $A\in\mathcal L$ implies $A^c \in\mathcal L$, (iii) for any sequence of disjoint sets $A_n\in\mathcal L$, $\cup_nA_n\in \mathcal L$.
\end{definition}
\begin{theorem}\label{Dyn} Dynkin's $\pi$-$\lambda$ lemma.
If $\mathcal A$ is a $\pi$-system such that $\mathcal A\subset\mathcal L$, where $\mathcal L$ is a $\lambda$-system, then
$\sigma(\mathcal A)\subset\mathcal L$, where $\sigma(\mathcal A)$ is the $\sigma$-algebra generated by $\mathcal A$.
\end{theorem}
\begin{definition}
A metric space ${\boldsymbol S}$ is called
{\it separable} if it contains a countable dense subset. It is called
{\it complete} if every Cauchy (fundamental) sequence has a limit lying in ${\boldsymbol S}$. A complete separable metric space is called a {\it Polish} space.
\end{definition}
Separability is a topological property, while completeness is a property of the metric and not of the topology.
\begin{definition}
An {\it open cover} of $A\subset {\boldsymbol S}$ is a class of open sets whose union contains $A$.
\end{definition}
\begin{theorem}\label{M3}
These three conditions are equivalent:
(i) ${\boldsymbol S}$ is separable,
(ii) ${\boldsymbol S}$ has a countable base (a class of open sets such that each open set is a union of sets in the class),
(iii) Each open cover of each subset of ${\boldsymbol S}$ has a countable subcover.
\end{theorem}
\begin{theorem}\label{M3'}
Suppose that the subset $M$ of ${\boldsymbol S}$ is separable.
(i) There is a countable class $\mathcal A$ of open sets with the property that, if $x\in G\cap M$ and $G$ is open, then $x\in A\subset A^-\subset G$ for some $A\in\mathcal A$.
(ii) Lindel\"{o}f property. Each open cover of $M$ has a countable subcover.
\end{theorem}
\begin{definition}
A set $K$ is called {\it compact} if each open cover of $K$ has a finite subcover.
A set $A\subset {\boldsymbol S}$ is called {\it relatively compact} if each sequence in $A$ has a convergent subsequence, whose limit need not lie in $A$.
\end{definition}
\begin{theorem}\label{M5}
Let $A$ be a subset of a metric space ${\boldsymbol S}$. The following three conditions are equivalent:
(i) $A^-$ is compact,
(ii) $A$ is relatively compact,
(iii) $A^-$ is complete and $A$ is totally bounded (that is, for any $\epsilon>0$, $A$ has a finite $\epsilon$-net, whose points are not required to lie in $A$).
\end{theorem}
\begin{theorem}\label{M10}
Consider two metric spaces $({\boldsymbol S},\rho)$ and $({\boldsymbol S}',\rho')$ and maps $h,h_n:{\boldsymbol S}\to{\boldsymbol S}'$. If $h$ is continuous, then it is measurable $\mathcal S/\mathcal S'$. If each $h_n$ is measurable $\mathcal S/\mathcal S'$, and if $h_nx\to hx$ for every $x\in {\boldsymbol S}$, then $h$ is also measurable $\mathcal S/\mathcal S'$.
\end{theorem}
\subsection{Convergence in distribution and weak convergence}
\begin{definition}\label{p7}
Let $P_n, P$ be probability measures on $({\boldsymbol S},\mathcal S)$. We say $P_n\Rightarrow P$ {\it weakly converges} as $n\to \infty$ if for any bounded continuous function $f:{\boldsymbol S}\to \boldsymbol R$
\[\int_{{\boldsymbol S}}f(x)P_n(dx)\to\int_{{\boldsymbol S}}f(x)P(dx), \quad n\to\infty. \]
\end{definition}
\begin{definition}
Let $X$ be a $({\boldsymbol S},\mathcal S)$-valued random element defined on the probability space $(\Omega,\mathcal F,\mathbb P)$.
We say that a probability measure $P$ on ${\boldsymbol S}$ is the probability distribution of $X$ if $P(A)=\mathbb P(X\in A)$ for all $A\in\mathcal S$.
\end{definition}
\begin{definition}\label{p25}
Let $X_n,X$ be $({\boldsymbol S},\mathcal S)$-valued random elements defined on the probability spaces $(\Omega_n,\mathcal F_n,\mathbb P_n)$, $(\Omega,\mathcal F,\mathbb P)$.
We say $X_n$ converge in distribution to $X$ as $n\to \infty$ and write $X_n\Rightarrow X$, if for any bounded continuous function $f:{\boldsymbol S}\to \boldsymbol R$,
\[\mathbb E_n(f(X_n))\to\mathbb E(f(X)), \quad n\to\infty. \]
This is equivalent to the weak convergence $P_n\Rightarrow P$ of the respective probability distributions.
\end{definition}
\begin{example}
The function $f(x)=1_{\{x\in A\}}$ is bounded but not continuous; therefore $P_n\Rightarrow P$ does not in general imply $P_n(A)\to P(A)$.
For ${\boldsymbol S}={{\boldsymbol R}}$, the function $f(x)=x$ is continuous but not bounded; therefore $X_n\Rightarrow X$ does not in general imply $\mathbb E_n(X_n)\to\mathbb E(X)$.
\end{example}
\begin{definition}
Call $A\in\mathcal S$ a {\it $P$-continuity set} if $P(\partial A)=0$.
\end{definition}
\begin{theorem}\label{2.1} The Portmanteau theorem. The following five statements are equivalent.
(i) $P_n\Rightarrow P$.
(ii) $\int f(x)P_n(dx)\to\int f(x)P(dx)$ for all bounded uniformly continuous $f:{\boldsymbol S}\to \boldsymbol R$.
(iii) $\limsup_{n\to\infty}P_n(F)\le P(F)$ for all closed $F\in\mathcal S$.
(iv) $\liminf_{n\to\infty}P_n(G)\ge P(G)$ for all open $G\in\mathcal S$.
(v) $P_n(A)\to P(A)$ for all $P$-continuity sets $A$.
\end{theorem}
Proof. (i) $\to$ (ii) is trivial.
\noindent (ii) $\to$ (iii). For a closed $F\in\mathcal S$ and $\epsilon>0$, put
$$g(x)=(1-\epsilon^{-1}\rho(x,F))\vee0.$$
This function is bounded and uniformly continuous since $|g(x)-g(y)|\le\epsilon^{-1}\rho(x,y)$. Using
\[
1_{\{x\in F\}}\le g(x)\le 1_{\{x\in F^\epsilon\}}
\]
we derive (iii) from (ii):
$$\limsup_{n\to\infty}P_n(F)\le\limsup_{n\to\infty}\int g(x)P_n(dx)=\int g(x)P(dx)\le P(F^\epsilon)\to P(F),\quad \epsilon\to0.$$
\noindent (iii) $\to$ (iv) follows by complementation.
\noindent (iii) + (iv) $\to$ (v). If $P(\partial A)=0$, then $P(A^-)=P(A^\circ)=P(A)$, so the leftmost and rightmost probabilities below coincide and $P_n(A)\to P(A)$:
\begin{align*}
P(A^-)&\ge \limsup_{n\to\infty} P_n(A^-)\ge \limsup_{n\to\infty} P_n(A)\\
&\ge \liminf_{n\to\infty}P_n(A)\ge \liminf_{n\to\infty}P_n(A^\circ)\ge P(A^\circ).
\end{align*}
\noindent (v) $\to$ (i). By linearity we may assume that the bounded continuous function $f$ satisfies $0\le f\le1$. Then putting $A_t=\{x:f(x)>t\}$ we get
\[\int_{\boldsymbol S} f(x)P_n(dx)=\int_0^1 P_n(A_t)dt \to\int_0^1 P(A_t)dt =\int_{\boldsymbol S} f(x)P(dx). \]
Here the convergence follows from (v): since $f$ is continuous,
$\partial A_t\subset\{x:f(x)=t\}$,
and $P(\{x:f(x)=t\})>0$ for at most countably many $t$, so $A_t$ is a $P$-continuity set for all but countably many $t$. We also used the bounded convergence theorem.
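The layer-cake identity $\int_{\boldsymbol S} f\,dP=\int_0^1P(A_t)\,dt$ used in the last step can be illustrated numerically for a discrete $P$ (a side illustration; the atoms, weights, and function below are arbitrary choices):

```python
# A discrete P on [0,1] with five atoms, and a continuous f with 0 <= f <= 1.
atoms = [0.1, 0.35, 0.5, 0.62, 0.9]
weights = [0.1, 0.2, 0.3, 0.15, 0.25]
f = lambda x: x * x

lhs = sum(w * f(x) for w, x in zip(weights, atoms))  # integral of f against P
m = 100_000                                          # midpoint grid for t in (0,1)
rhs = sum(sum(w for w, x in zip(weights, atoms) if f(x) > (k + 0.5) / m)
          for k in range(m)) / m                     # Riemann sum of t -> P(f > t)
assert abs(lhs - rhs) < 1e-4
```

The map $t\mapsto P(f>t)$ is a step function with at most five jumps here, so the midpoint Riemann sum is accurate to roughly the grid spacing.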
\begin{example}
Let $F(x)=\mathbb P(X\le x)$. Then $X_n=X+n^{-1}$ has distribution function $F_n(x)=F(x-n^{-1})$. As $n\to\infty$, $F_n(x)\to F(x-)$, so $F_n(x)\to F(x)$ holds exactly at the continuity points of $F$.
\end{example}
\begin{corollary}\label{1.2}
A sequence of probability measures cannot weakly converge to two different limits.
\end{corollary}
Proof. It suffices to prove that if $\int_{{\boldsymbol S}}f(x)P(dx)=\int_{{\boldsymbol S}}f(x)Q(dx)$ for all bounded, uniformly continuous functions $f:{\boldsymbol S}\to \boldsymbol R$, then $P=Q$. Using the
bounded, uniformly continuous functions $g(x)=(1-\epsilon^{-1}\rho(x,F))\vee0$ we get
$$P(F)\le\int_{{\boldsymbol S}}g(x)P(dx)=\int_{{\boldsymbol S}}g(x)Q(dx)\le Q(F^\epsilon).$$
Letting $\epsilon\to0$ it gives for any closed set $F$, that $P(F)\le Q(F)$ and by symmetry we conclude that $P(F)=Q(F)$. It follows that $P(G)=Q(G)$ for all open sets $G$.
It remains to use regularity of any probability measure $P$: if $A\in\mathcal S$ and $\epsilon>0$, then there exist a closed set $F_\epsilon$ and an open set $G_\epsilon$ such that $F_\epsilon\subset A\subset G_\epsilon$ and $P(G_\epsilon-F_\epsilon)<\epsilon$. To this end we denote by $\mathcal G_P$ the class of $\mathcal S$-sets with the just stated property. If $A$ is closed, we can take $F=A$ and $G=F^\delta$, where $\delta$ is small enough. Thus all closed sets belong to
$\mathcal G_P$, and we need to show that $\mathcal G_P$ forms a $\sigma$-algebra. Given $A_n\in\mathcal G_P$, choose closed sets $F_n$ and open sets $G_n$ such that $F_n\subset A_n\subset G_n$ and $P(G_n-F_n)<2^{-n-1}\epsilon$. If $G=\cup_{n}G_n$ and $F=\cup_{n\le n_0}F_n$ with $n_0$ chosen so that $P(\cup_{n}F_n-F)<\epsilon/2$, then $F\subset \cup_{n}A_n\subset G$ and $P(G-F)<\epsilon$. Thus $\mathcal G_P$ is closed under the formation of countable unions. Since it is closed under complementation, $\mathcal G_P$ is a $\sigma$-algebra.
\begin{theorem}\label{2.7}
Mapping theorem. Let $X_n$ and $X$ be random elements of a metric space ${\boldsymbol S}$. Let $h:{\boldsymbol S}\to {\boldsymbol S}'$ be a $\mathcal S/\mathcal S'$-measurable mapping and $D_h$ be the set of its discontinuity points. If $X_n\Rightarrow X$ and $\mathbb P(X\in D_h)=0$, then $h(X_n)\Rightarrow h(X)$.
In other terms, if $P_n\Rightarrow P$ and $P(D_h)=0$, then $P_nh^{-1}\Rightarrow Ph^{-1}$.
\end{theorem}
Proof. We show first that $D_h$ is a Borel subset of ${\boldsymbol S}$. For any pair $(\epsilon,\delta)$ of positive rationals, the set
\[A_{\epsilon\delta}=\{x\in {\boldsymbol S}: \mbox{ there exist $y,z\in {\boldsymbol S}$ such that }\rho(x,y)<\delta,\rho(x,z)<\delta,\rho'(hy,hz)\ge \epsilon\}\]
is open. Therefore, $D_h=\cup_\epsilon\cap_\delta A_{\epsilon\delta}\in\mathcal S$. Now, for each closed $F\in\mathcal S'$,
\begin{align*}
\limsup_{n\to\infty} P_n(h^{-1}F)&\le\limsup_{n\to\infty} P_n( (h^{-1}F)^-)\le P((h^{-1}F)^-)\\
&\le P( h^{-1}(F^-)\cup D_h)= P( h^{-1}(F^-)).
\end{align*}
To see that $(h^{-1}F)^-\subset h^{-1}(F^-)\cup D_h$, take an element $x\in(h^{-1}F)^-$. There is a sequence $x_n\to x$ such that $h(x_n)\in F$, and therefore either $h(x_n)\to h(x)$, in which case $h(x)\in F^-$, or $x\in D_h$. Since $F^-=F$ for closed $F$, the last chain of inequalities verifies condition (iii) of the Portmanteau theorem, and $P_nh^{-1}\Rightarrow Ph^{-1}$ follows.
\begin{example}
Let $P_n\Rightarrow P$. If $A$ is a $P$-continuity set and $h(x)=1_{\{x\in A\}}$, then $D_h=\partial A$ and $P(D_h)=0$, so by the mapping theorem, $P_nh^{-1}\Rightarrow Ph^{-1}$.
\end{example}
\subsection{Convergence in probability and in total variation. Local limit theorems}
\begin{definition}
Suppose $X_n$ and $X$ are random elements of ${\boldsymbol S}$ defined on the same probability space. If $\mathbb P(\rho(X_n,X)<\epsilon)\to1$ for each positive $\epsilon$, we say $X_n$ converge to $X$ {\it in probability} and write $X_n\stackrel{\rm P}{\to} X$.
\end{definition}
\begin{exercise}\label{inP}
Convergence in probability $X^n\stackrel{\rm P}{\to} X$ is equivalent to the weak convergence $\rho(X^n,X)\Rightarrow0$. Moreover, $(X^n_1,\ldots,X^n_k)\stackrel{\rm P}{\to} (X_1,\ldots,X_k)$ if and only if $X^n_i\stackrel{\rm P}{\to} X_i$ for all $i=1,\ldots,k$.
\end{exercise}
\begin{theorem}\label{3.2}
Suppose $(X_n,X_{u,n})$ are random elements of ${\boldsymbol S}\times {\boldsymbol S}$. If $X_{u,n}\Rightarrow Z_u$ as $n\to\infty$ for any fixed $u$, and $Z_u\Rightarrow X$ as $u\to\infty$, and
\[\lim_{u\to\infty}\limsup_{n\to\infty}\mathbb P(\rho(X_{u,n},X_n)\ge\epsilon)=0,\mbox{ for each positive }\epsilon,\]
then $X_n\Rightarrow X$.
\end{theorem}
Proof. Let $F\in\mathcal S$ be closed and define $F_\epsilon$ as the set $\{x:\rho(x,F)\le\epsilon\}$. Then
\begin{align*}
\mathbb P(X_n\in F)&=\mathbb P(X_n\in F,X_{u,n}\notin F_\epsilon)+\mathbb P(X_n\in F,X_{u,n}\in F_\epsilon)\\
&\le\mathbb P(\rho(X_{u,n},X_n)\ge\epsilon)+\mathbb P(X_{u,n}\in F_\epsilon).
\end{align*}
Since $F_\epsilon$ is also closed, letting $n\to\infty$ and then $u\to\infty$, using the hypothesis on $\rho(X_{u,n},X_n)$ and the Portmanteau theorem (applied first to $X_{u,n}\Rightarrow Z_u$ and then to $Z_u\Rightarrow X$), we get
\begin{align*}
\limsup_{n\to\infty}\mathbb P(X_n\in F)&\le\limsup_{u\to\infty}\limsup_{n\to\infty}\mathbb P(X_{u,n}\in F_\epsilon)\le\limsup_{u\to\infty}\mathbb P(Z_u\in F_\epsilon)\\
&\le\mathbb P(X\in F_\epsilon)\to\mathbb P(X\in F),\quad\epsilon\downarrow0,
\end{align*}
where the final convergence holds because $F_\epsilon\downarrow F$ as $\epsilon\downarrow 0$. Hence $\limsup_n\mathbb P(X_n\in F)\le\mathbb P(X\in F)$ for every closed $F$, and $X_n\Rightarrow X$ follows from the Portmanteau theorem.
\begin{corollary}\label{3.1}
Suppose $(X_{n},Y_n)$ are random elements of ${\boldsymbol S}\times {\boldsymbol S}$. If $Y_{n}\Rightarrow X$ as $n\to\infty$ and $\rho(X_n,Y_n)\Rightarrow0$, then $X_n\Rightarrow X$. Taking $Y_n\equiv X$, we conclude that convergence in probability implies convergence in distribution.
\end{corollary}
\begin{definition}
Convergence in {\it total variation} $P_n\stackrel{\rm TV}{\to}P$ means
\[\sup_{A\in \mathcal S}|P_n(A)-P(A)|\to0.\]
\end{definition}
\begin{theorem}\label{(3.10)} Scheff\'e's theorem. Suppose $P_n$ and $P$ have densities $f_n$ and $f$ with respect to a measure $\mu$ on $({\boldsymbol S},\mathcal S)$. If $f_n\to f$ almost everywhere with respect to $\mu$, then $P_n\stackrel{\rm TV}{\to}P$ and therefore $P_n\Rightarrow P$.
\end{theorem}
Proof. For any $A\in \mathcal S$
\begin{align*}
|P_n(A)-P(A)|&=\Big|\int_A(f_n(x)-f(x))\mu(dx)\Big|\le \int_{\boldsymbol S}|f(x)-f_n(x)|\mu(dx)\\
&=2\int_{\boldsymbol S}(f(x)-f_n(x))^+\mu(dx),
\end{align*}
where the last equality follows from
\begin{align*}
0=\int_{\boldsymbol S} (f(x)-f_n(x))\mu(dx)=\int_{\boldsymbol S}(f(x)-f_n(x))^+\mu(dx)-\int_{\boldsymbol S}(f(x)-f_n(x))^-\mu(dx).
\end{align*}
On the other hand, by the dominated convergence theorem, $\int(f(x)-f_n(x))^+\mu(dx)\to0$.
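The identity behind this proof, $\sup_A|P(A)-Q(A)|=\int(f-g)^+\,d\mu$ for densities $f,g$, can be checked by brute force over all events of a small discrete space (an aside; the two distributions below are arbitrary):

```python
from itertools import combinations

def tv_by_sup(p, q):
    """sup over all events A of |P(A) - Q(A)|, by enumerating every subset."""
    pts = list(p)
    return max(abs(sum(p[x] for x in A) - sum(q[x] for x in A))
               for r in range(len(pts) + 1) for A in combinations(pts, r))

p = {0: 0.2, 1: 0.5, 2: 0.3}
q = {0: 0.4, 1: 0.4, 2: 0.2}
positive_part = sum(max(p[x] - q[x], 0.0) for x in p)  # integral of (f - g)^+
assert abs(tv_by_sup(p, q) - positive_part) < 1e-12
```

The supremum is attained at the event $A=\{x: p(x)>q(x)\}$, which is exactly how the proof reduces total variation to the positive part.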
\begin{example}\label{E3.3} According to Theorem \ref{(3.10)} the local limit theorem implies the integral limit theorem $P_n\Rightarrow P$. The reverse implication is false. Indeed, let $P=\mu$ be Lebesgue measure on ${\boldsymbol S}=[0,1]$ so that $f\equiv1$. Let $P_n$ be the uniform distribution on the set
\[B_n=\bigcup_{k=0}^{n-1} (kn^{-1},kn^{-1}+n^{-3}) \]
with density $f_n(x)=n^{2}1_{\{x\in B_n\}}$. Since $\mu(B_n)=n^{-2}$, the Borel--Cantelli lemma implies that $\mu(B_n \mbox{ i.o.})=0$. Thus $f_n(x)\to0\ne f(x)$ for almost all $x$, and there is no local theorem. On the other hand, $|P_n[0,x]-x|\le n^{-1}$, implying $P_n\Rightarrow P$.
\end{example}
\begin{theorem}\label{3.3} Let ${\boldsymbol S}=\boldsymbol R^k$. Denote by $L_n\subset \boldsymbol R^k$ a lattice whose cells have dimensions $(\delta_1(n),\ldots,\delta_k(n))$, so that the cells of $L_n$, all of the form
\[B_n(x)=\{y:x_1-\delta_1(n)<y_1\le x_1,\ldots,x_k-\delta_k(n)<y_k\le x_k\},\quad x\in L_n,\]
have volume $v_n=\delta_1(n)\cdots\delta_k(n)$. Suppose that $(P_n)$ is a sequence of probability measures on $\boldsymbol R^k$, where $P_n$ is supported by $L_n$ with probability mass function $p_n(x)$.
Suppose that $P$ is a probability measure on $\boldsymbol R^k$ having density $f$ with respect to Lebesgue measure. Assume that all $\delta_i(n)\to0$ as $n\to\infty$. If
${p_n(x_n)\over v_n} \to f(x)$ whenever $x_n\in L_n$ and $x_n\to x$, then $P_n\Rightarrow P$.
\end{theorem}
Proof. Define a probability density $f_n$ on $\boldsymbol R^k$ by setting $f_n(y)={p_n(x)\over v_n}$ for $y\in B_n(x)$. It follows that $f_n(y)\to f(y)$ for all $y\in\boldsymbol R^k$. Let a random vector $Y_n$ have the density $f_n$ and $X$ have the density $f$. By Theorem \ref{(3.10)}, $Y_n\Rightarrow X$. Define $X_n$ on the same probability space as $Y_n$ by setting $X_n=x$ if $Y_n$ lies in the cell $B_n(x)$. Since $\|X_n-Y_n\|\le\|\delta(n)\|$, we conclude using Corollary \ref{3.1} that $X_n\Rightarrow X$.
\begin{example}\label{E3.4} If $S_n$ is the number of successes in $n$ Bernoulli trials, then according to the local form of the de Moivre-Laplace theorem,
\[\mathbb P(S_n=i) \sqrt{npq}={n\choose i}p^iq^{n-i}\sqrt{npq}\to{1\over\sqrt{2\pi}}e^{-z^2/2}\]
provided $i$ varies with $n$ in such a way that ${i-np\over\sqrt{npq}}\to z$. Therefore, Theorem \ref{3.3} applies to the lattice
\[L_n=\Big\{{i-np\over\sqrt{npq}}, i\in\mathbb Z\Big\}\]
with $v_n={1\over\sqrt{npq}}$ and the probability mass function $p_n({i-np\over\sqrt{npq}})=\mathbb P(S_n=i)$ for $i=0,\ldots,n$. As a result we get the integral form of the de Moivre-Laplace theorem:
\[\mathbb P\Big({S_n-np\over \sqrt{npq}}\le z\Big)\to\Phi(z) \mbox{ as }n\to\infty \mbox{ for any }z\in\boldsymbol R.\]
\end{example}
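The local statement in Example \ref{E3.4} can be verified numerically. The sketch below (an illustration only; helper name ours) evaluates the binomial probability through $\log\Gamma$ to avoid overflow and compares it with the normal density at the lattice point actually attained:

```python
import math

def local_lhs(n, p, i):
    """sqrt(npq) * P(S_n = i) for S_n ~ Binomial(n, p), computed via logarithms."""
    q = 1.0 - p
    log_pmf = (math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
               + i * math.log(p) + (n - i) * math.log(q))
    return math.exp(log_pmf) * math.sqrt(n * p * q)

n, p = 10_000, 0.5
q = 1.0 - p
for z in (-1.0, 0.0, 1.5):
    i = round(n * p + z * math.sqrt(n * p * q))
    zi = (i - n * p) / math.sqrt(n * p * q)  # the z actually attained on the lattice
    phi = math.exp(-zi * zi / 2) / math.sqrt(2 * math.pi)
    assert abs(local_lhs(n, p, i) - phi) < 1e-3
```

For the symmetric case $p=q=1/2$ the first Edgeworth correction vanishes, so the agreement at $n=10^4$ is already far better than the stated tolerance.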
\section{Convergence of finite-dimensional distributions}
\subsection{Separating and convergence-determining classes}
\begin{definition}\label{p18}
Call a subclass $\mathcal A\subset \mathcal S$ a {\it separating class} if any two probability measures with $P(A)=Q(A)$ for all $A\in \mathcal A$, must be identical: $P(A)=Q(A)$ for all $A\in \mathcal S$.
Call a subclass $\mathcal A\subset \mathcal S$ a {\it convergence-determining class} if, for every $P$ and every sequence $(P_n)$, convergence $P_n(A)\to P(A)$ for all $P$-continuity sets $A\in\mathcal A$ implies $P_n\Rightarrow P$.
\end{definition}
\begin{lemma}\label{PM.42}
If $\mathcal A\subset\mathcal S$ is a $\pi$-system and $\sigma(\mathcal A)=\mathcal S$, then $\mathcal A$ is a separating class.
\end{lemma}
Proof. Consider a pair of probability measures such that $P(A)=Q(A)$ for all $A\in \mathcal A$. Let $\mathcal L=\mathcal L_{P,Q}$ be the class of all sets $A\in \mathcal S$ such that $P(A)=Q(A)$. Clearly, ${\boldsymbol S}\in \mathcal L$. If $A\in \mathcal L$, then $A^c\in \mathcal L$ since $P(A^c)=1-P(A)=1-Q(A)=Q(A^c)$.
If $A_n$ are disjoint sets in $\mathcal L$, then $\cup_nA_n\in \mathcal L$ since
$$P(\cup_nA_n)=\sum_nP(A_n)=\sum_nQ(A_n)=Q(\cup_nA_n).$$
Therefore $\mathcal L$ is a $\lambda$-system, and since $\mathcal A\subset\mathcal L$, Theorem \ref{Dyn} gives $\sigma(\mathcal A)\subset\mathcal L $, and $\mathcal L=\mathcal S$.
\begin{theorem}\label{2.3} Suppose that $P$ is a probability measure on a separable ${\boldsymbol S}$, and a subclass $\mathcal A_P\subset \mathcal S$ satisfies
(i) $\mathcal A_P$ is a $\pi$-system,
(ii) for every $x\in {\boldsymbol S}$ and $\epsilon>0$, there is an $A\in\mathcal A_P$ for which $x\in A^\circ\subset A\subset B(x,\epsilon)$.
\noindent If $P_n(A)\to P(A)$ for every $A\in\mathcal A_P$, then $P_n\Rightarrow P$.
\end{theorem}
Proof. If $A_1,\ldots,A_r$ lie in $\mathcal A_P$, so do their intersections.
Hence, by the inclusion-exclusion formula and the assumption of the theorem,
\begin{align*}
P_n\Big(\bigcup_{i=1}^r A_i\Big)&=\sum_iP_n(A_i)-\sum_{i<j}P_n(A_i\cap A_j)+\sum_{i<j<k}P_n(A_i\cap A_j\cap A_k)-\ldots\\
&\to\sum_iP(A_i)-\sum_{i<j}P(A_i\cap A_j)+\sum_{i<j<k}P(A_i\cap A_j\cap A_k)-\ldots =P\Big(\bigcup_{i=1}^r A_i\Big).
\end{align*}
\end{align*}
If $G\subset {\boldsymbol S}$ is open, then for each $x\in G$, $x\in A_x^\circ\subset A_x\subset G$ holds for some $A_x\in\mathcal A_P$. Since ${\boldsymbol S}$ is separable, by Theorem \ref{M3} (iii), there is a countable sub-collection $(A_{x_i}^\circ)$ that covers $G$. Thus $G=\cup_i A_{x_i}$, where all $A_{x_i}$ are $\mathcal A_P$-sets.
With $A_i=A_{x_i}$ we have $G=\cup_i A_i$. Given $\epsilon$, choose $r$ so that $P\big(\cup_{i=1}^r A_i\big)>P(G)-\epsilon$. Then,
\[P(G)-\epsilon<P\Big(\bigcup_{i=1}^r A_i\Big)=\lim_nP_n\Big(\bigcup_{i=1}^r A_i\Big)\le\liminf_nP_n(G).\]
Now, letting $\epsilon\to0$, we find that $\liminf_nP_n(G)\ge P(G)$ for every open set $G$, and therefore $P_n\Rightarrow P$.
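The inclusion-exclusion step used in the proof can be checked numerically; here is a small sanity check (our example) for the uniform measure on $\{0,\ldots,9\}$ and three overlapping sets.

```python
from itertools import combinations
from fractions import Fraction

# Uniform measure on {0,...,9} and three overlapping sets.
space = set(range(10))
P = lambda A: Fraction(len(A & space), len(space))
sets = [{0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6, 7}]

# Alternating sum over all nonempty intersections (inclusion-exclusion).
alternating = Fraction(0)
for r in range(1, len(sets) + 1):
    for combo in combinations(sets, r):
        alternating += (-1) ** (r + 1) * P(set.intersection(*combo))

assert alternating == P(set.union(*sets))
print(alternating)  # 4/5
```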
\begin{theorem}\label{2.4} Suppose that ${\boldsymbol S}$ is separable and consider a subclass $\mathcal A\subset \mathcal S$. Let $\mathcal A_{x,\epsilon}$ be the class of $A\in\mathcal A$ satisfying $x\in A^\circ\subset A\subset B(x,\epsilon)$, and let $\partial\mathcal A_{x,\epsilon}$ be the class of their boundaries. If
(i) $\mathcal A$ is a $\pi$-system,
(ii) for every $x\in {\boldsymbol S}$ and $\epsilon>0$, $\partial\mathcal A_{x,\epsilon}$ contains uncountably many disjoint sets,
\noindent then $\mathcal A$ is a convergence-determining class.
\end{theorem}
Proof. For an arbitrary $P$ let $\mathcal A_P$ be the class of $P$-continuity sets in $\mathcal A$. We have to show that if $P_n(A)\to P(A)$ holds for every $A\in\mathcal A_P$, then $P_n\Rightarrow P$. Since $\partial (A\cap B)\subset\partial A\cup\partial B$, it follows from (i) that $\mathcal A_P$ is a $\pi$-system. By (ii), the class $\partial\mathcal A_{x,\epsilon}$ contains uncountably many disjoint sets, of which at most countably many can have positive $P$-measure; hence there is an $A_x\in\mathcal A_{x,\epsilon}$ such that $P(\partial A_x)=0$, so that $A_x\in\mathcal A_P$. It remains to apply Theorem \ref{2.3}.
\subsection{Weak convergence in product spaces}
\begin{definition}\label{dpr}
Let $P$ be a probability measure on ${\boldsymbol S}={\boldsymbol S}'\times {\boldsymbol S}''$ with the product metric
\[\rho((x',x''),(y',y''))=\rho'(x',y')\vee\rho''(x'',y'').\]
Define the marginal distributions by $P'(A')=P(A'\times {\boldsymbol S}'')$ and $P''(A'')=P({\boldsymbol S}'\times A'')$. If the marginals are independent, we write $P=P'\times P''$.
We denote by $\mathcal S'\times \mathcal S''$ the product $\sigma$-algebra generated by the measurable rectangles $A'\times A''$ for $A'\in \mathcal S'$ and $A''\in \mathcal S''$.
\end{definition}
\begin{lemma}\label{M10}
If ${\boldsymbol S}={\boldsymbol S}'\times {\boldsymbol S}''$ is separable, then the three Borel $\sigma$-algebras are related by $\mathcal S=\mathcal S'\times \mathcal S''$.
\end{lemma}
Proof. Consider the projections $\pi':{\boldsymbol S}\to {\boldsymbol S}'$ and $\pi'':{\boldsymbol S}\to {\boldsymbol S}''$ defined by $\pi'(x',x'')=x'$ and $\pi''(x',x'')=x''$; each of them is continuous.
For $A'\in \mathcal S'$ and $A''\in \mathcal S''$, we have
$$A'\times A''=(\pi')^{-1}A'\cap(\pi'')^{-1}A''\in \mathcal S,$$
since the two projections are continuous and therefore measurable. Thus $\mathcal S'\times \mathcal S''\subset \mathcal S$.
On the other hand, if ${\boldsymbol S}$ is separable, then each open set in ${\boldsymbol S}$ is a countable union of the balls
\[B((x',x''),r)=B'(x',r)\times B''(x'',r)\]
and hence lies in $\mathcal S'\times \mathcal S''$. Thus $\mathcal S\subset \mathcal S'\times \mathcal S''$.
\begin{theorem}\label{2.8} Consider probability measures $P_n$ and $P$ on a separable metric space ${\boldsymbol S}={\boldsymbol S}'\times {\boldsymbol S}''$.
(a) $P_n\Rightarrow P$ implies $P_n'\Rightarrow P'$ and $P_n''\Rightarrow P''$.
(b) $P_n\Rightarrow P$ if and only if $P_n(A'\times A'')\to P(A'\times A'')$ for each $P'$-continuity set $A'$ and each $P''$-continuity set $A''$.
(c) $P_n'\times P_n''\Rightarrow P$ if and only if $P_n'\Rightarrow P'$, $P_n''\Rightarrow P''$, and $P=P'\times P''$.
\end{theorem}
Proof. (a) Since $P'=P(\pi')^{-1}$, $P''=P(\pi'')^{-1}$ and the projections $\pi'$, $\pi''$ are continuous, it follows by the mapping theorem that $P_n\Rightarrow P$ implies $P_n'\Rightarrow P'$ and $P_n''\Rightarrow P''$.
(b) Consider the $\pi$-system $\mathcal A$ of measurable rectangles $A'\times A''$: $A'\in \mathcal S'$ and $A''\in \mathcal S''$. Let $\mathcal A_P$ be the class of $A'\times A''\in \mathcal A$ such that
$P'(\partial A')=P''(\partial A'')=0$. Since
\[\partial (A'\cap B')\subset (\partial A')\cup (\partial B'),\qquad \partial (A''\cap B'')\subset (\partial A'')\cup (\partial B''),\]
it follows that $\mathcal A_P$ is a $\pi$-system:
$$A'\times A'',B'\times B''\in \mathcal A_P\quad\Rightarrow \quad (A'\times A'')\cap (B'\times B'')\in \mathcal A_P.$$
And since
$$\partial(A'\times A'')\subset((\partial A')\times {\boldsymbol S}'')\cup ({\boldsymbol S}'\times (\partial A'')),$$
each set in $\mathcal A_P$ is a $P$-continuity set. Since the balls $B'(x',r)$ in ${\boldsymbol S}'$ have disjoint boundaries for different values of $r$, and since the same is true of the balls $B''(x'',r)$, there are arbitrarily small $r$ for which $B(x,r)=B'(x',r)\times B''(x'',r)$ lies in $\mathcal A_P$. It follows that Theorem \ref{2.3} applies to $\mathcal A_P$: $P_n\Rightarrow P$ if and only if $P_n(A)\to P(A)$ for each $A\in\mathcal A_P$.
Statement (c) follows from (b): $P_n'\times P_n''(A'\times A'')=P_n'(A')P_n''(A'')$, and independence of the marginals means $P(A'\times A'')=P'(A')P''(A'')$.
\begin{exercise}\label{P2.7}The uniform distribution on the unit square and the uniform distribution on its diagonal have identical marginal distributions. Use this fact to demonstrate that the converse of (a) in Theorem \ref{2.8} is false.
\end{exercise}
\begin{exercise}
Let $(X_n,Y_n)$ be a sequence of two-dimensional random vectors.
Show that if $(X_n,Y_n)\Rightarrow (X, Y)$, then besides $X_n\Rightarrow X$ and $Y_n\Rightarrow Y$, we have $X_n+Y_n\Rightarrow X+Y$.
Give an example of $(X_n,Y_n)$ such that $X_n\Rightarrow X$ and $Y_n\Rightarrow Y$ but the sum $X_n+Y_n$ has no limit distribution.
\end{exercise}
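One concrete choice for the last part of the exercise (a solution sketch, with parameters of our choosing): take $X_n=Z$ and $Y_n=(-1)^nZ$ for a single standard normal $Z$. Marginally $X_n\Rightarrow Z$ and $Y_n\Rightarrow -Z$, which has the same law as $Z$, but $X_n+Y_n$ equals $2Z$ for even $n$ and $0$ for odd $n$, so the sums have no limit distribution. An empirical check of the two variances:

```python
import random
random.seed(0)

def empirical_var(samples):
    m = sum(samples) / len(samples)
    return sum((s - m) ** 2 for s in samples) / len(samples)

# X_n = Z and Y_n = (-1)^n * Z: the sum alternates between 2Z and 0.
Z = [random.gauss(0, 1) for _ in range(20000)]
var_even = empirical_var([z + z for z in Z])  # X_n + Y_n, n even
var_odd = empirical_var([z - z for z in Z])   # X_n + Y_n, n odd
print(round(var_even, 1), var_odd)  # roughly 4.0, and exactly 0.0
```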
\subsection{Weak convergence in $\boldsymbol R^k$ and $\boldsymbol R^\infty$ }\label{wcR}
Let $\boldsymbol R^k$ denote the $k$-dimensional Euclidean space with elements $x=(x_1,\ldots,x_k)$ and the ordinary metric
\[\|x-y\|=\sqrt{(x_1-y_1)^2+\ldots+(x_k-y_k)^2}.\]
Denote by $\mathcal R^k$ the corresponding class of $k$-dimensional Borel sets. Put $A_x=\{y:y_1\le x_1,\ldots,y_k\le x_k\}$, $x\in\boldsymbol R^k$. The probability measures on $(\boldsymbol R^k, \mathcal R^k)$ are completely determined by their distribution functions $F(x)=P(A_x)$ at the points of continuity $x\in\boldsymbol R^k$.
\begin{lemma}\label{Mtest} The Weierstrass M-test. Suppose that $x^n_i\to x_i$ as $n\to\infty$ for each $i$, and that $|x^n_i|\le M_i$ for all $(n,i)$, where $\sum_i M_i<\infty$. Then the series $\sum_i x_i$ and $\sum_i x^n_i$ converge absolutely, and $\sum_i x^n_i\to\sum_i x_i$.
\end{lemma}
Proof. The series of course converge absolutely, since $\sum_i M_i<\infty$. Now for any $(n,i_0)$,
\[\Big|\sum_i x^n_i-\sum_i x_i\Big|\le \sum_{i\le i_0} |x^n_i-x_i|+2\sum_{i>i_0}M_i.\]
Given $\epsilon>0$, choose $i_0$ so that $\sum_{i>i_0}M_i<\epsilon/3$, and then choose $n_0$ so that $n>n_0$ implies $ |x^n_i-x_i|<{\epsilon\over 3i_0}$ for $i\le i_0$. Then $n>n_0$ implies $|\sum_i x^n_i-\sum_i x_i|<\epsilon$.
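A numerical check of the M-test (our example): take $x^n_i=(1-1/n)/i^2$, dominated by $M_i=1/i^2$ with $\sum_iM_i=\pi^2/6<\infty$. The limit of the sums is the sum of the limits, and the error is exactly $\sum_i M_i/n$.

```python
# Truncate all series at N terms; the truncation affects both sides equally.
N = 100000

def series(n):
    # sum_i x^n_i with x^n_i = (1 - 1/n) / i^2
    return sum((1 - 1 / n) / i**2 for i in range(1, N + 1))

limit = sum(1 / i**2 for i in range(1, N + 1))  # approx pi^2 / 6
errors = [abs(series(n) - limit) for n in (10, 100, 1000)]
print(errors)  # decreasing towards 0, here essentially limit / n
```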
\begin{lemma}\label{E1.2}
Let $\boldsymbol R^\infty$ denote the space of the sequences $x=(x_1,x_2\ldots)$ of real numbers with metric
\[\rho(x,y)=\sum_{i=1}^\infty{1\wedge |x_i-y_i|\over2^i}.\]
Then $\rho(x^n,x)\to0$ if and only if $|x^n_i-x_i|\to0$ for each $i$.
\end{lemma}
Proof. If $\rho(x^n,x)\to0$, then for each $i$ we have $1\wedge |x^n_i-x_i|\to0$ and therefore $|x^n_i-x_i|\to0$. The reverse implication holds by Lemma \ref{Mtest}.
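The metric $\rho$ can be evaluated numerically by truncating at finitely many coordinates, since the tail beyond coordinate $d$ contributes at most $2^{-d}$. A small illustration (our example): $x^n$ with the huge value $n$ in coordinate $n$ and $0$ elsewhere satisfies $\rho(x^n,0)=2^{-n}\to0$, consistent with the lemma, since each fixed coordinate converges to $0$.

```python
def rho(x, y, depth=50):
    # Truncated rho(x,y) = sum_i 2^{-i} * min(1, |x_i - y_i|);
    # the neglected tail is at most 2^{-depth}.
    return sum(min(1.0, abs(x(i) - y(i))) / 2**i for i in range(1, depth + 1))

zero = lambda i: 0.0

for n in (1, 5, 20):
    # x^n_i = n if i == n else 0: unbounded values, yet rho(x^n, 0) = 2^{-n}.
    xn = lambda i, n=n: float(n) if i == n else 0.0
    print(n, rho(xn, zero))
```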
\begin{definition}
Let $\pi_k:\boldsymbol R^\infty\to\boldsymbol R^k$ be the natural projections $\pi_k(x)=(x_1,\ldots,x_k)$, $k=1,2,\ldots$, and let $P$ be a probability measure on $(\boldsymbol R^\infty, \mathcal R^\infty)$. The probability measures $P\pi_k^{-1}$ defined on $(\boldsymbol R^k, \mathcal R^k)$ are called the {\it finite-dimensional distributions} of $P$.
\end{definition}
\begin{theorem}\label{p10}
The space $\boldsymbol R^\infty$ is separable and complete. Let $P$ and $Q$ be two probability measures on $(\boldsymbol R^\infty, \mathcal R^\infty)$.
If $P\pi_k^{-1}=Q\pi_k^{-1}$ for each $k$, then $P=Q$.
\end{theorem}
Proof. Convergence in $\boldsymbol R^\infty$ implies coordinatewise convergence, therefore $\pi_k$ is continuous so that the sets
\[B_k(x,\epsilon)=\big\{ y\in\boldsymbol R^\infty:|y_i-x_i|<\epsilon,\ i=1,\ldots, k\big\}=\pi_k^{-1}\big\{ y\in\boldsymbol R^k:|y_i-x_i|<\epsilon,\ i=1,\ldots, k\big\}\]
are open. Moreover, $y\in B_k(x,\epsilon)$ implies $\rho(x,y)<\epsilon+2^{-k}$. Thus $B_k(x,\epsilon)\subset B(x,r)$ for $r>\epsilon+2^{-k}$. This means that the sets $B_k(x,\epsilon)$ form a base for the topology of $\boldsymbol R^\infty$. It follows that the space is separable: one countable, dense subset consists of those points having only finitely many nonzero coordinates, each of them rational.
If $(x^n)$ is a fundamental sequence, then each coordinate sequence $(x^n_i)$ is fundamental and hence converges to some $x_i$, implying $x^n\to x$. Therefore, $\boldsymbol R^\infty$ is also complete.
Let $\mathcal A$ be the class of finite-dimensional sets $\{x:\pi_k(x)\in H\}$ for some $k$ and some $H\in\mathcal R^k$. This class of cylinders is closed under finite intersections. To be able to apply Lemma \ref{PM.42} it remains to observe that $\mathcal A$ generates $\mathcal R^\infty$: by separability each open set $G\subset\boldsymbol R^\infty$ is a countable union of sets in $\mathcal A$, since the sets $B_k(x,\epsilon)\in\mathcal A$ form a base.
\begin{theorem}\label{E2.4}
Let $P_n,P$ be probability measures on $(\boldsymbol R^\infty, \mathcal R^\infty)$. Then $P_n\Rightarrow P$ if and only if $P_n\pi_k^{-1}\Rightarrow P\pi_k^{-1}$ for each $k$.
\end{theorem}
Proof. Necessity follows from the mapping theorem. Turning to sufficiency, let $\mathcal A$, again, be the class of finite-dimensional sets $\{x:\pi_k(x)\in H\}$ for some $k$ and some $H\in\mathcal R^k$. We proceed in three steps.
Step 1. Show that $\mathcal A$ is a convergence-determining class. This is proven using Theorem \ref{2.4}. Given $x$ and $\epsilon$, choose $k$ so that $2^{-k}<\epsilon/2$ and consider the collection of uncountably many finite-dimensional sets
\[A_\eta=\{y:|y_i-x_i|<\eta, i=1,\ldots, k\}\mbox{ for }0<\eta<\epsilon/2.\]
We have $A_\eta\in\mathcal A_{x,\epsilon}$. On the other hand, $\partial A_\eta$ consists of the points $y$ such that $|y_i-x_i|\le\eta$ with equality for some $i$, hence these boundaries are disjoint. And since $\boldsymbol R^\infty$ is separable, Theorem \ref{2.4} applies.
Step 2. Show that $\partial (\pi_k^{-1}H)=\pi_k^{-1}\partial H$.
From the continuity of $\pi_k$ it follows that $\partial (\pi_k^{-1}H)\subset\pi_k^{-1}\partial H$. Using special properties of the projections we can prove inclusion in the other direction. If
$x\in\pi_k^{-1}\partial H$, so that $\pi_kx\in\partial H$, then there are points $\alpha^{(u)}\in H$, $\beta^{(u)}\in H^c$ such that $\alpha^{(u)}\to \pi_kx$ and $\beta^{(u)}\to \pi_kx$ as $u\to\infty$. Since the points $(\alpha^{(u)}_1,\ldots,\alpha^{(u)}_k,x_{k+1},\ldots)$ lie in $\pi_k^{-1} H$ and converge to $x$, and since the points $(\beta^{(u)}_1,\ldots,\beta^{(u)}_k,x_{k+1},\ldots)$ lie in $(\pi_k^{-1} H)^c$ and converge to $x$, we conclude that $x\in \partial (\pi_k^{-1}H)$.
Step 3. Suppose that $P\pi_k^{-1}(\partial H)=0$ implies $P_n\pi_k^{-1}(H)\to P\pi_k^{-1}(H)$ and show that $P_n\Rightarrow P$.
If $A\in \mathcal A$ is a finite-dimensional $P$-continuity set, then we have $A=\pi_k^{-1}H$ and
\[P\pi_k^{-1}(\partial H)=P(\pi_k^{-1}\partial H)=P(\partial \pi_k^{-1}H)=P(\partial A)=0.\]
Thus by assumption, $P_n(A)\to P(A)$ and according to step 1, $P_n\Rightarrow P$.
\subsection{Kolmogorov's extension theorem}
\begin{definition}\label{Kcon}
We say that the system of finite-dimensional distributions $\mu_{t_1,\ldots,t_k}$ is consistent if the joint distribution functions
\[F_{t_1,\ldots,t_k}(z_1,\ldots,z_k)=\mu_{t_1,\ldots,t_k}((-\infty,z_1]\times \ldots\times(-\infty,z_k])\]
satisfy two consistency conditions
(i) $F_{t_1,\ldots,t_k,t_{k+1}}(z_1,\ldots,z_k,\infty)= F_{t_1,\ldots,t_k}(z_1,\ldots,z_k)$,
(ii) if $\pi$ is a permutation of $(1,\ldots,k)$, then $$F_{t_{\pi(1)},\ldots,t_{\pi(k)}}(z_{\pi(1)},\ldots,z_{\pi(k)})=F_{t_1,\ldots,t_k}(z_1,\ldots,z_k).$$
\end{definition}
\begin{theorem}\label{ket}
Let $\mu_{t_1,\ldots,t_k}$ be a consistent system of finite-dimensional distributions. Put $\Omega=\{\mbox{functions }\omega:[0,1]\to \mathbb R\}$ and let $\mathcal F$ be the $\sigma$-algebra generated by the finite-dimensional sets $\{\omega: \omega(t_i)\in B_i, i=1,\ldots,k\}$, where $B_i$ are Borel subsets of $\mathbb R$. Then there is a unique probability measure $\mathbb P$ on $(\Omega,\mathcal F)$ such that the stochastic process defined by $X_t(\omega)=\omega(t)$ has the finite-dimensional distributions $\mu_{t_1,\ldots,t_k}$.
\end{theorem}
Without proof. Kolmogorov's extension theorem does not directly imply the existence of the Wiener process, because the $\sigma$-algebra $\mathcal F$ is not rich enough to ensure the continuity of trajectories. However, it is used in the proof of Theorem \ref{13.6} establishing the existence of processes with c\`adl\`ag trajectories.
\section{Tightness and Prokhorov's theorem}\label{secP}
\subsection{Tightness of probability measures}
Convergence of finite-dimensional distributions does not always imply weak convergence. This makes the following concept of tightness important.
\begin{definition}
A family of probability measures $\Pi$ on $({\boldsymbol S},\mathcal S)$ is called {\it tight} if for every $\epsilon$ there exists a compact set $K\subset {\boldsymbol S}$ such that $P(K)>1-\epsilon$ for all $P\in\Pi$.
\end{definition}
\begin{lemma}\label{1.3}
If ${\boldsymbol S}$ is separable and complete, then each probability measure $P$ on $({\boldsymbol S},\mathcal S)$ is tight.
\end{lemma}
Proof. By separability, for each $k$ there is a sequence $A_{k,i}$ of open $1/k$-balls covering ${\boldsymbol S}$. Choose $n_k$ so large that $P(B_k)>1-\epsilon 2^{-k}$, where $B_k=A_{k,1}\cup\ldots\cup A_{k,n_k}$. By completeness, the totally bounded set $B_{1}\cap B_2\cap\ldots$ has compact closure $K$. Clearly, $P(K^c)\le\sum_k P(B_k^c)<\epsilon$.
\begin{exercise}
Check whether the following sequence of distributions on $\boldsymbol R$
\[
P_n(A)=(1-n^{-1})1_{\{0\in A\}}+n^{-1}1_{\{n^2\in A\}},\qquad n\ge1,
\]
is tight or whether it ``leaks'' mass towards infinity. Notice that the corresponding mean values equal $n$.
\end{exercise}
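A sketch of one solution (spoiling the exercise): the family is tight despite the diverging means $n^{-1}\cdot n^2=n$. Given $\epsilon$, take $n_0=\lceil1/\epsilon\rceil$ and the finite, hence compact, set $K=\{0\}\cup\{n^2:n\le n_0\}$: for $n\le n_0$ all mass of $P_n$ sits in $K$, while for $n>n_0$ already $P_n(\{0\})=1-1/n>1-\epsilon$.

```python
import math

def P_n(A, n):
    # P_n puts mass 1 - 1/n at 0 and mass 1/n at n^2.
    return (1 - 1 / n) * (0 in A) + (1 / n) * (n**2 in A)

eps = 0.01
n0 = math.ceil(1 / eps)
K = {0} | {n**2 for n in range(1, n0 + 1)}  # a finite, hence compact, set

assert all(P_n(K, n) >= 1 - eps for n in range(1, 100000))
print(min(P_n(K, n) for n in range(1, 100000)))  # stays above 1 - eps
```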
\begin{definition}
A family of probability measures $\Pi$ on $({\boldsymbol S},\mathcal S)$ is called {\it relatively compact} if any sequence of its elements contains a weakly convergent subsequence. The limiting probability measures may differ for different subsequences and may lie outside $\Pi$.
\end{definition}
\begin{definition}\label{p72}
Let $\boldsymbol P$ be the space of probability measures on $({\boldsymbol S},\mathcal S)$. The {\it Prokhorov distance} $\pi(P,Q)$ between $P,Q\in \boldsymbol P$ is defined as the infimum of those positive $\epsilon$ for which
\[P(A)\le Q(A^\epsilon)+\epsilon, \quad Q(A)\le P(A^\epsilon)+\epsilon, \quad\mbox{for all }A\in\mathcal S.\]
\end{definition}
\begin{lemma}
The Prokhorov distance $\pi$ is a metric on $\boldsymbol P$.
\end{lemma}
Proof. Obviously $\pi(P,Q)=\pi(Q,P)$ and $\pi(P,P)=0$. If $\pi(P,Q)=0$, then for any $F\in\mathcal S$ and $\epsilon>0$, $P(F)\le Q(F^\epsilon)+\epsilon$. For closed $F$ letting $\epsilon\to0$ gives $P(F)\le Q(F)$. By symmetry, we have $P(F)= Q(F)$ implying $P= Q$.
To verify the triangle inequality notice that if $\pi(P,Q)<\epsilon_1$ and $\pi(Q,R)<\epsilon_2$, then
\[P(A)\le Q(A^{\epsilon_1})+\epsilon_1\le R((A^{\epsilon_1})^{\epsilon_2})+\epsilon_1+\epsilon_2\le R(A^{\epsilon_1+\epsilon_2})+\epsilon_1+\epsilon_2.\]
Thus, using the symmetric relation we obtain $\pi(P,R)<\epsilon_1+\epsilon_2$. Therefore, $\pi(P,R)\le \pi(P,Q)+\pi(Q,R)$.
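On a finite metric space the Prokhorov distance can be computed by brute force straight from the definition, checking both inequalities over all subsets $A$ and scanning a grid of candidate $\epsilon$. A toy computation (the measures and the closed enlargement convention are our choices): for $P=\tfrac12\delta_0+\tfrac12\delta_1$ and $Q=\tfrac12\delta_0+\tfrac12\delta_2$ on $\{0,1,2\}$, mass $\tfrac12$ must travel distance $1$, and indeed $\pi(P,Q)=\tfrac12$.

```python
from itertools import chain, combinations

points = [0, 1, 2]                      # metric space with distance |i - j|
P = {0: 0.5, 1: 0.5, 2: 0.0}
Q = {0: 0.5, 1: 0.0, 2: 0.5}

def measure(mu, A):
    return sum(mu[x] for x in A)

def enlarge(A, eps):
    # Closed eps-enlargement A^eps = {x : dist(x, A) <= eps}.
    return {x for x in points if any(abs(x - a) <= eps for a in A)}

def ok(eps):
    subsets = map(set, chain.from_iterable(
        combinations(points, r) for r in range(len(points) + 1)))
    return all(measure(P, A) <= measure(Q, enlarge(A, eps)) + eps and
               measure(Q, A) <= measure(P, enlarge(A, eps)) + eps
               for A in subsets)

# Scan a grid of candidate eps values for the infimum in the definition.
pi_dist = min(e / 100 for e in range(1, 201) if ok(e / 100))
print(pi_dist)  # 0.5
```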
\begin{theorem}\label{6.8}
Suppose ${\boldsymbol S}$ is a complete separable metric space. Then weak convergence is equivalent to $\pi$-convergence, $(\boldsymbol P,\pi)$ is separable and complete, and $\Pi\subset \boldsymbol P$ is relatively compact iff its $\pi$-closure is $\pi$-compact.
\end{theorem}
Without proof.
\begin{theorem}\label{2.6}
A necessary and sufficient condition for $P_n\Rightarrow P$ is that each subsequence $P_{n'}$ contains a further subsequence $P_{n''}$ converging weakly to $P$.
\end{theorem}
Proof. Necessity is immediate, since every subsequence of a weakly convergent sequence converges weakly to the same limit. As for sufficiency, if $P_n\nRightarrow P$, then $\int_{{\boldsymbol S}}f(x)P_n(dx)\nrightarrow\int_{{\boldsymbol S}}f(x)P(dx)$ for some bounded, continuous $f$. But then, for some $\epsilon>0$ and some subsequence $P_{n'}$,
\[\Big|\int_{{\boldsymbol S}}f(x)P_{n'}(dx)-\int_{{\boldsymbol S}}f(x)P(dx)\Big|\ge\epsilon\quad \mbox{ for all }n',\]
and no further subsequence can converge weakly to $P$.
\begin{theorem}\label{5.1}
Prokhorov's theorem, the direct part. If a family of probability measures $\Pi$ on $({\boldsymbol S},\mathcal S)$ is tight, then it is relatively compact.
\end{theorem}
Proof. See the next subsection.
\begin{theorem}\label{5.2}
Prokhorov's theorem, the reverse part. Suppose ${\boldsymbol S}$ is a complete separable metric space. If $\Pi$ is relatively compact, then it is tight.
\end{theorem}
Proof. Consider open sets $G_n\uparrow {\boldsymbol S}$. For each $\epsilon$ there is an $n$ such that $P(G_n)>1-\epsilon$ for all $P\in\Pi$. To show this we assume the opposite: $P_n(G_n)\le1-\epsilon$ for some $P_n\in\Pi$. By the assumed relative compactness, $P_{n'}\Rightarrow Q$ for some subsequence and some probability measure $Q$. Then $$Q(G_n)\le\liminf_{n'}P_{n'}(G_n)\le\liminf_{n'}P_{n'}(G_{n'})\le1-\epsilon$$
which is impossible since $G_n\uparrow {\boldsymbol S}$.
By separability, for each $k$ there is a sequence $A_{k,i}$, $i\ge1$, of open balls of radius $1/k$ covering ${\boldsymbol S}$: ${\boldsymbol S}=\cup_iA_{k,i}$. From the previous step, it follows that there is an $n_k$ such that $P(\cup_{i\le n_k}A_{k,i})>1-\epsilon2^{-k}$ for all $P\in\Pi$. Let $K$ be the closure of the totally bounded set $\cap_{k\ge1}\cup_{i\le n_k}A_{k,i}$; then $K$ is compact by completeness, and $P(K)>1-\epsilon$ for all $P\in\Pi$.
\subsection{Proof of Prokhorov's theorem}
This subsection contains a proof of the direct half of Prokhorov's theorem. Let $(P_n)$ be a sequence in the tight family $\Pi$. We are to find a subsequence $(P_{n'})$ and a probability measure $P$ such that $P_{n'}\Rightarrow P$. The proof, like that of Helly's selection theorem, depends on a diagonal argument.
Choose compact sets $K_1\subset K_2\subset\ldots$ such that $P_n(K_i)>1-i^{-1}$ for all $n$ and $i$. The set $K_\infty=\cup_iK_i$ is separable: each compact $K_i$ is totally bounded and hence separable, and a countable union of separable sets is separable. Hence, by Theorem \ref{M3'}, there exists a countable class $\mathcal A$ of open sets with the following property: if $G$ is open and $x\in K_\infty\cap G$, then $x\in A\subset A^-\subset G$ for some $A\in\mathcal A$. Let $\mathcal H$ consist of $\emptyset$ and the finite unions of sets of the form $ A^-\cap K_i$ for $A\in\mathcal A$ and $i\ge1$.
Consider the countable class $\mathcal H=(H_j)$. For $(P_n)$ there is a subsequence $(P_{n_1})$ such that $P_{n_1}(H_1)$ converges as $n_1\to\infty$. For $(P_{n_1})$ there is a further subsequence $(P_{n_2})$ such that $P_{n_2}(H_2)$ converges as $n_2\to\infty$. Continuing in this way we get a collection of indices $(n_{1k})\supset(n_{2k})\supset\ldots$ such that $P_{n_{jk}}(H_j)$ converges as $k\to\infty$ for each $j\ge1$. Putting $n'_j=n_{jj}$ we find a subsequence $(P_{n'})$ for which the limit
\[\alpha(H)=\lim_{n'}P_{n'}(H)\mbox{ exists for each }H\in\mathcal H.\]
Furthermore, for open sets $G\subset {\boldsymbol S}$ and arbitrary sets $M\subset {\boldsymbol S}$ define
\[\beta(G)=\sup_{H\subset G}\alpha(H),\quad \gamma(M)=\inf_{G\supset M}\beta(G).\]
Our objective is to construct on $({\boldsymbol S},\mathcal S)$ a probability measure $P$ such that $P(G)=\beta(G)$ for all open sets $G$.
If there does exist such a $P$, then the proof will be complete: if $H\subset G$, then
\[\alpha(H)=\lim_{n'}P_{n'}(H)\le \liminf_{n'}P_{n'}(G),\]
whence $P(G)\le\liminf_{n'}P_{n'}(G)$, and therefore $P_{n'}\Rightarrow P$.
The construction of the probability measure $P$ is divided in seven steps.
Step 1: if $F\subset G$, where $F$ is closed and $G$ is open, and if $F\subset H$, for some $H\in\mathcal H$, then $F\subset H_0\subset G$, for some $H_0\in\mathcal H$.
Since $F\subset K_{i_0}$ for some $i_0$, the closed set $F$ is compact. For each $x\in F$, choose an $A_x\in\mathcal A$ such that $x\in A_x\subset A_x^-\subset G$. The sets $A_x$ cover the compact $F$, and there is a finite subcover $A_{x_1},\ldots,A_{x_k}$. We can take $H_0=\cup_{j=1}^k(A_{x_j}^-\cap K_{i_0})$.
Step 2: $\beta$ is finitely subadditive on the open sets.
Suppose that $H\subset G_1\cup G_2$, where $H\in\mathcal H$ and $G_1, G_2$ are open. Define
\begin{align*}
F_1&=\big\{x\in H:\rho(x,G_1^c)\ge \rho(x,G_2^c)\big\},\\
F_2&=\big\{x\in H:\rho(x,G_2^c)\ge \rho(x,G_1^c)\big\},
\end{align*}
so that $H=F_1\cup F_2$ with $F_1\subset G_1$ and $F_2\subset G_2$. According to Step 1, since $F_i\subset H$, we have $F_i\subset H_i\subset G_i$ for some $H_i\in\mathcal H$.
The function $\alpha(H)$ has these three properties
\begin{align*}
\alpha(H_1)&\le\alpha(H_2)\qquad\qquad\quad \mbox{ if }H_1\subset H_2,\\
\alpha(H_1\cup H_2)&= \alpha(H_1)+\alpha(H_2)\quad \mbox{ if }H_1\cap H_2=\emptyset,\\
\alpha(H_1\cup H_2)&\le \alpha(H_1)+\alpha(H_2).
\end{align*}
It follows first,
\[\alpha(H)\le \alpha(H_1\cup H_2)\le\alpha(H_1)+\alpha(H_2)\le \beta(G_1)+\beta(G_2),\]
and then
$$\beta(G_1\cup G_2)=\sup_{H\subset G_1\cup G_2}\alpha(H)\le \beta(G_1)+\beta(G_2).$$
Step 3: $\beta$ is countably subadditive on the open sets.
If $H\subset \cup_n G_n$, then, since $H$ is compact, $H\subset \cup_{n\le n_0} G_n$ for some $n_0$, and finite subadditivity implies
\[\alpha(H)\le \sum_{n\le n_0}\beta(G_n)\le \sum_{n}\beta(G_n).\]
Taking the supremum over $H$ contained in $\cup_{n} G_n$ gives $\beta(\cup_{n}G_n)\le \sum_{n}\beta(G_n)$.
Step 4: $\gamma$ is an outer measure.
Since $\gamma$ is clearly monotone and satisfies $\gamma(\emptyset)=0$, we need only prove that it is countably subadditive. Given a positive $\epsilon$ and arbitrary $M_n\subset {\boldsymbol S}$, choose open sets $G_n$ such that $M_n\subset G_n$ and $\beta(G_n)<\gamma(M_n)+\epsilon/2^n$. Applying Step 3,
$$\gamma(\bigcup_{n}M_n)\le \beta(\bigcup_{n}G_n)\le \sum_{n}\beta(G_n)\le \sum_{n}\gamma(M_n)+\epsilon,$$
and letting $\epsilon\to0$ gives
$\gamma(\bigcup_{n}M_n)\le \sum_{n}\gamma(M_n)$.
Step 5: $\beta(G)\ge\gamma(F\cap G)+\gamma(F^c\cap G)$ for $F$ closed and $G$ open.
Choose $H_3,H_4\in\mathcal H$ for which \begin{align*}
&H_3\subset F^c\cap G\quad \mbox{ and }\quad \alpha(H_3)>\beta(F^c\cap G)-\epsilon,\\
&H_4\subset H_3^c\cap G\quad \mbox{ and }\quad \alpha(H_4)>\beta(H_3^c\cap G)-\epsilon.
\end{align*}
Since $H_3$ and $H_4$ are disjoint and are contained in $G$, it follows from the properties of the functions $\alpha,\beta,$ and $\gamma$ that
\begin{align*}
\beta(G)\ge \alpha(H_3\cup H_4)=\alpha(H_3)+\alpha(H_4)&>\beta(F^c\cap G)+\beta(H_3^c\cap G)-2\epsilon\\
&\ge \gamma(F^c\cap G)+ \gamma(F\cap G)-2\epsilon.
\end{align*}
Now it remains to let $\epsilon\to0$.
Step 6: if $F\subset {\boldsymbol S}$ is closed, then $F$ is in the class $\mathcal M$ of $\gamma$-measurable sets.
By Step 5, $\beta(G)\ge\gamma(F\cap L)+\gamma(F^c\cap L)$ if $F$ is closed, $G$ is open, and $G\supset L$. Taking the infimum over these $G$ gives $\gamma(L)\ge\gamma(F\cap L)+\gamma(F^c\cap L)$ confirming that $F$ is $\gamma$-measurable.
Step 7: $\mathcal S\subset\mathcal M$, and the restriction $P$ of $\gamma$ to $\mathcal S$ is a probability measure satisfying $P(G)=\gamma(G)=\beta(G)$ for all open sets $G\subset {\boldsymbol S}$.
Since each closed set lies in $\mathcal M$ and $\mathcal M$ is a $\sigma$-algebra, we have $\mathcal S\subset\mathcal M$.
To see that $P$ is a probability measure, observe that each $K_i$ has a finite covering by $\mathcal A$-sets and therefore $K_i\in\mathcal H$. Thus
\[1\ge P({\boldsymbol S})=\beta({\boldsymbol S})\ge \sup_i\alpha(K_i)\ge \sup_i(1-i^{-1})=1.\]
\subsection{Skorokhod's representation theorem}
\begin{theorem}\label{6.7}
Suppose that $P_n\Rightarrow P$ and $P$ has a separable support. Then there exist random elements $X_n$ and $X$, defined on a common probability space $(\Omega,\mathcal F,\mathbb P)$, such that
$P_n$ is the probability distribution of $X_n$, $P$ is the probability distribution of $X$, and $X_n(\omega)\to X(\omega)$ for every $\omega$.
\end{theorem}
Proof. We split the proof in four steps.
Step 1: show that for each $\epsilon$, there is a finite $\mathcal S$-partition $B_0,B_1,\ldots, B_k$ of ${\boldsymbol S}$ such that
\[0<P(B_0)<\epsilon,\quad P(\partial B_i)=0,\quad {\rm diam} (B_i)<\epsilon,\quad i=1,\ldots,k.\]
Let $M$ be a separable $\mathcal S$-set for which $P(M)=1$. For each $x\in M$, choose $r_x$ so that $0<r_x<\epsilon/2$ and $P(\partial B(x,r_x))=0$. Since
$M$ is separable, it can be covered by a countable subcollection $A_1,A_2,\ldots$ of the balls $B(x,r_x)$. Choose $k$ so that $P(\cup_{i=1}^k A_i)>1-\epsilon$. Take
\[B_0=\big(\bigcup_{i=1}^k A_i\big)^c,\quad B_1=A_1,\quad B_i=A_1^c\cap\ldots\cap A_{i-1}^c\cap A_i,\]
and notice that $\partial B_i\subset \partial A_1\cup\ldots\cup\partial A_k$.
Step 2: definition of $n_j$.
Take $\epsilon_j=2^{-j}$. By Step 1, there are $\mathcal S$-partitions $B_0^j,B_1^j,\ldots, B_{k_j}^j$ such that
\[0<P(B_0^j)<\epsilon_j,\quad P(\partial B_i^j)=0,\quad {\rm diam} (B_i^j)<\epsilon_j,\quad i=1,\ldots,k_j.\]
If some $P(B_i^j)=0$, we redefine these partitions by amalgamating such $B_i^j$ with $B_0^j$, so that $P(\cdot|B_i^j)$ is well defined for $i\ge1$. By the assumption $P_n\Rightarrow P$, there is for each $j$ an $n_j$ such that
\[P_n(B_i^j)\ge (1-\epsilon_j)P(B_i^j),\quad i=0,1,\ldots, k_j,\quad n\ge n_j.\]
Putting $n_0=1$, we can assume $n_0< n_1<\cdots$.
Step 3: construction of $X, Y_n,Y_{ni},Z_n,\xi$.
Define $m_n=j$ for $n_j\le n<n_{j+1}$ and write $m$ instead of $m_n$. By Theorem \ref{ket} we can find an $(\Omega,\mathcal F,\mathbb P)$ supporting random elements $X, Y_n,Y_{ni},Z_n$ of ${\boldsymbol S}$ and a random variable $\xi$, all independent of each other and having distributions satisfying: $X$ has distribution $P$, $Y_n$ has distribution $P_n$,
\begin{align*}
&\mathbb P(Y_{ni}\in A)=P_n(A|B_i^m), \quad \mathbb P(\xi\le \epsilon)=\epsilon,\\
&\epsilon_m\mathbb P(Z_n\in A)=\sum_{i=0}^{k_m}P_n(A|B^m_i)\Big(P_n(B^m_i)-(1-\epsilon_m)P(B^m_i)\Big).
\end{align*}
Note that $\mathbb P(Y_{ni}\in B_i^m)=1$.
Step 4: construction of $X_n$.
Put $X_n=Y_n$ for $n<n_1$. For $n\ge n_{1}$, put
\[X_n=1_{\{\xi\le1-\epsilon_m\}}\sum_{i=0}^{k_m}1_{\{X\in B_i^m\}}Y_{ni}+1_{\{\xi>1-\epsilon_m\}}Z_n.\]
By Step 3, $X_n$ has distribution $P_n$ because
\begin{align*}
\mathbb P(X_n\in A)&=(1-\epsilon_m)\sum_{i=0}^{k_m}\mathbb P(X\in B_i^m,Y_{ni}\in A)+\epsilon_m\mathbb P(Z_n\in A)\\
&=(1-\epsilon_m)\sum_{i=0}^{k_m}\mathbb P(X\in B_i^m)P_n(A|B_i^m)\\
&\qquad+\sum_{i=0}^{k_m}P_n(A|B^m_i)\Big(P_n(B^m_i)-(1-\epsilon_m)P(B^m_i)\Big)\\
&=P_n(A).
\end{align*}
Let
$$E_j=\{X\notin B_0^j;\ \xi\le1-\epsilon_j \}\mbox{ and }E=\liminf_jE_j=\bigcup_{j=1}^\infty\bigcap_{i=j}^\infty E_i.$$
Since $\mathbb P(E_j^c)<2\epsilon_j$, the Borel-Cantelli lemma gives $\mathbb P(E^c)=\mathbb P(E_j^c \mbox{ i.o.})=0$, so that $\mathbb P(E)=1$. If $\omega\in E$, then for all sufficiently large $n$ both $X_n(\omega)$ and $X(\omega)$ lie in the same $B_i^m$, whose diameter is less than $\epsilon_m$. Thus $\rho(X_n(\omega),X(\omega))<\epsilon_m$, and since $m=m_n\to\infty$, we get $X_n(\omega)\to X(\omega)$ for $\omega\in E$. It remains to redefine $X_n$ as $X$ outside $E$.
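On the real line the representation can be made completely explicit by the quantile transform: with a single uniform $U$ on a common probability space, put $X_n=F_n^{-1}(U)$ and $X=F^{-1}(U)$. A toy case of our choosing: $P_n$ uniform on $[0,1+1/n]$ and $P$ uniform on $[0,1]$, so $F_n^{-1}(u)=(1+1/n)u$, $F^{-1}(u)=u$, and $|X_n-X|\le1/n$ for every $\omega$.

```python
import random
random.seed(1)

def X_n(u, n):
    # Quantile transform of the uniform distribution on [0, 1 + 1/n].
    return (1 + 1 / n) * u

def X(u):
    # Quantile transform of the uniform distribution on [0, 1].
    return u

# One uniform variable U drives every X_n and X simultaneously.
U = [random.random() for _ in range(1000)]
worst = max(abs(X_n(u, 100) - X(u)) for u in U)
print(worst <= 1 / 100)  # the pathwise error is at most 1/n
```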
\begin{corollary} The mapping theorem. Let $h:{\boldsymbol S}\to {\boldsymbol S}'$ be a continuous mapping between two metric spaces. If $P_n\Rightarrow P$ on ${\boldsymbol S}$ and $P$ has a separable support, then $P_nh^{-1}\Rightarrow Ph^{-1}$ on ${\boldsymbol S}'$.
\end{corollary}
Proof. Having $X_n(\omega)\to X(\omega)$ we get $h(X_n(\omega))\to h(X(\omega))$ for every $\omega$. It follows, by Corollary \ref{3.1} that $h(X_n)\Rightarrow h(X)$ which is equivalent to $P_nh^{-1}\Rightarrow Ph^{-1}$.
\section{Functional Central Limit Theorem on $\boldsymbol C=\boldsymbol C[0,1]$}\label{secC}
\subsection{Weak convergence in $\boldsymbol C$}
\begin{definition}
An element of the set $\boldsymbol C=\boldsymbol C[0,1]$ is a continuous function $x=x(t)$. The distance between points in $\boldsymbol C$ is measured by the uniform metric
\[\rho(x,y)=\|x-y\|=\sup_{0\le t\le1}|x(t)-y(t)|.\]
Denote by $\mathcal C$ the Borel $\sigma$-algebra of subsets of $\boldsymbol C$.
\end{definition}
\begin{exercise}
Draw a picture for an open ball $B(x,r)$ in $\boldsymbol C$.\\
Show that for any real number $a$ and $t\in[0,1]$ the set $\{x:x(t)<a\}$ is an open subset of $\boldsymbol C$.
\end{exercise}
\begin{example}\label{E1.3}
Convergence $\rho(x_n,x)\to0$ means uniform convergence of continuous functions; it is stronger than pointwise convergence. Consider the function $z_n(t)$ that increases linearly from 0 to 1 over $[0,n^{-1}]$, decreases linearly from 1 to 0 over $[n^{-1},2n^{-1}]$, and equals 0 over $[2n^{-1},1]$. Although $z_n(t)\to0$ for every $t$, we have $\|z_n\|=1$ for all $n$.
\end{example}
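The tent functions are easy to evaluate directly, and a quick computation confirms the two claims of the example: $z_n(t)\to0$ at each fixed $t$, yet the uniform norm stays at $1$.

```python
def z(n, t):
    # The tent function: up on [0, 1/n], down on [1/n, 2/n], then 0.
    if t <= 1 / n:
        return n * t
    if t <= 2 / n:
        return 2 - n * t
    return 0.0

grid = [i / 1000 for i in range(1001)]
for n in (10, 100, 1000):
    # Value at the fixed point t = 1/2 versus the sup over a fine grid.
    print(n, z(n, 0.5), max(z(n, t) for t in grid))  # 0.0 versus 1.0
```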
\begin{theorem}\label{p11}
The space $\boldsymbol C$ is separable and complete.
\end{theorem}
Proof. Separability. Let $L_k$ be the set of polygonal functions that are linear over each subinterval $[{i-1\over k},{i\over k}]$ and have rational values at the end points. We will show that the countable set $\cup_{k\ge 1}L_k$ is dense in $\boldsymbol C$. For given $x\in \boldsymbol C$ and $\epsilon>0$, choose $k$ so that
\[|x(t)-x(i/k)|<\epsilon\quad\mbox{for all }t\in[{(i-1)/ k},{i/k}],\quad 1\le i\le k\]
which is possible by uniform continuity. Then choose $y\in L_k$ so that $|y(i/k)-x(i/k)|<\epsilon$ for each $i$. It remains to draw a picture with trajectories over an interval $[{i-1\over k},{i\over k}]$ and check that $\rho(x,y)\le3\epsilon$.
Completeness. Let $(x_n)$ be a fundamental sequence so that
\[\epsilon_n=\sup_{m> n}\sup_{0\le t\le1}|x_n(t)-x_m(t)|\to0,\quad n\to\infty.\]
Then for each $t$, the sequence $(x_n(t))$ is fundamental on $\boldsymbol R$ and hence has a limit $x(t)$. Letting $m\to\infty$ in the inequality $|x_n(t)-x_m(t)|\le \epsilon_n$ gives $|x_n(t)-x(t)|\le \epsilon_n$. Thus $x_n$ converges uniformly to $x\in \boldsymbol C$.
\begin{definition}
Convergence of finite-dimensional distributions $X^n\stackrel{\rm fdd}{\longrightarrow}X$ means that for all $t_1,\ldots,t_k$
$$(X^n_{t_1},\ldots,X^n_{t_k})\Rightarrow (X_{t_1},\ldots,X_{t_k}).$$
\end{definition}
\begin{exercise}
The projection $\pi_{t_1,\ldots,t_k}:\boldsymbol C\to \boldsymbol R^k$ defined by $\pi_{t_1,\ldots,t_k}(x)=(x(t_1),\ldots,x(t_k))$ is a continuous map.
\end{exercise}
\begin{example} By the mapping theorem, if $X^n\Rightarrow X$, then $X^n\stackrel{\rm fdd}{\longrightarrow}X$. The converse is not true.
Consider $z_n(t)$ from Example \ref{E1.3} and put $X^n=z_n$, $X=0$ so that $X^n\stackrel{\rm fdd}{\longrightarrow}X$.
Take $h(x)=\sup_tx(t)$. It satisfies $|h(x)-h(y)|\le\rho(x,y)$ and therefore is a continuous function on $\boldsymbol C$.
Since $h(z_n)\equiv1$, we have $h(X^n)\nRightarrow h(X)$, and according to the mapping theorem $X^n\nRightarrow X$.
\end{example}
\begin{definition}
Define a modulus of continuity of a function $x:[0,1]\to \boldsymbol R$ by
\[w_x(\delta)=w(x,\delta)=\sup_{|s-t|\le\delta}|x(s)-x(t)|,\quad \delta\in(0,1].\]
\end{definition}
For any $x:[0,1]\to \boldsymbol R$ its modulus of continuity $w_x(\delta)$ is non-decreasing over $\delta$. Clearly, $x\in \boldsymbol C$ if and only if $w_x(\delta)\to0$ as $\delta\to0$. The limit $j_x=\lim_{\delta\to0}w_x(\delta)$ is the absolute value of the largest jump of $x$.
\begin{exercise}
Show that for any fixed $\delta\in(0,1]$ we have $|w_x(\delta)-w_y(\delta)|\le2\rho(x,y)$ implying that $w_x(\delta)$ is a continuous function on $\boldsymbol C$.
\end{exercise}
\begin{example}
For $z_n\in \boldsymbol C$ defined in Example \ref{E1.3} we have $w(z_n,\delta)=1$ for $n\ge \delta^{-1}$.
\end{example}
\begin{exercise}
Given a probability measure $P$ on the measurable space $(\boldsymbol C,\mathcal C)$ there exists a random process $X$ on a probability space $(\Omega,\mathcal F,\mathbb P)$ such that $\mathbb P(X\in A)=P(A)$ for any $A\in\mathcal C$.
\end{exercise}
\begin{theorem} \label{7.5}
Let $P_n, P$ be probability measures on $(\boldsymbol C,\mathcal C)$. Suppose $P_{n}\pi^{-1}_{t_1,\ldots,t_k}\Rightarrow P\pi^{-1}_{t_1,\ldots,t_k}$ holds for all finite collections $t_1,\ldots,t_k\in[0,1]$. If for every positive $\epsilon$
\[
(i) \quad \quad \quad \quad \lim_{\delta\to0}\limsup_{n\to\infty}P_n(x: w_x(\delta)\ge\epsilon)=0,\quad \quad \quad \quad
\]
then $P_n\Rightarrow P$.
\end{theorem}
Proof. The proof is given in terms of convergence in distribution using Theorem \ref{3.2}.
For $u=1,2,\ldots$, define $M_u:\boldsymbol C\to \boldsymbol C$ in the following way. Let $(M_ux)(t)$ agree with $x(t)$ at the points $0,1/u,2/u,\ldots,1$ and be defined by linear interpolation between these points. Observe that $\rho(M_ux,x)\le2w_x(1/u)$.
Further, for a vector $\alpha=(\alpha_0,\alpha_1,\ldots,\alpha_u)$ define $(L_u\alpha)(t)$ as the element of $\boldsymbol C$ that has values $\alpha_i$ at the points $t=i/u$ and is linear in between. Clearly, $\rho(L_u\alpha,L_u\beta)=\max_i|\alpha_i-\beta_i|$, so that $L_u:\boldsymbol R^{u+1}\to \boldsymbol C$ is continuous.
Let $t_i=i/u$. Observe that $M_u=L_u\pi_{t_0,\ldots,t_u}$. Since $\pi_{t_0,\ldots,t_u}X^n\Rightarrow\pi_{t_0,\ldots,t_u}X$ and $L_u$ is continuous, the mapping theorem gives $M_uX^n\Rightarrow M_uX$ as $n\to\infty$.
Since
\[\limsup_{u\to\infty}\rho(M_uX,X)\le2\limsup_{u\to\infty}w(X,1/u)=0,\]
we have $M_uX\to X$ in probability and therefore $M_uX\Rightarrow X$.
Finally, due to $\rho(M_uX^n,X^n)\le2w(X^n,1/u)$ and condition (i) we have
\[\limsup_{u\to\infty}\limsup_{n\to\infty}\mathbb P\big(\rho(M_uX^n,X^n)\ge\epsilon\big)\le\limsup_{u\to\infty}\limsup_{n\to\infty}\mathbb P(2w(X^n,1/u)\ge\epsilon)=0.\]
It remains to apply Theorem \ref{3.2}.
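The operator $M_u$ lends itself to a numerical sanity check. The sketch below (Python; the sample function $\sin 7t$ and the grid sizes are arbitrary choices, not part of the proof) builds the polygonal approximation and verifies the bound $\rho(M_ux,x)\le2w_x(1/u)$ used above.

```python
import math

def M_u(x, u, t):
    """Polygonal approximation of x: agrees with x at i/u, linear in between."""
    i = min(int(t * u), u - 1)
    s = t * u - i
    return (1 - s) * x(i / u) + s * x((i + 1) / u)

x = lambda t: math.sin(7 * t)              # a sample element of C[0,1]
u, m = 20, 2000                            # mesh 1/u, evaluation grid 1/m

dist = max(abs(M_u(x, u, j / m) - x(j / m)) for j in range(m + 1))
w = max(abs(x(i / m) - x(j / m))           # w_x(1/u), computed on the grid
        for i in range(m + 1) for j in range(i, min(i + m // u, m) + 1))
assert dist <= 2 * w                       # rho(M_u x, x) <= 2 w_x(1/u)
```

In fact, since $(M_ux)(t)$ is a convex combination of two values of $x$ within distance $1/u$ of $t$, one even has $\rho(M_ux,x)\le w_x(1/u)$.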
\begin{lemma}\label{p12}
Let $P$ and $Q$ be two probability measures on $(\boldsymbol C,\mathcal C)$. If $P\pi^{-1}_{t_1,\ldots,t_k}=Q\pi^{-1}_{t_1,\ldots,t_k}$ for all $0\le t_1<\ldots<t_k\le1$, then
$P=Q$.
\end{lemma}
Proof. Denote by $\mathcal C_f$ the collection of {\it cylinder sets} of the form
\[\pi^{-1}_{t_1,\ldots,t_k}(H)=\{y\in \boldsymbol C: (y(t_1),\ldots,y(t_k))\in H\},\qquad \qquad \qquad (*)\]
where $0\le t_1<\ldots<t_k\le1$ and $H\subset \boldsymbol R^k$ is a Borel subset. Due to the continuity of the projections we have $\mathcal C_f\subset\mathcal C$.
It suffices to check, using Lemma \ref{PM.42}, that $\mathcal C_f$ is a separating class. Clearly, $\mathcal C_f$ is closed under formation of finite intersections. To show that $\sigma(\mathcal C_f)=\mathcal C$, observe that a closed ball centered at $x$ of radius $a$ can be represented as $\cap_r(y:|y(r)-x(r)|\le a)$, where $r$ ranges over rationals in [0,1]. It follows that $\sigma(\mathcal C_f)$ contains all closed balls, hence the open balls, and hence the $\sigma$-algebra generated by the open balls. By separability, the $\sigma$-algebra generated by the open balls, the so-called {\it ball $\sigma$-algebra}, coincides with the Borel $\sigma$-algebra generated by the open sets.
\begin{exercise}
Which of the three paths in Figure \ref{fcy} belong to the cylinder set $(*)$ with $k=3$, $t_1=0.2, t_2=0.5, t_3=0.8$, and $H=[-2,1]\times[-2,2]\times[-2,1]$?
\end{exercise}
\begin{figure}
\centering
\includegraphics[height=6cm]{Cylinder.png}
\caption{Cylinder sets.}
\label{fcy}
\end{figure}
\begin{theorem}\label{7.1}
Let $P_n$ be probability measures on $(\boldsymbol C,\mathcal C)$. If their finite-dimensional distributions converge weakly
$P_{n}\pi^{-1}_{t_1,\ldots,t_k}\Rightarrow \mu_{t_1,\ldots,t_k}$, and if $P_n$ is tight, then
(a) there exists a probability measure $P$ on $(\boldsymbol C,\mathcal C)$ with $P\pi^{-1}_{t_1,\ldots,t_k}=\mu_{t_1,\ldots,t_k}$, and
(b) $P_n\Rightarrow P$.
\end{theorem}
Proof. Tightness implies relative compactness, which in turn implies that each subsequence $(P_{n'})\subset (P_n)$ contains a further subsequence $(P_{n''})\subset (P_{n'})$ converging weakly to some probability measure $P$. By the mapping theorem
$P_{n''}\pi^{-1}_{t_1,\ldots,t_k}\Rightarrow P\pi^{-1}_{t_1,\ldots,t_k}$. Thus by hypothesis,
$P\pi^{-1}_{t_1,\ldots,t_k}=\mu_{t_1,\ldots,t_k}$. Moreover, by Lemma \ref{p12}, the limit $P$ must be the same for all converging subsequences, thus applying Theorem \ref{2.6} we may conclude that $P_n\Rightarrow P$.
\subsection{Wiener measure and Donsker's theorem}
\begin{definition}\label{p87}
Let $\xi_i$ be a sequence of r.v. defined on the same probability space $(\Omega,\mathcal{F},\mathbb P)$.
Put $S_n=\xi_1+\ldots+\xi_n$ and let $X^n_t(\omega)$ as a function of $t$ be the element of $\boldsymbol C$ defined by linear interpolation between its values $X^n_{i/n}(\omega)={S_i(\omega)\over\sigma\sqrt n}$ at the points $t=i/n$.
\end{definition}
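The construction of $X^n$ can be coded directly. A minimal sketch in Python (the steps $\xi_i=\pm1$ with $\sigma=1$ are an illustrative choice):

```python
import math, random

def X_n(xi, t):
    """X^n_t for the polygonal process of the definition: X^n_{i/n} = S_i/sqrt(n)
    (sigma = 1 is assumed for the steps xi), linear between grid points."""
    n = len(xi)
    s = [0.0]                              # partial sums S_0, S_1, ..., S_n
    for step in xi:
        s.append(s[-1] + step)
    u = t * n
    i = min(int(u), n - 1)
    frac = u - i
    return ((1 - frac) * s[i] + frac * s[i + 1]) / math.sqrt(n)

random.seed(1)
xi = [random.choice((-1, 1)) for _ in range(100)]
# at a grid point t = i/n the path equals the normalized partial sum
assert X_n(xi, 0.25) == sum(xi[:25]) / 10.0
```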
\begin{theorem} \label{8.2'}Let $X^n=(X^n_t:0\le t\le1)$ be defined by Definition \ref{p87} and let $P_n$ be the probability distribution of $X^n$. If $\xi_i$ are iid with zero mean and finite variance $\sigma^2$, then
(a) $P_{n}\pi^{-1}_{t_1,\ldots,t_k}\Rightarrow \mu_{t_1,\ldots,t_k}$, where
$\mu_{t_1,\ldots,t_k}$ are Gaussian distributions on $\boldsymbol R^k$ satisfying
\[
\mu_{t_1,\ldots,t_k}\big\{(x_1,\ldots,x_k): x_i-x_{i-1}\le\alpha_i,i=1,\ldots,k\big\}
=\prod_{i=1}^k\Phi\Big({\alpha_i\over\sqrt{t_i-t_{i-1}}}\Big), \mbox{ where }t_0=0,\ x_0=0,
\]
(b) the sequence $(P_n)$ of probability measures on $(\boldsymbol C,\mathcal C)$ is tight.
\end{theorem}
Proof. The claim (a) follows from the classical CLT and independence of increments of $S_n$.
For example, if $0\le s\le t\le1$, then
\begin{align*}
(X^n_s,X^n_t-X^n_s)&={1\over\sigma\sqrt{n}}(S_{\lfloor ns\rfloor},S_{\lfloor nt\rfloor}-S_{\lfloor ns\rfloor})+ \epsilon^n_{s,t},\\
\epsilon^n_{s,t}&={1\over\sigma\sqrt{n}}(\{ ns\}\xi_{\lfloor ns\rfloor+1},\{ nt\}\xi_{\lfloor nt\rfloor+1}-\{ ns\}\xi_{\lfloor ns\rfloor+1}),
\end{align*}
where $\{ nt\}$ stands for the fractional part of $nt$. By the classical CLT and Theorem \ref{2.8}c, ${1\over\sigma\sqrt{n}}(S_{\lfloor ns\rfloor},S_{\lfloor nt\rfloor}-S_{\lfloor ns\rfloor})$ has $\mu_{s,t}$ as a limit distribution. Applying Corollary \ref{3.1} to $\epsilon^n_{s,t}$, we derive $P_{n}\pi^{-1}_{s,t}\Rightarrow \mu_{s,t}$.
The proof of (b) is postponed until the next subsection.
\begin{definition}
Wiener measure $\mathbb W$ is a probability measure on $\boldsymbol C$ with $\mathbb W\pi^{-1}_{t_1,\ldots,t_k}= \mu_{t_1,\ldots,t_k}$ given by the formula in Theorem \ref{8.2'} part (a). The standard Wiener process $W$ is the random element on $(\boldsymbol C,\mathcal C,\mathbb W)$ defined by $W_t(x)=x(t)$.
\end{definition}
The existence of $\mathbb W$ follows from Theorems \ref{7.1} and \ref{8.2'}.
\begin{theorem} \label{8.2}Let $X^n=(X^n_t:0\le t\le1)$ be defined by Definition \ref{p87}. If $\xi_i$ are iid with zero mean and finite variance $\sigma^2$, then $X^n$ converges in distribution to the standard Wiener process.
\end{theorem}
Proof 1. This is a corollary of Theorems \ref{7.1} and \ref{8.2'}.\\
\noindent Proof 2. An alternative proof is based on Theorem \ref{7.5}. We have to verify that condition (i) of Theorem \ref{7.5} holds under the assumptions of Theorem \ref{8.2'}. To this end take $t_j=j\delta$, $j=0,\ldots, \delta^{-1}$ assuming $n\delta>1$. Then
\begin{align*}
\mathbb P(w(X^n,\delta)\ge3\epsilon)&\le \sum_{j=1}^{1/\delta}\mathbb P\Big(\sup_{t_{j-1}\le s\le t_j} |X^n_s-X^n_{t_{j-1}}| \ge \epsilon\Big)\\
&= \sum_{j=1}^{1/\delta}\mathbb P\Big(\max_{(j-1)n\delta\le k\le jn\delta} {|S_k-S_{(j-1)n\delta}|\over\sigma\sqrt {n}} \ge \epsilon\Big)= \sum_{j=1}^{1/\delta}\mathbb P\Big(\max_{k\le n\delta} |S_k|\ge \epsilon\sigma\sqrt {n}\Big)\\
&=\delta^{-1}\mathbb P\Big(\max_{k\le n\delta} |S_k|\ge \epsilon\sigma\sqrt {n}\Big)\le 3\delta^{-1}\max_{k\le n\delta}\mathbb P\Big( |S_k|\ge \epsilon\sigma
\sqrt {n}/3\Big),
\end{align*}
where the last step uses Etemadi's inequality:
\[\mathbb P\Big(\max_{k\le n} |S_k|\ge \alpha\Big)\le 3\max_{k\le n}\mathbb P\Big( |S_k|\ge \alpha/3\Big).\]
Remark: compare this with Kolmogorov's inequality
$\mathbb P(\max_{k\le n} |S_k|\ge \alpha)\le {n\sigma^2\over \alpha^2}$.
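Etemadi's inequality can be verified exactly for a short symmetric simple random walk by enumerating all $2^n$ paths (a brute-force sketch; the parameters $n=12$, $\alpha=4$ are arbitrary choices):

```python
from itertools import product

def etemadi_check(n, alpha):
    """Compare P(max_k |S_k| >= alpha) with 3 max_k P(|S_k| >= alpha/3)
    for the symmetric simple random walk, exactly, over all 2^n paths."""
    total = 2 ** n
    n_max = 0                              # paths with max_k |S_k| >= alpha
    per_k = [0] * (n + 1)                  # counts of paths with |S_k| >= alpha/3
    for steps in product((-1, 1), repeat=n):
        s, big = 0, False
        for k, d in enumerate(steps, 1):
            s += d
            big = big or abs(s) >= alpha
            if abs(s) >= alpha / 3:
                per_k[k] += 1
        n_max += big
    return n_max / total, 3 * max(per_k) / total

lhs, rhs = etemadi_check(12, 4)
assert lhs <= rhs                          # Etemadi's inequality holds
```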
It suffices to check that, assuming $\sigma=1$,
\[\lim_{\lambda\to\infty}\limsup_{n\to\infty}\lambda^2\max_{k\le n}\mathbb P\Big( |S_k|\ge \epsilon\lambda\sqrt n\Big)=0.\]
Indeed, by the classical CLT,
$$\mathbb P(|S_k|\ge \epsilon\lambda\sqrt k)<4(1-\Phi(\epsilon\lambda))\le {6\over \epsilon^4\lambda^4}$$
for sufficiently large $k\ge k(\lambda\epsilon)$.
It follows that
\[\limsup_{n\to\infty}\lambda^2\max_{k(\lambda\epsilon)\le k\le n}\mathbb P\Big( |S_k|\ge \epsilon\lambda\sqrt n\Big)\le\limsup_{n\to\infty}\lambda^2\max_{k\ge k(\lambda\epsilon)}\mathbb P\Big( |S_k|\ge \epsilon\lambda\sqrt k\Big)\le{6\over \epsilon^4\lambda^2}.\]
On the other hand, by Chebyshev's inequality,
\[\limsup_{n\to\infty}\lambda^2\max_{k\le k(\lambda\epsilon)}\mathbb P\Big( |S_k|\ge \epsilon\lambda\sqrt n\Big)\le\limsup_{n\to\infty}{\lambda^2k(\lambda\epsilon)
\over \epsilon^2\lambda^2n}=0\]
finishing the proof of (i) of Theorem \ref{7.5}.
\begin{example}
We show that $h(x)=\sup_tx(t)$ is a continuous mapping from $\boldsymbol C$ to $\boldsymbol R$. Indeed, if $h(x)\ge h(y)$, then there are $t_i$ such that
\[0\le h(x)-h(y)=x(t_1)-y(t_2)\le x(t_1)-y(t_1)\le \|x-y\|.\]
Thus, we have $|h(x)-h(y)|\le \rho(x,y)$ and continuity follows.
\end{example}
\begin{example}
Turning to the symmetric simple random walk, put $M_n=\max(S_0,\ldots,S_n)$. As we show later in Theorem \ref{(9.10)}, for any $b\ge0$,
\[\mathbb P(M_n\le b\sqrt n)\to{2\over\sqrt{2\pi}}\int_0^b e^{-u^2/2}du.\]
From $h(X^n)\Rightarrow h(W)$ with $h(x)=\sup_tx(t)$ we conclude that $\sup_{0\le t\le1}W_t$ is distributed as $|W_1|$. The same limit holds for $M_n=\max({S_0\over\sigma},{S_1-\mu\over\sigma},\ldots,{S_n-n\mu\over\sigma})$ for sums of iid r.v. with mean $\mu$ and standard deviation $\sigma$. For this reason the functional CLT is also called an {\it invariance principle}: the general limit can be computed via the simplest relevant case.
\end{example}
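The limit $\mathbb P(M_n\le b\sqrt n)\to2\Phi(b)-1$ can be checked without simulation: an exact dynamic-programming computation with an absorbing barrier reproduces the limiting value within a few percent (a sketch; the parameters $n=400$, $b=1$ are illustrative choices).

```python
import math

def prob_max_below(n, m):
    """P(max_{k<=n} S_k <= m) for the symmetric simple random walk,
    computed exactly by absorbing all mass that moves above level m."""
    p = [0.0] * (2 * n + 1)                # p[j] = P(S_k = j - n, M_k <= m)
    p[n] = 1.0
    for _ in range(n):
        q = [0.0] * (2 * n + 1)
        for j, pr in enumerate(p):
            if pr:
                for dj in (-1, 1):
                    if j + dj - n <= m:    # drop paths that exceed m
                        q[j + dj] += 0.5 * pr
        p = q
    return sum(p)

n, b = 400, 1.0
exact = prob_max_below(n, int(b * math.sqrt(n)))          # barrier at b*sqrt(n)
limit = 2 * 0.5 * (1 + math.erf(b / math.sqrt(2))) - 1    # 2*Phi(b) - 1
assert abs(exact - limit) < 0.05
```

The remaining gap is the $O(n^{-1/2})$ discretization error of the lattice walk.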
\begin{exercise}
Check if the following functionals are continuous on $\boldsymbol C$:
$$\sup_{\{0\le s,t\le1\}}|x(t)-x(s)|,\qquad \int_0^1 x(t)dt.$$
\end{exercise}
\subsection{Tightness in $\boldsymbol C$}
\begin{theorem}\label{7.2} The Arzela-Ascoli theorem. The set $A\subset \boldsymbol C$ is relatively compact if and only if
\begin{align*}
(i)& \quad\sup_{x\in A}|x(0)|<\infty,
\\
(ii)& \quad \lim_{\delta\to0}\sup_{x\in A} w_x(\delta)=0.
\end{align*}
\end{theorem}
Proof. {\it Necessity}. If the closure of $A$ is compact, then (i) obviously must hold. For a fixed $x$ the function $w_x(\delta)$ converges monotonically to zero as $\delta\downarrow0$. Since for each $\delta$ the function $w_x(\delta)$ is continuous in $x$, this convergence is uniform over $x\in K$ for any compact $K$ (Dini's theorem). It remains to see that taking $K$ to be the closure of $A$ we obtain (ii).
{\it Sufficiency}. Suppose now that (i) and (ii) hold. For a given $\epsilon>0$, choose $n$ large enough for $\sup_{x\in A} w_x(1/n)<\epsilon$. Since
\[|x(t)|\le |x(0)|+\sum_{i=1}^n \big|x\big(\tfrac{it}{n}\big)-x\big(\tfrac{(i-1)t}{n}\big)\big|\le |x(0)|+n\sup_{x\in A} w_x(1/n),\]
we derive $\alpha:=\sup_{x\in A}\|x\|<\infty$. The idea is to use this and (ii) to prove that $A$ is totally bounded; since $\boldsymbol C$ is complete, it will then follow that $A$ is relatively compact.
In other words, we have to find a finite $B_\epsilon\subset \boldsymbol C$ forming a $2\epsilon$-net for $A$.
Let $-\alpha=\alpha_0<\alpha_1<\ldots<\alpha_k=\alpha$ be such that $\alpha_j-\alpha_{j-1}\le\epsilon$. Then $B_\epsilon$ can be taken as a set of the continuous polygonal functions $y:[0,1]\to[-\alpha,\alpha]$ that linearly connect the pairs of points $({i-1\over n}, \alpha_{j_{i-1}}), ({i\over n}, \alpha_{j_i})$. See Figure \ref{aras}.
Let $x\in A$. It remains to show that there is a $y\in B_\epsilon$ such that $\rho(x,y)\le2\epsilon$. Indeed, since $|x(i/n)|\le\alpha$, there is a $y\in B_\epsilon$ such that $|x(i/n)-y(i/n)|<\epsilon$ for all $i=0,1,\ldots,n$. Both $y(i/n)$ and $y((i-1)/n)$ are within $2\epsilon$ of $x(t)$ for $t\in[(i-1)/n,i/n]$. Since $y(t)$ is a convex combination of $y(i/n)$ and $y((i-1)/n)$, it too is within $2\epsilon$ of $x(t)$. Thus $\rho(x,y)\le2\epsilon$ and $B_\epsilon$ is a $2\epsilon$-net for $A$.
\begin{figure}
\centering
\includegraphics[width=12cm,height=4cm]{net.pdf}
\caption{The Arzela-Ascoli theorem: constructing a $2\epsilon$-net.}
\label{aras}
\end{figure}
\begin{exercise}
Draw a curve $x\in A$ (cf Figure \ref{aras}) for which you cannot find a $y\in B_\epsilon$ such that $\rho(x,y)\le\epsilon$.
\end{exercise}
The next theorem explains the nature of condition (i) in Theorem \ref{7.5}.
\begin{theorem}\label{7.3}
Let $P_n$ be probability measures on $(\boldsymbol C,\mathcal C)$. The sequence $(P_n)$ is tight if and only if the following two conditions hold:
\begin{align*}
(i)& \quad \lim_{a\to\infty}\limsup_{n\to\infty}P_n(x: |x(0)|\ge a)=0,
\\
(ii)& \quad \lim_{\delta\to0}\limsup_{n\to\infty}P_n(x: w_x(\delta)\ge\epsilon)=0,\mbox{ for each positive }\epsilon.
\end{align*}
\end{theorem}
Proof. Suppose $(P_n)$ is tight. Given a positive $\eta$, choose a compact $K$ such that $P_n(K)>1-\eta$ for all $n$. By the Arzela-Ascoli theorem we have $K\subset (x: |x(0)|\le a)$ for large enough $a$ and $K\subset (x: w_x(\delta)\le\epsilon)$ for small enough $\delta$. Hence the necessity.
According to condition (i), for each positive $\eta$, there exist large $a_\eta$ and $n_\eta$ such that
\[P_n(x: |x(0)|\ge a_\eta)\le\eta,\quad n\ge n_\eta,\]
and condition (ii) implies that for each positive $\epsilon$ and $\eta$, there exist a small $\delta_{\epsilon,\eta}$ and a large $n_{\epsilon,\eta}$ such that
\[P_n(x: w_x(\delta_{\epsilon,\eta})\ge\epsilon)\le\eta,\quad n\ge n_{\epsilon,\eta}.\]
Due to Lemma \ref{1.3} for any finite $k$ the measure $P_k$ is tight, and so by the necessity there is an $a_{k,\eta}$ such that $P_k(x: |x(0)|\ge a_{k,\eta})\le\eta$, and there is a $\delta_{k,\epsilon,\eta}$ such that $P_k(x: w_x(\delta_{k,\epsilon,\eta})\ge\epsilon)\le\eta$.
Thus in proving sufficiency, we may put $n_\eta=n_{\epsilon,\eta}=1$ in the above two conditions. Fix an arbitrary small positive $\eta$. Given the two improved conditions, we have $P_n(B)\ge1-\eta$ and $P_n(B_{k})\ge1-2^{-k}\eta$ with $B=(x: |x(0)|< a_{\eta})$ and $B_{k}=(x: w_x(\delta_{1/k,2^{-k}\eta})<1/k)$. If $K$ is the closure of the intersection $B\cap B_1\cap B_{2}\cap\ldots$, then $P_n(K)\ge1-2\eta$. To finish the proof observe that $K$ is compact by the Arzela-Ascoli theorem.
\begin{example}
Consider the Dirac probability measure $P_n$ concentrated on the point $z_n\in \boldsymbol C$ from Example \ref{E1.3}. Referring to Theorem \ref{7.3} verify that the sequence $(P_n)$ is not tight.
\end{example}
\noindent {\bf Proof of Theorem \ref{8.2'} part b}. The stated tightness follows from Theorem \ref{7.3}. Indeed, condition (i) in Theorem \ref{7.3} is trivially fulfilled as $X^n_0\equiv0$. Furthermore, condition (i) of Theorem \ref{7.5} (established in the proof 2 of Theorem \ref{8.2}) translates into (ii) in Theorem \ref{7.3}.
\section{Applications of the functional CLT}
\subsection{The minimum and maximum of the Brownian path}
\begin{theorem}\label{(9.10)} Consider the standard Wiener process $W=(W_t, 0\le t\le 1)$ and let
$$m=\inf_{0\le t\le1} W_t,\qquad M=\sup_{0\le t\le1} W_t.$$
If $a\le0\le b$ and $a\le a'<b'\le b$, then
\begin{align*}
\mathbb P(&a<m\le M<b;\ a'<W_1<b')\\
&=\sum_{k=-\infty}^\infty\Big(\Phi(2k(b-a)+b')-\Phi(2k(b-a)+a')\Big)\\
&\qquad -\sum_{k=-\infty}^\infty\Big(\Phi(2k(b-a)+2b-a')-\Phi(2k(b-a)+2b-b')\Big),
\end{align*}
so that with $a'=a$ and $b'=b$ we get
\begin{align*}
\mathbb P(&a<m\le M<b)=\sum_{k=-\infty}^\infty(-1)^k\Big(\Phi(k(b-a)+b)-\Phi(k(b-a)+a)\Big).
\end{align*}
\end{theorem}
Proof. Let $S_n$ be the symmetric simple random walk and put $m_n=\min(S_0,\ldots,S_n)$, $M_n=\max(S_0,\ldots,S_n)$. Since the mapping of $\boldsymbol C$ into $\boldsymbol R^3$ defined by
\[x\to\big(\inf_tx(t),\sup_tx(t),x(1)\big)\]
is continuous, the functional CLT entails $n^{-1/2}(m_n,M_n,S_n)\Rightarrow(m,M,W_1)$. The theorem's main statement will be obtained in two steps.
Step 1: show that for integers satisfying $i<0<j$ and $i\le i'<j'\le j$,
\begin{align*}
\mathbb P(i<m_n\le M_n<j;\ &i'<S_n<j')\\
&=\sum_{k=-\infty}^\infty\mathbb P(2k(j-i)+i'<S_n<2k(j-i)+j')\\
&\quad -\sum_{k=-\infty}^\infty\mathbb P(2k(j-i)+2j-j'<S_n<2k(j-i)+2j-i').
\end{align*}
In other words, we have to show that for $i<0< j$, $i< l< j$
\begin{align*}
\mathbb P(i<m_n\le M_n<j;\ S_n=l)&=\sum_{k=-\infty}^\infty\mathbb P(S_n=2k(j-i)+l)\\
& \qquad -\sum_{k=-\infty}^\infty\mathbb P(S_n=2k(j-i)+2j-l).\qquad\qquad(*)
\end{align*}
Observe that here both series are just finite sums as $|S_n|\le n$.
Equality $(*)$ is proved by induction on $n$. For $n=1$, if $j>1$, then
\begin{align*}
\mathbb P(i<m_1&\le M_1<j;\ S_1=1)=\mathbb P(S_1=1)\\
&=\sum_{k=-\infty}^\infty\mathbb P(S_1=2k(j-i)+1) -\sum_{k=-\infty}^\infty\mathbb P(S_1=2k(j-i)+2j-1),
\end{align*}
and if $i<-1$, then
\begin{align*}
\mathbb P(i<m_1&\le M_1<j;\ S_1=-1)=\mathbb P(S_1=-1)\\
&=\sum_{k=-\infty}^\infty\mathbb P(S_1=2k(j-i)-1) -\sum_{k=-\infty}^\infty\mathbb P(S_1=2k(j-i)+2j+1).
\end{align*}
Assume as induction hypothesis that the statement holds for $(n-1,i,j,l)$ with all relevant triplets $(i,j,l)$. Conditioning on the first step of the random walk, we get
\begin{align*}
\mathbb P(i<m_n\le M_n<j;\ S_n=l)&={1\over2} \cdot \mathbb P(i-1<m_{n-1}\le M_{n-1}<j-1;\ S_{n-1}=l-1)\\
&+{1\over2} \cdot \mathbb P(i+1<m_{n-1}\le M_{n-1}<j+1;\ S_{n-1}=l+1),
\end{align*}
which together with the induction hypothesis yields the stated equality $(*)$:
\begin{align*}
2 \mathbb P(&i<m_n\le M_n<j;\ S_n=l)\\
&=\sum_{k=-\infty}^\infty\Big(\mathbb P(S_{n-1}=2k(j-i)+l-1)+\mathbb P(S_{n-1}=2k(j-i)+l+1)\Big)
\\
&\quad -\sum_{k=-\infty}^\infty\Big(\mathbb P(S_{n-1}=2k(j-i)+2j-l+1)+\mathbb P(S_{n-1}=2k(j-i)+2j-l-1)\Big)
\\
&=2\sum_{k=-\infty}^\infty\mathbb P(S_n=2k(j-i)+l) -2\sum_{k=-\infty}^\infty\mathbb P(S_n=2k(j-i)+2j-l).
\end{align*}
Step 2: show that for $c>0$ and $a<b$,
\begin{align*}
\sum_{k=-\infty}^\infty\mathbb P\Big(2k\lfloor c\sqrt n\rfloor+\lfloor a\sqrt n\rfloor<&S_n<2k\lfloor c\sqrt n\rfloor+\lfloor b\sqrt n\rfloor\Big)\\
& \to
\sum_{k=-\infty}^\infty\Big(\Phi(2kc+b)-\Phi(2kc+a)\Big),\quad n\to\infty.
\end{align*}
This is obtained using the CLT. The interchange of the limit with the summation over $k$ follows from
\begin{align*}
\lim_{k_0\to\infty} \sum_{|k|>k_0}\mathbb P\Big(2k\lfloor c\sqrt n\rfloor+\lfloor a\sqrt n\rfloor<S_n<2k\lfloor c\sqrt n\rfloor+\lfloor b\sqrt n\rfloor\Big)=0,
\end{align*}
which in turn can be justified by the following series form of Scheffe's theorem. If $\sum_ks_{kn}=\sum_ks_{k}=1$, the terms being nonnegative, and if $s_{kn}\to s_k$ for each $k$, then $\sum_kr_ks_{kn}\to\sum_kr_ks_{k}$ provided $r_k$ is bounded. To apply this in our case we should take
\[s_{kn}=\mathbb P\Big(2k\lfloor \sqrt n\rfloor-\lfloor \sqrt n\rfloor<S_n\le2k\lfloor \sqrt n\rfloor+\lfloor \sqrt n\rfloor\Big), \quad s_k=\Phi(2k+1)-\Phi(2k-1).\]
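Equality $(*)$ from Step 1 can be confirmed by brute force for a small walk: the sketch below enumerates all $2^n$ paths and compares the exact probability with the two (finite) reflection series. The tuple $(n,i,j,l)=(10,-3,4,2)$ is an arbitrary admissible choice.

```python
from itertools import product
from math import comb

def p_sn(n, v):
    """P(S_n = v) for the symmetric simple random walk."""
    if (n + v) % 2 or abs(v) > n:
        return 0.0
    return comb(n, (n + v) // 2) / 2 ** n

def lhs(n, i, j, l):
    """P(i < m_n <= M_n < j; S_n = l), by enumeration of all paths."""
    cnt = 0
    for steps in product((-1, 1), repeat=n):
        s, lo, hi = 0, 0, 0
        for d in steps:
            s += d
            lo, hi = min(lo, s), max(hi, s)
        cnt += (s == l and lo > i and hi < j)
    return cnt / 2 ** n

def rhs(n, i, j, l):
    """The double-reflection series of equality (*)."""
    return sum(p_sn(n, 2 * k * (j - i) + l) - p_sn(n, 2 * k * (j - i) + 2 * j - l)
               for k in range(-n, n + 1))

n, i, j, l = 10, -3, 4, 2
assert abs(lhs(n, i, j, l) - rhs(n, i, j, l)) < 1e-12
```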
\begin{corollary}\label{(9.14)} Consider the standard Wiener process $W$. If $a\le0\le b$, then
\begin{align*}
\mathbb P(\sup_{0\le t\le1}W_t<b)&=2\Phi(b)-1,\\
\mathbb P(\inf_{0\le t\le1}W_t>a)&=1-2\Phi(a),\\
\mathbb P(\sup_{0\le t\le1} |W_t|<b)&=2\sum_{k=-\infty}^\infty\Big\{\Phi((4k+1)b)-\Phi((4k-1)b)\Big\}.
\end{align*}
\end{corollary}
\subsection{The arcsine law}
\begin{lemma}\label{p247}
For $x\in \boldsymbol C$ and a Borel measurable, bounded $v:\boldsymbol R\to\boldsymbol R$, put $h(x)=\int_0^1v(x(t))dt$. If $v$ is continuous except on a set $D_v$ with
$\lambda(D_v)=0$, where $\lambda$ is the Lebesgue measure, then $h$ is $\mathcal C$-measurable and is continuous except on a set of Wiener measure 0.
\end{lemma}
Proof. Since both mappings $x\to x(t)$ and $t\to x(t)$ are continuous, the mapping $(x,t)\to x(t)$ is continuous in the product topology and therefore Borel measurable. It follows that the mapping $\psi(x,t)=v(x(t))$ is also measurable. Since $\psi$ is bounded, $h(x)=\int_0^1\psi(x,t)dt$ is $\mathcal C$-measurable, see Fubini's theorem.
Let $E=\{(x,t):x(t)\in D_v\}$. If $\mathbb W$ is Wiener measure on $(\boldsymbol C, \mathcal C)$, then by the hypothesis $\lambda(D_v)=0$,
\begin{align*}
&\mathbb W\{x:(x,t)\in E\}=\mathbb W\{x:x(t)\in D_v\}=0\mbox{ for each }t\in[0,1].
\end{align*}
It follows by Fubini's theorem applied to the measure $\mathbb W\times\lambda$ on $\boldsymbol C\times[0,1]$ that $\lambda\{t:(x,t)\in E\}=0$ for all $x$ outside a set $A_v\in\mathcal C$ satisfying $\mathbb W(A_v)=0$. Suppose that $\|x_n-x\|\to0$. If $x\notin A_v$, then $x(t)\notin D_v$ for almost all $t$ and hence $v(x_n(t))\to v(x(t))$ for almost all $t$. It follows by the bounded convergence theorem that
\[\mbox{if }x\notin A_v\mbox{ and } \|x_n-x\|\to0,\quad \mbox{ then }\int_0^1v(x_n(t))dt\to\int_0^1v(x(t))dt.\]
\begin{exercise}
Let $W$ be a standard Wiener process and $t_0\in(0,1)$. Put $W'_s={W_t-W_{t_0}\over \sqrt{1-t_0}}$ for $s={t-t_0\over 1-t_0}$, $t\in[t_0,1]$. Using the Donsker invariance principle show that $(W'_s, 0\le s\le 1)$ is also distributed as a standard Wiener process.
\end{exercise}
\begin{lemma}\label{M15}
Each of the following three mappings $h_i:\boldsymbol C\to\boldsymbol R$
\begin{align*}
h_1(x)&=\sup\{t:x(t)=0, t\in[0,1] \},\\
h_2(x)&=\lambda\{t: x(t)>0,t\in[0,1]\},\\
h_3(x)&=\lambda\{t: x(t)>0,t\in[0,h_1(x)]\}
\end{align*}
is $\mathcal C$-measurable
and continuous except on a set of Wiener measure 0.
\end{lemma}
Proof. Using the previous lemma with $v(z)=1_{\{z\in(0,\infty)\}}$ we obtain the assertion for $h_2$.
Turning to $h_1$, observe that
$$\{x:h_1(x)<\alpha\}=\{x:x(t)>0\mbox{ for all }t\in[\alpha,1]\}\cup\{x:x(t)<0\mbox{ for all }t\in[\alpha,1]\}$$
is open and hence $h_1$ is measurable. If $h_1$ is discontinuous at $x$, then there exist $0<t_0<t_1<1$ such that $x(t_1)=0$ and
\[ \mbox{either }\quad x(t)>0 \mbox{ for all } t\in[t_0,1]\setminus\{t_1\}\quad\mbox{ or }\quad x(t)<0 \mbox{ for all }
t\in[t_0,1]\setminus\{t_1\}.\]
That $h_1$ is continuous except on a set of Wiener measure 0 will therefore follow if we show that, for each $t_0$, the random variables
$$M_0=\sup\{W_t,t\in[t_0,1]\}\quad\mbox{ and }\quad m_0=\inf\{W_t,t\in[t_0,1]\}$$
have continuous distributions. By the last exercise and Theorem \ref{(9.10)}, $M'=M_0-W_{t_0}$ has a continuous distribution. Because $M'$ and $W_{t_0}$ are independent, we conclude that their sum also has a continuous distribution. The infimum is treated the same way.
Finally, for $h_3$, use the representation
\[h_3(x)=\psi(x,h_1(x)),\quad \mbox{ where }\psi(x,t)=\int_0^tv(x(u))du \quad \mbox{ with } v(z)=1_{\{z\in(0,\infty)\}}.\]
\begin{theorem}\label{(9.23)} Consider the standard Wiener process $W$ and let
$T=h_1(W)$ be the time at which $W$ last passes through 0,
$U=h_2(W)$ be the total amount of time $W$ spends above 0, and
$V=h_3(W)$ be the total amount of time $W$ spends above 0 in the interval $[0,T]$, so that
$$U=V+(1-T)1_{\{W_1\ge0\}}.$$
Then the triplet $(T,V,W_1)$ has the joint density
\[f(t,v,z)=1_{\{0<v<t<1\}}g(t,z),\quad g(t,z)={1\over2\pi }{|z|\over t^{3/2}(1-t)^{3/2}}e^{-{z^2\over 2(1-t)}}.\]
In particular, the conditional distribution of $V$ given $(T,W_1)$ is uniform on $[0,T]$, and
\[\mathbb P(T\le t)=\mathbb P(U\le t)={2\over\pi }\arcsin(\sqrt t), \quad 0<t<1.\]
\end{theorem}
Proof. The main idea is to apply the invariance principle via the symmetric simple random walk $S_n$. We will use three properties of $S_n$ and its path functionals $(T_n,U_n,V_n)$. First, we need the local limit theorem for $p_n(i)=\mathbb P(S_n=i)$ similar to that of Example \ref{E3.4}:
\[\mbox{if }{i\over\sqrt n}\to z,\mbox{ with } n-i \mbox{ being even}, \mbox{ then }{\sqrt n\over 2}p_n(i)\to{1\over\sqrt{2\pi}}e^{-z^2/2}.\]
Second, we need the fact that
\[\mathbb P(S_1\ge1,\ldots, S_{n-1}\ge1,S_n=i)={i\over n}p_n(i),\quad i\ge1.\]
The third fact we need is that if $S_{2n}=0$, then $U_{2n}=V_{2n}$ and
\[\mathbb P(V_{2n}=2j|S_{2n}=0)={1\over n+1},\quad j=0,1,\ldots,n.\]
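The third fact is a Chung--Feller type uniformity; it can be confirmed by exhaustive enumeration for a small bridge (a sketch with $2n=8$; a step $(k-1,k)$ is counted as above 0 when $\max(S_{k-1},S_k)>0$):

```python
from itertools import product

def cond_V_counts(n2):
    """Among paths with S_{2n} = 0, tabulate the time V_{2n} spent above 0
    (the number of steps (k-1, k) with max(S_{k-1}, S_k) > 0)."""
    counts = {}
    for steps in product((-1, 1), repeat=n2):
        path = [0]
        for d in steps:
            path.append(path[-1] + d)
        if path[-1] == 0:                  # keep only the bridges
            v = sum(max(path[k - 1], path[k]) > 0 for k in range(1, n2 + 1))
            counts[v] = counts.get(v, 0) + 1
    return counts

c = cond_V_counts(8)                       # 2n = 8, so n = 4
assert set(c) == {0, 2, 4, 6, 8}           # V_{2n} is even
assert len(set(c.values())) == 1           # uniform: each value has prob 1/(n+1)
```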
Using these three facts we obtain that for $0\le2j\le2k<n$ and $i\ge1$,
\begin{align*}
\mathbb P(T_n=2k,&V_n=2j,S_n=i)\\
& = \mathbb P(S_{2k}=0,V_{2k}=2j,S_{2k+1}\ge1,\ldots, S_{n-1}\ge1,S_n=i)\\
&=\mathbb P(S_{2k}=0)\mathbb P(V_{2k}=2j|S_{2k}=0)\mathbb P(S_{2k+1}\ge1,\ldots, S_{n-1}\ge1,S_{n}=i|S_{2k}=0)\\
&=p_{2k}(0){1\over k+1}{i\over n-2k}p_{n-2k}(i).
\end{align*}
We apply Theorem \ref{3.3} to the three-dimensional lattice of points $({2k\over n},{2j\over n},{i\over \sqrt n})$ for which $i\equiv n \mbox{ (mod 2)}$. The volume of the corresponding cell is ${2\over n}\cdot{2\over n}\cdot{2\over \sqrt n}=8n^{-5/2}$. If
\[{2k\over n}\to t,\quad{2j\over n}\to v,\quad{i\over\sqrt n}\to z,\quad0<v<t<1,\quad z>0,\]
then
\begin{align*}
{n^{5/2}\over8}\mathbb P(T_n=2k,&V_n=2j,S_n=i)\\
&={\sqrt n\over\sqrt {2k}}{\sqrt {2k}\over2}p_{2k}(0){n\over 2(k+1)}{i\over \sqrt n}{n\over n-2k}{\sqrt n\over\sqrt {n-2k}}{\sqrt {n-2k}\over2}p_{n-2k}(i)\\
&\to {1\over\sqrt{2\pi}}{1\over t^{3/2}}z{1\over (1-t)^{3/2}}{1\over\sqrt{2\pi}}e^{-{z^2\over2(1-t)}}=g(t,z).
\end{align*}
The same result holds for negative $z$ by symmetry.
The joint density of $(T,W_1)$ is $tg(t,z)1_{\{0<t<1\}}$, hence the marginal density for $T$ equals
\begin{align*}
f_T(t)&=\int_{-\infty}^\infty tg(t,z)dz=\int_{0}^\infty e^{-{z^2\over 2(1-t)}} {zdz\over \pi(1-t)^{3/2}t^{1/2}}={1\over\pi(1-t)^{1/2}t^{1/2}}
\end{align*}
implying
\[\mathbb P(T\le t)={2\over\pi }\arcsin(\sqrt t), \quad 0<t<1.\]
Notice also that
\begin{align*}
G(u)&:=\int_{-\infty}^\infty \int_{u}^1g(t,z)dtdz= \int_{u}^1\int_{0}^\infty e^{-{z^2\over 2(1-t)}} {zdz\over \pi(1-t)^{3/2}t^{3/2}}dt\\
&=\int_{u}^1{dt\over\pi(1-t)^{1/2}t^{3/2}}=-{2\over\pi}\int_{u}^1{dt^{-1/2}\over\sqrt{1-t}}={2\over\pi}\int_1^{u^{-1/2}}{ydy\over\sqrt{y^2-1}}={2\over\pi}\sqrt{u^{-1}-1}.
\end{align*}
If $(T,W_1)=(t,z)$, then $U$ is distributed uniformly over $[1-t,1]$ for $z\ge0$, and uniformly over $[0,t]$ for $z<0$:
\begin{align*}
\mathbb P(U\le u|T=t,W_1=z)&={u-1+t\over t}1_{\{u\in[1-t,1], z\ge0\}}+{u\over t}1_{\{u\in[0,t], z<0\}}+1_{\{u\in(t,1], z<0\}}.
\end{align*}
Thus the marginal distribution function of $U$ equals
\begin{align*}
\mathbb P(U\le u)&=\mathbb E\Big({u-1+T\over T}1_{\{u\in[1-T,1], W_1\ge0\}}+{u\over T}1_{\{u\in[0,T], W_1<0\}}+1_{\{u\in(T,1], W_1<0\}}\Big)\\
&= \int_0^\infty\int _{1-u}^1(u-1+t)g(t,z)dtdz+ \int_{-\infty}^0\int _{u}^1ug(t,z)dtdz+ \int_{-\infty}^0\int _0^{u}tg(t,z)dtdz\\
&={1\over2}\int _{1-u}^1f_T(t)dt+{u-1\over2}G(1-u)+{u\over2} G(u)+{1\over2}\int _0^{u}f_T(t)dt\\
&={1\over2}\mathbb P(T>1- u)+{1\over2}\mathbb P(T\le u)={2\over\pi }\arcsin(\sqrt u).
\end{align*}
Here the two $G$-terms cancel, since $(u-1)G(1-u)+uG(u)=(u-1){2\over\pi}\sqrt{u\over 1-u}+u{2\over\pi}\sqrt{1-u\over u}=0$, while $\mathbb P(T>1-u)=\mathbb P(T\le u)$ because the density $f_T$ is symmetric about $1/2$.
\subsection{The Brownian bridge}
\begin{definition}
The transformed standard Wiener process $W^\circ_t=W_t-tW_1$, $t\in[0,1]$, is called the standard Brownian bridge.
\end{definition}
\begin{exercise}
Show that the standard Brownian bridge $W^\circ$ is a Gaussian process with zero mean and covariance $\mathbb E(W^\circ_sW^\circ_t)=s(1-t)$ for $s\le t$.
\end{exercise}
\begin{example}
Define $h:\boldsymbol C\to \boldsymbol C$ by $(h(x))(t)=x(t)-tx(1)$. This is a continuous mapping since $\rho(h(x),h(y))\le2\rho(x,y)$, and $h(X^n)\Rightarrow W^\circ$ by Theorem \ref{8.2} and the mapping theorem.
\end{example}
\begin{theorem}\label{(9.32)}
Let $P_\epsilon$ be the probability measure on $(\boldsymbol C,\mathcal C)$ defined by
\[P_\epsilon(A)=\mathbb P(W\in A|0\le W_1\le\epsilon),\quad A\in\mathcal C.\]
Then $P_\epsilon\Rightarrow \mathbb W^\circ$ as $\epsilon\to0$, where $\mathbb W^\circ$ is the distribution of the Brownian bridge $W^\circ$.
\end{theorem}
Proof. We will prove that for every closed $F\in\mathcal C$
\[\limsup_{\epsilon\to0}\mathbb P(W\in F|0\le W_1\le\epsilon)\le \mathbb P(W^\circ\in F).\]
Using $W^\circ_t=W_t-tW_1$ we get $\mathbb E(W^\circ_tW_1)=0$ for all $t$. From the normality we conclude that $W_1$ is independent of each $(W^\circ_{t_1},\ldots,W^\circ_{t_k})$. Therefore,
\[\mathbb P(W^\circ\in A,W_1\in B)=\mathbb P(W^\circ\in A)\mathbb P(W_1\in B),\quad A\in \mathcal C_f, B\in \mathcal R,\]
and since $ \mathcal C_f$, the collection of finite-dimensional sets, see the proof of Lemma \ref{p12}, is a separating class, it follows
\[\mathbb P(W^\circ\in A|0\le W_1\le\epsilon)=\mathbb P(W^\circ\in A),\quad A\in \mathcal C, \epsilon >0.\]
Observe that $\rho(W,W^\circ)=|W_1|$. Thus,
$$\{|W_1|\le\delta\}\cap\{W\in F\}\subset \{W^\circ\in F_\delta\},\quad \mbox{where }F_\delta=\{x:\rho(x,F)\le\delta\}.$$
Therefore, if $\epsilon<\delta$,
\[\mathbb P(W\in F|0\le W_1\le\epsilon)\le\mathbb P(W^\circ\in F_\delta|0\le W_1\le\epsilon)=\mathbb P(W^\circ\in F_\delta),\]
leading to the required result
\[\limsup_{\epsilon\to0}\mathbb P(W\in F|0\le W_1\le\epsilon)\le\limsup_{\delta\to0}\mathbb P(W^\circ\in F_\delta)= \mathbb P(W^\circ\in F).\]
\begin{theorem} \label{(9.39)}
Distribution functions for several functionals of the Brownian bridge:
\begin{align*}
\mathbb P\big(a<\inf_t W^\circ_t\le\sup_t W^\circ_t\le b\big)&=\sum_{k=-\infty}^\infty \Big(e^{-2k^2(b-a)^2}-e^{-2(b+k(b-a))^2}\Big),\quad a<0<b,\\
\mathbb P\big(\sup_t |W^\circ_t|\le b\big)&=1+2\sum_{k=1}^\infty (-1)^ke^{-2k^2b^2},\quad b>0,\\
\mathbb P\big(\sup_t W^\circ_t\le b\big)&=\mathbb P\big(\inf_t W^\circ_t>- b\big)=1-e^{-2b^2},\quad b>0,\\
\mathbb P\big( h_2(W^\circ)\le u\big)&=u,\quad u\in[0,1].
\end{align*}
\end{theorem}
Proof. The main idea of the proof is the following. Suppose that $h:\boldsymbol C\to\boldsymbol R^k$ is a measurable mapping and that the set $D_h$ of its discontinuities satisfies $\mathbb W^\circ(D_h)=0$. It follows by Theorem \ref{(9.32)} and the mapping theorem that
\[\mathbb P(h(W^\circ)\le\alpha)=\lim_{\epsilon\to0}\mathbb P(h(W)\le\alpha|0\le W_1\le\epsilon).\]
Using either this or alternatively,
\[\mathbb P(h(W^\circ)\le\alpha)=\lim_{\epsilon\to0}\mathbb P(h(W)\le\alpha|-\epsilon\le W_1\le0)\]
one can find explicit forms for distributions connected with $W^\circ$.
Turning to Theorem \ref{(9.10)} with $a<0<b$ and $a'=0,b'=\epsilon$ we get
\begin{align*}
\mathbb P(a<m\le M<b;&\ 0<W_1<\epsilon)\\
&=\sum_{k=-\infty}^\infty\Big(\Phi(2k(b-a)+\epsilon)-\Phi(2k(b-a))\Big)\\
&\qquad-\sum_{k=-\infty}^\infty\Big(\Phi(2k(b-a)+2b)-\Phi(2k(b-a)+2b-\epsilon)\Big).
\end{align*}
This implies the first statement as
\[{\Phi(z+\epsilon)-\Phi(z)\over\epsilon}\to{e^{-z^2/2}\over\sqrt{2\pi}}.\]
As for the last statement, we need to show, in terms of $U=h_2(W)$, that
\[\lim_{\epsilon\to0}\mathbb P(U\le u|-\epsilon\le W_1\le0)=u,\]
or, in terms of $V=h_3(W)$, that
\[\lim_{\epsilon\to0}\mathbb P(V\le u|-\epsilon\le W_1\le0)=u.\]
Recall that the distribution of $V$ for given $T$ and $W_1$ is uniform on $(0,T)$, in other words, $L=V/T$ is uniformly distributed on $(0,1)$ and is independent of $(T,W_1)$. Thus,
\begin{align*}
\mathbb P(V\le u|-\epsilon\le W_1\le0)&=\mathbb P(TL\le u|-\epsilon\le W_1\le0)\\
&=\int_0^1\mathbb P(T\le u/s|-\epsilon\le W_1\le0)ds\\
&=u+\int_u^1\mathbb P(T\le u/s|-\epsilon\le W_1\le0)ds.
\end{align*}
It remains to see that
\begin{align*}
\int_u^1\mathbb P(T\le u/s&|-\epsilon\le W_1\le0)ds={1\over\Phi(\epsilon)-\Phi(0)}\int_u^1\mathbb P(T\le u/s;-\epsilon\le W_1\le0)ds\\
&\le c\epsilon^{-1}\int_u^1\mathbb P(T\le r;-\epsilon\le W_1\le0){dr\over r^2}\le
c\epsilon^{-1}u^{-2}\int_u^1\int_0^{r}\int_0^{\epsilon}tg(t,z)dzdtdr\\
& \le
c_1\epsilon u^{-2}\int_0^{1}{dt\over t^{1/2}(1-t)^{1/2}}\to0,\quad \epsilon\to0.
\end{align*}
\section{The space $\boldsymbol D=\boldsymbol D[0,1]$}\label{secD}
\subsection{Cadlag functions}
\begin{definition}
Let $\boldsymbol D=\boldsymbol D[0,1]$ be the space of functions $x:[0,1]\to\boldsymbol R$ that are right continuous and have left-hand limits.
\end{definition}
\begin{exercise}
If $x_n\in\boldsymbol D$ and $\|x_n-x\|\to0$, then $x\in\boldsymbol D$.
\end{exercise}
For $x\in \boldsymbol D$ and $T\subset[0,1]$ we will use the notation
\[w_x(T)=w(x,T)=\sup_{s,t\in T}|x(t)-x(s)|,\]
and write $w_x[t,t+\delta]$ instead of $w_x([t,t+\delta])$.
This should not be confused with the earlier defined modulus of continuity
\[w_x(\delta)=w(x,\delta)=\sup_{0\le t\le 1-\delta}w_x[t,t+\delta].\]
Clearly, if $T_1\subset T_2$, then $w_x(T_1)\le w_x(T_2)$. Hence $w_x(\delta)$ is non-decreasing in $\delta$.
\begin{example}\label{frp}
Consider $x_n(t)=$ the fractional part of $nt$. It has regular downward jumps of size $1$. For example, $x_1(t)=t$ for $t\in[0,1)$, and $x_1(1)=0$. Another example: $x_2(t)=2t$ for $t\in[0,1/2)$, $x_2(t)=2t-1$ for $t\in[1/2,1)$, and $x_2(1)=0$.
Placing an interval $[t,t+\delta]$ around a jump, we find $w_{x_n}(\delta)\equiv1$.
\end{example}
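A quick numerical sanity check of this example (a sketch; the helpers \texttt{osc\_modulus} and \texttt{frac\_part} are our own, and a grid evaluation only approximates the supremum):

```python
# Approximate w_x(delta) = sup_{0<=t<=1-delta} w_x[t, t+delta]
# by evaluating x on a uniform grid of m+1 points.

def osc_modulus(x, delta, m=2000):
    vals = [x(i / m) for i in range(m + 1)]
    window = max(1, int(delta * m))  # grid steps covering [t, t+delta]
    best = 0.0
    for i in range(m + 1 - window):
        chunk = vals[i:i + window + 1]
        best = max(best, max(chunk) - min(chunk))
    return best

def frac_part(n):
    # x_n(t) = fractional part of n*t, as in the example
    return lambda t: n * t - int(n * t)

for n in (1, 2, 5):
    print(n, osc_modulus(frac_part(n), delta=0.1))  # close to 1 for every n
```

Each window that straddles a downward jump of $x_n$ contains values arbitrarily close to both $0$ and $1$, which is why the modulus does not shrink with $\delta$.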
\begin{lemma}\label{L12.1}
Consider an arbitrary $x\in \boldsymbol D$. For each $\epsilon>0$, there exist points $0=t_0<t_1<\ldots<t_v=1$ such that
\[w_x[t_{i-1},t_i)<\epsilon,\quad i=1,2,\ldots,v.\]
It follows that $x$ is bounded, and that $x$ can be uniformly approximated by simple functions constant over intervals, so that it is Borel measurable.
It follows also that $x$ has at most countably many jumps.
\end{lemma}
Proof. To prove the first statement, let $t^\circ=t^\circ(\epsilon)$ be the supremum of those $t\in[0,1]$ for which $[0,t)$ can be decomposed into finitely many subintervals satisfying $w_x[t_{i-1},t_i)<\epsilon$. We show in three steps that $t^\circ=1$.
Step 1. Since $x(0)=x(0+)$, we have $w_x[0,\eta_0)<\epsilon$ for some small positive $\eta_0$. Thus $t^\circ>0$.
Step 2. Since $x(t^\circ-)$ exists, we have $w_x[t^\circ-\eta_1,t^\circ)<\epsilon$ for some small positive $\eta_1$, which implies that the interval $[0,t^\circ)$ can itself be so decomposed.
Step 3. Suppose $t^\circ=\tau$ with $\tau<1$. Since $x(\tau)=x(\tau+)$, the argument of Step 1 shows that the decomposition of $[0,\tau)$ can be extended past $\tau$, so that, by the definition of $t^\circ$, we would have $t^\circ>\tau$, a contradiction. Hence $t^\circ=1$.
The last statement of the lemma follows from the fact that for any natural $n$, there exist at most finitely many points $t$ at which $|x(t)-x(t-)|\ge n^{-1}$.
\begin{exercise}
Find a bounded function $x\notin \boldsymbol D$ with the following property: for any set $0=t_0<t_1<\ldots<t_v=1$ there exists an $i$ such that $w_x[t_{i-1},t_i)\ge1$.
\end{exercise}
\begin{definition}
Let $\delta\in(0,1)$. A set $0=t_0<t_1<\ldots<t_v=1$ is called $\delta$-sparse if $t_i-t_{i-1}>\delta$ for $i=1,\ldots,v$. Define an analog of the modulus of continuity $w_x(\delta)$ by
\begin{align*}
w'_x(\delta)&=w'(x,\delta)=\inf_{\{t_i\}}\max_{1\le i\le v}w_x[t_{i-1},t_i),
\end{align*}
where the infimum extends over all $\delta$-sparse sets $\{t_i\}$. The function $w_x'(\delta)$ is called a {\it cadlag modulus} of $x$.
\end{definition}
\begin{exercise}
Using Lemma \ref{L12.1} show that a function $x:[0,1]\to\boldsymbol R$ belongs to $\boldsymbol D$ if and only if $w_x'(\delta)\to0$ as $\delta\to0$.
\end{exercise}
\begin{exercise}
Compute $w'_x(\delta)$ for $x=1_{[0,a)}$.
\end{exercise}
\begin{lemma}\label{p123}
For any $x$, $w'_x(\delta)$ is non-decreasing in $\delta$, and $w_x'(\delta)\le w_x(2\delta)$. Moreover, for any $x\in \boldsymbol D$,
\[ j_x\le w_x(\delta)\le 2 w_x'(\delta)+j_x,\quad j_x=\sup_{0<t\le1}|x(t)-x(t-)|.\]
\end{lemma}
Proof. The inequality $j_x\le w_x(\delta)$ holds since $x(t-)$ is a limit of values $x(s)$ with $s\in[t-\delta,t)$. Taking a $\delta$-sparse set with $t_i-t_{i-1}\le2\delta$ we get $w_x'(\delta)\le w_x(2\delta)$. To see that $w_x(\delta)\le 2 w_x'(\delta)+j_x$, take a $\delta$-sparse set such that $w_x[t_{i-1},t_i)\le w_x'(\delta)+\epsilon$ for all $i$. If $|t-s|\le\delta$, then $s,t\in[t_{i-1},t_{i+1})$ for some $i$, so that $|x(t)-x(s)|\le 2(w_x'(\delta)+\epsilon)+j_x$; now let $\epsilon\to0$.
\begin{lemma}\label{(12.28)} Considering triples $t_1,t,t_2$ in $[0,1]$, put
\begin{align*}
w''_x(\delta)&=w''(x,\delta)=\sup_{t_1\le t_2\le t_1+\delta}\sup_{t_1\le t\le t_2}\{|x(t)-x(t_1)|\wedge|x(t_2)-x(t)|\}.
\end{align*}
For any $x$, $w''_x(\delta)$ is non-decreasing in $\delta$, and $w''_x(\delta)\le w'_x(\delta)$.
\end{lemma}
Proof. Suppose that $w'_x(\delta)<w$ and let $\{\tau_i\}$ be a $\delta$-sparse set such that $w_x[\tau_{i-1},\tau_i)<w$ for all $i$. If $t_1\le t\le t_2\le t_1+\delta$, then, since the intervals $[\tau_{i-1},\tau_i)$ are longer than $\delta$, the point $t$ shares an interval with $t_1$ or with $t_2$; hence either $|x(t)-x(t_1)|<w$ or $|x(t_2)-x(t)|<w$. Thus $w''_x(\delta)<w$ and letting $w\downarrow w'_x(\delta)$ we obtain $w''_x(\delta)\le w'_x(\delta)$.
\begin{example}
For the functions $x_n(t)=1_{\{t\in[0,n^{-1})\}}$ and $y_n=1_{\{t\in[1-n^{-1},1]\}}$ we have $w''_{x_n}(\delta)=w''_{y_n}(\delta)=0$, although $w'_{x_n}(\delta)=w'_{y_n}(\delta)=1$ for $n\ge\delta^{-1}$.
\end{example}
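The contrast between a single jump and two nearby jumps can also be seen numerically. Below is a brute-force grid approximation of $w''_x(\delta)$ (the helper \texttt{w2} is our own; it simply enumerates grid triples $t_1\le t\le t_2\le t_1+\delta$):

```python
# Grid approximation of w''_x(delta) from Lemma (12.28): the supremum over
# triples t1 <= t <= t2 <= t1 + delta of min(|x(t)-x(t1)|, |x(t2)-x(t)|).

def w2(x, delta, m=60):
    grid = [i / m for i in range(m + 1)]
    vals = [x(t) for t in grid]
    best = 0.0
    for i1 in range(m + 1):
        for i2 in range(i1, m + 1):
            if grid[i2] - grid[i1] > delta:
                break  # t2 too far from t1
            for i in range(i1, i2 + 1):
                best = max(best, min(abs(vals[i] - vals[i1]),
                                     abs(vals[i2] - vals[i])))
    return best

one_jump = lambda t: 1.0 if t < 0.05 else 0.0          # 1_[0, 1/20)
two_jumps = lambda t: 1.0 if 0.4 <= t < 0.45 else 0.0  # 1_[0.4, 0.45)

print(w2(one_jump, 0.1))   # 0.0: a single jump is invisible to w''
print(w2(two_jumps, 0.1))  # 1.0: two jumps within delta are detected
```

This matches the example: $w''$ ignores isolated jumps (one of the two increments in the minimum is always small) but detects two jumps less than $\delta$ apart.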
\begin{exercise}
Consider $x_n(t)$ from Example \ref{frp}. Find $w'_{x_n}(\delta)$ and $w''_{x_n}(\delta)$ for all $(n,\delta)$.
\end{exercise}
\begin{figure}
\centering
\includegraphics[width=8cm,height=4cm]{wx.pdf}
\caption{Exercise \ref{xwx}.}
\label{wx}
\end{figure}
\begin{exercise}\label{xwx}
Compare the values of $w'_{x}(\delta)$ and $w''_{x}(\delta)$ for the curve $x$ in Figure \ref{wx}.
\end{exercise}
\begin{lemma}\label{(12.32)}
For any $x\in \boldsymbol D$ and $\delta\in(0,1)$,
\begin{align*}
{w'_x(\delta/2)\over24} \le w''_x(\delta)\vee |x(\delta)-x(0)|\vee |x(1-)-x(1-\delta)|\le w'_x(\delta).
\end{align*}
\end{lemma}
Proof. The second inequality follows from the definition of $w'_x(\delta)$ and Lemma \ref{(12.28)}. For the first inequality it suffices to show that
\begin{align*}
(i)&\quad w_x[t_1,t_2) \le 2(w''_x(\delta)+ |x(t_2)-x(t_1)|),\mbox{ if }t_2\le t_1+\delta,\\
(ii)&\quad w'_x(\delta/2) \le 6\Big(w''_x(\delta)\vee w_x[0,\delta)\vee w_x[1-\delta,1)\Big),
\end{align*}
as these two relations imply
\begin{align*}
{w'_x(\delta/2)\over6} &\le w''_x(\delta)\vee w_x[0,\delta)\vee w_x[1-\delta,1)\\
&\le (2w''_x(\delta)+ 2|x(\delta)-x(0)|)\vee (2w''_x(\delta)+ 2|x(1-)-x(1-\delta)|)\\
&\le (4w''_x(\delta))\vee (4|x(\delta)-x(0)|)\vee(4 |x(1-)-x(1-\delta)|).
\end{align*}
Here we used the trick
\[w_x[1-\delta,1)=\lim_{t\uparrow1}w_x[1-\delta,t)\le2w''_x(\delta)+2\lim_{t\uparrow1} |x(t)-x(1-\delta)|.\]
To see (i), note that, if $t_1\le t< t_2\le t_1+\delta$, then either $|x(t)-x(t_1)|\le w''_x(\delta)$, or $|x(t_2)-x(t)|\le w''_x(\delta)$. In the latter case, we have
\[|x(t)-x(t_1)|\le |x(t)-x(t_2)|+|x(t_2)-x(t_1)|\le w''_x(\delta)+ |x(t_2)-x(t_1)|.\]
Therefore, for $t_2\le t_1+\delta$,
\[\sup_{t_1\le t< t_2}|x(t)-x(t_1)|\le w''_x(\delta)+ |x(t_2)-x(t_1)|,\]
hence
\[\sup_{t_1\le s,t< t_2}|x(t)-x(s)|\le 2(w''_x(\delta)+ |x(t_2)-x(t_1)|).\]
We prove (ii) in four steps.
Step 1. We will need the following inequality
\[|x(s)-x(t_1)|\wedge|x(t_2)-x(t)|\le 2w''_x(\delta)\quad\mbox{if }t_1\le s<t\le t_2\le t_1+\delta.\qquad(*)\]
To see this observe that, by the definition of $w''_x(\delta)$ applied to the triple $(t_1,s,t_2)$, either $|x(s)-x(t_1)|\le w''_x(\delta)$ or $|x(t_2)-x(s)|\le w''_x(\delta)$.
In the second case, apply the definition to the triple $(s,t,t_2)$: either $|x(t)-x(s)|\le w''_x(\delta)$ or $|x(t_2)-x(t)|\le w''_x(\delta)$, and in both subcases the triangle inequality gives $|x(t_2)-x(t)|\le 2w''_x(\delta)$.
Step 2. Putting
$$\alpha:=w''_x(\delta)\vee w_x[0,\delta)\vee w_x[1-\delta,1),\qquad T_{x,\alpha}:=\{t:x(t)-x(t-)>2\alpha\},$$
we show that there exist points $0=s_0<s_1<\ldots<s_r=1$ such that $s_i-s_{i-1}\ge\delta$ and
\[T_{x,\alpha}\subset\{s_0,\ldots,s_r\}.\]
Suppose $u_1,u_2\in T_{x,\alpha}$ and $0<u_1<u_2<u_1+\delta$. Then there are disjoint intervals $(t_1,s)$ and $(t,t_2)$ such that $u_1\in(t_1,s)$, $u_2\in(t,t_2)$, and $t_2-t_1<\delta$. As both these intervals are short enough, we have $|x(s)-x(t_1)|>2\alpha$ and $|x(t_2)-x(t)|>2\alpha$, contradicting $(*)$. Thus $(0,1)$ cannot contain two points of $T_{x,\alpha}$ within $\delta$ of one another. And neither $[0,\delta)$ nor $[1-\delta,1)$ can contain a point from $T_{x,\alpha}$.
Step 3. Recursively adding middle points for the pairs $(s_{i-1},s_i)$ such that $s_i-s_{i-1}>\delta$ we get an enlarged set $\{s_0,\ldots,s_r\}$ (with possibly a larger $r$) satisfying
\[T_{x,\alpha}\subset\{s_0,\ldots,s_r\},\qquad \delta/2<s_i-s_{i-1}\le\delta,\quad i=1,\ldots,r.\]
Step 4. It remains to show that $w'_x(\delta/2) \le 6\alpha$. Since $\{s_0,\ldots,s_r\}$ from step 3 is a $(\delta/2)$-sparse set, it suffices to verify that
$$w_x[s_{i-1},s_i)\le 6\alpha,\quad i=1,\ldots,r.$$
The proof will be completed after we demonstrate that
$$|x(t_2)-x(t_1)|\le 6\alpha\quad \mbox{ for }s_{i-1}\le t_1<t_2<s_i.$$
Define $\sigma_1$ and $\sigma_2$ by
\begin{align*}
\sigma_1&=\sup\{\sigma\in[t_1,t_2]: \sup_{t_1\le u\le\sigma}|x(u)-x(t_1)|\le2\alpha\},\\
\sigma_2&=\inf\{\sigma\in[t_1,t_2]: \sup_{\sigma\le u\le t_2}|x(t_2)-x(u)|\le2\alpha\}.
\end{align*}
If $\sigma_1<\sigma_2$, then there are $\sigma_1<s<t<\sigma_2$ violating $(*)$, since by the definition of $\alpha$ we have $w''_x(\delta)\le\alpha$. Therefore, $\sigma_2\le\sigma_1$ and it follows that
$|x(\sigma_1-)-x(t_1)|\le2\alpha$ and $|x(t_2)-x(\sigma_1)|\le2\alpha$. Since $\sigma_1\in (s_{i-1},s_i)$, we have $|x(\sigma_1)-x(\sigma_1-)|\le2\alpha$ implying
$$|x(t_2)-x(t_1)|\le |x(t_2)-x(\sigma_1)|+|x(\sigma_1)-x(\sigma_1-)|+|x(\sigma_1-)-x(t_1)|\le 6\alpha.$$
\subsection{Two metrics in $\boldsymbol D$ and the Skorokhod topology}
\begin{example}
Consider
$x(t)=1_{\{t\in[a,1]\}}$ and $y(t)=1_{\{t\in[b,1]\}}$ for $a,b\in(0,1)$. If $a\ne b$, then $\|x-y\|=1$ even when $a$ is very close to $b$.
The uniform metric is therefore too strong for the space $\boldsymbol D$: we need a metric under which $x$ and $y$ become close as $b\to a$.
\end{example}
\begin{definition}
Let $\Lambda$ denote the class of strictly increasing continuous mappings $\lambda:[0,1]\to[0,1]$ with $\lambda0=0$, $\lambda1=1$. Denote by $1\in\Lambda$ the identity map $1 t\equiv t$, and put
$ \|\lambda\|^\circ=\sup_{s<t}\big|\log{\lambda t-\lambda s\over t-s}\big|$. The smaller $ \|\lambda\|^\circ$ is, the closer the slopes of $\lambda$ are to $1$:
\[ e^{-\|\lambda\|^\circ}\le{\lambda t-\lambda s\over t-s}\le e^{\|\lambda\|^\circ}.\]
\end{definition}
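Since the difference quotient of a piecewise linear $\lambda$ over any interval is a weighted average of its segment slopes, $\|\lambda\|^\circ$ is attained on a single segment. A small sketch of this observation (the breakpoint representation is our own convention):

```python
import math

def lam_norm(breaks):
    """||lambda||° for a piecewise linear lambda in Lambda, given as
    (t, lambda t) breakpoints starting at (0, 0) and ending at (1, 1).
    The supremum of |log of a difference quotient| is attained on a segment."""
    return max(abs(math.log((lt - ls) / (t - s)))
               for (s, ls), (t, lt) in zip(breaks, breaks[1:]))

identity = [(0.0, 0.0), (1.0, 1.0)]
lam = [(0.0, 0.0), (0.4, 0.5), (1.0, 1.0)]  # carries 0.4 to 0.5

print(lam_norm(identity))  # 0.0
print(lam_norm(lam))       # max(|log 1.25|, |log(5/6)|), about 0.223
```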
\begin{exercise}
Let $\lambda,\mu\in\Lambda$. Show that $$\|\lambda\mu-\lambda\|\le \|\mu-1\|\cdot e^{\|\lambda\|^\circ}.$$
\end{exercise}
\begin{definition}
For $x,y\in \boldsymbol D$ define
\begin{align*}
d(x,y)&=\inf_{\lambda\in\Lambda}\{\|\lambda-1\|\vee\|x-y\lambda\|\},\\
d^\circ(x,y)&=\inf_{\lambda\in\Lambda}\{\|\lambda\|^\circ\vee\|x-y\lambda\|\}.
\end{align*}
\end{definition}
\begin{exercise}
Show that $d(x,y)\le \|x-y\|$ and $d^\circ(x,y)\le \|x-y\|$.
\end{exercise}
\begin{example}
Consider
$x(t)=1_{\{t\in[a,1]\}}$ and $y(t)=1_{\{t\in[b,1]\}}$ for $a,b\in(0,1)$. Clearly, if $\lambda(a)=b$, then $\|x-y\lambda\|=0$ and otherwise $\|x-y\lambda\|=1$. Thus
\begin{align*}
d(x,y)&=\inf\{\|\lambda-1\|:\lambda\in\Lambda, \lambda(a)=b\}=|a-b|,\\
d^\circ(x,y)&=\Big(\inf\{\|\lambda\|^\circ:\lambda\in\Lambda, \lambda(a)=b\}\Big)\wedge1=\Big(\big|\log{a\over b}\big|\vee\big|\log{1-a\over 1-b}\big|\Big)\wedge1,
\end{align*}
so that $d(x,y)\to0$ and $d^\circ(x,y)\to0$ as $b\to a$.
\end{example}
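The upper bound $d(x,y)\le|a-b|$ from this example can be confirmed numerically: the piecewise linear $\lambda$ carrying $a$ to $b$ gives $\|x-y\lambda\|=0$ while $\|\lambda-1\|=|a-b|$, whereas without a time change $\|x-y\|=1$. (A sketch with $a=0.25$, $b=0.375$; the grid and function names are our own.)

```python
# x = 1_[a,1], y = 1_[b,1]; lam is piecewise linear with lam(a) = b.

a, b = 0.25, 0.375

def x(t): return 1.0 if t >= a else 0.0
def y(t): return 1.0 if t >= b else 0.0

def lam(t):  # lam(0) = 0, lam(a) = b, lam(1) = 1, linear in between
    return b * t / a if t <= a else b + (1 - b) * (t - a) / (1 - a)

grid = [i / 1000 for i in range(1001)]
sup_xy = max(abs(x(t) - y(lam(t))) for t in grid)  # ||x - y(lam .)|| on the grid
sup_lam = max(abs(lam(t) - t) for t in grid)       # ||lam - 1|| on the grid
sup_unif = max(abs(x(t) - y(t)) for t in grid)     # ||x - y|| on the grid

print(sup_xy, sup_unif)  # 0.0 and 1.0
print(sup_lam)           # |a - b| = 0.125, attained at t = a
```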
\begin{exercise}\label{abc}
Given $0<b<a<c<1$, find $d(x,y)$ for
$$x(t)=2\cdot1_{\{t\in[a,1]\}},\qquad y(t)=1_{\{t\in[b,1]\}}+1_{\{t\in[c,1]\}}.$$
Does $d(x,y)\to0$ as $b\to a$ and $c\to a$?
\end{exercise}
\begin{lemma}\label{p126}
Both $d$ and $d^\circ$ are metrics in $\boldsymbol D$, and $d\le e^{d^\circ}-1$.
\end{lemma}
Proof. Note that $d(x,y)$ is the infimum of those $\epsilon>0$ for which there exists a $\lambda\in\Lambda$ with
\begin{align*}
&\sup_t|\lambda t-t|=\sup_t|t-\lambda^{-1}t|<\epsilon,\\
&\sup_t|x(t)-y(\lambda t)|=\sup_t|x(\lambda^{-1}t)-y(t)|<\epsilon.
\end{align*}
Of course $d(x,y)\ge0$, $d(x,y)=0$ implies $x=y$, and $d(x,y)=d(y,x)$. To see that $d$ is a metric we have to check the triangle inequality $d(x,z)\le d(x,y)+d(y,z)$. It follows from
\begin{align*}
&\|\lambda_1\lambda_2-1\|\le\|\lambda_1-1\|+\|\lambda_2-1\|,\\
&\|x-z\lambda_1\lambda_2\|\le\|x-y\lambda_2\|+\|y-z\lambda_1\|.
\end{align*}
Symmetry and the triangle inequality for $d^\circ$ follows from $ \|\lambda^{-1}\|^\circ= \|\lambda\|^\circ$ and the inequality $$ \|\lambda_1\lambda_2\|^\circ\le\|\lambda_1\|^\circ+\|\lambda_2\|^\circ.$$ That $d^\circ(x,y)=0$ implies $x=y$ follows from
$d\le e^{d^\circ}-1$ which is a consequence of $\|x-y\lambda\|\le e^{\|x-y\lambda\|}-1$ and
\[\|\lambda-1\|=\sup_{0\le t\le 1}t\Big| {\lambda t-\lambda0\over t-0}-1\Big|\le e^{\|\lambda\|^\circ}-1.\]
The last inequality uses $|u-1|\le e^{|\log u|}-1$ for $u>0$.
\begin{example}
Consider $j_x$, the maximum jump in $x\in \boldsymbol D$. Clearly, $|j_x-j_y|<\epsilon$ if $\|x-y\|<\epsilon/2$, and so $j_x$ is continuous in the uniform topology. It is also continuous in the Skorokhod topology. Indeed, if $d(x,y)<\epsilon/2$, then there is a $\lambda$ such that $\|\lambda-1\|<\epsilon/2$ and $\|x-y\lambda\|<\epsilon/2$. Since $j_y=j_{y\lambda}$, continuity in the uniform topology gives
$|j_x-j_{y}|=|j_x-j_{y\lambda}|<\epsilon$.
\end{example}
\begin{lemma}\label{12.2}
If $d(x,y)<\delta^2$ and $\delta\le1/3$, then $d^\circ(x,y)\le4\delta+w'_x(\delta)$.
\end{lemma}
Proof. We prove that if $d(x,y)<\delta^2$ and $\delta\le1/3$, then $d^\circ(x,y)<4\delta+w'_x(\delta)$.
Choose $\mu\in\Lambda$ such that $\|\mu-1\|<\delta^2$ and $\|x\mu^{-1}-y\|<\delta^2$. Take $\{t_i\}$ to be a $\delta$-sparse set satisfying $w_x[t_{i-1},t_i)\le w'_x(\delta)+\delta$ for each $i$. Take $\lambda$ to agree with $\mu$ at the points $\{t_i\}$ and to be linear in between. Since $\mu^{-1}\lambda t_i=t_i$, we have $t\in[t_{i-1},t_i)$ if and only if $\mu^{-1}\lambda t\in[t_{i-1},t_i)$, and therefore
\[|x(t)-y(\lambda t)|\le |x(t)-x(\mu^{-1}\lambda t)|+|x(\mu^{-1}\lambda t)-y(\lambda t)|< w'_x(\delta)+\delta+\delta^2\le4\delta+w'_x(\delta).\]
Now it is enough to verify that $\|\lambda\|^\circ<4\delta$. Draw a picture to see that the slopes of $\lambda$ are always between ${\delta\pm2\delta^2\over \delta}=1\pm2\delta$. Since $|\log(1\pm2\delta)|< 4\delta$ for $\delta\le1/3$, we get $\|\lambda\|^\circ<4\delta$.
\begin{theorem}\label{12.1}
The metrics $d$ and $d^\circ$ are equivalent and generate the same topology, the so-called {\it Skorokhod topology}.
\end{theorem}
Proof. By definition $d(x_n,x)\to0$ ($d^\circ(x_n,x)\to0$) if and only if there is a sequence $\lambda_n\in\Lambda$ such that
$\|\lambda_n-1\|\to0$ ($\|\lambda_n\|^\circ\to0$) and $\|x_n\lambda_n-x\|\to0$. If $d^\circ(x_n,x)\to0$, then $d(x_n,x)\to0$ due to $d\le e^{d^\circ}-1$.
The reverse implication follows from Lemma \ref{12.2}.
\begin{definition}
Denote by $\mathcal D$ the Borel $\sigma$-algebra generated by the open sets of $({\boldsymbol D},d)$, or equivalently of $({\boldsymbol D},d^\circ)$.
\end{definition}
\begin{lemma}\label{(12.14)}
Skorokhod convergence $x_n\to x$ in ${\boldsymbol D}$ implies $x_n(t)\to x(t)$ for continuity points $t$ of $x$. Moreover, if $x$ is continuous on $[0,1]$, then Skorokhod convergence implies uniform convergence.
\end{lemma}
Proof. Let $\lambda_n\in\Lambda$ be such that
$\|\lambda_n-1\|\to0$ and $\|x_n-x\lambda_n\|\to0$. The first assertion follows from
\[|x_n(t)-x(t)|\le |x_n(t)-x(\lambda_nt)|+|x(\lambda_nt)-x(t)|.\]
The second assertion is obtained from
\[\|x_n-x\|\le \|x_n-x\lambda_n\|+w_x(\|\lambda_n-1\|).\]
\begin{example}
Put $x_n(t )=1_{\{t\in[a-2^{-n},1]\}}+1_{\{t\in[a+2^{-n},1]\}}$ and $x(t )=2\cdot 1_{\{t\in[a,1]\}}$ for some $a\in(0,1)$. We have $x_n(t)\to x(t)$ for continuity points $t$ of $x$, however $x_n$ does not converge to $x$ in the Skorokhod topology.
\end{example}
\begin{exercise}
Fix $\delta\in(0,1)$ and consider $w'_x(\delta)$ as a function of $x\in\boldsymbol D$.
(i) The function $w'_x(\delta)$ is continuous with respect to the uniform metric
since
$$|w'_x(\delta) - w'_y(\delta)|\le2 \|x-y\|.$$
Hint: show that $w_x[t_{i-1},t_i)\le w_y[t_{i-1},t_i)+2 \|x-y\|$.
(ii) However $w'_x(\delta)$ is not continuous with respect to $(\boldsymbol D,d)$. Verify this using $x_n=1_{[0,\delta+2^{-n})}$ and $x=1_{[0,\delta)}$.
\end{exercise}
\begin{exercise}
Show that $h(x)=\sup_tx(t)$ is a continuous mapping from $\boldsymbol D$ to $\boldsymbol R$. Hint: show first that for any $\lambda\in\Lambda$,
\[|h(x)-h(y)|\le \|x-y\lambda\|.\]
\end{exercise}
\subsection{Separability and completeness of $\boldsymbol D$}
\begin{lemma}\label{L12.3}
Given $0=s_0<s_1<\ldots<s_k=1$ define a non-decreasing map $\kappa:[0,1]\to [0,1]$ by setting
\[
\kappa t=\left\{
\begin{array}{ll}
s_{j-1} & \mbox{for } t\in[s_{j-1},s_j),\ j=1,\ldots,k,\\
1 & \mbox{for } t=1.
\end{array}
\right.
\]
If $\max_j(s_j-s_{j-1})\le\delta$, then $d( x\kappa,x)\le \delta\vee w'_x(\delta)$ for any $x\in \boldsymbol D$.
\end{lemma}
Proof. Given $\epsilon>0$ find a $\delta$-sparse set $\{t_i\}$ satisfying $w_x[t_{i-1},t_i)\le w'_x(\delta)+\epsilon$ for each $i$. Let $\lambda\in\Lambda$ satisfy $\lambda 0=0$ and $\lambda t_i=s_j$ for $t_i\in(s_{j-1},s_j]$, $j=1,\ldots,k$, and let $\lambda$ be linear in between; note that each $(s_{j-1},s_j]$ contains at most one $t_i$, since the $t_i$ are more than $\delta$ apart. Since $\|\lambda-1\|\le\delta$, it suffices to show that $|x(\kappa t)-x(\lambda^{-1}t)|\le w'_x(\delta)+\epsilon$. This holds if $t$ is 0 or 1, and it is enough to show that, for $t\in(0,1)$, both $\kappa t$ and $\lambda^{-1} t$ lie in the same $[t_{i-1},t_i)$, see Figure \ref{Figa}. We prove this by showing that $\kappa t<t_i$ is equivalent to $\lambda^{-1} t<t_i$, $i=1,\ldots,k$. Suppose that $t_i\in(s_{j-1},s_j]$. Then
\[\kappa t<t_i\quad\Rightarrow\quad \kappa t<s_j\quad\Rightarrow\quad \kappa t\le s_{j-1}\quad\Rightarrow\quad \kappa t<t_i.\]
Thus $\kappa t<t_i$ is equivalent to $\kappa t<s_j$ which in turn is equivalent to $ t<s_j$. On the other hand,
$\lambda t_i=s_j$, and hence $ t<s_j$ is equivalent to $t<\lambda t_i$ or $\lambda^{-1} t<t_i$.
\begin{figure}
\centering
\includegraphics[width=6cm]{lemma6.pdf}
\caption{Lemma \ref{L12.3}. Comparing the blue graph of the function $\kappa$ to the red graph of the function $\lambda^{-1}$.}
\label{Figa}
\end{figure}
\begin{example}
Let $x_n(t)=1_{\{t\in[t_0,t_0+2^{-n})\}}$. Since $d(x_n,x_{n+1})=2^{-n}-2^{-n-1}=2^{-n-1}$, the sequence $(x_n)$ is fundamental with respect to $d$. However, it is not $d$-convergent. Indeed, $x_n(t)\to0$ for all $t\ne t_0$, and Skorokhod convergence $x_n\to x$ in ${\boldsymbol D}$ would, by Lemma \ref{(12.14)}, imply $x(t)=0$ at all continuity points of $x$. Since $x\in{\boldsymbol D}$ has at most countably many points of discontinuity, right continuity yields $x\equiv0$. Moreover, since the limit $x\equiv0$ is continuous, we would have $\|x_n-x\|\to0$. But $\|x_n\|\equiv1$.
\end{example}
\begin{theorem}\label{12.2}
The space $\boldsymbol D$ is separable under $d$ and $d^\circ$, and is complete under $d^\circ$.
\end{theorem}
Proof. {\it Separability} for $d$. Put $s_j=j/k$, $j=1,\ldots,k$. Let $B_k$ be the set of functions having a constant, rational value over each $[s_{j-1},s_j)$ and a rational value at $t=1$. Then $B=\cup B_k$ is countable. Now it is enough to prove that given $x\in{\boldsymbol D}$ and $\epsilon>0$ we can find some $y\in B$ such that $d(x,y)<2\epsilon$. Choosing $k$ such that $k^{-1}<\epsilon$ and $w'_x(k^{-1})<\epsilon$ we can find $y\in B_k$ satisfying $d( x\kappa,y)<\epsilon$, for $\kappa$ defined as in Lemma \ref{L12.3}. It remains to see that $d( x\kappa,x)<\epsilon$ according to Lemma \ref{L12.3}. Since $d$ and $d^\circ$ generate the same topology, $\boldsymbol D$ is separable under $d^\circ$ as well.
{\it Completeness.} We show that any $d^\circ$-fundamental sequence $x_n\in \boldsymbol D$ contains a subsequence $y_k=x_{n_k}$ that is $d^\circ$-convergent. Choose $n_k$ in such a way that $d^\circ(y_k,y_{k+1})<2^{-k}$. Then $\Lambda$ contains $\mu_k$ such that $\|\mu_k\|^\circ<2^{-k}$ and $\|y_k\mu_k-y_{k+1}\|<2^{-k}$.
We construct $\lambda_k\in\Lambda$ such that $\|\lambda_k\|^\circ\to0$ and $\|y_k\lambda_k-y\|\to0$ for some $y\in \boldsymbol D$. To this end put $\mu_{k,m}=\mu_k\mu_{k+1}\ldots \mu_{k+m}$. From
\begin{align*}
\| \mu_{k,m+1}-\mu_{k,m}\|&\le\| \mu_{k+m+1}-1\|\cdot e^{\|\mu_k\mu_{k+1}\ldots \mu_{k+m}\|^\circ}\\
&\le(e^{\| \mu_{k+m+1}\|^\circ}-1)\cdot e^{\|\mu_k\|^\circ+\ldots +\|\mu_{k+m}\|^\circ}\\
&\le 2^{-k-m}\cdot e^{2^{-k}+\ldots+2^{-k-m}}<2^{-k-m+2}
\end{align*}
we conclude that for a fixed $k$ the sequence of functions $\mu_{k,m}$ is uniformly fundamental. Thus there exists a $\lambda_k$ such that $\|\mu_{k,m}-\lambda_k\|\to 0$ as $m\to\infty$. To prove that $\lambda_k\in\Lambda$ we use
\begin{align*}
\Big|\log {\mu_{k,m}t- \mu_{k,m}s\over t-s}\Big|\le\|\mu_k\mu_{k+1}\ldots \mu_{k+m}\|^\circ<2^{-k+1}.
\end{align*}
Letting here $m\to\infty$ we get $\|\lambda_k\|^\circ\le2^{-k+1}$. Since $\|\lambda_k\|^\circ$ is finite we conclude that $\lambda_k$ is strictly increasing and therefore $\lambda_k\in\Lambda$.
Finally, observe that
\[\|y_k\lambda_k-y_{k+1}\lambda_{k+1}\|=\|y_k\mu_k\lambda_{k+1}-y_{k+1}\lambda_{k+1}\|=\|y_k\mu_k-y_{k+1}\|<2^{-k}.\]
It follows that the sequence $y_k\lambda_k\in \boldsymbol D$ is uniformly fundamental and hence $\|y_k\lambda_k-y\|\to0$ for some $y$. Observe that $y$ must lie in $\boldsymbol D$. Since $\|\lambda_k\|^\circ\to0$, we obtain $d^\circ(y_k,y)\to0$.
\subsection{Relative compactness in the Skorokhod topology}
First comes an analogue of the Arzel\`a--Ascoli theorem in terms of $w'_x(\delta)$, and then a convenient alternative in terms of $w''_x(\delta)$.
\begin{theorem} \label{12.3}
A set $A\subset \boldsymbol D$ is relatively compact in the Skorokhod topology iff
\begin{align*}
(i)& \quad \sup_{x\in A}\|x\|<\infty, \\
(ii)& \quad \lim_{\delta\to0}\sup_{x\in A} w'_x(\delta)=0.
\end{align*}
\end{theorem}
Proof of sufficiency only. Put $\alpha=\sup_{x\in A}\|x\|$. For a given $\epsilon>0$,
put $H_\epsilon=\{\alpha_i\}$, where $-\alpha=\alpha_0<\alpha_1<\ldots<\alpha_k=\alpha$ and $\alpha_j-\alpha_{j-1}\le\epsilon$,
and choose $\delta<\epsilon$ so that $w'_x(\delta)<\epsilon$ for all $x\in A$.
\noindent According to Lemma \ref{L12.3}, for any $\kappa=\kappa_{\{s_j\}}$ satisfying $\max_j(s_{j}-s_{j-1})\le\delta$, we have $d( x\kappa,x)\le \epsilon$ for all $x\in A$. Let
$B_\epsilon$ be the set of $y\in \boldsymbol D$ that assume a constant value from $H_\epsilon$ on each $[s_{j-1},s_j)$ and satisfy $y(1)\in H_\epsilon$. For any $x\in A$ there is a $y\in B_\epsilon$ such that $d( x\kappa,y)\le \epsilon$. Thus $B_\epsilon$ forms a $2\epsilon$-net for $A$ in the sense of $d$, and $A$ is totally bounded in the sense of $d$.
But we must show that $A$ is totally bounded in the sense of $d^\circ$, since this is the metric under which $\boldsymbol D$ is complete. This is true because, according to Lemma \ref{12.2}, the set $B_{\delta^2}$ is an $\epsilon'$-net for $A$ in the sense of $d^\circ$, where $\epsilon'=4\delta+\sup_{x\in A}w'_x(\delta)$ can be made arbitrarily small.
\begin{theorem} \label{12.4}
A set $A\subset \boldsymbol D$ is relatively compact in the Skorokhod topology iff
\begin{align*}
(i)& \quad \sup_{x\in A}\|x\|<\infty, \\
(ii)& \quad
\left\{
\begin{array}{l}
\lim_{\delta\to0}\sup_{x\in A} w''_x(\delta)=0,\\
\lim_{\delta\to0}\sup_{x\in A} |x(\delta)-x(0)|=0,\\
\lim_{\delta\to0}\sup_{x\in A} |x(1-)-x(1-\delta)|=0.
\end{array}
\right.
\end{align*}
\end{theorem}
Proof. It is enough to show that (ii) of Theorem \ref{12.4} is equivalent to (ii) of Theorem \ref{12.3}. This follows from Lemma \ref{(12.32)}.
\section{Probability measures on $\boldsymbol D$ and random elements}
\subsection{Finite-dimensional distributions on $\boldsymbol D$}
Finite-dimensional sets play in $\boldsymbol D$ the same role as they do in $\boldsymbol C$.
\begin{definition}
Consider projection mappings $\pi_{t_1,\ldots,t_k}:\boldsymbol D\to\boldsymbol R^k$. For $T\subset[0,1]$, define in $\mathcal D$ the subclass $\mathcal D_f(T)$ of finite-dimensional sets $\pi_{t_1,\ldots,t_k}^{-1}(H)$, where $k$ is arbitrary, $t_i$ belong to $T$, and $H\in\mathcal R^k$.
\end{definition}
\begin{theorem}\label{12.5} Consider projection mappings $\pi_{t_1,\ldots,t_k}:\boldsymbol D\to\boldsymbol R^k$. The following three statements hold.
(a) The projections $\pi_0$ and $\pi_1$ are continuous, and for $t\in(0,1)$, $\pi_t$ is continuous at $x$ if and only if $x$ is continuous at $t$.
(b) Each $\pi_{t_1,\ldots,t_k}$ is a measurable map.
(c) If $T$ contains 1 and is dense in $[0,1]$, then $\sigma\{\pi_t:t\in T\}=\sigma\{\mathcal D_f(T)\}=\mathcal D$ and $\mathcal D_f(T)$ is a separating class.
\end{theorem}
Proof. (a) Since each $\lambda\in\Lambda$ fixes 0 and 1, $\pi_0$ and $\pi_1$ are continuous: for $i=0,1$,
\[d(x,y)\ge\inf_{\lambda\in\Lambda}\|x-y\lambda\|\ge |x(i)-y(i)|=|\pi_i(x)-\pi_i(y)|.\]
Suppose that $0<t<1$. If $x$ is continuous at $t$, then by Lemma \ref{(12.14)}, $\pi_t$ is continuous at $x$. Suppose, on the other hand, that $x$ is discontinuous at $t$. If $\lambda_n\in\Lambda$ carries $t$ to $t-1/n$ and is linear on $[0,t]$ and $[t,1]$, and if $x_n(s)=x(\lambda_ns)$, then $d(x_n,x)\to0$ but $x_n(t)\nrightarrow x(t)$.
(b) A mapping into $\boldsymbol R^k$ is measurable if each component mapping is. Therefore it suffices to show that
$\pi_t$ is measurable. Since $\pi_1$ is continuous, we may assume $t<1$. We use the pointwise convergence
\[h_\epsilon(x):=\epsilon^{-1}\int_t^{t+\epsilon}x(s)\,ds\to\pi_t(x),\quad \epsilon\to0,\quad x\in \boldsymbol D.\]
If $x^n\to x$ in the Skorokhod topology, then $x^n(s)\to x(s)$ at every continuity point $s$ of $x$, hence almost everywhere on $[t,t+\epsilon]$, since $x$ has at most countably many discontinuities. By the bounded convergence theorem, this together with the uniform boundedness of the sequence $(x^n)$ implies $h_\epsilon(x^n)\to h_\epsilon(x)$.
Thus for each $\epsilon$, $h_\epsilon$ is continuous and therefore measurable, implying that its pointwise limit $\pi_t$ is also measurable.
(c) By right-continuity and the assumption that $T$ is dense, it follows that $\pi_0$ is measurable with respect to
$\sigma\{\mathcal D_f(T)\}$. So we may as well assume that $0\in T$.
Suppose $s_0,\ldots,s_k$ are points in $T$ satisfying $0=s_0<\ldots<s_k=1$. For
$\alpha=(\alpha_0,\ldots,\alpha_k)\in\boldsymbol R^{k+1}$ define $V\alpha\in \boldsymbol D$ by
\[
(V\alpha)(t)=\left\{
\begin{array}{ll}
\alpha_{j-1} & \mbox{for } t\in[s_{j-1},s_j),\ j=1,\ldots,k,\\
\alpha_k & \mbox{for } t=1.
\end{array}
\right.
\]
Clearly, $V:\boldsymbol R^{k+1}\to \boldsymbol D$ is continuous implying that $\kappa=V\pi_{s_0,\ldots,s_k}$ is measurable $\sigma\{\mathcal D_f(T)\}/\mathcal D$.
Since $T$ is dense, for any $n$ we can choose $s_0^n,\ldots,s_k^n$ so that $\max_i(s_i^n-s_{i-1}^n)<n^{-1}$. Put $\kappa^n=V\pi_{s_0^n,\ldots,s_k^n}$. With this choice define a map $A_n:\boldsymbol D\to \boldsymbol D$ by $A_nx=x\kappa^n$. By Lemma \ref{L12.3}, $A_nx\to x$ for each $x$. We conclude that the identity map is measurable $\sigma\{\mathcal D_f(T)\}/\mathcal D$ and therefore $\mathcal D\subset\sigma\{\mathcal D_f(T)\}$. Finally, since $\mathcal D_f(T)$ is a $\pi$-system, it is a separating class.
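The averaging functionals $h_\epsilon$ from part (b) of the proof can be checked numerically on a step function: by right continuity the averages recover the value $x(t)$ rather than the left limit $x(t-)$. (A sketch; \texttt{h\_eps} and the test function are our own, with the integral approximated by a midpoint rule.)

```python
# h_eps(x) = eps^{-1} * integral of x over [t, t+eps], midpoint rule.

def h_eps(x, t, eps, m=10000):
    return sum(x(t + eps * (i + 0.5) / m) for i in range(m)) / m

# Right continuous at 0.5 with a jump: x(0.5-) = 1, x(0.5) = 2.
step = lambda u: 1.0 if u < 0.5 else 2.0 + (u - 0.5)

for eps in (0.2, 0.05, 0.01):
    print(eps, h_eps(step, 0.5, eps))  # tends to x(0.5) = 2, not to x(0.5-) = 1
```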
\begin{definition}
Let $D_c$ be the set of count paths: nondecreasing functions $x\in \boldsymbol D$ with $x(t)\in\mathbb Z$ for each $t$, and $x(t)- x(t-)=1$ at points of discontinuity.
\end{definition}
\begin{exercise}
Find $d(x,y)$ for $x,y\in D_c$ in terms of the jump points of these two count paths. What does a fundamental sequence $(x_n)$ in $D_c$ look like for large $n$?
Show that $D_c$ is closed in the Skorokhod topology.
\end{exercise}
\begin{lemma}
Let $T_0=\{t_1,t_2,\ldots\}$ be a countable, dense set in $[0,1]$, and put
$\pi(x)=(x(t_1),x(t_2),\ldots).$
(a) The mapping $\pi:\boldsymbol D\to\boldsymbol R^\infty$ is $\mathcal D/\mathcal R^\infty$-measurable.
(b) If $x,x_n\in D_c$ are such that $\pi (x_n)\to \pi(x)$, then
$x_n\to x$ in the Skorokhod topology.
\end{lemma}
Proof. (a) In the notation of Section \ref{wcR},
$$\pi^{-1}(\pi^{-1}_kH)=\pi^{-1}_{t_1,\ldots,t_k}H\in\mathcal D\qquad\mbox{ for }H\in \mathcal R^k,$$
and the finite-dimensional sets $\pi^{-1}_kH$ generate $\mathcal R^\infty$.
(b) Convergence $\pi (x_n)\to \pi(x)$ implies $x_n(t_i)\to x(t_i)$, which, since the values are integers, means that $x_n(t_i)=x(t_i)$ for $n>n_i$, for all $i=1,2,\ldots$. A function $x\in D_c$ has only finitely many discontinuities, say $0<s_1<\ldots< s_k\le1$. For a given $\epsilon$, choose points $u_i$ and $v_i$ in $T_0$ in such a way that $u_i<s_i\le v_i<u_{i}+\epsilon$ and the intervals $[v_{i-1},u_i], i=1,\ldots,k$ are disjoint, with $v_0=0$. Then for $n$ exceeding some $n_0$, $x_n$ agrees with $x$ over each $[v_{i-1},u_i]$ and has a single jump in each $[u_i,v_i]$. If $\lambda_n\in\Lambda$ carries $s_i$ to the point in $[u_i,v_i]$ where $x_n$ has a jump and is defined elsewhere by linearity, then $\|\lambda_n-1\|\le\epsilon$ and $x_n(\lambda_nt)\equiv x(t)$, implying $d(x_n,x)\le\epsilon$ for $n>n_0$.
\begin{theorem}\label{12.6}
Let $T_0$ be a countable, dense set in $[0,1]$. If $P_n(D_c)=P(D_c)=1$ and $P_{n}\pi^{-1}_{t_1,\ldots,t_k}\Rightarrow P\pi^{-1}_{t_1,\ldots,t_k}$ for all $k$-tuples in $T_0$, then $P_n\Rightarrow P$.
\end{theorem}
Proof. The idea is, in effect, to embed $D_c$ in $\boldsymbol R^\infty$ and apply Theorem \ref{E2.4}. By hypothesis, $P_{n}\pi^{-1}\pi^{-1}_k\Rightarrow P\pi^{-1}\pi^{-1}_k$, but since in $\boldsymbol R^\infty$ weak convergence is the same thing as weak convergence of finite-dimensional distributions, it follows that $P_{n}\pi^{-1}\Rightarrow P\pi^{-1}$ in $\boldsymbol R^\infty$. For $A\subset \boldsymbol D$, define $A^*=\pi^{-1}(\pi A)^-$. If $A\in\mathcal D$, then
\[\limsup_nP_n(A)\le \limsup_nP_n(A^*)=\limsup_nP_n(\pi^{-1}(\pi A)^-)\le P(\pi^{-1}(\pi A)^-)=P(A^*).\]
Therefore, if $F\in\mathcal D$ is closed, then
\[\limsup_nP_n(F)=\limsup_nP_n(F\cap D_c)\le P((F\cap D_c)^*)=P((F\cap D_c)^*\cap D_c).\]
It remains to show that if $F\in\mathcal D$ is closed, then $(F\cap D_c)^*\cap D_c\subset F$.
Take an $x\in(F\cap D_c)^*\cap D_c$. Since $x\in(F\cap D_c)^*$, we have $\pi(x)\in(\pi(F\cap D_c))^-$ and there is a sequence $x_n\in F\cap D_c$ such that $\pi (x_n)\to \pi(x)$. Because $x\in D_c$, the previous lemma gives $x_n\to x$ in the Skorokhod topology. Since $x_n\in F$ and $F\in\mathcal D$ is closed, we conclude that $x\in F$.
\begin{corollary}\label{E12.3}
Suppose that for each $n$, $\xi_{n1},\ldots,\xi_{nn}$ are i.i.d. indicator random variables with $\mathbb P(\xi_{ni}=1)=\alpha/n$. If $X^n_t=\sum_{i\le nt} \xi_{ni}$, then $X^n\Rightarrow X$ in $\boldsymbol D$, where $X$ is the Poisson process with parameter $\alpha$.
\end{corollary}
Proof. The random process $X^n_t=\sum_{i\le nt} \xi_{ni}$ has independent increments and paths in $D_c$, and the same is true of the Poisson process $X$. By the binomial-to-Poisson limit theorem, the finite-dimensional distributions of $X^n$ converge weakly to those of $X$ with
$$\mathbb P(X_t-X_s=k)={\alpha^k(t-s)^k\over k!}e^{-\alpha(t-s)}\qquad\mbox{ for }0\le s<t\le1,$$
so Theorem \ref{12.6} yields $X^n\Rightarrow X$.
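The finite-dimensional convergence rests on the classical binomial-to-Poisson limit, which is easy to check numerically (a sketch; \texttt{binom\_pmf}, \texttt{poisson\_pmf}, and the truncation at $k=30$ are our own choices, the truncated tails being negligible here):

```python
import math

# X^n_1 ~ Binomial(n, alpha/n) should approach Poisson(alpha) as n grows.

def binom_pmf(n, p, k):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    return lam**k * math.exp(-lam) / math.factorial(k)

def l1_dist(n, alpha, kmax=30):
    # truncate at kmax: for small alpha both tails beyond it are negligible
    return sum(abs(binom_pmf(n, alpha / n, k) - poisson_pmf(alpha, k))
               for k in range(min(n, kmax) + 1))

alpha = 2.0
for n in (10, 100, 1000):
    print(n, l1_dist(n, alpha))  # shrinks roughly like alpha^2 / n
```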
\begin{exercise}
Suppose that $\xi$ is uniformly distributed over $[{1\over3},{2\over3}]$, and consider the random functions
\[X_t=2\cdot1_{\{t\in[\xi,1]\}},\quad X_t^n=1_{\{t\in[\xi-n^{-1},1]\}}+1_{\{t\in[\xi+n^{-1},1]\}}.\]
Show that $X^n\nRightarrow X$, even though $(X^n_{t_1},\ldots,X^n_{t_k})\Rightarrow (X_{t_1},\ldots,X_{t_k})$ for all $(t_1,\ldots,t_k)$. Why does Theorem \ref{12.6} not apply?
\end{exercise}
\begin{lemma}\label{13.0}
Let $P$ be a probability measure on $(\boldsymbol D,\mathcal D)$. Define $T_P\subset[0,1]$ as the collection of $t$ such that the projection $\pi_t$ is $P$-almost surely continuous. The set $T_P$ contains 0 and 1, and its complement in $[0,1]$ is at most countable. For $t\in(0,1)$, $t\in T_P$ is equivalent to $P\{x:x(t)\neq x(t-)\}=0$.
\end{lemma}
Proof. Recall Theorem \ref{12.5} (a) and put $J_t=\{x:x(t)\neq x(t-)\}$ for a $t\in(0,1)$. We have to show that $P(J_t)>0$ is possible for at most countably many $t$. Let $J_t(\epsilon)=\{x:|x(t)- x(t-)|>\epsilon\}$. For fixed, positive $\epsilon$ and $\delta$, there can be at most finitely many $t$ for which $P(J_t(\epsilon))\ge \delta$. Indeed, if $P(J_{t_n}(\epsilon))\ge\delta$ for infinitely many distinct $t_n$, then
\[P(J_{t_n}(\epsilon)\mbox{ i.o.})=P(\limsup_{n}J_{t_n}(\epsilon))\ge \limsup_{n\to\infty}P(J_{t_n}(\epsilon))\ge \delta,\]
contradicting the fact that for a single $x\in \boldsymbol D$ the jumps can exceed $\epsilon$ at only finitely many points, see Lemma \ref{L12.1}. Thus
$P(J_t(\epsilon))>0$ is possible for at most countably many $t$. The desired result follows from
\[\{t:P(J_t)>0\}=\bigcup_n\{t:P(J_t(n^{-1}))>0\},\]
which in turn is a consequence of $P(J_t(\epsilon))\uparrow P(J_t)$ as $\epsilon\downarrow 0$.
\begin{theorem}\label{13.1}
Let $P_n, P$ be probability measures on $(\boldsymbol D,\mathcal D)$. If the sequence $(P_n)$ is tight and $P_{n}\pi^{-1}_{t_1,\ldots,t_k}\Rightarrow P\pi^{-1}_{t_1,\ldots,t_k}$ holds whenever $t_1,\ldots,t_k$ lie in $T_P$, then $P_n\Rightarrow P$.
\end{theorem}
Proof. We will show that if a subsequence $(P_{n'})\subset (P_n)$ converges weakly to some $Q$, then $Q=P$. Indeed, if $t_1,\ldots,t_k$ lie in $T_Q$, then
$\pi_{t_1,\ldots,t_k}$ is continuous on a set of $Q$-measure 1, and therefore, $P_{n'}\Rightarrow Q$ implies by the mapping theorem that $P_{n'}\pi^{-1}_{t_1,\ldots,t_k}\Rightarrow Q\pi^{-1}_{t_1,\ldots,t_k}$. On the other hand, if $t_1,\ldots,t_k$ lie in $T_P$, then $P_{n'}\pi^{-1}_{t_1,\ldots,t_k}\Rightarrow P\pi^{-1}_{t_1,\ldots,t_k}$ by the assumption. Therefore, if $t_1,\ldots,t_k$ lie in $T_Q\cap T_P$, then $Q\pi^{-1}_{t_1,\ldots,t_k}=P\pi^{-1}_{t_1,\ldots,t_k}$. It remains to see that $\mathcal D_f(T_Q\cap T_P)$ is a separating class by applying Lemma \ref{13.0} and Theorem \ref{12.5}.
\subsection{Tightness criteria in $\boldsymbol D$}
\begin{theorem}\label{13.2}
Let $P_n$ be probability measures on $(\boldsymbol D,\mathcal D)$. The sequence $(P_n)$ is tight if and only if the following two conditions hold:
\begin{align*}
(i)& \quad \lim_{a\to\infty}\limsup_{n\to\infty}P_n(x: \|x\|\ge a)=0, \\
(ii)& \quad \lim_{\delta\to0}\limsup_{n\to\infty}P_n(x: w'_x(\delta)\ge\epsilon)=0,\mbox{ for each positive }\epsilon.
\end{align*}
Condition (ii) is equivalent to
\begin{align*}
(ii')& \quad \forall\epsilon, \eta>0; \exists\delta, n_0>0: \quad
P_n(x: w'_x(\delta)\ge\epsilon)\le\eta, \mbox{ for }n>n_0.
\end{align*}
\end{theorem}
Proof. This theorem is proved similarly to Theorem \ref{7.3}, using Theorem \ref{12.3}. The equivalence of (ii) and (ii$'$) is due to the monotonicity of $w'_x(\delta)$ in $\delta$.
\begin{theorem}\label{13.2'}
Let $P_n$ be probability measures on $(\boldsymbol D,\mathcal D)$. The sequence $(P_n)$ is tight if and only if the following two conditions hold:
\begin{align*}
(i)& \quad \lim_{a\to\infty}\limsup_{n\to\infty}P_n(x: \|x\|\ge a)=0, \\
(ii)& \quad \forall\epsilon, \eta>0; \exists\delta, n_0>0: \quad
\left\{
\begin{array}{l}
P_n(x: w''_x(\delta)\ge\epsilon)\le\eta,\\
P_n(x: |x(\delta)-x(0)|\ge\epsilon)\le\eta,\\
P_n(x: |x(1-)-x(1-\delta)|\ge\epsilon)\le\eta,
\end{array}
\right.\quad \mbox{ for }n>n_0.
\end{align*}
\end{theorem}
Proof. This theorem follows from Theorem \ref{13.2} with $(i)$ and $(ii')$ using Lemma \ref{(12.32)}. (Recall how Theorem \ref{12.4} was obtained from Theorem \ref{12.3} using Lemma \ref{(12.32)}.)
\begin{lemma}\label{C13}
Consider Theorems \ref{13.2} and \ref{13.2'}. Under condition (ii), condition (i) is equivalent to the following weaker version:
(i$\,'$) for each $t$ in a set $T$ that is dense in $[0,1]$ and contains 1,
\[\lim_{a\to\infty}\limsup_{n\to\infty}P_n(x: |x(t)|\ge a)=0.\]
\end{lemma}
Proof. The implication (i) $\Rightarrow$ (i$'$) is trivial. Assume (ii) of Theorem \ref{13.2} and (i$'$), and fix $\eta>0$. For a given $\delta\in(0,1)$ choose points $0<s_1<\ldots<s_k=1$ from $T$ such that $\max\{s_1,s_2-s_{1},\ldots,s_k-s_{k-1}\}<\delta$. By hypothesis (i$'$), there exists an $a$ such that
\[\qquad \qquad \qquad \qquad P_n(x: \max_{j}|x(s_j)|\ge a)<\eta,\quad n>n_0\qquad \qquad \qquad \qquad (*)\]
For a given $x$, take a $\delta$-sparse set $(t_0,\ldots,t_v)$ such that $w_x[t_{i-1},t_i)<w'_x(\delta)+1$ for all $i$.
Since each $[t_{i-1},t_i)$ contains an $s_j$, we have
$$\|x\|\le \max_{j}|x(s_j)|+w'_x(\delta)+1.$$
Using (ii$'$) of Theorem \ref{13.2} and $(*)$, we get $P_n(x: \|x\|\ge a+2)<2\eta$, implying (i).
\subsection{A key condition on 3-dimensional distributions}
The following condition plays an important role.
\begin{definition}\label{kc}
Given $\alpha>1$ and $\beta\ge0$, we will write $P\in H_{\alpha,\beta}$ for a probability measure $P$ on $\boldsymbol D$, if there exists a nondecreasing continuous $H:[0,1]\to\boldsymbol R$ such that for all $\epsilon>0$ and all $0\le t_1\le t_2\le t_3\le1$,
\[P\pi^{-1}_{t_1,t_2,t_3}\{(z_1,z_2,z_3): |z_2-z_1|\ge\epsilon,|z_3-z_2|\ge\epsilon\}\le \epsilon^{-2\beta}(H(t_3)-H(t_1))^{\alpha}.\]
For a random element $X$ on $\boldsymbol D$ with probability distribution $P$, this condition $P\in H_{\alpha,\beta}$ means that for all $\epsilon>0$ and all $0\le r\le s\le t\le1$
\[\mathbb P\Big(|X_s-X_r|\ge\epsilon, |X_t-X_s|\ge\epsilon\Big)\le \epsilon^{-2\beta}(H(t)-H(r))^{\alpha}.\]
\end{definition}
\begin{lemma}\label{10.3}
Let a random element $X$ on $\boldsymbol D$ have a probability distribution $P\in H_{\alpha,\beta}$. Then there is a constant $K_{\alpha,\beta}$ depending only on $\alpha$ and $\beta$ such that
\[\mathbb P\Big(\sup_{r\le s\le t}\big(|X_s-X_r|\wedge |X_t-X_s|\big)\ge\epsilon\Big)\le {K_{\alpha,\beta}\over\epsilon^{2\beta}}(H(1)-H(0))^{\alpha}.\]
\end{lemma}
Proof.
The stated estimate is obtained in four consecutive steps.
Step 1. Let $T_k=\{i/2^k,0\le i\le 2^k\}$ and
\begin{align*}
A_k&=\max\big(|X_s-X_r|\wedge |X_t-X_s|\mbox{ over the adjacent triplets } r\le s\le t \mbox{ in } T_k\big),\\
B_k&=\max\big(|X_s-X_r|\wedge |X_t-X_s|\mbox{ over } r\le s\le t \mbox{ from } T_k\big).
\end{align*}
We will show that $B_k\le 2(A_1+\ldots+A_k)$. To this end, for each $t\in T_k$ define a $t_n\in T_{k-1}$ by
\[t_n=
\left\{
\begin{array}{ll}
t & \mbox{if } t\in T_{k-1}, \\
t -2^{-k}& \mbox{if } t\notin T_{k-1} \mbox{ and } |X_t-X_{t-2^{-k}}|\le |X_t-X_{t+2^{-k}}|, \\
t +2^{-k}& \mbox{if } t\notin T_{k-1} \mbox{ and } |X_t-X_{t-2^{-k}}|> |X_t-X_{t+2^{-k}}|,
\end{array}
\right.
\]
so that $|X_t-X_{t_n}|\le A_k$. Then for any triplet $r\le s\le t$ from $T_k$,
\begin{align*}
|X_s-X_r|&\le |X_s-X_{s_n}|+|X_{s_n}-X_{r_n}|+|X_{r}-X_{r_n}| \le |X_{s_n}-X_{r_n}|+2A_k,\\
|X_t-X_s|&\le |X_{t_n}-X_{s_n}|+2A_k.
\end{align*}
Since here $r_n\le s_n\le t_n$ lie in $T_{k-1}$, it follows that $|X_s-X_r|\wedge |X_t-X_s|\le B_{k-1}+2A_k$, and therefore,
$$B_k\le B_{k-1}+2A_k\le 2(A_1+\ldots+A_k),\quad k\ge1.$$
Step 2. Consider the special case where $H(t)\equiv t$. Using the right continuity of the paths, we get from step 1 that
\[\sup_{r\le s\le t}\big(|X_s-X_r|\wedge |X_t-X_s|\big)\le 2\sum_{k=1}^\infty A_k.\]
This implies that for any $\theta\in(0,1)$,
\begin{align*}
\mathbb P\Big(\sup_{r\le s\le t}\big(|X_s-X_r|&\wedge |X_t-X_s|\big)\ge2\epsilon\Big)\le \mathbb P\Big(\sum_{k=1}^\infty A_k\ge\epsilon\Big)\\
&\le \mathbb P\Big(\sum_{k=1}^\infty A_k\ge\epsilon(1-\theta)\sum_{k=1}^\infty\theta^k\Big)\le \sum_{k=1}^\infty \mathbb P\Big(A_k\ge\epsilon(1-\theta)\theta^k\Big)\\
&\le \sum_{k=1}^\infty\sum_{i=1}^{2^k-1}\mathbb P\Big(|X_{i/2^k}-X_{(i-1)/2^k}|\wedge|X_{(i+1)/2^k}-X_{i/2^k}|\ge\epsilon(1-\theta)\theta^k\Big).
\end{align*}
Applying the key condition with $H(t)\equiv t$ and choosing $\theta\in(0,1)$ satisfying $\theta^{2\beta}>2^{1-\alpha}$, we derive from the previous relation that the stated estimate holds in this special case:
\begin{align*}
\mathbb P\Big(\sup_{r\le s\le t}\big(|X_s-X_r|\wedge |X_t-X_s|\big)\ge2\epsilon\Big)
&\le \sum_{k=1}^\infty2^k{2^{(1-k)\alpha}\over (\epsilon(1-\theta)\theta^k)^{2\beta}}\\
&= {2^\alpha\over\epsilon^{2\beta}(1-\theta)^{2\beta}} \sum_{k=1}^\infty(\theta^{-2\beta}2^{1-\alpha})^k.
\end{align*}
Step 3. For a strictly increasing $H(t)$ (we may assume $H(0)=0$, replacing $H$ by $H-H(0)$ if necessary), take $a$ so that $a^{2\beta}H(1)^\alpha=1$, and define a new process $Y_t$ by $Y_t=aX_{b(t)}$, where the time change $b(t)$ is such that $H(b(t))=tH(1)$. Since
\begin{align*}
\mathbb P\Big(|Y_s-Y_r|\ge\epsilon; |Y_t-Y_s|\ge\epsilon\Big)&=\mathbb P\Big(|X_{b(s)}-X_{b(r)}|\ge a^{-1}\epsilon; |X_{b(t)}-X_{b(s)}|\ge a^{-1}\epsilon\Big)\\
&\le \epsilon^{-2\beta}(t-r)^{\alpha},
\end{align*}
we can apply the result of step 2 to the new process and obtain the statement of the lemma under the assumptions of step 3.
Step 4. If $H(t)$ is not strictly increasing, put $H_v(t)=H(t)+v t$ for an arbitrary small positive $v$. We have
\[\mathbb P\Big(|X_s-X_r|\ge\epsilon; |X_t-X_s|\ge\epsilon\Big)\le \epsilon^{-2\beta}(H_v(t)-H_v(r))^{\alpha},\]
and according to step 3
\[\mathbb P\Big(\sup_{r\le s\le t}\big(|X_s-X_r|\wedge |X_t-X_s|\big)\ge\epsilon\Big)\le {K_{\alpha,\beta}\over\epsilon^{2\beta}}(H(1)+v-H(0))^{\alpha}.\]
It remains to let $v$ go to 0. Lemma \ref{10.3} is proved.
\begin{lemma}\label{p143}
If $P\in H_{\alpha,\beta}$, then given a positive $\epsilon$,
\[P(x: w''_x(\delta)\ge\epsilon)\le {2K_{\alpha,\beta}\over\epsilon^{2\beta}}(H(1)-H(0))(w_H(2\delta))^{\alpha-1},\]
so that $P(x: w''_x(\delta)\ge\epsilon)\to0$ as $\delta\to0$.
\end{lemma}
Proof. Take $t_i=i\delta$ for $0\le i\le \lfloor1/\delta\rfloor$ and $t_{\lceil1/\delta\rceil}=1$. If $|t-r|\le\delta$, then $r$ and $t$ lie in the same $[t_{i-1},t_{i+1}]$ for some $1\le i\le\lceil1/\delta\rceil-1$. According to Lemma \ref{10.3}, for $X$ with distribution $P$,
\[\mathbb P\Big(\sup_{t_{i-1}\le r\le s\le t\le t_{i+1}}\big(|X_s-X_r|\wedge |X_t-X_s|\big)\ge\epsilon\Big)\le {K_{\alpha,\beta}\over\epsilon^{2\beta}}(H(t_{i+1})-H(t_{i-1}))^{\alpha},\]
and due to monotonicity of $H$,
\begin{align*}
\mathbb P\Big(w''(X,\delta)\ge\epsilon\Big)&\le \sum_{i=1}^{\lceil1/\delta\rceil-1}{K_{\alpha,\beta}\over\epsilon^{2\beta}}(H(t_{i+1})-H(t_{i-1}))^{\alpha}\\
&\le {K_{\alpha,\beta}\over\epsilon^{2\beta}}\Big(\sup_{0\le t\le 1-2\delta}(H(t+2\delta)-H(t))^{\alpha-1}\Big)2(H(1)-H(0))\\
&={2K_{\alpha,\beta}\over\epsilon^{2\beta}}(H(1)-H(0))(w_H(2\delta))^{\alpha-1}.
\end{align*}
It remains to recall that the modulus of continuity $w_H(2\delta)$ of the uniformly continuous function $H$ converges to 0 as $\delta\to0$.
\subsection{A criterion for existence}
\begin{theorem}\label{13.6}There exists in $\boldsymbol D$ a random element with finite dimensional distributions $\mu_{t_1,\ldots,t_k}$ provided the following three conditions:
(i) the finite dimensional distributions $\mu_{t_1,\ldots,t_k}$ are consistent, see Definition \ref{Kcon},
(ii) there exist $\alpha>1$, $\beta\ge0$, and a nondecreasing continuous $H:[0,1]\to\boldsymbol R$ such that for all $\epsilon>0$ and all $0\le t_1\le t_2\le t_3\le1$,
\[\mu_{t_1,t_2,t_3}\{(z_1,z_2,z_3): |z_2-z_1|\ge\epsilon,|z_3-z_2|\ge\epsilon\}\le \epsilon^{-2\beta}(H(t_3)-H(t_1))^{\alpha},\]
(iii) $\mu_{t,t+\delta}\{(z_1,z_2): |z_2-z_1|\ge\epsilon\}\to0$ as $\delta\downarrow0$ for each $t\in[0,1)$.
\end{theorem}
Proof. The main idea, as in the proof of Theorem \ref{7.1} (a), is to construct a sequence $(X^n)$ of random elements in $\boldsymbol D$ such that the corresponding sequence of distributions $(P_n)$ is tight and has the desired limit finite dimensional distributions $\mu_{t_1,\ldots,t_k}$.
Let the vector $(X_{n,0},\ldots,X_{n,2^n})$ have distribution $\mu_{t_{0},\ldots,t_{2^n}}$, where $t_i\equiv t^n_i=i2^{-n}$, and define
\[
X^n_t=\left\{
\begin{array}{ll}
X_{n,i} & \mbox{for } t\in[i2^{-n},(i+1)2^{-n}),\quad i=0,\ldots,2^n-1, \\
X_{n,2^n} & \mbox{for } t=1.
\end{array}
\right.
\]
The rest of the proof uses Theorem \ref{13.2'} and is divided into four steps.
Step 1. For all $\epsilon>0$ and $r,s,t\in T_n=\{t_{0},\ldots,t_{2^n}\}$ we have by (ii),
\[\mathbb P\Big(|X_s^n-X_r^n|\ge\epsilon, |X_t^n-X_s^n|\ge\epsilon\Big)\le \epsilon^{-2\beta}(H(t)-H(r))^{\alpha}.\]
It follows that in general for $0\le r \le s\le t\le 1$
\[\mathbb P\Big(|X_s^n-X_r^n|\ge\epsilon, |X_t^n-X_s^n|\ge\epsilon\Big)\le \epsilon^{-2\beta}(H(t)-H(r-2^{-n}))^{\alpha},\]
where $H(t)=H(0)$ for $t<0$. Slightly modifying Lemma \ref{p143}, we obtain that for a given positive $\epsilon$ there is a constant $K_{\alpha,\beta,\epsilon}$ such that
\[P_n(x: w''_x(\delta)\ge\epsilon)\le K_{\alpha,\beta,\epsilon}(H(1)-H(0))(w_H(3\delta))^{\alpha-1},\quad \mbox{for }\delta\le 2^{-n}.\]
This gives the first part of (ii) in Theorem \ref{13.2'}.
Step 2. If $2^{-k}\le\delta$, then
\[\|X^n\|\le \max_{t\in T_k}|X^n_t|+w''(X^n,\delta).\]
Since the distributions of the first term on the right all coincide for $n\ge k$, it follows by step 1 that condition (i) in Theorem \ref{13.2'} is satisfied.
Step 3. To take care of the second and third parts of (ii) in Theorem \ref{13.2'}, we fix some $\delta_0\in(0,1/2)$, and temporarily assume that
for $\delta\in(0,\delta_0)$,
\[\qquad \qquad \mu_{0,\delta}\{(z_1,z_2):z_1=z_2\}=1,\qquad \mu_{1-\delta,1}\{(z_1,z_2):z_1=z_2\}=1.\qquad \qquad \qquad (*)\]
In this special case, the second and third parts of (ii) in Theorem \ref{13.2'} hold and we conclude that
the sequence of distributions $P_n$ of $X_n$ is tight.
By Prokhorov's theorem, $(X^n)$ has a subsequence converging in distribution to a random element $X$ of $\boldsymbol D$ with some distribution $P$. We want to show that $P\pi^{-1}_{t_1,\ldots,t_k}=\mu_{t_1,\ldots,t_k}$. Because of the consistency hypothesis, this holds for dyadic rational $t_i\in \cup_nT_n$. The general case is obtained using the following facts:
\begin{align*}
P_n\pi^{-1}_{t_1,\ldots,t_k}&\Rightarrow P\pi^{-1}_{t_1,\ldots,t_k},\\
P_n\pi^{-1}_{t_1,\ldots,t_k}&=\mu_{t_{n1},\ldots,t_{nk}},\quad \mbox{ for some }t_{ni}\in T_n,\mbox{ provided }k\le 2^n,\\
\mu_{t_{n1},\ldots,t_{nk}}&\Rightarrow \mu_{t_1,\ldots,t_k}.
\end{align*}
The last fact is a consequence of (iii). Indeed, by Kolmogorov's extension theorem, there exists a stochastic process $Z$ with vectors $(Z_{t_{1}},\ldots,Z_{t_{k}})$ having distributions $\mu_{t_1,\ldots,t_k}$. Then by (iii), $Z_{t+\delta}\stackrel{\rm P}{\to}Z_t$ as $\delta\downarrow0$. Using Exercise \ref{inP} we derive
$(Z_{t_{n1}},\ldots,Z_{t_{nk}})\stackrel{\rm P}{\to}(Z_{t_{1}},\ldots,Z_{t_{k}})$ implying $\mu_{t_{n1},\ldots,t_{nk}}\Rightarrow \mu_{t_1,\ldots,t_k}$.
Step 4. It remains to remove the restriction $(*)$. To this end take
\[
\lambda t=\left\{
\begin{array}{ll}
0 & \mbox{for } t\le\delta_0, \\
{t-\delta_0\over1-2\delta_0} & \mbox{for } \delta_0<t<1-\delta_0, \\
1 & \mbox{for } t\ge1-\delta_0.
\end{array}
\right.
\]
Define $\nu_{s_1,\ldots,s_k}$ as $\mu_{t_1,\ldots,t_k}$ for $s_i=\lambda t_i$. Then the $\nu_{s_1,\ldots,s_k}$ satisfy the conditions of the theorem with a new $H$, as well as $(*)$, so that there is a random element $Y$ of $\boldsymbol D$ with these finite-dimensional distributions. Finally,
setting $X_t=Y_{\delta_0+t(1-2\delta_0)}$ we get a process $X$ with the required finite dimensional distributions $P\pi^{-1}_{t_1,\ldots,t_k}=\mu_{t_1,\ldots,t_k}$.
\begin{example}\label{E13.1}
Construction of a L\'evy process. Let $\nu_t$ be a measure on the line for which $\nu_t(\boldsymbol R)=H(t)$ is nondecreasing and continuous, $t\in[0,1]$. Suppose for
$s\le t$, $\nu_s(A)\le\nu_t(A)$ for all $A\in\mathcal R$ so that $\nu_t-\nu_s$ is a measure with total mass $H(t)-H(s)$. Then there is an infinitely divisible distribution having mean 0, variance $H(t)-H(s)$, and characteristic function
\[\phi_{s,t}(u)=\exp \int_{-\infty}^\infty{e^{iuz}-1-iuz\over z^2}(\nu_t-\nu_s)(dz).\]
\end{example}
We can use Theorem \ref{13.6} to construct a random element $X$ of $\boldsymbol D$ with $X_0=0$, for which the increments are independent and
$$\mathbb E(e^{iu_1X_{t_1}}e^{iu_2(X_{t_2}-X_{t_1})}\cdots e^{iu_k(X_{t_k}-X_{t_{k-1}})})=\phi_{0,t_1}(u_1)\phi_{t_1,t_2}(u_2)\cdots \phi_{t_{k-1},t_k}(u_k).$$
Indeed, since $\phi_{r,t}(u)=\phi_{r,s}(u)\phi_{s,t}(u)$ for $r\le s\le t$, the implied finite-dimensional distributions $\mu_{t_1,\ldots,t_k}$ are consistent. Further, by Chebyshev's inequality and independence, condition (ii) of Theorem \ref{13.6} is valid with $\alpha=\beta=2$:
\begin{align*}
\mu_{t_1,t_2,t_3}\{(z_1,z_2,z_3): |z_2-z_1|\ge\epsilon,|z_3-z_2|\ge\epsilon\}&\le {H(t_2)-H(t_1)\over\epsilon^{2}}\cdot {H(t_3)-H(t_2)\over\epsilon^{2}}\\&\le \epsilon^{-4}(H(t_3)-H(t_1))^{2}.
\end{align*}
Another application of Chebyshev's inequality gives
\begin{align*}
\mu_{t,t+\delta}\{(z_1,z_2):&|z_2-z_1|\ge\epsilon\}\le {H(t+\delta)-H(t)\over\epsilon^{2}}\to0,\quad \delta\downarrow0.
\end{align*}
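For a concrete instance of this construction, take $\nu_t=t\,\delta_1$ (a unit mass at $z=1$ scaled by $t$, so $H(t)=t$); then $\phi_{s,t}(u)=\exp\{(t-s)(e^{iu}-1-iu)\}$, the characteristic function of a centered Poisson increment. The Python sketch below (a numerical illustration only; the helper name is ours) verifies the semigroup property $\phi_{r,t}=\phi_{r,s}\phi_{s,t}$ and reads off mean $0$ and variance $H(t)-H(s)$ by finite differences at $u=0$:

```python
import cmath

def phi(u, s, t):
    # phi_{s,t}(u) for nu_t = t * (unit mass at z = 1), i.e. H(t) = t
    return cmath.exp((t - s) * (cmath.exp(1j * u) - 1 - 1j * u))

s, t, h = 0.1, 0.8, 1e-4
# semigroup property phi_{r,t} = phi_{r,s} * phi_{s,t}
print(abs(phi(1.3, 0.1, 0.8) - phi(1.3, 0.1, 0.5) * phi(1.3, 0.5, 0.8)))
# mean and variance of the increment via derivatives of phi at u = 0
mean = ((phi(h, s, t) - phi(-h, s, t)) / (2 * h)) / 1j
var = -((phi(h, s, t) - 2 * phi(0, s, t) + phi(-h, s, t)) / h ** 2).real
print(abs(mean), var)  # mean ~ 0, var ~ H(t) - H(s) = 0.7
```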
\section{Weak convergence on $\boldsymbol D$}
Recall that the subset $T_P\subset[0,1]$, introduced in Lemma \ref{13.0}, is the collection of $t$ such that the projection $\pi_t$ is $P$-almost surely continuous.
\subsection{Criteria for weak convergence in $\boldsymbol D$}
\begin{lemma}\label{13.3'}
Let $P$ be a probability measure on $(\boldsymbol D,\mathcal D)$ and $\epsilon>0$. By right continuity of the paths we have $ \lim_{\delta\to0}P(x: |x(\delta)-x(0)|\ge\epsilon)=0$.
\end{lemma}
Proof. Put $A_\delta=\{x: |x(\delta)-x(0)|\ge\epsilon\}$. Let $\delta_n\to0$. It suffices to show that $P(A_{\delta_n})\to0$. To see this observe that right continuity of the paths entails
\[\bigcap_{n\ge1}\bigcup_{k\ge n}A_{\delta_k}=\emptyset,\]
and therefore, $P(A_{\delta_n})\le P(\cup_{k\ge n}A_{\delta_k})\to0$ as $n\to\infty$.
\begin{theorem}\label{13.3}
Let $P_n, P$ be probability measures on $(\boldsymbol D,\mathcal D)$. Suppose $P_{n}\pi^{-1}_{t_1,\ldots,t_k}\Rightarrow P\pi^{-1}_{t_1,\ldots,t_k}$ holds whenever $t_1,\ldots,t_k$ lie in $T_P$. If for every positive $\epsilon$
\begin{align*}
(i)& \quad \lim_{\delta\to0}P(x: |x(1)-x(1-\delta)|\ge \epsilon)=0, \\
(ii)& \quad \lim_{\delta\to0}\limsup_{n\to\infty}P_n(x: w''_x(\delta)\ge\epsilon)=0
\end{align*}
then $P_n\Rightarrow P$.
\end{theorem}
Proof. This result should be compared with Theorem \ref{7.5} dealing with the space $\boldsymbol C$.
Recall Theorem \ref{13.1}. We prove tightness by checking conditions (i$'$) in Lemma \ref{C13} and (ii) in Theorem \ref{13.2'}. For each $t\in T_P$, the weakly convergent sequence $P_n\pi^{-1}_t$ is tight which implies (i$'$) with $T_P$ in the role of $T$.
As to (ii) in Theorem \ref{13.2'} we have to verify only the second and third parts. By hypothesis, $P_{n}\pi^{-1}_{0,\delta}\Rightarrow P\pi^{-1}_{0,\delta}$ so that for $\delta\in T_P$,
\[\limsup_{n\to\infty}P_n(x: |x(\delta)-x(0)|\ge\epsilon)\le P(x: |x(\delta)-x(0)|\ge\epsilon),\]
and the second part follows from Lemma \ref{13.3'}.
Turning to the third part of (ii) in Theorem \ref{13.2'}, an argument symmetric to the last one gives, for $1-\delta\in T_P$,
\[\limsup_{n\to\infty}P_n(x: |x(1)-x(1-\delta)|\ge\epsilon)\le P(x: |x(1)-x(1-\delta)|\ge\epsilon).\]
Now, suppose that\footnote{Here we use an argument suggested by Timo Hirscher.}
\[|x(1-)-x(1-\delta)|\ge\epsilon.\]
Since
\[|x(1-)-x(1-\delta)|\wedge|x(1)-x(1-)|\le w''_x(\delta),\]
we have either $w''_x(\delta)\ge\epsilon$
or $|x(1)-x(1-)|\le w''_x(\delta)$. Moreover, in the latter case, it is either
$|x(1)-x(1-\delta)|\ge\epsilon/2$ or $w''_x(\delta)\ge\epsilon/2$ or both.
The last two observations yield
\begin{align*}
\limsup_{n\to\infty}&P_n(x: |x(1-)-x(1-\delta)|\ge\epsilon)
\\
&\le P(x: |x(1)-x(1-\delta)|\ge\epsilon/2)
+\limsup_{n\to\infty}P_n(x: w''_x(\delta)\ge\epsilon/2),
\end{align*}
and the third part readily follows from conditions (i) and (ii).
\begin{theorem} \label{13.5} For $X^n\Rightarrow X$ on $\boldsymbol D$ it suffices that
(i) $(X^n_{t_1},\ldots,X^n_{t_k})\Rightarrow (X_{t_1},\ldots,X_{t_k})$ for points $t_i\in T_P$, where $P$ is the probability distribution of $X$,
(ii) $X_1-X_{1-\delta}\Rightarrow 0$ as $\delta\to0$,
(iii) there exist $\alpha>1$, $\beta>0 $, and a nondecreasing continuous function $H:[0,1]\to\boldsymbol R$ such that
\[\mathbb E\Big(|X^n_s-X^n_r|^\beta|X^n_t-X^n_s|^\beta\Big)\le (H(t)-H(r))^{\alpha}\quad \mbox{ for }0\le r\le s\le t\le1.\]
\end{theorem}
Proof. By Theorem \ref{13.3}, it is enough to show that
$$\lim_{\delta\to0}\limsup_{n\to\infty}\mathbb P\Big(w''(X^n,\delta)\ge\epsilon\Big)=0.$$
This follows from Lemma \ref{p143}, since by Chebyshev's inequality (iii) implies that $X^n$ has a distribution $P_n\in H_{\alpha,\beta}$.
\subsection{Functional CLT on $\boldsymbol D$}
The identity map $c:\boldsymbol C\to \boldsymbol D$ is continuous and therefore measurable $\mathcal C/\mathcal D$. If $\mathbb W$ is a Wiener measure on $(\boldsymbol C,\mathcal C)$, then $\mathbb Wc^{-1}$ is a Wiener measure on $(\boldsymbol D,\mathcal D)$. We denote this new measure by $\mathbb W$ rather than $\mathbb Wc^{-1}$. Clearly, $\mathbb W(\boldsymbol C)=1$. We also denote by $W$ a random element of $\boldsymbol D$ with distribution $\mathbb W$.
\begin{theorem}\label{14.1}
Let $\xi_1,\xi_2,\ldots$ be iid r.v. defined on $(\Omega,\mathcal F,\mathbb P)$. If $\xi_i$ have zero mean and variance $\sigma^2$ and
$X^n_t={\xi_1+\ldots+\xi_{\lfloor nt\rfloor}\over\sigma\sqrt n}$, then $X^n\Rightarrow W$.
\end{theorem}
Proof. We apply Theorem \ref{13.5}. Following the proof of Theorem \ref{8.2'} (a), one gets the convergence (i) of the finite-dimensional distributions even for the $X^n$ as they are defined here. Condition (ii) follows from the fact that the Wiener process $W_t$ has no jumps. We finish the proof by showing that (iii) holds with $\alpha=\beta=2$ and $H(t)=2t$. Indeed,
$$ \mathbb E\big(|X^n_s-X^n_r|^2|X^n_t-X^n_s|^2\big)=0\quad \mbox{ for }0\le t-r< n^{-1},$$
as either $X^n_s=X^n_r$ or $X^n_t=X^n_s$.
On the other hand, for $t-r\ge n^{-1}$, by independence,
\begin{align*}
\mathbb E\big(|X^n_s-X^n_r|^2|X^n_t-X^n_s|^2\big)&={\lfloor ns\rfloor -\lfloor nr\rfloor\over n}\cdot{\lfloor nt\rfloor -\lfloor ns\rfloor\over n}\\
& \le \Big({\lfloor nt\rfloor -\lfloor nr\rfloor\over n}\Big)^2 \le (2(t-r))^2.
\end{align*}
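A quick Monte Carlo sanity check of Theorem \ref{14.1} (an illustration in Python, not part of the proof; function names are ours): for coin-flip steps, the marginal $X^n_1$ should be approximately standard normal for large $n$.

```python
import random
import statistics

def scaled_walk_endpoint(n, rng):
    # X^n_1 = (xi_1 + ... + xi_n) / (sigma * sqrt(n)) for +-1 steps, sigma = 1
    s = sum(1 if rng.random() < 0.5 else -1 for _ in range(n))
    return s / n ** 0.5

rng = random.Random(0)
sample = [scaled_walk_endpoint(400, rng) for _ in range(2000)]
print(statistics.mean(sample), statistics.pstdev(sample))  # near 0 and 1
```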
\begin{example}\label{p148}
Define $(\xi_n)$ on $([0,1],\mathcal B_{[0,1]},\lambda)$ using the Rademacher functions $\xi_n(\omega)=2\omega_n-1$ in terms of the dyadic (binary) representation $\omega=0.\omega_1\omega_2\ldots$. Then $(\xi_n)$ is a sequence of independent coin tossing outcomes with values $\pm1$. Theorem \ref{14.1} holds with $\sigma=1$:
$\Big({\xi_1+\ldots+\xi_{\lfloor nt\rfloor}\over\sqrt n}\Big)_{t\in[0,1]}\Rightarrow W.$
\end{example}
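The Rademacher functions can be computed exactly from the binary expansion; a small Python sketch (using exact rational arithmetic to avoid floating-point bit errors; the function name is ours):

```python
from fractions import Fraction

def rademacher(n, omega):
    # xi_n(omega) = 2 * omega_n - 1, where omega = 0.omega_1 omega_2 ... in binary
    u = Fraction(omega)
    bit = 0
    for _ in range(n):
        u *= 2
        if u >= 1:
            bit, u = 1, u - 1
        else:
            bit = 0
    return 2 * bit - 1

# omega = 3/8 = 0.011000... in binary, so xi_1, xi_2, xi_3, xi_4 = -1, 1, 1, -1
print([rademacher(k, Fraction(3, 8)) for k in (1, 2, 3, 4)])
```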
\begin{lemma}\label{M21} Consider a probability space $(\Omega,\mathcal F,\mathbb P)$ and let $\mathbb P_0$ be a probability measure absolutely continuous with respect to $\mathbb P$. Let $\mathcal F_0\subset \mathcal F$ be an algebra of events such that for some $A_n\in\sigma(\mathcal F_0)$
\[\mathbb P(A_n|E)\to\alpha,\quad \mbox{ for all }E\in\mathcal F_0 \mbox{ with }\mathbb P(E)>0.\]
Then $\mathbb P_0(A_n)\to\alpha$.
\end{lemma}
Proof. We have $\mathbb P_0(A)=\int_Ag_0(\omega)\mathbb P(d\omega)$, where $g_0=d\mathbb P_0/d\mathbb P$. It suffices to prove that
\[\ \qquad\qquad\qquad \qquad\int_{A_n}g(\omega)\mathbb P(d\omega)\to \alpha\int_{\Omega}g(\omega)\mathbb P(d\omega)\qquad\qquad \qquad\qquad (*)\]
if $g$ is $\mathcal F$-measurable and $\mathbb P$-integrable. We prove $(*)$ in three steps.
Step 1. Write $\mathcal F_1=\sigma(\mathcal F_0)$ and denote by $\mathcal F_2$ the class of events $E$ for which
\[\mathbb P(A_n\cap E)\to\alpha\mathbb P(E).\]
We show that $\mathcal F_1\subset \mathcal F_2$. To be able to apply Theorem \ref{Dyn} we have to show that $\mathcal F_2$ is a $\lambda$-system. Note first that $\Omega\in\mathcal F_2$, by the hypothesis applied to $E=\Omega$, and that $\mathcal F_2$ is closed under complementation, since $\mathbb P(A_n\cap E^c)=\mathbb P(A_n)-\mathbb P(A_n\cap E)\to\alpha-\alpha\mathbb P(E)=\alpha\mathbb P(E^c)$. Turning to countable disjoint unions, suppose for a sequence of disjoint sets $E_i$ we have
\[\mathbb P(A_n\cap E_i)\to\alpha\mathbb P(E_i).\]
Let $E=\cup_iE_i$, then by Lemma \ref{Mtest},
\[\mathbb P(A_n\cap E)=\sum_i\mathbb P(A_n\cap E_i)\to\alpha\sum_i\mathbb P(E_i)=\alpha\mathbb P(E).\]
Step 2. We show that $(*)$ holds for $\mathcal F_1$-measurable functions $g$. Indeed, due to step 1, relation $(*)$ holds if $g$ is the indicator of an $\mathcal F_1$-set, and hence if it is a simple $\mathcal F_1$-measurable function. If $g$ is an $\mathcal F_1$-measurable and $\mathbb P$-integrable function, choose simple $\mathcal F_1$-measurable functions $g_k$ that satisfy $|g_k|\le|g|$ and $g_k\to g$. Now
\[\Big|\int_{A_n}g(\omega)\mathbb P(d\omega)-\alpha\int_{\Omega}g(\omega)\mathbb P(d\omega)\Big|\le\Big|\int_{A_n}g_k(\omega)\mathbb P(d\omega)-\alpha\int_{\Omega}g_k(\omega)\mathbb P(d\omega)\Big|+(1+\alpha)\mathbb E|g-g_k|.\]
Let first $n\to\infty$ and then $k\to\infty$ and apply the dominated convergence theorem.
Step 3. Finally, take $g$ to be an $\mathcal F$-measurable and $\mathbb P$-integrable function. Using the conditional expectation $g_1=\mathbb E(g|\mathcal F_1)$ and step 2, we get
\[\int_{A_n}g(\omega)\mathbb P(d\omega)=\mathbb E(g1_{\{A_n\}})=\mathbb E(g_11_{\{A_n\}})\to \alpha\mathbb E(g_1)=\alpha\int_{\Omega}g(\omega)\mathbb P(d\omega).\]
\begin{theorem}\label{14.2}
Let $\xi_1,\xi_2,\ldots$ be iid r.v. defined on $(\Omega,\mathcal F,\mathbb P)$ having zero mean and variance $\sigma^2$. Put
$X^n_t={\xi_1+\ldots+\xi_{\lfloor nt\rfloor}\over\sigma\sqrt n}$. If $\mathbb P_0$ is a probability measure absolutely continuous with respect to $\mathbb P$, then $X^n\Rightarrow W$ with respect to $\mathbb P_0$.
\end{theorem}
Proof. Step 1. Choose $k_n$ such that $k_n\to\infty$ and $k_n=o(n)$ as $n\to\infty$ and put
\begin{align*}
\bar X^n_t&={1\over\sigma\sqrt n}\sum_ {i=k_n}^{\lfloor nt\rfloor}\xi_i,\\
Y_n&={1\over\sigma\sqrt n}\max_{1\le k< k_n}|\xi_1+\ldots+\xi_k|.
\end{align*}
By Kolmogorov's inequality, for any $a>0$,
$$\mathbb P(Y_n\ge a)\to0, \ \ n\to\infty,$$
and therefore
\begin{align*}
d(X^n,\bar X^n)\le\|X^n-\bar X^n\|=Y_n\Rightarrow0 \quad\mbox{with respect to }\mathbb P.
\end{align*}
Applying Theorem \ref{14.1} and Corollary \ref{3.1} we conclude that $\bar X^n\Rightarrow W$ with respect to $\mathbb P$.
Step 2: show using Lemma \ref{M21}, that $\bar X^n\Rightarrow W$ with respect to $\mathbb P_0$.
If $A\in\mathcal D$ is a $\mathbb W$-continuity set, then $\mathbb P(A_n)\to\alpha$ for $A_n=\{\bar X^n\in A\}$ and $\alpha=\mathbb W(A)$. Let
$\mathcal F_0$ be the algebra of the cylinder sets $\{(\xi_1,\ldots,\xi_k)\in H\}$. If $E\in\mathcal F_0$ with $\mathbb P(E)>0$, then $A_n$ is independent of $E$ for all large $n$, so that $\mathbb P(A_n|E)=\mathbb P(A_n)\to\mathbb W(A)$, and Lemma \ref{M21} gives $\mathbb P_0(\bar X^n\in A)\to\mathbb W(A)$.
Step 3. Since $1_{\{Y_n\ge a\}}\to0$ almost surely with respect to $\mathbb P$, the dominated convergence theorem gives
$$\mathbb P_0(Y_n\ge a)=\int g_0(\omega)1_{\{Y_n\ge a\}} \mathbb P(d\omega)\to0.$$
Arguing as in step 1 we conclude that $d(X^n,\bar X^n)\Rightarrow0$ with respect to $\mathbb P_0$.
Step 4. Applying once again Corollary \ref{3.1} we conclude that $X^n\Rightarrow W$ with respect to $\mathbb P_0$.
\begin{example}\label{p148.} Define $\xi_n$ on $([0,1],\mathcal B_{[0,1]},\lambda_p)$ with
$$\lambda_p(du)=au^{a-1}du,\quad a=-\log_2(1-p),\quad p\in(0,1)$$
again, as in Example \ref{p148}, using the Rademacher functions. If $p={1\over2}$, then $\lambda_p=\lambda$ and we are back to Example \ref{p148}.
With $p\ne{1\over2}$, this corresponds to dependent $p$-coin tossings with
$$\int_0^{2^{-n}}\lambda_p(du)=(1-p)^n$$
being the probability of having $n$ failures in the first $n$ tossings, and
$$\int_{1-2^{-n}}^1\lambda_p(du)=1-(1-p)^{-\log_2(1-2^{-n})}$$
being the probability of having $n$ successes in the first $n$ tossings. By Theorem \ref{14.2}, even in this case
$\Big({\xi_1+\ldots+\xi_{\lfloor nt\rfloor}\over\sqrt n}\Big)_{t\in[0,1]}\Rightarrow W.$
\end{example}
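The identity $\int_0^{2^{-n}}\lambda_p(du)=(1-p)^n$ is easy to confirm numerically, since $(2^{-n})^a=(2^a)^{-n}=(1-p)^n$ for $a=-\log_2(1-p)$; a short Python check:

```python
from math import log2

p = 0.3
a = -log2(1 - p)  # the exponent in lambda_p(du) = a u^(a-1) du
for n in range(1, 8):
    # mass of [0, 2^-n] under lambda_p is (2^-n)^a, which equals (1-p)^n
    assert abs((2.0 ** -n) ** a - (1 - p) ** n) < 1e-12
print("ok")
```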
\subsection{Empirical distribution functions}
\begin{definition}
Let $\xi_1(\omega),\ldots,\xi_n(\omega)$ be iid with a distribution function $F$ over $[0,1]$.
The corresponding empirical process is defined by $Y^n_t=\sqrt n(F_n(t)-F(t))$, where
$$F_n(t)=n^{-1}(1_{\{\xi_1\le t\}}+\ldots+1_{\{\xi_n\le t\}})$$
is the empirical distribution function.
\end{definition}
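A direct Python transcription of these definitions (the function names are ours, and the sample below is an arbitrary toy example):

```python
from math import sqrt

def empirical_df(sample, t):
    # F_n(t) = n^{-1} * #{i : xi_i <= t}
    return sum(x <= t for x in sample) / len(sample)

def empirical_process(sample, t, F=lambda u: u):
    # Y^n_t = sqrt(n) * (F_n(t) - F(t)); the default F is the uniform cdf
    return sqrt(len(sample)) * (empirical_df(sample, t) - F(t))

obs = [0.2, 0.5, 0.9]                # a toy 'observed' sample
print(empirical_df(obs, 0.5))        # 2/3
print(empirical_process(obs, 0.5))   # sqrt(3) * (2/3 - 1/2)
```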
\begin{figure}
\centering
\includegraphics[width=12cm,height=6cm]{emp.pdf}
\caption{An example of the empirical distribution function with $n=10$ for the uniform distribution (left panel) and the corresponding empirical process (right panel).}
\label{empp}
\end{figure}
\begin{lemma}
Let $(Z_1^n,\ldots,Z_r^n)$ have a multinomial distribution Mn$(n,p_1,\ldots,p_r)$. Then the normalized vector
$\Big({Z_1^n-np_1\over\sqrt n},\ldots,{Z_r^n-np_r\over\sqrt n}\Big)$ converges in distribution to a multivariate normal distribution with zero means and a covariance matrix
\[
\bf V=\left(
\begin{array}{ccccc}
p_1(1-p_1) & -p_1p_2 & -p_1p_3&\ldots&-p_1p_r \\
-p_2p_1 &p_2(1-p_2) & -p_2p_3 &\ldots&-p_2p_r \\
-p_3p_1 &-p_3p_2 & p_3(1-p_3) &\ldots&-p_3p_r \\
\ldots&\ldots&\ldots&\ldots&\ldots \\
-p_rp_1 &-p_rp_2 & -p_rp_3 &\ldots&p_r(1-p_r)
\end{array}
\right).
\]
\end{lemma}
Proof. To apply the continuity theorem for multivariate characteristic functions, consider
\[\mathbb E\exp\Big(i\theta_1{Z_1^n-np_1\over\sqrt n}+\ldots+i\theta_r{Z_r^n-np_r\over\sqrt n}\Big)=\Big(\sum_{j=1}^r p_j e^{i \tilde\theta _j/\sqrt n}\Big)^n,\]
where $\tilde\theta _j=\theta _j-(\theta_1p_1+\ldots+\theta_rp_r)$. Similarly to the classical case we have
\[\Big(\sum_{j=1}^r p_j e^{i \tilde\theta _j/\sqrt n}\Big)^n=\Big(1-{1\over 2n}\sum_{j=1}^r p_j \tilde\theta _j^2+o(n^{-1})\Big)^n\to
e^{-{1\over 2}\sum_{j=1}^r p_j \tilde\theta _j^2}=e^{-{1\over 2}(\sum_{j=1}^r p_j \theta _j^2-(\sum_{j=1}^r p_j \theta _j)^2)}.\]
It remains to see that the right hand side equals $e^{-{1\over2}\boldsymbol \theta \mathbf V\boldsymbol \theta ^{\rm t}}$ which follows from the representation
\[
{\bf V}=\left(
\begin{array}{ccc}
p_1 & &0\\
&\ddots& \\
0 &&p_r
\end{array}
\right)-
\left(
\begin{array}{c}
p_1 \\
\vdots \\
p_r
\end{array}
\right)
\Big(p_1,\ldots,p_r\Big).
\]
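The limiting covariance matrix $\mathbf V=\mathrm{diag}(p)-pp^{\rm t}$ is easy to tabulate; a Python sketch (the helper name and the particular $p$ are ours, for illustration):

```python
def clt_cov(p):
    # V = diag(p) - p p^T, the covariance of the limiting normal vector
    r = len(p)
    return [[p[i] * (1 - p[i]) if i == j else -p[i] * p[j]
             for j in range(r)] for i in range(r)]

V = clt_cov([0.2, 0.3, 0.5])
print(V[0][0], V[0][1])                      # 0.16 and -0.06
print([abs(sum(row)) < 1e-12 for row in V])  # every row sums to 0
```

The rows sum to zero because $Z_1^n+\ldots+Z_r^n=n$ forces the limiting normal vector onto a hyperplane, so $\mathbf V$ is singular.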
\begin{theorem}\label{14.3}
If $\xi_1,\xi_2,\ldots$ are iid $[0,1]$-valued r.v. with a distribution function $F$, then the empirical process converges weakly, $Y^n\Rightarrow Y$, to a random element $(Y_t)_{t\in[0,1]}=(W^\circ_{F(t)})_{t\in[0,1]}$, where $W^\circ$ is the standard Brownian bridge. The limit $Y$ is a Gaussian process specified by $\mathbb E(Y_t)=0$ and $\mathbb E(Y_sY_t)=F(s)(1-F(t))$ for $s\le t$.
\end{theorem}
Proof. We start with the uniform case, $F(t)\equiv t$ for $t\in[0,1]$, by showing $Y^n\Rightarrow W^\circ$, where $W^\circ$ is the Brownian bridge with $\mathbb E(W^\circ_sW^\circ_t)=s(1-t)$ for $s\le t$. Let
$$U^n_t=nF_n(t)=1_{\{\xi_1\le t\}}+\ldots+1_{\{\xi_n\le t\}}$$
be the number of $\xi_1,\ldots,\xi_n$ falling inside $[0,t]$. Since the increments of $U^n_t$ are described by multinomial joint distributions, by the previous lemma, the fdd of $Y^n_t={U^n_t-nt\over \sqrt n}$ converge to those of $W^\circ$. Indeed, for $t_1<t_2<\ldots$ and $i<j$,
\[\mathbb E(W^\circ_{t_i}-W^\circ_{t_{i-1}})(W^\circ_{t_j}-W^\circ_{t_{j-1}})=-(t_i-t_{i-1})(t_j-t_{j-1})=-p_ip_j.\]
By Theorem \ref{13.5} it suffices to prove for $t_1\le t\le t_2$ that
\[\mathbb E\Big((Y^n_t-Y^n_{t_1})^2(Y^n_{t_2}-Y^n_t)^2\Big)\le (t-t_1)(t_2-t)\le (t_2-t_1)^2.\]
In terms of $\alpha_i=1_{\{\xi_i\in(t_1,t]\}}+t_1-t$ and $\beta_i=1_{\{\xi_i\in(t,t_2]\}}+t-t_2$ the first inequality is equivalent to
\[\mathbb E\Big(\big(\sum_{i=1}^n\alpha_i\big)^2\big(\sum_{i=1}^n\beta_i\big)^2\Big)\le n^2(t-t_1)(t_2-t).\]
As we show next, this follows from $\mathbb E(\alpha_i)=\mathbb E(\beta_i)=0$, independence $(\alpha_i,\beta_i)\independent(\alpha_j,\beta_j)$ for $i\ne j$, and the following formulas for the second order moments. Let us write $p_1=t-t_1$, and $p_2=t_2-t$. Since
\[
\alpha_i=\left\{
\begin{array}{cl}
1-p_1 & \mbox{w.p. } p_1, \\
-p_1 & \mbox{w.p. } 1- p_1,
\end{array}
\right.\
\beta_i=\left\{
\begin{array}{cl}
1-p_2 & \mbox{w.p. } p_2, \\
-p_2 & \mbox{w.p. } 1- p_2,
\end{array}
\right.\]
and
\[\alpha_i\beta_i=\left\{
\begin{array}{cl}
-(1-p_1)p_2 & \mbox{w.p. } p_1, \\
-p_1(1-p_2) & \mbox{w.p. } p_2,\\
p_1p_2 & \mbox{w.p. } 1-p_1-p_2,
\end{array}
\right.
\]
we have
\begin{align*}
&\mathbb E(\alpha_i^2)=p_1(1-p_1),\quad \mathbb E(\beta_i^2)=p_2(1-p_2),\quad \mathbb E(\alpha_i\beta_i)=-p_1p_2,\\
&\mathbb E(\alpha_i^2\beta_i^2)=p_1(1-p_1)^2p_2^2+p_2p_1^2(1-p_2)^2+(1-p_1-p_2)p_1^2p_2^2
=p_1p_2(p_1+p_2-3p_1p_2)
\end{align*}
and
\begin{align*}
\mathbb E\Big(\big(\sum_{i=1}^n\alpha_i\big)^2\big(\sum_{i=1}^n\beta_i\big)^2\Big)&=n\mathbb E(\alpha_i^2\beta_i^2)
+n(n-1)\mathbb E(\alpha_i^2)\mathbb E(\beta_i^2)+2n(n-1)(\mathbb E(\alpha_i\beta_i))^2\\
&\le n^2 p_1 p_2 (p_1 +p_2-3p_1 p_2+1-p_1-p_2+3p_1 p_2) \\
&= n^2 p_1 p_2
=n^2(t-t_1)(t_2-t).
\end{align*}
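These moment identities and the resulting bound are pure algebra, so they can be checked mechanically. The following sketch (with illustrative values of $p_1$, $p_2$, $n$; all variable names are ours) verifies the second-order moment formulas and the fourth-moment inequality exactly:

```python
# Exact check of the moment formulas for alpha_i, beta_i and the bound
# E((sum alpha)^2 (sum beta)^2) <= n^2 p1 p2, for sample p1 + p2 <= 1.
p1, p2 = 0.3, 0.5

Ea2 = p1 * (1 - p1)               # E(alpha_i^2)
Eb2 = p2 * (1 - p2)               # E(beta_i^2)
Eab = -p1 * p2                    # E(alpha_i beta_i)
Ea2b2 = (p1 * (1 - p1) ** 2 * p2 ** 2 + p2 * p1 ** 2 * (1 - p2) ** 2
         + (1 - p1 - p2) * p1 ** 2 * p2 ** 2)
# the simplified closed form from the text
assert abs(Ea2b2 - p1 * p2 * (p1 + p2 - 3 * p1 * p2)) < 1e-12

n = 100
fourth = n * Ea2b2 + n * (n - 1) * Ea2 * Eb2 + 2 * n * (n - 1) * Eab ** 2
assert fourth <= n ** 2 * p1 * p2 + 1e-9
```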
This proves the theorem for the uniform case. For a general continuous and strictly increasing $F(t)$ we use the transformation $\eta_i=F(\xi_i)$, which turns the $\xi_i$ into uniformly distributed r.v. If $G_n(t)=G_n(t,\omega)$ is the empirical distribution function of $(\eta_1(\omega),\ldots,\eta_n(\omega))$ and $Z^n_t=\sqrt n(G_n(t)-t)$, then $Z^n\Rightarrow W^\circ$.
Observe that
\[G_n(F(t))={1_{\{\eta_1\le F(t)\}}+\ldots+1_{\{\eta_n\le F(t)\}}\over n}={1_{\{F(\xi_1)\le F(t)\}}+\ldots+1_{\{F(\xi_n)\le F(t)\}}\over n}=F_n(t).\]
Define $\psi:\boldsymbol D\to \boldsymbol D$ by $(\psi x)(t)=x(F(t))$. If $x_n\to x$ in the Skorokhod topology and $x\in \boldsymbol C$, then the convergence is uniform, so that $\psi x_n\to \psi x$ uniformly and hence in the Skorokhod topology. By the mapping theorem $\psi(Z^n)\Rightarrow \psi(W^\circ)$. Therefore,
$$Y^n=(Y^n_t)_{t\in[0,1]}=(Z^n_{F(t)})_{t\in[0,1]}=\psi(Z^n)\Rightarrow \psi(W^\circ)=(W^\circ_{F(t)})_{t\in[0,1]}=Y.$$
Finally, for $F(t)$ with jumps and constant parts (see Figure \ref{titr}) the previous argument works provided there exists an iid sequence $\eta_1,\eta_2,\ldots$ of uniformly distributed r.v. as well as iid $\xi'_1,\xi'_2,\ldots$ with distribution function $F$, such that
$$\{\eta_i\le F(t)\}\equiv\{\xi_i'\le t\},\quad t\in [0,1],i\ge1.$$
This is achieved by starting with uniform $\eta_1,\eta_2,\ldots$ on possibly another probability space and putting $\xi_i'=\phi(\eta_i)$, where $\phi(u)=\inf\{t: u\le F(t)\}$ is the quantile function satisfying
\[\{\phi(u)\le t\}=\{u\le F(t)\}.\]
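The defining property of the quantile function is easy to test pointwise. The following sketch uses a toy distribution function of our own (one jump at $t=0.2$, one flat piece on $[0.2,0.6)$; it is not from the text) and checks the set identity $\{\phi(u)\le t\}=\{u\le F(t)\}$:

```python
def F(t):
    """A df on [0,1] with a jump at t = 0.2 and a flat piece on [0.2, 0.6)."""
    if t < 0.2:
        return 0.5 * t
    if t < 0.6:
        return 0.4
    return 0.4 + 1.5 * (t - 0.6)

def phi(u):
    """Quantile function phi(u) = inf{t : u <= F(t)}, written in closed form."""
    if u <= 0.1:
        return 2.0 * u
    if u <= 0.4:
        return 0.2          # the jump at 0.2 absorbs all u in (0.1, 0.4]
    return 0.6 + (u - 0.4) / 1.5

# verify {phi(u) <= t} = {u <= F(t)} on points chosen away from the breakpoints
for u in (0.05, 0.3, 0.7):
    for t in (0.05, 0.15, 0.25, 0.55, 0.65, 0.95):
        assert (phi(u) <= t) == (u <= F(t))
```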
\begin{figure}
\centering
\includegraphics[width=6cm,height=6cm]{brbr.pdf}
\caption{The time transformation $\psi:\boldsymbol D\to \boldsymbol D$. Along the $y$-axis an original path $x(t)$ is depicted, along the $x$-axis the time transformed path $(\psi x)(t)=x(F(t))$ is given. The jumps of $F$ translate into path jumps, the constant parts of $F$ translate into the horizontal pieces of the transformed path.}
\label{titr}
\end{figure}
\begin{example} Kolmogorov-Smirnov test. Let $F$ be continuous. By the mapping theorem we obtain
\[\sqrt n\sup_t|F_n(t)-F(t)|=\sup_t|Y^n_t|\Rightarrow \sup_t|W^\circ_{F(t)}|=\sup_t|W^\circ_{t}|,\]
where the limit distribution is given by Theorem \ref{(9.39)}.
\end{example}
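A numerical illustration of the example (a hedged sketch, not part of the text): for a uniform sample, $\sup_t|F_n(t)-t|$ is attained at the order statistics, so the Kolmogorov-Smirnov statistic can be computed exactly and confirmed against a brute-force grid evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
u = np.sort(rng.uniform(size=n))
i = np.arange(1, n + 1)
# sup_t |F_n(t) - t| is attained at (or just before) the order statistics
d_n = max((i / n - u).max(), (u - (i - 1) / n).max())
stat = np.sqrt(n) * d_n          # sqrt(n) sup_t |F_n(t) - t|

# brute-force check of the order-statistic formula on a fine grid
grid = np.linspace(0.0, 1.0, 20001)
F_n = np.searchsorted(u, grid, side="right") / n
assert abs(d_n - np.abs(F_n - grid).max()) < 1e-3
assert stat > 0
```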
\section{The space $\boldsymbol D_\infty=\boldsymbol D[0,\infty)$}\label{secDi}
\subsection{Two metrics on $\boldsymbol D_\infty$}
To extend the Skorokhod theory to the space $\boldsymbol D_\infty=\boldsymbol D[0,\infty)$ of the cadlag functions on $[0,\infty)$, consider for each $t>0$ the space $\boldsymbol D_t=\boldsymbol D[0,t]$ of the same cadlag functions restricted on $[0,t]$. All definitions for $\boldsymbol D=\boldsymbol D[0,1]$ have obvious analogues for $\boldsymbol D_t$: for example we denote by $d_t(x,y)$ the analogue of $d_1(x,y):=d(x,y)$. Denote $\|x\|_t=\sup_{u\in[0,t]}|x(u)|$.
\begin{example}
One might try to define Skorokhod convergence $x_n\to x$ on $\boldsymbol D_\infty$ by requiring that $d_t(x_n,x)\to 0$ for each finite $t>0$. This does not work: if $x_n(t)=1_{\{t\in[0,1-n^{-1})\}}$, the natural limit would be $x(t)=1_{\{t\in[0,1)\}}$ but $d_1(x_n,x)=1$ for all $n$. The problem here is that $x$ is discontinuous at $t=1$, and the definition must accommodate discontinuities.
\end{example}
\begin{lemma}\label{L16.1}
Let $0<u<t<\infty$. If $d_t(x_n,x)\to 0$ and $x$ is continuous at $u$, then $d_u(x_n,x)\to 0$.
\end{lemma}
Proof. By hypothesis, there are time transforms $\lambda_n\in \Lambda_t$ such that $\|\lambda_n-1\|_t\to0$ and $\|x_n-x\lambda_n\|_t\to0$ as $n\to\infty$. Given $\epsilon$, choose $\delta$ so that $|v-u|\le2\delta$ implies $|x(v)-x(u)|\le\epsilon/2$. Now choose $n_0$ so that, if $n\ge n_0$ and $v\le t$, then $|\lambda_n v-v|\le\delta$ and $|x_n(v)-(x\lambda_n)(v)|\le\epsilon/2$. Then, if $n\ge n_0$ and $|v-u|\le\delta$, we have
\[|\lambda_n v-u|\le|\lambda_n v-v|+|v-u|\le2\delta\]
and hence
\[|x_n(v)-x(u)|\le|x_n(v)-(x\lambda_n)(v)|+|(x\lambda_n)(v)-x(u)|\le\epsilon.\]
Thus
\[\sup_{|v-u|\le\delta}|x(v)-x(u)|\le\epsilon, \quad \sup_{|v-u|\le\delta}|x_n(v)-x(u)|\le\epsilon \quad \mbox{ for }n\ge n_0.\]
Let
\[u_n=
\left\{
\begin{array}{ll}
u-n^{-1} & \mbox{if } \lambda_n u>u, \\
u & \mbox{if } \lambda_n u=u, \\
\lambda_n^{-1}(u-n^{-1}) & \mbox{if } \lambda_n u<u,
\end{array}
\right.
\]
so that $u_n\le u$. Since
\[|u_n -u|\le|\lambda_n^{-1}(u-n^{-1}) -(u-n^{-1})|+n^{-1},\]
we have $u_n\to u$, and since
\[|\lambda_nu_n-u|\le|\lambda_nu_n-u_n|+|u_n -u|,\]
we also have $\lambda_nu_n\to u$.
\begin{figure}
\centering
\includegraphics[width=8cm,height=8cm]{lemma.pdf}
\caption{A detail of the proof of Lemma \ref{L16.1} and Theorem \ref{16.1}.}
\label{lem}
\end{figure}
Define $\mu_n\in\Lambda_u$ so that $\mu_nv=\lambda_nv$ for $v\in[0,u_n]$ and interpolate linearly on $(u_n,u]$ aiming at the diagonal point $\mu_nu=u$, see Figure \ref{lem}. By linearity, $|\mu_nv-v|\le |\lambda_nu_n-u_n|$ for $v\in[u_n,u]$ and we have $\|\mu_n-1\|_u\to0$.
It remains to show that $\|x_n-x\mu_n\|_u\to0$. To do this we choose $n_1$ so that
\[u_n>u-\delta\quad \mbox{ and }\quad \lambda_nu_n>u-\delta\quad \mbox{for }n\ge n_1.\]
If $v\le u_n$, then
\[|x_n(v)-(x\mu_n)(v)|=|x_n(v)-(x\lambda_n)(v)|\le \|x_n-x\lambda_n\|_t.\]
On the other hand, if $v\in[u_n,u]$ and $n\ge n_1$, then $v\in[u-\delta,u]$ and $\mu_nv\in[u-\delta,u]$ implying for $n\ge n_1\vee n_0$
\[|x_n(v)-(x\mu_n)(v)|\le|x_n(v)-x(u)|+|x(u)-(x\mu_n)(v)|\le 2\epsilon.\]
The proof is finished.
\begin{definition}
For any natural $i$, define a map $\psi_i:\boldsymbol D_\infty\to \boldsymbol D_i$ by
$$(\psi_ix)(t)=x(t)1_{\{t\le i-1\}}+(i-t)x(t)1_{\{i-1<t\le i\}}$$
making the transformed function $(\psi_ix)(t)$ continuous at $t=i$.
\end{definition}
\begin{definition}\label{p168}
Two topologically equivalent metrics $d_\infty(x,y)$ and $d_\infty^\circ(x,y)$ are defined on $\boldsymbol D_\infty$ in terms of $d(x,y)$ and $d^\circ(x,y)$ by
\[d_\infty(x,y)=\sum_{i=1}^\infty{1\wedge d_i(\psi_ix,\psi_iy)\over 2^i},\qquad d_\infty^\circ(x,y)=\sum_{i=1}^\infty{1\wedge d_i^\circ(\psi_ix,\psi_iy)\over 2^i}.\]
\end{definition}
The metric properties of $d_\infty(x,y)$ and $d_\infty^\circ(x,y)$ follow from those of $d_i(x,y)$ and $d_i^\circ(x,y)$. In particular, if $d_\infty(x,y)=0$, then $d_i(\psi_ix,\psi_iy)=0$ and $\psi_ix=\psi_iy$ for all $i$, and this implies $x=y$.
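To make the construction concrete, here is a small sketch of our own. The Skorokhod metric $d_i$ is hard to compute exactly, so a grid-approximated uniform distance stands in for it in each summand; this only illustrates the taper $\psi_i$ and the weighted-sum construction, it is not the actual metric $d_\infty$.

```python
import math

def psi(i, x):
    """The map psi_i: taper x linearly to 0 on (i-1, i], forcing continuity at t = i."""
    def y(t):
        assert 0 <= t <= i
        return x(t) if t <= i - 1 else (i - t) * x(t)
    return y

x = lambda t: math.exp(-t) + (1.0 if t >= 0.5 else 0.0)  # a cadlag path on [0, inf)
x2 = psi(2, x)
assert x2(2.0) == 0.0        # the taper vanishes at the right endpoint t = i
assert x2(1.0) == x(1.0)     # x is untouched on [0, i-1]

def d_inf_stand_in(x, y, imax=30, grid=400):
    """Weighted sum over i of (1 and d_i), with d_i replaced by a grid sup-distance."""
    total = 0.0
    for i in range(1, imax + 1):
        xi, yi = psi(i, x), psi(i, y)
        di = max(abs(xi(j * i / grid) - yi(j * i / grid)) for j in range(grid + 1))
        total += min(1.0, di) / 2 ** i
    return total

assert d_inf_stand_in(x, x) == 0.0
```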
\begin{lemma}
The map $\psi_i:\boldsymbol D_\infty\to \boldsymbol D_i$ is continuous.
\end{lemma}
Proof. It follows from the fact that $d_\infty(x_n,x)\to0$ implies $d_i(\psi_ix_n,\psi_ix)\to0$.
\subsection{Characterization of Skorokhod convergence on $\boldsymbol D_\infty$}
Let $\Lambda_\infty$ be the set of continuous, strictly increasing maps $\lambda:[0,\infty)\to[0,\infty)$ such that $\lambda 0=0$ and $\lambda t\to\infty$ as $t\to\infty$. Denote $\|x\|_\infty=\sup_{u\in[0,\infty)}|x(u)|$.
\begin{exercise}
Let $\lambda\in\Lambda_t$ where $t\in(0,\infty]$. Show that the inverse transformation $\lambda^{-1}\in\Lambda_t$ is such that $\|\lambda^{-1}-1\|_t=\|\lambda-1\|_t$.
\end{exercise}
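A quick numerical illustration of the exercise (the time transform is our own choice): take $\lambda t=t^2$ on $[0,1]$, whose inverse is $\lambda^{-1}t=\sqrt t$; both distances to the identity equal $1/4$, attained at $t=1/2$ and $t=1/4$ respectively.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100001)
lam, lam_inv = t ** 2, np.sqrt(t)     # lambda and lambda^{-1}, both in Lambda_1
n_lam = np.abs(lam - t).max()         # ||lambda - 1||_1, attained at t = 1/2
n_inv = np.abs(lam_inv - t).max()     # ||lambda^{-1} - 1||_1, attained at t = 1/4
assert abs(n_lam - 0.25) < 1e-6
assert abs(n_inv - 0.25) < 1e-6
```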
\begin{example}
Consider the sequence $x_n(t)=1_{\{t\ge n\}}$ of elements of $\boldsymbol D_\infty$. Its natural limit is $x\equiv0$ as $\|x_n-x\|_t=0$ for all $n>t>0$. However, $\|x_n\lambda_n-x\|_\infty=\|x_n\lambda_n\|_\infty=1$ for any choice of $\lambda_n\in\Lambda_\infty$.
\end{example}
\begin{theorem}\label{16.1}
Convergence $d_\infty(x_n,x)\to0$ takes place if and only if there is a sequence $\lambda_n\in\Lambda_\infty$ such that
$$\|\lambda_n-1\|_\infty\to0\mbox{ and }\|x_n\lambda_n-x\|_i\to0\mbox{ for each }i.$$
\end{theorem}
Proof. Necessity. Suppose $d_\infty(x_n,x)\to0$. Then $d_i(\psi_ix_n,\psi_ix)\to0$ and there exist $\lambda_n^{(i)}\in\Lambda_i$ such that
\[\epsilon_n^{(i)}=\|\lambda_n^{(i)}-1\|_i\vee\|(\psi_ix_n)\lambda_n^{(i)}-\psi_ix\|_i\to0,\quad n\to\infty\quad \mbox{for each }i.\]
Choose $n_i>1$ such that $n\ge n_i$ implies $\epsilon_n^{(i)}<i^{-1}$. Arrange that $n_i<n_{i+1}$, and let
$$i_1=\ldots=i_{n_1-1}=1,\quad i_{n_1}=\ldots=i_{n_2-1}=2,\quad i_{n_2}=\ldots=i_{n_3-1}=3, \quad\ldots$$
so that $i_n\to\infty$. Define $\lambda_n\in\Lambda_\infty$ by
\[\lambda_nt=
\left\{
\begin{array}{ll}
\lambda_n^{(i_n)}t & \mbox{if } t\le i_n, \\
t & \mbox{if } t> i_n.
\end{array}
\right.
\]
Then
$$\|\lambda_n-1\|_\infty=\|\lambda_n^{(i_n)}-1\|_{i_n}\le \epsilon_n^{(i_n)}<i^{-1}_n\to0.$$
Now fix $i$. If $n$ is large enough, then $i< i_n-1$ and
\begin{align*}
\|x_n\lambda_n-x\|_i&=\|(\psi_{i_n}x_n)\lambda_n-\psi_{i_n}x\|_i\\
&\le \|(\psi_{i_n}x_n)\lambda_n-\psi_{i_n}x\|_{i_n}=\|(\psi_{i_n}x_n)\lambda_n^{(i_n)}-\psi_{i_n}x\|_{i_n}\le \epsilon_n^{(i_n)}<i^{-1}_n\to0.
\end{align*}
Sufficiency. Suppose that there is a sequence $\lambda_n\in\Lambda_\infty$ such that, firstly, $\|\lambda_n-1\|_\infty\to0$, and secondly,
$\|x_n\lambda_n-x\|_i\to0$ for each $i$. Observe that for some $C_i$,
$$\|x\|_i\le C_i \mbox{ and }\|x_n\|_i\le C_i\mbox{ for all }n\mbox{ and }i.$$
Indeed, by the first assumption, for large $n$ we have $\lambda_n(2i)>i$ implying
$\|x_n\|_i\le \|x_n\lambda_n\|_{2i}$, where by the second assumption, $\|x_n\lambda_n\|_{2i}\to\|x\|_{2i}$.
Fix an $i$. It is enough to show that $d_i(\psi_{i}x_n,\psi_{i}x)\to0$. As in the proof of Lemma \ref{L16.1} define
\[u_n=
\left\{
\begin{array}{ll}
i-n^{-1} & \mbox{if } \lambda_n i<i, \\
i & \mbox{if } \lambda_n i=i, \\
\lambda_n^{-1}(i-n^{-1}) & \mbox{if } \lambda_n i>i,
\end{array}
\right.
\]
and $\mu_n\in\Lambda_i$ so that $\mu_nv=\lambda_nv$ for $v\in[0,u_n]$ interpolating linearly on $(u_n,i]$ towards $\mu_ni=i$. As before, $\|\mu_n-1\|_i\to0$ and it suffices to check that
$$\|(\psi_{i}x_n)\mu_n-\psi_{i}x\|_i\to0,\quad n\to\infty.$$
To see that the last relation holds suppose $j:=\lambda_n^{-1}(i-1)\le i-1$ (the other case $j>i-1$ is treated similarly) and observe that
\begin{align*}
\|(\psi_{i}x_n)\lambda_n-\psi_{i}x\|_i&= \Big(\|x_n\lambda_n-x\|_{j}\Big)\vee\Big(\sup_{j< t\le i-1}|(i-\lambda_nt)x_n(\lambda_nt)-x(t)|\Big)\\
&\vee\Big(\sup_{i-1< t\le i}|(i-\lambda_nt)x_n(\lambda_nt)-(i-t)x(t)|\Big)\\
&\le
\Big(\|x_n\lambda_n-x\|_{i}+C_i\sup_{j< t\le i-1}|(i-1-\lambda_nt)|\Big)\\
&\vee\Big(\sup_{i-1< t\le i}|i-\lambda_nt|\cdot|x_n(\lambda_nt)-x(t)|+\sup_{i-1< t\le i}|(\lambda_nt-t)x(t)|\Big)\\
&\le \|x_n\lambda_n-x\|_{i}+C_i\|\lambda_n-1\|_i
\to0.
\end{align*}
It follows that for $t\le u_n$,
\begin{align*}
|(\psi_{i}x_n)(\mu_nt)-(\psi_{i}x)(t)|\le \|(\psi_{i}x_n)\lambda_n-\psi_{i}x\|_i \to0.
\end{align*}
Turning to the case $u_n<t\le i$, given an $\epsilon\in(0,1)$ choose $n_0$ such that for $n>n_0$, $u_n$ and $\mu_nu_n$ both lie in $[i-\epsilon,i]$. Then
\begin{align*}
|(\psi_{i}x_n)(\mu_nt)-(\psi_{i}x)(t)|\le \sup_{u_n< t\le i}|(i-\mu_nt)x_n(\mu_nt)-(i-t)x(t)|\le 2C_i\epsilon.
\end{align*}
\begin{theorem}\label{16.2}
Convergence $d_\infty(x_n,x)\to0$ takes place if and only if $d_t(x_n,x)\to0$ for each continuity point $t$ of $x$.
\end{theorem}
Proof. Necessity. If $d_\infty(x_n,x)\to0$, then $d_i(\psi_{i}x_n,\psi_{i}x)\to0$ for each $i$. Given a continuity point $t$ of $x$, take an integer $i$ for which $t<i-1$. According to Lemma \ref{L16.1}, $d_t(x_n,x)=d_t(\psi_{i}x_n,\psi_{i}x)\to0$.
Sufficiency. Choose continuity points $t_i$ of $x$ in such a way that $t_i\uparrow\infty$ as $i\to\infty$. By hypothesis,
\[d_{t_i}(x_n,x)\to0,\quad n\to\infty,\quad i\ge1.\]
Choose $\lambda_n^{(i)}\in\Lambda_{t_i}$ so that
\[\epsilon_n^{(i)}=\|\lambda_n^{(i)}-1\|_{t_i}\vee\|x_n\lambda_n^{(i)}-x\|_{t_i}\to0,\quad n\to\infty\quad \mbox{for each }i.\]
Using the argument from the first part of the proof of Theorem \ref{16.1}, define integers $i_n$ in such a way that $i_n\to\infty$ and $\epsilon_n^{(i_n)}<i^{-1}_n$. Put
\[\lambda_nt=
\left\{
\begin{array}{ll}
\lambda_n^{(i_n)}t & \mbox{if } t\le t_{i_n}, \\
t & \mbox{if } t> t_{i_n},
\end{array}
\right.
\]
so that $\lambda_n\in\Lambda_\infty$. We have $\|\lambda_n-1\|_\infty\le i^{-1}_n$, and for any given $i$, if $n$ is sufficiently large so that $i<t_{ i_n}$, then
\[\|x_n\lambda_n-x\|_i=\|x_n\lambda_n^{(i_n)}-x\|_{i}\le\|x_n\lambda_n^{(i_n)}-x\|_{t_{i_n}}\le \epsilon_n^{(i_n)}<i^{-1}_n\to0.\]
Applying Theorem \ref{16.1} we get $d_\infty(x_n,x)\to0$.
\begin{exercise}
Show that the mapping $h(x)=\sup_{t\ge0}x(t)$ is not continuous on $\boldsymbol D_\infty$.
\end{exercise}
\subsection{Separability and completeness of $\boldsymbol D_\infty$}
\begin{lemma}\label{M6}
Suppose $(\boldsymbol S_i,\rho_i)$ are metric spaces and consider $\boldsymbol S=\boldsymbol S_1\times \boldsymbol S_2\times\ldots$ together with the metric of coordinate-wise convergence
\[\rho(x,y)=\sum_{i=1}^\infty{1\wedge \rho_i(x_i,y_i)\over 2^i}.\]
If each $\boldsymbol S_i$ is separable, then $\boldsymbol S$ is separable. If each $\boldsymbol S_i$ is complete, then $\boldsymbol S$ is complete.
\end{lemma}
Proof. Separability. For each $i$, let $B_i$ be a countable dense subset in $\boldsymbol S_i$ and $x_i^\circ \in \boldsymbol S_i$ be a fixed point. We will show that the countable set $B=B(x_{1}^\circ,x_{2}^\circ,\ldots)$ defined by
\[B=\{x\in \boldsymbol S: x=(x_1,\ldots,x_k,x_{k+1}^\circ,x_{k+2}^\circ,\ldots), x_1 \in B_1, \ldots, x_k \in B_k, k\in\boldsymbol N\}\]
is dense in $\boldsymbol S$.
Given an $\epsilon$ and a point $y\in \boldsymbol S$, choose $k$ so that $\sum_{i>k}2^{-i}<\epsilon$ and then choose points $x_i \in B_i$, $i\le k$, so that $\rho_i(x_i,y_i)<\epsilon$. With this choice the corresponding point $x\in B$ satisfies $\rho(x,y)<2\epsilon$.
Completeness. Suppose that $x^n=(x^n_1,x^n_2,\ldots)$ are points of $\boldsymbol S$ forming a fundamental sequence. Then each sequence $(x^n_i)$ is fundamental in $\boldsymbol S_i$ and hence $\rho_i(x^n_i,x_i)\to0$ for some $x_i \in \boldsymbol S_i$.
By the M-test, Lemma \ref{Mtest}, $\rho(x^n,x)\to0$, where $x=(x_1,x_2,\ldots)$.
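The coordinate-wise metric is simple enough to code directly. The following sketch (a finite truncation with helper names of our own) checks that perturbing every coordinate of a point by $1/n$ moves it by exactly $1/n$ in $\rho$, since $\sum_{i\ge1}(1/n)/2^i=1/n$:

```python
import math

def rho(x, y, terms=50):
    """Coordinate-wise convergence metric, truncated to finitely many terms."""
    return sum(min(1.0, abs(a - b)) / 2 ** i
               for i, (a, b) in enumerate(zip(x[:terms], y[:terms]), start=1))

x = [math.sin(k) for k in range(100)]
xn = [v + 0.01 for v in x]            # shift every coordinate by 1/n with n = 100
assert abs(rho(xn, x) - 0.01) < 1e-9  # equals 1/n up to the truncation error
assert rho(x, x) == 0.0
```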
\begin{definition}\label{deep}
Consider the product space $\overline {\boldsymbol D}=\boldsymbol D_1\times \boldsymbol D_2\times\ldots$ with the coordinate-wise convergence metric (cf Definition \ref{p168})
\[\rho(\bar x,\bar y)=\sum_{i=1}^\infty{1\wedge d_i^\circ(\bar x_i,\bar y_i)\over 2^i},\qquad \bar x=(\bar x_1,\bar x_2,\ldots),\ \bar y=(\bar y_1,\bar y_2,\ldots)\in \overline {\boldsymbol D}.\]
Put $\psi x=(\psi_1 x,\psi_2 x,\ldots)$ for $ x\in \boldsymbol D_\infty$. Then $\psi x\in\overline {\boldsymbol D}$ and $d^\circ _\infty(x,y)=\rho(\psi x,\psi y)$ so that $\psi$ is an isometry of $(\boldsymbol D_\infty,d^\circ _\infty)$ into $(\overline {\boldsymbol D},\rho)$.
\end{definition}
\begin{lemma}\label{L16.2}
The image $\overline {\boldsymbol D}_\infty:=\psi \boldsymbol D_\infty$ is closed in $\overline {\boldsymbol D}$.
\end{lemma}
Proof. Suppose that $x_n\in \boldsymbol D_\infty$, $\bar x=(\bar x_1,\bar x_2,\ldots)\in \overline {\boldsymbol D}$, and $\rho(\psi x_n,\bar x)\to0$, then $d_i(\psi_i x_n,\bar x_i)\to0$ for each $i$. We must find an $x\in \boldsymbol D_\infty$ such that $\bar x=\psi x$.
Each function $\bar x_i\in \boldsymbol D_i$, $i=1,2,\ldots$, has at most countably many points of discontinuity. Therefore, there is a dense set $T\subset[0,\infty)$ such that for every $t\in T$ and every $i\ge t$, the function $\bar x_i(\cdot)$ is continuous at $t$. Since $d_i(\psi_i x_n,\bar x_i)\to0$, we have $\psi_i x_n(t)\to\bar x_i(t)$ for all $t\in T\cap[0,i]$. This means that for every $t\in T$ the limit $x(t)=\lim_nx_n(t)$ exists, since $\psi_i x_n(t)= x_n(t)$ for $i>t+1$.
Now $\psi_i x(t)= \bar x_i(t)$ on $T\cap[0,i]$. Hence $x(t)= \bar x_i(t)$ on $T\cap[0,i-1]$, so that $x$ can be extended to a cadlag function on each $[0,i-1]$ and then to a cadlag function on $[0,\infty)$. We conclude, using right continuity, that $\psi_i x(t)= \bar x_i(t)$ for all $t\in[0,i]$.
\begin{theorem}\label{16.3} The metric space $(\boldsymbol D_\infty,d_\infty^\circ)$ is separable and complete.
\end{theorem}
Proof. According to Lemma \ref{M6}, the space $\overline {\boldsymbol D}$ is separable and complete, and so are the closed subspace $\overline {\boldsymbol D}_\infty$ and its isometric copy $\boldsymbol D_\infty$.
\subsection{Weak convergence on $\boldsymbol D_\infty$}
\begin{definition}
For any natural $i$ and any $s\ge i$, define a map $\psi_{s,i}:\boldsymbol D_s\to \boldsymbol D_i$ by
$$( \psi_{s,i} x)(t)=x(t)1_{\{t\le i-1\}}+(i-t)x(t)1_{\{i-1<t\le i\}}.
$$
\end{definition}
\begin{exercise}
Show that the mapping $\psi_{s,i}$ is continuous.
\end{exercise}
\begin{lemma}\label{L16.3}
A necessary and sufficient condition for $P_n\Rightarrow P$ on $\boldsymbol D_\infty$ is that $P_n\psi_k^{-1}\Rightarrow P\psi_k^{-1}$ on $\boldsymbol D_k$ for every $k\in\boldsymbol N$.
\end{lemma}
Proof. Since $\psi_k$ is continuous, the necessity follows from the mapping theorem.
For the sufficiency we need the isometry $\psi$ from Definition \ref{deep} and the inverse isometry $\psi^{-1}$:
\[\boldsymbol D_\infty\stackrel{\psi_k}{\to}\boldsymbol D_k,\qquad \boldsymbol D_\infty\stackrel{\psi}{\to}\overline {\boldsymbol D},\qquad \overline {\boldsymbol D}_\infty\stackrel{\psi^{-1}}{\to}\boldsymbol D_\infty.\]
Define two more mappings
$$\overline {\boldsymbol D}\stackrel{\zeta_k}{\to}\boldsymbol D_1\times\ldots \times\boldsymbol D_k,\qquad \boldsymbol D_k\stackrel{\chi_k}{\to}\boldsymbol D_1\times\ldots \times\boldsymbol D_k$$
by
\begin{align*}
\zeta_k(\bar x)=(\bar x_1,\ldots,\bar x_k),\qquad \chi_k(x)=(\psi_{k,1}x,\ldots,\psi_{k,k}x).
\end{align*}
Consider the Borel $\sigma$-algebra $\overline {\mathcal D}$ for $(\overline {\boldsymbol D},\rho)$ and let $\overline {\mathcal D}_f\subset \overline {\mathcal D}$ be the class of sets of the form $\zeta_k^{-1}H$ where $k\ge1$ and $H\in\mathcal D_1\times\ldots \times\mathcal D_k$, see Definition \ref{dpr}. The remainder of the proof is split into four steps.
Step 1. Applying Theorem \ref{2.4} show that $\mathcal A=\overline {\mathcal D}_f$ is a convergence-determining class.
Given a ball $B(\bar x,\epsilon)\subset \overline {\boldsymbol D}$, take $k$ so that $2^{-k}<\epsilon/2$ and consider the cylinder sets
\[A_\eta=\{\bar y\in\overline {\boldsymbol D}: d_i^\circ(\bar x_i,\bar y_i)<\eta,i=1,\ldots,k\}\quad\mbox{for }0<\eta<\epsilon/2.\]
Then $\bar x\in A_\eta^\circ=A_\eta\subset B(\bar x,\epsilon)$ implies $A_\eta\in \mathcal A_{x,\epsilon}$.
It remains to see that the boundaries of $A_\eta$ for different $\eta$ are disjoint.
Step 2. For probability measures $Q_n$ and $Q$ on $\overline {\boldsymbol D}$ show that if $Q_n\zeta_k^{-1}\Rightarrow Q\zeta_k^{-1}$ for every $k$, then $Q_n\Rightarrow Q$.
This follows from the equality $\partial (\zeta_k^{-1}H)=\zeta_k^{-1}\partial H$ for $H\in \mathcal D_1\times\ldots \times\mathcal D_k$, see the proof of Theorem \ref{E2.4}.
Step 3. Assume that $P_n\psi_k^{-1}\Rightarrow P\psi_k^{-1}$ on $\boldsymbol D_k$ for every $k$ and show that $P_n\psi^{-1}\Rightarrow P\psi^{-1}$ on $\overline {\boldsymbol D}$.
The map $\chi_k$ is continuous: if $x_n\to x$ in $\boldsymbol D_k$, then $\psi_{k,i}x_n\to \psi_{k,i}x$ in $\boldsymbol D_i$, $i\le k$. By the mapping theorem,
$P_n\psi_k^{-1}\chi_k^{-1}\Rightarrow P\psi_k^{-1}\chi_k^{-1}$, and since $\chi_k\psi_k=\zeta_k\psi$, we get $P_n\psi^{-1}\zeta_k^{-1}\Rightarrow P\psi^{-1}\zeta_k^{-1}$. Referring to step 2 we conclude $P_n\psi^{-1}\Rightarrow P\psi^{-1}$.
Step 4. Show that $P_n\psi^{-1}\Rightarrow P\psi^{-1}$ on $\overline {\boldsymbol D}$ implies $P_n\Rightarrow P$ on $\boldsymbol D_\infty$.
Extend the isometry $\psi^{-1}$ to a map $\eta:\overline {\boldsymbol D}\to\boldsymbol D_\infty$ by putting $\eta(\bar x)=x_0\in\boldsymbol D_\infty$ for all
$\bar x\notin\overline {\boldsymbol D}_\infty$. Then $\eta$ is continuous when restricted to $\overline {\boldsymbol D}_\infty$, and since $\overline {\boldsymbol D}_\infty$ supports $P\psi^{-1}$ and the $P_n\psi^{-1}$, it follows that
\[P_n=P_n\psi^{-1}\eta^{-1}\Rightarrow P\psi^{-1}\eta^{-1}=P.\]
\begin{definition}
For a probability measure $P$ on $\boldsymbol D_\infty$ define $T_P\subset[0,\infty)$ as the set of $t$ for which $P(J_t)=0$, where $J_t=\{x:x\mbox{ is discontinuous at }t\}$. (See Lemma \ref{13.0}.)
\end{definition}
\begin{exercise}
Let $P$ be the probability measure on $\boldsymbol D_\infty$ generated by the Poisson process with parameter $\lambda$. Show that $T_P=[0,\infty)$.
\end{exercise}
\begin{lemma}\label{p174}
For $x\in \boldsymbol D_\infty$ let $r_t x$ be the restriction of $x$ on $[0,t]$. The function $r_t : \boldsymbol D_\infty\to \boldsymbol D_t$ is measurable. The set of points at which $r_t$ is discontinuous belongs to $J_t$.
\end{lemma}
Proof. Denote $\delta_k=t/k$. Define the function $r^k_tx\in \boldsymbol D_t$ as having the value $x({i\delta_k})$ on $[{i\delta_k},{(i+1)\delta_k})$ for $0\le i\le k-1$ and the value $x(t)$ at $t$. Since the $\pi_{i\delta_k}$ are measurable $\mathcal D_\infty/\mathcal R_1$, it follows as in the proof of Theorem \ref{12.5} (b) that $r^k_t$ is measurable $\mathcal D_\infty/\mathcal D_t$. By Lemma \ref{L12.3},
\[d_t(r^k_tx,r_tx)\le \delta_k\vee w'_t(x,\delta_k)\to0\mbox{ as }k\to\infty\mbox{ for each }x\in \boldsymbol D_\infty.\]
Now, to show that $r_t$ is measurable take a closed $F\in\mathcal D_t$. We have $F=\cap_\epsilon F^{2\epsilon}$, where the intersection is over positive rational $\epsilon$. From
\[r_t^{-1}F\subset \liminf_k(r^k_t)^{-1}F^{\epsilon}=\bigcup_{j=1}^\infty\bigcap_{k=j}^\infty (r^k_t)^{-1}F^{\epsilon}\subset r_t^{-1}F^{2\epsilon}\]
we deduce that $r_t^{-1}F= \cap_\epsilon \liminf_k(r^k_t)^{-1}F^{\epsilon}$ is measurable. Thus $r_t$ is measurable.
To prove the second assertion take an $x\in \boldsymbol D_\infty$ which is continuous at $t$. If $d_\infty(x_n,x)\to0$, then by Theorem \ref{16.2},
$$d_t(r_tx_n,r_tx)=d_t(x_n,x)\to0.$$
In other words, if $x\notin J_t$, then $r_t$ is continuous at $x$.
\begin{theorem}\label{16.7}
A necessary and sufficient condition for $P_n\Rightarrow P$ on $\boldsymbol D_\infty$ is that $P_nr_t^{-1}\Rightarrow Pr_t^{-1}$ for each $t\in T_P$.
\end{theorem}
Proof. If $P_n\Rightarrow P$ on $\boldsymbol D_\infty$, then $P_nr_t^{-1}\Rightarrow Pr_t^{-1}$ for each $t\in T_P$ due to the mapping theorem and Lemma \ref{p174}.
For the reverse implication, it is enough, by Lemma \ref{L16.3}, to show that $P_n\psi_i^{-1}\Rightarrow P\psi_i^{-1}$ on $\boldsymbol D_i$ for every $i$.
Given an $i$ choose a $t\in T_P$ so that $t\ge i$.
Since $\psi_i= \psi_{t,i} \circ r_t$, the mapping theorem gives
\[P_n\psi_i^{-1}=(P_nr_t^{-1}) \psi_{t,i}^{-1}\Rightarrow (Pr_t^{-1}) \psi_{t,i}^{-1}= P\psi_i^{-1}.\]
\begin{exercise}
Let $W^\circ$ be the standard Brownian bridge. For $t\in[0,\infty)$ put $W_t=(1+t)W^\circ_{t\over1+t}$. Show that the random element $W$ of $\boldsymbol D_\infty$ so defined is a Gaussian process with zero means and covariance function $\mathbb E(W_sW_t)=s$ for $0\le s\le t<\infty$. This is a Wiener process $W=(W_t,0\le t<\infty)$. Clearly, $r_t(W)$ is a Wiener process which is a random element of $\boldsymbol D_t$.
\end{exercise}
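The covariance claim in the exercise reduces to the Brownian bridge covariance $\mathbb E(W^\circ_uW^\circ_v)=u(1-v)$ for $u\le v$; the resulting algebra, $(1+s)(1+t)\,u(1-v)=s$ with $u=s/(1+s)$ and $v=t/(1+t)$, can be checked numerically (a sketch with sample values of our choosing):

```python
# For s <= t we have u = s/(1+s) <= v = t/(1+t), so the bridge covariance
# u(1-v) scaled by (1+s)(1+t) must collapse to s.
for s, t in [(0.0, 1.0), (0.3, 0.7), (1.0, 4.0), (2.5, 2.5)]:
    u, v = s / (1 + s), t / (1 + t)
    cov = (1 + s) * (1 + t) * u * (1 - v)   # E(W_s W_t)
    assert abs(cov - s) < 1e-12
```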
\begin{corollary}
Let $\xi_1,\xi_2,\ldots$ be iid r.v. defined on $(\Omega,\mathcal F,\mathbb P)$. If $\xi_i$ have zero mean and variance $\sigma^2$ and $X^n_t={\xi_1+\ldots+\xi_{\lfloor nt\rfloor}\over\sigma\sqrt n}$, then $X^n\Rightarrow W$ on $\boldsymbol D_\infty$.
\end{corollary}
Proof. By Theorem \ref{14.1}, $X^n\Rightarrow W$ on $\boldsymbol D_1$. The same proof gives $X^n\Rightarrow W$ on $\boldsymbol D_t$ for each $t\in [0,\infty)$. In other words, $r_t(X^n)\Rightarrow r_t(W)$ for each $t\in [0,\infty)$, and it remains to apply Theorem \ref{16.7}.
\begin{corollary}
Suppose for each $n$, $\xi_{n1},\ldots,\xi_{nn}$ are iid indicator r.v. with $\mathbb P(\xi_{ni}=1)=\alpha/n$. If $X^n_t=\sum_{i\le nt} \xi_{ni}$, then $X^n\Rightarrow X$ on $\boldsymbol D_\infty$, where $X$ is the Poisson process with parameter $\alpha$.
\end{corollary}
Proof. Combine Corollary \ref{E12.3} and Theorem \ref{16.7}.
\subsection{Metric space $C[0,1]$}
\begin{exercise}
The metric space $C$ is separable.
\end{exercise}
\begin{exercise}
The metric space $C$ is complete.
\end{exercise}
\end{document}
https://arxiv.org/abs/1009.2970
Cyclic polygons in classical geometry

Abstract: Formulas about the side lengths, diagonal lengths or radius of the circumcircle of a cyclic polygon in Euclidean geometry, hyperbolic geometry or spherical geometry can be unified.

\section{Introduction}
In Euclidean geometry, hyperbolic geometry or spherical geometry, a cyclic polygon is a polygon whose vertices lie on a common circle. In Euclidean geometry, the side lengths and diagonal lengths of a cyclic polygon satisfy certain polynomial relations; Ptolemy's theorem about a cyclic quadrilateral and Fuhrmann's theorem about a cyclic hexagon are examples. Both theorems also hold in hyperbolic geometry; see, for example, \cite{S}. It is also observed in \cite{S} that the hyperbolic formulas are easily obtained from the Euclidean ones by replacing each half length $l/2$ by $\sinh\frac l2$. In this paper we show that this is a general principle for translating a result in Euclidean geometry into a result in hyperbolic geometry. Likewise, we obtain formulas in spherical geometry by replacing $l/2$ by $\sin\frac l2$.
In Euclidean geometry, the radius of the circumcircle of a cyclic polygon can be calculated from the side lengths. For more information about the radius of the circumcircle of a cyclic polygon, see, for example, \cite{MRR,FP,P,V}. As a corollary of our main result, the formulas for the radius of the circumcircle of a cyclic polygon in Euclidean geometry, hyperbolic geometry or spherical geometry can be unified.
In this paper, we do not prove any new theorems about cyclic polygons in Euclidean geometry. Rather, we show that once there is a formula relating the side lengths, diagonal lengths and circumradius of a cyclic polygon in Euclidean geometry, an essentially identical formula holds in hyperbolic and spherical geometry. In other words, we provide machinery for generating theorems in hyperbolic and spherical geometry.
For recent development of the study of cyclic polygons in Euclidean geometry, see, for example, \cite{P1,P2}.
In sections 2, 3 and 4, the general principle is illustrated by examples about triangles, cyclic quadrilaterals and cyclic hexagons. In section 5, the main result, Theorem \ref{main}, is stated formally. Section 6 establishes a lemma and section 7 proves Theorem \ref{main}.
\section{Triangle}
For a triangle on the Euclidean plane, assume that it has the side lengths $a,b,c$ and the radius of circumcircle $r$. Then
$$r^2=\frac{(abc)^2}{(a+b+c)(b+c-a)(a+c-b)(a+b-c)}.$$
For a triangle on the hyperbolic plane, assume that it has the side lengths $a,b,c$ and the radius of circumcircle $r$. Then
$$\frac14\sinh^2 r=\frac{(\sinh\frac a2 \sinh\frac b2 \sinh\frac c2)^2}
{A_h(A_h-2\sinh\frac a2)(A_h-2\sinh\frac b2)(A_h-2\sinh\frac c2)},$$
where $A_h=\sinh\frac a2+\sinh\frac b2+\sinh\frac c2.$
For a triangle on the unit sphere, assume that it has the side lengths $a,b,c$ and the radius of circumcircle $r$. Then
$$\frac14\sin^2 r=\frac{(\sin\frac a2 \sin\frac b2 \sin\frac c2)^2}
{A_s(A_s-2\sin\frac a2)(A_s-2\sin\frac b2)(A_s-2\sin\frac c2)},$$
where $A_s=\sin\frac a2+\sin\frac b2+\sin\frac c2.$
The three formulas are essentially the same. They can be unified by introducing the following function:
$$
s(x)=\left\{
\begin{array}{lll}
\frac x2 &\ \ \mbox{in Euclidean geometry} \\
\sinh \frac x2 &\ \ \mbox{in hyperbolic geometry}\\
\sin \frac x2 &\ \ \mbox{in spherical geometry}
\end{array}
\right.
$$
The unified formula is
$$\frac14 s(2r)^2=\frac{(s(a)s(b)s(c))^2}{A_3(A_3-2s(a))(A_3-2s(b))(A_3-2s(c))},$$
where $A_3=s(a)+s(b)+s(c)$.
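As a sanity check of the Euclidean case $s(x)=x/2$ (a sketch with arbitrary sample angles of our own choosing), place three points on a circle of radius $R$, compute the chord lengths, and verify that the circumradius formula returns $R^2$:

```python
import math

R = 2.0
angles = [0.3, 1.9, 4.1]                       # three points on a circle of radius R
# chord length for central angle |p - q| (sin is symmetric about pi, so no
# reduction modulo 2*pi is needed here)
chord = lambda p, q: 2 * R * math.sin(abs(p - q) / 2)
a = chord(angles[0], angles[1])
b = chord(angles[1], angles[2])
c = chord(angles[2], angles[0])
num = (a * b * c) ** 2
den = (a + b + c) * (b + c - a) * (a + c - b) * (a + b - c)
assert abs(num / den - R ** 2) < 1e-9          # the Euclidean formula recovers r^2
```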
\section{Cyclic quadrilateral}
\begin{figure}[ht!]
\labellist\small\hair 2pt
\pinlabel $a$ at 133 136
\pinlabel $b$ at 19 136
\pinlabel $c$ at 54 28
\pinlabel $d$ at 160 28
\pinlabel $e$ at 43 78
\pinlabel $f$ at 86 128
\endlabellist
\centering
\includegraphics[scale=0.45]{quad}
\caption{Ptolemy's theorem}
\label{fig:quad}
\end{figure}
We can observe a similar phenomenon in the case of a quadrilateral inscribed in a circle, which is called a cyclic quadrilateral: the formulas relating the side lengths, diagonal lengths and circumradius of a cyclic quadrilateral can be unified across Euclidean geometry, hyperbolic geometry and spherical geometry. Assume the side lengths are $a,b,c,d$, the diagonal lengths are $e,f$ as labeled in Figure \ref{fig:quad} and the radius of the circumcircle is $r$. Then Ptolemy's theorem says
$$s(e)s(f)=s(a)s(c)+s(b)s(d).$$
For more information about generalization of Ptolemy's theorem in hyperbolic and spherical geometry, see \cite{V1,V2}.
The formulas representing diagonal lengths in terms of side lengths are
$$s(e)^2=(s(a)s(c)+s(b)s(d))\frac{s(a)s(d)+s(b)s(c)}{s(a)s(b)+s(c)s(d)},$$
$$s(f)^2=(s(a)s(c)+s(b)s(d))\frac{s(a)s(b)+s(c)s(d)}{s(a)s(d)+s(b)s(c)}.$$
Ptolemy's theorem is a corollary of the two formulas.
The formula involving the radius is
$$\frac14 s(2r)^2=\frac{(s(a)s(b)+s(c)s(d))(s(a)s(c)+s(b)s(d))(s(a)s(d)+s(b)s(c))}
{(A_4-2s(a))(A_4-2s(b))(A_4-2s(c))(A_4-2s(d))}$$
where $A_4=s(a)+s(b)+s(c)+s(d).$
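Both the Euclidean and the spherical versions of Ptolemy's identity are easy to test numerically (a hedged sketch; the angles and the colatitude below are arbitrary choices of ours). The spherical case works because $2\sin(d/2)$ is exactly the Euclidean chord length in $\mathbb R^3$, so four points on a planar circle obey the Euclidean identity for their chords:

```python
import math
import numpy as np

th = [0.2, 1.1, 2.7, 4.6]        # four points in convex position on the circle

# Euclidean: circle of radius R, with s(x) = x/2
R = 1.5
ch = lambda p, q: 2 * R * math.sin(abs(p - q) / 2)
a, b, c, d = ch(th[0], th[1]), ch(th[1], th[2]), ch(th[2], th[3]), ch(th[3], th[0])
e, f = ch(th[0], th[2]), ch(th[1], th[3])
assert abs((e / 2) * (f / 2) - ((a / 2) * (c / 2) + (b / 2) * (d / 2))) < 1e-9

# Spherical: same longitudes on the circle of colatitude psi on the unit sphere,
# with s(x) = sin(x/2) applied to geodesic distances
psi = 0.8
P = [np.array([math.sin(psi) * math.cos(t), math.sin(psi) * math.sin(t), math.cos(psi)])
     for t in th]
dist = lambda u, v: math.acos(float(np.clip(np.dot(u, v), -1.0, 1.0)))
s = lambda x: math.sin(x / 2)
a, b, c, d = dist(P[0], P[1]), dist(P[1], P[2]), dist(P[2], P[3]), dist(P[3], P[0])
e, f = dist(P[0], P[2]), dist(P[1], P[3])
assert abs(s(e) * s(f) - (s(a) * s(c) + s(b) * s(d))) < 1e-9
```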
\section{Cyclic hexagon}
\begin{figure}[ht!]
\labellist\small\hair 2pt
\pinlabel $a$ at 162 187
\pinlabel $a'$ at 23 19
\pinlabel $b$ at 73 188
\pinlabel $b'$ at 141 3
\pinlabel $c$ at 9 120
\pinlabel $c'$ at 181 89
\pinlabel $e$ at 86 95
\pinlabel $f$ at 94 130
\pinlabel $g$ at 115 106
\endlabellist
\centering
\includegraphics[scale=0.45]{hex}
\caption{Fuhrmann's theorem}
\label{fig:hex}
\end{figure}
One more example is Fuhrmann's theorem about a cyclic hexagon in Euclidean geometry. This theorem also holds in hyperbolic and spherical geometry, and the formula can be unified in the three cases. Assume that a convex cyclic hexagon in Euclidean, hyperbolic or spherical geometry has side lengths $a, a', b, b', c,$ and $c'$ and diagonal lengths $e, f$, and $g$ as labeled in Figure \ref{fig:hex}; then
$$s(e)s(f)s(g)=s(a)s(a')s(e)+s(b)s(b')s(f)+s(c)s(c')s(g)+s(a)s(b)s(c)+s(a')s(b')s(c').$$
The generalization of Fuhrmann's theorem to hyperbolic geometry is treated on Wilson Stothers' web page about hyperbolic geometry \cite{S}.
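A minimal Euclidean sanity check of the identity (our own, and deliberately symmetric: for the regular hexagon all main diagonals are equal, so the labeling ambiguity in Figure \ref{fig:hex} disappears):

```python
import math

# regular hexagon inscribed in the unit circle
side = 2 * math.sin(math.pi / 6)      # = 1, every side
diag = 2 * math.sin(math.pi / 2)      # = 2, every main diagonal
s = lambda x: x / 2                   # the Euclidean s-function
a = ap = b = bp = c = cp = side
e = f = g = diag
lhs = s(e) * s(f) * s(g)
rhs = (s(a) * s(ap) * s(e) + s(b) * s(bp) * s(f) + s(c) * s(cp) * s(g)
       + s(a) * s(b) * s(c) + s(ap) * s(bp) * s(cp))
assert abs(lhs - rhs) < 1e-12         # Fuhrmann's identity: 1 = 3/4 + 1/4
```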
\section{Cyclic polygon}
We can expect a similar phenomenon to hold for a general polygon inscribed in a circle, which is called a cyclic polygon. In this paper we show that the same relationships among the side lengths, diagonal lengths and radius of the circumcircle hold in Euclidean geometry, hyperbolic geometry and spherical geometry. To make the statement formal, we introduce the following notation.
Fix an integer $n\geq 3.$
The set $\mathcal{E}_n$ of polynomials is defined as follows. A polynomial $f$ of $\frac{n(n+1)}{2}$ variables is in the set $\mathcal{E}_n$ if $f(|P_iP_j|_e,r_e)=0$ for any $n$ points $P_1,P_2,...,P_n$ on a circle of radius $r_e$ on the Euclidean plane, where $|P_iP_j|_e$ denotes the Euclidean distance between the two points $P_i$ and $P_j$.
The set $\mathcal{H}_n$ of polynomials is defined as follows. A polynomial $f$ of $\frac{n(n+1)}{2}$ variables is in the set $\mathcal{H}_n$ if $f(\sinh\frac{|P_iP_j|_h}2,\frac12\sinh r_h)=0$ for any $n$ points $P_1,P_2,...,P_n$ on a circle of radius $r_h$ on the hyperbolic plane, where $|P_iP_j|_h$ denotes the hyperbolic distance between the two points $P_i$ and $P_j$.
The set $\mathcal{S}_n$ of polynomials is defined as follows. A polynomial $f$ of $\frac{n(n+1)}{2}$ variables is in the set $\mathcal{S}_n$ if $f(\sin\frac{|P_iP_j|_s}2,\frac12\sin r_s)=0$ for any $n$ points $P_1,P_2,...,P_n$ on a circle of radius $r_s$ on the unit sphere, where $|P_iP_j|_s$ denotes the spherical distance between the two points $P_i$ and $P_j$.
\begin{theorem}\label{main} $\mathcal{E}_n=\mathcal{H}_n=\mathcal{S}_n$.
\end{theorem}
R. J. Gregorac \cite{G} generalized Ptolemy's theorem to a convex cyclic polygon on the Euclidean plane and generalized Fuhrmann's theorem to a convex cyclic $2n$-gon on the Euclidean plane. By Theorem \ref{main}, Gregorac's two results extend easily to hyperbolic and spherical geometry, and the formulas can be unified. For example, we consider the latter one.
\begin{corollary} Let $n>3.$ Let $\{V_0,V_1,...,V_{2n-1}\}$ be points on a circle in Euclidean geometry, hyperbolic geometry or spherical geometry. Let $l_{ij}$ denote the length of the geodesic segment $V_iV_j$ for $i\neq j$. Then $$\det(a_{ij})=0$$ where
$$a_{ij}=(-1)^{\delta_{ij}}(s(l_{2i-2,2j-1})s(l_{2j-1,2i}))^{-1}$$ and $\delta_{ij}$ is the Kronecker delta.
\end{corollary}
A cyclic polygon in Euclidean, hyperbolic or spherical geometry is uniquely determined by its side lengths. Therefore any diagonal length, as well as the radius of the circumcircle, is a function of the side lengths. In the case of Euclidean geometry, it is not difficult to see that these functions are algebraic. As a corollary of Theorem \ref{main}, these functions are unified across Euclidean, hyperbolic and spherical geometry.
\begin{corollary} For a Euclidean, hyperbolic or spherical cyclic polygon with vertices $P_1,P_2,...,P_n$ and side lengths $|P_iP_{i+1}|=l_{i,i+1}$, indices taken modulo $n$, the length $l_{ij}$ of the diagonal $P_iP_j$ $(|j-i|\geq 2)$ is
$$s(l_{ij})=F_{ij}(s(l_{12}),s(l_{23}),...,s(l_{n1}))$$ for some algebraic function $F_{ij}$ which is independent of the three kinds of geometry. The radius $r$ of the circumcircle satisfies $$s(2r)=G(s(l_{12}),s(l_{23}),...,s(l_{n1}))$$ for some algebraic function $G$ which is independent of the three kinds of geometry.
\end{corollary}
\section{Homogeneity}
\begin{figure}[ht!]
\labellist\small\hair 2pt
\pinlabel $P_1$ at 132 84
\pinlabel $P_1'$ at 156 74
\pinlabel $P_2$ at 101 172
\pinlabel $P_2'$ at 111 198
\pinlabel $P_3$ at 30 153
\pinlabel $P_3'$ at 9 172
\pinlabel $P_4$ at 33 76
\pinlabel $P_4'$ at 12 56
\pinlabel $P_1$ at 344 86
\pinlabel $P_1'$ at 368 73
\pinlabel $P_2$ at 314 174
\pinlabel $P_2'$ at 324 202
\pinlabel $P_3$ at 243 153
\pinlabel $P_3'$ at 222 174
\pinlabel $P_4$ at 246 76
\pinlabel $P_4'$ at 228 55
\pinlabel $P_1$ at 542 106
\pinlabel $P_1'$ at 525 76
\pinlabel $P_2$ at 566 125
\pinlabel $P_2'$ at 567 88
\pinlabel $P_3$ at 476 128
\pinlabel $P_3'$ at 470 88
\pinlabel $P_4$ at 455 108
\pinlabel $P_4'$ at 464 56
\pinlabel $(a)$ at 81 10
\pinlabel $(b)$ at 297 10
\pinlabel $(c)$ at 507 10
\pinlabel $O$ at 74 122
\pinlabel $O$ at 285 122
\pinlabel $S$ at 503 32
\endlabellist
\centering
\includegraphics[scale=0.62]{homo}
\caption{Scaling the points $P_i$ to $P_i'$ about the center of the circle: (a) in the Euclidean plane, (b) in the Poincar\'e disk, (c) on the unit sphere.}
\label{fig:homo}
\end{figure}
To prove Theorem \ref{main}, we need the following lemma.
\begin{lemma}\label{homo} If $f\in \mathcal{E}_n \cup \mathcal{H}_n \cup \mathcal{S}_n$, then $f$ is homogeneous.
\end{lemma}
\begin{proof} Assume $f\in \mathcal{E}_n$. Then $f(|P_iP_j|_e,r_e)=0$ for any $n$ points $P_1,P_2,...,P_n$ on a circle of any radius $r_e$ on the Euclidean plane. We may assume the center of the circle is the origin. For any $x>0,$ multiplying the coordinates of $P_1,P_2,...,P_n$ by $x$ yields new points $P'_1,P'_2,...,P'_n$ which lie on the circle of radius $xr_e$ centered at the origin; see Figure \ref{fig:homo}(a). The distances $|P'_iP'_j|_e=x|P_iP_j|_e$ and the radius $xr_e$ also satisfy the polynomial $f$, i.e., $f(x|P_iP_j|_e,xr_e)=0$. Therefore $f$ is homogeneous.
Assume $f\in \mathcal{H}_n$. Then $f(\sinh\frac{|P_iP_j|_h}2,\frac12\sinh r_h)=0$ for any points $P_1,P_2,...,P_n$ on a circle $C$ of radius $r_h$ on the hyperbolic plane. We use the Poincar\'e disk model of the hyperbolic plane: $\{z\in \mathbb{C}: |z|<1\}$. Assume the center of the circle is the origin $O$. For any $x>0$, there is another circle $C'$ centered at $O$ with radius $r_h'$ satisfying $\sinh r_h'= x\sinh r_h.$ For each $i$, let $P_i'$ be the intersection point of $C'$ and the geodesic ray starting from $O$ and passing through $P_i$, as in Figure \ref{fig:homo}(b). By the law of sines for hyperbolic triangles, we have
$$\frac{\sinh\frac{|P_iP_j|_h}2}{\sinh r_h}=\sin\frac{\angle P_iOP_j}{2}
=\frac{\sinh\frac{|P_i'P_j'|_h}2}{\sinh r_h'}=\frac{\sinh\frac{|P_i'P_j'|_h}2}{x\sinh r_h}$$
where $\angle P_iOP_j \in (0,\pi].$ Therefore $\sinh\frac{|P_i'P_j'|_h}2=x\sinh\frac{|P_iP_j|_h}2$.
Since $f\in \mathcal{H}_n,$ the equation $f(\sinh\frac{|P_i'P_j'|_h}2,\frac12\sinh r_h')=0$ holds. Thus $$f(x\sinh\frac{|P_iP_j|_h}2,x\frac12\sinh r_h)=0$$ holds for any $x>0$. Therefore $f$ is homogeneous.
Assume $f\in \mathcal{S}_n$. Then $f(\sin\frac{|P_iP_j|_s}2,\frac12\sin r_s)=0$ for any points $P_1,P_2,...,P_n$ on a circle $C$ of radius $r_s$ on the unit sphere. Assume the center of the circle is the south pole $S$. For any $x\in (0,1)$, there is another circle $C'$ centered at the south pole with radius $r_s'$ satisfying $\sin r_s'= x\sin r_s.$ For each $i$, let $P_i'$ be the intersection point of $C'$ and the geodesic ray starting from $S$ and passing through $P_i$, as in Figure \ref{fig:homo}(c). By the law of sines for spherical triangles, we have
$$\frac{\sin\frac{|P_iP_j|_s}2}{\sin r_s}=\sin\frac{\angle P_iSP_j}{2}
=\frac{\sin\frac{|P_i'P_j'|_s}2}{\sin r_s'}=\frac{\sin\frac{|P_i'P_j'|_s}2}{x\sin r_s}$$
where $\angle P_iSP_j \in (0,\pi].$ Therefore $\sin\frac{|P_i'P_j'|_s}2=x\sin\frac{|P_iP_j|_s}2$.
Since $f\in \mathcal{S}_n,$ the equation $f(\sin\frac{|P_i'P_j'|_s}2,\frac12\sin r_s')=0$ holds. Thus $$f(x\sin\frac{|P_iP_j|_s}2,x\frac12\sin r_s)=0$$ holds for any $x\in (0,1)$. Therefore $f$ is homogeneous.
\end{proof}
\section{Proof of Theorem \ref{main}}
\subsection{$\mathcal{E}_n \subseteq \mathcal{H}_n$}
Let $f$ be a polynomial in $\mathcal{E}_n.$
Consider $n$ points $P_1,P_2,...,P_n$ on a circle of radius $r_h$ on the hyperbolic plane. We use the Poincar\'e disk model of the hyperbolic plane. Assume that the center of the circle is the origin $O$. In the hyperbolic triangle $\triangle_h P_iOP_j$, which may degenerate to a geodesic segment, the law of sines gives
$$\sin\frac{\angle P_iOP_j}{2}=\frac{\sinh\frac{|P_iP_j|_h}2}{\sinh r_h}$$
where $\angle P_iOP_j \in (0,\pi].$
On the other hand, this circle is also a Euclidean circle, say of Euclidean radius $r_e$. In the Euclidean triangle $\triangle_e P_iOP_j$, the law of sines gives
$$\sin\frac{\angle P_iOP_j}{2}=\frac{|P_iP_j|_e}{2r_e}.$$
Hence $$\sinh\frac{|P_iP_j|_h}2=\frac{\sinh r_h}{2r_e}\cdot |P_iP_j|_e.$$
Obviously,
$$\frac12 \sinh r_h=\frac{\sinh r_h}{2r_e}\cdot r_e.$$
Since $f\in \mathcal{E}_n,$ $f(|P_iP_j|_e,r_e)=0$. By Lemma \ref{homo},
$$f(\sinh\frac{|P_iP_j|_h}2,\frac12\sinh r_h)=f(x|P_iP_j|_e,xr_e)=0,$$ where $x=\frac{\sinh r_h}{2r_e}$. Hence $f\in \mathcal{H}_n$.
\subsection{$\mathcal{H}_n \subseteq \mathcal{E}_n$}
Let $f$ be a polynomial in $\mathcal{H}_n.$
Let $P_1,P_2,...,P_n$ be points on a circle of radius $r_e$ centered at the origin $O$ on the Euclidean plane. For a number $y\in (0,\frac 1{r_e}),$ consider the circle $C'$ centered at the origin with radius $yr_e.$
For each $i$, let $P'_i$ be the intersection point of $C'$ and the ray starting from $O$ and passing through $P_i$. For any $i,j$, consider the (possibly degenerate) Euclidean triangle $\triangle_e P'_iOP'_j$.
Since all the points $P'_1,P'_2,...,P'_n$ lie in the unit disk, which we regard as the hyperbolic plane, for any $i,j$ there is also a hyperbolic triangle $\triangle_h P'_iOP'_j$.
By the law of sine, we have
$$\frac{\sinh\frac{|P'_iP'_j|_h}2}{\sinh |OP'_i|_h}=\sin\frac{\angle P_iOP_j}{2}=\frac{|P'_iP'_j|_e}{2yr_e}=\frac{|P_iP_j|_e}{2r_e}.$$
Since $f\in \mathcal{H}_n$ and the points $P'_1,P'_2,...,P'_n$ lie on a hyperbolic circle of radius $|OP'_i|_h$ centered at $O$, we have $f(\sinh\frac{|P'_iP'_j|_h}2,\frac12\sinh |OP'_i|_h)=0$. By Lemma \ref{homo}, $f(x\sinh\frac{|P'_iP'_j|_h}2,x\frac12\sinh |OP'_i|_h)=0$ for any $x>0$. Taking $x=\frac{2r_e}{\sinh |OP'_i|_h}$ and using the displayed identity above,
we obtain $f(|P_iP_j|_e, r_e)=0.$ Thus $f\in \mathcal{E}_n.$
To sum up, we have proved that $\mathcal{E}_n=\mathcal{H}_n$.
\subsection{$\mathcal{E}_n \subseteq \mathcal{S}_n$}
Let $f$ be a polynomial in $\mathcal{E}_n.$
Consider $n$ points $P_1,P_2,...,P_n$ on a circle of radius $r_s$ on the unit sphere $\{(x,y,z):x^2+y^2+z^2=1\}$. Assume that the center of the circle is the south pole $S$. In the spherical triangle $\triangle_s P_iSP_j$, which may degenerate, the law of sines gives
$$\sin\frac{\angle P_iSP_j}{2}=\frac{\sin\frac{|P_iP_j|_s}2}{\sin r_s}.$$
For any $i$, let $P'_i$ be the image of $P_i$ under the stereographic projection from the north pole, i.e., the intersection point of the plane $z=0$ and the line passing through the north pole and $P_i$. Then $P'_1,P'_2,...,P'_n$ are points on a circle of radius $r_e$ centered at the origin $O$ on the Euclidean plane $z=0$. In the Euclidean triangle $\triangle_e P'_iOP'_j$, by the law of sines, we have
$$\sin\frac{\angle P'_iOP'_j}{2}=\frac{|P'_iP'_j|_e}{2r_e}.$$
Since $\angle P_iSP_j=\angle P'_iOP'_j,$ we have
$$\sin\frac{|P_iP_j|_s}2=\frac{\sin r_s}{2r_e}\cdot |P'_iP'_j|_e.$$
Obviously,
$$\frac12 \sin r_s=\frac{\sin r_s}{2r_e}\cdot r_e.$$
Since $f\in \mathcal{E}_n,$ $f(|P'_iP'_j|_e,r_e)=0$. By Lemma \ref{homo},
$$f(\sin\frac{|P_iP_j|_s}2,\frac12\sin r_s)=f(x|P'_iP'_j|_e,xr_e)=0,$$ where $x=\frac{\sin r_s}{2r_e}$. Hence $f\in \mathcal{S}_n$.
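The projection computation above is easy to verify by machine. The following Python sketch (illustrative only, with an arbitrarily chosen radius and angles) checks that stereographic projection from the north pole rescales every quantity $\sin\frac{|P_iP_j|_s}{2}$, together with $\frac12 \sin r_s$, by the same factor $x = \frac{\sin r_s}{2 r_e}$.

```python
import math

def sph_point(r_s, theta):
    # point at spherical distance r_s from the south pole S = (0, 0, -1)
    return (math.sin(r_s) * math.cos(theta),
            math.sin(r_s) * math.sin(theta),
            -math.cos(r_s))

def sph_dist(p, q):
    # great-circle distance on the unit sphere
    dot = sum(a * b for a, b in zip(p, q))
    return math.acos(max(-1.0, min(1.0, dot)))

def stereo(p):
    # stereographic projection from the north pole (0, 0, 1) to z = 0
    x, y, z = p
    return (x / (1.0 - z), y / (1.0 - z))

r_s = 1.1
thetas = [0.3, 1.2, 2.5, 4.0, 5.5]
pts = [sph_point(r_s, th) for th in thetas]
proj = [stereo(p) for p in pts]

r_e = math.hypot(*proj[0])           # common Euclidean radius of the images
x = math.sin(r_s) / (2.0 * r_e)      # the scaling factor from the proof
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        lhs = math.sin(sph_dist(pts[i], pts[j]) / 2.0)
        assert abs(lhs - x * math.dist(proj[i], proj[j])) < 1e-12
assert abs(0.5 * math.sin(r_s) - x * r_e) < 1e-12
print("ok")
```

Since every argument of $f$ is multiplied by the same $x$, the homogeneity lemma transfers the vanishing of $f$ from the Euclidean data to the spherical data, exactly as in the proof.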
\subsection{$\mathcal{S}_n \subseteq \mathcal{E}_n$}
Let $g$ be a polynomial in $\mathcal{S}_n.$
Let $B_1,B_2,...,B_n$ be points on a circle of radius $r_e$ centered at the origin $O$ on the Euclidean plane.
Consider the stereographic projection from the north pole of the unit sphere $\{(x,y,z):x^2+y^2+z^2=1\}$. The images of
$B_1,B_2,...,B_n$ under the stereographic projection are points $B'_1,B'_2,...,B'_n$ on a circle of radius $r_s$ centered at the south pole $S$ on the unit sphere. By the law of sines, we have
$$\sin\frac{\angle B_iOB_j}{2}=\frac{|B_iB_j|_e}{2r_e},$$
$$\sin\frac{\angle B'_iSB'_j}{2}=\frac{\sin\frac{|B'_iB'_j|_s}2}{\sin r_s}.$$
Since $\angle B_iOB_j=\angle B'_iSB'_j,$ we have
$$|B_iB_j|_e=\frac{2r_e}{\sin r_s}\cdot \sin\frac{|B'_iB'_j|_s}2.$$
Obviously,
$$r_e=\frac{2r_e}{\sin r_s} \cdot \frac12 \sin r_s.$$
Since $g\in \mathcal{S}_n,$ $g(\sin\frac{|B'_iB'_j|_s}2, \frac12 \sin r_s)=0.$ By Lemma \ref{homo}, $$g(x\sin\frac{|B'_iB'_j|_s}2, x\frac12 \sin r_s)=g(|B_iB_j|_e, r_e)=0,$$ where $x=\frac{2r_e}{\sin r_s}.$ Hence $g\in \mathcal{E}_n.$
To sum up, we have proved that $\mathcal{E}_n=\mathcal{S}_n$.
% End of arXiv:1009.2970, ``Cyclic polygons in classical geometry,'' Metric Geometry (math.MG).

% arXiv:1307.1418, ``Stabilization of coefficients for partition polynomials.''

\begin{abstract}
We find that a wide variety of families of partition statistics stabilize in a fashion similar to $p_k(n)$, the number of partitions of $n$ with $k$ parts, which satisfies $p_k(n) = p_{k+1}(n + 1)$ for $k \geq n/2$. We bound the regions of stabilization, discuss variants on the phenomenon, and give the limiting sequence in many cases as the coefficients of a single-variable generating function. Examples include many statistics that have an Euler product form, partitions with prescribed subsums, and plane overpartitions.
\end{abstract}

\section{Introduction}
Consider a set of combinatorial objects $\{\alpha\}$ with statistics $wt(\alpha)$ and $t(\alpha)$, thinking of $wt(\alpha)$ as the primary descriptor. Let $G(z,q)$ be its two-variable generating function, that is, if $p(n,k)$ is the number of objects $\alpha$ with $wt(\alpha) = n$ and $t(\alpha) = k$,
\[
G(z,q) = \sum_{n,k \in \mathbb{N} \bigcup \{0\}} p(n,k) q^n z^k\, \text{.}
\]
An important example occurs when the generating function
$G(z,q)$ has the Euler product form
\begin{equation}\label{eq:product}
G(z,q) = \prod_{i \geq 1} \frac{1}{(1-z q^i)^{a_i}}
\end{equation}
with $a_i \in \mathbb{Z}^+ \bigcup \{0\}$.
We let $F_n(z)$ denote the $q^n$ coefficient of $G(z,q)$ which is a polynomial in $z$, so
\[
G(z,q) = \sum_n F_n(z) q^n, \quad \, F_n(z) = \sum_k p(n,k) z^k \, .
\]
For $a_i = 1$ this is the generating function for partitions of weight $wt(\alpha) = n$ with number of parts $t(\alpha) = k$. It is a well-known fact in partition theory that $p(n,k)$ for $k$ nearly equal to $n$ has a value independent of $n$: $p(n,n-b)$ is the number of partitions of $b$ for $b\leq\frac{n}{2}$. A similar result holds for plane partitions of $n$ indexed by their trace \cite[p. 199]{Andrews_book} or \cite[Corollary 5.3]{Stanley}.
This phenomenon, which we call {\sl stabilization,} is widespread in generating functions of combinatorial interest, even those of greater complexity. The purpose of this paper is to describe this behavior in more general cases, and consider some illustrative examples and variations.
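The classical stabilization fact $p(n, n-b) = p(b)$ for $b \leq n/2$ is easy to check computationally. The short Python sketch below (our own illustration, not part of the paper) builds the table of $p(n,k)$ from the standard recurrence $p(n,k) = p(n-1,k-1) + p(n-k,k)$ and tests the identity over the whole stated range.

```python
# p[n][k] = number of partitions of n with exactly k parts, via the
# standard recurrence p(n, k) = p(n-1, k-1) + p(n-k, k):
# either some part equals 1, or subtract 1 from each of the k parts.
N = 40
p = [[0] * (N + 1) for _ in range(N + 1)]
p[0][0] = 1
for n in range(1, N + 1):
    for k in range(1, n + 1):
        p[n][k] = p[n - 1][k - 1] + p[n - k][k]

part = [sum(row) for row in p]          # part[b] = p(b), partitions of b

# stabilization: p(n, n-b) = p(b) as soon as b <= n/2
for n in range(1, N + 1):
    for b in range(n // 2 + 1):
        assert p[n][n - b] == part[b]
print("ok")
```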
We found the polynomial framework to be well suited to these problems rather than a direct approach. The arguments should be adaptable to a wide variety of cases.
We would like to thank the anonymous referee for a careful read-through: noting typos, improving the exposition, and suggesting occasional strengthenings of theorems where we had hesitated to extend our reach. This article is substantially improved thanks to their efforts.
\section{ Basic Infinite Product Generating Functions }
Let $G(z,q)$ be an infinite product generating function of the form
\begin{equation}\label{eq:new_genfct}
G(z,q) = \sum_{n=1}^\infty F_n(z) q^n = \prod_{j=1}^{\infty} \frac{1}{(1-z^{b(j)} q^{c(j)})^{a_j}},
\end{equation}
where we assume that, for each $t$, only finitely many $j$ satisfy $c(j) = t$, so that the series is well defined. We find that if the $c(j)$ grow sufficiently faster than the $b(j)$, the upper coefficients of the $F_n(z)$ stabilize to the coefficients of a single-variable generating function, which we give explicitly.
Let ${\mathcal F}$ denote the set of all nonnegative integer sequences with finite support.
For ${\bf e} = (e_1,e_2,\dots) \in {\mathcal F}$
set
\[
\mu(\mathbf{e}) = (c(1)^{e_1}c(2)^{e_2}\cdots),\quad
\nu(\mathbf{e}) = (b(1)^{e_1}b(2)^{e_2}\cdots)
\]
to denote the partitions with parts $c(j)$ (resp. $b(j)$) appearing $e_j$ times.
A direct expansion of the generating functions yields
an explicit form for the polynomial $F_n(z)$:
\begin{eqnarray}\label{eq:take_away}
F_n (z)
&= &
\sum_{k=0}^n z^k \, \sum_{{\mu(\mathbf{e}) \vdash n} \atop {\nu(\mathbf{e}) \vdash k}} \prod_{i \geq 1}
\binom{ a_i+e_i-1}{e_i}
\end{eqnarray}
We will compare the coefficients of $F_n(z)$ with those of the expansion of
\[
\prod_{i=1}^\infty
\frac{1}{ (1-z^{b(i)})^{a_i}} =
\sum_{k=0}^\infty z^k \, \sum_{ \nu( {\bf e}) \vdash k} \, \prod_{i\geq 1} \binom{ a_i + e_i -1}{ e_i} .
\]
\begin{theorem}
\label{thm:stabliztion2}
Suppose that the exponents satisfy $a_1 = 1 = b(1) = c(1)$.
If there exists a positive integer $m \geq 2$ such that
\begin{equation}\label{eq:stable2}
m \cdot b(j) \leq c (j), \quad j \geq 2 ,
\end{equation}
then
\\
(a)
for $k > n/m $, $\left[z^k \right] F_n(z) = \left[z^{k+1} \right] F_{n+1} (z)$.
\\
(b)
If $c(j) - b(j) > 0$ for all $j>1$ and, for each fixed difference, the set of $j$ for which $c(j) - b(j)$ takes that value is finite,
then for $\ell \leq \lfloor n/m \rfloor$,
\[
\left[ z^{n-\ell}\right]F_n(z) = \left[z^{\ell}\right] \prod_{j \geq 2} \frac{1}{(1-z^{c(j)-b(j)})^{a_j}} .
\]
\end{theorem}
\begin{proof}
(a)
From the explicit form (\ref{eq:take_away}) of the polynomial $F_n(z)$, we can expand it as
\begin{eqnarray}
F_n (z)
&= &
\sum_{k=0}^n z^k \, \sum_{{\mu(\mathbf{e}) \vdash n} \atop {\nu(\mathbf{e}) \vdash k}} \prod_{i \geq 1}
\binom{ a_i+e_i-1}{e_i}
\nonumber
\\
&=& \sum_{k=0}^n \sum_{e_1=0}^k z^{b(1) e_1}
\sum_{{\mu^-(\mathbf{e}) \vdash n - c(1)e_1} \atop {\nu^-(\mathbf{e}) \vdash k - b(1)e_1}} z^{k-b(1)e_1}
\prod_{i \geq 2} \left( {{a_i+e_i-1} \atop {e_i}} \right) .
\nonumber
\end{eqnarray}
Now if the integer sequence ${\bf e}$ gives a contribution to $[z^{k+1}]F_{n+1}(z)$ and $e_1>0$, then we define
${\bf e}'$
as the integer sequence
all of whose terms agree with ${\bf e}$ except for $j=1$ where we set $e_1'=e_1-1$.
In this way, we obtain all the possible terms contributing to $[z^k]F_n(z)$.
Conversely, any term for $[z^k]F_n(z)$ gives a contribution to $[z^{k+1}]F_{n+1}(z)$ by simply adding
1 to its first component.
The result reduces to showing that any contribution to $[z^{k+1}]F_{n+1}(z)$ indexed by ${\bf e}$
must have $e_1>0$.
We introduce the notation for the modified partitions
\[
\mu^{-}(\mathbf{e}) = (c(2)^{e_2} c(3)^{e_3} \cdots),\quad
\nu^{-}(\mathbf{e}) = (b(2)^{e_2} b(3)^{e_3} \cdots) .
\]
Now suppose that some ${\bf e}$ with $\mu({\bf e}) \vdash n+1$ and $\nu({\bf e}) \vdash k+1$ gives a contribution to $[z^{k+1}]F_{n+1}(z)$ but
$e_1 = 0$. Then $\mu^-(\mathbf{e}) \vdash n+1$ and $\nu^-(\mathbf{e}) \vdash k+1$, and since $c(j) \geq m \cdot b(j)$ for all $j \geq 2$,
we have $\vert \mu^-({\bf e}) \vert \geq m \vert \nu^-({\bf e}) \vert$, that is, $n+1 \geq m(k+1)$.
But if $ k > \frac{n}{m}$, then $m(k+1) > n+m \geq n+2 > n+1$, a contradiction. Thus the terms in the two expansions correspond exactly, and the coefficients are equal. This proves part (a).
For part (b), we begin by forming a new partition as follows.
First subtract $b(j)$ from each $c(j)$ and consider the partition
$\lambda(\mathbf{e}) = ((c(2)-b(2))^{e_2}\dots)$. This removes exactly the amount $\vert \nu^{-}(\mathbf{e})\vert$ from $\vert \mu^{-}(\mathbf{e})\vert$, so
\begin{eqnarray}
F_n (z)
&=& \sum_{k=0}^n \sum_{e_1=0}^k z^{k}
\sum_{{\lambda(\mathbf{e}) \vdash n - k} \atop {\nu^-(\mathbf{e}) \vdash k - b(1)e_1}} \prod_{i \geq 2} \left( {{a_i+e_i-1} \atop {e_i}} \right)
\nonumber
\\
&=& \sum_{k=0}^n z^{k}
\sum_{{\lambda(\mathbf{e}) \vdash n - k} \atop {\vert \nu^-(\mathbf{e}) \vert \leq k}} \prod_{i \geq 2} \left( {{a_i+e_i-1} \atop {e_i}} \right) . \nonumber
\end{eqnarray}
By hypothesis, the parts of $\lambda$ satisfy $c(j) - b(j) \geq (m-1)b(j) \geq m-1$.
Now take $k = n-\ell$ with $\ell \leq \lfloor n/m \rfloor$; then $m\ell \leq n = \ell + k$, so $(m-1)\ell \leq k$.
If $\lambda(\mathbf{e}) \vdash n-k = \ell$, then $\vert \nu^-(\mathbf{e}) \vert \leq \frac{\ell}{m-1} \leq k$, so the constraint $\vert \nu^{-}(\mathbf{e}) \vert \leq k$ is automatically satisfied.
Thus the sum runs over all $\mathbf{e}$ for which $\lambda(\mathbf{e}) \vdash \ell$ with parts $c(j) - b(j)$. But this is exactly the coefficient of $z^{\ell}$ in the expansion claimed:
\[
\left[ z^{n-\ell}\right]F_n(z) = \left[z^{\ell}\right] \prod_{j \geq 2} \frac{1}{(1-z^{c(j)-b(j)})^{a_j}} \, \text{ .}
\]
\end{proof}
The technique of proof also yields a slight improvement in a special case of the hand enumerators of prefabs
(\cite{Bender_Goodman}, \cite[page 92]{Wilf}).
\begin{corollary}
Suppose $a_1=1$.
If $F_n(z) = [q^n] \prod_{j=1}^\infty (1-zq^j)^{- a_j}$,
then
for $k \geq n/2$, $\left[z^k \right] F_n(z) = \left[z^{k+1} \right] F_{n+1} (z)$.
\end{corollary}
By the proof, we find that the generating function for the stabilized coefficients gives an upper bound outside the range of stability.
\begin{corollary}
With the hypotheses of the Theorem,
\[
\left[ z^{n-\ell}\right]F_n(z) \leq \left[z^{\ell}\right] \prod_{j \geq 2} \frac{1}{(1-z^{c(j)-b(j)})^{a_j}} ,
\quad 0 \leq \ell \leq n.
\]
\end{corollary}
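To illustrate part (b) of Theorem \ref{thm:stabliztion2} concretely, take $a_j = 1$, $b(j) = j$, and $c(j) = j^2$ (our own choice of example, not one from the text); the hypotheses then hold with $m = 2$, and the limiting series is $\prod_{j \geq 2} (1-z^{j^2-j})^{-1}$. The Python sketch below computes the $F_n(z)$ by truncated series multiplication and compares the top coefficients with the limit.

```python
N = 30
# coeff[n][k] = [z^k q^n] prod_{j>=1} 1/(1 - z^j q^{j^2}),
# i.e. b(j) = j, c(j) = j^2, a_j = 1; the hypotheses hold with m = 2.
coeff = [[0] * (N + 1) for _ in range(N + 1)]
coeff[0][0] = 1
j = 1
while j * j <= N:
    # multiply, in place, by the geometric series 1/(1 - z^j q^{j^2})
    for n in range(j * j, N + 1):
        for k in range(j, N + 1):
            coeff[n][k] += coeff[n - j * j][k - j]
    j += 1

# limiting series prod_{j>=2} 1/(1 - z^{j^2 - j})
lim = [1] + [0] * N
j = 2
while j * j - j <= N:
    for k in range(j * j - j, N + 1):
        lim[k] += lim[k - (j * j - j)]
    j += 1

# part (b): [z^{n-l}] F_n(z) = [z^l] of the limiting series for l <= n/2
for n in range(N + 1):
    for l in range(n // 2 + 1):
        assert coeff[n][n - l] == lim[l]
print("ok")
```

For instance $F_{13}(z) = z^{13} + z^{11} + z^9 + 2z^7 + z^5$, whose top coefficients $1, 0, 1, 0, 1, 0, 2$ agree with the expansion $1 + z^2 + z^4 + 2z^6 + \cdots$ of $\prod_{j\geq 2}(1-z^{j^2-j})^{-1}$.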
Variants of stabilization exist in several guises. If the $b$ grow faster than the bound of the previous theorem, we find that the smaller end of the polynomials stabilize instead of the larger (and $b$ and $c$ staying within a given ratio range will permit both phenomena). We also note that it is possible for the larger coefficients of a sequence of polynomials to stabilize in periods, i.e., the coefficients match those of a polynomial every 2 or more steps further along.
\begin{theorem}\label{thm:stabilization2}
(a)
Let $\{ b(j)\}$ and $\{ c(j)\}$ be two strictly increasing sequences of integers, positive except that
$b(1)=0$.
Let $F_n(z) = [q^n]\prod_{j=1}^\infty (1-z^{ b(j) } q^{ c(j) })^{ - a_j }$ with $a_1=1$.
If there exists a positive integer $m $ such that for all $j \geq 2$
\[
(m+1) b(j) > c(j) ,
\]
then
\[
[z^k] F_n(z) = \, [ z^k] F_{n+ c(1)}(z), \quad 0 \leq k \leq n/(m+1).
\]
(b)
Let $a_1=0, a_2=1$.
If
$F_n(z) = [q^n]\prod_{j=2}^\infty (1-z q^{ j})^{ - a_j }$, then
$\deg(F_n) = \lfloor n/2 \rfloor$
and
\[
[z^k] F_n(z) = \, [z^{k+1}] F_{n+2}(z) , \quad k \geq n/3 .
\]
\end{theorem}
\begin{proof}
Let ${\bf e}$ be a sequence of non-negative integers with finite support.
By (\ref{eq:take_away}), for the coefficients $[z^k]F_n(z)$ and $[z^k] F_{n+c(1)}(z)$, we need to consider the following two sets of
finitely supported nonnegative integer sequences:
\begin{eqnarray*}
S_n &=&
\{ {\bf e} \in {\mathcal F} : | \mu({\bf e}) | = n, \, | \nu({\bf e} ) | =k \},
\\
S_{n+c(1)} &=&
\{ {\bf f} \in {\mathcal F} : | \mu({\bf f}) | = n+c(1), \, | \nu({\bf f} ) | =k \} .
\end{eqnarray*}
We construct a bijection between these two sets. Given ${\bf e} \in S_n$, we take the
corresponding ${\bf f}$ with $f_i=e_i$ for $i \geq 2$; since $b(1)=0$, the value of $f_1$ does not affect $\nu({\bf f})$.
Next write out $ | \mu({\bf e}) | = n$
and $ | \mu({\bf f}) | = n+c(1)$:
\[
n= \sum_{j \geq 1} c(j) e_j, \quad n+c(1) = c(1) f_1 + \sum_{j \geq 2} c(j) e_j .
\]
The term $f_1$ uniquely determines a preimage $e_1$
provided $f_1>0$. But $f_1$ must be positive for $k \leq n/(m+1)$ since
\[
\sum_{j\geq 2} c(j) f_j = \sum_{j \geq 2} c(j) e_j < (m+1) \sum_{j \geq 2} b(j) e_j = (m+1)k \leq n.
\]
Finally, the contributions themselves agree: for corresponding ${\bf e} \in S_n$ and ${\bf f} \in S_{n+c(1)}$,
\[
\prod_{i \geq 1} \binom{ a_i + e_i -1}{ e_i} =
\prod_{i \geq 1} \binom{ a_i + f_i -1}{ f_i} ,
\]
since the binomial coefficients are equal for $i \geq 2$, while for $i=1$ both reduce to $1$ because $a_1=1$. Hence $[z^k] F_n(z) = [z^k] F_{n+c(1)}(z)$.
For part (b), let
$S_n = \{ {\bf e } \in {\mathcal F} :
\mu({\bf e}) \vdash n, \nu({\bf e} ) \vdash k\}$
while
$S_{n+2}=
\{ {\bf f } \in {\mathcal F} :
\mu({\bf f}) \vdash n+2, \nu({\bf f} ) \vdash k+1\}$.
We can construct a bijection between these two sets as in part (a) provided $f_2 >0$ when $k \geq n/3$.
Assume that $f_2=0$ is possible. Then $\sum_{j \geq 3} f_j = k+1$ while $\sum_{j \geq 3} j f_j = n+2$. On the other hand,
$n+2 = \sum_{j \geq 3} j f_j \geq 3(k+1) \geq n+3$ for $k \geq n/3$, which is a contradiction.
\end{proof}
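Part (b) of Theorem \ref{thm:stabilization2} can likewise be checked by machine: here $F_n(z) = [q^n]\prod_{j \geq 2}(1 - z q^j)^{-1}$ counts partitions of $n$ into $k$ parts, all parts at least $2$. The Python sketch below (illustrative only) verifies both $\deg F_n = \lfloor n/2 \rfloor$ and the period-two stabilization.

```python
import math

N = 48
# coeff[n][k] = [z^k q^n] prod_{j>=2} 1/(1 - z q^j):
# partitions of n into exactly k parts, every part at least 2.
coeff = [[0] * (N + 1) for _ in range(N + 1)]
coeff[0][0] = 1
for j in range(2, N + 1):
    # multiply, in place, by the geometric series 1/(1 - z q^j)
    for n in range(j, N + 1):
        for k in range(1, N + 1):
            coeff[n][k] += coeff[n - j][k - 1]

for n in range(2, N - 1):
    deg = max(k for k in range(N + 1) if coeff[n][k])
    assert deg == n // 2                      # deg F_n = floor(n/2)
    # [z^k] F_n = [z^{k+1}] F_{n+2} once k >= n/3
    for k in range(math.ceil(n / 3), n // 2 + 1):
        assert coeff[n][k] == coeff[n + 2][k + 1]
print("ok")
```

For example $[z^2]F_6 = 2$ (from $4+2$ and $3+3$) matches $[z^3]F_8 = 2$ (from $4+2+2$ and $3+3+2$), in line with the bijection of adding a part of size $2$.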
\section{Partitions With Prescribed Subsums}\label{section:subsums}
Fix a positive integer $m$ and integer $i$ so $1 \leq i \leq m$. Canfield-Savage-Wilf \cite[Section 3]{CSW} introduced the generating function
\[
G_{m,i}(z,q) = \prod_{j=1}^\infty \,
\prod_{b=1}^{i-1} \frac{1}{ 1- z^{j-1} q^{ (j-1)m+b}} \, \prod_{b=i}^m \frac{1}{ 1- z^{j} q^{ (j-1)m+b}}
\]
to describe partitions with prescribed subsums. They let
$\Lambda_{m,i}(n,k)$ be the number of partitions $\lambda=(\lambda_1,\cdots, \lambda_n)$ of $n$
such that the sum of those parts $\lambda_j$ whose indices $j$ are congruent to $i$ modulo $m$ is $k$;
that is,
\[
\sum_{ j : j \equiv i \, ({\rm mod} \, m)} \lambda_j = k .
\]
Then they found
\[
\sum_{n,k \geq 0} \Lambda_{m,i}(n,k) z^k q^n = G_{m,i}(z,q).
\]
We begin by recovering a result for
$\Lambda_{2,2}(n,k)$ in \cite[Theorem 1]{CSW} and \cite{Yuri}
by reformulating it in terms of the generating function
$G_{2,2}(z,q)$ and stabilization of polynomial coefficients.
\begin{proposition}\label{prop:n-mk}
Let $m \geq 2$ and $1 \leq b < m$. Let $G(z,q)= \prod_{j=1}^\infty (1-z^{j-1} q^{(j-1)m+b} )^{-1}$
and $A_n(z) = [q^n] G(z,q)$. Then for $ 0 \leq k \leq n/(m+1)$ we have
\begin{enumerate}
\item\quad
$[z^k] A_n(z) = [z^k] A_{n+b}(z)$,
\item\quad
if $n-mk$ is not divisible by $b$, then
$[z^k] A_n(z) = 0$,
\item\quad
if $n-mk$ is divisible by $b$ and $bk \leq n/(m+1)$, then
$[z^k] A_n(z) = p(k)$.
\end{enumerate}
\end{proposition}
\begin{proof}
The first part is a direct consequence of Theorem \ref{thm:stabilization2}.
For part (2), in Theorem \ref{thm:stabilization2} let $c(j) = m(j-1)+b$ and $b(j)=j-1$. When all $a_j =1$, $[z^k]A_n(z)$ is the number of all ${\bf e} \in {\mathcal F}$ such that $\mu({\bf e}) \vdash n$ and $\nu({\bf e}) \vdash k$.
Consider
\begin{eqnarray}\label{eq:n-mk}
n
&=&| \mu({\bf e})| = \sum_{j \geq 1} e_j [ m(j-1) + b]
\nonumber
\\
&=&
m \sum_{j \geq 1} e_j (j-1) + b \sum_{j \geq 1} e_j
\nonumber
\\
&=&
mk + b \sum_{j \geq 1} e_j .
\end{eqnarray}
Hence $n-mk$ must be divisible by $b$ whenever some contributing ${\bf e}$ exists; if $b \nmid n-mk$, then $[z^k]A_n(z)=0$.
For part (3),
assume $0 \leq bk \leq n/(m+1)$ and that $n-mk$ is divisible by $b$.
Let ${\bf e}$ be any solution to $\nu( {\bf e}) \vdash k$; since $b(1)=0$, this places no constraint on $e_1$. On the other hand, by (\ref{eq:n-mk}) the choice
\[
e_1 = (n-mk)/b - \sum_{j\geq 2} e_j
\]
is the unique one yielding $\mu ({\bf e}) \vdash n$, and it is admissible precisely when $e_1 \geq 0$.
Hence, $0\leq [z^k] A_n(z) \leq p(k)$.
Finally,
the inequality $0 \leq bk \leq n/(m+1)$ guarantees that $e_1\geq 0$ always holds: since $(m+1)bk \leq n$ and $b \geq 1$, we get $n - mk \geq bk$, so $(n-mk)/b \geq k \geq \sum_{j \geq 2} e_j$. Hence
$[z^k] A_n(z)=p(k)$.
\end{proof}
\begin{theorem}
Let $G_A(z,q)=\prod_{k=1}^\infty ( 1-z^{k-1} q^{2k-1} )^{-1}$
and $G_B(z,q)= \prod_{k=1}^\infty ( 1-z^{k} q^{2k} )^{-1}$, so $G_{2,2}(z,q) = G_A(z,q) G_B(z,q)$. Then the coefficients of the polynomials
$F_n(z)= [q^n] G_{2,2}(z,q)$ satisfy
\[
[z^{k}] F_n(z) = [z^{k}] F_{n+1}(z) = \sum_{\ell=0}^k p(\ell) p(k-\ell), \quad 0 \leq k \leq n/3,
\]
where $p(\ell)$ is the number of partitions of $\ell$, as usual.
\end{theorem}
\begin{proof}
Let $A_n(z)=[q^n]G_A(z,q)$ and $B_n(z) = [q^n]G_B(z,q)$. The generating function $G_B(z,q)$ has the explicit expansion
\[
G_B(z,q) = \sum_{j=0}^\infty p(j) z^j q^{2j}
\]
so $B_{2k+1}(z)=0$ while $B_{2k}(z) = p(k) z^k$.
We also know by Proposition \ref{prop:n-mk} that
\[
[z^j] A_n(z) = p(j), \quad 0 \leq j \leq n/3.
\]
Next we have that
\[
F_n(z) = \sum_{\ell=0}^{n/2} A_{n-2\ell}(z) B_{2\ell}(z) = \sum_{\ell=0}^{n/2} p(\ell) z^\ell A_{n-2\ell}(z)
\]
Examine the coefficient $[z^k]F_n(z)$ for $0 \leq k \leq n/3$:
\[
[z^k]F_n(z) = [z^k] \sum_{\ell=0}^{n/2} p(\ell) z^\ell A_{n-2\ell}(z)
=\sum_{\ell=0}^{n/2} p(\ell) \, [z^{k-\ell} ] A_{n-2\ell}(z)
\]
Since $k-\ell \leq (n-2\ell)/3$ for $0\leq \ell \leq k$ and $k\leq n/3$,
we find $[z^{k - \ell}] A_{n - 2 \ell}(z) = p(k - \ell)$.
We conclude that
\[
[z^k]F_n(z)= \sum_{\ell=0}^{k} p(\ell) \, p( k - \ell) .
\]
\end{proof}
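The theorem is easy to test numerically from the defining product for $G_{2,2}$ in Section \ref{section:subsums}. The following Python sketch (our own check, not part of the proof) computes the coefficients $[z^k q^n]\, G_{2,2}(z,q)$ from the factors $(1 - z^{j-1} q^{2j-1})^{-1}(1 - z^j q^{2j})^{-1}$ and compares them with the convolution $\sum_\ell p(\ell)\, p(k-\ell)$ in the stated range.

```python
N = 36
# coeff[n][k] = [z^k q^n] G_{2,2}(z,q), with the defining product
# G_{2,2} = prod_{j>=1} 1/((1 - z^{j-1} q^{2j-1})(1 - z^j q^{2j})).
coeff = [[0] * (N + 1) for _ in range(N + 1)]
coeff[0][0] = 1
for j in range(1, N // 2 + 2):
    for zp, qp in ((j - 1, 2 * j - 1), (j, 2 * j)):
        if qp > N:
            continue
        # multiply, in place, by the geometric series 1/(1 - z^zp q^qp)
        for n in range(qp, N + 1):
            for k in range(zp, N + 1):
                coeff[n][k] += coeff[n - qp][k - zp]

# ordinary partition numbers p(0), ..., p(N)
p = [1] + [0] * N
for part in range(1, N + 1):
    for n in range(part, N + 1):
        p[n] += p[n - part]

# [z^k] F_n = [z^k] F_{n+1} = sum_l p(l) p(k-l)  for 0 <= k <= n/3
for n in range(N):
    for k in range(n // 3 + 1):
        conv = sum(p[l] * p[k - l] for l in range(k + 1))
        assert coeff[n][k] == conv == coeff[n + 1][k]
print("ok")
```

As a spot check, $\Lambda_{2,2}(3,1) = 2$ (from the partitions $(2,1)$ and $(1,1,1)$ of $3$), matching $p(0)p(1) + p(1)p(0) = 2$.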
In order to investigate more general cases for the
polynomials $F_n(z) = [q^n] G_{m,i}(z,q)$ it is convenient to rewrite the generating function as
\begin{equation}\label{eq:new_eq}
G_{m,i}(z,q)=
\left( \prod_{b=1}^{i-1} \frac{1}{1-q^b} \, \right) \,
\prod_{a=1}^\infty \, \prod_{d=0}^{m-1} \frac{1}{ 1-z^a q^{ i+ (a-1) m + d}}.
\end{equation}
We wish to find a useful form for the coefficient $[z^j] F_n(z)$. As usual, we have
\begin{eqnarray*}
[q^n] G_{m,i}(z,q)
&& =
\sum_{s=0}^n [q^{n-s}] \left( \prod_{b=1}^{i-1} \frac{1}{1-q^b} \, \right) \,
[q^s] \prod_{a=1}^{\infty} \prod_{d=0}^{m-1} \frac{1}{ 1- z^{a} q^{i+ (a-1)m+d}}
\\
&&= \quad
\sum_{s=0}^n [q^{n-s}] \left( \prod_{b=1}^{i-1}\,
\sum_{ f_b=0}^\infty \, \left(q^b \right)^{f_b} \, \right)
\\
&&
\qquad
\times \quad
[q^s] \prod_{a=1}^{\infty} \prod_{d=0}^{m-1} \, \sum_{ f_{(a,d)}=0}^\infty
\left(z^{a} q^{ i+(a-1)m+d} \right)^{f_{(a,d)}}
\end{eqnarray*}
A typical term of the above expansion is indexed by a pair of partitions $(\rho, \mu)$,
where $\rho \vdash n-s$ with each part of $\rho$ strictly less than $i$, while $\mu \vdash s$ with
each part of $\mu$ at least $i$. We write the parts of $\mu$ as $\mu_{(a,d)} = i + (a-1)m + d$ with multiplicity
$e_{\mu(a,d)}$, where $a \geq 1$ and $0 \leq d < m$. By construction, we find that
\[
s = \sum_{a,d} e_{\mu(a,d)} \mu_{(a,d)}.
\]
while, of course, no part of $\rho$ has a contribution to $z$.
Each part $\mu_{(a,d)}$ of $\mu$ with multiplicity $e_{\mu(a,d)}$ contributes the power of $z$
\[
\left( z^a \right)^{e_{\mu(a,d)}} = z^{ a e_{ \mu(a,d)}},
\]
so the total contribution from $\mu$ is
\[
\sum_{a,d} a e_{\mu(a,d)}.
\]
We prove a few lemmas in preparation for the next theorem.
\begin{lemma}
\begin{eqnarray}\label{eq:coeff_F}
\lefteqn{
[z^j] F_n(z)
}
\nonumber
\\
&& = \sum_{s=ji}^n p(n-s, < i) \, \#\{ \mu \vdash s : \textrm{all parts of } \mu \textrm{ are } \geq i, \sum_{a,d} a e_{\mu(a,d)}=j \}
\end{eqnarray}
where $p(n-s, < i)$ denotes the number of all partitions of $n-s$ all of whose parts are strictly less than $i$.
\end{lemma}
\begin{proof}
This expression for $[z^j] F_n(z)$ follows directly from the generating function except for the
range of summation.
Let $\mu$ be a partition of $s$. Then we have
\begin{eqnarray*}
s &=& \sum_{a,d} [ i + m (a-1) + d] e_{\mu(a,d)}
\\
&\geq& \sum_{a,d} [ i + i (a-1) ] e_{\mu(a,d)}
\\
&=&
i \sum_{a,d} a e_{\mu(a,d)} = ij .
\end{eqnarray*}
\end{proof}
Let $r$ denote the number of parts of $\mu$ while $t$ denotes the number of parts with $a\geq 2$ so $r-t$ gives the number of
parts with $a=1$. Note that
\[
r = \sum_{a,d} e_{\mu(a,d)}.
\]
\begin{lemma} Given the partition $\mu$, we have the bound $j-r \geq t$, and can rewrite $s$ as
$$s = i r + m (j-r) + \sum_{a,d} d e_{\mu(a,d)}.$$
\end{lemma}
\begin{proof}
Since $\mu \vdash s$, we find that
\begin{eqnarray*}
s &=&
\sum_{a,d} e_{\mu(a,d)} \, [ i + (a-1) m + d ]
\\
&=&
i \, \sum_{a,d} e_{\mu(a,d)} + m \sum_{a,d} (a-1) e_{\mu(a,d)} + \sum_{a,d} d e_{\mu(a,d)}
\\
&=&
i r + m (j-r) + \sum_{a,d} d e_{\mu(a,d)}.
\end{eqnarray*}
For the bound, consider
\[
j-r = \sum_{a,d} (a-1) e_{\mu(a,d)} \geq \sum_{a \geq 2, d} e_{\mu(a,d)} \geq t.
\]
\end{proof}
\begin{lemma}
Let $\mu$ be a partition of $s$ all of whose parts are $\geq i$. If
$m \geq i+2$ and
$j > s /(i+1)$, then $\mu$ must have at least one part of size $i$.
\end{lemma}
\begin{proof}
We now assume that $m \geq i+2$ and that the part $i$ does not appear in $\mu$.
In particular, for any part of $\mu$ of the form $i +d$, with $a=1$, $d$ must be strictly positive.
Then we have a refinement of the above bounds:
\begin{eqnarray*}
s &\geq&
i r + m (j-r) + \sum_{a,d} d e_{\mu(a,d)}
\geq
i r + (i+2) (j-r) + \sum_{ d} d \, e_{\mu(1,d)}
\\
&\geq&
i r + (i+2) (j - r) + (r-t)
\\
&\geq&
i r + (i+1) (j - r) + t + (r-t)
\\
&=&
(i+1) [ r + (j-r)] = (i+1) j.
\end{eqnarray*}
Hence the partition $\mu$ must have $i$ as a part; otherwise, $(i+1)j >s$ which contradicts our assumption.
\end{proof}
With these preparations, we will show that
\begin{theorem}\label{thm:subsums}
Let $m \geq 1$ and $1 \leq i \leq m$. Set $F_n(z) = [ q^n] G_{m,i}(z,q)$ where
$G_{m,i}(z,q)$ is given by (\ref{eq:new_eq}).
If $m>i+1$ and $j> \frac{n}{i+1}$, then
\[
[z^j] F_n(z) = [z^{j-1}] F_{n-i}(z) .
\]
\end{theorem}
\begin{proof}
By (\ref{eq:coeff_F}), we need to show that
\begin{eqnarray*}
&&
\sum_{s=ji }^{ n} p(n-s, < i) \, \# \{ \mu \vdash s : \textrm{all parts of } \mu \textrm{ are } \geq i, \sum_{a,d} a e_{\mu(a,d)}=j \}
\\
&&
=
\sum_{s'= (j-1) i }^{n- i } p(n-s'-i, < i) \,
\\
&&
\quad
\times \quad
\# \{ \nu \vdash s' : \textrm{all parts of } \nu \textrm{ are } \geq i, \sum_{a,d} a e_{\nu(a,d)}=j -1 \}
\end{eqnarray*}
These two coefficients are equal provided we construct a bijection $T$ between
\begin{eqnarray*}
U_s &=& \{ \mu \vdash s : \textrm{all parts of } \mu \textrm{ are } \geq i, \sum_{a,d} a e_{\mu(a,d)}=j \}
\\
V_{s'}
&=&
\{ \nu \vdash s' : \textrm{all parts of } \nu \textrm{ are } \geq i, \sum_{a,d} a e_{\nu(a,d)}=j -1 \}.
\end{eqnarray*}
when $s' = s-i$. Let $\mu \in U_s$. By Lemma 7, the partition $\mu$ must have a part equal to $i$.
Let $\nu=T(\mu)$ be the partition of $s-i$ obtained by deleting one part from $\mu$ of size $i$. It is easy
to verify that $\nu \in V_{s'}$. The inverse of $T$ is simply adding a part of size $i$ to $\nu \in V_{s'}$.
\end{proof}
In other words, Theorem \ref{thm:subsums} shows that if $m>i+1$ and $j> n/(i+1)$, then the subsums satisfy
$\Lambda_{m,i}(n,j)= \Lambda_{m,i}(n-i,j-1)$.
\section{Laurent Type Polynomials }\label{section:laurent}
A more general case consists of generating functions that involve $z$ raised to different powers;
ultimately, we might treat the case of the generating function
\[
G(z,q) = \prod_{(i,j) \in \mathbb{Z}^2 \setminus \{(0,0)\}} {(1 - z^i q^j)}^{a_{ij}} \, .
\]
An important example comes from the generating function for the crank statistic for partitions.
Let
\[
C(z,q) = \prod_{k\geq 1} \frac{1-q^k}{(1-z q^k)(1-z^{-1} q^k)} = \sum_{n=0}^\infty \, M_n(z) q^n
\]
where $M_n(z)$ is a symmetric Laurent polynomial.
From the definition of the crank, the coefficient of $z^{n-k}$ in $M_n(z)$,
for $ k \leq \frac{n}{2}$, equals the number of partitions of $k$ that include no 1s.
In particular, the coefficients of $M_n(z)$ at the powers $z^{n-k}$ and $z^{-(n-k)}$ have stabilized
for $0\leq k \leq \lfloor n/2 \rfloor$. It is suggested in \cite{BG2} that the zeros of the crank polynomials converge to
the unit circle.
Another example is the generating
function
\[
\prod_{k=1}^\infty \frac{ 1}{ (1-z q^k)^{k}} \, \frac{1}{ (1-z^{-1} q^k)^k } \, \frac{1}{ (1-q^k)^{2k}},
\]
which arises in Donaldson--Thomas theory in algebraic geometry and whose asymptotics were studied in \cite{siam10}.
\begin{lemma}\label{thm:freezing1}
Let $\{A_n(z)\}_{n=0}^\infty$ be a sequence of polynomials, such that the degree of $A_n(z)$ is $n$, whose coefficients satisfy
\[
[z^{n-k}] A_n(z) = [ z^{n+1-k}]A_{n+1}(z), \quad 0 \leq k \leq n/m
\]
for some integer $m \geq 2$.
Let $\{B_n(z)\}_{n=0}^\infty$ be another sequence of polynomials. Then the coefficients of the
polynomial sequence $\{F_n(z)\}_{n=0}^\infty$
where
\[
F_n(z) = \sum_{\ell=0}^n \, A_{\ell}(z) B_{n-\ell}(z^{-1})
\]
also satisfy
\[
[z^{n-k}] F_n(z) = [ z^{n+1-k}]F_{n+1}(z), \quad 0 \leq k \leq n/m.
\]
\end{lemma}
\begin{proof}
Let $0 \leq k \leq \lfloor n/m \rfloor$. Consider $[z^{n-k}] F_n(z)$. We have
\begin{eqnarray}
[z^{n-k}] F_n(z)
&=&
[z^{n-k}] \sum_{\ell=0}^n A_{\ell}(z) \, B_{n-\ell}(z^{-1})
\nonumber
\\
&=&
\sum_{\ell=0}^n \, \sum_{a=0}^k \, [ z^{ n-k+a}] A_\ell(z) \, [ z^{-a}] B_{n-\ell}(z^{-1})
\nonumber
\\
&=&
\sum_{\ell=n-k}^n \, \sum_{a=0}^{k} \, [ z^{ n-k+a}] A_\ell(z) \, [ z^{-a}] B_{n-\ell}(z^{-1})
\label{eq:first_sum}
\end{eqnarray}
where in the last step we note that $[z^{ n-k+a}]A_\ell(z)=0$ if $\ell < n-k$.
For $[z^{n+1-k}]F_{n+1}(z)$ we argue similarly: the terms with $\ell\leq n-k$ vanish, since $[z^{n+1-k+a}]A_\ell(z)=0$ for such $\ell$, and shifting the index $\ell$ by $1$ gives
\begin{equation}
\label{eq:second_sum}
\sum_{\ell=n-k}^{n} \, \sum_{a=0}^{k} \, [ z^{ n+1-k+a}] A_{\ell+1}(z) \, [ z^{-a}] B_{n-\ell}(z^{-1})
\end{equation}
By assumption, we know that
\[
[ z^{ n-k+a}] A_\ell(z) =[ z^{ n+1-k+a}] A_{\ell+1}(z), \quad n-k \leq \ell \leq n
\]
since $0 \leq k \leq \lfloor n/m \rfloor$. The factors $[ z^{-a}] B_{n-\ell}(z^{-1})$ coincide in the two sums, and consequently the sums (\ref{eq:first_sum}) and (\ref{eq:second_sum}) are
equal.
\end{proof}
\begin{lemma}\label{lemma:convolution}
Given the generating function
$G(z,q) = \prod_{i \geq 1} (1-zq^i)^{-a_i}$ where $a_1=1$, let $A_n(z) = [q^n] G(z,q)$. Let $Q(q) = \sum_{j=0}^\infty c_j q^j$.
Let $F_n(z) = [q^n] G(z,q)Q(q)$. Then the tail coefficients of $F_n(z)$ stabilize; that is,
\[
[z^{k}] F_n(z) = [z^{k+1}] F_{n+1}(z), \quad k \geq n/2.
\]
\end{lemma}
\begin{proof}
By construction, the polynomials $F_n(z)$ have the form
\[
F_n(z) = \sum_{\ell=0}^n c_{n-\ell} A_\ell(z).
\]
Let $k \geq n/2$. Then the coefficients for $[z^{k}] F_n(z)$ and $ [z^{k+1}] F_{n+1}(z)$ are given by
\begin{eqnarray*}
[z^{k}] F_n(z) = \sum_{\ell=k}^n c_{n-\ell} [z^{k} ] A_\ell(z),
\quad
\,[z^{k+1}]F_{n+1}(z) = \sum_{j = k+1}^{n+1} c_{n+1-j} [z^{k+1}] A_{j}(z).
\end{eqnarray*}
As a consequence of Corollary 1, $c_{n-\ell} [z^{k} ] A_\ell(z) = c_{n+1-j} [z^{k+1}] A_{j}(z)$ for $j= \ell+1$ and $k \leq \ell \leq n$; summing over $\ell$ shows that the two sums are equal.
\end{proof}
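For a concrete instance of Lemma \ref{lemma:convolution} (an illustration only; the choice of $Q$ below is arbitrary), take $G(z,q)=\prod_{i\geq 1}(1-zq^i)^{-1}$, so that $[z^k]A_n(z)=p_k(n)$, the number of partitions of $n$ with exactly $k$ parts, and check the tail stabilization directly:

```python
import math

N = 14  # truncation order (arbitrary)

# [z^k] A_n(z) = p_k(n), computed by the standard DP over parts i:
# multiplying by 1/(1 - z q^i) lets part i be used any number of times,
# each use raising the z-degree by one
A = [[0] * (N + 1) for _ in range(N + 1)]
A[0][0] = 1
for i in range(1, N + 1):
    for n in range(i, N + 1):
        for k in range(1, N + 1):
            A[n][k] += A[n - i][k - 1]

c = [j + 1 for j in range(N + 1)]     # an arbitrary Q(q) = sum_j c_j q^j

def coeff_F(n, k):
    # [z^k] F_n(z), where F_n(z) = sum_l c_{n-l} A_l(z)
    return sum(c[n - l] * A[l][k] for l in range(n + 1))

# tail stabilization: [z^k] F_n = [z^{k+1}] F_{n+1} for k >= n/2
for n in range(N):
    for k in range(math.ceil(n / 2), N):
        assert coeff_F(n, k) == coeff_F(n + 1, k + 1)
print("tail stabilization verified")
```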
\begin{theorem}
If $a_1 = b_1 = 1$, and
\[
G(z,q) = \sum_{n=0}^\infty F_n (z) q^n = \prod_{i \geq 1} {(1 - z q^i)}^{-a_i} {(1 - z^{-1} q^i)}^{-b_i} {(1 \pm q^i)}^{c_i},
\]
then the coefficients of the Laurent polynomials $F_n(z)$ satisfy
\[
[z^{n-k}] F_n(z) = [z^{n+1-k}] F_{n+1}(z), \quad [z^{-(n-k)}] F_n(z) = [z^{-(n+1-k)}] F_{n+1}(z),
\]
for $0 \leq k \leq n/2$.
\end{theorem}
\section{Plane Overpartition Stabilization}
A plane partition is an array of positive integers, conventionally justified to the upper left corner of the fourth quadrant, whose entries weakly decrease from left to right along rows and from top to bottom down columns. A plane overpartition is a plane partition whose entries may be overlined or not according to
certain rules \cite{CSV}: in each row, the last occurrence of an integer may be overlined
(or not), and in every column, all but the first occurrence of an integer are overlined, while the first occurrence may or may not be overlined.
In \cite[Proposition 4]{CSV}, the generating function for the weighted plane overpartitions is found to be
\[
\sum_{ \Pi \,\,\textrm{is a plane overpartition}}
z^{ o(\Pi)} q^{ | \Pi |} =
\prod_{n=1}^\infty \frac{ (1+ zq^n)^n}{ (1-q^n)^{ \lceil n/2 \rceil} (1-z^2q^n)^{ \lfloor n/2 \rfloor } }
\]
where $o( \Pi)$ is the number of overlined parts of the plane
overpartition $\Pi$.
\begin{theorem}
Let $G(z,q)$ be the generating function for the polynomials $F_n(z)$:
\[
G(z,q)=
\prod_{n=1}^\infty \frac{ (1+ zq^n)^n}{ (1-q^n)^{ \lceil n/2 \rceil} (1-z^2q^n)^{ \lfloor n/2 \rfloor } }
=
\sum_{n=0}^\infty F_n(z) q^n .
\]
Then the coefficients of the polynomials $F_n(z)$ satisfy the stabilization condition
\[
[z^{k+1}] F_{n+1}(z) = [ z^k] F_n(z),
\]
for $k\geq 2n/3$.
\end{theorem}
\begin{proof}
Let $\{A_n(z)\}$ be the polynomial sequence with generating function $G_A(z,q)$ where
\[
G_A(z,q) = \prod_{n=2}^\infty \frac{ 1}{ (1-z^2q^n)^{ \lfloor n/2 \rfloor } } = \sum_{n=0}^\infty A_n(z) q^n
\]
and $\{ B_n(z)\}$ with generating function $G_B(z,q)$:
\[
G_B(z,q) = \prod_{n=2}^\infty (1+ zq^n)^n = \sum_{n=0}^\infty B_n(z) q^n .
\]
It is easy to see that $\deg(A_n) = 2 \lfloor n/2 \rfloor$.
By Theorem \ref{thm:stabilization2}, replacing $z$ by $z^2$, we also find
\[
[z^k] A_n = [ z^{k+2}] A_{n+2}, \quad k \geq 2n/3 .
\]
The degree of the polynomial $B_N(z)$ is the largest number of parts in a partition of $N$ drawn from a multiset containing two 2s, three 3s, and so on. For $N \geq 21 = 2+2+3+3+3+4+4$, the average part size is at least 3, and so
\[
\deg(B_N) \leq \tfrac{1}{3}N.
\]
For smaller $N$, direct calculation shows that $\deg(B_N) \leq \frac{2}{3}N$.
It is more convenient to work with the intermediate polynomials
\[
Q_n(z) = [q^n] G_A(z,q) G_B(z,q) =\sum_{\ell=0}^n A_{n-\ell}(z) \, B_\ell(z).
\]
We have the summation formula for $[z^k] Q_n(z)$:
\begin{eqnarray*}
[z^k] Q_n(z)
&=&
\sum_{\ell=0}^n \, \sum_{a=0}^k [ z^{k-a}] A_{n - \ell}(z) \cdot [z^a] B_\ell(z)
\\
&=&
\sum_{\ell \in I_{k,n} } \, \sum_{ a =0 }^{ \deg(B_\ell)} \, [ z^{k-a}] A_{n - \ell}(z) \cdot [z^a] B_\ell(z)
\end{eqnarray*}
where $I_{k,n}$ is the set of indices
\[
I_{k,n} =
\left\{
\ell :
\deg(B_\ell) + \deg(A_{n - \ell }) \geq k, \, 0 \leq \ell \leq n
\right\} .
\]
We have $I_{k+2, n+2} = I_{k,n}$, since $\deg(A_{n+2-\ell}) = \deg(A_{n-\ell})+2$ and hence the defining inequality is unchanged when $(k,n)$ is replaced by $(k+2,n+2)$.
We next show that
\[
\sum_{\ell \in I_{k,n} } \, \sum_{ a = 0}^{ \deg(B_\ell)} \, [ z^{k-a}] A_{n - \ell}(z) \cdot [z^a] B_\ell(z)
=
\sum_{\ell \in I_{k+2,n+2} } \, \sum_{ a = 0}^{ \deg(B_\ell)} \, [ z^{k+2-a}] A_{n+2 - \ell}(z) \cdot [z^a] B_\ell(z) .
\]
To do this, we need that all the terms $[ z^{k-a}] A_{n- \ell}$ fall into the stable range of indices.
Now
to get into the stable range, we need
\[
k-a \geq \frac{2}{3} ( n - \ell)
\]
where $0\leq a \leq \deg B_\ell$ and $\ell \in I_{k,n}$.
Since $k\geq 2n/3$ and $0\leq a\leq \deg B_\ell$, it suffices to verify the stronger condition
\begin{eqnarray*}
\frac{2}{3} n - \deg B_\ell &\geq& \frac{2}{3} ( n - \ell) ,
\end{eqnarray*}
which reduces to
\[
\deg(B_\ell) \leq \frac{2}{3} \ell, \quad \ell \in I_{k,n}
\]
which does indeed hold by the degree bounds on $B_\ell$ established above.
Our next step is to define
\[
P_n(z) = [q^n] \, (1+zq) \, G_A(z,q) G_B(z,q) = Q_n(z) + z\,Q_{n-1}(z).
\]
(Leaving the factor $(1+zq)$ out until now simplified the degree analysis: including it from the start would have introduced the single exceptional case $\deg (B_1) = 1$.) We observe that
\begin{eqnarray*}
\,
[z^k] P_n (z) &=& [ z^k] Q_n(z) + [ z^{k-1} ] Q_{n-1}(z),
\\
\,
[z^{k+1}] P_{n+1} (z) &=& [ z^{k+1}] Q_{n+1}(z) + [ z^{k}] Q_{n}(z) .
\end{eqnarray*}
Hence, we have
\[
[z^k] P_n(z) = [z^{k+1}] P_{n+1}(z), \quad k \geq 2n/3 .
\]
To finish the proof, we define the series $C(q)$ by
\[
C(q) = \prod_{n=1}^\infty \frac{ 1 }{ (1-q^n)^{ \lceil n/2 \rceil} } .
\]
Then the polynomials $F_n(z)$ are given by
\[
F_n(z) = [q^n] C(q) \, \sum_{\ell =0}^\infty P_\ell (z) q^\ell .
\]
By Lemma \ref{lemma:convolution},
we see that this construction maintains the stability of the coefficients of the polynomial family $\{ P_\ell(z)\}$.
\end{proof}
\begin{corollary}
Let $\overline{pp}_k(n)$ be the number of plane overpartitions of $n$ with $k$ overlined parts. If $k \geq 2n/3$, then
\[
\overline{pp}_k(n) =\overline{pp}_{k+1}(n+1).
\]
\end{corollary}
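The corollary is easy to test numerically. The sketch below (an illustration only; the truncation order $N$ is arbitrary) expands the product from \cite[Proposition 4]{CSV} modulo $q^{N+1}$ and checks $\overline{pp}_k(n)=\overline{pp}_{k+1}(n+1)$ for $k\geq 2n/3$:

```python
from collections import defaultdict
import math

N = 10  # truncation order in q (arbitrary)

def series():
    return [defaultdict(int) for _ in range(N + 1)]

def mul(f, g):
    # multiply two q-series mod q^(N+1); entries are {z-exponent: coeff} dicts
    h = series()
    for i, fi in enumerate(f):
        for j in range(N + 1 - i):
            for a, ca in fi.items():
                for b, cb in g[j].items():
                    h[i + j][a + b] += ca * cb
    return h

G = series()
G[0][0] = 1
for n in range(1, N + 1):
    lin = series()
    lin[0][0], lin[n][1] = 1, 1               # (1 + z q^n)
    for _ in range(n):
        G = mul(G, lin)
    inv1 = series()                           # 1/(1 - q^n)
    for m in range(N // n + 1):
        inv1[m * n][0] = 1
    for _ in range(math.ceil(n / 2)):
        G = mul(G, inv1)
    inv2 = series()                           # 1/(1 - z^2 q^n)
    for m in range(N // n + 1):
        inv2[m * n][2 * m] = 1
    for _ in range(n // 2):
        G = mul(G, inv2)

# overline statistic: pp_k(n) = [z^k] F_n(z) = G[n].get(k, 0)
for n in range(N):
    for k in range(math.ceil(2 * n / 3), n + 3):
        assert G[n].get(k, 0) == G[n + 1].get(k + 1, 0)
print("plane overpartition stabilization verified up to n =", N)
```

For instance, $F_2(z)=z^2+3z+2$ matches the direct enumeration of plane overpartitions of $2$ with $0$, $1$, and $2$ overlined parts.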
% arXiv:1307.1418 (math.CO), ``Stabilization of coefficients for partition polynomials'', 2013.
% arXiv:2009.05820
\title{Empty axis-parallel boxes}
\begin{abstract}
We show that, for every set of $n$ points in the $d$-dimensional unit cube, there is an empty axis-parallel box of volume at least $\Omega(d/n)$ as $n\to\infty$ and $d$ is fixed. In the opposite direction, we give a construction without an empty axis-parallel box of volume $O(d^2\log d/n)$. These improve on the previous best bounds of $\Omega(\log d/n)$ and $O(2^{7d}/n)$ respectively.
\end{abstract}
\section{Introduction}
\paragraph{Dispersion.}
A \emph{box} is a Cartesian product of open intervals. Given a set $P\subset [0,1]^d$, we say that a
box $B=(a_1,b_1)\times \dots \times (a_d,b_d)$ is \emph{empty} if $B\cap P=\emptyset$. Let $m(P)$
be the volume of the largest empty box contained in $[0,1]^d$.
Let $m_d(n)$ be the largest number such that every $n$-point set $P\subset [0,1]^d$ admits
an empty box of volume at least $m_d(n)$. Alternatively, $m_d(n)=\min m(P)$, where the minimum
is over all $n$-point sets $P\subset [0,1]^d$.
The quantity $m(P)$ is called the \emph{dispersion} of $P$. The motivation for estimating $m_d(n)$ came independently in several subjects.
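For intuition, $m(P)$ can be computed exactly for small planar sets by brute force, since each side of a maximal empty box either passes through a point of $P$ or lies on the boundary of the cube. The following sketch (our own illustration, not part of the paper) does this in dimension $d=2$:

```python
from itertools import combinations

def dispersion(P):
    # m(P): volume of the largest empty open axis-parallel box in [0,1]^2;
    # side coordinates of a maximal empty box come from point coordinates
    # or from the boundary, so those are the only candidates to try
    xs = sorted({0.0, 1.0} | {p[0] for p in P})
    ys = sorted({0.0, 1.0} | {p[1] for p in P})
    best = 0.0
    for x1, x2 in combinations(xs, 2):
        for y1, y2 in combinations(ys, 2):
            # the open box (x1,x2) x (y1,y2) is empty iff no point is strictly inside
            if all(not (x1 < px < x2 and y1 < py < y2) for px, py in P):
                best = max(best, (x2 - x1) * (y2 - y1))
    return best

print(dispersion([(0.5, 0.5)]))                 # -> 0.5
print(dispersion([(1/3, 1/3), (2/3, 2/3)]))     # 4/9, up to floating point
```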
The earliest occurrence is probably in the work of Rote and Tichy~\cite{rote_tichy}, who were
motivated by the relations to $\veps$-nets in discrete geometry on the one hand, and
by the relations to discrepancy theory on the other. The dispersion
also arose in the problem of estimating rank-one tensors \cite{bddg,Krieg_Rudolf,nr_tensor} and in Marcinkiewicz-type
discretizations \cite{temlyakov}. In addition, in \cite{chen_dumitrescu}, lower bounds on dispersion were
used, via a compactness argument, to give constructions with large gap between covering and independence numbers for families of axis-parallel boxes.
The obvious bound $m_d(n)\geq 1/(n+1)$ was observed in several works, including \cite{dj60,bddg,rote_tichy}.
The first non-trivial lower bound of $m_d(n)\geq \tfrac{5}{4(n+5)}$ for $d\geq 2$ is due to Dumitrescu and Jiang
\cite{dj_amidst}. In \cite{dj60} they proved, for fixed $b$ and $d$, that $(n+1) m_d(n)\geq (b+1)m_d(b)-o(1)$,
which implies that the limit
\[
c_d\eqdef \lim_{n\to\infty} n m_d(n)
\]
exists. Indeed, for each $b$, $\liminf\, (n+1) m_d(n)\geq (b+1)m_d(b)$, and taking the supremum over $b$ shows that
$\liminf\, (n+1)m_d(n)\geq \limsup\, (n+1)m_d(n)$. Since the $\liminf$ never exceeds the $\limsup$,
the limit exists.
The best lower bound on $m_d(n)$ for fixed $d$ is due to Aistleitner, Hinrichs and Rudolf \cite{ahr},
which is $c_d\geq \tfrac{1}{4}\log_2 d$. In the same paper they present a proof, due to Larcher,
that $c_d\leq 2^{7d+1}$. In this note we show that the correct dependence of $c_d$ on $d$ is neither logarithmic nor exponential, but polynomial.
\begin{theorem}\label{thm:crude}
The dispersion of $n$-point sets in $[0,1]^d$ satisfies
\begin{equation}
m_d(n)\geq \frac{1}{n}\cdot \frac{2d}{e}\bigl(1-4dn^{-1/d}\bigr)\qquad\text{for all }d\text{ and all }n.\label{lowerbound:crude}
\end{equation}
\end{theorem}
All the logarithms in the rest of the paper are to base $e=2.718\dotsc$.
\begin{theorem}\label{thm:constr}
For every $d\geq 3$ and every $n\geq 1$, there is a set of at most $n$ points in $[0,1]^d$ for which the largest empty box
has volume at most $8000d^2\log d/n$.
\end{theorem}
For very large $n$, we have a slightly better lower bound.
\begin{theorem}\label{thm:sharp}
Let $R_0,T$ be positive real numbers that satisfy $T<R_0$ and $R_0-T<\log \frac{R_0}{T}$. Then
\[
c_d\geq R_0\exp\bigl(-\tfrac{1}{2d}(R_0-T)\bigr).
\]
In particular, $c_d\geq \frac{2d}{e}(1+e^{-2d})$ for all $d$, and $c_2\geq 1.50476$.
\end{theorem}
This improves on the aforementioned bound of $c_2\geq 5/4$ by Dumitrescu--Jiang.
Very recently the upper bound of $c_2\leq 1.8945$ was proved by Kritzinger and Wiart \cite{kritzinger_wiart}.
\paragraph{Acknowledgements.} We thank Mario Ullrich and Daniel Rudolf for comments on the earlier version of this manuscript.
\section{Proofs of the lower bounds (\texorpdfstring{\Cref{thm:crude,thm:sharp}}{Theorems 1 and 3})}
\paragraph{Averaging argument.}
We first give a simple argument for \Cref{thm:crude}. We will then show how to modify
that argument to get \Cref{thm:sharp}. We start with the common part of the two arguments.\medskip
Let $R_0>0$ be a parameter to be chosen later subject to $R_0\leq n$, and set $\delta \eqdef \tfrac{1}{2}(R_0/n)^{1/d}$.
Let $f\colon [0,R_0]\to \R_+$ be some weight function. We postpone the actual choice of $f$ until later.
We adopt the convention that $f(R)=0$ if $R\geq R_0$.
Let $B$ be the cube of volume $R_0/n$ centered at the origin, i.e., $B\eqdef\bigl[-\delta,\delta\bigr]^d$.
Using $f$, we define a function on $\R^d$ by
\[
F(x)\eqdef f(2^dr^dn)\qquad\text{ for }\norm{x}_{\infty}=r.
\]
Because $f$ vanishes outside $[0,R_0]$, the function $F$ vanishes outside~$B$.
Put $M\eqdef n\int_B F(x)\,dx$.
Note that $M=\int_{r=0}^{\delta} \bigl(2^dnr^{d-1}d\cdot f(2^dr^d n)\bigr)\,dr=\int_0^{R_0} f(R)\,dR$.
Because
\[
\int_{t\in \R^d} \sum_{p\in P-t} F(p)\,dt=\sum_{p\in P}\int_{t\in \R^d} F(p-t)\,dt=\sum_{p\in P}\int_{x\in \R^d} F(x)\,dx=M,
\]
it follows that there exists $t\in [\delta,1-\delta]^d$ such that
\begin{equation}\label{eq:weightbnd}
\sum_{p\in P-t} F(p)\leq M/(1-2\delta)^d,
\end{equation}
for otherwise $\int_{[\delta,1-\delta]^d} \sum_{p\in P-t} F(p)\,dt>M$.
It suffices to find a large box inside $B$ that is empty with respect to the set $P'\eqdef (P-t)\cap B$, for then we may obtain
an empty box of the same volume inside $[0,1]^d$ after translating by~$t$.
To find the empty box, we shave the sides off~$B$. Namely, for each point $p\in P'$
there is a coordinate of largest absolute value. If there is more than
one such coordinate, break the tie arbitrarily. Call this coordinate \emph{dominant}
for~$p$. Write the coordinates of $p\in P'$ as $p=(p_1,\dotsc,p_d)$. For each $i\in [d]$, put
\begin{align*}
a_i&\eqdef\min \{-p_i : i\text{ is dominant for }p\in P'\text{ and }p_i\leq 0\},\\
b_i&\eqdef\min \{\phantom{-}p_i : i\text{ is dominant for }p\in P'\text{ and }p_i\geq 0\}.
\end{align*}
Should the set in the definition of $a_i$ be empty, we put $a_i=\delta$.
Similarly, should the set in the definition of $b_i$ be empty, we put $b_i=\delta$.
The box
\[
B'\eqdef \prod_{i=1}^d (-a_i,b_i)
\]
is evidently disjoint from $P-t$ and is contained in~$B$.
\begin{lemma}\label{lem}
The volume of $B'$ is at least $\frac{R_0}{n} \prod_{p\in P'} \sqrt{\frac{\norm{p}_{\infty}}{\delta}}$.
\end{lemma}
\begin{proof}
Fix any coordinate $i\in [d]$.
Suppose first that the two sets in the definitions of $a_i$ and $b_i$ are non-empty.
Let $p,q\in P-t$ be the points such that $a_i=-p_i$ and $b_i=q_i$.
By the AM--GM inequality
\begin{equation}\label{eq:two}
\frac{a_i+b_i}{2\delta}\geq \frac{\sqrt{a_ib_i}}{\delta}=\sqrt{\frac{\norm{p}_{\infty}}{\delta}}\cdot\sqrt{\frac{\norm{q}_{\infty}}{\delta}}.
\end{equation}
Suppose next that only one of the two sets in the definitions of $a_i$ and $b_i$ is non-empty.
Say $a_i=-p_i$ for some $p\in P-t$ and $b_i=\delta$ (the other case being symmetric). Then
by a similar application of the AM--GM inequality we obtain
\begin{equation}\label{eq:one}
\frac{a_i+b_i}{2\delta}\geq \sqrt{\frac{\norm{p}_{\infty}}{\delta}}.
\end{equation}
By taking the product of \eqref{eq:two} and \eqref{eq:one} as appropriate over all $i\in [d]$, and noting that
every point has only one dominant coordinate,
we obtain
\[
\vol B'=(2\delta)^d \cdot \prod_{i=1}^d \frac{a_i+b_i}{2\delta}\geq (2\delta)^d\prod_{p\in P'} \sqrt{\frac{\norm{p}_{\infty}}{\delta}}.\qedhere
\]
\end{proof}
\paragraph{Simple weight function (proof of \texorpdfstring{\Cref{thm:crude}}{Theorem 1}).}
The simplest choice of the constant $R_0$ and weight function $f$ is
\begin{align*}
R_0&\eqdef 2d,\\
f(R)&\eqdef \log \frac{R_0}{R}.
\end{align*}
The condition $R_0\leq n$ is satisfied unless $n< 2d$, but in that case \Cref{thm:crude} holds vacuously.
With this choice of $R_0$ and $f$, we obtain $M=\int_0^{R_0} f(R)\,dR=R_0$ and $F(x)=d\log \frac{\delta}{\norm{x}_{\infty}}$ on $B$.
So, from \Cref{lem} we obtain
\begin{align*}
\vol B'&\geq \frac{R_0}{n} \exp\biggl(-\tfrac{1}{2}\sum_{p\in P'}\log \frac{\delta}{\norm{p}_{\infty}}\biggr)=
\frac{R_0}{n} \exp\Bigl(-\tfrac{1}{2d}\sum_{p\in P'} F(p)\Bigr)\\
\intertext{which in view of \eqref{eq:weightbnd} is}
&\geq \frac{R_0}{n}\exp\Bigl(-\frac{1}{2d} M(1-2\delta)^{-d}\Bigr)=\frac{R_0}{n}\exp\Bigl(-\bigl(1-(R_0/n)^{1/d}\bigr)^{-d}\Bigr)\\
\intertext{which, since $(2d)^{1/d}\leq 2$, is}
&\geq \frac{R_0}{n}\exp\bigl(-(1-2n^{-1/d})^{-d}\bigr)\geq \frac{R_0}{n}\exp\bigl(-(1-2dn^{-1/d})^{-1}\bigr).\\
\intertext{Using $\exp(-(1-x)^{-1})=e^{-1}\cdot \exp(-x-x^2-\dotsb)\geq e^{-1}\cdot (1-x-x^2-\dotsb)\geq e^{-1}(1-2x)$ for $x\in[0, 1/2]$, we may deduce that}
\vol B' &\geq \frac{1}{n}\cdot\frac{2d}{e}\bigl(1-4dn^{-1/d}\bigr).
\end{align*}
\paragraph{Better weight function (proof of \texorpdfstring{\Cref{thm:sharp}}{Theorem 3}).}
Let $T$ and $R_0$ be as in the statement of \Cref{thm:sharp}.
Since the aim is to prove a bound on $c_d$, we may assume that $n$ is sufficiently large.
Define
\begin{equation}
f(R)\eqdef
\begin{cases}
\log \frac{R_0}{T}&\text{if }R\leq T,\\
\log \frac{R_0}{R}&\text{if } T<R\leq R_0.\\
\end{cases}
\end{equation}
It is readily computed that $M=(\int_0^T+\int_T^{R_0})f(R)\,dR=R_0-T$. Since $(1-2\delta)^d\to 1$, it follows that
$M/(1-2\delta)^d<\log \frac{R_0}{T}$, for large enough $n$.
Because of \eqref{eq:weightbnd}, this implies that
$\sum_{x\in P'} F(x)<\log \frac{R_0}{T}$, and hence no point $x\in P'$ satisfies $R\leq T$, where $R=2^dn\norm{x}_{\infty}^d$;
indeed, any such point alone would contribute $f(R)=\log \frac{R_0}{T}$ to the sum.
Thus $F(x)=d\log \frac{\delta}{\norm{x}_{\infty}}$ for all $x\in P'$, and
we may proceed as before to obtain
\begin{align*}
\vol B'&\geq \frac{R_0}{n} \exp\Bigl(-\tfrac{1}{2d}\sum_{p\in P'} F(p)\Bigr)\geq \frac{R_0}{n} \exp\bigl( -\tfrac{1}{2d}M(1-2\delta)^{-d}\bigr).
\end{align*}
Taking the limit $n\to\infty$, the bound on $c_d$ follows.\smallskip
The bound $c_d\geq \frac{2d}{e}(1+e^{-2d})$ is obtained by choosing $R_0=2d$ and $T=R_0\exp(-R_0)$. The bound
$c_2\geq 1.50476$ is obtained by choosing $R_0=3.69513$ and $T=0.101622$.
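Numerically (a floating-point sanity check only, not part of the proof), both choices of parameters satisfy the hypotheses of \Cref{thm:sharp} and reproduce the stated constants:

```python
import math

def lower_bound(d, R0, T):
    # c_d >= R0 * exp(-(R0 - T)/(2d)), valid when 0 < T < R0
    # and R0 - T < log(R0/T)
    assert 0 < T < R0 and R0 - T < math.log(R0 / T)
    return R0 * math.exp(-(R0 - T) / (2 * d))

# the choice R0 = 2d, T = R0*exp(-R0) gives c_d >= (2d/e)(1 + exp(-2d))
for d in range(1, 8):
    R0 = 2 * d
    T = R0 * math.exp(-R0)
    assert lower_bound(d, R0, T) >= (2 * d / math.e) * (1 + math.exp(-2 * d))

# the choice made in the text for d = 2
print(lower_bound(2, 3.69513, 0.101622))   # -> 1.50476...
```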
\section{Proof of the upper bound (\texorpdfstring{\Cref{thm:constr}}{Theorem 2})}
\paragraph{Construction outline.}
Our construction is a modification of the Halton--Hammersley construction. As in the
Halton--Hammersley construction, we will select primes $p_1,\dotsc,p_d$, each associated with a respective coordinate direction.
As in the analysis of the Halton--Hammersley construction, we will be interested in \emph{canonical boxes}, which are the boxes\footnote{Here and elsewhere in this section we work with half-open boxes.
Since every half-open box contains an open box of the same volume, this does not impair the strength of our constructions, but
doing so will be technically advantageous.} of the form
\[
B=\prod_{i=1}^d \left[\frac{a_i}{p_i^{k_i}},\frac{a_i+1}{p_i^{k_i}}\right).
\]
for some integers $0\leq a_i<p_i^{k_i},i=1,2,\dotsc,d$.
For a prime $p$ and a nonnegative integer $x$, consider the base-$p$ expansion of the number $x$, say
$x=\nobreak x_0+x_1p+\dotsb+x_{\ell}p^{\ell}$. Put $r_p(x)\eqdef x_0p^{-1}+x_1p^{-2}+\dotsb+x_{\ell}p^{-\ell-1}$; note that
$r_p(x)$ is the number in $[0,1)$ obtained by reversing the base-$p$ digits of~$x$.
Define the function $r\colon \Z_{\geq 0}\to [0,1]^d$ by $r(x)\eqdef \bigl(r_{p_1}(x),\dotsc,r_{p_d}(x)\bigr)$.
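In code, the digit-reversal map reads as follows (an illustration; the function names are ours):

```python
def r_p(p, x):
    # reverse the base-p digits of a nonnegative integer x into [0, 1)
    val, scale = 0.0, 1.0 / p
    while x:
        x, digit = divmod(x, p)
        val += digit * scale
        scale /= p
    return val

def r(primes, x):
    return tuple(r_p(p, x) for p in primes)

print(r_p(2, 6))      # 6 = 110 in base 2, reversed: 0.011_2 = 0.375
print(r((2, 3), 5))   # 5 = 101_2 = 12_3, so (0.625, 0.777...)
```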
Our construction is broken into two stages. The set that we construct in the first stage
is an $r$-image of a certain subset of $\Z_{\geq 0}$. (Note that the usual Halton--Hammersley construction is the $r$-image of an interval of length~$n$.)
This set has $O(n d \log d)$ elements and intersects \emph{almost} all the boxes of volume about~$1/n$.
In the second stage of the construction, we show that $d+1$ suitably chosen translates
of the first set meet all the boxes of volume~$1/n$.
\paragraph{First stage.} To simplify the proof, we will discretize the boxes we work with. We will do so by shrinking them slightly,
so that the $i$'th coordinates have terminating base-$p_i$ expansions.
With hindsight we choose $p_i$ to be the $(d+i)$'th smallest prime, for each $i=1,2,\dots,d$.
Put $\gamma\eqdef p_1p_2\dots p_d$, and let $n$ be an arbitrary integer divisible by $2\gamma^{11}$.
\begin{definition}
We say that a box $\beta$ is a \emph{good box} if it is of the form
\begin{equation}\label{eq:goodboxform}
\beta=\prod_{i=1}^d \left[\frac{a_i}{p_i^{k_i}}+\frac{b_i}{p_i^{k_i+3}},\frac{a_i}{p_i^{k_i}}+\frac{c_i}{p_i^{k_i+3}}\right),
\end{equation}
for some integers $0\leq b_i<c_i\leq p_i^3$ and $k_i\in\Z_{\geq 0}$ for $i=1,2,\dots,d$, and whose volume is $1/4n\leq\nobreak \vol(\beta)\leq 1/n$.
Let $B=\prod_i \left[a_i/p_i^{k_i},(a_i+1)/p_i^{k_i}\right)$.
We call $(B,\beta)$ a \emph{good pair}.
\end{definition}
Since the $i$'th side length of $B$ is at most $p_i^3$ times that of $\beta$, it follows that $\vol(B)\leq \gamma^3/n$.
In other words, a (discretized) box $\beta$ is good if it is contained in a canonical box $B$ that is not much larger than $\beta$.
Note that, since a good $\beta$ can sometimes be written in the form \eqref{eq:goodboxform} in more than one way,
the choice of $B$ in the definition of a good pair is, in general, not unique.\medskip
Our aim in this stage of construction is to find a set $P$ that meets every good box. In the next stage
we will superimpose several copies of $P$ to create a set that meets every large box. It is precisely
because the family of good boxes is richer than the family of canonical boxes
that we lose less in the second stage than if we used the Halton--Hammersley construction.
\medskip
Suppose $B$ is a canonical box. Write it as $B=\prod_i\left[a_i/p_i^{k_i},(a_i+1)/p_i^{k_i}\right)$, and consider $r^{-1}(B)$. The set $r^{-1}(B)$ consists
of the solutions to the system
\begin{align*}
x&\equiv a_1'\pmod{p_1^{k_1}},\\
x&\equiv a_2'\pmod{p_2^{k_2}},\\
&\setbox0\hbox{$\equiv$}\mathrel{\makebox[\wd0]{\vdots}}\\
x&\equiv a_d'\pmod{p_d^{k_d}},
\end{align*}
where $a_i'\eqdef r_{p_i}(a_i)p_i^{k_i}$, i.e., $a_i'$ is the integer obtained from $a_i$ by reversing its base-$p_i$ expansion.
By the Chinese Remainder theorem, the set $r^{-1}(B)$ is an infinite arithmetic progression with step $D(B)\eqdef p_1^{k_1}p_2^{k_2}\dotsb p_d^{k_d}=1/\vol(B)$.
Let $A(B)$ be the least element of $r^{-1}(B)$, so that
\[r^{-1}(B)=\lbrace A(B)+kD(B): k\in\mathbb{Z}_{\geq 0}\rbrace.\]
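To illustrate on a toy case (our own example, with $d=2$ and primes $(2,3)$ rather than the primes chosen later): for $B=[1/2,1)\times[2/3,1)$ we have $D(B)=6=1/\vol(B)$, and the progression $r^{-1}(B)$ can be found by brute force.

```python
def r_p(p, x):
    # base-p digit reversal, as in the text
    val, scale = 0.0, 1.0 / p
    while x:
        x, digit = divmod(x, p)
        val += digit * scale
        scale /= p
    return val

primes = (2, 3)
lo = (1 / 2, 2 / 3)     # B = [1/2, 1) x [2/3, 1), i.e., a = (1, 2), k = (1, 1)
hi = (1.0, 1.0)

D = 2 * 3               # p_1^{k_1} p_2^{k_2} = 1/vol(B)

def in_B(x):
    return all(lo[i] <= r_p(primes[i], x) < hi[i] for i in range(2))

A = next(x for x in range(D) if in_B(x))   # least element of r^{-1}(B)
print(A, D)                                # -> 5 6

# the whole arithmetic progression A + kD lands in B
assert all(in_B(A + k * D) for k in range(50))
```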
Given a good pair $(B,\beta)$, define \[L_B(\beta)\eqdef\{ k \in\mathbb{Z}_{\geq 0} : r\bigl(A(B)+kD(B)\bigr) \in \beta\}.\]
\begin{claim}\label{claim:L}
The set $\cL\eqdef \lbrace L_B(\beta):(B,\beta)\mbox{ is a good pair}\rbrace$ is of size at most $\gamma^{12}$.
\end{claim}
\begin{proof}
Let $(B,\beta)$ be a good pair. Write $B$ and $\beta$ in the form
\[B=\prod_{i=1}^d \left[\frac{a_i}{p_i^{k_i}},\frac{a_i+1}{p_i^{k_i}}\right),\qquad \beta=\prod_{i=1}^d \left[\frac{a_i}{p_i^{k_i}}+\frac{b_i}{p_i^{k_i+3}},\frac{a_i}{p_i^{k_i}}+\frac{c_i}{p_i^{k_i+3}}\right).\]
We know that $r\bigl(A(B)+kD(B)\bigr)\in \beta$ is equivalent to
\begin{align*}
A(B)+kD(B)&\in a_1'+p_1^{k_1}J_1\pmod{p_1^{k_1+3}},\\
A(B)+kD(B)&\in a_2'+p_2^{k_2}J_2\pmod{p_2^{k_2+3}},\\
&\setbox0\hbox{$\in$}\mathrel{\makebox[\wd0]{\vdots}}\\
A(B)+kD(B)&\in a_d'+p_d^{k_d}J_d\pmod{p_d^{k_d+3}},
\end{align*}
where the sets $J_i$ consist of base-$p_i$ reversals of the numbers in the interval $[b_i,c_i)$ (regarded as $3$-digit strings in base $p_i$).
On the other hand, we know that
\begin{align*}
A(B)+kD(B)&\equiv a_1'+(\alpha_1+k\delta_1)p_1^{k_1}\pmod{p_1^{k_1+3}},\\
A(B)+kD(B)&\equiv a_2'+(\alpha_2+k\delta_2)p_2^{k_2}\pmod{p_2^{k_2+3}},\\
&\setbox0\hbox{$\equiv$}\mathrel{\makebox[\wd0]{\vdots}}\\
A(B)+kD(B)&\equiv a_d'+(\alpha_d+k\delta_d)p_d^{k_d}\pmod{p_d^{k_d+3}}
\end{align*}
for some $\alpha_i,\delta_i\in\mathbb{Z}/p_i^3\mathbb{Z},i=1,2,\dots,d$. There are at most $\gamma^6$ different choices for $(\alpha_i,\delta_i)_{i=1}^d$.
Also, there are at most $\gamma^6$ different choices for $(b_i,c_i)_{i=1}^d$ satisfying $0\leq b_i<c_i\leq p_i^3$. Since $L_B(\beta)$ is determined by $(\alpha_i,\delta_i,b_i,c_i)_{i=1}^d$, the claim follows.
\end{proof}
To each canonical box $B$ of volume between $1/4 n$ and $\gamma^3/n$ we assign
a \emph{type}, so that boxes of the same type behave similarly.
Formally, let $\A(B)$ be the unique multiple of $n/\gamma^4$ satisfying
$0\leq A(B)-\A(B)<n/\gamma^4$.
Similarly,
let $\D(B)$ be the unique multiple of $n/\gamma^{11}$ satisfying \linebreak $0\leq D(B)- \D(B)< n/\gamma^{11}$.
The type of $B$ is then the pair $\T(B)\eqdef \bigl(\A(B),\D(B)\bigr)$.
Note that, from $1/4n\leq \vol(B)\leq \gamma^3/n$ and $D(B)=1/\vol(B)$ it follows that
\begin{equation}\label{eq:dbound}
n/\gamma^3-n/\gamma^{11}<\D(B)\leq 4n.
\end{equation}
\begin{claim}\label{claim:type}
The number of types is at most $\gamma^{16}$.
\end{claim}
\begin{proof}
Since $A(B)<D(B)\leq 4 n$, the number of types is at most
$(\frac{4 n}{n/\gamma^4})(\frac{4 n}{n/\gamma^{11}})=16\gamma^{15}\leq \gamma^{16}$.
\end{proof}
For a type $\T=(\A,\D)$, let $\Y(\T)\eqdef \lbrace \A+k\D : k\in\mathbb{Z}_{\geq 0}\rbrace$ be the arithmetic progression generated by
$\A$ and $\D$. Note that if $\T=\T(B)$, then $\Y(\T)$ is an approximation to $r^{-1}(B)$.
In particular, $\Y(\T)$ and $r^{-1}(B)$ intersect any long interval that is not too far from the origin in approximately
the same way.
For integers $a,b$, denote by $[a,b)$ the integer interval consisting of integers $x$ satisfying $a\leq x<b$.
Our construction will be a union of intervals of length $n/\gamma^3$ whose left endpoints are in $[0,n\gamma^4)$.\smallskip
We first estimate the difference between respective terms in $\Y(\T)$ and $r^{-1}(B)$ inside $[0,n\gamma^4)$.
\begin{claim}\label{claim:shrink}
Suppose $\T(B)=\bigl(\A(B),\D(B)\bigr)$.
Then for any integer $x\in [0,n\gamma^4)$ and any integer $k$,
$\A(B)+k\D(B)\in [x,x+ n/2\gamma^3)$ implies $A(B)+kD(B)\in [x,x+ n/\gamma^3)$.
\end{claim}
\begin{proof}
For such $k$, since $\mathcal{A}(B)+k\mathcal{D}(B)< n\gamma^4+ n/\gamma^3$, from \eqref{eq:dbound} we deduce that \[k< \frac{n\gamma^4+ n/\gamma^3}{n/\gamma^3-n/\gamma^{11}}\leq 2\gamma^7.\]
In view of $k\geq 0$, this implies that
\[0\leq (A(B)+kD(B))-(\mathcal{A}(B)+k\mathcal{D}(B))\leq \frac{n}{\gamma^4}+2\gamma^7\cdot \frac{n}{\gamma^{11}}= \frac{3n}{\gamma^4},\]
and hence $A(B)+kD(B)\in[x,x+n/2\gamma^3+3n/\gamma^4)\subseteq [x,x+n/\gamma^3)$.
\end{proof}
For a type $\T$ and $L\in\cL$ that satisfy $\T=\T(B)$ and $L=L_B(\beta)$ for some good pair $(B,\beta)$, define
\[\Y_{\T}(L)\eqdef\lbrace \A+k\D:k\in L\rbrace.\]
With this definition,
$\Y_{\T}(L)$
is the approximation to $r^{-1}(\beta)$ induced by the approximation $\Y(\T)$
to~$r^{-1}(B)$.
\begin{claim}
The set $\bY_{\T}(L)=\Y_{\T}(L)\cap [0,n\gamma^4)$ is of size at least $\gamma^4/16+1$.
\end{claim}
\begin{proof}
Let $(B,\beta)$ be a good pair such that $\T=\T(B)$ and $L=L_B(\beta)$.
The set $L_B(\beta)$ is $\gamma^3$-periodic, i.e., $k\in L_B(\beta)$ implies $k+\gamma^3\in L_B(\beta)$.
The intersection of any interval of length $\gamma^3$ with $L_B(\beta)$ is of size exactly $\gamma^3\frac{\vol(\beta)}{\vol(B)}$.
Since the preimage of $[0,n\gamma^4)$ under the map $k\mapsto \A+k\D$ contains
\[
\left\lfloor \frac{n\gamma^4-\A}{\gamma^3\D}\right\rfloor\geq \frac{n\gamma}{\D}-2\geq n\gamma \vol(B)-2\geq \frac{1}{2}n\gamma \vol(B)
\]
non-overlapping intervals of length $\gamma^3$, the size of $\bY_{\T}(L)$ is at least
\[\frac{1}{2}n\gamma \vol(B)\cdot \gamma^3\frac{\vol(\beta)}{\vol(B)}=\frac{1}{2}n \gamma^4 \vol(\beta)\geq \gamma^4/16+1.\qedhere\]
\end{proof}
\begin{claim}\label{claim:probability}
Let $x$ be chosen uniformly from $[0,n\gamma^4)$. Then
$\Pr\bigl[\,\bY_{\T}(L)\cap [x,x+n/2\gamma^3) \neq \emptyset \bigr]\geq 1/32\gamma^3$.
\end{claim}
\begin{proof}
Let $y\in \bY_{\T}(L)$ be arbitrary. If $y\notin [0,n/2\gamma^3)$, then $\Pr[y\in [x,x+n/2\gamma^3)]=1/2\gamma^7$. Since
$\D>n/\gamma^3-n/\gamma^{11}\geq n/2\gamma^3$, the set $\bY_{\T}(L)$ contains at most one element in the interval $[0,n/2\gamma^3)$.
Hence \[\E\bigl[ \abs[\big]{\bY_{\T}(L)\cap [x,x+n/2\gamma^3)} \bigr]\geq 1/32\gamma^3.\]
Since elements of $\bY_{\T}(L)$ are at least $\D$ apart, $\abs{\bY_{\T}(L)\cap [x,x+n/2\gamma^3)}\in \{0,1\}$ for all $x$.
Therefore, \[\Pr\bigl[\,\bY_{\T}(L)\cap [x,x+n/2\gamma^3) \neq \emptyset \bigr]=\E\bigl[ \abs[\big]{\bY_{\T}(L)\cap [x,x+n/2\gamma^3)} \bigr]\geq 1/32\gamma^3.\qedhere\]
\end{proof}
Sample $900 \gamma^3 \log \gamma$ elements uniformly at random from $[0,n\gamma^4)$, independently from one another.
Let $X$ be the resulting set. Then by the preceding claim
\begin{align*}
\Pr\bigl[\,\bY_{\T}(L)\cap \bigl(X+[0,n/2\gamma^3)\bigr)=\emptyset\bigr]\leq (1-1/32\gamma^3)^{900 \gamma^3 \log \gamma}<\gamma^{-28}.
\end{align*}
From \Cref{claim:L,claim:type} and the union bound it then follows that there exists a choice of $X$
such that $\bY_{\T}(L)\cap \bigl(X+[0,n/2\gamma^3)\bigr)$ is non-empty whenever $\T=\T(B)$, $L=L_B(\beta)$ and $(B,\beta)$ is a good pair.
In other words, for every $(B,\beta)$ there exist $x\in X$ and an integer $k\in L_B(\beta)$ such that $\A(B)+k\D(B)\in [x,x+n/2\gamma^3)$.
By \Cref{claim:shrink} this implies that $A(B)+kD(B)\in [x,x+n/\gamma^3)$ for the same $x$ and~$k$, whereas the definition
of $L_B(\beta)$ implies that $r\bigl(A(B)+kD(B)\bigr)\in \beta$.
Because this holds for every good pair $(B,\beta)$,
the set $P\eqdef r\bigl(X+[0,n/\gamma^3) \bigr)$ meets every good box.
Note that
$\abs{P}\leq \abs{X}\cdot \frac{n}{\gamma^3}\leq 900 \log \gamma\cdot n\leq 3000 d\log d \cdot n$ (since $\log \gamma\leq d\log p_d\leq 3d\log d$).
\paragraph{Second stage.} So far we have worked with boxes whose coordinates are rational numbers with denominators
of the form $p_i^{k_i}$. Given an arbitrary box, we shall shrink it down to a box of such form. We begin by describing this
process.
A \emph{$p$-interval} is an interval of the form $[a/p^k,b/p^k)$ for some integers $0\leq a<b<p^k$.
A \emph{canonical $p$-interval} is an interval of the form $[a/p^k,(a+1)/p^k)$ with $0\leq a<p^k$.
Note that canonical boxes are precisely the boxes that are Cartesian products of canonical intervals in appropriate bases.
A $p$-interval $[a/p^k,b/p^k)$ is \emph{well-shrunk} if $b-a<p^2$.
\begin{claim} Every interval $[s,u)$
contains a well-shrunk $p$-interval of length at least $(1-2/p)\len [s,u)$.
\end{claim}
\begin{proof}
Let $k$ be the smallest integer satisfying $\len [s,u)\geq p^{-k}$. Let $I$ be the largest interval
of the form $I=[a/p^{k+1},b/p^{k+1})$ contained in $[s,u)$. Then
$\len I
\geq u-s-2p^{-k-1} \geq (1-2/p)(u-s)$, and
$b-a=p^{k+1}\len I\leq p^{k+1}\len [s,u)<p^2$.
\end{proof}
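The shrinking procedure in this proof is constructive. A small exact-arithmetic sketch (function name and test interval are ours):

```python
from fractions import Fraction
from math import ceil, floor

def well_shrunk_subinterval(s, u, p):
    """Largest interval of the form [a/p^(k+1), b/p^(k+1)) inside [s, u),
    where k is the smallest integer with len[s, u) >= p^(-k)."""
    k = 0
    while u - s < Fraction(1, p**k):
        k += 1
    q = p**(k + 1)
    a, b = ceil(s * q), floor(u * q)   # [a/q, b/q) is a subset of [s, u)
    return a, b, q

a, b, q = well_shrunk_subinterval(Fraction(1, 3), Fraction(2, 3), p=5)
assert b - a < 5**2                                          # well-shrunk
assert Fraction(b - a, q) >= Fraction(3, 5) * Fraction(1, 3)  # >= (1 - 2/p) len
```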
Call an interval $[s,u)$ \emph{$p$-bad} if it contains a rational number with denominator $p^{k+1}$, where $\len [s,u)<2p^{-k-2}$ and $k\in \Z_{\geq 0}$.
\begin{claim}\label{claim:bad}
A box $\alpha=\prod_i[s_i,u_i)\subset [0,1]^d$ of volume $1/n$ fails to contain a good box only if, for some $i\in[d]$,
the interval $[s_i,u_i)$ is $p_i$-bad.
\end{claim}
\begin{proof}
For each $i\in [d]$, let $[s_i',u_i')$ be a well-shrunk $p_i$-interval
contained in $[s_i,u_i)$ as above. Let $\beta\eqdef \prod_i [s_i',u_i')$. Note that $\vol(\beta)\geq \vol(\alpha)\prod_i (1-2/p_i)\geq 1/4n$.
Let $B=\prod_i [a_i/p_i^{k_i},(a_i+1)/p_i^{k_i})$ be the smallest canonical box containing $\beta$.
Since the $p_i$\nobreakdash-interval $[s_i',u_i')$ is contained in $[a_i/p_i^{k_i},(a_i+1)/p_i^{k_i})$, we may write
it in the form \[[s_i',u_i')=[a_i/p_i^{k_i}+b_i/p_i^{\ell_i},a_i/p_i^{k_i}+c_i/p_i^{\ell_i})\] for
some integers $0\leq b_i<c_i<p_i^{\ell_i-k_i}$. Since $[s_i',u_i')$ is well-shrunk, $c_i-b_i<p_i^2$.
If $(B,\beta)$ is not a good pair, there exists $i\in [d]$, such that $\ell_i\geq k_i+4$. Fix such an $i$.
By the minimality of $B$, the interval $[s_i',u_i')$ contains a rational number with denominator
$p_i^{k_i+1}$. Since $[s_i,u_i)$ contains $[s_i',u_i')$, this rational number is also contained
in $[s_i,u_i)$. As $\len [s_i,u_i)\leq (c_i-b_i+2)p_i^{-\ell_i}<(p_i^2+2)p_i^{-k_i-4}\leq 2p_i^{-k_i-2}$, the interval $[s_i,u_i)$ is $p_i$-bad.
\end{proof}
\begin{claim}\label{claim:translates}
Let $\Delta=1/p(p-1)$. Suppose $[s,u)\subset [1/p,1]$ is an arbitrary interval. Then at most one of its translates $[s,u),[s,u)-2\Delta,\dotsc,[s,u)-2d\Delta$ is $p$-bad.
\end{claim}
\begin{proof}
Suppose that, for some $r$, the interval $[s,u)-2r\Delta$ contains a rational number $a/p^{k+1}$ and is of length $\len [s,u)<2p^{-k-2}$.
Then the interval $[s_r,u_r)\eqdef [s,u)- 2r\Delta - a/p^{k+1}$ contains $0$ and is also of length $\len [s_r,u_r)<2p^{-k-2}$.
Hence, $u_r< 2p^{-k-2}$, and so the $(k+2)$'nd digit in the base-$p$ expansion of $u_r$ is either $0$ or~$1$.
Note that it is the same as the $(k+2)$'nd digit of $u-2r\Delta$.
Since the base-$p$ expansion of $\Delta$ is $0.01111\dotsb$ and $2d+1<p$,
for at most one of the numbers $u,u-2\Delta,\dotsc,u-2d\Delta$ is the $(k+2)$'nd digit
equal to $0$ or~$1$.
Hence, at most one of the intervals $[s,u),[s,u)-2\Delta,\dotsc,[s,u)-2d\Delta$ contains a rational number with denominator $p^{k+1}$.
\end{proof}
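The key fact used above, that the base-$p$ expansion of $\Delta=1/p(p-1)$ is $0.0111\dotsb$, is easy to confirm computationally (exact arithmetic; the helper name is ours):

```python
from fractions import Fraction

def base_p_digits(x, p, n):
    """First n digits after the radix point of x in base p, for x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= p
        d = int(x)      # the next digit is the integer part
        digits.append(d)
        x -= d
    return digits

# Delta = 1/p(p-1) expands as 0.0111... in base p, for every p >= 3.
for p in [3, 5, 7]:
    assert base_p_digits(Fraction(1, p * (p - 1)), p, 6) == [0, 1, 1, 1, 1, 1]
```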
Let $P$ be the set constructed in the first stage. Let $v\in [0,1]^d$ be the vector whose $i$'th coordinate
is $v_i=1/p_i(p_i-1)$. Let $P'\eqdef \bigcup_{r=0}^d (P+2rv)$. We claim that $P'$ meets every subbox
of $\prod_i [1/p_i,1]$ of volume~$1/n$.
Indeed, suppose $\alpha=\prod_i [s_i,u_i) \subset \prod_i [1/p_i,1]$ is an arbitrary box of volume~$1/n$.
Then by the preceding claim, there exists $r\in \{0,1,\dotsc,d\}$ such that for no $i\in[d]$ is the interval $[s_i,u_i)-2rv_i$
$p_i$-bad. \Cref{claim:bad} tells us that the box $\alpha-2rv$ contains a good box. Since
the set $P$ meets all good boxes, it follows that $P+2rv$ meets $\alpha$.
As $P+2rv\subset P'$, the set $P'$ indeed meets~$\alpha$.
Finally, we scale the box $\prod_i [1/p_i,1]$ onto $[0,1]^d$. This way, we turn the set $P'$
into a set that meets every subbox of $[0,1]^d$ of volume $\frac{1}{n}\prod p_i/(p_i-1)\leq 2/n$.
This set has size $\abs{P'}\leq (d+1)\cdot 3000d\log d\cdot n$.\medskip
This construction shows that $m_d(\lfloor 3000d(d+1)\log d\cdot n\rfloor)\leq 2/n$ for all $n$ that are divisible by $2\gamma^{11}$.
Since the limit $c_d=\lim_{n\to\infty} nm_d(n)$ exists, it then follows that
$c_d\leq 6000d(d+1)\log d$, which, by the Dumitrescu--Jiang inequality mentioned in the introduction,
implies that $m_d(b)\leq \frac{1}{b+1}\cdot 6000d(d+1)\log d$ for all $b$.
Because $6000d(d+1)\log d\leq 8000d^2\log d$, the proof is complete.
\section{Problems and remarks}
\begin{itemize}
\item Because of the $n^{-1/d}$ term, the bound in \Cref{thm:crude} is weak when the number
of points $n$ is small compared to the dimension~$d$. It is likely possible to
replace the term $n^{-1/d}$ with $O_d(n^{-1})$ by using a more sophisticated averaging argument.
In our argument we considered an average of translates of a
function supported on a fixed box~$B$. The error term $n^{-1/d}$ is due to
the points near the boundary of $[0,1]^d$ receiving less weight than the rest.
One can remedy this by using, in addition to the translates of $B$,
also elongated boxes of volume $\vol(B)$ to add weight in the regions
near the boundary of $[0,1]^d$. In this paper, we decided to sacrifice
the slightly stronger bound for a simpler proof.
For best constructions of low-dispersion sets that are good when $n$ is small compared to $d$, see \cite{ullrich_vybiral,litvak} (improving
on the earlier bounds in \cite{sosnovec}). These constructions are probabilistic. For an explicit construction, which has larger dispersion, see \cite{krieg}.
\item The low-dispersion sets are used in \cite{bddg}, \cite{Krieg_Rudolf}, and \cite[Theorem 11]{nr_tensor} to give algorithms
to approximate certain rank-one tensors. Because of that, it is useful to
derandomize the construction in \Cref{thm:constr}. The following is a way to do so. It gives an algorithm that computes a set from \Cref{thm:constr}
in $d^{O(d)}+O(dn)$ arithmetic operations.
The algorithm is broken into two steps. The first step is a pre-processing step, which depends solely on~$d$.
The second step takes the output of the first step and $n$ and quickly produces the $n$-point low-dispersion point set in $[0,1]^d$.
For the pre-processing step, we need some definitions.
\begin{definition}
For any tuple $(\alpha_i,\delta_i,b_i,c_i)_{i=1}^d$ with $\alpha_i,\delta_i,b_i,c_i\in\mathbb{Z}/p_i^3\mathbb{Z}$, $i=1,2,\ldots,d$, consider the system of $d$ equations in the unknown $k$
\begin{align*}
\alpha_1+k\delta_1&\in J_1 \pmod{p_1^{3}},\\
\alpha_2+k\delta_2&\in J_2 \pmod{p_2^{3}},\\
&\setbox0\hbox{$\equiv$}\mathrel{\makebox[\wd0]{\vdots}}\\
\alpha_d+k\delta_d&\in J_d \pmod{p_d^{3}},
\end{align*}
where the sets $J_i$ consist of base-$p_i$ reversals of the numbers in the interval $[b_i,c_i)$ (which are $3$-digit long in base $p_i$). Let $L$ be the set of solutions of this system. Then $\cL'$ consists of all such sets $L$ as $(\alpha_i,\delta_i,b_i,c_i)_{i=1}^d$ ranges over all tuples in $(\mathbb{Z}/p_i^3 \mathbb{Z})^{4d}$.
\end{definition}
From the proof of \Cref{claim:L}, we have $L_B(\beta)\in \cL'$, for any $n$ and any good pair $(B,\beta)$. Also, note that $\abs{\cL'}\leq \gamma^{12}$. For each tuple $(\alpha_i,\delta_i,b_i,c_i)_{i=1}^d$, testing whether $k$ satisfies the equations can be done in $O(d)$ many arithmetic operations (by computing $d$ left-hand sides, reversing their digits, and seeing if the results are in appropriate intervals). Since any $L\in\cL'$ is $\gamma^3$-periodic, we only need to test $k$ satisfying $0\leq k<\gamma^3$. Thus, the set $\cL'$ can be computed in $O(d\gamma^{15})=d^{O(d)}$ many arithmetic operations.
\begin{definition}
We say that a subset $\bY'\subseteq [0,2\gamma^{15})$ is a \emph{representative} if there exist integers $\A',\D'$ and a set $L\in\cL'$ satisfying the following conditions:
\begin{itemize}
\item $\D'$ is even, and $\A'$ is divisible by $2\gamma^7$,
\item $0\leq \A'<\D'$,
\item $2\gamma^8-2< \D'\leq 8\gamma^{11}$,
\item $\bY'=(\A'+L\D')\cap [0,2\gamma^{15})$,
\item $|\bY'|\geq \gamma^4/16+1$.
\end{itemize}
\end{definition}
A representative is, roughly speaking, a sequence $\A(B)+L_B(\beta)\D(B)$ generated by the type of a good pair $(B,\beta)$ and then scaled by $2\gamma^{11}/n$.
The input of the pre-processing step is just $d$, and the output is a set $X'\subseteq [0,2\gamma^{15})$ of size at most $900\gamma^3\log\gamma$ such that $X'+[0,\gamma^8)$ intersects all the representatives.
The existence of $X'$ is guaranteed by the following two claims and the union bound.
\begin{claim}\label{claim:repbound}
The number of representatives is at most $\gamma^{28}$.
\end{claim}
\begin{claim}\label{claim:hittingbound}
Let $x'$ be chosen uniformly from $[0,2\gamma^{15})$. For any fixed representative $\bY'$, the probability
that $[x',x'+\gamma^8)$ hits $\bY'$ satisfies
$\Pr\bigl[\,\bY'\cap [x',x'+\gamma^8) \neq \emptyset \bigr]\geq 1/32\gamma^3$.
\end{claim}
\begin{proof}[Proof of~\Cref{claim:repbound}]
The numbers of possible $\A'$, $\D'$, and $L$ in the definition of representatives are at most $8\gamma^{11}/2\gamma^7=4\gamma^4$, $8\gamma^{11}/2=4\gamma^{11}$, and $\gamma^{12}$, respectively. Thus, the number of representatives is at most $16\gamma^{27}\leq \gamma^{28}$.
\end{proof}
\begin{proof}[Proof of~\Cref{claim:hittingbound}]
The proof is similar to the proof of \Cref{claim:probability}. Let $y\in \bY'$ be arbitrary. If $y\notin [0,\gamma^8)$, then $\Pr[y\in [x',x'+\gamma^8)]=1/2\gamma^7$. Assume $\bY'$ is defined by $\A',\D',L$. Since
$\D'>2\gamma^8-2\geq \gamma^8$, the set $\bY'$ contains at most one element in the interval $[0,\gamma^8)$.
Hence \[\E\bigl[ \abs[\big]{\bY'\cap [x',x'+\gamma^8)} \bigr]\geq 1/32\gamma^3.\]
Since elements of $\bY'$ are at least $\D'$ apart, $\abs{\bY'\cap [x',x'+\gamma^8)}\in \{0,1\}$ for all $x'$.
Therefore, \[\Pr\bigl[\,\bY'\cap [x',x'+\gamma^8) \neq \emptyset \bigr]=\E\bigl[ \abs[\big]{\bY'\cap [x',x'+\gamma^8)} \bigr]\geq 1/32\gamma^3.\qedhere\]
\end{proof}
It follows from the second part of the claim that
\begin{align*}
\Pr\bigl[\,\bY'\cap \bigl(X'+[0,\gamma^8)\bigr)=\emptyset\bigr]\leq (1-1/32\gamma^3)^{900 \gamma^3 \log \gamma}<\gamma^{-28}.
\end{align*}
Thus, by the first part of the claim and the union bound, there exists a choice of $X'$ such that $X'+[0,\gamma^8)$ intersects all the representatives. We can find such an $X'$ using the method of conditional expectations in time $\gamma^{O(1)}$.\smallskip
For any fixed $n$ divisible by $2\gamma^{11}$, we use $X'$ from the pre-processing step to find the set $P$ that meets every good box, similarly to stage one of the construction in \Cref{thm:constr}. Namely, given $X'$ as above, let $X=\frac{n}{2\gamma^{11}}X'$. The desired set is then $P\eqdef r(X+[0,n/\gamma^3))$. Indeed, given a good pair $(B,\beta)$, let $\bY'$ be the representative
corresponding to the triple $(\A',\D',L)$ where
$\A'=\A(B)\times 2\gamma^{11}/n$, $\D'=\D(B)\times 2\gamma^{11}/n$, and $L=L_B(\beta)$. From the pre-processing step, there exists some $x'\in X'$ such that $[x',x'+\gamma^8)\cap \bY'\neq \emptyset$. This implies $\A(B)+k\D(B)\in\frac{n}{2\gamma^{11}}x'+[0,n/2\gamma^3)$ for some $k\in L_B(\beta)$, and hence the set $P$ meets $\beta$.
Given such a set $P$, we can then proceed exactly as in stage two of the proof of \Cref{thm:constr}, which is
completely deterministic. Naively, it takes $O(d\log n)$ steps
to compute each element of $P$ since computing the function $r_{p_i}$
requires $O(\log n)$ steps, for each $i$. That would make the total
number of operations in the algorithm $d^{O(d)}+O(dn\log n)$.
However, since the base-$p_i$ expansions of adjacent integers are
almost identical, it is possible to re-use the value of $r_{p_i}(m)$
when computing $r_{p_i}(m+1)$. This way one obtains an algorithm
with the total number of operations being $d^{O(d)}+O(dn)$.
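This digit re-use can be sketched as follows (our function names; we maintain the base-$p$ digit string of $m$ and update the reversed value in amortized constant time — a sketch of the idea, not the paper's exact routine):

```python
from fractions import Fraction

def radical_inverse(m, p):
    """Naive O(log m) digit reversal of m in base p, as a value in [0, 1)."""
    x, f = Fraction(0), Fraction(1, p)
    while m:
        x += (m % p) * f
        m //= p
        f /= p
    return x

def advance(digits, value, p):
    """Update (digits, value) from m to m+1; carries cost O(1) amortized."""
    i = 0
    while i < len(digits) and digits[i] == p - 1:
        value -= Fraction(p - 1, p**(i + 1))   # digit p-1 rolls over to 0
        digits[i] = 0
        i += 1
    if i == len(digits):
        digits.append(0)
    digits[i] += 1
    return value + Fraction(1, p**(i + 1))

digits, value = [], Fraction(0)
for m in range(1, 200):
    value = advance(digits, value, p=3)
    assert value == radical_inverse(m, 3)
```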
\item Dispersion has also been studied on the torus. In this variant of the problem,
the boxes are products of \emph{toroidal intervals}, which, in addition to the usual intervals $(a,b)$ for $a<b$,
include the sets of the form $(a,b)\eqdef (a,1]\cup [0,b)$ for $a>b$. Denote the $d$-dimensional
torus by $\Tor^d$, and let $m_d^{\Tor}$ be the corresponding dispersion function; that is, $m_d^{\Tor}(n)$ is the largest number
such that every $n$-point set on $\Tor^d$ leaves an empty toroidal box of volume $m_d^{\Tor}(n)$.
Ullrich \cite{ullrich} proved that $m_d^{\Tor}(n)\geq \min\{1,d/n\}$. This bound is trivially sharp for $d=1$,
and it was shown in \cite{breneis_hinrichs} that it is also sharp for $d=2$ and infinitely many~$n$.
In the opposite direction, the construction of Larcher, which was mentioned in the introduction, carries over verbatim to the
torus, and so $m_d^{\Tor}(n)\leq 2^{7d+1}/n$. We can improve the base of the exponent from $2^7$ to $e/2$.
\begin{proposition}
The toroidal dispersion satisfies $m_d^{\Tor}(n)\leq 32000(e/2)^dd^3\log d/n$, for all $n$ divisible by~$d$.
\end{proposition}
\begin{proof}
Let $P$ be the set obtained by invoking \Cref{thm:constr} with $n/d$ in place of~$n$.
Write $P+u$ to denote the shift of $P$ by vector $u$, where the `shift' is understood as a shift on~$\Tor^d$.
Set $v\eqdef\nobreak(1/d,1/d,\dotsc,1/d)\in \Tor^d$, and consider the shifts $P+rv$ for $r\in \{0,1,\dotsc,d-1\}$.
We claim that the toroidal dispersion of $\bigcup_{r=0}^{d-1}(P+rv)$ is at most $32000(e/2)^dd^3\log d/n$.
To prove this, it suffices, for every toroidal box $B_0$ of volume $32000(e/2)^dd^3\log d/n$, to find $r\in \{0,1,\dotsc,d-1\}$ such that the toroidal box
$B_r\eqdef B_0-rv$ contains a usual box of volume $8000d^3\log d/n$.
Write $\len(a,b)$ for the length of a toroidal interval $(a,b)$.
If $(a,b)$ is a toroidal interval, the largest usual interval contained in $(a,b)-x$ has length
$\len(a,b)$ if $x\notin (a,b)$ and $\max\bigl\{\len(a,x),\len(x,b)\bigr\}$ if $x\in (a,b)$.
For a toroidal interval $(a,b)$, let $f_{(a,b)}$ be the function given by
\[
f_{(a,b)}(x)\eqdef
\begin{cases}
\log\left(\max\left\{\frac{\len(a,x)}{\len(a,b)},\frac{\len(x,b)}{\len(a,b)}\right\}\right)&\text{for }x\in(a,b),\\
0&\text{for }x\notin (a,b).
\end{cases}
\]
If $B=\prod (a_i,b_i)$ is a toroidal box, the largest usual box contained in $B-(x_1,\dotsc,x_d)$ has volume
\[
\vol(B)\exp\Bigl(\sum f_{(a_i,b_i)}(x_i)\Bigr).
\]
We shall estimate $\tfrac{1}{d}\sum_{r=0}^{d-1} f_{(a_i,b_i)}(r/d)$ by comparing it to the respective integral:
Since the function $f_{(a,b)}$ is unimodal with minimum at
$x=(a+b)/2$, the total variation of $f_{(a,b)}$ is $f_{(a,b)}(a)+f_{(a,b)}(b)-2f_{(a,b)}(\frac{a+b}{2})=2\log 2$.
Hence,
\begin{equation}\label{eq:singleavg}
\frac{1}{d}\sum_{r=0}^{d-1}f_{(a,b)}(r/d)\geq \int_{0}^1f_{(a,b)}(x)\,dx-2\log2/d.
\end{equation} We can bound the integral in turn by
$
\int f_{(a,b)}(x)\,dx=(\log 2-1)\len(a,b)\geq \log 2-1
$.
Summing~\eqref{eq:singleavg} over each of the $d$ coordinate directions, we then obtain
\[
\frac{1}{d}\sum_{r=0}^{d-1}\sum_{i=1}^d f_{(a_i,b_i)}(r/d)\geq d\log(2/e)-2\log 2.
\]
Hence, given any toroidal box $B_0$, there exists $r\in \{0,1,\dotsc,d-1\}$ such that the toroidal box $B_r=B_0-rv$ contains a usual box
of volume at least $\vol(B_0)\cdot \tfrac{1}{4}(2/e)^d$. In particular, if $\vol(B_0)\geq 32000(e/2)^dd^3\log d/n$,
then $B_r$ contains a usual box of volume $8000 d^3\log d/n$.
\end{proof}
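The value of the integral used in the proof, $\int f_{(a,b)}(x)\,dx=(\log 2-1)\len(a,b)$, can be confirmed numerically for a sample non-wrapping interval of our choosing (plain midpoint Riemann sum):

```python
import math

a, b = 0.2, 0.8              # a sample ordinary (non-wrapping) interval
L = b - a

def f(x):
    """f_{(a,b)}(x): log of the relative length of the larger remaining piece."""
    if not (a < x < b):
        return 0.0
    return math.log(max(x - a, b - x) / L)

n = 200_000
integral = sum(f((i + 0.5) / n) for i in range(n)) / n
assert abs(integral - (math.log(2) - 1) * L) < 1e-4
```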
It might be that the toroidal dispersion is indeed larger than the usual dispersion.
One piece of evidence in that direction is that the VC dimension of boxes in $[0,1]^d$
is $2d$, whereas the VC dimension of toroidal boxes is asymptotic to $d\log_2d$, as recently
shown by Gillibert, Lachmann and M\"ullner \cite{gillibert_lachmann_mullner}.
\item The first stage in the proof of \Cref{thm:constr} can be modified to yield a set $P$
such that the intersection $P\cap B$ with any dyadic box $B$ contains approximately
the same number of points. This can be used to give better constructions of sets in $\R^d$
without any large convex holes. The details are in \cite{almostnets} and \cite{convexholes}.
\item We suspect that the smallest dispersion of an $n$-point set is of order
$\Theta(d\log d\cdot \frac{1}{n})$.
\end{itemize}
\bibliographystyle{plain}
\part{Gaussian Elimination}
\newpage
\chapter{LU Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{LU Decomposition}
Perhaps the best known and the first matrix decomposition we should know about is the LU decomposition. We now state the result in the following theorem; the proof of its existence is deferred to the following sections.
\begin{theoremHigh}[LU Decomposition with Permutation]\label{theorem:lu-factorization-with-permutation}
Every nonsingular $n\times n$ square matrix $\bA$ can be factored as
\begin{equation}
\bA = \bP\bL\bU, \nonumber
\end{equation}
where $\bP$ is a permutation matrix, $\bL$ is a unit lower triangular matrix (i.e., lower triangular matrix with all 1's on the diagonal), and $\bU$ is a nonsingular upper triangular matrix.
\end{theoremHigh}
Note that, in the remainder of this text, we will put decomposition-related results in blue boxes, while other claims appear in gray boxes. This convention applies throughout the book without special mention.
\begin{remark}[Decomposition Notation]
The above decomposition applies to any nonsingular matrix $\bA$. We will see that this decomposition arises from the elimination steps, in which row operations of subtraction and exchange of two rows are allowed; the subtractions are recorded in the matrix $\bL$, and the row exchanges are recorded in the matrix $\bP$. To make the row exchanges explicit, the common form of the above decomposition is $\bQ\bA=\bL\bU$, where $\bQ=\bP^\top$ records the exact exchanges applied to the rows of $\bA$; otherwise, $\bP$ would record the row exchanges of $\bL\bU$. In our case, we state the decomposition for the matrix $\bA$ itself rather than for $\bQ\bA$. For this reason, we will put the permutation matrix on the right-hand side of the equation for the remainder of the text without special mention.
\end{remark}
In particular, there are circumstances under which the permutation matrix is not required. Whether this is the case depends on the leading principal minors of $\bA$, for which we provide a rigorous definition that will be important in the sequel.
\begin{definition}[Leading Principal Minors\index{Leading principal minors}]\label{definition:leading-principle-minors}
Let $\bA$ be an $n\times n$ square matrix. A $k \times k$ submatrix of $\bA$ obtained by deleting the \textbf{last} $n-k$ columns and the \textbf{last} $n-k$ rows from $\bA$ is called a $k$-th order \textbf{leading principal submatrix} of $\bA$, that is, the $k\times k$ submatrix taken from the top left corner of $\bA$. The determinant of the $k \times k$ leading principal submatrix is called a $k$-th order \textbf{leading principal minor} of $\bA$.
\end{definition}
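As a quick illustration of the definition (a small sketch; the sample matrix and helper name are ours), the leading principal minors can be computed directly:

```python
def det(M):
    """Determinant by cofactor expansion along the first row (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

A = [[2, 1, 1],
     [4, 3, 3],
     [8, 7, 9]]
# k-th leading principal minor: determinant of the top-left k x k submatrix.
minors = [det([row[:k] for row in A[:k]]) for k in range(1, 4)]
assert minors == [2, 2, 4]   # all nonzero, so A admits an LU decomposition
```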
Under mild conditions on the leading principal minors of matrix $\bA$, the LU decomposition will not involve the permutation matrix.
\begin{theoremHigh}[LU Decomposition without Permutation]\label{theorem:lu-factorization-without-permutation}
For any $n\times n$ square matrix $\bA$, if all the leading principal minors are nonzero, i.e., $\det(\bA_{1:k,1:k})\neq 0$, for all $k\in \{1,2,\ldots, n\}$, then $\bA$ can be factored as
\begin{equation}
\bA = \bL\bU, \nonumber
\end{equation}
where $\bL$ is a unit lower triangular matrix (i.e., lower triangular matrix with all 1's on the diagonal), and $\bU$ is a \textbf{nonsingular} upper triangular matrix.
Specifically, this decomposition is \textbf{unique}. See Corollary~\ref{corollary:unique-lu-without-permutation} (p.~\pageref{corollary:unique-lu-without-permutation}).
\end{theoremHigh}
\begin{remark}[Other Forms of the LU Decomposition without Permutation]
The leading principal minors being nonzero means, in other words, that the leading principal submatrices are nonsingular.
\paragraph{Singular $\bA$} In the above theorem, $\bA$ is assumed to be nonsingular as well. An LU decomposition also exists for a singular matrix $\bA$; however, the matrix $\bU$ is then singular too. As shown in the following section, if $\bA$ is singular, some pivots will be zero, and the corresponding diagonal values of $\bU$ will be zero.
\paragraph{Singular leading principal submatrices} Even if $\bA$ is nonsingular, some of its leading principal submatrices might be singular. If some of the leading principal minors are zero, an LU decomposition may still exist, but if so, it is not unique.
\end{remark}
We will discuss where this decomposition comes from in the next section. There are also generalizations of LU decomposition to non-square or singular matrices, such as rank-revealing LU decomposition. Please refer to \citet{pan2000existence, miranian2003strong, dopico2006multiple} for further discussion or we will have a short discussion in Section~\ref{section:rank-reveal-lu-short} (p.~\pageref{section:rank-reveal-lu-short}).
\index{Backward substitution}
\index{Gaussian elimination}
\section{Relation to Gaussian Elimination}\label{section:gaussian-elimination}
Solving linear system equation $\bA\bx=\bb$ is the basic problem in linear algebra.
Gaussian elimination transforms a linear system into an upper triangular one by applying simple \textit{elementary row transformations} on the left of the linear system in $n-1$ stages if $\bA\in \real^{n\times n}$. As a result, it is much easier to solve by a \textit{backward substitution}. The elementary transformation is defined rigorously as follows.
\begin{definition}[Elementary Transformation\index{Elementary transformation}]
For square matrix $\bA$, the following three transformations are referred to as \textbf{elementary row/column transformations}:
\item 1. Interchanging two rows (or columns) of $\bA$;
\item 2. Multiplying all elements of a row (or a column) of $\bA$ by a nonzero value;
\item 3. Adding any row (or column) of $\bA$, multiplied by a nonzero number, to any other row (or column).
\end{definition}
Specifically, the elementary row transformations used in Gaussian elimination are realized by multiplying $\bA$ on the left by unit lower triangular matrices, and the corresponding elementary column transformations by multiplying $\bA$ on the right by unit upper triangular matrices.

Gaussian elimination is described by the third type of elementary row transformation above. Suppose the upper triangular matrix obtained by Gaussian elimination is given by $\bU = \bE_{n-1}\bE_{n-2}\ldots\bE_1\bA$ ($n-1$ steps), and in the $k$-th stage, the $k$-th column of $\bE_{k-1}\bE_{k-2}\ldots\bE_1\bA$ is $\bx\in \real^n$. Gaussian elimination introduces zeros below the $k$-th entry of $\bx$ by
$$
\bE_k = \bI - \bz_k \be_k^\top,
$$
where $\be_k \in \real^n$ is the $k$-th unit basis vector, and $\bz_k\in \real^n$ is given by
$$
\bz_k = [0, \ldots, 0, z_{k+1}, \ldots, z_n]^\top, \qquad z_i= \frac{x_{i}}{x_{k}}, \gap \forall i \in \{k+1,\ldots, n\}.
$$
Note that $\bE_k$ is a unit lower triangular matrix (with $1$'s on the diagonal) whose only nonzero subdiagonal entries lie in the $k$-th column,
$$
\bE_k=
\begin{bmatrix}
1 & \ldots & 0& 0 & \ldots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \ldots & 1 & 0 & \ldots & 0 \\
0 & \ldots & -z_{k+1} & 1 & \ldots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \ldots & -z_n & 0 & \ldots & 1
\end{bmatrix},
$$
and multiplying on the left by $\bE_k$ will introduce zeros below the diagonal:
$$
\bE_k \bx =
\begin{bmatrix}
1 & \ldots & 0& 0 & \ldots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \ldots & 1 & 0 & \ldots & 0 \\
0 & \ldots & -z_{k+1} & 1 & \ldots & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \ldots & -z_n & 0 & \ldots & 1
\end{bmatrix}
\begin{bmatrix}
x_1 \\
\vdots \\
x_k\\
x_{k+1} \\
\vdots \\
x_n
\end{bmatrix}=
\begin{bmatrix}
x_1 \\
\vdots \\
x_k\\
0 \\
\vdots \\
0
\end{bmatrix}.
$$
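The action of $\bE_k = \bI - \bz_k\be_k^\top$ can be checked directly on a small example (exact arithmetic; the sample vector and function name are ours):

```python
from fractions import Fraction as F

def gauss_matrix(x, k):
    """E_k = I - z_k e_k^T: zeros out the entries of x below position k (1-based)."""
    n = len(x)
    E = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    for i in range(k, n):
        E[i][k - 1] = -F(x[i], x[k - 1])   # the entry -z_i in column k
    return E

x = [3, 6, 9, 12]
E1 = gauss_matrix(x, k=1)
E1x = [sum(E1[i][j] * x[j] for j in range(4)) for i in range(4)]
assert E1x == [3, 0, 0, 0]   # entries below position k are eliminated
```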
For example, we write out the Gaussian elimination steps for a $4\times 4$ matrix. For simplicity, we assume there are no row permutations. In the following matrices, $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates that the value has just been changed.
\paragraph{A Trivial Gaussian Elimination For a $4\times 4$ Matrix:}
\begin{equation}\label{equation:elmination-steps}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & \bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bE_3}{\longrightarrow}
\begin{sbmatrix}{\bE_3\bE_2\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & 0 & \textcolor{blue}{\boxtimes} & \boxtimes \\
0 & 0 & \bm{0} & \textcolor{blue}{\bm{\boxtimes}}
\end{sbmatrix},
\end{equation}
where $\bE_1, \bE_2, \bE_3$ are lower triangular matrices. Specifically, as discussed above, the Gaussian transformation matrices $\bE_i$ are unit lower triangular matrices with $1$'s on the diagonal. The reason is that the $k$-th transformation $\bE_k$, acting on the matrix $\bE_{k-1}\ldots\bE_1\bA$, subtracts multiples of the $k$-th row from rows $\{k+1, k+2, \ldots, n\}$ to produce zeros below the diagonal in the $k$-th column, and never uses rows $\{1, 2, \ldots, k-1\}$.
To be more concrete, in step $1$ of the example above, we multiply on the left by $\bE_1$ so that multiples of the $1$-st row are subtracted from rows $2, 3, 4$, and the first entries of rows $2, 3, 4$ are set to zero. Steps 2 and 3 are similar. By setting $\bL=\bE_1^{-1}\bE_2^{-1}\bE_3^{-1}$ and letting the matrix after elimination be $\bU$,\footnote{The inverses of unit lower triangular matrices are also unit lower triangular matrices. And the products of unit lower triangular matrices are also unit lower triangular matrices.} we get $\bA=\bL\bU$. Thus we obtain an LU decomposition for this $4\times 4$ matrix $\bA$.
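These steps translate directly into a short elimination routine without pivoting; the multipliers recorded at each step fill in $\bL$ (a sketch in exact rational arithmetic; the sample matrix is our choice):

```python
from fractions import Fraction as F

def lu_no_pivot(A):
    """A = L U by Gaussian elimination; assumes all pivots are nonzero."""
    n = len(A)
    U = [[F(v) for v in row] for row in A]
    L = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]          # multiplier z_i
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]     # subtract z_i times row k
    return L, U

A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
L, U = lu_no_pivot(A)
LU = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
assert LU == [[F(v) for v in row] for row in A]               # A = L U
assert all(U[i][j] == 0 for i in range(3) for j in range(i))  # U is upper triangular
```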
\begin{definition}[Pivot\index{Pivot}]\label{definition:pivot}
The first nonzero entry in each row after the elimination steps is called a \textbf{pivot}. For example, the \textcolor{blue}{blue} crosses in Equation~\eqref{equation:elmination-steps} are pivots.
\end{definition}
However, it is possible for $a_{11}$ (the $(1,1)$ entry of matrix $\bA$) to be zero.
In that case, no matrix $\bE_1$ of the above form can make the first elimination step successful.
We then need to interchange the first row and the second row via a permutation matrix $\bP_1$. This is known as \textit{pivoting}, or simply \textit{permutation}.
\paragraph{Gaussian Elimination With a Permutation In the Beginning:}
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
0 & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\bP_1}{\longrightarrow}
\begin{sbmatrix}{\bP_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bP_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
&\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bE_1\bP_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & \bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bE_3}{\longrightarrow}
\begin{sbmatrix}{\bE_3\bE_2\bE_1\bP_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & 0 & \textcolor{blue}{\boxtimes} & \boxtimes \\
0 & 0 & \bm{0} & \textcolor{blue}{\bm{\boxtimes}}
\end{sbmatrix}.
\end{aligned}
$$
By setting $\bL=\bE_1^{-1}\bE_2^{-1}\bE_3^{-1}$ and $\bP=\bP_1^{-1}$, we get $\bA=\bP\bL\bU$. Therefore we obtain a full LU decomposition with permutation for this $4\times 4$ matrix $\bA$.
In some circumstances, other permutation matrices $\bP_2, \bP_3, \ldots$ will appear in between the lower triangular $\bE_i$'s. An example is shown as follows.
\paragraph{Gaussian Elimination With a Permutation In Between:}
$$
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bP_1}{\longrightarrow}
\begin{sbmatrix}{\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & 0 & \textcolor{blue}{\boxtimes} & \boxtimes \\
0 & \bm{0} & \bm{0} & \textcolor{blue}{\bm{\boxtimes}}
\end{sbmatrix}.
$$
In this case, we find $\bU=\bE_2\bP_1\bE_1\bA$. In Section~\ref{section:lu-perm} (p.~\pageref{section:lu-perm}) and Section~\ref{section:partial-pivot-lu} (p.~\pageref{section:partial-pivot-lu}), we will show that such in-between permutations still yield a decomposition of the form $\bA=\bP\bL\bU$, where $\bP$ accounts for all the permutations.
The above examples extend readily to any $n\times n$ matrix if no row permutations are needed in the process, in which case there are $n-1$ such lower triangular transformations.
The $k$-th transformation $\bE_k$ introduces zeros below the diagonal in the $k$-th column by subtracting multiples of the $k$-th row from rows $k+1, k+2, \ldots, n$.
Finally, by setting $\bL=\bE_1^{-1}\bE_2^{-1}\ldots \bE_{n-1}^{-1}$, we obtain the LU decomposition $\bA=\bL\bU$ (without permutation).
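The elimination just described can be sketched numerically. In the following NumPy snippet (the helper name `lu_no_pivot` is ours, and we assume all leading principal minors of the input are nonzero so that no row permutation is needed), the multipliers of each $\bE_k$ are stored directly into the corresponding column of $\bL$:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle-style LU: return unit lower triangular L and upper
    triangular U with A = L @ U, assuming no pivoting is required."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), A.copy()
    for k in range(n - 1):
        # E_k subtracts multiples of row k from rows k+1..n-1 to zero out
        # column k below the diagonal; L stores those multipliers, since
        # L = E_1^{-1} E_2^{-1} ... E_{n-1}^{-1}.
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
L, U = lu_no_pivot(A)
```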
\section{Existence of the LU Decomposition without Permutation}\label{section:exist-lu-without-perm}
Gaussian elimination reveals the origin of the LU decomposition. We now prove Theorem~\ref{theorem:lu-factorization-without-permutation} rigorously, i.e., the existence of the LU decomposition without permutation, by induction.
\begin{proof}[\textbf{of Theorem~\ref{theorem:lu-factorization-without-permutation}: LU Decomposition without Permutation}]
We prove by induction that every $n\times n$ square matrix $\bA$ with nonzero leading principal minors has a decomposition $\bA=\bL\bU$. The $1\times 1$ case is trivial: set $L=1$ and $U=A$ so that $A=LU$.
Suppose that every $k\times k$ matrix whose leading principal minors are all nonzero has an LU decomposition without permutation. If we can show that every $(k+1)\times(k+1)$ matrix $\bA_{k+1}$ with nonzero leading principal minors can also be factored in this way, the proof is complete.
Given such a $(k+1)\times(k+1)$ matrix $\bA_{k+1}$, let $\bA_k$ denote its $k$-th order leading principal submatrix, of size $k\times k$. By the inductive hypothesis, $\bA_k$ can be factored as $\bA_k = \bL_k\bU_k$ with $\bL_k$ a unit lower triangular matrix and $\bU_k$ a nonsingular upper triangular matrix. Write out $\bA_{k+1}$ as
$
\bA_{k+1} = \begin{bmatrix}
\bA_k & \bb \\
\bc^\top & d
\end{bmatrix}.
$
Then it admits the factorization:
$$
\bA_{k+1} = \begin{bmatrix}
\bA_k & \bb \\
\bc^\top & d
\end{bmatrix}
=
\begin{bmatrix}
\bL_k &\bzero \\
\bx^\top & 1
\end{bmatrix}
\begin{bmatrix}
\bU_k & \by\\
\bzero & z
\end{bmatrix} = \bL_{k+1}\bU_{k+1},
$$
where $\bb = \bL_k\by$, $\bc^\top = \bx^\top\bU_k$, $d = \bx^\top\by + z$, $\bL_{k+1}=\begin{bmatrix}
\bL_k &\bzero \\
\bx^\top & 1
\end{bmatrix}$, and $\bU_{k+1}=\begin{bmatrix}
\bU_k & \by\\
\bzero & z
\end{bmatrix}$.
From the assumption, $\bL_k$ and $\bU_k$ are nonsingular. Therefore
$$
\by = \bL_k^{-1}\bb, \qquad \bx^\top=\bc^\top\bU_k^{-1}, \qquad z=d - \bx^\top\by.
$$
If, in addition, we can show that $z$ is nonzero (so that $\bU_{k+1}$ is nonsingular), the proof is complete.
Since all the leading principal minors of $\bA_{k+1}$ are nonzero, we have\footnote{By the fact that if matrix $\bM$ has the block form $\bM=\begin{bmatrix}
\bA & \bB \\
\bC & \bD
\end{bmatrix}$ with $\bA$ nonsingular, then $\det(\bM) = \det(\bA)\det(\bD-\bC\bA^{-1}\bB)$.}
$\det(\bA_{k+1}) =\det(\bA_k)\cdot(d-\bc^\top\bA_k^{-1}\bb) \neq 0$, where $d-\bc^\top\bA_k^{-1}\bb$ is a scalar.
As $\det(\bA_k)\neq 0$ by assumption, we obtain $d-\bc^\top\bA_k^{-1}\bb \neq 0$. Substituting $\bb = \bL_k\by$ and $\bc^\top = \bx^\top\bU_k$ into this expression, we have $d-\bx^\top\bU_k\bA_k^{-1}\bL_k\by =d-\bx^\top\bU_k(\bL_k\bU_k)^{-1}\bL_k\by =d-\bx^\top\by \neq 0$, which is exactly $z\neq 0$. Thus $\bL_{k+1}$ has all diagonal values equal to $1$, and $\bU_{k+1}$ has all diagonal values nonzero, so both $\bL_{k+1}$ and $\bU_{k+1}$ are nonsingular.\footnote{A triangular matrix (upper or lower) is nonsingular if and only if all the entries on its main diagonal are nonzero.}
This completes the proof.
\end{proof}
We further prove that, when no permutation is involved, the LU decomposition is unique.
\begin{corollary}[Uniqueness of the LU Decomposition without Permutation]\label{corollary:unique-lu-without-permutation}
Suppose the $n\times n$ square matrix $\bA$ has nonzero leading principal minors. Then, the LU decomposition of $\bA$ is unique.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:unique-lu-without-permutation}]
Suppose the LU decomposition is not unique; then we can find two decompositions $\bA=\bL_1\bU_1 = \bL_2\bU_2$, which implies $\bL_2^{-1}\bL_1=\bU_2\bU_1^{-1}$. The left-hand side is a unit lower triangular matrix, since the inverse of a unit lower triangular matrix and the product of unit lower triangular matrices are again unit lower triangular; the right-hand side is an upper triangular matrix.
Hence both sides are diagonal, and since the left-hand side has unit diagonal, $\bL_2^{-1}\bL_1 = \bI = \bU_2\bU_1^{-1}$.
This implies $\bL_1=\bL_2$ and $\bU_1=\bU_2$, which leads to a contradiction.
\end{proof}
In the proof of Theorem~\ref{theorem:lu-factorization-without-permutation}, we have shown that the diagonal values of the upper triangular matrix are all nonzero if the leading principal minors of $\bA$ are all nonzero. We then can formulate this decomposition in another form if we divide each row of $\bU$ by each diagonal value of $\bU$. This is called the \textit{LDU decomposition}.
\begin{corollaryHigh}[LDU Decomposition]\label{corollary:ldu-decom}
For any $n\times n$ square matrix $\bA$, if all the leading principal minors are nonzero, i.e., $\det(\bA_{1:k,1:k})\neq 0$, for all $k\in \{1,2,\ldots, n\}$, then $\bA$ can be \textbf{uniquely} factored as
\begin{equation}
\bA = \bL\bD\bU, \nonumber
\end{equation}
where $\bL$ is a unit lower triangular matrix, $\bU$ is a \textbf{unit} upper triangular matrix, and $\bD$ is a diagonal matrix.
\end{corollaryHigh}
The proof is straightforward: from the LU decomposition $\bA=\bL\bR$, define the diagonal matrix $\bD=\mathrm{diag}(r_{11}, r_{22}, \ldots, r_{nn})$ so that $\bU = \bD^{-1}\bR$ is a unit upper triangular matrix and $\bA=\bL\bD\bU$. The uniqueness follows from the uniqueness of the LU decomposition.
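As a numerical illustration (a sketch under the same nonzero-leading-minor assumption; the helper name `ldu` is ours), the LDU form is obtained by dividing each row of the upper factor by its diagonal entry:

```python
import numpy as np

def ldu(A):
    """Return L, D, U with A = L @ D @ U, L unit lower triangular,
    U unit upper triangular, D diagonal (no pivoting)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L, R = np.eye(n), A.copy()
    for k in range(n - 1):
        L[k+1:, k] = R[k+1:, k] / R[k, k]
        R[k+1:, k:] -= np.outer(L[k+1:, k], R[k, k:])
    d = np.diag(R).copy()      # D = diag(r_11, ..., r_nn)
    U = R / d[:, None]         # divide each row of R by its diagonal entry
    return L, np.diag(d), U

A = np.array([[4., 2., 0.],
              [2., 3., 1.],
              [0., 1., 2.]])
L, D, U = ldu(A)
```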
\section{Existence of the LU Decomposition with Permutation}\label{section:lu-perm}
In Theorem~\ref{theorem:lu-factorization-without-permutation}, we require that matrix $\bA$ have nonzero leading principal minors. However, this is not necessary. Even when some leading principal minors are zero, a nonsingular matrix still has an LU decomposition, but with an additional permutation. The proof is again by induction.
\begin{proof}[\textbf{of Theorem~\ref{theorem:lu-factorization-with-permutation}: LU Decomposition with Permutation}]
We note that any $1\times 1$ nonsingular matrix has a full LU decomposition $A=PLU$ by simply setting $P=1$, $L=1$, $U=A$.
We will show that if every $(n-1)\times (n-1)$ nonsingular matrix has a full LU decomposition, then this is also true for every $n\times n$ nonsingular matrix. By induction, we prove that every nonsingular matrix has a full LU decomposition.
We formulate the proof in the following order: if $\bA$ is nonsingular, then its row-permuted matrix $\bB$ is also nonsingular; the \textit{Schur complement} of $b_{11}$ in $\bB$ is also nonsingular; finally, we construct the decomposition of $\bA$ from that of $\bB$.
We notice that at least one element in the first column of $\bA$ must be nonzero; otherwise $\bA$ would be singular. We can then apply a row permutation that makes the $(1,1)$ entry nonzero. That is, there exists a permutation $\bP_1$ such that $\bB = \bP_1 \bA$ satisfies $b_{11} \neq 0$. Since $\bA$ and $\bP_1$ are both nonsingular and the product of nonsingular matrices is nonsingular, $\bB$ is also nonsingular.
\begin{mdframed}[hidealllines=\mdframehidelineNote,backgroundcolor=\mdframecolorSkip,frametitle={Schur complement of $\bB$ is also nonsingular:}]
Now consider the Schur complement of $b_{11}$ in $\bB$ with size $(n-1)\times (n-1)$:
$$
\widehat{\bB} = \bB_{2:n,2:n} -\frac{1}{b_{11}} \bB_{2:n,1} \bB_{1,2:n}.
$$
Suppose an $(n-1)$-vector $\bx$ satisfies
\begin{equation}\label{equ:lu-pivot1}
\widehat{\bB} \bx = 0.
\end{equation}
Then the scalar $y=-\frac{1}{b_{11}}\bB_{1,2:n} \bx$ and the vector $\bx$ satisfy
$$
\bB
\left[
\begin{matrix}
y \\
\bx
\end{matrix}
\right]
=
\left[
\begin{matrix}
b_{11} & \bB_{1,2:n} \\
\bB_{2:n,1} & \bB_{2:n,2:n}
\end{matrix}
\right]
\left[
\begin{matrix}
y \\
\bx
\end{matrix}
\right]
=
\left[
\begin{matrix}
0 \\
\bzero
\end{matrix}
\right].
$$
Since $\bB$ is nonsingular, $\bx$ and $y$ must be zero. Hence, Equation~\eqref{equ:lu-pivot1} holds only if $\bx=\bzero$ which means that the null space of $\widehat{\bB}$ is of dimension 0 and thus $\widehat{\bB}$ is nonsingular with size $(n-1)\times(n-1)$.
\end{mdframed}
By the induction hypothesis, the $(n-1)\times(n-1)$ nonsingular matrix $\widehat{\bB}$ admits a full LU decomposition
$$
\widehat{\bB} = \bP_2\bL_2\bU_2.
$$
We then factor $\bA$ as
\begin{equation*}
\begin{aligned}
\bA &= \bP_1^\top
\left[
\begin{matrix}
b_{11} & \bB_{1,2:n} \\
\bB_{2:n,1} & \bB_{2:n,2:n}
\end{matrix}
\right]\\
&= \bP_1^\top
\left[
\begin{matrix}
1 & 0 \\
0 & \bP_2
\end{matrix}
\right]
\left[
\begin{matrix}
b_{11} & \bB_{1,2:n} \\
\bP_2^\top \bB_{2:n,1} &\bP_2^\top \bB_{2:n,2:n}
\end{matrix}
\right]\\
&= \bP_1^\top
\left[
\begin{matrix}
1 & 0 \\
0 & \bP_2
\end{matrix}
\right]
\left[
\begin{matrix}
b_{11} & \bB_{1,2:n} \\
\bP_2^\top \bB_{2:n,1} & \textcolor{blue}{\bL_2\bU_2}+\bP_2^\top \textcolor{blue}{\frac{1}{b_{11}} \bB_{2:n,1} \bB_{1,2:n}}
\end{matrix}
\right]\\
&= \bP_1^\top
\left[
\begin{matrix}
1 & 0 \\
0 & \bP_2
\end{matrix}
\right]
\left[
\begin{matrix}
1 & 0 \\
\frac{1}{b_{11}}\bP_2^\top \bB_{2:n,1} & \bL_2
\end{matrix}
\right]
\left[
\begin{matrix}
b_{11} & \bB_{1,2:n} \\
\bzero & \bU_2
\end{matrix}
\right].\\
\end{aligned}
\end{equation*}
Therefore, we find the full LU decomposition of $\bA=\bP\bL\bU$ by defining
$$
\bP = \bP_1^\top
\left[
\begin{matrix}
1 & 0 \\
0 & \bP_2
\end{matrix}
\right], \qquad
\bL=\left[
\begin{matrix}
1 & 0 \\
\frac{1}{b_{11}}\bP_2^\top \bB_{2:n,1} & \bL_2
\end{matrix}
\right], \qquad
\bU=
\left[
\begin{matrix}
b_{11} & \bB_{1,2:n} \\
\bzero & \bU_2
\end{matrix}
\right],
$$
from which the result follows.
\end{proof}
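SciPy's `scipy.linalg.lu` computes exactly this full form $\bA=\bP\bL\bU$. A small sanity check on a matrix whose $(1,1)$ entry is zero, so that the decomposition without permutation is unavailable even though $\bA$ is nonsingular:

```python
import numpy as np
from scipy.linalg import lu

# The (1,1) entry of A is zero, so no-permutation LU fails,
# yet A is nonsingular and A = P L U exists.
A = np.array([[0., 1., 1.],
              [2., 1., 0.],
              [4., 3., 3.]])
P, L, U = lu(A)   # P permutation, L unit lower triangular, U upper triangular
```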
\section{Bandwidth Preserving in the LU Decomposition without Permutation}
The bandwidth of a matrix is defined as follows.
\begin{definition}[Matrix Bandwidth\index{Matrix bandwidth}]\label{defin:matrix-bandwidth}
Let $\bA\in \real^{n\times n}$ with $(i,j)$-th entry denoted by $a_{ij}$. Then $\bA$ has \textbf{upper bandwidth $q$} if $a_{ij} =0$ for $j>i+q$, and \textbf{lower bandwidth $p$} if $a_{ij}=0$ for $i>j+p$.
An example of a $6\times 6$ matrix with upper bandwidth $2$ and lower bandwidth $3$ is shown as follows:
$$
\begin{bmatrix}
\boxtimes & \boxtimes & \boxtimes & 0& 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes & 0\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes & \boxtimes\\
0 & 0 & \boxtimes & \boxtimes& \boxtimes & \boxtimes\\
\end{bmatrix}.
$$
\end{definition}
We now prove that the LU decomposition without permutation preserves bandwidth.
\begin{lemma}[Bandwidth Preserving]\label{lemma:lu-bandwidth-presev}
Let $\bA\in \real^{n\times n}$ have upper bandwidth $q$ and lower bandwidth $p$. If $\bA$ has an LU decomposition $\bA=\bL\bU$, then $\bU$ has upper bandwidth $q$ and $\bL$ has lower bandwidth $p$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:lu-bandwidth-presev}]
The LU decomposition without permutation can be obtained as follows:
$$
\bA=\left[
\begin{matrix}
a_{11} & \bA_{1,2:n} \\
\bA_{2:n,1} & \bA_{2:n,2:n}
\end{matrix}
\right]
=\left[
\begin{matrix}
1 & 0 \\
\frac{1}{a_{11}} \bA_{2:n,1} & \bI_{n-1}
\end{matrix}
\right]
\left[
\begin{matrix}
a_{11} & \bA_{1,2:n}\\
0 & \bS
\end{matrix}
\right] = \bL_1 \bU_1,
$$
where $\bS =\bA_{2:n,2:n} - \frac{1}{a_{11}}\bA_{2:n,1}\bA_{1,2:n}$ is the Schur complement of $a_{11}$ in $\bA$.
We call this the $s$-decomposition of $\bA$.
The first column of $\bL_1$ and the first row of $\bU_1$ have the required structure: $\bA_{2:n,1}$ has at most $p$ nonzero leading entries (lower bandwidth $p$) and $\bA_{1,2:n}$ has at most $q$ (upper bandwidth $q$). Moreover, the outer product $\frac{1}{a_{11}}\bA_{2:n,1}\bA_{1,2:n}$ is confined to the leading $p\times q$ block, so the Schur complement $\bS$ again has upper bandwidth $q$ and lower bandwidth $p$. The result follows by induction on the $s$-decomposition of $\bS$.
\end{proof}
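A quick numerical check of the lemma (the helper name `bandwidths` is ours): for a tridiagonal matrix ($p=q=1$), the no-pivot LU factors inherit the bandwidths:

```python
import numpy as np

def bandwidths(M, tol=1e-12):
    """Return (p, q): M has lower bandwidth p and upper bandwidth q."""
    n = M.shape[0]
    nz = [(i, j) for i in range(n) for j in range(n) if abs(M[i, j]) > tol]
    return max(i - j for i, j in nz), max(j - i for i, j in nz)

# Tridiagonal A (p = q = 1): run the no-pivot elimination and check
# that L and U inherit the bandwidths.
A = np.array([[2., 1., 0., 0.],
              [1., 2., 1., 0.],
              [0., 1., 2., 1.],
              [0., 0., 1., 2.]])
n = A.shape[0]
L, U = np.eye(n), A.copy()
for k in range(n - 1):
    L[k+1:, k] = U[k+1:, k] / U[k, k]
    U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
```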
\section{Block LU Decomposition}
Another form of the LU decomposition is to factor the matrix into block triangular matrices.
\begin{theoremHigh}[Block LU Decomposition without Permutation]\label{theorem:block-lu-factorization-without-permutation}
For any $n\times n$ square matrix $\bA$, if the first $m$ leading principal block submatrices are nonsingular, then $\bA$ can be factored as
\begin{equation}
\bA = \bL\bU
=
\begin{bmatrix}
\bI & & & \\
\bL_{21} & \bI & & \\
\vdots & & \ddots & \\
\bL_{m1} & \ldots & \bL_{m,m-1} & \bI
\end{bmatrix}
\begin{bmatrix}
\bU_{11} &\bU_{12} & \ldots & \bU_{1m}\\
& \bU_{22} & & \vdots \\
& & \ddots & \bU_{m-1,m}\\
& & & \bU_{mm}\\
\end{bmatrix}
, \nonumber
\end{equation}
where $\bL_{i,j}$'s and $\bU_{ij}$'s are some block matrices.
Specifically, this decomposition is unique.
\end{theoremHigh}
Note that the $\bU$ in the above theorem is block upper triangular, but not necessarily upper triangular entrywise. An example is as follows:
$$
\bA =
\left[\begin{array}{cc;{2pt/2pt}cc}
0& 1 & 1 & 1\\
-1& 2 & -1 & 2\\\hdashline[2pt/2pt]
2& 1 & 4 & 2\\
1& 2 & 3 & 3\\
\end{array}\right]
=
\left[\begin{array}{cc;{2pt/2pt}cc}
1& 0 & 0& 0\\
0& 1 & 0 & 0\\\hdashline[2pt/2pt]
5& -2 & 1 & 0\\
4& -1 & 0 & 1\\
\end{array}\right]
\left[\begin{array}{cc;{2pt/2pt}cc}
0& 1 & 1 & 1\\
-1& 2 & -1 & 2\\\hdashline[2pt/2pt]
0& 0 & -3 & 1\\
0& 0 & -2 & 1\\
\end{array}\right].
$$
The ordinary (non-block) LU decomposition fails on $\bA$ since the $(1,1)$ entry is zero; however, the block LU decomposition exists because the leading $2\times 2$ block is nonsingular.
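The block factorization above is easy to verify numerically; with the leading $2\times 2$ block $\bA_{11}$ nonsingular, the block Schur complement construction reproduces the displayed factors:

```python
import numpy as np

A = np.array([[ 0., 1.,  1., 1.],
              [-1., 2., -1., 2.],
              [ 2., 1.,  4., 2.],
              [ 1., 2.,  3., 3.]])
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
L21 = A21 @ np.linalg.inv(A11)   # block multiplier
S = A22 - L21 @ A12              # block Schur complement of A11
L = np.block([[np.eye(2),         np.zeros((2, 2))],
              [L21,               np.eye(2)]])
U = np.block([[A11,               A12],
              [np.zeros((2, 2)),  S]])
```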
\section{Application: Linear System via the LU Decomposition}\label{section:lu-linear-sistem}
Consider the well-determined linear system $\bA\bx = \bb$, where $\bA$ is nonsingular of size $n\times n$. To avoid computing the inverse of $\bA$, we solve the system via the LU decomposition. Suppose $\bA$ admits the full LU decomposition $\bA = \bP\bL\bU$. The solution is given by the following algorithm.
\begin{algorithm}[H]
\caption{Solving Linear Equations by LU Decomposition}
\label{alg:linear-equation-by-LU}
\begin{algorithmic}[1]
\Require
matrix $\bA$ is nonsingular and square with size $n\times n $, solve $\bA\bx=\bb$;
\State LU Decomposition: factor $\bA$ as $\bA=\bP\bL\bU$; \Comment{(2/3)$n^3$ flops}
\State Permutation: $\bw = \bP^\top\bb$; \Comment{0 flops }
\State Forward substitution: solve $\bL\bv = \bw$; \Comment{$1+3+\ldots + (2n-1)=n^2$ flops}
\State Backward substitution: solve $\bU\bx= \bv$; \Comment{$1+3+\ldots + (2n-1)=n^2$ flops}
\end{algorithmic}
\end{algorithm}
The complexity of the decomposition step is $(2/3)n^3$ floating point operations (flops) \citep{lu2021numerical}, and the forward and backward substitution steps each cost $1+3+\ldots + (2n-1)=n^2$ flops. Therefore, the total cost of solving the linear system via the LU factorization is $(2/3)n^3 + 2n^2$ flops. Keeping only the leading term, Algorithm~\ref{alg:linear-equation-by-LU} costs $(2/3)n^3$ flops, where most of the complexity comes from the LU decomposition.
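A minimal SciPy sketch of Algorithm~\ref{alg:linear-equation-by-LU}: `lu_factor` performs the $O(n^3)$ factorization once, and `lu_solve` performs the permutation and the two $O(n^2)$ triangular solves:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3., 1., 2.],
              [6., 3., 4.],
              [3., 1., 5.]])
b = np.array([0., 1., 3.])
lu_piv = lu_factor(A)      # step 1: PLU factorization, (2/3)n^3 flops
x = lu_solve(lu_piv, b)    # steps 2-4: permutation + two triangular solves
```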
\paragraph{Linear system via the block LU decomposition} For a block LU decomposition of $\bA=\bL\bU$, we need to solve $\bL\bv = \bw$ and $\bU\bx = \bv$. But the latter system is not triangular and requires some extra computations.
\index{Inverse of a matrix}
\section{Application: Computing the Inverse of Nonsingular Matrices}\label{section:inverse-by-lu}
By Theorem~\ref{theorem:lu-factorization-with-permutation}, for any nonsingular matrix $\bA\in \real^{n\times n}$, we have a full LU factorization $\bA=\bP\bL\bU$. Then the inverse can be obtained by solving the matrix equation
$$
\bA\bX = \bI,
$$
which amounts to $n$ linear systems: $\bA\bx_i = \be_i$ for all $i \in \{1, 2, \ldots, n\}$, where $\bx_i$ is the $i$-th column of $\bX$ and $\be_i$ is the $i$-th column of $\bI$ (i.e., the $i$-th unit vector).
\begin{theorem}[Inverse of Nonsingular Matrix by Linear System]
Computing the inverse of a nonsingular matrix $\bA \in \real^{n\times n}$ by $n$ linear systems needs $\sim (2/3)n^3 + n(2n^2)=(8/3)n^3$ flops where $(2/3)n^3$ comes from the computation of the LU decomposition of $\bA$.
\end{theorem}
The proof follows directly from Algorithm~\ref{alg:linear-equation-by-LU}.
However, the complexity can be reduced by exploiting the triangular structures of $\bL$ and $\bU$:
the inverse of the nonsingular matrix is $\bA^{-1} = \bU^{-1}\bL^{-1}\bP^{-1}=\bU^{-1}\bL^{-1}\bP^{\top}$. Exploiting this structure reduces the complexity from $(8/3)n^3$ to $2n^3$ flops.
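The same idea in code: factor once, then reuse the factorization for all $n$ right-hand sides $\be_i$ (here via SciPy's `lu_factor`/`lu_solve`):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4., 3.],
              [6., 3.]])
lu_piv = lu_factor(A)            # factor once: (2/3)n^3 flops
X = lu_solve(lu_piv, np.eye(2))  # solve A x_i = e_i for every unit vector e_i
```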
\section{Application: Computing the Determinant via the LU Decomposition}
We can find the determinant easily by using the LU decomposition.
If $\bA=\bL\bU$, then $\det(\bA) = \det(\bL\bU) = \det(\bL)\det(\bU) = u_{11}u_{22}\ldots u_{nn}$ where $u_{ii}$ is the $i$-th diagonal of $\bU$ for $i\in \{1,2,\ldots,n\}$. \footnote{The determinant of a lower triangular matrix (or an upper triangular matrix) is the product of the diagonal entries.}
Further, for the LU decomposition with permutation $\bA=\bP\bL\bU$, $\det(\bA) = \det(\bP\bL\bU) = \det(\bP)u_{11}u_{22}\ldots u_{nn}$.
The determinant of a permutation matrix is either $1$ or $-1$, because a sequence of row exchanges (each of which changes the sign of the determinant\footnote{The determinant changes sign when two rows are exchanged (sign reversal).}) turns a permutation matrix into the identity matrix $\bI$, whose determinant is one.
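A short numerical illustration: the determinant obtained from the LU factors matches the direct value. Here $\det(\bP)$ is evaluated explicitly, though in practice one only tracks the parity of the row swaps:

```python
import numpy as np
from scipy.linalg import lu

# A requires one row swap (its (1,1) entry is zero), so det(P) = -1.
A = np.array([[0., 2.],
              [3., 4.]])
P, L, U = lu(A)
detA = np.linalg.det(P) * np.prod(np.diag(U))   # det(A) = det(P) u_11 u_22
```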
\index{Partial pivoting}
\section{Pivoting}
\subsection{Partial Pivoting}\label{section:partial-pivot-lu}
In practice, it is desirable to pivot even when it is not strictly necessary. When solving a linear system via the LU decomposition as in Algorithm~\ref{alg:linear-equation-by-LU}, small diagonal entries of $\bU$ can lead to inaccurate solutions. Thus, it is common to pick the largest entry (in absolute value) of the current subcolumn as the pivot. This is known as \textit{partial pivoting}. For example,
\paragraph{Partial Pivoting For a $4\times 4$ Matrix:}
\begin{equation}\label{equation:elmination-steps2}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{2} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{5} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{7} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bP_1}{\longrightarrow}
\begin{sbmatrix}{\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{7} & \textcolor{blue}{\boxtimes} & \textcolor{blue}{\boxtimes} \\
0 & 5 & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{2} & \textcolor{blue}{\boxtimes} & \textcolor{blue}{\boxtimes}
\end{sbmatrix}
\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 7 & \boxtimes & \boxtimes \\
0 & \bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{0} & \textcolor{blue}{\bm{\boxtimes}}
\end{sbmatrix},
\end{equation}
in which case we pick $7$ as the pivot after the transformation by $\bE_1$, even though pivoting is not strictly necessary there. This interchange guarantees that no multiplier exceeds $1$ in absolute value during the Gaussian elimination.
More generally, the procedure for computing the LU decomposition with partial pivoting of $\bA\in\real^{n\times n}$ is given in Algorithm~\ref{alg:lu-partial-pivot}.
\begin{algorithm}[H]
\caption{LU Decomposition with Partial Pivoting}
\label{alg:lu-partial-pivot}
\begin{algorithmic}[1]
\Require
Matrix $\bA$ with size $n\times n$;
\State Let $\bU = \bA$;
\For{$k=1$ to $n-1$} \Comment{i.e., get the $k$-th column of $\bU$}
\State Find a row permutation $\bP_k$ that swaps $u_{kk}$ with the largest element in $|\bU_{k:n,k}|$;
\State $\bU =\bP_k\bU$;
\State Determine the Gaussian transformation $\bE_k $ to introduce zeros below the diagonal of the $k$-th column of $\bU$;
\State $\bU = \bE_k\bU$;
\EndFor
\State Output $\bU$;
\end{algorithmic}
\end{algorithm}
The algorithm requires $\sim (2/3)n^3$ flops and $(n-1)+(n-2)+\ldots + 1 = O(n^2)$ comparisons resulting from the pivoting procedure.
Upon completion, the upper triangular matrix $\bU$ is given by
$$
\bU = \bE_{n-1}\bP_{n-1} \ldots \bE_2\bP_2\bE_1\bP_1\bA.
$$
\paragraph{Computing the final $\bL$} And we here show that Algorithm~\ref{alg:lu-partial-pivot} computes the LU decomposition in the following form
$$
\bA = \bP\bL\bU,
$$
where $\bP=\bP_1 \bP_2\ldots\bP_{n-1}$, $\bU$ is the upper triangular matrix that results directly from the algorithm, and $\bL$ is unit lower triangular with $|\bL_{ij}|\leq 1$ for all $1 \leq i,j\leq n$; the subcolumn $\bL_{k+1:n,k}$ is a permuted version of the multipliers in $\bE_k$. To see this, we notice that the permutation matrices used in the algorithm are of a special kind, since each merely interchanges two rows. \textit{This implies the $\bP_k$'s are symmetric and $\bP_k^2 = \bI$ for $k\in \{1,2,\ldots,n-1\}$}.
Define
$$
\bM_k = (\bP_{n-1} \ldots \bP_{k+1}) \bE_k (\bP_{k+1} \ldots \bP_{n-1}).
$$
Then, $\bU$ can be written as
$$
\bU = \bM_{n-1}\ldots \bM_2\bM_1 \bP^\top \bA.
$$
To see what $\bM_k$ is, we note that each $\bP_j$ with $j>k$ interchanges only rows with index greater than $k$, so its upper-left $k\times k$ block is an identity matrix. Thus we have
$$
\begin{aligned}
\bM_k &= (\bP_{n-1} \ldots \bP_{k+1}) (\bI_n-\bz_k\be_k^\top) (\bP_{k+1} \ldots \bP_{n-1})\\
&=\bI_n - (\bP_{n-1} \ldots \bP_{k+1})(\bz_k\be_k^\top)(\bP_{k+1} \ldots \bP_{n-1}) \\
&=\bI_n - (\bP_{n-1} \ldots \bP_{k+1}\bz_k) (\be_k^\top\bP_{k+1} \ldots \bP_{n-1}) \\
&=\bI_n - (\bP_{n-1} \ldots \bP_{k+1}\bz_k)\be_k^\top. \qquad &\text{(since $\be_k^\top\bP_{k+1} \ldots \bP_{n-1} = \be_k^\top$)}
\end{aligned}
$$
This implies that $\bM_k$ is unit lower triangular, with the entries below the diagonal of its $k$-th column being a permuted version of the multipliers of $\bE_k$. The final unit lower triangular matrix $\bL$ is thus given by
$$
\bL = \bM_1^{-1}\bM_2^{-1} \ldots \bM_{n-1}^{-1}.
$$
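A compact implementation consistent with the discussion above (the function name `lu_partial_pivot` is ours): swapping the already-computed multiplier rows of $\bL$ together with the rows of $\bU$ carries out the $\bM_k$ bookkeeping implicitly:

```python
import numpy as np

def lu_partial_pivot(A):
    """Return P, L, U with A = P @ L @ U, U upper triangular, L unit
    lower triangular with |L[i, j]| <= 1 (partial pivoting)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    U, L, perm = A.copy(), np.eye(n), np.arange(n)
    for k in range(n - 1):
        m = k + int(np.argmax(np.abs(U[k:, k])))   # row index of the pivot
        if m != k:
            U[[k, m], k:] = U[[m, k], k:]          # interchange rows of U
            L[[k, m], :k] = L[[m, k], :k]          # and the stored multipliers
            perm[[k, m]] = perm[[m, k]]
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    P = np.eye(n)[perm, :].T                       # so that P.T @ A = L @ U
    return P, L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
P, L, U = lu_partial_pivot(A)
```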
\index{Complete pivoting}
\subsection{Complete Pivoting}\label{section:complete-pivoting}
In partial pivoting, when introducing zeros below the diagonal of the $k$-th column of $\bU$, the $k$-th pivot is determined by scanning the current subcolumn $\bU_{k:n,k}$. In complete pivoting, the largest absolute entry in the current submatrix $\bU_{k:n,k:n}$ is interchanged into the entry $(k,k)$ of $\bU$. Therefore, an additional \textit{column permutation} $\bQ_k$ is applied in each step. The final upper triangular matrix $\bU$ is obtained by
$$
\bU = \bE_{n-1}\bP_{n-1}\ldots (\bE_2\bP_2(\bE_1\bP_1\bA\bQ_1)\bQ_2) \ldots \bQ_{n-1}.
$$
Similarly, the complete pivoting algorithm is formulated in Algorithm~\ref{alg:lu-complete-pivot}.
\begin{algorithm}[H]
\caption{LU Decomposition with Complete Pivoting}
\label{alg:lu-complete-pivot}
\begin{algorithmic}[1]
\Require
Matrix $\bA$ with size $n\times n$;
\State Let $\bU = \bA$;
\For{$k=1$ to $n-1$} \Comment{the value $k$ is to get the $k$-th column of $\bU$}
\State Find a row permutation $\bP_k$ and a column permutation $\bQ_k$ that together swap $u_{kk}$ with the largest element in absolute value of $\bU_{k:n,k:n}$, say $u_{uv} = \max |\bU_{k:n,k:n}|$;
\State $\bU =\bP_k\bU\bQ_k$;
\State Determine the Gaussian transformation $\bE_k $ to introduce zeros below the diagonal of the $k$-th column of $\bU$;
\State $\bU = \bE_k\bU$;
\EndFor
\State Output $\bU$;
\end{algorithmic}
\end{algorithm}
The algorithm requires $\sim (2/3)n^3$ flops and $n^2+(n-1)^2+\ldots +1^2 = O(n^3)$ comparisons resulting from the pivoting procedure. Again, let $\bP=\bP_1 \bP_2\ldots\bP_{n-1}$, $\bQ = \bQ_1 \bQ_2\ldots\bQ_{n-1}$,
$$
\bM_k = (\bP_{n-1} \ldots \bP_{k+1}) \bE_k (\bP_{k+1} \ldots \bP_{n-1}), \qquad \text{for all $k\in \{1,2,\ldots,n-1\}$}
$$
and
$$
\bL = \bM_1^{-1}\bM_2^{-1} \ldots \bM_{n-1}^{-1}.
$$
We have $\bA = \bP\bL\bU \bQ^\top$ or $\bP^\top\bA\bQ = \bL\bU$ as the final decomposition.
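A sketch of complete pivoting along the same lines (the name `lu_complete_pivot` is ours): row swaps are applied to $\bU$ and to the computed part of $\bL$, while column swaps act on $\bU$ alone and are recorded in $\bQ$:

```python
import numpy as np

def lu_complete_pivot(A):
    """Return P, L, U, Q with P.T @ A @ Q = L @ U; at step k the largest
    |entry| of U[k:, k:] is brought to (k, k) by one row and one
    column interchange."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    U, L = A.copy(), np.eye(n)
    rperm, cperm = np.arange(n), np.arange(n)
    for k in range(n - 1):
        i, j = divmod(int(np.argmax(np.abs(U[k:, k:]))), n - k)
        r, c = k + i, k + j
        U[[k, r], :] = U[[r, k], :]     # swap rows k and r of U
        U[:, [k, c]] = U[:, [c, k]]     # swap columns k and c of U
        L[[k, r], :k] = L[[r, k], :k]   # permute the stored multipliers
        rperm[[k, r]] = rperm[[r, k]]
        cperm[[k, c]] = cperm[[c, k]]
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    P = np.eye(n)[rperm, :].T
    Q = np.eye(n)[:, cperm]
    return P, L, U, Q

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
P, L, U, Q = lu_complete_pivot(A)
```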
\index{Pivoting}
\index{Rook pivoting}
\subsection{Rook Pivoting}
The \textit{rook pivoting} provides an alternative to partial and complete pivoting. Instead of choosing the largest value in $|\bU_{k:n,k:n}|$ at the $k$-th step, it searches for an element of $\bU_{k:n,k:n}$ that is \textit{maximal in both its row and its column}. Note that the rook pivot is in general not unique; several entries may satisfy the criterion. For example, consider a submatrix $\bU_{k:n,k:n}$ as follows
$$
\bU_{k:n,k:n} =
\begin{bmatrix}
1 & 2 & 3 & 4 \\
2 & 3 & 7 & 3 \\
5 & 2 & 1 & 2 \\
2 & 1 & 2 & 1 \\
\end{bmatrix},
$$
where the entry $7$ would be chosen by complete pivoting, while any one of $5$, $4$, and $7$ can be identified as a rook pivot.
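A small search confirms the claim for this submatrix; an entry is a rook pivot when it attains the maximum absolute value of both its row and its column:

```python
import numpy as np

U = np.array([[1., 2., 3., 4.],
              [2., 3., 7., 3.],
              [5., 2., 1., 2.],
              [2., 1., 2., 1.]])
# Collect the positions that are maximal (in absolute value) in both
# their row and their column.
rook = [(i, j) for i in range(4) for j in range(4)
        if abs(U[i, j]) == np.max(np.abs(U[i, :]))
        and abs(U[i, j]) == np.max(np.abs(U[:, j]))]
```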
\section{Rank-Revealing LU Decomposition}\label{section:rank-reveal-lu-short}
In many applications, when $\bA$ has rank $r$, a factorization produced by Gaussian elimination with pivoting reveals the rank in the following form\index{Rank-revealing}\index{Rank-revealing LU}
$$
\bP\bA\bQ =
\begin{bmatrix}
\bL_{11} & \bzero \\
\bL_{21}^\top & \bI
\end{bmatrix}
\begin{bmatrix}
\bU_{11} & \bU_{12} \\
\bzero & \bzero
\end{bmatrix},
$$
where $\bL_{11}\in \real^{r\times r}$ and $\bU_{11}\in \real^{r\times r}$ are nonsingular, $\bL_{21}, \bU_{12}\in \real^{r\times (n-r)}$, and $\bP,\bQ$ are permutation matrices. Gaussian elimination with rook pivoting or complete pivoting can produce such a decomposition \citep{hwang1992rank, higham2002accuracy}.
\newpage
\chapter{Cholesky Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Cholesky Decomposition}
\begin{theoremHigh}[Cholesky Decomposition]\label{theorem:cholesky-factor-exist}
Every positive definite matrix $\bA\in \real^{n\times n}$ can be factored as
$$
\bA = \bR^\top\bR,
$$
where $\bR \in \real^{n\times n}$ is an upper triangular matrix \textbf{with positive diagonal elements}. This decomposition is known as the \textit{Cholesky decomposition} of $\bA$. $\bR$ is known as the \textit{Cholesky factor} or \textit{Cholesky triangle} of $\bA$.
\item Alternatively, $\bA$ can be factored as $\bA=\bL\bL^\top$ where $\bL=\bR^\top$ is a lower triangular matrix \textit{with positive diagonals}.
\item Specifically, the Cholesky decomposition is unique (Corollary~\ref{corollary:unique-cholesky-main}, p.~\pageref{corollary:unique-cholesky-main}).
\end{theoremHigh}
The Cholesky decomposition is named after the French military officer and mathematician Andr\'{e}-Louis Cholesky (1875--1918), who developed it in his surveying work. Like the LU decomposition for general linear systems, the Cholesky decomposition is used primarily to solve positive definite linear systems. The development of the solution parallels that of the LU decomposition in Section~\ref{section:lu-linear-sistem} (p.~\pageref{section:lu-linear-sistem}), and we shall not repeat the details.
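A small numerical example of the theorem; NumPy's `np.linalg.cholesky` returns the lower factor $\bL=\bR^\top$, so we transpose to match the convention above:

```python
import numpy as np

A = np.array([[4., 2., 2.],
              [2., 5., 3.],
              [2., 3., 6.]])
# np.linalg.cholesky returns the lower factor L with A = L @ L.T; its
# transpose is the upper triangular Cholesky factor R with A = R.T @ R.
R = np.linalg.cholesky(A).T
```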
\section{Existence of the Cholesky Decomposition via Recursive Calculation}
In this section, we prove the existence of the Cholesky decomposition via recursive calculation. In Section~\ref{section:cholesky-by-qr-spectral} (p.~\pageref{section:cholesky-by-qr-spectral}), we will also prove the existence of the Cholesky decomposition via the QR decomposition and spectral decomposition.
Before showing the existence of the Cholesky decomposition, we need the following definitions and lemmas.
\begin{definition}[Positive Definite and Positive Semidefinite\index{Positive definite}\index{Positive semidefinite}]\label{definition:psd-pd-defini}
A matrix $\bA\in \real^{n\times n}$ is positive definite (PD) if $\bx^\top\bA\bx>0$ for all nonzero $\bx\in \real^n$.
And a matrix $\bA\in \real^{n\times n}$ is positive semidefinite (PSD) if $\bx^\top\bA\bx \geq 0$ for all $\bx\in \real^n$.
\end{definition}
The positive definiteness defined above is a prerequisite for the Cholesky decomposition. We sketch several properties of positive definite matrices as follows:
\begin{tcolorbox}[title={Positive Definite Matrix Property 1 of 5}]
We will show that an equivalent characterization of positive definiteness of a matrix $\bA$ is that $\bA$ has only positive eigenvalues, and of positive semidefiniteness that $\bA$ has only nonnegative eigenvalues. The proof is provided in Section~\ref{section:equivalent-pd-psd} (p.~\pageref{section:equivalent-pd-psd}) as a consequence of the spectral theorem.
\end{tcolorbox}
\begin{tcolorbox}[title={Positive Definite Matrix Property 2 of 5}]
\begin{lemma}[Positive Diagonals of Positive Definite Matrices]\label{lemma:positive-in-pd}
The diagonal elements of a positive definite matrix $\bA$ are all \textbf{positive}. And similarly, the diagonal elements of a positive semidefinite matrix $\bB$ are all \textbf{nonnegative}.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:positive-in-pd}]
From the definition of positive definite matrices, we have $\bx^\top\bA \bx >0$ for all nonzero $\bx$. In particular, let $\bx=\be_i$ where $\be_i$ is the $i$-th unit vector with the $i$-th entry equal to 1 and other entries equal to 0. Then,
$$
\be_i^\top\bA \be_i = a_{ii}>0, \qquad \forall i \in \{1, 2, \ldots, n\},
$$
where $a_{ii}$ is the $i$-th diagonal component. The proof for the second part follows similarly. This completes the proof.
\end{proof}
\begin{tcolorbox}[title={Positive Definite Matrix Property 3 of 5}]
\begin{lemma}[Schur Complement of Positive Definite Matrices\index{Schur complement}]\label{lemma:pd-of-schur}
For any positive definite matrix $\bA\in \real^{n\times n}$, its Schur complement of $a_{11}$ is given by $\bS_{n-1}=\bA_{2:n,2:n}-\frac{1}{a_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top$ which is also positive definite.
\paragraph{A word on the notation} Note that the subscript $n-1$ of $\bS_{n-1}$ means it is of size $(n-1)\times (n-1)$ and it is a Schur complement of an $n\times n$ positive definite matrix. We will use this notation in the following sections.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:pd-of-schur}]
For any nonzero vector $\bv\in \real^{n-1}$, we can construct a vector $\bx\in \real^n$ by the following equation:
$$
\bx =
\begin{bmatrix}
-\frac{1}{a_{11}} \bA_{2:n,1}^\top \bv \\
\bv
\end{bmatrix},
$$
which is nonzero. Then
$$
\begin{aligned}
\bx^\top\bA\bx
&= \left[-\frac{1}{a_{11}} \bv^\top \bA_{2:n,1}\qquad \bv^\top\right]
\begin{bmatrix}
a_{11} & \bA_{2:n,1}^\top \\
\bA_{2:n,1} & \bA_{2:n,2:n}
\end{bmatrix}
\begin{bmatrix}
-\frac{1}{a_{11}} \bA_{2:n,1}^\top \bv \\
\bv
\end{bmatrix} \\
&= \left[-\frac{1}{a_{11}} \bv^\top \bA_{2:n,1}\qquad \bv^\top\right]
\begin{bmatrix}
0 \\
\bS_{n-1}\bv
\end{bmatrix} \\
&= \bv^\top\bS_{n-1}\bv.
\end{aligned}
$$
Since $\bA$ is positive definite, we have $\bx^\top\bA\bx = \bv^\top\bS_{n-1}\bv >0$ for all nonzero $\bv$. Thus, the Schur complement $\bS_{n-1}$ is positive definite as well.
\end{proof}
The above argument extends to PSD matrices as well: if $\bA$ is PSD with $a_{11}>0$, then the Schur complement $\bS_{n-1}$ is also PSD.
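The Schur-complement property is easy to verify numerically. A small NumPy sketch (the example matrix is ours):

```python
import numpy as np

# Example PD matrix (ours).
rng = np.random.default_rng(0)
P = rng.standard_normal((5, 5))
A = P.T @ P + np.eye(5)

# Schur complement of a11: S = A[1:,1:] - A[1:,0] A[1:,0]^T / a11.
S = A[1:, 1:] - np.outer(A[1:, 0], A[1:, 0]) / A[0, 0]

# The lemma: S is positive definite whenever A is.
assert np.all(np.linalg.eigvalsh(S) > 0)
```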
\begin{mdframed}[hidealllines=\mdframehidelineNote,backgroundcolor=\mdframecolorSkip]
\paragraph{A word on the Schur complement} In the proof of Theorem~\ref{theorem:lu-factorization-with-permutation} (p.~\pageref{theorem:lu-factorization-with-permutation}), we have shown this Schur complement $\bS_{n-1}=\bA_{2:n,2:n}-\frac{1}{a_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top$ is also nonsingular if $\bA$ is nonsingular and $a_{11}\neq 0$. Similarly, the Schur complement of $a_{nn}$ in $\bA$ is $\widehat{\bS}_{n-1} =\bA_{1:n-1,1:n-1} - \frac{1}{a_{nn}}\bA_{1:n-1,n} \bA_{1:n-1,n}^\top$ which is also positive definite if $\bA$ is positive definite. This property can help prove the fact that the leading principal minors of positive definite matrices are all positive. See Section~\ref{appendix:leading-minors-pd} (p.~\pageref{appendix:leading-minors-pd}) for more details.
\end{mdframed}
\index{Recursive algorithm}
We then prove the existence of Cholesky decomposition using these lemmas.
\begin{proof}[\textbf{of Theorem~\ref{theorem:cholesky-factor-exist}: Existence of Cholesky Decomposition Recursively}]
For any positive definite matrix $\bA$, we can write out (since $a_{11}$ is positive by Lemma~\ref{lemma:positive-in-pd})
$$
\begin{aligned}
\bA &=
\begin{bmatrix}
a_{11} & \bA_{2:n,1}^\top \\
\bA_{2:n,1} & \bA_{2:n,2:n}
\end{bmatrix} \\
&=\begin{bmatrix}
\sqrt{a_{11}} &\bzero\\
\frac{1}{\sqrt{a_{11}}} \bA_{2:n,1} &\bI
\end{bmatrix}
\begin{bmatrix}
\sqrt{a_{11}} & \frac{1}{\sqrt{a_{11}}}\bA_{2:n,1}^\top \\
\bzero & \bA_{2:n,2:n}-\frac{1}{a_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top
\end{bmatrix}\\
&=\begin{bmatrix}
\sqrt{a_{11}} &\bzero\\
\frac{1}{\sqrt{a_{11}}} \bA_{2:n,1} &\bI
\end{bmatrix}
\begin{bmatrix}
1 & \bzero \\
\bzero & \bA_{2:n,2:n}-\frac{1}{a_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top
\end{bmatrix}
\begin{bmatrix}
\sqrt{a_{11}} & \frac{1}{\sqrt{a_{11}}}\bA_{2:n,1}^\top \\
\bzero & \bI
\end{bmatrix}\\
&=\bR_1^\top
\begin{bmatrix}
1 & \bzero \\
\bzero & \bS_{n-1}
\end{bmatrix}
\bR_1,
\end{aligned}
$$
where
$$\bR_1 =
\begin{bmatrix}
\sqrt{a_{11}} & \frac{1}{\sqrt{a_{11}}}\bA_{2:n,1}^\top \\
\bzero & \bI
\end{bmatrix}.
$$
Since the Schur complement $\bS_{n-1}$ is positive definite by Lemma~\ref{lemma:pd-of-schur}, it can be factored in the same way:
$$
\bS_{n-1}=
\hat{\bR}_2^\top
\begin{bmatrix}
1 & \bzero \\
\bzero & \bS_{n-2}
\end{bmatrix}
\hat{\bR}_2.
$$
Therefore, we have
$$
\begin{aligned}
\bA &= \bR_1^\top
\begin{bmatrix}
1 & \bzero \\
\bzero & \hat{\bR}_2^\top
\begin{bmatrix}
1 & \bzero \\
\bzero & \bS_{n-2}
\end{bmatrix}
\hat{\bR}_2
\end{bmatrix}
\bR_1\\
&=
\bR_1^\top
\begin{bmatrix}
1 &\bzero \\
\bzero &\hat{\bR}_2^\top
\end{bmatrix}
\begin{bmatrix}
1 &\bzero \\
\bzero &\begin{bmatrix}
1 & \bzero \\
\bzero & \bS_{n-2}
\end{bmatrix}
\end{bmatrix}
\begin{bmatrix}
1 &\bzero \\
\bzero &\hat{\bR}_2
\end{bmatrix}
\bR_1\\
&=
\bR_1^\top \bR_2^\top
\begin{bmatrix}
1 &\bzero \\
\bzero &\begin{bmatrix}
1 & \bzero \\
\bzero & \bS_{n-2}
\end{bmatrix}
\end{bmatrix}
\bR_2 \bR_1.
\end{aligned}
$$
where $\bR_2 = \begin{bmatrix}
1 &\bzero \\
\bzero &\hat{\bR}_2
\end{bmatrix}$. The same formula can be applied recursively, and the process continues down to the bottom-right corner, giving us the decomposition
$$
\begin{aligned}
\bA &= \bR_1^\top\bR_2^\top\ldots \bR_n^\top \bR_n\ldots \bR_2\bR_1\\
&= \bR^\top \bR,
\end{aligned}
$$
where $\bR_1, \bR_2, \ldots, \bR_n$ are upper triangular matrices with positive diagonal elements and $\bR=\bR_n\ldots\bR_2\bR_1$ is also an upper triangular matrix with positive diagonal elements, from which the result follows.
\end{proof}
The process in the proof can also be used to compute the Cholesky decomposition in practice and to determine the complexity of the algorithm.
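The recursion in the proof translates directly into an algorithm. Below is a minimal NumPy sketch (the function name is ours), assuming the input is symmetric positive definite:

```python
import numpy as np

def cholesky_recursive(A):
    """Cholesky factor R (upper triangular, positive diagonal) with A = R^T R,
    following the proof: peel off sqrt(a11), then recurse on the Schur
    complement S = A[1:,1:] - A[1:,0] A[1:,0]^T / a11."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    R = np.zeros((n, n))
    R[0, 0] = np.sqrt(A[0, 0])
    if n == 1:
        return R
    R[0, 1:] = A[0, 1:] / R[0, 0]
    S = A[1:, 1:] - np.outer(A[1:, 0], A[1:, 0]) / A[0, 0]
    R[1:, 1:] = cholesky_recursive(S)   # recurse on the (n-1) x (n-1) block
    return R

# Example PD matrix (ours).
rng = np.random.default_rng(1)
P = rng.standard_normal((4, 4))
A = P.T @ P + np.eye(4)
R = cholesky_recursive(A)
assert np.allclose(R.T @ R, A)
assert np.allclose(R, np.triu(R)) and np.all(np.diag(R) > 0)
```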
\begin{lemma}[$\bR^\top\bR$ is PD]\label{lemma:r-to-pd}\index{Positive definite}
Given any upper triangular matrix $\bR$ with positive diagonal elements, then
$
\bA = \bR^\top\bR
$
is positive definite.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:r-to-pd}]
If an upper triangular matrix $\bR$ has positive diagonals, it has full column rank, and the null space of $\bR$ is of dimension 0 by the fundamental theorem of linear algebra (Theorem~\ref{theorem:fundamental-linear-algebra}, p.~\pageref{theorem:fundamental-linear-algebra}). As a result, $\bR\bx \neq \bzero$ for any nonzero vector $\bx$. Thus $\bx^\top\bA\bx = ||\bR\bx||^2 >0$ for any nonzero vector $\bx$.
\end{proof}
The lemma above holds not only for upper triangular matrices $\bR$; it extends to any $\bR$ with linearly independent columns.
\begin{mdframed}[hidealllines=\mdframehideline,backgroundcolor=\mdframecolorSkip]
\paragraph{A word on the two claims} Combining Theorem~\ref{theorem:cholesky-factor-exist} and Lemma~\ref{lemma:r-to-pd}, we conclude that a matrix $\bA$ is positive definite if and only if it can be factored as $\bA=\bR^\top\bR$, where $\bR$ is an upper triangular matrix with positive diagonals.
\end{mdframed}
\section{Sylvester's Criterion: Leading Principal Minors of PD Matrices}\label{appendix:leading-minors-pd}
In Lemma~\ref{lemma:pd-of-schur}, we proved for any positive definite matrix $\bA\in \real^{n\times n}$, its Schur complement of $a_{11}$ is $\bS_{n-1}=\bA_{2:n,2:n}-\frac{1}{a_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top$ and it is also positive definite.
This is also true for its Schur complement of $a_{nn}$, i.e., $\bS_{n-1}^\prime = \bA_{1:n-1,1:n-1} -\frac{1}{a_{nn}} \bA_{1:n-1,n}\bA_{1:n-1,n}^\top$ is also positive definite.
We then claim that all the leading principal minors (Definition~\ref{definition:leading-principle-minors}, p.~\pageref{definition:leading-principle-minors}) of a positive definite matrix $\bA \in \real^{n\times n}$ are positive. This is also known as Sylvester's criterion \citep{swamy1973sylvester, gilbert1991positive}. Recall that these positive leading principal minors imply the existence of the LU decomposition for positive definite matrix $\bA$ by Theorem~\ref{theorem:lu-factorization-without-permutation} (p.~\pageref{theorem:lu-factorization-without-permutation}).
To show Sylvester's criterion, we need the following lemma.
\begin{tcolorbox}[title={Positive Definite Matrix Property 4 of 5}]
\begin{lemma}[Quadratic PD]\label{lemma:quadratic-pd}
Let $\bE$ be any invertible matrix. Then $\bA$ is positive definite if and only if $\bE^\top\bA\bE$ is also positive definite.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:quadratic-pd}]
We will prove by forward implication and reverse implication separately as follows.
\paragraph{Forward implication}
Suppose $\bA$ is positive definite. For any nonzero vector $\bx$, let $\by=\bE\bx$, which is nonzero since $\bE$ is invertible.\footnote{The null space of $\bE$ has dimension 0, so the only solution of $\bE\bx=\bzero$ is the trivial one $\bx=\bzero$.} Then $\bx^\top \bE^\top\bA\bE \bx = \by^\top\bA\by > 0$, which implies $\bE^\top\bA\bE$ is PD.
\paragraph{Reverse implication}
Conversely, suppose $\bE^\top\bA\bE$ is positive definite, so that $\bx^\top \bE^\top\bA\bE \bx>0$ for any nonzero $\bx$. For any nonzero $\by$, there exists a nonzero $\bx$ with $\by =\bE\bx$ since $\bE$ is invertible, and hence $\by^\top\bA\by = \bx^\top \bE^\top\bA\bE \bx > 0$. This implies $\bA$ is PD as well.
\end{proof}
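The lemma can be illustrated numerically; here is a deterministic sketch with a unit upper triangular $\bE$ (our example, chosen so invertibility is obvious):

```python
import numpy as np

A = np.diag([1.0, 2.0, 3.0])       # clearly positive definite
E = np.triu(np.ones((3, 3)))       # unit upper triangular, so invertible (det = 1)

B = E.T @ A @ E
# Congruence with an invertible E preserves positive definiteness.
assert np.all(np.linalg.eigvalsh(B) > 0)
# And back again: E^{-T} B E^{-1} recovers A, which is PD.
Ei = np.linalg.inv(E)
assert np.allclose(Ei.T @ B @ Ei, A)
```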
We then provide a rigorous proof for Sylvester's criterion.
\begin{tcolorbox}[title={Positive Definite Matrix Property 5 of 5}]
\begin{theorem}[Sylvester's Criterion\index{Sylvester's criterion}]\label{lemma:sylvester-criterion}
The real symmetric matrix $\bA\in \real^{n\times n}$ is positive definite if and only if all the leading principal minors of $\bA$ are positive.
\end{theorem}
\end{tcolorbox}
\begin{proof}[of Theorem~\ref{lemma:sylvester-criterion}]
We will prove by forward implication and reverse implication separately as follows.
\paragraph{Forward implication: } We prove the forward implication by induction. Suppose $\bA$ is positive definite. The case $n=1$ is trivial: $\det(\bA)=a_{11}>0$, since the diagonal entries of a positive definite matrix are all positive (Lemma~\ref{lemma:positive-in-pd}, p.~\pageref{lemma:positive-in-pd}).
Suppose all the leading principal minors of any $k\times k$ PD matrix are positive. If we can prove this is also true for $(k+1)\times (k+1)$ PD matrices, then we complete the proof.
Consider a $(k+1)\times (k+1)$ PD matrix with the block form $\bM=\begin{bmatrix}
\bA & \bb\\
\bb^\top & d
\end{bmatrix}$, where $\bA$ is a $k\times k$ submatrix. Then $\bA$ is itself positive definite, so its leading principal minors, which are the first $k$ leading principal minors of $\bM$, are positive by the inductive hypothesis. The Schur complement of $d$, $\bS_{k} = \bA - \frac{1}{d} \bb\bb^\top$, is also positive definite, and its determinant is positive by the inductive hypothesis. By the fact that if matrix $\bM$ has a block formulation $\bM=\begin{bmatrix}
\bA & \bB \\
\bC & \bD
\end{bmatrix}$, then $\det(\bM) = \det(\bD)\det(\bA-\bB\bD^{-1}\bC)$, we obtain
$$
\det(\bM) = d\cdot \det\Big( \bA - \frac{1}{d} \bb\bb^\top\Big)>0,
$$
since $d>0$ by Lemma~\ref{lemma:positive-in-pd}. This completes the induction and the proof of the forward implication.
\paragraph{Reverse implication:} Conversely, suppose all the leading principal minors of $\bA\in \real^{n\times n}$ are positive; in particular, all the leading principal submatrices are nonsingular. Denote the $(i,j)$-th entry of $\bA$ by $a_{ij}$; then $a_{11}>0$ by assumption.
Subtract multiples of the first row of $\bA$ from the rows below it to zero out the entries in the first column of $\bA$ below the first diagonal $a_{11}$. That is, let $\bE_1$ be the unit lower triangular elimination matrix such that
$$
\bE_1\bA =
\begin{bmatrix}
1 & 0 & \ldots &0\\
-\frac{a_{21}}{a_{11}} & 1 & \ldots &0\\
\vdots & \vdots & \ddots &\vdots\\
-\frac{a_{n1}}{a_{11}} & 0 & \ldots &1\\
\end{bmatrix}
\begin{bmatrix}
a_{11} & a_{12} & \ldots &a_{1n}\\
a_{21} & a_{22} & \ldots &a_{2n}\\
\vdots & \vdots & \ddots &\vdots\\
a_{n1} & a_{n2} & \ldots &a_{nn}\\
\end{bmatrix}
=
\begin{bmatrix}
a_{11} & a_{12} & \ldots &a_{1n}\\
0 & (a_{22} -\frac{a_{21}}{a_{11}} a_{12}) & \ldots &(a_{2n}-\frac{a_{21}}{a_{11}} a_{1n})\\
\vdots & \vdots & \ddots &\vdots\\
0 & (a_{n2}-\frac{a_{n1}}{a_{11}}a_{12}) & \ldots &(a_{nn}-\frac{a_{n1}}{a_{11}}a_{1n})\\
\end{bmatrix}.
$$
Now subtract the same multiples of the first column from the other columns to zero out the entries in the first row to the right of $a_{11}$. Since $\bA$ is symmetric, this amounts to multiplying on the right by $\bE_1^\top$, and we obtain
$$
\bE_1 \bA\bE_1^\top =
\begin{bmatrix}
a_{11} & \bzero \\
\bzero & \bS_{n-1}
\end{bmatrix},
\qquad \text{where} \qquad
\bS_{n-1} = \bA_{2:n,2:n}-\frac{1}{a_{11}} \bA_{2:n,1}\bA_{2:n,1}^\top
$$
is the Schur complement of $a_{11}$.
These operations preserve the leading principal minors of $\bA$: since $\bE_1$ is lower triangular, the $k\times k$ leading principal submatrix of $\bE_1 \bA\bE_1^\top$ equals $(\bE_1)_{1:k,1:k}\, \bA_{1:k,1:k}\, (\bE_1)_{1:k,1:k}^\top$, and $\det\big((\bE_1)_{1:k,1:k}\big)=1$ because $\bE_1$ is \textit{unit} lower triangular. Therefore, the leading principal minors of $\bE_1 \bA\bE_1^\top$ are exactly the same as those of $\bA$. In particular, the leading principal minors of $\bS_{n-1}$ are $\frac{\Delta_2}{a_{11}}, \frac{\Delta_3}{a_{11}}, \ldots, \frac{\Delta_n}{a_{11}}$, which are all positive.
Continuing this process on $\bS_{n-1}$, we transform $\bA$ into a diagonal matrix $\bE_n \ldots \bE_1 \bA\bE_1^\top\ldots\bE_n^\top = \diag(d_1, d_2, \ldots, d_n)$, whose diagonal values are the pivots $d_1 = \Delta_1$ and $d_k = \Delta_k/\Delta_{k-1}$ for $k\geq 2$, all of which are positive by assumption. Let $\bE = \bE_n \ldots \bE_1$, which is an invertible matrix. Apparently, $\bE \bA \bE^\top$ is PD (it is diagonal with positive diagonals), which implies $\bA$ is PD as well from Lemma~\ref{lemma:quadratic-pd}.
\end{proof}
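Sylvester's criterion can be checked numerically by computing the leading principal minors. A small NumPy sketch (the helper name and example matrices are ours):

```python
import numpy as np

def leading_minors(A):
    """Leading principal minors Delta_1, ..., Delta_n of A."""
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

# A PD example (ours): all leading principal minors are positive.
rng = np.random.default_rng(2)
P = rng.standard_normal((4, 4))
A_pd = P.T @ P + np.eye(4)
assert all(d > 0 for d in leading_minors(A_pd))

# An indefinite example: Delta_2 = 1 - 4 = -3 < 0, so it cannot be PD.
A_ind = np.array([[1.0, 2.0], [2.0, 1.0]])
assert not all(d > 0 for d in leading_minors(A_ind))
```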
\section{Existence of the Cholesky Decomposition via the LU Decomposition without Permutation}
By Theorem~\ref{lemma:sylvester-criterion} on Sylvester's criterion and the existence of LU decomposition without permutation in Theorem~\ref{theorem:lu-factorization-without-permutation} (p.~\pageref{theorem:lu-factorization-without-permutation}), there is a unique LU decomposition for positive definite matrix $\bA=\bL\bU_0$ where $\bL$ is a unit lower triangular matrix and $\bU_0$ is an upper triangular matrix. Since \textit{the signs of the pivots of a symmetric matrix are the same as the signs of the eigenvalues} \citep{strang1993introduction}: \index{Pivot}
$$
\text{number of positive pivots = number of positive eigenvalues. }
$$
And $\bA = \bL\bU_0$ has the following form
$$
\begin{aligned}
\bA = \bL\bU_0 &=
\begin{bmatrix}
1 & 0 & \ldots & 0 \\
l_{21} & 1 & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
l_{n1} & l_{n2} & \ldots & 1
\end{bmatrix}
\begin{bmatrix}
u_{11} & u_{12} & \ldots & u_{1n} \\
0 & u_{22} & \ldots & u_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & u_{nn}
\end{bmatrix}.\\
\end{aligned}
$$
This implies that the diagonals of $\bU_0$ contain the pivots of $\bA$. And all the eigenvalues of PD matrices are positive (see Lemma~\ref{lemma:eigens-of-PD-psd}, p.~\pageref{lemma:eigens-of-PD-psd}, which is a consequence of the spectral decomposition). Thus the diagonals of $\bU_0$ are positive.
Taking the diagonal of $\bU_0$ out into a diagonal matrix $\bD$, we can rewrite $\bU_0=\bD\bU$ as shown in the following equation
$$
\begin{aligned}
\bA = \bL\bU_0 =
\begin{bmatrix}
1 & 0 & \ldots & 0 \\
l_{21} & 1 & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
l_{n1} & l_{n2} & \ldots & 1
\end{bmatrix}
\begin{bmatrix}
u_{11} & 0 & \ldots & 0 \\
0 & u_{22} & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & u_{nn}
\end{bmatrix}
\begin{bmatrix}
1 & u_{12}/u_{11} & \ldots & u_{1n}/u_{11} \\
0 & 1 & \ldots & u_{2n}/u_{22}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & 1
\end{bmatrix}=\bL\bD\bU,
\end{aligned}
$$
where $\bU$ is a \textit{unit} upper triangular matrix.
By the uniqueness of the LU decomposition without permutation in Corollary~\ref{corollary:unique-lu-without-permutation} (p.~\pageref{corollary:unique-lu-without-permutation}) and the symmetry of $\bA$, it follows that $\bU = \bL^\top$, and $\bA = \bL\bD\bL^\top$. Since the diagonals of $\bD$ are positive, we can set $\bR = \bD^{1/2}\bL^\top$ where $\bD^{1/2}=\diag(\sqrt{u_{11}}, \sqrt{u_{22}}, \ldots, \sqrt{u_{nn}})$ such that $\bA = \bR^\top\bR$ is the Cholesky decomposition of $\bA$, and $\bR$ is upper triangular with positive diagonals.
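The LU route of this section can be sketched in NumPy as follows (the function name is ours; no pivoting is used, which is valid here because the leading principal minors of a PD matrix are positive):

```python
import numpy as np

def cholesky_via_ldl(A):
    """R with A = R^T R, via the route in the text: LU without pivoting
    (valid for PD input), rescaled to A = L D L^T, then R = D^{1/2} L^T."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    U = A.copy()
    L = np.eye(n)
    for k in range(n):                     # Gaussian elimination, no pivoting
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, :] -= L[i, k] * U[k, :]
    d = np.diag(U)                         # the pivots; positive for PD input
    return np.sqrt(d)[:, None] * L.T       # R = D^{1/2} L^T

# Example PD matrix (ours).
rng = np.random.default_rng(4)
P = rng.standard_normal((4, 4))
A = P.T @ P + np.eye(4)
R = cholesky_via_ldl(A)
assert np.allclose(R.T @ R, A)
# By uniqueness, it matches NumPy's factor (numpy returns the lower factor L).
assert np.allclose(R, np.linalg.cholesky(A).T)
```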
\index{Upper triangular}
\subsection{Diagonal Values of the Upper Triangular Matrix}\label{section:cholesky-diagonals}
Suppose $\bA$ is a PD matrix, take $\bA$ as a block matrix $\bA = \begin{bmatrix}
\bA_{k} & \bA_{12} \\
\bA_{21} & \bA_{22}
\end{bmatrix}$ where $\bA_{k}\in \real^{k\times k}$, and its block LU decomposition is given by
$$
\begin{aligned}
\bA &= \begin{bmatrix}
\bA_{k} & \bA_{12} \\
\bA_{21} & \bA_{22}
\end{bmatrix}
=\bL\bU_0=
\begin{bmatrix}
\bL_{k} & \bzero \\
\bL_{21} & \bL_{22}
\end{bmatrix}
\begin{bmatrix}
\bU_{k} & \bU_{12} \\
\bzero & \bU_{22}
\end{bmatrix} \\
&=\begin{bmatrix}
\bL_{k}\bU_{k} & \bL_{k}\bU_{12} \\
\bL_{21}\bU_{k} & \bL_{21}\bU_{12}+\bL_{22}\bU_{22}
\end{bmatrix}.
\end{aligned}
$$
Then the leading principal minor (Definition~\ref{definition:leading-principle-minors}, p.~\pageref{definition:leading-principle-minors}), $\Delta_k=\det(\bA_{1:k,1:k} ) = \det(\bA_{k})$ is given by
$$
\Delta_k = \det(\bA_{k}) = \det(\bL_{k}\bU_{k} ) = \det(\bL_{k} )\det(\bU_{k}).
$$
We notice that $\bL_{k}$ is a unit lower triangular matrix and $\bU_{k}$ is an upper triangular matrix. By the fact that the determinant of a lower triangular matrix (or an upper triangular matrix) is the product of the diagonal entries, we obtain
$$
\Delta_k = \det(\bU_{k})= u_{11} u_{22}\ldots u_{kk},
$$
i.e., the $k$-th leading principal minor of $\bA$ is the determinant of the $k\times k$ leading principal submatrix of $\bU_0$, which is also the product of the first $k$ diagonals of $\bD$ (the matrix from $\bA = \bL\bD\bL^\top$). Let $\bD = \diag(d_1, d_2, \ldots, d_n)$; therefore, we have
$$
\Delta_k = d_1 d_2\ldots d_k = \Delta_{k-1}d_k.
$$
This gives an alternative form of $\bD$, whose entries are the \textbf{squared} diagonal values of the Cholesky factor $\bR$ (from $\bA=\bR^\top\bR$):
$$
\bD = \diag\left(\Delta_1, \frac{\Delta_2}{\Delta_1}, \ldots, \frac{\Delta_n}{\Delta_{n-1}}\right),
$$
where $\Delta_k$ is the $k$-th leading principal minor of $\bA$, for all $k\in \{1,2,\ldots, n\}$. That is, the diagonal values of $\bR$ are given by
$$
\diag\left(\sqrt{\Delta_1}, \sqrt{\frac{\Delta_2}{\Delta_1}}, \ldots, \sqrt{\frac{\Delta_n}{\Delta_{n-1}}}\right).
$$
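The relation between $\diag(\bR)$ and the minor ratios is easy to confirm numerically (the example matrix is ours):

```python
import numpy as np

# Example PD matrix (ours).
rng = np.random.default_rng(5)
P = rng.standard_normal((4, 4))
A = P.T @ P + np.eye(4)

R = np.linalg.cholesky(A).T      # upper triangular factor, A = R^T R
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 5)]
ratios = [minors[0]] + [minors[k] / minors[k - 1] for k in range(1, 4)]

# diag(R) = (sqrt(Delta_1), sqrt(Delta_2/Delta_1), ..., sqrt(Delta_n/Delta_{n-1}))
assert np.allclose(np.diag(R), np.sqrt(ratios))
```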
\subsection{Block Cholesky Decomposition}
Following from the last section, suppose $\bA$ is a PD matrix, take $\bA$ as a block matrix $\bA = \begin{bmatrix}
\bA_k & \bA_{12} \\
\bA_{21} & \bA_{22}
\end{bmatrix}$ where $\bA_k\in \real^{k\times k}$, and its block LU decomposition is given by
$$
\begin{aligned}
\bA &= \begin{bmatrix}
\bA_k & \bA_{12} \\
\bA_{21} & \bA_{22}
\end{bmatrix}
=\bL\bU_0=
\begin{bmatrix}
\bL_k & \bzero \\
\bL_{21} & \bL_{22}
\end{bmatrix}
\begin{bmatrix}
\bU_k & \bU_{12} \\
\bzero & \bU_{22}
\end{bmatrix} \\
&=\begin{bmatrix}
\bL_k\bU_k & \bL_{k}\bU_{12} \\
\bL_{21}\bU_{k} & \bL_{21}\bU_{12}+\bL_{22}\bU_{22}
\end{bmatrix},\\
\end{aligned}
$$
where the $k$-th leading principal submatrix $\bA_k$ of $\bA$ also has its own LU decomposition, $\bA_k=\bL_k\bU_k$. It follows that the Cholesky decomposition of an $n\times n$ matrix contains $n-1$ other Cholesky decompositions within it: $\bA_k = \bR_k^\top\bR_k$, for all $k\in \{1,2,\ldots, n-1\}$. This relies on the fact that any leading principal submatrix $\bA_k$ of the positive definite matrix $\bA$ is also positive definite. To see this, take any nonzero vector $\bx_k\in\real^k$ and append a zero element to obtain $\bx_{k+1} = \begin{bmatrix}
\bx_k\\
0
\end{bmatrix}$. It follows that
$$
\bx_k^\top\bA_k\bx_k = \bx_{k+1}^\top\bA_{k+1}\bx_{k+1} >0,
$$
and $\bA_k$ is positive definite. If we start from $\bA\in \real^{n\times n}$, we will recursively get that $\bA_{n-1}$ is PD, $\bA_{n-2}$ is PD, $\ldots$. And all of them admit a Cholesky decomposition.
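The nesting of Cholesky factors can be verified directly: by uniqueness, the factor of $\bA_k$ is the leading $k\times k$ block of $\bR$. A short NumPy check (the example matrix is ours):

```python
import numpy as np

# Example PD matrix (ours).
rng = np.random.default_rng(6)
P = rng.standard_normal((5, 5))
A = P.T @ P + np.eye(5)
R = np.linalg.cholesky(A).T          # A = R^T R

for k in range(1, 6):
    Rk = np.linalg.cholesky(A[:k, :k]).T
    # The factor of the leading k x k submatrix is the leading block of R.
    assert np.allclose(Rk, R[:k, :k])
```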
\section{Existence of the Cholesky Decomposition via Induction}
In the last section, we proved the existence of the Cholesky decomposition via the LU decomposition without permutation. Following the proof of the LU decomposition in Section~\ref{section:exist-lu-without-perm} (p.~\pageref{section:exist-lu-without-perm}), we realize that the existence of Cholesky decomposition can be a direct consequence of induction as well.
\begin{proof}[\textbf{of Theorem~\ref{theorem:cholesky-factor-exist}: Existence of Cholesky Decomposition by Induction\index{Induction}}]
We will prove by induction that every $n\times n$ positive definite matrix $\bA$ has a decomposition $\bA=\bR^\top\bR$. The $1\times 1$ case is trivial: setting $R=\sqrt{A}$ gives $A=R^2$.
Suppose any $k\times k$ PD matrix $\bA_k$ has a Cholesky decomposition. If we can show that any $(k+1)\times(k+1)$ PD matrix $\bA_{k+1}$ can also be factored in this way, then we complete the proof.
For any $(k+1)\times(k+1)$ PD matrix $\bA_{k+1}$, write it out as
$$
\bA_{k+1} = \begin{bmatrix}
\bA_k & \bb \\
\bb^\top & d
\end{bmatrix}.
$$
We note that $\bA_k$ is PD from the last section.
By the inductive hypothesis, it admits a Cholesky decomposition $\bA_k = \bR_k^\top\bR_k$. We can construct the upper triangular matrix
$$
\bR_{k+1}=\begin{bmatrix}
\bR_k & \br\\
0 & s
\end{bmatrix},
$$
such that it follows that
$$
\bR_{k+1}^\top\bR_{k+1} =
\begin{bmatrix}
\bR_k^\top\bR_k & \bR_k^\top \br\\
\br^\top \bR_k & \br^\top\br+s^2
\end{bmatrix}.
$$
Therefore, if we can choose $\br$ and $s$ (with $s$ positive) such that $\bR_{k+1}^\top \bR_{k+1} = \bA_{k+1}$, then $\bR_{k+1}$ is the Cholesky factor of $\bA_{k+1}$ and the proof is complete. That is, we need to prove
$$
\begin{aligned}
\bb &= \bR_k^\top \br, \\
d &= \br^\top\br+s^2.
\end{aligned}
$$
Since $\bR_k$ is nonsingular, we obtain the unique solution for $\br$ and $s$:
$$
\begin{aligned}
\br &= \bR_k^{-\top}\bb, \\
s &= \sqrt{d - \br^\top\br} = \sqrt{d - \bb^\top\bA_k^{-1}\bb},
\end{aligned}
$$
where we take the nonnegative square root for $s$. However, we need to further prove that $s$ is not only nonnegative, but also positive. Since $\bA_{k+1}$ is PD, Sylvester's criterion gives $\det(\bA_{k+1})>0$. By the fact that if matrix $\bM$ has a block formulation $\bM=\begin{bmatrix}
\bA & \bB \\
\bC & \bD
\end{bmatrix}$, then $\det(\bM) = \det(\bA)\det(\bD-\bC\bA^{-1}\bB)$, we have
$$
\det(\bA_{k+1}) = \det(\bA_k)\det(d- \bb^\top\bA_k^{-1}\bb) = \det(\bA_k)(d- \bb^\top\bA_k^{-1}\bb)>0.
$$
Since $ \det(\bA_k)>0$, we then obtain that $(d- \bb^\top\bA_k^{-1}\bb)>0$ and this implies $s>0$.
We complete the proof.
\end{proof}
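The induction is constructive: it yields the bordered algorithm, which builds $\bR$ one column at a time. A minimal NumPy sketch (the function name is ours), assuming a PD input:

```python
import numpy as np

def cholesky_bordered(A):
    """Build R column by column, following the induction: given R_k with
    A_k = R_k^T R_k, solve R_k^T r = b and set s = sqrt(d - r^T r)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    R = np.zeros((n, n))
    R[0, 0] = np.sqrt(A[0, 0])
    for k in range(1, n):
        b, d = A[:k, k], A[k, k]
        r = np.linalg.solve(R[:k, :k].T, b)   # R_k^T r = b (triangular system)
        R[:k, k] = r
        R[k, k] = np.sqrt(d - r @ r)          # s > 0 when A is PD
    return R

# Example PD matrix (ours).
rng = np.random.default_rng(7)
P = rng.standard_normal((5, 5))
A = P.T @ P + np.eye(5)
R = cholesky_bordered(A)
assert np.allclose(R.T @ R, A)
assert np.allclose(R, np.linalg.cholesky(A).T)   # matches NumPy by uniqueness
```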
\section{Uniqueness of the Cholesky Decomposition}
\begin{corollary}[Uniqueness of Cholesky Decomposition\index{Uniqueness}]\label{corollary:unique-cholesky-main}
The Cholesky decomposition $\bA=\bR^\top\bR$ for any positive definite matrix $\bA\in \real^{n\times n}$ is unique.
\end{corollary}
The uniqueness of the Cholesky decomposition can be an immediate consequence of the uniqueness of the LU decomposition without permutation. Or, an alternative rigorous proof is provided as follows.
\begin{proof}[of Corollary~\ref{corollary:unique-cholesky-main}]
Suppose the Cholesky decomposition is not unique, then we can find two decompositions such that $\bA=\bR_1^\top\bR_1 = \bR_2^\top\bR_2$ which implies
$$
\bR_1\bR_2^{-1}= \bR_1^{-\top} \bR_2^\top.
$$
From the facts that the inverse of an upper triangular matrix is also upper triangular, and that the product of two upper triangular matrices is also upper triangular,\footnote{The same holds for lower triangular matrices: the inverse of a lower triangular matrix is lower triangular, and the product of two lower triangular matrices is lower triangular.} the left-hand side of the above equation is an upper triangular matrix while the right-hand side is a lower triangular matrix. This implies $\bR_1\bR_2^{-1}= \bR_1^{-\top} \bR_2^\top$ is a diagonal matrix, and $\bR_1^{-\top} \bR_2^\top= (\bR_1^{-\top} \bR_2^\top)^\top = \bR_2\bR_1^{-1}$.
Let $\bLambda = \bR_1\bR_2^{-1}= \bR_2\bR_1^{-1}$ denote this diagonal matrix. Each diagonal value of $\bLambda$ is the product of the corresponding diagonal values of $\bR_1$ and $\bR_2^{-1}$ (or of $\bR_2$ and $\bR_1^{-1}$). That is, for
$$
\bR_1=\begin{bmatrix}
r_{11} & r_{12} & \ldots & r_{1n} \\
0 & r_{22} & \ldots & r_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & r_{nn}
\end{bmatrix},
\qquad
\bR_2=
\begin{bmatrix}
s_{11} & s_{12} & \ldots & s_{1n} \\
0 & s_{22} & \ldots & s_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & s_{nn}
\end{bmatrix},
$$
we have,
$$
\begin{aligned}
\bR_1\bR_2^{-1}=
\begin{bmatrix}
\frac{r_{11}}{s_{11}} & 0 & \ldots & 0 \\
0 & \frac{r_{22}}{s_{22}} & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & \frac{r_{nn}}{s_{nn}}
\end{bmatrix}
=
\begin{bmatrix}
\frac{s_{11}}{r_{11}} & 0 & \ldots & 0 \\
0 & \frac{s_{22}}{r_{22}} & \ldots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \ldots & \frac{s_{nn}}{r_{nn}}
\end{bmatrix}
=\bR_2\bR_1^{-1}.
\end{aligned}
$$
Since $\frac{r_{ii}}{s_{ii}} = \frac{s_{ii}}{r_{ii}}$ implies $r_{ii}^2 = s_{ii}^2$, and both $\bR_1$ and $\bR_2$ have positive diagonals, it follows that $r_{11}=s_{11}, r_{22}=s_{22}, \ldots, r_{nn}=s_{nn}$, and hence $\bLambda = \bR_1\bR_2^{-1}= \bR_2\bR_1^{-1} =\bI$.
That is, $\bR_1=\bR_2$, which leads to a contradiction. The Cholesky decomposition is thus unique.
\end{proof}
\section{Last Words on Positive Definite Matrices}
In Section~\ref{section:equivalent-pd-psd} (p.~\pageref{section:equivalent-pd-psd}), we will prove that a matrix $\bA$ is PD if and only if $\bA$ can be factored as $\bA=\bP^\top\bP$ where $\bP$ is nonsingular. And in Section~\ref{section:unique-posere-pd} (p.~\pageref{section:unique-posere-pd}), we will prove that PD matrix $\bA$ can be uniquely factored as $\bA =\bB^2$ where $\bB$ is also PD. The two results are both consequences of the spectral decomposition of PD matrices.
To conclude, for PD matrix $\bA$, we can factor it into $\bA=\bR^\top\bR$ where $\bR$ is an upper triangular matrix with positive diagonals as shown in Theorem~\ref{theorem:cholesky-factor-exist} by Cholesky decomposition, $\bA = \bP^\top\bP$ where $\bP$ is nonsingular in Theorem~\ref{lemma:nonsingular-factor-of-PD} (p.~\pageref{lemma:nonsingular-factor-of-PD}), and $\bA = \bB^2$ where $\bB$ is PD in Theorem~\ref{theorem:unique-factor-pd} (p.~\pageref{theorem:unique-factor-pd}). For clarity, the different factorizations of positive definite matrix $\bA$ are summarized in Figure~\ref{fig:pd-summary}.
\begin{figure}[htbp]
\centering
\begin{widepage}
\centering
\resizebox{0.75\textwidth}{!}{%
\begin{tikzpicture}[>=latex]
\tikzstyle{state} = [draw, very thick, fill=white, rectangle, minimum height=3em, minimum width=6em, node distance=8em, font={\sffamily\bfseries}]
\tikzstyle{stateEdgePortion} = [black,thick];
\tikzstyle{stateEdge} = [stateEdgePortion,->];
\tikzstyle{stateEdge2} = [stateEdgePortion,<->];
\tikzstyle{edgeLabel} = [pos=0.5, text centered, font={\sffamily\small}];
\node[ellipse, name=pdmatrix, draw,font={\sffamily\bfseries}, node distance=7em, xshift=-9em, yshift=-1em,fill={colorals}] {PD Matrix $\bA$};
\node[state, name=bsqure, below of=pdmatrix, xshift=0em, yshift=1em, fill={colorlu}] {$\bB^2$};
\node[state, name=lq, right of=bsqure, xshift=3em, fill={colorlu}] {$\bP^\top\bP$};
\node[state, name=rsqure, left of=bsqure, xshift=-3em, fill={colorlu}] {$\bR^\top\bR$};
\node[ellipse, name=utv, below of=pdmatrix,draw, node distance=7em, xshift=0em, yshift=-4em,font={\tiny},fill={coloruppermiddle}] {PD $\bB$};
\node[ellipse, name=upperr, left of=utv, draw, node distance=8em, xshift=-3em,font={\tiny},fill={coloruppermiddle}] {\parbox{6em}{Upper \\Triangular $\bR$}};
\node[ellipse, name=nonp, right of=utv,draw, node distance=8em, xshift=3em, font={\tiny},fill={coloruppermiddle}] {Nonsingular $\bP$};
\coordinate (lq2inter3) at ($(pdmatrix.east -| lq.north) + (-0em,0em)$);
\draw (pdmatrix.east) edge[stateEdgePortion] (lq2inter3);
\draw (lq2inter3) edge[stateEdge]
node[edgeLabel, text width=7.25em, yshift=0.8em]{\parbox{5em}{Spectral\\Decomposition}} (lq.north);
\coordinate (rqr2inter1) at ($(pdmatrix.west) + (0,0em)$);
\coordinate (rqr2inter3) at ($(rqr2inter1-| rsqure.north) + (-0em,0em)$);
\draw (rqr2inter1) edge[stateEdgePortion] (rqr2inter3);
\draw (rqr2inter3) edge[stateEdge]
node[edgeLabel, text width=8em, yshift=0.8em]{\parbox{2em}{LU/\\Spectral/\\Recursive}} (rsqure.north);
\draw (pdmatrix.south)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{\parbox{5em}{Spectral\\Decomposition} }
(bsqure.north);
\draw (upperr.north)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{}
(rsqure.south);
\draw (utv.north)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{}
(bsqure.south);
\draw (nonp.north)
edge[stateEdge] node[edgeLabel,yshift=0.5em]{}
(lq.south);
\end{tikzpicture}
}
\end{widepage}
\caption{Demonstration of different factorizations on positive definite matrix $\bA$.}
\label{fig:pd-summary}
\end{figure}
\section{Decomposition for Semidefinite Matrices}
For positive semidefinite matrices, the Cholesky decomposition also exists with slight modification.
\begin{theoremHigh}[Semidefinite Decomposition\index{Positive semidefinite}]\label{theorem:semidefinite-factor-exist}
Every positive semidefinite matrix $\bA\in \real^{n\times n}$ can be factored as
$$
\bA = \bR^\top\bR,
$$
where $\bR \in \real^{n\times n}$ is an upper triangular matrix with possible \textbf{zero} diagonal elements.
\end{theoremHigh}
For such decomposition, the diagonal of $\bR$ may not display the rank of $\bA$ \citep{higham2009cholesky}.
More generally, a rank-revealing decomposition for semidefinite decomposition is provided as follows.\index{Rank-revealing}\index{Semidefinite rank-revealing}
\begin{theoremHigh}[Semidefinite Rank-Revealing Decomposition\index{Rank-revealing}]\label{theorem:semidefinite-factor-rank-reveal}
Every positive semidefinite matrix $\bA\in \real^{n\times n}$ with rank $r$ can be factored as
$$
\bP^\top \bA\bP = \bR^\top\bR, \qquad \mathrm{with} \qquad
\bR = \begin{bmatrix}
\bR_{11} & \bR_{12}\\
\bzero &\bzero
\end{bmatrix} \in \real^{n\times n},
$$
where $\bR_{11} \in \real^{r\times r}$ is an upper triangular matrix with positive diagonal elements, and $\bR_{12}\in \real^{r\times (n-r)}$.
\end{theoremHigh}
The proof for the existence of the above rank-revealing decomposition for semidefinite matrices is deferred to Section~\ref{section:semi-rank-reveal-proof} (p.~\pageref{section:semi-rank-reveal-proof}), as a consequence of the spectral decomposition (Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}) and the column-pivoted QR decomposition (Theorem~\ref{theorem:rank-revealing-qr-general}, p.~\pageref{theorem:rank-revealing-qr-general}).
\section{Application: Rank-One Update/Downdate}\label{section:cholesky-rank-one-update}
Updating linear systems after low-rank modifications of the system matrix is widespread in machine learning, statistics, and many other fields. However, it is well known that this update can lead to serious instabilities in the presence
of round-off error \citep{seeger2004low}. If the system matrix is positive definite, it is almost always
possible to use a representation based on the Cholesky decomposition which is
much more numerically stable. We will shortly provide the proof for this rank one update/downdate via Cholesky decomposition in this section.
\subsection{Rank-One Update}\index{Rank-one update}
A rank-one update $\bA^\prime$ of matrix $\bA$ by vector $\bv$ is of the form:
\begin{equation*}
\begin{aligned}
\bA^\prime &= \bA + \bv \bv^\top\\
\bR^{\prime\top}\bR^\prime &= \bR^\top\bR + \bv \bv^\top.
\end{aligned}
\end{equation*}
If we have already calculated the Cholesky factor $\bR$ of $\bA \in \real^{n\times n}$, then the Cholesky factor $\bR^\prime$ of $\bA^\prime$ can be obtained efficiently. Note that $\bA^\prime$ differs from $\bA$ only by a symmetric rank-one matrix. Hence we can compute $\bR^\prime$ from $\bR$ using a rank-one Cholesky update, which takes $O(n^2)$ operations instead of the $O(n^3)$ required to recompute the decomposition from scratch; that is, given the Cholesky decomposition of $\bA$ up front, we compute the Cholesky decomposition of $\bA^\prime$ from it directly.
To see this,
suppose there is a set of orthogonal matrices $\bQ_n, \bQ_{n-1},\ldots, \bQ_1$ such that
$$
\bQ_n \bQ_{n-1}\ldots \bQ_1
\begin{bmatrix}
\bv^\top \\
\bR
\end{bmatrix}
=
\begin{bmatrix}
\bzero \\
\bR^\prime
\end{bmatrix}.
$$
Then $\bR^\prime$ is the desired Cholesky factor of $\bA^\prime$.
To verify this, multiply the left-hand side (l.h.s.) of the above equation by its transpose:
$$
\begin{bmatrix}
\bv & \bR^\top
\end{bmatrix}
\bQ_1 \ldots \bQ_{n-1}\bQ_n
\bQ_n \bQ_{n-1}\ldots \bQ_1
\begin{bmatrix}
\bv^\top \\
\bR
\end{bmatrix}
= \bR^\top\bR + \bv \bv^\top.
$$
And multiplying the right-hand side (r.h.s.) by its transpose gives
$$
\begin{bmatrix}
\bzero & \bR^{\prime\top}
\end{bmatrix}
\begin{bmatrix}
\bzero \\
\bR^\prime
\end{bmatrix}=\bR^{\prime\top}\bR^\prime,
$$
which agrees with the l.h.s. \textit{Givens rotations} are orthogonal matrices that can transform $\bv, \bR$ into $\bzero, \bR^\prime$. We will discuss Givens rotations in detail when proving the existence of the QR decomposition in Section~\ref{section:qr-givens} (p.~\pageref{section:qr-givens}). Here, we only introduce the definition and state the results directly. Feel free to skip this section on a first reading.
\begin{definition}[$n$-th Order Givens Rotation]\label{definition:givens-rotation-in-qr}
A Givens rotation is represented by a matrix of the following form
$$
\bG_{kl}=
\begin{bmatrix}
1 & & & & & & & & &\\
& \ddots & & & & && & &\\
& & 1 & & & & && &\\
& & & c & & & & s & &\\
&& & & 1 & & && &\\
&& & & &\ddots & && &\\
&& & & & & 1&& &\\
&& & -s & & & &c& &\\
&& & & & & & &1 & \\
&& & & & & & & &\ddots
\end{bmatrix}_{n\times n},
$$
where the $(k,k), (k,l), (l,k), (l,l)$ entries are $c, s, -s, c$ respectively, and $s = \sin \theta$ and $c=\cos \theta$ for some $\theta$.
Let $\bdelta_k \in \real^n$ be the zero vector except that the $k$-th entry is 1. Then mathematically, the Givens rotation defined above can be denoted by
$$
\bG_{kl}= \bI + (c-1)(\bdelta_k\bdelta_k^\top + \bdelta_l\bdelta_l^\top) + s(\bdelta_k\bdelta_l^\top -\bdelta_l\bdelta_k^\top ),
$$
where the subscripts $k,l$ indicate the \textbf{rotation is in plane $k$ and $l$}.
Specifically, one can also define the $n$-th order Givens rotation where $(k,k), (k,l), (l,k), (l,l)$ entries are $c, \textcolor{blue}{-s, s}, c$ respectively. The ideas are the same.
\end{definition}
It can be easily verified that the $n$-th order Givens rotation is an orthogonal matrix and its determinant is 1. For any vector $\bx =[x_1, x_2, \ldots, x_n]^\top \in \real^n$, we have $\by = \bG_{kl}\bx$, where
$$
\left\{
\begin{aligned}
&y_k = c \cdot x_k + s\cdot x_l, \\
&y_l = -s\cdot x_k +c\cdot x_l, \\
&y_j = x_j , & (j\neq k,l)
\end{aligned}
\right.
$$
That is, a Givens rotation applied to $\bx$ rotates two components of $\bx$ by some angle $\theta$ and leaves all other components the same.
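As a quick numeric illustration (a minimal NumPy sketch; the function name \texttt{givens} and the test vector are my own choices, not from the text), one can build $\bG_{kl}$ explicitly and verify that it rotates only the $(k,l)$ components and preserves the Euclidean norm:

```python
import numpy as np

def givens(n, k, l, theta):
    """n-th order Givens rotation acting in the (k, l) plane:
    entries (k,k)=(l,l)=cos(theta), (k,l)=sin(theta), (l,k)=-sin(theta)."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[k, k] = c
    G[k, l] = s
    G[l, k] = -s
    G[l, l] = c
    return G

x = np.array([3.0, 4.0, 5.0])
# Choose theta so that the l-th component of G @ x is zeroed out,
# mirroring how the rotations are used in the rank-one update below.
theta = np.arctan2(x[2], x[1])
y = givens(3, 1, 2, theta) @ x
# y = [3, sqrt(4^2 + 5^2), 0]: only components 1 and 2 change
```

The norm of $\bx$ is preserved because $\bG_{kl}$ is orthogonal; zeroing one component this way is exactly the mechanism used in the rank-one update.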
Now, suppose we have an $(n+1)$-th order Givens rotation indexed from $0$ to $n$, and it is given by
$$
\bG_k = \bI + (c_k-1)(\bdelta_0\bdelta_0^\top + \bdelta_k\bdelta_k^\top) + s_k(\bdelta_0\bdelta_k^\top -\bdelta_k\bdelta_0^\top ),
$$
where $c_k = \cos \theta_k, s_k=\sin\theta_k$ for some $\theta_k$, $\bG_k \in \real^{(n+1)\times (n+1)}$, and $\bdelta_k\in \real^{n+1}$ is the zero vector except that the $(k+1)$-th entry is 1.
\begin{mdframed}[hidealllines=\mdframehideline,backgroundcolor=\mdframecolorSkip]
Consider the $k$-th column of the equation
$$
\bG_n \bG_{n-1}\ldots \bG_1
\begin{bmatrix}
\bv^\top \\
\bR
\end{bmatrix}
=
\begin{bmatrix}
\bzero \\
\bR^\prime
\end{bmatrix},
$$
where we let the $k$-th element of $\bv$ be $v_k$, and the $k$-th diagonal element of $\bR$ be $r_{kk}$.
Note that $\sqrt{v_k^2 + r_{kk}^2} \neq 0$ since $r_{kk}>0$;
let $c_k = \frac{r_{kk}}{\sqrt{v_k^2 + r_{kk}^2}}$ and $s_k=-\frac{v_k}{\sqrt{v_k^2 + r_{kk}^2}}$. Then,
$$
\left\{
\begin{aligned}
&v_k \rightarrow c_kv_k+s_kr_{kk}=0; \\
&r_{kk}\rightarrow -s_k v_k +c_kr_{kk}= \sqrt{v_k^2 + r_{kk}^2} = r^\prime_{kk} . \\
\end{aligned}
\right.
$$
That is, $\bG_k$ introduces a zero into the $k$-th element of $\bv$ and a positive value $r^\prime_{kk}$ into the $(k,k)$-th position of $\bR$.
\end{mdframed}
This observation is essential for the rank-one update, and we obtain
$$
\bG_n \bG_{n-1}\ldots \bG_1
\begin{bmatrix}
\bv^\top \\
\bR
\end{bmatrix}
=
\begin{bmatrix}
\bzero \\
\bR^\prime
\end{bmatrix}.
$$
Each Givens rotation takes $6n$ flops, and there are $n$ such rotations, requiring $6n^2$ flops if we keep only the leading term. The complexity to calculate the Cholesky factor of $\bA^\prime$ is thus reduced from $\frac{1}{3} n^3$ to $6n^2$ flops when we already know the Cholesky factor of $\bA$. This algorithm is essential to reduce the complexity of the posterior calculation in Bayesian inference for the Gaussian mixture model \citep{lu2021bayes}.
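The whole procedure can be sketched in a few lines of NumPy (a sketch under the conventions above; the function name and test matrices are mine, and the vectorized column update performs the same rotations as the explicit $\bG_k$ products):

```python
import numpy as np

def cholupdate(R, v):
    """Given upper-triangular R with A = R^T R, return R' with
    R'^T R' = A + v v^T, zeroing v entry by entry with Givens rotations."""
    R, v = R.copy(), v.astype(float).copy()
    n = R.shape[0]
    for k in range(n):
        r = np.hypot(R[k, k], v[k])        # sqrt(r_kk^2 + v_k^2) = r'_kk
        c, s = R[k, k] / r, -v[k] / r      # c_k, s_k as in the text
        # Rotate the workspace row v against row k of R (columns k onward).
        row_R, row_v = R[k, k:].copy(), v[k:].copy()
        v[k:] = c * row_v + s * row_R      # sends v_k to 0
        R[k, k:] = -s * row_v + c * row_R  # sends r_kk to r'_kk > 0
    return R

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)                # positive definite
R = np.linalg.cholesky(A).T                # upper-triangular factor of A
v = rng.standard_normal(4)
R_new = cholupdate(R, v)                   # factor of A + v v^T
```

Since every rotation is orthogonal, $\bv\bv^\top + \bR^\top\bR$ is preserved throughout, which is exactly why the final triangular block is the updated factor.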
\subsection{Rank-One Downdate}
Now suppose we have calculated the Cholesky factor of $\bA$, and $\bA^\prime$ is a downdate of $\bA$ of the form:
\begin{equation*}
\begin{aligned}
\bA^\prime &= \bA - \bv \bv^\top\\
\bR^{\prime\top}\bR^\prime &= \bR^\top\bR - \bv \bv^\top.
\end{aligned}
\end{equation*}
The algorithm is similar and proceeds as follows:
\begin{equation}\label{equation:rank-one-downdate}
\bG_1 \bG_{2}\ldots \bG_n
\begin{bmatrix}
\bzero \\
\bR
\end{bmatrix}
=
\begin{bmatrix}
\bv^\top \\
\bR^\prime
\end{bmatrix}.
\end{equation}
Again, $
\bG_k = \bI + (c_k-1)(\bdelta_0\bdelta_0^\top + \bdelta_k\bdelta_k^\top) + s_k(\bdelta_0\bdelta_k^\top -\bdelta_k\bdelta_0^\top ),
$ can be constructed as follows:
\begin{mdframed}[hidealllines=\mdframehideline,backgroundcolor=\mdframecolorSkip]
Consider the $k$-th column of the equation
$$
\bG_1 \bG_{2}\ldots \bG_n
\begin{bmatrix}
\bzero \\
\bR
\end{bmatrix}
=
\begin{bmatrix}
\bv^\top \\
\bR^\prime
\end{bmatrix}.
$$
Note that $r_{kk} \neq 0$;
let $c_k=\frac{\sqrt{r_{kk}^2 - v_k^2}}{r_{kk}}$ and $s_k = \frac{v_k}{r_{kk}}$. Then,
$$
\left\{
\begin{aligned}
& 0 \rightarrow s_kr_{kk}=v_k; \\
&r_{kk}\rightarrow c_k r_{kk}= \sqrt{r_{kk}^2-v_k^2 }=r^\prime_{kk} . \\
\end{aligned}
\right.
$$
This requires $r^2_{kk} > v_k^2$ for $\bA^\prime$ to be positive definite; otherwise, $c_k$ above does not exist.
\end{mdframed}
Again, one can check that multiplying the l.h.s. of Equation~\eqref{equation:rank-one-downdate} by its transpose yields
$$
\begin{bmatrix}
\bzero & \bR^\top
\end{bmatrix}
\bG_n \ldots \bG_{2}\bG_1
\bG_1 \bG_{2}\ldots \bG_n
\begin{bmatrix}
\bzero \\
\bR
\end{bmatrix} =\bR^\top\bR.
$$
And multiplying the r.h.s. by its transpose yields
$$
\begin{bmatrix}
\bv & \bR^{\prime\top}
\end{bmatrix}
\begin{bmatrix}
\bv^\top \\
\bR^\prime
\end{bmatrix}=\bv\bv^\top + \bR^{\prime\top}\bR^\prime.
$$
This results in $\bR^{\prime\top}\bR^\prime = \bR^\top\bR - \bv \bv^\top$.
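A corresponding downdate sketch follows (again my own NumPy rendering; it applies the eliminations column by column in an equivalent forward form rather than building the $\bG_k$ explicitly, and it assumes $\bA - \bv\bv^\top$ stays positive definite):

```python
import numpy as np

def choldowndate(R, v):
    """Given upper-triangular R with A = R^T R, return R' with
    R'^T R' = A - v v^T (requires the downdated matrix to stay PD)."""
    R, v = R.copy(), v.astype(float).copy()
    n = R.shape[0]
    for k in range(n):
        d = R[k, k] ** 2 - v[k] ** 2
        if d <= 0:                        # r_kk^2 > v_k^2 is required
            raise ValueError("downdated matrix is not positive definite")
        r = np.sqrt(d)                    # r'_kk = sqrt(r_kk^2 - v_k^2)
        c, s = r / R[k, k], v[k] / R[k, k]
        R[k, k] = r
        R[k, k + 1:] = (R[k, k + 1:] - s * v[k + 1:]) / c
        v[k + 1:] = c * v[k + 1:] - s * R[k, k + 1:]
    return R

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)
R = np.linalg.cholesky(A).T
v = 0.1 * rng.standard_normal(4)          # small enough to keep A - v v^T PD
R_new = choldowndate(R, v)                # factor of A - v v^T
```

The explicit failure check mirrors the requirement $r_{kk}^2 > v_k^2$ noted above: the downdate breaks down exactly when the downdated matrix loses positive definiteness.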
\section{Application: Indefinite Rank Two Update}\index{Rank-two update}
Let $\bA = \bR^\top\bR$ be the Cholesky decomposition of $\bA$, \citet{goldfarb1976factorized, seeger2004low} introduce a stable method for the indefinite rank-two update of the form
$$
\bA^\prime = (\bI+\bv\bu^\top)\bA(\bI+\bu\bv^\top).
$$
Let
$$
\bigg\{
\begin{aligned}
\bz &= \bR^{-\top}\bv, \\
\bw &= \bR\bu,
\end{aligned}
\qquad
\rightarrow
\qquad
\bigg\{
\begin{aligned}
\bv &= \bR^{\top}\bz, \\
\bu &= \bR^{-1}\bw.
\end{aligned}
$$
And suppose the LQ decomposition\footnote{We will introduce this shortly in Theorem~\ref{theorem:lq-decomposition} (p.~\pageref{theorem:lq-decomposition}).} of $\bI+\bz\bw^\top$ is given by $\bI+\bz\bw^\top =\bL\bQ$, where $\bL$ is lower triangular and $\bQ$ is orthogonal. Thus, we have
$$
\begin{aligned}
\bA^\prime &= (\bI+\bv\bu^\top)\bA(\bI+\bu\bv^\top)\\
&= (\bI+\bR^{\top}\bz \bw^\top \bR^{-\top })\bA(\bI+\bR^{-1}\bw \bz^\top\bR)\\
&= \bR^\top (\bI+\bz\bw^\top)(\bI+\bw\bz^\top)\bR\\
&= \bR^\top\bL \bQ\bQ^\top \bL^\top\bR \\
&= \bR^\top\bL \bL^\top\bR.
\end{aligned}
$$
Letting $\bR^\prime = \bR^\top\bL$, which is lower triangular, we obtain the Cholesky decomposition $\bA^\prime = \bR^\prime\bR^{\prime\top}$.
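The derivation can be checked numerically (a sketch; since NumPy has no dedicated LQ routine, I obtain the LQ factors from the QR decomposition of the transpose, and the function name is mine):

```python
import numpy as np

def rank_two_update(R, u, v):
    """Given upper-triangular R with A = R^T R, return the lower-triangular
    factor F with (I + v u^T) A (I + u v^T) = F F^T."""
    n = R.shape[0]
    z = np.linalg.solve(R.T, v)           # z = R^{-T} v
    w = R @ u                             # w = R u
    # LQ of I + z w^T via QR of its transpose: I + w z^T = Q0 R0
    Q0, R0 = np.linalg.qr((np.eye(n) + np.outer(z, w)).T)
    L = R0.T                              # I + z w^T = L Q with Q = Q0^T
    return R.T @ L                        # lower triangular product

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)
R = np.linalg.cholesky(A).T
u, v = rng.standard_normal(5), rng.standard_normal(5)
F = rank_two_update(R, u, v)
A_new = (np.eye(5) + np.outer(v, u)) @ A @ (np.eye(5) + np.outer(u, v))
```

Note that NumPy's QR convention may give $F$ some negative diagonal entries, so $F$ is a triangular factorization of $\bA^\prime$ up to column signs rather than a Cholesky factor with a strictly positive diagonal.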
\chapter{Nonnegative Matrix Factorization (NMF)}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Nonnegative Matrix Factorization (NMF)}\index{NMF}\label{section:nmf}
Following the matrix factorization via the ALS, we now consider algorithms for solving the nonnegative matrix factorization (NMF) problem:
\begin{itemize}
\item Given a nonnegative matrix $\bA\in \real^{M\times N}$, find nonnegative matrix factors $\bW\in \real^{M\times K}$ and $\bZ\in \real^{K\times N}$ such that:
$$
\bA\approx\bW\bZ.
$$
\end{itemize}
To measure the quality of the approximation, the loss is again the Frobenius norm of the difference between the two matrices:
$$
L(\bW,\bZ) = ||\bW\bZ-\bA||^2.
$$
\section{NMF via Multiplicative Update}
Following Section~\ref{section:als-netflix} (p.~\pageref{section:als-netflix}), given $\bW\in \real^{M\times K}$, we want to update $\bZ\in \real^{K\times N}$. The gradient with respect to $\bZ$ is given by Equation~\eqref{equation:givenw-update-z-allgd} (p.~\pageref{equation:givenw-update-z-allgd}):
$$
\begin{aligned}
\frac{\partial L(\bZ|\bW)}{\partial \bZ} =2 \bW^\top(\bW\bZ-\bA) \in \real^{K\times N}.
\end{aligned}
$$
Applying the gradient descent idea in Section~\ref{section:als-gradie-descent} (p.~\pageref{section:als-gradie-descent}), the trivial update on $\bZ$ can be done by
$$
(\text{GD on $\bZ$})\gap \bZ \leftarrow \bZ - \eta \left(\frac{\partial L(\bZ|\bW)}{\partial \bZ}\right)=\bZ - \eta \left(2 \bW^\top\bW\bZ-2\bW^\top\bA\right),
$$
where $\eta$ is a small positive step size. Now if we suppose a different step size for each entry of $\bZ$ and incorporate the constant 2 into the step size, the update can be obtained by
$$
(\text{GD$^\prime$ on $\bZ$})\gap
\begin{aligned}
z_{kn} &\leftarrow z_{kn} - \frac{\eta_{kn}}{2} \left(\frac{\partial L(\bZ|\bW)}{\partial \bZ}\right)_{kn}\\
&=z_{kn} - \eta_{kn}(\bW^\top\bW\bZ-\bW^\top\bA)_{kn}, \gap k\in \{1,\ldots,K\}, n\in\{1,\ldots,N\},
\end{aligned}
$$
where $z_{kn}$ is the $(k,n)$-th entry of $\bZ$. Now if we rescale the step size:
$$
\eta_{kn} = \frac{z_{kn}}{(\bW^\top\bW\bZ)_{kn}},
$$
then we obtain the update rule:
$$
(\text{Multiplicative update on $\bZ$})\gap
z_{kn} \leftarrow z_{kn}\frac{(\bW^\top\bA)_{kn}}{(\bW^\top\bW\bZ)_{kn}}, \gap k\in \{1,\ldots,K\}, n\in\{1,\ldots,N\},
$$
which is known as the \textit{multiplicative update}, first developed in \citet{lee2001algorithms} and further discussed in \citet{pauca2006nonnegative}. Analogously, the multiplicative update on $\bW$ can be obtained by
$$
(\text{Multiplicative update on $\bW$})\gap
w_{mk} \leftarrow w_{mk} \frac{(\bA\bZ^\top)_{mk}}{(\bW\bZ\bZ^\top)_{mk}}, \,\, m\in\{1,\ldots,M\}, k\in\{1,\ldots,K\}.
$$
\begin{theorem}[Convergence of Multiplicative Update]
The loss $L(\bW,\bZ)=||\bW\bZ-\bA||^2$ is non-increasing under the multiplicative update rules:
$$
\left\{
\begin{aligned}
z_{kn} &\leftarrow z_{kn}\frac{(\bW^\top\bA)_{kn}}{(\bW^\top\bW\bZ)_{kn}}, \gap k\in \{1,\ldots,K\}, n\in\{1,\ldots,N\};\\
w_{mk} &\leftarrow w_{mk} \frac{(\bA\bZ^\top)_{mk}}{(\bW\bZ\bZ^\top)_{mk}}, \gap m\in\{1,\ldots,M\}, k\in\{1,\ldots,K\}.
\end{aligned}
\right.
$$
\end{theorem}
We refer to \citet{lee2001algorithms} for the proof of the above theorem. Clearly, the approximations $\bW$ and $\bZ$ remain nonnegative during the updates.
It is generally best to update $\bW$ and $\bZ$ ``simultaneously'', instead of updating each
matrix completely before the other. In this case, after updating a row of $\bZ$, we update the corresponding column of $\bW$.
In the implementation, a small positive quantity, say the square root of the machine
precision, should be added to the denominators in the updates of $\bW$ and $\bZ$
at each iteration step; a small value such as $\epsilon=10^{-9}$ will suffice. The full procedure is shown in Algorithm~\ref{alg:nmf-multiplicative}.
\begin{algorithm}[h]
\caption{NMF via Multiplicative Updates}
\label{alg:nmf-multiplicative}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ randomly with nonnegative entries.
\State choose a stop criterion on the approximation error $\delta$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{$|| \bA- (\bW\bZ)||^2>\delta $ and $iter<C$}
\State $iter=iter+1$;
\For{$k=1$ to $K$}
\For{$n=1$ to $N$} \Comment{update $k$-th row of $\bZ$}
\State $z_{kn} \leftarrow z_{kn}\frac{(\bW^\top\bA)_{kn}}{(\bW^\top\bW\bZ)_{kn}+\textcolor{blue}{\epsilon}}$;
\EndFor
\For{$m=1$ to $M$} \Comment{update $k$-th column of $\bW$}
\State $w_{mk} \leftarrow w_{mk} \frac{(\bA\bZ^\top)_{mk}}{(\bW\bZ\bZ^\top)_{mk}+\textcolor{blue}{\epsilon}}$;
\EndFor
\EndFor
\EndWhile
\State Output $\bW,\bZ$;
\end{algorithmic}
\end{algorithm}
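For reference, Algorithm~\ref{alg:nmf-multiplicative} admits a compact NumPy sketch (here, for brevity, the updates are applied to whole matrices per sweep rather than interleaved row by row and column by column; all names are mine):

```python
import numpy as np

def nmf_multiplicative(A, K, n_iter=500, eps=1e-9, seed=0):
    """NMF A ~ W Z via the multiplicative updates of Lee and Seung."""
    rng = np.random.default_rng(seed)
    M, N = A.shape
    W = rng.random((M, K)) + 0.1          # nonnegative initialization
    Z = rng.random((K, N)) + 0.1
    for _ in range(n_iter):
        # elementwise multiplicative updates; eps guards the denominators
        Z *= (W.T @ A) / (W.T @ W @ Z + eps)
        W *= (A @ Z.T) / (W @ Z @ Z.T + eps)
    return W, Z

rng = np.random.default_rng(3)
A = rng.random((8, 6))                    # nonnegative data matrix
W, Z = nmf_multiplicative(A, K=3)
```

Because each factor is multiplied by a nonnegative ratio, nonnegativity is preserved automatically, with no projection step needed.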
\section{Regularization}
Similar to the ALS with regularization in Section~\ref{section:regularization-extention-general} (p.~\pageref{section:regularization-extention-general}), where the regularization helps extend the ALS to general matrices, we can also add a regularization in the context of NMF:
$$
L(\bW,\bZ) =||\bW\bZ-\bA||^2 +\lambda_w ||\bW||^2 + \lambda_z ||\bZ||^2, \qquad \lambda_w>0, \lambda_z>0,
$$
where the matrix norm is still the Frobenius norm. The gradient with respect to $\bZ$ given $\bW$ is the same as that in Equation~\eqref{equation:als-regulari-gradien} (p.~\pageref{equation:als-regulari-gradien}):
$$
\begin{aligned}
\frac{\partial L(\bZ|\bW)}{\partial \bZ} =2\bW^\top(\bW\bZ-\bA) + \textcolor{blue}{2\lambda_z\bZ} \in \real^{K\times N}.
\end{aligned}
$$
The trivial gradient descent update can be obtained by
$$
(\text{GD on }\bZ) \gap \bZ \leftarrow \bZ - \eta \left(\frac{\partial L(\bZ|\bW)}{\partial \bZ}\right)=\bZ - \eta \left(2 \bW^\top\bW\bZ-2\bW^\top\bA+\textcolor{blue}{2\lambda_z\bZ}\right).
$$
Analogously, if we suppose a different step size for each entry of $\bZ$ and incorporate the constant 2 into the step size, the update can be obtained by
$$
(\text{GD$^\prime$ on $\bZ$})\gap
\begin{aligned}
z_{kn} &\leftarrow z_{kn} - \frac{\eta_{kn}}{2} \left(\frac{\partial L(\bZ|\bW)}{\partial \bZ}\right)_{kn}\\
&=z_{kn} - \eta_{kn}(\bW^\top\bW\bZ-\bW^\top\bA+\textcolor{blue}{\lambda_z\bZ})_{kn}, \\
&\gap\gap\qquad \gap k\in \{1,\ldots,K\}, n\in\{1,\ldots,N\}.
\end{aligned}
$$
Now if we rescale the step size:
$$
\eta_{kn} = \frac{z_{kn}}{(\bW^\top\bW\bZ)_{kn}},
$$
then we obtain the update rule:
$$
\begin{aligned}
(\text{Multiplicative update on $\bZ$})\gap
z_{kn} &\leftarrow z_{kn}\frac{(\bW^\top\bA)_{kn}-
\textcolor{blue}{\lambda_z z_{kn}}
}{(\bW^\top\bW\bZ)_{kn}}, \\
&\gap\gap\qquad \gap k\in \{1,\ldots,K\}, n\in\{1,\ldots,N\}.
\end{aligned}
$$
Similarly, the multiplicative update on $\bW$ can be obtained by
$$
\begin{aligned}
(\text{Multiplicative update on $\bW$})\gap
w_{mk} &\leftarrow w_{mk} \frac{(\bA\bZ^\top)_{mk}
-\textcolor{blue}{\lambda_w w_{mk}}
}{(\bW\bZ\bZ^\top)_{mk}}, \\
&\gap\gap\qquad \gap m\in\{1,\ldots,M\}, k\in\{1,\ldots,K\}.
\end{aligned}
$$
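A single sweep of these regularized updates might look as follows (a sketch; clipping the numerator at zero is a practical safeguard I add because the numerator can otherwise turn negative, and it is not part of the derivation above):

```python
import numpy as np

def regularized_step(A, W, Z, lam_w=0.1, lam_z=0.1, eps=1e-9):
    """One sweep of the regularized multiplicative updates.
    The np.maximum(..., 0) clips negative numerators so that the
    factors stay nonnegative (an added safeguard, not in the text)."""
    Z = Z * np.maximum(W.T @ A - lam_z * Z, 0.0) / (W.T @ W @ Z + eps)
    W = W * np.maximum(A @ Z.T - lam_w * W, 0.0) / (W @ Z @ Z.T + eps)
    return W, Z

rng = np.random.default_rng(4)
A = rng.random((8, 6))
W, Z = rng.random((8, 3)), rng.random((3, 6))
W, Z = regularized_step(A, W, Z)
```

With $\lambda_w=\lambda_z=0$ and no clipping, this reduces exactly to the unregularized multiplicative update.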
In the collaborative filtering context, it is known that the NMF via multiplicative update can overfit even though the convergence results are good; the overfitting can be partly reduced via the regularization. However, the out-of-sample performance may still be low, while Bayesian approaches via generative models can prevent overfitting to a large extent \citep{lu2022flexible}.
\section{Initialization}
In the above discussion, we initialize $\bW$ and $\bZ$ randomly. However, there are alternative strategies designed to obtain better initial estimates in the hope of converging more rapidly to a good solution \citep{boutsidis2008svd, gillis2014and}. We sketch these methods as follows:
\begin{itemize}
\item \textit{Clustering techniques.} Use some clustering methods on the columns of $\bA$, make the cluster means of the top $K$ clusters as the columns of $\bW$, and initialize $\bZ$ as a proper scaling of
the cluster indicator matrix (that is, $z_{kn}\neq 0$ indicates $\ba_n$ belongs to the $k$-th cluster);
\item \textit{Subset selection.} Pick $K$ columns of $\bA$ and set those as the initial columns for $\bW$, and analogously, $K$ rows of $\bA$ are selected to form the rows of $\bZ$;
\item \textit{SVD-based.} Suppose the SVD of $\bA$ is $\bA=\sum_{i=1}^{r}\sigma_i\bu_i\bv_i^\top$, where each factor $\sigma_i\bu_i\bv_i^\top$ is a rank-one matrix, with possibly negative values in $\bu_i, \bv_i$ and nonnegative $\sigma_i$. Denoting $[x]_+=\max(x, 0)$, we notice
$$
\bu_i\bv_i^\top = [\bu_i]_+[\bv_i]_+^\top+[-\bu_i]_+[-\bv_i]_+^\top-[-\bu_i]_+[\bv_i]_+^\top-[\bu_i]_+[-\bv_i]_+^\top.
$$
Either $[\bu_i]_+[\bv_i]_+^\top$ or $[-\bu_i]_+[-\bv_i]_+^\top$ can be selected to contribute a column of $\bW$ and a row of $\bZ$.
\end{itemize}
However, these techniques are not guaranteed to perform better theoretically.
We refer readers to the aforementioned papers for more details.
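The SVD-based strategy can be sketched as follows (my simplified rendering of the idea behind the NNDSVD-style initialization of \citet{boutsidis2008svd}; the scaling by $\sqrt{\sigma_i}$ is one common convention, not mandated by the text):

```python
import numpy as np

def svd_init(A, K):
    """Nonnegative (W, Z) from the truncated SVD: for each rank-one term,
    keep whichever signed pair [u]_+[v]_+^T or [-u]_+[-v]_+^T has more mass."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    pos = lambda x: np.maximum(x, 0.0)
    W = np.zeros((A.shape[0], K))
    Z = np.zeros((K, A.shape[1]))
    for i in range(K):
        u, v = U[:, i], Vt[i, :]
        # flip signs if the negative parts carry more mass
        if np.linalg.norm(pos(u)) * np.linalg.norm(pos(v)) < \
           np.linalg.norm(pos(-u)) * np.linalg.norm(pos(-v)):
            u, v = -u, -v
        W[:, i] = np.sqrt(S[i]) * pos(u)
        Z[i, :] = np.sqrt(S[i]) * pos(v)
    return W, Z

rng = np.random.default_rng(5)
A = rng.random((8, 6))
W0, Z0 = svd_init(A, K=3)    # nonnegative starting point for the updates
```

The resulting pair can then be passed to the multiplicative updates in place of a random initialization.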
\section{Movie Recommender Context}
Both the NMF and the ALS approximate the matrix and reconstruct the entries in the matrix with a set of basis vectors. The basis in the NMF is composed of vectors with nonnegative elements while the basis in the ALS can have positive or negative values.
The difference is that the NMF reconstructs each vector as a nonnegative summation of the basis vectors, with a relatively small component in the direction of each basis vector.
In the ALS, by contrast, the data is modeled as a linear combination of the basis vectors: one can add or subtract vectors as needed, and the components in the direction of each basis vector can be large positive or negative values. Therefore, depending on the application, one or the other factorization can be utilized to describe the data with different meanings.
In the context of a movie recommender system, the rows of $\bW$ represent the features of the movies and the columns of $\bZ$ represent the features of the users.
In the NMF, one can say that a movie is 0.5 comedy, 0.002 action, and 0.09 romantic. In the ALS, however, one can get combinations such as 4 comedy, $-0.05$ comedy, and $-3$ drama, i.e., a positive or negative component on each feature.
The ALS and the NMF are similar in the sense that the importance of the basis vectors is not hierarchical. The key difference between the ALS and the SVD, in contrast, is that in the SVD, the importance of each vector in the basis is relative to the value of the singular value associated with that vector. For the SVD of $\bA=\sum_{i=1}^{r}\sigma_i\bu_i\bv_i^\top$,
this usually means that the reconstruction $\sigma_1\bu_1\bv_1^\top$ via the first set of basis vectors dominates and is the most used set to reconstruct the data, then the second set, and so on. So the basis in the SVD has an implicit hierarchy, which does not happen in the ALS or the NMF.
\chapter{Alternating Least Squares (ALS)}\label{section:als}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Netflix Recommender and Matrix Factorization}\label{section:als-netflix}
In the Netflix prize \citep{bennett2007netflix}, the goal is to predict the ratings of users for different movies, given the existing ratings of those users for other movies.
We index $M$ movies with $m= 1, 2,\ldots,M$ and $N$ users
by $n = 1, 2,\ldots,N$. We denote the rating of the $n$-th user for the $m$-th movie by $a_{mn}$. Define $\bA$ to be an $M \times N$ rating matrix with columns $\ba_n \in \real^M$ containing ratings of the $n$-th user. Note that many ratings $a_{mn}$ are missing and our goal is to predict those missing ratings accurately.
We formally consider algorithms for solving the following problem: The matrix $\bA$ is approximately factorized into an $M\times K$ matrix $\bW$ and a $K \times N$ matrix $\bZ$. Usually $K$ is chosen to be smaller than $M$ or $N$, so that $\bW$ and $\bZ$ are smaller than the original
matrix $\bA$. This results in a compressed version of the original data matrix.
An
appropriate decision on the value of $K$ is critical in practice, but the choice of $K$ is very
often problem dependent.
The factorization is significant in the following sense: suppose $\bA=[\ba_1, \ba_2, \ldots, \ba_N]$ and $\bZ=[\bz_1, \bz_2, \ldots, \bz_N]$ are the column partitions of $\bA$ and $\bZ$, respectively; then $\ba_n \approx \bW\bz_n$, i.e., each column $\ba_n$ is approximated by a linear combination of the columns of $\bW$, weighted by the components in $\bz_n$. Therefore, the columns of $\bW$ can be thought of as containing the column basis of $\bA$. This is similar to the factorization in the data interpretation part (Part~\ref{part:data-interation}, p.~\pageref{part:data-interation}). What's different is that we are not restricting $\bW$ to be exact columns from $\bA$.
To find the approximation $\bA\approx\bW\bZ$, we need to define a loss function that measures the distance between $\bA$ and $\bW\bZ$. The loss function is selected to be the Frobenius norm of the difference between the two matrices, which vanishes if $\bA=\bW\bZ$; the advantage of this choice will be seen shortly.
To simplify the problem, let us
assume that there are no missing
ratings first. Project data vectors $\ba_n$ to a smaller
dimension $\bz_n \in \real^K$ with $K<M$,
such that the \textit{reconstruction error} measured by Frobenius norm is minimized (assume $K$ is known):
\begin{equation}\label{equation:als-per-example-loss}
\mathop{\min}_{\bW,\bZ} \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bw_m^\top\bz_n\right)^2,
\end{equation}
where $\bW=[\bw_1^\top; \bw_2^\top; \ldots; \bw_M^\top]\in \real^{M\times K}$ and $\bZ=[\bz_1, \bz_2, \ldots, \bz_N] \in \real^{K\times N}$ containing $\bw_m$'s and $\bz_n$'s as \textbf{rows and columns} respectively. The loss form in Equation~\eqref{equation:als-per-example-loss} is known as the \textit{per-example loss}. It can be equivalently written as
$$
L(\bW,\bZ) = \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bw_m^\top\bz_n\right)^2 = ||\bW\bZ-\bA||^2.
$$
Moreover, the loss $L(\bW,\bZ)=\sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bw_m^\top\bz_n\right)^2$ is convex with respect to $\bZ$ given $\bW$ and vice versa. Therefore, we can first minimize with respect to $\bZ$
given $\bW$ and then minimize with respect to $\bW$
given $\bZ$:
$$
\left\{
\begin{aligned}
\bZ &\leftarrow \mathop{\arg \min}_{\bZ} L(\bW,\bZ); \qquad \text{(ALS1)} \\
\bW &\leftarrow \mathop{\arg \min}_{\bW} L(\bW,\bZ). \qquad \text{(ALS2)}
\end{aligned}
\right.
$$
This is known as the \textit{coordinate descent algorithm}, in which case we employ least squares alternately. Hence it is also called \textit{alternating least squares (ALS)} \citep{comon2009tensor, takacs2012alternating, giampouras2018alternating}. The convergence is guaranteed if the loss function $L(\bW,\bZ)$ decreases at each iteration, and we shall discuss more on this in the sequel.\index{ALS}
\begin{remark}[Convexity and Global Minimum]
Although the loss function defined by the Frobenius norm $|| \bW\bZ-\bA||^2$ is convex in $\bW$ given $\bZ$ or vice versa, it is not convex in both variables together. Therefore, we are not guaranteed to find the global minimum; however, the algorithm is assured to converge to a local minimum.
\end{remark}
\paragraph{Given $\bW$, Optimizing $\bZ$}
Now, let's see what is in the problem of $\bZ \leftarrow \mathop{\arg \min}_{\bZ} L(\bW,\bZ)$. When there exists a unique minimum of the loss function $L(\bW,\bZ)$ with respect to $\bZ$, we speak of the \textit{least squares} minimizer of $\mathop{\arg \min}_{\bZ} L(\bW,\bZ)$.
Given $\bW$, $L(\bW,\bZ)$ can be written as $L(\bZ|\bW)$ to emphasize on the variable of $\bZ$:
$$
\begin{aligned}
L(\bZ|\bW) &= ||\bW\bZ-\bA||^2= \left\Vert\bW[\bz_1,\bz_2,\ldots, \bz_N]-[\ba_1,\ba_2,\ldots,\ba_N]\right\Vert^2=\left\Vert
\begin{bmatrix}
\bW\bz_1 - \ba_1 \\
\bW\bz_2 - \ba_2\\
\vdots \\
\bW\bz_N - \ba_N
\end{bmatrix}
\right\Vert^2.
\end{aligned}\footnote{The matrix norm used here is the Frobenius norm such that $|| \bA ||= \sqrt{\sum_{i=1,j=1}^{m,n} (\bA_{ij})^2}$ if $\bA\in \real^{m\times n}$. And the vector norm used here is the $l_2$ norm such that $||\bx||_2 = \sqrt{\sum_{i=1}^{n}x_i^2}$ if $\bx\in \real^n$.}
$$
Now, if we define
$$
\widetildebW =
\begin{bmatrix}
\bW & \bzero & \ldots & \bzero\\
\bzero & \bW & \ldots & \bzero\\
\vdots & \vdots & \ddots & \vdots \\
\bzero & \bzero & \ldots & \bW
\end{bmatrix}
\in \real^{MN\times KN},
\gap
\widetildebz=
\begin{bmatrix}
\bz_1 \\ \bz_2 \\ \vdots \\ \bz_N
\end{bmatrix}
\in \real^{KN},
\gap
\widetildeba=
\begin{bmatrix}
\ba_1 \\ \ba_2 \\ \vdots \\ \ba_N
\end{bmatrix}
\in \real^{MN},
$$
then the (ALS1) problem can be reduced to the normal least squares for minimizing $||\widetildebW \widetildebz - \widetildeba||^2$ with respect to $\widetildebz$. And the solution is given by
$$
\widetildebz = (\widetildebW^\top\widetildebW)^{-1} \widetildebW^\top\widetildeba.
$$
Alternatively,
a direct way to solve (ALS1) is to find the differential of $L(\bZ|\bW)$ with respect to $\bZ$:
\begin{equation}\label{equation:givenw-update-z-allgd}
\begin{aligned}
\frac{\partial L(\bZ|\bW)}{\partial \bZ} &=
\frac{\partial \,\,tr\left((\bW\bZ-\bA)(\bW\bZ-\bA)^\top\right)}{\partial \bZ}\\
&=\frac{\partial \,\,tr\left((\bW\bZ-\bA)(\bW\bZ-\bA)^\top\right)}{\partial (\bW\bZ-\bA)}
\frac{\partial (\bW\bZ-\bA)}{\partial \bZ}\\
&\stackrel{\star}{=}2 \bW^\top(\bW\bZ-\bA) \in \real^{K\times N},
\end{aligned}
\end{equation}
where the first equality is from the definition of the Frobenius norm such that $|| \bA || = \sqrt{\sum_{i=1,j=1}^{m,n} (\bA_{ij})^2}=\sqrt{tr(\bA\bA^\top)}$, and equality ($\star$) comes from the fact that $\frac{\partial tr(\bA\bA^\top)}{\partial \bA} = 2\bA$. When the loss function is a differentiable function of $\bZ$, we may determine the least squares solution by differential calculus, and a minimum of the function
$L(\bZ|\bW)$ must be a root of the equation:
$$
\frac{\partial L(\bZ|\bW)}{\partial \bZ} = \bzero.
$$
By finding the root of the above equation, we have the ``candidate'' update on $\bZ$ that finds the minimizer of $L(\bZ|\bW)$:
\begin{equation}\label{equation:als-z-update}
\bZ = (\bW^\top\bW)^{-1} \bW^\top \bA \leftarrow \mathop{\arg \min}_{\bZ} L(\bZ|\bW).
\end{equation}
Before we can declare that a root of the above equation is actually a minimizer rather than a maximizer (which is why we call the update a ``candidate'' update above), we need to verify that the function is convex. If the function is twice differentiable, this can be done by verifying
$$
\frac{\partial^2 L(\bZ|\bW)}{\partial \bZ^2} > 0,
$$
i.e., the Hessian matrix is positive definite (recall the definition of positive definiteness, Definition~\ref{definition:psd-pd-defini}, p.~\pageref{definition:psd-pd-defini}). To see this, we write out the twice differential
$$
\frac{\partial^2 L(\bZ|\bW)}{\partial \bZ^2}= 2\bW^\top\bW \in \real^{K\times K},
$$
which has full rank if $\bW\in \real^{M\times K}$ has full rank (Lemma~\ref{lemma:rank-of-ata}, p.~\pageref{lemma:rank-of-ata}) and $K<M$.
\begin{remark}[Positive definite Hessian if $\bW$ has full rank]
We here claim that if $\bW$ has full rank, then $\frac{\partial^2 L(\bZ|\bW)}{\partial \bZ^2}$ is positive definite.
This can be done by checking that when $\bW$ has full rank, $\bW\bx=\bzero$ only when $\bx=\bzero$ since the null space of $\bW$ is of dimension 0. Therefore,
$$
\bx^\top (2\bW^\top\bW)\bx >0, \qquad \text{for any nonzero vector $\bx\in \real^K$}.
$$
\end{remark}
Now, the point is that we need to check \textcolor{blue}{whether $\bW$ has full rank so that the Hessian of $L(\bZ|\bW)$ is positive definite}; otherwise, we cannot claim that the update of $\bZ$ in Equation~\eqref{equation:als-z-update} decreases the loss (due to convexity), i.e., that the matrix decomposition is going in the right direction to better approximate the original matrix $\bA$ by $\bW\bZ$ in each iteration.
We will shortly come back to the positive definiteness of the Hessian matrix in the sequel which relies on the following lemma
\begin{lemma}[Rank of $\bZ$ after Updating]\label{lemma:als-update-z-rank}
Suppose $\bA\in \real^{M\times N}$ has full rank with $M\leq N$ and $\bW\in \real^{M\times K}$ has full rank with $K<M$, then the update of $\bZ=(\bW^\top\bW)^{-1} \bW^\top \bA \in \real^{K\times N}$ in Equation~\eqref{equation:als-z-update} has full rank.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:als-update-z-rank}]
Since $\bW$ has full rank, $\bW^\top\bW\in \real^{K\times K}$ has full rank (Lemma~\ref{lemma:rank-of-ata}, p.~\pageref{lemma:rank-of-ata}), so that $(\bW^\top\bW)^{-1}$ has full rank.
Suppose $\bW^\top\bx=\bzero$; this implies $(\bW^\top\bW)^{-1} \bW^\top\bx=\bzero$. Thus
$$
\nspace(\bW^\top) \subseteq \nspace\left((\bW^\top\bW)^{-1} \bW^\top\right).
$$
Moreover, suppose $(\bW^\top\bW)^{-1} \bW^\top\bx=\bzero$; since $(\bW^\top\bW)^{-1}$ is invertible, this implies $\bW^\top\bx=(\bW^\top\bW)\bzero=\bzero$, and
$$
\nspace\left((\bW^\top\bW)^{-1} \bW^\top\right)\subseteq \nspace(\bW^\top).
$$
As a result, by ``sandwiching", it follows that
\begin{equation}\label{equation:als-z-sandiwch1}
\nspace(\bW^\top) = \nspace\left((\bW^\top\bW)^{-1} \bW^\top\right).
\end{equation}
Therefore, $(\bW^\top\bW)^{-1} \bW^\top$ has full rank $K$. Let $\bT=(\bW^\top\bW)^{-1} \bW^\top\in \real^{K\times M}$, and suppose $\bT^\top\bx=\bzero$. This implies $\bA^\top\bT^\top\bx=\bzero$, and
$$
\nspace(\bT^\top) \subseteq \nspace(\bA^\top\bT^\top).
$$
Similarly, suppose $\bA^\top(\bT^\top\bx)=\bzero$. Since $\bA$ has full rank $M$, the row space of $\bA^\top$ equals the column space of $\bA$ with $dim\left(\cspace(\bA)\right)=M$, so that $dim\left(\nspace(\bA^\top)\right) = M-dim\left(\cspace(\bA)\right)=0$; hence $(\bT^\top\bx)$ must be zero. Therefore, $\bx$ is in the null space of $\bT^\top$ if $\bx$ is in the null space of $\bA^\top\bT^\top$:
$$
\nspace(\bA^\top\bT^\top)\subseteq \nspace(\bT^\top).
$$
By ``sandwiching" again,
\begin{equation}\label{equation:als-z-sandiwch2}
\nspace(\bT^\top) = \nspace(\bA^\top\bT^\top).
\end{equation}
Since $\bT^\top$ has full rank $K<M<N$, $dim\left(\nspace(\bT^\top) \right) = dim\left(\nspace(\bA^\top\bT^\top)\right)=0$.
Therefore,
$\bZ^\top=\bA^\top\bT^\top$ has full rank $K$.
We complete the proof.
\end{proof}
\paragraph{Given $\bZ$, Optimizing $\bW$}
Given $\bZ$, $L(\bW,\bZ)$ can be written as $L(\bW|\bZ)$ to emphasize the variable of $\bW$:
$$
\begin{aligned}
L(\bW|\bZ) &= ||\bW\bZ-\bA||^2.
\end{aligned}
$$
A direct way to solve (ALS2) is to find the differential of $L(\bW|\bZ)$ with respect to $\bW$:
$$
\begin{aligned}
\frac{\partial L(\bW|\bZ)}{\partial \bW} &=
\frac{\partial \,\,tr\left((\bW\bZ-\bA)(\bW\bZ-\bA)^\top\right)}{\partial \bW}\\
&=\frac{\partial \,\,tr\left((\bW\bZ-\bA)(\bW\bZ-\bA)^\top\right)}{\partial (\bW\bZ-\bA)}
\frac{\partial (\bW\bZ-\bA)}{\partial \bW}\\
&= 2(\bW\bZ-\bA)\bZ^\top \in \real^{M\times K}.
\end{aligned}
$$
The ``candidate'' update on $\bW$ is similarly found as a root of the differential $\frac{\partial L(\bW|\bZ)}{\partial \bW}$:
\begin{equation}\label{equation:als-w-update}
\bW^\top = (\bZ\bZ^\top)^{-1}\bZ\bA^\top \leftarrow \mathop{\arg\min}_{\bW} L(\bW|\bZ).
\end{equation}
Again, we emphasize that the update is only a ``candidate" update. We need to further check whether the Hessian is positive definite or not.
The Hessian matrix is given by
$$
\begin{aligned}
\frac{\partial^2 L(\bW|\bZ)}{\partial \bW^2} =2\bZ\bZ^\top \in \real^{K\times K}.
\end{aligned}
$$
Therefore, by analogous analysis, if $\bZ$ has full rank with $K<N$, the Hessian matrix is positive definite.
\begin{lemma}[Rank of $\bW$ after Updating]\label{lemma:als-update-w-rank}
Suppose $\bA\in \real^{M\times N}$ has full rank with $M\leq N$ and $\bZ\in \real^{K\times N}$ has full rank with $K<N$, then the update of $\bW^\top = (\bZ\bZ^\top)^{-1}\bZ\bA^\top$ in Equation~\eqref{equation:als-w-update} has full rank.
\end{lemma}
The proof of Lemma~\ref{lemma:als-update-w-rank} is similar to that of Lemma~\ref{lemma:als-update-z-rank}, and we shall not repeat the details.
Combining the observations in Lemma~\ref{lemma:als-update-z-rank} and Lemma~\ref{lemma:als-update-w-rank}, as long as we \textcolor{blue}{initialize $\bZ, \bW$ to have full rank}, the updates in Equation~\eqref{equation:als-z-update} and Equation~\eqref{equation:als-w-update} are well defined. \textcolor{blue}{The requirement $M\leq N$ is reasonable in that there are typically more users than movies}. We conclude the process in Algorithm~\ref{alg:als}.
\begin{algorithm}[H]
\caption{Alternating Least Squares}
\label{alg:als}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$ \textcolor{blue}{with $M\leq N$};
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ \textcolor{blue}{with full rank and $K<M\leq N$};
\State choose a stop criterion on the approximation error $\delta$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{$||\bA-\bW\bZ||>\delta $ and $iter<C$}
\State $iter=iter+1$;
\State $\bZ = (\bW^\top\bW)^{-1} \bW^\top \bA \leftarrow \mathop{\arg \min}_{\bZ} L(\bZ|\bW)$;
\State $\bW^\top = (\bZ\bZ^\top)^{-1}\bZ\bA^\top \leftarrow \mathop{\arg\min}_{\bW} L(\bW|\bZ)$;
\EndWhile
\State Output $\bW,\bZ$;
\end{algorithmic}
\end{algorithm}
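As a concrete illustration, Algorithm~\ref{alg:als} can be sketched in NumPy as follows; the function name, the test matrix, and the tolerance are our own choices for the sketch, not part of the algorithm.

```python
import numpy as np

def als(A, K, delta=1e-6, max_iter=500, seed=0):
    """Vanilla alternating least squares: A (M x N) ~ W (M x K) @ Z (K x N).

    Assumes M <= N, K < M, and that the iterates keep full rank
    (guaranteed by the rank lemmas when the initialization has full rank).
    """
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, K))   # full rank with probability 1
    Z = rng.standard_normal((K, N))
    for _ in range(max_iter):
        Z = np.linalg.solve(W.T @ W, W.T @ A)      # argmin_Z L(Z|W)
        W = np.linalg.solve(Z @ Z.T, Z @ A.T).T    # argmin_W L(W|Z)
        if np.linalg.norm(A - W @ Z) <= delta:
            break
    return W, Z

# A rank-2 matrix with M=3 <= N=4 is recovered exactly by a rank-2 factorization.
A = np.arange(12.0).reshape(3, 4)
W, Z = als(A, K=2)
```

After the first full sweep, the column space of $\bW$ lies inside that of $\bA$, so for this exactly rank-$2$ example the fit typically converges within a few sweeps.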
\section{Regularization: Extension to General Matrices}\label{section:regularization-extention-general}
We can add a regularization to minimize the following loss:
\begin{equation}\label{equation:als-regularion-full-matrix}
L(\bW,\bZ) =||\bW\bZ-\bA||^2 +\lambda_w ||\bW||^2 + \lambda_z ||\bZ||^2, \qquad \lambda_w>0, \lambda_z>0,
\end{equation}
where the differentials with respect to $\bZ$ and $\bW$ are given respectively by
\begin{equation}\label{equation:als-regulari-gradien}
\left\{
\begin{aligned}
\frac{\partial L(\bW,\bZ) }{\partial \bZ} &= 2\bW^\top(\bW\bZ-\bA) + 2\lambda_z\bZ \in \real^{K\times N};\\
\frac{\partial L(\bW,\bZ) }{\partial \bW} &= 2(\bW\bZ-\bA)\bZ^\top + 2\lambda_w\bW \in \real^{M\times K}.
\end{aligned}
\right.
\end{equation}
The Hessian matrices are given respectively by
$$
\left\{
\begin{aligned}
\frac{\partial^2 L(\bW,\bZ) }{\partial \bZ^2} &= 2\bW^\top\bW+ 2\lambda_z\bI \in \real^{K\times K};\\
\frac{\partial^2 L(\bW,\bZ) }{\partial \bW^2} &= 2\bZ\bZ^\top + 2\lambda_w\bI \in \real^{K\times K}, \\
\end{aligned}
\right.
$$
which are positive definite due to the perturbation by the regularization. To see this,
$$
\left\{
\begin{aligned}
\bx^\top (2\bW^\top\bW +2\lambda_z\bI)\bx &= \underbrace{2\bx^\top\bW^\top\bW\bx}_{\geq 0} + 2\lambda_z ||\bx||^2>0, \gap \text{for nonzero $\bx$};\\
\bx^\top (2\bZ\bZ^\top +2\lambda_w\bI)\bx &= \underbrace{2\bx^\top\bZ\bZ^\top\bx}_{\geq 0} + 2\lambda_w ||\bx||^2>0,\gap \text{for nonzero $\bx$}.
\end{aligned}
\right.
$$
\textcolor{blue}{The regularization makes the Hessian matrices positive definite even if $\bW, \bZ$ are rank deficient}, and the matrix decomposition can now be extended to any matrix, even when $M>N$. In rare cases, $K$ can be chosen with $K>\max\{M, N\}$ so that a high-rank approximation of $\bA$ is obtained. In most scenarios, however, we want to find a low-rank approximation of $\bA$ with $K<\min\{M, N\}$. For example, the ALS can be utilized to find low-rank neural networks, reducing the memory footprint of the networks whilst increasing the performance \citep{lu2021numerical}.
Therefore, the minimizers are given by finding the roots of the differential:
\begin{equation}\label{equation:als-regular-final-all}
\left\{
\begin{aligned}
\bZ &= (\bW^\top\bW+ \lambda_z\bI)^{-1} \bW^\top \bA ;\\
\bW^\top &= (\bZ\bZ^\top+\lambda_w\bI)^{-1}\bZ\bA^\top .
\end{aligned}
\right.
\end{equation}
The regularization parameters $\lambda_z, \lambda_w\in \real$ are used to balance the trade-off
between the accuracy of the approximation and the smoothness of the computed solution. The selection of the parameters is typically problem dependent and can be obtained by \textit{cross-validation}.
Again, we conclude the process in Algorithm~\ref{alg:als-regularizer}.
\begin{algorithm}[h]
\caption{Alternating Least Squares with Regularization}
\label{alg:als-regularizer}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$};
\State choose a stop criterion on the approximation error $\delta$;
\State choose regularization parameters $\lambda_w, \lambda_z$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{$||\bA-\bW\bZ||>\delta $ and $iter<C$}
\State $iter=iter+1$;
\State $\bZ = (\bW^\top\bW+ \lambda_z\bI)^{-1} \bW^\top \bA \leftarrow \mathop{\arg \min}_{\bZ} L(\bZ|\bW)$;
\State $\bW^\top = (\bZ\bZ^\top+\lambda_w\bI)^{-1}\bZ\bA^\top \leftarrow \mathop{\arg\min}_{\bW} L(\bW|\bZ)$;
\EndWhile
\State Output $\bW,\bZ$;
\end{algorithmic}
\end{algorithm}
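A minimal sketch of the regularized updates in Equation~\eqref{equation:als-regular-final-all} follows; note that no rank or shape condition on $\bA$ is needed anymore. The hyperparameters and the tall test matrix are arbitrary choices of ours.

```python
import numpy as np

def als_regularized(A, K, lam_w=0.1, lam_z=0.1, n_iter=200, seed=0):
    """Minimize ||WZ - A||^2 + lam_w ||W||^2 + lam_z ||Z||^2 by alternation.

    The ridge terms keep both Hessians positive definite, so the updates
    are well defined even for rank-deficient iterates or M > N.
    """
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, K))
    Z = rng.standard_normal((K, N))
    I = np.eye(K)
    for _ in range(n_iter):
        Z = np.linalg.solve(W.T @ W + lam_z * I, W.T @ A)
        W = np.linalg.solve(Z @ Z.T + lam_w * I, Z @ A.T).T
    return W, Z

# A tall (M > N) rank-2 matrix: the unregularized setup excluded this case.
A = np.arange(12.0).reshape(4, 3)
W, Z = als_regularized(A, K=2)
err = np.linalg.norm(A - W @ Z)
```

The error does not reach zero exactly: the ridge terms shrink the recovered factors slightly, which is the accuracy/smoothness trade-off controlled by $\lambda_w, \lambda_z$.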
\section{Missing Entries}\label{section:alt-columb-by-column}
The matrix decomposition via the ALS is extensively used on the Netflix recommender data, where many entries are missing because many users have not watched some of the movies, or will not rate them for some reason. We can employ an additional mask matrix $\bM\in \real^{M\times N}$, where $m_{mn}\in \{0,1\}$ indicates whether user $n$ has rated movie $m$. Therefore, the loss function can be defined as
$$
L(\bW,\bZ) = ||\bM\odot \bA- \bM\odot (\bW\bZ)||^2,
$$
where $\odot$ is the \textit{Hadamard product} between matrices. For example, the Hadamard product for a $3 \times 3$ matrix $\bA$ with a $3\times 3$ matrix $\bB$ is
$$
\bA\odot \bB =
\begin{bmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}
\odot
\begin{bmatrix}
b_{11} & b_{12} & b_{13} \\
b_{21} & b_{22} & b_{23} \\
b_{31} & b_{32} & b_{33}
\end{bmatrix}
=
\begin{bmatrix}
a_{11}b_{11} & a_{12}b_{12} & a_{13}b_{13} \\
a_{21}b_{21} & a_{22}b_{22} & a_{23}b_{23} \\
a_{31}b_{31} & a_{32}b_{32} & a_{33}b_{33}
\end{bmatrix}.
$$
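In NumPy, for instance, the Hadamard product is the elementwise `*` operator, not the matrix product `@`:

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[10., 20.], [30., 40.]])
H = A * B   # Hadamard (elementwise) product
# H is [[10., 40.], [90., 160.]], unlike the matrix product A @ B
```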
To find the solution of the problem, let's decompose the updates in Equation~\eqref{equation:als-regular-final-all} into:
\begin{equation}\label{equation:als-ori-all-wz}
\left\{
\begin{aligned}
\bz_n &= (\bW^\top\bW+ \lambda_z\bI)^{-1} \bW^\top \ba_n, &\gap& \text{for $n\in \{1,2,\ldots, N\}$} ;\\
\bw_m &= (\bZ\bZ^\top+\lambda_w\bI)^{-1}\bZ\bb_m, &\gap& \text{for $m\in \{1,2,\ldots, M\}$} ,
\end{aligned}
\right.
\end{equation}
where $\bZ=[\bz_1, \bz_2, \ldots, \bz_N]$ and $\bA=[\ba_1,\ba_2, \ldots, \ba_N]$ are the column partitions of $\bZ$ and $\bA$ respectively, and $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M]$ and $\bA^\top=[\bb_1,\bb_2, \ldots, \bb_M]$ are the column partitions of $\bW^\top$ and $\bA^\top$ respectively. The factorization of the updates indicates that the update can be done in a column-by-column fashion.
\paragraph{Given $\bW$}
Let $\bo_n\in \real^M$ denote the movies rated by user $n$, where $o_{nm}=1$ if user $n$ has rated movie $m$, and $o_{nm}=0$ otherwise. Then the $n$-th column of $\bA$ without missing entries can be denoted by the Matlab-style notation $\ba_n[\bo_n]$, and we want to approximate the observed entries of the $n$-th column by $\ba_n[\bo_n] \approx \bW[\bo_n, :]\bz_n$, which is a regularized least squares problem:
\begin{equation}\label{equation:als-ori-all-wz-modif-z}
\begin{aligned}
\bz_n &= \left(\bW[\bo_n, :]^\top\bW[\bo_n, :]+ \lambda_z\bI\right)^{-1} \bW[\bo_n, :]^\top \ba_n[\bo_n], &\gap& \text{for $n\in \{1,2,\ldots, N\}$} .
\end{aligned}
\end{equation}
Moreover, the loss function with respect to $\bz_n$ can be described by
$$
L(\bz_n|\bW) =\sum_{m\in \bo_n} \left(a_{mn} - \bw_m^\top\bz_n\right)^2
$$
and if we are concerned about the loss for all users:
$$
L(\bZ|\bW) =\sum_{n=1}^N\ \sum_{m\in \bo_n} \left(a_{mn} - \bw_m^\top\bz_n\right)^2.
$$
\paragraph{Given $\bZ$}
Similarly, let $\bp_m \in \real^{N}$ denote the users that have rated movie $m$, where $p_{mn}=1$ if movie $m$ has been rated by user $n$, and $p_{mn}=0$ otherwise. Then the $m$-th row of $\bA$ without missing entries can be denoted by the Matlab-style notation $\bb_m[\bp_m]$, and we want to approximate the observed entries of the $m$-th row by $\bb_m[\bp_m] \approx \bZ[:, \bp_m]^\top\bw_m$,
\footnote{Note that $\bZ[:, \bp_m]^\top$ is the transpose of $\bZ[:, \bp_m]$, which is equal to $\bZ^\top[\bp_m,:]$, i.e., transposing first and then selecting.}
which again is a regularized least squares problem:
\begin{equation}\label{equation:als-ori-all-wz-modif-w}
\begin{aligned}
\bw_m &= (\bZ[:, \bp_m]\bZ[:, \bp_m]^\top+\lambda_w\bI)^{-1}\bZ[:, \bp_m]\bb_m[\bp_m], &\gap& \text{for $m\in \{1,2,\ldots, M\}$} .
\end{aligned}
\end{equation}
Moreover, the loss function with respect to $\bw_m$ can be described by
$$
L(\bw_m|\bZ) =\sum_{n\in \bp_m} \left(a_{mn} - \bw_m^\top\bz_n\right)^2,
$$
and if we are concerned about the loss for all movies:
$$
L(\bW|\bZ) =\sum_{m=1}^M \sum_{n\in \bp_m} \left(a_{mn} - \bw_m^\top\bz_n\right)^2 .
$$
The procedure is again formulated in Algorithm~\ref{alg:als-regularizer-missing-entries}.
\begin{algorithm}[H]
\caption{Alternating Least Squares with Missing Entries and Regularization}
\label{alg:als-regularizer-missing-entries}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$};
\State choose a stop criterion on the approximation error $\delta$;
\State choose regularization parameters $\lambda_w, \lambda_z$;
\State compute the mask matrix $\bM$ from $\bA$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{\textcolor{blue}{$||\bM\odot \bA- \bM\odot (\bW\bZ)||^2>\delta $} and $iter<C$}
\State $iter=iter+1$;
\For{$n=1,2,\ldots, N$}
\State $\bz_n = \left(\bW[\bo_n, :]^\top\bW[\bo_n, :]+ \lambda_z\bI\right)^{-1} \bW[\bo_n, :]^\top \ba_n[\bo_n]$; \Comment{$n$-th column of $\bZ$}
\EndFor
\For{$m=1,2,\ldots, M$}
\State $\bw_m = (\bZ[:, \bp_m]\bZ[:, \bp_m]^\top+\lambda_w\bI)^{-1}\bZ[:, \bp_m]\bb_m[\bp_m]$;\Comment{$m$-th column of $\bW^\top$}
\EndFor
\EndWhile
\State Output $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M],\bZ=[\bz_1, \bz_2, \ldots, \bz_N]$;
\end{algorithmic}
\end{algorithm}
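Algorithm~\ref{alg:als-regularizer-missing-entries} can be sketched as follows, where boolean indexing plays the role of the Matlab-style selections $\bW[\bo_n,:]$ and $\bZ[:,\bp_m]$; the test matrix, the hidden entry, and the hyperparameters are arbitrary choices for the sketch.

```python
import numpy as np

def als_masked(A, mask, K, lam=0.1, n_iter=100, seed=0):
    """ALS with missing entries: fit only where mask == 1.

    z_n is fit on the movies rated by user n (o_n), and w_m on the
    users who rated movie m (p_m), column by column.
    """
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, K))
    Z = rng.standard_normal((K, N))
    I = np.eye(K)
    for _ in range(n_iter):
        for n in range(N):
            o = mask[:, n].astype(bool)          # o_n: movies rated by user n
            Wo = W[o, :]
            Z[:, n] = np.linalg.solve(Wo.T @ Wo + lam * I, Wo.T @ A[o, n])
        for m in range(M):
            p = mask[m, :].astype(bool)          # p_m: users who rated movie m
            Zp = Z[:, p]
            W[m, :] = np.linalg.solve(Zp @ Zp.T + lam * I, Zp @ A[m, p])
    return W, Z

# Hide one entry of a rank-2 matrix and fit only the observed ones.
A = np.outer([1., 2., 3., 4.], [1., 1., 2.]) + np.outer([0., 1., 0., 1.], [1., 0., 1.])
mask = np.ones_like(A)
mask[0, 2] = 0                                   # a "missing" rating
W, Z = als_masked(A, mask, K=2)
obs_err = np.linalg.norm(mask * (A - W @ Z))
```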
\section{Vector Inner Product}\label{section:als-vector-product}
We have seen that the goal of the ALS is to find matrices $\bW, \bZ$ such that $\bW\bZ$ approximates $\bA$ in terms of the minimum squared loss:
$$
\mathop{\min}_{\bW,\bZ} \sum_{n=1}^N \sum_{m=1}^{M} \left(a_{mn} - \bw_m^\top\bz_n\right)^2,
$$
that is, each entry $a_{mn}$ of $\bA$ is approximated by the inner product $\bw_m^\top\bz_n$ between two vectors. The geometric definition of the vector inner product is given by
$$
\bw_m^\top\bz_n = ||\bw_m||\cdot ||\bz_n|| \cos \theta,
$$
where $\theta$ is the angle between $\bw_m$ and $\bz_n$. So once the norms of $\bw_m$ and $\bz_n$ are fixed, the smaller the angle, the larger the inner product.
Coming back to the Netflix data, the ratings range from 0 to 5, the larger the better. If $\bw_m$ and $\bz_n$ fall ``close'' enough, then $\bw_m^\top\bz_n$ will have a large value. This reveals the meaning behind the ALS, where $\bw_m$ represents the features of movie $m$, whilst $\bz_n$ contains the features of user $n$, and corresponding elements of $\bw_m$ and $\bz_n$ represent the same feature. For example, it could be that the second feature $w_{m2}$\footnote{$w_{m2}$ is the second element of the vector $\bw_{m}$.} represents whether the movie is an action movie or not, and $z_{n2}$ denotes whether user $n$ likes action movies or not. If this is the case, then $\bw_m^\top\bz_n$ will be large and approximates $a_{mn}$ well.
Note that, in the decomposition $\bA\approx \bW\bZ$, the rows of $\bW$ contain the hidden features of the movies, and the columns of $\bZ$ contain the hidden features of the users. However, we cannot identify the meanings of the rows of $\bW$ or the columns of $\bZ$. We know they could be something like categories or genres of the movies that provide some underlying connections between the users and the movies, but we cannot be sure what exactly they are. This is where the terminology ``hidden'' comes from.
\section{Gradient Descent}\label{section:als-gradie-descent}
In Equation~\eqref{equation:als-ori-all-wz}, we obtained the column-by-column updates directly from the full-matrix updates in Equation~\eqref{equation:als-regular-final-all} (with regularization considered). Now let's see the idea behind them. Following Equation~\eqref{equation:als-regularion-full-matrix}, the loss under regularization is
\begin{equation}
L(\bW,\bZ) =||\bW\bZ-\bA||^2 +\lambda_w ||\bW||^2 + \lambda_z ||\bZ||^2, \qquad \lambda_w>0, \lambda_z>0.
\end{equation}
Since we are now considering the minimization of the above loss with respect to $\bz_n$, we can decompose the loss into
\begin{equation}\label{als:gradient-regularization-zn}
\begin{aligned}
L(\bz_n) &=||\bW\bZ-\bA||^2 +\lambda_w ||\bW||^2 + \lambda_z ||\bZ||^2\\
&= ||\bW\bz_n-\ba_n||^2 + \lambda_z ||\bz_n||^2 +
\underbrace{\sum_{i\neq n} ||\bW\bz_i-\ba_i||^2 + \lambda_z \sum_{i\neq n}||\bz_i||^2 + \lambda_w ||\bW||^2 }_{C_{z_n}},
\end{aligned}
\end{equation}
where $C_{z_n}$ is a constant with respect to $\bz_n$, and $\bZ=[\bz_1, \bz_2, \ldots, \bz_N], \bA=[\ba_1,\ba_2, \ldots, \ba_N]$ are the column partitions of $\bZ, \bA$ respectively. Taking the differential
$$
\frac{\partial L(\bz_n)}{\partial \bz_n} = 2\bW^\top\bW\bz_n - 2\bW^\top\ba_n + 2\lambda_z\bz_n,
$$
whose root is exactly the first column-wise update in Equation~\eqref{equation:als-ori-all-wz}:
$$
\bz_n = (\bW^\top\bW+ \lambda_z\bI)^{-1} \bW^\top \ba_n, \gap \text{for $n\in \{1,2,\ldots, N\}$}.
$$
Similarly, we can decompose the loss with respect to $\bw_m$,
\begin{equation}\label{als:gradient-regularization-wd}
\begin{aligned}
L(\bw_m ) &=
||\bW\bZ-\bA||^2 +\lambda_w ||\bW||^2 + \lambda_z ||\bZ||^2\\
&=||\bZ^\top\bW^\top-\bA^\top||^2 +\lambda_w ||\bW^\top||^2 + \lambda_z ||\bZ||^2\\
&= ||\bZ^\top\bw_m-\bb_m||^2 + \lambda_w ||\bw_m||^2 +
\underbrace{\sum_{i\neq m} ||\bZ^\top\bw_i-\bb_i||^2 + \lambda_w \sum_{i\neq m}||\bw_i||^2 + \lambda_z ||\bZ||^2 }_{C_{w_m}},
\end{aligned}
\end{equation}
where $C_{w_m}$ is a constant with respect to $\bw_m$, and $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M],$ $\bA^\top=[\bb_1,\bb_2,$ $\ldots, \bb_M]$ are the column partitions of $\bW^\top, \bA^\top$ respectively.
Analogously, taking the differential with respect to $\bw_m$, it follows that
$$
\frac{\partial L(\bw_m)}{\partial \bw_m} = 2\bZ\bZ^\top\bw_m - 2\bZ\bb_m + 2\lambda_w\bw_m,
$$
whose root is exactly the second column-wise update in Equation~\eqref{equation:als-ori-all-wz}:
$$
\bw_m = (\bZ\bZ^\top+\lambda_w\bI)^{-1}\bZ\bb_m, \gap \text{for $m\in \{1,2,\ldots, M\}$} .
$$
Now suppose we write the iteration number ($k=1,2,\ldots$) as a superscript, and we want to find the updates $\{\bz^{(k+1)}_n, \bw^{(k+1)}_m\}$ in the $(k+1)$-th iteration based on $\{\bZ^{(k)}, \bW^{(k)}\}$ from the $k$-th iteration:
$$
\left\{
\begin{aligned}
\bz^{(k+1)}_n &\leftarrow \mathop{\arg \min}_{\bz_n^{(k)}} L(\bz_n^{(k)});\\
\bw_m^{(k+1)} &\leftarrow \mathop{\arg\min}_{\bw_m^{(k)}} L(\bw_m^{(k)}).
\end{aligned}
\right.
$$
For simplicity, we will be looking at $\bz^{(k+1)}_n \leftarrow \mathop{\arg \min}_{\bz_n^{(k)}} L(\bz_n^{(k)})$; the derivation of the update on $\bw_m^{(k+1)}$ is identical. Suppose we want to approximate $\bz^{(k+1)}_n$ by a \textit{linear update} on $\bz^{(k)}_n$:
$$
\text{Linear Update: }\gap\boxed{\bz^{(k+1)}_n = \bz^{(k)}_n + \eta \bv.}
$$
The problem now turns to finding $\bv$ such that
$$
\bv=\mathop{\arg \min}_{\bv} L(\bz^{(k)}_n + \eta \bv) .
$$
By Taylor's formula, $L(\bz^{(k)}_n + \eta \bv)$ can be approximated by
$$
L(\bz^{(k)}_n + \eta \bv) \approx L(\bz^{(k)}_n ) + \eta \bv^\top \nabla L(\bz^{(k)}_n ),
$$
where $\eta$ is small enough and $\nabla L(\bz^{(k)}_n )$ is the gradient of $L(\bz)$ at $\bz^{(k)}_n$. Then a search under the constraint $||\bv||=1$, given a positive $\eta$, is as follows:
$$
\bv=\mathop{\arg \min}_{||\bv||=1} L(\bz^{(k)}_n + \eta \bv) \approx\mathop{\arg \min}_{||\bv||=1}
\left\{L(\bz^{(k)}_n ) + \eta \bv^\top \nabla L(\bz^{(k)}_n )\right\}.
$$
This is known as the \textit{greedy search}. The optimal $\bv$ can be obtained by
$$
\bv = -\frac{\nabla L(\bz^{(k)}_n )}{||\nabla L(\bz^{(k)}_n )||},
$$
i.e., $\bv$ is in the opposite direction of $\nabla L(\bz^{(k)}_n )$. Therefore, it is reasonable to take the update of $\bz_n^{(k+1)}$ as
$$
\bz^{(k+1)}_n =\bz^{(k)}_n + \eta \bv = \bz^{(k)}_n - \eta \frac{\nabla L(\bz^{(k)}_n )}{||\nabla L(\bz^{(k)}_n )||},
$$
which is usually called the \textit{gradient descent}. Similarly, the gradient descent of $\bw_m^{(k+1)}$ is given by
$$
\bw^{(k+1)}_m =\bw^{(k)}_m + \eta \bv = \bw^{(k)}_m - \eta \frac{\nabla L(\bw^{(k)}_m )}{||\nabla L(\bw^{(k)}_m )||}.
$$
The procedure of Algorithm~\ref{alg:als-regularizer}, adapted to a gradient descent update, is formulated in Algorithm~\ref{alg:als-regularizer-missing-stochas-gradient}.
\begin{algorithm}[h]
\caption{Alternating Least Squares with Full Entries and Gradient Descent}
\label{alg:als-regularizer-missing-stochas-gradient}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$};
\State choose a stop criterion on the approximation error $\delta$;
\State choose regularization parameters $\lambda_w, \lambda_z$, and step size $\eta_w, \eta_z$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{$|| \bA- (\bW\bZ)||^2>\delta $ and $iter<C$}
\State $iter=iter+1$;
\For{$n=1,2,\ldots, N$}
\State $\bz^{(k+1)}_n =\bz^{(k)}_n - \eta_z \frac{\nabla L(\bz^{(k)}_n )}{||\nabla L(\bz^{(k)}_n )||}$; \Comment{$n$-th column of $\bZ$}
\EndFor
\For{$m=1,2,\ldots, M$}
\State $\bw^{(k+1)}_m = \bw^{(k)}_m - \eta_w \frac{\nabla L(\bw^{(k)}_m )}{||\nabla L(\bw^{(k)}_m )||}$;\Comment{$m$-th column of $\bW^\top$}
\EndFor
\EndWhile
\State Output $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M],\bZ=[\bz_1, \bz_2, \ldots, \bz_N]$;
\end{algorithmic}
\end{algorithm}
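Algorithm~\ref{alg:als-regularizer-missing-stochas-gradient} can be sketched as follows; since the normalized step always has length $\eta$, with a fixed $\eta$ the iterates only settle near the minimizer rather than on it (the step sizes and iteration counts here are arbitrary choices of ours).

```python
import numpy as np

def als_gd(A, K, lam=0.1, eta=0.1, n_iter=2000, seed=0):
    """Column-wise normalized gradient descent for the regularized ALS loss."""
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, K))
    Z = rng.standard_normal((K, N))
    for _ in range(n_iter):
        for n in range(N):   # gradient of ||W z_n - a_n||^2 + lam ||z_n||^2
            g = 2 * W.T @ (W @ Z[:, n] - A[:, n]) + 2 * lam * Z[:, n]
            Z[:, n] -= eta * g / (np.linalg.norm(g) + 1e-12)
        for m in range(M):   # gradient of ||Z^T w_m - b_m||^2 + lam ||w_m||^2
            g = 2 * Z @ (Z.T @ W[m, :] - A[m, :]) + 2 * lam * W[m, :]
            W[m, :] -= eta * g / (np.linalg.norm(g) + 1e-12)
    return W, Z

A = np.arange(12.0).reshape(3, 4)
W, Z = als_gd(A, K=2)
err = np.linalg.norm(A - W @ Z)
```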
\paragraph{Geometrical Interpretation of Gradient Descent}
\begin{lemma}[Direction of Gradients]\label{lemm:direction-gradients}
An important fact is that gradients are orthogonal to level curves (a.k.a. level surfaces).
\end{lemma}
See \citet{lu2021numerical, lu2022gradient} for a proof.
The lemma above reveals the geometrical interpretation of gradient descent. To find a solution that minimizes a convex function $L(\bz)$, gradient descent moves in the negative gradient direction, which decreases the loss. Figure~\ref{fig:alsgd-geometrical} depicts the $2$-dimensional case, where $-\nabla L(\bz)$ pushes the loss to decrease for the convex function $L(\bz)$.
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[A 2-dimensional convex function $L(\bz)$]{\label{fig:alsgd1}
\includegraphics[width=0.47\linewidth]{./imgs/alsgd1.pdf}}
\subfigure[$L(\bz)=c$ is a constant]{\label{fig:alsgd2}
\includegraphics[width=0.44\linewidth]{./imgs/alsgd2.pdf}}
\caption{Figure~\ref{fig:alsgd1} shows a function ``density" and a contour plot (\textcolor{bluepigment}{blue}=low, \textcolor{canaryyellow}{yellow}=high) where the upper graph is the ``density", and the lower one is the projection of it (i.e., contour). Figure~\ref{fig:alsgd2}: $-\nabla L(\bz)$ pushes the loss to decrease for the convex function $L(\bz)$.}
\label{fig:alsgd-geometrical}
\end{figure}
\section{Regularization: A Geometrical Interpretation}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{./imgs/alsgd3.pdf}
\caption{Constrained gradient descent with $\bz^\top\bz\leq C$. The \textcolor{green}{green} vector $\bw$ is the projection of $\bv_1$ into $\bz^\top\bz\leq C$ where $\bv_1$ is the component of $-\nabla l(\bz)$ perpendicular to $\bz_1$. The right picture is the next step after the update in the left picture. $\bz^\star$ denotes the optimal solution of \{$\min l(\bz)$\}.}
\label{fig:alsgd3}
\end{figure}
We have seen in Section~\ref{section:regularization-extention-general} (p.~\pageref{section:regularization-extention-general}) that the regularization can extend the ALS to general matrices. The gradient descent viewpoint can reveal the geometric meaning of the regularization. To avoid confusion, we denote the loss function without regularization by $l(\bz)$ and the loss with regularization by $L(\bz) = l(\bz)+\lambda_z ||\bz||^2$, where $l(\bz): \real^n \rightarrow \real$. When minimizing $l(\bz)$, a descent method will search in $\real^n$ for a solution. However, in machine learning, searching in the whole space $\real^n$ can cause overfitting. A partial solution is to search in a subset of the vector space, e.g., in $\bz^\top\bz \leq C$ for some constant $C$. That is,
$$
\mathop{\arg\min}_{\bz} \gap l(\bz), \qquad s.t., \gap \bz^\top\bz\leq C.
$$
As shown above, a vanilla gradient descent method will move in the direction of $-\nabla l(\bz)$, i.e., update $\bz$ by $\bz\leftarrow \bz-\eta \nabla l(\bz)$ for a small step size $\eta$. When the level curve is $l(\bz)=c_1$ and the current position is $\bz=\bz_1$, where $\bz_1$ is the intersection of $\bz^\top\bz=C$ and $l(\bz)=c_1$, the descent direction $-\nabla l (\bz_1)$ is perpendicular to the level curve $l(\bz)=c_1$, as shown in the left picture of Figure~\ref{fig:alsgd3} (by Lemma~\ref{lemm:direction-gradients}). However, if we further restrict the solution to lie in $\bz^\top\bz\leq C$, the vanilla descent direction $-\nabla l(\bz_1)$ will lead $\bz_2=\bz_1-\eta \nabla l(\bz_1)$ outside of $\bz^\top\bz\leq C$. A solution is to decompose the step $-\nabla l(\bz_1)$ into
$$
-\nabla l(\bz_1) = a\bz_1 + \bv_1,
$$
where $a\bz_1$ is the component perpendicular to the curve $\bz^\top\bz=C$, and $\bv_1$ is the component parallel to it. Keeping only the step $\bv_1$, the update
$$
\bz_2 = \text{project}(\bz_1+\eta \bv_1) = \text{project}\left(\bz_1 + \eta
\underbrace{(-\nabla l(\bz_1) -a\bz_1)}_{\bv_1}\right)\footnote{where the project($\bx$)
will project the vector $\bx$ to the closest point inside $\bz^\top\bz\leq C$. Notice here the direct update $\bz_2 = \bz_1+\eta \bv_1$ can still make $\bz_2$ outside the curve of $\bz^\top\bz\leq C$.}
$$
will lead to a smaller loss from $l(\bz_1)$ to $l(\bz_2)$ and still match the requirement of $\bz^\top\bz\leq C$. This is known as the \textit{projection gradient descent}. It is not hard to see that the update $\bz_2 = \text{project}(\bz_1+\eta \bv_1)$ is equivalent to finding a vector $\bw$ (shown by the \textcolor{green}{green} vector in the left picture of Figure~\ref{fig:alsgd3}) such that $\bz_2=\bz_1+\bw$ is inside the curve of $\bz^\top\bz\leq C$. Mathematically, the $\bw$ can be obtained by $-\nabla l(\bz_1) -2\lambda \bz_1$ for some $\lambda$ as shown in the middle picture of Figure~\ref{fig:alsgd3}. This is exactly the negative gradient of $L(\bz)=l(\bz)+\lambda||\bz||^2$ such that
$$
\nabla L(\bz) = \nabla l(\bz) + 2\lambda \bz,
$$
and
$$
\bw = -\nabla L(\bz) \leadto \bz_2 = \bz_1+ \bw =\bz_1 - \nabla L(\bz).
$$
In practice, a small step size $\eta$ can avoid going outside the curve $\bz^\top\bz\leq C$:
$$
\bz_2 =\bz_1 - \eta\nabla L(\bz),
$$
which is exactly the regularized update we have discussed in Section~\ref{section:regularization-extention-general} (p.~\pageref{section:regularization-extention-general}).
\paragraph{Sparsity}
In some cases, we want to find a sparse solution $\bz$ such that $l(\bz)$ is minimized. Constraining the solution to $||\bz||_1 \leq C$ serves this purpose, where $||\cdot||_1$ is the $l_1$ norm of a vector or a matrix. Similar to the previous case, the $l_1$-constrained optimization pushes the gradient descent towards the boundary $||\bz||_1=C$. The situation in the 2-dimensional case is shown in Figure~\ref{fig:alsgd4}. In a high-dimensional case, many elements of $\bz$ will be pushed onto the breakpoints of $||\bz||_1=C$, as shown in the right picture of Figure~\ref{fig:alsgd4}.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{./imgs/alsgd4.pdf}
\caption{Constrained gradient descent with $||\bz||_1\leq C$, where the \textcolor{red}{red} dot denotes the breakpoint in $l_1$ norm. The right picture is the next step after the update in the left picture. $\bz^\star$ denotes the optimal solution of \{$\min l(\bz)$\}.}
\label{fig:alsgd4}
\end{figure}
\section{Stochastic Gradient Descent}
We consider again the loss, written as a sum of per-example terms:
$$
L(\bW,\bZ)= \sum_{n=1}^N \sum_{m=1}^{M} \left[\left(a_{mn} - \bw_m^\top\bz_n\right)^2 + \lambda_w||\bw_m||^2 +\lambda_z||\bz_n||^2\right].
$$
When we iteratively decrease the per-example loss term $l(\bw_m, \bz_n)=\left(a_{mn} - \bw_m^\top\bz_n\right)^2 + \lambda_w||\bw_m||^2 +\lambda_z||\bz_n||^2$ for all $m\in \{1,2,\ldots, M\}$ and $n\in\{1,2,\ldots,N\}$, the full loss $L(\bW,\bZ)$ is also decreased. This is known as \textit{stochastic coordinate descent}. The differentials with respect to $\bz_n$ and $\bw_m$, together with their roots, are given by
$$
\left\{
\begin{aligned}
\nabla l(\bz_n)=\frac{\partial l(\bw_m,\bz_n)}{\partial \bz_n} &= 2\bw_m\bw_m^\top \bz_n + 2\lambda_z\bz_n -2a_{mn} \bw_m \\
&\qquad \leadtosmall \bz_n= a_{mn}(\bw_m\bw_m^\top+\lambda_z\bI)^{-1}\bw_m;\\
\nabla l(\bw_m)=\frac{\partial l(\bw_m, \bz_n)}{\partial \bw_m} &= 2\bz_n\bz_n^\top\bw_m +2\lambda_w\bw_m - 2a_{mn}\bz_n\\
&\qquad \leadtosmall \bw_m= a_{mn}(\bz_n\bz_n^\top+\lambda_w\bI)^{-1}\bz_n.
\end{aligned}
\right.
$$
Alternatively, the update can be done by gradient descent; since we update by the per-example loss, this is also known as \textit{stochastic gradient descent}:
$$
\left\{
\begin{aligned}
\bz_n&= \bz_n - \eta_z \frac{\nabla l(\bz_n)}{||\nabla l(\bz_n)||}; \\
\bw_m&= \bw_m - \eta_w \frac{\nabla l(\bw_m)}{||\nabla l(\bw_m)||}.
\end{aligned}
\right.
$$
The stochastic gradient descent update for the ALS is formulated in Algorithm~\ref{alg:als-regularizer-missing-stochas-gradient-realstoch}. In practice, the indices $m,n$ in the algorithm are drawn at random, which is where the name \textit{stochastic} comes from.
\begin{algorithm}[h]
\caption{Alternating Least Squares with Full Entries and Stochastic Gradient Descent}
\label{alg:als-regularizer-missing-stochas-gradient-realstoch}
\begin{algorithmic}[1]
\Require $\bA\in \real^{M\times N}$;
\State initialize $\bW\in \real^{M\times K}$, $\bZ\in \real^{K\times N}$ \textcolor{blue}{randomly without condition on the rank and the relationship between $M, N, K$};
\State choose a stop criterion on the approximation error $\delta$;
\State choose regularization parameters $\lambda_w, \lambda_z$, and step size $\eta_w, \eta_z$;
\State choose maximal number of iterations $C$;
\State $iter=0$;
\While{$|| \bA- (\bW\bZ)||^2>\delta $ and $iter<C$}
\State $iter=iter+1$;
\For{$n=1,2,\ldots, N$}
\For{$m=1,2,\ldots, M$} \Comment{in practice, $m,n$ can be randomly produced}
\State $\bz_n= \bz_n - \eta_z \frac{\nabla l(\bz_n)}{||\nabla l(\bz_n)||}$;\Comment{$n$-th column of $\bZ$}
\State $\bw_m= \bw_m - \eta_w \frac{\nabla l(\bw_m)}{||\nabla l(\bw_m)||}$;\Comment{$m$-th column of $\bW^\top$}
\EndFor
\EndFor
\EndWhile
\State Output $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M],\bZ=[\bz_1, \bz_2, \ldots, \bz_N]$;
\end{algorithmic}
\end{algorithm}
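A sketch of the per-example updates in Algorithm~\ref{alg:als-regularizer-missing-stochas-gradient-realstoch}, with the entries visited in random order; all constants are arbitrary choices for the sketch.

```python
import numpy as np

def als_sgd(A, K, lam=0.01, eta=0.05, n_epochs=200, seed=0):
    """Stochastic gradient descent on the per-example loss
    l(w_m, z_n) = (a_mn - w_m^T z_n)^2 + lam ||w_m||^2 + lam ||z_n||^2."""
    M, N = A.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((M, K))
    Z = rng.standard_normal((K, N))
    pairs = [(m, n) for m in range(M) for n in range(N)]
    for _ in range(n_epochs):
        for j in rng.permutation(len(pairs)):    # random visiting order
            m, n = pairs[j]
            r = A[m, n] - W[m, :] @ Z[:, n]      # residual of this single entry
            gz = -2 * r * W[m, :] + 2 * lam * Z[:, n]
            Z[:, n] -= eta * gz / (np.linalg.norm(gz) + 1e-12)
            gw = -2 * r * Z[:, n] + 2 * lam * W[m, :]
            W[m, :] -= eta * gw / (np.linalg.norm(gw) + 1e-12)
    return W, Z

A = np.arange(12.0).reshape(3, 4)
W, Z = als_sgd(A, K=2)
err = np.linalg.norm(A - W @ Z)
```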
\section{Bias Term}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{./imgs/als-bias.pdf}
\caption{Bias terms in alternating least squares where the \textcolor{canaryyellow}{yellow} entries denote ones (which are fixed) and \textcolor{cyan}{cyan} entries denote the added features to fit the bias terms. The dotted boxes give an example of how the bias terms work.}
\label{fig:als-bias}
\end{figure}
In ordinary least squares, a bias (intercept) term is added to the design matrix. A similar idea can be applied to the ALS problem: we can append a fixed column of all 1's as the last column of $\bW$, so that an extra row must be added as the last row of $\bZ$ to fit the bias introduced by that column. Analogously, a fixed row of all 1's can be added as the first row of $\bZ$, with an extra column as the first column of $\bW$ to fit it. The situation is shown in Figure~\ref{fig:als-bias}.
Following the loss with respect to the columns of $\bZ$ in Equation~\eqref{als:gradient-regularization-zn}, suppose $\widetildebz_n
=
\begin{bmatrix}
1\\
\bz_n
\end{bmatrix}
$ is the $n$-th column of $\widetildebZ$; then we have
\begin{equation}
\begin{aligned}
L(\bz_n) &=||\widetildebW\widetildebZ-\bA||^2 +\lambda_w ||\widetildebW||^2 + \lambda_z ||\widetildebZ||^2\\
&= \left\Vert
\widetildebW
\begin{bmatrix}
1 \\
\bz_n
\end{bmatrix}-\ba_n
\right\Vert^2
+
\underbrace{\lambda_z ||\widetildebz_n||^2}_{=\lambda_z ||\bz_n||^2+\lambda_z}
+
\sum_{i\neq n} ||\widetildebW\widetildebz_i-\ba_i||^2 + \lambda_z \sum_{i\neq n}||\widetildebz_i||^2 + \lambda_w ||\widetildebW||^2 \\
&= \left\Vert
\begin{bmatrix}
\widebarbw_0 & \widebarbW
\end{bmatrix}
\begin{bmatrix}
1 \\
\bz_n
\end{bmatrix}-\ba_n
\right\Vert^2
+ \lambda_z ||\bz_n||^2 + C_{z_n}
= \left\Vert
\widebarbW \bz_n -
\underbrace{(\ba_n-\widebarbw_0)}_{\widebarba_n}
\right\Vert^2
+ \lambda_z ||\bz_n||^2 + C_{z_n},
\end{aligned}
\end{equation}
where $\widebarbw_0$ is the first column of $\widetildebW$ and $C_{z_n}$ is a constant with respect to $\bz_n$. Letting $\widebarba_n = \ba_n-\widebarbw_0$, the update of $\bz_n$ is just like the one in Equation~\eqref{als:gradient-regularization-zn}, where the differential is given by
$$
\frac{\partial L(\bz_n)}{\partial \bz_n} = 2\widebarbW^\top\widebarbW\bz_n - 2\widebarbW^\top\widebarba_n + 2\lambda_z\bz_n.
$$
Therefore the update on $\bz_n$ is given by the root of the above differential:
$$
\text{update on $\widetildebz_n$ is }
\left\{
\begin{aligned}
\bz_n &= (\widebarbW^\top\widebarbW+ \lambda_z\bI)^{-1} \widebarbW^\top \widebarba_n, \gap \text{for $n\in \{1,2,\ldots, N\}$};\\
\widetildebz_n &= \begin{bmatrix}
1\\\bz_n
\end{bmatrix}.
\end{aligned}
\right.
$$
Similarly, following the loss with respect to each row of $\bW$ in Equation~\eqref{als:gradient-regularization-wd}, suppose $\widetildebw_m =
\begin{bmatrix}
\bw_m \\
1
\end{bmatrix}$ is the $m$-th row of $\widetildebW$ (or the $m$-th column of $\widetildebW^\top$); then we have
\begin{equation}
\begin{aligned}
L(\bw_m )
&=||\widetildebZ^\top\widetildebW^\top-\bA^\top||^2 +\lambda_w ||\widetildebW^\top||^2 + \lambda_z ||\widetildebZ||^2\\
&=
||\widetildebZ^\top\widetildebw_m-\bb_m||^2 +
\underbrace{\lambda_w ||\widetildebw_m||^2}_{=\lambda_w ||\bw_m||^2+\lambda_w}
+
\sum_{i\neq m} ||\widetildebZ^\top\widetildebw_i-\bb_i||^2 + \lambda_w \sum_{i\neq m}||\widetildebw_i||^2 + \lambda_z ||\widetildebZ||^2 \\
&=
\left\Vert
\begin{bmatrix}
\widebarbZ^\top&
\widebarbz_0
\end{bmatrix}
\begin{bmatrix}
\bw_m \\
1
\end{bmatrix}
-\bb_m\right\Vert^2 +
\lambda_w ||\bw_m||^2
+
C_{w_m}\\
&=
\left\Vert
\widebarbZ^\top\bw_m
-(\bb_m-\widebarbz_0) \right\Vert^2+
\lambda_w ||\bw_m||^2
+
C_{w_m},
\end{aligned}
\end{equation}
where $\widebarbz_0$ is the last column of $\widetildebZ^\top$ and $\widebarbZ^\top$ contains the remaining columns of it,
$C_{w_m}$ is a constant with respect to $\bw_m$, and $\bW^\top=[\bw_1, \bw_2, \ldots, \bw_M], \bA^\top=[\bb_1,\bb_2, \ldots, \bb_M]$ are the column partitions of $\bW^\top, \bA^\top$ respectively. Letting $\widebarbb_m = \bb_m-\widebarbz_0$, the update of $\bw_m$ is again just like the one in Equation~\eqref{als:gradient-regularization-wd}, where the differential is given by
$$
\frac{\partial L(\bw_m)}{\partial \bw_m} = 2\widebarbZ\cdot \widebarbZ^\top\bw_m - 2\widebarbZ\cdot \widebarbb_m + 2\lambda_w\bw_m.
$$
Therefore, the update on $\bw_m$ is given by the root of the above differential:
$$
\text{update on $\widetildebw_m$ is }
\left\{
\begin{aligned}
\bw_m&=(\widebarbZ\cdot \widebarbZ^\top+\lambda_w\bI)^{-1}\widebarbZ\cdot \widebarbb_m, \gap \text{for $m\in \{1,2,\ldots, M\}$} ;\\
\widetildebw_m &= \begin{bmatrix}
\bw_m \\ 1
\end{bmatrix}.
\end{aligned}
\right.
$$
Similar updates by gradient descent, with bias terms or with special treatment of missing entries, can be derived analogously, and we shall not repeat the details (see Sections~\ref{section:als-gradie-descent}, p.~\pageref{section:als-gradie-descent}, and \ref{section:alt-columb-by-column}, p.~\pageref{section:alt-columb-by-column} for a reference).
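Both closed-form updates above are ridge-regression solves. The following is a minimal numerical sketch (synthetic data with hypothetical shapes; the bias columns are omitted for brevity, so the plain factors $\bW, \bZ$ are updated). Since each alternating solve minimizes the regularized loss exactly in one block of variables, the loss is nonincreasing across sweeps:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 8, 6, 3                      # A is M x N, factorization rank K
A = rng.standard_normal((M, K)) @ rng.standard_normal((K, N))
W = rng.standard_normal((M, K))        # row factors
Z = rng.standard_normal((K, N))        # column factors (z_n is the n-th column)
lam_w = lam_z = 0.1

def loss(W, Z):
    return (np.linalg.norm(A - W @ Z)**2
            + lam_w * np.linalg.norm(W)**2 + lam_z * np.linalg.norm(Z)**2)

losses = [loss(W, Z)]
for _ in range(30):
    # z_n = (W^T W + lam_z I)^{-1} W^T a_n, solved for all columns at once.
    Z = np.linalg.solve(W.T @ W + lam_z * np.eye(K), W.T @ A)
    # w_m = (Z Z^T + lam_w I)^{-1} Z b_m, where b_m is the m-th column of A^T.
    W = np.linalg.solve(Z @ Z.T + lam_w * np.eye(K), Z @ A.T).T
    losses.append(loss(W, Z))

# Each alternating solve minimizes the loss exactly in one block of variables.
assert all(b <= a + 1e-9 for a, b in zip(losses, losses[1:]))
```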
\chapter{Biconjugate Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Existence of the Biconjugate Decomposition}
The biconjugate decomposition was proposed in \citet{chu1995rank} and further discussed in \citet{yang2000matrix}.
A variety of matrix decomposition methods can be unified via this biconjugate decomposition,
whose existence relies on the rank-one reduction theorem shown below.
\begin{theorem}[Rank-One Reduction\index{Rank-one reduction}]\label{theorem:rank-1-reduction}
Given an $m\times n$ matrix $\bA\in \real^{m\times n}$ with rank $r$ and a pair of vectors $\bx\in \real^n$ and $\by\in \real^m$ such that $w=\by^\top\bA\bx \neq 0$, the matrix $\bB=\bA-w^{-1}\bA\bx\by^\top\bA$ has rank $r-1$, i.e., exactly one less than the rank of $\bA$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:rank-1-reduction}]
If we can show that the dimension of the null space $\nspace(\bB)$ is one larger than that of $\nspace(\bA)$, then $\bB$ has rank exactly one less than the rank of $\bA$.
For any vector $\bn \in \nspace(\bA)$, i.e., $\bA\bn=\bzero$, we have $\bB\bn =\bA\bn-w^{-1}\bA\bx\by^\top\bA\bn=\bzero$, which means $\nspace(\bA)\subseteq \nspace(\bB)$.
Now take any vector $\bmm \in \nspace(\bB)$, so that $\bB\bmm = \bA\bmm-w^{-1}\bA\bx\by^\top\bA\bmm =\bzero$.
Let $k=w^{-1}\by^\top\bA\bmm$, which is a scalar; then $\bA(\bmm - k\bx)=\bzero$, i.e., for any vector $\bmm\in \nspace(\bB)$, the vector $\bmm-k\bx$ lies in $\nspace(\bA)$. Note further that $\bB\bx = \bA\bx - w^{-1}\bA\bx\by^\top\bA\bx=\bzero$, while $\bA\bx\neq \bzero$ by the definition of $w$. Thus, the null space of $\bB$ is obtained from the null space of $\bA$ by adding $\bx$ to its basis, which increases the dimension of the space by 1. Hence the dimension of $\nspace(\bA)$ is smaller than the dimension of $\nspace(\bB)$ by 1, which completes the proof.
\end{proof}
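The theorem is easy to check numerically. A small sketch (random data with assumed shapes) applies one rank-one reduction step and confirms that the rank drops by exactly one:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))   # rank-3 matrix
x = rng.standard_normal(5)
y = rng.standard_normal(6)
w = y @ A @ x
assert abs(w) > 1e-8                  # the theorem requires w = y^T A x != 0
B = A - np.outer(A @ x, y @ A) / w    # B = A - w^{-1} A x y^T A
assert np.linalg.matrix_rank(A) == 3
assert np.linalg.matrix_rank(B) == 2  # rank drops by exactly one
```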
Suppose matrix $\bA\in \real^{m\times n}$ has rank $r$; we can then define a rank-reducing process to generate a sequence of \textit{Wedderburn matrices} $\{\bA_k\}$:
$$
\bA_1 = \bA, \qquad \text{and}\qquad \bA_{k+1} = \bA_k-w_k^{-1}\bA_k\bx_k\by_k^\top\bA_k,
$$
where $\bx_k \in \real^n$ and $\by_k\in \real^m$ are any vectors satisfying $w_k = \by_k^\top\bA_k\bx_k \neq 0$. The sequence terminates in $r$ steps since the rank of $\bA_k$ decreases by exactly one at each step. Writing out the sequence:
$$
\begin{aligned}
\bA_1 &= \bA, \\
\bA_1-\bA_{2} &= w_1^{-1}\bA_1\bx_1\by_1^\top\bA_1,\\
\bA_2-\bA_3 &=w_2^{-1}\bA_2\bx_2\by_2^\top\bA_2, \\
\bA_3-\bA_4 &=w_3^{-1}\bA_3\bx_3\by_3^\top\bA_3, \\
\vdots &=\vdots\\
\bA_{r-1}-\bA_{r} &=w_{r-1}^{-1}\bA_{r-1}\bx_{r-1}\by_{r-1}^\top\bA_{r-1}, \\
\bA_r-\bzero &=w_{r}^{-1}\bA_{r}\bx_{r}\by_{r}^\top\bA_{r}.
\end{aligned}
$$
Adding up the telescoping sequence, we get
$$
(\bA_1-\bA_2)+(\bA_2-\bA_3)+\ldots+(\bA_{r-1}-\bA_{r})+(\bA_r-\bzero ) =\bA= \sum_{i=1}^{r}w_i^{-1}\bA_i\bx_i\by_i^\top\bA_i.
$$
\begin{theoremHigh}[Biconjugate Decomposition: Form 1]\label{theorem:biconjugate-form1}
The equality from the rank-reducing process above implies the following matrix decomposition
$$
\bA = \bPhi \bOmega^{-1} \bPsi^\top,
$$
where $\bOmega=diag(w_1, w_2, \ldots, w_r)$, $\bPhi=[\bphi_1,\bphi_2, \ldots, \bphi_r]\in \real^{m\times r}$, and $\bPsi=[\bpsi_1, \bpsi_2, \ldots, \bpsi_r]\in \real^{n\times r}$ with
$$
\bphi_k = \bA_k\bx_k, \qquad \text{and}\qquad \bpsi_k=\bA_k^\top \by_k.
$$
\end{theoremHigh}
Obviously, different choices of the $\bx_k$'s and $\by_k$'s result in different factorizations, so this factorization is rather general; we will show its connection to some well-known decomposition methods.\index{Wedderburn sequence}
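Form 1 can be verified numerically. The sketch below (random rank-$3$ matrix; shapes are assumptions) chooses each $(\bx_k, \by_k)$ as standard basis vectors picking out the largest entry of $\bA_k$, a complete-pivoting-style choice that guarantees $w_k \neq 0$, and then reassembles $\bA$:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 6, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank-3 matrix

Ak = A.copy()
phis, psis, ws = [], [], []
for _ in range(r):
    # Pivoting choice of (x_k, y_k): w_k is the largest entry of A_k, hence nonzero.
    i, j = np.unravel_index(np.argmax(np.abs(Ak)), Ak.shape)
    xk, yk = np.eye(n)[j], np.eye(m)[i]
    wk = yk @ Ak @ xk
    phis.append(Ak @ xk)                # phi_k = A_k x_k
    psis.append(Ak.T @ yk)              # psi_k = A_k^T y_k
    ws.append(wk)
    Ak = Ak - np.outer(Ak @ xk, yk @ Ak) / wk   # Wedderburn step

assert np.allclose(Ak, 0)               # the sequence terminates in r steps
Phi, Psi = np.column_stack(phis), np.column_stack(psis)
assert np.allclose(Phi @ np.diag(1.0 / np.array(ws)) @ Psi.T, A)  # A = Phi Omega^{-1} Psi^T
```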
\begin{remark}
For the vectors $\bx_k, \by_k$ in the Wedderburn sequence, we have the following property
$$
\begin{aligned}
\bx_k &\in \nspace(\bA_{k+1}) \bot \cspace(\bA_{k+1}^\top), \\
\by_k &\in \nspace(\bA_{k+1}^\top) \bot \cspace(\bA_{k+1}).
\end{aligned}
$$
\end{remark}
\begin{lemma}[General Term Formula of Wedderburn Sequence: V1]\label{lemma:wedderburn-sequence-general}
For each matrix with $\bA_{k+1} = \bA_k -w_k^{-1} \bA_k\bx_k\by_k^\top \bA_k$, then $\bA_{k+1} $ can be written as
$$
\bA_{k+1} = \bA - \sum_{i=1}^{k}w_{i}^{-1} \bA\bu_i \bv_i^\top \bA,
$$
where
$$
\bu_k=\bx_k -\sum_{i=1}^{k-1}\frac{\bv_i^\top \bA\bx_k}{w_i}\bu_i,\qquad \text{and}\qquad \bv_k=\by_k -\sum_{i=1}^{k-1}\frac{\by_k^\top \bA\bu_i}{w_i}\bv_i.
$$
\end{lemma}
The proof of this lemma is deferred to Section~\ref{appendix:wedderburn-general-term} (p.~\pageref{appendix:wedderburn-general-term}). Notice that $w_i =\by_i^\top\bA_i\bx_i$ in the general term formula still depends on $\bA_i$, so it is not yet a true general term formula. We will later reformulate $w_i$ so that it depends on $\bA$ rather than on the $\bA_i$'s.
From the general term formula of the Wedderburn sequence, we have
$$
\begin{aligned}
\bA_{k+1} &= \bA - \sum_{i=1}^{k}w_{i}^{-1} \bA\bu_i \bv_i^\top \bA \\
\bA_{k} &= \bA - \sum_{i=1}^{k-1}w_{i}^{-1} \bA\bu_i \bv_i^\top \bA.
\end{aligned}
$$
Thus, $\bA_{k+1} - \bA_{k} = -w_{k}^{-1} \bA\bu_k \bv_k^\top \bA$. Since the sequence is defined by $\bA_{k+1} = \bA_k -w_k^{-1} \bA_k\bx_k\by_k^\top \bA_k$, we find $w_{k}^{-1} \bA\bu_k \bv_k^\top \bA = w_k^{-1} \bA_k\bx_k\by_k^\top \bA_k$. It follows that
\begin{equation}\label{equation:wedderburn-au-akxk}
\begin{aligned}
\bA\bu_k &=\bA_k\bx_k,\\
\bv_k^\top \bA&=\by_k^\top \bA_k.
\end{aligned}
\end{equation}
Let $z_{k,i} = \frac{\bv_i^\top \bA\bx_k}{w_i}$, which is a scalar. From the definition of $\bu_k$ and $\bv_k$ in the above lemma, we then have
\begin{itemize}
\item $\bu_1=\bx_1$;
\item $\bu_2 = \bx_2 - z_{2,1}\bu_1$;
\item $\bu_3 = \bx_3 - z_{3,1}\bu_1-z_{3,2}\bu_2$;
\item $\ldots$.
\end{itemize}
This process is similar to the Gram-Schmidt process (Section~\ref{section:gram-schmidt-process}, p.~\pageref{section:gram-schmidt-process}). But now, we do not project $\bx_2$ onto $\bx_1$ to minimize the distance; the component of $\bx_2$ along $\bx_1$ is instead determined by $z_{2,1}$. This process is shown in Figure~\ref{fig:projection-wedd}. In Figure~\ref{fig:project-line-wedd}, $\bu_2$ is not perpendicular to $\bu_1$, but $\bu_2$ does not lie on the same line as $\bu_1$, so $\bu_1, \bu_2$ can still span a two-dimensional subspace. Similarly, in Figure~\ref{fig:project-space-wedd}, $\bu_3= \bx_3 - z_{3,1}\bu_1-z_{3,2}\bu_2$ does not lie in the space spanned by $\bu_1, \bu_2$, so $\bu_1, \bu_2, \bu_3$ can still span a three-dimensional subspace.
A moment of reflection reveals that the span of $\bx_1, \bx_2$ is the same as the span of $\bu_1, \bu_2$, and similarly for the $\bv_i$'s. We have the following property:
\begin{equation}\label{equation:wedderburn-span-same}
\left\{
\begin{aligned}
\textrm{span}\{\bx_1, \bx_2, \ldots, \bx_j\} &= \textrm{span}\{\bu_1, \bu_2, \ldots, \bu_j\};\\
\textrm{span}\{\by_1, \by_2, \ldots, \by_j\} &= \textrm{span}\{\bv_1, \bv_2, \ldots, \bv_j\}.\\
\end{aligned}
\right.
\end{equation}
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[``Project" onto a line]{\label{fig:project-line-wedd}
\includegraphics[width=0.47\linewidth]{./imgs/projectline_wedderburn.pdf}}
\quad
\subfigure[``Project" onto a space]{\label{fig:project-space-wedd}
\includegraphics[width=0.47\linewidth]{./imgs/projectspace_wedderburn.pdf}}
\caption{``Project" a vector onto a line and onto a space.}
\label{fig:projection-wedd}
\end{figure}
Further, from the rank-reducing property in the Wedderburn sequence, we have
$$
\left\{
\begin{aligned}
\cspace(\bA_1) &\supset \cspace(\bA_2) \supset \cspace(\bA_3) \supset \ldots;\\
\nspace(\bA_1^\top) &\subset \nspace(\bA_2^\top) \subset \nspace(\bA_3^\top) \subset \ldots.
\end{aligned}
\right.
$$
Since $\by_k \in \nspace(\bA_{k+1}^\top)$, it follows that $\by_j \in \nspace(\bA_{k+1}^\top)$ for all $j<k+1$, i.e., $\bA_{k+1}^\top \by_j=\bzero$ for all $j<k+1$. In particular, $\bx_{k+1}^\top \bA_{k+1}^\top \by_j=0$ for all $j<k+1$. From Equation~\eqref{equation:wedderburn-au-akxk}, we also have $\bu_{k+1}^\top \bA^\top \by_j=0$ for all $j<k+1$. Following Equation~\eqref{equation:wedderburn-span-same}, we obtain
$$
\bv_j^\top\bA\bu_{k+1}=0 \text{ for all } j<k+1.
$$
Similarly, we can prove
$$
\bv_{k+1}^\top\bA\bu_{j}=0 \text{ for all } j<k+1.
$$
Moreover, we defined $w_k = \by_k^\top \bA_k\bx_k$. By Equation~\eqref{equation:wedderburn-au-akxk}, we can write $w_k$ as:
$$
\begin{aligned}
w_k &= \by_k^\top \bA_k\bx_k\\
&=\bv_k^\top \bA\bx_k \\
&=\bv_k^\top\bA (\bu_k +\sum_{i=1}^{k-1}\frac{\bv_i^\top \bA\bx_k}{w_i}\bu_i) \qquad &(\text{by the definition of }\bu_k \text{ in Lemma~\ref{lemma:wedderburn-sequence-general}})\\
&=\bv_k^\top\bA \bu_k,\qquad &(\text{by } \bv_{k}^\top\bA\bu_{j}=0 \text{ for all } j<k)
\end{aligned}
$$
which can be used to substitute for the $w_k$ in Lemma~\ref{lemma:wedderburn-sequence-general}. We then have the full version of the general term formula of the Wedderburn sequence, a formula that no longer depends on the $\bA_k$'s (through the $w_k$'s):
\begin{equation}\label{equation:uk-vk-to-mimic-gram-process}
\bu_k=\bx_k -\sum_{i=1}^{k-1}\frac{\bv_i^\top \bA\bx_k}{\textcolor{blue}{\bv_i^\top\bA\bu_i}}\bu_i,\qquad \text{and}\qquad \bv_k=\by_k -\sum_{i=1}^{k-1}\frac{\by_k^\top \bA\bu_i}{\textcolor{blue}{\bv_i^\top\bA\bu_i}}\bv_i.
\end{equation}
\paragraph{Gram-Schmidt Process from Wedderburn Sequence} Suppose $\bX=[\bx_1,
\bx_2, \ldots, \bx_r]\in \real^{n\times r}$ and $\bY=[\by_1, \by_2, \ldots, \by_r]\in \real^{m\times r}$ effect a rank-reducing process for $\bA$. If $\bA$ is the identity matrix and $\bX=\bY$ contains the vectors for which an orthogonal basis is desired, then $\bU = \bV$ gives the resulting orthogonal basis.
This form of $\bu_k$ and $\bv_k$ in Equation~\eqref{equation:uk-vk-to-mimic-gram-process} is very close to the projection onto the perpendicular space in the Gram-Schmidt process of Equation~\eqref{equation:gram-schdt-eq2} (p.~\pageref{equation:gram-schdt-eq2}). We thus define \fbox{$<\bx, \by>:=\by^\top\bA\bx$} to explicitly mimic the form of the projection in Equation~\eqref{equation:gram-schdt-eq2}.
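With $\bA=\bI$ and $\bX=\bY$, the recursion in Equation~\eqref{equation:uk-vk-to-mimic-gram-process} reduces to the classical (unnormalized) Gram-Schmidt process. A small sketch of this special case (the helper name `biconjugate_basis` is hypothetical):

```python
import numpy as np

def biconjugate_basis(A, X, Y):
    """u_k, v_k from the closed-form recursion, with <x, y> := y^T A x."""
    U, V = [], []
    for xk, yk in zip(X.T, Y.T):                  # iterate over columns
        uk, vk = xk.copy(), yk.copy()
        for ui, vi in zip(U, V):
            d = vi @ A @ ui                       # <u_i, v_i>
            uk = uk - (vi @ A @ xk) / d * ui      # subtract <x_k, v_i>/<u_i, v_i> u_i
            vk = vk - (yk @ A @ ui) / d * vi      # subtract <u_i, y_k>/<u_i, v_i> v_i
        U.append(uk); V.append(vk)
    return np.column_stack(U), np.column_stack(V)

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4))                  # vectors to orthogonalize (columns)
U, V = biconjugate_basis(np.eye(4), X, X)        # A = I and X = Y
assert np.allclose(U, V)                         # the pair collapses to one basis
G = U.T @ U
assert np.allclose(G, np.diag(np.diag(G)))       # columns are mutually orthogonal
```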
We summarize the results so far in the following lemma, which gives a clear view of what we have been working toward; we will use these results extensively in the sequel:
\begin{lemma}[Properties of Wedderburn Sequence]\label{lemma:wedderburn-sequence-general-v2}
For each matrix with $\bA_{k+1} = \bA_k -w_k^{-1} \bA_k\bx_k\by_k^\top \bA_k$, then $\bA_{k+1} $ can be written as
$$
\bA_{k+1} = \bA - \sum_{i=1}^{k}w_{i}^{-1} \bA\bu_i \bv_i^\top \bA,
$$
where
\begin{equation}\label{equation:properties-of-wedderburn-ukvk}
\bu_k=\bx_k -\sum_{i=1}^{k-1}\frac{\textcolor{blue}{<\bx_k, \bv_i>}}{\textcolor{blue}{<\bu_i,\bv_i>}}\bu_i,\qquad
\text{and}\qquad \bv_k=\by_k -\sum_{i=1}^{k-1}\frac{\textcolor{blue}{<\bu_i,\by_k>}}{\textcolor{blue}{<\bu_i,\bv_i>}}\bv_i.
\end{equation}
Further, we have the following properties:
\begin{equation}\label{equation:wedderburn-au-akxk-2}
\begin{aligned}
\bA\bu_k &=\bA_k\bx_k,\\
\bv_k^\top \bA&=\by_k^\top \bA_k.
\end{aligned}
\end{equation}
\begin{equation}
<\bu_k, \bv_j>=<\bu_j, \bv_k>=0 \text{ for all } j<k.
\end{equation}
\begin{equation}\label{equation:wk-by-ukvk}
w_k = \by_k^\top\bA_k\bx_k = <\bu_k, \bv_k>.
\end{equation}
\end{lemma}
By substituting Equation~\eqref{equation:wedderburn-au-akxk-2} into Form 1 of the biconjugate decomposition, and using Equation~\eqref{equation:wk-by-ukvk}, which implies $w_k = \bv_k^\top\bA\bu_k$, we obtain Forms 2 and 3 of the decomposition:
\begin{theoremHigh}[Biconjugate Decomposition: Form 2 and Form 3]\label{theorem:biconjugate-form2}
The equality from the rank-reducing process implies the following matrix decomposition
$$
\bA = \bA\bU_r \bOmega_r^{-1} \bV_r^\top\bA,
$$
where $\bOmega_r=diag(w_1, w_2, \ldots, w_r)$, $\bU_r=[\bu_1,\bu_2, \ldots, \bu_r]\in \real^{n\times r}$, and $\bV_r=[\bv_1, \bv_2,$ $\ldots,$ $\bv_r] \in \real^{m\times r}$ with
\begin{equation}\label{equation:properties-of-wedderburn-ukvk2-inform2}
\bu_k=\bx_k -\sum_{i=1}^{k-1}\frac{<\bx_k, \bv_i>}{<\bu_i,\bv_i>}\bu_i,\qquad
\text{and}\qquad \bv_k=\by_k -\sum_{i=1}^{k-1}\frac{<\bu_i,\by_k>}{<\bu_i,\bv_i>}\bv_i.
\end{equation}
And also the following decomposition
\begin{equation}\label{equation:wedderburn-vgamma-ugamma}
\bV_\gamma^\top \bA \bU_\gamma = \bOmega_\gamma,
\end{equation}
where $\bOmega_\gamma=diag(w_1, w_2, \ldots, w_\gamma)$, $\bU_\gamma=[\bu_1,\bu_2, \ldots, \bu_\gamma]\in \real^{n\times \gamma}$, and $\bV_\gamma=[\bv_1, \bv_2,$ $\ldots,$ $\bv_\gamma]\in \real^{m\times \gamma}$. Note the difference between the subscripts $r$ and $\gamma$ used here, with $\gamma \leq r$.
\end{theoremHigh}
Notice that these two forms of the biconjugate decomposition are independent of the intermediate Wedderburn matrices $\{\bA_k\}$.
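Both forms can be checked numerically. The sketch below (random rank-$r$ matrix and generically chosen random $(\bX, \bY)$, so that every $w_k\neq 0$; all shapes are assumptions) builds $\bU_r, \bV_r$ from the closed-form recursion and verifies Forms 2 and 3:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 6, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank-3 matrix
X = rng.standard_normal((n, r))   # generic x_k's (columns)
Y = rng.standard_normal((m, r))   # generic y_k's (columns)

U = np.zeros((n, r)); V = np.zeros((m, r))
for k in range(r):
    uk, vk = X[:, k].copy(), Y[:, k].copy()
    for i in range(k):
        d = V[:, i] @ A @ U[:, i]                    # <u_i, v_i>
        uk -= (V[:, i] @ A @ X[:, k]) / d * U[:, i]  # subtract <x_k, v_i>/<u_i, v_i> u_i
        vk -= (Y[:, k] @ A @ U[:, i]) / d * V[:, i]  # subtract <u_i, y_k>/<u_i, v_i> v_i
    U[:, k], V[:, k] = uk, vk

Omega = V.T @ A @ U
assert np.allclose(Omega, np.diag(np.diag(Omega)))             # Form 3: V^T A U is diagonal
assert np.allclose(A @ U @ np.linalg.inv(Omega) @ V.T @ A, A)  # Form 2
```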
\paragraph{A word on the notation} In the sequel, we use a subscript to indicate the number of columns of a matrix to avoid confusion, e.g., the $r$ and $\gamma$ in the above theorem.
\section{Properties of the Biconjugate Decomposition}
\begin{corollary}[Connection of $\bU_\gamma$ and $\bX_\gamma$]\label{corollary:biconjugate-connection-u-x}
If $(\bX_\gamma, \bY_\gamma) \in \real^{n\times \gamma}\times \real^{m\times \gamma}$ effects a rank-reducing process for $\bA$, then there are unique unit upper triangular matrices $\bR_\gamma^{(x)}\in\real^{\gamma\times \gamma}$ and $\bR_\gamma^{(y)}\in\real^{\gamma\times \gamma}$ such that
$$
\bX_\gamma = \bU_\gamma \bR_\gamma^{(x)}, \qquad \text{and} \qquad \bY_\gamma=\bV_\gamma\bR_\gamma^{(y)},
$$
where $\bU_\gamma$ and $\bV_\gamma$ are matrices with columns resulting from the Wedderburn sequence as in Equation~\eqref{equation:wedderburn-vgamma-ugamma}.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:biconjugate-connection-u-x}]
The proof is trivial from the definition of $\bu_k$ and $\bv_k$ in Equation~\eqref{equation:properties-of-wedderburn-ukvk} or Equation~\eqref{equation:properties-of-wedderburn-ukvk2-inform2} by setting the $j$-th column of $\bR_\gamma^{(x)}$ and $\bR_\gamma^{(y)}$ as
$$
\left[\frac{<\bx_j,\bv_1>}{<\bu_1,\bv_1>}, \frac{<\bx_j,\bv_2>}{<\bu_2,\bv_2>}, \ldots, \frac{<\bx_j,\bv_{j-1}>}{<\bu_{j-1},\bv_{j-1}>}, 1, 0, 0, \ldots, 0 \right]^\top,
$$
and
$$
\left[\frac{<\bu_1, \by_j>}{<\bu_1,\bv_1>},\frac{<\bu_2, \by_j>}{<\bu_2,\bv_2>}, \ldots, \frac{<\bu_{j-1}, \by_j>}{<\bu_{j-1},\bv_{j-1}>}, 1, 0, 0, \ldots, 0 \right]^\top.
$$
This completes the proof.
\end{proof}
The pair $(\bU_\gamma, \bV_\gamma) \in \real^{n\times \gamma}\times \real^{m\times \gamma}$ in Theorem~\ref{theorem:biconjugate-form2} is called a \textbf{biconjugate pair} with respect to $\bA$ if $\bOmega_\gamma$ is nonsingular and diagonal. Further, let $(\bX_\gamma, \bY_\gamma) \in \real^{n\times \gamma}\times \real^{m\times \gamma}$ effect a rank-reducing process for $\bA$; then $(\bX_\gamma, \bY_\gamma)$ is said to be \textbf{biconjugatable} and \textbf{biconjugated into a biconjugate pair} of matrices $(\bU_\gamma, \bV_\gamma)$ if there exist unit upper triangular matrices $\bR_\gamma^{(x)},\bR_\gamma^{(y)}$ such that $\bX_\gamma = \bU_\gamma \bR_\gamma^{(x)}$ and $\bY_\gamma=\bV_\gamma\bR_\gamma^{(y)}$.
\section{Connection to Well-Known Decomposition Methods}
\subsection{LDU Decomposition}
\begin{theorem}[LDU, \cite{chu1995rank} Theorem 2.4]\label{theorem:biconjugate-ldu}
Let $(\bX_\gamma, \bY_\gamma) \in \real^{n\times \gamma}\times \real^{m\times \gamma}$ effect a rank-reducing process for $\bA\in \real^{m\times n}$, with $\gamma \in \{1, 2, \ldots, r\}$. Then $(\bX_\gamma, \bY_\gamma)$ can be biconjugated if and only if $\bY_\gamma^\top\bA\bX_\gamma$ has an LDU decomposition.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:biconjugate-ldu}]
Suppose $\bX_\gamma$ and $\bY_\gamma$ are biconjugatable; then there exist unit upper triangular matrices $\brx$ and $\bry$ such that $\bxgamma = \bugamma\brx$, $\bygamma = \bvgamma\bry$, and $\bvgamma^\top\bA\bugamma = \bomegagamma$ is a nonsingular diagonal matrix. It follows that
$$
\bygamma^\top\bA\bxgamma = \bryt \bvgamma^\top \bA \bugamma\brx = \bryt \bomegagamma \brx
$$
is the unique LDU decomposition (with unit triangular factors) of $\bygamma^\top\bA\bxgamma$. The form above can be seen as the \textbf{fourth form of the biconjugate decomposition}.
Conversely, suppose $\bygamma^\top\bA\bxgamma = \bR_2^\top \bD\bR_1$ is an LDU decomposition with both $\bR_1$ and $\bR_2$ being unit upper triangular matrices. Then, since $\bR_1^{-1}$ and $\bR_2^{-1}$ are also unit upper triangular matrices, $(\bX_\gamma, \bY_\gamma)$ biconjugates into $(\bX_\gamma\bR_1^{-1}, \bY_\gamma\bR_2^{-1})$.
\end{proof}
\begin{corollary}[Determinant]\label{corollary:lu-determinant}
Suppose $(\bX_\gamma, \bY_\gamma) \in \real^{n\times \gamma}\times \real^{m\times \gamma}$ are biconjugatable. Then
$$
\det(\bygamma^\top \bA\bxgamma) = \prod_{i=1}^{\gamma} w_i.
$$
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:lu-determinant}]
By Theorem~\ref{theorem:biconjugate-ldu}, since $(\bX_\gamma, \bY_\gamma)$ are biconjugatable, there are unit upper triangular matrices $\brx$ and $\bry$ such that $\bygamma^\top\bA\bxgamma = \bryt \bomegagamma \brx$. The determinant of a product is the product of the determinants, and the determinant of a triangular matrix is the product of its diagonal entries, so $\det(\bygamma^\top\bA\bxgamma) = \det(\bomegagamma) = \prod_{i=1}^{\gamma} w_i$.
\end{proof}
\begin{lemma}[Biconjugatable in Principal Minors]\label{lemma:Biconjugatable-in-Principal-Minors}
Let $r=rank(\bA) \geq \gamma$ with $\bA\in \real^{m\times n}$. In the Wedderburn sequence, take $\bx_i$ to be the $i$-th standard basis vector of $\real^n$ (i.e., $\bx_i = \be_i \in \real^n$) and $\by_i$ to be the $i$-th standard basis vector of $\real^m$ (i.e., $\by_i=\be_i \in
\real^m$) for $i \in \{1, 2, \ldots, \gamma\}$, so that $\bygamma^\top \bA\bxgamma$ is the $\gamma\times\gamma$ leading principal submatrix of $\bA$, i.e., $\bygamma^\top \bA\bxgamma = \bA_{1:\gamma, 1:\gamma}$. Then, $(\bX_\gamma, \bY_\gamma)$ is biconjugatable if and only if the $\gamma$-th leading principal minor of $\bA$ is nonzero. In this case, the $\gamma$-th leading principal minor of $\bA$ is given by $\prod_{i=1}^{\gamma} w_i$.\index{Leading principal minors}
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:Biconjugatable-in-Principal-Minors}]
The forward direction holds since the nonzero leading principal minors of $\bA$ imply $w_i \neq 0$ for all $i\leq \gamma$, so the Wedderburn sequence can be successfully carried out. The converse holds since Corollary~\ref{corollary:lu-determinant} implies that $\det(\bygamma^\top \bA\bxgamma)=\prod_{i=1}^{\gamma} w_i$ is nonzero.
\end{proof}
We thus finally come to the LDU decomposition for square matrices.
\begin{theorem}[LDU: Biconjugate Decomposition for Square Matrices]\label{theorem:biconjugate-square-ldu}
For any matrix $\bA\in \real^{n\times n}$, $(\bI_n, \bI_n)$ is biconjugatable if and only if all the leading principal minors of $\bA$ are nonzero. In this case, $\bA$ can be factored as
$$
\bA = \bV_n^{-\top} \bOmega_n \bU_n^{-1} = \bL\bD\bU,
$$
where $\bOmega_n = \bD$ is a diagonal matrix with nonzero values on the diagonal, $\bV_n^{-\top} = \bL$ is a unit lower triangular matrix, and $\bU_n^{-1} = \bU$ is a unit upper triangular matrix.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:biconjugate-square-ldu}]
From Lemma~\ref{lemma:Biconjugatable-in-Principal-Minors}, $(\bI_n, \bI_n)$ is biconjugatable. From Corollary~\ref{corollary:biconjugate-connection-u-x}, we have $\bU_n \bR_n^{(x)} = \bI_n$ and $\bI_n=\bV_n\bR_n^{(y)}$; thus $\bR_n^{(x)} = \bU_n^{-1}$ and $\bR_n^{(y)} = \bV_n^{-1}$ are well defined, which completes the proof.
\end{proof}
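A numerical sketch of this square case follows (the test matrix is made strictly diagonally dominant so that all leading principal minors are nonzero, as the theorem assumes). Running the Wedderburn recursion on $(\bI_n, \bI_n)$ yields unit triangular $\bL = \bV_n^{-\top}$ and $\bU = \bU_n^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n)) + 10 * np.eye(n)   # diagonally dominant: nonzero leading minors
I = np.eye(n)

U = np.zeros((n, n)); V = np.zeros((n, n))
for k in range(n):
    uk, vk = I[:, k].copy(), I[:, k].copy()        # (X, Y) = (I, I)
    for i in range(k):
        d = V[:, i] @ A @ U[:, i]                  # w_i = <u_i, v_i>
        uk -= (V[:, i] @ A @ I[:, k]) / d * U[:, i]
        vk -= (I[:, k] @ A @ U[:, i]) / d * V[:, i]
    U[:, k], V[:, k] = uk, vk

D = np.diag(np.diag(V.T @ A @ U))   # Omega_n
L = np.linalg.inv(V).T              # L = V_n^{-T}, unit lower triangular
Uu = np.linalg.inv(U)               # U = U_n^{-1}, unit upper triangular
assert np.allclose(L @ D @ Uu, A)                                  # A = L D U
assert np.allclose(L, np.tril(L)) and np.allclose(np.diag(L), 1)
assert np.allclose(Uu, np.triu(Uu)) and np.allclose(np.diag(Uu), 1)
```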
\subsection{Cholesky Decomposition}
For symmetric positive definite matrices, the leading principal minors are always positive. The proof is provided in Section~\ref{appendix:leading-minors-pd} (p.~\pageref{appendix:leading-minors-pd}).
\begin{theorem}[Cholesky: Biconjugate Decomposition for PD Matrices]\label{theorem:biconjugate-square-cholesky}
For any symmetric and positive definite matrix $\bA\in \real^{n\times n}$, the Cholesky decomposition of $\bA$ can be obtained from the Wedderburn sequence applied to $(\bI_n, \bI_n)$ as $(\bX_n, \bY_n)$.
In this case, $\bA$ can be factored as
$$
\bA = \bU_n^{-\top} \bOmega_n \bU_n^{-1} = (\bU_n^{-\top} \bOmega_n^{1/2})( \bOmega_n^{1/2} \bU_n^{-1}) =\bR^\top\bR,
$$
where $\bOmega_n$ is a diagonal matrix with positive values on the diagonal, and $\bU_n^{-1}$ is a unit upper triangular matrix.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:biconjugate-square-cholesky}]
Since the leading principal minors of positive definite matrices are positive, $w_i>0$ for all $i\in \{1, 2,\ldots, n\}$. It can be easily verified, via the LDU decomposition obtained from biconjugation and the symmetry of $\bA$, that $\bA = \bU_n^{-\top} \bOmega_n \bU_n^{-1}$. Since the $w_i$'s are positive, $\bOmega_n$ is positive definite and can be factored as $\bOmega_n = \bOmega_n^{1/2}\bOmega_n^{1/2}$. This implies that $\bOmega_n^{1/2} \bU_n^{-1}$ is the Cholesky factor.
\end{proof}
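A numerical sketch (random symmetric positive definite test matrix): by the symmetry of $\bA$ with $(\bX_n, \bY_n)=(\bI_n, \bI_n)$, the recursion gives $\bu_k=\bv_k$, so only one basis is tracked, and the result matches NumPy's Cholesky factor:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # symmetric positive definite
I = np.eye(n)

# Wedderburn sequence applied to (I, I); by symmetry of A, u_k = v_k here.
U = np.zeros((n, n))
for k in range(n):
    uk = I[:, k].copy()
    for i in range(k):
        uk -= (U[:, i] @ A @ I[:, k]) / (U[:, i] @ A @ U[:, i]) * U[:, i]
    U[:, k] = uk

Omega = U.T @ A @ U                # diagonal with positive entries w_i
R = np.sqrt(np.diag(np.diag(Omega))) @ np.linalg.inv(U)   # R = Omega^{1/2} U_n^{-1}
assert np.allclose(R.T @ R, A)
assert np.allclose(R.T, np.linalg.cholesky(A))   # the unique Cholesky factor
```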
\subsection{QR Decomposition}
Without loss of generality, we assume that $\bA\in \real^{n\times n}$ has full column rank, so that $\bA$ can be factored as $\bA=\bQ\bR$ with $\bQ, \bR \in \real^{n\times n}$.
\begin{theorem}[QR: Biconjugate Decomposition for Nonsingular Matrices]\label{theorem:biconjugate-square-qr}
For any nonsingular matrix $\bA\in \real^{n\times n}$, the QR decomposition of $\bA$ can be obtained from the Wedderburn sequence applied to $(\bI_n, \bA)$ as $(\bX_n, \bY_n)$.
In this case, $\bA$ can be factored as
$$
\bA = \bQ\bR,
$$
where $\bQ=\bV_n \bOmega_n^{-1/2}$ is an orthogonal matrix and $\bR = \bOmega_n^{1/2}\bR_n^{(x)}$ is an upper triangular matrix. Here, $\bR_n^{(x)}$ comes from \textcolor{blue}{Form 4} in Theorem~\ref{theorem:biconjugate-ldu} with $\gamma=n$ (valid since $\gamma$ can be any value with $\gamma\leq r$ and the rank is $r=n$):
$$
\bY_n^\top\bA\bX_n = \bR_n^{(y)\top} \bV_n^\top \bA \bU_n\bR_n^{(x)}=\bR_n^{(y)\top} \bOmega_n \bR_n^{(x)}.
$$
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:biconjugate-square-qr}]
Since $(\bX_n, \bY_n) = (\bI_n, \bA)$, by Theorem~\ref{theorem:biconjugate-ldu} we have the decomposition
$$
\bY_n^\top\bA\bX_n = \bR_n^{(y)\top} \bV_n^\top \bA \bU_n\bR_n^{(x)}=\bR_n^{(y)\top} \bOmega_n \bR_n^{(x)}.
$$
Substitute $(\bI_n, \bA)$ into the above decomposition, we have
\begin{equation}\label{equation:biconjugate-qr-ata1}
\begin{aligned}
\bY_n^\top\bA\bX_n &= \bR_n^{(y)\top} \bV_n^\top \bA \bU_n\bR_n^{(x)} = \bR_n^{(y)\top} \bOmega_n \bR_n^{(x)}\\
\bA^\top \bA &= \bR_n^{(y)\top} \bOmega_n \bR_n^{(x)} \\
\bA^\top \bA &= \bR_1^\top \bOmega_n \bR_1 \qquad (\text{$\bA^\top\bA$ is symmetric and let $\bR_1=\bR_n^{(x)}=\bR_n^{(y)}$})\\
\bA^\top \bA &= (\bR_1^\top \bOmega_n^{1/2\top}) (\bOmega_n^{1/2}\bR_1) \\
\bA^\top \bA &= \bR^\top\bR. \qquad (\text{Let $\bR = \bOmega_n^{1/2}\bR_1$})
\end{aligned}
\end{equation}
To see why $\bOmega_n$ can be factored as $\bOmega_n = \bOmega_n^{1/2\top}\bOmega_n^{1/2}$, note that $\bA^\top \bA = \bR_1^\top \bOmega_n \bR_1$ is positive definite since $\bA$ is nonsingular, and $\bR_1$ is unit upper triangular and hence invertible. Thus $\bOmega_n = \bR_1^{-\top}(\bA^\top\bA)\bR_1^{-1} = diag(w_1, w_2, \ldots, w_n)$ is a positive definite diagonal matrix with each $w_i>0$, and it can be factored as
\begin{equation}\label{equation:omega-half-qr}
\bOmega_n = \bOmega_n^{1/2}\bOmega_n^{1/2}= \bOmega_n^{1/2\top}\bOmega_n^{1/2}.
\end{equation}
By $\bxgamma = \bugamma\brx$ in Theorem~\ref{theorem:biconjugate-ldu} for all $\gamma\in \{1, 2, \ldots, n\}$, we have
$$
\begin{aligned}
\bX_n &= \bU_n\bR_1 \\
\bI_n &= \bU_n\bR_1, \qquad (\text{Since $\bX_n = \bI_n$}) \\
\bU_n &= \bR_1^{-1}
\end{aligned}
$$
By $\bygamma = \bvgamma\bry$ in Theorem~\ref{theorem:biconjugate-ldu} for all $\gamma\in \{1, 2, \ldots, n\}$, we have
\begin{equation}\label{equation:biconjugate-qr-ata2}
\begin{aligned}
\bY_n &= \bV_n\bR_1\\
\bA &= \bV_n\bR_1, \qquad &(\text{$\bA=\bY_n$}) \\
\bA^\top \bA &= \bR_1^\top\bV_n^\top \bV_n\bR_1 \\
\bR_1^\top \bOmega_n \bR_1&=\bR_1^\top\bV_n^\top \bV_n\bR_1, \qquad &(\text{Equation~\eqref{equation:biconjugate-qr-ata1}})\\
(\bR_1^\top \bOmega_n^{1/2\top}) (\bOmega_n^{1/2}\bR_1) &= (\bR_1^\top \bOmega_n^{1/2\top} \bOmega_n^{-1/2\top})\bV_n^\top \bV_n (\bOmega_n^{-1/2}\bOmega_n^{1/2}\bR_1), \qquad &\text{(Equation~\eqref{equation:omega-half-qr})} \\
\bR^\top\bR &= \bR^\top (\bOmega_n^{-1/2\top} \bV_n^\top) (\bV_n \bOmega_n^{-1/2}) \bR.
\end{aligned}
\end{equation}
Since $\bR$ is invertible, this implies $(\bV_n \bOmega_n^{-1/2})^\top (\bV_n \bOmega_n^{-1/2}) = \bI_n$. Thus, $\bQ=\bV_n \bOmega_n^{-1/2}$ is an orthogonal matrix.
\end{proof}
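A numerical sketch of this construction (random nonsingular test matrix): running the Wedderburn recursion on $(\bX_n, \bY_n)=(\bI_n, \bA)$, scaling $\bV_n$ by $\bOmega_n^{-1/2}$ yields an orthogonal $\bQ$, and $\bOmega_n^{1/2}\bU_n^{-1}$ yields the upper triangular $\bR$:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n))    # nonsingular (generically)
I = np.eye(n)

# Wedderburn sequence applied to (X, Y) = (I, A).
U = np.zeros((n, n)); V = np.zeros((n, n))
for k in range(n):
    uk, vk = I[:, k].copy(), A[:, k].copy()
    for i in range(k):
        d = V[:, i] @ A @ U[:, i]                    # w_i > 0 (Omega_n is positive definite)
        uk -= (V[:, i] @ A @ I[:, k]) / d * U[:, i]
        vk -= (A[:, k] @ A @ U[:, i]) / d * V[:, i]
    U[:, k], V[:, k] = uk, vk

w = np.diag(V.T @ A @ U)
Q = V / np.sqrt(w)                           # Q = V_n Omega_n^{-1/2} (scale each column)
R = np.diag(np.sqrt(w)) @ np.linalg.inv(U)   # R = Omega_n^{1/2} U_n^{-1}
assert np.allclose(Q.T @ Q, np.eye(n))       # Q is orthogonal
assert np.allclose(Q @ R, A)                 # A = Q R
assert np.allclose(R, np.triu(R))            # R is upper triangular
```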
\subsection{SVD}
To differentiate the notation, let $\bA=\bU^\mathrm{svd} \bSigma^\mathrm{svd} \bV^{\mathrm{svd}\top}$ be the SVD of $\bA$, where $\bU^\mathrm{svd} = [\bu_1^\mathrm{svd}, \bu_2^\mathrm{svd}, \ldots, \bu_n^\mathrm{svd}]$, $\bV^\mathrm{svd} = [\bv_1^\mathrm{svd}, \bv_2^\mathrm{svd}, \ldots, \bv_n^\mathrm{svd}]$, and $\bSigma^\mathrm{svd} = diag(\sigma_1, \sigma_2, \ldots, \sigma_n)$. Without loss of generality, we assume $\bA\in \real^{n\times n}$ and $rank(\bA)=n$; readers can prove the equivalence for a general matrix $\bA\in \real^{m\times n}$.
Suppose $\bX_n=\bV^\mathrm{svd}$ and $\bY_n=\bU^\mathrm{svd}$ effect a rank-reducing process for $\bA$.
From the definition of $\bu_k$ and $\bv_k$ in Equation~\eqref{equation:properties-of-wedderburn-ukvk} or Equation~\eqref{equation:properties-of-wedderburn-ukvk2-inform2}, we have
$$
\bu_k = \bv_k^\mathrm{svd} \qquad \text{and} \qquad \bv_k = \bu_k^\mathrm{svd} \qquad \text{and} \qquad w_k = \by_k^\top \bA \bx_k=\sigma_k.
$$
That is, $\bV_n = \bU^\mathrm{svd}$, $\bU_n = \bV^\mathrm{svd}$, and $\bOmega_n = \bSigma^\mathrm{svd}$, where we set $\gamma=n$ since $\gamma$ can be any value with $\gamma\leq r$ and the rank is $r=n$.
By $\bX_n= \bU_n\bR_n^{(x)}$ in Theorem~\ref{theorem:biconjugate-ldu}, we have
$$
\bX_n = \bU_n\bR_n^{(x)} \leadto
\bV^\mathrm{svd} = \bV^\mathrm{svd}\bR_n^{(x)} \leadto
\bI_n = \bR_n^{(x)}
$$
By $\bY_n = \bV_n\bR_n^{(y)}$ in Theorem~\ref{theorem:biconjugate-ldu}, we have
$$
\bY_n = \bV_n\bR_n^{(y)}\leadto
\bU^\mathrm{svd} = \bU^\mathrm{svd}\bR_n^{(y)} \leadto
\bI_n=\bR_n^{(y)}
$$
Again, from Theorem~\ref{theorem:biconjugate-ldu} with $\gamma=n$, we have
$$
\bY_n^\top\bA\bX_n = \bR_n^{(y)\top} \bV_n^\top \bA \bU_n\bR_n^{(x)}=\bR_n^{(y)\top} \bOmega_n \bR_n^{(x)}.
$$
That is
$$
\bU^{\mathrm{svd}\top}\bA \bV^\mathrm{svd} = \bSigma^\mathrm{svd},
$$
which is exactly the form of the SVD. This proves the equivalence between the SVD and the biconjugate decomposition when the Wedderburn sequence is applied to $(\bV^\mathrm{svd}, \bU^\mathrm{svd})$ as $(\bX_n, \bY_n)$.
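The key facts used above can be confirmed numerically: since $\by_j^\top\bA\bx_k=\sigma_k\delta_{jk}$ for singular vectors, all cross terms in the recursion vanish, so the vectors pass through unchanged and $w_k=\sigma_k$. A small sketch (random test matrix):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4
A = rng.standard_normal((n, n))
Usvd, s, Vt = np.linalg.svd(A)
X, Y = Vt.T, Usvd                  # (X_n, Y_n) = (V^svd, U^svd)

# Cross terms vanish, so u_k = x_k = v_k^svd, v_k = y_k = u_k^svd, and w_k = sigma_k.
for k in range(n):
    for j in range(k):
        assert np.isclose(Y[:, j] @ A @ X[:, k], 0)   # y_j^T A x_k = 0 for j < k
    assert np.isclose(Y[:, k] @ A @ X[:, k], s[k])    # w_k = sigma_k

assert np.allclose(Y.T @ A @ X, np.diag(s))           # U^svd.T A V^svd = Sigma
```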
\section{Proof of the General Term Formula of the Wedderburn Sequence}\label{appendix:wedderburn-general-term}
We define the Wedderburn sequence of $\bA$ by $\bA_{k+1} = \bA_k -w_k^{-1} \bA_k\bx_k\by_k^\top \bA_k$ and $\bA_1 = \bA$. The proof of the general term formula of this sequence is then:
\begin{proof}[of Lemma~\ref{lemma:wedderburn-sequence-general}]
For $\bA_2$, we have:
$$
\begin{aligned}
\bA_2 &=\bA_1 -w_1^{-1} \bA_1\bx_1\by_1^\top \bA_1\\
&=\bA -w_1^{-1} \bA\bu_1\bv_1^\top \bA, \text{ where } \bu_1=\bx_1 \text{, } \bv_1=\by_1.
\end{aligned}
$$
For $\bA_3$, we can write out the equation:
$$
\begin{aligned}
\bA_3 &= \bA_2 -w_2^{-1} \bA_2\bx_2\by_2^\top \bA_2 \\
&=(\bA -w_1^{-1} \bA\bu_1\bv_1^\top \bA) \\
&\gap - w_2^{-1}(\bA -w_1^{-1} \bA\bu_1\bv_1^\top \bA)\bx_2\by_2^\top(\bA -w_1^{-1} \bA\bu_1\bv_1^\top \bA) \qquad &\text{(substitute $\bA_2$)}\\
&=(\bA -w_1^{-1} \bA\bu_1\bv_1^\top \bA) \\
&\gap - w_2^{-1}\textcolor{blue}{\bA}(\textcolor{blue}{\bx_2 }-w_1^{-1} \bu_1\bv_1^\top \bA\textcolor{blue}{\bx_2 })(\textcolor{blue}{\by_2^\top} -w_1^{-1}\textcolor{blue}{\by_2^\top} \bA\bu_1\bv_1^\top )\textcolor{blue}{\bA} \qquad &\text{(take out $\bA$)}\\
&=\bA -w_1^{-1} \bA\bu_1\bv_1^\top \bA - w_2^{-1}\bA\bu_2\bv_2^\top\bA\\
&=\bA -\sum_{i=1}^{2}w_i^{-1} \bA\bu_i\bv_i^\top \bA,
\end{aligned}
$$
where $\bu_2=\bx_2 -w_1^{-1} \bu_1\bv_1^\top \bA\bx_2=\bx_2 -\frac{\bv_1^\top \bA\bx_2}{w_1}\bu_1$, $\bv_2=\by_2 -w_1^{-1}\by_2^\top \bA\bu_1\bv_1=\by_2 -\frac{\by_2^\top \bA\bu_1}{w_1}\bv_1$.
Similarly, we can find the expression of $\bA_4$ in terms of $\bA$:
$$
\begin{aligned}
\bA_4 &= \bA_3 -w_3^{-1} \bA_3\bx_3\by_3^\top \bA_3 \\
&=\bA -\sum_{i=1}^{2}w_i^{-1} \bA\bu_i\bv_i^\top \bA \\
&\gap - w_3^{-1} (\bA -\sum_{i=1}^{2}w_i^{-1} \bA\bu_i\bv_i^\top \bA)\bx_3\by_3^\top (\bA -\sum_{i=1}^{2}w_i^{-1} \bA\bu_i\bv_i^\top \bA) \qquad &\text{(substitute $\bA_3$)}\\
&=\bA -\sum_{i=1}^{2}w_i^{-1} \bA\bu_i\bv_i^\top \bA \\
&\gap - w_3^{-1}\textcolor{blue}{\bA} (\textcolor{blue}{\bx_3} -\sum_{i=1}^{2}w_i^{-1} \bu_i\bv_i^\top \bA\textcolor{blue}{\bx_3}) (\textcolor{blue}{\by_3^\top} -\sum_{i=1}^{2}w_i^{-1}\textcolor{blue}{\by_3^\top} \bA\bu_i\bv_i^\top )\textcolor{blue}{\bA} \qquad &\text{(take out $\bA$)}\\
&=\bA -\sum_{i=1}^{2}w_i^{-1} \bA\bu_i\bv_i^\top \bA - w_3^{-1}\bA\bu_3\bv_3^\top\bA\\
&=\bA -\sum_{i=1}^{3}w_i^{-1} \bA\bu_i\bv_i^\top \bA,
\end{aligned}
$$
where $\bu_3=\bx_3 -\sum_{i=1}^{2}\frac{\bv_i^\top \bA\bx_3}{w_i}\bu_i$, $\bv_3=\by_3 -\sum_{i=1}^{2}\frac{\by_3^\top \bA\bu_i}{w_i}\bv_i$.
Continuing this process, we can define
$$
\bu_k=\bx_k -\sum_{i=1}^{k-1}\frac{\bv_i^\top \bA\bx_k}{w_i}\bu_i,
\qquad \text{and} \qquad
\bv_k=\by_k -\sum_{i=1}^{k-1}\frac{\by_k^\top \bA\bu_i}{w_i}\bv_i,
$$
and thereby obtain the general term of the Wedderburn sequence.
\end{proof}
\chapter*{Acknowledgments}
We thank Gilbert Strang for raising the question formulated in Corollary~\ref{corollary:invertible-intersection}, checking the writing of the book, for a stream of ideas and references about the three factorizations from the steps of elimination, and for the generous sharing of the manuscript of \citep{strang2021three}.
\part{Data Interpretation and Information Distillation}\label{part:data-interation}
\chapter{CR Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\index{Data interpretation}
\section{CR Decomposition}\label{section:cr-decomposition}
The CR decomposition was proposed in \citet{strang2021every, stranglu}. As usual, we first give the result, and we will discuss the existence and the origin of this decomposition in the following sections.
\begin{theoremHigh}[CR Decomposition]\label{theorem:cr-decomposition}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\underset{m\times n}{\bA} = \underset{m\times r}{\bC} \gap \underset{r\times n}{\bR}
$$
where $\bC$ consists of the first $r$ linearly independent columns of $\bA$, and $\bR$ is an $r\times n$ matrix that reconstructs the columns of $\bA$ from the columns of $\bC$. In particular, $\bR$ is the reduced row echelon form (RREF) of $\bA$ without the zero rows.

The storage for the decomposition is then reduced (or potentially increased) from $mn$ to $r(m+n)$.
\end{theoremHigh}
\section{Existence of the CR Decomposition}
Since matrix $\bA$ is of rank $r$, there exist $r$ linearly independent columns in $\bA$. We then choose linearly independent columns from $\bA$ and put them into $\bC$:
\begin{tcolorbox}[title={Find $r$ linearly Independent Columns From $\bA$}]
1. If column 1 of $\bA$ is not zero, put it into $\bC$;

2. If column 2 of $\bA$ is not a multiple of column 1, put it into $\bC$;

3. If column 3 of $\bA$ is not a combination of columns 1 and 2, put it into $\bC$;

4. Continue this process until we find $r$ linearly independent columns (or all the linearly independent columns if we do not know the rank $r$ beforehand).
\end{tcolorbox}
When we have the $r$ linearly independent columns from $\bA$, we can prove the existence of CR decomposition by the column space view of matrix multiplication.
\paragraph{Column space view of matrix multiplication} The product of two matrices $\bD\in \real^{m\times k}$ and $\bE\in \real^{k\times n}$ can be written as $\bA=\bD\bE=\bD[\be_1, \be_2, \ldots, \be_n] = [\bD\be_1, \bD\be_2, \ldots, \bD\be_n]$; that is, each column of $\bA$ is a combination of the columns of $\bD$.
\begin{proof}[of Theorem~\ref{theorem:cr-decomposition}]
Since the rank of matrix $\bA$ is $r$ and $\bC$ contains $r$ linearly independent columns from $\bA$, the column space of $\bC$ is equal to the column space of $\bA$.
Hence any column $\ba_i$ of $\bA$ can be represented as a linear combination of the columns of $\bC$; i.e., there exists a vector $\br_i$ such that $\ba_i = \bC \br_i$ for all $i\in \{1, 2, \ldots, n\}$. Putting these $\br_i$'s into the columns of matrix $\bR$, we obtain
$$
\bA = [\ba_1, \ba_2, \ldots, \ba_n] = [\bC \br_1, \bC \br_2, \ldots, \bC \br_n]= \bC \bR,
$$
from which the result follows.
\end{proof}
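The construction above can be carried out mechanically via the reduced row echelon form (see Section~\ref{section:rref-cr}). As a minimal sketch (the example matrix \texttt{A} and all function names here are ours, not from the text), the following pure-Python code computes the RREF with exact rational arithmetic, then assembles $\bC$ from the pivot columns of $\bA$ and $\bR$ from the nonzero rows of the RREF:

```python
from fractions import Fraction

def rref(A):
    """Reduced row echelon form of A (a list of rows) and its pivot-column indices."""
    M = [[Fraction(x) for x in row] for row in A]
    m, n = len(M), len(M[0])
    pivots, r = [], 0
    for col in range(n):
        # find a row at or below r with a nonzero entry in this column
        piv = next((i for i in range(r, m) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][col] for x in M[r]]          # scale the pivot to 1
        for i in range(m):                            # zero out the rest of the column
            if i != r and M[i][col] != 0:
                M[i] = [a - M[i][col] * b for a, b in zip(M[i], M[r])]
        pivots.append(col)
        r += 1
    return M, pivots

def cr_decomposition(A):
    """A = C R: C keeps the pivot columns of A; R is rref(A) without its zero rows."""
    R0, pivots = rref(A)
    C = [[row[j] for j in pivots] for row in A]
    R = R0[:len(pivots)]
    return C, R

A = [[1, 3, 8],
     [1, 2, 6],
     [0, 1, 2]]        # rank 2: column 3 = 2*(column 1) + 2*(column 2)
C, R = cr_decomposition(A)
```

Multiplying \texttt{C} and \texttt{R} back together reproduces \texttt{A} exactly, since the arithmetic is rational rather than floating-point.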
\index{Row echelon form}
\index{Reduced row echelon form}
\section{Reduced Row Echelon Form (RREF)}\label{section:rref-cr}
In the section on Gaussian elimination (Section~\ref{section:gaussian-elimination}, p.~\pageref{section:gaussian-elimination}), we introduced the elimination matrix (a lower triangular matrix) and the permutation matrix to transform $\bA$ into upper triangular form. We revisit Gaussian elimination for a $4\times 4$ square matrix, where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates a value that has just been changed:
\begin{tcolorbox}[title={Gaussian Elimination for a Square Matrix}]
$$
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bP_1}{\longrightarrow}
\begin{sbmatrix}{\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{0} & \textcolor{blue}{\bm{\boxtimes}} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bP_1\bE_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \textcolor{blue}{\boxtimes} & \boxtimes & \boxtimes \\
0 & 0 & \textcolor{blue}{\boxtimes} & \boxtimes \\
0 & \bm{0} & \bm{0} & \textcolor{blue}{\bm{\boxtimes}}
\end{sbmatrix}.
$$
\end{tcolorbox}
\index{Gaussian elimination}
Furthermore, Gaussian elimination can also be applied to rectangular matrices; we give an example for a $4\times 5$ matrix as follows:
\begin{tcolorbox}[title={Gaussian Elimination for a Rectangular Matrix}]
$$
\begin{sbmatrix}{\bA}
\textcolor{blue}{2} & \boxtimes & 10 & 9 & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\end{sbmatrix}
\stackrel{\bE_1}{\longrightarrow}
\begin{sbmatrix}{\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & 10 & 9 & \boxtimes\\
\bm{0} & \bm{0} & \textcolor{blue}{\bm{5}} & \bm{6} & \bm{\boxtimes}\\
\bm{0} & \bm{0} & \bm{2} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}
\stackrel{\bE_2}{\longrightarrow}
\begin{sbmatrix}{\bE_2\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & 10 & 9 & \boxtimes\\
0 & 0 & \textcolor{blue}{5} & 6 & \boxtimes\\
0 & 0 & \bm{0} & \textcolor{blue}{\bm{3}} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{0} & \bm{0}\\
\end{sbmatrix},
$$
\end{tcolorbox}
\noindent where the \textcolor{blue}{blue}-colored numbers are \textit{pivots} as defined previously (Definition~\ref{definition:pivot}, p.~\pageref{definition:pivot}), and we call the last matrix above the \textit{row echelon form} of matrix $\bA$. Note that the 4-th row becomes a zero row in this specific example. Going further, we subtract from each row suitable multiples of the rows below it to make the entries above the pivots zero:
\begin{tcolorbox}[title={Reduced Row Echelon Form: Get Zero Above Pivots}]
$$
\begin{sbmatrix}{\bE_2\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & 10 & 9 & \boxtimes\\
0 & 0 & \textcolor{blue}{5} & 6 & \boxtimes\\
0 & 0 & 0 & \textcolor{blue}{3} & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
\end{sbmatrix}
\stackrel{\bE_3}{\longrightarrow}
\begin{sbmatrix}{\bE_3\bE_2\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & \bm{0} & \bm{-3} & \bm{\boxtimes} \\
0 & 0 & \textcolor{blue}{5} & 6 & \boxtimes\\
0 & 0 & 0 & \textcolor{blue}{3} & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
\end{sbmatrix}
\stackrel{\bE_4}{\longrightarrow}
\begin{sbmatrix}{\bE_4\bE_3\bE_2\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & 0 & \bm{0} & \bm{\boxtimes}\\
0 & 0 & \textcolor{blue}{5} & \bm{0} & \bm{\boxtimes}\\
0 & 0 & 0 & \textcolor{blue}{3} & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
\end{sbmatrix},
$$
\end{tcolorbox}
\noindent where $\bE_3$ subtracts 2 times the $2$-nd row from the $1$-st row, and $\bE_4$ adds the $3$-rd row to the $1$-st row and subtracts 2 times the $3$-rd row from the $2$-nd row. Finally, we obtain the full \textit{reduced row echelon form} by scaling the pivots to 1:
\begin{tcolorbox}[title={Reduced Row Echelon Form: Make The Pivots To Be 1}]
$$
\begin{sbmatrix}{\bE_4\bE_3\bE_2\bE_1\bA}
\textcolor{blue}{2} & \boxtimes & 0 & 0 & \boxtimes\\
0 & 0 & \textcolor{blue}{5} & 0 & \boxtimes\\
0 & 0 & 0 & \textcolor{blue}{3} & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
\end{sbmatrix}
\stackrel{\bE_5}{\longrightarrow}
\begin{sbmatrix}{\bE_5\bE_4\bE_3\bE_2\bE_1\bA}
\textcolor{blue}{\bm{1}} & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{\boxtimes}\\
\bm{0} & \bm{0} & \textcolor{blue}{\bm{1}} & \bm{0} & \bm{\boxtimes}\\
\bm{0} & \bm{0} & \bm{0} & \textcolor{blue}{\bm{1}} & \bm{\boxtimes}\\
0 & 0 & 0 & 0 & 0\\
\end{sbmatrix},
$$
\end{tcolorbox}
\noindent where $\bE_5$ scales the pivots to 1. Note that the transformation matrices $\bE_1, \bE_2, \ldots, \bE_5$ need not be lower triangular as they are in the LU decomposition; they can also be permutation matrices or other matrices. We call this final matrix the \textbf{reduced row echelon form} of $\bA$: it has 1's as pivots and zeros above the pivots.
\index{Rank}
\index{Pivot}
\begin{lemma}[Rank and Pivots]\label{lemma:rank-is-pivots}
The rank of $\bA$ is equal to the number of pivots.
\end{lemma}
\index{Reduced row echelon form}
\begin{lemma}[RREF in CR]\label{lemma:r-in-cr-decomposition}
The reduced row echelon form of the matrix $\bA$ without zero rows is the matrix $\bR$ in the CR decomposition.
\end{lemma}
In short, we first compute the reduced row echelon form $rref(\bA)$ of matrix $\bA$. Then $\bC$ is obtained by removing from $\bA$ all the non-pivot columns (which can be identified by looking for the columns in $rref(\bA)$ that do not contain a pivot), and $\bR$ is obtained by eliminating the zero rows of $rref(\bA)$. This is actually a special case of the \textit{rank decomposition} (Theorem~\ref{theorem:rank-decomposition}, p.~\pageref{theorem:rank-decomposition}) of matrix $\bA$. However, the CR decomposition is special in that it involves the reduced row echelon form, so we introduce it here in particular.
$\bR$ has a remarkable form: its $r$ columns containing the pivots form an $r\times r$ identity matrix. Note again that we can simply remove the zero rows from the reduced row echelon form to obtain this matrix $\bR$. In \citet{strang2021every}, the author gives the specific notation $\bR_0$ for the reduced row echelon form with its zero rows retained:
$$
\bR_0 = rref(\bA)=
\begin{bmatrix}
\bR \\
\bzero
\end{bmatrix}=
\begin{bmatrix}
\bI_r & \bF \\
\bzero & \bzero
\end{bmatrix}\bP,\footnote{A permutation matrix $\bP$ multiplying a matrix on the right permutes the columns of that matrix.}
$$
where the $n\times n$ permutation matrix $\bP$ puts the columns of $r\times r$ identity matrix $\bI_r$ into the correct positions, matching the first $r$ linearly independent columns of the original matrix $\bA$.
The CR decomposition reveals a great theorem of linear algebra: the row rank equals the column rank of any matrix.
\begin{proof}[\textbf{of Theorem~\ref{lemma:equal-dimension-rank}, A Third Way}]
For the CR decomposition of matrix $\bA=\bC\bR$, we have $\bR = [\bI_r, \bF ]\bP$, where $\bP$ is an $n\times n$ permutation matrix that puts the columns of the $r\times r$ identity matrix $\bI_r$ into the correct positions, as shown above. Since the submatrix $\bI_r$ is nonsingular, the $r$ rows of $\bR$ are linearly independent, so the row rank of $\bR$ is $r$.
Firstly, by the definition of the CR decomposition, the $r$ columns of $\bC$ are $r$ linearly independent columns of $\bA$, so the column rank of $\bA$ is $r$. Further,
$\bullet$ Since $\bA=\bC\bR$, all rows of $\bA$ are combinations of the rows of $\bR$. That is, the row rank of $\bA$ is no larger than the row rank of $\bR$;
$\bullet$ From $\bA=\bC\bR$, we also have $(\bC^\top\bC)^{-1}\bC^\top\bC\bR = (\bC^\top\bC)^{-1}\bC^\top\bA$, that is, $\bR = (\bC^\top\bC)^{-1}\bC^\top\bA$; here $\bC^\top\bC$ is nonsingular since $\bC$ has full column rank $r$. Then all rows of $\bR$ are also combinations of the rows of $\bA$. That is, the row rank of $\bR$ is no larger than the row rank of $\bA$;
$\bullet$ By ``sandwiching", the row rank of $\bA$ is equal to the row rank of $\bR$ which is $r$.
Therefore, both the row rank and column rank of $\bA$ are equal to $r$ from which the result follows.
\end{proof}
\index{Rank decomposition}
\section{Rank Decomposition}
We previously mentioned that the CR decomposition is a special case of the rank decomposition. We now prove the existence of the rank decomposition rigorously in the following theorem.
\begin{theoremHigh}[Rank Decomposition]\label{theorem:rank-decomposition}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\underset{m\times n}{\bA }= \underset{m\times r}{\bD}\gap \underset{r\times n}{\bF},
$$
where $\bD \in \real^{m\times r}$ has rank $r$, and $\bF \in \real^{r\times n}$ also has rank $r$, i.e., $\bD,\bF$ have full rank $r$.
The storage for the decomposition is then reduced or potentially increased from $mn$ to $r(m+n)$.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:rank-decomposition}]
By ULV decomposition in Theorem~\ref{theorem:ulv-decomposition} (p.~\pageref{theorem:ulv-decomposition}), we can decompose $\bA$ by
$$
\bA = \bU \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV.
$$
Let $\bU_0 = \bU_{:,1:r}$ and $\bV_0 = \bV_{1:r,:}$, i.e., $\bU_0$ contains only the first $r$ columns of $\bU$, and $\bV_0$ contains only the first $r$ rows of $\bV$. Then, we still have $\bA = \bU_0 \bL\bV_0$, where $\bU_0 \in \real^{m\times r}$ and $\bV_0\in \real^{r\times n}$; this is also known as the reduced ULV decomposition. Setting either \{$\bD = \bU_0\bL$ and $\bF =\bV_0$\} or \{$\bD = \bU_0$ and $\bF =\bL\bV_0$\}, we find such a rank decomposition.
\end{proof}
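The construction in the proof can be mirrored numerically with the SVD, which is itself a special ULV decomposition with orthogonal outer factors and a diagonal middle matrix. A minimal sketch (the function name, tolerance, and example matrix are ours):

```python
import numpy as np

def rank_decomposition(A, tol=1e-10):
    """Factor A (rank r) as D (m x r) times F (r x n), built from the reduced SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol))        # numerical rank
    D = U[:, :r] * s[:r]            # absorb the singular values into D (D = U0 L)
    F = Vt[:r, :]                   # F = V0
    return D, F

A = np.array([[1., 3., 8.],
              [1., 2., 6.],
              [0., 1., 2.]])        # rank 2: col 3 = 2*(col 1) + 2*(col 2)
D, F = rank_decomposition(A)
```

Here both factors inherit full rank $r$ from the orthogonality of $\bU_0$ and $\bV_0$, matching the statement of the theorem.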
The rank decomposition is not unique. Even by elementary transformations, we have
$$
\bA =
\bE_1
\begin{bmatrix}
\bZ & \bzero \\
\bzero & \bzero
\end{bmatrix}
\bE_2,
$$
where $\bE_1 \in \real^{m\times m}$ and $\bE_2\in \real^{n\times n}$ represent elementary row and column operations, and $\bZ\in \real^{r\times r}$. The transformation is rather general, and there are many possible choices of $\bE_1,\bE_2,\bZ$. By a construction similar to the one in the proof above, each choice yields another rank decomposition.
Analogously, we can find such $\bD,\bF$ by the SVD, URV, CR, CUR, and many other decompositional algorithms. Nevertheless, the different rank decompositions are connected by the following lemma.
\begin{lemma}[Connection Between Rank Decompositions]\label{lemma:connection-rank-decom}
For any two rank decompositions of $\bA=\bD_1\bF_1=\bD_2\bF_2$, there exists a nonsingular matrix $\bP$ such that
$$
\bD_1 = \bD_2\bP
\qquad
\text{and}
\qquad
\bF_1 = \bP^{-1}\bF_2.
$$
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:connection-rank-decom}]
Since $\bD_1\bF_1=\bD_2\bF_2$, we have $\bD_1\bF_1\bF_1^\top=\bD_2\bF_2\bF_1^\top$. Since $rank(\bF_1\bF_1^\top)=rank(\bF_1)=r$, the square matrix $\bF_1\bF_1^\top$ has full rank and is thus nonsingular. This implies $\bD_1=\bD_2\bF_2\bF_1^\top(\bF_1\bF_1^\top)^{-1}$. Let $\bP=\bF_2\bF_1^\top(\bF_1\bF_1^\top)^{-1}$, so that $\bD_1=\bD_2\bP$; since $\bD_1$ has rank $r$, $\bP$ is nonsingular. Finally, $\bD_2(\bP\bF_1-\bF_2) = \bD_1\bF_1 - \bD_2\bF_2 = \bzero$ and $\bD_2$ has full column rank, so $\bP\bF_1=\bF_2$, i.e., $\bF_1 = \bP^{-1}\bF_2$.
\end{proof}
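The lemma can be checked numerically by comparing an SVD-based and a CR-style rank decomposition of the same matrix (the example matrix and the hand-computed RREF rows below are ours):

```python
import numpy as np

A = np.array([[1., 3., 8.],
              [1., 2., 6.],
              [0., 1., 2.]])                     # rank 2

# decomposition 1: from the reduced SVD
U, s, Vt = np.linalg.svd(A, full_matrices=False)
D1, F1 = U[:, :2] * s[:2], Vt[:2, :]

# decomposition 2: CR-style, pivot columns of A and the RREF without zero rows
D2 = A[:, :2]
F2 = np.array([[1., 0., 2.],
               [0., 1., 2.]])

# the connecting matrix P = F2 F1^T (F1 F1^T)^{-1} from the lemma
P = F2 @ F1.T @ np.linalg.inv(F1 @ F1.T)
```

The assertions $\bD_1 = \bD_2\bP$ and $\bF_1 = \bP^{-1}\bF_2$ then hold to floating-point accuracy.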
\index{Idempotent matrix}
\section{Application: Rank and Trace of an Idempotent Matrix}
The CR decomposition is quite useful for proving a result on the rank of an idempotent matrix. See also how it works for orthogonal projections in \citet{lu2021numerical, lu2021rigorous}.
\begin{lemma}[Rank and Trace of an Idempotent Matrix\index{Trace}]\label{lemma:rank-of-symmetric-idempotent2_tmp}
For any $n\times n$ idempotent matrix $\bA$ (i.e., $\bA^2 = \bA$), the rank of $\bA$ equals the trace of $\bA$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:rank-of-symmetric-idempotent2_tmp}]
Any $n\times n$ rank-$r$ matrix $\bA$ has CR decomposition $\bA = \bC\bR$, where $\bC\in\real^{n\times r}$ and $\bR\in \real^{r\times n}$ with $\bC, \bR$ having full rank $r$.
Then,
$$
\begin{aligned}
\bA^2 &= \bA, \\
\bC\bR\bC\bR &= \bC\bR, \\
\bR\bC\bR &=\bR, \\
\bR\bC &=\bI_r,
\end{aligned}
$$
where $\bI_r$ is the $r\times r$ identity matrix; the second line implies the third by left-multiplying with $(\bC^\top\bC)^{-1}\bC^\top$, and the third implies the fourth by right-multiplying with $\bR^\top(\bR\bR^\top)^{-1}$. Thus
$$
trace(\bA) = trace(\bC\bR) =trace(\bR\bC) = trace(\bI_r) = r,
$$
which equals the rank of $\bA$. The second equality above follows from the invariance of the trace under cyclic permutations.
\end{proof}
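A small numerical illustration of the lemma, using an orthogonal projector (which is idempotent) onto a two-dimensional column space; the matrix \texttt{X} below is an arbitrary example of ours:

```python
import numpy as np

# an idempotent matrix: the orthogonal projector onto the column space of X
X = np.array([[1., 0.],
              [1., 1.],
              [0., 1.]])
A = X @ np.linalg.inv(X.T @ X) @ X.T   # 3x3 projector: A @ A == A
```

Since the projector maps onto a 2-dimensional subspace, both its trace and its rank equal 2, as the lemma predicts.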
\part{Eigenvalue Problem}
\chapter{Eigenvalue and Jordan Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Eigenvalue and Jordan Decomposition}
\begin{theoremHigh}[Eigenvalue Decomposition]\label{theorem:eigenvalue-decomposition}
Any square matrix $\bA\in \real^{n\times n}$ with linearly independent eigenvectors can be factored as
$$
\bA = \bX\bLambda\bX^{-1},
$$
where $\bX$ contains the eigenvectors of $\bA$ as columns, and $\bLambda$ is a diagonal matrix $diag(\lambda_1, \lambda_2,$ $\ldots, \lambda_n)$ and $\lambda_1, \lambda_2, \ldots, \lambda_n$ are eigenvalues of $\bA$.
\end{theoremHigh}
The eigenvalue decomposition is also known as diagonalization of the matrix $\bA$. When no eigenvalue of $\bA$ is repeated, the eigenvectors are guaranteed to be linearly independent, and $\bA$ can be diagonalized. Note that without $n$ linearly independent eigenvectors, we cannot diagonalize. In Section~\ref{section:otherform-spectral} (p.~\pageref{section:otherform-spectral}), we will further discuss conditions under which the matrix has linearly independent eigenvectors.
\section{Existence of the Eigenvalue Decomposition}
\begin{proof}[of Theorem~\ref{theorem:eigenvalue-decomposition}]
Let $\bX=[\bx_1, \bx_2, \ldots, \bx_n]$ contain the linearly independent eigenvectors of $\bA$ as columns. Clearly, we have
$$
\bA\bx_1=\lambda_1\bx_1,\qquad \bA\bx_2=\lambda_2\bx_2, \qquad \ldots, \qquad\bA\bx_n=\lambda_n\bx_n.
$$
In the matrix form,
$$
\bA\bX = [\bA\bx_1, \bA\bx_2, \ldots, \bA\bx_n] = [\lambda_1\bx_1, \lambda_2\bx_2, \ldots, \lambda_n\bx_n] = \bX\bLambda.
$$
Since the eigenvectors are assumed to be linearly independent, $\bX$ has full rank and is invertible. We obtain
$$
\bA = \bX\bLambda \bX^{-1}.
$$
This completes the proof.
\end{proof}
We will discuss similar forms of the eigenvalue decomposition in the spectral decomposition section (Section~\ref{section:spectral-decomposition}, p.~\pageref{section:spectral-decomposition}), where the matrix $\bA$ is required to be symmetric, and $\bX$ is not only nonsingular but also orthogonal. Alternatively, the matrix $\bA$ may be required to be a \textit{simple matrix}, that is, one whose algebraic and geometric multiplicities agree, in which case $\bX$ is a nonsingular matrix that may not contain the eigenvectors of $\bA$. The decomposition also has a geometric meaning, which we will discuss in Section~\ref{section:coordinate-transformation} (p.~\pageref{section:coordinate-transformation}).
A matrix decomposition in the form of $\bA =\bX\bLambda\bX^{-1}$ has a nice property that we can compute the $m$-th power efficiently.
\begin{remark}[$m$-th Power]\label{remark:power-eigenvalue-decom}
The $m$-th power of $\bA$ is $\bA^m = \bX\bLambda^m\bX^{-1}$ if the matrix $\bA$ can be factored as $\bA=\bX\bLambda\bX^{-1}$.
\end{remark}
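A quick numerical illustration of the decomposition and of Remark~\ref{remark:power-eigenvalue-decom} (the example matrix is ours; its eigenvalues are 5 and 2, so it is diagonalizable):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])                 # distinct eigenvalues 5 and 2
lam, X = np.linalg.eig(A)                # columns of X are eigenvectors
Lam = np.diag(lam)

# A = X Lam X^{-1}, and the m-th power comes almost for free: A^m = X Lam^m X^{-1}
m = 5
Am = X @ np.diag(lam**m) @ np.linalg.inv(X)
```

Raising only the diagonal entries to the $m$-th power is what makes the decomposition attractive for computing matrix powers.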
We notice that the existence proof of the eigenvalue decomposition requires $\bA$ to have linearly independent eigenvectors. Under specific conditions, this requirement is automatically satisfied.
\begin{lemma}[Different Eigenvalues]\label{lemma:diff-eigenvec-decompo}
Suppose the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of $\bA\in \real^{n\times n}$ are all distinct. Then the corresponding eigenvectors are automatically linearly independent. In other words, any square matrix with distinct eigenvalues can be diagonalized.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:diff-eigenvec-decompo}]
Suppose the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ are all distinct, but the eigenvectors $\bx_1,\bx_2, \ldots, \bx_n$ are linearly dependent. Without loss of generality (taking a minimal linearly dependent set), suppose $\bx_1, \ldots, \bx_{n-1}$ are linearly independent and there exists a nonzero vector $\bc = [c_1,c_2,\ldots,c_{n-1}]^\top$ such that
$$
\bx_n = \sum_{i=1}^{n-1} c_i\bx_{i}.
$$
Then we have
$$
\begin{aligned}
\bA \bx_n &= \bA (\sum_{i=1}^{n-1} c_i\bx_{i}) \\
&=c_1\lambda_1 \bx_1 + c_2\lambda_2 \bx_2 + \ldots + c_{n-1}\lambda_{n-1}\bx_{n-1}.
\end{aligned}
$$
and
$$
\begin{aligned}
\bA \bx_n &= \lambda_n\bx_n\\
&=\lambda_n (c_1\bx_1 +c_2\bx_2+\ldots +c_{n-1} \bx_{n-1}).
\end{aligned}
$$
Combining the above two equations, we have
$$
\sum_{i=1}^{n-1} (\lambda_n - \lambda_i)c_i \bx_i = \bzero .
$$
This leads to a contradiction: since $\bx_1, \ldots, \bx_{n-1}$ are linearly independent and $\lambda_n \neq \lambda_i$ for all $i\in \{1,2,\ldots,n-1\}$, every $c_i$ must be zero, contradicting $\bc\neq\bzero$. The result follows.
\end{proof}
\begin{remark}[Limitation of Eigenvalue Decomposition]
The limitations of the eigenvalue decomposition are:

$\bullet$ The eigenvectors in $\bX$ are usually not orthogonal, and there are not always enough linearly independent eigenvectors (e.g., when some eigenvalues are repeated).

$\bullet$ Computing the eigenvalues and eigenvectors from $\bA\bx = \lambda\bx$ requires $\bA$ to be square; rectangular matrices cannot be diagonalized by the eigenvalue decomposition.
\end{remark}
\section{Jordan Decomposition}
In the eigenvalue decomposition, we suppose matrix $\bA$ has $n$ linearly independent eigenvectors. However, this is not true for all square matrices. We further introduce a generalized version of the eigenvalue decomposition, called the Jordan decomposition, named after Camille Jordan \citep{jordan1870traite}.
We first introduce the definition of Jordan blocks and Jordan form for the further description of Jordan decomposition.
\begin{definition}[Jordan Block]
An $m\times m$ upper triangular matrix $B(\lambda, m)$ is called a Jordan block provided that all $m$ diagonal elements equal the same eigenvalue $\lambda$ and all elements on the first superdiagonal are ones:
$$
B(\lambda, m)=
\begin{bmatrix}
\lambda & 1 & 0 & \ldots & 0 & 0 & 0 \\
0 & \lambda & 1 & \ldots & 0 & 0 & 0 \\
0 & 0 & \lambda & \ldots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \ldots & \lambda & 1& 0\\
0 & 0 & 0 & \ldots & 0 &\lambda & 1\\
0 & 0 & 0 & \ldots & 0 & 0 & \lambda
\end{bmatrix}_{m\times m}
$$
\end{definition}
\begin{definition}[Jordan Form\index{Jordan block}]
Given an $n\times n$ matrix $\bA$, a Jordan form $\bJ$ for $\bA$ is a block diagonal matrix defined as
$$
\bJ=diag(B(\lambda_1, m_1), B(\lambda_2, m_2), \ldots, B(\lambda_k, m_k)),
$$
where $\lambda_1, \lambda_2, \ldots, \lambda_k$ are eigenvalues of $\bA$ (duplicates possible) and $m_1+m_2+\ldots+m_k=n$.
\end{definition}
Then, the Jordan decomposition follows:
\begin{theoremHigh}[Jordan Decomposition]\label{theorem:jordan-decomposition}
Any square matrix $\bA\in \real^{n\times n}$ can be factored as
$$
\bA = \bX\bJ\bX^{-1},
$$
where $\bX$ is a nonsingular matrix containing the generalized eigenvectors of $\bA$ as columns, and $\bJ$ is a Jordan form matrix $diag(\bJ_1, \bJ_2, \ldots, \bJ_k)$ where
$$
\bJ_i =
\begin{bmatrix}
\lambda_i & 1 & 0 & \ldots & 0 & 0 & 0 \\
0 & \lambda_i & 1 & \ldots & 0 & 0 & 0 \\
0 & 0 & \lambda_i & \ldots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\
0 & 0 & 0 & \ldots & \lambda_i & 1& 0\\
0 & 0 & 0 & \ldots & 0 &\lambda_i & 1\\
0 & 0 & 0 & \ldots & 0 & 0 & \lambda_i
\end{bmatrix}_{m_i\times m_i}
$$
is an $m_i\times m_i$ square matrix with $m_i$ being the number of repetitions of eigenvalue $\lambda_i$ and $m_1+m_2+\ldots +m_k = n$. $\bJ_i$'s are referred to as Jordan blocks.
Further, nonsingular matrix $\bX$ is called the \textit{matrix of generalized eigenvectors} of $\bA$.
\end{theoremHigh}
As an example, a Jordan form can have the following structure:
$$
\begin{aligned}
\bJ&=diag(B(\lambda_1, m_1), B(\lambda_2, m_2), \ldots, B(\lambda_k, m_k))\\
&=
\begin{bmatrix}
\begin{bmatrix}
\lambda_1 & 1 & 0 \\
0 & \lambda_1 & 1 \\
0 & 0 & \lambda_1
\end{bmatrix} & & & &\\
& \begin{bmatrix}
\lambda_2
\end{bmatrix} & & &\\
& &\begin{bmatrix}
\lambda_3 & 1 \\
0 & \lambda_3
\end{bmatrix} & &\\
& & & \ddots & &\\
& & & & &\begin{bmatrix}
\lambda_k & 1 \\
0 & \lambda_k
\end{bmatrix}\\
\end{bmatrix}.
\end{aligned}
$$
\textbf{Decoding a Jordan Decomposition}: Note that zeros can appear on the first superdiagonal of $\bJ$ between blocks, and the diagonal of each block contains a single eigenvalue of $\bA$. To decode one block, without loss of generality, take the first block $\bJ_1$. We show columns $1, 2, \ldots, m_1$ of $\bA\bX = \bX\bJ$ with $\bX=[\bx_1, \bx_2, \ldots, \bx_n]$:
$$
\begin{aligned}
\bA\bx_1 &= \lambda_1 \bx_1 \\
\bA\bx_2 &= \lambda_1 \bx_2 + \bx_1 \\
\vdots &= \vdots \\
\bA\bx_{m_1} &= \lambda_1 \bx_{m_1} + \bx_{m_1-1}.
\end{aligned}
$$
For more details about Jordan decomposition, please refer to \citet{gohberg1996simple, hales1999jordan}.
The Jordan decomposition is not particularly interesting in practice, as it is extremely sensitive to perturbation: even the smallest random change can make a matrix diagonalizable \citep{van2020advanced}. As a result, hardly any numerical software library or tool computes it. Moreover, the proof takes dozens of pages to discuss; for this reason, we leave it to interested readers.
\section{Application: Computing Fibonacci Numbers}
We use the eigenvalue decomposition to compute Fibonacci numbers. This example is drawn from \citet{strang1993introduction}. Every new Fibonacci number $F_{k+2}$ is the sum of the two previous Fibonacci numbers, $F_{k+1}+F_{k}$. The sequence is $0, 1, 1, 2, 3, 5, 8, \ldots$. Now, the problem is: what is the value of $F_{100}$?
The eigenvalue decomposition can help find the general formula of the sequence.
\index{Fibonacci number}
\index{General formula of a sequence}
Let $\bu_{k}=\begin{bmatrix}
F_{k+1}\\
F_k
\end{bmatrix}$.
Then $\bu_{k+1}=\begin{bmatrix}
F_{k+2}\\
F_{k+1}
\end{bmatrix}=
\begin{bmatrix}
1&1\\
1&0
\end{bmatrix}
\bu_k
$ by the rule that $F_{k+2}=F_{k+1}+F_k$ and $F_{k+1}=F_{k+1}$.
Let $\bA=\begin{bmatrix}
1&1\\
1&0
\end{bmatrix}$
, we then have the general formula $\bu_{100} = \bA^{100}\bu_0$ where $\bu_0=
\begin{bmatrix}
1\\
0
\end{bmatrix}
$.
We will see in Lemma~\ref{lemma:determinant-intermezzo} (p.~\pageref{lemma:determinant-intermezzo}) that $\det(\bA-\lambda\bI)=0$, where $\lambda$ is an eigenvalue of $\bA$. A simple calculation shows that $\det(\bA-\lambda\bI) = \lambda^2-\lambda-1=0$ and
$$
\lambda_1 = \frac{1+\sqrt{5}}{2}, \qquad \lambda_2 = \frac{1-\sqrt{5}}{2}.
$$
The corresponding eigenvectors are
$$
\bx_1 =
\begin{bmatrix}
\lambda_1\\
1
\end{bmatrix}, \qquad
\bx_2 =
\begin{bmatrix}
\lambda_2\\
1
\end{bmatrix}.
$$
By Remark~\ref{remark:power-eigenvalue-decom} (p.~\pageref{remark:power-eigenvalue-decom}), $\bA^{100} = \bX\bLambda^{100}\bX^{-1} = \bX
\begin{bmatrix}
\lambda_1^{100}&0\\
0&\lambda_2^{100}
\end{bmatrix}\bX^{-1}$ where $\bX^{-1}$ can be easily calculated as $\bX^{-1} =
\begin{bmatrix}
\frac{1}{\lambda_1-\lambda_2} & \frac{-\lambda_2}{\lambda_1-\lambda_2} \\
-\frac{1}{\lambda_1-\lambda_2} & \frac{\lambda_1}{\lambda_1-\lambda_2}
\end{bmatrix}
=\begin{bmatrix}
\frac{\sqrt{5}}{5} & \frac{5-\sqrt{5}}{10} \\
-\frac{\sqrt{5}}{5} & \frac{5+\sqrt{5}}{10}
\end{bmatrix}$. We notice that $\bu_{100} = \bA^{100}\bu_0$ is just the first column of $\bA^{100}$ which is
$$
\bu_{100} =
\begin{bmatrix}
F_{101}\\
F_{100}
\end{bmatrix}=
\begin{bmatrix}
\frac{\lambda_1^{101}-\lambda_2^{101}}{\lambda_1-\lambda_2}\\
\frac{\lambda_1^{100}-\lambda_2^{100}}{\lambda_1-\lambda_2}
\end{bmatrix}.
$$
A simple check on the calculation gives $F_{100}=354{,}224{,}848{,}179{,}261{,}915{,}075 \approx 3.5422\times 10^{20}$. Or more generally,
$$
\bu_{K} =
\begin{bmatrix}
F_{K+1}\\
F_{K}
\end{bmatrix}=
\begin{bmatrix}
\frac{\lambda_1^{K+1}-\lambda_2^{K+1}}{\lambda_1-\lambda_2}\\
\frac{\lambda_1^{K}-\lambda_2^{K}}{\lambda_1-\lambda_2}
\end{bmatrix},
$$
where the general form of $F_K$ is given by $F_K=\frac{\lambda_1^{K}-\lambda_2^{K}}{\lambda_1-\lambda_2}$.
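The closed form above can be checked numerically. A minimal sketch comparing Binet's formula against the exact integer recurrence (function names are ours); note that the closed form evaluated in double precision is exact only up to roughly $F_{70}$, after which rounding error exceeds $1/2$:

```python
import math

def fib_closed(k):
    """Binet's formula F_k = (lam1^k - lam2^k) / (lam1 - lam2)."""
    lam1 = (1 + math.sqrt(5)) / 2
    lam2 = (1 - math.sqrt(5)) / 2
    return round((lam1**k - lam2**k) / (lam1 - lam2))

def fib_iter(k):
    """Exact integer recurrence F_{k+2} = F_{k+1} + F_k."""
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a
```

For very large indices such as $F_{100}$, the exact integer recurrence (or exact rational matrix powers) should be used instead of floating-point arithmetic.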
\paragraph{Extension to general 2 by 2 matrices}
The above matrix $\begin{bmatrix}
1 & 1 \\ 1 & 0
\end{bmatrix}$ is special in that it has two linearly independent eigenvectors, so it admits an eigenvalue decomposition. However, it is also interesting to compute the $n$-th power of a general 2 by 2 matrix, say
$$
\bA=
\begin{bmatrix}
a & b \\ c & d
\end{bmatrix}.
$$
Suppose $\alpha, \beta$ are the eigenvalues of $\bA$, so that $\alpha, \beta$ are the two roots of the characteristic polynomial
$$
\det(\bA-\lambda\bI)= \lambda^2 + a_1 \lambda + a_0 = 0
\leadto \alpha, \beta = \frac{-a_1 \pm \sqrt{a_1^2 - 4a_0}}{2}.
$$
This reveals that $a_1 = -(\alpha+\beta)$ and $a_0=\alpha\beta$. By the \textit{Cayley-Hamilton theorem}, a matrix satisfies its own characteristic equation:
$$
\bA^2 + a_1 \bA + a_0 \bI = \bzero,
$$
so that
\begin{equation}\label{eqiation:2by2-cayley}
\bA^2 - (\alpha+\beta)\bA + \alpha\beta\bI = \bzero.
\end{equation}
Following the tricks in \citet{Williams1992npower}, we define matrices $\bX,\bY,\bZ$ such that
$$
\left\{
\begin{aligned}
\bX &= \frac{\bA-\beta\bI}{\alpha-\beta}, \gap \bY = \frac{\bA-\alpha\bI}{\beta-\alpha}, \gap &\text{if $\alpha\neq \beta$;}\\
\bZ &= \bA-\alpha\bI, &\text{if $\alpha=\beta$,}
\end{aligned}
\right.
$$
where $\bA = \alpha\bX + \beta\bY$, and
we have properties from Equation~\eqref{eqiation:2by2-cayley}, for any $k\geq 2$:
$$
\left\{
\begin{aligned}
\bX^k &= \bX, \gap \bX\bY=\bY\bX=\bzero, \gap \bY^k=\bY, \gap &\text{if $\alpha\neq \beta$;}\\
\bZ^k &= \bzero, &\text{if $\alpha=\beta$.}
\end{aligned}
\right.
$$
Therefore,
\begin{equation}\label{equation:genera2by2power}
\bA^n=
\left\{
\begin{aligned}
(\alpha\bX+\beta\bY)^n &= \alpha^n\bX+ \beta^n\bY= \alpha^n \left(\frac{\bA-\beta\bI}{\alpha-\beta}\right)+\beta^n\left(\frac{\bA-\alpha\bI}{\beta-\alpha}\right), \gap &\text{if $\alpha\neq \beta$;}\\
(\bZ+\alpha\bI)^n &= n\alpha^{n-1}\bZ + \alpha^n\bI=\alpha^{n-1}\left( n\bA - \alpha(n-1)\bI \right), &\text{if $\alpha=\beta$.}
\end{aligned}
\right.
\end{equation}
Hence, if matrix $\bA$ is nonsingular (i.e., $\alpha\neq0$ and $\beta\neq 0$), Equation~\eqref{equation:genera2by2power} holds for all integer values of $n$; if matrix $\bA$ is real but its eigenvalues are complex with some power of them real, say $\alpha^m=\beta^m=f$ is real, then we have $\bA^m = f\bI$.
The $n$-th power of a general matrix yields the general term of any such two-term linear recurrence, e.g.,
$$
\bu_{k+1} =
\begin{bmatrix}
2 & 3 \\2 & 0
\end{bmatrix}
\bu_k.
$$
We shall not repeat the details here.
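The case analysis of Equation~\eqref{equation:genera2by2power} can be sketched in code (the function name and test matrices are ours; real eigenvalues are assumed):

```python
import numpy as np

def power_2x2(A, n):
    """A^n for a 2x2 matrix via the alpha/beta case split, assuming real eigenvalues."""
    tr = A[0, 0] + A[1, 1]
    det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    disc = tr * tr - 4 * det              # discriminant of lambda^2 - tr*lambda + det
    I = np.eye(2)
    alpha = (tr + np.sqrt(disc)) / 2
    beta = (tr - np.sqrt(disc)) / 2
    if not np.isclose(alpha, beta):       # distinct eigenvalues: A^n = a^n X + b^n Y
        X = (A - beta * I) / (alpha - beta)
        Y = (A - alpha * I) / (beta - alpha)
        return alpha**n * X + beta**n * Y
    # repeated eigenvalue: A^n = a^{n-1} (n A - a (n-1) I)
    return alpha**(n - 1) * (n * A - alpha * (n - 1) * I)

A = np.array([[2., 3.],
              [2., 0.]])                  # the recurrence matrix from the text
```

Both branches agree with \texttt{np.linalg.matrix\_power}, including the repeated-eigenvalue case, where the nilpotent term $n\alpha^{n-1}\bZ$ appears.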
\chapter{Schur Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Schur Decomposition}
\begin{theoremHigh}[Schur Decomposition]\label{theorem:schur-decomposition}
Any square matrix $\bA\in \real^{n\times n}$ with real eigenvalues can be factored as
$$
\bA = \bQ\bU\bQ^\top,
$$
where $\bQ$ is an orthogonal matrix, and $\bU$ is an upper triangular matrix. That is, every square matrix with real eigenvalues can be triangularized.
\end{theoremHigh}
\paragraph{A close look at Schur decomposition} The first columns of $\bA\bQ$ and $\bQ\bU$ are $\bA\bq_1$ and $\bU_{11}\bq_1$, respectively. Hence, $\bU_{11}$ and $\bq_1$ are an eigenvalue and a corresponding eigenvector of $\bA$. The other columns of $\bQ$, however, need not be eigenvectors of $\bA$.
\paragraph{Schur decomposition for symmetric matrices} A symmetric matrix $\bA=\bA^\top$ gives $\bQ\bU\bQ^\top = \bQ\bU^\top\bQ^\top$, so $\bU=\bU^\top$ and $\bU$ must be a diagonal matrix. This diagonal matrix actually contains the eigenvalues of $\bA$, and all the columns of $\bQ$ are eigenvectors of $\bA$. We conclude that all symmetric matrices are diagonalizable, even with repeated eigenvalues.
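For the symmetric special case, the Schur decomposition coincides with the orthogonal eigendecomposition, which can be illustrated numerically (the example matrix is ours):

```python
import numpy as np

# For a symmetric matrix, the Schur factor U is diagonal, so Q U Q^T is exactly
# the orthogonal eigendecomposition computed by eigh.
A = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])
lam, Q = np.linalg.eigh(A)          # orthogonal Q, real eigenvalues lam
U = np.diag(lam)
```

For nonsymmetric matrices, a general (real) Schur form with an upper triangular $\bU$ is available in, e.g., SciPy's \texttt{scipy.linalg.schur}.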
\index{Determinant}
\section{Existence of the Schur Decomposition}
To prove Theorem~\ref{theorem:schur-decomposition}, we need to use the following lemmas.
\begin{lemma}[Determinant Intermezzo]\label{lemma:determinant-intermezzo}
We have the following properties for determinant of matrices:
$\bullet$ The determinant of a product of two matrices is $\det(\bA\bB)=\det (\bA)\det(\bB)$;
$\bullet$ The determinant of the transpose is $\det(\bA^\top) = \det(\bA)$;
$\bullet$ Suppose matrix $\bA$ has eigenvalue $\lambda$, then $\det(\bA-\lambda\bI) =0$;
$\bullet$ The determinant of the identity matrix is $1$;
$\bullet$ Determinant of an orthogonal matrix $\bQ$:
$$
\det(\bQ) = \det(\bQ^\top) = \pm 1, \qquad \text{since } \det(\bQ^\top)\det(\bQ)=\det(\bQ^\top\bQ)=\det(\bI)=1;
$$
$\bullet$ For any square matrix $\bA$ and any orthogonal matrix $\bQ$, we have:
$$
\det(\bA) = \det(\bQ^\top) \det(\bA)\det(\bQ) =\det(\bQ^\top\bA\bQ);
$$
\end{lemma}
\begin{lemma}[Submatrix with Same Eigenvalue]\label{lemma:submatrix-same-eigenvalue}
Suppose square matrix $\bA_{k+1}\in \real^{(k+1)\times (k+1)}$ has real eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_{k+1}$. Then we can construct a $k\times k$ matrix $\bA_{k}$ with eigenvalues $\lambda_2, \lambda_3, \ldots, \lambda_{k+1}$ by
$$
\bA_{k} =
\begin{bmatrix}
-\bp_2^\top- \\
-\bp_3^\top- \\
\vdots \\
-\bp_{k+1}^\top-
\end{bmatrix}
\bA_{k+1}
\begin{bmatrix}
\bp_2 & \bp_3 &\ldots &\bp_{k+1}
\end{bmatrix},
$$
where $\bp_1$ is a unit-norm eigenvector of $\bA_{k+1}$ corresponding to the eigenvalue $\lambda_1$, and $\bp_2, \bp_3, \ldots, \bp_{k+1}$ are any orthonormal vectors orthogonal to $\bp_1$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:submatrix-same-eigenvalue}]
Let $\bP_{k+1} = [\bp_1, \bp_2, \ldots, \bp_{k+1}]$. Then $\bP_{k+1}^\top\bP_{k+1}=\bI$, and
$$
\bP_{k+1}^\top \bA_{k+1} \bP_{k+1} =
\begin{bmatrix}
\lambda_1 & \bzero \\
\bzero & \bA_{k}
\end{bmatrix}.
$$
For any eigenvalue $\lambda \in \{\lambda_2, \lambda_3, \ldots, \lambda_{k+1}\}$, by Lemma~\ref{lemma:determinant-intermezzo}, we have
$$
\begin{aligned}
\det(\bA_{k+1} -\lambda\bI) &= \det(\bP_{k+1}^\top (\bA_{k+1}-\lambda\bI) \bP_{k+1}) \\
&=\det(\bP_{k+1}^\top \bA_{k+1}\bP_{k+1} - \lambda\bP_{k+1}^\top\bP_{k+1}) \\
&= \det\left(
\begin{bmatrix}
\lambda_1-\lambda & \bzero \\
\bzero & \bA_k - \lambda\bI
\end{bmatrix}
\right)\\
&=(\lambda_1-\lambda)\det(\bA_k-\lambda\bI).
\end{aligned}
$$
where the last equality follows from the fact that if a matrix $\bM$ has the block form $\bM=\begin{bmatrix}
\bE & \bF \\
\bG & \bH
\end{bmatrix}$ with $\bE$ nonsingular, then $\det(\bM) = \det(\bE)\det(\bH-\bG\bE^{-1}\bF)$.
Since $\lambda$ is an eigenvalue of $\bA$ and $\lambda \neq \lambda_1$, then $\det(\bA_{k+1} -\lambda\bI) = (\lambda_1-\lambda)\det(\bA_{k}-\lambda\bI)=0$ means $\lambda$ is also an eigenvalue of $\bA_{k}$.
\end{proof}
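As a numerical sanity check, the construction in the lemma can be verified with NumPy. The sketch below is ours, not part of the formal development; a symmetric test matrix is used so that the eigenvalues are guaranteed real, and the orthonormal complement of $\bp_1$ is obtained via a QR factorization.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # A_{k+1} is 5 x 5, so A_k is 4 x 4

# A symmetric matrix guarantees real eigenvalues.
A = rng.standard_normal((n, n))
A = (A + A.T) / 2

lam, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
p1 = vecs[:, 0]                 # unit-norm eigenvector for lam[0]

# Complete p1 to an orthonormal basis: QR of [p1 | random columns]
# yields an orthogonal P whose first column is +-p1.
P, _ = np.linalg.qr(np.column_stack([p1, rng.standard_normal((n, n - 1))]))
P2 = P[:, 1:]                   # the vectors p_2, ..., p_{k+1}

Ak = P2.T @ A @ P2              # the k x k matrix of the lemma
# Its eigenvalues are the remaining eigenvalues lam[1:].
assert np.allclose(np.sort(np.linalg.eigvalsh(Ak)), np.sort(lam[1:]))
```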
We then prove the existence of the Schur decomposition by induction.
\begin{proof}[\textbf{of Theorem~\ref{theorem:schur-decomposition}: Existence of Schur Decomposition}]
We proceed by induction on $n$. The theorem is trivial when $n=1$ by setting $Q=1$ and $U=A$. Suppose the theorem is true for $n=k$ for some $k\geq 1$. If we prove the theorem is also true for $n=k+1$, then we complete the proof.
Let $\lambda_1, \lambda_2, \ldots, \lambda_{k+1}$ denote the (real) eigenvalues of $\bA_{k+1}$, and let $\bP_{k+1} = [\bp_1, \bp_2, \ldots, \bp_{k+1}]$ be the orthogonal matrix constructed in Lemma~\ref{lemma:submatrix-same-eigenvalue}, where $\bp_1$ is a unit-norm eigenvector of $\bA_{k+1}$ corresponding to the eigenvalue $\lambda_1$, and $\bp_2, \ldots, \bp_{k+1}$ are orthonormal vectors orthogonal to $\bp_1$. By Lemma~\ref{lemma:submatrix-same-eigenvalue}, the $k\times k$ matrix $\bA_{k} = [\bp_2, \ldots, \bp_{k+1}]^\top \bA_{k+1} [\bp_2, \ldots, \bp_{k+1}]$ has eigenvalues $\lambda_2, \lambda_3, \ldots, \lambda_{k+1}$, and by the induction hypothesis it admits a Schur decomposition $\bA_k =\bQ_k \bU_k \bQ_k^\top$. Moreover, as in the proof of Lemma~\ref{lemma:submatrix-same-eigenvalue}, we have
$$
\bP_{k+1}^\top \bA_{k+1} \bP_{k+1} =
\begin{bmatrix}
\lambda_1 &\bb^\top \\
\bzero & \bA_k
\end{bmatrix} \qquad
\text{and} \qquad
\bA_{k+1} \bP_{k+1} =
\bP_{k+1}
\begin{bmatrix}
\lambda_1 &\bb^\top \\
\bzero & \bA_k
\end{bmatrix},
$$
where $\bb^\top = \bp_1^\top \bA_{k+1} [\bp_2, \ldots, \bp_{k+1}]$ (in particular, $\bb=\bzero$ when $\bA_{k+1}$ is symmetric).
Let
$
\bQ_{k+1} = \bP_{k+1}
\begin{bmatrix}
1 &\bzero \\
\bzero & \bQ_k
\end{bmatrix}.
$
Then, it follows that
$$
\begin{aligned}
\bA_{k+1} \bQ_{k+1} &=
\bA_{k+1}
\bP_{k+1}
\begin{bmatrix}
1 &\bzero \\
\bzero & \bQ_k
\end{bmatrix}\\
&=
\bP_{k+1}
\begin{bmatrix}
\lambda_1 &\bb^\top \\
\bzero & \bA_k
\end{bmatrix}
\begin{bmatrix}
1 &\bzero \\
\bzero & \bQ_k
\end{bmatrix} \\
&=
\bP_{k+1}
\begin{bmatrix}
\lambda_1 & \bb^\top\bQ_k \\
\bzero & \bA_k\bQ_k
\end{bmatrix}\\
&=
\bP_{k+1}
\begin{bmatrix}
\lambda_1 & \bb^\top\bQ_k \\
\bzero & \bQ_k \bU_k
\end{bmatrix}\qquad &\text{(by the induction hypothesis for $n=k$)} \\
&=\bP_{k+1}
\begin{bmatrix}
1 &\bzero \\
\bzero & \bQ_k
\end{bmatrix}
\begin{bmatrix}
\lambda_1 &\bb^\top\bQ_k \\
\bzero & \bU_k
\end{bmatrix}\\
&=\bQ_{k+1}\bU_{k+1}. \qquad &\text{($\bU_{k+1} = \begin{bmatrix}
\lambda_1 &\bb^\top\bQ_k \\
\bzero & \bU_k
\end{bmatrix}$)}
\end{aligned}
$$
We then have $\bA_{k+1} = \bQ_{k+1}\bU_{k+1}\bQ_{k+1}^\top$, where $\bU_{k+1}$ is an upper triangular matrix since $\bU_k$ is, and $\bQ_{k+1}$ is an orthogonal matrix since $\bP_{k+1}$ and
$\begin{bmatrix}
1 &\bzero \\
\bzero & \bQ_k
\end{bmatrix}$ are both orthogonal matrices.
\end{proof}
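The induction above translates directly into a recursive construction. The following sketch (our own helper \texttt{schur\_by\_induction}, assuming all eigenvalues of the input are real; a symmetric test matrix guarantees this) mirrors the proof step by step: find a unit eigenvector, complete it to an orthogonal $\bP$, recurse on the trailing block, and assemble $\bQ$ and $\bU$.

```python
import numpy as np

def schur_by_induction(A, rng):
    """Return Q, U with A = Q U Q^T, mirroring the inductive proof.

    Assumes all eigenvalues of A are real (e.g., A symmetric).
    """
    n = A.shape[0]
    if n == 1:                                   # base case: Q = 1, U = A
        return np.eye(1), A.copy()
    _, vecs = np.linalg.eig(A)
    p1 = np.real(vecs[:, 0])
    p1 /= np.linalg.norm(p1)                     # unit-norm eigenvector
    # Orthonormal basis whose first column is +-p1 (as in the lemma).
    P, _ = np.linalg.qr(np.column_stack([p1, rng.standard_normal((n, n - 1))]))
    M = P.T @ A @ P                              # [[lambda_1, b^T], [~0, A_k]]
    Qk, Uk = schur_by_induction(M[1:, 1:], rng)  # induction hypothesis on A_k
    Q = P @ np.block([[np.ones((1, 1)), np.zeros((1, n - 1))],
                      [np.zeros((n - 1, 1)), Qk]])
    U = np.block([[M[:1, :1], M[:1, 1:] @ Qk],
                  [np.zeros((n - 1, 1)), Uk]])
    return Q, U

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                                # symmetric => real eigenvalues
Q, U = schur_by_induction(A, rng)
assert np.allclose(Q @ U @ Q.T, A)               # A = Q U Q^T
assert np.allclose(Q.T @ Q, np.eye(5))           # Q orthogonal
assert np.allclose(np.tril(U, -1), 0)            # U upper triangular
```

For a symmetric input, $\bU$ comes out (numerically) diagonal, which is the spectral decomposition as a special case.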
\section{Other Forms of the Schur Decomposition}\label{section:other-form-schur-decom}
From the proof of the Schur decomposition, the upper triangular matrix $\bU_{k+1}$ is obtained by appending the eigenvalue $\lambda_1$ to the diagonal of $\bU_k$, so the values on the diagonal of the final upper triangular matrix are always the eigenvalues. Therefore, we can decompose the upper triangular matrix into two parts.
\begin{corollaryHigh}[Form 2 of Schur Decomposition]
Any square matrix $\bA\in \real^{n\times n}$ with real eigenvalues can be factored as
$$
\bQ^\top\bA\bQ = \bLambda +\bT, \qquad \textbf{or} \qquad \bA = \bQ(\bLambda +\bT)\bQ^\top,
$$
where $\bQ$ is an orthogonal matrix, $\bLambda=\diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$ is a diagonal matrix containing the eigenvalues of $\bA$, and $\bT$ is a \textit{strictly upper triangular} matrix.
\end{corollaryHigh}
A strictly upper triangular matrix is an upper triangular matrix having 0's along the diagonal as well as in the lower portion. An alternative argument for this decomposition is that $\bA$ and $\bU$ (where $\bU = \bQ^\top\bA\bQ$) are similar matrices, so they have the same eigenvalues (Lemma~\ref{lemma:eigenvalue-similar-matrices}, p.~\pageref{lemma:eigenvalue-similar-matrices}), and the eigenvalues of any upper triangular matrix lie on its diagonal.
To see the latter claim, let $\bR \in \real^{n\times n}$ be any upper triangular matrix with diagonal values $r_{ii}$ for all $i\in \{1,2,\ldots,n\}$. Then $\bR-\lambda\bI$ is also upper triangular, and the determinant of an upper triangular matrix is the product of its diagonal entries, so
$$
\det(\bR - \lambda\bI) = \prod_{i=1}^{n} (r_{ii}-\lambda),
$$
whose roots, i.e., the eigenvalues of $\bR$, are exactly the diagonal values $r_{ii}$.
So we can decompose $\bU$ into $\bLambda$ and $\bT$.
A final observation on the second form of the Schur decomposition is as follows. From $\bA\bQ = \bQ(\bLambda +\bT)$, it follows that
$$
\bA \bq_k = \lambda_k\bq_k + \sum_{i=1}^{k-1}t_{ik}\bq_i,
$$
where $t_{ik}$ is the ($i,k$)-th entry of $\bT$. This form is quite close to the eigenvalue decomposition $\bA\bq_k=\lambda_k\bq_k$. Here, however, the columns $\bq_k$ form an orthonormal basis, and each $\bA\bq_k$ involves not only $\bq_k$ itself but also the preceding columns $\bq_1, \ldots, \bq_{k-1}$.
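This column-by-column relation can be checked numerically with SciPy's real Schur routine (\texttt{scipy.linalg.schur}). The test matrix below is our own construction, chosen non-symmetric but with real, well-separated eigenvalues so that the real Schur form is truly triangular (no $2\times 2$ blocks).

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
n = 5
# Non-symmetric test matrix with real, well-separated eigenvalues:
# similar to an upper triangular R with diagonal 1, 2, 4, 8, 16.
R = np.triu(rng.standard_normal((n, n)), 1) + np.diag(2.0 ** np.arange(n))
S = rng.standard_normal((n, n))
A = S @ R @ np.linalg.inv(S)

U, Q = schur(A)              # A = Q U Q^T, U upper triangular here
Lam = np.diag(np.diag(U))    # eigenvalues on the diagonal
T = U - Lam                  # strictly upper triangular part

assert np.allclose(Q @ (Lam + T) @ Q.T, A)
# Column by column: A q_k = lambda_k q_k + sum_{i<k} t_{ik} q_i.
for k in range(n):
    lhs = A @ Q[:, k]
    rhs = U[k, k] * Q[:, k] + Q[:, :k] @ T[:k, k]
    assert np.allclose(lhs, rhs)
```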
\part{Reduction to Hessenberg, Tridiagonal, and Bidiagonal Form}
\chapter{Hessenberg Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section*{Preliminary}
In real applications, we often want to factor a matrix $\bA$ as $\bA =\bQ\bLambda\bQ^\top$, where $\bQ$ is orthogonal and $\bLambda$ is diagonal or upper triangular, e.g., eigenvalue analysis via the Schur decomposition and principal component analysis (PCA) via the spectral decomposition. This can be computed via a sequence of \textit{orthogonal similarity transformations}:
$$
\underbrace{\bQ_k^\top\ldots \bQ_2^\top \bQ_1^\top}_{\bQ^\top} \bA \underbrace{\bQ_1\bQ_2\ldots\bQ_k}_{\bQ}
$$
which converges to $\bLambda$. However, such a transformation is hard to construct directly in practice, for example, via the Householder reflectors. Following the QR decomposition via the Householder example,\footnote{where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed. } a first attempt at the sequence of orthogonal similarity transformations via the Householder reflectors would be:
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bH_1\times }{\rightarrow}
\begin{sbmatrix}{\bH_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times \bH_1^\top }{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix},
\end{aligned}
$$
where the left Householder introduces zeros in the first column below the main diagonal (see Section~\ref{section:qr-via-householder}, p.~\pageref{section:qr-via-householder}), and unfortunately the right Householder will destroy the zeros introduced by the left Householder.
However, we can be less ambitious and split the algorithm into two phases: the first phase transforms the matrix into a Hessenberg matrix (Definition~\ref{definition:upper-hessenbert}, p.~\pageref{definition:upper-hessenbert}) or a tridiagonal matrix (Definition~\ref{definition:tridiagonal-hessenbert}, p.~\pageref{definition:tridiagonal-hessenbert}), and the second phase transforms this intermediate result into the form we ultimately want:
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bH_1\times }{\rightarrow}
\begin{sbmatrix}{\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times \bH_1^\top }{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix},
\end{aligned}
$$
where the left Householder will not influence the first row, and the right Householder will not influence the first column.
A phase 2 algorithm\footnote{which is usually an iterative algorithm.} to find the triangular matrix is illustrated as follows:
$$
\begin{aligned}
\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bH_1^\top\bH_2^\top \bH_3^\top}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\text{Phase 2} }{\longrightarrow}
\begin{sbmatrix}{\bLambda}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & 0 & \bm{0} & \bm{\boxtimes}
\end{sbmatrix}
\end{aligned}
$$
From the discussion above, to compute the spectral decomposition, Schur decomposition, or singular value decomposition (SVD), we usually reach a compromise to calculate the Hessenberg, tridiagonal, or bidiagonal form in the first phase and leave the second phase to finish the rest \citep{van2012families, van2014restructuring, trefethen1997numerical}.
\section{Hessenberg Decomposition}
We first give the rigorous definition of the upper Hessenberg matrix.
\begin{definition}[Upper Hessenberg Matrix\index{Hessenberg matrix}]\label{definition:upper-hessenbert}
An \textbf{upper Hessenberg matrix} is a square matrix where all the entries below the first subdiagonal (i.e., the diagonal immediately below the \textbf{main diagonal}, a.k.a. the \textbf{lower subdiagonal}) are zero. Similarly, a lower Hessenberg matrix is a square matrix where all the entries above the first superdiagonal (i.e., the diagonal immediately above the main diagonal) are zero.
The definition of the upper Hessenberg matrix can also be extended to rectangular matrices, where the form can be inferred from the context.
In matrix language, let $\bH\in \real^{n\times n}$ with entry ($i,j$) denoted by $h_{ij}$ for all $i,j\in \{1,2,\ldots, n\}$. Then $\bH$ is an upper Hessenberg matrix if $h_{ij}=0$ for all $i\geq j+2$.
Moreover, $\bH$ is \textbf{unreduced} if none of its subdiagonal entries vanishes, i.e., if $h_{i+1, i}\neq 0$ for all $i\in \{1,2,\ldots, n-1\}$.
\end{definition}
Taking a $5\times 5$ matrix as an example, the entries below the lower subdiagonal are zero in the upper Hessenberg matrix:
$$
\begin{sbmatrix}{possibly\,\, unreduced}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
\qquad
\text{or}
\qquad
\begin{sbmatrix}{reduced}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & \textcolor{blue}{0} & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}.
$$
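For concreteness, the two defining conditions can be written as small NumPy predicates (the helper names \texttt{is\_upper\_hessenberg} and \texttt{is\_unreduced} are ours):

```python
import numpy as np

def is_upper_hessenberg(H, tol=0.0):
    """True if all entries below the first subdiagonal (i >= j + 2) are zero."""
    return bool(np.all(np.abs(np.tril(H, -2)) <= tol))

def is_unreduced(H):
    """True if no entry on the lower subdiagonal h_{i+1,i} vanishes."""
    return bool(np.all(np.diag(H, -1) != 0))

H = np.triu(np.ones((5, 5)), -1)   # ones on and above the lower subdiagonal
assert is_upper_hessenberg(H) and is_unreduced(H)
H[3, 2] = 0                        # zero out one subdiagonal entry
assert is_upper_hessenberg(H) and not is_unreduced(H)
```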
Then we have the following Hessenberg decomposition:
\begin{theoremHigh}[Hessenberg Decomposition]\label{theorem:hessenberg-decom}
Every $n\times n$ square matrix $\bA$ can be factored as
$$
\bA = \bQ\bH\bQ^\top \qquad \text{or} \qquad \bH = \bQ^\top \bA\bQ,
$$
where $\bH$ is an upper Hessenberg matrix, and $\bQ$ is an orthogonal matrix.
\end{theoremHigh}
It is not hard to see that if $\bA$ admits the Hessenberg decomposition $\bA = \bQ\bH\bQ^\top$, then a lower Hessenberg decomposition of $\bA^\top$ is given by $\bA^\top = \bQ\bH^\top\bQ^\top$. The Hessenberg decomposition resembles the QR decomposition in that both reduce a matrix to a sparse form whose lower part is zero.
\begin{remark}[Why Hessenberg Decomposition]
We will see that the zeros introduced into $\bH$ from $\bA$ are produced by the left orthogonal matrix $\bQ^\top$ in $\bH=\bQ^\top\bA\bQ$ (just as in the QR decomposition), while the right orthogonal matrix $\bQ$ does not transform the matrix into any better or simpler form. Then why do we want the Hessenberg decomposition rather than just a QR decomposition, which has an even simpler structure with zeros in the lower subdiagonal as well? As mentioned above, the answer is that the Hessenberg decomposition is usually used by other algorithms as a phase 1 step toward a decomposition that sandwiches the matrix between two orthogonal matrices, e.g., the SVD, the UTV decomposition, and so on. If we employ an aggressive algorithm that also introduces zeros in the lower subdiagonal (again, as in the QR decomposition), the right orthogonal transform will destroy those zeros, as will be seen very shortly.
On the other hand, the form $\bA = \bQ\bH\bQ^\top$ on $\bH$ is known as the \textit{orthogonal similarity transformation} (Definition~\ref{definition:similar-matrices}, p.~\pageref{definition:similar-matrices}) on $\bA$ such that the eigenvalues, rank and trace of $\bA$ and $\bH$ are the same (Lemma~\ref{lemma:eigenvalue-similar-matrices}, p.~\pageref{lemma:eigenvalue-similar-matrices}). Then if we want to study the properties of $\bA$, exploration on $\bH$ can be a relatively simpler task.
\end{remark}
\section{Similarity Transformation and Orthogonal Similarity Transformation}
As mentioned previously, the Hessenberg decomposition introduced in this section, the tridiagonal decomposition in the next section, the Schur decomposition (Theorem~\ref{theorem:schur-decomposition}, p.~\pageref{theorem:schur-decomposition}), and the spectral decomposition (Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}) share a similar form that transforms the matrix into a similar matrix. We now give the rigorous definition of similar matrices and similarity transformations. \index{Similar matrices}\index{Similarity transformation}
\begin{definition}[Similar Matrices and Similarity Transformation]\label{definition:similar-matrices}
$\bA$ and $\bB$ are called \textbf{similar matrices} if there exists a nonsingular matrix $\bP$ such that $\bB = \bP\bA\bP^{-1}$.
In words, for any nonsingular matrix $\bP$, the matrices $\bA$ and $\bP\bA\bP^{-1}$ are similar matrices.
And in this sense, given the nonsingular matrix $\bP$, $\bP\bA\bP^{-1}$
is called a \textbf{similarity transformation} applied to matrix $\bA$.
Moreover, when $\bP$ is orthogonal, then $\bP\bA\bP^\top$ is also known as the \textbf{orthogonal similarity transformation} of $\bA$.
\end{definition}
The difference between the similarity transformation and the orthogonal similarity transformation can be partly understood in the sense of coordinate transformations (Section~\ref{section:coordinate-transformation}, p.~\pageref{section:coordinate-transformation}). We now prove important properties of similar matrices that will prove very useful in the sequel.
\begin{lemma}[Eigenvalue, Trace and Rank of Similar Matrices\index{Trace}]\label{lemma:eigenvalue-similar-matrices}
Any eigenvalue of $\bA$ is also an eigenvalue of $\bP\bA\bP^{-1}$, and conversely, any eigenvalue of $\bP\bA\bP^{-1}$ is also an eigenvalue of $\bA$. That is, $\Lambda(\bA) = \Lambda(\bP\bA\bP^{-1})$, where $\Lambda(\bX)$ is the spectrum of matrix $\bX$ (Definition~\ref{definition:spectrum}, p.~\pageref{definition:spectrum}).
Moreover, the trace and rank of $\bA$ are equal to those of $\bP\bA\bP^{-1}$ for any nonsingular matrix $\bP$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:eigenvalue-similar-matrices}]
For any eigenvalue $\lambda$ of $\bA$, we have $\bA\bx =\lambda \bx$. Then $\lambda \bP\bx = \bP\bA\bP^{-1} \bP\bx$ such that $\bP\bx$ is an eigenvector of $\bP\bA\bP^{-1}$ corresponding to $\lambda$.
Similarly, for any eigenvalue $\lambda$ of $\bP\bA\bP^{-1}$, we have $\bP\bA\bP^{-1} \bx = \lambda \bx$. Then $\bA\bP^{-1} \bx = \lambda \bP^{-1}\bx$ such that $\bP^{-1}\bx$ is an eigenvector of $\bA$ corresponding to $\lambda$.
For the trace of $\bP\bA\bP^{-1}$, we have $trace(\bP\bA\bP^{-1}) = trace(\bA\bP^{-1}\bP) = trace(\bA)$, where the first equality comes from the fact that the
trace of a product is invariant under cyclical permutations of the factors:
\begin{equation}
trace(\bA\bB\bC) = trace(\bB\bC\bA) = trace(\bC\bA\bB), \nonumber
\end{equation}
if all $\bA\bB\bC$, $\bB\bC\bA$, and $\bC\bA\bB$ exist.
For the rank of $\bP\bA\bP^{-1}$, we separate it into two claims as follows.
\paragraph{Rank claim 1: $rank(\bZ\bA)=rank(\bA)$ if $\bZ$ is nonsingular}
We first show that $rank(\bZ\bA)=rank(\bA)$ if $\bZ$ is nonsingular. For any vector $\bn$ in the null space of $\bA$ (that is, $\bA\bn = \bzero$), we have $\bZ\bA\bn = \bzero$; that is, $\bn$ is also in the null space of $\bZ\bA$. This implies $\nspace(\bA)\subseteq \nspace(\bZ\bA)$.
Conversely, for any vector $\bmm$ in the null space of $\bZ\bA$ (that is, $\bZ\bA\bmm = \bzero$), we have $\bA\bmm = \bZ^{-1} \bzero=\bzero$; that is, $\bmm$ is also in the null space of $\bA$. This indicates $\nspace(\bZ\bA)\subseteq \nspace(\bA)$.
By ``sandwiching", the above two arguments imply
$$
\nspace(\bA) = \nspace(\bZ\bA)\quad \longrightarrow \quad rank(\bZ\bA)=rank(\bA).
$$
\paragraph{Rank claim 2: $rank(\bA\bZ)=rank(\bA)$ if $\bZ$ is nonsingular}
We notice that the row rank is equal to the column rank of any matrix (Corollary~\ref{lemma:equal-dimension-rank}, p.~\pageref{lemma:equal-dimension-rank}). Then $rank(\bA\bZ) = rank(\bZ^\top\bA^\top)$. Since $\bZ^\top$ is nonsingular, by claim 1, we have $rank(\bZ^\top\bA^\top) = rank(\bA^\top) = rank(\bA)$ where the last equality is again from the fact that the row rank is equal to the column rank of any matrix. This results in $rank(\bA\bZ)=rank(\bA)$ as claimed.
Since $\bP, \bP^{-1}$ are nonsingular, we then have $rank(\bP\bA\bP^{-1}) = rank(\bA\bP^{-1}) = rank(\bA)$ where the first equality is from claim 1 and the second equality is from claim 2. We complete the proof.
\end{proof}
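A quick numerical illustration of the lemma (our own sketch; the random matrices are plain test data, and a random square Gaussian matrix is nonsingular with probability one):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))      # nonsingular with probability one
B = P @ A @ np.linalg.inv(P)         # a similarity transformation of A

# Same spectrum (sorted eigenvalues agree), same trace, same rank.
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))
assert np.isclose(np.trace(A), np.trace(B))
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)
```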
\section{Existence of the Hessenberg Decomposition}
We will prove that any $n\times n$ matrix can be reduced to Hessenberg form via a sequence of Householder transformations that are applied from the left and the right to the matrix in an interleaved manner.
Previously, we utilized a Householder reflector to triangularize matrices and introduce zeros below the diagonal to obtain the QR decomposition. A similar approach can be applied to introduce zeros below the subdiagonal.
Before presenting the mathematical construction of this decomposition, we emphasize the following remark, which will be very useful in deriving the decomposition.
\begin{remark}[Left and Right Multiplied by a Matrix with Block Identity]\label{remark:left-right-identity}
For a square matrix $\bA\in \real^{n\times n}$, consider a matrix of the form
$$
\bB = \begin{bmatrix}
\bI_k &\bzero \\
\bzero & \bB_{n-k}
\end{bmatrix},
$$
where $\bI_k$ is the $k\times k$ identity matrix. Then $\bB\bA$ does not change the first $k$ rows of $\bA$, and $\bA\bB$ does not change the first $k$ columns of $\bA$.
\end{remark}
The proof of this remark is trivial.
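Nevertheless, a short numerical check illustrates it (our own sketch with $n=5$, $k=2$):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 5, 2
A = rng.standard_normal((n, n))
# B = blockdiag(I_k, B_{n-k}) as in the remark.
B = np.block([[np.eye(k), np.zeros((k, n - k))],
              [np.zeros((n - k, k)), rng.standard_normal((n - k, n - k))]])

assert np.allclose((B @ A)[:k, :], A[:k, :])   # first k rows unchanged
assert np.allclose((A @ B)[:, :k], A[:, :k])   # first k columns unchanged
```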
\subsection*{\textbf{First Step: Introduce Zeros for the First Column}}
Let $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ be the column partition of $\bA$, where each $\ba_i \in \real^{n}$. Let $\bar{\ba}_1, \bar{\ba}_2, \ldots, \bar{\ba}_n \in \real^{n-1}$ be the vectors obtained by removing the first component of the $\ba_i$'s. Let
$$
r_1 = ||\bar{\ba}_1||, \qquad \bu_1 = \frac{\bar{\ba}_1 - r_1 \be_1}{||\bar{\ba}_1 - r_1 \be_1||}, \qquad \text{and}\qquad \widetilde{\bH}_1 = \bI - 2\bu_1\bu_1^\top \in \real^{(n-1)\times (n-1)},
$$
where $\be_1$ here is the first standard basis vector of $\real^{n-1}$, i.e., $\be_1=[1;0;0;\ldots;0]\in \real^{n-1}$. To introduce zeros below the subdiagonal while operating only on the submatrix $\bA_{2:n,1:n}$, we embed the Householder reflector into
$$
\bH_1 = \begin{bmatrix}
1 &\bzero \\
\bzero & \widetilde{\bH}_1
\end{bmatrix},
$$
in which case, $\bH_1\bA$ introduces zeros in the first column of $\bA$ below entry (2,1), while the first row of $\bA$ is kept unchanged by Remark~\ref{remark:left-right-identity}. We can easily verify that both $\bH_1$ and $\widetilde{\bH}_1$ are orthogonal and symmetric (from the definition of the Householder reflector). To obtain the form in Theorem~\ref{theorem:hessenberg-decom}, we multiply $\bH_1\bA$ on the right by $\bH_1^\top$, which results in $\bH_1\bA\bH_1^\top$. The $\bH_1^\top$ on the right does not change the first column of $\bH_1\bA$ and thus keeps the zeros introduced in the first column.
An example of a $5\times 5$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_1\times}{\rightarrow}
&\begin{sbmatrix}{\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_1^\top}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\end{aligned}
$$
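The first step can be sketched directly in NumPy (our own code, following the construction above; the simple choice $r_1 = ||\bar{\ba}_1||$ from the text is used, without the sign trick employed by production implementations):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n))

a1_bar = A[1:, 0]                          # first column of A without a_{11}
r1 = np.linalg.norm(a1_bar)
e1 = np.zeros(n - 1); e1[0] = 1.0          # first basis vector of R^{n-1}
u1 = (a1_bar - r1 * e1) / np.linalg.norm(a1_bar - r1 * e1)
H1_tilde = np.eye(n - 1) - 2 * np.outer(u1, u1)
H1 = np.block([[np.ones((1, 1)), np.zeros((1, n - 1))],
               [np.zeros((n - 1, 1)), H1_tilde]])

left = H1 @ A
assert np.allclose(left[0, :], A[0, :])    # first row untouched by H1
assert np.allclose(left[2:, 0], 0)         # zeros below entry (2,1)
both = left @ H1.T
assert np.allclose(both[:, 0], left[:, 0]) # H1^T keeps the first column
```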
\subsection*{\textbf{Second Step: Introduce Zeros for the Second Column}}
Let $\bB = \bH_1\bA\bH_1^\top$, where the entries in the first column below entry (2,1) are all zeros. The goal now is to introduce zeros in the second column below entry (3,2). Let $\bB_2 = \bB_{2:n,2:n}=[\bb_1, \bb_2, \ldots, \bb_{n-1}]$, and let $\bar{\bb}_1, \bar{\bb}_2, \ldots, \bar{\bb}_{n-1} \in \real^{n-2}$ again be the vectors obtained by removing the first component of the $\bb_i$'s. We can again construct a Householder reflector
\begin{equation}\label{equation:householder-qr-lengthr}
r_2 = ||\bar{\bb}_1||, \qquad \bu_2 = \frac{\bar{\bb}_1 - r_2 \be_1}{||\bar{\bb}_1 - r_2 \be_1||}, \qquad \text{and}\qquad \widetilde{\bH}_2 = \bI - 2\bu_2\bu_2^\top\in \real^{(n-2)\times (n-2)},
\end{equation}
where $\be_1$ now is the first standard basis vector of $\real^{n-2}$. To introduce zeros below the subdiagonal while operating only on the submatrix $\bB_{3:n,1:n}$, we embed the Householder reflector into
$$
\bH_2 = \begin{bmatrix}
\bI_2 &\bzero \\
\bzero & \widetilde{\bH}_2
\end{bmatrix},
$$
where $\bI_2$ is the $2\times 2$ identity matrix. We can see that $\bH_2\bH_1\bA\bH_1^\top$ does not change the first two rows of $\bH_1\bA\bH_1^\top$; moreover, since the entries of the first column below entry (2,1) are already zero and a Householder reflector maps the zero vector to the zero vector, the zeros in the first column are kept. Again, putting $\bH_2^\top$ on the right of $\bH_2\bH_1\bA\bH_1^\top$ does not change the first two columns, so the zeros are kept.
Following the example of a $5\times 5$ matrix, the second step is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_2\times}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bH_1^\top}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_2^\top}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bH_1^\top\bH_2^\top}
\boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\end{aligned}
$$
The same process can go on, and there are $n-2$ such steps in total. We finally obtain the upper Hessenberg matrix
$$
\bH = \bH_{n-2} \bH_{n-3}\ldots\bH_1 \bA\bH_1^\top\bH_2^\top\ldots\bH_{n-2}^\top.
$$
And since the $\bH_i$'s are symmetric and orthogonal, the above equation can be simplified to
$$
\bH =\bH_{n-2} \bH_{n-3}\ldots\bH_1 \bA\bH_1\bH_2\ldots\bH_{n-2}.
$$
Note that there are only $n-2$ such stages rather than $n-1$ or $n$. We verify this number of steps in the example below.
The example of a $5\times 5$ matrix as a whole is shown as follows where again $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
\begin{mdframed}[hidealllines=\mdframehidelineNote,backgroundcolor=\mdframecolorSkip, frametitle={A Complete Example of Hessenberg Decomposition}]
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_1\times}{\rightarrow}
&\begin{sbmatrix}{\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_1^\top}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bH_1^\top}
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\end{aligned}
$$
$$
\begin{aligned}
\qquad \qquad\qquad \qquad \qquad \stackrel{\bH_2\times}{\rightarrow}
&\begin{sbmatrix}{\bH_2\bH_1\bA\bH_1^\top}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_2^\top}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bH_1^\top\bH_2^\top}
\boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\stackrel{\bH_3\times}{\rightarrow}
&\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bH_1^\top\bH_2^\top}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bH_3^\top}{\rightarrow}
\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bH_1^\top\bH_2^\top\bH_3^\top}
\boxtimes & \boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \boxtimes & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix},
\end{aligned}
$$
\end{mdframed}
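The complete $n-2$-step reduction can be sketched as follows (our own helper \texttt{hessenberg\_by\_householder}; the final check against \texttt{scipy.linalg.hessenberg} compares only subdiagonal magnitudes, since reflector sign conventions differ, as formalized by the implicit Q theorem in the next section):

```python
import numpy as np
from scipy.linalg import hessenberg

def hessenberg_by_householder(A):
    """Reduce A to upper Hessenberg form H = Q^T A Q via n-2 Householder steps."""
    n = A.shape[0]
    H = A.astype(float).copy()
    Q = np.eye(n)
    for k in range(n - 2):
        x = H[k + 1:, k]                  # column k below the diagonal
        r = np.linalg.norm(x)
        if np.isclose(r, 0.0) or np.isclose(x[0], r):
            continue                      # column already in the desired form
        e1 = np.zeros(n - k - 1); e1[0] = 1.0
        u = (x - r * e1) / np.linalg.norm(x - r * e1)
        Hk = np.eye(n)                    # embed the reflector: blockdiag(I, H~)
        Hk[k + 1:, k + 1:] -= 2 * np.outer(u, u)
        H = Hk @ H @ Hk                   # Hk is symmetric and orthogonal
        Q = Q @ Hk                        # accumulate Q = H_1 H_2 ... H_{n-2}
    return Q, H

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 6))
Q, H = hessenberg_by_householder(A)
assert np.allclose(Q.T @ A @ Q, H)
assert np.allclose(np.tril(H, -2), 0)     # upper Hessenberg
# Subdiagonal magnitudes match SciPy's reduction (both start from q_1 = e_1).
H_scipy = hessenberg(A)
assert np.allclose(np.abs(np.diag(H, -1)), np.abs(np.diag(H_scipy, -1)))
```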
\section{Properties of the Hessenberg Decomposition}\label{section:hessenberg-decomposition}
The Hessenberg decomposition is not unique: different constructions of the Householder reflectors (say, Equation~\eqref{equation:householder-qr-lengthr}, p.~\pageref{equation:householder-qr-lengthr}) lead to different decompositions. However, under mild conditions, we can claim a similar structure across different decompositions.
\begin{theorem}[Implicit Q Theorem for Hessenberg Decomposition\index{Implicit Q theorem}]\label{theorem:implicit-q-hessenberg}
Suppose two Hessenberg decompositions of the matrix $\bA\in \real^{n\times n}$ are given by $\bA=\bU\bH\bU^\top=\bV\bG\bV^\top$, where $\bU=[\bu_1, \bu_2, \ldots, \bu_n]$ and $\bV=[\bv_1, \bv_2, \ldots, \bv_n]$ are the column partitions of $\bU,\bV$. Suppose further that $k$ is the smallest positive integer for which $h_{k+1,k}=0$, where $h_{ij}$ is the entry $(i,j)$ of $\bH$, with the convention that $k=n$ if $\bH$ is \textit{unreduced} (i.e., if no subdiagonal entry of $\bH$ vanishes). Then
\begin{itemize}
\item If $\bu_1=\bv_1$, then $\bu_i = \pm \bv_i$ and $|h_{i,i-1}| = |g_{i,i-1}|$ for $i\in \{2,3,\ldots,k\}$.
\item If $k<n$, then $g_{k+1,k}=0$.
\end{itemize}
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:implicit-q-hessenberg}]
Define the orthogonal matrix $\bQ=\bV^\top\bU$ and we have
$$
\left.
\begin{aligned}
\bG\bQ &= \bV^\top\bA\bV \bV^\top\bU = \bV^\top\bA\bU \\
\bQ\bH &= \bV^\top\bU \bU^\top\bA\bU = \bV^\top\bA\bU
\end{aligned}
\right\}
\leadto
\bG\bQ = \bQ\bH,
$$
Comparing the $(i-1)$-th column of each side, we obtain
$$
\bG\bq_{i-1} = \bQ\bh_{i-1},
$$
where $\bq_{i-1}$ and $\bh_{i-1}$ are the $(i-1)$-th column of $\bQ$ and $\bH$ respectively. Since $h_{l,i-1}=0$ for $l\geq i+1$ (by the definition of upper Hessenberg matrices), $\bQ\bh_{i-1}$ can be represented as
$$
\bQ\bh_{i-1} = \sum_{j=1}^{i} h_{j,i-1} \bq_j = h_{i,i-1}\bq_i + \sum_{j=1}^{i-1} h_{j,i-1} \bq_j.
$$
Combining the two findings above, it follows that
$$
h_{i,i-1}\bq_i = \bG\bq_{i-1} - \sum_{j=1}^{i-1} h_{j,i-1} \bq_j.
$$
Since $\bq_1 = \bV^\top\bu_1 = \bV^\top\bv_1 = \be_1$, and since $h_{i,i-1}\neq 0$ for $i\in\{2,\ldots,k\}$ by the choice of $k$, the recurrence above shows by induction that $[\bq_1, \bq_2, \ldots,\bq_k]$ is upper triangular. And since $\bQ$ is orthogonal, this submatrix must be diagonal with each diagonal value in $\{-1, 1\}$. Then, $\bq_1=\be_1$ and $\bq_i = \pm \be_i$ for $i\in \{2,\ldots, k\}$; since $\bq_i =\bV^\top\bu_i$, this gives $\bu_i = \bV\bq_i = \pm\bv_i$. Further, taking the inner product of the recurrence with $\bq_i$ yields $h_{i,i-1}=\bq_i^\top (\bG\bq_{i-1} - \sum_{j=1}^{i-1} h_{j,i-1} \bq_j)=\bq_i^\top \bG\bq_{i-1}$, and for $i\in \{2,\ldots,k\}$, $\bq_i^\top \bG\bq_{i-1}$ is just $\pm g_{i,i-1}$. It follows that
$$
\begin{aligned}
|h_{i,i-1}| &= |g_{i,i-1}|, \qquad &\forall i\in \{2,\ldots,k\},\\
\bu_i &= \pm\bv_i, \qquad &\forall i\in \{2,\ldots,k\}.
\end{aligned}
$$
This proves the first part. For the second part, if $k<n-1$,
$$
\begin{aligned}
g_{k+1,k} &= \be_{k+1}^\top\bG\be_{k} = \pm \be_{k+1}^\top\underbrace{\bG\bQ}_{\bQ\bH} \be_{k} = \pm \be_{k+1}^\top\underbrace{\bQ\bH \be_{k}}_{\text{$k$-th column of $\bQ\bH$}} \\
&= \pm \be_{k+1}^\top \bQ\bh_k = \pm \be_{k+1}^\top \sum_{j=1}^{k+1} h_{jk}\bq_j
=\pm \be_{k+1}^\top \sum_{j=1}^{\textcolor{blue}{k}} h_{jk}\bq_j=0,
\end{aligned}
$$
where the penultimate equality is from the assumption that $h_{k+1,k}=0$. This completes the proof.
\end{proof}
We observe from the above theorem that, when two Hessenberg decompositions of matrix $\bA$ are both unreduced and have the same first column in the orthogonal matrices, the Hessenberg matrices $\bH, \bG$ are similar such that $\bH = \bD\bG\bD^{-1}$, where $\bD=\diag(\pm 1, \pm 1, \ldots, \pm 1)$. \textit{Moreover, and most importantly, if we restrict the elements in the lower subdiagonal of the Hessenberg matrix $\bH$ to be positive (if possible), then the Hessenberg decomposition $\bA=\bQ\bH\bQ^\top$ is uniquely determined by $\bA$ and the first column of $\bQ$.} This is similar to what we have claimed on the uniqueness of the QR decomposition (Corollary~\ref{corollary:unique-qr}, p.~\pageref{corollary:unique-qr}), and it is important for reducing the complexity of the QR algorithm for computing the singular value decomposition or eigenvalues of a matrix in general \citep{lu2021numerical}.
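As a quick numerical illustration of this uniqueness claim, the following is a minimal sketch (assuming NumPy and SciPy are available; the random test matrix and seed are illustrative): it computes a Hessenberg decomposition with \texttt{scipy.linalg.hessenberg} and then sign-fixes the subdiagonal with a diagonal matrix $\bD=\diag(\pm 1,\ldots,\pm 1)$, as in the discussion above.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))

# Hessenberg decomposition A = Q H Q^T.
H, Q = hessenberg(A, calc_q=True)
assert np.allclose(Q @ Q.T, np.eye(n))   # Q orthogonal
assert np.allclose(np.tril(H, -2), 0)    # H upper Hessenberg
assert np.allclose(Q @ H @ Q.T, A)

# Sign-fix the subdiagonal with D = diag(+-1): since D = D^{-1},
# A = (Q D)(D H D)(Q D)^T and D H D has a nonnegative subdiagonal.
d = np.ones(n)
for i in range(1, n):
    s = np.sign(H[i, i - 1])
    d[i] = d[i - 1] * (s if s != 0 else 1.0)
D = np.diag(d)
H2, Q2 = D @ H @ D, Q @ D
assert np.all(np.diag(H2, -1) >= 0)      # subdiagonal made nonnegative
assert np.allclose(Q2 @ H2 @ Q2.T, A)    # still a valid decomposition
```

Any other Hessenberg decomposition sharing the first column of $\bQ$ would, after the same sign-fixing, agree with this one by the theorem.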
The next finding involves a Krylov matrix defined as follows:
\begin{definition}[Krylov Matrix\index{Krylov matrix}]\label{definition:krylov-matrix}
Given a matrix $\bA\in \real^{n\times n}$, a vector $\bq\in \real^n$, and a positive integer $k$, the \textit{Krylov matrix} is defined to be
$$
\bK(\bA, \bq, k) =
\begin{bmatrix}
\bq, & \bA\bq, & \ldots, & \bA^{k-1}\bq
\end{bmatrix}
\in \real^{n\times k}.
$$
\end{definition}
\begin{theorem}[Reduced Hessenberg]\label{theorem:implicit-q-hessenberg-v2}
Suppose there exists an orthogonal matrix $\bQ$ such that $\bA\in \real^{n\times n}$ can be factored as $\bA = \bQ\bH\bQ^\top$. Then $\bQ^\top\bA\bQ=\bH$ is an unreduced upper Hessenberg matrix if and only if $\bR=\bQ^\top \bK(\bA, \bq_1, n)$ is nonsingular and upper triangular, where $\bq_1$ is the first column of $\bQ$.
Moreover, if $\bR$ is singular and $k$ is the smallest index such that $r_{kk}=0$, then $k$ is also the smallest index such that $h_{k,k-1}=0$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:implicit-q-hessenberg-v2}] We prove the forward and converse implications separately as follows:
\paragraph{Forward implication}
Suppose $\bH$ is unreduced, and write out the following matrix
$$
\bR = \bQ^\top \bK(\bA, \bq_1, n) = [\be_1, \bH\be_1, \ldots, \bH^{n-1}\be_1],
$$
where, since $\bH$ is upper Hessenberg, $\bH^{j-1}\be_1$ has zero entries below position $j$; hence $\bR$ is upper triangular with $r_{11}=1$. Observe that $r_{ii} = h_{21}h_{32}\ldots h_{i,i-1}$ for $i\in \{2,3,\ldots, n\}$. Therefore, when $\bH$ is unreduced, $\bR$ is nonsingular as well.
\paragraph{Converse implication}
Now suppose $\bR$ is upper triangular and nonsingular. From the identity $\br_{k+1} = \bH\br_{k}$, the $(k+1)$-th entry of $\bH\br_{k}$ equals $h_{k+1,k}r_{kk}$, which must coincide with $r_{k+1,k+1}\neq 0$; hence $h_{k+1,k}\neq 0$ for $k\in \{1,2,\ldots, n-1\}$. Then $\bH$ is unreduced.
If $\bR$ is singular and $k$ is the smallest index so that $r_{kk}=0$, then
$$
\left.
\begin{aligned}
r_{k-1,k-1}&=h_{21}h_{32}\ldots h_{k-1,k-2}&\neq 0 \\
r_{kk}&=h_{21}h_{32}\ldots h_{k-1,k-2} h_{k,k-1}&= 0
\end{aligned}
\right\}
\leadto
h_{k,k-1} =0,
$$
from which the result follows.
\end{proof}
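The claim of the theorem can also be checked numerically. Below is a minimal sketch (assuming NumPy and SciPy; the data are illustrative) that builds the Krylov matrix $\bK(\bA,\bq_1,n)$ and verifies that $\bR=\bQ^\top\bK(\bA,\bq_1,n)$ is upper triangular with $r_{11}=1$ and $r_{ii}=h_{21}h_{32}\cdots h_{i,i-1}$:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
H, Q = hessenberg(A, calc_q=True)   # A = Q H Q^T

# Krylov matrix K(A, q1, n) = [q1, A q1, ..., A^{n-1} q1].
q1 = Q[:, 0]
cols, v = [], q1
for _ in range(n):
    cols.append(v)
    v = A @ v
K = np.column_stack(cols)

# R = Q^T K is upper triangular; r_11 = 1 and r_ii = h_21 h_32 ... h_{i,i-1}.
R = Q.T @ K
assert np.allclose(np.tril(R, -1), 0, atol=1e-8)
assert np.isclose(R[0, 0], 1.0)
assert np.allclose(np.diag(R)[1:], np.cumprod(np.diag(H, -1)), atol=1e-8)
```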
\chapter{Tridiagonal Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Tridiagonal Decomposition: Hessenberg in Symmetric Matrices}
We first give the formal definition of the tridiagonal matrix.
\begin{definition}[Tridiagonal Matrix\index{Tridiagonal matrix}]\label{definition:tridiagonal-hessenbert}
A tridiagonal matrix is a square matrix in which all the entries below the lower subdiagonal and all the entries above the upper subdiagonal are zero; i.e., the tridiagonal matrix is a \textbf{band matrix}.
The definition of the tridiagonal matrix can also be extended to rectangular matrices, and the form can be implied from the context.
In matrix language, let $\bT\in \real^{n\times n}$ with entry ($i,j$) denoted by $t_{ij}$ for all $i,j\in \{1,2,\ldots, n\}$. Then $\bT$ is a tridiagonal matrix if $t_{ij}=0$ whenever $i\geq j+2$ or $i \leq j-2$.
Let $i$ denote the smallest positive integer for which $t_{i+1, i}=0$ where $i\in \{1,2,\ldots, n-1\}$; then $\bT$ is \textbf{unreduced} if $i=n-1$.
\end{definition}
Take a $5\times 5$ matrix as an example; in a tridiagonal matrix, the entries below the lower subdiagonal and above the upper subdiagonal are zero:
$$
\begin{sbmatrix}{possibly\,\, unreduced}
\boxtimes & \boxtimes & 0 & 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & 0 & 0\\
0 & \boxtimes & \boxtimes & \boxtimes & 0\\
0 & 0 & \boxtimes & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}
\qquad
\begin{sbmatrix}{reduced}
\boxtimes & \boxtimes & 0 & 0 & 0\\
\boxtimes & \boxtimes & \boxtimes & 0 & 0\\
0 & \boxtimes & \boxtimes & \boxtimes & 0\\
0 & 0 & \textcolor{blue}{0} & \boxtimes & \boxtimes\\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}.
$$
Obviously, a tridiagonal matrix is a special case of an upper Hessenberg matrix.
Then we have the following tridiagonal decomposition:
\begin{theoremHigh}[Tridiagonal Decomposition]\label{theorem:tridiagonal-decom}
Every $n\times n$ symmetric matrix $\bA$ can be factored as
$$
\bA = \bQ\bT\bQ^\top \qquad \text{or} \qquad \bT = \bQ^\top \bA\bQ,
$$
where $\bT$ is a \textit{symmetric} tridiagonal matrix, and $\bQ$ is an orthogonal matrix.
\end{theoremHigh}
The existence of the tridiagonal decomposition follows directly by applying the Hessenberg decomposition to the symmetric matrix $\bA$: the resulting $\bT=\bQ^\top\bA\bQ$ is both upper Hessenberg and symmetric, hence tridiagonal.
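This observation can be verified directly: applying a general Hessenberg routine to a symmetric matrix returns a (numerically) symmetric, hence tridiagonal, $\bT$. A minimal sketch (assuming NumPy and SciPy; the random symmetric matrix is illustrative):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(2)
n = 6
S = rng.standard_normal((n, n))
A = (S + S.T) / 2                   # symmetric test matrix

T, Q = hessenberg(A, calc_q=True)   # A = Q T Q^T

# Upper Hessenberg + symmetric  =>  tridiagonal.
assert np.allclose(T, T.T, atol=1e-10)
assert np.allclose(np.tril(T, -2), 0, atol=1e-10)
assert np.allclose(np.triu(T, 2), 0, atol=1e-10)
assert np.allclose(Q @ T @ Q.T, A)
```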
\section{Properties of the Tridiagonal Decomposition}\label{section:tridiagonal-decomposition}
Similarly, the tridiagonal decomposition is not unique. However, and most importantly, if we restrict the elements in the lower subdiagonal of the tridiagonal matrix $\bT$ to be positive (if possible), then the tridiagonal decomposition $\bA=\bQ\bT\bQ^\top$ is uniquely determined by $\bA$ and the first column of $\bQ$.
\begin{theorem}[Implicit Q Theorem for Tridiagonal\index{Implicit Q theorem}]\label{theorem:implicit-q-tridiagonal}
Suppose two tridiagonal decompositions of a symmetric matrix $\bA\in \real^{n\times n}$ are given by $\bA=\bU\bT\bU^\top=\bV\bG\bV^\top$ where $\bU=[\bu_1, \bu_2, \ldots, \bu_n]$ and $\bV=[\bv_1, \bv_2, \ldots, \bv_n]$ are the column partitions of $\bU,\bV$. Suppose further that $k$ is the smallest positive integer for which $t_{k+1,k}=0$ where $t_{ij}$ is the entry $(i,j)$ of $\bT$. Then
\begin{itemize}
\item If $\bu_1=\bv_1$, then $\bu_i = \pm \bv_i$ and $|t_{i,i-1}| = |g_{i,i-1}|$ for $i\in \{2,3,\ldots,k\}$.
\item When $k=n-1$, the tridiagonal matrix $\bT$ is known as unreduced. However, if $k<n-1$, then $g_{k+1,k}=0$.
\end{itemize}
\end{theorem}
From the above theorem, we observe that if we restrict the elements in the lower sub-diagonal of the tridiagonal matrix $\bT$ to be positive (if possible), i.e., \textit{unreduced}, then the tridiagonal decomposition $\bA=\bQ\bT\bQ^\top$ is uniquely determined by $\bA$ and the first column of $\bQ$. This again is similar to what we have claimed on the uniqueness of the QR decomposition (Corollary~\ref{corollary:unique-qr}, p.~\pageref{corollary:unique-qr}).
Similarly, a reduced tridiagonal decomposition can be obtained from the implication of the Krylov matrix (Definition~\ref{definition:krylov-matrix}, p.~\pageref{definition:krylov-matrix}).
\begin{theorem}[Reduced Tridiagonal]\label{theorem:implicit-q-tridiagonal-v2}
Suppose there exists an orthogonal matrix $\bQ$ such that $\bA\in \real^{n\times n}$ can be factored as $\bA = \bQ\bT\bQ^\top$. Then $\bQ^\top\bA\bQ=\bT$ is an unreduced tridiagonal matrix if and only if $\bR=\bQ^\top \bK(\bA, \bq_1, n)$ is nonsingular and upper triangular, where $\bq_1$ is the first column of $\bQ$.
Moreover, if $\bR$ is singular and $k$ is the smallest index such that $r_{kk}=0$, then $k$ is also the smallest index such that $t_{k,k-1}=0$.
\end{theorem}
The proofs of the above two theorems are the same as those of Theorems~\ref{theorem:implicit-q-hessenberg} and~\ref{theorem:implicit-q-hessenberg-v2}.
\chapter{Bidiagonal Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Bidiagonal Decomposition}\label{section:bidiagonal-decompo}
We first give the rigorous definition of the upper bidiagonal matrix as follows:
\begin{definition}[Upper Bidiagonal Matrix\index{Bidiagonal matrix}]\label{definition:bidiagonal-matrix}
An upper bidiagonal matrix is a square banded matrix whose nonzero entries lie on the \textbf{main diagonal} and the \textbf{upper subdiagonal} (i.e., the diagonal above the main diagonal); that is, it has at most two nonzero diagonals.
Conversely, when the nonzero off-diagonal entries lie on the diagonal below the main diagonal, the matrix is lower bidiagonal.
The definition of bidiagonal matrices can also be extended to rectangular matrices, and the form can be implied from the context.
\end{definition}
Take a $7\times 5$ matrix as an example; in an upper bidiagonal matrix, the entries below the main diagonal and above the upper subdiagonal are zero:
$$
\begin{bmatrix}
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes & 0 & 0\\
0 & 0 & \boxtimes & \boxtimes & 0\\
0 & 0 & 0 & \boxtimes & \boxtimes\\
0 & 0 & 0 & 0 & \boxtimes\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0
\end{bmatrix}.
$$
Then we have the following bidiagonal decomposition:
\begin{theoremHigh}[Bidiagonal Decomposition]\label{theorem:Golub-Kahan-Bidiagonalization-decom}
Every $m\times n$ matrix $\bA$ can be factored as
$$
\bA = \bU\bB\bV^\top \qquad \text{or} \qquad \bB= \bU^\top \bA\bV,
$$
where $\bB$ is an upper bidiagonal matrix, and $\bU, \bV$ are orthogonal matrices.
\end{theoremHigh}
We will see that the bidiagonalization resembles the form of a singular value decomposition; the only difference is that $\bB$ in bidiagonal form has nonzero entries on the upper subdiagonal. This bidiagonal form will be shown to play an important role in the calculation of the singular value decomposition.
\section{Existence of the Bidiagonal Decomposition: Golub-Kahan Bidiagonalization}
Previously, we utilized a Householder reflector to triangularize matrices and introduce zeros below the diagonal to obtain the QR decomposition, and introduce zeros below the sub-diagonal to obtain the Hessenberg decomposition. A similar approach can be employed to find the bidiagonal decomposition.
\subsection*{\textbf{First Step 1.1: Introduce Zeros for the First Column}}
Let $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ be the column partitions of $\bA$, and each $\ba_i \in \real^{m}$.
We can construct the Householder reflector as follows:
$$
r_1 = ||\ba_1||, \qquad \bu_1 = \frac{\ba_1 - r_1 \be_1}{||\ba_1 - r_1 \be_1||} ,\qquad \text{and}\qquad \bH_1 = \bI - 2\bu_1\bu_1^\top \in \textcolor{blue}{\real^{m\times m}},
$$
where $\be_1$ here is the first basis for $\textcolor{blue}{\real^{m}}$, i.e., $\be_1=[1;0;0;\ldots;0]\in \textcolor{blue}{\real^{m}}$.
In this case, $\bH_1\bA$ will introduce zeros in the first column of $\bA$ below entry (1,1), i.e., reflect $\ba_1$ to $r_1\be_1$.
We can easily verify that $\bH_1$ is both symmetric and orthogonal (from the definition of the Householder reflector).
An example of a $7\times 5$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_1\times}{\rightarrow}
&\begin{sbmatrix}{\bH_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}.
\end{aligned}
$$
Till now, this is exactly what we have done in the QR decomposition via the Householder reflector in Section~\ref{section:qr-via-householder} (p.~\pageref{section:qr-via-householder}).
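For concreteness, Step 1.1 can be sketched in a few lines (assuming NumPy; the $7\times 5$ shape and random data are illustrative). A production implementation would choose the sign of $r_1$ to avoid cancellation when $\ba_1 \approx r_1\be_1$; here we follow the construction in the text verbatim:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 7, 5
A = rng.standard_normal((m, n))

# Householder reflector H1 = I - 2 u1 u1^T that reflects a1 to r1 e1.
a1 = A[:, 0]
r1 = np.linalg.norm(a1)
e1 = np.zeros(m)
e1[0] = 1.0
u1 = a1 - r1 * e1
u1 /= np.linalg.norm(u1)
H1 = np.eye(m) - 2.0 * np.outer(u1, u1)

assert np.allclose(H1, H1.T)               # symmetric
assert np.allclose(H1 @ H1.T, np.eye(m))   # orthogonal
assert np.allclose((H1 @ A)[1:, 0], 0)     # zeros below entry (1,1)
assert np.isclose((H1 @ A)[0, 0], r1)      # a1 reflected to r1 e1
```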
Going further, introducing zeros above the upper subdiagonal of $\bH_1\bA$ is equivalent to introducing zeros below the lower subdiagonal of $(\bH_1\bA)^\top$.
\subsection*{\textbf{First Step 1.2: Introduce Zeros for the First Row}}
Now suppose we are looking at the \textit{transpose} of $\bH_1\bA$, that is $(\bH_1\bA)^\top =\bA^\top\bH_1^\top \in \real^{n\times m}$ and the column partition is given by $\bA^\top\bH_1^\top = [\bz_1, \bz_2, \ldots, \bz_m]$ where each $\bz_i \in \real^n$.
Suppose $\bar{\bz}_1, \bar{\bz}_2, \ldots, \bar{\bz}_m \in \real^{n-1}$ are the vectors obtained by removing the first component of each $\bz_i$.
Let
$$
r_1 = ||\bar{\bz}_1||, \qquad \bv_1 = \frac{\bar{\bz}_1 - r_1 \be_1}{||\bar{\bz}_1 - r_1 \be_1||}, \qquad \text{and}\qquad \widetilde{\bL}_1 = \bI - 2\bv_1\bv_1^\top \in\textcolor{blue}{\real^{(n-1)\times (n-1)}},
$$
where $\be_1$ now is the first basis for $\textcolor{blue}{\real^{n-1}}$, i.e., $\be_1=[1;0;0;\ldots;0]\in \textcolor{blue}{\real^{n-1}}$. To introduce zeros below the subdiagonal and operate on the submatrix $(\bA^\top\bH_1^\top)_{2:n,1:m}$, we append the Householder reflector into
$$
\bL_1 = \begin{bmatrix}
1 &\bzero \\
\bzero & \widetilde{\bL}_1
\end{bmatrix},
$$
in which case, $\bL_1(\bA^\top\bH_1^\top)$ introduces zeros in the first column of $(\bA^\top\bH_1^\top)$ below entry (2,1), i.e., it reflects $\bar{\bz}_1$ to $r_1\be_1$. The first row of $(\bA^\top\bH_1^\top)$ is not affected at all by Remark~\ref{remark:left-right-identity} (p.~\pageref{remark:left-right-identity}), so the zeros introduced in Step 1.1 are kept. And we can easily verify that both $\bL_1$ and $\widetilde{\bL}_1$ are symmetric and orthogonal matrices (from the definition of the Householder reflector).
Coming back to the original \textit{untransposed} matrix $\bH_1\bA$, multiplying on the right by $\bL_1^\top$ introduces zeros in the first row to the right of entry (1,2).
Again, following the example above, a $7\times 5$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bA^\top\bH_1^\top}
\boxtimes & 0 & 0 & 0 & 0 & 0 & 0 \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\end{sbmatrix}
\stackrel{\bL_1\times}{\rightarrow}
\begin{sbmatrix}{\bL_1 \bA^\top\bH_1^\top}
\boxtimes & 0 & 0 & 0 & 0 & 0 & 0 \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}
\stackrel{(\cdot)^\top}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bL_1^\top }
\boxtimes & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{0} \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}.
\end{aligned}
$$
In short, $\bH_1\bA\bL_1^\top$ finishes the first step to introduce zeros for the first column and the first row of $\bA$.
\subsection*{\textbf{Second Step 2.1: Introduce Zeros for the Second Column}}
Let $\bB = \bH_1\bA\bL_1^\top$, where the entries in the first column below entry (1,1) are all zeros and the entries in the first row to the right of entry (1,2) are all zeros as well.
The goal now is to introduce zeros in the second column below entry (2,2).
Let $\bB_2 = \bB_{2:m,2:n}=[\bb_1, \bb_2, \ldots, \bb_{n-1}] \in \real^{(m-1)\times (n-1)}$.
We can again construct a Householder reflector
$$
r_1 = ||\bb_1||,\qquad \bu_2 = \frac{\bb_1 - r_1 \be_1}{||\bb_1 - r_1 \be_1||}, \qquad \text{and}\qquad \widetilde{\bH}_2 = \bI - 2\bu_2\bu_2^\top\in \textcolor{blue}{\real^{(m-1)\times (m-1)}},
$$
where $\be_1$ now is the first basis for $\textcolor{blue}{\real^{m-1}}$ i.e., $\be_1=[1;0;0;\ldots;0]\in \textcolor{blue}{\real^{m-1}}$. To introduce zeros below the main diagonal and operate on the submatrix $\bB_{2:m,2:n}$, we append the Householder reflector into
$$
\bH_2 = \begin{bmatrix}
1 &\bzero \\
\bzero & \widetilde{\bH}_2
\end{bmatrix},
$$
in which case, we can see that $\bH_2(\bH_1\bA\bL_1^\top)$ will not change the first row of $(\bH_1\bA\bL_1^\top)$ by Remark~\ref{remark:left-right-identity} (p.~\pageref{remark:left-right-identity}); moreover, since the entries of the first column below entry (1,1) are zero and a Householder reflector maps the zero vector to itself, the zeros in the first column will be kept as well.
Following the example above, a $7\times 5$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bH_1\bA\bL_1^\top }
\boxtimes & \boxtimes & 0 & 0 & 0 \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\end{sbmatrix}
\stackrel{\bH_2\times }{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bL_1^\top }
\boxtimes & \boxtimes & 0& 0 & 0 \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}.
\end{aligned}
$$
\subsection*{\textbf{Second Step 2.2: Introduce Zeros for the Second Row}}
As in Step 1.2, we now look at the \textit{transpose} of $\bH_2\bH_1\bA\bL_1^\top$, that is, $(\bH_2\bH_1\bA\bL_1^\top)^\top =\bL_1\bA^\top\bH_1^\top\bH_2^\top \in \real^{n\times m}$, whose column partition is given by $\bL_1\bA^\top\bH_1^\top\bH_2^\top = [\bx_1, \bx_2, \ldots, \bx_m]$ where each $\bx_i \in \real^n$.
Suppose $\bar{\bx}_1, \bar{\bx}_2, \ldots, \bar{\bx}_m \in \real^{n-2}$ are the vectors obtained by removing the first two components of each $\bx_i$.
Construct the Householder reflector as follows:
$$
r_1 = ||\bar{\bx}_1||,\qquad \bv_2 = \frac{\bar{\bx}_1 - r_1 \be_1}{||\bar{\bx}_1 - r_1 \be_1||}, \qquad \text{and}\qquad \widetilde{\bL}_2 = \bI - 2\bv_2\bv_2^\top \in \textcolor{blue}{\real^{(n-2)\times (n-2)}},
$$
where $\be_1$ now is the first basis for $\textcolor{blue}{\real^{n-2}}$, i.e., $\be_1=[1;0;0;\ldots;0]\in \textcolor{blue}{\real^{n-2}}$. To introduce zeros below the subdiagonal and operate on the submatrix $(\bL_1\bA^\top\bH_1^\top\bH_2^\top)_{3:n,1:m}$, we append the Householder reflector into
$$
\bL_2 = \begin{bmatrix}
\bI_2 &\bzero \\
\bzero & \widetilde{\bL}_2
\end{bmatrix},
$$
where $\bI_2$ is a $2\times 2$ identity matrix.
In this case, $\bL_2(\bL_1\bA^\top\bH_1^\top\bH_2^\top)$ introduces zeros in the second column of $(\bL_1\bA^\top\bH_1^\top\bH_2^\top)$ below entry (3,2). The first two rows of $(\bL_1\bA^\top\bH_1^\top\bH_2^\top)$ are not affected at all by Remark~\ref{remark:left-right-identity} (p.~\pageref{remark:left-right-identity}). \textbf{Further, the first column is kept unchanged as well}. And we can easily verify that both $\bL_2$ and $\widetilde{\bL}_2$ are symmetric and orthogonal matrices (from the definition of the Householder reflector).
Coming back to the original \textit{untransposed} matrix $\bH_2\bH_1\bA\bL_1^\top$, multiplying on the right by $\bL_2^\top$ introduces zeros in the second row to the right of entry (2,3). Following the example above, a $7\times 5$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bL_1\bA^\top\bH_1^\top\bH_2^\top }
\boxtimes & 0 & 0 & 0 & 0 & 0 & 0 \\
\boxtimes & \boxtimes & 0 & 0 & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\end{sbmatrix}
\stackrel{\bL_2\times }{\rightarrow}
\begin{sbmatrix}{\bL_2\bL_1\bA^\top\bH_1^\top\bH_2^\top }
\boxtimes & 0 & 0 & 0 & 0 & 0 & 0 \\
\boxtimes & \boxtimes & 0 & 0 & 0 & 0 & 0\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}
\stackrel{(\cdot)^\top }{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bL_1^\top\bL_2^\top}
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \bm{\boxtimes} & \bm{0} & \bm{0} \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}.
\end{aligned}
$$
In short, $\bH_2(\bH_1\bA\bL_1^\top)\bL_2^\top$ finishes the second step to introduce zeros for the second column and the second row of $\bA$.
The same process can go on, and we shall notice that there are $n$ such Householder reflectors $\bH_i$ on the left and $n-2$ such Householder reflectors $\bL_i$ on the right (suppose $m>n$ for simplicity). This interleaved Householder factorization is known as the \textit{Golub-Kahan Bidiagonalization} \citep{golub1965calculating}. We will finally bidiagonalize
$$
\bB = \bH_{n} \bH_{n-1}\ldots\bH_1 \bA\bL_1^\top\bL_2^\top\ldots\bL_{n-2}^\top.
$$
And since the $\bH_i$'s and $\bL_i$'s are symmetric and orthogonal, we have
$$
\bB =\bH_{n} \bH_{n-1}\ldots\bH_1 \bA\bL_1\bL_2\ldots\bL_{n-2}.
$$
Letting $\bU = \bH_1\bH_2\ldots\bH_{n}$ and $\bV = \bL_1\bL_2\ldots\bL_{n-2}$, both orthogonal, we arrive at the bidiagonal decomposition $\bA = \bU\bB\bV^\top$.
A full example of a $7\times 5$ matrix is shown as follows where again $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
\begin{mdframed}[hidealllines=\mdframehidelineNote,backgroundcolor=\mdframecolorSkip,frametitle={A Complete Example of Golub-Kahan Bidiagonalization}]
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes\\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bH_1\times}{\rightarrow}
&\begin{sbmatrix}{\bH_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}
\stackrel{\times\bL_1^\top}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA\bL_1^\top}
\boxtimes & \bm{\boxtimes} & \bm{0} & \bm{0} & \bm{0} \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
\end{sbmatrix}\\
\end{aligned}
$$
$$
\begin{aligned}
\qquad \qquad \qquad \qquad \gap \stackrel{\bH_2\times}{\rightarrow}
&\begin{sbmatrix}{\bH_2\bH_1\bA\bL_1^\top}
\boxtimes & \boxtimes & 0 & 0 & 0 \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bL_2^\top}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA\bL_1^\top\bL_2^\top}
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \bm{\boxtimes} & \bm{0} & \bm{0} \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\end{aligned}
$$
$$
\begin{aligned}
\qquad \qquad \qquad \qquad \qquad \qquad \stackrel{\bH_3\times}{\rightarrow}
&\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bL_1^\top\bL_2^\top}
\boxtimes & \boxtimes & 0 & 0 & 0 \\
0 & \boxtimes & \boxtimes & 0 & 0 \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\times\bL_3^\top}{\rightarrow}
\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA\bL_1^\top\bL_2^\top\bL_3^\top}
\boxtimes & \boxtimes & 0 & 0 & 0\\
0 & \boxtimes & \boxtimes & 0 & 0\\
0 & 0 & \boxtimes & \bm{\boxtimes} & \bm{0}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
\stackrel{\bH_4\times}{\rightarrow}
&\begin{sbmatrix}{\bH_4\bH_3\bH_2\bH_1\bA\bL_1^\top\bL_2^\top\bL_3^\top}
\boxtimes & \boxtimes & 0 & 0 & 0 \\
0 & \boxtimes & \boxtimes & 0 & 0 \\
0 & 0 & \boxtimes& \boxtimes & 0\\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{0} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{0} & \bm{\boxtimes}\\
0 & 0 & 0 & \bm{0} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bH_5\times}{\rightarrow}
\begin{sbmatrix}{\bH_5\bH_4\bH_3\bH_2\bH_1\bA\bL_1^\top\bL_2^\top\bL_3^\top}
\boxtimes & \boxtimes & 0 & 0 & 0 \\
0 & \boxtimes & \boxtimes & 0 & 0 \\
0 & 0 & \boxtimes& \boxtimes & 0\\
0 & 0 & 0 & \boxtimes & \boxtimes\\
0 & 0 & 0 & 0 & \bm{\boxtimes}\\
0 & 0 & 0 & 0 & \bm{0}\\
0 & 0 & 0 & 0 & \bm{0}
\end{sbmatrix}.
\end{aligned}
$$
\end{mdframed}
We have presented the process so that each right Householder reflector $\bL_i$ follows the corresponding left one $\bH_i$. A tempting shortcut is to apply all the left reflectors first and then all the right ones; that is, to treat the bidiagonal decomposition as a combination of a QR decomposition and a Hessenberg decomposition. Nevertheless, this is problematic: the first right Householder reflector $\bL_1$ would destroy the zeros introduced by the left ones. Therefore, the left and right reflectors need to be applied in an interleaved manner.
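The interleaved procedure is compact enough to implement in full. The following is a minimal sketch of the Golub-Kahan bidiagonalization (assuming NumPy; function names are ours, not standard). Unlike the derivation above, it uses the sign-stable choice of reflector, so the two diagonals of $\bB$ may carry signs, but $\bA=\bU\bB\bV^\top$ still holds with $\bB$ upper bidiagonal:

```python
import numpy as np

def householder(x):
    # Unit vector v with (I - 2 v v^T) x = -sign(x_1) ||x|| e_1
    # (sign-stable choice, avoiding cancellation).
    v = np.array(x, dtype=float)
    v[0] += np.copysign(np.linalg.norm(x), x[0])
    norm_v = np.linalg.norm(v)
    return v / norm_v if norm_v > 0 else v

def golub_kahan_bidiag(A):
    # Interleave left reflectors H_k (zero column k below the diagonal)
    # with right reflectors L_k (zero row k right of the superdiagonal).
    m, n = A.shape
    B = np.array(A, dtype=float)
    U, V = np.eye(m), np.eye(n)
    for k in range(n):
        v = householder(B[k:, k])                       # left reflector
        B[k:, :] -= 2.0 * np.outer(v, v @ B[k:, :])
        U[:, k:] -= 2.0 * np.outer(U[:, k:] @ v, v)
        if k < n - 2:
            w = householder(B[k, k + 1:])               # right reflector
            B[:, k + 1:] -= 2.0 * np.outer(B[:, k + 1:] @ w, w)
            V[:, k + 1:] -= 2.0 * np.outer(V[:, k + 1:] @ w, w)
    return U, B, V

rng = np.random.default_rng(4)
A = rng.standard_normal((7, 5))
U, B, V = golub_kahan_bidiag(A)
assert np.allclose(U @ U.T, np.eye(7)) and np.allclose(V @ V.T, np.eye(5))
assert np.allclose(np.tril(B, -1), 0, atol=1e-10)   # zeros below the diagonal
assert np.allclose(np.triu(B, 2), 0, atol=1e-10)    # zeros above the superdiagonal
assert np.allclose(U @ B @ V.T, A)                  # A = U B V^T
```

Note that $n$ left and $n-2$ right reflectors are applied, matching the count above.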
The Golub-Kahan bidiagonalization is not the most efficient way to calculate the bidiagonal decomposition. It requires $\sim 4mn^2-\frac{4}{3}n^3$ flops to compute a bidiagonal decomposition of an $m\times n$ matrix with $m>n$. Further, if $\bU, \bV$ are needed explicitly, additional $\sim 4m^2n-2mn^2 + 2n^3$ flops are required \citep{lu2021numerical}.
\paragraph{LHC Bidiagonalization}
Nevertheless, when $m\gg n$, we can first extract the square triangular matrix (i.e., compute the QR decomposition) and then apply the Golub-Kahan bidiagonalization to the square $n\times n$ matrix. This is known as the \textit{Lawson-Hanson-Chan (LHC) bidiagonalization} \citep{lawson1995solving, chan1982improved}, and the procedure is shown in Figure~\ref{fig:lhc-bidiagonal}.\index{LHC bidiagonalization}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{imgs/LHC-bidiagonal.pdf}
\caption{Demonstration of LHC-bidiagonalization of a matrix}
\label{fig:lhc-bidiagonal}
\end{figure}
The LHC bidiagonalization starts by computing the QR decomposition $\bA = \bQ\bR$. It then applies the Golub-Kahan process such that $\widetilde{\bR} = \widetilde{\bU} \widetilde{\bB} \bV^\top$, where $\widetilde{\bR}$ is the square $n\times n$ triangular submatrix inside $\bR$. Append $\widetilde{\bU}$ into
$$
\bU_0 =
\begin{bmatrix}
\widetilde{\bU} & \\
& \bI_{m-n}
\end{bmatrix},
$$
which results in $\bR=\bU_0\bB \bV^\top$ and $\bA = \bQ\bU_0\bB \bV^\top$. Letting $\bU=\bQ\bU_0$, we obtain the bidiagonal decomposition. The QR decomposition requires $2mn^2-\frac{2}{3}n^3$ flops, and the Golub-Kahan process now requires $\frac{8}{3}n^3$ flops (operating on an $n\times n$ submatrix) \citep{lu2021numerical}. Thus the total complexity to obtain the bidiagonal matrix $\bB$ is reduced to
$$
\text{LHC bidiagonalization: } \sim 2mn^2 + 2n^3 \text{ flops}.
$$
The LHC process first creates zeros in the lower triangle of the upper $n\times n$ square of $\bR$ and then destroys them again, but the zeros in the lower $(m-n)\times n$ rectangular block of $\bR$ are kept. Thus when $m-n$ is large enough (i.e., $m\gg n$), there is a net gain. A simple calculation shows that the LHC bidiagonalization costs less than the Golub-Kahan bidiagonalization when $m>\frac{5}{3}n$.
\index{Golub-Kahan process}
\index{LHC process}
\paragraph{Three-Step Bidiagonalization}
The LHC procedure is advantageous only when $m>\frac{5}{3}n$. A further trick is to apply the QR decomposition not at the beginning of the computation, but at a suitable point in the middle \citep{trefethen1997numerical}. In particular, as shown in Figure~\ref{fig:lhc-bidiagonal2}, we apply the first $k$ steps of left and right Householder reflectors as in the Golub-Kahan process, leaving the bottom-right $(m-k)\times(n-k)$ submatrix ``unreflected". We then apply the LHC process to this submatrix to obtain the final bidiagonal decomposition. Doing so reduces the complexity whenever $n<m<2n$.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{imgs/LHC-bidiagonal2.pdf}
\caption{Demonstration of the Three-Step bidiagonalization of a matrix}
\label{fig:lhc-bidiagonal2}
\end{figure}
To conclude, the costs of the three methods are shown as follows:
$$
\left\{
\begin{aligned}
&\text{Golub-Kahan: } \sim 4mn^2-\frac{4}{3}n^3 \,\, \text{ flops}, \\
&\text{LHC: } \sim 2mn^2 + 2n^3 \,\, \text{ flops}, \\
&\text{Three-Step: } \sim 2mn^2 + 2m^2n -\frac{2}{3}m^3 -\frac{2}{3}n^3 \,\, \text{ flops} .
\end{aligned}
\right.
$$
When $m>2n$, LHC is preferred; when $n<m<2n$, the Three-Step method is preferred, though the improvement is modest, as shown in Figure~\ref{fig:bidiagonal-loss-compare}, where the operation counts of the three methods are plotted as functions of $\frac{m}{n}$. Note that the complexity discussed here does not include the extra computation of $\bU, \bV$; we omit this issue for simplicity.
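The crossover points quoted above follow directly from these formulas and can be checked numerically; this is a small sketch with function names of our choosing:

```python
# Leading-order flop counts from the text, as functions of m and n.
def gk(m, n):          # Golub-Kahan
    return 4 * m * n**2 - (4 / 3) * n**3

def lhc(m, n):         # Lawson-Hanson-Chan
    return 2 * m * n**2 + 2 * n**3

def three_step(m, n):  # QR applied in the middle of the reduction
    return 2 * m * n**2 + 2 * m**2 * n - (2 / 3) * m**3 - (2 / 3) * n**3
```

In particular, the Golub-Kahan and LHC counts agree exactly at $m=\frac{5}{3}n$, and the LHC and Three-Step counts agree at $m=2n$.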
\begin{SCfigure}
\centering
\includegraphics[width=0.6\textwidth]{imgs/bidiagonal-loss.pdf}
\caption{Comparison of the complexity of the three bidiagonalization methods. When $m>2n$, LHC is preferred; when $n<m<2n$, the Three-Step method is preferred, though the improvement is modest.}
\label{fig:bidiagonal-loss-compare}
\end{SCfigure}
\section{Connection to Tridiagonal Decomposition}
We first illustrate the connection with the following lemma, which reveals how to construct a tridiagonal matrix from a bidiagonal one.
\begin{lemma}[Construct Tridiagonal From Bidiagonal]\label{lemma:construct-triangular-from-bidia}
Suppose $\bB\in \real^{n\times n}$ is upper bidiagonal. Then $\bT_1=\bB^\top\bB$ and $\bT_2=\bB\bB^\top$ are \textit{symmetric tridiagonal} matrices.
\end{lemma}
The lemma above reveals an important property. Suppose $\bA=\bU\bB\bV^\top$ is the bidiagonal decomposition of $\bA$, then the symmetric matrix $\bA\bA^\top$ has a tridiagonal decomposition
$$
\bA\bA^\top=\bU\bB\bV^\top \bV\bB^\top\bU^\top = \bU\bB\bB^\top\bU^\top.
$$
And the symmetric matrix $\bA^\top\bA$ has a tridiagonal decomposition
$$
\bA^\top\bA=\bV\bB^\top\bU^\top \bU\bB\bV^\top=\bV\bB^\top\bB\bV^\top.
$$
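Both identities are easy to verify numerically; the following NumPy sketch checks that $\bB^\top\bB$ and $\bB\bB^\top$ are symmetric tridiagonal for a random upper bidiagonal $\bB$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# an upper bidiagonal matrix B: main diagonal plus first superdiagonal
B = np.diag(rng.standard_normal(n)) + np.diag(rng.standard_normal(n - 1), 1)

T1 = B.T @ B   # factor appearing in the decomposition of A^T A
T2 = B @ B.T   # factor appearing in the decomposition of A A^T

def is_sym_tridiagonal(T):
    # zero outside the band {-1, 0, +1}, and symmetric
    band = np.triu(np.tril(T, 1), -1)
    return np.allclose(T, band) and np.allclose(T, T.T)
```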
As a final result in this section, we state a theorem giving the tridiagonal decomposition of a symmetric matrix with nonnegative eigenvalues.
\begin{theoremHigh}[Tridiagonal Decomposition for Nonnegative Eigenvalues]\label{theorem:tri-nonnegative-eigen}
Suppose $n\times n$ symmetric matrix $\bA$ has nonnegative eigenvalues, then there exists a matrix $\bZ$ such that
$$
\bA=\bZ\bZ^\top.
$$
Moreover, finding the tridiagonal decomposition of $\bA$ reduces to finding the bidiagonal decomposition $\bZ =\bU\bB\bV^\top$, whereby the tridiagonal decomposition of $\bA$ is given by
$$
\bA = \bZ\bZ^\top = \bU\bB\bB^\top\bU^\top.
$$
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:tri-nonnegative-eigen}]
The eigenvectors of symmetric matrices can be chosen to be orthogonal (Lemma~\ref{lemma:orthogonal-eigenvectors}, p.~\pageref{lemma:orthogonal-eigenvectors}), so that the symmetric matrix $\bA$ can be decomposed into $\bA=\bQ\bLambda\bQ^\top$ (spectral theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}), where $\bLambda$ is a diagonal matrix containing the eigenvalues of $\bA$. When the eigenvalues are nonnegative, $\bLambda$ can be factored as $\bLambda=\bLambda^{1/2} \bLambda^{1/2}$. Letting $\bZ = \bQ\bLambda^{1/2}$, $\bA$ can be factored as $\bA=\bZ\bZ^\top$. Thus, combining our findings yields the result.
\end{proof}
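The construction in the proof translates directly into NumPy; this sketch builds $\bZ=\bQ\bLambda^{1/2}$ from the spectral decomposition of a random positive semidefinite matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T                    # symmetric with nonnegative eigenvalues

lam, Q = np.linalg.eigh(A)     # A = Q diag(lam) Q^T
lam = np.clip(lam, 0.0, None)  # guard against tiny negative round-off
Z = Q * np.sqrt(lam)           # Z = Q Lambda^{1/2} (scales column j by sqrt(lam_j))
```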
\section*{Introduction and Background}
\addcontentsline{toc}{section}{Introduction and Background}
Matrix decomposition has become a core technology in statistics \citep{banerjee2014linear, gentle1998numerical}, optimization \citep{gill2021numerical}, machine learning \citep{goodfellow2016deep, bishop2006pattern}, and deep learning, largely due to the development of the backpropagation algorithm for fitting neural networks and of low-rank neural networks for efficient deep learning.
The sole aim of this book is to give a self-contained introduction to the concepts and mathematical tools of numerical linear algebra and matrix analysis, in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we cannot hope to cover all the useful and interesting results concerning matrix decomposition within this scope, e.g., a separate analysis of Euclidean spaces, Hermitian spaces, and Hilbert spaces. We refer the reader to the literature in the field of linear algebra for a more detailed introduction to the related fields; some excellent examples include \citet{householder2006principles, trefethen1997numerical, strang1993introduction, stewart2000decompositional, gentle2007matrix, higham2002accuracy, quarteroni2010numerical, golub2013matrix, beck2017first, gallier2017fundamentals, boyd2018introduction, strang2019linear, van2020advanced, strang2021every}. Most importantly, this book covers only compact proofs of the existence of the matrix decomposition methods. For more details on how to reduce the computational complexity, rigorous discussions of various applications and examples, why each matrix decomposition method is important in practice, and preliminaries on tensor decomposition, one can refer to \citet{lu2021numerical}.
A matrix decomposition is a way of reducing a matrix into constituent parts that take simpler forms. The underlying principle of the decompositional approach to matrix computation is that it is not the business of matrix algorithmists to solve each particular problem from scratch; instead, complex matrix operations are simplified by performing them on the decomposed parts rather than on the original matrix itself. At a general level, a matrix decomposition task on a matrix $\bA$ can be cast as follows:
\begin{itemize}
\item $\bA=\bQ\bU$: where $\bQ$ is an orthogonal matrix that spans the same column space as $\bA$, and $\bU$ is a relatively simple and sparse matrix from which $\bA$ can be reconstructed.
\item $\bA=\bQ\bT\bQ^\top$: where $\bQ$ is orthogonal, so that $\bA$ and $\bT$ are \textit{similar matrices}\footnote{See Definition~\ref{definition:similar-matrices} (p.~\pageref{definition:similar-matrices}) for a rigorous definition.} sharing the same properties, such as the same eigenvalues and sparsity. Moreover, working with $\bT$ is an easier task than working with $\bA$.
\item $\bA=\bU\bT\bV$: where $\bU, \bV$ are orthogonal matrices such that the columns of $\bU$ and the rows of $\bV$ constitute an orthonormal basis of the column space and row space of $\bA$ respectively.
\item $\underset{m\times n}{\bA}=\underset{m\times r}{\bB}\gap \underset{r\times n}{\bC}$: where $\bB,\bC$ are full-rank matrices that reduce the memory required to store $\bA$. In practice, a low-rank approximation $\underset{m\times n}{\bA}\approx \underset{m\times k}{\bD}\gap \underset{k\times n}{\bF}$ can be employed, where $k<r$ is called the \textit{numerical rank} of the matrix, so that the matrix can be stored much more cheaply and multiplied rapidly with vectors or other matrices.
An approximation of the form $\bA\approx\bD\bF$ is useful for storing the matrix $\bA$ more frugally (we can store $\bD$ and $\bF$ using $k(m+n)$ floats, as opposed to $mn$ floats for storing $\bA$), for efficiently computing a matrix-vector product $\bb = \bA\bx$ (via $\bc = \bF\bx$ and $\bb = \bD\bc$), for data interpretation, and much more.
\item A matrix decomposition, though often expensive to compute, can be reused to solve new problems involving the original matrix in different scenarios. For example, once the factorization of $\bA$ has been obtained, it can be reused to solve the set of linear systems $\{\bb_1=\bA\bx_1, \bb_2=\bA\bx_2, \ldots, \bb_k=\bA\bx_k\}$.
\item More generally, a matrix decomposition can help us understand the internal meaning and inherent logic of multiplication by the matrix, in that each constituent part corresponds to a geometric transformation (see Section~\ref{section:coordinate-transformation}, p.~\pageref{section:coordinate-transformation}).
\end{itemize}
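As a small illustration of the storage and matrix-vector points above, the following NumPy sketch compares the factored form $\bD\bF$ with the explicit matrix; the sizes chosen here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 200, 150, 10
D = rng.standard_normal((m, k))
F = rng.standard_normal((k, n))
A = D @ F                # a rank-k matrix, formed explicitly only for comparison

x = rng.standard_normal(n)
b_full = A @ x           # on the order of m*n multiply-adds
b_lowrank = D @ (F @ x)  # only on the order of k*(m+n) multiply-adds

# storage: k*(m+n) floats for (D, F) versus m*n floats for A
storage_factored, storage_full = k * (m + n), m * n
```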
Matrix decomposition algorithms fall into numerous categories. Nevertheless, we outline six fundamental types here:
\begin{enumerate}
\item Factorizations arising from Gaussian elimination, including the LU decomposition and its positive definite alternative, the Cholesky decomposition;
\item Factorizations obtained by orthogonalizing the columns or rows of a matrix, so that the data can be explained well in an orthonormal basis;
\item Factorizations in which the matrix is skeletonized, so that a subset of its columns or rows represents the whole data with small reconstruction error, while the sparsity and nonnegativity of the matrix are preserved;
\item Reductions to Hessenberg, tridiagonal, or bidiagonal form, so that properties of the matrix, such as its rank and eigenvalues, can be explored via these reduced forms;
\item Factorizations resulting from the computation of the eigenvalues of matrices;
\item Finally, the remaining methods can be cast as a special kind of decomposition involving optimization methods or high-level ideas, where the category may not be straightforward to determine.
\end{enumerate}
The world pictures of decomposition in Figures~\ref{fig:matrix-decom-world-picture} and \ref{fig:matrix-decom-world-picture2} connect the decomposition methods by their internal relationships and separate the different methods by their criteria or prerequisites. The two pictures will become clearer to readers as they progress through the text.
\paragraph{Notation and preliminaries} In the rest of this section, we introduce and recapitulate some basic knowledge of linear algebra. The remaining important concepts are defined and discussed as needed for clarity.
Readers with sufficient background in matrix analysis can skip this section.
In the text, we simplify matters by considering only real matrices. Unless stated otherwise, the eigenvalues of the matrices discussed are also real. We also assume throughout that $||\cdot||=||\cdot||_2$.
In all cases, scalars will be denoted in a non-bold font possibly with subscripts (e.g., $a$, $\alpha$, $\alpha_i$). We will use \textbf{boldface} lower case letters possibly with subscripts to denote vectors (e.g., $\bmu$, $\bx$, $\bx_n$, $\bz$) and
\textbf{boldface} upper case letters possibly with subscripts to denote matrices (e.g., $\bA$, $\bL_j$). The $i$-th element of a vector $\bz$ will be denoted by $\bz_i$ in bold font (or $z_i$ in non-bold font). \textit{The entry in the $i$-th row and $j$-th column of matrix $\bA$ will be denoted by $\bA_{ij}$ if block submatrices are involved, or by $a_{ij}$ otherwise}. Furthermore, it will be helpful to utilize the \textbf{Matlab-style notation}: the submatrix of $\bA$ consisting of rows $i$ to $j$ and columns $k$ to $m$ will be denoted by $\bA_{i:j,k:m}$. When the indices are not contiguous, given ordered subindex sets $I$ and $J$, $\bA[I, J]$ denotes the submatrix of $\bA$ obtained by extracting the rows and columns of $\bA$ indexed by $I$ and $J$, respectively; and $\bA[:, J]$ denotes the submatrix of $\bA$ obtained by extracting the columns of $\bA$ indexed by $J$.
\index{Matlab-style notation}
In all cases, vectors are column vectors unless otherwise stated; a row vector is denoted by the transpose of a column vector, such as $\ba^\top$. The entries of a specific column vector are separated by the symbol $``;"$, e.g., $\bx=[1;2;3]$ is a column vector in $\real^3$. Similarly, the entries of a specific row vector are separated by the symbol $``,"$, e.g., $\by=[1,2,3]$ is a row vector with 3 entries. A column vector can also be denoted by the transpose of a row vector, e.g., $\by=[1,2,3]^\top$ is a column vector.
The transpose of a matrix $\bA$ will be denoted by $\bA^\top$, and its inverse by $\bA^{-1}$. We will denote the $p \times p$ identity matrix by $\bI_p$. A vector or matrix of all zeros will be denoted by a \textbf{boldface} zero $\bzero$, whose size should be clear from context; we also write $\bzero_p$ for the vector of all zeros with $p$ entries.
\index{Eigenvalue}
\begin{definition}[Eigenvalue]
Given any vector space $E$ over a field $K$ and any linear map $\bA: E \rightarrow E$, a scalar $\lambda \in K$ is called an eigenvalue, or proper value, or characteristic value of $\bA$ if there is some nonzero vector $\bu \in E$ such that
\begin{equation*}
\bA \bu = \lambda \bu.
\end{equation*}
\end{definition}
\index{Spectrum}
\index{Spectral radius}
\begin{definition}[Spectrum and Spectral Radius]\label{definition:spectrum}
The set of all eigenvalues of $\bA$ is called the spectrum of $\bA$ and is denoted by $\Lambda(\bA)$. The largest magnitude of the eigenvalues is known as the spectral radius $\rho(\bA)$:
$$
\rho(\bA) = \mathop{\max}_{\lambda\in \Lambda(\bA)} |\lambda|.
$$
\end{definition}
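As a quick numerical illustration of this definition (the example matrix is ours), the spectral radius can be computed from the eigenvalues directly:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])     # eigenvalues are -1 and -2
spectrum = np.linalg.eigvals(A)  # the spectrum Lambda(A)
rho = max(abs(spectrum))         # spectral radius: largest eigenvalue magnitude
```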
\index{Eigenvector}
\begin{definition}[Eigenvector]
A vector $\bu \in E$ is called an eigenvector, or proper vector, or characteristic vector of $\bA$ if $\bu \neq \bzero$ and there is some $\lambda \in K$ such that
\begin{equation*}
\bA \bu = \lambda \bu,
\end{equation*}
where the scalar $\lambda$ is then an eigenvalue. And we say that $\bu$ is an eigenvector associated with $\lambda$.
\end{definition}
Moreover, the pair $(\lambda, \bu)$ above is said to be an \textbf{eigenpair}. Intuitively, these definitions mean that multiplying the matrix $\bA$ by the vector $\bu$ results in a new vector that is in the same direction as $\bu$, but scaled by a factor $\lambda$. For any eigenvector $\bu$ and any nonzero scalar $s$, the vector $s\bu$ is still an eigenvector of $\bA$; this is why we speak of an eigenvector of $\bA$ \textit{associated with} the eigenvalue $\lambda$ rather than of a unique eigenvector. To avoid ambiguity, we usually assume that the eigenvector is normalized to have length $1$ and that its first entry is positive (or negative), since both $\bu$ and $-\bu$ are eigenvectors.
\index{Linearly independent}
In what follows, we will make extensive use of the idea of linear independence of a set of vectors. Two equivalent definitions are given as follows.
\begin{definition}[Linearly Independent]
A set of vectors $\{\ba_1, \ba_2, \ldots, \ba_m\}$ is called linearly independent if the only combination satisfying $x_1\ba_1+x_2\ba_2+\ldots+x_m\ba_m=\bzero$ is the one with all $x_i$'s equal to zero. An equivalent definition is that $\ba_1\neq \bzero$, and for every $k>1$, the vector $\ba_k$ does not belong to the span of $\{\ba_1, \ba_2, \ldots, \ba_{k-1}\}$.
\end{definition}
In the study of linear algebra, every vector space has a basis, and every vector is a linear combination of members of the basis. We then define the span of a set of vectors and the dimension of a subspace via the basis.
\index{Span}
\begin{definition}[Span]
If every vector $\bv$ in subspace $\mathcal{V}$ can be expressed as a linear combination of $\{\ba_1, \ba_2, \ldots,$ $\ba_m\}$, then $\{\ba_1, \ba_2, \ldots, \ba_m\}$ is said to span $\mathcal{V}$.
\end{definition}
\index{Subspace}
\begin{definition}[Subspace]
A nonempty subset $\mathcal{V}$ of $\real^n$ is called a subspace if $x\ba+y\bb\in \mathcal{V}$ for every $\ba,\bb\in \mathcal{V}$ and every $x,y\in \real$.
\end{definition}
\index{Basis}
\index{Dimension}
\begin{definition}[Basis and Dimension]
A set of vectors $\{\ba_1, \ba_2, \ldots, \ba_m\}$ is called a basis of $\mathcal{V}$ if they are linearly independent and span $\mathcal{V}$. Every basis of a given subspace has the same number of vectors, and the number of vectors in any basis is called the dimension of the subspace $\mathcal{V}$. By convention, the subspace $\{\bzero\}$ is said to have dimension zero. Furthermore, every subspace of nonzero dimension has a basis that is orthogonal, i.e., the basis of a subspace can be chosen orthogonal.
\end{definition}
\index{Column space}
\index{Range}
\begin{definition}[Column Space (Range)]
If $\bA$ is an $m \times n$ real matrix, we define the column space (or range) of $\bA$ to be the set spanned by its columns:
\begin{equation*}
\mathcal{C} (\bA) = \{ \by\in \mathbb{R}^m: \exists \bx \in \mathbb{R}^n, \, \by = \bA \bx \}.
\end{equation*}
And the row space of $\bA$ is the set spanned by its rows, which is equal to the column space of $\bA^\top$:
\begin{equation*}
\mathcal{C} (\bA^\top) = \{ \bx\in \mathbb{R}^n: \exists \by \in \mathbb{R}^m, \, \bx = \bA^\top \by \}.
\end{equation*}
\end{definition}
\index{Null space}
\index{Nullspace}
\index{Kernel}
\begin{definition}[Null Space (Nullspace, Kernel)]
If $\bA$ is an $m \times n$ real matrix, we define the null space (or kernel, or nullspace) of $\bA$ to be the set:
\begin{equation*}
\nspace (\bA) = \{\by \in \mathbb{R}^n: \, \bA \by = \bzero \}.
\end{equation*}
And the null space of $\bA^\top$ is defined as \begin{equation*}
\nspace (\bA^\top) = \{\bx \in \mathbb{R}^m: \, \bA^\top \bx = \bzero \}.
\end{equation*}
\end{definition}
Both the column space of $\bA$ and the null space of $\bA^\top$ are subspaces of $\real^m$. In fact, every vector in $\nspace(\bA^\top)$ is perpendicular to $\cspace(\bA)$ and vice versa.\footnote{Similarly, every vector in $\nspace(\bA)$ is perpendicular to $\cspace(\bA^\top)$ and vice versa.}
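This orthogonality can be observed numerically: the left singular vectors of $\bA$ split $\real^m$ into an orthonormal basis of $\cspace(\bA)$ and one of $\nspace(\bA^\top)$. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))  # full column rank with probability 1
U, s, Vt = np.linalg.svd(A)
col_space = U[:, :3]             # orthonormal basis of C(A)
left_null = U[:, 3:]             # orthonormal basis of N(A^T)
```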
\index{Rank}
\begin{definition}[Rank]
The \textit{rank} of a matrix $\bA\in \real^{m\times n}$ is the dimension of the column space of $\bA$. That is, the rank of $\bA$ is equal to the maximal number of linearly independent columns of $\bA$, which is also the maximal number of linearly independent rows of $\bA$. The matrix $\bA$ and its transpose $\bA^\top$ have the same rank. We say that $\bA$ has full rank if its rank is equal to $\min\{m,n\}$.
Specifically, given a nonzero vector $\bu \in \real^m$ and a nonzero vector $\bv \in \real^n$, the $m\times n$ matrix $\bu\bv^\top$ has rank 1. In short, the rank of a matrix is equal to:
\begin{itemize}
\item number of linearly independent columns;
\item number of linearly independent rows;
\item and remarkably, these are always the same (see Theorem~\ref{lemma:equal-dimension-rank}, p.~\pageref{lemma:equal-dimension-rank}).
\end{itemize}
\end{definition}
\index{Orthogonal complement}
\begin{definition}[Orthogonal Complement in General]
The orthogonal complement $\mathcalV^\perp$ of a subspace $\mathcalV$ contains every vector that is perpendicular to $\mathcalV$. That is,
$$
\mathcalV^\perp = \{\bv | \bv^\top\bu=0, \,\,\, \forall \bu\in \mathcalV \}.
$$
The two subspaces intersect only at the zero vector and together span the entire space: the dimensions of $\mathcalV$ and $\mathcalV^\perp$ add up to the dimension of the whole space. Furthermore, it satisfies that $(\mathcalV^\perp)^\perp=\mathcalV$.
\end{definition}
\begin{definition}[Orthogonal Complement of Column Space]
If $\bA$ is an $m \times n$ real matrix, the orthogonal complement of $\mathcal{C}(\bA)$, $\mathcal{C}^{\bot}(\bA)$ is the subspace defined as:
\begin{equation*}
\begin{aligned}
\mathcal{C}^{\bot}(\bA) &= \{\by\in \mathbb{R}^m: \, \by^\top \bA \bx=0, \, \forall \bx \in \mathbb{R}^n \} \\
&=\{\by\in \mathbb{R}^m: \, \by^\top \bv = 0, \, \forall \bv \in \mathcal{C}(\bA) \}.
\end{aligned}
\end{equation*}
\end{definition}
Then we have the four fundamental subspaces for any matrix $\bA\in \real^{m\times n}$ with rank $r$, as shown in Theorem~\ref{theorem:fundamental-linear-algebra}. Before presenting the fundamental theorem of linear algebra, we first prove the following result, that the row rank of a matrix equals its column rank, whose proof is essential in the sequel.
\index{Rank}
\index{Matrix rank}
\begin{theorem}[Row Rank Equals Column Rank]\label{lemma:equal-dimension-rank}
The dimension of the column space of a matrix $\bA\in \real^{m\times n}$ is equal to the dimension of its
row space, i.e., the row rank and the column rank of a matrix $\bA$ are equal.
\end{theorem}
\begin{proof}[{of Theorem~\ref{lemma:equal-dimension-rank}}]
We first notice that the null space of $\bA$ is orthogonal to the row space of $\bA$: $\nspace(\bA) \bot \cspace(\bA^\top)$ (where the row space of $\bA$ is exactly the column space of $\bA^\top$); that is, vectors in the null space of $\bA$ are orthogonal to vectors in the row space of $\bA$. To see this, suppose $\bA$ has rows $\ba_1^\top, \ba_2^\top, \ldots, \ba_m^\top$, i.e., $\bA=[\ba_1^\top; \ba_2^\top; \ldots; \ba_m^\top]$. For any vector $\bx\in \nspace(\bA)$, we have $\bA\bx = \bzero$, that is, $[\ba_1^\top\bx; \ba_2^\top\bx; \ldots; \ba_m^\top\bx]=\bzero$. Since the row space of $\bA$ is spanned by $\ba_1^\top, \ba_2^\top, \ldots, \ba_m^\top$, it follows that $\bx$ is perpendicular to every vector in $\cspace(\bA^\top)$, which means $\nspace(\bA) \bot \cspace(\bA^\top)$.
Now suppose the dimension of the row space of $\bA$ is $r$. \textcolor{blue}{Let $\br_1, \br_2, \ldots, \br_r$ be a set of vectors in $\real^n$ that forms a basis for the row space}. Then the $r$ vectors $\bA\br_1, \bA\br_2, \ldots, \bA\br_r$ are in the column space of $\bA$ and are linearly independent. To see this, suppose a linear combination of the $r$ vectors vanishes: $x_1\bA\br_1 + x_2\bA\br_2+ \ldots+ x_r\bA\br_r=\bzero$, that is, $\bA(x_1\br_1 + x_2\br_2+ \ldots+ x_r\br_r)=\bzero$, so the vector $\bv=x_1\br_1 + x_2\br_2+ \ldots+ x_r\br_r$ is in the null space of $\bA$. But since $\br_1, \br_2, \ldots, \br_r$ form a basis for the row space of $\bA$, $\bv$ is also in the row space of $\bA$. We have shown that vectors from the null space of $\bA$ are perpendicular to vectors from the row space of $\bA$, thus $\bv^\top\bv=0$ and $x_1=x_2=\ldots=x_r=0$. Then \textcolor{blue}{$\bA\br_1, \bA\br_2, \ldots, \bA\br_r$ are in the column space of $\bA$, and they are linearly independent}, which means the dimension of the column space of $\bA$ is at least $r$. This result shows that \textbf{row rank of $\bA\leq $ column rank of $\bA$}.
Applying the same process to $\bA^\top$ gives \textbf{column rank of $\bA\leq $ row rank of $\bA$}. This completes the proof.
\end{proof}
A further consequence of this proof is that if $\{\br_1, \br_2, \ldots, \br_r\}$ is a set of vectors in $\real^n$ that forms a basis for the row space, then \textcolor{blue}{$\{\bA\br_1, \bA\br_2, \ldots, \bA\br_r\}$ is a basis for the column space of $\bA$}. We formulate this finding in the following lemma.
\begin{lemma}[Column Basis from Row Basis]\label{lemma:column-basis-from-row-basis}
For any matrix $\bA\in \real^{m\times n}$, suppose that $\{\br_1, \br_2, \ldots, \br_r\}$ is a set of vectors in $\real^n$ which forms a basis for the row space, then $\{\bA\br_1, \bA\br_2, \ldots, \bA\br_r\}$ is a basis for the column space of $\bA$.
\end{lemma}
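Lemma~\ref{lemma:column-basis-from-row-basis} can be checked on a random low-rank matrix; in this sketch we take the row-space basis from the SVD:

```python
import numpy as np

rng = np.random.default_rng(4)
r, m, n = 2, 5, 4
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix

_, _, Vt = np.linalg.svd(A)
R = Vt[:r].T    # columns r_1, ..., r_r: a basis for the row space of A
C = A @ R       # candidate column-space basis A r_1, ..., A r_r
```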
For any matrix $\bA\in \real^{m\times n}$, it is easy to verify that any vector in the row space of $\bA$ is perpendicular to any vector in the null space of $\bA$: if $\bx_n \in \nspace(\bA)$, then $\bA\bx_n = \bzero$, so $\bx_n$ is perpendicular to every row of $\bA$, which agrees with our claim.
Similarly, any vector in the column space of $\bA$ is perpendicular to any vector in the null space of $\bA^\top$. Further, the column space of $\bA$ together with the null space of $\bA^\top$ spans the whole of $\real^m$, which is part of the fundamental theorem of linear algebra.
The fundamental theorem contains two parts: the dimensions of the subspaces and the orthogonality of the subspaces. The orthogonality is easily verified, as shown above. Moreover, when the row space has dimension $r$, the null space has dimension $n-r$. This is less obvious, and we prove it in the following theorem.
\index{Fundamental spaces}
\index{Fundamental theorem of linear algebra}
\begin{figure}[h!]
\centering
\includegraphics[width=0.98\textwidth]{imgs/lafundamental.pdf}
\caption{Two pairs of orthogonal subspaces in $\real^n$ and $\real^m$. $\dim(\cspace(\bA^\top)) + \dim(\nspace(\bA))=n$ and $\dim(\nspace(\bA^\top)) + \dim(\cspace(\bA))=m$. The null space component goes to zero as $\bA\bx_n = \bzero \in \real^m$. The row space component goes to column space as $\bA\bx_r = \bA(\bx_r+\bx_n)=\bb \in \cspace(\bA)$.}
\label{fig:lafundamental}
\end{figure}
\begin{mdframed}[hidealllines=\mdframehideline,backgroundcolor=\mdframecolor]
\begin{theorem}[The Fundamental Theorem of Linear Algebra]\label{theorem:fundamental-linear-algebra}
\textit{Orthogonal Complement} and \textit{Rank-Nullity Theorem}: for any matrix $\bA\in \real^{m\times n}$, we have
$\bullet$ $\nspace(\bA)$ is the orthogonal complement of the row space $\cspace(\bA^\top)$ in $\real^n$: $\dim(\nspace(\bA))+\dim(\cspace(\bA^\top))=n$;
$\bullet$ $\nspace(\bA^\top)$ is the orthogonal complement of the column space $\cspace(\bA)$ in $\real^m$: $\dim(\nspace(\bA^\top))+\dim(\cspace(\bA))=m$;
$\bullet$ For a rank-$r$ matrix $\bA$, $\dim(\cspace(\bA^\top)) = \dim(\cspace(\bA)) = r$, so that $\dim(\nspace(\bA)) = n-r$ and $\dim(\nspace(\bA^\top))=m-r$.
\end{theorem}
\end{mdframed}
\begin{proof}[of Theorem~\ref{theorem:fundamental-linear-algebra}]
Following the proof of Theorem~\ref{lemma:equal-dimension-rank}, let $\br_1, \br_2, \ldots, \br_r$ be a set of vectors in $\real^n$ that forms a basis for the row space; then \textcolor{blue}{$\bA\br_1, \bA\br_2, \ldots, \bA\br_r$ is a basis for the column space of $\bA$}. Let $\bn_1, \bn_2, \ldots, \bn_k \in \real^n$ form a basis for the null space of $\bA$. Again by the proof of Theorem~\ref{lemma:equal-dimension-rank}, $\nspace(\bA) \bot \cspace(\bA^\top)$; thus, $\br_1, \br_2, \ldots, \br_r$ are perpendicular to $\bn_1, \bn_2, \ldots, \bn_k$. Then, $\{\br_1, \br_2, \ldots, \br_r, \bn_1, \bn_2, \ldots, \bn_k\}$ is linearly independent in $\real^n$.
For any vector $\bx\in \real^n$, $\bA\bx$ is in the column space of $\bA$. Then it can be expressed as a combination of $\bA\br_1, \bA\br_2, \ldots, \bA\br_r$: $\bA\bx = \sum_{i=1}^{r}a_i\bA\br_i$, which states that $\bA(\bx-\sum_{i=1}^{r}a_i\br_i) = \bzero$, and $\bx-\sum_{i=1}^{r}a_i\br_i$ is thus in $\nspace(\bA)$. Since $\{\bn_1, \bn_2, \ldots, \bn_k\}$ is a basis for the null space of $\bA$, $\bx-\sum_{i=1}^{r}a_i\br_i$ can be expressed as a combination of $\bn_1, \bn_2, \ldots, \bn_k$: $\bx-\sum_{i=1}^{r}a_i\br_i = \sum_{j=1}^{k}b_j \bn_j$, i.e., $\bx=\sum_{i=1}^{r}a_i\br_i + \sum_{j=1}^{k}b_j \bn_j$. That is, any vector $\bx\in \real^n$ can be expressed in terms of $\{\br_1, \br_2, \ldots, \br_r, \bn_1, \bn_2, \ldots, \bn_k\}$, and the set forms a basis for $\real^n$. Thus the dimensions sum to $n$: $r+k=n$, i.e., $\dim(\nspace(\bA))+\dim(\cspace(\bA^\top))=n$. Similarly, we can prove $\dim(\nspace(\bA^\top))+\dim(\cspace(\bA))=m$.
\end{proof}
Figure~\ref{fig:lafundamental} demonstrates two pairs of such orthogonal subspaces and shows how $\bA$ takes $\bx$ into the column space. The dimensions of the row space of $\bA$ and the null space of $\bA$ add to $n$, and the dimensions of the column space of $\bA$ and the null space of $\bA^\top$ add to $m$. The null space component goes to zero, as $\bA\bx_{\bn} = \bzero \in \real^m$; the zero vector is the only vector in both the column space of $\bA$ and the null space of $\bA^\top$. The row space component goes to the column space, as $\bA\bx_{\br} = \bA(\bx_{\br} + \bx_{\bn})=\bb\in \real^m$.
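The dimension counts of Theorem~\ref{theorem:fundamental-linear-algebra} can be observed numerically as well; in this sketch the trailing right singular vectors span $\nspace(\bA)$:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 6, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix

rank = np.linalg.matrix_rank(A)
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[rank:].T  # n - r orthonormal vectors spanning N(A)
```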
\index{Orthogonal matrix}
\begin{definition}[Orthogonal Matrix]
A real square matrix $\bQ$ is an orthogonal matrix if the inverse of $\bQ$ equals the transpose of $\bQ$, that is, $\bQ^{-1}=\bQ^\top$ and $\bQ\bQ^\top = \bQ^\top\bQ = \bI$. In other words, suppose $\bQ=[\bq_1, \bq_2, \ldots, \bq_n]$, where $\bq_i \in \real^n$ for all $i \in \{1, 2, \ldots, n\}$; then $\bq_i^\top \bq_j = \delta(i,j)$, with $\delta(i,j)$ being the Kronecker delta function. If $\bQ$ contains only $\gamma$ of these columns with $\gamma<n$, then $\bQ^\top\bQ = \bI_\gamma$ still holds, with $\bI_\gamma$ being the $\gamma\times \gamma$ identity matrix, but $\bQ\bQ^\top=\bI$ will not hold. For any vector $\bx$, an orthogonal matrix preserves the length: $||\bQ\bx|| = ||\bx||$.
\end{definition}
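Both defining properties, $\bQ^\top\bQ=\bI$ and length preservation, can be checked on a random orthogonal matrix obtained from a QR decomposition; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # Q is orthogonal
x = rng.standard_normal(4)
```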
\index{Permutation matrix}
\begin{definition}[Permutation Matrix]\label{definition:permutation-matrix}
A permutation matrix $\bP$ is a square binary matrix that has exactly one entry of 1 in each row and each column and 0's elsewhere.
\paragraph{Row Perspective} That is, the permutation matrix $\bP$ contains the rows of the identity $\bI$ in some order, and this order determines the row permutation. To permute the rows of a matrix $\bA$, we multiply on the left, forming $\bP\bA$.
\paragraph{Column Perspective} Equivalently, the permutation matrix $\bP$ contains the columns of the identity $\bI$ in some order, and this order determines the column permutation. The column permutation of $\bA$ is then obtained by multiplying on the right, forming $\bA\bP$.
\end{definition}
The permutation matrix $\bP$ can be represented more efficiently via a vector $J \in \integer_+^n$ of indices such that $\bP = \bI[:, J]$, where $\bI$ is the $n\times n$ identity matrix. Notably, the elements of $J$ are a permutation of $1, 2, \ldots, n$ and therefore sum to $1+2+\ldots+n= \frac{n^2+n}{2}$.
\begin{example}[Permutation]
Suppose,
$$\bA=\begin{bmatrix}
1 & 2&3\\
4&5&6\\
7&8&9
\end{bmatrix}
,\qquad \text{and} \qquad
\bP=\begin{bmatrix}
&1&\\
&&1\\
1&&
\end{bmatrix}.
$$
The row permutation is given by
$$
\bP\bA = \begin{bmatrix}
4&5&6\\
7&8&9\\
1 & 2&3\\
\end{bmatrix},
$$
where the order of the rows of $\bA$ appearing in $\bP\bA$ matches the order of the rows of $\bI$ in $\bP$. And the column permutation is given by
$$
\bA\bP = \begin{bmatrix}
3 & 1 & 2 \\
6 & 4 & 5\\
9 & 7 & 8
\end{bmatrix},
$$
where the order of the columns of $\bA$ appearing in $\bA\bP$ matches the order of the columns of $\bI$ in $\bP$. \exampbar
\end{example}
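The example above, together with the index-vector representation $\bP=\bI[:,J]$ introduced earlier, can be reproduced in NumPy; note that indices are zero-based in code:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
J = [2, 0, 1]                    # zero-based index vector
P = np.eye(3, dtype=int)[:, J]   # the permutation matrix from the example

col_perm = A @ P                 # column permutation: equals A[:, J]
row_perm = P @ A                 # row permutation, as in the example
```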
\index{Nonsingular matrix}
From an introductory course on linear algebra, we have the following remark listing equivalent characterizations of nonsingular matrices.
\begin{remark}[List of Equivalence of Nonsingularity for a Matrix]
For a square matrix $\bA\in \real^{n\times n}$, the following claims are equivalent:
\begin{itemize}
\item $\bA$ is nonsingular;
\item $\bA$ is invertible, i.e., $\bA^{-1}$ exists;
\item $\bA\bx=\bb$ has a unique solution $\bx = \bA^{-1}\bb$;
\item $\bA\bx = \bzero$ has a unique, trivial solution: $\bx=\bzero$;
\item Columns of $\bA$ are linearly independent;
\item Rows of $\bA$ are linearly independent;
\item $\det(\bA) \neq 0$;
\item $\dim(\nspace(\bA))=0$;
\item $\nspace(\bA) = \{\bzero\}$, i.e., the null space is trivial;
\item $\cspace(\bA)=\cspace(\bA^\top) = \real^n$, i.e., the column space or row space span the whole $\real^n$;
\item $\bA$ has full rank $r=n$;
\item The reduced row echelon form is $\bR=\bI$;
\item $\bA^\top\bA$ is symmetric positive definite;
\item $\bA$ has $n$ nonzero (positive) singular values;
\item All eigenvalues are nonzero.
\end{itemize}
\end{remark}
\index{Singular matrix}
It will prove important to keep the above equivalences in mind; otherwise, one can easily get lost. On the other hand, the following remark shows the equivalent claims for singular matrices.
\begin{remark}[List of Equivalence of Singularity for a Matrix]
For a square matrix $\bA\in \real^{n\times n}$ with eigenpair $(\lambda, \bu)$, the following claims are equivalent:
\begin{itemize}
\item $(\bA-\lambda\bI)$ is singular;
\item $(\bA-\lambda\bI)$ is not invertible;
\item $(\bA-\lambda\bI)\bx = \bzero$ has nonzero $\bx\neq \bzero$ solutions, and $\bx=\bu$ is one of such solutions;
\item $(\bA-\lambda\bI)$ has linearly dependent columns;
\item $\det(\bA-\lambda\bI) = 0$;
\item $\dim(\nspace(\bA-\lambda\bI))>0$;
\item Null space of $(\bA-\lambda\bI)$ is nontrivial;
\item Columns of $\bA$ are linearly dependent;
\item Rows of $\bA$ are linearly dependent;
\item $\bA$ has rank $r<n$;
\item Dimension of column space = dimension of row space = $r<n$;
\item $\bA^\top\bA$ is symmetric positive semidefinite, but not positive definite;
\item $\bA$ has $r<n$ nonzero (positive) singular values;
\item Zero is an eigenvalue of $\bA$.
\end{itemize}
\end{remark}
\part{Special Topics}
\chapter{Transformation in Matrix Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Coordinate Transformation in Matrix Decomposition}\label{section:coordinate-transformation}
Suppose a vector $\bv\in \real^3$ has elements $\bv = [3;7; 2]$. But what do the values 3, 7, and 2 mean? In the Cartesian coordinate system, the vector has a component of 3 on the $x$-axis, a component of 7 on the $y$-axis, and a component of 2 on the $z$-axis.\index{Coordinate transformation}
\section{An Overview of Matrix Multiplication}
\paragraph{Coordinate defined by a nonsingular matrix} Suppose further that we have a $3\times 3$ nonsingular matrix $\bB$, so that $\bB$ is invertible and the columns of $\bB$ are linearly independent. Thus the 3 columns of $\bB$ form a basis for the space $\real^{3}$. Going one step further, we can take the 3 columns of $\bB$ as a basis for a \textcolor{blue}{new coordinate system}, which we call the \textcolor{blue}{$B$ coordinate system}. Going back to the Cartesian coordinate system, we also have three basis vectors, $\be_1, \be_2, \be_3$. If we put the three vectors into the columns of a matrix, the matrix will be an identity matrix. So $\bI\bv = \bv$ means \textcolor{blue}{transferring $\bv$ from the Cartesian coordinate system into the Cartesian coordinate system}, the same coordinate system. Similarly, \textcolor{blue}{$\bB\bv=\bu$ transfers $\bv$ from the Cartesian coordinate system into the $B$ system}. Specifically, for $\bv = [3;7; 2]$ and $\bB=[\bb_1, \bb_2, \bb_3]$, we have $\bu=\bB\bv = 3\bb_1+7\bb_2+2\bb_3$, i.e., $\bu$ contains 3 units of the first basis vector $\bb_1$ of $\bB$, 7 units of the second basis vector $\bb_2$, and 2 units of the third basis vector $\bb_3$.
If, again, we want to transfer the vector $\bu$ from the $B$ coordinate system back to the Cartesian coordinate system, we just multiply by the inverse: $\bB^{-1}\bu = \bv$.
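A minimal numerical sketch of this change of coordinates, with a hypothetical nonsingular $\bB$:

```python
import numpy as np

# A hypothetical nonsingular matrix B; its columns b1, b2, b3 form a basis of R^3.
B = np.array([[1., 0., 1.],
              [0., 2., 0.],
              [0., 0., 3.]])
v = np.array([3., 7., 2.])

u = B @ v                          # u = 3*b1 + 7*b2 + 2*b3
v_back = np.linalg.solve(B, u)     # multiplying by B^{-1} recovers v
```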
\paragraph{Coordinate defined by an orthogonal matrix} A $3\times 3$ orthogonal matrix $\bQ$ defines a ``better'' coordinate system since its three columns (i.e., the basis vectors) are mutually orthonormal. $\bQ\bv$ transfers $\bv$ from the Cartesian system to the coordinate system defined by the orthogonal matrix. Since the basis vectors of the orthogonal matrix are orthonormal, just like the three vectors $\be_1, \be_2, \be_3$ in the Cartesian coordinate system, the transformation defined by the orthogonal matrix just rotates or reflects the Cartesian system.
Multiplying by $\bQ^\top$ transfers back to the Cartesian coordinate system.
\begin{figure}[h]
\centering
\includegraphics[width=0.99\textwidth]{imgs/eigenRotate.pdf}
\caption{Eigenvalue Decomposition: $\bX^{-1}$ transforms to a different coordinate system. $\bLambda$ stretches and $\bX$ transforms back. $\bX^{-1}$ and $\bX$ are nonsingular, which will change the basis of the system, and the angle between the vectors $\bv_1$ and $\bv_2$ will \textbf{not} be preserved, that is, the angle between $\bv_1$ and $\bv_2$ is \textbf{different} from the angle between $\bv_1^\prime$ and $\bv_2^\prime$. The lengths of $\bv_1$ and $\bv_2$ are also \textbf{not} preserved, that is, $||\bv_1|| \neq ||\bv_1^\prime||$ and $||\bv_2|| \neq ||\bv_2^\prime||$.}
\label{fig:eigen-rotate}
\end{figure}
\section{Eigenvalue Decomposition}
A square matrix $\bA$ with linearly independent eigenvectors can be factored as $\bA = \bX\bLambda\bX^{-1}$, where $\bX$ and $\bX^{-1}$ are nonsingular so that they define a system transformation intrinsically. $\bA\bu = \bX\bLambda\bX^{-1}\bu$ first transfers $\bu$ into the system defined by $\bX^{-1}$. Let's call this system the \textbf{eigen coordinate system}. $\bLambda$ stretches each component of the vector in the eigen system by the corresponding eigenvalue. Then $\bX$ transfers the resulting vector back to the Cartesian coordinate system. A demonstration of how the eigenvalue decomposition transforms between coordinate systems is shown in Figure~\ref{fig:eigen-rotate}, where $\bv_1, \bv_2$ are two linearly independent eigenvectors of $\bA$ such that they form a basis for $\real^2$.
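The three-step reading of $\bA\bu = \bX\bLambda\bX^{-1}\bu$ can be checked numerically; the matrix below is a hypothetical example:

```python
import numpy as np

# A hypothetical diagonalizable (nonsymmetric) matrix.
A = np.array([[3., 1.],
              [0., 2.]])
lam, X = np.linalg.eig(A)          # A = X diag(lam) X^{-1}

u = np.array([1., 1.])
w = np.linalg.solve(X, u)          # step 1: into the eigen coordinate system
w = lam * w                        # step 2: stretch by the eigenvalues
Au = X @ w                         # step 3: back to the Cartesian system
```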
\begin{figure}[h]
\centering
\includegraphics[width=0.99\textwidth]{imgs/spectralrotate.pdf}
\caption{Spectral Decomposition $\bQ\bLambda \bQ^\top$: $\bQ^\top$ rotates or reflects, $\bLambda$ stretches the circle to an ellipse, and $\bQ$ rotates or reflects back. Orthogonal matrices $\bQ^\top$ and $\bQ$ only change the basis of the system; however, they preserve the angle between the vectors $\bq_1$ and $\bq_2$, as well as their lengths.}
\label{fig:spectral-rotate}
\end{figure}
\section{Spectral Decomposition}
A symmetric matrix $\bA$ can be factored as $\bA = \bQ\bLambda\bQ^\top$, where $\bQ$ and $\bQ^\top$ are orthogonal so that they define a system transformation intrinsically as well. $\bA\bu = \bQ\bLambda\bQ^\top\bu$ first rotates or reflects $\bu$ into the system defined by $\bQ^\top$. Let's call this system the \textbf{spectral coordinate system}. $\bLambda$ stretches each component of the vector in the spectral system by the corresponding eigenvalue. Then $\bQ$ rotates or reflects the resulting vector back to the original coordinate system. A demonstration of how the spectral decomposition transforms between coordinate systems is shown in Figure~\ref{fig:spectral-rotate}, where $\bq_1, \bq_2$ are two linearly independent eigenvectors of $\bA$ that form a basis for $\real^2$. The coordinate transformation in the spectral decomposition is similar to that of the eigenvalue decomposition, except that in the spectral decomposition, vectors that are orthogonal before the transformation by $\bQ^\top$ remain orthogonal afterward. This is a general property of orthogonal matrices: orthogonal matrices can be viewed as matrices that change the basis of other matrices, and hence they preserve the angle (inner product) between vectors
$$
\bu^\top \bv = (\bQ\bu)^\top(\bQ\bv).
$$
The above invariance of the angle between vectors also relies on the invariance of their lengths:
$$
||\bQ\bu|| = ||\bu||.
$$
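A small numerical sketch of the spectral decomposition and of the invariance properties above (the symmetric matrix is a hypothetical example):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])           # symmetric
lam, Q = np.linalg.eigh(A)         # A = Q diag(lam) Q^T

u = np.array([1., 0.])
v = np.array([0., 1.])
# Q preserves inner products and lengths: u^T v = (Qu)^T (Qv), ||Qu|| = ||u||.
```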
\section{SVD}
\begin{figure}[H]
\centering
\includegraphics[width=0.99\textwidth]{imgs/svdrotate.pdf}
\caption{SVD: $\bV^\top$ and $\bU$ rotate or reflect, $\bSigma$ stretches the circle to an ellipse. Orthogonal matrices $\bV^\top$ and $\bU$ only change the basis of the system; however, they preserve the angle between the vectors $\bv_1$ and $\bv_2$, as well as their lengths.}
\label{fig:svd-rotate}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.99\textwidth]{imgs/polarrotate.pdf}
\caption{$\bV\bSigma \bV^\top$ from the SVD or polar decomposition: $\bV^\top$ rotates or reflects, $\bSigma$ stretches the circle to an ellipse, and $\bV$ rotates or reflects back. Orthogonal matrices $\bV^\top$ and $\bV$ only change the basis of the system; however, they preserve the angle between the vectors $\bv_1$ and $\bv_2$, as well as their lengths.}
\label{fig:polar-rotate}
\end{figure}
Any $m\times n$ matrix can be factored as $\bA=\bU\bSigma\bV^\top$, the SVD. $\bA\bu=\bU\bSigma\bV^\top\bu$ then first rotates or reflects $\bu$ into the system defined by $\bV^\top$, which we call the \textbf{$V$ coordinate system}. $\bSigma$ stretches the first $r$ components of the resulting vector in the $V$ system by the singular values. If $n\geq m$, $\bSigma$ additionally sets the next $m-r$ components to zero and removes the final $n-m$ components. If $m>n$, $\bSigma$ sets $n-r$ components to zero and appends $m-n$ additional zero components. Finally, $\bU$ rotates or reflects the resulting vector into the \textbf{$U$ coordinate system} defined by $\bU$. A demonstration of how the SVD transforms in a $2\times 2$ example is shown in Figure~\ref{fig:svd-rotate}. Further, Figure~\ref{fig:polar-rotate} demonstrates the transformation of $\bV\bSigma \bV^\top$ in a $2\times 2$ example.
Similar to the spectral decomposition, orthogonal matrices $\bV^\top$ and $\bU$ only change the basis of the system. However, they preserve the angle between vectors $\bv_1$ and $\bv_2$.
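The transform-stretch-transform reading of the SVD can be sketched numerically; the $2\times 3$ matrix below is a hypothetical example with $r=m=2$ and $n>m$, so $\bSigma$ removes the final $n-m$ component:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])       # m = 2, n = 3, rank r = 2
U, s, Vt = np.linalg.svd(A)        # full SVD: U is 2x2, Vt is 3x3

u = np.array([1., 2., 3.])
w = Vt @ u                         # into the V coordinate system (length n = 3)
w = s * w[:2]                      # stretch the first r components; the final n - m are removed
Au = U @ w                         # into the U coordinate system (length m = 2)
```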
\begin{figure}[H]
\centering
\includegraphics[width=0.99\textwidth]{imgs/polarrotate2.pdf}
\caption{Polar decomposition: $\bV^\top$ rotates or reflects, $\bSigma$ stretches the circle to an ellipse, and $\bV$ rotates or reflects back. Orthogonal matrices $\bV^\top$, $\bV$, $\bQ_l$ only change the basis of the system; however, they preserve the angle between the vectors $\bv_1$ and $\bv_2$, as well as their lengths.}
\label{fig:polar-rotate2}
\end{figure}
\section{Polar Decomposition}
Any $n\times n$ square matrix $\bA$ can be factored as the left polar decomposition $\bA = (\bU\bV^\top)( \bV\bSigma \bV^\top) = \bQ_l\bS$. Similarly, $\bA\bu = \bQ_l( \bV\bSigma \bV^\top)\bu$ transfers $\bu$ into the system defined by $\bV^\top$ and stretches each component by the singular values. The resulting vector is then transferred back into the Cartesian coordinate system by $\bV$. Finally, $\bQ_l$ rotates or reflects the resulting vector from the Cartesian coordinate system into the $Q$ system defined by $\bQ_l$. The right polar decomposition admits a similar description. As in the spectral decomposition, the orthogonal matrices $\bV^\top$ and $\bV$ only change the basis of the system; however, they preserve the angle between the vectors $\bv_1$ and $\bv_2$.
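A minimal sketch of computing the left polar decomposition from the SVD, on a hypothetical $2\times 2$ matrix:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
U, s, Vt = np.linalg.svd(A)
Q_l = U @ Vt                       # orthogonal factor Q_l = U V^T
S = Vt.T @ np.diag(s) @ Vt         # symmetric factor S = V Sigma V^T
```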
\part{Triangularization, Orthogonalization, and Gram-Schmidt Process}
\newpage
\chapter{QR Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{QR Decomposition}
In many applications, we are interested in the column space of a matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n] \in \real^{m\times n}$. The successive spaces spanned by the columns $\ba_1, \ba_2, \ldots$ of $\bA$ are
$$
\cspace([\ba_1])\,\,\,\, \subseteq\,\,\,\, \cspace([\ba_1, \ba_2]) \,\,\,\,\subseteq\,\,\,\, \cspace([\ba_1, \ba_2, \ba_3])\,\,\,\, \subseteq\,\,\,\, \ldots,
$$
where $\cspace([\ldots])$ is the subspace spanned by the vectors included in the brackets. The idea of QR decomposition is the construction of a sequence of orthonormal vectors $\bq_1, \bq_2, \ldots$ that span the same successive subspaces.
$$
\bigg\{\cspace([\bq_1])=\cspace([\ba_1])\bigg\}\subseteq
\bigg\{\cspace([\bq_1, \bq_2])=\cspace([\ba_1, \ba_2])\bigg\}\subseteq
\bigg\{\cspace([\bq_1, \bq_2, \bq_3])=\cspace([\ba_1, \ba_2, \ba_3])\bigg\}
\subseteq \ldots,
$$
We present the result of the QR decomposition in the following theorem and delay the discussion of its existence to the next sections.
\begin{theoremHigh}[QR Decomposition]\label{theorem:qr-decomposition}
Every $m\times n$ matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$ (with linearly independent or dependent columns) with $m\geq n$ can be factored as
$$
\bA = \bQ\bR,
$$
where
\begin{enumerate}
\item \textbf{Reduced}: $\bQ$ is $m\times n$ with orthonormal columns and $\bR$ is an $n\times n$ upper triangular matrix which is known as the \textbf{reduced QR decomposition};
\item \textbf{Full}: $\bQ$ is $m\times m$ with orthonormal columns and $\bR$ is an $m\times n$ upper triangular matrix, which is known as the \textbf{full QR decomposition}. If we further restrict the upper triangular matrix to be square, the full QR decomposition can be written as
$$
\bA = \bQ\begin{bmatrix}
\bR_0\\
\bzero
\end{bmatrix},
$$
where $\bR_0$ is an $n\times n$ upper triangular matrix.
\end{enumerate}
Specifically, when $\bA$ has full rank, i.e., $\bA$ has linearly independent columns, $\bR$ also has linearly independent columns and is nonsingular in the \textit{reduced} case. This implies the diagonal entries of $\bR$ are nonzero. Under this condition, when we further restrict the elements on the diagonal of $\bR$ to be positive, the \textit{reduced} QR decomposition is \textbf{unique}. The \textit{full} QR decomposition is normally not unique since the right-most $(m-n)$ columns of $\bQ$ can be in any order.
\end{theoremHigh}
\section{Project a Vector Onto Another Vector}\label{section:project-onto-a-vector}
Projecting a vector $\ba$ onto a vector $\bb$ means finding the vector on the line of $\bb$ closest to $\ba$. The projection vector $\widehat{\ba}$ is some multiple of $\bb$. Let $\widehat{\ba} = \widehat{x} \bb$; then $\ba-\widehat{\ba}$ is perpendicular to $\bb$, as shown in Figure~\ref{fig:project-line}. We then get the following result:
\begin{tcolorbox}[title={Project Vector $\ba$ Onto Vector $\bb$}]
$\ba^\perp =\ba-\widehat{\ba}$ is perpendicular to $\bb$, so $(\ba-\widehat{x}\bb)^\top\bb=0$: $\widehat{x}$ = $\frac{\ba^\top\bb}{\bb^\top\bb}$ and $\widehat{\ba} = \frac{\ba^\top\bb}{\bb^\top\bb}\bb = \frac{\bb\bb^\top}{\bb^\top\bb}\ba$.
\end{tcolorbox}
\begin{figure}[h!]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Project onto a line]{\label{fig:project-line}
\includegraphics[width=0.47\linewidth]{./imgs/projectline.pdf}}
\quad
\subfigure[Project onto a space]{\label{fig:project-space}
\includegraphics[width=0.47\linewidth]{./imgs/projectspace.pdf}}
\caption{Project a vector onto a line and a space.}
\label{fig:projection-qr}
\end{figure}
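The boxed formula above can be checked on a small hypothetical example:

```python
import numpy as np

a = np.array([2., 3.])
b = np.array([4., 0.])

x_hat = (a @ b) / (b @ b)          # scalar coefficient a^T b / b^T b
a_hat = x_hat * b                  # projection of a onto the line of b
a_perp = a - a_hat                 # residual, perpendicular to b
```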
\section{Project a Vector Onto a Plane}\label{section:project-onto-a-plane}
Projecting a vector $\ba$ onto the space spanned by $\bb_1, \bb_2, \ldots, \bb_n$ means finding the vector in the column space of $[\bb_1, \bb_2, \ldots, \bb_n]$ closest to $\ba$. The projection vector $\widehat{\ba}$ is a combination of $\bb_1, \bb_2, \ldots, \bb_n$: $\widehat{\ba} = \widehat{x}_1\bb_1+ \widehat{x}_2\bb_2+\ldots+\widehat{x}_n\bb_n$. This is actually a least squares problem. To find the projection, we just solve the normal equation $\bB^\top\bB\widehat{\bx} = \bB^\top\ba$, where $\bB=[\bb_1, \bb_2, \ldots, \bb_n]$ and $\widehat{\bx}=[\widehat{x}_1, \widehat{x}_2, \ldots, \widehat{x}_n]$. We refer to \citet{strang1993introduction, trefethen1997numerical, yang2000matrix, golub2013matrix, lu2021rigorous} for the details of this projection view of least squares, as it is not the main interest of this book. For each vector $\bb_i$, the projection of $\ba$ in the direction of $\bb_i$ can be obtained analogously by
$$
\widehat{\ba}_i = \frac{\bb_i\bb_i^\top}{\bb_i^\top\bb_i}\ba, \gap \forall i \in \{1,2,\ldots, n\}.
$$
Letting $\widehat{\ba}=\sum_{i=1}^{n}\widehat{\ba}_i$, and assuming the vectors $\bb_1, \ldots, \bb_n$ are mutually orthogonal, this results in
$$
\ba^\perp = (\ba-\widehat{\ba}) \perp \cspace(\bB),
$$
i.e., $(\ba-\widehat{\ba})$ is perpendicular to the column space of $\bB=[\bb_1, \bb_2, \ldots, \bb_n]$ as shown in Figure~\ref{fig:project-space}.
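A minimal numerical sketch of this projection via the normal equation, on hypothetical data:

```python
import numpy as np

B = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])           # columns b1, b2 span a plane in R^3
a = np.array([1., 2., 6.])

x_hat = np.linalg.solve(B.T @ B, B.T @ a)   # normal equation B^T B x = B^T a
a_hat = B @ x_hat                            # projection of a onto C(B)
a_perp = a - a_hat                           # perpendicular to the column space of B
```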
\section{Existence of the QR Decomposition via the Gram-Schmidt Process}\label{section:gram-schmidt-process}
Consider three linearly independent vectors $\{\ba_1, \ba_2, \ba_3\}$ and the space spanned by them, $\cspace{([\ba_1, \ba_2, \ba_3])}$, i.e., the column space of the matrix $[\ba_1, \ba_2, \ba_3]$. We intend to construct three orthogonal vectors $\{\bb_1, \bb_2, \bb_3\}$ such that $\cspace{([\bb_1, \bb_2, \bb_3])}$ = $\cspace{([\ba_1, \ba_2, \ba_3])}$. Then we divide the orthogonal vectors by their lengths to normalize them. This process produces three mutually orthonormal vectors $\bq_1 = \frac{\bb_1}{||\bb_1||}$, $\bq_2 = \frac{\bb_2}{||\bb_2||}$, $\bq_3 = \frac{\bb_3}{||\bb_3||}$.
For the first vector, we choose $\bb_1 = \ba_1$ directly. The second vector $\bb_2$ must be perpendicular to the first one. This is actually the vector $\ba_2$ subtracting its projection along $\bb_1$:
\begin{equation}
\begin{aligned}
\bb_2 &= \ba_2- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} \ba_2 = (\bI- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} )\ba_2 \qquad &(\text{Projection view})\\
&= \ba_2- \underbrace{\frac{ \bb_1^\top \ba_2}{\bb_1^\top\bb_1} \bb_1}_{\widehat{\ba}_2}, \qquad &(\text{Combination view}) \nonumber
\end{aligned}
\end{equation}
where the first equation shows $\bb_2$ is a multiplication of the matrix $(\bI- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} )$ and the vector $\ba_2$, i.e., project $\ba_2$ onto the orthogonal complement space of $\cspace{([\bb_1])}$. The second equality in the above equation shows $\ba_2$ is a combination of $\bb_1$ and $\bb_2$.
Clearly, the space spanned by $\bb_1, \bb_2$ is the same space spanned by $\ba_1, \ba_2$. The situation is shown in Figure~\ref{fig:gram-schmidt1} in which we choose \textbf{the direction of $\bb_1$ as the $x$-axis in the Cartesian coordinate system}. $\widehat{\ba}_2$ is the projection of $\ba_2$ onto line $\bb_1$. It can be clearly shown that the part of $\ba_2$ perpendicular to $\bb_1$ is $\bb_2 = \ba_2 - \widehat{\ba}_2$ from the figure.
For the third vector $\bb_3$, it must be perpendicular to both the $\bb_1$ and $\bb_2$ which is actually the vector $\ba_3$ subtracting its projection along the plane spanned by $\bb_1$ and $\bb_2$
\begin{equation}\label{equation:gram-schdt-eq2}
\begin{aligned}
\bb_3 &= \ba_3- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} \ba_3 - \frac{\bb_2 \bb_2^\top}{\bb_2^\top\bb_2} \ba_3 = (\bI- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} - \frac{\bb_2 \bb_2^\top}{\bb_2^\top\bb_2} )\ba_3 \qquad &(\text{Projection view})\\
&= \ba_3- \underbrace{\frac{ \bb_1^\top\ba_3}{\bb_1^\top\bb_1} \bb_1}_{\widehat{\ba}_3} - \underbrace{\frac{ \bb_2^\top\ba_3}{\bb_2^\top\bb_2} \bb_2}_{\bar{\ba}_3}, \qquad &(\text{Combination view})
\end{aligned}
\end{equation}
where the first equation shows $\bb_3$ is a multiplication of the matrix $(\bI- \frac{\bb_1 \bb_1^\top}{\bb_1^\top\bb_1} - \frac{\bb_2 \bb_2^\top}{\bb_2^\top\bb_2} )$ and the vector $\ba_3$, i.e., project $\ba_3$ onto the orthogonal complement space of $\cspace{([\bb_1, \bb_2])}$. The second equality in the above equation shows $\ba_3$ is a combination of $\bb_1, \bb_2, \bb_3$. We will see this property is essential in the idea of the QR decomposition.
Again, it can be shown that the space spanned by $\bb_1, \bb_2, \bb_3$ is the same space spanned by $\ba_1, \ba_2, \ba_3$. The situation is shown in Figure~\ref{fig:gram-schmidt2}, in which we choose \textbf{the direction of $\bb_2$ as the $y$-axis of the Cartesian coordinate system}. $\widehat{\ba}_3$ is the projection of $\ba_3$ onto line $\bb_1$, $\bar{\ba}_3$ is the projection of $\ba_3$ onto line $\bb_2$. It can be shown that the part of $\ba_3$ perpendicular to both $\bb_1$ and $\bb_2$ is $\bb_3=\ba_3-\widehat{\ba}_3-\bar{\ba}_3$ from the figure.
Finally, we normalize each vector by dividing by its length, which produces three orthonormal vectors $\bq_1 = \frac{\bb_1}{||\bb_1||}$, $\bq_2 = \frac{\bb_2}{||\bb_2||}$, $\bq_3 = \frac{\bb_3}{||\bb_3||}$.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Project $\ba_2$ onto the space perpendicular to $\bb_1$.]{\label{fig:gram-schmidt1}
\includegraphics[width=0.47\linewidth]{./imgs/gram-schmidt1.pdf}}
\quad
\subfigure[Project $\ba_3$ onto the space perpendicular to $\bb_1, \bb_2$.]{\label{fig:gram-schmidt2}
\includegraphics[width=0.47\linewidth]{./imgs/gram-schmidt2.pdf}}
\caption{The Gram-Schmidt process.}
\label{fig:gram-schmidt-12}
\end{figure}
This idea can be extended to a set of more than three vectors, and the process is known as the \textit{Gram-Schmidt process}. After this process, the matrix $\bA$ will be triangularized. The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but it appeared earlier in the work of Pierre-Simon Laplace in the theory of Lie group decomposition.
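The three-vector construction above can be sketched numerically (the vectors are hypothetical examples):

```python
import numpy as np

a1 = np.array([1., 1., 0.])
a2 = np.array([1., 0., 1.])
a3 = np.array([0., 1., 1.])

b1 = a1                                                # first vector kept as-is
b2 = a2 - (b1 @ a2) / (b1 @ b1) * b1                   # subtract projection onto b1
b3 = a3 - (b1 @ a3) / (b1 @ b1) * b1 \
        - (b2 @ a3) / (b2 @ b2) * b2                   # subtract projections onto b1, b2

q1 = b1 / np.linalg.norm(b1)                           # normalize
q2 = b2 / np.linalg.norm(b2)
q3 = b3 / np.linalg.norm(b3)
```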
As we mentioned previously, the idea of the QR decomposition is the construction of a sequence of orthonormal vectors $\bq_1, \bq_2, \ldots$ that span the same successive subspaces.
$$
\bigg\{\cspace([\bq_1])=\cspace([\ba_1])\bigg\} \subseteq
\bigg\{\cspace([\bq_1, \bq_2])=\cspace([\ba_1, \ba_2])\bigg\} \subseteq
\bigg\{\cspace([\bq_1, \bq_2, \bq_3])=\cspace([\ba_1, \ba_2, \ba_3])\bigg\}
\subseteq \ldots,
$$
This implies any $\ba_k$ is in the subspace $\cspace([\bq_1, \bq_2, \ldots, \bq_k])$.\footnote{And also, any $\bq_k$ is in the subspace $\cspace([\ba_1, \ba_2, \ldots, \ba_k])$.} As long as we have found these orthonormal vectors, to reconstruct the $\ba_i$'s from the orthonormal matrix $\bQ=[\bq_1, \bq_2, \ldots, \bq_n]$, an upper triangular matrix $\bR$ is needed such that $\bA = \bQ\bR$.
The Gram–Schmidt process is not the only algorithm for finding the QR decomposition.
Several other QR decomposition algorithms exist such as \textit{Householder reflections} and \textit{Givens rotations} which are more reliable in the presence of round-off errors. These QR decomposition methods may also change the
order in which the columns of $\bA$ are processed.
\index{Orthogonal matrix}
\index{Orthogonal}
\index{Orthonormal}
\section{Orthogonal vs Orthonormal}\label{section:orthogonal-orthonormal-qr}
The vectors $\bq_1, \bq_2, \ldots, \bq_n\in \real^m$ are \textit{mutually orthogonal} when their dot products $\bq_i^\top\bq_j$ are zero whenever $i \neq j$. When each vector is divided by its length, the vectors become orthogonal unit vectors. Then the vectors $\bq_1, \bq_2, \ldots, \bq_n$ are \textit{mutually orthonormal}. We put the orthonormal vectors into a matrix $\bQ$.
When $m> n$: the matrix $\bQ$ is easy to work with because $\bQ^\top\bQ=\bI \in \real^{n\times n}$. Such a $\bQ$ with $m> n$ is sometimes referred to as a \textbf{semi-orthogonal} matrix.
When $m= n$: the matrix $\bQ$ is square, $\bQ^\top\bQ=\bI$ means that $\bQ^\top=\bQ^{-1}$, i.e., the transpose of $\bQ$ is also the inverse of $\bQ$. Then we also have $\bQ\bQ^\top=\bI$, i.e., $\bQ^\top$ is the \textbf{two-sided inverse} of $\bQ$. We call this $\bQ$ an \textbf{orthogonal matrix}. \footnote{Note here we use the term \textit{orthogonal matrix} to mean the matrix $\bQ$ has orthonormal columns. The term \textit{orthonormal matrix} is \textbf{not} used for historical reasons.} To see this, we have
$$
\begin{bmatrix}
\bq_1^\top \\
\bq_2^\top\\
\vdots \\
\bq_n^\top
\end{bmatrix}
\begin{bmatrix}
\bq_1 &\bq_2 & \ldots & \bq_n
\end{bmatrix}
=
\begin{bmatrix}
1 & & & \\
& 1 & & \\
& & \ddots & \\
& & & 1
\end{bmatrix}.
$$
In other words, $\bq_i^\top \bq_j = \delta_{ij}$ where $\delta_{ij}$ is the \textit{Kronecker delta}. The columns of an orthogonal matrix $\bQ\in \real^{n\times n}$ form an \textbf{orthonormal basis} of $\real^n$.
Orthogonal matrices can be viewed as matrices that change the basis of other matrices. Hence they preserve the angle (inner product) between vectors:
$\qquad \bu^\top \bv = (\bQ\bu)^\top(\bQ\bv)$.
This invariance of inner products relies in turn on the invariance of lengths: $\qquad ||\bQ\bu|| = ||\bu||$.
In the real case, multiplying by an orthogonal matrix $\bQ$ rotates (if $\det(\bQ)=1$) or reflects (if $\det(\bQ)=-1$) the original vector space. Many decomposition algorithms produce two orthogonal matrices, so such rotations or reflections happen twice.
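A small sketch of the rotation/reflection dichotomy and the preserved lengths (the matrices are hypothetical examples):

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: det = +1
F = np.array([[1., 0.],
              [0., -1.]])                          # reflection: det = -1

u = np.array([3., 4.])                             # ||u|| = 5
```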
\index{Classical Gram-Schmidt process}
\index{Modified Gram-Schimidt process}
\index{CGS}
\index{MGS}
\index{Numerical stability}
\section{Computing the Reduced QR Decomposition via CGS and MGS}\label{section:qr-gram-compute}
We write out this form of the reduced QR Decomposition such that $\bA=\bQ\bR$ where $\bQ\in \real^{m\times n}$ and $\bR\in \real^{n\times n}$:
\begin{equation}
\bA=\left[
\begin{matrix}
\ba_1 & \ba_2 & ... & \ba_n
\end{matrix}
\right]
=\left[
\begin{matrix}
\bq_1 & \bq_2 & ... & \bq_n
\end{matrix}
\right]
\begin{bmatrix}
r_{11} & r_{12}& \dots & r_{1n}\\
& r_{22}& \dots & r_{2n}\\
& & \ddots & \vdots \\
\multicolumn{2}{c}{\raisebox{1.3ex}[0pt]{\Huge0}} & & r_{nn} \nonumber
\end{bmatrix}.
\end{equation}
The orthogonal matrix $\bQ$ can be easily calculated by the Gram-Schmidt process. To see why we have the upper triangular matrix $\bR$, we write out these equations
\begin{equation*}
\begin{aligned}
\ba_1 & = r_{11}\bq_1 &= \sum_{i=1}^{1} r_{i1}\bq_i, \\
&\vdots& \\
\ba_k &= r_{1k}\bq_1 + r_{2k}\bq_2 + \ldots + r_{kk}\bq_k &= \sum_{i=1}^{k} r_{ik} \bq_i,\\
&\vdots& \\
\end{aligned}
\end{equation*}
which coincides with the second equation of Equation~\eqref{equation:gram-schdt-eq2} and conforms to the form of an upper triangular matrix $\bR$. Extending the idea of Equation~\eqref{equation:gram-schdt-eq2} to the $k$-th term, we get
$$
\begin{aligned}
\ba_k &= \sum_{i=1}^{k-1}(\bq_i^\top\ba_k)\bq_i + \ba_k^\perp = \sum_{i=1}^{k-1}(\bq_i^\top\ba_k)\bq_i + ||\ba_k^\perp||\cdot \bq_k,
\end{aligned}
$$
where $\ba_k^\perp$ plays the role of $\bb_k$ in Equation~\eqref{equation:gram-schdt-eq2}; the notation $\ba_k^\perp$ emphasizes the ``perpendicular'' property here.
This implies we can gradually orthonormalize $\bA$ to an orthonormal set $\bQ=[\bq_1, \bq_2, \ldots, \bq_n]$ by
\begin{equation}\label{equation:qr-gsp-equation}
\left\{
\begin{aligned}
r_{ik} &= \bq_i^\top\ba_k, \,\,\,\,\forall i \in \{1,2,\ldots, k-1\};\\
\ba_k^\perp&= \ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i;\\
r_{kk} &= ||\ba_k^\perp||;\\
\bq_k &= \ba_k^\perp/r_{kk}.
\end{aligned}
\right.
\end{equation}
The procedure is formulated in Algorithm~\ref{alg:reduced-qr}.
\begin{algorithm}[h]
\caption{Reduced QR Decomposition via Gram-Schmidt Process}
\label{alg:reduced-qr}
\begin{algorithmic}[1]
\Require Matrix $\bA$ has linearly independent columns with size $m\times n $ and $m\geq n$;
\For{$k=1$ to $n$} \Comment{compute $k$-th column of $\bQ,\bR$}
\For{$i=1$ to $k-1$}
\State $r_{ik} =\bq_i^\top\ba_k$; \Comment{entry ($i,k$) of $\bR$, $2m-1$ flops}
\EndFor \Comment{all $k-1$ iterations: $(k-1)(2m-1)$ flops}
\State $\ba_k^\perp= \ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i$;\Comment{$2m(k-1)$ flops}
\State $r_{kk} = ||\ba_k^\perp||$; \Comment{main diagonal of $\bR$, $2m$ flops}
\State $\bq_k = \ba_k^\perp/r_{kk}$; \Comment{$m$ flops}
\EndFor
\State Output $\bQ=[\bq_1, \ldots, \bq_n]$ and $\bR$ with entry $(i,k)$ being $r_{ik}$;
\end{algorithmic}
\end{algorithm}
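A straightforward translation of Algorithm~\ref{alg:reduced-qr} into code reads as follows; \texttt{cgs\_qr} is a hypothetical name, and the sketch assumes linearly independent columns:

```python
import numpy as np

def cgs_qr(A):
    """Reduced QR via the classical Gram-Schmidt process; assumes the
    columns of A are linearly independent."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        a_perp = A[:, k].astype(float)
        for i in range(k):
            R[i, k] = Q[:, i] @ A[:, k]       # r_{ik} = q_i^T a_k
            a_perp = a_perp - R[i, k] * Q[:, i]
        R[k, k] = np.linalg.norm(a_perp)      # main diagonal of R
        Q[:, k] = a_perp / R[k, k]
    return Q, R
```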
\index{Orthogonal projection}
\subsection*{\textbf{Orthogonal Projection}}
We notice again from Equation~\eqref{equation:qr-gsp-equation}, i.e., step 2 to step 6 in Algorithm~\ref{alg:reduced-qr}, that the first two equalities imply that
\begin{equation}\label{equation:qr-gsp-equation2}
\left.
\begin{aligned}
r_{ik} &= \bq_i^\top\ba_k, \,\,\,\,\forall i \in \{1,2,\ldots, k-1\}\\
\ba_k^\perp&= \ba_k-\sum_{i=1}^{k-1}r_{ik}\bq_i\\
\end{aligned}
\right\}
\rightarrow
\ba_k^\perp= \ba_k- \bQ_{k-1}\bQ_{k-1}^\top \ba_k=(\bI-\bQ_{k-1}\bQ_{k-1}^\top )\ba_k,
\end{equation}
where $\bQ_{k-1}=[\bq_1,\bq_2,\ldots, \bq_{k-1}]$. This implies $\bq_k$ can be obtained by
$$
\bq_k = \frac{\ba_k^\perp}{||\ba_k^\perp||} = \frac{(\bI-\bQ_{k-1}\bQ_{k-1}^\top )\ba_k}{||(\bI-\bQ_{k-1}\bQ_{k-1}^\top )\ba_k||}.
$$
The matrix $(\bI-\bQ_{k-1}\bQ_{k-1}^\top )$ in the above equation is known as an \textit{orthogonal projection matrix}, which projects $\ba_k$ along the column space of $\bQ_{k-1}$, i.e., projects the vector so that it becomes perpendicular to the column space of $\bQ_{k-1}$. The net result is that the $\ba_k^\perp$ or $\bq_k$ calculated in this way is orthogonal to $\cspace(\bQ_{k-1})$, i.e., it lies in the null space $\nspace(\bQ_{k-1}^\top)$ of $\bQ_{k-1}^\top$ by the fundamental theorem of linear algebra (Theorem~\ref{theorem:fundamental-linear-algebra}, p.~\pageref{theorem:fundamental-linear-algebra}).
Let $\bP_1=(\bI-\bQ_{k-1}\bQ_{k-1}^\top )$; we claimed above that $\bP_1$ is an orthogonal projection matrix such that $\bP_1\bv$ projects $\bv$ onto the null space of $\bQ_{k-1}^\top$. And actually, let $\bP_2=\bQ_{k-1}\bQ_{k-1}^\top$; then $\bP_2$ is also an orthogonal projection matrix such that $\bP_2\bv$ projects $\bv$ onto the column space of $\bQ_{k-1}$.
But why can the matrices $\bP_1, \bP_2$ magically project a vector onto the corresponding subspaces? It can easily be shown that the column space of $\bQ_{k-1}$ is equal to the column space of $\bQ_{k-1}\bQ_{k-1}^\top$:
$$
\cspace(\bQ_{k-1})=\cspace(\bQ_{k-1}\bQ_{k-1}^\top)=\cspace(\bP_2).
$$
Therefore, the result of $\bP_2\bv$ is a linear combination of the columns of $\bP_2$, which is in the column space of $\bP_2$ or the column space of $\bQ_{k-1}$. The formal definition of a \textit{projection matrix} $\bP$ is that it is idempotent $\bP^2=\bP$ such that projecting twice is equal to projecting once. What makes the above $\bP_2=\bQ_{k-1}\bQ_{k-1}^\top $ different is that the projection $\widehat{\bv}$ of any vector $\bv$ is perpendicular to $\bv-\widehat{\bv}$:
$$
(\widehat{\bv}=\bP_2\bv) \perp (\bv-\widehat{\bv}).
$$
This brings us back to the definition we gave above: the \textit{orthogonal projection matrix}. To
avoid confusion, one may use the term \textit{oblique projection matrix} in the nonorthogonal
case. When $\bP_2$ is an orthogonal projection matrix, $\bP_1=\bI-\bP_2$ is also an orthogonal projection matrix, which projects any vector onto the space perpendicular to $\cspace(\bQ_{k-1})$, i.e., onto $\nspace(\bQ_{k-1}^\top)$. Therefore, we conclude the two orthogonal projections:
$$
\left\{
\begin{aligned}
\bP_1: &\gap \text{project onto $\nspace(\bQ_{k-1}^\top)$;} \\
\bP_2: &\gap \text{project onto $\cspace(\bQ_{k-1})$} .
\end{aligned}
\right.
$$
A further important result to notice is that, when the columns of $\bQ_{k-1}$ are mutually orthonormal, we have the following decomposition:
\begin{equation}\label{equation:qr-orthogonal-equality}
\boxed{\bP_1 = \bI - \bQ_{k-1}\bQ_{k-1}^\top = (\bI-\bq_1\bq_1^\top)(\bI-\bq_2\bq_2^\top)\ldots (\bI-\bq_{k-1}\bq_{k-1}^\top),}
\end{equation}
where $\bQ_{k-1}=[\bq_1,\bq_2,\ldots, \bq_{k-1}]$ and each $(\bI-\bq_i\bq_i^\top)$ is to project a vector into the perpendicular space of $\bq_i$.
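Equation~\eqref{equation:qr-orthogonal-equality} can be verified numerically on a pair of hypothetical orthonormal vectors:

```python
import numpy as np

# Two hypothetical orthonormal vectors in R^3.
q1 = np.array([1., 1., 0.]) / np.sqrt(2)
q2 = np.array([1., -1., 0.]) / np.sqrt(2)
Q = np.column_stack([q1, q2])

P1 = np.eye(3) - Q @ Q.T
P1_factored = (np.eye(3) - np.outer(q1, q1)) @ (np.eye(3) - np.outer(q2, q2))
```

The cross term $\bq_1(\bq_1^\top\bq_2)\bq_2^\top$ vanishes precisely because $\bq_1^\top\bq_2=0$, which is why orthonormality is required.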
This finding is an important step toward the \textit{modified Gram-Schmidt process (MGS)}, in which we project and subtract on the fly. To avoid confusion, the previous Gram-Schmidt process is called the \textit{classical Gram-Schmidt process (CGS)}. The difference between the CGS and MGS is that in the CGS, we project the same vector onto the orthonormal ones and subtract afterward, whereas in the MGS, the projection and subtraction are done in an interleaved manner. A three-column example $\bA=[\ba_1,\ba_2,\ba_3]$ is shown in Figure~\ref{fig:projection-mgs-demons-3d}, where each step is denoted in a different color. We summarize the difference between the CGS and MGS processes for obtaining $\bq_k$ via the $k$-th column $\ba_k$ of $\bA$ and the orthonormalized vectors $\{\bq_1, \bq_2, \ldots, \bq_{k-1}\}$:
$$
\begin{aligned}
\text{(CGS)}: &\, \text{obtain $\bq_k$ by normalizing $\ba_k^\perp=(\bI-\bQ_{k-1}\bQ_{k-1}^\top)\ba_k$;} \\
\text{(MGS)}: &\, \text{obtain $\bq_k$ by normalizing $\ba_k^\perp=\left\{(\bI-\bq_{k-1}\bq_{k-1}^\top)\ldots\left[(\bI-\bq_2\bq_2^\top)\left((\bI-\bq_1\bq_1^\top) \ba_k\right)\right]\right\}$,}
\end{aligned}
$$
where the parentheses of the MGS indicate the order of the computation.
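To make the two update rules concrete, here is a minimal NumPy sketch (the function names \texttt{cgs\_step} and \texttt{mgs\_step} are ours). In exact arithmetic both return the same $\bq_k$; the difference only shows up in floating-point arithmetic.

```python
import numpy as np

def cgs_step(Q, a):
    """Classical Gram-Schmidt: project a onto all previous q_i at once,
    then subtract, i.e., a_perp = (I - Q Q^T) a."""
    a_perp = a - Q @ (Q.T @ a)
    return a_perp / np.linalg.norm(a_perp)

def mgs_step(Q, a):
    """Modified Gram-Schmidt: subtract the component along each q_i
    from the running vector immediately (interleaved)."""
    a_perp = np.array(a, dtype=float)
    for i in range(Q.shape[1]):
        q = Q[:, i]
        a_perp = a_perp - q * (q @ a_perp)  # apply (I - q q^T) on the fly
    return a_perp / np.linalg.norm(a_perp)
```

Here \texttt{Q} holds the already-computed orthonormal columns $\bq_1,\ldots,\bq_{k-1}$, and \texttt{a} is the column $\ba_k$ to be orthogonalized.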
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[CGS, step 1: \textcolor{blue}{blue} vector; step 2: \textcolor{green}{green} vector; step 3: \textcolor{purple}{purple} vector.]{\label{fig:projection-mgs-demons-cgs}
\includegraphics[width=0.47\linewidth]{./imgs/projectqr-cgs.pdf}}
\quad
\subfigure[MGS, step 1: \textcolor{blue}{blue} vector; step 2: \textcolor{purple}{purple} vector.]{\label{fig:projection-mgs-demons-mgs}
\includegraphics[width=0.47\linewidth]{./imgs/projectqr-mgs.pdf}}
\caption{CGS vs MGS in 3-dimensional space where $\bq_2^\prime$ is parallel to $\bq_2$ so that projecting on $\bq_2$ is equivalent to projecting on $\bq_2^\prime$.}
\label{fig:projection-mgs-demons-3d}
\end{figure}
\paragraph{What's the difference?}
Take the three-column matrix $\bA=[\ba_1, \ba_2, \ba_3]$ as an example. Suppose we have computed $\{\bq_1, \bq_2\}$ such that $span\{\bq_1, \bq_2\}=span\{\ba_1, \ba_2\}$, and we want to proceed to compute $\bq_3$.
In the CGS, the orthogonalization of column $\ba_3$ against $\{\bq_1, \bq_2\}$ is performed by projecting the original column $\ba_3$ of $\bA$ onto $\bq_1$ and $\bq_2$ respectively and subtracting all at once:
\begin{equation}\label{equation:cgs-3d-exmp}
\left\{
\begin{aligned}
\ba_3^\perp &= \ba_3 - (\bq_1^\top\ba_3)\bq_1 - (\bq_2^\top\ba_3)\bq_2\\
&= \ba_3 - (\bq_1\bq_1^\top)\ba_3 - \boxed{(\bq_2\bq_2^\top)\ba_3}\\
\bq_3 &= \frac{\ba_3^\perp}{||\ba_3^\perp||},
\end{aligned}
\right.
\end{equation}
as shown in Figure~\ref{fig:projection-mgs-demons-cgs}.
In the MGS, on the other hand, the component along each of $\{\bq_1, \bq_2\}$ is subtracted from the column $\ba_3$ as soon as the corresponding $\bq_i$ is computed. Therefore, the orthogonalization of column $\ba_3$ against $\{\bq_1, \bq_2\}$ is not performed by projecting the original column $\ba_3$ onto $\{\bq_1, \bq_2\}$ as in the CGS, but rather by subtracting from $\ba_3$ the components in the directions of $\bq_1, \bq_2$ successively. This matters because the error components in $span\{\bq_1, \bq_2\}$ will be smaller (we will further discuss this in the following paragraphs).
More precisely, in the MGS, the orthogonalization of column $\ba_3$ against $\bq_1$ is performed by subtracting the component along $\bq_1$ from the vector $\ba_3$:
$$
\ba_3^{(1)} = (\bI-\bq_1\bq_1^\top)\ba_3 = \ba_3 - (\bq_1\bq_1^\top)\ba_3,
$$
where $\ba_3^{(1)}$ is the component of $\ba_3$ that lies in the space perpendicular to $\bq_1$. The next step is performed by
\begin{equation}\label{equation:mgs-3d-exmp}
\begin{aligned}
\ba_3^{(2)} = (\bI-\bq_2\bq_2^\top)\ba_3^{(1)} &= \ba_3^{(1)}-(\bq_2\bq_2^\top)\ba_3^{(1)}\\
&= \ba_3 - (\bq_1\bq_1^\top)\ba_3-\boxed{(\bq_2\bq_2^\top)\textcolor{blue}{\ba_3^{(1)}}},
\end{aligned}
\end{equation}
where $\ba_3^{(2)}$ is the component of $\ba_3^{(1)}$ that lies in the space perpendicular to $\bq_2$; we highlight the difference to the CGS in Equation~\eqref{equation:cgs-3d-exmp} in \textcolor{blue}{blue}. The net result is that $\ba_3^{(2)}$ is the component of $\ba_3$ that lies in the space perpendicular to both $\bq_1$ and $\bq_2$, as shown in Figure~\ref{fig:projection-mgs-demons-mgs}.
\paragraph{Main difference and catastrophic cancellation} The key difference is that $\ba_3$ can in general have large components in $span\{\bq_1, \bq_2\}$, in which case one starts with large values and ends up with small values carrying large relative errors. This is known as the problem of \textit{catastrophic cancellation}. In contrast, $\ba_3^{(1)}$ is in the direction perpendicular to $\bq_1$ and has only a small ``error'' component in the direction of $\bq_1$. Comparing the \fbox{boxed} terms in Equations~\eqref{equation:cgs-3d-exmp} and \eqref{equation:mgs-3d-exmp}, it is not hard to see that $(\bq_2\bq_2^\top)\ba_3^{(1)}$ in Equation~\eqref{equation:mgs-3d-exmp} is more accurate by the above argument. Thus, because of the much smaller error in this projection factor, the MGS introduces less orthogonalization error at each subtraction step than the CGS. In fact, it can be shown that the final $\bQ$ obtained by the CGS satisfies
$$
||\bI-\bQ^\top\bQ|| \leq O(\epsilon\, \kappa^2(\bA)),
$$
where $\epsilon$ is the machine precision and $\kappa(\bA)$ is the condition number of $\bA$ (a value no smaller than 1).
In the MGS, on the other hand, the error satisfies
$$
||\bI-\bQ^\top\bQ|| \leq O(\epsilon\, \kappa(\bA)).
$$
That is, the $\bQ$ obtained by the MGS is more orthogonal.
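This gap can be observed numerically on a nearly rank-deficient matrix. The sketch below (the helper \texttt{gram\_schmidt} is our own) runs both variants on the classical L\"auchli matrix, whose columns are nearly parallel so that $\kappa(\bA)$ is huge.

```python
import numpy as np

def gram_schmidt(A, modified=True):
    """Orthogonalize the columns of A by MGS (modified=True) or CGS
    (modified=False); returns Q whose columns should be orthonormal."""
    m, n = A.shape
    Q = np.zeros((m, n))
    for k in range(n):
        v = np.array(A[:, k], dtype=float)
        for i in range(k):
            if modified:
                v -= Q[:, i] * (Q[:, i] @ v)        # project out q_i from the running v
            else:
                v -= Q[:, i] * (Q[:, i] @ A[:, k])  # project the original a_k
        Q[:, k] = v / np.linalg.norm(v)
    return Q

# Laeuchli matrix: nearly parallel columns, so kappa(A) is huge.
eps = 1e-8
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

def orth_err(Q):
    """Loss of orthogonality ||I - Q^T Q||."""
    return np.linalg.norm(np.eye(Q.shape[1]) - Q.T @ Q)
```

On this example, the loss of orthogonality of the CGS factor is of order one, while the MGS factor stays near $10^{-8}$.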
\paragraph{More to go: preliminaries for Householder and Givens methods} Although we claimed that the MGS usually works better than the CGS in practice, the MGS can still fall victim to the \textit{catastrophic cancellation} problem. Suppose that, in iteration $k$ of the MGS algorithm, $\ba_k$ is almost in the span of $\{\bq_1, \bq_2, \ldots, \bq_{k-1}\}$. Then $\ba_k^\perp$ has only a small component perpendicular to $span\{\bq_1, \bq_2, \ldots, \bq_{k-1}\}$, while the ``error'' component inside $span\{\bq_1, \bq_2, \ldots, \bq_{k-1}\}$ is amplified; the net result is that $\bQ$ will be less orthonormal. In this case, if we can find a successive set of orthogonal matrices $\{\bQ_1, \bQ_2, \ldots, \bQ_l\}$ such that $\bQ_l\ldots\bQ_2\bQ_1\bA$ is triangularized, then $\bQ=(\bQ_l\ldots\bQ_2\bQ_1)^\top$ will be ``more'' orthogonal than the one obtained from the CGS or the MGS. We will discuss this approach in Sections~\ref{section:qr-via-householder} (p.~\pageref{section:qr-via-householder}) and \ref{section:qr-givens} (p.~\pageref{section:qr-givens}) via Householder reflectors and Givens rotations.
\section{Computing the Full QR Decomposition via the Gram-Schmidt Process}\label{section:silentcolu_qrdecomp}
A full QR decomposition of an $m\times n$ matrix with linearly independent columns goes further than the reduced one by appending $m-n$ additional orthonormal columns to $\bQ$ so that it becomes an $m\times m$ orthogonal matrix. In addition, rows of zeros are appended to $\bR$ so that it becomes an $m\times n$ upper triangular matrix. We call the additional columns in $\bQ$ \textbf{silent columns} and the additional rows in $\bR$ \textbf{silent rows}. The comparison between the reduced and full QR decompositions is shown in Figure~\ref{fig:qr-comparison}, where silent columns in $\bQ$ are denoted in \textcolor{gray}{gray}, blank entries are zero, and \textcolor{blue}{blue} entries are elements that are not necessarily zero.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Reduced QR decomposition]{\label{fig:gphalf}
\includegraphics[width=0.47\linewidth]{./imgs/qrreduced.pdf}}
\quad
\subfigure[Full QR decomposition]{\label{fig:gpall}
\includegraphics[width=0.47\linewidth]{./imgs/qrfull.pdf}}
\caption{Comparison between the reduced and full QR decompositions.}
\label{fig:qr-comparison}
\end{figure}
\index{Independence check}
\section{Dependent Columns}\label{section:dependent-gram-schmidt-process}
Previously, we assumed that matrix $\bA$ has linearly independent columns. However, this is not always necessary. Suppose that, in step $k$ of the CGS or MGS, $\ba_k$ is in the plane spanned by $\bq_1, \bq_2, \ldots, \bq_{k-1}$, which is the same as the space spanned by $\ba_1, \ba_2, \ldots, \ba_{k-1}$; i.e., the vectors $\ba_1, \ba_2, \ldots, \ba_k$ are dependent. Then $r_{kk}$ will be zero and $\bq_k$ does not exist because of the division by zero. In this case, we simply pick $\bq_k$ arbitrarily to be any normalized vector orthogonal to $\cspace([\bq_1, \bq_2, \ldots, \bq_{k-1}])$ and continue the Gram-Schmidt process. Again, for a matrix $\bA$ with dependent columns, we have both reduced and full QR decomposition algorithms.
This idea can be extended further: when $\bq_k$ does not exist, we can instead skip the current step and append the silent columns at the end. In this sense, the QR decomposition of a matrix with dependent columns is not unique. However, as long as we stick to a systematic, methodical procedure, the QR decomposition obtained for any matrix is uniquely determined.
This finding can also help decide whether a set of vectors is linearly independent. Whenever $r_{kk}$ in the CGS or MGS is zero, we report that the vectors $\ba_1, \ba_2, \ldots, \ba_k$ are dependent and stop the algorithm; this serves as an ``independence check''.
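A minimal sketch of this independence check (the function is our own, with a relative tolerance \texttt{tol} standing in for the exact test $r_{kk}=0$ in floating-point arithmetic):

```python
import numpy as np

def independence_check(A, tol=1e-12):
    """Run MGS over the columns of A; return the 0-based index k of the first
    column that is (numerically) dependent on its predecessors, or -1 if the
    columns are linearly independent."""
    Q = []
    for k, a in enumerate(A.T):
        v = np.array(a, dtype=float)
        for q in Q:
            v -= q * (q @ v)           # subtract components along previous q_i
        r_kk = np.linalg.norm(v)
        if r_kk <= tol * np.linalg.norm(a):
            return k                   # a_1, ..., a_{k+1} are dependent
        Q.append(v / r_kk)
    return -1
```

The function returns as soon as a numerically zero $r_{kk}$ is detected, mirroring the early stop described above.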
\section{QR with Column Pivoting: Column-Pivoted QR (CPQR)}\label{section:cpqr}
Suppose $\bA$ has dependent columns. A column-pivoted QR (CPQR) decomposition can then be found as follows.
\begin{theoremHigh}[Column-Pivoted QR Decomposition\index{Column-pivoted QR (CPQR)}]\label{theorem:rank-revealing-qr-general}
Every $m\times n$ matrix $\bA=[\ba_1, \ba_2, ..., \ba_n]$ with $m\geq n$ and rank $r$ can be factored as
$$
\bA\bP = \bQ
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix},
$$
where $\bR_{11} \in \real^{r\times r}$ is upper triangular, $\bR_{12} \in \real^{r\times (n-r)}$, $\bQ\in \real^{m\times m}$ is an orthogonal matrix, and $\bP$ is a permutation matrix. This is also known as the \textbf{full} CPQR decomposition. Similarly, the \textbf{reduced} version is given by
$$
\bA\bP = \bQ_r
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\end{bmatrix},
$$
where $\bR_{11} \in \real^{r\times r}$ is upper triangular, $\bR_{12} \in \real^{r\times (n-r)}$, $\bQ_r\in \real^{m\times r}$ contains orthonormal columns, and $\bP$ is a permutation matrix.
\end{theoremHigh}
\index{CPQR}
\index{Column pivoting}
\subsection{A Simple CPQR via CGS}
The classical Gram-Schmidt process can compute this CPQR decomposition.
Following the QR decomposition for dependent columns, when $r_{kk}=0$, column $k$ of $\bA$ is dependent on the previous $k-1$ columns. Whenever this happens, we permute this column to the last position and continue the Gram-Schmidt process. We notice that $\bP$ is the permutation matrix that interchanges the dependent columns into the last $n-r$ columns. Suppose the first $r$ columns of $\bA\bP$ are $[\widehat{\ba}_1, \widehat{\ba}_2, \ldots, \widehat{\ba}_r]$; their span is the same as the span of $\bQ_r$ (in the reduced version), or that of $\bQ_{:,:r}$ (in the full version):
$$
\cspace([\widehat{\ba}_1, \widehat{\ba}_2, \ldots, \widehat{\ba}_r]) = \cspace(\bQ_r) = \cspace(\bQ_{:,:r}).
$$
The matrix $\bR_{12}$ recovers the dependent $n-r$ columns from the column space of $\bQ_r$ (or of $\bQ_{:,:r}$). The comparison of the reduced and full CPQR decompositions is shown in Figure~\ref{fig:qr-comparison-rank-reveal}, where silent columns in $\bQ$ are denoted in \textcolor{gray}{gray}, blank entries are zero, and \textcolor{blue}{blue}/\textcolor{orange}{orange} entries are elements that are not necessarily zero.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Reduced CPQR decomposition]{\label{fig:gphalf-rank-reveal}
\includegraphics[width=0.47\linewidth]{./imgs/qrreduced-revealing.pdf}}
\quad
\subfigure[Full CPQR decomposition]{\label{fig:gpall-rank-reveal}
\includegraphics[width=0.47\linewidth]{./imgs/qrfull-revealing.pdf}}
\caption{Comparison between the reduced and full CPQR decompositions.}
\label{fig:qr-comparison-rank-reveal}
\end{figure}
\subsection{A Practical CPQR via CGS}\label{section:practical-cpqr-cgs}
We notice that the simple CPQR algorithm pivots the first $r$ independent columns into the first $r$ columns of $\bA\bP$. Let $\bA_1$ be the first $r$ columns of $\bA\bP$, and $\bA_2$ the rest. Then, from the full CPQR, we have
$$
[\bA_1, \bA_2] =
\bQ \begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix}
=\left[
\bQ \begin{bmatrix}
\bR_{11} \\
\bzero
\end{bmatrix}
,
\bQ \begin{bmatrix}
\bR_{12} \\
\bzero
\end{bmatrix}
\right]
.
$$
It is not hard to see that
$$
||\bA_2|| = \left\Vert\bQ \begin{bmatrix}
\bR_{12} \\
\bzero
\end{bmatrix}\right\Vert
=
\left\Vert\begin{bmatrix}
\bR_{12} \\
\bzero
\end{bmatrix}\right\Vert
=
\left\Vert \bR_{12} \right\Vert,
$$
where the penultimate equality comes from the orthogonal invariance of the matrix norm. Therefore, the norm of $\bR_{12}$ is determined by the norm of $\bA_2$. To favor a well-conditioned CPQR, $\bR_{12}$ should be small in norm. A practical CPQR decomposition therefore first permutes the columns of $\bA$ such that they are ordered decreasingly in vector norm:
$$
\widetildebA = \bA\bP_0 = [\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_n}],
$$
where $\{j_1, j_2, \ldots, j_n\}$ is a permuted index set of $\{1,2,\ldots, n\}$ and
$$
||\ba_{j_1}||\geq ||\ba_{j_2}||\geq \ldots\geq ||\ba_{j_n}||.
$$
Then apply the ``simple" reduced CPQR decomposition on $\widetildebA$ such that $\widetildebA \bP_1= \bQ_r[\bR_{11}, \bR_{12}]$. The ``practical" reduced CPQR of $\bA$ is then recovered as
$$
\bA\underbrace{\bP_0\bP_1}_{\bP} =\bQ_r[\bR_{11}, \bR_{12}].
$$
A further optimization of the CPQR algorithm is via the MGS, with the extra bonus that the factorization can stop early once it reaches a rank-deficient submatrix; the CPQR via the MGS can thus determine the numerical rank \citep{lu2021numerical}. This is known as \textit{partial factorization}, and we shall not give the details here.
\index{Column pivoting}
\index{Revealing rank one deficiency}
\section{QR with Column Pivoting: Revealing Rank One Deficiency}\index{Rank-revealing}
We notice that column-pivoted QR is just one method to find a column permutation when $\bA$ is rank deficient: we interchange the first $r$ linearly independent columns of $\bA$ into the first $r$ columns of $\bA\bP$. If $\bA$ is nearly rank-one deficient, we would like to find a column permutation of $\bA$ such that the resulting pivot element $r_{nn}$ of the QR decomposition is small. This is known as the \textit{revealing rank-one deficiency} problem.
\begin{theoremHigh}[Revealing Rank One Deficiency, \citep{chan1987rank}]\label{theorem:finding-good-qr-ordering}
If $\bA\in \real^{m\times n}$ and $\bv\in \real^n$ is a unit 2-norm vector (i.e., $||\bv||=1$), then there exists a permutation $\bP$ such that the reduced QR decomposition
$$
\bA\bP = \bQ\bR
$$
satisfies $r_{nn} \leq \sqrt{n} \epsilon$ where $\epsilon = ||\bA\bv||$ and $r_{nn}$ is the $n$-th diagonal of $\bR$. Note that $\bQ\in \real^{m\times n}$ and $\bR\in \real^{n\times n}$ in the reduced QR decomposition.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:finding-good-qr-ordering}]
Let $\bP\in \real^{n\times n}$ be a permutation matrix such that $\bw=\bP^\top\bv$ satisfies
$$
|w_n| = \max_{i \in \{1,2,\ldots,n\}} |v_i|.
$$
That is, the last component of $\bw$ equals the largest component of $\bv$ in absolute value. Since $||\bv||=1$, it follows that $|w_n| \geq 1/\sqrt{n}$. Suppose the QR decomposition of $\bA\bP$ is $\bA\bP = \bQ\bR$; then
$$
\epsilon = ||\bA\bv|| = ||\bA\bP(\bP^\top\bv)|| = ||\bQ\bR\bw|| = ||\bR\bw|| =
\left\Vert
\begin{bmatrix}
\vdots \\
r_{nn} w_n
\end{bmatrix}
\right\Vert
\geq |r_{nn} w_n| \geq |r_{nn}|/\sqrt{n}.
$$
This completes the proof.
\end{proof}
The following discussion relies on the existence of the singular value decomposition (SVD), which will be introduced in Section~\ref{section:SVD} (p.~\pageref{section:SVD}); feel free to skip it at a first reading. Suppose the SVD of $\bA$ is given by $\bA = \sum_{i=1}^{n} \sigma_i \bu_i\bv_i^\top$, where the $\sigma_i$'s are the singular values with $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_n$, i.e., $\sigma_n$ is the smallest singular value, and the $\bu_i$'s, $\bv_i$'s are the left and right singular vectors, respectively. Then, if we let $\bv = \bv_n$ such that $\bA\bv_n = \sigma_n \bu_n$, \footnote{We will prove that the right singular vectors of $\bA$ are equal to the right singular vectors of $\bR$ if $\bA$ admits the QR decomposition $\bA=\bQ\bR$ in Lemma~\ref{lemma:svd-for-qr} (p.~\pageref{lemma:svd-for-qr}). The claim also applies to the singular values. So $\bv_n$ here is also a right singular vector of $\bR$.} we have
$$
||\bA\bv|| = \sigma_n.
$$
By constructing a permutation matrix $\bP$ such that the last component of $\bP^\top\bv$ satisfies
$$
|(\bP^\top \bv)_n| = \max_{i \in \{1,2,\ldots,n\}} |v_i|,
$$
we will find a QR decomposition $\bA\bP=\bQ\bR$ with a pivot $r_{nn}$ no larger than $\sqrt{n}\sigma_n$. If $\bA$ is nearly rank-one deficient, then $\sigma_n$ is close to 0, and $r_{nn}$ is thus bounded by a value close to 0 in magnitude.
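The bound can be verified numerically. The sketch below builds a nearly rank-one-deficient matrix (the construction is our own), computes $\bv_n$ via the SVD, permutes its largest-magnitude component to the last position, and checks $|r_{nn}| \leq \sqrt{n}\,\sigma_n$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 5
B = rng.standard_normal((m, n - 1))
# Last column = combination of the others + tiny perturbation:
# A is nearly rank-one deficient by construction.
last = B @ rng.standard_normal(n - 1) + 1e-10 * rng.standard_normal(m)
A = np.column_stack([B, last])

U, s, Vt = np.linalg.svd(A)
v = Vt[-1]                        # right singular vector for sigma_n = s[-1]
p = np.argsort(np.abs(v))         # permutation putting max |v_i| last
Q, R = np.linalg.qr(A[:, p])      # reduced QR of the permuted matrix
r_nn = abs(R[-1, -1])             # last pivot, bounded by sqrt(n) * sigma_n
```

Here \texttt{p} plays the role of $\bP$, and the theorem guarantees \texttt{r\_nn} is at most $\sqrt{n}\,\sigma_n$ up to roundoff.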
\index{Revealing rank r deficiency}
\section{QR with Column Pivoting: Revealing Rank r Deficiency*}\label{section:rank-r-qr}\index{Rank-revealing QR}
Following from the last section, suppose now we want to compute the reduced QR decomposition where $\bA\in \real^{m\times n}$ is nearly rank $r$ deficient with $r>1$. Our goal now is to find a permutation $\bP$ such that
\begin{equation}\label{equation:rankr-reval-qr}
\bA\bP =
\bQ\bR=
\bQ
\begin{bmatrix}
\bL & \bM \\
\bzero & \bN
\end{bmatrix},
\end{equation}
where $\bN \in \real^{r\times r}$ and $||\bN||$ is small in some norm.
A recursive algorithm can be applied here. Suppose we have already isolated a small $k\times k$ block $\bN_k$; if, based on this, we can isolate a small $(k+1)\times (k+1)$ block $\bN_{k+1}$, then we can find the permutation matrix recursively. To repeat, suppose we have a permutation $\bP_k$ such that $\bN_k \in \real^{k\times k}$ has a small norm,
$$
\bA\bP_k = \bQ_k \bR_k=
\bQ_k
\begin{bmatrix}
\bL_k & \bM_k \\
\bzero & \bN_k
\end{bmatrix}.
$$
We want to find a permutation $\bP_{k+1}$, such that $\bN_{k+1} \in \real^{(k+1)\times (k+1)}$ also has a small norm,
$$
\boxed{
\bA\bP_{k+1} = \bQ_{k+1} \bR_{k+1}=
\bQ_{k+1}
\begin{bmatrix}
\bL_{k+1} & \bM_{k+1} \\
\bzero & \bN_{k+1}
\end{bmatrix}}.
$$
From the algorithm introduced in the last section, there is an $(n-k)\times (n-k)$ permutation matrix $\widetilde{\bP}_{k+1}$ such that $\bL_k \in \real^{(n-k)\times (n-k)}$ has a QR decomposition $\bL_k \widetilde{\bP}_{k+1} = \widetilde{\bQ}_{k+1}\widetilde{\bL}_k$ in which the entry $(n-k, n-k)$ of $\widetilde{\bL}_k$ is small. By constructing
$$
\bP_{k+1} = \bP_k
\begin{bmatrix}
\widetilde{\bP}_{k+1} & \bzero \\
\bzero & \bI
\end{bmatrix},
\qquad
\bQ_{k+1} = \bQ_k
\begin{bmatrix}
\widetilde{\bQ}_{k+1} & \bzero \\
\bzero & \bI
\end{bmatrix},
$$
we have
$$
\boxed{
\bA \bP_{k+1} = \bQ_{k+1}
\begin{bmatrix}
\widetilde{\bL}_k & \widetilde{\bQ}_{k+1}^\top \bM_k \\
\bzero & \bN_k
\end{bmatrix}}.
$$
We know that entry $(n-k, n-k)$ of $\widetilde{\bL}_k$ is small; if we can prove that the last row of $\widetilde{\bQ}_{k+1}^\top \bM_k$ is also small in norm, then we have found a QR decomposition revealing rank-$(k+1)$ deficiency (see \citep{chan1987rank} for a proof).
\index{Householder transformation}
\index{Householder reflector}
\section{Existence of the QR Decomposition via the Householder Reflector}\label{section:qr-via-householder}
We first give the formal definition of a Householder reflector and we will take a look at its properties.
\begin{definition}[Householder Reflector]\label{definition:householder-reflector}
Let $\bu \in \real^n$ be a vector of unit length (i.e., $||\bu||=1$). Then $\bH = \bI - 2\bu\bu^\top$ is said to be a \textit{Householder reflector}, a.k.a., a \textit{Householder transformation}. We call this $\bH$ the Householder reflector associated with the unit vector $\bu$ where the unit vector $\bu$ is also known as the \textit{Householder vector}. If a vector $\bx$ is multiplied by $\bH$, then it is reflected in the hyperplane $span\{\bu\}^\perp$.
Note that if $||\bu|| \neq 1$, we can define $\bH = \bI - 2 \frac{\bu\bu^\top}{\bu^\top\bu}$ as the Householder reflector.
\end{definition}
Then we have the following corollary from this definition.
\begin{corollary}[Unreflected by Householder]
Any vector $\bv$ that is perpendicular to $\bu$ is left unchanged by the Householder transformation, that is, $\bH\bv=\bv$ if $\bu^\top\bv=0$.
\end{corollary}
The proof is trivial: $(\bI - 2\bu\bu^\top)\bv = \bv - 2\bu\bu^\top\bv=\bv$ since $\bu^\top\bv=0$.
Suppose $\bu$ is a unit vector with $||\bu||=1$, and a vector $\bv$ is perpendicular to $\bu$. Then any vector $\bx$ can be decomposed into two parts, $\bx = \bx_{\bv} + \bx_{\bu}$: the component $\bx_{\bu}$ parallel to $\bu$, and the component $\bx_{\bv}$ perpendicular to $\bu$ (i.e., parallel to $\bv$). From Section~\ref{section:project-onto-a-vector} (p.~\pageref{section:project-onto-a-vector}) on the projection of a vector onto another, $\bx_{\bu}$ can be computed by $\bx_{\bu} = \frac{\bu\bu^\top}{\bu^\top\bu} \bx = \bu\bu^\top\bx$. Transforming $\bx$ by the Householder reflector associated with $\bu$ gives $\bH\bx = (\bI - 2\bu\bu^\top)(\bx_{\bv} + \bx_{\bu}) = \bx_{\bv} -\bu\bu^\top \bx = \bx_{\bv} - \bx_{\bu}$. That is, the space perpendicular to $\bu$ acts as a mirror, and any vector $\bx$ is reflected by the Householder reflector associated with $\bu$ (i.e., reflected in the hyperplane $span\{\bu\}^\perp$). The situation is shown in Figure~\ref{fig:householder}.
\begin{SCfigure}
\centering
\includegraphics[width=0.5\textwidth]{imgs/householder.pdf}
\caption{Demonstration of the Householder reflector. The Householder reflector obtained by $\bH=\bI-2\bu\bu^\top$ where $||\bu||=1$ will reflect vector $\bx$ along the plane perpendicular to $\bu$: $\bx=\bx_{\bv} + \bx_{\bu} \rightarrow \bx_{\bv} - \bx_{\bu}$.}
\label{fig:householder}
\end{SCfigure}
If we know two vectors are reflected to each other, the next corollary tells us how to find the corresponding Householder reflector.
\begin{corollary}[Finding the Householder Reflector]\label{corollary:householder-reflect-finding}
Suppose $\bx$ is reflected to $\by$ by a Householder reflector with $||\bx|| = ||\by||$, then the Householder reflector is obtained by
$$
\bH = \bI - 2 \bu\bu^\top, \text{ where } \bu = \frac{\bx-\by}{||\bx-\by||}.
$$
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:householder-reflect-finding}]
Writing out the equation, we have
$$
\begin{aligned}
\bH\bx &= \bx - 2 \bu\bu^\top\bx =\bx - 2\frac{(\bx-\by)(\bx^\top-\by^\top)}{(\bx-\by)^\top(\bx-\by)} \bx\\
&= \bx - (\bx-\by) = \by.
\end{aligned}
$$
Note that the condition $||\bx|| = ||\by||$ is required to prove the result.
\end{proof}
Householder reflectors are useful for setting a block of components of a given vector to zero. In particular, we often would like to set all components of a vector $\ba\in \real^n$ to zero except the $i$-th one. The Householder vector can then be chosen as
$$
\bu = \frac{\ba - r\be_i}{||\ba - r\be_i||}, \qquad \text{where } r = \pm||\ba||
$$
which is a reasonable Householder vector since $||\ba|| = ||r\be_i|| = |r|$. We carefully notice that when $r=||\ba||$, $\ba$ is reflected to $||\ba||\be_i$ via the Householder reflector $\bH = \bI - 2 \bu\bu^\top$; when $r=-||\ba||$, $\ba$ is reflected to $-||\ba||\be_i$.
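A minimal sketch of this construction (the function name is ours; we use the sign choice $r=-\text{sign}(a_i)\,||\ba||$, which avoids cancellation when forming $\ba - r\be_i$):

```python
import numpy as np

def householder_zero_below(a, i=0):
    """Build H = I - 2 u u^T with u = (a - r e_i)/||a - r e_i||,
    r = -sign(a_i) * ||a||, so that H a = r e_i: every component of a
    is zeroed except the i-th.  Assumes a is a nonzero vector."""
    a = np.asarray(a, dtype=float)
    r = -np.sign(a[i]) * np.linalg.norm(a) if a[i] != 0 else np.linalg.norm(a)
    u = a.copy()
    u[i] -= r                      # u = a - r e_i
    u /= np.linalg.norm(u)
    H = np.eye(len(a)) - 2.0 * np.outer(u, u)
    return H, r
```

By construction $\bH$ is symmetric and orthogonal, so the reflector can be re-applied to undo itself.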
\begin{remark}[Householder Properties]
If $\bH$ is a Householder reflector, then it has the following properties:
\begin{itemize}
\item $\bH\bH = \bI$;
\item $\bH = \bH^\top$;
\item $\bH^\top\bH = \bH\bH^\top = \bI$, so that a Householder reflector is an orthogonal matrix;
\item $\bH\bu = -\bu$, if $\bH = \bI - 2 \bu\bu^\top$.
\end{itemize}
\end{remark}
We noted in the Gram-Schmidt sections that the QR decomposition uses a triangular matrix to orthogonalize a matrix $\bA$. The converse idea is that, if we have a set of orthogonal matrices that make $\bA$ triangular step by step, then we can also recover the QR decomposition.
Specifically, if we have an orthogonal matrix $\bQ_1$ that introduces zeros into the $1$-st column of $\bA$ except the entry $(1,1)$, an orthogonal matrix $\bQ_2$ that introduces zeros into the $2$-nd column except the entries $(1,2)$, $(2,2)$, and so on, then we can also find the QR decomposition.
For the way to introduce zeros, we could reflect the columns of the matrix to a basis vector $\be_1$ whose entries are all zero except the first entry.
Let $\bA=[\ba_1, \ba_2, \ldots, \ba_n]\in \real^{m\times n}$ be the column partition of $\bA$, and let
\begin{equation}\label{equation:qr-householder-to-chooose-r-numeraically}
r_1 = ||\ba_1||,\qquad \bu_1 = \frac{\ba_1 - r_1 \be_1}{||\ba_1 - r_1 \be_1||}, \qquad \text{and}\qquad \bH_1 = \bI - 2\bu_1\bu_1^\top,
\end{equation}
where $\be_1$ here is the first basis for $\real^m$, i.e., $\be_1 = [1;0;0;\ldots;0]\in \real^m$.
Then
\begin{equation}\label{equation:householder-qr-projection-step1}
\bH_1\bA = [\bH_1\ba_1, \bH_1\ba_2, \ldots, \bH_1\ba_n] =
\begin{bmatrix}
r_1 & \bR_{1,2:n} \\
\bzero& \bB_2
\end{bmatrix},
\end{equation}
which reflects $\ba_1$ to $r_1\be_1$ and introduces zeros below the diagonal in the $1$-st column. We observe that the entries below $r_1$ are all zero now under this specific reflection. Notice that we reflect $\ba_1$ to $||\ba_1||\be_1$, which has the same length as $\ba_1$, rather than reflect $\ba_1$ to $\be_1$ directly. This is for the purpose of \textbf{numerical stability}.
\textbf{Choice of $r_1$:} moreover, the choice of $r_1$ is \textbf{not unique}. For \textbf{numerical stability}, it is desirable to choose $r_1 =-\text{sign}(a_{11}) ||\ba_1||$, where $a_{11}$ is the first component of $\ba_{1}$; this avoids cancellation in computing $\ba_1 - r_1\be_1$. Even $r_1 =\text{sign}(a_{11}) ||\ba_1||$ is possible, as long as $||\ba_1||$ equals $||r_1\be_1||$. However, we will not cover this topic in detail here.
We can then apply this process to $\bB_2$ in Equation~\eqref{equation:householder-qr-projection-step1} to make the entries below the entry (2,2) to be all zeros. Note that, we do not apply this process to the entire $\bH_1\bA$ but rather the submatrix $\bB_2$ in it because we have already introduced zeros in the first column, and reflecting again will introduce nonzero values back.
Suppose $\bB_2 = [\bb_2, \bb_3, \ldots, \bb_n]$ is the column partition of $\bB_2$, and let
$$
r_2 = ||\bb_2||,\qquad \bu_2 = \frac{\bb_2 - r_2 \be_1}{||\bb_2 - r_2 \be_1||}, \qquad \qquad \widetilde{\bH}_2 = \bI - 2\bu_2\bu_2^\top, \qquad \text{and}\qquad \bH_2 =
\begin{bmatrix}
1 & \bzero \\
\bzero & \widetilde{\bH}_2
\end{bmatrix},
$$
where $\be_1$ is now the first basis vector for $\real^{m-1}$, and $\bH_2$ is also an orthogonal matrix since $\widetilde{\bH}_2$ is. Then it follows that
$$
\bH_2\bH_1\bA = [\bH_2\bH_1\ba_1, \bH_2\bH_1\ba_2, \ldots, \bH_2\bH_1\ba_n] =
\begin{bmatrix}
r_1 & \bR_{12} & \bR_{1,3:n} \\
0 & r_2 & \bR_{2,3:n} \\
\bzero & \bzero &\bC_3
\end{bmatrix}.
$$
The same process can go on, and we will finally triangularize $\bA$: $\bH_n \bH_{n-1}\ldots\bH_1\bA = \bR$, i.e., $\bA = (\bH_n \bH_{n-1}\ldots\bH_1)^{-1} \bR = \bQ\bR$. Since the $\bH_i$'s are symmetric and orthogonal, we have $\bQ=(\bH_n \bH_{n-1}\ldots\bH_1)^{-1} = \bH_1 \bH_2\ldots\bH_n$.
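Assembling the reflectors gives a compact sketch of Householder QR (our own unoptimized version; production codes store the reflectors implicitly rather than forming each $\bH_k$ and $\bQ$ explicitly):

```python
import numpy as np

def householder_qr(A):
    """Full QR by orthogonal triangularization: apply reflectors H_1, ..., H_n
    to zero A below the diagonal column by column; Q = H_1 H_2 ... H_n."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    R = A.copy()
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        r = -np.sign(x[0]) * np.linalg.norm(x) if x[0] != 0 else np.linalg.norm(x)
        u = x.copy()
        u[0] -= r
        norm_u = np.linalg.norm(u)
        if norm_u == 0:                          # column already zeroed below diag
            continue
        u /= norm_u
        Hk = np.eye(m)
        Hk[k:, k:] -= 2.0 * np.outer(u, u)       # embed H_tilde_k in the identity
        R = Hk @ R                               # H_k ... H_1 A
        Q = Q @ Hk                               # Q = H_1 H_2 ... H_n
    return Q, R
```

On a random $5\times 4$ matrix, this reproduces $\bA=\bQ\bR$ with $\bQ$ orthogonal and $\bR$ upper triangular.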
An example of a $5\times 4$ matrix is shown as follows where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\bH_1}{\rightarrow}
\begin{sbmatrix}{\bH_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bH_2}{\rightarrow}
\begin{sbmatrix}{\bH_2\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
&\stackrel{\bH_3}{\rightarrow}
\begin{sbmatrix}{\bH_3\bH_2\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bH_4}{\rightarrow}
\begin{sbmatrix}{\bH_4\bH_3\bH_2\bH_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes \\
0 & 0 & 0 & \bm{\boxtimes} \\
0 & 0 & 0 & \bm{0}
\end{sbmatrix}
\end{aligned}
$$
\paragraph{A closer look at the QR factorization} The Householder algorithm is a process that makes a matrix triangular by a sequence of orthogonal matrix operations. In the Gram-Schmidt process (both CGS and MGS), we use a triangular matrix to orthogonalize the matrix. However, in the Householder algorithm, we use orthogonal matrices to triangularize. The difference between the two approaches is summarized as follows:
\begin{itemize}
\item Gram-Schmidt: triangular orthogonalization;
\item Householder: orthogonal triangularization.
\end{itemize}
We further notice that, in the Householder algorithm or the Givens algorithm that we will shortly see, a set of orthogonal matrices are applied so that the QR decomposition obtained is a \textit{full} QR decomposition. Whereas, the direct QR decomposition obtained by CGS or MGS is a \textit{reduced} one (although the silent columns or rows can be further added to find the full one).
\index{Givens rotation}
\section{Existence of the QR Decomposition via the Givens Rotation}\label{section:qr-givens}
We have defined the Givens rotation in Definition~\ref{definition:givens-rotation-in-qr} (p.~\pageref{definition:givens-rotation-in-qr}) to find the rank-one update/downdate of the Cholesky decomposition. Now consider the following $2\times 2$ orthogonal matrices
$$
\bF =
\begin{bmatrix}
-c & s\\
s & c
\end{bmatrix},
\qquad
\bJ=
\begin{bmatrix}
c & -s \\
s & c
\end{bmatrix},
\qquad
\bG=
\begin{bmatrix}
c & s \\
-s & c
\end{bmatrix},
$$
where $s = \sin \theta$ and $c=\cos \theta$ for some $\theta$. The first matrix has $\det(\bF)=-1$ and is a special case of a Householder reflector in dimension 2 such that $\bF=\bI-2\bu\bu^\top$ where $\bu=\begin{bmatrix}
\sqrt{\frac{1+c}{2}}, &\sqrt{\frac{1-c}{2}}
\end{bmatrix}^\top$ or $\bu=\begin{bmatrix}
-\sqrt{\frac{1+c}{2}}, &-\sqrt{\frac{1-c}{2}}
\end{bmatrix}^\top$. The latter two matrices have $\det(\bJ)=\det(\bG)=1$ and effect a rotation instead of a reflection. Such matrices are called \textbf{\textit{Givens rotations}}.
\begin{figure}[H]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[$\by = \bJ\bx$, counter-clockwise rotation.]{\label{fig:rotation1}
\includegraphics[width=0.47\linewidth]{imgs/rotation.pdf}}
\quad
\subfigure[$\by = \bG\bx$, clockwise rotation.]{\label{fig:rotation2}
\includegraphics[width=0.47\linewidth]{imgs/rotation2.pdf}}
\caption{Demonstration of two Givens rotations.}
\label{fig:rotation}
\end{figure}
Figure~\ref{fig:rotation} demonstrates a rotation of $\bx$ under $\bJ$, where $\by = \bJ\bx$ such that
$$
\left\{
\begin{aligned}
&y_1 = c\cdot x_1 - s\cdot x_2, \\
&y_2 = s \cdot x_1 + c\cdot x_2.
\end{aligned}
\right.
$$
We want to verify that the angle between $\bx$ and $\by$ is actually $\theta$ (and that the rotation is counter-clockwise) after the Givens rotation $\bJ$, as shown in Figure~\ref{fig:rotation1}. First, we have
$$
\left\{
\begin{aligned}
&\cos(\alpha) =\frac{x_1}{\sqrt{x_1^2+x_2^2}}, \\
&\sin (\alpha) =\frac{x_2}{\sqrt{x_1^2+x_2^2}}.
\end{aligned}
\right.
\qquad
\text{and }\qquad
\left\{
\begin{aligned}
&\cos(\theta) =c, \\
&\sin (\theta) =s.
\end{aligned}
\right.
$$
By the angle-addition formula, $\cos(\theta+\alpha) = \cos(\theta)\cos(\alpha)-\sin(\theta)\sin(\alpha)$.
If we can show that this quantity equals $\frac{y_1}{\sqrt{y_1^2+y_2^2}}$, then the proof is complete.
For the former, $\cos(\theta+\alpha) = \cos(\theta)\cos(\alpha)-\sin(\theta)\sin(\alpha)=\frac{c\cdot x_1 - s\cdot x_2}{\sqrt{x_1^2+x_2^2}}$. For the latter, it can be verified that $\sqrt{y_1^2+y_2^2}=\sqrt{x_1^2+x_2^2}$, so $\frac{y_1}{\sqrt{y_1^2+y_2^2}} = \frac{c\cdot x_1 - s\cdot x_2}{\sqrt{x_1^2+x_2^2}}$. This completes the proof. Similarly, one can show that the angle between $\by=\bG\bx$ and $\bx$ in Figure~\ref{fig:rotation2} is also $\theta$, with the rotation being clockwise.
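The verification above can also be checked numerically. The following is a minimal NumPy sketch (the angle $\theta$ and the vector $\bx$ are arbitrary choices of ours):

```python
import numpy as np

theta = 0.3
c, s = np.cos(theta), np.sin(theta)
J = np.array([[c, -s],
              [s,  c]])              # y1 = c*x1 - s*x2, y2 = s*x1 + c*x2
x = np.array([2.0, 1.0])
y = J @ x

# the rotation preserves the norm of x ...
length_preserved = np.isclose(np.linalg.norm(y), np.linalg.norm(x))
# ... and advances the polar angle by exactly theta (counter-clockwise)
alpha = np.arctan2(x[1], x[0])
angle_advanced = np.isclose(np.arctan2(y[1], y[0]), alpha + theta)
```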
It can be easily verified that the $n$-th order Givens rotation (Definition~\ref{definition:givens-rotation-in-qr}, p.~\pageref{definition:givens-rotation-in-qr}) is an orthogonal matrix and its determinant is 1. For any vector $\bx =[x_1, x_2, \ldots, x_n]^\top \in \real^n$, we have $\by = \bG_{kl}\bx$, where
$$
\left\{
\begin{aligned}
&y_k = c \cdot x_k + s\cdot x_l, \\
&y_l = -s\cdot x_k +c\cdot x_l, \\
&y_j = x_j , & (j\neq k,l)
\end{aligned}
\right.
$$
That is, a Givens rotation applied to $\bx$ rotates two components of $\bx$ by some angle $\theta$ and leaves all other components the same.
When $\sqrt{x_k^2 + x_l^2} \neq 0$,
let $c = \frac{x_k}{\sqrt{x_k^2 + x_l^2}}$, $s=\frac{x_l}{\sqrt{x_k^2 + x_l^2}}$. Then,
$$
\left\{
\begin{aligned}
&y_k = \sqrt{x_k^2 + x_l^2}, \\
&y_l = 0, \\
&y_j = x_j . & (j\neq k,l)
\end{aligned}
\right.
$$
This finding is essential for the QR decomposition via the Givens rotation.
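The computation above can be sketched in NumPy; the helper `givens_rotation` below builds the dense $n\times n$ matrix $\bG_{kl}$ explicitly for illustration (practical codes apply the rotation to the two affected entries directly):

```python
import numpy as np

def givens_rotation(n, k, l, c, s):
    """Dense n-by-n Givens rotation G_{kl} acting on coordinates k and l (0-based)."""
    G = np.eye(n)
    G[k, k], G[k, l] = c, s
    G[l, k], G[l, l] = -s, c
    return G

x = np.array([3.0, 1.0, 4.0])
k, l = 0, 2
r = np.hypot(x[k], x[l])           # sqrt(x_k^2 + x_l^2) = 5
c, s = x[k] / r, x[l] / r
G = givens_rotation(len(x), k, l, c, s)
y = G @ x
# y = [5, 1, 0]: entry l is zeroed, entry k absorbs the norm, others unchanged
```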
\begin{corollary}[Basis From Givens Rotations Forwards]\label{corollary:basis-from-givens}
For any vector $\bx \in \real^n$, there exists a set of Givens rotations $\{\bG_{12}, \bG_{13}, \ldots, \bG_{1n}\}$ such that $\bG_{1n}\ldots \bG_{13}\bG_{12}\bx = ||\bx||\be_1$ where $\be_1\in \real^n$ is the first unit basis in $\real^n$.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:basis-from-givens}]
From the finding above, we can find a set of Givens rotations $\bG_{12}, \bG_{13}, \bG_{14}$ such that
$$
\bG_{12}\bx = \left[\sqrt{x_1^2 + x_2^2}, 0, x_3, \ldots, x_n \right]^\top,
$$
$$
\bG_{13}\bG_{12}\bx = \left[\sqrt{x_1^2 + x_2^2+x_3^2}, 0, 0, x_4, \ldots, x_n \right]^\top,
$$
and
$$
\bG_{14}\bG_{13}\bG_{12}\bx = \left[\sqrt{x_1^2 + x_2^2+x_3^2+x_4^2},0, 0, 0, x_5, \ldots, x_n \right]^\top.
$$
Continuing this process, we will obtain $\bG_{1n}\ldots \bG_{13}\bG_{12}\bx = ||\bx||\be_1$.
\end{proof}
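The forward procedure in the proof can be sketched as follows (the function name `rotate_to_e1` is our own); each rotation zeros one more entry while the first entry accumulates the norm:

```python
import numpy as np

def rotate_to_e1(x):
    """Apply G_{12}, G_{13}, ..., G_{1n} in turn so that x maps to ||x|| e_1."""
    x = np.asarray(x, dtype=float).copy()
    for l in range(1, len(x)):       # G_{1,l+1} in the 1-based notation of the text
        r = np.hypot(x[0], x[l])
        if r > 0:
            x[0], x[l] = r, 0.0      # effect of the rotation on the two entries
    return x

y = rotate_to_e1([1.0, 2.0, 2.0, 4.0])
# y = [5, 0, 0, 0] since ||x|| = sqrt(1 + 4 + 4 + 16) = 5
```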
\begin{remark}[Basis From Givens Rotations Backwards]\label{remark:basis-from-givens2}
In Corollary~\ref{corollary:basis-from-givens}, we found the Givens rotations that introduce zeros from the $2$-nd entry to the $n$-th entry (i.e., forwards). Sometimes we want the reverse order, i.e., to introduce zeros from the $n$-th entry to the $2$-nd entry such that $\bG_{12}\bG_{13}\ldots \bG_{1n}\bx = ||\bx||\be_1$, where $\be_1\in \real^n$ is the first unit basis in $\real^n$.
The procedure is similar: we can find $\bG_{1n},\bG_{1,(n-1)}, \bG_{1,(n-2)}$ such that
$$
\bG_{1n}\bx = \left[\sqrt{x_1^2 + x_n^2}, x_2, x_3, \ldots, x_{n-1}, 0 \right]^\top,
$$
$$
\bG_{1,(n-1)}\bG_{1n}\bx = \left[\sqrt{x_1^2 + x_{n-1}^2+x_n^2}, x_2, x_3, \ldots, x_{n-2}, 0, 0 \right]^\top,
$$
and
$$
\bG_{1,(n-2)}\bG_{1,(n-1)}\bG_{1n}\bx = \left[\sqrt{x_1^2 + x_{n-2}^2+x_{n-1}^2+x_n^2}, x_2, x_3, \ldots,x_{n-3},0, 0, 0 \right]^\top.
$$
Continuing this process, we will obtain $\bG_{12}\bG_{13}\ldots \bG_{1n}\bx = ||\bx||\be_1$.
\paragraph{An alternative form} Alternatively, there are rotations $\{\bG_{12}, \bG_{23}, \ldots,\bG_{(n-1),n}\}$ such that $\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}\bx = ||\bx||\be_1$ where $\be_1\in \real^n$ is the first unit basis in $\real^n$ with
$$
\bG_{(n-1),n}\bx = \left[x_1, x_2, \ldots, x_{n-2},\sqrt{x_{n-1}^2 + x_n^2}, 0 \right]^\top,
$$
$$
\bG_{(n-2),(n-1)}\bG_{(n-1),n}\bx = \left[x_1, x_2, \ldots,x_{n-3}, \sqrt{x_{n-2}^2+x_{n-1}^2 + x_n^2}, 0, 0 \right]^\top,
$$
and
$$
\begin{aligned}
\bG_{(n-3),(n-2)}\bG_{(n-2),(n-1)}\bG_{(n-1),n}\bx &= \\
&\,\left[x_1, x_2, \ldots ,x_{n-4}, \sqrt{x_{n-3}^2+x_{n-2}^2+x_{n-1}^2 + x_n^2},0, 0, 0 \right]^\top.
\end{aligned}
$$
Continuing this process, we will obtain $\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}\bx = ||\bx||\be_1$.
\end{remark}
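The alternative adjacent-plane form can be sketched the same way (the function name is ours); zeros are introduced from the bottom entry upwards:

```python
import numpy as np

def rotate_to_e1_adjacent(x):
    """Apply G_{(n-1),n}, ..., G_{23}, G_{12} so that x maps to ||x|| e_1."""
    x = np.asarray(x, dtype=float).copy()
    for k in range(len(x) - 1, 0, -1):   # zero the k-th entry into entry k-1
        r = np.hypot(x[k - 1], x[k])
        if r > 0:
            x[k - 1], x[k] = r, 0.0
    return x

y = rotate_to_e1_adjacent([1.0, 2.0, 2.0, 4.0])
# y = [5, 0, 0, 0], the same result as in the forward order
```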
Corollary~\ref{corollary:basis-from-givens} above suggests a way to introduce zeros: we can \textbf{rotate} each column of a matrix onto the basis vector $\be_1$, whose entries are all zero except the first.
Let $\bA=[\ba_1, \ba_2, \ldots, \ba_n] \in \real^{m\times n}$ be the column partition of $\bA$, and let
\begin{equation}\label{equation:qr-rotate-to-chooose-r-numeraically}
\bG_1 = \bG_{1m}\ldots \bG_{13}\bG_{12},
\end{equation}
where the rotations are chosen as in Corollary~\ref{corollary:basis-from-givens} so that $\bG_1\ba_1 = ||\ba_1||\be_1$, with $\be_1 = [1;0;0;\ldots;0]\in \real^m$ the first basis vector of $\real^m$.
Then
\begin{equation}\label{equation:rotate-qr-projection-step1}
\begin{aligned}
\bG_1\bA &= [\bG_1\ba_1, \bG_1\ba_2, \ldots, \bG_1\ba_n] =
\begin{bmatrix}
||\ba_1|| & \bR_{1,2:n} \\
\bzero& \bB_2
\end{bmatrix},
\end{aligned}
\end{equation}
which rotates $\ba_1$ to $||\ba_1||\be_1$ and introduces zeros below the diagonal in the $1$-st column.
We can then apply this process to $\bB_2$ in Equation~\eqref{equation:rotate-qr-projection-step1} to make all entries below the $(2,2)$-th entry zero.
Suppose $\bB_2 = [\bb_2, \bb_3, \ldots, \bb_n]$, and let
$$
\bG_2 = \bG_{2m}\ldots\bG_{24}\bG_{23},
$$
where $\bG_{2m}, \ldots, \bG_{24}, \bG_{23}$ can be inferred from context.
Then
$$
\begin{aligned}
\bG_2\bG_1\bA &= [\bG_2\bG_1\ba_1, \bG_2\bG_1\ba_2, \ldots, \bG_2\bG_1\ba_n] =
\begin{bmatrix}
||\ba_1|| & \bR_{12} & \bR_{1,3:n} \\
0 & ||\bb_2|| & \bR_{2,3:n} \\
\bzero & \bzero &\bC_3
\end{bmatrix}.
\end{aligned}
$$
The same process can go on, and we will finally triangularize $\bA$: $\bA = (\bG_n \bG_{n-1}\ldots\bG_1)^{-1} \bR = \bQ\bR$. Since the $\bG_i$'s are orthogonal, we have $\bQ=(\bG_n \bG_{n-1}\ldots\bG_1)^{-1} = \bG_1^\top \bG_2^\top\ldots\bG_n^\top $, and
\begin{equation}\label{equation:givens-q}
\begin{aligned}
\bG_1^\top \bG_2^\top\ldots\bG_n^\top &=(\bG_n \ldots \bG_2 \bG_1)^\top \\
&=\left\{(\bG_{nm} \ldots \bG_{n,(n+1)}) \ldots (\bG_{2m}\ldots \bG_{23}) ( \bG_{1m} \ldots \bG_{12} )\right\}^\top .
\end{aligned}
\end{equation}
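The whole triangularization can be sketched as follows; this is a didactic implementation of the column-by-column scheme (production codes avoid forming the rotations as explicit $2\times 2$ matrices):

```python
import numpy as np

def givens_qr(A):
    """QR decomposition via Givens rotations, zeroing entries column by column."""
    m, n = A.shape
    R = np.asarray(A, dtype=float).copy()
    Q = np.eye(m)
    for j in range(n):                         # G_j clears column j below the diagonal
        for i in range(j + 1, m):              # rotation in the (j, i)-plane
            r = np.hypot(R[j, j], R[i, j])
            if r == 0:
                continue
            c, s = R[j, j] / r, R[i, j] / r
            G = np.array([[c, s], [-s, c]])
            R[[j, i], :] = G @ R[[j, i], :]    # update the two affected rows
            Q[:, [j, i]] = Q[:, [j, i]] @ G.T  # accumulate Q = G_1^T G_2^T ... G_n^T
    return Q, R

A = np.random.default_rng(1).standard_normal((5, 4))
Q, R = givens_qr(A)    # Q orthogonal, R upper triangular, A = Q R
```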
The Givens rotation algorithm works better when $\bA$ already has many zeros below the main diagonal. An example with a $5\times 4$ matrix is shown as follows, where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
\paragraph{Givens rotations in $\bG_1$} For the $5\times 4$ example, we have $\bG_1 = \bG_{15}\bG_{14}\bG_{13}\bG_{12}$, and the process is shown as follows:
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12}}{\rightarrow}
\begin{sbmatrix}{\bG_{12}\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bG_{13}}{\rightarrow}
\begin{sbmatrix}{\bG_{13}\bG_{12}\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes &\boxtimes \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}\\
&\stackrel{\bG_{14}}{\rightarrow}
\begin{sbmatrix}{\bG_{14}\bG_{13}\bG_{12}\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes &\boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
\stackrel{\bG_{15}}{\rightarrow}
\begin{sbmatrix}{\bG_{15}\bG_{14}\bG_{13}\bG_{12}\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes &\boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\end{sbmatrix}
\end{aligned}
$$
\paragraph{Givens rotation as a big picture} Taking each of $\bG_1, \bG_2, \bG_3, \bG_4$ as a single matrix, we have
$$
\begin{aligned}
\begin{sbmatrix}{\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes
\end{sbmatrix}
&\stackrel{\bG_1}{\rightarrow}
\begin{sbmatrix}{\bG_1\bA}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bG_2}{\rightarrow}
\begin{sbmatrix}{\bG_2\bG_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes}
\end{sbmatrix}\\
&\stackrel{\bG_3}{\rightarrow}
\begin{sbmatrix}{\bG_3\bG_2\bG_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bG_4}{\rightarrow}
\begin{sbmatrix}{\bG_4\bG_3\bG_2\bG_1\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes \\
0 & 0 & 0 & \bm{\boxtimes} \\
0 & 0 & 0 & \bm{0}
\end{sbmatrix}
\end{aligned}
$$
\paragraph{Orders to introduce the zeros} With Givens rotations for the QR decomposition, we are free to choose different orders in which to introduce the zeros of $\bR$. In our case, we introduced zeros column by column. It is also possible to introduce zeros row by row.
\index{Uniqueness}
\section{Uniqueness of the QR Decomposition}\label{section:nonunique-qr}
The results of the QR decomposition from the Gram-Schmidt process, the Householder algorithm, and the Givens algorithm differ. Even within the Householder algorithm, we have different methods to choose the sign of $r_1$ in Equation~\eqref{equation:qr-householder-to-chooose-r-numeraically}. In this sense, the QR decomposition is not unique.
However, if we use just the procedure described in the Gram-Schmidt process, or systematically choose the sign in the Householder algorithm, then the decomposition is unique.
The uniqueness of the \textit{reduced} QR decomposition for full column rank matrix $\bA$ is assured when $\bR$ has positive diagonals by inductive analysis \citep{lu2021numerical}.
We here provide another proof of the uniqueness of the \textit{reduced} QR decomposition when the diagonal values of $\bR$ are positive, which will shed light on the implicit Q theorem in the Hessenberg decomposition (Section~\ref{section:hessenberg-decomposition}, p.~\pageref{section:hessenberg-decomposition}) and the tridiagonal decomposition (Section~\ref{section:tridiagonal-decomposition}, p.~\pageref{section:tridiagonal-decomposition}).
\begin{corollary}[Uniqueness of the reduced QR Decomposition]\label{corollary:unique-qr}
Suppose matrix $\bA$ is an $m\times n$ matrix with full column rank $n$ and $m\geq n$. Then, the \textit{reduced} QR decomposition is unique if the main diagonal values of $\bR$ are positive.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:unique-qr}]
Suppose the \textit{reduced} QR decomposition is not unique. Completing the reduced decompositions into \textit{full} QR decompositions, we can find two full decompositions $\bA=\bQ_1\bR_1 = \bQ_2\bR_2$, which implies $\bR_1 = \bQ_1^{-1}\bQ_2\bR_2 = \bV \bR_2$, where $\bV= \bQ_1^{-1}\bQ_2$ is an orthogonal matrix. Writing out the equation, we have
$$
\begin{aligned}
\bR_1 &=
\begin{bmatrix}
r_{11} & r_{12}& \dots & r_{1n}\\
& r_{22}& \dots & r_{2n}\\
& & \ddots & \vdots \\
\multicolumn{2}{c}{\raisebox{1.3ex}[0pt]{\Huge0}} & & r_{nn} \\
\bzero & \bzero &\ldots & \bzero
\end{bmatrix}=
\begin{bmatrix}
v_{11}& v_{12} & \ldots & v_{1m}\\
v_{21} & v_{22} & \ldots & v_{2m}\\
\vdots & \vdots & \ddots & \vdots\\
v_{m1} & v_{m2} & \ldots & v_{\textcolor{black}{mm}}
\end{bmatrix}
\begin{bmatrix}
s_{11} & s_{12}& \dots & s_{1n}\\
& s_{22}& \dots & s_{2n}\\
& & \ddots & \vdots \\
\multicolumn{2}{c}{\raisebox{1.3ex}[0pt]{\Huge0}} & & s_{nn} \\
\bzero & \bzero &\ldots & \bzero
\end{bmatrix}= \bV\bR_2.
\end{aligned}
$$
This implies
$$
r_{11} = v_{11} s_{11}, \qquad v_{21}=v_{31}=v_{41}=\ldots=v_{m1}=0.
$$
Since $\bV$ has mutually orthonormal columns, its first column has norm 1; combined with $v_{21}=v_{31}=\ldots=v_{m1}=0$, this gives $v_{11} = \pm 1$.
By assumption, $r_{ii}> 0$ and $s_{ii}> 0$ for $i\in \{1,2,\ldots,n\}$; in particular, $r_{11}> 0$ and $s_{11}> 0$, so $v_{11}$ can only be $1$. Since $\bV$ is an orthogonal matrix, we also have
$$
v_{12}=v_{13}=v_{14}=\ldots=v_{1m}=0.
$$
Applying this process to the submatrices of $\bR_1, \bV, \bR_2$, we find that the upper-left submatrix of $\bV$ is an identity: $\bV[1:n,1:n]=\bI_n$, so that $\bR_1=\bR_2$. This implies $\bQ_1[:,1:n]=\bQ_2[:,1:n]$, contradicting the assumption; hence the reduced QR decomposition is unique.
\end{proof}
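The corollary can be illustrated numerically: two rather different algorithms, once their $\bR$ factors are normalized to have positive diagonals, return the same factors. The sign-fixing helper and the modified Gram-Schmidt routine below are our own sketches:

```python
import numpy as np

def qr_positive(A):
    """Reduced QR with positive diag(R), by flipping signs of Q's columns / R's rows."""
    Q, R = np.linalg.qr(A)          # Householder-based; diagonal signs arbitrary
    d = np.sign(np.diag(R))
    d[d == 0] = 1.0
    return Q * d, R * d[:, None]    # Q D and D R with D = diag(d), D^2 = I

def mgs_qr(A):
    """Modified Gram-Schmidt; its R has a positive diagonal by construction."""
    m, n = A.shape
    Q = np.array(A, dtype=float)
    R = np.zeros((n, n))
    for j in range(n):
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
        R[j, j + 1:] = Q[:, j] @ Q[:, j + 1:]
        Q[:, j + 1:] -= np.outer(Q[:, j], R[j, j + 1:])
    return Q, R

A = np.random.default_rng(0).standard_normal((6, 4))
Q1, R1 = qr_positive(A)
Q2, R2 = mgs_qr(A)
# the two factorizations coincide, as the uniqueness result predicts
```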
\index{Row space}
\section{LQ Decomposition}
We previously proved the existence of the QR decomposition via the Gram-Schmidt process, in which we are interested in the column space of a matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n] \in \real^{m\times n}$.
However, in many applications (see \citet{schilders2009solution}), we are instead interested in the row space of a matrix
$\bB=[\bb_1^\top; \bb_2^\top; \ldots;\bb_m^\top] \in \real^{m\times n}$, where $\bb_i$ is the $i$-th row of $\bB$. The successive spaces spanned by the rows $\bb_1, \bb_2, \ldots$ of $\bB$ are
$$
\cspace([\bb_1])\,\,\,\, \subseteq\,\,\,\, \cspace([\bb_1, \bb_2]) \,\,\,\,\subseteq\,\,\,\, \cspace([\bb_1, \bb_2, \bb_3])\,\,\,\, \subseteq\,\,\,\, \ldots.
$$
The QR decomposition thus has a sibling that finds an orthogonal basis for the row space.
By applying the QR decomposition to $\bB^\top = \bQ_0\bR$, we recover the LQ decomposition of the matrix: $\bB = \bL \bQ$, where $\bQ = \bQ_0^\top$ and $\bL = \bR^\top$.
\begin{theoremHigh}[LQ Decomposition]\label{theorem:lq-decomposition}
Every $m\times n$ matrix $\bB$ (with linearly independent or dependent rows) with $n\geq m$ can be factored as
$$
\bB = \bL\bQ,
$$
where
1. \textbf{Reduced}: $\bL$ is an $m\times m$ lower triangular matrix and $\bQ$ is $m\times n$ with orthonormal rows, which is known as the \textbf{reduced LQ decomposition};
2. \textbf{Full}: $\bL$ is an $m\times n$ lower triangular matrix and $\bQ$ is $n\times n$ with orthonormal rows, which is known as the \textbf{full LQ decomposition}. If we further restrict the lower triangular matrix to be a square matrix, the full LQ decomposition can be written as
$$
\bB = \begin{bmatrix}
\bL_0 & \bzero
\end{bmatrix}\bQ,
$$
where $\bL_0$ is an $m\times m$ square lower triangular matrix.
\end{theoremHigh}
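In code, the reduction of the LQ decomposition to the QR decomposition of $\bB^\top$ is immediate (a sketch; the function name `lq` is ours):

```python
import numpy as np

def lq(B):
    """Reduced LQ decomposition of B, obtained from the reduced QR of B^T."""
    Q0, R = np.linalg.qr(B.T)   # B^T = Q0 R
    return R.T, Q0.T            # B = L Q: L = R^T lower triangular, Q = Q0^T

B = np.random.default_rng(2).standard_normal((3, 5))
L, Q = lq(B)   # L is 3x3 lower triangular; Q is 3x5 with orthonormal rows
```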
\paragraph{Row-pivoted LQ (RPLQ)\index{Row-pivoted}\index{RPLQ}} Similar to the column-pivoted QR in Section~\ref{section:cpqr} (p.~\pageref{section:cpqr}), there exists a row-pivoted LQ decomposition:
$$
\left\{
\begin{aligned}
\text{Reduced RPLQ: }&\qquad
\bP\bB &=&
\underbrace{\begin{bmatrix}
\bL_{11} \\
\bL_{21}
\end{bmatrix}}_{m\times r}
\underbrace{\bQ_r }_{r\times n};\\
\text{Full RPLQ: }&\qquad
\bP\bB &=&
\underbrace{\begin{bmatrix}
\bL_{11} & \bzero \\
\bL_{21} & \bzero
\end{bmatrix}}_{m\times m}
\underbrace{\bQ }_{m\times n},\\
\end{aligned}
\right.
$$
where $\bL_{11}\in \real^{r\times r}$ is lower triangular, $\bQ_r$ (i.e., $\bQ_{1:r,:}$) spans the same row space as $\bB$, and $\bP$ is a permutation matrix that moves a set of independent rows to the top.
\section{Two-Sided Orthogonal Decomposition}
\begin{theoremHigh}[Two-Sided Orthogonal Decomposition]\label{theorem:two-sided-orthogonal}
Let $\bA\in \real^{n\times n}$ be a square matrix with rank $r$, and let the full CPQR and RPLQ of $\bA$ be given by
$$\bA\bP_1=\bQ_1
\begin{bmatrix}
\bR_{11} & \bR_{12}\\
\bzero & \bzero
\end{bmatrix},\qquad
\bP_2\bA=
\begin{bmatrix}
\bL_{11} & \bzero \\
\bL_{21} & \bzero
\end{bmatrix}
\bQ_2
$$
respectively. Then
$$
\bA\bP\bA = \bQ_1
\underbrace{\begin{bmatrix}
\bR_{11}\bL_{11}+\bR_{12}\bL_{21} & \bzero \\
\bzero & \bzero
\end{bmatrix}}_{\text{rank $r$}}
\bQ_2,
$$
where the first $r$ columns of $\bQ_1$ span the same column space as $\bA$, the first $r$ rows of $\bQ_2$ span the same row space as $\bA$, and $\bP$ is a permutation matrix. We call this decomposition the \textbf{two-sided orthogonal decomposition}.
\end{theoremHigh}
This decomposition is very similar in spirit to the SVD $\bA=\bU\bSigma\bV^\top$, in which the first $r$ columns of $\bU$ span the column space of $\bA$ and the first $r$ columns of $\bV$ span the row space of $\bA$ (as we shall see in Lemma~\ref{lemma:svd-four-orthonormal-Basis}, p.~\pageref{lemma:svd-four-orthonormal-Basis}). Therefore, the two-sided orthogonal decomposition can be regarded as an inexpensive alternative to the SVD in this sense.
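The construction can be verified numerically. The sketch below assumes SciPy's column-pivoted QR, `scipy.linalg.qr(..., pivoting=True)`, and obtains the RPLQ from the pivoted QR of $\bA^\top$:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
r = 2
A = rng.standard_normal((4, r)) @ rng.standard_normal((r, 4))   # rank-2 matrix

# full CPQR: A P1 = Q1 * [R11 R12; 0 0]
Q1, R, piv1 = qr(A, pivoting=True)
P1 = np.eye(4)[:, piv1]             # A @ P1 == A[:, piv1]

# full RPLQ: P2 A = [L11 0; L21 0] * Q2, via the CPQR of A^T
Q2t, Lt, piv2 = qr(A.T, pivoting=True)
P2 = np.eye(4)[piv2, :]             # P2 @ A == A[piv2, :]
L, Q2 = Lt.T, Q2t.T

F = R @ L   # middle factor: nonzero only in its top-left r x r block
```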
\index{Fundamental spaces}
\index{Orthonormal basis}
\begin{lemma}[Four Orthonormal Bases]
Given the two-sided orthogonal decomposition of a matrix $\bA\in \real^{n\times n}$ with rank $r$, $\bA\bP\bA = \bU \bF\bV^\top$, let $\bU=[\bu_1, \bu_2, \ldots,\bu_n]$ and $\bV=[\bv_1, \bv_2, \ldots, \bv_n]$ be the column partitions of $\bU$ and $\bV$. Then, we have the following properties:
$\bullet$ $\{\bv_1, \bv_2, \ldots, \bv_r\} $ is an orthonormal basis of $\cspace(\bA^\top)$;
$\bullet$ $\{\bv_{r+1},\bv_{r+2}, \ldots, \bv_n\}$ is an orthonormal basis of $\nspace(\bA)$;
$\bullet$ $\{\bu_1,\bu_2, \ldots,\bu_r\}$ is an orthonormal basis of $\cspace(\bA)$;
$\bullet$ $\{\bu_{r+1}, \bu_{r+2},\ldots,\bu_n\}$ is an orthonormal basis of $\nspace(\bA^\top)$.
\end{lemma}
\section{Rank-One Changes}
We previously discussed the rank-one update/downdate of the Cholesky decomposition in Section~\ref{section:cholesky-rank-one-update} (p.~\pageref{section:cholesky-rank-one-update}). The rank-one change $\bA^\prime$ of matrix $\bA$ in the QR decomposition is defined in a similar form:\index{Rank-one update}
$$
\begin{aligned}
\bA^\prime &= \bA + \bu\bv^\top, \\
\downarrow &\gap \downarrow\\
\bQ^\prime\bR^\prime &=\bQ\bR + \bu\bv^\top,
\end{aligned}
$$
where setting $\bA^\prime = \bA - (-\bu)\bv^\top$ recovers the downdate form, so the update and downdate in the QR decomposition take the same form. Letting $\bw = \bQ^\top\bu$, we have
$$
\bA^\prime = \bQ(\bR + \bw\bv^\top).
$$
From the second form in Remark~\ref{remark:basis-from-givens2} (p.~\pageref{remark:basis-from-givens2}) on introducing zeros backwards, there exists a set of Givens rotations $\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}$ such that
$$
\bG_{12} \bG_{23} \ldots \bG_{(n-1),n} \bw = \pm ||\bw|| \be_1,
$$
where $\bG_{(k-1),k}$ is the Givens rotation in the plane of coordinates $k-1$ and $k$ that introduces a zero in the $k$-th entry of $\bw$. Applying these rotations to $\bR$, we have
$$
\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}\bR = \bH_0 ,
$$
where the Givens rotations in this \textit{reverse order} transform the upper triangular $\bR$ into a ``simple" upper Hessenberg matrix, which is close to upper triangular (see Definition~\ref{definition:upper-hessenbert}, p.~\pageref{definition:upper-hessenbert}, to be introduced in the Hessenberg decomposition). If instead the rotations transform $\bw$ into $\pm ||\bw||\be_1$ in the \textit{forward order} of Corollary~\ref{corollary:basis-from-givens} (p.~\pageref{corollary:basis-from-givens}), we do not obtain this upper Hessenberg $\bH_0$. To see this, suppose $\bR\in \real^{5\times 5}$; an example is shown as follows, where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed. The backward rotations result in the upper Hessenberg $\bH_0$, which is relatively simple to handle:
$$
\begin{aligned}
\text{Backward: }
\begin{sbmatrix}{\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{45}}{\rightarrow}
\begin{sbmatrix}{\bG_{45}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & 0 & \bm{\boxtimes}& \bm{\boxtimes}
\end{sbmatrix}
\stackrel{\bG_{34}}{\rightarrow}
\begin{sbmatrix}{\bG_{34}\bG_{45}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & 0 & \boxtimes & \boxtimes
\end{sbmatrix}\\
&\stackrel{\bG_{23}}{\rightarrow}
\begin{sbmatrix}{\bG_{23}\bG_{34}\bG_{45}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
\stackrel{\bG_{12}}{\rightarrow}
\begin{sbmatrix}{\bG_{12}\bG_{23}\bG_{34}\bG_{45}\bR}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}.
\end{aligned}
$$
And the forward rotations result in a \textbf{full matrix}:
$$
\begin{aligned}
\text{Forward: }
\begin{sbmatrix}{\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes
\end{sbmatrix}
&\stackrel{\bG_{12}}{\rightarrow}
\begin{sbmatrix}{\bG_{12}\bR}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes
\end{sbmatrix}
\stackrel{\bG_{23}}{\rightarrow}
\begin{sbmatrix}{\bG_{23}\bG_{12}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes
\end{sbmatrix}\\
&\stackrel{\bG_{34}}{\rightarrow}
\begin{sbmatrix}{\bG_{34}\bG_{23}\bG_{12}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
0 & 0 & 0 & 0& \boxtimes
\end{sbmatrix}
\stackrel{\bG_{45}}{\rightarrow}
\begin{sbmatrix}{\bG_{45}\bG_{34}\bG_{23}\bG_{12}\bR}
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes}& \bm{\boxtimes} \\
\end{sbmatrix}.
\end{aligned}
$$
Generally, the backward rotations result in
$$
\bG_{12} \bG_{23} \ldots \bG_{(n-1),n} (\bR+\bw\bv^\top) = \bH_0 \pm ||\bw|| \be_1 \bv^\top = \bH,
$$
which is also upper Hessenberg. Similar to triangularization via the Givens rotation in Section~\ref{section:qr-givens} (p.~\pageref{section:qr-givens}), there exists a set of rotations $\bJ_{12}, \bJ_{23}, \ldots, \bJ_{(n-1),n}$ such that
$$
\bJ_{(n-1),n} \ldots \bJ_{23}\bJ_{12}\bH = \bR^\prime
$$
is upper triangular. Following the $5\times 5$ example, the triangularization is shown as follows:
$$
\begin{aligned}
\underbrace{\bH_0 \pm ||\bw|| \be_1 \bv^\top}_{\bH} =
\begin{sbmatrix}{\bH}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
&\stackrel{\bJ_{12}}{\rightarrow}
\begin{sbmatrix}{\bJ_{12}\bH}
\bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
\bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \boxtimes & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
\stackrel{\bJ_{23}}{\rightarrow}
\begin{sbmatrix}{\bJ_{23}\bJ_{12}\bH}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}\\
&\stackrel{\bJ_{34}}{\rightarrow}
\begin{sbmatrix}{\bJ_{34}\bJ_{23}\bJ_{12}\bH}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & 0 & \boxtimes& \boxtimes
\end{sbmatrix}
\stackrel{\bJ_{45}}{\rightarrow}
\begin{sbmatrix}{\bJ_{45}\bJ_{34}\bJ_{23}\bJ_{12}\bH}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & 0 & \bm{0} & \bm{\boxtimes} \\
\end{sbmatrix}.
\end{aligned}
$$
And the QR decomposition of $\bA^\prime$ is thus given by
$$
\bA^\prime = \bQ^\prime \bR^\prime,
$$
where
\begin{equation}\label{equation:qr-rank-one-update}
\left\{
\begin{aligned}
\bR^\prime &=(\bJ_{(n-1),n} \ldots \bJ_{23}\bJ_{12}) (\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}) (\bR+\bw\bv^\top);\\
\bQ^\prime &= \bQ\left\{(\bJ_{(n-1),n} \ldots \bJ_{23}\bJ_{12}) (\bG_{12} \bG_{23} \ldots \bG_{(n-1),n}) \right\}^\top .\\
\end{aligned}
\right.
\end{equation}
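Equation~\eqref{equation:qr-rank-one-update} can be sketched as follows for a square matrix; the helper names are ours, and the two passes mirror the backward and forward rotation sequences above:

```python
import numpy as np

def _rot(a, b):
    """c, s with [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

def qr_rank_one_update(Q, R, u, v):
    """Given A = Q R (square, full QR), return Q', R' with A + u v^T = Q' R'."""
    n = R.shape[0]
    Q, R = Q.copy(), R.astype(float)
    w = Q.T @ u
    # backward pass G_{12} G_{23} ... G_{(n-1),n}: w -> ||w|| e1, R -> Hessenberg H0
    for k in range(n - 1, 0, -1):
        c, s = _rot(w[k - 1], w[k])
        G = np.array([[c, s], [-s, c]])
        w[k - 1:k + 1] = G @ w[k - 1:k + 1]
        R[k - 1:k + 1, :] = G @ R[k - 1:k + 1, :]
        Q[:, k - 1:k + 1] = Q[:, k - 1:k + 1] @ G.T
    H = R + np.outer(w, v)   # w = ||w|| e1: only row 1 changes, H stays Hessenberg
    # forward pass J_{12}, ..., J_{(n-1),n}: triangularize H
    for k in range(n - 1):
        c, s = _rot(H[k, k], H[k + 1, k])
        G = np.array([[c, s], [-s, c]])
        H[k:k + 2, :] = G @ H[k:k + 2, :]
        Q[:, k:k + 2] = Q[:, k:k + 2] @ G.T
    return Q, H              # H is now the updated upper triangular R'

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
Q, R = np.linalg.qr(A)
u, v = rng.standard_normal(5), rng.standard_normal(5)
Q2, R2 = qr_rank_one_update(Q, R, u, v)
```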
\section{Appending or Deleting a Column}\label{section:append-column-qr}
\paragraph{Deleting a column}
Suppose the QR decomposition of $\bA\in \real^{m\times n}$ is given by $\bA=\bQ\bR$, where the column partition of $\bA$ is $\bA=[\ba_1,\ba_2,\ldots,\ba_n]$. Now, suppose we delete the $k$-th column of $\bA$ so that $\bA^\prime = [\ba_1,\ldots,\ba_{k-1},\ba_{k+1},\ldots,\ba_n] \in \real^{m\times (n-1)}$. We want to find the QR decomposition of $\bA^\prime$ efficiently. Suppose further that $\bR$ has the following form:
$$
\begin{aligned}
\begin{blockarray}{ccccc}
\begin{block}{c[ccc]c}
& \bR_{11} & \ba & \bR_{12} & k-1 \\
\bR = & \bzero & r_{kk} & \bb^\top & 1 \\
& \bzero &\bzero& \bR_{22} & m-k \\
\end{block}
& k-1 & 1 & n-k & \\
\end{blockarray}.\\
\end{aligned}
$$
Clearly,
$$
\bQ^\top \bA^\prime =
\begin{bmatrix}
\bR_{11} &\bR_{12} \\
\bzero & \bb^\top \\
\bzero & \bR_{22}
\end{bmatrix} = \bH
$$
is upper Hessenberg. A $6\times 5$ example is shown as follows where $k=3$:
$$
\begin{aligned}
\begin{sbmatrix}{\bR = \bQ^\top\bA}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0 & 0& \boxtimes \\
0 & 0 & 0 & 0& 0
\end{sbmatrix}
&\longrightarrow
\begin{sbmatrix}{\bH = \bQ^\top\bA^\prime}
\boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes& \boxtimes \\
0 & 0 & 0& \boxtimes \\
0 & 0 & 0& 0
\end{sbmatrix}.
\end{aligned}
$$
Again, for columns $k$ to $n-1$ of $\bH$, there exists a set of rotations $\bG_{k,k+1}$, $\bG_{k+1,k+2}$, $\ldots$, $\bG_{n-1,n}$ that introduce zeros for the elements $h_{k+1,k}$, $h_{k+2,k+1}$, $\ldots$, $h_{n,n-1}$ of $\bH$. The triangular matrix $\bR^\prime$ is then given by
$$
\bR^\prime = \bG_{n-1,n}\ldots \bG_{k+1,k+2}\bG_{k,k+1}\bQ^\top \bA^\prime.
$$
And the orthogonal matrix
\begin{equation}\label{equation:qr-delete-column-finalq}
\bQ^\prime = (\bG_{n-1,n}\ldots \bG_{k+1,k+2}\bG_{k,k+1}\bQ^\top )^\top = \bQ \bG_{k,k+1}^\top \bG_{k+1,k+2}^\top \ldots \bG_{n-1,n}^\top,
\end{equation}
such that $\bA^\prime = \bQ^\prime\bR^\prime$.
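The column-deletion update can be sketched as follows (0-based indexing; the helper names are ours):

```python
import numpy as np

def _rot(a, b):
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

def qr_delete_column(Q, R, k):
    """Given the full QR A = Q R, return the full QR of A with column k removed."""
    H = np.delete(R, k, axis=1)        # upper Hessenberg from column k onwards
    Q = Q.copy()
    for j in range(k, H.shape[1]):     # zero the subdiagonal entries h_{j+1,j}
        c, s = _rot(H[j, j], H[j + 1, j])
        G = np.array([[c, s], [-s, c]])
        H[j:j + 2, :] = G @ H[j:j + 2, :]
        Q[:, j:j + 2] = Q[:, j:j + 2] @ G.T
    return Q, H

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 5))
Q, R = np.linalg.qr(A, mode='complete')
Q2, R2 = qr_delete_column(Q, R, 2)     # drop the third column
```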
\paragraph{Appending a column}
Similarly, suppose $\widetilde{\bA} = [\ba_1,\ldots,\ba_k,\bw,\ba_{k+1},\ldots,\ba_n]$, where we insert $\bw$ as the $(k+1)$-th column of $\bA$. We can obtain
$$
\bQ^\top \widetilde{\bA} = [\bQ^\top\ba_1,\ldots, \bQ^\top\ba_k, \bQ^\top\bw, \bQ^\top\ba_{k+1}, \ldots,\bQ^\top\ba_n] = \widetilde{\bH}.
$$
A set of Givens rotations $\bJ_{m-1,m}, \bJ_{m-2,m-1}, \ldots, \bJ_{k+1,k+2}$ can introduce zeros for the $\widetilde{h}_{m,k+1}$, $\widetilde{h}_{m-1,k+1}$, $\ldots$, $\widetilde{h}_{k+2,k+1}$ elements of $\widetilde{\bH}$ such that
$$
\widetilde{\bR} =\bJ_{k+1,k+2}\ldots \bJ_{m-2,m-1}\bJ_{m-1,m} \bQ^\top \widetilde{\bA}.
$$
Suppose $\widetilde{\bH}$ is of size $6\times 5$ and $k=2$; an example is shown as follows, where $\boxtimes$ represents a value that is not necessarily zero, and \textbf{boldface} indicates the value has just been changed.
$$
\begin{aligned}
\begin{sbmatrix}{\widetilde{\bH}}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & 0& \boxtimes \\
0 & 0 & \boxtimes & 0& 0 \\
0 & 0 & \boxtimes & 0& 0
\end{sbmatrix}
&\stackrel{\bJ_{56}}{\rightarrow}
\begin{sbmatrix}{\bJ_{56}\widetilde{\bH} \rightarrow \widetilde{h}_{63}=0}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \boxtimes & 0& \boxtimes \\
0 & 0 & \bm{\boxtimes} & 0& 0 \\
0 & 0 & \bm{0} & 0& 0
\end{sbmatrix}
\stackrel{\bJ_{45}}{\rightarrow}
\begin{sbmatrix}{\bJ_{45}\bJ_{56}\widetilde{\bH}\rightarrow \widetilde{h}_{53}=0}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \boxtimes & \boxtimes& \boxtimes \\
0 & 0 & \bm{\boxtimes} & 0& \bm{\boxtimes} \\
0 & 0 & \bm{0} & 0& \bm{\boxtimes}\\
0 & 0 & 0 & 0& 0
\end{sbmatrix}\stackrel{\bJ_{34}}{\rightarrow}
\begin{sbmatrix}{\bJ_{34}\bJ_{45}\bJ_{56}\widetilde{\bH}\rightarrow \widetilde{h}_{43}=0}
\boxtimes & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & \boxtimes & \boxtimes & \boxtimes & \boxtimes \\
0 & 0 & \bm{\boxtimes} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & \bm{0} & \bm{\boxtimes} & \bm{\boxtimes} \\
0 & 0 & 0 & 0& \boxtimes\\
0 & 0 & 0 & 0& 0
\end{sbmatrix}.
\end{aligned}
$$
And finally, the orthogonal matrix
\begin{equation}\label{equation:qr-add-column-finalq}
\widetilde{\bQ} = (\bJ_{k+1,k+2}\ldots \bJ_{m-2,m-1}\bJ_{m-1,m} \bQ^\top )^\top = \bQ \bJ_{m-1,m}^\top \bJ_{m-2,m-1}^\top \ldots \bJ_{k+1,k+2}^\top,
\end{equation}
such that $\widetilde{\bA} = \widetilde{\bQ}\widetilde{\bR}$.
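The column-insertion update can be sketched similarly (0-based indexing, inserting $\bw$ as column $k$; the helper names are ours):

```python
import numpy as np

def _rot(a, b):
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

def qr_insert_column(Q, R, w, k):
    """Given the full QR A = Q R, return the full QR of A with w inserted as column k."""
    m = Q.shape[0]
    H = np.insert(R, k, Q.T @ w, axis=1)   # the new column is Q^T w
    Q = Q.copy()
    for j in range(m - 1, k, -1):          # zero column k from the bottom up
        c, s = _rot(H[j - 1, k], H[j, k])
        G = np.array([[c, s], [-s, c]])
        H[j - 1:j + 1, :] = G @ H[j - 1:j + 1, :]
        Q[:, j - 1:j + 1] = Q[:, j - 1:j + 1] @ G.T
    return Q, H

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 4))
Q, R = np.linalg.qr(A, mode='complete')
w = rng.standard_normal(6)
Q2, R2 = qr_insert_column(Q, R, w, 2)
```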
\paragraph{Real world application} The method introduced above is useful for efficient variable selection in least squares problems via the QR decomposition. Each time, we delete a column of the data matrix $\bA$ and apply an $F$-test to see whether the variable is significant. If not, we delete the variable and favor a simpler model \citep{lu2021rigorous}.
\section{Appending or Deleting a Row}
\paragraph{Appending a row}
Suppose the full QR decomposition of $\bA\in \real^{m\times n}$ is given by $\bA= \begin{bmatrix}
\bA_1 \\
\bA_2
\end{bmatrix}=\bQ\bR$, where $\bA_1\in \real^{k\times n}$ and $\bA_2 \in \real^{(m-k)\times n}$. Now suppose we add a row such that $\bA^\prime = \begin{bmatrix}
\bA_1 \\
\bw^\top\\
\bA_2
\end{bmatrix} \in \real^{(m+1)\times n}$. We want to find the full QR decomposition of $\bA^\prime$ efficiently. Construct a permutation matrix
$$
\bP=
\begin{bmatrix}
\bzero & 1 & \bzero \\
\bI_k & \bzero & \bzero \\
\bzero & \bzero & \bI_{m-k}
\end{bmatrix}
\longrightarrow
\bP
\begin{bmatrix}
\bA_1 \\
\bw^\top\\
\bA_2
\end{bmatrix}
=
\begin{bmatrix}
\bw^\top\\
\bA_1 \\
\bA_2
\end{bmatrix}
.
$$
Then,
$$
\begin{bmatrix}
1 & \bzero \\
\bzero & \bQ^\top
\end{bmatrix}
\bP
\bA^\prime
=
\begin{bmatrix}
\bw^\top \\
\bR
\end{bmatrix}=\bH
$$
is upper Hessenberg. Similarly, a set of Givens rotations $\bG_{12}, \bG_{23}, \ldots, \bG_{n,n+1}$ can be applied to introduce zeros at the elements $h_{21}$, $h_{32}$, $\ldots$, $h_{n+1,n}$ of $\bH$. The triangular matrix $\bR^\prime$ is then given by
$$
\bR^\prime = \bG_{n,n+1}\ldots \bG_{23}\bG_{12}\begin{bmatrix}
1 & \bzero \\
\bzero & \bQ^\top
\end{bmatrix}
\bP \bA^\prime.
$$
And the orthogonal matrix
$$
\bQ^\prime = \left(\bG_{n,n+1}\ldots \bG_{23}\bG_{12}\begin{bmatrix}
1 & \bzero \\
\bzero & \bQ^\top
\end{bmatrix}
\bP \right)^\top
=
\bP^\top
\begin{bmatrix}
1 & \bzero \\
\bzero & \bQ
\end{bmatrix}
\bG_{12}^\top \bG_{23}^\top \ldots \bG_{n,n+1}^\top,
$$
such that $\bA^\prime = \bQ^\prime\bR^\prime$.
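The appending-a-row update above can be mirrored numerically. The following is a minimal NumPy sketch (not code from the text; the function name and interface are our own), implementing the permutation $\bP$, the Hessenberg matrix $\bH=[\bw^\top; \bR]$, and the rotations $\bG_{12},\ldots,\bG_{n,n+1}$:

```python
import numpy as np

def qr_append_row(Q, R, w, k):
    """Given the full QR decomposition A = Q R (A: m x n), return the
    full QR factors of A' obtained by inserting the row w at index k.
    H = [w^T; R] is upper Hessenberg; Givens rotations G_{i,i+1} zero
    its subdiagonal elements h_{21}, ..., h_{n+1,n}."""
    m, n = R.shape
    H = np.vstack([w, R])                        # (m+1) x n Hessenberg
    Qp = np.zeros((m + 1, m + 1))
    Qp[0, 0] = 1.0
    Qp[1:, 1:] = Q                               # [[1, 0], [0, Q]]
    for i in range(n):                           # zero h_{i+1, i}
        a, b = H[i, i], H[i + 1, i]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])
        H[i:i + 2, :] = G @ H[i:i + 2, :]
        Qp[:, i:i + 2] = Qp[:, i:i + 2] @ G.T    # keeps Qp H invariant
    perm = np.r_[k, 0:k, k + 1:m + 1]            # P moves row k to the top
    return Qp[np.argsort(perm), :], H            # Q' = P^T (...), R' = H
```

Only $n$ rotations are needed, so the update is much cheaper than recomputing the QR decomposition of $\bA^\prime$ from scratch.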
\paragraph{Deleting a row} Suppose $\bA = \begin{bmatrix}
\bA_1 \\
\bw^\top\\
\bA_2
\end{bmatrix} \in \real^{m\times n}$ where $\bA_1\in\real^{k\times n}$, $\bA_2 \in \real^{(m-k-1)\times n}$ with the full QR decomposition given by $\bA=\bQ\bR$ where $\bQ\in \real^{m\times m}, \bR\in \real^{m\times n}$. We want to compute the full QR decomposition of $\widetilde{\bA} = \begin{bmatrix}
\bA_1 \\
\bA_2
\end{bmatrix}$ efficiently (assume $m-1\geq n$). Analogously, we can construct a permutation matrix
$$
\bP =
\begin{bmatrix}
\bzero & 1 & \bzero \\
\bI_k & \bzero & \bzero \\
\bzero & \bzero & \bI_{m-k-1}
\end{bmatrix}
$$
such that
$$
\bP\bA =
\begin{bmatrix}
\bzero & 1 & \bzero \\
\bI_k & \bzero & \bzero \\
\bzero & \bzero & \bI_{m-k-1}
\end{bmatrix}
\begin{bmatrix}
\bA_1 \\
\bw^\top\\
\bA_2
\end{bmatrix}=
\begin{bmatrix}
\bw^\top \\
\bA_1\\
\bA_2
\end{bmatrix} = \bP\bQ\bR =\bM\bR ,
$$
where $\bM = \bP\bQ$ is an orthogonal matrix. Let $\bmm^\top$ be the first row of $\bM$, and let a set of Givens rotations $\bG_{m-1,m}, \bG_{m-2,m-1}, \ldots, \bG_{1,2}$ introduce zeros at the elements $m_m, m_{m-1}, \ldots, m_2$ of $\bmm$, respectively, such that $\bG_{1,2}\ldots \bG_{m-2,m-1}\bG_{m-1,m}\bmm = \alpha \be_1$, where $\alpha = \pm 1$ since the rotations preserve the unit norm of $\bmm$. Therefore, we have
$$
\bG_{1,2}\ldots \bG_{m-2,m-1}\bG_{m-1,m} \bR =
\begin{blockarray}{cc}
\begin{block}{[c]c}
\bv^\top & 1 \\
\bR_1& m-1 \\
\end{block}
\end{blockarray},
$$
which is upper Hessenberg, where $\bR_1\in \real^{(m-1)\times n}$ is upper triangular. Moreover,
$$
\bM \bG_{m-1,m}^\top \bG_{m-2,m-1}^\top \ldots \bG_{1,2}^\top =
\begin{bmatrix}
\alpha & \bzero \\
\bzero & \bQ_1
\end{bmatrix},
$$
where $\bQ_1\in \real^{(m-1)\times (m-1)}$ is an orthogonal matrix. The bottom-left block of the above matrix is a zero vector since $\alpha=\pm 1$ and $\bM$ is orthogonal. To see this, let $\bG=\bG_{m-1,m}^\top \bG_{m-2,m-1}^\top $ $\ldots \bG_{1,2}^\top $ with first column $\bg$, and let $\bM = [\bmm^\top; \bmm_2^\top; \bmm_3^\top; \ldots; \bmm_{m}^\top]$ be the row partition of $\bM$. We have
$$
\begin{aligned}
\bmm^\top\bg &= \pm 1 \qquad \rightarrow \qquad \bg = \pm \bmm, \\
\bmm_i^\top \bmm &=0, \qquad \forall i \in \{2,3,\ldots,m\}.
\end{aligned}
$$
This results in
$$
\begin{aligned}
\bP\bA&=\bM\bR\\
&=(\bM \bG_{m-1,m}^\top \bG_{m-2,m-1}^\top \ldots \bG_{1,2}^\top ) (\bG_{1,2}\ldots \bG_{m-2,m-1}\bG_{m-1,m} \bR ) \\
&=
\begin{bmatrix}
\alpha & \bzero \\
\bzero & \bQ_1
\end{bmatrix}
\begin{bmatrix}
\bv^\top \\
\bR_1
\end{bmatrix} =
\begin{bmatrix}
\alpha \bv^\top \\
\bQ_1\bR_1
\end{bmatrix}
=
\begin{bmatrix}
\bw^\top \\
\widetilde{\bA}
\end{bmatrix}
.
\end{aligned}
$$
This implies $\bQ_1\bR_1$ is the full QR decomposition of $\widetilde{\bA}=\begin{bmatrix}
\bA_1\\
\bA_2
\end{bmatrix}$.
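The deleting-a-row downdate above can also be sketched in NumPy. This is a minimal illustration (not code from the text; the function name and interface are our own): the first row of $\bM=\bP\bQ$ is reduced to $\alpha\be_1^\top$ by bottom-up rotations, which simultaneously turn $\bR$ into the upper Hessenberg matrix $[\bv^\top; \bR_1]$:

```python
import numpy as np

def qr_delete_row(Q, R, k):
    """Given the full QR decomposition A = Q R (A: m x n), return the
    full QR factors of A with row k deleted. The first row of M = P Q
    is reduced to (alpha, 0, ..., 0) by Givens rotations G_{i,i+1}
    applied bottom-up; the same rotations act on R from the left."""
    m = Q.shape[0]
    perm = np.r_[k, 0:k, k + 1:m]
    M = Q[perm, :]                  # M = P Q; first row is row k of Q
    R = R.copy()
    for i in range(m - 2, -1, -1):  # zero M[0, i+1], bottom-up
        a, b = M[0, i], M[0, i + 1]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])
        R[i:i + 2, :] = G @ R[i:i + 2, :]     # G R becomes [v^T; R1]
        M[:, i:i + 2] = M[:, i:i + 2] @ G.T   # keeps M R fixed
    return M[1:, 1:], R[1:, :]                # Q1, R1 with A~ = Q1 R1
```

After the loop, $\bM$ has the block form $\begin{bmatrix} \alpha & \bzero \\ \bzero & \bQ_1\end{bmatrix}$ (up to rounding), so discarding the first row and column of each factor yields the downdated decomposition.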
\chapter{UTV Decomposition: ULV and URV Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{UTV Decomposition}\label{section:ulv-urv-decomposition}
The UTV decomposition goes further than the QR decomposition by factoring the matrix into $\bA=\bU\bT\bV$ with \textit{two} orthogonal factors: $\bU, \bV$ are orthogonal, whilst $\bT$ is (upper/lower) triangular.\footnote{These decompositions fall into a category known as the \textit{double-sided orthogonal decomposition}. We will see that the UTV decomposition, the complete orthogonal decomposition (Theorem~\ref{theorem:complete-orthogonal-decom}, p.~\pageref{theorem:complete-orthogonal-decom}), and the singular value decomposition all fall under this notion.}
The resulting $\bT$ supports rank estimation. The matrix $\bT$ can be lower triangular, which results in the ULV decomposition, or upper triangular, which results in the URV decomposition. The UTV framework shares a similar form with the singular value decomposition (SVD, see Section~\ref{section:SVD}, p.~\pageref{section:SVD}) and can be regarded as an inexpensive alternative to the SVD.
\begin{theoremHigh}[Full ULV Decomposition]\label{theorem:ulv-decomposition}
Every $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV,
$$
where $\bU\in \real^{m\times m}$ and $\bV\in \real^{n\times n}$ are two orthogonal matrices, and $\bL\in \real^{r\times r}$ is a lower triangular matrix.
\end{theoremHigh}
The existence of the ULV decomposition follows from the QR and LQ decompositions.
\begin{proof}[of Theorem~\ref{theorem:ulv-decomposition}]
For any rank-$r$ matrix $\bA=[\ba_1, \ba_2, \ldots, \ba_n]$, we can use a column permutation matrix $\bP$ (Definition~\ref{definition:permutation-matrix}, p.~\pageref{definition:permutation-matrix}) such that $r$ linearly independent columns of $\bA$ appear in the first $r$ columns of $\bA\bP$. Without loss of generality, we assume $\bb_1, \bb_2, \ldots, \bb_r$ are these $r$ linearly independent columns and
$$
\bA\bP = [\bb_1, \bb_2, \ldots, \bb_r, \bb_{r+1}, \ldots, \bb_n].
$$
Let $\bZ = [\bb_1, \bb_2, \ldots, \bb_r] \in \real^{m\times r}$. Since each of $\bb_{r+1}, \ldots, \bb_n$ lies in the column space of $\bZ$, we can find an $\bE\in \real^{r\times (n-r)}$ such that
$$
[\bb_{r+1}, \bb_{r+2}, \ldots, \bb_n] = \bZ \bE.
$$
That is,
$$
\bA\bP = [\bb_1, \bb_2, \ldots, \bb_r, \bb_{r+1}, \ldots, \bb_n] = \bZ
\begin{bmatrix}
\bI_r & \bE
\end{bmatrix},
$$
where $\bI_r$ is an $r\times r$ identity matrix. Moreover, $\bZ\in \real^{m\times r}$ has full column rank such that its full QR decomposition is given by $\bZ = \bU\begin{bmatrix}
\bR \\
\bzero
\end{bmatrix}$, where $\bR\in \real^{r\times r}$ is an upper triangular matrix with full rank and $\bU$ is an orthogonal matrix. This implies
\begin{equation}\label{equation:ulv-smpl}
\bA\bP = \bZ
\begin{bmatrix}
\bI_r & \bE
\end{bmatrix}
=
\bU\begin{bmatrix}
\bR \\
\bzero
\end{bmatrix}
\begin{bmatrix}
\bI_r & \bE
\end{bmatrix}
=
\bU\begin{bmatrix}
\bR & \bR\bE \\
\bzero & \bzero
\end{bmatrix}.
\end{equation}
Since $\bR$ has full rank, this means
$\begin{bmatrix}
\bR & \bR\bE
\end{bmatrix}$ also has full row rank, so its full LQ decomposition is given by
$\begin{bmatrix}
\bL & \bzero
\end{bmatrix} \bV_0$, where $\bL\in \real^{r\times r}$ is a lower triangular matrix and $\bV_0$ is an orthogonal matrix. Substituting into Equation~\eqref{equation:ulv-smpl}, we obtain
$$
\bA = \bU \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV_0 \bP^{-1}.
$$
Let $\bV =\bV_0 \bP^{-1}$; as the product of two orthogonal matrices (note that $\bP^{-1}=\bP^\top$ is orthogonal), $\bV$ is itself an orthogonal matrix. This completes the proof.
\end{proof}
A second way to prove the ULV decomposition, via the rank-revealing QR decomposition and a trivial QR decomposition, will be discussed shortly in the proof of Theorem~\ref{theorem:complete-orthogonal-decom}.
Now suppose the ULV decomposition of matrix $\bA$ is
$$
\bA = \bU \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV.
$$
Let $\bU_0 = \bU_{:,1:r}$ and $\bV_0 = \bV_{1:r,:}$, i.e., $\bU_0$ contains only the first $r$ columns of $\bU$, and $\bV_0$ contains only the first $r$ rows of $\bV$. Then, we still have $\bA = \bU_0 \bL\bV_0$. This is known as the \textbf{reduced ULV decomposition}.
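The constructive proof above can be mirrored numerically. The following is a minimal NumPy sketch (not code from the book): we build a rank-$r$ matrix whose first $r$ columns are linearly independent (true with probability one for this random construction), so the permutation $\bP$ in the proof is the identity, and the LQ decomposition of $[\bR,\ \bR\bE]$ is obtained as the transposed QR decomposition of its transpose:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 6, 5, 3
# Rank-r matrix; its first r columns are linearly independent
# (generic case), so the column permutation P is the identity.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

Z = A[:, :r]                                      # full column rank, m x r
U, Rz = np.linalg.qr(Z, mode='complete')          # Z = U [R; 0]
R = Rz[:r, :]                                     # r x r upper triangular
E = np.linalg.lstsq(Z, A[:, r:], rcond=None)[0]   # A[:, r:] = Z E
T = np.hstack([R, R @ E])                         # [R, R E], full row rank
Qt, Rt = np.linalg.qr(T.T, mode='complete')       # LQ of T via QR of T^T
L = Rt[:r, :].T                                   # r x r lower triangular
V = Qt.T                                          # n x n orthogonal
mid = np.zeros((m, n)); mid[:r, :r] = L           # [[L, 0], [0, 0]]
assert np.allclose(U @ mid @ V, A)                # A = U [[L,0],[0,0]] V
```

The check at the end confirms the full ULV factorization; truncating $\bU$ to its first $r$ columns and $\bV$ to its first $r$ rows gives the reduced form.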
Similarly, we can also claim the URV decomposition as follows.
\begin{theoremHigh}[Full URV Decomposition]\label{theorem:urv-decomposition}
Every $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU \begin{bmatrix}
\bR & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV,
$$
where $\bU\in \real^{m\times m}$ and $\bV\in \real^{n\times n}$ are two orthogonal matrices, and $\bR\in \real^{r\times r}$ is an upper triangular matrix.
\end{theoremHigh}
The proof is similar to that of the ULV decomposition, and we shall not give the details.
Again, there is a version of the reduced URV decomposition, and whether the full or reduced version is meant can be inferred from the context. The ULV and URV decompositions are sometimes referred to jointly as the UTV decomposition framework \citep{fierro1997low, golub2013matrix}.
We will shortly see that the forms of the ULV and the URV are very close to that of the singular value decomposition (SVD). All three factor the matrix $\bA$ via two orthogonal matrices. In particular, the ULV and the URV yield orthonormal bases for the four subspaces of $\bA$ in the fundamental theorem of linear algebra. Taking the ULV as an example, the first $r$ columns of $\bU$ form an orthonormal basis of $\cspace(\bA)$, and the last $(m-r)$ columns of $\bU$ form an orthonormal basis of $\nspace(\bA^\top)$. The first $r$ rows of $\bV$ form an orthonormal basis for the row space $\cspace(\bA^\top)$, and the last $(n-r)$ rows form an orthonormal basis for $\nspace(\bA)$ (similar to the two-sided orthogonal decomposition):
$$
\left\{
\begin{aligned}
\cspace(\bA) &= span\{\bu_1, \bu_2, \ldots, \bu_r\}, \\
\nspace(\bA) &= span\{\bv_{r+1}, \bv_{r+2}, \ldots, \bv_n\}, \\
\cspace(\bA^\top) &= span\{ \bv_1, \bv_2, \ldots,\bv_r \} ,\\
\nspace(\bA^\top) &=span\{\bu_{r+1}, \bu_{r+2}, \ldots, \bu_m\}.
\end{aligned}
\right.
$$
The SVD goes further in that it connects the two pairs of orthonormal bases, i.e., it transforms the column basis into the row basis, or the left null space basis into the right null space basis. We will give more details in the SVD section.
\section{Complete Orthogonal Decomposition}
Related to the UTV decomposition is the so-called \textit{complete orthogonal decomposition}, which also factors a matrix via two orthogonal matrices.
\begin{theoremHigh}[Complete Orthogonal Decomposition]\label{theorem:complete-orthogonal-decom}
Every $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU \begin{bmatrix}
\bT & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV,
$$
where $\bU\in \real^{m\times m}$ and $\bV\in \real^{n\times n}$ are two orthogonal matrices, and $\bT\in \real^{r\times r}$ is a rank-$r$ matrix.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:complete-orthogonal-decom}]
By rank-revealing QR decomposition (Theorem~\ref{theorem:rank-revealing-qr-general}, p.~\pageref{theorem:rank-revealing-qr-general}), $\bA$ can be factored as
$$
\bQ_1^\top \bA\bP =
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix},
$$
where $\bR_{11} \in \real^{r\times r}$ is upper triangular, $\bR_{12} \in \real^{r\times (n-r)}$, $\bQ_1\in \real^{m\times m}$ is an orthogonal matrix, and $\bP$ is a permutation matrix.
\begin{mdframed}[hidealllines=\mdframehideline,backgroundcolor=\mdframecolorSkip]
Then, it is not hard to find a decomposition such that
\begin{equation}\label{equation:orthogonal-complete-qr-or-not}
\begin{bmatrix}
\bR_{11}^\top \\
\bR_{12}^\top
\end{bmatrix}
=
\bQ_2
\begin{bmatrix}
\bS \\
\bzero
\end{bmatrix},
\end{equation}
where $\bQ_2$ is an orthogonal matrix and $\bS$ is a rank-$r$ matrix. The decomposition is reasonable in the sense that the matrix $ \begin{bmatrix}
\bR_{11}^\top \\
\bR_{12}^\top
\end{bmatrix} \in \real^{n\times r}$ has rank $r$, so its columns span an $r$-dimensional subspace of $\real^{n}$. The columns of $\bQ_2$, on the other hand, span the whole of $\real^n$, and we can assume the first $r$ columns of $\bQ_2$ span the same subspace as the columns of $ \begin{bmatrix}
\bR_{11}^\top \\
\bR_{12}^\top
\end{bmatrix}$. The matrix $\begin{bmatrix}
\bS \\
\bzero
\end{bmatrix}$ then transforms $\bQ_2$ into $ \begin{bmatrix}
\bR_{11}^\top \\
\bR_{12}^\top
\end{bmatrix}$.
\end{mdframed}
Then, it follows that
$$
\bQ_1^\top \bA\bP \bQ_2 =
\begin{bmatrix}
\bS^\top & \bzero \\
\bzero & \bzero
\end{bmatrix}.
$$
Letting $\bU = \bQ_1$, $\bV=\bQ_2^\top\bP^\top$, and $\bT = \bS^\top$ completes the proof.
\end{proof}
We can see that when Equation~\eqref{equation:orthogonal-complete-qr-or-not} is taken to be the QR decomposition of $ \begin{bmatrix}
\bR_{11}^\top \\
\bR_{12}^\top
\end{bmatrix}$, the complete orthogonal decomposition reduces to the ULV decomposition.
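The construction in the proof can be mirrored numerically. Below is a minimal NumPy sketch (not code from the text). The test matrix is built so that its first $r$ columns are linearly independent, which lets a plain QR decomposition play the role of the rank-revealing QR with $\bP=\bI$; Equation~\eqref{equation:orthogonal-complete-qr-or-not} is realized by a full QR decomposition, so the result is in fact a ULV decomposition:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, r = 6, 5, 3
# Rank-r matrix whose first r columns are linearly independent, so plain
# QR is rank-revealing here: Q1^T A = [[R11, R12], [0, 0]].
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

Q1, Rf = np.linalg.qr(A, mode='complete')
Rtop = Rf[:r, :]                             # [R11, R12], full row rank
# Second QR supplies Q2 and S:  [R11^T; R12^T] = Q2 [S; 0].
Q2, Sf = np.linalg.qr(Rtop.T, mode='complete')
T = Sf[:r, :].T                              # T = S^T, r x r of rank r
mid = np.zeros((m, n)); mid[:r, :r] = T
U, V = Q1, Q2.T                              # here P = I, so V = Q2^T
assert np.allclose(U @ mid @ V, A)           # A = U [[T,0],[0,0]] V
```

Since $\bS$ is upper triangular, $\bT=\bS^\top$ is lower triangular, illustrating the reduction to the ULV decomposition noted above.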
\section{Application: Row Rank equals Column Rank Again via UTV}
The UTV framework can prove the important theorem in linear algebra that the row rank and column rank of a matrix are equal.
Notice that, to apply the UTV framework in this proof, a slight modification of the claimed existence of the UTV decomposition needs to be taken care of. For example, Theorem~\ref{theorem:ulv-decomposition} assumes the matrix $\bA$ has rank $r$; but speaking of rank $r$ already presumes the fact that row rank equals column rank. A better claim for this purpose is that the matrix $\bA$ in Theorem~\ref{theorem:ulv-decomposition} has column rank $r$. See
\citet{lu2021column} for a detailed discussion.
\begin{proof}[\textbf{of Theorem~\ref{lemma:equal-dimension-rank}, p.~\pageref{lemma:equal-dimension-rank}, A Second Way}]
Any $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU_0 \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV_0,
$$
where $\bU_0\in \real^{m\times m}$ and $\bV_0\in \real^{n\times n}$ are two orthogonal matrices, and $\bL\in \real^{r\times r}$ is a lower triangular matrix \footnote{Instead of using the ULV decomposition, in some texts, the authors use elementary transformations $\bE_1, \bE_2$ such that $
\bA = \bE_1 \begin{bmatrix}
\bI_r & \bzero \\
\bzero & \bzero
\end{bmatrix}\bE_2,
$ to prove the result.}. Let $\bD = \begin{bmatrix}
\bL & \bzero \\
\bzero & \bzero
\end{bmatrix}$. The row rank and column rank of $\bD$ are clearly the same. If we can prove that the column rank of $\bA$ equals the column rank of $\bD$, and that the row rank of $\bA$ equals the row rank of $\bD$, then the proof is complete.
Let $\bU = \bU_0^\top$ and $\bV=\bV_0^\top$; then $\bD = \bU\bA\bV$. Decomposing the above idea into two steps, a moment of reflection reveals that if we first prove that the row rank and column rank of $\bA$ are equal to those of $\bU\bA$, and then prove that the row rank and column rank of $\bU\bA$ are equal to those of $\bU\bA\bV$, we also complete the proof.
\paragraph{Row rank and column rank of $\bA$ are equal to those of $\bU\bA$} Let $\bB = \bU\bA$, and let further $\bA=[\ba_1,\ba_2,\ldots, \ba_n]$ and $\bB=[\bb_1,\bb_2,\ldots,\bb_n]$ be the column partitions of $\bA$ and $\bB$. Therefore, $[\bb_1,\bb_2,\ldots,\bb_n] = [\bU\ba_1,\bU\ba_2,\ldots, \bU\ba_n]$. If $x_1\ba_1+x_2\ba_2+\ldots+x_n\ba_n=0$, then we also have
$$
\bU(x_1\ba_1+x_2\ba_2+\ldots+x_n\ba_n) = x_1\bb_1+x_2\bb_2+\ldots+x_n\bb_n = 0.
$$
Let $j_1, j_2, \ldots, j_r$ be distinct indices between 1 and $n$. The implication above shows that if the set $\{\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}\}$ is linearly dependent, then the set $\{\bb_{j_1}, \bb_{j_2}, \ldots, \bb_{j_r}\}$ is linearly dependent as well; equivalently, if $\{\bb_{j_1}, \bb_{j_2}, \ldots, \bb_{j_r}\}$ is linearly independent, then so is $\{\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}\}$. This implies
$$
dim(\cspace(\bB)) \leq dim(\cspace(\bA)).
$$
Similarly, by $\bA = \bU^\top\bB$, it follows that
$$
dim(\cspace(\bA)) \leq dim(\cspace(\bB)).
$$
This implies
$$
dim(\cspace(\bB)) = dim(\cspace(\bA)).
$$
Applying the same process to $\bB^\top$ and $\bA^\top$, we have
$$
dim(\cspace(\bB^\top)) = dim(\cspace(\bA^\top)).
$$
This implies the row rank and column rank of $\bA$ and $\bB=\bU\bA$ are the same.
Similarly, we can also show that the row rank and column rank of $\bU\bA$ and $\bU\bA\bV$ are the same. This completes the proof.
\end{proof}
\chapter{Skeleton/CUR Decomposition}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\index{Skeleton}
\section{Skeleton/CUR Decomposition}
\begin{theoremHigh}[Skeleton Decomposition]\label{theorem:skeleton-decomposition}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\underset{m\times n}{\bA }=
\underset{m\times r}{\bC} \gap \underset{r\times r}{\bU^{-1} }\gap \underset{r\times n}{\bR},
$$
where $\bC$ consists of $r$ linearly independent columns of $\bA$, $\bR$ consists of $r$ linearly independent rows of $\bA$, and $\bU$ is the nonsingular submatrix on their intersection.
\begin{itemize}
\item The storage for the decomposition is then reduced or potentially increased from $mn$ floats to $r(m+n)+r^2$ floats.
\item Alternatively, if we record only the indices, storing $\bC$ and $\bR$ requires $mr$ and $nr$ floats respectively, plus $2r$ extra integers to remember the position of each column of $\bC$ and each row of $\bR$ in $\bA$ (from which $\bU$ can be constructed out of $\bC$ and $\bR$).
\end{itemize}
\end{theoremHigh}
Skeleton decomposition is also known as the \textit{CUR decomposition} following from the notation in the decomposition. The illustration of skeleton decomposition is shown in Figure~\ref{fig:skeleton} where the \textcolor{canaryyellow}{yellow} vectors denote the linearly independent columns of $\bA$ and \textcolor{green}{green} vectors denote the linearly independent rows of $\bA$.
When $\bA$ is square and invertible, we have the skeleton decomposition $\bA=\bC\bU^{-1} \bR$ with $\bC=\bR=\bU=\bA$, so that the decomposition reduces to $\bA = \bA\bA^{-1}\bA$.
Specifically, given index vectors $I,J$, both of size $r$, containing the indices of the rows and columns selected from $\bA$ into $\bR$ and $\bC$ respectively, $\bU$ can be denoted by $\bU=\bA[I,J]$.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{imgs/skeleton.pdf}
\caption{Demonstration of skeleton decomposition of a matrix.}
\label{fig:skeleton}
\end{figure}
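The factorization can be illustrated with a minimal NumPy sketch (not code from the text). For this randomly generated rank-$r$ matrix, the first $r$ rows and the first $r$ columns are linearly independent with probability one, so we may take $I=J=(1,\ldots,r)$:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 5, 6, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r

I, J = np.arange(r), np.arange(r)   # assumed independent rows/columns
C = A[:, J]                         # m x r: selected columns
Rr = A[I, :]                        # r x n: selected rows
U = A[np.ix_(I, J)]                 # r x r intersection of C and R
assert np.allclose(C @ np.linalg.inv(U) @ Rr, A)   # A = C U^{-1} R
```

With $m=5$, $n=6$, $r=2$, the factors need $r(m+n)+r^2 = 26$ floats instead of $mn = 30$, consistent with the storage count above.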
\section{Existence of the Skeleton Decomposition}
In Corollary~\ref{lemma:equal-dimension-rank} (p.~\pageref{lemma:equal-dimension-rank}), we proved that the row rank and the column rank of a matrix are equal. In other words, the dimension of the column space and the dimension of the row space are equal. This property is essential for the existence of the skeleton decomposition.
We are then ready to prove the existence of the skeleton decomposition. The proof is rather elementary.
\begin{proof}[of Theorem~\ref{theorem:skeleton-decomposition}]
The proof relies on the existence of such a nonsingular matrix $\bU$, which is central to this decomposition method.
\paragraph{Existence of such a nonsingular matrix $\bU$} Since the matrix $\bA$ has rank $r$, we can pick $r$ linearly independent columns from $\bA$. Suppose we put these $r$ independent columns $\ba_{i1}, \ba_{i2}, \ldots, \ba_{ir}$ into the columns of an $m\times r$ matrix $\bN=[\ba_{i1}, \ba_{i2}, \ldots, \ba_{ir}] \in \real^{m\times r}$. The dimension of the column space of $\bN$ is $r$, so the dimension of the row space of $\bN$ is also $r$ by Corollary~\ref{lemma:equal-dimension-rank} (p.~\pageref{lemma:equal-dimension-rank}). Again, we can pick $r$ linearly independent rows $\bn_{j1}^\top,\bn_{j2}^\top, \ldots, \bn_{jr}^\top $ from $\bN$ and put them into the rows of an $r\times r$ matrix $\bU = [\bn_{j1}^\top; \bn_{j2}^\top; \ldots; \bn_{jr}^\top]\in \real^{r\times r}$. Using Corollary~\ref{lemma:equal-dimension-rank} (p.~\pageref{lemma:equal-dimension-rank}) once more, the dimension of the column space of $\bU$ is also $r$, which means the $r$ columns of $\bU$ are linearly independent. So $\bU$ is a nonsingular matrix of size $r\times r$.
\paragraph{Main proof}
As long as we find the nonsingular $r\times r$ matrix $\bU$ inside $\bA$, we can find the existence of the skeleton decomposition as follows.
Suppose $\bU=\bA[I,J]$, where $I,J$ are index vectors of size $r$. Since $\bU$ is a nonsingular matrix, the columns of $\bU$ are linearly independent. Thus the columns of the matrix $\bC$ containing the columns of $\bU$ are also linearly independent (i.e., we select the $r$ columns of $\bA$ that contain the entries of $\bU$; here $\bC$ is equal to the $\bN$ constructed above, and $\bC=\bA[:,J]$).
As the rank of the matrix $\bA$ is $r$, any column $\ba_i$ of $\bA$ can be represented as a linear combination of the columns of $\bC$, i.e., there exists a vector $\bx$ such that $\ba_i = \bC \bx$, for all $ i\in \{1, 2, \ldots, n\}$. Let $\br_i \in \real^r$ denote the $r$ entries of $\ba_i$ corresponding to the row indices of $\bU$, for all $i\in \{1, 2, \ldots, n\}$. That is, we select the $r$ entries of each $\ba_i$ corresponding to the rows of $\bU$ as follows:
$$
\bA = [\ba_1,\ba_2, \ldots, \ba_n]\in \real^{m\times n} \qquad \longrightarrow \qquad
\bA[I,:]=[\br_1, \br_2, \ldots, \br_n] \in \real^{r\times n}.
$$
Since $\ba_i = \bC\bx$, $\bU$ is a submatrix inside $\bC$, and $\br_i$ is a subvector inside $\ba_i$, we have $\br_i = \bU \bx$, which is equivalent to $\bx = \bU^{-1} \br_i$. Thus, for every $i$, we have $\ba_i = \bC \bU^{-1} \br_i$. Combining the $n$ vectors $\br_i$ into $\bR=[\br_1, \br_2, \ldots, \br_n]$, we obtain
$$
\bA = [\ba_1, \ba_2, \ldots, \ba_n] = \bC \bU^{-1} \bR,
$$
from which the result follows.
In short, we first gather $r$ linearly independent columns of $\bA$ into $\bC\in \real^{m\times r}$. From $\bC$, we find an $r\times r$ nonsingular submatrix $\bU$. The $r$ rows of $\bA$ corresponding to the rows of $\bU$ then suffice to reconstruct all columns of $\bA$. Again, the situation is shown in Figure~\ref{fig:skeleton}.
\end{proof}
\paragraph{CR decomposition vs skeleton decomposition} We note that the CR decomposition and the skeleton decomposition share a similar form, even in the symbols used: $\bA=\bC\bR$ for the CR decomposition and $\bA=\bC\bU^{-1}\bR$ for the skeleton decomposition.
In both the CR decomposition and the skeleton decomposition, we \textbf{can} select the first $r$ linearly independent columns to obtain the matrix $\bC$ (the same symbol in both decompositions), so the $\bC$'s in the two decompositions are exactly the same. In contrast, $\bR$ in the CR decomposition is the reduced row echelon form without the zero rows, whereas $\bR$ in the skeleton decomposition consists of rows taken directly from $\bA$, so the two $\bR$'s have different meanings.
\paragraph{A word on the uniqueness of CR decomposition and skeleton decomposition} As mentioned above, in both the CR decomposition and the skeleton decomposition, we select the first $r$ linearly independent columns to obtain the matrix $\bC$. In this sense, the CR and skeleton decompositions have a unique form.
However, if we select the last $r$ linearly independent columns, we will get a different CR decomposition or skeleton decomposition. We will not discuss this situation here as it is not the main interest of this text.
To repeat: in the above proof of the existence of the skeleton decomposition, we first gather $r$ linearly independent columns of $\bA$ into the matrix $\bC$; from $\bC$, we find an $r\times r$ nonsingular submatrix $\bU$; and from $\bU$, we finally obtain the row submatrix $\bR\in \real^{r\times n}$. A further question can be posed: if the matrix $\bA$ has rank $r$, the matrix $\bC$ contains $r$ linearly independent columns, and the matrix $\bR$ contains $r$ linearly independent rows, is the $r\times r$ ``intersection" of $\bC$ and $\bR$ necessarily invertible?\footnote{We thank Gilbert Strang for raising this interesting question.}
\begin{corollary}[Nonsingular Intersection]\label{corollary:invertible-intersection}
If matrix $\bA \in \real^{m\times n}$ has rank $r$, matrix $\bC$ contains $r$ linearly independent columns, and matrix $\bR$ contains $r$ linearly independent rows, then the $r\times r$ ``intersection" matrix $\bU$ of $\bC$ and $\bR$ is invertible.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:invertible-intersection}]
If $I,J$ are the index vectors of the rows and columns selected from $\bA$ into $\bR$ and $\bC$ respectively, then $\bR$ can be denoted by $\bR=\bA[I, :]$, $\bC$ by $\bC = \bA[:,J]$, and $\bU$ by $\bU=\bA[I,J]$.
Since $\bC$ contains $r$ linearly independent columns of $\bA$, any column $\ba_i$ of $\bA$ can be represented as $\ba_i = \bC\bx_i = \bA[:,J]\bx_i$ for all $i \in \{1,2,\ldots, n\}$. This implies the $r$ entries of $\ba_i$ corresponding to the $I$ indices can be represented by the columns of $\bU$ such that $\ba_i[I] = \bU\bx_i \in \real^{r}$ for all $i \in \{1,2,\ldots, n\}$, i.e.,
$$
\ba_i = \bC\bx_i = \bA[:,J]\bx_i \in \real^{m} \qquad \longrightarrow \qquad
\ba_i[I] =\bA[I,J]\bx_i= \bU\bx_i \in \real^{r}.
$$
Since $\bR$ contains $r$ linearly independent rows of $\bA$, the row rank and column rank of $\bR$ are equal to $r$. Combining the facts above, the $r$ columns of $\bR$ corresponding to indices $J$ (i.e., the $r$ columns of $\bU$) are linearly independent.
Again, by applying Corollary~\ref{lemma:equal-dimension-rank} (p.~\pageref{lemma:equal-dimension-rank}), the dimension of the row space of $\bU$ is also equal to $r$, which means the $r$ rows of $\bU$ are linearly independent and $\bU$ is invertible.
\end{proof}
\chapter{Interpolative Decomposition (ID)}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Interpolative Decomposition (ID)}
Column interpolative decomposition (ID) factors a matrix as the product of two matrices: one contains selected columns of the original matrix, and the other contains an identity submatrix and has all entries no greater than 1 in absolute value. Formally, we have the following theorem describing the details of the column ID.
\begin{theoremHigh}[Column Interpolative Decomposition]\label{theorem:interpolative-decomposition}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\underset{m \times n}{\bA} = \underset{m\times r}{\bC} \gap \underset{r\times n}{\bW},
$$
where $\bC\in \real^{m\times r}$ consists of $r$ linearly independent columns of $\bA$, and $\bW\in \real^{r\times n}$ is the matrix used to reconstruct $\bA$, which contains an $r\times r$ identity submatrix (under a mild column permutation). Specifically, the entries of $\bW$ are no larger than 1 in magnitude:
$$
\max |w_{ij}|\leq 1, \,\, \forall \,\, i\in [1,r], j\in [1,n].
$$
\begin{itemize}
\item The storage for the decomposition is then reduced or potentially increased from $mn$ floats to $mr$ and $(n-r)r$ floats for storing $\bC$ and $\bW$ respectively, plus $r$ extra integers to remember the position of each column of $\bC$ in $\bA$.
\end{itemize}
\end{theoremHigh}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{imgs/id-column.pdf}
\caption{Demonstration of the column ID of a matrix where the \textcolor{canaryyellow}{yellow} vector denotes the linearly independent columns of $\bA$, white entries denote zero, and \textcolor{canarypurple}{purple} entries denote one.}
\label{fig:column-id}
\end{figure}
The illustration of the column ID is shown in Figure~\ref{fig:column-id}, where the \textcolor{canaryyellow}{yellow} vectors denote the linearly independent columns of $\bA$, and the \textcolor{canarypurple}{purple} vectors in $\bW$ form an $r\times r$ identity submatrix. The positions of the \textcolor{canarypurple}{purple} vectors inside $\bW$ are exactly the same as the positions of the corresponding \textcolor{canaryyellow}{yellow} vectors inside $\bA$. The column ID is very similar to the CR decomposition (Theorem~\ref{theorem:cr-decomposition}, p.~\pageref{theorem:cr-decomposition}): both select $r$ linearly independent columns into the first factor, and the second factor contains an $r\times r$ identity submatrix. The difference is that the CR decomposition chooses exactly the first $r$ linearly independent columns for the first factor, and its identity submatrix appears at the pivots (Definition~\ref{definition:pivot}, p.~\pageref{definition:pivot}). More importantly, the second factor in the CR decomposition comes from the RREF (Lemma~\ref{lemma:r-in-cr-decomposition}, p.~\pageref{lemma:r-in-cr-decomposition}).
Therefore, the column ID can also be utilized in the applications of the CR decomposition, e.g., proving that rank equals trace for idempotent matrices (Lemma~\ref{lemma:rank-of-symmetric-idempotent2_tmp}, p.~\pageref{lemma:rank-of-symmetric-idempotent2_tmp}), and proving the elementary theorem in linear algebra that the column rank of a matrix equals its row rank (Corollary~\ref{lemma:equal-dimension-rank}, p.~\pageref{lemma:equal-dimension-rank}).
Moreover, the column ID is also a special case of rank decomposition (Theorem~\ref{theorem:rank-decomposition}, p.~\pageref{theorem:rank-decomposition}) and is apparently not unique. The connection between different column IDs is given by Lemma~\ref{lemma:connection-rank-decom} (p.~\pageref{lemma:connection-rank-decom}).
\paragraph{Notations that will be extensively used in the sequel} Following again the Matlab-style notation, if $J_s$ is an index vector with size $r$ that contains the indices of columns selected from $\bA$ into $\bC$, then $\bC$ can be denoted as $\bC=\bA[:,J_s]$.
The matrix $\bC$ contains ``skeleton" columns of $\bA$, hence the subscript $s$ in $J_s$. From the ``skeleton" index vector $J_s$, the $r\times r$ identity matrix inside $\bW$ can be recovered by
$$
\bW[:,J_s] = \bI_r \in \real^{r\times r}.
$$
Suppose further we put the remaining indices of $\bA$ into an index vector $J_r$ where
$$
J_s\cap J_r=\varnothing \qquad \text{and}\qquad J_s\cup J_r = \{1,2,\ldots, n\}.
$$
The remaining $n-r$ columns in $\bW$ consist of an $r\times (n-r)$ \textit{expansion matrix} since the matrix contains \textit{expansion coefficients} to reconstruct the columns of $\bA$ from $\bC$:
$$
\bE = \bW[:,J_r] \in \real^{r\times (n-r)},
$$
where the entries of $\bE$ are known as the \textit{expansion coefficients}. Moreover, let $\bP\in \real^{n\times n}$ be a (column) permutation matrix (Definition~\ref{definition:permutation-matrix}, p.~\pageref{definition:permutation-matrix}) defined by $\bP=\bI_n[:,(J_s, J_r)]$ so that
$$
\bA\bP = \bA[:,(J_s, J_r)] = \left[\bC, \bA[:,J_r]\right],
$$
and
\begin{equation}\label{equation:interpolatibve-w-ep}
\bW\bP = \bW[:,(J_s, J_r)] =\left[\bI_r, \bE \right] \leadto \bW = \left[\bI_r, \bE \right] \bP^\top.
\end{equation}
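The structure of $\bW$ in Equation~\eqref{equation:interpolatibve-w-ep} can be illustrated with a minimal NumPy sketch (not code from the text). Here $J_s$ is an assumed skeleton index set; the bound $\max|w_{ij}|\leq 1$ is only guaranteed when $J_s$ is chosen as in the existence proof below, so this sketch verifies only the reconstruction $\bA=\bC\bW$ and the identity submatrix $\bW[:,J_s]=\bI_r$:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, r = 5, 6, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r

Js = np.array([0, 1, 2])                      # assumed skeleton indices
Jr = np.setdiff1d(np.arange(n), Js)           # remaining indices
C = A[:, Js]                                  # m x r skeleton columns
E = np.linalg.lstsq(C, A[:, Jr], rcond=None)[0]  # expansion coefficients
W = np.zeros((r, n))
W[:, Js] = np.eye(r)                          # identity submatrix
W[:, Jr] = E                                  # expansion matrix E
assert np.allclose(C @ W, A)                  # A = C W
```

Since $\bC$ has full column rank, the least squares solve recovers the expansion coefficients exactly (up to rounding), and every column of $\bA$ is reconstructed from the skeleton columns.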
\section{Existence of the Column Interpolative Decomposition}\label{section:proof-column-id}
\paragraph{Cramer's rule}
The proof of the existence of the column ID relies on Cramer's rule, which we shall briefly discuss here. Consider a system of $n$ linear equations in $n$ unknowns, represented in matrix form as follows \index{Cramer's rule}:
$$
\bM \bx = \bl,
$$
where $\bM\in \real^{n\times n}$ is nonsingular and $\bx,\bl \in \real^n$. Cramer's rule states that the system has a unique solution, whose individual unknowns are given by:
$$
x_i = \frac{\det(\bM_i)}{\det(\bM)}, \qquad \text{for all}\gap i\in \{1,2,\ldots, n\},
$$
where $\bM_i$ is the matrix formed by replacing the $i$-th column of $\bM$ with the column vector $\bl$. In full generality, Cramer's rule considers the matrix equation
$$
\bM\bX = \bL,
$$
where $\bM\in \real^{n\times n}$ is nonsingular and $\bX,\bL\in \real^{n\times m}$. Let $I=[i_1, i_2, \ldots, i_k]$ and $J=[j_1,j_2,\ldots, j_k]$ be two index vectors, where $1\leq i_1< i_2< \ldots< i_k\leq n$ and $1\leq j_1< j_2< \ldots< j_k\leq n$. Then $\bX[I,J]$ is a $k\times k$ submatrix of $\bX$. Let further $\bM_{\bL}(I,J)$ be the $n\times n$ matrix formed by replacing the $i_s$-th column of $\bM$ with the $j_s$-th column of $\bL$, for all $s\in \{1,2,\ldots, k\}$. Then
$$
\det(\bX[I,J]) = \frac{\det\left(\bM_{\bL}(I,J)\right)}{\det(\bM)}.
$$
When $I,J$ are of size 1, it follows that
\begin{equation}\label{equation:cramer-rule-general}
x_{ij} = \frac{\det\left(\bM_{\bL}(i,j)\right)}{\det(\bM)}.
\end{equation}
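As a quick numerical sanity check, the snippet below (with hypothetical values for $\bM$ and $\bl$) compares the solution of $\bM\bx=\bl$ from a standard solver against Cramer's formula $x_i = \det(\bM_i)/\det(\bM)$:

```python
import numpy as np

# Hypothetical 3 x 3 system M x = l.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
l = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(M, l)          # reference solution

# Cramer's rule: x_i = det(M_i) / det(M), where M_i replaces
# the i-th column of M with l.
x_cramer = np.empty(3)
for i in range(3):
    M_i = M.copy()
    M_i[:, i] = l
    x_cramer[i] = np.linalg.det(M_i) / np.linalg.det(M)
```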
Now we are ready to prove the existence of the column ID.
\begin{proof}[of Theorem~\ref{theorem:interpolative-decomposition}]
As mentioned above, the proof relies on Cramer's rule. If we can express the entries of $\bW$ via the Cramer's-rule identity in Equation~\eqref{equation:cramer-rule-general} and show that each numerator is no larger in magnitude than the denominator, the proof is complete. Note, however, that the matrix in the denominator of Equation~\eqref{equation:cramer-rule-general} must be square; this is where the following trick comes in.
\paragraph{Step 1: column ID for full row rank matrix}
For a start, we consider a matrix $\bA$ with full row rank (which implies $r=m$, $m\leq n$, and $\bA\in \real^{r\times n}$), so that the matrix $\bC\in \real^{r\times r}$ in the desired column ID $\bA=\bC\bW$ is square. Determine the ``skeleton'' index vector $J_s$ by
\begin{equation}\label{equation:interpolative-choose-js}
\boxed{ J_s = \mathop{\arg\max}_{J} \left\{|\det(\bA[:,J])|: \text{$J$ is a subset of $\{1,2,\ldots, n\}$ with size $r=m$} \right\},}
\end{equation}
i.e., $J_s$ is the index vector that is determined by maximizing the magnitude of the determinant of $\bA[:,J]$. As we have discussed in the last section, there exists a (column) permutation matrix such that
$$
\bA\bP =
\begin{bmatrix}
\bA[:,J_s]&\bA[:,J_r]
\end{bmatrix}.
$$
Since $\bA$ has full row rank $r$, some $r$ columns of $\bA$ are linearly independent, so the maximal determinant in Equation~\eqref{equation:interpolative-choose-js} is nonzero and $\bC=\bA[:,J_s]$ is nonsingular. The above equation can then be reformulated as
$$
\begin{aligned}
\bA
&=\begin{bmatrix}
\bA[:,J_s] &\bA[:,J_r]
\end{bmatrix}\bP^\top\\
&=
\bA[:,J_s]
\bigg[
\bI_r \gap \bA[:,J_s]^{-1}\bA[:,J_r]
\bigg]
\bP^\top\\
&= \bC
\underbrace{\begin{bmatrix}
\bI_r & \bC^{-1}\bA[:,J_r]
\end{bmatrix}
\bP^\top}_{\bW}
\end{aligned},
$$
where the matrix $\bW$ is given by
$
\begin{bmatrix}
\bI_r & \bC^{-1}\bA[:,J_r]
\end{bmatrix}\bP^\top
=
\begin{bmatrix}
\bI_r & \bE
\end{bmatrix}\bP^\top
$ by Equation~\eqref{equation:interpolatibve-w-ep}. Proving that the entries of $\bW$ are no larger than 1 in magnitude is therefore equivalent to proving that the entries of $\bE=\bC^{-1}\bA[:,J_r]\in \real^{r\times (n-r)}$ are no greater than 1 in absolute value.
Define the index vector $[j_1,j_2,\ldots, j_n]$ as the permutation of $[1,2,\ldots, n]$ satisfying\footnote{Note here $[j_1,j_2,\ldots, j_n] $, $[1,2,\ldots, n]$, $J_s$, and $J_r$ are row vectors.}
$$
[j_1,j_2,\ldots, j_n] = [1,2,\ldots, n] \bP = [J_s, J_r].
$$
Thus, it follows from $\bC\bE=\bA[:,J_r]$ that
$$
\begin{aligned}
\underbrace{ [\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}]}_{=\bC=\bA[:,J_s]} \bE &=
\underbrace{[\ba_{j_{r+1}}, \ba_{j_{r+2}}, \ldots, \ba_{j_n}]}_{=\bA[:,J_r]:=\bB},
\end{aligned}
$$
where $\ba_i$ denotes the $i$-th column of $\bA$, and we define $\bB=\bA[:,J_r]$.
Therefore, by Cramer's rule in Equation~\eqref{equation:cramer-rule-general}, we have
\begin{equation}\label{equation:column-id-expansionmatrix}
\bE_{kl} =
\frac{\det\left(\bC_{\bB}(k,l)\right)}
{\det\left(\bC\right)},
\end{equation}
where $\bE_{kl}$ is the entry ($k,l$) of $\bE$ and $\bC_{\bB}(k,l)$ is the $r\times r$ matrix formed by replacing the $k$-th column of $\bC$ with the $l$-th column of $\bB$. For example,
$$
\begin{aligned}
\bE_{11} &=
\frac{\det\left([\textcolor{blue}{\ba_{j_{r+1}}}, \ba_{j_2}, \ldots, \ba_{j_r}]\right)}
{\det\left([\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}]\right)},
\qquad
&\bE_{12} &=
\frac{\det\left([\textcolor{blue}{\ba_{j_{r+2}}}, \ba_{j_2},\ldots, \ba_{j_r}]\right)}
{\det\left([\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}]\right)},\\
\bE_{21} &=
\frac{\det\left([\ba_{j_1},\textcolor{blue}{\ba_{j_{r+1}}}, \ldots, \ba_{j_r}]\right)}
{\det\left([\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}]\right)},
\qquad
&\bE_{22} &=
\frac{\det\left([\ba_{j_1},\textcolor{blue}{\ba_{j_{r+2}}}, \ldots, \ba_{j_r}]\right)}
{\det\left([\ba_{j_1}, \ba_{j_2}, \ldots, \ba_{j_r}]\right)}.
\end{aligned}
$$
Since $J_s$ is chosen in Equation~\eqref{equation:interpolative-choose-js} to maximize the magnitude of $\det(\bC)$, and each numerator above is the determinant of another matrix formed from $r$ columns of $\bA$, it follows that
$$
|\bE_{kl}|\leq 1, \qquad \text{for all}\gap k\in \{1,2,\ldots, r\}, l\in \{1,2,\ldots, n-r\}.
$$
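The construction in Step 1 can be replayed numerically. The sketch below (random full-row-rank $\bA$; brute-force search over all column subsets, so only viable for tiny sizes) selects $J_s$ by maximizing $|\det(\bA[:,J])|$ and verifies the bound on the expansion coefficients:

```python
import numpy as np
from itertools import combinations

# Random full-row-rank matrix (rank r = 3 with probability 1).
rng = np.random.default_rng(0)
r, n = 3, 6
A = rng.standard_normal((r, n))

# Choose the skeleton J_s by maximizing |det(A[:, J])| over all
# r-subsets of the columns (combinatorial, so tiny sizes only).
J_s = list(max(combinations(range(n), r),
               key=lambda J: abs(np.linalg.det(A[:, list(J)]))))
J_r = [j for j in range(n) if j not in J_s]

C = A[:, J_s]
E = np.linalg.solve(C, A[:, J_r])   # expansion coefficients

W = np.zeros((r, n))
W[:, J_s] = np.eye(r)
W[:, J_r] = E
```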
\paragraph{Step 2: apply to general matrices}
To summarize what we have proved above (with a slight abuse of notation): for any matrix $\bF\in \real^{r\times n}$ with \textbf{full} row rank $r\leq n$, a column ID $\bF=\bC_0\bW$ exists in which the entries of $\bW$ are no greater than 1 in absolute value.
Applying this result to a general matrix $\bA\in \real^{m\times n}$ with rank $r\leq \min\{m,n\}$, the matrix $\bA$ admits a rank decomposition (Theorem~\ref{theorem:rank-decomposition}, p.~\pageref{theorem:rank-decomposition}):
$$
\underset{m\times n}{\bA} = \underset{m\times r}{\bD}\gap \underset{r\times n}{\bF},
$$
where $\bD$ and $\bF$ have full column rank $r$ and full row rank $r$, respectively. Let $\bF=\bC_0\bW$ be the column ID of $\bF$, where $\bC_0=\bF[:,J_s]$ contains $r$ linearly independent columns of $\bF$. Since $\bA=\bD\bF$, it follows that
$$
\bA[:,J_s]=\bD\bF[:,J_s],
$$
i.e., the columns of $\bD\bF$ indexed by $J_s$ are given by $\bD\bF[:,J_s]$, which in turn are the columns of $\bA$ indexed by $J_s$. This gives
$$
\underbrace{\bA[:,J_s]}_{\bC}= \underbrace{\bD\bF[:,J_s]}_{\bD\bC_0},
$$
and thus
$$
\bA = \bD\bF =\bD\bC_0\bW = \underbrace{\bD\bF[:,J_s]}_{\bC}\bW=\bC\bW.
$$
This completes the proof.
\end{proof}
The above proof reveals an intuitive way to compute the optimal column ID of a matrix $\bA$. However, any algorithm that is guaranteed to find such an optimally-conditioned factorization must have combinatorial complexity \citep{martinsson2019randomized, lu2022bayesian, lu2022comparative}. Therefore, randomized algorithms, approximation by column-pivoted QR (Section~\ref{section:cpqr}, p.~\pageref{section:cpqr}), and rank-revealing QR (Section~\ref{section:rank-r-qr}, p.~\pageref{section:rank-r-qr}) are applied to find a relatively well-conditioned column ID, in which $\bW$ is small in norm rather than having all entries smaller than 1 in magnitude. In contrast, the Bayesian approach can strictly constrain the components of $\bW$ to the range $[-1, 1]$ \citep{lu2022bayesian, lu2022comparative}; we shall not go into the details here.
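As a practical illustration (not the optimal max-determinant construction, and with variable names of our choosing), a column ID can be sketched via scipy's column-pivoted QR; for an exactly rank-$r$ matrix, the leading $r$ pivoted columns serve as the skeleton:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

# Exactly rank-r matrix (product of random factors).
rng = np.random.default_rng(1)
m, n, r = 8, 6, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Column-pivoted QR: A[:, piv] = Q @ R with decreasing pivot magnitudes.
Q, R, piv = qr(A, mode='economic', pivoting=True)
J_s = piv[:r]                       # skeleton column indices
C = A[:, J_s]

# Expansion coefficients from the leading r x r block of R:
# A[:, piv[r:]] = C @ (R11^{-1} R12) for an exactly rank-r matrix.
T = solve_triangular(R[:r, :r], R[:r, r:])

W = np.zeros((r, n))
W[:, J_s] = np.eye(r)
W[:, piv[r:]] = T
```

Note that, unlike the max-determinant choice, the entries of $\bW$ produced this way are typically modest in size but are not guaranteed to be bounded by 1.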
\section{Row ID and Two-Sided ID}
We have termed the decomposition above the column ID. The name is no coincidence, since the decomposition has siblings:
\begin{theoremHigh}[The Whole Interpolative Decomposition]\label{theorem:interpolative-decomposition-row}
Any rank-$r$ matrix $\bA \in \real^{m \times n}$ can be factored as
$$
\begin{aligned}
\text{Column ID: }&\gap \underset{m \times n}{\bA} &=& \boxed{\underset{m\times r}{\bC}} \gap \underset{r\times n}{\bW} ; \\
\text{Row ID: } &\gap &=&\underset{m\times r}{\bZ} \gap \boxed{\underset{r\times n}{\bR}}; \\
\text{Two-Sided ID: } &\gap &=&\underset{m\times r}{\bZ} \gap \boxed{\underset{r\times r}{\bU}} \gap \underset{r\times n}{\bW}, \\
\end{aligned}
$$
where
\begin{itemize}
\item $\bC=\bA[:,J_s]\in \real^{m\times r}$ is some $r$ linearly independent columns of $\bA$, $\bW\in \real^{r\times n}$ is the matrix to reconstruct $\bA$ which contains an $r\times r$ identity submatrix (under a mild column permutation): $\bW[:,J_s]=\bI_r$;
\item $\bR=\bA[I_s,:]\in \real^{r\times n}$ consists of $r$ linearly independent rows of $\bA$, and $\bZ\in \real^{m\times r}$ is the matrix that reconstructs $\bA$, containing an $r\times r$ identity submatrix (under a mild row permutation): $\bZ[I_s,:]=\bI_r$;
\item Entries in $\bW, \bZ$ have values no larger than 1 in magnitude: $\max |w_{ij}|\leq 1$ and $\max |z_{ij}|\leq 1$;
\item $\bU=\bA[I_s,J_s] \in \real^{r\times r}$ is the nonsingular submatrix on the intersection of $\bC,\bR$;
\item The three matrices $\bC,\bR,\bU$ in the $\boxed{\text{boxed}}$ texts share the same notation as in the skeleton decomposition (Theorem~\ref{theorem:skeleton-decomposition}, p.~\pageref{theorem:skeleton-decomposition}), and they indeed have the same meanings: together, the three matrices form the skeleton decomposition of $\bA$: $\bA=\bC\bU^{-1}\bR$.
\end{itemize}
\end{theoremHigh}
The proof of the row ID is analogous to that of the column ID. Suppose the column ID of $\bA^\top$ is given by $\bA^\top=\bC_0\bW_0$, where $\bC_0$ contains $r$ linearly independent columns of $\bA^\top$ (i.e., $r$ linearly independent rows of $\bA$). Letting $\bR=\bC_0^\top$ and $\bZ=\bW_0^\top$, the row ID is obtained as $\bA=\bZ\bR$.
For the two-sided ID, recall the skeleton decomposition (Theorem~\ref{theorem:skeleton-decomposition}, p.~\pageref{theorem:skeleton-decomposition}): when $\bU$ is the intersection of $\bC$ and $\bR$, it follows that $\bA=\bC\bU^{-1}\bR$. Comparing with the row ID shows $\bZ=\bC\bU^{-1}$, which implies $\bC=\bZ\bU$. Substituting into the column ID then yields $\bA=\bC\bW=\bZ\bU\bW$, which proves the existence of the two-sided ID.
\index{Data storage}
\paragraph{Data storage} For the data storage of each ID, we summarize as follows
\begin{itemize}
\item \textit{Column ID.} It requires $mr$ and $(n-r)r$ floats to store $\bC$ and $\bW$ respectively, and $r$ integers to store the indices of the selected columns in $\bA$;
\item \textit{Row ID.} It requires $nr$ and $(m-r)r$ floats to store $\bR$ and $\bZ$ respectively, and $r$ integers to store the indices of the selected rows in $\bA$;
\item \textit{Two-Sided ID.} It requires $(m-r)r$, $(n-r)r$, and $r^2$ floats to store $\bZ,\bW$, and $\bU$ respectively, and an extra $2r$ integers to store the indices of the selected rows and columns in $\bA$.
\end{itemize}
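The float counts above can be tabulated directly; a curious consequence of the itemization is that all three variants require the same total of $(m+n-r)r$ floats. A small helper (our own naming) computes the counts:

```python
# Float/integer storage counts for each ID, as itemized above
# (identity blocks and selected columns/rows of A are not duplicated).
def id_storage(m, n, r):
    return {
        "column": {"floats": m * r + (n - r) * r, "ints": r},
        "row":    {"floats": n * r + (m - r) * r, "ints": r},
        "two_sided": {"floats": (m - r) * r + (n - r) * r + r * r,
                      "ints": 2 * r},
    }

costs = id_storage(1000, 500, 20)
```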
\paragraph{Further reduction of the storage of the two-sided ID for a sparse matrix $\bA$}
Suppose the column ID is $\bA=\bC\bW$ with $\bC=\bA[:,J_s]$, and suppose a good set of spanning row indices $I_s$ of $\bC$ can be found such that
$$
\bA[I_s,:] = \bC[I_s,:]\bW.
$$
We observe that $\bC[I_s,:] = \bA[I_s,J_s]\in \real^{r\times r}$ is nonsingular (since it has full rank $r$, in the sense of both row rank and column rank). It follows that
$$
\bW = (\bA[I_s,J_s])^{-1} \bA[I_s,:].
$$
Therefore, there is no need to store the matrix $\bW$ explicitly: we only need to store $\bA[I_s,:]$ and $(\bA[I_s,J_s])^{-1}$. Alternatively, if we can compute the inverse of $\bA[I_s,J_s]$ on the fly, we only require $r$ integers to store $J_s$, and we can recover $\bA[I_s,J_s]$ from $\bA[I_s,:]$. The storage of $\bA[I_s,:]$ is cheap if $\bA$ is sparse.
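The identity $\bW = (\bA[I_s,J_s])^{-1} \bA[I_s,:]$ can be checked numerically. In the sketch below (random exactly rank-$r$ matrix; $J_s$ is picked by hand since any $r$ independent columns suffice for this identity, and $I_s$ by brute-force determinant maximization over the rows of $\bC$):

```python
import numpy as np
from itertools import combinations

# Exactly rank-r matrix.
rng = np.random.default_rng(2)
m, n, r = 7, 5, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

J_s = [0, 1]                        # generic columns have rank r here
C = A[:, J_s]

# Spanning row indices I_s of C, found by maximizing |det(C[I, :])|.
I_s = list(max(combinations(range(m), r),
               key=lambda I: abs(np.linalg.det(C[list(I), :]))))

# W need not be stored explicitly: W = (A[I_s, J_s])^{-1} A[I_s, :].
W = np.linalg.solve(A[np.ix_(I_s, J_s)], A[I_s, :])
```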
\chapter{Spectral Decomposition (Theorem)}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Spectral Decomposition (Theorem)}\label{section:spectral-decomposition}
\begin{theoremHigh}[Spectral Decomposition]\label{theorem:spectral_theorem}
A real matrix $\bA \in \real^{n\times n}$ is symmetric if and only if there exists an orthogonal matrix $\bQ$ and a diagonal matrix $\bLambda$ such that
\begin{equation*}
\bA = \bQ \bLambda \bQ^\top,
\end{equation*}
where the columns of $\bQ = [\bq_1, \bq_2, \ldots, \bq_n]$ are eigenvectors of $\bA$ and are mutually orthonormal, and the entries of $\bLambda=diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$ are the corresponding eigenvalues of $\bA$, which are real. The rank of $\bA$ equals the number of nonzero eigenvalues. This factorization is known as the \textit{spectral decomposition} or \textit{spectral theorem} of the real symmetric matrix $\bA$. Specifically, we have the following properties:
\begin{enumerate}
\item A symmetric matrix has only \textbf{real eigenvalues};
\item The eigenvectors are orthogonal such that they can be chosen \textbf{orthonormal} by normalization;
\item The rank of $\bA$ is the number of nonzero eigenvalues;
\item If the eigenvalues are distinct, the eigenvectors are unique as well.
\end{enumerate}
\end{theoremHigh}
The above decomposition is called the spectral decomposition for real symmetric matrices and is often known as the \textit{spectral theorem}.
\paragraph{Spectral theorem vs eigenvalue decomposition} In the eigenvalue decomposition, we require the matrix $\bA$ to be square and the eigenvectors to be linearly independent, whereas in the spectral theorem, any symmetric matrix can be diagonalized and the eigenvectors can be chosen to be orthonormal.
\paragraph{A word on the spectral decomposition} In Lemma~\ref{lemma:eigenvalue-similar-matrices} (p.~\pageref{lemma:eigenvalue-similar-matrices}), we proved that the eigenvalues of similar matrices are the same. From the spectral decomposition, we notice that $\bA$ and $\bLambda$ are similar matrices, so their eigenvalues are the same. For any diagonal matrix, the eigenvalues are its diagonal entries.\footnote{In fact, we showed in the last section that the diagonal entries of a triangular matrix are its eigenvalues.} To see this, we realize that
$$
\bLambda \be_i = \lambda_i \be_i,
$$
where $\be_i$ is the $i$-th standard basis vector. Therefore, the matrix $\bLambda$ contains the eigenvalues of $\bA$.
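Numerically, the spectral decomposition of a symmetric matrix is computed with numpy's `eigh`, which returns real eigenvalues in ascending order and orthonormal eigenvectors (a sketch with a random symmetric matrix):

```python
import numpy as np

# Random symmetric matrix.
rng = np.random.default_rng(3)
S = rng.standard_normal((4, 4))
A = (S + S.T) / 2

# eigh assumes symmetry: real eigenvalues, orthonormal eigenvectors.
lam, Q = np.linalg.eigh(A)
Lambda = np.diag(lam)            # A = Q @ Lambda @ Q.T
```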
\section{Existence of the Spectral Decomposition}\label{section:existence-of-spectral}
We prove the theorem in several steps.
\begin{tcolorbox}[title={Symmetric Matrix Property 1 of 4}]
\begin{lemma}[Real Eigenvalues]\label{lemma:real-eigenvalues-spectral}
The eigenvalues of any symmetric matrix are all real.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:real-eigenvalues-spectral}]
Suppose eigenvalue $\lambda$ is a complex number $\lambda=a+ib$, where $a,b$ are real, with complex conjugate $\bar{\lambda}=a-ib$. Similarly, suppose the corresponding eigenvector is $\bx = \bc+i\bd$ with complex conjugate $\bar{\bx}=\bc-i\bd$, where $\bc, \bd$ are real vectors. We then have the following chain (using $\bA=\bA^\top$ in the last step):
$$
\bA \bx = \lambda \bx\qquad \underrightarrow{\text{ leads to }}\qquad \bA \bar{\bx} = \bar{\lambda} \bar{\bx}\qquad \underrightarrow{\text{ transpose to }}\qquad \bar{\bx}^\top \bA =\bar{\lambda} \bar{\bx}^\top.
$$
We take the dot product of the first equation with $\bar{\bx}$ and the last equation with $\bx$:
$$
\bar{\bx}^\top \bA \bx = \lambda \bar{\bx}^\top \bx, \qquad \text{and } \qquad \bar{\bx}^\top \bA \bx = \bar{\lambda}\bar{\bx}^\top \bx.
$$
Then we have the equality $\lambda\bar{\bx}^\top \bx = \bar{\lambda} \bar{\bx}^\top\bx$. Since $\bar{\bx}^\top\bx = (\bc-i\bd)^\top(\bc+i\bd) = \bc^\top\bc+\bd^\top\bd$ is real and positive (an eigenvector is nonzero), it follows that $\lambda=\bar{\lambda}$, i.e., the imaginary part of $\lambda$ is zero and $\lambda$ is real.
\end{proof}
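The lemma can be observed numerically: even when a real symmetric matrix is passed to the general (complex-capable) eigenvalue solver, the imaginary parts of the computed eigenvalues vanish (random symmetric example):

```python
import numpy as np

# Random symmetric matrix fed to the general solver, which does not
# assume symmetry and may in principle return complex eigenvalues.
rng = np.random.default_rng(4)
S = rng.standard_normal((5, 5))
A = S + S.T

eigvals = np.linalg.eigvals(A)
```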
\begin{tcolorbox}[title={Symmetric Matrix Property 2 of 4}]
\begin{lemma}[Orthogonal Eigenvectors]\label{lemma:orthogonal-eigenvectors}
The eigenvectors corresponding to distinct eigenvalues of any symmetric matrix are orthogonal, and they can be normalized to an orthonormal set, since $\bA\bx = \lambda \bx$ implies $\bA\frac{\bx}{||\bx||} = \lambda \frac{\bx}{||\bx||}$, i.e., $\frac{\bx}{||\bx||}$ is an eigenvector corresponding to the same eigenvalue.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:orthogonal-eigenvectors}]
Suppose eigenvalues $\lambda_1, \lambda_2$ correspond to eigenvectors $\bx_1, \bx_2$ so that $\bA\bx_1=\lambda_1 \bx_1$ and $\bA\bx_2 = \lambda_2\bx_2$. We have the following equalities:
$$
\bA\bx_1=\lambda_1 \bx_1 \leadto \bx_1^\top \bA =\lambda_1 \bx_1^\top \leadto \bx_1^\top \bA \bx_2 =\lambda_1 \bx_1^\top\bx_2,
$$
and
$$
\bA\bx_2 = \lambda_2\bx_2 \leadto \bx_1^\top\bA\bx_2 = \lambda_2\bx_1^\top\bx_2,
$$
which implies $\lambda_1 \bx_1^\top\bx_2=\lambda_2\bx_1^\top\bx_2$. Since $\lambda_1\neq \lambda_2$, it follows that $\bx_1^\top\bx_2=0$, i.e., the eigenvectors are orthogonal.
\end{proof}
In the above Lemma~\ref{lemma:orthogonal-eigenvectors}, we prove that the eigenvectors corresponding to distinct eigenvalues of symmetric matrices are orthogonal. More generally, we prove the important theorem that eigenvectors corresponding to distinct eigenvalues of any matrix are linearly independent.
\begin{theorem}[Independent Eigenvector Theorem]\label{theorem:independent-eigenvector-theorem}
If a matrix $\bA\in \real^{n\times n}$ has $k$ distinct eigenvalues, then any set of $k$ corresponding eigenvectors are linearly independent.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:independent-eigenvector-theorem}]
We prove this by induction. First, we show that any two eigenvectors corresponding to distinct eigenvalues are linearly independent. Suppose $\bv_1,\bv_2$ correspond to distinct eigenvalues $\lambda_1$ and $\lambda_2$ respectively. Suppose further, for contradiction, that there exists a nonzero vector $\bx=[x_1,x_2] \neq \bzero $ such that
\begin{equation}\label{equation:independent-eigenvector-eq1}
x_1\bv_1+x_2\bv_2=\bzero.
\end{equation}
That is, suppose $\bv_1,\bv_2$ are linearly dependent.
Multiplying Equation~\eqref{equation:independent-eigenvector-eq1} on the left by $\bA$, we get
\begin{equation}\label{equation:independent-eigenvector-eq2}
x_1 \lambda_1\bv_1 + x_2\lambda_2\bv_2 = \bzero.
\end{equation}
Multiplying Equation~\eqref{equation:independent-eigenvector-eq1} on the left by $\lambda_2$, we get
\begin{equation}\label{equation:independent-eigenvector-eq3}
x_1\lambda_2\bv_1 + x_2\lambda_2\bv_2 = \bzero.
\end{equation}
Subtracting Equation~\eqref{equation:independent-eigenvector-eq2} from Equation~\eqref{equation:independent-eigenvector-eq3}, we find
$$
x_1(\lambda_2-\lambda_1)\bv_1 = \bzero.
$$
Since $\lambda_2\neq \lambda_1$ and $\bv_1\neq \bzero$, we must have $x_1=0$. Then Equation~\eqref{equation:independent-eigenvector-eq1} with $\bv_2\neq \bzero$ forces $x_2=0$, which contradicts $\bx\neq \bzero$. Thus $\bv_1,\bv_2$ are linearly independent.
Now suppose any $j<k$ of the eigenvectors are linearly independent; if we can show that any $j+1$ of them are also linearly independent, the proof is complete. Suppose, for contradiction, that $\bv_1, \bv_2, \ldots, \bv_j$ are linearly independent but $\bv_{j+1}$ depends on the first $j$ eigenvectors. That is, there exists a nonzero vector $\bx=[x_1,x_2,\ldots, x_{j}]\neq \bzero$ such that
\begin{equation}\label{equation:independent-eigenvector-zero}
\bv_{j+1}= x_1\bv_1+x_2\bv_2+\ldots+x_j\bv_j .
\end{equation}
Suppose the $j+1$ eigenvectors correspond to distinct eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_j,\lambda_{j+1}$.
Multiplying Equation~\eqref{equation:independent-eigenvector-zero} on the left by $\bA$, we get
\begin{equation}\label{equation:independent-eigenvector-zero2}
\lambda_{j+1} \bv_{j+1} = x_1\lambda_1\bv_1+x_2\lambda_2\bv_2+\ldots+x_j \lambda_j\bv_j .
\end{equation}
Multiplying Equation~\eqref{equation:independent-eigenvector-zero} on the left by $\lambda_{j+1}$, we get
\begin{equation}\label{equation:independent-eigenvector-zero3}
\lambda_{j+1} \bv_{j+1} = x_1\lambda_{j+1}\bv_1+x_2\lambda_{j+1}\bv_2+\ldots+x_j \lambda_{j+1}\bv_j .
\end{equation}
Subtracting Equation~\eqref{equation:independent-eigenvector-zero3} from Equation~\eqref{equation:independent-eigenvector-zero2}, we find
$$
x_1(\lambda_{j+1}-\lambda_1)\bv_1+x_2(\lambda_{j+1}-\lambda_2)\bv_2+\ldots+x_j (\lambda_{j+1}-\lambda_j)\bv_j = \bzero.
$$
By assumption, $\lambda_{j+1} \neq \lambda_i$ and $\bv_i\neq \bzero$ for all $i\in \{1,2,\ldots,j\}$, and the $\bv_i$ are linearly independent; we must therefore have $x_1=x_2=\ldots=x_j=0$. But then Equation~\eqref{equation:independent-eigenvector-zero} gives $\bv_{j+1}=\bzero$, a contradiction since eigenvectors are nonzero. Hence $\bv_1,\bv_2,\ldots,\bv_j,\bv_{j+1}$ are linearly independent. This completes the proof.
\end{proof}
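A numerical check of the theorem on a nonsymmetric matrix with distinct eigenvalues (a hand-picked triangular example, so the eigenvalues can be read off the diagonal): the eigenvector matrix is nonsingular, hence the eigenvectors are linearly independent.

```python
import numpy as np

# Upper-triangular example: eigenvalues 2, 3, 5 sit on the diagonal
# and are distinct, but the matrix is not symmetric.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

lam, V = np.linalg.eig(A)        # columns of V are eigenvectors
det_V = np.linalg.det(V)         # nonzero <=> columns independent
```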
\begin{corollary}[Independent Eigenvector Theorem, Continued]\label{theorem:independent-eigenvector-theorem-basis}
If a matrix $\bA\in \real^{n\times n}$ has $n$ distinct eigenvalues, then any set of $n$ corresponding eigenvectors form a basis for $\real^n$.
\end{corollary}
\begin{tcolorbox}[title={Symmetric Matrix Property 3 of 4}]
\begin{lemma}[Orthonormal Eigenvectors for Duplicate Eigenvalue]\label{lemma:eigen-multiplicity}
If $\bA$ has a duplicate eigenvalue $\lambda_i$ with multiplicity $k\geq 2$, then there exist $k$ orthonormal eigenvectors corresponding to $\lambda_i$.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:eigen-multiplicity}]
We note that there is at least one (unit-norm) eigenvector $\bx_{i1}$ corresponding to $\lambda_i$. For such an eigenvector $\bx_{i1}$, we can always find $n-1$ additional orthonormal vectors $\by_2, \by_3, \ldots, \by_n$ so that $\{\bx_{i1}, \by_2, \by_3, \ldots, \by_n\}$ forms an orthonormal basis of $\real^n$. Put $\by_2, \by_3, \ldots, \by_n$ into the matrix $\bY_1$ and $\{\bx_{i1}, \by_2, \by_3, \ldots, \by_n\}$ into the matrix $\bP_1$:
$$
\bY_1=[\by_2, \by_3, \ldots, \by_n] \qquad \text{and} \qquad \bP_1=[\bx_{i1}, \bY_1].
$$
We then have
$$
\bP_1^\top\bA\bP_1 = \begin{bmatrix}
\lambda_i &\bzero \\
\bzero & \bY_1^\top \bA\bY_1
\end{bmatrix}.
$$
As a result, $\bA$ and $\bP_1^\top\bA\bP_1$ are similar matrices, so they have the same eigenvalues, since $\bP_1$ is nonsingular (indeed orthogonal here, see Lemma~\ref{lemma:eigenvalue-similar-matrices}, p.~\pageref{lemma:eigenvalue-similar-matrices}). We obtain\footnote{By the fact that if matrix $\bM$ has the block formulation $\bM=\begin{bmatrix}
\bA & \bB \\
\bC & \bD
\end{bmatrix}$ with $\bA$ nonsingular, then $\det(\bM) = \det(\bA)\det(\bD-\bC\bA^{-1}\bB)$.
}
$$
\det(\bP_1^\top\bA\bP_1 - \lambda\bI_n) =
(\lambda_i - \lambda )\det(\bY_1^\top \bA\bY_1 - \lambda\bI_{n-1}).
$$
If $\lambda_i$ has multiplicity $k\geq 2$, then the factor $(\lambda_i-\lambda)$ occurs $k$ times in the characteristic polynomial $\det(\bP_1^\top\bA\bP_1 - \lambda\bI_n)$, i.e., it occurs $k-1$ times in the polynomial $\det(\bY_1^\top \bA\bY_1 - \lambda\bI_{n-1})$. In other words, $\det(\bY_1^\top \bA\bY_1 - \lambda_i\bI_{n-1})=0$ and $\lambda_i$ is an eigenvalue of $\bY_1^\top \bA\bY_1$.
Let $\bB=\bY_1^\top \bA\bY_1$. Since $\det(\bB-\lambda_i\bI_{n-1})=0$, the null space of $\bB-\lambda_i\bI_{n-1}$ is nontrivial. Let $\bn\neq \bzero$ satisfy $(\bB-\lambda_i\bI_{n-1})\bn = \bzero$, i.e., $\bB\bn=\lambda_i\bn$, so that $\bn$ is an eigenvector of $\bB$.
From $
\bP_1^\top\bA\bP_1 = \begin{bmatrix}
\lambda_i &\bzero \\
\bzero & \bB
\end{bmatrix},
$
we have $
\bA\bP_1
\begin{bmatrix}
z \\
\bn
\end{bmatrix}
=
\bP_1
\begin{bmatrix}
\lambda_i &\bzero \\
\bzero & \bB
\end{bmatrix}
\begin{bmatrix}
z \\
\bn
\end{bmatrix}$, where $z$ is any scalar. From the left side of this equation, we have
\begin{equation}\label{equation:spectral-pro4-right}
\begin{aligned}
\bA\bP_1
\begin{bmatrix}
z \\
\bn
\end{bmatrix}
&=
\begin{bmatrix}
\lambda_i\bx_{i1}, \bA\bY_1
\end{bmatrix}
\begin{bmatrix}
z \\
\bn
\end{bmatrix} \\
&=\lambda_iz\bx_{i1} + \bA\bY_1\bn.
\end{aligned}
\end{equation}
And from the right side of the equation, we have
\begin{equation}\label{equation:spectral-pro4-left}
\begin{aligned}
\bP_1
\begin{bmatrix}
\lambda_i &\bzero \\
\bzero & \bB
\end{bmatrix}
\begin{bmatrix}
z \\
\bn
\end{bmatrix}
&=
\begin{bmatrix}
\bx_{i1} & \bY_1
\end{bmatrix}
\begin{bmatrix}
\lambda_i &\bzero \\
\bzero & \bB
\end{bmatrix}
\begin{bmatrix}
z \\
\bn
\end{bmatrix}\\
&=
\begin{bmatrix}
\lambda_i\bx_{i1} & \bY_1\bB
\end{bmatrix}
\begin{bmatrix}
z \\
\bn
\end{bmatrix}\\
&= \lambda_i z \bx_{i1} + \bY_1\bB \bn \\
&=\lambda_i z \bx_{i1} + \lambda_i \bY_1 \bn. \qquad (\text{Since $\bB \bn=\lambda_i\bn$})\\
\end{aligned}
\end{equation}
Combining Equation~\eqref{equation:spectral-pro4-left} and Equation~\eqref{equation:spectral-pro4-right}, we obtain
$$
\bA\bY_1\bn = \lambda_i\bY_1 \bn,
$$
which means $\bY_1\bn$ is an eigenvector of $\bA$ corresponding to the eigenvalue $\lambda_i$ (the same eigenvalue as for $\bx_{i1}$). Since $\bY_1\bn$ is a linear combination of $\by_2, \by_3, \ldots, \by_n$, all of which are orthogonal to $\bx_{i1}$, the vector $\bY_1\bn$ is orthogonal to $\bx_{i1}$ and can be normalized so that the two are orthonormal.
To conclude, if we have one eigenvector $\bx_{i1}$ corresponding to $\lambda_i$ whose multiplicity is $k\geq 2$, we could construct the second eigenvector by choosing one vector from the null space of $(\bB-\lambda_i\bI_{n-1})$ constructed above. Suppose now, we have constructed the second eigenvector $\bx_{i2}$ which is orthonormal to $\bx_{i1}$.
For such eigenvectors $\bx_{i1}, \bx_{i2}$, we can always find additional $n-2$ orthonormal vectors $\by_3, \by_4, \ldots, \by_n$ so that $\{\bx_{i1},\bx_{i2}, \by_3, \by_4, \ldots, \by_n\}$ forms an orthonormal basis in $\real^n$. Put the $\by_3, \by_4, \ldots, \by_n$ into matrix $\bY_2$ and $\{\bx_{i1},\bx_{i2}, \by_3, \by_4, \ldots, \by_n\}$ into matrix $\bP_2$:
$$
\bY_2=[\by_3, \by_4, \ldots, \by_n] \qquad \text{and} \qquad \bP_2=[\bx_{i1}, \bx_{i2},\bY_2].
$$
We then have
$$
\bP_2^\top\bA\bP_2 =
\begin{bmatrix}
\lambda_i & 0 &\bzero \\
0& \lambda_i &\bzero \\
\bzero & \bzero & \bY_2^\top \bA\bY_2
\end{bmatrix}
=
\begin{bmatrix}
\lambda_i & 0 &\bzero \\
0& \lambda_i &\bzero \\
\bzero & \bzero & \bC
\end{bmatrix},
$$
where $\bC=\bY_2^\top \bA\bY_2$ such that $\det(\bP_2^\top\bA\bP_2 - \lambda\bI_n) = (\lambda_i-\lambda)^2 \det(\bC - \lambda\bI_{n-2})$. If the multiplicity of $\lambda_i$ is $k\geq 3$, then $\det(\bC - \lambda_i\bI_{n-2})=0$ and the null space of $\bC - \lambda_i\bI_{n-2}$ is nontrivial, so that we can find a vector $\bn$ in this null space with $\bC\bn = \lambda_i \bn$. Now we can construct a vector $\begin{bmatrix}
z_1 \\
z_2\\
\bn
\end{bmatrix}\in \real^n $, where $z_1, z_2$ are any scalar values, such that
$$
\bA\bP_2\begin{bmatrix}
z_1 \\
z_2\\
\bn
\end{bmatrix} = \bP_2
\begin{bmatrix}
\lambda_i & 0 &\bzero \\
0& \lambda_i &\bzero \\
\bzero & \bzero & \bC
\end{bmatrix}
\begin{bmatrix}
z_1 \\
z_2\\
\bn
\end{bmatrix}.
$$
Similarly, the left side of the above equation gives $\lambda_iz_1\bx_{i1} +\lambda_iz_2\bx_{i2}+\bA\bY_2\bn$, while the right side gives $\lambda_iz_1\bx_{i1} +\lambda_i z_2\bx_{i2}+\lambda_i\bY_2\bn$. As a result,
$$
\bA\bY_2\bn = \lambda_i\bY_2\bn,
$$
so that $\bY_2\bn$ is an eigenvector of $\bA$ orthogonal to $\bx_{i1}, \bx_{i2}$, and it can be normalized to be orthonormal to the first two.
Continuing this process, we eventually find $k$ orthonormal eigenvectors corresponding to $\lambda_i$.
In fact, the dimension of the null space of $\bP_1^\top\bA\bP_1 -\lambda_i\bI_n$ is equal to the multiplicity $k$. It also follows that if the multiplicity of $\lambda_i$ is $k$, there cannot be more than $k$ orthogonal eigenvectors corresponding to $\lambda_i$; otherwise, we could find more than $n$ orthogonal eigenvectors in $\real^n$, which is a contradiction.
\end{proof}
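The lemma can be observed numerically: rotating $diag(1,1,4)$ by a random orthogonal matrix gives a symmetric matrix with eigenvalue 1 of multiplicity 2, and `eigh` still returns a fully orthonormal set of eigenvectors (a sketch with assumed values):

```python
import numpy as np

# Rotate diag(1, 1, 4) by a random orthogonal matrix: B is symmetric
# with eigenvalue 1 of multiplicity 2.
rng = np.random.default_rng(5)
Q_rot, _ = np.linalg.qr(rng.standard_normal((3, 3)))
B = Q_rot @ np.diag([1.0, 1.0, 4.0]) @ Q_rot.T

lam, Q = np.linalg.eigh(B)       # still a full orthonormal eigenbasis
```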
Given the lemmas above, the existence of the spectral decomposition follows directly. Alternatively, we can use the Schur decomposition to prove it.
\begin{proof}[\textbf{of Theorem~\ref{theorem:spectral_theorem}: Existence of Spectral Decomposition}]
From the Schur decomposition in Theorem~\ref{theorem:schur-decomposition} (p.~\pageref{theorem:schur-decomposition}), the symmetry $\bA=\bA^\top$ leads to $\bQ\bU\bQ^\top = \bQ\bU^\top\bQ^\top$, hence $\bU=\bU^\top$. Since $\bU$ is upper triangular, it must be diagonal, and this diagonal matrix contains the eigenvalues of $\bA$. All the columns of $\bQ$ are then eigenvectors of $\bA$. We conclude that all symmetric matrices are diagonalizable, even those with repeated eigenvalues.
\end{proof}
For any matrix multiplication, the rank of the product is no larger than the rank of either factor. However, the symmetric matrix $\bA^\top \bA$ is rather special in that its rank equals that of $\bA$, a fact that will be used in the proof of the singular value decomposition in the next section.
\begin{lemma}[Rank of $\bA\bB$]\label{lemma:rankAB}
For any matrix $\bA\in \real^{m\times n}$, $\bB\in \real^{n\times k}$, then the matrix multiplication $\bA\bB\in \real^{m\times k}$ has $rank$($\bA\bB$)$\leq$min($rank$($\bA$), $rank$($\bB$)).
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:rankAB}]
For matrix multiplication $\bA\bB$, we have
$\bullet$ All rows of $\bA\bB$ are combinations of the rows of $\bB$, so the row space of $\bA\bB$ is a subset of the row space of $\bB$. Thus $rank$($\bA\bB$)$\leq$$rank$($\bB$).
$\bullet$ All columns of $\bA\bB$ are combinations of the columns of $\bA$, so the column space of $\bA\bB$ is a subset of the column space of $\bA$. Thus $rank$($\bA\bB$)$\leq$$rank$($\bA$).
Therefore, $rank$($\bA\bB$)$\leq$min($rank$($\bA$), $rank$($\bB$)).
\end{proof}
\begin{tcolorbox}[title={Symmetric Matrix Property 4 of 4}]
\begin{lemma}[Rank of Symmetric Matrices]\label{lemma:rank-of-symmetric}
If $\bA$ is an $n\times n$ real symmetric matrix, then $rank(\bA)$ =
the total number of nonzero eigenvalues of $\bA$.
In particular, $\bA$ has full rank if and only if $\bA$ is nonsingular. Further, $\cspace(\bA)$ is the linear space spanned by the eigenvectors of $\bA$ that correspond to nonzero eigenvalues.
\end{lemma}
\end{tcolorbox}
\begin{proof}[of Lemma~\ref{lemma:rank-of-symmetric}]
For any symmetric matrix $\bA$, the spectral form gives $\bA = \bQ \bLambda\bQ^\top$ and hence also $\bLambda = \bQ^\top\bA\bQ$. We have shown in Lemma~\ref{lemma:rankAB} that $rank$($\bA\bB$)$\leq$min($rank$($\bA$), $rank$($\bB$)) for any matrices.
$\bullet$ From $\bA = \bQ \bLambda\bQ^\top$, we have $rank(\bA) \leq rank(\bQ \bLambda) \leq rank(\bLambda)$;
$\bullet$ From $\bLambda = \bQ^\top\bA\bQ$, we have $rank(\bLambda) \leq rank(\bQ^\top\bA) \leq rank(\bA)$.
The two inequalities together force $rank(\bA) = rank(\bLambda)$, which is the total number of nonzero eigenvalues.
Since $\bA$ is nonsingular if and only if all of its eigenvalues are nonzero, $\bA$ has full rank if and only if $\bA$ is nonsingular.
\end{proof}
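A quick numerical check of the rank lemma (a hypothetical rank-2 symmetric matrix built from two outer products):

```python
import numpy as np

# Symmetric rank-2 matrix from two outer products.
rng = np.random.default_rng(6)
u = rng.standard_normal(5)
v = rng.standard_normal(5)
A = np.outer(u, u) + np.outer(v, v)

lam = np.linalg.eigvalsh(A)
num_nonzero = int(np.sum(np.abs(lam) > 1e-10))
```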
Similar to the eigenvalue decomposition, we can compute the $m$-th power of matrix $\bA$ more efficiently via the spectral decomposition.
\begin{remark}[$m$-th Power]\label{remark:power-spectral}
The $m$-th power of $\bA$ is $\bA^m = \bQ\bLambda^m\bQ^\top$ if the matrix $\bA$ can be factored as $\bA=\bQ\bLambda\bQ^\top$.
\end{remark}
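The remark can be sketched as follows: computing $\bA^3$ via the spectral decomposition and comparing with repeated multiplication (random symmetric example):

```python
import numpy as np

# A^3 via the spectral decomposition: A^m = Q diag(lam)^m Q^T.
rng = np.random.default_rng(7)
S = rng.standard_normal((4, 4))
A = S + S.T

lam, Q = np.linalg.eigh(A)
A_cubed = Q @ np.diag(lam ** 3) @ Q.T
```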
\section{Uniqueness of Spectral Decomposition}\label{section:uniqueness-spectral-decomposition}
Clearly, the spectral decomposition is not unique, essentially because of the multiplicity of eigenvalues. For instance, if eigenvalues $\lambda_i$ and $\lambda_j$ coincide for some $1\leq i,j\leq n$, then interchanging the corresponding eigenvectors in $\bQ$ yields the same matrix $\bA$ but a different factorization.
But the \textit{eigenspaces} (i.e., the null space $\nspace(\bA - \lambda_i\bI)$ for eigenvalue $\lambda_i$) corresponding to each eigenvalue are fixed. So there is a unique decomposition in terms
of eigenspaces and then any orthonormal basis of these eigenspaces can be chosen.
\section{Other Forms, Connecting Eigenvalue Decomposition*}\label{section:otherform-spectral}
In this section, we discuss other forms of the spectral decomposition under different conditions. To this end, we first provide a rigorous definition of the characteristic polynomial.
\begin{definition}[Characteristic Polynomial\index{Characteristic polynomial}]
For any square matrix $\bA \in \real^{n\times n}$, the \textbf{characteristic polynomial} $\det(\lambda\bI - \bA)$ is given by
$$
\begin{aligned}
\det(\lambda\bI-\bA ) &=\lambda^n + \gamma_{n-1} \lambda^{n-1} + \ldots + \gamma_1 \lambda + \gamma_0\\
&=(\lambda-\lambda_1)^{k_1} (\lambda-\lambda_2)^{k_2} \ldots (\lambda-\lambda_m)^{k_m},
\end{aligned}
$$
where $\lambda_1, \lambda_2, \ldots, \lambda_m$ are the distinct roots of $\det( \lambda\bI-\bA)$ and also the eigenvalues of $\bA$, and $k_1+k_2+\ldots +k_m=n$, i.e., $\det(\lambda\bI-\bA)$ is a polynomial of degree $n$ for any matrix $\bA\in \real^{n\times n}$ (see proof of Lemma~\ref{lemma:eigen-multiplicity}, p.~\pageref{lemma:eigen-multiplicity}).
\end{definition}
An important notion of multiplicity arising from the characteristic polynomial of a matrix is then defined as follows:
\begin{definition}[Algebraic Multiplicity and Geometric Multiplicity\index{Algebraic multiplicity}\index{Geometric multiplicity}]
Given the characteristic polynomial of matrix $\bA\in \real^{n\times n}$:
$$
\begin{aligned}
\det(\lambda\bI-\bA ) =(\lambda-\lambda_1)^{k_1} (\lambda-\lambda_2)^{k_2} \ldots (\lambda-\lambda_m)^{k_m}.
\end{aligned}
$$
The integer $k_i$ is called the \textbf{algebraic multiplicity} of the eigenvalue $\lambda_i$, i.e., the algebraic multiplicity of eigenvalue $\lambda_i$ is equal to the multiplicity of the corresponding root of the characteristic polynomial.
The \textbf{eigenspace associated to eigenvalue $\lambda_i$} is defined by the null space of $(\bA - \lambda_i\bI)$, i.e., $\nspace(\bA - \lambda_i\bI)$.
And the dimension of the eigenspace associated with $\lambda_i$, $\nspace(\bA - \lambda_i\bI)$, is called the \textbf{geometric multiplicity} of $\lambda_i$.
In short, we denote the algebraic multiplicity of $\lambda_i$ by $alg(\lambda_i)$, and its geometric multiplicity by $geo(\lambda_i)$.
\end{definition}
\begin{remark}[Geometric Multiplicity]\label{remark:geometric-mul-meaning}
Note that for a matrix $\bA$ and the eigenspace $\nspace(\bA-\lambda_i\bI)$, the dimension of the eigenspace is also the maximal number of linearly independent eigenvectors of $\bA$ associated with $\lambda_i$, i.e., the size of a basis for the eigenspace. This implies that
while there are an infinite number of eigenvectors associated with each eigenvalue $\lambda_i$, the fact that they form a subspace (provided the zero vector is added) means that they can be described by a finite number of vectors.
\end{remark}
By definition, the sum of the algebraic multiplicities is equal to $n$, but the sum of the geometric multiplicities can be strictly smaller.
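This gap can be seen concretely in a minimal NumPy sketch, assuming NumPy is available; the Jordan-type matrix below is a standard defective example chosen for illustration.

```python
# Sketch: algebraic vs. geometric multiplicity for a defective matrix.
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # characteristic polynomial (lambda - 1)^2

# algebraic multiplicity: count how often 1 appears as a root
lam = np.linalg.eigvals(A)
alg_mult = int(np.sum(np.isclose(lam, 1.0)))

# geometric multiplicity: dim N(A - I) = n - rank(A - I)
geo_mult = 2 - np.linalg.matrix_rank(A - np.eye(2))

assert alg_mult == 2 and geo_mult == 1   # geo(1) < alg(1): A is defective
```

Here the sum of geometric multiplicities is $1 < n = 2$, so $\bA$ has no basis of eigenvectors.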
\begin{corollary}[Multiplicity in Similar Matrices\index{Similar matrices}]\label{corollary:multipli-similar-matrix}
Similar matrices have the same algebraic multiplicities and geometric multiplicities.
\end{corollary}
\begin{proof}[of Corollary~\ref{corollary:multipli-similar-matrix}]
In Lemma~\ref{lemma:eigenvalue-similar-matrices} (p.~\pageref{lemma:eigenvalue-similar-matrices}), we proved that the eigenvalues of similar matrices are the same, therefore, the algebraic multiplicities of similar matrices are the same as well.
Suppose $\bA$ and $\bB= \bP\bA\bP^{-1}$ are similar matrices, where $\bP$ is nonsingular, and suppose the geometric multiplicity of an eigenvalue $\lambda$ of $\bA$ is $k$. Then there exists a set of linearly independent vectors $\bv_1, \bv_2, \ldots, \bv_k$ forming a basis for the eigenspace $\nspace(\bA-\lambda\bI)$, such that $\bA\bv_i = \lambda \bv_i$ for all $i\in \{1, 2, \ldots, k\}$. Then the $\bw_i = \bP\bv_i$'s are eigenvectors of $\bB$ associated with the eigenvalue $\lambda$. Furthermore, the $\bw_i$'s are linearly independent since $\bP$ is nonsingular. Thus, the dimension of the eigenspace $\nspace(\bB-\lambda\bI)$ is at least $k$, that is, $dim(\nspace(\bA-\lambda\bI)) \leq dim(\nspace(\bB-\lambda\bI))$.
Similarly, starting from a set of linearly independent vectors $\bw_1, \bw_2, \ldots, \bw_k$ forming a basis for the eigenspace $\nspace(\bB-\lambda\bI)$, the vectors $\bv_i = \bP^{-1}\bw_i$ for all $i \in \{1, 2, \ldots, k\}$ are eigenvectors of $\bA$ associated with $\lambda$. This yields $dim(\nspace(\bB-\lambda\bI)) \leq dim(\nspace(\bA-\lambda\bI))$.
Therefore, by ``sandwiching", we get $dim(\nspace(\bA-\lambda\bI)) = dim(\nspace(\bB-\lambda\bI)) $, which is the equality of the geometric multiplicities, and the claim follows.
\end{proof}
\begin{lemma}[Bounded Geometric Multiplicity]\label{lemma:bounded-geometri}
For any matrix $\bA\in \real^{n\times n}$, its geometric multiplicity is bounded by algebraic multiplicity for any eigenvalue $\lambda_i$:
$$
geo(\lambda_i) \leq alg(\lambda_i).
$$
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:bounded-geometri}]
If we can find a matrix $\bB$ similar to $\bA$ whose characteristic polynomial has a specific factored form, then the proof is complete.
Suppose $\bP_1 = [\bv_1, \bv_2, \ldots, \bv_k]$ contains $k$ linearly independent eigenvectors of $\bA$ associated with $\lambda_i$. That is, the $k$ vectors form a basis for the eigenspace $\nspace(\bA-\lambda_i\bI)$, and the geometric multiplicity of $\lambda_i$ is $k$. We can extend this set to $n$ linearly independent vectors such that
$$
\bP = [\bP_1, \bP_2] = [\bv_1, \bv_2, \ldots, \bv_k, \bv_{k+1}, \ldots, \bv_n],
$$
where $\bP$ is nonsingular. Then $\bA\bP = [\lambda_i\bP_1, \bA\bP_2]$.
Construct a matrix $\bB = \begin{bmatrix}
\lambda_i \bI_k & \bC \\
\bzero & \bD
\end{bmatrix}$, where $\bA\bP_2 = \bP_1\bC + \bP_2\bD$; then $\bP^{-1}\bA\bP = \bB$, so that $\bA$ and $\bB$ are similar matrices. Such $\bC$ and $\bD$ always exist, since the $\bv_i$'s are linearly independent and span the whole space $\real^n$, and thus any column of $\bA\bP_2$ lies in the column space of $\bP=[\bP_1,\bP_2]$.
Therefore,
$$
\begin{aligned}
\det(\bA-\lambda\bI) &= \det(\bP^{-1})\det(\bA-\lambda\bI)\det(\bP) \qquad &(\text{$\det(\bP^{-1}) = 1/\det(\bP)$})\\
&= \det(\bP^{-1}(\bA-\lambda\bI)\bP) \qquad &(\det(\bA)\det(\bB) = \det(\bA\bB))\\
&= \det(\bB-\lambda\bI) \\
&= \det(\begin{bmatrix}
(\lambda_i-\lambda) \bI_k & \bC \\
\bzero & \bD - \lambda \bI
\end{bmatrix})\\
&= (\lambda_i-\lambda)^k \det(\bD-\lambda\bI),
\end{aligned}
$$
where the last equality follows from the fact that the determinant of a block upper triangular matrix $\bM=\begin{bmatrix}
\bA & \bB \\
\bzero & \bD
\end{bmatrix}$ is $\det(\bM) = \det(\bA)\det(\bD)$.
Thus $(\lambda_i-\lambda)^k$ divides the characteristic polynomial, and the factor $\det(\bD-\lambda\bI)$ may contribute further factors of $(\lambda_i-\lambda)$, so the algebraic multiplicity of $\lambda_i$ is at least $k$. This implies
$$
geo(\lambda_i) \leq alg(\lambda_i).
$$
This completes the proof.
\end{proof}
Following from the proof of Lemma~\ref{lemma:eigen-multiplicity} (p.~\pageref{lemma:eigen-multiplicity}), we notice that the algebraic multiplicity and geometric multiplicity coincide for symmetric matrices. Matrices with this property are called simple matrices.
\begin{definition}[Simple Matrix]
When the algebraic multiplicity and geometric multiplicity coincide for every eigenvalue of a matrix, we call it a \textbf{simple matrix}.
\end{definition}
\begin{definition}[Diagonalizable]
A matrix $\bA$ is diagonalizable if there exists a nonsingular matrix $\bP$ and a diagonal matrix $\bD$ such that $\bA = \bP\bD\bP^{-1}$.
\end{definition}
Matrices admitting the eigenvalue decomposition in Theorem~\ref{theorem:eigenvalue-decomposition} (p.~\pageref{theorem:eigenvalue-decomposition}) or the spectral decomposition in Theorem~\ref{theorem:spectral_theorem} (p.~\pageref{theorem:spectral_theorem}) are examples of diagonalizable matrices.
\begin{lemma}[Simple Matrices are Diagonalizable]\label{lemma:simple-diagonalizable}
A matrix is a simple matrix if and only if it is diagonalizable.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:simple-diagonalizable}]
We prove the forward and backward implications separately as follows.
\paragraph{Forward implication} Suppose that $\bA\in \real^{n\times n}$ is a simple matrix, so that the algebraic and geometric multiplicities for each eigenvalue are equal. For a specific eigenvalue $\lambda_i$, let $\{\bv_1^i, \bv_2^i, \ldots, \bv_{k_i}^i\}$ be a basis for the eigenspace $\nspace(\bA - \lambda_i\bI)$; that is, $\{\bv_1^i, \bv_2^i, \ldots, \bv_{k_i}^i\}$ is a set of linearly independent eigenvectors of $\bA$ associated with $\lambda_i$, where ${k_i}$ is the algebraic (or geometric) multiplicity associated with $\lambda_i$: $alg(\lambda_i)=geo(\lambda_i)=k_i$. Suppose there are $m$ distinct eigenvalues; since $k_1+k_2+\ldots +k_m = n$, the union of these bases consists of $n$ vectors. Suppose there is a set of scalars $x_j^i$ such that
\begin{equation}\label{equation:proof-simple-diagonalize}
\bz = \sum_{j=1}^{k_1} x_j^1 \bv_j^1+ \sum_{j=1}^{k_2} x_j^2 \bv_j^2 + \ldots \sum_{j=1}^{k_m} x_j^m \bv_j^m = \bzero.
\end{equation}
Let $\bw^i = \sum_{j=1}^{k_i} x_j^i \bv_j^i$. Then each $\bw^i$ is either an eigenvector associated with $\lambda_i$ or the zero vector, so $\bz = \sum_{i=1}^{m} \bw^i$ is a sum of vectors, each of which is either zero or an eigenvector associated with a distinct eigenvalue of $\bA$. Since eigenvectors associated with different eigenvalues are linearly independent, we must have $\bw^i = \bzero$ for all $i\in \{1, 2, \ldots, m\}$. That is,
$$
\bw^i = \sum_{j=1}^{k_i} x_j^i \bv_j^i = \bzero, \qquad \text{for all $i\in \{1, 2, \ldots, m\}$}.
$$
Since we assume the eigenvectors $\bv_j^i$'s associated with $\lambda_i$ are linearly independent, we must have $x_j^i=0$ for all $i \in \{1,2,\ldots, m\}$ and $j\in \{1,2,\ldots,k_i\}$. Thus, the $n$ vectors are linearly independent:
$$
\{\bv_1^1, \bv_2^1, \ldots, \bv_{k_1}^1\},\ \{\bv_1^2, \bv_2^2, \ldots, \bv_{k_2}^2\},\ \ldots,\ \{\bv_1^m, \bv_2^m, \ldots, \bv_{k_m}^m\}.
$$
By the eigenvalue decomposition in Theorem~\ref{theorem:eigenvalue-decomposition} (p.~\pageref{theorem:eigenvalue-decomposition}), the matrix $\bA$ is therefore diagonalizable.
\paragraph{Backward implication} Suppose $\bA$ is diagonalizable, i.e., there exists a nonsingular matrix $\bP$ and a diagonal matrix $\bD$ such that $\bA =\bP\bD\bP^{-1}$. Then $\bA$ and $\bD$ are similar matrices, so they have the same eigenvalues (Lemma~\ref{lemma:eigenvalue-similar-matrices}, p.~\pageref{lemma:eigenvalue-similar-matrices}) and the same algebraic and geometric multiplicities (Corollary~\ref{corollary:multipli-similar-matrix}, p.~\pageref{corollary:multipli-similar-matrix}). It can be easily verified that a diagonal matrix has equal algebraic and geometric multiplicities for every eigenvalue, so $\bA$ is a simple matrix.
\end{proof}
\begin{remark}[Equivalence on Diagonalization]
Theorem~\ref{theorem:independent-eigenvector-theorem} (p.~\pageref{theorem:independent-eigenvector-theorem}) states that eigenvectors corresponding to different eigenvalues are linearly independent, and Remark~\ref{remark:geometric-mul-meaning} (p.~\pageref{remark:geometric-mul-meaning}) states that the geometric multiplicity is the dimension of the eigenspace. We thus realize that if the geometric multiplicity equals the algebraic multiplicity for every eigenvalue of $\bA\in \real^{n\times n}$, the eigenspaces together span the whole space $\real^n$. So the above lemma is equivalent to the claim that if the eigenspaces span the whole space $\real^n$, then $\bA$ is diagonalizable.
\end{remark}
\begin{corollary}
A square matrix $\bA\in \real^{n\times n}$ with $n$ linearly independent eigenvectors is a simple matrix. In particular, if $\bA$ is symmetric, it is a simple matrix.
\end{corollary}
From the eigenvalue decomposition in Theorem~\ref{theorem:eigenvalue-decomposition} (p.~\pageref{theorem:eigenvalue-decomposition}) and the spectral decomposition in Theorem~\ref{theorem:spectral_theorem} (p.~\pageref{theorem:spectral_theorem}), the proof of the above corollary is trivial.
Now we are ready to show the second form of the spectral decomposition.
\begin{theoremHigh}[Spectral Decomposition: The Second Form]\label{theorem:spectral_theorem_secondForm}
A \textbf{simple matrix} $\bA \in \real^{n\times n}$ can be factored as a sum of a set of idempotent matrices
$$
\bA = \sum_{i=1}^{n} \lambda_i \bA_i,
$$
where the $\lambda_i$'s for all $i\in \{1,2,\ldots, n\}$ are eigenvalues of $\bA$ (duplicates possible), also known as the \textit{spectral values} of $\bA$. Specifically, we have the following properties:
1. Idempotent: $\bA_i^2 = \bA_i$ for all $i\in \{1,2,\ldots, n\}$;
2. Orthogonal: $\bA_i\bA_j = \bzero$ for all $i \neq j$;
3. Additivity: $\sum_{i=1}^{n} \bA_i = \bI_n$;
4. Rank-Additivity: $rank(\bA_1) + rank(\bA_2) + \ldots + rank(\bA_n) = n$.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:spectral_theorem_secondForm}]
Since $\bA$ is a simple matrix, from Lemma~\ref{lemma:simple-diagonalizable} (p.~\pageref{lemma:simple-diagonalizable}), there exists a nonsingular matrix $\bP$ and a diagonal matrix $\bLambda$ such that $\bA=\bP\bLambda\bP^{-1}$ where $\bLambda=\diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$, and $\lambda_i$'s are eigenvalues of $\bA$ and columns of $\bP$ are eigenvectors of $\bA$. Suppose
$$
\bP = \begin{bmatrix}
\bv_1 & \bv_2&\ldots & \bv_n
\end{bmatrix}
\qquad
\text{and }
\qquad
\bP^{-1} =
\begin{bmatrix}
\bw_1^\top \\
\bw_2^\top \\
\vdots \\
\bw_n^\top
\end{bmatrix}
$$
are the column and row partitions of $\bP$ and $\bP^{-1}$ respectively. Then, we have
$$
\bA= \bP\bLambda\bP^{-1} =
\begin{bmatrix}
\bv_1 & \bv_2&\ldots & \bv_n
\end{bmatrix}
\bLambda
\begin{bmatrix}
\bw_1^\top \\
\bw_2^\top \\
\vdots \\
\bw_n^\top
\end{bmatrix}=
\sum_{i=1}^{n}\lambda_i \bv_i\bw_i^\top.
$$
Letting $\bA_i = \bv_i\bw_i^\top$, we have $\bA = \sum_{i=1}^{n} \lambda_i \bA_i$.
We realize that $\bP^{-1}\bP = \bI$ such that
$$
\left\{
\begin{aligned}
&\bw_i^\top\bv_j = 1 ,& \mathrm{\,\,if\,\,} i = j. \\
&\bw_i^\top\bv_j = 0 ,& \mathrm{\,\,if\,\,} i \neq j.
\end{aligned}
\right.
$$
Therefore,
$$
\bA_i\bA_j =\bv_i\bw_i^\top\bv_j\bw_j^\top = \left\{
\begin{aligned}
&\bv_i\bw_i^\top = \bA_i ,& \mathrm{\,\,if\,\,} i = j. \\
& \bzero ,& \mathrm{\,\,if\,\,} i \neq j.
\end{aligned}
\right.
$$
This implies the idempotency and orthogonality of the $\bA_i$'s. We also notice that $\sum_{i=1}^{n}\bA_i = \bP\bP^{-1}=\bI$, which is the additivity of the $\bA_i$'s. The rank-additivity of the $\bA_i$'s is trivial since $rank(\bA_i)=1$ for all $i\in \{1,2,\ldots, n\}$.
\end{proof}
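The construction in this proof can be sketched numerically, assuming NumPy is available; the small matrix below, with two distinct real eigenvalues, is an arbitrary example chosen for illustration.

```python
# Sketch: the second form A = sum_i lambda_i A_i with A_i = v_i w_i^T,
# where v_i are columns of P and w_i^T are rows of P^{-1}.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # eigenvalues 5 and 2

lam, P = np.linalg.eig(A)           # columns of P: eigenvectors v_i
W = np.linalg.inv(P)                # rows of P^{-1}: w_i^T

# rank-one terms A_i = v_i w_i^T
A_parts = [np.outer(P[:, i], W[i, :]) for i in range(2)]

recon = sum(l * Ai for l, Ai in zip(lam, A_parts))
assert np.allclose(recon, A)                              # A = sum lambda_i A_i
assert np.allclose(A_parts[0] @ A_parts[0], A_parts[0])   # idempotent
assert np.allclose(A_parts[0] @ A_parts[1], 0)            # orthogonal
assert np.allclose(sum(A_parts), np.eye(2))               # additivity
```

The orthogonality of the $\bA_i$'s comes exactly from $\bw_i^\top\bv_j = 0$ for $i\neq j$, i.e., from $\bP^{-1}\bP=\bI$.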
This decomposition is highly related to Cochran's theorem and its application in the distribution theory of linear models \citep{lu2021numerical,lu2021rigorous}.
\begin{theoremHigh}[Spectral Decomposition: The Third Form]\label{Corollary:spectral_theorem_3Form}
A \textbf{simple matrix} $\bA \in \real^{n\times n}$ \textcolor{blue}{with $k$ distinct eigenvalues} can be factored as a sum of a set of idempotent matrices
$$
\bA = \sum_{i=1}^{\textcolor{blue}{k}} \lambda_i \bA_i,
$$
where $\lambda_i$ for all $i\in \{1,2,\ldots, \textcolor{blue}{k}\}$ are the distinct eigenvalues of $\bA$, and also known as the \textbf{spectral values} of $\bA$. Specifically, we have the following properties:
1. Idempotent: $\bA_i^2 = \bA_i$ for all $i\in \{1,2,\ldots, \textcolor{blue}{k}\}$;
2. Orthogonal: $\bA_i\bA_j = \bzero$ for all $i \neq j$;
3. Additivity: $\sum_{i=1}^{\textcolor{blue}{k}} \bA_i = \bI_n$;
4. Rank-Additivity: $rank(\bA_1) + rank(\bA_2) + \ldots + rank(\bA_{\textcolor{blue}{k}}) = n$.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{Corollary:spectral_theorem_3Form}]
From Theorem~\ref{theorem:spectral_theorem_secondForm}, we can decompose $\bA$ as $\bA =\sum_{j=1}^{n} \beta_j \bB_j$. Without loss of generality,
the eigenvalues $\beta_j$'s are ordered such that $\beta_1 \leq \beta_2 \leq \ldots \leq \beta_n$, where duplicates are possible. Let the $\lambda_i$'s be the distinct eigenvalues, and let $\bA_i$ be the sum of the $\bB_j$'s associated with $\lambda_i$.
Suppose the multiplicity of $\lambda_i$ is $m_i$, and the $\bB_j$'s associated with $\lambda_i$ can be denoted as $\{\bB_{1}^i, \bB_{2}^i, \ldots, \bB_{m_i}^i\}$. Then $\bA_i$ can be denoted as $\bA_i = \sum_{j=1}^{m_i} \bB_{j}^i$. Apparently $\bA = \sum_{i=1}^{k} \lambda_i \bA_i$.
\paragraph{Idempotency} $\bA_i^2 = (\bB_1^i + \bB_2^i+\ldots +\bB_{m_i}^i)(\bB_1^i + \bB_2^i+\ldots +\bB_{m_i}^i)= \bB_1^i + \bB_2^i+\ldots +\bB_{m_i}^i = \bA_i$, which follows from the idempotency and orthogonality of the $\bB_j^i$'s.
\paragraph{Orthogonality} $\bA_i\bA_j = (\bB_1^i + \bB_2^i+\ldots +\bB_{m_i}^i)(\bB_1^j + \bB_2^j+\ldots +\bB_{m_j}^j)=\bzero$ for $i\neq j$, which follows from the orthogonality of the $\bB_j^i$'s.
\paragraph{Additivity} It is trivial that $\sum_{i=1}^{k} \bA_i = \bI_n$.
\paragraph{Rank-Additivity} $rank(\bA_i ) = rank(\sum_{j=1}^{m_i} \bB_{j}^i) = m_i$ such that $rank(\bA_1) + rank(\bA_2) + \ldots + rank(\bA_{k}) = m_1+m_2+\ldots+m_k=n$.
\end{proof}
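The grouping of rank-one terms into eigenspace projectors can be sketched numerically, assuming NumPy is available; the diagonal matrix with a repeated eigenvalue is an arbitrary example chosen so that $k < n$.

```python
# Sketch: the third form A = sum over k DISTINCT eigenvalues, where each A_i
# is the projector onto the eigenspace of lambda_i (rank = multiplicity).
import numpy as np

A = np.diag([2.0, 2.0, 5.0])        # eigenvalue 2 (multiplicity 2) and 5
lam, Q = np.linalg.eigh(A)          # orthonormal eigenvectors for symmetric A

# group rank-one projectors q_i q_i^T by distinct eigenvalue
projectors = {}
for i, l in enumerate(lam):
    key = float(round(l, 8))        # round to merge numerically equal eigenvalues
    projectors[key] = projectors.get(key, 0) + np.outer(Q[:, i], Q[:, i])

A2, A5 = projectors[2.0], projectors[5.0]
assert np.allclose(2.0 * A2 + 5.0 * A5, A)                  # A = sum lambda_i A_i
assert np.allclose(A2 @ A2, A2) and np.allclose(A2 @ A5, 0) # idempotent, orthogonal
assert np.linalg.matrix_rank(A2) + np.linalg.matrix_rank(A5) == 3  # rank-additivity
```

Note that $rank(\bA_i)$ now equals the multiplicity $m_i$ rather than 1, matching the rank-additivity property.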
\begin{theoremHigh}[Spectral Decomposition: Backward Implication]\label{Corollary:spectral_theorem_4Form}
If a matrix $\bA \in \real^{n\times n}$ with $k$ distinct eigenvalues can be factored as a sum of a set of idempotent matrices
$$
\bA = \sum_{i=1}^{k} \lambda_i \bA_i,
$$
where $\lambda_i$ for all $i\in \{1,2,\ldots, k\}$ are the distinct eigenvalues of $\bA$, and
1. Idempotent: $\bA_i^2 = \bA_i$ for all $i\in \{1,2,\ldots, k\}$;
2. Orthogonal: $\bA_i\bA_j = \bzero$ for all $i \neq j$;
3. Additivity: $\sum_{i=1}^{k} \bA_i = \bI_n$;
4. Rank-Additivity: $rank(\bA_1) + rank(\bA_2) + \ldots + rank(\bA_{k}) = n$.
Then, the matrix $\bA$ is a simple matrix.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{Corollary:spectral_theorem_4Form}]
Suppose $rank(\bA_i) = r_i$ for all $i \in \{1,2,\ldots, k\}$. By ULV decomposition in Theorem~\ref{theorem:ulv-decomposition} (p.~\pageref{theorem:ulv-decomposition}), $\bA_i$ can be factored as
$$
\bA_i = \bU_i \begin{bmatrix}
\bL_i & \bzero \\
\bzero & \bzero
\end{bmatrix}\bV_i,
$$
where $\bL_i \in \real^{r_i \times r_i}$, $\bU_i \in \real^{n \times n}$, and $\bV_i\in \real^{n \times n}$ are orthogonal matrices. Let
$$
\bX_i =
\bU_i \begin{bmatrix}
\bL_i \\
\bzero
\end{bmatrix}
\qquad
\text{and}
\qquad
\bV_i =
\begin{bmatrix}
\bY_i \\
\bZ_i
\end{bmatrix},
$$
where $\bX_i \in \real^{n\times r_i}$, and $\bY_i \in \real^{r_i \times n}$ is the first $r_i$ rows of $\bV_i$. Then, we have
$$
\bA_i = \bX_i \bY_i.
$$
This can be seen as a \textbf{reduced} ULV decomposition of $\bA_i$. Appending the $\bX_i$'s and $\bY_i$'s into $\bX$ and $\bY$,
$$
\bX = [\bX_1, \bX_2, \ldots, \bX_k],
\qquad
\bY =
\begin{bmatrix}
\bY_1\\
\bY_2\\
\vdots \\
\bY_k
\end{bmatrix},
$$
where $\bX\in \real^{n\times n}$ and $\bY\in \real^{n\times n}$ (from rank-additivity). By block matrix multiplication and the additivity of $\bA_i$'s, we have
$$
\bX\bY = \sum_{i=1}^{k} \bX_i\bY_i = \sum_{i=1}^{k} \bA_i = \bI.
$$
Therefore $\bY$ is the inverse of $\bX$, and
$$
\bY\bX =
\begin{bmatrix}
\bY_1\\
\bY_2\\
\vdots \\
\bY_k
\end{bmatrix}
[\bX_1, \bX_2, \ldots, \bX_k]
=
\begin{bmatrix}
\bY_1\bX_1 & \bY_1\bX_2 & \ldots & \bY_1\bX_k\\
\bY_2\bX_1 & \bY_2\bX_2 & \ldots & \bY_2\bX_k\\
\vdots & \vdots & \ddots & \vdots\\
\bY_k\bX_1 & \bY_k\bX_2 & \ldots & \bY_k\bX_k\\
\end{bmatrix}
=\bI,
$$
such that
$$
\bY_i\bX_j = \left\{
\begin{aligned}
&\bI_{r_i} ,& \mathrm{\,\,if\,\,} i = j; \\
& \bzero ,& \mathrm{\,\,if\,\,} i \neq j.
\end{aligned}
\right.
$$
This implies
$$
\bA_i\bX_j = \left\{
\begin{aligned}
&\bX_i ,& \mathrm{\,\,if\,\,} i = j; \\
& \bzero ,& \mathrm{\,\,if\,\,} i \neq j,
\end{aligned}
\right.
\qquad
\text{and}
\qquad
\bA \bX_i = \lambda_i\bX_i.
$$
Finally, we have
$$
\begin{aligned}
\bA\bX &= \bA[\bX_1, \bX_2, \ldots, \bX_k] = [\lambda_1\bX_1, \lambda_2\bX_2, \ldots, \lambda_k\bX_k] = \bX\bLambda,
\end{aligned}
$$
where
$$\bLambda =
\begin{bmatrix}
\lambda_1 \bI_{r_1} & \bzero & \ldots & \bzero \\
\bzero & \lambda_2 \bI_{r_2} & \ldots & \bzero \\
\vdots & \vdots & \ddots & \vdots \\
\bzero & \bzero & \ldots & \lambda_k \bI_{r_k} \\
\end{bmatrix}
$$
is a diagonal matrix. This implies $\bA$ can be diagonalized and from Lemma~\ref{lemma:simple-diagonalizable} (p.~\pageref{lemma:simple-diagonalizable}), $\bA$ is a simple matrix.
\end{proof}
\begin{corollary}[Forward and Backward Spectral]
Combining Theorem~\ref{Corollary:spectral_theorem_3Form} and Theorem~\ref{Corollary:spectral_theorem_4Form}, we can claim that a matrix $\bA \in \real^{n\times n}$ is a simple matrix with $k$ distinct eigenvalues if and only if it can be factored as a sum of a set of idempotent matrices
$$
\bA = \sum_{i=1}^{k} \lambda_i \bA_i,
$$
where $\lambda_i$ for all $i\in \{1,2,\ldots, k\}$ are the distinct eigenvalues of $\bA$, and
1. Idempotent: $\bA_i^2 = \bA_i$ for all $i\in \{1,2,\ldots, k\}$;
2. Orthogonal: $\bA_i\bA_j = \bzero$ for all $i \neq j$;
3. Additivity: $\sum_{i=1}^{k} \bA_i = \bI_n$;
4. Rank-Additivity: $rank(\bA_1) + rank(\bA_2) + \ldots + rank(\bA_{k}) = n$.
\end{corollary}
\section{Skew-Symmetric Matrix and its Properties*}
We have introduced the spectral decomposition for symmetric matrices. A special kind of matrix related to the symmetric ones is the \textit{skew-symmetric matrix}.
\begin{definition}[Skew-Symmetric Matrix\index{Skew-symmetric matrix}]
If matrix $\bA\in \real^{n\times n}$ has the following property, then it is known as a \textbf{skew-symmetric matrix}:
$$
\bA^\top = -\bA.
$$
Note that under this definition, the diagonal values satisfy $a_{ii} = -a_{ii}$ for all $i \in \{1,2,\ldots, n\}$, which implies all the diagonal components are 0.
\end{definition}
We have proved in Lemma~\ref{lemma:real-eigenvalues-spectral} (p.~\pageref{lemma:real-eigenvalues-spectral}) that all the eigenvalues of symmetric matrices are real. Similarly, we can show that all the eigenvalues of skew-symmetric matrices are purely imaginary or zero.
\begin{lemma}[Imaginary Eigenvalues]\label{lemma:real-eigenvalues-spectral-skew}
The eigenvalues of any skew-symmetric matrix are all imaginary or zero.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:real-eigenvalues-spectral-skew}]
Suppose the eigenvalue $\lambda$ is a complex number $\lambda=a+ib$, where $a,b$ are real; its complex conjugate is $\bar{\lambda}=a-ib$. Similarly, write the complex eigenvector as $\bx = \bc+i\bd$ with complex conjugate $\bar{\bx}=\bc-i\bd$, where $\bc, \bd$ are real vectors. We then have the following chain of implications:
$$
\bA \bx = \lambda \bx\qquad \underrightarrow{\text{ leads to }}\qquad \bA \bar{\bx} = \bar{\lambda} \bar{\bx}\qquad \underrightarrow{\text{ transpose to }}\qquad \bar{\bx}^\top \bA^\top =\bar{\lambda} \bar{\bx}^\top.
$$
We premultiply the first equation by $\bar{\bx}^\top$ and postmultiply the last equation by $\bx$:
$$
\bar{\bx}^\top \bA \bx = \lambda \bar{\bx}^\top \bx, \qquad \text{and } \qquad \bar{\bx}^\top \bA^\top \bx = \bar{\lambda}\bar{\bx}^\top \bx.
$$
Then we have the equality $-\lambda\bar{\bx}^\top \bx = \bar{\lambda} \bar{\bx}^\top\bx$ (since $\bA^\top=-\bA$). Note that $\bar{\bx}^\top\bx = (\bc-i\bd)^\top(\bc+i\bd) = \bc^\top\bc+\bd^\top\bd$ is a positive real number, so $-\lambda = \bar{\lambda}$, i.e., $-(a+ib) = a-ib$. Therefore the real part $a$ of $\lambda$ is zero, and $\lambda$ is either purely imaginary or zero.
\end{proof}
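This lemma is easy to check numerically; the following is a minimal sketch, assuming NumPy is available, with a random skew-symmetric matrix built as $\bM-\bM^\top$ (an arbitrary construction, not from the text).

```python
# Sketch: eigenvalues of a skew-symmetric matrix have zero real part.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T                        # A^T = -A by construction

assert np.allclose(A.T, -A)        # skew-symmetry
lam = np.linalg.eigvals(A)
assert np.allclose(lam.real, 0)    # purely imaginary (or zero) eigenvalues
```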
\begin{lemma}[Odd Skew-Symmetric Determinant]\label{lemma:skew-symmetric-determinant}
Given skew-symmetric matrix $\bA\in \real^{n\times n}$, if $n$ is odd, then $\det(\bA)=0$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:skew-symmetric-determinant}]
When $n$ is odd, we have
$$
\det(\bA) = \det(\bA^\top) = \det(-\bA) = (-1)^n \det(\bA) = -\det(\bA).
$$
This implies $\det(\bA)=0$.
\end{proof}
\begin{theoremHigh}[Block-Diagonalization of Skew-Symmetric Matrices]\label{theorem:skew-block-diagonalization_theorem}
A real skew-symmetric matrix $\bA \in \real^{n\times n}$ can be factored as
\begin{equation*}
\bA = \bZ \bD \bZ^\top,
\end{equation*}
where $\bZ$ is an $n\times n$ nonsingular matrix, and $\bD$ is a block-diagonal matrix with the following form
$$
\bD =
\diag\left(\begin{bmatrix}
0 & 1 \\
-1 & 0
\end{bmatrix},
\ldots,
\begin{bmatrix}
0 & 1 \\
-1 & 0
\end{bmatrix},
0, \ldots, 0\right).
$$
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:skew-block-diagonalization_theorem}]
We prove this by a recursive construction.
As usual, we will denote the entry ($i,j$) of matrix $\bA$ by $\bA_{ij}$.
\paragraph{Case 1).} Suppose the first row of $\bA$ is nonzero. We notice that $\bE\bA\bE^\top$ is skew-symmetric whenever $\bA$ is skew-symmetric, for any matrix $\bE$; hence the diagonals of both $\bA$ and $\bE\bA\bE^\top$ are zero, and the upper-left $2\times 2$ submatrix of $\bE\bA\bE^\top$ has the following form
$$
(\bE\bA\bE^\top)_{1:2,1:2}=
\begin{bmatrix}
0 & x \\
-x & 0
\end{bmatrix}.
$$
Since the first row of $\bA$ is nonzero, there exists a permutation matrix $\bP$ (Definition~\ref{definition:permutation-matrix}, p.~\pageref{definition:permutation-matrix}) that moves a nonzero value, say $a$, in the first row to the $(1,2)$ entry of $\bP\bA\bP^\top$. As discussed above, the upper-left $2\times 2$ submatrix of $\bP\bA\bP^\top$ then has the following form
$$
(\bP\bA\bP^\top)_{1:2,1:2}=
\begin{bmatrix}
0 & a \\
-a & 0
\end{bmatrix}.
$$
Construct a nonsingular matrix $\bM = \begin{bmatrix}
1/a & \bzero \\
\bzero & \bI_{n-1}
\end{bmatrix}$ such that the upper left $2\times 2$ submatrix of $\bM\bP\bA\bP^\top\bM^\top$ has the following form
$$
(\bM\bP\bA\bP^\top\bM^\top)_{1:2,1:2}=
\begin{bmatrix}
0 & 1 \\
-1 & 0
\end{bmatrix}.
$$
Now the upper-left $2\times 2$ block is in the desired form. Suppose $(\bM\bP\bA\bP^\top\bM^\top)$ still has a nonzero value, say $b$, in the first row at entry $(1,j)$ for some $j>2$. We can construct a nonsingular matrix $\bL = \bI - b\cdot\bE_{j2}$, where $\bE_{j2}$ is an all-zero matrix except that entry $(j,2)$ is 1, such that $(\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top)$ introduces a 0 at the entry with value $b$.
\begin{mdframed}[hidealllines=\mdframehideline,backgroundcolor=\mdframecolorSkip,frametitle={A Trivial Example}]
For example, suppose $\bM\bP\bA\bP^\top\bM^\top$ is a $3\times 3$ matrix with the following value
$$
\bM\bP\bA\bP^\top\bM^\top =
\begin{bmatrix}
0 & 1 & b \\
-1 & 0 & \times \\
\times & \times & 0
\end{bmatrix},
\qquad \text{and}\qquad
\bL =\bI - b\cdot\bE_{j2}=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 &-b & 1
\end{bmatrix},
$$
where $j=3$ for this specific example. This results in
$$
\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 &-b & 1
\end{bmatrix}
\begin{bmatrix}
0 & 1 & \textcolor{blue}{b} \\
-1 & 0 & \times \\
\times & \times & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & -b \\
0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
0 & 1 & \textcolor{blue}{0} \\
-1 & 0 & \times \\
\times & \times & 0
\end{bmatrix}.
$$
\end{mdframed}
Similarly, if the second row of $\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top$ contains a nonzero value, say $c$, at entry $(2,j)$ for some $j>2$, we can construct a nonsingular matrix $\bK = \bI+c\cdot \bE_{j1}$ such that $\bK\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top\bK^\top$ introduces a 0 at the entry with value $c$.
\begin{mdframed}[hidealllines=\mdframehideline,backgroundcolor=\mdframecolorSkip,frametitle={A Trivial Example}]
For example, suppose $\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top$ is a $3\times 3$ matrix with the following value
$$
\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top =
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & c \\
\times & \times & 0
\end{bmatrix},
\qquad \text{and}\qquad
\bK =\bI + c\cdot\bE_{j1}=
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
c &0 & 1
\end{bmatrix},
$$
where $j=3$ for this specific example. This results in
$$
\bK\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top\bK^\top =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
c & 0 & 1
\end{bmatrix}
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & \textcolor{blue}{c} \\
\times & \times & 0
\end{bmatrix}
\begin{bmatrix}
1 & 0 & c \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
=
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & \textcolor{blue}{0} \\
\times & \times & 0
\end{bmatrix}.
$$
Since we have shown that $\bK\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top\bK^\top$ is also skew-symmetric, then, it is actually
$$
\bK\bL\bM\bP\bA\bP^\top\bM^\top\bL^\top\bK^\top=
\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & \textcolor{blue}{0} \\
\textcolor{red}{0} & \textcolor{red}{0} & 0
\end{bmatrix},
$$
so that we do not need to tackle the first two rows and columns any further.
\end{mdframed}
Applying this process recursively to the bottom-right $(n-2)\times(n-2)$ submatrix completes the proof.
\paragraph{Case 2).} Suppose the first row of $\bA$ is zero. Then we can use a permutation matrix to move the first row (and the corresponding column) to the last position, and apply the process in Case 1 to finish the proof.
\end{proof}
From the block-diagonalization of skew-symmetric matrices above, we can easily see that the rank of a skew-symmetric matrix is even. And we can prove that the determinant of a skew-symmetric matrix of even order is nonnegative as follows.
\begin{lemma}[Even Skew-Symmetric Determinant]\label{lemma:skew-symmetric-determinant-even}
For skew-symmetric matrix $\bA\in \real^{n\times n}$, if $n$ is even, then $\det(\bA)\geq 0$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:skew-symmetric-determinant-even}]
By Theorem~\ref{theorem:skew-block-diagonalization_theorem}, we could block-diagonalize $\bA = \bZ\bD\bZ^\top$ such that
$$
\det(\bA) = \det(\bZ\bD\bZ^\top) = \det(\bZ)^2 \det(\bD) \geq 0.
$$
This completes the proof.
\end{proof}
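Both determinant lemmas can be checked on random examples; the following is a minimal sketch, assuming NumPy is available, with skew-symmetric matrices built as $\bM-\bM^\top$ (an arbitrary construction for illustration).

```python
# Sketch: det(A) = 0 for odd-order skew-symmetric A; det(A) >= 0 for even order.
import numpy as np

rng = np.random.default_rng(1)

M3 = rng.standard_normal((3, 3))
A3 = M3 - M3.T                            # odd order (n = 3)
det3 = np.linalg.det(A3)
assert np.isclose(det3, 0.0)              # Lemma: odd skew-symmetric determinant

M4 = rng.standard_normal((4, 4))
A4 = M4 - M4.T                            # even order (n = 4)
det4 = np.linalg.det(A4)
assert det4 >= 0.0                        # Lemma: even skew-symmetric determinant
```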
\section{Applications}
\subsection{Application: Eigenvalue of Projection Matrix}
In Section~\ref{section:application-ls-qr} (p.~\pageref{section:application-ls-qr}), we will show that the QR decomposition can be applied to solve the least squares problem, where we consider the overdetermined system $\bA\bx = \bb$ with $\bA\in \real^{m\times n}$ being the data matrix and $\bb\in \real^m$, $m\geq n$, being the observation vector. Normally, $\bA$ has full column rank, since the columns of real-world data matrices are unlikely to be linearly dependent. The least squares solution, minimizing $||\bA\bx-\bb||^2$, is given by $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$, where $\bA^\top\bA$ is invertible since $\bA$ has full column rank and $rank(\bA^\top\bA)=rank(\bA)$. The recovered observation vector is then $\hat{\bb} = \bA\bx_{LS} = \bA(\bA^\top\bA)^{-1}\bA^\top\bb$. While $\bb$ may not lie in the column space of $\bA$, the recovered $\hat{\bb}$ does. We thus call the matrix $\bH=\bA(\bA^\top\bA)^{-1}\bA^\top$ a projection matrix, i.e., it projects $\bb$ onto the column space of $\bA$. It is also known as the hat matrix, since it puts a hat on $\bb$. It can be easily verified that the projection matrix is \textit{symmetric and idempotent} (i.e., $\bH=\bH^\top$ and $\bH^2=\bH$).
\index{Projection matrix}
\index{Idempotent}
\index{Idempotency}
\begin{remark}[Column Space of Projection Matrix]
We notice that the hat matrix $\bH = \bA(\bA^\top\bA)^{-1}\bA^\top$ projects any vector in $\real^m$ onto the column space of $\bA$. That is, $\bH\by \in \cspace(\bA)$. Notice again that $\bH\by$ is nothing but a combination of the columns of $\bH$; thus $\cspace(\bH) = \cspace(\bA)$.
In general, for any projection matrix $\bH$ that projects vectors onto a subspace $\mathcalV$, we have $\cspace(\bH) = \mathcalV$. More formally, this property can be proved via the singular value decomposition (Section~\ref{section:SVD}, p.~\pageref{section:SVD}).
\end{remark}
We now show that any projection matrix has specific eigenvalues.
\begin{proposition}[Eigenvalue of Projection Matrix]\label{proposition:eigen-of-projection-matrix}
The only possible eigenvalues of a projection matrix are 0 and 1.
\end{proposition}
\begin{proof}[of Proposition~\ref{proposition:eigen-of-projection-matrix}]
Since $\bH$ is symmetric, we have spectral decomposition $\bH =\bQ\bLambda\bQ^\top$. From the idempotent property, we have
$$
\begin{aligned}
(\bQ\bLambda\bQ^\top)^2 &= \bQ\bLambda\bQ^\top \\
\bQ\bLambda^2\bQ^\top &= \bQ\bLambda\bQ^\top \\
\bLambda^2 &=\bLambda \\
\lambda_i^2 &=\lambda_i.
\end{aligned}
$$
Therefore, the only possible eigenvalues for $\bH$ are 0 and 1.
\end{proof}
This property of the projection matrix is important for the analysis of distribution theory for linear models. See \citet{lu2021rigorous} for more details.
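The proposition is easy to verify numerically; the following is a minimal sketch, assuming NumPy is available, with a random full-column-rank data matrix as the example.

```python
# Sketch: the hat matrix H = A (A^T A)^{-1} A^T has eigenvalues 0 and 1 only.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))              # full column rank (with probability 1)
H = A @ np.linalg.inv(A.T @ A) @ A.T         # projection onto C(A)

assert np.allclose(H, H.T)                   # symmetric
assert np.allclose(H @ H, H)                 # idempotent
lam = np.linalg.eigvalsh(H)
assert np.all(np.isclose(lam, 0) | np.isclose(lam, 1))   # only 0/1 eigenvalues
assert np.isclose(np.sum(lam), 3)            # number of 1's = rank(A)
```

The trace check reflects that the eigenvalue 1 appears once per dimension of $\cspace(\bA)$.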
Following the eigenvalues of the projection matrix, we can also characterize the perpendicular projection $\bI-\bH$.
\begin{proposition}[Project onto $\mathcalV^\perp$]\label{proposition:orthogonal-projection_tmp}
Let $\mathcalV$ be a subspace and $\bH$ be a projection onto $\mathcalV$. Then $\bI-\bH$ is the projection matrix onto $\mathcalV^\perp$.
\end{proposition}
\begin{proof}[of Proposition~\ref{proposition:orthogonal-projection_tmp}]
First, $(\bI-\bH)$ is symmetric: $(\bI-\bH)^\top = \bI - \bH^\top = \bI-\bH$ since $\bH$ is symmetric. And
$$
(\bI-\bH)^2 = \bI^2 -\bI\bH -\bH\bI +\bH^2 = \bI-\bH,
$$
i.e., $\bI-\bH$ is idempotent.
Thus $\bI-\bH$ is a projection matrix. By spectral theorem again, let $\bH =\bQ\bLambda\bQ^\top$. Then $\bI-\bH = \bQ\bQ^\top - \bQ\bLambda\bQ^\top = \bQ(\bI-\bLambda)\bQ^\top$. Hence the column space of $\bI-\bH$ is spanned by the eigenvectors of $\bH$ corresponding to the zero eigenvalues of $\bH$ (by Proposition~\ref{proposition:eigen-of-projection-matrix}, p.~\pageref{proposition:eigen-of-projection-matrix}), which coincides with $\mathcalV^\perp$.
\end{proof}
Again, for a detailed analysis of the origin of the projection matrix and the results behind it, we recommend that readers refer to \citet{lu2021numerical}, although this is not the main interest of matrix decomposition results.
\subsection{Application: An Alternative Definition of PD and PSD of Matrices}\label{section:equivalent-pd-psd}
In Definition~\ref{definition:psd-pd-defini} (p.~\pageref{definition:psd-pd-defini}), we defined positive definite and positive semidefinite matrices via their quadratic forms. We now prove that a symmetric matrix is positive definite if and only if all of its eigenvalues are positive.
\begin{lemma}[Eigenvalues of PD and PSD Matrices\index{Positive definite}\index{Positive semidefinite}]\label{lemma:eigens-of-PD-psd}
A matrix $\bA\in \real^{n\times n}$ is positive definite (PD) if and only if $\bA$ has only positive eigenvalues.
And a matrix $\bA\in \real^{n\times n}$ is positive semidefinite (PSD) if and only if $\bA$ has only nonnegative eigenvalues.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:eigens-of-PD-psd}]
We will prove by forward implication and reverse implication separately as follows.
\paragraph{Forward implication:}
Suppose $\bA$ is PD. Then for any eigenvalue $\lambda$ and corresponding eigenvector $\bv$ of $\bA$, we have $\bA\bv = \lambda\bv$. Thus
$$
\bv^\top \bA\bv = \lambda||\bv||^2 > 0.
$$
This implies $\lambda>0$.
\paragraph{Reverse implication:}
Conversely, suppose all eigenvalues of $\bA$ are positive, and let $\bA =\bQ\bLambda \bQ^\top$ be the spectral decomposition of $\bA$. For any nonzero vector $\bx$, let $\by=\bQ^\top\bx$; since $\bQ$ is nonsingular, $\by\neq \bzero$, and we have
$$
\bx^\top \bA \bx = \bx^\top (\bQ\bLambda \bQ^\top) \bx = (\bx^\top \bQ) \bLambda (\bQ^\top\bx) = \by^\top\bLambda\by = \sum_{i=1}^{n} \lambda_i y_i^2>0.
$$
That is, $\bA$ is PD.
Analogously, we can prove the second part of the claim.
\end{proof}
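The equivalence in Lemma~\ref{lemma:eigens-of-PD-psd} can be sketched numerically (an illustration of ours; the matrix below is an arbitrary PD example):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B.T @ B + 4 * np.eye(4)   # symmetric with strictly positive eigenvalues

# All eigenvalues are positive ...
assert np.all(np.linalg.eigvalsh(A) > 0)

# ... and the quadratic form x^T A x is positive for random nonzero x.
for _ in range(100):
    x = rng.standard_normal(4)
    assert x @ A @ x > 0
```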
\begin{theoremHigh}[Nonsingular Factor of PSD and PD Matrices]\label{lemma:nonsingular-factor-of-PD}
A real symmetric matrix $\bA$ is PSD if and only if $\bA$ can be factored as $\bA=\bP^\top\bP$, and is PD if and only if $\bP$ is nonsingular.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{lemma:nonsingular-factor-of-PD}]
For the first part, we will prove by forward implication and reverse implication separately as follows.
\paragraph{Forward implication: }Suppose $\bA$ is PSD with spectral decomposition $\bA = \bQ\bLambda\bQ^\top$. Since the eigenvalues of PSD matrices are nonnegative, we can decompose $\bLambda=\bLambda^{1/2}\bLambda^{1/2}$. Letting $\bP = \bLambda^{1/2}\bQ^\top$, we obtain the factorization $\bA=\bP^\top\bP$.
\paragraph{Reverse implication: } If $\bA$ can be factored as $\bA=\bP^\top\bP$, then all eigenvalues of $\bA$ are nonnegative, since for any eigenvalue $\lambda$ and its corresponding eigenvector $\bv$ of $\bA$, we have
$$
\lambda = \frac{\bv^\top\bA\bv}{\bv^\top\bv} = \frac{\bv^\top\bP^\top\bP\bv}{\bv^\top\bv}=\frac{||\bP\bv||^2}{||\bv||^2} \geq 0.
$$
This implies $\bA$ is PSD by Lemma~\ref{lemma:eigens-of-PD-psd}.
Similarly, we can prove the second part for PD matrices: positive definiteness forces $\bP$ to be nonsingular, and conversely a nonsingular $\bP$ forces the eigenvalues to be positive.\footnote{See also the Wikipedia page: https://en.wikipedia.org/wiki/Sylvester's\_criterion.}
\end{proof}
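The construction in the forward implication, $\bP = \bLambda^{1/2}\bQ^\top$, can be carried out directly (a numerical sketch of ours on an arbitrary PD matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = B.T @ B + np.eye(4)                  # positive definite

lam, Q = np.linalg.eigh(A)               # spectral decomposition A = Q diag(lam) Q^T
P = np.diag(np.sqrt(lam)) @ Q.T          # P = Lambda^{1/2} Q^T

assert np.allclose(P.T @ P, A)           # A = P^T P
assert abs(np.linalg.det(P)) > 1e-8      # P is nonsingular since A is PD
```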
\subsection{Proof for Semidefinite Rank-Revealing Decomposition}\label{section:semi-rank-reveal-proof}
In this section, we provide a proof for Theorem~\ref{theorem:semidefinite-factor-rank-reveal} (p.~\pageref{theorem:semidefinite-factor-rank-reveal}), the existence of the rank-revealing decomposition for positive semidefinite matrices.\index{Rank-revealing}\index{Semidefinite rank-revealing}
\begin{proof}[of Theorem~\ref{theorem:semidefinite-factor-rank-reveal}]
The proof is a consequence of the nonsingular factor of PSD matrices (Theorem~\ref{lemma:nonsingular-factor-of-PD}, p.~\pageref{lemma:nonsingular-factor-of-PD}) and the existence of column-pivoted QR decomposition (Theorem~\ref{theorem:rank-revealing-qr-general}, p.~\pageref{theorem:rank-revealing-qr-general}).
By Theorem~\ref{lemma:nonsingular-factor-of-PD}, the nonsingular factor of PSD matrix $\bA$ is given by $\bA = \bZ^\top\bZ$, where $\bZ=\bLambda^{1/2}\bQ^\top$ and $\bA=\bQ\bLambda\bQ^\top$ is the spectral decomposition of $\bA$.
By Lemma~\ref{lemma:rank-of-symmetric} (p.~\pageref{lemma:rank-of-symmetric}), the rank of matrix $\bA$ is the number of nonzero eigenvalues (here the number of positive eigenvalues, since $\bA$ is PSD). Therefore, only $r$ entries on the diagonal of $\bLambda^{1/2}$ are nonzero, so $\bZ=\bLambda^{1/2}\bQ^\top$ has only $r$ nonzero rows, i.e., $\bZ$ is of rank $r$. By column-pivoted QR decomposition, we have
$$
\bZ\bP = \bQ_1
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix},
$$
where $\bQ_1$ is orthogonal (written $\bQ_1$ to distinguish it from the spectral factor $\bQ$ above), $\bP$ is a permutation matrix, $\bR_{11}\in \real^{r\times r}$ is upper triangular with positive diagonals, and $\bR_{12}\in \real^{r\times (n-r)}$. Therefore
$$
\bP^\top\bA\bP =
\bP^\top\bZ^\top\bZ\bP =
\begin{bmatrix}
\bR_{11}^\top & \bzero \\
\bR_{12}^\top & \bzero
\end{bmatrix}
\begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix}.
$$
Letting
$$
\bR = \begin{bmatrix}
\bR_{11} & \bR_{12} \\
\bzero & \bzero
\end{bmatrix},
$$
we obtain the rank-revealing decomposition of the semidefinite matrix: $\bP^\top\bA\bP = \bR^\top\bR$.
\end{proof}
This decomposition is produced by using complete pivoting, which at each stage permutes the largest diagonal element
in the active submatrix into the pivot position. The procedure is similar to the partial pivoting discussed in Section~\ref{section:partial-pivot-lu} (p.~\pageref{section:partial-pivot-lu}).
\subsection{Application: Cholesky Decomposition via the QR Decomposition and the Spectral Decomposition}\label{section:cholesky-by-qr-spectral}
In this section, we provide another proof for the existence of Cholesky decomposition.
\begin{proof}[of Theorem~\ref{theorem:cholesky-factor-exist}]
From Theorem~\ref{lemma:nonsingular-factor-of-PD}, the PD matrix $\bA$ can be factored as $\bA=\bP^\top\bP$ where $\bP$ is a nonsingular matrix. Then, the QR decomposition of $\bP$ is given by $\bP = \bQ\bR$. This implies
$$
\bA = \bP^\top\bP = \bR^\top\bQ^\top\bQ\bR = \bR^\top\bR,
$$
where we notice that this form is very similar to the Cholesky decomposition, except that we have not yet argued that $\bR$ has only positive diagonal values. From the CGS algorithm for computing the QR decomposition (Section~\ref{section:qr-gram-compute}, p.~\pageref{section:qr-gram-compute}), we see that the diagonals of $\bR$ are nonnegative; and since $\bP$ is nonsingular, the diagonals of $\bR$ are in fact positive.
\end{proof}
The proof for the above theorem is a consequence of the existence of both the QR decomposition and the spectral decomposition. Thus, the existence of Cholesky decomposition can be shown via the QR decomposition and the spectral decomposition in this sense.
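The construction in the proof can be sketched numerically (our illustration on an arbitrary PD matrix; we fix the signs of the rows of $\bR$ by hand, since numerical QR routines do not guarantee positive diagonals):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = B.T @ B + np.eye(4)                      # positive definite

lam, Qs = np.linalg.eigh(A)
P = np.diag(np.sqrt(lam)) @ Qs.T             # nonsingular factor: A = P^T P
Q, R = np.linalg.qr(P)                       # QR decomposition of P

# Fix signs so that R has positive diagonals.
signs = np.sign(np.diag(R))
R = np.diag(signs) @ R

assert np.allclose(R.T @ R, A)               # Cholesky-type factorization A = R^T R
assert np.all(np.diag(R) > 0)
assert np.allclose(R, np.linalg.cholesky(A).T)   # matches the (unique) upper Cholesky factor
```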
\subsection{Application: Unique Power Decomposition of Positive Definite Matrices}\label{section:unique-posere-pd}
\begin{theoremHigh}[Unique Power Decomposition of PD Matrices]\label{theorem:unique-factor-pd}
Any $n\times n$ positive definite matrix $\bA$ can be \textbf{uniquely} factored as $\bA =\bB^2$, where $\bB$ is a positive definite matrix.
\end{theoremHigh}
\begin{proof}[of Theorem~\ref{theorem:unique-factor-pd}]
We first prove that there exists a positive definite matrix $\bB$ such that $\bA = \bB^2$.
\paragraph{Existence} Since $\bA$ is PD (and hence symmetric), its spectral decomposition is given by $\bA = \bQ\bLambda\bQ^\top$. Since the eigenvalues of PD matrices are positive by Lemma~\ref{lemma:eigens-of-PD-psd} (p.~\pageref{lemma:eigens-of-PD-psd}), the square root of $\bLambda$ exists. We can define $\bB = \bQ\bLambda^{1/2}\bQ^\top$ so that $\bA = \bB^2$, where $\bB$ is clearly PD (its eigenvalues are the positive values $\lambda_i^{1/2}$).
\paragraph{Uniqueness} Suppose the factorization is not unique; then there exist two such decompositions,
$$
\bA = \bB_1^2 = \bB_2^2,
$$
where $\bB_1$ and $\bB_2$ are both PD. The spectral decompositions of them are given by
$$
\bB_1 = \bQ_1 \bLambda_1\bQ_1^\top, \qquad \text{and} \qquad \bB_2 = \bQ_2 \bLambda_2\bQ_2^\top.
$$
We notice that $\bLambda_1^2$ and $\bLambda_2^2$ both contain the eigenvalues of $\bA$, and the eigenvalues of $\bB_1$ and $\bB_2$ contained in $\bLambda_1$ and $\bLambda_2$ are positive (since $\bB_1$ and $\bB_2$ are both PD). Since the eigenvalues of $\bB_1$ and $\bB_2$ are thus the positive square roots of the eigenvalues of $\bA$, we may take $\bLambda_1=\bLambda_2=\bLambda^{1/2}$, where $\bLambda=\diag(\lambda_1,\lambda_2, \ldots, \lambda_n)$ with $\lambda_1\geq \lambda_2 \geq \ldots \geq \lambda_n$. By $\bB_1^2 = \bB_2^2$, we have
$$
\bQ_1 \bLambda \bQ_1^\top = \bQ_2 \bLambda \bQ_2^\top \leadto \bQ_2^\top\bQ_1 \bLambda = \bLambda \bQ_2^\top\bQ_1.
$$
Let $\bZ = \bQ_2^\top\bQ_1$. This implies that $\bLambda$ and $\bZ$ commute, so $\bZ$ must be a block diagonal matrix whose partitioning conforms to the block structure of $\bLambda$ (blocks of equal eigenvalues). Hence $\bZ$ also commutes with $\bLambda^{1/2}$, which results in $\bLambda^{1/2} = \bZ\bLambda^{1/2}\bZ^\top$ and
$$
\bB_2 = \bQ_2 \bLambda^{1/2}\bQ_2^\top = \bQ_2 \bQ_2^\top\bQ_1\bLambda^{1/2} \bQ_1^\top\bQ_2 \bQ_2^\top=\bB_1.
$$
This completes the proof.
\end{proof}
Similarly, we can prove the unique decomposition of a PSD matrix $\bA = \bB^2$, where $\bB$ is PSD. A more detailed discussion of this topic can be found in \citet{koeber2006unique}.
\paragraph{Decomposition for PD matrices} To conclude, a PD matrix $\bA$ can be factored as $\bA=\bR^\top\bR$, where $\bR$ is an upper triangular matrix with positive diagonals, by the Cholesky decomposition in Theorem~\ref{theorem:cholesky-factor-exist} (p.~\pageref{theorem:cholesky-factor-exist}); as $\bA = \bP^\top\bP$, where $\bP$ is nonsingular, by Theorem~\ref{lemma:nonsingular-factor-of-PD} (p.~\pageref{lemma:nonsingular-factor-of-PD}); and as $\bA = \bB^2$, where $\bB$ is PD, by Theorem~\ref{theorem:unique-factor-pd}.
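The existence construction $\bB = \bQ\bLambda^{1/2}\bQ^\top$ can be checked directly (a numerical sketch of ours on an arbitrary PD matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
C = rng.standard_normal((4, 4))
A = C.T @ C + np.eye(4)                    # positive definite

lam, Q = np.linalg.eigh(A)
B = Q @ np.diag(np.sqrt(lam)) @ Q.T        # B = Q Lambda^{1/2} Q^T

assert np.allclose(B @ B, A)               # B^2 = A
assert np.allclose(B, B.T)                 # B is symmetric
assert np.all(np.linalg.eigvalsh(B) > 0)   # B is PD
```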
\chapter{Singular Value Decomposition (SVD)}
\begingroup
\hypersetup{linkcolor=mydarkblue}
\minitoc \newpage
\endgroup
\section{Singular Value Decomposition (SVD)}\label{section:SVD}
In the eigenvalue decomposition, we factor a matrix via a diagonal matrix of eigenvalues. However, this is not always possible: if $\bA$ does not have linearly independent eigenvectors, such a diagonalization does not exist. The singular value decomposition (SVD) fills this gap.
Via the SVD, the matrix is factored using two orthogonal matrices rather than a single eigenvector matrix.
We state the result of the SVD in the following theorem and discuss the existence of the SVD in the next sections.
\begin{theoremHigh}[Reduced SVD for Rectangular Matrices]\label{theorem:reduced_svd_rectangular}
Every real $m\times n$ matrix $\bA$ with rank $r$ can be factored as
$$
\bA = \bU \bSigma \bV^\top,
$$
where $\bSigma\in \real^{r\times r}$ is a diagonal matrix $\bSigma=\diag(\sigma_1, \sigma_2, \ldots, \sigma_r)$ with $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_r$ and
\begin{itemize}
\item The $\sigma_i$'s are the nonzero \textit{singular values} of $\bA$; they are also the (positive) square roots of the nonzero \textit{eigenvalues} of $\trans{\bA} \bA$ and $ \bA \trans{\bA}$.
\item Columns of $\bU\in \real^{m\times r}$ contain the $r$ eigenvectors of $\bA\bA^\top$ corresponding to the $r$ nonzero eigenvalues of $\bA\bA^\top$.
\item Columns of $\bV\in \real^{n\times r}$ contain the $r$ eigenvectors of $\bA^\top\bA$ corresponding to the $r$ nonzero eigenvalues of $\bA^\top\bA$.
\item Moreover, the columns of $\bU$ and $\bV$ are called the \textit{left and right singular vectors} of $\bA$, respectively.
\item Further, the columns of $\bU$ and $\bV$ are orthonormal (by Spectral Theorem~\ref{theorem:spectral_theorem}, p.~\pageref{theorem:spectral_theorem}).
\end{itemize}
In particular, we can write out the matrix decomposition by the sum of outer products of vectors $\bA = \bU \bSigma \bV^\top = \sum_{i=1}^r \sigma_i \bu_i \bv_i^\top$, which is a sum of $r$ rank-one matrices.
\end{theoremHigh}
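The claims of the theorem can be verified numerically on an arbitrary example (our illustration, using NumPy's SVD routine):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))          # full column rank, so r = 3 almost surely
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # reduced SVD

assert np.allclose(U @ np.diag(s) @ Vt, A)

# Singular values are the square roots of the eigenvalues of A^T A.
eig = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(s**2, eig)

# Columns of U and V are orthonormal.
assert np.allclose(U.T @ U, np.eye(3))
assert np.allclose(Vt @ Vt.T, np.eye(3))

# A is the sum of r rank-one matrices sigma_i u_i v_i^T.
assert np.allclose(sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(3)), A)
```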
If we append an additional $m-r$ silent columns that are orthonormal to the $r$ eigenvectors of $\bA\bA^\top$, just like the silent columns in the QR decomposition (Section~\ref{section:silentcolu_qrdecomp}, p.~\pageref{section:silentcolu_qrdecomp}), we obtain an orthogonal matrix $\bU\in \real^{m\times m}$. The situation is similar for the columns of $\bV$: appending $n-r$ silent columns yields an orthogonal matrix $\bV\in \real^{n\times n}$.
The comparison between the reduced and the full SVD is shown in Figure~\ref{fig:svd-comparison} where white entries are zero and \textcolor{blue}{blue} entries are not necessarily zero.
\begin{figure}[h]
\centering
\vspace{-0.35cm}
\subfigtopskip=2pt
\subfigbottomskip=2pt
\subfigcapskip=-5pt
\subfigure[Reduced SVD decomposition]{\label{fig:svdhalf}
\includegraphics[width=0.47\linewidth]{./imgs/svdreduced.pdf}}
\quad
\subfigure[Full SVD decomposition]{\label{fig:svdall}
\includegraphics[width=0.47\linewidth]{./imgs/svdfull.pdf}}
\caption{Comparison between the reduced and full SVD.}
\label{fig:svd-comparison}
\end{figure}
\section{Existence of the SVD}
To prove the existence of the SVD, we need the following lemmas. We mentioned above that the singular values are the square roots of the eigenvalues of $\bA^\top\bA$. Since negative values do not have real square roots, the eigenvalues of $\bA^\top\bA$ must be nonnegative, which we now verify.
\begin{lemma}[Nonnegative Eigenvalues of $\bA^\top \bA$]\label{lemma:nonneg-eigen-ata}
Given any matrix $\bA\in \real^{m\times n}$, $\bA^\top \bA$ has nonnegative eigenvalues.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:nonneg-eigen-ata}]
For any eigenvalue $\lambda$ of $\bA^\top \bA$ with corresponding eigenvector $\bx$, we have
$$
\bA^\top \bA \bx = \lambda \bx \leadto \bx^\top \bA^\top \bA \bx = \lambda \bx^\top\bx.
$$
Since $\bx^\top \bA^\top \bA \bx = ||\bA \bx||^2 \geq 0$ and $\bx^\top\bx > 0$ (an eigenvector is nonzero), we have $\lambda \geq 0$.
\end{proof}
Since $\bA^\top\bA$ has nonnegative eigenvalues, we then can define the \textit{singular value} $\sigma\geq 0$ of $\bA$ such that $\sigma^2$ is the eigenvalue of $\bA^\top\bA$, i.e., \fbox{$\bA^\top\bA \bv = \sigma^2 \bv$}. This is essential to the existence of the SVD.
We have shown in Lemma~\ref{lemma:rankAB} (p.~\pageref{lemma:rankAB}) that $rank(\bA\bB)\leq \min\{rank(\bA), rank(\bB)\}$.
However, the symmetric matrix $\bA^\top \bA$ is rather special in that $rank(\bA^\top \bA)$ is equal to $rank(\bA)$, which we now prove.
\begin{lemma}[Rank of $\bA^\top \bA$]\label{lemma:rank-of-ata}
$\bA^\top \bA$ and $\bA$ have the same rank.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:rank-of-ata}]
Let $\bx\in \nspace(\bA)$. Then we have
$$
\bA\bx = \bzero \leadto \bA^\top\bA \bx =\bzero,
$$
i.e., $\bx\in \nspace(\bA) \leadtosmall \bx \in \nspace(\bA^\top \bA)$, therefore $\nspace(\bA) \subseteq \nspace(\bA^\top\bA)$.
Further, let $\bx \in \nspace(\bA^\top\bA)$. Then we have
$$
\bA^\top \bA\bx = \bzero\leadtosmall \bx^\top \bA^\top \bA\bx = 0\leadtosmall ||\bA\bx||^2 = 0 \leadtosmall \bA\bx=\bzero,
$$
i.e., $\bx\in \nspace(\bA^\top \bA) \leadtosmall \bx\in \nspace(\bA)$, therefore $\nspace(\bA^\top\bA) \subseteq\nspace(\bA) $.
As a result, by ``sandwiching", it follows that
$$\nspace(\bA) = \nspace(\bA^\top\bA) \qquad
\text{and} \qquad
dim(\nspace(\bA)) = dim(\nspace(\bA^\top\bA)).
$$
By the fundamental theorem of linear algebra (Theorem~\ref{theorem:fundamental-linear-algebra}, p.~\pageref{theorem:fundamental-linear-algebra}), $\bA^\top \bA$ and $\bA$ have the same rank.
\end{proof}
Applying this observation to $\bA^\top$, we can also prove that $\bA\bA^\top$ and $\bA$ have the same rank:
$$
rank(\bA) = rank(\bA^\top \bA) = rank(\bA\bA^\top).
$$
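These rank identities can be checked on a deliberately rank-deficient example (a numerical sketch of ours):

```python
import numpy as np

rng = np.random.default_rng(6)
# Build a 6 x 4 matrix of rank 2 as a product of thin factors.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))

r = np.linalg.matrix_rank(A)
assert r == 2
assert np.linalg.matrix_rank(A.T @ A) == r
assert np.linalg.matrix_rank(A @ A.T) == r

# The number of nonzero singular values also equals the rank.
s = np.linalg.svd(A, compute_uv=False)
assert np.sum(s > 1e-10) == r
```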
In the form of the SVD, we claimed that the matrix $\bA$ is a sum of $r$ rank-one matrices, where $r$ is the number of nonzero singular values. The number of nonzero singular values is in fact the rank of the matrix.
\begin{lemma}[The Number of Nonzero Singular Values vs the Rank]\label{lemma:rank-equal-singular}
The number of nonzero singular values of matrix $\bA$ equals the rank of $\bA$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:rank-equal-singular}]
The rank of any symmetric matrix (here $\bA^\top\bA$) equals the number of nonzero eigenvalues (with repetitions) by Lemma~\ref{lemma:rank-of-symmetric} (p.~\pageref{lemma:rank-of-symmetric}). So the number of nonzero singular values equals the rank of $\bA^\top \bA$. By Lemma~\ref{lemma:rank-of-ata}, the number of nonzero singular values equals the rank of $\bA$.
\end{proof}
We are now ready to prove the existence of the SVD.
\begin{proof}[\textbf{of Theorem~\ref{theorem:reduced_svd_rectangular}: Existence of the Reduced SVD}]
Since $\bA^\top \bA$ is a symmetric matrix, by Spectral Theorem~\ref{theorem:spectral_theorem} (p.~\pageref{theorem:spectral_theorem}) and Lemma~\ref{lemma:nonneg-eigen-ata}, there exists an orthogonal matrix $\bV$ such that
$$
\boxed{\bA^\top \bA = \bV \bSigma^2 \bV^\top},
$$
where $\bSigma$ is a diagonal matrix containing the nonzero singular values of $\bA$, i.e., $\bSigma^2$ contains the nonzero eigenvalues of $\bA^\top \bA$.
Specifically, $\bSigma=\diag(\sigma_1, \sigma_2, \ldots, \sigma_r)$, and $\{\sigma_1^2, \sigma_2^2, \ldots, \sigma_r^2\}$ are the nonzero eigenvalues of $\bA^\top \bA$ with $r$ being the rank of $\bA$. That is, $\{\sigma_1, \ldots, \sigma_r\}$ are the nonzero singular values of $\bA$. In this case, we keep only the eigenvectors corresponding to the nonzero eigenvalues, so $\bV\in \real^{n\times r}$ has orthonormal columns.
We now come to the central part.
\begin{mdframed}[hidealllines=\mdframehidelineNote,backgroundcolor=\mdframecolorSkip]
Start from \fbox{$\bA^\top\bA \bv_i = \sigma_i^2 \bv_i$}, $\forall i \in \{1, 2, \ldots, r\}$, i.e., the eigenvector $\bv_i$ of $\bA^\top\bA$ corresponding to $\sigma_i^2$:
1. Multiply both sides by $\bv_i^\top$:
$$
\bv_i^\top\bA^\top\bA \bv_i = \sigma_i^2 \bv_i^\top \bv_i \leadto ||\bA\bv_i||^2 = \sigma_i^2 \leadto ||\bA\bv_i||=\sigma_i
$$
2. Multiply both sides by $\bA$:
$$
\bA\bA^\top\bA \bv_i = \sigma_i^2 \bA \bv_i \leadto \bA\bA^\top \frac{\bA \bv_i }{\sigma_i}= \sigma_i^2 \frac{\bA \bv_i }{\sigma_i} \leadto \bA\bA^\top \bu_i = \sigma_i^2 \bu_i
$$
where we notice that this form identifies the eigenvector of $\bA\bA^\top$ corresponding to $\sigma_i^2$, namely $\bA \bv_i$. Since the length of $\bA \bv_i$ is $\sigma_i$, we define the unit vector $\bu_i = \frac{\bA \bv_i }{\sigma_i}$.
\end{mdframed}
These $\bu_i$'s are orthonormal because $(\bA\bv_i)^\top(\bA\bv_j)=\bv_i^\top\bA^\top\bA\bv_j=\sigma_j^2 \bv_i^\top\bv_j=0$ for $i\neq j$. Combined with $\bA\bA^\top\bu_i = \sigma_i^2\bu_i$, this gives
$$
\boxed{\bA \bA^\top = \bU \bSigma^2 \bU^\top}.
$$
Since \fbox{$\bA\bv_i = \sigma_i\bu_i$}, we have
$$
[\bA\bv_1, \bA\bv_2, \ldots, \bA\bv_r] = [ \sigma_1\bu_1, \sigma_2\bu_2, \ldots, \sigma_r\bu_r]\leadto
\bA\bV = \bU\bSigma,
$$
which completes the proof.
\end{proof}
By appending silent columns in $\bU$ and $\bV$, we can easily find the full SVD. A byproduct of the above proof is that the spectral decomposition of $\bA^\top\bA = \bV \bSigma^2 \bV^\top$ will result in the spectral decomposition of $\bA \bA^\top = \bU \bSigma^2 \bU^\top$ with the same eigenvalues.
\begin{corollary}[Eigenvalues of $\bA^\top\bA$ and $\bA\bA^\top$]
The nonzero eigenvalues of $\bA^\top\bA$ and $\bA\bA^\top$ are the same.
\end{corollary}
We have shown in Lemma~\ref{lemma:nonneg-eigen-ata} (p.~\pageref{lemma:nonneg-eigen-ata}) that the eigenvalues of $\bA^\top \bA$ are nonnegative, so the eigenvalues of $\bA\bA^\top$ are nonnegative as well.
\begin{corollary}[Nonnegative Eigenvalues of $\bA^\top\bA$ and $\bA\bA^\top$]
The eigenvalues of $\bA^\top\bA$ and $\bA\bA^\top$ are nonnegative.
\end{corollary}
The existence of the SVD is important for defining the effective rank of a matrix.
\begin{definition}[Effective Rank vs Exact Rank]
The \textit{effective rank} is also known as the \textit{numerical rank}.
Following Lemma~\ref{lemma:rank-equal-singular}, the number of nonzero singular values is equal to the rank of a matrix.
Let the $i$-th largest singular value of $\bA$ be denoted by $\sigma_i(\bA)$. If $\sigma_r(\bA)\gg \sigma_{r+1}(\bA)\approx 0$, then $r$ is known as the numerical rank of $\bA$. Whereas, when $\sigma_r(\bA)>\sigma_{r+1}(\bA)=0$, the matrix is said to have \textit{exact rank} $r$, as we have assumed in most of our discussions.
\end{definition}
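The distinction matters in floating-point arithmetic: a matrix of exact rank $r$ typically shows tiny but nonzero trailing singular values after roundoff. A numerical sketch of ours (the tolerance choice below is a common convention, not the only one):

```python
import numpy as np

rng = np.random.default_rng(7)
U, _ = np.linalg.qr(rng.standard_normal((6, 6)))
V, _ = np.linalg.qr(rng.standard_normal((6, 6)))
s_true = np.array([10.0, 5.0, 0.0, 0.0, 0.0, 0.0])
A = U @ np.diag(s_true) @ V.T            # exact rank 2, perturbed by roundoff

sv = np.linalg.svd(A, compute_uv=False)
# Trailing singular values are tiny but usually not exactly zero.
tol = max(A.shape) * np.finfo(float).eps * sv[0]   # a common tolerance choice
assert int(np.sum(sv > tol)) == 2                  # numerical (effective) rank
assert np.linalg.matrix_rank(A) == 2               # NumPy applies a similar criterion
```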
\section{Properties of the SVD}\label{section:property-svd}
\subsection{Four Subspaces in SVD}\label{section:four-space-svd}
For any matrix $\bA\in \real^{m\times n}$, we have the following properties:
$\bullet$ $\nspace(\bA)$ is the orthogonal complement of the row space $\cspace(\bA^\top)$ in $\real^n$: $dim(\nspace(\bA))+dim(\cspace(\bA^\top))=n$;
$\bullet$ $\nspace(\bA^\top)$ is the orthogonal complement of the column space $\cspace(\bA)$ in $\real^m$: $dim(\nspace(\bA^\top))+dim(\cspace(\bA))=m$;
This is called the fundamental theorem of linear algebra and is also known as the rank-nullity theorem (Theorem~\ref{theorem:fundamental-linear-algebra}, p.~\pageref{theorem:fundamental-linear-algebra}). From the SVD, we can find an orthonormal basis for each subspace.
\begin{figure}[h!]
\centering
\includegraphics[width=0.98\textwidth]{imgs/lafundamental3-SVD.pdf}
\caption{Orthonormal bases that diagonalize $\bA$ from SVD.}
\label{fig:lafundamental3-SVD}
\end{figure}
\begin{lemma}[Four Orthonormal Bases]\label{lemma:svd-four-orthonormal-Basis}
Given the full SVD of matrix $\bA = \bU \bSigma \bV^\top$ with column partitions $\bU=[\bu_1, \bu_2, \ldots,\bu_m]$ and $\bV=[\bv_1, \bv_2, \ldots, \bv_n]$, we have the following properties:
$\bullet$ $\{\bv_1, \bv_2, \ldots, \bv_r\} $ is an orthonormal basis of $\cspace(\bA^\top)$;
$\bullet$ $\{\bv_{r+1},\bv_{r+2}, \ldots, \bv_n\}$ is an orthonormal basis of $\nspace(\bA)$;
$\bullet$ $\{\bu_1,\bu_2, \ldots,\bu_r\}$ is an orthonormal basis of $\cspace(\bA)$;
$\bullet$ $\{\bu_{r+1}, \bu_{r+2},\ldots,\bu_m\}$ is an orthonormal basis of $\nspace(\bA^\top)$.
The relationship of the four subspaces is demonstrated in Figure~\ref{fig:lafundamental3-SVD}, where $\bA$ transforms the row basis $\bv_i$ into the column basis $\bu_i$ via $\sigma_i\bu_i=\bA\bv_i$ for all $i\in \{1, 2, \ldots, r\}$.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:svd-four-orthonormal-Basis}]
From Lemma~\ref{lemma:rank-of-symmetric} (p.~\pageref{lemma:rank-of-symmetric}), for symmetric matrix $\bA^\top\bA$, $\cspace(\bA^\top\bA)$ is spanned by the eigenvectors, thus $\{\bv_1,\bv_2, \ldots, \bv_r\}$ is an orthonormal basis of $\cspace(\bA^\top\bA)$.
Moreover:
1. $\bA^\top\bA$ is symmetric, so the row space of $\bA^\top\bA$ equals the column space of $\bA^\top\bA$.
2. All rows of $\bA^\top\bA$ are combinations of the rows of $\bA$, so the row space of $\bA^\top\bA$ is a subspace of the row space of $\bA$, i.e., $\cspace(\bA^\top\bA) \subseteq \cspace(\bA^\top)$.
3. Since $rank(\bA^\top\bA) = rank(\bA)$ by Lemma~\ref{lemma:rank-of-ata} (p.~\pageref{lemma:rank-of-ata}), the row space of $\bA^\top\bA$ = the column space of $\bA^\top\bA$ = the row space of $\bA$, i.e., $\cspace(\bA^\top\bA) = \cspace(\bA^\top)$. Thus $\{\bv_1, \bv_2,\ldots, \bv_r\}$ is an orthonormal basis of $\cspace(\bA^\top)$.
Further, the space spanned by $\{\bv_{r+1}, \bv_{r+2},\ldots, \bv_n\}$ is an orthogonal complement to the space spanned by $\{\bv_1,\bv_2, \ldots, \bv_r\}$, so $\{\bv_{r+1},\bv_{r+2}, \ldots, \bv_n\}$ is an orthonormal basis of $\nspace(\bA)$.
Applying this process to $\bA\bA^\top$ proves the remaining claims of the lemma. Also, we can see that $\{\bu_1,\bu_2, \ldots,\bu_r\}$ is a basis for the column space of $\bA$ by Lemma~\ref{lemma:column-basis-from-row-basis} (p.~\pageref{lemma:column-basis-from-row-basis}),\footnote{For any matrix $\bA$, let $\{\br_1, \br_2, \ldots, \br_r\}$ be a set of vectors in $\real^n$ which forms a basis for the row space; then $\{\bA\br_1, \bA\br_2, \ldots, \bA\br_r\}$ is a basis for the column space of $\bA$.} since $\bu_i = \frac{\bA\bv_i}{\sigma_i},\, \forall i \in\{1, 2, \ldots, r\}$.
\end{proof}
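The four bases can be exhibited numerically on a rank-deficient example (our illustration; indices follow the lemma):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5 x 4, rank 3
U, s, Vt = np.linalg.svd(A)                                     # full SVD
r = int(np.sum(s > 1e-10))
assert r == 3

# {v_1..v_r} lie in the row space: A maps v_i to sigma_i u_i != 0.
for i in range(r):
    assert np.allclose(A @ Vt[i], s[i] * U[:, i])

# {v_{r+1}..v_n} span the null space of A.
for i in range(r, 4):
    assert np.allclose(A @ Vt[i], 0, atol=1e-10)

# {u_{r+1}..u_m} span the null space of A^T.
for i in range(r, 5):
    assert np.allclose(A.T @ U[:, i], 0, atol=1e-10)
```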
\subsection{Relationship between Singular Values and Determinant}
Let $\bA\in \real^{n\times n}$ be a square matrix whose singular value decomposition is given by $\bA = \bU\bSigma\bV^\top$. Since $|\det(\bU)| = |\det(\bV)| = 1$, it follows that
$$
|\det(\bA)| = |\det(\bU\bSigma\bV^\top)| = |\det(\bSigma)| = \sigma_1 \sigma_2\cdots \sigma_n.
$$
If all the singular values $\sigma_i$ are nonzero, then $\det(\bA)\neq 0$, that is, $\bA$ is \textbf{nonsingular}. If at least one singular value satisfies $\sigma_i =0$, then $\det(\bA)=0$, so $\bA$ does not have full rank and is not invertible; such a matrix is called \textbf{singular}. This is why the $\sigma_i$'s are known as the singular values.
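The determinant identity above is easy to confirm numerically (a sketch of ours on an arbitrary square matrix):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((4, 4))
s = np.linalg.svd(A, compute_uv=False)

# |det(A)| equals the product of the singular values.
assert np.allclose(abs(np.linalg.det(A)), np.prod(s))
```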
\index{Orthogonal equivalence}
\subsection{Orthogonal Equivalence}
We have defined in Definition~\ref{definition:similar-matrices} (p.~\pageref{definition:similar-matrices}) that $\bA$ and $\bP\bA\bP^{-1}$ are similar matrices for any nonsingular matrix $\bP$. The \textit{orthogonal equivalence} is defined in a similar way.
\begin{definition}[Orthogonal Equivalent Matrices\index{Orthogonal equivalent matrices}]
Given any orthogonal matrices $\bU$ and $\bV$, the matrices $\bA$ and $\bU\bA\bV$ are called \textbf{orthogonal equivalent matrices}. Or \textbf{unitary equivalent} in the complex domain when $\bU$ and $\bV$ are unitary.
\end{definition}
Then, we have the following property for orthogonal equivalent matrices.
\begin{lemma}[Orthogonal Equivalent Matrices]\label{lemma:orthogonal-equivalent-matrix}
Given any orthogonal equivalent matrices $\bA$ and $\bB$, their singular values are the same.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:orthogonal-equivalent-matrix}]
Since $\bA$ and $\bB$ are orthogonal equivalent, there exist orthogonal matrices $\bU$ and $\bV$ such that $\bB = \bU\bA\bV$. We then have
$$
\bB\bB^\top = (\bU\bA\bV)(\bV^\top\bA^\top\bU^\top) = \bU\bA\bA^\top\bU^\top.
$$
This implies $\bB\bB^\top$ and $\bA\bA^\top$ are similar matrices. By Lemma~\ref{lemma:eigenvalue-similar-matrices} (p.~\pageref{lemma:eigenvalue-similar-matrices}), the eigenvalues of similar matrices are the same, which proves that the singular values of $\bA$ and $\bB$ are the same.
\end{proof}
\subsection{SVD for QR}
\begin{lemma}[Same Singular Values in the QR Decomposition]\label{lemma:svd-for-qr}
Suppose the full QR decomposition for matrix $\bA\in \real^{m\times n}$ with $m\geq n$ is given by $\bA=\bQ\bR$ where $\bQ\in \real^{m\times m}$ is orthogonal and $\bR\in \real^{m\times n}$ is upper triangular. Then $\bA$ and $\bR$ have the same singular values and right singular vectors.
\end{lemma}
\begin{proof}[of Lemma~\ref{lemma:svd-for-qr}]
We notice that $\bA^\top\bA = \bR^\top\bR$, so $\bA^\top\bA$ and $\bR^\top\bR$ have the same eigenvalues and eigenvectors; i.e., $\bA$ and $\bR$ have the same singular values and right singular vectors (the eigenvectors of $\bA^\top\bA = \bR^\top\bR$).
\end{proof}
The above lemma implies that an SVD of a matrix can be constructed from its own QR decomposition. Suppose the QR decomposition of $\bA$ is given by $\bA=\bQ\bR$ and the SVD of $\bR$ is given by $\bR=\bU_0 \bSigma\bV^\top$. Then, the SVD of $\bA$ can be obtained by
$$
\bA =\underbrace{ \bQ\bU_0}_{\bU} \bSigma\bV^\top.
$$
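This QR-to-SVD route can be carried out directly (a numerical sketch of ours on an arbitrary tall matrix):

```python
import numpy as np

rng = np.random.default_rng(10)
A = rng.standard_normal((6, 4))

Q, R = np.linalg.qr(A, mode='complete')      # full QR: Q is 6x6, R is 6x4
U0, s, Vt = np.linalg.svd(R)                 # SVD of the triangular factor
U = Q @ U0                                   # U = Q U0 yields an SVD of A

Sigma = np.zeros((6, 4))
Sigma[:4, :4] = np.diag(s)
assert np.allclose(U @ Sigma @ Vt, A)
# A and R share the same singular values.
assert np.allclose(np.linalg.svd(A, compute_uv=False), s)
```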
\section{Polar Decomposition}
\begin{theoremHigh}[Polar Decomposition]\label{theorem:polar-decomposition}
Given a real $n\times n$ square matrix $\bA$, it can be factored as
$$
\bA = \bQ_l \bS,
$$
where $\bQ_l$ is an orthogonal matrix and $\bS$ is a positive semidefinite matrix. This form is called the \textit{left polar decomposition}. The matrix $\bA$ can also be factored as
$$
\bA = \bS\bQ_r,
$$
where $\bQ_r$ is an orthogonal matrix and $\bS$ is a positive semidefinite matrix. This form is called the \textit{right polar decomposition}.
In particular, the PSD factor $\bS$ in the left and right polar decompositions of a square matrix $\bA$ is \textbf{unique}; when $\bA$ is nonsingular, the orthogonal factor is unique as well.
\end{theoremHigh}
Since every $n\times n$ square matrix $\bA$ has a full SVD $\bA = \bU \bSigma \bV^\top$, where both $\bU$ and $\bV$ are $n\times n$ orthogonal matrices, we have $\bA = (\bU\bV^\top)( \bV\bSigma \bV^\top) = \bQ_l\bS$, where it can be easily verified that $\bQ_l = \bU\bV^\top$ is an orthogonal matrix and $\bS = \bV\bSigma \bV^\top$ is a symmetric matrix. We notice that the singular values in $\bSigma$ are nonnegative, so $\bS=\bV\bSigma \bV^\top = (\bSigma^{1/2}\bV^\top)^\top(\bSigma^{1/2}\bV^\top)$, showing that $\bS$ is PSD (Theorem~\ref{lemma:nonsingular-factor-of-PD}).
Similarly, we have $\bA = \bU \bSigma \bU^\top \bU \bV^\top = (\bU \bSigma \bU^\top)( \bU \bV^\top)=\bS\bQ_r$, and $\bS=\bU \bSigma \bU^\top = (\bSigma^{1/2}\bU^\top)^\top(\bSigma^{1/2}\bU^\top)$, so this $\bS$ is PSD as well.
For the uniqueness of the right polar decomposition, suppose the decomposition is not unique and two decompositions are given by
$$
\bA = \bS_1\bQ_1 = \bS_2\bQ_2,
$$
such that
$$
\bS_1= \bS_2\bQ_2\bQ_1^\top.
$$
Since $\bS_1$ and $\bS_2$ are symmetric, we have
$$
\bS_1^2 = \bS_1\bS_1^\top = \bS_2\bQ_2\bQ_1^\top\bQ_1 \bQ_2^\top\bS_2 = \bS_2^2.
$$
This implies $\bS_1=\bS_2$ by the uniqueness of the square root (Theorem~\ref{theorem:unique-factor-pd}, p.~\pageref{theorem:unique-factor-pd}), and thus the decomposition is unique. The uniqueness of the left polar decomposition follows similarly.
\begin{corollary}[Full Rank Polar Decomposition]
When $\bA\in \real^{n\times n}$ has full rank, then the $\bS$ in both the left and right polar decomposition above is a symmetric positive definite matrix.
\end{corollary}
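The SVD-based construction of the left polar decomposition can be sketched numerically (our illustration on an arbitrary square matrix):

```python
import numpy as np

rng = np.random.default_rng(11)
A = rng.standard_normal((4, 4))

U, s, Vt = np.linalg.svd(A)
Ql = U @ Vt                       # orthogonal factor Q_l = U V^T
S = Vt.T @ np.diag(s) @ Vt        # PSD factor S = V Sigma V^T

assert np.allclose(Ql @ S, A)                   # A = Q_l S
assert np.allclose(Ql @ Ql.T, np.eye(4))        # Q_l is orthogonal
assert np.all(np.linalg.eigvalsh(S) >= -1e-10)  # S is PSD
```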
\section{Application: Least Squares via the Full QR Decomposition, UTV, SVD}\label{section:application-ls-qr}
\subsection*{\textbf{Least Squares via the Full QR Decomposition}}
Let's consider the overdetermined system $\bA\bx = \bb$, where $\bA\in \real^{m\times n}$ with $m\geq n$ is the data matrix and $\bb\in \real^m$ is the observation vector. Normally, $\bA$ has full column rank, since the columns of real-world data matrices are very likely to be linearly independent. The least squares (LS) solution minimizing $||\bA\bx-\bb||^2$ is given by $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$, where $\bA^\top\bA$ is invertible since $\bA$ has full column rank and $rank(\bA^\top\bA)=rank(\bA)$.\index{Least squares}
However, the inverse of a matrix is not easy to compute. We can instead use the QR decomposition to find the least squares solution, as illustrated in the following theorem.
\begin{theorem}[LS via QR for Full Column Rank Matrix]\label{theorem:qr-for-ls}
Let $\bA\in \real^{m\times n}$ have full column rank with $m\geq n$, and let $\bA=\bQ\bR$ be its full QR decomposition, where $\bQ\in\real^{m\times m}$ is an orthogonal matrix and $\bR\in \real^{m\times n}$ is an upper triangular matrix appended by an additional $m-n$ zero rows. Suppose $\bR = \begin{bmatrix}
\bR_1 \\
\bzero
\end{bmatrix}$, where $\bR_1 \in \real^{n\times n}$ is the square upper triangular block in $\bR$, and let $\bb\in \real^m$. Then the LS solution to $\bA\bx=\bb$ is given by
$$
\bx_{LS} = \bR_1^{-1}\bc,
$$
where $\bc$ is the first $n$ components of $\bQ^\top\bb$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-for-ls}]
Since $\bA=\bQ\bR$ is the full QR decomposition of $\bA$ and $m\geq n$, the last $m-n$ rows of $\bR$ are zero, as shown in Figure~\ref{fig:qr-comparison} (p.~\pageref{fig:qr-comparison}). Then $\bR_1 \in \real^{n\times n}$ is the square upper triangular block in $\bR$ and
$
\bQ^\top \bA = \bR = \begin{bmatrix}
\bR_1 \\
\bzero
\end{bmatrix}.
$
Thus,
$$
\begin{aligned}
||\bA\bx-\bb||^2 &= (\bA\bx-\bb)^\top(\bA\bx-\bb)\\
&=(\bA\bx-\bb)^\top\bQ\bQ^\top (\bA\bx-\bb) \qquad &(\text{Since $\bQ$ is an orthogonal matrix})\\
&=||\bQ^\top \bA \bx-\bQ^\top\bb||^2 \qquad &(\text{Invariant under orthogonal})\\
&=\left\Vert\begin{bmatrix}
\bR_1 \\
\bzero
\end{bmatrix} \bx-\bQ^\top\bb\right\Vert^2\\
&=||\bR_1\bx - \bc||^2+||\bd||^2,
\end{aligned}
$$
where $\bc$ is the first $n$ components of $\bQ^\top\bb$ and $\bd$ is the last $m-n$ components of $\bQ^\top\bb$. Since $||\bd||^2$ does not depend on $\bx$, the LS solution can be calculated by back substitution of the upper triangular system $\bR_1\bx = \bc$, i.e., $\bx_{LS} = \bR_1^{-1}\bc$.
\end{proof}
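Theorem~\ref{theorem:qr-for-ls} is easy to check numerically. The following is a minimal NumPy sketch (an illustration, not part of the original text): it forms the full QR decomposition, solves the triangular system $\bR_1\bx = \bc$, and compares the result with the normal-equations solution $(\bA^\top\bA)^{-1}\bA^\top\bb$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n))   # full column rank with probability 1
b = rng.standard_normal(m)

# Full QR decomposition: Q is m x m, R is m x n with zero rows at the bottom.
Q, R = np.linalg.qr(A, mode="complete")
R1 = R[:n, :]                     # square upper triangular block of R
c = (Q.T @ b)[:n]                 # first n components of Q^T b

# Solve the triangular system R1 x = c (back substitution).
x_qr = np.linalg.solve(R1, c)

# Compare with the normal-equations solution (A^T A)^{-1} A^T b.
x_ls = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x_qr, x_ls)
```

The two routes agree up to floating-point error; the QR route avoids forming $\bA^\top\bA$, which squares the condition number.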
To verify Theorem~\ref{theorem:qr-for-ls}, consider the full QR decomposition $\bA = \bQ\bR$ with $\bQ\in \real^{m\times m}$ and $\bR\in \real^{m\times n}$. Substituting into the LS solution $\bx_{LS} = (\bA^\top\bA)^{-1}\bA^\top\bb$, we obtain
\begin{equation}\label{equation:qr-for-ls-1}
\begin{aligned}
\bx_{LS} &= (\bA^\top\bA)^{-1}\bA^\top\bb \\
&= (\bR^\top\bQ^\top\bQ\bR)^{-1} \bR^\top\bQ^\top \bb\\
&= (\bR^\top\bR)^{-1} \bR^\top\bQ^\top \bb \\
&= (\bR_1^\top\bR_1)^{-1} \bR^\top\bQ^\top \bb \\
&=\bR_1^{-1} \bR_1^{-\top} \bR^\top\bQ^\top \bb\\
&= \bR_1^{-1} \bR_1^{-\top} \bR_1^\top\bQ_1^\top \bb \\
&=\bR_1^{-1} \bQ_1^\top \bb,
\end{aligned}
\end{equation}
where $\bR = \begin{bmatrix}
\bR_1 \\
\bzero
\end{bmatrix}$ and $\bR_1\in \real^{n\times n}$ is an upper triangular matrix, and $\bQ_1 =\bQ_{1:m,1:n}\in \real^{m\times n}$ is the first $n$ columns of $\bQ$ (i.e., $\bQ_1\bR_1$ is the reduced QR decomposition of $\bA$). Then the result of Equation~\eqref{equation:qr-for-ls-1} agrees with Theorem~\ref{theorem:qr-for-ls}.
To conclude, the QR decomposition gives a direct derivation of the least squares solution, resulting in the statement of Theorem~\ref{theorem:qr-for-ls}. Moreover, we verified the calculus-based LS result indirectly via the QR decomposition as well, and the two results coincide. For readers interested in LS from the linear algebra point of view, a pictorial view of least squares for full column rank $\bA$ within the fundamental theorem of linear algebra is provided in \citet{lu2021revisit}.
\subsection*{\textbf{Least Squares via ULV/URV for Rank Deficient Matrices}}
In the previous section, we introduced LS via the full QR decomposition for full-rank matrices.
However, it often happens that the matrix is rank-deficient. If $\bA$ does not have full column rank, $\bA^\top\bA$ is not invertible.
We can then use the ULV/URV decomposition to find the least squares solution, as illustrated in the following theorem.\index{Least squares}\index{Rank deficient}
\begin{theorem}[LS via ULV/URV for Rank Deficient Matrix]\label{theorem:qr-for-ls-urv}
Let $\bA\in \real^{m\times n}$ with rank $r$ and $m\geq n$. Suppose $\bA=\bU\bT\bV$ is its full ULV/URV decomposition, where $\bU\in\real^{m\times m}$ and $\bV\in \real^{n\times n}$ are orthogonal matrices, and
$$
\bT = \begin{bmatrix}
\bT_{11} & \bzero \\
\bzero & \bzero
\end{bmatrix}
$$
where $\bT_{11} \in \real^{r\times r}$ is a lower triangular matrix or an upper triangular matrix.
Suppose $\bb\in \real^m$, then the LS solution with the minimal 2-norm to $\bA\bx=\bb$ is given by
$$
\bx_{LS} = \bV^\top
\begin{bmatrix}
\bT_{11}^{-1}\bc\\
\bzero
\end{bmatrix},
$$
where $\bc$ is the first $r$ components of $\bU^\top\bb$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:qr-for-ls-urv}]
Since $\bA=\bU\bT\bV$ is the full ULV/URV decomposition of $\bA$ with $\bU$ and $\bV$ orthogonal, we have
$
\bU^\top \bA \bV^\top = \bT = \begin{bmatrix}
\bT_{11} & \bzero \\
\bzero & \bzero
\end{bmatrix}.
$
Thus,
$$
\begin{aligned}
||\bA\bx-\bb||^2 &= (\bA\bx-\bb)^\top(\bA\bx-\bb)\\
&=(\bA\bx-\bb)^\top\bU\bU^\top (\bA\bx-\bb) \qquad &(\text{Since $\bU$ is an orthogonal matrix})\\
&=||\bU^\top \bA \bx-\bU^\top\bb||^2 \qquad &(\text{Invariant under orthogonal})\\
&=|| \bU^\top\bU\bT\bV \bx-\bU^\top\bb||^2\\
&=|| \bT\bV \bx-\bU^\top\bb||^2\\
&=||\bT_{11}\be - \bc||^2+||\bd||^2,
\end{aligned}
$$
where $\bc$ is the first $r$ components and $\bd$ the last $m-r$ components of $\bU^\top\bb$, while $\be$ is the first $r$ components and $\bff$ the last $n-r$ components of $\bV\bx$:
$$
\bU^\top\bb
= \begin{bmatrix}
\bc \\
\bd
\end{bmatrix},
\qquad
\bV\bx
= \begin{bmatrix}
\be \\
\bff
\end{bmatrix}
$$
The LS solution can then be calculated by back/forward substitution of the upper/lower triangular system $\bT_{11}\be = \bc$, i.e., $\be = \bT_{11}^{-1}\bc$. Since $\bff$ does not affect the residual, for $\bx$ to have minimal 2-norm, $\bff$ must be zero. That is,
$$
\bx_{LS} = \bV^\top
\begin{bmatrix}
\bT_{11}^{-1}\bc\\
\bzero
\end{bmatrix}.
$$
This completes the proof.
\end{proof}
\paragraph{A word on the minimal 2-norm LS solution} For the least squares problem, the set of all minimizers
$$
\mathcal{X} = \{\bx\in \real^n: ||\bA\bx-\bb|| =\min \}
$$
is convex \citep{golub2013matrix}. Indeed, if $\bx_1, \bx_2 \in \mathcal{X}$ and $\lambda \in [0,1]$, then
$$
||\bA(\lambda\bx_1 + (1-\lambda)\bx_2) -\bb|| \leq \lambda||\bA\bx_1-\bb|| +(1-\lambda)||\bA\bx_2-\bb|| = \mathop{\min}_{\bx\in \real^n} ||\bA\bx-\bb||.
$$
Thus $\lambda\bx_1 + (1-\lambda)\bx_2 \in \mathcal{X}$. In the above proof, if we do not set $\bff=\bzero$, we obtain further least squares solutions. However, the minimal 2-norm least squares solution is unique. For the full-rank case in the previous section, the least squares solution is unique and thus trivially has minimal 2-norm. See \citet{foster2003solving, golub2013matrix} for a more detailed discussion of this topic.
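The non-uniqueness of rank-deficient LS minimizers, and the distinguished role of the minimal 2-norm solution, can be checked numerically. Below is a small NumPy sketch (illustrative only; it uses the SVD-based pseudo-inverse rather than a ULV/URV routine, since NumPy does not ship one): adding a null-space component to the minimal-norm solution leaves the residual unchanged but strictly increases the norm.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 7, 5, 2
# Rank-deficient A (rank r < n), so the LS minimizer is not unique.
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
b = rng.standard_normal(m)

A_pinv = np.linalg.pinv(A)
x_min = A_pinv @ b                       # minimal 2-norm LS solution

# Add a component from the null space of A: (I - A^+ A) z lies in N(A).
z = rng.standard_normal(n)
x_other = x_min + (z - A_pinv @ (A @ z))

# Both attain the same (minimal) residual ...
assert np.isclose(np.linalg.norm(A @ x_min - b), np.linalg.norm(A @ x_other - b))
# ... but x_min has the strictly smaller 2-norm.
assert np.linalg.norm(x_min) < np.linalg.norm(x_other)
```

Since $\bA^+\bb$ lies in the row space of $\bA$ and the added component is orthogonal to it, the norms add in Pythagorean fashion, which is exactly why the minimal-norm solution is unique.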
\subsection*{\textbf{Least Squares via SVD for Rank Deficient Matrices}}
\index{Least squares}
Apart from the UTV decomposition for the rank-deficient least squares problem, the SVD serves as an alternative.
\begin{theorem}[LS via SVD for Rank Deficient Matrix]\label{theorem:svd-deficient-rank}
Let $\bA\in \real^{m\times n}$ with $rank(\bA)=r$, and let $\bA=\bU\bSigma\bV^\top$ be its full SVD, where $\bU\in\real^{m\times m}$ and $\bV\in \real^{n\times n}$ are orthogonal matrices. Suppose $\bU=[\bu_1, \bu_2, \ldots, \bu_m]$, $\bV=[\bv_1, \bv_2, \ldots, \bv_n]$, and $\bb\in \real^m$. Then the LS solution with minimal 2-norm to $\bA\bx=\bb$ is given by
\begin{equation}\label{equation:svd-ls-solution}
\bx_{LS} = \sum_{i=1}^{r}\frac{\bu_i^\top \bb}{\sigma_i}\bv_i = \bV\bSigma^+\bU^\top \bb,
\end{equation}
where $\bSigma^+ \in \real^{n\times m}$ is diagonal in its upper-left block, $\bSigma^+ = \begin{bmatrix}
\bSigma_1^+ & \bzero \\
\bzero & \bzero
\end{bmatrix}$ where $\bSigma_1^+=diag(\frac{1}{\sigma_1}, \frac{1}{\sigma_2}, \ldots, \frac{1}{\sigma_r})$.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:svd-deficient-rank}]
Write out the loss to be minimized
$$
\begin{aligned}
||\bA\bx-\bb||^2 &= (\bA\bx-\bb)^\top(\bA\bx-\bb)\\
&=(\bA\bx-\bb)^\top\bU\bU^\top (\bA\bx-\bb) \qquad &(\text{Since $\bU$ is an orthogonal matrix})\\
&=||\bU^\top \bA \bx-\bU^\top\bb||^2 \qquad &(\text{Invariant under orthogonal})\\
&=||\bU^\top \bA \bV\bV^\top \bx-\bU^\top\bb||^2 \qquad &(\text{Since $\bV$ is an orthogonal matrix})\\
&=||\bSigma\balpha - \bU^\top\bb||^2 \qquad &(\text{Let $\balpha=\bV^\top \bx$})\\
&=\sum_{i=1}^{r}(\sigma_i\balpha_i - \bu_i^\top\bb)^2 +\sum_{i=r+1}^{m}(\bu_i^\top \bb)^2. \qquad &(\text{Since $\sigma_{r+1}=\sigma_{r+2}= \ldots= \sigma_m=0$})
\end{aligned}
$$
Since $\bx$ enters only through $\balpha$, we set $\balpha_i = \frac{\bu_i^\top\bb}{\sigma_i}$ for all $i\in \{1, 2, \ldots, r\}$ to minimize the above expression. The values of $\balpha_{r+1}, \balpha_{r+2}, \ldots, \balpha_{n}$ do not affect the objective; from the regularization point of view (or, here, to obtain the minimal 2-norm), we set them to zero. This gives the LS solution via the SVD:
$$
\bx_{LS} = \sum_{i=1}^{r}\frac{\bu_i^\top \bb}{\sigma_i}\bv_i=\bV\bSigma^+\bU^\top \bb = \bA^+\bb,
$$
where $\bA^+=\bV\bSigma^+\bU^\top\in \real^{n\times m}$ is known as the \textit{pseudo-inverse} of $\bA$.\index{Pseudo-inverse}
\end{proof}
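The construction of $\bSigma^+$ in Theorem~\ref{theorem:svd-deficient-rank} can be carried out directly. The following NumPy sketch (illustrative, not part of the original text) builds $\bSigma^+ \in \real^{n\times m}$ by inverting only the $r$ nonzero singular values and confirms that $\bV\bSigma^+\bU^\top\bb$ matches the library pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 6, 4, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank r
b = rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A)            # full SVD: U is m x m, Vt is n x n
# Build Sigma^+ in R^{n x m}: invert only the r nonzero singular values.
Sigma_plus = np.zeros((n, m))
Sigma_plus[:r, :r] = np.diag(1.0 / s[:r])

x_svd = Vt.T @ Sigma_plus @ U.T @ b    # x_LS = V Sigma^+ U^T b
assert np.allclose(x_svd, np.linalg.pinv(A) @ b)
```

Note that `np.linalg.pinv` applies the same construction internally, with a tolerance cutoff to decide which singular values count as zero.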
\section{Application: Principal Component Analysis (PCA) via the Spectral Decomposition and the SVD}
Given a data set of $n$ observations $\{\bx_1,\bx_2,\ldots,\bx_n\}$ where $\bx_i\in \real^p$ for all $i\in \{1,2,\ldots,n\}$. Our goal is to project the data onto a low-dimensional space, say $m<p$. Define the sample mean vector and sample covariance matrix\index{Principal component analysis}
$$
\overline{\bx} = \frac{1}{n}\sum_{i=1}^{n}\bx_i
\qquad
\text{and}
\qquad
\bS = \frac{1}{n-1}\sum_{i=1}^{n} (\bx_i - \overline{\bx})(\bx_i-\overline{\bx})^\top.
$$
where the $n-1$ term in the covariance matrix makes it an unbiased and consistent estimator of the covariance \citep{lu2021rigorous}. Alternatively, the covariance matrix can be defined as $\bS = \frac{1}{\textcolor{blue}{n}}\sum_{i=1}^{n} (\bx_i - \overline{\bx})(\bx_i-\overline{\bx})^\top$, which is also a consistent estimator of the covariance matrix.\footnote{Consistency: an estimator $\theta_n$ of $\theta$, constructed on the basis of a sample of size $n$, is said to be consistent if $\theta_n\stackrel{p}{\rightarrow} \theta$ as $n \rightarrow \infty$.}
Each data point $\bx_i$ is then projected onto a scalar value $\bu_1^\top\bx_i$ by a unit vector $\bu_1$. The mean of the projected data is $\Exp[\bu_1^\top\bx_i] = \bu_1^\top \overline{\bx}$, and the variance of the projected data is given by
$$
\begin{aligned}
\Cov[\bu_1^\top\bx_i] &= \frac{1}{n-1} \sum_{i=1}^{n}( \bu_1^\top \bx_i - \bu_1^\top\overline{\bx})^2=
\frac{1}{n-1} \sum_{i=1}^{n}\bu_1^\top ( \bx_i -\overline{\bx})( \bx_i -\overline{\bx})^\top\bu_1\\
&=\bu_1^\top\bS\bu_1.
\end{aligned}
$$
We want to maximize the projected variance $\bu_1^\top\bS\bu_1$ with respect to $\bu_1$, where we must constrain $||\bu_1||$ to prevent $||\bu_1|| \rightarrow \infty$; we impose $\bu_1^\top\bu_1=1$. By the method of Lagrange multipliers (see \citet{bishop2006pattern, boyd2004convex}), we form the Lagrangian
$$
\bu_1^\top\bS\bu_1 + \lambda_1 (1 - \bu_1^\top\bu_1).
$$
Setting the derivative with respect to $\bu_1$ to zero leads to
$$
\bS\bu_1 = \lambda_1\bu_1 \leadto \bu_1^\top\bS\bu_1 = \lambda_1.
$$
That is, $\bu_1$ is an eigenvector of $\bS$ corresponding to the eigenvalue $\lambda_1$, and the maximum-variance projection $\bu_1$ corresponds to the largest eigenvalue of $\bS$. This eigenvector is known as the \textit{first principal axis}.
The remaining principal axes are defined by the eigenvectors of successively smaller eigenvalues until we have $m$ such principal components; this procedure achieves the dimension reduction. This is known as the \textit{maximum variance formulation} of PCA \citep{hotelling1933analysis, bishop2006pattern, shlens2014tutorial}. A \textit{minimum-error formulation} of PCA is discussed in \citet{pearson1901liii, bishop2006pattern}.
\paragraph{PCA via the spectral decomposition}
Now let us assume the data are centered so that $\overline{\bx}=\bzero$ (otherwise we replace each $\bx_i$ by $\bx_i-\overline{\bx}$ to center the data). Let the data matrix $\bX \in \real^{n\times p}$ contain the data observations as rows. The covariance matrix is then given by
$$
\bS= \frac{\bX^\top\bX}{n-1},
$$
which is a symmetric matrix, and its spectral decomposition is given by
\begin{equation}\label{equation:pca-equ1}
\bS = \bU\bLambda\bU^\top,
\end{equation}
where $\bU$ is an orthogonal matrix of eigenvectors (the columns of $\bU$ are eigenvectors of $\bS$), and $\bLambda=diag(\lambda_1, \lambda_2,\ldots, \lambda_p)$ is a diagonal matrix of eigenvalues (ordered such that $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_p$). The eigenvectors are called \textit{principal axes} of the data, and they \textit{decorrelate} the covariance matrix. Projections of the data onto the principal axes are called the \textit{principal components}. The $i$-th principal component is given by the $i$-th column of $\bX\bU$. To reduce the dimension from $p$ to $m$, we simply select the first $m$ columns of $\bX\bU$.
\paragraph{PCA via the SVD}
If the SVD of $\bX$ is given by $\bX = \bP\bSigma\bQ^\top$, then the covariance matrix can be written as
\begin{equation}\label{equation:pca-equ2}
\bS= \frac{\bX^\top\bX}{n-1} = \bQ \frac{\bSigma^2}{n-1}\bQ^\top,
\end{equation}
where $\bQ\in \real^{p\times p}$ is an orthogonal matrix containing the right singular vectors of $\bX$, and the upper-left block of $\bSigma$ is a diagonal matrix of singular values $diag(\sigma_1,\sigma_2,\ldots)$ with $\sigma_1\geq \sigma_2\geq \ldots$. The number of singular values equals $\min\{n,p\}$, which is not larger than $p$, and some of them may be zero.
Comparing Equation~\eqref{equation:pca-equ2} with Equation~\eqref{equation:pca-equ1} shows that Equation~\eqref{equation:pca-equ2} is also a spectral decomposition of $\bS$,
since the eigenvalues in $\bLambda$ and the singular values in $\bSigma$ are both ordered in descending order, and the spectral decomposition is unique in terms of its eigenspaces (Section~\ref{section:uniqueness-spectral-decomposition}, p.~\pageref{section:uniqueness-spectral-decomposition}).
That is, the right singular vectors $\bQ$ are also the principal axes which decorrelate the covariance matrix,
and the singular values are related to the eigenvalues of the covariance matrix via $\lambda_i = \frac{\sigma_i^2 }{n-1}$. To reduce the dimensionality of the data from $p$ to $m$, we select the largest $m$ singular values and the corresponding right singular vectors. This is also related to the truncated SVD (TSVD) $\bX_m = \sum_{i=1}^{m}\sigma_i \bp_i\bq_i^\top$ that will be discussed in the next section, where the $\bp_i$'s and $\bq_i$'s are the columns of $\bP$ and $\bQ$.
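The agreement between the two routes, i.e., the spectral decomposition of $\bS$ and the SVD of the centered data matrix, with $\lambda_i = \sigma_i^2/(n-1)$, can be verified on synthetic data. A minimal NumPy sketch (illustrative, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 5
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))
X = X - X.mean(axis=0)                      # center the data

S = X.T @ X / (n - 1)                       # sample covariance matrix

# Route 1: spectral decomposition of S (eigh returns ascending order).
eigvals = np.linalg.eigh(S)[0][::-1]        # reverse to descending order

# Route 2: SVD of the centered data matrix (descending by convention).
sing = np.linalg.svd(X, compute_uv=False)

# The two routes agree: lambda_i = sigma_i^2 / (n - 1).
assert np.allclose(eigvals, sing**2 / (n - 1))
```

In practice the SVD route is preferred numerically because it avoids explicitly forming $\bX^\top\bX$.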
\paragraph{A byproduct of PCA via the SVD for high-dimensional data} For a principal axis $\bu_i$ of $\bS = \frac{\bX^\top\bX}{n-1}$, we have
$$
\frac{\bX^\top\bX}{n-1} \bu_i = \lambda_i \bu_i.
$$
Left-multiplying by $\bX$, we obtain
$$
\frac{\bX\bX^\top}{n-1} (\bX\bu_i) = \lambda_i (\bX\bu_i),
$$
which implies that $\lambda_i$ is also an eigenvalue of $\frac{\bX\bX^\top}{n-1} \in \real^{n\times n}$, with corresponding eigenvector $\bX\bu_i$. This is also stated in the proof of Theorem~\ref{theorem:reduced_svd_rectangular} (p.~\pageref{theorem:reduced_svd_rectangular}), the existence of the SVD. If $p \gg n$, instead of finding the eigenvectors of $\bS$, i.e., the principal axes of $\bS$, we can find the eigenvectors of $\frac{\bX\bX^\top}{n-1}$. This reduces the complexity from $O(p^3)$ to $O(n^3)$. Now suppose $\bv_i$ is an eigenvector of $\frac{\bX\bX^\top}{n-1}$ corresponding to a nonzero eigenvalue $\lambda_i$:
$$
\frac{\bX\bX^\top}{n-1} \bv_i = \lambda_i \bv_i.
$$
Left-multiplying by $\bX^\top$, we obtain
$$
\frac{\bX^\top\bX}{n-1} (\bX^\top\bv_i) = \bS(\bX^\top\bv_i) = \lambda_i (\bX^\top\bv_i),
$$
i.e., the eigenvector $\bu_i$ of $\bS$ is proportional to $\bX^\top\bv_i$, where $\bv_i$ is the eigenvector of $\frac{\bX\bX^\top}{n-1}$ corresponding to the same eigenvalue $\lambda_i$. A further normalization step is needed to make $||\bu_i||=1$.
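This byproduct is easy to check numerically: for $p \gg n$, we diagonalize the small $n\times n$ matrix $\bX\bX^\top/(n-1)$ and recover a principal axis of $\bS$ by normalizing $\bX^\top\bv_i$. A NumPy sketch (illustrative, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 10, 500                        # high-dimensional: p >> n
X = rng.standard_normal((n, p))
X = X - X.mean(axis=0)

S_small = X @ X.T / (n - 1)           # n x n instead of p x p
lam, V = np.linalg.eigh(S_small)
i = np.argmax(lam)                    # take the top eigenpair
v, lam_top = V[:, i], lam[i]

u = X.T @ v                           # candidate principal axis of S
u = u / np.linalg.norm(u)             # normalize so that ||u|| = 1

# u is an eigenvector of S = X^T X / (n-1) with the same eigenvalue.
S = X.T @ X / (n - 1)
assert np.allclose(S @ u, lam_top * u)
```

Here the $p\times p$ eigenproblem is never solved; only the $n\times n$ one is, which is the source of the $O(p^3)$ to $O(n^3)$ reduction.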
\section{Application: Low-Rank Approximation}\label{section:svd-low-rank-approxi}
For the low-rank approximation problem, there are two basic types, arising from the interplay between rank and error: the \textit{fixed-precision approximation problem} and the \textit{fixed-rank approximation problem}. In the fixed-precision approximation problem, for a given matrix $\bA$ and a given tolerance $\epsilon$, one wants to find a matrix $\bB$ with rank $r = r(\epsilon)$ such that $||\bA-\bB|| \leq \epsilon$ in an appropriate matrix norm. Conversely, in the fixed-rank approximation problem, one looks for a matrix $\bB$ with fixed rank $k$ and an error $||\bA-\bB||$ as small as possible. In this section, we consider the latter. Some excellent examples can also be found in \citet{kishore2017literature, martinsson2019randomized}.
Suppose we want to approximate a matrix $\bA\in \real^{m\times n}$ of rank $r$ by a matrix $\bB$ of rank $k<r$. The approximation quality is measured by the spectral norm:\index{Low-rank approximation}
$$
\bB = \mathop{\arg\min}_{\bB} \ \ || \bA - \bB||_2,
$$
where the spectral norm is defined as follows:
\begin{definition}[Spectral Norm]\label{definition:spectral_norm}
The spectral norm of a matrix $\bA\in \real^{m\times n}$ is defined as
$$
||\bA||_2 = \mathop{\max}_{\bx\neq\bzero} \frac{||\bA\bx||_2}{||\bx||_2} =\mathop{\max}_{\bx\in \real^n: ||\bx||_2=1} ||\bA\bx||_2 ,
$$
which is also the maximal singular value of $\bA$, i.e., $||\bA||_2 = \sigma_1(\bA)$.
\end{definition}
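As a quick numerical confirmation of Definition~\ref{definition:spectral_norm} (a NumPy sketch, not part of the original text): the 2-norm returned by the library equals the largest singular value, and $||\bA\bx||_2 \leq ||\bA||_2\,||\bx||_2$ for any probe vector.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 4))

# The spectral norm equals the largest singular value of A.
spec = np.linalg.norm(A, 2)
assert np.isclose(spec, np.linalg.svd(A, compute_uv=False)[0])

# Sub-multiplicativity with a random probe vector: ||Ax|| <= ||A||_2 ||x||.
x = rng.standard_normal(4)
assert np.linalg.norm(A @ x) <= spec * np.linalg.norm(x) + 1e-12
```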
Then, we can recover the best rank-$k$ approximation by the following theorem.
\begin{theorem}[Eckart-Young-Mirsky Theorem w.r.t. Spectral Norm\index{Eckart-Young-Mirsky theorem}]\label{theorem:young-theorem-spectral}
Given matrix $\bA\in \real^{m\times n}$ and $1\leq k\leq rank(\bA)=r$, and let $\bA_k$ be the truncated SVD (TSVD) of $\bA$ with the largest $k$ terms, i.e., $\bA_k = \sum_{i=1}^{k} \sigma_i\bu_i\bv_i^\top$ from SVD of $\bA=\sum_{i=1}^{r} \sigma_i\bu_i\bv_i^\top$ by zeroing out the $r-k$ trailing singular values of $\bA$. Then $\bA_k$ is the best rank-$k$ approximation to $\bA$ in terms of the spectral norm.
\end{theorem}
\begin{proof}[of Theorem~\ref{theorem:young-theorem-spectral}]
We need to show that for any matrix $\bB$ with $rank(\bB)=k$, we have $||\bA-\bB||_2 \geq ||\bA-\bA_k||_2$.
Since $rank(\bB)=k$, we have $dim (\nspace(\bB))=n-k$. As a result, any $(k+1)$-dimensional subspace of $\real^n$ intersects $\nspace(\bB)$ nontrivially. As shown in Lemma~\ref{lemma:svd-four-orthonormal-Basis} (p.~\pageref{lemma:svd-four-orthonormal-Basis}), $\{\bv_1,\bv_2, \ldots, \bv_r\}$ is an orthonormal basis of $\cspace(\bA^\top)\subset \real^n$, so we can take the first $k+1$ of the $\bv_i$'s to span a $(k+1)$-dimensional subspace. Let $\bV_{k+1} = [\bv_1, \bv_2, \ldots, \bv_{k+1}]$; then there is a vector $\bx$ such that
$$
\bx \in \nspace(\bB) \cap \cspace(\bV_{k+1}),\qquad s.t.\,\,\,\, ||\bx||_2=1.
$$
That is, $\bx = \sum_{i=1}^{k+1} a_i \bv_i$ with $||\bx||_2^2 = \left\Vert\sum_{i=1}^{k+1} a_i \bv_i\right\Vert_2^2 = \sum_{i=1}^{k+1}a_i^2=1$.
Thus,
$$
\begin{aligned}
||\bA-\bB||_2^2 &\geq \frac{||(\bA-\bB)\bx||_2^2}{||\bx||_2^2} = ||(\bA-\bB)\bx||_2^2, \qquad &(\text{Definition of spectral norm and } ||\bx||_2=1) \\
&= ||\bA\bx||_2^2, \qquad &(\text{$\bx$ in null space of $\bB$}) \\
&=\sum_{i=1}^{k+1} \sigma_i^2 (\bv_i^\top \bx)^2, \qquad &(\text{$\bx$ orthogonal to $\bv_{k+2}, \ldots, \bv_r$}) \\
&\geq \sigma_{k+1}^2\sum_{i=1}^{k+1} (\bv_i^\top \bx)^2, \qquad &(\sigma_{k+1}\leq \sigma_{k}\leq\ldots\leq \sigma_{1}) \\
&= \sigma_{k+1}^2\sum_{i=1}^{k+1} a_i^2, \qquad &(\bv_i^\top \bx = a_i) \\
&= \sigma_{k+1}^2.
\end{aligned}
$$
It is trivial that $||\bA-\bA_k||_2^2 = ||\sum_{i=k+1}^{r}\sigma_i\bu_i\bv_i^\top||_2^2=\sigma_{k+1}^2$. Thus, $||\bA-\bA_k||_2 \leq ||\bA-\bB||_2$, which completes the proof.
\end{proof}
Moreover, readers can prove that $\bA_k$ is also the best rank-$k$ approximation to $\bA$ in terms of the Frobenius norm, where the minimal error is given by the Euclidean norm of the singular values that have been zeroed out: $||\bA-\bA_k||_F =\sqrt{\sigma_{k+1}^2 +\sigma_{k+2}^2+\ldots +\sigma_{r}^2}$.
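Both error formulas, $||\bA-\bA_k||_2 = \sigma_{k+1}$ for the spectral norm and $||\bA-\bA_k||_F = \sqrt{\sum_{i>k}\sigma_i^2}$ for the Frobenius norm, can be confirmed numerically. A minimal NumPy sketch (illustrative, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A)
k = 3

# Truncated SVD: keep the k largest singular triplets.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Spectral-norm error is sigma_{k+1} (s[k] with 0-based indexing).
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])
# Frobenius-norm error is the Euclidean norm of the discarded tail.
assert np.isclose(np.linalg.norm(A - A_k, "fro"), np.sqrt(np.sum(s[k:] ** 2)))
```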
The SVD thus gives the best low-rank approximation of a matrix. As mentioned in \citet{stewart1998matrix, kishore2017literature}, \emph{the singular value decomposition is the creme de la creme of rank-reducing decompositions — the decomposition that all others try to beat}. Likewise, \citet{strang1993introduction} remarks that \emph{the SVD is the climax of this linear algebra course}.
\section{Polar decomposition}
\chapter*{\centering \begin{normalsize}Preface\end{normalsize}}
In 1954, Alston S. Householder published \textit{Principles of Numerical Analysis}, one of the first modern treatments of matrix decomposition, which favored a (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices.
Today, matrix decomposition has become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this book is to give a self-contained introduction to the concepts and mathematical tools of numerical linear algebra and matrix analysis, in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we clearly cannot cover all the useful and interesting results concerning matrix decomposition within the limited scope of this discussion, e.g., separate treatments of Euclidean space, Hermitian space, Hilbert space, and the complex domain. We refer the reader to the linear algebra literature for a more detailed introduction to these fields.
This book is primarily a summary of the purpose and significance of important matrix decomposition methods, e.g., LU, QR, and SVD, together with the origin and complexity of these methods, which sheds light on their modern applications.
Most importantly, it presents improved procedures for most of the decomposition algorithms, which potentially reduce their computational complexity.
Again, this is a decomposition-based text, so we introduce the related background when it is needed. In many other textbooks on linear algebra, the principal ideas are discussed and the matrix decomposition methods serve as a ``byproduct''. Here, we focus on the decomposition methods instead, and the principal ideas serve as fundamental tools for them. The mathematical prerequisite is a first course in linear algebra. Other than this modest background, the development is self-contained, with rigorous proofs provided throughout.
\chapter*{\centering \begin{normalsize}Keywords\end{normalsize}}
Existence and computing of matrix decompositions, Complexity, Floating point operations (flops), Low-rank approximation, Pivot, LU decomposition for nonzero leading principal minors, Data distillation, CR decomposition, CUR/Skeleton decomposition, Interpolative decomposition, Biconjugate decomposition, Coordinate transformation, Hessenberg decomposition, ULV decomposition, URV decomposition, Rank decomposition, Gram-Schmidt process, Householder reflector, Givens rotation, Rank revealing decomposition, Cholesky decomposition and update/downdate, Eigenvalue problems, Alternating least squares, Randomized algorithm.
\input{chapter-worldmap.tex}
\input{chapter-intro.tex}
\input{chapter-LU.tex}
\input{chapter-qr.tex}
\input{chapter-cr.tex}
\input{chapter-skeleton.tex}
\input{chapter-hessenberg.tex}
\input{chapter-eigenvalue.tex}
\input{chapter-spectral.tex}
\input{chapter-svd.tex}
\input{chapter-polar.tex}
\input{chapter-als.tex}
\input{chapter-NMF.tex}
\input{chapter-biconjugate.tex}
\newpage
\vskip 0.2in
\section{Introduction}
\section{Getting started}
When we come across a word or a phrase we have never seen before, we look it up in a dictionary. Likewise, whenever we encounter a sequence for which we do not know a formula, we can look it up in Sloane's
OEIS, an online dictionary of number sequences \cite{OEIS}.
However, since the OEIS database cannot possibly contain every sequence, wouldn't it be great if we could find a formula ourselves, regardless of whether or not our sequence is there? This is precisely the central theme of this work.
\begin{tcolorbox}
\textbf{Theme:} {Given a sequence $a_n, \;\ n=0,1,2,\dots$, find or guess a (homogeneous) linear recurrence relation of the form:
\[ c_r(n)a_{n+r}+ c_{r-1}(n)a_{n+r-1}+ \dots + c_0(n)a_{n} = 0, \]
where $c_i(n)$ could be a constant, a polynomial in $n$,
or even a linear recurrence (in $n$) itself.}
\end{tcolorbox}
\begin{figure}
\centering
\includegraphics[scale=0.29]{diagram.png}
\caption{Left: the classes of sequences in this study; Right: the structure of the paper}
\label{fig:paper_structure}
\end{figure}
This paper is intended to provide a comprehensive background on the concept of ansatz for classes of polynomial, $C$-finite, holonomic, and the recently developed $C^2$-finite sequences. The inclusion relation of the four classes is depicted in Figure \ref{fig:paper_structure}.
While there are other classes of sequences (e.g., algebraic sequences, hypergeometric sequences), we focus exclusively on these classes so that we can provide an in-depth review of the subject from both theoretical and practical aspects. In addition, the reader may notice the beauty in the nested nature of these sequences: a polynomial sequence appears as a coefficient in a holonomic recurrence, and a $C$-finite sequence as a coefficient in a $C^2$-finite recurrence. The right panel of Figure \ref{fig:paper_structure} gives a quick overview of the schematic structure of the paper.
Consequently, the aim of this paper is twofold: firstly, to present a step-by-step guide to ansatzes, and secondly to extend the knowledge of existing ansatzes to the class of $C^2$-finite sequences in a systematic manner. We have tried to keep the paper as self-contained yet easy-to-follow as possible. Every theorem is presented with a proof, where much effort has been made to make the proof accessible and transparent. Every theorem is illustrated with examples or applications that stimulated the development of the concept. While most of these examples are solved analytically, Maple has been used in several intermediate calculations and simplification of algebraic expressions. At the end of each example, we demonstrate the command required to enter the data of the problem into our Maple program.
Last but not least, it is worth emphasizing that the steps we provided in the proofs of theorems can be used to construct problem solutions, and in fact we have followed these steps precisely when implementing our Maple program. The interested reader is invited to study the codes accompanying this paper, provided at \texttt{\url{https://thotsaporn.com/Ansatz.html}} or implement the program in their favorite programming environment.
The outline of this paper is as follows. In the next section, we give a comprehensive review of the existing ansatz along with important theoretical properties.
Section \ref{sec:C2} presents results related to $C^2$-finite sequences where we formally give a definition of $C^2$-finite, and establish several results concerning the generating function and the closure properties. We also list a few interesting unsolved problems in that section.
Through our presentation, we hope that this paper will provide a unifying framework for understanding the various aspects of ansatz in the classes of polynomial, $C$-finite, holonomic, and $C^2$-finite sequences.
\section{Comprehensive review of existing ansatz}
This section concentrates on a comprehensive review of ansatzes in the classes of polynomial (as a sequence), $C$-finite, and holonomic sequences. For a list of excellent resources, see, for example, \cite{KZ,Z1} for $C$-finite sequences, \cite{K} for holonomic sequences, as well as \cite{BP,KP} and the references therein. There are also several available software tools that provide excellent computing facilities for ansatzes. The reader is encouraged to check out, e.g., the Maple package \texttt{gfun} \cite{SZ} and the Mathematica package \texttt{GeneratingFunctions} \cite{M}, which provide commands for dealing with holonomic sequences. Although this list of references is by no means exhaustive, we acknowledge the contributions of all pioneering researchers in the field.
\subsection{Polynomial as a sequence}
\begin{enumerate}[leftmargin=0.4in,label=\textbf{\arabic*.}]
\item {\bf Ansatz:} $a_n = c_kn^k+c_{k-1}n^{k-1}+\dots+c_0.$
\item {\bf Example:} Let $ \displaystyle a_n = \sum_{i=1}^n i^2.$ \;\
Here, $a_n = \dfrac{n^3}{3}+\dfrac{n^2}{2}+\dfrac{n}{6} = \dfrac{n(n+1)(2n+1)}{6}$.
\newpage
\item {\bf Guessing:}
Input: the degree $k$ of polynomial and
sequence $a_n$ (of length more than $k+1$).
We solve the system of linear equations for $c_0, c_1, \dots, c_k$:
\[ \begin{bmatrix}
c_0 & c_1 & \dots & c_k
\end{bmatrix}
\begin{bmatrix}
1 & 1 & 1 & \dots & 1\\
0 & 1 & 2 & \dots & k\\
& & \dots \\
0^k & 1^k & 2^k & \dots & k^k\\
\end{bmatrix} =
\begin{bmatrix}
a_0 & a_1 & \dots & a_k
\end{bmatrix}.
\]
This matrix equation can be solved quickly using the inverse of the (Vandermonde) coefficient matrix.
Once all the $c_i$'s are obtained, we still have to verify that the conjectured polynomial is valid for the remaining terms $a_n$, $n > k$, of the sequence (otherwise there is no solution).
\begin{code}
> A:= [seq(add(i^2,i=0..n),n=0..20)];
> GuessPol(A,0,n);
\end{code}
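The same guessing procedure can be sketched outside Maple. Below is an illustrative NumPy version (not part of the original text): it solves the Vandermonde system on the first $k+1$ terms (equivalently, the transpose of the system above) and then verifies the conjectured polynomial on the remaining terms.

```python
import numpy as np

# Data: a_n = sum_{i=1}^n i^2 for n = 0..20; guess a polynomial of degree k = 3.
a = [sum(i * i for i in range(n + 1)) for n in range(21)]
k = 3

# Solve the Vandermonde system on the first k+1 terms a_0..a_k.
V = np.vander(np.arange(k + 1), k + 1, increasing=True)  # row n: 1, n, n^2, n^3
c = np.linalg.solve(V, np.array(a[: k + 1], dtype=float))

# Verify the conjecture on ALL terms, including a_n for n > k.
for n in range(len(a)):
    assert np.isclose(sum(c[j] * n**j for j in range(k + 1)), a[n])
```

The recovered coefficients are $c = (0, \tfrac16, \tfrac12, \tfrac13)$, matching $a_n = \frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}$ from the example.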
\item {\bf Linear recurrence:}
The following proposition provides a linear recurrence relation for a polynomial sequence.
For convenience, we define the \emph{left shift operator} on $a_n$ by \;\ $Na_n = a_{n+1}.$
\begin{prop} \label{prop1}
$a_n$ is a polynomial in $n$ of degree at most $k$
if and only if $(N-1)^{k+1}a_n = 0$.
\end{prop}
\begin{proof}
Assume $a_n$ is a polynomial in $n$ of degree $k$.
Observe that $(N-1)a_n = a_{n+1}-a_n$
is a polynomial whose degree is reduced by (at least) 1.
By applying the operator $(N-1)$ repeatedly $k+1$ times, we obtain
the homogeneous linear recurrence relation of $a_n$, i.e.
\[ (N-1)^{k+1}a_n = 0. \]
This completes the proof of the forward direction. For the reverse direction, suppose the condition $(N-1)^{k+1}a_n = 0$ holds.
Consider
\[ a_n = N^n a_0 = [1+(N-1)]^n a_0 = \sum_{i=0}^n \binom{n}{i}(N-1)^ia_0 . \]
It follows from the assumption that
\begin{equation}
\label{eq:an_polynomial}
a_n = \sum_{i=0}^k \binom{n}{i}(N-1)^ia_0.
\end{equation}
Hence, $a_n$ is a polynomial (in $n$) of degree at most $k.$
\end{proof}
We obtain the following corollary immediately from \eqref{eq:an_polynomial}. Note that this has also been discussed in Proposition 1.4.2 of \cite{S2}.
\begin{cor}
A polynomial $a_n$ of degree at most $k$ can be written in expanded form as
\[
a_n = \sum_{i=0}^k \binom{n}{i}(N-1)^ia_0.
\]
\end{cor}
{\bf Example:} Let $a_n= \dfrac{n(n+1)(2n+1)}{6}.$
Then, $a_n$ can be written as $2\binom{n}{3}+3\binom{n}{2}+\binom{n}{1}.$
An important consequence of this observation is the following.
\begin{prop} \label{prop2}
For a non-negative integer $k$,
$\displaystyle \sum_{i=0}^n i^k$ is a polynomial of degree at most $k+1.$
\end{prop}
\begin{proof}
This is easy to see. Let $\displaystyle a_n = \sum_{i=0}^n i^k$.
Then, $(N-1)a_n = a_{n+1}-a_n = (n+1)^k,$ a polynomial of degree $k.$
Hence, from Proposition \ref{prop1}, $(N-1)^{k+2}a_n=0$ and, by the reverse direction
of Proposition \ref{prop1}, $a_n$ is a polynomial of degree at most $k+1.$
\end{proof}
\begin{rem} Proposition \ref{prop2} is very useful as knowing such a bound on the polynomial degree allows us to make guesses rigorous. In particular, finding a polynomial equation for $\displaystyle a_n = \sum_{i=0}^n i^k$, for some fixed $k$, amounts to fitting a polynomial of degree $k+1$ to a set of data points $a_n, \, n=0,1,2,\dots, k+1.$
\end{rem}
\item {\bf Generating function:}
Every sequence corresponds to a generating function that comes in handy when determining the formula of the sequence, as we shall see later. Let us note that the generating function considered here is a \emph{formal} power series in the sense that it is regarded as an algebraic object, thereby ignoring the issue of convergence. The next proposition establishes a connection between polynomial sequences and the generating functions.
\begin{prop}
\label{prop:generating_polynomial}
Let $\displaystyle f(x) = \sum_{n=0}^{\infty} a_nx^n $
where $a_n$ is a polynomial in $n$ of degree $k$.
Then,
\begin{equation}
\label{eq:f_polynomial}
f(x) = \dfrac{P(x)}{(1-x)^{k+1}},
\end{equation}
for some polynomial $P(x)$ of degree at most $k$.
\end{prop}
\begin{proof}
Assume $a_n$ is a polynomial in $n$ of degree $k$. Then,
\begin{align*}
(1-x)^{k+1}f(x) &= \sum_{n=0}^{\infty} a_nx^n(1-x)^{k+1}\\
&= \sum_{n=0}^{\infty} \sum_{i=0}^{k+1} \binom{k+1}{i}(-1)^i a_nx^{n+i}\\
&= \sum_{i=0}^{k+1} \sum_{n=i}^{\infty} \binom{k+1}{i}(-1)^i a_{n-i}x^{n}\\
&= \sum_{n=k+1}^{\infty} \sum_{i=0}^{k+1} \binom{k+1}{i}(-1)^i a_{n-i}x^{n}
+ \sum_{n=0}^{k} \sum_{i=0}^{n} \binom{k+1}{i}(-1)^i a_{n-i}x^{n}.\\
\end{align*}
The first summation is zero since, for each $n \geq k+1$,
\[
\displaystyle \sum_{i=0}^{k+1} \binom{k+1}{i}(-1)^i a_{n-i} = (N-1)^{k+1}a_{n-k-1} = 0,
\]
by Proposition \ref{prop1}. Hence,
\[ (1-x)^{k+1}f(x) = P(x), \]
where
\begin{equation}
\label{eq:generating_polynomial}
\displaystyle P(x) = \sum_{n=0}^{k} \sum_{i=0}^{n} \binom{k+1}{i}(-1)^i a_{n-i}x^{n},
\end{equation}
a polynomial of degree at most $k$.
\end{proof}
{\bf Example:} The generating function $f(x)$ for $\displaystyle a_n = \sum_{i=1}^n i^2$ is
$\dfrac{x^2+x}{(1-x)^4}.$
\begin{code}
> GenPol(n*(n+1)*(2*n+1)/6,n,x);
\end{code}
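Formula \eqref{eq:generating_polynomial} is directly computable. The sketch below (an illustrative Python helper, not the package's \texttt{GenPol}) evaluates the double sum for the numerator $P(x)$:

```python
from math import comb

def gen_pol_numerator(a, k):
    """Numerator P(x) of f(x) = P(x)/(1-x)^(k+1), computed via
    P_n = sum_{i=0}^{n} binom(k+1, i) (-1)^i a_{n-i},  n = 0, ..., k;
    returned as the coefficient list [P_0, ..., P_k]."""
    return [sum(comb(k + 1, i) * (-1)**i * a[n - i] for i in range(n + 1))
            for n in range(k + 1)]

a = [sum(i**2 for i in range(m + 1)) for m in range(10)]   # 0, 1, 5, 14, ...
print(gen_pol_numerator(a, 3))   # [0, 1, 1, 0], i.e. P(x) = x + x^2
```

This reproduces the numerator $x^2+x$ of the example above.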
The next proposition states the converse of Proposition \ref{prop:generating_polynomial}.
\begin{prop}
\label{prop:converse_polynomial}
Assume that $\displaystyle f(x) = \sum_{n=0}^{\infty} a_nx^n $ satisfies
\[
f(x) = \dfrac{P(x)}{(1-x)^{k+1}},
\]
for some polynomial $P(x)$ of degree $k$.
Then, $a_n$ is a polynomial sequence of degree at most $k$.
\end{prop}
\begin{proof}
Since
\[
\displaystyle (1-x)^{k+1}f(x) = \sum_{n=k+1}^{\infty} \sum_{i=0}^{k+1} \binom{k+1}{i}(-1)^i a_{n-i}x^{n}
+ \sum_{n=0}^{k} \sum_{i=0}^{n} \binom{k+1}{i}(-1)^i a_{n-i}x^{n}
\]
is a polynomial of degree $k$, the first summation must be zero implying $(N-1)^{k+1}a_{n-k-1} =0$ for all $n\geq k+1$. Hence, from Proposition \ref{prop1}, we can conclude that $a_n$ is a polynomial of degree at most $k$.
\end{proof}
\item {\bf Closure properties:}
\begin{thm}
Assume that $a_n$ and $b_n$ are polynomial sequences of degree $k$ and $l$, respectively.
The following sequences are also polynomial sequences.
\begin{enumerate}[label=(\roman*)]
\item addition: $(a_n+b_n)_{n=0}^{\infty}$, degree at most $\max(k,l)$,
\item term-wise multiplication: $(a_n \cdot b_n)_{n=0}^{\infty}$, degree at most $k+l$,
\item Cauchy product: $(\sum_{i=0}^n a_i \cdot b_{n-i})_{n=0}^{\infty}$, degree at most $k+l+1$,
\item partial sum: $(\sum_{i=0}^n a_i)_{n=0}^{\infty}$, degree at most $k+1$,
\item linear subsequence: $(a_{mn})_{n=0}^{\infty}$, where $m$ is a positive integer, degree at most $k$.
\end{enumerate}
\end{thm}
\begin{proof}
The proofs of claims \emph{(i)},\emph{(ii)}, and \emph{(v)}
are rather straightforward.
Claims \emph{(iii)} and \emph{(iv)} follow from Proposition \ref{prop2}, which states that
$ \displaystyle \sum_{i=0}^n i^k,$ where $k$ is a non-negative integer,
is a polynomial in $n$ of degree at most $k+1$.
\end{proof}
\end{enumerate}
\subsection{$C$-finite}
\begin{enumerate}[leftmargin=0.4in,label=\textbf{\arabic*.}]
\item {\bf Ansatz:} $a_n$ is defined by a homogeneous
linear recurrence with constant coefficients:
\[ a_{n+r} + c_{r-1}a_{n+r-1}+ c_{r-2}a_{n+r-2}+ \dots +c_{0}a_{n} = 0,\]
along with the initial values $a_0, a_1, \dots, a_{r-1}.$
We call $a_n$ a \emph{$C$-finite sequence} of order $r$.
\begin{rem}
From Proposition \ref{prop1}, a polynomial sequence is a special case of $C$-finite sequences.
\end{rem}
\item {\bf Example:} Let $ \displaystyle a_n = \left \lfloor{\left(\dfrac{n}{2}\right)^2}\right \rfloor.$ \;\
Here, $a_n$ satisfies the linear recurrence relation of order 4:
\[ a_{n+4}-2a_{n+3}+2a_{n+1}-a_n=0,\]
with initial values $a_0=0, a_1=0, a_2=1$ and $a_3=2.$
In terms of the left shift operator,
\[ 0 = (N^4-2N^3+2N-1) \cdot a_n = (N+1)(N-1)^3 \cdot a_n. \]
\item {\bf Guessing:}
Input: the order $r$ of linear recurrence and
sequence $a_n$ (of length more than $2r$).
We solve the system of linear equations for $c_0, c_1, \dots, c_{r-1}$
\[ \begin{bmatrix}
c_0 & c_1 & \dots & c_{r-1} & 1
\end{bmatrix}
\begin{bmatrix}
a_0 & a_1 & a_2 & \dots & a_{r-1}\\
a_1 & a_2 & a_3 & \dots & a_r\\
& & \dots \\
a_{r-1} & a_{r} & a_{r+1} & \dots & a_{2r-2}\\
a_r & a_{r+1} & a_{r+2} & \dots & a_{2r-1}
\end{bmatrix} =
\begin{bmatrix}
0 & 0 & \dots & 0
\end{bmatrix}.
\]
This matrix equation can be solved quickly through, say, the reduced row echelon form method.
Once all these $c_i$'s are obtained, we check that the rest of the $a_n, n > 2r-1$ satisfy the conjectured recurrence.
\begin{code}
> A:= [seq(floor((n/2)^2), n=0..30)];
> GuessC(A,N);
\end{code}
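As a cross-check of this guessing scheme, here is a Python sketch (again illustrative, not the package's \texttt{GuessC}): it solves the $r\times r$ Hankel system coming from the first $2r$ terms and then tests the conjectured recurrence on the remaining data.

```python
from fractions import Fraction

def solve(M, rhs):
    """Solve the square system M x = rhs over the rationals
    (assumes the matrix is nonsingular)."""
    n = len(M)
    A = [[Fraction(v) for v in row] + [Fraction(b)]
         for row, b in zip(M, rhs)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] for i in range(n)]

def guess_c(seq, r):
    """Guess c_0, ..., c_{r-1} with a_{n+r} + c_{r-1} a_{n+r-1} + ...
    + c_0 a_n = 0 from the first 2r terms, then verify on the rest."""
    M = [[seq[n + i] for i in range(r)] for n in range(r)]
    c = solve(M, [-seq[n + r] for n in range(r)])
    for n in range(len(seq) - r):
        if sum(ci * seq[n + i] for i, ci in enumerate(c)) + seq[n + r] != 0:
            return None
    return c

A = [n * n // 4 for n in range(31)]   # floor((n/2)^2)
print(guess_c(A, 4))  # [-1, 2, 0, -2]: a_{n+4} - 2a_{n+3} + 2a_{n+1} - a_n = 0
```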
\item {\bf Generating function:}
Let us denote by $T(N)$ an \emph{annihilator} of $a_n$, that is, $T(N)\cdot a_n = 0$. In line with Proposition \ref{prop:generating_polynomial} in the previous section, the following proposition establishes a relationship between $C$-finite sequences and generating functions.
\begin{prop}
\label{prop:generating_cfinite}
Let $\displaystyle f(x) = \sum_{n=0}^{\infty} a_nx^n $
where $a_n$ is a $C$-finite sequence of order $r$ with annihilator
$T(N)$, a polynomial in $N$ of degree $r$. Then,
\begin{equation}
\label{eq:f_cfinite}
f(x) = \dfrac{P(x)}{x^rT(1/x)},
\end{equation}
for some polynomial $P(x)$ of degree at most $r-1$.
\end{prop}
\begin{proof}
Assume that $a_n$ is a $C$-finite sequence with annihilator $T(N)$.
Suppose further that $T(N) = c_rN^r+c_{r-1}N^{r-1}+\dots+c_1N+c_0,$
where $c_r=1$ and $c_{r-1},\dots, c_{1}, c_0$ are some constants. Then,
\begin{align*}
x^rT(1/x)f(x) &= \sum_{n=0}^{\infty} a_nx^{n}(c_0x^r+c_1x^{r-1}+\dots+c_r)\\
&= \sum_{n=0}^{\infty}\sum_{i=0}^{r} c_i a_nx^{n+r-i} = \sum_{i=0}^{r} \sum_{n=r-i}^{\infty} c_i a_{n-r+i}x^{n} \\
& = \sum_{n=r}^{\infty} \sum_{i=0}^{r} c_i a_{n-r+i}x^{n}
+ \sum_{n=0}^{r-1} \sum_{i=r-n}^{r} c_i a_{n-r+i}x^{n}.
\end{align*}
The first sum is zero since, for each $n \geq r$,
\[
\sum_{i=0}^{r} c_i a_{n-r+i}
= \sum_{i=0}^{r} c_iN^ia_{n-r} = T(N) \cdot a_{n-r} = 0,
\]
by the defining property of $T(N)$. Hence,
\[ x^rT(1/x)f(x) = P(x), \]
where $\displaystyle P(x) = \sum_{n=0}^{r-1} \sum_{i=r-n}^{r} c_i a_{n-r+i}x^{n}$,
a polynomial of degree at most $r-1$.
\end{proof}
{\bf Example:} The generating function $f(x)$ for $a_n
= \left \lfloor{\left(\dfrac{n}{2}\right)^2}\right \rfloor$ is
$\dfrac{x^2}{(1+x)(1-x)^3}.$
\begin{code}
> GenC(N^4-2*N^3+2*N-1,[0,0,1,2],N,x);
\end{code}
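Both the denominator $x^rT(1/x)$ and the numerator $P(x)$ from the proof can be read off directly from the recurrence coefficients and the initial values. A minimal Python sketch (not the package's \texttt{GenC}):

```python
def cfinite_genfun(c, init):
    """Coefficient lists (low to high) of the numerator P(x) and the
    denominator x^r T(1/x) of f(x) = P(x)/(x^r T(1/x)), where
    c = [c_0, ..., c_r] are the coefficients of T(N) (with c_r = 1)
    and init = [a_0, ..., a_{r-1}] are the initial values."""
    r = len(c) - 1
    den = [c[r - n] for n in range(r + 1)]                 # x^r T(1/x)
    num = [sum(c[i] * init[n - r + i] for i in range(r - n, r + 1))
           for n in range(r)]                              # P(x) from the proof
    return num, den

# a_n = floor((n/2)^2):  T(N) = N^4 - 2N^3 + 2N - 1,  a_0..a_3 = 0, 0, 1, 2
num, den = cfinite_genfun([-1, 2, 0, -2, 1], [0, 0, 1, 2])
print(num, den)   # [0, 0, 1, 0] [1, -2, 0, 2, -1]
```

This reproduces $f(x) = x^2/(1 - 2x + 2x^3 - x^4) = x^2/((1+x)(1-x)^3)$ from the example above.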
The next proposition gives the converse of Proposition \ref{prop:generating_cfinite}. This proposition is very useful as it allows us to prove closure properties through generating functions, which turns out to simplify our proofs tremendously. In particular, the proposition implies that an upper bound for the order of a $C$-finite sequence can be determined by looking at the degree of the polynomial appearing in the denominator of \eqref{eq:f_cfinite}.
\begin{prop}
\label{prop:converse_cfinite}
Assume that $\displaystyle f(x) = \sum_{n=0}^{\infty} a_nx^n $ satisfies
\[
f(x) = \dfrac{P(x)}{x^rT(1/x)},
\]
for some polynomial $P(x)$ of degree $r-1$, and $T(N)$ is a polynomial in $N$ of degree $r$.
Then, $a_n$ is a $C$-finite sequence of order at most $r$.
\end{prop}
\begin{proof}
Since
\[
\displaystyle x^rT(1/x)f(x)= \sum_{n=r}^{\infty} \sum_{i=0}^{r} c_i a_{n-r+i}x^{n}
+ \sum_{n=0}^{r-1} \sum_{i=r-n}^{r} c_i a_{n-r+i}x^{n}
\]
is a polynomial of degree $r-1$, the first summation must be zero implying $T(N) \cdot a_{n-r} = 0$ for all $n\geq r$. Thus, $T(N)$ is an annihilator of $a_n$, and so $a_n$ is a $C$-finite sequence of order at most $r$.
\end{proof}
\item {\bf Closure properties:}
It will become evident later that knowing the operations under which the class of $C$-finite sequences is closed allows one to guess rigorously a formula of the resulting sequence. We first state and prove the following closure properties.
\begin{thm} \label{Cclose}
Assume that $a_n$ and $b_n$ are $C$-finite sequences
of order $r$ and $s$, respectively.
The following sequences are also $C$-finite, with the specified upper bound on the order.
\begin{enumerate}[label=(\roman*)]
\item addition: $(a_n+b_n)_{n=0}^{\infty}$, order at most $r+s$,
\item term-wise multiplication: $(a_n \cdot b_n)_{n=0}^{\infty}$, order at most $rs$,
\item Cauchy product: $(\sum_{i=0}^n a_i \cdot b_{n-i})_{n=0}^{\infty}$, order at most $r+s$,
\item partial sum: $(\sum_{i=0}^n a_i)_{n=0}^{\infty}$, order at most $r+1$,
\item linear subsequence: $(a_{mn})_{n=0}^{\infty}$, where $m$ is a positive integer,
order at most $r$.
\end{enumerate}
\end{thm}
\begin{proof}
The proofs for the closure properties are based on two different approaches: the generating function approach for proving \emph{(i)}, \emph{(iii)} and \emph{(iv)}, and the solution space approach for \emph{(ii)} and \emph{(v)}.
\vspace{1em}
\textbf{Generating function approach}
To prove the closure properties of addition, Cauchy product and partial sum, let $A(x)$ and $B(x)$ be the generating functions of $a_n$ and $b_n$, respectively. Then, the generating functions $C(x)$ of $\displaystyle c_n = a_n+b_n, \sum_{i=0}^n a_i \cdot b_{n-i}$
and $\displaystyle \sum_{i=0}^n a_i$ are $\displaystyle A(x)+B(x), A(x)B(x)$ and $\displaystyle A(x)\cdot\dfrac{1}{1-x}$, respectively.
By Proposition \ref{prop:converse_cfinite}, $c_n$ is a $C$-finite sequence whose order can be determined by looking at the degree of the polynomial appearing in the denominator of $C(x)$ in each case. \\
\end{proof}
We now give concrete examples of how the generating function approach can be used to verify the closure properties of \emph{(i)}, \emph{(iii)} and \emph{(iv)}. \vspace{1em}
{\bf Example:} Let $a_n = \left \lfloor{\left(\dfrac{n}{2}\right)^2}\right \rfloor$
and $b_n$ be the Fibonacci numbers.
We recall that $a_n$ satisfies the linear recurrence relation
\[ a_{n+4}-2a_{n+3}+2a_{n+1}-a_n=0, \]
with $a_0=0, a_1=0, a_2=1$ and $a_3=2.$
In terms of the shift operator, \[ (N+1)(N-1)^3 \cdot a_n = 0. \]
Also, $b_n$ satisfies the linear recurrence relation
\[ b_{n+2} -b_{n+1}-b_{n} = 0, \] with $b_0=0$ and $b_1=1.$
In terms of the shift operator, \[ (N^2-N-1) \cdot b_n = 0. \]
The generating function of $a_n$ and $b_n$ are
$A(x) = \dfrac{x^2}{(1+x)(1-x)^3}$, and
$B(x) = \dfrac{x}{1-x-x^2}$, respectively.
\begin{itemize}[leftmargin=0.2in]
\item addition: $c_n=a_n+b_n$. Then, \[ C(x) = A(x)+B(x)
= \dfrac{-x(x^4-x^3+x^2+x-1)}{(1+x)(1-x)^3(1-x-x^2)}.\]
That is, $c_n$ satisfies the linear recurrence of order 6:
\[
(N+1)(N-1)^3(N^2-N-1) \cdot c_n = 0.
\]
\item Cauchy product: $ c_n = \sum_{i=0}^n a_i \cdot b_{n-i}$.
Then, \[ C(x) = A(x)B(x)
= \dfrac{x^3}{(1+x)(1-x)^3(1-x-x^2)}. \]
That is, $c_n$ satisfies the linear recurrence of order 6:
\[
(N+1)(N-1)^3(N^2-N-1) \cdot c_n = 0.
\]
\item partial sum: $c_n = \sum_{i=0}^n a_i$ then
\[ C(x) = A(x)\cdot \dfrac{1}{1-x}
= \dfrac{x^2}{(1+x)(1-x)^4}. \]
That is, $c_n$ satisfies the linear recurrence of order 5:
\[
(N+1)(N-1)^4 \cdot c_n = 0.
\]
\end{itemize}
\begin{code}
> A := x^2/(1+x)/(1-x)^3:
> B := x/(1-x-x^2):
> CAddition(A,B,x);
> CCauchy(A,B,x);
> CParSum(A,x);
\end{code}
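All three derived recurrences can also be confirmed numerically: multiply the two annihilators as coefficient lists and apply the product operator to the actual sequences, which must give identically zero. A Python check of the three claims above (illustrative, independent of the Maple package):

```python
def polymul(p, q):
    """Multiply two operator polynomials in N, given as
    low-to-high coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def annihilates(T, seq):
    """Check that T(N) . seq = 0 on all available terms."""
    m = len(T) - 1
    return all(sum(t * seq[n + j] for j, t in enumerate(T)) == 0
               for n in range(len(seq) - m))

a = [n * n // 4 for n in range(40)]            # floor((n/2)^2)
b = [0, 1]                                     # Fibonacci numbers
while len(b) < 40:
    b.append(b[-1] + b[-2])

Ta = [-1, 2, 0, -2, 1]                         # (N+1)(N-1)^3
Tb = [-1, -1, 1]                               # N^2 - N - 1

add = [x + y for x, y in zip(a, b)]
cauchy = [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(40)]
psum = [sum(a[:n + 1]) for n in range(40)]

print(annihilates(polymul(Ta, Tb), add),        # order 6
      annihilates(polymul(Ta, Tb), cauchy),     # order 6
      annihilates(polymul(Ta, [-1, 1]), psum))  # (N+1)(N-1)^4, order 5
```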
\textbf{Solution space approach}
Before embarking on a proof for closure properties of linear subsequence and term-wise multiplication, we continue with the example from the previous section, intended to highlight the key steps of the solution space approach. A formal proof will be deferred to the end of this section.
\begin{itemize}[leftmargin=0.2in]
\item linear subsequence: $c_n = a_{2n}$.
First, we apply the linear recurrence relation of $a_{2n}$, i.e.
$a_{2n+4}=2a_{2n+3}-2a_{2n+1}+a_{2n}$
repeatedly to yield
\[ \begin{bmatrix}
c_n\\
c_{n+1}\\
c_{n+2}\\
c_{n+3} \\
c_{n+4}
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
1 & -2 & 0 & 2 \\
4 & -6 & -3 & 6 \\
9 & -12 & -8 & 12 \\
\end{bmatrix} \cdot
\begin{bmatrix}
a_{2n}\\
a_{2n+1}\\
a_{2n+2}\\
a_{2n+3}
\end{bmatrix}.\]
A solution $P=(-1,3,-3,1,0)$ is in the left null space of matrix $M$ (in the middle), i.e. $P \cdot M = (0,0,0,0)$.
Hence, the linear recurrence of $c_n$ is
\[ -c_n+3c_{n+1}-3c_{n+2}+c_{n+3} = 0 \]
or in terms of the shift operator
\[ (N-1)^3\cdot c_n = 0. \]
\begin{code}
> CSubSeq(2,N^4-2*N^3+2*N-1,N);
\end{code}
Note that the above program returns $(N-1)^3(Nd_4-1)\cdot c_n = 0$, where $d_4$ is a free variable. We can assign zero to $d_4$ and obtain the desired third order recurrence relation.\\
\item term-wise multiplication: $c_n = a_n \cdot b_n$.
We apply the linear recurrence relation of $a_n$ and $b_n$, i.e.
$a_{n+4}=2a_{n+3}-2a_{n+1}+a_n$ and
$b_{n+2} =b_{n+1}+b_{n}$ repeatedly
to yield the system of relations
\begin{align*}
a_nb_n &= 1a_nb_n + 0a_nb_{n+1} + 0a_{n+1}b_{n} + 0a_{n+1}b_{n+1}
+ \dots + 0a_{n+3}b_{n+1}, \\
a_{n+1}b_{n+1} &= 0a_nb_n + 0a_nb_{n+1} + 0a_{n+1}b_{n} + 1a_{n+1}b_{n+1}
+ \dots + 0a_{n+3}b_{n+1}, \\
& \vdots \\
a_{n+8}b_{n+8} &= 117a_nb_n + 189a_nb_{n+1} -156a_{n+1}b_{n} -252a_{n+1}b_{n+1}
+ \dots + 252a_{n+3}b_{n+1}.
\end{align*}
We put this system of equations in the matrix form as
\[ \begin{bmatrix}
c_n\\
c_{n+1}\\
c_{n+2}\\
c_{n+3}\\
c_{n+4}\\
c_{n+5}\\
c_{n+6}\\
c_{n+7} \\
c_{n+8}
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 2\\
2 & 3 & -4 & -6 & 0 & 0 & 4 & 6\\
6 & 10 & -9 & -15 & -6 & -10 & 12 & 20\\
20 & 32 & -30 & -48 & -15 & -24 & 30 & 48\\
48 & 78 & -64 & -104 & -48 & -78 & 72 & 117\\
117 & 189 & -156 & -252 & -104 & -168 & 156 & 252
\end{bmatrix} \cdot
\begin{bmatrix}
a_nb_n\\
a_{n}b_{n+1}\\
a_{n+1}b_{n}\\
a_{n+1}b_{n+1}\\
a_{n+2}b_{n}\\
a_{n+2}b_{n+1}\\
a_{n+3}b_{n}\\
a_{n+3}b_{n+1}
\end{bmatrix}.\]
A non-trivial solution $(1,2,-4,-8,5,8,-4,-2,1)$ is in the left null space of matrix $M$ (in the middle).
Hence, the linear recurrence of $c_n$ is
\[ c_n+2c_{n+1}-4c_{n+2}-8c_{n+3}+5c_{n+4}+8c_{n+5}-4c_{n+6}-2c_{n+7}+c_{n+8} = 0 \]
or in terms of the shift operator
\[ (N^2+N-1)(N^2-N-1)^3\cdot c_n = 0. \]
\end{itemize}
\begin{code}
> CTermWise(N^4-2*N^3+2*N-1,N^2-N-1,N);
\end{code}
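The two recurrences obtained by the solution space approach can likewise be verified numerically. The Python check below (illustrative only) confirms that $(N-1)^3$ annihilates $a_{2n}$ (indeed $a_{2n}=n^2$) and that $(N^2+N-1)(N^2-N-1)^3$ annihilates $a_n b_n$:

```python
def polymul(p, q):
    """Multiply operator polynomials in N (low-to-high coefficient lists)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def annihilates(T, seq):
    m = len(T) - 1
    return all(sum(t * seq[n + j] for j, t in enumerate(T)) == 0
               for n in range(len(seq) - m))

a = [n * n // 4 for n in range(40)]            # floor((n/2)^2)
b = [0, 1]                                     # Fibonacci numbers
while len(b) < 40:
    b.append(b[-1] + b[-2])

# linear subsequence: a_{2n} = n^2, annihilated by (N-1)^3
sub = [a[2 * n] for n in range(20)]
T_sub = polymul([-1, 1], polymul([-1, 1], [-1, 1]))

# term-wise product: annihilated by (N^2+N-1)(N^2-N-1)^3
prod = [x * y for x, y in zip(a, b)]
T2 = [-1, -1, 1]                               # N^2 - N - 1
T_prod = polymul([-1, 1, 1], polymul(T2, polymul(T2, T2)))

print(T_prod)                                  # [1, 2, -4, -8, 5, 8, -4, -2, 1]
print(annihilates(T_sub, sub), annihilates(T_prod, prod))   # True True
```

Expanding \texttt{T\_prod} recovers exactly the coefficients $1,2,-4,-8,5,8,-4,-2,1$ of the recurrence displayed above.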
\vspace{1em}
We shall now give a formal proof to the closure properties of linear subsequence and term-wise multiplication in the spirit of the last two examples.
\begin{proof}
For the case of the linear subsequence with a fixed positive integer $m$,
let $c_n=a_{mn}$. Then $c_{n}, c_{n+1},\dots, c_{n+r}$
can be put in the system of linear equations as
\[ \begin{bmatrix}
c_n\\
c_{n+1}\\
c_{n+2}\\
\dots \\
c_{n+r}
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & \dots & \\
&&&\dots \\
M_{r-1}^{(1)} & M_{r-1}^{(2)} & M_{r-1}^{(3)} & \dots & & & & \\
M_{r}^{(1)} & M_{r}^{(2)} & M_{r}^{(3)} & \dots & & & &
\end{bmatrix} \cdot
\begin{bmatrix}
a_{mn}\\
a_{mn+1}\\
a_{mn+2}\\
\dots\\
a_{mn+r-1}
\end{bmatrix}.\]
The constant matrix $M$ (in the middle) has $r+1$ rows
and $r$ columns, which guarantees a non-trivial left null space, i.e. there exists a vector $P\neq 0$ such that $P \cdot M = [0\, 0 \cdots 0]$.
This solution $P$ provides a $C$-finite recurrence relation
for $c_n, c_{n+1},\dots, c_{n+r}.$
As for the term-wise multiplication,
let $c_n=a_n\cdot b_n$. Then, $c_n, c_{n+1},\dots, c_{n+rs}$
can be put in the system of linear equations as
\[ \begin{bmatrix}
c_n\\
c_{n+1}\\
c_{n+2}\\
\dots \\
c_{n+rs-1} \\
c_{n+rs}
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & \dots & 0 & 0 & 0 & \dots \\
0 & 0 & 0 & \dots & 0 & 1 & 0 & \dots \\
&&&\dots \\
M_{rs-1}^{(1)} & M_{rs-1}^{(2)}& M_{rs-1}^{(3)} & \dots & & & & \\
M_{rs}^{(1)} & M_{rs}^{(2)}& M_{rs}^{(3)} & \dots & & & &
\end{bmatrix} \cdot
\begin{bmatrix}
a_nb_n\\
a_{n}b_{n+1}\\
\dots\\
a_{n+1}b_n\\
a_{n+1}b_{n+1}\\
\dots\\
a_{n+r-1}b_{n+s-1}
\end{bmatrix}.\]
The constant matrix $M$ (in the middle) has $rs+1$ rows
and $rs$ columns, which again guarantees a
non-trivial solution $P$ in the left null space of $M$.
This solution $P$ gives a $C$-finite recurrence relation for $c_n,c_{n+1},\dots, c_{n+rs}.$
\end{proof}
\textbf{Rigorous proof of identities with the closure properties}
The closure properties from Theorem \ref{Cclose} are extremely useful in verifying identities of $C$-finite sequences.
For example, let $c_n = a_n^2$, where
$a_n = \left \lfloor{\left(\dfrac{n}{2}\right)^2}\right \rfloor$.
Let us say we want to show that $c_n$ satisfies a linear recurrence
\[ c_{n+8} = 2c_{n+7}+2c_{n+6}-6c_{n+5}+6c_{n+3}-2c_{n+2}-2c_{n+1}+c_{n}, \;\ n \geq 0, \]
i.e. $(N+1)^3(N-1)^5 \cdot c_n = 0.$
Knowing that $a_n$ satisfies the linear recurrence of order 4, Theorem \ref{Cclose}: term-wise multiplication guarantees
that $c_n$ satisfies the linear recurrence of order at most $4\cdot4 = 16.$
It then suffices to verify the identity only by checking the numeric values for $n=0,1,2,\dots,15$
as this is adequate to determine a $C$-finite recurrence of order 16.
Let us give another example. Consider the same sequence $a_n$, but this time we want to verify a non-linear identity
\[ a_{n+1}-a_na_{n+1}+a_na_{n+2}+a_{n+1}^2-a_{n+1}a_{n+2}=0, \;\ n \geq 0. \]
We define $d_n = a_{n+1}-a_na_{n+1}+a_na_{n+2}+a_{n+1}^2-a_{n+1}a_{n+2}$ for $n \geq 0.$
The closure properties of $C$-finite sequence ensure that the order of $d_n$ will be at most
$4+4^2+4^2+4^2+4^2=68.$ So to prove that $d_n=0,$ for $n \geq 0$,
we only check the initial values of $d_n$ for $n=0,1,2,\dots,67$.
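Since the bound $68$ makes the finite check rigorous, the verification itself is a one-liner; in Python (exact integer arithmetic):

```python
a = [n * n // 4 for n in range(75)]   # floor((n/2)^2)

# The closure bound gives order at most 4 + 4*16 = 68, so checking
# d_0, ..., d_67 = 0 proves the identity for all n.
d = [a[n + 1] - a[n] * a[n + 1] + a[n] * a[n + 2]
     + a[n + 1]**2 - a[n + 1] * a[n + 2] for n in range(68)]
print(all(x == 0 for x in d))   # True
```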
\item {\bf Closed-form solutions:}
In contrast to polynomial sequences, where we started from a closed-form ansatz (and later derived a linear recurrence relation for it), our ansatz for a $C$-finite sequence was defined by a recurrence relation. A closed-form representation will now be given in the following proposition.
\begin{prop}
\label{prop:close_cfinite}
Assume that $a_n$ satisfies the linear recurrence with constant coefficients,
\[ 0 = T(N) \cdot a_n = [(N-\alpha_1)^{k_1} (N-\alpha_2)^{k_2}
\dots (N-\alpha_m)^{k_m}]\cdot a_n,\]
where $\alpha_i $, the $i$-th root of the characteristic polynomial of the recurrence, has multiplicity $k_i,\, i = 1, 2,\dots, m$.
Then, a closed-form formula of $a_n$ is given by
\begin{equation}
\label{eq:closed_form_cfinite}
a_n = \sum_{i=1}^m(c_{i,0}+c_{i,1}n+\dots+c_{i,k_i-1}n^{k_i-1})\alpha_i^n,
\end{equation}
for some constants $c_{i,0}, c_{i,1}, \dots, c_{i,k_i-1},
\;\ i=1,2,\dots,m,$ which can be determined by the initial conditions.
\end{prop}
\begin{proof}
We will show that $\displaystyle T(N) \cdot \sum_{i=1}^m(c_{i,0}+c_{i,1}n+\dots+c_{i,k_i-1}n^{k_i-1})\alpha_i^n = 0.$
This can be done by showing that each part of the sum is 0, i.e. for each $i$,
\[ (N-\alpha_i)^{k_i} \cdot [(c_{i,0}+c_{i,1}n+\dots+c_{i,k_i-1}n^{k_i-1})\alpha_i^n] = 0.\]
Notice that (dropping the $i$ subscript for simplicity)
\[ (N-\alpha)\cdot [P(n)\alpha^n] = P(n+1)\alpha^{n+1}- P(n)\alpha^{n+1} = Q(n)\alpha^{n+1}, \]
where $Q(n)$ has degree less than $P(n)$.
Therefore, if we apply $(N-\alpha)$ for $k$ times to $P(n)\alpha^n$
where $P(n)$ is a polynomial of degree $k-1$, we will get 0, and the result is now immediate.
We note that this formula of $a_n$ is the most general form
as the number of independent solutions equals the degree of $T(N)$.
\end{proof}
{\bf Example:} A closed-form formula for
$a_n = \left \lfloor{\left(\dfrac{n}{2}\right)^2}\right \rfloor$ is
$\dfrac{n^2}{4}-\dfrac{1}{8}+\dfrac{(-1)^n}{8}.$
\begin{code}
> CloseC(N^4-2*N^3+2*N-1,[0,0,1,2],0,N,n);
\end{code}
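The closed form can be verified in exact arithmetic; for instance, in Python:

```python
from fractions import Fraction

# For a_n = floor((n/2)^2) the characteristic roots are 1 (multiplicity 3)
# and -1, and fitting the initial values 0, 0, 1, 2 yields
#     a_n = n^2/4 - 1/8 + (-1)^n/8.
a = [n * n // 4 for n in range(50)]
closed = [Fraction(n * n, 4) - Fraction(1, 8) + Fraction((-1)**n, 8)
          for n in range(50)]
print(closed == a)   # True
```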
The next proposition, which is the converse of the previous proposition, ensures that any sequence expressed in the form \eqref{eq:closed_form_cfinite} is also a $C$-finite sequence. This proposition will be used in a later section to prove some results for the $C^2$-finite sequences.
\begin{prop}
\label{prop:converse_closeform_cfinite}
Suppose that
\[ a_n = \sum_{i=1}^m(c_{i,0}+c_{i,1}n+\dots+c_{i,k_i-1}n^{k_i-1})\alpha_i^n, \]
for some constants $c_{i,0}, c_{i,1}, \dots, c_{i,k_i-1}$ and $\alpha_i$,
$i=1,2,\dots,m$.
Then, $a_n$ satisfies a linear recurrence with constant coefficients, i.e. $a_n$ is a $C$-finite sequence. Moreover, the order of $a_n$ is at most $k_1+k_2+\dots+k_m$.
\end{prop}
\begin{proof}
Adopt the same steps followed in the proof of Proposition \ref{prop:close_cfinite} to get
\[ [(N-\alpha_1)^{k_1} (N-\alpha_2)^{k_2}
\dots (N-\alpha_m)^{k_m}]\cdot a_n=0.\]
This relation together with initial values $a_n$ of length $k_1+k_2+\dots +k_m$ defines a $C$-finite recurrence relation for $a_n$.
\end{proof}
\end{enumerate}
\subsection{Holonomic}
\begin{enumerate}[leftmargin=0.4in,label=\textbf{\arabic*.}]
\item {\bf Ansatz:} $a_n$ is defined by a linear recurrence with polynomial coefficients:
\[ p_r(n)a_{n+r} + p_{r-1}(n)a_{n+r-1}+ \dots +p_{0}(n)a_{n} = 0, \]
where $p_r(n) \neq 0, n=0,1,2, \dots$,
along with the initial values $a_0, a_1, \dots, a_{r-1}.$
We call $a_n$ a \emph{holonomic sequence} of order $r$
and degree $k$, where $k$ is the highest degree amongst the
polynomials $p_r(n), p_{r-1}(n), \dots, p_0(n).$
It is important to note that the condition $p_r(n) \neq 0$ on the leading coefficient is necessary for computing the term $a_{n+r}$ recursively from its predecessors. If this condition fails, the relation remains valid for $a_{n}, n>n_0$, where $n_0$ is the largest nonnegative integer root of the equation $p_r(n)=0$.
\item {\bf Example 1:} Let $ \displaystyle a_n =\dfrac{1}{n+1}\binom{2n}{n}$, the Catalan numbers. \;\
Here, $a_n$ satisfies a holonomic recurrence of order 1 and degree 1:
\[ (4n+2)a_n-(n+2)a_{n+1}=0,\]
with initial value $a_0=1.$
In terms of the shift operator,
\[ [(4n+2)-(n+2)N] \cdot a_n = 0. \]
{\bf Example 2:} Let $ \displaystyle a_n =\sum_{i=1}^n \dfrac{1}{i}$, the harmonic numbers. \;\
Here, $a_n$ satisfies a holonomic recurrence of order 2 and degree 1:
\[ (n+1)a_n-(2n+3)a_{n+1}+(n+2)a_{n+2} =0,\]
with initial values $a_1=1, a_2=\frac{3}{2}.$
In terms of the shift operator,
\[ [n+1-(2n+3)N+(n+2)N^2] \cdot a_n = 0. \]
{\bf Example 3:} Let $ \displaystyle a_n = \left \lfloor{\left(\dfrac{n}{2}\right)^2}\right \rfloor,$
from the previous section.
Here, $a_n$ also satisfies a holonomic recurrence of order 2 and degree 1:
\[ (n+2)a_n+2a_{n+1}-na_{n+2} =0.\]
This example illustrates the trade-off between lower order and higher degree compared to the $C$-finite recurrence in the previous section.
\item {\bf Guessing:}
Input: the order $r$ and degree $k$ of linear recurrence and
sequence $a_n$ (of length more than $(r+1)(k+1)-1+r$).
Let us write $p_i(n) = \sum_{j=0}^k c_{i,j}n^j$,
$C_i = \begin{bmatrix}
c_{i,0}, c_{i,1}, \dots, c_{i,k}
\end{bmatrix}$ and
$P_n = \begin{bmatrix}
1, n, n^2, \dots, n^k
\end{bmatrix}.$
In matrix notation, the system of linear equations for $c_{i,j}, 0 \leq i \leq r, 0 \leq j \leq k$ takes the following form:
\[ \begin{bmatrix}
C_0, C_1, \dots, C_r
\end{bmatrix} \cdot
\begin{bmatrix}
P_0^Ta_0 & P_1^Ta_1 & \dots & P_{(r+1)(k+1)-1}^Ta_{(r+1)(k+1)-1}\\
P_0^Ta_1 & P_1^Ta_2 & \dots & P_{(r+1)(k+1)-1}^Ta_{(r+1)(k+1)}\\
& & \dots \\
P_0^Ta_{r} & P_1^Ta_{r+1} & \dots & P_{(r+1)(k+1)-1}^Ta_{(r+1)(k+1)-1+r}\\
\end{bmatrix} =
\begin{bmatrix}
0 & 0 & \dots & 0
\end{bmatrix}.
\]
A non-trivial solution for the $c_{i,j}$'s can be found quickly
using, e.g., the reduced row echelon form method.
Once all these $c_{i,j}$'s are obtained, we check that
the rest of the terms $a_n, n > (r+1)(k+1)-1+r$, satisfy the conjectured recurrence.
\begin{code}
> A:= [seq(add(1/i,i=1..n),n=1..35)];
> GuessHo(A,2,1,1,n,N);
\end{code}
{\bf Exercise:} The sequence 1, 1, 5, 23, 135, 925, 7285, 64755, 641075, 6993545, 83339745,..., appearing in the abstract, is not contained in the OEIS database. Convince yourself that this sequence has a recurrence relation of the form \[a_{n+2} = (n+3)a_{n+1}+(n+2)a_{n} \] with $a_0=a_1=1$. \vspace{1em}
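A quick numerical check of the exercise (in Python): regenerating the sequence from the conjectured recurrence reproduces the listed terms.

```python
# a_{n+2} = (n+3) a_{n+1} + (n+2) a_n with a_0 = a_1 = 1
a = [1, 1]
for n in range(9):
    a.append((n + 3) * a[-1] + (n + 2) * a[-2])
print(a)
# [1, 1, 5, 23, 135, 925, 7285, 64755, 641075, 6993545, 83339745]
```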
\textbf{Guessing turns into Rigorous Proof}
We note that guessing can turn into a rigorous proof,
if we happen to know the order and degree of
the relation of the holonomic sequence, as illustrated in the following example. The subject of finding bounds of the order and degree will be discussed throughout this section.
{\bf Example:} Given that we know $a_n$ is a
holonomic sequence of order 2 and degree 3, then $a_n$ must satisfy the recurrence relation
\begin{align*}
& (c_{2,0}+c_{2,1}n+c_{2,2}n^2+c_{2,3}n^3)\cdot a_{n+2}
+ (c_{1,0}+c_{1,1}n+c_{1,2}n^2+c_{1,3}n^3)\cdot a_{n+1} \\
& + (c_{0,0}+c_{0,1}n+c_{0,2}n^2+c_{0,3}n^3)\cdot a_{n} =0,
\end{align*}
for some unknown $c_{i,j}, 0 \leq i \leq 2, 0 \leq j \leq 3$.
To find the holonomic relation, we simply fit this equation to some data $a_n$, where we need at least $a_n$, $n=0,1,\dots,13$ to solve for the 12 unknowns.
\item {\bf Generating function:}
\begin{thm} \label{GenHolo}
Let $\displaystyle f(x) = \sum_{n=0}^{\infty} a_nx^n $
where $a_n$ is a holonomic sequence of order $r$ and degree $k$.
Then, $f(x)$ satisfies the (non-homogeneous)
linear differential equation with polynomial coefficients
\begin{equation} \label{diff}
q_0(x)f(x)+ q_1(x)f'(x)+ \dots + q_{r'}(x)f^{(r')}(x) = R(x),
\end{equation}
where order $r'$ is at most $k$, degree of the coefficient $q_t(x)$ for each $t$
is at most $r+k$, and degree of the polynomial $R(x)$ is at most $r-1.$
\end{thm}
\begin{defi}
The generating function $f(x)$ of a holonomic sequence is called \emph{holonomic}.
Also, $f(x)$ satisfying \eqref{diff} is called \emph{$D$-finite} (differentiably finite),
see \cite{S}.
\end{defi}
\begin{proof}
Assume that $a_n$ satisfies the relation
\[
p_r(n)a_{n+r} + p_{r-1}(n)a_{n+r-1}+ \dots +p_{0}(n)a_{n} = 0.
\]
We denote by $b_{s,t}$ the coefficient of $a_{n+t}n^s$ in the relation above, that is,
$\displaystyle p_t(n)=\sum_{s=0}^kb_{s,t}n^s$, and so the holonomic relation becomes
\begin{equation}
\label{holo8}
\sum_{t=0}^r\sum_{s=0}^k b_{s,t}n^sa_{n+t}=0.
\end{equation}
To prove our results, first we note the following identity. For fixed $s$ and $t$,
\begin{align*}
\sum_{j=0}^s c^{(s,t)}_jx^{j-t}f^{(j)}(x)
&= \sum_{j=0}^s c^{(s,t)}_j \sum_{n=0}^{\infty} (n)_ja_nx^{n-t} \\
&= \sum_{j=0}^s c^{(s,t)}_j \sum_{n=-t}^{\infty} (n+t)_ja_{n+t}x^{n} \\
&= \sum_{n=-t}^{\infty} \left[ \sum_{j=0}^s c^{(s,t)}_j (n+t)_j \right] a_{n+t}x^{n},
\end{align*}
where $(n)_j$ is the falling factorial, i.e. $(n)_j = n(n-1)\dots(n-j+1)$, and the $c^{(s,t)}_j$'s are some constants.
For each pair of $(s,t)$, we appeal to the method of equating the coefficients to obtain $c^{(s,t)}_j, \;\ j=0,1,2,\dots,s$.
Equating the corresponding coefficients of $n^j$ in the equation $\displaystyle \sum_{j=0}^s c^{(s,t)}_j (n+t)_j = n^s$ results in the system of $s+1$ linear equations with $s+1$ unknowns, and so the unknown constants $c^{(s,t)}_j$ can be determined.
Next, we define $\displaystyle A_{s,t}(x) = \sum_{n=0}^{\infty} a_{n+t}n^sx^n$ for $s,t \geq 0.$ Then,
\begin{equation} \label{tank}
\sum_{j=0}^s c^{(s,t)}_jx^{j-t}f^{(j)}(x)
= \sum_{n=-t}^{\infty} a_{n+t}n^sx^{n}
= A_{s,t}(x) + \sum_{n=-t}^{-1} a_{n+t}n^sx^{n}.
\end{equation}
From the holonomic relation of $a_n$ in \eqref{holo8},
multiply $x^n$ through and sum $n$ from $0$ to $\infty$:
\begin{align*}
0 &= \sum_{n=0}^\infty\sum_{t=0}^r\sum_{s=0}^k b_{s,t}n^sa_{n+t}x^n= \sum_{t=0}^r\sum_{s=0}^k b_{s,t}A_{s,t}(x) \\
&= \sum_{t=0}^r\sum_{s=0}^k b_{s,t}
\left[\sum_{j=0}^s c^{(s,t)}_jx^{j-t}f^{(j)}(x)
- \sum_{n=-t}^{-1} a_{n+t}n^sx^{n} \right] , \;\ \text{ from \eqref{tank}.}
\end{align*}
Multiplying $x^r$ on both sides and rearranging this equation, we obtain
\[ \sum_{t=0}^r\sum_{s=0}^k b_{s,t}
\sum_{j=0}^s c^{(s,t)}_jx^{j+r-t}f^{(j)}(x)
= \sum_{t=0}^r\sum_{s=0}^k b_{s,t}
\sum_{n=0}^{t-1} a_{n}(n-t)^sx^{n+r-t} . \]
Observe that the left hand side is the differential equation
of order at most $k$ and degree at most $r+k$, while the right hand side is the polynomial of degree at most $r-1$ as desired.
\end{proof}
{\bf Example 1:} Let $a_n=n!$. Then, $a_n$ satisfies the holonomic recurrence
\[ a_{n+1}-(n+1)a_n =0, \;\ a_0=1. \]
The differential equation corresponding to its generating function is
\[ (1-x)f(x) -x^2f'(x) = 1. \]
\begin{code}
> HoToDiff(n+1-N,[1],n,N,x,D);
\end{code}
{\bf Example 2:} Let $ \displaystyle a_n =\dfrac{1}{n+1}\binom{2n}{n}$,
which satisfies the holonomic recurrence
\[ (4n+2)a_n-(n+2)a_{n+1}=0.\]
The differential equation corresponding to its generating function is
\[ (1-2x)f(x) +(x-4x^2)f'(x) = 1. \]
It is well-known that a closed-form generating function of $f(x)$ is $\dfrac{1-\sqrt{1-4x}}{2x}.$
The reader can easily verify that this expression of $f(x)$ satisfies the above differential equation. \\
\begin{code}
> HoToDiff(4*n+2-(n+2)*N,[1],n,N,x,D);
\end{code}
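One can also verify such differential equations coefficient-wise on a truncated series. For the Catalan case, the coefficient of $x^n$ ($n\geq 1$) in $(1-2x)f(x)+(x-4x^2)f'(x)$ is $(n+1)a_n-(4n-2)a_{n-1}$, which vanishes by the holonomic recurrence; a short Python check:

```python
from math import comb

N_TERMS = 30
a = [comb(2 * n, n) // (n + 1) for n in range(N_TERMS)]   # Catalan numbers

# coefficient of x^n in (1-2x) f(x) + (x - 4x^2) f'(x), n >= 1:
#   (a_n - 2 a_{n-1}) + (n a_n - 4 (n-1) a_{n-1}) = (n+1) a_n - (4n-2) a_{n-1}
lhs = [a[0]] + [(n + 1) * a[n] - (4 * n - 2) * a[n - 1]
                for n in range(1, N_TERMS)]
print(lhs == [1] + [0] * (N_TERMS - 1))   # True: the series reduces to 1
```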
{\bf Example 3:} Let $ \displaystyle a_n = \left \lfloor{\left(\dfrac{n}{2}\right)^2}\right \rfloor$
with a holonomic recurrence given by
\[ (n+2)a_n+2a_{n+1}-na_{n+2} =0.\]
The differential equation corresponding to its generating function is
\[ 2(1+x+x^2)f(x) +(-x+x^3)f'(x) = 0. \]
On the other hand, another holonomic ($C$-finite) recurrence of $a_n$ is
\[ a_{n+4}-2a_{n+3}+2a_{n+1}-a_n=0,\]
which leads to the (zero-order) differential equation relation
\[ (1+x)(1-x)^3f(x) = x^2. \]
The reader is encouraged to check that $f(x)$ from the zero-order relation satisfies the first-order differential equation.
\begin{code}
> HoToDiff(n+2+2*N-N^2*n,[0,0],n,N,x,D);
> HoToDiff(1-2*N+2*N^3-N^4,[0,0,1,2],n,N,x,D);
\end{code}
A nonhomogeneous differential equation in \eqref{diff}
can be transformed into a homogeneous one by differentiating multiple
times until the polynomial $R(x)$ on the right hand side becomes zero. We state this fact in the following corollary.
\begin{cor}
\label{cor:homogeneous_holonomic}
Let $\displaystyle f(x) = \sum_{n=0}^{\infty} a_nx^n $
where $a_n$ is a holonomic sequence of order $r$ and degree $k$.
Then, $f(x)$ satisfies a \textit{homogeneous}
linear differential equation with polynomial coefficients,
\begin{equation}
\label{eq:homoDiff_holo}
q_0(x)f(x)+ q_1(x)f'(x)+ \dots + q_{r'}(x)f^{(r')}(x) = 0,
\end{equation}
where order $r'$ is at most $r+k$, and degree of $q_t(x)$ for each $t$
is at most $r+k$.
\end{cor}
\begin{rem}
Theorem 7.1.2 of \cite{S2} appears to contain a typographical error in the bound of the order $r'$ of the homogeneous linear differential equation \eqref{eq:homoDiff_holo}, having $k$ specified as a bound therein as opposed to $r+k$.
\end{rem}
\textbf{Example:} The homogeneous differential equation of $\displaystyle f(x) = \sum_{n=0}^{\infty} n!x^n$ is
\[x^2f''(x)+(3x-1)f'(x)+f(x)=0.\]
\begin{code}
> HoToDiffHom(n+1-N,[1],n,N,x,D);
\end{code}
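Since $f^{(j)}(x)=\sum_n (n)_j a_n x^{n-j}$, the differential equation above amounts to the vanishing of the coefficient of $x^n$ on the left-hand side. The short Python sketch below (ours, not the Maple package) confirms this coefficient-wise for $a_n = n!$:

```python
from math import factorial

# Coefficient of x^n in x^2 f'' + (3x-1) f' + f for f = sum n! x^n:
# x^2 f'' gives n(n-1) a_n, 3x f' gives 3n a_n, -f' gives -(n+1) a_{n+1},
# and f gives a_n; the sum must vanish for every n.

def lhs_coeff(n):
    return (n * (n - 1) + 3 * n + 1) * factorial(n) - (n + 1) * factorial(n + 1)

ode_ok = all(lhs_coeff(n) == 0 for n in range(100))
```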
The next theorem, which is the converse of Corollary \ref{cor:homogeneous_holonomic}, ensures that one can always establish a holonomic recurrence relation for the coefficients $a_n$ of $f(x)$ satisfying a homogeneous linear differential equation with polynomial coefficients.
\begin{thm}
\label{thm:generating_holonomic}
Let $\displaystyle f(x) = \sum_{n=0}^{\infty} a_nx^n. $
Assume $f(x)$ satisfies a homogeneous
linear differential equation with polynomial coefficients
of order $r$ and degree $k$,
\begin{equation} \label{diff2}
q_0(x)f(x)+ q_1(x)f'(x)+ \dots + q_{r}(x)f^{(r)}(x) = 0.
\end{equation}
Then, $a_n$ is a holonomic sequence of order at most $r+k$ and degree
at most $r$.
\end{thm}
\begin{proof}
For $0 \leq t \leq r$, let $b_{s,t}$ be the coefficient of $x^sf^{(t)}(x)$ so that $\displaystyle q_t(x) = \sum_{s=0}^kb_{s,t}x^s.$
We note that, for each $s$ and $t$,
\[ x^sf^{(t)}(x) = \sum_{n=t}^{\infty} (n)_ta_nx^{n-t+s}
= \sum_{n=s}^{\infty} (n+t-s)_ta_{n+t-s}x^{n} . \]
Now \eqref{diff2} can be written as
\[ \sum_{t=0}^r \sum_{s=0}^{k}
\sum_{n=s}^{\infty} b_{s,t}(n+t-s)_ta_{n+t-s}x^{n} = 0 . \]
Then, for each $\, n \geq 0, \, a_n$ satisfies the following recurrence
\[ \sum_{t=0}^r \sum_{s=0}^{\min(n,k)} b_{s,t}(n+t-s)_ta_{n+t-s} = 0,\]
and the claim about the bounds on the order and degree follows immediately.
\end{proof}
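As a concrete instance of the recurrence extracted in the proof, the following Python sketch (an illustration of ours) applies the formula to $x^2f''(x)+(3x-1)f'(x)+f(x)=0$; the extracted recurrence simplifies to $(n+1)^2a_n-(n+1)a_{n+1}=0$, i.e.\ $a_{n+1}=(n+1)a_n$, which $a_n=n!$ satisfies.

```python
from math import factorial

# The proof gives, for every n >= 0,
#     sum_{t=0}^{r} sum_{s=0}^{min(n,k)} b_{s,t} (n+t-s)_t a_{n+t-s} = 0,
# where b_{s,t} is the coefficient of x^s in q_t(x).
# For x^2 f'' + (3x-1) f' + f = 0: q_0 = 1, q_1 = 3x - 1, q_2 = x^2.

coeffs = {(0, 0): 1, (0, 1): -1, (1, 1): 3, (2, 2): 1}  # (s, t) -> b_{s,t}

def falling(m, t):  # falling factorial (m)_t = m (m-1) ... (m-t+1)
    out = 1
    for i in range(t):
        out *= m - i
    return out

def recurrence_lhs(n, a):
    return sum(b * falling(n + t - s, t) * a(n + t - s)
               for (s, t), b in coeffs.items() if s <= n)

rec_ok = all(recurrence_lhs(n, factorial) == 0 for n in range(100))
```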
\item {\bf Closure properties:}
\begin{thm} \label{hoclose}
Assume that $a_n$ and $b_n$ are holonomic sequences
of order $r$ and $s$, respectively.
The following sequences are also holonomic sequences, with the specified upper bound on the order.
\begin{enumerate}[label=(\roman*)]
\item addition: $(a_n+b_n)_{n=0}^{\infty},$ of order at most $r+s,$
\item term-wise multiplication: $(a_n \cdot b_n)_{n=0}^{\infty},$ of order at most $rs,$
\item partial sum: $(\sum_{i=0}^n a_i)_{n=0}^{\infty}$, of order at most $r+1,$
\item linear subsequence: $(a_{mn})_{n=0}^{\infty}$, where $m$ is a positive integer,
of order at most $r.$
\end{enumerate}
Furthermore, let $c_n=\sum_{i=0}^na_i\cdot b_{n-i}$ be the Cauchy product of $a_n$ and $b_n$.
Assume that the generating functions of $a_n$ and $b_n$, denoted by $A(x)$ and $B(x)$, satisfy homogeneous differential equations of orders $r_1$ and $r_2$, respectively. Then, the generating function of $c_n$ also satisfies a homogeneous differential equation of order at most $r_1r_2$.
\end{thm}
Let us reiterate that if the leading coefficient $p_r(n)$ of the resulting recurrence vanishes for some positive integer $n$, then the recurrence relation is only valid for $n>n_0$, where $n_0$ is the largest positive integer root of $p_r(n)=0$.
\begin{proof}
To verify the closure properties \emph{(i)}-\emph{(iv)}, we follow the solution space approach
in the same vein as in the $C$-finite case. As for the Cauchy product, the proof also relies on the solution space approach, but this time we work with the generating functions instead of the sequences themselves.
We first give the proof to \emph{(i)}-\emph{(iv)}. Assume $a_n$ is a holonomic sequence of order $r$, i.e.
\[ p_r(n)a_{n+r} + p_{r-1}(n)a_{n+r-1}+ \dots +p_{0}(n)a_{n} = 0, \]
and $b_n$ is a holonomic sequence of order $s$, i.e.
\[ q_s(n)b_{n+s} + q_{s-1}(n)b_{n+s-1}+ \dots +q_{0}(n)b_{n} = 0.\]
We note that for each fixed $k, \, k \geq r,$
$a_{n+k}$ can be written as a linear combination,
with rational function coefficients, of $a_{n+r-1}, a_{n+r-2}, \dots, a_{n}.$
This can be seen by repeated substitution starting from the term $a_{n+r}$, $a_{n+r+1}, \dots,$ all the way to $a_{n+k}$.
The same argument can be made for $b_{n+k}, k \geq s.$
\begin{itemize}[leftmargin=0.2in]
\item addition: let $c_n=a_n+b_n$. Then $c_n, c_{n+1},\dots, c_{n+r+s}$
can be put in the system of linear equations as
\[ \begin{bmatrix}
c_n\\
c_{n+1}\\
c_{n+2}\\
\dots \\
c_{n+r+s-1} \\
c_{n+r+s}
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & \dots & 1 & 0 & 0 & \dots \\
0 & 1 & 0 & \dots & 0 & 1 & 0 & \dots \\
&&&\dots \\
M_{r+s-1}^{(1)}(n) & M_{r+s-1}^{(2)}(n) & M_{r+s-1}^{(3)}(n)& \dots & & & & \\
M_{r+s}^{(1)}(n) & M_{r+s}^{(2)}(n) & M_{r+s}^{(3)}(n)& \dots & & & &
\end{bmatrix} \cdot
\begin{bmatrix}
a_n\\
a_{n+1}\\
\dots\\
a_{n+r-1}\\
b_{n}\\
b_{n+1}\\
\dots\\
b_{n+s-1}
\end{bmatrix}.\]
Since the matrix $M$ with rational entries (rational matrix) in the middle
has $r+s+1$ rows and $r+s$ columns, its null space is non-trivial, i.e. there exists a row vector $P\neq0$ such that $P\cdot M =[0,0,\dots,0].$
Note, in addition, that since the number of rows exceeds the number of columns, the rational-function solution $P$ has one free variable; clearing denominators turns its entries into polynomials. This $P$ gives a holonomic relation for $c_n, c_{n+1}, \dots, c_{n+r+s}.$
\item term-wise multiplication: In a similar way as in the addition case, let $c_n=a_n\cdot b_n$. Then $c_n, c_{n+1},\dots, c_{n+rs}$
can be put in the system of linear equations as
\[ \begin{bmatrix}
c_n\\
c_{n+1}\\
c_{n+2}\\
\dots \\
c_{n+rs-1} \\
c_{n+rs}
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & \dots & 0 & 0 & 0 & \dots \\
0 & 0 & 0 & \dots & 0 & 1 & 0 & \dots \\
&&&\dots \\
M_{rs-1}^{(1)}(n) & M_{rs-1}^{(2)}(n) & M_{rs-1}^{(3)}(n) & \dots & & & & \\
M_{rs}^{(1)}(n) & M_{rs}^{(2)}(n) & M_{rs}^{(3)}(n) & \dots & & & &
\end{bmatrix} \cdot
\begin{bmatrix}
a_nb_n\\
a_{n}b_{n+1}\\
\dots\\
a_{n+1}b_n\\
a_{n+1}b_{n+1}\\
\dots\\
a_{n+r-1}b_{n+s-1}
\end{bmatrix}.\]
Again since the matrix $M$ has $rs+1$ rows
and $rs$ columns, the null space of $M$ is non-trivial. A polynomial solution $P\neq0$ gives a holonomic relation to $c_n, c_{n+1},\dots, c_{n+rs}.$
\item linear subsequence: For a fixed integer $m$, let $c_n=a_{mn}$. Then, $c_{n}, c_{n+1},\dots, c_{n+r}$
can be put in the system of linear equations as
\[ \begin{bmatrix}
c_n\\
c_{n+1}\\
c_{n+2}\\
\dots \\
c_{n+r}
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & \dots & 0 \\
&&&\dots \\
M_{r-1}^{(1)}(mn) & M_{r-1}^{(2)}(mn) & M_{r-1}^{(3)}(mn) & \dots & M_{r-1}^{(r)}(mn) \\
M_{r}^{(1)}(mn) & M_{r}^{(2)}(mn) & M_{r}^{(3)}(mn) & \dots & M_{r}^{(r)}(mn)
\end{bmatrix} \cdot
\begin{bmatrix}
a_{mn}\\
a_{mn+1}\\
a_{mn+2}\\
\dots\\
a_{mn+r-1}
\end{bmatrix}.\]
Here, the matrix $M$ has $r+1$ rows
and $r$ columns, so a non-trivial polynomial solution $P$
(in a null space of $M$) exists. This $P$ gives a holonomic relation to $c_n, c_{n+1},\dots, c_{n+r}.$
\item partial sum: We set up a slightly different matrix equation this time to simplify the computation. Let $c_n=\sum_{i=0}^na_i$. Then, $c_{n-1}, c_n, c_{n+1},\dots, c_{n+r}$ satisfy the following system of linear equations
\[ \begin{bmatrix}
c_n-c_{n-1}\\
c_{n+1}-c_{n}\\
c_{n+2}-c_{n+1}\\
\dots \\
c_{n+r}-c_{n+r-1}
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & \dots & 0 \\
0 & 1 & 0 & \dots & 0 \\
&&&\ddots \\
0 & 0 & 0 & \dots & 1 \\
-\dfrac{p_0(n)}{p_r(n)} & -\dfrac{p_1(n)}{p_r(n)}
& -\dfrac{p_2(n)}{p_r(n)} & \dots & -\dfrac{p_{r-1}(n)}{p_r(n)}
\end{bmatrix} \cdot
\begin{bmatrix}
a_{n}\\
a_{n+1}\\
\dots\\
a_{n+r-2}\\
a_{n+r-1}
\end{bmatrix}.\]
The matrix $M$ has $r+1$ rows and $r$ columns, so a non-trivial polynomial solution $P$ exists in the null space of $M$. This $P$ gives a holonomic relation of order $r+1$
to $c_{n-1}, c_n, c_{n+1},\dots, c_{n+r}.$\\
We now turn to the proof for the Cauchy product.
\item Cauchy product:
Let $c_n=\sum_{i=0}^na_i\cdot b_{n-i}$.
This time, we consider finding the homogeneous
differential equation of $C(x)=\sum_{n=0}^{\infty}c_nx^n$.
Then we can invoke Theorem \ref{thm:generating_holonomic}, which relates $C(x)$ to its coefficients $c_n$, to conclude that $c_n$ is a holonomic sequence. \\
To this end, let us express the homogeneous differential equation of
$A(x)$ as
\[ p_0(x)A(x)+ p_1(x)A'(x)+ \dots + p_{r_1}(x)A^{(r_1)}(x) = 0,\]
and the homogeneous differential equation of
$B(x)$ as
\[q_0(x)B(x)+ q_1(x)B'(x)+ \dots + q_{r_2}(x)B^{(r_2)}(x) = 0.\]
Note that $C(x)=A(x)\cdot B(x)$
and that derivatives of $C(x)$ of any order can be
written as linear combinations, with rational-function coefficients, of $A^{(i)}(x)\cdot B^{(j)}(x),$
\;\ $0 \leq i \leq r_1-1$ and $0 \leq j \leq r_2-1$.
Hence, we can write the relation in matrix notation as:
\footnotesize
\[ \begin{bmatrix}
C(x)\\
C'(x)\\
C''(x)\\
\dots \\
C^{(r_1r_2-1)}(x) \\
C^{(r_1r_2)}(x)
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & \dots & 0 & 0 & 0 & \dots \\
0 & 1 & 0 & \dots & 1 & 0 & 0 & \dots \\
&&&\dots \\
M_{r_1r_2-1}^{(1)}(x) & M_{r_1r_2-1}^{(2)}(x) & M_{r_1r_2-1}^{(3)}(x) & \dots & & & & \\
M_{r_1r_2}^{(1)}(x) & M_{r_1r_2}^{(2)}(x) & M_{r_1r_2}^{(3)}(x) & \dots & & & &
\end{bmatrix} \cdot
\begin{bmatrix}
A(x)B(x)\\
A(x)B'(x)\\
\dots\\
A'(x)B(x)\\
A'(x)B'(x)\\
\dots\\
A^{(r_1-1)}(x)B^{(r_2-1)}(x)
\end{bmatrix}.\]
\normalsize
Using the same arguments as above, since the matrix $M$ has $r_1r_2+1$ rows and $r_1r_2$ columns, the existence of a non-trivial polynomial solution $P$ (in the null space of $M$) is ensured. This $P$ gives a homogeneous linear differential equation of order $r_1r_2$ relating $C(x), C'(x), \dots, C^{(r_1r_2)}(x).$ \qedhere
\end{itemize}
\end{proof}
\begin{cor}
Let $\displaystyle A(x) = \sum_{n=0}^{\infty} a_nx^n$ be holonomic. Then
$A'(x)$ and $\int A(x)$ are also holonomic.
\end{cor}
\begin{proof}
$\displaystyle A'(x) = \sum_{n=0}^{\infty}(n+1)a_{n+1}x^n.$
Since $n+1$ and $a_{n+1}$ are holonomic sequences, by the closure property of multiplication, so is the sequence $(n+1)a_{n+1}$.
Hence, $A'(x)$ is holonomic. A similar argument holds for $\int A(x).$
\end{proof}
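For instance, taking $A(x)$ to be the Catalan generating function, the coefficients of $A'(x)$ are $c_n=(n+1)C_{n+1}$; shifting the Catalan recurrence $(4n+2)C_n=(n+2)C_{n+1}$ by one suggests the order-1 relation $(4n+6)(n+2)c_n=(n+1)(n+3)c_{n+1}$ (our derivation, not taken from the text), which the following Python sketch confirms:

```python
from math import comb

# Coefficients of A'(x) for the Catalan generating function:
# c_n = (n+1) C_{n+1}. The relation below comes from shifting
# (4n+2) C_n = (n+2) C_{n+1} by one (an assumed derivation).

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def c(n):
    return (n + 1) * catalan(n + 1)

deriv_ok = all(
    (4 * n + 6) * (n + 2) * c(n) == (n + 1) * (n + 3) * c(n + 1)
    for n in range(200)
)
```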
\textbf{Example: Closure properties}
Let $ \displaystyle a_n =\dfrac{1}{n+1}\binom{2n}{n}$, the Catalan numbers. \;\
Recall that $a_n$ satisfies the holonomic recurrence of order 1 and degree 1:
\[ (4n+2)a_n-(n+2)a_{n+1}=0.\]
Let $ \displaystyle b_n =\sum_{i=1}^n \dfrac{1}{i}$, the harmonic numbers. \;\
Here, $b_n$ satisfies the holonomic recurrence of order 2 and degree 1:
\[ (n+1)b_n-(2n+3)b_{n+1}+(n+2)b_{n+2} =0.\]
We first show detailed calculations for the closure properties of addition and term-wise multiplication.
\begin{itemize}[leftmargin=0.2in]
\item addition: $c_n=a_n+b_n$. Consider a matrix equation:
\[ \begin{bmatrix}
c_n\\
c_{n+1}\\
c_{n+2}\\
c_{n+3}
\end{bmatrix} =
\begin{bmatrix}
1 & 1 & 0 \\
\frac{2(2n+1)}{n+2} & 0 & 1 \\
\frac{4(3+2n)(1+2n)}{(3+n)(2+n)} & -\frac{n+1}{n+2} & \frac{2n+3}{n+2} \\
\frac{8(5+2n)(3+2n)(1+2n)}{(4+n)(3+n)(2+n)} & -\frac{(5+2n)(n+1)}{(3+n)(2+n)}
& \frac{(12n+3n^2+11)}{(3+n)(2+n)}
\end{bmatrix} \cdot
\begin{bmatrix}
a_n\\
b_{n}\\
b_{n+1}
\end{bmatrix}.\]
The polynomial solution $P$ in the null space of the rational matrix $M$ is
\[ P = [ -2(n+1)(3n+7)(1+2n)(2+n)^2, (5+3n)(2+n)(9n^3+43n^2+58n+20), \dots ]. \]
This gives rise to the holonomic relation of $c_n$ of order 3 as
\begin{align*}
&-2(n+1)(3n+7)(1+2n)(2+n)^2c_n+(5+3n)(2+n)(9n^3+43n^2+58n+20)c_{n+1} \\
&-(216n+241n^2+111n^3+64+18n^4)(3+n)c_{n+2}\\
&+(3+n)(4+n)(3n+4)(n+1)^2c_{n+3} =0.
\end{align*}
\begin{code}
> R1:= 2+4*n+(-2-n)*N:
> R2:= n+1+(-3-2*n)*N+(2+n)*N^2:
> HoAdd(R1,R2,n,N,c);
\end{code}
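The order-3 recurrence above can be verified with exact rational arithmetic; the Python sketch below (ours, independent of the Maple session) does so for the first fifty values of $n$:

```python
from fractions import Fraction
from math import comb

# Exact check of the order-3 recurrence for c_n = C_n + H_n
# (Catalan plus harmonic numbers).

def catalan(n):
    return Fraction(comb(2 * n, n), n + 1)

def harmonic(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))

def c(n):
    return catalan(n) + harmonic(n)

def lhs(n):
    return (-2 * (n + 1) * (3 * n + 7) * (1 + 2 * n) * (2 + n) ** 2 * c(n)
            + (5 + 3 * n) * (2 + n) * (9 * n**3 + 43 * n**2 + 58 * n + 20) * c(n + 1)
            - (18 * n**4 + 111 * n**3 + 241 * n**2 + 216 * n + 64) * (3 + n) * c(n + 2)
            + (3 + n) * (4 + n) * (3 * n + 4) * (n + 1) ** 2 * c(n + 3))

addition_ok = all(lhs(n) == 0 for n in range(50))
```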
\item term-wise multiplication: $d_n=a_n\cdot b_n$. Consider
\[ \begin{bmatrix}
d_n\\
d_{n+1}\\
d_{n+2}
\end{bmatrix} = \begin{bmatrix}
1 & 0 \\
0 & \frac{2(2n+1)}{n+2} \\
\frac{4(3+2n)(1+2n)(-1-n)}{(3+n)(2+n)^2} & \frac{4(3+2n)^2(1+2n)}{(3+n)(2+n)^2}
\end{bmatrix} \cdot
\begin{bmatrix}
a_nb_n\\
a_nb_{n+1}
\end{bmatrix}.
\]
The polynomial solution $P$ in the null space of the rational matrix $M$ is
\[ P = [ 4(3+2n)(1+2n)(n+1),-2(3+2n)^2(2+n),(2+n)^2(3+n)]. \]
This gives rise to the holonomic relation of $d_n$ of order 2 as
\begin{align*}
&4(3+2n)(1+2n)(n+1)d_n-2(3+2n)^2(2+n)d_{n+1}
+(2+n)^2(3+n)d_{n+2} =0.
\end{align*}
\begin{code}
> R1:= 2+4*n+(-2-n)*N:
> R2:= n+1+(-3-2*n)*N+(2+n)*N^2:
> HoTermWise(R1,R2,n,N,c);
\end{code}
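Likewise, the order-2 recurrence for the term-wise product can be checked exactly; a Python sketch of ours:

```python
from fractions import Fraction
from math import comb

# Exact check of the order-2 recurrence for d_n = C_n * H_n
# (term-wise product of the Catalan and harmonic numbers).

def catalan(n):
    return Fraction(comb(2 * n, n), n + 1)

def harmonic(n):
    return sum(Fraction(1, i) for i in range(1, n + 1))

def d(n):
    return catalan(n) * harmonic(n)

def lhs(n):
    return (4 * (3 + 2 * n) * (1 + 2 * n) * (n + 1) * d(n)
            - 2 * (3 + 2 * n) ** 2 * (2 + n) * d(n + 1)
            + (2 + n) ** 2 * (3 + n) * d(n + 2))

product_ok = all(lhs(n) == 0 for n in range(50))
```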
We now give an example that shows how to obtain a homogeneous differential equation for the Cauchy product.
\item Cauchy product: $e_n= \sum_{i=0}^n a_i\cdot b_{n-i}$.
In this example, we let
$\displaystyle a_n = \dfrac{1}{n+1}\binom{2n}{n}$
and $b_n = n!.$
As $a_n$ and $b_n$ are holonomic, homogeneous
differential equations for $A(x)$ and $B(x)$ exist. Indeed, they satisfy
\begin{align*}
x(4x-1)A''(x)+2(5x-1)A'(x)+2A(x) &= 0, \\
x^2B''(x)+(3x-1)B'(x)+B(x) &= 0.
\end{align*}
Letting $E(x)=A(x)\cdot B(x)$, we obtain the matrix equation
\[ \begin{bmatrix}
E(x)\\
E'(x)\\
E''(x)\\
E^{(3)}(x)\\
E^{(4)}(x)
\end{bmatrix} = \begin{bmatrix}
1 & 0 &0 &0 \\
0 & 1 & 1 & 0 \\
-\dfrac{1}{x^2}-\dfrac{2}{x(4x-1)}& -\dfrac{(3x-1)}{x^2}
& -\dfrac{2(5x-1)}{x(4x-1)}& 2 \\
& \dots & & \\
& \dots & &
\end{bmatrix} \cdot
\begin{bmatrix}
A(x)B(x) \\
A(x)B'(x) \\
A'(x)B(x) \\
A'(x)B'(x) \\
\end{bmatrix}.
\]
The polynomial solution $P$ in the null space of the rational matrix $M$ is found to be
\[ P = [ 2(72x^6+660x^5-1392x^4+900x^3-266x^2+37x-2)(4x-1), \dots]. \]
This gives rise to the homogeneous differential equation
of $E(x)$ of order $2\cdot2 = 4$ as
\begin{align*}
&2(72x^6+660x^5-1392x^4+900x^3-266x^2+37x-2)(4x-1)E(x) \\
&+2(1512x^8+11076x^7-26812x^6+22170x^5-9442x^4+2333x^3-342x^2+28x-1)E'(x) \\
&+\dots+x^5(4x-1)^2(4x^4+24x^3-31x^2+10x-1)E^{(4)}(x)=0.
\end{align*}
\begin{code}
> DA:= lhs(HoToDiffHom(4*n+2-(n+2)*N,[1],n,N,x,D))
/f(x);
> DB:= lhs(HoToDiffHom(n+1-N,[1],n,N,x,D))/f(x);
> HoCauchy(DA,DB,x,D,c);
\end{code}
\end{itemize}
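The first of the two stated differential equations can be checked in the same coefficient-wise fashion as before; a brief Python sketch of ours, using the Catalan closed form $a_n=\frac{1}{n+1}\binom{2n}{n}$:

```python
from math import comb

# Coefficient of x^n in x(4x-1)A'' + 2(5x-1)A' + 2A for the Catalan
# generating function A(x):
#   4x^2 A'' -> 4n(n-1) a_n,   -x A'' -> -(n+1) n a_{n+1},
#   10x A'   -> 10 n a_n,      -2 A'  -> -2(n+1) a_{n+1},
#   2A       -> 2 a_n.

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def lhs_coeff(n):
    return (4 * n * (n - 1) + 10 * n + 2) * catalan(n) \
        - (n + 1) * (n + 2) * catalan(n + 1)

catalan_ode_ok = all(lhs_coeff(n) == 0 for n in range(200))
```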
\textbf{A remark on the rigorous proof of identities and the upper bound for the degree of recurrence relations}
As already mentioned in the $C$-finite section, the closure properties are crucially important. Theorem \ref{hoclose} gives bounds on the order of the recurrence relation, and bounds on the degree of the polynomial coefficients can also be determined. Once bounds on both the order and the degree are known, fitting the holonomic ansatz with enough data yields a rigorously proven recurrence relation.
We now illustrate how we can obtain a good pragmatic upper bound for the degree using the following example.
{\bf Example:} Let us try to figure out an upper bound of the degree of the recurrence relation under the addition: $c_n=a_n+b_n$. Using the same example as above, recall that the bound for the order of $c_n$ is $r=2+1=3$, and the following rational matrix was obtained in the intermediate steps:
\[ M =
\begin{bmatrix}
1 & 1 & 0 \\
\frac{2(2n+1)}{n+2} & 0 & 1 \\
\frac{4(3+2n)(1+2n)}{(3+n)(2+n)} & -\frac{n+1}{n+2} & \frac{2n+3}{n+2} \\
\frac{8(5+2n)(3+2n)(1+2n)}{(4+n)(3+n)(2+n)} & -\frac{(5+2n)(n+1)}{(3+n)(2+n)}
& \frac{(12n+3n^2+11)}{(3+n)(2+n)}
\end{bmatrix}. \]
Getting rid of the denominator, we obtain the matrix
\[ \tilde{M} =
\begin{bmatrix}
(4+n)(3+n)(2+n) & (3+n)(2+n) & 0 \\
2(2n+1)(4+n)(3+n) & 0 & (3+n)(2+n) \\
4(3+2n)(1+2n)(4+n) & -(n+1)(3+n)
& (2n+3)(3+n)\\
8(5+2n)(3+2n)(1+2n)& -(5+2n)(n+1)
& 12n+3n^2+11\\
\end{bmatrix} \]
whose polynomial entries have the highest degree of $v=3$.
Finding a bound for the degree of $c_n$ amounts to determining the degree of the polynomial solution $P=\left[p_1(n), p_2(n), \cdots, p_{r+1}(n)\right]$ in the null space of $\tilde{M}$.
Formally, let us assume that the highest degree of polynomial in $P$ is $k$. Then, $\displaystyle p_l(n)= \sum_{j=0}^k b_{j,l}n^j$, $l=1,2,\dots,r+1$. By the method of equating the coefficients, the matrix equation $P\cdot \tilde{M}=0$ results in the system of $r(k+1+v)$ linear equations with $(r+1)(k+1)$ unknowns. The existence of a solution to this linear system is guaranteed whenever $k$ satisfies the inequality $(r+1)(k+1)>r(k+1+v)$, i.e. $k\geq rv$.
Therefore, this condition gives rise to a pragmatic upper bound for $k$, the degree of the holonomic sequence $c_n$.
\item {\bf Asymptotic approximation solutions:}
Unlike $C$-finite recurrences, holonomic recurrences admit no closed-form solution in general. Hence, an approximate solution in the form of an asymptotic expansion will be sought for a holonomic recurrence. As the method is quite involved, we will not attempt a theoretical analysis, but rather present some applications of the method.
For a more thorough account on the subject, the interested reader is referred to \cite{JZ}.
In what follows, the sequence $a_n$ will be treated as a function. To emphasize this fact we will denote it by $y(n)$.\\
Suppose that $y(n)$ is a solution to
\begin{equation}
\label{eq:yn}
\sum_{i=0}^r p_i(n) y(n+i) = 0,
\end{equation}
where $p_r(n) \neq 0, \;\ n=0,1,2,\dots.$
The approach is based on the Birkhoff-Trjitzinsky method \cite{B, BT}. While the detailed analysis of the method is given in \cite{B, BT}, we adopt here a variant which assumes a solution in the simplest form \eqref{BT} and then substitutes the assumed solution into
\eqref{eq:yn} (with the help of \eqref{rat1}) to find values of the parameters.
For this reason, we will refer to the method as the \emph{guess and check} in this paper.
Despite its simplicity, this variant has proven to be applicable to a large class of holonomic recurrences.
\vspace{1em}
\textbf{Guess and check method:}
The guess and check method is a general method of solving holonomic recurrences. What we will do is to try a simple solution of the form
\begin{equation} \label{BT}
y(n) = e^{\mu_0n\ln{n}+\mu_1n} \cdot n^{\theta} \cdot
e^{\alpha_1n^{\beta}+\alpha_2n^{\beta-\frac{1}{\rho}}
+\alpha_3n^{\beta-\frac{2}{\rho}}+\dots},
\end{equation}
where the parameters $\mu_0, \mu_1, \theta$ are complex numbers, $\rho$ is an integer with $\rho \geq 1$,
$\mu_0 \rho$ is an integer,
$\alpha_1 \neq 0$, and $\beta = \frac{j}{\rho}$ for some integer $0 \leq j < \rho.$
This method provides $r$ independent solutions (all formal series solutions), each determined only up to a constant multiple. A function of the form \eqref{BT} which satisfies \eqref{eq:yn} is called \emph{a formal series solution} (FSS).
Using some power series expansion and simplification,
we obtain
\begin{equation} \label{rat1}
\dfrac{y(n+k)}{y(n)} = n^{\mu_0k}\lambda^k\cdot\{ 1
+\dfrac{1}{n}\left(\theta k +\dfrac{k^2\mu_0}{2}\right)
+\dots \}\cdot
e^{\alpha_1\beta k n^{\beta-1}
+\alpha_2(\beta-\frac{1}{\rho})kn^{\beta-\frac{1}{\rho}-1}+\dots},
\end{equation}
for $k\geq0$, where $\lambda = e^{\mu_0+\mu_1}$.
\textbf{Applications:}
Let us give walkthrough examples to demonstrate the approach. Since the procedure involves some steps that require human input and expertise, no Maple program is provided in this section. \vspace{1em}
\textbf{Example 1:} $y(n) = n!$. The most standard and widely used asymptotic formula for the factorial function is Stirling's formula. In this example, we will try to obtain an asymptotic approximation for the factorial function using the method of guess and check.
From the recurrence relation $y(n+1)-(n+1)y(n) = 0$, we apply \eqref{rat1}:
\[ n^{\mu_0}\lambda\{1+\dfrac{1}{n}\left(\theta +\dfrac{\mu_0}{2}\right)+\dots\}
e^{\alpha_1\beta n^{\beta-1}+\dots}-(n+1) = 0. \]
Expanding the exponential term with power series and comparing the terms involving $n^{\mu_0}$ and $n$, we have
$\mu_0=1, \, \lambda = 1$. Also, since $\mu_0 \rho$ must be an integer,
we assign $\rho = 1$, the minimum possible value.
Next, the value of $\beta$ must be 0, as the coefficient of $n^s$ for $0 < s < 1$ must be 0.
For the coefficient of 1 (the constant term), we have
$\theta+\dfrac{\mu_0}{2}-1=0$. Hence, $\theta = \dfrac{1}{2}.$
Plugging these parameters back into \eqref{BT}, we arrive at
\[ y(n) = K\left(\dfrac{n}{e}\right)^n\sqrt{n}\left(1+\dfrac{c_1}{n}+\dfrac{c_2}{n^2}+\dots \right).\]
We note that the rightmost infinite series arises from the series expansion of
$e^{\alpha_2n^{-1}+\alpha_3n^{-2}+\dots}.$
The values of $c_1, c_2, \dots$ can be figured out by plugging $y(n)$ back into the original recurrence and comparing the coefficient of $n^i$
for each $i$ (the method of undetermined coefficients).
The constant $K$, however, cannot be obtained by this method although
another asymptotic method shows that $K= \sqrt{2\pi}.$
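The resulting approximation is easy to observe numerically. The Python sketch below (ours) compares $\ln n!$ with the guessed main terms $n\ln n-n+\tfrac12\ln n$ and recovers $K=\sqrt{2\pi}\approx 2.5066$:

```python
from math import lgamma, log, exp, sqrt, pi

# ln n! = lgamma(n+1); subtracting the guessed main terms
# n ln n - n + (1/2) ln n should leave approximately ln K.

def log_ratio(n):
    return lgamma(n + 1) - (n * log(n) - n + 0.5 * log(n))

K_est = exp(log_ratio(10**6))
stirling_gap = abs(K_est - sqrt(2 * pi))
```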
\textbf{Example 2:}
Let $ \displaystyle y(n) = \left \lfloor{\left(\dfrac{n}{2}\right)^2}\right \rfloor$
with the holonomic recurrence
\[ (n+2)y(n)+2y(n+1)-ny(n+2) =0.\]
Apply \eqref{rat1} to get the relation:
\small
\[ (n+2)+2n^{\mu_0}\lambda\{1+\dfrac{1}{n}\left(\theta +\dfrac{\mu_0}{2}\right)+\dots\}
e^{\alpha_1\beta n^{\beta-1}+\dots}
-nn^{2\mu_0}\lambda^2\{1+\dfrac{1}{n}\left(2\theta +2\mu_0\right)+\dots\}
e^{2\alpha_1\beta n^{\beta-1}+\dots} = 0. \]
\normalsize
Expanding the exponential term with power series and comparing the terms involving $n^{2\mu_0+1}$ and $n$, we have $\mu_0=0$. Then, $1-\lambda^2=0$, which gives $\lambda = \pm 1$. Also, since $\mu_0 \rho$ must be an integer,
we again assign the minimum possible value $\rho = 1$.
Next, $\beta$ must be 0, as the coefficient of $n^s$ for $0 < s < 1$ must be 0.
For the coefficient of 1 (the constant term), we have
$2+2\lambda-\lambda^2(2\theta+2\mu_0)=0$.
Hence, $\theta = \dfrac{1+\lambda}{\lambda^2} = 2$ or $0$.
Plugging these parameters back into \eqref{BT}, we obtain
\[ y_1(n) = K_1 n^{2}\left(1+\dfrac{c_1}{n}+\dfrac{c_2}{n^2}+\dots \right),\]
\[ y_2(n) = K_2 (-1)^n\left(1+\dfrac{d_1}{n}+\dfrac{d_2}{n^2}+\dots \right),\]
where $y(n) = y_1(n)+y_2(n)$.
With this form of solution,
we apply the method of undetermined coefficients to the original recurrence, which in turn implies that $c_i$ and $d_i$
are all zero except $c_2=-1/2$. Hence, $y(n) = K_1n^2-\dfrac{K_1}{2}+K_2(-1)^n$,
agreeing with our earlier result that
$y(n) = \dfrac{n^2}{4}-\dfrac{1}{8}+\dfrac{(-1)^n}{8}.$
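The final closed form can be confirmed exactly; a Python sketch of ours using rational arithmetic:

```python
from fractions import Fraction

# Exact check that floor((n/2)^2) = n^2/4 - 1/8 + (-1)^n/8,
# i.e. K_1 = 1/4 and K_2 = 1/8 in the guess-and-check solution.

def closed_form(n):
    return Fraction(n * n, 4) - Fraction(1, 8) + Fraction((-1) ** n, 8)

closed_form_ok = all(closed_form(n) == (n * n) // 4 for n in range(1000))
```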
\end{enumerate}
\section{$C^2$-finite}
\label{sec:C2}
Building upon $C$-finite, the class of $C^2$-finite sequences that we will investigate in this section is a relatively new domain of research. The idea was first mentioned in \cite{KM} in the context of graph polynomials, and it has recently been discussed
in \cite{JNP,TZ} from a theoretical perspective.
\begin{enumerate}[leftmargin=0.4in,label=\textbf{\arabic*.}]
\item {\bf Ansatz:} $a_n$ is defined by a linear recurrence with $C$-finite sequence coefficients:
\[ C_{r,n}a_{n+r} + C_{r-1,n}a_{n+r-1}+ \dots +C_{0,n}a_{n} = 0, \]
where $C_{r,n} \neq 0, \, n=0,1,2, \dots$,
along with the initial values $a_0, a_1, \dots, a_{r-1}.$
We call $a_n$ a \emph{$C^2$-finite sequence} of order $r$.
This term was first coined in \cite{KM}.
As was the case for holonomic sequences, the condition $C_{r,n} \neq 0$ is necessary for recursively computing the value of $a_{n+r}$ from preceding terms.
\item {\bf Example 1:} $C^2$-finite sequence of order $1$ given by
\[ a_n = F_{n+1} \cdot a_{n-1}, \qquad a_0=1,\]
where $F_n$ is the Fibonacci sequence.
An interesting fact about this sequence is that it also satisfies
a non-linear relationship
\[ a_{n}a_{n+1}a_{n+3} - a_{n}a_{n+2}^2 - a_{n+2}a_{n+1}^2 = 0. \]
This nonlinear relation was provided by Robert Israel/Michael Somos in 2014.
It is sequence A003266 in Sloane's OEIS \cite{OEIS}.
This example motivated us to investigate connections between the class of $C^2$-finite sequences and non-linear recurrence representations. In particular, since conditions which ensure the existence of nonlinear recurrences
for $C$-finite (sub)sequences have already been examined in \cite{EZ}, we are curious whether similar results might be obtained for $C^2$-finite sequences.
Furthermore, it is also unclear whether one can find conditions under which a non-linear recurrence defines a $C^2$-finite sequence. We leave these as open problems for the interested reader.
\textbf{Open problem 1:}
Find conditions which guarantee that a $C^2$-finite sequence can be represented in a non-linear recurrence relation.
\textbf{Open problem 2:}
Find conditions which guarantee that a non-linear recurrence is a $C^2$-finite sequence.
{\bf Example 2:} $C^2$-finite sequence of order $2$ given by
\[ a_{n+2} = a_{n+1}+2^na_{n}, \]
with initial values $a_0=1, a_1=1.$
In terms of the shift operator,
\[ [N^2-N-2^n] \cdot a_n = 0. \]
{\bf Example 3:} $C^2$-finite sequence of order $2$ given by
\[ a_{n+2} = F_{n+1} a_{n+1}+F_{n}a_{n}, \]
with initial values $a_0=1, a_1=1.$
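The nonlinear relation in Example 1 is easy to confirm numerically; a Python sketch of ours:

```python
# Check of the nonlinear relation for the sequence of Example 1,
# a_n = F_{n+1} a_{n-1}, a_0 = 1 (normalized as in the text):
#   a_n a_{n+1} a_{n+3} - a_n a_{n+2}^2 - a_{n+2} a_{n+1}^2 = 0.

def fib(n):
    x, y = 0, 1  # F_0 = 0, F_1 = 1
    for _ in range(n):
        x, y = y, x + y
    return x

a = [1]
for n in range(1, 60):
    a.append(fib(n + 1) * a[n - 1])

nonlinear_ok = all(
    a[n] * a[n + 1] * a[n + 3] - a[n] * a[n + 2] ** 2
    - a[n + 2] * a[n + 1] ** 2 == 0
    for n in range(56)
)
```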
\item {\bf Guessing:}
Input: the order $r$ of $a_n$, the order $d$ for each $C$-finite coefficient $C_{i,n},\, 0 \leq i \leq r$, and a sufficiently long sequence of data $a_n$.
This time, guessing becomes very difficult because it requires solving a system of nonlinear equations. We illustrate this with the following examples.
The simplest, non-trivial example is the second order relation
where $C_{1,n}$ and $C_{0,n}$ are of first order.
The form of ansatz is
\[ a_{n+2} = c_1\alpha^na_{n+1}+c_2\beta^na_{n}, \;\ \;\ n \geq 0. \]
Here, we solve for constants $\{\alpha, \beta, c_1, c_2\}$ through
the system of nonlinear equations.
With four parameters, Maple can still handle the computation in this case.
However, if the second order relation is assumed with $C_{1,n}$ and $C_{0,n}$ of second order, i.e.
\[ a_{n+2} = (c_1\alpha_1^n+c_2\alpha_2^n)a_{n+1}
+(c_3\alpha_3^n+c_4\alpha_4^n)a_{n}, \;\ \;\ n \geq 0, \]
with eight parameters to solve for, this time the problem becomes computationally infeasible.
Another approach that could be useful for guessing a $C^2$-finite relation is to apply a numerical solution method. In Maple, we can do this with the available \texttt{fsolve} built-in command.
Unfortunately, even with the numerical method, we were not able to obtain the solution.
It is rather disappointing that guessing is not practical for $C^2$-finite sequences, as guessing plays a big role in finding an expression for a given sequence.
\item {\bf Generating function:}
In this section, we establish several new properties of $C^2$-finite sequences. First, we give a formal definition of the order and degree of a $C^2$-finite sequence.
We recall from the $C$-finite section that
a closed-form formula for a $C$-finite sequence $C_n$ is
\[ C_n = \sum_{\alpha \in S} p_{\alpha}(n)\alpha^n, \]
where $\alpha$'s are the roots of the characteristic polynomial
of $C_n$.
We define $Deg(C_n)$ to be the highest degree of
$p_{\alpha}(n), \, \alpha \in S.$
\begin{defi}
A $C^2$-finite sequence $a_n$ is said to have order $r$ and
degree $k$ if $a_n$ satisfies the recurrence relation
\[ C_{r,n}a_{n+r} + C_{r-1,n}a_{n+r-1}+ \dots +C_{0,n}a_{n} = 0, \]
where for each $i, \, 0 \leq i \leq r, \;\ Deg(C_{i,n})$ is at most $k.$
\end{defi}
We are now ready to derive a new differential equation for the generating function of $C^2$-finite sequence. This inquiry was made in \cite{JNP}.
\begin{thm} \label{GenC2}
Let $\displaystyle f(x) = \sum_{n=0}^{\infty} a_nx^n $
where $a_n$ is a $C^2$-finite sequence of order $r$ and
degree $k$.
Then, $f(x)$ satisfies a (non-homogeneous)
linear differential equation with polynomial coefficients,
\begin{equation} \label{diffcom}
\sum_{\alpha} \left[ q_{\alpha,0}(x)f(\alpha x)+ q_{\alpha,1}(x)f'(\alpha x)
+ \dots + q_{\alpha,r'}(x)f^{(r')}(\alpha x)\right] = R(x)
\end{equation}
where $\alpha$ ranges over the roots $\alpha_{i,j}$ of the characteristic polynomials of the coefficients $C_{i,n}.$
Here, the order $r'$ is at most $k$, degree of $q_{\alpha,t}(x)$
for each $\alpha$ and $t$
is at most $r+k$, and the degree of polynomial $R(x)$ is at most $r-1.$
\end{thm}
{\bf Notation.} To avoid any ambiguity, the notation $f^{(j)}(\alpha x)$ means
\[
f^{(j)}(\alpha x)=\dfrac{d^jf(\alpha x)}{dx^j}.
\]
\begin{defi}
The function $f(x)$ as a generating function
of a $C^2$-finite sequence is called \emph{$C^2$-finite}.
Also, we will call a function $f(x)$ that satisfies \eqref{diffcom} \emph{$DC$-finite} (differentiably composite finite).
\end{defi}
\begin{rem}
In contrast with the $DC$-finite, another approach to generalizing the holonomic sequences is through the $D$-finite generating function (of a holonomic sequence). This was considered in \cite{JP} and the resulting generating function is known as \emph{$DD$-finite}.
\end{rem}
\begin{proof}
Following the idea of the proof of Theorem \ref{GenHolo}, assume that $a_n$ satisfies the relation
\[
C_{r,n}a_{n+r} + C_{r-1,n}a_{n+r-1}+ \dots +C_{0,n}a_{n} = 0,
\]
where each $C_{t,n}$ can be written in closed form as
$\displaystyle C_{t,n} = \sum_{\alpha \in S_t} p_{t,\alpha}(n)\alpha^n.$
We denote by $b_{s,t,\alpha}$ the coefficient of $n^s\alpha^na_{n+t}$ in the relation above. Then, $\displaystyle p_{t,\alpha}(n)=\sum_{s=0}^k b_{s,t,\alpha}n^s$, and so the $C^2$-finite relation becomes
\begin{equation} \label{c2rela}
\sum_{t=0}^r\sum_{\alpha \in S_t}\sum_{s=0}^k b_{s,t,\alpha}n^s\alpha^n a_{n+t}=0.
\end{equation}
We next prove the following identity. For fixed $s$ and $t$,
\begin{align*}
\sum_{j=0}^s c^{(s,t)}_jx^{j-t}f^{(j)}(\alpha x)
&= \sum_{j=0}^s c^{(s,t)}_j \sum_{n=0}^{\infty} (n)_ja_n \alpha^n x^{n-t} \\
&= \sum_{j=0}^s c^{(s,t)}_j \sum_{n=-t}^{\infty} (n+t)_ja_{n+t}\alpha^{n+t}x^{n} \\
&= \alpha^t \sum_{n=-t}^{\infty} \left[ \sum_{j=0}^s c^{(s,t)}_j (n+t)_j \right] a_{n+t}
(\alpha x)^{n}.
\end{align*}
For each pair of $(s,t)$, we solve for constants
$c^{(s,t)}_j, \;\ j=0,1,2,\dots,s, $ by equating coefficients of $n^j$ in the equation
$\displaystyle \sum_{j=0}^s c^{(s,t)}_j (n+t)_j = n^s$. Now, define $\displaystyle A_{s,t,\alpha}(x) = \sum_{n=0}^{\infty} a_{n+t}n^s (\alpha x)^n$ for fixed $s,t \geq 0$. Then,
\begin{equation} \label{rank}
\sum_{j=0}^s c^{(s,t)}_jx^{j-t}f^{(j)}(\alpha x)
= \alpha^t \sum_{n=-t}^{\infty} a_{n+t}n^s (\alpha x)^{n}
= \alpha^t A_{s,t,\alpha}(x) + \alpha^t \sum_{n=-t}^{-1} a_{n+t}n^s (\alpha x)^{n}.
\end{equation}
From the $C^2$-finite relation of $a_n$ in \eqref{c2rela},
multiply $x^n$ through, and sum $n$ from $0$ to $\infty$,
we obtain
\begin{align*}
0 &= \sum_{n=0}^\infty\sum_{t=0}^r\sum_{\alpha \in S_t}\sum_{s=0}^k b_{s,t,\alpha}n^s\alpha^n a_{n+t}x^n= \sum_{t=0}^r\sum_{s=0}^k\sum_{\alpha \in S_t}
b_{s,t,\alpha}A_{s,t,\alpha}(x) \\
&= \sum_{t=0}^r\sum_{s=0}^k\sum_{\alpha \in S_t} b_{s,t,\alpha}
\left[\alpha^{-t}\sum_{j=0}^s c^{(s,t)}_jx^{j-t}f^{(j)}(\alpha x)
- \sum_{n=-t}^{-1} a_{n+t}n^s(\alpha x)^{n} \right] , \;\ \text{ from \eqref{rank}.}
\end{align*}
Multiply $x^r$ on both sides and rearrange this equation:
\[ \sum_{t=0}^r\sum_{s=0}^k\sum_{\alpha \in S_t} b_{s,t,\alpha} \alpha^{-t}
\sum_{j=0}^s c^{(s,t)}_jx^{j+r-t}f^{(j)}(\alpha x)
= \sum_{t=0}^r\sum_{s=0}^k\sum_{\alpha \in S_t} b_{s,t,\alpha}
\sum_{n=0}^{t-1} a_{n}(n-t)^s \alpha^{n-t} x^{n+r-t} . \]
We see that the left-hand side is a differential equation of order at most $k$ with polynomial coefficients of degree at most $r+k$, while the right-hand side is a polynomial of degree at most $r-1.$
\end{proof}
{\bf Example 1:} Let $a_{n+1} = F_{n+2} \cdot a_{n}.$ Then,
\begin{align*}
f(x) &= \sum_{n=0}^{\infty} a_nx^n
= a_0+\sum_{n=1}^{\infty} F_{n+1} \cdot a_{n-1}x^n
= a_0+x\sum_{n=1}^{\infty} (c_1\alpha_+^{n+1}+c_2\alpha_-^{n+1}) \cdot a_{n-1}x^{n-1} \\
&= a_0+c_1\alpha_+^2x\sum_{n=0}^{\infty} \alpha_+^n \cdot a_nx^{n}
+c_2\alpha_-^2 x\sum_{n=0}^{\infty}\alpha_-^n \cdot a_nx^{n} , \;\ \mbox{ (shift index $n$ by 1)}\\
&= a_0+ c_1\alpha_+^2xf(\alpha_+x)+ c_2\alpha_-^2xf(\alpha_-x),
\end{align*}
where $\alpha_+$ and $\alpha_-$ are the roots of equation $x^2-x-1=0.$
\begin{code}
> C2ToDiff(N-(c1*a^(n+2)+c2*b^(n+2)),{1,a,b},[a0],
n,N,x,D);
\end{code}
We still consider a first-order relation in the next example, but this time the coefficient $C_n$ has a polynomial factor.
{\bf Example 2:} Let $a_{n+1} = (n+1)2^n \cdot a_{n}.$ Then,
\begin{align*}
f(x) &= \sum_{n=0}^{\infty} a_nx^n
= a_0 +\sum_{n=1}^{\infty} n2^{n-1} \cdot a_{n-1}x^n
= a_0 +x\sum_{n=0}^{\infty} (n+1)2^n \cdot a_nx^{n} \\
&= a_0 + x^2\sum_{n=1}^{\infty} n2^n \cdot a_nx^{n-1}
+x\sum_{n=0}^{\infty} 2^n \cdot a_nx^{n} \\
&= a_0 + x^2f'(2x)+ xf(2x).
\end{align*}
\begin{code}
> C2ToDiff(N-(n+1)*2^n,{1,2},[a0],n,N,x,D);
\end{code}
Let us now consider a second-order example.
{\bf Example 3:} Let $a_{n+2} = a_{n+1} + 2^n\cdot a_{n}.$ Then
\begin{align*}
f(x) &= \sum_{n=0}^{\infty} a_nx^n
= a_0 + a_1x+\sum_{n=2}^{\infty} a_{n-1}x^n
+\sum_{n=2}^{\infty} 2^{n-2} \cdot a_{n-2}x^n \\
&= a_0 + a_1x-a_0x+x\sum_{n=0}^{\infty} a_{n}x^n
+x^2\sum_{n=0}^{\infty}a_{n}(2x)^n \\
&= a_0 + (a_1-a_0)x + xf(x)+ x^2f(2x).
\end{align*}
\begin{code}
> C2ToDiff(N^2-N-2^n,{1,2},[a0,a1],n,N,x,D);
\end{code}
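Once more, comparing the $x^n$ coefficients on both sides recovers the recurrence; a short check with sample initial values $a_0=a_1=1$:

```python
# Coefficient check for f(x) = a0 + (a1 - a0)x + x f(x) + x^2 f(2x)
# with a_{n+2} = a_{n+1} + 2^n a_n and a_0 = a_1 = 1 (sample values).
a = [1, 1]
for n in range(15):
    a.append(a[n + 1] + 2 ** n * a[n])

# The x^n coefficient of the right-hand side, n >= 2, is
# a_{n-1} (from x f(x)) plus 2^{n-2} a_{n-2} (from x^2 f(2x)).
for n in range(2, 17):
    assert a[n - 1] + 2 ** (n - 2) * a[n - 2] == a[n]
```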
As with holonomic sequences, the differential equation \eqref{diffcom}
can be made homogeneous by differentiating repeatedly until $R(x)$ becomes zero.
\begin{cor}
Let $\displaystyle f(x) = \sum_{n=0}^{\infty} a_nx^n $
where $a_n$ is a $C^2$-finite sequence of order $r$ and degree $k$.
Then, $f(x)$ satisfies a homogeneous
linear differential equation with polynomial coefficients
\begin{equation}
\sum_{\alpha} \left[ q_{\alpha,0}(x)f(\alpha x)+ q_{\alpha,1}(x)f'(\alpha x)
+ \dots + q_{\alpha,r'}(x)f^{(r')}(\alpha x)\right] = 0,
\end{equation}
where the order $r'$ is at most $r+k$, and the degree of each $q_{\alpha,t}(x)$ is at most $r+k$.
\end{cor}
\textbf{Example:} The homogeneous differential equation of $\displaystyle f(x) = \sum_{n=0}^{\infty} a_n x^n$ where
$a_{n+1}=F_{n+2} \cdot a_n$ is
\[f'(x)-c_1\alpha_{+}^2[f(\alpha_+x)+xf'(\alpha_+x)]-c_2\alpha_{-}^2[f(\alpha_-x)+xf'(\alpha_-x)]=0.\]
\begin{code}
> C2ToDiffHom(N-(c1*a^(n+2)+c2*b^(n+2)),{1,a,b},[a0],
n,N,x,D);
\end{code}
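As in the derivation, $f'(\alpha x)$ is read as $\frac{d}{dx}f(\alpha x)$, so the $x^n$ coefficient of each bracket is $(1+n)\alpha^na_n$ and the equation reduces to $(n+1)a_{n+1}=(n+1)F_{n+2}a_n$. A numerical check of this reduction (our own sketch):

```python
import math

# Coefficient check of the homogeneous equation above:
# (n+1) a_{n+1} = [c1 ap^2 (1+n) ap^n + c2 am^2 (1+n) am^n] a_n
#               = (n+1) F_{n+2} a_n.
ap = (1 + math.sqrt(5)) / 2
am = (1 - math.sqrt(5)) / 2
c1, c2 = 1 / math.sqrt(5), -1 / math.sqrt(5)

F = [0, 1]
for _ in range(15):
    F.append(F[-1] + F[-2])
a = [1.0]
for n in range(10):
    a.append(F[n + 2] * a[n])

for n in range(10):
    lhs = (n + 1) * a[n + 1]                      # from f'(x)
    rhs = (c1 * ap ** 2 * (1 + n) * ap ** n
           + c2 * am ** 2 * (1 + n) * am ** n) * a[n]
    assert abs(lhs - rhs) <= 1e-9 * abs(lhs)
```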
The next theorem ensures that we can always find a $C^2$-finite recurrence
relation for the coefficients $a_n$ of any $f(x)$ which satisfies a homogeneous linear
differential equation in the dilated arguments $\alpha x$ with polynomial coefficients.
\begin{thm}
\label{thm:conv_c2finite}
Let $\displaystyle f(x) = \sum_{n=0}^{\infty} a_nx^n$.
Assume $f(x)$ satisfies a homogeneous
linear differential equation with polynomial coefficients
of order $r$ and degree $k$
\begin{equation} \label{diffc2}
\sum_{\alpha} \left[ q_{\alpha,0}(x)f(\alpha x)+ q_{\alpha,1}(x)f'(\alpha x)
+ \dots + q_{\alpha,r}(x)f^{(r)}(\alpha x)\right] = 0.
\end{equation}
Then, $a_n$ is a $C^2$-finite sequence of order at most $r+k$ and degree
at most $r$.
\end{thm}
\begin{proof}
Assume that $\displaystyle q_{\alpha,t}(x) = \sum_{s=0}^kb_{\alpha,s,t}x^s.$
Then, for each $s, t, \alpha$ (writing $f^{(t)}(\alpha x)$ for $\frac{d^t}{dx^t}f(\alpha x)$, as in the examples above),
\[ x^sf^{(t)}(\alpha x) = \sum_{n=t}^{\infty} (n)_ta_n\alpha^n x^{n-t+s}
= \alpha^{t-s} \sum_{n=s}^{\infty} (n+t-s)_ta_{n+t-s} (\alpha x)^{n} . \]
Now \eqref{diffc2} becomes
\[ \sum_{\alpha} \left[ \sum_{t=0}^r \sum_{s=0}^{k} b_{\alpha,s,t} \alpha^{t-s}
\sum_{n=s}^{\infty} (n+t-s)_ta_{n+t-s}(\alpha x)^{n} \right] = 0, \]
and so for each $n \geq 0, \;\ a_n$ satisfies the recurrence
\[ \sum_{t=0}^r \sum_{\alpha} \sum_{s=0}^{\min(n,k)}
\left[b_{\alpha,s,t}(n+t-s)_t\alpha^{n+t-s}\right] a_{n+t-s} = 0. \]
Proposition \ref{prop:converse_closeform_cfinite} implies that the coefficient
$\displaystyle C_{i,n} = \sum_{\alpha} \sum_{s=0}^{\min(n,k)} b_{\alpha,s,s+i}(n+i)_{s+i}\alpha^{n+i}$,
for each $i$, is a $C$-finite sequence. It follows that $a_n$ satisfying
\[ C_{r,n}a_{n+r} + C_{r-1,n}a_{n+r-1}+ \dots +C_{-k,n}a_{n-k} = 0, \]
is a $C^2$-finite sequence.
The claimed bounds on the order and degree are now immediate.
\end{proof}
\item {\bf Closure properties:}
\begin{thm} \label{c2close}
Assume $a_n$ and $b_n$ are $C^2$-finite sequences.
The following are also $C^2$-finite sequences.
\begin{enumerate}[label=(\roman*)]
\item addition: $(a_n+b_n)_{n=0}^{\infty},$
\item term-wise multiplication: $(a_n \cdot b_n)_{n=0}^{\infty},$
\item Cauchy product: $(\sum_{i=0}^n a_i \cdot b_{n-i})_{n=0}^{\infty},$
\item partial sum: $(\sum_{i=0}^n a_i)_{n=0}^{\infty}$,
\item linear subsequence: $(a_{mn})_{n=0}^{\infty}$, where $m$ is a positive integer.
\end{enumerate}
\end{thm}
The proof runs along the same lines as that of Theorem \ref{hoclose} for holonomic sequences, and we shall not repeat it here. Instead, this section is devoted to a detailed discussion and examples.
\begin{rem}
The reader may have noticed that this time we have not specified an upper bound on the order in the theorem. While the same bounds as those used for holonomic sequences could be imposed, it is worth repeating here that for a $C^2$-finite sequence of order $r$, the leading coefficient $C_{r,n}$ must not be zero for any $n \geq 0$.
This condition makes it difficult to determine a general bound on the order of the resulting sequence.
The first example below (from \cite{JNP}) illustrates this issue.
\end{rem}
{\bf Example 1:} Let $a_n$ and $b_n$ be $C^2$-finite sequences of order 1 defined by
\begin{align*}
a_{n+1}+(-1)^na_n &= 0, \\
b_{n+1}+b_n &=0 .
\end{align*}
Let $c_n = a_n+b_n$ for $n \geq 0.$
A recurrence of order $2$ for $c_n$ takes the form
\[ [1-(-1)^n]c_{n+2} +2c_{n+1}+[1+(-1)^n]c_n=0. \]
This recurrence does not satisfy the definition of $C^2$-finite as
the leading term, $C_{2,n} = 1-(-1)^n$, contains infinitely many zeros.
On the other hand, a recurrence of order 3 makes $c_n$ a $C^2$-finite sequence:
\[
c_{n+3} + \frac{1}{2} \left[1+(-1)^n\right]c_{n+2}
+\frac{1}{2} \left[1-(-1)^n\right]c_n=0.
\]
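A small check of both facts, with hypothetical initial values $a_0=2$, $b_0=3$ (the recurrence is multiplied by 2 to stay in integers):

```python
# Verify the order-3 recurrence for c_n = a_n + b_n, where
# a_{n+1} = -(-1)^n a_n and b_{n+1} = -b_n, and confirm that the
# order-2 leading coefficient 1 - (-1)^n vanishes at every even n.
a, b = [2], [3]
for n in range(30):
    a.append(-((-1) ** n) * a[n])
    b.append(-b[n])
c = [u + v for u, v in zip(a, b)]

for n in range(28):
    # recurrence multiplied through by 2 to avoid fractions
    assert (2 * c[n + 3] + (1 + (-1) ** n) * c[n + 2]
            + (1 - (-1) ** n) * c[n]) == 0
assert all(1 - (-1) ** n == 0 for n in range(0, 28, 2))
```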
The following example illustrates the idea behind the derived recurrence relations under the addition and term-wise multiplication operations.
{\bf Example 2:} Let $a_n$ be a sequence that satisfies the relation
\[ a_{n+1} = F_{n+2} \cdot a_{n}, \qquad a_0=1,\]
where $F_n$ is the Fibonacci sequence.
Let $b_n$ be a sequence that satisfies the relation
\[ b_{n+2} = b_{n+1}+2^nb_{n}, \;\ b_0=1, b_1=1. \]
\begin{itemize}[leftmargin=0.2in]
\item addition: $c_n = a_n+b_n.$
To solve for the recurrence relation
of $c_n$, we write $c_n, c_{n+1}, c_{n+2}$ and $c_{n+3}$
as a linear combination of $a_n, b_n$ and $b_{n+1}.$ That is,
\[ \begin{bmatrix}
c_n\\
c_{n+1}\\
c_{n+2}\\
c_{n+3}
\end{bmatrix} =
\begin{bmatrix}
1 & 1 & 0 \\
F_{n+2} & 0 & 1 \\
F_{n+3}F_{n+2} & 2^n & 1 \\
F_{n+4}F_{n+3}F_{n+2} & 2^n
& 2^{n+1}+1
\end{bmatrix} \cdot
\begin{bmatrix}
a_n\\
b_{n}\\
b_{n+1}
\end{bmatrix}.\]
The $C$-finite solution $P$ (in the null space) of the $C$-finite sequence matrix $M$ is
\[ P = [ 2^nF_{n+2}(F_{n+4}F_{n+3}-F_{n+3}-2^{n+1}), \dots ], \]
which gives rise to a $C^2$-finite relation of $c_n, \, n \geq 1$, of order 3
\begin{align*}
& 2^nF_{n+2}(F_{n+4}F_{n+3}-F_{n+3}-2^{n+1})c_n\\
&+\left[ F_{n+4}F_{n+3}F_{n+2}+2^{2n+1}
-2^{n+1}F_{n+3}F_{n+2}-F_{n+3}F_{n+2} \right]c_{n+1} \\
& + \left[ 2^{n+1}F_{n+2}+F_{n+2}
-F_{n+4}F_{n+3}F_{n+2}+2^n \right] c_{n+2}\\
&+\left[ F_{n+3}F_{n+2}-F_{n+2}-2^n \right] c_{n+3} =0.
\end{align*}
\item term-wise multiplication: $d_n = a_n \cdot b_n.$
To solve for the recurrence relation
of $d_n$, we write $d_n, d_{n+1}$ and $d_{n+2}$
as a linear combination of $a_nb_n$ and $a_nb_{n+1}.$ That is,
\[ \begin{bmatrix}
d_n\\
d_{n+1}\\
d_{n+2}
\end{bmatrix} =
\begin{bmatrix}
1 & 0 \\
0 & F_{n+2} \\
2^nF_{n+3}F_{n+2}
& F_{n+3}F_{n+2}
\end{bmatrix} \cdot
\begin{bmatrix}
a_nb_n\\
a_nb_{n+1}
\end{bmatrix}.\]
The $C$-finite solution $P$ (in the null space) of the $C$-finite sequence matrix $M$ is
\[ P = [ -2^nF_{n+2}F_{n+3}, -F_{n+3},1 ], \]
yielding a $C^2$-finite relation of $d_n, \, n \geq 0$, of order 2:
\begin{align*}
-2^nF_{n+2}F_{n+3}d_n -F_{n+3}d_{n+1}+d_{n+2} = 0.
\end{align*}
\end{itemize}
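Both derived recurrences can be verified numerically against the defining recurrences (an independent check of our own, using $a_0=1$, $b_0=b_1=1$):

```python
# Verify the order-3 recurrence for c_n = a_n + b_n and the order-2
# recurrence for d_n = a_n * b_n against the defining recurrences.
F = [0, 1]
for _ in range(30):
    F.append(F[-1] + F[-2])
a, b = [1], [1, 1]
for n in range(20):
    a.append(F[n + 2] * a[n])           # a_{n+1} = F_{n+2} a_n
    b.append(b[n + 1] + 2 ** n * b[n])  # b_{n+2} = b_{n+1} + 2^n b_n
c = [u + v for u, v in zip(a, b)]       # addition
d = [u * v for u, v in zip(a, b)]       # term-wise multiplication

for n in range(1, 15):
    s = (2 ** n * F[n + 2] * (F[n + 4] * F[n + 3] - F[n + 3] - 2 ** (n + 1)) * c[n]
         + (F[n + 4] * F[n + 3] * F[n + 2] + 2 ** (2 * n + 1)
            - 2 ** (n + 1) * F[n + 3] * F[n + 2] - F[n + 3] * F[n + 2]) * c[n + 1]
         + (2 ** (n + 1) * F[n + 2] + F[n + 2]
            - F[n + 4] * F[n + 3] * F[n + 2] + 2 ** n) * c[n + 2]
         + (F[n + 3] * F[n + 2] - F[n + 2] - 2 ** n) * c[n + 3])
    assert s == 0

for n in range(15):
    assert -2 ** n * F[n + 2] * F[n + 3] * d[n] - F[n + 3] * d[n + 1] + d[n + 2] == 0
```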
As we mentioned earlier, the proof of the closure properties for $C^2$-finite sequences is similar to the holonomic one. It turns out that we can directly use the same Maple code to get recurrence relations for $C^2$-finite sequences:
\begin{code}
> HoAdd(N-F(n+2),N^2-N-2^n,n,N,c);
> HoTermWise(N-F(n+2),N^2-N-2^n,n,N,c);
\end{code}
\item {\bf Asymptotic approximation solutions:}
We have already seen that a $C$-finite sequence grows at rate
$\mathcal{O}(\alpha^n)$ for some constant $\alpha$.
By contrast, a $C^2$-finite sequence can grow at rate
$\mathcal{O}(\alpha^{n^2})$ for some constant $\alpha$.
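A minimal illustration of this growth: for the $C^2$-finite recurrence $a_{n+1}=2^na_n$, $a_0=1$, one gets the closed form $a_n=2^{n(n-1)/2}=(\sqrt2\,)^{n^2-n}$:

```python
# a_{n+1} = 2^n a_n with a_0 = 1 gives a_n = 2^(n(n-1)/2),
# i.e. growth of order alpha^(n^2) with alpha = sqrt(2).
a = [1]
for n in range(12):
    a.append(2 ** n * a[n])
for n in range(13):
    assert a[n] == 2 ** (n * (n - 1) // 2)
```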
Also, we have presented a procedure to obtain an asymptotic approximation solution for the holonomic recurrence in the previous section.
It appears, however, to be more difficult to derive asymptotic approximations of solutions for $C^2$-finite recurrences; this merits further investigation.
We leave this as an open problem.
\textbf{Open problem 3:}
Find asymptotic approximations of solutions to $C^2$-finite recurrences.
\end{enumerate}
| {
"timestamp": "2022-01-25T02:10:08",
"yymm": "2201",
"arxiv_id": "2201.08035",
"language": "en",
"url": "https://arxiv.org/abs/2201.08035",
"abstract": "Given a sequence 1, 1, 5, 23, 135, 925, 7285, 64755, 641075, 6993545, 83339745,..., how can we guess a formula for it? This article will quickly walk you through the concept of ansatz for classes of polynomial, $C$-finite, holonomic, and the most recent addition $C^2$-finite sequences. For each of these classes, we discuss in detail various aspects of the guess and check, generating functions, closure properties, and closed-form solutions. Every theorem is presented with an accessible proof, followed by several examples intended to motivate the development of the theories. Each example is accompanied by a Maple program with the purpose of demonstrating use of the program in solving problems in this area. While this work aims to give a comprehensive review of existing ansatzes, we also systematically fill a research gap in the literature by providing theoretical and numerical results for the $C^2$-finite sequences. We hope the readers will enjoy the journey through our unifying framework for the study of ansatz.",
"subjects": "Combinatorics (math.CO)",
"title": "Ansatz in a Nutshell: A comprehensive step-by-step guide to polynomial, $C$-finite, holonomic, and $C^2$-finite sequences"
} |
https://arxiv.org/abs/1009.0810 | The covering radius problem for sets of perfect matchings | Consider the family of all perfect matchings of the complete graph $K_{2n}$ with $2n$ vertices. Given any collection $\mathcal M$ of perfect matchings of size $s$, there exists a maximum number $f(n,x)$ such that if $s\leq f(n,x)$, then there exists a perfect matching that agrees with each perfect matching in $\mathcal M$ in at most $x-1$ edges. We use probabilistic arguments to give several lower bounds for $f(n,x)$. We also apply the Lovász local lemma to find a function $g(n,x)$ such that if each edge appears at most $g(n, x)$ times then there exists a perfect matching that agrees with each perfect matching in $\mathcal M$ in at most $x-1$ edges. This is an analogue of an extremal result vis-á-vis the covering radius of sets of permutations, which was studied by Cameron and Wanless (cf. \cite{cameron}), and Keevash and Ku (cf. \cite{ku}). We also conclude with a conjecture of a more general problem in hypergraph matchings. | \section{Introduction}
In this paper, let $K_{2n}$ be the complete graph with $2n$ {\it vertices}, $n\in\mathbb N$. A {\it matching} in $K_{2n}$ is a set of pairwise non-adjacent {\it edges}; that is, no two edges share a common vertex. A {\it perfect matching} is a matching which matches all vertices of the graph; that is, every vertex of the graph is incident to exactly one edge of the matching. Any perfect matching is represented by a collection of two-element sets where the elements of each set are two distinct vertices; for instance $\{\{v_1,v_2\},\{v_3,v_4\}\}$ is a perfect matching in $K_4$, as shown in Figure~\ref{fig:k4}. Also, a {\it hypergraph} is a pair $(V,E)$, where $V$ is a finite set of vertices and $E$ is a finite family of subsets of $V$, called {\it hyperedges}. Following the terminology in \cite{bollobas}, we denote by $H$ a $t${\it-uniform} hypergraph ($t\in\mathbb N$), that is, a hypergraph with $E\subset V^{(t)}$, where $V^{(t)}:=\{Y:Y\subset V,|Y|=t\}$.\bigskip
\begin{figure}[h]
\centering
\includegraphics[width=0.50\textwidth]{k4.jpg}
\caption{Perfect matching in $K_4$.}
\label{fig:k4}
\end{figure}
Consider the following problem:\bigskip
\begin{framed}
Given a collection of similar structures (a family of permutations, a system of finite sets, etc.), what is the maximum size of the collection in order to ensure the existence of another such structure that shares at most $k$ elements with each structure in the collection?
\end{framed}
\bigskip
This problem is also known as the covering radius problem. This research problem has its origins in group theory, particularly in the permutation group $S_n$ acting on the set $[n]$ of natural numbers from $1$ to $n$. In any collection $G$ of permutations, we can measure the {\it Hamming distance} (or distance) $d(g,h)$ between a permutation $g$ in $G$ and any permutation $h$ picked from $S_n$. Here, the Hamming distance between two permutations is the number of positions in which they differ. For example, in $S_3$, $d(123,231)=3$. If we were to fix $h$ above and measure the distances $d(h,p)$ for every $p\in G$, there exists a minimum distance which we can obtain between $h$ and some\footnote{There may exist more than one choice of $p_0$ which gives a minimum distance.} $p_0\in G$, i.e. $\min\{d(h,p):p\in G\}=d(h,p_0)$. Now, repeating this procedure for every permutation $h\in S_n$, we can find the maximum of all the minimum distances measured earlier. This maximum value, denoted $cr(G)$, is the covering radius of the collection $G$; in symbols, $cr(G):=\max_{h\in S_n}\min_{g\in G}{d(g,h)}$. Therefore, the covering radius problem is the problem of finding or, in many cases, estimating the covering radius of any given collection of permutations.\bigskip
Apropos of recent research, lower bounds on the covering radius of a subset $G$ of $S_n$ were established by Cameron and Wanless (2005) in \cite{cameron}, in which covering arguments were used to formulate a general criterion for finding lower bounds on covering radii of sets of permutations. Keevash and Ku (2006) later improved (cf. \cite{ku}) the general criterion, obtaining an even stronger result to determine the lower bound of the covering radius of any collection of permutations with some constraints vis-\`{a}-vis a frequency parameter. Their result is the best possible so far. Covering radius results have profound applications and implications in group theory and combinatorial structures; for instance, the authors above have applied their results to Latin squares and Latin transversals, and recent literature suggests several generalizations of this theory to general groups \cite{ku}. Moreover, similar classes of problems for intersecting families of finite sets have been studied extensively; in particular, \cite{babai} and \cite{graham} are good sources of information.\bigskip
In this paper, we consider the analogue of the problem mentioned above for perfect matchings in complete graphs. Our fundamental question is as follows:\bigskip
\begin{framed}
Suppose we have a finite collection of perfect matchings of $K_{2n}$. What is the largest possible number of elements in this collection such that we can find a perfect matching of $K_{2n}$ that {\it agrees} with each perfect matching in the collection in at most $x-1$ edges?
\end{framed}
\bigskip
Let $M$ denote an arbitrary perfect matching of $K_{2n}$. Moreover, call a vertex set $U\subset V(K_{2n})$ {\it{good}} (w.r.t. $M$) if $|U|\leq n$ and $\forall v_i,v_j\in U(e_{ij}\notin M)$. For a set $W$ of vertices, we say that two perfect matchings $M,M'$ agree on $W$ iff (i) $W$ is good w.r.t. both $M$ and $M'$; (ii) $\forall v_i\in W\,\forall v_j(e_{ij}\in M\Leftrightarrow e_{ij}\in M')$. For example, if $M=\{\{1,2\}, \{3,4\}, \{5,6\}\}$ and $M'=\{\{1,2\}, \{3,5\}, \{4,6\}\}$, then $M$ and $M'$ agree on $W=\{1\}$. We now present a few elementary bounds obtained from Boole's inequality (also known as the union bound).
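For concreteness, agreement in the edge sense amounts to counting common edges; a tiny illustration with our own helper (not from the paper):

```python
# Two perfect matchings of K_6, written as sets of 2-element frozensets;
# their agreement is the number of shared edges.
def agreement(M1, M2):
    return len(M1 & M2)

M  = {frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})}
Mp = {frozenset({1, 2}), frozenset({3, 5}), frozenset({4, 6})}
assert agreement(M, Mp) == 1   # they share only the edge {1, 2}
```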
\section{Elementary Results}
The union bound gives us several useful results.
\begin{thm}
\label{Crude 1}
Let $\mathcal M=\{M_1,...,M_s\}$ be a collection of perfect matchings in $K_{2n}$. If $s\leq{x!}\cdot\frac{{2n-x\choose x}}{{n\choose x}}$, then there exists a perfect matching that agrees with each $M_i\in\mathcal M$ in at most $x-1$ edges.
\end{thm}
\begin{shaded}
\noindent\textbf{Proof.}\bigskip
Randomly select a perfect matching out of all the perfect matchings. Consider any $M_i\in\mathcal M$ and any $T\subset V$, $|T|=x$, which is good w.r.t. $M_i$. Let $A_{i,T}$ be the event that the perfect matching selected agrees with $M_i$ on $T$. Then $$\mathbb P(A_{i,T})=\left[\frac{(2n-2x)!}{2^{n-x}\cdot(n-x)!}\right]\bigg/\left[\frac{(2n)!}{2^n\cdot n!}\right],$$\bigskip
since there are $\frac{(2n)!}{2^n\cdot n!}$ perfect matchings and exactly $\frac{(2n-2x)!}{2^{n-x}\cdot(n-x)!}$ of them with $x$ fixed edges.\bigskip
Let us sum the probabilities over all possible $i$ and $T$. Clearly, there are $s$ possible values of $i$, and for each perfect matching $M_i$ there are fewer than ${2n\choose x}$ good $T$. Moreover, $A_{i,T}\Leftrightarrow A_{i,T'}$ whenever $T,T'$ determine the same collection of edges of $M_i$, implying that the number of good $T$ should be reduced by a factor of $2^x$ in our calculation in order to avoid counting the same events more than once. Thus we have, by Boole's inequality, $$\mathbb P\left(\bigcup_{i,T}{A_{i,T}}\right)\leq\sum_{i,T}{\mathbb P(A_{i,T})}<s\cdot{2n\choose x}2^{-x}\cdot\left[\frac{(2n-2x)!}{2^{n-x}\cdot(n-x)!}\right]\bigg/\left[\frac{(2n)!}{2^n\cdot n!}\right]\leq 1.$$\bigskip
Therefore, with positive probability none of the $A_{i,T}$ occur, and there must exist a perfect matching which agrees with each $M_i\in\mathcal M$ on a vertex set of at most $x-1$ vertices, i.e. in at most $x-1$ edges.\smallskip
\begin{flushright}
$\blacksquare$
\end{flushright}
\end{shaded}
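The two counts used in the proof, $\frac{(2n)!}{2^n\cdot n!}$ perfect matchings in total and $\frac{(2n-2x)!}{2^{n-x}\cdot(n-x)!}$ of them containing $x$ fixed disjoint edges, can be confirmed by brute force for small $n$ (an illustrative sketch with our own helper functions):

```python
from math import factorial

def perfect_matchings(vertices):
    # Recursively enumerate all perfect matchings of the complete graph
    # on the given vertex set.
    vs = sorted(vertices)
    if not vs:
        yield frozenset()
        return
    v = vs[0]
    for w in vs[1:]:
        rest = [u for u in vs if u not in (v, w)]
        for m in perfect_matchings(rest):
            yield m | {frozenset({v, w})}

def pm_count(m):
    # (2n)! / (2^n * n!) with m = 2n vertices
    n = m // 2
    return factorial(m) // (2 ** n * factorial(n))

all_pm = list(perfect_matchings(range(6)))        # n = 3
assert len(all_pm) == pm_count(6)                 # 15 in total
fixed = frozenset({0, 1})                         # x = 1 fixed edge
with_fixed = [M for M in all_pm if fixed in M]
assert len(with_fixed) == pm_count(4)             # (2n-2x)!/(2^{n-x}(n-x)!) = 3
```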
\bigskip
Notice that the bound on $s$ is weak, since we gave a crude bound of ${2n\choose x}2^{-x}$ for the number of good $T$. By considering $T$ differently, we can obtain the exact number of good $T$. Here, we introduce the notion of $x$-matchings.\bigskip
\begin{dfn}
\label{matching}
An $x$-matching of $K_{2n}$ is a matching of size $x$. Thus, if $x=n$, then the $x$-matching is simply a perfect matching.
\end{dfn}
With this in mind, we can derive a larger upper bound on $s$ as follows:\bigskip
\begin{thm}
\label{Crude 2}
Let $\mathcal M=\{M_1,...,M_s\}$ be a family of perfect matchings in $K_{2n}$. If $s<\frac{x!}{2^x}\cdot\frac{{2n\choose x}\cdot{2n-x\choose x}}{{n\choose x}^2}$, then there exists a perfect matching that agrees with each $M_i\in\mathcal M$ in at most $x-1$ edges.
\end{thm}
\bigskip
\begin{shaded}
\noindent\textbf{Proof.}\bigskip
Randomly pick a perfect matching out of all the perfect matchings. Consider any $M_i\in\mathcal M$ and pick any $x$-matching $X\subset M_i$. Let $A_{i,X}$ be the event that the perfect matching picked contains $X$. Then $$\mathbb P(A_{i,X})=\left[\frac{(2n-2x)!}{2^{n-x}\cdot(n-x)!}\right]\bigg/\left[\frac{(2n)!}{2^n\cdot n!}\right],$$\bigskip
by the same reasoning as shown in the proof of Theorem \ref{Crude 1}.\bigskip
Let us sum the probabilities over all $i$ and $X$. Clearly, there are $s$ possible values of $i$, and for each perfect matching $M_i$ there are exactly ${n\choose x}$ $x$-matchings. Thus we have $$\mathbb P\left(\bigcup_{i,X}{A_{i,X}}\right)\leq\sum_{i,X}{\mathbb P(A_{i,X})}\leq s\cdot{n\choose x}\cdot\left[\frac{(2n-2x)!}{2^{n-x}\cdot(n-x)!}\right]\bigg/\left[\frac{(2n)!}{2^n\cdot n!}\right]< 1.$$\bigskip
Therefore, with positive probability none of the $A_{i,X}$ occur, and there must exist a perfect matching which agrees with each $M_i\in\mathcal M$ in at most $x-1$ edges.\smallskip
\begin{flushright}
$\blacksquare$
\end{flushright}
\end{shaded}
\section{Main Result}
Our main theorem requires the Lov\'{a}sz sieve. The Lov\'{a}sz local lemma is a powerful tool for showing the existence of structures with desired properties. Briefly speaking, we toss our events onto a probability space and evaluate the conditional probabilities of certain bad events occurring. If these probabilities are not too large in value, then with positive probability none of the bad events occur. More precisely,\bigskip
\begin{thm}[Lov\'{a}sz]
\label{Lovasz}
Let $\mathcal A=\{A_1,A_2,...,A_n\}$ be a collection of events in an arbitrary probability space $(\Omega,\mathcal F,\mathbb P)$. A graph $G(V,E)$ with $V=\{1,2,...,n\}$ is called a dependency graph for the events $A_1,A_2,...,A_n$ if each $A_i$ is mutually independent of the events $\{A_j: e_{ij}\notin E,\, j\neq i\}$. Suppose that $G(V,E)$ is a dependency graph for the above events and that there exist $x_1,x_2,...,x_n\in[0,1)$ such that for every $i$ and every $S\subset\{1,2,...,n\}\setminus\{j:e_{ij}\in E\}$, $$\mathbb P\left(A_i\mid\bigcap_{k\in S}{\overline{A_k}}\right)\leq x_i\cdot\prod_{e_{ij}\in E}{(1-x_j)}.$$\bigskip
\noindent Then $\mathbb P(\bigcap_{i=1}^{n}{\overline{A_i}})\geq\prod_{i=1}^{n}{(1-x_i)}$. Equivalently, with positive probability none of the $A_i$ occur.
\end{thm}
\bigskip
The proof of Theorem \ref{Lovasz} can be found in chapter 5 of \cite{alon} and chapter 19 of \cite{jukna}. We used the following special case (cf. \cite{alon}) of Theorem \ref{Lovasz} in our result:\bigskip
\begin{cor}
\label{Symmetric}
Suppose that $\mathcal A=\{A_1,A_2,...,A_n\}$ is a collection of events, and for any $A_i\in\mathcal A$ there is a subset $\mathcal D_{A_i}\subset\mathcal A$ of size at most $d$, such that for any subset $\mathcal S\subset\mathcal A\setminus\mathcal D_{A_i}$ we have $\mathbb P\left(A_i\mid\bigcap_{A_j\in\mathcal S}{\overline{A_j}}\right)\leq p$. If $ep(d+1)\leq 1$, then $\mathbb P(\bigcap_{i=1}^{n}{\overline{A_i}})>0$.
\end{cor}
\bigskip
We now establish our main result on the covering radius problem for sets of perfect matchings using Corollary \ref{Symmetric}. In this proof, the strategy we use mirrors that of the proof of the lower bound on the covering radius for sets of permutations, as presented in \cite{ku}. Such a strategy is also used in the proof of the Erd\H{o}s-Spencer theorem on Latin transversals, as presented in chapter 5 of \cite{alon} (pp. 73-74).\bigskip
\begin{thm}
\label{Main}
Let $\mathcal M=\{M_1,...,M_s\}$ be a collection of perfect matchings in $K_{2n}$. Moreover, each of the ${2n\choose 2}$ edges appears at most $k$ times in the $M_i\in\mathcal M$ (we call $k$ the frequency parameter). If $k\leq\frac{1}{e\cdot2x(2n-1){n-1\choose x-1}}\left(\sum_{j=2x-n}^{x}{{2x\choose 2j}\frac{(2j)!}{j!\cdot2^j}}-e\right)$, then there exists a perfect matching which agrees with each perfect matching $M_i\in\mathcal M$ in at most $x-1$ edges.
\end{thm}
\begin{shaded}
\noindent\textbf{Proof.}\bigskip
Randomly pick a perfect matching $M$ from the set of all perfect matchings in $K_{2n}$. Consider any $M_i\in\mathcal M$ and any $x$-matching $X\subset M_i$. Let $A_{i,X}$ be the event that $X\subset M$. In our dependency graph, connect $A_{i,X}$ to $A_{i',X'}$ iff $X$ and $X'$ share at least one common vertex in their underlying vertex sets.\bigskip
For each $A_{i,X}$, let the set of its neighbours in the dependency graph be $\mathcal D_{i,X}$.\bigskip
\noindent{\it Claim: $|\mathcal D_{i,X}|\leq k\cdot2x(2n-1){n-1\choose x-1}=d$.}\bigskip
Indeed, we first pick one vertex out of the $2x$ vertices in $X$, then choose out of the $2n-1$ remaining vertices one particular vertex to be its neighbour in the perfect matching. Next, we choose a perfect matching $M_{i'}\in\mathcal M$ that contains the constructed edge; this can be done in at most $k$ ways. Lastly, we just pick $x-1$ out of the remaining $n-1$ edges in $M_{i'}$ to form $X'$.\bigskip
Let us now consider the probability $\mathbb P\left(A_{i,X}\mid\bigcap_{A_{i',X'}\in\mathcal S}{\overline{A_{i',X'}}}\right)=p_0$ for any subset $\mathcal S\subset \mathcal A\setminus\mathcal D_{i,X}$. For brevity let us label $E=\bigcap_{A_{i',X'}\in\mathcal S}{\overline{A_{i',X'}}}$. We shall bound $p_0$ from above.\bigskip
Fix $A_{i,X}$. Without loss of generality, let the underlying set of $2x$ vertices of the $x$-matching $X$ be $V=\{v_1,v_2,...,v_{2x}\}$. Now, pair up arbitrarily many of the $2x$ vertices in $V$. This gives us a collection of singletons (vertices) and doubletons (edges) - we call such a collection $W$. However, we restrict our $W$ such that the total number of singletons and doubletons in any $W$ cannot exceed $n$, i.e. if there are $2p$ singletons (the number of singletons must be even) and $q$ doubletons then $2p+q\leq n$ (note that $q+p=x$). This gives us $n-x \geq p \geq0$. Thus, the set $\mathcal W$ of all such restricted $W$ has cardinality $$\sum_{j=2x-n}^{x}{{2x\choose 2j}\frac{(2j)!}{j!\cdot2^j}},$$\bigskip
where each summand is the number of ways to partition the underlying vertex set into a collection of $j$ doubletons and $2x-2j$ singletons. (To resolve ambiguity in the expression above, let ${m\choose j}=0$ if $j<0$.)\bigskip
For each $W$, let $B_W$ be the event that a perfect matching contains $W$. For example, $$M=\{\{v_1,v_2\},\{v_3,v_4\},\{v_5,v_6\}\}$$
contains $W=\{\{v_1,v_2\},\{v_3\},\{v_6\}\}$, where every pair of singletons in $W$ does not belong to any edge in $M$. Clearly, all the $B_W$ are mutually exclusive, and their union equals $\Omega$.\bigskip
We shall show that the number of perfect matchings contained in $B_W\cap E$ is at least the number of perfect matchings contained in $A_{i,X}\cap E$. This can be done by means of constructing an injection from $A_{i,X}\cap E$ to $B_W\cap E$ for a particular fixed $W$.\bigskip
\noindent{\it Claim: $\forall W\in\mathcal W\left(|B_W\cap E|\geq |A_{i,X}\cap E|\right)$.}\bigskip
First, for any $M\in A_{i,X}\cap E$, consider the remaining $n-x$ edges not in $X$. Direct each edge such that the tail of the directed edge is the vertex with the smaller subscript. Thus, every edge $\{v_i,v_j\}\notin X, i<j$ becomes $(v_i,v_j)$. This gives us $n-x$ ordered pairs of vertices. Now, arrange the $n-x$ edges lexicographically by the following rule: compare every two edges and place the edge whose first component has a vertex with a smaller subscript in front; i.e. if $(v_i,v_j),(v_k,v_l)$ are both directed edges originally belonging to $W$ and $k<i$, then $(v_k,v_l)$ goes in front of $(v_i,v_j)$. This gives an ordered $(n-x)$-tuple $((v_a,v_b),(v_c,v_d),...)$ where $a<b$, $c<d$ and $a<c$ and so on. Denote this sequence of transformations on $M\setminus X$ by $\tau$. Notice that for any two distinct perfect matchings $M,M'\in A_{i,X}\cap E$, at least two of their $n-x$ edges outside $X$ are distinct (e.g. $\{\{v_\alpha,v_\beta\},\{v_\chi,v_\delta\}\}\subset M$, $\{\{v_\alpha,v_\chi\},\{v_\beta,v_{\delta'}\}\}\subset M'$), so their images under $\tau$ will also be distinct. Therefore, $\tau$ is injective.\bigskip
Now consider $W$. Without loss of generality, let $W$ contain $2p$ singletons, where $0\leq p\leq n-x$. Order the singletons in $W$ naturally by comparing their respective vertices' subscripts. Denote by $W_\gamma$ the image of $W$ under the natural ordering. Without loss of generality, write $$W_\gamma=\Big\{\big(\{v_1\},\{v_2\},...,\{v_{2p}\}\big),\{v_{2p+1},v_{2p+2}\},...,\{v_{2x-1},v_{2x}\}\Big\}.$$
We define a mapping as follows:\bigskip
For any $M\in A_{i,X}\cap E$, we consider their images under $\tau$. Treating the $(n-x)$-tuple, of which each component is an ordered pair, as an ordered string of vertices of length $2n-2x$, we select the first $2p$ vertices appearing in the string and pair the $k$th vertex in the string with $v_k$ in $W$. This gives us the set $\Gamma=\{(\{v_1,v_a\},\{v_2,v_b\},...),\{v_{2p+1},v_{2p+2}\},...\}$. Remove the natural ordering on $\Gamma$ to yield $\Gamma_0=\{\{v_1,v_a\},\{v_2,v_b\},...,\{v_{2p+1},v_{2p+2}\},...\}$. Now map the shortened string of length $2n-2x-2p$ back to its set of unordered $n-x-p$ edges (note that this gives us edges which were originally in $M$); call this edge set $\Gamma_1$. Clearly, $\mathring{\Gamma}(M)=\Gamma_0\cup\Gamma_1$ gives us a perfect matching in $B_W\cap E$, since $E$ is the event that $X'\not \subseteq M$ where $X'\cap X=\emptyset$, guaranteeing that our mapping preserves $E$. Moreover, for any fixed $W$ and two distinct $M,M'\in A_{i,X}\cap E$ their respective $\mathring{\Gamma}$ are distinct. Indeed, consider $\{\{v_\alpha,v_\beta\},\{v_\chi,v_\delta\}\}\subset M\setminus X$ and $\{\{v_\alpha,v_\chi\},\{v_\beta,v_{\delta'}\}\}\subset M'\setminus X$, where $\{v_{\alpha}, v_{\beta}\}$ and $\{v_{\alpha}, v_{\chi}\}$ is the first edge in which $M$, $M'$ differ (after performing $\tau$ on $M\setminus X$ and $M'\setminus X$). Without loss of generality, let $\alpha = \min\{\alpha, \beta, \chi, \delta, \delta'\}$. Suppose that $\alpha$ is within the first $2p$ vertices of the ordered $(n-x)$-tuple (otherwise we are done since $\{\{v_\alpha,v_\beta\},\{v_\chi,v_\delta\}\}\subset M_W$ and $\{\{v_\alpha,v_\chi\},\{v_\beta,v_\delta\}\}\subset M'_W$). Then, if the mapping yields, for example, $\{\{v_4,v_\alpha\},\{v_5,v_\beta\}\}\subset M_W$, then we would yield $\{\{v_4,v_\alpha\},\{v_5,v_\chi\}\}\subset M'_W$; clearly $M_W\neq M'_W$. Thus there is an injection from $A_{i,X}\cap E$ to $B_W\cap E$, i.e. 
$|B_W\cap E|\geq |A_{i,X}\cap E|$.\bigskip
Therefore, we have $$p_0=\mathbb P\left(A_{i,X}\mid E\right)\leq\mathbb P\left(B_W\mid E\right).$$\bigskip
Summing over all possible $W$, we have $p_0\cdot\sum_{j=2x-n}^{x}{{2x\choose 2j}\frac{(2j)!}{j!\cdot2^j}}\leq 1$, which gives $$p_0\leq\frac{1}{\sum_{j=2x-n}^{x}{{2x\choose 2j}\frac{(2j)!}{j!\cdot2^j}}}=p.$$\bigskip
Now, we want $ep(d+1)\leq 1$. This is equivalent to $$k\leq\frac{1}{e\cdot2x(2n-1){n-1\choose x-1}}\left(\sum_{j=2x-n}^{x}{{2x\choose 2j}\frac{(2j)!}{j!\cdot2^j}}-e\right).$$\smallskip
\begin{flushright}
$\blacksquare$
\end{flushright}
\end{shaded}
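The cardinality of $\mathcal W$ used in the proof rests on the fact that a matching of size $j$ on $2x$ labelled vertices can be chosen in ${2x\choose 2j}\frac{(2j)!}{j!\cdot2^j}$ ways; this can be double-checked by brute force for small parameters (a verification sketch of our own):

```python
from itertools import combinations
from math import comb, factorial

def num_j_matchings(m, j):
    # Brute-force count of matchings of size j on m labelled vertices.
    edges = list(combinations(range(m), 2))
    count = 0
    for choice in combinations(edges, j):
        used = [v for e in choice for v in e]
        if len(set(used)) == 2 * j:   # edges pairwise disjoint
            count += 1
    return count

for x, n in [(2, 3), (3, 4)]:
    lo = max(2 * x - n, 0)
    formula = sum(comb(2 * x, 2 * j) * factorial(2 * j) // (factorial(j) * 2 ** j)
                  for j in range(lo, x + 1))
    brute = sum(num_j_matchings(2 * x, j) for j in range(lo, x + 1))
    assert formula == brute
```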
\section{A Conjecture}
In the proof of Theorem \ref{Main}, we used the $\tau$ transformation to encode perfect matchings injectively as ordered tuples. Here, we extend the notion of a perfect matching from graphs to $t$-uniform hypergraphs of order $tn$: a perfect matching of $H$ is a set $M\subset E$ of hyperedges such that $\forall v\in V\,\exists!\, e\in M\,(v\in e)$. This gives us a more general problem as follows:\bigskip
\begin{framed}
Let $\mathcal M=\{M_1,...,M_s\}$ be a collection of perfect matchings of a $t$-uniform hypergraph $H$ of order $tn$. Moreover, each of the ${tn\choose t}$ possible $t$-edges appears at most $k$ times. Suppose that there does not exist a perfect matching of $H$ which agrees with each perfect matching $M_i\in\mathcal M$ in at most $x-1$ edges. What is the best possible lower bound for $k$?
\end{framed}
\bigskip
Following the method of proof of Theorem \ref{Main}, randomly pick a perfect matching $M$ from the set of all perfect matchings of $H$. Consider any $M_i\in \mathcal M$ and any $x$-matching $X\subset M_i$. Let $A_{i,X}$ be the event that $X\subset M$, and connect $A_{i,X}$ to $A_{i',X'}$ iff $X$ and $X'$ share at least one common vertex in their underlying vertex set. For each $A_{i,X}$, let the set of its neighbours in the dependency graph be $\mathcal D_{i,X}$. A combinatorial argument yields $$|\mathcal D_{i,X}|\leq k\cdot tx{tn-1\choose t-1}{n-1\choose x-1}=d.$$\bigskip
If we attempt to bound $$\mathbb P\left(A_{i,X}\mid\bigcap_{A_{i',X'}\in\mathcal S}{\overline{A_{i',X'}}}\right)=p_0$$
for any subset $\mathcal S\subset\mathcal A\setminus\mathcal D_{i,X}$, a difficulty arises if we mirror the mapping technique. We can still consider events similar to $W$ which split the underlying vertex set of $X$ into sets of $t$-edges, $(t-1)$-edges etc. and order them. Moreover, if a transformation similar to $\tau$ is performed on any matching $M\in A_{i,X}\cap E$, injectivity is still preserved. However, while it seems intuitively true that our $B_W\cap E$ should contain more elements than $A_{i,X}\cap E$, it is not as straightforward to map vertices into the respective $(t-\theta)$-edges, $t-1\geq\theta\geq 1$, such that injectivity is preserved. Hence, the problem remains open. In particular, we conjecture:\bigskip
\begin{con}
Let $\mathcal M=\{M_1,...,M_s\}$ be a collection of perfect matchings of a $t$-uniform hypergraph $H$ of order $tn$. Moreover, each of the ${tn\choose t}$ possible $t$-edges appears at most $k$ times. If $k\leq \frac{1}{e\cdot tx{tn-1\choose t-1}{n-1\choose x-1}}\left(N-e\right)$, where $$N=\sum{\frac{(tx)!}{\prod_{i=1}^{t}{(i!)^{a_i}\cdot (a_i)!}}}$$\smallskip
is the sum over all vectors $(a_1,...,a_t)\in\left(\mathbb N\cup\{0\}\right)^t$ satisfying $ta_t+(t-1)a_{t-1}+...+a_1=tx$ and $a_t+a_{t-1}+...+a_1\leq n$ (so that, for $t=2$, each summand reduces to the one in Theorem \ref{Main}), then there exists a perfect matching which agrees with each perfect matching $M_i\in\mathcal M$ in at most $x-1$ edges.
\end{con}
\bigskip
The conjectured upper bound for $k$ rests on the assumption that this intuition is correct.
\section{Conclusion}
It is unknown whether the upper bound obtained for $k$, namely\bigskip
$$\frac{1}{e\cdot2x(2n-1){n-1\choose x-1}}\left(\sum_{j=2x-n}^{x}{{2x\choose 2j}\frac{(2j)!}{j!\cdot2^j}}-e\right),$$
\medskip
is optimal, inasmuch as no prior research appears to exist in this area. However, it is plausibly a fairly strong bound, because the Lov\'{a}sz sieve is known to establish good bounds in many combinatorial problems.\bigskip
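For small parameter values the bound above is easy to evaluate. The following sketch (our illustration; the function names are ours) computes the sum $N$ and the resulting upper bound on $k$ in the $t=2$ case:

```python
from math import comb, e, factorial

def num_perfect_matchings(m):
    # Number of perfect matchings on 2m vertices: (2m)! / (m! * 2^m).
    return factorial(2 * m) // (factorial(m) * 2 ** m)

def big_n(n, x):
    # N = sum over j of C(2x, 2j) * (2j)!/(j! 2^j), with j >= 2x - n
    # (the t = 2 specialisation of the sum in the conjecture).
    return sum(comb(2 * x, 2 * j) * num_perfect_matchings(j)
               for j in range(max(0, 2 * x - n), x + 1))

def k_upper_bound(n, x):
    # (N - e) / (e * 2x * (2n - 1) * C(n - 1, x - 1)), as in the conclusion.
    return (big_n(n, x) - e) / (e * 2 * x * (2 * n - 1) * comb(n - 1, x - 1))
```

For instance, `big_n(5, 2)` returns 10, the number of matchings (of any size, including the empty one) on $2x=4$ vertices.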
A possible continuation of our research is as follows. In \cite{ku}, a semi-random construction of a permutation code was given. In particular, using an analogue of Theorem \ref{Lovasz} for two events, an algorithm was formulated to construct a set of permutations in $S_n$ that is $<s$-intersecting in polynomial expected time. It would be natural to consider an analogue of this semi-random construction for collections of perfect matchings in $K_{2n}$.
\newpage
| {
"timestamp": "2011-04-26T02:02:40",
"yymm": "1009",
"arxiv_id": "1009.0810",
"language": "en",
"url": "https://arxiv.org/abs/1009.0810",
"abstract": "Consider the family of all perfect matchings of the complete graph $K_{2n}$ with $2n$ vertices. Given any collection $\\mathcal M$ of perfect matchings of size $s$, there exists a maximum number $f(n,x)$ such that if $s\\leq f(n,x)$, then there exists a perfect matching that agrees with each perfect matching in $\\mathcal M$ in at most $x-1$ edges. We use probabilistic arguments to give several lower bounds for $f(n,x)$. We also apply the Lovász local lemma to find a function $g(n,x)$ such that if each edge appears at most $g(n, x)$ times then there exists a perfect matching that agrees with each perfect matching in $\\mathcal M$ in at most $x-1$ edges. This is an analogue of an extremal result vis-á-vis the covering radius of sets of permutations, which was studied by Cameron and Wanless (cf. \\cite{cameron}), and Keevash and Ku (cf. \\cite{ku}). We also conclude with a conjecture of a more general problem in hypergraph matchings.",
"subjects": "Combinatorics (math.CO)",
"title": "The covering radius problem for sets of perfect matchings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9877587250685455,
"lm_q2_score": 0.8438951104066293,
"lm_q1q2_score": 0.8335647583468316
} |
https://arxiv.org/abs/1307.1708 | Piecewise linear approximations of the standard normal first order loss function | The first order loss function and its complementary function are extensively used in practical settings. When the random variable of interest is normally distributed, the first order loss function can be easily expressed in terms of the standard normal cumulative distribution and probability density function. However, the standard normal cumulative distribution does not admit a closed form solution and cannot be easily linearised. Several works in the literature discuss approximations for either the standard normal cumulative distribution or the first order loss function and their inverse. However, a comprehensive study on piecewise linear upper and lower bounds for the first order loss function is still missing. In this work, we initially summarise a number of distribution independent results for the first order loss function and its complementary function. We then extend this discussion by focusing first on random variable featuring a symmetric distribution, and then on normally distributed random variables. For the latter, we develop effective piecewise linear upper and lower bounds that can be immediately embedded in MILP models. These linearisations rely on constant parameters that are independent of the mean and standard deviation of the normal distribution of interest. We finally discuss how to compute optimal linearisation parameters that minimise the maximum approximation error. | \section{Introduction}
Consider a random variable $\omega$ and a scalar variable $x$. The first order loss function is defined as
\begin{equation}
\mathcal{L}(x,\omega)=\mbox{E}[\max(\omega-x,0)],
\end{equation}
where $\mbox{E}$ denotes the expected value.
The complementary first order loss function is defined as
\begin{equation}
\widehat{\mathcal{L}}(x,\omega)=\mbox{E}[\max(x-\omega,0)].
\end{equation}
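Both definitions can be checked by direct simulation. The sketch below (our illustration, with an arbitrary choice of distribution and sample size) estimates the two expectations by Monte Carlo:

```python
import random

random.seed(42)  # fixed seed so the estimates are reproducible

def loss_mc(x, sampler, n=200_000):
    # Monte Carlo estimate of L(x, omega) = E[max(omega - x, 0)].
    return sum(max(sampler() - x, 0.0) for _ in range(n)) / n

def comp_loss_mc(x, sampler, n=200_000):
    # Monte Carlo estimate of L_hat(x, omega) = E[max(x - omega, 0)].
    return sum(max(x - sampler(), 0.0) for _ in range(n)) / n

# Example: standard normal omega; at x = 0 both loss functions equal
# 1/sqrt(2*pi) ~ 0.3989 (cf. the closed forms derived below).
standard_normal = lambda: random.gauss(0.0, 1.0)
```

With $2\times10^5$ samples the estimates agree with the exact value to about two decimal places.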
The first order loss function and its complementary function play a key role in several application domains. In inventory control \cite{spp98} they are often used to express expected inventory holding or shortage costs, as well as service level measures such as the widely adopted ``fill rate'', also known as $\beta$ service level (\cite{Axsater2006}, p. 94). In finance the first order loss function may be employed to capture risk measures such as the so-called ``conditional value at risk'' (see e.g. \cite{citeulike:1194520}). Of course, the applicability of these functions goes beyond inventory theory and finance.
Despite its importance, to the best of our knowledge a comprehensive analysis of results concerning the first order loss function seems to be missing in the literature. In Section \ref{sec:loss_function}, we first summarise a number of distribution independent results for the first order loss function and its complementary function. We then focus on symmetric distributions and on normal distributions; for these we discuss ad-hoc results in Section \ref{sec:loss_function_normal}.
According to one of these results, the first order loss function can be expressed in terms of the cumulative distribution function of the random variable under scrutiny. Depending on the probability distribution adopted, integrating this function may constitute a challenging task. For instance, if the random variable is normally distributed, no closed formulation exists for its cumulative distribution function.
Several approximations have been proposed in the literature, see e.g. \cite{zelen64,shore1982,citeulike:12461167,citeulike:12461170,citeulike:12461174,citeulike:12317680}, which can be employed to approximate the first order loss function. However, these approximations are generally nonlinear and cannot be easily embedded in mixed integer linear programming (MILP) models.
In Section \ref{sec:lb} and \ref{sec:ub}, we introduce piecewise linear lower and upper bounds for the first order loss function and its complementary function for the case of normally distributed random variables. These bounds are based on standard bounding techniques from stochastic programming, i.e. Jensen's lower bound and Edmundson-Madansky upper bound \cite{citeulike:695971}, p. 167-168. The bounds can be readily used in MILP models and do not require instance dependent tabulations. Our linearisation strategy is based on standard optimal linearisation coefficients computed in such a way as to minimise the maximum approximation error, i.e. according to a minimax approach. Optimal coefficients for approximations comprising from two to eleven segments will be presented in Table \ref{tab:parameters}; these can be reused to approximate the loss function associated with any normally distributed random variable.
\section{The first order loss function and its complementary function}\label{sec:loss_function}
Consider a continuous random variable $\omega$ with support over $\mathbb{R}$, probability density function $g_\omega(x):\mathbb{R}\rightarrow[0,\infty)$ and cumulative distribution function $G_\omega(x):\mathbb{R}\rightarrow(0,1)$. The first order loss function can be rewritten as
\begin{equation}
\mathcal{L}(x,\omega)=\int_{-\infty}^{\infty} \max(t-x,0)g_\omega(t)\,dt=\int_{x}^{\infty} (t-x)g_\omega(t)\,dt.
\end{equation}
The complementary first order loss function can be rewritten as
\begin{equation}
\widehat{\mathcal{L}}(x,\omega)=\int_{-\infty}^{\infty} \max(x-t,0)g_\omega(t)\,dt=\int_{-\infty}^{x} (x-t)g_\omega(t)\,dt.
\end{equation}
\begin{lem}\label{thm:relationship_fol_com_2}
The first order loss function $\mathcal{L}(x,\omega)$ can also be expressed as
\begin{equation}
\mathcal{L}(x,\omega)=\int_{x}^{\infty} \left(1-G_\omega(t)\right)\,dt
\end{equation}
\end{lem}
\begin{proof}
\begin{align}
\mathcal{L}(x,\omega) &= \int_{x}^{\infty} (t-x)g_\omega(t)\,dt\\
&= \int_{x}^{\infty} t g_\omega(t)\,dt-x \int_{x}^{\infty} g_\omega(t)\,dt
\end{align}
The integral
\[\int_{x}^{\infty} t g_\omega(t)\,dt\]
can be evaluated by a standard integration by parts.
Let $u(t)=t$, $u'(t)=1$, $v(t)=-(1-G_\omega(t))$, $v'(t)=g_\omega(t)$. Rewrite the integral as
\[\int_{x}^{b} t g_\omega(t)\,dt=\left[-t(1-G_\omega(t))\right]_{x}^{b}+\int_{x}^{b}(1-G_\omega(t))\, dt.\]
As $b\rightarrow\infty$ the product term converges to $x(1-G_\omega(x))$; taking the limit yields the identity
\[\int_{x}^{\infty} t g_\omega(t)\,dt=x(1-G_\omega(x))+\int_{x}^{\infty}(1-G_\omega(t))\, dt\]
and by substituting $\int_{x}^{\infty} t g_\omega(t)\,dt$ with this expression we obtain
\begin{align}
\mathcal{L}(x,\omega) &= x(1-G_\omega(x))+\int_{x}^{\infty}(1-G_\omega(t))\, dt - x (1-G_\omega(x))\\
&= \int_{x}^{\infty}(1-G_\omega(t))\, dt
\end{align}
\end{proof}
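As a sanity check (ours, not part of the original text), Lemma \ref{thm:relationship_fol_com_2} can be verified numerically for an exponentially distributed $\omega$ with rate $\lambda$, for which $1-G_\omega(t)=e^{-\lambda t}$ for $t\geq 0$ and the first order loss function has the known closed form $\mathcal{L}(x,\omega)=e^{-\lambda x}/\lambda$ for $x\geq 0$:

```python
import math

LAM = 1.5  # rate of the exponential distribution (illustrative choice)

def survival(t):
    # 1 - G(t) for an Exp(LAM) random variable, t >= 0.
    return math.exp(-LAM * t)

def loss_via_survival(x, upper=40.0, steps=100_000):
    # Trapezoidal approximation of int_x^infty (1 - G(t)) dt,
    # truncated at `upper` (the tail beyond it is negligible here).
    h = (upper - x) / steps
    s = 0.5 * (survival(x) + survival(upper))
    s += sum(survival(x + i * h) for i in range(1, steps))
    return s * h
```

For $x=0.7$ the quadrature agrees with $e^{-\lambda x}/\lambda$ to within $10^{-5}$.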
The following well-known lemma is introduced, together with its proof, for completeness.
\begin{lem}\label{thm:compl_fol}
The complementary first order loss function $\widehat{\mathcal{L}}(x,\omega)$ can also be expressed as
\begin{equation}
\widehat{\mathcal{L}}(x,\omega)=\int_{-\infty}^{x} G_\omega(t)\,dt.
\end{equation}
\end{lem}
\begin{proof}
\begin{align}
\widehat{\mathcal{L}}(x,\omega) &= \int_{-\infty}^{x} (x-t)g_\omega(t)\,dt\\
&= x \int_{-\infty}^{x} g_\omega(t)\,dt-\int_{-\infty}^{x} t g_\omega(t)\,dt\\ &= x G_\omega(x) - \int_{-\infty}^{x} t g_\omega(t)\,dt\\
&= x G_\omega(x) - x G_\omega(x) + \int_{-\infty}^{x} G_\omega(t)\,dt\\
&= \int_{-\infty}^{x} G_\omega(t)\,dt
\end{align}
\end{proof}
There is a close relationship between the first order loss function and the complementary first order loss function.
\begin{lem}\label{thm:relationship_fol_com_1}
The first order loss function $\mathcal{L}(x,\omega)$ can also be expressed as
\begin{equation}
\mathcal{L}(x,\omega)=\widehat{\mathcal{L}}(x,\omega)-(x-\tilde{\omega})
\end{equation}
where $\tilde{\omega}=\mbox{\em E}[\omega]$.
\end{lem}
\begin{proof}
\begin{align}
\mathcal{L}(x,\omega) &= \int_{x}^{\infty} (t-x)g_\omega(t)\,dt\\
&= \int_{x}^{\infty} t g_\omega(t)\,dt-x \int_{x}^{\infty} g_\omega(t)\,dt\\ &= \int_{-\infty}^{\infty} t g_\omega(t)\,dt-\int_{-\infty}^{x} t g_\omega(t)\,dt-x \int_{x}^{\infty} g_\omega(t)\,dt\\
&= \int_{-\infty}^{\infty} t g_\omega(t)\,dt-\int_{-\infty}^{x} t g_\omega(t)\,dt-x (1-G_\omega(x))\\
&= \int_{-\infty}^{\infty} t g_\omega(t)\,dt- x G_\omega(x) + \int_{-\infty}^{x} G_\omega(t)\,dt-x (1-G_\omega(x))\\
&= \int_{-\infty}^{x} G_\omega(t)\,dt-(x-\tilde{\omega})\\
&= \widehat{\mathcal{L}}(x,\omega)-(x-\tilde{\omega})
\end{align}
\end{proof}
Because of the relation discussed in Lemma \ref{thm:relationship_fol_com_1}, in what follows without loss of generality most of the results will be presented for the complementary first order loss function.
Another known result for the first order loss function and its complementary function is their convexity, which we present next.
\begin{lem}\label{lem:convexity_loss_function}
$\mathcal{L}(x,\omega)$ and $\widehat{\mathcal{L}}(x,\omega)$ are convex in $x$.
\end{lem}
\begin{proof}
We shall prove the result for $\widehat{\mathcal{L}}(x,\omega)$. Recall that $\widehat{\mathcal{L}}(x,\omega)=\int_{-\infty}^{x} G_\omega(t)\,dt$. From the fundamental theorem of integral calculus
\[\frac{d}{dx}\widehat{\mathcal{L}}(x,\omega)=G_\omega(x)\]
and
\[\frac{d^2}{dx^2}\widehat{\mathcal{L}}(x,\omega)=g_\omega(x).\]
Since $g_\omega(x)$ is nonnegative the result follows immediately; furthermore, the proof for $\mathcal{L}(x,\omega)$ follows from Lemma \ref{thm:relationship_fol_com_1} and from the fact that $-x$ is convex.
\end{proof}
For a random variable $\omega$ with symmetric probability density function, we introduce the following results.
\begin{lem}\label{thm:fol_symm}
If the probability density function of $\omega$ is symmetric about a mean value $\tilde{\omega}$, then \[\mathcal{L}(x,\omega)=\widehat{\mathcal{L}}(2\tilde{\omega}-x,\omega).\]
\end{lem}
\begin{proof}
By symmetry, $1-G_\omega(t)=G_\omega(2\tilde{\omega}-t)$; the substitution $s=2\tilde{\omega}-t$ then gives
\begin{align}
\mathcal{L}(x,\omega) &= \int_{x}^{\infty} \left(1-G_\omega(t)\right)\,dt \\
&= \int_{-\infty}^{2\tilde{\omega}-x} G_\omega(s)\,ds \\
&= \widehat{\mathcal{L}}(2\tilde{\omega}-x,\omega)
\end{align}
\end{proof}
\begin{lem}\label{thm:fol_automorphism}
If the probability density function of $\omega$ is symmetric about a mean value $\tilde{\omega}$, then
\[\widehat{\mathcal{L}}(x,\omega)=\widehat{\mathcal{L}}(2\tilde{\omega}-x,\omega)+(x-\tilde{\omega})\] and
\[\mathcal{L}(x,\omega)=\mathcal{L}(2\tilde{\omega}-x,\omega)-(x-\tilde{\omega}).\]
\end{lem}
\begin{proof}
Follows immediately from Lemma \ref{thm:relationship_fol_com_1} and Lemma \ref{thm:fol_symm}.
\end{proof}
The results presented so far are easily extended to the case in which the random variable is discrete. In the following section we present results for the case in which the random variable is normally distributed.
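In the discrete case the lemmas above can even be verified in exact arithmetic. The sketch below (our illustration, with an arbitrary symmetric three-point distribution) checks Lemma \ref{thm:relationship_fol_com_1} and Lemma \ref{thm:fol_symm} using rational numbers:

```python
from fractions import Fraction as F

# A symmetric three-point distribution with mean 0 (illustrative choice).
dist = {F(-2): F(1, 4), F(0): F(1, 2), F(2): F(1, 4)}
mean = sum(v * p for v, p in dist.items())  # tilde-omega = 0

def loss(x):
    # L(x, omega) = E[max(omega - x, 0)], computed exactly.
    return sum(max(v - x, F(0)) * p for v, p in dist.items())

def comp_loss(x):
    # L_hat(x, omega) = E[max(x - omega, 0)].
    return sum(max(x - v, F(0)) * p for v, p in dist.items())
```

Both $\mathcal{L}(x,\omega)=\widehat{\mathcal{L}}(x,\omega)-(x-\tilde{\omega})$ and $\mathcal{L}(x,\omega)=\widehat{\mathcal{L}}(2\tilde{\omega}-x,\omega)$ then hold exactly at every rational $x$.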
\section{The first order loss function for a normally distributed random variable}\label{sec:loss_function_normal}
Let $\zeta$ be a normally distributed random variable with mean $\mu$ and standard deviation $\sigma$.
Recall that the Normal probability density function is defined as
\begin{equation}
g_\zeta(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}.
\end{equation}
No closed form expression exists for the cumulative distribution function \[G_\zeta(x)=\int_{-\infty}^x g_\zeta(t)\,dt.\]
Let $\phi(x)$ be the standard Normal probability density function
and $\Phi(x)$ the respective cumulative distribution function.
We next present three known results for a normally distributed random variable: a standardisation result in Lemma \ref{lem:folf_std_norm}, and two closed form expressions for the computation of the loss function and of its complementary function in Lemmas \ref{thm:compl_fol_closed} and \ref{thm:compl_fol_closed_2}.
\begin{lem}\label{lem:folf_std_norm}
The complementary first order loss function of $\zeta$ can be expressed in terms of the standard Normal cumulative distribution function as
\begin{equation}\label{eq:compl_fol_norm}
\widehat{\mathcal{L}}(x,\zeta)=\sigma\int_{-\infty}^{\frac{x-\mu}{\sigma}} \Phi(t)\,dt=\sigma\widehat{\mathcal{L}}\left(\frac{x-\mu}{\sigma},Z\right),
\end{equation}
where $Z$ is a standard Normal random variable.
\end{lem}
\begin{proof}
Recall that the complementary first order loss function is defined as
\[\widehat{\mathcal{L}}(x,\zeta)=\int_{-\infty}^{x} (x-t)g_\zeta(t)\,dt.\]
We change the upper integration limit to
\[f(x)=\frac{x-\mu}{\sigma}\]
by noting that
\[f^{-1}(y)=\sigma y+\mu,~~~~~\frac{df^{-1}(y)}{dy}=\sigma\]
it follows that
\begin{align}
\widehat{\mathcal{L}}(x,\zeta) &=\int_{-\infty}^{f(x)} (x-f^{-1}(t))g_\zeta(f^{-1}(t))\frac{df^{-1}(t)}{dt}\,dt\\
&=\sigma\int_{-\infty}^{\frac{x-\mu}{\sigma}} (x-\sigma t -\mu)\frac{1}{\sigma}\phi(t)\,dt\\
&=\sigma\int_{-\infty}^{\frac{x-\mu}{\sigma}} \left(\frac{x-\mu}{\sigma} - t\right)\phi(t)\,dt\\
&=\sigma\widehat{\mathcal{L}}\left(\frac{x-\mu}{\sigma},Z\right).
\end{align}
\end{proof}
\begin{lem}\label{thm:compl_fol_closed}
The complementary first order loss function $\widehat{\mathcal{L}}(x,\zeta)$ can be rewritten in closed form as
\[\widehat{\mathcal{L}}(x,\zeta)=\sigma\left(\phi\left(\frac{x-\mu}{\sigma}\right)+\Phi\left(\frac{x-\mu}{\sigma}\right)\frac{x-\mu}{\sigma}\right)\]
\end{lem}
\begin{proof}
Integrate by parts Eq. \ref{eq:compl_fol_norm} and observe that $\int_{-\infty}^{\frac{x-\mu}{\sigma}}t\phi(t)\,dt=-\phi(\frac{x-\mu}{\sigma})$.
\end{proof}
From Lemma \ref{thm:relationship_fol_com_1} the first order loss function of $\zeta$ can be expressed as
\begin{equation}\label{eq:fol_norm}
\mathcal{L}(x,\zeta)=-(x-\mu)+\sigma\int_{-\infty}^{\frac{x-\mu}{\sigma}} \Phi(t)\,dt
\end{equation}
Recall that an alternative expression is obtained via Lemma \ref{thm:relationship_fol_com_2},
\begin{equation}\label{eq:fol_norm_1}
\mathcal{L}(x,\zeta)=\sigma\int_{\frac{x-\mu}{\sigma}}^{\infty} (1-\Phi(t))\,dt.
\end{equation}
From Lemma \ref{thm:relationship_fol_com_1} and Lemma \ref{thm:compl_fol_closed} we obtain the closed form expression
\begin{equation}\label{eq:fol_norm_closed}
\mathcal{L}(x,\zeta)=-(x-\mu)+\sigma\left(\phi\left(\frac{x-\mu}{\sigma}\right)+\Phi\left(\frac{x-\mu}{\sigma}\right)\frac{x-\mu}{\sigma}\right)
\end{equation}
\begin{lem}\label{thm:compl_fol_closed_2}
The first order loss function $\mathcal{L}(x,\zeta)$ can be rewritten in closed form as
\[\mathcal{L}(x,\zeta)=\sigma\left(\phi\left(\frac{x-\mu}{\sigma}\right)-\left(1-\Phi\left(\frac{x-\mu}{\sigma}\right)\right)\frac{x-\mu}{\sigma}\right)\]
\end{lem}
\begin{proof}
Integrate by parts Eq. \ref{eq:fol_norm} and observe that $\int_{\frac{x-\mu}{\sigma}}^{\infty}t\phi(t)\,dt=\phi(\frac{x-\mu}{\sigma})$.
\end{proof}
Furthermore, since the Normal distribution is symmetric, both Lemma \ref{thm:fol_symm} and Lemma \ref{thm:fol_automorphism} hold.
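The closed forms of Lemmas \ref{thm:compl_fol_closed} and \ref{thm:compl_fol_closed_2}, together with the relations above, are straightforward to implement. A minimal sketch (ours), using only \texttt{math.erf}:

```python
import math

def phi(z):
    # Standard normal pdf.
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def loss(x, mu, sigma):
    # L(x, zeta) = sigma * (phi(z) - (1 - Phi(z)) * z), z = (x - mu) / sigma.
    z = (x - mu) / sigma
    return sigma * (phi(z) - (1.0 - Phi(z)) * z)

def comp_loss(x, mu, sigma):
    # L_hat(x, zeta) = sigma * (phi(z) + Phi(z) * z).
    z = (x - mu) / sigma
    return sigma * (phi(z) + Phi(z) * z)
```

At $x=\mu$ both functions equal $\sigma/\sqrt{2\pi}$, and the identities $\mathcal{L}(x,\zeta)=\widehat{\mathcal{L}}(x,\zeta)-(x-\mu)$ and $\mathcal{L}(x,\zeta)=\widehat{\mathcal{L}}(2\mu-x,\zeta)$ hold up to floating point error.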
\section{Jensen's lower bound for the standard normal first order loss function}\label{sec:lb}
We introduce a well-known inequality from stochastic programming \cite{citeulike:695971}, p. 167.
\subsection{Jensen's lower bound}
\begin{thm}[Jensen's inequality]\label{jensens}
Consider a random variable $\omega$ with support $\Omega$ and a function $f(x,s)$, which for a fixed $x$ is convex for all $s\in\Omega$, then
\[\mbox{E}[f(x,\omega)]\geq f(x,\mbox{E}[\omega]).\]
\end{thm}
\begin{proof}
\cite{citeulike:2516312}, p. 140.
\end{proof}
Common discrete lower bounding approximations in stochastic programming are extensions of Jensen's inequality. The usual strategy is to find a low-cardinality discrete set of realisations representing a good approximation of the true underlying distribution.
\cite{citeulike:2516312}, p. 288, discuss one of these discrete lower bounding approximations, which consists in partitioning the support $\Omega$ into a number of disjoint regions; Jensen's bound is then applied in each of these regions.
More formally, let $g_{\omega}(\cdot)$ denote the probability density function of $\omega$ and consider a partition of the support $\Omega$ of $\omega$ into $N$ disjoint compact subregions $\Omega_1,\ldots,\Omega_N$. We define, for all $i=1,\ldots,N$
\[p_i=\Pr\{\omega\in \Omega_i\}=\int_{\Omega_i} g_{\omega}(t)\,dt\]
and
\[\mbox{E}[\omega|\Omega_i]=\frac{1}{p_i}\int_{\Omega_i} t g_{\omega}(t)\,dt\]
\begin{thm}\label{discrete_bounding}
\[\mbox{E}[f(x,\omega)]\geq\sum_{i=1}^N p_i f(x,\mbox{E}[\omega|\Omega_i])\]
\end{thm}
\begin{proof}
\cite{citeulike:2516312}, p. 289.
\end{proof}
\begin{thm}
Given a random variable $\omega$ Jensen's bound (Theorem \ref{jensens}) is applicable to the first order loss function $\mathcal{L}(x,\omega)$ and its complementary function $\widehat{\mathcal{L}}(x,\omega)$.
\end{thm}
\begin{proof}
Follows immediately from Lemma \ref{lem:convexity_loss_function}.
\end{proof}
Having established this result, we must then decide how to partition the support of $\omega$: to obtain a good lower bound, the partition must be selected carefully. The optimal partitioning strategy will depend, of course, on the probability distribution of the random variable $\omega$.
\subsection{Minimax discrete lower bounding approximation}
We discuss a minimax strategy for generating discrete lower bounding approximations of the (complementary) first order loss function. In this strategy, we partition the support of $\omega$ into a predefined number of regions $N$ in order to minimise the maximum approximation error.
Consider a random variable $\omega$ and the associated complementary first order loss function
\[\widehat{\mathcal{L}}(x,\omega)=\mbox{E}[\max(x-\omega,0)];\]
assume that the support $\Omega$ of $\omega$ is partitioned into $N$ disjoint subregions $\Omega_1,\ldots,\Omega_N$.
\begin{lem}\label{lem:piecewise_linear}
For the (complementary) first order loss function the lower bound presented in Theorem \ref{discrete_bounding} is a piecewise linear function with $N+1$ segments.
\end{lem}
\begin{proof}
Consider the bound presented in Theorem \ref{discrete_bounding} and let $f(x,\omega)=\max(x-\omega,0)$,
\[\widehat{\mathcal{L}}_{lb}(x,\omega)=\sum_{i=1}^N p_i \max(x-\mbox{E}[\omega|\Omega_i],0)\]
this function is equivalent to
{\scriptsize
\[\widehat{\mathcal{L}}_{lb}(x,\omega)=\left\{
\begin{array}{ll}
0&-\infty\leq x\leq \mbox{E}[\omega|\Omega_1]\\
p_1 x - p_1\mbox{E}[\omega|\Omega_1]&\mbox{E}[\omega|\Omega_1]\leq x\leq \mbox{E}[\omega|\Omega_2]\\
(p_1+p_2) x - (p_1\mbox{E}[\omega|\Omega_1]+p_2\mbox{E}[\omega|\Omega_2])&\mbox{E}[\omega|\Omega_2]\leq x\leq \mbox{E}[\omega|\Omega_3]\\
\vdots&\vdots\\
(p_1+p_2+\hdots+p_N) x - (p_1\mbox{E}[\omega|\Omega_1]+p_2\mbox{E}[\omega|\Omega_2]+\hdots+p_N\mbox{E}[\omega|\Omega_N])&\mbox{E}[\omega|\Omega_{N}]\leq x<\infty\\
\end{array}
\right.\]\\}
which is piecewise linear in $x$ with breakpoints at $\mbox{E}[\omega|\Omega_1],\mbox{E}[\omega|\Omega_2],\ldots,\mbox{E}[\omega|\Omega_N]$. The proof for the first order loss function follows a similar reasoning.
\end{proof}
\begin{lem}\label{lem:tangent}
Consider the $i$-th linear segment of $\widehat{\mathcal{L}}_{lb}(x,\omega)$
\[\widehat{\mathcal{L}}^i_{lb}(x,\omega)=x\sum_{k=1}^i p_k-\sum_{k=1}^i p_k \mbox{E}[\omega|\Omega_k]~~~~\mbox{E}[\omega|\Omega_{i}]\leq x\leq \mbox{E}[\omega|\Omega_{i+1}],\]
where $i=1,\ldots,N$. Let $\Omega_i=[a,b]$, then $\widehat{\mathcal{L}}^i_{lb}(x,\omega)$ is tangent to $\widehat{\mathcal{L}}(x,\omega)$ at $x=b$. Furthermore, the $0$-th segment $y=0$ is tangent to $\widehat{\mathcal{L}}(x,\omega)$ at $x=-\infty$.
\end{lem}
\begin{proof}
Note that
\[\widehat{\mathcal{L}}^i_{lb}(x,\omega)=x\sum_{k=1}^i \int_{\Omega_k} g_{\omega}(t)\;dt-\sum_{k=1}^i \int_{\Omega_k} t g_{\omega}(t)\;dt\]
and that \[\Omega_1\cup\Omega_2\cup\ldots\cup\Omega_i=(-\infty,b];\]
it follows
\[\widehat{\mathcal{L}}^i_{lb}(x,\omega)=x \int_{-\infty}^b g_{\omega}(t)\;dt-\int_{-\infty}^b t g_{\omega}(t)\;dt\]
and
\[\widehat{\mathcal{L}}^i_{lb}(x,\omega)=G_{\omega}(b)(x-b)+\int_{-\infty}^b G_{\omega}(t)\;dt,\]
which is the equation of the tangent line to $\widehat{\mathcal{L}}(x,\omega)$ at the point $b$, that is
\[y=\widehat{\mathcal{L}}(b,\omega)'(x-b)+\widehat{\mathcal{L}}(b,\omega).\]
The line $y=0$ is tangent to $\widehat{\mathcal{L}}(x,\omega)$ at $x=-\infty$ since $\widehat{\mathcal{L}}(x,\omega)$ is convex, positive and
\[\lim_{x\rightarrow -\infty} \widehat{\mathcal{L}}(x,\omega)=0.\]
The very same reasoning can be easily applied to the first order loss function.
\end{proof}
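Lemma \ref{lem:tangent} has a simple computational counterpart: since $\widehat{\mathcal{L}}$ is convex, every tangent line $y=G_\omega(b)(x-b)+\widehat{\mathcal{L}}(b,\omega)$ is a global lower bound. The sketch below (ours, for a standard normal random variable) verifies this on a grid of points:

```python
import math

def phi(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def comp_loss(x):
    # Closed form of L_hat(x, Z) for a standard normal Z.
    return phi(x) + x * Phi(x)

def tangent(b, x):
    # Tangent line to L_hat at b (slope Phi(b)), evaluated at x.
    return Phi(b) * (x - b) + comp_loss(b)
```

At $x=b$ the two coincide; everywhere else the tangent line lies below the loss function.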
\begin{lem}\label{lem:max_error_at_breakpoints}
The maximum approximation error between $\widehat{\mathcal{L}}_{lb}(x,\omega)$ and $\widehat{\mathcal{L}}(x,\omega)$ will be attained at a breakpoint.
\end{lem}
\begin{proof}
By recalling that $\widehat{\mathcal{L}}(x,\omega)$ is convex (Lemma \ref{lem:convexity_loss_function}), since $\widehat{\mathcal{L}}_{lb}(x,\omega)$ is piecewise linear (Lemma \ref{lem:piecewise_linear}) and each segment of $\widehat{\mathcal{L}}_{lb}(x,\omega)$ is tangent to $\widehat{\mathcal{L}}(x,\omega)$ (Lemma \ref{lem:tangent}), it follows that the maximum error will be attained at a breakpoint.
\end{proof}
\begin{thm}\label{thm:approx_err_equal}
Given the number of regions $N$, $\Omega_1,\ldots,\Omega_N$ is an optimal partition of the support $\Omega$ of $\omega$ under a minimax strategy, if and only if approximation errors at breakpoints are all equal.
\end{thm}
\begin{proof}
The approximation errors for $x\rightarrow-\infty$ and $x\rightarrow\infty$ are both 0; since we have $N+1$ segments, we only have $N$ breakpoints to check.
We first show that ($\rightarrow$) if $\Omega_1,\ldots,\Omega_N$ is an optimal partition of the support $\Omega$ of $\omega$ under a minimax strategy, then approximation errors at breakpoints are all equal.
A first observation that follows immediately from Lemma \ref{lem:max_error_at_breakpoints} is that, if the slope of segment $\widehat{\mathcal{L}}^{i+1}_{lb}(x,\omega)$ remains unchanged, and the breakpoint between $\widehat{\mathcal{L}}^i_{lb}(x,\omega)$ and $\widehat{\mathcal{L}}^{i+1}_{lb}(x,\omega)$ moves towards the point at which $\widehat{\mathcal{L}}^{i+1}_{lb}(x,\omega)$ is tangent to $\widehat{\mathcal{L}}(x,\omega)$, the error at such breakpoint decreases.
If one changes the size of region $\Omega_i$ so that the upper limit becomes $b_i+\Delta$, then $\mbox{E}[\omega|\Omega_i]$ will increase if $\Delta>0$, or will decrease if $\Delta<0$. This immediately follows from the definition of $\mbox{E}[\omega|\Omega_i]$. Therefore, the breakpoint between segment $i$ and segment $i+1$, which occurs at $\mbox{E}[\omega|\Omega_i]$, will move accordingly. However, the slope of segment $i+1$, which we recall is equal to $G_\omega(b_{i+1})$, depends uniquely on the upper limit of the region $\Omega_{i+1}$, $b_{i+1}$, and is not affected by a change in the upper limit of region $\Omega_i$. Therefore, the error at the breakpoint between segment $i$ and segment $i+1$ will decrease if $\Delta>0$, or will increase if $\Delta<0$.
Now, assume that $\Omega_1,\ldots,\Omega_N$ is an optimal partition of the support $\Omega$ of $\omega$ and approximation errors at breakpoints are not all equal. Furthermore, assume that the maximum approximation error occurs at breakpoint $i$. By increasing the size of the region $\Omega_i$, i.e. by setting the upper limit to $b_i+\Delta$, where $\Delta>0$, it is possible to decrease the maximum error until it becomes equal to the error at breakpoint $k$, where $k\in\{1,\ldots,i-1\}$. The procedure can be repeated until all approximation errors are equal.
Second, we show that ($\leftarrow$) if approximation errors at breakpoints are all equal, then $\Omega_1,\ldots,\Omega_N$ is an optimal partition of the support $\Omega$ of $\omega$ under a minimax strategy.
If approximation errors at breakpoints are all equal and we change the size of region $\Omega_i$ by setting the upper limit to $b_i+\Delta$, where $\Delta>0$, then the approximation error at breakpoint $i-1$ will increase; conversely, if $\Delta<0$, then the approximation error at breakpoint $i+1$ will increase.
\end{proof}
By using this last result it is possible to derive a set of equations that can be solved for computing an optimal partitioning.
Let us consider the error $e_i$ at breakpoint $i$, which can be expressed as
\[e_i=\widehat{\mathcal{L}}(\mbox{E}[\omega|\Omega_i],\omega)-\widehat{\mathcal{L}}^{i}_{lb}(\mbox{E}[\omega|\Omega_i],\omega),\]
where
$\Omega_i=[a_i,b_i]$.
Since we have $N$ breakpoints to check, we must solve a system comprising the following $N-1$ equations
\[e_1=e_i~~~\mbox{for }i=2,\ldots,N,\]
subject to the following restrictions
\[
\begin{array}{lll}
a_1&=-\infty\\
b_N&=\infty\\
a_i&\leq b_i&\mbox{for }i=1,\ldots,N\\
b_i&=a_{i+1}&\mbox{for }i=1,\ldots,N-1
\end{array}
\]
The system therefore involves $N-1$ variables, each of which identifies the boundary between two disjoint regions $\Omega_i$ and $\Omega_{i+1}$.
\begin{thm}\label{thm:approx_err_symmetric}
Assume that the probability density function of $\omega$ is symmetric about a mean value $\tilde{\omega}$. Then, under a minimax strategy, if $\Omega_1,\ldots,\Omega_N$ is an optimal partition of the support $\Omega$ of $\omega$, breakpoints will be symmetric about $\tilde{\omega}$.
\end{thm}
\begin{proof}
This follows from Lemma \ref{thm:fol_automorphism} and Theorem \ref{thm:approx_err_equal}.
\end{proof}
In this case, by exploiting the symmetry of the piecewise linear approximation, an optimal partitioning can be derived by solving a smaller system comprising $\lceil N/2 \rceil$ equations, where $N$ is the number of regions $\Omega_i$ and $\lceil x \rceil$ denotes $x$ rounded up to the nearest integer.
Unfortunately, equations in the above system are nonlinear and do not admit a closed form solution in the general case.
\subsubsection{Normal distribution}
We will next discuss the system of equations that leads to an optimal partitioning for the case of a standard Normal random variable $Z$. This partitioning leads to a piecewise linear approximation that is easily extended to the general case of a normally distributed variable $\zeta$ with mean $\mu$ and standard deviation $\sigma$ via Lemma \ref{lem:folf_std_norm}; that lemma also shows that the error of the approximation is independent of $\mu$ and proportional to $\sigma$.
Consider a partitioning for the support $\Omega$ of $Z$ into $N$ adjacent regions $\Omega_i=[a_i,b_i]$, where $i=1,\ldots,N$.
From Theorem \ref{thm:approx_err_symmetric}, the finite region boundaries satisfy $b_{i}=-b_{N-i}$ for $i=1,\ldots,N-1$; in particular, if $N$ is even, then $b_{N/2}=0$. We shall use Lemma \ref{thm:compl_fol_closed} for expressing $\widehat{\mathcal{L}}(x,Z)$.
Then, by observing that
\[\int_{a_i}^{b_i} t \phi(t)\,dt=\phi(a_i)-\phi(b_i)\]
and that $p_1+p_2+\ldots+p_i=\Phi(b_i)$, we rewrite
\begin{align}
\widehat{\mathcal{L}}^i_{lb}(\mbox{E}[Z|\Omega_i],Z) &=\Phi(b_i)\mbox{E}[Z|\Omega_i]-\sum_{k=1}^i(\phi(a_k)-\phi(b_k))\\
&=\Phi(b_i)\mbox{E}[Z|\Omega_i]-(\phi(a_1)-\phi(b_i))\\
&=\Phi(b_i)\mbox{E}[Z|\Omega_i]+\phi(b_i)
\end{align}
To express the conditional expectation $\mbox{E}[Z|\Omega_i]$
we proceed as follows: let $p_i=\Phi(b_i)-\Phi(a_i)$; it then follows that
\[\mbox{E}[Z|\Omega_i]=\frac{\phi(a_i)-\phi(b_i)}{\Phi(b_i)-\Phi(a_i)}.\]
To solve the above system of non-linear equations we will exploit the close connections between finding a local minimum and solving a set of nonlinear equations. In particular, we will use the Gauss-Newton method to find a partition $\Omega_1,\ldots,\Omega_N$ of the support of $Z$ that minimises the following sum of squares
\[\sum_{k=2}^{N}(e_1-e_k)^2\]
This minimisation problem can be solved by software packages such as Mathematica (see \texttt{NMinimize}).
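For a small symmetric instance the system can also be solved without Gauss--Newton. The sketch below (our illustration) treats $N=4$ regions for a standard normal $Z$: by symmetry the only unknown is $b_1=-b_3$ (with $b_2=0$), and a bisection on $e_1(b_1)-e_2(b_1)$ recovers the boundaries and the maximum error reported in Table \ref{tab:parameters}.

```python
import math

def phi(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def comp_loss(x):
    # Closed form of L_hat(x, Z).
    return phi(x) + x * Phi(x)

def breakpoint_errors(b):
    # Errors e_1, e_2 for the partition (-inf, b], [b, 0], [0, -b], [-b, inf), b < 0.
    m1 = -phi(b) / Phi(b)                      # E[Z | Omega_1]
    m2 = (phi(b) - phi(0.0)) / (0.5 - Phi(b))  # E[Z | Omega_2]
    e1 = comp_loss(m1) - (Phi(b) * (m1 - b) + comp_loss(b))  # segment tangent at b
    e2 = comp_loss(m2) - (0.5 * m2 + comp_loss(0.0))         # segment tangent at 0
    return e1, e2

def solve_b1(lo=-3.0, hi=-0.2, iters=80):
    # Bisection on e1(b) - e2(b); the sign differs at the two endpoints.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        e1, e2 = breakpoint_errors(mid)
        lo, hi = (mid, hi) if e1 - e2 < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

`solve_b1()` returns $b_1\approx-0.886942$, with common breakpoint error $\approx 0.0339052$, matching the five-segment row of the table.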
\subsubsection{Numerical examples}
The classical Jensen's bound for the complementary first order loss function of a standard Normal random variable $Z$ is shown in Fig. \ref{fig:piecewise-2}. This is obtained by considering a degenerate partition of the support of $Z$ comprising only a single region $\Omega_1=(-\infty,\infty)$. In practice, we simply replace $Z$ by its expected value, i.e. zero. Therefore we simply have $\widehat{\mathcal{L}}_{lb}(x,Z)=\max(x,0)$. The maximum error of this piecewise linear approximation occurs at $x=0$ and is equal to $1/\sqrt{2\pi}$.
\begin{figure}[h!]
\centering
\includegraphics[type=eps,ext=.eps,read=.eps,width=0.8\columnwidth]{piecewise-2}
\caption{Classical Jensen's bound for $\widehat{\mathcal{L}}(x,Z)$}
\label{fig:piecewise-2}
\end{figure}
If we split the support of $Z$ into four regions (Fig. \ref{fig:piecewise-5}), the solution of the system of nonlinear equations prescribes splitting $\Omega$ at $b_1=-0.886942$, $b_2=0$ and $b_3=0.886942$. The maximum error is 0.0339052 and it is observed at $x\in\{\pm1.43535, \pm0.415223\}$.
\begin{figure}[h!]
\centering
\includegraphics[type=eps,ext=.eps,read=.eps,width=0.8\columnwidth]{piecewise-5}
\caption{Five-segment piecewise Jensen's bound for $\widehat{\mathcal{L}}(x,Z)$}
\label{fig:piecewise-5}
\end{figure}
In Table \ref{tab:parameters} we report parameters of $\widehat{\mathcal{L}}_{lb}(x,Z)$ with up to eleven segments. In Fig. \ref{fig:error} we present the approximation error of $\widehat{\mathcal{L}}_{lb}(x,Z)$ with up to eleven segments.
\begin{figure}[h!]
\centering
\includegraphics[type=eps,ext=.eps,read=.eps,width=0.8\columnwidth]{error}
\caption{approximation error of $\widehat{\mathcal{L}}_{lb}(x,Z)$ with up to eleven segments}
\label{fig:error}
\end{figure}
\begin{landscape}
\begin{table}[htdp]
\tiny
\begin{center}
\begin{tabular}{l|l|lllllllllll}
&&\multicolumn{11}{c}{Piecewise linear approximation parameters}\\
Segments&Error&$i$&1&2&3&4&5&6&7&8&9&10\\
\hline
\multirow{4}{*}{2}&
\multirow{4}{*}{0.398942}
&$b_i$&$\infty$\\
&&$p_i$&1\\
&&$\mbox{E}[\omega|\Omega_i]$&0\\
\hline
\multirow{4}{*}{3}&
\multirow{4}{*}{0.120656}
&$b_i$&0&$\infty$\\
&&$p_i$&0.5&0.5\\
&&$\mbox{E}[\omega|\Omega_i]$&$-0.797885$&$0.797885$\\
\hline
\multirow{4}{*}{4}&
\multirow{4}{*}{0.0578441}
&$b_i$&$-0.559725$&$0.559725$&$\infty$\\
&&$p_i$&0.287833&0.424333&0.287833\\
&&$\mbox{E}[\omega|\Omega_i]$&$-1.18505$&$0$&$1.18505$\\
\hline
\multirow{4}{*}{5}&
\multirow{4}{*}{0.0339052}
&$b_i$&$-0.886942$&$0$&$0.886942$&$\infty$\\
&&$p_i$&$0.187555$&$0.312445$&$0.312445$&$0.187555$\\
&&$\mbox{E}[\omega|\Omega_i]$&$-1.43535$&$-0.415223$&$0.415223$&$1.43535$\\
\hline
\multirow{4}{*}{6}&
\multirow{4}{*}{0.0222709}
&$b_i$&$-1.11507$&$-0.33895$&$0.33895$&$1.11507$&$\infty$\\
&&$p_i$&$0.132411$&$0.234913$&$0.265353$&$0.234913$&$0.132411$\\
&&$\mbox{E}[\omega|\Omega_i]$&$-1.61805$&$-0.691424$&$0$&$0.691424$&$1.61805$\\
\hline
\multirow{4}{*}{7}&
\multirow{4}{*}{0.0157461}
&$b_i$&$-1.28855$&$-0.579834$&$0$&$0.579834$&$1.28855$&$\infty$\\
&&$p_i$&$0.0987769$&$0.182236$&$0.218987$&$0.218987$&$0.182236$&$0.0987769$\\
&&$\mbox{E}[\omega|\Omega_i]$&$-1.7608$&$-0.896011$&$-0.281889$&$0.281889$&$0.896011$&$1.7608$\\
\hline
\multirow{4}{*}{8}&
\multirow{4}{*}{0.0117218}
&$b_i$&$-1.42763$&$-0.765185$&$-0.244223$&$0.244223$&$0.765185$&$1.42763$&$\infty$\\
&&$p_i$&$0.0766989$&$0.145382$&$0.181448$&$0.192942$&$0.181448$&$0.145382$&$0.0766989$\\
&&$\mbox{E}[\omega|\Omega_i]$&$-1.87735$&$-1.05723$&$-0.493405$&$0$&$0.493405$&$1.05723$&$1.87735$\\
\hline
\multirow{4}{*}{9}&
\multirow{4}{*}{0.00906529}
&$b_i$&$-1.54317$&$-0.914924$&$-0.433939$&$0$&$0.433939$&$0.914924$&$1.54317$&$\infty$\\
&&$p_i$&$0.0613946$&$0.118721$&$0.152051$&$0.167834$&$0.167834$&$0.152051$&$0.118721$&$0.0613946$\\
&&$\mbox{E}[\omega|\Omega_i]$&$-1.97547$&$-1.18953$&$-0.661552$&$-0.213587$&$0.213587$&$0.661552$&$1.18953$&$1.97547$\\
\hline
\multirow{4}{*}{10}&
\multirow{4}{*}{0.00721992}
&$b_i$&$-1.64166$&$-1.03998$&$-0.58826$&$-0.19112$&$0.19112$&$0.58826$&$1.03998$&$1.64166$&$\infty$\\
&&$p_i$&$0.0503306$&$0.0988444$&$0.129004$&$0.146037$&$0.151568$&$0.146037$&$0.129004$&$0.0988444$&$0.0503306$\\
&&$\mbox{E}[\omega|\Omega_i]$&$-2.05996$&$-1.30127$&$-0.8004$&$-0.384597$&$0.$&$0.384597$&$0.8004$&$1.30127$&$2.05996$\\
\hline
\multirow{4}{*}{11}&
\multirow{4}{*}{0.00588597}
&$b_i$&$-1.72725$&$-1.14697$&$-0.717801$&$-0.347462$&$0.$&$0.347462$&$0.717801$&$1.14697$&$1.72725$&$\infty$\\
&&$p_i$&$0.0420611$&$0.0836356$&$0.110743$&$0.127682$&$0.135878$&$0.135878$&$0.127682$&$0.110743$&$0.0836356$&$0.0420611$\\
&&$\mbox{E}[\omega|\Omega_i]$&$-2.13399$&$-1.39768$&$-0.9182$&$-0.526575$&$-0.17199$&$0.17199$&$0.526575$&$0.9182$&$1.39768$&$2.13399$
\end{tabular}
\end{center}
\caption{parameters of $\widehat{\mathcal{L}}_{lb}(x,Z)$ with up to eleven segments}
\label{tab:parameters}
\end{table}%
\end{landscape}
In Fig. \ref{fig:piecewise-5-scaled} we exploited Lemma \ref{lem:folf_std_norm} to obtain
the five-segment piecewise Jensen's bound for $\widehat{\mathcal{L}}(x,\zeta)$, where $\zeta$ is a normally distributed random variable with mean $\mu=20$ and standard deviation $\sigma=5$. The maximum error is $0.0339052\,\sigma$ and it is observed at $x\in\{\sigma(\pm1.43535)+\mu, \sigma(\pm0.415223)+\mu\}$.
\begin{figure}[h!]
\centering
\includegraphics[type=eps,ext=.eps,read=.eps,width=0.8\columnwidth]{piecewise-5-scaled}
\caption{five-segment piecewise Jensen's bound for $\widehat{\mathcal{L}}(x,\zeta)$, where $\mu=20$ and $\sigma=5$}
\label{fig:piecewise-5-scaled}
\end{figure}
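The scaling behind Fig. \ref{fig:piecewise-5-scaled} is easy to verify numerically. The sketch below is illustrative Python, not part of the paper; it assumes Lemma \ref{lem:folf_std_norm} takes the usual form $\widehat{\mathcal{L}}(x,\zeta)=\sigma\,\widehat{\mathcal{L}}((x-\mu)/\sigma,Z)$ and checks it for $\mu=20$, $\sigma=5$ by comparing the closed-form standard normal loss against direct numerical integration. The same identity is what makes the tabulated errors scale with $\sigma$ and stay independent of $\mu$.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma = 20.0, 5.0

def std_loss(x):
    # complementary loss of a standard normal in closed form
    return x * norm.cdf(x) + norm.pdf(x)

def loss_numeric(x):
    # E[max(x - zeta, 0)] for zeta ~ N(mu, sigma^2), by direct integration
    return quad(lambda t: (x - t) * norm.pdf(t, mu, sigma),
                mu - 12 * sigma, x)[0]

x = 22.0
scaled = sigma * std_loss((x - mu) / sigma)  # assumed form of the lemma
```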
\section{An approximate piecewise linear upper bound for the standard normal first order loss function}\label{sec:ub}
In this section we introduce a simple bounding technique that exploits convexity of the (complementary) first order loss function to derive a piecewise linear upper bound.
\subsection{A piecewise linear upper bound}
Without loss of generality we introduce the bound for the complementary first order loss function. Consider a random variable $\omega$ with support $\Omega$. From Lemma \ref{lem:convexity_loss_function}, $\widehat{\mathcal{L}}(x,\omega)$ is convex in $x$ regardless of the distribution of $\omega$. Given an interval $[a,b]\subset\mathbb{R}$, it is possible to construct an upper bound by exploiting the very definition of convexity, that is, by constructing the straight line $\widehat{\mathcal{L}}_{ub}(x,\omega)$ through the two points $(a,\widehat{\mathcal{L}}(a,\omega))$ and $(b,\widehat{\mathcal{L}}(b,\omega))$. The slope ($\alpha$) and intercept ($\beta$) of this line are easily computed:
\[\alpha=\frac{\widehat{\mathcal{L}}(b,\omega)-\widehat{\mathcal{L}}(a,\omega)}{b-a},~~~~~\beta=\frac{b \widehat{\mathcal{L}}(a,\omega)- a \widehat{\mathcal{L}}(b,\omega)}{b-a}.\]
The upper bound is then
\begin{align}
\widehat{\mathcal{L}}_{ub}(x,\omega)&=\alpha x+\beta&a\leq x\leq b\\
&=\widehat{\mathcal{L}}(a,\omega) \frac{b - x}{b-a}+\widehat{\mathcal{L}}(b,\omega) \frac{x - a}{b-a}&a\leq x\leq b
\end{align}
We can improve the quality of this bound by partitioning the domain $\mathbb{R}$ of $\widehat{\mathcal{L}}(x,\omega)$ into $N$ disjoint regions $\mathcal{D}_i=[a_i,b_i]$, $i=1,\ldots,N$. The selected regions must all be compact and adjacent. Because of the convexity of $\widehat{\mathcal{L}}(x,\omega)$ the bound can then be applied to each of these regions separately.
However, since $\widehat{\mathcal{L}}(x,\omega)$ is defined over $\mathbb{R}$, it is not possible to cover the whole domain with compact regions. We must therefore add two extreme regions $\mathcal{D}_0=[-\infty,a_1]$ and $\mathcal{D}_{N+1}=[b_N,\infty]$ to ensure that the function obtained is indeed an upper bound for every $x\in\mathbb{R}$. By noting that
\[
\lim_{x\rightarrow-\infty}\widehat{\mathcal{L}}(x,\omega)=0~~~\mbox{and}~~~\lim_{x\rightarrow\infty}\bigl(\widehat{\mathcal{L}}(x,\omega)-x\bigr)=-\mbox{E}[\omega]
\]
it is easy to derive equations for the lines associated with these two extra regions. In particular, we associate with $\mathcal{D}_0$ a horizontal line with slope $\alpha=0$ and intercept $\beta=\widehat{\mathcal{L}}(a_1,\omega)$, and with $\mathcal{D}_{N+1}$ a line with slope $\alpha=1$ and intercept $\beta=\widehat{\mathcal{L}}(b_N,\omega)-b_N$.
Also in this case, we must then decide how to partition the domain $\mathbb{R}$ into $N+2$ intervals $\mathcal{D}_0,\ldots,\mathcal{D}_{N+1}$ to obtain a tight bound. Once more, the optimal partitioning strategy will depend on the probability distribution of the random variable $\omega$.
\subsection{Minimax piecewise linear upper bound}
We discuss a minimax strategy for generating a piecewise linear upper bound of the (complementary) first order loss function $\widehat{\mathcal{L}}(x,\omega)$. In this strategy, we partition the domain $\mathbb{R}$ of $x$ into a predefined number $N+2$ of regions in order to minimise the maximum approximation error. Note that, since this domain is not compact, at least two regions are needed to derive a piecewise linear upper bound.
Consider a random variable $\omega$ and the associated complementary first order loss function
\[\widehat{\mathcal{L}}(x,\omega)=\mbox{E}[\max(x-\omega,0)];\]
assume that the domain $\mathbb{R}$ of $x$ is partitioned into $N+2$ disjoint adjacent subregions $\mathcal{D}_0,\ldots,\mathcal{D}_{N+1}$, where $\mathcal{D}_0=[-\infty,a_1]$, $\mathcal{D}_i=[a_i,b_i]$, for $i=1,\ldots,N$, and $\mathcal{D}_{N+1}=[b_N,\infty]$, and consider the following piecewise linear upper bound
\[
\widehat{\mathcal{L}}_{ub}(x,\omega)=\left\{
\begin{array}{ll}
\widehat{\mathcal{L}}(a_1,\omega)&x\in\mathcal{D}_0\\
\vdots\\
\widehat{\mathcal{L}}(a_i,\omega) \frac{b_i - x}{b_i-a_i}+\widehat{\mathcal{L}}(b_i,\omega) \frac{x - a_i}{b_i-a_i}&x\in\mathcal{D}_i\\
\vdots\\
x+\widehat{\mathcal{L}}(b_N,\omega)-b_N&x\in\mathcal{D}_{N+1}
\end{array}
\right.
\]
Let $\widehat{\mathcal{L}}^i_{ub}(x,\omega)$ be the linear segment of $\widehat{\mathcal{L}}_{ub}(x,\omega)$ over $\mathcal{D}_i$, for $i=0,\ldots,N+1$.
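The piecewise definition above is straightforward to implement. The following sketch is illustrative Python (not the authors' code); it builds $\widehat{\mathcal{L}}_{ub}$ for a standard normal, with breakpoints borrowed from the five-segment example discussed later, and checks numerically that the construction dominates $\widehat{\mathcal{L}}$.

```python
import numpy as np
from scipy.stats import norm

def loss(x):
    # complementary first order loss of a standard normal
    return x * norm.cdf(x) + norm.pdf(x)

def ub(x, a):
    # piecewise linear upper bound; a holds the breakpoints a_1 < ... < a_{N+1}
    x = np.atleast_1d(x).astype(float)
    y = np.empty_like(x)
    y[x <= a[0]] = loss(a[0])                 # D_0: horizontal line
    tail = x >= a[-1]
    y[tail] = x[tail] + loss(a[-1]) - a[-1]   # D_{N+1}: slope-1 line
    for lo, hi in zip(a[:-1], a[1:]):         # interior chords
        m = (x >= lo) & (x <= hi)
        y[m] = (loss(lo) * (hi - x[m]) + loss(hi) * (x[m] - lo)) / (hi - lo)
    return y

# breakpoints of the five-segment upper bound discussed in the examples
a = np.array([-1.43535, -0.415223, 0.415223, 1.43535])
xs = np.linspace(-8.0, 8.0, 40001)
gap = ub(xs, a) - loss(xs)  # nonnegative everywhere if the bound is valid
```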
\begin{lem}\label{lem:max_error_ub}
Consider $\widehat{\mathcal{L}}^i_{ub}(x,\omega)$, where $i=1,\ldots,N$; the maximum approximation error between $\widehat{\mathcal{L}}^i_{ub}(x,\omega)$ and $\widehat{\mathcal{L}}(x,\omega)$ is attained at \[\bar{x}_i=G^{-1}_\omega\left(\frac{\widehat{\mathcal{L}}(b_i,\omega)-\widehat{\mathcal{L}}(a_i,\omega)}{b_i-a_i}\right).\]
\end{lem}
\begin{proof}
The idea here is to derive a line that is tangent to $\widehat{\mathcal{L}}(x,\omega)$ and whose slope equals that of the $i$-th linear segment of $\widehat{\mathcal{L}}_{ub}(x,\omega)$. We have already discussed in Lemma \ref{lem:tangent} that the equation of the tangent to $\widehat{\mathcal{L}}(x,\omega)$ at a given point $\bar{x}_i$ is
\[y=\widehat{\mathcal{L}}'(\bar{x}_i,\omega)(x-\bar{x}_i)+\widehat{\mathcal{L}}(\bar{x}_i,\omega),\]
that is
\[y=G_{\omega}(\bar{x}_i)(x-\bar{x}_i)+\int_{-\infty}^{\bar{x}_i} G_{\omega}(t)\;dt.\]
The slope $G_{\omega}(\bar{x}_i)$ only depends on $\bar{x}_i$. To find a tangent with a slope equal to that of the $i$-th linear segment of $\widehat{\mathcal{L}}_{ub}(x,\omega)$, we simply let \[G_{\omega}(\bar{x}_i)=\frac{\widehat{\mathcal{L}}(b_i,\omega)-\widehat{\mathcal{L}}(a_i,\omega)}{b_i-a_i}\] and invert the cumulative distribution function.
\end{proof}
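Lemma \ref{lem:max_error_ub} is easy to check numerically. The sketch below is illustrative Python, not part of the paper; it takes one chord segment of the five-segment upper bound for a standard normal, computes $\bar{x}_i$ via the inverse cumulative distribution function, and compares it with a brute-force maximisation of the error over the segment.

```python
import numpy as np
from scipy.stats import norm

def loss(x):
    # complementary first order loss of a standard normal; loss'(x) = Phi(x)
    return x * norm.cdf(x) + norm.pdf(x)

# one compact segment of the five-segment upper bound discussed below
a_i, b_i = 0.415223, 1.43535
slope = (loss(b_i) - loss(a_i)) / (b_i - a_i)
x_bar = norm.ppf(slope)  # Lemma: G^{-1} of the chord slope

# brute-force check: maximise (chord - loss) over the segment
xs = np.linspace(a_i, b_i, 200001)
chord = loss(a_i) + slope * (xs - a_i)
x_star = xs[np.argmax(chord - loss(xs))]
```

Note that $\bar{x}_i$ lands on the lower-bound breakpoint $0.886942$, in line with the duality between the two bounds discussed in the proof of Theorem \ref{thm:approx_err_equal_ub}.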
Note that the maximum approximation error for the linear segment over $\mathcal{D}_0$ is $\widehat{\mathcal{L}}(a_1,\omega)$ and that, for a zero-mean random variable such as the standard normal considered below, the maximum approximation error for the linear segment over $\mathcal{D}_{N+1}$ is $\widehat{\mathcal{L}}(b_N,\omega)-b_N$. This can be inferred from the fact that $\widehat{\mathcal{L}}(x,\omega)$ monotonically approaches $0$ for $x\rightarrow-\infty$ and $x-\mbox{E}[\omega]$ for $x\rightarrow\infty$.
\begin{thm}\label{thm:approx_err_equal_ub}
$\mathcal{D}_0,\ldots,\mathcal{D}_{N+1}$ is an optimal partition under a minimax strategy, if and only if the maximum approximation error between $\widehat{\mathcal{L}}(x,\omega)$ and each linear segment of $\widehat{\mathcal{L}}_{ub}(x,\omega)$ is the same.
\end{thm}
\begin{proof}
The proof follows from Lemma \ref{lem:max_error_ub} and Theorem \ref{thm:approx_err_equal}. The key insight is the following. In Theorem \ref{thm:approx_err_equal} we showed that the approximation errors at the breakpoints of the piecewise linear lower bound are all equal to each other and to the maximum approximation error; furthermore, in Lemma \ref{lem:tangent} we showed that the $i$-th linear segment of this lower bound agrees with the original function at the point $b_i$, where $\Omega_i=[a_i,b_i]$ is the $i$-th region of the partition of the support of $Z$, for $i=1,\ldots,N$. Since the first order loss function is convex and we know the maximum approximation error, by shifting the piecewise linear lower bound up by this error we immediately obtain a piecewise linear upper bound comprising $N+1$ segments. This upper bound agrees with the original function at $\mbox{E}[Z|\Omega_i]$, for $i=1,\ldots,N$, and its maximum approximation error is attained at the points where the lower bound was tangent to the original function, that is $a_1,b_1,b_2,\ldots,b_N$. By an argument similar to the one developed for Theorem \ref{thm:approx_err_equal}, one shows that moving any of the points $\mbox{E}[Z|\Omega_i]$ can only increase the maximum approximation error.
\end{proof}
By using this result it is possible to derive a set of equations that can be solved for computing an optimal partitioning.
Let us consider the maximum approximation error $e_i$ associated with the $i$-th linear segment of $\widehat{\mathcal{L}}_{ub}(x,\omega)$; this can be expressed as
\[e_i=\widehat{\mathcal{L}}^{i}_{ub}(\bar{x}_i,\omega)-\widehat{\mathcal{L}}(\bar{x}_i,\omega),\]
where $i=1,\ldots,N$; furthermore $e_0=\widehat{\mathcal{L}}(a_1,\omega)$ and $e_{N+1}=\widehat{\mathcal{L}}(b_N,\omega)-b_N$.
Since we have $N+2$ segments to check, we must solve a system comprising the following $N+1$ equations
\[e_0=e_i~~~\mbox{for }i=1,\ldots,N+1\]
under the following restrictions
\[
\begin{array}{lll}
a_i&\leq b_i&\mbox{for }i=1,\ldots,N\\
b_i&=a_{i+1}&\mbox{for }i=1,\ldots,N-1
\end{array}
\]
The system involves $N+1$ variables, each of which identifies the boundary between two disjoint adjacent regions $\mathcal{D}_i$ and $\mathcal{D}_{i+1}$.
\begin{thm}\label{thm:approx_err_symmetric_ub}
Assume that the probability density function of $\omega$ is symmetric about a mean value $\tilde{\omega}$. Then, under a minimax strategy, if $\mathcal{D}_0,\ldots,\mathcal{D}_{N+1}$ is an optimal partition of the domain, breakpoints will be symmetric about the mean value $\tilde{\omega}$.
\end{thm}
\begin{proof}
This follows from Lemma \ref{thm:fol_automorphism} and Theorem \ref{thm:approx_err_equal_ub}.
\end{proof}
In this case, by exploiting the symmetry of the piecewise linear approximation, an optimal partitioning can be derived by solving a smaller system comprising $\lceil (N+1)/2 \rceil$ equations, where $N+2$ is the number of regions $\mathcal{D}_i$ and $\lceil x \rceil$ denotes $x$ rounded up to the nearest integer.
As in the case of Jensen's bound, the equations in the above system are nonlinear and do not admit a closed form solution in the general case. For the sake of completeness, we next briefly discuss how to derive the system of nonlinear equations for the case of a standard normal random variable $Z$. However, one should note that in practice, by exploiting the properties illustrated in the proof of Theorem \ref{thm:approx_err_equal_ub}, one does not need to solve a new system of nonlinear equations to derive the piecewise linear upper bound. All the information needed, i.e. the maximum approximation error and the locations of the breakpoints, is immediately available as soon as the system of equations presented for the piecewise linear lower bound has been solved (Table \ref{tab:parameters}).
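The shifting argument in the proof of Theorem \ref{thm:approx_err_equal_ub} can be verified directly: adding the maximum error to the optimal four-region lower bound of Table \ref{tab:parameters} yields a valid upper bound that touches the loss function. The following Python sketch (illustrative, not the authors' code) performs this check for a standard normal.

```python
import numpy as np
from scipy.stats import norm

def loss(x):
    # complementary first order loss of a standard normal
    return x * norm.cdf(x) + norm.pdf(x)

b = np.array([-0.886942, 0.0, 0.886942])  # optimal lower-bound breakpoints
e = 0.0339052                             # corresponding maximum error

def lb(x):
    # piecewise Jensen's lower bound for the partition with breakpoints b
    edges = np.concatenate(([-np.inf], b, [np.inf]))
    y = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        p = norm.cdf(hi) - norm.cdf(lo)
        m = (norm.pdf(lo) - norm.pdf(hi)) / p
        y += p * np.maximum(x - m, 0.0)
    return y

xs = np.linspace(-8.0, 8.0, 40001)
shifted = lb(xs) + e  # lower bound shifted up by its maximum error
```

The shifted function dominates $\widehat{\mathcal{L}}(x,Z)$ everywhere, touches it at $\pm1.43535$ and $\pm0.415223$, and its error at the old breakpoints (where the lower bound was tangent) equals $e$.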
The upper bound presented is closely related to a well-known inequality from stochastic programming, see e.g. \cite{citeulike:695971}, p. 168, \cite{citeulike:12355817}, p. 316, and \cite{citeulike:2516312}, pp. 291--293. As pointed out in \cite{citeulike:695971}, p. 168, the Edmundson--Madansky upper bound can be seen as a bound in which the original distribution is replaced by a two-point distribution while the problem itself is unchanged, or as a bound in which the distribution is left unchanged and the original function is replaced by a linear affine function represented by a straight line. The above discussion clearly demonstrates the dual nature of this upper bound.
\subsection{Normal distribution}
We next discuss the system of equations that leads to an optimal partitioning for the case of a standard Normal random variable $Z$. The resulting piecewise linear approximation is easily extended to the general case of a normally distributed variable $\zeta$ with mean $\mu$ and standard deviation $\sigma$ via Lemma \ref{lem:folf_std_norm}. For this second approximation, too, the lemma shows that the error is independent of $\mu$ and proportional to $\sigma$.
Consider a partitioning of the domain of $x$ in $\widehat{\mathcal{L}}(x,\omega)$ into $N+2$ adjacent regions $\mathcal{D}_i=[a_i,b_i]$, where $i=0,\ldots,N+1$. From Theorem \ref{thm:approx_err_symmetric_ub}, if $N$ is odd then $b_{\lceil N/2\rceil}=0$ and $b_{i}=-b_{N+1-i}$; if $N$ is even then $b_{i}=-b_{N+1-i}$. Also in this case we use Lemma \ref{thm:compl_fol_closed} to express $\widehat{\mathcal{L}}(x,Z)$, and we exploit the close connection between finding a local minimum and solving a set of nonlinear equations. We therefore use the Gauss-Newton method to minimise the following sum of squares:
\[\sum_{k=1}^{N+1}(e_0-e_k)^2\]
This minimisation problem can be solved by software packages such as Mathematica (see \texttt{NMinimize}).
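For a symmetric five-segment bound the system collapses to two unknowns and can be sketched numerically as follows. This is illustrative Python, not the authors' code: it uses SciPy's \texttt{least\_squares} in place of Gauss-Newton, writes the breakpoints as $\pm u$, $\pm v$, and equalises the errors of the two distinct segment types against the error $e_0$ of the unbounded segments (which coincide by symmetry).

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import least_squares

def loss(x):
    # complementary first order loss of a standard normal
    return x * norm.cdf(x) + norm.pdf(x)

def chord_err(a, b):
    # maximum error of the chord of loss over [a, b];
    # by the lemma above it is attained at ppf of the chord slope
    s = (loss(b) - loss(a)) / (b - a)
    x = norm.ppf(s)
    return loss(a) + s * (x - a) - loss(x)

def residuals(z):
    u, v = z            # breakpoints -u < -v < v < u by symmetry
    e0 = loss(-u)       # error of the two unbounded segments
    return [chord_err(-u, -v) - e0, chord_err(-v, v) - e0]

sol = least_squares(residuals, x0=[1.5, 0.5])
```

The solution recovers the breakpoints $\pm1.43535$, $\pm0.415223$ and the maximum error $0.0339052$ quoted in the examples below.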
\subsubsection{Numerical examples}
A two-segment piecewise linear upper bound for the complementary first order loss function of a standard Normal random variable $Z$ is shown in Fig. \ref{fig:piecewise-2-ub}. This bound is obtained, under the minimax criterion previously described, by considering a single breakpoint in the domain, i.e. $x=0$. The maximum error of this piecewise linear approximation occurs for $x\rightarrow\pm\infty$ and it is equal to $1/\sqrt{2\pi}$.
\begin{figure}[h!]
\centering
\includegraphics[type=eps,ext=.eps,read=.eps,width=0.8\columnwidth]{piecewise-2-ub}
\caption{two-segment piecewise linear upper bound for $\widehat{\mathcal{L}}(x,Z)$}
\label{fig:piecewise-2-ub}
\end{figure}
It is easy to observe that this upper bound can be obtained by adding to the classical Jensen's lower bound presented in Fig. \ref{fig:piecewise-2} a constant value equal to its maximum approximation error, i.e. $1/\sqrt{2\pi}$.
We next present a more interesting case, in which the domain has been split into five regions (Fig. \ref{fig:piecewise-5-ub}).
\begin{figure}[h!]
\centering
\includegraphics[type=eps,ext=.eps,read=.eps,width=0.8\columnwidth]{piecewise-5-ub}
\caption{five-segment piecewise linear upper bound for $\widehat{\mathcal{L}}(x,Z)$}
\label{fig:piecewise-5-ub}
\end{figure}
Breakpoints are positioned at $x\in\{\pm1.43535,\pm0.415223\}$. These were the locations at which the maximum error, i.e. 0.0339052, was observed in Fig. \ref{fig:piecewise-5}. Also in this case, the five-segment piecewise linear upper bound can be obtained by adding to the five-segment piecewise Jensen's lower bound a value equal to its maximum approximation error.
Finally, in Fig. \ref{fig:piecewise-5-scaled-ub}, we show an example in which we exploited Lemma \ref{lem:folf_std_norm} to obtain, from the approximation presented in Fig. \ref{fig:piecewise-5-ub}, the five-segment piecewise linear upper bound for $\widehat{\mathcal{L}}(x,\zeta)$, where $\zeta$ is a normally distributed random variable with mean $\mu=20$ and standard deviation $\sigma=5$. The maximum error is $0.0339052\,\sigma$ and it is observed as $x\rightarrow\pm\infty$ and at $x\in\{\sigma(\pm0.886942)+\mu, \mu\}$.
\begin{figure}[h!]
\centering
\includegraphics[type=eps,ext=.eps,read=.eps,width=0.8\columnwidth]{piecewise-5-scaled-ub}
\caption{five-segment piecewise linear upper bound for $\widehat{\mathcal{L}}(x,\zeta)$, where $\mu=20$ and $\sigma=5$}
\label{fig:piecewise-5-scaled-ub}
\end{figure}
\section{Conclusions}
We summarised a number of distribution independent results for the first order loss function and its complementary function. We then focused on symmetric and, in particular, normal distributions, for which we discussed ad-hoc results. To the best of our knowledge, a comprehensive analysis of results concerning the first order loss function was missing in the literature; filling this gap is the first contribution of this work. Building on these results, we developed effective piecewise linear approximation strategies based on a minimax framework; this is the second contribution of our work. More specifically, we developed piecewise linear upper and lower bounds for the first order loss function and its complementary function. These bounds rely on constant parameters that are independent of the mean and standard deviation of the normal distribution considered. We discussed how to compute optimal parameters that minimise the maximum approximation error, and we provided a table of pre-computed optimal parameters for piecewise bounds with up to eleven segments. These bounds can be easily embedded in existing MILP models.
\bibliographystyle{plain}
\title{Minors of a skew symmetric matrix: A combinatorial approach}
\begin{abstract}
  We use Knuth's combinatorial approach to Pfaffians to reprove and clarify a century-old formula, due to Brill. It expresses arbitrary minors of a skew symmetric matrix in terms of Pfaffians.
\end{abstract}
\section{Brill's formula}
\noindent
In a paper \cite{DEK96} from 1996, Knuth took a combinatorial approach
to Pfaffians. It was immediately noticed that this approach
facilitates generalizations and simplified proofs of several known
identities involving Pfaffians; see for example Hamel \cite{AMH01}.
In this short note we apply the same approach to reprove a formula
that expresses arbitrary minors of a skew symmetric matrix in terms of
Pfaffians. The formula, which we here state as Theorem 1, first
appeared in a 1904 paper by Brill \cite{JBr04}. Our goal here is not
merely to give a short modern proof but also to clarify two aspects of
the formula: (1) To compute minors with Brill's formula
\cite[Eqn.~(1)]{JBr04} one needs to apply a sign convention explained
in a footnote; we integrate this sign into the formula. (2) As Brill
states right at the top of his paper, one can reduce to the case of
minors with disjoint row and column sets, and his formula deals only
with that case. We build that reduction into our proof and arrive at a
formula that also holds for minors with overlapping row and column
sets.
Bibliometric data suggests that Brill's formula may have gone largely
unnoticed. For us it proved key to clarifying a computation
in commutative algebra \cite{CVW-4}.
\subsection{Pfaffians following Knuth}
Let $T = (t_{ij})$ be an $n \times n$ skew symmetric matrix with
entries in some commutative ring. Assume that $T$ has zeros on the
diagonal; this is, of course, automatic if the characteristic of the
ring is not $2$. Set $\pff{ij} = t_{ij}$ for
$i,j \in \set{1,\ldots,n}$ and extend $\mathcal{P}$ to a function on
words in letters from the set $\set{1,\ldots,n}$ as follows:
\begin{equation*}
\pff{i_1\ldots i_{m}} \:=\:
\begin{cases}
0 & \text{ if $m$ is odd} \\
\sum \sgn{i_1\ldots i_{2k}}{j_1 \ldots j_{2k}}\pff{j_1j_2} \cdots
\pff{j_{2k-1}j_{2k}} & \text{ if $m = 2k$ is even}
\end{cases}
\end{equation*}
where the sum is over all partitions of $\set{i_1,\ldots,i_{2k}}$ into
$k$ subsets of cardinality $2$. The order of the two elements in each
subset is irrelevant as the difference in sign
$\pff{jj'} = - \pff{j'j}$ is offset by a change of sign of the
permutation; see \cite[Section~0]{DEK96}. The value of $\mathcal{P}$ on the
empty word is by convention $1$, and the value of $\mathcal{P}$ on a
word with a repeated letter is $0$. The latter is a convention in
characteristic $2$ and otherwise automatic.
For subsets $R$ and $S$ of $\set{1,\ldots,n}$ we write $T[R;S]$ for
the submatrix of $T$ obtained by taking the rows indexed by $R$ and
the columns indexed by $S$. The function $\mathcal{P}$ computes the
Pfaffians of skew symmetric submatrices of $T$. Indeed, for a subset
$R \subseteq \set{1,\ldots,n}$ with elements $r_1 < \cdots < r_m$ one
has
\begin{equation*}
\Pf{T[R;R]} \:=\: \pff{r_1\ldots r_{m}} \:.
\end{equation*}
With this approach to Pfaffians, Knuth~\cite{DEK96} gives elegant
proofs for several classic formulas, generalizing them and clarifying
sign conventions in the process through the introduction of
permutation signs. For a word $\alpha$ in letters from
$\set{1,\ldots,n}$ and $a \in \alpha$ the following variation on the
classic Laplacian expansion is obtained as a special case of a formula
ascribed to Tanner \cite{HWT78},
\begin{equation}
\label{eq:DEK}
\pff{\alpha} \:=\: \sum_{x\in\alpha}
\sgn{\alpha}{ax(\alpha\setminus ax)}
\pff{ax}\pff{\alpha\setminus ax} \:;
\end{equation}
see \cite[Eqn.~(2.0)]{DEK96}. The introduction of the permutation
sign is what facilitates our statement and proof of Brill's formula.
\begin{theorem}
Let $T$ be an $n \times n$ skew symmetric matrix. Let $R$ and $S$ be
subsets of the set $\set{1,\ldots,n}$ with elements
$r_1<\cdots <r_m$ and $s_1<\cdots <s_m$. With
\begin{equation*}
\rho \:=\: r_1\ldots r_m \qtext{and}
\sigma = s_1\ldots s_m
\end{equation*}
the next equality holds:
\begin{equation*}
\det(T[R;S])
\:=\: \Sign{\left\lfloor \frac{m}{2} \right\rfloor}
\sum_{k = 0}^{\left\lfloor
\frac{m}{2} \right\rfloor}
\mspace{6mu}\Sign{k}\mspace{-6mu}\sum_{\substack{|\omega| = 2k \\
\omega \subseteq \rho}}
\sgn{\rho}{\omega(\rho\setminus\omega)}\pff{\omega}
\pff{(\rho\setminus\omega)\sigma}\:.
\end{equation*}
\end{theorem}
\noindent Notice that only subwords $\omega$ of $\rho$ that contain
$\rho \cap \sigma$ contribute to the sum, otherwise the word
$(\rho\setminus\omega)\sigma$ has a repeated letter. The quantity
$\pff{\omega}$ equals the Pfaffian
$\Pf{T[\set{\omega};\set{\omega}]}$, where $\set{\omega}$ is the subset of
$\set{1,\ldots,n}$ whose elements are the letters in
$\omega$. Similarly, for a word $\omega$ that contains $\rho \cap \sigma$,
the quantity $\pff{(\rho\setminus\omega)\sigma}$ equals
$\sgn{\alpha}{(\alpha\setminus\sigma)\sigma} \Pf{T[\set{\alpha};\set{\alpha}]}$
where the word $\alpha$ has the letters from $(\rho\setminus\omega)\sigma$
written in increasing order.
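Theorem 1 is easy to sanity-check by brute force. The Python sketch below is illustrative and not part of the paper: it implements $\mathcal{P}$ via the expansion along the first letter, implements the signed sum of the theorem, and compares the result with the determinant for a random integer skew symmetric matrix, including overlapping row and column sets.

```python
import itertools
import numpy as np

def sgn(alpha, beta):
    # sign of the permutation rearranging the word alpha into beta
    alpha = list(alpha)
    perm, used = [], [False] * len(alpha)
    for b in beta:
        i = next(i for i, a in enumerate(alpha) if a == b and not used[i])
        used[i] = True
        perm.append(i)
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

def P(word, T):
    # Knuth's Pfaffian function, expanded along the first letter
    if len(set(word)) < len(word):
        return 0  # repeated letter
    if len(word) == 0:
        return 1  # empty word
    if len(word) % 2 == 1:
        return 0  # odd length
    a, rest = word[0], word[1:]
    return sum((-1) ** k * T[a][x] * P(rest[:k] + rest[k + 1:], T)
               for k, x in enumerate(rest))

def brill(T, R, S):
    # det(T[R;S]) via the formula of Theorem 1
    m = len(R)
    rho, sigma = sorted(R), sorted(S)
    total = 0
    for k in range(m // 2 + 1):
        for omega in itertools.combinations(rho, 2 * k):
            rest = [r for r in rho if r not in omega]
            total += ((-1) ** k * sgn(rho, list(omega) + rest)
                      * P(list(omega), T) * P(rest + sigma, T))
    return (-1) ** (m // 2) * total

# random integer skew symmetric test matrix
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, (6, 6))
T = A - A.T
```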
\section{Our proof}
\noindent
For subsets $R$ and $S$ as in the statement set
\begin{equation*}
\dett{\rho}{\sigma} = \det(T[R;S]) \:.
\end{equation*}
For a word $\alpha$ in letters from $\set{1,\ldots,n}$ we write
$\overline{\alpha}$ for the word that has the letters from $\alpha$
written in increasing order. In the words $\rho$ and $\sigma$ and their
subwords the letters already appear in increasing order, but the bar
notation comes in handy for concatenated words such as $\rho\sigma$.
\subsection{Disjoint row and column sets}
We first assume that $\rho$ and $\sigma$ are disjoint subwords of
$1\ldots n$. Under this assumption
$\pff{(\rho\setminus\omega)\sigma} = \pff{\rho\sigma\setminus\omega}$
holds for every subword $\omega \subseteq \rho$, and in the balance of
the section we use the simpler notation
$\pff{\rho\sigma\setminus\omega}$. We proceed by induction on $m$. For
$m=1$ the formula holds as one has
$\det(T[\set{r_1};\set{s_1}]) = t_{r_1s_1} = \pff{r_1s_1}$. For
$m = 2$ expansion of the determinant along row $r_1$ yields
\begin{equation*}
\dett{\rho}{\sigma} = \pff{r_1s_1}\pff{r_2s_2} - \pff{r_1s_2}\pff{r_2s_1} \:.
\end{equation*}
With $\alpha = \rho\sigma$ and $a = r_1$ the formula \eqref{DEK}
reads
\begin{align*}
\pff{\rho\sigma}
& \:=\: \sum_{x\in\rho\sigma}
\sgn{\rho\sigma}{r_1x(\rho\sigma\setminus r_1x)}
\pff{r_1x}\pff{\rho\sigma\setminus r_1x}
\\
& \:=\: \pff{r_1r_2}\pff{s_1s_2} -
\pff{r_1s_1}\pff{r_2s_2} + \pff{r_1s_2}\pff{r_2s_1}
\\
& \:=\: \pff{\rho}\pff{\sigma} - \dett{\rho}{\sigma} \:,
\end{align*}
which can be rewritten in the desired form:
\begin{equation*}
\dett{\rho}{\sigma} \:=\: -( \pff{\rho\sigma} - \pff{\rho}\pff{\sigma} ) \:.
\end{equation*}
Before we move on to the induction step we record another consequence
of \eqref{DEK}:
\begin{equation}
\label{eq:rso}
\begin{aligned}
\pff{\rho\sigma\setminus\omega}
& \:=\: \sum_{r\in\rho\setminus r_1\omega}
\sgn{\rho\sigma\setminus\omega}{r_1r(\rho\sigma\setminus\omega r_1r)}
\pff{r_1r}\pff{\rho\sigma\setminus \omega r_1r} \\
& \hspace{4pc} {} + \sum_{s\in\sigma}
\sgn{\rho\sigma\setminus\omega}{r_1s(\rho\sigma\setminus\omega r_1s)}
\pff{r_1s}\pff{\rho\sigma\setminus \omega r_1s}
\\
& \:=\: \sum_{r\in\rho\setminus r_1\omega}
\sgn{\rho\setminus\omega}{r_1r(\rho\setminus\omega r_1r)}
\pff{r_1r}\pff{\rho\sigma\setminus \omega r_1r} \\
& \hspace{4pc} {} + \Sign{|\rho \setminus \omega| - 1}
\sum_{s\in\sigma} \sgn{\sigma}{s(\sigma\setminus s)}
\pff{r_1s}\pff{\rho\sigma\setminus \omega r_1s} \:.
\end{aligned}
\end{equation}
Now let $m \ge 2$ and $|\rho| = m+1 = |\sigma|$. In the next computation
the first equality is the expansion of the determinant
$\dett{\rho}{\sigma}$ along row $r_1$. The second equality follows from
the induction hypothesis, and the third is obtained by changing the
order of summation. The fourth equality follows from \eqref{rso}. The
fifth equality uses that $|\omega|$ is even and that
$|\rho\setminus \omega| -1 = m - 2k \equiv m \mod 2$ holds.
\begin{align*}
\dett{\rho}{\sigma}
& \:=\: \sum_{s\in\sigma}\sgn{\sigma}{s\sigma\setminus s}\pff{r_1s}
\,\dett{\rho\setminus r_1}{\sigma\setminus s}
\\
& \:=\: \sum_{s\in\sigma}\sgn{\sigma}{s\sigma\setminus s}\pff{r_1s} \\
& \hspace{4pc} \cdot \Biggl(
\Sign{\left\lfloor \frac{m}{2} \right\rfloor}
\sum_{k = 0}^{\left\lfloor \frac{m}{2} \right\rfloor}
\mspace{6mu}\Sign{k}\mspace{-6mu} \sum_{\substack{
|\omega| = 2k \\ \scriptscriptstyle \omega \subseteq \rho\setminus r_1}}
\mspace{-4mu}\sgn{\rho\setminus r_1}{\omega(\rho\setminus r_1\omega)}
\pff{\omega}\pff{\rho\sigma\setminus r_1\omega s}
\Biggr)
\\
& \:=\: \Sign{\left\lfloor \frac{m}{2} \right\rfloor}
\sum_{k =0}^{\left\lfloor \frac{m}{2} \right\rfloor}
\mspace{6mu}\Sign{k}\mspace{-6mu} \sum_{\substack{|\omega| = 2k \\
\scriptscriptstyle \omega \subseteq \rho\setminus r_1}}
\mspace{-4mu}\sgn{\rho\setminus r_1}{\omega(\rho\setminus r_1\omega)} \pff{\omega}
\\ & \hspace{13pc} \cdot \Bigl(
\sum_{s\in\sigma}\sgn{\sigma}{s\sigma\setminus s}
\pff{r_1s} \pff{\rho\sigma\setminus r_1\omega s}
\Bigr)
\\
& \:=\: \Sign{\left\lfloor \frac{m}{2} \right\rfloor}
\sum_{k =0}^{\left\lfloor \frac{m}{2} \right\rfloor}
\mspace{6mu}\Sign{k}\mspace{-6mu} \sum_{\substack{|\omega| = 2k \\
\scriptscriptstyle \omega \subseteq \rho\setminus r_1}}
\mspace{-4mu}\sgn{\rho\setminus r_1}{\omega(\rho\setminus r_1\omega)} \pff{\omega}
\\ & \hspace{2pc}
\cdot \Sign{|\rho\setminus\omega|-1}
\Big(\pff{\rho\sigma\setminus\omega}
\mspace{4mu} - \mspace{-8mu}\sum_{r\in\rho\setminus\omega r_1}
\mspace{-9mu}\sgn{\rho\setminus\omega}{r_1r(\rho\setminus\omega r_1r)}
\pff{r_1r}\pff{\rho\sigma\setminus\omega r_1r} \Big)
\\
& \:=\: \Sign{\left\lfloor \frac{m}{2} \right\rfloor + m}
\Biggl(
\sum_{k=0}^{\left\lfloor \frac{m}{2} \right\rfloor}
\mspace{6mu}\Sign{k}\mspace{-6mu} \sum_{\substack{
|\omega| = 2k \\ \scriptscriptstyle \omega \subseteq \rho\setminus r_1}}
\sgn{\rho}{\omega(\rho\setminus\omega)}
\pff{\omega} \pff{\rho\sigma\setminus\omega} -
\sum_{k=0}^{\left\lfloor
\frac{m}{2} \right\rfloor}
\Sign{k} \\
& \hspace{2pc} \cdot \mspace{-4mu} \sum_{\substack{|\omega| = 2k \\
\scriptscriptstyle \omega \subseteq \rho\setminus r_1}}
\mspace{-4mu}\sgn{\rho}{\omega(\rho\setminus \omega)}
\pff{\omega} \mspace{-4mu}
\sum_{r\in\rho\setminus\omega r_1}
\mspace{-9mu}\sgn{\rho\setminus\omega}{r_1r(\rho\setminus\omega r_1r)}
\pff{r_1r}\pff{\rho\sigma\setminus\omega r_1r}
\Biggr)
\\
& \:=\: \Sign{\left\lfloor \frac{m+1}{2} \right\rfloor}
\Bigg(
\sum_{k=0}^{\left\lfloor \frac{m}{2} \right\rfloor}
\mspace{6mu}\Sign{k}\mspace{-6mu} \sum_{\substack{
|\omega| = 2k \\ \scriptscriptstyle \omega \subseteq \rho\setminus r_1}}
\mspace{-4mu}\sgn{\rho}{\omega(\rho\setminus\omega)}
\pff{\omega} \pff{\rho\sigma\setminus\omega} +
\sum_{k=0}^{\left\lfloor
\frac{m}{2} \right\rfloor}
\Sign{k+1} \\ &\hspace{3pc} \cdot \mspace{-3mu} \sum_{\substack{
|\omega| = 2k \\ \scriptscriptstyle \omega \subseteq \rho\setminus r_1}}
\sum_{r\in\rho\setminus\omega r_1}
\mspace{-9mu}\sgn{\rho}{\omega r_1r(\rho\setminus \omega r_1r)}
\pff{\omega} \pff{r_1r}\pff{\rho\sigma\setminus\omega r_1r}
\Bigg) \:.
\end{align*}
The next step is to simplify the inner double sum in the last line of
the display above. The first equality in the following computation
holds as $|\omega|$ is even. The second equality follows by
substituting $\omega'$ for the word
$\overline{r_1r\omega}$ and noticing that one then has
\begin{equation*}
\sgn{\rho}{r_1r\omega (\rho\setminus r_1r\omega)} \:=\:
\sgn{\rho}{\omega'(\rho\setminus \omega')}
\sgn{\omega'}{r_1r\omega }
\:=\:
\sgn{\rho}{\omega'(\rho\setminus \omega')}
\sgn{\omega'}{r_1r(\omega'\setminus r_1r)} \:.
\end{equation*}
The last equality follows from \eqref{DEK} applied with
$\alpha = \omega'$ and $a = r_1$.
\begin{align*}
{} & \sum_{\substack{|\omega| = 2k \\ \scriptscriptstyle \omega \subseteq \rho\setminus r_1}}
\mspace{3mu}\sum_{r\in\rho\setminus\omega r_1}
\mspace{-4mu}\sgn{\rho}{\omega r_1r(\rho\setminus \omega r_1r)}
\pff{\omega} \pff{r_1r}\pff{\rho\sigma\setminus\omega r_1r}
\\
&\:=\: \sum_{\substack{
|\omega| = 2k \\ \scriptscriptstyle \omega \subseteq \rho\setminus r_1}}
\mspace{3mu}\sum_{r\in\rho\setminus\omega r_1}
\sgn{\rho}{r_1r \omega(\rho\setminus r_1r\omega)}
\pff{\omega} \pff{r_1r}\pff{\rho\sigma\setminus r_1r\omega}
\\
& \:=\: \sum_{\substack{|\omega'| = 2k+2 \\ \scriptscriptstyle r_1 \in \omega' \subseteq \rho}}
\mspace{3mu} \sum_{r\in\omega'\setminus r_1}
\mspace{-4mu} \sgn{\rho}{\omega'(\rho\setminus \omega')}
\sgn{\omega'}{r_1r(\omega'\setminus r_1r)}
\pff{\omega'\setminus r_1r} \pff{r_1r}\pff{\rho\sigma\setminus\omega'}
\\
& \:=\: \sum_{\substack{
|\omega'| = 2k+2 \\ \scriptscriptstyle r_1 \in \omega' \subseteq \rho}}
\mspace{-4mu}\sgn{\rho}{\omega'(\rho\setminus \omega')}
\pff{\rho\sigma\setminus\omega'}
\Big(
\sum_{r\in\omega'\setminus r_1}
\mspace{-4mu}\sgn{\omega'}{r_1r(\omega'\setminus r_1r)}
\pff{r_1r}\pff{\omega'\setminus r_1r}
\Big)
\\
& \:=\: \sum_{\substack{ |\omega'| = 2(k+1) \\
\scriptscriptstyle r_1 \in \omega' \subseteq \rho}}
\mspace{-4mu} \sgn{\rho}{\omega'(\rho\setminus \omega')}
\pff{\rho\sigma\setminus\omega'} \pff{\omega'} \:.
\end{align*}
Substituting this into the computation above one gets the desired
equality.
\begin{align*}
\dett{\rho}{\sigma}
& \:=\: \Sign{\left\lfloor \frac{m+1}{2} \right\rfloor}
\Bigg(
\sum_{k=0}^{\left\lfloor \frac{m}{2} \right\rfloor}
\mspace{6mu}\Sign{k}\mspace{-6mu} \sum_{\substack{
|\omega| = 2k \\ \scriptscriptstyle \omega \subseteq \rho\setminus r_1}}
\mspace{-3mu}\sgn{\rho}{\omega(\rho\setminus\omega)}
\pff{\omega} \pff{\rho\sigma\setminus\omega} \\
& \hspace{4pc} {} + \sum_{k=0}^{\left\lfloor
\frac{m}{2} \right\rfloor}
\mspace{6mu}\Sign{k+1}\mspace{-12mu}
\sum_{\substack{ |\omega'| = 2(k+1) \\
\scriptscriptstyle r_1 \in \omega' \subseteq \rho}}
\mspace{-4mu}\sgn{\rho}{\omega'(\rho\setminus \omega')}
\pff{\rho\sigma\setminus\omega'} \pff{\omega'} \Bigg)
\\
& \:=\: \Sign{\left\lfloor \frac{m+1}{2} \right\rfloor}
\sum_{k=0}^{\left\lfloor \frac{m+1}{2} \right\rfloor}
\Sign{k} \sum_{\substack{ |\omega| = 2k \\
\scriptscriptstyle \omega \subseteq \rho}}
\sgn{\rho}{\omega(\rho\setminus\omega)}
\pff{\omega} \pff{\rho\sigma\setminus\omega} \:.
\end{align*}
This proves the asserted formula in the special case where $R$ and $S$
are disjoint, and it remains to reduce the general case to this one.
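Because the sign conventions in identities of this kind are delicate, the disjoint case just proved lends itself to a numerical check. The following Python sketch is ours, not part of the paper: it implements the word-based Pfaffian $\pff{\cdot}$ by recursive expansion along the first letter of the word, evaluates the right-hand side of the formula, and compares it with the minor $\det(T[R;S])$ for small index sets.

```python
import itertools

def pf_word(T, w):
    # Pfaffian of the skew symmetric array T restricted to the index word w,
    # taken in the order given by w (recursive expansion along the first letter)
    if len(w) == 0:
        return 1
    if len(w) % 2 == 1:
        return 0
    total = 0
    for j in range(1, len(w)):
        rest = w[1:j] + w[j + 1:]
        total += (-1) ** (j - 1) * T[w[0]][w[j]] * pf_word(T, rest)
    return total

def perm_sign(rho, target):
    # sign of the permutation rearranging the word rho into the word target
    perm = [rho.index(x) for x in target]
    sign, seen = 1, [False] * len(perm)
    for i in range(len(perm)):
        if not seen[i]:
            j, cycle = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                cycle += 1
            sign *= (-1) ** (cycle - 1)
    return sign

def brill_minor(T, R, S):
    # right-hand side of the formula just proved (R and S disjoint,
    # 0-based indices; R and S are tuples)
    m = len(R)
    total = 0
    for k in range(m // 2 + 1):
        for pos in itertools.combinations(range(m), 2 * k):
            omega = tuple(R[i] for i in pos)
            rest = tuple(R[i] for i in range(m) if i not in pos)
            total += ((-1) ** k * perm_sign(R, omega + rest)
                      * pf_word(T, omega) * pf_word(T, rest + tuple(S)))
    return (-1) ** (m // 2) * total
```

In small experiments with random integer skew symmetric matrices this agrees with the $2\times 2$ and $3\times 3$ minors computed directly.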
\subsection{Overlapping row and column sets}
First notice that the submatrix $T[R;S]$ agrees with a submatrix of
the $2n \times 2n$ skew symmetric matrix $T'$ obtained from $T$ by
repeating all entries, horizontally and vertically. For example, with
{\small \begin{equation*} T \:=\:
\begin{pmatrix}
0 & t_{12} & t_{13} \\
- t_{12} & 0 & t_{23} \\
- t_{13} & - t_{23} & 0
\end{pmatrix}
\qtext{and}
T' \:=\:
\begin{pmatrix}
0 & 0 & t_{12} & t_{12} & t_{13} & t_{13} \\
0 & 0 & t_{12} & t_{12} & t_{13} & t_{13} \\
- t_{12} & - t_{12} & 0 & 0 & t_{23} & t_{23} \\
- t_{12} & - t_{12} & 0 & 0 & t_{23} & t_{23} \\
- t_{13} & - t_{13} & - t_{23} & - t_{23} & 0 & 0 \\
- t_{13} & - t_{13} & - t_{23} & - t_{23} & 0 & 0
\end{pmatrix}
\end{equation*} }
one has $T[\set{1,2};\set{2,3}] = T'[\set{1,3};\set{4,6}]$. In
general, one has
\begin{gather*}
T[R;S] \:=\: T'[R';S'] \intertext{for} R' \:=\: \set{2r_1-1,\ldots,
2r_m-1} \qqtext{and} S' \:=\: \set{2s_1,\ldots, 2s_m} \:.
\end{gather*}
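The doubling construction can be checked mechanically. A minimal Python sketch of ours (index conventions are $1$-based, matching the text) builds $T'$ from the $3\times 3$ example above and verifies both the displayed identity and the general index rule:

```python
def double(T):
    # T' repeats every entry of T horizontally and vertically
    n = len(T)
    return [[T[i // 2][j // 2] for j in range(2 * n)] for i in range(2 * n)]

def submatrix(M, rows, cols):
    # rows and cols are 1-based index tuples, as in the text
    return [[M[r - 1][c - 1] for c in cols] for r in rows]

T = [[0, 5, 7], [-5, 0, 11], [-7, -11, 0]]
Tp = double(T)
# the example from the text: T[{1,2};{2,3}] = T'[{1,3};{4,6}]
assert submatrix(T, (1, 2), (2, 3)) == submatrix(Tp, (1, 3), (4, 6))
# the general rule R' = {2r-1 : r in R}, S' = {2s : s in S}
R, S = (1, 2), (2, 3)
Rp = tuple(2 * r - 1 for r in R)
Sp = tuple(2 * s for s in S)
assert submatrix(T, R, S) == submatrix(Tp, Rp, Sp)
```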
One now has
\begin{align*}
\det(T[R;S]) & \:=\: \det(T'[R';S']) \\
& \:=\: \Sign{\left\lfloor \frac{m}{2} \right\rfloor}
\sum_{k=0}^{\left\lfloor
\frac{m}{2} \right\rfloor}
\mspace{6mu}\Sign{k}\mspace{-6mu}\sum_{\substack{ |\omega'| = 2k \\
\scriptscriptstyle \omega' \subseteq \rho'}}
\sgn{\rho'}{\omega'(\rho'\setminus\omega')}\pff{\omega'}
\pff{\rho'\sigma'\setminus\omega'}
\end{align*}
where the second equality holds by the case already
established. Indeed, the numbers in $R'$ are odd and the numbers in
$S'$ are even. Notice that for every $k$ there is a one-to-one
correspondence, given by the relation $r_i' = 2r_i-1$, between
subwords $\omega'\subseteq \rho'$ of length $2k$ and subwords $\omega$ of
$\rho$ of length $2k$, and one has $\pff{\omega'} = \pff{\omega}$ for
corresponding subwords. Let $U$ denote the submatrix of $T'[R';S']$
obtained by removing the rows and columns whose indices, all odd,
appear in the word $\omega'$. One has
\begin{equation*}
\pff{\rho'\sigma'\setminus\omega'} = \sgn{\overline{\rho'\sigma'\setminus\omega'}}{\rho'\sigma'\setminus\omega'} \Pf{U} \:.
\end{equation*}
If an element $x \in R \cap S$ does not appear in $\omega$, then $2x-1$
is not in $\omega'$, so $U$ contains the identical rows/columns $2x-1$
and $2x$ from $T'$, whence
$\pff{\rho'\sigma'\setminus\omega'}=0=\pff{(\rho\setminus\omega)\sigma}$
holds. On the other hand, if every element of $R \cap S$ appears in
$\omega$, then the matrix $U$ is a submatrix of
$T[R\setminus R \cap S; S\setminus R \cap S]$ with Pfaffian
$\sgn{\overline{(\rho\setminus\omega)\sigma}}{(\rho\setminus\omega)\sigma}\pff{(\rho\setminus\omega)\sigma}$. As
one has $\rho'\sigma'\setminus\omega' = (\rho'\setminus\omega')\sigma'$
the permutations
$\big(\begin{smallmatrix}\overline{(\rho\setminus\omega)\sigma}\\
(\rho\setminus\omega)\sigma \end{smallmatrix}\big)$ and
$\big(\begin{smallmatrix}\overline{\rho'\sigma'\setminus\omega'}\\
\rho'\sigma'\setminus\omega' \end{smallmatrix}\big)$ have the same
sign, and the proof is complete.
% arXiv:2007.15118, "Minors of a skew symmetric matrix: A combinatorial approach" (math.CO; math.RA)
% arXiv:1809.02430, "Arithmetic Progressions with Restricted Digits"
\begin{abstract}
For an integer $b \geqslant 2$ and a set $S\subset \{0,\cdots,b-1\}$, we define the Kempner set $\mathcal{K}(S,b)$ to be the set of all non-negative integers whose base-$b$ digital expansions contain only digits from $S$. These well-studied sparse sets provide a rich setting for additive number theory, and in this paper we study various questions relating to the appearance of arithmetic progressions in these sets. In particular, for all $b$ we determine exactly the maximal length of an arithmetic progression that omits a base-$b$ digit.
\end{abstract}
\section{Introduction}
In 1914 Kempner \cite{Ke14} introduced a variant of the harmonic series which excluded from its sum all those positive integers that contain the digit $9$ in their base-$10$ expansions. Unlike the familiar harmonic series, Kempner's modified series converges (the limit was later shown to be $\approx 22.92$; see \cite{Ba79}). A simple generalisation of Kempner's original argument shows that convergence occurs as long as any non-empty set of digits is excluded, and that this result holds in any base (see \cite{BS08}, for example).
Let us introduce some notation to describe these results in general. Fix an integer $b \geqslant 2$ and a subset of integers $S \subseteq [0,b-1]$. Here and throughout the paper, for two integers $x$ and $y$ we use $[x,y]$ to denote the set $\{n\in\mathbb{Z}:x \leqslant n\leqslant y\}$. We then define the \emph{Kempner set} $\mathcal{K}(S,b)$ to be the set of non-negative integers that, when written in base $b$, contain only digits from $S$. Thus $\mathcal{K}([0,8],10)$ denotes the set originally studied by Kempner. We will assume throughout that $0 \in S$, to avoid the ambiguity of leading zeros, and require $S \neq [0,b-1]$, to preclude the trivial set $\mathcal{K}([0,b-1],b)$ (which is nothing more than $\mathbb{Z}_{\geqslant 0}$). Such sets $S$ will be referred to as \emph{permitted} sets, and the corresponding Kempner sets $\mathcal{K}(S,b)$ as \emph{proper} Kempner sets.
The arithmetic properties of proper Kempner sets have been the object of considerable study in recent years, beginning with the work of Erd\H{o}s, Mauduit, and S\'ark\"ozy, who studied the distribution of residues in $\mathcal{K}(S,b)$ modulo small numbers \cite{EMS98} and proved the existence of integers in $\mathcal{K}(S,b)$ with many small prime factors \cite{EMS99}.
Notable recent work includes Maynard's proof \cite{May16} that the sets $\mathcal{K}(S,b)$ contain infinitely many primes whenever $b-\vert S\vert$ is at most $b^{23/80}$, provided $b$ is sufficiently large.
In this paper we consider the additive structure of proper Kempner sets. In particular, we consider the following extremal question: \emph{what is the length of the longest arithmetic progression in a proper Kempner set with a fixed given base?} Our methods will be combinatorial, rather than analytic (as in Maynard's work, \cite{May16}).
A well known conjecture of Erd\H{o}s-Tur\'{a}n (first given in \cite{ErTu36}) states that any set of positive integers with a divergent harmonic sum contains arithmetic progressions of arbitrary (finite) length. Since proper Kempner sets have convergent harmonic sums, this might suggest that the lengths of arithmetic progressions in a given proper Kempner set are uniformly bounded.
This is indeed the case. Let us say that a set $T\subset \mathbb{Z}$ is $k$-free if $T$ contains no arithmetic progression of length $k$. By a simple argument, given in Proposition \ref{prop:ell_finite}, one may show that the proper Kempner set $\mathcal{K}(S,b)$ is $(b^2-b+1)$-free for any $b \geqslant 2$.
The main purpose of this article is to understand how close this trivial upper bound is to the truth. \\
In our main theorem, we improve this bound for all $b> 2$, obtaining a tight result that expresses the length of the longest arithmetic progression in $\mathcal{K}(S,b)$ in terms of the prime factorisation of $b$. To state this theorem, we need to introduce some arithmetic functions. If $n$ and $b$ are natural numbers, let $\rho(n)$ denote the square-free radical of $n$ (i.e.\ the product of all distinct primes dividing $n$), and let $\beta(b)$ denote the largest integer less than $b$ such that $\rho(\beta(b))\vert b$.
For example, $\beta(10) = 8$, and $\beta(p^k) = p^{k-1}$ for any prime power $p^k$. In other words, $\beta(b)$ is the greatest integer less than $b$ that divides some power of $b$. Finally, let $\ell(b)$ be the length of the longest arithmetic progression contained in some proper Kempner set of base $b$.
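Both $\rho$ and $\beta$ are easy to compute directly, which is convenient for checking the examples above. A brute-force Python sketch (the helper names are ours, not from the paper):

```python
def radical(n):
    # square-free radical of n: the product of the distinct primes dividing n
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * n if n > 1 else r

def beta(b):
    # largest m < b whose radical divides b, i.e. the largest m < b
    # dividing some power of b
    return max(m for m in range(1, b) if b % radical(m) == 0)
```

For instance, this confirms $\beta(10)=8$ and $\beta(p^k)=p^{k-1}$ for small prime powers.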
Our main theorem gives an exact evaluation of $\ell(b)$.
\begin{theorem}
\label{thm:main theorem} For all $b\geqslant 2$, one has $\ell(b) = (b-1)\beta(b)$.
\end{theorem}
\noindent For example, $\ell(10)=72$. One particular set that achieves this bound is Kempner's original set, $\mathcal{K}([0,8],10)$, which contains the $72$-term arithmetic progression $\{0,125,250,375,\cdots,8875\}$.\\
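The base-$10$ example can be verified in a few lines. The following Python sketch (helper names ours) checks that all $72$ terms of the progression indeed omit the digit $9$:

```python
def base_digits(n, b):
    # base-b digits of n, most significant first
    if n == 0:
        return [0]
    out = []
    while n:
        out.append(n % b)
        n //= b
    return out[::-1]

progression = [125 * j for j in range(72)]   # 0, 125, ..., 8875
assert all(9 not in base_digits(n, 10) for n in progression)
```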
The arithmetic functions $\beta(b)$ and $\ell(b)$ are of independent interest, but do not appear to have been considered seriously before.\footnote{The sequence $\beta(b)$ is entry A079277 on the Online Encyclopedia of Integer Sequences.} We establish average order results for $\beta(b)$ which show that, for most $b$, the trivial upper bound on $\ell(b)$ from Proposition \ref{prop:ell_finite} is asymptotically correct.
\begin{theorem}
\label{thm: ell nearly always maximal epsilon free}
There is a set of integers $A\subset \mathbb{Z}$ with natural density $1$, i.e. with \[ \lim\limits_{N\rightarrow \infty} \frac{1}{N}\vert A\cap [1,N]\vert = 1,\] such that $\ell(b)\sim b^2$ as $b\rightarrow \infty$ in $A$.
\end{theorem}
\noindent
\textit{Notation:} For $x\in \mathbb{R}$, let $\{x\}$ denote the fractional part of $x$ and let $ \lfloor x \rfloor$ denote the greatest integer that is at most $x$. For a natural number $n$, we let $[n]$ denote the set of integers $\{1,\cdots,n\}$. As mentioned previously, for two integers $x$ and $y$ we use $[x,y]$ to denote the set $\{ n\in \mathbb{Z}: x\leqslant n\leqslant y\}$. We use the notation $\log_q p$ to denote the logarithm of $p$ to base $q$ (as opposed to any iterations of logarithms).\\
\section{Progressions of Maximal Length in Kempner Sets}
In this section we give our proof of Theorem \ref{thm:main theorem}, which is an exact evaluation of $\ell(b)$ and the main result of this paper. This will be done in two parts: a constructive lower bound and a proof that this lower bound is sharp. Before that, as promised, we give a simple proof that the function $\ell(b)$ is at least well-defined, i.e. that Kempner sets do not contain arbitrarily long arithmetic progressions.
\begin{proposition} \label{prop:ell_finite}
For all $b\geqslant 2$, we have $\ell(b) \leqslant (b-1)b$.
\end{proposition}
\begin{proof} Suppose that $A \subset \mathcal{K}(S,b)$ is a finite arithmetic progression of $\vert A\vert$ terms with common difference $\Delta$. Choose $k \geqslant 0$ such that $b^k \leqslant \Delta < b^{k+1}$. If $I$ denotes the shortest interval of integers containing $A$, then $\vert I \vert = (\vert A \vert-1)\Delta+1$, hence $\vert A\vert = 1+ (\vert I\vert -1)/\Delta$.
If $A$ excludes the digit $d$, then no element of $A$ has $b^{k+1}$ digit equal to $d$; since consecutive terms of $A$ differ by $\Delta < b^{k+1}$, the progression cannot jump over a block of $b^{k+1}$ consecutive integers whose $b^{k+1}$ digit equals $d$. Hence $A$ is confined within the interval $[0,db^{k+1} - 1]$ or within an interval of the form
\[[b^{k+2} m + (d+1) b^{k+1}, b^{k+2}(m+1)+ db^{k+1}-1],\]
for some $m\in \mathbb{Z}_{\geqslant 0}$. Thus $\vert I\vert \leqslant b^{k+2}-b^{k+1}$, which yields
\[\vert A\vert \leqslant 1 + \frac{b^{k+2}-b^{k+1}-1}{\Delta} < 1 + \frac{b^{k+2} - b^{k+1}}{b^{k}} \leqslant b^2-b+1,\]
hence $\vert A\vert \leqslant b^2-b$ as claimed.
\end{proof}
The bound in the previous proposition is simple and -- as a consequence -- occasionally weak. In particular, it neglects the potentially compounding effects of digit exclusion at different orders of magnitude, and the arithmetic properties of orbits in the group $\mathbb{Z}/b\mathbb{Z}$. This structure can affect the bounds dramatically, as seen most clearly in the case when the base $b$ is prime.
\begin{proposition}
\label{prime prop}
Let $p$ be prime. Then $\ell(p) \leqslant p-1$.
\end{proposition}
\begin{proof} Suppose that $\mathcal{K}(S,p)$ contains the progression $A = \{k+ j \Delta : j \in [p]\}$ with $\Delta\neq 0$. The units digit of each term lies in $S$, and $\vert S\vert \leqslant p-1$, so the $p$ terms of $A$ occupy at most $p-1$ residue classes modulo $p$. By the pigeonhole principle, there exist distinct $i,j\in [p]$ with $k+j \Delta \equiv k+i \Delta \!\!\mod p$, hence $p \mid (j-i)\Delta$, and so $p \mid \Delta$ (since $p$ is prime and $0 < \vert j-i\vert < p$). By deleting the rightmost digit of each element of $A$ we obtain a new progression in $\mathcal{K}(S,p)$ with common difference $\Delta/p$; in particular, the progression
$$\left\{\left\lfloor \frac{k}{p} \right\rfloor + j\frac{\Delta}{p}: j\in [p]\right\}.$$
The new common difference is strictly smaller, and we obtain a contradiction by infinite descent.
\end{proof}
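The bound of Proposition \ref{prime prop} can also be observed experimentally. The following Python sketch (a bounded search of ours, illustrative rather than a proof) examines base $p=5$: for each excluded nonzero digit it looks for arithmetic progressions inside $[0,5^3)$ avoiding that digit and records the longest one found.

```python
def avoids(n, b, banned):
    # True if no base-b digit of n equals `banned` (note that 0 has digit 0)
    if n == 0:
        return banned != 0
    while n:
        if n % b == banned:
            return False
        n //= b
    return True

def longest_ap_found(b, banned, limit):
    # bounded search: longest AP inside [0, limit) avoiding the digit `banned`
    members = {n for n in range(limit) if avoids(n, b, banned)}
    best = 1
    for start in members:
        for delta in range(1, limit - start):
            length = 1
            while start + length * delta in members:
                length += 1
            best = max(best, length)
    return best

p = 5
# within [0, 5^3) the longest progression avoiding any fixed nonzero digit
# has exactly p - 1 = 4 terms, matching Proposition \ref{prime prop}
assert all(longest_ap_found(p, d, p ** 3) == p - 1 for d in range(1, p))
```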
With a little more bookkeeping this proof generalizes to prime powers, and implies that $\ell(p^k) \leqslant p^{k-1}(p^k-1)$. So certainly $\ell(b)$ is not asymptotic to $b^2$ as $b$ ranges over all integers; some restriction in Theorem \ref{thm: ell nearly always maximal epsilon free} is required.\\
We now begin the proof of Theorem \ref{thm:main theorem}. Searching for long progressions in $\mathcal{K}([0,8],10)$, one might happen across the example noted earlier, namely the first $71$ multiples of $125$, which -- together with $0$ -- form an arithmetic progression of length $72$, none of whose members contain the digit 9. This example succeeds due to properties of the prime factorisation of $1000/125$, in relation to the base $10$. These properties generalise, and one may use this to construct long digit-excluding arithmetic progressions in arbitrary bases.
\begin{proposition}
\label{prop:AP construction} For all $b \geqslant 2$, the Kempner set $\mathcal{K}([0,b-2],b)$ contains an arithmetic progression of length $(b-1)\beta(b)$. Hence $\ell(b) \geqslant (b-1)\beta(b)$.
\end{proposition}
\begin{proof}
Let $K\geqslant 1$ be the smallest natural number such that $\beta(b)\vert b^K$. We claim that all the members of the arithmetic progression $$ A = \frac{b^K}{\beta(b)}[0,(b-1)\beta(b)-1]$$ exclude the digit $b-1$ from their base-$b$ expansions. To see this, let $k$ satisfy $0\leqslant k\leqslant K-1$.
Then $\gcd(b^{k+1}\beta(b),b^{K})\geqslant b^{k+1}>b^k\beta(b)$, which implies that $\gcd(b^{k+1},b^K/\beta(b))>b^k$ (by dividing through by $\beta(b)$).
In particular, for all integers $x$ and $y$, either
\begin{equation}
\label{separation condition}
\left\vert x \frac{b^K}{\beta(b)} - y b^{k+1}\right\vert>b^k \qquad \text{or} \qquad x \frac{b^K}{\beta(b)} = y b^{k+1}.
\end{equation}
This observation implies that none of the $K$ rightmost digits of any integer of the form $xb^K/\beta(b)$ can be equal to $b-1$. Indeed, in base $b$, the $b^{k}$ digit of $x b^K/\beta(b)$ is the unique integer $d$ in the range $0\leqslant d\leqslant b-1$ such that
\[\left\{\frac{xb^K/\beta(b)}{b^{k+1}} \right\}\in \left[\frac{d}{b},\frac{d+1}{b}\right).\]
Yet~\eqref{separation condition} implies that $\{\frac{x b^K}{b^{k+1}\beta(b)}\} \in \{0\} \cup (\frac{1}{b}, \frac{b-1}{b})$ for each $0 \leqslant k \leqslant K-1$. Since this is disjoint from $[\frac{b-1}{b},1)$, we conclude that none of the $K$ rightmost digits of any integer of the form $x b^K/\beta(b)$ can be equal to $b-1$.
We now fix $x \in [0,(b-1)\beta(b)-1]$ and consider the leftmost digits of $x b^K/\beta(b)$. Certainly $x b^K/\beta(b) < (b-1)b^K$. From this upper bound we see that the $b^{K}$ digit of $x b^K/\beta(b)$ lies in $[0,b-2]$ and that the digits associated to larger powers of $b$ are all $0$. Combining this with our previous observations, we conclude that $xb^K/\beta(b)$ omits the digit $(b-1)$ for all $x \in [0,(b-1) \beta(b)-1]$, so $A \subset \mathcal{K}([0,b-2],b)$ as claimed. Since $\vert A\vert = (b-1)\beta(b)$, we have $\ell(b) \geqslant (b-1)\beta(b)$.
\end{proof}
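The construction in the proof above is explicit, so it can be checked mechanically for small bases. A Python sketch (helper names ours) that builds $A = \tfrac{b^K}{\beta(b)}\,[0,(b-1)\beta(b)-1]$ and verifies $A \subseteq \mathcal{K}([0,b-2],b)$:

```python
def radical(n):
    # square-free radical of n
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * n if n > 1 else r

def beta(b):
    # largest m < b dividing some power of b
    return max(m for m in range(1, b) if b % radical(m) == 0)

def kempner_ap(b):
    # the progression from Proposition \ref{prop:AP construction}
    bb = beta(b)
    K, power = 1, b
    while power % bb:            # smallest K with beta(b) | b^K
        K, power = K + 1, power * b
    step = power // bb
    return [step * x for x in range((b - 1) * bb)]

def omits_top_digit(n, b):
    # True if the digit b-1 does not occur in the base-b expansion of n
    while n:
        if n % b == b - 1:
            return False
        n //= b
    return True

for b in range(2, 13):
    A = kempner_ap(b)
    assert len(A) == (b - 1) * beta(b)
    assert all(omits_top_digit(n, b) for n in A)
```

For $b=10$ this reproduces the progression $0,125,\ldots,8875$ of the earlier example.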
We now proceed with the second half of our evaluation of $\ell(b)$, the verification that this lower bound is exact. This requires a more technical argument.
\begin{proposition} \label{prop:upper_bound}
For all $b\geqslant 2$, we have $\ell(b) \leqslant (b-1)\beta(b)$.
\end{proposition}
\begin{proof}
Without loss of generality, let $S\subset [0,b-1]$ be any set of $b-1$ digits (containing $0$), and let $A = \{x+ j \Delta : j \in [0,\ell(b)-1]\}$ be an arithmetic progression in $\mathcal{K}(S,b)$ of maximal length, in which $\Delta > 0$ is taken minimally over all arithmetic progressions of length $\ell(b)$.
Let $\Delta = d_K b^K + \ldots + d_1 b + d_0$ denote the base $b$ expansion of $\Delta$, where $K$ is chosen such that $d_K \neq 0$. For notational convenience, let $\Delta_k:= d_k b^k + \ldots +d_1b + d_0$ for each $k \geqslant 0$. (Note that $\Delta_k = \Delta$ for $k \geqslant K$.) We may assume without loss of generality that $d_0 \neq 0$, else by removing the rightmost digit from all elements of $A$ one constructs an arithmetic progression contained in $\mathcal{K}(S,b)$ of common difference $\Delta/b$, contradicting our minimality assumption on $\Delta$. (This is the same device as we used in the proof of Proposition \ref{prime prop}).
Our proof of Proposition~\ref{prop:upper_bound} rests on the following claim, whose peculiar statement arises naturally from an inductive argument.
\begin{claim}
Consider the following statements:
\begin{itemize}
\item[\emph{C}$1$:] $\ell(b)\leqslant (b-1)\beta(b)$;
\item[\emph{C}$2$\emph{(}k\emph{)}:] there exist coprime integers $\lambda_k,\mu_k \in [1,b-1]$ satisfying\\ $\lambda_k \Delta_k = \mu_k b^{k+1}$.
\end{itemize}
Then either \emph{C1} holds or \emph{C2(}k\emph{)} holds for all $k \geqslant 0$.
\end{claim}
This claim immediately settles the theorem, since the statement C2($k$) cannot possibly hold for all $k\geqslant 0$. Indeed, we have $\lambda_k \Delta_k < b \Delta$, while $\mu_k b^{k+1}$ grows in $k$ without bound.
\end{proof}
\begin{proof}[Proof of Claim]
We prove this claim by induction, showing that for every $k\geqslant 0$, either C1 holds or C2($k^\prime$) holds for all $k^\prime\leqslant k$. For the base case $k=0$, note that $\Delta_k = d_0$. If $(d_0,b)=1$, then $d_0$ generates the additive group $\mathbb{Z}/b \mathbb{Z}$ and the elements $\{x + j \Delta : j \in [0,b-1]\}$ have $b$ distinct units digits. Thus $\ell(b) \leqslant (b-1) \leqslant (b-1)\beta(b)$, so C1 holds.
Otherwise, $(d_0,b)>1$, which implies that there exists $\lambda \in [1,b-1]$ for which $\lambda d_0 \equiv 0 \!\! \mod b$. Thus $\lambda d_0 = \mu b$ for some $\mu$, and we may assume that $(\lambda,\mu)=1$ by dividing through by common factors. This concludes the base case.
Proceeding to the inductive step, let $k\geqslant 1$ and assume that the inductive hypothesis C2($k^\prime$) holds for all smaller $k^\prime$. In particular, $\Delta_{k-1} = (\mu_{k-1}/\lambda_{k-1}) b^k$ for some coprime integers $\lambda_{k-1},\mu_{k-1}\in [1,b-1]$, and hence $\Delta_k = d_k b^k + (\mu_{k-1}/\lambda_{k-1}) b^k$.
Let $\lambda_k$ denote the order of $\Delta_k/b^{k+1}$ in the additive group $\mathbb{R}/\mathbb{Z}$, and let $\mu_k$ denote the integer $\lambda_k (\Delta_k/b^{k+1})$. We see that $(\lambda_k,\mu_k)=1$, as one could divide through by any common factors of $\lambda_k$ and $\mu_k$ to contradict the fact that $\lambda_k$ is the order of $\Delta_k/b^{k+1}$ in $\mathbb{R}/\mathbb{Z}$. Now, if $\lambda_k < b$, then $\mu_k < b$ as well, since $\Delta_k/b^{k+1}<1$ for any $k$. In this case, $\lambda_k$ and $\mu_k$ satisfy the conditions listed in C2($k$). Therefore C2($k^\prime$) holds for all $k^\prime \leqslant k$.
It remains to address the case $\lambda_k \geqslant b$. By standard facts about finite subgroups of $\mathbb{R}/\mathbb{Z}$, we note that the orbit of $\Delta_k/b^{k+1}$ in $\mathbb{R}/\mathbb{Z}$ is exactly the set of fractions with denominator dividing $\lambda_k$. In particular, the values
\[T=\left\{\frac{x}{b^{k+1}} + \frac{\Delta_k j}{b^{k+1}} \!\! \mod 1 : j \in [0,\lambda_k -1]\right\}\]
are equally spaced, with gaps of size $1/\lambda_k$. Since $\lambda_k \geqslant b$, for any integer $d \in [0,b-1]$ at least one member of $T$ lies in the half-open interval $[\frac{d}{b},\frac{d+1}{b})$. In other words, at least one member of the progression $x + \Delta_k[0,\lambda_k-1]$ has $b^{k}$ digit equal to $d$.
Since $\Delta \equiv \Delta_k \!\!\mod b^{k+1}$, the $b^k$ digits of the terms of $x+\Delta[0,\lambda_k-1]$ agree with those of $x+\Delta_k[0,\lambda_k-1]$, so $x+\Delta[0,\lambda_k-1]$ is not contained in any proper Kempner set $\mathcal{K}(S,b)$, and hence $\ell(b) \leqslant \lambda_k -1$. However, more can be said with a slight refinement of this analysis. Equal spacing implies that at least $\lfloor \lambda_k/b \rfloor$ members of $T$ lie in the interval $[\frac{d}{b},\frac{d+1}{b})$. This yields the stronger bound $\ell(b) \leqslant \lambda_k - \lfloor \lambda_k/b\rfloor$.
We now establish an upper bound on the function $\lambda_k - \lfloor \lambda_k/b\rfloor$, given the known constraints on $\lambda_k$. For starters, the inductive hypothesis implies that $\lambda_{k-1} \mid \mu_{k-1} b^{k}$, hence $\lambda_{k-1} \mid b^{k}$ (since $\lambda_{k-1}$ and $\mu_{k-1}$ are coprime). Since $\lambda_{k-1} < b$ and $\lambda_{k-1}$ divides a power of $b$, this implies that $\lambda_{k-1} \leqslant \beta(b)$. Secondly, the inductive hypothesis allows us to write
\[\frac{\Delta_k}{b^{k+1}} = \frac{d_k \lambda_{k-1} + \mu_{k-1}}{b \lambda_{k-1}},\]
which shows that $b\lambda_{k-1} (\Delta_k/b^{k+1}) \equiv 0 \!\! \mod 1$. Hence $b \lambda_{k-1}$ is a multiple of the order of $(\Delta_k/b^{k+1}) \!\!\mod 1$, i.e.\ $\lambda_k \mid b \lambda_{k-1}$. We conclude that $\lambda_k \leqslant b \lambda_{k-1} \leqslant b\beta(b)$.
The function $\lambda \mapsto \lambda - \lfloor \lambda/b \rfloor$ is non-decreasing as $\lambda$ increases over integers, hence
\[\ell(b) \leqslant \lambda_k - \left\lfloor \frac{\lambda_k}{b}\right\rfloor \leqslant b \beta(b) - \left\lfloor \frac{b \beta(b)}{b}\right\rfloor =b \beta(b) - \beta(b)= (b-1)\beta(b),\]
which implies that C1 holds. This completes the inductive step, and hence the proof of the claim; together with Proposition \ref{prop:AP construction}, this completes the proof of Theorem \ref{thm:main theorem}.
\end{proof}
\section{Asymptotic Analysis}
In this section we analyse the function $\beta(b)$, with the ultimate goal of proving Theorem \ref{thm: ell nearly always maximal epsilon free}. We begin with the following simple observation.
\begin{proposition} \label{prop:betabounds1}
We have
\[\liminf_{n \to \infty} \frac{\beta(n)}{n} =0 \quad \text{and} \quad \limsup_{n \to \infty} \frac{\beta(n)}{n} =1.\]
\end{proposition}
\begin{proof}
The first claim follows from the observation that $\beta(p) = 1$ for all primes $p$. For the second, we note that $\beta(2^k+2)=2^k$ for all $k>1$.
\end{proof}
It is clear from this proposition that the behaviour of $\beta(n)$ is erratic as $n$ varies. However, its calculation may be understood as a certain integer programming problem, as illustrated by the following example.
\begin{example} \label{example:beta24}
In this example, we calculate $\beta(24)$ using techniques from mixed integer programming. We may write $\beta(24)=2^a\cdot 3^b$, with $a,b \in \mathbb{N}$. It follows that $a \log 2 + b \log 3 < \log 24$, and $(a,b)$ may be visualized as a lattice point in the following figure (Figure \ref{fig1}). The equation of the line is $f(x) = \log_3 24-x\log_3 2 $.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{plot1}
\caption{Lattice points $(a,b)$ corresponding to $\beta(24)=2^a\cdot3^b$.}
\label{fig1}
\end{figure}
Let us restrict our attention to the set $S$ of lattice points of the form $(a,b)$, in which $b$ is taken maximally for fixed $a$. If $(a,b) \in S$, the vertical distance from $(a,b)$ to the line in Figure \ref{fig1} is then given by $\{\log_3 24 - a\log_3 2\}$. We also note that $a\log 2 + b \log 3$ is maximized among the lattice points below the line when $(a,b) \in S$ and $\{\log_3 24 - a \log_3 2\}$ is minimized (as a function of $a$). In our example, minimization occurs at $(a,b)=(1,2)$, and so we obtain $\beta(24)=2^1 \cdot 3^2=18$.
\end{example}
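The integer program of Example \ref{example:beta24} can be solved by direct enumeration of the lattice points. A Python sketch of ours, using exact integer arithmetic to avoid floating-point trouble at boundary points such as $2^3\cdot 3 = 24$:

```python
def beta_from_lattice(n, primes):
    # maximize prod p_i^{a_i} over nonnegative exponent vectors whose
    # product is strictly less than n (depth-first enumeration)
    best, stack = 1, [(0, 1)]
    while stack:
        i, prod = stack.pop()
        if i == len(primes):
            best = max(best, prod)
            continue
        while prod < n:
            stack.append((i + 1, prod))
            prod *= primes[i]
    return best

assert beta_from_lattice(24, [2, 3]) == 18   # attained at (a, b) = (1, 2)
```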
The technique of Example \ref{example:beta24} generalizes easily: if $n$ has $k$ prime divisors $p_1,\ldots,p_k$, we may associate to $n$ a set of lattice points in $\mathbb{Z}^k$, namely \[ \{ (a_1,\cdots,a_k) \in \mathbb{Z}_{\geqslant 0}^k : a_1\log p_1 + \cdots + a_k \log p_k < \log n\}.\] The lattice point $(a_1,\ldots, a_k)$ that minimizes distance to the hyperplane
\[x_1 \log p_1 + \cdots + x_k \log p_k = \log n\]
determines $\beta(n)$ by the formula $\beta(n)= \prod_{i=1}^k p_i^{a_i}$. \\
Combining this idea with well-known equidistribution results gives the following.
\begin{lemma} \label{thm:beta_asymp_along_AP}
We have $\beta(n) \sim n$ as $n \to \infty$ within $N\mathbb{Z}$ if and only if $N$ is not a prime power.
\end{lemma}
\begin{proof}
If $N=p^k$ is a prime power, then $N\mathbb{Z}$ contains the subsequence $\{p^{kn}\}_{n \geqslant 1}$. Since $\beta(p^{kn})=p^{kn-1}$, we cannot have $\beta(n) \sim n$ within $N\mathbb{Z}$.
Otherwise, let $p$ and $q$ be distinct primes dividing $N$, and fix a positive constant $\varepsilon$. As $\log_q p$ is irrational, the sequence $(\{u_n\})_{n=0}^\infty$ given by $u_n := n\cdot\log_q p$ is equidistributed mod $1$ (by the Equidistribution Theorem: see Proposition 21.1 of \cite{IwKo04}, say). In particular, there exists a positive parameter $L_\varepsilon$ such that $l>L_\varepsilon$ implies that the sequence $(\{ u_n\})_{n=0}^l$ contains at least one element in each interval mod $1$ of length $\varepsilon$.
Now let $m$ be a natural number and let $l = \lfloor \log_p m \rfloor$. From the above remarks, there exists a positive parameter $M_\varepsilon$ such that, for each $m>M_\varepsilon$, the shifted sequence $(\{\log_q m - u_n \})_{n=0}^l$ contains some element in the interval $(0,\varepsilon)$. In other words there exists $n_0$ at most $l$ (but dependent on $l$) such that
\[0 < \{\log_q m - n_0 \cdot \log_q p\}<\varepsilon.\] Also note that $\log_q m - n_0 \cdot \log_q p $ is positive.
Now, assume $pq \mid m$ and consider $(a,b):=(n_0,\lfloor \log_q m - n_0 \cdot \log_q p \rfloor )$. We have $\beta(m) \geqslant p^a \cdot q^b$ by construction. So
\[\beta(m)\geqslant p^a \cdot q^b = q^{\log_q m - \{\log_q m - n_0 \cdot \log_q p \}} > q^{\log_q m - \varepsilon}=m \cdot q^{-\varepsilon}.\]
Thus $q^{-\varepsilon} < \beta(m)/m < 1$, for all $m$ satisfying $m>M_\varepsilon$ and $pq \mid m$. Since $\varepsilon$ was arbitrary, and $q$ fixed, it follows that $\beta(m) \sim m$ within $pq\mathbb{Z}$, and hence within $N\mathbb{Z}$.
\end{proof}
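The convergence in Lemma \ref{thm:beta_asymp_along_AP} can be observed numerically. A brute-force Python sketch of ours, tabulating $\beta(m)/m$ along multiples of $6$ (the particular values of $m$ are an illustrative choice, not drawn from the text):

```python
def radical(n):
    # square-free radical of n
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * n if n > 1 else r

def beta(b):
    # largest m < b dividing some power of b
    return max(m for m in range(1, b) if b % radical(m) == 0)

# beta(600) = 576 and beta(6000) = 5832, so the ratios creep towards 1
ratios = {m: beta(m) / m for m in (6, 60, 600, 6000)}
```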
By considering $N=6$, for example, we obtain a set of density $1/6$ (namely, $6\mathbb{Z}$) on which $\ell(b) \sim b^2$ as $b$ tends to infinity within that set. Any finite union of such sets $N_i\mathbb{Z}$, where $N_i$ has two distinct prime factors $p_i$ and $q_i$, will also have this property, and one may show with relative ease that such a union may be arranged to have natural density arbitrarily close to $1$.
However, by quantifying estimates made in the previous lemma, we can do slightly better, and show the existence of a set with the desired property that has density $1$, thereby proving Theorem \ref{thm: ell nearly always maximal epsilon free}.
\begin{proof}[Proof of Theorem \ref{thm: ell nearly always maximal epsilon free}]
Let $f(N)$ be a function that satisfies $f(N) \to \infty$ as $N \to \infty$ (to be further specified later). For integers $j \geqslant 0$, let $D_j$ denote the set of $n \in (2^{j-1},2^j]$ such that $n$ has at least two distinct prime factors $p,q \leqslant f(2^{j-1})$.
Let \[D:= \bigcup_{j\geqslant 0} D_j.\]
The set $D$ is our candidate set for use in Theorem \ref{thm: ell nearly always maximal epsilon free}.
\begin{lemma} If $f$ grows slowly enough, the set $D$ has natural density $1$.
\end{lemma}
\begin{proof} We begin by fixing $j \geqslant 0$ and bounding the size of $D_j$ from below. For convenience, we write $N$ for $2^{j-1}$.
To produce this lower bound, we find an upper bound for $\lvert (N,2N]\setminus D_j \rvert$. Indeed, by a standard application of a small sieve (e.g.\ the Selberg sieve, in particular Theorem 9.3.10 of \cite{Mu08}), one may show that the number of $n \in (N,2N]$ without any prime factor $p$ less than $f(N)$ is
\[O\bigg( N \prod_{p < f(N)} \left(1-\frac{1}{p} \right)\bigg),\]
provided $f(N)$ grows slowly enough. By Mertens' Third Theorem, this quantity is $O(N/\log f(N))$.
By using a union bound and the sieve above, we bound the number of $n \in (N,2N]$ with exactly one prime factor $p < f(N)$ by
\[O\Bigg( \sum_{p < f(N)} \frac{N}{p} \prod_{\substack{q<f(N) \\q \neq p}} \left(1-\frac{1}{q}\right)\!\Bigg).\]
This quantity is $O(N \log\log f(N)/\log f(N))$ (again by Mertens' theorems), and, excluding both exceptional sets, we conclude that
\[\lvert D_j \rvert = N\left(1 - O \left(\frac{\log \log f(N)}{\log f(N)}\right)\right).\]
This already establishes that $D$ has full upper Banach density. To show that $D$ has \emph{natural} density $1$, we fix $\varepsilon > 0$ and note that, since $f(2^{j-1}) \to \infty$ as $j \to \infty$, there exists $j_0(\varepsilon)$ such that $\lvert D_j \rvert \geqslant 2^{j-1}(1-\varepsilon)$ for all $j \geqslant j_0(\varepsilon)$. In particular,
\begin{align*}
\sum_{\substack{n \leqslant X \\ n \in D}} 1 &\geqslant \sum_{j_0(\varepsilon) \leqslant j \leqslant \lceil \log_2 X \rceil} \vert D_j \vert - \sum_{X<n \leqslant 2^{\lceil \log_2 X \rceil}} 1 \\
&\geqslant (1-\varepsilon) \left(2^{\lceil \log_2 X \rceil} - 2^{j_0(\varepsilon)-1}\right) + X - 2^{\lceil \log_2 X \rceil}.
\end{align*}
Simplifying, we see that
\[ \liminf_{X \to \infty} \frac{\vert D \cap [1,X]\vert}{X} \geqslant \liminf_{X \to \infty} \frac{X - \varepsilon 2^{\lceil \log_2 X \rceil} - 2^{j_0(\varepsilon)}}{X} \geqslant 1 -2\varepsilon,\]
which implies that $D$ has natural density $1$, since $\varepsilon$ was arbitrary.
\end{proof}
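The proportions in this lemma can be probed concretely. The following sketch fixes an illustrative slow-growing choice $f(N)=\log^2 N$ (the lemma only requires that $f$ grow slowly enough; this particular choice is an assumption made here) and measures the proportion of each dyadic block lying in $D_j$.

```python
import math

def small_prime_factors(n, bound):
    """Distinct prime factors of n that are at most bound, by trial division."""
    ps, d = [], 2
    while d <= bound:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    return ps

def f(N):
    # illustrative slow-growing choice; the lemma leaves f unspecified
    return max(5.0, math.log(N + 2) ** 2)

for j in [8, 12, 16]:
    N = 2 ** (j - 1)
    size = sum(1 for n in range(N + 1, 2 * N + 1)
               if len(small_prime_factors(n, f(N))) >= 2)
    print(j, round(size / N, 3))   # proportion of (N, 2N] lying in D_j
```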
Secondly, we prove that $\beta(n)$ is asymptotically large within $D$.
\begin{lemma} \label{second_lemma}
If $f$ grows slowly enough, then $\beta(n) \sim n$ as $n \to \infty$ within $D$.
\end{lemma}
\begin{proof} Our proof presents a more quantitative adaptation of the argument used in Lemma~\ref{thm:beta_asymp_along_AP}. Let $\varepsilon > 0$, and fix $n \in D$. By the definition of $D$, there exist distinct primes $p,q < f(n)$ for which $p,q \mid n$. We will show that, provided $n$ is large enough in terms of $\varepsilon$, there exist non-negative integers $a$ and $b$ for which
\[e^{\varepsilon} \geqslant \frac{n}{p^a q^b} > 1.\]
Since $p^a q^b \leqslant \beta(n) < n$, and $\varepsilon$ is arbitrary, this will complete the proof.
Taking logarithms, it suffices to find non-negative integers $a$ and $b$ for which
\[\frac{\varepsilon}{\log q} \geqslant \log_q n - a \log_q p - b > 0.\]
Setting $L = \lfloor \log_p n \rfloor$, it will be enough to prove that the sequence of fractional parts $\{ \{a \log_q p\} : a \in [1,L]\}$ contains an element in every interval modulo $1$ of length $\varepsilon/\log q$. Since $p,q \leqslant f(n)$, we reduce our theorem to the following claim:
\begin{claim} Let $L'=\lfloor \log n/\log f(n) \rfloor$. Then $S=\{ \{a \log_q p\} : a \in [1,L']\}$ contains an element in every interval modulo $1$ of length $\varepsilon/\log f(n)$, provided $f(n)$ grows slowly enough.
\end{claim}
The proof of this claim follows from the Erd\H{o}s-Tur\'an inequality (Corollary 1.1 of \cite{Mo94}). Indeed, for any interval $I$ modulo $1$ of length $\varepsilon/\log f(n)$, we have
\begin{align} \label{eq:erdos_turan_bound}
\left\vert \vert S\cap I\vert - \frac{\varepsilon L'}{\log f(n)} \right\vert \ll \frac{L'}{K+1} + \sum_{k \leqslant K} \frac{1}{k} \bigg\vert \sum_{a=1}^{L'} e^{2\pi i a k\log_q p} \bigg\vert
\end{align}
for any integer $K \geqslant 1$. It suffices to show that we may choose a $K$ such that the right-hand side in~\eqref{eq:erdos_turan_bound} is $o(L'/\log f(n))$ as $n \to \infty$.
Choosing $K = \lfloor \log^2 f(n)\rfloor $ ensures that $L'/(K+1) = o(L'/\log f(n))$. As for the second term in~\eqref{eq:erdos_turan_bound}, bounding the sum over $a$ as a geometric series gives
\[\sum_{k \leqslant K} \frac{1}{k} \bigg\vert \sum_{a=1}^{L'} e^{2\pi i a k\log_q p} \bigg\vert \leqslant G(K, p,q)\]
for some function $G$ that is independent of $L'$. We may assume without loss of generality that $G$ is increasing in each variable. Then $$G(K,p,q) \ll G( \log^2 f(n), f(n), f(n) ),$$ so it suffices to show that
\begin{equation}
\label{growth function}
G\left(\log^2 f(n), f(n), f(n)\right) = o\left(\frac{L'}{\log f(n)}\right).
\end{equation}
Recalling the definition of $L'$, this is equivalent to showing
\[ G\left(\log^2 f(n), f(n), f(n)\right) \cdot \log^2 f(n)= o\left(\log n \right).\]
Yet $G$ is simply some absolute function, so if $f$ grows slowly enough then (\ref{growth function}) will hold. (If one so wished, one could quantify this growth condition using Baker's result \cite{Ba68} on linear forms in logarithms of primes.) This proves the claim, and hence the lemma.
\end{proof}
Combining Lemma~\ref{second_lemma} with Theorem~\ref{thm:main theorem} yields Theorem \ref{thm: ell nearly always maximal epsilon free}.
\end{proof}
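The equidistribution driving the claim is easy to visualise numerically. The sketch below measures the largest gap (mod $1$) left by the fractional parts $\{a \log_3 2\}$ for $1 \leqslant a \leqslant L$; by the three-distance theorem this gap shrinks roughly like $1/L$, so every interval of fixed length is eventually hit. The choice $p=2$, $q=3$ is purely illustrative.

```python
import math

def max_gap(points):
    """Largest gap (mod 1) between consecutive sorted fractional parts."""
    pts = sorted(points)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(pts[0] + 1 - pts[-1])     # wrap-around gap
    return max(gaps)

alpha = math.log(2) / math.log(3)         # log_3 2, irrational
for L in [10, 100, 1000]:
    print(L, max_gap([(a * alpha) % 1 for a in range(1, L + 1)]))
```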
\vspace{5 mm}
\bibliographystyle{plain}
% arXiv:1809.02430, "Arithmetic Progressions with Restricted Digits" (math.NT; math.CO), 2018-09-10.
% arXiv:1001.0548
\title{Proof of the Combinatorial Nullstellensatz over integral domains in the spirit of Kouba}
\begin{abstract}
It is shown that by eliminating duality theory of vector spaces from a recent proof of Kouba (O.~Kouba, A duality based proof of the Combinatorial Nullstellensatz. Electron. J. Combin. 16 (2009), \#N9) one obtains a direct proof of the nonvanishing-version of Alon's Combinatorial Nullstellensatz for polynomials over an arbitrary integral domain. The proof relies on Cramer's rule and Vandermonde's determinant to explicitly describe a map used by Kouba in terms of cofactors of a certain matrix. That the Combinatorial Nullstellensatz is true over integral domains is a well-known fact which is already contained in Alon's work and emphasized in recent articles of Micha\l{}ek and Schauz; the sole purpose of the present note is to point out that not only is it not necessary to invoke duality of vector spaces, but by not doing so one easily obtains a more general result.
\end{abstract}
\section{Introduction}
The Combinatorial Nullstellensatz is a very useful theorem (see ~\cite{AlonNss})
about multivariate polynomials over an integral domain which bears some resemblance to the
classical Nullstellensatz of Hilbert.
\begin{theorem}[Alon, {C}ombinatorial {N}ullstellensatz (ideal-containment-version), Theorem 1.1 in ~\cite{AlonNss}]\label{thm:idealNss}
Let $K$ be a field, $R\subseteq K$ a subring, $f\in R[x_1,\dotsc, x_n]$, $S_1,\dotsc, S_n$ arbitrary nonempty subsets of $K$,
and $g_i:=\prod_{s\in S_i}(x_i-s)$ for every $1\leq i \leq n$. If $f(s_1,\dotsc, s_n)=0$
for every $(s_1,\dotsc, s_n)\in S_1\times\dotsm\times S_n$, then there exist polynomials $h_i\in R[x_1,\dotsc, x_n]$ with the property
that $\deg(h_i)\leq \deg(f) - \deg(g_i)$ for every $1\leq i \leq n$, and $f=\sum_{i=1}^n h_i g_i$.
\end{theorem}
\begin{theorem}[Alon, {C}ombinatorial {N}ullstellensatz (nonvanishing-version), Theorem 1.2 in ~\cite{AlonNss}]\label{thm:nonvanishingNss}
Let $K$ be a field, $R\subseteq K$ a subring, and $f\in R[x_1,\dotsc, x_n]$.
Let $c\cdot x_1^{d_1}\dotsm x_n^{d_n}$ be a term in $f$ with $c\neq 0$ whose degree $d_1+\dotsm+d_n$ is
maximum among all degrees of terms in $f$. Then every product $S_1\times\dotsm\times S_n$, where each $S_i$ is an arbitrary finite
subset of $R$ satisfying $|S_i|=d_i+1$, contains at least one point $(s_1,\dotsc, s_n)$ with $f(s_1,\dotsc,s_n)\neq 0$.
\end{theorem}
Three comments are in order. First, talking about subrings of a field is equivalent to talking about integral domains: every
subring of a field clearly is an integral domain, and, conversely, every integral domain $R$ is (isomorphic to) a subring
of its field of fractions $\mathrm{Quot}(R)$. Second, strictly speaking, rings are mentioned in ~\cite{AlonNss} only in
Theorem \ref{thm:idealNss}, but Alon's proof in ~\cite{AlonNss} of Theorem \ref{thm:nonvanishingNss} is valid for polynomials over
integral domains as well. Third, it is intended that the $S_i$ are allowed to be subsets of $K$ in Theorem ~\ref{thm:idealNss} but required
to be subsets of $R$ in Theorem ~\ref{thm:nonvanishingNss}, since this is the slightly stronger formulation: if Theorem ~\ref{thm:nonvanishingNss}
is true as it is formulated here, then by invoking it with $R=K$ and by viewing an $f\in R[x_1,\dotsc,x_n]$, $R$ being a subring of $K$, as a
polynomial in $K[x_1,\dotsc,x_n]$, it is true as well with the $S_i$ being allowed to be arbitrary subsets of $K$.
In ~\cite{AlonNss}, Theorem ~\ref{thm:nonvanishingNss} was deduced from Theorem \ref{thm:idealNss}.
In ~\cite{Kouba}, Kouba gave a beautifully simple and direct proof of the
nonvanishing-version of the Combinatorial Nullstellensatz, bypassing the use of the ideal-containment-version.
Kouba's argument was restricted to the case of polynomials over a field and at one step applied a suitably
chosen linear form on the vector space $K[x_1,\dotsc, x_n]$ to the given polynomial $f$ in Theorem ~\ref{thm:nonvanishingNss}.
However, for Kouba's idea to work, it is not necessary to have recourse to duality theory
of vector spaces and in the following section it will be shown how to make Kouba's idea work without it,
thus obtaining a direct proof of the full Theorem ~\ref{thm:nonvanishingNss}.
Finally, two relevant recent articles ought to be mentioned. A very short direct proof
of Theorem ~\ref{thm:nonvanishingNss} was given by Micha\l{}ek in ~\cite{Michalek} who explicitly
remarks that the proof works for integral domains as well. Moreover, the differences
$\bigl\{s-s':\; \{s,s'\}\in \genfrac{(}{)}{0pt}{}{S_k}{2} \bigr\}$ in the proof below play a similar role in
Micha\l{}ek's proof. In ~\cite{Schauz}, Schauz obtained far-reaching generalizations and sharpenings
of Theorem ~\ref{thm:nonvanishingNss}, expressly working with integral domains and generalizations
thereof throughout the paper.
\section{Proof of Theorem~\ref{thm:nonvanishingNss}}
The proof of Theorem~\ref{thm:nonvanishingNss} will be based on the following simple lemma.
\begin{lemma}\label{lem:vandermonde} Let $R$ be an integral domain. Let $S=\{s_1,\dotsc,s_m\}\subseteq R$ be an arbitrary
finite subset. Then there exist elements $\lambda_1^{(S)},\dotsc, \lambda_m^{(S)}$ of $R$ such that
\begin{gather} \lambda_1^{(S)} \cdot (1, s_1, s_1^2,\dotsc, s_1^{m-1}) + \dotsm + \lambda_m^{(S)} \cdot (1,s_m,s_m^2,\dotsc, s_m^{m-1}) \notag\\
= (0,0,0,\dotsc,0,\prod_{1\leq i < j \leq m} (s_i-s_j)).
\end{gather}
\end{lemma}
\begin{proof} Let $[m]:=\{1,\dotsc, m\}$. Define $b$ to be the right-hand side of the claimed equation,
taken as a column vector, and let $A=(a_{ij})_{(i,j)\in [m]^2}$ be the Vandermonde matrix defined by $a_{ij}:=s_{j}^{i-1}$.
Then the statement of the lemma is equivalent to the existence of a solution $\lambda^{(S)}\in R^m$ of the system of linear equations
$A \lambda^{(S)} = b.$ By the well-known formula for the determinant of a
Vandermonde matrix (see ~\cite{LangAlgebra}, Ch. XIII, \S 4, example after Prop. 4.10), $\det(A)=\prod_{1\leq i < j \leq m} (s_i-s_j)$.
Since $S$ is a set, all factors of this product are nonzero, and since $R$ has
no zero divisors, the determinant is therefore nonzero as well. Now
let $\alpha_{ij}$ be the cofactors of $A$, i.e. $\alpha_{ij}:=(-1)^{i+j} \det(A^{(ij)})$,
where $A^{(ij)}$ is the $(m-1)\times (m-1)$ matrix
obtained from $A$ by deleting the $i$-th row and the $j$-th column (see ~\cite{BirkhoffMacLane}, Ch. IX, \S 3, before Lemma 1).
By Cramer's rule (see Ch. IX, \S 3, Corollary 2 of Theorem 6 in ~\cite{BirkhoffMacLane} or Theorem 4.4 in ~\cite{LangAlgebra}), for
every $j\in [m]$,
\[ \det(A) \cdot \lambda_j^{(S)} = \sum_{i=1}^m \alpha_{ij} b_i.\]
Using $b_m=\det(A)$, $b_i=0$ for every $1\leq i < m$, and the commutativity of an integral domain, this reduces to
\[ \det(A) \cdot \bigl( \lambda_j^{(S)} - \alpha_{mj}\bigr) = 0.\]
Hence, since $\det(A)\neq 0$ and $R$ has no zero divisors, it follows that
the cofactors $\lambda_j^{(S)}=\alpha_{mj}\in R$ provide explicit elements
with the desired property.
\end{proof}
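As a sanity check, the cofactor formula can be verified mechanically over $\mathbb{Z}$. The sketch below builds the Vandermonde matrix for an arbitrary illustrative set $S=\{2,5,-1,7\}$ (here $m=4$, so $\prod_{i<j}(s_i-s_j)$ and $\prod_{i<j}(s_j-s_i)$ coincide) and confirms that the last-row cofactors solve $A\lambda=b$ exactly in integers; all names are ad hoc.

```python
from itertools import permutations

def det(M):
    """Leibniz-formula determinant over the integers; fine for small matrices."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += (-1) ** inv * prod
    return total

S = [2, 5, -1, 7]                              # illustrative distinct integers
m = len(S)
A = [[s ** i for s in S] for i in range(m)]    # a_{ij} = s_j^{i-1}
lam = []
for j in range(m):
    minor = [[A[i][k] for k in range(m) if k != j] for i in range(m - 1)]
    lam.append((-1) ** (m - 1 + j) * det(minor))   # cofactor alpha_{mj}

# rows 1..m-1 of A*lam vanish; the last row gives prod_{i<j} (s_i - s_j)
for l in range(m - 1):
    assert sum(c * s ** l for c, s in zip(lam, S)) == 0
target = 1
for i in range(m):
    for j in range(i + 1, m):
        target *= S[i] - S[j]
assert sum(c * s ** (m - 1) for c, s in zip(lam, S)) == target
print("cofactors", lam, "last entry", target)
```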
Using this lemma, Kouba's argument may now be carried out without change in the setting of integral domains.
\begin{proof}[Proof of Theorem~\ref{thm:nonvanishingNss}] Let $R$ be an arbitrary integral domain and $f\in R[x_1,\dotsc, x_n]$ be
an arbitrary polynomial. Let $d_1,\dotsc, d_n\in \mathbb{N}_{\geq 0}$ be the exponents of a term $c\cdot x_1^{d_1}\dotsm x_{n}^{d_n}$
with $c\neq 0$ which has maximum degree in $f$. For each $k\in [n]$, choose an arbitrary finite
subset $S_k\subseteq R$ and apply Lemma ~\ref{lem:vandermonde} with $S=S_k$ and $m=|S|=d_k+1$ to obtain a family of
elements $(\lambda_{s_k}^{(S_k)})_{s_k\in S_k}$ of $R$ (where in order to avoid double indices the coefficients $\lambda$
are now being indexed by the elements of $S_k$ directly, not by an enumeration of each $S_k$) with the property that
\begin{align}
\sum_{s_k\in S_k} \lambda_{s_k}^{(S_k)} \cdot s_{k}^\ell & = 0\quad \text{ for every $\ell\in \{0,\dotsc, d_k-1\}$}, \label{eq:smallExponent}\\
\sum_{s_k\in S_k} \lambda_{s_k}^{(S_k)} \cdot s_{k}^{d_k} & = \prod_{\{s,s'\} \in \genfrac{(}{)}{0pt}{}{S_k}{2}} (s-s') =: r_k \in R\backslash\{ 0 \}\label{eq:matchingExponents}.
\end{align}
Using the coefficient families $(\lambda_{s_k}^{(S_k)})_{s_k\in S_k}$, define, \`a la Kouba, the map
\begin{align}
\Phi:\quad & R[x_1,\dotsc, x_n] \longrightarrow R \notag\\
g &\longmapsto \sum_{(s_{1},\dotsc,s_{n})\in S_1\times \dotsm \times S_n} \lambda_{s_1}^{(S_1)}\dotsm\lambda_{s_n}^{(S_n)} \cdot g(s_{1},\dotsc,s_{n}).
\label{def:koubaMap}
\end{align}
Due to the commutativity of an integral domain, $\Phi$ is an $R$-linear form on the $R$-module $R[x_1,\dotsc, x_n]$, hence
its value $\Phi(f)$ on a polynomial $f$ can be evaluated termwise as
\begin{align}
\Phi(f) = \sum_{c\cdot t\text{ a term in $f$}} c\cdot \Phi(t).
\end{align}
If $t = c\cdot x_1^{d_1'}\dotsm x_n^{d_n'}$ is an arbitrary term in $R[x_1,\dotsc,x_n]$, then
\begin{align}
\Phi(t) = c\cdot \Phi (x_1^{d_1'}\dotsm x_n^{d_n'})
& = c \cdot\sum_{(s_{1},\dotsc,s_{n})\in S_1\times \dotsm \times S_n} \lambda_{s_1}^{(S_1)}\dotsm\lambda_{s_n}^{(S_n)} \cdot s_{1}^{d_1'} \dotsm s_{n}^{d_n'} \notag\\
& = c \cdot \sum_{s_{1}\in S_1}\dotsm \sum_{s_{n} \in S_n} \lambda_{s_1}^{(S_1)}\dotsm\lambda_{s_n}^{(S_n)} \cdot s_{1}^{d_1'} \dotsm s_{n}^{d_n'} \notag\\
& = c \cdot \prod_{k=1}^n\biggl ( \sum_{s_k \in S_k} \lambda_{s_k}^{(S_k)} s_k^{d_k'}\biggr ),\label{eq:productExpansion}
\end{align}
where in the last step again use has been made of the commutativity of an integral domain. By ~\eqref{eq:productExpansion}
and ~\eqref{eq:smallExponent} it follows that for every term $t$, if there is at least one exponent $d_i'$ with $d_i'<d_i$,
then $\Phi(t)=0$. Moreover, by the choice of the term $c\cdot x_1^{d_1}\dotsm x_{n}^{d_n}$, every
term $c'\cdot x_1^{d_1'}\dotsm x_{n}^{d_n'}$ of $f$ which is different from the term $c\cdot x_1^{d_1}\dotsm x_{n}^{d_n}$ must,
even if it has itself maximum degree in $f$, contain at least one exponent $d_i'$ with $d_i'<d_i$. Therefore
\begin{align}
& \sum_{(s_{1},\dotsc,s_{n})\in S_1\times \dotsm \times S_n} \lambda_{s_1}^{(S_1)}\dotsm\lambda_{s_n}^{(S_n)} \cdot f(s_{1},\dotsc,s_{n})
\eqBy{\eqref{def:koubaMap}}
\Phi(f) \eqBy{\eqref{eq:smallExponent},\eqref{eq:productExpansion}}
c\cdot \Phi(x_1^{d_1}\dotsm x_{n}^{d_n} ) = \notag\\
& \eqBy{\eqref{eq:matchingExponents},\eqref{eq:productExpansion}}
c\cdot \prod_{k=1}^n \prod_{\{s,s'\} \in \genfrac{(}{)}{0pt}{}{S_k}{2}} (s-s') = c\cdot \prod_{k=1}^n r_k \neq 0,
\end{align}
since $R$ has no zero divisors. Obviously this implies that there exists at least one point $(s_1,\dotsc, s_n)\in S_1\times\dotsm \times S_n$ where
$f$ does not vanish.
\end{proof}
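A brute-force check of the statement just proved, for a small illustrative polynomial over $\mathbb{Z}$: $f(x,y)=x^2y-xy+3$ has unique top-degree term $x^2y$ (so $d_1=2$, $d_2=1$), and the theorem promises a nonvanishing point on every $3\times 2$ grid of integers. The grids chosen below are arbitrary.

```python
from itertools import product

def nonvanishing_point(f, S1, S2):
    """Return a point of S1 x S2 where f is nonzero, or None."""
    for s1, s2 in product(S1, S2):
        if f(s1, s2) != 0:
            return (s1, s2)
    return None

f = lambda x, y: x * x * y - x * y + 3   # top-degree term x^2 y, coefficient 1
for S1 in [(-1, 0, 1), (0, 2, 5), (-4, -3, 7)]:      # |S1| = d1 + 1 = 3
    for S2 in [(0, 1), (-2, 3), (1, 4)]:             # |S2| = d2 + 1 = 2
        assert nonvanishing_point(f, S1, S2) is not None
print("every grid contains a nonvanishing point")
```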
\section{Concluding question}
Is there any interesting use for the fact that even in the
case of integral domains the coefficients of Kouba's map can be explicitly
expressed in terms of cofactors of the matrices $(s^{i-1}_j)$?
\section*{Acknowledgement} The author is very grateful to the department M9 of
Technische Universit\"{a}t M\"{u}nchen for excellent working conditions.
\bibliographystyle{amsplain}
% arXiv:1001.0548, Commutative Algebra (math.AC), 2010-01-04.
% arXiv:1906.05908
\title{Perfect matchings and derangements on graphs}
\begin{abstract}
We show that each perfect matching in a bipartite graph $G$ intersects at least half of the perfect matchings in $G$. This result has equivalent formulations in terms of the permanent of the adjacency matrix of a graph, and in terms of derangements and permutations on graphs. We give several related results and open questions.
\end{abstract}
\section{Introduction}
Our main result concerns perfect matchings in bipartite graphs.
\begin{theorem}\label{th:matchingInBipartiteGraph}
Let $G$ be a bipartite graph having a perfect matching $M$.
Then $M$ has non-empty intersection with at least half of the perfect matchings in $G$.
\end{theorem}
The value of $1/2$ in this conclusion cannot be improved, since an even cycle has exactly two perfect matchings, and they are disjoint. Moreover, the hypothesis that $G$ be bipartite is necessary (e.g., $K_4$ has three perfect matchings, which are pairwise disjoint). In fact, it turns out that for general graphs, the behavior is quite different.
\begin{theorem}\label{th:matchingInGeneral}
Let $G$ be a graph on $2n$ vertices having a perfect matching $M$. Then $M$ has non-empty intersection with at least $1/(2^{n-1}+1)$ of the perfect matchings in $G$. Moreover, if $n$ is even there is a $3$-regular graph on $2n$ vertices having $2^{n/2} + 1$ perfect matchings, one of which is disjoint from all the others.
\end{theorem}
This work was originally motivated as a study of derangements and permutations on graphs, as introduced by Clark \cite{clark2013graph}, and this approach leads to several related questions.
Let $G=(V,E)$ be a directed, loopless graph.
A derangement on $G$ is a bijection $\sigma:V \rightarrow V$ such that $(v,\sigma(v)) \in E$ for all $v \in V$.
A permutation on $G$ is a bijection $\sigma:V \rightarrow V$ such that either $(v, \sigma(v)) \in E$ or $v = \sigma(v)$ for all $v \in V$.
For any directed graph $G$, denote by $(d/p)_G$ the ratio of the number of derangements on $G$ to the number of permutations on $G$.
We also consider derangements and permutations on undirected graphs, by treating them as directed graphs for which $(u,v)$ is an edge if and only if $(v,u)$ is an edge.
From these definitions, it is easy to see that $(d/p)_{K_n}$ is the probability that a uniformly random permutation on $[n]$ is a derangement.
A classic application of inclusion-exclusion shows that
$$\lim_{n \rightarrow \infty}(d/p)_{K_n} = 1/e.$$
Many graphs do not admit any derangements, and hence $(d/p)$ may be as small as $0$.
In the other direction, if $C$ is a directed cycle, then there is one derangement and two permutations on $C$, and hence $(d/p)_C = 1/2$.
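These small counts are easy to confirm by exhaustive search directly from the definitions. A minimal sketch (function names are ad hoc): the directed $5$-cycle gives one derangement (the rotation) and two permutations (rotation plus identity), while the complete digraph on $4$ vertices gives the classical $9$ derangements out of $24$ permutations.

```python
from itertools import permutations

def count_d_and_p(n, edges):
    """Numbers of derangements and permutations on a loopless digraph on range(n)."""
    E = set(edges)
    der = per = 0
    for sigma in permutations(range(n)):
        der += all((v, sigma[v]) in E for v in range(n))
        per += all(sigma[v] == v or (v, sigma[v]) in E for v in range(n))
    return der, per

cycle5 = [(i, (i + 1) % 5) for i in range(5)]
print(count_d_and_p(5, cycle5))        # one derangement, two permutations
K4 = [(i, j) for i in range(4) for j in range(4) if i != j]
print(count_d_and_p(4, K4))            # derangements and permutations of [4]
```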
Theorem \ref{th:matchingInBipartiteGraph} is equivalent to the claim that $(d/p)$ is never larger than $1/2$.
\begin{restatable}{theorem}{ratioHalf}\label{thm:ratioHalf}
If $G$ is a loopless directed graph, then
$$(d/p)_G \leq 1/2,$$
with equality if and only if $G$ is a directed cycle.
\end{restatable}
Let us see that the claim that $(d/p)_G \leq 1/2$ in Theorem \ref{thm:ratioHalf} is equivalent to Theorem \ref{th:matchingInBipartiteGraph}.
Given a directed graph $G$, we construct a bipartite graph $G'$ as follows.
For each vertex $v \in V(G)$, we have a left vertex $v_l$ and a right vertex $v_r$ in $V(G')$.
We have $\{u_l, v_r\} \in E(G')$ if and only if either $(u,v) \in E(G)$ or $u =v$.
Then, each permutation on $G$ corresponds to a perfect matching in $G'$, and a derangement on $G$ corresponds to a perfect matching in $G'$ that does not use any edge of the form $\{v_l, v_r\}$.
Hence, Theorem \ref{thm:ratioHalf} is equivalent to the claim that the matching $\{ \{v_l, v_r\} : v \in V(G)\}$ in $G'$ intersects at least half of the perfect matchings in $G'$.
We can obtain the full statement of Theorem \ref{th:matchingInBipartiteGraph} by applying a permutation to the labeling of the right vertices of $G'$.
The formulation in terms of derangements and permutations on graphs leads to several other natural questions.
In particular, we investigate the ratio $(d/p)$ for restricted families of graphs.
In the case that $G$ is regular and very dense, it turns out that $(d/p)_G$ is always close to $1/e$, as it is for complete graphs.
\begin{theorem}\label{thm:veryDense}
Let $\mathcal{G} = \{G_2, G_3, \ldots\}$ be an infinite family of directed graphs, so that $G_n$ is $k_n$-regular on $n$ vertices.
Suppose that
$k_n = n - o(n/\log(n))$.
Then,
$$\lim_{n \rightarrow \infty} (d/p)_{G_n} = 1/e.$$
\end{theorem}
By Theorem \ref{thm:ratioHalf}, $(d/p) = 1/2$ is attained only for directed cycles.
A natural question is whether it is possible for $(d/p)$ to be nearly $1/2$ for dense graphs.
It turns out that there are graphs having positive edge density and $(d/p)$ arbitrarily close to $1/2$.
\begin{theorem}\label{thm:blowupExample}
For any $\varepsilon > 0$, there is a constant $c_\varepsilon > 0$, an infinite set $I$ of positive integers, and a family of graphs $\{G_n : n \in I\}$ such that $|V(G_n)| = n$, $|E(G_n)| \geq c_\varepsilon n^2$, and $\lim_{n \rightarrow \infty} (d/p)_{G_n}$ exists and is strictly larger than $1/2 - \varepsilon$.
\end{theorem}
We are also interested in how large $(d/p)_G$ can be for an undirected graph $G$.
In Section \ref{sec:constructions}, we show that $(d/p)_{K_{n,n}} > (d/p)_{K_{2n}}$.
However, in contrast to the case for directed graphs, this is essentially the only example we have found of an undirected graph $G$ with $(d/p)_G > (d/p)_{K_n}$.
We propose the following conjecture on the matter.
\begin{conjecture}\label{conj:undirected}
Let $G$ be an undirected graph on $n$ vertices.
If $n$ is even, then
$$(d/p)_G \leq (d/p)_{K_{n/2,n/2}},$$
with equality only for $K_{n/2,n/2}$.
\end{conjecture}
In the case that $n$ is odd, we don't know of any graphs $G$ on $n$ vertices with $(d/p)_G > (d/p)_{K_n}$; in particular, we don't know of any way to add a vertex to $K_{n,n}$ without substantially decreasing the ratio $d/p$.
The following theorem provides some evidence for Conjecture \ref{conj:undirected}.
\begin{restatable}{theorem}{undirectedBipartite}\label{thm:undirectedBipartite}
Let $G$ be a balanced bipartite graph with $n$ vertices in each part.
Then,
$$(d/p)_G \leq \frac{1}{\sum_{k=0}^{n} k!^{-2}},$$
with equality if and only if $G$ is a complete bipartite graph.
\end{restatable}
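The equality case can be confirmed exhaustively for small $n$: treating each undirected edge as a pair of opposite arcs, the sketch below checks that $K_{k,k}$ attains the stated bound for $k \leqslant 3$. All names are illustrative.

```python
from itertools import permutations
from math import factorial

def count_d_and_p_undirected(n, edges):
    """Derangement/permutation counts on an undirected graph on range(n)."""
    E = set(edges) | {(v, u) for u, v in edges}
    der = per = 0
    for sigma in permutations(range(n)):
        der += all((v, sigma[v]) in E for v in range(n))
        per += all(sigma[v] == v or (v, sigma[v]) in E for v in range(n))
    return der, per

for k in [1, 2, 3]:
    edges = [(i, k + j) for i in range(k) for j in range(k)]   # K_{k,k}
    d, p = count_d_and_p_undirected(2 * k, edges)
    bound = 1 / sum(1 / factorial(t) ** 2 for t in range(k + 1))
    print(k, d, p, d / p, bound)
    assert abs(d / p - bound) < 1e-9
```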
\begin{comment}
\subsection{Equivalent formulations}
The questions about derangements and permutations on graphs studied here can be restated as questions about the permanents of matrices, or about perfect matchings.
In particular, the proofs in Sections \ref{sec:veryDense} and \ref{sec:undirectedBipartite} depend on the following reformulation in terms of the permanents of matrices.
The permanent of an $n \times n$ matrix $A = [a_{ij}]$ is
$$\per(A) = \sum_\sigma a_{1 \sigma(1)} a_{2 \sigma(2)} \ldots a_{n \sigma(n)},$$
where the sum runs over all permutations of $[n]$.
If $A$ is the adjacency matrix of a directed graph $G$, and $\sigma$ is a fixed permutation, then
$\prod_i a_{i \sigma(i)} = 1$
if and only if $(i,\sigma(i)) \in G$ for each $i \in [n]$, and $\prod_i a_{i \sigma(i)} =0$ otherwise.
Hence, $\prod_i a_{i \sigma(i)} = 1$ if and only if $\sigma$ is a derangement on $G$, and so the permanent of $A$ is equal to the number of derangements on $G$.
Similarly, the permanent of $A+I$, where $I$ is the identity matrix, is the number of permutations on $G$.
\begin{observation}\label{obs:deragementPermanent}
For a directed, loopless graph $G$ with adjacency matrix $A$,
$$(d/p)_G = \frac{\per(A)}{\per(A + I)}.$$
\end{observation}
Using this observation, we can restate Theorem \ref{thm:ratioHalf} in terms of permanents of $(0,1)$-matrices.
\begin{corollary}
Let $A$ be an $n \times n$, $(0,1)$-matrix such that $a_{ii} = 0$ for each $i \in [n]$.
Then,
$$\frac{\per(A)}{\per(A + I)} \leq \frac{1}{2}.$$
\end{corollary}
It is a well-known fact that the number of perfect matchings on a balanced bipartite graph is equal to the permanent of its biadjacency matrix, and we also relate graph derangements and permutations to perfect matchings.
Given a directed, loopless graph $G$ on vertex set $[n]$, construct a bipartite graph $G'$ on $[n] \sqcup [n]$ so that $(i,j) \in G'$ if and only if $(i,j) \in G$.
Then, a derangement on $G$ is exactly a perfect matching in $G'$.
Since $G$ is loopless, there are no edges $(i,i) \in G'$, and permutations on $G$ correspond to perfect matchings in the graph $G' \cup D$, where $D = \{(i,i):i \in [n]\}$.
From this, the following observation is immediate.
We denote by $\match(G)$ the number of perfect matchings in $G$.
\begin{observation}
Let $G$, $G'$, and $D$ be as in the previous paragraph.
Then,
$$(d/p)_G = \frac{\match(G')}{\match(G' \cup D)}.$$
\end{observation}
Note that we can permute the labels of the right vertices of $G'$ without affecting the number of perfect matchings.
Hence, we have the following corollary of Theorem \ref{thm:ratioHalf}.
\begin{corollary}
Let $G$ be a bipartite graph, and let $M$ be a perfect matching in the complement of $G$ that respects a $2$-coloring of $G$.
Then,
$$\frac{\match(G)}{\match(G \cup M)} \leq \frac{1}{2}.$$
\end{corollary}
If $G$ is an undirected bipartite graph, then the graph $G'$ obtained by the above construction is the disjoint union of two copies of $G$.
Hence, in this case, $\match(G') = \match(G)^2.$
This leads to the following observation.
\begin{observation}
If $G$ is an undirected bipartite graph, then the number of derangements on $G$ is equal to the square of the number of perfect matchings in $G$.
\end{observation}
\end{comment}
In the case of random graphs and digraphs, we show that the numbers of derangements and permutations are tightly concentrated about their means. This can be derived either from the arguments of Frieze and Jerrum \cite{frieze1995analysis} on the permanents of random matrices or from the more general framework of Janson \cite{janson1994numbers} who proved limit distribution laws for a wide range of subgraph counts.
For a probability $q$, we use $G_{n,q}$ (resp.\ $DG_{n,q}$) to denote the Erd\H{o}s--R\'enyi random graph (resp.\ digraph) on $n$ vertices where each edge (resp.\ arc) is included independently with probability $q$. It turns out that a great deal can be said about the ratio $(d/p)$ for these random structures (following \cite{janson1994numbers}); however, for both brevity and accessibility we'll restrict ourselves to the following.
\begin{proposition}\label{th:randomRatios}
For each $\delta > 0$, there is a sequence $\varepsilon_n \to 0$ satisfying the following. If $q = q_n \in (\delta, 1]$, then with probability tending to $1$ as $n \to \infty$,
\begin{eqnarray*}
e^{-1/q} (1 - \varepsilon_n) \leq &(d/p)_{G_{n,q}}& \leq e^{-1/q} (1 + \varepsilon_n), \qquad \text{and}\\
e^{-1/q} (1 - \varepsilon_n) \leq &(d/p)_{DG_{n,q}}& \leq e^{-1/q} (1 + \varepsilon_n).
\end{eqnarray*}
\end{proposition}
Thus a large random (di)graph with edge density $q$ will have $(d/p) \sim e^{-1/q}$. Setting $q=1/2$ shows that for large $n$, the vast majority of graphs and digraphs have $(d/p)$ tending to $e^{-2}$. Taking $q \to 1$, we obtain a randomized analog of Theorem \ref{thm:veryDense}. Moreover, the behavior for random graphs provides some (mild) additional evidence for Conjecture \ref{conj:undirected}.
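As an illustration (not part of the proof), the ratio $(d/p)$ of a single sampled digraph can be computed exactly via Ryser's permanent formula. The helper \texttt{ryser\_per} below is a direct transcription of that formula; the final assertion checks only the universal bound of Theorem \ref{thm:ratioHalf}, and the value itself can be compared against $e^{-1/q}$.

```python
import random
from math import exp

def ryser_per(A):
    # Ryser's formula: per(A) = (-1)^n * sum over nonempty column sets S
    # of (-1)^{|S|} * prod_i (sum_{j in S} A[i][j]).
    n = len(A)
    total = 0
    for mask in range(1, 1 << n):
        prod = 1
        for i in range(n):
            s = 0
            for j in range(n):
                if mask >> j & 1:
                    s += A[i][j]
            prod *= s
            if prod == 0:
                break
        total += (-1) ** bin(mask).count("1") * prod
    return (-1) ** n * total

random.seed(0)
n, q = 10, 0.8
# One sampled digraph DG_{n,q}: loopless, each arc included with probability q.
A = [[int(i != j and random.random() < q) for j in range(n)] for i in range(n)]
AI = [[A[i][j] + (i == j) for j in range(n)] for i in range(n)]
ratio = ryser_per(A) / ryser_per(AI)   # (d/p) for this digraph
assert 0 <= ratio <= 0.5               # guaranteed by the main theorem
print(ratio, exp(-1 / q))              # compare with e^{-1/q}
```

Ryser's formula needs only $O(2^n n^2)$ time, so exact ratios are available well beyond the reach of naive enumeration over permutations.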
\subsection{Prior work}
Derangements and permutations on graphs were introduced by Clark \cite{clark2013graph}, who considered mainly the cycle structure of graph derangements.
Penrice \cite{penrice1991derangements} (using a slightly different terminology) investigated the number of derangements on $n$-partite graphs with $t$ vertices in each part, for $t$ fixed and $n$ large.
His result is a special case of Theorem \ref{thm:veryDense}, and his proof is based on the same ideas used in the proof of Theorem \ref{thm:veryDense}.
The literature on perfect matchings in bipartite graphs is extensive. For a general background on this, see the textbook of Lov\'asz and Plummer \cite{lovasz2009matching}.
\begin{comment}
Other restricted families of permutations have been investigated, but these topics are not related to our notion of derangements and permutations on graphs in any simple way.
The question of determining the number of permutations that avoid particular patterns has been very fruitful, e.g. \cite{simion1985restricted}.
Geometric permutations are those that can be realized as a line transversal of convex sets, \cite{katchalski1985geometric}.
\end{comment}
\subsection{Organization of the paper}
The construction for Theorem \ref{thm:blowupExample} is in Section \ref{sec:constructions}.
The complete bipartite graph is a special case of this construction.
Section \ref{sec:veryDense} contains the proof of Theorem \ref{thm:veryDense}.
Section \ref{sec:undirectedBipartite} contains the proof of Theorem \ref{thm:undirectedBipartite}.
Theorem \ref{thm:ratioHalf} is proved in Section \ref{sec:arbitrary}, which also includes the statement and proof of a result on Hamilton cycles in directed graphs (Corollary \ref{thm:hamilton}). In Section \ref{sec:matchingsInGeneralGraphs} we prove Theorem \ref{th:matchingInGeneral}, and we discuss random graphs and digraphs in Section \ref{sec:randomGraphs}.
Some open problems are listed in Section \ref{sec:openProblems}.
\section{Constructions}\label{sec:constructions}
In this section, we compute $(d/p)$ for blowups of directed graphs.
These provide the examples described in Theorem \ref{thm:blowupExample}.
A special case is the complete, balanced, undirected bipartite graph that plays an important role in Conjecture \ref{conj:undirected} and Theorem \ref{thm:undirectedBipartite}.
Let $D_{k,l}$, for $k \geq 1$ and $l \geq 2$, be the directed graph on vertices $v_{ij}$ for $i \in [k]$ and $j \in [l]$, such that $(v_{ij}, v_{i'j'}) \in D_{k,l}$ if and only if $j' = j+1 \mod l$.
Note that $D_{1,n}$ is the directed cycle on $n$ vertices, and $D_{n,2}$ is the undirected complete bipartite graph on $[n] \sqcup [n]$.
Figure \ref{fig:D25} shows $D_{2,5}$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=1]{D25}
\end{center}
\caption{$D_{2,5}$} \label{fig:D25}
\end{figure}
\begin{proposition}\label{thm:construction}
The number of derangements on $D_{k,l}$ is $(k!)^l$, and the number of permutations on $D_{k,l}$ is
$$ \sum_{i=0}^k \left( \binom{k}{i} (k-i)! \right)^l.$$
Consequently,
$$ (d/p)_{D_{k,l}} = \left( \sum_{i=0}^k \frac{1}{(i!)^l} \right)^{-1}.$$
\end{proposition}
\begin{proof}
The graph $D_{k,l}$ is organized into $l$ parts, with $k$ vertices in each part.
In any derangement on $D_{k,l}$, each vertex in part $j$ has a single out-neighbor in part $j+1 \mod l$, and no two vertices have the same out-neighbor.
The claim on the number of derangements follows immediately.
If $P$ is a permutation on $D_{k,l}$, the number of fixed points of $P$ in each part of $D_{k,l}$ must be equal, since the non-fixed vertices of part $j$ map bijectively onto the non-fixed vertices of part $j+1 \mod l$.
Hence, to count the number of permutations with $m$ fixed points, it is enough to count the number of ways to choose $m/l$ fixed points in each part, and multiply by the number of derangements on the remaining elements.
This is the formula given above.
\end{proof}
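As a sanity check on Proposition \ref{thm:construction} (purely illustrative, with a hypothetical helper \texttt{dp\_counts}), one can enumerate all permutations on small blowups by brute force:

```python
from itertools import permutations

def dp_counts(k, l):
    # Brute-force counts of (derangements, permutations) on D_{k,l}.
    # Vertices are pairs (i, j) with 0 <= i < k, 0 <= j < l; arcs go from
    # part j to part (j+1) mod l, between any pair of rows.
    verts = [(i, j) for i in range(k) for j in range(l)]
    ders = perms = 0
    for sigma in permutations(verts):
        image = dict(zip(verts, sigma))
        if all(v == image[v] or image[v][1] == (v[1] + 1) % l for v in verts):
            perms += 1
            ders += all(v != image[v] for v in verts)
    return ders, perms

# Proposition: derangements = (k!)^l, permutations = sum_i (C(k,i)(k-i)!)^l.
assert dp_counts(2, 3) == (8, 17)   # (d/p)_{D_{2,3}} = 8/17 = 1/(1 + 1 + (1/2)^3)
assert dp_counts(1, 4) == (1, 2)    # the directed 4-cycle: (d/p) = 1/2
```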
Since
$e = \lim_{n \rightarrow \infty} \sum_{i=0}^n \frac{1}{i!}$
and $K_{n,n} = D_{n,2}$,
it follows from Proposition \ref{thm:construction} that $\lim_{n \rightarrow \infty} (d/p)_{K_{n,n}} > 1/e$.
For small values of $n$, it is easy to calculate that $(d/p)_{K_{n,n}} > (d/p)_{K_{2n}}$, and we expect this inequality to hold for all $n>1$, consistent with Conjecture \ref{conj:undirected}.
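Assuming the definitions of derangements and permutations on undirected graphs used throughout (a permutation fixes each vertex or moves it along an edge), the case $n=2$ of this comparison can be verified by brute force:

```python
from itertools import permutations

def dp_undirected(verts, edge_list):
    # Brute-force counts of (derangements, permutations) on an undirected
    # graph: sigma fixes each vertex or moves it along an edge.
    edges = {frozenset(e) for e in edge_list}
    ders = perms = 0
    for sigma in permutations(verts):
        image = dict(zip(verts, sigma))
        if all(v == image[v] or frozenset((v, image[v])) in edges for v in verts):
            perms += 1
            ders += all(v != image[v] for v in verts)
    return ders, perms

K22 = dp_undirected(["L1", "L2", "R1", "R2"],
                    [("L1", "R1"), ("L1", "R2"), ("L2", "R1"), ("L2", "R2")])
K4 = dp_undirected([0, 1, 2, 3],
                   [(i, j) for i in range(4) for j in range(i + 1, 4)])
assert K22 == (4, 9) and K4 == (9, 24)
# (d/p)_{K_{2,2}} = 4/9 > 9/24 = (d/p)_{K_4}
assert K22[0] * K4[1] > K4[0] * K22[1]
```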
In light of Theorem \ref{thm:ratioHalf}, it is natural to wonder if it is possible for $(d/p)_G$ to be very close to $1/2$ if $G$ is not a directed cycle.
Choosing $l$ to be a large constant shows that this is the case.
In particular, Theorem \ref{thm:blowupExample} follows from Proposition \ref{thm:construction} by taking $G_n = D_{n/l,l}$ for a suitably chosen constant $l$ depending on $\varepsilon$.
\section{Very dense graphs}\label{sec:veryDense}
In this section, we prove Theorem \ref{thm:veryDense}.
The proof is based on two standard bounds on the permanents of matrices: the Minc-Br\'egman inequality (proved by Br\'egman in \cite{bregman1973some}) and the van der Waerden conjecture (proved independently by Egorychev in \cite{egorychev1981solution} and by Falikman in \cite{falikman1981proof}).
Recall that the permanent of an $n \times n$ matrix $A = [a_{ij}]$ is
$$\per(A) = \sum_\sigma a_{1 \sigma(1)} a_{2 \sigma(2)} \ldots a_{n \sigma(n)},$$
where the sum runs over all permutations of $[n]$.
If $A$ is the adjacency matrix of a directed graph $G$ and $\sigma$ is a fixed permutation, then
$\prod_i a_{i \sigma(i)} = 1$
if $(i,\sigma(i)) \in G$ for each $i \in [n]$, and $\prod_i a_{i \sigma(i)} = 0$ otherwise.
Since $G$ is loopless, we have $\prod_i a_{i \sigma(i)} = 1$ if and only if $\sigma$ is a derangement on $G$, and so the permanent of $A$ is equal to the number of derangements on $G$.
Similarly, the permanent of $A+I$, where $I$ is the identity matrix, is the number of permutations on $G$.
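For instance, for the directed $3$-cycle one can check directly that $\per(A)=1$ counts the single derangement and $\per(A+I)=2$ counts the two permutations; the following illustrative sketch uses a naive permanent:

```python
from itertools import permutations

def per(M):
    # Naive permanent: sum over all permutations sigma of prod_i M[i][sigma(i)].
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i, j in enumerate(sigma):
            prod *= M[i][j]
        total += prod
    return total

# Adjacency matrix of the directed 3-cycle 1 -> 2 -> 3 -> 1.
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
AI = [[A[i][j] + (i == j) for j in range(3)] for i in range(3)]
assert per(A) == 1    # one derangement: the cycle itself
assert per(AI) == 2   # two permutations: the cycle and the identity
```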
Using the connection between derangements and permanents, Theorem \ref{thm:veryDense} follows as an immediate corollary of the following stronger theorem.
\begin{theorem}\label{thm:denseGraphsPermanentVersion}
Let $I$ be an infinite set of positive integers, and let $\{A_n: n \in I\}$ and $\{B_n: n \in I\}$ be families of $n \times n$ matrices with entries in $\{0,1\}$.
Suppose that the sum of entries in each row and each column of $A_n$ is $k_n$, and that the sum of entries in each row and each column of $B_n$ is $k_{n}+1$.
Further, suppose that
$k_n = n - o(n/\log(n))$.
Then,
$$\lim_{n \rightarrow \infty} \frac{\per(A_n)}{\per(B_n)} = 1/e.$$
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm:denseGraphsPermanentVersion}]
We use two standard estimates on the permanent of a matrix.
First, the Minc-Br\'egman inequality \cite{bregman1973some} states that, if $A$ is an $n \times n$ $(0,1)$-matrix with all row sums equal to $k$, then
$$\per(A) \leq k!^{n/k},$$
with equality attained only by matrices that are, up to row and column permutations, block-diagonal with all-ones $k \times k$ blocks.
Second, the van der Waerden conjecture, proved independently in \cite{falikman1981proof,egorychev1981solution}, states that, if $A$ is a doubly stochastic matrix, then
$\per(A) \geq n! / n^n$,
with equality attained only by $\frac{1}{n}J$, where $J$ is the all-ones matrix.
An immediate corollary to this is that, for a matrix $A$ with row and column sums equal to $k$, we have $$\per(A) \geq n! (k / n)^n.$$
Notice the upper and lower bounds meet for $k=n$.
What remains is a calculation using Stirling's approximation.
In what follows, $f(n) \sim g(n)$ means $\lim_{n \rightarrow \infty} (f(n)/g(n)) = 1$, and we denote $k=k_n$.
First, we show that $\lim_{n \rightarrow \infty} \per(A_n) / \per(B_n) \leq 1/e$.
\begin{align*}
\per(A_n)/ \per(B_n) &\leq \frac{k!^{n/k} n^n}{n!(k+1)^n}, \\
&\sim \frac{\sqrt{2 \pi k}^{n/k}}{\sqrt{2 \pi n}} \frac{ k^n}{ (k+1)^n}, \\
&\leq (2 \pi n)^{(n-k)/2k} \frac{ k^n}{ (k+1)^n}, \\
&= (2 \pi n)^{o(1/\log n)} \frac{k^n}{(k+1)^n}, \\
&\sim 1/e.
\end{align*}
In the last line, we use that $(k/(k+1))^n \sim 1/e$ for any $k = (1-o(1))n$, and that $n^{1/\log n} = e$ is a constant, so $(2 \pi n)^{o(1/\log n)} \sim 1$.
Next, we show that $\lim_{n \rightarrow \infty} \per(A_n) / \per(B_n) \geq 1/e$.
\begin{align*}
\per(A_n)/ \per(B_n) &\geq \frac{n!k^n}{(k+1)!^{n/(k+1)} n^n}, \\
&\sim \frac{\sqrt{2 \pi n}}{\sqrt{2 \pi (k+1)}^{n/(k+1)}} \frac{ k^n}{ (k+1)^n}, \\
&\geq \frac{\sqrt{2 \pi n}}{\sqrt{2 \pi n}^{\,n/(k+1)}} \frac{ k^n}{ (k+1)^n} = (2 \pi n)^{-(n-k-1)/(2(k+1))} \frac{ k^n}{ (k+1)^n}, \\
&\sim \frac{1}{(2 \pi n)^{o(1/\log n)}} (1/e), \\
&\sim 1/e.
\end{align*}
\end{proof}
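To illustrate the two permanent bounds used in the proof (an illustration only; the matrix below is one arbitrary choice), consider a $6 \times 6$ circulant $(0,1)$-matrix with row and column sums $k=3$: its permanent must lie between $n!(k/n)^n = 11.25$ and $k!^{n/k} = 36$.

```python
from itertools import permutations
from math import factorial

def per(M):
    # Naive permanent over all permutations.
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i, j in enumerate(sigma):
            prod *= M[i][j]
        total += prod
    return total

n, k = 6, 3
# Circulant (0,1)-matrix: row i has ones in columns i, i+1, i+2 (mod n),
# so all row and column sums equal k.
A = [[1 if (j - i) % n < k else 0 for j in range(n)] for i in range(n)]
assert all(sum(row) == k for row in A)
lower = factorial(n) * (k / n) ** n   # van der Waerden bound: n!(k/n)^n
upper = factorial(k) ** (n / k)      # Minc--Bregman bound: k!^{n/k}
assert lower <= per(A) <= upper
```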
\section{Undirected bipartite graphs}\label{sec:undirectedBipartite}
In this section, we prove Theorem \ref{thm:undirectedBipartite}.
As in Section \ref{sec:veryDense}, we depend on the connection between derangements on graphs and permanents of matrices.
In what follows, for any $n \times n$ matrix $M$ and sets $S,S' \subset [n]$, we denote by $M(S,S')$ the sub-matrix of $M$ consisting of rows in $S$ and columns in $S'$, and denote by $M(\overline{S}, \overline{S'})$ the sub-matrix of $M$ with rows in $S$ and columns in $S'$ deleted.
We denote by $M(\overline{i}, \overline{j})$ the sub-matrix of $M$ obtained by deleting only row $i$ and column $j$.
For any graph $G$, we denote by $p_G$ the number of permutations on $G$, and by $d_G$ the number of derangements on $G$.
First, we need the following lemma on the permanent of an arbitrary matrix.
\begin{lemma}\label{thm:subpermanentFormula}
Let $M$ be an $n \times n$ matrix, and let $0 \leq k \leq n$.
Then,
$$\binom{n}{k} \per(M) = \sum_{S, S' \in \binom{[n]}{k}} \per(M(S,S')) \per(M(\overline{S},\overline{S'})).$$
\end{lemma}
\begin{proof}
Let $S \in \binom{[n]}{k}$.
For any $S' \in \binom{[n]}{k}$, let $\Sigma_{S'}$ be the set of permutations of $[n]$ such that $\sigma(i) \in S'$ for all $i \in S$.
For each permutation $\sigma$ of $[n]$, there is a unique set $S' \in \binom{[n]}{k}$ such that $\sigma(i) \in S'$ for all $i \in S$, and hence the sets $\Sigma_{S'}$ partition the set of all permutations.
\begin{align*}
\per(M) &= \sum_{\sigma} \prod_i m_{i \sigma(i)}, \\
&= \sum_{S' \in \binom{[n]}{k}} \sum_{\sigma \in \Sigma_{S'}}\prod_{i \in S}m_{i \sigma(i)} \prod_{i \in \overline{S}} m_{i \sigma(i)}.
\end{align*}
Let $\Sigma_{S'}^1$ be the set of bijections between $S$ and $S'$, and let $\Sigma_{S'}^2$ be the set of bijections between $\overline{S}$ and $\overline{S'}$.
Each $\sigma \in \Sigma_{S'}$ corresponds to exactly one pair of functions $\sigma_1 \in \Sigma_{S'}^1$ and $\sigma_2 \in \Sigma_{S'}^2$, so that $\sigma(i) = \sigma_1(i)$ for $i \in S$ and $\sigma(j) = \sigma_2(j)$ for $j \in \overline{S}$.
Hence,
\begin{align*}
\per(M) &= \sum_{S' \in \binom{[n]}{k}}\left( \sum_{\sigma \in \Sigma_{S'}^1}\prod_{i \in S}m_{i \sigma(i)} \right) \left(\sum_{\sigma \in \Sigma_{S'}^2}\prod_{i \in \overline{S}} m_{i \sigma(i)} \right),\\
&= \sum_{S' \in \binom{[n]}{k}} \per(M(S,S')) \per(M(\overline{S}, \overline{S'})).
\end{align*}
Summing this identity over all $\binom{n}{k}$ choices of $S$ yields the lemma.
\end{proof}
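Lemma \ref{thm:subpermanentFormula} is easy to confirm numerically on a small example (illustrative only; the matrix is arbitrary):

```python
from itertools import permutations, combinations
from math import comb

def per(M):
    # Naive permanent over all permutations.
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i, j in enumerate(sigma):
            prod *= M[i][j]
        total += prod
    return total

def sub(M, rows, cols):
    # Submatrix with the given rows and columns, in order.
    return [[M[i][j] for j in cols] for i in rows]

M = [[1, 0, 1, 1],
     [1, 1, 0, 0],
     [0, 1, 1, 1],
     [1, 0, 0, 1]]
n, k = 4, 2
rhs = sum(
    per(sub(M, S, S1)) *
    per(sub(M, [i for i in range(n) if i not in S],
            [j for j in range(n) if j not in S1]))
    for S in combinations(range(n), k)
    for S1 in combinations(range(n), k)
)
assert comb(n, k) * per(M) == rhs   # binom(n,k) per(M) = sum of products
```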
We will need to break the permutations up by their fixed points.
\begin{lemma}\label{thm:permutationSum}
If $G$ is a directed, loopless graph on $[n]$ with adjacency matrix $M$, then
$$p_G = \sum_{S \subseteq [n]} \per(M(S,S)).$$
If $G$ is an undirected bipartite graph on $[n] \sqcup [n]$ with biadjacency matrix $M$, then
$$p_G = \sum_{S, S' \subseteq [n]} \per(M(S,S'))^2.$$
\end{lemma}
\begin{proof}
Each term on the right side of the equality counts the number of permutations on $G$ with the fixed points $\overline{S}$.
In the case of a bipartite graph, the fixed points occur in pairs; $\overline{S}$ is the set of fixed points among the left vertices, and $\overline{S'}$ is the set of fixed points among the right vertices.
\end{proof}
We're now ready to prove the theorem.
For convenience, we recall it here.
\undirectedBipartite*
\begin{proof}
Let $M$ be the biadjacency matrix of $G$.
A derangement on $G$ restricts to a perfect matching from the left part to the right part and, independently, to a perfect matching from the right part to the left part, so $d_G = \per(M)^2$.
Applying Lemma \ref{thm:subpermanentFormula}, we have, for each $k$ in the range $0 \leq k \leq n$, that
\begin{align*}
d_G &= \per(M)^2,\\
&= \binom{n}{k}^{-2} \left (\sum_{S,S' \in \binom{[n]}{k}} \per(M(S,S')) \per(M(\overline{S},\overline{S'})) \right)^2.
\end{align*}
Note that $\per(M(S,S'))\leq k!$, with equality holding for all $S,S'$ if and only if $G=K_{n,n}$.
Hence,
$$d_G \leq k!^2 \binom{n}{k}^{-2} \left (\sum_{S,S' \in \binom{[n]}{k}} \per(M(\overline{S},\overline{S'})) \right)^2.$$
By the Cauchy-Schwarz inequality,
$$\left (\sum_{S,S' \in \binom{[n]}{k}} \per(M(\overline{S},\overline{S'})) \right)^2 \leq \sum_{S,S' \in \binom{[n]}{k}} \per(M(\overline{S},\overline{S'}))^2 \sum_{S,S' \in \binom{[n]}{k}} 1.$$
Note that this inequality is tight if $G=K_{n,n}$.
Substituting back into the expression for $d_G$, we get
\begin{equation}\label{eq:d_G} d_G \leq k!^2 \sum_{S,S' \in \binom{[n]}{k}} \per(M(\overline{S},\overline{S'}))^2.\end{equation}
On the other hand, we have by Lemma \ref{thm:permutationSum} that
\begin{equation}\label{eq:p_G}
p_G = \sum_{k=0}^n \sum_{S, S' \in \binom{[n]}{k}} \per(M(\overline{S}, \overline{S'}))^2.
\end{equation}
Combining (\ref{eq:d_G}) and (\ref{eq:p_G}), we have
$$\frac{p_G}{d_G} \geq \sum_{k=0}^n k!^{-2},$$
with equality if and only if $G=K_{n,n}$.
\end{proof}
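The equality case of the theorem can be checked by brute force for $K_{2,2}$, where $d_G = 4$, $p_G = 9$, and $p_G/d_G = \sum_{k=0}^{2} k!^{-2} = 9/4$ (illustrative sketch):

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

verts = ["L1", "L2", "R1", "R2"]
edges = {frozenset((a, b)) for a in ("L1", "L2") for b in ("R1", "R2")}  # K_{2,2}
d = p = 0
for sigma in permutations(verts):
    image = dict(zip(verts, sigma))
    # A permutation on the graph fixes each vertex or moves it along an edge.
    if all(v == image[v] or frozenset((v, image[v])) in edges for v in verts):
        p += 1
        d += all(v != image[v] for v in verts)
assert (d, p) == (4, 9)
bound = sum(Fraction(1, factorial(k) ** 2) for k in range(3))   # 1 + 1 + 1/4
assert Fraction(p, d) == bound   # equality, as predicted for K_{n,n}
```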
\section{Arbitrary directed graphs}\label{sec:arbitrary}
In this section, we prove Theorem \ref{thm:ratioHalf}.
Let us recall the statement of the theorem.
\ratioHalf*
The basic idea of the proof is to construct an injection from derangements to non-derangement permutations.
The injection that we construct is based on the cycle structure of the derangement.
As a graph, any permutation is the union of directed cycles and isolated vertices.
In particular, if $\pi(u) = w$ in a permutation $\pi$, then $uw$ is an edge in the corresponding graph.
If $\pi(u) = u$, then $u$ is an isolated vertex.
Derangements are those permutations with no isolated vertices.
Given a derangement $D$, the plan is to map $D$ to a permutation that shares all but one of the cycles of $D$.
The cycle $C$ that we break is the one that contains a specified vertex $v$.
The injection depends on the choice of $v$.
The image of $C$ will normally be a smaller cycle that contains $v$, together with at least one fixed point.
In some special cases, the image of $C$ may be a collection of fixed points, with no cycle.
Given the image of $D$, it is easy to recover all of the cycles in $D$, except for $C$.
It is easy to obtain the set of vertices in $C$, but harder to recover the order of these vertices in $C$.
Handling the cycle that contains the special vertex $v$ is the main difficulty of the proof, and we encapsulate it in the following Lemma.
\begin{lemma}\label{th:HamiltonMap}
Let $G$ be a directed, loopless graph, and let $v \in V(G)$.
Let $\mathcal{H}$ be the set of Hamilton cycles on $G$.
Let $\mathcal{G}$ be the set of permutations on $G$ with at least one fixed point and at most one cycle, such that $v$ is a vertex of the cycle if it exists.
Then, there is an injection $f_v$ from $\mathcal{H}$ to $\mathcal{G}$.
In addition, assuming that $G$ is not a directed cycle, there is a choice of $v$ for which the identity is not in the image of $f_v$.
\end{lemma}
Before proving Lemma \ref{th:HamiltonMap}, we show how it implies Theorem \ref{thm:ratioHalf}.
\begin{proof}[Proof of Theorem \ref{thm:ratioHalf}]
It is an easy observation that $(d/p)_G = 1/2$ if $G$ is a directed cycle.
Assume that $G$ is not a directed cycle.
We describe a family of injections $F_v$ from derangements on $G$ to non-derangement permutations on $G$, one injection for each vertex $v \in V(G)$.
This family of injections has the property that there exists at least one $v$ for which the identity permutation is not in the image of $F_v$.
Hence, the total number of permutations is at least twice the number of derangements, plus one for the identity permutation.
For each derangement $D$ on $G$, let $F_v(D)$ be the union of all cycles of $D$ that do not contain $v$, together with the image of the cycle $C$ that contains $v$ under the function described in Lemma \ref{th:HamiltonMap} applied to the subgraph of $G$ induced by the vertices of $C$.
For the derangements that are Hamilton cycles, the map $F_v$ is exactly the map $f_v$ described in Lemma \ref{th:HamiltonMap}.
By Lemma \ref{th:HamiltonMap}, there is a choice of $v$ so that the identity permutation is not in the image of $f_v$.
For derangements that are not Hamilton cycles, each cycle that does not contain $v$ will be mapped to itself, and so these derangements cannot map to the identity.
Hence, there is a choice of $v$ for which the identity is not in the image of $F_v$.
Given Lemma \ref{th:HamiltonMap}, it is easy to see that each permutation $P$ in the image of $F_v$ has a unique preimage $D$.
Indeed, the preimage of each cycle in $P$ that doesn't contain $v$ is the same cycle.
The preimage of the remaining vertices is their preimage under the map $f_v$ defined in Lemma \ref{th:HamiltonMap}, which is injective.
Hence, $F_v$ is injective, which is sufficient to prove Theorem \ref{thm:ratioHalf}.
\end{proof}
It remains to prove Lemma \ref{th:HamiltonMap}.
\begin{proof}[Proof of Lemma \ref{th:HamiltonMap}]
For any permutation $P$, we denote by $\fix(P)$ the set of fixed points of $P$.
The identity permutation is the unique permutation on $G$ such that $\fix(P) = V(G)$.
Let $v_0 = v$ and let $C=(v_0,v_1,v_2, \ldots,v_{n-1},v_0) \subseteq G$ be a Hamilton cycle on $G$.
Let $(v_i,v_j) \in G$ be a chord of $C$.
Note that $(v_i,v_j)$ completes a cycle $C_{ij}$ with edges from $C$, and there is a nonempty set $L_{ij}$ of vertices in $G$ that are not contained in $C_{ij}$.
Further, knowing $C$, it is easy to identify $i,j$ from either $C_{ij}$ or $L_{ij}$.
If $v_0$ is a vertex of $C_{ij}$, then we say that $(v_i,v_j)$ is a {\em forward chord} of $C$.
If there are no forward chords with respect to $C$, then $C$ is the only Hamilton cycle of $G$.
Indeed, suppose that $(v_0=u_0,u_1,u_2,\ldots,u_{n-1},u_0) \neq C$ is a Hamilton cycle of $G$.
Suppose that $(v_0, v_1, \ldots, v_i) = (u_0, u_1, \ldots, u_i)$, and $v_{i+1} \neq u_{i+1}$.
Then, $(u_i, u_{i+1})=(v_i, v_k)$ is a forward chord of $C$.
Indeed, since $k > i+1$, the sequence $(v_0, v_1, \ldots, v_i, u_{i+1} = v_k, v_{k+1}, \ldots, v_{n-1}, v_0=u_0)$ is a cycle that contains $v_0$.
We define a partial order on forward chords as follows.
Let $(v_i,v_j) \neq (v_k,v_l)$ be forward chords for $C$.
Then, $(v_i,v_j) \prec (v_k,v_l)$ if and only if $L_{ij} \subset L_{kl}$.
This relation inherits the property of being a partial order from the Boolean lattice.
We say that a forward chord $(v_i,v_j)$ is {\em minimal} if there is no forward chord $(v_k,v_l) \prec (v_i,v_j)$.
We say that the {\em first} minimal forward chord is a minimal forward chord $(v_i,v_j)$ such that $i$ is as small as possible.
Since there is at most one minimal forward chord starting at any vertex, the first minimal forward chord is unique.
See Figure \ref{fig:forwardChords}.
\begin{SCfigure}[][h]
\caption{The first minimal forward chord for this cycle is $v_1v_4$. The chord $v_7v_2$ is not forward, $v_1v_5$ is forward but not minimal, and $v_3v_6$ is forward and minimal but not first.}\label{fig:forwardChords}
\includegraphics[width=.6 \textwidth]{forward_chords_2}
\end{SCfigure}
We now describe how to map $C$ to a permutation.
If there are no forward chords with respect to $C$, we map the single Hamilton cycle of $G$ to the permutation consisting only of fixed points.
Otherwise, let $(v_s,v_t)$ be the first minimal forward chord.
We map $C$ to the permutation $P$ consisting of the cycle $C_{st}$ and the fixed points $\fix(P) = L_{st}$.
In order to show that the map we've described is injective, it will suffice to show that we can recover $C$ if we know $P$ and $v_0$.
To do this, we need to recover the identity of $v_s$ and $v_t$, and to find the order of $\fix(P)$ in $C$.
\begin{claim}
The first vertex in $C_{st}$ that has an edge into $\fix(P)$ is $v_s$, and $v_s$ has exactly one edge into $\fix(P)$.
\end{claim}
\begin{proof}
Suppose, for contradiction, that there is an edge $(v_i,v_j)$ with $i<s$ and $v_j \in \fix(P)$.
Then, either $(v_i,v_j)$ is minimal, or there is a minimal forward chord $(v_k, v_l) \prec (v_i,v_j)$.
If $(v_i,v_j)$ is minimal, then $(v_s,v_t)$ cannot be the first minimal forward chord, and we reach a contradiction.
If $(v_k, v_l) \prec (v_i,v_j)$, then either $k < s$ or $k \geq s$.
If $k < s$, then $(v_s, v_t)$ is not first, and we reach a contradiction.
If $k \geq s$, then $(v_s,v_t)$ is not minimal, and we reach a contradiction.
We still need to show that $v_s$ has only one edge into $\fix(P)$.
Suppose that there is an edge $(v_s,v_j) \in G$ with $j \neq s+1$ and $v_j \in \fix(P)$.
Then $(v_s,v_j)$ is a forward chord of $C$ with $(v_s,v_j) \prec (v_s,v_t)$, contradicting the choice of $(v_s,v_t)$ in the construction.
\end{proof}
Using this claim, we can identify $v_s$ and $v_t$, and we can identify $v_{s+1} \in \fix(P)$.
To do this, we simply start at $v_0$ and follow edges of $C_{st}$ until we find a vertex that has an edge into $\fix(P)$.
The vertex with an edge into $\fix(P)$ is $v_s$, the vertex in $\fix(P)$ that has an edge from $v_s$ is $v_{s+1}$, and the vertex that follows $v_s$ in $C_{st}$ is $v_t$.
Having identified $v_{s+1}$, we claim that we can determine the order of $\fix(P)$ in $C$.
Indeed, there are no forward chords in $\fix(P)$, since any such forward chord would precede $(v_s,v_t)$ in our partial order on forward chords, contradicting the minimality of $(v_s,v_t)$.
Hence, there is exactly one edge from any set $\{v_{s+1}, v_{s+2}, \ldots, v_{k}\}$ into $\{v_{k+1}, v_{k+2}, \ldots, v_{t-1}\}$.
Using this fact in an easy inductive argument, we can identify the order of $\fix(P)$ in $C$.
It only remains to show that, under the assumption that $G$ is not a directed cycle, there is a choice of $v=v_0$ for which the identity permutation is not in the image of $f_v$.
It is already established that the permutation consisting only of fixed points is in the image of $f_v$ only if there is exactly one Hamilton cycle $C$ on $G$.
If there is a chord $(u,w)$ of $C$, then $f_u(C)$ will include the cycle formed by $(u,w)$ together with edges of $C$, and hence the permutation consisting only of fixed points is not in the image of $f_u$. This completes the proof of Lemma \ref{th:HamiltonMap}.
\end{proof}
This completes the proof of the main theorem.
Here is one other interesting consequence of Lemma \ref{th:HamiltonMap}.
\begin{corollary}\label{thm:hamilton}
For any directed graph $G=(V,E)$ that is not a directed cycle, there is a vertex $v \in V$ such that the number of cycles in $G$ that contain $v$ is at least twice the number of Hamilton cycles in $G$.
\end{corollary}
Indeed, for an appropriate choice of $v$, Lemma \ref{th:HamiltonMap} gives an explicit injection from the set of Hamilton cycles that contain $v$ to cycles that are not Hamilton.
If $G$ is a directed cycle together with an additional edge, then there are two cycles in $G$, one of which is Hamilton.
In this case, the conclusion of Corollary \ref{thm:hamilton} does not hold for those vertices not in the second cycle of $G$, and cannot be improved for those vertices that are in the second cycle of $G$.
The $2$-blowup of a directed cycle, discussed in Section \ref{sec:constructions}, gives a more interesting example for which the conclusion of Corollary \ref{thm:hamilton} is tight.
\section{Matchings in general graphs}\label{sec:matchingsInGeneralGraphs}
Let us now proceed to the proof of Theorem \ref{th:matchingInGeneral}.
\begin{proof}[Proof of Theorem \ref{th:matchingInGeneral}]
Let $G$ be a graph and $M=\{v_1u_1,v_2u_2,\ldots, v_nu_n\}$ be a perfect matching in $G$. By assigning, for each $i$, one of $v_i,u_i$ to $L$ and the other to $R$, we obtain $2^{n-1}$ different bipartitions of the vertex set (swapping $L$ and $R$ yields the same bipartition). Let us denote by $G_1,\ldots, G_{2^{n-1}}$ the bipartite graphs obtained by retaining only the edges of $G$ between $L$ and $R$ for each of these bipartitions.
Note that $M$ is a perfect matching in each of the $G_i$ by construction. Let $a_i$ denote the number of perfect matchings of $G_i$ which intersect $M$ and $b_i$ the number of perfect matchings of $G_i$ that do not. By Theorem \ref{th:matchingInBipartiteGraph} we know $a_i \ge b_i$.
Furthermore, if $M'$ is any perfect matching of $G$ then $M \cup M'$ is bipartite, which means $M'$ is contained in at least one graph $G_i$. This implies that the number of perfect matchings in $G$ which don't intersect $M$ is upper bounded by $b_1+\cdots+b_{2^{n-1}}\le a_1+a_2+\cdots+a_{2^{n-1}}.$
On the other hand, every perfect matching $M'$ of $G$ intersecting $M$ appears in at most $2^{n-1}$ of the $G_i$, so there are at least $(a_1+\cdots+a_{2^{n-1}})/2^{n-1}$ such matchings. In particular, this is at least $1/2^{n-1}$ times the number of perfect matchings in $G$ which don't intersect $M$. This means that the proportion of perfect matchings which intersect $M$ is at least $1/(2^{n-1}+1)$ of the total number of perfect matchings, as claimed.
Our construction is the graph $H$ defined on $\{v_1,\ldots, v_{2n},$ $u_1,\ldots,u_{2n}\}$ as follows. For each $i=1,\ldots, n$ we put all the edges between $\{v_{2i-1},v_{2i}\}$ and $\{u_{2i-1},u_{2i}\}$, and make $v_{2i-1}v_{2i}$ and $u_{2i}u_{2i+1}$ edges (with $u_{2n+1}=u_1$); see Figure \ref{fig:matchingExample}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.6][line cap=round,line join=round]
\defPt{-1}{-3}{u1}
\defPt{1}{-3}{u2}
\defPt{-1}{-5}{v1}
\defPt{1}{-5}{v2}
\defPt{3}{-1}{u3}
\defPt{3}{1}{u4}
\defPt{5}{-1}{v3}
\defPt{5}{1}{v4}
\defPt{1}{3}{u5}
\defPt{-1}{3}{u6}
\defPt{1}{5}{v5}
\defPt{-1}{5}{v6}
\defPt{-3}{1}{u7}
\defPt{-3}{-1}{u8}
\defPt{-5}{1}{v7}
\defPt{-5}{-1}{v8}
\def 3 {3}
\def 0.8 {0.8}
\pgfmathsetmacro{\p}{0.8*3}
\draw[line width=2pt] (u1) -- (v1) -- (u2) -- (v2) -- (u1);
\draw[line width=2pt] (u3) -- (v3) -- (u4) -- (v4) -- (u3);
\draw[line width=2pt] (u5) -- (v5) -- (u6) -- (v6) -- (u5);
\draw[line width=2pt] (u7) -- (v7) -- (u8) -- (v8) -- (u7);
\draw[line width=2pt,red] (v1) -- (v2);
\draw[line width=2pt,red] (v3) -- (v4);
\draw[line width=2pt,red] (v5) -- (v6);
\draw[line width=2pt,red] (v7) -- (v8);
\draw[line width=2pt,red] (u2) -- (u3);
\draw[line width=2pt,red] (u4) -- (u5);
\draw[line width=2pt,red] (u6) -- (u7);
\draw[line width=2pt,red] (u8) -- (u1);
\foreach \i in {1,...,8}
{
\draw (u\i) circle[radius = .08][fill = black];
\node[] at ($0.85*(u\i)$) {$u_{\i}$};
\draw (v\i) circle[radius = .08][fill = black];
\node[] at ($1.1*(v\i)$) {$v_{\i}$};
}
\end{tikzpicture}
\caption{The construction in Theorem \ref{th:matchingInGeneral} for $n=4$. The red edges form $M_0$.}
\label{fig:matchingExample}
\end{figure}
Let $M_0$ be the perfect matching in $H$ given by $\{v_1v_2,\ldots, v_{2n-1}v_{2n}\}$ and $\{u_2u_3,u_4u_5,\ldots, u_{2n}u_1\}$. Notice that if for each $i$ we choose either $v_{2i-1}u_{2i-1}$ and $v_{2i}u_{2i}$ or $v_{2i-1}u_{2i}$ and $v_{2i}u_{2i-1}$ we get a perfect matching in $H$, so there are $2^n$ perfect matchings disjoint from $M_0$. And in fact $H$ has no other perfect matchings, since (perhaps after some thought) it is clear that any perfect matching containing an edge of $M_0$ must coincide with $M_0$.
\end{proof}
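The construction can be verified by brute force for $n=2$ (illustrative sketch with a hypothetical helper \texttt{perfect\_matchings}): $H$ has exactly $2^n + 1 = 5$ perfect matchings, of which only $M_0$ itself meets $M_0$.

```python
def perfect_matchings(verts, edges):
    # Enumerate all perfect matchings by always matching the first free vertex.
    if not verts:
        return [frozenset()]
    v, rest = verts[0], verts[1:]
    found = []
    for e in edges:
        if v in e:
            (w,) = e - {v}
            if w in rest:
                remaining = [x for x in rest if x != w]
                found += [m | {e} for m in perfect_matchings(remaining, edges)]
    return found

n = 2
V = [f"v{i}" for i in range(1, 2 * n + 1)] + [f"u{i}" for i in range(1, 2 * n + 1)]
E = set()
for i in range(1, n + 1):
    for a in (2 * i - 1, 2 * i):
        for b in (2 * i - 1, 2 * i):
            E.add(frozenset((f"v{a}", f"u{b}")))                # bipartite block
    E.add(frozenset((f"v{2 * i - 1}", f"v{2 * i}")))            # v-edge of M_0
    E.add(frozenset((f"u{2 * i}", f"u{2 * i % (2 * n) + 1}")))  # u-edge of M_0
# M_0 consists exactly of the monochromatic (v-v and u-u) edges.
M0 = {e for e in E if all(x[0] == "v" for x in e) or all(x[0] == "u" for x in e)}
pms = perfect_matchings(V, E)
assert len(pms) == 2 ** n + 1               # 2^n disjoint from M_0, plus M_0
assert sum(1 for m in pms if m & M0) == 1   # only M_0 itself meets M_0
```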
The argument for our lower bound could perhaps be improved in two ways. First, note that every perfect matching $M'$ appears in $2^{c-1}$ of the graphs $G_i$, where $c$ is the number of components of $M \cup M'$. For matchings disjoint from $M$, we used the crude lower bound of $1$, and for the others an upper bound of $2^{n-1}$. Although this may feel like giving up quite a lot, these bounds were actually tight in our construction $H$. The second possible area for improvement would be to remove the slack incurred in our application of Theorem \ref{th:matchingInBipartiteGraph}.
\section{Random (di)graphs}\label{sec:randomGraphs}
Here, we outline the proof of Proposition \ref{th:randomRatios}. Our approach is a routine second-moment method in the spirit of Janson \cite{janson1994numbers}, where the main idea is to first condition on the number of edges (resp.\ arcs). We direct the reader to \cite{alon2004probabilistic} for a general text on probabilistic combinatorics and to the papers \cite{janson1994numbers} and \cite{frieze1995analysis} for details of the argument we outline below.
\begin{proof}
We first outline the proof for the directed graph case, discussing modifications needed for the graph case at the end. Following the ideas in \cite{janson1994numbers, frieze1995analysis}, let $DG = DG_{n,m}$ denote a uniformly random digraph on $n$ vertices having exactly $m$ arcs. Let $X$ denote the number of derangements of $DG$ and let $Y$ denote the number of permutations. Note for a digraph $H$ on $[n]$ having $t$ arcs, the probability that $H$ is contained in $DG$ depends only on $t$ and is given by
\[
\mathbb{P}(H \subseteq DG) =f(t) := {n(n-1) - t \choose m-t} {n(n-1) \choose m}^{-1}.
\]
For $1 \leq t \leq 2n$ (and $\delta n^2 /2 \leq m \leq n(n-1)$), this is given by
\[
\mathbb{P}(H \subseteq DG) =f(t) = \left(\dfrac{m}{n(n-1)} \right) ^{t} \exp \left\{ \dfrac{-t^2}{2} \left(\dfrac{1}{m} - \dfrac{1}{n(n-1)} \right) + \mathcal{O}(1/n) \right\}.
\]
When $t=n$, this gives the probability that a given derangement is contained in $DG$. Let $Der(n)$ denote the number of derangements of the set $[n]$, and define $\gamma = m / [n(n-1)]$. Using that $n! /e-1 \leq Der(n) \leq n! /e+1$, we have
\begin{eqnarray*}
\mathbb{E}[X] &=& f(n) Der(n) = \gamma^n \exp \left[ \left(\dfrac{-n^2}{2m} + \dfrac{n}{2(n-1)} \right) + \mathcal{O}(1/n) \right] Der(n)\\
&=& \gamma ^n \exp \left[ \dfrac{-1}{2\gamma} + \dfrac{1}{2} + \mathcal{O}(1/n) \right]Der(n) = n! \gamma ^n \exp \left[\dfrac{-1}{2\gamma} - \dfrac{1}{2} + \mathcal{O}(1/n) \right]
\end{eqnarray*}
as the expected number of derangements of $DG_{n,m}$.
Similarly, if $\sigma$ is a permutation of $[n]$ with $k$ fixed points, then the probability that $\sigma$ is a permutation of $DG$ is $f(n-k)$. Therefore, the expected number of permutations of $DG$ is given by
\begin{eqnarray*}
\mathbb{E}[Y] &=& \sum_{k = 0} ^{n} f(n-k) {n \choose k} Der(n-k)\\
&=& n! \gamma^n \exp\left[\dfrac{-1}{2\gamma} - \dfrac{1}{2} + \mathcal{O}(1/n)\right]\\
& & \qquad \qquad \times \sum_{k=0} ^{n} \dfrac{(1/\gamma)^{k}}{k!} \exp\left[\dfrac{2kn - k^2}{2} \left(\dfrac{1}{m} - \dfrac{1}{n(n-1)} \right) \right]\\
&=& n! \gamma^n \exp\left[\dfrac{1}{2\gamma} - \dfrac{1}{2} + \mathcal{O}(\log(n)/n)\right],
\end{eqnarray*}
where in the last line, we estimate the sum by the Taylor series for $e^{1/\gamma}$. The slightly enlarged (suboptimal) error term comes from a very crude bound estimating the sum based on whether or not $(1/\gamma)^k / k! \leq 1/n$.
After this, to finish the directed graph case we need only estimate the variances of $X$ and $Y$. Routine computations almost identical to those in \cite{frieze1995analysis} show that $\mathbb{E}[X^2] = \mathbb{E}[X]^2 (1 + o(1))$ and $\mathbb{E}[Y^2] = \mathbb{E}[Y]^2 (1 + o(1))$. Therefore, by Chebyshev's inequality, with high probability $X/Y = (1 \pm o(1)) \mathbb{E}[X] / \mathbb{E}[Y] = (1 \pm o(1)) e^{-1/\gamma}.$ The proof concludes by the standard coupling of $DG_{n,q}$ and $DG_{n,m}$ using the fact that $\gamma = q + o(1)$ with high probability (since the number of arcs of $DG_{n,q}$ is well concentrated). See \cite{janson1994numbers, frieze1995analysis} for full details of this method.
The graph case is almost identical, but with $\gamma = 2m / n^2$. In the directed graph case, a guiding heuristic in computing moments is that two random derangements should share approximately as many edges as two random permutations; in the graph case, the heuristic is that for a random permutation, the number of fixed points and the number of cycles of length $2$ should be approximately independent.
\end{proof}
\section{Open problems}\label{sec:openProblems}
Finally, we mention a few open problems, mostly inspired by studying derangements and permutations on graphs.
\begin{enumerate}
\item Theorem \ref{th:matchingInGeneral} shows that in general, a perfect matching may intersect with only an exponentially small proportion of all perfect matchings, but there is a sizable gap between our construction and the general lower bound. What is the correct base for the exponent?
\item Let $S = \{(d/p)_G \ : \ \text{$G$ is a digraph}\}$ be the set of values arising as a ratio $(d/p)$. We have shown that $S \subseteq [0,1/2]$ and that $S$ is dense in $[0,1/e]$ (by taking the appropriate Erd\H{o}s--R\'enyi graphs). Is $S$ dense in $[0,1/2]$? If so, is $S$ equal to $[0,1/2] \cap \mathbb{Q}$?
\item What can be said about $(d/p)$ for regular directed graphs with degree greater than $n/2$, but much less than $n$?
\end{enumerate}
\bibliographystyle{plain}
| {
"timestamp": "2019-10-14T02:04:17",
"yymm": "1906",
"arxiv_id": "1906.05908",
"language": "en",
"url": "https://arxiv.org/abs/1906.05908",
"abstract": "We show that each perfect matching in a bipartite graph $G$ intersects at least half of the perfect matchings in $G$. This result has equivalent formulations in terms of the permanent of the adjacency matrix of a graph, and in terms of derangements and permutations on graphs. We give several related results and open questions.",
"subjects": "Combinatorics (math.CO)",
"title": "Perfect matchings and derangements on graphs"
} |
https://arxiv.org/abs/2105.05111 | The OEIS: A Fingerprint File for Mathematics | An introduction to the On-Line Encyclopedia of Integer Sequences (or OEIS, this https URL) for graduate students in mathematics |

\section{The OEIS}\label{SecI}
The {\em On-Line Encyclopedia of Integer Sequences} or {\em OEIS} \cite{OEIS} is a free website (\url{https://oeis.org}) containing information about 350,000 number sequences. You will probably first encounter it when trying to identify a sequence that has come up in your work. If your sequence is recognized, the response will tell you the first 100 or sometimes 10,000 terms, give a definition, properties, formulas, references, links, computer programs, etc., as appropriate. If it is not recognized, the system will invite you to submit it if you think it is of general interest, so that the next person who looks it up will find it -- and be grateful to you.
Ron Graham called the OEIS a ``fingerprint file for mathematics''. It has also been described as, ``pound for pound, one of the most useful mathematics sites on the web''. If you have been struggling with a sequence, and the OEIS tells you what it is, you will understand why it is so popular. Of course, if it tells you that your problem was solved forty years ago, you may be unhappy, but it is better to find out now rather than later. The database has been around, in one form or another, for 57 years, so if your sequence is not yet included, there is a moderate chance it is new (of course this is not a proof).
\section{The OEIS as a source of problems}\label{Sec2Prob}
The entries are constantly being updated.
Every day we get a hundred or so submissions of new sequences, and another hundred
comments on existing entries (new formulas, references, additional terms, etc.).
The new sequences are often sent in by non-mathematicians,
and are a great source of problems to work on. You can see the current submissions at \url{https://oeis.org/draft}.
Often enough you will see a sequence that is so interesting you want to drop everything else and work on it.
Well, go ahead! Many research papers have been born in this way. The ``Yellowstone Permutation''
\seqnum{A098550}
is one of my favorite examples: it was quite a challenge to prove that it {\em is}
a permutation of the positive integers, and the entry has a link to a paper \cite{Yellow} that a group of us wrote
analysing it.
Some of the things you might work on after seeing an interesting sequence on the drafts stack are:
\begin{itemize}
\item Is the sequence well-defined?
\item Does it contain infinitely many terms?
\item How fast does it grow? The ``graph'' button in every entry is helpful here.
\item Is there a formula or generating function?
\item Or you may see a conjecture or question that you think you can answer.
\item How would you program it to generate more terms?
\end{itemize}
Studying the ``drafts'' stack is an endless source of fun (and sleepless nights).
\section{The OEIS as a reference work}\label{SecRef}
Even when you know what the sequence is, you may still look it up, to find out what is presently known about it.
We try to make sure the sequences are well-supplied with references, especially any recent articles.
The coverage is broad: besides the obvious fields like combinatorics, graph theory,
group theory, number theory, computer science, recreational mathematics, there are
large numbers of sequences from physics and chemistry.
This makes the OEIS extremely useful as a reference work.
Earlier this year a senior group theorist told us he had been working on a problem for 20 years, but when he looked it up in the OEIS he found a reference that he was not aware of.
The computer programs are another great resource. An important sequence will have programs to generate it in C, Java, MAGMA, Maple, Mathematica, PARI, Python, Sage, etc. This is very helpful for numerical investigations.
\section{Submitting your sequence or comment}\label{Sec2Sub}
If you have an interesting number sequence that is not in the OEIS, you should definitely submit it.
Having a sequence in the OEIS is something you can be proud of.
You will be joining an enterprise that has been running
for almost 60 years, and to which over 12,000 people have already contributed.
Contributions come from almost every country, and we have been called one of the most successful
international collaborations.
You must register before you can contribute, using your real name,
the name you would use in a scientific publication
(see \href{https://oeis.org/wiki/Special:RequestAccount}{Request Account}).
The OEIS is not a ``social media'' site. One of
the reasons for the success of the database is that we have high standards, all contributions are
refereed by the editors, and accuracy is of primary importance.
See \href{https://oeis.org/wiki/Overview_of_the_contribution_process}{here} for more about the submission process.
The OEIS Wiki also has a page showing examples of
\href{https://oeis.org/wiki/Examples_of_what_not_to_submit}{What not to submit}!
\section{The OEIS Wiki}\label{SecW}
The OEIS Wiki (\url{https://oeis.org/wiki}) has a great deal of useful information for users, especially in the ``Information'' section. There is a general index to the sequences, a style sheet, a Q\&A page, an FAQ page, pages called
\href{https://oeis.org/wiki/How_to_add_a_comment,_more_terms,_or_a_b-file_(short_version)}{How to add a comment, more terms, or a b-file, ...},
\href{https://oeis.org/wiki/Instructions_For_General_Users}{Instructions for general users},
\href{https://oeis.org/wiki/The_multi-faceted_reach_of_the_OEIS}{The multi-faceted reach of the OEIS}, and so on.
\section{Citations of the OEIS}\label{SecCite}
An especially important part of the OEIS Wiki is the section
(see \url{https://oeis.org/wiki/Works_Citing_OEIS}) that lists citations of the OEIS in
the literature or on the web. There are now about 10,000 citations, which often say things like ``This theorem would not have been discovered
without the help of the OEIS''.
You can help keep these pages up-to-date. If you come across a paper that mentions a number sequence,
check if the sequence is in the OEIS, add it if it isn't, and make sure the entry has a reference to the paper.
If the context seems new, consider adding a comment saying ``Arises in the spectral analysis of cobweb singularities,
see A. Spider et al. (2021).'' If you come across an article that references the OEIS, make sure the paper
is listed in the ``Works Citing the OEIS'' pages on the Wiki. And if you happen to see a comment that ``This sequence is not in the OEIS'', then add it at once.
When you write a paper yourself, using information from the OEIS, don't forget to mention us in your references list,
typically by saying
The OEIS Foundation Inc., Entry A123456. Published electronically at https://oeis.org, 2021,
\noindent
and also mention it in any relevant OEIS entries.
Many authors ``forget'' to do this, but it will help your career by drawing attention to your paper.
\section{``Link Rot''}\label{SecL}
Don't get me started! The OEIS serves as a guide to the literature on a huge number of subjects,
and we have hundreds of thousands of links. And every day many of these links break.
System administrators feel they are not doing their job if they don't change all their URLs every
couple of years. Or, as happened last year, a major university will decide to delete all the faculty home pages, with no warning. That broke several hundred of our links.
Pages on individual's web sites are the most fragile of all.
If you are a frequent user of the OEIS you will often run into this problem. You can help by locating a replacement
URL if the site has simply moved, or by adding a new link to a copy of the missing page on the wonderful Wayback Machine, run by the Internet Archive \cite{WBM}. A better solution is to ask the author of the page
for permission to put a local copy of the page on the OEIS server. We have a strong reputation, and hope to be around for a long time. Almost everyone we have asked has agreed.
But it is a time-consuming business, and you could help.
\section{Other topics}\label{SecOT}
\vspace*{+.2in}
\noindent{\bf ``Superseeker''.} This is a program that runs on our server, and tries very hard to identify a sequence.
It runs several programs that try to guess a formula or recurrence (including the powerful
Salvy-Zimmermann program {\em gfun} \cite{gfun}). It also transforms
the sequence in a hundred ways and looks up the results in the OEIS, hoping for a match.
To use it, send an email
to \href{mailto:superseeker@oeis.org}{\tt superseeker@oeis.org}
with a blank subject, containing a single line of the form
lookup 0 1 3 7 11 15 23 35 43 47
\noindent
(with no commas).
Since this uses many resources on our server, please use it sparingly.
One of my current goals is to strengthen Superseeker. If you are interested in helping, please contact me.
\vspace*{+.2in}
\noindent{\bf The Sequence Fans Mailing List.} Any registered user of the OEIS can join, and
messages go out to a few hundred fellow sequence-lovers. This can be even more powerful than Superseeker if you are really desperate to identify a sequence. There is a link on the OEIS Wiki.
\vspace*{+.4in}
\noindent{\bf Triangles and arrays of numbers.}
The OEIS also includes triangles and arrays of numbers. Triangles are read by rows. Pascal's triangle
becomes \seqnum{A007318}: $1, ~1, 1, ~1, 2, 1,$ $1, 3, 3, 1, ~1, 4, 6, 4, 1,$ $1, 5, 10, 10, 5, 1, \ldots$.
Arrays are read by antidiagonals. The table of Nim-sums $m \oplus n$ \cite{BON},
$$
\begin{array}{ccccccc}
0 & 1 & 2 & 3 & 4 & 5 & \ldots \\
1 & 0 & 3 & 2 & 5 & 4 & \ldots \\
2 & 3 & 0 & 1 & 6 & 7 & \ldots \\
3 & 2 & 1 & 0 & 7 & 6 & \ldots \\
4 & 5 & 6 & 7 & 0 & 1 & \ldots \\
. & . & . & . & . & . & \ldots \\
\end{array}
$$
becomes \seqnum{A003987}, $0, ~1, 1,~ 2, 0, 2,$ $ 3, 3, 3, 3, $ $ 4, 2, 0, 2, 4, \ldots$.
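Both reading conventions are easy to mechanize. The following Python sketch (ours, for illustration) reproduces the openings of \seqnum{A007318} and \seqnum{A003987} quoted above:

```python
import math

def pascal_by_rows(num_rows):
    # triangle read by rows: row n is C(n,0), ..., C(n,n)   (A007318)
    return [math.comb(n, k) for n in range(num_rows) for k in range(n + 1)]

def nim_sums_by_antidiagonals(num_diagonals):
    # square array T(m, n) = m XOR n, read by antidiagonals (A003987);
    # antidiagonal d visits (0, d), (1, d-1), ..., (d, 0)
    return [m ^ (d - m) for d in range(num_diagonals) for m in range(d + 1)]

assert pascal_by_rows(5) == [1, 1, 1, 1, 2, 1, 1, 3, 3, 1, 1, 4, 6, 4, 1]
assert nim_sums_by_antidiagonals(5) == [0, 1, 1, 2, 0, 2, 3, 3, 3, 3, 4, 2, 0, 2, 4]
```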
\vspace{0.2in}
The OEIS aims for a broad coverage of integer sequences arising in science, especially in mathematics (of course including several versions of the famous subway stops sequence, since they have been published on tests; but not the number of pages in the $n$th issue of these {\em Notices} since 1954, because no one has mentioned it before). Our motto is accuracy and completeness, combined with good judgment.
We have a great many contributors, a great many entries, and we hope you will join us at \url{https://oeis.org}.
| {
"timestamp": "2021-05-12T02:27:09",
"yymm": "2105",
"arxiv_id": "2105.05111",
"language": "en",
"url": "https://arxiv.org/abs/2105.05111",
"abstract": "An introduction to the On-Line Encyclopedia of Integer Sequences (or OEIS,this https URL) for graduate students in mathematics",
"subjects": "History and Overview (math.HO); Combinatorics (math.CO)",
"title": "The OEIS: A Fingerprint File for Mathematics"
} |
https://arxiv.org/abs/1909.03300 | Cyclic Permutations: Degrees and Combinatorial Types | This note will give an enumeration of $n$-cycles in the symmetric group ${\mathcal S}_n$ by their degree (also known as their cyclic descent number) and studies similar counting problems for the conjugacy classes of $n$-cycles under the action of the rotation subgroup of ${\mathcal S}_n$. This is achieved by relating such cycles to periodic orbits of an associated dynamical system acting on the circle. We also compute the mean and variance of the degree of a random $n$-cycle and show that its distribution is asymptotically normal as $n \to \infty$. |

\section{Introduction}
The classical Eulerian numbers describe the distribution of descent number in the full symmetric group ${\mathscr S}_n$ and have been studied extensively for more than a century (see for example \cite{P2} and \cite{St}). Understanding the distribution of descent number in a given conjugacy class of ${\mathscr S}_n$ is a more subtle problem that was first tackled in the 1990's by Gessel and Reutenauer \cite{GR}, by Diaconis, McGrath, and Pitman \cite{DMP}, and by Fulman \cite{F1}. \vspace{6pt}
This note will consider a variant of the descent number of a permutation $\nu \in {\mathscr S}_n$, which we call its {\it degree}, defined by
$$
\deg(\nu) = \# \big\{ i : \nu(i) > \nu(i+1) \big\},
$$
where the integer $i$ is taken modulo $n$. Our terminology is justified by a simple topological interpretation of this quantity, but it turns out that what we call degree in this paper has already been studied in the combinatorics literature under the name {\it cyclic descent number} (see for example \cite{C}, \cite{F2}, \cite{P1}, and \cite{DPS}). The degree has the advantage of being invariant under the left and right actions of the rotation subgroup of ${\mathscr S}_n$ generated by the cycle $(1 \, 2 \, \cdots \, n)$, and naturally occurs in the study of the combinatorial patterns of periodic orbits of covering maps of the circle (see \cite{M} and \cite{PZ}). Motivated by this connection, we will investigate the distribution of degree in the conjugacy class ${\mathscr C}_n$ of all $n$-cycles in ${\mathscr S}_n$. Let $N_{n,d}$ denote the number of $\nu \in {\mathscr C}_n$ with $\deg(\nu)=d$. In \S \ref{sec:des} we prove
\begin{theorem}\label{YEK}
For every $d \geq 1$,
$$
N_{n,d} = \sum_{k=1}^d (-1)^{d-k} \binom{n}{d-k} \Delta_n(k),
$$
where
$$
\Delta_n(k) = \sum_{r|n} \mu \Big( \frac{n}{r} \Big) \Big(\sum_{j=0}^{r-1} k^j \Big)
$$
and $\mu$ is the M\"{o}bius function.
\end{theorem}
This is the analog of the alternating sum formula for Eulerian numbers, and the similar formulas in \cite{DMP} and \cite{F1} for permutations with a given cycle structure and descent number. Our proof makes essential use of a count for the number of period $n$ orbits of the linear endomorphism $\mathbf{m}_k(x)=kx \ (\operatorname{mod} {\mathbb Z})$ of the circle ${\mathbb R}/{\mathbb Z}$ that realize the combinatorics of degree $d$ elements in ${\mathscr C}_n$, developed by C.~L.~Petersen and the author \cite{PZ}. The $\Delta_n(k)$ for $k \geq 2$ can be interpreted as the number of period $n$ points of $\mathbf{m}_k$ up to rotation by a \mbox{$(k-1)$-st} root of unity (see \S \ref{keek}, \S \ref{CNQD} and \S \ref{compp}). \vspace{6pt}
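Since $N_{n,d}$ can also be computed by brute force for small $n$, the alternating-sum formula of \thmref{YEK} is easy to sanity-check. The Python sketch below (our helper names; not part of the paper) compares the formula against direct enumeration of $n$-cycles:

```python
from itertools import permutations
from math import comb

def mobius(m):
    # Moebius function via trial division
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def Delta(n, k):
    return sum(mobius(n // r) * sum(k**j for j in range(r))
               for r in range(1, n + 1) if n % r == 0)

def N_formula(n, d):
    return sum((-1)**(d - k) * comb(n, d - k) * Delta(n, k)
               for k in range(1, d + 1))

def degree(nu):
    # cyclic descent number of nu in one-line notation (1-indexed values)
    n = len(nu)
    return sum(nu[i] > nu[(i + 1) % n] for i in range(n))

def is_n_cycle(nu):
    # the cycle of nu containing 1 has length n iff nu is an n-cycle
    x = 1
    for steps in range(1, len(nu) + 1):
        x = nu[x - 1]
        if x == 1:
            return steps == len(nu)
    return False

def N_bruteforce(n, d):
    return sum(1 for nu in permutations(range(1, n + 1))
               if is_n_cycle(nu) and degree(nu) == d)

for n in range(3, 8):
    for d in range(1, n - 1):
        assert N_formula(n, d) == N_bruteforce(n, d)
assert (N_formula(5, 1), N_formula(5, 2), N_formula(5, 3)) == (4, 10, 10)
```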
It follows immediately from \thmref{YEK} that the generating function $G_n(x)= \sum_{d=1}^{n-2} N_{n,d} \, x^d$ has the power series expansion
$$
G_n(x)=(1-x)^n \sum_{i \geq 1} \Delta_n(i) \, x^i.
$$
This result should be viewed as the analog of Carlitz's identity for Eulerian numbers (compare \S \ref{CNQD} and \S \ref{compp}). \vspace{6pt}
It is well known that the descent number of a randomly chosen permutation in ${\mathscr S}_n$ has mean $(n-1)/2$ and variance $(n+1)/12$ (see for example \cite{Pi}). In \S \ref{sec:stat} we prove
\begin{theorem}\label{DO}
The degree of a randomly chosen cycle in ${\mathscr C}_n$ (with respect to the uniform measure) has mean
$$
\frac{n}{2}-\frac{1}{n-1} \qquad \text{if} \ n \geq 3,
$$
and variance
$$
\frac{n}{12}+\frac{n}{(n-1)^2(n-2)} \qquad \text{if} \ n \geq 5.
$$
\end{theorem}
The idea of the proof, inspired by the method of Fulman in \cite{F1}, is to express the generating functions $G_n$ in terms of the generating functions of the Eulerian numbers (the so-called Eulerian polynomials) for which the mean and variance are already known (see \S \ref{asnorm}). \vspace{6pt}
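For small $n$ the mean and variance in \thmref{DO} can be confirmed by exact enumeration over ${\mathscr C}_n$. The Python sketch below (our own helper names) uses rational arithmetic to avoid any rounding:

```python
from itertools import permutations
from fractions import Fraction

def cyclic_descents(nu):
    # degree of nu in one-line notation (1-indexed values)
    n = len(nu)
    return sum(nu[i] > nu[(i + 1) % n] for i in range(n))

def is_n_cycle(nu):
    x = 1
    for steps in range(1, len(nu) + 1):
        x = nu[x - 1]
        if x == 1:
            return steps == len(nu)
    return False

def degree_mean_var(n):
    degs = [cyclic_descents(nu) for nu in permutations(range(1, n + 1))
            if is_n_cycle(nu)]
    mean = Fraction(sum(degs), len(degs))
    var = Fraction(sum(d * d for d in degs), len(degs)) - mean**2
    return mean, var

for n in (5, 6, 7):
    mean, var = degree_mean_var(n)
    assert mean == Fraction(n, 2) - Fraction(1, n - 1)
    assert var == Fraction(n, 12) + Fraction(n, (n - 1)**2 * (n - 2))
```

For $n=5$, for instance, this gives mean $9/4$ and variance $25/48$, matching the theorem.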
The following central limit theorem for the degree is also proved in \S \ref{sec:stat}:
\begin{theorem}\label{SE}
When normalized by its mean and variance, the distribution of $\deg(\nu)$ for $\nu \in {\mathscr C}_n$ converges to the standard normal distribution as $n \to \infty$.
\end{theorem}
Compare \figref{normal}. \vspace{6pt}
\begin{figure}[t]
\centering
\begin{overpic}[width=\textwidth]{norm.pdf}
\end{overpic}
\caption{\sl Illustration of \thmref{SE}. Top: The histogram of the distribution of degree among $n$-cycles, scaled horizontally to $n-2$, for $10 \leq n \leq 70$. The rapid convergence to a bell-shaped density is evident. Bottom: The distribution of degree among $200$-cycles, normalized by its mean and variance (only 41 out of 198 points are visible in the given window). The curve in blue is the standard normal distribution.}
\label{normal}
\end{figure}
The central limit theorem for the distribution of descent number over ${\mathscr S}_n$ or a given conjugacy class has been known (see for example the papers of Bender \cite{B} or Harper \cite{H} for the former, and the recent work of Kim and Lee \cite{KL1} for the latter). Our proof follows the strategy of \cite{FKL} and is based on the idea of reducing convergence in distribution to pointwise convergence of the moment generating functions over some non-empty open interval, as utilized in \cite{KL2} (see \S \ref{asnorm}). \vspace{6pt}
Motivated by applications in dynamics, we also study the conjugacy classes of ${\mathscr C}_n$ under the action of the rotation subgroup of ${\mathscr S}_n$. Each such class is called a {\it combinatorial type} in ${\mathscr C}_n$. In \S \ref{sec:so} we count the number of $n$-cycles of a given symmetry order and use it to derive a (known) formula for the number of distinct combinatorial types in ${\mathscr C}_n$ (compare Theorems \ref{NQR} and \ref{NQ}). This section is elementary and rather independent of the rest of the paper, except for \S \ref{ttnn} where we discuss the problem of counting the number of distinct combinatorial types of a given degree. \vspace{6pt}
A partial motivation for writing the present note was the belief that using dynamical systems could give a different and interesting perspective on the problem of enumeration of $n$-cycles by their degree. After a preliminary version of this paper appeared on arXiv.org, J. Fulman informed me that an alternative proof of \thmref{YEK} can be given using results in his paper \cite{F2}. More generally, he is able to derive similar enumeration results for arbitrary conjugacy classes in ${\mathscr S}_n$. However, his methods are not elementary or combinatorial, as they rely on the representation theory of the Whitehouse module. An interesting open problem is to develop a purely combinatorial approach to our degree counts for $n$-cycles or other conjugacy classes in ${\mathscr S}_n$. \vspace{6pt}
\noindent
{\it Acknowledgments.} I'm grateful to J. Fulman for his support and interest in this work, and for sharing his insights on the problems discussed here. I also thank anonymous referees for several helpful comments, which in particular brought my attention to some of the previous work on cyclic descents.
\section{Preliminaries} \label{sec:pre}
Fix an integer $n \geq 2$. We denote by ${\mathscr S}_n$ the group of all permutations of $\{ 1,\ldots, n \}$ and by ${\mathscr C}_n$ the collection of all $n$-cycles in ${\mathscr S}_n$. Following the tradition of group theory, we represent $\nu \in {\mathscr C}_n$ by the symbol
$$
( 1 \ \nu(1) \ \nu^2(1) \ \cdots \ \nu^{n-1}(1) ).
$$
The {\it \bfseries rotation group} ${\mathscr R}_n$ is the cyclic subgroup of ${\mathscr S}_n$ generated by the $n$-cycle
$$
\rho := (1 \, 2 \, \cdots \, n).
$$
Elements of ${\mathscr R}_n \cap {\mathscr C}_n$ are called {\it \bfseries rotation cycles}. Thus, $\nu \in {\mathscr C}_n$ is a rotation cycle if and only if $\nu = \rho^m$ for some integer $1 \leq m < n$ with $\gcd(m,n)=1$. The reduced fraction $m/n$ is called the {\it \bfseries rotation number} of $\rho^m$. \vspace{6pt}
The rotation group ${\mathscr R}_n$ acts on ${\mathscr C}_n$ by conjugation. We refer to each orbit of this action as a {\it \bfseries combinatorial type} in ${\mathscr C}_n$. The combinatorial type of an $n$-cycle $\nu$ is denoted by $[\nu]$. It is easy to see that $\nu$ is a rotation cycle if and only if $[\nu]$ consists of $\nu$ only. In fact, if $\rho \nu \rho^{-1} = \nu$, then $\nu=\rho^m$ where $m=\nu(n)$.
\subsection{The symmetry order}
The combinatorial type of $\nu \in {\mathscr C}_n$ can be described explicitly as follows. Let
$$
G_\nu := \big\{ \rho^j : \rho^j \nu \rho^{-j}=\nu \big\}
$$
be the stabilizer group of $\nu$ under the action of ${\mathscr R}_n$. We call the order of $G_\nu$ the {\it \bfseries symmetry order} of $\nu$ and denote it by $\operatorname{sym}(\nu)$. If $r:=n/\operatorname{sym}(\nu)$, it follows that $G_\nu$ is generated by the power $\rho^r$ and the combinatorial type of $\nu$ is the $r$-element set
$$
[\nu]= \big\{ \nu, \rho \nu \rho^{-1}, \ldots, \rho^{r-1} \nu \rho^{-(r-1)} \big\}.
$$
Since $\operatorname{sym}(\rho \nu \rho^{-1})=\operatorname{sym}(\nu)$, we can define the symmetry order of a combinatorial type unambiguously as that of any cycle representing it:
$$
\operatorname{sym}([\nu]):=\operatorname{sym}(\nu).
$$
Evidently there are no $2$- or $3$-cycles of symmetry order $1$, and there is no $4$-cycle of symmetry order $2$. By contrast, it is not hard to see that for every $n \geq 5$ and every divisor $s$ of $n$ there is a $\nu \in {\mathscr C}_n$ with $\operatorname{sym}(\nu)=s$. \vspace{6pt}
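The stabilizer computation translates directly into code. In the Python sketch below (our names, not from the paper), cycles are given in one-line notation $\nu = (\nu(1), \ldots, \nu(n))$, and $\operatorname{sym}(\nu)$ is computed as the number of $j$ with $\rho^j \nu \rho^{-j} = \nu$:

```python
def conjugate_by_rho_power(nu, j):
    # one-line notation for rho^j * nu * rho^(-j), where rho = (1 2 ... n):
    # shift the argument back by j, apply nu, shift the value forward by j
    n = len(nu)
    return tuple((nu[(x - j) % n] - 1 + j) % n + 1 for x in range(n))

def sym(nu):
    # order of the stabilizer of nu under conjugation by the rotation group
    nu = tuple(nu)
    return sum(conjugate_by_rho_power(nu, j) == nu for j in range(len(nu)))

# rotation cycles rho and rho^2 in C_5 have symmetry order 5 ...
assert sym((2, 3, 4, 5, 1)) == 5
assert sym((3, 4, 5, 1, 2)) == 5
# ... while the non-rotation cycle pi = (1 2 3 5 4) has symmetry order 1
assert sym((2, 3, 5, 1, 4)) == 1
```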
Of the $(n-1)!$ elements of ${\mathscr C}_n$, precisely $\varphi(n)$ are rotation cycles. Here $\varphi$ is Euler's totient function defined by
$$
\varphi(n) := \# \, \{ m \in {\mathbb Z}: 1 \leq m \leq n \ \text{and} \ \gcd(m,n)=1 \}.
$$
If $\nu_1, \ldots, \nu_T$ are representatives of the distinct combinatorial types in ${\mathscr C}_n$, then
$$
(n-1)! = \sum_{\nu_i \in {\mathscr R}_n } \# [ \nu_i] + \sum_{\nu_i \notin {\mathscr R}_n } \# [ \nu_i] = \varphi(n)+ \sum_{\nu_i \notin {\mathscr R}_n } \# [ \nu_i].
$$
When $n$ is a prime number, we have $\varphi(n)=n-1$ and each $\# [\nu_i]$ in the far right sum is $n$. In this case the number of distinct combinatorial types in ${\mathscr C}_n$ is given by
\begin{equation}\label{hayoola}
T = (n-1) + \frac{(n-1)!-(n-1)}{n} = \frac{(n-1)!+(n-1)^2}{n}.
\end{equation}
Observe that $T$ being an integer gives a simple proof of Wilson's theorem, according to which $(n-1)! \equiv -1 \pmod{n}$ whenever $n$ is prime.
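Formula \eqref{hayoola} and the integrality observation can be checked mechanically (a short Python sketch with our own names):

```python
from math import factorial

def num_types_prime(n):
    # T = ((n-1)! + (n-1)^2) / n for prime n; integrality is Wilson's theorem
    numerator = factorial(n - 1) + (n - 1)**2
    assert numerator % n == 0
    return numerator // n

assert num_types_prime(5) == 8   # e.g. C_5 falls into 8 combinatorial types
for p in (2, 3, 5, 7, 11, 13, 17):
    assert factorial(p - 1) % p == p - 1   # Wilson: (p-1)! = -1 (mod p)
    num_types_prime(p)                     # integrality holds at every prime
```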
\begin{figure}[t]
\centering
\begin{overpic}[width=\textwidth]{bm4.pdf}
\put (56,31) {$+$ four rotated copies of each}
\put (56,10) {$+$ four rotated copies of each}
\put (4,10) {${\mathscr C}_{5,3}^1$}
\put (4,31) {${\mathscr C}_{5,2}^1$}
\put (4,52) {${\mathscr C}_{5,1}^5$}
\put (21.5,52) {\footnotesize{$\rho$}}
\put (43.5,52) {\footnotesize{$\rho^2$}}
\put (65.5,52) {\footnotesize{$\rho^3$}}
\put (88,52) {\footnotesize{$\rho^4$}}
\put (21,33) {\footnotesize{$\pi$}}
\put (42,33) {\footnotesize{$\pi^{-1}$}}
\put (24,11) {\footnotesize{$\nu$}}
\put (46,11) {\footnotesize{$\nu^{-1}$}}
\end{overpic}
\caption{\sl The decomposition of ${\mathscr C}_5$ into subsets ${\mathscr C}_{5,d}^s$ of cycles with degree $d$ and symmetry order $s$, where the only admissible pairs are $(d,s)=(1,5), (2,1), (3,1)$. See Examples \ref{cyc5} and \ref{ajab}.}
\label{c5}
\end{figure}
\begin{example}\label{cyc5}
The $4!=24$ cycles in ${\mathscr C}_5$ fall into $(4!+4^2)/5=8$ distinct combinatorial types. The $4$ rotation cycles
\begin{align*}
\rho & = (1 \, 2 \, 3 \, 4 \, 5) & \rho^2 & = (1 \, 3 \, 5 \, 2 \, 4) \\
\rho^3 & = (1 \, 4 \, 2 \, 5 \, 3) & \rho^4 & = (1 \, 5 \, 4 \, 3 \, 2)
\end{align*}
(of rotation numbers $1/5, 2/5, 3/5, 4/5$) form $4$ distinct combinatorial types. The remaining $20$ cycles have symmetry order $1$, so they fall into $4$ combinatorial types each containing $5$ elements. These types are represented by
\begin{align*}
\pi & = (1 \, 2 \, 3 \, 5 \, 4) & \pi^{-1} & =(1 \, 4 \, 5 \, 3 \, 2) \\
\nu & =(1 \, 2 \, 4 \, 5 \, 3) & \nu^{-1} & =(1 \, 3 \, 5 \, 4 \, 2).
\end{align*}
Compare \figref{c5}.
\end{example}
\subsection{Descent number vs. degree}\label{DNUM}
A permutation $\nu \in {\mathscr S}_n$ has a {\it \bfseries descent} at $i \in \{ 1, \ldots, n-1 \}$ if $\nu(i)>\nu(i+1)$. The total number of such $i$ is called the {\it \bfseries descent number} of $\nu$:
$$
\operatorname{des}(\nu) := \# \big\{ 1 \leq i \leq n-1 : \nu(i) > \nu(i+1) \big\}
$$
Note that $0 \leq \operatorname{des}(\nu) \leq n-1$. The descent number is a basic tool in enumerative combinatorics (see for example \cite{St}). \vspace{6pt}
In this paper we will be working with a rotationally invariant version of the descent number which we call {\it \bfseries degree}.\footnote{As noted in the introduction, our degree is synonymous to what combinatorists have called ``cyclic descent number'' in recent years. Somewhat unfortunately, this same invariant is referred to as ``descent number'' in \cite{PZ}.} It simply amounts to counting $i=n$ as a descent if $\nu(n)>\nu(1)$:
$$
\deg(\nu) := \begin{cases} \operatorname{des}(\nu) & \quad \text{if} \ \nu(n)<\nu(1) \\ \operatorname{des}(\nu)+1 & \quad \text{if} \ \nu(n)>\nu(1). \end{cases}
$$
The terminology comes from the following topological characterization (see \cite{M} and \cite{PZ}): Take any set $\{ x_1, \ldots, x_n \}$ of distinct points on the circle in positive cyclic order. Then $\deg(\nu)$ is the minimum degree of a continuous covering map $f: {\mathbb R}/{\mathbb Z} \to {\mathbb R}/{\mathbb Z}$ which acts on this set as the permutation $\nu$ in the sense that $f(x_i)=x_{\nu(i)}$ for all $i$. A simple model which realizes this minimum degree is the covering map that sends each counter-clockwise arc $[x_i,x_{i+1}]$ affinely onto $[x_{\nu(i)},x_{\nu(i+1)}]$. \vspace{6pt}
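These definitions translate directly into code. The Python sketch below (our helper names) converts cycle notation to one-line notation and computes $\operatorname{des}$ and $\deg$ for the representatives of \exref{cyc5}:

```python
def cycle_to_one_line(cycle):
    # cycle = (1, nu(1), nu^2(1), ...) on {1, ..., n} -> one-line notation
    n = len(cycle)
    nu = [0] * n
    for i in range(n):
        nu[cycle[i] - 1] = cycle[(i + 1) % n]
    return tuple(nu)

def des(nu):
    # descent number: descents at i = 1, ..., n-1
    return sum(nu[i] > nu[i + 1] for i in range(len(nu) - 1))

def deg(nu):
    # degree: also count i = n as a descent when nu(n) > nu(1)
    return des(nu) + (1 if nu[-1] > nu[0] else 0)

rho = cycle_to_one_line((1, 2, 3, 4, 5))   # (2, 3, 4, 5, 1)
pi = cycle_to_one_line((1, 2, 3, 5, 4))
nu = cycle_to_one_line((1, 2, 4, 5, 3))
assert (deg(rho), deg(pi), deg(nu)) == (1, 2, 3)
```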
\begin{example}\label{ajab}
The cycle $\nu=(1 \, 2 \, 4 \, 5 \, 3) \in {\mathscr C}_5$ has descents at $i=2$, $i=4$ and $i=5$, so $\deg(\nu)=3$. The eight representative cycles in ${\mathscr C}_5$ described in \exref{cyc5} have the following degrees:
\begin{align*}
& \deg(\rho)=\deg(\rho^2)=\deg(\rho^3)=\deg(\rho^4)=1, \\
& \deg(\pi)=\deg(\pi^{-1})=2, \\
& \deg(\nu)=\deg(\nu^{-1})=3.
\end{align*}
The four cycles of degree $1$ have symmetry order $5$, and the remaining cycles have symmetry order $1$. Compare \figref{c5}.
\end{example}
The following statement summarizes the basic properties of degree for cycles:
\begin{theorem}\label{degp}
Let $\nu \in {\mathscr C}_n$ with $\operatorname{sym}(\nu)=s$ and $\deg(\nu)=d$. \vspace{6pt}
\begin{enumerate}
\item[(i)]
$1 \leq d \leq n-2$ if $n \geq 3$. \vspace{6pt}
\item[(ii)]
$d=1 \Longleftrightarrow s=n \Longleftrightarrow \nu \in {\mathscr R}_n$. \vspace{6pt}
\item[(iii)]
$s | \gcd(n,d-1)$. \vspace{6pt}
\item[(iv)]
$\deg(\rho \nu) = \deg(\nu \rho) = \deg(\rho \nu \rho^{-1})=d$.
\end{enumerate}
\end{theorem}
Apart from (iii) the other claims are rather straightforward; see \cite[Lemma 2.5, Theorem 2.8, and Theorem 6.4]{PZ}. I'm informed by a referee that parts (ii) and (iv) also appear in \cite[Lemma 1]{K}. \vspace{6pt}
Observe that by (iv), the degree of a combinatorial type is well-defined:
$$
\deg([\nu]):=\deg(\nu).
$$
\begin{example}\label{hasht}
Take $\nu \in {\mathscr C}_8$ with $\operatorname{sym}(\nu)=s$ and $\deg(\nu)=d$. By the above theorem $1 \leq d \leq 6$, with $d=1$ if and only if $s=8$. Moreover, $s$ is a common divisor of $8$ and $d-1$. It follows that the only admissible pairs $(d,s)$ are $(1,8), (2,1), (3,1), (3,2), (4,1), (5,1), (5,2), (5,4), (6,1)$.
\end{example}
\subsection{Decompositions of ${\mathscr C}_n$}
Fix $n \geq 3$ and consider the following cross sections of ${\mathscr C}_n$ by the symmetry order and degree:
\begin{align*}
{\mathscr C}_n^s & := \{ \nu \in {\mathscr C}_n : \operatorname{sym}(\nu)=s \} \\
{\mathscr C}_{n,d} & := \{ \nu \in {\mathscr C}_n : \deg(\nu)=d \} \\
{\mathscr C}_{n,d}^s & := {\mathscr C}_n^s \cap {\mathscr C}_{n,d}.
\end{align*}
Observe that in our notation the symmetry order always appears as a superscript and the degree as a subscript after $n$. By \thmref{degp},
$$
{\mathscr C}_n^n = {\mathscr C}_{n,1} = {\mathscr C}_{n,1}^n = {\mathscr C}_n \cap {\mathscr R}_n
$$
and we have the decompositions
\begin{align*}
{\mathscr C}_n & = \bigcup_{s|n} {\mathscr C}_n^s = \bigcup_{d=1}^{n-2} {\mathscr C}_{n,d} & & \\
{\mathscr C}_n^s & = \bigcup_{j=1}^{\lfloor (n-3)/s \rfloor} {\mathscr C}_{n,js+1}^s & & \text{if} \ s|n, \ s<n \\
{\mathscr C}_{n,d} & = \bigcup_{s|\gcd(n,d-1)} {\mathscr C}_{n,d}^s & & \text{if} \ 2 \leq d \leq n-2.
\end{align*}
Hence the cardinalities
\begin{align*}
N_n^s & := \# \, {\mathscr C}_n^s \\
N_{n,d} & := \# \, {\mathscr C}_{n,d} \\
N_{n,d}^s & := \# \, {\mathscr C}_{n,d}^s
\end{align*}
satisfy the following relations:
\begin{align}
N_n^n & = N_{n,1} = N_{n,1}^n = \varphi(n) & & \notag\\
(n-1)! & = \sum_{s|n} N_n^s = \sum_{d=1}^{n-2} N_{n,d} \notag \\
N_n^s & = \sum_{j=1}^{\lfloor (n-3)/s \rfloor} N_{n,js+1}^s & & \text{if} \ s|n, \ s<n \label{foo} \\
N_{n,d} & = \sum_{s|\gcd(n,d-1)} N_{n,d}^s & & \text{if} \ 2 \leq d \leq n-2. \label{loo}
\end{align}
Let us also consider the counts for the corresponding combinatorial types
\begin{align*}
T_n & := \# \, \{ [\nu]: \nu \in {\mathscr C}_n \} \\
T_n^s & := \# \, \{ [\nu]: \nu \in {\mathscr C}_n^s \} \\
T_{n,d} & := \# \, \{ [\nu]: \nu \in {\mathscr C}_{n,d} \} \\
T_{n,d}^s & := \# \, \{ [\nu]: \nu \in {\mathscr C}_{n,d}^s \}.
\end{align*}
Evidently
$$
T_{n,d}^s = \frac{s}{n} \ N_{n,d}^s \qquad \text{and} \qquad T_n^s = \frac{s}{n} \ N_n^s
$$
and we have the following relations:
\begin{align}
T_n^n & = T_{n,1} =T_{n,1}^n = \varphi(n) \notag\\
T_n & = \frac{1}{n} \sum_{s|n} s N_n^s \label{soo} \\
T_{n,d} & = \frac{1}{n} \sum_{s|\gcd(n,d-1)} s N_{n,d}^s \qquad \text{if} \ 2 \leq d \leq n-2.\notag
\end{align}
Of course knowing the joint distribution $N_{n,d}^s$ would allow us to count all the $N$'s and $T$'s. However, finding a closed formula for $N_{n,d}^s$ seems to be difficult (a sample computation can be found in \S \ref{ttnn}). In \S \ref{subsec:sym} we derive a formula for $N_n^s$ by a direct count which in turn leads to a formula for $T_n$ (see Theorems \ref{NQR} and \ref{NQ}). In \S \ref{CNQD} we find a formula for $N_{n,d}$ indirectly by relating cycles in ${\mathscr C}_{n,d}$ to periodic orbits of an associated dynamical system acting on the circle (see \thmref{jeeg}).
\section{The distribution of symmetry order} \label{sec:so}
\subsection{The numbers $N_n^s$}\label{subsec:sym}
We begin with the simplest of our counting problems, that is, finding a formula for $N_n^s$. We will make use of the M\"{o}bius inversion formula
\begin{equation}\label{MIF}
g(m) = \sum_{k|m} f(k) \quad \Longleftrightarrow \quad f(m)= \sum_{k|m} \mu(k) \, g\Big( \frac{m}{k} \Big)
\end{equation}
on a pair of arithmetical functions $f,g$. Here $\mu$ is the M\"{o}bius function uniquely determined by the conditions $\mu(1):=1$ and $\sum_{k|m} \mu(k)=0$ for $m>1$. Applying \eqref{MIF} to the relation
$$
m = \sum_{k|m} \varphi(k)
$$
gives the classical identity
\begin{equation}\label{muu}
\varphi(m) = \sum_{k|m} \frac{m}{k} \, \mu(k) = \sum_{k|m} k \mu\Big( \frac{m}{k} \Big).
\end{equation}
\begin{theorem}\label{NQR}
For every $n \geq 2$ and every divisor $s$ of $n$,
\begin{equation}\label{ninu}
N_n^s = \frac{1}{n} \sum_{j|\frac{n}{s}} \mu (j) \, \varphi(sj) \, (sj)^{\frac{n}{sj}} \Big( \frac{n}{sj} \Big)!
\end{equation}
\end{theorem}
When $s=n$ the formula reduces to $N_n^n= (1/n) \mu(1) \varphi(n) n = \varphi(n)$ which agrees with our earlier count.
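Formula \eqref{ninu} is straightforward to evaluate by machine. The following Python sketch (the helpers \texttt{mobius} and \texttt{phi} are naive implementations supplied here for illustration, not part of the paper) reproduces several entries of Table \ref{tab1}:

```python
import math

def mobius(m):
    # Moebius function mu(m), by trial division
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # m is not squarefree
            result = -result
        p += 1
    return -result if m > 1 else result

def phi(m):
    # Euler totient, by direct count
    return sum(1 for k in range(1, m + 1) if math.gcd(k, m) == 1)

def N(n, s):
    # formula (ninu): N_n^s for a divisor s of n
    r = n // s
    total = sum(mobius(j) * phi(s * j) * (s * j) ** (n // (s * j))
                * math.factorial(n // (s * j))
                for j in range(1, r + 1) if r % j == 0)
    assert total % n == 0
    return total // n

# spot checks against Table 1
table1 = {(4, 1): 4, (5, 5): 4, (6, 1): 108, (6, 2): 6, (6, 3): 4, (8, 1): 4992}
```

For $s=n$ the sum has the single term $j=1$ and the function returns $\varphi(n)$, as noted above.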
\begin{proof}
Set $r:=n/s$. We have $\rho^r \nu \rho^{-r}=\nu$ if and only if $\operatorname{sym}(\nu)$ is a multiple of $s$ if and only if $\nu \in {\mathscr C}_n^{n/j}$ for some $j|r$. Denoting $\nu$ by $(\nu_1 \ \nu_2 \ \cdots \ \nu_n)$, this condition can be written as
$$
(\rho^r(\nu_1) \ \rho^r(\nu_2) \ \cdots \ \rho^r(\nu_n)) = (\nu_1 \ \nu_2 \ \cdots \ \nu_n),
$$
which holds if and only if there is an integer $m$ such that
\begin{equation}\label{yek}
\rho^r(\nu_i) = \nu_{\rho^m(i)} \qquad \text{for all} \ i.
\end{equation}
The rotations $\rho^r: i \mapsto i+r$ and $\rho^m: i \mapsto i+m \ (\operatorname{mod} n)$ have orders $n/\gcd(r,n)=n/r$ and $n/\gcd(m,n)$ respectively. By \eqref{yek}, these orders are equal, hence
$$
r=\gcd(m,n).
$$
Setting $t:=m/r$ gives $\gcd(t,s)=1$, so there are at most $\varphi(s)$ possibilities for $t$ and therefore for $m$. The action of the rotation $\rho^m$ partitions ${\mathbb Z}/ n{\mathbb Z}$ into $r$ disjoint orbits each consisting of $s$ elements and these $r$ orbits are represented by $1, \ldots, r$. In fact, if
$$
i+ \ell m = i'+ \ell' m \ (\operatorname{mod} n) \quad \text{for some} \ 1 \leq i,i' \leq r \ \text{and} \ 1 \leq \ell, \ell' \leq s,
$$
then $i-i' = m(\ell'-\ell) \ (\operatorname{mod} n)$ so $i = i' \ (\operatorname{mod} r)$ which gives $i=i'$. Moreover, $\ell m = \ell' m \ (\operatorname{mod} n)$ so $\ell t = \ell' t \ (\operatorname{mod} s)$. Since $\gcd(t,s)=1$, this implies $\ell = \ell' \ (\operatorname{mod} s)$ which shows $\ell=\ell'$. \vspace{6pt}
Now \eqref{yek} shows that for each of the $\varphi(s)$ choices of $m$, the cycle $\nu$ is completely determined by the integers $\nu_1, \ldots, \nu_r$, and different choices of $m$ lead to different cycles. We may always assume $\nu_1=1$. This leaves $n-s$ choices for $\nu_2$ (corresponding to the elements of $\{ 1, \ldots, n \}$ that are not in the orbit of $\nu_1=1$ under $\rho^m$), $n-2s$ choices for $\nu_3$, $\ldots$ and $n-(r-1)s=s$ choices for $\nu_r$. Thus, the total number of choices for $\nu$ is
$$
\varphi(s) (n-s)(n-2s) \cdots s = \varphi(s)\, s^{r-1} \, (r-1)! = \frac{1}{n} \varphi(s) \, s^r \, r!
$$
This proves
$$
\sum_{j|r} N_n^{n/j} = \frac{1}{n} \varphi \Big( \frac{n}{r} \Big) \Big( \frac{n}{r} \Big)^r r!
$$
An application of the M\"{o}bius inversion formula \eqref{MIF} then gives
\begin{align*}
N_n^s = N_n^{n/r} & = \frac{1}{n} \sum_{j|r} \mu (j) \, \varphi\Big( \frac{nj}{r} \Big) \Big( \frac{nj}{r} \Big)^{\frac{r}{j}} \Big( \frac{r}{j} \Big)! \\
& =\frac{1}{n} \sum_{j|\frac{n}{s}} \mu (j) \, \varphi(sj) \, (sj)^{\frac{n}{sj}} \Big( \frac{n}{sj} \Big)! \qedhere
\end{align*}
\end{proof}
Table \ref{tab1} shows the values of $N_n^s$ for $2 \leq n \leq 15$. Notice that $N_2^1=N_3^1=N_4^2=0$ but all other values are positive. Moreover, as $n$ gets larger the distribution $N_n^s$ appears to be overwhelmingly concentrated at $s=1$. In fact, an elementary exercise gives the asymptotic estimate $N_n^1 \sim (n-1)!$ as $n \to \infty$ (compare \thmref{asymp} below for a similar analysis). This justifies the intuition that the chance of a randomly chosen $n$-cycle having any non-trivial rotational symmetry tends to zero as $n \to \infty$.
{\tiny
\begin{table}
\centering
\begin{tabular}{r|crrrrrrrrrrrrrrr}
\toprule
\diagbox{$n$}{$s$} & & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ & $13$ & $14$ & $15$ \\
\midrule
$2$ & & $0$ & $1$ & & & & & & & & & & & & & \\
$3$ & & $0$ & $-$ & $2$ & & & & & & & & & & & & \\
$4$ & & $4$ & $0$ & $-$ & $2$ & & & & & & & & & & & \\
$5$ & & $20$ & $-$ & $-$ & $-$ & $4$ & & & & & & & & & & \\
$6$ & & $108$ & $6$ & $4$ & $-$ & $-$ & $2$ & & & & & & & & & \\
$7$ & & $714$ & $-$ & $-$ & $-$ & $-$ & $-$ & $6$ & & & & & & & & \\
$8$ & & $4992$ & $40$ & $-$ & $4$ & $-$ & $-$ & $-$ & $4$ & & & & & & &\\
$9$ & & $40284$ & $-$ & $30$ & $-$ & $-$ & $-$ & $-$ & $-$ & $6$ & & & & & &\\
$10$ & & $362480$ & $380$ & $-$ & $-$ & $16$ & $-$ & $-$ & $-$ & $-$ & $4$ & & & & & \\
$11$ & & $3628790$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $10$ & & & & \\
$12$ & & $39912648$ & $3768$ & $312$ & $60$ & $-$ & $8$ & $-$ & $-$ & $-$ & $-$ & $-$ & $4$ & & & \\
$13$ & & $479001588$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $12$ & & \\
$14$ & & $6226974684$ & $46074$ & $-$ & $-$ & $-$ & $-$ & $36$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $6$ & \\
$15$ & & $87178287120$ & $-$ & $3880$ & $-$ & $192$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $8$ \\
\bottomrule
\end{tabular}
\vspace*{2mm}
\caption{\sl The distributions $N_n^s$ for $2 \leq n \leq 15$.}
\label{tab1}
\end{table}
}
\subsection{The numbers $T_n$}
The count \eqref{ninu} led us to the following formula for the number of distinct combinatorial types of $n$-cycles. It turns out that this formula is not new: it appears in the {\it On-line Encyclopedia of Integer Sequences} as the number of $2$-colored patterns of an $n \times n$ chessboard \cite{Sl}.
\begin{theorem}\label{NQ}
For every $n \geq 2$,
\begin{equation}\label{tq}
T_n = \frac{1}{n^2} \sum_{j|n} (\varphi(j))^2 \, j^{\frac{n}{j}} \Big( \frac{n}{j} \Big)!
\end{equation}
\end{theorem}
Observe that for prime $n$ the formula reduces to
$$
T_n = \frac{1}{n^2} \big( (\varphi(1))^2 \, n! + (\varphi(n))^2 \, n \big) = \frac{1}{n} ( (n-1)! + (n-1)^2 )
$$
which agrees with our derivation in \eqref{hayoola}. Table
\ref{teeq} shows the values of $T_n$ for $2 \leq n \leq 15$.
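Formula \eqref{tq} is equally easy to check numerically; the Python sketch below (with a naive totient helper, supplied here for illustration) reproduces all the entries of Table \ref{teeq}:

```python
import math

def phi(m):
    # Euler totient, by direct count
    return sum(1 for k in range(1, m + 1) if math.gcd(k, m) == 1)

def T(n):
    # formula (tq): number of combinatorial types of n-cycles
    total = sum(phi(j) ** 2 * j ** (n // j) * math.factorial(n // j)
                for j in range(1, n + 1) if n % j == 0)
    assert total % (n * n) == 0
    return total // (n * n)

table2 = [1, 2, 3, 8, 24, 108, 640, 4492, 36336, 329900,
          3326788, 36846288, 444790512, 5811886656]  # T_2, ..., T_15
```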
\begin{proof}
By \eqref{soo} and \eqref{ninu},
$$
T_n = \frac{1}{n} \sum_{s|n} s N_n^s = \frac{1}{n^2} \sum_{s|n} \sum_{j|\frac{n}{s}} s \mu(j) \, \varphi(sj) \, (sj)^{\frac{n}{sj}} \Big( \frac{n}{sj} \Big)!
$$
The sum interchange formula
$$
\sum_{s|n} \sum_{j|\frac{n}{s}} f(j,s) = \sum_{j|n} \sum_{s|j} f\Big( \frac{j}{s},s \Big)
$$
then gives
\begin{align*}
T_n & = \frac{1}{n^2} \sum_{j|n} \sum_{s|j} s \mu\Big( \frac{j}{s} \Big) \varphi(j) \, j^{\frac{n}{j}} \Big( \frac{n}{j} \Big)! \\
& = \frac{1}{n^2} \sum_{j|n} \Big( \sum_{s|j} s \mu\Big( \frac{j}{s} \Big) \Big) \, \varphi(j) \, j^{\frac{n}{j}} \Big( \frac{n}{j} \Big)! \\
& = \frac{1}{n^2} \sum_{j|n} (\varphi(j))^2 \, j^{\frac{n}{j}} \Big( \frac{n}{j} \Big)! \qquad (\text{by} \ \eqref{muu}). \qedhere
\end{align*}
\end{proof}
{\tiny
\begin{table}
\begin{tabular}{r|cl}
\toprule
$n$ & & $T_n$ \\
\midrule
$2$ & & $1$ \\
$3$ & & $2$ \\
$4$ & & $3$ \\
$5$ & & $8$ \\
$6$ & & $24$ \\
$7$ & & $108$ \\
$8$ & & $640$ \\
$9$ & & $4492$ \\
$10$ & & $36336$ \\
$11$ & & $329900$ \\
$12$ & & $3326788$ \\
$13$ & & $36846288$ \\
$14$ & & $444790512$ \\
$15$ & & $5811886656$ \\
\bottomrule
\end{tabular}
\vspace*{4mm}
\caption{\sl The values of $T_n$ for $2 \leq n \leq 15$.}
\label{teeq}
\end{table}
}
It is evident from Table \ref{teeq} that the sequence $\{ T_n \}$ grows rapidly as $n \to \infty$. More quantitatively, we have the following
\begin{theorem}\label{asymp}
$T_n \sim (n-2)!$ as $n \to \infty$.
\end{theorem}
\begin{proof}
This is easy to verify. By \eqref{tq},
$$
\frac{n^2 \, T_n}{n!} = 1 + \frac{(\varphi(n))^2}{(n-1)!} + \frac{1}{n!} \sum_j (\varphi(j))^2 \, j^\frac{n}{j} \Big( \frac{n}{j} \Big)!
$$
where the sum is taken over all divisors $j$ of $n$ with $1<j<n$. We need
only check that the term on the far right tends to $0$ as $n \to \infty$. If $j|n$ and $1<j<n$, then $j \leq \lfloor n/2 \rfloor$ and $n/j \leq \lfloor n/2 \rfloor$. Hence,
\begin{equation}\label{nina}
(\varphi(j))^2 \, j^{\frac{n}{j}} \Big( \frac{n}{j} \Big)! \leq j^{\frac{n}{j}+2} \Big( \frac{n}{j} \Big)! \leq \left \lfloor \frac{n}{2} \right \rfloor^{\lfloor n/2 \rfloor +2} \left \lfloor \frac{n}{2} \right \rfloor!
\end{equation}
The Stirling formula $k! \sim \sqrt{2\pi k} \ k^k \, e^{-k}$ for large $k$ gives the estimate
$$
\frac{k^k \ k!}{(2k)!} \leq \operatorname{const.} \Big( \frac{e}{4} \Big)^k.
$$
Applying this to \eqref{nina} for $k=\lfloor n/2 \rfloor$, we obtain
$$
\frac{1}{n!} (\varphi(j))^2 \, j^\frac{n}{j} \Big( \frac{n}{j} \Big)! \leq \operatorname{const.} n^2 \Big( \frac{e}{4} \Big)^{\frac{n}{2}}.
$$
This yields the upper bound
$$
\frac{1}{n!} \sum_j (\varphi(j))^2 \, j^\frac{n}{j} \Big( \frac{n}{j} \Big)! \leq \operatorname{const.} n^3 \Big( \frac{e}{4} \Big)^{\frac{n}{2}},
$$
which tends to $0$ as $n \to \infty$.
\end{proof}
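The slow approach of $T_n/(n-2)!$ to $1$ can be observed numerically; a minimal sketch, reusing a naive evaluation of \eqref{tq}:

```python
import math

def phi(m):
    return sum(1 for k in range(1, m + 1) if math.gcd(k, m) == 1)

def T(n):
    total = sum(phi(j) ** 2 * j ** (n // j) * math.factorial(n // j)
                for j in range(1, n + 1) if n % j == 0)
    return total // (n * n)

# the dominant term of T_n is n!/n^2 = (n-2)!(n-1)/n, so the ratio
# approaches 1 roughly like 1 - 1/n
ratios = {n: T(n) / math.factorial(n - 2) for n in (10, 15, 20, 30)}
```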
\section{The distribution of degree}\label{sec:des}
We now turn to the problem of enumerating $n$-cycles with a given degree, using the dynamics of a family of linear endomorphisms of the circle. \vspace{6pt}
\noindent
{\it Convention.} We extend the definition of $N_{n,d}$ to all $d \geq 1$ by setting $N_{n,d}=0$ if $d \geq n-1$.
\subsection{The circle endomorphisms $\mathbf{m}_k$}\label{keek}
For each integer $k \geq 2$, consider the multiplication-by-$k$ map of the circle ${\mathbb R}/{\mathbb Z}$ defined by
$$
\mathbf{m}_k (x):= k x \ \ (\operatorname{mod} \ {\mathbb Z}).
$$
Let ${\mathscr O} = \{ x_1, x_2, \ldots, x_n \}$ be a period $n$ orbit of $\mathbf{m}_k$, where the representatives are labeled so that $0 < x_1 < x_2 < \cdots < x_n<1$. We say that ${\mathscr O}$ {\it \bfseries realizes} the cycle $\nu \in {\mathscr C}_n$ if
$$
\mathbf{m}_k(x_i)=x_{\nu(i)} \qquad \text{for all} \ 1 \leq i \leq n.
$$
The orbit ${\mathscr O}$ is said to realize the combinatorial type $[\nu]$ in ${\mathscr C}_n$ if it realizes the conjugate cycle $\rho^j \nu \rho^{-j}$ for some $j$. \vspace{6pt}
It follows from the topological interpretation of degree in \S \ref{DNUM} that if an orbit of $\mathbf{m}_k$ realizes $\nu \in {\mathscr C}_{n,d}$, then necessarily $k \geq d$. Conversely, if $\nu \in {\mathscr C}_{n,d}$ and $k \geq \max\{ d, 2 \}$, there are always periodic orbits of $\mathbf{m}_k$ that realize the combinatorial type $[\nu]$. These orbits are essentially determined by how they are deployed on the circle relative to the $k-1$ fixed points $0,1/(k-1),\ldots,(k-2)/(k-1)$ of $\mathbf{m}_k$, a useful description that makes it possible to enumerate them effectively. Below is a brief outline of how this is accomplished in \cite{PZ}. \vspace{6pt}
Consider $\{ 1, 2, \ldots, n \}$ with its natural cyclic order $\prec$ on three or more points. We define the {\it \bfseries signature} of $\nu \in {\mathscr C}_{n,d}^s$ as the binary vector $\operatorname{sig}(\nu)=(a_1, \ldots, a_n)$, where
$$
a_i := \begin{cases} 1 & \quad \text{if} \ \nu(i) \prec i \prec i+1 \prec \nu(i+1) \\
0 & \quad \text{otherwise.} \end{cases}
$$
The degree $d=\deg(\nu)$ and symmetry order $s=\operatorname{sym}(\nu)$ are both encoded in the signature: There are precisely $d-1$ entries of $1$ in $\operatorname{sig}(\nu)$. Moreover, $\operatorname{sig}(\nu)$ is $r$-periodic, where $r=n/s$:
\begin{equation}\label{rper}
a_{i+r}=a_i \qquad \text{for all} \ i.
\end{equation}
To any periodic orbit ${\mathscr O}=\{ x_1, x_2, \ldots, x_n \}$ of $\mathbf{m}_k$ that realizes $\nu$, we assign the cumulative {\it \bfseries deployment vector} $\operatorname{dep}({\mathscr O})=(w_1, \ldots, w_{k-1}) \in {\mathbb Z}^{k-1}$ whose components are defined by $w_i := \# ({\mathscr O} \cap (0,i/(k-1)))$. One can check that
\begin{enumerate}
\item[(i)]
$0 \leq w_1 \leq \cdots \leq w_{k-1}=n$, \vspace{6pt}
\item[(ii)]
$a_i=1$ implies $i \in \{ w_1, \ldots, w_{k-1} \}$.
\end{enumerate}
Any integer vector $w=(w_1,\ldots,w_{k-1})$ which satisfies these conditions is called {\it \bfseries $\nu$-admissible}.
\begin{theorem}\label{yagh}
Let $\nu \in {\mathscr C}_{n,d}$ and $k \geq \max\{ d, 2 \}$. For every $\nu$-admissible vector $w \in {\mathbb Z}^{k-1}$ there exists a unique periodic orbit ${\mathscr O}$ of $\mathbf{m}_k$ with $\operatorname{dep}({\mathscr O})=w$ that realizes $\nu$.
\end{theorem}
See \cite[Theorem 7.9]{PZ}. The idea of the proof is to translate the realization problem into the problem of finding the steady state of an associated regular Markov chain. The theorem shows that in order to enumerate the orbits of $\mathbf{m}_k$ that realize $\nu$ it suffices to count the $\nu$-admissible vectors in ${\mathbb Z}^{k-1}$. A routine count shows that this number is
$\binom{n+k-d}{n}$ if $a_n=1$ and $\binom{n+k-d-1}{n}$ if $a_n=0$.
\begin{example}
The $6$-cycle $\nu=(1 \, 3 \, 2 \, 4 \, 6 \, 5)$ has degree $d=3$, symmetry order $s=2$, and signature $\operatorname{sig}(\nu)=(0,0,1,0,0,1)$. For the choice $k=4$, the $\nu$-admissible vectors $(w_1, w_2, w_3) \in {\mathbb Z}^3$ are those that satisfy $0 \leq w_1 \leq w_2 \leq w_3=6$ and $\{ 3,6 \} \subset \{ w_1, w_2, w_3 \}$. There are $\binom{6+4-3}{6}=7$ such vectors:
$$
(0,3,6), (1,3,6), (2,3,6), (3,3,6), (3,4,6), (3,5,6), (3,6,6).
$$
Table \ref{tabnew} shows the corresponding $7$ orbits of $\mathbf{m}_4$, guaranteed by \thmref{yagh}.
\end{example}
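The count in this example is easy to replicate by brute force. A short Python sketch, assuming only conditions (i) and (ii) for $\nu$-admissibility:

```python
import math

n, k, d = 6, 4, 3
sig = (0, 0, 1, 0, 0, 1)  # signature of nu = (1 3 2 4 6 5)
required = {i + 1 for i, a in enumerate(sig) if a == 1}  # here {3, 6}

# condition (i): 0 <= w1 <= w2 <= w3 = n; condition (ii): the indices i
# with a_i = 1 must occur among the components
admissible = [(w1, w2, n)
              for w1 in range(n + 1)
              for w2 in range(w1, n + 1)
              if required <= {w1, w2, n}]
```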
\begin{table}
\begin{tabular}{c|cl}
\toprule
$\operatorname{dep}({\mathscr O})$ & & ${\mathscr O}$ \\
\midrule
$(0,3,6)$ & & $\big\{ \frac{183}{455}, \frac{198}{455}, \frac{277}{455}, \frac{337}{455}, \frac{387}{455}, \frac{438}{455} \big\}$ \\[5pt]
$(1,3,6)$ & & $\big\{ \frac{89}{585}, \frac{254}{585}, \frac{356}{585}, \frac{431}{585}, \frac{461}{585}, \frac{554}{585} \big\}$ \\[5pt]
$(2,3,6)$ & & $\big\{ \frac{43}{315}, \frac{58}{315}, \frac{172}{315}, \frac{232}{315}, \frac{247}{315}, \frac{298}{315} \big\}$ \\[5pt]
$(3,3,6)$ & & $\big\{ \frac{101}{1365}, \frac{251}{1365}, \frac{404}{1365}, \frac{1004}{1365}, \frac{1049}{1365}, \frac{1286}{1365} \big\}$ \\[5pt]
$(3,4,6)$ & & $\big\{ \frac{41}{585}, \frac{71}{585}, \frac{164}{585}, \frac{284}{585}, \frac{449}{585}, \frac{551}{585} \big\}$ \\[5pt]
$(3,5,6)$ & & $\big\{ \frac{22}{315}, \frac{37}{315}, \frac{88}{315}, \frac{148}{315}, \frac{163}{315}, \frac{277}{315} \big\}$ \\[5pt]
$(3,6,6)$ & & $\big\{ \frac{94}{1365}, \frac{139}{1365}, \frac{376}{1365}, \frac{556}{1365}, \frac{706}{1365}, \frac{859}{1365} \big\}$ \\[5pt]
\bottomrule
\end{tabular}
\vspace*{2mm}
\caption{\sl Realizations of the $6$-cycle $(1 \, 3 \, 2 \, 4 \, 6 \, 5)$ under the map $\mathbf{m}_4$, parametrized by their deployment vectors.}
\label{tabnew}
\end{table}
Now take any $\nu \in {\mathscr C}_{n,d}^s$ and let $\operatorname{sig}(\nu)=(a_1,\ldots,a_n)$, $r=n/s$. The combinatorial type $[\nu]$ consists of the conjugates $\rho^{-j} \nu \rho^j$ for $1 \leq j \leq r$, and
$$
\operatorname{sig}(\rho^{-j} \nu \rho^j) = (a_{j+1},\ldots,a_n,a_1,\ldots,a_j).
$$
By the preceding remarks, the number of orbits of $\mathbf{m}_k$ that realize $\rho^{-j} \nu \rho^j$ is ${n+k-d \choose n}$ if $a_j=1$ and is ${n+k-d-1 \choose n}$ if $a_j=0$. By $r$-periodicity \eqref{rper}, the list $a_1, \ldots, a_r$ contains $(d-1)/s$ entries of $1$ and $(n-d+1)/s$ entries of $0$. Thus, the number of distinct orbits of $\mathbf{m}_k$ that realize $[\nu]$ is
$$
\frac{d-1}{s} {n+k-d \choose n} + \frac{n-d+1}{s} {n+k-d-1 \choose n} = \frac{k-1}{s} {n+k-d-1 \choose n-1}.
$$
This proves the following result (compare \cite[Theorem 7.5]{PZ}):
\begin{theorem}\label{real}
If $\nu \in {\mathscr C}_{n,d}^s$ and $k \geq \max \{ d,2 \}$, there are precisely
$$
\frac{k-1}{s} \binom{n+k-d-1}{n-1}
$$
period $n$ orbits of $\mathbf{m}_k$ that realize the combinatorial type $[\nu]$.
\end{theorem}
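The binomial identity used in the derivation above (after clearing the common factor $1/s$) can be confirmed mechanically; a quick Python check over a range of parameters:

```python
import math

# (d-1) C(n+k-d, n) + (n-d+1) C(n+k-d-1, n) = (k-1) C(n+k-d-1, n-1)
identity_ok = all(
    (d - 1) * math.comb(n + k - d, n)
    + (n - d + 1) * math.comb(n + k - d - 1, n)
    == (k - 1) * math.comb(n + k - d - 1, n - 1)
    for n in range(3, 16)
    for d in range(2, n - 1)
    for k in range(max(d, 2), 12)
)
```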
The following corollary is immediate:
\begin{corollary}\label{nib}
For every $k \geq 2$ and $d \geq 1$, the number of period $n$ orbits of $\mathbf{m}_k$ that realize some element of ${\mathscr C}_{n,d}$ is
$$
\frac{k-1}{n} \binom{n+k-d-1}{n-1} N_{n,d}.
$$
\end{corollary}
\begin{proof}
The claim is trivial if $d>k$ since in this case the number of such orbits and the binomial coefficient $\binom{n+k-d-1}{n-1}$ are both $0$. If $2 \leq d \leq k$, then by \thmref{real} for each divisor $s$ of \mbox{$\gcd(n,d-1)$} there are
$$
\frac{k-1}{s} \binom{n+k-d-1}{n-1} T_{n,d}^s = \frac{k-1}{n} \binom{n+k-d-1}{n-1} N_{n,d}^s
$$
period $n$ orbits of $\mathbf{m}_k$ that realize some element of ${\mathscr C}_{n,d}^s$. The result then follows from \eqref{loo} by summing over all such $s$. Finally, since ${\mathscr C}_{n,1}^n={\mathscr C}_{n,1}$, \thmref{real} shows that there are
$$
\frac{k-1}{n} \binom{n+k-2}{n-1} T_{n,1} = \frac{k-1}{n} \binom{n+k-2}{n-1} N_{n,1}
$$
period $n$ orbits of $\mathbf{m}_k$ that realize some element of ${\mathscr C}_{n,1}$.
\end{proof}
\subsection{The numbers $N_{n,d}$}\label{CNQD}
For $k \geq 2$ let $P_n(k)$ denote the number of periodic points of $\mathbf{m}_k$ of period $n$. The periodic points of $\mathbf{m}_k$ whose period is a divisor of $n$ are precisely the $k^n-1$ solutions of the equation $k^n x = x$ (mod ${\mathbb Z}$). Thus,
\begin{equation}\label{masti}
\sum_{r|n} P_r(k) = k^n-1
\end{equation}
and the M\"{o}bius inversion formula gives
\begin{equation}\label{perrr}
P_n(k) = \sum_{r|n} \mu\Big( \frac{n}{r} \Big) (k^r-1).
\end{equation}
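The counts $P_n(k)$ are easy to compute and check against \eqref{masti}; a minimal Python sketch with a naive M\"{o}bius helper:

```python
def mobius(m):
    # Moebius function, by trial division
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def P(n, k):
    # formula (perrr): points of period exactly n under multiplication by k
    return sum(mobius(n // r) * (k ** r - 1)
               for r in range(1, n + 1) if n % r == 0)

# (masti): the points of period dividing n are the k^n - 1 solutions
# of k^n x = x (mod Z)
masti_ok = all(sum(P(r, k) for r in range(1, n + 1) if n % r == 0) == k ** n - 1
               for n in range(1, 11) for k in range(2, 8))
```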
Introduce the integer-valued quantity
$$
\Delta_n(k):= \begin{cases}
\dfrac{P_n(k)}{k-1} & \quad \text{if} \ k \geq 2 \vspace{6pt} \\
\varphi(n) & \quad \text{if} \ k=1. \end{cases}
$$
When $k \geq 2$ we can interpret $\Delta_n(k)$ as the number of period $n$ points of $\mathbf{m}_k$ up to rotations of the form $x \mapsto x+j/(k-1) \, (\operatorname{mod} {\mathbb Z})$. This is because $\mathbf{m}_k$ and the rotation $x \mapsto x+1/(k-1) \, (\operatorname{mod} {\mathbb Z})$ commute, so $x$ has period $n$ under $\mathbf{m}_k$ if and only if $x+1/(k-1)$ does. \vspace{6pt}
By \eqref{perrr}, for every $k \geq 2$,
$$
\Delta_n(k) = \sum_{r|n} \mu \Big( \frac{n}{r} \Big) \, \frac{k^r-1}{k-1} = \sum_{r|n} \mu \Big( \frac{n}{r} \Big) \Big(\sum_{j=0}^{r-1} k^j \Big).
$$
If $k=1$, the sum on the far right reduces to $\sum_{r|n} r \mu (n/r)$ which is equal to $\varphi(n)$ by \eqref{muu}. It follows that
\begin{equation}\label{masq}
\Delta_n(k) = \sum_{r|n} \mu \Big( \frac{n}{r} \Big) \Big(\sum_{j=0}^{r-1} k^j \Big) \qquad \text{for all} \ k \geq 1.
\end{equation}
Since $\mathbf{m}_k$ has $P_n(k)/n$ period $n$ {\it orbits} altogether, \corref{nib} shows that for every $k \geq 2$,
$$
\frac{k-1}{n} \sum_{d=1}^{n-2} \binom{n+k-d-1}{n-1} N_{n,d} = \frac{P_n(k)}{n}
$$
or
\begin{equation}\label{mame1}
\sum_{d=1}^{n-2} \binom{n+k-d-1}{n-1} N_{n,d} = \Delta_n(k).
\end{equation}
This is in fact true for every $k \geq 1$ (the case $k=1$ reduces to $N_{n,1}=\Delta_n(1)=\varphi(n)$).
\begin{remark}\label{ghod}
Since the summand in \eqref{mame1} is zero unless $1 \leq d \leq \min(n-2,k)$, we can replace the upper bound of the sum by $k$.
\end{remark}
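Identity \eqref{mame1} can be verified directly in small cases, for instance for $n=8$, with the row $N_{8,d}$ taken from Table \ref{tab3}; a Python sketch with naive number-theoretic helpers supplied for illustration:

```python
import math

def mobius(m):
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def phi(m):
    return sum(1 for k in range(1, m + 1) if math.gcd(k, m) == 1)

def Delta(n, k):
    # Delta_n(k) = P_n(k)/(k-1) for k >= 2, and phi(n) for k = 1
    if k == 1:
        return phi(n)
    P = sum(mobius(n // r) * (k ** r - 1) for r in range(1, n + 1) if n % r == 0)
    assert P % (k - 1) == 0
    return P // (k - 1)

N8 = {1: 4, 2: 208, 3: 1432, 4: 2336, 5: 980, 6: 80}  # N_{8,d} from Table 3
mame1_ok = all(sum(math.comb(8 + k - d - 1, 7) * N8[d] for d in N8) == Delta(8, k)
               for k in range(1, 10))
```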
\begin{theorem}\label{jeeg}
For every $d \geq 1$,
\begin{equation}\label{tada}
N_{n,d} = \sum_{i=1}^d (-1)^{d-i} \binom{n}{d-i} \Delta_n(i).
\end{equation}
\end{theorem}
In particular, the theorem asserts that the sum on the right-hand side vanishes whenever $d \geq n-1$, in accordance with our convention $N_{n,d}=0$ for such $d$. Table \ref{tab3} shows the values of $N_{n,d}$ for $2 \leq n \leq 12$.
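Formula \eqref{tada} is likewise easy to evaluate; the following Python sketch (helpers as before, supplied for illustration) regenerates rows of Table \ref{tab3} and exhibits the claimed vanishing:

```python
import math

def mobius(m):
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    return -result if m > 1 else result

def phi(m):
    return sum(1 for k in range(1, m + 1) if math.gcd(k, m) == 1)

def Delta(n, k):
    if k == 1:
        return phi(n)
    P = sum(mobius(n // r) * (k ** r - 1) for r in range(1, n + 1) if n % r == 0)
    return P // (k - 1)

def Nnd(n, d):
    # formula (tada)
    return sum((-1) ** (d - i) * math.comb(n, d - i) * Delta(n, i)
               for i in range(1, d + 1))

row8 = [Nnd(8, d) for d in range(1, 7)]
row6 = [Nnd(6, d) for d in range(1, 5)]
```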
{\tiny
\begin{table}
\centering
\begin{tabular}{r|crrrrrrrrrr}
\toprule
\diagbox{$n$}{$d$} & & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ \\
\midrule
$2$ & & $1$ & & & & & & & & & \\
$3$ & & $2$ & & & & & & & & & \\
$4$ & & $2$ & $4$ & & & & & & & & \\
$5$ & & $4$ & $10$ & $10$ & & & & & & & \\
$6$ & & $2$ & $42$ & $54$ & $22$ & & & & & & \\
$7$ & & $6$ & $84$ & $336$ & $252$ & $42$ & & & & & \\
$8$ & & $4$ & $208$ & $1432$ & $2336$ & $980$ & $80$ & & & & \\
$9$ & & $6$ & $450$ & $5508$ & $16548$ & $14238$ & $3402$ & $168$ & & & \\
$10$ & & $4$ & $950$ & $19680$ & $99250$ & $153860$ & $77466$ & $11320$ & $350$ & & \\
$11$ & & $10$ & $1936$ & $66616$ & $534688$ & $1365100$ & $1233760$ & $389224$ & $36784$ & $682$ & \\
$12$ & & $4$ & $3972$ & $217344$ & $2671560$ & $10568280$ & $15593376$ & $8893248$ & $1851096$ & $116580$ & $1340$ \\
\bottomrule
\end{tabular}
\vspace*{4mm}
\caption{\sl The distributions $N_{n,d}$ for $2 \leq n \leq 12$.}
\label{tab3}
\end{table}
}
\begin{proof}
This is a form of inversion for binomial coefficients. Use \eqref{mame1} to write
\begin{align*}
& \hspace{5mm} \sum_{i=1}^d (-1)^{d-i} \binom{n}{d-i} \Delta_n(i) & \\
& = \sum_{i=1}^d \sum_{j=1}^{n-2} (-1)^{d-i} \binom{n}{d-i} \binom{n+i-j-1}{n-1} N_{n,j} & \\
& = \sum_{i=1}^d \sum_{j=1}^i (-1)^{d-i} \binom{n}{d-i} \binom{n+i-j-1}{n-1} N_{n,j} & (\text{by \remref{ghod}}) \\
& = \sum_{j=1}^d \left( \sum_{i=j}^d (-1)^{d-i} \binom{n}{d-i} \binom{n+i-j-1}{n-1} \right) N_{n,j}. &
\end{align*}
Thus, \eqref{tada} is proved once we check that
\begin{equation}\label{saq1}
\sum_{i=j}^d (-1)^{d-i} \binom{n}{d-i} \binom{n+i-j-1}{n-1} = 0 \qquad \text{for} \ j<d.
\end{equation}
Introduce the new variables $a:=i-j$ and $b:=d-j>0$ so \eqref{saq1} takes the form
\begin{equation}\label{saq2}
\sum_{a=0}^b (-1)^a \binom{n}{b-a} \binom{n+a-1}{n-1} = 0.
\end{equation}
The identity
$$
\binom{n}{b-a} \binom{n+a-1}{n-1} = \frac{n}{b} \ \binom{n+a-1}{b-1} \binom{b}{a}
$$
shows that \eqref{saq2} is in turn equivalent to
\begin{equation}\label{saq3}
\sum_{a=0}^b (-1)^a \binom{n+a-1}{b-1} \binom{b}{a} = 0.
\end{equation}
To prove \eqref{saq3}, consider the binomial expansion
$$
P(x):=x^{n-1}(x+1)^b = \sum_{a=0}^b \binom{b}{a} x^{n+a-1}
$$
and differentiate it $b-1$ times with respect to $x$ to get
$$
P^{(b-1)}(x)= (b-1)! \sum_{a=0}^b \binom{n+a-1}{b-1} \binom{b}{a} x^{n+a-b}.
$$
Since $P$ has a zero of order $b$ at $x=-1$, we have $P^{(b-1)}(-1)=0$ and \eqref{saq3} follows.
\end{proof}
As an application of \thmref{jeeg}, we record the following result which will be invoked in \S \ref{sec:stat}:
\begin{theorem}
The generating function $G_n(x):= \sum_{d=1}^{n-2} N_{n,d} \, x^d$ has the expansion
\begin{equation}\label{genG}
G_n(x)=(1-x)^n \sum_{i \geq 1} \Delta_n(i) \, x^i.
\end{equation}
\end{theorem}
This should be viewed as an equality between formal power series. It is a true equality for $|x|<1$ where the series on the right converges absolutely.\footnote{This is because $\Delta_n(i)$ grows like $i^{n-1}$ for fixed $n$ as $i \to \infty$; compare \lemref{boz}.}
\begin{proof}
For each $d \geq 1$ the coefficient of $x^d$ in the product
$$
(1-x)^n \sum_{i \geq 1} \Delta_n(i) \, x^i = \sum_{j=0}^n (-1)^j \binom{n}{j} x^j \ \cdot \ \sum_{i \geq 1} \Delta_n(i) \, x^i
$$
is $\sum_{i=1}^d (-1)^{d-i} \binom{n}{d-i} \, \Delta_n(i)$. This is $N_{n,d}$ by \eqref{tada}.
\end{proof}
\subsection{The numbers $T_{n,d}$}\label{ttnn}
Our counts for the numbers $N_n^s$ and $N_{n,d}$ lead to the system of linear equations \eqref{foo} and \eqref{loo} on the $N^s_{n,d}$, but such systems are typically under-determined. Thus, additional information is needed to find the $N^s_{n,d}$ and therefore $T_{n,d}$. The following example serves to illustrate this point; there we use the dynamics of $\mathbf{m}_k$ to obtain the additional information.
\begin{example}\label{teegg}
For $n=8$ there are nine admissible pairs
$$
(d,s)= (1,8), (2,1), (3,1), (3,2), (4,1), (5,1), (5,2), (5,4), (6,1)
$$
(see \exref{hasht}). We record the values of $N^s_{8,d}$ on a grid as shown in \figref{t8}. By \eqref{foo} and \eqref{loo}, the values along the $s$-th row add up to $N_8^s$ and those along the $d$-th column add up to $N_{8,d}$, both available from Tables \ref{tab1} and \ref{tab3}. This immediately gives five of the required nine values:
$$
N^8_{8,1}=4, \quad N^1_{8,2}=208, \quad N^1_{8,4}=2336, \quad N^4_{8,5}=4, \quad N^1_{8,6}=80.
$$
\begin{figure}[t]
\centering
\begin{overpic}[width=0.6\textwidth]{T8sd.pdf}
\put (19,57) {\footnotesize $4$}
\put (29.7,19) {\footnotesize $208$}
\put (41.5,32) {\footnotesize $N^2_{8,3}$}
\put (41.5,19.3) {\footnotesize $N^1_{8,3}$}
\put (53.8,19) {\footnotesize $2336$}
\put (69.4,44.5) {\footnotesize $4$}
\put (66.6,32) {\footnotesize $N^2_{8,5}$}
\put (66.6,19.3) {\footnotesize $N^1_{8,5}$}
\put (81,19) {\footnotesize $80$}
\put (4,19) {\tiny $1$}
\put (4,32) {\tiny $2$}
\put (4,44) {\tiny $4$}
\put (4,57) {\tiny $8$}
\put (19.5,3) {\tiny $1$}
\put (32,3) {\tiny $2$}
\put (44.5,3) {\tiny $3$}
\put (57.2,3) {\tiny $4$}
\put (70,3) {\tiny $5$}
\put (82.5,3) {\tiny $6$}
\put (7,71) {$s$}
\put (96,6.5) {$d$}
\end{overpic}
\caption{\sl Computation of the joint distribution $N^s_{8,d}$ in \exref{teegg}.}
\label{t8}
\end{figure}
Moreover, it leads to the system of linear equations
\begin{equation}\label{char}
\begin{cases}
N^1_{8,3}+N^2_{8,3} & \! \! = 1432 \\
N^1_{8,5}+N^2_{8,5} & \! \! = 976 \\
N^1_{8,3}+N^1_{8,5} & \! \! = 2368 \\
N^2_{8,3}+N^2_{8,5} & \! \! = 40
\end{cases}
\end{equation}
on the remaining four unknowns, which has rank $3$ and therefore does not determine the solution uniquely. An additional piece of information can be obtained by considering the period $8$ orbits of $\mathbf{m}_3$ which realize cycles in ${\mathscr C}^2_{8,3}$ (see \cite{PZ}, especially Theorem 6.6, for the results supporting the following claims). Every such orbit is {\it \bfseries self-antipodal} in the sense that it is invariant under the $180^{\circ}$ rotation $x \mapsto x+1/2$ of the circle ${\mathbb R}/{\mathbb Z}$. It follows that $x$ belongs to such an orbit if and only if it satisfies
$$
3^4 x = x+\frac{1}{2} \quad (\operatorname{mod} {\mathbb Z}).
$$
This is equivalent to $x$ being rational of the form
$$
x=\frac{2j-1}{160} \quad (\operatorname{mod} {\mathbb Z}) \quad \text{for some} \ 1 \leq j \leq 80.
$$
Of the $10$ orbits of $\mathbf{m}_3$ thus determined, $4$ realize rotation cycles in ${\mathscr C}^8_{8,1}$ and the remaining $6$ realize cycles in ${\mathscr C}^2_{8,3}$. Moreover, by \thmref{real} every combinatorial type in ${\mathscr C}^2_{8,3}$ is realized by a {\it unique} orbit of $\mathbf{m}_3$. It follows that $N^2_{8,3}=4T^2_{8,3}=24$. Now from \eqref{char} we obtain
$$
N^1_{8,3} = 1408, \quad N^2_{8,3}= 24, \quad
N^1_{8,5} = 960, \quad N^2_{8,5}= 16
$$
and therefore
$$
T_{8,1}=4, \ \ T_{8,2}=26, \ \ T_{8,3}=182, \ \ T_{8,4}=292, \ \ T_{8,5}=126, \ \ T_{8,6}=10.
$$
Observe that $T_8=\sum_{d=1}^6 T_{8,d}=640$, in agreement with the value in Table \ref{teeq} coming from formula \eqref{tq}.
\end{example}
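The bookkeeping in this example can be summarized in a few lines of Python, which record the nine values $N^s_{8,d}$ obtained above and re-derive the marginals and the type counts:

```python
# the joint distribution N^s_{8,d}, keyed by (s, d), as found in the example
Nsd = {(8, 1): 4, (1, 2): 208, (1, 3): 1408, (2, 3): 24, (1, 4): 2336,
       (1, 5): 960, (2, 5): 16, (4, 5): 4, (1, 6): 80}

# row sums should give N_8^s (Table 1), column sums N_{8,d} (Table 3)
rows = {s: sum(v for (s_, d), v in Nsd.items() if s_ == s) for s in (1, 2, 4, 8)}
cols = {d: sum(v for (s, d_), v in Nsd.items() if d_ == d) for d in range(1, 7)}

# T^s_{8,d} = (s/8) N^s_{8,d}, summed over s for each fixed d
T8d = {d: sum(s * v for (s, d_), v in Nsd.items() if d_ == d) // 8
       for d in range(1, 7)}
T8 = sum(T8d.values())
```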
Table \ref{tab4} shows the result of similar but often more complicated dynamical arguments to determine $T_{n,d}$ for $n$ up to $12$. It would be desirable to develop a general method (and perhaps a closed formula) to compute these numbers for arbitrary $n$.
{\tiny
\begin{table}
\centering
\begin{tabular}{r|crrrrrrrrrr}
\toprule
\diagbox{$n$}{$d$} & & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$\\
\midrule
$2$ & & $1$ & & & & & & & & & \\
$3$ & & $2$ & & & & & & & & & \\
$4$ & & $2$ & $1$ & & & & & & & & \\
$5$ & & $4$ & $2$ & $2$ & & & & & & & \\
$6$ & & $2$ & $7$ & $10$ & $5$ & & & & & & \\
$7$ & & $6$ & $12$ & $48$ & $36$ & $6$ & & & & & \\
$8$ & & $4$ & $26$ & $\color{red}182$ & $292$ & $\color{red}126$ & $10$ & & & & \\
$9$ & & $6$ & $50$ & $612$ & $\color{red}1844$ & $1582$ & $378$ & $\color{red}20$ & & & \\
$10$ & & $4$ & $95$ & $\color{red}1978$ & $9925$ & $\color{red}15408$ & $7753$ & $\color{red}1138$ & $35$ & & \\
$11$ & & $10$ & $176$ & $6056$ & $48608$ & $124100$ & $112160$ & $35384$ & $3344$ & $62$ & \\
$12$ & & $4$ & $331$ & $\color{red}18140$ & $\color{red}222654$ & $\color{red}880848$ & $1299448$ & $\color{red}741260$ & $154258$ & $\color{red}9732$ & $\color{red}113$ \\
\bottomrule
\end{tabular}
\vspace*{4mm}
\caption{\sl The distributions $T_{n,d}$ for $2 \leq n \leq 12$. The entries in red cannot be obtained from the sole knowledge of the $N_n^s$ and $N_{n,d}$ in Tables \ref{tab1} and \ref{tab3}.}
\label{tab4}
\end{table}
}
\section{A statistical view of the degree}\label{sec:stat}
\subsection{Classical Eulerian numbers}\label{compp}
The numbers $N_{n,d}$ are the analogs of the {\it \bfseries Eulerian numbers} $A_{n,d}$ which tally the permutations of descent number $d$ in the full symmetric group ${\mathscr S}_n$:\footnote{The numbers $A_{n,d}$ are denoted by $\genfrac{\langle }{\rangle}{0pt}{}{n}{d}$ in \cite{GKP} and by $A(n,d+1)$ in \cite{St}.}
$$
A_{n,d} := \# \big\{ \nu \in {\mathscr S}_n: \operatorname{des}(\nu)=d \big\}.
$$
For each $n$ the index $d$ now runs from $0$ to $n-1$, with \mbox{$A_{n,0}=A_{n,n-1}=1$}. The Eulerian numbers occur in many contexts, including areas outside of combinatorics, and have been studied extensively (for an excellent account, see \cite{P2}). Here are a few of their basic properties:
\vspace{6pt}
\begin{enumerate}
\item[$\bullet$]
{\it Symmetry}:
$$
A_{n,d} = A_{n,n-d-1}.
$$
\item[$\bullet$]
{\it Linear recurrence relation}:
$$
A_{n,d} = (d+1) A_{n-1,d} + (n-d) A_{n-1,d-1}.
$$
\item[$\bullet$]
{\it Worpitzky's identity}:
\begin{equation}\label{worp}
\sum_{d=0}^{n-1} \binom{n+k-d-1}{n} A_{n,d} = k^n \qquad \text{for all} \ k \geq 1
\end{equation}
\item[$\bullet$]
{\it Alternating sum formula}:
\begin{equation}\label{asf}
A_{n,d} = \sum_{i=1}^{d+1} (-1)^{d-i+1} \, \binom{n+1}{d-i+1} \, i^n. \vspace{6pt}
\end{equation}
\item[$\bullet$]
{\it Carlitz's identity}: The generating function $A_n(x) := \sum_{d=0}^{n-1} A_{n,d} \, x^d$ (also known as the $n$-th ``Eulerian polynomial'') satisfies
\begin{equation}\label{genA}
A_n(x) = (1-x)^{n+1} \sum_{i \geq 1} i^n \, x^{i-1}.
\end{equation}
\end{enumerate}
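The recurrence above, together with Worpitzky's identity and the alternating sum formula, can be cross-checked numerically. A Python sketch, seeding the recurrence at $A_{1,0}=1$:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def A(n, d):
    # classical Eulerian numbers via the linear recurrence
    if d < 0 or d > n - 1:
        return 0
    if n == 1:
        return 1
    return (d + 1) * A(n - 1, d) + (n - d) * A(n - 1, d - 1)

# Worpitzky's identity (worp)
worp_ok = all(sum(math.comb(n + k - d - 1, n) * A(n, d) for d in range(n)) == k ** n
              for n in range(1, 8) for k in range(1, 8))

# alternating sum formula (asf)
asf_ok = all(A(n, d) == sum((-1) ** (d - i + 1) * math.comb(n + 1, d - i + 1) * i ** n
                            for i in range(1, d + 2))
             for n in range(1, 8) for d in range(n))
```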
The last three formulas reveal a remarkable similarity between the sequences $ N_{n,d}$ and $A_{n-1,d-1}$. In fact, \eqref{mame1} is the analog of Worpitzky's identity \eqref{worp} for $A_{n-1,d-1}$ once $\Delta_n(k)$ is replaced with $k^{n-1}$. Similarly, \eqref{tada} is the analog of the alternating sum formula \eqref{asf} for $A_{n-1,d-1}$ when we replace $\Delta_n(i)$ with $i^{n-1}$. Finally, \eqref{genG} is the analog of Carlitz's identity \eqref{genA} for $\sum_{d=1}^{n-1} A_{n-1,d-1} \, x^d = x A_{n-1}(x)$, again replacing $\Delta_n(i)$ with $i^{n-1}$. \vspace{6pt}
There is also an analogy between the $N_{n,d}$ and the restricted Eulerian numbers
\begin{equation}\label{ren}
B_{n,d} := \# \big\{ \nu \in {\mathscr C}_n: \operatorname{des}(\nu)=d \big\}.
\end{equation}
In the beautiful paper \cite{DMP}, which is motivated by the problem of riffle shuffles of a deck of cards, the authors obtain exact formulas for the distribution of descents in a given conjugacy class of ${\mathscr S}_n$ (see also \cite{F1} for an alternative approach). As a special case, their formulas show that
$$
B_{n,d} = \sum_{i=1}^{d+1} (-1)^{d-i+1} \, \binom{n+1}{d-i+1} \, f_n(i),
$$
where
$$
f_n(i) := \frac{1}{n} \sum_{r|n} \mu \Big( \frac{n}{r} \Big) i^r
$$
is the number of aperiodic circular words of length $n$ from an alphabet of $i$ letters. One cannot help but notice the similarity between the above formula for $B_{n-1,d-1}$ and \eqref{tada}, and between $f_n(i)$ and $\Delta_n(i)$ in \eqref{masq}.
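As an illustrative numerical check (all names below are ad hoc), one can enumerate the $n$-cycles for small $n$, tabulate their descent numbers, and compare against the displayed formula for $B_{n,d}$, with $f_n(i)$ computed from its M\"obius-sum definition:

```python
from itertools import permutations
from math import comb

def mobius(m):
    """Moebius function via trial factorization."""
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # squared prime factor
            result = -result
        else:
            p += 1
    return -result if m > 1 else result

def f(n, i):
    """Number of aperiodic circular words of length n over an i-letter alphabet."""
    return sum(mobius(n // r) * i ** r for r in range(1, n + 1) if n % r == 0) // n

def n_cycles(n):
    """One-line notations of the n-cycles in S_n (the orbit of 1 has length n)."""
    for w in permutations(range(1, n + 1)):
        orbit, j = set(), 1
        while j not in orbit:
            orbit.add(j)
            j = w[j - 1]
        if len(orbit) == n:
            yield w

def descent_dist(n):
    """counts[d] = number of n-cycles with exactly d (linear) descents."""
    counts = [0] * n
    for w in n_cycles(n):
        counts[sum(1 for i in range(n - 1) if w[i] > w[i + 1])] += 1
    return counts

for n in range(3, 7):
    counts = descent_dist(n)
    for d in range(n):
        assert counts[d] == sum((-1) ** (d - i + 1) * comb(n + 1, d - i + 1) * f(n, i)
                                for i in range(1, d + 2))
```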
\subsection{Asymptotic normality}\label{asnorm}
The statistical behavior of classical Eulerian numbers is well understood. For example, it is known that the distribution $\{ A_{n,d} \}_{0 \leq d \leq n-1}$ is unimodal with a peak at $d= \lfloor n/2 \rfloor$. Moreover, the descent number of a randomly chosen permutation in ${\mathscr S}_n$ (with respect to the uniform measure) has mean and variance
\begin{align*}
\tilde{\mu}_n & := \frac{1}{n!} \sum_{d=0}^{n-1} d A_{n,d} = \frac{n-1}{2} \\
\tilde{\sigma}^2_n & := \frac{1}{n!} \sum_{d=0}^{n-1} (d-\tilde{\mu}_n)^2 A_{n,d} = \frac{n+1}{12}.
\end{align*}
These computations can be expressed in terms of the generating functions $A_n$ introduced in \S \ref{compp}:
\begin{align}
\frac{A'_n(1)}{n!} & = \frac{n-1}{2} \label{EN1} \vspace{6pt} \\
\frac{A''_n(1)}{n!} + \frac{A'_n(1)}{n!} - \left( \frac{A'_n(1)}{n!} \right)^2 & = \frac{n+1}{12}. \label{VN1}
\end{align}
When normalized by its mean and variance, the distribution $\{ A_{n,d} \}_{0 \leq d \leq n-1}$ converges to the standard normal distribution as $n \to \infty$ (see \cite{B}, \cite{H}, \cite{Pi}). This is the central limit theorem for Eulerian numbers. In fact, we have the error bound
$$
\sup_{x \in {\mathbb R}} \left| \frac{1}{n!} \sum_{d \leq \tilde{\sigma}_n x + \tilde{\mu}_n} A_{n,d} - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt \right| = O(n^{-1/2}).
$$
Similar results hold for the restricted Eulerian numbers $B_{n,d}$ defined in \eqref{ren}. In \cite{F1}, Fulman shows that the mean and variance of $\operatorname{des}(\nu)$ for a randomly chosen $\nu \in {\mathscr C}_n$ are also $(n-1)/2$ and $(n+1)/12$ provided that $n \geq 3$ and $n \geq 4$ respectively. More generally, he shows that the $k$-th moment of $\operatorname{des}(\nu)$ for $\nu \in {\mathscr C}_n$ is equal to the $k$-th moment of $\operatorname{des}(\nu)$ for $\nu \in {\mathscr S}_n$ provided that $n \geq 2k$. From this result one can immediately conclude that the normalized distribution $B_{n,d}$ is also asymptotically normal as $n \to \infty$. \vspace{6pt}
Below we will prove corresponding results for the distribution of degree for randomly chosen $n$-cycles.
\begin{theorem}\label{exp}
The mean and variance of $\deg(\nu)$ for a randomly chosen $\nu \in {\mathscr C}_n$ (with respect to the uniform measure) are
\begin{align*}
\mu_n & := \frac{1}{(n-1)!} \sum_{d=1}^{n-2} d N_{n,d} = \frac{n}{2}-\frac{1}{n-1} & (n \geq 3), \\
\sigma^2_n & := \frac{1}{(n-1)!} \sum_{d=1}^{n-2} (d-\mu_n)^2 N_{n,d} = \frac{n}{12}+\frac{n}{(n-1)^2(n-2)} & (n \geq 5).
\end{align*}
\end{theorem}
\begin{proof}
The argument is inspired by the method of \cite[Theorem 2]{F1}. We begin by using the formula \eqref{masq} for $\Delta_n(i)$ in the equation \eqref{genG} to express the generating function $G_n$ in terms of the Eulerian polynomials $A_j$ in \eqref{genA}:
\begin{align}\label{divo}
G_n(x) & = (1-x)^n \sum_{i \geq 1} \sum_{r|n} \sum_{j=0}^{r-1} \mu \Big( \frac{n}{r} \Big) \, i^j x^i \notag \\
& = (1-x)^n \sum_{i \geq 1} \sum_{j=0}^{n-1} i^j x^i + (1-x)^n \sum_{i \geq 1} \sum_{\substack{r|n \\ r<n}} \sum_{j=0}^{r-1} \mu \Big( \frac{n}{r} \Big) \, i^j x^i \notag \\
& = \sum_{j=0}^{n-1} x(1-x)^{n-j-1} A_j(x) + \sum_{\substack{r|n \\ r<n}} \sum_{j=0}^{r-1} \mu \Big( \frac{n}{r} \Big) \, x(1-x)^{n-j-1} A_j(x).
\end{align}
If $n \geq 3$, every index $j$ in the double sum in \eqref{divo} is $\leq n-3$, so the polynomial in $x$ defined by this double sum has $(1-x)^2$ as a factor. It follows that for $n \geq 3$,
$$
G_n(x)=x A_{n-1}(x)+ x(1-x) A_{n-2}(x) + O((1-x)^2)
$$
as $x \to 1$. This gives
$$
G'_n(1) = A'_{n-1}(1)+A_{n-1}(1)-A_{n-2}(1),
$$
so by \eqref{EN1}
$$
\mu_n = \frac{G'_n(1)}{(n-1)!} = \frac{n-2}{2}+1-\frac{1}{n-1}= \frac{n}{2}-\frac{1}{n-1}.
$$
Similarly, if $n \geq 5$, every index $j$ in the double sum in \eqref{divo} is $\leq n-4$, so the polynomial defined by this double sum has $(1-x)^3$ as a factor. It follows that for $n \geq 5$,
$$
G_n(x)=x A_{n-1}(x)+ x(1-x) A_{n-2}(x) + x(1-x)^2 A_{n-3}(x) + O((1-x)^3)
$$
as $x \to 1$. This gives
$$
G''_n(1) = A''_{n-1}(1)+2A'_{n-1}(1)-2A'_{n-2}(1)-2A_{n-2}(1)+2A_{n-3}(1).
$$
A straightforward computation using \eqref{EN1} and \eqref{VN1} then shows that
\[
\sigma^2_n = \frac{G''_n(1)}{(n-1)!} + \frac{G'_n(1)}{(n-1)!} - \left( \frac{G'_n(1)}{(n-1)!} \right)^2 = \frac{n}{12} + \frac{n}{(n-1)^2(n-2)}. \qedhere
\]
\end{proof}
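The theorem can be confirmed by brute force for small $n$. The sketch below assumes that $\deg(\nu)$ is the cyclic descent number of $\nu$, i.e.\ the number of positions $i$, taken mod $n$, with $\nu(i) > \nu(i+1)$ in one-line notation (the identification of degree with cyclic descent number used in this paper); exact rational arithmetic is used throughout:

```python
from fractions import Fraction
from itertools import permutations

def n_cycles(n):
    """One-line notations of the n-cycles in S_n."""
    for w in permutations(range(1, n + 1)):
        orbit, j = set(), 1
        while j not in orbit:
            orbit.add(j)
            j = w[j - 1]
        if len(orbit) == n:
            yield w

def cyclic_descents(w):
    """Number of cyclic positions i with w(i) > w(i+1), indices mod n."""
    n = len(w)
    return sum(1 for i in range(n) if w[i] > w[(i + 1) % n])

def deg_stats(n):
    """Exact mean and variance of deg over the (n-1)! cycles in C_n."""
    degs = [cyclic_descents(w) for w in n_cycles(n)]
    mean = Fraction(sum(degs), len(degs))
    var = sum((Fraction(d) - mean) ** 2 for d in degs) / len(degs)
    return mean, var

for n in range(3, 7):
    mean, var = deg_stats(n)
    assert mean == Fraction(n, 2) - Fraction(1, n - 1)
    if n >= 5:
        assert var == Fraction(n, 12) + Fraction(n, (n - 1) ** 2 * (n - 2))
```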
\begin{remark}
More generally, the expression \eqref{divo} shows that for fixed $k$ and large enough $n$,
$$
G_n(x)=\sum_{j=0}^k x(1-x)^j A_{n-j-1}(x) + O((1-x)^{k+1})
$$
as $x \to 1$. Differentiating this $k$ times and evaluating at $x=1$, we obtain the relation
$$
G_n^{(k)}(1)= \sum_{j=0}^k (-1)^j \left( \binom{k}{j} j! \ A^{(k-j)}_{n-j-1}(1) + \binom{k}{j+1} (j+1)! \ A^{(k-j-1)}_{n-j-1}(1) \right)
$$
which in theory links the moments of $\deg(\nu)$ for $\nu \in {\mathscr C}_n$ to the moments of $\operatorname{des}(\nu)$ for $\nu \in {\mathscr S}_j$ for $n-k \leq j \leq n-1$.
\end{remark}
Numerical evidence suggests that the distribution $\{ N_{n,d} \}_{1 \leq d \leq n-2}$ is also unimodal and reaches a peak at $d= \lfloor n/2 \rfloor$. \thmref{asnor} below asserts that when normalized by its mean and variance, the distribution $\{ N_{n,d} \}_{1 \leq d \leq n-2}$ converges to normal as $n \to \infty$. In particular, the asymmetry of the numbers $N_{n,d}$ relative to $d$ asymptotically disappears. These facts are illustrated in \figref{normal}. \vspace{6pt}
Consider the sequence of normalized random variables
$$
X_n := \frac{1}{\sigma_n} \big( \deg|_{{\mathscr C}_n} - \mu_n \big).
$$
Let ${\mathcal N}(0,1)$ denote the normally distributed random variable having mean $0$ and variance $1$.
\begin{theorem}\label{asnor}
$X_n \to {\mathcal N}(0,1)$ in distribution as $n \to \infty$.
\end{theorem}
The proof follows the strategy of \cite{FKL} and makes use of the following variant of a classical theorem of Curtiss. Recall that the {\it \bfseries moment generating function} $M_X$ of a random variable $X$ is the expected value of $e^{sX}$:
$$
M_X(s):={\mathbb E}(e^{sX}) \qquad \qquad (s \in {\mathbb R}).
$$
\begin{lemma}[\cite{KL2}]\label{kl}
Let $\{ X_n \}_{n \geq 1}$ and $Y$ be random variables and assume that $\lim_{n \to \infty} M_{X_n}(s)=M_{Y}(s)$ for all $s$ in some non-empty open interval in ${\mathbb R}$. Then $X_n \to Y$ in distribution as $n \to \infty$.
\end{lemma}
The proof of \thmref{asnor} via \lemref{kl} will depend on two preliminary estimates.
\begin{lemma}\label{boz}
For every $\varepsilon>0$ there are constants $n(\varepsilon), i(\varepsilon)>0$ such that
$$
\Delta_n(i) \begin{cases} \ \leq (1+\varepsilon) \, i^{n-1} & \text{if} \ n \geq 2 \ \text{and} \ i \geq i(\varepsilon) \vspace{6pt} \\
\ \geq (1-\varepsilon) \, i^{n-1} & \text{if} \ n \geq n(\varepsilon) \ \text{and} \ i \geq 2.
\end{cases}
$$
\end{lemma}
\begin{proof}
By \eqref{masti},
$$
\Delta_n(i) \leq \sum_{r|n} \Delta_r(i) = \frac{i^n-1}{i-1}.
$$
The upper bound follows since $(i^n-1)/(i-1)<(1+\varepsilon) i^{n-1}$ for all $n$ if $i$ is large enough depending on $\varepsilon$. \vspace{6pt}
For the lower bound, first note that the inequality $(i^r-1)/(i-1) \leq 2 i^{r-1}$ holds for all $r \geq 1$ and all $i \geq 2$. Thus, by \eqref{masq}, we can estimate
\begin{align*}
\Delta_n(i) & \geq \frac{i^n-1}{i-1} - \sum_{\substack{r|n \\ r<n}} \frac{i^r-1}{i-1} \geq i^{n-1} - \sum_{\substack{r|n \\ r<n}} 2i^{r-1} \\
& \geq i^{n-1} - 2 \sum_{r=1}^{\lfloor n/2 \rfloor} i^{r-1} \geq i^{n-1} - 2 \ \frac{i^{n/2}-1}{i-1} \\
& \geq i^{n-1} - 4 i^{n/2-1}.
\end{align*}
The last term is bounded below by $(1-\varepsilon) i^{n-1}$ for all $i$ if $n$ is large enough depending on $\varepsilon$.
\end{proof}
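The bounds appearing in this proof can be spot-checked numerically. The sketch below (illustrative only) computes $\Delta_n(i) = \sum_{r \mid n} \mu(n/r)\,(i^r-1)/(i-1)$, the expression read off from \eqref{masq}, and verifies both the M\"obius-inversion identity and the two bounds derived above:

```python
def mobius(m):
    """Moebius function via trial factorization."""
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # squared prime factor
            result = -result
        else:
            p += 1
    return -result if m > 1 else result

def Delta(n, i):
    """Delta_n(i) = sum over r | n of mu(n/r) * (i^r - 1)/(i - 1)."""
    return sum(mobius(n // r) * ((i ** r - 1) // (i - 1))
               for r in range(1, n + 1) if n % r == 0)

for n in range(1, 13):
    for i in range(2, 9):
        # Moebius inversion: summing Delta_r over divisors recovers (i^n - 1)/(i - 1)
        assert sum(Delta(r, i) for r in range(1, n + 1) if n % r == 0) \
               == (i ** n - 1) // (i - 1)
        # the two bounds used in the proof of the lemma
        assert Delta(n, i) <= (i ** n - 1) // (i - 1)
        assert Delta(n, i) >= i ** (n - 1) - 4 * i ** (n / 2 - 1)
```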
\begin{lemma}[\cite{FKL}]\label{jag}
For every $0<x<1$ and $n \geq 1$,
$$
\frac{(n-1)! \, x}{(\log(1/x))^n} \leq \sum_{i \geq 2} i^{n-1} x^i \leq \frac{(n-1)!}{x (\log(1/x))^n}.
$$
\end{lemma}
\begin{proof}
By elementary calculus,
$$
\sum_{i \geq 2} i^{n-1} x^i \leq \int_0^{\infty} u^{n-1} x^{u-1} \, du = \frac{(n-1)!}{x (\log(1/x))^n}
$$
and
\[
\sum_{i \geq 2} i^{n-1} x^i \geq \int_0^{\infty} u^{n-1} x^{u+1} \, du = \frac{(n-1)! \, x}{(\log(1/x))^n}. \qedhere
\]
\end{proof}
\begin{proof}[Proof of \thmref{asnor}]
By \lemref{kl} it suffices to show that $\lim_{n \to \infty} M_{X_n}(s) = M_{{\mathcal N}(0,1)}(s)=e^{s^2/2}$ for all negative values of $s$. Fix an $s<0$ and set $0<x := e^{s/\sigma_n}<1$ (we will think of $x$ as a function of $n$, with $\lim_{n \to \infty} x=1$). Notice that by \thmref{exp}
\begin{equation}\label{khak}
\mu_n = \frac{n}{2} + O(n^{-1}) \quad \text{and} \quad \sigma^2_n = \frac{n}{12} + O(n^{-2}) \quad \text{as} \ n \to \infty.
\end{equation}
Using \eqref{genG}, we can write
\begin{align*}
M_{X_n}(s) & = {\mathbb E}(e^{sX_n}) = \frac{e^{-s \mu_n/\sigma_n}}{(n-1)!} \, G_n(e^{s/\sigma_n}) = \frac{x^{-\mu_n}}{(n-1)!} \, G_n(x) \\
& = \frac{x^{1-\mu_n}(1-x)^n \varphi(n)}{(n-1)!} + \frac{x^{-\mu_n}(1-x)^n}{(n-1)!} \sum_{i \geq 2} \Delta_n(i) x^i.
\end{align*}
As the first term is easily seen to tend to zero, it suffices to show that
\begin{equation}\label{laboo}
H_n := \frac{x^{-\mu_n}(1-x)^n}{(n-1)!} \sum_{i \geq 2} \Delta_n(i) x^i \xrightarrow{n \to \infty} e^{s^2/2}.
\end{equation}
By \eqref{khak} we have the estimate
$$
1-x = -\frac{s}{\sigma_n} - \frac{s^2}{2 \sigma^2_n} + O(n^{-3/2}).
$$
This, combined with the basic expansion
$$
\log \left(\frac{1-x}{\log(1/x)} \right) = -\frac{1}{2} (1-x) - \frac{5}{24} (1-x)^2 + O((1-x)^3),
$$
shows that
\begin{equation}\label{hulu}
\left(\frac{1-x}{\log(1/x)} \right)^n = \exp \left( \frac{ns}{2\sigma_n} + \frac{n s^2}{24 \sigma^2_n} + O(n^{-1/2}) \right).
\end{equation}
Take any $\varepsilon>0$ and find $n(\varepsilon)$ from \lemref{boz}. Then, if $n \geq n(\varepsilon)$,
\begin{align*}
H_n & \geq \frac{x^{-\mu_n}(1-x)^n}{(n-1)!} \, (1-\varepsilon) \sum_{i \geq 2} i^{n-1} x^i & \\
& \geq (1-\varepsilon) \, x^{1-\mu_n} \left(\frac{1-x}{\log(1/x)} \right)^n & (\text{by \lemref{jag}}) \\
& = (1-\varepsilon) \exp \left( \frac{s(1-\mu_n)}{\sigma_n} +\frac{ns}{2\sigma_n} + \frac{n s^2}{24 \sigma^2_n} + O(n^{-1/2}) \right) & (\text{by \eqref{hulu}}) \\
& = (1-\varepsilon) \exp \left( \frac{s(1+O(n^{-1}))}{\sigma_n} + \frac{s^2}{2+O(n^{-3})} + O(n^{-1/2}) \right) & (\text{by \eqref{khak}}).
\end{align*}
Taking the $\liminf$ as $n \to \infty$ and then letting $\varepsilon \to 0$, we obtain
$$
\liminf_{n \to \infty} H_n \geq e^{s^2/2}.
$$
Similarly, take any $\varepsilon>0$, find $i(\varepsilon)$ from \lemref{boz} and use the basic inequality $\Delta_n(i) \leq (i^n-1)/(i-1) \leq 2i^{n-1}$ for all $n,i \geq 2$ to estimate
\begin{align*}
H_n & = \frac{x^{-\mu_n}(1-x)^n}{(n-1)!} \left( \sum_{2 \leq i <i(\varepsilon)} + \sum_{i \geq i(\varepsilon)} \right) \Delta_n(i) x^i \\
& \leq \frac{2x^{-\mu_n}(1-x)^n}{(n-1)!} \sum_{2 \leq i <i(\varepsilon)} i^{n-1} x^i + \frac{(1+\varepsilon) x^{-\mu_n}(1-x)^n}{(n-1)!} \sum_{i \geq i(\varepsilon)} i^{n-1} x^i.
\end{align*}
The first term is a polynomial in $x$ and is easily seen to tend to zero as $n \to \infty$. The second term is bounded above by
\begin{align*}
& (1+\varepsilon) \, x^{-1-\mu_n} \left(\frac{1-x}{\log(1/x)} \right)^n & (\text{by \lemref{jag}}) \\
= & (1+\varepsilon) \exp \left( \frac{s(-1-\mu_n)}{\sigma_n} + \frac{ns}{2\sigma_n} + \frac{n s^2}{24 \sigma^2_n} + O(n^{-1/2}) \right) & (\text{by \eqref{hulu}}) \\
= & (1+\varepsilon) \exp \left( \frac{s(-1+O(n^{-1}))}{\sigma_n} + \frac{s^2}{2+O(n^{-3})} + O(n^{-1/2}) \right) & (\text{by \eqref{khak}}).
\end{align*}
Taking the $\limsup$ as $n \to \infty$ and then letting $\varepsilon \to 0$, we obtain
$$
\limsup_{n \to \infty} H_n \leq e^{s^2/2}.
$$
This verifies \eqref{laboo} and completes the proof.
\end{proof}
% arXiv:1909.03300 --- "Cyclic Permutations: Degrees and Combinatorial Types"
% (Dynamical Systems; Combinatorics; Probability)
% arXiv:1804.08919 --- "How to generalize (and not to generalize) the Chu--Vandermonde identity"
% Abstract: We consider two different interpretations of the Chu--Vandermonde identity:
% as an identity for polynomials, and as an identity for infinite matrices. Each
% interpretation leads to a class of possible generalizations, and in both cases we
% obtain a complete characterization of the solutions.
\section*{1. Introduction.}
One of the most celebrated formulae of elementary combinatorics
is the Chu--Vandermonde identity
\begin{equation}
\sum_{k=0}^n \binom{x}{k} \binom{y}{n-k} \;=\; \binom{x+y}{n}
\label{eq.vandermonde}
\end{equation}
(where $n$ is a nonnegative integer);%
\begin{CJK}{UTF8}{bsmi}
it was found by Chu Shih-Chieh (Zh\={u} Sh\`{\i}ji\'e, 朱世杰)%
\end{CJK}
sometime before 1303 \cite{Zhu_2006}
and was rediscovered circa 1772 by Alexandre-Th\'eophile Vandermonde
\cite{Vandermonde_1772}.
If we take $x$ and $y$ to be nonnegative integers,
this identity has a well-known combinatorial interpretation and proof:
we can choose an $n$-person committee from a group of $x$ women and $y$ men
in $\binom{x+y}{n}$ ways;
on the other hand, if this committee is to have $k$ women and $n-k$ men,
then the number of ways is $\binom{x}{k} \binom{y}{n-k}$;
summing over all possible values of $k$ gives the result.
But now, having proven the identity for nonnegative integer $x$~and~$y$,
we can shift gears and regard $x$~and~$y$ as algebraic indeterminates:
then both sides of \reff{eq.vandermonde} are polynomials in $x$~and~$y$
(of total degree~$n$),
which agree whenever $x$~and~$y$ are nonnegative integers;
since two polynomials that agree at infinitely many points must be equal,
it follows that \reff{eq.vandermonde} holds as a polynomial identity
in $x$~and~$y$.
We thus see already from this brief account that the Chu--Vandermonde identity
can be given at least two distinct interpretations.
On the one hand, we can regard $x$~and~$y$ as indeterminates;
then \reff{eq.vandermonde} is the identity
\begin{equation}
\sum_{k=0}^n f_k(x) \, f_{n-k}(y) \;=\; f_n(x+y)
\label{eq.fksum}
\end{equation}
for the polynomials
\begin{equation}
f_n(x) \;=\; \binom{x}{n} \;\stackrel{\rm def}{=}\; {x^{\underline{n}} \over n!}
\;\stackrel{\rm def}{=}\; {x(x-1) \cdots (x-n+1) \over n!}
\;\in\; {\mathbb Q}[x]
\;.
\label{def.f.binomial}
\end{equation}
(Of course, $x$ and $y$ can then be specialized, if we wish,
to specific values in any commutative ring containing the rationals,
such as the real or complex numbers.)
On the other hand, we can restrict $x$ and $y$ to be nonnegative integers;
then \reff{eq.vandermonde} is the identity
\begin{equation}
\sum_{j=0}^n L_{ij} \, L_{\ell,n-j} \;=\; L_{i+\ell,n}
\label{eq.identity.L}
\end{equation}
for the infinite lower-triangular Pascal matrix
\cite{Call_93,Aceto_01,Edelman_04}
$L = (L_{ij})_{i,j \ge 0}$ defined by
\begin{equation}
L_{ij} \;=\; \binom{i}{j}
\;.
\label{def.L.pascal}
\end{equation}
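For each triple of indices, \reff{eq.identity.L} is a finite computation; the following Python sketch (illustrative only) confirms it for the Pascal matrix over a small range:

```python
from math import comb

# entries of the Pascal matrix: L[i][j] = binom(i, j), with binom(i, j) = 0 for j > i
N = 12
for i in range(N):
    for l in range(N):
        for n in range(N):
            assert sum(comb(i, j) * comb(l, n - j) for j in range(n + 1)) \
                   == comb(i + l, n)
```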
At this point it is natural to ask for generalizations of
\reff{eq.fksum}/\reff{def.f.binomial}
and \reff{eq.identity.L}/\reff{def.L.pascal}, respectively.
Are there other sequences ${\bm{f}} = (f_n)_{n \ge 0}$
of polynomials (or formal power series) satisfying \reff{eq.fksum},
and if so, can we classify them all?
Likewise, are there other lower-triangular matrices $L = (L_{ij})_{i,j \ge 0}$
satisfying \reff{eq.identity.L}, and if so, can we classify them?
The answer to the first question is well known,
and we will review it here (and extend it slightly).
Then we will give a negative answer to the second question,
but with an interesting twist;
some of these latter results seem to be new.
\bigskip
\section*{2. Convolution families.}
Let $R$ be a commutative ring,
and let ${\bm{f}} = (f_n)_{n \ge 0}$
be a sequence of formal power series $f_n(x) \in R[[x]]$.
We will call ${\bm{f}}$ a {\em convolution family}\/
\cite{Knuth_92} (see also \cite{Labelle_80,Zeng_96})
if it satisfies \reff{eq.fksum};
and we will call ${\bm{f}}$ a {\em weak convolution family}\/
if it satisfies \reff{eq.fksum} restricted to $x=y$:
\begin{equation}
\sum_{k=0}^n f_k(x) \, f_{n-k}(x) \;=\; f_n(2x)
\;.
\label{eq.fksum.weak}
\end{equation}
We use the notation $[t^n] \, F(t)$
to denote the coefficient of $t^n$ in the formal power series (or polynomial)
$F(t) \in R[[t]]$.
We then have the following characterization of (weak) convolution families:
\begin{proposition}[Characterization of convolution families]
\label{prop.convfamily}
Let $R$ be a commutative ring containing the rationals,
and let $\Psi(t) = \sum_{n=0}^\infty \psi_n t^n \in R[[t]]$
be any formal power series.
Then
\begin{equation}
f_n(x) \;=\; [t^n] \, e^{x \Psi(t)}
\label{eq.fn.G.H}
\end{equation}
defines a convolution family in $R[[x]]$.
Moreover, $\widetilde{f}_n(x) \stackrel{\rm def}{=} f_n(x) / e^{\psi_0 x}$
are polynomials in $x$ with $\deg \widetilde{f}_n \le n$
and satisfying $\widetilde{f}_n(0) = 0$ for $n \ge 1$.
[In~particular, the $f_n$ are themselves polynomials for all $n$
if and only if $\psi_0$ is nilpotent,
and are polynomials of degree at most $n$ for all $n$
if and only if $\psi_0 = 0$.]
Conversely, let $R$ be a commutative ring containing the rationals
in which the only idempotents are 0 and 1
(for instance, an integral domain or a local ring),
and let ${\bm{f}} = (f_n)_{n \ge 0}$ be a weak convolution family in $R[[x]]$.
Then either ${\bm{f}}$ is identically zero,
or else there exists a unique formal power series $\Psi \in R[[t]]$
such that \reff{eq.fn.G.H} holds.
In~particular, ${\bm{f}}$ is actually a convolution family,
and the foregoing statements about $\widetilde{f}_n$ hold.
\end{proposition}
\par\medskip\noindent{\sc Proof.\ }
By taking the coefficient of $t^n$ on both sides of the identity
$e^{x \Psi(t)} e^{y \Psi(t)} = e^{(x+y) \Psi(t)}$,
we see, using \reff{eq.fn.G.H}, that
\begin{equation}
\sum_{k=0}^n f_k(x) \, f_{n-k}(y) \;=\; f_n(x+y)
\;,
\end{equation}
or in other words that ${\bm{f}}$ is a convolution family.
Moreover, it is easy to see that
$f_n(x)/e^{\psi_0 x} = [t^n] \, e^{x [\Psi(t) - \psi_0]}$
is a polynomial in $x$ of degree at most $n$,
which has zero constant term for $n \ge 1$.
For the statements in brackets, ``if'' is easy,
and ``only if'' follows by looking at $n=0$.
For the converse, let us use the notation $f_{nk} = [x^k] \, f_n(x)$.
By using \reff{eq.fksum.weak} specialized to $n=0$ and $x=0$,
we see that $f_{00}^2 = f_{00}$;
so the hypothesis on $R$ implies that either $f_{00} = 0$ or $f_{00} = 1$.
If $f_{00} = 0$, then a simple inductive argument using
\reff{eq.fksum.weak} specialized to $n=0$
shows that $f_{0k} = 0$ for all $k$, i.e., $f_0 = 0$;
and then a second inductive argument using \reff{eq.fksum.weak}
shows that $f_n = 0$ for all $n$.
So we can assume that $f_{00} = 1$.
Define the bivariate ordinary generating function
$F(x,t) = \sum_{n=0}^\infty f_n(x) \, t^n \in R[[x,t]]$.
Since $F$ has constant term $f_{00} = 1$,
we can define $L(x,t) = \log F(x,t) \in R[[x,t]]$,
which has constant term 0.
The identity \reff{eq.fksum.weak}, translated to generating functions,
says that $F(x,t)^2 = F(2x,t)$, or equivalently that $2 L(x,t) = L(2x,t)$.
Writing $L(x,t) = \sum_{k=0}^\infty \ell_k(t) \, x^k$,
we see that $\sum_{k=0}^\infty (2 - 2^k) \, \ell_k(t) \, x^k = 0$
as a power series in $x$,
hence that $\ell_k(t) = 0$ for all $k \neq 1$.
It follows that $L(x,t) = x \Psi(t)$ for some $\Psi \in R[[t]]$.
Therefore $f_n(x) = [t^n] \, e^{x \Psi(t)}$,
as claimed in \reff{eq.fn.G.H}.
Indeed, \reff{eq.fn.G.H} says precisely that $L(x,t) = x \Psi(t)$,
so $\Psi$ is uniquely determined by ${\bm{f}}$.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
\bigskip
\noindent
{\bf Examples of convolution families.}
1. The binomial theorem corresponds to $f_n(x) = x^n/n!$
and $\Psi(t) = t$.
2. The Chu--Vandermonde identity corresponds to
$f_n(x) = \binom{x}{n} = x^{\underline{n}}/n!$
and $\Psi(t) = \log(1 + t)$.
3. The ``dual Chu--Vandermonde identity'' corresponds to
$f_n(x) = \binom{x+n-1}{n} = x^{\overline{n}}/n!$
and $\Psi(t) = -\log(1 - t)$.
4. Define the univariate Bell polynomials
$B_n(x) = \sum_{k=0}^n \stirlingsubset{n}{k} x^k$,
where the Stirling number $\stirlingsubset{n}{k}$
is the number of partitions of an $n$-element set into $k$ nonempty blocks
\cite{Graham_94}.
Then $f_n(x) = B_n(x)/n!$ corresponds to $\Psi(t) = e^t - 1$.
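These examples can be checked by machine: truncated power-series arithmetic over the rationals recovers $f_n(x) = [t^n]\, e^{x \Psi(t)}$, and one can test both a known example and the convolution identity itself. The following Python sketch (illustrative only; the second $\Psi$ is an arbitrary choice with $\psi_0 = 0$) uses exact arithmetic:

```python
from fractions import Fraction as F
from math import comb, factorial

def series_exp(a, N):
    """exp of a power series a (requires a[0] == 0), truncated to degree N,
    via E' = A'E, i.e. n*e_n = sum_{k=1}^n k*a_k*e_{n-k}."""
    e = [F(1)] + [F(0)] * N
    for n in range(1, N + 1):
        e[n] = sum(F(k) * a[k] * e[n - k] for k in range(1, n + 1)) / n
    return e

def conv_family(psi, x, N):
    """f_n(x) = [t^n] exp(x * Psi(t)) for n = 0, ..., N."""
    return series_exp([x * c for c in psi], N)

N = 8
# Psi(t) = log(1+t) gives f_n(x) = binom(x, n), the Chu--Vandermonde family
psi_log = [F(0)] + [F((-1) ** (k + 1), k) for k in range(1, N + 1)]
for x in range(6):
    fx = conv_family(psi_log, F(x), N)
    assert all(fx[n] == comb(x, n) for n in range(N + 1))

# the convolution identity itself, for an arbitrary Psi with psi_0 = 0
psi = [F(0), F(1), F(1, 2), F(-2), F(0), F(3, 7), F(1), F(0), F(5)]
for x in (F(1), F(2, 3)):
    for y in (F(-1), F(3, 5)):
        fx, fy, fxy = (conv_family(psi, z, N) for z in (x, y, x + y))
        assert all(fxy[n] == sum(fx[k] * fy[n - k] for k in range(n + 1))
                   for n in range(N + 1))
```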
\bigskip
\noindent
See \cite{Knuth_92} for many further examples.
\bigskip
\bigskip
\noindent
{\bf Remarks.}
1. Proposition~\ref{prop.convfamily} is a slight extension of
\cite[Theorem~4.1]{Garsia_73}, \cite{Knuth_92}
and \cite[Exercise~5.37a]{Stanley_99},
in two directions:
allowing the $f_n$ to be formal power series rather than just polynomials,
and allowing the coefficients to lie in a commutative ring containing
the rationals rather than just a field of characteristic~0.
Knuth \cite{Knuth_92} proved the converse half by an inductive argument
based on carefully examining the coefficients $f_{nk}$.
This is nice, but it seems to me that the generating-function argument
using the logarithm, taken from \cite{Garsia_73,Stanley_99},
is simpler and more enlightening.
2. For the converse, it would alternatively suffice to assume that
\reff{eq.fksum} holds for $y = rx$
for {\em any}\/ fixed rational number $r \neq 0,-1$ (not just $r=1$):
we would then have $L(x,t) + L(rx,t) = L((1+r)x,t)$,
which also implies $L(x,t) = x \Psi(t)$
since $1 + r^m \neq (1+r)^m$ for all $m \in {\mathbb N} \setminus \{1\}$
and all $r \in {\mathbb Q} \setminus \{0,-1\}$.
3. By iterating \reff{eq.fksum}, we see that every convolution family ${\bm{f}}$
satisfies
\begin{equation}
\sum\limits_{\begin{scarray}
k_1,\ldots,k_m \ge 0 \\
k_1 + \ldots + k_m = n
\end{scarray}}
\!\!\!\!\!
f_{k_1}(x_1) \,\cdots\, f_{k_m}(x_m) \;=\; f_n(x_1 + \ldots + x_m)
\label{eq.fksum.iterated}
\end{equation}
for each integer $m \ge 1$.
This ``multinomial'' version of the identity is sometimes useful.
4. The foregoing theory can equivalently be expressed
in terms of the polynomials $F_n(x) = n! \, f_n(x)$, which satisfy
\begin{equation}
\sum_{k=0}^n \binom{n}{k} F_k(x) \, F_{n-k}(y) \;=\; F_n(x+y)
\;.
\label{eq.Fksum.binomial}
\end{equation}
Sequences of polynomials satisfying \reff{eq.Fksum.binomial}
are called {\em sequences of binomial type}\/;
they were introduced by Rota and collaborators
\cite{Mullin_70,Rota_73,Garsia_73,Fillmore_73,Roman_78,Roman_84}
and studied by means of the umbral calculus
\cite{Roman_78,Roman_84,Rota_94,Gessel_03}.\footnote{
As part of the definition of ``sequence of binomial type'',
Rota {\em et al.}\/ \cite{Mullin_70,Rota_73}
and many subsequent authors \cite{Garsia_73,Fillmore_73,Roman_78,Roman_84}
imposed the additional condition that $\deg F_n = n$ exactly.
But this condition is irrelevant for our purposes,
so we refrain from imposing it.
}
A purely combinatorial approach to sequences of binomial type,
employing the theory of species,
has been developed by Labelle \cite{Labelle_81}
(see also \cite[Section~3.1]{Bergeron_98}).
See also \cite{Zeng_96,Scott-Sokal_expidentities}
for some multivariate generalizations.
The identity \reff{eq.Fksum.binomial}
is closely related to the {\em exponential formula}\/
\cite{Wilf_94,Bergeron_98,Stanley_99}
in enumerative combinatorics,
which relates the weights $c_n$ of ``connected'' objects
to the weights $F_n(x)$ of ``all'' objects,
when the weight of an object is defined to be the product of the weights
of its ``connected components'' times a factor $x$ for each
connected component.
See \cite{Wilf_94,Bergeron_98,Stanley_99,Scott-Sokal_expidentities}
for further discussion and many applications.
$\blacksquare$ \bigskip
\bigskip
\noindent
{\bf Open questions.}
What happens if $R$ does not contain the rationals,
or if $R$ has idempotents other than 0 and 1?
\bigskip
\section*{3. Convolution families, generalized.}
Let us now generalize the identity \reff{eq.fksum}
by allowing three different sequences $f,g,h$ in place of $f,f,f$:
\begin{equation}
\sum_{k=0}^n f_k(x) \, g_{n-k}(y) \;=\; h_n(x+y)
\;.
\label{eq.conv.fgh}
\end{equation}
Do any new solutions arise?
It turns out that the answer is yes, as follows:
\begin{proposition}
\label{prop.convfamily.bis}
Let $R$ be a commutative ring containing the rationals,
and let $A,B,\Psi \in R[[t]]$. Then
\begin{equation}
f_n(x) \:=\: [t^n] \, A(t) e^{x \Psi(t)} \:,\quad
g_n(x) \:=\: [t^n] \, B(t) e^{x \Psi(t)} \:,\quad
h_n(x) \:=\: [t^n] \, A(t) B(t) e^{x \Psi(t)}
\label{eq.prop.convfamily.bis}
\end{equation}
are sequences in $R[[x]]$ satisfying \reff{eq.conv.fgh}.
Moreover, $\widetilde{f}_n(x) \stackrel{\rm def}{=} f_n(x) / e^{\psi_0 x}$
[where $\psi_0 \stackrel{\rm def}{=} \Psi(0)$]
are polynomials in $x$ with $\deg \widetilde{f}_n \le n$,
and likewise for $g_n$ and $h_n$.
[In~particular, the $f_n,g_n,h_n$ are themselves polynomials for all $n$
if and only if $\psi_0$ is nilpotent,
and are polynomials of degree at most $n$ for all $n$
if and only if $\psi_0 = 0$.]
Conversely, let $R$ be a commutative ring containing the rationals,
and let
${\bm{f}} = (f_n)_{n \ge 0}$, ${\bm{g}} = (g_n)_{n \ge 0}$, ${\bm{h}} = (h_n)_{n \ge 0}$
be sequences in $R[[x]]$ satisfying \reff{eq.conv.fgh}.
Assume further that $f_0(0)$ and $g_0(0)$ are invertible in $R$
[or equivalently that $h_0(0) = f_0(0) \, g_0(0)$ is invertible in $R$].
Then there exist unique formal power series $A,B,\Psi \in R[[t]]$
such that \reff{eq.prop.convfamily.bis} holds;
and $A$ and $B$ are invertible in $R[[t]]$.
\end{proposition}
\par\medskip\noindent{\sc Proof.\ }
By taking the coefficient of $t^n$ on both sides of the identity
$A(t) e^{x \Psi(t)} \, B(t) e^{y \Psi(t)} = {A(t) B(t) e^{(x+y) \Psi(t)}}$,
we see that \reff{eq.conv.fgh} holds.
The statements about $f_n,g_n,h_n$ are proven
as in Proposition~\ref{prop.convfamily}.
For the converse, let $\alpha = f_0(0)$ and $\beta = g_0(0)$;
by hypothesis $\alpha$ and $\beta$ are invertible, and $h_0(0) = \alpha\beta$.
Define the ordinary generating functions
$F(x,t) = \sum_{n=0}^\infty f_n(x) \, t^n$,
$G(x,t) = \sum_{n=0}^\infty g_n(x) \, t^n$
and $H(x,t) = \sum_{n=0}^\infty h_n(x) \, t^n$;
they have constant terms $\alpha$, $\beta$ and $\alpha\beta$, respectively.
Then define $L(x,t) = \log [\alpha^{-1} F(x,t)]$,
$M(x,t) = \log [\beta^{-1} G(x,t)]$ and
$N(x,t) = \log [(\alpha\beta)^{-1} H(x,t)]$;
they have constant term 0.
The identity \reff{eq.conv.fgh}, translated to generating functions,
says that $F(x,t) \, G(y,t) = H(x+y,t)$,
or equivalently that $L(x,t) + M(y,t) = N(x+y,t)$.
Then we must have $[x^k] N(x,t) = 0$ for $k \ge 2$
in order to avoid terms $x^i y^j$ with $i,j \ge 1$ in $N(x+y,t)$.
It then follows that we must have
\begin{equation}
L(x,t) \:=\: \Gamma(t) \,+\, x \Psi(t) \:,\quad
M(x,t) \:=\: \Delta(t) \,+\, x \Psi(t) \:,\quad
N(x,t) \:=\: \Gamma(t) \,+\, \Delta(t) \,+\, x \Psi(t)
\end{equation}
for some formal power series $\Gamma,\Delta,\Psi \in R[[t]]$,
where $\Gamma$ and $\Delta$ have zero constant term.
Then \reff{eq.prop.convfamily.bis} holds with
$A(t) = \alpha e^{\Gamma(t)}$ and $B(t) = \beta e^{\Delta(t)}$.
Furthermore, $L,M,N$ are uniquely determined by ${\bm{f}},{\bm{g}},{\bm{h}}$;
hence $\Gamma,\Delta,\Psi$ are also uniquely determined;
hence $A,B,\Psi$ are uniquely determined as well.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
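Proposition~\ref{prop.convfamily.bis} can likewise be spot-checked with truncated series over the rationals; in the sketch below (illustrative only), $A$, $B$, and $\Psi$ are arbitrary choices with $\psi_0 = 0$:

```python
from fractions import Fraction as F

def series_mul(a, b, N):
    """Product of two power series, truncated to degree N."""
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(N + 1)]

def series_exp(a, N):
    """exp of a power series with a[0] == 0, truncated to degree N."""
    e = [F(1)] + [F(0)] * N
    for n in range(1, N + 1):
        e[n] = sum(F(k) * a[k] * e[n - k] for k in range(1, n + 1)) / n
    return e

N = 8
A   = [F(1), F(2), F(0), F(-1), F(1, 3), F(0), F(0), F(0), F(0)]
B   = [F(3), F(0), F(1), F(0), F(0), F(1, 2), F(0), F(0), F(0)]
psi = [F(0), F(1), F(-1, 2), F(2), F(0), F(0), F(1), F(0), F(0)]

def seq(prefactor, x):
    """n-th entry = [t^n] prefactor(t) * exp(x * Psi(t))."""
    return series_mul(prefactor, series_exp([x * c for c in psi], N), N)

AB = series_mul(A, B, N)
for x in (F(2), F(-1, 3)):
    for y in (F(1, 2), F(4)):
        fs, gs, hs = seq(A, x), seq(B, y), seq(AB, x + y)
        for n in range(N + 1):
            assert hs[n] == sum(fs[k] * gs[n - k] for k in range(n + 1))
```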
\bigskip
{\bf Remarks.}
1. Under the assumption that $f_0(0)$ and $g_0(0)$ are invertible in $R$,
clearly ${\bm{f}} = {\bm{g}} = {\bm{h}}$ if and only if $A(t) = B(t) = 1$;
and in this case we recover the situation of Proposition~\ref{prop.convfamily}.
2. If the only idempotents in $R$ are 0 and 1,
then it is not difficult to show that the same holds true in $R[[t]]$.
In this case ${\bm{f}} = {\bm{g}} = {\bm{h}}$ if and only if
$A(t) = B(t) = 1$ or $A(t) = B(t) = 0$;
and we also recover the situation of Proposition~\ref{prop.convfamily}.
3. Note that here, unlike in Proposition~\ref{prop.convfamily},
it does not suffice to assume \reff{eq.conv.fgh} only for $x=y$.
Indeed, the equality
\begin{equation}
\sum_{k=0}^n f_k(x) \, g_{n-k}(x) \;=\; h_n(2x)
\end{equation}
can be taken as the {\em definition}\/ of ${\bm{h}}$
for {\em completely arbitrary}\/ sequences ${\bm{f}}$ and ${\bm{g}}$.
4. The foregoing theory can once again equivalently be expressed
in terms of the polynomials $F_n(x) = n! \, f_n(x)$
and likewise for $g_n$ and $h_n$, which satisfy
\begin{equation}
\sum_{k=0}^n \binom{n}{k} F_k(x) \, G_{n-k}(y) \;=\; H_n(x+y)
\;.
\label{eq.FGHsum.binomial}
\end{equation}
Sequences of polynomials $F_n(x) = n! \, f_n(x)$
defined by \reff{eq.prop.convfamily.bis}
are called {\em Sheffer sequences}\/ \cite[pp.~2, 18--19]{Roman_84}.\footnote{
As part of the definition of ``Sheffer sequence'',
some authors (e.g.\ \cite[p.~2]{Roman_84})
impose the additional conditions $A(0) \neq 0$,
$B(0) = 0$, and $B'(0) \neq 0$.
But these conditions are irrelevant for our purposes,
so we refrain from imposing them.
}
Many classical sequences of polynomials ---
including the Hermite, Laguerre and Bernoulli polynomials ---
are Sheffer sequences \cite[pp.~2, 28--31, 53--130]{Roman_84}.
$\blacksquare$ \bigskip
\bigskip
\noindent
{\bf Open question.}
What happens if $f_0(0)$ and/or $g_0(0)$ are not invertible?
\bigskip
\section*{4. Pascal-like matrices.}
We now turn to the matrix interpretation of the Chu--Vandermonde identity.
But before proceeding further,
let us generalize the identity \reff{eq.identity.L} in two ways:
First, we allow three different matrices $A,B,C$ in place of $L,L,L$;
and second, we do not require $A,B,C$ to be lower-triangular.
So let $A = (a_{ij})_{i,j \ge 0}$ be a matrix with entries
in a commutative ring $R$. We define the {\em row-generating series}\/
$A_i(u) = \sum_{j=0}^\infty a_{ij} u^j \in R[[u]]$.
Clearly, there is a one-to-one correspondence between matrices
$A = (a_{ij})_{i,j \ge 0}$ and collections $(A_i(u))_{i \ge 0}$
of row-generating series.
We then have the following result:
\begin{proposition}[Characterization of Pascal-like matrices]
\label{prop.pascal-like}
Let $R$ be a commutative ring, and let $f,g,h \in R[[u]]$.
Then
\begin{equation}
A_i(u) \;=\; f(u) \, h(u)^i \:,\quad
B_i(u) \;=\; g(u) \, h(u)^i \:,\quad
C_i(u) \;=\; f(u) \, g(u) \, h(u)^i
\label{eq.ABC.fgh}
\end{equation}
are the row-generating series of matrices
$A = (a_{ij})_{i,j \ge 0}$,
$B = (b_{ij})_{i,j \ge 0}$,
$C = (c_{ij})_{i,j \ge 0}$
satisfying
\begin{equation}
\sum_{j=0}^n a_{ij} \, b_{\ell,n-j} \;=\; c_{i+\ell,n}
\qquad\hbox{for all $i,\ell,n \ge 0$}
\;.
\label{eq.identity.ABC}
\end{equation}
Conversely, suppose that
$A = (a_{ij})_{i,j \ge 0}$,
$B = (b_{ij})_{i,j \ge 0}$,
$C = (c_{ij})_{i,j \ge 0}$
are matrices with entries in a commutative ring $R$
that satisfy \reff{eq.identity.ABC}.
Suppose further that $a_{00}$ and $b_{00}$ are invertible in $R$
[or equivalently that $c_{00} = a_{00} b_{00}$ is invertible in $R$].
Then there exist unique series $f,g,h \in R[[u]]$
such that the row-generating series of $A,B,C$ are given by \reff{eq.ABC.fgh};
and $f$ and $g$ are invertible.
\end{proposition}
\par\medskip\noindent{\sc Proof.\ }
Multiplying \reff{eq.identity.ABC} by $u^n$ and summing over $n$,
we see that \reff{eq.identity.ABC} is equivalent to the equality
\begin{equation}
A_i(u) \, B_\ell(u) \;=\; C_{i+\ell}(u)
\qquad\hbox{for all $i,\ell \ge 0$}
\label{eq.AiBl}
\end{equation}
for the row-generating series.
It is immediate that the construction \reff{eq.ABC.fgh}
satisfies \reff{eq.AiBl}.
For the converse, let us write out \reff{eq.AiBl} in detail:
\begin{subeqnarray}
C_0 & = & A_0 B_0 \\
C_1 & = & A_0 B_1 \;=\; A_1 B_0 \\
C_2 & = & A_0 B_2 \;=\; A_1 B_1 \;=\; A_2 B_0 \\
& \vdots & \nonumber
\end{subeqnarray}
Since $a_{00}$ and $b_{00}$ are invertible in $R$,
it follows that $A_0$ and $B_0$ are invertible in $R[[u]]$.
From $C_n = A_0 B_n = A_n B_0$, we deduce that $A_n/A_0 = B_n/B_0$;
let us call this common value $h_n$.
Then \reff{eq.AiBl} says (after division by $A_0 B_0$) that
$h_i h_\ell = h_{i+\ell}$ for all $i,\ell$.
It follows by induction that $h_n = h_1^n$.
So \reff{eq.ABC.fgh} holds with $f = A_0$, $g = B_0$, $h = h_1$.
Moreover, it is clear from \reff{eq.ABC.fgh} that $f = A_0$ and $g = B_0$;
and since $A_0$ and $B_0$ are invertible, we must also have $h = A_1/A_0$.
So $f,g,h$ are uniquely determined.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
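As a sanity check on Proposition~\ref{prop.pascal-like} (an illustration, not part of the formal development), the convolution identity \reff{eq.identity.ABC} can be verified numerically over $\mathbb{Q}$ for truncated power series. The following Python sketch uses arbitrary illustrative choices of $f,g,h$; note that $h$ is allowed a nonzero constant term.

```python
from fractions import Fraction as F

N = 12  # truncation order for the formal power series

def mul(p, q):
    # product of two truncated series (coefficient lists of length N)
    r = [F(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

def hpow(p, k):
    # k-th power of a truncated series
    r = [F(1)] + [F(0)] * (N - 1)
    for _ in range(k):
        r = mul(r, p)
    return r

# illustrative (arbitrary) choices of f, g, h; h(0) need not be 0
f = [F(1), F(2), F(-1)] + [F(0)] * (N - 3)
g = [F(3), F(0), F(5)] + [F(0)] * (N - 3)
h = [F(2), F(1), F(1)] + [F(0)] * (N - 3)

M = 5
A = [mul(f, hpow(h, i)) for i in range(2 * M)]            # A_i = f h^i
B = [mul(g, hpow(h, i)) for i in range(2 * M)]            # B_i = g h^i
C = [mul(mul(f, g), hpow(h, i)) for i in range(2 * M)]    # C_i = f g h^i

# the convolution identity: sum_{j=0}^n a_{ij} b_{l,n-j} = c_{i+l,n}
for i in range(M):
    for l in range(M):
        for n in range(N):
            assert sum(A[i][j] * B[l][n - j] for j in range(n + 1)) == C[i + l][n]
```

Truncation at order $N$ is harmless here: coefficients of index $< N$ in the products are computed exactly.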
\bigskip
\noindent
{\bf Open question.}
What happens if $a_{00}$ and/or $b_{00}$ are not invertible?
\bigskip
\bigskip
{\bf Remarks.}
1. Under the assumption that $a_{00}$ and $b_{00}$ are invertible in $R$,
clearly $A=B=C$ if and only if $f(u) = g(u) = 1$.
It turns out (as I discovered after completing
the proof of Proposition~\ref{prop.pascal-like})
that the $A=B=C$ special case of Proposition~\ref{prop.pascal-like}
had been proven nearly 40 years ago
by Olive \cite[Theorems~3.2 and 3.3]{Olive_79}.
2. If $f(t)$ and $h(t)$ are formal power series,
let ${\mathcal{R}}(f,h)$ be the infinite matrix
$({\mathcal{R}}(f,h)_{nk})_{n,k \ge 0}$ defined by
\begin{equation}
{\mathcal{R}}(f,h)_{nk}
\;=\;
[t^n] \, f(t) h(t)^k
\;.
\label{def.riordan}
\end{equation}
When $h$ has constant term 0, the matrix ${\mathcal{R}}(f,h)$ is lower-triangular
and is called a {\em Riordan array}\/ \cite{Shapiro_91,Sprugnoli_94,Barry_16};
such matrices arise frequently in enumerative combinatorics,
and the theory of Riordan arrays provides a useful unifying framework.
But --- as a handful of authors have noted
\cite{Bala_15} \cite{Hoggatt_75} \cite[p.~288]{Pemantle_13} ---
there are also interesting examples
in which $h$ has a nonzero constant term.
Let us call this more general concept,
in which $h$ is an arbitrary formal power series,
a {\em wide-sense Riordan array}\/.\footnote{
References \cite{Pemantle_13,Bala_15}
call this a ``generalized Riordan array'',
but we prefer to avoid this term because it has already been used,
in a highly-cited paper \cite{Wang_08},
for a completely unrelated generalization of Riordan arrays.
}
We can then see that each of the matrices $A,B,C$ defined in \reff{eq.ABC.fgh}
is simply the transpose of a wide-sense Riordan array.
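A classical example (a standard fact, assumed here rather than taken from the discussion above): the Pascal matrix is the ordinary Riordan array $\mathcal{R}\bigl(1/(1-t),\, t/(1-t)\bigr)$, i.e.\ ${\mathcal{R}}(f,h)_{nk} = \binom{n}{k}$. A short Python sketch verifying \reff{def.riordan} in this case, with series again truncated at order $N$:

```python
from fractions import Fraction as F
from math import comb

N = 10

def mul(p, q):
    # product of two truncated series (coefficient lists of length N)
    r = [F(0)] * N
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

geom = [F(1)] * N                                # 1/(1-t) = 1 + t + t^2 + ...
f = geom                                         # f = 1/(1-t)
h = mul([F(0), F(1)] + [F(0)] * (N - 2), geom)   # h = t/(1-t), so h(0) = 0

def riordan_entry(n, k):
    # R(f,h)_{nk} = [t^n] f(t) h(t)^k
    p = f
    for _ in range(k):
        p = mul(p, h)
    return p[n]

for n in range(N):
    for k in range(N):
        assert riordan_entry(n, k) == comb(n, k)
```

Since $h(0)=0$ here, the resulting array is lower-triangular, as the text notes.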
3. What is the relation between Propositions~\ref{prop.convfamily.bis}
and \ref{prop.pascal-like}?
Comparing \reff{eq.conv.fgh} with \reff{eq.identity.ABC},
we see that $x,y,k$ correspond to $i,\ell,j$, respectively;
and then, comparing \reff{eq.prop.convfamily.bis} with \reff{eq.ABC.fgh},
we see that $A(t), B(t), e^{\Psi(t)}$
correspond to $f(u), g(u), h(u)$, respectively.
Therefore, if $\Psi(0) = 0$, we can define $h = e^\Psi$ satisfying $h(0) = 1$;
and since in this case the $f_n,g_n,h_n$ are {\em polynomials}\/,
it makes sense to evaluate them at integer arguments
to obtain $f_n(i) = a_{in}$ and analogously for $g$ and $h$.
And conversely, if $h(0) = 1$,
we can define $\Psi = \log h$ satisfying $\Psi(0) = 0$,
and a triplet of matrices $A,B,C$ satisfying \reff{eq.ABC.fgh}
can be extended to a triplet ${\bm{f}},{\bm{g}},{\bm{h}}$
of sequences of polynomials satisfying \reff{eq.prop.convfamily.bis};
once again we can evaluate them at integer arguments
to obtain $f_n(i) = a_{in}$ and analogously for $g$ and $h$.
In other words, when $h(0) = 1$, each column of the matrices $A,B,C$
is a {\em polynomial}\/ function of the index $i$.
If, by contrast, $\Psi(0) \stackrel{\rm def}{=} \psi_0 \neq 0$ or $h(0) \neq 1$,
then Propositions~\ref{prop.convfamily.bis} and \ref{prop.pascal-like}
are incommensurable for general commutative rings $R$.
However, when $R = {\mathbb R}$ or ${\mathbb C}$, we can still define $h = e^\Psi$
satisfying $h(0) = e^{\psi_0}$;
then $f_n(x),g_n(x),h_n(x)$ are polynomials multiplied by $e^{\psi_0 x}$
and can again be evaluated at integer arguments.
And conversely, if $R= {\mathbb R}$ (resp.\ ${\mathbb C}$) and $h(0) > 0$ (resp.\ $h(0) \neq 0$),
then we can define $\Psi = \log h$,
and the columns of $A,B,C$ are interpolated by functions
$f_n(x), g_n(x), h_n(x)$ that are polynomials multiplied by $e^{\psi_0 x}$.
$\blacksquare$ \bigskip
\bigskip
We can now specialize to the case in which $A$ and $B$ are lower-triangular.
In fact, we can be a bit more general.
Let us say that a matrix $M = (m_{ij})_{i,j \ge 0}$
is {\em lower-triangular in row $i$}\/ if $m_{ij} = 0$ for all $j > i$.
We then have:
\begin{corollary}
\label{cor.pascal-like}
Let
$A = (a_{ij})_{i,j \ge 0}$,
$B = (b_{ij})_{i,j \ge 0}$,
$C = (c_{ij})_{i,j \ge 0}$
be matrices with entries in a commutative ring $R$
that satisfy \reff{eq.identity.ABC};
and suppose further that $a_{00}$ and $b_{00}$ are invertible in $R$.
If $A$ and $B$ are lower-triangular in row 0,
then there exist $\alpha,\beta \in R$ with $\alpha$ and $\beta$ invertible,
and $h \in R[[u]]$, such that
\begin{equation}
A_i(u) \;=\; \alpha \, h(u)^i \:,\quad
B_i(u) \;=\; \beta \, h(u)^i \:,\quad
C_i(u) \;=\; \alpha\beta \, h(u)^i
\;.
\end{equation}
If, in addition, at least one of $A,B,C$ is lower-triangular in row 1,
then there exist $\alpha,\beta,\kappa,\lambda \in R$,
with $\alpha$ and $\beta$ invertible, such that
\begin{equation}
a_{ij} \;=\; \alpha \kappa^{i-j} \lambda^j \binom{i}{j} \:,\quad
b_{ij} \;=\; \beta \kappa^{i-j} \lambda^j \binom{i}{j} \:,\quad
c_{ij} \;=\; \alpha\beta \kappa^{i-j} \lambda^j \binom{i}{j}
\;.
\end{equation}
\end{corollary}
\par\medskip\noindent{\sc Proof.\ }
If $A$ and $B$ are lower-triangular in row 0,
then $A_0(u) = a_{00}$ and $B_0(u) = b_{00}$,
so $f = a_{00} = \alpha$ and $g = b_{00} = \beta$.
If, in addition, at least one of $A,B,C$ is lower-triangular in row 1,
then the corresponding row-generating series
$A_1 = \alpha h$ (resp.\ $B_1 = \beta h$, $C_1 = \alpha\beta h$)
is a polynomial of degree at most 1;
since $\alpha$ and $\beta$ are invertible,
it follows that $h = \kappa + \lambda u$ for some $\kappa,\lambda \in R$.
\hbox{\hskip 6pt\vrule width6pt height7pt depth1pt \hskip1pt}\bigskip
When $A=B=C$, this yields:
\begin{corollary}[No-go theorem for lower-triangular Pascal-like matrices]
\label{cor.pascal-like.2}
Let $L = (L_{ij})_{i,j \ge 0}$ be a lower-triangular matrix
with entries in a commutative ring $R$
that satisfies \reff{eq.identity.L}.
If $L_{00}$ is invertible in $R$, then in fact $L_{00} = 1$,
and there exist $\kappa,\lambda \in R$ such that
\begin{equation}
L_{ij} \;=\; \kappa^{i-j} \lambda^j \binom{i}{j}
\;.
\end{equation}
\end{corollary}
Corollary~\ref{cor.pascal-like.2} thus gives a negative answer
to the question posed in the introduction:
there are no lower-triangular solutions to \reff{eq.identity.L}
other than trivial rescalings of the Pascal matrix.
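The surviving family of solutions in Corollary~\ref{cor.pascal-like.2} can also be checked directly: since $L_{ij} L_{\ell,n-j} = \kappa^{i+\ell-n} \lambda^n \binom{i}{j}\binom{\ell}{n-j}$, the identity \reff{eq.identity.L} reduces to the Chu--Vandermonde identity. A minimal Python check, with illustrative values $\kappa=2$, $\lambda=3$:

```python
from math import comb

kappa, lam = 2, 3   # illustrative ring elements (here: integers)

def L(i, j):
    # the rescaled Pascal matrix L_{ij} = kappa^{i-j} lambda^j binom(i,j)
    return 0 if j < 0 or j > i else kappa**(i - j) * lam**j * comb(i, j)

# the identity sum_{j=0}^n L_{ij} L_{l,n-j} = L_{i+l,n}
for i in range(8):
    for l in range(8):
        for n in range(i + l + 1):
            assert sum(L(i, j) * L(l, n - j) for j in range(n + 1)) == L(i + l, n)
```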
But --- and this is the interesting twist ---
Proposition~\ref{prop.pascal-like} shows that there {\em are}\/
interesting examples if we give up the insistence that $L$ be lower-triangular:
we can take $f(u) = g(u) = 1$
and choose an {\em arbitrary}\/ formal power series $h(u)$,
not just $h(u) = \kappa + \lambda u$.
These examples turn out to have interesting applications
to the theory of Hankel-total positivity \cite{Sokal_totalpos};
but that story will have to be told elsewhere \cite{Petreolle-Sokal_nontri}.
\section*{Acknowledgments}
I wish to thank Mathias P\'etr\'eolle and Bao-Xuan Zhu
for helpful conversations,
and the referees for helpful suggestions.
This research was supported in part by
U.K.~Engineering and Physical Sciences Research Council grant EP/N025636/1.
% arXiv:1804.08919 (2018-04-25), ``How to generalize (and not to generalize)
% the Chu--Vandermonde identity''
% Subjects: Combinatorics (math.CO); Commutative Algebra (math.AC)
% Abstract: We consider two different interpretations of the Chu--Vandermonde
% identity: as an identity for polynomials, and as an identity for infinite
% matrices. Each interpretation leads to a class of possible generalizations,
% and in both cases we obtain a complete characterization of the solutions.
% arXiv:1211.4247, ``Toric partial orders''
% Abstract: We define toric partial orders, corresponding to regions of
% graphic toric hyperplane arrangements, just as ordinary partial orders
% correspond to regions of graphic hyperplane arrangements. Combinatorially,
% toric posets correspond to finite posets under the equivalence relation
% generated by converting minimal elements into maximal elements, or sources
% into sinks. We derive toric analogues for several features of ordinary
% partial orders, such as chains, antichains, transitivity, Hasse diagrams,
% linear extensions, and total orders.
\section{Introduction}
We define finite {\it toric partial orders} or {\it toric posets}, which are cyclic analogues of partial orders, but differ from an established notion of {\it partial cyclic orders} already in the literature; see Remark~\ref{disambiguation-remark} below.
Toric posets can be defined in combinatorial geometric ways
that are analogous to partial orders or posets:
\begin{enumerate}
\item[$\bullet$] Posets on a finite set $V$ correspond to open polyhedral cones that arise as chambers in {\it graphic hyperplane arrangements} in $\mathbb{R}^V$;
toric posets correspond to chambers occurring within
{\it graphic toric hyperplane arrangements} in the quotient space $\mathbb{R}^V/\mathbb{Z}^V$.
\item[$\bullet$] Posets correspond to {\it transitive closures} of acyclic orientations of graphs; toric posets correspond to a
notion of {\it toric transitive closures} of acyclic orientations.
\item[$\bullet$] Both transitive closure and toric transitive closure
will turn out to be {\it convex closures}, so that there is a notion of {\it toric Hasse diagram}
for a toric poset, like the Hasse diagram of a poset.
\end{enumerate}
We next make this more precise,
indicating where the main results will be proven.
\subsection{Posets geometrically}
We first recall
(e.g. from Stanley \cite{Stanley:73},
Greene and Zaslavsky \cite[\S 7]{Greene:83},
Postnikov, Reiner and Williams \cite[\S\S 3.3-3.4]{Postnikov:08})
geometric features of posets, specifically their relations
to graphic hyperplane arrangements and acyclic orientations, emphasizing
notions with toric counterparts.
Let $V$ be a finite set of cardinality $|V|=n$; often we will choose $V=[n]:=\{1,2,\ldots,n\}$. One can think of a {\it partially ordered
set} or {\it poset} $P$ on $V$ as a binary relation $i <_P j$ which is
\begin{itemize}
\item {\it irreflexive}: $ i\not <_P i$,
\item {\it antisymmetric}: $i <_P j$ implies $ j \not <_P i $, and
\item {\it transitive}: $i <_P j$ and $j <_P k$ implies $ i <_P k$.
\end{itemize}
However, one can also identify $P$ with a certain {\it open polyhedral cone} in $\mathbb{R}^V$
\begin{equation}
\label{cone-of-a-poset-defn}
c=c(P):=\{x \in \mathbb{R}^V: x_i < x_j \text{ if }i <_P j\}.
\end{equation}
Note that the cone $c$ determines the poset $P=P(c)$ as follows: $i <_P j$ if and only if $x_i < x_j$ for all $x$ in $c$.
Each such cone $c$ also arises as a connected component in the complement
of at least one {\it graphic hyperplane arrangement} for a graph $G$, and
often arises in several such arrangements, as explained below.
Given a simple graph $G=(V,E)$,
the {\it graphic arrangement} $\mathcal{A}(G)$
is the union of all hyperplanes in $\mathbb{R}^V$ of the form
$x_i=x_j$ where $\{i,j\}$ is in $E$. Each point $x=(x_1,\ldots,x_n)$ in the {\it complement}
$\mathbb{R}^V {-} \mathcal{A}(G)$ determines an {\it acyclic orientation} $\omega(x)$
of the edge set $E$: for an edge $\{i,j\}$ in $E$, since
$x_i \neq x_j$, either
\begin{itemize}
\item $x_i < x_j$ and $\omega(x)$ directs $i \rightarrow j$, or
\item $x_j < x_i$ and $\omega(x)$ directs $j \rightarrow i$.
\end{itemize}
It is easily seen that the fibers of this map $\alpha_G: x \longmapsto \omega(x)$
are the connected components of the complement $\mathbb{R}^V {-} \mathcal{A}(G)$,
which are open polyhedral cones called {\it chambers}. Thus the map $\alpha_G$
induces a bijection between the set $\Acyc(G)$
of all acyclic orientations $\omega$
of $G$ and the set $\Chambers \mathcal{A}(G)$ of chambers $c$ of $\mathcal{A}(G)$:
\begin{equation}
\label{alpha-diagram}
\xymatrix{
\mathbb{R}^V {-} \mathcal{A}(G) \ar@{>>}[dr] \ar[rr]^{\alpha_G} & & \Acyc(G) \\
& \Chambers \mathcal{A}(G) \ar@{.>}[ur] &\\
}
\end{equation}
These two sets are well-known \cite[Theorem 7.1]{Greene:83}, \cite{Stanley:73}
to have cardinality
$$
|\Acyc(G)|=|\Chambers \mathcal{A}(G)|=T_G(2,0)
$$
where $T_G(x,y)$ is the {\it Tutte polynomial} of $G$ \cite{Tutte:54}.
Posets are also determined by their extensions to
{\it total orders} $w_1 < \cdots < w_n$, which are indexed by
permutations $w=(w_1,\ldots,w_n)$ of $V$. The total orders index the chambers
$$
c_w :=\{x \in \mathbb{R}^V: x_{w_1} < x_{w_2} < \cdots < x_{w_n}\}
$$
in the complement of the {\it complete graphic arrangement $\mathcal{A}(K_V)$},
also known as the {\it reflection arrangement of type $A_{n-1}$} or {\it braid arrangement}.
Given a poset $P$, its set $\mathcal{L}(P)$ of all {\it linear extensions} or
{\it extensions to a total order}
has the property that
$$
\overline{c(P)} = \bigcup_{w \in \mathcal{L}(P)} \overline{c}_w
$$
where $\overline{(\cdot)}$ denotes topological closure. Thus when one
{\it fixes} the graph $G$, chambers $c$ (or posets $P(c)$)
arising as $\alpha_G^{-1}(\omega)$ for various $\omega$ in $\Acyc(G)$
are determined by their sets $\mathcal{L}(P(c))$ of linear extensions.
The same poset $P$ or chamber $c=c(P)$ generally
arises in many graphic arrangements $\mathcal{A}(G)$,
as one varies the graph $G$, leading to
ambiguity in its labeling by a pair $(G,\omega)$ with
$\omega$ in $\Acyc(G)$. Nevertheless, this ambiguity
is well-controlled, in that there are two canonical choices
$
(\bar{G}(P),\bar{\omega}(P))
$
and
$
(\hat{G}^{\Hasse}(P),\omega^{\Hasse}(P))
$
with the following properties.
\begin{enumerate}
\item[$\bullet$] A graph $G$ has $c(P)$ occurring
in $\Chambers \mathcal{A}(G)$
if and only if
$
\hat{G}^{\Hasse}(P) \subseteq G \subseteq \bar{G}(P)
$
where $\subseteq$ is inclusion of edge sets.
In this case, $\alpha_G(c(P))=\omega$ where
$\omega$ is the restriction $\bar{\omega}(P)|_G$.
\item[$\bullet$] The map which sends
$(G,\omega) \longmapsto (\bar{G}(P),\bar{\omega}(P))$ is {\it transitive closure}.
It adds into $G$ all edges $\{i,j\}$ which lie on some {\it chain}
(= {\it totally ordered subset}) $C$ of $P$,
and directs $i \rightarrow j$ if $i <_C j$.
Alternatively phrased, transitive closure adds the
directed edge $i \rightarrow j$ to $(G,\omega)$ whenever there is
a directed path from $i$ to $j$ in $(G,\omega)$.
\end{enumerate}
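The second bullet can be phrased algorithmically: the transitive closure of $(G,\omega)$ is just the reachability relation of the directed acyclic graph $(G,\omega)$. A minimal Python sketch (an illustration, run here on the directed path $0 \to 1 \to 2 \to 3$), using a Floyd--Warshall-style pass:

```python
def transitive_closure(n, arcs):
    # reachability relation of a digraph on vertices 0..n-1
    reach = [[False] * n for _ in range(n)]
    for a, b in arcs:
        reach[a][b] = True
    # Floyd--Warshall-style pass: i reaches j through intermediate vertex k
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    return {(i, j) for i in range(n) for j in range(n) if reach[i][j]}

closure = transitive_closure(4, [(0, 1), (1, 2), (2, 3)])
```

For the directed path, the closure adds exactly the arcs $0 \to 2$, $0 \to 3$, and $1 \to 3$.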
The existence of a unique
{\it inclusion-minimal} choice $(\hat{G}^{\Hasse}(P),\omega^{\Hasse}(P))$,
called the {\it Hasse diagram} for $P$, follows
from this well-known fact \cite{Edelman:82, Edelman:02}:
the {\it transitive closure} $A \longmapsto \bar{A}$
on subsets $A$ of all possible oriented edges
$
\overleftrightarrow{K}_V=\{ (i,j) \in V \times V : i \neq j \},
$
is a {\it convex closure}, meaning that
\begin{equation}
\label{convex-closure-definition}
\text{ for } a \neq b \text{ with }
a,b \not\in \bar{A}\text{ and }
a \in \overline{A \cup \{b\}},
\text{ one has }b \notin \overline{A \cup \{a\}}.
\end{equation}
\subsection{Toric posets}
We do not initially define a toric poset $P$ on the finite set $V$
via some binary (or ternary) relation. Rather we
define it in terms of chambers in a {\it toric graphic arrangement}
$\mathcal{A}_{\tor}(G)=\pi(\mathcal{A}(G))$,
the image of the graphic arrangement $\mathcal{A}(G)$ under the
quotient map $\mathbb{R}^V \overset{\pi}{\rightarrow} \mathbb{R}^V/\mathbb{Z}^V$. These are
important examples of {\it unimodular toric arrangements} discussed by
Novik, Postnikov and Sturmfels in \cite[\S\S 4-5]{Novik:02}; see
also Ehrenborg, Readdy and Slone \cite{Ehrenborg:09}.
\begin{defn}
\label{toric-poset-defn}
A connected component $c$ of the complement
$\mathbb{R}^V/\mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)$ is called a {\it toric chamber} for $G$;
denote by $\Chambers \mathcal{A}_{\tor}(G)$
the set of all toric chambers of $\mathcal{A}_{\tor}(G)$.
A {\it toric poset} $P$ is a set $c$ that arises as a toric chamber for
at least one graph $G$. We will write $P=P(c)$ and $c=c(P)$, depending
upon the context.
\end{defn}
\begin{ex}
When $n=2$, so $V=\{1,2\}$,
there are only two simple graphs $G=(V,E)$,
a graph $G_0$ with no edges and the complete graph $K_2$ with
a single edge $\{1,2\}$.
For both such graphs, the torus $\mathbb{R}^2/\mathbb{Z}^2$ remains connected
after removing the arrangement $\mathcal{A}_{\tor}(G)$, and hence
they each have only one toric chamber; call these chambers $c_0(=\mathbb{R}^2/\mathbb{Z}^2)$
for the graph $G_0$, and $c(=\mathbb{R}^2/\mathbb{Z}^2{-} \{x_1=x_2\})$
for the graph $K_2$. They represent two different toric posets $P(c_0)$
and $P(c)$,
even though their topological closures $\bar{c}=\bar{c}_0(=c_0)=\mathbb{R}^2/\mathbb{Z}^2$
are the same.
\end{ex}
A point $x$ in $\mathbb{R}^V/\mathbb{Z}^V$ does not have uniquely defined coordinates
$(x_1,\ldots,x_n)$. However, it is well-defined to speak of the
{\it fractional part} $\modone{x_i}$,
that is, the unique representative of the class of $x_i$ in $\mathbb{R}/\mathbb{Z}$ that
lies in $[0,1)$. Therefore a point $x$ in $\mathbb{R}^V/\mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)$
still induces an acyclic orientation $\omega(x)$ of $G$, as follows:
for each edge $\{i,j\}$ in $E$, since
$x_i \neq x_j \bmod{\mathbb{Z}}$, either
\begin{itemize}
\item $\modone{x_i} < \modone{x_j}$, and $\omega(x)$ directs $i \rightarrow j$, or
\item $\modone{x_j} < \modone{x_i}$, and $\omega(x)$ directs $j \rightarrow i$.
\end{itemize}
Denote this map $x \mapsto \omega(x)$ by
$
\mathbb{R}^V/\mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)
\overset{\bar{\alpha}_G}{\longrightarrow}
\Acyc(G).
$
Unfortunately, two points lying in the same toric chamber
$c$ in $\Chambers \mathcal{A}_{\tor}(G)$ need not
map to the same acyclic orientation under $\bar{\alpha}_G$.
This ambiguity leads one naturally to the
following equivalence relation on acyclic orientations.
\begin{defn}
When two acyclic orientations $\omega$ and $\omega'$ of $G$ differ
only by converting one source vertex of $\omega$ into a sink of $\omega'$,
say that they differ by a \emph{flip}.
The transitive closure of the flip operation generates an
equivalence relation on $\Acyc(G)$ denoted by $\equiv$.
\end{defn}
A thorough investigation of this source-to-sink flip operation and
the resulting equivalence relation was
undertaken by Pretzel in~\cite{Pretzel:86};
it had been studied earlier by Mosesjan \cite{Mosesjan:72}.
It has also appeared at other times in various contexts\footnote{
Pretzel called the source-to-sink flip {\it pushing down maximal
vertices}; in~\cite{Macauley:11}, it was called a \emph{click}.
In the category of representations of a quiver, it is
related to Bernstein, Gelfand and Ponomarev's
{\it reflection functors} \cite{Bernstein:73}.}
in the literature~\cite{Chen:10,Eriksson:09,Macauley:11,Speyer:09}.
Its relation to geometry of toric chambers $c=c(P)$ or
toric posets $P=P(c)$ is our first main result,
proven in \S~\ref{geometry-section}.
\begin{thm}
\label{geometric-interpretation-theorem}
The map $\bar{\alpha}_G$ induces a bijection
between $\Chambers \mathcal{A}_{\tor}(G)$ and $\Acyc(G)/\!\!\equiv$ as follows:
\begin{equation}
\label{alpha-floor-diagram}
\xymatrix{
\mathbb{R}^V/\mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)
\ar@{>>}[d] \ar[r]^{\bar{\alpha}_G} & \Acyc(G) \ar@{>>}[d] \\
\Chambers \mathcal{A}_{\tor}(G) \ar@{.>}[r]_{\bar{\alpha}_G} & \Acyc(G)/\!\!\equiv
}
\end{equation}
In other words, two points $x,x'$ in $\mathbb{R}^V/\mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)$
have $\bar{\alpha}_G(x)\equiv\bar{\alpha}_G(x')$ if and only if
$x,x'$ lie in the same toric chamber $c$ in $\Chambers \mathcal{A}_{\tor}(G)$.
\end{thm}
\noindent
The two sets $\Chambers \mathcal{A}_{\tor}(G)$ and $\Acyc(G)/\!\!\equiv$ appearing
in the theorem are known to have cardinality
$$
|\Acyc(G) / \!\! \equiv |=|\Chambers \mathcal{A}_{\tor}(G)|=T_G(1,0)
$$
where $T_G(x,y)$ is the Tutte polynomial of $G$;
see \cite{Macauley:08} and \cite[Theorem 4.1]{Novik:02}.
\begin{ex}
A {\it tree} $G$ on $n$ vertices has Tutte polynomial
$T_G(x,y)=x^{n-1}$. It therefore has $T_G(2,0)=2^{n-1}$
acyclic orientations $\omega$ and induced partial orders,
but only $T_G(1,0)=1$ toric chamber or toric partial order:
any two acyclic orientations of a tree are equivalent by a
sequence of source-to-sink moves.
\end{ex}
\begin{ex}
\label{four-cycle-toric-posets-example}
As a less drastic example, consider
$V=\{1,2,3,4\}$ and $G=(V,E)$ this graph:
$$
\tiny{
\xymatrix{
1\ar@{-}[r] & 2\ar@{-}[d] \\
3\ar@{-}[u]& 4\ar@{-}[l]
}
}
$$
It has Tutte polynomial $T_G(x,y)=x^3+x^2+x+y$, and hence
has $T_G(2,0)=2^3+2^2+2+0=14$ acyclic orientations $\omega$.
These $\omega$ fall into $T_G(1,0)=1^3+1^2+1+0=3$ different $\equiv$-classes
$[\omega]$, having cardinalities $4,4,6$, respectively,
corresponding to three different toric posets $P_i$
or chambers $c_i$ in $\Chambers \mathcal{A}_{\tor}(G)$:
$$
\tiny{
\begin{array}{rcccc}
P_1:
&
\xymatrix{
4 & \\
& 3 \ar[ul] \\
& 2 \ar[u] \\
1\ar[ur] \ar[uuu]
}
&
\xymatrix{
1 & \\
& 4 \ar[ul] \\
& 3 \ar[u] \\
2\ar[ur] \ar[uuu]
}
&
\xymatrix{
2 & \\
& 1 \ar[ul] \\
& 4 \ar[u] \\
3\ar[ur] \ar[uuu]
}
&
\xymatrix{
3 & \\
& 2 \ar[ul] \\
& 1 \ar[u] \\
4\ar[ur] \ar[uuu]
}
\\
& & & & \\
& & & & \\
& & & & \\
P_2:
&
\xymatrix{
1 & \\
& 2 \ar[ul] \\
& 3 \ar[u] \\
4\ar[ur] \ar[uuu]
}
&
\xymatrix{
2 & \\
& 3 \ar[ul] \\
& 4 \ar[u] \\
1\ar[ur] \ar[uuu]
}
&
\xymatrix{
3 & \\
& 4 \ar[ul] \\
& 1 \ar[u] \\
2\ar[ur] \ar[uuu]
}
&
\xymatrix{
4 & \\
& 1 \ar[ul] \\
& 2 \ar[u] \\
3\ar[ur] \ar[uuu]
}
\\
& & & & \\
& & & & \\
& & & & \\
P_3:
&
\xymatrix{
& 1 & \\
2\ar[ur]& &4\ar[ul] \\
& 3\ar[ur]\ar[ul]&
}
&
\xymatrix{
2 & 4 \\
1\ar[u]\ar[ur] & 3\ar[u]\ar[ul]
}
&
\xymatrix{
& 2 & \\
1\ar[ur]& &3\ar[ul] \\
& 4\ar[ur]\ar[ul]&
}
& \\
& &
\xymatrix{
& 3 & \\
2\ar[ur]& &4\ar[ul] \\
& 1\ar[ur]\ar[ul]&
}
&
\xymatrix{
1 & 3 \\
2\ar[u]\ar[ur] & 4\ar[u]\ar[ul]
}
&
\xymatrix{
& 4 & \\
1\ar[ur]& &3\ar[ul] \\
& 2\ar[ur]\ar[ul]&
}
\end{array}
}
$$
\end{ex}
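The counts in Example~\ref{four-cycle-toric-posets-example} are small enough to verify by brute force. The following Python sketch (an illustration; the equivalence classes are generated using source-to-sink flips together with their inverse sink-to-source moves) enumerates the $14 = T_G(2,0)$ acyclic orientations of this graph and partitions them into the $3 = T_G(1,0)$ classes of sizes $4,4,6$:

```python
from itertools import product
from collections import deque

V = {1, 2, 3, 4}
edges = [(1, 2), (2, 4), (3, 4), (1, 3)]   # the 4-cycle from the example

def is_acyclic(orient):
    # Kahn's algorithm: acyclic iff every vertex is eventually removable
    indeg = {v: 0 for v in V}
    out = {v: [] for v in V}
    for a, b in orient:
        out[a].append(b)
        indeg[b] += 1
    queue = deque(v for v in V if indeg[v] == 0)
    seen = 0
    while queue:
        v = queue.popleft()
        seen += 1
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(V)

all_orients = [frozenset((b, a) if bit else (a, b)
                         for (a, b), bit in zip(edges, bits))
               for bits in product([0, 1], repeat=len(edges))]
acyc = [o for o in all_orients if is_acyclic(o)]

def flip_neighbors(o):
    # reverse all edges incident to a source (or, inversely, to a sink)
    for v in V:
        if all(e[1] != v for e in o) or all(e[0] != v for e in o):
            yield frozenset((e[1], e[0]) if v in e else e for e in o)

classes = []
remaining = set(acyc)
while remaining:
    comp = {remaining.pop()}
    queue = deque(comp)
    while queue:
        o = queue.popleft()
        for o2 in flip_neighbors(o):
            if o2 not in comp:
                comp.add(o2)
                queue.append(o2)
    remaining -= comp
    classes.append(comp)
```

Running this gives $|{\rm acyc}| = 14$ and three classes of sizes $4,4,6$, in agreement with the example and with $T_G(2,0)=14$, $T_G(1,0)=3$.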
{\it Toric total orders} (see \S~\ref{total-orders-section})
are indexed by the $(n-1)!$ cyclic equivalence classes of permutations
\begin{equation}
\begin{array}{rcl}
\label{typical-cyclic-permutation}
[w]:=[(w_1,w_2,\ldots,w_n)]=\left\{\right. &(w_1,w_2,\ldots,w_{n-1},w_n), & \\
&(w_2,\ldots,w_{n-1},w_n,w_1), &\\
&\vdots&\\
&(w_n,w_1,w_2,\ldots,w_{n-1}) &\left.\right\}
\end{array}
\end{equation}
and correspond to the toric chambers $c_{[w]}$
in the complement of the {\it toric complete graphic arrangement $\mathcal{A}_{\tor}(K_V)$}.
For a particular toric poset $P=P(c)$,
one says that $[w]$ is a {\it toric total
extension} of $P$ if $c_{[w]} \subseteq c$. Denote by $\mathcal{L}_{\tor}(P)$ the
set of all such toric total extensions $[w]$ of $P$.
Although it is possible (see Example~\ref{lack-of-determination-by-extensions-example} below)
for two different toric posets $P$ to have the same set $\mathcal{L}_{\tor}(P)$,
the following assertion (combining
Proposition~\ref{edge-subgraph-chamber-refinement} and
Corollary~\ref{determination-by-total-cyclic-extensions} below) still holds.
\begin{prop}
When one {\it fixes} the graph $G$, the toric chamber $c$
(or its poset $P=P(c)$) for which $\bar{\alpha}_G(c)=[\omega]$
is completely determined by its topological
closure $\overline{c}$.
Furthermore one has
$
\overline{c} = \bigcup_{[w] \in \mathcal{L}_{\tor}(P)} \overline{c}_{[w]},
$
so that this closure depends only on the set of toric total extensions
$\mathcal{L}_{\tor}(P)$.
\end{prop}
\begin{ex}
The graph $G$ from Example~\ref{four-cycle-toric-posets-example}
and its three toric posets $P_1,P_2,P_3$ partition
the $(4-1)!=6$ different toric total orders on $V=\{1,2,3,4\}$
into their sets of toric total extensions $\mathcal{L}_{\tor}(P_i)$ as follows:
$$
\begin{aligned}
\mathcal{L}_{\tor}(P_1)
& = \{ [(1,2,3,4)] \},\\
\mathcal{L}_{\tor}(P_2)
& = \{ [(1,4,3,2)] \},\\
\mathcal{L}_{\tor}(P_3)
& = \{ [(1,2,4,3)],[(1,3,2,4)], [(1,3,4,2)],[(1,4,2,3)]\}.
\end{aligned}
$$
\end{ex}
As with posets, the same toric poset $P=P(c)$ arises as a chamber $c$
in {\it many} toric graphic arrangements $\mathcal{A}_{\tor}(G)$.
However, as with posets, this ambiguity is well-controlled,
in that there are two canonical choices of equivalence classes
$
(\bar{G}^{\tor}(P), [\bar{\omega}^{\tor}(P)])
$
and
$
(\hat{G}^{\torHasse}(P), [\omega^{\torHasse}(P)])
$
with the following properties.
\begin{enumerate}
\item[$\bullet$] A graph $G$ has $c(P)$ occurring in
$\Chambers \mathcal{A}_{\tor}(G)$ if and only if
$$
\hat{G}^{\torHasse}(P) \subseteq G \subseteq \bar{G}^{\tor}(P)
$$
where $\subseteq$ is inclusion of edges. In this case,
if $\bar{\alpha}_G(c(P))=[\omega]$, then
$\omega$ can be taken to be the
restriction to $G$ of a particular orientation
in the class $[\bar{\omega}^{\tor}(P)]$.
\item[$\bullet$] The map which sends
$(G,\omega) \longmapsto (\bar{G}^{\tor}(P), [\bar{\omega}^{\tor}(P)])$ may be described by what
will be called (in \S~\ref{transitivity-section}) {\it toric transitive closure}:
one adds into $G$ all edges $\{i,j\}$ which lie on some {\it toric chain} $C$ in $P$.
Here a toric chain (see \S~\ref{chains-section}) is a subset $C \subset V$
which is totally ordered in {\it every} poset associated with
an orientation in the class $[\omega]$.
One directs $i \rightarrow j$ if there is a
{\it toric directed path} from $i$ to $j$ in $(G,\omega)$, as defined
in \S~\ref{directed-paths-section} below.
Alternatively phrased, toric transitive closure will add the
directed edge $i \rightarrow j$ to $(G,\omega)$ whenever there is
a toric directed path from $i$ to $j$ in $(G,\omega)$.
\end{enumerate}
The existence of the unique
{\it inclusion-minimal} choice $(\hat{G}^{\torHasse}(P),[\omega^{\torHasse}(P)])$,
which we will call the {\it toric Hasse diagram} of $P$, follows
from our second main result, proven in \S~\ref{convex-closure-section}.
\begin{thm}
\label{toric-convex-closure-theorem}
Considered as a {\it closure operation} $A \longmapsto \bar{A}^{\tor}$
on subsets $A$ of all possible oriented edges
$
\overleftrightarrow{K}_V=\{ (i,j) \in V \times V : i \neq j \},
$
toric transitive closure is a {\it convex closure}, that is,
it satisfies \eqref{convex-closure-definition} above.
\end{thm}
\begin{ex}
The toric poset $P_1=P(c_1)$ from Example~\ref{four-cycle-toric-posets-example}
appears as a chamber $c_1$ in $\Chambers \mathcal{A}_{\tor}(G_i)$ for
exactly four graphs $G_1,G_2,G_3,G_4$, each shown below with an
orientation $\omega_i$ such that $\bar{\alpha}_{G_i}(c_1)=[\omega_i]$.
$$
\tiny{
\xymatrix{
4 & \\
& 3 \ar[ul] \\
& 2 \ar[u] \\
1\ar[ur] \ar[uuu]
}
\qquad
\xymatrix{
4 & \\
& 3 \ar[ul] \\
& 2 \ar[u]\ar[uul] \\
1\ar[ur] \ar[uuu]
}
\qquad
\xymatrix{
4 & \\
& 3 \ar[ul] \\
& 2 \ar[u] \\
1\ar[ur] \ar[uuu] \ar[uur]
}
\qquad
\xymatrix{
4 & \\
& 3 \ar[ul] \\
& 2 \ar[u] \ar[uul]\\
1\ar[ur] \ar[uuu] \ar[uur]
}
}
$$
For each of these four pairs $(G_i,\omega_i)$ with $i=1,2,3,4$,
the leftmost pair is its toric Hasse diagram
$(\hat{G}^{\torHasse}(P_1),\omega^{\torHasse}(P_1))$,
and the rightmost pair is its toric transitive closure
$(\bar{G}^{\tor}(P_1),\bar{\omega}^{\tor}(P_1))$.
\end{ex}
We close this Introduction with two remarks, one on terminology,
the other giving further motivation.
\begin{rem}
\label{disambiguation-remark}
Aside from the connection to toric hyperplane arrangements,
we have chosen the name ``toric partial order'',
as opposed to the arguably more natural term
``cyclic partial order'', because the latter is
easily confused with {\it partial cyclic orders}, the following
pre-existing concept in the literature, going
back at least as far as Megiddo~\cite{Megiddo:76}.
\begin{defn}
A \emph{partial cyclic order} on $V$ is a ternary relation $T \subseteq V\times V \times V$
that is
\begin{itemize}
\item \emph{antisymmetric}: If $(i,j,k)\in T$ then $(k,j,i)\not\in T$;
\item \emph{transitive}: If $(i,j,k)\in T$ and $(i,k,\ell)\in T$, then
$(i,j,\ell)\in T$;
\item \emph{cyclic}: If $(i,j,k)\in T$, then $(j,k,i)\in T$.
\end{itemize}
\end{defn}
\begin{defn}
\label{total-cyclic-order-defn}
When a partial cyclic order on $V$ is \emph{complete} in the sense that
for every triple $\{i,j,k\} \subseteq V$ of distinct elements,
$T$ contains some permutation of $(i,j,k)$, then $T$ is called a \emph{total cyclic order}. A total cyclic order on $V$ is easily seen to be the same as a toric total order: specify a cyclic equivalence class $[w]$ as
in \eqref{typical-cyclic-permutation}, and then check that $[w]$ is
determined by knowing its restrictions $[w|_{\{i,j,k\}}]$
for all triples $\{i,j,k\}$.
\end{defn}
Partial cyclic orders have been widely studied, and
have some interesting features not shared by ordinary partial
orders. For example, every partial order can be extended to a total
order, but not every partial cyclic order can be extended
to a total cyclic order; an example of this on $13$ vertices
is given in \cite{Megiddo:76}.
\end{rem}
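To illustrate the claim in Definition~\ref{total-cyclic-order-defn}, the ternary relation induced by a cyclic equivalence class $[w]$, namely $(i,j,k) \in T$ if and only if, reading cyclically from $i$, one meets $j$ before $k$, can be checked against the three axioms by brute force. A small Python sketch for $[w] = [(1,2,3,4)]$:

```python
from itertools import permutations

def cyclic_triples(w):
    # (i,j,k) in T iff, reading cyclically from i, one meets j before k
    n = len(w)
    pos = {v: i for i, v in enumerate(w)}
    return {(i, j, k) for i, j, k in permutations(w, 3)
            if (pos[j] - pos[i]) % n < (pos[k] - pos[i]) % n}

T = cyclic_triples((1, 2, 3, 4))

assert all((k, j, i) not in T for (i, j, k) in T)          # antisymmetric
assert all((i, j, l) in T for (i, j, k) in T
           for (i2, k2, l) in T if (i2, k2) == (i, k))     # transitive
assert all((j, k, i) in T for (i, j, k) in T)              # cyclic
# completeness: every triple of distinct elements appears in some order
assert all(any(t in T for t in permutations(s))
           for s in permutations((1, 2, 3, 4), 3))
```

Since the relation is also complete, this $T$ is a total cyclic order, as asserted in the definition.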
\begin{rem}
We mention a further analogy between posets and toric posets,
related to Coxeter groups, that was
one of our motivations for formalizing this concept.
Recall \cite{Bourbaki:02} that a {\it Coxeter system} $(W,S)$ is a group $W$
with generating set $S=\{s_1,\ldots,s_n\}$ having presentation
$
W= \langle S : (s_i s_j)^{m_{i,j}}=e \rangle
$
for some $m_{i,j}$ in $\{1,2,3,\ldots\} \cup \{\infty\}$,
where $m_{i,i}=1$ for all $i$ and $m_{i,j} \geq 2$ for $i \neq j$.
Associated to $(W,S)$ is the {\it Coxeter graph} on vertex set $S$
with an edge $\{s_i,s_j\}$ labeled by $m_{i,j}$ whenever $m_{i,j} > 2$, so that
$s_i,s_j$ do not commute; ignoring the edge labels, we will call this
the unlabeled Coxeter graph.
A {\it Coxeter element} for $(W,S)$ is an element of
the form $s_{w_1} s_{w_2} \cdots s_{w_n}$ for some choice of a total
order $w$ on $S$.
\begin{thm} Fix a Coxeter system $(W,S)$ with unlabeled Coxeter
graph $G$, and consider the map sending an
acyclic orientation $\omega$ in $\Acyc(G)$ having poset $P=\alpha_G(\omega)$
to the Coxeter element $s_{w_1} s_{w_2} \cdots s_{w_n}$ for any choice
of a linear extension $w$ in $\mathcal{L}(P)$.
\begin{enumerate}
\item[(i)]
This map is well-defined, and induces a bijection
(see \cite[\S V.6]{Bourbaki:02} and \cite{Cartier:69})
$$
\Acyc(G)
\longleftrightarrow
\{ \text{ Coxeter elements for }(W,S) \,\, \}.
$$
\item[(ii)]
It also induces a well-defined map from the toric equivalence classes
$[\omega]$ to the {\bf $W$-conjugacy classes} of all Coxeter elements,
and gives a bijection (see \cite{Eriksson:09, Macauley:08, Macauley:11, Shi:97}
and \cite[Remark 5.5]{Novik:02})
$$
\Acyc(G)/\!\!\equiv
\quad \longleftrightarrow \quad
\{ W\text{-conjugacy classes of Coxeter elements for }(W,S) \}.
$$
\end{enumerate}
\end{thm}
\noindent
We believe toric partial orders will play a key role in resolving
more questions about $W$-conjugacy classes.
\end{rem}
\section{Toric arrangements and
proof of Theorem~\ref{geometric-interpretation-theorem}}
\label{geometry-section}
Recall the statement of the theorem.
\vskip.1in
\noindent
{\bf Theorem~\ref{geometric-interpretation-theorem}.}
{\it
The map $\bar{\alpha}_G$ induces a bijection
between $\Chambers \mathcal{A}_{\tor}(G)$ and $\Acyc(G)/\!\!\equiv$ as follows:
$$
\xymatrix{
\mathbb{R}^V/\mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)
\ar@{>>}[d] \ar[r]^{\bar{\alpha}_G} & \Acyc(G) \ar@{>>}[d] \\
\Chambers \mathcal{A}_{\tor}(G) \ar@{.>}[r]_{\bar{\alpha}_G} & \Acyc(G)/\!\!\equiv
}
$$
In other words, two points $x,x'$ in $\mathbb{R}^V/\mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)$
have $\bar{\alpha}_G(x)\equiv\bar{\alpha}_G(x')$ if and only if
$x,x'$ lie in the same toric chamber $c$ in $\Chambers \mathcal{A}_{\tor}(G)$.
}
\vskip.1in
\noindent
Before embarking on the proof, we introduce one further geometric object
intimately connected with
\begin{itemize}
\item
the graphic arrangement $\mathcal{A}(G)=\bigcup_{\{i,j\} \in E} \{x \in \mathbb{R}^V: x_i = x_j\} \subset \mathbb{R}^V$, and
\item
the toric graphic arrangement
$\mathcal{A}_{\tor}(G)=\pi(\mathcal{A}(G))$, its image
under $\mathbb{R}^V \overset{\pi}{\rightarrow} \mathbb{R}^V/\mathbb{Z}^V$.
\end{itemize}
\begin{defn}
Define the \emph{affine graphic arrangement} in $\mathbb{R}^V$ by
\begin{equation}
\label{affine-arrangement-defn}
\mathcal{A}_{\aff}(G) := \pi^{-1}(\mathcal{A}_{\tor}(G)) = \pi^{-1}(\pi(\mathcal{A}(G)))
=\bigcup_{\substack{ \{i,j\} \in E\\ k \in \mathbb{Z}}} \{x \in \mathbb{R}^V: x_i = x_j+k \}.
\end{equation}
Call the connected components
$\hat{c}$ of the complement $\mathbb{R}^V {-}\mathcal{A}_{\aff}(G)$ {\it affine
chambers}, and denote the set of all such chambers $\Chambers \mathcal{A}_{\aff}(G)$.
\end{defn}
The reason for introducing $\mathcal{A}_{\aff}(G)$ and $\Chambers \mathcal{A}_{\aff}(G)$
is the following immediate consequence of the path-lifting
property for $\mathbb{R}^V \overset{\pi}{\rightarrow} \mathbb{R}^V/\mathbb{Z}^V$ as
a (universal) covering map (see e.g. \cite[Chap. 13]{Munkres:75}), along
with the definition \eqref{affine-arrangement-defn} of $\mathcal{A}_{\aff}(G)$
as the full inverse image under $\pi$ of $\mathcal{A}_{\tor}(G)$.
\begin{prop}
\label{path-lifting-prop}
Two points $x,y$ in $\mathbb{R}^V/\mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)$
lie in the same chamber $c$ in $\Chambers\mathcal{A}_{\tor}(G)$ if
and only if they have two lifts $\hat{x}, \hat{y}$ lying in the
same affine chamber $\hat{c}$ in $\Chambers \mathcal{A}_{\aff}(G)$.
\end{prop}
\noindent
The point will be that, since affine chambers $\hat{c}$ are (open) convex
polyhedral regions in $\mathbb{R}^V$, it is sometimes easier to argue about
lifted points $\hat{x}$ rather than $x$ itself.
Our proof of
Theorem~\ref{geometric-interpretation-theorem}
proceeds by showing the map
$
\mathbb{R}^V/\mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)
\overset{\bar{\alpha}_G}{\longrightarrow} \Acyc(G)
$
descends to
\begin{itemize}
\item a well-defined map
$
\Chambers \mathcal{A}_{\tor}(G)
\overset{\bar{\alpha}_G}{\longrightarrow} \Acyc(G)/\!\!\equiv,
$
\item which is surjective,
\item and injective.
\end{itemize}
\subsection{Well-definedness}
We must show that when $x, y$ lie in the same toric chamber
$c$ in $\Chambers\mathcal{A}_{\tor}(G)$, then
$\bar{\alpha}_G(x) \equiv \bar{\alpha}_G(y)$.
As in Proposition~\ref{path-lifting-prop}, pick lifts
$\hat{x},\hat{y}$ in $\mathbb{R}^V$ and a path $\hat{\gamma}$ between
them in some affine chamber $\hat{c}$. Because these chambers are
open, one can assume without loss of generality that $\hat{\gamma}$
takes steps in coordinate directions only, and therefore that
$\hat{x}, \hat{y}$ differ in only a single coordinate: say
$\hat{x}_i \neq \hat{y}_i$, but $\hat{x}_j=\hat{y}_j$ for all $j \neq i$.
Furthermore, as $\bar{\alpha}_G(x)$ changes only when a coordinate
of $\hat{x}$ passes through an integer, without loss of generality, one may assume
$$
\begin{aligned}
\modone{\hat{x}_i}&=1-\varepsilon,\\
\modone{\hat{y}_i}&=\varepsilon
\end{aligned}
$$
for some arbitrarily small $\varepsilon > 0$.
Since the points on $\hat{\gamma}$ all avoid $\mathcal{A}_{\aff}(G)$,
and the $i^{th}$ coordinate will pass through $0$ at some point
on the path $\hat{\gamma}$, each of the
coordinates $\hat{x}_j(=\hat{y}_j)$ for indices $j$ with
$\{i,j\}$ in $E$ must have $0 < \modone{\hat{x}_j} < 1$.
Hence one can choose $\varepsilon$
small enough that all $j$ for which $\{i,j\}$ in $E$ satisfy
$$
\left( \modone{\hat{y}_i} = \right) \varepsilon < \modone{\hat{x}_j}
< 1 - \varepsilon \left( = \modone{\hat{x}_i} \right).
$$
One finds that $\bar{\alpha}_G(\hat{x})$ and $\bar{\alpha}_G(\hat{y})$
differ by changing $i$ from a sink to a source, so
$\bar{\alpha}_G(\hat{x}) \equiv \bar{\alpha}_G(\hat{y})$,
as desired.
\subsection{Surjectivity}
It suffices to check that the map
$
\mathbb{R}^V/\mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)
\overset{\bar{\alpha}_G}{\longrightarrow} \Acyc(G)
$
is surjective. Given an acyclic orientation $\omega$
of $G$, pick any linear extension $w_1 < \cdots < w_n$ of its associated
partial order $\alpha_G^{-1}(\omega)$ on $V$. Then choose real numbers
$0 < x_{w_1} < \cdots < x_{w_n} < 1$, so that
$$
x=(x_1,\ldots,x_n)=(\modone{x_1},\ldots,\modone{x_n})
$$
and hence $\bar{\alpha}_G(x)=\omega$.
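As an illustrative aside, the surjectivity recipe above is mechanical enough to execute. The following Python sketch (the helper names \texttt{alpha\_bar} and \texttt{linear\_extension} are our own, not notation from the text) builds such a point $x$ from a linear extension of $\omega$'s poset and confirms that reading off the orientation from the fractional parts recovers $\omega$.

```python
def alpha_bar(edges, x):
    """Read off the acyclic orientation: each edge {i, j} is oriented
    i -> j exactly when x_i mod 1 < x_j mod 1 (no fractional parts tie)."""
    return {(i, j) if x[i] % 1.0 < x[j] % 1.0 else (j, i) for i, j in edges}

def linear_extension(vertices, omega):
    """A linear extension of the poset P(G, omega): repeatedly peel off a source."""
    order, arcs, left = [], set(omega), list(vertices)
    while left:
        v = next(u for u in left if all(head != u for _, head in arcs))
        order.append(v)
        left.remove(v)
        arcs = {a for a in arcs if a[0] != v}
    return order

# An acyclic orientation omega of the 4-cycle on vertices 0..3.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
omega = {(0, 1), (1, 2), (3, 2), (0, 3)}

# Surjectivity recipe: pick a linear extension w_1 < ... < w_n and choose
# 0 < x_{w_1} < ... < x_{w_n} < 1; then alpha_bar(x) = omega.
w = linear_extension(range(4), omega)
x = {v: (k + 1) / (len(w) + 1) for k, v in enumerate(w)}
assert alpha_bar(edges, x) == omega
```

Any linear extension works here, since every edge of $G$ is a relation of the poset and hence respected by the chosen coordinates.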
\subsection{Injectivity}
The key to injectivity is the following lemma.
\begin{lem}\label{lem:key}
Suppose $x$ lies in a toric chamber $c$ in $\Chambers \mathcal{A}_{\tor}(G)$,
and $\bar{\alpha}_G(x)=\omega$.
Then for any $\omega' \equiv \omega$, there exists
some $x'$ in the same toric chamber $c$ having $\bar{\alpha}_G(x')=\omega'$.
\end{lem}
\begin{proof}
It suffices to check this when $\omega'$ is obtained from $\omega$ by
changing a source vertex $i$ in $\omega$ to a sink in $\omega'$.
Since $\bar{\alpha}_G(x)=\omega$, one must have for each $j$ with
$\{i,j\}$ in $E$ that
$$
(0 \leq ) \modone{x_i} < \modone{x_j} (< 1).
$$
Lift $x$ to $\hat{x}=(\modone{x_1},\ldots,\modone{x_n})$, and
choose $\varepsilon$ small enough so that each $j$ with $\{i,j\}$ in $E$ has
$
\modone{x_j} < 1-\varepsilon.
$
Define $\hat{y}$ to have all the same coordinates as $\hat{x}$ except for
$\hat{y}_i=-\varepsilon$, so that $\modone{\hat{y}_i}=1-\varepsilon$, and
hence $y:=\pi(\hat{y})$ has $\bar{\alpha}_G(y)=\omega'$ by construction.
Note that the straight-line path $\hat{\gamma}$
from $\hat{x}$ to $\hat{y}$ changes only the $i^{th}$ coordinate,
decreasing it from $\hat{x}_i$ to $\hat{y}_i=-\varepsilon$,
and hence never crosses any of the affine
hyperplanes in $\mathcal{A}_{\aff}(G)$. Therefore $\hat{x},\hat{y}$ lie
in the same affine chamber, and $x,y$ lie in the same toric
chamber $c$.
\end{proof}
Now suppose that points $x,x'$ in two toric chambers $c,c'$
have $\bar{\alpha}_G(x) \equiv \bar{\alpha}_G(x')$, and we must show that
$c=c'$. By Lemma~\ref{lem:key}, without loss of generality one
has $\bar{\alpha}_G(x) = \omega = \bar{\alpha}_G(x')$. Thus one can
lift $x,x'$ to $\hat{x},\hat{x}'$ having $\hat{x}_i,\hat{x}'_i$ in $[0,1)$
for all $i$, and hence
$\alpha_G(\hat{x})=\omega=\alpha_G(\hat{x}')$.
For each edge $\{i,j\}$ in $E$, say directed $i\to j$ in $\omega$,
one has both
$$
\begin{aligned}
0 \leq \hat{x}_i &< \hat{x}_j <1,\\
0 \leq \hat{x}'_i &< \hat{x}'_j <1.
\end{aligned}
$$
Thus every point $\hat{y}$ on the straight-line path
$\hat{\gamma}$ between $\hat{x}$ and $\hat{x}'$ also satisfies
$0 \leq \hat{y}_i < \hat{y}_j < 1$, avoiding all affine
hyperplanes in $\mathcal{A}_{\aff}(G)$. Thus $\hat{x},\hat{x}'$ lie
in the same affine chamber $\hat{c}$, so that $x,x'$ lie
in the same toric chamber, as desired. This completes the proof
of injectivity, and hence the proof of Theorem~\ref{geometric-interpretation-theorem}.$\qed$
\vskip.2in
One corollary to Theorem~\ref{geometric-interpretation-theorem} is a (slightly)
more concrete description of a toric chamber $c$.
\begin{cor}
\label{toric-chamber-as-union-cor}
For a graph $G=(V,E)$ and toric chamber $c$ in $\Chambers \mathcal{A}_{\tor}(G)$
with $\bar{\alpha}_G(c)=[\omega]$, one has
$$
c= \bigcup_{ \omega' \in [\omega] }
\bar{\alpha}_G^{-1}(\omega')
= \bigcup_{ \omega' \in [\omega] }
\{ x \in \mathbb{R}^V/\mathbb{Z}^V: \modone{x_i} < \modone{x_j} \text{ if }\omega'\text{ directs }i \rightarrow j\}.
$$
\end{cor}
\section{Toric extensions}
\label{extensions-section}
Recall that for two (ordinary) posets $P, P'$ on a set $V$, one
says that {\it $P'$ is an extension of $P$} when $i <_P j$ implies
$i <_{P'} j$. It is easily seen how to reformulate this geometrically:
$P'$ is an extension of $P$ if and only if one has
an inclusion of their open polyhedral cones
$c(P') \subseteq c(P)$, as defined in \eqref{cone-of-a-poset-defn}.
This motivates the following definition.
\begin{defn}
Given two toric posets $P, P'$ say that {\it $P'$ is a toric
extension of $P$} if one has an inclusion of their open chambers
$c(P') \subseteq c(P)$ within $\mathbb{R}^V/\mathbb{Z}^V$.
\end{defn}
An obvious situation where this can occur is when
one has $G=(V,E)$ and $G'=(V,E')$ two graphs on the same vertex set $V$,
with $G$ an {\it edge-subgraph} of $G'$ in the sense that $E \subseteq E'$; the following proposition makes this precise.
\begin{prop}
\label{edge-subgraph-chamber-refinement}
Fix $G=(V,E)$ a simple graph.
\begin{enumerate}
\item[(i)] Toric chambers in $\Chambers \mathcal{A}_{\tor}(G)$
are determined by their topological closures:
for any pair of chambers $c_1,c_2$
in $\Chambers \mathcal{A}_{\tor}(G)$, if
$\bar{c}_1=\bar{c}_2$ then $c_1=c_2$.
\item[(ii)]
If $G$ is an edge-subgraph of $G'$, then
$
\bar{c} = \bigcup_{c'} \bar{c}',
$
where the union runs over all toric chambers
$c'$ in $\Chambers \mathcal{A}_{\tor}(G')$ for which
$P(c')$ is a toric extension of $P(c)$.
\end{enumerate}
\end{prop}
\begin{proof}
For (i), first note that any toric chamber $c$ in $\Chambers \mathcal{A}_{\tor}(G)$
has boundary $\bar{c} {-} c$ contained in $\mathcal{A}_{\tor}(G)$.
Now assume two toric chambers $c_1, c_2$ in $\Chambers \mathcal{A}_{\tor}(G)$ have
$\bar{c}_1 = \bar{c}_2$, and we wish to show $c_1=c_2$.
Any point $x$ of $c_1$ has
$x \in c_1 \subseteq \bar{c}_1=\bar{c}_2$.
However, $x$ cannot lie in $\mathcal{A}_{\tor}(G)$ since
$c_1$ is disjoint from $\mathcal{A}_{\tor}(G)$, so $x$ does not lie
in $\bar{c}_2 {-} c_2 \subset \mathcal{A}_{\tor}(G)$ by our first
observation. Hence $x$ lies in $c_2$.
But then $c_1, c_2$ are connected components of
$\mathbb{R}^V / \mathbb{Z}^V {-} \mathcal{A}_{\tor}(G)$, sharing the point $x$,
so $c_1=c_2$.
For (ii), we first argue that
\begin{equation}
\label{closure-described-via-lift}
\bar{c}=\pi\left( \overline{\pi^{-1}(c)} \right)
\end{equation}
using the fact that the covering map $\mathbb{R}^V \overset{\pi}{\rightarrow} \mathbb{R}^V/\mathbb{Z}^V$
is locally a homeomorphism. For any point $x$ in $\mathbb{R}^V/\mathbb{Z}^V$ there is
an open neighborhood $U$ which lifts to an open neighborhood $\hat{U}$,
mapping homeomorphically under $\pi$ to $U$. Hence $x$ is the
limit of a sequence $\{x_i\}_{i=1}^\infty$ of points in $c$
if and only if its lift $\hat{x}=\pi|_{\hat{U}}^{-1}(x)$ is a limit of
the sequence of points $\{\pi|_{\hat{U}}^{-1}(x_i)\}_{i=1}^\infty$ in
$\pi^{-1}(c)$. This shows \eqref{closure-described-via-lift}.
Since a toric chamber $c$ has $\pi^{-1}(c)$ given by a union of
affine chambers $\hat{c}$ in $\Chambers \mathcal{A}_{\aff}(G)$, in light of
\eqref{closure-described-via-lift},
it suffices to show that any affine chamber $\hat{c}$ in $\Chambers \mathcal{A}_{\aff}(G)$ has closure $\overline{\hat{c}}$
given by the union of the closures
$\overline{\hat{c}'}$ taken over all affine chambers $\hat{c}'$ in $\Chambers \mathcal{A}_{\aff}(G')$ that satisfy $\hat{c}' \subseteq \hat{c}$.
However, this
is clear since $\hat{c}$ is a polyhedron bounded by hyperplanes taken from $\mathcal{A}_{\aff}(G)$, while $\mathcal{A}_{\aff}(G')$ simply refines this decomposition with more hyperplanes.
\end{proof}
\section{Toric directed paths}
\label{directed-paths-section}
A particular special case of Proposition~\ref{edge-subgraph-chamber-refinement} is worth noting:
every graph $G=(V,E)$ is an edge-subgraph of the {\it complete graph $K_V$}.
As noted in the Introduction, acyclic orientations $\omega$ of $K_V$ correspond to total orders $w_1 < \cdots < w_n$, indexed by permutations $w=(w_1,\ldots,w_n)$ of $V=[n]:=\{1,2,\ldots,n\}$. It is easy to characterize the equivalence relation $\equiv$ on these total orders, and hence the toric chambers $\Chambers \mathcal{A}_{\tor}(K_V)$, in terms of cyclic shifts of these linear orders. However,
it is worthwhile to define this concept a bit more generally; it turns out to be crucial in Section~\ref{chains-section}.
\begin{defn}
\label{toric-directed-path-defn}
Given a simple graph $G=(V,E)$ and an acyclic orientation $\omega$ of
$G$, say that a sequence $(i_1,i_2,\ldots,i_m)$ of elements of $V$ forms
a {\it toric directed path in $\omega$} if $(G,\omega)$ contains all of these
edges:
\begin{equation}
\label{toric-directed-path-figure}
\xymatrix{
i_m & \\
&i_{m-1} \ar[ul] \\
&\vdots \ar[u]\\
&i_2 \ar[u] \\
i_1 \ar[uuuu] \ar[ur]&
}
\end{equation}
\end{defn}
\noindent
In particular, for small values of $m$, a toric
directed path in $\omega$
\begin{enumerate}
\item[$\bullet$]
of size $m=2$ is a directed edge $(i_1,i_2)$,
\item[$\bullet$]
of size $m=1$ is a degenerate path $(i_1)$ for any $i_1$ in $V$, and
\item[$\bullet$]
of size $m=0$ is the empty subset $\varnothing \subset V$.
\end{enumerate}
\begin{prop}
\label{flip-preserves-toric-directed-path}
An acyclic orientation $\omega$ of $G$
contains a toric directed path $(i_1,i_2,\ldots,i_m)$ if and only if every
acyclic orientation $\omega'$ in its $\equiv$-equivalence class contains a (unique) toric directed path
$$
(i_\ell,i_{\ell+1},\ldots,i_m,i_1,i_2,\ldots,i_{\ell-1})
$$
which is one of its cyclic shifts, that is, it lies in the cyclic
equivalence class $[(i_1,\ldots,i_m)]$.
\end{prop}
\begin{proof}
A toric directed path $(i_1,i_2,\ldots,i_m)$ has only one source, namely $i_1$,
and only one sink, namely $i_m$. The assertion follows by
checking that the effect of a source-to-sink flip at the source $i_1$
(resp. a sink-to-source flip at the sink $i_m$)
is to cyclically shift the toric directed path to $(i_2,\ldots,i_m,i_1)$ (resp.
$(i_m,i_1,i_2,\ldots,i_{m-1})$).
\end{proof}
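The flip computation in this proof is easy to verify mechanically. Here is a minimal Python sketch (the helpers \texttt{flip} and \texttt{is\_toric\_directed\_path} are hypothetical names of our own choosing) checking that flipping the unique source of a toric directed path cyclically shifts it.

```python
def flip(omega, v):
    """Source-to-sink flip: v must be a source of omega; reverse its arcs."""
    assert all(head != v for _, head in omega), "v is not a source"
    return {((head, v) if tail == v else (tail, head)) for tail, head in omega}

def is_toric_directed_path(omega, seq):
    """True if omega contains the chain of arcs i_1 -> i_2 -> ... -> i_m
    together with the chord i_1 -> i_m."""
    chain = {(seq[k], seq[k + 1]) for k in range(len(seq) - 1)}
    return chain | {(seq[0], seq[-1])} <= set(omega)

# A toric directed path (0,1,2,3): chain arcs plus the chord 0 -> 3.
omega = {(0, 1), (1, 2), (2, 3), (0, 3)}
assert is_toric_directed_path(omega, (0, 1, 2, 3))

# Flipping the unique source 0 yields the cyclic shift (1,2,3,0).
omega2 = flip(omega, 0)
assert is_toric_directed_path(omega2, (1, 2, 3, 0))
```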
\begin{rem}
\label{Coleman-remark}
We point out a reformulation of the sink-to-source equivalence
relation $\equiv$ on $\Acyc(G)$, due to Pretzel \cite{Pretzel:86},
leading to a reformulation of toric directed paths, useful
in Section~\ref{Antichain-section} on toric antichains.
Given a simple graph $G=(V,E)$, say that a cyclic equivalence
class $I=[(i_1,\ldots,i_m)]$ of ordered vertices is a {\it directed
cycle} of $G$ if $m\geq 3$ and $G$ contains all of the (undirected)
edges $\{i_j,i_{j+1}\}_{j=1,2,\ldots,m}$,
with subscripts taken modulo $m$. Given such a directed cycle $I$
define \emph{Coleman's $\nu$-function}~\cite{Coleman:89}
$$
\Acyc(G) \overset{\nu_I}{\longrightarrow} \mathbb{Z}
$$
where $\nu_I(\omega)$ for an acyclic orientation $\omega$ of $G$ is
defined to be the number of edges $\{i_j,i_{j+1}\}$ in $I$
which $\omega$ orients $i_j \rightarrow i_{j+1}$
minus the number of edges $\{i_j,i_{j+1}\}$
which $\omega$ orients $i_{j+1} \rightarrow i_j$.
It is easy to see that $\nu_I$ is preserved
by flips, and thus extends in a well-defined manner
to $\equiv$-classes $[\omega]$. In fact, Pretzel~\cite{Pretzel:86}
showed that this is a complete $\equiv$-invariant:
\begin{prop}\label{prop-pretzel}
Fixing the graph $G=(V,E)$, two acyclic orientations $\omega, \omega'$ in $\Acyc(G)$
have $\omega \equiv \omega'$ if and only if
$\nu_I(\omega)=\nu_I(\omega')$ for every directed cycle $I$ of $G$.
\end{prop}
Toric directed paths then have an obvious
characterization in terms of their $\nu_I$ function.
\begin{cor}
\label{cor:nu}
Given a directed cycle $I=[(i_1,\ldots,i_m)]$ in $G$, an acyclic orientation $\omega$ in $\Acyc(G)$ contains a toric directed path lying in the
cyclic equivalence class $I$ if and only if $\nu_I(\omega)=m-2$.
\end{cor}
\end{rem}
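The flip-invariance of $\nu_I$ is easy to check by hand on a small example, and the computation below also illustrates why an orientation containing a toric directed path in class $I$ attains $\nu_I = m-2$: all $m-1$ chain edges run with the traversal while the one chord necessarily runs against it. A Python sketch (function names are ours, not from the text):

```python
def nu(omega, cycle):
    """Coleman's nu-function for a directed cycle I = (i_1, ..., i_m):
    (# cycle edges omega orients with the traversal) minus (# against it).
    Each cycle edge is an edge of G, so omega orients it one way or the other."""
    m = len(cycle)
    return sum(1 if (cycle[k], cycle[(k + 1) % m]) in omega else -1
               for k in range(m))

def flip(omega, v):
    """Source-to-sink flip at a source v."""
    return {((h, v) if t == v else (t, h)) for t, h in omega}

I = (0, 1, 2, 3)                               # a directed 4-cycle in G
omega = {(0, 1), (1, 2), (3, 2), (0, 3)}       # an acyclic orientation
assert nu(omega, I) == 0                       # 2 forward, 2 backward

# nu_I is preserved by a source-to-sink flip:
assert nu(flip(omega, 0), I) == 0

# An orientation containing a toric directed path in class [I]:
# m - 1 forward edges and one backward chord give nu = m - 2.
omega_path = {(0, 1), (1, 2), (2, 3), (0, 3)}
assert nu(omega_path, I) == len(I) - 2
```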
\section{Toric total orders}
\label{total-orders-section}
An important special case of toric directed paths occurs when one
considers acyclic orientations of the complete graph $K_V$. Acyclic
orientations of $K_V$ correspond to permutations $w=(w_1,\ldots,w_n)$
of $V$ (equivalently, \emph{total orders}), and the vertex sequence
$(w_1,\ldots,w_n)$ always forms a toric directed path
in $w$. Hence their toric equivalence classes are the equivalence
classes $[w]$ of permutations up to cyclic shifts, or {\it toric total
orders}. This concept coincides with the pre-existing concept of
{\it total cyclic order} from
Definition~\ref{total-cyclic-order-defn}, even though toric {\it partial}
orders are not the same as {\it partial} cyclic orders. Therefore, we
can use these terms interchangeably.
By Theorem~\ref{geometric-interpretation-theorem}, these toric total
orders $[w]$ index the chambers $c_{[w]}$ in $\Chambers \mathcal{A}_{\tor}(K_V)$.
By Corollary~\ref{toric-chamber-as-union-cor}, one has this more concrete
description of such chambers:
\begin{equation}
\label{more-concrete-total-cyclic-chamber-description}
c_{[w]}= \bigcup_{ i=1 }^n
\{ x \in \mathbb{R}^V/\mathbb{Z}^V:
\modone{x_{w_i}} < \cdots < \modone{x_{w_n}}
< \modone{x_{w_1}} < \cdots < \modone{x_{w_{i-1}}} \}.
\end{equation}
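This union over cyclic shifts can be probed numerically: a point off the arrangement determines its toric total order by cyclically sorting the fractional parts of its coordinates. A small Python sketch (the helper \texttt{toric\_total\_order} is our own hypothetical name) checks that translating all coordinates by a common amount mod $1$, which crosses no hypertorus $x_i = x_j$, leaves the class $[w]$ unchanged even though the linear order of the fractional parts changes.

```python
def toric_total_order(x):
    """Cyclic class [w] of a point x off the arrangement: sort the vertices
    by fractional part, then rotate so the smallest vertex label comes first
    (a canonical representative of the cyclic equivalence class)."""
    w = sorted(x, key=lambda v: x[v] % 1.0)
    k = w.index(min(w))
    return tuple(w[k:] + w[:k])

x = {0: 0.70, 1: 0.20, 2: 0.90}
assert toric_total_order(x) == (0, 2, 1)

# A common translation mod 1 stays inside the same toric chamber c_[w]:
y = {v: (t + 0.25) % 1.0 for v, t in x.items()}
assert toric_total_order(y) == (0, 2, 1)
```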
\begin{defn}
\label{total-cyclic-extension-defn}
Given a toric poset $P=P(c)$ on $V$,
say that a toric total order $[w]$ on $V$ is a {\it toric total extension} of $P$ if the toric chamber $c_{[w]}$ of $\Chambers \mathcal{A}_{\tor}(K_V)$ is contained in $c$. Denote by $\mathcal{L}_{\tor}(P)$ the
set of all such toric total extensions $[w]$ of $P$.
\end{defn}
The following corollary is then a special case of Proposition~\ref{edge-subgraph-chamber-refinement}.
\begin{cor}
\label{determination-by-total-cyclic-extensions}
Fix a simple graph $G=(V,E)$.
Then any toric chamber/poset $c=c(P)$ in $\Chambers \mathcal{A}_{\tor}(G)$
has topological closure
$$
\bar{c} = \bigcup_{[w] \in \mathcal{L}_{\tor}(P)} \bar{c}_{[w]},
$$
and is completely determined by its set $\mathcal{L}_{\tor}(P)$
of toric total extensions:
if $c_1,c_2$ in $\Chambers \mathcal{A}_{\tor}(G)$ have
$\mathcal{L}_{\tor}(P(c_1))=\mathcal{L}_{\tor}(P(c_2))$, then $c_1=c_2$.
\end{cor}
\begin{ex}
\label{lack-of-determination-by-extensions-example}
Corollary~\ref{determination-by-total-cyclic-extensions}
fails when one does {\it not} fix the graph $G$. For example,
when $V=\{1,2,3\}$, all $7$ of the {\it non-complete} graphs
$G \neq K_V=K_3$
share the property that $\Chambers \mathcal{A}_{\tor}(G)$ has only one
chamber $c=c(P)$ with $\mathcal{L}_{\tor}(P)=\{ [(1,2,3)], [(1,3,2)]\}$,
whose closure $\bar{c}$ is the entire torus $\mathbb{R}^3/\mathbb{Z}^3$.
However, the unique toric chambers $c$ for these $7$ graphs
are all different, when considered as {\it open} subsets of $\mathbb{R}^3/\mathbb{Z}^3$,
and therefore each represents a {\it different} toric poset $P=P(c)$.
On the other hand, the complete graph $K_V=K_3$ has $2$ different
toric equivalence classes of acyclic orientations, representing
two different chambers within the same toric arrangement $\mathcal{A}_{\tor}(K_3)$,
and two different toric posets: $P(c_{[(1,2,3)]})$ and $P(c_{[(1,3,2)]})$.
\end{ex}
\section{Toric chains}
\label{chains-section}
We introduce the toric analogue of a chain (= totally ordered subset)
in a poset, and explicate its relation to the toric directed paths
from Definition~\ref{toric-directed-path-defn} and the toric total
extensions from Definition~\ref{total-cyclic-extension-defn} (or
equivalently, total cyclic extensions).
As motivation, note that in an ordinary poset $P(c)$, a
chain $C=\{i_1,\ldots,i_m\} \subseteq V$ has the following geometric
description: there is a total ordering $(i_1,\ldots,i_m)$ of $C$ such
that every point $x$ in the open polyhedral cone $c=c(P)$ has
$x_{i_1}<x_{i_2}<\cdots<x_{i_m}$.
\begin{defn}\label{toric-chain-defn}
Fix a toric poset $P=P(c)$ on a finite set $V$.
Call a subset $C=\{i_1,\ldots,i_m\} \subseteq V$
a {\it toric chain} in $P$ if
there exists a cyclic equivalence class $[(i_1,\ldots,i_m)]$ of
linear orderings of $C$
with the following property:
for every $x$ in the open toric chamber $c=c(P)$ there exists some $(j_1,\dots,j_m)$ in $[(i_1,\ldots,i_m)]$ for which
\begin{equation}
\label{eq:toric-chain}
\modone{x_{j_1}}<\modone{x_{j_2}}<\cdots<\modone{x_{j_m}}.
\end{equation}
In this situation, we will say that $P|_C=[(i_1,\ldots,i_m)]$.
\end{defn}
\begin{rem}
\label{toric-chain-defn-remark}
Note that
\begin{enumerate}
\item[$\bullet$]
singleton sets $\{i\}$ and the empty subset $\varnothing \subset V$ are
always toric chains in $P$,
\item[$\bullet$]
subsets of toric chains are toric chains, and
\item[$\bullet$]
a pair $\{i,j\}$ is a toric chain in $P=P(c)$ if and only
if every point $x$ in the open toric chamber
$c$ has $\modone{x_i} \neq \modone{x_j}$;
in particular, this holds whenever $c$ appears as a
toric chamber in $\Chambers \mathcal{A}_{\tor}(G)$ for a graph $G$ having
$\{i,j\}$ as an edge.
\end{enumerate}
\end{rem}
Though the definition of toric chain does not refer to a particular
graph $G$, there are several convenient characterizations that involve
a graph. In the following proposition, we list five equivalent
conditions.
The restriction to $|C|\neq 2$ is needed because the last
condition is vacuously true whenever $|C|=2$; in that case, only the
first four are equivalent.
\begin{prop}\label{toric-chain-prop}
Fix a toric poset $P=P(c)$ on a finite set $V$, and
$C=\{i_1,\ldots,i_m\} \subseteq V$.
The first four of the following five conditions are equivalent,
and when $m=|C| \neq 2$, they are also equivalent to the fifth.
\begin{enumerate}
\item[(a)] $C$ is a toric chain in $P$, with $P|_C=[(i_1,\ldots,i_m)]$.
\item[(b)] For every graph $G=(V,E)$ and acyclic orientation $\omega$
of $G$ having $\bar\alpha_G(c)=[\omega]$, the subset $C$ is a chain
in the poset $P(G,\omega)$, ordered in
some cyclic shift of the order $(i_1,\dots,i_m)$.
\item[(c)] For every graph $G=(V,E)$ and acyclic orientation $\omega$
of $G$ having $\bar\alpha_G(c)=[\omega]$, the subset $C$ occurs as a
subsequence of a toric directed path in $\omega$, in some cyclic
shift of the order $(i_1,\dots,i_m)$.
\item[(d)] There exists a graph $G=(V,E)$ and acyclic orientation
$\omega$ of $G$ having $\bar\alpha_G(c)=[\omega]$ such that $C$ occurs as a
subsequence of a toric directed path in $\omega$, in some cyclic shift
of the order $(i_1,\dots,i_m)$.
\item[(e)] Every total cyclic extension $[w]$ in $\mathcal{L}_{\tor}(P(c))$ has the
same restriction $[w|_C]=[(i_1,\ldots,i_m)]$.
\end{enumerate}
\end{prop}
\noindent
The following easy and well-known lemma will be used in the proof.
\begin{lem}
\label{incomparability-lemma}
When two elements $i,j$ are incomparable in a finite poset $Q$ on $V$,
one can choose a linear extension $w=(w_1,\ldots,w_n)$
in $\mathcal{L}(Q)$ that has $i,j$ appearing consecutively,
say $(w_s,w_{s+1})=(i,j)$.
\end{lem}
\begin{proof}
Begin $w$ with any linear extension
$w_1,w_2,\ldots,w_{s-1}$
for the order ideal $Q_{<i} \cup Q_{<j}$,
followed by $w_s=i, w_{s+1}=j$, and
finish with any linear extension
$w_{s+2},w_{s+3},\ldots,w_n$
for $Q {-} \left(Q_{\leq i} \cup Q_{\leq j}\right)$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{toric-chain-prop}]
Note that if $|C|\leq 1$, all five conditions (a)-(e) are vacuously true,
so without loss of generality $|C| \geq 2$.
We will first show (a) implies (b) implies (c) implies (d) implies (e).
Then we will show that (e)
implies (a) when $|C|\geq 3$, and (d) implies (a) when $|C|=2$.
\vskip.1in
\noindent {\sf (a) implies (b).} Assume that $C$ is a toric chain of $P$,
with $P|_C=[(i_1,\ldots,i_m)]$,
and take a graph $G$ and orientation $\omega$ such that
$\bar{\alpha}_G(c)=[\omega]$.
We first show by contradiction that $C$ must be totally ordered in
$Q:=P(G,\omega)$. Assume not, and
say $i,j$ in $C$ are incomparable in $Q$. By Lemma~\ref{incomparability-lemma}
there is a linear extension $w=(w_1,\ldots,w_n)$ in $\mathcal{L}(Q)$
having $i,j$ appear consecutively, say $(w_s,w_{s+1})=(i,j)$.
Choose $x$ in $\mathbb{R}^n$ with
$0\leq x_{w_1}<\cdots<x_{w_n}<1$
and let $x'$ be obtained from $x$ by exchanging $x_i,x_j$,
that is $x'_i=x_j$ and $x'_j=x_i$.
Since $x=\modone{x}$ and $x'=\modone{x'}$,
one has $\bar{\alpha}_G(x)=\omega=\bar{\alpha}_G(x')$,
and hence $x, x'$ lie in $c=c(P)$.
The condition \eqref{eq:toric-chain} on $x, x'$ implies that
$w|_C$ and $w'|_C$ give the same cyclic order on $C$, where $w'$ is
the linear extension obtained from $w$ by swapping $i,j$;
this forces $m=2$ and $C=\{i,j\}$. However, the average
$x''=\frac{x+x'}{2}$ gives a third point in $c$
having $\modone{x''_i}=x''_i=x''_j=\modone{x''_j}$,
contradicting \eqref{eq:toric-chain}.
Once one knows that $C$ is totally ordered in $Q$,
consideration of \eqref{eq:toric-chain} for the point $x$
chosen as above implies that $w|_C$ lies in $[(i_1,\ldots,i_m)]$,
and hence the same is true of $Q|_C$.
\vskip.1in
\noindent {\sf (b) implies (c).}
Assume for the toric poset $P=P(c)$,
every graph $G$ and orientation $\omega$ with
$\bar{\alpha}_G(c)=[\omega]$
has $C$ totally ordered in $P(G,\omega)$ by
a cyclic shift $(j_1,\ldots,j_m)$ in $[(i_1,\ldots,i_m)]$.
We will show that $C$ actually occurs in this order
as a subsequence of some toric directed path in $\omega$.
By Proposition~\ref{flip-preserves-toric-directed-path}, one is free to
alter $\omega$ within the class $[\omega]$.
So choose $\omega$ within $[\omega]$ among
all those for which $P(G,\omega)$ on $V$ totally orders $C$ as
$j_1 < \cdots < j_m$, but minimizing the cardinality $|Z|$
where
$$
Z:=\{z \in V: \text{there is a directed }\omega\text{-path from }j_m\text{ to }z\}.
$$
Note that $Z$ is nonempty, since it contains $j_m$.
We claim that minimality forces $|Z|=1$, that is, $Z=\{j_m\}$.
To argue the claim by contradiction, assume $Z \neq \{j_m\}$.
Then one can find an $\omega$-sink $z \neq j_m$ in $Z$, as $V$ is finite,
and $\omega$ is acyclic.
Perform a sink-to-source flip at $z$ to create a new orientation
$\omega'$ in $[\omega]$. Then $\omega'$
still has $P(G,\omega')$ totally ordering $C$ as
$j_1 < \cdots < j_m$, but its set $Z'$ has $|Z'|<|Z|$ because
$Z' \subseteq Z {-} \{z\}$, contradicting the minimality of $|Z|$
and proving the claim.
Now $Z=\{j_m\}$ means that $j_m$ is an $\omega$-sink. Create
$\omega'$ by flipping $j_m$ from sink to source.
Since $j_1$ is supposed to be comparable with
$j_m$ in $P(G,\omega')$, one must have
$j_m <_{P(G,\omega')} j_1$, that is, there is an $\omega'$-path
of the form
$j_m\rightarrow k \rightarrow \cdots \rightarrow j_1$;
possibly $k=j_1$ here.
But this means that prior to the sink-to-source flip of $j_m$,
one had a toric directed $\omega$-path
$k \rightarrow \cdots \rightarrow j_1 \rightarrow j_2 \rightarrow \cdots \rightarrow j_m$
that contained $C$, as desired.
\vskip.1in
\noindent {\sf (c) implies (d).}
Trivial.
\vskip.1in
\noindent
{\sf (d) implies (e).}
Assume the graph $G$ has $\bar{\alpha}_G(c)=[\omega]$ and
$C$ occurs in the order $(i_1,\ldots,i_m)$ as a
subsequence of a toric directed path in $\omega$.
We must show that every total cyclic extension $[w]$ of $P=P(c)$
has restriction $[w|_C] = [(i_1,\ldots,i_m)]$.
By Definition~\ref{total-cyclic-extension-defn}, one has $c_{[w]} \subseteq c$. By \eqref{more-concrete-total-cyclic-chamber-description},
one can pick a point $x$ in $c_{[w]}$, so that
$$
\modone{x_{w_1}} < \cdots < \modone{x_{w_n}}.
$$
Since $x$ also lies in $c$, one has $\bar{\alpha}_G(x) = \omega' \equiv \omega$.
Proposition~\ref{flip-preserves-toric-directed-path} implies
that $\omega'$ contains as a toric directed path
some cyclic shift $(j_1,\ldots,j_m)$ of $(i_1,\ldots,i_m)$.
Hence
$$
\modone{x_{j_1}} < \cdots < \modone{x_{j_m}},
$$
which forces $w|_C=(j_1,\ldots,j_m)$, as desired.
\vskip.1in
\noindent
{\sf (e) implies (a) when $|C|\geq 3$.}
Assume that every total cyclic extension $[w]$ of $P=P(c)$ has $w|_C$
lying in the same cyclic equivalence class $[(i_1,\ldots,i_m)]$.
We want to show that every point $x$ in $c$
satisfies \eqref{eq:toric-chain}.
Recall from Corollary~\ref{toric-chamber-as-union-cor}
that there is at least one graph $G$
and $\equiv$-class $[\omega]$ containing
$\bar{\alpha}_G(x)$, that is, $\bar{\alpha}_G(c)=[\omega]$.
It suffices to show that
the partial order $Q:=P(G,\omega)$ on $V$ induced by any orientation
$\omega$ in this $\equiv$-class has restriction $Q|_C$
to the subset $C$ giving a total order $(j_1,\ldots,j_m)$,
and this total order lies in $[(i_1,\ldots,i_m)]$.
Suppose that $Q|_C$ is {\it not} a total order;
say elements $i,j$ in $C$ are incomparable in $Q$. By Lemma~\ref{incomparability-lemma}, one can then
choose linear extensions $w,w'$ in
$\mathcal{L}(Q)$ that both have $i,j$ consecutive, and differ only
in swapping $i,j$, say $(w_s,w_{s+1})=(i,j)$ and $(w'_s,w'_{s+1})=(j,i)$.
Pick points $x,x'$ that satisfy
\begin{align*}
&0 \leq x_{w_1} < \cdots < x_{w_n} < 1\\
&0 \leq x'_{w'_1} < \cdots < x'_{w'_n} < 1.
\end{align*}
Since $x=\modone{x}, x'=\modone{x'}$,
one finds that $x,x'$ lie in $c_{[w]}, c_{[w']}$, respectively.
Also one has $\bar{\alpha}_G(x)=\omega=\bar{\alpha}_G(x')$ so that
both $x, x'$ lie in $c$. Hence $c_{[w]},c_{[w']} \subseteq c$, that
is, both $[w],[w']$ are total cyclic extensions in $\mathcal{L}_{\tor}(P(c))$.
However, since $|C| \geq 3$, there exists some third
element $k$ in $C {-} \{i, j\}$, and $[w],[w']$ differ
in their cyclic ordering of $\{i,j,k\}$. This contradicts
assumption (e), so $Q|_C$ is a total order.
Once one knows that $Q|_C$ is a total order $j_1 < \cdots <j_m$,
the above argument shows that $(j_1,\ldots,j_m)$ lies in the cyclic
equivalence class $[w|_C]$ for every $w$ in $\mathcal{L}_{\tor}(P)$, which
is $[(i_1,\ldots,i_m)]$ by assumption.
\vskip.1in
\noindent
{\sf (d) implies (a) when $|C|=2$.} Suppose
$\bar\alpha_G(c)=[\omega]$ and $C$ occurs as a subsequence of a toric
directed path in $\omega$, with $i_1<i_2$. By
Proposition~\ref{flip-preserves-toric-directed-path}, if
$\omega'\equiv\omega$, then $C$ occurs in a toric directed path in
$\omega'$. This means that for any $x$ with $\bar{\alpha}_G(x)=\omega'$,
we have $\modone{x_{i_1}}\neq\modone{x_{i_2}}$, and
so either $\modone{x_{i_1}}<\modone{x_{i_2}}$ or
$\modone{x_{i_2}}<\modone{x_{i_1}}$ must hold for every $x$ in
$c$. Thus $C$ is a toric chain of $P(c)$.
\end{proof}
\section{Toric transitivity}
\label{transitivity-section}
We next clarify the edges that are ``forced'' in a toric partial order,
an analogue of transitivity that we refer to as \emph{toric transitivity}.
\begin{thm}
\label{thm:toric-transitivity}
Fix a toric poset $P=P(c)$ on $V$, and assume that $G=(V,E)$ has
$c$ appearing as a toric chamber in $\Chambers \mathcal{A}_{\tor}(G)$,
say $\bar{\alpha}_G(c)=[\omega]$. Then for any
non-edge pair $\{i,j\} \not\in E$, either
\begin{enumerate}
\item[(i)] $i,j$ lie on a toric chain in $P$, in which case
$c$ is also a toric chamber for $G^+=(V,E\cup \{i,j\})$,
and there is a unique extension $\omega^+$ of $\omega$
such that $\bar{\alpha}_{G^+}(c)=[\omega^+]$, or
\item[(ii)] $i,j$ lie on no toric chain in $P$, and then
the hypersurface $\modone{x_i} = \modone{x_j}$ intersects the open toric chamber $c$.
\end{enumerate}
\end{thm}
\begin{proof}
Assertion (i) follows from Proposition~\ref{toric-chain-prop}: when $i,j$ lie
on a toric chain $C$ in $P$, assertion (b) of that proposition says that
they lie on a toric directed path in $\omega$
for every representative of the class $[\omega]$, and hence
the inequality $\modone{x_i} < \modone{x_j}$ (or its reverse inequality)
is already
implied by the other inequalities defining the points of
$\bar{\alpha}_G^{-1}(\omega)$ that come from the edges of $G$ induced by $C$.
For assertion (ii), note that whenever there exist no points $x$ of the open toric chamber $c$ having $\modone{x_i} = \modone{x_j}$, then every $x$ in $c$ has
either $\modone{x_i} < \modone{x_j}$ or
$\modone{x_j} < \modone{x_i}$. This shows that
$\{i,j\}$ is itself a toric chain in $P=P(c)$; see
Remark~\ref{toric-chain-defn-remark}.
\end{proof}
This suggests the following definition.
\begin{defn}
\label{toric-transitive-closure-defn}
Given a graph $G=(V,E)$ and $\omega$ in $\Acyc(G)$,
the {\it toric transitive closure} of the pair
$(G,\omega)$ is the pair
$
(\bar{G}^{\tor},\bar{\omega}^{\tor})
$
defined as follows.
The edges of $\bar{G}^{\tor}$ are obtained by
adding to the edges of $G$ all pairs
$\{i,j\}$ that are a subset of some toric directed path in $\omega$;
see the dotted edges in \eqref{toric-transitivity-figure} below.
The acyclic orientation
$\bar{\omega}^{\tor}$ orients such an added edge as $i \rightarrow j$ when the
toric directed path contains a directed path from $i$ to $j$, rather than
from $j$ to $i$.\end{defn}
\begin{equation}
\label{toric-transitivity-figure}
\xymatrix{
i_m & \\
&i_{m-1} \ar[ul]\\
&i_{m-2} \ar[u]\ar@{-->}[uul]\\
&\vdots \ar[u]\\
&i_3 \ar[u]\ar@{-->}[uuuul]\\
&i_2 \ar[u]\ar@{-->}[uuuuul] \\
i_1 \ar[uuuuuu] \ar[ur]\ar@{-->}[uur]\ar@{-->}[uuuur]\ar@{-->}[uuuuur]&
}
\end{equation}
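Definition~\ref{toric-transitive-closure-defn} is easy to prototype. The following sketch (in Python; the function name and the test graphs are our own, purely illustrative) assumes that a toric directed path is a directed path $i_1\to\cdots\to i_m$ whose long edge $(i_1,i_m)$ is also present, as drawn above, and adds all chords of such paths, iterating until nothing new appears.

```python
from itertools import combinations

def toric_closure(edges):
    """Toric transitive closure of a set of directed edges (i, j), read i -> j.
    Sketch under the assumption that a toric directed path is a directed path
    i1 -> ... -> im (m >= 3) whose long edge (i1, im) is also present; every
    chord (i_j, i_k) with j < k of such a path is then added as an edge."""
    E = set(edges)
    changed = True
    while changed:
        changed = False
        adj = {}
        for a, b in E:
            adj.setdefault(a, []).append(b)
        for (u, v) in sorted(E):           # candidate long edges
            # enumerate directed paths u -> ... -> v with at least one
            # intermediate vertex (depth-first search; the graph is acyclic)
            stack = [(u, (u,))]
            while stack:
                node, path = stack.pop()
                if node == v:
                    if len(path) >= 3:     # found a toric directed path
                        for a, b in combinations(path, 2):
                            if (a, b) not in E:
                                E.add((a, b))
                                changed = True
                    continue
                for w in adj.get(node, []):
                    if w not in path:
                        stack.append((w, path + (w,)))
    return E

# a toric directed path on four vertices: the chords (1,3), (2,4) are forced...
closed = toric_closure({(1, 2), (2, 3), (3, 4), (1, 4)})
# ...while a plain directed path without its long edge forces nothing:
plain = toric_closure({(1, 2), (2, 3)})
```

The two added chords are exactly the dotted edges of \eqref{toric-transitivity-figure}; the second call illustrates that $\bar{A}^{\tor}$ can be strictly smaller than the ordinary transitive closure $\bar{A}$.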
\begin{cor}
\label{toric-transitive-closure-independence}
The toric transitive closure
depends only upon the toric poset $P=P(c)$ which satisfies $\bar{\alpha}_G(c)=[\omega]$,
in the following sense:
given two graphs $G_i=(V,E_i)$ for $i=1,2$,
and $\omega_i$ in $\Acyc(G_i)$ with $\bar{\alpha}_{G_i}(c)=[\omega_i]$, then
\begin{enumerate}
\item[(i)]
$\bar{G}^{\tor}_1=\bar{G}^{\tor}_2$, and
\item[(ii)]
$\bar{\omega}^{\tor}_1 \equiv \bar{\omega}^{\tor}_2$.
\end{enumerate}
\end{cor}
\begin{proof}
Assertion (i) follows from the fact that $\{i,j\}$ appears as an edge
in $\bar{G}^{\tor}$ if and only if it is a subset of some toric chain of
$P$, and adding $\{i,j\}$ does not affect the toric poset $P=P(c)$,
according to Theorem~\ref{thm:toric-transitivity}(i). For assertion
(ii), note that iterating Theorem~\ref{thm:toric-transitivity}(i)
gives
$$
\bar{\alpha}_{\bar{G}^{\tor}}^{-1}(\bar{\omega}_1^{\tor})=
\bar{\alpha}_{G_1}^{-1}(\omega_1)=
c=\bar{\alpha}_{G_2}^{-1}(\omega_2)
=\bar{\alpha}_{\bar{G}^{\tor}}^{-1}(\bar{\omega}_2^{\tor}).
$$
Assertion (ii) then follows from Theorem~\ref{geometric-interpretation-theorem}.
\end{proof}
\begin{rem}
Note that the toric transitive closure $\bar{A}^{\tor}$ is
always a subset of the ordinary transitive closure $\bar{A}$, since
any toric directed path that contains $(i,j)$ as a subsequence
also contains an ordinary directed path from $i$ to $j$.
\end{rem}
\section{Proof of Theorem~\ref{toric-convex-closure-theorem}}
\label{convex-closure-section}
Here we wish to regard a pair $(G,\omega)$ of a simple graph $G=(V,E)$
and acyclic orientation $\omega$ in $\Acyc(G)$ as a subset
$A \subset \overleftrightarrow{K}_V$ of the set
of all possible directed edges
$\overleftrightarrow{K}_V=\{ (i,j) \in V \times V : i \neq j \}$.
Then the toric transitive closure operation
$(G,\omega) \longmapsto (\bar{G}^{\tor},\bar{\omega}^{\tor})$
from Definition~\ref{toric-transitive-closure-defn}
may be regarded as a {\it closure operator} on $\overleftrightarrow{K}_V$,
that is, a map $A \longmapsto \bar{A}^{\tor}$
from $2^{\overleftrightarrow{K}_V}$ to itself, satisfying
\begin{itemize}
\item $A \subseteq \bar{A}^{\tor}$,
\item $A \subseteq B$ implies $\bar{A}^{\tor} \subseteq \bar{B}^{\tor}$, and
\item $\bar{\bar{A}}^{\tor}=\bar{A}^{\tor}$.
\end{itemize}
\noindent
Recall the statement of Theorem~\ref{toric-convex-closure-theorem}:
\vskip.1in
\noindent
{\bf Theorem~\ref{toric-convex-closure-theorem}.}
{\it
The toric transitive closure operation $A \longmapsto \bar{A}^{\tor}$
is a {\it convex closure}, that is,
$$
\text{ for } a \neq b \text{ with }
a,b \not\in \bar{A}^{\tor} \text{ and }
a \in \overline{A \cup \{b\}}^{\tor},
\text{ one has }b \notin \overline{A \cup \{a\}}^{\tor}.
$$
}
For the purposes of the proof, introduce one further
bit of terminology.
\begin{defn}
For $\omega$ in $\Acyc(G)$ and a toric directed path $C=(i_1,\ldots,i_m)$ in
$\omega$ of size $m\geq 3$, as in \eqref{toric-directed-path-figure},
call $(i_1,i_m)$ the {\it long edge} of $C$, and
call the other edges $(i_1,i_2),(i_2,i_3),\ldots,(i_{m-1},i_m)$ the
{\it short edges} of $C$.
\end{defn}
\begin{proof}[Proof of Theorem~\ref{toric-convex-closure-theorem}.]
Proceed by contradiction:
suppose $(i,j) \neq (k,\ell)$ are {\it not} in
$\bar{A}^{\tor}$, but both
\begin{enumerate}
\item[$\bullet$]
$(k,\ell)$ lies in $\overline{A\cup (i,j)}^{\tor}$,
say because $(i,j)$ creates a toric directed path $C$
also containing $(k,\ell)$, which was
not already present in $\bar{A}^{\tor}$,
and
\item[$\bullet$]
$(i,j)$ lies in $\overline{A\cup (k,\ell)}^{\tor}$,
say because $(k,\ell)$ creates a toric directed path $D$
also containing $(i,j)$, which was
not already present in $\bar{A}^{\tor}$.
\end{enumerate}
Introduce the (ordinary) partial order $Q$ on $V$ which
is the (ordinary) transitive closure of
$\bar{A}^{\tor} \cup \{(i,j),(k,\ell)\}$.
We use this to argue a contradiction in various cases.
\vskip.1in
\noindent
{\sf Case 1. Either $(i,j)$ is the long edge of $C$, or
$(k,\ell)$ is the long edge of $D$.}
By relabeling, assume without loss of generality
that $(i,j)$ is the long edge of $C$.
Then in $Q$, one has
\begin{equation}
\label{first-transitivity-inequalities}
i \leq k < \ell \leq j
\end{equation}
with at least one of the two weak inequalities being strict.
\vskip.1in
\noindent
{\it Subcase 1a. $(k,\ell)$ is also the long edge of $D$.}
Then in $Q$ one also has $k \leq i < j \leq \ell$, which
with \eqref{first-transitivity-inequalities} gives
$$
k \leq i \leq k < \ell \leq j \leq \ell
$$
forcing the contradiction $(i,j)=(k,\ell)$.
\vskip.1in
\noindent
{\it Subcase 1b. $(k,\ell)$ is a short edge of $D$.}
Then since $C$ has $(i,j)$ as its long edge and gives a toric directed path
containing $(k,\ell)$ (while $\bar{A}^{\tor}$ had no such path),
$C$ must contain a directed path from $k$ to $\ell$ with at least two steps.
Combining this with $D {-} \{(k,\ell)\}$ gives
a toric directed path in $\bar{A}^{\tor}$ that contains $(i,j)$; contradiction.
\vskip.1in
\noindent
{\sf Case 2. Both $(i,j),(k,\ell)$ are
short edges of $C, D$, respectively.}
In this case, $\bar{A}^{\tor}$ cannot contain a path
from $i$ to $j$, else replacing $(i,j)$ in $C$ with this path
would give the contradiction that $(i,j)$ is in $\bar{A}^{\tor}$.
Similarly, $\bar{A}^{\tor}$ cannot contain a path
from $k$ to $\ell$. Also note that, since $C$ (or $D$) is a directed path
containing all four of $\{i,j,k,\ell\}$, the four of them are totally ordered in $Q$.
We now argue in subcases based on how $Q$ totally orders $\{i,j,k,\ell\}$.
\vskip.1in
\noindent
{\it Subcase 2a. Either $Q$ has $i < j \leq k < \ell$
or $k < \ell \leq i < j$.}
In this case, adding $(i,j)$ to $\bar{A}^{\tor}$ cannot help to create
a directed path from $k$ to $\ell$, contradicting the existence of $C$.
\vskip.1in
\noindent
{\it Subcase 2b. Either $Q$ has $i \leq k < \ell \leq j$,
with at least one of the weak inequalities strict,
or $k \leq i < j \leq \ell$,
with at least one of the weak inequalities strict.}
Assume without loss of generality, by relabeling, that
one is in the first case $i \leq k < \ell \leq j$. But
then adding $(i,j)$ to $\bar{A}^{\tor}$ again cannot help to create
a directed path from $k$ to $\ell$, contradicting the existence of $C$.
\vskip.1in
\noindent
{\it Subcase 2c. Either $Q$ has $i \leq k \leq j \leq \ell$,
with at least two consecutive strict inequalities,
or $k \leq i \leq \ell \leq j$,
with at least two consecutive strict inequalities.}
Assume without loss of generality, by relabeling, that
one is in the first case $i \leq k \leq j \leq \ell$.
But then the consecutive strict inequalities either imply
the existence within $\bar{A}^{\tor}$ of
a directed path from $i$ to $j$,
or one from $k$ to $\ell$; contradiction.
\end{proof}
\section{Toric Hasse diagrams}
\label{Hasse-diagram-section}
For convex closures $A \longmapsto \bar{A}$, it is
well known that for any subset $A$, its set of {\it extreme
points}
$$
\extreme(A): = \{ a \in A: a \not\in \overline{A {-} \{a\}} \}
$$
is the unique set which is minimal under inclusion among all subsets
having the same closure as $A$; see \cite{Edelman:85}.
For ordinary transitive closure of an acyclic orientation $(G,\omega)$
as a subset of $\overleftrightarrow{K_V}$, its extreme points
are exactly the directed edges $(i,j)$ of the usual {\it Hasse diagram} of its associated partial order $P$. This suggests the following definition.
\begin{defn}
\label{toric-Hasse-diagram-defn}
Given a graph $G=(V,E)$ and $\omega$ in $\Acyc(G)$,
corresponding to a subset $A$ of $\overleftrightarrow{K_V}$,
its {\it toric Hasse diagram} is the pair
$(\hat{G}^{\torHasse},\omega^{\torHasse})$ corresponding
to its subset of extreme points $\extreme(A)$ with respect to
the toric transitive closure operation $A \longmapsto \bar{A}^{\tor}$.
\end{defn}
Definition~\ref{toric-transitive-closure-defn} allows one to rephrase this as
follows:
\begin{itemize}
\item $\hat{G}^{\torHasse}$ is obtained from $G$ by
removing all {\it chord edges} $\{i_j,i_k\}$ with $|j-k| \geq 2$
from all toric directed paths $C=\{i_1,\ldots,i_m\}$ in $\omega$
that have $m=|C| \geq 4$, and
\item $\omega^{\torHasse}$ is the restriction $\omega|_{\hat{G}^{\torHasse}}$.
\end{itemize}
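The recipe in the two bullets above can also be prototyped directly (Python; the function name and example are ours, and we again assume, as in \eqref{toric-transitivity-figure}, that a toric directed path carries its long edge $(i_1,i_m)$, which is kept).

```python
def toric_hasse(edges):
    """Toric Hasse diagram of an acyclic orientation given as directed edges
    (i, j).  Illustrative sketch: assuming a toric directed path is a directed
    path i1 -> ... -> im whose long edge (i1, im) is also present, delete every
    chord {i_j, i_k} with |j - k| >= 2 of such a path with m >= 4, keeping the
    long edge itself."""
    E = set(edges)
    adj = {}
    for a, b in E:
        adj.setdefault(a, []).append(b)
    chords = set()
    for (u, v) in E:                          # candidate long edges
        stack = [(u, (u,))]
        while stack:
            node, path = stack.pop()
            if node == v:
                m = len(path)
                if m >= 4:                    # toric directed path with m >= 4
                    for j in range(m):
                        for k in range(j + 2, m):
                            if (j, k) != (0, m - 1):   # keep the long edge
                                chords.add((path[j], path[k]))
                continue
            for w in adj.get(node, []):
                if w not in path:
                    stack.append((w, path + (w,)))
    return E - chords

# the chords (1,3) and (2,4) of the toric directed path (1,2,3,4) are deleted:
hasse = toric_hasse({(1, 2), (2, 3), (3, 4), (1, 4), (1, 3), (2, 4)})
```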
One then has the following analogue of
Corollary~\ref{toric-transitive-closure-independence}.
\begin{cor}
\label{toric-Hasse-diagram-closure-independence}
The toric Hasse diagram
depends only on the toric poset $P=P(c)$ having $\bar{\alpha}_G(c)=[\omega]$,
in the following sense:
given two graphs $G_i=(V,E_i)$ for $i=1,2$,
and $\omega_i$ in $\Acyc(G_i)$ with $\bar{\alpha}_{G_i}(c)=[\omega_i]$, then
\begin{enumerate}
\item[(i)]
$\hat{G}_1^{\torHasse}=\hat{G}_2^{\torHasse}$, and
\item[(ii)]
$\omega_1^{\torHasse} \equiv \omega_2^{\torHasse}$.
\end{enumerate}
\end{cor}
\begin{proof}
Same as the proof of Corollary~\ref{toric-transitive-closure-independence}.
The key point is that the toric directed paths
$C=\{i_1,\ldots,i_m\}$ in $\omega$ are the toric chains in $P$,
and when $|C| \geq 4$, removing chords from $C$ still keeps it a toric chain.
\end{proof}
\section{Toric antichains}
\label{Antichain-section}
Since chains in posets have a good toric analogue, one might ask if the same is true for antichains.
Recall that an {\it antichain} of an
ordinary poset $P$ on $V$ is a subset $A=\{i_1,\dots,i_m\} \subseteq V$
characterized either
\begin{enumerate}
\item[$\bullet$] {\it combinatorially} by the condition
that no pair $\{i,j\}\subset A$ with $i \neq j$ is comparable,
that is, no such pair lies on a common chain of $P$, or
\item[$\bullet$] {\it geometrically} by the equivalent condition
that the $(|V|-m+1)$-dimensional linear subspace
$
\{x\in\mathbb{R}^V: x_{i_1}=x_{i_2}=\cdots=x_{i_m}\}
$
intersects the open polyhedral cone/chamber $c(P)$ in $\mathbb{R}^V$.
\end{enumerate}
In the toric situation, these two conditions
lead to different notions of toric antichains.
\begin{defn}
\label{toric-antichain-defn}
Given a toric poset $P=P(c)$ on the finite set $V$,
say that $A=\{i_1,\dots,i_m\} \subseteq V$
is a
\begin{enumerate}
\item[$\bullet$] {\it combinatorial toric antichain} of $P$ if no
$\{i,j\}\subset A$ with $i \neq j$ lie on a common toric chain of $P$.
\item[$\bullet$] {\it geometric toric antichain} if the
subspace $\{x\in\mathbb{R}^V/\mathbb{Z}^V: x_{i_1}=x_{i_2}=\cdots=x_{i_m}\}$
intersects the open toric chamber $c=c(P)$.
\end{enumerate}
\noindent
By analogy with the notion of the width of a poset, which is the size of its
largest antichain, define the \emph{geometric (resp. combinatorial) toric width} of a toric poset to be the size of the largest geometric (resp. combinatorial) toric antichain.
\end{defn}
Given a toric poset $P=P(c)$ and a graph $G=(V,E)$ with
$\bar{\alpha}_G(c)=[\omega]$, the definition and
Corollary~\ref{toric-chamber-as-union-cor} imply
that $A\subseteq V$ is a geometric toric antichain of $P$
if and only if $A$ is an antichain of $P(G,\omega')$
for some $\omega'\equiv\omega$.
The following proposition should also be clear.
\begin{prop}
In a toric poset $P$, every geometric toric antichain is a
combinatorial toric antichain. Thus its geometric toric width is
bounded above by its combinatorial toric width.
\end{prop}
The next example shows that the inequality between
these two notions of toric width can be strict.
\begin{ex}
Consider the toric poset $P=P(c)$ whose toric Hasse diagram is the
circular graph $G=C_6$ and for which $\bar{\alpha}_G(c)$ contains
the following representatives $\omega_1$, $\omega_2$ and $\omega_3$
of $\Acyc(G)$:
\[
\tiny{
\xymatrix{
5 & \\
4 \ar[u] & \\
3 \ar[u] & \\
2 \ar[u] & 6 \ar[uuul] \\
1 \ar[u] \ar[ur] & }\hspace{1in}
\xymatrix{
& \\
4 & \\
3 \ar[u] & \\
2 \ar[u] & 6 \\
1 \ar[u] \ar[ur] & 5 \ar[u]\ar[uuul] }\hspace{1in}
\xymatrix{
& \\
& \\
3 & 6 \\
2 \ar[u] & 5 \ar[u] \\
1 \ar[u] \ar[uur] & 4 \ar[u]\ar[uul] }}
\]
All three of these orientations satisfy $\nu_I(\omega_i)=2$ for the
directed cycle $I=[(1,2,3,4,5,6)]$ of $G$, where $\nu_I$ is
Coleman's $\nu$-function from Remark~\ref{Coleman-remark}.
Moreover, Proposition~\ref{prop-pretzel}
says that $\nu_I(\omega)=2$ must hold for any other $\omega$ in
$[\omega_i]$. It is easy to check that for any such $\omega$, the
directed graph $(G,\omega)$ must be isomorphic to either
$(G,\omega_1)$, $(G,\omega_2)$, or $(G,\omega_3)$.
Consequently, $P$ has no toric chains except for those of cardinality $0,1,2$,
that is, the empty set $\varnothing$, the $6$ singletons
and the $6$ edge pairs in $G$. From this one can easily
check that the combinatorial toric antichains of $P$ are the
empty set $\varnothing$, the $6$ singletons, the pairs $\{i,j\}$
which do not form edges of $G$, and the two triples $\{1,3,5\}, \{2,4,6\}$.
In particular, $P$ has combinatorial toric width $3$.
However, we claim neither of these triples $\{1,3,5\},\{2,4,6\}$ can
be a geometric toric antichain, so that the geometric toric width of
$P$ is $2$. To argue that $\{1,3,5\}$ is not a geometric toric antichain,
consider three paths of length $2$ in $G$ between the
elements of $\{1,3,5\}$, that is, the paths
$$
\begin{aligned}&1-2-3\\&3-4-5\\&5-6-1\end{aligned}
$$
The only way one could avoid having an $\omega$-directed path between two
elements of $\{1,3,5\}$ would be if $\omega$ orients both edges in each
of the three paths listed above
in opposite directions. But this would lead to $\nu_I(\omega)=0$
which is impossible for $\omega$ in $[\omega_i]$. The argument for
$\{2,4,6\}$ is similar.
\end{ex}
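The claims in this example lend themselves to a brute-force check. The sketch below (Python; the encoding is ours, purely illustrative) represents an orientation of $C_6$ by the tuple of its edge directions, generates the equivalence class of $\omega_1$ by converting sources into sinks (and back), and then verifies that every orientation in the class makes some pair of $\{1,3,5\}$ comparable, while the non-edge pair $\{1,4\}$ is incomparable in at least one representative, so that $\{1,4\}$ is a geometric toric antichain by the criterion stated before the proposition.

```python
EDGES = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 1)]  # the cycle C6

def arcs(w):
    # directed edges of orientation w; w[e] = 1 orients EDGES[e] as written
    return [(a, b) if bit else (b, a) for (a, b), bit in zip(EDGES, w)]

def flip(w, v):
    # reverse both edges incident to vertex v (source <-> sink conversion)
    return tuple(bit ^ (v in EDGES[e]) for e, bit in enumerate(w))

def flip_class(w0):
    # equivalence class of w0 under converting sources into sinks and back
    seen, frontier = {w0}, [w0]
    while frontier:
        w = frontier.pop()
        heads = {b for _, b in arcs(w)}
        tails = {a for a, _ in arcs(w)}
        for v in range(1, 7):
            if v not in heads or v not in tails:   # v is a source or a sink
                w2 = flip(w, v)
                if w2 not in seen:
                    seen.add(w2)
                    frontier.append(w2)
    return seen

def comparable(w, x, y):
    # is there a directed path between x and y under orientation w?
    adj = {}
    for a, b in arcs(w):
        adj.setdefault(a, set()).add(b)
    def reach(s, t):
        stack, done = [s], set()
        while stack:
            n = stack.pop()
            if n == t:
                return True
            if n not in done:
                done.add(n)
                stack.extend(adj.get(n, ()))
        return False
    return reach(x, y) or reach(y, x)

w1 = (1, 1, 1, 1, 0, 0)      # omega_1 as drawn: 1->2->3->4->5 and 1->6->5
cls = flip_class(w1)
# every orientation in the class makes some pair of {1,3,5} comparable:
triple_blocked = all(any(comparable(w, x, y) for x, y in [(1, 3), (3, 5), (1, 5)])
                     for w in cls)
# but {1,4} is incomparable in at least one representative:
pair_14_free = any(not comparable(w, 1, 4) for w in cls)
```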
Despite the difference in the two notions of toric width,
one might still hope that one of the notions gives
a toric analogue of one or both of the following two
classic results on chains and antichains in ordinary posets.
\begin{thm}\label{thm:Dilworth}
For any (ordinary) finite poset $P$, one has:
\begin{enumerate}
\item[(i)] Dilworth's Theorem \cite{Dilworth:50}:
$$
\max \{ |A|: A \text{ an antichain in }P \}=
\min \{ \ell: V=\cup_{i=1}^\ell C_i, \text{ with }C_i
\text{ chains in }P \}
$$
\item[(ii)] Mirsky's Theorem \cite{Mirsky:71}:
$$
\max \{ |C|: C \text{ a chain in }P \}=
\min \{ \ell: V=\cup_{i=1}^\ell A_i, \text{ with }A_i
\text{ antichains in }P \}.
$$
\end{enumerate}
\end{thm}
\noindent
One at least has the following
inequalities, coming from the easy observation
that a toric chain and toric antichain (whether combinatorial or geometric)
can intersect in at most one element.
\begin{prop}\label{prop-toric-dilworth}
For a toric poset $P$, both versions (geometric or
combinatorial) of a toric antichain satisfy the following inequalities:
\[
\begin{array}{lcl}
\max \{ |A|: A \text{ a toric antichain in }P \}&\leq&
\min \{ \ell: V=\cup_{i=1}^\ell C_i, \text{ with }C_i
\text{ toric chains in }P \} \\
\max \{ |C|: C \text{ a toric chain in }P \}&\leq&
\min \{ \ell: V=\cup_{i=1}^\ell A_i, \text{ with }A_i
\text{ toric antichains in }P \}.
\end{array}
\]
\end{prop}
\noindent
However, the following example shows that both inequalities in
Proposition~\ref{prop-toric-dilworth} can be strict: neither of
our two notions of toric antichain leads to a version
of Dilworth's Theorem, nor of Mirsky's theorem.
\begin{ex}\label{ex:mirsky-counterexample}
Consider the toric poset $P=P(c)$ whose toric Hasse diagram is the
circular graph $G=C_5$ and for which $\bar{\alpha}_G(c)$ contains
the following representatives $\omega_1$ and $\omega_2$ of
$\Acyc(G)$:
\[
\tiny{
\xymatrix{
4 & \\
3 \ar[u] & \\
2 \ar[u] & 5 \ar[uul] \\
1 \ar[u] \ar[ur] & }\hspace{1in}
\xymatrix{
& \\
3 & \\
2 \ar[u] & 5 \\
1 \ar[u] \ar[ur] & 4 \ar[u]\ar[uul] }}
\]
Both orientations above satisfy $\nu_I(\omega_i)=1$ for the directed
cycle $I=[(1,2,3,4,5)]$ of $G$. Proposition~\ref{prop-pretzel} says
that $\nu_I(\omega)=1$ must hold for any other $\omega$ in
$[\omega_i]$, and so for such an $\omega$, the directed graph
$(G,\omega)$ must be isomorphic to either $(G,\omega_1)$ or
$(G,\omega_2)$.
Consequently, $P$ has no toric chains except for those of cardinality $0,1,2$,
that is, the empty set $\varnothing$, the $5$ singletons
and the $5$ edge pairs in $G$. In particular, the maximum size of
a toric chain is $2$. From this one can also easily
check that the combinatorial toric antichains of $P$ are the
empty set $\varnothing$, the $5$ singletons, and the $5$ pairs $\{i,j\}$
which do not form edges of $G$. In fact,
all of these are also geometric toric antichains, so in this
example the two notions coincide, and for either one the
toric width is $2$.
However, as $|V|=5$, there is no partition of $V$ into
two toric chains (the analogue of Dilworth's Theorem fails),
nor into two toric antichains (the analogue of Mirsky's Theorem
fails).
\end{ex}
\bibliographystyle{amsplain}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
% https://arxiv.org/abs/0810.5488
\title{The Magnus expansion and some of its applications}
\begin{abstract}
Approximate resolution of linear systems of differential equations with varying coefficients is a recurrent problem, shared by a number of scientific and engineering areas ranging from Quantum Mechanics to Control Theory. When formulated in operator or matrix form, the Magnus expansion furnishes an elegant setting to build up approximate exponential representations of the solution of the system. It provides a power series expansion for the corresponding exponent and is sometimes referred to as Time-Dependent Exponential Perturbation Theory. Every Magnus approximant corresponds in Perturbation Theory to a partial re-summation of infinite terms, with the important additional property of preserving at any order certain symmetries of the exact solution. The goal of this review is threefold. First, to collect a number of developments scattered through half a century of scientific literature on the Magnus expansion. They concern the methods for the generation of terms in the expansion, estimates of the radius of convergence of the series, generalizations and related non-perturbative expansions. Second, to provide a bridge with its implementation as a generator of special-purpose numerical integration methods, a field of intense activity during the last decade. Third, to illustrate with examples the kind of results one can expect from the Magnus expansion in comparison with those from both perturbative schemes and standard numerical integrators. We buttress this issue with a revision of the wide range of physical applications found by the Magnus expansion in the literature.
\end{abstract}
\input{section1}
\input{section2}
\input{section3}
\input{section4}
\input{section5}
\input{section6}
\input{section7}
\subsection*{Acknowledgments}
One of us (J.A.O.) was introduced to the field of the Magnus expansion
by Prof. Silvio Klarsfeld (Orsay), whose continuous
interest in our work we gratefully acknowledge. Encouraging comments by Prof.
A.J. Dragt in early stages of our involvement with ME are deeply
acknowledged. During the last decade we have benefitted from
enlightening discussions with and useful comments by Prof. Arieh
Iserles. We would like to thank him for that as much as for the
kind hospitality extended to us during our visits to his group in
DAMTP in Cambridge. We also thank Prof. Iserles for providing us
with his Latex macros for the graphs in section \ref{graph} and Dr. Ander Murua
for reading various parts of the manuscript and providing useful feedback.
Friendly collaboration with Dr. Per Christian Moan is also
acknowledged.
The work of SB and FC has been partially supported by Ministerio
de Ciencia e Innovaci\'on (Spain) under project MTM2007-61572
(co-financed by the ERDF of the European Union), that of JAO and
JR by contracts MCyT/FEDER, Spain (Grant No. FIS2004-0912) and
Generalitat Valenciana, Spain (Grant No. ACOMP07/03).
SB also acknowledges the support of the UPV through the project 20070307.
\bibliographystyle{plain}
\section{Introduction}
\subsection{Motivation, overview and history}
The outstanding mathematician Wilhelm Magnus (1907-1990)
made important contributions to a wide variety of fields in
mathematics and mathematical physics \cite{abikoff94tml}. Among them
one can mention combinatorial group theory \cite{magnus76cgt} and
his collaboration in the Bateman project on higher transcendental
functions and integral transforms \cite{erdelyi53htf}. In this
report we review another of his long-lasting constructions: the
so-called Magnus expansion (hereafter referred to as ME). ME was
introduced as a tool to solve non-autonomous
linear differential equations for linear operators. It is
interesting to observe that, although his seminal paper of 1954
\cite{magnus54ote} is essentially mathematical in
nature, Magnus recognizes that his work was stimulated by results of
K.O. Friedrichs on the theory of linear operators in Quantum
Mechanics \cite{friedrichs53mao}. Furthermore, as the first
antecedent of his proposal he quotes a paper by R.P. Feynman in the
Physical Review \cite{feynman51aoc}. We stress these facts to show
that from its very beginning the ME was strongly related to
Physics, has remained so ever since, and there is no reason to doubt
that it will continue to be. This is the first motivation for offering
a review such as the present one.
Magnus' proposal has the very attractive property of leading to
approximate solutions which exhibit at any order of approximation
some qualitative or physical characteristics which first principles
guarantee for the exact (but unknown) solution of the problem.
Important physical examples are the symplectic or unitary character
of the evolution operator when dealing with classical or quantum
mechanical problems, respectively. This is at variance with most
standard perturbation theories and is apparent when formulated in a
correct algebraic setting: Lie algebras and Lie groups. But this
great advantage has sometimes been obscured in the past by the
difficulties both in constructing explicitly higher-order terms and
in ensuring existence and convergence of the expansion.
In our opinion, recent years have witnessed great improvement in this
situation. Concerning general questions of existence and convergence new
results have appeared. From the point of view of applications, some new
approaches in old fields have been published, while completely new and
promising avenues have been opened by the use of the Magnus expansion in Numerical
Analysis. It seems reasonable to expect fruitful cross-fertilization between
these new developments and the more conventional perturbative approach to the ME,
and from it further applications and new calculations.
This new scenario makes it desirable for the Physics community in
different areas (and scientists and engineers in general) to have
access, in as unified a way as possible, to all the information
concerning the ME, which so far has been treated in very different
settings and has appeared scattered through very different
bibliographic sources.
As implied by the preceding paragraphs this report is mainly
addressed to a Physics audience, or close neighbors, and
consequently we shall keep the treatment of its mathematical aspects
within reasonable limits and refer the reader to more detailed
literature where necessary. By the same token the applications
presented will be limited to examples from Physics or from the
closely related field of Physical Chemistry. We shall also emphasize
its instrumental character for numerically solving physical
problems.
In the present section as an introduction we present a brief
overview and sketchy history of more than 50 years of ME. To start
with, let us consider the initial value problem associated with
the linear ordinary differential equation
\begin{equation} \label{eqdif}
Y^{\prime}(t)=A(t)Y(t),\qquad\qquad Y(t_0)=Y_{0},
\end{equation}
where as usual the prime denotes derivative with respect to the real
independent variable which we take as time $t$, although much of what will be said
applies also to a complex independent variable. In order of increasing
complexity we may consider the equation above in different contexts:
\begin{enumerate}
\item[(a)] $Y:\mathbb{R}\longrightarrow\mathbb{C}$, $A:\mathbb{R}\longrightarrow
\mathbb{C}$. This means that the unknown $Y$ and the given $A$ are
complex scalar valued functions of one real variable. In this case
there is no problem at all: the solution reduces to a quadrature
and an ordinary exponential
evaluation:
\begin{equation} \label{eqescalar}
Y(t)= \exp \left( \int_{t_0}^{t}A(s)ds \right) \, Y_{0}.
\end{equation}
\item[(b)] $Y:\mathbb{R}\longrightarrow\mathbb{C}^{n}$, $A:\mathbb{R}%
\longrightarrow\mathbb{M}_{n}(\mathbb{C)}$, where
$\mathbb{M}_{n}(\mathbb{C)}$ is the set of $n\times n$ complex
matrices. Now $Y$ is a complex vector valued function and $A$ a
complex $n\times n$ matrix valued function. At variance with the
previous case, only in very special cases is the solution easy to
state: when for any pair of values of $t$, $t_{1}$ and $t_{2},$
one has $A(t_{1})A(t_{2})=A(t_{2})A(t_{1})$, which is certainly
the case if $A$ is constant. Then the solution reduces to a
quadrature (trivial or not) and a matrix exponential. With the
obvious changes in the meaning of the symbols, equation
(\ref{eqescalar}) still applies. In the general case, however, there
is no compact expression for the solution and (\ref{eqescalar})
is no longer the solution.
\item[(c)] $Y:\mathbb{R}\longrightarrow\mathbb{M}_{n}(\mathbb{C)}$,
$A:\mathbb{R}\longrightarrow\mathbb{M}_{n}(\mathbb{C)}$. Now both
$Y$ and $A$ are complex matrix valued functions. A particular
case, but still general enough to encompass the most interesting
physical and mathematical applications, corresponds to
$Y(t)\in\mathcal{G}$, $A(t)\in\mathfrak{g}$, where $\mathcal{G}$
and $\mathfrak{g}$ are respectively a matrix Lie group and its
corresponding Lie algebra. Why this is of interest is easy to
grasp: the key reason for the failure of (\ref{eqescalar}) is the
non-commutativity of matrices in general. So one can expect that
the (in general non-vanishing) commutators play an important role.
But when commutators enter the play one immediately thinks of Lie
structures. Furthermore, plainly speaking, the way from a Lie
algebra to its Lie group is covered by the exponential map,
a fact that will be no surprise in this context. The same comments
of the previous case are valid here. In this report we shall
mostly deal with this matrix case.
\item[(d)] The most general situation one can think of corresponds to $Y(t)$
and $A(t)$ being operators in some space, e.g., Hilbert space in
Quantum Mechanics. Perhaps the most paradigmatic example of
(\ref{eqdif}) in this setting is the time-dependent Schr\"odinger
equation.
\end{enumerate}
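To see concretely how commutativity decides the fate of (\ref{eqescalar}) in case (b), the following sketch (Python with NumPy; the two matrix families are our own toy choices, not from the text) integrates $U'=A(t)U$ on a fine Runge--Kutta grid and compares the result with $\exp\left(\int_0^t A(s)\,\mathrm{d}s\right)$.

```python
import numpy as np

def expm(M, terms=40):
    # Taylor-series matrix exponential; adequate for the small norms used here
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def solve(A, T, steps=4000):
    # reference solution of U' = A(t) U, U(0) = I, by classical RK4
    U, h = np.eye(2), T / steps
    for i in range(steps):
        t = i * h
        k1 = A(t) @ U
        k2 = A(t + h / 2) @ (U + h / 2 * k1)
        k3 = A(t + h / 2) @ (U + h / 2 * k2)
        k4 = A(t + h) @ (U + h * k3)
        U = U + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return U

def int_A(A, T, n=4000):
    # composite-trapezoid quadrature of the matrix-valued integral of A on [0,T]
    ts = np.linspace(0.0, T, n + 1)
    vals = np.array([A(t) for t in ts])
    h = T / n
    return h * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
A_comm = lambda t: t * J                              # A(t1) A(t2) = A(t2) A(t1)
A_nonc = lambda t: np.array([[0.0, 1.0], [-t, 0.0]])  # does not commute

T = 1.0
err_comm = np.abs(expm(int_A(A_comm, T)) - solve(A_comm, T)).max()
err_nonc = np.abs(expm(int_A(A_nonc, T)) - solve(A_nonc, T)).max()
```

For the commuting family the two expressions agree to integrator accuracy, while for the non-commuting family the discrepancy is of the size of the neglected commutator corrections.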
Observe that case (b) above can be reduced to case (c). This is
easily seen if one introduces what in the mathematical literature is
called the matrizant, a concept dating back at least to the
beginning of the 20th century in the work of Baker
\cite{baker02oti}. It is the $n\times n$ matrix $U(t,t_0)$ defined
through
\begin{equation}
Y(t)=U(t,t_0)Y_{0}. \label{matrizant}%
\end{equation}
Without loss of generality we will take $t_0=0$ unless otherwise
explicitly stated, for the sake of simplicity. When no confusion
may arise, we write only one argument in $U$ and denote $U(t,0)
\equiv U(t)$, which then
satisfies the differential equation and initial condition%
\begin{equation}
U^{\prime}(t)=A(t)U(t),\qquad\qquad U(0)=I, \label{laecuacion}%
\end{equation}
where $I$ stands for the $n$-dimensional identity matrix. The
reader will have recognized $U(t)$ as what in physical terms is
known as time evolution operator.
We are now ready to state Magnus' proposal: a solution to
(\ref{laecuacion})
which is a true matrix exponential%
\begin{equation}
U(t)=\exp \Omega(t),\qquad\qquad \Omega(0)=O,\label{magsol}%
\end{equation}
and a series expansion for the matrix in the exponent%
\begin{equation}
\Omega(t)=\sum_{k=1}^{\infty}\Omega_{k}(t),\label{ME}%
\end{equation}
which is what we call the \emph{Magnus expansion}. The mathematical elaborations explained in the
next section determine $\Omega_{k}(t)$. Here we just write down the
first three terms of that series:%
\begin{align}
\Omega_{1}(t) & =\int_{0}^{t}A(t_{1})~\text{d}t_{1},\nonumber\\
\Omega_{2}(t) & =\frac{1}{2}\int_{0}^{t}\text{d}t_{1}\int_{0}^{t_{1}}%
\text{d}t_{2}\ \left[ A(t_{1}),A(t_{2})\right] \label{ME123}\\
\Omega_{3}(t) & =\frac{1}{6}\int_{0}^{t}\text{d}t_{1}\int_{0}^{t_{1}}%
\text{d}t_{2}\int_{0}^{t_{2}}\text{d}t_{3}\ (\left[ A(t_{1}),\left[
A(t_{2}),A(t_{3})\right] \right] +\left[ A(t_{3}),\left[ A(t_{2}%
),A(t_{1})\right] \right] )\nonumber
\end{align}
where $\left[ A,B\right] \equiv AB-BA$ is the matrix commutator
of $A$ and $B$.
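To make the formulas concrete with a toy example of our own: for $A(t)=\left(\begin{smallmatrix}0&1\\-t&0\end{smallmatrix}\right)$ one computes $\left[A(t_1),A(t_2)\right]=(t_1-t_2)\,\mathrm{diag}(1,-1)$, so the second formula in (\ref{ME123}) gives $\Omega_2(t)=\frac{t^3}{12}\,\mathrm{diag}(1,-1)$. The sketch below (Python with NumPy) confirms this against a direct numerical evaluation of the double integral.

```python
import numpy as np

def A(t):
    # an illustrative non-commuting family (our choice, not from the text)
    return np.array([[0.0, 1.0], [-t, 0.0]])

def comm(X, Y):
    return X @ Y - Y @ X

def omega2(T, n=200):
    # midpoint-rule evaluation of
    #   Omega_2(T) = (1/2) int_0^T dt1 int_0^{t1} dt2 [A(t1), A(t2)]
    h = T / n
    total = np.zeros((2, 2))
    for i in range(n):
        t1 = (i + 0.5) * h
        h2 = t1 / n
        inner = sum(comm(A(t1), A((j + 0.5) * h2)) for j in range(n)) * h2
        total += inner * h
    return 0.5 * total

# closed form for this A: Omega_2(T) = (T^3 / 12) diag(1, -1); check at T = 1
err = np.abs(omega2(1.0) - np.diag([1.0, -1.0]) / 12.0).max()
```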
The interpretation of these equations seems clear: $\Omega_{1}(t)$
coincides exactly with the exponent in (\ref{eqescalar}). But this
equation cannot give the whole solution as has already been said.
So, if one insists on having an exponential solution, the exponent
has to be corrected. The rest of the ME in (\ref{ME}) gives that
correction necessary to keep the exponential form of the solution.
The terms appearing in (\ref{ME123}) already suggest the most
appealing characteristic of ME. Remember that a matrix Lie algebra
is a linear space in which one has defined the commutator as the
second internal composition law. If, as we suppose, $A(t)$ belongs
to a Lie algebra $\mathfrak{g}$ for all $t$ so does any sum of
multiple integrals of nested commutators. Then, if all terms in ME
have a structure similar to that of the ones shown before, the whole
$\Omega(t)$ and any approximation to it obtained by truncation of ME
will also belong to the same Lie algebra. In the next section it
will be shown that this turns out to be the case. And \emph{a fortiori} its
exponential will be in the corresponding Lie group.
Why is this so important for physical applications? Just because
many of the properties of evolution operators derived from first
principles are linked to the fact that they belong to a certain
Lie group: e.g. unitary group in Quantum Mechanics, symplectic
group in Classical Mechanics. In that way use of (truncated) ME
leads to approximations which share with the exact solution of
equation (\ref{laecuacion}) important qualitative (very often,
geometric) properties. For instance, in Quantum Mechanics every approximant
preserves probability conservation.
From the present point of view we could say that the last
paragraphs summarize, in a nutshell, the main contents of the
famous 1954 paper of Magnus. With no exaggeration, its
appearance can be considered a turning point in the treatment of
the initial value problem defined by (\ref{laecuacion}).
Important as it certainly was, however, Magnus's paper left some
problems, at least partially, open:
\begin{itemize}
\item First, for what values of $t$ and for what operators $A$ does equation
(\ref{laecuacion}) admit a true exponential solution? This we call
the existence problem.
\item Second, for what values of $t$ and for
what linear operators $A$ does the series in equation (\ref{ME})
converge? This we describe as the convergence problem. We want to
emphasize that, although related, these are two different
problems. To see why, think of the scalar equation $y^{\prime}=y^{2}$
with $y(0)=1$. Its solution $y(t)=(1-t)^{-1}$ exists for all $t\neq1$,
but its power series representation $y(t)=\sum_{n=0}^{\infty}t^{n}$
converges only for $\left| t\right| <1$.
\item Third, how to construct
higher order terms $\Omega_{k}(t)$, $k\geq3$, in the series? Moreover, is
there a closed-form expression for $\Omega_{k}(t)$?
\item Fourth, how to calculate in an efficient way
$\exp\Omega^{[N]}$, where $\Omega^{\lbrack N]}\equiv\sum_{k=1}^{N}%
\Omega_{k}(t)$ is a truncation of the ME?
\end{itemize}
All these questions, and many others, will be dealt with in the rest
of the paper. But before entering that analysis we find it interesting
to present a brief view, from a historical perspective, of
this half century of developments on the Magnus series. Needless to
say, we by no means attempt a detailed and exhaustive
chronological account of the many approaches followed by authors
from very different disciplines. To minimize duplication with later
sections we simply mention some representative samples, so that the
reader can follow the evolution of the field.
Including some precedents, and with a dose of arbitrariness as
undeniable as it is unavoidable, we may distinguish four periods in
the history of our topic:
\begin{enumerate}
\item Before 1953. The problem which the ME solves has a centennial history,
dating back at least to the work of Peano at the end of the 19th
century and of Baker at the beginning of the 20th (for references
to the original papers see e.g. \cite{ince56ode}). They combined
the theory of differential equations with an algebraic
formulation. Intimately related to these treatments from the very
beginning is the study of the so-called Baker--Campbell--Hausdorff
(BCH, for short) formula
\cite{baker05aac,campbell98oal,hausdorff06dse}, which gives $C$ in
terms of $A$, $B$ and their multiply nested commutators when
expressing $\exp(A)\exp(B)$ as $\exp(C)$. This topic has by itself
a long history reaching up to the present day and will also be discussed in
section 2. As one of its early hallmarks we quote
\cite{dynkin47eot}. In the Physics literature the interest in the
problem posed by equation (\ref{laecuacion}) revived greatly with
the advent of Quantum Electrodynamics (QED). The works of F.J.
Dyson \cite{dyson49trt} and in particular R.P. Feynman
\cite{feynman51aoc} in the late forties and early fifties are
worth mentioning here.
\item 1953--1970. We have quoted the 1954 paper \cite{magnus54ote} by
Magnus as the birth certificate of the ME. This is not strictly true:
there is a Research Report \cite{magnus53asi} dated June 1953 which
differs from the published paper in the title and in a few minor
details, and which should in fact be taken as a preliminary draft of it.
The result on the ME summarized above appears in both publications in
almost identical words. The work of Pechukas and Light
\cite{pechukas66ote} gave for the first time a more specific
analysis of the problem of convergence than the rather vague
considerations in Magnus's paper. Wei and Norman
\cite{wei63nog,wei63las} did the same for the existence problem.
Robinson, to the best of our knowledge, seems to have been the
first to apply the ME to a physical problem \cite{robinson63mce}.
Special mention in this period is deserved by a paper by Wilcox
\cite{wilcox67eoa}, in which useful mathematical tools are given
and the ME is presented together with other algebraic treatments of
equation (\ref{laecuacion}), in particular Fer's infinite product
expansion \cite{fer58rdl}. Also worthy of mention is the first
application of the ME as a numerical tool for integrating the
time-independent Schr\"odinger equation for potential scattering,
by Chang and Light \cite{chang69eso}.
\item 1971--1990. During these years the ME consolidated on different fronts. It
was successfully applied to a wide spectrum of fields in Physics
and Chemistry: from atomic \cite{baye73ats} and molecular
\cite{schek81aot} Physics to Nuclear Magnetic Resonance (NMR)
\cite{ernst86pon,waugh82tob},
Quantum Electrodynamics \cite{dahmen82ido} and elementary particle
Physics \cite{dolivo90mea}. A number of case studies also helped
to clarify its mathematical structure, see for example
\cite{klarsfeld89apo}. The construction of higher order terms was
approached from different angles, since the intrinsic and growing
complexity of the ME allows for different schemes. One that has proved
very useful in tackling other questions, like the
convergence problem, is the recursive scheme of Klarsfeld and Oteo
\cite{klarsfeld89rgo}.
\item Since 1991. The last decade of the 20th century witnessed a renewed
interest in the ME which still continues nowadays, along different
lines. Concerning the basic problems of existence and convergence
of the ME there has been definite progress
\cite{blanes98maf,casas07scf,moan99ote,moan01cot}. The ME has also been adapted
to specific types of equations: Floquet theory when $A(t)$ is a
periodic function \cite{casas01fte}, stochastic differential
equations \cite{burrage99hso}, and equations of the form
$Z^{\prime}=AZ-ZB$ \cite{iserles01ame}. Special mention should be
made of the new field opened in this most recent period that uses
the Magnus scheme to build novel algorithms \cite{iserles99ots} for
the numerical integration of differential equations, within the
wider field of geometric integration \cite{budd99gin}. After
optimization \cite{blanes00iho,blanes02hoo}, these integrators
have proved to be highly competitive.
\end{enumerate}
As proof of the persistent impact that the 1954 paper by Magnus has had
in the scientific literature, we present in Figures \ref{cites1} and \ref{cites2}
the number of citations per year and the cumulative number of citations,
respectively, as of December 2007, with data taken from the ISI Web of
Science. The original paper is cited about 750 times, of which,
roughly, 50, 320 and 380 correspond respectively to each of the last
three periods we have considered. The enduring interest in that
seminal paper is clear from the figures.
\begin{figure}[th]
\begin{center}
\epsfxsize=4.6in {\epsfbox{Figures/Magnus_citations.eps}}
\end{center}
\caption{Persistency of Magnus original paper: number of citations per year.}
\label{cites1}
\end{figure}
\begin{figure}[th]
\begin{center}
\epsfxsize=4.6in {\epsfbox{Figures/Magnus_citations_sum.eps}}
\end{center}
\caption{Persistency of Magnus original paper: cumulative number of citations.}
\label{cites2}
\end{figure}
The presentation of this report is organized as follows. In the remainder of
this section we include some mathematical tools and notations that will
be used time and again in our treatment. In section 2 we introduce the
Magnus expansion formally, study its main features and analyze thoroughly
the convergence issue. Next, in section 3, several generalizations of the
Magnus expansion are reviewed, with special emphasis on its application
to general nonlinear differential equations. In order to illustrate the main
properties of the ME, in section 4 we consider simple examples for which the
required computations are relatively straightforward. Section 5 is devoted
to an aspect that has been studied most recently in this setting: the design of new
algorithms for the numerical integration of differential equations based on the
Magnus expansion. There, after a brief characterization of numerical integrators,
we present several methods that are particularly efficient, as shown by the
examples considered. Given the relevance of the new numerical schemes,
we briefly review in section 6 some of their applications in different
contexts, ranging from boundary-value problems to stochastic differential equations. In
section 7, on the other hand, applications of the ME to significant physical problems
are considered. Finally, the paper ends with some concluding remarks.
\subsection{Mathematical preliminaries and notations} \label{notations}
Here we collect for the reader's convenience some mathematical
expressions, terminology and notations which appear most frequently
in the text. Needless to say, we have made no attempt at being
completely rigorous; we just try to facilitate the casual reading of
isolated sections.
As already mentioned, the natural mathematical habitat for
most of the objects we will deal with in this report is a Lie
group or its associated Lie algebra. Although most of the results
discussed in these pages are valid in a more general setting we will
essentially consider only matrix Lie groups and algebras.
By a Lie group $\mathcal{G}$ we understand a set which combines an
algebraic structure with a topological one. At the algebraic level,
any two elements of $\mathcal{G}$ can be combined by an internal
composition law to produce a third element also in $\mathcal{G}$.
The law is required to be associative and to have an identity element,
and every element must have an inverse. The ordinary product and
inverse of invertible matrices play that role in the cases we are
most interested in. The topological requirement forces the
composition law and the taking of inverses to be
sufficiently smooth maps.
A Lie algebra $\mathfrak{g}$ is a vector space whose elements can be
combined by a second law, the Lie bracket, which we represent by
$[A,B]=C$, with $A,B,C$ elements of $\mathfrak{g}$, in such a way
that the law is bilinear, skew-symmetric and satisfies the well
known Jacobi identity,
\begin{equation}
[A,[B,C]]+[B,[C,A]]+[C,[A,B]]=0. \label{jacobi}
\end{equation}
When dealing with matrices we take as Lie bracket the familiar
commutator:
\begin{equation}\label{comutador}
[A,B] = A B - B A, \qquad A\in \mathfrak{g}, \quad B\in
\mathfrak{g},
\end{equation}
where $AB$ stands for the usual matrix product.
If we consider a finite-di\-men\-sio\-nal Lie algebra of dimension $d$
and denote by $A_i$, $i=1, \ldots, d$, the vectors of one of its
bases, then the fundamental brackets one has to know are
\begin{equation}
[A_i,A_j]=c_{ij}^{k}A_k, \label{constestruct}
\end{equation}
where summation over repeated indices is understood. The coefficients
$c_{ij}^{k}$ are the so-called structure constants of the algebra.
Associated with any $A\in \mathfrak{g}$ we can define a linear
operator $\mathrm{ad}_{A}: \mathfrak{g}\rightarrow \mathfrak{g}$ which
acts according to
\begin{equation}\label{definad}
\mathrm{ad}_{A}B=[A,B], \qquad \mathrm{ad}_{A}^j B = [A, \mathrm{ad}_{A}^{j-1} B], \qquad
\mathrm{ad}_{A}^0 B=B, \qquad j \in \mathbb{N}, B\in \mathfrak{g}.
\end{equation}
Also of interest is the exponential of this $\mathrm{ad}_A$ operator,
\begin{equation}\label{definAd}
\mathrm{Ad}_A=\exp(\mathrm{ad}_A),
\end{equation}
whose action on $\mathfrak{g}$ is given by
\begin{equation}\label{actionAd}
\mathrm{Ad}_A(B) = \exp(A) \, B \, \exp(-A) = \sum_{k=0}^{\infty} \frac{1}{k!} \mathrm{ad}_A^k B,
\qquad B\in \mathfrak{g}.
\end{equation}
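The identity (\ref{actionAd}) is easy to check numerically; the sketch below (all matrix choices ours) truncates both the exponential and the $\mathrm{ad}$ series, which is harmless because $A$ is kept small.

```python
import math
import numpy as np

# Arbitrary small test matrices (our own illustration).
rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def expm(M, terms=30):
    # plain Taylor series for the matrix exponential (fine for small ||M||)
    out, P = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

# left-hand side: exp(A) B exp(-A)
lhs = expm(A) @ B @ expm(-A)

# right-hand side: sum_k ad_A^k(B) / k!
rhs, C = np.zeros_like(B), B.copy()
for k in range(30):
    rhs = rhs + C / math.factorial(k)
    C = A @ C - C @ A          # one more application of ad_A
err_Ad = np.max(np.abs(lhs - rhs))
```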
The types of matrices we will handle most frequently are orthogonal,
unitary and symplectic. Here are their characterizations and the
notation we shall use for their groups and algebras.
The special orthogonal group, $\ensuremath{\mathrm{SO}}(n)$, is the set of all $n\times
n$ real matrices with unit determinant satisfying $A^TA=AA^T=I$, where $A^T$ is the transpose
of $A$ and $I$ denotes the identity matrix. The corresponding
algebra $\ensuremath{\mathfrak{so}}(n)$ consists of the skew-symmetric
matrices.
An $n \times n$ complex matrix $A$ is called unitary if
$A^{\dag}A=AA^{\dag}=I$, where $A^{\dag}$ is
the conjugate transpose or Hermitian adjoint of $A$.
The special unitary group, $\ensuremath{\mathrm{SU}}(n)$, is the set of all $n\times n$
unitary matrices with unit determinant. The
corresponding algebra $\mathfrak{su}(n)$ consists of the
skew-Hermitian traceless matrices. The case $n=2$ will be of special
relevance in some of the quantum mechanical problems we discuss.
In this case a convenient basis for $\mathfrak{su}(2)$ is made up
of the Pauli matrices
\begin{equation}\label{Paulis}
\sigma_1=\left(
\begin{array}{ccr}
0 & & 1 \\
1 & & 0
\end{array}
\right), \quad
\sigma_2=\left(
\begin{array}{ccr}
0 & & -i \\
i & & 0
\end{array}
\right), \quad
\sigma_3=\left(
\begin{array}{ccr}
1 & & 0 \\
0 & & -1
\end{array}
\right).
\end{equation}
They satisfy the identity
\begin{equation}\label{producpaulis}
\sigma_{j}\sigma_{k}=\delta_{jk} I+i\epsilon_{jkl}\sigma_l,
\end{equation}
and correspondingly
\begin{equation} \label{conmusigmas}
[\sigma_{j},\sigma_{k}]=2i\epsilon_{jkl}\sigma_l,
\end{equation}
which directly give the structure constants of $\mathfrak{su}(2)$. The
following identities will prove useful for $\boldsymbol{a}$ and $\boldsymbol{b}$ in
$\mathbb{R}^{3}$:
\begin{equation} \label{pauliescalar}
(\boldsymbol{a}\cdot\boldsymbol{\sigma})(\boldsymbol{b}\cdot\boldsymbol{\sigma}) =
\boldsymbol{a}\cdot\boldsymbol{b} \ I+
i(\boldsymbol{a}\times\boldsymbol{b})\cdot\boldsymbol{\sigma}, \qquad
[\boldsymbol{a}\cdot\boldsymbol{\sigma},\boldsymbol{b}\cdot\boldsymbol{\sigma}] =
2 i (\boldsymbol{a}\times\boldsymbol{b}) \cdot \boldsymbol{\sigma},
\end{equation}
where we have denoted $\boldsymbol{\sigma} = (\sigma_1, \sigma_2, \sigma_3)$.
Any $U \in \ensuremath{\mathrm{SU}}(2)$ can be written as
\begin{equation}\label{exppaulis}
U =\exp(i\boldsymbol{a}\cdot\boldsymbol{\sigma})=
\cos ( a ) \,I + i \frac{\sin (a) }{a} \boldsymbol{a}\cdot\boldsymbol{\sigma},
\end{equation}
where $a = \|\boldsymbol{a}\| = \sqrt{a_1^2 + a_2^2 + a_3^2}$.
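Both (\ref{pauliescalar}) and (\ref{exppaulis}) can be verified numerically for a randomly chosen pair of real vectors; the script below (our own illustration) computes $\exp(i\boldsymbol{a}\cdot\boldsymbol{\sigma})$ by diagonalizing the Hermitian matrix $\boldsymbol{a}\cdot\boldsymbol{\sigma}$.

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
dot = lambda v: v[0] * s1 + v[1] * s2 + v[2] * s3   # v . sigma

rng = np.random.default_rng(1)
a, b = rng.standard_normal(3), rng.standard_normal(3)

# (a.sigma)(b.sigma) = (a.b) I + i (a x b).sigma
err_prod = np.max(np.abs(
    dot(a) @ dot(b)
    - (np.dot(a, b) * np.eye(2) + 1j * dot(np.cross(a, b)))))

# exp(i a.sigma) = cos(a) I + i sin(a)/a (a.sigma), with a = |a|;
# the exponential is computed by diagonalizing the Hermitian a.sigma.
w, V = np.linalg.eigh(dot(a))
expo = V @ np.diag(np.exp(1j * w)) @ V.conj().T
na = np.linalg.norm(a)
closed = np.cos(na) * np.eye(2) + 1j * np.sin(na) / na * dot(a)
err_exp = np.max(np.abs(expo - closed))
```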
A more elaborate expression, of which we shall make use in later
sections, is (for $a=1$)
\begin{equation}\label{cambiopict}
\exp(i\boldsymbol{a}\cdot\boldsymbol{\sigma} t) \, (\boldsymbol{b}\cdot\boldsymbol{\sigma}) \,
\exp(-i\boldsymbol{a}\cdot\boldsymbol{\sigma} t)=
\boldsymbol{b}\cdot\boldsymbol{\sigma}+\sin 2t \, (\boldsymbol{b}\times\boldsymbol{a})\cdot\boldsymbol{\sigma}
-2\sin^2 t \, (\boldsymbol{a}\times(\boldsymbol{b}\times\boldsymbol{a}))\cdot\boldsymbol{\sigma}.
\end{equation}
In Hamiltonian problems the symplectic group $\ensuremath{\mathrm{Sp}}(n)$ plays a
fundamental role. It is the group of $2n\times 2n$ real matrices
satisfying
\begin{equation}\label{simplectic}
A^{T}JA=J, \qquad \mbox{with} \qquad J=\left(
\begin{array}{rcc}
O_n & & I_n \\
-I_n & & O_n \\
\end{array}
\right)
\end{equation}
where $I_n$ and $O_n$ denote the $n\times n$ identity and zero
matrices, respectively.
Its corresponding Lie algebra $\mathfrak{sp}(n)$ consists of the
matrices satisfying $B^{T}J + JB=O_{2n}$. In fact, these can be
considered particular instances of the so-called $J$-orthogonal
group, defined as \cite{postnikov94lga}
\begin{equation} \label{j-ortho}
\mathrm{O}_J(n) = \{ A \in \mathrm{GL}(n) \, : \, A^T J A = J \},
\end{equation}
where $\mathrm{GL}(n)$ is the group of all $n \times n$
nonsingular real matrices and $J$ is some constant matrix in
$\mathrm{GL}(n)$. Thus, one recovers the orthogonal group when
$J=I$, the symplectic group $\mathrm{Sp}(n)$ when $J$ is the basic
symplectic matrix given in (\ref{simplectic}), and the Lorentz
group $\mathrm{SO}(3,1)$ when $J = \mathrm{diag}(1,-1,-1,-1)$.
The corresponding Lie algebra is the set
\begin{equation} \label{j-algebra}
\mathrm{o}_J(n) = \{ B \in \mathfrak{gl}_n (\mathbb{R}) \, : \,
B^T J + J B = O \},
\end{equation}
where $\mathfrak{gl}_n (\mathbb{R})$ is the Lie algebra of all $n
\times n$ real matrices. If $B \in \mathrm{o}_J(n)$, then its
Cayley transform
\begin{equation} \label{cayley1}
A = (I - \alpha B)^{-1} (I + \alpha B)
\end{equation}
is $J$-orthogonal.
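As an illustration, one can verify numerically that the Cayley transform of a matrix in $\mathfrak{sp}(n)$ lands in the symplectic group. The construction $B=JS$ with $S$ symmetric, used below to produce such a $B$, is a standard trick and not from the text.

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)
S = rng.standard_normal((2 * n, 2 * n))
S = 0.5 * (S + S.T)                      # make S symmetric
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# B = J S satisfies B^T J + J B = 0, i.e. B is in sp(n)
B = J @ S
err_algebra = np.max(np.abs(B.T @ J + J @ B))

# Cayley transform A = (I - alpha B)^{-1} (I + alpha B) should be symplectic
alpha = 0.1
I = np.eye(2 * n)
A = np.linalg.solve(I - alpha * B, I + alpha * B)
err_group = np.max(np.abs(A.T @ J @ A - J))
```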
Another important matrix Lie group not included in the previous
characterization is the special linear group $\mathrm{SL}(n)$,
formed by all $n \times n$ real matrices with unit determinant. The
corresponding Lie algebra $\mathfrak{sl}(n)$ comprises all
traceless matrices. For real $2\times 2$ matrices in $\mathfrak{sl}(2)$ one has
\begin{equation}\label{expMat2}
\exp \left( \begin{array}{cc}
a & \ b \\ c & -a
\end{array} \right) =
\left( \begin{array}{cc}
\cosh(\eta) + \frac{a}{\eta} \sinh(\eta) & \ \frac{b}{\eta}
\sinh(\eta) \\
\frac{c}{\eta} \sinh(\eta) & \cosh(\eta) - \frac{a}{\eta} \sinh(\eta)
\end{array} \right)
\end{equation}
with $\eta=\sqrt{a^2+bc}$.
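The closed form (\ref{expMat2}) is straightforward to confirm against a truncated Taylor series of the exponential, here for one arbitrary choice of entries (ours) with $\eta^2 = a^2+bc > 0$:

```python
import numpy as np

# One arbitrary traceless 2x2 matrix with eta^2 = a^2 + b*c > 0
a, b, c = 0.3, 0.7, 0.2
M = np.array([[a, b], [c, -a]])
eta = np.sqrt(a * a + b * c)

# closed-form exponential; note M^2 = eta^2 * I for traceless M
closed = np.array([
    [np.cosh(eta) + a / eta * np.sinh(eta), b / eta * np.sinh(eta)],
    [c / eta * np.sinh(eta), np.cosh(eta) - a / eta * np.sinh(eta)],
])

# truncated Taylor series of exp(M) for comparison
series, P = np.eye(2), np.eye(2)
for k in range(1, 30):
    P = P @ M / k
    series = series + P

err_sl2 = np.max(np.abs(closed - series))
```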
When dealing with convergence problems it is necessary to use some
type of norm for a matrix. By such we mean a non-negative real
number $\|A\|$ associated with each matrix $A \in \mathbb{C}^{n \times n}$ and satisfying
\begin{itemize}
\item[a)] $\|A\| \ge 0$ for all $A$ and $\|A\| = 0$ iff $A=O_n$.
\item[b)] $\| \alpha A\| = |\alpha| \, \| A\|$, for all scalars $\alpha$.
\item[c)] $\| A+B\| \le \| A \| + \| B\|$.
\end{itemize}
Quite often one adds the sub-multiplicative property
\begin{equation}\label{subaditiva}
\| A B \| \le \| A \| \, \| B \|,
\end{equation}
but not all matrix norms satisfy this condition \cite{golub96mc}.
There exist different families of matrix norms. Among the more
popular ones we have the $p$-norm $\| A\|_p$ and the Frobenius norm
$\|A\|_F$. For a matrix $A$ with elements $a_{ij}$, $i,j=1 \ldots
n$, they are defined as
\begin{eqnarray}
\| A\|_p & = & \max_{\|\textbf{x}\|_p =1} \|A\textbf{x}\|_p \label{spectral} \\
\| A\|_F & = & \sqrt {\sum_{i=1}^{n} \sum_{j=1}^{n}
|a_{ij}|^2}=\sqrt{\mathrm{tr}(A^{\dag} A) }, \label{Frobenius}
\end{eqnarray}
respectively, where $\|\textbf{x}\|_p =(\sum_{j=1}^{n}
|x_j|^p)^{\frac{1}{p}}$ and $\mathrm{tr}(A)$ denotes the trace of the
matrix $A$. Although both verify (\ref{subaditiva}), the
$p$-norms have the important property that for every matrix $A$
and $\mathbf{x} \in \mathbb{R}^n$ one has $\| A \mathbf{x}\|_p \le
\|A\|_p \, \|\mathbf{x}\|_p$. The most used $p$-norms correspond to
$p=1$, $p=2$ and $p = \infty$.
Of paramount importance in numerical linear algebra is the case
$p=2$. The resulting $2$-norm of a vector is nothing but the Euclidean norm,
whereas in the matrix case it is also called the spectral norm of
$A$ and can be characterized as the square root of the largest
eigenvalue of $A^{\dag} A$. A frequently used inequality relating
Frobenius and spectral norms is the following:
\begin{equation} \label{ineqf1}
\|A\|_2 \le \|A\|_F \le \sqrt{n} \, \|A\|_2.
\end{equation}
In fact, this last inequality can be made more stringent \cite{tyrtyshnikov97abi}:
\begin{equation} \label{ineqf2}
\|A\|_F \le \sqrt{\mathrm{rank}(A)} \, \|A\|_2.
\end{equation}
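Both inequalities are easy to spot-check numerically on a random low-rank matrix (an illustration of ours):

```python
import numpy as np

# Random matrix of rank (at most) 3, built as a product of thin factors
rng = np.random.default_rng(3)
n, r = 8, 3
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

two = np.linalg.norm(A, 2)          # spectral norm
fro = np.linalg.norm(A, 'fro')      # Frobenius norm
rank = np.linalg.matrix_rank(A)
```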
If we consider in a matrix Lie algebra $\mathfrak{g}$ a norm satisfying property
(\ref{subaditiva}), it is clear that $\|[A,B]\| \le 2 \|A\| \|B\|$, and then the
$\mathrm{ad}$ operator defined by (\ref{definad}) is bounded, since
\[
\|\mathrm{ad}_A\| \le 2 \|A\|
\]
for any matrix $A$.
A matrix norm is said to be unitarily invariant if $\|U A V\| =
\|A\|$ whenever $U$, $V$ are unitary matrices. The Frobenius and
spectral norms are both unitarily invariant \cite{horn85man}.
In some of the most basic formulas for the Magnus expansion there
will appear the so-called Bernoulli numbers $B_n$, which are
defined through the generating function \cite{abramowitz65hom}
\[
\frac{t \e^{z t}}{\e^t - 1} = \sum_{n=0}^{\infty} B_n(z) \,
\frac{t^n}{n!}, \qquad |t| < 2 \pi
\]
as $B_n = B_n(0)$. Equivalently,
\[
\frac{x}{\e^x - 1} = \sum_{n=0}^{\infty} \, \frac{B_n}{n!} \, x^n,
\]
whereas the formula
\[
\frac{\e^x - 1}{x} = \sum_{n=0}^{\infty} \, \frac{1}{(n+1)!} \, x^n
\]
will be also useful in the sequel.
The first few nonzero Bernoulli numbers are $B_0 = 1$, $B_1 =
-\frac{1}{2}$, $B_2 = \frac{1}{6}$, $B_4 = -\frac{1}{30}$. In
general one has $B_{2m+1}=0$ for $m\geq 1$.
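The Bernoulli numbers in this convention ($B_1=-\tfrac12$) can be generated exactly with the standard recurrence $\sum_{k=0}^{n} \binom{n+1}{k} B_k = 0$, equivalent to the generating function above (the recurrence itself is standard but not stated in the text); a short sketch in exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bernoulli(nmax):
    # B_n = -1/(n+1) * sum_{k=0}^{n-1} C(n+1, k) B_k, with B_0 = 1
    B = [Fraction(1)]
    for n in range(1, nmax + 1):
        s = sum(comb(n + 1, k) * B[k] for k in range(n))
        B.append(-s / (n + 1))
    return B

B = bernoulli(8)
```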
\section{The Magnus expansion (ME)}\label{section2}
Magnus's proposal with respect to
the linear evolution equation
\begin{equation} \label{eq:evolution}
Y^{\prime}(t)=A(t) Y(t)
\end{equation}
with initial condition $Y(0)=I$, was to express the solution as the exponential of a certain
function,
\begin{equation} \label{eq:Omega}
Y(t)=\exp{\Omega (t)}.
\end{equation}
This is in contrast to the representation
\[
Y(t) = \mathcal{T} \left( \exp \int_0^t A(s) ds \right)
\]
in terms of the \emph{time-ordering operator} $\mathcal{T}$ introduced by Dyson \cite{dyson49trt}.
It turns out that $\Omega(t)$ in (\ref{eq:Omega})
can be obtained explicitly in a number of ways. The
crucial point is to derive a differential equation for the
operator $\Omega$ that replaces (\ref{eq:evolution}). Here we
reproduce the result first established by Magnus as Theorem III
in \cite{magnus54ote}:
\begin{theorem} \label{thMag}
(Magnus 1954). Let $A(t)$ be a known function of $t$
(in general, in an associative ring), and
let $Y(t)$ be an unknown function satisfying (\ref{eq:evolution})
with $Y(0)=I$. Then, if certain unspecified conditions of
convergence are satisfied, $Y(t)$ can be written in the form
\[
Y(t) = \exp{ \Omega(t)},
\]
where
\begin{equation} \label{OmegaDE}
\frac{d \Omega}{dt} = \sum_{n=0}^\infty \frac{B_n}{n!} \,
{\mathrm{ad}}^n_\Omega A,
\end{equation}
and $B_n$ are the Bernoulli numbers. Integration of
(\ref{OmegaDE}) by iteration leads to an infinite series for
$\Omega$ the first terms of which are
\[
\Omega(t) = \int_0^t A(t_1) dt_1 - \frac{1}{2} \int_0^t \left[
\int_0^{t_1} A(t_2) dt_2, A(t_1) \right] dt_1 + \cdots
\]
\end{theorem}
\subsection{A proof of Magnus' Theorem}
The proof of this theorem is largely based on the derivative of
the matrix exponential map, which we discuss next. Given a scalar function
$\omega(t) \in \mathbb{R}$, the derivative of the exponential is
given by $d \exp(\omega(t))/dt = \omega'(t) \exp(\omega(t))$. One
could think of a similar formula for a matrix $\Omega(t)$.
However, this is not the case, since in general $[\Omega, \Omega']
\ne 0$. Instead one has the following result.
\begin{lemma} \label{lemma1}
The derivative of a matrix exponential can be written
alternatively as
\begin{eqnarray} \label{lem1a}
\mathit{(a)} \quad \frac{d}{dt} \exp(\Omega(t)) & = &
d \exp_{\Omega(t)}(\Omega'(t)) \, \exp(\Omega(t)), \\
\mathit{(b)} \quad \frac{d}{dt} \exp(\Omega(t)) & = &
\exp(\Omega(t)) \, d \exp_{-\Omega(t)}(\Omega'(t)), \label{lem1c} \\
\mathit{(c)} \quad \frac{d}{dt} \exp(\Omega(t)) & = & \int_0^1 \e^{x \Omega(t)}
\, \Omega'(t) \, \e^{(1-x) \Omega(t)} dx, \label{lem1b}
\end{eqnarray}
where $d \exp_{\Omega}(C)$ is defined by its (everywhere convergent)
power series
\begin{equation} \label{fdexp1}
d \exp_{\Omega}(C) = \sum_{k=0}^{\infty} \frac{1}{(k+1)!} \,
\mathrm{ad}_{\Omega}^k(C)
\equiv
\frac{\exp(\mathrm{ad}_{\Omega})-I}{\mathrm{ad}_{\Omega}}(C).
\end{equation}
\end{lemma}
\begin{proof}
Let $\Omega(t)$ be a matrix-valued differentiable function and
set
\[
Y(\sigma,t) \equiv \frac{\partial }{\partial t} \left(
\exp(\sigma \Omega(t)) \right) \exp(-\sigma \Omega(t))
\]
for $\sigma, t \in \mathbb{R}$. Differentiating with respect to
$\sigma$,
\begin{eqnarray*}
\frac{\partial Y}{\partial \sigma} & = &
\frac{\partial }{\partial t} \left( \exp(\sigma \Omega) \Omega
\right) \exp(-\sigma \Omega) + \frac{\partial }{\partial t} \left( \exp(\sigma \Omega) \right)
(- \Omega) \exp(-\sigma \Omega) \\
& = & \left( \exp(\sigma \Omega) \Omega' + \frac{\partial }{\partial t}
\left( \exp(\sigma \Omega) \right) \Omega \right)
\exp(-\sigma \Omega) \\
& & - \frac{\partial }{\partial t} \left( \exp(\sigma \Omega)
\right) \Omega \exp(-\sigma \Omega) = \exp(\sigma \Omega) \Omega' \exp(-\sigma \Omega) \\
& = & \exp(\mathrm{ad}_{\sigma \Omega})(\Omega') = \sum_{k=0}^{\infty} \frac{\sigma^k}{k!}
\mathrm{ad}_{\Omega}^k (\Omega'),
\end{eqnarray*}
where the first equality in the last line follows readily from
(\ref{definAd}) and (\ref{actionAd}). On the other hand
\begin{equation} \label{auxl1}
\frac{d }{dt}(\exp \Omega) \exp(-\Omega) = Y(1,t) = \int_0^1
\frac{\partial }{\partial \sigma} Y(\sigma, t) d\sigma
\end{equation}
since $Y(0,t) = 0$, and
\[
\int_0^1 \frac{\partial }{\partial \sigma} Y(\sigma, t) d\sigma
= \int_0^1 \sum_{k=0}^{\infty} \frac{\sigma^k}{k!}
\mathrm{ad}_{\Omega}^k (\Omega') d\sigma = \sum_{k=0}^{\infty} \frac{1}{(k+1)!}
\mathrm{ad}_{\Omega}^k (\Omega'),
\]
from which formula (\ref{lem1a}) follows. The convergence of the
power series (\ref{fdexp1}) is a consequence of the boundedness of
the ad operator: $\|\mathrm{ad}_{\Omega}\| \le 2 \|\Omega\|$.
Multiplying both sides of (\ref{lem1a}) by $\exp(-\Omega)$, we have
\[
\e^{-\Omega} \frac{d \e^{\Omega}}{dt} = \e^{-\Omega} d \exp_{\Omega}(\Omega') \e^{\Omega}
= \e^{\mathrm{ad}_{-\Omega}} d \exp_{\Omega}(\Omega') =
\frac{\e^{\mathrm{ad}_{-\Omega}} - I}{\mathrm{ad}_{-\Omega}} \Omega' = d \exp_{-\Omega}(\Omega')
\]
from which (\ref{lem1c}) follows readily.
Finally, equation (\ref{lem1b}) is obtained by taking
\[
\int_0^1 \frac{\partial }{\partial \sigma} Y(\sigma, t) d\sigma
= \int_0^1 \exp(\sigma \Omega) \Omega' \exp(-\sigma \Omega) d\sigma
\]
in (\ref{auxl1}).
\end{proof}
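Formula (\ref{lem1a}) lends itself to a direct numerical test: pick a concrete $\Omega(t)$ (the choice $\Omega(t)=tX+t^2Y$ below is ours), approximate the left-hand side by a central difference, and sum the truncated $d\exp$ series:

```python
import math
import numpy as np

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])
Om = lambda t: t * X + t * t * Y        # Omega(t)
dOm = lambda t: X + 2 * t * Y           # Omega'(t)

def expm(M, terms=30):
    # plain Taylor series for the matrix exponential (fine for small ||M||)
    out, P = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

def dexp(W, C, terms=30):
    # dexp_W(C) = sum_k ad_W^k(C) / (k+1)!
    out, D = np.zeros_like(C), C.copy()
    for k in range(terms):
        out = out + D / math.factorial(k + 1)
        D = W @ D - D @ W
    return out

t, h = 0.7, 1e-6
lhs = (expm(Om(t + h)) - expm(Om(t - h))) / (2 * h)   # central difference
rhs = dexp(Om(t), dOm(t)) @ expm(Om(t))
err_dexp = np.max(np.abs(lhs - rhs))
```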
According to Rossmann \cite{rossmann02lgr} and Sternberg \cite{sternberg04lal},
formula (\ref{lem1a}) was first proved by F. Schur in
1890 \cite{schur90nbd} and was
taken up later from a different point of view by Poincar\'e (1899),
whereas the integral formulation
(\ref{lem1b}) has been derived a number of times in the physics
literature \cite{wilcox67eoa}.
As a consequence of the Inverse Function Theorem, the exponential
map has a local inverse in the vicinity of a point $\Omega$ at
which $d \exp_{\Omega} =
(\exp(\mathrm{ad}_{\Omega})-I)/\mathrm{ad}_{\Omega}$ is
invertible. The following lemma establishes when this takes place.
\begin{lemma} \label{lemma2}
(Baker 1905). If the eigenvalues of the linear operator
$\mathrm{ad}_{\Omega}$ are different from $2 m \pi i$ with $m \in
\{ \pm 1, \pm 2, \ldots \}$, then $d \exp_{\Omega}$ is invertible.
Furthermore,
\begin{equation} \label{fdexpinv}
d \exp_{\Omega}^{-1}(C) = \frac{\mathrm{ad}_{\Omega}}{\e^{\mathrm{ad}_{\Omega}} - I} C =
\sum_{k=0}^{\infty} \frac{B_k}{k!}
\mathrm{ad}_{\Omega}^k (C)
\end{equation}
and the convergence of the $d \exp_{\Omega}^{-1}$ expansion is
certainly assured if $\| \Omega \| < \pi$.
\end{lemma}
\begin{proof}
The eigenvalues of $d \exp_{\Omega}$ are of the form
\[
\mu = \sum_{k \ge 0} \frac{\nu^k}{(k+1)!} = \frac{\e^{\nu}
- 1}{\nu},
\]
where $\nu$ is an eigenvalue of $\mathrm{ad}_{\Omega}$. By
assumption, the values of $\mu$ are non-zero, so that $d
\exp_{\Omega}$ is invertible. By definition of the Bernoulli
numbers, the composition of (\ref{fdexpinv}) with (\ref{fdexp1})
gives the identity. Convergence for $\| \Omega \| < \pi$ follows
from $\|\mathrm{ad}_{\Omega}\| \le 2 \|\Omega\|$ and from the fact
that the radius of convergence of the series expansion for
$x/(\e^x - 1)$ is $2 \pi$.
\end{proof}
It remains to determine the eigenvalues of the operator
$\mathrm{ad}_{\Omega}$. In fact, it is not difficult to show that
if $\Omega$ has $n$ eigenvalues $\{ \lambda_j, \; j=1,2,\ldots,n
\}$, then $\mathrm{ad}_{\Omega}$ has $n^2$ eigenvalues $\{
\lambda_j - \lambda_k, \; j,k=1,2,\ldots,n \}$.
As a consequence of the previous discussion, Theorem \ref{thMag} can be
rephrased more precisely in the following terms.
\begin{theorem} \label{thMrefor}
The solution of the differential equation $Y' =
A(t) Y$ with initial condition $Y(0) = Y_0$
can be written as $Y(t) = \exp(\Omega(t)) Y_0$ with
$\Omega(t)$ defined by
\begin{equation} \label{fmag1}
\Omega' = d \exp_{\Omega}^{-1}(A(t)), \qquad \Omega(0) = O,
\end{equation}
where
\[
d \exp_{\Omega}^{-1}(A) = \sum_{k=0}^{\infty} \frac{B_k}{k!}
\mathrm{ad}_{\Omega}^k (A).
\]
\end{theorem}
\begin{proof}
Comparing the derivative of $Y(t) = \exp(\Omega(t)) Y_0$,
\[
\frac{dY}{dt} = \frac{d }{dt} \left( \exp(\Omega(t)) \right) Y_0
= d \exp_{\Omega}(\Omega') \, \exp(\Omega(t)) Y_0
\]
with $Y' = A(t) Y$, we obtain $A(t) = d \exp_{\Omega}(\Omega')$.
Applying the inverse operator $d \exp_{\Omega}^{-1}$ to this
relation yields the differential equation (\ref{fmag1}) for
$\Omega(t)$.
\end{proof}
Taking into account the numerical values of the first few
Bernoulli numbers, the differential equation (\ref{fmag1})
therefore becomes
\[
\Omega' = A(t) - \frac{1}{2} [\Omega, A(t)] + \frac{1}{12}
[\Omega,[\Omega,A(t)]] + \cdots,
\]
which is nonlinear in $\Omega$. By defining
\[
\Omega^{[0]} = O, \qquad \Omega^{[1]} = \int_0^t A(t_1) dt_1,
\]
and applying Picard fixed point iteration, one gets
\[
\Omega^{[n]}(t) = \int_0^t \left( A(t_1) - \frac{1}{2}
[\Omega^{[n-1]}(t_1), A(t_1)] + \frac{1}{12}
[\Omega^{[n-1]}(t_1),[\Omega^{[n-1]}(t_1),A(t_1)]] + \cdots \right)
dt_1
\]
and $\lim_{n \rightarrow \infty} \Omega^{[n]}(t) = \Omega(t)$ in
a suitably small neighbourhood of the origin.
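The Picard iteration just described is easy to realize on a time grid. Below is a sketch of ours, with the Bernoulli series truncated after the $\frac{1}{12}$ term and trapezoidal quadrature, for the illustrative choice $A(t)=X+tY$; the converged $\exp\Omega$ is compared with a reference solution obtained by many small exponential steps.

```python
import numpy as np

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y0 = np.array([[0.0, 0.0], [1.0, 0.0]])
A = lambda t: X + t * Y0
comm = lambda P, Q: P @ Q - Q @ P

def expm(M, terms=30):
    out, P = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        P = P @ M / k
        out = out + P
    return out

T, N = 0.5, 400
ts = np.linspace(0.0, T, N + 1)
h = ts[1] - ts[0]

def cumint(F):
    # cumulative trapezoidal integral of a grid of matrices
    out = np.zeros_like(F)
    for i in range(1, len(F)):
        out[i] = out[i - 1] + 0.5 * h * (F[i - 1] + F[i])
    return out

Avals = np.array([A(t) for t in ts])
Om = np.zeros_like(Avals)                      # Omega^{[0]} = 0
for _ in range(12):                            # Picard sweeps
    rhs = np.array([Avals[i] - 0.5 * comm(Om[i], Avals[i])
                    + comm(Om[i], comm(Om[i], Avals[i])) / 12.0
                    for i in range(N + 1)])
    Om = cumint(rhs)

# reference solution of Y' = A(t) Y by midpoint exponential stepping
Yref = np.eye(2)
for i in range(N):
    Yref = expm(h * A(ts[i] + h / 2)) @ Yref
err_picard = np.max(np.abs(expm(Om[-1]) - Yref))
```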
\subsection{Formulae for the first terms in Magnus expansion}
\label{sec2.2}
Suppose now that $A$ is of first order in some
parameter $\varepsilon $ and try a solution in the form of a series
\begin{equation} \label{M-series}
\Omega(t) =\sum_{n=1}^\infty \Omega_n(t),
\end{equation}
where $\Omega_n$ is supposed to be of order $\varepsilon^n$.
Equivalently, we replace $A \longmapsto \varepsilon A$ in (\ref{eq:evolution})
and determine the successive terms of
\begin{equation} \label{M-series-eps}
\Omega(t) =\sum_{n=1}^\infty \varepsilon^n \Omega_n(t).
\end{equation}
This can be done explicitly, at least for the first terms,
by substituting the series (\ref{M-series-eps})
in (\ref{fmag1}) and equating powers of
$\varepsilon $. Obviously, the Magnus series (\ref{M-series}) is recovered by taking $\varepsilon = 1$.
Thus, using the
notation $A(t_i)\equiv A_i$, the first four orders read
\begin{enumerate}
\item $\Omega_1^{\prime}= A $, so that
\begin{equation}\label{O1}
\Omega_1(t)=\int_{0}^t {\rm d}t_1 A _1
\end{equation}
\item $\Omega_2^{\prime}=-\frac{1}{2}[\Omega_1,A ]$. Thus
\begin{equation}\label{O2}
\Omega_2(t)=\frac{1}{2}\int_{0}^t {\rm d}t_1 \int_{0}^{t_1} {\rm d}t_2
[A _1,A _2]
\end{equation}
\item $\Omega_3^{\prime}
=-\frac{1}{2}[\Omega_2,A ]+\frac{1}{12}[\Omega_1,[\Omega_1,A ]]$.
After some work and using the formula
\begin{equation}\label{intxy}
\int_0^\alpha {\rm d}x \int_0^x f(x,y){\rm d}y=\int_0^\alpha {\rm
d}y \int_y^\alpha f(x,y){\rm d}x
\end{equation}
we obtain
\begin{equation}\label{O3}
\Omega_3(t)=\frac{1}{6}\int_{0}^t {\rm d}t_1 \int_{0}^{t_1}{\rm d}t_2
\int_{0}^{t_2} {\rm d}t_3 \{
[A _1,[A _2,A _3]]+[[A _1,A _2],A _3]\}
\end{equation}
\item $\Omega_4^{\prime}=-\frac{1}{2}[\Omega_3,A ]+\frac{1}{12}[\Omega_2,[\Omega_1,A ]]
+\frac{1}{12}[\Omega_1,[\Omega_2,A ]]$, which yields
\begin{eqnarray}\label{O4}
\Omega_4(t)&=&\frac{1}{12}\int_{0}^t {\rm d}t_1 \int_{0}^{t_1}{\rm d}t_2
\int_{0}^{t_2} {\rm d}t_3 \int_{0}^{t_3} {\rm d}t_4 \{
[[[A _1,A _2],A _3],A _4] \\
&+&[A _1,[[A _2,A _3],A _4]]+[A _1,[A _2,[A _3,A _4]]]+[A _2,[A _3,[A _4,A _1]]]
\} \nonumber
\end{eqnarray}
\end{enumerate}
The apparent symmetry in the formulae above is deceptive. Higher
orders require repeated use of (\ref{intxy}) and become unwieldy.
Prato and Lamberti
\cite{prato97ano} give the fifth order explicitly, from an
algorithmic point of view. One can also find in the literature quite
involved explicit expressions for arbitrary order \cite
{bialynicki69eso,mielnik70cat,saenz02cat,strichartz87tcb,suarez01laa}.
In the next subsection we describe
a recursive procedure to generate the terms in the expansion.
\subsection{Magnus expansion generator}\label{MEG}
The procedure above can indeed be turned into a recursion that generates
all the terms in the Magnus series (\ref{M-series}). Thus, by substituting
$\Omega(t)=\sum_{n=1}^{\infty }\Omega _{n}$ into equation
(\ref{fmag1}) and equating terms of the same order one gets in
general
\begin{eqnarray}
\Omega_{1}^{\prime} &=& A \nonumber \label{omegapunto} \\
\Omega_{n}^{\prime}
&=&\sum_{j=1}^{n-1}\frac{B_{j}}{j!}S_{n}^{(j)},\quad n\geq 2,
\label{omegapunt}
\end{eqnarray}
where
\begin{equation} \label{Sk}
S_n^{(k)}=\sum \,
[\Omega _{i_1},[ \ldots [\Omega _{i_k},A ] \ldots ]] \qquad (i_1+
\cdots +i_k=n-1).
\end{equation}
Notice that in the last equation the subscript $n$ keeps track of the
order in $A$, whereas the superscript $k$ counts the number of
$\Omega$'s. The newly defined operators $S_n^{(k)}$ can again be
calculated recursively. The recurrence relations are now given by
\begin{eqnarray}
S_{n}^{(j)} &=&\sum_{m=1}^{n-j}\left[ \Omega
_{m},S_{n-m}^{(j-1)}\right]
,\qquad\qquad 2\leq j\leq n-1 \label{eses} \\
S_{n}^{(1)} &=&\left[ \Omega _{n-1},A \right] ,\qquad
S_{n}^{(n-1)}= \mathrm{ad} _{\Omega _{1}}^{n-1} (A) . \nonumber
\end{eqnarray}
After integration we reach the final result in the form
\begin{eqnarray}
\Omega _{1} &=&\int_{0}^{t}A (\tau )d\tau \nonumber \\
\Omega _{n}
&=&\sum_{j=1}^{n-1}\frac{B_{j}}{j!}\int_{0}^{t}S_{n}^{(j)}(\tau
)d\tau ,\qquad\qquad n\geq 2. \label{omegn}
\end{eqnarray}
Alternatively, the expression of $S_n^{(k)}$ given by (\ref{Sk})
can be inserted into (\ref{omegn}), thus arriving at
\begin{equation} \label{recur2}
\Omega_n(t) = \sum_{j=1}^{n-1} \frac{B_j}{j!} \,
\sum_{
k_1 + \cdots + k_j = n-1 \atop
k_1 \ge 1, \ldots, k_j \ge 1}
\, \int_0^t \,
\mathrm{ad}_{\Omega_{k_1}(s)} \, \mathrm{ad}_{\Omega_{k_2}(s)} \cdots
\, \mathrm{ad}_{\Omega_{k_j}(s)} A(s) \, ds \qquad n \ge 2.
\end{equation}
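The closed form (\ref{recur2}) translates almost literally into code. The following Python sketch (our own illustration: the test matrix $A(t)$ is arbitrary, and the closed-form reference values in the comments were computed by hand for this particular example) evaluates $\Omega_2$ and $\Omega_3$ on a time grid, with trapezoidal quadrature for the integrals:

```python
import math
from fractions import Fraction

def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def msc(c, a):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

def comm(a, b):
    return madd(mm(a, b), msc(-1.0, mm(b, a)))

def bernoulli(nmax):
    # B_0, ..., B_nmax with the convention B_1 = -1/2 used in the recurrence
    B = [Fraction(1)]
    for m in range(1, nmax + 1):
        B.append(-sum(math.comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

def compositions(total, parts):
    # all tuples of `parts` positive integers summing to `total`
    if parts == 1:
        yield (total,)
        return
    for first in range(1, total - parts + 2):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

B = bernoulli(6)
T, N = 0.8, 2000
h = T / N
Agrid = [[[0.0, 1.0], [k * h, 0.0]] for k in range(N + 1)]

def cumtrapz(F):
    out = [[[0.0, 0.0], [0.0, 0.0]]]
    for k in range(1, len(F)):
        out.append(madd(out[k - 1], msc(h / 2.0, madd(F[k - 1], F[k]))))
    return out

Om = {1: cumtrapz(Agrid)}
for n in (2, 3):
    integrand = [[[0.0, 0.0], [0.0, 0.0]] for _ in range(N + 1)]
    for j in range(1, n):
        w = float(B[j]) / math.factorial(j)
        for comp in compositions(n - 1, j):
            for k in range(N + 1):
                v = Agrid[k]
                for i in reversed(comp):   # innermost ad acts on A first
                    v = comm(Om[i][k], v)
                integrand[k] = madd(integrand[k], msc(w, v))
    Om[n] = cumtrapz(integrand)

# hand-computed values for this A(t):
# Omega_2(T) = diag(-T^3/12, T^3/12), Omega_3(T) = [[0, 0], [-T^5/120, 0]]
print(Om[2][N], Om[3][N])
```

At each order the recursion only touches $\Omega_1, \ldots, \Omega_{n-1}$, exactly as in (\ref{recur2}); no symbolic manipulation of nested integrals is needed.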
Notice that each term $\Omega_n(t)$ in the Magnus series is
a multiple integral of combinations of $n-1$ nested commutators containing
$n$ operators $A(t)$. If, in particular, $A(t)$ belongs to some Lie algebra $\mathfrak{g}$,
then it is clear that $\Omega(t)$ (and in fact any truncation of the Magnus series) also stays
in $\mathfrak{g}$ and therefore $\exp(\Omega) \in \mathcal{G}$, where $\mathcal{G}$
denotes the Lie group whose corresponding Lie algebra (the tangent space at the
identity of $\mathcal{G}$) is $\mathfrak{g}$.
\subsection{Magnus expansion and time-dependent perturbation
theory}
\label{sec2.4}
It is not difficult to establish a connection between the Magnus
series and the Dyson perturbative series \cite{dyson49trt}. The latter gives
the solution of (\ref{eq:evolution}) as
\begin{equation} \label{Dys1}
Y(t) = I + \sum_{n=1}^{\infty} P_n(t),
\end{equation}
where $P_n$ are time-ordered products
\[
P_n(t) = \int_0^t dt_1 \ldots \int_0^{t_{n-1}} dt_n \, A_1 A_2
\ldots A_n,
\]
with $A_i \equiv A(t_i)$. Then
\[
\sum_{j=1}^{\infty }\Omega _{j}(t)=\log \left( I+\sum_{j=1}^{\infty
}P_{j}(t)\right).
\]
As stated by Salzman \cite{salzman87ncf},
\begin{equation}
\Omega
_{n}=P_{n}-\sum_{j=2}^{n}\frac{(-1)^{j}}{j}R_{n}^{(j)},\qquad
n\geq 2, \label{MagDay}
\end{equation}
where
\[
R_n^{(k)}=\sum P_{i_1}P_{i_2}\ldots P_{i_k} \qquad
(i_1+\cdots+i_k=n)
\]
obeys the quadratic recursion formula
\begin{eqnarray}
R_{n}^{(j)} &=&\sum_{m=1}^{n-j+1}R_{m}^{(1)}R_{n-m}^{(j-1)},
\label{erres}
\\
R_{n}^{(1)} &=&P_{n},\qquad R_{n}^{(n)}=P_{1}^{n}. \nonumber
\end{eqnarray}
Equation (\ref{erres}) represents the Magnus expansion generator
in Salzman's approach.
It may be useful to write down the first few equations provided by
this formalism:
\begin{eqnarray}\label{Salzeqs}
\Omega_1&=& P_1 \nonumber \\
\Omega_2&=& P_2-\frac{1}{2}P_1^2 \\
\Omega_3&=& P_3-\frac{1}{2}(P_1P_2+P_2P_1)+\frac{1}{3}P_1^3. \nonumber
\end{eqnarray}
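The relations (\ref{Salzeqs}) can be verified mechanically by expanding $\log(I + \sum_j P_j)$ as a formal series in noncommuting symbols. The following Python sketch (our own illustration) does so with exact rational coefficients, grading each word $P_{i_1} \cdots P_{i_k}$ by $i_1 + \cdots + i_k$:

```python
from fractions import Fraction as Fr

NMAX = 3  # truncate the graded series at total order 3

def mul(a, b):
    # product of formal series in noncommuting P's; the word (2, 1)
    # stands for P_2 P_1 and its grade is the sum of the subscripts
    out = {}
    for wa, ca in a.items():
        for wb, cb in b.items():
            w = wa + wb
            if sum(w) <= NMAX:
                out[w] = out.get(w, Fr(0)) + ca * cb
    return out

S = {(1,): Fr(1), (2,): Fr(1), (3,): Fr(1)}  # P_1 + P_2 + P_3
S2 = mul(S, S)
S3 = mul(S2, S)
Om = {}
# log(I + S) = S - S^2/2 + S^3/3 - ... (higher powers exceed grade 3)
for series, coef in ((S, Fr(1)), (S2, Fr(-1, 2)), (S3, Fr(1, 3))):
    for w, c in series.items():
        Om[w] = Om.get(w, Fr(0)) + coef * c

# grade 2: Omega_2 = P_2 - (1/2) P_1^2
print(Om[(2,)], Om[(1, 1)])
# grade 3: Omega_3 = P_3 - (1/2)(P_1 P_2 + P_2 P_1) + (1/3) P_1^3
print(Om[(3,)], Om[(1, 2)], Om[(2, 1)], Om[(1, 1, 1)])
```

The printed coefficients reproduce (\ref{Salzeqs}) word by word.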
A similar set of equations was derived by Burum
\cite{burum81meg}, yielding
\begin{eqnarray}\label{Burum1}
P_1&=&\Omega _1,\nonumber \\
P_2&=&\Omega _2+\frac{1}{2!}\Omega _1^2, \\
P_3&=&\Omega _3+\frac{1}{2!}(\Omega _1 \Omega _2 + \Omega _2 \Omega _1) +
\frac{1}{3!} \Omega_1^3 \nonumber
\end{eqnarray}
and so on. The general term reads
\begin{equation}\label{MagDay1}
\Omega_{n}=P_{n}-\sum_{j=2}^{n}\frac{1}{j!}Q_{n}^{(j)}, \qquad
n\geq 2,
\end{equation}
where
\begin{equation} \label{BurumQ}
Q_n^{(k)} = \sum \Omega _{i_1}\ldots\Omega _{i_k},
\qquad\quad (i_1+ \cdots +i_k=n).
\end{equation}
As before, subscripts indicate the order with respect to the
parameter $\varepsilon$, while superscripts represent the number
of factors in each product. Thus, the summation in (\ref{BurumQ})
extends over all possible products of $k$ (in general
non-commuting) operators $\Omega _i$ such that the overall order of
each term is equal to $n$. By regrouping terms, one has
\begin{eqnarray}\label{BurumQQ}
Q_n^{(k)} &=& \Omega _1 \sum_{i_2+\cdots+i_k=n-1}
\Omega _{i_2}\cdots\Omega _{i_k}+ \Omega _2 \sum_{i_2+\cdots+i_k=n-2}
\Omega _{i_2}\cdots\Omega _{i_k}\\ \nonumber
&+& \cdots +\Omega
_{n-k+1}\sum_{i_2+\cdots+i_k=k-1} \Omega _{i_2}\cdots\Omega _{i_k},
\end{eqnarray}
and $Q_{n}^{(j)}$ may also be obtained recursively from
\begin{eqnarray}\label{qs}
Q_{n}^{(j)} &=&\sum_{m=1}^{n-j+1}Q_{m}^{(1)}Q_{n-m}^{(j-1)},\\
Q_{n}^{(1)} &=&\Omega_{n},\qquad Q_{n}^{(n)}=\Omega_{1}^{n}.
\nonumber
\end{eqnarray}
By working out this recurrence one gets the same expressions as
(\ref{Salzeqs}) for the first terms. Further aspects of the
relationship between the Magnus and Dyson series and time-ordered
products can be found in \cite{lam98dot} and \cite{oteo00ftp}.
\subsection{Graph theoretical analysis of Magnus
expansion} \label{graph}
The previous recursions allow us in principle to express any
$\Omega_k$ in the Magnus series in terms of $\Omega_1, \ldots,
\Omega_{k-1}$. In fact, this procedure has some advantages from a
computational point of view. On the other hand, as we have mentioned before,
when the
recursions are solved explicitly, $\Omega_k$ can
be expanded as a linear combination of terms that are composed
from integrals and commutators acting iteratively on $A$. The
actual expression, however, becomes increasingly complex with $k$,
as should be evident from the first terms
(\ref{O1})-(\ref{O4}). An alternative form of the Magnus
expansion, also amenable to recursive derivation using
graphical tools, can be obtained by associating each term in the
expansion with a \emph{binary rooted tree}, an approach worked out
by Iserles and N{\o}rsett \cite{iserles99ots}. For completeness,
in the sequel we show the equivalence of the recurrence
(\ref{eses})-(\ref{omegn}) with this graph theoretical
approach.
In essence, the idea of Iserles and N{\o}rsett is to associate
each term in $\Omega_k$ with a rooted tree, according to the
following prescription.
Let $\mathcal{T}_0$ be the set consisting of the single rooted tree
with one vertex, $ \mathcal{T}_0=\{ \begin{picture}(10,5)
\put (5,3) \circle*{4}
\end{picture} \}$. We establish the relationship
between this tree and $A$ through the map
\[
\begin{picture}(10,5)
\put (5,3) \circle*{4}
\end{picture} \leadsto A(t)
\]
and define recursively
\begin{displaymath}
\mathcal{T}_m=\left\{\rule[-12pt]{0pt}{24pt}\right. \;\;
\begin{picture}(20,30)(0,10)
\put (10,0) \line(-1,1){10}\line(1,1){10}
\put (0,10) \line(0,1){10}
\put (-3,23) {$\tau_1$}
\put (17,13) {$\tau_2$}
\end{picture}\quad :\; \tau_1\in\mathcal{T}_{k_1},\; \tau_2\in\mathcal{T}_{k_2},
\; k_1+k_2=m-1\left.\rule[-12pt]{0pt}{24pt}\right\}.
\end{displaymath}
Next, given two expansion terms $H_{\tau_1}$ and $H_{\tau_2}$,
which have been associated previously with $\tau_1 \in \mathcal{T}_{k_1}$
and $\tau_2 \in \mathcal{T}_{k_2}$, respectively ($k_1 + k_2 = m-1$), we
associate
\begin{displaymath}
H_\tau(t)=\left[\int_0^t H_{\tau_1}(\xi) d\xi,
H_{\tau_2}(t)\right] \qquad\mbox{with}\qquad \tau=
\begin{picture}(20,30)
\put (10,0) \line(-1,1){10}\line(1,1){10}
\put (0,10) \line(0,1){10}
\put (-3,23) {$\tau_1$}
\put (17,13) {$\tau_2$}
\end{picture}.
\end{displaymath}
Thus, each $H_\tau$ for $\tau \in \mathcal{T}_{m}$ involves exactly $m$
integrals and $m$ commutators.
These composition rules establish a one-to-one relationship
between a rooted tree $\displaystyle \tau\in\mathcal{T} \equiv \cup_{m\geq0}\mathcal{T}_m$, and
a matrix function $H_\tau(t)$ involving $A$, multivariate
integrals and commutators.
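Although not needed in what follows, the recursive definition of $\mathcal{T}_m$ immediately gives the number of trees at each level: $|\mathcal{T}_0| = 1$ and $|\mathcal{T}_m| = \sum_{k_1+k_2=m-1} |\mathcal{T}_{k_1}|\,|\mathcal{T}_{k_2}|$, which is the Catalan recursion. A two-line check (our own illustration):

```python
# |T_0| = 1 and, by the defining recursion, |T_m| = sum over k1+k2 = m-1
# of |T_k1| * |T_k2|
counts = [1]
for m in range(1, 7):
    counts.append(sum(counts[k] * counts[m - 1 - k] for k in range(m)))
print(counts)  # [1, 1, 2, 5, 14, 42, 132]: the Catalan numbers
```

This is consistent with the single tree found below in $\mathcal{T}_1$ and the two trees in $\mathcal{T}_2$, and it quantifies how fast the number of trees grows with the order.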
From here it is easy to deduce that every $\tau \in \mathcal{T}_m$, $m
\geq 1$, can be written in a unique way as
\begin{displaymath}
\tau=
\begin{picture}(50,65)
\put (10,0) \line(-1,1){10}\line(1,1){10}
\put (0,10) \line(0,1){10}
\put (-3,23) {$\tau_1$}
\put (20,10) \line(-1,1){10}\line(1,1){10}
\put (10,20) \line(0,1){10}
\put (7,33) {$\tau_2$}
\put (30,20) {\line(-1,1){10}}
\put (20,30) \line(0,1){10}
\put (17,43) {$\tau_3$}
\put (40,30) \line(-1,1){10}\line(1,1){10}
\put (50,40) \circle*{4}
\put (30,40) \line(0,1){10}
\put (27,53) {$\tau_s$}
\multiput (31,21.5)(2,2){4} {{\tiny .}}
\end{picture}
\end{displaymath}
or $\tau \equiv a(\tau_1, \tau_2, \ldots, \tau_s)$. Then the
Magnus expansion can be expressed in the form
\cite{iserles00lgm,iserles99ots}
\begin{equation} \label{m.2}
\Omega(t)=\sum_{m=0}^\infty \sum_{\tau\in\mathcal{T}_m} \alpha(\tau)
\int_0^t H_\tau(\xi) d\xi,
\end{equation}
with the scalar $\alpha(\begin{picture}(10,5) \put (5,3) \circle*{4}
\end{picture})=1$ and, in general,
\[
\alpha(\tau)=\frac{{B}_s}{s!} \prod_{l=1}^s
\alpha(\tau_l).
\]
Let us illustrate this procedure by writing down explicitly the
first terms in the expansion in a tree formalism. In $\mathcal{T}_1$ we
only have $k_1 = k_2 = 0$, so that a single tree is possible,
\[
\tau_1 = \begin{picture}(10,5)
\put (5,3) \circle*{4}
\end{picture}, \qquad
\tau_2 = \begin{picture}(10,5)
\put (5,3) \circle*{4}
\end{picture}, \qquad \Rightarrow \qquad
\tau = \begin{picture}(20,20)
\put (10,0) \line(-1,1){10}\line(1,1){10}
\put (0,10) \line(0,1){10}
\put (0,20) \circle*{4}
\put (20,10) \circle*{4}
\end{picture},
\]
with $\alpha(\tau) = -1/2$. In $\mathcal{T}_2$ there are two
possibilities, namely $k_1=0$, $k_2=1$ and $k_1=1$, $k_2=0$, and
thus one gets
\[
\begin{array}{lclclc}
\tau_1 = \begin{picture}(10,5)
\put (5,3) \circle*{4}
\end{picture}, & \quad & \tau_2 = \begin{picture}(20,20)
\put (10,0) \line(-1,1){10}\line(1,1){10}
\put (0,10) \line(0,1){10}
\put (0,20) \circle*{4}
\put (20,10) \circle*{4}
\end{picture} & \qquad \Rightarrow \qquad &
\tau = \begin{picture}(30,35)
\put (10,0) \line(-1,1){10}\line(1,1){10} \put (0,10) \line(0,1){10} \put (20,10) \line(-1,1){10}\line(1,1){10} \put (10,20) \line(0,1){10}
\put (0,20) \circle*{4}
\put (10,30) \circle*{4}
\put (30,20) \circle*{4}
\end{picture}, & \quad \alpha(\tau) = \frac{1}{12} \\
\tau_1 = \begin{picture}(20,20)
\put (10,0) \line(-1,1){10}\line(1,1){10}
\put (0,10) \line(0,1){10}
\put (0,20) \circle*{4}
\put (20,10) \circle*{4}
\end{picture}, & \quad & \tau_2 = \begin{picture}(10,5)
\put (5,3) \circle*{4}
\end{picture} & \qquad \Rightarrow \qquad &
\tau = \begin{picture}(30,45)
\put (20,0) \line(-1,1){10}\line(1,1){10} \put (10,10) \line(0,1){10} \put (10,20) \line(-1,1){10}\line(1,1){10} \put (0,30) \line(0,1){10}
\put (0,40) \circle*{4}
\put (20,30) \circle*{4}
\put (30,10) \circle*{4}
\end{picture} & \quad \alpha(\tau) = \frac{1}{4}
\end{array}
\]
and the process can be repeated for any $\mathcal{T}_m$. The
correspondence between trees and expansion terms should be clear
from the previous graphs. For instance, the last tree is nothing
but the integral of $A$, commuted with $A$, integrated and
commuted with $A$. In that way, by truncating the expansion
(\ref{m.2}) at $m=2$ we have
\begin{equation} \label{m.3}
\Omega(t) = \;\; \begin{picture}(10,10)
\put (5,0) \line(0,1){10}
\put (5,10) \circle*{4}
\end{picture} \;\; - \;\; \frac{1}{2} \;\; \begin{picture}(20,35)
\put (10,0) \line(0,1){10}
\put (10,10) \line(-1,1){10}\line(1,1){10}
\put (0,20) \line(0,1){10}
\put (0,30) \circle*{4}
\put (20,20) \circle*{4}
\end{picture} \quad + \quad \frac{1}{4} \;\; \begin{picture}(30,55)
\put (20,0) \line(0,1){10}
\put (20,10) \line(-1,1){10}\line(1,1){10}
\put (10,20) \line(0,1){10}
\put (10,30) \line(-1,1){10}\line(1,1){10}
\put (0,40) \line(0,1){10}
\put (0,50) \circle*{4}
\put (20,40) \circle*{4}
\put (30,20) \circle*{4}
\end{picture}\quad+\quad \frac{1}{12} \;\; \begin{picture}(30,45)
\put (10,0) \line(0,1){10}
\put (10,10) \line(-1,1){10}\line(1,1){10}
\put (0,20) \line(0,1){10}
\put (20,20) \line(-1,1){10}\line(1,1){10}
\put (10,30) \line(0,1){10}
\put (0,30) \circle*{4}
\put (10,40) \circle*{4}
\put (30,30) \circle*{4}
\end{picture} \quad + \cdots,
\end{equation}
i.e., the explicit expressions collected in subsection
\ref{sec2.2}.
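The coefficients $\alpha(\tau)$ are straightforward to evaluate recursively. In the following Python sketch (our own illustration) a tree is encoded through its decomposition $\tau = a(\tau_1,\ldots,\tau_s)$ as the tuple of its subtrees, with the empty tuple playing the role of the single-vertex tree; the Bernoulli numbers are generated with the convention $B_1 = -1/2$ used throughout this section:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(nmax):
    # B_0, ..., B_nmax with the convention B_1 = -1/2
    B = [Fraction(1)]
    for m in range(1, nmax + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

B = bernoulli(10)
LEAF = ()   # the single-vertex tree, associated with A(t)

def alpha(tau):
    # tau = a(tau_1, ..., tau_s), encoded as the tuple (tau_1, ..., tau_s)
    if tau == LEAF:
        return Fraction(1)
    s = len(tau)
    out = B[s] / factorial(s)
    for child in tau:
        out *= alpha(child)
    return out

t1 = (LEAF,)                   # the only tree in T_1
print(alpha(t1))               # -1/2
print(alpha((LEAF, LEAF)))     # 1/12: first tree of T_2 above
print(alpha((t1,)))            # 1/4:  second tree of T_2
```

The three printed values reproduce the weights $-1/2$, $1/12$ and $1/4$ appearing in the truncation (\ref{m.3}).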
Finally, the relationship between the tree formalism and the recurrence
(\ref{eses})-(\ref{omegn}) can be established as follows. From
(\ref{m.2}) we can write, for each $m \geq 1$,
\[
\sum_{\tau\in\mathcal{T}_m} \alpha(\tau) H_\tau(t)
= \sum_{s=1}^m \frac{{B}_s}{s!}
\underset{k_1 + \cdots + k_s = m-s}{\underset{k_1, \ldots, k_s}{\sum}}
\sum_{\tau_i \in \mathcal{T}_{k_i}} \, \alpha(\tau_1) \cdots
\alpha(\tau_s) H_{a(\tau_1, \ldots, \tau_s)}.
\]
Thus, by comparing (\ref{omegn}) and (\ref{m.2}) we have
\[
\Omega_m(t)=\sum_{\tau \in \mathcal{T}_{m-1}} \alpha(\tau)
\int_0^t H_\tau(\xi) d\xi = \sum_{j=1}^{m-1} \frac{B_j}{j!}
\int_0^t S_m^{(j)}(\xi) d\xi
\]
so that
\[
S_m^{(j)} =
\underset{k_1 + \cdots + k_j = m-1-j}{\underset{k_1, \ldots, k_j}{\sum}}
\sum_{\tau_i \in \mathcal{T}_{k_i}} \, \alpha(\tau_1) \cdots
\alpha(\tau_j) H_{a(\tau_1, \ldots, \tau_j)}.
\]
In other words, each term $S_n^{(j)}$ in the recurrence
(\ref{eses}) carries a complete set of binary trees. Although
both procedures are equivalent, the use of (\ref{eses}) and
(\ref{omegn}) is particularly well suited when high orders of
the expansion are considered, for two reasons: (i) the enormous
number of trees involved for large values of $m$ and (ii) in
(\ref{m.2}) many terms are redundant, and a careful graph
theoretical analysis is needed to deduce which terms have to be
discarded \cite{iserles99ots}.
Recently, an ME-type formalism has been developed in the more
abstract setting of dendriform algebras. This generalized expansion incorporates the usual
one as a limit, but is formulated more in line with (non-commutative) Butcher
series. In this context, the use of planar rooted trees to represent
the expansion and the so-called pre-Lie product allows one to reduce the number
of terms at each order in comparison with expression (\ref{m.2}) \cite{ebrahimi-fard08ama}.
\subsection{Time-symmetry of the expansion}
\label{time-symmetry}
The map $\varphi^t : Y(t_0) \longrightarrow Y(t)$ corresponding to
the linear differential equation (\ref{eq:evolution}) with $Y(t_0) = Y_0$ is time
symmetric, $\varphi^{-t} \circ \varphi^t = \mathrm{Id}$, since
integrating (\ref{eq:evolution}) from $t=t_0$ to $t=t_f$ for every
$t_f \geq t_0$ and back to $t_0$ returns the original initial
value $Y(t_0)=Y_0$. Observe that, according to (\ref{matrizant}),
the map $\varphi^t$ can be expressed
in terms of the fundamental matrix (or evolution operator) $U(t,t_0)$ as
$\varphi^{t_f}(Y_0) = U(t_f,t_0)
Y_0$. Then time-symmetry establishes that
\[
U(t_0,t_f) = U^{-1}(t_f,t_0)
\]
or, in terms of the Magnus expansion,
\[
\Omega(t_f,t_0) = - \Omega(t_0,t_f).
\]
To take advantage of this feature, let us write the solution of
(\ref{eq:evolution}) at the final time $t_f=t_0 + s$ as
\begin{equation} \label{t-s3}
Y(t_{1/2} + \frac{s}{2}) = \exp \left( \Omega(t_{1/2} + \frac{s}{2},
t_{1/2} - \frac{s}{2}) \right) \, Y(t_{1/2} - \frac{s}{2}),
\end{equation}
where $t_{1/2} = (t_0 + t_f)/2$. Then
\begin{equation} \label{t-s4}
Y(t_{1/2} - \frac{s}{2}) = \exp \left(- \Omega(t_{1/2} + \frac{s}{2},
t_{1/2} - \frac{s}{2}) \right) \, Y(t_{1/2} + \frac{s}{2}).
\end{equation}
On the other hand, the solution at $t_0$ can be written as
\begin{equation} \label{t-s5}
Y(t_{1/2} - \frac{s}{2}) = \exp \left( \Omega(t_{1/2} - \frac{s}{2},
t_{1/2} + \frac{s}{2}) \right) \, Y(t_{1/2} + \frac{s}{2}),
\end{equation}
so that, by comparing (\ref{t-s4}) and (\ref{t-s5}),
\begin{equation} \label{t-s6}
\Omega(t_{1/2} - \frac{s}{2},t_{1/2} + \frac{s}{2}) = -
\Omega(t_{1/2} + \frac{s}{2},t_{1/2} - \frac{s}{2})
\end{equation}
and thus $\Omega$ does not contain even powers of $s$. If $A(t)$
is an analytic function and a Taylor series centered around
$t_{1/2}$ is considered, then each term in $\Omega_k$ is an odd
function of $s$ and, in particular, $\Omega_{2i+1}(s) =
\mathcal{O}(s^{2i+3})$ for $i \geq 1$. This fact has been noticed in
\cite{iserles01rah,munthe-kaas99cia} and will be fully
exploited in section \ref{Mintegrators} when analyzing the Magnus expansion as a
numerical device for integrating differential equations.
\subsection{Convergence of the Magnus expansion}
As we pointed out in the introduction, from a mathematical point of view, there are at least two
different issues of paramount importance at the very basis of the Magnus expansion:
\begin{enumerate}
\item (\emph{Existence}) For what values of $t$ and for what operators $A$ does equation
(\ref{eq:evolution}) admit an exponential solution in the form $Y(t) = \exp(\Omega(t))$
for a certain $\Omega(t)$?
\item (\emph{Convergence}) Given a certain operator $A(t)$,
for what values of $t$ does the Magnus
series (\ref{M-series}) converge? In other words, when can $\Omega(t)$ in (\ref{eq:Omega})
be obtained as the sum of the series (\ref{M-series})?
\end{enumerate}
Of course, given the relevance of the expansion, both problems have been
extensively treated in the literature since Magnus proposed this formalism in
1954. We next review some of the most significant
contributions regarding both aspects, with special emphasis on the convergence of the
Magnus series.
\subsubsection{On the existence of $\Omega(t)$}
In most cases one is interested in the case where $A$ belongs to a Lie algebra
$\mathfrak{g}$ under the commutator product. In this general
setting the Magnus theorem can be formulated as four statements
concerning the solution of $Y^\prime = A(t) Y$, each one more
stringent than the preceding one \cite{wei63nog}. Specifically,
\begin{itemize}
\item[(A)] The differential equation $Y^\prime = A(t) Y$
has a solution of the form $Y(t) = \exp \Omega(t)$.
\item[(B)] The exponent $\Omega(t)$ lies in the Lie algebra
$\mathfrak{g}$.
\item[(C)] The exponent $\Omega(t)$ is a continuously
differentiable function of $A(t)$ and $t$, satisfying the
nonlinear differential equation $\Omega^\prime = d
\exp_{\Omega}^{-1}(A(t))$.
\item[(D)] The operator $\Omega(t)$ can be computed by the Magnus series
(\ref{M-series}).
\end{itemize}
Let us now analyze in detail the conditions under which statements
(A)-(D) hold.
\vspace*{0.2cm}
\noindent (A) If $A(t)$ and $Y(t)$ are $n \times n$ matrices,
from well-known general theorems on differential equations it is clear that the initial
value problem defined by (\ref{eq:evolution}) and $Y(0)=I$ always
has a uniquely determined solution $Y(t)$ which is continuous and
has a continuous first derivative in any interval in which $A(t)$
is continuous \cite{coddington55tod}. Furthermore, the determinant
of $Y$ is always different from zero, since
\[
\det Y(t) = \exp \left( \int_0^t \, \mathrm{tr}\, A(s) ds
\right).
\]
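This determinant identity (Abel's formula) is easy to check numerically. In the following Python sketch the particular $A(t)$ is our own illustrative choice; the solution is built from many small midpoint-exponential steps, each of which preserves the identity exactly:

```python
import math

def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def msc(c, a):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

def mexp(a, terms=20):
    # truncated exponential series, ample for the tiny steps used below
    out = [[1.0, 0.0], [0.0, 1.0]]
    p = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        p = msc(1.0 / k, mm(p, a))
        out = [[out[i][j] + p[i][j] for j in range(2)] for i in range(2)]
    return out

A = lambda t: [[math.sin(t), 1.0], [t, math.cos(t)]]  # any smooth A(t) works

T, N = 1.0, 4000
h = T / N
Y = [[1.0, 0.0], [0.0, 1.0]]
for k in range(N):
    Y = mm(mexp(msc(h, A((k + 0.5) * h))), Y)

detY = Y[0][0] * Y[1][1] - Y[0][1] * Y[1][0]
# tr A(t) = sin t + cos t, so exp(int_0^1 tr A) = exp((1 - cos 1) + sin 1)
expected = math.exp((1.0 - math.cos(1.0)) + math.sin(1.0))
print(abs(detY - expected) < 1e-6)  # True
```

Since $\det \exp(hA) = \exp(h\,\mathrm{tr}\,A)$ exactly, the only discrepancy comes from the midpoint quadrature of $\int_0^t \mathrm{tr}\,A$, which is far below the tolerance used here.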
On the other hand, it is well known that any matrix $Y$ can be
written in the form $\exp \Omega$ if and only if $\det Y \ne 0$
\cite[p. 239]{gantmacher59tto}, so that it is
always possible to write $Y(t) = \exp \Omega(t)$.
In the general context of Lie groups and Lie algebras, it is
indeed the regularity of the exponential map from the Lie algebra
$\mathfrak{g}$ to the Lie group $\mathcal{G}$ that determines the
global existence of an $\Omega(t) \in \mathfrak{g}$ \cite{dixmier57led,saito57scg}: the
exponential map of a complex Lie algebra is globally one-to-one if
and only if the algebra is nilpotent, i.e. there exists a finite
$n$ such that $\mathrm{ad}_{x_{1}}\mathrm{ad}_{x_{2}}\cdots \mathrm{ad}_{x_{n-1}}x_{n}=0
$, where $x_{j}$ are arbitrary elements from the Lie algebra. In
general, however, the injectivity of the exponential map is only
assured for $\xi \in \mathfrak{g}$ such that $\|\xi\| <
\rho_{\mathcal{G}}$ for a real number $\rho_{\mathcal{G}}
> 0$ and some norm in $\mathfrak{g}$ \cite{moan99ote,moan02obe}.
\vspace*{0.2cm}
\noindent (B) Although in principle $\rho_{\mathcal{G}}$
constitutes a sharp upper bound for the mere existence of the
operator $\Omega \in \mathfrak{g}$, its practical value in the
case of differential equations is less clear. As we have noticed,
any nonsingular matrix has a logarithm, but this logarithm might
lie in $\mathfrak{gl}(n,\mathbb{C})$ even when the matrix itself is real:
the logarithm of $Y(t)$ may be complex even for real $A(t)$
\cite{wei63nog}. In such a situation, the
solution of (\ref{eq:evolution}) cannot be written as the
exponential of a matrix belonging to the Lie algebra over the
field of real numbers. One might argue that this is indeed
possible over the field of complex numbers, but (i) the element
$\Omega$ cannot be computed by the Magnus series (\ref{M-series}), since it
contains only real rational coefficients, and (ii) examples exist
where the logarithm of a complex matrix does not lie in the
corresponding Lie subalgebra \cite{wei63nog}.
It is therefore interesting to determine for which range of $t$ a
real $A(t)$ in (\ref{eq:evolution}) leads to a real logarithm.
This issue has been tackled by Moan in \cite{moan02obe} in the
context of a complete normed (Banach) algebra, proving that if
\begin{equation} \label{mo.1}
\int_0^t \|A(s)\|_2 \, ds < \pi
\end{equation}
then the solution of (\ref{eq:evolution}) can be written indeed as
$Y(t) = \exp \Omega(t)$, where $\Omega(t)$ is in the Banach
algebra.
\vspace*{0.2cm}
\noindent (C) In his original paper \cite{magnus54ote},
Magnus was well aware that if
the function $\Omega(t)$ is assumed to be differentiable, it may
not exist everywhere. In fact, he related the differentiability
issue to the problem of solving $d \exp_{\Omega}(\Omega') = A(t)$
with respect to $\Omega^\prime$ and provided an implicit condition
for an arbitrary $A$. More specifically, he proved the following
result for the case of $n \times n$ matrices (Theorem V in \cite{magnus54ote}).
\begin{theorem} \label{con-mag}
The equation $A(t) = d \exp_{\Omega}(\Omega')$ can be solved by
$\Omega' = d \exp_{\Omega}^{-1} A(t)$ for an arbitrary $n \times n$ matrix
$A$ if and only
if none of the differences between any two of the eigenvalues of
$\Omega$ equals $2\pi i m$, where
$m= \pm 1, \pm 2,\ldots$.
\end{theorem}
This result can be considered, in fact, as a reformulation of Lemma \ref{lemma2},
but, unfortunately, it is
of little practical use unless
the eigenvalues of $\Omega$ can easily be determined from those of $A(t)$. One
would like instead to have conditions based directly on $A$.
\subsubsection{Convergence of the Magnus series}
\label{convergence}
To deal with the validity of statement (D), one has to analyze
the convergence of the series $\sum_{k=1}^\infty \Omega_k$. Magnus
also considered the question of when the series terminates at some
finite index $m$, thus giving a globally valid $\Omega = \Omega_1
+ \cdots +\Omega_m$. This will happen, for instance, if
\[
\left[ A(t), \int_0^t A(s) ds \right] = 0
\]
identically for all values of $t$, since then $\Omega_k = 0$ for
$k >1$. A sufficient (but not necessary) condition for the
vanishing of all terms $\Omega_k$ with $k > n$ is that
\[
[A(s_1), [A(s_2), [A(s_3), \cdots ,[A(s_n), A(s_{n+1})] \cdots
]]] = 0
\]
for any choice of $s_1, \ldots, s_{n+1}$. In fact, the termination
of the series cannot be established solely by examining the
commutativity of $A(t)$ with itself, and Magnus provided an
example illustrating this point.
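The sufficient condition above is easy to observe in practice. As our own illustration, take $A(t) = f(t)M$ for a fixed matrix $M$: then $[A(t_1), A(t_2)] = 0$ for all $t_1, t_2$, and the double integral defining $\Omega_2$ vanishes up to round-off:

```python
def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comm(a, b):
    p, q = mm(a, b), mm(b, a)
    return [[p[i][j] - q[i][j] for j in range(2)] for i in range(2)]

M = [[1.0, 2.0], [3.0, 4.0]]
A = lambda t: [[(1.0 + t) * M[i][j] for j in range(2)] for i in range(2)]

# Omega_2 = (1/2) int_0^T dt1 int_0^{t1} dt2 [A(t1), A(t2)], midpoint rule
T, N = 1.0, 100
h = T / N
O2 = [[0.0, 0.0], [0.0, 0.0]]
for k in range(N):
    t1 = (k + 0.5) * h
    h2 = t1 / 100
    for j in range(100):
        c = comm(A(t1), A((j + 0.5) * h2))
        for i in range(2):
            for l in range(2):
                O2[i][l] += 0.5 * h * h2 * c[i][l]

resid = max(abs(O2[i][j]) for i in range(2) for j in range(2))
print(resid)  # ~ 0 up to round-off
```

In exact arithmetic every commutator in the integrand is identically zero, so the series terminates after $\Omega_1$ in this case.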
In general, however, the Magnus series does not converge unless
$A$ is small in a suitable sense. Several bounds on the actual
radius of convergence in terms of $A$ have been obtained in the
literature. Most of these results can be stated as follows.
If $\Omega_m(t)$ denotes the homogeneous element with $m-1$
commutators in the Magnus series as given by (\ref{recur2}),
then $\Omega(t) = \sum_{m=1}^{\infty} \Omega_m(t)$ is absolutely
convergent for $0 \le t < T$, with
\begin{equation} \label{cm1.1}
T = \max \left\{ t \ge 0 \, : \, \int_0^t \|A(s)\|_2 \, ds < r_c
\right\}.
\end{equation}
In particular, Pechukas and Light \cite{pechukas66ote} and
Karasev and Mosolova \cite{karasev77ipa} obtained $r_c=\log 2 =
0.693147\ldots$, whereas Chacon and Fomenko \cite{Chacon91rff} got
$r_c=0.57745\ldots$. In 1998, Blanes \textit{et al.}
\cite{blanes98maf} and Moan \cite{moan98eao} obtained independently
the improved bound
\begin{equation} \label{cm1.2}
r_c = \frac{1}{2} \int_0^{2 \pi} \frac{1}{2 + \frac{x}{2} (1 - \cot \frac{x}{2})}
dx \equiv \xi = 1.08686870\ldots
\end{equation}
by analyzing the recurrence (\ref{eses})-(\ref{omegn}) and (\ref{recur2}),
respectively. Furthermore, Moan also obtained a bound on the
individual terms $\Omega_m$ of the Magnus series \cite{moan02obe} which is useful,
in particular, for estimating
errors when the series is truncated. Specifically, he showed that
\[
\|\Omega_m(t)\| \le \frac{f_m}{2}
\left( 2 \int_0^t \|A(s)\|_2 \, ds \right)^m \le
\pi \left( \frac{1}{\xi} \int_0^t \|A(s)\|_2 \,
ds \right)^m,
\]
where $f_m$ are the coefficients of
\[
G^{-1}(x) = \sum_{m \ge 1} f_m x^m = x + \frac{1}{4} x^2 +
\frac{5}{72} x^3 + \frac{11}{576} x^4 + \frac{479}{86400} x^5 +
\cdots,
\]
the inverse function of
\[
G(s) = \int_0^s \frac{1}{2 + \frac{x}{2}(1- \cot \frac{x}{2})} \, dx.
\]
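Both the constant $\xi$ in (\ref{cm1.2}) and the listed coefficients $f_m$ can be reproduced with a few lines of Python (our own sketch): $\xi$ by midpoint quadrature, and $f_m$ by exact reversion of the power series of $G$, whose integrand expands as $1/(1 + x/2 + x^2/12 + x^4/720 + \cdots)$ via the Laurent series of $\cot$:

```python
import math
from fractions import Fraction as Fr

# (i) numerical value of the convergence radius r_c = xi in (cm1.2)
def g(x):
    u = 0.5 * x
    return 1.0 / (2.0 + u * (1.0 - math.cos(u) / math.sin(u)))

N = 100000
h = 2.0 * math.pi / N
xi = 0.5 * h * sum(g((k + 0.5) * h) for k in range(N))
print(xi)  # ~ 1.0868687

# (ii) exact check of the first coefficients f_m by series reversion
ORDER = 5
# integrand of G: 1/(1 + x/2 + x^2/12 + 0*x^3 + x^4/720 + ...)
den = [Fr(1), Fr(1, 2), Fr(1, 12), Fr(0), Fr(1, 720)]
rec = [Fr(1)]                       # reciprocal series of den
for n in range(1, ORDER):
    rec.append(-sum(den[k] * rec[n - k] for k in range(1, n + 1)))
G = [Fr(0)] + [rec[n] / (n + 1) for n in range(ORDER)]  # termwise integral

def mulser(a, b, order):
    out = [Fr(0)] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j <= order:
                out[i + j] += ai * bj
    return out

def compose(G, F, order):
    # G(F(x)) truncated at x^order, with G[0] = F[0] = 0
    out = [Fr(0)] * (order + 1)
    P = [Fr(1)]
    for k in range(1, len(G)):
        P = mulser(P, F, order)
        for i, p in enumerate(P):
            out[i] += G[k] * p
    return out

F = [Fr(0), Fr(1)]                  # G^{-1}(x) = x + f_2 x^2 + ...
for n in range(2, ORDER + 1):
    F.append(Fr(0))
    F[n] = -compose(G, F, n)[n]     # solve order by order (dG/ds(0) = 1)
print(F[1:])  # [1, 1/4, 5/72, 11/576, 479/86400]
```

The reversion reproduces the five coefficients displayed above exactly, and the quadrature matches the quoted value of $\xi$ to the digits shown.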
On the other hand, by analyzing
some selected examples, Moan \cite{moan02obe}
concluded that, in order to get convergence \emph{for all real matrices}
$A(t)$, necessarily $r_c \le \pi$ in (\ref{cm1.1}), and more recently Moan and Niesen
\cite{moan06cot} have been able to prove that indeed $r_c=\pi$ when only
real matrices are involved.
In any case, it is important to remark that statement (D) is locally valid,
but cannot be used to compute $\Omega$ in the large. However, as
we have seen, the other statements need not depend on the validity
of (D). In particular, if (B) and (C) are globally valid, one can
still investigate many of the properties of $\Omega$ even though
one cannot compute it with the aid of (D).
\subsubsection{An improved radius of convergence}
\label{subsec273}
The previous results on the convergence of the Magnus series have
been established for $n \times n$ real matrices: if $A(t)$ is a real $n \times n$ matrix, then
(\ref{mo.1}) gives a condition for $Y(t)$ to have a real logarithm. In fact,
under the same condition, the Magnus series (\ref{M-series}) converges precisely
to this logarithm, i.e., its sum $\Omega(t)$ satisfies $\exp(\Omega(t)) = Y(t)$
\cite{moan06cot}.
One should bear in mind, however, that the original expansion was conceived
by requiring only that $A(t)$ be a linear operator depending on a real variable $t$
in an associative ring (Theorem \ref{thMag}). The idea was to define,
in terms of $A$, an operator $\Omega(t)$ such that
the solution of the initial value problem
$Y^\prime = A(t) Y$, $Y(0)=I$,
for a second operator $Y$ is given as $Y = \exp \Omega$.
The proposed expression for $\Omega$ is an infinite series satisfying the condition that
``its partial sums become Hermitian after multiplication by $i$ if $i A$ is a Hermitian operator"
\cite{magnus54ote}. As this quotation illustrates, the Magnus expansion was first derived
in the context of quantum mechanics, and so one typically assumes that it is
also valid when $A(t)$ is a linear operator in a Hilbert space. Therefore, it might be
desirable to have conditions for the convergence of the Magnus series in this more general
setting. In \cite{casas07scf}, by applying standard techniques of
complex analysis and some elementary properties of the unit sphere, the bound
$r_c = \pi$ has been shown to be also valid for \emph{any} bounded normal
operator $A(t)$ in a Hilbert space of arbitrary dimension.
Next we review the main issues involved and refer the reader to \cite{casas07scf}
for a more detailed treatment.
Let us assume that $A(t)$ is a bounded operator in a Hilbert space $\mathcal{H}$,
with $2 \le \mathrm{dim} \ \mathcal{H} \le \infty$. Then we introduce a new parameter
$\varepsilon \in \mathbb{C}$ and denote by $Y(t;\varepsilon)$ the
solution of the initial value problem
\begin{equation} \label{cmeq.3.1}
\frac{dY}{dt} = \varepsilon A(t) Y, \qquad Y(0)=I,
\end{equation}
where now $I$ denotes the identity operator in $\mathcal{H}$. It is known that
$Y(t;\varepsilon)$ is an analytic function of $\varepsilon$ for a fixed value of $t$.
Let us introduce the set $B_{\gamma} \subset \mathbb{C}$
characterized by the real parameter $\gamma$,
\[
B_{\gamma} = \{ \varepsilon \in \mathbb{C} \, : \, |\varepsilon|
\int_0^t \|A(s)\| ds < \gamma \}.
\]
Here $\|\cdot\|$ stands for the norm defined by the inner product on $\mathcal{H}$,
i.e., the $2$-norm introduced in subsection \ref{notations}.
If $t$ is fixed, the operator function
$\varphi(\varepsilon) = \log Y(t;\varepsilon)$ is well defined
in $B_{\gamma}$ when $\gamma$ is small enough, say $\gamma < \log 2$, as an
analytic function of $\varepsilon$.
As a matter of fact, this is a direct consequence of the results collected in section \ref{convergence}:
if, in particular, $|\varepsilon| \int_0^t \|A(s)\| ds < \log 2$, the
Magnus series corresponding to (\ref{cmeq.3.1}) converges and its sum
$\Omega(t; \varepsilon)$ satisfies $\exp(\Omega(t; \varepsilon)) = Y(t; \varepsilon)$.
In other words, the power series $\Omega(t; \varepsilon)$ coincides with
$\varphi(\varepsilon)$ when $|\varepsilon| \int_0^t \|A(s)\| ds < \log 2$, and so
the Magnus series is the power series expansion of $\varphi(\varepsilon)$ around
$\varepsilon = 0$.
\begin{theorem} \label{main-theorem}
The function $\varphi(\varepsilon) = \log Y(t;\varepsilon)$ is an analytic
function of $\varepsilon$ in the set $B_{\pi}$, with
\[
B_{\pi} = \{ \varepsilon \in \mathbb{C} \, : \, |\varepsilon|
\int_0^t \|A(s)\| ds < \pi \}.
\]
If $\mathcal{H}$ is infinite dimensional, the statement holds true if $Y$ is a normal operator.
\end{theorem}
In other words, one can take $\gamma = \pi$. The proof of this theorem is based
on some elementary properties of the unit
sphere $S^1$ in a Hilbert space. Let us define the angle between any two vectors
$x \ne 0$, $y \ne 0$ in $\mathcal{H}$, $\mathrm{Ang}\{x,y\} = \alpha$, $0 \le \alpha \le \pi$, from
\[
\cos \alpha = \frac{\mathrm{Re} \langle x, y \rangle}{\|x\| \, \|y\|},
\]
where $\langle \cdot, \cdot \rangle$ is the inner product on $\mathcal{H}$.
This angle is a metric in $S^1$, i.e., the triangle inequality holds in $S^1$.
The first property we need
is given by the next lemma \cite{moan02obe}.
\begin{lemma} \label{lem-moan}
For any $x \in \mathcal{H}$, $x \ne 0$,
$\mathrm{Ang}\{Y(t;\varepsilon) x, x\} \le |\varepsilon| \int_0^t \|A(s)\| ds$.
\end{lemma}
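A concrete instance of the lemma (our own illustration, with $\varepsilon = 1$ and the constant nilpotent matrix $A = \left(\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right)$, for which $\|A\|_2 = 1$ and $Y(t) = \exp(tA) = \left(\begin{smallmatrix} 1 & t \\ 0 & 1 \end{smallmatrix}\right)$ solves $Y' = AY$):

```python
import math

# take the unit vector x = (0, 1); then Y(t) x = (t, 1) and the lemma's
# bound reads Ang{Y(t)x, x} <= int_0^t ||A(s)|| ds = t
ts = [0.25, 0.5, 1.0, 2.0]
angs = []
for t in ts:
    Yx = (t, 1.0)                              # Y(t) x
    cosang = Yx[1] / math.hypot(Yx[0], Yx[1])  # Re<Yx, x> / (||Yx|| ||x||)
    angs.append(math.acos(cosang))
print(all(a <= t for a, t in zip(angs, ts)))   # True
```

Here the angle is $\arctan t$, which indeed stays below the bound $t$ for all $t > 0$.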
Observe that if $Y$ is a normal operator in $\mathcal{H}$, i.e., $Y Y^{\dag} = Y^{\dag} Y$
(in particular, if $Y$ is unitary), then
$\|Y^{\dag} x\| = \|Yx\|$ for all $x \in \mathcal{H}$ and therefore
$\mathrm{Ang}\{Y^{\dag} x, x\} = \mathrm{Ang}\{Yx, x\}$.
The second required property provides useful information on the location of the
eigenvalues of a given bounded operator in $\mathcal{H}$ \cite{mityagin90unp}.
\begin{lemma} \label{lem-mityagin}
Let $T$ be a (bounded) operator on $\mathcal{H}$.
If $\mathrm{Ang}\{T x,x\} \le \gamma$ and $\mathrm{Ang}\{T^{\dag} x,x\} \le \gamma$
for any $x \ne 0$, $x \in \mathcal{H}$, where $T^{\dag}$ denotes the adjoint operator of $T$,
then the spectrum of $T$, $\sigma(T)$, is contained in the set
\[
\Delta_{\gamma} = \{ z = |z| \e^{i \omega} \in \mathbb{C} \, : \, |\omega| \le \gamma\}.
\]
\end{lemma}
\begin{proof}
(of Theorem \ref{main-theorem}). Let us introduce the
operator $T \equiv Y(t;\varepsilon)$, with $\varepsilon \in B_{\gamma}$, $\gamma < \pi$. Then,
by Lemma \ref{lem-moan}, $\mathrm{Ang}\{T x,x\} \le \gamma$ for all $x \ne 0$; the same
bound holds for the adjoint $T^{\dag}$, and thus, by
Lemma \ref{lem-mityagin},
\begin{equation} \label{eq.pr.1}
\sigma(T) \subset \Delta_{\gamma}.
\end{equation}
If $\mathrm{dim} \, \mathcal{H} = \infty$ and we assume that $Y(t;\varepsilon)$ is a normal operator,
then (\ref{eq.pr.1}) also holds.
From equation (\ref{cmeq.3.1}) in integral form,
\[
Y(t; \varepsilon) = I + \varepsilon \int_0^t A(s) Y ds,
\]
one gets $\|Y\| \le 1 + |\varepsilon| \int_0^t \|A(s)\| \, \|Y\| ds$, and application of Gronwall's
lemma \cite{gronwall19not} leads to
\[
\|Y(t;\varepsilon)\| \le \exp \left( |\varepsilon| \int_0^t \|A(s)\| ds \right).
\]
An analogous reasoning for the inverse operator also proves that
\[
\|Y^{-1}(t;\varepsilon)\| \le \exp \left( |\varepsilon| \int_0^t \|A(s)\| ds \right).
\]
In consequence,
\[
\|T\| \le \e^{\gamma} \qquad \mbox{ and } \qquad \|T^{-1}\| \le \e^{\gamma}.
\]
If $\lambda \in \sigma(T)$, $\lambda \ne 0$, then $|\lambda| \le \|T\|$ \cite{hunter01aan}
and therefore $|\lambda| \le \e^{\gamma}$. In addition, $\frac{1}{\lambda} \in
\sigma(T^{-1})$, so that $|\lambda| \ge \e^{-\gamma}$. Equivalently,
\begin{equation} \label{eq.pr.2}
\sigma(T) \subset \{ z \in \mathbb{C} : \e^{-\gamma} \le |z| \le \e^{\gamma} \} \equiv G_{\gamma}.
\end{equation}
Putting together (\ref{eq.pr.1}) and (\ref{eq.pr.2}), one has
\[
\sigma(T) \subset G_{\gamma} \cap \Delta_{\gamma} \equiv \Lambda_{\gamma}.
\]
Now choose any value $\gamma'$ such that $\gamma < \gamma' < \pi$
(e.g., $\gamma' = (\gamma + \pi)/2$) and consider the closed curve
$\Gamma = \partial \Lambda_{\gamma'}$. Notice that the curve $\Gamma$
encloses $\sigma(T)$ in its interior, so that it is possible to define \cite{dunford58lop} the function
$\varphi(\varepsilon) = \log Y(t; \varepsilon)$ by the equation
\begin{equation} \label{eq.pr.3}
\varphi(\varepsilon) = \frac{1}{2 \pi i} \int_{\Gamma} \log z \, (z I - Y(t;\varepsilon))^{-1} \, dz,
\end{equation}
where the integration along $\Gamma$ is performed in the counterclockwise direction.
As is well known, (\ref{eq.pr.3}) defines an analytic function of $\varepsilon$ in
$B_{\gamma'}$ \cite{dunford58lop} and the result of the theorem follows.
\end{proof}
\begin{theorem} \label{conv-mag}
Let us consider the differential equation $Y^\prime = A(t) Y$ defined in
a Hilbert space $\mathcal{H}$ with $Y(0)=I$, and let $A(t)$ be a bounded operator
in $\mathcal{H}$. Then, the Magnus series
$\Omega(t) = \sum_{k=1}^{\infty} \Omega_k(t)$, with
$\Omega_k$ given by (\ref{recur2}) converges in the interval $ t \in [0,T)$ such that
\[
\int_0^T \|A(s)\| ds < \pi
\]
and the sum $\Omega(t)$ satisfies $\exp \Omega(t) = Y(t)$. The statement also holds when
$\mathcal{H}$ is infinite-dimensional if $Y$ is a normal operator (in particular, if $Y$ is
unitary).
\end{theorem}
\begin{proof}
Theorem \ref{main-theorem} shows that $\log Y(t;\varepsilon) \equiv \varphi(\varepsilon)$ is a
well defined and analytic function of $\varepsilon$ for
\[
|\varepsilon| \int_0^t \|A(s)\| ds < \pi.
\]
It has also been shown that the Magnus series
$\Omega(t;\varepsilon) = \sum_{k=1}^{\infty} \varepsilon^k \Omega_k(t)$, with
$\Omega_k$ given by (\ref{recur2}), is absolutely convergent when
$|\varepsilon| \int_0^t \|A(s)\| ds < \xi = 1.0868...$ and its sum satisfies
$\exp \Omega(t;\varepsilon) = Y(t;\varepsilon)$. Hence, the Magnus series is
the power series of the analytic function $\varphi(\varepsilon)$ in the disk
$|\varepsilon| < \xi / \int_0^t \|A(s)\| ds$. But $\varphi(\varepsilon)$ is analytic
in $B_{\pi} \supset B_{\xi}$ and the power series has to be unique. In consequence,
the power series of $\varphi(\varepsilon)$ in $B_{\pi}$ has to be the same as the power
series of $\varphi(\varepsilon)$ in $B_{\xi}$, which is precisely the Magnus series. Finally,
by taking $\varepsilon = 1$ we get the desired result.
\end{proof}
\subsubsection{Further discussion}
Theorem \ref{conv-mag} provides sufficient conditions for the convergence
of the Magnus series based on an estimate by the norm of the operator $A$. In
particular, it guarantees that the operator $\Omega(t)$ in $Y(t) = \exp \Omega(t)$
can safely be obtained with the convergent series $\sum_{k \ge 1} \Omega_k(t)$ for
$0 \le t < T$ when the terms $\Omega_k(t)$ are computed with (\ref{recur2}).
A natural question at this
stage is what the optimal convergence domain is. In other words,
is the bound estimate $r_c = \pi$ given by Theorem \ref{conv-mag}
sharp, or is there still room for improvement? In order to clarify this issue,
we next analyze two simple examples involving $2 \times 2$ matrices.
\noindent \textbf{Example 1}. Moan and Niesen \cite{moan06cot} consider the
coefficient matrix
\begin{equation} \label{ej1.1}
A(t) = \left( \begin{array}{rr}
2 & \ t \\
0 & -1
\end{array} \right).
\end{equation}
If we introduce, as before, the complex parameter $\varepsilon$ in the problem, the
corresponding exact solution $Y(t;\varepsilon)$ of (\ref{cmeq.3.1}) is given by
\begin{equation} \label{ej1.2}
Y(t;\varepsilon) = \left( \begin{array}{lc}
\e^{2 \varepsilon t} & \ \ \frac{1}{9 \varepsilon} \e^{2 \varepsilon t} - \left(
\frac{1}{9 \varepsilon} + \frac{1}{3} t \right) \e^{-\varepsilon t} \\
0 & \e^{-\varepsilon t}
\end{array} \right)
\end{equation}
and therefore
\[
\log Y(t;\varepsilon) = \left( \begin{array}{rc}
2 \varepsilon t & \ g(t;\varepsilon) \\
0 & -\varepsilon t
\end{array} \right), \quad \mbox{ with } \ \ g(t;\varepsilon) = \frac{t (1 - \e^{3 \varepsilon t} +
3 \varepsilon t)}{3(1- \e^{3 \varepsilon t})}.
\]
The Magnus series can be obtained by computing the Taylor expansion of $\log Y(t;\varepsilon)$
around $\varepsilon = 0$. Notice that the function $g$ has a singularity when
$\varepsilon t = \frac{2\pi}{3} i$, and thus, by taking $\varepsilon = 1$, the Magnus series converges only for $t < \frac{2}{3} \pi$. On the other hand, the condition
$\int_0^T \|A(s)\| ds < \pi$ leads to $T \approx 1.43205 < \frac{2}{3} \pi$. In consequence, the
actual convergence domain of the Magnus series is larger than the estimate provided
by Theorem \ref{conv-mag}.
\hfill{$\Box$}
\vspace*{0.3cm}
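Both numbers above can be reproduced with a few lines of code. The following Python sketch (NumPy/SciPy assumed; illustrative only) solves $\int_0^T \|A(s)\| ds = \pi$ for the matrix (\ref{ej1.1}), using the spectral norm, and compares the result with the actual convergence limit $t = 2\pi/3$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def spec_norm(s):
    # spectral norm of A(s) = [[2, s], [0, -1]] from (ej1.1)
    return np.linalg.norm(np.array([[2.0, s], [0.0, -1.0]]), 2)

def K(T):
    # K(T) = int_0^T ||A(s)|| ds
    return quad(spec_norm, 0.0, T)[0]

# largest T guaranteed by Theorem conv-mag: K(T) = pi
T_bound = brentq(lambda T: K(T) - np.pi, 0.1, 2.0)
t_exact = 2.0 * np.pi / 3.0    # singularity of g at eps*t = 2*pi*i/3
print(T_bound, t_exact)
```

With the Frobenius norm instead of the spectral norm the value of $T$ only decreases (the Frobenius norm dominates the spectral norm), so the conclusion $T < 2\pi/3$ is unchanged.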
\noindent \textbf{Example 2}. Let us introduce the matrices
\begin{equation} \label{ej2.1}
X_{1}= \left(
\begin{array}{cr}
1 & \ 0 \\
0 & -1
\end{array} \right) = \sigma_3, \qquad
X_{2}= \left(
\begin{array}{cc}
0 & \ 1 \\
0 & 0
\end{array} \right)
\end{equation}
and define
\[
A(t)= \left\{
\begin{array}{ll}
\beta \, X_{2} \quad & 0\leq t \leq 1 \\
\alpha \, X_{1} \quad & t > 1
\end{array} \right.
\]
with $\alpha, \beta$ complex constants. Then, the solution of
equation (\ref{eq:evolution}) at $t=2$ is
\[
Y(2) = \e^{\alpha X_{1}} \e^{\beta X_{2}} = \left(
\begin{array}{lc}
\e^{\alpha} \quad & \beta \e^{\alpha} \\
0 & \e^{-\alpha}
\end{array} \right),
\]
so that
\begin{equation} \label{ex1.4}
\log Y(2) = \log(\e^{\alpha X_{1}} \e^{\beta X_{2}}) = \alpha X_{1} +
\frac{2\alpha \beta}{1- \e^{-2\alpha}} \, X_{2},
\end{equation}
an analytic function if $|\alpha| < \pi$ with first singularities
at $\alpha = \pm i \pi$. Therefore, the Magnus series
cannot converge at $t=2$ if $|\alpha| \ge \pi$, independently
of the value of $\beta \ne 0$, even though in this case it is possible to obtain a
closed-form expression for the general term. Specifically, a straightforward computation with
the recurrence (\ref{eses})-(\ref{omegn}) shows that
\begin{equation} \label{ex2.2}
\sum_{n=1}^{\infty} \Omega_{n}(2) =
\alpha X_{1} + \beta X_{2} + \sum_{n=2}^{\infty}
(-1)^{n-1}\frac{2^{n-1}B_{n-1}}{(n-1)!} \ \alpha^{n-1}
\beta \, X_{2}.
\end{equation}
If we take the spectral norm, then $\|X_1\| = \|X_2\| = 1$ and
\[
\int_0^{t=2} \|A(s)\| ds = |\alpha| + |\beta|,
\]
so that the convergence domain provided by
Theorem \ref{conv-mag} is $|\alpha| + |\beta| < \pi$ for this example. Notice that in the
limit $|\beta| \rightarrow 0$ this domain is optimal.
\hfill{$\Box$}
\vspace*{0.3cm}
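Formula (\ref{ex1.4}) can be verified numerically for concrete parameter values. A minimal Python sketch (SciPy's \texttt{expm}/\texttt{logm}; the values of $\alpha$ and $\beta$ are arbitrary choices with $|\alpha| < \pi$):

```python
import numpy as np
from scipy.linalg import expm, logm

alpha, beta = 0.7, 1.3               # sample values with |alpha| < pi
X1 = np.array([[1.0, 0.0], [0.0, -1.0]])
X2 = np.array([[0.0, 1.0], [0.0, 0.0]])

# Y(2) = e^{alpha X1} e^{beta X2}
Y2 = expm(alpha * X1) @ expm(beta * X2)
# closed form (ex1.4) for log Y(2)
Z_exact = alpha * X1 + (2 * alpha * beta / (1 - np.exp(-2 * alpha))) * X2
err = np.linalg.norm(logm(Y2) - Z_exact)
print(err)
```

Note that \texttt{logm} returns the principal logarithm, which is the relevant branch here since the eigenvalues $\e^{\pm\alpha}$ are real and positive.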
From the analysis of Examples 1 and 2 we can conclude the following. First,
the convergence domain of the Magnus series provided by Theorem \ref{conv-mag}
is the best result one can get for a generic bounded operator $A(t)$ in a Hilbert
space, in the sense that one may consider specific matrices $A(t)$, as in Example 2, where
the series diverges for any time $t$ such that $\int_0^t \|A(s)\| ds > \pi$. Second, there
are also situations (as in Example 1) where the bound estimate $r_c=\pi$
is still rather conservative: \emph{the Magnus series converges indeed for a
larger time interval than that
given by Theorem \ref{conv-mag}}. This is particularly evident if one takes
$A(t)$ as a diagonal matrix: then,
the exact solution $Y(t;\varepsilon)$ of (\ref{cmeq.3.1}) is a diagonal matrix whose elements
are non-vanishing entire functions of $\varepsilon$, and obviously $\log Y(t;\varepsilon)$
is also an entire function of $\varepsilon$. In such circumstances, the convergence condition
$|\varepsilon| \int_0^t \|A(s)\| ds < \pi$ for the Magnus series is clearly far too conservative.
Thus a natural question arises: is it possible to obtain a more precise criterion of
convergence? In trying to answer this question, in \cite{casas07scf} an alternative
characterization of the convergence has been developed which is valid for $n \times n$
complex matrices. More precisely, a connection has been established between the convergence
of the Magnus series and the existence of multiple eigenvalues of the
fundamental matrix $Y(t; \varepsilon)$ for a fixed $t$, which we denote by
$Y_t(\varepsilon)$. By using the theory of analytic matrix functions, and in particular,
of the logarithm of an
analytic matrix function (such as is done e.g. in \cite{yakubovich75lde}), the following
result has been proved in \cite{casas07scf}:
if the analytic matrix function $Y_t(\varepsilon)$
has an eigenvalue $\rho_0(\varepsilon_0)$ of multiplicity $l > 1$
for a certain $\varepsilon_0$ such that: (a) there is a curve in the $\varepsilon$-plane
joining $\varepsilon = 0$ with $\varepsilon = \varepsilon_0$, and
(b) the number of equal terms in
$\log \rho_1(\varepsilon_0)$, $\log \rho_2(\varepsilon_0)$, $\ldots,
\log \rho_{l}(\varepsilon_0)$ such that $\rho_k(\varepsilon_0) = \rho_0$, $k=1,\ldots, l$ is less
than the maximum dimension of the elementary Jordan block corresponding to $\rho_0$, then
the radius of convergence of the series $\Omega_t(\varepsilon) =
\sum_{k \ge 1} \varepsilon^k \Omega_{t,k}$ verifying
$\exp \Omega_t(\varepsilon) = Y_t(\varepsilon)$ is precisely $r = |\varepsilon_0|$. An analysis
along the same line has been carried out in \cite{veshtort03nsi}.
When this criterion is applied to Example 1, it gives as the radius of convergence of the
Magnus series corresponding to equation (\ref{cmeq.3.1}) for a fixed $t$,
\begin{equation} \label{Y-Seq1}
\Omega_t(\varepsilon) = \sum_{k=1}^{\infty} \varepsilon^k \ \Omega_{t,k},
\end{equation}
the value
\begin{equation} \label{cex1}
r = |\varepsilon| = \frac{2 \pi}{3t}.
\end{equation}
To get the actual convergence domain of the usual Magnus expansion
we have to take $\varepsilon = 1$, and so, from (\ref{cex1}), we get $2\pi/(3t) = 1$,
or equivalently $t = 2 \pi/3$, i.e., the same result obtained from the analysis of the exact solution.
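Indeed, the eigenvalues of $Y_t(\varepsilon)$ are $\e^{2\varepsilon t}$ and $\e^{-\varepsilon t}$, and they collide precisely at $\varepsilon_0 = 2\pi i/(3t)$, in agreement with (\ref{cex1}). A two-line numerical check (Python, illustrative):

```python
import numpy as np

t = 1.0
eps0 = 2j * np.pi / (3.0 * t)        # predicted closest branch point
# eigenvalues of Y(t; eps) from (ej1.2)
lam1, lam2 = np.exp(2 * eps0 * t), np.exp(-eps0 * t)
print(lam1, lam2, abs(eps0))
```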
With respect to Example 2, one gets \cite{casas07scf}
\[
|\varepsilon| = \frac{\pi}{|\alpha| (t-1)}.
\]
If we now fix $\varepsilon = 1$, the actual $t$-domain of convergence of the Magnus series
is
\[
t < 1 + \frac{\pi}{|\alpha|}.
\]
Observe that, when $t=2$, we get $|\alpha| = \pi$ and thus the previous
result is recovered: the Magnus
series converges only for $|\alpha| < \pi$.
It should also be mentioned that
the case
of a diagonal matrix $A(t)$ is compatible with this alternative
characterization \cite{casas07scf}.
\subsection{Magnus expansion and the BCH formula}
The Magnus expansion can also be used to get explicitly
the terms of the series $Z$ in
\[
Z = \log( \e^{X_{1}} \, \e^{X_{2}} ),
\]
$X_{1}$ and $X_{2}$ being two non-commuting indeterminate variables. As is
well known \cite{postnikov94lga},
\begin{equation} \label{eq.5.0}
Z = X_{1} + X_{2} + \sum_{n=2}^{\infty} G_n(X_{1},X_{2}),
\end{equation}
where $G_n(X_{1},X_{2})$ is a homogeneous Lie polynomial in $X_{1}$ and $X_{2}$ of
degree $n$; in other words, $G_n$ can be expressed in terms of $X_{1}$
and $X_{2}$ by addition, multiplication by rational numbers and nested
commutators. This result is often known as the
Baker--Campbell--Hausdorff (BCH) theorem and proves to be very
useful in various fields of mathematics (theory of linear
differential equations \cite{magnus54ote}, Lie group theory
\cite{gorbatsevich97fol}, numerical analysis \cite{hairer06gni})
and theoretical physics (perturbation theory, transformation
theory, Quantum Mechanics and Statistical Mechanics
\cite{kumar65oet,weiss62tbf,wilcox67eoa}). In particular, in the
theory of Lie groups, with this theorem one can explicitly write
the operation of multiplication in a Lie group in canonical
coordinates in terms of the Lie bracket operation in its
algebra and also prove the existence of a local Lie group with a
given Lie algebra \cite{gorbatsevich97fol}.
If $X_{1}$ and $X_{2}$ are matrices and one considers the piecewise
constant matrix-valued function
\begin{equation} \label{eq.2.2.3b}
A(t) = \left\{ \begin{array}{ccl}
X_{2} & \quad & 0 \le t \le 1 \\
X_{1} & \quad & 1 < t \le 2
\end{array} \right.
\end{equation}
then the exact solution of (\ref{eq:evolution}) at $t=2$ is $Y(2)
= \e^{X_{1}} \, \e^{X_{2}}$. By computing $Y(2) = \e^{\Omega(2)}$ with
recursion (\ref{recur2}) one gets for the first terms
\begin{eqnarray} \label{eq.2.2.4}
\Omega_1(2) & = & X_{1} + X_{2} \nonumber \\
\Omega_2(2) & = & \frac{1}{2} [X_{1},X_{2}] \\
\Omega_3(2) & = & \frac{1}{12} [X_{1},[X_{1},X_{2}]] - \frac{1}{12}
[X_{2},[X_{1},X_{2}]] \nonumber \\
\Omega_4(2) & = & \frac{1}{24} [X_{1},[X_{2},[X_{2},X_{1}]]]. \nonumber
\end{eqnarray}
In general, each $G_n(X_{1},X_{2})$ is a linear combination of the
commutators of the form $[V_1,[V_2, \ldots,[V_{n-1},V_n] \ldots]]$
with $V_i \in \{X_{1},X_{2}\}$ for $1 \le i \le n$, the coefficients being
universal rational constants. This is perhaps one of the reasons why
the Magnus expansion is often referred to in the literature as the
continuous analogue of the BCH formula. As a matter of fact,
Magnus proposed a different method for obtaining the first terms in
the series (\ref{M-series}) based on (\ref{eq.5.0}) \cite{magnus54ote}.
Now we can apply Theorem \ref{conv-mag} and obtain the following sharp bound.
\begin{theorem}
The Baker--Campbell--Hausdorff series in the form (\ref{eq.5.0})
converges absolutely when $\|X_{1}\| + \|X_{2}\| < \pi$.
\end{theorem}
This result can be generalized, of course, to any number of non-commuting
operators $X_1, X_2, \ldots, X_q$. Specifically, the series
\[
Z = \log( \e^{X_{1}} \, \e^{X_{2}} \, \cdots \, \e^{X_{q}} ),
\]
converges absolutely if $\|X_{1}\| + \|X_{2}\| + \cdots + \|X_q\| < \pi$.
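The first terms (\ref{eq.2.2.4}) can be checked numerically against $\log(\e^{X_1}\e^{X_2})$ for matrices of small norm, where the truncation error after $\Omega_4$ is of fifth order in the norms. A Python sketch (the random matrices and their scale are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
X1 = 0.05 * rng.standard_normal((3, 3))
X2 = 0.05 * rng.standard_normal((3, 3))

def c(a, b):                          # commutator [a, b]
    return a @ b - b @ a

Z = logm(expm(X1) @ expm(X2))
# Omega_1 + Omega_2 + Omega_3 + Omega_4 from (eq.2.2.4)
bch4 = (X1 + X2 + 0.5 * c(X1, X2)
        + (c(X1, c(X1, X2)) - c(X2, c(X1, X2))) / 12.0
        + c(X1, c(X2, c(X2, X1))) / 24.0)
err = np.linalg.norm(Z - bch4)
print(err)
```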
\subsection{Preliminary linear transformations}\label{PLT}
To improve the accuracy and the bounds on the convergence domain
of the Magnus series for a given problem, it is quite common to
consider first a linear transformation on the system in such a way that
the resulting differential equation can be more easily handled in a certain sense to be specified
for each problem. To illustrate the procedure, let us consider a simple example.
\noindent \textbf{Example}. Suppose we have the $2 \times 2$ matrix
\begin{equation} \label{ex1.7}
A(t) = \alpha(t) X_{1} + \beta(t) X_{2},
\end{equation}
where $X_1$ and $X_2$ are given by (\ref{ej2.1}) and
$\alpha$ and $\beta$ are complex functions of time,
$\alpha, \beta : \mathbb{R} \longrightarrow \mathbb{C}$. Then the
exact solution of $Y^{\prime} = A(t) Y$, $Y(0)=I$ is
\begin{equation}\label{ex1.8}
Y(t) = \left(
\begin{array}{cc}
\e^{\int_0^t\alpha(s)ds} \quad &
\displaystyle \int_0^t ds_{1}\e^{\int_{s_1}^t\alpha(s_2)ds_2} \ \beta(s_1) \
\e^{-\int_{0}^{s_1}\alpha(s_2)ds_2} \\
0 \quad & \e^{-\int_0^t\alpha(s)ds}
\end{array} \right).
\end{equation}
Let us factorize the solution as $Y(t)= \tilde{Y}_0(t) \tilde{Y}_1(t)$, with $\tilde{Y}_0(t)$ the
solution of the initial value problem defined by
\begin{equation}\label{ej1.11}
\tilde{Y}_0^{\prime} = A_0(t) \, \tilde{Y}_0 \qquad A_0(t) = \alpha(t) X_1 = \left(
\begin{array}{cc}
\alpha(t) & \ 0 \\
0 & -\alpha(t)
\end{array} \right)
\end{equation}
and $\tilde{Y}_0(0)=I$. Then, the
equation satisfied by $\tilde{Y}_1$ is
\begin{equation}\label{ej1.12}
\tilde{Y}_1^{\prime} = A_1(t) \tilde{Y}_1, \qquad \mbox{ with } \quad A_1=\tilde{Y}_0^{-1}A \, \tilde{Y}_0
- A_0 = \left(
\begin{array}{cc}
\quad 0 \quad & \beta(t) \, \e^{-2\int_0^t\alpha(s)ds} \\
0 & 0
\end{array} \right),
\end{equation}
so that the first term of the Magnus expansion applied to (\ref{ej1.12})
already provides the exact solution (\ref{ex1.8}).
\hfill{$\Box$}
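For concrete coefficient functions this can be verified numerically. The sketch below (Python; $\alpha(t)=\cos t$ and $\beta(t)=\sin t$ are arbitrary illustrative choices) integrates $Y' = A(t)Y$ directly and compares with $\tilde{Y}_0(t)\,\e^{\Omega_1(t)}$, where $\Omega_1$ is the first Magnus term of the transformed equation; the first term is exact here because $A_1(t)$ is proportional to $X_2$ at all times:

```python
import numpy as np
from scipy.integrate import quad, solve_ivp
from scipy.linalg import expm

alpha, beta = np.cos, np.sin          # illustrative choices
X1 = np.array([[1.0, 0.0], [0.0, -1.0]])
X2 = np.array([[0.0, 1.0], [0.0, 0.0]])

def A(t):
    return alpha(t) * X1 + beta(t) * X2

t_end = 1.5
# reference solution of Y' = A(t) Y, Y(0) = I
sol = solve_ivp(lambda t, y: (A(t) @ y.reshape(2, 2)).ravel(),
                (0.0, t_end), np.eye(2).ravel(), rtol=1e-11, atol=1e-12)
Y_ref = sol.y[:, -1].reshape(2, 2)

g1 = lambda s: quad(alpha, 0.0, s)[0]          # int_0^s alpha
Y0 = expm(g1(t_end) * X1)                      # solves (ej1.11) exactly
# Omega_1 of the transformed equation: all A_1(t) are multiples of X2,
# so they commute and exp(Omega_1) is already the exact Y_1
b = quad(lambda s: beta(s) * np.exp(-2.0 * g1(s)), 0.0, t_end)[0]
err = np.linalg.norm(Y0 @ expm(b * X2) - Y_ref)
print(err)
```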
This, of course, is not
the typical behaviour, but in any case, if the transformation $\tilde{Y}_0$ in the factorization
$Y(t)= \tilde{Y}_0(t) \tilde{Y}_1(t)$ is chosen
appropriately, the first few terms of the Magnus series applied to the equation
satisfied by $\tilde{Y}_1$ usually give very accurate approximations.
Since this kind of preliminary transformation is frequently used in Quantum
Mechanics, we specialize the treatment to this particular setting
here and consider equation (\ref{laecuacion}) instead. In other words, we
write (\ref{eq:evolution}) in the more conventional form
of the time-dependent
Schr\"odinger equation
\begin{equation} \label{eq:evolU}
\frac{dU(t)}{dt} = \tilde{H}(t) U(t),
\end{equation}
where $\tilde{H} \equiv H/(i \hbar)$, $\hbar$ is the reduced Planck constant, $H$ is the Hamiltonian
and $U$ corresponds to the evolution operator.
As in the example, suppose that $\tilde{H}$ can be split into two pieces,
$\tilde{H} = \tilde{H}_{0} + \varepsilon \tilde{H}_{1}$, with
$\tilde{H}_{0}$ a solvable Hamiltonian and $\varepsilon \ll 1$ a
small perturbation parameter. In such a situation one tries to
integrate out the $\tilde{H} _0$ piece so as to circumscribe the
approximation to the $\tilde{H} _1$ piece. In the case of
equation (\ref{eq:evolU}) this is carried out by means of
a linear time-dependent transformation. In Quantum Mechanics this
preliminary linear transformation corresponds to a new evolution
picture, such as the interaction or the adiabatic picture.
Among other possibilities we may factorize the
time-evolution operator as
\begin{equation}\label{Ufac}
U(t)=G(t)U_G(t) G^\dag (0),
\end{equation}
where $G(t)$ is a linear transformation yet to be
specified. In the new $G$-picture, the corresponding time-evolution operator
$U_G$ obeys the equation
\begin{equation}\label{UGeq}
U_G^{\prime}(t)=\tilde{H} _G(t)U_G(t),\quad \tilde{H} _G(t)= G^\dag (t)\tilde{H} (t)G(t)-G^\dag
(t) G^{\prime}(t).
\end{equation}
The choice of $G$ depends on the nature of the problem at hand.
There is no generic formal recipe to find out the most appropriate
$G$. In the spirit of the canonical transformations of Classical
Mechanics, one would build up $U_G$ itself perturbatively.
However, the aim here is different because $G$ is defined from the
beginning. Two rather common choices are:
\begin{itemize}
\item \textbf{Interaction Picture}. It is well suited when $\tilde{H} _0(t)$ is
diagonal in some basis, or when it is constant. In that case
\begin{equation}\label{GInt}
G(t)=\exp\left( \int_0^t \tilde{H} _0(\tau) d \tau \right)
\end{equation}
so that
\begin{equation}\label{HGInt}
\tilde{H} _G(t)=\varepsilon \exp\left( -\int_0^t \tilde{H} _0(\tau) d \tau\right)
\tilde{H} _1(t) \exp\left( \int_0^t \tilde{H} _0(\tau) d
\tau\right) .
\end{equation}
\item \textbf{Adiabatic Picture}. A time scale of the
system much smaller than that of the interaction defines an
adiabatic regime. For instance, suppose that the Hamiltonian
operator $H(t)$ depends smoothly on time through the variable
$\tau = t/T$, where $T$ determines the time scale and
$T \rightarrow \infty$. Then the quantum mechanical evolution of the system
is described by $dU/dt = \tilde{H}(\varepsilon t) U$, with
$\varepsilon \equiv 1/T \ll 1$, or equivalently
\begin{equation} \label{adiab1}
\frac{dU(\tau)}{d\tau} = \frac{1}{\varepsilon} \tilde{H}(\tau) U(\tau),
\end{equation}
with $\tau \equiv \varepsilon t$. In this case
the appropriate transformation is a $G$ that
renders $\tilde{H} (t)$ instantaneously diagonal, i.e.,
\begin{equation}\label{GAdiab}
G^\dag (t)\tilde{H} (t)G(t)=E(t)={\rm diag}[E_1(t),E_2(t),\ldots ].
\end{equation}
The term $G^\dag G^{\prime}$ of the new Hamiltonian in (\ref{UGeq})
is, under adiabatic conditions, very small. Its main diagonal
generates the so-called Berry, or geometric, phase \cite{berry84qpf}.
\end{itemize}
The two types of $G$ are not mutually exclusive; they may be used in
succession. As a matter of fact, corrections to the adiabatic
approximation must be followed by an interaction-picture transformation. In turn, an
adiabatic transformation may be iterated, as proposed by Garrido
\cite{garrido64gai} and Berry \cite{berry87qpc}.
In section 4 we shall make extensive use of
these preliminary linear transformations in several
standard problems of Quantum Mechanics to illustrate the practical features of the
Magnus expansion.
\subsection{Exponential product representations}
In contrast to the Magnus expansion, much less attention has been paid
to solutions of (\ref{eq:evolution}) in the form of a product of
exponential operators. The two approaches are by no means equivalent,
since in general the operators $\Omega_n$ do not commute with each
other. For instance, for a quantum system as in equation
(\ref{eq:evolU}), the ansatz $U=\prod \exp (\Phi_n)$ (where
$\Phi_n$ are skew-Hermitian operators to be determined) is an
alternative to the Magnus expansion, also preserving the unitarity
of the time-evolution operator. One such procedure was devised by
Fer in 1958 in a paper devoted to the study of systems of linear
differential equations \cite{fer58rdl}. Although the original
result obtained by Fer was cited and explicitly stated by Bellman
\cite[p. 204]{bellman60itm}, sometimes it has been misquoted as a
reference for the Magnus expansion \cite{baye73ats}. On the other
hand, Wilcox associated Fer's name with an interesting alternative
infinite product expansion which is indeed a continuous analogue
of the Zassenhaus formula \cite{wilcox67eoa} (something also
attributed to the Fer factorization \cite[p. 372]{magnus76cgt}).
This however also led to some confusion since his approach is in
the spirit of perturbation theory, whereas Fer's original one was
essentially nonperturbative. The situation was clarified in
\cite{klarsfeld89eip}, where also some applications to Quantum
Mechanics were carried out for the first time.
In this section we discuss briefly the main features of the Fer
and Wilcox expansions, and how the latter can be derived from the
successive terms of the Magnus series. This will make clear the
different character of the two expansions. We also include some
details on the factorization of the solution proposed by Wei and
Norman \cite{wei63las,wei64ogr}. Finally we provide another
interpretation of the Magnus expansion as the continuous analogue
of the BCH formula in linear control theory.
\subsubsection{Fer method}\label{Fer-section}
An intuitive way to introduce Fer formalism is the following
\cite{iserles00lgm}.
Given the matrix linear system $Y^{\prime} = A(t) Y$, $Y(0)=I$, we know
that
\begin{equation} \label{Fer1a}
Y(t)=\exp(F_1(t))
\end{equation}
is the exact solution if $A$
commutes with its time integral $F_1(t)=\int_0^t A(s) ds$, and
$Y(t)$ evolves in
the Lie group $\mathcal{G}$ if $A$ lies in its corresponding Lie algebra
$\mathfrak{g}$. If the goal is to respect the Lie-group structure in
the general case, we need to `correct' (\ref{Fer1a}) without losing this
important feature.
Two possible remedies arise in a quite natural way. The first is just to seek a
correction $\Delta(t)$ evolving in the Lie algebra $\mathfrak{g}$ so that
\[
Y(t) = \exp \left( F_{1}(t) + \Delta(t) \right).
\]
This is nothing but the Magnus expansion. Alternatively, one may correct
with $Y_{1}(t)$ in the Lie group $\mathcal{G}$,
\begin{equation} \label{Fer2}
Y(t) = \exp (F_{1}(t)) Y_{1}(t).
\end{equation}
This is precisely the approach pursued by Fer, i.e. representing the
solution of (\ref{eq:evolution}) in the factorized form (\ref{Fer2}), where
(hopefully) $Y_1$ will be closer to the identity matrix
than $Y$ at least for small $t$.
The question now is to find the differential equation satisfied by
$Y_1$. Substituting (\ref{Fer2}) into equation (\ref{eq:evolution}) we have
\begin{equation}\label{Fer3}
\frac{d}{d t} Y =\left( \frac{d}{d t}
\e^{F_1} \right) Y_1 \; + \; \e^{F_1} \frac{d}{d t}
Y_1 = A \e^{F_1} Y_1,
\end{equation}
so that, taking into account the expression for the derivative of
the exponential map (Lemma \ref{lemma1}), one arrives easily at
\begin{equation}\label{Fer5}
Y^{\prime}_1 = A_{1}(t) Y_1 \qquad Y_{1}(0) = I,
\end{equation}
where
\begin{equation}\label{Fer6}
A_{1}(t) = \e^{-F_1} A \e^{F_1} - \int_0^{1}{\rm d} x \;\e^{-x
F_1 } A \e^{x F_1}.
\end{equation}
The above procedure can be repeated to yield a sequence of
iterated matrices $A_{k}$. After $n$ steps we have the following recursive
scheme, known as the Fer expansion:
\begin{eqnarray}
Y &=&\e^{F_{1}}\e^{F_{2}}\cdots \e^{F_{n}}Y_{n} \label{Fer7} \\
Y_{n}^{\prime} &=&A_{n}(t)Y_{n}\quad \ \ \ \ \ Y_{n}(0)=I,\qquad n=1,2,\ldots
\nonumber
\end{eqnarray}
with $F_{n}(t)$ and $A_{n}(t)$ given by
\begin{eqnarray}
F_{n+1}(t) &=&\int_{0}^{t}A_{n}(s)ds\ \ \ \ \ \ \
A_{0}(t)=A(t),\quad n=0,1,2... \nonumber \\
A_{n+1} &=&\e^{-F_{n+1}}A_{n}\e^{F_{n+1}}-\int_{0}^{1}dx\
\e^{-xF_{n+1}}A_{n}\e^{xF_{n+1}} \nonumber \\
&=&\int_{0}^{1}dx\int_{0}^{x}du\ \e^{-(1-u)F_{n+1}}\left[
A_{n},F_{n+1}\right] \e^{(1-u)F_{n+1}} \label{Fer8} \\
&=&\sum_{j=1}^{\infty }\frac{(-1)^{j}\, j}{(j+1)!}
\mathrm{ad}_{F_{n+1}}^{j}(A_{n}) ,\quad n=0,1,2... \nonumber
\end{eqnarray}
If, after $n$ steps, we set $Y_{n}=I$, we are left with an approximation
to the exact solution $Y(t)$.
Inspection of the expression of $A_{n+1}$ in (\ref{Fer8}) reveals
an interesting feature of the Fer expansion. If we assume that a
perturbation parameter $\varepsilon$ is introduced in $A$, i.e., if
we substitute $\varepsilon A$ for $A$ in the formalism, then,
since $F_{n+1}$ is of the same order in $\varepsilon$ as
$A_n$, an elementary recursion shows that the matrix $A_{n}$
starts with a term of order $\varepsilon^{2^n}$ (correspondingly
the operator $F_n$ contains terms of order $\varepsilon^{2^{n-1}}$
and higher). This should greatly enhance the rate of convergence
of the product in equation (\ref{Fer7}) to the exact solution.
It is possible to derive a bound on the
convergence domain in time of the expansion \cite{blanes98maf}. The idea
is just to look for conditions on $A(t)$ which ensure that
$F_{n}\rightarrow 0$ as $n\rightarrow \infty$. As in the case of
the Magnus expansion, we take $A(t)$ to be a bounded matrix with
$\|A(t)\| \le k(t)\equiv k_{0}(t)$. Fer's algorithm, equations
(\ref{Fer7}) and
(\ref{Fer8}), then provides a recursive relation among corresponding bounds
$k_{n}(t)$ for $\| A_{n}(t)\|$. If we denote $K_{n}(t)\equiv
\int_{0}^{t}k_{n}(s)ds$, we can write this
relation in the generic form $k_{n+1}=f(k_{n},K_{n})$, which after
integration gives
\begin{equation}
K_{n+1}=M(K_{n}). \label{iteracio}
\end{equation}
The question now is: when is $K_{n} \rightarrow 0$ as
$n\rightarrow \infty$? This is certainly so if $0$ is a stable
fixed point for the iteration of the mapping $M$ and $K_{0}$
is within its basin of attraction. To see when this is the case
we have to solve the equation $\xi =M(\xi)$ to find where the
next fixed point lies. Let us do it explicitly. By taking norms
in the recursive scheme (\ref{Fer8}) we have
\[
\|A_{n+1}\| \ \leq \int_{0}^{1}dx\int_{0}^{x}du\
\e^{2(1-u)K_{n}}\left\| \left[ A_{n},F_{n+1}\right] \right\| ,
\]
which can be written as $\| A_{n+1}\| \leq k_{n+1}$,
with
\[
k_{n+1}=\frac{1-\e^{2K_{n}}(1-2K_{n})}{2K_{n}}\frac{dK_{n}}{dt}
\]
and consequently $K_{n+1}$ is given by eq. (\ref{iteracio}) with
\begin{equation}
M(K_{n})=\int_{0}^{K_{n}}\frac{1-\e^{2x}(1-2x)}{2x}\ dx.
\label{rec}
\end{equation}
That is the mapping we have to iterate. It is clear that $\xi =0$
is a stable fixed point of $M$. The next, unstable, fixed point
is $\xi =0.8604065$. So we can conclude that the
Fer expansion converges at least for values of time $t$ such that
\begin{equation}
\int_{0}^{t} \|A(s)\| ds\leq
K_{0}(t)<0.8604065. \label{con1}
\end{equation}
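The location of this fixed point is readily recovered numerically. A Python sketch (SciPy quadrature and root finding assumed) iterating on the map (\ref{rec}):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def integrand(x):
    # integrand of (rec); it behaves like x + 4x^2/3 near x = 0
    if x < 1e-8:
        return x
    return (1.0 - np.exp(2.0 * x) * (1.0 - 2.0 * x)) / (2.0 * x)

def M(K):
    return quad(integrand, 0.0, K)[0]

# the nonzero solution of xi = M(xi) bounds the convergence domain (con1)
xi = brentq(lambda K: M(K) - K, 0.3, 1.0)
print(xi)
```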
Notice that the bound on the convergence domain provided by this result is
smaller than the corresponding one for the Magnus expansion (Theorem
\ref{conv-mag}).
\subsubsection{Wilcox method} \label{Wilcox-section}
A more tractable form of infinite product expansion has been devised by
Wilcox \cite{wilcox67eoa} in analogy with the Magnus approach. The
idea, as usual, is to treat $\varepsilon$ in
\begin{equation} \label{wil.1}
Y^{\prime} = \varepsilon A(t) Y, \qquad Y(0)=I
\end{equation}
as an expansion parameter and to determine the successive factors
in the product
\begin{equation}\label{W1}
Y(t) = \e^{W_1} \, \e^{W_2} \, \e^{W_3} \cdots
\end{equation}
by assuming that $W_n$ is exactly of order $\varepsilon^n$. Hence,
it is clear from the very beginning that the methods of Fer and
Wilcox give rise indeed to completely different infinite product
representations of the solution $Y(t)$.
The explicit expressions of $W_1$, $W_2$ and $W_3$ are given in
\cite{wilcox67eoa}. It is noteworthy that the operators $W_n$ can
be expressed in terms of the Magnus operators $\Omega_k$, for which
compact formulae and recursive procedures are available. To this
end we simply use the Baker--Campbell--Hausdorff formula to
extract formally from the identity
\begin{equation}\label{W3}
\e^{W_1} \, \e^{W_2} \, \e^{W_3} \cdots =
\e^{\Omega_1+\Omega_2+\Omega_3+\cdots} \; ,
\end{equation}
terms of the same order in $\varepsilon$. After a straightforward
calculation one finds for the first terms
\begin{eqnarray}\label{W4}
W _1 &=& \Omega_1, \qquad W _2 = \Omega_2, \qquad W_3 = \Omega_3 -
\frac{1}{2}[\Omega_1,\Omega_2],\\
W_4 &=& \Omega_4 - \frac{1}{2}[\Omega_1,\Omega_3]
+\frac{1}{6}[\Omega_1,[\Omega_1,\Omega_2]],\quad {\rm etc.}
\end{eqnarray}
The main interest of the Wilcox formalism stems from the fact that
it provides explicit expressions for the successive approximations
to a solution represented as an infinite product of exponential
operators. This offers a useful alternative to the Fer expansion
whenever the computation of $F_n$ from equation (\ref{Fer8}) is
too cumbersome. We note in passing that to first order the three
expansions yield the same result: $F_1=W_1=\Omega_1$.
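The order-by-order matching in (\ref{W4}) can be tested numerically: with $W_1, W_2, W_3$ built from given matrices $\Omega_k$ (here random, and weighted by $\varepsilon^k$), the defect $\|\e^{W_1}\e^{W_2}\e^{W_3} - \e^{\varepsilon\Omega_1+\varepsilon^2\Omega_2+\varepsilon^3\Omega_3}\|$ must scale as $\varepsilon^4$. A Python sketch (illustrative; the matrices are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
O1, O2, O3 = (rng.standard_normal((3, 3)) for _ in range(3))

def c(a, b):
    return a @ b - b @ a

def defect(eps):
    W1 = eps * O1
    W2 = eps**2 * O2
    W3 = eps**3 * (O3 - 0.5 * c(O1, O2))   # W3 from (W4)
    magnus = eps * O1 + eps**2 * O2 + eps**3 * O3
    return np.linalg.norm(expm(W1) @ expm(W2) @ expm(W3) - expm(magnus))

# halving eps should reduce the defect by roughly 2**4 = 16
e1, e2 = defect(0.1), defect(0.05)
print(e1, e2)
```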
\subsubsection{Wei--Norman factorization}
\label{w-nfacto}
Suppose now that $A$ and $Y$ in equation (\ref{eq:evolution})
are linear operators and that $A(t)$ can be
expressed in the form
\begin{equation} \label{wn1}
A(t) = \sum_{i=1}^m u_i(t) X_i, \qquad m \; \mbox{ finite,}
\end{equation}
where the $u_i(t)$ are scalar functions of time, and $X_1, X_2,
\ldots, X_m$ are time-independent operators. Furthermore, suppose
that the Lie algebra $\mathfrak{g}$ generated by the $X_i$ is of
finite dimension $l$ (this is obviously true if $A$ and $Y$ are
finite dimensional matrix operators). Under these conditions, if
$X_1, X_2, \ldots, X_l$ is a basis for $\mathfrak{g}$, the Magnus
expansion allows one to express the solution locally in the form $Y(t)
= \exp( \sum_{i=1}^l f_i(t) X_i)$. Wei and Norman, on the other
hand, show that there exists a neighborhood of $t=0$ in which the
solution can be written as a product \cite{wei63las,wei64ogr}
\begin{equation} \label{wn2}
Y(t) = \exp(g_1(t) X_1) \, \exp(g_2(t) X_2) \cdots \exp(g_l(t)
X_l),
\end{equation}
where the $g_i(t)$ are scalar functions of time. Moreover, the
$g_i(t)$ satisfy a set of nonlinear differential equations which
depend only on the Lie algebra $\mathfrak{g}$ and the $u_i(t)$'s.
These authors also study the conditions under which this representation
is valid globally, that is, for all $t$. In particular, this
happens for all solvable Lie algebras in a suitable basis and for
any real $2 \times 2$ system of equations \cite{wei64ogr}.
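A minimal worked example may help. Take the solvable algebra spanned by a nilpotent $X_1$ and a diagonal $X_2$ with $[X_1,X_2]=-2X_1$ (the specific matrices and coefficient functions $u_1,u_2$ below are illustrative choices). Differentiating $Y=\e^{g_1 X_1}\e^{g_2 X_2}$ and matching coefficients against $A=u_1X_1+u_2X_2$ gives the Wei--Norman equations $g_2'=u_2$ and $g_1'=u_1+2u_2 g_1$ for this algebra; the sketch integrates them and compares the factored solution with a direct integration of $Y'=A(t)Y$.

```python
import numpy as np

# solvable 2x2 example: X1 nilpotent, X2 diagonal, [X1, X2] = -2 X1
X1 = np.array([[0.0, 1.0], [0.0, 0.0]])
X2 = np.array([[1.0, 0.0], [0.0, -1.0]])
u1 = lambda t: np.cos(t)          # illustrative controls
u2 = lambda t: 0.5 * np.sin(t)

def rk4(f, y0, t1, n=2000):
    # classical RK4 integrator
    h, y, t = t1 / n, np.array(y0, dtype=float), 0.0
    for _ in range(n):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2); k4 = f(t + h, y + h * k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4); t += h
    return y

# reference: integrate Y' = A(t) Y directly (flattened 2x2)
def full(t, y):
    Y = y.reshape(2, 2)
    return ((u1(t) * X1 + u2(t) * X2) @ Y).ravel()

# Wei-Norman ODEs for Y = exp(g1 X1) exp(g2 X2):
#   g2' = u2,   g1' = u1 + 2 u2 g1   (from [X1, X2] = -2 X1)
def wn(t, g):
    return np.array([u1(t) + 2 * u2(t) * g[0], u2(t)])

t1 = 1.5
Y_ref = rk4(full, np.eye(2).ravel(), t1).reshape(2, 2)
g1, g2 = rk4(wn, [0.0, 0.0], t1)
# exp(g1 X1) = I + g1 X1 (nilpotency), exp(g2 X2) = diag(e^{g2}, e^{-g2})
Y_wn = (np.eye(2) + g1 * X1) @ np.diag([np.exp(g2), np.exp(-g2)])
err = np.linalg.norm(Y_wn - Y_ref)
```

Note that both exponentials are available in closed form here, which is precisely the practical appeal of the factorization (\ref{wn2}).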
In the terminology of Lie algebras and Lie groups, the representation
$Y(t) = \exp(\sum_{i=1}^l f_i(t) X_i)$ corresponds to the \emph{canonical
coordinates of the first kind}, whereas equation (\ref{wn2}) defines a
system of \emph{canonical coordinates of the second kind}
\cite{belinfante72aso,gorbatsevich97fol,postnikov94lga}.
This class of factorization has been used in combination with
the Fer expansion to obtain closed-form solutions of the Cauchy
problem defined by certain classes of parabolic linear partial
differential equations \cite{casas96sol}. When the algorithm is
applied, the solution is written as a finite product of
exponentials depending on certain ordering functions for which
convergent approximations are constructed in explicit form.
Notice that the representation (\ref{wn2}) is clearly useful when the spectral
properties of the individual operators $X_i$ are readily
available. Since the $X_i$ are constant and often have simple
physical interpretation, the evaluation of the eigenvalues and
eigenvectors can be done once for all values of $t$, and this may
facilitate the computation of the exponentials. This situation arises, in
particular, in control theory \cite{belinfante72aso}. The functions $u_i(t)$
are known as the controls, and the operator $Y(t)$ acts on the states of the
system, describing how the states evolve in time.
\subsection{The continuous BCH formula}
\label{lcbch}
When applied to the
equation $Y^\prime = A(t) Y$ with the matrix $A(t)$ given by
(\ref{wn1}), the Magnus expansion adopts a particularly simple form. Furthermore, by making
use of the structure constants of the Lie algebra, it is
relatively easy to get explicit expressions for
the canonical coordinates of the first kind $f_i(t)$.
Let us
illustrate the procedure by considering the particular case
\[
A(t) = u_1(t) X_1 + u_2(t) X_2.
\]
Writing $\alpha_i(t) = \int_0^t u_i(s) ds$ and, for a given
function $\mu$,
\[
\left(\int_i \mu \right)(t) \equiv \int_0^t u_i(s) \mu(s) ds,
\]
a straightforward calculation shows that the first terms of $\Omega$ in
the Magnus expansion
can be written as
\begin{eqnarray} \label{wn3}
\Omega(t) & = & \beta_{1}(t) X_1 + \beta_{2}(t) X_2 +
\beta_{12}(t) [X_1,X_2] + \beta_{112}(t) [X_1,[X_1,X_2]] \nonumber \\
& & + \beta_{212}(t) [X_2,[X_1,X_2]] + \cdots
\end{eqnarray}
where
\begin{eqnarray} \label{wn4}
\beta_i &=& \alpha_i, \qquad\quad i=1,2, \nonumber\\
\beta_{12} &=& \frac12 (\int_1 \alpha_2 - \int_2 \alpha_1), \\
\beta_{112} &=&
\frac{1}{12} (\int_2 \alpha_1^2 - \int_1 \alpha_1 \alpha_2 )
- \frac{1}{4} (\int_1 \int_2 \alpha_1 - \int_1 \int_1 \alpha_2), \nonumber \\
\beta_{212} &=&
\frac{1}{12} (\int_2 \alpha_1 \alpha_2 - \int_1 \alpha_2^2 )
+ \frac{1}{4} (\int_2 \int_1 \alpha_2 - \int_2 \int_2 \alpha_1). \nonumber
\end{eqnarray}
Taking into account the structure constants of the particular
finite dimensional Lie algebra under consideration, from (\ref{wn3})
one easily gets the functions $f_i(t)$. In the general case, (\ref{wn3})
allows us to express $\Omega$ as a linear combination of
elements of a basis of the free Lie algebra generated by $X_1$ and
$X_2$. In this case, the recurrence (\ref{eses})-(\ref{omegn})
defining the Magnus expansion can be carried out using only the nested
integrals
\begin{equation} \label{wn5}
\alpha_{i_1 \cdots i_s}(t) \equiv \left(\int_{i_s} \cdots \int_{i_1} 1
\right)(t) =
\int_0^t \int_0^{t_{s}} \cdots \int_0^{t_3} \int_0^{t_2} u_{i_s}(t_s)
\cdots u_{i_1}(t_1) dt_1 \cdots dt_s
\end{equation}
involving the functions $u_1(t)$ and $u_2(t)$. Thus,
for instance, the coefficients in (\ref{wn4}) can be written (after
successive integration by parts) as
\begin{eqnarray*}
\beta_i &=& \alpha_i, \qquad \quad i=1,2, \\
\beta_{12} &=& \frac12 (\int_1 \alpha_{2} - \int_2
\alpha_1) \ = \ \frac12 (\alpha_{21} - \alpha_{12}), \\
\beta_{112} &=&
\frac{1}{6} (\int_2 \int_1 \alpha_1 + \int_1 \int_1 \alpha_2 - 2
\int_1 \int_2 \alpha_1) \ = \
\frac16 (\alpha_{112}+\alpha_{211}-2 \alpha_{121}), \\
\beta_{212} &=&
\frac{1}{6} (2 \int_2 \int_1 \alpha_2 - \int_2 \int_2 \alpha_1 -
\int_1 \int_2 \alpha_2) \ = \
\frac16 (2 \alpha_{212}-\alpha_{122}- \alpha_{221}).
\end{eqnarray*}
The series (\ref{wn3}) expressed in terms of the integrals
(\ref{wn5}) is usually referred to as
the continuous Baker--Campbell--Hausdorff formula
\cite{kawski02tco,murua06tha} for the linear case. We will generalize this formalism
to the nonlinear case in the next section.
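The equivalence between the two forms of the coefficients, say $\beta_{112}$, is readily verified by quadrature. In the sketch below the controls $u_1,u_2$ and the interval $[0,1]$ are arbitrary illustrative choices, and every $\int_i$ operation is implemented as a cumulative trapezoidal rule.

```python
import numpy as np

# illustrative controls on an arbitrary interval [0, 1]
t = np.linspace(0.0, 1.0, 8001)
u1, u2 = np.exp(-t), t**2

def cum(f):
    # cumulative trapezoidal rule: returns (int_0^t f)(t) on the grid
    return np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(t))))

I1 = lambda f: cum(u1 * f)          # (int_1 f)(t)
I2 = lambda f: cum(u2 * f)          # (int_2 f)(t)
a1, a2 = I1(np.ones_like(t)), I2(np.ones_like(t))   # alpha_1, alpha_2

# beta_112 in the form with products of alpha's ...
prod_form = ((I2(a1**2)[-1] - I1(a1 * a2)[-1]) / 12
             - (I1(I2(a1))[-1] - I1(I1(a2))[-1]) / 4)
# ... and in terms of nested integrals alpha_{112}, alpha_{211}, alpha_{121}
a112, a211, a121 = I2(I1(a1))[-1], I1(I1(a2))[-1], I1(I2(a1))[-1]
nested_form = (a112 + a211 - 2 * a121) / 6
```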
\section{Generalizations of the Magnus expansion}
\label{section3}
In view of the attractive properties of the Magnus expansion as a
tool to construct approximate solutions of non-autonomous systems
of linear ordinary differential equations, it is hardly surprising
that several attempts have been made along the years either to
extend the procedure to a more general setting or to manipulate
the series to achieve further improvements. In this section we
review some of these generalizations, with special emphasis on the
treatment of nonlinear differential equations.
First we reconsider an iterative method originally
devised by Voslamber \cite{voslamber72oea} for computing
approximations $\Omega^{(n)}(t)$
in $Y(t)=\exp(\Omega(t))$ for the linear equation
$Y^\prime = A(t) Y$. The resulting approximation may be
interpreted as a re-summation of terms in the Magnus series and
possesses interesting features not shared by the corresponding
truncation of the conventional Magnus expansion. Then we adapt the
Magnus expansion to the physically relevant case of a periodic
matrix $A(t)$ with period $T$ which incorporates in a natural way
the structure of the solution ensured by the Floquet theorem. Next
we go one step further and generalize the Magnus expansion to the
so-called nonlinear Lie equation $Y^\prime = A(t,Y) Y$. Finally,
we show how the procedure can be applied to \emph{any} nonlinear
explicitly time-dependent differential equation. Although the
treatment is largely formal, in section~\ref{section5} we will see
that it is of paramount importance for designing new and highly
efficient numerical integration schemes for this class of
differential equations. We particularize the treatment to the
important case of Hamiltonian systems and also establish an interesting connection
with the Chen--Fliess series for nonlinear differential equations.
\subsection{Voslamber iterative method} \label{Voslamber}
Let us consider equation (\ref{eq:evolution}) when there is a perturbation
parameter $\varepsilon$ in the (in general, complex) matrix $A$, i.e.,
equation (\ref{wil.1}). Theorem \ref{conv-mag}
guarantees that, for sufficiently small values of $t$,
$Y(t;\varepsilon) = \exp \Omega(t;\varepsilon)$, where
\begin{equation} \label{vos.2}
\Omega(t;\varepsilon) = \sum_{n=1}^{\infty}
\varepsilon^n \, \Omega_{n}(t).
\end{equation}
The advantages of this representation and of the approximations
obtained by truncating the series have been amply illustrated in
previous sections. There is, however, a property
of the exact solution not shared by any truncation of the series
(\ref{vos.2}) which could be relevant in certain physical
applications: $(1/\varepsilon) \sum_{n=1}^{m} \varepsilon^n
\Omega_{n}(t)$ with $m>1$ is unbounded for $\varepsilon
\rightarrow \infty$ even when $\Omega(t,\varepsilon)/\varepsilon$
is bounded uniformly with respect to $\varepsilon$ under rather
general assumptions on the matrix $A(t)$ \cite{voslamber72oea}. Notice that
this is the case, in particular, for the adiabatic problem (\ref{adiab1}).
When Schur's unitary triangularization
theorem \cite{horn85man} is applied to the exact solution
$Y(t;\varepsilon)$ one has
\begin{equation} \label{vos.3}
T_{\varepsilon} = U^{\dag} \,Y \,U,
\end{equation}
where $T_{\varepsilon}$ is an upper triangular matrix and $U$ is unitary. In
other words, $Y$ is unitarily equivalent to an upper triangular
matrix $T_{\varepsilon}$. Differentiating (\ref{vos.3}) and using (\ref{wil.1})
one arrives at
\[
T_{\varepsilon}' = \varepsilon U^{\dag} A U T_{\varepsilon} +
\left[ T_{\varepsilon}, U^{\dag} U' \right].
\]
Since the second term on the right hand side is not upper
triangular, it follows at once that
\[
T_{\varepsilon}(t) = \exp \left( \varepsilon \int_0^t (U^{\dag} A U
)_{\vartriangle} ds \right),
\]
where the subscript $\vartriangle$ denotes the upper triangular
part (including terms on the main diagonal) of the corresponding
matrix. Taking into account (\ref{vos.3}) one gets
\begin{equation} \label{vos.4}
\Omega(t,\varepsilon) = \varepsilon U \left( \int_0^t (U^{\dag} A U
)_{\vartriangle} ds \right) U^{\dag}.
\end{equation}
Considering now the Frobenius norm (which is unitarily invariant,
section \ref{notations}) of both sides of this equation, one has
\begin{eqnarray} \label{suine2f}
\| \Omega\|_F & = & |\varepsilon|
\left\| \int_0^t (U^{\dag} A U
)_{\vartriangle} ds \right\|_F \le |\varepsilon| \int_0^t
\| (U^{\dag} A U )_{\vartriangle} \|_F \, ds \nonumber \\
& & \le |\varepsilon| \int_0^t
\| U^{\dag} A U \|_F \, ds = |\varepsilon| \int_0^t \| A \|_F \, ds.
\end{eqnarray}
If the spectral norm is considered instead, from inequalities
(\ref{ineqf1}), (\ref{ineqf2}) and (\ref{suine2f}), one concludes that
\[
\| \Omega\|_2 \le \sqrt{\mathrm{rank} (A)} \,\,
|\varepsilon| \int_0^t \| A \|_2 \, ds.
\]
In any case, what is important to stress here is that for the exact solution
$\Omega(t;\varepsilon)/\varepsilon$ is bounded uniformly with
respect to the $\varepsilon$ parameter. Voslamber proceeds by
deriving an algorithm for generating successive approximations of
$Y(t;\varepsilon) = \exp(\Omega(t;\varepsilon))$ which, in contrast to the
direct series expansion (\ref{vos.2}), preserve this property. His
point of departure is to get a series expansion for the so-called
\textit{dressed derivative of $\Omega$} \cite{oteo05iat}
\begin{equation} \label{vos.5}
\Gamma \equiv \e^{\Omega/2} \, \Omega^{\prime} \,
\e^{-\Omega/2}.
\end{equation}
This is accomplished by inserting (\ref{fmag1}) in (\ref{vos.5}).
Specifically, one has
\begin{eqnarray*}
\Gamma & = & \e^{ \mathrm{ad}_{\Omega/2}} \,
\Omega^{\prime} = \e^{ \mathrm{ad}_{\Omega/2}} \, d
\exp_{\Omega}^{-1} (\varepsilon A) = \e^{ \mathrm{ad}_{\Omega/2}} \,
\frac{\mathrm{ad}_{\Omega}}{\e^{\mathrm{ad}_{\Omega}}-1} (\varepsilon A) \\
& = &
\frac{\mathrm{ad}_{\Omega/2}}{\sinh \mathrm{ad}_{\Omega/2}} (\varepsilon A) =
\sum_{n=0}^{\infty} \frac{B_n(1/2)}{n!} \mathrm{ad}_{\Omega}^n
(\varepsilon A)
\end{eqnarray*}
and finally \cite{voslamber72oea,oteo05iat}
\begin{equation} \label{vos.6}
\Gamma = \sum_{n=0}^{\infty} \frac{2^{1-n}-1}{n!} B_n
\, \mathrm{ad}_{\Omega}^n
(\varepsilon A),
\end{equation}
where, as usual, $B_n$ denote Bernoulli numbers. In order to
express $\Gamma$ as a power series of $\varepsilon$ one has to
insert the Magnus series (\ref{vos.2}) into eq. (\ref{vos.6}).
Then we get
\begin{equation} \label{vos.7}
\Gamma(t;\varepsilon) = \sum_{n=1}^{\infty} \varepsilon^n \,
\Gamma_n(t),
\end{equation}
where the terms $\Gamma_n$ can be expressed as a function of
$\Omega_k$ with $k \le n-2$ through the recursive procedure
\cite{oteo05iat}
\begin{eqnarray} \label{vos.8}
\Gamma_1 & = & A, \qquad\quad \Gamma_2 = 0, \\
\Gamma_n & = & \sum_{j=2}^{n-1} c_j \sum_{
k_1 + \cdots + k_j = n-1 \atop
k_1 \ge 1, \ldots, k_j \ge 1}
\,
\mathrm{ad}_{\Omega_{k_1}} \, \mathrm{ad}_{\Omega_{k_2}} \cdots
\, \mathrm{ad}_{\Omega_{k_j}} A, \qquad n \ge 3. \nonumber
\end{eqnarray}
Here
\[
c_j \equiv \frac{2^{1-j}-1}{j!} B_j,
\]
with $c_{2j+1} = 0$, $c_2=-1/24$, $c_4=7/5760$, etc. In
particular,
\begin{eqnarray*}
\Gamma_3 & = & -\frac{1}{24} [\Omega_1,[\Omega_1,A]] \\
\Gamma_4 & = & -\frac{1}{24} ([\Omega_1,[\Omega_2,A]] +
[\Omega_2,[\Omega_1,A]]).
\end{eqnarray*}
Now, from the definition of $\Gamma$, eq. (\ref{vos.5}), we write
\[
\Omega^{\prime} = \e^{-\Omega/2} \, \Gamma \, \e^{\Omega/2},
\]
which, after integration over $t$, can be used for constructing
successive approximations to $\Omega$ once the terms $\Gamma_n$
are known in terms of $\Omega_k$, $k \le n-2$. Thus, the $n$th
approximant $\Omega^{(n)}$ is defined by
\begin{equation} \label{vos.9}
\Omega^{(n)}(t) = \int_0^t \e^{-\frac{1}{2}
\Omega^{(n-1)}(s)} \Gamma^{(n)}(s) \, \e^{\frac{1}{2}
\Omega^{(n-1)}(s)} ds, \qquad n=1,2,\ldots
\end{equation}
where the $\varepsilon$ dependence has been omitted for simplicity
and $\Gamma^{(n)} = \sum_{k=1}^{n} \varepsilon^k \Gamma_k$,
$\Omega^{(0)}=O$. The first two approximants read explicitly
\begin{eqnarray} \label{vos.10}
\Omega^{(1)}(t,\varepsilon) & = & \varepsilon \, \Omega_1(t) =
\varepsilon \int_0^t A(s) ds \nonumber \\
\Omega^{(2)}(t,\varepsilon) & = & \varepsilon \int_0^t \e^{-\frac{1}{2}
\Omega^{(1)}(s,\varepsilon)} A(s) \, \e^{\frac{1}{2}
\Omega^{(1)}(s,\varepsilon)} ds.
\end{eqnarray}
In this approach the solution is approximated by $Y(t) \simeq
\exp(\Omega^{(n)})$. Observe that $\Omega^{(n)}$ contains
contributions from an infinity of orders in $\varepsilon$, whereas
the $n$th term in the Magnus series (\ref{vos.2}) is proportional
to $\varepsilon^n$. Furthermore, $\Omega^{(n)}$ contains
$\sum_{k=1}^n \varepsilon^k \Omega_k$ \emph{and} also higher
powers $\varepsilon^m$ ($m>n$). In particular, one gets easily
\[
\Omega^{(2)}(t;\varepsilon) = \varepsilon \, \Omega_1(t) +
\varepsilon^2 \, \Omega_2(t) + \sum_{k=3}^{\infty} \,
\frac{(-1)^{k-1}}{2^{k-1} (k-1)!} \varepsilon^k \int_0^t
\mathrm{ad}_{\Omega_1(s)}^{k-1} A(s) ds.
\]
From the structure of the expression (\ref{vos.9}) it is also
possible to find the asymptotic behaviour of
$\Omega^{(n)}/\varepsilon$ ($n \ge 3$) for $\varepsilon
\rightarrow \infty$ and prove that it remains bounded
\cite{voslamber72oea}, just as the exact solution does. This property
of the Voslamber iterative algorithm may lead to better
approximations of $Y(t)$ when the parameter $\varepsilon$ is not
very small, since in that case $\Omega^{(n)}/\varepsilon$ is
expected to remain close to $\Omega/\varepsilon$, as shown in
\cite{oteo05iat}.
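A small numerical experiment illustrates the second approximant in (\ref{vos.10}). The $2\times 2$ matrix $A(t)$ below is an arbitrary illustrative choice; $\Omega^{(2)}$ is built by trapezoidal quadrature and $\exp(\Omega^{(2)})$ is compared with a Runge--Kutta reference solution of $Y'=\varepsilon A(t)Y$. Since $\Omega^{(2)}$ reproduces $\varepsilon\Omega_1+\varepsilon^2\Omega_2$ exactly, the defect is $\mathcal{O}(\varepsilon^3)$, so halving $\varepsilon$ should reduce it by roughly a factor of $8$.

```python
import numpy as np
from scipy.linalg import expm

A = lambda s: np.array([[0.0, 1.0 + s], [-1.0, np.sin(3 * s)]])  # illustrative

def reference(eps, t1, n=4000):
    # RK4 integration of Y' = eps*A(t) Y as a reference solution
    h, Y, s = t1 / n, np.eye(2), 0.0
    f = lambda s, Y: eps * A(s) @ Y
    for _ in range(n):
        k1 = f(s, Y); k2 = f(s + h/2, Y + h/2 * k1)
        k3 = f(s + h/2, Y + h/2 * k2); k4 = f(s + h, Y + h * k3)
        Y = Y + h/6 * (k1 + 2*k2 + 2*k3 + k4); s += h
    return Y

def second_approximant(eps, t1, n=2000):
    s = np.linspace(0.0, t1, n + 1)
    h = s[1] - s[0]
    As = np.array([A(sk) for sk in s])
    # Omega^{(1)}(s) = eps * int_0^s A  (cumulative trapezoid)
    steps = h * (As[1:] + As[:-1]) / 2
    Om1 = eps * np.concatenate((np.zeros((1, 2, 2)), np.cumsum(steps, axis=0)))
    # integrand of Omega^{(2)}: exp(-Om1/2) A exp(Om1/2)
    G = np.array([expm(-Om1[k] / 2) @ As[k] @ expm(Om1[k] / 2)
                  for k in range(n + 1)])
    Om2 = eps * (h / 2 * (G[0] + G[-1]) + h * G[1:-1].sum(axis=0))
    return expm(Om2)

t1 = 1.0
err = lambda eps: np.linalg.norm(second_approximant(eps, t1) - reference(eps, t1))
ratio = err(0.1) / err(0.05)   # expected to be close to 2**3 = 8
```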
\subsection{Floquet--Magnus expansion} \label{Floquet}
We now turn our attention to a specific case of equation
(\ref{eq:evolution}) with important physical and mathematical
applications, namely when the (complex) $n \times n$ matrix-valued
function $A(t)$ is periodic with period $T$. Then further
information is available on the structure of the exact solution as
is given by the celebrated Floquet theorem, which ensures the
factorization of the solution in a periodic part and a purely
exponential factor. More specifically,
\begin{equation} \label{fflo2}
Y(t) = P(t) \, \exp(t F),
\end{equation}
where $F$ and $P$ are $n \times n$ matrices, $P(t)=P(t+T)$ for all
$t$ and $F$ is constant. Thus, although a solution of
(\ref{eq:evolution}) is not, in general, periodic, the departure
from periodicity is determined by (\ref{fflo2}). This result, when
applied in quantum mechanics, is referred to as Bloch wave theory
\cite{bloch28udq,galindo90qme}. It is widely used in problems of
solid state physics where space-periodic potentials are quite
common. In Nuclear Magnetic Resonance this structure is exploited
as far as either time-dependent periodic magnetic fields or sample
spinning are involved \cite{ernst86pon}. Asymptotic stability of the
solution $Y(t)$ is dictated by the nature of the eigenvalues of
$F$, the so-called characteristic exponents of the original
periodic system \cite{hale80ode}.
An alternative manner of interpreting equation (\ref{fflo2}) is to
consider the piece $P(t)$, provided it is invertible, to perform a
transformation of the solution in such a way that the coefficient
matrix corresponding to the new representation has all its matrix
entries given by constants. Thus the piece $\exp (tF)$ in
(\ref{fflo2}) may be considered as an exact solution of the system
(\ref{eq:evolution}) previously moved to a representation where
the coefficient matrix is the constant matrix $F$
\cite{yakubovich75lde}. The
$t$-dependent change of representation is carried out by $P(t)$.
Connecting with section \ref{PLT}, $P(t)$ is the appropriate
preliminary linear transformation
for periodic systems. Of course, Floquet theorem by itself gives
no practical information about this procedure. It just states that
such a representation does exist. In fact, a serious difficulty in
the study of differential equations with periodic coefficients is
that no general method to compute either the matrix $P(t)$ or the
eigenvalues of $F$ is known.
Mainly, two ways of exploiting the above structure of $Y(t)$ are
found in the literature \cite{levante95fqm}.
The first one consists in performing a
Fourier expansion of the formal solution leading to an infinite
system of linear differential equations with constant
coefficients. Thus, the $t$-dependent finite system is replaced
with a constant one at the price of handling infinite dimension.
Resolution of the truncated system furnishes an approximate
solution. The second approach is of perturbative nature. It deals
directly with the form (\ref{fflo2}) by expanding
\begin{equation} \label{fflo3}
P(t)=\sum_{n=1}^{\infty } P_{n}(t), \qquad
F=\sum_{n=1}^{\infty}F_{n}.
\end{equation}
Every term $F_{n}$ in (\ref{fflo3}) is fixed so as to ensure
$P_{n}(t)=P_{n}(t+T)$, which in turn guarantees the Floquet
structure (\ref{fflo2}) at any order of approximation.
Although the Magnus expansion, such as it has been formulated in
this work, does not provide explicitly the structure of the
solution ensured by Floquet theorem, it can be adapted without
special difficulty to cope also with this situation.
The starting point is to introduce the Floquet form (\ref{fflo2})
into the differential equation $Y^{\prime}=A(t)Y$. In that way the
evolution equation for $P$ is obtained:
\begin{equation} \label{ffloPeq}
P^{\prime}(t)= A(t) P(t) - P(t) F, \qquad P(0)=I.
\end{equation}
The constant matrix $F$ is also unknown and we will determine it
so as to ensure $P(t+T)=P(t).$ Now we replace the usual
perturbative scheme in equation (\ref{fflo3}) with the exponential
ansatz
\begin{equation} \label{fflo4}
P(t)=\exp (\Lambda (t)), \qquad \Lambda (0)=O.
\end{equation}
Obviously, $\Lambda (t+T)=\Lambda (t)$ so as to preserve
periodicity. Now equation (\ref{ffloPeq}) yields
\begin{equation} \label{ffloDeq}
\frac{\text{d}}{\text{d}t}\exp (\Lambda )=A \exp (\Lambda )-\exp
(\Lambda )F,
\end{equation}
from which, as with the conventional Magnus expansion, it follows
readily that
\begin{equation} \label{ffloFMeq}
\Lambda^{\prime}= \sum_{k=0}^{\infty} \frac{B_{k}}{k!}
\text{ad}_{\Lambda }^{k} \, (A+(-1)^{k+1}F).
\end{equation}
This equation is now, in the Floquet context, the analogue of
Magnus equation (\ref{fmag1}). Notice that if we put $F=O$ then
(\ref{fmag1}) is recovered. The next move is to consider the
series expansions for $\Lambda $ and $F$
\begin{equation} \label{ffloseries}
\Lambda(t) = \sum_{k=1}^{\infty } \Lambda _{k}(t), \qquad
F=\sum_{k=1}^{\infty } F_{k},
\end{equation}
with $\Lambda _{k}(0)=O,$ for all $k$. Equating terms of the same
order in (\ref{ffloFMeq}) one gets the successive contributions to
the series (\ref{ffloseries}). Therefore, the explicit ansatz we
are proposing reads
\begin{equation} \label{F_ansatz}
Y(t)=\exp \left( \sum_{k=1}^{\infty }\Lambda _{k}(t)\right) \,
\exp \left( t\sum_{k=1}^{\infty }F_{k}\right) .
\end{equation}
This can properly be referred to as the \emph{Floquet--Magnus expansion}.
Substituting the expansions of equation (\ref{ffloseries}) into
(\ref{ffloFMeq}) and equating terms of the same order one can
write
\begin{equation}
\Lambda_{n}^{\prime}=\sum\limits_{j=0}^{n-1}\dfrac{B_{j}}{j!}\left(
W_{n}^{(j)}(t)+(-1)^{j+1}T_{n}^{(j)}(t)\right) \qquad \ \ \ \ \ \ \
\ \ (n\geq 1).
\end{equation}
The terms $W_{n}^{(j)}(t)$ may be obtained by a similar recurrence
to that given in equation (\ref{eses})
\begin{equation} \label{ffloW}
\begin{tabular}{l}
$W_{n}^{(j)}=\sum\limits_{m=1}^{n-j}\left[ \Lambda _{m},W_{n-m}^{(j-1)}%
\right] \qquad \ \ \ \ \ \ \ \ \ \ (1\leq j\leq n-1),$ \\
\\
$W_{1}^{(0)}=A,\qquad \ \ \ \ \ W_{n}^{(0)}=O\qquad \ \ \ \ \ \ \ \ \ (n>1),$%
\end{tabular}
\end{equation}
whereas the terms $T_{n}^{(j)}(t)$ obey the recurrence relation
\begin{equation} \label{ffloT}
\begin{array}{l}
T_{n}^{(j)}=\sum\limits_{m=1}^{n-j}\left[ \Lambda
_{m},T_{n-m}^{(j-1)}\right]
\qquad \ \ \ \ \ \ \ \ \ \ \ \ (1\leq j\leq n-1), \\
\\
T_{n}^{(0)}=F_{n}\qquad \ \ \ \ \ \ \ \ \ (n>0).
\end{array}
\end{equation}
Every $F_{n}$ is fixed by the condition $\Lambda _{n}(t+T)=
\Lambda_{n}(t)$. An outstanding feature is that $F_{n}$ can be
determined independently of $\Lambda _{n}(t)$, since at $t=T$ the solution
$Y(t)=P(t)\exp (tF)$ reduces to $Y(T)=\exp (TF)$. Consequently,
the conventional Magnus expansion $Y(t)=\exp (\Omega (t))$
computed at $t=T$ must furnish
\begin{equation} \label{fflostrobo}
F_{n}=\frac{\Omega _{n}(T)}{T},\qquad \text{for all } \; n.
\end{equation}
The first contributions to the Floquet--Magnus expansion read
explicitly
\begin{eqnarray} \label{firste}
\Lambda _{1}(t) & = & \int_{0}^{t} A(x)~\text{d}x-tF_{1},
\nonumber \\
F_{1} & = & \frac{1}{T} \int_{0}^{T}A(x)~\text{d}x, \nonumber \\
\Lambda _{2}(t) & = & \frac{1}{2} \int_{0}^{t}\left[ A(x)+F_{1},\Lambda _{1}(x)%
\right] ~\text{d}x-tF_{2}, \\ \nonumber
F_{2} & = & \frac{1}{2T} \int_{0}^{T}\left[ A(x)+F_{1},\Lambda _{1}(x)\right] ~%
\text{d}x. \nonumber
\end{eqnarray}
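These first terms are straightforward to evaluate numerically. In the sketch below the $T$-periodic matrix $A(t)=A_0+\cos(2\pi t/T)A_1$ and the test time $t_0$ are arbitrary illustrative choices; the point of the check is that $\Lambda_1$ and $\Lambda_2$ built from (\ref{firste}) are indeed $T$-periodic once $F_1$ and $F_2$ are fixed as indicated.

```python
import numpy as np

# illustrative T-periodic coefficient matrix
A0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
T = 1.0
A = lambda x: A0 + np.cos(2 * np.pi * x / T) * A1

def trap(f, a, b, n=400):
    # matrix-valued trapezoidal rule on [a, b]
    x = np.linspace(a, b, n + 1)
    v = np.array([f(xk) for xk in x])
    return (x[1] - x[0]) * (v[0] / 2 + v[1:-1].sum(axis=0) + v[-1] / 2)

F1 = trap(A, 0.0, T) / T
L1 = lambda t: trap(A, 0.0, t) - t * F1
# integrand of Lambda_2: (1/2) [A(x) + F_1, Lambda_1(x)]
g = lambda x: ((A(x) + F1) @ L1(x) - L1(x) @ (A(x) + F1)) / 2
q = lambda t: trap(g, 0.0, t)
F2 = q(T) / T
L2 = lambda t: q(t) - t * F2

t0 = 0.37
d1 = np.linalg.norm(L1(t0 + T) - L1(t0))   # periodicity defect of Lambda_1
d2 = np.linalg.norm(L2(t0 + T) - L2(t0))   # periodicity defect of Lambda_2
```

The residual defects are due only to quadrature error, not to the construction itself.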
Moreover, from the recurrence relations (\ref{ffloW}) and
(\ref{ffloT}) it is possible to obtain a sufficient condition such
that convergence of the series $\sum \Lambda _{n}$ is guaranteed
in the whole interval $t \in [0,T]$ \cite{casas01fte}. In fact,
one can show that absolute convergence of the Floquet--Magnus
series is ensured at least if
\begin{equation}
\int_{0}^{T} \| A(t)\| ~\text{d}t < \xi _{F} \equiv
0.20925.
\end{equation}
Notice that convergence of the series $\sum F_{n}$ is already
guaranteed by (\ref{fflostrobo}) and the discussion concerning the
conventional Magnus expansion in subsections \ref{convergence} and
\ref{subsec273}. The
bound $\xi _{F}$ in the periodic Floquet case turns out to be
smaller than the corresponding bound $r_c = \pi$ in the conventional
Magnus expansion. At first sight this could be seen as a weaker
result. However, it has to be recalled that, due precisely to Floquet
theorem, once the condition is fulfilled in one period, convergence is
assured for any value of time. By contrast, in the general Magnus case
the bound always gives a running condition.
\subsection{Magnus expansions for nonlinear matrix equations}
\label{NL-Magnus}
It is possible to extend the procedure leading to the
Magnus expansion for the linear equation (\ref{eq:evolution}) and
obtain approximate solutions for the nonlinear matrix equation
\begin{equation} \label{nlm1}
Y^\prime = A(t, Y) Y, \quad\qquad Y(0) = Y_0 \in \mathcal{G},
\end{equation}
where $\mathcal{G}$ is a matrix Lie group, $A: \mathbb{R}_{+} \times \mathcal{G}
\longrightarrow \mathfrak{g}$ and $\mathfrak{g}$ denotes the corresponding Lie algebra.
Equation (\ref{nlm1}) appears in relevant physical fields such as rigid
body mechanics, in the calculation of Lyapunov exponents ($\mathcal{G}
\equiv \ensuremath{\mathrm{SO}}(n)$) and other problems arising in
Hamiltonian dynamics ($\mathcal{G} \equiv \ensuremath{\mathrm{Sp}}(n)$). In fact, it can be shown
that every differential equation evolving on a matrix Lie group
$\mathcal{G}$ can be written in the form (\ref{nlm1}) \cite{iserles00lgm}. Moreover, the
analysis of generic differential equations defined in homogeneous
spaces can be reduced to the Lie-group equation (\ref{nlm1})
\cite{munthe-kaas97nio}.
In \cite{casas06eme} a general procedure for devising Magnus
expansions for the nonlinear equation (\ref{nlm1}) is introduced.
It is based on applying Picard's iteration on the associated
differential equation in the Lie algebra and retaining in each
iteration the terms necessary to increase the order while
maintaining the explicit character of the expansion. The resulting
methods are thus explicit by design and are expressed in terms of
integrals.
As usual, the starting point in the formalism is to represent the
solution of (\ref{nlm1}) in the form
\begin{equation} \label{nlm2}
Y(t) = \exp(\Omega(t,Y_0)) Y_0.
\end{equation}
Then one obtains the differential equation satisfied by $\Omega$:
\begin{equation} \label{nlm3}
\Omega^\prime = d \exp_{\Omega}^{-1} \big(A(t, \e^{\Omega} Y_0)
\big), \qquad \Omega(0) = O,
\end{equation}
where $d \exp_{\Omega}^{-1}$ is given by (\ref{fdexpinv}). Now, as
in the linear case, one can apply Picard's iteration to equation
(\ref{nlm3}), giving instead
\begin{eqnarray*}
\Omega^{[m+1]}(t) & = & \int_0^t d
\exp_{\Omega^{[m]}(s)}^{-1} A(s,\e^{\Omega^{[m]}(s)} Y_0)
ds \\
& = & \int_0^t \sum_{k=0}^{\infty} \frac{B_k}{k!} \mathrm{ad}_{\Omega^{[m]}(s)}^k
A(s,\e^{\Omega^{[m]}(s)} Y_0) ds, \qquad m \ge 0.
\end{eqnarray*}
The next step in getting explicit approximations is to truncate
appropriately the $d \exp^{-1}$ operator in the above expansion.
Roughly speaking, when the whole series for $d \exp^{-1}$ is
considered, the power series expansion of the iterate function
$\Omega^{[k]}(t)$, $k \ge 1$, only reproduces the expansion of the
solution $\Omega(t)$ up to a certain order, say $m$. In
consequence, the (infinite) power series of $\Omega^{[k]}(t)$ and
$\Omega^{[k+1]}(t)$ differ in terms $\mathcal{O}(t^{m+1})$. The
idea is then to discard in $\Omega^{[k]}(t)$ all terms of order
greater than $m$. This of course requires careful analysis
of each term in the expansion. For instance, $\Omega^{[0]} = O$
implies that $(\Omega^{[1]})^\prime = A(t,Y_0)$ and therefore
\[
\Omega^{[1]}(t) = \int_0^t A(s,Y_0) ds = \Omega(t,Y_0) + \mathcal{O}(t^2).
\]
Since
\[
A(s,\e^{\Omega^{[1]}(s)} Y_0) = A(0,Y_0) + \mathcal{O}(s)
\]
it follows at once that
\[
-\frac{1}{2} \int_0^t [ \Omega^{[1]}(s),A(s,\e^{\Omega^{[1]}(s)} Y_0) ]
\, ds = \mathcal{O}(t^3).
\]
When this second term in $\Omega^{[2]}(t)$ is included and
$\Omega^{[3]}$ is computed, it turns out that $\Omega^{[3]}$
reproduces correctly the expression of $\Omega^{[2]}$ up to
$\mathcal{O}(t^2)$. Therefore we truncate $d\exp^{-1}$ at the $k=0$ term
and take
\[
\Omega^{[2]}(t) = \int_0^t A(s,\e^{\Omega^{[1]}(s)} Y_0) ds.
\]
With greater generality, we let
\begin{eqnarray} \label{nlm4}
\Omega^{[1]}(t) & = & \int_0^t A(s,Y_0) ds \\
\Omega^{[m]}(t) & = & \sum_{k=0}^{m-2} \frac{B_k}{k!} \int_0^t
\mathrm{ad}_{\Omega^{[m-1]}(s)}^k A(s,\e^{\Omega^{[m-1]}(s)} Y_0) ds, \qquad
m \ge 2 \nonumber
\end{eqnarray}
and take the approximation $\Omega(t) \approx \Omega^{[m]}(t)$.
This results in an explicit approximate solution that involves a
linear combination of multiple integrals of nested commutators, so
that $\Omega^{[m]}(t) \in \mathfrak{g}$ for all $m \ge 1$. It can also be
proved that $\Omega^{[m]}(t)$, once inserted in (\ref{nlm2}),
provides an explicit approximation $Y^{[m]}(t)$ for the solution
of (\ref{nlm1}) that is correct up to terms $\mathcal{O}(t^{m+1})$
\cite{casas06eme}. In addition, $\Omega^{[m]}(t)$ reproduces
exactly the sum of the first $m$ terms in the $\Omega$ series of
the usual Magnus expansion for the linear equation $Y'= A(t) Y$.
It makes sense, then, to regard the scheme (\ref{nlm4}) as an
explicit Magnus expansion for the nonlinear equation (\ref{nlm1}).
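To illustrate (\ref{nlm4}), the sketch below applies the first two iterates to a toy equation on $\mathrm{SO}(3)$. The map $A(t,Y)$, built from a skew-symmetric "hat" matrix with an arbitrary $Y$-dependence, and all step sizes are illustrative choices; the checks are that $\exp(\Omega^{[2]})Y_0$ stays orthogonal (since $\Omega^{[2]}\in\mathfrak{so}(3)$) and that the second iterate improves on the first.

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    # skew-symmetric matrix associated with a 3-vector
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0.0]])

# toy nonlinear Lie-group equation on SO(3): Y' = A(t, Y) Y  (illustrative)
A = lambda t, Y: hat(np.array([np.sin(t) + Y[0, 1], 1.0, Y[1, 2]]))
Y0 = np.eye(3)

def iterates(t, n=800):
    s = np.linspace(0.0, t, n + 1)
    h = s[1] - s[0]
    # Omega^{[1]}(s) = int_0^s A(r, Y0) dr  (cumulative trapezoid)
    As = np.array([A(sk, Y0) for sk in s])
    steps = h * (As[1:] + As[:-1]) / 2
    Om1 = np.concatenate((np.zeros((1, 3, 3)), np.cumsum(steps, axis=0)))
    # Omega^{[2]}(t) = int_0^t A(s, exp(Omega^{[1]}(s)) Y0) ds
    G = np.array([A(s[k], expm(Om1[k]) @ Y0) for k in range(n + 1)])
    Om2 = h * (G[0] / 2 + G[1:-1].sum(axis=0) + G[-1] / 2)
    return Om1[-1], Om2

def reference(t, n=4000):
    # RK4 integration of the full nonlinear equation
    h, Y, s = t / n, Y0.copy(), 0.0
    f = lambda s, Y: A(s, Y) @ Y
    for _ in range(n):
        k1 = f(s, Y); k2 = f(s + h/2, Y + h/2 * k1)
        k3 = f(s + h/2, Y + h/2 * k2); k4 = f(s + h, Y + h * k3)
        Y = Y + h/6 * (k1 + 2*k2 + 2*k3 + k4); s += h
    return Y

t = 0.3
Om1, Om2 = iterates(t)
Yref = reference(t)
Y2 = expm(Om2) @ Y0
orth = np.linalg.norm(Y2.T @ Y2 - np.eye(3))     # orthogonality defect
err1 = np.linalg.norm(expm(Om1) @ Y0 - Yref)     # first iterate error
err2 = np.linalg.norm(Y2 - Yref)                 # second iterate error
```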
This procedure can be easily adapted to construct an exponential
representation of the solution for the differential system
\begin{equation} \label{nlm5}
Y' = [A(t,Y), Y], \qquad Y(0) = Y_0 \in \mbox{Sym}(n).
\end{equation}
Here $\mbox{Sym}(n)$ stands for the set of $n \times n$ symmetric
real matrices and the (sufficiently smooth) function $A$ maps $
\mathbb{R}_{+} \times \mbox{Sym}(n)$ into $\ensuremath{\mathfrak{so}}(n)$, the Lie
algebra of $n \times n$ real skew-symmetric matrices. It is well
known that the solution itself remains in $\mbox{Sym}(n)$ for all
$t \ge 0$. Furthermore, the eigenvalues of $Y(t)$ are independent
of time, i.e., $Y(t)$ has the same eigenvalues as $Y_0$. This
remarkable qualitative feature of the system (\ref{nlm5}) is the
reason why it is called an \textit{isospectral flow}. Such flows
have several interesting applications in physics and applied
mathematics, from molecular dynamics to micromagnetics to linear
algebra \cite{calvo97nso}.
Since $Y(t)$ and $Y(0)$ share the same spectrum, there exists a
matrix function $Q(t) \in \ensuremath{\mathrm{SO}}(n)$ such that
$Y(t) Q(t) = Q(t) Y(0)$ or, equivalently,
\begin{equation} \label{nlm6}
Y(t) = Q(t) Y_0 Q^T(t).
\end{equation}
Then, by inserting (\ref{nlm6}) into (\ref{nlm5}), it is clear
that the time evolution of $Q(t)$ is described by
\begin{equation} \label{nlm7}
Q' = A(t,QY_0Q^T) \, Q, \qquad Q(0)=I,
\end{equation}
i.e., an equation of type (\ref{nlm1}). Yet there is another
possibility: if we seek the orthogonal matrix solution of
(\ref{nlm7}) as $Q(t) = \exp(\Omega(t))$ with $\Omega$
skew-symmetric,
\begin{equation} \label{nlm8}
Y(t) = \e^{\Omega(t)} Y_0 \,\e^{-\Omega(t)}, \qquad t \ge 0,
\qquad \Omega(t) \in \ensuremath{\mathfrak{so}}(n),
\end{equation}
then the corresponding equation for $\Omega$ reads
\begin{equation} \label{nlm9}
\Omega' = d\exp_{\Omega}^{-1} \big( A(t,\e^{\Omega} Y_0
\e^{-\Omega}) \big), \qquad \Omega(0) = O.
\end{equation}
In a similar way as for equation (\ref{nlm3}), we apply Picard's
iteration to (\ref{nlm9}) and truncate the $d\exp^{-1}$ series at
$k=m-2$. Now we can also truncate consistently the operator
\[
\mathrm{Ad}_{\Omega} Y_0 \equiv \e^{\Omega} Y_0
\e^{-\Omega} = \e^{\mathrm{ad}_{\Omega}} Y_0
\]
so that the resulting approximation $\Omega^{[m]}$ still lies in
$\ensuremath{\mathfrak{so}}(n)$. By doing so, we replace
the computation of one matrix exponential by several commutators.
In the end, the scheme reads
\begin{eqnarray} \label{nlm10}
\Omega^{[1]}(t) & = & \int_0^t A(s,Y_0) ds \nonumber \\
\Theta_{m-1}(t) & = & \sum_{l=0}^{m-1} \frac{1}{l!}
\mathrm{ad}_{\Omega^{[m-1]}(t)}^l Y_0 \\
\Omega^{[m]}(t) & = & \sum_{k=0}^{m-2} \frac{B_k}{k!} \int_0^t
\mathrm{ad}_{\Omega^{[m-1]}(s)}^k A(s,\Theta_{m-1}(s)) ds, \qquad
m \ge 2 \nonumber
\end{eqnarray}
and, as before, one has $\Omega(t) = \Omega^{[m]}(t) +
\mathcal{O}(t^{m+1})$. Thus
\begin{eqnarray*}
\Theta_1(t) & = & Y_0 + [\Omega^{[1]}(t),Y_0] \\
\Omega^{[2]}(t) & = & \int_0^t A(s,\Theta_1(s)) ds \\
\Theta_2(t) & = & Y_0 + [\Omega^{[2]}(t), Y_0] + \frac{1}{2}
[\Omega^{[2]}(t),[\Omega^{[2]}(t), Y_0]] \\
\Omega^{[3]}(t) & = & \int_0^t A(s,\Theta_2(s)) ds - \frac{1}{2}
\int_0^t [\Omega^{[2]}(s),A(s,\Theta_2(s))] ds
\end{eqnarray*}
and so on. Observe that this procedure preserves the
isospectrality of the flow since the approximation
$\Omega^{[m]}(t)$ lies in $\ensuremath{\mathfrak{so}}(n)$ for all $m \ge 1$ and $t \ge
0$. It is also equally possible to develop a formalism based on
rooted trees in this case, in a similar way as for the standard
Magnus expansion.
\noindent \textbf{Example}. The double bracket equation
\begin{equation} \label{fdbe1}
Y^\prime = [[Y,N], Y], \quad\qquad Y(0)=Y_0 \in \mbox{Sym}(n)
\end{equation}
was introduced by Brockett \cite{brockett91dst} and Chu \& Driessel
\cite{chu90tpg} to solve certain standard problems in applied
mathematics, although similar equations also appear in the
formulation of physical theories such as micromagnetics
\cite{moore94nga}. Here $N$ is a constant $n \times n$ symmetric
matrix. It clearly constitutes an example of an
isospectral flow with $A(t,Y) \equiv [Y,N]$. When the
procedure (\ref{nlm10}) is applied to (\ref{fdbe1}), one
reproduces exactly the expansion
obtained in \cite{iserles02otd} with the convergence domain
established in \cite{casas04nim}.
\hfill{$\Box$}
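The isospectrality of the truncated scheme is easy to check numerically. The following sketch (in Python with NumPy; the matrices $Y_0$, $N$ and the step $t$ are arbitrary illustrative choices, and the truncated-series matrix exponential is adequate only because $\|\Omega^{[1]}\|$ is small) applies the first approximant $\Omega^{[1]}(t) = t\,[Y_0,N]$ to the double bracket equation and verifies that $\e^{\Omega^{[1]}} Y_0 \e^{-\Omega^{[1]}}$ has the eigenvalues of $Y_0$ to roundoff, while agreeing with a Runge--Kutta reference solution up to the $\mathcal{O}(t^2)$ truncation error.

```python
import numpy as np

def expm(M, terms=30):
    # truncated exponential series; adequate here since ||M|| is small
    E, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

Y0 = np.array([[1.0, 0.2, 0.0],
               [0.2, 0.5, 0.1],
               [0.0, 0.1, -0.3]])          # symmetric initial condition
Nm = np.diag([3.0, 2.0, 1.0])              # the constant matrix N
t = 0.05

Omega1 = t * (Y0 @ Nm - Nm @ Y0)           # Omega^[1](t) = t [Y0, N], skew-symmetric
Q = expm(Omega1)
Y1 = Q @ Y0 @ Q.T                          # e^{Omega} Y0 e^{-Omega}

# isospectrality holds by construction, up to roundoff
assert np.allclose(np.linalg.eigvalsh(Y0), np.linalg.eigvalsh(Y1), atol=1e-10)

# reference: RK4 integration of Y' = [[Y,N], Y]
def rhs(Y):
    B = Y @ Nm - Nm @ Y
    return B @ Y - Y @ B

Y, h = Y0.copy(), t / 100
for _ in range(100):
    k1 = rhs(Y); k2 = rhs(Y + h/2*k1)
    k3 = rhs(Y + h/2*k2); k4 = rhs(Y + h*k3)
    Y = Y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

assert np.linalg.norm(Y1 - Y) < 2e-2       # first approximant: O(t^2) accurate
```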
\subsection{Treatment of general nonlinear equations}
\label{GNLM}
As a matter of fact, the Magnus expansion can be formally generalized
to any nonlinear, explicitly time-dependent differential equation. Given
the importance of the expansion, it has indeed been (re)derived a number
of times over the years in different settings.
We have to mention in this respect the work of Agrachev and
Gamkrelidze \cite{agrachev81caa,agrachev79ter,gamkrelidze79ero},
and Strichartz \cite{strichartz87tcb}. In the context of
Hamiltonian dynamical systems, the expansion was first proposed in
\cite{oteo91tme} and subsequently applied in a more general
context in \cite{blanes01smf} with the aim of designing new numerical
integration algorithms.
By introducing nonstationary vector fields and flows, one gets
a linear differential equation in terms of operators, which can
be analyzed in exactly the same way as in section \ref{section2}.
Thus it is in principle possible to
build approximate solutions of the differential equation which
preserve some geometric properties of the exact solution. The
corresponding Magnus series expansion allows us to write the
formal solution and then different approximations can be
obtained by truncating the series. Obviously, this formal
expansion presents two difficulties when it comes to rendering a useful
algorithm in practice: (i) it is not evident what the domain of
convergence is, and (ii) some device has to be designed to compute
the exponential map once the series is truncated.
Next we briefly summarize the main ideas involved in
the procedure. To begin with, let us consider the autonomous equation
\begin{equation} \label{nlp1}
{\bf x}' = {\bf f(x}), \qquad
{\bf x}(0)={\bf x}_0\in \mathbb{R}^n.
\end{equation}
If $\varphi^t$ denotes the exact flow of (\ref{nlp1}), i.e. ${\bf
x}(t) =\varphi^t({\bf x}_0)$, then for each infinitely
differentiable map $g: \mathbb{R}^n \longrightarrow \mathbb{R}$,
$g(\varphi^t({\bf y}))$ admits the representation
\begin{equation}\label{nlp2}
g(\varphi^t({\bf y})) = \Phi^t [g]({\bf y})
\end{equation}
where the operator $\Phi^t$ acts on differentiable functions
\cite{olver93aol}. To
be more specific, let us introduce the Lie derivative (or Lie
operator) associated with ${\bf f}$,
\begin{equation} \label{nlp4}
L_{{\bf f}} = \sum_{i=1}^n f_i \frac{\partial}{\partial x_i}.
\end{equation}
It acts on differentiable functions $F: \mathbb{R}^n
\longrightarrow \mathbb{R}^m$ (see \cite[Chap. 8]{arnold89mmo}
for more details) as
\[
L_{{\bf f}} F( \mathbf{y}) = F^\prime(\mathbf{y})
\mathbf{f}(\mathbf{y}),
\]
where $F^\prime(\mathbf{y})$ denotes the Jacobian matrix of $F$.
It follows from the chain rule that, for the solution
$\varphi^t(\mathbf{x}_0)$ of (\ref{nlp1}),
\begin{equation} \label{nlp2a1}
\frac{d }{dt} F(\varphi^t(\mathbf{x}_0)) = (L_{{\bf f}}
F)(\varphi^t(\mathbf{x}_0)),
\end{equation}
and applying the operator iteratively one gets
\[
\frac{d^k }{dt^k} F(\varphi^t(\mathbf{x}_0)) = (L_{{\bf f}}^k
F)(\varphi^t(\mathbf{x}_0)), \qquad k \ge 1.
\]
Therefore, the Taylor series of $F(\varphi^t(\mathbf{x}_0))$ at
$t=0$ is given by
\begin{equation} \label{nlp2a}
F(\varphi^t(\mathbf{x}_0)) = \sum_{k \ge 0} \frac{t^k}{k!} (L_{{\bf
f}}^k F) (\mathbf{x}_0) = \exp (t L_{{\bf f}})[F](\mathbf{x}_0).
\end{equation}
Now, putting $F(\mathbf{y}) = \mathrm{Id}(\mathbf{y}) = \mathbf{y}$, the
identity map, this is just the Taylor series of the solution
itself
\[
\varphi^t(\mathbf{x}_0) = \sum_{k \ge 0} \frac{t^k}{k!} (L_{{\bf
f}}^k \mathrm{Id}) (\mathbf{x}_0) = \exp
(t L_{{\bf f}})[\mathrm{Id}](\mathbf{x}_0).
\]
If we substitute $F$ by $g$ in (\ref{nlp2a}) and compare with
(\ref{nlp2}), then it is clear that $\Phi^t[g](\mathbf{y}) = \exp
(t L_{{\bf f}})[g](\mathbf{y})$. The object $\exp (t L_{{\bf f}})$
is called the Lie transform associated with $\mathbf{f}$.
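As a concrete check of the identity $\varphi^t(\mathbf{x}_0) = \exp(tL_{\mathbf f})[\mathrm{Id}](\mathbf{x}_0)$, consider the scalar equation $x' = x^2$, whose exact flow $x_0/(1-tx_0)$ is known in closed form (a choice made here only for illustration). Since the iterated Lie derivatives of the identity are polynomials, $L_f$ can be applied exactly by representing functions as coefficient lists; a minimal Python sketch:

```python
# functions of one variable represented as polynomial coefficient lists [c0, c1, ...]
def pmul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def pdiff(a):
    d = [k * ak for k, ak in enumerate(a)][1:]
    return d if d else [0.0]

def peval(a, x):
    return sum(c * x**k for k, c in enumerate(a))

f  = [0.0, 0.0, 1.0]   # f(x) = x^2, i.e. the ODE x' = x^2
Id = [0.0, 1.0]        # the identity function

x0, t = 0.5, 0.8
total, term, fact = 0.0, Id, 1.0
for k in range(30):
    total += peval(term, x0) * t**k / fact   # t^k/k! (L_f^k Id)(x0)
    term = pmul(f, pdiff(term))              # apply L_f symbolically: f * d/dx
    fact *= k + 1

exact = x0 / (1.0 - t * x0)                  # exact flow of x' = x^2
assert abs(total - exact) < 1e-6
```

Here $L_f^k \mathrm{Id} = k!\,x^{k+1}$, so the Lie transform resums to the geometric series $\sum_k t^k x_0^{k+1} = x_0/(1-tx_0)$, converging for $|tx_0|<1$.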
At this point, let us suppose that $\mathbf{f}(\mathbf{x})$ can be split
as $\mathbf{f}(\mathbf{x}) = \mathbf{f}_1(\mathbf{x}) +
\mathbf{f}_2(\mathbf{x})$, in such a way that the systems
\[
\mathbf{x}^\prime = \mathbf{f}_1(\mathbf{x}), \qquad
\mathbf{x}^\prime = \mathbf{f}_2(\mathbf{x})
\]
have flows $\varphi_1^t$ and $\varphi_2^t$, respectively, so that
\[
g(\varphi_{i}^{t}({\bf y})) =
\exp(t L_{{\bf f}_i}) [g] ({\bf y}) \qquad \qquad i=1,2.
\]
Then, for their composition one has
\begin{equation} \label{nlp7}
g(\varphi_{2}^{t}\circ\varphi_{1}^{s}({\bf y})) =
\exp(s L_{{\bf f}_1}) \exp(t L_{{\bf f}_2}) [g] ({\bf y}).
\end{equation}
This is precisely formula (\ref{nlp2a}) with
$\mathbf{f}=\mathbf{f}_1$, $t$ replaced with $s$ and with
$F(\mathbf{y}) = \exp(t L_{{\bf f}_2}) [g] ({\bf y})$. Notice that
the indices 1 and 2 as well as $s$ and $t$ to the left and right of
eq. (\ref{nlp7}) are permuted. In other words, \emph{the Lie transforms appear in
the reverse order to their corresponding maps} \cite{hairer06gni}.
The Lie derivative $L_{\mathbf{f}}$ satisfies some remarkable
properties. Given two functions $\psi_1, \ \psi_2$, it can be
easily verified that
\begin{eqnarray*}
& & L_{{\bf f}}(\alpha_1 \psi_1 + \alpha_2 \psi_2) =
\alpha_1 L_{{\bf f}} \psi_1 + \alpha_2 L_{{\bf f}} \psi_2,
\qquad \qquad \alpha_1,\alpha_2 \in \mathbb{R} \\
& & L_{{\bf f}}(\psi_1\, \psi_2)= (L_{{\bf f}} \psi_1) \, \psi_2 + \psi_1\,L_{{\bf f}} \psi_2
\end{eqnarray*}
and by induction we can prove the Leibniz rule
\[
L_{{\bf f}}^k(\psi_1\, \psi_2)= \sum_{i=0}^k
\left( \begin{array}{c}
k \\
i
\end{array} \right)
\left(L_{{\bf f}}^i \psi_1 \right) \,
\left(L_{{\bf f}}^{k-i} \psi_2 \right),
\]
with $ L_{{\bf f}}^i \psi=L_{{\bf f}}\, \left(L_{{\bf f}}^{i-1}
\psi \right) $ and $ L_{{\bf f}}^0 \psi=\psi$, justifying the name
of Lie derivative.
In addition, given two vector
fields ${\bf f}$ and ${\bf g}$, then
\begin{eqnarray*}
& & \alpha_1 L_{{\bf f}} + \alpha_2 L_{{\bf g}} =
L_{\alpha_1 {\bf f} + \alpha_2{\bf g}} ,
\nonumber \\
& & [L_{{\bf f}},L_{{\bf g}}]=
L_{{\bf f}}\,L_{{\bf g}} - L_{{\bf g}}\,L_{{\bf f}}
= L_{{\bf h}} ,
\end{eqnarray*}
where $ {\bf h} $ is another vector field corresponding to the Lie
bracket of the vector fields $\mathbf{f}$ and $\mathbf{g}$, denoted by
$ {\bf h}=({\bf f},{\bf g})$, whose
components are
\begin{equation} \label{commut-nl}
h_i = ({\bf f},{\bf g})_i = L_{{\bf f}}g_i - L_{{\bf g}}f_i =
\sum_{j=1}^n \left(
f_j \frac{\partial g_i}{\partial x_j} -
g_j \frac{\partial f_i}{\partial x_j} \right).
\end{equation}
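The identity $[L_{\mathbf f},L_{\mathbf g}] = L_{(\mathbf f,\mathbf g)}$ can be verified numerically. A minimal Python sketch (the polynomial vector fields and the test function are arbitrary choices of ours, and the tolerance reflects the accuracy of nested central differences):

```python
def grad(F, x, h=1e-5):
    # central-difference gradient of a scalar function F at the point x
    g = []
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += h; xm[j] -= h
        g.append((F(xp) - F(xm)) / (2*h))
    return g

def lie(f, F):
    # Lie derivative L_f F, evaluated numerically
    return lambda x: sum(fj * gj for fj, gj in zip(f(x), grad(F, x)))

def bracket(f, g):
    # components (f,g)_i = L_f g_i - L_g f_i of the Lie bracket
    def h(x):
        n = len(x)
        fx, gx = f(x), g(x)
        Jf = [grad(lambda y, i=i: f(y)[i], x) for i in range(n)]
        Jg = [grad(lambda y, i=i: g(y)[i], x) for i in range(n)]
        return [sum(fx[j]*Jg[i][j] - gx[j]*Jf[i][j] for j in range(n))
                for i in range(n)]
    return h

f = lambda x: [x[1], -x[0]]
g = lambda x: [x[0]**2, x[1]]
psi = lambda x: x[0]**2 * x[1]
p = [1.5, 0.7]

comm = lie(f, lie(g, psi))(p) - lie(g, lie(f, psi))(p)
lh = lie(bracket(f, g), psi)(p)
assert abs(comm - lh) < 1e-3
```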
Moreover, from (\ref{nlp2}) and (\ref{nlp2a1}) (replacing $F$
with $g$) we can write
\[
\frac{d }{dt} \Phi^t[g](\mathbf{x}_0) = \frac{d }{dt}
g(\varphi^t(\mathbf{x}_0)) = (L_{\mathbf{f}}
g)(\varphi^t(\mathbf{x}_0))= \Phi^t
L_{\mathbf{f}}[g](\mathbf{x}_0).
\]
Particularizing to the function $g(\mathbf{x}) = \mathrm{Id}_j(\mathbf{x})
= x_j$, we get
\[
\frac{d}{dt}\Phi^t[\mathrm{Id}_j]({\bf y}) = \Phi^t
L_{{\bf f}({\bf y})}[\mathrm{Id}_j]({\bf y}),
\qquad \quad j=1,\ldots,n, \quad {\bf y}={\bf x}_0
\]
or, in short,
\begin{equation} \label{nlp5}
\frac{d}{dt}\Phi^t = \Phi^t L_{{\bf f}({\bf y})},
\qquad \quad {\bf y}={\bf x}_0,
\end{equation}
i.e., a linear differential equation for the operator $\Phi^t$.
Notice that, as expected, equation (\ref{nlp5}) admits as formal
solution
\begin{equation} \label{nlp6}
\Phi^t = \exp(tL_{{\bf f}({\bf y})}),
\qquad \qquad {\bf y}={\bf x}_0.
\end{equation}
We can follow the same steps for the non-autonomous equation
\begin{equation} \label{non-lin}
{\bf x}' = {\bf f}(t,{\bf x}) ,
\end{equation}
where now the operational equation to be solved is
\begin{equation} \label{nlp8}
\frac{d}{dt}\Phi^t = \Phi^t L_{\mathbf{f}(t,\mathbf{y})},
\qquad \quad {\bf y}={\bf x}_0.
\end{equation}
To simplify notation, from now on we consider ${\bf x}_0$ as a set
of coordinates such that $\mathbf{f}(t,\mathbf{x}_0)$ is a
differentiable function of ${\bf x}_0$. Since
$L_{{\bf f}}$ is a linear operator we can then use directly the Magnus
series expansion to obtain the formal solution of (\ref{nlp8}) as
$ \Phi^t= \exp(L_{\mathbf{w}(t,\mathbf{x}_0)})$, with ${\bf w}=\sum_i{\bf w}_i$.
The first two terms are now
\begin{eqnarray} \label{nlp9}
{\bf w}_1(t,{\bf x}_0) & = & \int_{0}^{t} {\bf f}(s,{\bf x}_0) ds
\nonumber \\
{\bf w}_2(t,{\bf x}_0) & = & - \frac{1}{2} \int_{0}^{t} ds_1
\int_{0}^{s_1} ds_2 ({\bf f}(s_1,{\bf x}_0), {\bf f}(s_2,{\bf x}_0)).
\end{eqnarray}
Observe that the sign of ${\bf w}_2$ is changed with respect to
$\Omega_{2}$ in (\ref{O2}), and that the integrals affect only the
explicitly time-dependent part of the vector field. In general, due to the
structure of equations (\ref{eq:evolution}) and (\ref{nlp8}), the expression
for $\mathbf{w}_n(t,{\bf x}_0)$ can be obtained from the corresponding
$\Omega_n(t)$ in the linear case by applying the following rules:
\begin{enumerate}
\item replace $A(t_i)$ by ${\bf f}(t_i,{\bf x}_0)$;
\item replace the commutator $[\cdot,\cdot]$ by the Lie bracket (\ref{commut-nl});
\item change the sign in $\mathbf{w}_n(t,{\bf x}_0)$ for even $n$.
\end{enumerate}
Once ${\bf w}^{[n]}=\sum_{i=1}^n{\bf w}_i(t,{\bf x}_0)$ is
computed, it still remains to evaluate the action of the Lie
transform $\exp(L_{\mathbf{w}(t,\mathbf{x}_0)})$ on the initial condition
${\bf x}_0$. At time $t=T$, this can be seen as the time-1 flow
of the autonomous differential equation
\begin{equation}
{\bf y}' = {\bf w}^{[n]}(T,{\bf y}) , \qquad \quad
{\bf y}(0) = {\bf x}_0,
\end{equation}
since ${\bf y}(1)=\exp(L_{\mathbf{w}(T,\mathbf{x}_0)}) \mathbf{x}_0 = \mathbf{x}(T)$.
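As a small worked instance of the three rules above (an illustrative choice of ours: for $\mathbf f(t,x) = x + t\,x^2$, the rules give by hand $w_1 = t x_0 + \frac{t^2}{2}x_0^2$ and $w_2 = \frac{t^3}{12}x_0^2$, since the bracket $({\bf f}(s_1,\cdot),{\bf f}(s_2,\cdot)) = (s_2-s_1)x^2$), the following Python sketch evaluates the Lie transform as the time-1 flow and compares with the exact Bernoulli solution:

```python
import math

def rk4(F, y0, t_end, n=1000):
    # standard fourth-order Runge-Kutta integration
    h, y, t = t_end / n, y0, 0.0
    for _ in range(n):
        k1 = F(t, y); k2 = F(t + h/2, y + h/2*k1)
        k3 = F(t + h/2, y + h/2*k2); k4 = F(t + h, y + h*k3)
        y += h/6 * (k1 + 2*k2 + 2*k3 + k4); t += h
    return y

x0, T = 0.4, 0.3
# hand-derived for f(t,x) = x + t x^2:
#   w1 = t x0 + (t^2/2) x0^2,   w2 = (t^3/12) x0^2
w2flow = lambda tau, y: T*y + (T**2/2 + T**3/12) * y**2   # y' = w^[2](T, y)
approx = rk4(w2flow, x0, 1.0)                             # time-1 flow

# exact solution of x' = x + t x^2 via v = 1/x:  v' = -v - t
exact = 1.0 / ((1.0/x0 - 1.0)*math.exp(-T) - T + 1.0)
assert abs(approx - exact) < 1e-3
```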
Although this is arguably the most direct way to construct a Magnus
expansion for \emph{arbitrary} time dependent nonlinear differential equations, it is
by no means the only one. In particular, Agrachev and
Gamkrelidze \cite{agrachev79ter,gamkrelidze79ero} obtain a similar expansion
by transforming (\ref{nlp8}) into the integral equation
\begin{equation} \label{nlp10}
\Phi^t = \mathrm{Id} + \int_0^t \Phi^s \vec{X}_s ds
\end{equation}
which is subsequently solved by successive approximations. Here, for clarity,
we have denoted $\vec{X}_s \equiv L_{\mathbf{f}(s,\mathbf{x}_0)}$. Then one gets
the formal series
\begin{eqnarray} \label{nlp11}
\Phi^t & = & \mathrm{Id} + \int_0^t dt_1 \vec{X}_{t_1} + \int_0^t dt_1 \int_0^{t_1}
dt_2 \vec{X}_{t_2} \vec{X}_{t_1} + \cdots \\
& = & \mathrm{Id} + \sum_{m=1}^{\infty} \int_0^t dt_1 \int_0^{t_1}
dt_2 \cdots \int_0^{t_{m-1}} dt_m \; \vec{X}_{t_m} \cdots \vec{X}_{t_1}. \nonumber
\end{eqnarray}
An object with this shape is called a \emph{formal chronological series} \cite{agrachev79ter},
and the set of all formal chronological series can be endowed with a real associative
algebra structure. It is then possible to show that there exists an absolutely continuous
formal chronological series
\begin{equation} \label{nlp12}
V_t(\vec{X}_t) = \sum_{m=1}^{\infty} \int_0^t dt_1 \int_0^{t_1}
dt_2 \cdots \int_0^{t_{m-1}} dt_m \; G_m(\vec{X}_{t_1}, \ldots, \vec{X}_{t_m})
\end{equation}
such that
\[
\Phi^t = \exp(V_t(\vec{X}_t)).
\]
Here $G_m(\vec{X}_{t_1}, \ldots, \vec{X}_{t_m})$ are Lie polynomials, homogeneous of
degree one in each variable, which can be constructed algorithmically.
In particular,
\begin{eqnarray*}
G_1(\vec{X}_{t_1}) & = & \vec{X}_{t_1} \\
G_2(\vec{X}_{t_1}, \vec{X}_{t_2} ) & = & \frac{1}{2} [\vec{X}_{t_2}, \vec{X}_{t_1} ] \\
G_3(\vec{X}_{t_1}, \vec{X}_{t_2}, \vec{X}_{t_3} ) & = & \frac{1}{6} (
[ \vec{X}_{t_3}, [\vec{X}_{t_2}, \vec{X}_{t_1} ] ] + [ [\vec{X}_{t_3}, \vec{X}_{t_2}], \vec{X}_{t_1} ] )
\end{eqnarray*}
The series (\ref{nlp12}) in general diverges, even if the Lie operator $\vec{X}_t$ is analytic
\cite{agrachev81caa}.
Nevertheless, in certain cases convergence holds. For instance, if $\vec{X}_t$
belongs to a Banach Lie algebra $\mathcal{B}$ for all $t \in \mathbb{R}$, where one has a norm satisfying
$\| [X,Y]\| \le \|X\| \|Y\|$ for all $X,Y \in \mathcal{B}$ and $\int_0^t \| \vec{X}_s \| ds
\equiv \int_0^t \| L_{\mathbf{f}(s,\mathbf{x}_0)} \| ds
\le 0.44$, then $V_t(\vec{X}_t) $ converges absolutely in $\mathcal{B}$ \cite{agrachev79ter}. As a matter
of fact, an argument analogous to that used in
\cite{blanes98maf} and \cite{moan98eao}
may allow us to
improve this bound and get convergence for
\[
\int_0^t \| \vec{X}_s \| ds \le \frac{1}{2} \int_0^{2 \pi} \frac{1}{2 + \frac{x}{2} (1 - \cot \frac{x}{2})}
dx = 1.08686870\ldots
\]
\subsubsection{Treatment of Hamiltonian systems}
We have seen how the algebraic setting we have developed for linear systems of
differential equations may be extended formally to nonlinear systems. We will review next how it
can be adapted to the important class of Hamiltonian systems. In this context, the
role of a Lie bracket of vector fields (\ref{commut-nl}) is played by the classical Poisson
bracket \cite{thirring92cds}.
The Lie algebraic presentation of Hamiltonian systems in
Classical Mechanics has
been approached in different ways and the Magnus expansion invoked in this context by diverse
authors \cite{marcus70hot,spirig79aao,tani68ctg}. More explicit use of the Magnus
expansion is
done in \cite{oteo91tme} where the evolution operator for a classical system
is constructed and its differential equation analyzed.
To particularize to this situation the preceding general treatment,
let us consider a system with $l$ degrees of freedom and phase
space variables $ {\bf x=
(q,p)}=(q_1,\ldots,q_l,p_1,\ldots,p_l)$,
where $(q_{i}, p_{i}),$ $i=1,\ldots ,l$ are the usual pairs of canonical
conjugate coordinate and momentum, respectively. By defining the Poisson bracket of two scalar
functions $F(\mathbf{q},\mathbf{p})$ and $G(\mathbf{q},\mathbf{p})$ of phase space variables in the
conventional way \cite{thirring92cds}
\begin{equation*}
\{ F,G \} \equiv \sum_{i=1}^{l}\left( \frac{\partial F}{\partial
q_{i}}\frac{\partial G}{\partial p_{i}}-\ \frac{\partial F}{\partial p_{i}}%
\frac{\partial G}{\partial q_{i}}\right),
\end{equation*}%
we have
\begin{equation*}
\{ F, G \}=\sum_{i,j=1}^{2l}\frac{\partial F}{\partial x_{i}}%
J_{ij}\frac{\partial G}{\partial x_{j}},
\end{equation*}%
and in particular
\begin{equation*}
\{ x_{i}, x_{j} \}=J_{ij}.
\end{equation*}%
Here $J$ is the basic symplectic matrix appearing in equation (\ref{simplectic}) (with $n=l$).
With these definitions the set of (sufficiently smooth) functions on
phase space acquires the structure of a Lie algebra and we can
associate with any such function $F(\mathbf{x})$ a Lie operator
\begin{equation} \label{lie-op}
L_{F}=\sum_{i,j=1}^{2l}\frac{\partial F}{\partial x_{i}}J_{ij}\frac{%
\partial }{\partial x_{j}}
\end{equation}%
which acts on the same set of functions as $L_F G = \{F, G\}$. It is then a simple
exercise to show that the set of all Lie operators is also a Lie
algebra under the usual commutator $\left[ L_{F},L_{G}\right]
=L_{F}L_{G}-L_{G}L_{F}$ and furthermore
\begin{equation*}
\left[ L_{F},L_{G}\right] =L_{\{F,G\}}.
\end{equation*}
Given the Hamiltonian function $H({\bf q,p},t):
\mathbb{R}^{2l}\times \mathbb{R} \rightarrow \mathbb{R}$, where $
{\bf q, \, p} \in \mathbb{R}^l$, the equations of motion are
\begin{equation} \label{eq.HamNL}
\mathbf{q}' = \boldsymbol{\nabla}_{\mathbf{p}} H,
\qquad \qquad
\mathbf{p}' = -\boldsymbol{\nabla}_{\mathbf{q}} H,
\end{equation}
or, equivalently, in terms of $\mathbf{x}$,
\[
\mathbf{x}' = J \, \boldsymbol{\nabla}_{\mathbf{x}} H.
\]
It is then elementary to show that the Lie operator $L_{-H}$
is nothing but the Lie derivative $L_{\bf f}$
(\ref{nlp4}) associated with the function
\[
\mathbf{f} = J \, \boldsymbol{\nabla}_{\mathbf{x}} H.
\]
Therefore, the operational equation (\ref{nlp8}) becomes
\begin{equation} \label{hami2}
\frac{d}{dt}\Phi^t_H = \Phi^t_H L_{-H(\mathbf{y},t)},
\qquad \quad {\bf y}={\bf x}_0
\end{equation}
and the previous treatment also holds in this setting. As a result, the
Magnus expansion reads
\begin{equation}
\Phi_H^t= \exp({L_W}),
\end{equation}
where $W=\sum_{i=1}^{\infty}W_i$ and the first two terms are
\begin{eqnarray}
W_1({\bf x}_0) & = & - \int_{0}^{t} H({\bf x}_0,s) ds \\
W_2({\bf x}_0) & = & - \frac{1}{2} \int_{0}^{t} ds_1
\int_{0}^{s_1} ds_2 \{H({\bf x}_0,s_1), H({\bf x}_0,s_2)\}.
\nonumber
\end{eqnarray}
\subsection{Magnus expansion and the Chen--Fliess series}
\label{MECF}
Suppose that $\mathbf{f}$ in equation (\ref{non-lin}) has the form
$\mathbf{f}(t,\mathbf{x}) = \sum_{i=1}^m u_i(t)
\mathbf{f}_i(\mathbf{x})$, i.e., we are dealing with the nonlinear
differential equation
\begin{equation} \label{cf1}
\mathbf{x}^\prime(t) = \sum_{i=1}^m u_i(t) \;
\mathbf{f}_i(\mathbf{x}(t)), \qquad \mathbf{x}(0) = \mathbf{p},
\end{equation}
where $u_i(t)$ are integrable functions of time. Systems of the
form (\ref{cf1}) appear for instance in nonlinear control theory. In that
context the functions $u_i$ are the controls and $\mathbf{f}_i$
are related to the nonvarying geometry of the system. Observe that this problem
constitutes the natural (nonlinear) generalization of the case studied in
section \ref{lcbch}.
One of the most basic procedures for obtaining $\mathbf{x}(T)$ for a given $T$
is by applying simple Picard iteration. For an analytic \emph{output
function} $g: \mathbb{R}^n \longrightarrow \mathbb{R}$, from
(\ref{nlp2a1}) it is clear that
\begin{equation} \label{cf2}
\frac{d }{dt} g(\mathbf{x}(t)) = (L_{(\sum u_i
\mathbf{f}_i)}g)(\mathbf{x}(t)) = \sum_{i=1}^m u_i(t)
(E_ig)(\mathbf{x}(t)), \qquad g(\mathbf{x}(0)) = g(\mathbf{p}),
\end{equation}
where, for simplicity, we have denoted by $E_i$ the Lie derivative
$L_{\mathbf{f}_i}$. This can be particularized to the case $g = x_i$, the
$i$th component function.
By rewriting (\ref{cf2}) as an equivalent
integral equation and iterating we get
\begin{eqnarray} \label{cf3}
g(\mathbf{x}(t)) & = & g(\mathbf{p}) + \int_0^t \sum_{i_1 = 1}^m
u_{i_1}(t_1) (E_{i_1} g)(\mathbf{x}(t_1)) dt_1 \nonumber \\
& = & g(\mathbf{p}) + \int_0^t \sum_{i_1 = 1}^m u_{i_1}(t_1)
\bigg( (E_{i_1} g)(\mathbf{p}) + \nonumber \\
& & \hspace*{1cm} \left. \int_0^{t_1} \sum_{i_2=1}^m u_{i_2}(t_2)
(E_{i_2} E_{i_1} g)(\mathbf{x}(t_2)) dt_2 \right) dt_1 \\
& = & g(\mathbf{p}) + \int_0^t \sum_{i_1 = 1}^m u_{i_1}(t_1)
\left( (E_{i_1} g)(\mathbf{p}) + \int_0^{t_1} \sum_{i_2=1}^m u_{i_2}(t_2)
\Big( (E_{i_2} E_{i_1} g)(\mathbf{p}) \right. \nonumber \\
& & \hspace*{1cm} + \left. \int_0^{t_2} \sum_{i_3=1}^m
u_{i_3}(t_3) (E_{i_3} E_{i_2} E_{i_1} g)(\mathbf{x}(t_3)) dt_3
\Big) dt_2 \right) dt_1 \nonumber
\end{eqnarray}
and so on. Notice that in this expression the time
dependence of the solution is separated from the nonvarying
geometry of the system, which is contained in the vector fields
$E_i$ and needs to be computed only once, at the beginning of the
calculation. Next we reverse the names of the integration
variables and indices used (e.g., rename $i_1$ to become $i_3$ and
vice versa), so that
\begin{eqnarray} \label{cf4}
g(\mathbf{x}(t)) & = & g(\mathbf{p}) + \sum_{i_1 = 1}^m
\left( \int_0^t u_{i_1}(t_1)dt_1 \right)
(E_{i_1} g)(\mathbf{p}) \nonumber \\
& & + \sum_{i_2=1}^m \sum_{i_1=1}^m \left( \int_0^{t} \int_0^{t_2}
u_{i_2}(t_2) u_{i_1}(t_1) dt_1 dt_2 \right)
(E_{i_1} E_{i_2} g)(\mathbf{p}) \\
& & + \sum_{i_3 = 1}^m \sum_{i_2 = 1}^m \sum_{i_1 = 1}^m \left(
\int_0^t \int_0^{t_3} \int_0^{t_2} u_{i_3}(t_3) u_{i_2}(t_2)
u_{i_1}(t_1) dt_1 dt_2 dt_3 \right) \nonumber \\
& & \hspace*{1cm} (E_{i_1} E_{i_2} E_{i_3} g)(\mathbf{p}) + \cdots \nonumber
\end{eqnarray}
Observe that the indices in the Lie derivatives and in the
integrals are in the opposite order. This procedure can be further
iterated, thus yielding the formal infinite series
\begin{eqnarray} \label{cf5}
g(\mathbf{x}(t)) & = & g(\mathbf{x}(0)) + \sum_{s \ge 1} \sum_{i_1 \cdots i_s}
\int_0^t
\int_0^{t_{s}} \cdots \int_0^{t_3} \int_0^{t_2} u_{i_s}(t_s)
\cdots u_{i_1}(t_1) dt_1 \cdots dt_s \nonumber \\
& & \hspace*{1cm} E_{i_1} \cdots E_{i_s}
g(\mathbf{x}(0)),
\end{eqnarray}
where each $i_j \in L = \{1,\ldots,m \}$. An expression of the form
(\ref{cf5}) is referred to as the Chen--Fliess series, and it can be
proved that, under certain circumstances, it actually converges
uniformly to the solution of (\ref{cf2}) \cite{kawski02tco}. This series
originates in K.T. Chen's work \cite{chen57iop} on geometric invariants
and iterated integrals of paths in $\mathbb{R}^n$. Later, Fliess
\cite{fliess81fcn} applied the theory to the analysis of control systems.
One of the great advantages of the Chen--Fliess series is that it
can be manipulated with purely algebraic and combinatorial tools,
instead of working directly with nested integrals. To emphasize
this aspect, observe that each term in the series can be
identified by a sequence of indices or \emph{word} $w = i_1 i_2
\cdots i_s$ in the \emph{alphabet} $L$ through the following
two maps:
\begin{eqnarray*}
\mathcal{M}_1 : \, w = i_1 i_2 \cdots i_s & \longmapsto & \big( g \mapsto (E_w
g)(\mathbf{p}) = (E_{i_1} E_{i_2} \cdots E_{i_s} g)(\mathbf{p})
\big), \\
\mathcal{M}_2 : \, w = i_1 i_2 \cdots i_s & \longmapsto & \left( u \mapsto \int_0^t
u_{i_s}(t_s) \int_0^{t_{s}} \cdots \int_0^{t_2} u_{i_1}(t_1) dt_1
\cdots dt_{s-1} dt_s \right)
\end{eqnarray*}
In fact, the nested integral appearing in the map $\mathcal{M}_2$ can be
expressed in a simple way, as we did for the linear case in (\ref{wn5}):
\begin{equation} \label{cf5b}
\alpha_{i_1\cdots i_s} = \int_0^t
u_{i_s}(t_s) \int_0^{t_{s}} \cdots \int_0^{t_2} u_{i_1}(t_1) dt_1
\cdots dt_{s-1} dt_s
\end{equation}
With this notation, the series of linear differential operators
appearing at the right-hand side of (\ref{cf5}) can be written in
the compact form \cite{murua06tha}
\begin{equation} \label{cf6}
\sum_{w \in L^*} \alpha_w E_w,
\end{equation}
where $L^*$ denotes the set of words on the
alphabet $L = \{1, 2, \ldots, m \}$, the function
$\alpha_w$ is given by (\ref{cf5b}) for each word $w \in L^*$ and
\[
E_w = E_{i_1} \cdots E_{i_s}, \qquad \mbox{ if } \quad w = i_1 \cdots
i_s \in L^*.
\]
It was proved by Chen that the series (\ref{cf6}) is an
exponential Lie series \cite{chen57iop}, i.e., it can be rewritten as the
exponential of a series of vector fields obtained as nested
commutators of $E_1,\ldots, E_m$. Such an expression is referred to
in nonlinear control as the formal analogue of a continuous
Baker--Campbell--Hausdorff formula and also as the logarithm of the Chen--Fliess
series \cite{kawski00ctl}.
Notice the similarities of this procedure with the more general treatment carried out in
subsection \ref{GNLM} for the nonlinear differential equation (\ref{nlp1}). Thus,
expression (\ref{nlp11}) constitutes the generalization of (\ref{cf5}) to an arbitrary
function $\mathbf{f}$ in (\ref{nlp1}). Conversely, the logarithm of the Chen--Fliess series
can be viewed as the corresponding nonlinear Magnus series for
the particular nonlinear system (\ref{cf1}).
From these considerations it is clear that, in principle,
one can obtain an explicit formula for the terms of the logarithm of the Chen--Fliess series
in a basis of the Lie algebra generated by $E_1, \ldots,
E_m$, but this problem has only recently been solved for any number
of operators $m$ and arbitrary order, using labelled rooted trees
\cite{murua06tha}. Thus, for instance, when $m=2$, it holds that
\begin{eqnarray} \label{amcf1}
\sum_{w \in L^*} \alpha_w E_w & = & \exp( \beta_1 E_1 + \beta_2 E_2
+ \beta_{12} [E_1,E_2] + \beta_{112} [E_1,[E_1,E_2]] \nonumber \\
& & + \beta_{212} [E_2,[E_1,E_2]] + \cdots),
\end{eqnarray}
where, not surprisingly, the expressions of the $\beta$ coefficients are given by
(\ref{wn4}) with the corresponding change of sign in $\beta_{12}$
due to the nonlinear character of equation (\ref{cf1}).
Another relevant consequence of the connection between Magnus series and the
Chen--Fliess series is the following: the Lie series
defining the logarithm of the Chen--Fliess series can be obtained explicitly from
the recurrence (\ref{eses})-(\ref{omegn}), valid in principle
for the linear case. Of course, the successive terms of the Chen--Fliess
series itself can be generated by expanding the exponential.
\section{Illustrative examples}
\label{section4}
After having reviewed in the preceding two sections the main theoretical aspects of
the Magnus expansion and other exponential methods,
in this section we gather some examples of their application.
All of them are standard problems of Quantum Mechanics
where the exact solution for the evolution operator $U(t)$ is well
known. Due to their simplicity, higher order computations are
possible within a reasonable amount of effort. The comparison
between approximate and exact analytic results may help the reader
to grasp the advantages as well as the technical difficulties of the
methods we have analyzed.
The examples considered here are treated in
\cite{casas01fte,klarsfeld89eip,oteo05iat,pechukas66ote}, although
some results, in particular those involving the highest-order
computations, are unpublished material. In subsection \ref{Ex:ME} we
present results concerning the most straightforward way of dealing
with ME, namely computations in the Interaction Picture. In
subsection \ref{Ex:AME} an application of ME in the adiabatic basis
is developed. Subsection \ref{Ex:FW} is devoted to illustrating the
exponential infinite-product expansions of Fer and Wilcox. An
example of the application of Voslamber's iterative version of ME
is given in subsection \ref{Ex:IME}. Finally, subsection \ref{Ex:FME}
contains an application of the Floquet--Magnus formalism.
\subsection{ME in the Interaction Picture}\label{Ex:ME}
We illustrate the application of ME in the Interaction Picture
(see subsection \ref{PLT}) by means of two simple time-dependent physical
systems frequently encountered in the literature, for which exact
solutions are available: the time-dependent forced harmonic
oscillator, and a particle of spin $\frac{1}{2}$ in a constant
magnetic field. In the first case we fix $\hbar =1$ for convenience.
As we will see, ME in the Interaction Picture is appropriate
whenever the characteristic time scale of the perturbation is
shorter than the proper time scale of the system.
To illustrate and evaluate the quality of the various approximations
for the time-evolution operator, we compute the transition
probabilities among non-perturbed eigenstates induced by the small
perturbation.
\subsubsection{Linearly forced harmonic oscillator}\label{LHO}
The Hamiltonian function describing a linearly driven
harmonic oscillator reads ($\hbar =1$)
\begin{equation}\label{eq:OA}
H =H _0+V(t), \quad \mbox{ with }
\qquad H _0=\frac{1}{2}\omega _{0}(p^2+q^2),\qquad V(t)=\sqrt 2 f(t)q
\end{equation}
and $f(t)$ is real. Here $q$ and $p$ stand for the position and
momentum operators, satisfying $[q,p]=i$, and $\omega
_{0}$ gives the energy level spacing in the absence of the perturbation
$V(t)$.
We introduce the usual operators $a_\pm
\equiv \frac{1}{\sqrt 2 }(q\mp i p)$, so that $[a_-,a_+]=1$. With
this notation we have
\begin{equation}\label{H0V}
H _0= \omega_0 \left(a_+ a_- + \frac{1}{2} \right),
\qquad V=f(t)(a_+ + a_-).
\end{equation}
The eigenstates of $H _0$ are denoted by $| n\rangle$, so that $H
_0 | n\rangle = \omega_0(n+\frac{1}{2})| n\rangle $, where $n$ stands for
the quantum number. With this notation $n=0$ corresponds to the
ground state.
For simplicity in the computations we choose $\omega_0 = 1$.
In accordance with the prescriptions in section \ref{PLT}, the
Hamiltonian in the Interaction Picture is given by (\ref{HGInt}) and
reads
\begin{equation}\label{LHOIP}
H _I(t)=\e^{iH _0t} \, V(t) \, \e^{-iH _0t}=f(t)(\e^{it}a_+ + \e^{-it}a_-) .
\end{equation}
Accordingly, the evolution operator is factorized as
\begin{equation}\label{eq:HO_HI}
U(t,0)=\exp(-iH _0 t) \, U_I(t,0),
\end{equation}
where the new evolution operator $U_I$ is obtained from
$U_I' = \tilde{H} _I(t) U_I \equiv -i H_I(t) U_I$.
The infinite Magnus series terminates in the present example. This
happens because the second-order Magnus approximant, which involves
the computation of
\begin{eqnarray}\label{2HO}
[\tilde{H}_I (t_1),\tilde{H}_I (t_2)]&=&f(t_1)f(t_2) \left( \e^{i(t_1-t_2)} [a_+,a_-] +
\e^{-i(t_1-t_2)}[a_-,a_+] \right) \nonumber \\
&=& 2if(t_1)f(t_2)\sin(t_2-t_1)
\end{eqnarray}
reduces to a scalar function. Thus the Magnus series in the Interaction
Picture furnishes the exact evolution operator irrespective of
$f(t)$:
\begin{eqnarray}\label{UHO}
U_I(t,0)&=&\exp\left( \int_0^t {\rm d}t_1 \tilde{H}_I (t_1)
-\frac{1}{2}\int_0^t {\rm d}t_1 \int_0^{t_1} {\rm d}t_2
[\tilde{H}_I (t_1),\tilde{H}_I (t_2)]\right) \nonumber \\
&=& \exp \big( -i(\alpha a_++\alpha^*a_-)-i\beta \big) \\
&=& \exp( -i\alpha a_+) \, \exp(-i\alpha^*a_-) \, \exp (
-i\beta-|\alpha|^2/2),\nonumber
\end{eqnarray}
where we have defined
\begin{eqnarray}\label{ab}
\alpha &\equiv & \int_0^t {\rm d}t_1 f(t_1) \e^{it_1},\label{eq:al} \\
\beta &\equiv & \int_0^t {\rm d}t_1 \int_0^{t_1} {\rm d}t_2
f(t_1)f(t_2) \sin(t_2-t_1).\label{eq:be}
\end{eqnarray}
Equations (\ref{UHO}) and (\ref{eq:HO_HI}) yield the exact
time-evolution operator for the linearly forced harmonic oscillator
Hamiltonian (\ref{eq:OA}) \cite{klarsfeld93dho}.
To compute transition probabilities between free harmonic
oscillator states of quantum numbers $n$ and $m$,
\begin{equation}\label{PHO}
P_{n\to m}=| \langle m|U_I |n \rangle |^2,
\end{equation}
the last form in (\ref{UHO}) is the most convenient. In particular,
assuming that the oscillator was initially in its ground state $| 0
\rangle $, we get the familiar Poisson distribution
for the transition probabilities
\begin{equation}\label{PoissonHO}
P_{0\to n}= \frac{1}{n!} \, |\alpha|^{2n} \, \exp(-|\alpha|^2).
\end{equation}
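This exact result provides a convenient benchmark. The following Python sketch (using NumPy; the constant drive $f(t)=0.2$, the final time and the truncation of the number basis are arbitrary illustrative choices) integrates the Schr\"odinger equation in the Interaction Picture for the state $U_I(t,0)|0\rangle$ and checks the populations against the Poisson distribution (\ref{PoissonHO}):

```python
import numpy as np
from math import factorial, exp

Nf = 30                                        # Fock-space truncation (ample here)
a = np.diag(np.sqrt(np.arange(1.0, Nf)), 1)    # annihilation operator a_-
adag = a.T.copy()                              # creation operator a_+
f = lambda t: 0.2                              # constant drive, illustrative choice
T, steps = 1.0, 2000

def H(t):                                      # H_I(t), with omega_0 = 1
    return f(t) * (np.exp(1j*t)*adag + np.exp(-1j*t)*a)

psi = np.zeros(Nf, dtype=complex); psi[0] = 1.0
h = T / steps
for k in range(steps):                         # RK4 on psi' = -i H_I(t) psi
    t = k * h
    k1 = -1j * (H(t) @ psi)
    k2 = -1j * (H(t + h/2) @ (psi + h/2*k1))
    k3 = -1j * (H(t + h/2) @ (psi + h/2*k2))
    k4 = -1j * (H(t + h) @ (psi + h*k3))
    psi = psi + h/6 * (k1 + 2*k2 + 2*k3 + k4)

alpha = 0.2 * (np.exp(1j*T) - 1.0) / 1j        # alpha = int_0^T f(s) e^{is} ds
pops = [abs(psi[n])**2 for n in range(4)]
poisson = [abs(alpha)**(2*n) * exp(-abs(alpha)**2) / factorial(n) for n in range(4)]
assert all(abs(p - q) < 1e-6 for p, q in zip(pops, poisson))
```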
\subsubsection{Two-level quantum systems}\label{2L}
The generic Hamiltonian for a two-level quantum system can be
written down in the form
\begin{equation}\label{H2L}
H (t)= \left( \begin{array}{ccc}
E_1(t) & \ & \ C(t) \\ C^*(t) & & E_2(t) \end{array} \right)
\end{equation}
where $E_1(t),E_2(t)$ are real functions and $C(t)$ is, in general,
a complex function of $t$. We define the solvable piece of the
Hamiltonian as the diagonal matrix
\begin{equation}\label{H2L0}
H _0(t) = \left( \begin{array}{cc}
E_1(t) & \ \ 0 \\ 0 & \ E_2(t) \end{array} \right)
\end{equation}
and all the time-dependent interaction described by the function
$C(t)$ is considered as a perturbation. In the Interaction Picture the
new Hamiltonian reads (see (\ref{HGInt}))
\begin{equation}\label{HI2L}
H _I(t)= \left( \begin{array}{ccc}
0 & \ \ & C(t) \displaystyle \exp \left( i\int_0^t {\rm d}t' \omega(t') \right) \\
C^*(t) \displaystyle \exp \left( -i\int_0^t {\rm d}t' \omega(t') \right) & & 0 \end{array} \right)
\end{equation}
with $\omega=(E_1-E_2)/\hbar$. Suppose now that $H_0$ is time-independent.
Then $U(t)=\exp( \tilde{H} _0t ) \, U_I(t)$. Without loss of generality,
$H_0$ may be rendered traceless, so that $E_1=-E_2\equiv E$. Thus
$\pm E$ denote the eigenenergies associated with the eigenvectors $|+
\rangle \equiv (1, 0)^T$, $|- \rangle \equiv (0, 1)^T$ of $H_0$,
the unperturbed system. In terms of
Pauli matrices the Hamiltonian in this case may be expressed as
\begin{equation}\label{H2Lsig}
H(t) =\frac{1}{2}\hbar \omega \sigma_3 +f(t) \sigma_1 +g(t)\sigma_2,
\end{equation}
where $f=\mathrm{Re} (C)$ and $g=-\mathrm{Im} (C)$.
Since $H _0$ is diagonal, the transition probability between
eigenstates $|+ \rangle$, $|- \rangle$ of $H_0$ is simply
\begin{equation}\label{Ppm}
P(t) = |\langle +|U_I(t)|-\rangle |^2.
\end{equation}
As the evaluation of (\ref{Ppm}) requires the computation and manipulation of
exponentials of Pauli matrices, formulas (\ref{exppaulis}) and
(\ref{cambiopict}) in section \ref{notations} come in handy here.
Next, we study two particular cases of interaction for which the
exact solution of the time evolution operator admits an analytic
expression.
{\bf 1- Rectangular step}. Suppose that in (\ref{H2Lsig}) $g=0$,
namely,
\begin{equation}\label{eq:Hround}
H(t)=\frac{1}{2}\hbar \omega \sigma_3 +f(t) \sigma_1
\end{equation}
with $f=0$ for $t<0$ and $f=V_0$ for $t \ge 0$. Alternatively, if we
restrict ourselves to computing an observable such as the transition
probability, this example is equivalent to a rectangular mound
(or rectangular barrier) of width $T=t$.
The exact solution for this problem reads
\begin{equation}\label{eq:analytic}
U(t,0)=\exp\left(- i\left( \frac{\omega}{2}\sigma_3 +
\frac{V_0}{\hbar} \sigma_1 \right) t \right),
\end{equation}
which yields the exact transition probability
\begin{equation}\label{Pex_rect}
P_{ex}=\frac{4\gamma^2}{4\gamma^2+\xi^2}\sin^2\sqrt{\gamma^2+\xi^2/4}
\end{equation}
between eigenstates
$|+ \rangle$, $|- \rangle$ of $H_0$. Here we have denoted
$\gamma \equiv V_0 t/\hbar $ and $\xi \equiv \omega t$.
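As an illustrative cross-check (not part of the original derivation), formula (\ref{Pex_rect}) can be recovered from (\ref{eq:analytic}) through the identity $\exp(-i\,\boldsymbol{b}\cdot\boldsymbol{\sigma})=\cos|\boldsymbol{b}|\,I-i\sin|\boldsymbol{b}|\,\hat{\boldsymbol{b}}\cdot\boldsymbol{\sigma}$, here with $\boldsymbol{b}=(\gamma,0,\xi/2)$. A minimal Python sketch (function names are ours):

```python
import math

def p_exact(gamma, xi):
    # Closed-form transition probability (Pex_rect) for the rectangular step
    r = math.sqrt(gamma**2 + xi**2 / 4.0)
    return 4.0 * gamma**2 / (4.0 * gamma**2 + xi**2) * math.sin(r)**2

def p_from_exponential(gamma, xi):
    # U = exp(-i b.sigma) with b = (gamma, 0, xi/2); the off-diagonal
    # element of U is -i (b_x/|b|) sin|b|, so P = (b_x/|b|)^2 sin^2|b|
    bx, bz = gamma, xi / 2.0
    b = math.sqrt(bx**2 + bz**2)
    return (bx / b)**2 * math.sin(b)**2

# The two expressions agree, and 0 <= P <= 1 as unitarity demands
print(p_exact(1.5, 2.0), p_from_exponential(1.5, 2.0))
```
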
The Interaction Picture is defined here by the explicit integration
of the diagonal piece in the Hamiltonian, so that
$U=\exp(-i\xi \sigma_3 /2) U_I$,
where $U_I$ stands for the time evolution operator in the
Interaction Picture and obeys
\begin{equation}\label{Eq:Dirac_eq}
U_I'=\tilde{H} _I(t) \, U_I , \qquad U_I (0)=I
\end{equation}
with
\begin{equation} \label{eq:H2}
H_I(t) =f(t) (\sigma_1 \cos \xi - \sigma_2 \sin \xi) .
\end{equation}
A computation with the usual time-dependent perturbation theory gives for the
first orders (formula (\ref{Dys1}))
\begin{eqnarray}\label{PT1}
P_{pt}^{(1)}&=&P_{pt}^{(2)}=\frac{4\gamma^2}{\xi^2}\sin^2(\xi/2)
\\
P_{pt}^{(3)}&=&P_{pt}^{(4)}=\frac{\gamma^2}{\xi^2}\left[ 2\sin \frac{\xi}{2}-
\frac{\gamma^2}{3\xi^2}\left( 9\sin \frac{\xi}{2}+\sin \frac{3\xi}{2}-
6\xi \cos \frac{\xi}{2} +4\sin^3 \frac{\xi}{2}\right) \right] ^2.
\nonumber
\end{eqnarray}
Notice that $P_{pt}^{(i)}>1$ may occur in the equations above
because the usual time-dependent perturbation formalism does not preserve
the unitary character of the operator $U(t)$.
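This loss of unitarity is easy to exhibit numerically from $P_{pt}^{(1)}$ in (\ref{PT1}): already at moderate parameter values the truncated series yields a "probability" exceeding unity. A small illustration in Python (function name is ours):

```python
import math

def p_pt1(gamma, xi):
    # First-order perturbative transition probability from (PT1)
    return 4.0 * gamma**2 / xi**2 * math.sin(xi / 2.0)**2

# For gamma = 1.5, xi = 1 the result is about 2.07 > 1, signalling the
# lack of unitarity of truncated time-dependent perturbation theory
print(p_pt1(1.5, 1.0))
```
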
In this example it is not difficult to compute the first four terms in the
Magnus series corresponding to $U_I(t) = \exp \Omega(t)$ in
(\ref{Eq:Dirac_eq}). To facilitate the notation, we define $s = \sin \xi$
and $c = \cos \xi$. The Magnus approximants in the Interaction
Picture may be written down in terms of Pauli matrices and read
explicitly
\begin{eqnarray}\label{M4}
\Omega_1&=&-i\frac{\gamma}{\xi}[\sigma_1 s+\sigma_2 (1-c)] \nonumber \\
\Omega_2&=&-i\left( \frac{\gamma}{\xi}\right) ^2\sigma_3 (s-\xi) \nonumber \\
\Omega_3&=&-i\left( \frac{\gamma}{\xi} \right) ^3 \frac{1}{3}\{ \sigma_1 [3\xi
(1+c)-(5+c)s]
+\sigma_2 [(3\xi - s)s-4(1-c)] \} \nonumber \\
\Omega_4&=&-i\left( \frac{\gamma}{\xi}\right) ^4 \frac{1}{3}\sigma_3 [(4 c +5)\xi
-(c+8)s].
\end{eqnarray}
The first two formulae for the approximate transition probabilities
are, respectively
\begin{eqnarray}\label{P4}
P_{M}^{(1)} &=& \sin^2\left( \frac{2\gamma}{\xi}\sin(\xi /2)\right)
\nonumber \\
P_{M}^{(2)} &=& \frac{4\gamma^2}{\xi^2} \frac{\sin^2\lambda}{\lambda^2} \sin^2(\xi/2), \quad
\lambda= \frac{\gamma}{\xi}\left[ 4\sin^2(\xi /2)+
\frac{\gamma^2}{\xi^2}(\sin \xi -\xi )^2\right] ^{1/2} .
\end{eqnarray}
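Since $\Omega_1+\Omega_2$ in (\ref{M4}) is of the form $-i\,\boldsymbol{b}\cdot\boldsymbol{\sigma}$, the approximate probabilities can also be evaluated directly from the identity $\exp(-i\,\boldsymbol{b}\cdot\boldsymbol{\sigma})=\cos|\boldsymbol{b}|\,I-i\sin|\boldsymbol{b}|\,\hat{\boldsymbol{b}}\cdot\boldsymbol{\sigma}$, avoiding matrix exponentiation altogether. A Python sketch (our own helper names), which also checks that the first-order ME is already essentially exact in the sudden limit $\xi\ll 1$:

```python
import math

def p_offdiag(bx, by, bz):
    # |<+| exp(-i b.sigma) |->|^2 = (bx^2 + by^2)/|b|^2 * sin^2 |b|
    b2 = bx**2 + by**2 + bz**2
    return (bx**2 + by**2) / b2 * math.sin(math.sqrt(b2))**2

def p_m1(gamma, xi):
    # Omega_1: components (s, 1-c, 0) scaled by gamma/xi
    k = gamma / xi
    return p_offdiag(k * math.sin(xi), k * (1.0 - math.cos(xi)), 0.0)

def p_m2(gamma, xi):
    # Omega_1 + Omega_2: adds the sigma3 component (gamma/xi)^2 (s - xi)
    k = gamma / xi
    return p_offdiag(k * math.sin(xi), k * (1.0 - math.cos(xi)),
                     k**2 * (math.sin(xi) - xi))

def p_exact(gamma, xi):
    r = math.sqrt(gamma**2 + xi**2 / 4.0)
    return 4.0 * gamma**2 / (4.0 * gamma**2 + xi**2) * math.sin(r)**2

# Sudden limit: for xi << 1 first-order ME reproduces the exact result
print(p_m1(1.5, 0.01), p_exact(1.5, 0.01))
```

One can also verify that `p_m1` coincides with the closed form $\sin^2[(2\gamma/\xi)\sin(\xi/2)]$ of (\ref{P4}).
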
We omit explicit expressions for $P_{M}^{(3)}$ and $P_{M}^{(4)}$
since they are quite involved. However, we include their outputs in
Figures \ref{plot:Rect1}, \ref{plot:Rect2} and \ref{plot:Rect3},
where we plot the first- to fourth-order approximate transition
probabilities obtained with ME in the Interaction Picture and compare them
with the exact result and with the output of perturbation theory. In
Figures \ref{plot:Rect1} and \ref{plot:Rect2} we set $\gamma=1.5$
and $\gamma=2$, respectively, whereas in Figure \ref{plot:Rect3} we
fix $\xi=1$.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=14cm]{Figures/IC_g15.eps}
\caption{Rectangular step:
Transition probabilities as a function of $\xi$, with $\gamma=1.5$.
The solid line corresponds to the exact result (\ref{Pex_rect}). Broken lines
stand for approximations obtained via ME and lines with symbols correspond
to perturbation theory, according to the legend. Computations up to fourth order,
in the Interaction Picture. } \label{plot:Rect1}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=14cm]{Figures/IC_g20.eps}
\caption{Rectangular step:
Transition probabilities as a function of $\xi$, with $\gamma=2$.
Lines are coded as in Figure \ref{plot:Rect1}.
Computations up to fourth order,
in the Interaction Picture.} \label{plot:Rect2}
\end{center}
\end{figure}
We observe that for the Magnus expansion in the Interaction Picture, the smaller
the value of the parameter $\xi$,
the better the approximate solution works. As a matter of fact, in the
sudden limit $\xi \ll 1$, ME furnishes the exact result, unlike
perturbation theory. As the intensity of the perturbation
$\gamma$ increases, the quality of the approximations deteriorates. This
effect is much more dramatic for standard perturbation theory.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=14cm]{Figures/IC_xi1.eps}
\caption{Rectangular step:
Transition probabilities as a function of $\gamma$, with $\xi=1$.
Lines are coded as in Figure \ref{plot:Rect1}.
Computations up to fourth order,
in the Interaction Picture.} \label{plot:Rect3}
\end{center}
\end{figure}
On the other hand, it is clear from (\ref{eq:H2}) that
\[
\int_{-\infty}^t \|H_I(t_1)\|_2 \, dt_1 = \int_0^t |f(t_1)| \, dt_1 = V_0 \, t,
\]
whence $\int_{-\infty}^t \|\tilde{H}_I(t_1)\|_2 \, dt_1 = \gamma$, and thus
Theorem \ref{conv-mag} guarantees that
the Magnus expansion in the Interaction Picture is convergent if $\gamma < \pi$. Notice
that this is always the case for the parameters considered in Figures
\ref{plot:Rect1}-\ref{plot:Rect3}. The estimate $\gamma < \pi$ for the convergence
domain in the Interaction Picture should be compared with the corresponding one
in the Schr\"odinger picture:
\[
\int_{-\infty}^t \|\tilde{H}(t_1)\|_2 \, dt_1 = \sqrt{\gamma^2 + \frac{\xi^2}{4}} < \pi.
\]
Notice then that, as pointed out in subsection \ref{PLT},
a change of picture allows us to improve the convergence of the
Magnus expansion.
\vspace*{0.3cm}
{\bf 2- Hyperbolic secant step: Rosen--Zener model}. In the Rosen--Zener
Hamiltonian \cite{rosen32dsg} the interaction $C(t)$ in (\ref{H2L})
is given by the real function $V(t)= V_0 \, {\rm sech}( t/T)$, where $T$
determines the time-scale. We will use the notation $\gamma=\pi
V_0T/\hbar$ and $\xi =\omega T=2ET/\hbar$.
The corresponding Hamiltonian in terms of Pauli matrices is
\begin{equation} \label{eq:RZ_H}
H(t)=E\sigma_3+V(t)\sigma_1 \equiv \boldsymbol{a}(t) \cdot \boldsymbol{\sigma},
\qquad V(t)=V_0/\cosh(t/T),
\end{equation}
with $\boldsymbol{a} \equiv (V(t),0,E)$. In the Interaction
Picture one has
\begin{equation}\label{eq:HIs}
H_I(s)=V(s)(\sigma_1 \cos(\xi s) - \sigma_2 \sin(\xi s))
\end{equation}
in terms of the dimensionless time-variable $s=t/T$.
Notice that $\xi$ measures the ratio between the
\emph{interaction} time $T$ and the \emph{internal} time of the
system $\hbar /2E$. From (\ref{eq:HIs}), and after straightforward
calculation, the first and second ME operators are readily obtained.
The exact result for the transition probability (provided the time
interval extends from $-\infty$ to $+\infty$), as well as
perturbation theory and Magnus expansion up to second order read
\cite{pechukas66ote}
\begin{eqnarray}\label{P_Sech}
P_{ex}&=& \frac{\sin^2\gamma}{\cosh^2(\pi\xi/2)} \nonumber \\
P_{pt}^{(1)}&=&P_{pt}^{(2)}=\frac{\gamma^2}{\cosh^2(\pi\xi /2)} \nonumber \\
P_{M}^{(1)}&=&\sin^2[\gamma/\cosh(\pi\xi /2)] \nonumber \\
P_{M}^{(2)}&=&\frac{\sin^2\lambda}{\lambda^2}\frac{\gamma^2}{\cosh^2(\pi\xi
/2)} \\
\lambda&=&\gamma\left[ \frac{1}{\cosh^2(\pi\xi/2)}+\frac{\gamma^2g^2(\xi)}{\pi^4}\right] ^{1/2},
\quad g(\xi)=8\xi \sum_{k=0}^\infty \frac{2k+1}{[(2k+1)^2+\xi^2]^2}.
\nonumber
\end{eqnarray}
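The formulae in (\ref{P_Sech}) are straightforward to evaluate; the series $g(\xi)$ converges quickly and may be truncated after a few hundred terms. A pure-Python sketch (our own names), which also verifies that both Magnus orders reproduce $P_{ex}$ in the sudden limit $\xi\ll 1$:

```python
import math

def g_series(xi, nterms=500):
    # g(xi) = 8 xi sum_{k>=0} (2k+1)/[(2k+1)^2 + xi^2]^2, truncated
    return 8.0 * xi * sum((2*k + 1) / ((2*k + 1)**2 + xi**2)**2
                          for k in range(nterms))

def p_ex(gamma, xi):
    return math.sin(gamma)**2 / math.cosh(math.pi * xi / 2.0)**2

def p_m1(gamma, xi):
    return math.sin(gamma / math.cosh(math.pi * xi / 2.0))**2

def p_m2(gamma, xi):
    ch = math.cosh(math.pi * xi / 2.0)
    lam = gamma * math.sqrt(1.0 / ch**2
                            + gamma**2 * g_series(xi)**2 / math.pi**4)
    return (math.sin(lam) / lam)**2 * gamma**2 / ch**2

# Sudden limit: both ME orders approach the exact value sin^2(gamma)
print(p_ex(1.5, 1e-4), p_m1(1.5, 1e-4), p_m2(1.5, 1e-4))
```
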
In Figures \ref{plot:Sech2} and \ref{plot:Sech3} we plot some
results from the formulae in (\ref{P_Sech}). In Figure
\ref{plot:Sech2} we take $\gamma=1.5$ and in Figure
\ref{plot:Sech3} we set $\xi=0.3$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=14cm]{Figures/Sech_g15.eps}
\caption{Rosen--Zener model: Transition probabilities (\ref{P_Sech})
as a function of $\xi$, with $\gamma=1.5$. The solid line stands for
the exact result. Broken lines stand for approximations obtained via
ME and triangles correspond to perturbation theory, according to the
legend. Computations up to second order, in the Interaction Picture.
} \label{plot:Sech2}
\end{center}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=14cm]{Figures/Sech_xi03.eps}
\caption{Rosen--Zener model:
Transition probabilities as a function of $\gamma$, with $\xi=0.3$.
Lines are coded as in Figure \ref{plot:Sech2}.
Computations up to second order, in the Interaction Picture.}
\label{plot:Sech3}
\end{center}
\end{figure}
Similarly to the case of the rectangular step, we observe in Figure
\ref{plot:Sech2} that \emph{the Magnus expansion works better in the sudden regime} defined
by $\xi \ll 1$, namely, when the internal time of the system
$\hbar/2E$ is much larger than the time scale $T$ of the
perturbation. Also, Figure \ref{plot:Sech3} illustrates how the
approximations deteriorate as the intensity $\gamma$ increases.
Notice in both figures the unitarity violation of the approximation built with the usual
perturbation theory.
In this case a simple calculation shows that
\[
\int_{-\infty}^{\infty} \| \tilde{H}_I(t)\|_2 \, dt = (1/\hbar) \, \int_{-\infty}^{\infty} |V(t)| \, dt =
V_0 \pi T /\hbar = \gamma,
\]
and thus the Magnus series converges at least for $\gamma < \pi$.
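The value of this integral is easy to confirm numerically, since $\int_{-\infty}^{\infty}{\rm sech}(t/T)\,{\rm d}t=\pi T$. A quick trapezoidal check in Python (illustrative only; names are ours):

```python
import math

def sech_integral(T=1.0, span=40.0, n=20000):
    # Trapezoidal approximation of the integral of sech(t/T) on
    # [-span*T, span*T]; the integrand decays like exp(-|t|/T), so
    # the truncation error is negligible for span = 40
    a, b = -span * T, span * T
    h = (b - a) / n
    s = 0.5 * (1.0 / math.cosh(a / T) + 1.0 / math.cosh(b / T))
    s += sum(1.0 / math.cosh((a + i * h) / T) for i in range(1, n))
    return h * s

# Approaches pi * T, so the L1 norm of V(t) is V0 * pi * T as claimed
print(sech_integral())
```
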
\subsection{ME in the Adiabatic Picture}\label{Ex:AME}
Here we will illustrate the effect of using the Adiabatic Picture
introduced in Section \ref{PLT}. The use of this type of preliminary
transformation is convenient whenever the time scale of the
interaction is much larger than the proper time of the unperturbed
system.
Since an adiabatic regime requires a smooth profile for the
perturbation (namely, the existence of derivatives), the case of the
rectangular step cannot properly be used for the sake of
illustration.
\subsubsection{Linearly forced harmonic oscillator}\label{LHO_ad}
For the linearly driven harmonic oscillator the procedure yields the
exact solution, as in the preceding subsection \ref{2L}, although the
method is technically somewhat more involved \cite{klarsfeld93dho}.
\subsubsection{Rosen--Zener model}\label{RZ_ad}
Following \cite{klarsfeld92cmi} we will deal with the Rosen--Zener
model (see Section \ref{2L} and \cite{pechukas66ote}) since it
allows a clear illustration of the adiabatic regime.
The preliminary linear transformation $G(s)$ defined in
(\ref{GAdiab}) for the Hamiltonian (\ref{eq:RZ_H}) is given by
$G(s)=\boldsymbol{\hat{b}}\cdot \boldsymbol{\sigma}$, where the unit
vector $\boldsymbol{\hat{b}} $ points in the direction
$\boldsymbol{b}=\boldsymbol{\hat{a}}+\boldsymbol{\hat{k}}$
($\boldsymbol{\hat{k}}=$ unit vector along the $z$-axis). Recall that
$s=t/T$ is the dimensionless time-variable. The evolution operator
gets then factorized as
\begin{equation}
U_G(s)=G^\dag (s)U(s)G(s_0),
\end{equation}
which according to (\ref{UGeq}), satisfies the equation
\begin{equation}
\frac{dU_G}{ds} =\tilde H_G(s)U_G.
\end{equation}
Here $\tilde H_G\equiv -i H_G/\hbar$ is given by
\begin{equation}\label{HGs}
\tilde H_G(s)=\frac{T}{i\hbar} a \sigma_3- i \frac{\theta'}{ 2} \sigma_2 ,
\end{equation}
with $a^2=E^2_0+V^2_0/\cosh^2 s$ and $\cot \theta =(E_0/V_0)\cosh
s$.
Next, in analogy to (\ref{eq:HIs}), we introduce the Adiabatic
Interaction Picture which allows us to integrate the diagonal piece
of $\tilde{H}_G(s)$. The time-evolution operator gets eventually factorized
as
\begin{equation}
U_G(s)=\exp \left( (-iT/\hbar)\int^{\infty}_0 \textrm{d}s'\,
a(s')\sigma_3 \right) U_G^{(I)}(s) \exp \left(
(-iT/\hbar)\int_{-\infty}^0 \textrm{d}s'\, a(s')\sigma_3 \right) ,
\end{equation}
where $U_G^{(I)}(s)$ obeys the equation
\begin{equation}
\frac{dU_G^{(I)}}{ds}=\tilde H_G^{(I)}(s) \, U_G^{(I)},
\end{equation}
with
\begin{equation}\label{HGIs}
\tilde H_G^{(I)}(s)= -i(\theta' /2)[\sigma_1 \sin A(s)+\sigma_2
\cos A(s)],
\end{equation}
and
\begin{equation}\label{AHGIs}
A(s)=\frac{2T}{\hbar} \int_0^s \textrm{d}s'\, a(s') =
\frac{\xi}{2}\ln \frac{1+\rho}{1-\rho}+\frac{2\gamma}{\pi}\arctan
\frac{2\gamma}{\pi\xi}\rho .
\end{equation}
We have introduced the definition
\begin{equation}
\rho=\{ 1-[1+(\pi \xi/2\gamma)^2]\sin^2 \theta\} ^{1/2},
\end{equation}
in terms of the dimensionless strength parameter $\gamma=\pi
V_0T/\hbar$ and $\theta$. Using the ME to first order in the
adiabatic basis (which coincides with the fixed one at $s=\pm
\infty$) one finds the spin-flip approximate (first order)
transition probability
\begin{equation}\label{P_ad_M1}
^{ad}P_{M}^{(1)}=\sin^2 \left[ \int_0^{\theta_0} \textrm{d} \theta
\sin A(s(\theta)) \right].
\end{equation}
In Figure \ref{plot:RZ_Ad} we compare the numerical results given by
the new approximation (\ref{P_ad_M1}) with the exact formula
$P_{ex}$ in (\ref{P_Sech}). For the sake of illustration we plot
also the results in the usual Interaction Picture to first order in
ME (see $P_{M}^{(1)}$ in (\ref{P_Sech})). The gain
achieved when using the adiabatic ME in the intermediate regime,
namely, moderate values of $\xi$, is noteworthy, even though only the first
order is considered. Here the adiabatic regime corresponds to large
values of $\xi = 2 E T/\hbar$ ($\varepsilon = 1/T \ll 1$).
It should also be noticed that for the Hamiltonian
(\ref{HGIs}) one has
\[
\int_{s_0}^{s} \| \tilde{H}_G^{(I)}(s_1)\|_2 \, ds_1 = \frac{1}{2} |\theta(s) - \theta(s_0)|
< \frac{1}{2} 2 \pi = \pi
\]
and thus the convergence condition given by Theorem \ref{conv-mag} is
always satisfied. In other words, for this example
\emph{the Magnus expansion is always convergent in the Adiabatic Interaction
Picture}.
More involved illustrative examples of ME in the Adiabatic Picture
may be found in \cite{klarsfeld92mai,klarsfeld93dho}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=14cm]{Figures/RZ_P_D_Ad.eps}
\caption{Rosen--Zener model: Transition probability as a function of $\xi$,
with $\gamma =1$. The solid line stands for the exact result in (\ref{P_Sech}).
The remaining lines stand for first order computations with ME (circles) and
perturbation theory (triangles). Open symbols are for the Adiabatic Picture
and solid symbols for the Interaction Picture.} \label{plot:RZ_Ad}
\end{center}
\end{figure}
\subsection{Fer and Wilcox infinite-product expansions}\label{Ex:FW}
Next we illustrate the use of Fer and Wilcox infinite-product expansions
by using the same time-dependent systems as before.
\subsubsection{Linearly forced harmonic oscillator}
Since the commutator $[a_-, a_+]$ is a $c$-number, Fer iterated
Hamiltonians $H^{(n)}$ with $n>1$ eventually vanish so that $F_n=0$
for $n>2$. The Wilcox operators $W_n$ with $n>2$ in eq.(\ref{W1})
vanish for the same reason. Thus, in this particular case, the
second-order approximation in either method leads to the exact
solution of the Schr\"odinger equation just as ME did. To sum up,
the final result reads
\begin{equation}\label{FWex3}
U_I= \e^{\Omega _1+\Omega _2}= \e^{W_1} \e^{W_2}= \e^{F_1} \e^{F_2}=
\e^{-i\beta} \e^{-i(\alpha a_+ + \alpha^* a_-)}\,,
\end{equation}
where $\alpha(t)$ and $\beta(t)$ are given in (\ref{eq:al}) and
(\ref{eq:be}), respectively.
\subsubsection{Two-level quantum system: Rectangular step}
For the Hamiltonian (\ref{eq:Hround}), the first-order Fer and Wilcox operators
in the Interaction Picture verify
\begin{equation}\label{FWex7}
F_1 = W_1 = \Omega_1 = \int_0^t {\rm d}t_1 \tilde{H} _I(t_1),
\end{equation}
where $\tilde{H}_I(t)$ is given by (\ref{eq:H2}). The explicit expression is collected
in (\ref{M4}) (first equation). Analogously, the second equation there
also corresponds to the second-order Wilcox operator $W_2 = \Omega_2$.
To proceed further with Fer's method we must calculate the modified
Hamiltonian $\tilde{H} ^{(1)}$ in (\ref{Fer2}). After straightforward algebra
one eventually obtains
\begin{equation}\label{FWex12}
\tilde{H} ^{(1)}=\frac{1}{2\theta}\left( \frac{\sin^2\theta}{\theta} - \sin2\theta +
\frac{1}{\theta}\left( \frac{\sin2\theta}{2\theta} -
\cos2\theta\right) F_1\right) \, [F_1,\tilde{H} _I]\; ,
\end{equation}
where $\theta=(2\gamma /\xi)\sin(\xi /2)$ (notice that $\tilde{H} ^{(1)}$
and therefore $F_2$ depend on $\sigma_1$ and $\sigma_2$, while
$W_2$ is proportional to $\sigma_3$). Since it does not seem
possible to derive an analytical expression for $F_2$, the
corresponding matrix elements have been computed by replacing
the integral by a conveniently chosen quadrature.
The transition probability $P(t)$ from an initial state with spin up
to a state with spin down (or vice versa) is given by (\ref{Ppm}).
This expression has been computed assuming: $U_I\simeq
\e^{F_1}=\e^{W_1}$, $U_I\simeq \e^{F_1} \e^{F_2}$, $U_I\simeq
\e^{W_1} \e^{W_2}$, $U_I\simeq \e^{F_1} \e^{F_2} \e^{F_3}$ and $U_I\simeq
\e^{W_1} \e^{W_2} \e^{W_3}$, and the results have been compared with the
exact analytical solution (\ref{Pex_rect}).
In Figures \ref{plot:FW1} and \ref{plot:FW2} we show the transition probability $P$ as a
function of $\xi$ for two different values of $\gamma$ ($\gamma = 1.5$ and
$\gamma = 2$, respectively), while in
Figure \ref{plot:FW3} we have plotted $P$ versus $\gamma$ for $\xi$
fixed. Notice that the second order in the Wilcox expansion does not
contribute to the transition probability (similarly to what
happens in perturbation theory). On the other hand, Fer's
second-order approximation is already in remarkable agreement with
the exact result, whereas the third order, at the cost of a much larger
computational effort, cannot even be distinguished from the exact
result in Figure \ref{plot:FW2}.
Wilcox approximants preserve unitarity but do not yield acceptable
approximations.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=14cm]{Figures/IC_F_W123_g15.eps}
\caption{Rectangular step: Transition probability as a function of $\xi$, with $\gamma=1.5$.
The solid line corresponds to the exact result (\ref{Pex_rect}). Broken lines stand for Fer and
Wilcox approximations up to third order.
Computations up to third order, in the Interaction Picture.} \label{plot:FW1}
\end{center}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=14cm]{Figures/IC_F_W123_g20.eps}
\caption{Rectangular step: Transition probability as a function of $\xi$, with $\gamma=2$.
Lines are coded as in Figure \ref{plot:FW1}.
Computations up to third order, in the Interaction Picture.} \label{plot:FW2}
\end{center}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=14cm]{Figures/IC_F_W123_xi1.eps}
\caption{Rectangular step: Transition probability as a function of $\gamma$, with $\xi=1$.
Lines are coded as in Figure \ref{plot:FW1}.
Computations up to third order, in the Interaction Picture.} \label{plot:FW3}
\end{center}
\end{figure}
\subsection{Voslamber iterative method}\label{Ex:IME}
Just to keep the same structure as in preceding subsections, we
mention that the Voslamber iterative method of subsection \ref{Voslamber}
also yields the exact solution for the
linearly driven harmonic oscillator after computing the second
iteration.
Next, for the two-level system with a rectangular step
described by the Hamiltonian (\ref{eq:Hround}) we compute the
second iterate $\Omega^{(2)}$ and compare
with second order ME approximation for $U_I$. As a test, we
shall obtain again the transition probability $P(t)$
given by (\ref{Pex_rect}). The
expression (\ref{Ppm}) will be calculated here assuming: $U_I\simeq
\exp{\Omega^{(1)}} =\exp \Omega_1$, $U_I\simeq \exp({\Omega_1+\Omega_2})$ and
$U_I\simeq \exp{\Omega^{(2)} }$.
The second order Magnus approximation to the transition probability
is given by (\ref{P4}), whereas the second iterate is obtained from
(\ref{vos.10}),
\begin{eqnarray}\label{eq:dressed2+}
\Omega^{(2)}(t)&=& -i\frac{\gamma}{\omega} \int_0^{\omega t} \{
[\sin^2(\Delta)+\cos^2(\Delta)\cos{\xi}]\,\sigma_1 \nonumber \\
&&+\cos^2(\Delta)\sin{\xi}\,\sigma_2
- \sin(2\Delta)\sin(\xi /2)\,\sigma_3 \}{\rm d} \xi,
\end{eqnarray}
where $\Delta\equiv \frac{\gamma}{\omega}|\sin(\xi/2)|$, $\gamma
=V_0 t/\hbar$, $\xi =\omega t$. Since it does not seem possible to
derive an analytical expression for $\Omega^{(2)} $, the corresponding
matrix elements have been computed by approximating the integral in
(\ref{eq:dressed2+}) with a sufficiently accurate quadrature.
In Figure \ref{plot:G}, the various approximated transition
probabilities as well as the exact result (\ref{Pex_rect}) have
been plotted as a function of $\xi$ for a fixed value $\gamma =1.5$.
We observe that the approximation from the second iterate follows the
trend of the exact solution better than the second-order
Magnus approximation does.
In Figure \ref{plot:xi} we have plotted the corresponding transition
probabilities versus $\gamma$ for the fixed value $\xi =1$. Although
locally the second-order Magnus approximation may be more accurate,
the second iterate seems to mimic the trend of the exact solution over a
longer interval of $\gamma$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=14cm]{Figures/Voslamber_xi.eps}
\caption{Rectangular step:
Transition probability in the two level system as a function of $\xi$, for
$\gamma =1.3$: Exact result (\ref{Pex_rect}) (solid line), second iterate of
Voslamber method
(dashed line), second order ME (dotted line) and
first Voslamber iterate (equal to first order in ME) (dash-dotted line).
Computations are done in the Interaction Picture.} \label{plot:G}
\end{center}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=14cm]{Figures/Voslamber_g.eps}
\caption{Rectangular step:
Transition probability in the two level system as a function of $\gamma $, for
$\xi =1$. Lines are coded as in Figure \ref{plot:G}.
Computations are done in the Interaction Picture.} \label{plot:xi}
\end{center}
\end{figure}
As has already been pointed out above, the Magnus expansion works
better the more sudden the perturbation. The re-summation
involved in the iterative method thus improves matters somewhat. Further
results on the present example may be found in \cite{oteo05iat}.
\subsection{Linear periodic systems: Floquet--Magnus formalism}
\label{Ex:FME}
Next we deal with a periodically driven harmonic oscillator, for which
the infinite expansions obtained from the Floquet--Magnus recurrences
(see subsection \ref{Floquet}) can be summed in closed form. Comparison with the exact
solution illustrates the feasibility of the method.
The particular system we consider is described by the Hamiltonian (\ref{eq:OA})
with $f(t)=\frac{\beta}{\sqrt{2}}
\cos \omega t$ and $\omega_0 < \omega$. Once the recurrences of the Floquet--Magnus expansion (see
section \ref{Floquet}) are explicitly computed for several orders, their
general term may be guessed by inspection. For the Floquet operator
we get
\begin{equation}
F=-i\left[ \frac{\omega _{0}}{2}\left( p^{2}+q^{2}\right) -\beta \frac{%
\omega _{0}}{\omega }\sum_{k=0}^{\infty }\left( \frac{\omega _{0}}{\omega }%
\right) ^{k}q+\beta ^{2}\frac{\omega _{0}}{4\omega
^{2}}\sum_{k=0}^{\infty }\left( 2k+1\right) \left( \frac{\omega
_{0}}{\omega }\right) ^{k}\right] ,
\end{equation}
and the associated transformation results from
\begin{eqnarray} \label{eq:FLambda}
\Lambda (t) & = & i \beta ^{2}\frac{\omega _{0}}{\omega ^{3}}\left[ \sin
\left(
\omega t\right) \sum\limits_{k=0}^{\infty }\left( k+1\right) \left( \frac{%
\omega _{0}}{\omega }\right) ^{k}-\frac{\omega t}{2}\sum\limits_{k=0}^{%
\infty }\left( \frac{\omega _{0}}{\omega }\right) ^{k}\right] - \nonumber \\
& & i\left[ \sin \left( \omega t\right) q+\omega _{0}\left( \cos
\left( \omega
t\right) -1\right) p\right] \frac{\beta }{\omega }\sum\limits_{k=0}^{%
\infty }\left( \frac{\omega _{0}}{\omega }\right) ^{k}.
\end{eqnarray}
The resulting series may be summed in closed form thus yielding the
Floquet operator
\begin{equation}
F=-i\frac{\omega _{0}}{2}\left[ \left( q-\frac{\beta }{\omega
_{0}(\rho ^{2}-1)}\right) ^{2}+p^{2}\right] -i\frac{\beta
^{2}}{4\omega _{0}\left( \rho ^{2}-1\right) },
\end{equation}
with $\rho \equiv \omega /\omega _{0}$. Its eigenvalues are the
so-called \textit{Floquet eigenenergies} \cite{lefebvre97thf}
\begin{equation}
E_{n}=\omega _{0}\left( n+\frac{1}{2}\right) +\frac{\beta
^{2}}{4\omega _{0}\left( \rho ^{2}-1\right) }.
\end{equation}
The corresponding $\Lambda $ transformation after summation of the
series in (\ref{eq:FLambda}) is
\begin{equation}
\Lambda (t)=i\frac{\beta /\omega _{0}}{1-\rho ^{2}}\left[ \left(
\rho \sin
\omega t\right) q+(\cos \omega t-1)p+\left( \frac{2\rho ^{2}}{1-\rho ^{2}}%
+\cos \omega t\right) \frac{\beta \sin \omega t}{4\omega }\right] .
\end{equation}
Notice that, as they should, both operators are skew-Hermitian and
reproduce the exact solution of the problem.
\section{Numerical integration methods based on the Magnus expansion}
\label{section5}
\subsection{Introduction}
The Magnus expansion as formulated in section 2 has found extensive use in
mathematical physics, quantum chemistry, control theory, etc.,
essentially as a perturbative tool in the treatment of the linear
equation
\begin{equation} \label{NI.1}
Y^\prime = A(t) Y, \qquad Y(t_0) = Y_0.
\end{equation}
When the recurrence (\ref{eses})-(\ref{omegn}) is applied, one is
able to get explicitly the successive terms $\Omega_k$ in the
series defining $\Omega$ as linear combinations of multivariate
integrals containing commutators acting iteratively on the
coefficient matrix $A$, as in (\ref{O1})-(\ref{O4}). As a result,
with this scheme analytical approximations to the exact solution
are constructed explicitly. These approximate solutions are fairly
accurate inside the convergence domain, especially when high order
terms in the Magnus series are taken into account, as illustrated
by the examples considered in section \ref{section4}.
There are several drawbacks, however, involved in the procedure
developed so far, especially when one tries to find accurate approximations to the
solution for very long times. The first one is implicitly contained in the
analysis done in section \ref{section2}: the size of the
convergence domain of the Magnus series may be relatively small. The logarithm of the
exact solution $Y(t)$ may have complex singularities, and this implies that no series
expansion can converge beyond the first singularity. This
disadvantage may to some extent be avoided by using different
pictures, e.g. the transformations shown in section \ref{PLT}, in
order to enlarge the convergence domain of the Magnus expansion, a fact
also illustrated by several examples in section 4.
Unfortunately, these preliminary transformations sometimes
either do not guarantee convergence or the rate of convergence of
the series is very slow. In that case, accurate results can only
be obtained provided a large number of terms in the series are taken
into account.
The second drawback is the increasingly complex structure of the
terms $\Omega_k$ in the Magnus series: each $\Omega_k$ is a
$k$-multivariate integral involving a linear combination of
$(k-1)$-nested commutators of $A$ evaluated at different times
$t_i$, $i=1, \ldots, k$. Although in some cases these expressions
can be computed explicitly (for instance, when the elements of $A$
and its commutators are polynomial or trigonometric functions), in
general a special procedure has to be designed to approximate
multivariate integrals and reduce the number of commutators
involved.
When the entries of the coefficient matrix $A(t)$ are complicated
functions of time or they are only known for certain values of
$t$, numerical approximation schemes are unavoidable. In many
cases it is thus desirable to obtain just numerical approximations to
the exact solution at many different times. This section is
devoted precisely to the Magnus series expansion as a tool to build
numerical integrators for equation (\ref{NI.1}).
Before
embarking ourselves in exposing the technical details contained in
this construction, let us first introduce several concepts which are commonplace
in the context of the numerical integration of differential equations.
Given the general
(nonlinear) ordinary differential equation (ODE)
\begin{equation}\label{nde.1}
{\bf x}' = {\bf f}(t,{\bf x}), \qquad \qquad {\bf x}(t_0)={\bf
x}_0 \in \mathbb{C}^d,
\end{equation}
standard numerical integrators, such as Runge--Kutta and multistep methods,
proceed as follows. First the whole time interval $[t_0,t_f]$ is split into $N$ subintervals,
$[t_{n-1},t_n], \ n=1,\ldots,N$, with $t_N=t_f$, and subsequently the value of
${\bf x}(t_n)$ is approximated with a time-stepping advance procedure of the form
\begin{equation} \label{nde.2}
\mathbf{x}_{n+1} = \Phi(h_n,\mathbf{x}_n,\ldots,h_0,\mathbf{x}_0)
\end{equation}
starting from $\mathbf{x}_0$. Here the map $\Phi$ depends on the
specific numerical method and $h_n=t_{n+1}-t_n$ are the time
steps. For simplicity in the presentation we consider a constant
time step $h$, so that $t_n=t_0+n \, h$. In this way one gets ${\bf
x}_{n+1}$ as an approximation to ${\bf x}(t_{n+1})$. In other
words, the exact evolution of the system (\ref{nde.1}) is replaced
by the discrete or numerical flow (\ref{nde.2}). The simplest of
all numerical methods for (\ref{nde.1}) is the \emph{explicit
Euler scheme}
\begin{equation} \label{nde.3}
\mathbf{x}_{n+1} = \mathbf{x}_n + h \, {\bf f}(t_n,{\bf x}_n).
\end{equation}
It computes approximations $\mathbf{x}_n$ to the values
$\mathbf{x}(t_n)$ of the solution using one explicit evaluation of
$\mathbf{f}$ at the already computed value $\mathbf{x}_{n-1}$.
In general, the
numerical method (\ref{nde.2}) is said to be of order $p$ if,
assuming ${\bf x}_n= {\bf x}(t_n)$, one has ${\bf x}_{n+1}={\bf
x}(t_{n+1})+\mathcal{O}(h^{p+1})$. Thus, in particular, the Euler
method is of order one.
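The meaning of order one is easily observed in practice: halving the step size $h$ roughly halves the global error of the Euler scheme (\ref{nde.3}). A minimal Python illustration on the scalar test problem $x'=-x$, $x(0)=1$ (function names are ours):

```python
import math

def euler(f, t0, x0, h, n):
    # Explicit Euler scheme: x_{k+1} = x_k + h f(t_k, x_k)
    t, x = t0, x0
    for _ in range(n):
        x = x + h * f(t, x)
        t += h
    return x

f = lambda t, x: -x          # exact solution: x(t) = exp(-t)
err = lambda h: abs(euler(f, 0.0, 1.0, h, round(1.0 / h)) - math.exp(-1.0))

# err(h)/err(h/2) is close to 2, the signature of a first-order method
print(err(0.01) / err(0.005))
```
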
Of course, more elaborate and efficient general purpose
algorithms, using several $\mathbf{f}$ evaluations per step, have
been proposed over the years for the numerical treatment of
equation (\ref{nde.1}). In fact, any standard software package and program
library contains dozens of routines aimed at providing numerical
approximations with varying degrees of accuracy, including
(explicit and implicit) Runge--Kutta methods, linear multistep
methods, extrapolation schemes, etc, with fixed or adaptive step
size. They are designed in such a way that the user has to provide
only the initial condition and the function $\mathbf{f}$ to obtain
approximations at any given time.
This being the case, one may ask the following question:
if general purpose integrators are widely available for the
integration of the linear equation (\ref{NI.1}) (which is a particular
case of (\ref{nde.1})), what is the point of designing new and somewhat
sophisticated algorithms for this specific problem?
It turns out that, just as for classical
time-dependent perturbation
theory, the qualitative properties of the exact
solution are not preserved by the numerical approximations
obtained by standard integrators.
This motivates the study of the Magnus expansion
with the ultimate goal of constructing numerical integration methods. We will
show that highly accurate schemes can be indeed designed, which
in addition
preserve qualitative properties of the system. The procedure
can also incorporate the
tools developed for the analytical treatment, such as preliminary
linear transformations, to end up with improved numerical
algorithms.
We first summarize the main features of the well-known class of
Runge--Kutta methods as representative of integrators of the form
(\ref{nde.2}). They are introduced for the general nonlinear ODE
(\ref{nde.1}) and subsequently adapted to the linear case
(\ref{NI.1}).
\subsection{Runge--Kutta methods}
\label{secRK}
The Runge--Kutta (RK) class of methods is possibly the most
frequently used family of algorithms for the numerical solution of ODEs.
Among them, perhaps the most successful
during more than half a century has been the 4th-order method, which applied
to equation (\ref{nde.1}) provides the following numerical
approximation for the integration step $t_n \mapsto t_{n+1} = t_n + h$:
\begin{eqnarray} \label{RK4}
{\bf Y}_{1} & = & {\bf x}_{n} \nonumber \\
{\bf Y}_{2} & = &
{\bf x}_{n} + \frac{h}{2} {\bf f}(t_{n},{\bf Y}_{1})
\nonumber \\
{\bf Y}_{3} & = &
{\bf x}_{n} + \frac{h}{2} {\bf f}(t_{n}+\frac{h}{2},{\bf Y}_{2}) \\
{\bf Y}_{4} & = &
{\bf x}_{n} + h {\bf f}(t_{n}+\frac{h}{2},{\bf Y}_{3}) \nonumber \\
{\bf x}_{n+1} & = & {\bf x}_{n} +
\frac{h}{6} \left( {\bf f}(t_{n},{\bf Y}_{1}) +
2{\bf f}(t_{n}+\frac{h}{2},{\bf Y}_{2}) +
2{\bf f}(t_{n}+\frac{h}{2},{\bf Y}_{3}) +
{\bf f}(t_{n}+h,{\bf Y}_{4}) \right). \nonumber
\end{eqnarray}
Notice that the function ${\bf f}$ can always be computed
explicitly because each ${\bf Y}_{i}$ depends only on the ${\bf
Y}_{j}, \ j<i$, previously evaluated.
To measure the computational cost
of the method, it is usual to assume that the evaluation of the
function ${\bf f}(t,{\bf x})$ is the most time-consuming part. In this
sense, scheme (\ref{RK4}) requires four evaluations, which is
precisely the number of \emph{stages} (or inner steps) in the algorithm.
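In the equivalent `slope' formulation $k_i = {\bf f}(t_n + c_i h, {\bf Y}_i)$, the scheme (\ref{RK4}) takes only a few lines of code. The following Python sketch (the function name is ours) integrates the scalar test problem $x' = x$ on $[0,1]$:

```python
import math

def rk4_step(f, t, x, h):
    """One step of the classical 4th-order Runge-Kutta method (RK4),
    written in terms of the slopes k_i = f(t_n + c_i h, Y_i)."""
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# Test problem x' = x, x(0) = 1; the exact value at t = 1 is e.
x, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    x = rk4_step(lambda t, x: x, t, x, h)
    t += h
```

For this smooth problem the global error behaves as $\mathcal{O}(h^4)$, so with $h=0.01$ the computed value agrees with $\mathrm{e}$ to about ten digits.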
The general class of $s$-stage Runge--Kutta methods
is characterized by the real numbers $a_{ij}$,
$b_i$ ($i,j=1, \ldots, s$) and $c_i = \sum_{j=1}^s a_{ij}$, as
\begin{eqnarray}
{\bf Y}_{i} & = & {\bf x}_{n} +
h \sum_{j=1}^{s}a_{ij} \, {\bf f}(t_{n}+c_{j}h,{\bf Y}_{j}) ,
\qquad i=1,\ldots,s \nonumber \\
{\bf x}_{n+1} & = & {\bf x}_{n} +
h \sum_{i=1}^{s}b_{i} \, {\bf f}(t_{n}+c_{i}h,{\bf Y}_{i}) ,
\label{RK-general}
\end{eqnarray}
where ${\bf Y}_i$, $ i=1,\ldots,s$ are the intermediate stages.
For simplicity, the associated coefficients are usually displayed
with the so-called \emph{Butcher tableau}
\cite{butcher87tna,hairer93sod} as follows:
\begin{equation} \label{RK-tablero}
\begin{array}{c|ccccc}
c_1 & a_{11} & & \ldots & & a_{1s} \\
\vdots & \vdots & & & & \vdots \\
c_s & a_{s1} & & \ldots & & a_{ss} \\
\hline
& b_1 & & \ldots & & b_s \end{array}
\end{equation}
If $a_{ij}=0$, $j\geq i$, then the intermediate stages ${\bf Y}_i$ can be evaluated
recursively and the method is explicit. In that case the zero $a_{ij}$
coefficients (in the upper triangular part of the tableau) are omitted
for clarity. With
this notation, `the' 4th-order Runge--Kutta method (\ref{RK4}) can be expressed as
\begin{equation} \label{RK4-tablero}
\begin{array}{c|cccc}
0 & & & & \\
\frac{1}{2} & \frac{1}{2} & & & \\
\frac{1}{2} & 0 & \frac{1}{2} & & \\
1 & 0 & 0 & 1 & \\
\hline & \ \frac{1}{6} \ & \ \frac{2}{6} \ & \ \frac{2}{6} \ &
\ \frac{1}{6}
\end{array}
\end{equation}
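A generic implementation of (\ref{RK-general}) for explicit methods, driven directly by the Butcher tableau, can be sketched as follows (a Python sketch with names of our choosing); here it is fed with the tableau (\ref{RK4-tablero}):

```python
import math

def explicit_rk_step(f, t, x, h, a, b, c):
    """One step of an explicit s-stage Runge-Kutta method given its
    Butcher tableau; a must be strictly lower triangular, so each
    stage uses only previously computed ones."""
    s = len(b)
    k = []
    for i in range(s):
        yi = x + h * sum(a[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i]*h, yi))
    return x + h * sum(b[i] * k[i] for i in range(s))

# Butcher tableau of `the' 4th-order method.
a = [[0,   0,   0, 0],
     [1/2, 0,   0, 0],
     [0,   1/2, 0, 0],
     [0,   0,   1, 0]]
b = [1/6, 2/6, 2/6, 1/6]
c = [0, 1/2, 1/2, 1]

# Integrate x' = x on [0, 1]; the exact value at t = 1 is e.
x, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    x = explicit_rk_step(lambda t, x: x, t, x, h, a, b, c)
    t += h
```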
Otherwise, the scheme is implicit and requires the numerical
solution of a system of $s \, d$ nonlinear equations
of the form
\begin{equation}\label{implicit}
{\bf y} = {\bf X}_{n} + h \, {\bf G}(h,{\bf y}),
\end{equation}
where ${\bf y}=({\bf Y}_1,\ldots,{\bf Y}_s)^T,{\bf X}_n=({\bf x}_n,\ldots,{\bf x}_n)^T
\in \mathbb{R}^{sd}$, and ${\bf G}$ is a function which depends on the method. A standard
procedure to get $\mathbf{x}_{n+1}$ from
(\ref{implicit}) is to apply simple fixed-point iteration:
\begin{equation}\label{implicit_recurs}
{\bf y}^{[j]} = {\bf X}_{n} +
h \, {\bf G}(h,{\bf y}^{[j-1]} ), \qquad
j=1,2,\ldots
\end{equation}
When $h$ is sufficiently small, the iteration starts with ${\bf
y}^{[0]}={\bf X}_{n}$ and stops once $ \| {\bf y}^{[j]}- {\bf
y}^{[j-1]}\| $ is smaller than a prefixed tolerance. Of course,
more sophisticated techniques can be used \cite{hairer93sod}.
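As a minimal illustration of the iteration (\ref{implicit_recurs}), consider the implicit midpoint rule (the 1-stage Gauss method, $a_{11}=1/2$, $b_1=1$, $c_1=1/2$) applied to $x'=f(t,x)$. For the linear test problem $x'=-x$ the stage equation has the closed-form solution $y = x/(1+h/2)$, against which the iteration can be checked (a Python sketch; the function names are ours):

```python
def solve_stage(f, t, x, h, tol=1e-14, maxit=100):
    """Solve the stage equation y = x + (h/2) f(t + h/2, y) of the
    implicit midpoint rule by simple iteration, starting from y = x."""
    y = x
    for _ in range(maxit):
        y_new = x + (h/2) * f(t + h/2, y)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

def implicit_midpoint_step(f, t, x, h):
    y = solve_stage(f, t, x, h)
    return x + h * f(t + h/2, y)

# Linear test problem x' = -x: the stage equation reads y = x - (h/2) y.
x, h = 1.0, 0.1
y = solve_stage(lambda t, x: -x, 0.0, x, h)
x1 = implicit_midpoint_step(lambda t, x: -x, 0.0, x, h)
```

For this problem the step reproduces the stability function $(1-h/2)/(1+h/2)$ of the implicit midpoint rule.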
After these general considerations, let us turn our attention now
to the linear equation (\ref{NI.1}).
When dealing with numerical methods applied to this
equation, it is important to keep in
mind that the relevant small parameter here is no longer the norm of
the matrix $A(t)$ as in the analytical treatment, but the time step
$h$. For this reason, the concept of order of accuracy of a method is
different in this context. Now ${\bf f}(t,{\bf x})=
A(t){\bf x}$ and the general class of $s$-stage Runge--Kutta
methods adopts the more compact form
\begin{eqnarray}
{\bf Y}_{i} & = & {\bf x}_{n} +
h \sum_{j=1}^{s}a_{ij}A_j{\bf Y}_{j} ,
\qquad i=1,\ldots,s \nonumber \\
{\bf x}_{n+1} & = & {\bf x}_{n} +
h \sum_{i=1}^{s}b_{i}A_i{\bf Y}_{i},
\label{RK-lineal}
\end{eqnarray}
with $A_i=A(t_n+c_ih)$. In terms of matrices, this is equivalent to
\begin{eqnarray}\label{maRKs}
\left\{
\begin{array}{c}
{\bf Y}_1 \\
\vdots \\
{\bf Y}_s
\end{array} \right\} & = &
\left\{
\begin{array}{c}
{\bf x}_n \\
\vdots \\
{\bf x}_n
\end{array} \right\} +
h \widetilde{A}
\left\{
\begin{array}{c}
{\bf Y}_1 \\
\vdots \\
{\bf Y}_s
\end{array} \right\}, \qquad \mbox{with} \qquad
\widetilde{A} = \left(
\begin{array}{ccc}
a_{11}A_1 & \cdots & a_{1s}A_s \\
\vdots & & \vdots \\
a_{s1}A_1 & \cdots & a_{ss}A_s
\end{array} \right) \nonumber \\
{\bf x}_{n+1} & = & {\bf x}_n +
h \left(
\begin{array}{ccc}
b_{1}A_1 \ & \cdots &\ b_{s}A_s
\end{array} \right)
\left\{
\begin{array}{c}
{\bf Y}_1 \\
\vdots \\
{\bf Y}_s
\end{array} \right\},
\end{eqnarray}
so that the application of the method for the integration step
$t_n \mapsto t_{n+1} = t_n + h$ can also be written as
\begin{equation}
{\bf x}_{n+1} = \left( I_d +
h \left(
\begin{array}{ccc}
b_{1}A_1 \ & \cdots \ & b_{s}A_s
\end{array} \right)
\left( I_{sd} - h \widetilde{A} \right)^{-1} \mathbb{I}_{sd\times d}
\right){\bf x}_n,
\end{equation}
where $\mathbb{I}_{sd\times d}=(I_d,I_d,\ldots,I_d)^T$ and $I_d$
is the $d\times d$ identity matrix. For instance, taking $s=2$ and
using Matlab this can be easily implemented as follows
\[
\begin{array} {l}
A = [ a11*A1 \ \ \ a12*A2 \ ; \ a21*A1 \ \ \ a22*A2 ]; \\
x = (Id + h*[b1*A1 \ \ \ b2*A2]*((I2d - h*A)\backslash [Id \ \ \ Id]'))*x;
\end{array}
\]
where $b1,b2,a11,\ldots,a22$ are the coefficients of the method,
$A1,A2$ correspond to $A_1,A_2$ and $Id$, $I2d$ are the identity
matrices of dimension $d$ and $2d$, respectively.
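An equivalent sketch in Python (with NumPy; the helper name is ours) is the following. As a consistency check, for a constant scalar $A$ the 2-stage Gauss method must reproduce the diagonal $(2,2)$ Pad\'e approximant $R(z) = (1+z/2+z^2/12)/(1-z/2+z^2/12)$ of the exponential:

```python
import numpy as np

def linear_rk_step(A_list, b, a, h, x, d):
    """One step  x_{n+1} = (I_d + h (b_1 A_1 ... b_s A_s)
    (I_{sd} - h Atilde)^{-1} (I_d,...,I_d)^T) x_n  for x' = A(t) x,
    with A_i = A(t_n + c_i h)."""
    s = len(b)
    Atilde = np.block([[a[i][j] * A_list[j] for j in range(s)]
                       for i in range(s)])
    B = np.hstack([b[i] * A_list[i] for i in range(s)])   # (d, s d)
    ones = np.vstack([np.eye(d)] * s)                     # (s d, d)
    return x + h * B @ np.linalg.solve(np.eye(s*d) - h*Atilde, ones) @ x

# 2-stage Gauss coefficients (left tableau of (RKGL46-tablero)).
s3 = np.sqrt(3.0)
a = [[1/4, 1/4 - s3/6], [1/4 + s3/6, 1/4]]
b = [1/2, 1/2]

# Constant scalar A = z, one step with h = 1.
z, h = 0.3, 1.0
A = np.array([[z]])
x1 = linear_rk_step([A, A], b, a, h, np.array([1.0]), 1)
R = (1 + z/2 + z**2/12) / (1 - z/2 + z**2/12)
```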
There exists an extensive literature about RK methods built for
many different purposes \cite{butcher87tna,hairer93sod}. It is
therefore reasonable to look for the most appropriate scheme to be
used for each problem. In practice, explicit RK methods are
preferred because their implementation is usually simpler. They
typically require more stages than implicit methods and, in
general, a larger number of evaluations of the matrix $A(t)$,
although this is not always the case. For instance, the 4-stage
fourth order method (\ref{RK4}) only requires two new evaluations
of $A(t)$ per step ($A(t_n + h/2)$ and $A(t_n + h)$), which is
precisely the minimum number of evaluations needed by any fourth
order method. In our
numerical examples we will also use the 7-stage sixth order method
with coefficients \cite[pp.~203--205]{butcher87tna}
\begin{equation}\label{RK6-tablero}
\begin{array}{c|ccccccc}
0 & & & & & & & \\
\frac{5-\sqrt{5}}{10} & \frac{5-\sqrt{5}}{10} & & & & & & \\
\frac{5+\sqrt{5}}{10} & \frac{-\sqrt{5}}{10} &
\frac{5+2\sqrt{5}}{10} & &
& & & \\
\frac{5-\sqrt{5}}{10} & \frac{-15+7\sqrt{5}}{20} &
\frac{-1+\sqrt{5}}{4}
& \frac{15-7\sqrt{5}}{10} & & & & \\
\frac{5+\sqrt{5}}{10} & \frac{5-\sqrt{5}}{60} & 0 & \frac{1}{6} & \frac{%
15+7\sqrt{5}}{60} & & & \\
\frac{5-\sqrt{5}}{10} & \frac{5+\sqrt{5}}{60} & 0 &
\frac{9-5\sqrt{5}}{12}
& \frac{1}{6} & \frac{-5+3\sqrt{5}}{10} & & \\
1 & \frac{1}{6} & 0 & \frac{-55+25\sqrt{5}}{12} &
\frac{-25-7\sqrt{5}}{12}
& 5-2\sqrt{5} & \frac{5+\sqrt{5}}{2} & \\
\hline & \frac{1}{12} & 0 & 0 & 0 & \frac{5}{12} & \frac{5}{12} &
\frac{1}{12}
\end{array}
\end{equation}
Observe that this method only requires three new evaluations of
the matrix $A(t)$ per step. This is, in fact, the minimum number
for a sixth-order method. Other implicit RK schemes widely used in the
literature, with the minimum number of stages at each order, are
based on Gauss--Legendre collocation points \cite{hairer93sod}.
For instance, the corresponding
methods of order four and six (with two and three stages, respectively) have
the following coefficients:
\begin{equation}\label{RKGL46-tablero}
\begin{array}{c|cc}
\frac{3-\sqrt{3}}{6} & \frac14 & \frac{3-2\sqrt{3}}{12} \\
\frac{3+\sqrt{3}}{6} & \frac{3+2\sqrt{3}}{12} & \frac14 \\
\hline & \frac12 & \frac12
\end{array}
\qquad \qquad
\begin{array}{c|ccccc}
\frac{5-\sqrt{15}}{10} & \frac{5}{36} & \; & \frac29 - \frac{\sqrt{15}}{15} & \; &
\frac{5}{36} - \frac{\sqrt{15}}{30} \\
\frac12 & \frac{5}{36} + \frac{\sqrt{15}}{24} & \; & \frac29 & \; &
\frac{5}{36} - \frac{\sqrt{15}}{24} \\
\frac{5+\sqrt{15}}{10} & \frac{5}{36} + \frac{\sqrt{15}}{30} & \; &
\frac29 + \frac{\sqrt{15}}{15} & \; & \frac{5}{36} \\
\hline & \frac{5}{18} & & \frac{4}{9} & & \frac{5}{18}
\end{array}
\end{equation}
\subsection{Preservation of qualitative properties}
\label{Qualitproperties}
Notice that the numerical solution provided by the class of
Runge--Kutta schemes, and in general by an integrator of the form
(\ref{nde.2}), is constructed as a sum of vectors in
$\mathbb{R}^d$. Let us point out some (undesirable) consequences of this
fact. Suppose, for instance, that $\mathbf{x}$ is a vector known
to evolve on a sphere. One cannot expect an approximation $\mathbf{x}_{n+1}$
built as a sum of two vectors as in (\ref{nde.2}) to preserve this feature of the exact
solution, whereas approximations of the form $\mathbf{x}_{n+1} =
Q_n \mathbf{x}_n$, with $Q_n$ an orthogonal matrix, clearly lie
on the sphere.
In section \ref{NL-Magnus} we have introduced isospectral flows
(\ref{nlm5}), which include as a particular case the system
\begin{equation} \label{nde.4}
Y^\prime = [A(t),Y], \qquad Y(t_0) = Y_0,
\end{equation}
with $A$ a skew-symmetric matrix.
As we have shown there, if $Y_0$ is a symmetric matrix, the exact solution can be factorized
as $Y(t) = Q(t) Y_0 Q^T(t)$, with $Q(t)$ an orthogonal matrix satisfying the equation
\begin{equation} \label{nde.5}
Q^\prime = A(t) Q, \qquad Q(0) = I
\end{equation}
and, in addition, $Y(t)$ and $Y(0)$ have the same eigenvalues.
This is also true when $Y$ is Hermitian, $A$ is skew-Hermitian and
$Q$ is unitary, in which case (\ref{nde.4}) and (\ref{nde.5})
can be interpreted as particular examples of
the Heisenberg and Schr\"odinger equations in
Quantum Mechanics, respectively.
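The key point is that conjugation by an orthogonal matrix preserves the spectrum, so any update of the form $Y_{n+1} = Q_n Y_n Q_n^T$ with $Q_n$ orthogonal is automatically isospectral. A brief numerical illustration (Python with NumPy; the orthogonal matrix is generated at random for the purpose of the example):

```python
import numpy as np

rng = np.random.default_rng(0)
Y0 = rng.standard_normal((5, 5))
Y0 = Y0 + Y0.T                 # symmetric initial condition
# An arbitrary orthogonal matrix, standing in for the flow Q(t):
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
Y = Q @ Y0 @ Q.T               # the form of the exact solution Y(t)
ev0 = np.linalg.eigvalsh(Y0)
ev = np.linalg.eigvalsh(Y)     # identical spectrum, up to roundoff
```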
When a numerical scheme of the form (\ref{nde.2}) is applied to
(\ref{nde.5}), in general, the approximations $Q_n$ will no longer
be unitary matrices and therefore $Y_n$ and $Y_0$ will not be unitarily
similar. As a result, the isospectral character of the system
(\ref{nde.4}) is lost in the numerical description. Observe that
explicit Runge--Kutta methods employ the ansatz that locally the
solution of the differential equation behaves like a polynomial in
$t$, so that one cannot expect that the approximate solution be a
unitary matrix. In this sense, explicit Runge--Kutta methods
present the same drawbacks in the numerical analysis of differential
equations as the standard time-dependent
perturbation theory when looking for analytical approximations.
Implicit Runge--Kutta methods, on the other hand, can be
considered as rational approximations,
and in some
cases the outcome they provide is a unitary matrix. For this class
of methods, however, matrices of
relatively large sizes have to be inverted, making the algorithms
computationally expensive. Furthermore, the evolution of many
systems (including highly oscillatory problems)
cannot be efficiently approximated by either polynomial or
rational functions.
With all these considerations in mind, a pair of questions arise in a quite natural way:
\begin{itemize}
\item[(Q1)] Is it possible
to design numerical integration methods for equation (\ref{nde.1})
such that the corresponding
numerical approximations still preserve the main qualitative features
of the exact solution?
\item[(Q2)] Since the Magnus expansion constitutes an extremely useful
procedure for
obtaining analytical (as opposed to numerical) approximate solutions to
the linear evolution equation (\ref{NI.1}), is it feasible to construct
efficient numerical integration schemes from the general formulation
exposed in section \ref{section2}?
\end{itemize}
It turns out that both questions can be answered in the affirmative.
As a matter of fact, it is precisely in trying to address (Q1) that the
field of \emph{geometric numerical integration} has been developed
in recent years. Here the aim is to reproduce the
qualitative features of the solution of the differential equation
which is being discretised, in particular its geometric
properties. The motivation for developing such
structure-preserving algorithms arises independently in areas of
research as diverse as celestial mechanics, molecular dynamics,
control theory, particle accelerator physics, and numerical
analysis
\cite{hairer06gni,iserles00lgm,leimkuhler04shd,mclachlan02sm,mclachlan06gif}.
Although diverse, the systems appearing in these areas have
one important common feature. They all preserve some
underlying geometric structure which influences
the qualitative nature of the phenomena they produce.
In the field of geometric numerical integration these properties are built into
the numerical method, which not only gives the method an improved
qualitative behaviour, but also allows for a significantly more
accurate long-time integration than with general-purpose methods.
In addition to the construction of new numerical algorithms,
an important aspect of geometric integration is the explanation
of the relationship between preservation of the geometric
properties of a numerical method and the observed favourable
error propagation in long-time integration.
Geometric numerical integration has been an active and
interdisciplinary research area during the last decade, and
is nowadays the subject of intensive development. Perhaps the most
familiar examples of \emph{geometric integrators} are symplectic
integration algorithms in classical Hamiltonian dynamics,
symmetric integrators for reversible systems, methods
preserving first integrals, and numerical methods on manifolds
\cite{hairer06gni}.
A particular class of geometric numerical integrators are the
so-called \emph{Lie-group methods}. If the matrix $A(t)$ in
(\ref{NI.1}) belongs
to a Lie algebra $\mathfrak{g}$, the aim of Lie-group methods is
to construct numerical solutions staying in the corresponding Lie
group $\mathcal{G}$ \cite{iserles00lgm}.
Question (Q2) above will be addressed in the
next subsection, where the main issues involved in the
construction of numerical integrators based on the Magnus
expansion are analyzed. The methods thus obtained preserve the
qualitative properties of the system and in addition are highly
competitive with other more conventional numerical schemes with
respect to accuracy and computational effort. They constitute, therefore, clear
examples of Lie-group methods.
\subsection{Magnus integrators for linear systems}
\label{Mintegrators}
As we have pointed out before, the Magnus series only converges
locally. Thus, when the length of the time-integration interval
exceeds the bound provided by Theorem \ref{conv-mag}, the usual
procedure, in the spirit of any numerical integration method,
consists in dividing the time interval $[t_0,t_f]$ into $N$ steps
such that the Magnus series converges in each subinterval
$[t_{n-1},t_n]$, $n=1,\ldots,N$, with $t_N = t_f$. In this way the
solution of (\ref{NI.1}) at the final time $t_f$ is represented by
\begin{equation} \label{Mint.1}
Y(t_N) = \prod_{n=N}^{1} \exp(\Omega(t_n,t_{n-1})) \, Y_0,
\end{equation}
and the series $\Omega(t_n,t_{n-1})$ has to be appropriately truncated.
Early steps in this approach were taken in \cite{chan68imc} and
\cite{chang69eso}, where second and fourth order numerical schemes were
used for calculations of collisional inelasticity and potential
scattering, respectively. Those authors were well aware that the
resulting integration method ``would become practical only when
the advantage of being able to use bigger step sizes outweighs the
disadvantage in having to evaluate the integrals involved in the
Magnus series and then doing the exponentiation'' \cite{chan68imc}.
Later on, by following a similar approach, Devries
\cite{devries85aho} designed a numerical procedure for determining
a fourth order approximation to the propagator employed in the
integration of the single channel Schr\"odinger equation, but it
was in the pioneering work \cite{iserles99ots} where Iserles and
N{\o}rsett carried out the first systematic study of the Magnus
expansion with the aim of constructing numerical integration
algorithms for linear problems. To design the new integrators, the
explicit time dependency of each term $\Omega_k$ had to be
analyzed, in particular its order of approximation in time to the
exact solution.
Generally speaking, the process of rendering the Magnus expansion
a practical numerical integration algorithm involves three steps.
First, the $\Omega$ series is truncated at an appropriate order.
Second, the multivariate integrals in the truncated series
$\Omega^{[p]} = \sum_{i=1}^p \Omega_i$
are replaced by conveniently chosen approximations.
Third, the exponential of the matrix $\Omega^{[p]}$ has to be
computed. We now briefly consider the first two issues, whereas
the general problem of evaluating the matrix exponential will be treated in section
\ref{exponen}.
We have shown in subsection \ref{time-symmetry} that the Magnus
expansion is time symmetric. As a consequence of eq. (\ref{t-s6}),
if $A(t)$ is analytic and one evaluates its Taylor series centered
around the midpoint of a particular subinterval $[t_n, t_n+h]$,
then each term in $\Omega_k$ is an odd function of
$h$, and thus $\Omega_{2s+1} = \mathcal{O}(h^{2s+3})$ for $s \geq
1$. Equivalently, $\Omega^{[2s-2]} =
\Omega + \mathcal{O}(h^{2s+1})$ and $\Omega^{[2s-1]} = \Omega +
\mathcal{O}(h^{2s+1})$. In other words, for achieving an
integration method of order $2s$ ($s>1$) only terms up to
$\Omega_{2s-2}$ in the $\Omega$ series are required
\cite{blanes00iho,iserles99ots}. For this reason, in general, only
even order methods are considered.
Once the series expansion is truncated up to an appropriate order,
the multidimensional integrals involved have to be computed or at
least conveniently approximated. Although at first glance this seems to
be a quite difficult enterprise, it turns out that their very structure allows
one to approximate all the multivariate integrals appearing in $\Omega$
just by evaluating $A(t)$ at the nodes of a
univariate quadrature \cite{iserles99ots}.
To illustrate how this task can be accomplished, let us expand the
matrix $A(t)$ around the midpoint, $t_{1/2} \equiv t_n + h/2$, of the subinterval
$[t_n,t_{n+1}]$,
\begin{equation} \label{3.2.1}
A(t) = \sum_{j=0}^{\infty} a_j \left(t - t_{1/2} \right)^j, \qquad \mbox{ where } \quad
a_j = \frac{1}{j!} \, \frac{d^j A(t)}{d t^j} \big|_{t=t_{1/2}},
\end{equation}
and insert
the series (\ref{3.2.1}) into the recurrence defining the Magnus expansion
(\ref{eses})-(\ref{omegn}). In this
way one gets explicitly the expression of $\Omega_k$ up to order $h^6$ as
\begin{eqnarray} \label{3.2.2}
\Omega_1 & = & h a_0 + h^3 \frac{1}{12} a_2 + h^5 \frac{1}{80} a_4 +
\mathcal{O}(h^{7}) \nonumber
\\
\Omega_2 & = & h^3 \frac{-1}{12} [a_0,a_1] + h^5 \Big( \frac{-1}{80}
[a_0,a_3] + \frac{1}{240} [a_1,a_2] \Big) + \mathcal{O}(h^{7})
\nonumber \\
\Omega_3 & = & h^5 \Big( \frac{1}{360} [a_0,a_0,a_2] - \frac{1}{240}
[a_1,a_0,a_1] \Big) + \mathcal{O}(h^{7})
\\
\Omega_4 & = & h^5 \frac{1}{720} [a_0,a_0,a_0,a_1] + \mathcal{O}(h^{7}),
\nonumber
\end{eqnarray}
whereas $\Omega_5 = \mathcal{O}(h^{7})$, $ \Omega_6 = \mathcal{O}(h^{7})$
and $\Omega_7 = \mathcal{O}(h^{9})$.
Here we write for clarity $[a_{i_1},a_{i_2}, \ldots, a_{i_{l-1}},a_{i_l}]
\equiv [a_{i_1},[a_{i_2},[\ldots,[a_{i_{l-1}},a_{i_l}]\ldots]]]$.
Notice that, as anticipated, only odd powers of $h$ appear in
$\Omega_k$ and, in particular, $\Omega_{2i+1} =
\mathcal{O}(h^{2i+3})$ for $i \geq 1$.
Let us denote $\alpha_i \equiv h^i a_{i-1}$. Then
$[\alpha_{i_1},\alpha_{i_2}, \ldots,
\alpha_{i_{l-1}},\alpha_{i_l}]$ is an element of order
$h^{i_1 + \cdots + i_l}$. In fact, the matrices
$\{\alpha_i\}_{i \ge 1}$ can be considered as the generators (with
grade $i$) of a graded free Lie algebra
$\mathcal{L}(\alpha_1,\alpha_2,\ldots)$ \cite{munthe-kaas99cia}.
It turns out that it is possible to build methods of order $p
\equiv 2s$ by considering only terms involving
$\alpha_1,\ldots,\alpha_{s}$ in $\Omega$. Then, these terms can
be approximated by appropriate linear combinations of the
matrix $A(t)$ evaluated at different points. In particular,
up to order four we have to approximate
\begin{equation} \label{eq.2.3a}
\Omega = \alpha_1 - \frac{1}{12} [\alpha_1,\alpha_2]
+\mathcal{O}(h^5),
\end{equation}
whereas up to order six the relevant expression is
\begin{eqnarray}
\Omega & = & \alpha_1 + \frac{1}{12} \alpha_3 - \frac{1}{12} [\alpha_1,\alpha_2]
+ \frac{1}{240} [\alpha_2,\alpha_3] + \frac{1}{360} [\alpha_1,\alpha_1,\alpha_3]
\label{eq.2.3b} \\
& & - \frac{1}{240} [\alpha_2,\alpha_1,\alpha_2] + \frac{1}{720} [\alpha_1,\alpha_1,\alpha_1,\alpha_2]
+\mathcal{O}(h^7). \nonumber
\end{eqnarray}
In order to present methods which
can be easily adapted for different quadrature rules we introduce
the averaged (or generalized momentum) matrices
\begin{equation} \label{eq.2.4}
A^{(i)}(h) \equiv \frac{1}{h^{i}} \int_{t_n}^{t_n+h} \, \left(t -
t_{1/2} \right)^i A(t) dt =
\frac{1}{h^i} \int_{-h/2}^{h/2} t^i A(t+t_{1/2}) dt
\end{equation}
for $i=0, \ldots, s-1$. If their exact evaluation is not possible
or is computationally expensive, a numerical quadrature may be
used instead. Suppose that $b_i$ and $c_i$ $(i=1,\ldots,k)$ are,
respectively, the weights and nodes of a particular quadrature rule
of order $p$, say $X$ (we will use $X=G$ for Gauss--Legendre
quadratures and $X=NC$ for Newton--Cotes quadratures)
\cite{abramowitz65hom}, so that
\[
A^{(0)} = \int_{t_n}^{t_n+h} A(t) dt = h \sum_{i=1}^k b_i A_i +
\mathcal{O}(h^{p+1}),
\]
with $A_i \equiv A(t_n+c_ih)$. Then it is possible to
approximate all the integrals $A^{(i)}$ (up to the required order) just by
using only the evaluations $A_i$ at the nodes $c_i$ of the
quadrature rule required to compute $A^{(0)}$. Specifically,
\begin{equation} \label{ru1}
A^{(i)}= h \sum_{j=1}^k b_j\left(c_j-\frac12\right)^{i} A_j,
\qquad i=0,\ldots,s-1,
\end{equation}
or equivalently, $A^{(i)} = h \sum_{j=1}^k
\left(Q^{(s,k)}_{X}\right)_{ij} A_{j}$
with
$\left(Q^{(s,k)}_{X}\right)_{ij}=b_j\left(c_j-\frac12\right)^{i}$.
In particular, if fourth and sixth order Gauss--Legendre quadrature
rules are considered, then for
$s=k=2$ we have \cite{abramowitz65hom}
\[
b_1=b_2=\frac{1}{2}, \quad c_1= \frac{1}{2} - \frac{\sqrt{3}}{6}, \quad c_2 =
\frac{1}{2} + \frac{\sqrt{3}}{6},
\]
to order four, whereas for $s=k=3$,
\[
b_1=b_3= \frac{5}{18}, \quad b_2=\frac{4}{9}, \quad c_1 = \frac{1}{2} - \frac{\sqrt{15}}{10},
\quad c_2 = \frac{1}{2}, \quad c_3 = \frac{1}{2} + \frac{\sqrt{15}}{10},
\]
to order six, so that
\begin{equation}\label{Gauss-Legendre}
Q_G^{(2,2)}= \left( \begin{array}{cc}
\frac12 & \ \ \frac12 \\
-\frac{\sqrt{3}}{12} & \frac{\sqrt{3}}{12}
\end{array} \right), \qquad
Q_G^{(3,3)}= \left( \begin{array}{ccc}
\frac{5}{18} & \ \ \frac49 & \ \ \frac{5}{18}\\
-\frac{\sqrt{15}}{36} & 0 & \frac{\sqrt{15}}{36} \\
\frac{1}{24} & 0 & \frac{1}{24}
\end{array} \right). \qquad
\end{equation}
Furthermore, substituting (\ref{3.2.1}) into (\ref{eq.2.4}) we
find (neglecting higher order terms)
\begin{equation} \label{eq.2.4b}
A^{(i)} = \sum_{j=1}^s \left(T^{(s)}\right)_{ij} \alpha_j
\; \equiv \;
\sum_{j=1}^s \frac{1-(-1)^{i+j}}{(i+j)2^{i+j}} \alpha_j,
\qquad 0 \le i \le s-1.
\end{equation}
If this relation is inverted (to order four, $s=2$, and six,
$s=3$) one has
\begin{equation}\label{CambioBase}
R^{(2)}\equiv (T^{(2)})^{-1} = \left( \begin{array}{cc}
1 & \ \ 0 \\
0 & \ \ 12
\end{array} \right), \qquad
R^{(3)}= \left( \begin{array}{ccc}
\frac{9}{4} & \ 0 & \ -15 \\
0 & \ 12 & \ 0 \\
-15 & \ 0 & \ 180
\end{array} \right)
\end{equation}
respectively, so that the corresponding expression of $\alpha_i$
in terms of $A^{(i)}$ or
$A_j$ is given by
\begin{equation} \label{CBaseMu}
\alpha_{i} = \sum_{j=1}^s \left( R^{(s)} \right)_{ij} A^{(j-1)}
= h \sum_{j=1}^k \left( R^{(s)}Q_{X}^{(s,k)} \right)_{ij}
A_{j}.
\end{equation}
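The inverse relation (\ref{CambioBase}) is easy to check numerically from the explicit formula (\ref{eq.2.4b}) for $T^{(s)}$ (a Python sketch with NumPy):

```python
import numpy as np

def T(s):
    """Matrix (T^(s))_{ij} = (1 - (-1)^(i+j)) / ((i+j) 2^(i+j)),
    with 0 <= i <= s-1 and 1 <= j <= s, as in (eq.2.4b)."""
    return np.array([[(1 - (-1)**(i + j)) / ((i + j) * 2**(i + j))
                      for j in range(1, s + 1)] for i in range(s)])

R2 = np.linalg.inv(T(2))
R3 = np.linalg.inv(T(3))
# Expected inverses R^(2), R^(3) from (CambioBase):
R2_expected = np.array([[1, 0], [0, 12]])
R3_expected = np.array([[9/4, 0, -15], [0, 12, 0], [-15, 0, 180]])
```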
Thus, by virtue of (\ref{CBaseMu}) we can write $\Omega(h)$ in
terms of the univariate integrals (\ref{eq.2.4}) or in terms of
any desired quadrature rule. In this way, one gets the final
numerical approximations to $\Omega$. Fourth and sixth-order
methods can be obtained by substituting in (\ref{eq.2.3a}) and
(\ref{eq.2.3b}), respectively. The algorithm then provides an
approximation for $Y(t_{n+1})$ starting from $Y_n \approx Y(t_n)$,
with $t_{n+1} = t_n + h$.
The observant reader surely has noticed that, up to order $h^6$,
there are more terms involved in (\ref{3.2.2}) than those
considered in (\ref{eq.2.3a}) or (\ref{eq.2.3b}): specifically,
$\frac{1}{12} \alpha_3$ is absent from (\ref{eq.2.3a}), and $\frac{1}{80}
\alpha_5$ and $-\frac{1}{80} [\alpha_1, \alpha_4]$ from
(\ref{eq.2.3b}). The reason is that $\Omega^{[6]}$ can be
approximated by $A^{(0)},A^{(1)},A^{(2)}$ up to order $h^6$ and
then, these omitted terms are automatically reproduced when either
$A^{(0)},A^{(1)},A^{(2)}$ are evaluated analytically or are
approximated by any symmetric quadrature rule of order six or higher.
Another important issue involved in any approximation based on the
Magnus expansion is the number
of commutators appearing in $\Omega$. As it is already evident from
(\ref{3.2.2}), this number rapidly increases with the order, and so it might
constitute a major factor in the overall computational cost of the resulting
numerical methods. It is possible,
however, to design an optimization procedure aimed to reduce this
number to a minimum \cite{blanes02hoo}. For
instance, a straightforward counting of the number of commutators
in (\ref{eq.2.3b}) suggests
that it seems necessary to compute seven commutators up to order
six in $h$, whereas the general analysis carried out in
\cite{blanes02hoo} shows that this can be done with only three
commutators. More specifically, the scheme
\begin{eqnarray} \label{aprox6}
C_1 & = & [\alpha_1,\alpha_2] , \nonumber\\
C_2 & = & -\frac{1}{60} [\alpha_1,2\alpha_3+C_1] \\
\Omega^{[6]} & \equiv & \alpha_1 + \frac{1}{12}\alpha_3
+ \frac{1}{240}[-20\alpha_1 - \alpha_3 + C_1 , \alpha_2 + C_2], \nonumber
\end{eqnarray}
satisfies $\Omega^{[6]}=\Omega+\mathcal{O}(h^7)$. Three is in fact the minimum
number of commutators required to get a sixth-order approximation
to $\Omega$.
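The equivalence between (\ref{aprox6}) and the relevant terms of (\ref{eq.2.3b}) can also be verified numerically: with the grading $\alpha_i = h^i a_{i-1}$, the two expressions differ only in terms of grade seven, so their difference scales exactly as $h^7$. A Python sketch (the random matrices $M_i$ play the role of arbitrary generators):

```python
import numpy as np

def comm(x, y):
    return x @ y - y @ x

def omega6_full(a1, a2, a3):
    """Terms of (eq.2.3b) involving alpha_1, alpha_2, alpha_3."""
    return (a1 + a3/12 - comm(a1, a2)/12 + comm(a2, a3)/240
            + comm(a1, comm(a1, a3))/360 - comm(a2, comm(a1, a2))/240
            + comm(a1, comm(a1, comm(a1, a2)))/720)

def omega6_compact(a1, a2, a3):
    """The 3-commutator scheme (aprox6)."""
    c1 = comm(a1, a2)
    c2 = -comm(a1, 2*a3 + c1)/60
    return a1 + a3/12 + comm(-20*a1 - a3 + c1, a2 + c2)/240

rng = np.random.default_rng(1)
M1, M2, M3 = (rng.standard_normal((4, 4)) for _ in range(3))

def defect(h):
    # alpha_i = h^i M_i carries grade i; the difference between the
    # two expressions is homogeneous of grade 7, hence scales as h^7.
    a1, a2, a3 = h*M1, h**2*M2, h**3*M3
    return np.linalg.norm(omega6_compact(a1, a2, a3)
                          - omega6_full(a1, a2, a3))

ratio = defect(0.5) / defect(0.25)    # should be very close to 2**7 = 128
```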
This technique to reduce the number of commutators is indeed valid
for any element in a graded free Lie algebra. It has been used, in
particular, to obtain approximations to the Baker--Campbell--Hausdorff
formula up to a given order with the
minimum number of commutators \cite{blanes03ool}.
As an illustration, next we provide
the relevant expressions for integration schemes of order 4 and 6, which
readily follow from the previous analysis.
\noindent \textbf{Order 4}. Choosing the Gauss--Legendre quadrature rule,
one has to evaluate
\begin{equation}\label{AiG4}
A_1 = A(t_n + (\frac{1}{2} - \frac{\sqrt{3}}{6}) h), \qquad
A_2 = A(t_n + (\frac{1}{2} + \frac{\sqrt{3}}{6}) h)
\end{equation}
and thus, taking $Q_G^{(2,2)}$ in (\ref{Gauss-Legendre}),
$R^{(2)}$ in (\ref{CambioBase}) and substituting in
(\ref{CBaseMu}) we find
\begin{equation}\label{alfasG4}
\alpha_1=\frac{h}{2}(A_1+A_2), \qquad
\alpha_2=\frac{h\sqrt{3}}{12}(A_2-A_1).
\end{equation}
Then, by replacing in (\ref{eq.2.3a}) we obtain
\begin{equation} \label{or4GL}
\begin{array}{ccc}
& \Omega^{[4]}(h) = \frac{h}{2} (A_1 + A_2) - h^2
\frac{\sqrt{3}}{12} [A_1, A_2] & \\
& Y_{n+1} = \exp( \Omega^{[4]}(h) ) Y_n.
\end{array}
\end{equation}
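As an illustration, the following Python sketch (with NumPy; the choice of $A(t)$ is ours, an arbitrary smooth skew-symmetric family in $\mathfrak{so}(3)$) applies the scheme (\ref{or4GL}) to $Q' = A(t)Q$, using Rodrigues' formula for the matrix exponential. The numerical solution stays orthogonal to machine precision while exhibiting fourth-order accuracy:

```python
import numpy as np

def hat(w):
    """3x3 skew-symmetric matrix with axis vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_skew3(S):
    """Rodrigues' formula for the exponential of a 3x3 skew matrix."""
    w = np.array([S[2, 1], S[0, 2], S[1, 0]])
    th = np.linalg.norm(w)
    if th < 1e-14:
        return np.eye(3) + S
    return np.eye(3) + np.sin(th)/th * S + (1 - np.cos(th))/th**2 * (S @ S)

def A(t):
    # a smooth, non-commuting family of skew-symmetric matrices
    return hat(np.array([np.cos(t), np.sin(2*t), t]))

def magnus4(Q, t0, t1, N):
    """Integrate Q' = A(t) Q with the 4th-order Gauss-Legendre
    Magnus scheme (or4GL) over N steps."""
    h = (t1 - t0) / N
    for n in range(N):
        tn = t0 + n*h
        A1 = A(tn + (1/2 - np.sqrt(3)/6)*h)
        A2 = A(tn + (1/2 + np.sqrt(3)/6)*h)
        Om = h/2*(A1 + A2) - h**2*np.sqrt(3)/12*(A1@A2 - A2@A1)
        Q = expm_skew3(Om) @ Q
    return Q

Q8 = magnus4(np.eye(3), 0.0, 1.0, 8)
Q16 = magnus4(np.eye(3), 0.0, 1.0, 16)
Qref = magnus4(np.eye(3), 0.0, 1.0, 1024)   # fine-step reference
orth_defect = np.linalg.norm(Q8.T @ Q8 - np.eye(3))
err8 = np.linalg.norm(Q8 - Qref)
err16 = np.linalg.norm(Q16 - Qref)
```

Halving the step reduces the error by a factor close to $2^4 = 16$, while the orthogonality defect remains at roundoff level regardless of $h$.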
Alternatively, evaluating $A$ at equispaced points, with $k=3$ and
$c_1=0,c_2=1/2,c_3=1; \ b_1=b_3=1/6, b_2=2/3$ (i.e., using the
Simpson rule to approximate $\int_{t_n}^{t_n+h}A(s)ds$),
\[
A_1 = A(t_n), \qquad A_2 = A(t_n + \frac{h}{2}), \qquad
A_3 = A(t_n + h)
\]
we have instead $\alpha_1=\frac{h}{6}(A_1+4A_2+A_3)$,
$ \; \alpha_2=h(A_3-A_1)$, and then
\begin{eqnarray} \label{or4NC1}
\Omega^{[4]}(h) & = & \frac{h}{6} (A_1 + 4A_2 + A_3) -
\frac{h^2}{72} [A_1 + 4A_2 + A_3, A_3-A_1].
\end{eqnarray}
It should be noticed that other possibilities not directly obtainable from the previous
analysis are equally valid. For instance, one could consider
\begin{eqnarray} \label{or4NC2}
\Omega^{[4]}(h) & = & \frac{h}{6} (A_1 + 4A_2 + A_3) -
\frac{h^2}{12} [A_1, A_3].
\end{eqnarray}
Although apparently more evaluations of $A$ are necessary in (\ref{or4NC1}) and
(\ref{or4NC2}), this is actually not the case, since $A_3$ can be
reused at the next integration step.
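That (\ref{or4NC1}) and (\ref{or4NC2}) approximate the same $\Omega$ can be checked numerically: their difference behaves as $\mathcal{O}(h^5)$, i.e.\ it is reduced by a factor close to $2^5 = 32$ when $h$ is halved. A Python sketch with an arbitrarily chosen smooth $A(t)$:

```python
import numpy as np

def A(t):
    # an arbitrary smooth matrix function with non-commuting values
    return np.array([[0.0, 1.0 + t], [t**2, np.sin(t)]])

def omega_nc1(tn, h):
    """Scheme (or4NC1)."""
    A1, A2, A3 = A(tn), A(tn + h/2), A(tn + h)
    S = A1 + 4*A2 + A3
    D = A3 - A1
    return h/6*S - h**2/72*(S@D - D@S)

def omega_nc2(tn, h):
    """Scheme (or4NC2)."""
    A1, A2, A3 = A(tn), A(tn + h/2), A(tn + h)
    return h/6*(A1 + 4*A2 + A3) - h**2/12*(A1@A3 - A3@A1)

def diff(h):
    return np.linalg.norm(omega_nc1(0.0, h) - omega_nc2(0.0, h))

ratio = diff(0.02) / diff(0.01)   # both are Omega + O(h^5): ratio ~ 32
```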
\noindent \textbf{Order 6}.
In terms of Gauss--Legendre collocation points one has
\[
A_1 = A \big( t_n + (\frac{1}{2} - \frac{\sqrt{15}}{10}) h \big), \quad
A_2 = A \big( t_n + \frac{1}{2} h \big), \quad
A_3 = A \big( t_n + (\frac{1}{2} + \frac{\sqrt{15}}{10}) h \big)
\]
and similarly we obtain
\begin{equation} \label{4.1.4}
\alpha_1 = h A_2, \quad \alpha_2 = \frac{\sqrt{15}h}{3} (A_3 - A_1), \quad
\alpha_3 = \frac{10h}{3} (A_3 - 2 A_2 + A_1),
\end{equation}
which are then inserted in (\ref{aprox6}) to get the approximation
$Y_{n+1} = \exp( \Omega^{[6]} ) Y_n$.
If the matrix
$A(t)$ is only known at equispaced points, we can use the
Newton--Cotes (NC) quadrature values with $s=3$ and $k=5$,
$b_1=b_5= 7/90, b_2=b_4=32/90,b_3=12/90$ and $c_j=(j-1)/4, \
j=1,\ldots,5$. Then, using the corresponding matrix
$Q_{NC}^{(3,5)}$ from (\ref{ru1}) we get
\begin{eqnarray} \label{4.1.5}
\alpha_1 & = & \frac{h}{60} \big( -7 (A_1 + A_5) + 28 (A_2 + A_4) +
18 A_3 \big) \nonumber \\
\alpha_2 & = & \frac{h}{15} \big( 7 (A_5 - A_1) + 16 (A_4 - A_2) \big) \\
\alpha_3 & = & \frac{h}{3} \big( 7 (A_1 + A_5) - 4 (A_2 + A_4) -
6 A_3 \big). \nonumber
\end{eqnarray}
Both schemes involve the minimum number of commutators (three)
and require three or four evaluations of the matrix $A(t)$ per
integration step (observe that $A_5$ can be reused in the next step in
the Newton--Cotes implementation because $c_1=0$ and $c_5=1$).
Higher orders can be treated in a similar way. For instance, an
8th-order Magnus method can be obtained with only six commutators
\cite{blanes02hoo}. Also variable step size techniques can be
easily implemented \cite{blanes00iho,iserles99oti}.
\subsubsection{From Magnus to Fer and Cayley methods}
For
arbitrary matrix Lie groups it is feasible to design numerical
methods based also on the Fer and Wilcox expansions, whereas for
the $J$-orthogonal group (eq. (\ref{j-ortho})) the Cayley transform
also maps the Lie algebra onto the Lie group
\cite{postnikov94lga} and thus it allows us to build a new class of Lie-group
methods. Here we briefly show how these integration
methods can be easily constructed from the previous schemes based
on Magnus. In other words, if the solution of (\ref{NI.1}) in a
neighborhood of $t_0$ is written as
\begin{eqnarray}
Y(t_0 + h) & = & \e^{\Omega(h)} \,Y_0 \hspace*{5.3cm} \mbox{Magnus} \label{eq.2} \\
& = & \e^{F_1(h)} \e^{F_2(h)}\cdots Y_0 \qquad \qquad \qquad
\quad \qquad \mbox{Fer}
\label{eq.3} \\
& = & \e^{S_1(h)} \e^{S_2(h)} \cdots \e^{S_2(h)} \e^{S_1(h)} Y_0 \hspace*{1.75cm}
\mbox{Symmetric Fer} \label{eq.4} \\
& = & \left( I - \frac{1}{2} C(h) \right)^{-1} \left( I + \frac{1}{2} C(h)
\right) Y_0 \hspace*{0.9cm} \mbox{Cayley} \label{eq.6}
\end{eqnarray}
one may express the functions $F_{i}(h)$, $S_{i}(h)$ and $C(h)$ in terms of
the successive approximations to $\Omega $ and, by using the same
techniques as in the previous section, obtain the new methods. As the
schemes based on the Wilcox factorization are quite similar to the
Fer methods, they will not be considered here.
\subsubsection{Fer based methods}
To obtain integration methods based on the Fer factorization
(\ref{eq.3}) one applies the Baker--Campbell--Hausdorff (BCH)
formula after equating to the Magnus expansion (\ref{eq.2}). More
specifically, in the domain of convergence of expansions
(\ref{eq.2}) and (\ref{eq.3}) we can write
\[
\e^{\Omega(h)} = \e^{F_1(h)} \, \e^{F_2(h)} \ \cdots,
\]
where $F_1 = \Omega_1$ is the first term in the Magnus series,
$F_2 = O(h^3)$ and $F_3 = O(h^7)$. Then a $p$th-order
algorithm with $3\leq p\leq 6$ based on the Fer expansion
requires computing $F_2^{[p]}$ such that
\begin{equation} \label{eq.4.1.4}
Y(t_n + h) = \e^{F_1(h)} \, \e^{F_2^{[p]}(h)} \, Y(t_n)
+ \mathcal{O}(h^{p+1}).
\end{equation}
Taking into account that
\[
\e^{\Omega^{[p]}(h)} = \e^{F_1(h)} \, \e^{F_2^{[p]}(h)}
+ \mathcal{O}(h^{p+1}),
\]
we have ($F_1=\Omega_1$)
\[
\e^{F_2^{[p]}(h)}= \e^{-\Omega_1(h)} \, \e^{\Omega^{[p]}(h)}
+ \mathcal{O}(h^{p+1}).
\]
Then, by using the BCH formula and simple algebra to remove higher
order terms we obtain to order four
\begin{equation} \label{eq.4.1.2}
F_2^{[4]} = - \frac{1}{12} \big( [\alpha_1,\alpha_2] -
\frac{1}{2} [\alpha_1,\alpha_1,\alpha_2] \big),
\end{equation}
so that two
commutators are needed in this case. A sixth-order method can be
similarly obtained with four commutators \cite{blanes02hoo}. These
methods are slightly more expensive than their Magnus counterpart
and they do not preserve the time-symmetry of the exact solution.
This can be fixed by the self-adjoint version of the Fer
factorization in the form (\ref{eq.4}) proposed in
\cite{zanna01tfe} and presented in a more efficient way in
\cite{blanes02hoo}. The schemes based on (\ref{eq.4}) up to order
six are given by
\begin{equation}\label{sym-Fer-n}
Y(t_n + h) = \e^{S_1(h)} \e^{S_2^{[p]}(h)} \e^{S_1(h)} Y_n
\end{equation}
with $S_1 = \Omega_1/2$. A fourth-order method is given by
\begin{equation} \label{eq.4.2.3}
S_2^{[4]}(h) = -\frac{1}{12} [\alpha_1,\alpha_2]
\end{equation}
and a sixth-order one by
\begin{eqnarray} \label{eq.4.2.5}
s_1 & = & [\alpha_1, \alpha_2] \nonumber \\
r_1 & = & \frac{1}{120} \, [\alpha_1, -4 \alpha_3 + 3 s_1] \nonumber \\
S_2^{[6]}(h) & = & \frac{1}{240} \,
\big[ -20 \alpha_1 - \alpha_3 + s_1 \ , \ \alpha_2 + r_1
\big].
\end{eqnarray}
To complete the formulation of the scheme, the $\alpha_i$ have to
be expressed in terms of the matrices $A_i$ evaluated at the
quadrature points (e.g., equations (\ref{4.1.4}) or
(\ref{4.1.5})).
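The BCH manipulation behind (\ref{eq.4.1.2}) can be verified numerically. The following sketch (our own illustration, with random matrices standing in for $\alpha_1=\mathcal{O}(h)$ and $\alpha_2=\mathcal{O}(h^2)$, and using the relation $\Omega^{[4]} = \alpha_1 - \frac{1}{12}[\alpha_1,\alpha_2]$ implicit in (\ref{eq.4.3.2})) checks that $\e^{F_1}\e^{F_2^{[4]}}$ reproduces $\e^{\Omega^{[4]}}$ up to $\mathcal{O}(h^5)$:

```python
import numpy as np

def expm_taylor(M, terms=25):
    # Plain Taylor series for exp(M); adequate for the small norms used here.
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def comm(X, Y):
    return X @ Y - Y @ X

def fer_defect(h, X, Y):
    """|| e^{F1} e^{F2^[4]} - e^{Omega^[4]} ||  with alpha_1 = h X, alpha_2 = h^2 Y."""
    a1, a2 = h * X, h**2 * Y
    omega4 = a1 - comm(a1, a2) / 12.0             # fourth-order Magnus approximant
    F2 = -(comm(a1, a2) - 0.5 * comm(a1, comm(a1, a2))) / 12.0   # eq. (eq.4.1.2)
    return np.linalg.norm(expm_taylor(a1) @ expm_taylor(F2)
                          - expm_taylor(omega4))
```

Halving $h$ should reduce the defect by roughly a factor $2^5=32$, consistent with the fourth-order character of the factorization.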
\subsubsection{Cayley-transform methods}
We have seen in subsection \ref{notations} that for the
$J$-orthogonal group $\mathrm{O}_J(n)$ the Cayley transform (\ref
{cayley1}) provides a useful alternative to the exponential
mapping relating the Lie algebra to the Lie group. This fact is
particularly important for numerical methods where the evaluation
of the exponential matrix is the most computation-intensive part
of the algorithm.
If the solution of eq. (\ref{NI.1}) is written as
\begin{equation} \label{cayl2}
Y(t) = \left( I - \frac{1}{2} C(t) \right)^{-1} \left( I +
\frac{1}{2} C(t) \right) Y_0
\end{equation}
then $C(t) \in \mathrm{o}_J(n)$ satisfies the equation
\cite{iserles01oct}
\begin{equation} \label{cayl3}
C^\prime= A - \frac{1}{2} [C,A] - \frac{1}{4} CAC, \qquad t
\geq t_0, \qquad C(t_0) = 0.
\end{equation}
Time-symmetric methods of order 4 and 6 have been obtained based
on the Cayley transform (\ref{cayl2}) by expanding the solution of
(\ref{cayl3}) in a recursive manner and constructing quadrature
formulae for the multivariate integrals that appear in the
procedure \cite{diele98tct,iserles01oct,marthinsen01qmb}. It turns out that
efficient Cayley based methods can be built directly from Magnus
based integrators \cite{blanes02hoo}. In particular, we get:
\noindent \textbf{Order 4}:
\begin{equation} \label{eq.4.3.2}
C^{[4]} = \Omega^{[4]} \big( I - \frac{1}{12} (\Omega^{[4]})^2
\big) = \alpha_1 - \frac{1}{12} [\alpha_1,\alpha_2] - \frac{1}{12}
\alpha_1^3 + O(h^5),
\end{equation}
where $C^{[4]} = C(h) + O(h^5)$.
\vspace*{0.1cm}
\noindent \textbf{Order 6}:
\begin{equation} \label{eq.51b}
C^{[6]} = \Omega^{[6]} \left( I - \frac{1}{12}
( \Omega^{[6]})^2 \Big( I - \frac{1}{10}
(\Omega^{[6]})^2 \Big) \right) = C(h) + O(h^7).
\end{equation}
Three matrix-matrix products are required in addition to the three
commutators involved in the computation of $\Omega^{[6]}$, for a
total of nine matrix-matrix products per step.
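To see how (\ref{eq.51b}) works in practice, the following sketch (a toy illustration of ours, with a generic small matrix standing in for $\Omega^{[6]}$) builds $C^{[6]}$ and applies the Cayley map (\ref{cayl2}):

```python
import numpy as np

def expm_taylor(M, terms=25):
    # Taylor series for exp(M); fine for the small norms used below.
    E = np.eye(M.shape[0]); term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def cayley6(Omega):
    """Cayley transform of C^[6] from eq. (eq.51b): approximates exp(Omega)."""
    I = np.eye(Omega.shape[0])
    Om2 = Omega @ Omega
    C6 = Omega @ (I - (Om2 @ (I - Om2 / 10.0)) / 12.0)
    # (I - C/2)^{-1} (I + C/2)
    return np.linalg.solve(I - C6 / 2.0, I + C6 / 2.0)
```

Since $C^{[6]}$ is a polynomial in $\Omega$ alone, all factors commute and the defect is governed by the scalar truncation of $2\tanh(\omega/2)$, of size $\mathcal{O}(\omega^7)$.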
\subsection{Numerical illustration: the Rosen--Zener model revisited}
\label{niarz}
Next we apply the previous numerical schemes to the integration of
the differential equation governing the evolution of a particular quantum
two-level system. Our purpose here is to illustrate the main issues
involved and compare the different approximations obtained with both
the analytical treatment done in section \ref{section4} and the exact
result. Specifically, we consider
the Rosen--Zener model in the
Interaction Picture already analyzed in subsection \ref{2L}. In this case
the equation to be solved is $U_I' = \tilde{H}_I(t) U_I$, or
equivalently, equation (\ref{NI.1}) with $Y(t) = U_I(t)$
and coefficient matrix ($\hbar = 1$)
\begin{equation} \label{marz}
A(t) = \tilde{H}_I(t) = -i V(s) \big(\sigma_1 \cos(\xi s) - \sigma_2 \sin(\xi s)
\big) \equiv -i \, {\bf b}(s) \cdot \boldsymbol{\sigma}.
\end{equation}
Here $V(s)=V_0/\cosh(s)$, $\xi =\omega T$ and $s = t/T$.
Given the initial condition $|+\rangle\equiv(1,0)^T$ at
$t=-\infty$, our purpose is to get an approximate value for the transition probability
to the state $|-\rangle\equiv(0,1)^T$ at $t=+\infty$. Its exact expression is
collected in the first line of eq. (\ref{P_Sech}), which we reproduce here for
the reader's convenience:
\begin{equation}\label{neptex}
P_{ex} = |(U_I)_{12}(+\infty,-\infty)|^2=
\frac{\sin^2\gamma}{\cosh^2(\pi\xi/2)},
\end{equation}
with $\gamma=\pi V_0T$.
To obtain in practice a numerical approximation to $P_{ex}$ we
have to integrate the equation over a sufficiently large time interval. We take the
initial condition at $s_0=-25$ and the numerical integration is
carried out until $s_f=25$. Then, we determine $(U_I)_{12}(s_f,s_0)$.
As a first numerical test we take a fixed (and relatively large)
time step $h$ such that the whole numerical integration in the
time interval $s\in[s_0,s_f]$ is carried out with 50 evaluations of
the vector ${\bf b}(s)$ for all methods. In this way their computational cost is
similar.
To illustrate the
qualitative behaviour of Magnus integrators in comparison with
standard Runge--Kutta schemes, the following methods are
considered:
\begin{itemize}
\item {\bf Explicit first-order Euler} (E1):
$Y_{n+1}=Y_n+hA(t_n)Y_n$ with $t_{n+1}=t_n+h$ and $h=1$
(solid lines with squares in the figures).
\item {\bf Explicit fourth-order Runge--Kutta} (RK4), i.e., scheme
(\ref{RK4}) with $h=2$, since only two
evaluations of ${\bf b}(s)$ per step are required in the linear case (solid lines with triangles).
\item {\bf Second-order Magnus} (M2): we consider the midpoint rule (one evaluation per
step) to approximate $\Omega_1$ taking $h=1$ (dashed lines), i.e.,
\begin{equation}\label{mpoint}
Y_{n+1}=\exp\big( -ih \, {\bf b}_n\cdot \boldsymbol{\sigma} \big) \,
Y_n = \left(
\cos ( hb_n ) \,I - i \frac{\sin (hb_n) }{b_n} \, \boldsymbol{b}_n\cdot\boldsymbol{\sigma}
\right) Y_n,
\end{equation}
with ${\bf b}_n\equiv{\bf b}(t_n+ h/2)$ and $b_n=\|{\bf b}_n\|$.
The trapezoidal rule is equally valid by considering
${\bf b}_n\equiv({\bf b}(t_n)+{\bf b}(t_n+ h))/2$.
\item {\bf Fourth-order Magnus} (M4). Using the fourth-order
Gauss--Legendre rule to approximate the integrals and taking
$h=2$ one has the scheme (\ref{or4GL}) which for this problem
reads
\begin{eqnarray}\label{expma4}
{\bf b}_1 & = & {\bf b}(t_n+c_1h) , \quad
{\bf b}_2={\bf b}(t_n+c_2h) , \nonumber \\
{\bf d} & = & \frac{h}{2}\big( {\bf b}_1 + {\bf b}_2 \big)
-h^2\frac{\sqrt{3}}{6} \, ({\bf b}_1 \times {\bf b}_2) \\
Y_{n+1} & = & \exp\big( -i \, {\bf d}\cdot \boldsymbol{\sigma} \big) \,
Y_n, \nonumber
\end{eqnarray}
with $c_1=\frac12-\frac{\sqrt{3}}{6}, \
c_2=\frac12+\frac{\sqrt{3}}{6}$ (dotted lines).
\end{itemize}
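As an illustration, the M2 scheme (\ref{mpoint}) is simple to code. The sketch below (our own, using the convention $T=1$, so that $\gamma=\pi V_0$ and $\xi=\omega$) integrates the model from $s_0$ to $s_f$ and returns the transition probability; with a small enough step it should reproduce (\ref{neptex}) to several digits:

```python
import numpy as np

SX = np.array([[0., 1.], [1., 0.]], dtype=complex)   # sigma_1
SY = np.array([[0., -1j], [1j, 0.]], dtype=complex)  # sigma_2

def rz_transition_m2(gamma, xi, s0=-25.0, sf=25.0, n_steps=10000):
    """Second-order Magnus (midpoint rule) for the Rosen--Zener model, T = 1."""
    V0 = gamma / np.pi                 # gamma = pi V0 T with T = 1
    h = (sf - s0) / n_steps
    U = np.eye(2, dtype=complex)
    for n in range(n_steps):
        s = s0 + (n + 0.5) * h         # midpoint of the step
        V = V0 / np.cosh(s)
        bx, by = V * np.cos(xi * s), -V * np.sin(xi * s)
        b = np.hypot(bx, by)           # |b(s)|  (b_z = 0, so b = V > 0)
        bs = bx * SX + by * SY         # b . sigma
        # one step:  U <- exp(-i h b.sigma) U  via the su(2) formula
        U = (np.cos(h * b) * np.eye(2) - 1j * (np.sin(h * b) / b) * bs) @ U
    return abs(U[1, 0])**2             # transition probability (= |U_12|^2 by unitarity)
```

Note that each step is exactly unitary, so the computed probability always lies in $[0,1]$, in contrast with the Euler and Runge--Kutta schemes above.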
We choose $\xi=0.3$ and $\xi=1$, and each numerical integration is carried
out for different values of $\gamma$ in the range
$\gamma\in[0,2\pi]$. We plot the corresponding approximations to
the transition probability in a similar way as in Figure
\ref{plot:Sech3} for the analytical treatment. Thus we
obtain the plots of Figure~\ref{plot:Num_xi}.
As expected, the performance of the methods deteriorates
as $\gamma$ increases. Notice also that the qualitative behaviour of
the different numerical schemes is quite similar to that exhibited by
the analytical approximations. Euler and Runge--Kutta methods
do not preserve unitarity and may lead to
transition probabilities greater than 1 (just like the standard perturbation theory).
On the other hand,
for sufficiently small values of
$\gamma$ (inside the convergence domain of the Magnus
series) the fourth-order Magnus method improves the result achieved by
the second-order, whereas for large values of
$\gamma$ a higher order method does not necessarily lead to a better
approximation.
In Figure~\ref{plot:Num_gamma} we repeat the experiment now taking
$\gamma=1$ and $\gamma=2$, and for different values of $\xi$. In
the first case, only the Euler method differs considerably from
the exact solution and in the second case this happens for both RK
methods.
\begin{figure}[htb]
\begin{center}
\includegraphics[height=7cm,width=14cm]{Figures/FigsNum_xi_03_1.eps}
\end{center}
\caption{{Rosen--Zener model:
Transition probabilities as a function of $\gamma$, with $\xi=0.3$ and $\xi=1$. The curves
are coded as follows. Solid line represents the exact result;
E1: solid lines with squares; RK4: solid lines with triangles; M2:
dashed lines; M4: dashed-broken lines (indistinguishable from exact result in left panel).}}
\label{plot:Num_xi}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[height=7cm,width=14cm]{Figures/FigNum_ga_1_2.eps}
\end{center}
\caption{{Rosen--Zener model:
Transition probabilities as a function of $\xi$, with $\gamma=2$ and $\gamma=5$.
Lines are coded as in Figure \ref{plot:Num_xi}.}}
\label{plot:Num_gamma}
\end{figure}
To increase the accuracy, one can always take smaller time steps, but then
the number of evaluations of $A(t)$ increases, and this may be computationally
very expensive for some problems due to the complexity of the time dependence and/or
the size of the matrix. In those cases, it is important to have reliable
numerical methods providing the desired accuracy as fast as
possible or, alternatively, leading to the best possible accuracy
at a given computational cost.
A good perspective of the overall performance of a given numerical integrator is
provided by the so-called efficiency diagram.
This efficiency plot is obtained by carrying out the numerical
integration with different time steps,
corresponding to different numbers of evaluations of $A(t)$.
For each run one compares the corresponding approximation
with the exact solution and plots the error as a function of the
total number of matrix evaluations. The results are better
illustrated in a double logarithmic scale. In that case, the slope of the
curves should correspond, in the limit of very small time steps,
to (minus) the order of accuracy of the method.
To illustrate this issue, in Figure \ref{plot:Num_Efic} we collect
the efficiency plots of the previous schemes when $\xi=0.3$ with
$\gamma=10$ (left) and $\gamma=100$ (right). We have also included
the results obtained by several higher order integrators, namely
the sixth-order RK method (RK6) whose coefficients are collected
in (\ref{RK-tablero}) and the sixth-order Magnus integrator (M6)
given by (\ref{aprox6}) and (\ref{4.1.4}). We clearly observe
that the Euler method is, by far, the worst choice if accurate
results are desired. Notice the (double) logarithmic scale of the
plots: thus, for instance, when $\gamma=10$ the range goes approximately from
300 to 3000 evaluations of $A(t)$. Magnus integrators, in
addition to providing results in $\mathrm{SU}(2)$ by construction
(as the exact solution), show a better efficiency than
Runge--Kutta schemes for these examples, and this efficiency
increases with the value of $\gamma$ considered. The implicit
Runge--Kutta--Gauss--Legendre methods (\ref{RKGL46-tablero}) show
slightly better results than the explicit RK methods, but still
considerably worse than Magnus integrators.
\begin{figure}[tb]
\begin{center}
\includegraphics[height=7cm,width=14cm]{Figures/Fig_efic_Mag_RK.eps}
\end{center}
\caption{{Rosen--Zener model:
Error in the transition probabilities versus the number of evaluations
of the Hamiltonian $H_I(s)$ for $\xi=0.3$ and $\gamma=10$ (left panel)
and $\gamma=100$ (right panel).}}
\label{plot:Num_Efic}
\end{figure}
If we compare Magnus integrators of different orders of accuracy,
we observe that the most efficient scheme is the second order
method M2 when relatively low accuracies are desired. For higher
accuracy, however, it is necessary to carry out a thorough
analysis of the computational cost of the methods for a given
problem before asserting the advantage of M4 or M6 with respect
to higher order schemes. For a fixed time step $h$, the
computational cost of a certain family of methods (such as those
based on Magnus) usually increases with the order. However, if one
fixes the number of $A(t)$ evaluations, this is not necessarily
the case (sometimes higher order methods require more commutators
but fewer exponentials).
Let us now compare the performance of the Magnus methods with
respect to other Lie-group solvers, namely Fer and Cayley methods.
We repeat the same experiments as in Fig.~\ref{plot:Num_Efic} but,
for clarity, only the results for the 6th-order methods are shown.
We consider the symmetric-Fer method given by (\ref{sym-Fer-n})
and (\ref{eq.4.2.5}) and the Cayley method (\ref{cayl2}) with
(\ref{eq.51b}) using in both cases the Gauss--Legendre quadrature.
The results obtained are collected in
Figure \ref{plot:Num_Ef_MaCaFe}. We clearly
observe that the relative performance of the Cayley method
deteriorates as $\gamma$ increases, similarly to
RK6. In spite of preserving the qualitative properties, this
example shows that for some problems polynomial or rational
approximations do not perform efficiently. Here, in particular, the
Magnus scheme is slightly more efficient than the symmetric Fer
method, although for other problems their performance is quite similar.
\begin{figure}[tb]
\begin{center}
\includegraphics[height=7cm,width=14cm]{Figures/Fig_efic_MaCaFe6.eps}
\end{center}
\caption{{Rosen--Zener model:
Same as Figure~\ref{plot:Num_Efic} where we compare the performance of the
6th-order Magnus, symmetric-Fer, Cayley and RK6.}}
\label{plot:Num_Ef_MaCaFe}
\end{figure}
\subsection{Computing the exponential of a matrix}
\label{exponen}
We have seen that the numerical schemes based on the Magnus expansion
provide excellent results when applied to equation (\ref{NI.1}) with
coefficient matrix (\ref{marz}). In fact, they are even more efficient than
several Runge--Kutta algorithms. Of course, for this particular example
the number of $A(t)$ evaluations is a good indication of the computational
cost required by the numerical schemes, since the evaluation of
$\exp(\Omega)$ can be done analytically by means of formula (\ref{exppaulis}).
In general, however, the matrix exponential also has to be approximated
numerically, and the performance of the numerical integration algorithms based on
the Magnus expansion strongly depends on how this is done.
It may happen that the evaluation of $\exp(C)$, where $C$ is a (real or complex) $N \times N$ matrix,
represents the major factor in the overall computational cost
required by this class of algorithms and is probably one of their most
problematic aspects.
As a matter of fact, the approximation of the
matrix exponential is among the oldest and most extensively researched
topics in numerical mathematics. Although many efficient algorithms
have been developed, the problem is still far from being solved in general.
It seems then reasonable to briefly summarize here some of the
most widely used procedures in this context.
Let us begin with two obvious but important remarks. (i) First, one
has to distinguish whether it is required to evaluate the full matrix
$\exp(C)$ or only the product $\exp(C)v$ for some given vector $v$. In the latter
case, special algorithms can be designed requiring a much reduced computational
effort. This is especially true when $C$ is large and sparse (as often happens
with matrices arising from the spatial discretization of partial differential
equations).
(ii) Second, for the numerical integration methods based on ME
one has to compute
$\exp(C(h))$, where $C(h)=\mathcal{O}(h^p)$, $h$ is a (not too large) step size
and $p\geq 1$. In other words, the matrices to be exponentiated have
typically a small norm (usually restricted by the convergence bounds of the
expansion).
In any case,
prior to the computation of $\exp(C)$, it is important to have
as much information about the exponent $C$ as possible. Thus, for
instance, if the
matrix $C$ resides in a Lie algebra, then $\exp(C)$
belongs to the corresponding Lie group and one has to decide
whether this qualitative property has to be exactly preserved or
whether constructing a sufficiently accurate approximation (e.g., at a higher order
than the order of the integrator itself) is enough. Also, when $C$ can be
split into different parts, one may consider a factorization of the form
$\exp(C) \approx \exp(C_1) \exp(C_2) \cdots \exp(C_m)$ if each individual
exponential is easy to evaluate exactly.
An important reference in this context is
\cite{moler78ndw} and its updated version \cite{moler03ndw}, where up to
nineteen (or twenty in \cite{moler03ndw})
different numerical algorithms for computing the exponential of a matrix are
carefully reviewed. An extensive software package for computing the matrix
exponential is Expokit, developed by R. Sidje, with
Fortran and Matlab versions available \cite{sidje07exs,sidje98esp}. In addition to computing
the matrix-valued function $\exp(C)$ for small, dense matrices $C$, Expokit has
functions for computing the vector-valued function $\exp(C)v$ for both small, dense
matrices and large, sparse matrices.
\subsubsection{Scaling and squaring with Pad\'e approximation}
\label{sec.4.4}
One of the most reliable ways of computing $\exp(C)$ is by
scaling and squaring in combination with a diagonal Pad\'e
approximation \cite{moler03ndw}. The procedure is based on a fundamental
property of the exponential function, namely
\[
\e^C = (\e^{C/j})^j
\]
for any integer $j$. The idea then is to choose $j$ to be a power of two for which
$\exp(C/j)$ can be reliably and efficiently computed,
and then to form the matrix $(\exp(C/j))^j$ by
repeated squaring. If the integer $j$ is chosen as the smallest power
of two for which $\|C\|/j < 1$, then $\exp(C/j)$ can be satisfactorily
computed by diagonal Pad\'e approximants of order, say, $m$. This is roughly the method
used by the built-in function \texttt{expm} in Matlab.
For the integrators based on the Magnus expansion, as
$C=\mathcal{O}(h^p)$ with $ p\geq 1$, one usually gets good approximations
with relatively small values of $j$ and $m$.
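A minimal version of this procedure, using the diagonal $[2/2]$ Pad\'e approximant $\psi_4$ given below (production codes such as Matlab's \texttt{expm} use higher-degree approximants and more careful scaling), could be sketched as follows:

```python
import numpy as np

def expm_pade_ss(C):
    """Scaling and squaring with the diagonal [2/2] Pade approximant psi_4."""
    I = np.eye(C.shape[0])
    nrm = np.linalg.norm(C)
    # smallest power of two j = 2^k with ||C|| / j < 1
    k = 0 if nrm < 1.0 else int(np.floor(np.log2(nrm))) + 1
    B = C / 2**k
    B2 = B @ B
    num = I + B / 2.0 + B2 / 12.0      # P_2(B)   (numerator of psi_4)
    den = I - B / 2.0 + B2 / 12.0      # P_2(-B)
    E = np.linalg.solve(den, num)      # psi_4(B) = exp(B) + O(||B||^5)
    for _ in range(k):                 # undo the scaling: exp(C) = exp(C/2^k)^(2^k)
        E = E @ E
    return E
```
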
As is well known, diagonal Pad\'{e} approximants map the Lie algebra $%
\mathrm{o}_{J}(n)$ to the $J$-orthogonal Lie group $\mathrm{O}_{J}(n)$ and thus
also constitute a valid alternative to the evaluation of the
exponential matrix in
Magnus-based methods for this particular Lie group. More specifically, if $%
B\in \mathrm{o}_{J}(n)$, then $\psi _{2m}(tB)\in
\mathrm{O}_{J}(n)$ for sufficiently small $t\in \mathbb{R}$, with
\begin{equation}
\psi _{2m}(\lambda )=\frac{P_{m}(\lambda )}{P_{m}(-\lambda )},
\label{eq.24}
\end{equation}
provided the polynomials $P_{m}$ are generated according to the
recurrence
\begin{eqnarray*}
P_{0}(\lambda ) & = & 1, \ \qquad P_{1}(\lambda )=2+\lambda \\
P_{m}(\lambda ) &=&2(2m-1)P_{m-1}(\lambda )+\lambda
^{2}P_{m-2}(\lambda ).
\end{eqnarray*}
Moreover,
$\psi _{2m}(\lambda )= \exp(\lambda )+\mathcal{O}(\lambda ^{2m+1})$ and $\psi
_{2}$ corresponds to the Cayley transform (\ref{eq.6}), whereas
for $m=2,3$ we have
\begin{eqnarray*}
\psi _{4}(\lambda ) &=&\left( 1+\frac{1}{2}\lambda
+\frac{1}{12}\lambda ^{2}\right) \Big/\left( 1-\frac{1}{2}\lambda
+\frac{1}{12}\lambda ^{2}\right)
\\
\psi _{6}(\lambda ) &=&\left( 1+\frac{1}{2}\lambda +\frac{1}{10}\lambda ^{2}+%
\frac{1}{120}\lambda ^{3}\right) \Big/\left( 1-\frac{1}{2}\lambda +\frac{1}{%
10}\lambda ^{2}-\frac{1}{120}\lambda ^{3}\right) .
\end{eqnarray*}
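The three-term recurrence is straightforward to implement; the scalar sketch below (our own check) reproduces these approximants:

```python
import numpy as np

def psi(m, lam):
    """psi_{2m}(lam) = P_m(lam) / P_m(-lam) via the three-term recurrence."""
    Pp, P = 1.0, 2.0 + lam       # P_0(lam),  P_1(lam)
    Qp, Q = 1.0, 2.0 - lam       # P_0(-lam), P_1(-lam)
    for j in range(2, m + 1):
        Pp, P = P, 2 * (2 * j - 1) * P + lam**2 * Pp
        Qp, Q = Q, 2 * (2 * j - 1) * Q + lam**2 * Qp
    return P / Q
```

Here `psi(1, .)` is the scalar Cayley transform, while `psi(2, .)` and `psi(3, .)` reproduce the expressions for $\psi_4$ and $\psi_6$ displayed above.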
Thus, we can combine the optimized approximations to $\Omega $
obtained in subsection \ref{Mintegrators} for Magnus based methods
with diagonal Pad\'{e} approximants up to the corresponding order
to obtain time-symmetric integration schemes preserving the
algebraic structure of the problem without computing the matrix
exponential. For instance, the ``Magnus--Pad\'e'' methods thus
obtained involve, in addition to one matrix inversion, 3 and 8
matrix-matrix products for order 4 and 6, respectively.
Observe that since $\Omega^{[2n]} = \mathcal{O}(h)$ then
\[
\psi_{2m}(\Omega^{[2n]}) = \exp(\Omega^{[2n]}) + \mathcal{O}(h^{2k+1}),
\]
where $k = \min \{m,n \}$. With $m=n$ we have a method of order
$2n$. However, for some problems this rational approximation to
the exponential may not be very accurate, depending on the
eigenvalues of $\Omega^{[2n]}$. In this case one may take $m>n$,
thus giving a better approximation to the exponential and a more
accurate result at the price of slightly increasing the computational cost of
the method. Of course, when the norm of the matrix $\Omega^{[2n]}$ is not so
small, this technique can be combined with scaling and squaring
\cite{dragt95com}.
Instead of using Pad\'e approximants for the exponential
of the scaled matrix $B \equiv C/2^k$, Najfeld and Havel \cite{najfeld95dot} propose
a rational approximation for the matrix function
\begin{equation} \label{nreex}
H(B) = B \coth(B) = B \, \frac{\e^{2B}+I}{\e^{2B}-I},
\end{equation}
from which the exponential can be obtained as
\[
\e^{2B} = \frac{H(B)+B}{H(B)-B}
\]
and then iteratively square the result $k$ times to recover the
exponential of the original matrix $C$. From the continued fraction
expansion of $H(B)$, it is possible to compute the first rational
approximations as
\[
H_2(B) = \frac{I + \frac{2}{5} B^2}{I + \frac{1}{15} B^2}, \qquad
H_4(B) = \frac{I + \frac{4}{9} B^2 + \frac{1}{63} B^4}
{I + \frac{1}{9} B^2 + \frac{1}{945} B^4}
\]
and so on. Observe that the representation (\ref{nreex}) can be
regarded as a generalized Cayley transform of $B$ and thus it also provides
approximations in the group $\mathrm{O}_J(n)$. In \cite{najfeld95dot} the
authors report a saving of about 30\% in the number of matrix multiplications
with respect to diagonal Pad\'e approximants when an optimal $k$ and a
rational approximation for $H(B)$ are used.
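A sketch of this alternative with the lowest approximant $H_2$ follows (illustrative only; \cite{najfeld95dot} discusses the optimal choice of $k$ and higher approximants). Since $H_2(B)$ is a rational function of $B^2$, it commutes with $B$ and the order of the quotients is immaterial:

```python
import numpy as np

def expm_coth(C, k=3):
    """exp(C) via the rational approximation H_2(B) to B coth(B), B = C/2^k, k >= 1."""
    I = np.eye(C.shape[0])
    B = C / 2**k
    B2 = B @ B
    H = np.linalg.solve(I + B2 / 15.0, I + 0.4 * B2)   # H_2(B)
    E = np.linalg.solve(H - B, H + B)                  # exp(2B) = (H(B)+B)/(H(B)-B)
    for _ in range(k - 1):                             # exp(2B)^(2^(k-1)) = exp(C)
        E = E @ E
    return E
```
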
\subsubsection{Chebyshev approximation}
Another valid alternative
is to use polynomial approximations to the exponential of $C$ as a
whole. Suppose, in particular, that $C$ is a matrix
of the form $C = -i \tau H$, with $H$ Hermitian and $\tau>0$, as is the case
in Quantum Mechanics. In the Chebyshev approach, the evolution operator
$\exp(-i \tau H)$ is expanded in a truncated series of Chebyshev
polynomials, in analogy with the approximation of a scalar function
\cite{talezer84aaa}. As is
well known, given a function $F(x)$ in the interval $[-1,1]$, the Chebyshev
polynomial approximations are optimal, in the sense that the maximum
error in the approximation is minimal compared to almost all possible
polynomial approximations \cite{suli03ait}. To apply this procedure, one
first has to bound the extreme eigenvalues $E_{\mathrm{min}}$
and $E_{\mathrm{max}}$ of $H$. Then a truncated Chebyshev
expansion of $\exp(-i x)$ on the interval $[\tau E_{\mathrm{min}}, \tau
E_{\mathrm{max}}]$ is considered:
\[
\exp(-i x) \approx \sum_{n=0}^m c_n P_n(x),
\]
where
\[
P_n(x) = T_n \left( \frac{2x - \tau E_{\mathrm{max}} - \tau
E_{\mathrm{min}}}{\tau E_{\mathrm{max}} - \tau E_{\mathrm{min}}}
\right)
\]
with appropriately chosen coefficients $c_n$. Here
$T_n(x)$ are the Chebyshev polynomials on the interval
$[-1,1]$ \cite{abramowitz65hom}, which can be determined via
the recurrence relation
\[
T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x); \quad T_1(x) = x; \quad T_0(x) = 1.
\]
Finally, one uses the approximation
\begin{equation} \label{cheb1}
\exp(-i \tau H) \approx \sum_{n=0}^m c_n P_n(\tau H).
\end{equation}
This technique is frequently used in numerical quantum dynamics
to compute $\exp(-i \tau H) \psi_0$ over very long times. This can be done
with $m$ matrix-vector products if the approximation (\ref{cheb1}) is
considered with a sufficiently large truncation index $m$. In fact, the
degree $m$ necessary for achieving a specific accuracy depends linearly
on the step size $\tau$ and the spectral radius of $H$ \cite{nettesheim00mqc},
and thus an increase of the step size reduces the computational work
per unit step.
In a practical implementation, $m$ can be chosen such that the accuracy
is dominated by the round-off error \cite{leforestier91aco}. This approach
has two main drawbacks: (i) it is not unitary, and therefore the norm is not conserved
(although the deviation from unitarity is really small due to its extreme
accuracy),
and (ii) intermediate results are not obtained, since typically $\tau$ is very large.
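The whole procedure can be sketched as follows (for simplicity the eigenvalue bounds are obtained here by direct diagonalization and the coefficients $c_n$ by Chebyshev interpolation of $\e^{-ix}$; in practice one would use cheap spectral bounds and Bessel-function formulas for the $c_n$):

```python
import numpy as np

def chebyshev_expm_apply(H, v, tau, m=30):
    """Approximate exp(-i tau H) v by a degree-m Chebyshev expansion (H Hermitian)."""
    E = np.linalg.eigvalsh(H)              # cheap bounds would be used in practice
    Emin, Emax = E[0], E[-1]
    alpha = tau * (Emax - Emin) / 2.0
    beta = tau * (Emax + Emin) / 2.0
    # coefficients c_n of f(x) = exp(-i(alpha x + beta)) on [-1,1]
    N = m + 1
    theta = np.pi * (np.arange(N) + 0.5) / N
    f = np.exp(-1j * (alpha * np.cos(theta) + beta))
    c = np.array([(2.0 / N) * np.sum(f * np.cos(n * theta)) for n in range(N)])
    c[0] *= 0.5
    # operator scaled so that its spectrum lies in [-1,1]
    I = np.eye(H.shape[0])
    Hs = (2.0 * H - (Emax + Emin) * I) / (Emax - Emin)
    # accumulate sum_n c_n T_n(Hs) v with the three-term recurrence
    t_prev = v.astype(complex)
    t_curr = Hs @ t_prev
    result = c[0] * t_prev + c[1] * t_curr
    for n in range(2, N):
        t_prev, t_curr = t_curr, 2.0 * (Hs @ t_curr) - t_prev
        result = result + c[n] * t_curr
    return result
```

Only matrix-vector products with the (scaled) Hamiltonian are required, which is the key to the efficiency of the method for large sparse $H$.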
\subsubsection{Krylov space methods}
As we have already pointed out, very often what is really required, rather than the
exponential of the matrix $C$ itself, is the computation of
$\exp(C)$ applied to a vector. In this situation,
evaluating $\e^C$ is somewhat analogous to computing $C^{-1}$ to get the
solution of the linear system of equations
$C x = b$ for many different $b$'s: other procedures are clearly far more
desirable. The computation of $\e^C v$ can be efficiently done with Krylov subspace methods,
in which approximations to the solution are obtained from the Krylov spaces spanned
by the vectors $\{v, Cv, C^2 v, \ldots, C^j v\}$ for some $j$ that is typically
small compared to the dimension of $C$ \cite{druskin95ksa,park86uqt}.
The Lanczos method for iteratively solving symmetric
eigenvalue problems is of this form \cite{watkins02fom}. If, as before, we let $C = -i \tau H$,
the symmetric Lanczos process
generates recursively an orthonormal basis $V_{m} = [ v_{1} \cdots
v_{m}]$ of the $m$th Krylov subspace $K_{m}(H, u) = \mathrm{span}
(u, H u, \ldots, H^{m-1} u)$ such that
\[
H V_{m} = V_{m} L_{m} + [0 \cdots 0 \, \beta_{m} \, v_{m+1}],
\]
where the symmetric tridiagonal $m \times m$ matrix $L_{m} = V^T_{m}
H V_{m}$ is the orthogonal projection of $H$ onto $K_{m}(H, u)$. Finally,
\[
\exp(-i \tau H) u \approx V_{m} \exp(-i \tau L_{m}) V_{m}^T u
\]
and the matrix exponential $\exp(-i \tau L_{m})$ can be computed by diagonalizing
$L_{m}$, $L_{m} = Q_{m} D_{m} Q_{m}^T$, as
\[
\exp(-i \tau L_{m}) = Q_{m} \exp(-i \tau D_{m}) Q_{m}^T,
\]
with $D_{m}$ a diagonal matrix.
This iterative process is stopped when
\[
\beta_{m} \left| \left( \exp(-i \tau L_{m}) \right)_{m,m} \right|
< \mathrm{tol}
\]
for a fixed tolerance. Very good approximations are often obtained with
relatively small values of $m$, and computable error bounds exist for the
approximation. This class of schemes generally requires $\mathcal{O}(N^2)$
floating point operations in the computation of $\e^C v$.
More details are contained in the references
\cite{hochbruck99eif,hochbruck98eif,hochbruck97oks}.
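The process just described might be sketched as follows (names and sizes are ours; full reorthogonalization is added as cheap insurance at these small dimensions):

```python
import numpy as np

def lanczos_expm_apply(H, u, tau, m):
    """Krylov approximation: exp(-i tau H) u ~ ||u|| V_m exp(-i tau L_m) e_1, H Hermitian."""
    n = u.shape[0]
    V = np.zeros((n, m), dtype=complex)
    a = np.zeros(m)                    # diagonal of L_m
    b = np.zeros(m)                    # off-diagonal (b[:m-1] used)
    nrm_u = np.linalg.norm(u)
    V[:, 0] = u / nrm_u
    for j in range(m):
        w = H @ V[:, j]
        a[j] = np.real(np.vdot(V[:, j], w))
        w = w - a[j] * V[:, j]
        if j > 0:
            w = w - b[j - 1] * V[:, j - 1]
        # full reorthogonalization against all previous basis vectors
        w = w - V[:, :j + 1] @ (V[:, :j + 1].conj().T @ w)
        if j < m - 1:
            b[j] = np.linalg.norm(w)
            V[:, j + 1] = w / b[j]
    Lm = np.diag(a) + np.diag(b[:m - 1], 1) + np.diag(b[:m - 1], -1)
    wl, Q = np.linalg.eigh(Lm)         # L_m = Q D Q^T
    e1 = np.zeros(m); e1[0] = 1.0
    y = Q @ (np.exp(-1j * tau * wl) * (Q.T @ e1))
    return nrm_u * (V @ y)
```
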
\subsubsection{Splitting methods}
\label{splittingme}
Frequently, one has to exponentiate a matrix
which can be split into several parts which are either solvable or easy to deal
with. Let us assume for simplicity that $C=A+B$, where the computation of $\e^C$ is very
expensive, but
$\e^A$ and $\e^B$ are cheap and easy to evaluate. In such circumstances, it makes
sense to
approximate $\e^{\varepsilon C}$, with $\varepsilon$ a small
parameter, by the following scheme:
\begin{equation} \label{split-standard}
\psi^{[p]}_{h} \equiv
\e^{\varepsilon b_m B} \e^{\varepsilon a_m A} \ \cdots \
\e^{\varepsilon b_1 B} \e^{\varepsilon a_1 A} \
= \e^{\varepsilon (A+B)} + \mathcal{O}(\varepsilon^{p+1})
\end{equation}
with appropriate parameters $a_i,b_i$. This can be seen as an
approximation to the solution at $t=\varepsilon$ of the equation
$Y^{\prime} = (A+B)Y$ by a composition of the exact solutions of
the equations $Y^{\prime}= AY$ and $Y^{\prime}= BY$ at times
$t=a_i \varepsilon$ and $t=b_i\varepsilon$, respectively. Two
instances of this kind of approximations are given by the well
known Lie--Trotter formula
\begin{equation} \label{Lie-Trotter}
\psi^{[1]}_{h} = \e^{ \varepsilon A} \e^{\varepsilon B}
\end{equation}
and the second order symmetric composition
\begin{equation}\label{leapfrog}
\psi^{[2]}_{h} = \e^{\varepsilon A/2} \e^{ \varepsilon B}
\e^{\varepsilon A/2},
\end{equation}
referred to as Strang splitting, St\"ormer, Verlet and leap-frog,
depending on the particular area where it is used.
Splitting methods have been considered in different contexts: in
designing symplectic integrators, for constructing volume-preserving
algorithms, in the numerical integration of partial differential equations, etc.
An extensive survey of the theory and practice of splitting methods can be found in
\cite{blanes02psp,hairer06gni,leimkuhler04shd,mclachlan02sm,mclachlan06gif}
and references therein.
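As a toy example of (\ref{leapfrog}) (entirely ours: a diagonal $A$ and a nilpotent $B$, both trivially exponentiated, with a closed-form reference exponential since $A+B$ is triangular):

```python
import numpy as np

A = np.diag([1.0, 2.0])                     # exp(x A) is a diagonal exponential
B = np.array([[0.0, 1.0], [0.0, 0.0]])      # nilpotent: exp(x B) = I + x B

def strang_step(eps):
    """One application of the Strang splitting  e^{eps A/2} e^{eps B} e^{eps A/2}."""
    expA = np.diag(np.exp(eps * np.diag(A) / 2.0))
    expB = np.eye(2) + eps * B
    return expA @ expB @ expA

def exact_exp(eps):
    # exp(eps(A+B)) in closed form: A + B is upper triangular with diagonal (1, 2)
    return np.array([[np.exp(eps), np.exp(2 * eps) - np.exp(eps)],
                     [0.0,         np.exp(2 * eps)]])
```

The defect $\|\psi^{[2]}_{\varepsilon} - \e^{\varepsilon(A+B)}\|$ decreases by a factor of about $2^3$ when $\varepsilon$ is halved, as expected for a second-order (locally third-order) scheme.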
Splitting methods are particularly useful in geometric
numerical integration. Suppose that
the matrix $C = A + B$ resides in a Lie algebra $\mathfrak{g}$. Then, obviously,
$\exp(C)$ belongs to the corresponding Lie group $\mathcal{G}$ and one is
naturally interested in getting approximations also in $\mathcal{G}$. In this respect,
notice that if $A,B \in
\mathfrak{g}$, then the scheme (\ref{split-standard}) also provides an approximation
in $\mathcal{G}$. It is worth noticing that other methods for the
approximation of the matrix exponential, e.g., Pad\'e approximants and Krylov
subspace techniques, are not guaranteed to map elements from $\mathfrak{g}$ to
$\mathcal{G}$. Although diagonal Pad\'e approximants map
the Lie algebra $\mathrm{o}_J$ to the underlying group, it is possible to show
that the only analytic function that maps $\mathfrak{sl}(n)$ into the special
linear group $\mathrm{SL}(n)$ approximating the exponential function up to a given order is the
exponential itself \cite{feng95vpa}. In consequence, diagonal Pad\'e
approximants only provide results in $\mathrm{SL}(n)$ if the computation
is accurate to machine precision.
In \cite{celledoni00ate}, Celledoni and Iserles devised a splitting
technique for obtaining an approximation to $\exp(C)$ in the Lie group $\mathcal{G}$ based
on a decomposition of $C \in \mathfrak{g}$ into low-rank matrices $C_i \in \mathfrak{g}$.
Basically, given a $n \times n$ matrix $C \in \mathfrak{g}$, they proposed to split
it in the form
\[
C = \sum_{i=1}^k C_i,
\]
such that
\begin{enumerate}
\item $C_i \in \mathfrak g$, for $i=1,\ldots,k$.
\item Each $\exp(C_i)$ is easy to evaluate exactly.
\item Products of such exponentials are computationally cheap.
\end{enumerate}
For instance, for the Lie algebra $\mathfrak{so}(n)$, the choice
\[
C_i = \frac{1}{2} c_i e_i^T - \frac{1}{2} e_i c_i^T, \qquad i=1,\ldots,n,
\]
where $c_1,\ldots,c_n$ are the columns of $C$ and $e_i$ is the $i$-th vector of the
canonical basis of $\mathbb{R}^n$, satisfies the above requirements (with $k=n$), whereas in the
case of $\mathfrak{g} = \mathfrak{sl}(n)$ other (more involved) alternatives are possible
\cite{celledoni00ate}.
Proceeding as in (\ref{Lie-Trotter}), with
\[
\psi^{[1]} = \exp(\varepsilon C_1) \exp(\varepsilon C_2) \cdots \exp(\varepsilon C_k)
\]
we get an order one approximation in $\varepsilon$ to $\exp(\varepsilon C)$,
whereas the symmetric composition
\begin{equation} \label{strang-espe}
\psi^{[2]} = \e^{\frac{1}{2} \varepsilon C_k} \e^{\frac{1}{2} \varepsilon C_{k-1}}
\cdots \e^{\frac{1}{2} \varepsilon C_2}
\e^{ \varepsilon C_1} \e^{\frac{1}{2} \varepsilon C_2} \cdots
\e^{\frac{1}{2} \varepsilon C_{k-1}} \e^{\frac{1}{2} \varepsilon C_k}
\end{equation}
provides an approximation of order two in $\varepsilon$, and this can be subsequently
combined with different techniques for increasing the order.
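As a quick numerical check of (\ref{strang-espe}), the sketch below applies the symmetric composition to a random element of $\mathfrak{so}(5)$ split as above (the truncated-Taylor `expm` helper stands in for any accurate small-matrix exponential) and verifies that $\psi^{[2]}$ is orthogonal with a local error of size $\mathcal{O}(\varepsilon^3)$:

```python
import numpy as np

def expm(M, s=20, K=12):
    # scaling and squaring with a truncated Taylor series (fine for small matrices)
    X = M / 2.0**s
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, K + 1):
        T = T @ X / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

rng = np.random.default_rng(1)
n, eps = 5, 0.05
M = rng.standard_normal((n, n))
C = M - M.T
I = np.eye(n)
Cs = [0.5 * (np.outer(C[:, i], I[i]) - np.outer(I[i], C[:, i])) for i in range(n)]

def psi2(e):
    # e^{e C_k/2} ... e^{e C_2/2}  e^{e C_1}  e^{e C_2/2} ... e^{e C_k/2}
    P = I.copy()
    for Ci in Cs[:0:-1]:
        P = P @ expm(0.5 * e * Ci)
    P = P @ expm(e * Cs[0])
    for Ci in Cs[1:]:
        P = P @ expm(0.5 * e * Ci)
    return P

err = lambda e: np.linalg.norm(psi2(e) - expm(e * C))
r = err(eps) / err(eps / 2)
print(np.allclose(psi2(eps).T @ psi2(eps), I))   # True: result stays orthogonal
print(5.0 < r < 12.0)                            # ratio ~ 8, i.e. error O(eps^3)
```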
With respect to the computational cost, the results reported in
\cite{celledoni00ate} show that, up to order four in
$\varepsilon$, this class of splitting schemes is competitive
with the Matlab built-in function \texttt{expm} when machine
accuracy is not required in the final approximation. Running
\texttt{expm} on randomly generated matrices, it is possible to
verify that computing $\exp(C)$ to machine accuracy requires about
(20-30)$n^3$ floating point operations, depending on the
eigenvalues of $C$, whereas the 4th-order method constructed from
(\ref{strang-espe}) involves (12-15)$n^3$ operations when $C \in
\mathfrak{so}(100)$ \cite{celledoni00ate}. When instead one approximates
$\exp(C) v$ for a vector $v \in \mathbb{R}^n$, the cost of
low-rank splitting methods drops to $K n^2$, where $K$ is a
constant, and is thus comparable to that achieved by
polynomial approximants \cite{hochbruck97oks}.
Splitting methods of the above type are by no means the only way to express
$\exp(C)$ as a product of exponentials of elements in $\mathfrak{g}$. For instance,
the Wei--Norman approach (\ref{wn2}) can also be implemented in this setting. Suppose
that $\dim \mathfrak{g} = s$ and let $\{X_1,X_2,\ldots, X_s\}$ be a basis of
$\mathfrak{g}$. In that case, as we have seen (subsection \ref{w-nfacto}),
it is possible to represent $\exp(t C)$
for $C \in \mathfrak{g}$ and sufficiently small $|t|$ in canonical coordinates of the
second kind,
\[
\e^{t C} = \e^{g_1(t) X_1} \, \e^{g_2(t) X_2} \cdots \e^{g_s(t) X_s},
\]
where the scalar functions $g_k$ are analytic at $t=0$. Although the $g_k$ are
only implicitly defined, they can be approximated by their Taylor series.
The cost of the procedure
can be greatly reduced by choosing adequately the basis and exploiting the Lie-algebraic
structure \cite{celledoni01mft}.
Yet another procedure to get approximations of $\exp(C)$ in a
Lie-algebraic setting which has received considerable attention
during the last years is based on generalized polar decompositions
(GPD), an approach introduced in \cite{munthe-kaas01gpd} and
further elaborated in \cite{iserles05eco,zanna01gpd}. In
particular, in \cite{iserles05eco}, by bringing together GPD with
techniques from numerical linear algebra, new algorithms are
presented with complexity $\mathcal{O}(n^3)$, both when the
exponential is applied to a vector and to a matrix. This is
certainly not competitive with Krylov subspace methods in the
first case, but represents at least a 50\% improvement on the
execution time, depending on the Lie algebra considered, in the
latter. Another difference with respect to Krylov methods is that
the algorithms based on generalized polar decompositions
approximate the exponential to a given order of accuracy and thus
they are well suited to exponential approximations within
numerical integrators for ODEs, since the error is subsumed in
that of the integration method.
For a complete description of the procedure and its practical
implementation we refer the reader to \cite{iserles05eco}.
\subsection{Additional numerical examples}
\label{numexM}
The purpose of subsection \ref{niarz} was to illustrate the main
features of the numerical schemes based on the Magnus expansion in
comparison with other standard integrators (such as Runge--Kutta
schemes) and other Lie-group methods (e.g., Fer and Cayley) on
a solvable system. For larger systems the efficiency analysis is
more challenging, since the (exact or approximate)
computation of matrix exponentials
plays an important role in the performance of the methods. It
makes sense, then, to analyze from this point of view more realistic problems
where one necessarily has to approximate the exponential in a consistent way.
As an illustration of this situation
we next consider two skew-symmetric
matrices $A(t)$, with $Y(0) = I$, so that the solution $Y(t)$ of
$Y^{\prime} = A(t) Y$ is
orthogonal for all $t$. In particular, the upper triangular
elements of the matrices $A(t)$ are as follows:
\begin{eqnarray}
\mbox{(a)} \;\;\; A_{ij} & = & \sin \left( t(j^2 - i^2) \right) \qquad\qquad 1 \leq
i < j \leq
N \label{ej3.1} \\
\mbox{(b)} \;\;\; A_{ij} & = & \log \left( 1 + t \, \frac{j-i}{j+i} \right)
\label{ej3.2}
\end{eqnarray}
with $N=10$. In both cases $Y(t)$ oscillates with time, mainly due
to the time-dependence of $A(t)$ (in (\ref{ej3.1})) or the norm of
the eigenvalues (in (\ref{ej3.2})).
The integration is carried out in the interval $t \in [0,10]$ and
the approximate solutions are compared with the exact one at the final time
$t_{\mathrm{f}}=10$ (obtained with very high accuracy by using a
sufficiently small time step). The corresponding error at
$t_{\mathrm{f}}$ is computed for different values of the time step
$h$. The Lie-group solvers are implemented with Gauss--Legendre
quadratures and constant step size.
First, we plot the accuracy of the different fourth- and sixth-order methods
as a function of the number of $A(t)$ evaluations. In contrast to the previous
examples, there is now no closed formula for the matrix
exponentials appearing in the Magnus-based integrators, so that
some alternative procedure must be applied. In particular, the computation
of $\e^C$ to machine accuracy is done by scaling--(diagonal
Pad\'e)--squaring, so that the result is correct up to round-off.
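The scaling--(diagonal Pad\'e)--squaring procedure itself is easy to sketch. The version below is the textbook scheme with a fixed $[6/6]$ approximant (not the optimized variable-degree algorithm behind \texttt{expm}):

```python
import numpy as np

def expm_pade(A, m=6):
    # Scaling--(diagonal Pade)--squaring: scale A so ||A/2^s|| <= 1/2,
    # apply the [m/m] diagonal Pade approximant, then square s times.
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(A, 1), 1e-16)))) + 1)
    X = A / 2.0**s
    n = A.shape[0]
    # coefficients of the numerator p(x) of the [m/m] Pade approximant to exp
    c = [1.0]
    for k in range(1, m + 1):
        c.append(c[-1] * (m - k + 1) / (k * (2 * m - k + 1)))
    P = np.zeros((n, n)); Q = np.zeros((n, n)); Xk = np.eye(n)
    for k in range(m + 1):
        P += c[k] * Xk
        Q += c[k] * (-1) ** k * Xk       # q(x) = p(-x)
        Xk = Xk @ X
    E = np.linalg.solve(Q, P)            # R(X) = q(X)^{-1} p(X)
    for _ in range(s):
        E = E @ E                        # undo the scaling
    return E

# sanity check against an exactly known exponential
D = np.diag([0.3, -1.2, 2.5])
print(np.allclose(expm_pade(D), np.diag(np.exp(np.diag(D)))))   # True
```

For skew-symmetric $X$ one has $q(X) = p(-X) = p(X)^T$, and since $p(X)$ and $q(X)$ commute (both are polynomials in $X$), the approximant $q(X)^{-1}p(X)$ is exactly orthogonal: this is the mapping property of diagonal Pad\'e approximants for quadratic Lie algebras mentioned earlier.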
Figure \ref{nexo6.2} shows the results obtained for the problems
(\ref{ej3.1}) and (\ref{ej3.2}) with fourth- and
sixth-order numerical schemes based on Magnus and Cayley, and also
explicit Runge--Kutta methods. In
the first problem, Magnus and Cayley show a very similar performance, which happens
to be only slightly better than that of RK methods.
The situation changes drastically, however, for the second problem. Here the
behaviour of
Cayley and RK methods is essentially similar, whereas schemes
based on Magnus are clearly more efficient. The reason seems to be that
Cayley and RK
methods give poor approximations to the exponential, which, on the other hand,
has to be accurately approximated, since
the eigenvalues of
$A(t)$ may take large values.
With respect to symmetric Fer
methods, their efficiency is quite similar to that of Magnus if the matrix
exponentials are evaluated accurately up to machine precision.
This is so for the matrix (\ref{ej3.1}) even if Pad\'e
approximants of relatively low order are used to replace the
exponentials.
On the other hand, the efficiency of ``Magnus--Pad\'e'' methods (we
denote by MP$nm$ a Magnus method of order $n$ where the
exponential is approximated by a diagonal Pad\'e approximant of order $m$, and
MP$n$ if $n=m$) deteriorates markedly for the problem
(\ref{ej3.2}), although it always remains better than that of the
corresponding Cayley schemes.
\begin{figure}[tb]
\begin{center}
\includegraphics[height=7cm,width=14cm]{Figures/FigsEfic_sin_log.eps}
\end{center}
\caption{{\protect\small Efficiency diagram corresponding to the
optimized 4-th (circles) and 6-th (squares) order Lie-group
solvers based on Magnus (solid lines) and Cayley (broken lines),
and the standard Runge--Kutta methods (dashed lines).}}
\label{nexo6.2}
\end{figure}
To better illustrate all these comments, in Figure \ref{nexo6.4} we
display the error in the solution corresponding to (\ref{ej3.1})
and (\ref{ej3.2}) as a function of time in the interval $t \in
[0,100]$ for $h=1/20$, as obtained by the previous methods. We should stress that
all schemes require the same number of $A$
evaluations.
In the right
picture the exponentials appearing in the Magnus method are
computed using a Pad\'e approximant of order six (MP6), of
order eight (MP68) and to machine accuracy (M6). Observe the great
importance of evaluating the exponential as accurately as possible
for the matrix (\ref{ej3.2}): by increasing slightly the
computational cost per step in the computation of the matrix
exponential it is possible to improve dramatically the accuracy of
the methods. In contrast, for problem (\ref{ej3.1}) the
meaningful point seems to be that the integration scheme provides a
solution in the corresponding Lie group.
\begin{figure}[tb]
\begin{center}
\includegraphics[height=7cm,width=14cm]{Figures/FigsErrorG_sin_log.eps}
\end{center}
\caption{{\protect\small Error as a function of time (in
logarithmic scale) obtained with different 6-th order integrators
for $h=1/20$: (a) problem (\ref{ej3.1}); (b) problem
(\ref{ej3.2}).}} \label{nexo6.4}
\end{figure}
\subsection{Modified Magnus integrators}
\subsubsection{Variations on the basic algorithm}
The examples collected in subsections \ref{niarz} and \ref{numexM} show that the
numerical methods constructed from the Magnus expansion can be
computationally very efficient indeed. It is fair to say, however, that this
efficiency can be seriously affected when dealing with certain
types of problems.
Suppose, for instance, that one has to numerically integrate a
problem defined in $\mathrm{SU}(n)$. Although Magnus integrators are
unconditionally stable in this setting (since they preserve
unitarity up to roundoff independently of the step size $h$), in
practice only small values of $h$ can be used to achieve accurate
results, since otherwise the convergence of the series is not assured.
Of course, the use of small time steps may render the algorithm
exceedingly costly.
In other applications the problem depends on several
parameters, so that the integration has to be carried out for
different values of the parameters. In that case the overall
integration procedure can be computationally very expensive.
In view of all these difficulties, it is hardly
surprising that several modifications of the standard Magnus
integrators have been developed to try to minimize these undesirable
effects and to obtain integrators especially adapted to particular
problems.
One basic tool used time and again in this context is performing a
preliminary linear transformation, similarly to those introduced
in section \ref{PLT}. These transformations can be carried out
either for the whole integration interval or at each step in the
process. Given an appropriately chosen transformation, $\tilde{Y}_0(t)$,
one factorizes $Y(t)=\tilde{Y}_0(t) \, \tilde{Y}_1(t)$, where the unknown $\tilde{Y}_1(t)$ satisfies the
equation
\begin{equation}\label{Pert}
\tilde{Y}_1' = B(t) \tilde{Y}_1
\end{equation}
and $B(t)$ depends on $A(t)$ and $\tilde{Y}_0(t)$. This transformation
makes sense, of course, if $\|B(t)\|<\|A(t)\|$ and thus typically
$\tilde{Y}_0$ is chosen in such a way that the norm of $B$ verifies the
above inequality. As a consequence, Magnus integrators can be applied to
(\ref{Pert}) with larger time steps, also providing more accurate
results.
Alternatively, for problems where
in addition to the time step $h$ there is another parameter ($E$, say),
one may analyze the Magnus expansion in terms of $h$ and $E$. This
allows us to identify which terms at each order in the series
expansion give the main contribution to the error, and design
methods which include these terms in their formulation. The
resulting schemes should then provide more accurate results at a
moderate computational cost without altering the convergence
domain.
As a general rule, it is always desirable to have in
advance as much information about the equation and the properties
of its solution as possible, and then to try to incorporate all this
information into the
algorithm.
Let us review some useful possibilities in this context. From (\ref{3.2.1}) one has
\begin{equation}\label{Taylor_mid2}
A(t) = a_0 + \tau a_1 + \tau^2 a_2 + \cdots,
\end{equation}
where $\tau=t-t_{1/2}$. The first term is exactly solvable
($a_0=A(t_{1/2})$) and, for many problems, it just provides the
main contribution to the evolution of the system. In that case it
makes sense to take
\[
\tilde{Y}_0(t) = \e^{(t-t_n) a_0} = \e^{(t-t_n) A(t_{1/2})}
\]
and subsequently integrate eq. (\ref{Pert}) with
\[
B(t) = \e^{-(t-t_n) A(t_{1/2})}
\left( A(t) - A(t_{1/2}) \right)
\e^{(t-t_n) A(t_{1/2})}.
\]
This approach has been considered in
\cite{iserles02otg,iserles02tga}, and shows an extraordinary
improvement when the system is highly oscillatory and the main
oscillatory part is integrated with $\tilde{Y}_0$. In those cases, the
norm of $B(t)$ is considerably smaller than $\|A(t)\|$, but $B(t)$ is still highly
oscillatory, so that especially adapted quadrature rules have to
be used in conjunction with the Magnus expansion
\cite{iserles04otn,iserles04oqm}.
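Since for skew-symmetric $A(t)$ the factor $\tilde{Y}_0(t)$ is orthogonal, the conjugation defining $B(t)$ preserves the Frobenius norm, so $\|B(t)\| = \|A(t) - A(t_{1/2})\| = \mathcal{O}(h)$ over the step even when $\|A(t)\|$ is large. A minimal sketch, with an illustrative oscillator (large frozen part, slow drift):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def A(t):                        # illustrative: large frozen part, slow drift
    return (10.0 + 0.1 * np.sin(t)) * J

tn, h = 0.0, 0.05
th = tn + h / 2                  # the midpoint t_{1/2}

def expmJ(a):                    # exp(a*J) is a plane rotation (closed form)
    return np.cos(a) * np.eye(2) + np.sin(a) * J

def B(t):
    Y0 = expmJ((t - tn) * A(th)[0, 1])     # Y0 = exp((t - tn) A(t_{1/2}))
    return Y0.T @ (A(t) - A(th)) @ Y0      # Y0^{-1} = Y0^T (orthogonal)

t = tn + h
print(np.linalg.norm(B(t)) < 1e-2 * np.linalg.norm(A(t)))   # True: ||B|| << ||A||
```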
In some other problems the contributions from the derivatives
can also be significant, so that a more appropriate transformation
is defined by
\begin{equation}\label{magnus_2}
\tilde{Y}_0(t) = \exp \left( \int_{t_n}^{t} A(\tau) d\tau \right).
\end{equation}
The resulting methods can be considered then as a combination of
the Fer or Wilcox expansions and the Magnus expansion. This
approach has been pursued in \cite{degani06rrc}.
On the other hand, it is known that several physically relevant
systems evolve adiabatically or almost-adiabatically. In that case
it seems appropriate to consider the adiabatic picture which
instantaneously diagonalizes $A(t)$ (subsection \ref{PLT}). This
analysis is carried out in
\cite{jahnke04lts,jahnke03nif,lorenz05aif}. In \cite{jahnke03nif}
the adiabatic picture is used perturbatively, whereas in
\cite{jahnke04lts} it is shown that Magnus in the new picture
leads to significant improvements.
Alternatively, one can analyze the structure of the leading error
terms in order to identify the main contribution to the error at
each $\Omega_i$ in the Magnus series expansion. In most cases they
correspond to terms involving only $\alpha_1$ and its nested
commutators with $\alpha_2$.
Thus, in particular, the standard fourth-order method given by (\ref{eq.2.3a}) can be
easily improved by including the dominant error term at higher
orders, i.e.,
\[
\Omega^{[4]} = \alpha_1 - \frac{1}{12} [\alpha_1,\alpha_2]
+ \frac{1}{720}[\alpha_1,\alpha_1,\alpha_1,\alpha_2]
- \frac{1}{30240}[\alpha_1,\alpha_1,\alpha_1,\alpha_1,\alpha_1,\alpha_2]+\ldots.
\]
We recall that using the fourth-order Gauss--Legendre quadrature
rule we can take $\alpha_1=\frac{h}{2}(A_1+A_2), \
\alpha_2=\sqrt{3}\, h \, (A_2-A_1)$ with $A_1,A_2$ given in
(\ref{AiG4}). The new method requires additional commutators but
the accuracy can be improved a good deal. This procedure is
analyzed in \cite{moan98eao}, where it is shown how
to sum up all terms of the form
$[\alpha_1,\alpha_1,\ldots,\alpha_1,\alpha_2]$. An error analysis
in the limit of very large values of $\|\alpha_1\|$ is done in
\cite{aparicio05neo,malham06ete}.
\subsubsection{Commutator-free Magnus integrators}
\label{cfmi}
All numerical methods based on the Magnus expansion appearing in
the preceding sections require the evaluation of a matrix
exponential whose argument contains several nested commutators. As we have
repeatedly pointed out, computing the exponential is frequently
the most time-consuming part of the algorithm. There are problems where
the matrix $A(t)$ has a sufficiently simple structure which allows
one to approximate efficiently the exponential $\exp(A(t_i))$, or
the exponential of a linear combination of the matrix $A(t)$
evaluated at different points. In some sense, this is equivalent
to having efficient methods to compute or to approximate $\tilde{Y}_0$ in
(\ref{magnus_2}). It may happen, however, that the computation of
the matrix exponential is a much more involved task due to the
presence of commutators in the Magnus expansion. For this reason,
it makes sense to look for approximations to the Magnus expansion
which do not involve commutators whilst still preserving the same
qualitative properties. In other words, one may be interested in
compositions of the form
\begin{equation}\label{CF_expsInt}
\Psi^{[n]}_m \equiv
\exp\left(\int_{t_n}^{t_n+h} p_m(s)A(s) ds \right) \ \cdots \
\exp\left(\int_{t_n}^{t_n+h} p_1(s)A(s) ds \right)
\end{equation}
where $p_i(s)$ are scalar functions chosen in such a way that
$\Psi^{[n]}_m=\e^{\Omega(t_n+h)}+ \mathcal{O}(h^{n+1})$. Alternatively,
instead of
the functions $p_i(s)$, it is possible to find
coefficients $\varrho_{i,j}, \ i=1,\ldots,m, j=1,\ldots,k$ such
that
\begin{equation}\label{CF_exps}
\Psi^{[n]}_m \equiv
\e^{\tilde{A}_m} \cdots
\e^{\tilde{A}_1}, \qquad
\mbox{with} \qquad
\tilde{A}_i=h\sum_{j=1}^k \varrho_{i,j} A_j
\end{equation}
is an approximation of the same order. This procedure requires
first computing $A_j=A(t_n+c_jh), \ j=1,\ldots,k$ for some
quadrature nodes, $c_j$, of order $n$ or higher and, obviously,
the coefficients $\varrho_{i,j}$ will depend on this choice.
The process simplifies if one works in the associated graded free
Lie algebra generated by $\{\alpha_i\}$, as in the sequel of eq.
(\ref{3.2.2}). Thus, achieving fourth-order integrators reduces
to solving the equations for the new coefficients $a_{i,1}, a_{i,2}$
in
\begin{eqnarray}
\Psi^{[4]}_m & \equiv &
\exp\left( a_{m,1} \, \alpha_1+a_{m,2} \, \alpha_2 \right) \cdots
\exp\left( a_{1,1} \, \alpha_1+a_{1,2} \, \alpha_2 \right) \label{CF4b}
\end{eqnarray}
with the requirement that $\Psi^{[4]}_m= \exp\left( \Omega^{[4]}
\right)+O(h^5)$, where $ \Omega^{[4]}$ is given by
(\ref{eq.2.3a}). Here the dependence of $a_{i,1}, a_{i,2}$ on the
coefficients $\varrho_{i,j}$ is determined through the existing
relation between the $\alpha_i$ and the $A_j$ given in
(\ref{CBaseMu}). The order conditions for the coefficients $a_{i,1}, a_{i,2}$
can be easily obtained from the Baker--Campbell--Hausdorff
formula. As we have already mentioned, time-symmetry is an
important property to be preserved by the integrators, while at
the same time it also simplifies the analysis. Scheme (\ref{CF4b})
is time-symmetric if
\begin{equation}\label{symm-coef}
a_{m+1-i,1}=a_{i,1}, \qquad
a_{m+1-i,2}=-a_{i,2}, \qquad i=1,2,\ldots,m
\end{equation}
in which case the order conditions at even order terms are
automatically satisfied.
As an illustration, the simple compositions
\begin{eqnarray}
\Psi^{[4]}_2 & \equiv &
\exp\left(\frac12 \alpha_1+\frac16 \alpha_2\right) \
\exp\left(\frac12 \alpha_1-\frac16 \alpha_2\right)
\label{2-CF} \\
\Psi^{[4]}_3 & \equiv & \exp\left(\frac{1}{12} \alpha_2\right) \
\exp\left(\alpha_1\right) \ \exp\left(-\frac{1}{12}
\alpha_2\right)
\label{3-CF}
\end{eqnarray}
are in fact fourth-order (commutator-free) methods requiring two
and three exponentials, respectively \cite{blanes00smf}. In
particular, scheme (\ref{2-CF}), when $\alpha_1, \alpha_2$ are
approximated using the fourth-order Gauss--Legendre quadrature as
shown in (\ref{AiG4}) and (\ref{alfasG4}), leads to the scheme
\begin{equation}
\Psi^{[4]}_2 \equiv
\exp\big(h(\varrho_{2,1} A_1+\varrho_{2,2} A_2)\big) \
\exp\big(h(\varrho_{1,1} A_1+\varrho_{1,2} A_2)\big) \label{2-CF-GL}
\end{equation}
with $\varrho_{1,1}=\varrho_{2,2}=\frac14+\frac{\sqrt{3}}{6},
\varrho_{1,2}=\varrho_{2,1}=\frac14-\frac{\sqrt{3}}{6}$. Methods
closely related to the scheme (\ref{3-CF}) are presented in
\cite{baye03fof,blanes00smf,lu00foc}, where they are applied to
the Schr\"odinger equation with a time-dependent potential. A
method quite similar to (\ref{2-CF}) is analyzed in
\cite{thalhammer06afo} through its application to parabolic
initial boundary value problems. A detailed study of fourth and
sixth order commutator-free methods is presented in
\cite{blanes06fas}.
On the other hand, very often the differential equation
(\ref{NI.1}) can be split into two parts, so that one has instead
\begin{equation}\label{sep-non-autonomo}
Y' = \big( A(t) + B(t) \big) Y,
\end{equation}
where each part can be trivially or very efficiently solved. For
instance, the Schr\"odinger equation with a time-dependent
potential and, possibly, a time-dependent kinetic energy belongs
to this class. In principle, the following families of geometric
integrators are specially tailored for this problem:
\begin{description}
\item[I-] The commutator-free Magnus integrators
(\ref{CF_exps}), which in this case read
\begin{equation}\label{CF_expsAB}
\Psi^{[n]}_m \equiv
\e^{\tilde{A}_m+\tilde{B}_m} \cdots
\e^{\tilde{A}_1+\tilde{B}_1}, \qquad
\mbox{with} \qquad
\tilde{A}_i=h\sum_{j=1}^k \varrho_{i,j} A_j, \ \
\tilde{B}_i=h\sum_{j=1}^k \varrho_{i,j} B_j.
\end{equation}
Assuming that $\e^{\tilde{A}_i}$ and $\e^{\tilde{B}_i}$ are
easily computed, each exponential can be approximated by a
conveniently chosen splitting method (\ref{split-standard})
\cite{mclachlan02sm}:
\begin{equation}\label{}
\e^{\tilde{A}_i+\tilde{B}_i}\simeq
\e^{b_s\tilde{B}_i}\e^{a_s\tilde{A}_i} \cdots
\e^{b_1\tilde{B}_i}\e^{a_1\tilde{A}_i}.
\end{equation}
\item[II-] If one takes the time variable in $A(t),B(t)$ as two new
coordinates, one may use any splitting method as follows \cite{sanzserna96cni}:
\begin{equation}\label{Split_AB(t)}
\Psi^{[n]}_{l,h} \equiv
\e^{b_lhB(w_l)}\e^{a_l hA(v_l)} \cdots
\e^{b_1hB(w_1)}\e^{a_1 hA(v_1)},
\end{equation}
with
\[
v_i=\sum_{j=1}^{i-1} b_j, \quad
w_i=\sum_{j=1}^{i} a_j.
\]
and $b_0=0, \ A(v_i)\equiv A(t_n+v_ih), \ B(w_i)\equiv
B(t_n+w_ih)$.
\end{description}
Both approaches have pros and cons. By applying procedure I we may
get methods of order $2n$ with only $n$ evaluations of $A(t)$,
$B(t)$ using e.g. Gauss--Legendre quadratures, but if $m$ in
(\ref{CF_expsAB}) is large, the number of matrix exponentials to
be computed leads to exceedingly costly methods. Approach II,
on the other hand, has the advantage of a smaller number of
stages, but also presents two drawbacks: (i) many evaluations of
$A(t),B(t)$ are required in general; (ii) for matrices $A$ and $B$
with a particular structure there are specially designed splitting
methods which are far more efficient, but these schemes are not
easily adapted to this situation.
Next we show how to combine splitting methods with
techniques leading to commutator-free Magnus schemes to design efficient
numerical algorithms possessing the advantages of approaches I and II, and
at the same time generalizing the splitting idea
(\ref{split-standard}) to this setting
\cite{blanes06smf,blanes07smf}.
The starting point is similar to that of the previous schemes, i.e.\ we consider a
composition of the form
\begin{equation} \label{split-Magnus-n}
\psi_{l,h}^{[n]} =
\e^{\tilde{B}_l} \ \e^{\tilde{A}_l} \cdots
\e^{\tilde{B}_1} \ \e^{\tilde{A}_1} ,
\end{equation}
where the matrices $\tilde{A}_i$ and $\tilde{B}_i$ are
\begin{equation}\label{split-coefs}
\tilde{A}_i = h \sum_{j=1}^k \rho_{ij} A_j,
\qquad
\tilde{B}_i = h \sum_{j=1}^k \sigma_{ij} B_j,
\end{equation}
with appropriately chosen real parameters $\rho_{ij},\sigma_{ij}$
depending on the coefficients of the chosen quadrature rule.
Notice that $\e^{\tilde{A}_i}$ can be seen as the solution of the
initial value problem $Y^{\prime} = \hat{A}_i Y$, $Y(t_n)=I$ at
$t_{n+1}$, where $\tilde{A}_i = h \hat{A}_i$. Of course, the same
considerations apply to $\e^{\tilde{B}_i}$.
In many cases it is convenient to write the coefficients
$\rho_{ij},\sigma_{ij}$ explicitly in terms of the coefficients
$c_i$. Following \cite{blanes06smf} they can be written as
\begin{equation}\label{eq.2.11b}
\rho_{ij} = \sum_{l=1}^s a_{i,l} \left( R^{(s)} Q_X^{(s,k)}\right)_{lj}, \qquad
\sigma_{ij} = \sum_{l=1}^s b_{i,l} \left( R^{(s)} Q_X^{(s,k)}\right)_{lj},
\end{equation}
where the coefficients for the matrices $R^{(s)}, \ s=2,3$ are
given in (\ref{CambioBase}) and for $Q_X^{(s,k)}$ (whose elements
depend on the coefficients $b_i,c_i$ for the quadrature rule) as
shown in (\ref{ru1}).
In this way, the coefficients $a_{ij}$ and $b_{ij}$ are
independent of the quadrature choice and can be obtained by
solving some order conditions (see \cite{blanes06smf} for more
details).
This procedure allows us to analyse separately particular cases
for the matrices $A,B$ in order to build efficient methods. For
instance, in \cite{blanes06smf} the following particular cases are
considered: (i) when the matrices $A(t),B(t)$ have a general
structure; (ii) when they satisfy the additional constraint
$[B(t_i),[B(t_j),[B(t_k),A(t_l)]]]=0$ as it happens, for instance,
if $A$ corresponds to the kinetic energy and $B$ to the potential
energy (both in classical or quantum mechanics).
As an illustration, we consider the following 4th-order 6-stage
BAB composition
\begin{equation} \label{split-Magnus-4}
\psi_{6,h}^{[4]} =
\e^{\tilde{B}_7} \ \e^{\tilde{A}_6} \ \e^{\tilde{B}_6} \cdots
\e^{\tilde{A}_1} \ \e^{\tilde{B}_1}.
\end{equation}
In Table \ref{table2} we collect the coefficients $a_{ij},b_{ij}$
to be used in (\ref{eq.2.11b}) to obtain the coefficients
$\rho_{ij},\sigma_{ij}$ to be used in the scheme
(\ref{split-Magnus-4}) for two methods, denoted by
$\mathrm{GS}_6$-4 in the general case (whose coefficients
$a_{i1},b_{i1}$ correspond to $S_6$ in \cite{blanes02psp}) and
$\mathrm{MN}_6$-4 when $[B(t_i),[B(t_j),[B(t_k),A(t_l)]]]=0$ (the
coefficients $a_{i1},b_{i1}$ correspond to $\mathrm{SRKN}_6^b$ in
\cite{blanes02psp}).
Finally, one has to write the scheme in terms of the matrices
$A_i,B_i$. For instance, the composition (\ref{split-Magnus-4})
with the 4th-order Gauss--Legendre quadrature (i.e. taking
$Q^{(2,2)}$ in (\ref{Gauss-Legendre}) and $R^{(2)}$ in
(\ref{CambioBase}) to obtain the coefficients
$\rho_{ij},\sigma_{ij}$ in (\ref{eq.2.11b})) gives
\begin{eqnarray}
\tilde{A}_i & = & \left( \frac{1}{2} a_{i1} - \sqrt{3} a_{i2}
\right) h A_1 + \left( \frac{1}{2} a_{i1} + \sqrt{3} a_{i2}
\right) h A_2 \nonumber \\
\tilde{B}_i & = & \left( \frac{1}{2} b_{i1} - \sqrt{3} b_{i2}
\right) h B_1 + \left( \frac{1}{2} b_{i1} + \sqrt{3} b_{i2}
\right) h B_2. \label{eq.2.16}
\end{eqnarray}
\begin{table}[tb]
\caption{Splitting methods of order 4 for separable non-autonomous
systems. $\mathrm{GS}_6$-4 is intended for general separable
problems, whereas $\mathrm{MN}_6$-4 can be applied when
$[B(t_i),[B(t_j),[B(t_k),A(t_l)]]]=0$. All the coefficients are given in terms of
$b_{11},a_{11},b_{21},a_{21},b_{31}$ for each method. }
\label{table2} { \footnotesize
\begin{tabular}{l}
\begin{tabular}{l|l}
$\mathrm{GS}_6$-4 & $\mathrm{MN}_6$-4 \\
\hline
$b_{11}= 0.0792036964311957 $ & $ b_{11}= 0.0829844064174052 $ \\
$a_{11}= 0.209515106613362$ & $ a_{11}= 0.245298957184271$ \\
$b_{21}= 0.353172906049774 $ & $ b_{21}= 0.396309801498368 $ \\
$a_{21}=-0.143851773179818$ & $ a_{21}= 0.604872665711080$ \\
$b_{31}=-0.0420650803577195 $ & $ b_{31}=-0.0390563049223486 $
\end{tabular}
\\
\begin{tabular}{lll}
\hline
$a_{31}= 1/2-(a_{11}+a_{21})$ & $ b_{41}=1-2(b_{11}+b_{21}+b_{31})$ \\
$a_{41}=a_{31}$ & $b_{51}=b_{31}$ \\
$a_{51}=a_{21}$ & $b_{61}=b_{21}$ \\
$a_{61}=a_{11}$ & $b_{71}=b_{11}$
\end{tabular}
\\
\begin{tabular}{lll}
\hline
$a_{12} = (2 a_{11}+2a_{21}+a_{31}-2b_{11}-2b_{21})/c$ &
$b_{12} = (2 a_{11} + 2 a_{21} - 2 b_{11} - b_{21})/d$ \\
$a_{22} = 0$ & $b_{22}=(-2a_{11}+b_{11})/d$ \\
$a_{32} = -a_{11}/c$ & $b_{32} = b_{42} = 0$ \\
$a_{42}=-a_{32}$ & $b_{52}=-b_{32}$ \\
$a_{52}=-a_{22}$ & $b_{62}=-b_{22}$ \\
$a_{62}=-a_{12}$ & $b_{72}=-b_{12}$ \\
\end{tabular}
\\
\begin{tabular}{ll}
\hline
$c = 12(a_{11}+2a_{21}+a_{31}-2b_{11}+2 a_{11} b_{11} - 2 b_{21} +
2 a_{11} b_{21})$ & \\
$d = 12(2a_{21}-b_{11}+2 a_{11} b_{11} - 2 a_{21} b_{11} - b_{21}
+ 2 a_{11} b_{21})$ & \\
\hline
\end{tabular}
\end{tabular}
}
\end{table}
\subsection{Magnus integrators for nonlinear differential
equations}
\label{mifnde}
The success of Magnus methods applied to the numerical integration of
linear systems has motivated several attempts to adapt the schemes
for solving time-dependent nonlinear differential equations. For
completeness we present some recently proposed generalizations of
Magnus integrators. We consider two different problems: (i) a
nonlinear matrix equation defined in a Lie group, and (ii) a general
nonlinear equation to which the techniques of section \ref{GNLM}
can be applied.
\subsubsection{Nonlinear matrix equations in Lie groups}
As we have already mentioned, the strategy adopted by most
Lie-group methods for solving the nonlinear matrix differential equation
(\ref{nlm1}),
\[
Y^{\prime} = A(t, Y) Y, \quad\qquad Y(0) = Y_0 \in \mathcal{G}
\]
defined in a Lie group $\mathcal{G}$, whilst preserving its Lie
group structure, is to lift $Y(t)$ from $\mathcal{G}$ to the
underlying Lie algebra $\mathfrak{g}$ (usually with the
exponential map), then formulate and numerically solve there an
associated differential equation and finally map the solution back
to $\mathcal{G}$. In this way the discretization procedure works
in a linear space rather than in the Lie group. In particular, the
idea of the so-called Runge--Kutta--Munthe-Kaas class of schemes
is to approximate the solution of the
associated differential equation in the Lie algebra $\mathfrak{g}$
by means of a classical Runge--Kutta method
\cite{iserles00lgm,munthe-kaas98rkm,munthe-kaas99hor}.
To generalize Magnus integrators when $A=A(t,Y)$, an
important difference with respect to the linear case is that now
multivariate integrals depend also on the value of the (unknown)
variable $Y$ at quadrature points. This leads to implicit methods
and nonlinear algebraic equations in every step of the integration
\cite{zanna99car}, which in general cannot compete in efficiency
with other classes of geometric integrators such as splitting and
composition methods.
An obvious alternative is just to replace the integrals appearing
in the nonlinear Magnus expansion developed in section
\ref{NL-Magnus} by affordable quadratures, depending on the
particular problem. If, for instance, we use Euler's method to
approximate the first term in (\ref{nlm4}), $\Omega^{[1]}(h) = h
A(0,Y_0) + \mathcal{O}(h^2)$ and $\Omega^{[2]}$ is discretized with the
midpoint rule, we get the second-order scheme
\begin{eqnarray} \label{nlmnum1}
v_2 & \equiv & h
A \left( \frac{h}{2}, \e^{\frac{h}{2} A(0,Y_0)} Y_0
\right) = \Omega^{[2]}(h) +
\mathcal{O}(h^3) \nonumber \\
Y_1 & = & \e^{v_2} Y_0.
\end{eqnarray}
The same procedure can be carried out at higher orders,
discretizing consistently the integrals appearing in
$\Omega^{[m]}(h)$ for $m>2$ \cite{casas06eme}.
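The scheme (\ref{nlmnum1}) is easily coded once $\exp(v_2)$ is available; the step from a general $t_n$ is obtained by translating it in time. As an illustration, the sketch below applies it to a toy problem in $\mathrm{SO}(2)$ with $A(t,Y) = (t + Y_{11})\,J$, where $J$ is the basic $2\times 2$ skew-symmetric matrix (chosen only so that each exponential is a plane rotation in closed form), and checks second-order convergence and that the iterates stay orthogonal:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def expmJ(a):                   # exp(a*J): plane rotation in closed form
    return np.cos(a) * np.eye(2) + np.sin(a) * J

def A(t, Y):                    # toy nonlinear coefficient matrix in so(2)
    return (t + Y[0, 0]) * J

def step(t, Y, h):
    # one step of the second-order scheme: Euler predictor inside,
    # midpoint evaluation outside
    Ymid = expmJ(0.5 * h * A(t, Y)[0, 1]) @ Y     # exp((h/2) A(t,Y)) Y
    v2 = h * A(t + 0.5 * h, Ymid)
    return expmJ(v2[0, 1]) @ Y

def integrate(h, T):
    t, Y = 0.0, np.eye(2)
    while t < T - 1e-12:
        Y = step(t, Y, h)
        t += h
    return Y

Yref = integrate(1 / 1024, 1.0)                   # fine-step reference
e = lambda h: np.linalg.norm(integrate(h, 1.0) - Yref)
r = e(1 / 16) / e(1 / 32)
print(np.allclose(integrate(0.125, 1.0).T @ integrate(0.125, 1.0), np.eye(2)))
print(3.0 < r < 5.5)            # ratio ~ 4: second order
```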
\subsubsection{The general nonlinear problem}
In principle, it is possible to adapt all methods built for
linear problems to the general nonlinear non-autonomous equation
(\ref{non-lin})
\[
{\bf x}' = {\bf f}(t,{\bf x}),
\]
or equivalently, the operator differential equation
(\ref{nlp8}),
\[
\frac{d}{dt}\Phi^t = \Phi^t L_{{\bf f}(t,{\bf y})},
\qquad \quad {\bf y}={\bf x}_0.
\]
As we pointed out in section \ref{GNLM}, there are two problematic aspects
when
designing practical numerical schemes based on the Magnus expansion in the nonlinear
case. The first one is how to
compute or approximate the truncated Magnus expansion (or its action on
the initial conditions). The second one is how to evaluate the required Lie
transforms. For example, computing the Lie transform
$\exp(hL_{\bf f(y)})$ acting on ${\bf y}$ is equivalent to solving
the autonomous differential equation ${\bf x}' = {\bf f}({\bf x})
$ at $t=h$ with ${\bf x}(0)={\bf y}$, or ${\bf x}(t)=\exp(tL_{\bf
f(x_0)}){\bf x}_0$, where ${\bf x}_0={\bf y}$ can be considered as
a set of coordinates.
Very often, the presence of Lie brackets in the exponent leads to fundamental difficulties, since
the resulting vector fields usually have a very
complicated structure. Sometimes, however, this problem can be circumvented by
using the same techniques leading to
commutator-free Magnus integrators in the linear case. In any case, one should
bear in mind that the action of the exponentials in the methods designed for the linear
case has to be replaced by their corresponding maps. Alternatively, if the method is
formulated in terms of Lie transforms, the order of the exponentials has to be reversed,
according to equation (\ref{nlp7}).
Next we illustrate how to numerically
solve the problem
\begin{equation}\label{t-separable}
{\bf x}' = {\bf f}_1(t,{\bf x}) + {\bf f}_2(t,{\bf x})
\end{equation}
using the scheme (\ref{split-Magnus-n}) with (\ref{eq.2.16}) and
the coefficients $a_{ij},b_{ij}$ taken from MN$_{6}4$ in
Table~\ref{table2}.
Let us consider the Duffing equation
\begin{equation} \label{Duffing}
q'' + \epsilon q' + q^3 - q = \delta \cos(\omega t)
\end{equation}
which can be obtained from the time-dependent
Hamiltonian
\begin{equation}\label{HamDuff1}
H(q,p,t) = T(p,t) + V(q,t) = \e^{-\epsilon t} \, \frac12 p^2 +
\e^{\epsilon t}\left( \frac14 q^4 - \frac12 q^2
- \delta \cos(\omega t) q \right)
\end{equation}
or equivalently from
\begin{equation}\label{HamDuff2}
\frac{d}{dt} \left\{ \begin{array}{l}
q \\ p
\end{array} \right\} = \left\{ \begin{array}{c}
T'(t,p) \\ - V'(t,q)
\end{array} \right\} = \left\{ \begin{array}{c}
\e^{-\epsilon t} p \\ 0
\end{array} \right\} + \left\{ \begin{array}{c}
0 \\
\e^{\epsilon t}\left( q - q^3 + \delta \cos(\omega
t)\right)
\end{array} \right\}.
\end{equation}
Notice that this system already has the form (\ref{t-separable}), with each part
being exactly solvable. In consequence,
the splitting method shown in
(\ref{Split_AB(t)}) can be used here. The procedure is described as
Algorithm 1 in Table~\ref{algorithms}.
\begin{table}[h!]
\caption{Algorithms for the numerical integration of
(\ref{HamDuff1}) or (\ref{HamDuff2}): (Algorithm 1) with scheme
(\ref{Split_AB(t)}), and (Algorithm 2) with scheme
(\ref{split-Magnus-4}).} \label{algorithms} {
\small
\begin{tabular}{c}
\begin{tabular}{l|l}
\hline
{\bf Algorithm 1:\ Standard split} &
{\bf Algorithm 2:\ Magnus split}
\\
\hline
$ %
\begin{array}{l}
q_{0} = q(t_n); \quad
p_{0}= p(t_n); \\
t_a=t_n; \quad
t_b=t_n \\
{\bf do} \ \ i=1,m \\
\quad p_{i} = p_{i-1} - h a_i V'(t_a,q_{i-1}) \\
\quad t_{a} = t_{a} + h a_i \\
\quad q_{i} = q_{i-1} + h b_i T'(t_b,p_{i}) \\
\quad t_{b} = t_{b} + h b_i \\
{\bf enddo} \\
\end{array} $
&
$ %
\begin{array}{l}
q_{0} = q(t_n); \quad
p_{0} = p(t_n); \\
{\bf do} \ \ i=1,k \\
\quad T'_{i}(p) = T'(t_n+c_ih,p);
\quad V'_{i}(q) = V'(t_n+c_ih,q) \\
{\bf enddo} \\
{\bf do} \ \ i=1,m \\
\quad \tilde V_i(q) = \sigma_{i1}V'_1(q)+\cdots+ \sigma_{ik}V'_k(q) \\
\quad \tilde T_i(p) = \rho_{i1}T'_1(p)+\cdots+ \rho_{ik}T'_k(p) \\
\quad p_{i} = p_{i-1} - h \tilde V_i(q_{i-1}) \\
\quad q_{i} = q_{i-1} + h \tilde T_i(p_{i}) \\
{\bf enddo} \\
\end{array} $
\\
\hline
\end{tabular}
\end{tabular}
}
\end{table}
Observe that the leap-frog composition (\ref{leapfrog})
corresponds to $m=2$ and
\begin{equation}\label{leapfrog-coefs}
a_1=a_2=\frac12, \quad b_1=1,b_2=0.
\end{equation}
Since $b_2=0$, one stage can be saved (with a trivial modification
of the algorithm), so the scheme is considered a one-stage
method. An efficient symmetric 5-stage fourth-order integrator is given by
the coefficients ($m=6$)
\begin{equation}\label{Suzuki-coefs}
a_i=\frac{\gamma_i+\gamma_{i-1}}{2}, \quad b_i=\gamma_i, \qquad i=1,\ldots,6,
\end{equation}
with $\gamma_0=\gamma_6=0$ and
$\gamma_1=\gamma_2=\gamma_4=\gamma_5=1/(4-4^{1/3}), \
\gamma_3=1-4\gamma_1$.
Alternatively, we can use the Magnus integrator
(\ref{split-Magnus-4}). Since the kinetic energy is quadratic in
momenta, we can apply the fourth-order method MN$_{6}4$. If we take
the fourth-order Gauss-Legendre quadrature rule for the evaluation
of the time-dependent function then we can consider
(\ref{eq.2.16}), where the coefficients $a_{ij},b_{ij}$ are given
in Table~\ref{table2}. Here, $A(t)$ plays the role of $T'(t,p)$
and $B(t)$ the role of $V'(t,q)$ (the two roles are not interchangeable:
exchanging them seriously degrades the performance). The computation
of one time step is shown as Algorithm 2 in
Table~\ref{algorithms}.
We take $\epsilon=1/20$, $\delta=1/4$, $\omega=1$ and initial
conditions $q(0)=1.75$, $p(0)=0$. We integrate up to $t=10\, \pi$
and measure the average error in phase space in terms of the
number of force evaluations for different time steps (in
logarithmic scale). The results are shown in Figure \ref{fig0}.
The scheme MN$_{6}4$ has six stages per step, but only two
time-evaluations. For this reason, in the figure we have counted the
number of evaluations per step both as two and as six (left and
right curves connected by an arrow). The superiority of the new
splitting Magnus integrators for this problem is evident. If the
time-dependent functions dominate the cost of the algorithm the
superiority is even higher. Surprisingly, the method shows
better stability than the leap-frog method, which attains the
highest stability possible among the splitting methods for
autonomous problems.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=12cm]{Figures/FigDuffing2.eps}
\end{center}
\caption{{Average error versus number of force evaluations in the
numerical integration of (\ref{HamDuff2}) using second and fourth
order symplectic integrators for general separable systems (S2
corresponds to the second order leapfrog method with coefficients
(\ref{leapfrog-coefs}) and SU$_{5}4$ to the fourth order method
with coefficients (\ref{Suzuki-coefs})) and the fourth order
symplectic Runge--Kutta--Nystr\"om method MN$_{6}4$ with
initial conditions $q(0)=1.75$, $p(0)=0$ and $\epsilon=1/20$,
$\delta=1/4$, $\omega=1$.
}}
\label{fig0}
\end{figure}
\section{Some applications of the numerical integrators based on ME}
\label{animag}
In this section we collect several examples where the numerical integration methods
based on the Magnus expansion have been applied in the recent literature. Special
attention is dedicated to the numerical integration of the Schr\"odinger
equation, since the Magnus series expansion has been extensively used in this
setting almost since its very formulation.
The time-independent Schr\"odinger equation can be considered as a particular
example of a Sturm--Liouville problem, so we also review the applicability
of Magnus based techniques in this context. Then we consider a particular
nonlinear system (the differential Riccati equation) which can be, in some sense,
linearized, so that at the end one
may work with finite-dimensional matrices. Finally, we summarize a recent but
noteworthy application: the design of new classes of numerical schemes
for the integration of stochastic differential equations.
\subsection{Case study: numerical treatment of the Schr\"odinger
equation}
Before embarking on the use of numerical methods based on the Magnus
expansion in the integration of the Schr\"odinger equation, let us establish
first the theoretical framework which allows one to use numerical integrators
in this setting for
obtaining approximate solutions in time and space.
\subsubsection{Time-dependent Schr\"odinger equation}
To keep the treatment as simple as possible, we commence by considering the one-dimensional
time-dependent Schr\"odinger equation ($\hbar =1$)
\begin{equation} \label{Schr1}
i \frac{\partial}{\partial t} \psi (t,x) = H \psi(t,x) \equiv
-\frac{1}{2}\frac{\partial^2}{\partial x^2} \psi (t,x) + V(x)
\psi (t,x) ,
\end{equation}
with initial condition $\psi(0,x)=\psi_0(x)$. If we look for a
solution of the form $\psi(t,x)=\phi(t) \, \varphi(x)$, it is
clear that, by substituting into (\ref{Schr1}), one gets
$\phi(t)=\e^{-i t E}$, where $E$ is a constant and $\varphi(x)$ is
the solution of the second order differential equation
\begin{equation}\label{time-indep}
-\frac{d^2 \varphi}{dx^2} + V(x)\varphi = E \varphi.
\end{equation}
If $E>V$ the solution is oscillatory, whereas if $E<V$ the solution is
a linear combination of exponentially increasing and decreasing
functions. For bounded problems this last condition always takes place
at the boundaries. Since
\begin{equation}\label{binf}
\int |\psi(x,t)|^2 dx = \int |\varphi(x)|^2 dx < \infty,
\end{equation}
it is clear that the exponentially increasing solutions have to be
cancelled, and this can only occur for certain values of the
constant $E$, which are precisely the eigenvalues of the problem.
Let us assume that the system has only a discrete spectrum and
denote by $\{E_n,\varphi_n\}_{n=0}^{\infty}$, with $E_i<E_j, \
i<j$, the complete set of eigenvalues and associated eigenvectors.
It is well known that we can take $\{\varphi_n\}_{n=0}^{\infty}$
as an orthonormal basis and, since (\ref{Schr1}) is linear, any
solution can be written as
\begin{equation} \label{ap1}
\psi (t,x) = \sum_{n=0}^{\infty} c_n \, \e^{-i t E_n} \varphi_n(x).
\end{equation}
Using the standard notation for the inner product, one has
\begin{equation}\label{Sip2}
\langle \varphi_n(x) | \psi (t,x) \rangle
= \int \varphi_n^*(x)\, \psi(t,x) dx
= c_n \, \e^{-i t E_n}
\end{equation}
and
\[
|\langle \varphi_n(x) | \psi (t,x) \rangle|^2 = |c_n|^2
\]
is the probability of finding the system in the eigenstate $\varphi_n$, so that
$\sum_n |c_n|^2=1$. The energy is given by
\begin{equation}\label{Sip3}
\mathcal{E} = \langle \psi | H | \psi \rangle
= \int \psi^*(t,x) \, H \, \psi(t,x) dx
= \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} c_n^* c_m H_{n,m},
\end{equation}
where
\[
H_{n,m} \equiv \langle n | H | m \rangle
= \langle \varphi_n | H | \varphi_m \rangle.
\]
In general, the coefficients $c_n$ decrease very fast with $n$ and,
in some cases, the system allows only a finite number of states.
In that situation, one may consider the Schr\"odinger equation as a finite
dimensional linear system where the Hamiltonian is a matrix with
elements $H_{n,m}$. This is precisely the case for the examples
examined in section 4.
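For a time-independent, finite-dimensional Hamiltonian matrix, the expansion (\ref{ap1}) amounts to diagonalizing $H$ and attaching the phases $\e^{-i t E_n}$ to the coefficients. A minimal sketch (the random four-level Hermitian matrix below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (M + M.conj().T) / 2            # a Hermitian "Hamiltonian" matrix

E, Phi = np.linalg.eigh(H)          # eigenvalues E_n, eigenvectors as columns

def evolve(psi0, t):
    """psi(t) = sum_n c_n exp(-i t E_n) phi_n with c_n = <phi_n|psi0>."""
    c = Phi.conj().T @ psi0
    return Phi @ (np.exp(-1j * t * E) * c)

psi0 = np.zeros(d, dtype=complex)
psi0[0] = 1.0
```

Both the norm and the energy $\langle \psi | H | \psi \rangle$ are preserved by this propagation up to roundoff, as for the exact evolution.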
When the Hamiltonian is explicitly time-dependent, this procedure
is no longer valid. Instead, one may use alternative techniques, which
we now briefly review.
(i) {\it Spectral Decomposition}.
Let us assume that the system is perturbed with a
time-dependent potential, i.e., equation (\ref{Schr1}) takes the form
\begin{equation} \label{Schr-t}
i \frac{\partial}{\partial t} \psi(t,x) = \hat{H}(t) \psi(t,x) \equiv
(\hat{T} + \hat{V}(t)) \, \psi(t,x),
\end{equation}
where
\[
\hat{T} \psi \equiv -\frac{1}{2}\frac{\partial^2 \psi}{\partial x^2},
\qquad \hat{V}(t) \psi \equiv (V(x) + \tilde V(t,x) ) \, \psi.
\]
In this case we cannot use separation of variables. However, since $\{\varphi_n\}$
is a complete basis, we can still write the solution as
\begin{equation} \label{ap11}
\psi (t,x) \simeq \sum_{n=0}^{d-1} c_n(t) \, \e^{-i t E_n} \, \varphi_n(x),
\end{equation}
where $E_n$ and $\varphi_n$ are the exact
eigenvalues and eigenfunctions when
$\tilde V=0$, and the complex coefficients $c_n$ give the
probability amplitude of finding the system in the state $\varphi_n$
($\sum_n |c_n(t)|^2=1$ for all $t$). Then, substituting
(\ref{ap11}) into (\ref{Schr-t}) we obtain the matrix equation
\begin{equation} \label{lin1}
i \, \frac{d}{dt}{\bf c} (t) = {\bf H} (t) {\bf c} (t), \qquad \qquad
{\bf c} (0) = {\bf c}_0 ,
\end{equation}
where $ \ {\bf c}= (c_0,\ldots ,c_{d-1})^T \in \mathbb{C}^d \ $
and $ \, {\bf H} \in \mathbb{C}^{d\times d} \, $ is a
Hermitian matrix associated with the Hamiltonian
\[
\, ({\bf H}(t))_{ij}= \langle \varphi_i |
\hat{H}(t)-\hat{H}_0 | \varphi_j \rangle \, \e^{i(E_i-E_j)t}, \qquad
i,j=1,\ldots,d
\]
and $\hat{H}_0 = \hat{H}(t=0)$.
Given the initial wave function $\psi(0,x)$, the
components of ${\bf c}_0$ are determined by $c_{0,i}=\langle \varphi_i
| \psi(0,x) \rangle$.
Obviously, any complete basis can be used in this case, although
the norm of the matrix ${\bf H} (t)$ may depend on the
choice. In addition, the number of basis elements, i.e. the
minimum dimension $d$ necessary to obtain a sufficiently accurate
result, also depends on the chosen basis.
(ii) {\it Space discretization}.
This procedure exploits the structure of the Hamiltonian
$\hat{H}$ in (\ref{Schr-t}): $\hat{V} $ is diagonal in the coordinate space and $ \, \hat{T}
\, $ is diagonal in the momentum space.
Let us assume that the system is defined in
the interval $ \, x \in [x_0, x_f] $ with periodic boundary
conditions. We can then split this interval into $ d $ parts of
length $ \Delta x= (x_f-x_0)/d \, $ and consider $ \, c_n = \psi
(t,x_n) \, $ where $ x_n=x_0+n\Delta x, $ $ n=0,1,\ldots,d-1$.
This results in a finite-dimensional linear equation similar to
(\ref{lin1}), but with a different coefficient matrix $\mathbf{H}$.
Since $ \, \hat{V} $ is diagonal in the coordinate space and $ \,
\hat{T} \, $ is diagonal in momentum space, it is possible to use complex
Fast Fourier Transforms (FFTs) for evaluating the products ${\bf
H}{\bf c}$, where $ \, \hat{T} \psi (t,x_n) = \mathcal{F}^{-1} D_T
\mathcal{F} \psi (t,x_n) $, and $ \, D_T \, $ is a diagonal
operator.
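A minimal sketch of this pseudo-spectral evaluation of ${\bf H}{\bf c}$; the grid, the sample potential and the constants below are our own choices:

```python
import numpy as np

d, x0, xf = 128, 0.0, 2 * np.pi         # periodic grid (illustrative)
dx = (xf - x0) / d
x = x0 + dx * np.arange(d)
V = 0.5 * np.cos(x)                      # a sample potential (hypothetical)

k = 2 * np.pi * np.fft.fftfreq(d, d=dx)  # angular wave numbers of the grid
DT = 0.5 * k ** 2                        # diagonal of T in momentum space

def apply_H(c):
    """H c = V*c + F^{-1} D_T F c, with F the discrete Fourier transform."""
    return V * c + np.fft.ifft(DT * np.fft.fft(c))
```

Each application of $\mathbf{H}$ costs two FFTs, i.e. $\mathcal{O}(d \log d)$ operations instead of the $\mathcal{O}(d^2)$ of a dense matrix-vector product.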
\vspace*{0.3cm}
We thus see that whatever the procedure used (spectral decomposition or space
discretization), one ends up with a linear equation of the form
\begin{equation} \label{nse1}
i \frac{d\psi}{dt}(t) = H(t) \psi(t), \qquad \psi(0) = \psi_{0}
\end{equation}
where now $\psi(t)$ represents a complex vector with $d$ components
which approximates the (continuous) wave function. The
computational Hamiltonian $H(t)$ appearing in (\ref{nse1})
is thus a space discretization
(or other finite-dimensional model) of $\hat{H}(t) = \hat{T} +
\hat{V}(t)$. Numerical difficulties come mainly from the unbounded
nature of the Hamiltonian and the highly oscillatory behaviour of
the wave function.
It is at this point that numerical algorithms based on the
Magnus expansion, as formulated in the previous
sections, come into play for integrating in time the linear
system (\ref{nse1}). To put them in perspective, let us
introduce first some other numerical methods also used in this
context. Our exposition is largely based on the reference
\cite{lubich02ifq}.
\paragraph{The implicit midpoint rule}
The approximation to the solution of (\ref{nse1}) provided by
this scheme is implicitly defined by
\begin{equation} \label{nse2}
i \frac{\psi_{n+1} - \psi_n}{\Delta t} = H(t_{n + 1/2}) \, \frac{1}{2}
(\psi_{n+1} + \psi_n),
\end{equation}
where $t_{n+1/2} = \frac{1}{2}(t_{n+1} + t_n)$. Here and in the sequel,
for clarity, we have denoted by $\Delta t$ the
time step size and $t_n = n \Delta t$. Alternatively,
\begin{equation} \label{nse3}
\psi_{n+1} = r(-i \Delta t H(t_{n + 1/2})) \, \psi_n, \quad
\mbox{ with } \quad r(z) = \frac{1 + \frac{1}{2} z}{1 -
\frac{1}{2} z}.
\end{equation}
Observe that, as $r$ is nothing but the Cayley transform, the
numerical propagator is unitary and consequently the Euclidean
norm of the discrete wave function is preserved along the
evolution: $\|\psi_{n+1} \| = \|\psi_n\|$. This is a crucial
qualitative feature the method shares with the exact solution,
contrarily to other standard numerical integrators, such as
explicit Runge--Kutta methods. From a purely numerical point of
view, the algorithm is stable for any step size $\Delta t$.
Another useful property of this numerical scheme is
time-symmetry: exchanging $n$ with $n+1$ and $\Delta
t$ with $-\Delta t$ in (\ref{nse3}) yields the same numerical method again.
Equivalently, $r(-z) = r(z)^{-1}$, exactly as for the exponential
$\e^z$.
With respect to accuracy, it is not difficult to show that, if
$H(t)$ is bounded and sufficiently smooth, the error satisfies
\begin{equation} \label{nse4}
\|\psi_n - \psi(t_n)\| = \mathcal{O}(\Delta t^2)
\end{equation}
uniformly for $n \Delta t$ in a time interval $[0, t_f]$. In other
words, the implicit midpoint rule is a second-order method. It
happens, however, that the constant in the term
$\mathcal{O}(\Delta t^2)$ depends on bounds of $H'$ and
$H^{\prime\prime}$ and on the norm of the third derivative of the solution
$\psi$. Since, in general, the wave function is highly oscillatory
in time, this time derivative can become large, and so the use of
very small time steps is mandatory.
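One step of (\ref{nse2}) is just a linear solve with the shifted matrices $I \pm \frac{i \Delta t}{2} H(t_{n+1/2})$. A sketch with a hypothetical two-level $H(t)$ of our own choosing:

```python
import numpy as np

def H(t):
    """A sample time-dependent Hermitian matrix (illustrative only)."""
    return np.array([[1.0, 0.5 * np.cos(t)],
                     [0.5 * np.cos(t), -1.0]], dtype=complex)

def midpoint_step(psi, t, dt):
    """Implicit midpoint rule: (I + i dt/2 H) psi_{n+1} = (I - i dt/2 H) psi_n."""
    Hm = H(t + dt / 2)
    Id = np.eye(len(psi))
    return np.linalg.solve(Id + 0.5j * dt * Hm, (Id - 0.5j * dt * Hm) @ psi)

psi = np.array([1.0, 0.0], dtype=complex)
t, dt = 0.0, 0.1
for n in range(50):
    psi = midpoint_step(psi, t, dt)
    t += dt
```

Because $r$ is the Cayley transform, $\|\psi\|$ is preserved to roundoff for any step size.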
\paragraph{The exponential midpoint rule}
Another possibility to get approximate solutions of (\ref{nse1})
consists in replacing $r(z)$ by $\exp(z)$ in (\ref{nse3}):
\begin{equation} \label{nse5}
\psi_{n+1} = \exp(-i \Delta t \, H(t_{n+1/2})) \, \psi_n.
\end{equation}
Now, instead of solving systems of linear equations as previously,
one has to compute the exponential of a large matrix times a
vector at each integration step. In this respect, the techniques
reviewed in subsection \ref{exponen} can be efficiently implemented. The
exponential midpoint rule (\ref{nse5}) also provides a unitary
propagator and it is time-symmetric. In addition, the error
satisfies the same condition (\ref{nse4}), but now the constant in
the $\mathcal{O}(\Delta t^2)$ term is independent of the time
derivatives of $\psi$ under certain assumptions on the commutator
$[H(t),H(s)]$ \cite{hochbruck03omi}. As a consequence, much larger
time steps can be taken to achieve the same accuracy as with the
implicit midpoint rule.
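A sketch of (\ref{nse5}) in which the midpoint exponential is computed by diagonalizing the Hermitian matrix $H(t_{n+1/2})$; the two-level $H(t)$ is again a hypothetical example of ours:

```python
import numpy as np

def H(t):
    """A sample time-dependent Hermitian matrix (illustrative only)."""
    return np.array([[1.0, 0.5 * np.cos(t)],
                     [0.5 * np.cos(t), -1.0]], dtype=complex)

def exp_midpoint_step(psi, t, dt):
    """psi_{n+1} = exp(-i dt H(t_{n+1/2})) psi_n, with the exponential
    evaluated through the eigendecomposition H = Q diag(E) Q^dagger."""
    E, Q = np.linalg.eigh(H(t + dt / 2))
    return Q @ (np.exp(-1j * dt * E) * (Q.conj().T @ psi))
```

The propagator is exactly unitary, and the global error again decreases as $\Delta t^2$, but with a constant independent of the time derivatives of $\psi$.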
\paragraph{Integrators based on the Magnus expansion}
The method (\ref{nse5}) is a particular instance of a second order
Magnus method when the integral $\int_0^{\Delta t} H(s) ds$ is
replaced by the midpoint quadrature rule. In fact, we have already used it in
(\ref{mpoint}). Obviously, if higher order
approximations are considered, the accuracy can be enhanced a
great deal. This claim has to be properly justified, however,
since the order of the numerical methods based on the Magnus expansion has
been deduced when
$\|\Delta t H(t)\| \rightarrow 0$ and is obtained by studying the remainder
of the truncated Magnus series. In the Schr\"odinger equation, on
the other hand, one has to cope with discretizations of unbounded
operators, so in principle it is not evident how the previous results
on the order of accuracy apply in this context.
In \cite{hochbruck03omi}, Hochbruck and Lubich analyse
in detail the application of the fourth-order Magnus integrator
(\ref{or4GL}) to equation (\ref{nse1}), showing that it
works extremely well even with step sizes for which the corresponding
$\|\Delta t H(t)\|$ is large. In particular, the scheme
retains fourth order of accuracy in $\Delta t$
\emph{independently of the norm} of $H(t)$
when $H(t) = T + V(t)$, $T$ is a discretization of $-\frac{1}{2}
\frac{\partial^2}{\partial x^2}$ (with maximum eigenvalue
$E_{\mathrm{max}} \sim (\Delta
x)^{-2}$) and $V(t)$ is sufficiently smooth under the time step
restriction $\Delta t \sqrt{E_{\mathrm{max}}} \le Const.$ This is
so even when there is no guarantee that the Magnus series
converges at all.
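With $A_i = -i H(t_n + c_i \Delta t)$ evaluated at the Gauss--Legendre nodes $c_{1,2} = \frac12 \mp \frac{\sqrt{3}}{6}$, the fourth-order exponent is $\Omega = \frac{\Delta t}{2}(A_1+A_2) + \frac{\sqrt{3}\,\Delta t^2}{12}[A_2,A_1]$. In the sketch below (sample $H(t)$ of our own choosing), the exponential of the anti-Hermitian $\Omega$ is evaluated through the eigendecomposition of the Hermitian matrix $i\Omega$:

```python
import numpy as np

def H(t):
    """A sample time-dependent Hermitian matrix (illustrative only)."""
    return np.array([[2.0 * t, 1.0],
                     [1.0, -t]], dtype=complex)

C1, C2 = 0.5 - np.sqrt(3) / 6, 0.5 + np.sqrt(3) / 6   # Gauss-Legendre nodes

def magnus4_step(psi, t, dt):
    """Omega = -i dt (H1+H2)/2 - (sqrt(3)/12) dt^2 [H2,H1]; psi <- exp(Omega) psi."""
    H1, H2 = H(t + C1 * dt), H(t + C2 * dt)
    Omega = -0.5j * dt * (H1 + H2) \
            - (np.sqrt(3) / 12) * dt ** 2 * (H2 @ H1 - H1 @ H2)
    K = 1j * Omega                 # Hermitian, since Omega is anti-Hermitian
    E, Q = np.linalg.eigh(K)
    return Q @ (np.exp(-1j * E) * (Q.conj().T @ psi))

def propagate(n, tf=1.0):
    dt = tf / n
    psi, t = np.array([1.0, 0.0], dtype=complex), 0.0
    for _ in range(n):
        psi = magnus4_step(psi, t, dt)
        t += dt
    return psi
```

Halving $\Delta t$ should reduce the error roughly by a factor of $2^4 = 16$, as expected for a fourth-order scheme.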
\paragraph{Symplectic perspective}
The evolution operator corresponding to (\ref{Schr-t}) is not only unitary, but also symplectic
with canonical coordinates and momenta $\mbox{Re}(\psi)$ and $\mbox{Im}(\psi)$, respectively.
If we carry out a discretization in space, this symplectic structure is inherited by the
corresponding equation (\ref{lin1}). It makes sense, then, to write
${\bf c}= {\bf q} + i {\bf p}$ and consider the equations satisfied by ${\bf q, p}\in\mathbb{R}^d$, namely
\begin{equation}\label{SchrClas1}
\mathbf{q}' = \mathbf{H}(t) \mathbf{p}, \qquad \mathbf{p}' = -\mathbf{H}(t) \mathbf{q},
\end{equation}
which can be interpreted as the canonical equations corresponding to the Hamiltonian \cite{gray94chs}
\begin{equation}\label{SchrClas2}
\mathcal{H}(t,{\bf q, p}) = {\bf p}^T {\bf H}(t) {\bf p} + {\bf q}^T {\bf H}(t)
{\bf q}.
\end{equation}
Denoting ${\bf z=(q,p)}^T$, it is clear that
\[
{\bf z}' = \left( {\bf A}(t) + {\bf B}(t) \right)\, {\bf z}
\]
where
\begin{equation}\label{matMN}
{\bf A}(t) = \left( \begin{array}{cc}
0 & \;\; {\bf H}(t) \\
0 & 0
\end{array}
\right), \qquad \quad
{\bf B}(t) = \left( \begin{array}{cr}
0 & \;\; 0 \\
-{\bf H}(t) & 0
\end{array}
\right).
\end{equation}
For this system it is possible, therefore, to apply the commutator-free Magnus integrators
constructed in subsection \ref{cfmi}. In addition, one has
\[
{\bf [B,[B,[B,A]]]=[A,[A,[A,B]]]=0},
\]
and this property allows us to use
especially designed and highly efficient integration methods
\cite{blanes07smf}.
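The vanishing of these nested commutators is easy to verify numerically for the blocks (\ref{matMN}); the symmetric matrix standing in for ${\bf H}(t)$ at a frozen time below is random:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
Hm = rng.standard_normal((d, d))
Hm = (Hm + Hm.T) / 2                 # symmetric stand-in for H(t) at fixed t

Z = np.zeros((d, d))
A = np.block([[Z, Hm], [Z, Z]])      # block A(t) of (matMN)
B = np.block([[Z, Z], [-Hm, Z]])     # block B(t) of (matMN)

def comm(X, Y):
    return X @ Y - Y @ X
```

Both triple commutators vanish identically because each is a product of strictly triangular block matrices.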
\subsubsection{Time-independent Schr\"odinger equation}
Restricting ourselves to the time-independent Schr\"odinger
equation (\ref{time-indep}), we next illustrate how Magnus
integrators can in fact be used to compute the discrete
eigenvalues defined by the problem. Although only the
Schr\"odinger equation in a finite domain is considered,
\begin{equation} \label{tise1}
-\frac{d^2 \varphi}{dx^2} + V(x) \varphi = \lambda \varphi, \qquad x \in (a,b)
\end{equation}
the procedure can be easily adapted to other types of eigenvalue
problems, in which one has to find both $\lambda \equiv E$ and
$\varphi$. Here it is
assumed that the potential is smooth, $V \in C^m(a,b)$ and, for simplicity,
$\varphi(a)=\varphi(b)=0$.
Under these assumptions, it is well known that the eigenvalues are
real, distinct and bounded from below.
The problem (\ref{tise1}) can be formulated in the
special linear group $\mathrm{SL}(2)$,
\begin{equation} \label{tise2}
\frac{d \mathbf{y}}{dx} = \left( \begin{array}{ccr}
0 & \;\; & \ \ 1 \\
V(x)-\lambda & & 0
\end{array} \right) \mathbf{y}, \qquad
x \in (a,b), \qquad \mbox{ where } \quad
\mathbf{y} = (\varphi, d\varphi/dx)^T,
\end{equation}
so that the Magnus expansion can be applied in a natural way. As usual, rather than
approximating the fundamental solution of (\ref{tise2}) in the entire interval
$(a,b)$ by $\exp(\Omega)$, the idea is to partition the interval into $N$ small
subintervals, and then apply a conveniently discretized version of the Magnus
expansion. In this way, the convergence problem no longer restricts the size
$(b-a)$ \cite{moan98eao}.
For the sake of simplicity, let us consider the fourth-order method (\ref{or4GL}).
Writing
\[
V_{n,1} = V(x_{n} + (\frac{1}{2}-\frac{\sqrt{3}}{6})h), \qquad
V_{n,2} = V(x_{n} + (\frac{1}{2}+\frac{\sqrt{3}}{6})h),
\]
where $h = (b-a)/N$ and $x_n = a + h \, n$, we form
\[
\sigma_{n}(\lambda) = \left( \begin{array}{ccc}
-\frac{\sqrt{3}}{12} h^2 (V_{n,1}-V_{n,2}) & \;\;\; & \ \ \ h \\
\frac{1}{2} h (V_{n,1}+V_{n,2}) - h \lambda & &
\frac{\sqrt{3}}{12} h^2 (V_{n,1}-V_{n,2})
\end{array} \right),
\]
for $n=0,1,\ldots,N-1$. Then, the fourth-order approximation to
the solution of (\ref{tise2}) at $x=b$ is
\begin{equation} \label{tise3}
\mathbf{y}(b) = \e^{\sigma_{N-1}(\lambda)} \cdots
\e^{\sigma_{1}(\lambda)} \, \e^{\sigma_{0}(\lambda)}
\mathbf{y}(a)
\end{equation}
and the values of $\lambda$ are obtained from (\ref{tise3}) by using repeatedly
the expression of the exponential of a traceless matrix, eq. (\ref{expMat2}), and requiring
that $\varphi(a)=\varphi(b)=0$. The resulting nonlinear equation in $\lambda$ can be solved,
for instance, by Newton--Raphson iteration, which provides quadratic convergence
for starting values sufficiently near the solution \cite{moan98eao}.
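A compact sketch of this shooting procedure, using the exact exponential of a traceless $2\times 2$ matrix and, for simplicity, bisection instead of Newton--Raphson; we take the trivial potential $V \equiv 0$ on $(0,\pi)$, for which the exact eigenvalues are $\lambda_n = n^2$:

```python
import numpy as np

a, b, N = 0.0, np.pi, 40
h = (b - a) / N
C1, C2 = 0.5 - np.sqrt(3) / 6, 0.5 + np.sqrt(3) / 6   # Gauss-Legendre nodes

def V(x):
    return 0.0 * x          # V = 0: exact eigenvalues are n^2 on (0, pi)

def expm_traceless(s):
    """exp of a traceless 2x2 matrix: cosh(mu) I + sinh(mu)/mu * s,
    with mu^2 = s[0,0]^2 + s[0,1]*s[1,0] (= -det s, possibly negative)."""
    mu = np.sqrt(complex(s[0, 0] ** 2 + s[0, 1] * s[1, 0]))
    if abs(mu) < 1e-14:
        return np.eye(2) + s
    return (np.cosh(mu) * np.eye(2) + np.sinh(mu) / mu * s).real

def phi_at_b(lam):
    """Propagate y = (phi, phi') from a to b with y(a) = (0, 1)."""
    y = np.array([0.0, 1.0])
    for n in range(N):
        xn = a + n * h
        V1, V2 = V(xn + C1 * h), V(xn + C2 * h)
        s = np.array([[-np.sqrt(3) / 12 * h ** 2 * (V1 - V2), h],
                      [0.5 * h * (V1 + V2) - h * lam,
                       np.sqrt(3) / 12 * h ** 2 * (V1 - V2)]])
        y = expm_traceless(s) @ y
    return y[0]

def bisect(lo, hi, tol=1e-10):
    flo = phi_at_b(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * phi_at_b(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, phi_at_b(mid)
    return 0.5 * (lo + hi)
```

With $V \equiv 0$ each $\e^{\sigma_n}$ is a rotation and the scheme reproduces $\varphi(b;\lambda) = \sin(\pi\sqrt{\lambda})/\sqrt{\lambda}$ up to roundoff, so the computed lowest eigenvalue agrees with $\lambda_1 = 1$ to the bisection tolerance.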
Although by construction this procedure leads to a global order of
approximation $\mathcal{O}(h^p)$ if a $p$th-order Magnus method is
applied, it turns out that the error also depends on the magnitude
of the eigenvalue. Specifically, the error in a $p$th-order method
grows as $\mathcal{O}(h^{p+1} \lambda^{p/2-1})$ \cite{moan98eao},
and thus one expects poor approximations for large eigenvalues.
This difficulty can be overcome up to a point by analyzing the
dependence on $\lambda$ of each term in the Magnus series and
considering partial sums of the terms carrying the most
significant dependence on $\lambda$. For instance, it is possible
to design a sixth-order Magnus integrator for this problem with
error $\mathcal{O}(h^7 \lambda)$, which therefore behaves like a
fourth-order method when $h^2 \lambda \approx 1$, whereas the
standard sixth-order Magnus scheme, carrying an error of
$\mathcal{O}(h^7 \lambda^2)$, reduces to an order-two method
\cite{moan98eao}. In any case, getting accurate approximations
when $|\lambda| \rightarrow \infty$ is more problematic
\cite{jodar00soa}.
\subsection{Sturm--Liouville problems}
The system defined by (\ref{tise1}) with boundary conditions
$\varphi(a)=\varphi(b)=0$ is just one particular example of a
second order Sturm--Liouville problem \cite{pryce93nso,zettl05slt}. It is thus
quite natural to try to apply Magnus integrators to more general
problems within this class.
A second order Sturm--Liouville eigenvalue problem has the form
\begin{equation} \label{st-1}
\frac{d}{dx} \left( p(x) \frac{dy}{dx}(x) \right) + q(x) y(x) =
\lambda \, r(x) y(x) \qquad \mbox{ on } (a,b)
\end{equation}
with separated boundary conditions which commonly have the form
\begin{equation} \label{st-2}
A_1 y(a) + A_2 p(a) y^\prime(a) = 0 \qquad
B_1 y(b) + B_2 p(b) y^\prime(b) = 0
\end{equation}
for given constants $A_i, B_i$ and functions $p(x)$, $q(x)$ and $r(x)$.
Solving this problem means, of course,
determining the values $\lambda_n$ of $\lambda$ for which eq.
(\ref{st-1}) has a nontrivial (continuously differentiable
square integrable) solution $y_n(x)$ satisfying
equations (\ref{st-2}) \cite{zettl05slt,bailey78aso}.
These and other higher order Sturm--Liouville problems can be
recast as a linear matrix system of the form
\begin{equation} \label{st-3}
Y' = (\lambda B + C(x)) \, Y
\end{equation}
by transforming to the so-called \emph{compound matrix} or modified
Riccati variables \cite{greenberg98ota,jodar00soa}. Here $B$ is a constant matrix.
When generalizing
the above treatment based on the Magnus expansion to this problem,
there is one
elementary but important remark worth stating explicitly:
\emph{unless the differential equation (\ref{st-3}) has the same
large $\lambda$-asymptotics as some differential equation with
$x$-independent coefficients, it will be impossible to develop
a Magnus method which accurately approximates its solutions for large
$\lambda$} \cite{jodar00soa}. The reason is that a Magnus method
approximates the solution by a discrete solution calculated using a
formula of the form $Y(x_{n+1}) = \exp(\sigma_n(\lambda)) Y(x_n)$; in
particular, on the first step $(x_0,x_1)$, the differential equation
is approximated by one in which the coefficient matrix is replaced
by the $x$-independent matrix $\sigma_0(\lambda)/(x_1-x_0)$.
In consequence, attention should be restricted to systems for
which it is known that a suitable constant-coefficient system provides
the correct asymptotics. This is the case, in particular, for
equation (\ref{tise1}), and more generally for linear equations of order
$2n$ in which the $(2n-1)$st derivative is zero, such as
\[
(-1)^n y^{(2n)} + \sum_{j=0}^{2n-2} q_j(x) y^{(j)} = \lambda y.
\]
Here the asymptotics are determined by the equation
$(-1)^n y^{(2n)} = \lambda y$ \cite{naimark68ldo}. Even then, the
methods developed in \cite{moan98eao} for equation
(\ref{tise1}) and implemented for systems with matrices of general
size in \cite{jodar00soa} require a $\lambda$-dependent step size
restriction of the form $h \le \mathcal{O}(|\lambda|^{-1/4})$
in order to be defined. Nevertheless, the analysis carried out in
\cite{jodar00soa} shows that the fourth order Magnus integrator
based on a two-point Gaussian quadrature appears to offer significant
advantages over conventional methods based on power series and library
routines.
Magnus integrators have also been successfully applied in the
somewhat related problem of computing the Evans function for
spectral problems arising in the analysis of the linear
stability of travelling wave solutions to reaction-diffusion
PDEs \cite{aparicio05neo}. In this setting, Magnus integrators
possess some appealing features in comparison, for instance, with
Runge--Kutta schemes: (1) they are unconditionally stable; (2) their
performance is superior in highly oscillatory regimes and (3) their
step size can be controlled in advance. Items (2) and (3) are due
to the fact that error bounds for Magnus methods depend only
on low order derivatives of the coefficient matrix, not (as for
Runge--Kutta schemes) on derivatives of the solution. Therefore,
performance and, correspondingly, the choice of optimal step size
remain uniform over any bounded region of parameter space
\cite{aparicio05neo}.
\subsection{The differential Riccati equation}
Let us consider now the two-point boundary value
problem in the $t$ variable defined by the linear differential equation
\begin{equation}\label{BVP_Riccati}
{\bf y}' \equiv \left( \begin{array}{c}
{\bf y}_1' \\ {\bf y}_2'
\end{array} \right) = \left( \begin{array}{cc}
A(t) & \ \ B(t) \\
C(t) & D(t)
\end{array} \right) \
\left( \begin{array}{c}
{\bf y}_1 \\ {\bf y}_2
\end{array} \right), \qquad 0<t<T
\end{equation}
with separated boundary conditions
\begin{equation}\label{sbc}
(K_{11} \; \; K_{12}) \left( \begin{array}{c}
{\bf y}_1 \\ {\bf y}_2
\end{array} \right)_{t=0} = \; \gamma_1, \qquad
(K_{21} \;\; K_{22}) \left( \begin{array}{c}
{\bf y}_1 \\ {\bf y}_2
\end{array} \right)_{t=T} = \, \gamma_2.
\end{equation}
Here $ A \in \mathbb{C}^{q \times q} \, $, $
B \in \mathbb{C}^{q \times p} $, $C \in \mathbb{C}^{p \times q} \, $, $ D \in
\mathbb{C}^{p \times p} \, $, whereas
${\bf y}_1, \gamma_2\in \mathbb{C}^{q}$, ${\bf y}_2,\gamma_1 \in
\mathbb{C}^{p}$ and the matrices $K_{ij}$ have appropriate
dimensions. We next introduce the time-dependent change of variables
(or picture) ${\bf y}=Y_0(t) \, {\bf w}$, with
\begin{equation}\label{cov1}
Y_0(t) = \left( \begin{array}{cc}
I_q & \ \ 0 \\
X(t) & I_p
\end{array} \right)
\end{equation}
and choose the matrix $ X\in \mathbb{C}^{p \times q}$ so as to ensure that, in the
new variables $\mathbf{w} = Y_0^{-1}(t) \mathbf{y}$, the system assumes the partly
decoupled structure \cite{dieci92nio}
\begin{equation} \label{dececri}
{\bf w}' \equiv \left( \begin{array}{c}
{\bf w}_1' \\ {\bf w}_2'
\end{array} \right) = \left( \begin{array}{cc}
A+BX & \ \ B \\
O & D-XB
\end{array} \right) \
\left( \begin{array}{c}
{\bf w}_1 \\ {\bf w}_2
\end{array} \right),
\end{equation}
together with the corresponding boundary conditions for ${\bf w}$. It turns out that
this is possible if and only if $X(t)$ satisfies the so-called differential Riccati
equation \cite{dieci88art}
\begin{equation} \label{Riccati}
X' = C(t) + D(t) X - X A(t) - X B(t) X, \qquad X(0) = X_0
\end{equation}
for some $X_0$. By requiring
\begin{equation} \label{icrica1}
X_0=-K_{12}^{-1}K_{11},
\end{equation}
the boundary conditions (\ref{sbc}) also decouple as
\begin{equation} \label{sbc12}
(O \;\; K_{12}) {\bf w}(0) = \gamma_1, \qquad
\big( K_{21}+K_{22}X(T) \;\;\; K_{22} \big) {\bf w}(T) = \gamma_2.
\end{equation}
Here we assume without loss of generality that $K_{12}$ is
invertible. In this way, the original boundary value problem can
be solved as follows \cite{dieci92nio,dieci88art}: (i) solve
equation (\ref{Riccati}) with initial condition (\ref{icrica1})
from $t=0$ to $t=T$; (ii) solve the $\mathbf{w}_2$-equation in
(\ref{dececri}) and (\ref{sbc12}), also from zero to $T$; (iii)
solve the $\mathbf{w}_1$-equation in (\ref{dececri}) from $t=T$ to
$t=0$ and recover ${\bf y}=Y_0(t) \, {\bf w}$. In other words, the
solution of the original two-point boundary value problem can be
obtained by solving a sequence of three different initial value
problems, one of which involves the nonlinear equation
(\ref{Riccati}). Obviously, steps (ii) and (iii) can be solved
using numerical integrators based on the Magnus expansion. It may
perhaps be more surprising that these algorithms can also
be used to integrate the nonlinear Riccati equation in step (i).
Although the boundary value problem (\ref{BVP_Riccati}) is a convenient way to introduce
the differential Riccati equation (\ref{Riccati}), this equation arises in many fields
of science and engineering, such as linear quadratic
optimal control, stability theory, stochastic control,
differential games, etc. Accordingly, it has received
considerable attention in the literature, focused both on its theoretical aspects
\cite{bittanti91tre,reid72rde}
and on its numerical treatment \cite{dieci92nio,kenney85nio,schiff99ana}.
In order to apply Magnus methods to solve numerically
the Riccati equation, we first apply the transformation
\begin{equation} \label{trans-ri1}
X(t) \ = \ V(t) \, W^{-1}(t),
\end{equation}
with $V \in \mathbb{C}^{p \times q}$, $W \in \mathbb{C}^{q \times q}$
and $V(0)=X_0, W(0)=I_q$, in the region where $ W(t) $ is
invertible. Then eq. (\ref{Riccati}) is equivalent to the linear
system
\begin{equation}\label{RiccatiLin1}
\begin{array}{c}
\displaystyle
Y' \, = \, S(t) \, Y(t) \, , \qquad Y(0) \, = \,
\left( \begin{array}{c} I_{q} \\ X_{0}
\end{array} \right) \,
\end{array}
\end{equation}
with
\begin{equation}\label{RiccatiLin2}
\begin{array}{c}
\displaystyle
Y(t) = \left( \begin{array}{c}
W(t) \\ V(t)
\end{array} \right), \qquad \qquad
S(t) = \left( \begin{array}{cc}
A(t) & \ \ B(t) \\
C(t) & D(t)
\end{array} \right)
\end{array}
\end{equation}
so that the previous Magnus integrators for linear problems can be
applied here. At first sight this system looks similar to
(\ref{BVP_Riccati}), but now we are dealing with an initial value
problem and $Y$ is a matrix instead of a vector.
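As an illustration, the following Python sketch (our own; the function name and the particular Riccati sign convention $X' = C(t) + D(t)X - XA(t) - XB(t)X$, which is the one consistent with (\ref{RiccatiLin1})--(\ref{RiccatiLin2}), are assumptions) integrates the linear system with the exponential midpoint rule, a second-order Magnus integrator, and recovers $X = VW^{-1}$:

```python
import numpy as np
from scipy.linalg import expm

def riccati_magnus(A, B, C, D, X0, T, N=100):
    """Integrate X' = C(t) + D(t)X - XA(t) - XB(t)X, X(0) = X0, via the
    linearization Y' = S(t)Y with Y = [W; V], X = V W^{-1}, advancing Y
    with the exponential midpoint (second-order Magnus) rule."""
    p, q = X0.shape
    h = T / N
    Y = np.vstack([np.eye(q), X0])          # Y(0) = [I_q; X0]
    for n in range(N):
        tm = (n + 0.5) * h                  # midpoint of the current step
        S = np.block([[A(tm), B(tm)], [C(tm), D(tm)]])
        Y = expm(h * S) @ Y
    W, V = Y[:q, :], Y[q:, :]
    return V @ np.linalg.inv(W)             # recover X = V W^{-1}
```

For constant coefficients each step is exact; e.g.\ the scalar case $A=D=0$, $B=C=1$ gives $x' = 1 - x^2$, whose solution from $x(0)=0$ is $x(T)=\tanh T$. In the symmetric case of item (i) below ($D=-A^T$, with $B$, $C$ symmetric) the computed approximation stays symmetric.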
When dealing in general with the differential Riccati equation (\ref{Riccati}), it is
meaningful to distinguish the following three cases:
\begin{itemize}
\item[(i)] The so-called symmetric Riccati equation, which corresponds to $q=p$,
$D(t) = -A(t)^{T}$ real, and $B(t)$, $C(t)$ real and symmetric
matrices. In this case, the solution satisfies $X^T=X$ (provided the initial condition $X_0$ is symmetric). It is straightforward to show
that this problem is equivalent to the treatment of the
generalized time-dependent harmonic oscillator, described by the
Hamiltonian function
\[
H = \frac12 {\bf p}^T B(t){\bf p} +
{\bf p}^T A(t){\bf q} -
\frac12 {\bf q}^T C(t) {\bf q}.
\]
The approximate solution attained by Magnus integrators when applied to
(\ref{RiccatiLin1})-(\ref{RiccatiLin2}) can be seen as the exact
solution corresponding to a perturbed symplectic matrix $\tilde{S}(t)\simeq
S(t)$. In other words, we are solving exactly a perturbed Hamiltonian system
so that the approximate solution $\tilde{X}$ shares several properties of the
exact solution, in particular $\tilde{X}^T=\tilde{X}$.
\item[(ii)] The linear non-homogeneous problem
\begin{equation}\label{linear-non-homog}
X' = D(t) X + C(t)
\end{equation}
corresponds to the particular case $A=0$ and $B=0$ in
(\ref{Riccati}).
\item[(iii)] The problem
\begin{equation}\label{isospectral}
X' = D(t) X + X A(t)
\end{equation}
is recovered from (\ref{Riccati}) by taking $C=0$ and $B=0$. It has been treated in
\cite{iserles01ame} by developing an \emph{ad hoc} Magnus-type expansion. Notice that
the case $p=q$, $D=-A$ corresponds to the linear isospectral system (\ref{nde.4}).
\end{itemize}
\subsection{Stochastic differential equations}
In recent years the use of stochastic differential equations (SDEs) has
become widespread in the simulation of random phenomena appearing
in physics, engineering, economics, etc., such as turbulent diffusion,
polymer dynamics and investment finance \cite{burrage04nmf}. Although
models based on SDEs can offer a more realistic representation of the
system than ordinary differential equations, the design of effective numerical
schemes for solving SDEs is, in comparison with ODEs, a less developed field
of research. Notwithstanding this fact, new classes of integration
methods have recently been constructed which automatically incorporate
conservation properties possessed by the SDE. Since some of the methods are based precisely
on the Magnus expansion, we briefly review here their main features, and refer
the reader to the more advanced literature on the subject
\cite{burrage04nmf,kloeden92nso,platen99ait}.
An SDE in its general form is usually written as
\begin{equation} \label{sde1}
dy(t) = g_0(t,y(t)) \, dt + \sum_{j=1}^d g_j(t,y(t)) \, dW_j(t), \qquad y(0) = y_0, \qquad
y \in \mathbb{R}^m,
\end{equation}
where $g_j$, ($j \ge 0$), are $m$-vector-valued functions. The function $g_0$ is the deterministic
continuous component (called the \emph{drift coefficient}), the $g_j$, ($j \ge 1$), represent
the stochastic continuous components (the \emph{diffusion coefficients}) and $W_j$ are
$d$ independent Wiener processes. A Wiener process $W$ (also called Brownian motion)
is a stochastic process \cite{burrage04nmf} satisfying
\[
W(0) = 0, \qquad E[W(t)] = 0, \qquad \mathrm{Var}[W(t)-W(s)] = t-s, \quad t > s
\]
which has independent increments on non-overlapping intervals. In other words, $W(t)$
is normally distributed with mean (expectation value) zero and variance $t$.
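As a minimal illustration (a Python sketch of our own devising, not part of the cited references), a discrete Wiener path can be sampled from independent $N(0,h)$ increments, and the stated mean and variance properties checked on a sample of paths:

```python
import numpy as np

def wiener_path(T, N, rng):
    """Sample W(t_i) on the grid t_i = i*T/N, i = 0..N, with W(0) = 0,
    from independent Gaussian increments of variance h = T/N."""
    h = T / N
    dW = rng.normal(0.0, np.sqrt(h), size=N)
    return np.concatenate([[0.0], np.cumsum(dW)])
```

Averaging the endpoints $W(T)$ over many sampled paths reproduces $E[W(T)]\approx 0$ and $\mathrm{Var}[W(T)]\approx T$ up to sampling error.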
Equation (\ref{sde1}) can be written in integral form as
\begin{equation} \label{sde2}
y(t) = y_0 + \int_0^t g_0(s,y(s)) \, ds + \sum_{j=1}^d \int_0^t g_j(s,y(s)) \, dW_j(s).
\end{equation}
The $d$ integrals in (\ref{sde2}) cannot be considered as Riemann--Stieltjes integrals, since
the sample paths of a Wiener process are not of bounded variation. In fact, if different choices
are made for the point $\tau_i$ (in the subintervals $[t_{i-1},t_i]$ of a given partition) where
the function is evaluated, then the approximating sums for each $g_j$,
\begin{equation} \label{sde3}
\sum_{i=1}^N g_j(\tau_i, y(\tau_i)) (W_j(t_i) - W_j(t_{i-1})), \qquad
\tau_i = \theta t_i + (1- \theta) t_{i-1},
\end{equation}
converge (in the mean-square sense) to different values of the integral, depending
on the value of $\theta$ \cite{burrage99hso}. Thus, for instance,
\[
\int_a^b W(t) dW(t) = \frac{1}{2} (W^2(b) - W^2(a)) + (\theta - \frac{1}{2})(b-a).
\]
If $\theta=0$, then $\tau_i = t_{i-1}$ (the left-hand point of each subinterval) and the resulting
integral is called an It\^o integral; if $\theta=1/2$ (so that the midpoint is used instead), one has
a Stratonovich integral. These are the two main choices and, although they are related,
the particular choice depends ultimately on the nature of the process to be modeled
\cite{burrage99hso}. It can be shown that the Stratonovich calculus
satisfies the Riemann--Stieltjes
rules of calculus, and thus it is the natural choice here.
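The $\theta$-dependence of the sums (\ref{sde3}) is easy to observe numerically. In the Python sketch below (our own illustration; the helper name is arbitrary), $W$ is sampled on a grid of spacing $h/2$ so that the left-point ($\theta=0$, It\^o) and midpoint ($\theta=1/2$, Stratonovich) sums for $\int_0^T W\,dW$ can both be formed on the coarser grid of spacing $h$; their difference approaches $T/2$, in agreement with the formula above:

```python
import numpy as np

def ito_strat_sums(T, N, rng):
    """Left-point (Ito) and midpoint (Stratonovich) approximating sums
    for int_0^T W dW over a partition with N subintervals of length h,
    using a sample of W on the twice-finer grid of spacing h/2."""
    h = T / N
    dW = rng.normal(0.0, np.sqrt(h / 2), size=2 * N)
    W = np.concatenate([[0.0], np.cumsum(dW)])   # W on the fine grid
    left, mid, right = W[0:-1:2], W[1::2], W[2::2]
    incr = right - left                          # W(t_i) - W(t_{i-1})
    return np.sum(left * incr), np.sum(mid * incr), W[-1]
```

For large $N$ the It\^o sum is close to $\tfrac12 W^2(T) - \tfrac12 T$ and the Stratonovich sum to $\tfrac12 W^2(T)$.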
When dealing with numerical methods for solving (\ref{sde1}), there are two ways of
measuring accuracy \cite{burrage04nmf}. The first is \emph{strong convergence}, essential when the
aim is to get numerical approximations to the trajectories which are close to the exact solution.
The second is \emph{weak convergence}, when only certain moments of the solution are of
interest. Thus, if $\hat{y}_n$ denotes the numerical approximation to $y(t_n)$ after $n$
steps with constant step size $h = (t_n - t_0)/n$, then the numerical solution
$\hat{y}$ converges strongly to the exact solution $y$
with strong global order $p$ if there exist $C>0$ (independent of $h$) and $\delta > 0$ such
that
\[
E[\| \hat{y}_n - y(t_n) \|] \le C h^p, \qquad h \in (0,\delta).
\]
It is worth noticing that $p$ can be fractional, since the root mean-square order of the
Wiener process is $h^{1/2}$. One of the simplest procedures for solving (\ref{sde1}) numerically
is the so-called Euler--Maruyama method \cite{maruyama55cmp},
\begin{equation} \label{sde4}
y_{n+1} = y_n + \sum_{j=0}^d J_j g_j(t_n,y_n),
\end{equation}
where
\[
h = t_{n+1} - t_n, \quad J_0 = h, \quad J_j = W_j(t_{n+1}) - W_j(t_n), \quad j=1,\ldots,d.
\]
This scheme turns out to be of strong order $1/2$. Here the $J_j$ can be computed
as $\sqrt{h} N_j$, where the $N_j$ are $N(0,1)$ normally distributed independent
random variables \cite{burrage99hso}.
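A direct transcription of (\ref{sde4}) might look as follows (a Python sketch with our own naming conventions; the list $g$ collects the drift $g_0$ and the diffusions $g_1,\dots,g_d$):

```python
import numpy as np

def euler_maruyama(g, y0, T, N, rng):
    """Scheme (sde4): y_{n+1} = y_n + sum_j J_j g_j(t_n, y_n), with
    J_0 = h and J_j = sqrt(h) * N_j for j >= 1."""
    h = T / N
    y = np.array(y0, dtype=float)
    t = 0.0
    for n in range(N):
        # J_0 = h for the drift; J_j = W_j(t_{n+1}) - W_j(t_n) ~ N(0, h)
        J = np.concatenate([[h], np.sqrt(h) * rng.standard_normal(len(g) - 1)])
        y = y + sum(Jj * gj(t, y) for Jj, gj in zip(J, g))
        t += h
    return y
```

In the noise-free case ($d=0$) the scheme reduces to the explicit Euler method for the ODE $y'=g_0(t,y)$.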
For the general non-autonomous linear Stratonovich problem defined by
\begin{equation} \label{sde5}
dy = G_0(t) y \, dt + \sum_{j=1}^d G_j(t) \, y \, dW_j, \qquad y(0) = y_0 \in \mathbb{R}^m
\end{equation}
the Magnus expansion for the deterministic case can be extended in a
quite straightforward way. It is well worth noticing
that in equation (\ref{sde5}), even when the functions $G_j$ are constant, there is no explicit solution
\cite{kloeden92nso}, unless all the $G_j$, $j \ge 0$, commute with one another, in which case it holds
that
\begin{equation} \label{sde6}
y(t) = \exp \left( G_0 \, t + \sum_{j=1}^d G_j W_j(t) \right) y_0.
\end{equation}
In many modeling situations, however, there is no reason to expect that the functions $G_j$ associated
with the Wiener processes commute. If for simplicity we only consider the autonomous case and
write
\[
G(t) \equiv G_0 \, dt + \sum_{j=1}^d G_j \, dW_j(t),
\]
then (\ref{sde5}) can be expressed as
\[
dy = G(t) \, y, \qquad y(0) = y_0
\]
and thus one can formally apply the Magnus expansion to this
equation to get $y(t) = \exp(\Omega(t)) y_0$. The first term in
the series reads in this case
\[
\int_0^t G(s) \, ds \equiv \int_0^t G_0 \, ds + \sum_{j=1}^d \int_0^t G_j \, dW_j(s) =
G_0 \, t + \sum_{j=1}^d G_j J_j,
\]
where now
\[
J_j = \int_0^t dW_j(s) = W_j(t) - W_j(0).
\]
By inserting these expressions into the recurrence associated with the Magnus series, Burrage
and Burrage \cite{burrage99hso} show that
\begin{eqnarray} \label{sde7}
\Omega(t) & = & \sum_{j=0}^d G_j J_j + \frac{1}{2} \sum_{i=0}^d \sum_{j=i+1}^d [G_i, G_j]
(J_{ji} - J_{ij}) \\
& & + \sum_{i=0}^d \sum_{k=0}^d \sum_{j=k+1}^d [G_i,[G_j,G_k]]
\left( \frac{1}{3} (J_{kji} - J_{jki}) + \frac{1}{12} J_i (J_{jk} - J_{kj}) \right) + \cdots, \nonumber
\end{eqnarray}
where the multiple Stratonovich integrals are defined by
\begin{equation} \label{sde8}
J_{j_1 j_2 \cdots j_l}(t) = \int_0^t \int_0^{s_l} \cdots \int_0^{s_2} dW_{j_1}(s_1) \cdots
dW_{j_l}(s_l), \qquad j_i \in \{0,1,\ldots,d\}.
\end{equation}
Since not all the Stratonovich integrals are independent, one has
to compute only $d(d+1)(d+5)/6$ stochastic integral evaluations to achieve strong order 1.5 with
the expression (\ref{sde7}) \cite{burrage99hso}. If, on the other hand, $\Omega(t)$ is truncated
after the first set of terms, then the resulting numerical approximation
\[
y(t) = \exp \left( \sum_{j=0}^d G_j J_j \right) y_0
\]
has strong order $1/2$, but leads to smaller error coefficients than the Euler--Maruyama method
(\ref{sde4})
\cite{burrage99hso}. Furthermore, the error becomes smaller as the matrices $G_i$ come closer
to commuting with one another, and the scheme preserves the underlying structure of the problem.
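To make this concrete, the following Python sketch (our own illustration, not code from the cited references) implements the order-$1/2$ scheme $y_{n+1} = \exp\big(\sum_{j=0}^d G_j J_j\big)\,y_n$ for constant matrices $G_j$. When all the $G_j$ are antisymmetric, for instance, each step is an orthogonal matrix and $\|y\|$ is preserved exactly, which is the kind of structure preservation alluded to above:

```python
import numpy as np
from scipy.linalg import expm

def magnus_sde(G, y0, T, N, rng):
    """Strong order-1/2 Magnus scheme y_{n+1} = expm(G0*h + sum_j Gj*dWj) y_n
    for the linear Stratonovich SDE dy = G0 y dt + sum_j Gj y dWj,
    with G = [G0, G1, ..., Gd] constant matrices."""
    h = T / N
    y = np.array(y0, dtype=float)
    for n in range(N):
        dW = np.sqrt(h) * rng.standard_normal(len(G) - 1)
        Om = G[0] * h                       # truncated Magnus series:
        for Gj, dWj in zip(G[1:], dW):      # keep only the first term
            Om = Om + Gj * dWj
        y = expm(Om) @ y
    return y
```

If all the $G_j$ commute, the composition of steps collapses to the exact solution (\ref{sde6}).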
One should notice at this point that equation (\ref{sde1}) (or, in the linear case, equation (\ref{sde5})),
has formally the same structure as the nonlinear ODE (\ref{cf1}) appearing in control theory. Therefore,
the formalism developed there to get the Chen--Fliess series can be applied here with the
alphabet $I = \{ 0, 1, \ldots, d\}$ and the integrals
\[
\left( \int_0 \mu \right)(t) = \int_0^t \mu(s) \, ds, \qquad
\left( \int_i \mu \right)(t) = \int_0^t \mu(s) \, dW_i(s), \quad i \ge 1,
\]
since the Stratonovich integrals satisfy the integration by parts rule.
In other words, one can obtain the corresponding Magnus expansion for arbitrary (linear or nonlinear)
stochastic differential equations simply by following the same procedure as for deterministic ODEs.
With respect to nonlinear Stratonovich stochastic differential equations, it should be remarked that
the use of Lie algebraic techniques as well as the design of Lie group methods for obtaining
strong approximations when the solution evolves on a smooth manifold
has received considerable attention in the recent literature
\cite{lord06esi,malham07slg,misawa01ala}.
\section{Physical applications}
From previous sections it should be clear that the ME has a strong
bearing on both Classical and Quantum Mechanics. As far as
Classical Mechanics is concerned, this has been shown most explicitly
in section \ref{GNLM}. Quantum Mechanics, in turn, has
been repeatedly invoked as a source of applications, among
others, in sections \ref{PLT} and \ref{section4}. In this section
we present, in a very schematic way and with no claim of completeness,
some applications of the ME in different areas of the physical
sciences. This will show that over the years the ME has been one of
the preferred options for dealing with equation (\ref{laecuacion})
which, under different guises, pervades the entire field of
Physics. In the works mentioned here, almost exclusively
analytical methods are used and, in general, one must recognize
that in most, if not all, of the cases listed only the first two orders
of the expansion have been considered. In some especially simple
applications, due to particular algebraic properties of the
operators involved, this happens to be exact. Only with the more
recent advent of numerical applications, as has been
emphasized in sections \ref{section5} and \ref{animag}, has the
expansion been carried out in a more systematic way to higher orders.
\subsection{Nuclear, atomic and molecular physics}
\label{sec:nucatom}
As far as we know the first physical application of ME dates back to
1963. Robinson \cite{robinson63mce} published a brand new formalism
to investigate multiple Cou\-lomb excitations of deformed nuclei. As a
matter of fact, he states explicitly that only after completing
his work did he discover the ME. His derivation of the ME formulas is
certainly worth reading.
The Coulomb excitation process yields information about the low
lying nuclear states. Prior to Robinson's work, the theory was
essentially based on perturbation expansions, which require that the
bombarding energy be kept so low that no nuclear reaction takes
place. Even worse, if heavier ions are used as projectiles, the
electric field exerted on the target nucleus is so strong that
perturbation methods fail.
The work by Robinson improved what was at that time called the
\emph{sudden approximation}, which is equivalent to the assumption
that all nuclear energy levels are degenerate. Results are reported
in that reference for rotational and vibrational nuclei.
As representatives of the applications of ME in the field of Atomic
Physics we mention several types of atomic collisions. The ME is
used in \cite{eichler77maf} to derive the transition amplitude and
the cross section for K-shell ionization of atoms by heavy-ion
impact. This is an important process in heavy-ion physics. The
theoretical investigations of these reactions always assumed that
the projectile is a relatively light ion such as a proton or an
$\alpha$ particle. The use of the ME allowed the studies to be extended to
the ionization of light target atoms by much heavier projectile
ions.
In \cite{wille81mef,wille86moi} the ME is applied to study the
time-evolution of rotationally induced inner-shell excitation in
atomic collisions. In this context the internuclear motion can be
treated classically and the remaining quantum-mechanical problem
for the electronic motion is then time-dependent. In particular,
in \cite{wille81mef} explicit results for Ne$^{+}$Ne collisions
are given as well as a study of the convergence properties of ME
with respect to the impact parameter.
The ME is applied in \cite{hyman85dma} to the theoretical study of
electron-atom collisions, involving many channels coupled by
strong, long-range forces. Then, as a test case, the theory is
applied to electron-impact excitation of the resonance transitions
of Li, Na and K. Computations up to second order are carried out
and the cross sections found are in good agreement with
experimental data for the intermediate-energy range.
The following examples illustrate the use of ME in Molecular
Physics. In \cite{cady74rsl} it is applied for the first time to the
theory of the pressure broadening of rotational spectra. Unlike the
previous approaches to the problem, the S-matrix obtained is
unitary. As a consequence, the relative contributions to the
linewidth of the attractive and repulsive anisotropy terms in the
interaction potential may be calculated.
Floquet theory is applied in \cite{Milfeld83sea} to systems periodic
in time, within the semiclassical approximation of the
radiation--quantum-molecule interaction in an intense field. The
paper contains an interesting discussion about the appropriateness
of the Schr\"odinger and Interaction pictures. One- and two-photon
transition probabilities are obtained up to second order in the ME.
Notably, formulas through fifth order in the ME are also given, albeit in a less
symmetrical form.
The applicability of the ME to the multiphoton excitation of a
sparse level system, for which the rotating wave approximation is
not applicable, is explored in \cite{schek81aot}. This
reference provides a method of treating the time-evolution of a
pumped molecular system in the low energy region, which is
characterized by a sparse distribution of bound vibrational states.
\subsection{Nuclear magnetic resonance: Average Hamiltonian
theory}
\label{sec:NMR}
By far this is the field where the ME has been
most systematically used, so we treat it separately. From
elementary quantum mechanics it is known that a constant magnetic
field breaks the degeneracy of the energy levels of an atomic
nucleus with spin. If the nuclear spin is $s$, then $2s+1$ sublevels
appear. In a sample these states are occupied according to the Boltzmann
distribution, with exponentially decreasing populations. When a
time-dependent radio-frequency electromagnetic field of appropriate
frequency is applied then energy can be absorbed by certain nuclei
which are consequently promoted to higher levels. This is the
physical phenomenon of Nuclear Magnetic Resonance (NMR).
It was Evans \cite{evans68osa} and Haeberlen and Waugh
\cite{haeberlen68cae} who first applied the ME to NMR. Since that
time, the ME has been instrumental in the development of improved
techniques in NMR spectroscopy \cite{burum81meg}.
The major advantage of NMR is the possibility of modifying the
nuclear spin Hamiltonian almost at will and to adapt it to the
needs of the problem to be solved \cite{ernst86pon}. This
manipulation requires an external perturbation of the system that
can be either time-independent (changes of temperature, pressure,
solvents, etc.) or time-dependent (sample spinning, pulsed
radio-frequency fields). In the latter context, the concept of
average Hamiltonian provides an elegant description of the effects
of a time-dependent perturbation applied to the system. It was
originally introduced in NMR by Waugh \cite{ernst86pon,waugh82tob}
to explain the effects of multiple-pulse sequences.
The basic idea of average Hamiltonian theory, for a system governed
by $H(t)$, consists in describing the effective evolution within a
fixed time interval by an average Hamiltonian $\overline{H}$. The
theory states that this is always possible provided $H(t)$ is
periodic. The average Hamiltonian depends, however, on the beginning
and the end of the time interval observed. It is precisely this average
Hamiltonian $\overline{H}$ that is obtained by means of the ME.
When the total Hamiltonian splits into a time-independent and a
time-dependent piece, $H(t)=H_0+H_1(t)$, with $H_1(t)$ periodic, an
interesting new picture is used, labeled the \emph{toggling frame}. It
is certainly reminiscent of the Interaction Picture defined in equation (\ref{GInt}) but
is rather different. In (\ref{Ufac}) the operator $G(t)$ associated
to the toggling frame is given by the time-ordered expression
\begin{equation}\label{toggling}
G(t)= \mathcal{T} \left( \exp{\int_0^t \tilde H_1(s)\, ds} \right)
\end{equation}
and the key point here is whether the formal time-ordering can be
evaluated explicitly.
As already mentioned, the interplay between NMR and the ME has been
fruitful over the years and has acted in both directions. To show that
it is still alive we quote two recent papers directly dealing with
this mutual interaction. In \cite{vandersypen04ntf} the relevance of the ME,
through NMR, for the new field of quantum information processing and
computing is envisaged. The authors of \cite{veshtort06sea} have recently
explored the fourth and sixth orders of the ME to design a software
package for the simulation of NMR experiments. Although their
results are not yet conclusive, their work shows the vitality of the
ME.
\subsection{Quantum Field Theory and High Energy Physics}
The starting point of any quantum field theory (QFT) calculation
is again equation (\ref{laecuacion}) which is conventionally
treated by time-dependent perturbation theory. So the first
question which arises is the connection between ME and Dyson-type
series. This has already been dealt with in subsection
\ref{sec2.4}. The main advantage of the former is, as has
already been repeatedly pointed out, that the unitary character of the
evolution operator is preserved at all orders of approximation. In
the historical development of QFT it was, however, Dyson's approach
that was followed. The loss of unitarity was not thought to be of
great relevance compared with the problems presented by the
infinities appearing throughout the theory. Once the idea of
renormalization was introduced, this troublesome aspect of the theory
was also brought under control. The results were, from the point of view of the
calculation of observable quantities, an unprecedented success:
the agreement between experimental results and their theoretical
counterparts was impressive.
It is thus no wonder that alternatives to the Dyson series, such as the ME, did not
gain popular acceptance. Over the years, however, there have been
interesting developments involving the ME in the context of field
theory. In particular, its use has been shown to imply a re-ordering of
terms in the calculations in such a way that some infinities do
not appear, thus making the introduction
of counterterms in the Hamiltonian unnecessary. This is what happens, for example, in \cite%
{stefanovich01qft}, where models are built in which ultraviolet
divergences appear neither in the Hamiltonian nor in the S-matrix.
In principle the results are valid for relativistic field theories
with any particle content and with minimal assumptions about the
form of the interaction.
ME as an alternative to conventional perturbation theory for
quantum fields has also been studied in \cite{dahmen86grf} where
normal products, Wick theorem and the like are used to deduce
graphical rules \textit{\`{a} la} Feynman for the terms $\Omega
_{i}$ for any value of $i$. This has proved helpful \cite
{dahmen88pao} in the treatment of infrared divergences for
some QED processes such as the scattering of an electron on an
external potential or the bremsstrahlung of one hard photon, both
cases accompanied by the emission of an arbitrary number of soft
photons. An interesting feature of the ME based approach is that
the theory is free from infrared and mass divergences as a
consequence of the unitary character of the approximate
time-evolution operator \cite{dahmen88pao,dahmen82ido}. The method
is simpler than previous techniques based on re-summation of the
perturbation series to get rid of those divergences. Furthermore,
in contrast with the usual treatment, the resolution of the
detector is not an infrared regularization parameter. An
application to Bhabha scattering (elastic electron-positron
scattering) is developed in \cite{dahmen91uaf}. The difficulties
of extending the results to Quantum Chromodynamics are discussed
in \cite{dahmen86grf}.
Recently, an extension of the Magnus expansion
has also been used in the context of Connes--Kreimer's Hopf algebra
approach to perturbative renormalization of quantum field theory
\cite{connes00riqI,connes00riqII}. In particular,
in \cite{ebrahimi-fard08anb}, it is shown that this generalized ME allows one to
solve the Bogoliubov--Atkinson
recursion in this setting.
In the field of high energy physics the ME has also found applications. Here
we quote just two instances: one referring to heavy ion collisions
and the other to elementary particle physics.
In collision problems the unitarity of the time evolution operator
imposes some bound on the experimentally observable cross
sections. When these magnitudes are theoretically calculated one
usually keeps only the lowest orders in conventional perturbation
theory. This may be harmless at relatively low energies, but it may
lead to violation of unitarity bounds as the energy increases. The
use of a manifestly unitary approximation scheme is then
necessary, and the ME provides such a scheme. In heavy ion collisions at
sufficiently high energy and for a given kinematic configuration (small
impact parameter), this violation occurs for $e^{+}e^{-}$ pair creation when analyzed
in lowest-order time-dependent perturbation theory. In \cite%
{ionescu94uae} a remedy for this situation was advanced by the use
of the first order ME. It is discussed how most theoretical approaches
are based on either lowest-order time-dependent perturbation
theory or the Fermi--Weizs\"aker--Williams method of virtual
photons. These approaches violate unitarity bounds for
sufficiently high collision energies and thus the probability for
single-pair creation exceeds unity. With some additional
assumptions a restricted class of diagrams associated with
electron-positron loops can be summed to infinite order in the
external charge. The electron-positron transition amplitudes and
production probabilities obtained are manifestly unitary and gauge
invariant.
In recent years there has been great interest in neutrino oscillations and
the closely related solar neutrino problem. The three known
families of neutrinos with different flavors (electron, muon and
tau) were experimentally shown to be capable of converting into each
other. The experiments were carried out with neutrinos of
different origins: solar, atmospheric, produced in nuclear
reactors and in particle accelerators. Here oscillation means that
neutrinos of a given flavour can, after propagation, change their
flavour. The accepted explanation for this phenomenon is that
neutrinos with a definite flavour do not have a definite mass, and
vice versa. Let us denote by $\left| \nu _{\alpha
}\right\rangle $ the neutrinos of definite flavour, with $\alpha $
the flavor index (i.e., electron, muon, tau), and by $\left| \nu
_{i}\right\rangle $ the neutrinos with well defined
distinct masses $m_{i}$, $i=1,2,3$. Then the previous assertion means that $%
\left| \nu _{\alpha }\right\rangle $ will be a linear combination
of the different $\left| \nu _{i}\right\rangle$.
As neutrinos with different masses propagate with different
velocities, this mixing allows for flavour conversion, i.e.\ for
neutrino oscillations. The ME enters the game in the solution for the
evolution operator in one basis. If one neutrino ``decouples''
from the other two, then the problem reduces to one with only two
effective generations. Mathematically it is similar to the
two level system studied in Section 3. The reader is referred to \cite%
{dolivo92maf,dolivo90mea,dolivo96ntn,supanitsky08pee} for details of the calculations for two
and three generations.
\subsection{Electromagnetism}
The Maxwell equations govern the evolution of electromagnetic
waves. The equations are linear in their usual form and, although
they have been extensively studied, obtaining approximate
solutions can be rather involved. When
reformulating the equations for a given problem, where the
geometry, the boundary conditions, etc.\ are considered and
appropriate discretisations are taken into account, one frequently ends up with
linear non-autonomous equations, so that
the Magnus expansion can be of interest here.
To illustrate some possible applications, let us consider the
Maxwell equations
\begin{equation}\label{Maxwell}
\left\{ \begin{array}{l} \displaystyle
\frac{\partial \mathbf{H}}{\partial t} = - \frac{1}{\mu} \, \nabla
\times \mathbf{E} \\
\displaystyle
\frac{\partial \mathbf{E}}{\partial t} = \frac{1}{\varepsilon} \, \nabla
\times \mathbf{H} - \frac{1}{\varepsilon} \mathbf{J}(t)
\end{array} \right.
\end{equation}
where $\mathbf{H},\mathbf{E},\mathbf{J}$ are the magnetic and
electric field intensities, and the current density, respectively,
$\mu$ is the permeability and $\varepsilon$ is the permittivity.
After space discretisation, these equations turn into a large
linear system of non-homogeneous equations, and Magnus integrators can in principle be applied.
Time
dependent contributions can also appear from boundary conditions
or external interactions. In some cases
$\mathbf{J}=\sigma \, \mathbf{E}$ \cite{botchev08nid,jiang06sfd}, so that,
if the conductivity $\sigma$ is not constant, Magnus
integrators can be useful.
Let us now consider the frequency domain Maxwell equations (with
$\mathbf{J}=\mathbf{0}$)
\begin{equation}\label{Maxwell2}
\left\{ \begin{array}{l} \displaystyle
\nabla \times \mathbf{E} = i w \mu \mathbf{H}\\
\displaystyle
\nabla \times \mathbf{H} = -iw\varepsilon \mathbf{E}
\end{array} \right.
\end{equation}
where $w$ is the angular frequency. These equations are of
interest for time-harmonic lightwaves propagating in a
wave-guiding structure composed of linear isotropic materials. If
one is interested in the $x$ and $y$ components of $\mathbf{H}$
and $\mathbf{E}$, and how they propagate in the $z$ direction, the
equations to solve, after appropriate discretisation, take the
form \cite{lu06stf}
\begin{equation}\label{Maxwell2b}
-iw\varepsilon \frac{d \mathbf{u}}{dz} = A(z) \mathbf{v}, \qquad
-iw\mu \frac{d \mathbf{v}}{dz} = B(z) \mathbf{u}.
\end{equation}
Here $\mathbf{u}$, $\mathbf{v}$ are vectors and $A,B$ matrices
depending on $z$, which in this case plays the role of the evolution
parameter. A fourth-order Magnus integrator has been used in
\cite{lu06stf}. From section \ref{section5} we observe that higher order Magnus
integrators, combined with splitting methods, could also lead to
efficient algorithms for obtaining accurate numerical results that
preserve qualitative properties of the exact solution.
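A schematic marching scheme for (\ref{Maxwell2b}) could look as follows. This Python sketch is our own illustration, not the fourth-order integrator of \cite{lu06stf}: it rewrites the equations as $\mathbf{u}' = \frac{i}{w\varepsilon}A(z)\mathbf{v}$, $\mathbf{v}' = \frac{i}{w\mu}B(z)\mathbf{u}$ and applies the second-order exponential midpoint rule in $z$:

```python
import numpy as np
from scipy.linalg import expm

def propagate_uv(A, B, u0, v0, w, eps, mu, Z, N):
    """March u' = (i/(w*eps)) A(z) v, v' = (i/(w*mu)) B(z) u from z=0
    to z=Z with the exponential midpoint (second-order Magnus) rule."""
    h = Z / N
    m = len(u0)
    y = np.concatenate([u0, v0]).astype(complex)
    for n in range(N):
        zm = (n + 0.5) * h                    # midpoint of the step
        S = np.zeros((2 * m, 2 * m), dtype=complex)
        S[:m, m:] = (1j / (w * eps)) * A(zm)  # coupling u' <- v
        S[m:, :m] = (1j / (w * mu)) * B(zm)   # coupling v' <- u
        y = expm(h * S) @ y
    return y[:m], y[m:]
```

For $z$-independent $A$, $B$ the steps compose into the exact matrix exponential.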
\subsection{Optics}
In the review paper \cite{dattoli88aav} one can find references to
some early applications of the ME to Optics, for example to Hamiltonians
involving the generators of the $\mathrm{SU(2)}$, $\mathrm{SU(1,1)}$ and
Heisenberg--Weyl groups, with applications to laser-plasma scattering
and pulse propagation in free-electron lasers. Here, as
representatives of the more modern interest in the ME in Optics, we quote
two applications referring to the Helmholtz equation and to the study of
Stokes parameters.
The Helmholtz equation in one spatial dimension with a variable
refractive index $n(x)$ reads
\begin{equation} \label{Helmholtz}
\psi^{\prime\prime}(x)+k^2n^{2}(x)\psi(x)=0,
\end{equation}
where $k$ is the wavenumber in vacuum.
Recently this time-honored wave equation has been treated in two
different ways, both using the ME. From a more formal point of view, in
\cite{khan05wdm} the Helmholtz equation is analyzed following the well
known procedure of Feshbach and Villars to convert the
second order relativistic quantum Klein--Gordon differential
equation for spin-0 particles into a first order differential equation
involving two-component wave functions (the original wave function
and its time derivative). The evolution operator for the Helmholtz
equation is then a $2\times2$ matrix which evolves according to the
fundamental equation (\ref{laecuacion}) with the only difference that
now the evolution parameter is $x$ instead of $t$. In
\cite{khan05wdm} the whole procedure is explained and the main
physical consequence, which amounts to the addition of correcting
terms to the Hamiltonian, is discussed in the case of an axially
symmetric graded-index medium, i.e. one in which the refractive index
is a polynomial.
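In the same spirit, equation (\ref{Helmholtz}) can be recast as a first-order system $y'=M(x)y$ with $y=(\psi,\psi')$ and propagated in $x$ with the lowest-order Magnus (exponential midpoint) scheme. The Python sketch below is our own illustration, not the formalism of \cite{khan05wdm}:

```python
import numpy as np
from scipy.linalg import expm

def helmholtz_magnus(n, k, psi0, dpsi0, X, N):
    """Integrate psi'' + k^2 n(x)^2 psi = 0 as y' = M(x) y with
    y = (psi, psi'), using the exponential midpoint rule in x."""
    h = X / N
    y = np.array([psi0, dpsi0], dtype=float)
    for i in range(N):
        xm = (i + 0.5) * h                       # midpoint of the step
        M = np.array([[0.0, 1.0], [-(k * n(xm)) ** 2, 0.0]])
        y = expm(h * M) @ y
    return y                                     # (psi(X), psi'(X))
```

For a constant refractive index the scheme is exact, reproducing $\psi(x) = \cos(knx)$ for the initial data $\psi(0)=1$, $\psi'(0)=0$.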
The Helmholtz equation has also been investigated with the help of the ME
in \cite{lu06afo,lu06stf,lu07afo}. Here the propagation in a
slowly varying waveguide is considered and the boundary value
problem is converted into an initial value problem by the
introduction of appropriate operators which are shown to satisfy
equation (\ref{laecuacion}). Numerical methods to fourth order
borrowed from \cite{iserles99ots} are then used.
Since the mid-19th century the polarization
state of light, and in general of electromagnetic radiation or any
other transverse wave, has been described by the so-called Stokes
parameters, which constitute a four-dimensional vector
$\mathbf{S}(\omega)$ depending on the frequency $\omega$. When the
light traverses an optical element which acts on its polarization
state, the incoming and outgoing Stokes vectors are related by
\begin{equation} \label{mueller}
\mathbf{S}_{\mathrm{out}}(\omega)=M(\omega)\mathbf{S}_{\mathrm{in}}(\omega),
\end{equation}
where the $4\times4$ matrix $M(\omega)$ is called the Mueller
matrix. It can be proved \cite{reimer06cao,reimer06mmd} that it
satisfies the equation
\begin{equation} \label{muellerdif}
M^{\prime}(\omega)=H(\omega) M(\omega),\qquad\qquad
M(\omega_0)=M_{0},
\end{equation}
where now the prime denotes derivative with respect to the real
independent variable $\omega$. For systems with zero
polarization-dependent loss (PDL) and no polarization mode
dispersion (PMD), $H(\omega)$ is constant, whereas with PDL and PMD
the previous equation is just our equation (\ref{laecuacion}) and
the appropriateness of ME is apparent. The matrix $H(\omega)$ in
this application has a Hermitian and a non-Hermitian component.
ME has allowed a recursive calculation of successive orders of the
frequency variation of the Mueller matrix. This yields PMD and PDL
compensators that counteract the effects of PMD and PDL with
increased accuracy.
Also related to the use of the Stokes vector, one can mention the
so-called radiative transfer equation for polarized light. It is
relevant in astrophysics for measuring the magnetic fields of the Sun
and stars. That equation gives the variation of $\mathbf{S}(z)$
along the light path $z$:
\[
\frac{d}{dz}\mathbf{S}(z)=-K(z)\mathbf{S}(z) +\mathbf{J},
\]
where $\mathbf{S}$ is the Stokes vector, $K$ is a $4\times 4$
matrix which describes absorption in the presence of Zeeman effect
and $\mathbf{J}$ stands for the emission term. In
\cite{lopez02ota,lopez99dia,semel99iot} ME is used to obtain an
exponential solution.
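As a toy illustration of the transfer equation above, one can march a low-dimensional analogue numerically. The sketch below (Python; the $2\times2$ absorption matrix and emission vector are invented stand-ins for the physical $4\times4$ Stokes case) shows the vector relaxing, for constant $K$, to the steady state $K^{-1}\mathbf{J}$:

```python
def rhs(S, K, J):
    """Right-hand side of the transfer equation dS/dz = -K S + J."""
    n = len(S)
    return [J[i] - sum(K[i][j] * S[j] for j in range(n)) for i in range(n)]

def rk4(S, K, J, h, steps):
    """Classical fourth-order Runge--Kutta march along the light path z."""
    n = len(S)
    for _ in range(steps):
        k1 = rhs(S, K, J)
        k2 = rhs([S[i] + h / 2 * k1[i] for i in range(n)], K, J)
        k3 = rhs([S[i] + h / 2 * k2[i] for i in range(n)], K, J)
        k4 = rhs([S[i] + h * k3[i] for i in range(n)], K, J)
        S = [S[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(n)]
    return S

# Invented 2x2 absorption matrix and emission vector (the physical
# Stokes case is 4x4); for constant K the beam relaxes to K^{-1} J.
K = [[2.0, 1.0], [0.0, 3.0]]
J = [1.0, 1.0]
S = rk4([0.0, 0.0], K, J, h=0.01, steps=1000)  # march to z = 10
```

Here $K^{-1}\mathbf{J} = (1/3,\, 1/3)$, and the integrated vector lands on it to high accuracy; for $z$-dependent $K$ the Magnus exponential solution cited above replaces this naive march.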
\subsection{General Relativity}
To illustrate once more the pervasive presence of the linear
differential equation (\ref{laecuacion}) let us mention reference
\cite{sanmiguel07ndo} in which the aim is to determine the time
elapsed between two events when the space-time is treated as in
General Relativity. Then it turns out to be necessary to solve a
two-point boundary value problem for null geodesics. In so doing
one needs to know a Jacobian whose expression involves
an~$8\times8$ matrix function obeying the basic equation
(\ref{laecuacion}). In \cite{sanmiguel07ndo} an eighth-order
numerical method from \cite{blanes00iho} is used, which proves
to be an efficient scheme.
\subsection{Search of Periodic Orbits}
The search for periodic orbits of non-linear autonomous differential
equations, ${\bf x}'={\bf f(x)}$, $\mathbf{x}\in \mathbb{R}^d$,
is of interest in Celestial Mechanics (periodic orbits of the
$N$-body problem) as well as in the general theory of dynamical systems.
Due to the complexity of this search, it is important to have
efficient numerical algorithms.
The Lindstedt--Poincar\'e technique is frequently used to calculate periodic
orbits. An iterative process proposed in \cite{viswanath03sda} starts
with a guessed periodic orbit, which is subsequently improved
by solving a non-autonomous linear differential equation for the correction. The
numerical integration of this equation is carried out by
means of Magnus integrators.
\subsection{Geometric control of mechanical systems}
\label{sec-control}
Many mechanical systems studied in control theory
can be modeled by an ordinary differential equation of the form \cite{bullo05gco,kawski02tco}
\begin{equation} \label{control.1}
\mathbf{x}^\prime(t) = \mathbf{f}_0(\mathbf{x}(t)) + \sum_{i=1}^m u_i(t) \;
\mathbf{f}_i(\mathbf{x}(t)),
\end{equation}
initialized at $\mathbf{x}(0) = \mathbf{p}$. Here $\mathbf{x} \in
\mathbb{R}^d$ represents all the possible states of the system,
$\mathbf{f}_i$ are (real) analytic vector fields and the functions
$\mathbf{u} = (u_1, \ldots, u_m)$ (the \emph{controls}) are assumed
to be integrable with respect to time and to take values in a compact
subset $U \subset \mathbb{R}^m$. The vector field $\mathbf{f}_0$ is
called the \emph{drift} vector field, whereas $\mathbf{f}_i$, $i \ge
1$, are referred to as the \emph{control} vector fields. When
$\mathbf{f}_0 \equiv 0$, the system (\ref{control.1}) is called
`without drift', and its analysis is typically easier.
For a given set of controls $\{u_i\}$, equation (\ref{control.1})
with initial value $\mathbf{x}(0)$ is nothing but a dynamical
system, which can be analyzed and (approximately) solved by
standard techniques. In control theory, however, one is typically
interested in the \emph{inverse problem}: given a target
$\mathbf{x}(T)$, find controls $\{u_i\}$ that steer from
$\mathbf{x}(0)$ to $\mathbf{x}(T)$ \cite{kawski02tco}, perhaps by following a prescribed
path. Just to
illustrate these abstract considerations, a typical problem could be
the determination of a set of controls that drive the actions of a
robot during a task.
The first step is to guarantee that there exists a solution. This is
the problem of controllability. To characterize the controllability
of linear systems of the form
\begin{equation} \label{LQR}
{\bf x}'(t) = A(t) {\bf x}(t) + B(t) {\bf u}(t)
\end{equation}
is a relatively simple task thanks to an algebraic criterion known
as the Kalman rank condition \cite{bullo05gco,kalman63col}. This issue, however,
is much more involved for the nonlinear system (\ref{control.1}) \cite{kawski02tco}.
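For constant matrices $A$ and $B$, the Kalman rank condition states that (\ref{LQR}) is controllable precisely when the controllability matrix $[B,\, AB,\, \ldots,\, A^{d-1}B]$ has full rank $d$. A minimal sketch (Python; helper names are ours, and exact rational arithmetic avoids rank decisions based on floating-point pivots):

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply a d x k matrix by a k x m matrix (nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M):
    """Rank by Gaussian elimination over the rationals (exact)."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [M[i][j] - f * M[r][j] for j in range(len(M[0]))]
        r += 1
    return r

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^{d-1}B]."""
    d, m = len(A), len(B[0])
    blocks, Bi = [], B
    for _ in range(d):
        blocks.append(Bi)
        Bi = mat_mul(A, Bi)
    C = [[blocks[k][i][j] for k in range(d) for j in range(m)]
         for i in range(d)]
    return rank(C)

A = [[0, 1], [0, 0]]                        # double integrator
controllable = kalman_rank(A, [[0], [1]])   # forcing enters the velocity
uncontrollable = kalman_rank(A, [[1], [0]]) # forcing enters the position only
```

For the double integrator, forcing the velocity gives full rank $2$ (controllable), while forcing only the position leaves rank $1$.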
The interest of ME in control theory, as has already been
discussed in section \ref{MECF}, stems from the approximate ansatz
it provides connecting the states $\mathbf{x}(0)$ and
$\mathbf{x}(T)$. Thus, the Magnus expansion can be used either to predict a state
$\mathbf{x}(T)$ for a given control $\mathbf{u}$ or to find
reasonable controls $\mathbf{u}$ which make the target
$\mathbf{x}(T)$ reachable from $\mathbf{x}(0)$. Of course, many sets of
controls may exist, and this raises questions concerning the
\emph{cost} of each scheme and consequently the search for the
optimal choice. For instance, the ME has been used in non-holonomic
motion planning of systems without drift
\cite{duleba97lom,duleba98oac}. Among non-holonomic systems there are free-floating
robots, mobile robots and underwater vehicles \cite{duleba98oac,murray94ami}.
In the particular case of linear quadratic optimal
control problems (appearing in engineering as well as
in differential games) a given cost functional has to achieve a
minimum. When this happens, eq. (\ref{LQR}) can be written as
\cite{engwerda05lqd,reid72rde}
\begin{equation}\label{fi}
{\bf x}' \, = \, M(t,K(t)) {\bf x},
\end{equation}
where $ \, K(t) \in \mathbb{R}^{d \times d} \, $ has to solve a
Riccati differential equation similar to (\ref{Riccati}) with
final condition $K(T)=K_f$. In other words, the Riccati equation has to be
integrated backward in time and its solution then used as an input in
(\ref{fi}). As mentioned, the Riccati differential matrix equation
has received much attention
\cite{blanes00asw,dieci92nio,engwerda05lqd,jodar88arm,kenney85nio,reid72rde,schiff99ana},
but an efficient implementation for this problem requires further
investigation, and methods from ME can play an important role.
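A minimal scalar sketch of this backward sweep follows (Python; the coefficients $a,b,q,r$ and the sign convention $K' = K^2 b^2/r - 2aK - q$ are illustrative assumptions, chosen so that $K(t)=\tanh(T-t)$ solves the equation when $a=0$, $b=q=r=1$):

```python
import math

def riccati_backward(a, b, q, r, T, steps):
    """Integrate the scalar Riccati equation K' = K^2 b^2 / r - 2 a K - q
    backward from K(T) = 0, via RK4 in the reversed time tau = T - t
    (so dK/dtau = q + 2 a K - K^2 b^2 / r); returns K at t = 0."""
    f = lambda K: q + 2 * a * K - K * K * b * b / r
    h, K = T / steps, 0.0
    for _ in range(steps):
        k1 = f(K)
        k2 = f(K + h / 2 * k1)
        k3 = f(K + h / 2 * k2)
        k4 = f(K + h * k3)
        K += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return K

# With a = 0 and b = q = r = 1 the backward solution is K(t) = tanh(T - t),
# so K(0) approaches the algebraic Riccati root K = 1 for large T.
K0 = riccati_backward(0.0, 1.0, 1.0, 1.0, T=10.0, steps=2000)
```

The computed $K(0)$ matches $\tanh T$ to high accuracy, illustrating both the backward integration and the relaxation to the steady (algebraic Riccati) solution over a long horizon.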
\section{Conclusions}
In this report we have thoroughly reviewed the work on the
Magnus expansion carried out over more than fifty years from very
different perspectives.
As a result of genuinely interdisciplinary activity, some aspects of
the original formulation have been refined. This applies, for
example, to the convergence properties of the expansion, which have
been much sharpened.
In other respects much practical progress has been made. This is
the case for the calculation of the terms of the series, both
explicitly and recursively. New techniques, such as those borrowed
from graph theory, have also profitably entered the picture.
Although originally formulated for linear systems of ordinary
differential equations, the domain of application of ME has been
enlarged to include other types of problems involving differential
equations: stochastic equations, nonlinear equations and
Sturm--Liouville problems.
In parallel with these developments in the mathematical structure
of the ME, the realm of its applications has also widened over the
years. It is worth stressing in this respect the versatility of
the expansion in coping with new applications in old fields, such as
NMR, and at the same time its capability to generate
new contributions, such as efficient numerical
algorithms for geometric integration.
All these facts, historical and present, presented and discussed
in this report strongly support the idea that ME can be a very
useful tool for physicists.
https://arxiv.org/abs/1602.07585 | Mapping toric varieties into low dimensional spaces | A smooth $d$-dimensional projective variety $X$ can always be embedded into $2d+1$-dimensional space. In contrast, a singular variety may require an arbitrarily large ambient space. If we relax our requirement and ask only that the map is injective, then any $d$-dimensional projective variety can be mapped injectively to $2d+1$-dimensional projective space. A natural question then arises: what is the minimal $m$ such that a projective variety can be mapped injectively to $m$-dimensional projective space? In this paper we investigate this question for normal toric varieties, with our most complete results being for Segre-Veronese varieties. | \section{Introduction}
It is well known that a smooth $d$-dimensional projective or affine variety can always be embedded into $\mathbb{P}^{2d+1}$. This story is different for singular varieties, including affine cones over smooth projective varieties. For example, the affine cone over the $n$th Veronese embedding of $\mathbb{P}^1$ cannot be embedded in $\mathbb{A}^m$ for $m<n+1$. In some situations, one may be willing to lose some information and be satisfied with an injective morphism $X \to \mathbb{P}^m$ or $X\to \mathbb{A}^m$.
\begin{ques}\label{q-1}
What is the minimal $m$ such that a projective (affine) variety $X$ can be mapped injectively into projective space (respectively, affine space) of dimension~$m$?
\end{ques}
The bound $2d+1$ holds in general in this setting by the same linear projection argument that yields an embedding of a smooth projective variety into $\mathbb{P}^{2d+1}$.
Namely, if $X\subseteq \mathbb{P}^N$ and $p$ is a point in $\mathbb{P}^N\setminus X$ which does not lie on any secant to $X$, that is on any line which intersects $X$ in at least two points, then the projection from $p$ to a hyperplane will be injective on $X$. One can then repeat this argument until the set of points on secant lines fills the ambient space. The bound $2d+1$ is simply the expected dimension of the secant variety, which is the Zariski closure of the set of points on secant lines. In the affine case, one simply uses the same argument on a projectivization; see ~\cite[Section~5.1]{ed:si} for a similar argument in a special case and \cite[Theorem~5.3]{tk-gk:aitng} for a different argument in a more general context.
Naturally, an absolute lower bound is given by the dimension of $X$. This absolute lower bound is sometimes attained even when $X$ is not itself $\mathbb{P}^d$ or $\mathbb{A}^d$, at least in positive characteristic \cite[Example 3.1]{ed:sifrg}. But this is a rare occurrence. In characteristic zero, for normal varieties, it does not happen unless $X$ is isomorphic to $\mathbb{P}^m$ or $\mathbb{A}^m$, see~\cite[Corollary~4.6]{ed-hk:ismag}.
In this paper we focus on toric varieties and the affine cones over them. Our most complete results are for Segre-Veronese varieties, a construction simultaneously generalizing Veronese varieties ($r=1$) and Segre varieties (each $a_i=1$). For the affine cone over a Segre-Veronese variety we obtain:
\begin{thm}\label{thm-MinSizeSV}
Let $Y$ be the affine cone over the Segre-Veronese variety $X$ which is the image of the closed embedding $\prod_{i=1}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N$ given by the line bundle $\mathcal{O}(a_1,\dots,a_r)$. If $s$ is minimal such that $Y$ can be mapped injectively to $\mathbb{A}^s$, then
\begin{enumerate}
\item $s=2\sum_{i=1}^r n_i - 2r +1$, if at least one $a_i$ is not 1 or a power of $\operatorname{char} \Bbbk$;\label{thm-MinSizeSV1}
\item $s=2(n_1+n_2)-4$, if $r=2$ and $a_1,a_2$ are either 1 or a power of $\operatorname{char} \Bbbk$;\label{thm-MinSizeSV2}
\item $2\sum_{i=2}^r n_i - 2r +4\leq s\leq 2\sum_{i=1}^r n_i - 2r +1$ in the remaining cases.\label{thm-MinSizeSV3}
\end{enumerate}
If $s'$ is minimal such that $X$ can be mapped injectively to $\mathbb{P}^{s'}$, then $s'=s-1$ in cases \ref{thm-MinSizeSV1} and \ref{thm-MinSizeSV2}, and $s'$ satisfies the same inequalities as $s-1$ in case \ref{thm-MinSizeSV3}.
\end{thm}
As a consequence of our lower bounds and linear projection arguments, we obtain in Corollary~\ref{cor:nondeg} a new proof of the nondegeneracy of the secant variety of Segre-Veronese varieties (see \cite[Theorem 4.2]{ha-mcb:odsvsvv}).
To obtain the sharper upper bounds given above, we need to determine if the set of points belonging to secants is closed, since projection from a point in the secant variety which does not lie on any secant will be injective on $X$. This is well-known for Veronese and Segre varieties, and we complete the picture for Segre-Veronese varieties:
\begin{customprop}{\ref{prop-notclosed}} Let $X$ be the Segre-Veronese variety which is the image of the closed embedding $\prod_{i=1}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N$ given by the line bundle $\mathcal{O}(a_1,\dots,a_r)$. If $r>2$ or $r=2$ and $(a_1,a_2)\neq(1,1)$, then the set of secant lines to $X$ does not fill the secant variety of $X$.
\end{customprop}
Lower bounds are obtained via completely different techniques, better suited to the affine case. If $Y$ is an affine algebraic variety, having an injective morphism $Y\to \mathbb{A}^m$ corresponds to having a \emph{separating set} $E=\{f_1,\ldots,f_m\}\subseteq \Bbbk[Y]$. The notion of a separating set is usually defined relative to a larger ring of functions $R\supset \Bbbk[Y]$, and in much more general contexts, for rings of functions on a set; see \cite[Definition 1.1]{gk:si}. This variant comes up in many different contexts under different names (see for example the introduction of \cite{eds:fdepeei}).
For the affine cone $Y$ over a Segre-Veronese variety, we have a surjective map
$\mathbb{A}^d\to Y$ where $d={\sum_{i=1}^r n_i}$, dual to the inclusion map of the ring of regular functions on $Y$ into a polynomial ring in $d$ variables. A consequence is that the morphism ${Y\to \mathbb{A}^m}$ is injective if and only if the natural inclusion of the reduced fiber products
\[\mathbb{A}^d \times_Y \mathbb{A}^d \subseteq \mathbb{A}^d \times_{\mathbb{A}^m} \mathbb{A}^d\]
is an isomorphism. As $ \mathbb{A}^d \times_{\mathbb{A}^m} \mathbb{A}^{d}$ is the zero set in $ \mathbb{A}^{2d}$ of
\[(f_i\otimes 1-1\otimes f_i\mid i=1,\ldots,m)\]
it follows that the \emph{arithmetic rank} of the defining ideal of $\mathbb{A}^d \times_Y \mathbb{A}^d$, that is, the minimal number of generators up to radical, is a lower bound for the minimal $m$ such that $Y$ can be mapped injectively to $\mathbb{A}^m$ (cf.~\cite[Section~3]{ed-jj:silc}). Our arguments exploit the fact that the affine cone over a Segre-Veronese variety is isomorphic to ${V/\! \!/G}:=\operatorname{Spec}({\kk[V]^G})$, where $V$ is a representation of an algebraic group $G$ and ${\kk[V]^G}:=\{f\in {\kk[V]} \mid f(u)=f(\sigma\cdot u), \text{ for all }u\in V,~\sigma\in G\}$. Accordingly, most of this paper will be written from that point of view.
If we assume $G$ is reductive (and so the quotient morphism $V\to {V/\! \!/G}$ is surjective), then having an injective morphism $\operatorname{Spec}({\kk[V]^G})=:{V/\! \!/G}\to \mathbb{A}^m$ corresponds to having a separating set in the sense of \cite[Section 2.3.2]{hd-gk:cit}, that is, a set $E$ of invariants such that whenever two points of $V$ can be separated by some invariant, they can be separated by an element of $E$.
In this setting, the fibre product $V\times_{{V/\! \!/G}}V$ is called the \emph{separating variety} and denoted by $\mathcal{S}_{V,G}$. The key observation of \cite[Section~3]{ed-jj:silc} is that the minimal size of a separating set is bounded below by the arithmetic rank of the defining ideal of $\mathcal{S}_{V,G}$. For representations of finite groups the arithmetic rank of the defining ideal of the separating variety ends up being meaningful in terms of the geometry of the representation (see \cite{ed-jj:silc}). As in \cite{ed-jj:silc}, we use the nonvanishing of local cohomology modules to find lower bounds for the arithmetic rank of the defining ideal of the separating variety. For Segre varieties with two factors, this is not conclusive in positive characteristic, and so instead we must use \'etale cohomology. Following \cite{ed-jj:silc}, the general strategy is to decompose the separating variety as a union of simpler objects. The difficulty is that, unlike for representations of finite groups, the separating variety is not simply an arrangement of linear subspaces.
Linear projections are often sufficient to reach the minimal $m$ such that a projective variety can be mapped injectively to $\mathbb{P}^m$. In the case of toric varieties, the image will often no longer be toric. A natural question is the following:
\begin{ques}\label{min-m-mon}
What is the minimal $m$ such that a projective toric variety is mapped injectively to $\mathbb{P}^m$ so that the image is also a toric variety? Equivalently, what is the minimal size of a monomial separating set for the affine cone over a toric variety?
\end{ques}
In this paper we address this question for normal affine toric varieties. These include the affine cones over Segre-Veronese varieties. The normality assumption ensures that the ring can be identified with the ring of invariants of a representation of an algebraic torus, which provides extra structure. Indeed, the rings of invariants for representations of tori are determined by the combinatorics and convex geometry of the weights. In Proposition~\ref{prop-SVminmon}, we determine the answer to Question~\ref{min-m-mon} for Segre-Veronese varieties; as a consequence, in Corollary~\ref{cor-sparse} we give a bound on the sparsity of a separating set for the associated torus action.
In general, the minimal size of a monomial separating set is much larger than the minimal size of a separating set, but it is often still smaller than the size of a minimal generating set for the ring of invariants. Inspired by \cite{md-es:hdag}, we show the following.
\begin{customthm}{\ref{thm-2r+1}}
Let $V$ be an $n$-dimensional representation of a torus $T$ of rank $r\leqslant n$. The invariants involving at most $2r+1$ variables form a separating set.
\end{customthm}
The remainder of the paper is organized as follows. In Section~\ref{section-setup} we describe the combinatorial set-up and notation we need in order to discuss linear representations of tori, including giving an explicit link between representations of tori and Segre-Veronese varieties. In Section~\ref{section-SepVar} we first give some general results about the decomposition of the separating variety for representations of tori, before giving more explicit results in the case of Segre-Veronese varieties. Section~\ref{section-UpperBounds} focuses on upper bounds on the size of separating sets and Section~\ref{section-LowerBounds} on the lower bounds. Finally, in Section~\ref{section-MonomialSepSets} we consider monomial separating sets. As well as the results mentioned above, we give a combinatorial characterization of monomial separating sets for representations of tori.
\section{Set up and Notation}\label{section-setup}
We work over an algebraically closed field $\Bbbk$ of arbitrary characteristic. The characteristic will sometimes make a difference. We consider an $n$-dimensional representation $V$ of a torus $T$ of rank $r$. Without loss of generality we can assume this is given by the weights $m_1,\ldots,m_n\in \mathbb Z^r$. That is, with respect to the basis $\{e_1,\dots ,e_n\}$ of $V$, the action of $T$ on $V$ is given by $t\mapsto \operatorname{diag}(t^{-m_1},\ldots,t^{-m_n})$, where $t$ denotes the element $(t_1,\ldots,t_r)$ of $T$. Let $A$ be the matrix whose columns are the $m_i$'s. We will assume throughout that $A$ has full rank $r\leq n$.
We will write $\Bbbk[x]$ to denote $\Bbbk[x_1,\ldots,x_n]=\Bbbk[V]$, where $\{x_1,\dots,x_n\}$ is the basis of the 1-forms of $\Bbbk[V]$ dual to the basis $\{e_1,\dots ,e_n\}$ of $V$. In terms of the basis $\{x_1,\dots,x_n\}$, the representation of $T$ takes the form $t\mapsto \operatorname{diag}(t^{m_1},\ldots,t^{m_n})$.
The ring of invariants ${\kk[V]}^T$ is the monomial algebra given by the semigroup $\mathcal{L}:=\ker_{\mathbb Z}A\cap \mathbb N^n$, that is
\[{\kk[V]}^T=\mathrm{span}_\Bbbk\{x^\alpha \mid \alpha\in \mathcal{L}\}.\]
The field of rational invariants is similarly given by $\ker_\mathbb Z A$:
\[\Bbbk(V)^T=\mathrm{span}_\Bbbk\{x^\beta \mid \beta \in \ker_{\mathbb Z} A\}.\]
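For small examples, this lattice description of ${\kk[V]}^T$ can be checked by brute force. The sketch below (Python; the function name is ours) enumerates the exponent vectors $\alpha\in\ker_{\mathbb Z}A\cap \mathbb N^n$ up to a degree bound for a rank-1 torus acting with weights $1,1,-1$:

```python
from itertools import product

def invariant_monomials(weights, max_deg):
    """Exponent vectors alpha in N^n, alpha != 0, with A.alpha = 0 and
    |alpha| <= max_deg, where the columns of A are the weights m_i."""
    n, r = len(weights), len(weights[0])
    hits = []
    for alpha in product(range(max_deg + 1), repeat=n):
        if not 0 < sum(alpha) <= max_deg:
            continue
        if all(sum(weights[i][k] * alpha[i] for i in range(n)) == 0
               for k in range(r)):
            hits.append(alpha)
    return hits

# Rank-1 torus acting on k^3 with weights 1, 1, -1: the invariant ring
# is generated by x1*x3 and x2*x3, i.e. exponent vectors (1,0,1), (0,1,1).
gens = invariant_monomials([(1,), (1,), (-1,)], max_deg=2)
```

In this example the only nonzero solutions up to degree 2 are exactly the exponent vectors of the two generators, reflecting that $\mathcal{L}$ is the semigroup generated by them.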
For a natural number $a\in\mathbb N$, we write $[a]$ to denote the set $\{1,\ldots,a\}$. For $I\subseteq [n]$, we set $V_I=\mathrm{span}_\Bbbk\{e_i \mid i\in I\}$,
\[\ker_{\mathbb Z} A_I:=\{\beta\in \ker_{\mathbb Z} A \mid \beta_j=0 \ \text{for} \ j\notin I\}\,,\]
and $\mathcal{L}_I=\ker_\mathbb Z A_I \cap \mathcal{L}$.
For $u\in V$, we define $\operatorname{supp}(u):=\{i\in [n]\mid u_i\neq 0\}$ and similarly for $\beta\in \mathbb Z^n$, ${\operatorname{supp}(\beta):=\{i\in [n]\mid \beta_i\neq 0\}}$. For $I\subseteq \{1,\ldots, n\}$, we define the \emph{weight set} of $I$ to be ${\mathrm{wt}(I):=\{m_i\mid i\in I\}}$, and for $u\in V$ and $\alpha \in \mathbb Z^n$, we write $\mathrm{wt}(u):=\mathrm{wt}(\operatorname{supp}(u))$ and ${\mathrm{wt}(\alpha):=\mathrm{wt}(\operatorname{supp}(\alpha))}$. Further, we will write $\operatorname{conv}(I)$, respectively $\conv^{\circ}(I)$, to denote the convex hull of $\mathrm{wt}(I)$, respectively the relative interior of the convex hull of $\mathrm{wt}(I)$, that is, the interior of $\operatorname{conv}(\mathrm{wt}(I))$ with respect to the usual metric topology on the affine span of $\mathrm{wt}(I)\subseteq \mathbb R^r$.
\subsection{Segre-Veronese varieties}\label{section-SV}
We will pay particular attention to the affine cones over Segre-Veronese varieties.
A Segre-Veronese variety is the image of the closed embedding
\[{ {\prod_{i=1}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N}} \qquad \text{for} \ {N=\prod_{i=1}^r \binom{n_i+a_i-1}{a_i}-1}\]
given by the line bundle $\mathcal{O}(a_1,\dots,a_r)$ for some tuple $(a_1,\dots,a_r) \in \mathbb N^r$. Its ring of homogeneous coordinates is
\[S=\Bbbk\big[\ M_1 \cdots M_r \ \big| \ M_i \, \text{is a monomial of degree $a_i$ in the variables} \ x_{i1}, \dots , x_{i n_i} \big]\,.\]
This construction simultaneously generalizes Veronese varieties ($r=1$) and
Segre varieties (every $a_i=1$).
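The number of monomial generators $M_1\cdots M_r$ factors as $\prod_{i=1}^r\binom{n_i+a_i-1}{a_i}$, one binomial coefficient per factor. A small enumeration sketch (Python; the function name is ours) confirms this for $\mathbb{P}^1\times\mathbb{P}^2$ under $\mathcal{O}(2,1)$:

```python
from itertools import combinations_with_replacement
from math import comb, prod

def count_sv_monomials(ns, degs):
    """Count the products M_1 ... M_r with M_i a monomial of degree a_i
    in n_i variables, by direct enumeration of each factor."""
    total = 1
    for n, a in zip(ns, degs):
        total *= sum(1 for _ in combinations_with_replacement(range(n), a))
    return total

# P^1 x P^2 under O(2,1): binom(3,2) * binom(3,1) = 9 monomials, matching
# the product of binomial coefficients binom(n_i + a_i - 1, a_i).
count = count_sv_monomials([2, 3], [2, 1])
expected = prod(comb(n + a - 1, a) for n, a in zip([2, 3], [2, 1]))
```

Each monomial of degree $a_i$ in $n_i$ variables corresponds to a multiset of size $a_i$ drawn from the variables, which is why `combinations_with_replacement` performs the per-factor count.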
If every $a_i$ is coprime to the characteristic of $\Bbbk$, the homogeneous coordinate ring $S$ of the Segre-Veronese variety is the ring of invariants of the polynomial ring
\[R=\Bbbk[x_{i \ell} \ | \ 1 \leqslant i \leqslant r, 1 \leqslant \ell \leqslant n_i]\,,\]
under the linear action of a diagonalizable group, given as the product of a torus $T$ of rank $r-1$ acting with weights
\[m_{i\ell}=\left\{\begin{array}{cl} e_i & i=1,\ldots,r-1\\
-\sum_{j=1}^{r-1} e_j & i=r
\end{array}\right.\,,\]
and a product of cyclotomic groups $\mu_{a_1} \times \cdots \times \mu_{a_r}=:H$, where the $i$-th factor acts on $x_{i\ell}$ by scalar multiplication. That is, the ring of homogeneous coordinates of nonmodular Segre-Veronese varieties can be identified with the ring of invariants $R^G=\Bbbk[W]^G$, where $G=T\times H$ acts on the $(\sum_{i=1}^r n_i)$-dimensional $\Bbbk$-vector space $W$ as described above.
The ring $S$ can also be obtained over any field, up to isomorphism, as the ring of invariants of a torus. Precisely, set
\[S'=\Bbbk\big[\ x_0 M_1 \cdots M_r \ \big| \ M_i \, \text{is a monomial of degree $a_i$ in the variables} \ x_{i1}, \dots , x_{i n_i} \big]\,,\]
and $R'=\Bbbk[x_0 , x_{i \ell} \ | \ 1 \leqslant i \leqslant r, 1 \leqslant \ell \leqslant n_i]$. Then $S'$ is the ring of invariants of $R'$ under the action of a rank $r$ torus with weights $m_0= -\sum_{i=1}^r a_i e_i$ and $m_{i\ell} = e_i$. As both groups are reductive, the isomorphism $S\cong S'$ ensures that finding separating sets in $S$ and $S'$ is exactly the same, and so any bound established for one holds for the other. This follows from the following key fact: if $G$ is reductive, $E\subseteq {\kk[V]^G}$ is a separating set if and only if the morphism ${V/\! \!/G}\to\operatorname{Spec}(\Bbbk[E])$ induced by the inclusion $\Bbbk[E]\subseteq {\kk[V]^G}$ is injective (cf \cite[Theorem~2.2]{ed:sifrg}).
In the following lemma, we set $\operatorname{s}(a_1,\dots,a_r)$ to be the smallest cardinality of a separating set for a representation of a torus with ring of invariants isomorphic to the homogeneous coordinate ring of the Segre-Veronese variety $X$, where $X$ is the image of the closed embedding $\prod_{i=1}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N$ given by the line bundle $\mathcal{O}(a_1,\dots,a_r)$.
\begin{lem} \label{lem-nonmodSV}
Let $\Bbbk$ be a field of positive characteristic $p$, and fix $a_1,\dots , a_r$. Write each $a_i=a_i' p^{e_i}$ so that $\operatorname{gcd}(a_i',p)=1$. Then $\operatorname{s}(a_1,\dots,a_r)=\operatorname{s}(a'_1,\dots,a'_r)$.
\end{lem}
\begin{proof}
We will show that $\operatorname{s}(a_1,\dots,a_t)=\operatorname{s}(a_1,\dots,a_{t-1},p a_t)$, and the claim follows.
Let $R_1$ be the ring of functions on the affine cone over the Segre-Veronese variety given by the line bundle $\mathcal{O}(a_1,...,a_t)$, $R_2$ be the ring of functions on the affine cone over the Segre-Veronese variety given by the line bundle $\mathcal{O}(a_1,\ldots,a_{t-1},p a_t)$, and $R_3$ be the image of $R_1$ under the map which sends the variables $x_{t,1},\ldots,x_{t,n_t}$ to their $p$th powers. Note that $R_3$ is isomorphic to $R_1$.
Suppose that $\{g_1,\ldots,g_s\}\subset R_1$ form a separating set in $R_1$. Then their images $\{g''_1,\ldots,g''_s\}$ under the map giving the isomorphism $R_1\cong R_3$ form a separating set for $R_2$. Indeed, it will be a separating set for $R_3$ (applying the isomorphism), and the morphism $\operatorname{Spec}(R_2)\to \operatorname{Spec} (R_3)$ induced by the inclusion $R_2\subseteq R_3$ is injective. It follows that an upper bound on the minimal size of separating sets for $R_1$ is also an upper bound for $R_2$.
Since $(R_2)^p \subseteq R_3$, taking a separating set for $R_2$ and taking $p$th powers produces a separating set for $R_3$ of the same cardinality. Then, applying the isomorphism $R_1\cong R_3$, we get a separating set for $R_1$ of the same size. Therefore, the minimal size of a separating set in $R_1$ is a lower bound for the minimal size of a separating set in $R_2$, completing the proof of the claim.
\end{proof}
\section{The separating variety}\label{section-SepVar}
In this section we describe the separating variety for representations of tori. The \emph{separating variety} $\mathcal{S}_{V,G}$ is a closed subvariety of $V\times V$ that encodes which points can be separated by invariants. Namely,
\[\mathcal{S}_{V,G}:=\{(u,v)\in V\times V \mid f(u)=f(v),~\forall f\in{\kk[V]^G}\}\,.\]
It deserves its name because it characterizes separating sets. Indeed, $E\subseteq {\kk[V]^G}$ is a separating set if and only if
\[\mathcal{V}_{V\times V}(f\otimes 1-1\otimes f \mid f\in E)=\mathcal{S}_{V,G}\]
(see \cite[Section 2]{gk:cirgpc}). In particular, the defining ideal for the separating variety is the radical of the \emph{separating ideal}
\[ \mathcal{I}_{V,G}:=(f \otimes 1 - 1 \otimes f \mid f\in {\kk[V]^G})\,.\]
Of course, the separating variety always contains the graph of the action
\[ \Gamma_{V,G}:=\{(u,\sigma\cdot u)\mid u\in V,\sigma\in G\}\,,\]
and its Zariski closure $\overline{\Gamma_{V,G}}$.
Let $V$ be a linear representation of an algebraic group $G$. Let $\pi\colon V\to {V/\! \!/G}$ be the morphism corresponding to the inclusion ${\kk[V]^G}\subseteq {\kk[V]}$. The \emph{nullcone} $\mathcal{N}_{V,G}$ is defined as $\mathcal{N}_{V,G}:=\pi^{-1}(\pi(0))$. It coincides with the set $\mathcal{V}_V(f\mid f\in {\kk[V]}^{G}_+)$ of common zeroes in $V$ of all nonconstant homogeneous invariants. Naturally, the product $\mathcal{N}_{V,G}\times\mathcal{N}_{V,G}$ is always contained in the separating variety. For a representation of a torus, the nullcone can be described in terms of the geometry of the weights of the action:
\begin{lem}[{see for example \cite[Proposition 4.4]{dlw:wirtipr}}] \label{lem-nullcone}
Let $V$ be a representation of a torus $T$. The nullcone is an arrangement of linear subspaces and its decomposition into irreducible components is as follows:
\[\mathcal{N}_{V,T}=\bigcup_{\tiny{\begin{array}{c} I \text{ maximal s.\,t.}\\ 0\notin \operatorname{conv}(\mathrm{wt}(I))\end{array}}} V_I.\]
\end{lem}
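In the rank-1 case the membership test $0\in\operatorname{conv}(\mathrm{wt}(I))$ reduces to a sign check, so the component decomposition of Lemma~\ref{lem-nullcone} can be computed by brute force. A sketch (Python; 0-based indices, helper names ours):

```python
from itertools import combinations

def zero_in_hull_1d(ws):
    """For scalar weights (rank-1 torus), 0 lies in conv(ws) exactly
    when some weight vanishes or weights of both signs occur."""
    return any(w == 0 for w in ws) or (min(ws) < 0 < max(ws))

def nullcone_components(weights):
    """Index sets of the irreducible components V_I of the nullcone:
    maximal I with 0 not in conv(wt(I)) (brute force, rank-1 case)."""
    n = len(weights)
    good = [set(I) for k in range(1, n + 1)
            for I in combinations(range(n), k)
            if not zero_in_hull_1d([weights[i] for i in I])]
    return [I for I in good if not any(I < J for J in good)]

# Weights 1, 1, -2 (0-based indices): the nullcone is the union of
# V_{0,1} and V_{2}, the coordinate subspaces on which the weights
# keep a fixed sign.
comps = nullcone_components([1, 1, -2])
```

The two maximal index sets found are the positively weighted coordinates and the negatively weighted one, matching the sign test above; in higher rank the hull test would require linear programming rather than a sign check.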
We obtain a first coarse decomposition of the separating variety:
\begin{prop}\label{prop-SepVarDecompGen}
Let $V$ be a representation of a torus of rank $r$ and suppose the matrix of weights has rank $r$. Then the separating variety can be written as
\[\mathcal{S}_{V,T}=\overline{\Gamma_{V,T}}\ \bigcup\ (\mathcal{N}_{V,T}\times \mathcal{N}_{V,T}) \ \bigcup_{K,I,J} \ \Gamma_{V_K,T} \oplus (V_{I}\times V_{J})\,,\]
where the second union ranges over all $K$ with $0\in\conv^{\circ}(\mathrm{wt}(K))$ and all $I,J\subseteq \{1,\ldots, n\}\setminus K$ such that $ 0\notin \conv^{\circ}(\mathrm{wt}(K\cup I))$ and $0\notin\conv^{\circ}(\mathrm{wt}(K\cup J))$.
\end{prop}
\begin{proof}
We first show the inclusion ``$\supseteq$''. By the discussion above, $\overline{\Gamma_{V,T}}$ and $\mathcal{N}_{V,T}\times \mathcal{N}_{V,T}$ are both contained in the separating variety. It remains to show that for each choice of $K,I,J$, the set $\Gamma_{V_K,T} \oplus (V_{I}\times V_{J})$ is contained in the separating variety. Let $(u_1,u_2)$ be an arbitrary point of $\Gamma_{V_K,T} \oplus (V_{I}\times V_{J})$. By definition we can write $(u_1,u_2)=(v_1,v_2)+(w_1,w_2)$, where $(v_1,v_2)\in \Gamma_{V_K,T}$ and $(w_1,w_2)\in V_{I}\times V_{J}$. Take ${\alpha\in\ker_{\mathbb Z}A\cap \mathbb N^n}$. As $0$ is not in the relative interior of the convex hull of either $\mathrm{wt}(K\cup I)$ or $\mathrm{wt}(K\cup J)$, it follows that either $\operatorname{supp}(\alpha)\subseteq K$, or both $\operatorname{supp}(\alpha)\not\subseteq K\cup I$ and $\operatorname{supp}(\alpha)\not\subseteq K\cup J$. If $\operatorname{supp}(\alpha)\subseteq K$, then
\[x^\alpha(u_1)=x^\alpha(v_1)=x^\alpha(v_2)=x^\alpha(u_2)\,,\]
and if $\operatorname{supp}(\alpha)\not\subseteq K\cup I$ and $\operatorname{supp}(\alpha)\not\subseteq K\cup J$, then
\[x^\alpha(u_1)=0=x^\alpha(u_2)\,.\]
In both cases $(u_1,u_2)\in \mathcal{S}_{V,T}$ as desired.
We now prove the reverse inclusion ``$\subseteq$''. Take $(u_1,u_2)\in \mathcal{S}_{V,T}$. As $T$ is a reductive group, this is equivalent to $\overline{Tu_1}\cap \overline{Tu_2}\neq \emptyset$ (this follows from \cite[Corollary 3.5.2]{pen:impos}). Choose $z\in \overline{Tu_1}\cap \overline{Tu_2}$ such that $Tz=\overline{Tz}$ is the unique closed orbit contained in both $\overline{Tu_1}$ and $\overline{Tu_2}$. If $Tu_1=Tz$, we have
\[(u_1,u_2)\in \overline{Tu_2}\times \{u_2\}\subseteq \overline{\Gamma_{V,T}}\,.\]
Similarly, if $Tu_2=Tz$, then $(u_1,u_2)\in\overline{\Gamma_{V,T}}$. We now suppose that $Tz\neq Tu_1,Tu_2$. If $z=0$, then $(u_1,u_2)\in \mathcal{N}_{V,T}\times \mathcal{N}_{V,T}$, so we suppose $z\neq 0$. Then, since the orbit of $z$ is closed, $0$ is in the interior of the convex hull of $\mathrm{wt}(z)$ (see Lemma \ref{lem-closedorbit} below). By the extended Hilbert-Mumford criterion \cite[Theorem~C]{rwr:oooagalg}, there exist 1-dimensional subtori $S_1,S_2\subseteq T$ such that the intersections $\overline{S_1u_1}\cap Tz$ and $\overline{S_2u_2} \cap Tz$ are nonempty; that is, there exist $t_1,t_2\in T$ such that $t_1\cdot z\in \overline{S_1u_1}$ and $t_2\cdot z\in \overline{S_2u_2}$. If $t_1\cdot z\in S_1u_1$, then $Tu_1=Tz$, and this case is done; similarly, the case $t_2\cdot z\in S_2u_2$ is done. So we now suppose that $t_1\cdot z\notin S_1u_1$ and $t_2\cdot z\notin S_2u_2$. The 1-dimensional subtori $S_1,S_2$ correspond to a choice of $\delta_1,\delta_2\in \mathbb Z^r$. We then have
\[S_1u_1=\{(s^{\delta_1\cdot m_1}u_{1,1},\ldots, s^{\delta_1\cdot m_n}u_{1,n}) \mid s\in \Bbbk^*\}\,,\]
and
\[S_2u_2=\{(s^{\delta_2\cdot m_1}u_{2,1},\ldots, s^{\delta_2\cdot m_n}u_{2,n}) \mid s\in \Bbbk^*\}\,.\]
Since $t_1\cdot z\notin S_1u_1$ and $t_2\cdot z\notin S_2u_2$, after possibly replacing $\delta_1$ and $\delta_2$ by their negatives, $t_1\cdot z$ and $t_2\cdot z$ are obtained from the above orbits by letting $s$ tend to zero. It then follows that
\begin{align*}
\delta_1\cdot m_i >0& \quad \text{for all } i\in \operatorname{supp}(u_1)\setminus \operatorname{supp}(z)\,,\\
\delta_2\cdot m_i >0& \quad \text{for all } i\in \operatorname{supp}(u_2)\setminus \operatorname{supp}(z)\,,\\
\delta_1\cdot m_i=\delta_2\cdot m_i=0& \quad \text{for all } i\in \operatorname{supp}(z)\,,
\end{align*}
and so we have $u_1=t_1\cdot z+w_1$ and $u_2=t_2\cdot z+w_2$, where $w_1,w_2\in\mathcal{N}_{V,T}$ have support disjoint from the support of $z$. Our assumptions that $Tz$ is the unique closed orbit in $\overline{Tu_1}$ and $\overline{Tu_2}$ and that it is equal to neither $Tu_1$ nor $Tu_2$ imply that the orbits $Tu_1$ and $Tu_2$ are not closed. As a consequence, $0$ is not in the interior of the convex hull of $\mathrm{wt}(u_1)$ or of $\mathrm{wt}(u_2)$. Writing
\[v_1:=t_1\cdot z, \ v_2:=t_2\cdot z, \ \text{and} \ K:=\operatorname{supp} z,\ I:=\operatorname{supp}(u_1)\setminus K, \ J:=\operatorname{supp}(u_2)\setminus K\,, \]
we have $0\in\conv^{\circ}(\mathrm{wt}(K))$, $0\notin \conv^{\circ}(\mathrm{wt}(K\cup I))$, $0\notin\conv^{\circ}(\mathrm{wt}(K\cup J))$, and
$(u_1,u_2)=(v_1,v_2)+(w_1,w_2)$ with $(v_1,v_2)\in\Gamma_{V_K,T}$ and $(w_1,w_2)\in V_I\times V_J$. This completes the proof.
\end{proof}
The following is stated without proof in the characteristic zero case in \cite[6.15]{irs:egiv}. We expect that it is already known, but include a proof for lack of an appropriate reference.
\begin{lem}[{cf. \cite[6.15]{irs:egiv}}]\label{lem-closedorbit}
Let $z\in V$ be nonzero. Then the orbit of $z$ is closed if and only if $0\in\conv^{\circ}(\mathrm{wt}(z))$.
\end{lem}
\begin{proof}
By the extended Hilbert-Mumford criterion \cite[Theorem~C]{rwr:oooagalg}, it suffices to verify that for any one-parameter subgroup $\lambda$ of $T$, the limit of $\lambda(s)\cdot z$ as $s$ tends to zero, when it exists, is contained in $Tz$. Thus, $Tz$ is closed if and only if there does not exist $\delta \in \mathbb Z^r$ such that $\delta \cdot m_i >0$ for all $i \in \operatorname{supp} z$. By the hyperplane separation theorem applied to the convex sets $U = \mathrm{int}(\mathrm{conv}\{ m_i \} )$ and $V=\{0\}$, a real vector $\delta$ satisfying the above exists if and only if $0 \notin \conv^{\circ}(\mathrm{wt}(z))$. If such a real $\delta$ exists, it may be perturbed slightly in any direction except one parallel to a subspace generated by a set of the $m_i$ containing $0$ in its convex hull. However, the coordinates in such a subspace are necessarily rational, so a rational, and hence integral, $\delta$ can be found.
\end{proof}
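To illustrate the criterion in the simplest case, let $\Bbbk^*$ act on $\Bbbk^2$ by $t\cdot(x,y)=(tx,t^{-1}y)$, so that the weights are $m_1=1$ and $m_2=-1$. For $z=(1,1)$ we have $\mathrm{wt}(z)=\{1,-1\}$ and $0\in\conv^{\circ}(\mathrm{wt}(z))$; accordingly, the orbit $Tz=\{(t,t^{-1})\mid t\in\Bbbk^*\}$ is closed, being cut out by $xy=1$. For $z=(1,0)$ we have $\mathrm{wt}(z)=\{1\}$, so $0\notin\conv^{\circ}(\mathrm{wt}(z))$; accordingly, $Tz=\Bbbk^*\times\{0\}$ is not closed, as its closure contains the origin.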
\begin{cor} \label{cor-sepvar1approx}
Let $V$ be a representation of a torus of rank $r$ and suppose the matrix of weights has rank $r$. Suppose moreover that, for every subset $I\subseteq\{1,\ldots,n\}$, $0\in\conv^{\circ}(\mathrm{wt}(I))$ implies $\mathrm{span}_{\mathbb R} \mathrm{wt}(I)=\mathbb R^r$. Then the separating variety can be written as
\[\mathcal{S}_{V,T}=\overline{\Gamma_{V,T}}\ \bigcup \ \mathcal{N}_{V,T}\times \mathcal{N}_{V,T}.\]
\end{cor}
\begin{proof}
Suppose $0\in\conv^{\circ}(\mathrm{wt}(K))$. The assumption that $\mathrm{wt}(K)$ spans $\mathbb R^r$ implies that $0$ is in the interior of the convex hull of $\mathrm{wt}(I)$ for any set $I$ containing $K$. Hence the third possible contribution in the statement of Proposition \ref{prop-SepVarDecompGen} does not occur.
\end{proof}
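To illustrate Corollary \ref{cor-sepvar1approx} in the smallest case, take $T=\Bbbk^*$ acting on $V=\Bbbk^2$ with weights $(1,-1)$. The only subset $I$ with $0\in\conv^{\circ}(\mathrm{wt}(I))$ is $I=\{1,2\}$, whose weights span $\mathbb R$, so the hypothesis holds. Here $\Bbbk[V]^T=\Bbbk[x_1x_2]$, the nullcone is the union of the two coordinate axes, and one checks that $\mathcal{S}_{V,T}$ is the irreducible hypersurface cut out by $x_1x_2\otimes 1-1\otimes x_1x_2$; this hypersurface coincides with $\overline{\Gamma_{V,T}}$ and already contains $\mathcal{N}_{V,T}\times \mathcal{N}_{V,T}$, in agreement with the stated decomposition.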
The following lemma gives a step towards establishing which irreducible components of $ \mathcal{N}_{V,T}\times \mathcal{N}_{V,T}$ are contained in $\overline{\Gamma_{V,T}}$.
\begin{lem}\label{lem-NullconeGraph}
Let $V_I\times V_J$ be an irreducible component of $\mathcal{N}_{V,T}\times \mathcal{N}_{V,T}$.
\begin{enumerate}
\item If $I\cap J=\emptyset$, then $V_I\times V_J \subseteq \overline{\Gamma_{V,T}}$. \label{lem-uvinGraph}
\item If $\ker_{\mathbb Z} A_{I\cap J}\neq \{0\}$, then $V_I\times V_J\not\subseteq \overline{\Gamma_{V,T}}$. \label{lem-uvnotinGraph}
\end{enumerate}
\end{lem}
\begin{proof}
(1): As $I$ and $J$ are disjoint, the maximality of $I$ implies that for each $j\in J$ there is an invariant with exponent vector $\alpha\in\mathbb N^n$ with $j\in\operatorname{supp}(\alpha)\subseteq I\cup \{j\}$. Hence $\alpha_jm_j+\sum_{i\in I}\alpha_im_i=0$. Using the Hilbert-Mumford criterion \cite[Theorem~C]{rwr:oooagalg}, one sees that since $V_I$ is a component of the nullcone, $I$ is maximal among subsets $K$ of $\{1,\ldots,n\}$ for which there exists $\delta\in \mathbb Z^r$ satisfying $\delta\cdot m_i> 0$ for all $i\in K$. It follows that
\[\delta\cdot m_j=-\nicefrac{1}{\alpha_j}\sum_{i\in I} \alpha_i(\delta\cdot m_i)<0.\]
The $r$-tuple $\delta$ corresponds to a 1-parameter subgroup of $T$, where the induced action of $t\in\Bbbk^*$ on a vector $w\in V$ is given by $t\cdot w:=(t^{\delta\cdot m_1}w_1,\ldots,t^{\delta\cdot m_n}w_n)$. Let $(u,v)\in V_I\times V_J$ be arbitrary. For each $t\in\Bbbk^*$,
\begin{equation}
(u+t^{-1}\cdot v, t\cdot (u+t^{-1}\cdot v))=(u+t^{-1}\cdot v,t\cdot u +v) \label{eqn-sequence}
\end{equation}
belongs to the graph. Note that $\delta\cdot m_i> 0$ for $i\in I$ and $\delta\cdot m_j< 0$ for $j\in J$ imply that (\ref{eqn-sequence}) is also well defined for $t=0$. It follows that $(u+t^{-1}\cdot v,t\cdot u +v)|_{t=0}=(u,v)$ must belong to the Zariski closure of the graph $\overline{\Gamma_{V,T}}$.
(2): If $\ker_{\mathbb Z} A_{I\cap J}\neq \{0\}$, then there is a rational invariant with support contained in $I\cap J$ and exponent vector $\beta\in \mathbb Z^n$. Fix $i_0\in \operatorname{supp}(\beta)$; without loss of generality we may assume that $i_0\in\operatorname{supp}(\beta^+)$. Note that since $A$ has no zero columns, $\beta\in \ker_{\mathbb Z} A_{I\cap J}$ implies that $|\operatorname{supp}(\beta)|\geq 2$. Define
\begin{align*}
u_i &= \left\{\begin{array}{cl}
1 & i\in \operatorname{supp} (\beta)\\
0 & \text{otherwise}\end{array}\right.\\
v_i &= \left\{\begin{array}{cl}
1 & i\in \operatorname{supp} (\beta)\setminus \{i_0\}\\
0 & \text{otherwise}\end{array}\right. .
\end{align*}
Then $(u,v)\in V_I\times V_J$ and
\[(x^{\beta^+}\otimes x^{\beta^-}-x^{\beta^-}\otimes x^{\beta^+})(u,v)=x^{\beta^+}(u)x^{\beta^-}(v)-x^{\beta^-}(u)x^{\beta^+}(v)=1\neq 0.\]
That is, $(u,v)\notin \overline{\Gamma_{V,T}}$, and so $V_I\times V_J\not\subseteq \overline{\Gamma_{V,T}}$.
\end{proof}
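The degeneration used in part (1) can be made concrete in the rank-one example with weights $(1,-1)$, where $I=\{1\}$, $J=\{2\}$ and $\delta=1$: the pair $(u+t^{-1}\cdot v,\,t\cdot u+v)$ lies on the graph for every $t\neq 0$ and specializes to $(u,v)$ at $t=0$. The following minimal sketch, in exact rational arithmetic, verifies this for a couple of values of $t$.

```python
from fractions import Fraction

# Rank-one torus k* acting on k^2 with weights m = (1, -1),
# so t acts by t . (w1, w2) = (t*w1, w2/t); here delta = 1.
def act(t, w):
    return (t * w[0], w[1] / t)

# u is supported on I = {1}, v on J = {2}
u = (Fraction(1), Fraction(0))
v = (Fraction(0), Fraction(1))

def curve(t):
    # the point (u + t^{-1}.v, t.(u + t^{-1}.v)) of the graph, t != 0
    first = tuple(x + y for x, y in zip(u, act(1 / t, v)))
    return first, act(t, first)

for t in (Fraction(1, 10), Fraction(1, 100)):
    first, second = curve(t)
    # the pair is ((1, t), (t, 1)), which tends to (u, v) as t -> 0
    assert first == (1, t) and second == (t, 1)
assert (1, 0) == u and (0, 1) == v
```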
\subsection{The separating variety for the affine cone over Segre-Veronese varieties}
In this subsection we consider the case of nonmodular Segre-Veronese varieties, that is, those with $a_i\in\Bbbk^*$ for each $i=1,\ldots,r$; by Lemma \ref{lem-nonmodSV}, this case suffices to determine the minimal size of separating sets in general.
\begin{prop}[Nonmodular Segre-Veronese]\label{prop-sepvarSV}
Consider the nonmodular Segre-Veronese variety which is the image of the closed embedding $\prod_{i=1}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N$ given by the line bundle $\mathcal{O}(a_1,\dots,a_r)$ and whose ring of homogeneous coordinates is identified with the ring of invariants $\Bbbk[W]^G$ as described in Section~\ref{section-SV}.
\begin{enumerate}
\item The separating variety $\mathcal{S}_{W,G}$ decomposes as follows: \label{prop-sepvarSV1}
\[\mathcal{S}_{W,G}=\bigcup_{\sigma\in H} (1,\sigma)(\mathcal{S}_{W,T})\,.\]
\item For $\sigma,\tau\in H$, if $(1,\sigma)(\mathcal{S}_{W,T})$ and $(1,\tau)(\mathcal{S}_{W,T})$ are distinct, then their intersection is $\mathcal{N}_{W,T}\times\mathcal{N}_{W,T}$. \label{prop-sepvarSVinter}
\item For $r=1$, $\mathcal{S}_{W,T}=W\times W$. \label{prop-sepvarS1}
\item For $r=2$, the decomposition of the separating variety $\mathcal{S}_{W,T}$ as a union of irreducibles is \label{prop-sepvarS2}
\[\mathcal{S}_{W,T}=\overline{\Gamma_{W,T}}\cup W_{\hat{1}}\times W_{\hat{1}} \cup W_{\hat{2}}\times W_{\hat{2}},\]
where $W_{\hat{k}}=\mathrm{span}\{w_{i,j}\mid i\neq k\}$. Furthermore, $\overline{\Gamma_{W,T}}$ is cut out by the ideal of $2\times 2$ minors
\[I_2\left(\begin{array}{cccccc}
x_{1,1}\otimes 1 & \cdots & x_{1,n_1}\otimes 1 & 1\otimes x_{2,1} & \cdots & 1\otimes x_{2,n_2} \\
1\otimes x_{1,1} & \cdots & 1\otimes x_{1,n_1} & x_{2,1}\otimes 1 & \cdots & x_{2,n_2}\otimes 1
\end{array}\right).\]
\item For $r\geq 3$, the decomposition of the separating variety as a union of irreducibles is\label{prop-sepvarS3}
\[\mathcal{S}_{W,T}=\overline{\Gamma_{W,T}} \cup \bigcup_{k,\ell=1}^r W_{\hat{k}}\times W_{\hat{\ell}}.\]
\end{enumerate}
\end{prop}
\begin{proof}
As $G$ is abelian, $T\trianglelefteq G$ with $G/T\cong H$ and $\Bbbk[W]^G=(\Bbbk[W]^T)^H$. That is, the quotient $\pi_G\colon W\to W/\!\!/G$ factors through the quotients $\pi_T\colon W\to W/\!\!/T$ and $\pi_H^T\colon W/\!\!/T\to (W/\!\!/T)/\!\!/H\cong W/\!\!/G$. By definition of the separating variety, we have
\[\begin{aligned}
\mathcal{S}_{W,G} & =\{(u,v)\in W\times W \mid \pi_G(u)=\pi_G(v)\}\\
& =\{(u,v)\in W\times W \mid \pi_H^T(\pi_T(u))=\pi_H^T(\pi_T(v))\}\,.
\end{aligned}\]
As $H$ is a finite group, it follows that
\[\begin{aligned}
\mathcal{S}_{W,G} & =\{(u,v)\in W\times W \mid \exists \sigma\in H,~\pi_T(u)=\sigma\cdot \pi_T(v)=\pi_T(\sigma\cdot v)\}\\
& =\{(u,v)\in W\times W \mid \exists \sigma\in H,~(u,\sigma\cdot v)\in\mathcal{S}_{W,T}\}\\
& =\{(u,v)\in W\times W \mid \exists \sigma\in H,~(u,v)\in (1,\sigma^{-1})(\mathcal{S}_{W,T})\}\\
&=\bigcup_{\sigma\in H} (1,\sigma)(\mathcal{S}_{W,T})\,.
\end{aligned}\]
Lemma \ref{lem-equal} implies that $(1,\sigma)(\mathcal{S}_{W,T})=(1,\tau)(\mathcal{S}_{W,T})$ if and only if $\sigma^{-1}\tau$ acts trivially on $W/\!\!/T$, since
\[ {(1,\sigma)(\mathcal{S}_{W,T})=(1,\tau)(\mathcal{S}_{W,T})} \quad \text{ if and only if } \quad {(1,1)(\mathcal{S}_{W,T})=(1,\sigma^{-1}\tau)(\mathcal{S}_{W,T})}\,.\]
So, if $(1,\sigma)(\mathcal{S}_{W,T})$ and $(1,\tau)(\mathcal{S}_{W,T})$ are distinct, then $\sigma^{-1}\tau$ acts nontrivially on $W/\!\!/T$. The $H$-action on $W/\!\!/T$ extends naturally to the representation ${\rho\colon H \to \operatorname{GL}(V)}$, where $V$ has basis $\{e_{j_1,\ldots,j_r} \mid j_i\in [n_i]\}$, $\pi_T(0)$ coincides with the origin in $V$, and $\rho(\gamma)(e_{j_1,\ldots,j_r})=\prod_{i=1}^{r}\zeta_{a_i}^{m_i}e_{j_1,\ldots,j_r}$ for each $\gamma=(\zeta_{a_1}^{m_1},\ldots,\zeta_{a_r}^{m_r})\in H$. Hence $\sigma^{-1}\tau$ acts nontrivially on $W/\!\!/T$ if and only if it acts nontrivially on $V$, in which case $V^{\sigma^{-1}\tau}$ is simply the origin. It then follows that $(W/\!\!/T)^{\sigma^{-1}\tau}=\{\pi_T(0)\}$.
An element of the intersection $(1,\sigma)(\mathcal{S}_{W,T})\cap(1,\tau)(\mathcal{S}_{W,T})$ is of the form $(u,\sigma\cdot v)=(u',\tau\cdot v')$ for some $(u,v),(u',v')\in \mathcal{S}_{W,T}$. Then $u=u'$ and $v=\sigma^{-1}\tau\cdot v'$, and so
\[\pi_T(v')=\pi_T(u')=\pi_T(u)=\pi_T(v)=\pi_T(\sigma^{-1}\tau\cdot v')=\sigma^{-1}\tau\cdot \pi_T(v')\,;\]
that is,
\[\pi_T(u)=\pi_T(v)=\pi_T(v')\in (W/\!\!/T)^{\sigma^{-1}\tau}=\{\pi_T(0)\}\,.\]
As
\[\pi_T(\sigma\cdot v)=\sigma\cdot\pi_T(v)=\sigma\cdot\pi_T(0)=\pi_T(0)\,,\] we conclude that $(1,\sigma)(\mathcal{S}_{W,T})\cap(1,\tau)(\mathcal{S}_{W,T})=\mathcal{N}_{W,T}\times\mathcal{N}_{W,T}$ as desired.
Statement \ref{prop-sepvarS1} is clear since there are no nonconstant invariants. So suppose $r\geq 2$. Observe that $0$ does not lie in the convex hull of any proper subset of the set of weights. In particular, the conditions of Corollary \ref{cor-sepvar1approx} are met and so
\[\mathcal{S}_{W,T}=\overline{\Gamma_{W,T}}\ \bigcup \ \mathcal{N}_{W,T}\times \mathcal{N}_{W,T}\,.\]
Furthermore, by Lemma \ref{lem-nullcone}, we have
\[\mathcal{N}_{W,T}=\bigcup_{k=1}^r W_{\hat{k}}.\]
It remains to establish which products $W_{\hat{k}}\times W_{\hat{\ell}}$ are in $\overline{\Gamma_{W,T}}$.
We first consider the case $r=2$. In this case the intersection of the two components is the origin. Hence, by Lemma \ref{lem-uvinGraph}, the decomposition of the separating variety is as in statement \ref{prop-sepvarS2}. The ideal given in this same statement is exactly the toric ideal of the Lawrence lifting of the matrix of weights $A$,
\[\Lambda(A):=\left(\begin{array}{cc} A & 0\\ I & I\end{array}\right),\]
where $I$ denotes the $n\times n$ identity matrix. Thus it is prime and has height $n-1$ (see \cite[Chapter 7, page 55]{bs:gbcp}). It is easy to see that this ideal vanishes on the graph, and so its zero set in $W\times W$ contains the closure of the graph. As $T$ is connected and $A$ has rank $1$, the closure of the graph is itself an irreducible variety of dimension $n+1$. It follows that the toric ideal associated to $\Lambda(A)$ is the defining ideal of $\overline{\Gamma_{W,T}}$.
Let us now consider the case $r\geq 3$. First note that in this case, the intersection of two irreducible components of the nullcone contains the weight space of at least one weight. Let $K$ be the support of this intersection. As the weight space of each weight has dimension at least 2, it follows that $\ker A_K\neq \{0\}$, and so by Lemma \ref{lem-uvnotinGraph}, the product $W_{\hat{k}}\times W_{\hat{\ell}}$ is never contained in $\overline{\Gamma_{W,T}}$, and the decomposition is as stated.
\end{proof}
\begin{lem}
We have $\mathcal{N}_{W,T}=\bigcup_{k=1}^r W_{\hat{k}}$, where $W_{\hat{k}}=\mathrm{span}\{w_{i,j}\mid i\neq k\}$.
\end{lem}
\begin{proof}
This follows directly from Lemma \ref{lem-nullcone}, since in this case zero is not in the convex hull of any proper subset of $\mathrm{wt}([n])$. Indeed, any proper subset of the $r+1$ distinct weights is linearly independent.
\end{proof}
\begin{lem}\label{lem-equal}
$(1,1)(\mathcal{S}_{W,T})=(1,\sigma)(\mathcal{S}_{W,T})$ if and only if $\sigma$ acts trivially on $W/\!\!/T$.
\end{lem}
\begin{proof}
Suppose $(1,1)(\mathcal{S}_{W,T})=(1,\sigma)(\mathcal{S}_{W,T})$. Then for any $(u,v)\in \mathcal{S}_{W,T}$ there exists $(u',v')\in \mathcal{S}_{W,T}$ such that $(u,v)=(u',\sigma\cdot v')$. It follows that
\[\pi_T(u)=\pi_T(v)=\pi_T(\sigma\cdot v')=\sigma\cdot\pi_T(v')=\sigma\cdot\pi_T(u')=\sigma\cdot \pi_T(u)\,,\]
and so $\pi_T(u)=\sigma\cdot \pi_T(u)$. As $u$ was arbitrary and $\pi_T$ is surjective (since $T$ is a reductive group; see for example \cite[Lemma 2.3.1]{hd-gk:cit}), it follows that $\sigma$ acts trivially on $W/\!\!/T$.
On the other hand, suppose $\sigma$ acts trivially on $W/\!\!/T$; then of course so does $\sigma^{-1}$. For any $(u,v)\in\mathcal{S}_{W,T}$, we have $(u,v)=(u, \sigma\cdot(\sigma^{-1}\cdot v))\in (1,\sigma)(\mathcal{S}_{W,T})$, since
\[\pi_T(u)=\pi_T(v)=\sigma^{-1}\cdot\pi_T(v)=\pi_T(\sigma^{-1}\cdot v)\,.\]
\end{proof}
\section{Upper bounds on the size of separating sets}\label{section-UpperBounds}
One can find an upper bound on the size of separating sets, given some knowledge of the secant variety of the embedding. In this section, for a projective variety $X \subseteq \mathbb{P}^n$, we define the \emph{secant set} of $X$ to be
\[ \sigma (X) = \bigcup_{ x,x' \in X, x \not= x'} \langle x , x' \rangle \subseteq \mathbb{P}^n,\]
where $\langle \ \rangle$ denotes linear span. The \emph{secant variety} of $X$ is the closure ${\operatorname{Sec}(X)}=\overline{\sigma(X)}$ of the secant set of $X$.
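A classical example of the distinction, say over $\Bbbk=\mathbb C$, is the twisted cubic $X=v_3(\mathbb{P}^1)\subseteq\mathbb{P}^3$: its secant variety is all of $\mathbb{P}^3$, yet the binary form $x^2y$ is not a sum of two cubes of linear forms (its Waring rank is 3), so $[x^2y]\in\operatorname{Sec}(X)\setminus\sigma(X)$; it lies instead on the tangent line to $X$ at $[x^3]$.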
If $p \in \mathbb{P}^n$, we write $\pi_p$ for projection from $p$ onto a hyperplane.
\begin{lem}\label{lem-closed-secant} Let $X\subseteq \mathbb{P}^n$ be a projective variety, and $p \notin {\operatorname{Sec}(X)}$. If $\sigma(X)$ is not closed, then $\sigma(\pi_p(X))$ is not closed.
\end{lem}
\begin{proof}
Without loss of generality, suppose that $p=[0: \cdots : 0 : 1] \notin {\operatorname{Sec}(X)}$. Then projection from $p$ induces a well defined morphism ${\operatorname{Sec}(X)}\to \mathbb{P}^{n-1}$ that sends a point ${[x_1: \cdots : x_n : x_{n+1}]}$ to ${[x_1: \cdots : x_n]}$.
Suppose that $\sigma(\pi_p(X))$ is closed, and $\sigma(X)$ is not. Then, by the valuative criterion for properness (see, e.g. \cite[Theorem 4.7]{rh:ag}), there exists a punctured formal curve $\lambda_t, t\neq 0$ in $\sigma(X)$ that does not admit a limit in $\sigma(X)$ and such that $\gamma_t=\pi_p(\lambda_t)$ admits a limit in $\sigma(\pi_p(X))$.
Choose curves $a_t, b_t \subset \mathbb{A}^{n+1}$ and $u_t, v_t \subset \mathbb{A}^1$ such that $[a_t], [b_t] \subset X$ for $t\neq 0$ and $[u_t a_t + v_t b_t] = \lambda_t$. (Since the secant set is parametrized by $((X \times X)\setminus \Delta) \times \mathbb{A}^2$, this is possible locally.) For a point $x \in\mathbb{A}^{n+1}$ write $x'$ for the projection onto the first $n$ coordinates and $x''$ for the last coordinate. Since $\gamma_t$ converges, after possibly rescaling, we have that $u_t a'_t + v_t b'_t$ converges and $u_t a''_t + v_t b''_t$ has a pole. Dividing through by the latter, we see that the limit of $\lambda_t$ is $p$, contradicting that $p \notin {\operatorname{Sec}(X)}$.
\end{proof}
\begin{lem}\label{lem-spec-iso-projection} Let $A$ be a graded $\Bbbk$-algebra generated in a single degree, and let $X=\operatorname{proj} A$. If $p\notin \sigma(X)$, then $\pi_p$ induces a bijective map of $\operatorname{Spec} A$ onto its image.
\end{lem}
\begin{proof} We choose coordinates so that $p=[0:\cdots:0:1]$. We lift the map $\pi_p$ to $\pi_{p,\text{aff}}:\mathbb{A}^{n+1} \to \mathbb{A}^{n}$, $(x_1 ,\dots ,x_n , x_{n+1}) \mapsto (x_1 , \dots ,x_n)$. Now, the fiber $\pi_{p,\text{aff}}|_{\operatorname{Spec} A}^{-1}(0)$ consists only of $0$, since otherwise $[ 0:\cdots :0:1]$ would have to be in $X$, a contradiction. Suppose there are two distinct points of $\operatorname{Spec} A$ that are mapped to the same point. They must then be of the form $(a_1 ,\dots, a_n , b)$ and $(a_1 ,\dots, a_n , b') \in \operatorname{Spec} A$ with $b\neq b'$. But this forces $[0: \cdots : 0 :1]\in\sigma(X)$, again a contradiction.
\end{proof}
\begin{cor}\label{cor-upperbound}
Let $R$ be a subalgebra of a standard graded polynomial ring. Suppose that there is a separating set for $R$ consisting of homogeneous polynomials of the same degree and let $A\subseteq R$ be the subalgebra it generates. Let $X=\operatorname{proj} A$. Any set of $\dim \operatorname{Sec}(X)+1$ generic linear combinations of the original separating set will be a separating set. If, in addition, the secant set of $X$ does not fill its secant variety, then there exists a separating set of size $\dim \operatorname{Sec}(X)$.
\end{cor}
\begin{proof} If $Y\subseteq \mathbb{P}^n$ is a projective variety such that $\dim \operatorname{Sec}(Y)<n$, then a generic point satisfies $p \notin \operatorname{Sec}(Y)$, and the restriction of $\pi_p$ to $Y$ induces a homeomorphism onto the image. By Lemma~\ref{lem-spec-iso-projection}, this extends to a homeomorphism on the level of $\operatorname{Spec}$. Each such projection decreases the dimension of the ambient space by one, and the codimension of the secant variety by at most one. Applying this repeatedly to projections of $X$, one obtains the first statement.
By Lemma~\ref{lem-closed-secant}, if $\sigma(X) \neq \operatorname{Sec}(X)$, this inequality is preserved by the projections $\pi_p$ above. Then, by Lemma~\ref{lem-spec-iso-projection}, one may project again to obtain a homeomorphism on $\operatorname{Spec}$, and decrease the dimension of the ambient space by one more.
\end{proof}
\begin{cor}\label{cor-torusupperbound}
Let $V$ be an $n$-dimensional representation of a torus of rank $r$. Then there exists a separating set of size at most $2n-2r$.
\end{cor}
\begin{proof}
In this situation, it may not be possible to choose monomials of the same degree as generators for the ring of invariants. Let $f_1,\ldots,f_t$ be a set of monomial generators for ${\kk[V]}^T$ of degrees $d_1,\ldots,d_t$. Let $x_0$ be an additional variable. Set $d:=\max\{d_i\mid i=1,\ldots, t\}$ and $F_i:=x_0^{d-d_i}f_i$, so that each $F_i$ has degree $d$. Note that $A:=\Bbbk[F_1,\ldots,F_t]$ is isomorphic to ${\kk[V]}^T$. In particular, it has the same dimension $n-r$, and so $X:=\operatorname{proj} A$ has dimension $n-r-1$ and its secant variety has dimension at most $2n-2r-1$. Applying Corollary~\ref{cor-upperbound}, it follows that there exists an injective morphism $\operatorname{Spec}(A)\to \mathbb{A}^{2n-2r}$. As $A\cong{\kk[V]}^T$, there exists an injective morphism $V/\!\!/T\to\mathbb{A}^{2n-2r}$; that is, there exists a separating set of size $2n-2r$.
\end{proof}
\begin{rmk} The first statement of Corollary~\ref{cor-upperbound} may also be justified as follows. Recall that the \emph{analytic spread} of an ideal $I$ in a graded ring $(R,\mathfrak{m},\Bbbk)$ is the smallest size of a generating set of a minimal reduction of $I$; if the residue field of the ring is infinite, then generic linear combinations of the minimal generators generate a minimal reduction. This number also coincides with the dimension of the special fiber ring $R[It]\otimes \Bbbk$. If $I$ is generated in a single degree $d$, the special fiber ring is a subalgebra of $R$ generated by minimal generators of $I$. See \cite[Chapter~5]{SwansonHuneke} for a thorough treatment of analytic spread.
We claim that the special fiber ring of $\mathcal{I}_{V,G}$ is the coordinate ring of $\operatorname{Sec}(\operatorname{proj}(R^G))$. Indeed, this secant variety is the projectivization of the set of points of the form
\begin{align*}
&\big(\,a f_1(v) + a' f_1(v')\,,\,\dots\,,\,a f_t(v) + a' f_t(v')\,\big) \\
&=\big(\, f_1(\sqrt[d]{a}\, v) - f_1(\sqrt[d]{-a'}\, v')\,,\,\dots\,,\,f_t(\sqrt[d]{a}\, v) - f_t(\sqrt[d]{-a'}\, v')\,\big)
\end{align*}
where $a,a'\in \Bbbk, v,v' \in V,$ and $f_1,\dots,f_t$ are minimal generators for $R^G$, and hence its coordinate ring is isomorphic to $\Bbbk[f_1\otimes 1 - 1 \otimes f_1, \dots, f_t\otimes 1 - 1 \otimes f_t]$.
Consequently, the analytic spread of $\mathcal{I}_{V,G}$ is $s=\dim \operatorname{Sec}(\operatorname{proj}(R^G)) +1$. For a generic $s \times t$ matrix of scalars $A$, we have that $[f_1\otimes 1 - 1 \otimes f_1, \dots, f_t\otimes 1 - 1 \otimes f_t] \cdot A$ generates a minimal reduction $J$ of $\mathcal{I}_{V,G}$, and hence agrees with $\mathcal{I}_{V,G}$ up to radical. But then, setting $[f_1, \dots, f_t] \cdot A = [g_1, \dots, g_s]$, we have that $J=(g_1\otimes 1 - 1 \otimes g_1, \dots, g_s\otimes 1 - 1 \otimes g_s)$. Thus, $(g_1,\dots,g_s)$ is a separating set for $G$.
\end{rmk}
\begin{eg}[Veronese varieties]\label{eg-veroneses-example}
We consider a Veronese variety which is the image of the closed embedding $ \mathbb{P}^{n_1-1} \hookrightarrow \mathbb{P}^N$ given by the line bundle $\mathcal{O}(a_1)$ and we suppose that $a_1$ is not 1 or a power of $\operatorname{char} \Bbbk$. As discussed in Section \ref{section-SV}, its ring of homogeneous coordinates is equal to the ring of invariants of the cyclic group $\mu_{a_1}$ of $a_1$-th roots of unity acting diagonally. The secant variety of this Veronese variety has dimension $2(n_1-1)$ when $a_1=2$ and $2(n_1-1)+1$ otherwise (this is classical). Hence Corollary \ref{cor-upperbound} implies that the minimal size of a separating set for the affine cone is at most $2n_1-1$ when $a_1=2$ and $2n_1$ otherwise. On the other hand, for all $a_1$, one can construct a separating set of size $2n_1-1$ (see \cite[Proposition 5.2.2]{ed:si}), and this is the minimal size of a separating set (this follows from \cite[Theorem 3.4]{ed-jj:silc}). Note that the invariants forming this separating set are linear combinations of monomials from the minimal generating set given above, that is, they come from a (non-generic) linear projection of the Veronese variety. It follows that for $a_1>2$, the set of points belonging to secant lines does not fill out the secant variety (this is already well known).\hfill $\triangleleft$
\end{eg}
With an eye towards the last part of Corollary~\ref{cor-upperbound}, we study when the set of secant lines fills the secant variety of a Segre-Veronese variety. The following proposition is well known in the case of Segre varieties; it translates to the fact that finding a closest rank-2 approximation of a tensor is an ill-posed problem.
\begin{prop}\label{prop-notclosed} Let $X$ be the Segre-Veronese variety which is the image of the closed embedding $\prod_{i=1}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N$ given by the line bundle $\mathcal{O}(a_1,\dots,a_r)$. If $r>2$ or $r=2$ and $(a_1,a_2)\neq(1,1)$, then the set of secant lines to $X$ does not fill the secant variety of $X$.
\end{prop}
\begin{proof} We will write vectors in tensor notation.
First, let $r>2$. Set \[w = \phi_{a_4}(1,0,\dots,0) \otimes \cdots \otimes \phi_{a_r}(1,0,\dots,0)\] (to be read as an empty tensor factor when $r=3$),
and
\[
\begin{aligned} v_{\lambda} = \lambda \cdot \phi_{a_1}(1, \lambda^{-1}, 0, \dots, 0) \otimes \phi_{a_2}(1, \lambda^{-1}, 0, \dots, 0) \otimes \phi_{a_3}(1, \lambda^{-1}, 0, \dots, 0) \otimes w\\
- \lambda \cdot \phi_{a_1}(1, 0, \dots, 0) \otimes \phi_{a_2}(1, 0, \dots, 0) \otimes \phi_{a_3}(1, 0, \dots, 0) \otimes w
\end{aligned} \]
for $\lambda \in \Bbbk$,
and
\[ \begin{aligned}
v_{\infty} = \phi_{a_1}(1, 0, \dots, 0) \otimes \phi_{a_2}(1, 0, \dots, 0) \otimes \phi_{a_3}(0,1, 0, \dots, 0) \otimes w \\ + \phi_{a_1}(1, 0, \dots, 0) \otimes \phi_{a_2}(0,1,0, \dots, 0) \otimes \phi_{a_3}(1, 0, \dots, 0) \otimes w \\ + \phi_{a_1}(0,1,0 \dots, 0) \otimes \phi_{a_2}(1, 0, \dots, 0) \otimes \phi_{a_3}(1, 0, \dots, 0) \otimes w\,.\end{aligned}\]
Performing a multilinear expansion and canceling terms, one sees that $\{ v_\lambda \ | \ \lambda \in \Bbbk\} \cup \{ v_{\infty} \}$ forms a locally closed subset of $\mathbb{P}^N$. Clearly, $\{ v_\lambda \ | \ \lambda \in \Bbbk\}$ is contained in the set of secant lines to $X$, but $v_{\infty}$ is a tensor of rank 3 (see, e.g., \cite{vds-lhl:tripblrap}), and hence is not contained in the secant set, although it is in the secant variety.
Now let $r=2$. Set
\[ \begin{aligned} v_{\lambda}= \lambda \cdot \phi_{a_1}(1, \lambda^{-1}, 0, \dots, 0) \otimes \phi_{a_2}(1, \lambda^{-1}, 0, \dots, 0) \\
- \lambda \cdot \phi_{a_1}(1, 0, \dots, 0) \otimes \phi_{a_2}(1, 0, \dots, 0)
\end{aligned}\]
for $\lambda \in \Bbbk$, and
\[ v_{\infty}= e_{a_1-1,1,0,\dots,0} \otimes e_{a_2,0, \dots,0} + e_{a_1,0, \dots,0} \otimes e_{a_2-1,1,0,\dots,0} \,,\]
where $e_{\alpha}$ is the basis vector in the coordinate corresponding to the monomial with exponent $\alpha$ under the Veronese map. One again verifies that $\{ v_\lambda \ | \ \lambda \in \Bbbk\} \cup \{ v_{\infty} \}$ is locally closed. It remains to show that $v_{\infty}$ does not lie on a secant line. In the case $a_1=2, a_2=1$, this can be verified in Macaulay2 by writing down the system of equations expressing this vector as a sum of two elements of $X$ and checking that the ideal it generates is the unit ideal. In the case of larger $a_i$, one sees that the coordinates corresponding to the $\mathcal{O}(2,1)$ case give the same system of equations multiplied by a uniform scalar, and hence again there is no solution.
\end{proof}
\begin{prop}[Upper bounds on the size of separating sets for the affine cone over Segre-Veronese varieties]\label{prop-UBSV}
We consider the Segre-Veronese variety which is the image of the closed embedding $\prod_{i=1}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N$ given by the line bundle $\mathcal{O}(a_1,\dots,a_r)$. Then the minimal size of a separating set for the affine cone is bounded above by
\begin{enumerate}
\item $2(n_1+n_2)-4$, if $r=2$ and $a_1,a_2$ are either 1 or a power of $\operatorname{char}\Bbbk$.\label{prop-secantBound2}
\item $2\sum_{i=1}^r n_i - 2r +1$, in all other cases.\label{prop-secantBound3}
\end{enumerate}
Moreover, in case~(\ref{prop-secantBound2}) a separating set satisfying the bound can be obtained by taking generic linear combinations of a generating set of invariant monomials.
\end{prop}
\begin{proof}
In case~(\ref{prop-secantBound2}), we may assume that $(a_1,a_2)=(1,1)$ by Lemma \ref{lem-nonmodSV}. Then the secant variety is the variety of matrices of rank at most 2, which has the dimension indicated.
In general, and hence in case~(\ref{prop-secantBound3}), the dimension of the secant variety is bounded above by $2\big(\sum_{i=1}^r (n_i-1)\big)+1$, and the secant set does not fill it by Proposition~\ref{prop-notclosed}, so the bound follows from Corollary~\ref{cor-upperbound}.
\end{proof}
\begin{eg} In the case of a Segre product with two factors and $n_1=3$, the set
\begin{align*}&x_{1,1} x_{2,1}, \ x_{1,1} x_{2,2},\ x_{1,2} x_{2,1} ,\ x_{1,2} x_{2,n_2},\ x_{1,3} x_{2,{n_2-1}} ,\ x_{1,3} x_{2,n_2} ,\\ &u_i:=x_{1,1} x_{2,i+1} - x_{1,2} x_{2,i},\ v_i:=x_{1,2} x_{2,i} - x_{1,3} x_{2,i-1},\ \quad i=2,\dots,n_2-1
\end{align*}
is a separating set. Indeed, by induction on $n_2$ it suffices to show that the values of $x_{1,1} x_{2,3}, x_{1,2} x_{2,2},$ and $x_{1,3} x_{2,1}$ can be recovered from those of $x_{1,1} x_{2,1}, x_{1,1} x_{2,2}, x_{1,2} x_{2,1}, u_2, $ and $v_2$. If $x_{1,1} x_{2,1} \neq 0$, then one has $x_{1,2} x_{2,2} = \frac{ x_{1,1} x_{2,2} \cdot x_{1,2} x_{2,1} }{x_{1,1} x_{2,1}}$. If $x_{1,1} x_{2,1} = 0$ and $x_{1,2} x_{2,1} \neq 0$, then $x_{1,1} = 0$, so $x_{1,1} x_{2,3} = 0$, from which $x_{1,2} x_{2,2}$ and $x_{1,3} x_{2,1}$ can be obtained using $u_2$ and $v_2$. The case $x_{1,1} x_{2,1} = 0$ and $x_{1,1} x_{2,2} \neq 0$ is similar. Finally, if $x_{1,1} x_{2,1} = x_{1,2} x_{2,1} = x_{1,1} x_{2,2} = 0$, then at most one of $x_{1,1} x_{2,3}, x_{1,2} x_{2,2},$ and $x_{1,3} x_{2,1}$ is nonzero. If one of these is nonzero, then the two of $u_2, v_2,$ and $x_{1,3} x_{2,1} - x_{1,1} x_{2,3}= - u_2 - v_2$ containing that monomial are equal to it (up to sign), while if all three monomials are zero, these three binomials are zero. Thus, one can determine which of $x_{1,1} x_{2,3}, x_{1,2} x_{2,2},$ and $x_{1,3} x_{2,1}$ is nonzero, and its value, from the values of $u_2$ and $v_2$.\hfill $\triangleleft$
\end{eg}
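The case analysis above can be checked mechanically. The following minimal sketch, for the smallest instance $n_1=n_2=3$, brute-forces over a grid of integer points and asserts that whenever the candidate separating set takes equal values at two points, all generating invariants $x_{1,i}x_{2,j}$ do as well.

```python
from itertools import product

def monomials(a, b):
    # all generating invariants x_{1,i} x_{2,j} of the Segre product
    return tuple(a[i] * b[j] for i in range(3) for j in range(3))

def separating_set(a, b):
    # the candidate separating set of the example, written out for n_2 = 3
    u2 = a[0] * b[2] - a[1] * b[1]  # u_2 = x_{1,1}x_{2,3} - x_{1,2}x_{2,2}
    v2 = a[1] * b[1] - a[2] * b[0]  # v_2 = x_{1,2}x_{2,2} - x_{1,3}x_{2,1}
    return (a[0] * b[0], a[0] * b[1], a[1] * b[0],
            a[1] * b[2], a[2] * b[1], a[2] * b[2], u2, v2)

def check(grid):
    # equal values on the separating set must force equal values
    # on every generating invariant
    seen = {}
    for a in product(grid, repeat=3):
        for b in product(grid, repeat=3):
            key = separating_set(a, b)
            assert seen.setdefault(key, monomials(a, b)) == monomials(a, b)
    return True
```

Running `check([-1, 0, 1, 2])` exercises, in particular, the degenerate branches of the case analysis where several of the listed products vanish.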
\section{Lower bounds on the size of separating sets}\label{section-LowerBounds}
In this section, we focus on the affine cones over Segre-Veronese varieties. We will give lower bounds for the sizes of separating sets. In most cases, these lower bounds agree with the upper bounds in the previous section, thus giving the precise cardinality of a minimal separating set. Our technique is based on the following observation and relies on the use of local cohomology (\cite{sbi-gjl-al-cm-em-aks-uw:thlc} provides a good reference).
\begin{lem}\label{lem-seprankLC}\cite[Section~3]{ed-jj:silc}
Let $G$ act linearly on $V$. Then the minimal size of a separating set for $G$ is bounded below by the maximum $i$ such that $\HH{i}{\mathcal{I}(\mathcal{S}_{V,G})}{{\kk[V^2]}}$ is nonzero.
\end{lem}
We will require two elementary lemmas on local cohomology. The second lemma below, while well known to experts, is proved here for lack of an appropriate reference.
\begin{lem}\label{lem-specialization}(see,~e.g.,~\cite[Theorem~9.6]{sbi-gjl-al-cm-em-aks-uw:thlc})
Let $I$ and $J$ be ideals in a noetherian ring $A$. Then
\[{\operatorname{cd}(I,A)\geqslant \operatorname{cd}(I(A/J),A/J)}\,,\] where $\operatorname{cd}$ denotes the \emph{cohomological dimension}, the greatest nonvanishing index of the local cohomology.
\end{lem}
\begin{lem}\label{lem-Kunneth} Let $A$ and $B$ be $\Bbbk$-algebras, where $\Bbbk$ is a field. Let $\mathfrak{a}\subset A$ and $\mathfrak{b} \subset B$ be ideals. Set $C=A \otimes_{\Bbbk} B$ and $\mathfrak{c}=\mathfrak{a} C + \mathfrak{b} C$. Then $\HH{k}{\mathfrak{c}}{C}\cong \bigoplus_{i+j=k} \HH{i}{\mathfrak{a}}{A} \otimes_{\Bbbk} \HH{j}{\mathfrak{b}}{B}$. In particular, the cohomological dimension of $\mathfrak{c}$ is the sum of the cohomological dimensions of $\mathfrak{a}$ and of $\mathfrak{b}$.
\end{lem}
\begin{proof} Let $\mathfrak{a} = (f_1,\dots, f_s)$ and $\mathfrak{b}=(g_1, \dots, g_t)$. One computes $\HH{k}{\mathfrak{c}}{C}$ via the \v{C}ech complex $\{f_1,\dots, f_s, g_1, \dots, g_t\}$ on $C$, which we denote $\check{\mathcal{C}}(\{\underline{f},\underline{g}\},C)$. One verifies that we have an isomorphism of complexes \[\check{\mathcal{C}}(\{\underline{f},\underline{g}\},C)= \mathrm{Tot}\big(\check{\mathcal{C}}(\{\underline{f}\},A)\otimes_{\Bbbk}\check{\mathcal{C}}(\{\underline{g}\},B)\big)\,.\]
As this is a tensor product of complexes of free modules (over $\Bbbk$), the K\"unneth formula yields an isomorphism
\[H^{\bullet}(\check{\mathcal{C}}(\{\underline{f},\underline{g}\},C))= \mathrm{Tot}\big(H^{\bullet}\check{\mathcal{C}}(\{\underline{f}\},A)\otimes_{\Bbbk} H^{\bullet}\check{\mathcal{C}}(\{\underline{g}\},B)\big)\,.\]
As the \v{C}ech complexes $\check{\mathcal{C}}(\{\underline{f}\},A)$ and $\check{\mathcal{C}}(\{\underline{g}\},B)$ compute $\HH{i}{\mathfrak{a}}{A}$ and $\HH{j}{\mathfrak{b}}{B}$, the lemma is established.
\end{proof}
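As a quick illustration of Lemma~\ref{lem-Kunneth} (a standard computation, included here only as a sanity check), one recovers the cohomological dimension of the irrelevant maximal ideal of a polynomial ring in two groups of variables:

```latex
% Illustration of Lemma \ref{lem-Kunneth}; standard, not needed later.
% Take A = \Bbbk[x_1,...,x_m] and B = \Bbbk[y_1,...,y_n], with \mathfrak{a} and
% \mathfrak{b} the irrelevant maximal ideals, so that \HH{i}{\mathfrak{a}}{A} = 0
% for i \neq m and \HH{j}{\mathfrak{b}}{B} = 0 for j \neq n. The lemma gives
\[
\HH{k}{\mathfrak{c}}{C} \cong
\begin{cases}
\HH{m}{\mathfrak{a}}{A} \otimes_{\Bbbk} \HH{n}{\mathfrak{b}}{B} \neq 0 & \text{if } k = m+n,\\
0 & \text{otherwise,}
\end{cases}
\]
% recovering \operatorname{cd}(\mathfrak{c}) = m+n for the irrelevant maximal
% ideal \mathfrak{c} of C = \Bbbk[x_1,...,x_m,y_1,...,y_n].
```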
We obtain the first main result of the section.
\begin{thm}\label{thm-segvarUB} Let $X$ be the Segre-Veronese variety corresponding to the line bundle $\mathcal{O}(a_1,\dots,a_r)$. If at least one $a_i$ is not $1$ or $p^e$, where $p=\mathrm{char}(\Bbbk)$, then the minimal size of a separating set for the affine cone over $X$ is at least $2\sum_{i=1}^r n_i - 2r +1$.
\end{thm}
\begin{proof} By Lemma~\ref{lem-nonmodSV} we may assume that $a_i\in\Bbbk^*$ for all $i=1,\ldots,r$. We will show that the cohomological dimension of $\mathcal{I}(\mathcal{S}_{W,G})$ is equal to $s={2\sum_{i=1}^r n_i - 2r +1}$. By Proposition~\ref{prop-sepvarSV}, we can decompose $\mathcal{S}_{W,G}$ as a union of components isomorphic (via an automorphism of $W$) to $\mathcal{S}_{W,T}$. By Lemma~\ref{lem-equal} and the assumption on the $a_i$'s, there are at least two such components, and again by Proposition~\ref{prop-sepvarSV}, the intersection of any pair of distinct components is $\mathcal{N}_{W,G} \times \mathcal{N}_{W,G}$; note that this implies that the intersection of any union of components with another component is $\mathcal{N}_{W,G} \times \mathcal{N}_{W,G}$. Label the ideals of the distinct components as $\mathfrak{a}_1,\dots,\mathfrak{a}_t$, set $\mathcal{N}=\mathcal{I}(\mathcal{N}_{W,G} \times \mathcal{N}_{W,G})$, and set $\mathfrak{b}_i=\mathfrak{a}_1 \cap \cdots \cap \mathfrak{a}_i$, so that $\mathcal{I}(\mathcal{S}_{W,G})=\mathfrak{b}_t$. There is a Mayer-Vietoris long exact sequence:
\[ \begin{aligned}
\cdots \longrightarrow \HH{i}{\mathcal{N}}{{\kk[V^2]}} \longrightarrow &\HH{i}{\mathfrak{b}_j}{{\kk[V^2]}} \oplus \HH{i}{\mathfrak{a}_{j+1}}{{\kk[V^2]}} \longrightarrow \HH{i}{\mathfrak{b}_{j+1}}{{\kk[V^2]}} \\
\longrightarrow &\HH{i+1}{\mathcal{N}}{{\kk[V^2]}} \longrightarrow \HH{i+1}{\mathfrak{b}_j}{{\kk[V^2]}} \oplus \HH{i+1}{\mathfrak{a}_{j+1}}{{\kk[V^2]}}\longrightarrow \cdots \end{aligned}\]
By Proposition~\ref{prop-UBSV}, there exists a separating set of size $s$ for the $T$-invariants. By Lemma~\ref{lem-seprankLC}, the cohomological dimension of $\mathfrak{a}_i$ is bounded above by~$s$. By induction on $i$, using that the cohomological dimension of $\mathcal{N}$ is $s+1$ from Lemma~\ref{lem-Nvg_Nvg}, it follows that $\HH{s}{\mathcal{I}(\mathcal{S}_{W,G})}{{\kk[V^2]}}$ is nonzero.
\end{proof}
\begin{cor}[{\cite[Theorem~4.2]{ha-mcb:odsvsvv}}]\label{cor:nondeg} For $X$ as above, the secant variety of $X$ is not defective.
\end{cor}
\begin{proof} Suppose the secant variety is defective, i.e., $\dim\operatorname{Sec}(X) < 2 \dim(X) +1 = 2(\sum_{i=1}^r (n_i-1))+1$. Then, by Proposition~\ref{prop-notclosed} and Corollary~\ref{cor-upperbound}, there exists a separating set of size less than $2(\sum_{i=1}^r (n_i-1))+1=2\sum_{i=1}^rn_i-2r+1$, which contradicts Theorem~\ref{thm-segvarUB}.
\end{proof}
\begin{lem}\label{lem-Nvg_Nvg} Let $Y$ be the affine cone over the nonmodular Segre-Veronese variety $\prod_{i=1}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N$ given by the line bundle $\mathcal{O}(a_1,\dots,a_r)$. The cohomological dimension of $\mathcal{I}(\mathcal{N}_{W,G} \times \mathcal{N}_{W,G})$ is $2 \sum_{i=1}^r n_i -2r +2$.
\end{lem}
\begin{proof} By Lemma~\ref{lem-Kunneth}, it is equivalent to compute the cohomological dimension of $\mathcal{I}(\mathcal{N}_{W,G})$. We have that
\[ \begin{aligned}
&\mathcal{I}(\mathcal{N}_{W,G})= \\
&\hspace{1cm} \big(\ M_1 \cdots M_r \ \big| \ M_i \ \text{is a monomial of degree $a_i$ in the variables} \ x_{i1}, \dots , x_{i n_i} \big)\,,\end{aligned}\]
whose radical is
\[(\, x_{1,j_1}\cdots x_{r,j_r} \mid j_i\in[n_i]\,) = \prod_{i=1}^r (x_{i,1}\, \dots, x_{i,n_i})\,.\]
Note that this coincides with the defining ideal for the nullcone of the action of the torus $T$ (see Proposition \ref{prop-sepvarSV}).
Set $\mathfrak{a}_i=(x_{i,1}, \dots, x_{i,n_i})$ and write $J=\prod_{i=1}^r \mathfrak{a}_i$ for the radical above. We apply the Mayer-Vietoris spectral sequence of \cite{jam-rgl-az:lcasmi}. Since each nonempty $A \subseteq \{1,\dots,r\}$ yields a distinct ideal $\mathfrak{a}_A=\sum_{i\in A} \mathfrak{a}_i$, the intersection poset of the subspace arrangement defined by $J$ is the full Boolean poset. Thus, the associated simplicial complex of each interval in the poset is a homology sphere of dimension $\# A -2$, where, by convention, the $(-1)$-sphere is the empty set. Set $n_A=\sum_{i\in A} n_i = \mathrm{ht} (\mathfrak{a}_A)$. By \cite[Corollary~1.3]{jam-rgl-az:lcasmi}, there is a filtration of the local cohomology with support in $J$ such that the associated graded module satisfies
\[ \mathrm{gr} \big( \HH{q}{J}{{\kk[V]}} \big) \cong \bigoplus_{\emptyset \neq A\subseteq \{1,\dots,r\}} \HH{n_A}{\mathfrak{a}_A}{{\kk[V]}} \otimes_{\Bbbk} \tilde{H}_{n_A - q -1} ( S^{\# A -2} , \Bbbk)\,. \]
Thus, the cohomological dimension of $\mathcal{I}(\mathcal{N}_{W,G})$ is
\[ \max \big\{ n_A - \# A +1 \ | \ \emptyset \neq A\subseteq [r] \big\} = \sum_{i=1}^r n_i -r +1 \,,\]
and hence the cohomological dimension of $\mathcal{I}(\mathcal{N}_{W,G}\times \mathcal{N}_{W,G})$ is twice this number by Lemma~\ref{lem-Kunneth}, as claimed.
\end{proof}
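For a concrete instance of the count above (a classical example, included here only for illustration), take $r=2$ and $n_1=n_2=2$, so that $J$ cuts out the union of two planes in $\mathbb{A}^4$ meeting only at the origin:

```latex
% Worked instance of the formula above: r = 2, n_1 = n_2 = 2, so that
% J = (x_{1,1}, x_{1,2}) \cap (x_{2,1}, x_{2,2}). The nonempty subsets
% A of [2] give n_A - #A + 1 equal to 2, 2, and 4 - 2 + 1 = 3, whence
\[
\operatorname{cd}(J) = \max\{2,\,2,\,3\} = 3 > 2 = \operatorname{ht}(J)\,,
\]
% Hartshorne's classical example: two planes in A^4 meeting at a point are
% not a set-theoretic complete intersection. Doubling gives cohomological
% dimension 6 = 2(n_1 + n_2) - 2r + 2 for the product ideal.
```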
For Segre varieties with at least three factors, we obtain a lower bound, but we do not expect this bound to be sharp in general.
\begin{prop}\label{prop-LBS}
Let $X$ be the Segre variety which is the image of the closed embedding $\prod_{i=1}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N$ given by the line bundle $\mathcal{O}(1,\dots,1)$, with $r>2$. Suppose, without loss of generality, that $n_1\leqslant n_2\leqslant \cdots \leqslant n_r$. The size of any separating set for the affine cone over $X$ is at least $2 \big( \sum_{i=2}^{r} n_i \big) - 2r +4$.
\end{prop}
\begin{proof}
As $X$ is nonmodular and the $a_i$'s are all 1, its ring of homogeneous coordinates coincides with the ring of invariants of an action of a torus of rank $r-1$ as discussed in Section \ref{section-SV}. The following ideal cuts out the corresponding separating variety:
\[ I=\big(\ M_1 \cdots M_r \otimes 1-1\otimes M_1 \cdots M_r \ \big| \
M_i \ \text{is one of the variables} \ x_{i1}, \dots , x_{i n_i} \ \big).
\]
By Lemma \ref{lem-specialization}, the cohomological dimension of $I$ is bounded below by the cohomological dimension of the ideal we obtain via the linear specialization to $x_{1,1}\otimes 1=1\otimes x_{1,2}=1$ and $x_{1,2}\otimes 1=1\otimes x_{1,1}=0$. This ideal is
\[
\big(\ M_2 \cdots M_r \otimes 1,1\otimes M_2 \cdots M_r \ \big| \
M_i \ \text{is one of the variables} \ x_{i1}, \dots , x_{i n_i} \ \big)
\,.\]
This ideal coincides with $\mathcal{I}(\mathcal{N}_{W,T'} \times \mathcal{N}_{W,T'})$ for the action $T'$ on $W$ defining the Segre embedding of $\prod_{i=2}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N$. Applying Lemma~\ref{lem-Nvg_Nvg}, we obtain the bound in the statement.
\end{proof}
The proof above works for general Segre-Veronese varieties; we have restricted the statement in the proposition above because in all other cases we can obtain a more precise result. The only remaining case is that of Segre products with two factors. In this case, local cohomology groups fail to provide a sufficient obstruction in positive characteristic, but we may argue along similar lines using \'etale cohomology. We refer the reader to \cite{Milne} for the facts from \'etale cohomology used below.
Fix $\Lambda=\mathbb Z/q\mathbb Z$, where $q\neq \operatorname{char}\Bbbk$ is a prime.
\begin{prop}\label{prop-upper} If $Y$ is a $d$-dimensional variety that is covered by $k$ affines, then $\Het{i}{Y,\Lambda}=0$ for all $i\geq d+k$. In particular, if $Z$ is a closed subset of $\mathbb{A}^d$ and $\Het{d+k-1}{\mathbb{A}^d \setminus Z,\Lambda}\neq 0$, then $Z$ cannot be defined by fewer than $k$ equations.
\end{prop}
We will use the following result.
\begin{prop}[Bruns--Schw\"anzl \cite{wb-rs:tneddv}] Let $M$ be a $2 \times s$ matrix of indeterminates in the polynomial ring $A$. Set $Z=V(I_2(M))\subset \mathbb{A}^{2s}$. Then $\Het{4s-4}{\mathbb{A}^{2s}\setminus Z, \Lambda}\cong \Lambda$, and the higher \'etale cohomology groups vanish.
\end{prop}
\begin{thm} \label{thm-S2factor} For the affine cone over the Segre embedding of $\mathbb{P}^{n_1-1} \times \mathbb{P}^{n_2-1}$, any separating set has size at least $2n_1+2n_2-4$.
\end{thm}
\begin{proof}
By the long exact sequence
\[ \cdots\rightarrow\Het{r}{\mathbb{A}^{2s}, \Lambda} \rightarrow\Het{r}{\mathbb{A}^{2s}\setminus Z, \Lambda} \rightarrow \Hetl{r+1}{Z}{\mathbb{A}^{2s}, \Lambda} \rightarrow \Het{r+1}{\mathbb{A}^{2s}, \Lambda} \rightarrow \cdots\]
it follows that $\Hetl{4s-3}{Z}{\mathbb{A}^{2s}, \Lambda}\cong \Lambda$ and the higher such groups vanish. Then, the Gysin isomorphism yields $\Het{2s-1}{Z,\Lambda}\cong \Lambda$, with the higher ones vanishing. Another application of the Gysin isomorphism yields $\Hetl{4s+2t-3}{Z\times \{0\}}{\mathbb{A}^{2s}\times \mathbb{A}^{t},\Lambda}\cong \Lambda$, where $0$ is the origin in $\mathbb{A}^t$. Applying the sequence above, we obtain
\[\Het{4s+2t-4}{(\mathbb{A}^{2s}\times \mathbb{A}^{t})\setminus (Z\times \{0\}),\Lambda}\cong \Lambda\]
and the higher groups vanish.
By Proposition \ref{prop-sepvarSV}, part \ref{prop-sepvarS2}, the separating variety decomposes as
\[
\mathcal{S}_{W,T}=\overline{\Gamma_{W,T}}\cup \big(W_{\hat{1}}\times W_{\hat{1}}\big) \cup \big(W_{\hat{2}}\times W_{\hat{2}}\big)\,.
\]
Write $W,X,Y$ to denote $\overline{\Gamma_{W,T}}$, $W_{\hat{1}}\times W_{\hat{1}}$, and $W_{\hat{2}}\times W_{\hat{2}}$, respectively, write $A=\mathbb{A}^{2(n_1+n_2)}$, and write $S$ for the separating variety $W\cup X \cup Y$. For all sequences below, we consider \'etale cohomology with coefficients in $\Lambda$.
We obtain one Mayer-Vietoris sequence:
\[\cdots\rightarrow\Het{i}{A\setminus\{0\}}\rightarrow \Het{i}{A\setminus X}\oplus\Het{i}{A \setminus Y}\rightarrow \Het{i}{A \setminus (X \cup Y)} \rightarrow \Het{i+1}{A\setminus\{0\}}\rightarrow\cdots\]
from which we conclude that
\[ \Het{i}{A\setminus X}\oplus\Het{i}{A \setminus Y}\cong \Het{i}{A \setminus (X \cup Y)}\]
by the natural inclusion maps for $i<4n_1+4n_2-1$.
From the Mayer-Vietoris sequence:
\begin{align*}
\cdots\rightarrow\Het{i}{A\setminus\{0\}}\rightarrow&\Het{i}{A\setminus(W\cap X)} \oplus \Het{i}{A\setminus(W\cap Y)} \\
&\rightarrow \Het{i}{A\setminus (W\cap (X\cup Y))}
\rightarrow\Het{i+1}{A\setminus\{0\}}\rightarrow\cdots\end{align*}
we conclude
\[\Het{i}{A\setminus(W\cap X)} \oplus \Het{i}{A\setminus(W\cap Y)} \cong \Het{i}{A\setminus (W\cap (X\cup Y))}\] for $i<4n_1+4n_2-1$.
We consider one more Mayer-Vietoris sequence:
\begin{align*}
\cdots\rightarrow\Het{i}{A\setminus (W\cap (X\cup &Y))}\rightarrow\Het{i}{A \setminus W} \oplus \Het{i}{A \setminus (X \cup Y)} \\&\rightarrow \Het{i}{A \setminus S}
\rightarrow\Het{i+1}{A\setminus (W\cap (X\cup Y))}\rightarrow\cdots
\end{align*}
which, assuming $n_1\geqslant 3$ and applying the isomorphisms obtained from the long exact sequences above, also reads
\begin{align*}
\cdots\rightarrow\Het{i}{A&\setminus(W\cap X)} \oplus \Het{i}{A\setminus(W\cap Y)}\rightarrow\Het{i}{A \setminus W} \\&\rightarrow \Het{i}{A \setminus S}
\rightarrow\Het{i+1}{A\setminus(W\cap X)} \oplus \Het{i+1}{A\setminus(W\cap Y)}\rightarrow\cdots
\end{align*}
for $4n_2<i<4n_1+4n_2-1$. In particular, for $t=4n_1+4n_2-3$,
\[
\cdots \rightarrow\Het{t-1}{A\setminus S}\rightarrow \Het{t}{A\setminus(W\cap X)} \oplus \Het{t}{A\setminus(W\cap Y)}
\rightarrow\Het{t}{A \setminus W}
\rightarrow\cdots
\]
which computes as
\[
\cdots \rightarrow\Het{t-1}{A\setminus S}\rightarrow \Lambda\times\Lambda
\rightarrow\Lambda
\rightarrow\cdots
\]
Since every homomorphism $\Lambda\times\Lambda\rightarrow\Lambda$ has nonzero kernel, exactness yields $\Het{4n_1+4n_2-4}{A\setminus S}\neq 0$. The theorem then follows from Proposition~\ref{prop-upper}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm-MinSizeSV}]
Recall that we can reduce to the case of nonmodular Segre-Veronese varieties by Lemma \ref{lem-nonmodSV}. The upper bounds are given by Proposition \ref{prop-UBSV} and the lower bounds are given by Theorem \ref{thm-segvarUB} for case \ref{thm-MinSizeSV1}, Proposition \ref{prop-LBS} for case \ref{thm-MinSizeSV3}, and Theorem \ref{thm-S2factor} for case \ref{thm-MinSizeSV2}. In cases \ref{thm-MinSizeSV1} and \ref{thm-MinSizeSV2} the upper and lower bounds coincide.
\end{proof}
\begin{rmk} In the cases of Theorem~\ref{thm-MinSizeSV} where the size of a minimal separating set is not exactly identified, namely, when the invariant ring is a Segre variety up to inseparable closure with at least three factors, we do not know whether local/\'etale cohomological bounds on the separating variety can be improved. In particular, we do not know whether they agree with the upper bounds provided by secant geometry techniques. However, in these cases in positive characteristic, local cohomological dimension is not a sharp lower bound in general. For a concrete example, one can compute that the projective dimension of ${\kk[V^2]}/{\mathcal{I}_{V,G}}$ is 6 for the Segre embedding of $\mathbb{P}^1\times \mathbb{P}^1 \times \mathbb{P}^1$ over $\mathbb F_2$, and hence the cohomological dimension is at most 6; equality holds due to Proposition~\ref{prop-LBS}. As in the proof of Theorem~\ref{thm-S2factor}, \'etale bounds may be sharper here.
\end{rmk}
\section{Monomial separating sets}\label{section-MonomialSepSets}
The focus of this section is on the invariants of representations of tori and their monomial separating sets. We include as a special case those representations whose ring of invariants is isomorphic to the ring of homogeneous coordinates on Segre-Veronese varieties.
\subsection{Combinatorial characterization of monomial separating subalgebras}
A monomial subalgebra of the invariant ring will be given by a subsemigroup $\mathcal{S}\subseteq \mathcal{L}$. For $I\subseteq \{1,\ldots,n\}$, set $\mathcal{S}_I=\mathcal{L}_I\cap \mathcal{S}$.
\begin{prop}\label{prop-SsepCharp}
Suppose $\Bbbk$ has positive characteristic $p$. The subsemigroup ${\mathcal{S}\subseteq \mathcal{L}}$ gives a separating algebra if and only if there exists $m\geq 1$ such that $p^m\mathcal{L}\subseteq \mathcal{S}$.
\end{prop}
\begin{proof}
Suppose there exists $m\geq 1$ such that $p^m\mathcal{L}\subseteq \mathcal{S}$. Take $u,v\in V$ and suppose they are separated by some invariant; without loss of generality, we may assume they are separated by $x^\alpha$ with $\alpha\in \mathcal{L}$. By our assumption, $p^m\alpha=\gamma\in \mathcal{S}$. Then $x^\gamma$ separates $u$ and $v$. Indeed, otherwise we would have \[(x^\alpha(u))^{p^m}=x^\gamma(u)=x^\gamma(v)=(x^\alpha(v))^{p^m}\,,\]
and so $x^\alpha(u)=x^\alpha(v)$, a contradiction.
Now suppose $\mathcal{S}$ gives a separating algebra $A$. As this algebra is a graded subalgebra, it follows that the extension $A\subseteq \kk[x]^T$ is finite and $\kk[x]^T$ is the purely inseparable closure of $A$ in $\kk[x]$ (see \cite[Remark 3]{hd-gk:ciagac} or \cite[Theorem 6]{fdg:viac}). Hence, for any $x^\alpha\in\kk[x]^T$, there exists $m_\alpha\in \mathbb N$ such that $(x^\alpha)^{p^{m_\alpha}}\in A$. The finiteness of the extension ensures that there exists a natural number $m$, independent of $\alpha$, such that $(x^\alpha)^{p^{m}}\in A$. It follows that $p^m\mathcal{L}\subseteq\mathcal{S}$.
\end{proof}
\begin{lem} \label{lem-SSep}
Suppose that for all $I\subseteq [n]$ we have $\mathcal{L}_I\subseteq \mathbb Z\mathcal{S}_I$, and that for all $\alpha\in \mathcal{L}$, $\alpha_i\neq 0$ implies that there exists $\gamma\in \mathcal{S}$ such that $i\in\operatorname{supp} (\gamma)$ and $\operatorname{supp} (\gamma) \subseteq \operatorname{supp} (\alpha)$. Then $\mathcal{S}$ gives a separating algebra.
\end{lem}
\begin{proof}
Take $u,v\in V$ and suppose they are separated by some invariant; without loss of generality, suppose they are separated by $x^\alpha$ with $\alpha\in \mathcal{L}$. Suppose first that $x^\alpha(u)=0\neq x^\alpha(v)$. As $x^\alpha(u)=0$, there exists $i\in \operatorname{supp} (\alpha)$ such that $u_i=0$. By our assumption, there exists $\gamma\in \mathcal{S}$ such that $i\in \operatorname{supp} (\gamma)$ and $\operatorname{supp} (\gamma)\subseteq \operatorname{supp} (\alpha)$. Then, as $v_j\neq 0$ for all $j\in \operatorname{supp} (\alpha)$, we have $x^\gamma(u)=0\neq x^\gamma (v)$.
Suppose now that both $x^\alpha(u)$ and $x^\alpha(v)$ are non-zero. Then $u_i$ and $v_i$ are non-zero for all $i\in\operatorname{supp} (\alpha)$. By our assumption, there exist $\gamma,\gamma'\in \mathcal{S}_{\operatorname{supp} (\alpha)}$ such that $\alpha=\gamma-\gamma'$. Then one of $x^\gamma$ or $x^{\gamma'}$ must separate $u$ and $v$. Indeed, otherwise we have
\[x^\alpha(u)=\frac{x^\gamma(u)}{x^{\gamma'}(u)}=\frac{x^\gamma(v)}{x^{\gamma'}(v)}=x^\alpha(v),
\]
a contradiction.
\end{proof}
\begin{prop}\label{prop-sepalg}
Suppose $\Bbbk$ has characteristic zero. Then the following are equivalent:
\begin{enumerate}
\item $\mathcal{S}$ gives a separating algebra. \label{prop-sepalg1}
\item For all $I\subseteq [n]$, $\mathcal{L}_I\subseteq \mathbb Z\mathcal{S}_I$, and for all $\alpha\in \mathcal{L}$, $\alpha_i\neq 0$ implies that there exists $\gamma\in \mathcal{S}$ such that $i\in\operatorname{supp} (\gamma)$ and $\operatorname{supp} (\gamma) \subseteq \operatorname{supp} (\alpha)$. \label{prop-sepalg2}
\item For any prime number $p$ there exists $m\geq 1$ such that $p^m\mathcal{L}\subseteq \mathcal{S}$. \label{prop-sepalg3}
\item There exist distinct prime numbers $p,q$ and $m\geq 1$ such that $p^m\mathcal{L}\subseteq \mathcal{S}$ and $q^m\mathcal{L}\subseteq \mathcal{S}$. \label{prop-sepalg4}
\end{enumerate}
\end{prop}
\begin{proof}
(\ref{prop-sepalg1})$\Rightarrow$ (\ref{prop-sepalg2}): Suppose that $\mathcal{S}$ gives a separating algebra. As $T$ is reductive, the restriction of any separating set to a $T$-stable subspace gives a separating set \cite[Theorem 2.3.16]{hd-gk:cit}. Hence for any subset $I\subseteq [n]$, $\mathcal{S}_I$ must give a separating algebra in $\Bbbk[V_I]^T$. As we assume $\Bbbk$ has characteristic zero, the field of fractions of the separating algebra given by $\mathcal{S}_I$ coincides with the field of fractions of the invariant ring \cite[Proposition 2.3.10]{hd-gk:cit}, that is, $\mathbb Z\mathcal{S}_I=\mathbb Z\mathcal{L}_I\supseteq \mathcal{L}_I$. Now take $\alpha\in \mathcal{L}$ and suppose $i_0\in \operatorname{supp} (\alpha)$. Consider the points $u,v\in V$ defined as
\begin{align*}
u_i &= \left\{\begin{array}{cl}
1 & i\in \operatorname{supp} (\alpha)\\
0 & \text{otherwise}\end{array}\right.\\
v_i &= \left\{\begin{array}{cl}
1 & i\in \operatorname{supp} (\alpha)\setminus \{i_0\}\\
0 & \text{otherwise}\end{array}\right. .
\end{align*}
Then $u,v\in V_I$, where $I=\operatorname{supp} (\alpha)$, and $x^\alpha(u)=1\neq 0=x^\alpha(v)$. As $\mathcal{S}_I$ gives a separating algebra, there exists $\gamma \in \mathcal{S}_I$ such that $x^\gamma(u)\neq x^\gamma(v)$. It follows that $i_0\in\operatorname{supp} (\gamma)$, since otherwise $x^\gamma(u)=1=x^\gamma(v)$, a contradiction.
(\ref{prop-sepalg2})$\Rightarrow$ (\ref{prop-sepalg3}): The integer matrix $A$ gives a representation of the torus of rank $r$ over any field. Condition (\ref{prop-sepalg2}) does not involve the base field at all. Thus, if (\ref{prop-sepalg2}) holds, we can think of it as holding over a field of any characteristic $p$. Then, by Lemma \ref{lem-SSep}, $\mathcal{S}$ gives a separating algebra over any field, and by Proposition \ref{prop-SsepCharp}, it follows that for any prime $p$ there exists $m\in\mathbb N$ such that $p^m\mathcal{L}\subseteq \mathcal{S}$.
(\ref{prop-sepalg3})$\Rightarrow$ (\ref{prop-sepalg4}): Immediate.
(\ref{prop-sepalg4})$\Rightarrow$ (\ref{prop-sepalg1}): Take $u,v\in V$ and suppose they are separated by some invariant; without loss of generality, we may assume they are separated by $x^\alpha$ with $\alpha\in \mathcal{L}$. By our assumption, $p^{m}\alpha =\gamma_1\in \mathcal{S}$ and $q^{m}\alpha =\gamma_2\in \mathcal{S}$. If $x^{\alpha}(u)=0\neq x^\alpha(v)$, then $x^{\gamma_1}(u)=(x^\alpha(u))^{p^{m}}=0\neq (x^\alpha(v))^{p^{m}}=x^{\gamma_1}(v)$. So we may suppose both $x^\alpha(u)$ and $x^\alpha(v)$ are non-zero. One of $x^{\gamma_1}$ or $x^{\gamma_2}$ must separate $u$ and $v$. Indeed, otherwise $x^{\alpha}(u)/x^\alpha(v)$ is both a $p^{m}$-th and a $q^{m}$-th root of unity, that is, $x^{\alpha}(u)/x^\alpha(v)=1$, a contradiction.
\end{proof}
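The final root-of-unity step can be spelled out (a small supplement to the argument above, via B\'ezout's identity):

```latex
% Supplementary detail for (4) => (1): a common p^m-th and q^m-th root of
% unity equals 1. Assuming p \neq q, we have gcd(p^m, q^m) = 1, so there are
% integers s, t with s p^m + t q^m = 1. For \zeta = x^\alpha(u)/x^\alpha(v),
\[
\zeta = \zeta^{\,s p^m + t q^m}
= \big(\zeta^{p^m}\big)^{s}\,\big(\zeta^{q^m}\big)^{t} = 1\,,
\]
% so x^\alpha(u) = x^\alpha(v), the contradiction used in the proof.
```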
\subsection{Invariants with small support separate}
We apply the results of the previous section to give a bound, for a torus action $T$, on the number $r$ such that there exists a separating set consisting of elements that each involve at most $r$ variables. In \cite{md-es:hdag}, Domokos and Szab\'o define invariants of algebraic groups to bound this number for actions of algebraic groups on product varieties; in fact, one may ask this question for any subring of a polynomial ring, so that one has a well-defined notion of the number of variables an element involves.
We do not believe that the following two results are new, but we have not seen the exact statement of Theorem \ref{thm-r+1} in the literature.
\begin{lem}\label{lem-sparsevectors} Let $\Bbbk$ be a field, let $n > m \geqslant 0$, and let $\{v_I \ | \ I \subseteq [n],\ |I|=m\}$ be a collection of nonzero vectors in $\Bbbk^n$ such that the projection of $v_I$ onto the coordinate subspace $\Bbbk^I$ is zero. Then $\dim(\Bbbk \langle v_I \rangle)\geqslant m+1$.
\end{lem}
\begin{proof} We proceed by induction on $m$. The case $m=0$ is trivial, as there exists a nonzero vector by hypothesis. For the inductive step, note that $v_{[m]}$ vanishes on its first $m$ coordinates, so, relabeling the remaining coordinates if necessary, we may assume that the $(m+1)$-st coordinate of $v_{[m]}$ is nonzero. By the inductive hypothesis, there are $m$ linearly independent vectors $\{w_1, \dots, w_m\}$ in $\{v_I \ | \ I \subseteq [n],\ |I|=m,\ m+1\in I\}$, since omitting the $(m+1)$-st coordinate produces a collection of vectors satisfying the statement of the lemma. As the vectors $\{w_1, \dots, w_m\}$ all have $(m+1)$-st coordinate zero, while $v_{[m]}$ does not, the set $\{w_1, \dots, w_m, v_{[m]}\}$ is linearly independent.
\end{proof}
\begin{thm} Let $A$ be an $r \times n$ integer matrix with $n > r$ defining a surjection $\mathbb Z^n \to \mathbb Z^r$. Then the lattice $\ker_\mathbb Z(A)\subset \mathbb Z^n$ is generated by elements with at most $r+1$ nonzero entries. \label{thm-r+1}
\end{thm}
\begin{proof} For a subset $I\subseteq [n]$ with $|I|=r+1$, let $A_I$ be the matrix obtained from $A$ by taking only the columns whose indices lie in $I$. Let $K_I\subseteq \mathbb Z^I \subseteq \mathbb Z^n$ be the kernel of $A_I$, and let $K$ be the kernel of $A$. By definition, $\mathbb Z \langle K_I \rangle \subseteq K$; we will show that these lattices agree upon tensoring with $\mathbb Q$ and with $\mathbb F_p$ for each prime $p$. Note that for each such $I$, $K_I$ contains a nonzero vector. Moreover, $K_I$ contains a vector that is nonzero mod $p$ for each prime $p$: if $p v_I \in K_I$, then $v_I\in K_I$ as well. Thus, for any $\Bbbk=\mathbb Q, \mathbb F_p$, $\mathbb Z \langle K_I \rangle \otimes \Bbbk \subseteq K\otimes\Bbbk$, and $\mathbb Z \langle K_I \rangle \otimes \Bbbk$ satisfies the hypotheses of Lemma~\ref{lem-sparsevectors} with $m=n-(r+1)$, so that its dimension is at least $n-r$. As the sequence
\[ 0 \rightarrow K \rightarrow \mathbb Z^n \stackrel{A}{\rightarrow} \mathbb Z^r \rightarrow 0 \]
is split, we have that the dimension of $K \otimes \Bbbk$ is $n-r$. Thus, $\mathbb Z \langle K_I \rangle \otimes \Bbbk= K\otimes \Bbbk$ for all such $\Bbbk$, so $\mathbb Z \langle K_I \rangle = K$ as required.
\end{proof}
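A minimal instance of Theorem~\ref{thm-r+1} (our illustration, not needed elsewhere):

```latex
% Small instance of Theorem \ref{thm-r+1}: r = 1, n = 3, and
% A = (1 1 -1), a surjection Z^3 -> Z. Its kernel is
\[
\ker_{\mathbb Z}(A) = \{(a,b,c)\in\mathbb Z^3 \mid a+b=c\}
= \mathbb Z\big\langle (1,0,1),\,(0,1,1)\big\rangle\,,
\]
% with each generator having at most r + 1 = 2 nonzero entries; e.g.
% (1,-1,0) = (1,0,1) - (0,1,1) lies in the kernel without requiring any
% further generators.
```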
\begin{cor}\label{cor-r+1}
Let $V$ be an $n$-dimensional representation of a torus $T$ of rank $r\leq n$. The rational invariants $\Bbbk(V)^T$ are generated by rational invariants each involving at most $r+1$ variables.
\end{cor}
\begin{thm}\label{thm-2r+1}
Let $V$ be an $n$-dimensional representation of a torus $T$ of rank $r\leqslant n$. The invariants involving at most $2r+1$ variables form a separating set.
\end{thm}
\begin{proof}
Let $\mathcal{S}\subseteq \mathcal{L}$ be the subsemigroup generated by all elements with support of size at most $2r+1$. We will show that $\mathcal{S}$ satisfies the conditions of Lemma~\ref{lem-SSep}.
First we show that for all $\alpha\in \mathcal{L}$ with $\alpha_i\neq 0$, there exists $\gamma\in \mathcal{S}$ such that $i\in\operatorname{supp} (\gamma)$ and $\operatorname{supp} (\gamma) \subseteq \operatorname{supp} (\alpha)$.
Take $\alpha\in \mathcal{L}$, and suppose $\alpha_{i_0}\neq 0$. If ${|\operatorname{supp} (\alpha) | \, \leqslant 2r+1}$, then ${\alpha \in \mathcal{S}}$, so there is nothing to do. So we suppose that ${|\operatorname{supp} (\alpha) | \, > 2r+1}$. Setting $\sigma=\sum_{i\in\operatorname{supp} (\alpha)\setminus\{i_0\}}\alpha_i$, we can rewrite the equation $0=\sum_{i=1}^n\alpha_i m_i$ as
\[-\frac{\alpha_{i_0}}{\sigma}\,m_{i_0}=\sum_{i\in\operatorname{supp} (\alpha)\setminus\{i_0\}} \frac{\alpha_i}{\sigma}\, m_i\,,\]
and so $-\frac{\alpha_{i_0}}{\sigma}\,m_{i_0}$ is in the convex hull of $\{m_i\mid i\in \operatorname{supp} (\alpha)\setminus\{i_0\}\}$. By Carath\'eodory's Theorem \cite{cc:uvkpgwna} there is a subset $K\subseteq \operatorname{supp} (\alpha)\setminus\{i_0\}$ of size $|K| \leqslant r+1$ such that $-\frac{\alpha_{i_0}}{\sigma}\,m_{i_0}$ is in the convex hull of $\{m_k \mid k\in K\}$. Hence, we have an equation
\[-\frac{\alpha_{i_0}}{\sigma}\,m_{i_0}=\sum_{k\in K} \delta_k m_k,\]
where $\delta_k\geq 0$ and $\sum_{k\in K}\delta_k = 1$. Multiplying by a sufficiently large natural number to clear denominators, we find $\sum_{i\in K\cup \{i_0\}}\gamma_im_i=0$ with $\gamma_i\in\mathbb N$. Define $\gamma\in \mathbb N^n$ as follows:
\[
\gamma_i=\left \{ \begin{array}{cl}
\gamma_{i_0} & \mathrm{ if }~ i=i_0\\
\gamma_k & \mathrm{ if }~ i\in K\\
0 & \mathrm{ otherwise}
\end{array}\right.\]
Then $\gamma_{i_0}\neq 0$ and $\gamma$ has support $K\cup \{i_0\}\subseteq \operatorname{supp} (\alpha)$ of size at most $r+2\leqslant 2r+1$ so that $\gamma \in \mathcal{S}$ as required.
Now we show that for all $I\subseteq [n]$ we have $\mathcal{L}_I\subseteq \mathbb Z\mathcal{S}_I$.
Fix $I\subseteq [n]$. If $\mathcal{L}_I=0$, we are done, so suppose $\mathcal{L}_I\neq 0$. Take $\alpha\in \mathcal{L}_I$. Set $I'=\operatorname{supp} (\alpha)$. By Corollary~\ref{cor-r+1}, we can write $\alpha$ as a $\mathbb Z$-linear combination of elements of $\ker_\mathbb Z A_{I'}$ with support of size at most $r+1$. It will suffice to show that any $\beta\in \ker_\mathbb Z A_{I'}$ with $|\operatorname{supp} \beta|\leq r+1$ can be written as $\beta=\gamma-\gamma'$ with $\gamma,\gamma'\in\mathcal{L}_{I'}$ having support of size at most $2r+1$.
Take $\beta=\beta^+-\beta^-\in\ker_\mathbb Z A_{I'}$, where $\beta^+,\beta^-\in\mathbb N^n$ have disjoint supports and $|\operatorname{supp} (\beta)| \leqslant r+1$. Set $J=\operatorname{supp} (\beta)$, $J^+=\operatorname{supp} (\beta^+)$, and $J^-=\operatorname{supp} (\beta^-)$. Note that we have ${|J^+| + |J^-|\leqslant r+1}$ and, without loss of generality, both $J^+$ and $J^-$ are nonempty, so that ${\max{\{|J^+|,|J^-|\}}\leqslant r}$. As $\alpha$ has full support $I'$, $0$ is an interior point of the convex hull of the weight vectors $\{m_i \, | \, i \in I'\}$, that is, there exists an equation of the form
\begin{equation}\sum_{i\in J^-} \lambda_i m_i + \sum_{j \notin J^-} \lambda_j m_j = 0\,,\label{eqn-invfullsupp}\end{equation}
with $\lambda_i>0$ and $\sum_{i\in I'}\lambda_i=1$.
Set
\[m'=-\left(\frac{1}{\sum_{j\notin J^-}\lambda_j}\right)\left(\sum_{i\in J^-} \lambda_i m_i \right) \quad \text{and} \quad \lambda_i'=\left(\frac{1}{\sum_{j\notin J^-}\lambda_j}\right)\lambda_i\,.\]
Then Equation~(\ref{eqn-invfullsupp}) can be rewritten as
\[m'=\sum_{i\notin J^-}\lambda_i'm_i\,.\]
Note that $\lambda_i'>0$ and $\sum_{i\notin J^-}\lambda_i'=1$, so that $m'$ is an interior point of the convex hull of $\{ m_i \, | \, i \notin J^-\}$. As $J^+$ is nonempty and disjoint from $J^-$, there exist $j_0\in J^+\setminus J^-$. By Watson's Carath\'eodory Theorem \cite{cc:uvkpgwna}, there exists a subset $K\subseteq I'$ with $|K|\leqslant r$ and $K \cap J^- = \emptyset$ such that $m'$ is in the convex hull of $\{m_{j_0}\} \cup \{ m_k \, | \, k \in K\}$, that is, there are nonnegative rational numbers $\mu_{j_0}, \mu_k$ such that $\sum_{k\in K\cup\{j_0\}}\mu_k=1$ and
\begin{equation} m'=\mu_{j_0}m_{j_0} + \sum_{k\in K} \mu_{k} m_{k}\,.\label{eqn-smallsupp}\end{equation}
Substituting the value of $m'$ and reorganizing, we then get an equation
\[\sum_{i\in J^-}\left(\frac{\lambda_i}{\sum_{j\notin J^-}\lambda_j}\right)m_i + \mu_{j_0} m_{j_0} + \sum_{k\in K}\mu_km_k=0\,.\]
After clearing denominators, it follows that there is $\gamma\in\mathcal{L}_I$ with support $J^-\cup \{j_0\}\subseteq \operatorname{supp} (\gamma) \subseteq J^-\cup\{j_0\}\cup K$. Note that $|\operatorname{supp} (\gamma) | \leq |J^-|+1+|K|\leq r+r+1=2r+1$. We may assume $\gamma-\beta^-\in\mathbb N^n$, multiplying $\gamma$ by a large natural number if needed, and so $\beta+\gamma\in\mathcal{L}_I$ with support
\[\operatorname{supp}(\beta+\gamma)\subseteq \operatorname{supp} (\beta) \cup \operatorname{supp}( \gamma) \subseteq J\cup (J^-\cup\{j_0\}\cup K)=J\cup K\,.\]
It follows that $|\operatorname{supp} (\beta+\gamma)| \leq |J|+|K|\leq r+1+r=2r+1.$ Thus, we can write $\beta=(\beta+\gamma)-\gamma$ as a difference of elements of $\mathcal{L}_I$, with support of size at most $2r+1$.
\end{proof}
\begin{eg}[The bound given by Theorem \ref{thm-2r+1} is sharp]
We consider the $2r+1$ dimensional representation of the torus of rank $r$ given by the matrix of weights:
\[A:=\left(\begin{array}{c|c|c}
I & \begin{array}{c}
5\\
\vdots\\
5
\end{array} & -6I
\end{array}\right).\]
As $A$ is already in reduced echelon form,
\[\ker_\mathbb Z A=\Big\langle v_0=\Big(-5\sum_{j=1}^r e_j, 1, 0\Big), v_i=\big(6e_i,0,e_i\big) \ \big| \ i\in [r] \Big\rangle\,,\]
as a $\mathbb Z$-module and so $\mathcal{L}=\ker_\mathbb Z A\cap \mathbb N^n$ is generated as a semigroup by
\begin{align*}
\bigg\{v_0+\sum_{j=1}^r v_j =\Big(\sum_{j=1}^re_j&,1,\sum_{j=1}^r e_j \Big), ~7v_0+6\sum_{j=1}^r v_j=\Big(\sum_{j=1}^re_j,7,6\sum_{j=1}^r e_j \Big)\,, \\
6v_0&+5\sum_{j=1}^r v_j=\Big(0,6,5\sum_{j=1}^r e_j \Big)\,, v_i=(6e_i,0,e_i) \ \Big| \ i\in [r] \bigg\}\,.
\end{align*}
Indeed, an arbitrary element of $\mathcal{L}$ will be of the form
\[\alpha=a_0\Big(-5\sum_{j=1}^r e_j, 1, 0\Big)+\sum_{i=1}^r a_i(6e_i,0,e_i)=\Big(\sum_{j=1}^r(-5a_0+6a_j)e_j, a_0, \sum_{j=1}^r a_je_j \Big)\,,\]
where $a_i\geqslant 0$ for each $i=0,\ldots,r$ and $-5a_0+6a_j\geqslant 0$ for each $j=1,\ldots,r$ since all entries of $\alpha$ must be nonnegative. In particular, for each $i=1,\ldots, r$, we will have
\begin{align*}
a_i-6\lceil a_0/6\rceil+\lfloor a_0/6\rfloor\geqslant 0&, \text{ if } 6|(a_0-1), \text{ and}\\
a_i-4\lceil a_0/6\rceil-\lfloor a_0/6\rfloor\geqslant 0&, \text{ otherwise.}
\end{align*}
Then we can write
\begin{align*}\alpha=\left\lfloor \frac{a_0}{6}\right\rfloor\left(6v_0+5\sum_{j=1}^rv_j \right)+ \Big(\left\lceil \frac{a_0}{6}\right\rceil &- \left\lfloor \frac{a_0}{6}\right\rfloor\Big)\left(7v_0+ 6\sum_{j=1}^rv_j\right)\\
&+\sum_{i=1}^r\left(a_i-6\left\lceil \frac{a_0}{6}\right\rceil+\left\lfloor \frac{a_0}{6}\right\rfloor\right)v_i\,,
\end{align*}
if $6|(a_0-1)$, and otherwise
\begin{align*}\alpha=\left\lfloor \frac{a_0}{6}\right\rfloor\left(6v_0+5\sum_{j=1}^rv_j \right)+ \Big(\left\lceil \frac{a_0}{6}\right\rceil &- \left\lfloor \frac{a_0}{6}\right\rfloor\Big)\left(v_0+ \sum_{j=1}^rv_j\right)\\
&+\sum_{i=1}^r\left(a_i-4\left\lceil \frac{a_0}{6}\right\rceil-\left\lfloor \frac{a_0}{6}\right\rfloor\right)v_i\,,
\end{align*}
proving our claim.
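For $r=1$ the generation claim can be checked by brute force. The sketch below (our own verification code, not part of the original argument) enumerates the points of $\mathcal{L}=\ker_{\mathbb Z}(1,5,-6)\cap\mathbb N^3$ with small coordinates and confirms that each is a nonnegative integer combination of the four generators listed above:

```python
from functools import lru_cache
from itertools import product

# Claimed semigroup generators of L = ker_Z(1, 5, -6) ∩ N^3 (the case r = 1).
GENS = ((1, 1, 1), (1, 7, 6), (0, 6, 5), (6, 0, 1))

@lru_cache(maxsize=None)
def in_semigroup(v, gens):
    """True if v is a nonnegative integer combination of gens."""
    if v == (0, 0, 0):
        return True
    return any(
        in_semigroup(tuple(a - b for a, b in zip(v, g)), gens)
        for g in gens
        if all(a >= b for a, b in zip(v, g))
    )

# every point of L with all coordinates below the bound is generated
points = [(x, y, z) for x, y, z in product(range(25), repeat=3)
          if x + 5 * y - 6 * z == 0]
assert all(in_semigroup(v, GENS) for v in points)
```

The same search shows that $(1,1,1)$ is not in the subsemigroup generated by $(0,6,5)$ and $(6,0,1)$ alone, in line with the divisibility-by-$6$ obstruction appearing later in this example.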
Let $\mathcal{S}\subseteq \mathcal{L}$ be the subsemigroup generated by all elements of $\mathcal{L}$ with support of size strictly less than $2r+1$. Our next claim is that this subsemigroup is generated by
\[\Big\{6v_0+5\sum_{j=1}^rv_j,~v_i \ \big| \ i\in[r] \Big\}\,.\]
Suppose $\gamma\in \mathcal{L}$ has support of size strictly less than $2r+1$, that is, it has at least one zero entry. As it belongs to $\mathcal{L}$ we can write
\[\gamma=g_1\Big(v_0+\sum_{j=1}^r v_j\Big)+g_2\Big(7v_0+6\sum_{j=1}^r v_j\Big)+g_3\Big(6v_0+5\sum_{j=1}^r v_j\Big)+\sum_{i=1}^r a_iv_i\,,\]
where $g_i,a_j$ are nonnegative integers. Hence, as $\gamma$ is equal to
\[(g_1+g_2+6a_1,\ldots,g_1+g_2+6a_r,g_1+7g_2+6g_3,g_1+6g_2+5g_3+a_1,\ldots, g_1+6g_2+5g_3+a_r)\,,\]
we must have $g_1=g_2=0$ since otherwise $\gamma$ has full support.
Our final claim is that $\mathcal{S}$ does not give a separating algebra. Indeed, the first $r+1$ entries of any element of $\mathcal{S}$ are divisible by 6, so for any prime $p$ and positive integer $m$,
\[p^m\Big(v_0+\sum_{j=1}^r v_j\Big)=(p^m,\ldots,p^m, p^m,p^m,\ldots,p^m)\]
does not belong to $\mathcal{S}$. Hence by Proposition \ref{prop-sepalg}, $\mathcal{S}$ does not give a separating subalgebra. \hfill $\triangleleft$
\end{eg}
\subsection{Minimal Size of monomial separating sets for Segre-Veroneses}
In this section, we study the minimal size of a monomial separating set for the affine cone over a Segre-Veronese variety. We consider the representation of a torus of rank $r$ whose ring of invariants is isomorphic to the homogeneous coordinate ring of the Segre-Veronese variety that is the image of the closed embedding $\prod_{i=1}^r \mathbb{P}^{n_i-1} \hookrightarrow \mathbb{P}^N$ given by the line bundle $\mathcal{O}(a_1,\dots,a_r)$ as described in Section~\ref{section-SV}. Set $I:=\{i\mid a_i=1 \text{ or $a_i$ is a pure power of } \operatorname{char} \Bbbk\}$. As in Section~\ref{section-SV}, the ring of invariants is given by:
\[S=\Bbbk\big[\ x_0 M_1 \cdots M_r \ \big| \ M_i \, \text{is a monomial of degree $a_i$ in the variables} \ x_{i,1}, \dots , x_{i, n_i} \big]\,.\]
\begin{prop}\label{prop-SVminmon}
\begin{enumerate}
\item The monomial invariants with support of size at most $r+2$ form a separating set. \label{prop-SVminmon1}
\item The minimal size of a monomial separating set is \[\Bigg(\prod_{h=1}^r n_h\Bigg) \Bigg(1 + \frac{1}{2}\sum_{i\notin I} ({n_i -1})\Bigg)\,.\] \label{prop-SVminmon2}
\end{enumerate}
\end{prop}
\begin{proof}
As the torus is a reductive group by \cite[Theorem 2.3.16]{hd-gk:cit}, the restriction of any separating set to a subrepresentation must yield a separating set. As the restriction of an invariant monomial will be either zero or the same monomial, a monomial separating set must contain separating sets for any subrepresentation.
Let $V$ be an $(r+1)$-dimensional subrepresentation. As any proper subset of the set of weights is linearly independent, an $(r+1)$-dimensional subrepresentation has no nonconstant invariants unless its matrix of weights is of the form
\[A:=\left( \begin{array}{c|c}
I & \begin{array}{c} -a_1 \\ \vdots\\ -a_r\end{array}
\end{array}\right),\]
where $I$ is the $r\times r$ identity matrix. As $A$ is in row reduced echelon form over $\mathbb Z$, $\ker_\mathbb Z A$ is generated by $\alpha:=(a_1,\ldots,a_r,1)$ as a $\mathbb Z$-module. As $\alpha$ has positive entries, it must also generate $\ker_\mathbb Z A \cap \mathbb N^{r+1}$ as a semigroup. That is, the ring of invariants is generated by exactly one monomial, and we get one such monomial for each of the
$(r+1)$-dimensional subrepresentations with nonconstant invariants, of which there are $\prod_{i=1}^r {n_i}$.
Let $U$ be an $(r+2)$-dimensional subrepresentation. By the same argument as before, its set of weights is the full set of weights, but with one weight repeated. For simplicity we suppose the repeated weight is $e_1$, so that the matrix of weights is
\[B:=\left( \begin{array}{c|cc}
I & \begin{array}{cc} -a_1 &1 \\ -a_2 & 0 \\ \vdots & \vdots \\ -a_r & 0
\end{array}\end{array}\right).\]
As $B$ is in reduced echelon form, its $\mathbb Z$-kernel is generated by ${\alpha_1:=(a_1,\ldots,a_r,1,0)}$ and ${\beta:=(-1,0,\ldots,0,0,1)}$ as a $\mathbb Z$-module. It follows that $\alpha_1$,
\[{\alpha_2:=\alpha_1+a_1\beta=(0,a_2,\ldots,a_r,1,a_1)}\,,\] and \[\alpha_3:=\alpha_1+(a_1-1)\beta=(1,a_2,\ldots,a_r,1,a_1-1)\] also generate $\ker_\mathbb Z B$ as a $\mathbb Z$-module.
We will use Lemma~\ref{lem-SSep} to show that $\{\alpha_1,\alpha_2,\alpha_3\}$ gives a separating set. To establish the first condition it suffices to note that $\alpha_1$ and $\alpha_2$ give a generator for the ring of invariants of the $(r+1)$-dimensional subrepresentations with support $\{1,\ldots,r+1\}$ and $\{2,\ldots,r+2\}$, respectively. To establish the second condition, we remark that any $\alpha' \in \ker_\mathbb Z B \cap \mathbb N^{r+2}$ will have support $\{1,\ldots,r+1\}$, $\{2,\ldots,r+2\}$ or $[r+2]$, and so we can take $\gamma$ equal to $\alpha_1$, $\alpha_2$, $\alpha_3$, respectively. Note that if $a_1=1$, then $\alpha_3=\alpha_1$, and so our $(r+2)$-dimensional representation does not require any new invariants beyond those needed for the $(r+1)$-dimensional subrepresentations. If $a_1=p^k$, where $p=\operatorname{char} \Bbbk$, then
\begin{align*}p^k(1,a_2,\ldots,a_r,1,a_1-1)&=a_1(1,a_2,\ldots,a_r,1,a_1-1)\\
&=(a_1,a_2,\ldots,a_r,1,0)+(a_1-1)(0,a_2,\ldots,a_r,1,a_1)
\end{align*}
belongs to the semigroup generated by $(a_1,\ldots,a_r,1,0)$ and $(0,a_2,\ldots,a_r,1,a_1)$, and so by Proposition~\ref{prop-SsepCharp} it follows that $(a_1,\ldots,a_r,1,0)$ and $(0,a_2,\ldots,a_r,1,a_1)$ give a separating set. Again, our $(r+2)$-dimensional representation does not require any new invariants. But if $a_1$ is neither 1 nor a pure power of $\operatorname{char}\Bbbk$, then $\alpha_1$ and $\alpha_2$ do not give a separating set. Indeed, let $\zeta$ be a primitive $m$-th root of unity with $a_1/m$ a pure power of $\operatorname{char} \Bbbk$ and set $u_1=(1,1,\ldots,1)$ and $u_2=(\zeta,1,\ldots,1)$. Then $\alpha_1(u_1)=\alpha_2(u_1)=\alpha_1(u_2)=\alpha_2(u_2)=1$ but $\alpha_3(u_1)=1\neq\zeta=\alpha_3(u_2)$. Therefore we require an extra monomial. Although this monomial need not be $\alpha_3$, it will have full support. Therefore each $(r+2)$-dimensional subrepresentation with nonconstant invariants and repeated weight $i_0\notin I$ necessitates at least one extra distinct monomial invariant. There are $\sum_{i_0 \notin I} \binom{n_{i_0}}{2} \prod_{i\neq i_0}{n_i}$ different such subrepresentations. It follows that the minimal size of a separating set is at least $\prod_{i=1}^rn_i+\sum_{i_0 \notin I} \binom{n_{i_0}}{2} \prod_{i\neq i_0}{n_i}$, which simplifies to the formula in Statement~(\ref{prop-SVminmon2}).
We will use Lemma~\ref{lem-SSep} to show Statement~(\ref{prop-SVminmon1}). Denote by $\mathcal{S}$ the semigroup corresponding to the monomial algebra generated by all monomial invariants depending on at most $r+2$ variables, and let $\mathcal{L}$ be the semigroup giving all monomial invariants. The first step is to show that $\mathcal{L}_{\operatorname{supp}(V)}\subseteq \mathbb Z \mathcal{S}_{\operatorname{supp}(V)}$ for any subrepresentation $V$. If $|\operatorname{supp}(V)|\leq r+2$, then the statement is trivially true. Suppose $|\operatorname{supp}(V)|> r+2$. By Corollary \ref{cor-r+1}, the rational invariants on $V$ are generated by the rational invariants with support of size at most $r+1$, and so in particular by those with support of size at most $r+2$. Our argument in the previous paragraph shows that for any $(r+2)$-dimensional subrepresentation the invariant monomials generate the field of rational invariants; therefore the invariant monomials with support of size at most $r+2$ generate the rational invariants on $V$, as desired. Now take $\alpha$ to be the exponent vector of a nonconstant invariant. As any proper subset of the set of distinct weights is linearly independent, $\mathrm{wt}(\alpha)=\{m_0,m_1,\ldots,m_r\}$. Hence, if $\alpha_{i_0,j_{i_0}}\neq 0$, we know that $\alpha_{0}\neq 0$ and for all $i\in [r]\setminus \{i_0\}$ there is $j_i\in[n_i]$ such that $\alpha_{i,j_i}\neq 0$. Define $\gamma\in\mathbb N^{1+\sum_{i=1}^r n_i}$ by $\gamma_0=1$, $\gamma_{i_0,j_{i_0}}=a_{i_0}$, $\gamma_{i,j_i}=a_i$ for all other $i\in[r]$, with the remaining entries zero. By construction $\gamma_{i_0,j_{i_0}}\neq 0$ and $\gamma$ has support of size at most $r+2$ contained in the support of $\alpha$. We have now established that $\mathcal{S}$ satisfies the two conditions of Lemma \ref{lem-SSep}, and so the monomial invariants with support of size at most $r+2$ form a separating set, proving Statement~(\ref{prop-SVminmon1}).
A consequence of Statement~(\ref{prop-SVminmon1}) is that, to obtain a monomial separating set for the full representation, it suffices to have a set of monomials which restricts to a separating set for each $(r+2)$-dimensional subrepresentation. Our argument in the first two paragraphs of the proof gives a construction of such a separating set of size equal to the formula in Statement~(\ref{prop-SVminmon2}), completing the proof.
\end{proof}
\begin{cor}\label{cor-sparse} The elements of any separating set must, between them, contain at least
\[\Bigg(\prod_{h=1}^r n_h\Bigg) \Bigg(1 + \frac{1}{2}\sum_{i\notin I} ({n_i -1})\Bigg)\]
distinct monomials.
\end{cor}
\begin{proof} As the ring of invariants is generated by monomials, the set of monomials contained in the elements of any separating set must form a monomial separating set. The previous proposition applies to this set.
\end{proof}
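As a quick arithmetic check on this count (the values of $n_i$ below are arbitrary test data, not from the text), the unsimplified count $\prod_h n_h+\sum_{i_0\notin I}\binom{n_{i_0}}{2}\prod_{i\neq i_0}n_i$ agrees with the closed formula:

```python
from math import comb, prod

def separating_count(ns, I):
    """Unsimplified and closed forms of the count; ns[i] = n_{i+1},
    I = set of indices i with a_i equal to 1 or a pure power of char k."""
    unsimplified = prod(ns) + sum(
        comb(ns[i0], 2) * prod(n for j, n in enumerate(ns) if j != i0)
        for i0 in range(len(ns)) if i0 not in I
    )
    # prod(ns) * (n_i - 1) is even for each i, so integer division is exact
    closed = prod(ns) + prod(ns) * sum(ns[i] - 1
                                       for i in range(len(ns)) if i not in I) // 2
    return unsimplified, closed

assert separating_count([3, 4, 2], {0}) == (72, 72)
assert separating_count([5, 3, 3, 2], {1, 3}) == (360, 360)
```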
\bibliographystyle{plain}
% End of ``Mapping toric varieties into low dimensional spaces'' (arXiv:1602.07585, math.AC; math.AG).
% arXiv:1010.2565 -- ``Proof of the monotone column permanent conjecture''
% Abstract: Let A be an n-by-n matrix of real numbers which are weakly decreasing down each
% column, Z_n = diag(z_1,..., z_n) a diagonal matrix of indeterminates, and J_n the n-by-n
% matrix of all ones. We prove that per(J_nZ_n+A) is stable in the z_i, resolving a recent
% conjecture of Haglund and Visontai. This immediately implies that per(zJ_n+A) is a
% polynomial in z with only real roots, an open conjecture of Haglund, Ono, and Wagner from
% 1999. Other applications include a multivariate stable Eulerian polynomial, a new proof of
% Grace's apolarity theorem and new permanental inequalities.
\section{The monotone column permanent conjecture.}
Recall that the \emph{permanent} of an $n$-by-$n$ matrix $H=(h_{ij})$
is the unsigned variant of its determinant:
$$
\mathrm{per}(H) = \sum_{\sigma\in\S_n} \prod_{i=1}^n h_{i,\sigma(i)},
$$
with the sum over all permutations $\sigma$ in the symmetric group $\S_n$.
A \emph{monotone column matrix} $A=(a_{ij})$ has real entries which are
weakly decreasing reading down each column:\ that is, $a_{ij}\geq a_{i+1,j}$
for all $1\leq i\leq n-1$ and $1\leq j\leq n$. Let $J_n$ be the $n$-by-$n$
matrix in which every entry is $1$.\\
\noindent\textbf{The Monotone Column Permanent Conjecture (MCPC).}\\
If $A$ is an $n$-by-$n$ monotone column matrix then $\mathrm{per}(zJ_n+A)$
is a polynomial in $z$ which has only real roots.\\
The MCPC first appears as Conjecture 2 of \cite{HOW}.
(Originally with increasing columns -- but the convention of decreasing
columns is clearly equivalent and will be more natural later.)
Theorem 1 of \cite{HOW} proves the MCPC for monotone column matrices
in which every entry is either $0$ or $1$. Other special cases appear in
\cite{H,HOW,HV,KYZ}, either for $n\leq 4$ or for rather restrictive conditions
on the entries of $A$. In this paper we prove the MCPC in full
generality\footnote{Thus, it was not really a permanent conjecture.}.
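As a concrete numerical illustration of the conjecture (the matrix below is an ad hoc example, not taken from the references), one can expand $\mathrm{per}(zJ_n+A)$ over all $n!$ permutations and inspect its roots:

```python
import itertools
import numpy as np

def permanent_poly(A):
    """Coefficients of per(z*J_n + A) in z, highest degree first,
    obtained by expanding the permanent over all n! permutations."""
    n = len(A)
    coeffs = np.zeros(n + 1)
    for sigma in itertools.permutations(range(n)):
        term = np.array([1.0])  # running product of the factors (z + a[i][sigma[i]])
        for i in range(n):
            term = np.convolve(term, [1.0, A[i][sigma[i]]])
        coeffs += term
    return coeffs

# a monotone column matrix: entries weakly decrease down each column
A = [[3.0, 2.0, 5.0],
     [1.0, 2.0, 0.0],
     [0.0, -1.0, -2.0]]
p = permanent_poly(A)                              # 6z^3 + 20z^2 - 2z - 21
assert np.max(np.abs(np.roots(p).imag)) < 1e-8     # all three roots are real
```

Here the polynomial is $6z^3+20z^2-2z-21$, whose three roots are all real, as the conjecture predicts.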
In fact we prove more. The following multivariate version of the MCPC
originates in \cite{HV}. Let $Z_n=\mathrm{diag}(z_1,\ldots,z_n)$ be an $n$-by-$n$
diagonal matrix of algebraically independent commuting indeterminates
$\mathbf{z}=\{z_1,\ldots,z_n\}$. A polynomial $f(z_1,\ldots,z_n)\in\CC[\mathbf{z}]$ is
\emph{stable} provided that either $f\equiv 0$ or whenever $w_j\in\CC$ are
such that $\Im(w_j)>0$ for all $1\leq j\leq n$, then $f(w_1,\ldots,w_n)\neq 0$.
A stable polynomial with real coefficients is \emph{real stable}.
\\
\noindent\textbf{The Multivariate MCP Conjecture (MMCPC).}\\
If $A$ is an $n$-by-$n$ monotone column matrix then $\mathrm{per}(J_nZ_n+A)$
is a real stable polynomial in $\RR[\mathbf{z}]$.\\
Note that $J_nZ_n+A=(z_j+a_{ij})$, so that $z_j$ is associated with the $j$-th
column, for each $1\leq j\leq n$. We also write
$\mathrm{per}(J_nZ_n+A)=\mathrm{per}(z_j+a_{ij})$ as it seems clearer. The MMCPC implies the
MCPC since if one sets all $z_j=z$, then $\mathrm{per}(z+a_{ij})$ is a polynomial
in one variable with real coefficients; this diagonalization preserves stability
(see Lemma \ref{basic}(c) below), and a univariate real polynomial is stable if and
only if it has only real roots.
In Section 2 we review some results from the theory of stable polynomials which
are required for our proofs.
In Section 3 we reduce the MMCPC to the case of $\{0,1\}$-matrices which are
weakly decreasing down columns and weakly increasing from left to right across
rows -- these we call \emph{Ferrers matrices} for convenience.
Then we further transform the MMCPC for Ferrers matrices,
derive a differential recurrence relation for the resulting polynomials, and
use this and the results of Section 2 to prove the conjecture by induction.
In Section 4 we extend these results to sub-permanents of rectangular matrices,
derive a cycle-counting extension of one of them, discuss a multivariate stable
generalization of Eulerian polynomials, present a new proof of Grace's
apolarity theorem and derive new permanental inequalities.
\section{Stable polynomials.}
Over a series of several papers, Borcea and Br\"and\'en have developed
the theory of stable polynomials into a powerful and flexible technique.
The results we need are taken from \cite{BB4,BB5,Br}; see also Sections 2
and 3 of \cite{W}.
Let $\EuScript{H}=\{w\in\CC:\ \Im(w)>0\}$ denote the open upper
half of the complex plane, and let $\overline{\EuScript{H}}$ denote its closure in $\CC$.
As above, $\mathbf{z}=\{z_1,\ldots,z_n\}$ is a set of $n$ indeterminates.
For $f\in\CC[\mathbf{z}]$ and $1\leq j\leq n$, let $\deg_{z_j}(f)$ denote the degree of $z_j$
in $f$.
\begin{LMA}[see Lemma 2.4 of \cite{W}] \label{basic}
These operations preserve stability of polynomials in $\CC[\mathbf{z}]$.\\
\textup{(a)}\ \textup{Permutation:}\ for any permutation $\sigma\in\S_n$,
$f \mapsto f(z_{\sigma(1)},\ldots,z_{\sigma(n)})$.\\
\textup{(b)}\ \textup{Scaling:}\ for $c\in\CC$ and $\a\in\RR^{n}$ with
$\a>\boldsymbol{0}$, $f \mapsto cf(a_{1}z_{1},\ldots,a_{n}z_{n})$.\\
\textup{(c)}\ \textup{Diagonalization:}\ for $1\leq i<j\leq n$,
$f \mapsto f(\mathbf{z})|_{z_i=z_j}$.\\
\textup{(d)}\ \textup{Specialization:}\ for $a\in\overline{\EuScript{H}}$,
$f \mapsto f(a, z_{2},\ldots,z_{n})$.\\
\textup{(e)}\ \textup{Inversion:}\ if $\deg_{z_1}(f)=d$,
$f \mapsto z_{1}^{d}f(-z_{1}^{-1},z_{2},\ldots,z_{n})$.\\
\textup{(f)}\ \textup{Translation:}\
$f \mapsto f_1 = f(z_1+t,z_2,\ldots,z_n)\in\CC[\mathbf{z},t]$.\\
\textup{(g)}\ \textup{Differentiation:}\
$f \mapsto \partial f(\mathbf{z})/\partial z_1$.
\end{LMA}
\begin{proof}
Only part (f) is not made explicit in \cite{BB4,BB5,W}. But clearly if
$z_1\in\EuScript{H}$ and $t\in\EuScript{H}$ then $z_1+t\in\EuScript{H}$, from which the result follows.
\end{proof}
Of course, parts (d,e,f,g) apply to any index $j$ as well, by permutation.
Part (g) is the only difficult one -- it is essentially the Gauss-Lucas Theorem.
\begin{LMA} \label{coeffs}
Let $f(\mathbf{z},t)\in\CC[\mathbf{z},t]$ be stable, and let
$$
f(\mathbf{z},t) = \sum_{k=0}^d f_k(\mathbf{z}) t^k
$$
with $f_d(\mathbf{z})\not\equiv 0$. Then $f_k(\mathbf{z})$ is stable for all $0\leq k\leq d =
\deg_t(f)$.
\end{LMA}
\begin{proof}
Consider any $0\leq k\leq d$. Clearly $f_k(\mathbf{z})$ is a constant multiple of $\partial^k f(\mathbf{z},t)/\partial t^k |_{t=0}$, which is stable by Lemma \ref{basic}(d, g).
\end{proof}
Polynomials $g, h \in \RR[\mathbf{z}]$ are in \emph{proper position}, denoted
by $g \ll h$, if the polynomial $h + \i g$ is stable. This is the
multivariate analogue of interlacing roots for univariate polynomials with only
real roots.
\begin{PROP}[Lemma 2.8 of \cite{BB5} and Theorem 1.6 of \cite{BB4}] \label{HB-O}
Let $g,h\in\RR[\mathbf{z}]$.\\
\textup{(a)}\ Then $h\ll g$ if and only if $g+th\in\RR[\mathbf{z},t]$ is stable.\\
\textup{(b)}\ Then $ag+bh$ is stable for all $a,b\in\RR$ if and only if
either $h\ll g$ or $g\ll h$.
\end{PROP}
It then follows from Lemma \ref{basic}(d,g) that if $h \ll g$ then both $h$ and $g$
are stable (or identically zero).
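In one variable, Proposition~\ref{HB-O}(a) recovers the classical picture: if the roots of univariate real-rooted $g$ and $h$ interlace, every real combination $g+th$ is real-rooted. A minimal numerical sketch (the polynomials are chosen here for illustration, not taken from the cited sources):

```python
import numpy as np

g = np.poly1d([1.0, -4.0, 3.0])   # (z - 1)(z - 3)
h = np.poly1d([1.0, -2.0])        # z - 2: its root interlaces those of g
for t in np.linspace(-10.0, 10.0, 41):
    roots = (g + t * h).roots     # g + t*h = z^2 + (t-4)z + (3-2t)
    # discriminant is t^2 + 4 > 0, so the roots are always real
    assert np.allclose(roots.imag, 0.0)
```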
\begin{PROP}[Lemma 2.6 of \cite{BB4}] \label{proper}
Suppose that $g\in \RR[\mathbf{z}]$ is stable. Then the sets
$$
\{h \in \RR[\mathbf{z}] : g\ll h \} \quad \mbox{ and }
\quad \{h \in \RR[\mathbf{z}] : h \ll g\}
$$
are convex cones containing $g$.
\end{PROP}
\begin{PROP}\label{cone}
Let $V$ be a real vector space, $\phi : V^{n}\rightarrow \RR$
a multilinear form, and $e_1,\ldots, e_n, v_2, \ldots, v_n$ fixed vectors
in $V$. Suppose that the polynomial
$$
\phi(e_1, v_2 +z_2e_2, \ldots, v_n + z_ne_n)
$$
in $\RR[\mathbf{z}]$ is not identically zero. Then the set of all $v_1 \in V$
for which the polynomial
$$
\phi(v_1+z_1e_1, v_2 +z_2e_2, \ldots, v_n + z_ne_n)
$$
is stable is either empty or a convex cone (with apex $0$) containing
$e_1$ and $-e_1$.
\end{PROP}
\begin{proof}
Let $C$ be the set of all $v_1 \in V$ for which the polynomial
$\phi(v_1+z_1e_1, v_2 +z_2e_2, \ldots, v_n + z_ne_n)$
is stable. For $v\in V$ let $F_v = \phi(v, v_2 +z_2e_2, \ldots, v_n + z_ne_n)$.
Since
$$
\phi(v_1+z_1e_1, v_2 +z_2e_2, \ldots, v_n + z_ne_n)= F_{v_1} + z_1F_{e_1},
$$
we have $C = \{v \in V : F_{e_1} \ll F_{v}\}$. Moreover since $F_{\lambda v+ \mu w}
= \lambda F_v + \mu F_w$ it follows from Proposition \ref{proper} that $C$ is a
convex cone provided that $C$ is non-empty. If $C$ is nonempty then
$F_v + z_1F_{e_1}$ is stable for some $v \in V$. But then $F_{e_1}$ is stable,
and so is
$$
(\pm 1 + z_1)F_{e_1} = \phi(\pm e_1+z_1e_1, v_2 +z_2e_2, \ldots, v_n + z_ne_n)
$$
which proves that $\pm e_1 \in C$.
\end{proof}
(Of course, by permuting the indices Proposition \ref{cone} applies to
any index $j$ as well.)
Let $\CC[\mathbf{z}]^\mathsf{ma}$ denote the vector subspace of \emph{multiaffine} polynomials:\
that is, polynomials of degree at most one in each indeterminate.
\begin{PROP}[Theorem 5.6 of \cite{Br}] \label{Delta}
Let $f\in\RR[\mathbf{z}]^\mathsf{ma}$ be multiaffine. The following are equivalent:\\
\textup{(a)}\ $f$ is real stable.\\
\textup{(b)}\ For all $1\leq i<j\leq n$ and all $\a\in\RR^n$,
$f_i(\a)f_j(\a)-f(\a)f_{ij}(\a)\geq 0$, in which the subscripts denote
partial differentiation.
\end{PROP}
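A toy instance of criterion (b) with $n=2$ (again, the polynomials are chosen here for illustration): for the multiaffine $f=z_1z_2-1$ one finds $f_1f_2-ff_{12}\equiv 1$, certifying real stability, while for $z_1z_2+1$, which vanishes at $z_1=z_2=\i$, the same quantity is identically $-1$:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-5.0, 5.0, size=(200, 2))   # sample real points

def criterion(f, f1, f2, f12):
    """f1*f2 - f*f12 sampled at random real points (the n = 2 case)."""
    return np.array([f1(a) * f2(a) - f(a) * f12(a) for a in pts])

# f = z1*z2 - 1: criterion is identically 1  ->  f is real stable
stable = criterion(lambda a: a[0] * a[1] - 1, lambda a: a[1],
                   lambda a: a[0], lambda a: 1.0)
assert np.all(stable >= 0)

# g = z1*z2 + 1 vanishes at z1 = z2 = i: criterion is identically -1
unstable = criterion(lambda a: a[0] * a[1] + 1, lambda a: a[1],
                     lambda a: a[0], lambda a: 1.0)
assert np.all(unstable < 0)
```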
A linear transformation $T:\CC[\mathbf{z}]\rightarrow\CC[\mathbf{z}]$ \emph{preserves stability}
if $T(f)(\mathbf{z})$ is stable whenever $f(\mathbf{z})$ is stable.
\begin{PROP}[Theorem 1.1 of \cite{BB5}] \label{BB}
Let $T:\CC[\mathbf{z}]^{\mathsf{ma}}\rightarrow\CC[\mathbf{z}]$ be a linear transformation.
Then $T$ preserves stability if and only if either\\
\textup{(a)}\ $T(f) = \eta(f)\cdot p$ for some linear functional
$\eta:\CC[\mathbf{z}]^{\mathsf{ma}}\rightarrow\CC$ and stable $p\in\CC[\mathbf{z}]$, or\\
\textup{(b)}\ the polynomial $G_T(\mathbf{z},\w)=T \prod_{j=1}^n(z_j+w_j)$
is stable in $\CC[\mathbf{z},\w]$.
\end{PROP}
The \emph{complexification} $T_\CC:\CC[\mathbf{z}]\rightarrow\CC[\mathbf{z}]$ of a linear
transformation $T:\RR[\mathbf{z}]\rightarrow\RR[\mathbf{z}]$ is defined as follows.
For any $f\in\CC[\mathbf{z}]$ write $f = g + \i h$ with $g,h\in\RR[\mathbf{z}]$, which can be
done uniquely. Then $T_\CC(f) = T(g)+\i T(h)$. Let also $\hat{T}_\CC(f) = T(g)-\i T(h)$.
\begin{PROP} \label{R-to-C}
Let $T:\RR[\mathbf{z}]\rightarrow\RR[\mathbf{z}]$ be a linear transformation and let $f \in \CC[\mathbf{z}]$ be a stable polynomial.
If $T$ preserves real stability then either $T_\CC(f)$ is stable, or $\hat{T}_\CC(f)$ is stable.
\end{PROP}
\begin{proof}
Let $f=g + \i h\in\CC[\mathbf{z}]$ with $g,h\in\RR[\mathbf{z}]$, and assume that $f$ is
stable. Then $h\ll g$ by definition. By Proposition \ref{HB-O}(b),
it follows that $ah+bg$ is real stable for all $a,b\in\RR$. Therefore,
$aT(h)+bT(g)$ is real stable for all $a,b\in\RR$. By Proposition \ref{HB-O}(b)
again, either $T(h)\ll T(g)$ or $T(g)\ll T(h)$. By Proposition \ref{HB-O}(a),
either $T(g)+\i T(h)=T_\CC(f)$ is stable, or $T(g)-\i T(h) =
\hat{T}_\CC(f)$ is stable.
\end{proof}
\section{Proof of the MMCPC.}
\subsection{Reduction to Ferrers matrices.}
\begin{LMA}\label{ferrers}
If $\mathrm{per}(z_j+a_{ij})$ is stable for all Ferrers matrices,
then the MMCPC is true.
\end{LMA}
\begin{proof}
If $\mathrm{per}(z_j+a_{ij})$ is stable for all Ferrers matrices, then by
permuting the columns of such a matrix, the same is true for all monotone
column $\{0,1\}$-matrices. Now let $A=(a_{ij})$ be an arbitrary $n$-by-$n$
monotone column matrix. We will show that $\mathrm{per}(z_j+a_{ij})$ is stable by
$n$ applications of Proposition \ref{cone}.
Let $V$ be the vector space of column vectors of length $n$.
The multilinear form $\phi$ we consider is the permanent of an $n$-by-$n$
matrix obtained by concatenating $n$ vectors in $V$.
Let each of $e_1,\ldots,e_n$ be the all-ones vector in $V$.
Initially, let $v_1,v_2,\ldots,v_n$ be arbitrary monotone $\{0,1\}$-vectors.
Then $\phi(v_1+z_1e_1,\ldots,v_n+z_ne_n) = \mathrm{per}(J_nZ_n+H)$
for some monotone column $\{0,1\}$-matrix $H$.
One can specialize any number of $v_j$ to the zero vector, and
any number of $z_j$ to $1$, and the result is not identically zero.
By hypothesis, all these polynomials are stable.
Now we proceed by induction.
Assume that if $v_1,\ldots,v_{j-1}$ are the first $j-1$ columns of $A$,
and if $v_j,\ldots,v_n$ are arbitrary monotone $\{0,1\}$-columns, then
$\phi(v_1+z_1e_1,\ldots,v_n+z_ne_n)$ is stable. (The base case, $j=1$,
is the previous paragraph.) Putting $v_j=0$ and $z_j=1$, the resulting
polynomial is not identically zero. By Proposition \ref{cone} (applied to index $j$),
the set of vectors $v_j$ such that $\phi(v_1+z_1e_1,\ldots,v_n+z_ne_n)$ is
stable is a convex cone containing $\pm e_j$.
Moreover, it contains all monotone $\{0,1\}$-columns, by hypothesis.
Now, any monotone column of real numbers can be written as a nonnegative
linear combination of $-e_1$ and monotone $\{0,1\}$-columns, and
hence is in this cone. Thus, we may take $v_1,\ldots,v_{j-1},v_j$ to be the
first $j$ columns of $A$, $v_{j+1},\ldots,v_n$ to be arbitrary monotone
$\{0,1\}$-columns, and the resulting polynomial is stable.
This completes the induction step.
After the $n$-th step we find that $\mathrm{per}(J_nZ_n+A)$ is stable.
\end{proof}
\subsection{A more symmetrical problem.}
Let $A=(a_{ij})$ be an $n$-by-$n$ Ferrers matrix, and let $\mathbf{z}=\{z_1,\ldots,z_n\}$.
For each $1\leq j\leq n$, let $y_j=(z_j+1)/z_j$, and let
$Y_n=\mathrm{diag}(y_1,\ldots,y_n)$. The matrix obtained from $J_nZ_n+A$ by factoring $z_j$
out of column $j$ for all $1\leq j\leq n$ is $AY_n+J_n-A=(a_{ij}y_j+1-a_{ij})$.
It follows that
\begin{equation} \label{z-to-y}
\mathrm{per}(z_j+a_{ij})= z_1 z_2 \cdots z_n\cdot\mathrm{per}(a_{ij}y_j+1-a_{ij}).
\end{equation}
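Identity (\ref{z-to-y}) is easy to spot-check numerically; the $3\times 3$ Ferrers matrix below is our own small example:

```python
import itertools
import math
import random

def per(M):
    """Permanent by direct expansion over permutations."""
    n = len(M)
    return sum(math.prod(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

A = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]                      # a 3x3 Ferrers matrix
random.seed(0)
z = [complex(random.random(), random.random() + 0.1) for _ in range(3)]
y = [(zj + 1) / zj for zj in z]      # y_j = (z_j + 1)/z_j

lhs = per([[z[j] + A[i][j] for j in range(3)] for i in range(3)])
rhs = z[0] * z[1] * z[2] * per([[A[i][j] * y[j] + 1 - A[i][j]
                                 for j in range(3)] for i in range(3)])
assert abs(lhs - rhs) < 1e-9
```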
\begin{LMA} \label{y-lma}
For a Ferrers matrix $A=(a_{ij})$, $\mathrm{per}(z_j+a_{ij})$ is stable if and
only if $\mathrm{per}(a_{ij}y_j+1-a_{ij})$ is stable.
\end{LMA}
\begin{proof}
The polynomials are not identically zero.
Notice that $\Im(z_j)>0$ if and only if $\Im(y_j)=\Im(1+z_j^{-1})<0$.
If $\mathrm{per}(z_j+a_{ij})$ is stable, then $\mathrm{per}(a_{ij}y_j+1-a_{ij})\neq 0$
whenever $\Im(y_j)<0$ for all $1\leq j\leq n$. Since this polynomial has
real coefficients, it follows that it is stable. The converse is similar.
\end{proof}
The set of $n$-by-$n$ Ferrers matrices has the following duality
$A\mapsto A^\vee = J_n - A^{\top}$:\ transpose and exchange
zeros and ones. That is, $A^\vee = (a_{ij}^\vee)$ in which $a_{ij}^\vee =
1-a_{ji}$ for all $1\leq i,j\leq n$. Note that $(A^\vee)^\vee=A$.
However, the form of the expression $\mathrm{per}(a_{ij}y_j+1-a_{ij})$ is not
preserved by this symmetry. To remedy this defect, introduce new indeterminates
$\x=\{x_1,\ldots,x_n\}$ and consider the matrix $B(A)=(b_{ij})$ with entries
$b_{ij} = a_{ij}y_j+(1-a_{ij})x_{i}$ for all $1\leq i,j\leq n$.
For example, if
$$
A = \left[\begin{array}{lllll}
0 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0
\end{array}\right]
\hspace{5mm}\mathrm{then}\hspace{5mm}
B(A) = \left[\begin{array}{lllll}
x_1 & y_2 & y_3 & y_4 & y_5 \\
x_2 & x_2 & y_3 & y_4 & y_5 \\
x_3 & x_3 & y_3 & y_4 & y_5 \\
x_4 & x_4 & x_4 & y_4 & y_5 \\
x_5 & x_5 & x_5 & x_5 & x_5
\end{array}\right].
$$
For emphasis, we may write $B(A;\x;\y)$ to indicate that the row variables are
$\x$ and the column variables are $\y$. The matrices $B(A)$ and $B(A^\vee)$ have
the same general form, and in fact
\begin{eqnarray} \label{dual1}
\mathrm{per}(B(A^\vee;\x;\y)) = \mathrm{per}(B(A;\y;\x)).
\end{eqnarray}
Clearly $\mathrm{per}(B(A))$ specializes to $\mathrm{per}(a_{ij}y_j+1-a_{ij})$ by setting
$x_i=1$ for all $1\leq i\leq n$. We will show that $\mathrm{per}(B(A))$ is stable,
for any Ferrers matrix $A$. By Lemmas \ref{basic}(d), \ref{y-lma}, and
\ref{ferrers}, this will imply the MMCPC.
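The duality (\ref{dual1}) can be tested the same way (brute-force permanent expansion, a sample Ferrers matrix, random real values for $\x$ and $\y$):

```python
import itertools
import math
import random

def per(M):
    """Permanent by direct expansion over permutations."""
    n = len(M)
    return sum(math.prod(M[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

def B(A, x, y):
    """B(A; x; y): entry y_j where a_ij = 1, else x_i."""
    n = len(A)
    return [[y[j] if A[i][j] else x[i] for j in range(n)] for i in range(n)]

A = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
Adual = [[1 - A[j][i] for j in range(3)] for i in range(3)]   # J - A^T

random.seed(1)
x = [random.random() for _ in range(3)]
y = [random.random() for _ in range(3)]

# per(B(A^v; x; y)) = per(B(A; y; x))
assert abs(per(B(Adual, x, y)) - per(B(A, y, x))) < 1e-9
```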
\subsection{A differential recurrence relation.}
Next, we derive a differential recurrence relation for polynomials of the form
$\mathrm{per}(B(A))$, for $A$ an $n$-by-$n$ Ferrers matrix.
There are two cases:\ either $a_{nn}=0$ or $a_{nn}=1$.
Replacing $A$ by $A^\vee$ and using (\ref{dual1}), if necessary, we can assume
that $a_{nn}=0$.
\begin{LMA} \label{diff-reln}
Let $A=(a_{ij})$ be an $n$-by-$n$ Ferrers matrix with $a_{nn}=0$,
let $k\geq 1$ be the number of $0$'s in the last column of $A$,
and let $A^\circ$ be the matrix obtained from $A$ by deleting the
last column and the last row of $A$. Then
$$
\mathrm{per}(B(A)) =
k x_n\, \mathrm{per}(B(A^\circ)) + x_n y_n\, \partial \mathrm{per}(B(A^\circ)),
$$
in which
$$
\partial =
\sum_{i=1}^{n-k}\frac{\partial}{\partial x_i}
+ \sum_{j=1}^{n-1}\frac{\partial}{\partial y_j}.
$$
\end{LMA}
\begin{proof}
In the permutation expansion of $\mathrm{per}(B(A))$ there are two types of terms:\
those that do not contain $y_n$ and those that do. Let $T_\sigma$ be the
term of $\mathrm{per}(B(A))$ indexed by $\sigma\in\S_n$. For each
$n-k+1\leq i\leq n$, let $\EuScript{C}} \newcommand{\CC}{\mathbb{C}_i$ be the set of those terms $T_\sigma$ such
that $\sigma(i)=n$; for a term in $\EuScript{C}} \newcommand{\CC}{\mathbb{C}_i$ the variable chosen in the last column
is $x_{i}$. Let $\EuScript{D}} \renewcommand{\d}{\mathbf{d}$ be the set of all other terms; for a term in $\EuScript{D}} \renewcommand{\d}{\mathbf{d}$ the
variable chosen in the last column is $y_n$.
For every permutation $\sigma\in\S_n$, let $(i_\sigma,j_\sigma)$ be such
that $\sigma(i_\sigma)=n$ and $\sigma(n)=j_\sigma$, and define
$\pi(\sigma)\in\S_{n-1}$ by putting $\pi(i)=\sigma(i)$ if $i\neq i_\sigma$,
and $\pi(i_\sigma)=j_\sigma$ (if $i_\sigma\neq n$).
Let $T_{\pi(\sigma)}$ be the corresponding term of $\mathrm{per}(B(A^\circ))$.
See Figure 1 for an example. Informally, $\pi(\sigma)$ is obtained
from $\sigma$, in word notation, by replacing the largest element with the last
element, unless the largest element is last, in which case it is deleted.
\begin{figure}[tb]
$$
\left[\begin{array}{cccccc}
\cdot & \cdot & \Box & \cdot & \cdot & \cdot \\
\Box & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \Box \\
\cdot & \cdot & \cdot & \cdot & \Box & \cdot \\
\cdot & \Box & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \Box & \cdot & \cdot
\end{array}\right]
\quad \mapsto \quad
\left[\begin{array}{cccccc}
\cdot & \cdot & \Box & \cdot & \cdot & \\
\Box & \cdot & \cdot & \cdot & \cdot & \\
\cdot & \cdot & \cdot & \Box & \cdot & \\
\cdot & \cdot & \cdot & \cdot & \Box & \\
\cdot & \Box & \cdot & \cdot & \cdot & \\
& & & & &
\end{array}\right]
$$
\caption{$\sigma= 3\ 1\ 6\ 5\ 2\ 4$ maps to $\pi(\sigma)= 3\ 1\ 4\ 5\ 2$.}
\end{figure}
For each $n-k+1\leq i\leq n$, consider all permutations $\sigma$ indexing terms
in $\EuScript{C}} \newcommand{\CC}{\mathbb{C}_i$. The mapping $T_\sigma \mapsto T_{\pi(\sigma)}$ is a bijection from the
terms in $\EuScript{C}} \newcommand{\CC}{\mathbb{C}_i$ to all the terms in $\mathrm{per}(B(A^\circ))$. Also, for each
$\sigma\in\EuScript{C}} \newcommand{\CC}{\mathbb{C}_i$, $T_\sigma = x_n T_{\pi(\sigma)}$. Thus, for each
$n-k+1\leq i\leq n$, the sum of all terms in $\EuScript{C}} \newcommand{\CC}{\mathbb{C}_i$ is $x_n \mathrm{per}(B(A^\circ))$.
Next, consider all permutations $\sigma$ indexing terms in $\EuScript{D}} \renewcommand{\d}{\mathbf{d}$.
The mapping $T_\sigma\mapsto T_{\pi(\sigma)}$ is $(n-k)$-to-one
from $\EuScript{D}} \renewcommand{\d}{\mathbf{d}$ to the set of all terms in $\mathrm{per}(B(A^\circ))$, since one
needs both $\pi(\sigma)$ and $i_\sigma$ to recover $\sigma$.
Let $v_\sigma$ be the variable in position $(i_\sigma,j_\sigma)$ of $B(A^\circ)$.
Then $v_\sigma T_\sigma = x_n y_n T_{\pi(\sigma)}$. It follows that
for any variable $w$ in the set $\{x_1,\ldots,x_{n-k},y_1,\ldots,y_{n-1}\}$,
the sum over all terms in $\EuScript{D}} \renewcommand{\d}{\mathbf{d}$ such that $v_\sigma=w$ is
$$
x_n y_n \frac{\partial}{\partial w} \mathrm{per}(B(A^\circ)).
$$
Since $v_\sigma$ is any element of the set
$\{x_1,\ldots,x_{n-k},y_1,\ldots,y_{n-1}\}$, it follows that the sum of all terms in
$\EuScript{D}} \renewcommand{\d}{\mathbf{d}$ is $x_n y_n \partial \mathrm{per}(B(A^\circ))$.
The preceding paragraphs imply the stated formula.
\end{proof}
\subsection{Finally, proof of the MMCPC}
\begin{THM} \label{perBA-stable}
For any $n$-by-$n$ Ferrers matrix $A$, $\mathrm{per}(B(A))$ is stable.
\end{THM}
\begin{proof}
As above, by replacing $A$ by $A^\vee$ if necessary, we may assume that
$a_{1n}=0$. We proceed by induction on $n$, the base case $n=1$ being trivial.
For the induction step, let $A^\circ$ be as in Lemma \ref{diff-reln}.
By induction, we may assume that $\mathrm{per}(B(A^\circ))$ is stable;
clearly this polynomial is multiaffine.
Thus, by Lemma \ref{diff-reln}, it suffices to prove that the linear
transformation $T=k+y_n\partial$ maps stable multiaffine polynomials to stable
polynomials if $k\geq 1$. This operator has the form $T=k+z_{m}\sum_{j=1}^{m-1}
\partial/\partial z_j$ (renaming the variables suitably). By Proposition \ref{BB}
it suffices to check that the polynomial
\begin{eqnarray*}
G_T(\mathbf{z},\w)
&=& T \prod_{j=1}^m(z_j+w_j)\\
&=& \left(k+z_m\sum_{j=1}^{m-1}\frac{1}{z_j+w_j}\right) \prod_{j=1}^m(z_j+w_j)
\end{eqnarray*}
is stable.
If $z_j$ and $w_j$ have positive imaginary parts for all $1\leq j\leq m$ then
$$
\xi = \frac{k}{z_m} + \sum_{j=1}^{m-1}\frac{1}{z_j+w_j}
$$
has strictly negative imaginary part (since $k\geq 1$), so $\xi\neq 0$ and thus $z_m\xi\neq 0$. Also, $z_j+w_j$
has positive imaginary part, so that $z_j+w_j\neq 0$ for each $1\leq j\leq m$.
It follows that $G_T(\mathbf{z},\w)\neq 0$, so that $G_T$ is stable, completing the
induction step and the proof.
\end{proof}
\begin{proof}[Proof of the MMCPC]
Let $A$ be any $n$-by-$n$ Ferrers matrix. By Theorem \ref{perBA-stable},
$\mathrm{per}(B(A))$ is stable. Specializing $x_i=1$ for all $1\leq i\leq n$,
Lemma \ref{basic}(d) implies that $\mathrm{per}(a_{ij}y_j+1-a_{ij})$ is stable.
Now Lemma \ref{y-lma} implies that $\mathrm{per}(z_j+a_{ij})$ is stable.
Finally, Lemma \ref{ferrers} implies that the MMCPC is true.
\end{proof}
Henceforth, we shall refer to the now-proven MMCPC as the MMCPT -- ``T'' for ``Theorem''.
\section{Further results.}
\subsection{Generalization to rectangular matrices.}
We can generalize Theorem \ref{perBA-stable} to rectangular matrices,
as follows. Let $A=(a_{ij})$ be an $m$-by-$n$ matrix.
As in the square case, $A$ is a \emph{Ferrers matrix} if it is a
$\{0,1\}$-matrix that is weakly decreasing down columns and weakly increasing across rows.
The matrix $B(A)$ is constructed just as in the square case:
$B(A)=(b_{ij}) = (a_{ij}y_j+(1-a_{ij})x_{i})$, using row variables
$\x=\{x_1,\ldots,x_m\}$ and column variables $\y=\{y_1,\ldots,y_n\}$.
For emphasis, we may write $B(A;\x;\y)$
to indicate that the row variables are $\x$ and the column variables are $\y$.
The symmetry $A\mapsto A^\vee$ takes an $m$-by-$n$ Ferrers matrix to
an $n$-by-$m$ Ferrers matrix.
Now let $k\leq \min\{m,n\}$. The \emph{$k$-permanent} of
an $m$-by-$n$ matrix $H=(h_{ij})$ is
$$
\mathrm{per}_k(H) = \sum_R \sum_C \sum_\beta \prod_{i\in R} h_{i,\beta(i)},
$$
in which $R$ ranges over all $k$-element subsets of $\{1,\ldots,m\}$,
$C$ ranges over all $k$-element subsets of $\{1,\ldots,n\}$, and
$\beta$ ranges over all bijections from $R$ to $C$. (Note that
$\mathrm{per}_0(H)=1$ for any matrix $H$.) In the case $k=m=n$
this reduces to the permanent of a square matrix.
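For concreteness, the $k$-permanent can be computed by brute force directly from this definition; the following sketch (the function name is ours, not the paper's) just makes the indexing over row subsets, column subsets, and bijections explicit.

```python
from itertools import combinations, permutations
from math import prod

def per_k(H, k):
    """k-permanent of an m-by-n matrix H (a list of rows):
    sum over k-subsets R of rows, k-subsets C of columns,
    and bijections beta: R -> C of prod_{i in R} H[i][beta(i)]."""
    m, n = len(H), len(H[0])
    return sum(prod(H[i][j] for i, j in zip(R, cols))
               for R in combinations(range(m), k)
               for C in combinations(range(n), k)
               for cols in permutations(C))
```

In particular, `per_k(H, 0) == 1` (an empty product), and for an $n$-by-$n$ matrix `per_k(H, n)` is the ordinary permanent.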
For an $m$-by-$n$ Ferrers matrix $A$ and $k\leq\min\{m,n\}$, note that
\begin{eqnarray} \label{dual2}
\mathrm{per}_k(B(A^\vee;\x;\y)) = \mathrm{per}_k(B(A;\y;\x)).
\end{eqnarray}
Thus, replacing $A$ by $A^\vee$ if necessary, we may assume that $m\leq n$.
\begin{PROP} \label{m-to-k}
Let $A=(a_{ij})$ be an $m$-by-$n$ Ferrers matrix.
Then $\mathrm{per}_k(B(A))$ is stable for all $k\leq \min\{m,n\}$.
\end{PROP}
\begin{proof}
Using (\ref{dual2}), if necessary, we may assume that $m\leq n$.
We begin by showing that $\mathrm{per}_m(B(A))$ is stable.
Let $A'$ be the $n$-by-$n$ Ferrers matrix obtained by concatenating $n-m$ rows
of $0$'s to the bottom of $A$.
One checks that
$$
\mathrm{per}(B(A'))= (n-m)!\,x_n x_{n-1}\cdots x_{m+1} \cdot \mathrm{per}_m(B(A)).
$$
By Theorem \ref{perBA-stable}, $\mathrm{per}(B(A'))$ is stable, and it follows
easily that $\mathrm{per}_m(B(A))$ is stable.
Now, let $J_{m,n}$ be the $m$-by-$n$ matrix of all 1's. Then
$$
\mathrm{per}_m(B(A)+tJ_{m,n})
= \sum_{k=0}^m \mathrm{per}_k(B(A)) \binom{n-k}{m-k}(m-k)! t^{m-k}.
$$
By Lemma \ref{basic}(c,f), this polynomial is stable.
Extracting the coefficient of $t^{m-k}$ from this, and dividing by
$ \binom{n-k}{m-k}(m-k)!$, Lemma \ref{coeffs} shows that $\mathrm{per}_k(B(A))$
is stable for all $0\leq k\leq m$.
\end{proof}
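The binomial expansion of $\mathrm{per}_m(B(A)+tJ_{m,n})$ used in the proof is in fact a general identity for any $m$-by-$n$ matrix $H$ with $m\le n$: each term of $\mathrm{per}_m(H+tJ_{m,n})$ takes $k$ entries from $H$ (giving a $\mathrm{per}_k$ term) and $m-k$ entries equal to $t$, and the $t$-entries can be completed to an injection into the remaining columns in $\binom{n-k}{m-k}(m-k)!$ ways. A small numerical check, using a brute-force $k$-permanent (our code, not the authors'):

```python
from itertools import combinations, permutations
from math import comb, factorial, prod

def per_k(H, k):
    m, n = len(H), len(H[0])
    return sum(prod(H[i][j] for i, j in zip(R, cols))
               for R in combinations(range(m), k)
               for C in combinations(range(n), k)
               for cols in permutations(C))

m, n, t = 2, 3, 5
H = [[1, 2, 3], [4, 5, 6]]
# left side: per_m of H with t added to every entry (i.e. H + tJ)
lhs = per_k([[h + t for h in row] for row in H], m)
# right side: the binomial expansion from the proof
rhs = sum(per_k(H, k) * comb(n - k, m - k) * factorial(m - k) * t ** (m - k)
          for k in range(m + 1))
assert lhs == rhs
```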
Proposition \ref{m-to-k} suggests the idea of a similar generalization of
the MMCPT:\ is it true that $\mathrm{per}_k(J_{m,n}Z_n+A)$ is stable for every
$m$-by-$n$ monotone column matrix $A$ and $k\leq\min\{m,n\}$?
This conjecture originates in \cite{HV}. One cannot
derive this from Proposition \ref{m-to-k}, however, because there is no
analogue of (\ref{z-to-y}) for $k$-permanents. Nonetheless, we can prove
this result for half the cases.
\begin{PROP}
Let $A$ be an $m$-by-$n$ monotone column matrix with $m\geq n$, and let
$k\leq n$. Then $\mathrm{per}_k(J_{m,n}Z_n+A)$ is stable.
\end{PROP}
\begin{proof}
Let $A'$ be the $m$-by-$m$ matrix obtained from $A$ by concatenating
$m-n$ zero columns to the right of $A$. Then
$$
\mathrm{per}(J_mZ_m+A') = (m-n)!z_m z_{m-1} \cdots z_{n+1}\cdot \mathrm{per}_n(J_{m,n}Z_n+A).
$$
Since $\mathrm{per}(J_mZ_m+A')$ is stable, it follows that $\mathrm{per}_n(J_{m,n}Z_n+A)$
is stable. By Lemma \ref{basic}(c,f), it follows that
$\mathrm{per}_n(J_{m,n}Z_n+A+tJ_{m,n})$ is stable.
Extracting the coefficient of $t^{n-k}$ from this, and dividing by
$ \binom{m-k}{n-k}(n-k)!$, Lemma \ref{coeffs} shows that $\mathrm{per}_k(J_{m,n}Z_n+A)$
is stable for all $0\leq k\leq n$.
\end{proof}
\subsection{A cycle-counting generalization.}
Theorem \ref{perBA-stable} can be generalized in another direction, as follows.
For each permutation $\sigma\in\S_n$, let $\mathrm{cyc}(\sigma)$ denote the number of
cycles of $\sigma$. For an indeterminate $\alpha$ and an
$n$-by-$n$ matrix $H=(h_{ij})$, the \emph{$\alpha$-permanent} of $H$ is
$$
\mathrm{per}(H;\alpha) = \sum_{\sigma\in\S_n} \alpha^{\mathrm{cyc}(\sigma)}
\prod_{i=1}^n h_{i,\sigma(i)}.
$$
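A brute-force evaluation of the $\alpha$-permanent (our sketch) only needs the cycle count of each permutation:

```python
from itertools import permutations
from math import prod

def cyc(sigma):
    """Number of cycles of sigma, given in 0-indexed one-line notation."""
    seen, count = set(), 0
    for i in range(len(sigma)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = sigma[j]
    return count

def alpha_per(H, alpha):
    n = len(H)
    return sum(alpha ** cyc(s) * prod(H[i][s[i]] for i in range(n))
               for s in permutations(range(n)))
```

Setting $\alpha = 1$ recovers the ordinary permanent, and for the identity matrix only $\sigma = \mathrm{id}$ contributes, giving $\alpha^n$.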
The numbers $\mathrm{cyc}(\sigma)$ behave well with respect to the duality
$A\mapsto A^\vee$:\ for any Ferrers matrix $A$,
\begin{eqnarray} \label{dual3}
\mathrm{per}(B(A^\vee;\x;\y);\alpha) = \mathrm{per}(B(A;\y;\x);\alpha).
\end{eqnarray}
\begin{LMA}
Let $A=(a_{ij})$ be an $n$-by-$n$ Ferrers matrix with $a_{nn}=0$,
let $k\geq 1$ be the number of $0$'s in the last column of $A$,
and let $A^\circ$ be the matrix obtained from $A$ by deleting the
last column and the last row of $A$. Then
$$
\mathrm{per}(B(A);\alpha) = (\alpha + k-1) x_n\, \mathrm{per}(B(A^\circ);\alpha)
+ x_n y_n\, \partial \mathrm{per}(B(A^\circ);\alpha),
$$
with $\partial$ as in Lemma \ref{diff-reln}.
\end{LMA}
\begin{proof}
Adopt the notation of the proof of Lemma \ref{diff-reln}. To obtain
the present result, observe that if $T_\sigma$ is in $\EuScript{C}} \newcommand{\CC}{\mathbb{C}_n$ then
$\mathrm{cyc}(\sigma) = 1 + \mathrm{cyc}(\pi(\sigma))$, and otherwise
$\mathrm{cyc}(\sigma)=\mathrm{cyc}(\pi(\sigma))$.
\end{proof}
\begin{PROP} \label{alpha}
For $\alpha> 0$ and $A$ a Ferrers matrix, $\mathrm{per}(B(A);\alpha)$ is stable.
\end{PROP}
\begin{proof}
Reprising the proof of Theorem \ref{perBA-stable}, it suffices to show
that an operator of the form
$$
T=(\alpha+k-1)+z_{m}\sum_{i=1}^{m-1} \partial/\partial z_i
$$
preserves stability when $k\geq 1$. The argument of the proof of Theorem \ref{perBA-stable} works
when $\alpha> 0$.
\end{proof}
For $\alpha> 0$ and $A$ a Ferrers matrix, specialize all $x_i=1$ and
diagonalize all $y_j=z$ in $\mathrm{per}(B(A);\alpha)$. By Lemma \ref{basic}(c,d),
the result is a (univariate) polynomial with only real roots. This special
case is also implied by Theorem 2.5 of \cite{H}.
\subsection{Multivariate stable Eulerian polynomials.}
Given a permutation $\sigma \in \S_n$, viewed as a linear sequence
$\sigma (1) \sigma (2) \cdots \sigma (n)$,
let $\text{L}(\sigma)$ denote the result of the following procedure. First
form the two-line array
\begin{eqnarray*}
\text{T}(\sigma) = \left (
\begin{matrix}
1 & 2 & \cdots & n \\
\sigma (1) & \sigma (2) & \cdots & \sigma (n)
\end{matrix}
\right ).
\end{eqnarray*}
Then, viewing $\text{T}(\sigma)$ as a map sending $i$ to $\sigma (i)$,
break $\text{T}(\sigma)$ into cycles,
with the smallest element of each cycle at the end, and
cycles listed left-to-right with smallest elements increasing.
Finally, erase the parentheses delimiting the cycles to form
a new linear sequence $\text{L}(\sigma)$. For example,
if $\sigma=341526978$, then $\text{T}(\sigma)$ has cycle decomposition
$(31)(452)(6)(987)$ and $\text{L}(\sigma)=314526987$.
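This map (a variant of Foata's fundamental transformation) is straightforward to implement; a sketch in 1-indexed one-line notation (our code):

```python
def L_map(sigma):
    """Cycle decomposition of sigma with each cycle written with its smallest
    element last, cycles listed by increasing smallest element, and the
    parentheses erased.  Input and output are 1-indexed one-line sequences."""
    n = len(sigma)
    seen, cycles = set(), []
    for start in range(1, n + 1):   # start is the smallest unseen element,
        if start in seen:           # hence the smallest in its cycle
            continue
        cycle, j = [], start
        while j not in seen:
            seen.add(j)
            cycle.append(j)
            j = sigma[j - 1]
        cycles.append(cycle[1:] + cycle[:1])  # rotate smallest element to the end
    cycles.sort(key=lambda c: c[-1])
    return [x for c in cycles for x in c]
```

For $\sigma=341526978$ this returns $314526987$, matching the example above, and one can confirm by brute force for small $n$ that the number of descents of $\mathrm{L}(\sigma)$ is equidistributed with the number of exceedences of $\sigma$, as in (\ref{Eulerian}).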
Let $\text{P}(\sigma)$ denote the placement of $n$ nonattacking rooks on
the squares $(i,\sigma (i))$, $1\le i \le n$.
As noted by Riordan \cite{Rio}, rooks in $\text{P}(\sigma)$ that occur above the
diagonal correspond to
exceedences (values of $i$ for which $\sigma (i) > i$) in $\sigma$ and descents
(values of $i$ for which $\sigma (i) > \sigma (i+1)$) in $\text{L}(\sigma)$.
Hence
\begin{eqnarray}
\label{Eulerian}
\sum_{\sigma \in \S_n}
z^{\text{exc}(\sigma)} =
\sum_{\sigma \in \S_n}
z^{\text{des}(\sigma)},
\end{eqnarray}
where $\text{exc}(\sigma)$ is the number of exceedences and
$\text{des}(\sigma)$ the number of descents
of $\sigma$.
The polynomial in (\ref{Eulerian}) is known as the {\it Eulerian polynomial}.
It is one of the classic examples in
combinatorics of a polynomial with only real roots.
Let $E_n=(e_{ij})$ denote the $n$-by-$n$ matrix with $e_{ij}=0$ if $i\geq j$
and $e_{ij}=1$ if $i<j$. Then $\mathrm{per}(B(E_n;\x;\y))$ is stable by Theorem
\ref{perBA-stable}. Let $\boldsymbol{1}$ be the vector (of appropriate size) of all ones.
From the above discussion, $\mathrm{per}(B(E_n;\boldsymbol{1};z,\ldots,z))$ equals the
Eulerian polynomial; by Lemma \ref{basic}(c,d) this is a univariate stable
polynomial with real coefficients, so it has only real roots.
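As a quick sanity check (our code; NumPy is assumed to be available), one can generate the Eulerian polynomial by brute force and confirm that its roots are real:

```python
from itertools import permutations
import numpy as np

n = 4
# coefficient of z^d is the number of permutations of [n] with d descents
coeffs = [0] * n
for s in permutations(range(1, n + 1)):
    coeffs[sum(1 for a, b in zip(s, s[1:]) if a > b)] += 1

assert coeffs == [1, 11, 11, 1]          # Eulerian numbers for n = 4
roots = np.roots(coeffs[::-1])           # np.roots wants highest degree first
assert np.allclose(roots.imag, 0.0)      # all roots are real (in fact negative)
```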
Similarly, we see that
\begin{align}
\label{BBB}
\mathrm{per}(B(E_n;\boldsymbol{1};\y)) &=
\sum_{\sigma \in \S_n}
\prod _{ \sigma(i) > \sigma(i+1)} y_{\sigma(i)} \\ &=
\sum_{\sigma \in \S_n}
\prod _{ \sigma(i) > i} y_{\sigma(i)}
\end{align}
is stable in $\{y_1,y_2,\ldots ,y_{n}\}$ (but $y_1$ does not really occur).
Letting $f=\mathrm{per}(B(E_n;\boldsymbol{1};\y))$, note that the partial derivative of $f$ with respect to $y_i$,
evaluated at all $y$-variables equal to $1$, equals the number of permutations
in $\S_n$ that have $i$ as a ``descent top'', i.e.\ that have $i$ followed immediately
by something less than $i$. Denoting this number by $\text{Top}(i;n)$,
applying Proposition \ref{Delta} to $f$ we get
\begin{align}
\text{Top}(i;n)\text{Top}(j;n) \ge n! \, \text{Top}(i,j;n),
\end{align}
where $\text{Top}(i,j;n)$ is the number of permutations in $\S_n$
having both $i$ and $j$ as descent
tops, with $2\le i<j\le n$. Dividing both sides of the above inequality by $(n!)^2$ shows that
occurrences of descent tops in a uniformly random permutation are negatively correlated.
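This inequality is easy to confirm by brute force for small $n$ (our code):

```python
from itertools import permutations
from math import factorial

def descent_tops(s):
    """Set of descent tops of a permutation in one-line notation."""
    return {a for a, b in zip(s, s[1:]) if a > b}

n = 4
perms = list(permutations(range(1, n + 1)))

def Top(*vals):
    """Number of permutations of [n] having every element of vals as a descent top."""
    return sum(1 for s in perms if set(vals) <= descent_tops(s))

for i in range(2, n + 1):
    for j in range(i + 1, n + 1):
        assert Top(i) * Top(j) >= factorial(n) * Top(i, j)
```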
More general forms of (\ref{BBB}) can be defined which still maintain stability.
First of all,
cycles in $\text{T}(\sigma)$
clearly translate into left-to-right minima in $\text{L}(\sigma)$, and so by Proposition
\ref{alpha} the polynomial
\begin{eqnarray*}
\sum_{\sigma \in \S_n}
\alpha ^{\text{LRmin}(\sigma)}
\prod _{\sigma(i) > \sigma(i+1)} y_{\sigma(i)}
\end{eqnarray*}
is stable in the $\y$ for $\alpha>0$, with $\text{LRmin}(\sigma)$
denoting the number of left-to-right minima of $\sigma$.
Secondly, the sum in (\ref{BBB}) can be replaced by a sum over permutations of a
multiset. For a given vector $\v =(v_1,\ldots ,v_t) \in \mathbb N ^t$,
let $N(\v) = \{1^{v_1}2^{v_2}\cdots t^{v_t}\}$ denote the multiset
with $v_i$ copies of $i$, and $\EuScript{M}} \newcommand{\m}{\mathbf{m}(\v)$ the set of multiset permutations
of $N(\v)$ (so for example $\EuScript{M}} \newcommand{\m}{\mathbf{m}(2,1) = \{112,121,211\}$).
Riordan \cite{Rio} noted that if we map our previous sequence $L(\sigma)$ to a multiset permutation
$m(\sigma)$ by replacing numbers $1$ through $v_1$ in $L(\sigma)$ by $1$'s,
numbers $v_1+1$ through $v_1+v_2$ by $2$'s, etc., we get a $\prod v_i!$ to $1$ map,
and furthermore certain squares above the diagonal where rooks in $P(\sigma)$
correspond to descents in $L(\sigma)$ no longer correspond to descents in
$m(\sigma)$. For example, if $v_1=2$, then $1$ and $2$ in $L(\sigma)$ both get mapped
to $1$ in $m(\sigma)$, so a rook on square $(1,2)$ no longer corresponds to a descent.
Let $n$ denote the sum of the coordinates of $\v$, and let
$Y(\v)$ be the sequence of variables obtained by starting with
$y_1,\ldots ,y_n$ and setting the first $v_1$ $y$-variables equal to $y_1$,
the next $v_2$ $y$-variables equal to $y_2$, etc. Then if $E(\v)$ is the
Ferrers matrix whose first $v_1$ columns are all zeros, the next $v_2$ columns have
ones in the top $v_1$ rows and zeros below, the next $v_3$ columns have ones in the
top $v_1+v_2$ rows and zeros below, etc., an easy extension of the argument
above implies
\begin{align}
\label{MBBB}
(1/\prod _i v_i !)\, \text{per}( B( E(\v);\boldsymbol{1};Y(\v) ) )
=\sum_{\sigma \in \EuScript{M}} \newcommand{\m}{\mathbf{m}(\v)}
\prod _{ \sigma(i) > \sigma(i+1)} y_{\sigma(i)}
\end{align}
is stable in the $y_i$.
This contains Simion's result \cite{Sim}, that the multiset Eulerian polynomial
has only real roots. If $\v$ has all ones, i.e. $N(\v)$ is a set, it reduces to our
previous result. Finally, we note that this argument also shows that if we replace the
condition $\sigma (i) > \sigma (i+1)$ by the more general condition
$\sigma (i) > \sigma (i+1)+j-1$, for any fixed positive integer $j$, we still get
stability.
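A quick check of the multiset case (our code; NumPy assumed): for $\v=(2,2)$ the multiset Eulerian polynomial works out to $1+4z+z^2$, which indeed has only real roots.

```python
from itertools import permutations
import numpy as np

def multiset_eulerian(word):
    """Coefficients of the sum over distinct rearrangements of `word`
    of z^{number of descents}."""
    coeffs = [0] * len(word)
    for s in set(permutations(word)):
        coeffs[sum(1 for a, b in zip(s, s[1:]) if a > b)] += 1
    return coeffs

c = multiset_eulerian((1, 1, 2, 2))
assert c == [1, 4, 1, 0]
roots = np.roots([1, 4, 1])        # z^2 + 4z + 1
assert np.allclose(roots.imag, 0.0)
```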
\subsection{Grace's Apolarity Theorem.}
Univariate complex polynomials $f(t)=\sum_{k=0}^n \binom{n}{k} a_k t^k$ and
$g(t)=\sum_{k=0}^n \binom{n}{k} b_k t^k$ are \emph{apolar} if $a_nb_n\neq 0$
and
$$
\sum_{k=0}^n \binom{n}{k} (-1)^{n-k} a_k b_{n-k} = 0.
$$
\begin{LMA} \label{per-wz}
Let $f(t)=\sum_{k=0}^n \binom{n}{k} a_k t^k$ and
$g(t)=\sum_{k=0}^n \binom{n}{k} b_k t^k$ be complex polynomials of degree $n$.
Let the roots of $f(t)$ be $z_1,\ldots,z_n$ and let the roots of $g(t)$ be
$w_1,\ldots,w_n$. Then
$$
\sum_{k=0}^n \binom{n}{k} (-1)^{n-k} a_k b_{n-k} = \frac{(-1)^n}{n!}\,a_n b_n\, \mathrm{per}(w_i-z_j).
$$
In particular, $f(t)$ and $g(t)$ are apolar if and only if $\mathrm{per}(w_i-z_j)=0$.
\end{LMA}
\begin{proof}
It suffices to prove this for monic polynomials $f(t)$ and $g(t)$.
For each permutation $\sigma\in\S_n$ there are $2^n$ terms in $\mathrm{per}(w_i-z_j)$,
since for each $1\leq i\leq n$ either $w_i$ or $-z_{\sigma(i)}$ can be chosen.
For each subset $R$ of rows of size $k$, and subset $C$ of columns of size
$n-k$, the monomial $\w^R\mathbf{z}^C$ is produced $k!(n-k)!$ times.
Since $(-1)^{k}\binom{n}{k}a_{n-k}$ is the $k$-th elementary symmetric function of
$\{z_1,\ldots,z_n\}$, and similarly for $(-1)^{k}\binom{n}{k}b_{n-k}$ and $\{w_1,\ldots,w_n\}$,
the result follows.
\end{proof}
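The identity can be checked numerically for random roots. The sketch below (our code) uses the binomial normalization above with monic $f,g$, so $a_n=b_n=1$; note that the exact constant relating the two sides, here taken as $(-1)^n/n!$, depends on these normalization conventions.

```python
from itertools import permutations
from math import comb, factorial, prod

def per(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def binomial_coeffs(roots):
    """Coefficients a_k with f(t) = sum_k C(n,k) a_k t^k for the monic
    polynomial with the given roots, via elementary symmetric functions."""
    n = len(roots)
    e = [0.0] * (n + 1)
    e[0] = 1.0
    for r in roots:                 # build e_1, ..., e_n incrementally
        for k in range(n, 0, -1):
            e[k] += r * e[k - 1]
    # coefficient of t^k in prod (t - z_j) is (-1)^(n-k) e_{n-k}
    return [(-1) ** (n - k) * e[n - k] / comb(n, k) for k in range(n + 1)]

zs, ws = [1.0, 2.0, -3.0], [4.0, -1.0, 0.5]
n = len(zs)
a, b = binomial_coeffs(zs), binomial_coeffs(ws)
lhs = sum(comb(n, k) * (-1) ** (n - k) * a[k] * b[n - k] for k in range(n + 1))
M = [[w - z for z in zs] for w in ws]       # (i,j) entry w_i - z_j
rhs = (-1) ** n * per(M) / factorial(n)     # a_n = b_n = 1 since f, g are monic
assert abs(lhs - rhs) < 1e-9
```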
\begin{LMA} \label{mobius}
Let $f(t)$ and $g(t)$ be polynomials of degree $n$.
Let $t\mapsto\phi(t)=(at+b)/(ct+d)$ be a M\"obius transformation,
with inverse $\phi^{-1}(t)=(\alpha t+\beta)/(\gamma t + \delta)$.
Let $\widehat{f}(t)=(\gamma t+\delta)^n f(\phi^{-1}(t))$ and
$\widehat{g}(t)=(\gamma t+\delta)^n g(\phi^{-1}(t))$ have degree $n$.
Then $\widehat{f}(t)$ and $\widehat{g}(t)$ are apolar if and only if
$f(t)$ and $g(t)$ are apolar.
\end{LMA}
\begin{proof}
Let the roots of $f(t)$ be $z_1,\ldots,z_n$ and let the roots of $g(t)$
be $w_1,\ldots,w_n$. Then the roots of $\widehat{f}(t)$ are
$\phi(z_1),\ldots,\phi(z_n)$ and the roots of $\widehat{g}(t)$ are
$\phi(w_1),\ldots,\phi(w_n)$. Consider the permanent $\mathrm{per}(\phi(w_i)-\phi(z_j))$.
The $(i,j)$-entry of this matrix is
$$
\frac{aw_i+b}{cw_i+d}-\frac{az_j+b}{cz_j+d}=\frac{(ad-bc)(w_i-z_j)}{(cw_i+d)(cz_j+d)}.
$$
Factor $(cw_i+d)^{-1}$ out of row $i$, factor $(cz_j+d)^{-1}$ out of column
$j$, and factor $ad-bc$ out of every row. Therefore
$$
\mathrm{per}(\phi(w_i)-\phi(z_j)) = \frac{(ad-bc)^n}{\prod_{h=1}^n (cw_h+d)(cz_h+d)}
\cdot \mathrm{per}(w_i-z_j).
$$
Since the prefactor on the right-hand side is neither zero nor infinite,
the result follows from Lemma \ref{per-wz}.
\end{proof}
A \emph{circular region} $\EuScript{A}} \renewcommand{\a}{\mathbf{a}$ is a proper subset of $\CC$ that is either
open or closed, and is bounded by either a circle or a straight line.
\begin{THM}[Grace's Apolarity Theorem] \label{grace}
Let $f(t)$ and $g(t)$ be apolar polynomials. If every root of $g(t)$
is in a circular region $\EuScript{A}} \renewcommand{\a}{\mathbf{a}$, then $f(t)$ has at least one root in $\EuScript{A}} \renewcommand{\a}{\mathbf{a}$.
\end{THM}
\begin{proof}
It suffices to prove this for monic polynomials $f(t)$ and $g(t)$, and
for open circular regions $\EuScript{A}} \renewcommand{\a}{\mathbf{a}$ since a closed circular region is the intersection
of all open circular regions which contain it. Let $t\mapsto\phi(t)$ be
a M\"obius transformation that maps $\EuScript{A}} \renewcommand{\a}{\mathbf{a}$ to the upper half-plane $\EuScript{H}$.
By Lemma \ref{mobius}, it suffices to prove this when the circular region
is $\EuScript{H}$ itself. Let the roots of $f(t)$ be
$z_1,\ldots,z_n$ and let the roots of $g(t)$ be $w_1,\ldots,w_n$.
If all of $w_1,\ldots,w_n$ are real numbers, then by permuting the rows of
$(w_i+z_j)$ we can assume that $w_1\geq\cdots\geq w_n$ without changing the
value of $\mathrm{per}(w_i+z_j)$; the matrix $(w_i+z_j)$ then has the form $J_nZ_n+A$ with $A$ a monotone column matrix. By the MMCPT,
$\mathrm{per}(w_i+z_j)$ is a real stable polynomial in $z_1,\ldots,z_n$.
In other words, the transformation $T:\RR[z]\rightarrow\RR[\mathbf{z}]$ defined by
$$
T(f(z)) = \mathrm{per}(w_i+z_j),
$$
where $f(z)=(z+w_1)(z+w_2)\cdots(z+w_n)$
preserves real stability. This is a linear transformation. Suppose that $f(z) \in \CC[z]$ is stable. By Proposition
\ref{R-to-C}, either $T_\CC(f(z))$ is stable or $\hat{T}_\CC(f(z))$ is stable.
Diagonalizing by setting $z_j=z$ for all $1\leq j\leq n$, we see that
$T_\CC(f)(z,\ldots,z)= n! f(z)$, so that $T_\CC(f(z))$ is stable.
Therefore $\mathrm{per}(w_i+z_j)$ is a stable polynomial
in $\CC[\mathbf{z}]$, for all $w_1,\ldots,w_n\in\EuScript{H}$. Therefore $\mathrm{per}(w_i+z_j)$
is a stable polynomial in $\CC[\w,\mathbf{z}]$. In fact, it satisfies a stronger stability property: if $z_j \in \overline{\EuScript{H}}$ and
$w_j \in \EuScript{H}$ for all $1\leq j\leq n$, then $\mathrm{per}(w_i+z_j) \neq 0$. Indeed, if we fix $\zeta_j \in \overline{\EuScript{H}}$ for all $1\leq j\leq n$, then
the polynomial $\mathrm{per}(w_i + \zeta_j) \in \CC[\w]$ is either identically zero or stable, by Lemma \ref{basic}(d); and it is clearly not identically zero.
Now assume that $w_i\in\EuScript{H}$ for all $1\leq i\leq n$. Arguing for a
contradiction, assume that $z_j\not\in\EuScript{H}$ for all $1\leq j\leq n$.
Then $-z_j\in\overline{\EuScript{H}}$ for all $1\leq j\leq n$. Hence $\mathrm{per}(w_i-z_j)\neq 0$ by the argument given in the previous paragraph. But $\mathrm{per}(w_i-z_j)=0$ by Lemma
\ref{per-wz}, since $f(t)$ and $g(t)$ are apolar. This contradiction completes
the proof.
\end{proof}
Our proof of Grace's apolarity theorem relies on the MMCPT. To avoid circularity, we should assure ourselves that the
proof of the MMCPT does not use any form of Grace's apolarity theorem. In fact, what we use in the proof of the MMCPT is that condition (b) in Proposition \ref{BB} implies that the operator preserves stability, and the proof of that implication does not use Grace's apolarity theorem.
\subsection{Permanental inequalities}
If $A$ is an $n$-by-$n$ matrix and $S \subseteq [n]=\{1,\ldots,n\}$, let $A_S$ be the matrix obtained by replacing the columns indexed by $S$ by columns of all ones.
\begin{CORO}\label{negass}
Let $A$ be an $n$-by-$n$ monotone matrix, $S \subset [n]$, and $i,j \in [n]\setminus S$. Then
$$
\mathrm{per}(A_{S\cup \{i\}})\mathrm{per}(A_{S\cup \{j\}}) \geq \mathrm{per}(A_{S\cup \{i,j\}}) \mathrm{per}(A_S).
$$
\end{CORO}
\begin{proof}
The proof follows immediately from Proposition \ref{Delta} applied to $\mathrm{per}(z_j+a_{ij})$.
\end{proof}
Note that Corollary \ref{negass} can be interpreted as saying that the permanent is pairwise negatively associated in its columns.
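For a small numerical illustration of Corollary \ref{negass} (our brute-force code, with 0-indexed columns):

```python
from itertools import combinations, permutations
from math import prod

def per(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def with_ones(A, S):
    """A_S: replace the columns indexed by S with all-ones columns."""
    n = len(A)
    return [[1 if j in S else A[i][j] for j in range(n)] for i in range(n)]

# a 3-by-3 monotone matrix: entries weakly decrease down each column
A = [[5, 4, 3],
     [2, 4, 1],
     [1, 0, 1]]
n = 3
for S in [set(), {0}]:
    rest = [j for j in range(n) if j not in S]
    for i, j in combinations(rest, 2):
        lhs = per(with_ones(A, S | {i})) * per(with_ones(A, S | {j}))
        rhs = per(with_ones(A, S | {i, j})) * per(with_ones(A, S))
        assert lhs >= rhs
```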
The \emph{generating polynomial} of a discrete measure $\mu : 2^{[n]} \rightarrow \RR_{\geq 0}$ is given by
$$
G_\mu(\mathbf{z})= \sum_{S \in 2^{[n]}} \mu(S) \prod_{j \in S}z_j.
$$
The measure $\mu$ is \emph{Rayleigh} if
\begin{equation}\label{Ra}
\frac {\partial G_\mu}{\partial z_i}(\x)\frac {\partial G_\mu}{\partial z_j}(\x) \geq \frac {\partial^2 G_\mu}{\partial z_i\partial z_j}(\x)G_\mu(\x),
\end{equation}
for all $\x \in \RR_{\geq 0}^n$, and it is called \emph{strongly Rayleigh} if \eqref{Ra} holds for all $\x \in \RR^n$. We refer
to \cite{BBL,W1} for more information on Rayleigh and strongly Rayleigh measures.
Suppose that $A$ is an $n$-by-$n$ monotone matrix with nonnegative coefficients. Consider the
discrete measure $\mu_A : 2^{[n]} \rightarrow \RR_{\geq 0}$ defined by $\mu_A(S) = \mathrm{per}(A_S)$. By Proposition \ref{Delta} we see that $\mu_A$ is strongly Rayleigh. This fact entails many inequalities.
\begin{CORO}\label{nlc}
Suppose that $A$ is an $n$-by-$n$ monotone matrix with nonnegative coefficients. Then
$$
\mathrm{per}(A_S)\mathrm{per}(A_T) \geq \mathrm{per}(A_{S\cup T})\mathrm{per}(A_{S \cap T}),
$$
for all $S, T \subseteq [n]$.
Moreover
$$
\mathrm{per}(A) \leq s_1\cdots s_n \frac {n!}{n^n},
$$
where $s_i$ is the sum of the elements in the $i$th column.
\end{CORO}
\begin{proof}
The first inequality holds for all Rayleigh measures, see \cite[Theorem 4.4]{W1}.
Let $\mu(S)= \mathrm{per}(A_{[n]\setminus S})/n!$. By the above $
\mu(S)\mu(T) \geq \mu(S\cup T)\mu(S \cap T)
$
for all $S, T \subseteq [n]$. Thus $\mu(S \cup T) \leq \mu(S)\mu(T)$ whenever $S\cap T=\emptyset$, and after iteration
$\mu([n]) \leq \mu(\{1\})\cdots \mu(\{n\})$. The proof now follows by observing that
$\mu(\{i\})= s_i/n$ for all $i \in [n]$.
One can also prove the last inequality by an elementary argument. If some column of $A$ contains two
distinct adjacent entries $b>a$, replace both by their average to obtain a matrix $A'$. It is plain to see that $\mathrm{per}(A) \leq \mathrm{per}(A')$. Iterating this procedure shows that $\mathrm{per}(A) \leq \mathrm{per}(B)$, where every entry in column $i$ of $B$ equals $s_i/n$. The inequality now follows.
\end{proof}
The second inequality in Corollary \ref{nlc} can be compared with the Van der Waerden conjecture
which asserts that the permanent of a doubly stochastic matrix is greater than or equal to $n!/n^n$. The Van der Waerden conjecture, stated in 1926, was proved by Falikman in 1981; the case of equality was settled by Egorychev. Gurvits has recently provided a beautiful proof of a vast generalization of the Van der Waerden conjecture using stable polynomials; see \cite{Gu} and
\cite[Section 8]{W}.
If $f : 2^{[n]} \rightarrow \RR$, and $\mu$ is a discrete measure on $2^{[n]}$ we let
$$
\int f d \mu = \sum_{S \in 2^{[n]}} f(S)\mu(S).
$$
A measure $\mu$ on $2^{[n]}$ is \emph{negatively associated} if for all increasing functions
$f,g : 2^{[n]} \rightarrow \RR$ depending on disjoint sets of variables
$$
\int fg d\mu \int d\mu \leq \int f d\mu \int g d\mu,
$$
see e.g. \cite{BBL}.
\begin{CORO}
Suppose that $A$ is an $n$-by-$n$ monotone matrix with nonnegative coefficients. Then the
discrete measure $\mu_A : 2^{[n]} \rightarrow \RR_{\geq 0}$, defined by $\mu_A(S) = \mathrm{per}(A_S)$, is negatively associated.
\end{CORO}
\begin{proof}
The corollary follows from the fact that $\mu_A$ is strongly Rayleigh; such measures are negatively associated, as was proved in \cite[Theorem 4.9]{BBL}.
\end{proof}
| {
"timestamp": "2010-10-18T02:05:31",
"yymm": "1010",
"arxiv_id": "1010.2565",
"language": "en",
"url": "https://arxiv.org/abs/1010.2565",
"abstract": "Let A be an n-by-n matrix of real numbers which are weakly decreasing down each column, Z_n = diag(z_1,..., z_n) a diagonal matrix of indeterminates, and J_n the n-by-n matrix of all ones. We prove that per(J_nZ_n+A) is stable in the z_i, resolving a recent conjecture of Haglund and Visontai. This immediately implies that per(zJ_n+A) is a polynomial in z with only real roots, an open conjecture of Haglund, Ono, and Wagner from 1999. Other applications include a multivariate stable Eulerian polynomial, a new proof of Grace's apolarity theorem and new permanental inequalities.",
"subjects": "Combinatorics (math.CO)",
"title": "Proof of the monotone column permanent conjecture"
} |
https://arxiv.org/abs/1912.05001 | Eigenvalue continuity and Geršgorin's theorem | Two types of eigenvalue continuity are commonly used in the literature. However, their meanings and the conditions under which continuities are used are not always stated clearly. This can lead to some confusion and needs to be addressed. In this note, we revisit the Geršgorin disk theorem and clarify the issue concerning the proofs of the theorem by continuity. | \section{Introduction}
In his seminal paper in 1931 \cite{Ger31}, Ger\v{s}gorin presented an important result about the localization of the eigenvalues of matrices. He showed that
(1) all eigenvalues of a square matrix lie in the union of the later so-called {\em Ger\v{s}gorin disks} and (2) if the union of some $m$ of the disks is disjoint from the remaining disks, then this union contains exactly $m$ eigenvalues (counted with algebraic multiplicities). The result was named after Ger\v{s}gorin as the {\em Ger\v{s}gorin disk theorem} due to its importance and applications for estimating and localizing eigenvalues.
Let $A=(a_{ij})$ be an $n\times n$ complex matrix and let $r_i=\sum_{j\not = i} |a_{ij}|$, $i=1, \dots, n$.
The set $D_i=\{z\in \Bbb C : |z-a_{ii}|\leq r_i\}$ is referred to as a Ger\v{s}gorin disk of $A$.
Let $A_0$ be the diagonal matrix that has the same main diagonal as $A$.
Ger\v{s}gorin proved the second part of his theorem by considering the matrix $A(t)=A_0+t(A-A_0)$, $t\in [0, 1]$, and letting $t$ increase continuously from 0 to 1. Intuitively,
the concentric Ger\v{s}gorin disks of $A(t)$ centered at
$a_{ii}$ ($i=1, \dots, n$) get larger and larger as $t$ increases from 0 to 1.
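This is easy to probe numerically (a sketch assuming NumPy is available): the disks of $A(t)$ stay centered at the $a_{ii}$ with radii $t\,r_i$, and every eigenvalue of $A(t)$ stays in the union of the disks, in accordance with the first part of the theorem.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A0 = np.diag(np.diag(A))                     # the diagonal part of A
base_radii = np.sum(np.abs(A - A0), axis=1)  # r_i for the full matrix A

for t in np.linspace(0.0, 1.0, 11):
    At = A0 + t * (A - A0)                   # disks: |z - a_ii| <= t * r_i
    for lam in np.linalg.eigvals(At):
        assert np.any(np.abs(lam - np.diag(A)) <= t * base_radii + 1e-9)
```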
He stated that
``Since the eigenvalues of the matrix depend continuously on its elements, it follows that $m$ eigenvalues must always lie in the disks ...''.
Ger\v{s}gorin used as a fact without justification that
{\em eigenvalues are continuous functions of the entries of matrices}.
Such a statement is often seen in the literature when it comes to the proof of the second part of the
Ger\v{s}gorin disk theorem.
For instance, here are a few widely-cited
and comprehensive references. In the first edition of Horn and Johnson's book
{\em Matrix Analysis} \cite{HJ185}, page~345, it asserts that ``the eigenvalues are continuous functions of the entries of $A$ (see Appendix D)...'', in Rahman and Schmeisser's
{\em Analytic Theory of Polynomials} \cite{Rahman02}, page~55,
it states that
\lq\lq The eigenvalues of $A(t)$ are continuous functions of
$t$ ...\rq\rq,
in Varga's {\em Ger\v{s}gorin and His Circles} \cite{Var2004}, page~8,
it is written that
``the eigenvalues $\lambda_i(t)$ of $A(t)$ also vary continuously
with $t$
...'',
and
in Wilkinson's {\em The Algebraic Eigenvalue Problem} \cite{Wil1965},
page~72,
it says that
``the eigenvalues all traverse continuous paths''.
What does it really mean to say that eigenvalues are continuous functions?
Ger\v{s}gorin's proof by continuity may lead one to imagine {continuous}
curves of the eigenvalues evolving
on the complex plane, or one may trace the curves continuously.
But that is not as easy as it sounds. First, ordering eigenvalues with a parameter
can be tricky and difficult; second,
the eigenvalue curves may merge
and the algebraic multiplicities of eigenvalues can change as the parameter varies.
\begin{example}\label{example0}
{\rm
Let $A(t) = \left(\begin{array}{cc} 0 & t \\ t & 0 \end{array}\right)$, $t\in [-1, 1]$. Each $t$
produces a set of two eigenvalues.
How does one order the eigenvalues as functions of $t$? It is natural to order the eigenvalues of $A(t)$ as $\lambda_1(t)=t$, $\lambda_2(t)=-t$.
Notice that $A(t)$ is real symmetric. For real symmetric (or complex Hermitian) matrices, we usually want the eigenvalues to be in a non-increasing (or non-decreasing) order, so we would order the
eigenvalues as $\mu_1(t)=|t|\geq \mu_2(t)=-|t|$. (Unlike $\lambda_1(t)$ and $\lambda_2(t)$, the functions $\mu_1(t)$ and $\mu_2(t)$ are not differentiable at $t=0$. Of course, there are also
infinitely many ways to parameterize the eigenvalues as non-continuous functions.)}
\end{example}
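Numerically (assuming NumPy), \texttt{eigvalsh} returns the eigenvalues of a real symmetric (or Hermitian) matrix in ascending order, which is exactly the sorted ordering $\mu_2(t)\le\mu_1(t)$ with its kink at $t=0$:

```python
import numpy as np

for t in np.linspace(-1.0, 1.0, 9):
    lam = np.linalg.eigvalsh(np.array([[0.0, t], [t, 0.0]]))
    # ascending order gives (mu_2(t), mu_1(t)) = (-|t|, |t|)
    assert np.allclose(lam, [-abs(t), abs(t)])
```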
The eigenvalues in Example~\ref{example0} are parameterized as continuous functions of $t$.
Is this always possible?
The answer is {\em yes} for $t$ on a real interval (but why? see Theorem \ref{thm1})
and {\em no} for $t$ on a complex domain containing the origin (see Example~\ref{example1}).
Ger\v{s}gorin's original proof by continuity is more like ``hand-waving'' than a rigorous proof, and it
has led to some confusion or ambiguity
\cite{Farenick2005}. A rigorous proof of the theorem using eigenvalues as continuous functions requires creating or referencing some heavy machinery that was absent from all the classical sources. This issue deserves attention and clarification for both teaching and research.
Additionally, matrices depending on a parameter play important roles in scientific areas.
In some studies such as stability problems and adiabatic quantum computing, one may consider a real parameter $t$ joining matrix $A$ and matrix $B$ by $(1-t)A+tB$ and analyze the change of the eigenvalues
as $t$ varies.
In section 2, we briefly recap the eigenvalue continuity in the topological sense. In section 3, we summarize a celebrated result of Kato on the continuity of eigenvalues as functions. In section 4, we discuss the existing proofs of
the Ger\v{s}gorin disk theorem and present a proof with topological continuity and a proof with functional continuity. We end the paper
by including a short and neat proof of the second part of the Ger\v{s}gorin disk theorem by using the argument principle.
\section{Topological continuity of eigenvalues}
Are eigenvalues of a matrix continuous functions of the matrix?
Since eigenvalue problems of matrices are essentially root problems of (characteristic) polynomials, one
immediately realizes that the question is a bit subtle and needs careful formulation.
It is known that the roots of a polynomial vary continuously as {\em a function} of the coefficients. In \cite{Har_AMSP_1987}, the authors gave a nice proof for the
result concerning the continuity of zeros of complex polynomials.
In fact, the map sending a monic polynomial
$f(z) = z^n + a_1z^{n-1} + \cdots + a_n$
to the multi-set of its zeros
$\pi(f) = \{\lambda_1, \dots, \lambda_n\}$
is continuous in the following sense.
For monic polynomials
$f(z) = z^n + a_1 z^{n-1} + \cdots + a_n$
and $\tilde f(z) = z^n + \tilde a_1 z^{n-1} + \cdots + \tilde a_n$
with multi-sets of zeros
$\pi(f) = \{\lambda_1, \dots, \lambda_n\}$ and
$\pi (\tilde f) = \{\tilde \lambda_1, \dots, \tilde \lambda_n\}$,
one can use the metrics
$$\|f-\tilde f\| = \max\{|a_j-\tilde a_{j}|: 1 \le j \le n\}$$
and
{\small $$d(\pi (f), \pi (\tilde f)) =
\min_{J}
\left\{ \max_{1 \le \ell \le n} |\lambda_\ell - \tilde \lambda_{j_\ell}|:
J=(j_1, \dots, j_n)
\hbox{ is a permutation of } (1, \dots, n)\right\}.$$}
Then $\pi$ is (pointwise) continuous; that is, for fixed $f$ and
for any given $\varepsilon>0$, there exists $\delta>0$ (depending on $f$) such that
$\|f-\tilde f\| <\delta$ implies $d(\pi(f), \pi(\tilde f))<\varepsilon$.
Moreover, if $\xi$ is a zero of $f(z)$ with algebraic multiplicity $m$, and $\varepsilon$ is small
enough that the disk centered at $\xi$ with radius $\varepsilon$ contains no other zero of $f$, then
$\tilde f$ has exactly $m$ zeros in that disk.
If we identify $f$ with the $n$-tuple $(a_1, \dots, a_{n})\in {\mathbb C}^n$,
then $\pi$ is a homeomorphism between ${\mathbb C}^n$ (with the usual topology) and the quotient space ${\mathbb C}^n_{\sim}$
of unordered $n$-tuples (with the induced quotient topology); see \cite[p.\,153]{BhaMA97}.
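For small $n$, the matching metric $d$ can be computed directly by brute force over all $n!$ pairings. The following numerical sketch (assuming NumPy is available; the helper name \texttt{matching\_distance} is ours) illustrates the continuity: a small change in the coefficients moves the zero multiset only slightly.

```python
import itertools
import numpy as np

def matching_distance(zf, zg):
    """d(pi(f), pi(g)): minimize, over all pairings, the largest pairwise gap."""
    zf = list(zf)
    return min(
        max(abs(a - b) for a, b in zip(zf, perm))
        for perm in itertools.permutations(zg)
    )

# f(z) = z^2 - 1 has zeros {1, -1}; perturb its coefficients by about 1e-6.
d = matching_distance(np.roots([1, 0, -1]), np.roots([1, 1e-6, -1 + 1e-6]))
assert d < 1e-5  # a small coefficient change moves the zeros only slightly
```

The brute-force minimum over permutations is exactly the metric $d(\pi(f),\pi(\tilde f))$ defined above; for larger $n$ one would use an assignment algorithm instead.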
Applying this result to matrices, one
gets the eigenvalue continuity as
the eigenvalues of an $n\times n$ matrix $A$
are the zeros of
the characteristic polynomial
$$p_A(z) = \det(zI - A) = z^n + a_1 z^{n-1} + \cdots + a_n,$$
where $a_j$ is $(-1)^j$ times the sum of the
$j\times j$ principal minors of $A$.
To be more specific, with $M_n$ for the space of $n\times n$ complex matrices,
we consider the eigenvalue function
$\sigma: M_n \rightarrow {\mathbb C}^n_{\sim}$ that maps a matrix $A\in M_n$ to its spectrum
$\sigma (A)\in {\mathbb C}^n_{\sim}$.
For the continuity of $\sigma$, we can use any (fixed) norm $\|\cdot\|$
on $M_n$.
The function $\sigma$ is continuous, i.e., for fixed $A\in M_n$ and for any given $\varepsilon > 0$, there is
$\delta > 0$ (depending on $A$) such that
$d(\sigma(A), \sigma(\tilde A)) < \varepsilon$ whenever $\|A-\tilde A\| < \delta$.
Such an eigenvalue continuity
may be referred to as
{\em eigenvalue topological continuity} or {\em eigenvalue matching continuity}.
Thus, eigenvalues are always continuous in the topological sense.
A nice proof regarding eigenvalue topological continuity for the discrete case (i.e., matrix sequences) is available in \cite[pp.\,138--140]{Artin}. The same continuity of eigenvalues is also studied
in \cite[p.\,121]{HJ1_2nd} by using Schur triangularization and compactness of the unitary group.
Closely related to eigenvalue continuity are
eigenvalue perturbation (variation) results with norm bounds
involving the entries of matrices (see \cite{RCLiHandbookPert, Ost} and \cite[p.\,563, Appendix D]{HJ1_2nd}).
There is another possible way of thinking of the eigenvalue continuity problem. Let $A(t)$ be a family of $n\times n$ matrices depending continuously on a parameter $t$ over a domain in the complex plane or on a real interval. Then do there exist $n$ continuous complex functions of $t$ that represent eigenvalues of $A(t)$?
We discuss the question in the next section.
\section{Parametrization of eigenvalues as continuous functions}
In some applications, one needs to consider a continuous
function $A: D \rightarrow M_n$, where $A(t) \in M_n$ and $D$ is a certain subset of
${\mathbb C}$ (say, a domain); and one wants to parametrize the eigenvalues of $A(t)$ as
$n$ continuous functions $\lambda_1(t), \dots, \lambda_n(t)$ with $t \in D$.
We refer to such continuity as {\em eigenvalue functional continuity} provided
there exist $n$ continuous functions of $t$ that represent eigenvalues of $A(t)$.
Eigenvalue functional continuity is widely used in
the proof of the second part of the Ger\v{s}gorin disk theorem;
similar ideas are needed in the perturbation
theory of Hermitian matrices, stable matrices, etc.
However,
such a parametrization is not always possible over a complex domain (\cite[p.\,154]{BhaMA97},
\cite[p.\,64; p.\,108]{Kato95}).
\begin{example}\label{example1}
{\rm
Let $A(t) = \left( {0 \atop t}{1\atop 0}\right )$, $t \in D = \{z\in {\mathbb C}: |z| < 1\}.$
It is impossible to have two continuous functions
$\lambda_1(t), \lambda_2(t)$ on $D$ representing the
eigenvalues of $A(t)$. This is because each eigenvalue
$\lambda$ of $A(t)$ satisfies $\lambda^2 = t$; thus, the desired
continuous functions $\lambda_1(t)$ and $\lambda_2(t)$ have
to satisfy $(\lambda_1(t))^2 = (\lambda_2(t))^2 = t$ for all $t$ on the open unit disk,
which is impossible (as is known, there is no continuous function $f$ on a disk $D$ containing the origin such that $(f(z))^2=z$ for all $z\in D$).
However,
as $t\rightarrow 0$, $A(t)$ approaches $A(0)=\left( {0 \atop 0}{1\atop 0}\right )$
(entrywise), which has repeated eigenvalue 0. Any small disk that contains the origin will contain two eigenvalues of $A(t)$ when $t$ is close enough to $0$.
(This is what topological continuity means.)}
\end{example}
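Both phenomena in Example~\ref{example1} can be observed numerically. The sketch below (assuming NumPy) first checks the topological statement, that the eigenvalues $\pm\sqrt{t}$ collapse to $0$ as $t\to0$, and then follows one eigenvalue continuously along the loop $t=\tfrac14 e^{i\theta}$: after one full turn the branch lands on the {\em other} eigenvalue, which is exactly why no single-valued continuous selection exists on the disk.

```python
import numpy as np

def A(t):
    return np.array([[0.0, 1.0], [t, 0.0]], dtype=complex)

# Topological continuity: both eigenvalues +-sqrt(t) collapse to 0 as t -> 0.
for t in [1e-2, 1e-4, 1e-6]:
    eigs = np.linalg.eigvals(A(t))
    assert max(abs(eigs)) <= abs(t) ** 0.5 + 1e-12

# Monodromy: follow one eigenvalue continuously along t = 0.25 * e^{i*theta}.
lam = 0.5  # eigenvalue of A(0.25)
for theta in np.linspace(0, 2 * np.pi, 2001):
    eigs = np.linalg.eigvals(A(0.25 * np.exp(1j * theta)))
    lam = eigs[np.argmin(abs(eigs - lam))]  # nearest-neighbor continuation
# After one loop around t = 0 the branch has swapped sign: no global branch.
assert abs(lam - (-0.5)) < 1e-3
```

The nearest-neighbor continuation is reliable here because the two eigenvalues stay distance $1$ apart along the loop while each step moves them by less than $10^{-3}$.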
The difference between topological continuity and functional continuity is that the eigenvalues (as a whole) are always topologically continuous but need not be continuous
as individual functions.
The two continuities for $A(t)$ are equivalent
when the parameter $t$ belongs to a real interval.
This is well explained in \cite{BhaMA97, Kato95}.
In \cite[p.\,109, Theorem 5.2]{Kato95}, the following remarkable result is shown.
\begin{theorem}[Kato, 1966] \label{thm1} Suppose that $D \subset {\mathbb C}$ is a connected set
and that $A: D \rightarrow M_n$ is a continuous function.
If {\rm (1)} $D$ is a real interval, or {\rm (2)}
$A(t)$ has only real eigenvalues, then there exist $n$ eigenvalues $($counted with
algebraic multiplicities$)$ of $A(t)$ that
can be parameterized as continuous functions $\lambda_1(t),$ $ \dots,$ $ \lambda_n(t)$
from $D$ to $\Bbb C$.
In the second case, one can set
$\lambda_1(t) \ge \cdots \ge \lambda_n(t)$.
\end{theorem}
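In the real-eigenvalue case of Theorem~\ref{thm1} the sorted eigenvalue functions are, for Hermitian families, even Lipschitz continuous: Weyl's perturbation theorem gives $|\lambda_k(A)-\lambda_k(B)|\le\|A-B\|_2$ for decreasingly sorted eigenvalues. A quick numerical check along the segment $A(t)=(1-t)A+tB$ (a sketch assuming NumPy; the random Hermitian endpoints are our choice):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
N = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A, B = (M + M.conj().T) / 2, (N + N.conj().T) / 2  # Hermitian endpoints

def lam(t):  # decreasingly sorted (real) eigenvalues of A(t) = (1-t)A + tB
    return np.sort(np.linalg.eigvalsh((1 - t) * A + t * B))[::-1]

gap = np.linalg.norm(B - A, 2)  # spectral norm of the direction B - A
ts = np.linspace(0, 1, 101)
for t0, t1 in zip(ts, ts[1:]):
    # Weyl: each sorted eigenvalue moves at most ||A(t1)-A(t0)||_2 = (t1-t0)*gap
    assert np.max(np.abs(lam(t1) - lam(t0))) <= (t1 - t0) * gap + 1e-10
```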
The study of eigenvalue functional continuity can be traced back at least as early as 1954
\cite{Rellich54}.
Rellich \cite[p.\,39]{Rellich69} showed that individual eigenvalues are continuous functions when the matrices are Hermitian (in such case all eigenvalues are necessarily real). In his well-received book,
Kato \cite[p.\,109]{Kato95}
showed that
topological continuity implies functional continuity when the parameter is restricted to a real interval or if all the eigenvalues of the matrices are real, i.e., Theorem~\ref{thm1}.
It is tempting to extend Kato's result on a real interval for the parameter to a domain (with
interior points) on the complex plane. However, this is impossible.
Let $z_0\not = 0$ and let $D_{z_0}$ be an open disk centered at $z_0$ that does not contain the origin. Considering $A(z) = \left (
{0 \atop z-z_0} {\;1 \atop \;0} \right )$,
$z\in D_{z_0}$, we see that there does not exist a continuous eigenvalue function
of $A(z)$ on $D_{z_0}$.
Suppose, on the contrary, that there is a continuous eigenvalue function $\lambda(z)$ on $D_{z_0}$. Then
$(\lambda(z))^2 = z-z_0$ for all $z\in D_{z_0}$. This leads to a continuous function
$f(z) = \lambda(z+z_0)$ defined on the open unit disk $D=\{z\in {\mathbb C} : |z|<1\}$ such that $(f(z))^2 = z$
for all $z\in D$, a contradiction.
So, in a sense, the result of Kato is the best possible with respect to eigenvalue functional continuity.
\section{Proofs of the Ger\v{s}gorin disk theorem}
Ger\v{s}gorin's disk theorem
is a useful result for estimating and localizing the eigenvalues of a matrix. Usually and traditionally, the second part of the theorem is proved by considering the matrix $A(t)=A_0+t(A-A_0)$ (where $A_0$ is the diagonal matrix that has the same main diagonal as $A$) and by using eigenvalue continuity (see
\cite{Bru1994}, \cite[p.\,23]{HoodThesis17}, \cite[p.\,345]{HJ185}, \cite[p.\,74]{Hou1964}, \cite[p.\,372]{Lan1985},
\cite[p.\,499]{Meyer2000}, \cite[p.\,55]{Rahman02}, \cite[p.\,169]{SS1990},
\cite[p.\,8]{Var2004}, \cite[p.\,72]{Wil1965}, and \cite[p.\,70]{ZFZbook11}).
However, in these references, it is not always clear
which types of eigenvalue continuity conditions
were used. If it is topological continuity,
then one needs to add some details in the proofs to justify why the total number of eigenvalues in an isolated region remains the same when $t$ increases from 0 to 1 (note that the algebraic multiplicity of an eigenvalue may change); if
it is functional continuity (which is the case in most texts), then it would
be nice to state Kato's result (or other references) as evidence of
the existence of continuous functions that represent the eigenvalues.
In the following, we
state the Ger\v{s}gorin disk theorem and give two different proofs. One
(Proposition~\ref{Prop5}) uses eigenvalue functional continuity (Kato's theorem) and exploits the fact that a continuous function takes a connected set into a connected set; the other (Proposition~\ref{Prop6})
uses eigenvalue topological continuity and exploits the fact that a continuous
function on a compact set is {\em uniformly continuous}.
In the latter, we completely avoid the continuity of each
eigenvalue as a function.
\begin{theorem}[Ger\v{s}gorin \cite{Ger31}, 1931]\label{GDT}
Let $A = (a_{ij}) \in M_n$ and define the disks
$$D_i = \big \{ z \in {\mathbb C}: |z-a_{ii}| \le \sum_{j\ne i} |a_{ij}|\big \}, \quad i = 1, \dots, n.$$
Then \,
{\rm (1)} All eigenvalues of $A$ are contained in the union $\cup_{i=1}^n D_i$.
{\rm (2)} If $\cup_{i=1}^n D_i$ is the union of
$k$ disjoint connected regions $R_1, \dots, R_k$,
and $R_r$ is the union of $m_r$ of the disks $D_1, \dots, D_n$, then
$R_r$ contains exactly $m_r$ eigenvalues of $A$, $r = 1, \dots, k$.
\end{theorem}
Part (1) says that every eigenvalue of $A$ is contained in a Ger\v{s}gorin disk.
Its proof is easy, standard, and omitted here.
Part (2) is immediate from Propositions \ref{Prop5} and \ref{Prop6}.
We call a union of some Ger\v{s}gorin disks a {\em Ger\v{s}gorin region}
(which in general need not be connected). In particular, the singletons of diagonal entries are
degenerate Ger\v{s}gorin regions. By a {\em curve} we mean the image (range) of
a continuous map $\gamma: [a, b] \rightarrow {\mathbb C}$ from a real closed interval to the complex plane.
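Part (1) of the theorem is easy to test numerically for any sample matrix. The sketch below (assuming NumPy; the matrix is our choice) forms the disk centers and radii and checks that every eigenvalue lies in at least one disk.

```python
import numpy as np

A = np.array([[ 4.0,  0.5, 0.1],
              [ 0.3, -3.0, 0.2],
              [ 0.1,  0.1, 0.0]])
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)  # off-diagonal row sums

eigs = np.linalg.eigvals(A)
# part (1): every eigenvalue lies in at least one Gershgorin disk D_i
assert all(np.any(np.abs(lam - centers) <= radii + 1e-12) for lam in eigs)
```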
\begin{proposition}\label{Prop5}
Let $A = (a_{ij}) \in M_n$ and let $A(t)=A_0 + t(A-A_0)$,
where $t \in [0,1]$ and
$A_0 = {\rm diag}\,(a_{11}, \dots, a_{nn})$. Then each continuous eigenvalue
curve of
$A(t)$ lies entirely in a connected Ger\v{s}gorin region of $A$.
\end{proposition}
\proof
By Kato's result (Theorem \ref{thm1}), there exists a selection of
$n$ eigenvalues $\lambda_1(t), \dots, \lambda_n(t)$ of $A(t)$ that are continuous functions in $t$ on the real interval $[0, 1]$. Moreover, part (1) of the Ger\v{s}gorin disk theorem ensures that
$\lambda_1(t), \dots, $ $\lambda_n(t)$ are contained
in $\cup_{i=1}^k R_i$ for every $t\in [0, 1]$, and each set $\lambda_j([0, 1])$ is connected.
Let $r\in \{1, \dots, k\}$. Since $R_r$ comprises $m_r$ disks (not necessarily distinct) whose centers are $m_r$ diagonal entries of
$A_0$, exactly $m_r$ of the continuous eigenvalue curves $\lambda_1(t), \dots, \lambda_n(t)$ lie in $R_r$ at $t=0$. If $\lambda_j(0)\in R_r$, then
the connected set $$\lambda_j([0, 1])= \lambda_j([0, 1])\cap
\bigcup_{i=1}^k R_i = \Big(\lambda_j([0, 1])\cap R_r\Big) \cup\Big(\lambda_j([0, 1])\cap \bigcup_{i\not = r} R_i\Big)$$
is the union of two disjoint closed sets, the first of which is nonempty. Therefore, the second set is empty, and hence $\lambda_j([0, 1])\subset R_r$.
\qed
The following proposition considers the eigenvalues as a whole in a Ger\v{s}gorin region rather than focusing on an individual eigenvalue as a function. That is, we use eigenvalue topological continuity and avoid entirely (the difficult issue of) eigenvalue functional continuity (which is not needed) to prove the assertion.
\begin{proposition}\label{Prop6}
Let $A = (a_{ij}) \in M_n$ and let $A(t)=A_0 + t(A-A_0)$, where $t \in [0,1]$ and
$A_0 = {\rm diag}\,(a_{11}, \dots, a_{nn})$. Then a connected Ger\v{s}gorin region of $A$ contains the same number of eigenvalues of $A(t)$ for all $t\in [0, 1]$.
\end{proposition}
\proof
Every entry of $ A(t)$ is a continuous function of $t \in [0,1]$ and
each Ger\v{s}gorin disk of $A(t)$ ($0\leq t \leq 1$) is contained in the Ger\v{s}gorin disk of $A=A(1)$ with the same center. Let $R_r$, $r=1, \dots, k$, be the connected Ger\v{s}gorin regions
of $A$. (The number of connected Ger\v{s}gorin regions for $A(t)$ may vary depending on $t$.)
Suppose that $R_r$ contains $m_r$ diagonal entries of $A$, i.e., $m_r$ eigenvalues of
$A_0=A(0)$ (counted with algebraic multiplicities). We claim that $R_r$ contains $m_r$ eigenvalues of $A(t)$ for all
$t\in [0, 1]$ (that is, the sum of the algebraic multiplicities of the eigenvalues
of $A(t)$ remains constant on each connected Ger\v{s}gorin region of $A$ as $t$ varies from 0 to 1).
Since the eigenvalues are topologically continuous over the compact set $[0, 1]$,
the continuity is uniform. To be precise,
the map $\varphi: [0, 1]\mapsto {\mathbb C}^n_{\sim}$ defined by $\varphi (t)=\sigma (A(t))$
is uniformly continuous.
Let $\varepsilon > 0$ be such that $|x-y| > 2 \varepsilon$ whenever $x$ and $y$ lie
in two different connected Ger\v{s}gorin regions of $A$. There is $\delta > 0$ (depending only on $\varepsilon$) such that
for any $t_1$ and $t_2$ satisfying $0 \le t_1<t_2 \le 1$ and $t_2-t_1 < \delta$, the eigenvalues of
$A(t_1)$ and $A(t_2)$ can be labeled as
$\lambda_1,\dots, \lambda_n$ and $\mu_1, \dots, \mu_n$ such that
$|\lambda_j - \mu_j| < \varepsilon$ for $j=1, \dots, n$.
We divide the interval $[0, 1]$ into $N$ subintervals:
$0 = t_0 < t_1 < \cdots < t_N = 1$
such that $t_{i}-t_{i-1} < \delta$ for $i = 1, \dots, N$.
We show by induction that
$A(t)$ has exactly $m_r$
eigenvalues in $R_r$ for all $t\in [t_{i}, t_{i+1}]$, $i=0, 1, \dots, N-1$.
By assumption, $A(t_0)=A_0=A(0)$ has exactly $m_r$ eigenvalues in $R_r$.
For any $t\in [t_0, t_1]$, since $t-t_0<\delta$, $A(t)$ has exactly $m_r$ eigenvalues each of which is located in some disk centered at an eigenvalue of $A(t_0)$ with radius $\varepsilon$. By our choice of $\varepsilon$, all the $m_r$ eigenvalues of $A(t)$ are contained in
$R_r$ (i.e., not in other regions) for all $t\in [t_0, t_1]$.
Because $t_2-t_1<\delta$, for any $t\in [t_1, t_2]$, $A(t)$ has exactly $m_r$ eigenvalues that are close (with respect to $\varepsilon$)
to the $m_r$ eigenvalues of $A(t_1)$ in $R_r$.
Again, by our choice of $\varepsilon$, all these
$m_r$ eigenvalues of $A(t)$ are also contained in $R_r$
for all $t\in [t_1, t_2]$.
Repeating the arguments for
$[t_2, t_3],$ $ \dots, $ $ [t_{N-1}, t_N]$,
we see that $A(t)$ has exactly $m_r$ eigenvalues
in $R_r$ for each $t\in [0, 1]$. Thus,
$A(t_N)=A(1)=A$ has
exactly $m_r$ eigenvalues in the region $R_r$. \qed
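The counting argument of Proposition~\ref{Prop6} can be illustrated numerically. For the sample matrix below (our choice), disks $D_1$ and $D_2$ overlap and form one region $R_1$ with $m_1=2$, while $D_3$ is an isolated region $R_2$ with $m_2=1$; the eigenvalue counts of $A(t)$ in these regions stay constant as $t$ runs from $0$ to $1$ (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[0.00, 0.30, 0.05],
              [0.30, 0.40, 0.05],
              [0.05, 0.05, 5.00]])
A0 = np.diag(np.diag(A))
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)

# D_1 (center 0, radius 0.35) and D_2 (center 0.4, radius 0.35) overlap,
# so R_1 = D_1 u D_2 with m_1 = 2; D_3 (center 5, radius 0.1) is R_2, m_2 = 1.
def count_in(disks, t):
    eigs = np.linalg.eigvals(A0 + t * (A - A0))
    return sum(
        any(abs(lam - centers[i]) <= radii[i] + 1e-12 for i in disks)
        for lam in eigs
    )

for t in np.linspace(0.0, 1.0, 51):
    assert count_in([0, 1], t) == 2  # R_1 holds exactly m_1 = 2 eigenvalues
    assert count_in([2], t) == 1     # R_2 holds exactly m_2 = 1 eigenvalue
```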
\section{A proof of Ger\v{s}gorin theorem using the argument principle}\label{ComplexAnalysis}
The Ger\v{s}gorin disk theorem is a statement about counting eigenvalues according to their algebraic multiplicities;
it is essentially about counting zeros of a polynomial that depends on a parameter.
Thus, Rouch\'{e}'s theorem would be a much more natural and effective tool since it focuses squarely on what the theorem says about numbers of eigenvalues.
This approach does not require the parameter $t$ to be real and it does not need the concept of any eigenvalue continuity, functional or topological.
There is a short and neat proof of the second part of the Ger\v{s}gorin disk theorem that uses the argument principle. This approach was adopted in the second edition of Horn and Johnson's book {\em Matrix Analysis}
\cite[p.\,389]{HJ1_2nd}
(see also \cite[p.\,103]{Ser10}), while the proof by continuity used in the first edition \cite[p.\,345]{HJ185} was abandoned.
Let $\Gamma$ be a simple closed contour in the complex plane that surrounds the Ger\v{s}gorin region under consideration and is disjoint from all the Ger\v{s}gorin disks.
Let $p_{t}(z)$ be the characteristic polynomial of $A(t)$ for each given $t\in [0, 1]$. By the argument principle
\cite[p.\,123]{JBC78},
the number of zeros (counted with algebraic multiplicities) of $p_{t}(z)$ inside $\Gamma$ is
$$m(t)=\frac{1}{2\pi i}
\oint_{\Gamma} \frac{p_{t}'(z)}{p_{t}(z)}dz.$$
On the other hand,
$f(t, z):=\frac{p_{t}'(z)}{p_{t}(z)}$
is a continuous function from $[0, 1]\times \Gamma $ to $\Bbb C$. By Leibniz's rule
\cite[p.\,68]{JBC78}, $m(t)$ is a continuous function on $[0, 1]$. As $m(t)$ is an integer, it has to be a constant. Thus,
$m(0)=m(1)$, which is the number of eigenvalues of $A$ in the Ger\v{s}gorin region.
Similar ideas
using Rouch\'{e}'s theorem or winding numbers have been employed in the study of localization for
nonlinear eigenvalue problems \cite{BinH13, BinH15, HoodThesis17}.
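The contour-integral count $m(t)$ is also easy to evaluate numerically: discretizing $\frac{1}{2\pi i}\oint_\Gamma p_t'(z)/p_t(z)\,\mathrm{d} z$ over a circle with the trapezoid rule (which converges rapidly for smooth periodic integrands) returns an integer eigenvalue count. A sketch assuming NumPy; the matrix and the contour are our choices:

```python
import numpy as np

A = np.array([[0.00, 0.30, 0.05],
              [0.30, 0.40, 0.05],
              [0.05, 0.05, 5.00]])
A0 = np.diag(np.diag(A))

def count_inside(M, center, radius, npts=4000):
    """(1/(2 pi i)) * contour integral of p'(z)/p(z) over |z - center| = radius,
    computed with the trapezoid rule; p is the characteristic polynomial of M."""
    p = np.poly(M)            # coefficients of det(zI - M)
    dp = np.polyder(p)
    th = np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)
    z = center + radius * np.exp(1j * th)
    vals = np.polyval(dp, z) / np.polyval(p, z) * (1j * radius * np.exp(1j * th))
    return int(round((vals.mean() / 1j).real))

# m(t) is constant along the homotopy A(t) = A0 + t(A - A0): the circle
# |z - 0.2| = 1 encloses the region formed by the first two disks.
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert count_inside(A0 + t * (A - A0), 0.2, 1.0) == 2
```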
Eigenvalues as functions deserve study; this is an
interesting (and classical) problem. There is a well-developed theory on the
smoothness of roots of polynomials
(see \cite{AKLM98},
\cite[Chap.\,II, \S 4]{Kato95}, \cite{KLM04}, and
\cite{LR07}).
\bigskip
\noindent
{\bf Acknowledgment.}
This work was initiated by a talk given by the second author at the
International Conference on Matrix Theory with Applications
- Joint meeting of \lq\lq International Research Center for Tensor and Matrix Theory (IRCTM)", Shanghai University, China,
and \lq\lq Applied Algebra and Optimization Research Center (AORC)", Sungkyunkwan University, South Korea, held at
Shanghai University, December 17--20, 2018. Both authors thank the Centers for hospitality during the meeting and thank
Roger Horn for helpful discussions.
% Source: https://arxiv.org/abs/1509.00886
\title{Certain Integrals Arising from Ramanujan's Notebooks}
\begin{abstract}
In his third notebook, Ramanujan claims that $$ \int_0^\infty \frac{\cos(nx)}{x^2+1} \log x \,\mathrm{d} x + \frac{\pi}{2} \int_0^\infty \frac{\sin(nx)}{x^2+1} \mathrm{d} x = 0. $$ In a following cryptic line, which only became visible in a recent reproduction of Ramanujan's notebooks, Ramanujan indicates that a similar relation exists if $\log x$ were replaced by $\log^2x$ in the first integral and $\log x$ were inserted in the integrand of the second integral. One of the goals of the present paper is to prove this claim by contour integration. We further establish general theorems similarly relating large classes of infinite integrals and illustrate these by several examples.
\end{abstract}
\section{Introduction}
If you attempt to f\/ind the values of the integrals
\begin{gather}\label{1}
\int_0^{\infty}\frac{\cos(nx)}{x^2+1}\log x \,\mathrm{d} x \qquad\text{and}\qquad\int_0^{\infty}\frac{\sin(nx)}{x^2+1}\mathrm{d} x, \qquad n>0,
\end{gather}
by consulting tables such as those of Gradshteyn and Ryzhik \cite{gr} or by
invoking a computer algebra system such as \emph{Mathematica}, you will be
disappointed, if you hoped to evaluate these integrals in closed form, that
is, in terms of elementary functions. On the other hand, the latter integral
above can be expressed in terms of the exponential integral $\operatorname{Ei}(x)$~\cite[formula~(3.723), no.~1]{gr}.
Similarly, if $1/(x^2+1)$ is replaced by any even rational function with the
degree of the denominator at least one greater than the degree of the
numerator, it does not seem possible to evaluate any such integral in closed
form.
However, in his third notebook, on p.~391 in the pagination of the second
volume of \cite{nb}, Ramanujan claims that the two integrals in \eqref{1} are
simple multiples of each other. More precisely,
\begin{gather}\label{2}
\int_0^{\infty}\frac{\cos(nx)}{x^2+1}\log x \,
\mathrm{d} x +\frac{\pi}{2}\int_0^{\infty}\frac{\sin(nx)}{x^2+1}\mathrm{d} x =0.
\end{gather}
Moreover, to the left of this entry, Ramanujan writes, ``contour
integration''. We now might recall a couple of sentences of G.H.~Hardy from
the introduction to Ramanujan's \emph{Collected papers} \cite[p.~xxx]{cp},
``\dots\ he had [before arriving in England] never heard of \dots\ Cauchy's
theorem, and had indeed but the vaguest idea of what a function of a complex
variable was''. On the following page, Hardy further wrote, ``In a few
years' time he had a very tolerable knowledge of the theory of functions
\dots''. Generally, the entries in Ramanujan's notebooks were recorded by
him in approximately the years 1904--1914, prior to his departure for
England. However, there is evidence that some of the entries in his third
notebook were recorded while he was in England. Indeed, in view of Hardy's
remarks above, almost certainly, \eqref{2}~is such an entry. A~proof of~\eqref{2} by contour integration was given by the f\/irst author in his book
\cite[pp.~329--330]{IV}.
The identity \eqref{2} is interesting because it relates in a simple way two
integrals that we are unable to individually evaluate in closed form.
On the other hand, the simpler integrals
\begin{gather*}
\int_0^{\infty}\frac{\cos(nx)}{x^2+1}
\mathrm{d} x=\frac{\pi e^{-n}}{2} \qquad\text{and}\qquad \int_{-\infty}^{\infty}\frac{\sin(nx)}{x^2+1}\mathrm{d} x =0
\end{gather*}
have well-known and trivial evaluations, respectively.
With the use of the most up-to-date photographic techniques, a new edition
of \emph{Ramanujan's Notebooks}~\cite{nb} was published in~2012 to help
celebrate the 125th anniversary of Ramanujan's birth. The new reproduction
is vastly clearer and easier to read than the original edition. When the
f\/irst author reexamined~\eqref{2} in the new edition, he was surprised to
see that Ramanujan made a further claim concerning~\eqref{2} that was not
visible in the original edition of~\cite{nb}. In a~cryptic one line, he
indicated that a relation similar to~\eqref{2} existed if $\log x$ were
replaced by $\log^2x$ in the f\/irst integral and $\log x$ were inserted in the
integrand of the second integral of~\eqref{2}. One of the goals of the
present paper is to prove (by contour integration) this unintelligible entry
in the f\/irst edition of the notebooks~\cite{nb}. Secondly, we establish
general theorems relating large classes of inf\/inite integrals for which
individual evaluations in closed form are not possible by presently known
methods. Several further examples are given.
\section{Ramanujan's extension of~(\ref{2})}
We prove the entry on p.~391 of \cite{nb} that resurfaced with the new printing of~\cite{nb}.
\begin{Theorem}\label{theorem1} We have
\begin{gather}\label{4}
\int_0^{\infty}\frac{\sin(nx)}{x^2+1}\mathrm{d} x +\frac{2}{\pi}\int_0^{\infty}\frac{\cos(nx)}{x^2+1}\log x\,\mathrm{d} x =0
\end{gather}
and
\begin{gather}\label{4a}
\int_0^{\infty}\frac{\sin(nx)\log x}{x^2+1}\mathrm{d} x+\frac{1}{\pi}\int_0^{\infty}\frac{\cos(nx)\log^2x}{x^2+1}\mathrm{d} x=\frac{\pi^2e^{-n}}{8}.
\end{gather}
\end{Theorem}
\begin{proof} Def\/ine a branch of $\log z$ by $-\frac12\pi<\theta=\arg z\leq
\frac32\pi$. We integrate
\begin{gather*}
f(z):=\frac{e^{inz}\log^2z}{z^2+1}
\end{gather*}
over the positively oriented closed contour $C_{R,\varepsilon}$ consisting
of the semi-circle $C_{R}$ given by $z=Re^{i\theta}$,
$0\leq\theta\leq\pi$,
the interval $[-R,-\varepsilon]$, the semi-circle $C_{\varepsilon}$ given by
$z=\varepsilon e^{i\theta}$, $\pi\geq\theta\geq0$, and the interval $[\varepsilon,R]$,
where $0<\varepsilon<1$ and $R>1$. On the interior of $C_{R,\varepsilon}$
there is a simple pole at~$z=i$, and so by the residue theorem,
\begin{gather}\label{5}
\int_{C_{R,\varepsilon}}f(z)\mathrm{d} z=2\pi i\frac{e^{-n} \cdot\big({-}\frac14\pi^2\big)}{2i}=-\frac{e^{-n}\pi^3}{4}.
\end{gather}
Parameterizing the respective semi-circles, we can readily show that
\begin{gather}\label{6}
\int_{C_{\varepsilon}}f(z)\mathrm{d} z=o(1),
\end{gather}
as $\varepsilon\to0$, and
\begin{gather}\label{7}
\int_{C_{R}}f(z)\mathrm{d} z=o(1),
\end{gather}
as $R\to\infty$.
Hence, letting $\varepsilon\to0$ and $R\to\infty$ and combining \eqref{5}--\eqref{7}, we conclude that
\begin{gather}\label{8}
- \frac{e^{-n}\pi^3}{4}=\int_{-\infty}^0\frac{e^{inx}(\log|x|+i\pi)^2}{x^2+1}\mathrm{d} x+\int_0^{\infty}\frac{e^{inx}\log^2x}{x^2+1}\mathrm{d} x\\
\hphantom{- \frac{e^{-n}\pi^3}{4}}{}
=\int_0^{\infty}\!\frac{(\cos(nx)-i\sin(n x))(\log x+i\pi)^2}{x^2+1}\mathrm{d} x+\int_0^{\infty}\!\frac{(\cos(nx)+i\sin(nx))\log^2x}{x^2+1}\mathrm{d} x.\nonumber
\end{gather}
If we equate real parts in \eqref{8}, we f\/ind that
\begin{gather}\label{boo}
-\frac{e^{-n}\pi^3}{4} =\int_0^{\infty}\frac{\cos(nx)\big(2\log^2x-\pi^2\big)+2\pi\sin(nx)\log x}{x^2+1}\mathrm{d} x.
\end{gather}
It is easy to show, e.g., by contour integration, that
\begin{gather}\label{easy}
\int_0^{\infty}\frac{\cos(nx)}{x^2+1}\mathrm{d} x=\frac{\pi e^{-n}}{2}.
\end{gather}
(In his quarterly reports, Ramanujan derived~\eqref{easy} by a~dif\/ferent method \cite[p.~322]{I}.)
Putting this evaluation in~\eqref{boo}, we readily deduce~\eqref{4a}.
If we equate imaginary parts in~\eqref{8}, we deduce that
\begin{gather*}
0=\int_0^{\infty}\frac{\pi^2\sin(nx)+2\pi\cos(nx)\log x}{x^2+1}\mathrm{d} x,
\end{gather*}
from which the identity \eqref{4} follows.
\end{proof}
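Although neither integral in \eqref{4} or \eqref{4a} appears to have a closed form, the identities can be verified numerically. The sketch below (assuming SciPy; the helper \texttt{osc\_int} is ours) splits each integral at $x=1$, so that the logarithmic singularity is handled by ordinary adaptive quadrature, and evaluates the oscillatory tail with QUADPACK's Fourier-weight routine via \texttt{scipy.integrate.quad}:

```python
import numpy as np
from scipy.integrate import quad

def osc_int(g, n, kind):
    """integral_0^infty g(x)*cos(nx) dx (kind='cos') or g(x)*sin(nx) dx ('sin'),
    split at x = 1; the infinite tail uses QUADPACK's Fourier-weight routine."""
    trig = np.cos if kind == 'cos' else np.sin
    head = quad(lambda x: g(x) * trig(n * x), 0, 1)[0]
    tail = quad(g, 1, np.inf, weight=kind, wvar=n)[0]
    return head + tail

n = 1.0
I0 = osc_int(lambda x: 1 / (x**2 + 1), n, 'cos')
I1 = osc_int(lambda x: np.log(x) / (x**2 + 1), n, 'cos')
I2 = osc_int(lambda x: np.log(x)**2 / (x**2 + 1), n, 'cos')
J0 = osc_int(lambda x: 1 / (x**2 + 1), n, 'sin')
J1 = osc_int(lambda x: np.log(x) / (x**2 + 1), n, 'sin')

assert abs(I0 - (np.pi / 2) * np.exp(-n)) < 1e-6                  # (easy)
assert abs(J0 + (2 / np.pi) * I1) < 1e-6                          # identity (4)
assert abs(J1 + I2 / np.pi - (np.pi**2 / 8) * np.exp(-n)) < 1e-6  # identity (4a)
```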
\section{A second approach to the entry at the top of p.~391}
\begin{Theorem} \label{thm:int:s}
For $s \in (- 1, 2)$ and $n \geq 0$,
\begin{gather}
\frac{\pi}{2} e^{- n} = \int_0^{\infty} \frac{\cos (n x - \pi s / 2)}{x^2
+ 1} x^s \mathrm{d} x. \label{eq:int:s}
\end{gather}
\end{Theorem}
Before indicating a proof of Theorem~\ref{thm:int:s}, let us see how the
integral~\eqref{eq:int:s} implies Ramanujan's integral relations~\eqref{4} and~\eqref{4a}. Essentially,
all we have to do is to take derivatives of \eqref{eq:int:s} with respect to~$s$ (and interchange the order of dif\/ferentiation and integration); then, upon
setting $s = 0$, we deduce~\eqref{4} and~\eqref{4a}.
First, note that upon setting $s = 0$ in \eqref{eq:int:s}, we obtain \eqref{easy}.
On the other hand, taking a~derivative of~\eqref{eq:int:s} with respect to
$s$, and then setting $s = 0$, we f\/ind that
\begin{gather}
0 = \int_0^{\infty} \frac{\cos (n x)}{x^2 + 1} \log x \,\mathrm{d} x +
\frac{\pi}{2} \int_0^{\infty} \frac{\sin (n x)}{x^2 + 1} \mathrm{d} x,
\label{eq:int:rama}
\end{gather}
which is the formula~\eqref{4} that Ramanujan recorded on p.~391. Similarly, taking
two derivatives of~\eqref{eq:int:s} and then putting $s=0$, we arrive at
\begin{gather*}
0 = \int_0^{\infty} \frac{\cos (n x)}{x^2 + 1} \log^2 x\, \mathrm{d} x + \pi
\int_0^{\infty} \frac{\sin (n x)}{x^2 + 1} \log x \,\mathrm{d} x -
\frac{\pi^2}{4} \int_0^{\infty} \frac{\cos (n x)}{x^2 + 1} \mathrm{d} x,
\end{gather*}
which, using \eqref{easy}, simplif\/ies to
\begin{gather}\label{eq:int:rama2}
\frac{\pi^3}{8} e^{- n} = \int_0^{\infty} \frac{\cos (n x)}{x^2 + 1} \log^2
x \,\mathrm{d} x + \pi \int_0^{\infty} \frac{\sin (n x)}{x^2 + 1} \log x\,
\mathrm{d} x.
\end{gather}
Note that this is Ramanujan's previously unintelligible formula~\eqref{4a}. If we
likewise take $m$ derivatives before setting $s = 0$, we obtain the following
general set of relations connecting the integrals
\begin{gather*}
I_m := \int_0^{\infty} \frac{\cos (n x)}{x^2 + 1} \log^m x \,\mathrm{d} x,
\qquad J_m := \int_0^{\infty} \frac{\sin (n x)}{x^2 + 1} \log^m x
\,\mathrm{d} x.
\end{gather*}
\begin{Corollary}
For $m \geq 1$,
\begin{gather*}
0 = \sum_{k = 0}^m \binom{m}{k} \left( \frac{\pi}{2} \right)^k (- 1)^{[k
/ 2]} \left\{ \begin{matrix}
I_{m - k}, & \text{if $k$ is even}\\
J_{m - k}, & \text{if $k$ is odd}
\end{matrix} \right\} .
\end{gather*}
\end{Corollary}
We now provide a proof of Theorem \ref{thm:int:s}.
\begin{proof}
In analogy with our previous proof, we integrate
\begin{gather*}
f_s (z) := \frac{e^{i n z} z^s}{z^2 + 1}
\end{gather*}
over the contour $C_{R, \varepsilon}$ and let $\varepsilon \rightarrow 0$ and
$R \rightarrow \infty$. Here, $z^s = e^{s \log z}$ with $-\frac12\pi<\arg z \leq \frac32\pi$, as above. By the residue theorem,
\begin{gather}\label{aa}
\int_{C_{R, \varepsilon}} f_s (z) \mathrm{d} z = 2 \pi i \frac{e^{- n} e^{\pi
i s / 2}}{2 i} = \pi e^{- n} e^{\pi i s / 2} .
\end{gather}
Letting $\varepsilon \rightarrow 0$ and $R \rightarrow \infty$, and using
bounds for the integrand on the semi-circles as we did above, we deduce that
\begin{gather}\label{bb}
\lim_{\substack{R\to\infty\\\varepsilon\to0}}\int_{C_{R, \varepsilon}} f_s (z) \mathrm{d} z = \int_{- \infty}^{\infty}
\frac{e^{i n x} x^s}{x^2 + 1} \mathrm{d} x = \int_0^{\infty} \frac{e^{- i n
x} x^s e^{\pi i s}}{x^2 + 1} \mathrm{d} x + \int_0^{\infty} \frac{e^{i n x}
x^s}{x^2 + 1} \mathrm{d} x.
\end{gather}
Combining \eqref{aa} and \eqref{bb}, we f\/ind that
\begin{gather}
\pi e^{- n} e^{\pi i s / 2} = \int_0^{\infty} \big(e^{i n x} + e^{- i n x} e^{\pi i s}\big)
\frac{x^s }{x^2 + 1} \mathrm{d} x.\label{eq:int:s:i}
\end{gather}
We then divide both sides of \eqref{eq:int:s:i} by $2e^{\pi i s / 2}$ to
obtain \eqref{eq:int:s}.
Note that the integrals are absolutely convergent for $s \in ( - 1, 1)$.
By Dirichlet's test, \eqref{eq:int:s:i} holds for
$s \in ( - 1, 2)$.
\end{proof}
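Theorem~\ref{thm:int:s} can likewise be checked numerically for several values of $s$. The sketch below (assuming SciPy) splits the integral at $x=1$ and expands $\cos(nx-\pi s/2)$ into cosine and sine parts so that the tail can use QUADPACK's Fourier-weight routine:

```python
import numpy as np
from scipy.integrate import quad

def lhs(n, s):
    """integral_0^infty cos(n*x - pi*s/2) x^s / (x^2 + 1) dx, split at x = 1."""
    a = np.pi * s / 2
    g = lambda x: x**s / (x**2 + 1)
    head = quad(lambda x: np.cos(n * x - a) * g(x), 0, 1)[0]
    # cos(nx - a) = cos(a)cos(nx) + sin(a)sin(nx) on the oscillatory tail
    tail = (np.cos(a) * quad(g, 1, np.inf, weight='cos', wvar=n)[0]
            + np.sin(a) * quad(g, 1, np.inf, weight='sin', wvar=n)[0])
    return head + tail

for s in [-0.5, 0.0, 0.5, 1.0]:
    assert abs(lhs(1.0, s) - (np.pi / 2) * np.exp(-1.0)) < 1e-6
```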
Replacing $s$ with $s + 1$ in \eqref{eq:int:s}, we obtain the following
companion integral.
\begin{Corollary}
For $s \in (- 2, 1)$ and $n \geq 0$,
\begin{gather}
\frac{\pi}{2} e^{- n} = \int_0^{\infty} \frac{x \sin (n x - \pi s /
2)}{x^2 + 1} x^s \mathrm{d} x. \label{eq:int:s:sin}
\end{gather}
\end{Corollary}
\begin{Example}
Setting $s = 0$ in \eqref{eq:int:s:sin}, we f\/ind that
\begin{gather}
\frac{\pi}{2} e^{- n} = \int_0^{\infty} \frac{x \sin (n x)}{x^2 + 1}
\mathrm{d} x, \label{eq:int:sin}
\end{gather}
which is well known. After taking one derivative with respect to $s$ in
\eqref{eq:int:s:sin}
and setting $s=0$, we similarly f\/ind that
\begin{gather}\label{dd}
0 = \int_0^{\infty} \frac{x \sin (n x)}{x^2 + 1} \log x\, \mathrm{d} x -
\frac{\pi}{2} \int_0^{\infty} \frac{x \cos (n x)}{x^2 + 1} \mathrm{d} x,
\end{gather}
which may be compared with Ramanujan's formula \eqref{4}. As a
second example, after taking two derivatives of \eqref{eq:int:s:sin} with
respect to $s$, setting $s=0$, and using
\eqref{eq:int:sin}, we arrive at the identity
\begin{gather}\label{cc}
\frac{\pi^3}{8} e^{- n} = \int_0^{\infty} \frac{x \sin (n x)}{x^2 + 1}
\log^2 x\, \mathrm{d} x - \pi \int_0^{\infty} \frac{x \cos (n x)}{x^2 + 1}
\log x \,\mathrm{d} x.
\end{gather}
\end{Example}
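The identities \eqref{eq:int:sin} and \eqref{dd} admit the same kind of numerical verification (a sketch assuming SciPy); note that the integrands now decay only like $1/x$, a slowly convergent Fourier integral that QUADPACK's infinite-interval routine is designed to handle by extrapolation:

```python
import numpy as np
from scipy.integrate import quad

def osc_int(g, n, kind):
    """integral_0^infty of g(x) times cos(nx) or sin(nx), split at x = 1."""
    trig = np.cos if kind == 'cos' else np.sin
    head = quad(lambda x: g(x) * trig(n * x), 0, 1)[0]
    tail = quad(g, 1, np.inf, weight=kind, wvar=n)[0]
    return head + tail

n = 1.0
S  = osc_int(lambda x: x / (x**2 + 1), n, 'sin')             # (eq:int:sin)
C  = osc_int(lambda x: x / (x**2 + 1), n, 'cos')
SL = osc_int(lambda x: x * np.log(x) / (x**2 + 1), n, 'sin')

assert abs(S - (np.pi / 2) * np.exp(-n)) < 1e-5   # the well-known evaluation
assert abs(SL - (np.pi / 2) * C) < 1e-5           # the relation in (dd)
```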
We of\/fer a few additional remarks before generalizing our ideas in the
next section. Equating real
parts in the identity~\eqref{eq:int:s:i} from the proof of
Theorem~\ref{thm:int:s}, we f\/ind that
\begin{gather}
\pi e^{- n} \cos (\pi s / 2) = \int_0^{\infty} \big( \cos (n x) (1 + \cos (\pi
s)) + \sin (n x) \sin (\pi s)\big) \frac{x^s }{x^2 + 1}\mathrm{d} x.
\label{eq:int:s:re}
\end{gather}
Setting $s = 0$ in \eqref{eq:int:s:re}, we again obtain \eqref{easy}.
On the other hand, note that
\begin{gather*}
\left[ \frac{\mathrm{d}}{\mathrm{d} s} \big( \cos (n x) (1 + \cos (\pi s)) + \sin (n
x) \sin (\pi s)\big) \right]_{s = 0} = \pi \sin (n x) .
\end{gather*}
Hence, taking a derivative of \eqref{eq:int:s:re} with respect to $s$, and
then setting $s = 0$, we f\/ind that
\begin{gather*}
0 = \pi \int_0^{\infty} \frac{\sin (n x)}{x^2 + 1} \mathrm{d} x + 2
\int_0^{\infty} \frac{\cos (n x)}{x^2 + 1} \log x\, \mathrm{d} x,
\end{gather*}
which is the formula \eqref{4} that Ramanujan recorded on p.~391. Similarly, taking
two derivatives of \eqref{eq:int:s:re} and letting $s=0$, we deduce that
\begin{gather*}
- \frac{\pi^3}{4} e^{- n} = - \pi^2 \int_0^{\infty} \frac{\cos (n x)}{x^2
+ 1} \mathrm{d} x + 2 \pi \int_0^{\infty} \frac{\sin (n x)}{x^2 + 1} \log x\,
\mathrm{d} x + 2 \int_0^{\infty} \frac{\cos (n x)}{x^2 + 1} \log^2x\, \mathrm{d}
x,
\end{gather*}
which, using \eqref{easy}, simplifies to
\begin{gather*}
\frac{\pi^3}{8} e^{- n} = \pi \int_0^{\infty} \frac{\sin (n x)}{x^2 + 1}
\log x \, \mathrm{d} x + \int_0^{\infty} \frac{\cos (n x)}{x^2 + 1} \log^2x\,
\mathrm{d} x,
\end{gather*}
which is the formula \eqref{4a} arising from Ramanujan's unintelligible remark in the initial edition of~\cite{nb}.
The integral \eqref{eq:int:s:re} has the companion
\begin{gather}
\pi e^{- n} \sin (\pi s / 2) = \int_0^{\infty} \big( \cos (n x) \sin (\pi s) +
\sin (n x) (1 - \cos (\pi s))\big) \frac{x^s }{x^2 + 1}\mathrm{d} x,
\label{eq:int:s:im}
\end{gather}
which is obtained by equating imaginary parts in~\eqref{eq:int:s:i}.
However, taking derivatives of~\eqref{eq:int:s:im} with respect to~$s$, and
then setting $s = 0$, does not generate new identities. Instead, we recover
precisely the previous results. For instance, taking a derivative of~\eqref{eq:int:s:im} with respect to~$s$, and then setting $s = 0$, we again
deduce~\eqref{easy}.
Taking two derivatives of~\eqref{eq:int:s:im} with
respect to~$s$, and then setting $s = 0$, we obtain
\begin{gather*}
0 = \pi^2 \int_0^{\infty} \frac{\sin (n x)}{x^2 + 1} \mathrm{d} x + 2 \pi
\int_0^{\infty} \frac{\cos (n x)}{x^2 + 1} \log x\, \mathrm{d} x,
\end{gather*}
which is again Ramanujan's formula~\eqref{4}.
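These identities lend themselves to a quick numerical sanity check. The following Python sketch (ours, not part of the paper; the cutoff $X$ and the tolerances are ad hoc choices) approximates each oscillatory integral by composite Simpson quadrature on $[0,X]$ plus a one-term integration-by-parts correction for the tail, and verifies $\int_0^{\infty}\frac{\cos x}{x^2+1}\,\mathrm{d}x=\int_0^{\infty}\frac{x\sin x}{x^2+1}\,\mathrm{d}x=\frac{\pi}{2e}$ together with \eqref{dd} for $n=1$.

```python
import math

def osc_integral(g, kind, X=1000.0, N=100_000):
    # int_0^infty g(x) sin(x) dx (kind='sin') or g(x) cos(x) dx (kind='cos'):
    # composite Simpson on [0, X] plus the leading integration-by-parts term
    # for the oscillatory tail on [X, infinity).
    w = math.sin if kind == 'sin' else math.cos
    h = X / N
    acc = g(0.0) * w(0.0) + g(X) * w(X)
    for i in range(1, N):
        x = i * h
        acc += (4 if i % 2 else 2) * g(x) * w(x)
    val = acc * h / 3.0
    # int_X^infty g sin ~ g(X) cos X ;  int_X^infty g cos ~ -g(X) sin X
    val += g(X) * math.cos(X) if kind == 'sin' else -g(X) * math.sin(X)
    return val

target = math.pi / (2 * math.e)                      # (pi/2) e^{-n} with n = 1
I_cos = osc_integral(lambda x: 1 / (x * x + 1), 'cos')
J0 = osc_integral(lambda x: x / (x * x + 1), 'sin')
assert abs(I_cos - target) < 1e-4 and abs(J0 - target) < 1e-4

# \eqref{dd}: int x sin(x) log(x)/(x^2+1) dx = (pi/2) int x cos(x)/(x^2+1) dx
J1 = osc_integral(lambda x: 0.0 if x == 0.0 else x * math.log(x) / (x * x + 1), 'sin')
I0 = osc_integral(lambda x: x / (x * x + 1), 'cos')
assert abs(J1 - math.pi / 2 * I0) < 1e-3
```

The tail correction matters here: two of the integrands decay only like $1/x$ (or $\log x/x$), so naive truncation at $X$ would leave an error comparable to the tolerance.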
\section{General theorems}
The phenomenon observed by Ramanujan in \eqref{2}
can be generalized by replacing the rational function $1/(z^2+1)$ by a
general rational function $f(z)$ in which the denominator has degree at least
one greater than the degree of the numerator. We shall also assume that
$f(z)$ does not have any poles on the real axis. We could prove a theorem
allowing for poles on the real axis, but in such instances we would need to
consider the principal values of the resulting integrals on the real axis.
In our arguments above, we used the fact that $1/(z^2+1)$ is an even
function. For our general theorem, we require that $f(z)$ be either even or
odd. For brevity, we let $\text{Res}(F(z),z_0)$ denote the residue of a~function~$F(z)$ at a pole~$z_0$. As above, we define a branch of~$\log z$
by $-\frac12\pi<\theta=\arg z\leq \frac32\pi$.
For a rational function $f(z)$ as prescribed above and each
nonnegative integer $m$, define
\begin{gather}\label{IJ}
I_m:=\int_0^{\infty}f(x) \cos x \log^mx\,\mathrm{d} x \qquad\text{and}\qquad J_m:=\int_0^{\infty}f(x) \sin x \log^mx\,
\mathrm{d} x.
\end{gather}
\begin{Theorem}\label{theorem2} Let $f(z)$ denote a rational function in $z$, as described above, and let $I_m$ and $J_m$ be defined by~\eqref{IJ}. Let
\begin{gather}\label{S}
S:=2\pi i\sum_U\textup{Res}(e^{iz}f(z)\log^mz,z_j),
\end{gather}
where the sum is over all poles $z_j$ of $e^{iz}f(z)\log^mz$ lying in the upper half-plane~$U$. Suppose that~$f(z)$ is even. Then
\begin{gather}\label{S1}
S=\sum_{k=0}^m\binom{m}{k}(i\pi)^{m-k}(I_k-iJ_k)+(I_m+iJ_m).
\end{gather}
Suppose that $f(z)$ is odd. Then
\begin{gather}\label{S2}
S=-\sum_{k=0}^m\binom{m}{k}(i\pi)^{m-k}(I_k-iJ_k)+(I_m+iJ_m).
\end{gather}
\end{Theorem}
Observe that \eqref{S1} and \eqref{S2} are recurrence relations that enable
us to successively calculate~$I_m$ and~$J_m$. With each succeeding value of~$m$, we see that two previously non-appearing integrals arise. If~$f(z)$ is
even, then these integrals are $I_m$ and $J_{m-1}$, while if $f(z)$ is odd,
these integrals are $J_m$ and $I_{m-1}$. The previously non-appearing
integrals appear in either the real part or the imaginary part of the
right-hand sides of~\eqref{S1} and~\eqref{S2}, but not both real and
imaginary parts. This fact therefore does not enable us to explicitly
determine either of the two integrals. We must be satisfied with obtaining
recurrence relations with increasingly more terms.
\begin{proof}
We commence as in the proof of Theorem~\ref{theorem1}. Let
$C_{R,\varepsilon}$ denote the positively oriented contour consisting of
the semi-circle~$C_{R}$ given by $z=Re^{i\theta}$, $0\leq\theta\leq\pi$,
$[-R,-\varepsilon]$, the semi-circle~$C_{\varepsilon}$ given by $z=\varepsilon
e^{i\theta}$, $\pi\geq\theta\geq0$, and $[\varepsilon,R]$, where
$0<\varepsilon<d$, with $d$ the smallest modulus of the poles of~$f(z)$
in~$U$. We also choose~$R$ larger than the moduli of all the poles of~$f(z)$ in~$U$. By the residue theorem,
\begin{gather}\label{1a}
\int_{C_{R,\varepsilon}}e^{iz}f(z)\log^mz\,\mathrm{d} z=S,
\end{gather}
where $S$ is defined in~\eqref{S}.
We next directly evaluate the integral on the left-hand side of~\eqref{1a}.
As in the proof of Theorem~\ref{theorem1}, we can easily show that
\begin{gather}\label{2a}
\int_{C_{\varepsilon}}e^{iz}f(z)\log^mz\,\mathrm{d} z=o(1),
\end{gather}
as $\varepsilon$ tends to $0$.
Secondly, we estimate the integral over $C_R$. By hypothesis, there exist a~positive constant $A$ and a~positive number~$R_0$, such that for $R\geq R_0$,
$|f(Re^{i\theta})|\leq A/R$. Hence, for $R\geq R_0$,
\begin{gather}
\left|\int_{C_{R}}e^{iz}f(z)\log^mz\,\mathrm{d} z\right| =
\left|\int_0^\pi
e^{iRe^{i\theta}}f(Re^{i\theta})\log^m(Re^{i\theta})iRe^{i\theta}\mathrm{d}\theta\right|
\nonumber\\
\hphantom{\left|\int_{C_{R}}e^{iz}f(z)\log^mz\,\mathrm{d} z\right|}{}
\leq\int_0^\pi e^{-R\sin\theta}|f(Re^{i\theta})|(\log R+\pi)^mR\,\mathrm{d}\theta\nonumber\\
\hphantom{\left|\int_{C_{R}}e^{iz}f(z)\log^mz\,\mathrm{d} z\right|}{}
\leq A(\log R+\pi)^m\left(\int_0^{\pi/2}+\int_{\pi/2}^{\pi}\right)e^{-R\sin\theta}\mathrm{d}\theta.\label{3a}
\end{gather}
Since $\sin\theta\geq2\theta/\pi$, $0\leq\theta\leq\pi/2$, upon replacing $\theta$ by $\pi-\theta$, we find that
\begin{align}\label{4ab}
\int_{\pi/2}^{\pi}e^{-R\sin\theta}\mathrm{d}\theta&=\int_0^{\pi/2}e^{-R\sin\theta}\mathrm{d}\theta
\leq\int_0^{\pi/2}e^{-2R\theta/\pi}\mathrm{d}\theta
=\frac{\pi}{2R}\left(1-e^{-R}\right).
\end{align}
The bound~\eqref{4ab} also holds for the first integral on the far right-hand side of~\eqref{3a}. Hence, from~\eqref{3a},
\begin{gather}\label{5a}
\left|\int_{C_{R}}e^{iz}f(z)\log^mz\,\mathrm{d} z\right|\leq A(\log R+\pi)^m\frac{\pi}{R}\left(1-e^{-R}\right)=o(1),
\end{gather}
as $R$ tends to infinity.
Hence, so far, by \eqref{1a}, \eqref{2a}, and \eqref{5a}, we have shown that
\begin{gather}
S=\int_{-\infty}^0e^{ix}f(x)(\log|x|+i\pi)^m\mathrm{d} x+\int_0^{\infty}e^{ix}f(x)\log^mx\,\mathrm{d} x\nonumber\\
\hphantom{S}{}=\int_0^{\infty}\left\{e^{-ix}f(-x)(\log x+i\pi)^m+e^{ix}f(x)\log^mx\right\}\,\mathrm{d} x.\label{6a}
\end{gather}
Suppose first that $f(x)$ is even. Then \eqref{6a} takes the form
\begin{gather*}
S =\int_0^{\infty}f(x)\left\{e^{-ix}(\log x+i\pi)^m+e^{ix}\log^mx\right\}\mathrm{d} x\nonumber\\
\hphantom{S}{} =\int_0^{\infty}f(x)\left\{e^{-ix}\sum_{k=0}^m\binom{m}{k}\log^kx(i\pi)^{m-k}+e^{ix}\log^mx\right\}\mathrm{d} x\nonumber\\
\hphantom{S}{} =\sum_{k=0}^m\binom{m}{k}(i\pi)^{m-k}(I_k-iJ_k)+(I_m+iJ_m),
\end{gather*}
which establishes \eqref{S1}. Secondly, suppose that~$f(z)$ is odd. Then, \eqref{6a} takes the form
\begin{gather}
S =\int_0^{\infty}f(x)\left\{-e^{-ix}(\log x+i\pi)^m+e^{ix}\log^mx\right\}\mathrm{d} x\nonumber\\
\hphantom{S}{} =\int_0^{\infty}f(x)\left\{-e^{-ix}\sum_{k=0}^m\binom{m}{k}\log^kx(i\pi)^{m-k}+e^{ix}\log^mx\right\}\mathrm{d} x\nonumber\\
\hphantom{S}{}=-\sum_{k=0}^m\binom{m}{k}(i\pi)^{m-k}(I_k-iJ_k)+(I_m+iJ_m),\label{8a}
\end{gather}
from which \eqref{S2} follows.
\end{proof}
\begin{Example} Let $f(z)=z/(z^2+1)$. Then
\begin{gather*}
2\pi i\,\textup{Res}\left(\frac{e^{iz}z\log^mz}{z^2+1},i\right)=\frac{\pi i}{e}\left(\frac{i\pi}{2}\right)^m,
\end{gather*}
and so we are led by \eqref{S2} to the recurrence relation
\begin{gather}\label{e2}
\frac{\pi
i}{e}\left(\frac{i\pi}{2}\right)^m=-\sum_{k=0}^m\binom{m}{k}(i\pi)^{m-k}(I_k-iJ_k)+(I_m+iJ_m),
\end{gather}
where
\begin{gather*}
I_m:=\int_0^{\infty}\frac{x \cos x \log^mx}{x^2+1}\mathrm{d} x \quad\text{and}\quad J_m:=\int_0^{\infty}\frac{x \sin x \log^mx}{x^2+1}
\mathrm{d} x.
\end{gather*}
(In the sequel, it is understood that we are assuming that $n=1$ in Theorem~\ref{theorem1} and in all our deliberations of the two preceding sections.)
If $m=0$, \eqref{e2} reduces to
\begin{gather}\label{e3}
J_0=\frac{\pi}{2e},
\end{gather}
which is \eqref{easy}. After simplification, if $m=1$, \eqref{e2} yields
\begin{gather}\label{e4}
-\frac{\pi^2}{2e}=-i\pi I_0-\pi J_0+2iJ_1.
\end{gather}
If we equate real parts in \eqref{e4}, we once again deduce~\eqref{e3}. If we equate imaginary parts in~\eqref{e4}, we find that
\begin{gather}\label{e5}
J_1-\frac{\pi}{2}I_0=0,
\end{gather}
which is identical with~\eqref{dd}. Setting $m=2$ in~\eqref{e2}, we find that
\begin{gather}\label{e6}
-\frac{i\pi^3}{4e}=\pi^2(I_0-iJ_0)-2i\pi(I_1-iJ_1)+2iJ_2.
\end{gather}
Equating real parts on both sides of~\eqref{e6}, we once again deduce~\eqref{e5}. If we equate imaginary parts in~\eqref{e6} and employ~\eqref{e3}, we arrive at
\begin{gather}\label{e7}
J_2-\pi I_1=\frac{\pi^3}{8e},
\end{gather}
which is the same as~\eqref{cc}.
Lastly, we set $m=3$ in~\eqref{e2} to find that
\begin{gather}\label{e8}
\frac{\pi^4}{8e}=i\pi^3(I_0-iJ_0)+3\pi^2(I_1-iJ_1)-3i\pi(I_2-iJ_2)+2iJ_3.
\end{gather}
If we equate real parts on both sides of~\eqref{e8} and simplify, we deduce~\eqref{e7} once again. On the other hand, when we equate imaginary parts on
both sides of~\eqref{e8}, we deduce that
\begin{gather}\label{e9}
2J_3-3\pi I_2-3\pi^2J_1+\pi^3I_0=0.
\end{gather}
A slight simplification of~\eqref{e9} can be rendered with the use of~\eqref{e5}.
\end{Example}
We can replace the rational function $1 / ( x^2 +
1)$ in Theorem~\ref{thm:int:s} by other even rational functions~$f ( x)$ to
obtain the following
generalization of Theorem~\ref{thm:int:s}. Its proof is in the same spirit
as that of Theorem~\ref{theorem2}.
\begin{Theorem}
\label{thm:int:f:s}Suppose that~$f (z)$ is an even rational function with
no real poles and with the degree of the denominator exceeding the degree
of the numerator by at least~$2$. Then,
\begin{gather*}
\frac{\pi i}{e^{\pi i s / 2}} \sum_U \operatorname{Res} ( e^{i n z} f ( z) z^s,
z_j) = \int_0^{\infty} \cos (n x - \pi s / 2) f ( x) x^s \mathrm{d} x,
\end{gather*}
where the sum is over all poles~$z_j$ of~$f ( z)$ lying in the upper
half-plane~$U$.
\end{Theorem}
Note that, as we did for~\eqref{eq:int:s:sin}, we can replace~$s$ with $s + 1$
in Theorem~\ref{thm:int:f:s} to obtain a~corresponding result for odd rational
functions~$x f (x)$. This is illustrated in Example~\ref{eg:odd} below.
As an application, we derive from Theorem~\ref{thm:int:f:s} the following
explicit integral evaluation, which reduces to Theorem~\ref{thm:int:s} when $r
= 0$.
\begin{Theorem}
\label{thm:int:sr}Let $r \geq 0$ be an integer. For $s \in (- 1, 2 ( r
+ 1))$ and $n \geq 0$,
\begin{gather*}
\int_0^{\infty} \frac{\cos (n x - \pi s / 2)}{( x^2 + 1)^{r + 1}} x^s
\mathrm{d} x = \frac{\pi}{2} e^{- n} \sum_{k = 0}^r \frac{1}{2^{r + k}}
\binom{r + k}{k} \sum_{j = 0}^{r - k} ( - 1)^j \binom{s}{j} \frac{n^{r -
k - j}}{( r - k - j) !} .
\end{gather*}
\end{Theorem}
\begin{proof}
Setting $f ( z) = 1 / ( z^2 + 1)^{r + 1}$ in Theorem~\ref{thm:int:f:s}, we see that we need to
calculate the residue
\begin{gather*}
\operatorname{Res} \left( \frac{e^{i n z} z^s}{( z^2 + 1)^{r + 1}}, i \right) =
\operatorname{Res} \left( \frac{\alpha (z)}{( z - i)^{r + 1}}, i \right),
\end{gather*}
where
\begin{gather*}
\alpha (z) = \frac{e^{i n z} z^s}{(z + i)^{r + 1}}
\end{gather*}
is analytic in a neighborhood of $z = i$. Equivalently, we calculate the
coefficient of $x^r$ in the Taylor expansion of $\alpha (x + i)$ around $x =
0$. Using the binomial series
\begin{gather*}
\frac{1}{( x + a)^{r + 1}} = \sum_{k \geq 0} ( - 1)^k \binom{r +
k}{k} x^k a^{- r - k - 1}
\end{gather*}
with $a = 2 i$, we find that
\begin{gather*}
\alpha (x + i) = e^{- n} \frac{e^{i n x} (x + i)^s}{(x + 2 i)^{r +
1}} \\
\hphantom{\alpha (x + i)}{}
= e^{- n} \sum_{k \geq 0} ( - 1)^k \binom{r + k}{k} x^k (2 i)^{-
r - k - 1} \sum_{j \geq 0} \binom{s}{j} x^j i^{s - j} \sum_{l
\geq 0} \frac{(i n x)^l}{l!} .
\end{gather*}
Extracting the coefficient of $x^r$, we conclude that
\begin{gather*}
\operatorname{Res} \left( \frac{e^{i n z} z^s}{( z^2 + 1)^{r + 1}}, i \right) =
\frac{e^{- n}}{( 2 i)^{r + 1}} \sum_{k = 0}^r \frac{( - 1)^k}{( 2 i)^k}
\binom{r + k}{k} \sum_{j = 0}^{r - k} \binom{s}{j} i^{s - j} \frac{( i
n)^{r - k - j}}{( r - k - j) !}\\
\hphantom{\operatorname{Res} \left( \frac{e^{i n z} z^s}{( z^2 + 1)^{r + 1}}, i \right)}{}
= \frac{e^{- n} e^{\pi i s / 2}}{2 i} \sum_{k = 0}^r \frac{1}{2^{r +
k}} \binom{r + k}{k} \sum_{j = 0}^{r - k} ( - 1)^j \binom{s}{j} \frac{n^{r
- k - j}}{( r - k - j) !}.
\end{gather*}
Theorem \ref{thm:int:sr} now follows from Theorem~\ref{thm:int:f:s}.
\end{proof}
\begin{Example}
In particular, in the case $s = 0$,
\begin{gather}\label{ab}
\int_0^{\infty} \frac{\cos (n x)}{( x^2 + 1)^{r + 1}} \mathrm{d} x =
\frac{\pi}{2} e^{- n} \sum_{k = 0}^r \frac{1}{2^{r + k}} \binom{r + k}{k}
\frac{n^{r - k}}{( r - k) !}.
\end{gather}
We note that, more generally, this integral can be expressed in terms of the modified Bessel function $K_{r+1/2}(z)$ of order $r+1/2$.
Namely, we have \cite[formula~(3.773), no.~6]{gr}
\begin{gather}\label{ab2}
\int_0^{\infty} \frac{\cos (n x)}{( x^2 + 1)^{r + 1}} \mathrm{d} x = \left(
\frac{n}{2} \right)^{r+1/2} \frac{\sqrt{\pi}}{\Gamma(r+1)} K_{r+1/2} (n) .
\end{gather}
When $r\ge0$ is an integer, the Bessel function $K_{r+1/2}(z)$ is elementary and the right-hand side of~\eqref{ab2} evaluates to the right-hand side of~\eqref{ab}.
\end{Example}
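Theorem~\ref{thm:int:sr} can also be checked numerically for non-integer $s$. The Python sketch below (ours, not part of the paper; cutoff and tolerance are ad hoc) compares a composite Simpson approximation of the integral, truncated at $X=100$, with the double sum on the right-hand side for a few pairs $(r,s)$ with $n=1$; the integrands decay at least like $x^{s-2r-2}$, so plain truncation suffices here.

```python
import math

def gen_binom(s, j):
    # generalized binomial coefficient binom(s, j) for real s
    out = 1.0
    for i in range(j):
        out *= (s - i) / (i + 1)
    return out

def rhs(r, s, n):
    # the double sum on the right-hand side of the theorem
    total = 0.0
    for k in range(r + 1):
        inner = sum((-1) ** j * gen_binom(s, j) * n ** (r - k - j)
                    / math.factorial(r - k - j) for j in range(r - k + 1))
        total += math.comb(r + k, k) / 2 ** (r + k) * inner
    return math.pi / 2 * math.exp(-n) * total

def lhs(r, s, n, X=100.0, N=100_000):
    # composite Simpson approximation of the integral, truncated at X
    def f(x):
        return math.cos(n * x - math.pi * s / 2) * x ** s / (x * x + 1) ** (r + 1)
    h = X / N
    acc = f(0.0) + f(X)   # note: 0.0 ** 0.0 == 1.0 in Python, so f(0) is finite
    for i in range(1, N):
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

for r, s in [(1, 0.0), (1, 0.5), (2, 1.5)]:
    assert abs(lhs(r, s, 1) - rhs(r, s, 1)) < 1e-4
```

The case $(r,s)=(1,0)$ recovers \eqref{ab} with $r=1$, while the other pairs exercise the generalized binomial coefficients $\binom{s}{j}$.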
On the other hand, taking a derivative of the formula in Theorem~\ref{thm:int:sr} with respect to $s$ before setting $s =
0$, and observing that, for $j \geq 1$,
\begin{gather*}
\left[ \frac{\mathrm{d}}{\mathrm{d} s} \binom{s}{j} \right]_{s = 0} = \frac{(-
1)^{j - 1}}{j},
\end{gather*}
we arrive at the following generalization of Ramanujan's formula
\eqref{4}.
\begin{Corollary}
We have
\begin{gather*}
\int_0^{\infty} \frac{\sin (n x)}{( x^2 + 1)^{r + 1}} \mathrm{d} x
+ \frac{2}{\pi} \int_0^{\infty} \frac{\cos (n x)}{( x^2 + 1)^{r + 1}} \log x\, \mathrm{d} x\\
\qquad{} = - e^{- n} \sum_{k = 0}^r \frac{1}{2^{r + k}} \binom{r
+ k}{k} \sum_{j = 1}^{r - k} \frac{1}{j} \frac{n^{r - k - j}}{( r - k -
j) !} .
\end{gather*}
\end{Corollary}
We leave it to the interested reader to make explicit the corresponding
generalization of~\eqref{eq:int:rama2}.
\begin{Example} \label{eg:odd}
As a direct extension of~\eqref{eq:int:s:sin}, replacing
$s$ with $s + 1$ in Theorem~\ref{thm:int:sr}, we obtain the following
companion integral. For integers $r \geq 0$, and any $s \in (- 2, 2 r +
1)$ and $n \geq 0$,
\begin{gather*}
\int_0^{\infty} \frac{x \sin (n x - \pi s / 2)}{( x^2 + 1)^{r + 1}} x^s
\mathrm{d} x = \frac{\pi}{2} e^{- n} \sum_{k = 0}^r \frac{1}{2^{r + k}}
\binom{r + k}{k} \sum_{j = 0}^{r - k} ( - 1)^j \binom{s + 1}{j}
\frac{n^{r - k - j}}{( r - k - j) !} .
\end{gather*}
In particular, setting $s = 0$, we find that
\begin{gather}\label{abc}
\int_0^{\infty} \frac{x \sin (n x)}{( x^2 + 1)^{r + 1}} \mathrm{d} x =
\frac{\pi}{2} e^{- n} \sum_{k = 0}^r \frac{1}{2^{r + k}} \binom{r + k}{k}
\left\{ \frac{n^{r - k}}{( r - k) !} - \frac{n^{r - k - 1}}{( r - k - 1)
!} \right\},
\end{gather}
while taking a derivative with respect to $s$ before setting $s = 0$ and
observing that, for~$j \geq 2$,
\begin{gather*}
\left[ \frac{\mathrm{d}}{\mathrm{d} s} \binom{s + 1}{j} \right]_{s = 0} =
\frac{(- 1)^j}{j (j - 1)},
\end{gather*}
we find that{\samepage
\begin{gather*}
\int_0^{\infty} \frac{x \cos (n x)}{( x^2 + 1)^{r + 1}} \mathrm{d} x
- \frac{2}{\pi} \int_0^{\infty} \frac{x \sin (n x)}{( x^2 + 1)^{r + 1}} \log x \, \mathrm{d} x\\
\qquad{} = e^{- n} \sum_{k = 0}^r \frac{1}{2^{r + k}} \binom{r +
k}{k} \left[ \frac{n^{r - k - 1}}{( r - k - 1) !} - \sum_{j = 2}^{r - k}
\frac{1}{j (j - 1)} \frac{n^{r - k - j}}{( r - k - j) !} \right]\\
\qquad{} = \frac{2}{\pi} \int_0^{\infty} \frac{\cos (n x)}{( x^2 + 1)^{r + 1}} \mathrm{d} x
- \frac{2}{\pi} \int_0^{\infty} \frac{x \sin (n x)}{( x^2 + 1)^{r + 1}} \mathrm{d} x\\
\qquad\quad{} - e^{- n} \sum_{k = 0}^r \frac{1}{2^{r + k}} \binom{r + k}{k} \sum_{j = 2}^{r - k} \frac{1}{j (j - 1)} \frac{n^{r - k - j}}{( r - k - j) !},
\end{gather*}
upon employing~\eqref{ab}} and~\eqref{abc}.
\end{Example}
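Identity \eqref{abc} is likewise easy to test numerically. The Python sketch below (ours; cutoff and tolerance are ad hoc) reads the term $n^{r-k-1}/(r-k-1)!$ as $0$ when $r-k=0$, consistent with the convention $1/(-1)!=0$, and checks $r=1,2$ with $n=1$.

```python
import math

def rhs_abc(r, n):
    # right-hand side of \eqref{abc}; the term with (r-k-1)! is dropped
    # when r - k = 0, i.e., 1/(-1)! = 0
    total = 0.0
    for k in range(r + 1):
        t = n ** (r - k) / math.factorial(r - k)
        if r - k >= 1:
            t -= n ** (r - k - 1) / math.factorial(r - k - 1)
        total += math.comb(r + k, k) / 2 ** (r + k) * t
    return math.pi / 2 * math.exp(-n) * total

def lhs_abc(r, n, X=100.0, N=100_000):
    # composite Simpson approximation of int_0^X x sin(nx)/(x^2+1)^{r+1} dx;
    # for r >= 1 the integrand decays like x^{-2r-1}, so truncation is harmless
    def f(x):
        return x * math.sin(n * x) / (x * x + 1) ** (r + 1)
    h = X / N
    acc = f(0.0) + f(X)
    for i in range(1, N):
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

for r in (1, 2):
    assert abs(lhs_abc(r, 1) - rhs_abc(r, 1)) < 1e-4
```

For instance, $r=1$, $n=1$ reproduces the classical evaluation $\int_0^\infty x\sin x/(x^2+1)^2\,\mathrm{d}x=\pi/(4e)$.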
\subsection*{Acknowledgements}
We wish to thank Khristo Boyadzhiev, Larry Glasser and the referees for their careful and helpful suggestions.
\pdfbookmark[1]{References}{ref}
% Source: https://arxiv.org/abs/1509.00886
% Title: Certain Integrals Arising from Ramanujan's Notebooks
% Source: https://arxiv.org/abs/1906.03148
% Title: Unsupervised and Supervised Principal Component Analysis: Tutorial
\section{Introduction}
Assume we have a dataset of \textit{instances} or \textit{data points}, $\{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{y}_i)\}_{i=1}^n$, with sample size $n$, where $\ensuremath\boldsymbol{x}_i \in \mathbb{R}^d$ and $\ensuremath\boldsymbol{y}_i \in \mathbb{R}^\ell$.
The $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^n$ are the input data to the model and the $\{\ensuremath\boldsymbol{y}_i\}_{i=1}^n$ are the observations (labels).
We define $\mathbb{R}^{d \times n} \ni \ensuremath\boldsymbol{X} := [\ensuremath\boldsymbol{x}_1, \dots, \ensuremath\boldsymbol{x}_n]$ and $\mathbb{R}^{\ell \times n} \ni \ensuremath\boldsymbol{Y} := [\ensuremath\boldsymbol{y}_1, \dots, \ensuremath\boldsymbol{y}_n]$.
We can also have an out-of-sample data point, $\ensuremath\boldsymbol{x}_t \in \mathbb{R}^d$, which is not in the training set. If there are $n_t$ out-of-sample data points, $\{\ensuremath\boldsymbol{x}_{t,i}\}_{i=1}^{n_t}$, we define $\mathbb{R}^{d \times n_t} \ni \ensuremath\boldsymbol{X}_t := [\ensuremath\boldsymbol{x}_{t,1}, \dots, \ensuremath\boldsymbol{x}_{t,n_t}]$.
Usually, the data points lie on a subspace or sub-manifold. Subspace or manifold learning tries to learn this sub-manifold \cite{ghojogh2019feature}.
Principal Component Analysis (PCA) \cite{jolliffe2011principal} is a very well-known and fundamental linear method for subspace and manifold learning \cite{friedman2001elements}. This method, which is also used for feature extraction \cite{ghojogh2019feature}, was first proposed by \cite{pearson1901liii}.
In order to learn a nonlinear sub-manifold, kernel PCA was proposed, first by \cite{scholkopf1997kernel,scholkopf1998nonlinear}, which maps the data to a high-dimensional feature space in the hope that they fall on a linear manifold in that space.
PCA and kernel PCA are unsupervised methods for subspace learning. To make use of the class labels in PCA, Supervised PCA (SPCA) was proposed \cite{bair2006prediction}, which scores the features of $\ensuremath\boldsymbol{X}$ and reduces them before applying PCA. This type of SPCA was mostly used in bioinformatics \cite{ma2011principal}.
Afterwards, another type of SPCA \cite{barshan2011supervised} was proposed, which has a solid theoretical foundation; PCA is a special case of it when the labels are not used.
This SPCA also has dual and kernel versions.
PCA and SPCA have had many applications, for example, eigenfaces \cite{turk1991eigenfaces,turk1991face} and kernel eigenfaces \cite{yang2000face} for face recognition, and detecting the orientation of images using PCA \cite{mohammadzade2017critical}.
There exist many other applications of PCA and SPCA in the literature.
In this paper, we explain the theory of PCA, kernel PCA, SPCA, and kernel SPCA, and provide some simulations for verifying the theory in practice.
\section{Principal Component Analysis}
\subsection{Projection Formulation}
\subsubsection{A Projection Point of View}
Assume we have a data point $\ensuremath\boldsymbol{x} \in \mathbb{R}^d$. We want to project this data point onto the vector space spanned by $p$ vectors $\{\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p\}$ where each vector is $d$-dimensional and usually $p \ll d$. We stack these vectors column-wise in matrix $\ensuremath\boldsymbol{U} = [\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p] \in \mathbb{R}^{d \times p}$. In other words, we want to project $\ensuremath\boldsymbol{x}$ onto the column space of $\ensuremath\boldsymbol{U}$, denoted by $\mathbb{C}\text{ol}(\ensuremath\boldsymbol{U})$.
The projection of $\ensuremath\boldsymbol{x} \in \mathbb{R}^d$ onto $\mathbb{C}\text{ol}(\ensuremath\boldsymbol{U})$, the $p$-dimensional column space of $\ensuremath\boldsymbol{U}$, and then its representation in $\mathbb{R}^d$ (its reconstruction) can be seen as a linear system of equations:
\begin{align}\label{equation_projection}
\mathbb{R}^d \ni \widehat{\ensuremath\boldsymbol{x}} := \ensuremath\boldsymbol{U \beta},
\end{align}
where we should find the unknown coefficients $\ensuremath\boldsymbol{\beta} \in \mathbb{R}^p$.
If $\ensuremath\boldsymbol{x}$ lies in $\mathbb{C}\text{ol}(\ensuremath\boldsymbol{U})$ or $\textbf{span}\{\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p\}$, this linear system has an exact solution, so $\widehat{\ensuremath\boldsymbol{x}} = \ensuremath\boldsymbol{x} = \ensuremath\boldsymbol{U \beta}$. However, if $\ensuremath\boldsymbol{x}$ does not lie in this space, there is no solution $\ensuremath\boldsymbol{\beta}$ to $\ensuremath\boldsymbol{x} = \ensuremath\boldsymbol{U \beta}$, and we should instead solve for the projection of $\ensuremath\boldsymbol{x}$ onto $\mathbb{C}\text{ol}(\ensuremath\boldsymbol{U})$ or $\textbf{span}\{\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p\}$ and then its reconstruction. In other words, we should solve Eq. (\ref{equation_projection}). In this case, $\widehat{\ensuremath\boldsymbol{x}}$ and $\ensuremath\boldsymbol{x}$ are different and we have a residual:
\begin{align}\label{equation_residual_1}
\ensuremath\boldsymbol{r} = \ensuremath\boldsymbol{x} - \widehat{\ensuremath\boldsymbol{x}} = \ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{U \beta},
\end{align}
which we want to be small. As can be seen in Fig. \ref{figure_residual_and_space}, the smallest residual vector is orthogonal to $\mathbb{C}\text{ol}(\ensuremath\boldsymbol{U})$; therefore:
\begin{align}
\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{U\beta} \perp \ensuremath\boldsymbol{U} &\implies \ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{U \beta}) = 0, \nonumber \\
& \implies \ensuremath\boldsymbol{\beta} = (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U})^{-1} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}. \label{equation_beta}
\end{align}
It is noteworthy that Eq. (\ref{equation_beta}) is also the formula for the coefficients in linear regression \cite{friedman2001elements}, where the input data are the rows of $\ensuremath\boldsymbol{U}$ and the labels are $\ensuremath\boldsymbol{x}$; however, our goal here is different. Nevertheless, in Section \ref{section_PCA_reconstruction}, some similarities between PCA and regression will be introduced.
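As a small numerical illustration of Eq. (\ref{equation_beta}) (ours, with arbitrary toy vectors, not taken from the tutorial), the following Python snippet solves the normal equations for a non-orthonormal $\ensuremath\boldsymbol{U} \in \mathbb{R}^{3 \times 2}$ and verifies that the optimal residual is orthogonal to $\mathbb{C}\text{ol}(\ensuremath\boldsymbol{U})$:

```python
# Columns of U and the point x are arbitrary illustrative choices.
u1, u2 = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]
x = [1.0, 2.0, 4.0]

dot = lambda a, b: sum(p * q for p, q in zip(a, b))

# Normal equations (U^T U) beta = U^T x, solved with the explicit 2x2 inverse,
# i.e., beta = (U^T U)^{-1} U^T x
g11, g12, g22 = dot(u1, u1), dot(u1, u2), dot(u2, u2)
b1, b2 = dot(u1, x), dot(u2, x)
det = g11 * g22 - g12 * g12
beta = [(g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det]

# Reconstruction x_hat = U beta and residual r = x - x_hat
x_hat = [beta[0] * a + beta[1] * b for a, b in zip(u1, u2)]
r = [xi - hi for xi, hi in zip(x, x_hat)]

# The smallest residual is orthogonal to Col(U):
assert abs(dot(u1, r)) < 1e-12 and abs(dot(u2, r)) < 1e-12
```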
\begin{figure}[!t]
\centering
\includegraphics[width=2.2in]{./images/residual_and_space}
\caption{The residual and projection onto the column space of $\ensuremath\boldsymbol{U}$.}
\label{figure_residual_and_space}
\end{figure}
Plugging Eq. (\ref{equation_beta}) in Eq. (\ref{equation_projection}) gives us:
\begin{align*}
\widehat{\ensuremath\boldsymbol{x}} = \ensuremath\boldsymbol{U} (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U})^{-1} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}.
\end{align*}
We define:
\begin{align}\label{equation_hat_matrix}
\mathbb{R}^{d \times d} \ni \ensuremath\boldsymbol{\Pi} := \ensuremath\boldsymbol{U} (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U})^{-1} \ensuremath\boldsymbol{U}^\top,
\end{align}
as ``projection matrix'' because it projects $\ensuremath\boldsymbol{x}$ onto $\mathbb{C}\text{ol}(\ensuremath\boldsymbol{U})$ (and reconstructs back).
Note that $\ensuremath\boldsymbol{\Pi}$ is also referred to as the ``hat matrix'' in the literature because it puts a hat on top of $\ensuremath\boldsymbol{x}$.
If the vectors $\{\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p\}$ are orthonormal, i.e., the matrix $\ensuremath\boldsymbol{U}$ has orthonormal columns, then $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$ (and, in the square case $p=d$, also $\ensuremath\boldsymbol{U}^\top = \ensuremath\boldsymbol{U}^{-1}$). Therefore, Eq. (\ref{equation_hat_matrix}) is simplified:
\begin{align}
& \ensuremath\boldsymbol{\Pi} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top.
\end{align}
So, we have:
\begin{align}\label{equation_x_hat}
\widehat{\ensuremath\boldsymbol{x}} = \ensuremath\boldsymbol{\Pi}\, \ensuremath\boldsymbol{x} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}.
\end{align}
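To illustrate (with an arbitrary toy basis of ours, not from the tutorial), the following snippet builds an orthonormal pair via one Gram--Schmidt step and confirms that $\ensuremath\boldsymbol{\Pi} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top$ is symmetric and idempotent ($\ensuremath\boldsymbol{\Pi}^2 = \ensuremath\boldsymbol{\Pi}$), as any projection matrix must be:

```python
import math

dot = lambda a, b: sum(p * q for p, q in zip(a, b))
scale = lambda c, v: [c * vi for vi in v]

# Two orthonormal directions in R^3 (u2 from one Gram-Schmidt step)
u1 = scale(1 / math.sqrt(2), [1.0, 0.0, 1.0])
v = [0.0, 1.0, 1.0]
c = dot(v, u1)
v = [vi - c * ui for vi, ui in zip(v, u1)]
u2 = scale(1 / math.sqrt(dot(v, v)), v)

# Projection matrix Pi = U U^T = u1 u1^T + u2 u2^T
Pi = [[u1[i] * u1[j] + u2[i] * u2[j] for j in range(3)] for i in range(3)]

# Pi is symmetric and idempotent: Pi^2 = Pi
Pi2 = [[sum(Pi[i][k] * Pi[k][j] for k in range(3)) for j in range(3)]
       for i in range(3)]
assert all(abs(Pi2[i][j] - Pi[i][j]) < 1e-12 for i in range(3) for j in range(3))
assert all(abs(Pi[i][j] - Pi[j][i]) < 1e-12 for i in range(3) for j in range(3))
```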
\begin{figure}[!t]
\centering
\includegraphics[width=2.8in]{./images/centered_data}
\caption{The principal directions P1 and P2 for (a) non-centered and (b) centered data. As can be seen, the data should be centered for PCA.}
\label{figure_centered_data}
\end{figure}
\subsubsection{Projection and Reconstruction in PCA}
Eq. (\ref{equation_x_hat}) can be interpreted in this way: $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}$ projects $\ensuremath\boldsymbol{x}$ onto the row space of $\ensuremath\boldsymbol{U}$, i.e., $\mathbb{C}\text{ol}(\ensuremath\boldsymbol{U}^\top)$ (projection onto a space spanned by $d$ vectors which are $p$-dimensional). We call this projection ``projection onto the PCA subspace''. It is a ``subspace'' because we have $p \leq d$, where $p$ and $d$ are the dimensionalities of the PCA subspace and the original $\ensuremath\boldsymbol{x}$, respectively. Afterwards, $\ensuremath\boldsymbol{U} (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x})$ projects the projected data back onto the column space of $\ensuremath\boldsymbol{U}$, i.e., $\mathbb{C}\text{ol}(\ensuremath\boldsymbol{U})$ (projection onto a space spanned by $p$ vectors which are $d$-dimensional). We call this step ``reconstruction from the PCA subspace'', and we want the residual between $\ensuremath\boldsymbol{x}$ and its reconstruction $\widehat{\ensuremath\boldsymbol{x}}$ to be small.
If there exist $n$ training data points, i.e., $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^n$, the projection of a training data point $\ensuremath\boldsymbol{x}$ is:
\begin{align}\label{equation_projection_onePoint}
\mathbb{R}^{p} \ni \widetilde{\ensuremath\boldsymbol{x}} := \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{x}},
\end{align}
where:
\begin{align}\label{equation_mean_removed_x}
&\mathbb{R}^{d} \ni \breve{\ensuremath\boldsymbol{x}} := \ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{\mu}_x,
\end{align}
is the centered data point and:
\begin{align}
&\mathbb{R}^{d} \ni \ensuremath\boldsymbol{\mu}_x := \frac{1}{n} \sum_{i=1}^n \ensuremath\boldsymbol{x}_i,
\end{align}
is the mean of training data points.
The reconstruction of a training data point $\ensuremath\boldsymbol{x}$ after projection onto the PCA subspace is:
\begin{align}\label{equation_reconstruction_onePoint}
\mathbb{R}^{d} \ni \widehat{\ensuremath\boldsymbol{x}} := \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{x}} + \ensuremath\boldsymbol{\mu}_x = \ensuremath\boldsymbol{U}\widetilde{\ensuremath\boldsymbol{x}} + \ensuremath\boldsymbol{\mu}_x,
\end{align}
where the mean is added back because it was removed before projection.
Note that in PCA, all the data points should be centered, i.e., the mean should be removed first. The reason is shown in Fig. \ref{figure_centered_data}.
In some applications, centering the data does not make sense. For example, in natural language processing, the data are text, and centering produces negative values, which are meaningless for text features. Therefore, the data is sometimes not centered, and PCA is applied to the non-centered data. This method is called Latent Semantic Indexing (LSI) or Latent Semantic Analysis (LSA) \cite{dumais2004latent}.
If we stack the $n$ data points column-wise in a matrix $\ensuremath\boldsymbol{X} = [\ensuremath\boldsymbol{x}_1, \dots, \ensuremath\boldsymbol{x}_n] \in \mathbb{R}^{d \times n}$, we first center them:
\begin{align}\label{equation_centered_training_data}
\mathbb{R}^{d \times n} \ni \breve{\ensuremath\boldsymbol{X}} := \ensuremath\boldsymbol{X} \ensuremath\boldsymbol{H} = \ensuremath\boldsymbol{X} - \ensuremath\boldsymbol{\mu}_x,
\end{align}
where $\breve{\ensuremath\boldsymbol{X}} = [\breve{\ensuremath\boldsymbol{x}}_1, \dots, \breve{\ensuremath\boldsymbol{x}}_n] = [\ensuremath\boldsymbol{x}_1 - \ensuremath\boldsymbol{\mu}_x, \dots, \ensuremath\boldsymbol{x}_n - \ensuremath\boldsymbol{\mu}_x]$ is the centered data and:
\begin{align}
\mathbb{R}^{n \times n} \ni \ensuremath\boldsymbol{H} := \ensuremath\boldsymbol{I} - (1/n) \ensuremath\boldsymbol{1}\ensuremath\boldsymbol{1}^\top,
\end{align}
is the centering matrix.
See Appendix \ref{section_appendix_centering} for more details about the centering matrix.
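The effect of right-multiplying by $\ensuremath\boldsymbol{H}$ can be checked on a tiny example (the data values below are arbitrary choices of ours):

```python
# H = I - (1/n) 1 1^T ; right-multiplying X (d x n) by H removes the row means
n, d = 4, 2
X = [[1.0, 2.0, 3.0, 6.0],
     [4.0, 0.0, 2.0, 2.0]]   # arbitrary data, one row per dimension

H = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)] for i in range(n)]

XH = [[sum(X[i][k] * H[k][j] for k in range(n)) for j in range(n)]
      for i in range(d)]

for i in range(d):
    # each row of X H sums to zero, i.e., every feature is mean-centered...
    assert abs(sum(XH[i])) < 1e-12
    # ...and X H equals X with the row mean subtracted entry-wise
    mu = sum(X[i]) / n
    assert all(abs(XH[i][j] - (X[i][j] - mu)) < 1e-12 for j in range(n))
```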
The projection and reconstruction, Eqs. (\ref{equation_projection_onePoint}) and (\ref{equation_reconstruction_onePoint}), for the whole training data are:
\begin{align}
&\mathbb{R}^{p \times n} \ni \widetilde{\ensuremath\boldsymbol{X}} := \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}, \label{equation_projection_severalPoint} \\
&\mathbb{R}^{d \times n} \ni \widehat{\ensuremath\boldsymbol{X}} := \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}} + \ensuremath\boldsymbol{\mu}_x = \ensuremath\boldsymbol{U}\widetilde{\ensuremath\boldsymbol{X}} + \ensuremath\boldsymbol{\mu}_x, \label{equation_reconstruction_severalPoint}
\end{align}
where $\widetilde{\ensuremath\boldsymbol{X}} = [\widetilde{\ensuremath\boldsymbol{x}}_1, \dots, \widetilde{\ensuremath\boldsymbol{x}}_n]$ and $\widehat{\ensuremath\boldsymbol{X}} = [\widehat{\ensuremath\boldsymbol{x}}_1, \dots, \widehat{\ensuremath\boldsymbol{x}}_n]$ are the projected data onto PCA subspace and the reconstructed data, respectively.
We can also project a new data point onto the PCA subspace of $\ensuremath\boldsymbol{X}$, where the new data point is not a column of $\ensuremath\boldsymbol{X}$. In other words, the new data point has had no impact on constructing the PCA subspace. This new data point is also referred to as a ``test data point'' or ``out-of-sample data'' in the literature.
Eq. (\ref{equation_projection_severalPoint}) was for the projection of $\ensuremath\boldsymbol{X}$ onto its PCA subspace. If $\ensuremath\boldsymbol{x}_t$ denotes an out-of-sample data point, its projection onto the PCA subspace ($\widetilde{\ensuremath\boldsymbol{x}}_t$) and its reconstruction ($\widehat{\ensuremath\boldsymbol{x}}_t$) are:
\begin{align}
&\mathbb{R}^{p} \ni \widetilde{\ensuremath\boldsymbol{x}}_t = \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{x}}_t, \label{equation_outOfSample_projection_PCA} \\
&\mathbb{R}^{d} \ni \widehat{\ensuremath\boldsymbol{x}}_t = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{x}}_t + \ensuremath\boldsymbol{\mu}_x = \ensuremath\boldsymbol{U} \widetilde{\ensuremath\boldsymbol{x}}_t + \ensuremath\boldsymbol{\mu}_x, \label{equation_outOfSample_reconstruct_PCA}
\end{align}
where:
\begin{align}
\mathbb{R}^{d} \ni \breve{\ensuremath\boldsymbol{x}}_t := \ensuremath\boldsymbol{x}_t - \ensuremath\boldsymbol{\mu}_x,
\end{align}
is the centered out-of-sample data point. Note that for centering the out-of-sample data point(s), we should use the mean of the training data and not the mean of the out-of-sample data.
If we consider $n_t$ out-of-sample data points, $\mathbb{R}^{d \times n_t} \ni \ensuremath\boldsymbol{X}_t = [\ensuremath\boldsymbol{x}_{t,1}, \dots, \ensuremath\boldsymbol{x}_{t,n_t}]$, all together, their projection and reconstruction are:
\begin{align}
&\mathbb{R}^{p \times n_t} \ni \widetilde{\ensuremath\boldsymbol{X}}_t = \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}_t, \\
&\mathbb{R}^{d \times n_t} \ni \widehat{\ensuremath\boldsymbol{X}}_t = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}_t + \ensuremath\boldsymbol{\mu}_x = \ensuremath\boldsymbol{U} \widetilde{\ensuremath\boldsymbol{X}}_t + \ensuremath\boldsymbol{\mu}_x,
\end{align}
respectively, where:
\begin{align}
\mathbb{R}^{d \times n_t} \ni \breve{\ensuremath\boldsymbol{X}}_t := \ensuremath\boldsymbol{X}_t - \ensuremath\boldsymbol{\mu}_x.
\end{align}
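The projection and reconstruction formulas above can be sketched numerically. The following is a minimal NumPy sketch with synthetic data; all variable names (e.g., \texttt{Xc}, \texttt{x\_t}) are illustrative, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, p = 5, 100, 2

X = rng.standard_normal((d, n))         # training data; columns are samples
mu = X.mean(axis=1, keepdims=True)      # training mean (mu_x)
Xc = X - mu                             # centered training data

# PCA directions: leading p eigenvectors of the covariance matrix S = Xc Xc^T
S = Xc @ Xc.T
eigvals, eigvecs = np.linalg.eigh(S)    # eigh returns ascending eigenvalues
U = eigvecs[:, ::-1][:, :p]             # take the p leading eigenvectors

X_tilde = U.T @ Xc                      # projection of the training data
X_hat = U @ X_tilde + mu                # reconstruction of the training data

# out-of-sample point: centered with the TRAINING mean, not its own mean
x_t = rng.standard_normal((d, 1))
x_t_tilde = U.T @ (x_t - mu)            # projection of the out-of-sample point
x_t_hat = U @ x_t_tilde + mu            # its reconstruction
```

The key point the sketch reflects is that the same mean \texttt{mu} (computed from training data) centers both the training and out-of-sample points.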
\subsection{PCA Using Eigen-Decomposition}
\subsubsection{Projection Onto One Direction}
In Eq. (\ref{equation_reconstruction_onePoint}), if $p=1$, we project $\ensuremath\boldsymbol{x}$ onto only one vector $\ensuremath\boldsymbol{u}$ and reconstruct it. If we ignore adding the mean back, we have:
\begin{align*}
\widehat{\ensuremath\boldsymbol{x}} = \ensuremath\boldsymbol{u}\ensuremath\boldsymbol{u}^\top \breve{\ensuremath\boldsymbol{x}}.
\end{align*}
The squared length (squared $\ell_2$-norm) of this reconstructed vector is:
\begin{align}
&||\widehat{\ensuremath\boldsymbol{x}}||_2^2 = ||\ensuremath\boldsymbol{u}\ensuremath\boldsymbol{u}^\top \breve{\ensuremath\boldsymbol{x}}||_2^2 = (\ensuremath\boldsymbol{u}\ensuremath\boldsymbol{u}^\top \breve{\ensuremath\boldsymbol{x}})^\top (\ensuremath\boldsymbol{u}\ensuremath\boldsymbol{u}^\top \breve{\ensuremath\boldsymbol{x}}) \nonumber \\
& = \breve{\ensuremath\boldsymbol{x}}^\top \ensuremath\boldsymbol{u} \underbrace{\ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{u}}_{1} \ensuremath\boldsymbol{u}^\top \breve{\ensuremath\boldsymbol{x}} \overset{(a)}{=} \breve{\ensuremath\boldsymbol{x}}^\top \ensuremath\boldsymbol{u}\, \ensuremath\boldsymbol{u}^\top \breve{\ensuremath\boldsymbol{x}} \overset{(b)}{=} \ensuremath\boldsymbol{u}^\top \breve{\ensuremath\boldsymbol{x}}\, \breve{\ensuremath\boldsymbol{x}}^\top \ensuremath\boldsymbol{u}, \label{equation_x_hat_length_squared}
\end{align}
where $(a)$ is because $\ensuremath\boldsymbol{u}$ is a unit (normal) vector, i.e., $\ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{u} = ||\ensuremath\boldsymbol{u}||_2^2 = 1$, and $(b)$ is because $\breve{\ensuremath\boldsymbol{x}}^\top \ensuremath\boldsymbol{u} = \ensuremath\boldsymbol{u}^\top \breve{\ensuremath\boldsymbol{x}} \in \mathbb{R}$.
Suppose we have $n$ data points $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^n$ where $\{\breve{\ensuremath\boldsymbol{x}}_i\}_{i=1}^n$ are the centered data.
The summation of the squared lengths of their reconstructions $\{\widehat{\ensuremath\boldsymbol{x}}_i\}_{i=1}^n$ is:
\begin{align}\label{equation_sum_projected}
\sum_{i=1}^n ||\widehat{\ensuremath\boldsymbol{x}}_i||_2^2 \overset{(\ref{equation_x_hat_length_squared})}{=} \sum_{i=1}^n \ensuremath\boldsymbol{u}^\top \breve{\ensuremath\boldsymbol{x}}_i\, \breve{\ensuremath\boldsymbol{x}}_i^\top \ensuremath\boldsymbol{u} = \ensuremath\boldsymbol{u}^\top \Big(\sum_{i=1}^n \breve{\ensuremath\boldsymbol{x}}_i\, \breve{\ensuremath\boldsymbol{x}}_i^\top\Big) \ensuremath\boldsymbol{u}.
\end{align}
Considering $\breve{\ensuremath\boldsymbol{X}} = [\breve{\ensuremath\boldsymbol{x}}_1, \dots, \breve{\ensuremath\boldsymbol{x}}_n] \in \mathbb{R}^{d \times n}$, we have:
\begin{align}\label{equation_covariance_matrix}
\mathbb{R}^{d \times d} \ni \ensuremath\boldsymbol{S} &:= \sum_{i=1}^n \breve{\ensuremath\boldsymbol{x}}_i\, \breve{\ensuremath\boldsymbol{x}}_i^\top = \breve{\ensuremath\boldsymbol{X}} \breve{\ensuremath\boldsymbol{X}}^\top
\overset{(\ref{equation_centered_training_data})}{=} \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H} \ensuremath\boldsymbol{H}^\top\ensuremath\boldsymbol{X}^\top \nonumber \\
&\overset{(\ref{equation_centeringMatrix_is_symmetric})}{=} \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H} \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top \overset{(\ref{equation_centeringMatrix_is_idempotent})}{=} \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top,
\end{align}
where $\ensuremath\boldsymbol{S}$ is called the ``covariance matrix''. If the data were already centered, we would have $\ensuremath\boldsymbol{S} = \ensuremath\boldsymbol{X} \ensuremath\boldsymbol{X}^\top$.
Plugging Eq. (\ref{equation_covariance_matrix}) in Eq. (\ref{equation_sum_projected}) gives us:
\begin{align}
\sum_{i=1}^n ||\widehat{\ensuremath\boldsymbol{x}}_i||_2^2 = \ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{u}.
\end{align}
Note that $\ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{u}$ is also the variance of the data projected onto the PCA subspace, i.e., $\ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{u} = \mathbb{V}\text{ar}(\ensuremath\boldsymbol{u}^\top \breve{\ensuremath\boldsymbol{X}})$. This makes sense because multiplying the random data (here $\breve{\ensuremath\boldsymbol{X}}$) by a deterministic quantity (here $\ensuremath\boldsymbol{u}$) has a squared (quadratic) effect on the variance, and $\ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{u}$ is quadratic in $\ensuremath\boldsymbol{u}$.
Therefore, $\ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{u}$ can be interpreted in two ways: (I) the squared length of reconstruction and (II) the variance of projection.
We want to find a projection direction $\ensuremath\boldsymbol{u}$ which maximizes the squared length of reconstruction (or variance of projection):
\begin{equation}
\begin{aligned}
& \underset{\ensuremath\boldsymbol{u}}{\text{maximize}}
& & \ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{u}, \\
& \text{subject to}
& & \ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{u} = 1,
\end{aligned}
\end{equation}
where the constraint ensures that $\ensuremath\boldsymbol{u}$ is a unit (normal) vector, as we assumed beforehand.
Using Lagrange multiplier \cite{boyd2004convex}, we have:
\begin{align*}
\mathcal{L} = \ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{u} - \lambda (\ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{u} - 1).
\end{align*}
Taking the derivative of the Lagrangian and setting it to zero gives:
\begin{align}\label{equation_pca_eigendecomposition_1}
& \mathbb{R}^p \ni \frac{\partial \mathcal{L}}{\partial \ensuremath\boldsymbol{u}} = 2\ensuremath\boldsymbol{S} \ensuremath\boldsymbol{u} - 2\lambda \ensuremath\boldsymbol{u} \overset{\text{set}}{=} 0 \implies \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{u} = \lambda \ensuremath\boldsymbol{u}.
\end{align}
Eq. (\ref{equation_pca_eigendecomposition_1}) is the eigen-decomposition of $\ensuremath\boldsymbol{S}$, where $\ensuremath\boldsymbol{u}$ and $\lambda$ are the leading eigenvector and eigenvalue of $\ensuremath\boldsymbol{S}$, respectively \cite{ghojogh2019eigenvalue}. Note that the leading eigenvalue is the largest one. The solution is the leading eigenvector because we are maximizing in the optimization problem.
In conclusion, when projecting onto one PCA direction, the PCA direction $\ensuremath\boldsymbol{u}$ is the leading eigenvector of the covariance matrix.
Note that the ``PCA direction'' is also called ``principal direction'' or ``principal axis'' in the literature. The dimensions (features) of the projected data onto PCA subspace are called ``principal components''.
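The claim that the leading eigenvector maximizes $\ensuremath\boldsymbol{u}^\top \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{u}$ over unit vectors can be checked numerically. A small NumPy sketch with synthetic data (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
Xc = rng.standard_normal((3, 50))
Xc -= Xc.mean(axis=1, keepdims=True)    # center the data
S = Xc @ Xc.T                           # covariance matrix

eigvals, eigvecs = np.linalg.eigh(S)    # ascending eigenvalues
u_star = eigvecs[:, -1]                 # leading eigenvector
best = u_star @ S @ u_star              # equals the leading eigenvalue

# no random unit vector attains a larger variance of projection
for _ in range(100):
    u = rng.standard_normal(3)
    u /= np.linalg.norm(u)              # make it a unit vector
    assert u @ S @ u <= best + 1e-9
```

This is exactly the Rayleigh-quotient behavior behind Eq. (\ref{equation_pca_eigendecomposition_1}): the quadratic form over the unit sphere is maximized at the leading eigenvector.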
\subsubsection{Projection Onto Span of Several Directions}
In Eq. (\ref{equation_reconstruction_onePoint}) or (\ref{equation_reconstruction_severalPoint}), if $p>1$, we project $\breve{\ensuremath\boldsymbol{x}}$ or $\breve{\ensuremath\boldsymbol{X}}$ onto a PCA subspace with dimensionality greater than one and then reconstruct back. If we ignore adding the mean back, we have:
\begin{align*}
\widehat{\ensuremath\boldsymbol{X}} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}.
\end{align*}
It means that we project every column of $\breve{\ensuremath\boldsymbol{X}}$, i.e., $\breve{\ensuremath\boldsymbol{x}}$, onto a space spanned by the $p$ vectors $\{\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p\}$ each of which is $d$-dimensional. Therefore, the projected data are $p$-dimensional and the reconstructed data are $d$-dimensional.
The squared length (squared Frobenius norm) of this reconstructed matrix is:
\begin{align*}
||\widehat{\ensuremath\boldsymbol{X}}||_F^2 &= ||\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}||_F^2 = \textbf{tr}\big((\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}})^\top (\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}})\big) \\
& = \textbf{tr}(\breve{\ensuremath\boldsymbol{X}}^\top \ensuremath\boldsymbol{U} \underbrace{\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U}}_{\ensuremath\boldsymbol{I}} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}) \overset{(a)}{=} \textbf{tr}(\breve{\ensuremath\boldsymbol{X}}^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}) \\
& \overset{(b)}{=} \textbf{tr}(\ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}} \breve{\ensuremath\boldsymbol{X}}^\top \ensuremath\boldsymbol{U}),
\end{align*}
where $\textbf{tr}(\cdot)$ denotes the trace of a matrix, $(a)$ is because the columns of $\ensuremath\boldsymbol{U}$ are orthonormal, and $(b)$ is because of the cyclic property of the trace: $\textbf{tr}(\breve{\ensuremath\boldsymbol{X}}^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}) = \textbf{tr}(\breve{\ensuremath\boldsymbol{X}} \breve{\ensuremath\boldsymbol{X}}^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top) = \textbf{tr}(\ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}} \breve{\ensuremath\boldsymbol{X}}^\top \ensuremath\boldsymbol{U})$.
According to Eq. (\ref{equation_covariance_matrix}), $\breve{\ensuremath\boldsymbol{X}} \breve{\ensuremath\boldsymbol{X}}^\top = \ensuremath\boldsymbol{S}$ is the covariance matrix; therefore:
\begin{align}
||\widehat{\ensuremath\boldsymbol{X}}||_F^2 = \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}\, \ensuremath\boldsymbol{U}).
\end{align}
We want to find several projection directions $\{\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p\}$, as columns of $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$, which maximize the squared length of reconstruction (or variance of projection):
\begin{equation}
\begin{aligned}
& \underset{\ensuremath\boldsymbol{U}}{\text{maximize}}
& & \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}\, \ensuremath\boldsymbol{U}), \\
& \text{subject to}
& & \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I},
\end{aligned}
\end{equation}
where the constraint ensures that the columns of $\ensuremath\boldsymbol{U}$ are orthonormal, as we assumed beforehand.
Using Lagrange multiplier \cite{boyd2004convex}, we have:
\begin{align*}
\mathcal{L} = \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}\, \ensuremath\boldsymbol{U}) - \textbf{tr}\big(\ensuremath\boldsymbol{\Lambda}^\top (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} - \ensuremath\boldsymbol{I})\big),
\end{align*}
where $\ensuremath\boldsymbol{\Lambda} \in \mathbb{R}^{p \times p}$ is a diagonal matrix $\textbf{diag}([\lambda_1, \dots, \lambda_p]^\top)$ including the Lagrange multipliers. Taking the derivative of the Lagrangian and setting it to zero gives:
\begin{align}
& \mathbb{R}^{d \times p} \ni \frac{\partial \mathcal{L}}{\partial \ensuremath\boldsymbol{U}} = 2\ensuremath\boldsymbol{S} \ensuremath\boldsymbol{U} - 2\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Lambda} \overset{\text{set}}{=} 0 \nonumber \\
& \implies \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Lambda}. \label{equation_pca_eigendecomposition_2}
\end{align}
Eq. (\ref{equation_pca_eigendecomposition_2}) is the eigen-decomposition of $\ensuremath\boldsymbol{S}$, where the columns of $\ensuremath\boldsymbol{U}$ and the diagonal of $\ensuremath\boldsymbol{\Lambda}$ are the eigenvectors and eigenvalues of $\ensuremath\boldsymbol{S}$, respectively \cite{ghojogh2019eigenvalue}. The eigenvectors and eigenvalues are sorted from the leading (largest eigenvalue) to the trailing (smallest eigenvalue) because we are maximizing in the optimization problem.
As a conclusion, if projecting onto the PCA subspace or $\textbf{span}\{\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p\}$, the PCA directions $\{\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p\}$ are the sorted eigenvectors of the covariance matrix of data $\ensuremath\boldsymbol{X}$.
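Eq. (\ref{equation_pca_eigendecomposition_2}), $\ensuremath\boldsymbol{S}\ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Lambda}$, can be verified directly once the eigenpairs are sorted from leading to trailing. A minimal NumPy sketch (synthetic data, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(2)
Xc = rng.standard_normal((4, 30))
Xc -= Xc.mean(axis=1, keepdims=True)    # center the data
S = Xc @ Xc.T                           # covariance matrix

vals, vecs = np.linalg.eigh(S)          # ascending order
order = np.argsort(vals)[::-1]          # sort from leading to trailing
Lam = np.diag(vals[order])              # diagonal matrix of eigenvalues
U = vecs[:, order]                      # eigenvectors as columns

# the matrix eigen-decomposition equation: S U = U Lambda
assert np.allclose(S @ U, U @ Lam)
```

Note that $\ensuremath\boldsymbol{\Lambda}$ must multiply $\ensuremath\boldsymbol{U}$ from the right, so that each column $\ensuremath\boldsymbol{u}_j$ is scaled by its own $\lambda_j$.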
\subsection{Properties of $\ensuremath\boldsymbol{U}$}
\subsubsection{Rank of the Covariance Matrix}
We consider two cases for $\breve{\ensuremath\boldsymbol{X}} \in \mathbb{R}^{d \times n}$:
\begin{enumerate}
\item If the original dimensionality of data is greater than the number of data points, i.e., $d \geq n$:
In this case, $\textbf{rank}(\breve{\ensuremath\boldsymbol{X}}) = \textbf{rank}(\breve{\ensuremath\boldsymbol{X}}^\top) \leq n-1$, where the $-1$ is because the centered data sum to zero, making the columns of $\breve{\ensuremath\boldsymbol{X}}$ linearly dependent. Therefore, $\textbf{rank}(\ensuremath\boldsymbol{S}) = \textbf{rank}(\breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top) = \textbf{rank}(\breve{\ensuremath\boldsymbol{X}}) \leq n-1$. For example, if we only have one data point, it becomes zero after centering and the rank is zero.
\item If the original dimensionality of data is less than the number of data points, i.e., $d \leq n-1$ (the $-1$ again is because of centering the data):
In this case, $\textbf{rank}(\breve{\ensuremath\boldsymbol{X}}) = \textbf{rank}(\breve{\ensuremath\boldsymbol{X}}^\top) \leq d$. Therefore, $\textbf{rank}(\ensuremath\boldsymbol{S}) = \textbf{rank}(\breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top) \leq \min\big(\textbf{rank}(\breve{\ensuremath\boldsymbol{X}}), \textbf{rank}(\breve{\ensuremath\boldsymbol{X}}^\top)\big) = d$.
\end{enumerate}
So, we either have $\textbf{rank}(\ensuremath\boldsymbol{S}) \leq n-1$ or $\textbf{rank}(\ensuremath\boldsymbol{S}) \leq d$.
\subsubsection{Truncating $\ensuremath\boldsymbol{U}$}
Consider the following cases:
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{./images/coordinate_rotation}
\caption{Rotation of coordinates because of PCA.}
\label{figure_coordinate_rotation}
\end{figure}
\begin{enumerate}
\item If $\textbf{rank}(\ensuremath\boldsymbol{S}) = d$:
we have $p=d$ (we have $d$ non-zero eigenvalues of $\ensuremath\boldsymbol{S}$), so that $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times d}$. It means that the dimensionality of the PCA subspace is $d$, equal to the dimensionality of the original space. Why does this happen? That is because $\textbf{rank}(\ensuremath\boldsymbol{S}) = d$ means that the data are spread wide enough in all dimensions of the original space up to a possible rotation (see Fig. \ref{figure_coordinate_rotation}). Therefore, the dimensionality of PCA subspace is equal to the original dimensionality; however, PCA might merely rotate the coordinate axes. In this case, $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times d}$ is a square orthogonal matrix so that $\mathbb{R}^{d \times d} \ni \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^{-1} = \ensuremath\boldsymbol{I}$ and $\mathbb{R}^{d \times d} \ni \ensuremath\boldsymbol{U}^\top\ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{U}^{-1}\ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$ because $\textbf{rank}(\ensuremath\boldsymbol{U}) = d$, $\textbf{rank}(\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top) = d$, and $\textbf{rank}(\ensuremath\boldsymbol{U}^\top\ensuremath\boldsymbol{U}) = d$.
That is why in the literature, PCA is also referred to as coordinate rotation.
\item If $\textbf{rank}(\ensuremath\boldsymbol{S}) < d$ and $n > d$:
it means that we have enough data points but the data points exist on a subspace and do not fill the original space wide enough in every direction. In this case, $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$ is not square and $\textbf{rank}(\ensuremath\boldsymbol{U}) = p < d$ (we have $p$ non-zero eigenvalues of $\ensuremath\boldsymbol{S}$). Therefore, $\mathbb{R}^{d \times d} \ni \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \neq \ensuremath\boldsymbol{I}$ and $\mathbb{R}^{p \times p} \ni \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$ because $\textbf{rank}(\ensuremath\boldsymbol{U}) = p$, $\textbf{rank}(\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top) = p < d$, and $\textbf{rank}(\ensuremath\boldsymbol{U}^\top\ensuremath\boldsymbol{U}) = p$.
\item If $\textbf{rank}(\ensuremath\boldsymbol{S}) \leq n-1 < d$:
it means that we do not have enough data points to properly represent the original space and the points have an ``intrinsic dimensionality''. For example, consider two three-dimensional points, which lie on a one-dimensional line (subspace). So, similar to the previous case, the data points exist on a subspace and do not fill the original space wide enough in every direction. The discussions about $\ensuremath\boldsymbol{U}$, $\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top$, and $\ensuremath\boldsymbol{U}^\top\ensuremath\boldsymbol{U}$ are similar to the previous case.
\end{enumerate}
Note that we might have $\textbf{rank}(\ensuremath\boldsymbol{S}) = d$ and thus $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times d}$ but want to ``truncate'' the matrix $\ensuremath\boldsymbol{U}$ to have $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$. Truncating $\ensuremath\boldsymbol{U}$ means that we take a subset of best (leading) eigenvectors rather than the whole $d$ eigenvectors with non-zero eigenvalues. In this case, again we have $\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \neq \ensuremath\boldsymbol{I}$ and $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$.
The intuition of truncating is this: the variance of the data along some directions might be noticeably smaller than along others; in this case, we can keep only the top $p<d$ eigenvectors (PCA directions) and ``ignore'' the PCA directions with smaller eigenvalues, so that $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$. Figure \ref{figure_PCA_oneDirection} illustrates this case for a 2D example.
Note that truncating can also be done when $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$ to have $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times q}$ where $p$ is the number of non-zero eigenvalues of $\ensuremath\boldsymbol{S}$ and $q < p$.
From all the above analyses, we conclude that as long as the columns of the matrix $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$ are orthonormal, we always have $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$ regardless of the value $p$. If the orthogonal matrix $\ensuremath\boldsymbol{U}$ is not truncated and thus is a square matrix, we also have $\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top = \ensuremath\boldsymbol{I}$.
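The conclusion above, that $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$ always holds for orthonormal columns while $\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top = \ensuremath\boldsymbol{I}$ only holds for a square $\ensuremath\boldsymbol{U}$, can be checked with a small NumPy sketch (synthetic matrices, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(3)
d, p = 6, 2
A = rng.standard_normal((d, d))
Q, _ = np.linalg.qr(A)                  # square orthogonal matrix (d x d)
U = Q[:, :p]                            # truncated: d x p with orthonormal columns

assert np.allclose(U.T @ U, np.eye(p))      # holds for any p
assert not np.allclose(U @ U.T, np.eye(d))  # fails once U is truncated
assert np.allclose(Q @ Q.T, np.eye(d))      # holds for the square matrix
```

Here $\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top$ for truncated $\ensuremath\boldsymbol{U}$ is a rank-$p$ projection matrix onto the column span, not the identity.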
\begin{figure}[!t]
\centering
\includegraphics[width=1.6in]{./images/PCA_oneDirection}
\caption{A 2D example where the data is almost on a line and the second principal direction is very small and can be ignored.}
\label{figure_PCA_oneDirection}
\end{figure}
\subsection{Reconstruction Error in PCA}\label{section_PCA_reconstruction}
\subsubsection{Reconstruction in Linear Projection}
If we center the data, the Eq. (\ref{equation_residual_1}) becomes $\ensuremath\boldsymbol{r} = \breve{\ensuremath\boldsymbol{x}} - \widehat{\ensuremath\boldsymbol{x}}$ because the reconstructed data will also be centered according to Eq. (\ref{equation_reconstruction_onePoint}).
According to Eqs. (\ref{equation_residual_1}), (\ref{equation_mean_removed_x}), and (\ref{equation_reconstruction_onePoint}), we have:
\begin{align}
\ensuremath\boldsymbol{r} = \ensuremath\boldsymbol{x} - \widehat{\ensuremath\boldsymbol{x}} = \breve{\ensuremath\boldsymbol{x}} + \ensuremath\boldsymbol{\mu}_x - \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{x}} - \ensuremath\boldsymbol{\mu}_x = \breve{\ensuremath\boldsymbol{x}} - \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{x}}.
\end{align}
Figure \ref{figure_reconstruction_error} shows the projection of a two-dimensional point (after centering the data) onto the first principal direction, its reconstruction, and its reconstruction error. As can be seen in this figure, the reconstruction error is different from the least square error in linear regression.
\begin{figure}[!t]
\centering
\includegraphics[width=3.1in]{./images/reconstruction_error}
\caption{(a) Projection of the black circle data points onto the principal direction where the green square data points are the projected data. (b) The reconstruction coordinate of the data points. (c) The reconstruction error in PCA. (d) The least square error in linear regression.}
\label{figure_reconstruction_error}
\end{figure}
For $n$ data points, we have:
\begin{align}
\ensuremath\boldsymbol{R} &:= \ensuremath\boldsymbol{X} - \widehat{\ensuremath\boldsymbol{X}} = \breve{\ensuremath\boldsymbol{X}} + \ensuremath\boldsymbol{\mu}_x - \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}} - \ensuremath\boldsymbol{\mu}_x \nonumber \\
&= \breve{\ensuremath\boldsymbol{X}} - \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}},
\end{align}
where $\mathbb{R}^{d \times n} \ni \ensuremath\boldsymbol{R} = [\ensuremath\boldsymbol{r}_1, \dots, \ensuremath\boldsymbol{r}_n]$ is the matrix of residuals.
If we want to minimize the reconstruction error subject to the orthogonality of the projection matrix $\ensuremath\boldsymbol{U}$, we have:
\begin{equation}
\begin{aligned}
& \underset{\ensuremath\boldsymbol{U}}{\text{minimize}}
& & ||\breve{\ensuremath\boldsymbol{X}} - \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{X}}||_F^2, \\
& \text{subject to}
& & \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}.
\end{aligned}
\end{equation}
The objective function can be simplified:
\begin{align*}
&||\breve{\ensuremath\boldsymbol{X}} - \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{X}}||_F^2 \\
&= \textbf{tr}\big((\breve{\ensuremath\boldsymbol{X}} - \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{X}})^\top (\breve{\ensuremath\boldsymbol{X}} - \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{X}})\big) \\
&= \textbf{tr}\big((\breve{\ensuremath\boldsymbol{X}}^\top - \breve{\ensuremath\boldsymbol{X}}^\top\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top) (\breve{\ensuremath\boldsymbol{X}} - \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{X}})\big) \\
&= \textbf{tr}(\breve{\ensuremath\boldsymbol{X}}^\top\breve{\ensuremath\boldsymbol{X}}-2\breve{\ensuremath\boldsymbol{X}}^\top\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{X}} + \breve{\ensuremath\boldsymbol{X}}^\top \ensuremath\boldsymbol{U} \underbrace{\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U}}_{\ensuremath\boldsymbol{I}} \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}) \\
&= \textbf{tr}(\breve{\ensuremath\boldsymbol{X}}^\top\breve{\ensuremath\boldsymbol{X}}-\breve{\ensuremath\boldsymbol{X}}^\top\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{X}}) \\
&= \textbf{tr}(\breve{\ensuremath\boldsymbol{X}}^\top\breve{\ensuremath\boldsymbol{X}})-\textbf{tr}(\breve{\ensuremath\boldsymbol{X}}^\top\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{X}}) \\
&= \textbf{tr}(\breve{\ensuremath\boldsymbol{X}}^\top\breve{\ensuremath\boldsymbol{X}})-\textbf{tr}(\breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top).
\end{align*}
Using Lagrange multiplier \cite{boyd2004convex}, we have:
\begin{align*}
\mathcal{L} = &\,\textbf{tr}(\breve{\ensuremath\boldsymbol{X}}^\top\breve{\ensuremath\boldsymbol{X}})-\textbf{tr}(\breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top) \\
&- \textbf{tr}\big(\ensuremath\boldsymbol{\Lambda}^\top (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} - \ensuremath\boldsymbol{I})\big),
\end{align*}
where $\ensuremath\boldsymbol{\Lambda} \in \mathbb{R}^{p \times p}$ is a diagonal matrix $\textbf{diag}([\lambda_1, \dots, \lambda_p]^\top)$ containing the Lagrange multipliers.
Equating the derivative of Lagrangian to zero gives:
\begin{align}
& \mathbb{R}^{d \times p} \ni \frac{\partial \mathcal{L}}{\partial \ensuremath\boldsymbol{U}} = 2\breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top \ensuremath\boldsymbol{U} - 2\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Lambda} \overset{\text{set}}{=} 0 \nonumber \\
& \implies \breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Lambda}, \nonumber \\
& \overset{(\ref{equation_covariance_matrix})}{\implies} \ensuremath\boldsymbol{S} \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Lambda},
\end{align}
which is again the eigenvalue problem \cite{ghojogh2019eigenvalue} for the covariance matrix $\ensuremath\boldsymbol{S}$. We had the same eigenvalue problem in PCA. Therefore, \textit{PCA subspace is the best linear projection in terms of reconstruction error. In other words, PCA has the least squared error in reconstruction.}
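The optimality claim, that the PCA subspace attains the least squared reconstruction error among linear projections, can be sketched numerically by comparing the PCA basis against an arbitrary orthonormal basis (NumPy, synthetic data; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, p = 5, 200, 2
# anisotropic data: variance differs strongly per axis
Xc = rng.standard_normal((d, n)) * np.array([[5.0], [3.0], [1.0], [0.5], [0.1]])
Xc -= Xc.mean(axis=1, keepdims=True)

vals, vecs = np.linalg.eigh(Xc @ Xc.T)
U_pca = vecs[:, -p:]                    # leading p eigenvectors (PCA basis)

def recon_error(U):
    # squared Frobenius norm of the residual X - U U^T X
    return np.linalg.norm(Xc - U @ U.T @ Xc, 'fro')**2

Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
U_rand = Q[:, :p]                       # arbitrary orthonormal basis

assert recon_error(U_pca) <= recon_error(U_rand) + 1e-9
```

Any orthonormal basis of the same dimension $p$ gives at least the PCA reconstruction error, in line with the eigenvalue problem derived above.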
\subsubsection{Reconstruction in Autoencoder}
\begin{figure}[!t]
\centering
\includegraphics[width=2.3in]{./images/autoencoder_2}
\caption{(a) An example of autoencoder with five hidden layers and linear activation functions, and (b) its reduction to an autoencoder with one hidden layer.}
\label{figure_autoencoder}
\end{figure}
We saw that PCA is the best in reconstruction error for \textit{linear} projection. If we have $m>1$ successive linear projections, the reconstruction is:
\begin{align}
\widehat{\ensuremath\boldsymbol{X}} = \underbrace{\ensuremath\boldsymbol{U}_1 \cdots \ensuremath\boldsymbol{U}_m}_\text{reconstruct} \underbrace{\ensuremath\boldsymbol{U}_m^\top \cdots \ensuremath\boldsymbol{U}_1^\top}_\text{project} \breve{\ensuremath\boldsymbol{X}} + \ensuremath\boldsymbol{\mu}_x,
\end{align}
which can be seen as an undercomplete \textit{autoencoder} \cite{goodfellow2016deep} with $2m$ layers and no activation function (or with identity activation functions $f(\ensuremath\boldsymbol{x}) = \ensuremath\boldsymbol{x}$). The $\ensuremath\boldsymbol{\mu}_x$ is modeled by the intercepts (bias terms) included as inputs to the neurons of the autoencoder layers. Figure \ref{figure_autoencoder} shows this autoencoder.
As we do not have any non-linearity between the projections, we can define:
\begin{align}
&\ddot{\ensuremath\boldsymbol{U}} := \ensuremath\boldsymbol{U}_1 \cdots \ensuremath\boldsymbol{U}_m \implies \ddot{\ensuremath\boldsymbol{U}}^\top = \ensuremath\boldsymbol{U}_m^\top \cdots \ensuremath\boldsymbol{U}_1^\top, \\
&\therefore ~~~~ \widehat{\ensuremath\boldsymbol{X}} = \ddot{\ensuremath\boldsymbol{U}} \ddot{\ensuremath\boldsymbol{U}}^\top \breve{\ensuremath\boldsymbol{X}} + \ensuremath\boldsymbol{\mu}_x. \label{equation_linear_autoencoder_reconstruction}
\end{align}
Eq. (\ref{equation_linear_autoencoder_reconstruction}) shows that the whole autoencoder can be reduced to an undercomplete autoencoder with one hidden layer whose weight matrix is $\ddot{\ensuremath\boldsymbol{U}}$ (see Fig. \ref{figure_autoencoder}).
In other words, in an autoencoder neural network, every layer excluding its activation function behaves as a linear projection.
Comparing the Eqs. (\ref{equation_reconstruction_severalPoint}) and (\ref{equation_linear_autoencoder_reconstruction}) shows that the whole autoencoder is reduced to PCA.
Therefore, \textit{PCA is equivalent to an undercomplete autoencoder with one hidden layer and no activation function}. Hence, the weights of such an autoencoder trained by back-propagation \cite{rumelhart1986learning} are roughly equal to the PCA directions.
Moreover, as PCA is the best linear projection in terms of reconstruction error, \textit{if we have an undercomplete autoencoder with ``one'' hidden layer, it is best not to use any activation function}; this is not noticed by some papers in the literature, unfortunately.
We saw that an autoencoder with $2m$ hidden layers without activation function reduces to linear PCA.
This explains why in autoencoders with more than one layer, we use non-linear activation function $f(.)$ as:
\begin{align}
&\widehat{\ensuremath\boldsymbol{X}} = f^{-1}(\ensuremath\boldsymbol{U}_1 \dots f^{-1}(\ensuremath\boldsymbol{U}_m f(\ensuremath\boldsymbol{U}_m^\top \nonumber \\
&~~~~~~~~~~~~~~~~~~~ \dots f(\ensuremath\boldsymbol{U}_1^\top \ensuremath\boldsymbol{X})\dots))\dots) + \ensuremath\boldsymbol{\mu}_x.
\end{align}
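The collapse of a linear autoencoder into a single projection matrix, as in Eq. (\ref{equation_linear_autoencoder_reconstruction}), can be sketched for $m=2$ layers (NumPy, synthetic weights; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
# two successive linear "layers" (no activation function)
U1 = rng.standard_normal((5, 4))
U2 = rng.standard_normal((4, 3))
Xc = rng.standard_normal((5, 10))       # centered data, columns are samples

# layer-by-layer: project with U2^T U1^T, reconstruct with U1 U2
X_hat_layers = U1 @ (U2 @ (U2.T @ (U1.T @ Xc)))

# collapsed single-matrix form with U-double-dot := U1 U2
Udd = U1 @ U2
X_hat_single = Udd @ Udd.T @ Xc

assert np.allclose(X_hat_layers, X_hat_single)
```

Because no non-linearity separates the layers, the composition is a single matrix $\ddot{\ensuremath\boldsymbol{U}} = \ensuremath\boldsymbol{U}_1 \ensuremath\boldsymbol{U}_2$, so the deep linear autoencoder has no more expressive power than the one-hidden-layer case.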
\subsection{PCA Using Singular Value Decomposition}
PCA can also be performed using the Singular Value Decomposition (SVD) of $\breve{\ensuremath\boldsymbol{X}}$, rather than the eigen-decomposition of $\ensuremath\boldsymbol{S}$. Consider the complete SVD of $\breve{\ensuremath\boldsymbol{X}}$ (see Appendix \ref{section_appendix_SVD}):
\begin{align}
\mathbb{R}^{d \times n} \ni \breve{\ensuremath\boldsymbol{X}} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top,
\end{align}
where the columns of $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times d}$ (called left singular vectors) are the eigenvectors of $\breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top$, the columns of $\ensuremath\boldsymbol{V} \in \mathbb{R}^{n \times n}$ (called right singular vectors) are the eigenvectors of $\breve{\ensuremath\boldsymbol{X}}^\top \breve{\ensuremath\boldsymbol{X}}$, and $\ensuremath\boldsymbol{\Sigma} \in \mathbb{R}^{d \times n}$ is a rectangular diagonal matrix whose diagonal entries (called singular values) are the square roots of the eigenvalues of $\breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top$ and/or $\breve{\ensuremath\boldsymbol{X}}^\top \breve{\ensuremath\boldsymbol{X}}$.
See Proposition \ref{proposition_SVD} in Appendix \ref{section_appendix_SVD} for proof of this claim.
According to Eq. (\ref{equation_covariance_matrix}), $\breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top$ is the covariance matrix $\ensuremath\boldsymbol{S}$. In Eq. (\ref{equation_pca_eigendecomposition_2}), we saw that the eigenvectors of $\ensuremath\boldsymbol{S}$ are the principal directions; on the other hand, we have just seen that the columns of $\ensuremath\boldsymbol{U}$ are the eigenvectors of $\breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top$.
Hence, we can apply SVD on $\breve{\ensuremath\boldsymbol{X}}$ and take the left singular vectors (columns of $\ensuremath\boldsymbol{U}$) as the principal directions.
A convenient property of the SVD of $\breve{\ensuremath\boldsymbol{X}}$ is that the columns of $\ensuremath\boldsymbol{U}$ come automatically sorted from largest to smallest singular value (eigenvalue), so no sorting step is needed, unlike when using the eigenvalue decomposition of the covariance matrix.
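To make this concrete, the following NumPy sketch (a toy setting; all variable names are ours, not from the text) computes the principal directions both ways, via eigen-decomposition of $\breve{\ensuremath\boldsymbol{X}}\breve{\ensuremath\boldsymbol{X}}^\top$ and via SVD of $\breve{\ensuremath\boldsymbol{X}}$, and lets us confirm that they coincide up to sign and that the squared singular values equal the eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 40))             # d=5 features, n=40 samples as columns
Xc = X - X.mean(axis=1, keepdims=True)   # centered data (X-breve)

# Route 1: eigen-decomposition of Xc @ Xc.T (proportional to the covariance S)
eigvals, eigvecs = np.linalg.eigh(Xc @ Xc.T)
order = np.argsort(eigvals)[::-1]        # eigh returns ascending order; sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Route 2: SVD of the centered data; left singular vectors come pre-sorted
U, sigma, Vt = np.linalg.svd(Xc, full_matrices=False)
```

Note that the eigen-decomposition route needs the explicit sorting step, while the SVD route does not.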
\subsection{Determining the Number of Principal Directions}
Usually in PCA, the components with the smallest eigenvalues are cut off to reduce the data. There are different methods for estimating the best number of components to keep (denoted by $p$), such as using Bayesian model selection \cite{minka2001automatic}, the scree plot \cite{cattell1966scree}, and comparing the ratio $\lambda_j / \sum_{k=1}^d \lambda_k$ with a threshold \cite{abdi2010principal}, where $\lambda_j$ denotes the eigenvalue related to the $j$-th principal component.
Here, we explain the two methods of scree plot and the ratio.
The scree plot \cite{cattell1966scree} is a plot of the eigenvalues versus sorted components from the leading (having largest eigenvalue) to trailing (having smallest eigenvalue). A threshold for the vertical (eigenvalue) axis chooses the components with the large enough eigenvalues and removes the rest of the components.
A good threshold is where the eigenvalue drops significantly. In most datasets, such a significant drop of the eigenvalue occurs.
Another way to choose the best components is the ratio \cite{abdi2010principal}:
\begin{align}
\frac{\lambda_j}{\sum_{k=1}^d \lambda_k},
\end{align}
for the $j$-th component. Then, we sort the components from largest to smallest ratio and select the $p$ best components, or the components up to where a significant drop of the ratio happens.
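As a minimal sketch of the ratio criterion (the eigenvalue spectrum and the $95\%$ cumulative threshold below are hypothetical choices of ours):

```python
import numpy as np

# Hypothetical eigenvalue spectrum with a clear drop after the third component
eigenvalues = np.array([5.0, 3.2, 1.9, 0.08, 0.05, 0.02])

ratios = eigenvalues / eigenvalues.sum()   # lambda_j / sum_k lambda_k per component

# One simple rule: keep components until the cumulative ratio exceeds a threshold
cumulative = np.cumsum(ratios)
p = int(np.searchsorted(cumulative, 0.95) + 1)
```

For this spectrum, the first three components already explain more than $95\%$ of the total variance, so $p = 3$.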
\section{Dual Principal Component Analysis}
Assume the case where the dimensionality of data is high and much greater than the sample size, i.e., $d \gg n$.
In this case, consider the incomplete SVD of $\breve{\ensuremath\boldsymbol{X}}$ (see Appendix \ref{section_appendix_SVD}):
\begin{align}\label{equation_SVD_dual}
\breve{\ensuremath\boldsymbol{X}} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top,
\end{align}
where $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$ and $\ensuremath\boldsymbol{V} \in \mathbb{R}^{n \times p}$ contain the $p$ leading left and right singular vectors of $\breve{\ensuremath\boldsymbol{X}}$, respectively, $p$ is the number of ``non-zero'' singular values of $\breve{\ensuremath\boldsymbol{X}}$, and usually $p \ll d$. Here, $\ensuremath\boldsymbol{\Sigma} \in \mathbb{R}^{p \times p}$ is a square diagonal matrix containing the $p$ largest non-zero singular values of $\breve{\ensuremath\boldsymbol{X}}$. As $\ensuremath\boldsymbol{\Sigma}$ is square and diagonal with non-zero diagonal entries, it is full-rank and thus invertible \cite{ghodsi2006dimensionality}.
Therefore, if $\ensuremath\boldsymbol{\Sigma} = \textbf{diag}([\sigma_1, \dots, \sigma_p]^\top)$, then $\ensuremath\boldsymbol{\Sigma}^{-1} = \textbf{diag}([\frac{1}{\sigma_1}, \dots, \frac{1}{\sigma_p}]^\top)$.
\subsection{Projection}
Recall Eq. (\ref{equation_projection_severalPoint}) for projection onto PCA subspace: $\widetilde{\ensuremath\boldsymbol{X}} = \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}$.
On the other hand, according to Eq. (\ref{equation_SVD_dual}), we have:
\begin{align}\label{equation_dual_1}
\breve{\ensuremath\boldsymbol{X}} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top \implies \ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{X}} = \underbrace{\ensuremath\boldsymbol{U}^\top\ensuremath\boldsymbol{U}}_{\ensuremath\boldsymbol{I}}\ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top = \ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top.
\end{align}
According to Eqs. (\ref{equation_projection_severalPoint}) and (\ref{equation_dual_1}), we have:
\begin{align}\label{equation_projected_dual}
\therefore ~~~~~~~ \widetilde{\ensuremath\boldsymbol{X}} = \ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top.
\end{align}
Eq. (\ref{equation_projected_dual}) can be used for projecting data onto the PCA subspace instead of Eq. (\ref{equation_projection_severalPoint}). This is the projection of training data in dual PCA.
\subsection{Reconstruction}
According to Eq. (\ref{equation_SVD_dual}), we have:
\begin{align}
\breve{\ensuremath\boldsymbol{X}} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top &\implies \breve{\ensuremath\boldsymbol{X}}\ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma}\underbrace{\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{V}}_{\ensuremath\boldsymbol{I}} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma} \nonumber \\
&\implies \ensuremath\boldsymbol{U} = \breve{\ensuremath\boldsymbol{X}}\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-1}. \label{equation_dual_U}
\end{align}
Plugging Eq. (\ref{equation_dual_U}) in Eq. (\ref{equation_reconstruction_severalPoint}) gives us:
\begin{align}
\widehat{\ensuremath\boldsymbol{X}} &= \ensuremath\boldsymbol{U} \widetilde{\ensuremath\boldsymbol{X}} + \ensuremath\boldsymbol{\mu}_x \overset{(\ref{equation_dual_U})}{=} \breve{\ensuremath\boldsymbol{X}}\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-1}\widetilde{\ensuremath\boldsymbol{X}} + \ensuremath\boldsymbol{\mu}_x \nonumber \\
&\overset{(\ref{equation_projected_dual})}{=} \breve{\ensuremath\boldsymbol{X}}\ensuremath\boldsymbol{V}\underbrace{\ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{\Sigma}}_{\ensuremath\boldsymbol{I}}\ensuremath\boldsymbol{V}^\top + \ensuremath\boldsymbol{\mu}_x \nonumber \\
&\implies \widehat{\ensuremath\boldsymbol{X}} = \breve{\ensuremath\boldsymbol{X}}\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{V}^\top + \ensuremath\boldsymbol{\mu}_x. \label{equation_reconstruction_dual}
\end{align}
Eq. (\ref{equation_reconstruction_dual}) can be used for reconstructing the data instead of Eq. (\ref{equation_reconstruction_severalPoint}).
This is the reconstruction of training data in dual PCA.
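The following sketch (a toy $d \gg n$ setting; all names are ours) numerically verifies that the dual projection $\ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top$ equals $\ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{X}}$ and that the dual reconstruction $\breve{\ensuremath\boldsymbol{X}}\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{V}^\top + \ensuremath\boldsymbol{\mu}_x$ recovers the training data:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 200, 15                                  # d >> n: the regime where dual PCA helps
X = rng.normal(size=(d, n))
mu = X.mean(axis=1, keepdims=True)
Xc = X - mu                                     # centered data, playing the role of X-breve

U_full, sigma, Vt = np.linalg.svd(Xc, full_matrices=False)
p = int(np.sum(sigma > 1e-10))                  # number of non-zero singular values
U, Sigma, V = U_full[:, :p], np.diag(sigma[:p]), Vt[:p].T

X_tilde = Sigma @ V.T                           # dual projection of the training data
X_hat = Xc @ V @ V.T + mu                       # dual reconstruction of the training data
```

Since all $p$ non-zero components are kept here, the reconstruction is exact; centering removes one degree of freedom, so $p = n - 1$.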
\subsection{Out-of-sample Projection}
Recall Eq. (\ref{equation_outOfSample_projection_PCA}) for projection of an out-of-sample point $\ensuremath\boldsymbol{x}_t$ onto PCA subspace.
According to Eq. (\ref{equation_dual_U}), we have:
\begin{align}
&\ensuremath\boldsymbol{U}^\top \overset{(\ref{equation_dual_U})}{=} \ensuremath\boldsymbol{\Sigma}^{-\top}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{X}}^\top \overset{(a)}{=} \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{X}}^\top \label{equation_U_transpose_dual}\\
&\overset{(\ref{equation_outOfSample_projection_PCA})}{\implies} \widetilde{\ensuremath\boldsymbol{x}}_t = \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{X}}^\top \breve{\ensuremath\boldsymbol{x}}_t, \label{equation_outOfSample_projection_dual}
\end{align}
where $(a)$ is because $\ensuremath\boldsymbol{\Sigma}^{-1}$ is diagonal and thus symmetric.
Eq. (\ref{equation_outOfSample_projection_dual}) can be used for projecting an out-of-sample data point onto the PCA subspace instead of Eq. (\ref{equation_outOfSample_projection_PCA}).
This is the out-of-sample projection in dual PCA.
Considering all the $n_t$ out-of-sample data points, the projection is:
\begin{align}
\widetilde{\ensuremath\boldsymbol{X}}_t = \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{X}}^\top \breve{\ensuremath\boldsymbol{X}}_t.
\end{align}
\subsection{Out-of-sample Reconstruction}
Recall Eq. (\ref{equation_outOfSample_reconstruct_PCA}) for reconstruction of an out-of-sample point $\ensuremath\boldsymbol{x}_t$.
According to Eqs. (\ref{equation_dual_U}) and (\ref{equation_U_transpose_dual}), we have:
\begin{align}
&\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top = \breve{\ensuremath\boldsymbol{X}}\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-1} \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{X}}^\top \nonumber\\
&\overset{(\ref{equation_outOfSample_reconstruct_PCA})}{\implies} \widehat{\ensuremath\boldsymbol{x}}_t = \breve{\ensuremath\boldsymbol{X}}\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{X}}^\top \breve{\ensuremath\boldsymbol{x}}_t + \ensuremath\boldsymbol{\mu}_x. \label{equation_outOfSample_reconstruct_dual}
\end{align}
Eq. (\ref{equation_outOfSample_reconstruct_dual}) can be used for reconstructing an out-of-sample data point instead of Eq. (\ref{equation_outOfSample_reconstruct_PCA}). This is the out-of-sample reconstruction in dual PCA.
Considering all the $n_t$ out-of-sample data points, the reconstruction is:
\begin{align}
\widehat{\ensuremath\boldsymbol{X}}_t = \breve{\ensuremath\boldsymbol{X}}\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{X}}^\top \breve{\ensuremath\boldsymbol{X}}_t + \ensuremath\boldsymbol{\mu}_x.
\end{align}
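The out-of-sample formulae can be checked in the same toy setting (names are ours); both the projection and the reconstruction use only $\ensuremath\boldsymbol{V}$, $\ensuremath\boldsymbol{\Sigma}$, and inner products with the training data:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, n_t = 300, 10, 4                          # d >> n; n_t out-of-sample points
X = rng.normal(size=(d, n))
mu = X.mean(axis=1, keepdims=True)
Xc = X - mu

U_full, sigma, Vt = np.linalg.svd(Xc, full_matrices=False)
p = int(np.sum(sigma > 1e-10))
U, V = U_full[:, :p], Vt[:p].T
Sigma_inv = np.diag(1.0 / sigma[:p])            # Sigma^{-1}

X_t = rng.normal(size=(d, n_t))
Xt_c = X_t - mu                                 # center test points with the training mean

Xt_tilde = Sigma_inv @ V.T @ Xc.T @ Xt_c        # out-of-sample projection
Xt_hat = Xc @ V @ Sigma_inv @ Sigma_inv @ V.T @ Xc.T @ Xt_c + mu  # reconstruction
```

Both expressions agree with the direct forms $\ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{x}}_t$ and $\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{x}}_t + \ensuremath\boldsymbol{\mu}_x$.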
\subsection{Why is Dual PCA Useful?}
The dual PCA can be useful for two reasons:
\begin{enumerate}
\item As can be seen in Eqs. (\ref{equation_projected_dual}), (\ref{equation_reconstruction_dual}), (\ref{equation_outOfSample_projection_dual}), and (\ref{equation_outOfSample_reconstruct_dual}), the formulae for dual PCA only include $\ensuremath\boldsymbol{V}$ and not $\ensuremath\boldsymbol{U}$. The columns of $\ensuremath\boldsymbol{V}$ are the eigenvectors of $\breve{\ensuremath\boldsymbol{X}}^\top \breve{\ensuremath\boldsymbol{X}} \in \mathbb{R}^{n \times n}$ and the columns of $\ensuremath\boldsymbol{U}$ are the eigenvectors of $\breve{\ensuremath\boldsymbol{X}} \breve{\ensuremath\boldsymbol{X}}^\top \in \mathbb{R}^{d \times d}$. When the dimensionality of the data is much greater than the sample size, i.e., $n \ll d$, computing the eigenvectors of $\breve{\ensuremath\boldsymbol{X}}^\top \breve{\ensuremath\boldsymbol{X}}$ is easier and faster than computing those of $\breve{\ensuremath\boldsymbol{X}} \breve{\ensuremath\boldsymbol{X}}^\top$, and it also requires less storage. Therefore, dual PCA is more efficient than direct PCA in this case in terms of both speed and storage. Note that the results of PCA and dual PCA are exactly the same.
\item Some inner-product forms, such as $\breve{\ensuremath\boldsymbol{X}}^\top \breve{\ensuremath\boldsymbol{x}}_t$, appear in the formulae of dual PCA. This provides the opportunity to kernelize PCA into kernel PCA using the so-called kernel trick. As will be seen in the next section, we use dual PCA in the formulation of kernel PCA.
\end{enumerate}
\section{Kernel Principal Component Analysis}
\begin{figure}[!t]
\centering
\includegraphics[width=3.2in]{./images/nonlinear_manifold}
\caption{(a) A 2D nonlinear manifold on which the data exist in the 3D original space. As the manifold is nonlinear, the geodesic distances of points on the manifold differ from their Euclidean distances. (b) The correctly unfolded manifold, where the geodesic distances of points on the manifold have been preserved. (c) Applying linear PCA, which takes Euclidean distances into account, to the nonlinear data: the found subspace has ruined the manifold, so the far-away red and green points have fallen next to each other. This example is credited to Prof. Ali Ghodsi.}
\label{figure_nonlinear_manifold}
\end{figure}
PCA is a linear method because its projection is linear. If the data points lie on a nonlinear sub-manifold, linear subspace learning might not be completely effective. For example, see Fig. \ref{figure_nonlinear_manifold}.
To handle this problem of PCA, we have two options: we can either change PCA into a nonlinear method, or leave PCA linear but change the data, hoping they fall on a linear, or close to linear, manifold. Here, we take the latter approach and change the data. We increase the dimensionality of the data by mapping them to a feature space of higher dimensionality, hoping that they fall on a linear manifold in that space. This is referred to as the ``blessing of dimensionality'' in the literature \cite{donoho2000high}, which is pursued using kernels \cite{hofmann2008kernel}. The PCA method that uses the kernel of the data is named ``kernel PCA'' \cite{scholkopf1997kernel}.
\subsection{Kernels and Hilbert Space}
Suppose that $\ensuremath\boldsymbol{\phi}: \mathcal{X} \rightarrow \mathcal{H}$ is a function that maps a data point $\ensuremath\boldsymbol{x}$ to the Hilbert space $\mathcal{H}$ (feature space), i.e., $\ensuremath\boldsymbol{x} \mapsto \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x})$. The function $\ensuremath\boldsymbol{\phi}$ is called the ``pulling function''. Let $t$ denote the dimensionality of the feature space, i.e., $\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}) \in \mathbb{R}^t$ while $\ensuremath\boldsymbol{x} \in \mathbb{R}^d$. Note that we usually have $t \gg d$.
If $\mathcal{X}$ denotes the set of points, i.e., $\ensuremath\boldsymbol{x} \in \mathcal{X}$, the kernel of two vectors $\ensuremath\boldsymbol{x}_1$ and $\ensuremath\boldsymbol{x}_2$ is $k: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ and is defined as \cite{hofmann2008kernel,herbrich2001learning}:
\begin{align}
k(\ensuremath\boldsymbol{x}_1, \ensuremath\boldsymbol{x}_2) := \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_1)^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_2),
\end{align}
which is a measure of ``similarity'' between the two vectors because the inner product captures similarity.
We can compute the kernel of two matrices $\ensuremath\boldsymbol{X}_1 \in \mathbb{R}^{d \times n_1}$ and $\ensuremath\boldsymbol{X}_2 \in \mathbb{R}^{d \times n_2}$ and have a ``kernel matrix'' (also called ``Gram matrix''):
\begin{align}
\mathbb{R}^{n_1 \times n_2} \ni \ensuremath\boldsymbol{K}(\ensuremath\boldsymbol{X}_1, \ensuremath\boldsymbol{X}_2) := \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}_1)^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}_2),
\end{align}
where $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}_1) := [\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_1), \dots, \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_{n_1})] \in \mathbb{R}^{t \times n_1}$ is the matrix of $\ensuremath\boldsymbol{X}_1$ mapped to the feature space. The $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}_2) \in \mathbb{R}^{t \times n_2}$ is defined similarly.
We can compute the kernel matrix of dataset $\ensuremath\boldsymbol{X} \in \mathbb{R}^{d \times n}$ over itself:
\begin{align}\label{equation_kernel_matrix_of_X}
\mathbb{R}^{n \times n} \ni \ensuremath\boldsymbol{K}_x := \ensuremath\boldsymbol{K}(\ensuremath\boldsymbol{X}, \ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}),
\end{align}
where $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) := [\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_1), \dots, \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_n)] \in \mathbb{R}^{t \times n}$ is the pulled (mapped) data.
Note that in kernel methods, the pulled data $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})$ are usually not available and merely the kernel matrix $\ensuremath\boldsymbol{K}(\ensuremath\boldsymbol{X}, \ensuremath\boldsymbol{X})$, which is the inner product of the pulled data with itself, is available.
There exist different types of kernels. Some of the most well-known kernels are:
\begin{align}
&\text{Linear:} ~~ k(\ensuremath\boldsymbol{x}_1, \ensuremath\boldsymbol{x}_2) = \ensuremath\boldsymbol{x}_1^\top \ensuremath\boldsymbol{x}_2 + c_1, \\
&\text{Polynomial:} ~~ k(\ensuremath\boldsymbol{x}_1, \ensuremath\boldsymbol{x}_2) = (c_1\ensuremath\boldsymbol{x}_1^\top \ensuremath\boldsymbol{x}_2 + c_2)^{c_3}, \\
&\text{Gaussian:} ~~ k(\ensuremath\boldsymbol{x}_1, \ensuremath\boldsymbol{x}_2) = \exp\big(\!-\frac{||\ensuremath\boldsymbol{x}_1 - \ensuremath\boldsymbol{x}_2||_2^2}{2\sigma^2}\big), \\
&\text{Sigmoid:} ~~ k(\ensuremath\boldsymbol{x}_1, \ensuremath\boldsymbol{x}_2) = \tanh(c_1\ensuremath\boldsymbol{x}_1^\top\ensuremath\boldsymbol{x}_2 + c_2),
\end{align}
where $c_1$, $c_2$, $c_3$, and $\sigma$ are scalar constants. The Gaussian and Sigmoid kernels are also called Radial Basis Function (RBF) and hyperbolic tangent, respectively. Note that the Gaussian kernel can also be written as $\exp\big(\!-\gamma||\ensuremath\boldsymbol{x}_1 - \ensuremath\boldsymbol{x}_2||_2^2\big)$ where $\gamma > 0$.
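As an illustration, a possible NumPy implementation of the Gaussian kernel matrix (the vectorized distance computation and all names are our choices, not from the text):

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gram matrix K(X1, X2) of the Gaussian (RBF) kernel; data points are columns."""
    # Squared distances via the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq1 = np.sum(X1**2, axis=0)[:, None]        # shape (n1, 1)
    sq2 = np.sum(X2**2, axis=0)[None, :]        # shape (1, n2)
    d2 = sq1 + sq2 - 2.0 * X1.T @ X2
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(3)
X = rng.normal(size=(4, 6))                     # d=4, n=6 points as columns
K = gaussian_kernel(X, X)                       # kernel matrix of the dataset over itself
```

The resulting Gram matrix is symmetric and positive semi-definite, with ones on its diagonal since $k(\ensuremath\boldsymbol{x}, \ensuremath\boldsymbol{x}) = 1$ for the Gaussian kernel.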
It is noteworthy that in the RBF kernel, the dimensionality of the feature space is infinite. The reason lies in the Maclaurin series expansion (Taylor series expansion at zero) of this kernel:
\begin{align*}
\exp(-\gamma r) = 1 - \gamma r + \frac{\gamma^2}{2!} r^2 - \frac{\gamma^3}{3!} r^3 + \dots,
\end{align*}
where $r := ||\ensuremath\boldsymbol{x}_1 - \ensuremath\boldsymbol{x}_2||_2^2$; the expansion contains infinitely many powers of $r$, implying an infinite-dimensional feature space.
It is also worth mentioning that if we want the pulled data $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})$ to be centered, i.e.:
\begin{align}
\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X}) := \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H},
\end{align}
we should double center the kernel matrix (see Appendix \ref{section_appendix_centering}) because if we use centered pulled data in Eq. (\ref{equation_kernel_matrix_of_X}), we have:
\begin{align*}
&\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})^\top \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X}) = \big(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\big)^\top \big(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\big) \\
&\overset{(\ref{equation_centeringMatrix_is_symmetric})}{=} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H} \overset{(\ref{equation_kernel_matrix_of_X})}{=} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H},
\end{align*}
which is the double-centered kernel matrix.
Thus:
\begin{align}\label{equation_doubleCentered_kernel}
\breve{\ensuremath\boldsymbol{K}}_x := \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H} = \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})^\top \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X}),
\end{align}
where $\breve{\ensuremath\boldsymbol{K}}_x$ denotes the double-centered kernel matrix (see Appendix \ref{section_appendix_centeringKernel}).
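A minimal sketch of double-centering, assuming the centering matrix takes the standard form $\ensuremath\boldsymbol{H} = \ensuremath\boldsymbol{I} - \frac{1}{n}\ensuremath\boldsymbol{1}\ensuremath\boldsymbol{1}^\top$ and using a polynomial kernel as a stand-in:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
X = rng.normal(size=(3, n))                     # d=3, n=8 points as columns

K = (X.T @ X + 1.0) ** 2                        # a polynomial kernel as a stand-in
H = np.eye(n) - np.ones((n, n)) / n             # centering matrix H
K_centered = H @ K @ H                          # double-centered kernel matrix
```

Double-centering makes every row and every column of the kernel matrix sum to zero, which corresponds to centering the (unavailable) pulled data in the feature space.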
\subsection{Projection}
We apply incomplete SVD on the centered pulled (mapped) data $\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})$ (see Appendix \ref{section_appendix_SVD}):
\begin{align}\label{equation_kernelPCA_SVD_Phi}
\mathbb{R}^{t \times n} \ni \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top,
\end{align}
where $\ensuremath\boldsymbol{U} \in \mathbb{R}^{t \times p}$ and $\ensuremath\boldsymbol{V} \in \mathbb{R}^{n \times p}$ contain the $p$ leading left and right singular vectors of $\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})$, respectively, where $p$ is the number of ``non-zero'' singular values of $\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})$ and usually $p \ll t$. Here, the $\ensuremath\boldsymbol{\Sigma} \in \mathbb{R}^{p \times p}$ is a square matrix having the $p$ largest non-zero singular values of $\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})$.
However, as mentioned before, the pulled data are not necessarily available, so Eq. (\ref{equation_kernelPCA_SVD_Phi}) cannot be computed directly.
The kernel, however, is available. Therefore, we apply eigen-decomposition \cite{ghojogh2019eigenvalue} to the double-centered kernel:
\begin{align}\label{equation_kernelPCA_eigen_centered_kernel}
\breve{\ensuremath\boldsymbol{K}}_x \ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda},
\end{align}
where the columns of $\ensuremath\boldsymbol{V}$ and the diagonal of $\ensuremath\boldsymbol{\Lambda}$ are the eigenvectors and eigenvalues of $\breve{\ensuremath\boldsymbol{K}}_x$, respectively.
The columns of $\ensuremath\boldsymbol{V}$ in Eq. (\ref{equation_kernelPCA_SVD_Phi}) are the right singular vectors of $\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})$ which are equivalent to the eigenvectors of $\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})^\top \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X}) = \breve{\ensuremath\boldsymbol{K}}_x$,
according to Proposition \ref{proposition_SVD} in Appendix \ref{section_appendix_SVD}.
Also, according to that proposition, the diagonal of $\ensuremath\boldsymbol{\Sigma}$ in Eq. (\ref{equation_kernelPCA_SVD_Phi}) is equivalent to the square root of eigenvalues of $\breve{\ensuremath\boldsymbol{K}}_x$.
Therefore, in practice, where the pulling function is not necessarily available, we use Eq. (\ref{equation_kernelPCA_eigen_centered_kernel}) in order to find $\ensuremath\boldsymbol{V}$ and $\ensuremath\boldsymbol{\Sigma}$ in Eq. (\ref{equation_kernelPCA_SVD_Phi}). Eq. (\ref{equation_kernelPCA_eigen_centered_kernel}) can be restated as:
\begin{align}\label{equation_kernelPCA_eigen_centered_kernel_2}
\breve{\ensuremath\boldsymbol{K}}_x \ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^2,
\end{align}
to be compatible to Eq. (\ref{equation_kernelPCA_SVD_Phi}).
It is noteworthy that, because Eq. (\ref{equation_kernelPCA_eigen_centered_kernel_2}) is used instead of Eq. (\ref{equation_kernelPCA_SVD_Phi}), \textit{the projection directions $\ensuremath\boldsymbol{U}$ are not available in kernel PCA to be observed or plotted.}
Similar to what we did for Eq. (\ref{equation_projected_dual}):
\begin{align}
&\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top \nonumber \\
&\implies \ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X}) = \underbrace{\ensuremath\boldsymbol{U}^\top\ensuremath\boldsymbol{U}}_{\ensuremath\boldsymbol{I}}\ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top = \ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top \nonumber \\
&\therefore ~~~~~~~ \ensuremath\boldsymbol{\Phi}(\widetilde{\ensuremath\boldsymbol{X}}) = \ensuremath\boldsymbol{U}^\top\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top, \label{equation_projected_kernel_PCA}
\end{align}
where $\ensuremath\boldsymbol{\Sigma}$ and $\ensuremath\boldsymbol{V}$ are obtained from Eq. (\ref{equation_kernelPCA_eigen_centered_kernel_2}).
Eq. (\ref{equation_projected_kernel_PCA}) is the projection of the training data in kernel PCA.
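The training projection can be sketched purely in terms of the double-centered kernel, with no access to the pulled data (a toy polynomial kernel stands in for the unavailable feature map; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20
X = rng.normal(size=(2, n))

K = (X.T @ X + 1.0) ** 2                        # a polynomial kernel as a stand-in
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H                                  # double-centered kernel

eigvals, V = np.linalg.eigh(Kc)
order = np.argsort(eigvals)[::-1]               # eigh returns ascending order
eigvals, V = eigvals[order], V[:, order]

p = int(np.sum(eigvals > 1e-10))                # keep the non-zero eigenvalues
V = V[:, :p]
Sigma = np.diag(np.sqrt(eigvals[:p]))           # singular values = sqrt(eigenvalues)

X_tilde = Sigma @ V.T                           # kernel-PCA projection of training data
```

A useful sanity check is that the Gram matrix of the embedding, $\widetilde{\ensuremath\boldsymbol{X}}^\top\widetilde{\ensuremath\boldsymbol{X}} = \ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^2\ensuremath\boldsymbol{V}^\top$, reproduces the double-centered kernel.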
\subsection{Reconstruction}
Similar to what we did for Eq. (\ref{equation_reconstruction_dual}):
\begin{align}
\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma}\ensuremath\boldsymbol{V}^\top &\implies \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma}\underbrace{\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{V}}_{\ensuremath\boldsymbol{I}} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Sigma} \nonumber \\
&\implies \ensuremath\boldsymbol{U} = \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-1}. \label{equation_kernel_U}
\end{align}
Therefore, the reconstruction is:
\begin{alignat}{2}
&\ensuremath\boldsymbol{\Phi}(\widehat{\ensuremath\boldsymbol{X}}) &&= \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Phi}(\widetilde{\ensuremath\boldsymbol{X}}) + \ensuremath\boldsymbol{\mu}_x \overset{(\ref{equation_kernel_U})}{=} \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{\Phi}(\widetilde{\ensuremath\boldsymbol{X}}) + \ensuremath\boldsymbol{\mu}_x \nonumber \\
& &&\overset{(\ref{equation_projected_kernel_PCA})}{=} \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{V}\underbrace{\ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{\Sigma}}_{\ensuremath\boldsymbol{I}}\ensuremath\boldsymbol{V}^\top + \ensuremath\boldsymbol{\mu}_x \nonumber \\
&\implies &&\ensuremath\boldsymbol{\Phi}(\widehat{\ensuremath\boldsymbol{X}}) = \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{V}^\top + \ensuremath\boldsymbol{\mu}_x. \label{equation_reconstruction_kernel_PCA}
\end{alignat}
However, $\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})$ is not necessarily available; therefore, we cannot reconstruct the training data in kernel PCA.
\subsection{Out-of-sample Projection}
Similar to what we did for Eq. (\ref{equation_outOfSample_projection_dual}):
\begin{align}
&\ensuremath\boldsymbol{U}^\top \overset{(\ref{equation_kernel_U})}{=} \ensuremath\boldsymbol{\Sigma}^{-\top}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})^\top \overset{(a)}{=} \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})^\top \nonumber \\
&\implies \ensuremath\boldsymbol{\phi}(\widetilde{\ensuremath\boldsymbol{x}}_t) = \ensuremath\boldsymbol{U}^\top \breve{\ensuremath\boldsymbol{\phi}}(\ensuremath\boldsymbol{x}_t) = \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})^\top \breve{\ensuremath\boldsymbol{\phi}}(\ensuremath\boldsymbol{x}_t), \nonumber \\
&\overset{(\ref{equation_appendix_centered_kernelVector_outOfSample})}{\implies} \ensuremath\boldsymbol{\phi}(\widetilde{\ensuremath\boldsymbol{x}}_t) = \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{k}}_t, \label{equation_outOfSample_projection_kernel_PCA}
\end{align}
where $(a)$ is because $\ensuremath\boldsymbol{\Sigma}^{-1}$ is diagonal and thus symmetric and the $\breve{\ensuremath\boldsymbol{k}}_t \in \mathbb{R}^n$ is calculated by Eq. (\ref{equation_appendix_doubleCentered_outOfSample_kernel_oneSample}) in Appendix \ref{section_appendix_centeringKernel}.
Eq. (\ref{equation_outOfSample_projection_kernel_PCA}) is the projection of out-of-sample data in kernel PCA.
Considering all the $n_t$ out-of-sample data points, $\ensuremath\boldsymbol{X}_t$, the projection is:
\begin{align}
\ensuremath\boldsymbol{\phi}(\widetilde{\ensuremath\boldsymbol{X}}_t) = \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{K}}_t,
\end{align}
where $\breve{\ensuremath\boldsymbol{K}}_t$ is calculated by Eq. (\ref{equation_appendix_doubleCentered_outOfSample_kernel}).
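The out-of-sample projection can be sketched as follows, assuming the centered out-of-sample kernel takes the form $\breve{\ensuremath\boldsymbol{k}}_t = \ensuremath\boldsymbol{H}(\ensuremath\boldsymbol{k}_t - \frac{1}{n}\ensuremath\boldsymbol{K}_x\ensuremath\boldsymbol{1})$ (our reading of the appendix formula); the explicit feature map of the degree-2 polynomial kernel is included only so that the result can be checked against a direct computation:

```python
import numpy as np

def phi(X):
    """Explicit feature map of the degree-2 polynomial kernel (x1.x2 + 1)^2 for d=2."""
    x, y = X[0], X[1]
    s = np.sqrt(2.0)
    return np.vstack([np.ones_like(x), s * x, s * y, x**2, y**2, s * x * y])

rng = np.random.default_rng(6)
n, n_t = 15, 3
X = rng.normal(size=(2, n))
X_t = rng.normal(size=(2, n_t))

K = (X.T @ X + 1.0) ** 2                        # training kernel matrix
K_t = (X.T @ X_t + 1.0) ** 2                    # train-vs-test kernel, shape (n, n_t)
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H

eigvals, V = np.linalg.eigh(Kc)
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]
p = int(np.sum(eigvals > 1e-10))
V = V[:, :p]
Sigma_inv = np.diag(1.0 / np.sqrt(eigvals[:p]))

# Centered out-of-sample kernel: k-breve_t = H (k_t - (1/n) K 1)
Kt_c = H @ (K_t - K.mean(axis=1, keepdims=True))

Xt_tilde = Sigma_inv @ V.T @ Kt_c               # out-of-sample projection
```

The projection matches, up to per-component sign, what one would get by explicitly mapping the points with $\ensuremath\boldsymbol{\phi}$ and projecting onto the centered feature directions.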
\subsection{Out-of-sample Reconstruction}
Similar to what we did for Eq. (\ref{equation_outOfSample_reconstruct_dual}):
\begin{align}
&\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \overset{(\ref{equation_kernel_U})}{=} \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-1} \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})^\top \nonumber\\
&\implies \ensuremath\boldsymbol{\phi}(\widehat{\ensuremath\boldsymbol{x}}_t) = \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})^\top \breve{\ensuremath\boldsymbol{\phi}}(\ensuremath\boldsymbol{x}_t) + \ensuremath\boldsymbol{\mu}_x \nonumber \\
&\overset{(\ref{equation_appendix_centered_kernelVector_outOfSample})}{\implies} \ensuremath\boldsymbol{\phi}(\widehat{\ensuremath\boldsymbol{x}}_t) = \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top \breve{\ensuremath\boldsymbol{k}}_t + \ensuremath\boldsymbol{\mu}_x, \label{equation_outOfSample_reconstruct_kernel_PCA}
\end{align}
where the $\breve{\ensuremath\boldsymbol{k}}_t \in \mathbb{R}^n$ is calculated by Eq. (\ref{equation_appendix_doubleCentered_outOfSample_kernel_oneSample}) in Appendix \ref{section_appendix_centeringKernel}.
Considering all the $n_t$ out-of-sample data points, $\ensuremath\boldsymbol{X}_t$, the reconstruction is:
\begin{align}
\ensuremath\boldsymbol{\phi}(\widehat{\ensuremath\boldsymbol{X}}_t) = \breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top \breve{\ensuremath\boldsymbol{K}}_t + \ensuremath\boldsymbol{\mu}_x,
\end{align}
where $\breve{\ensuremath\boldsymbol{K}}_t$ is calculated by Eq. (\ref{equation_appendix_doubleCentered_outOfSample_kernel}).
In Eq. (\ref{equation_outOfSample_reconstruct_kernel_PCA}), the term $\breve{\ensuremath\boldsymbol{\Phi}}(\ensuremath\boldsymbol{X})$, which appears at the left of the expression, is not necessarily available; therefore, we cannot reconstruct an out-of-sample point in kernel PCA.
According to Eqs. (\ref{equation_reconstruction_kernel_PCA}) and (\ref{equation_outOfSample_reconstruct_kernel_PCA}), we conclude that kernel PCA is not able to reconstruct any data, whether training or out-of-sample.
\subsection{Why is Kernel PCA Useful?}
Finally, it is noteworthy that, as choosing the best kernel can be difficult, kernel PCA is not ``always'' effective in practice \cite{ghodsi2006dimensionality}.
However, it provides useful theoretical insight for explaining PCA, Multi-Dimensional Scaling (MDS) \cite{cox2008multidimensional}, Isomap \cite{tenenbaum2000global}, Locally Linear Embedding (LLE) \cite{roweis2000nonlinear}, and Laplacian Eigenmap (LE) \cite{belkin2003laplacian} as special cases of kernel PCA with their own kernels (see \cite{ham2004kernel} and chapter 2 in \cite{strange2014open}).
\section{Supervised Principal Component Analysis Using Scoring}
The older version of SPCA used scoring \cite{bair2006prediction}.
In that version, PCA is not a special case of SPCA. The version of SPCA that will be introduced in the next section is more solid in terms of theory, as PCA is a special case of it.
In SPCA using scoring, we compute the similarity of every feature of the data with the class labels, sort the features, and then remove the features having the least similarity with the labels. The larger the similarity of a feature with the labels, the better that feature is for discrimination in the embedded subspace.
Consider the training dataset $\mathbb{R}^{d \times n} \ni \ensuremath\boldsymbol{X} = [\ensuremath\boldsymbol{x}_1, \dots, \ensuremath\boldsymbol{x}_n] = [\ensuremath\boldsymbol{x}^1, \dots, \ensuremath\boldsymbol{x}^d]^\top$ where $\ensuremath\boldsymbol{x}_i \in \mathbb{R}^d$ and $\ensuremath\boldsymbol{x}^j \in \mathbb{R}^n$ are the $i$-th data point and the $j$-th feature, respectively.
This type of SPCA is only for the classification task, so we can consider the dimensionality of the labels to be one, $\ell=1$. Thus, we have $\ensuremath\boldsymbol{Y} \in \mathbb{R}^{1 \times n}$. We define $\mathbb{R}^n \ni \ensuremath\boldsymbol{y} := \ensuremath\boldsymbol{Y}^\top$.
The score of the $j$-th feature, $\ensuremath\boldsymbol{x}^j$, is:
\begin{align}
\mathbb{R} \ni s_j := \frac{(\ensuremath\boldsymbol{x}^j)^\top \ensuremath\boldsymbol{y}}{||\ensuremath\boldsymbol{x}^j||_2} = \frac{(\ensuremath\boldsymbol{x}^j)^\top \ensuremath\boldsymbol{y}}{\sqrt{(\ensuremath\boldsymbol{x}^j)^\top \ensuremath\boldsymbol{x}^j}}.
\end{align}
After computing the scores of all the features, we sort the features from largest to smallest score. Let $\ensuremath\boldsymbol{X}' \in \mathbb{R}^{d \times n}$ denote the training dataset whose features are sorted.
We take the $q \leq d$ features with largest scores and remove the other features. Let:
\begin{align}
\mathbb{R}^{q \times n} \ni \ensuremath\boldsymbol{X}'' := \ensuremath\boldsymbol{X}'(1:q, :),
\end{align}
be the training dataset with $q$ best features.
Then, we apply PCA on $\ensuremath\boldsymbol{X}'' \in \mathbb{R}^{q \times n}$ rather than $\ensuremath\boldsymbol{X} \in \mathbb{R}^{d \times n}$. Applying PCA and kernel PCA on $\ensuremath\boldsymbol{X}''$ results in SPCA and kernel SPCA, respectively.
This type of SPCA was mostly used in bioinformatics for genome data analysis \cite{ma2011principal}.
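The scoring procedure above can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming (the function \texttt{spca\_scoring} is ours, not from the cited papers), sorting features by raw score as described:

```python
import numpy as np

def spca_scoring(X, y, q):
    """Minimal sketch of SPCA using scoring.

    X: d x n data (rows are the features x^j), y: length-n label vector,
    q: number of features to keep before applying PCA.
    """
    # score of feature j: s_j = (x^j)^T y / ||x^j||_2
    scores = (X @ y) / np.linalg.norm(X, axis=1)
    order = np.argsort(-scores)          # sort features, largest score first
    X2 = X[order[:q], :]                 # X'': keep the q best features
    # ordinary PCA on X'': eigenvectors of the centered covariance matrix
    Xc = X2 - X2.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(Xc @ Xc.T)
    U = eigvecs[:, ::-1]                 # leading eigenvectors first
    return U.T @ Xc                      # q x n embedding
```

Since PCA is applied to centered data, the resulting embedding has zero mean along each direction.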
\section{Supervised Principal Component Analysis Using HSIC}
\subsection{Hilbert-Schmidt Independence Criterion}
Suppose we want to measure the dependence of two random variables. This is harder than measuring their correlation, because correlation captures only ``linear'' dependence.
According to \cite{hein2004kernels}, two random variables are independent if and only if any bounded continuous functions of them are uncorrelated. Therefore, if we map the two random variables $\ensuremath\boldsymbol{x}$ and $\ensuremath\boldsymbol{y}$ to two different (``separable'') Reproducing Kernel Hilbert Spaces (RKHSs) and have $\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x})$ and $\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{y})$, we can measure the correlation of $\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x})$ and $\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{y})$ in Hilbert space to have an estimation of dependence of $\ensuremath\boldsymbol{x}$ and $\ensuremath\boldsymbol{y}$ in the original space.
The correlation of $\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x})$ and $\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{y})$ can be computed by the Hilbert-Schmidt norm of their cross-covariance \cite{gretton2005measuring}. Note that the squared Hilbert-Schmidt norm of a matrix $\ensuremath\boldsymbol{A}$ is \cite{bell2016trace}:
\begin{align*}
||\ensuremath\boldsymbol{A}||_{HS}^2 := \textbf{tr}(\ensuremath\boldsymbol{A}^\top \ensuremath\boldsymbol{A}),
\end{align*}
and the cross-covariance matrix of two vectors $\ensuremath\boldsymbol{x}$ and $\ensuremath\boldsymbol{y}$ is \cite{gubner2006probability,gretton2005measuring}:
\begin{align*}
\mathbb{C}\text{ov}(\ensuremath\boldsymbol{x}, \ensuremath\boldsymbol{y}) := \mathbb{E}\Big[&\big(\ensuremath\boldsymbol{x} - \mathbb{E}(\ensuremath\boldsymbol{x})\big) \big(\ensuremath\boldsymbol{y} - \mathbb{E}(\ensuremath\boldsymbol{y})\big)^\top \Big].
\end{align*}
Using the explained intuition, an empirical estimation of the Hilbert-Schmidt Independence Criterion (HSIC) is introduced \cite{gretton2005measuring}:
\begin{align}\label{equation_HSIC}
\text{HSIC} := \frac{1}{(n-1)^2}\, \textbf{tr}(\ddot{\ensuremath\boldsymbol{K}}_x\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}),
\end{align}
where $\ddot{\ensuremath\boldsymbol{K}}_x$ and $\ensuremath\boldsymbol{K}_y$ are the kernels over $\ensuremath\boldsymbol{x}$ and $\ensuremath\boldsymbol{y}$, respectively. In other words, $\ddot{\ensuremath\boldsymbol{K}}_x = \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x})$ and $\ensuremath\boldsymbol{K}_y = \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{y})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{y})$.
We are using $\ddot{\ensuremath\boldsymbol{K}}_x$ rather than $\ensuremath\boldsymbol{K}_x$ because $\ensuremath\boldsymbol{K}_x$ is going to be used in kernel SPCA in the next sections.
The term $1/(n-1)^2$ is used for normalization.
The $\ensuremath\boldsymbol{H}$ is the centering matrix (see Appendix \ref{section_appendix_centering}):
\begin{align}
\mathbb{R}^{n \times n} \ni \ensuremath\boldsymbol{H} = \ensuremath\boldsymbol{I} - (1/n) \ensuremath\boldsymbol{1}\ensuremath\boldsymbol{1}^\top.
\end{align}
The $\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}$ double centers the $\ensuremath\boldsymbol{K}_y$ in HSIC.
The HSIC (Eq. (\ref{equation_HSIC})) measures the dependence of the two random vectors $\ensuremath\boldsymbol{x}$ and $\ensuremath\boldsymbol{y}$. Note that $\text{HSIC}=0$ and $\text{HSIC}>0$ mean that $\ensuremath\boldsymbol{x}$ and $\ensuremath\boldsymbol{y}$ are independent and dependent, respectively. The greater the HSIC, the stronger the dependence.
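As a quick sanity check on Eq. (\ref{equation_HSIC}), the empirical HSIC can be computed in a few lines; this is an illustrative sketch (the function name \texttt{hsic} is ours):

```python
import numpy as np

def hsic(Kx, Ky):
    """Empirical HSIC: tr(Kx H Ky H) / (n - 1)^2."""
    n = Kx.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(Kx @ H @ Ky @ H) / (n - 1) ** 2
```

For a constant $\ensuremath\boldsymbol{y}$, the centered kernel $\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}$ vanishes, so HSIC is zero, as expected for a variable that carries no information.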
\subsection{Supervised PCA}
Supervised PCA (SPCA) \cite{barshan2011supervised} uses the HSIC. We have the data $\ensuremath\boldsymbol{X} = [\ensuremath\boldsymbol{x}_1, \dots, \ensuremath\boldsymbol{x}_n] \in \mathbb{R}^{d \times n}$ and the labels $\ensuremath\boldsymbol{Y} = [\ensuremath\boldsymbol{y}_1, \dots, \ensuremath\boldsymbol{y}_n] \in \mathbb{R}^{\ell \times n}$, where $\ell$ is the dimensionality of the labels and we usually have $\ell=1$. However, in case the labels are encoded (e.g., one-hot-encoded) or SPCA is used for regression (e.g., see \cite{ghojogh2019instance}), we have $\ell > 1$.
SPCA tries to maximize the dependence of the projected data points $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}$ and the labels $\ensuremath\boldsymbol{Y}$. It uses a linear kernel for the projected data points:
\begin{align}\label{equation_SPCA_kernel_of_projection}
\ddot{\ensuremath\boldsymbol{K}}_x = (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X})^\top (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X},
\end{align}
and an arbitrary kernel $\ensuremath\boldsymbol{K}_y$ over $\ensuremath\boldsymbol{Y}$.
For the classification task, one of the best choices for $\ensuremath\boldsymbol{K}_y$ is the delta kernel \cite{barshan2011supervised}, whose $(i,j)$-th element is:
\begin{align}
\ensuremath\boldsymbol{K}_y(i, j) = \delta_{\ensuremath\boldsymbol{y}_i, \ensuremath\boldsymbol{y}_j} :=
\left\{
\begin{array}{ll}
1 & \text{if } \ensuremath\boldsymbol{y}_i = \ensuremath\boldsymbol{y}_j,\\
0 & \text{if } \ensuremath\boldsymbol{y}_i \neq \ensuremath\boldsymbol{y}_j,
\end{array}
\right.
\end{align}
where $\delta_{\ensuremath\boldsymbol{y}_i, \ensuremath\boldsymbol{y}_j}$ is the Kronecker delta, which is one if $\ensuremath\boldsymbol{x}_i$ and $\ensuremath\boldsymbol{x}_j$ belong to the same class and zero otherwise.
Another good choice of kernel for the classification task in SPCA is an arbitrary kernel (e.g., the linear kernel $\ensuremath\boldsymbol{K}_y = \ensuremath\boldsymbol{Y}^\top \ensuremath\boldsymbol{Y}$) over $\ensuremath\boldsymbol{Y}$ where the columns of $\ensuremath\boldsymbol{Y}$ are one-hot encoded. This is a good choice because the distances between classes are then equal; otherwise, some classes would fall closer to each other than others for no reason, breaking the fairness between classes.
The SPCA can also be used for regression (e.g., see \cite{ghojogh2019instance}) and that is one of the advantages of SPCA. In that case, a good choice for $\ensuremath\boldsymbol{K}_y$ is an arbitrary kernel (e.g., linear kernel $\ensuremath\boldsymbol{K}_y = \ensuremath\boldsymbol{Y}^\top \ensuremath\boldsymbol{Y}$) over $\ensuremath\boldsymbol{Y}$ where the columns of the $\ensuremath\boldsymbol{Y}$, i.e., labels, are the observations in regression. Here, the distances of observations have meaning and should not be manipulated.
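A minimal sketch of the delta kernel described above (the helper name \texttt{delta\_kernel} is ours):

```python
import numpy as np

def delta_kernel(labels):
    """Delta kernel: K_y[i, j] = 1 iff samples i and j share a class."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)
```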
The HSIC in SPCA case becomes:
\begin{align}\label{equation_HSIC_SPCA}
\text{HSIC} = \frac{1}{(n-1)^2}\, \textbf{tr}(\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}),
\end{align}
where $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$ is the unknown projection matrix for projection onto the SPCA subspace and should be found. The desired dimensionality of the subspace is $p$ and usually $p \ll d$.
We should maximize the HSIC in order to maximize the dependence of $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}$ and $\ensuremath\boldsymbol{Y}$. Hence:
\begin{equation}
\begin{aligned}
& \underset{\ensuremath\boldsymbol{U}}{\text{maximize}}
& & \textbf{tr}(\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}), \\
& \text{subject to}
& & \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I},
\end{aligned}
\end{equation}
where the constraint ensures that $\ensuremath\boldsymbol{U}$ has orthonormal columns, i.e., the SPCA directions are orthonormal.
Using Lagrangian \cite{boyd2004convex}, we have:
\begin{align*}
\mathcal{L} &= \textbf{tr}(\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}) - \textbf{tr}\big(\ensuremath\boldsymbol{\Lambda}^\top (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} - \ensuremath\boldsymbol{I})\big) \\
&\overset{(a)}{=} \textbf{tr}(\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top) - \textbf{tr}\big(\ensuremath\boldsymbol{\Lambda}^\top (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} - \ensuremath\boldsymbol{I})\big),
\end{align*}
where $(a)$ is because of the cyclic property of trace and $\ensuremath\boldsymbol{\Lambda} \in \mathbb{R}^{p \times p}$ is a diagonal matrix $\textbf{diag}([\lambda_1, \dots, \lambda_p]^\top)$ including the Lagrange multipliers.
Setting the derivative of Lagrangian to zero gives:
\begin{align}
& \mathbb{R}^{d \times p} \ni \frac{\partial \mathcal{L}}{\partial \ensuremath\boldsymbol{U}} = 2 \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{U} - 2\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Lambda} \overset{\text{set}}{=} 0 \nonumber \\
& \implies \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Lambda}, \label{eqaution_SPCA_eigendecomposition}
\end{align}
which is the eigen-decomposition of $\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top$ where the columns of $\ensuremath\boldsymbol{U}$ and the diagonal of $\ensuremath\boldsymbol{\Lambda}$ are the eigenvectors and eigenvalues of $\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top$, respectively \cite{ghojogh2019eigenvalue}. The eigenvectors and eigenvalues are sorted from the leading (largest eigenvalue) to the trailing (smallest eigenvalue) because we are maximizing in the optimization problem.
In conclusion, when projecting onto the SPCA subspace, i.e., $\textbf{span}\{\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p\}$, the SPCA directions $\{\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p\}$ are the sorted eigenvectors of $\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top$.
In other words, the columns of the projection matrix $\ensuremath\boldsymbol{U}$ in SPCA are the $p$ leading eigenvectors of $\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top$.
Similar to what we had in PCA, the projection, projection of out-of-sample, reconstruction, and reconstruction of out-of-sample in SPCA are:
\begin{align}
&\widetilde{\ensuremath\boldsymbol{X}} = \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}, \label{equation_SPCA_projection} \\
&\widetilde{\ensuremath\boldsymbol{x}}_t = \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_t, \label{equation_SPCA_projection_outOfSample} \\
&\widehat{\ensuremath\boldsymbol{X}} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X} = \ensuremath\boldsymbol{U} \widetilde{\ensuremath\boldsymbol{X}}, \label{equation_SPCA_reconstruction} \\
&\widehat{\ensuremath\boldsymbol{x}}_t = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_t = \ensuremath\boldsymbol{U} \widetilde{\ensuremath\boldsymbol{x}}_t, \label{equation_SPCA_reconstruction_outOfSample}
\end{align}
respectively.
In SPCA, there is no need to center the data, as the centering is already handled by $\ensuremath\boldsymbol{H}$ in HSIC. This becomes clearer in the following section, where we see that PCA is a special case of SPCA.
Note that in the equations of SPCA, although not necessary, we can center the data; in that case, the mean of the embedding in the subspace will be zero.
Considering all the $n_t$ out-of-sample data points, the projection and reconstruction are:
\begin{align}
&\widetilde{\ensuremath\boldsymbol{X}}_t = \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}_t, \\
&\widehat{\ensuremath\boldsymbol{X}}_t = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}_t = \ensuremath\boldsymbol{U} \widetilde{\ensuremath\boldsymbol{X}}_t,
\end{align}
respectively.
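The fitting, projection, and reconstruction steps above can be sketched as follows; this is an illustrative sketch with our own function name (\texttt{spca\_fit}), applied to synthetic data with a delta kernel over the labels:

```python
import numpy as np

def spca_fit(X, Ky, p):
    """SPCA directions: the p leading eigenvectors of X H Ky H X^T."""
    n = X.shape[1]
    H = np.eye(n) - np.ones((n, n)) / n
    M = X @ H @ Ky @ H @ X.T
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :p]         # U: the p leading eigenvectors

# usage sketch on random data with a delta kernel over integer labels
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 12))
y = rng.integers(0, 2, size=12)
Ky = (y[:, None] == y[None, :]).astype(float)
U = spca_fit(X, Ky, p=2)
X_proj = U.T @ X          # projection
X_rec = U @ X_proj        # reconstruction U U^T X
```

The orthonormality of the eigenvectors of the symmetric matrix $\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top$ gives $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$ automatically.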
\subsection{PCA is a Special Case of SPCA!}
Not considering the similarities of the labels means that we do not care about the class labels, i.e., we are unsupervised.
If we do not consider the similarities of the labels, the kernel over the labels becomes the identity matrix, $\ensuremath\boldsymbol{K}_y = \ensuremath\boldsymbol{I}$.
According to Eq. (\ref{eqaution_SPCA_eigendecomposition}), SPCA is the eigen-decomposition of $\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top$. In this case, this matrix becomes:
\begin{align*}
\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top &= \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{I}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top \overset{(\ref{equation_centeringMatrix_is_symmetric})}{=} \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{I}\ensuremath\boldsymbol{H}^\top\ensuremath\boldsymbol{X}^\top \\
&= \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{H}^\top\ensuremath\boldsymbol{X}^\top = (\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H})(\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H})^\top \\
&\overset{(\ref{equation_centered_training_data})}{=} \breve{\ensuremath\boldsymbol{X}} \breve{\ensuremath\boldsymbol{X}}^\top \overset{(\ref{equation_covariance_matrix})}{=} \ensuremath\boldsymbol{S},
\end{align*}
which is the covariance matrix whose eigenvectors are the PCA directions.
Thus, if we do not consider the similarities of labels, i.e., we are unsupervised, SPCA reduces to PCA as expected.
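This reduction is easy to verify numerically; the snippet below is an illustrative check on synthetic data (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 10))
n = X.shape[1]
H = np.eye(n) - np.ones((n, n)) / n
M = X @ H @ np.eye(n) @ H @ X.T           # SPCA matrix with Ky = I
Xc = X - X.mean(axis=1, keepdims=True)    # centered data, i.e., X H
S = Xc @ Xc.T                             # (unnormalized) covariance matrix
assert np.allclose(M, S)                  # SPCA with Ky = I reduces to PCA
```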
\subsection{Dual Supervised PCA}
The SPCA can be formulated in dual form \cite{barshan2011supervised}.
We saw that in SPCA, the columns of $\ensuremath\boldsymbol{U}$ are the eigenvectors of $\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top$. We apply SVD on $\ensuremath\boldsymbol{K}_y$ (see Appendix \ref{section_appendix_SVD}):
\begin{align*}
\mathbb{R}^{n \times n} \ni \ensuremath\boldsymbol{K}_y = \ensuremath\boldsymbol{Q} \ensuremath\boldsymbol{\Omega} \ensuremath\boldsymbol{Q}^\top,
\end{align*}
where $\ensuremath\boldsymbol{Q} \in \mathbb{R}^{n \times n}$ includes the left (equivalently right) singular vectors and $\ensuremath\boldsymbol{\Omega} \in \mathbb{R}^{n \times n}$ contains the singular values of $\ensuremath\boldsymbol{K}_y$. Note that the left and right singular vectors are equal because $\ensuremath\boldsymbol{K}_y$ is symmetric and positive semi-definite, so $\ensuremath\boldsymbol{K}_y \ensuremath\boldsymbol{K}_y^\top$ and $\ensuremath\boldsymbol{K}_y^\top \ensuremath\boldsymbol{K}_y$ are equal. As $\ensuremath\boldsymbol{\Omega}$ is a diagonal matrix with non-negative entries, we can decompose it as $\ensuremath\boldsymbol{\Omega} = \ensuremath\boldsymbol{\Omega}^{1/2} \ensuremath\boldsymbol{\Omega}^{1/2} = \ensuremath\boldsymbol{\Omega}^{1/2} (\ensuremath\boldsymbol{\Omega}^{1/2})^\top$, where the diagonal entries of $\ensuremath\boldsymbol{\Omega}^{1/2} \in \mathbb{R}^{n \times n}$ are the square roots of the diagonal entries of $\ensuremath\boldsymbol{\Omega}$. Therefore, we can decompose $\ensuremath\boldsymbol{K}_y$ into:
\begin{align}
\ensuremath\boldsymbol{K}_y &= \ensuremath\boldsymbol{Q} \ensuremath\boldsymbol{\Omega}^{1/2} (\ensuremath\boldsymbol{\Omega}^{1/2})^\top \ensuremath\boldsymbol{Q}^\top \nonumber \\
&= (\ensuremath\boldsymbol{Q} \ensuremath\boldsymbol{\Omega}^{1/2}) (\ensuremath\boldsymbol{Q} \ensuremath\boldsymbol{\Omega}^{1/2})^\top = \ensuremath\boldsymbol{\Delta} \ensuremath\boldsymbol{\Delta}^\top, \label{equation_SPCA_decompose_kernelOfLabels}
\end{align}
where:
\begin{align}
\mathbb{R}^{n \times n} \ni \ensuremath\boldsymbol{\Delta} := \ensuremath\boldsymbol{Q} \ensuremath\boldsymbol{\Omega}^{1/2}.
\end{align}
Therefore, we have:
\begin{align*}
\therefore ~~~~ \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top &\overset{(\ref{equation_SPCA_decompose_kernelOfLabels})}{=} \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta} \ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top \\
&\overset{(\ref{equation_centeringMatrix_is_symmetric})}{=} (\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta}) (\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta})^\top = \ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{\Psi}^\top,
\end{align*}
where:
\begin{align}\label{equation_Psi_dual_SPCA}
\mathbb{R}^{d \times n} \ni \ensuremath\boldsymbol{\Psi} := \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta}.
\end{align}
We apply incomplete SVD on $\ensuremath\boldsymbol{\Psi}$ (see Appendix \ref{section_appendix_SVD}):
\begin{align}\label{equation_Psi_SVD_dual_SPCA}
\mathbb{R}^{d \times n} \ni \ensuremath\boldsymbol{\Psi} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Sigma} \ensuremath\boldsymbol{V}^\top,
\end{align}
where $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$ and $\ensuremath\boldsymbol{V} \in \mathbb{R}^{n \times p}$ include the $p$ leading left and right singular vectors of $\ensuremath\boldsymbol{\Psi}$, respectively, and $\ensuremath\boldsymbol{\Sigma} \in \mathbb{R}^{p \times p}$ contains the $p$ largest singular values of $\ensuremath\boldsymbol{\Psi}$.
We can compute $\ensuremath\boldsymbol{U}$ as:
\begin{align}
\ensuremath\boldsymbol{\Psi} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Sigma} \ensuremath\boldsymbol{V}^\top &\implies \ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Sigma} \underbrace{\ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{V}}_{\ensuremath\boldsymbol{I}} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Sigma} \nonumber\\
&\implies \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-1} \label{equation_U_dual_SPCA}
\end{align}
The projection of data $\ensuremath\boldsymbol{X}$ in dual SPCA is:
\begin{align}
\widetilde{\ensuremath\boldsymbol{X}} &\overset{(\ref{equation_SPCA_projection})}{=} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X} \overset{(\ref{equation_U_dual_SPCA})}{=} (\ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-1})^\top \ensuremath\boldsymbol{X} = \ensuremath\boldsymbol{\Sigma}^{-\top}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Psi}^\top\ensuremath\boldsymbol{X} \nonumber \\
& \overset{(\ref{equation_Psi_dual_SPCA})}{=} \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top\ensuremath\boldsymbol{X}. \label{equation_projected_dual_SPCA}
\end{align}
Note that $\ensuremath\boldsymbol{\Sigma}$ and $\ensuremath\boldsymbol{H}$ are symmetric.
Similarly, out-of-sample projection in dual SPCA is:
\begin{align}\label{equation_outOfSample_projected_dual_SPCA}
\widetilde{\ensuremath\boldsymbol{x}}_t = \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top\ensuremath\boldsymbol{x}_t.
\end{align}
Considering all the $n_t$ out-of-sample data points, the projection is:
\begin{align}
\widetilde{\ensuremath\boldsymbol{X}}_t = \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top\ensuremath\boldsymbol{X}_t.
\end{align}
Reconstruction of $\ensuremath\boldsymbol{X}$ after projection onto the SPCA subspace is:
\begin{align}
\widehat{\ensuremath\boldsymbol{X}} &\overset{(\ref{equation_SPCA_reconstruction})}{=} \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X} = \ensuremath\boldsymbol{U}\widetilde{\ensuremath\boldsymbol{X}} \nonumber \\
&\overset{(a)}{=} \ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top\ensuremath\boldsymbol{X} \nonumber \\
&= \ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top\ensuremath\boldsymbol{X} \nonumber \\
&\overset{(\ref{equation_Psi_dual_SPCA})}{=} \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top\ensuremath\boldsymbol{X}, \label{equation_reconstruction_dual_SPCA}
\end{align}
where $(a)$ is because of Eqs. (\ref{equation_U_dual_SPCA}) and (\ref{equation_projected_dual_SPCA}).
Similarly, reconstruction of an out-of-sample data point in dual SPCA is:
\begin{align}\label{equation_outOfSample_reconstruction_dual_SPCA}
\widehat{\ensuremath\boldsymbol{x}}_t = \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top\ensuremath\boldsymbol{x}_t.
\end{align}
Considering all the $n_t$ out-of-sample data points, the reconstruction is:
\begin{align}
\widehat{\ensuremath\boldsymbol{X}}_t = \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top\ensuremath\boldsymbol{X}_t.
\end{align}
Note that dual PCA was important especially because it provided the opportunity to kernelize PCA. However, as explained in the next section, kernel SPCA can be obtained directly from SPCA; therefore, dual SPCA is not essential for obtaining kernel SPCA.
Dual SPCA has another benefit, similar to what we had for dual PCA. In Eqs. (\ref{equation_projected_dual_SPCA}), (\ref{equation_outOfSample_projected_dual_SPCA}), (\ref{equation_reconstruction_dual_SPCA}), and (\ref{equation_outOfSample_reconstruction_dual_SPCA}), $\ensuremath\boldsymbol{U}$ is not used but $\ensuremath\boldsymbol{V}$ appears. In Eq. (\ref{equation_Psi_SVD_dual_SPCA}), the columns of $\ensuremath\boldsymbol{V}$ are the eigenvectors of $\ensuremath\boldsymbol{\Psi}^\top \ensuremath\boldsymbol{\Psi} \in \mathbb{R}^{n \times n}$, according to Proposition \ref{proposition_SVD} in Appendix \ref{section_appendix_SVD}. On the other hand, direct SPCA requires the eigen-decomposition of $\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top \in \mathbb{R}^{d \times d}$ in Eq. (\ref{eqaution_SPCA_eigendecomposition}), which is then used in Eqs. (\ref{equation_SPCA_projection}), (\ref{equation_SPCA_projection_outOfSample}), (\ref{equation_SPCA_reconstruction}), and (\ref{equation_SPCA_reconstruction_outOfSample}).
When the dimensionality is huge, $d \gg n$, decomposing an $n \times n$ matrix is faster and needs less storage, so dual SPCA is more efficient.
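The dual SPCA projection can be sketched as follows; this is an illustrative sketch (the function name \texttt{dual\_spca\_project} is ours), and it can be cross-checked against the direct SPCA eigen-decomposition, with which it agrees up to the signs of the singular vectors:

```python
import numpy as np

def dual_spca_project(X, Ky, p):
    """Dual SPCA projection: Sigma^{-1} V^T Delta^T H X^T X."""
    n = X.shape[1]
    H = np.eye(n) - np.ones((n, n)) / n
    # Ky = Q Omega Q^T  ->  Delta = Q Omega^{1/2}  (Ky is symmetric PSD)
    w, Q = np.linalg.eigh(Ky)
    Delta = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
    Psi = X @ H @ Delta                              # d x n
    U, s, Vt = np.linalg.svd(Psi, full_matrices=False)
    V, Sigma_inv = Vt[:p].T, np.diag(1.0 / s[:p])
    return Sigma_inv @ V.T @ Delta.T @ H @ X.T @ X   # p x n embedding
```

Since $\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{X}^\top = \ensuremath\boldsymbol{\Psi}^\top$, this embedding equals $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}$ with $\ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V}\ensuremath\boldsymbol{\Sigma}^{-1}$, without ever forming a $d \times d$ matrix.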
\subsection{Kernel Supervised PCA}
The SPCA can be kernelized by two approaches, using either direct SPCA or dual SPCA \cite{barshan2011supervised}.
\subsubsection{Kernel SPCA Using Direct SPCA}
According to the representation theory \cite{alperin1993local}, any solution (direction) $\ensuremath\boldsymbol{u} \in \mathcal{H}$ must lie in the span of ``all'' the training vectors mapped to $\mathcal{H}$, i.e., $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) = [\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_1), \dots, \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_n)] \in \mathbb{R}^{t\times n}$ (usually $t \gg d$). Note that $\mathcal{H}$ denotes the Hilbert space (feature space). Therefore, we can state that:
\begin{align*}
\ensuremath\boldsymbol{u} = \sum_{i=1}^n \theta_i\, \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\, \ensuremath\boldsymbol{\theta},
\end{align*}
where $\ensuremath\boldsymbol{\theta} \in \mathbb{R}^n$ is the unknown vector of coefficients, and $\ensuremath\boldsymbol{u} \in \mathbb{R}^t$ is the kernel SPCA direction in Hilbert space here.
The directions can be put together in $\mathbb{R}^{t \times p} \ni \ensuremath\boldsymbol{U} := [\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_p]$:
\begin{align}\label{equation_U_kernel_SPCA}
\ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\, \ensuremath\boldsymbol{\Theta},
\end{align}
where $\ensuremath\boldsymbol{\Theta} := [\ensuremath\boldsymbol{\theta}_1, \dots, \ensuremath\boldsymbol{\theta}_p] \in \mathbb{R}^{n \times p}$.
The Eq. (\ref{equation_HSIC_SPCA}) in the feature space becomes:
\begin{align*}
\text{HSIC} = \frac{1}{(n-1)^2}\, \textbf{tr}(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}).
\end{align*}
The $\textbf{tr}(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H})$ can be simplified as:
\begin{align}
&\textbf{tr}(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}) \nonumber \\
&= \textbf{tr}(\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top) \nonumber \\
&= \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top\ensuremath\boldsymbol{U}) \label{equation_traceInHSIC_kernel_SPCA}
\end{align}
Plugging Eq. (\ref{equation_U_kernel_SPCA}) in Eq. (\ref{equation_traceInHSIC_kernel_SPCA}) gives us:
\begin{align}
&\textbf{tr}\big(\ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\, \ensuremath\boldsymbol{\Theta}\big) \nonumber \\
&= \textbf{tr}(\ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{\Theta}),
\end{align}
where:
\begin{align}\label{equation_kernel_SPCA_kernel_X}
\mathbb{R}^{n \times n} \ni \ensuremath\boldsymbol{K}_x := \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}).
\end{align}
Note that the Eqs. (\ref{equation_kernel_SPCA_kernel_X}) and (\ref{equation_SPCA_kernel_of_projection}) are different and should not be confused.
Moreover, the orthogonality constraint on the projection matrix, i.e., $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$, in the feature space becomes:
\begin{align}
\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} &= \big(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\, \ensuremath\boldsymbol{\Theta}\big)^\top \big(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\, \ensuremath\boldsymbol{\Theta}\big) \nonumber \\
&= \ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{\Theta} = \ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{K}_x\, \ensuremath\boldsymbol{\Theta}
\end{align}
Therefore, the optimization problem is:
\begin{equation}
\begin{aligned}
& \underset{\ensuremath\boldsymbol{\Theta}}{\text{maximize}}
& & \textbf{tr}(\ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{\Theta}), \\
& \text{subject to}
& & \ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{K}_x\, \ensuremath\boldsymbol{\Theta} = \ensuremath\boldsymbol{I},
\end{aligned}
\end{equation}
where $\ensuremath\boldsymbol{\Theta}$ is the unknown optimization variable.
Using Lagrange multipliers \cite{boyd2004convex}, the Lagrangian is:
\begin{align*}
&\mathcal{L} = \\
&\textbf{tr}(\ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{\Theta}) - \textbf{tr}\big(\ensuremath\boldsymbol{\Lambda}^\top (\ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{K}_x\, \ensuremath\boldsymbol{\Theta} - \ensuremath\boldsymbol{I})\big) \\
&= \textbf{tr}(\ensuremath\boldsymbol{\Theta} \ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x) - \textbf{tr}\big(\ensuremath\boldsymbol{\Lambda}^\top (\ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{K}_x\, \ensuremath\boldsymbol{\Theta} - \ensuremath\boldsymbol{I})\big),
\end{align*}
where $\ensuremath\boldsymbol{\Lambda} \in \mathbb{R}^{p \times p}$ is the diagonal matrix of Lagrange multipliers, $\textbf{diag}([\lambda_1, \dots, \lambda_p]^\top)$. Setting the derivative of the Lagrangian with respect to $\ensuremath\boldsymbol{\Theta}$ to zero gives:
\begin{align}
& \mathbb{R}^{n \times p} \ni \frac{\partial \mathcal{L}}{\partial \ensuremath\boldsymbol{\Theta}} = 2 \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{\Theta} - 2\ensuremath\boldsymbol{K}_x\ensuremath\boldsymbol{\Theta} \ensuremath\boldsymbol{\Lambda} \overset{\text{set}}{=} 0 \nonumber \\
& \implies \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{\Theta} = \ensuremath\boldsymbol{K}_x\ensuremath\boldsymbol{\Theta} \ensuremath\boldsymbol{\Lambda}, \label{equation_kernel_SPCA_generalized_eigen_problem}
\end{align}
which is the generalized eigenvalue problem $(\ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x, \ensuremath\boldsymbol{K}_x)$ \cite{ghojogh2019eigenvalue}. The eigenvector matrix $\ensuremath\boldsymbol{\Theta}$ and the eigenvalue matrix $\ensuremath\boldsymbol{\Lambda}$ can be calculated as described in \cite{ghojogh2019eigenvalue}.
Note that in practice, we can naively solve Eq. (\ref{equation_kernel_SPCA_generalized_eigen_problem}) by left-multiplying both sides by $\ensuremath\boldsymbol{K}_x^{-1}$ (assuming $\ensuremath\boldsymbol{K}_x$ is positive definite and hence nonsingular):
\begin{align}
&\underbrace{\ensuremath\boldsymbol{K}_x^{-1}\ensuremath\boldsymbol{K}_x}_{\ensuremath\boldsymbol{I}} \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{\Theta} = \ensuremath\boldsymbol{\Theta} \ensuremath\boldsymbol{\Lambda} \nonumber \\
&\implies \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{\Theta} = \ensuremath\boldsymbol{\Theta} \ensuremath\boldsymbol{\Lambda},
\end{align}
which is the eigenvalue problem \cite{ghojogh2019eigenvalue} for $\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x$, whose eigenvectors form the columns of $\ensuremath\boldsymbol{\Theta}$ and whose eigenvalues appear on the diagonal of $\ensuremath\boldsymbol{\Lambda}$.
If we take the $p$ leading eigenvectors to have $\ensuremath\boldsymbol{\Theta} \in \mathbb{R}^{n \times p}$, the projection of $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \in \mathbb{R}^{t \times n}$ is:
\begin{align}
\mathbb{R}^{p \times n} \ni \ensuremath\boldsymbol{\Phi}(\widetilde{\ensuremath\boldsymbol{X}}) &= \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \nonumber \\
&\overset{(\ref{equation_U_kernel_SPCA})}{=} \ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{K}_x,
\end{align}
where $\ensuremath\boldsymbol{K}_x$ is the kernel matrix of Eq. (\ref{equation_kernel_SPCA_kernel_X}).
Similarly, the projection of an out-of-sample data point $\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_t) \in \mathbb{R}^t$ is:
\begin{align}
\mathbb{R}^{p} \ni \ensuremath\boldsymbol{\phi}(\widetilde{\ensuremath\boldsymbol{x}}_t) &= \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_t) \nonumber \\
&\overset{(\ref{equation_U_kernel_SPCA})}{=} \ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_t) = \ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{k}_t,
\end{align}
where $\ensuremath\boldsymbol{k}_t$ is Eq. (\ref{equation_appendix_kernelVector_outOfSample}).
Considering all the $n_t$ out-of-sample data points, $\ensuremath\boldsymbol{X}_t$, the projection is:
\begin{align}
\mathbb{R}^{p \times n_t} \ni \ensuremath\boldsymbol{\phi}(\widetilde{\ensuremath\boldsymbol{X}}_t) = \ensuremath\boldsymbol{\Theta}^\top \ensuremath\boldsymbol{K}_t,
\end{align}
where $\ensuremath\boldsymbol{K}_t$ is Eq. (\ref{equation_appendix_kernelMatrix_outOfSample}).
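To make the direct kernel SPCA recipe above concrete, here is a minimal NumPy sketch (an illustration added to this derivation, not code from the original method): it solves the naive eigenvalue problem for $\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_y\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x$ and forms the training and out-of-sample projections $\ensuremath\boldsymbol{\Theta}^\top\ensuremath\boldsymbol{K}_x$ and $\ensuremath\boldsymbol{\Theta}^\top\ensuremath\boldsymbol{k}_t$. The toy data, the linear kernel on inputs, and the delta kernel on labels are assumptions made for the example.

```python
import numpy as np

def kernel_spca_fit(K_x, K_y, p):
    # Naive form: eigenvalue problem for H K_y H K_x, obtained by
    # left-multiplying the generalized problem by K_x^{-1}.
    n = K_x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    M = H @ K_y @ H @ K_x                      # generally non-symmetric
    lam, Theta = np.linalg.eig(M)              # general eigendecomposition
    order = np.argsort(-lam.real)[:p]          # p leading eigenvalues
    return Theta[:, order].real, lam[order].real

rng = np.random.default_rng(0)
n, d, p = 20, 5, 2
X = rng.standard_normal((d, n))                # columns are samples
y = rng.integers(0, 3, size=n)                 # toy labels (3 classes)
K_x = X.T @ X                                  # linear kernel on inputs
K_y = (y[:, None] == y[None, :]).astype(float) # delta kernel on labels
Theta, lam = kernel_spca_fit(K_x, K_y, p)

X_proj = Theta.T @ K_x                         # training projection, p x n
x_t = rng.standard_normal(d)                   # an out-of-sample point
k_t = X.T @ x_t                                # kernel vector k_t
x_t_proj = Theta.T @ k_t                       # out-of-sample projection
```

With a nonlinear kernel, one would replace $\ensuremath\boldsymbol{K}_x$ and $\ensuremath\boldsymbol{k}_t$ by the corresponding kernel evaluations; everything else is unchanged.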
As we will show in the following section, in kernel SPCA, as in kernel PCA, we cannot reconstruct data, whether training or out-of-sample.
\subsubsection{Kernel SPCA Using Dual SPCA}
In the $t$-dimensional feature space, Eq. (\ref{equation_Psi_dual_SPCA}) becomes:
\begin{align}\label{equation_Psi_kernel_SPCA_using_dual}
\mathbb{R}^{t \times n} \ni \ensuremath\boldsymbol{\Psi} = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta},
\end{align}
where $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) = [\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_1), \dots, \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_n)] \in \mathbb{R}^{t \times n}$.
Applying SVD (see Appendix \ref{section_appendix_SVD}) to $\ensuremath\boldsymbol{\Psi}$ of Eq. (\ref{equation_Psi_kernel_SPCA_using_dual}) yields a form similar to Eq. (\ref{equation_Psi_SVD_dual_SPCA}).
By the same argument made for Eqs. (\ref{equation_kernelPCA_SVD_Phi}) and (\ref{equation_kernelPCA_eigen_centered_kernel_2}), we do not necessarily have access to $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})$ in Eq. (\ref{equation_Psi_kernel_SPCA_using_dual}), so we instead obtain $\ensuremath\boldsymbol{V}$ and $\ensuremath\boldsymbol{\Sigma}$ from:
\begin{align}\label{equation_kernel_SPCA_eigen_Delta_centered_kernel}
\big(\ensuremath\boldsymbol{\Delta}^\top \breve{\ensuremath\boldsymbol{K}}_x \ensuremath\boldsymbol{\Delta}\big) \ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^2,
\end{align}
where $\breve{\ensuremath\boldsymbol{K}}_x := \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H}$ and the columns of $\ensuremath\boldsymbol{V}$ are the eigenvectors of (see Proposition \ref{proposition_SVD} in Appendix \ref{section_appendix_SVD}):
\begin{align*}
\ensuremath\boldsymbol{\Psi}^\top \ensuremath\boldsymbol{\Psi} &\overset{(a)}{=} \ensuremath\boldsymbol{\Delta}^\top \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{\Delta} \overset{(\ref{equation_kernel_SPCA_kernel_X})}{=} \ensuremath\boldsymbol{\Delta}^\top \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{\Delta} \\
&= \ensuremath\boldsymbol{\Delta}^\top \breve{\ensuremath\boldsymbol{K}}_x \ensuremath\boldsymbol{\Delta},
\end{align*}
where $(a)$ is because of Eqs. (\ref{equation_Psi_kernel_SPCA_using_dual}) and (\ref{equation_centeringMatrix_is_symmetric}).
It is noteworthy that, because Eq. (\ref{equation_kernel_SPCA_eigen_Delta_centered_kernel}) is used instead of Eq. (\ref{equation_Psi_kernel_SPCA_using_dual}), \textit{the projection directions $\ensuremath\boldsymbol{U}$ are not available in kernel SPCA to be observed or plotted.}
Similarly to Eqs. (\ref{equation_Psi_SVD_dual_SPCA}) and (\ref{equation_U_dual_SPCA}), we have:
\begin{align}
\ensuremath\boldsymbol{\Psi} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Sigma} \ensuremath\boldsymbol{V}^\top &\implies \ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Sigma} \underbrace{\ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{V}}_{\ensuremath\boldsymbol{I}} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Sigma} \nonumber\\
&\implies \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-1}, \label{equation_U_kernel_SPCA_using_dual}
\end{align}
where $\ensuremath\boldsymbol{V}$ and $\ensuremath\boldsymbol{\Sigma}$ are obtained from Eq. (\ref{equation_kernel_SPCA_eigen_Delta_centered_kernel}).
The projection of data $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})$ is:
\begin{align}
\ensuremath\boldsymbol{\Phi}(\widetilde{\ensuremath\boldsymbol{X}}) &= \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) = (\ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-1})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \nonumber \\
&= \ensuremath\boldsymbol{\Sigma}^{-\top}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Psi}^\top\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \nonumber \\
& \overset{(\ref{equation_Psi_kernel_SPCA_using_dual})}{=} \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \nonumber \\
& \overset{(\ref{equation_kernel_SPCA_kernel_X})}{=} \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x.
\label{equation_projected_kernel_SPCA_using_dual}
\end{align}
Note that $\ensuremath\boldsymbol{\Sigma}$ and $\ensuremath\boldsymbol{H}$ are symmetric.
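As a sanity check on this dual route, the following NumPy sketch (added for illustration) compares the eigendecomposition-based projection of Eq. (\ref{equation_projected_kernel_SPCA_using_dual}) against the explicit SVD of $\ensuremath\boldsymbol{\Psi}$. A random stand-in is used for $\ensuremath\boldsymbol{\Delta}$, whose actual construction comes from dual SPCA earlier in the paper, and a linear kernel is assumed so that $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{X}$ is available for the comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, p = 6, 15, 3
X = rng.standard_normal((d, n))           # linear kernel: Phi(X) = X
Delta = rng.standard_normal((n, n))       # stand-in for the dual-SPCA Delta
H = np.eye(n) - np.ones((n, n)) / n       # centering matrix
K_x = X.T @ X

# Eigendecompose Delta^T H K_x H Delta to get V and Sigma^2 without Phi(X).
w, V = np.linalg.eigh(Delta.T @ H @ K_x @ H @ Delta)
top = np.argsort(-w)[:p]
V, sigma = V[:, top], np.sqrt(w[top])

# Projection of training data: Sigma^{-1} V^T Delta^T H K_x   (p x n)
X_proj = np.diag(1.0 / sigma) @ V.T @ Delta.T @ H @ K_x

# Cross-check against the explicit SVD of Psi = Phi(X) H Delta, which is
# available here only because the kernel is linear.
U = np.linalg.svd(X @ H @ Delta, full_matrices=False)[0][:, :p]
assert np.allclose(np.abs(X_proj), np.abs(U.T @ X), atol=1e-6)  # equal up to sign per row
```

The sign ambiguity is expected: eigenvectors and singular vectors are defined only up to sign, so each row of the projection may be flipped.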
Similarly, out-of-sample projection in kernel SPCA is:
\begin{align}
\ensuremath\boldsymbol{\phi}(\widetilde{\ensuremath\boldsymbol{x}}_t) &= \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_t) \nonumber \\
&= \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\,\ensuremath\boldsymbol{k}_t,
\end{align}
where $\ensuremath\boldsymbol{k}_t$ is Eq. (\ref{equation_appendix_kernelVector_outOfSample}).
Considering all the $n_t$ out-of-sample data points, $\ensuremath\boldsymbol{X}_t$, the projection is:
\begin{align}
\ensuremath\boldsymbol{\phi}(\widetilde{\ensuremath\boldsymbol{X}}_t)
= \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\,\ensuremath\boldsymbol{K}_t,
\end{align}
where $\ensuremath\boldsymbol{K}_t$ is Eq. (\ref{equation_appendix_kernelMatrix_outOfSample}).
Reconstruction of $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})$ after projection onto the SPCA subspace is:
\begin{align}
\ensuremath\boldsymbol{\Phi}(\widehat{\ensuremath\boldsymbol{X}}) &= \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{\Phi}(\widetilde{\ensuremath\boldsymbol{X}}) \nonumber \\
& \overset{(a)}{=} \ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{\Sigma}^{-1}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x \nonumber \\
&= \ensuremath\boldsymbol{\Psi}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x \nonumber \\
&\overset{(\ref{equation_Psi_kernel_SPCA_using_dual})}{=} \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{K}_x, \label{equation_outOfSample_projected_kernel_SPCA_using_dual}
\end{align}
where $(a)$ is because of Eqs. (\ref{equation_U_kernel_SPCA_using_dual}) and (\ref{equation_projected_kernel_SPCA_using_dual}).
Similarly, the reconstruction of an out-of-sample data point in kernel SPCA is:
\begin{align}
\ensuremath\boldsymbol{\phi}(\widehat{\ensuremath\boldsymbol{x}}_t) &= \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_t) \nonumber\\
&= \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{H}\ensuremath\boldsymbol{\Delta}\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Sigma}^{-2}\ensuremath\boldsymbol{V}^\top\ensuremath\boldsymbol{\Delta}^\top\ensuremath\boldsymbol{H}\,\ensuremath\boldsymbol{k}_t, \label{equation_outOfSample_reconstruction_kernel_SPCA_using_dual}
\end{align}
where $\ensuremath\boldsymbol{k}_t$ is Eq. (\ref{equation_appendix_kernelVector_outOfSample}).
However, in Eqs. (\ref{equation_outOfSample_projected_kernel_SPCA_using_dual}) and (\ref{equation_outOfSample_reconstruction_kernel_SPCA_using_dual}), we do not necessarily have $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})$; therefore, in kernel SPCA, as in kernel PCA, we cannot reconstruct data, whether training or out-of-sample.
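To see the obstruction concretely, the sketch below (an illustration with a random stand-in for $\ensuremath\boldsymbol{\Delta}$, not part of the paper) evaluates the reconstruction formula for a \textit{linear} kernel, where $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{X}$ happens to be available; for an RBF or other implicit kernel, the leading factor $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})$ cannot be formed, which is exactly why reconstruction fails in general.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, p = 6, 15, 3
X = rng.standard_normal((d, n))           # linear kernel, so Phi(X) = X is known
Delta = rng.standard_normal((n, n))       # stand-in for the dual-SPCA Delta
H = np.eye(n) - np.ones((n, n)) / n
K_x = X.T @ X

w, V = np.linalg.eigh(Delta.T @ H @ K_x @ H @ Delta)
top = np.argsort(-w)[:p]
V, sig2 = V[:, top], w[top]

# Phi(X) H Delta V Sigma^{-2} V^T Delta^T H K_x: the leading factor Phi(X)
# can be written down only because the kernel is linear.
X_hat = X @ H @ Delta @ V @ np.diag(1.0 / sig2) @ V.T @ Delta.T @ H @ K_x
```

Algebraically this reconstruction equals $\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})$, the projection onto the span of the $p$ leading directions.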
\begin{figure*}[!t]
\centering
\includegraphics[width=6.5in]{./images/project_train_Frey}
\caption{Projection of the Frey dataset onto the subspaces of (a) the PCA, (b) the dual SPCA, (c) the kernel PCA (linear kernel), (d) the kernel PCA (RBF kernel), and (e) the kernel PCA (cosine kernel).}
\label{figure_projection_train_Frey}
\end{figure*}
\section{Simulations}\label{section_simulations}
\subsection{Experiments on PCA and Kernel PCA}
For the experiments on PCA and kernel PCA, we used the Frey face dataset. This dataset includes $1965$ images of one subject with different poses and expressions.
The dataset, except for reconstruction experiments, was standardized so that its mean and variance became zero and one, respectively.
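The standardization step can be sketched as follows (a minimal helper written for this description; the paper does not prescribe an implementation, and the image size used below is only illustrative):

```python
import numpy as np

def standardize(X):
    """Standardize each feature (row of X) to zero mean and unit variance
    across the n samples (columns)."""
    mu = X.mean(axis=1, keepdims=True)
    sd = X.std(axis=1, keepdims=True)
    sd[sd == 0] = 1.0                      # leave constant features at zero
    return (X - mu) / sd

X = np.random.default_rng(0).standard_normal((560, 100))  # pixels x images
Z = standardize(X)
```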
First, we used the whole dataset for training the PCA, dual PCA, and kernel PCA.
Figure \ref{figure_projection_train_Frey} shows the projection of the images onto the two leading dimensions of the PCA, dual PCA, and kernel PCA subspaces.
As can be seen in this figure, the results of PCA and dual PCA are exactly the same, but on this dataset dual PCA is much faster.
The linear kernel is $\ensuremath\boldsymbol{K} = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) = \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{X}$.
According to Eqs. (\ref{equation_SVD_dual}) and (\ref{equation_kernelPCA_SVD_Phi}), dual PCA and kernel PCA with a linear kernel are equivalent; Figure \ref{figure_projection_train_Frey} verifies this.
The projections of kernel PCA with RBF and cosine kernels are also shown in this figure.
We then shuffled the Frey dataset and randomly split it into training and out-of-sample sets with $80\%/20\%$ proportions. Figure \ref{figure_projection_test_Frey} shows the projection of both training and out-of-sample images onto the PCA, dual PCA, and kernel PCA subspaces, using linear, RBF, and cosine kernels.
As can be seen, the out-of-sample data, although not seen in the training phase, are projected very well; the model has, to some extent, extrapolated the projections.
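The shuffling and $80\%/20\%$ split described above can be sketched as follows (an illustrative snippet; the random seed and the use of NumPy are assumptions made for the example):

```python
import numpy as np

n = 1965                                   # number of Frey images
rng = np.random.default_rng(42)
perm = rng.permutation(n)                  # shuffle the dataset indices
n_train = round(0.8 * n)                   # 80% training / 20% out-of-sample
train_idx, test_idx = perm[:n_train], perm[n_train:]
```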
The top ten PCA directions for the PCA trained on the entire Frey dataset are shown in Fig. \ref{figure_projection_directions}.
As can be seen, the projection directions of a facial dataset are facial features resembling ghost faces; hence, facial projection directions are also referred to as ``ghost faces''.
The ghost faces in PCA are also referred to as ``eigenfaces'' \cite{turk1991eigenfaces,turk1991face}.
In this figure, the projection directions have captured different facial features which discriminate the data with respect to the maximum variance: the eyes, nose, cheeks, chin, lips, and eyebrows, which are among the most important facial features.
This figure does not include the projection directions of kernel PCA because, as mentioned before, they are not available in kernel PCA.
Note that the face recognition using kernel PCA is referred to as ``kernel eigenfaces'' \cite{yang2000face}.
The reconstructions of training and test images after projection onto the PCA subspace are shown in Fig. \ref{figure_reconstruction_images}, using both all projection directions and only the two leading ones. As expected, the reconstructions using all projection directions are very accurate.
Face recognition using PCA and kernel PCA is called eigenfaces \cite{turk1991eigenfaces,turk1991face} and kernel eigenfaces \cite{yang2000face}, respectively, because PCA uses the eigenvalue decomposition \cite{ghojogh2019eigenvalue} of the covariance matrix.
\begin{figure*}[!t]
\centering
\includegraphics[width=6.5in]{./images/project_test_Frey}
\caption{Projection of the training and out-of-sample sets of the Frey dataset onto the subspaces of (a) the PCA, (b) the dual PCA, (c) the kernel PCA (linear kernel), (d) the kernel PCA (RBF kernel), and (e) the kernel PCA (cosine kernel). The images with red frames are the out-of-sample images.}
\label{figure_projection_test_Frey}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=6in]{./images/directions}
\caption{The ghost faces: the leading eigenvectors of PCA and SPCA for Frey, AT\&T, and AT\&T glasses datasets.}
\label{figure_projection_directions}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=3.7in]{./images/reconstruction}
\caption{The reconstructed faces using all and two of the leading eigenvectors of PCA and SPCA for Frey and AT\&T datasets.}
\label{figure_reconstruction_images}
\end{figure*}
\subsection{Experiments on SPCA and Kernel SPCA}
For the experiments on SPCA and kernel SPCA, we used the AT\&T face dataset, which includes $400$ images of $40$ subjects, with $10$ images per subject. The images of every subject have different poses and expressions.
For better visualization of the separation of classes in the projection subspace, we used only the images of the first three subjects.
The dataset, except for reconstruction experiments, was standardized so that its mean and variance became zero and one, respectively.
First, we used all $30$ images for training; the projections of the images are shown in Fig. \ref{figure_projection_train_ATT} for SPCA, dual SPCA, direct kernel SPCA, and kernel SPCA using dual SPCA. As can be seen, the results of direct kernel SPCA and kernel SPCA using dual SPCA are very similar, as also claimed in \cite{barshan2011supervised} (although they differ for some datasets, as we will see in Fig. \ref{figure_projection_test_ATT_glasses}, for example). When comparing, note that rotation, flipping, and scaling do not matter in subspace and manifold learning because they affect all pairwise distances of instances similarly.
We then took the first six images of each of the three subjects as training images and the rest as test images. The projection of training and out-of-sample images onto SPCA, dual SPCA, direct kernel SPCA, and kernel SPCA using dual SPCA are shown in Fig. \ref{figure_projection_test_ATT}.
This figure shows that the projection of out-of-sample images has been carried out properly in SPCA and its variants.
The top projection directions of SPCA, dual SPCA, direct kernel SPCA, and kernel SPCA using dual SPCA, trained on all $30$ images, are shown in Fig. \ref{figure_projection_directions}, together with the projection directions of PCA for this dataset. We can refer to the ghost faces (facial projection directions) of SPCA as ``supervised eigenfaces'', and to face recognition using kernel SPCA as ``kernel supervised eigenfaces''.
Fig. \ref{figure_projection_directions} does not include the projection directions of kernel SPCA because, as mentioned before, they are not available in kernel SPCA.
Comparing the PCA and SPCA directions shows that both capture eyeglasses as important discriminators. However, SPCA additionally captures some Haar-wavelet-like features \cite{stankovic2003haar} as projection directions. Haar wavelets are known to be important in face recognition and detection; for example, the Viola-Jones face detector \cite{wang2014analysis} uses them.
The reconstructions of training and test images after projection onto the SPCA subspace are shown in Fig. \ref{figure_reconstruction_images}, using both all projection directions and only the two leading ones. As expected, the reconstructions using all projection directions are very accurate.
As noted above, we refer to face recognition using SPCA and kernel SPCA as supervised eigenfaces and kernel supervised eigenfaces, respectively.
\begin{figure*}[!t]
\centering
\includegraphics[width=6.5in]{./images/project_train_ATT_normalized}
\caption{Projection of the AT\&T dataset onto the subspaces of (a) the SPCA, (b) the dual SPCA, (c) the direct kernel SPCA (linear kernel), (d) the direct kernel SPCA (RBF kernel), (e) the direct kernel SPCA (cosine kernel), (f) the kernel SPCA using dual (linear kernel), (g) the kernel SPCA using dual (RBF kernel), and (h) the kernel SPCA using dual (cosine kernel).}
\label{figure_projection_train_ATT}
\end{figure*}
\begin{figure*}[!t]
\centering
\includegraphics[width=6.5in]{./images/project_test_ATT_normalized}
\caption{Projection of the training and out-of-sample sets of the AT\&T dataset onto the subspaces of (a) the SPCA, (b) the dual SPCA, (c) the direct kernel SPCA (linear kernel), (d) the direct kernel SPCA (RBF kernel), (e) the direct kernel SPCA (cosine kernel), (f) the kernel SPCA using dual (linear kernel), (g) the kernel SPCA using dual (RBF kernel), and (h) the kernel SPCA using dual (cosine kernel). The images with red frames are the out-of-sample images.}
\label{figure_projection_test_ATT}
\end{figure*}
\subsection{Comparison of PCA and SPCA}
For the comparison of PCA and SPCA, we used $90$ images of the AT\&T dataset without eyeglasses and $90$ images with eyeglasses. We used all $180$ images for training.
The dataset was standardized so that its mean and variance became zero and one, respectively.
Figure \ref{figure_projection_test_ATT_glasses} shows the projection of the data onto PCA, SPCA, direct kernel SPCA, and kernel SPCA using dual SPCA. As can be seen, PCA has not separated the two classes (with and without eyeglasses), whereas SPCA and kernel SPCA have separated them properly because they make use of the class labels.
The projection directions of PCA and SPCA for this experiment are shown in Fig. \ref{figure_projection_directions}. Both PCA and SPCA have captured the eyes as discriminators; however, SPCA has also focused on the frames of the eyeglasses because it uses the class labels. In contrast, PCA has also captured other, distracting facial features such as the forehead, cheeks, hair, and mustache, because it is unaware that the two classes differ in eyeglasses and treats the dataset as a whole.
\begin{figure*}[!t]
\centering
\includegraphics[width=6.5in]{./images/project_test_ATT_glasses_normalized}
\caption{Projection of the AT\&T glasses dataset onto the subspaces of (a) the PCA, (b) the SPCA, (c) the direct kernel SPCA (linear kernel), (d) the direct kernel SPCA (RBF kernel), (e) the direct kernel SPCA (cosine kernel), (f) the kernel SPCA using dual (linear kernel), (g) the kernel SPCA using dual (RBF kernel), and (h) the kernel SPCA using dual (cosine kernel). The images with red frames have eyeglasses and the other images do not.}
\label{figure_projection_test_ATT_glasses}
\end{figure*}
\section{Conclusion and Future Work}\label{section_conclusions}
In this paper, PCA and SPCA were introduced in theoretical detail, and kernel PCA and kernel SPCA were covered. Illustrations and experiments on the Frey and AT\&T face datasets were provided in order to analyze the explained methods in practice.
The calculation of $\ensuremath\boldsymbol{K}_y \in \mathbb{R}^{n \times n}$ in SPCA might be challenging for big data in terms of speed and storage. The Supervised Random Projection (SRP) \cite{karimi2018srp,karimi2018exploring} addresses this problem by approximating the kernel matrix $\ensuremath\boldsymbol{K}_y$ using Random Fourier Features (RFF)
\cite{rahimi2008random}. As future work, we will write a tutorial on SRP.
Moreover, sparsity is very effective because of the \textit{``bet on sparsity''} principle: ``Use a procedure that does well in sparse problems, since no procedure does well in dense problems'' \cite{friedman2001elements,tibshirani2015statistical}.
Another reason for the effectiveness of sparsity is Occam's razor \cite{domingos1999role}, which states that ``simpler solutions are more likely to be correct than complex ones'' or that ``simplicity is a goal in itself''.
Therefore, sparse methods such as sparse PCA \cite{zou2006sparse,shen2008sparse}, sparse kernel PCA \cite{tipping2001sparse}, and Sparse Supervised Principal Component Analysis (SSPCA) \cite{sharifzadeh2017sparse} have been proposed. We defer these methods to future tutorials.
\section*{Acknowledgment}
The authors sincerely thank Prof. Ali Ghodsi (see his excellent online courses \cite{web_data_visualization,web_classification}), Prof. Mu Zhu, Prof. Wayne Oldford, Prof. Hoda Mohammadzade, and the other professors whose courses have partly covered the materials mentioned in this tutorial paper.
https://arxiv.org/abs/1905.07658 | The Robin Laplacian - spectral conjectures, rectangular theorems

The first two eigenvalues of the Robin Laplacian are investigated along with their gap and ratio. Conjectures by various authors for arbitrary domains are supported here by new results for rectangular boxes. Results for rectangular domains include that: the square minimizes the first eigenvalue among rectangles under area normalization, when the Robin parameter $\alpha \in \mathbb{R}$ is scaled by perimeter; that the square maximizes the second eigenvalue for a sharp range of $\alpha$-values; that the line segment minimizes the Robin spectral gap under diameter normalization for each $\alpha \in \mathbb{R}$; and the square maximizes the spectral ratio among rectangles when $\alpha>0$. Further, the spectral gap of each rectangle is shown to be an increasing function of the Robin parameter, and the second eigenvalue is concave with respect to $\alpha$. Lastly, the shape of a Robin rectangle can be heard from just its first two frequencies, except in the Neumann case.

\section{\bf Introduction}
\label{intro}
New shape optimization conjectures are developed and old ones revisited for the first two eigenvalues of the Robin Laplacian. Along the way, conjectures are supported with theorems on the special case of rectangular domains.
Shape optimization problems for the spectrum of the Robin Laplacian
\begin{equation*}\label{robinproblem}
\begin{split}
- \Delta u & = \lambda u \ \quad \text{in $\Omega$,} \\
\frac{\partial u}{\partial\nu} + \alpha u & = 0 \qquad \text{on $\partial \Omega$,}
\end{split}
\end{equation*}
have resolutely resisted techniques employed on the Neumann and Dirichlet endpoint cases ($\alpha=0$ and $\alpha=\infty$ respectively). For example, Rayleigh's conjecture that the ball minimizes the first eigenvalue among all domains of given volume was proved for Dirichlet boundary conditions by Faber and Krahn in the 1920s, using rearrangement methods. The Neumann case is trivial since the first eigenvalue is zero for every domain. Yet the Robin case of the conjecture, which lies between the Neumann and Dirichlet ones, was established only in the 1980s in the plane by Bossel \cite{B86}. Her extremal length methods were extended to higher dimensions by Daners \cite{D06} in 2006, followed in 2010 by a new shape optimization approach of Bucur and Giacomini \cite{BG10}.
Lurking beyond the Neumann case lie the negative Robin parameters, for which Bareket \cite{B77} conjectured the ball might maximize the first eigenvalue among domains of given volume. Freitas and {Krej\v{c}i\v{r}\'{\i}k}\ \cite{FK15} disproved this conjecture in general with an annular counterexample, but they succeeded in proving it in $2$ dimensions when the negative Robin parameter is sufficiently close to $0$. For the second eigenvalue with negative Robin parameter, recent papers by Freitas and Laugesen \cite{FL18a,FL18b} generalize to a natural range of parameter values the sharp Neumann upper bounds of Szeg\H{o} \cite{S54} and Weinberger \cite{W56}, with the ball being the maximizer.
\subsection*{Overview of results} Rectangles are everyone's first choice when seeking computable examples. The Neumann and Dirichlet spectra of rectangles are completely explicit, but the Robin eigenvalues must be determined from transcendental equations (as collected in \autoref{identifyinginterval} and \autoref{identifying}), and thus are more complicated to extremize. Both positive and negative Robin parameters will be considered. Negative Robin parameters correspond in the heat equation to non-physical boundary conditions, with ``heat flowing from cold to hot''. Negative parameters do arise in a physically sensible way in a model for surface superconductivity \cite{GS07}. In any case, from a mathematical perspective the negative parameter regime is a natural continuation of the positive parameter situation.
A \textbf{rectangular box} in ${\mathbb R}^n$ is the Cartesian product of $n$ open intervals. The edges can be taken parallel to the coordinate axes, by rotational invariance of the Laplacian. A \textbf{cube} is a box whose edges all have the same length. In $2$ dimensions the box is a rectangle, and the cube is a square.
For rectangular boxes of given volume, \autoref{firsttwo} illustrates the following six results, two of which concern
%
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{firsttwoeigen.pdf}
\hspace{1.5cm}
\includegraphics[scale=0.45]{firsttworatio.pdf}
\end{center}
\caption{\label{firsttwo} Left: the first two eigenvalues $\lambda_1$ and $\lambda_2$ for the unit square and for a rectangle with area $1$ and aspect ratio $7$, plotted as functions of the Robin parameter $\alpha$. Right: the ratio $\lambda_2/|\lambda_1|$ for the square and rectangle.}
\end{figure}
dependence on the Robin parameter while the other four involve shape optimization:
\begin{itemize}
\item monotonicity of the spectral gap $\lambda_2-\lambda_1$ as a function of $\alpha \in {\mathbb R}$ (\autoref{gapmonot}; due to Smits \cite[Section 4]{Sm96} for $\alpha>0$)
\item concavity of the first and second eigenvalues with respect to $\alpha \in {\mathbb R}$ (\autoref{secondeigconcave})
\item maximality of the cube for the first eigenvalue when $\alpha<0$, and minimality of the cube when $\alpha>0$ (\autoref{lambda1higherdim}; minimality when $\alpha>0$ is due to Freitas and Kennedy \cite[Theorem 4.1]{FK18} in $2$ dimensions and to Keady and Wiwatanapataphee \cite{KW18} in all dimensions)
\item maximality of the cube for the second eigenvalue when $\alpha \leq 0$ (\autoref{lambda2higherdim}); when $\alpha>0$ this maximality can fail, as seen on the left of \autoref{firsttwo}
\item maximality of the cube for the magnitude of the $\lambda_2$-horizontal intercept, that is, for the first nonzero Steklov eigenvalue (\autoref{sigma1higherdim}; this was proved in a stronger form with a different approach by Girouard \emph{et al.}\ \cite{GLPS17})
\item maximality of the cube for the spectral ratio $\lambda_2/|\lambda_1|$ (\autoref{ratio}).
\end{itemize}
Now let us place these rectangular results in context with conjectures and results for general domains. The first result above proves a special case of Smits' monotonicity conjecture for the spectral gap on arbitrary convex domains \cite[Section 4]{Sm96}; see \autoref{NeumannDirichletGapConj}. The second result suggests concavity of the second eigenvalue (\autoref{lambda2concavity}) when the Robin parameter is positive and the domain is convex. The third result is of Bareket/Rayleigh type. When $\alpha>0$ it is the rectangular analogue of the Bossel--Daners theorem for general domains. The fourth result, about maximizing the second eigenvalue, is the rectangular version of Freitas and Laugesen's result \cite{FL18a} for general domains with $\alpha \in [-(1+1/n)R^{-1},0]$, where $R$ is the radius of the ball having the same volume as the domain. That $\alpha$-range for general domains is not thought to be optimal. The fifth result is of Brock-type for the Steklov eigenvalue. The sixth one, about maximality of the spectral ratio, motivates \autoref{ratioconj} later in the paper for general domains.
Further, the spectral gap of a rectangular box is shown in \autoref{gapD} to be minimal for the degenerate rectangle of the same diameter, for each $\alpha \in {\mathbb R}$, which is consistent with \autoref{robingap} later for arbitrary convex domains when $\alpha>0$.
\smallskip
The most difficult results in the paper arise when the Robin parameter is scaled by the perimeter $L$ of a planar domain, that is, when the Robin parameter is $\alpha/L$. For rectangles with given area, \autoref{firsttwoperim} and its close-up in \autoref{firsttwoperimcloseup} illustrate:
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{firsttwoeigenperim.pdf}
\hspace{1.5cm}
\includegraphics[scale=0.45]{firsttworatioperim.pdf}
\end{center}
\caption{\label{firsttwoperim} Length scaling $\alpha/L$. Left: the first two eigenvalues $\lambda_1(\cdot\,;\alpha/L)$ and $\lambda_2(\cdot\,;\alpha/L)$ for the unit square and for a rectangle with area $1$ and aspect ratio $7$, plotted as functions of $\alpha$. Here the perimeter is $L=4$ for the unit square and $L=2(\sqrt{7}+1/\sqrt{7})$ for the rectangle. \autoref{firsttwoperimcloseup} provides a close-up view near the origin. Right: the ratio $\lambda_2(\cdot\,;\alpha/L)/|\lambda_1(\cdot\,;\alpha/L)|$ for the square and rectangle.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{firsttwoeigenperimcloseup.pdf}
\end{center}
\caption{\label{firsttwoperimcloseup} Length scaling $\alpha/L$. Close-up view near the origin of the left side of \autoref{firsttwoperim}, showing the first two eigenvalues $\lambda_1(\cdot\,;\alpha/L)$ and $\lambda_2(\cdot\,;\alpha/L)$ for the unit square and for a rectangle of area $1$. The first eigenvalue is minimal for the square, for all $\alpha$, and the second eigenvalue is maximal for the square in a range that includes the magnified region.}
\end{figure}
\begin{itemize}
\item minimality of the square for the first eigenvalue when $\alpha \in {\mathbb R}$ (\autoref{lambda1L})
\item maximality of the square for the second eigenvalue when $\alpha \in [\alpha_-,\alpha_+]$ (\autoref{lambda2L}), where $\alpha_- \simeq -9.4$ and $\alpha_+ \simeq 33.2$; outside that range the maximizer is the degenerate rectangle
\item maximality of the square for the first nonzero Steklov eigenvalue (\autoref{steklov}, which gives a new proof of a result by Girouard, Lagac\'{e}, Polterovich and Savo \cite{GLPS17})
\item maximality of the square for the spectral ratio $\lambda_2/\lambda_1$ (\autoref{ratio2dim}) when $\alpha>0$.
\end{itemize}
The first of these results, about minimality of the first Robin eigenvalue when the parameter is $\alpha/L$, suggests a new Rayleigh-type inequality, \autoref{lambda1Lgeneral}, in which the disk is the minimizer for the first eigenvalue among convex domains. This conjectured inequality applies for all $\alpha \in {\mathbb R}$, and notably does not switch direction at $\alpha=0$. The Bareket switching phenomenon seems not to occur, due to the scaling of the Robin parameter by perimeter. The second result, about maximizing the second eigenvalue, is the rectangular version of Freitas and Laugesen's \cite{FL18b} result for simply connected domains with $\alpha \in [-2\pi,2\pi]$. \autoref{convexconj} describes a higher dimensional generalization for the second eigenvalue on convex domains. The Steklov result (the third one above) is a rectangular Weinstock type inequality. The fourth result, maximality of the square for the spectral ratio, stimulates a conjecture for all convex domains when $\alpha \geq -2\pi$, in \autoref{ratioconjlength}.
Lastly, the inverse spectral problem for Robin rectangles has an appealingly simple statement (\autoref{hearingdrum}): each rectangle is determined up to congruence by its first two Robin eigenvalues, whenever $\alpha \neq 0$.
For background on spectral optimization for the Laplacian I recommend the survey by Grebenkov and Nguyen \cite{GN13}, and the book \cite{H17} edited by Henrot, which includes a chapter of Robin results. Upper and lower bounds on the first eigenvalue in terms of inradius have been developed by Kova\v{r}\'{\i}k \cite[Theorem 4.5]{Ko14}. His lower bound was recently sharpened by Savo \cite[Corollary 3]{S19}. For rectangular domains, the latest developments include an analysis of Courant-sharp Robin eigenvalues on the square by Gittins and Helffer \cite{GH19}, and of P\'{o}lya-type inequalities for disjoint unions of rectangles by Freitas and Kennedy \cite{FK18}.
On a wry historical note, Robin's connection to the Robin boundary condition appears rather tenuous, according to investigations by Gustafson and Abe \cite{GA98}.
\smallskip
The main results and conjectures are in the next two sections. Proofs appear later in the paper, especially in \autoref{mainproofs}.
\section{\bf Monotonicity and concavity as a function of the Robin parameter}
\label{results}
We start by investigating the first two eigenvalues, and their gap and ratio, on a fixed domain as functions of the Robin parameter $\alpha$. Write $\Omega$ for a bounded Lipschitz domain in ${\mathbb R}^n$. The eigenvalues of the Robin Laplacian, denoted $\lambda_k(\Omega;\alpha)$ for $k=1,2,\dots$, are increasing and continuous as functions of the boundary parameter $\alpha \in {\mathbb R}$, and for each fixed $\alpha$ satisfy
\[
\lambda_1(\Omega;\alpha) < \lambda_2(\Omega;\alpha) \leq \lambda_3(\Omega;\alpha) \leq \dots \to \infty .
\]
These facts can be established using the Rayleigh quotient and its associated minimax variational characterization of the $k$th eigenvalue, which reads
\begin{equation} \label{minimax}
\lambda_k(\Omega;\alpha) = \min_{U_k} \max_{u \in U_k \setminus 0} \frac{\int_\Omega |\nabla u|^2 \, dx + \alpha \int_{\partial \Omega} u^2 \, dS}{\int_\Omega u^2 \, dx}
\end{equation}
where $U_k$ ranges over all $k$-dimensional subspaces of $H^1(\Omega)$; see for example \cite[{\S}4.2]{H17}.
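These facts can also be observed numerically. The sketch below discretizes the Rayleigh quotient in \eqref{minimax} on the unit interval with piecewise linear finite elements and a lumped mass matrix; the discretization details are choices made here for illustration, not a construction from the paper. The Robin boundary term contributes $\alpha$ at the two boundary nodes of the stiffness matrix.

```python
import numpy as np

def robin_eigs(alpha, n=200):
    """Eigenvalues of the Robin Laplacian on (0, 1), from a piecewise-linear
    finite element discretization of the Rayleigh quotient.

    Numerator: stiffness matrix plus alpha at the two boundary nodes.
    Denominator: lumped (diagonal) mass matrix.
    """
    h = 1.0 / (n - 1)
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    K[0, 0] = K[-1, -1] = 1.0 / h   # boundary rows of the stiffness matrix
    K[0, 0] += alpha                # Robin term alpha * u(0)^2
    K[-1, -1] += alpha              # Robin term alpha * u(1)^2
    m = np.full(n, h)               # lumped mass: h inside, h/2 at the ends
    m[0] = m[-1] = h / 2.0
    s = 1.0 / np.sqrt(m)
    # Symmetric reduction of the generalized problem K u = lambda M u.
    return np.sort(np.linalg.eigvalsh(K * s[:, None] * s[None, :]))
```

For instance, `robin_eigs(0.0)[1]` approximates the first nonzero Neumann eigenvalue $\pi^2$, and raising $\alpha$ raises every eigenvalue, consistent with the minimax characterization \eqref{minimax}.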
\subsection*{Monotonicity} Each individual eigenvalue $\lambda_k(\Omega;\alpha)$ is increasing as a function of $\alpha$, by the minimax characterization \eqref{minimax}. Is the gap between the first two eigenvalues also increasing with respect to $\alpha$? On this question, Smits \cite{Sm96} has raised:
\begin{conjx}[Monotonicity of the spectral gap with respect to the Robin parameter; \protect{\cite[Section 4]{Sm96}}] \label{NeumannDirichletGapConj}
For convex bounded domains $\Omega$, the spectral gap $(\lambda_2-\lambda_1)(\Omega;\alpha)$ is strictly increasing as a function of $\alpha > 0$. In particular, the Neumann gap provides a lower bound on the Dirichlet gap:
\[
\nu_2(\Omega) < \delta_2(\Omega) - \delta_1(\Omega)
\]
where $0 = \nu_1 < \nu_2$ are the first and second Neumann eigenvalues of the Laplacian, and $0<\delta_1 < \delta_2$ are the first and second Dirichlet eigenvalues.
\end{conjx}
Smits proved his conjecture for intervals, and observed that he had verified it also in $2$ dimensions for rectangles and disks. In the next theorem we provide a proof in all dimensions for rectangular boxes, handling both positive and negative values of $\alpha$.
\autoref{NeumannDirichletGapConj} fails on some convex domains when $\alpha \ll 0$, since an asymptotic formula due to Khalile \cite[Corollary 1.3]{Kh18} implies that certain convex polygons with distinct smallest angles have spectral gap behaving like $(\text{const.})\alpha^2$ for large negative $\alpha$. The gap for such a polygon would be decreasing as a function of $\alpha$ when $\alpha \ll 0$.
Rectangular boxes are better behaved, it turns out, for all real $\alpha$.
\begin{theorem}[Monotonicity of the spectral gap with respect to the Robin parameter, on rectangular boxes] \label{gapmonot}
For a rectangular box $\mathcal{B}$, as $\alpha$ increases from $-\infty$ to $\infty$ the spectral gap $(\lambda_2-\lambda_1)(\mathcal{B};\alpha)$ strictly increases from $0$ to the Dirichlet gap $(\delta_2-\delta_1)(\mathcal{B})$.
\end{theorem}
The spectral ratio $\lambda_2/\lambda_1$ satisfies a monotonicity property for all Lipschitz domains, provided we multiply the ratio by $\alpha$. Write $\sigma_1(\Omega)$ for the first nonzero Steklov eigenvalue of the domain.
\begin{theorem}[Monotonicity of $\alpha$ times the spectral ratio] \label{ratiomonot}
For each bounded Lipschitz domain $\Omega$, the map
\[
\alpha \mapsto \alpha \frac{\lambda_2(\Omega;\alpha)}{\lambda_1(\Omega;\alpha)}
\]
is increasing for $\alpha > -\sigma_1(\Omega)$, that is, whenever $\lambda_2(\Omega;\alpha)>0$.
\end{theorem}
The proof of \autoref{ratiomonot} breaks down when $\alpha < -\sigma_1(\Omega)$ (that is, when $\lambda_2$ is negative), although it seems reasonable to conjecture that $\alpha \lambda_2/\lambda_1$ is increasing for that range of $\alpha$-values also.
The spectral ratio without a factor of $\alpha$ seems to decrease rather than increase.
\begin{conjx}[Monotonicity of the spectral ratio] \label{gapmonotpure}
For every bounded Lipschitz domain $\Omega$, the map
\[
\alpha \mapsto \frac{\lambda_2(\Omega;\alpha)}{\lambda_1(\Omega;\alpha)}
\]
is decreasing for $\alpha>0$.
\end{conjx}
This spectral ratio approaches $\infty$ as $\alpha \to 0^+$, since the first eigenvalue approaches zero while the second approaches the positive Neumann eigenvalue $\nu_2$.
\autoref{gapmonotpure} is open even for rectangles, where the formula for the spectral ratio seems tricky to handle analytically. The conjecture can apparently fail for acute isosceles triangles with $\alpha<0$, by numerical work of D. Kielty (private communication).
\subsection*{Concavity} Next, recall that the first Robin eigenvalue $\lambda_1(\Omega;\alpha)$ is a concave function of $\alpha \in {\mathbb R}$, by the Rayleigh principle
\[
\lambda_1(\Omega;\alpha) = \min_{u \in H^1(\Omega)} \frac{\int_\Omega |\nabla u|^2 \, dx + \alpha \int_{\partial \Omega} u^2 \, dS}{\int_\Omega u^2 \, dx} ,
\]
which expresses the first eigenvalue curve as the minimum of a family of linear functions of $\alpha$.
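To record this step explicitly (a routine reformulation, included here only for clarity): for each fixed trial function $u$, write

```latex
% For each fixed trial function u, set
\[
a_u = \frac{\int_\Omega |\nabla u|^2 \, dx}{\int_\Omega u^2 \, dx} ,
\qquad
b_u = \frac{\int_{\partial \Omega} u^2 \, dS}{\int_\Omega u^2 \, dx} \geq 0 ,
\]
% so that the Rayleigh principle becomes
\[
\lambda_1(\Omega;\alpha) = \min_{u \in H^1(\Omega)} \left( a_u + \alpha \, b_u \right) .
\]
% A pointwise minimum of affine functions of alpha is concave.
```

Thus concavity of $\lambda_1$ needs no regularity of the domain: it is inherited directly from the affine dependence of each Rayleigh quotient on $\alpha$.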
Is the second Robin eigenvalue also concave with respect to $\alpha$? That seems too much to expect on arbitrary domains, and even among convex domains, nonconcavity can occur for some negative $\alpha$ by numerical work of Kielty (private communication). So we raise a question for positive $\alpha$.
\begin{conjx}[Concavity of the second eigenvalue with respect to the Robin parameter] \label{lambda2concavity}
For convex bounded domains $\Omega$, the second eigenvalue $\lambda_2(\Omega;\alpha)$ is a concave function of $\alpha>0$.
\end{conjx}
A special case in which concavity holds is when the domain has a plane of symmetry and the second eigenfunction is odd with respect to that plane --- for then the second eigenvalue equals the first eigenvalue of the mixed Dirichlet--Robin problem on one half of the domain, and by the Rayleigh principle that mixed eigenvalue is concave with respect to $\alpha$. This argument applies, for example, to the ball and to rectangular boxes. (For a comprehensive treatment of the ball's spectrum, see \cite[Section 5]{FL18a}.)
For rectangular boxes we may proceed explicitly, and handle all $\alpha \in {\mathbb R}$.
\begin{theorem}[Concavity of the first two eigenvalues with respect to the Robin parameter, on rectangular boxes] \label{secondeigconcave}
For each rectangular box $\mathcal{B}$, the first and second eigenvalues $\lambda_1(\mathcal{B};\alpha)$ and $\lambda_2(\mathcal{B};\alpha)$ are strictly concave functions of $\alpha \in {\mathbb R}$.
\end{theorem}
The gap $(\lambda_2-\lambda_1)(\mathcal{B};\alpha)$ appears to be concave too, for $\alpha>0$, according to numerical investigations (omitted) that build on eigenvalue formulas for the box (as developed in \autoref{identifyinginterval} and \autoref{identifying}). I have not succeeded in proving this numerical observation. The gap cannot be concave for all $\alpha$, since the gap for the box is positive and tends to $0$ as $\alpha \to -\infty$, by \autoref{gapmonot}.
\section{\bf Optimal rectangular boxes for low Robin eigenvalues}
\label{optimal}
\subsection*{First Robin eigenvalue}
Almost a century ago, Faber and Krahn proved Rayleigh's conjecture that the first Dirichlet eigenvalue of the Laplacian is minimal on the ball, among domains of given volume. Bossel \cite{B86} proved the analogous result for Robin eigenvalues in $2$ dimensions with $\alpha>0$, by applying her new characterization of the first eigenvalue. Daners \cite{D06} extended Bossel's method to higher dimensions. An alternative approach via the calculus of variations was found more recently by Bucur and Giacomini \cite{BG10,BG15a}.
For $\alpha < 0$, Bareket \cite{B77} conjectured that the ball would maximize (not minimize) the Robin eigenvalue among domains of given volume. Although Bareket's conjecture turns out to be false for large $\alpha<0$ due to an annular counterexample by Freitas and {Krej\v{c}i\v{r}\'{\i}k}\ \cite{FK15}, those authors did establish the conjecture in $2$ dimensions whenever $\alpha<0$ is small enough, depending only on the volume of the domain. Bareket's conjecture holds also when the domain is close enough to a ball, by Ferone, Nitsch and Trombetti \cite{FNT15}, and holds for all domains in $2$-dimensions if perimeter rather than area of the domain is normalized, by Antunes, Freitas and {Krej\v{c}i\v{r}\'{\i}k}\ \cite[Theorem 2]{AFK17}.
For the class of rectangular boxes, the analogue of the Bossel--Daners theorem was proved by Freitas and Kennedy \cite[Theorem 4.1]{FK18} in $2$ dimensions and in all dimensions by Keady and Wiwatanapataphee \cite{KW18} (whose paper contains several other results for rectangles too). That is, they proved $\lambda_1(\mathcal{B};\alpha)$ is minimal for the cube among all rectangular boxes $\mathcal{B}$ of given volume, when $\alpha>0$. We state this result next as part (i), and prove also in part (ii) a reversed or Bareket-type inequality for all $\alpha<0$.
\begin{theorem}[Extremizing the first Robin eigenvalue on rectangular boxes]\label{lambda1higherdim}\
\noindent (i) If $\alpha>0$ then $\lambda_1(\mathcal{B};\alpha)$ is positive and is minimal for the cube and only the cube, among rectangular boxes $\mathcal{B}$ of given volume.
\noindent (ii) If $\alpha<0$ then $\lambda_1(\mathcal{B};\alpha)$ is negative and is maximal for the cube and only the cube, among rectangular boxes $\mathcal{B}$ of given volume.
\end{theorem}
The ``opposite'' extremal problems are known to have no solution, since when $\alpha>0$ the first eigenvalue $\lambda_1(\mathcal{B};\alpha)$ is unbounded above as the rectangular box degenerates in some direction, and when $\alpha<0$ the eigenvalue is unbounded below; see \eqref{firsttoinfinity} later in the paper, for these facts.
Next we restrict attention to rectangles $\mathcal{R}$ in $2$ dimensions. Scaling the Robin parameter by boundary length is found to yield a different (and, in some cases, better) result of Rayleigh type for the first eigenvalue. To state the result, we write
\[
\text{$A=$ area of rectangle $\mathcal{R}$, \quad $L=$ length of $\partial \mathcal{R}$.}
\]
The quantity
\[
\lambda_1(\mathcal{R};\alpha/L)A
\]
is scale invariant by an easy rescaling calculation, and it is known to be minimal among rectangles for the square when $\alpha=\infty$ (the Dirichlet case), when $\alpha=0$ (the trivial Neumann case), and when $\alpha \to -\infty$ (by substituting into Levitin and Parnovski's asymptotic for piecewise smooth domains \cite[Theorem 4.15]{H17}, \cite[{\S}3]{LP08}). Thus it is natural to suspect the square should be the minimizer for each real value of $\alpha$, including for $\alpha<0$.
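For completeness, the easy rescaling calculation can be recorded (a standard computation, spelled out here rather than quoted from the text): substituting $u(x/t)$ into the Rayleigh quotient gives the dilation rule for the Robin eigenvalues, from which the scale invariance follows.

```latex
% Dilation rule: substituting u(x/t) into the Rayleigh quotient gives
\[
\lambda_k(t\Omega; \beta/t) = t^{-2} \lambda_k(\Omega; \beta) , \qquad t > 0 .
\]
% Since L(tR) = t L(R) and A(tR) = t^2 A(R), taking beta = alpha/L(R) yields
\[
\lambda_1\big(t\mathcal{R};\alpha/L(t\mathcal{R})\big) \, A(t\mathcal{R})
= t^{-2} \lambda_1\big(\mathcal{R};\alpha/L(\mathcal{R})\big) \cdot t^2 A(\mathcal{R})
= \lambda_1\big(\mathcal{R};\alpha/L(\mathcal{R})\big) \, A(\mathcal{R}) .
\]
```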
\begin{theorem}[Minimizing the first Robin eigenvalue on rectangles, with length scaling]\label{lambda1L} If $\alpha \neq 0$ then $\lambda_1(\mathcal{R};\alpha/L)A$ is minimal for the square and only the square, among rectangles $\mathcal{R}$.
\end{theorem}
When $\alpha=0$, every domain has the same first Neumann eigenvalue, namely $\lambda_1=0$.
\autoref{lambda1L} implies the $2$-dimensional case of \autoref{lambda1higherdim} when $\alpha>0$, as follows. Suppose $\mathcal{R}$ and $\mathcal{S}$ are a rectangle and a square having the same area $A$. Since $\lambda_1$ is increasing with respect to the Robin parameter, the rectangular isoperimetric inequality $4\sqrt{A} \leq L$ implies that
\[
\lambda_1(\mathcal{R};\alpha/4\sqrt{A}) \geq \lambda_1(\mathcal{R};\alpha/L) \geq \lambda_1(\mathcal{S};\alpha/L(\mathcal{S})) ,
\]
with the final inequality holding by \autoref{lambda1L}. Since $L(\mathcal{S})=4\sqrt{A}$, we conclude $\lambda_1(\mathcal{R};\alpha/4\sqrt{A}) \geq \lambda_1(\mathcal{S};\alpha/4\sqrt{A})$. Replacing $\alpha$ by $4\sqrt{A} \alpha$, we deduce $\lambda_1(\mathcal{R};\alpha) \geq \lambda_1(\mathcal{S};\alpha)$ for all $\alpha>0$, which is \autoref{lambda1higherdim}(i) in $2$ dimensions. Incidentally, this argument reveals that the scale invariant form of \autoref{lambda1higherdim}(i), if one wants to write it that way, involves minimizing $\lambda_1(\mathcal{R};\alpha/\sqrt{A})A$ in $2$ dimensions and in higher dimensions minimizing $\lambda_1(\mathcal{B};\alpha/V^{1/n})V^{2/n}$, where $V$ is the volume of the box $\mathcal{B}$.
One would like to extend \autoref{lambda1L} to higher dimensions for $\lambda_1(\mathcal{B};\alpha/S^{1/(n-1)})V^{2/n}$, or even better for $\lambda_1(\mathcal{B};\alpha V^{1-2/n}/S)V^{2/n}$, where $S$ is the surface area of the box. I have not succeeded in establishing these generalizations.
For arbitrary convex domains, an improved Rayleigh--Bossel type conjecture is suggested by \autoref{lambda1L}.
\begin{conjx}[Minimizing the first Robin eigenvalue on convex domains, with length scaling] \label{lambda1Lgeneral}
For each $\alpha \in {\mathbb R}$, the scale invariant ratio
\[
\lambda_1(\Omega;\alpha/L(\Omega)) A(\Omega)
\]
is minimal when the convex bounded planar domain $\Omega$ is a disk.
\end{conjx}
The conjecture arose in conversation with P. Freitas, and is stated in our work \cite{FL18b}.
For $\alpha<0$, this conjecture goes in the opposite direction to the upper bound conjectured by Bareket, which does not employ length scaling on the Robin parameter.
For $\alpha>0$, the conjecture would strengthen Bossel's theorem in the class of convex domains, as one sees by replacing $L(\Omega)$ with $\sqrt{A(\Omega)}$ and arguing with rescaling like we did for rectangles above.
\autoref{lambda1Lgeneral} is known to hold for $\alpha=\infty$ (the usual Faber--Krahn inequality), and trivially for $\alpha=0$, and it holds also on smooth domains as $\alpha \to -\infty$ since
\begin{equation} \label{Robinasymptotic}
\lambda_1(\Omega;\alpha/L(\Omega)) A(\Omega) \sim - \frac{A(\Omega)}{L(\Omega)^2} \alpha^2 \qquad \text{as $\alpha \to -\infty$}
\end{equation}
by the Robin asymptotic of Lacey \emph{et al.}\ \cite[Theorem 4.14]{H17}, \cite{LOS98}, noting $A/L^2$ is maximal for the disk by the isoperimetric theorem.
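The quadratic rate behind \eqref{Robinasymptotic} (in its unscaled form $\lambda_1(\Omega;\alpha) \sim -\alpha^2$ as $\alpha \to -\infty$) can already be seen on the interval $(-1,1)$: for $\alpha<0$ the ground state is $\cosh(kx)$, the boundary condition $u'(1)+\alpha u(1)=0$ gives $k \tanh k = -\alpha$, and $\lambda_1 = -k^2$. This is a standard computation, used here only as an illustrative numerical check.

```python
import math

def robin_lambda1_negative(alpha):
    """First Robin eigenvalue of (-1, 1) for alpha < 0 (ground state cosh(kx)).

    The boundary condition u'(1) + alpha*u(1) = 0 gives k*tanh(k) = -alpha,
    and then lambda_1 = -k^2.
    """
    assert alpha < 0
    target = -alpha
    # k*tanh(k) is increasing and k*tanh(k) < k, so the root lies in
    # (0, target + 1): bisect.
    lo, hi = 0.0, target + 1.0
    while hi - lo > 1e-12:
        mid = 0.5 * (lo + hi)
        if mid * math.tanh(mid) > target:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    return -k * k

# The ratio lambda_1 / alpha^2 tends to -1 as alpha -> -infinity.
for a in (-2.0, -10.0, -50.0):
    print(a, robin_lambda1_negative(a) / a**2)
```

Already at $\alpha=-10$ the ratio $\lambda_1/\alpha^2$ agrees with $-1$ to many digits, reflecting how quickly the ground state concentrates at the boundary.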
\autoref{lambda1Lgeneral} can fail for nonconvex domains, as Dorin Bucur pointed out to me on a sunny morning during a conference in Santiago, by using boundary perturbation arguments. For $\alpha>0$ one can drive $\lambda_1(\Omega;\alpha/L(\Omega))$ arbitrarily close to $0$ by imposing a boundary perturbation that greatly increases the perimeter and barely changes the area. For example, one could add to the domain an outward spike of width $\epsilon^2$ and length $1/\epsilon$ and then construct a trial function that equals $1$ on the original domain and vanishes on the spike, except for a transition zone of length $1$; the Rayleigh quotient is then $O(\epsilon)$ as $\epsilon \to 0$. For $\alpha<0$ one can drive $\lambda_1(\Omega;\alpha/L(\Omega))$ arbitrarily close to $-\infty$ by doing the same spike perturbation except taking the complementary trial function, that is, the function that vanishes on the original domain and equals $1$ on the spike except for the transition zone of length $1$; its Rayleigh quotient equals $\alpha/\epsilon+O(1)$ as $\epsilon \to 0$.
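One bookkeeping of the $\alpha>0$ spike estimate, with rough constants chosen here for illustration: the trial function equals $1$ on $\Omega$ and decays linearly to $0$ over the first unit length of the spike of width $\epsilon^2$, so that

```latex
% Spike of width epsilon^2 and length 1/epsilon; u = 1 on Omega, linear
% decay to 0 over the first unit length of the spike:
\[
\int |\nabla u|^2 \, dx \approx \epsilon^2 ,
\qquad
\frac{\alpha}{L} \int_{\partial} u^2 \, dS
\leq \frac{\alpha \, \big( L(\Omega) + 2 \big)}{2/\epsilon} = O(\epsilon) ,
\qquad
\int u^2 \, dx \geq A(\Omega) ,
\]
% since the new perimeter satisfies L >= 2/epsilon, while u vanishes on the
% boundary beyond the transition zone. Hence the Rayleigh quotient is O(epsilon).
```

and the Rayleigh quotient, hence $\lambda_1(\Omega;\alpha/L(\Omega))$, is $O(\epsilon)$ as claimed.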
In the reverse direction to the conjecture, a sharp \emph{upper} bound on the first eigenvalue is known for general domains, when the Robin parameter is scaled by boundary length.
\begin{theorem}[Maximizing the first Robin eigenvalue, with length scaling; see \protect{\cite[Theorem A]{FL18b}}] \label{linearbound}
Fix $\alpha \neq 0$. If $\Omega$ is a bounded, Lipschitz planar domain then
\[
\lambda_1\big( \Omega;\alpha/L(\Omega) \big) A(\Omega) < \alpha
\]
with equality holding in the limit for rectangular domains that degenerate to a line segment (meaning the aspect ratio tends to infinity). More generally, if $\Omega \subset {\mathbb R}^n, n \geq 2$ is a bounded Lipschitz domain then
\[
\lambda_1\big(\Omega;\alpha V(\Omega)^{1-2/n} /S(\Omega)\big) V(\Omega)^{2/n} < \alpha
\]
with equality holding in the limit for degenerate rectangular boxes (which means as $S/V^{(n-1)/n} \to \infty$).
\end{theorem}
In the omitted case $\alpha=0$, all domains have first Neumann eigenvalue $\lambda_1(\Omega;0) = 0$.
Note that \autoref{linearbound} is sharp for fixed $\alpha$, but not sharp for a fixed domain $\Omega$ as $\alpha \to \pm \infty$, since the first Robin eigenvalue approaches a finite number (the Dirichlet eigenvalue) as $\alpha \to \infty$ and approaches $-\infty$ quadratically rather than linearly as $\alpha \to -\infty$, by the asymptotic formula \eqref{Robinasymptotic}.
\subsection*{Second Robin eigenvalue}
The second Dirichlet eigenvalue is minimal for the union of two balls, under a volume constraint; this observation by Krahn \cite{K26} was extended by Kennedy \cite{K09} from the Dirichlet to the Robin case for $\alpha>0$. The survey article \cite[{\S}4.6.1]{H17} gives a clear account of these lower bounds, which are applications of the Faber--Krahn and Bossel--Daners theorems, respectively.
Upper bounds do not exist for $\alpha>0$. The second Dirichlet eigenvalue has no upper bound, since a thin rectangular box of given volume can have arbitrarily large eigenvalue. The same reasoning holds in the Robin case when $\alpha>0$, as was remarked after \autoref{lambda1higherdim}.
For $\alpha=0$, the second Neumann eigenvalue does have an upper bound, being largest for the ball by work of Szeg\H{o} \cite{S54} for simply connected domains in the plane and Weinberger \cite{W56} for domains in all dimensions. This Neumann result was extended recently to the second Robin eigenvalue for a range of $\alpha \leq 0$ by Freitas and Laugesen \cite{FL18a}. Specifically, they proved $\lambda_2(\Omega;\alpha)$ is maximal for the ball $B(R)$ having the same volume as $\Omega$, for each $\alpha \in \big[-(1+1/n)R^{-1},0 \big]$. The result fails when $\alpha<0$ is large in magnitude, by an annular counterexample. Corollaries include Weinberger's result for the Neumann eigenvalue ($\alpha=0$), and Brock's sharp upper bound \cite{B01} on the first nonzero Steklov eigenvalue, which follows from taking $\alpha=-R^{-1}$.
The analogous assertions for rectangular boxes hold for all $\alpha \leq 0$:
\begin{theorem}[Maximizing the second Robin eigenvalue on rectangular boxes]\label{lambda2higherdim}
If $\alpha \leq 0$ then $\lambda_2(\mathcal{B};\alpha)$ is maximal for the cube and only the cube, among rectangular boxes $\mathcal{B}$ of given volume.
\end{theorem}
\begin{corollary}[Maximizing the first nonzero Steklov eigenvalue on rectangular boxes]\label{sigma1higherdim}
The first nonzero Steklov eigenvalue $\sigma_1(\mathcal{B})$ is maximal for the cube and only the cube, among rectangular boxes $\mathcal{B}$ of given volume.
\end{corollary}
The Steklov result in \autoref{sigma1higherdim} was proved directly by Girouard \emph{et al.}\ \cite[Theorem 1.6]{GLPS17}. Indeed, they proved a stronger result, namely that the cube maximizes $\sigma_1$ among rectangular boxes of given surface area.
\subsubsection*{Length-scaled Robin parameter.}
An upper bound on the second Robin eigenvalue with length-scaled Robin parameter was proved by Freitas and Laugesen \cite[Theorem B]{FL18b} for simply connected planar domains, namely that $\lambda_2(\Omega;\alpha/L) A$ is maximal for the disk provided $\alpha \in [-2\pi,2\pi]$. (It is not known to what extent that interval of $\alpha$-values can be enlarged.) Thanks to the isoperimetric inequality this result implies, for simply connected domains with $\alpha \in [-R^{-1},0]$, the inequality from \cite{FL18a} that $\lambda_2(\Omega;\alpha)$ is maximal for the disk of the same area. It also implies Weinstock's result \cite{We54} that the first nonzero Steklov eigenvalue of a simply connected domain is maximal for the disk, under perimeter normalization, as explained in \cite{FL18b}.
It is an open problem to generalize this length-scaled upper bound on the second eigenvalue to higher dimensions. Convexity might provide a reasonable substitute for simply connectedness. Write ${\mathbb B}$ for the unit ball. The next conjecture was raised by Freitas and Laugesen \cite{FL18b}.
\begin{conjx}[Maximizing the second Robin eigenvalue on convex domains, with surface area and volume scaling \protect{\cite[Conjecture 2]{FL18b}}]\label{convexconj}
The ball maximizes the scale invariant quantity $\lambda_2(\Omega;\alpha V^{1-2/n}/S) V^{2/n}$
among all convex bounded domains $\Omega \subset {\mathbb R}^n$, for $\alpha$ such that $-1 \leq \alpha V({\mathbb B})^{1-2/n}/S({\mathbb B}) \leq 0$. Hence the ball also maximizes $\lambda_2(\Omega;\alpha/S^{1/(n-1)}) V^{2/n}$ for a suitable range of $\alpha$, where now the Robin parameter is scaled purely by perimeter.
\end{conjx}
Taking $n=2$ reduces the conjecture back to maximizing $\lambda_2(\Omega;\alpha/L)A$.
For rectangles, we will develop a length-scaled upper bound on the second eigenvalue that is analogous to \autoref{convexconj} and has a sharp interval of $\alpha$-values. Let $\alpha_+ \simeq 33.2054$ and $\alpha_- \simeq -9.3885$ be the numbers defined later by \eqref{alphaplus} and \eqref{alphaneg}.
\begin{theorem}[Maximizing the second Robin eigenvalue on rectangles, with length scaling]\label{lambda2L}
If $\alpha \in [\alpha_-,\alpha_+]$ then $\lambda_2(\mathcal{R};\alpha/L)A$ is maximal for the square and only the square, among rectangles $\mathcal{R}$.
If $\alpha \notin [\alpha_-,\alpha_+]$ then the degenerate rectangle is asymptotically maximal among all rectangles, meaning $\lambda_2(\mathcal{R};\alpha/L)A < \alpha$ with equality in the limit as $\mathcal{R}$ degenerates to an interval.
\end{theorem}
One would like to generalize to boxes in higher dimensions, but I have not succeeded in doing so.
\autoref{lambda2L} implies the $2$-dimensional case of \autoref{lambda2higherdim} with the Robin parameter replaced by $\alpha/4\sqrt{A}$, when $\alpha \in [\alpha_-,0]$: one simply argues like we did earlier for the first eigenvalue after the statement of \autoref{lambda1L}, using that $\lambda_2$ is increasing with respect to the Robin parameter and that $\alpha/4\sqrt{A} \leq \alpha/L$ by the rectangular isoperimetric inequality, when $\alpha \leq 0$.
A corollary of \autoref{convexconj} would be a result already proved by Bucur \emph{et al.}\ \cite[Theorem 3.1]{BFNT17} that among convex domains, the ball maximizes the scale invariant quantities $\sigma_1 S/V^{1-2/n}$ and $\sigma_1 S^{1/(n-1)}$. Here $\sigma_1$ is the first nonzero Steklov eigenvalue; recall the Steklov eigenfunctions are harmonic and satisfy $\partial u/\partial \nu = \sigma u$ on the boundary, with eigenvalues $0=\sigma_0 < \sigma_1 \leq \sigma_2 \leq \dots$.
Analogously, \autoref{lambda2L} has as a corollary that the first nonzero Steklov eigenvalue of a rectangle is maximal for the square, under perimeter normalization.
\begin{corollary}[Maximizing the first nonzero Steklov eigenvalue on rectangles, with length normalization] \label{steklov}
The scale invariant quantity $\sigma_1 L$ is maximal among rectangles for the square and only the square.
\end{corollary}
\autoref{steklov} is not new. It is due to Girouard \emph{et al.}\ \cite[Theorem 1.6]{GLPS17}, who proved the result and its extension to all dimensions, showing that the cube maximizes $\sigma_1$ among rectangular boxes of given surface area. See also Tan \cite{T17} for the $2$-dimensional case of rectangles. What is new is our derivation of the Steklov corollary from a family of Robin results.
\begin{remark}
For simply connected domains we recalled above that $\lambda_2(\Omega;\alpha/L)A$ is maximal for the disk when $-2\pi \leq \alpha \leq 2\pi$. Perhaps at some $\alpha$-value beyond $2\pi$ another domain takes over as maximizer, and so on again and again as $\alpha$ continues to increase toward infinity? In the class of rectangles, at least, such ``domain cascading'' does not occur. Instead, \autoref{lambda2L} establishes a sharp transition between the square and the degenerate rectangle precisely at the Robin parameters $\alpha_-$ and $\alpha_+$.
\end{remark}
\section*{Spectral gap $\lambda_2-\lambda_1$}
The Neumann and Dirichlet spectral gaps are minimal for the line segment among all convex domains in ${\mathbb R}^n$ of given diameter, by work of Payne--Weinberger \cite{PW60} and Andrews--Clutterbuck \cite{AC11}, respectively. For the Robin gap, an analogous conjecture has been stated by Andrews, Clutterbuck and Hauer \cite{ACH18}:
\begin{conjx}[Minimizing the spectral gap on convex domains, under diameter normalization \protect{\cite[Sections 2 and 10]{ACH18}}]\label{robingap} Fix $\alpha > 0$ and the dimension $n \geq 2$. Among convex bounded domains $\Omega \subset {\mathbb R}^n$ of given diameter $D$, the Robin spectral gap is minimal for the degenerate box (line segment) of diameter $D$:
\[
\lambda_2(\Omega;\alpha)-\lambda_1(\Omega;\alpha) > \lambda_2((0,D);\alpha)-\lambda_1((0,D);\alpha) .
\]
\end{conjx}
A partial result \cite[Theorem 2.1]{ACH18} says that the inequality holds with $\alpha$ on the right side replaced by $0$, that is, with the right side replaced by the Neumann gap $\pi^2/D^2$; and even this result assumes the Robin ground state on $\Omega$ is log-concave, which is known to fail for some convex domains \cite[Theorem 1.2]{ACH18}.
For rectangular boxes, we can prove the Robin gap conjecture for all $\alpha \in {\mathbb R}$.
\begin{theorem}[Minimizing the spectral gap on rectangular boxes, under diameter normalization]\label{gapD} Fix $\alpha \in {\mathbb R}$ and the dimension $n \geq 2$. Among rectangular boxes $\mathcal{B}$ of given diameter $D$, the Robin spectral gap is minimal for the degenerate box (line segment) of diameter $D$:
\[
\lambda_2(\mathcal{B};\alpha)-\lambda_1(\mathcal{B};\alpha) > \lambda_2((0,D);\alpha)-\lambda_1((0,D);\alpha) .
\]
\end{theorem}
Maximizing the Robin gap is generally not possible among convex domains of given diameter, perimeter, or area, since the Dirichlet spectral gap can be arbitrarily large by an observation of Smits \cite[Theorem 5 and discussion]{Sm96}. He worked with a degenerating family of sectors. A degenerating family of acute isosceles triangles would presumably behave the same way.
Among rectangular boxes, though, the spectral gap is not only bounded above, it is maximal at the cube for each value of the Robin parameter.
\begin{theorem}[Maximizing the spectral gap on rectangular boxes]\label{gapSV} Fix $\alpha \in {\mathbb R}$ and the dimension $n \geq 2$. Among rectangular boxes $\mathcal{B}$ of given diameter (or given surface area, or given volume), the Robin spectral gap $(\lambda_2-\lambda_1)(\mathcal{B};\alpha)$ is maximal for the cube and only the cube.
\end{theorem}
\section*{Spectral ratio $\lambda_2/\lambda_1$}
The gap maximization in \autoref{gapSV} allows us to maximize also the ratio of the first two eigenvalues. We take the absolute value of the first eigenvalue, in the next result, in order to unify the cases of positive and negative $\alpha$.
\begin{corollary}[Maximizing the spectral ratio on rectangular boxes]\label{ratio} Fix $\alpha \neq 0$ and the dimension $n \geq 2$. Among rectangular boxes $\mathcal{B}$ of given volume, the Robin spectral ratio
\[
\frac{\lambda_2(\mathcal{B};\alpha)}{|\lambda_1(\mathcal{B};\alpha)|}
\]
is maximal for the cube and only the cube.
\end{corollary}
\begin{corollary}[Maximizing the spectral ratio on rectangles, with length scaling]\label{ratio2dim} Fix $\alpha > 0$. The length-scaled Robin spectral ratio
\[
\frac{\lambda_2(\mathcal{R};\alpha/L)}{|\lambda_1(\mathcal{R};\alpha/L)|}
\]
is maximal for the square and only the square, among rectangles $\mathcal{R}$.
\end{corollary}
The absolute value on $\lambda_1$ is superfluous in the statement of \autoref{ratio2dim}, since the first eigenvalue is positive when $\alpha>0$. We retain the absolute value anyway because the corollary ought to hold also when $\alpha<0$ --- although I have not found a proof.
If \autoref{gapmonotpure} on monotonicity of the spectral ratio were known to be true, then \autoref{ratio2dim} would imply the planar case of \autoref{ratio}, for $\alpha>0$. That short argument is left to the reader.
The spectral ratio has a long history. Payne, P\'{o}lya and Weinberger \cite{PPW55} proved in the Dirichlet case ($\alpha=\infty$) that $\lambda_2/\lambda_1 \leq 3$ for planar domains. Payne and Schaefer \cite[\S3]{PS01} extended that result to hold on an interval of $\alpha$-values near $\infty$. The Payne--P\'{o}lya--Weinberger (PPW) conjecture asserted a sharp upper bound: that the Dirichlet ratio $\lambda_2/\lambda_1$ should be maximal for the disk. This conjecture and its analogue in $n$ dimensions were proved by Ashbaugh and Benguria \cite{AB91}. The analogous Robin question has been raised by Henrot \cite[p.~458]{H03}: to find the range of $\alpha$ values for which the ball maximizes the Robin spectral ratio. Some inequalities on that ratio have been proved by Dai and Shi \cite{DS14}.
In view of these ratio results, an analogue of \autoref{ratio} seems plausible for general domains.
\begin{conjx}[Maximizing the spectral ratio]\label{ratioconj} Fix the dimension $n \geq 2$. Among bounded Lipschitz domains $\Omega$ of given volume, the Robin spectral ratio
\[
\frac{\lambda_2(\Omega;\alpha)}{|\lambda_1(\Omega;\alpha)|}
\]
is maximal for the ball $B(R)$ having the same volume as $\Omega$, when $\alpha \geq -1/R, \alpha \neq 0$.
\end{conjx}
\autoref{ratioconj} holds for sufficiently small $\alpha<0$ on $C^2$-smooth planar domains of given area, because in that situation Freitas and {Krej\v{c}i\v{r}\'{\i}k}\ \cite[Theorem 2]{FK15} showed $|\lambda_1(\Omega;\alpha)|$ is minimal for the disk (the Bareket conjecture), while $\lambda_2(\Omega;\alpha)$ is maximal for the disk by a result of Freitas and Laugesen \cite[Theorem A]{FL18a}.
The conjecture holds also at $\alpha=-1/R$ in all dimensions, since in that case the spectral ratio is $\leq 0$ by \cite[Theorem A]{FL18a}, with equality for the ball.
The limiting case $\alpha \to 0$ of \autoref{ratioconj} follows from the isoperimetric theorem and the Szeg\H{o}--Weinberger theorem \cite{W56} for the first nonzero Neumann eigenvalue, as we now explain. For $\alpha \simeq 0$ one has $\lambda_1(\Omega;\alpha) \simeq \alpha S/V$ where $S$ is the surface area of $\partial \Omega$ and $V$ is the volume of $\Omega$ (see \cite[p.\,89]{H17}). Also $\lambda_2(\Omega;\alpha) \simeq \lambda_2(\Omega;0)=\nu_1(\Omega)$, the first nonzero Neumann eigenvalue. Thus \autoref{ratioconj} says in the limit $\alpha \to 0$ that $\nu_1(\Omega)/S$ is maximal for the ball. The isoperimetric theorem guarantees $S$ is minimal for the ball, and the Szeg\H{o}--Weinberger theorem gives maximality of $\nu_1$ for the ball, among domains of given volume. Hence this limiting case of the conjecture holds true.
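For the reader's convenience, here is a sketch of why $\lambda_1(\Omega;\alpha) \simeq \alpha S/V$ for small $\alpha$. Inserting the trial function $u \equiv 1$ into the standard Robin Rayleigh quotient
\[
\lambda_1(\Omega;\alpha) = \min_{0 \neq u \in H^1(\Omega)} \frac{\int_\Omega |\nabla u|^2 \, dx + \alpha \int_{\partial \Omega} u^2 \, dS}{\int_\Omega u^2 \, dx}
\]
yields the upper bound $\lambda_1(\Omega;\alpha) \leq \alpha S/V$, and first order perturbation theory around the constant Neumann ground state shows this bound is an equality to first order in $\alpha$.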
For $\alpha < -1/R$, I am not sure what domain might extremize the spectral ratio. Any extremal conjecture would need to be consistent with the spectral asymptotics as $\alpha \to -\infty$. For the ball or any other smooth domain, $\lambda_1$ and $\lambda_2$ are known to behave like $-\alpha^2$ to leading order (by Lacey \emph{et al.}\ \cite{LOS98} for the first eigenvalue and Daners and Kennedy \cite{DK10} for all eigenvalues; see \cite[{\S}4]{H17} for more literature). Thus $\lambda_2/|\lambda_1| \to -1$ as $\alpha \to -\infty$. On the other hand, the asymptotics for polygonal domains by Khalile \cite[Corollary 1.3, Theorem 3.6]{Kh18} imply that certain convex polygons have spectral ratio $\lambda_2/|\lambda_1|$ converging to a constant greater than $-1$ as $\alpha \to -\infty$.
One lesson here is that rectangles provide an unreliable guide to the behavior of general domains, for large negative $\alpha$. One should in that range consider at least polygons whose angles are not all the same.
We finish this subsection by conjecturing an analogue of \autoref{ratio2dim}.
\begin{conjx}[Maximizing the spectral ratio on convex domains, with length scaling]\label{ratioconjlength} Among convex bounded planar domains $\Omega$, the length-scaled Robin spectral ratio
\[
\frac{\lambda_2(\Omega;\alpha/L)}{|\lambda_1(\Omega;\alpha/L)|}
\]
is maximal for the disk, for each $\alpha \geq -2\pi$.
\end{conjx}
The conjecture holds when $\alpha=-2\pi$, because then the second eigenvalue is $\leq 0$ with equality for the disk, by \cite[Theorem B]{FL18b} (which applies to all simply connected planar domains, not just convex ones). Further, the second eigenvalue of the disk is positive when $\alpha>-2\pi$.
The limiting case $\alpha \to 0$ of \autoref{ratioconjlength} reduces to the Szeg\H{o}--Weinberger theorem, since $\lambda_1(\Omega;\alpha/L) \simeq \alpha/A$ and $\lambda_2(\Omega;\alpha/L) \simeq \nu_1(\Omega)$.
The limiting case $\alpha \to \infty$ of the conjecture would recover the convex planar case of Ashbaugh and Benguria's sharp PPW inequality.
\section*{Hearing the shape of a Robin rectangle} Dirichlet and Neumann drums cannot always be ``heard'', as Gordon, Webb and Wolpert \cite{GWW92} famously showed. The inverse spectral problem for Robin drums is apparently an open problem. Arendt, ter Elst and Kennedy \cite{AEK14} have written that ``it may well be the case that one can hear the shape of a drum after all, if one loosens the membrane before striking it''.
Hearing the shape of a \emph{rectangular} drum with Robin boundary conditions is a solvable special case, and requires merely the first two frequencies:
\begin{theorem}[Hearing a rectangular Robin drum]\label{hearingdrum}
If $\alpha \neq 0$ then each rectangle $\mathcal{R}$ is determined up to congruence by its first two eigenvalues, $\lambda_1(\mathcal{R};\alpha)$ and $\lambda_2(\mathcal{R};\alpha)$.
\end{theorem}
The theorem is spectacularly false in the Neumann case ($\alpha = 0$), where no pre-specified number of eigenvalues can be guaranteed to determine the rectangle. For example, every rectangle of width $m$ and height less than $1$ has the same first $m+1$ Neumann eigenvalues, namely $(j\pi/m)^2$ for $j=0,1,\dots,m$.
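A quick check of this example, with $m$ a positive integer: the Neumann eigenvalues of the rectangle $(0,m) \times (0,h)$ are
\[
\Big( \frac{j\pi}{m} \Big)^{\!2} + \Big( \frac{k\pi}{h} \Big)^{\!2} , \qquad j,k = 0,1,2,\dots .
\]
If $h<1$ then every eigenvalue with $k \geq 1$ exceeds $\pi^2$, as does every eigenvalue with $k=0$ and $j>m$, while the eigenvalues with $k=0$ and $j=0,\dots,m$ are at most $\pi^2$. Hence the first $m+1$ eigenvalues equal $(j\pi/m)^2$ for $j=0,\dots,m$, regardless of the height $h$.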
Incidentally, a Steklov inverse spectral problem was resolved recently for rectangular boxes in all dimensions by Girouard \emph{et al.}\ \cite[Corollary 1.8]{GLPS17}, who observed that the full spectrum determines the perimeter, and then the perimeter and the first eigenvalue $\sigma_1$ together determine the rectangle.
\section*{Polygonal open problems}
\label{openproblems}
For each theorem where the square is the optimizer among rectangles, it seems reasonable to conjecture that the square is in fact optimal among all (convex) quadrilaterals. The exception is \autoref{gapSV}, where the spectral gap is unbounded above in general; see the discussion before that theorem.
The equilateral triangle should presumably be optimal among triangles, although sometimes triangles are so ``pointy'' that they behave differently from general domains. More generally, the regular $N$-gon might be optimal among (convex) $N$-gons, although such problems seem currently out of reach --- for example, the polygonal Rayleigh conjecture about minimizing the first Dirichlet eigenvalue remains open even for pentagons.
The inverse spectral problem for triangles is particularly fascinating. A triangle is known to be determined by its full Dirichlet spectrum, via the wave trace method of Durso \cite{D90}. Later, Grieser and Maronna \cite{GM13} found a delightful, different proof using the heat trace and the sum of reciprocal angles of the triangle. These results are wildly overdetermined, though, since they employ infinitely many eigenvalues in pursuit of the three side lengths of the triangle. For that reason, Laugesen and Siudeja \cite[p.\,17]{LS09} suspected that the first three Dirichlet eigenvalues should suffice to determine a triangle. Antunes and Freitas \cite{AF11} developed convincing numerical evidence in favor of that conjecture, although a proof remains elusive. Similar results should presumably hold for the Robin problem when $\alpha \neq 0$. (The Neumann case $\alpha=0$ is less clear \cite[Section 3c]{AF11}.) No investigations appear yet to have been carried out on determining a triangle from its first three Robin eigenvalues.
\section{\bf Monotonicity and convexity lemmas}
\label{notation}
This self-contained section establishes the underpinnings of the rest of the paper. The section can be skipped for now, and revisited later as needed.
The four basic functions needed to determine the first and second Robin eigenvalues of intervals, and hence of rectangular boxes, are:
\begin{align*}
g_1(x) = x \tan x , \qquad & g_1 : (0,\pi/2) \to (0,\infty) , \\
g_2(x) = -x \cot x , \qquad & g_2 : (0,\pi) \to (-1,\infty) , \\
h_1(x) = x \tanh x , \qquad & h_1 : (0,\infty) \to (0,\infty) , \\
h_2(x) = x \coth x , \qquad & h_2 : (0,\infty) \to (1,\infty) .
\end{align*}
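To indicate how these four functions arise, consider the Robin problem on a centered interval $(-\ell/2,\ell/2)$; the following is the standard separation-of-variables computation, sketched here for orientation (normalizations elsewhere in the paper may differ by scaling). For $\alpha>0$, the even eigenfunction $\cos(tx)$ satisfies the Robin condition $u^\prime(\ell/2)+\alpha u(\ell/2)=0$ exactly when $t \tan(t\ell/2) = \alpha$, so that writing $x=t\ell/2$ gives
\[
\lambda_1\big((0,\ell);\alpha\big) = \Big( \frac{2}{\ell} \, g_1^{-1}(\alpha \ell/2) \Big)^{\!2} ,
\]
while the odd eigenfunction $\sin(tx)$ leads to $-t\cot(t\ell/2)=\alpha$ and hence
\[
\lambda_2\big((0,\ell);\alpha\big) = \Big( \frac{2}{\ell} \, g_2^{-1}(\alpha \ell/2) \Big)^{\!2} .
\]
For $\alpha<0$ the ground state is $\cosh(tx)$ with $t\tanh(t\ell/2)=|\alpha|$, so that $\lambda_1 = -\big( \tfrac{2}{\ell} h_1^{-1}(|\alpha|\ell/2) \big)^2$, and when $|\alpha|\ell/2>1$ the second eigenfunction is $\sinh(tx)$ with $t\coth(t\ell/2)=|\alpha|$, giving $\lambda_2 = -\big( \tfrac{2}{\ell} h_2^{-1}(|\alpha|\ell/2) \big)^2$.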
These functions have positive first derivatives and so are strictly increasing, as shown in \autoref{g1g2h1h2}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{h1h2.pdf}
\hspace{1cm}
\includegraphics[scale=0.4]{g1g2.pdf}
\end{center}
\caption{\label{g1g2h1h2} The functions $h_1(x)=x \tanh x$ and $h_2(x)=x \coth x$, and $g_1(x)=x \tan x$ and $g_2(x)=-x \cot x$.}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{HH1HH2.pdf}
\hspace{1cm}
\includegraphics[scale=0.4]{GG1GG2.pdf}
\end{center}
\caption{\label{G1G2H1H2} The functions $H_1(y)=h_1^{-1}(y)/y$ and $H_2(y)=h_2^{-1}(y)/y$, and $G_1(y)=g_1^{-1}(y)/y$ and $G_2(y)=g_2^{-1}(y)/y$.}
\end{figure}
Define four more functions, shown in \autoref{G1G2H1H2}, by
\[
G_1(y) = \frac{g_1^{-1}(y)}{y} , \quad G_2(y) = \frac{g_2^{-1}(y)}{y} , \quad H_1(y) = \frac{h_1^{-1}(y)}{y} , \quad H_2(y) = \frac{h_2^{-1}(y)}{y} ,
\]
for $y$-values in the ranges of $g_1,g_2,h_1,h_2$ respectively. Three of them are strictly decreasing, while $H_2$ is strictly increasing, as we now justify.
\begin{lemma}[Monotonicity]\label{monotonicity}\
\begin{itemize}
\item[(i)] $G_1^\prime(y)<0$ for all $y>0$.
\item[(ii)] $G_2^\prime(y)<0$ for all $y> -1, y \neq 0$.
\item[(iii)] $H_1^\prime(y)<0$ for all $y>0$.
\item[(iv)] $H_2^\prime(y)>0$ for all $y>1$.
\end{itemize}
\end{lemma}
\begin{proof}
Given any strictly increasing function $h$ with $h^\prime>0$, we may write $x=h^{-1}(y)$ and differentiate the function $H(y)=h^{-1}(y)/y=x/h(x)$ to obtain
\begin{equation} \label{Hprime}
H^\prime(y) = \frac{1}{h^\prime(x)} \left( \frac{x}{h(x)} \right)^{\! \prime} ,
\end{equation}
where the derivative on the left is taken with respect to $y$ and on the right with respect to $x$. Applying this derivative formula to the four functions in the lemma gives the following observations.
$(x/g_1(x))^\prime = (\cot x)^\prime<0$, so that $G_1^\prime < 0$.
$(x/g_2(x))^\prime = -(\tan x)^\prime<0$, so that $G_2^\prime < 0$. (Here $x \neq \pi/2$, since $y \neq 0$.)
$(x/h_1(x))^\prime = (\coth x)^\prime<0$, so that $H_1^\prime < 0$.
$(x/h_2(x))^\prime = (\tanh x)^\prime>0$, so that $H_2^\prime > 0$.
\end{proof}
We proceed to develop concavity and derivative properties of the eight functions.
\begin{lemma}[Concavity of the inverse squared]\label{concaveinverse}\
\begin{itemize}
\item[(i)] $(g_1^{-1}(y)^2)^{\prime \prime}<0$ for all $y>0$.
\item[(ii)] $(g_2^{-1}(y)^2)^{\prime \prime}<0$ for all $y> -1$.
\item[(iii)] $(h_1^{-1}(y)^2)^{\prime \prime}>0$ for all $y>0$.
\item[(iv)] $(h_2^{-1}(y)^2)^{\prime \prime}>0$ for all $y>1$.
\end{itemize}
\end{lemma}
\begin{proof}
Writing $y=h(x)$, we find
\[
\frac{1}{2} \frac{d^2\ }{dy^2} \left( h^{-1}(y)^2 \right) = \frac{1}{2} \frac{d^2(x^2)}{dy^2} = \frac{h^\prime(x) - x h^{\prime \prime}(x)}{h^\prime(x)^3} .
\]
Replacing $h$ on the right side with $g_1,g_2,h_1,h_2$ respectively gives the following expressions, where for $g_2$ we split the interval in two pieces:
\begin{align}
\text{(i)} \hspace{2cm} & -\frac{4\cos^4 x}{(2x+\sin 2x)^3} (2x - \sin 2x + 4 x^2 \tan x) < 0 , \qquad x \in (0,\pi/2) , \label{inverse-i} \\
\text{(ii)} \hspace{2cm} & -\frac{4\sin^3 x \cos x}{(2x-\sin 2x)^3} (2x \tan x + 2\sin^2 x - 4 x^2) < 0 , \qquad x \in (0,\pi/2) , \label{inverse-iia} \\
\text{(ii)} \hspace{2cm} & -\frac{4\sin^4 x}{(2x-\sin 2x)^3} (2x + \sin 2x - 4 x^2 \cot x) < 0 , \qquad x \in [\pi/2,\pi) , \label{inverse-iib} \\
\text{(iii)} \hspace{2cm} & \frac{4\cosh^4 x}{(\sinh 2x + 2x)^3} (\sinh 2x - 2x + 4 x^2 \tanh x) > 0 , \qquad x \in (0,\infty) , \label{inverse-iii} \\
\text{(iv)} \hspace{2cm} & \frac{4\sinh^3 x \cosh x}{(\sinh 2x - 2x)^3} (2\sinh^2 x + 2x \tanh x - 4 x^2) > 0 , \qquad x \in (0,\infty) , \label{inverse-iv}
\end{align}
noting that the inequalities \eqref{inverse-i}, \eqref{inverse-iib} and \eqref{inverse-iii} use only that $\pm \sin 2x < 2x < \sinh 2x$, while inequalities \eqref{inverse-iia} and \eqref{inverse-iv} are proved as follows.
To show negativity in \eqref{inverse-iia}, observe that $2x \tan x + 2\sin^2 x - 4 x^2$ is positive for all $x \in (0,\pi/2)$ because the function and its first derivative both vanish at $x=0$ while its second derivative is positive on the whole interval:
\[
(2x \tan x + 2\sin^2 x - 4 x^2)^{\prime \prime} = \tan x \sec^2 x (4x-\sin 4x) > 0 .
\]
The argument for \eqref{inverse-iv} is analogous: $2\sinh^2 x + 2x \tanh x - 4 x^2$ likewise vanishes together with its first derivative at $x=0$ and has positive second derivative everywhere, since
\[
(2\sinh^2 x + 2x \tanh x - 4 x^2)^{\prime \prime} = \tanh x \operatorname{sech}^2 x (\sinh 4x - 4x) > 0 .
\]
\end{proof}
\subsection*{Definition of $\alpha_\pm$}
As needed for the statement of \autoref{lambda2L}, let $\alpha_+ \simeq 33.2054$ be the root of
\begin{equation}
g_1^{-1}(\alpha/8)^2 + g_2^{-1}(\alpha/8)^2 = \alpha/4 , \label{alphaplus}
\end{equation}
and $\alpha_- \simeq -9.3885$ be the root of
\begin{equation}
h_1^{-1}(|\alpha|/8)^2 + h_2^{-1}(|\alpha|/8)^2 = |\alpha|/4 . \label{alphaneg}
\end{equation}
That these roots exist and are unique can be seen as follows. At $\alpha=0$, the left side of \eqref{alphaplus} is $0^2+(\pi/2)^2$ while the right side is $0$, and so the left side is larger. As $\alpha \to \infty$ the left side of \eqref{alphaplus} approaches $(\pi/2)^2+\pi^2$ while the right side approaches $\infty$, and so the right side is larger. Hence \eqref{alphaplus} has a root at some $\alpha_+>0$, and the root is unique because \autoref{concaveinverse} implies that $(g_1^{-1})^2+(g_2^{-1})^2$ is strictly concave. Similarly, at $\alpha=-8$, the left side of \eqref{alphaneg} is $h_1^{-1}(1)^2+0^2$ while the right side is $2$, and so the left side is smaller (one computes that $h_1(\sqrt{2})>1$ and so $h_1^{-1}(1)<\sqrt{2}$). As $\alpha \to -\infty$ the left side of \eqref{alphaneg} grows quadratically with $\alpha$, while the right side grows only linearly, and so the left side is larger. Hence \eqref{alphaneg} has a root for some $\alpha_-<-8$, and the root is unique because \autoref{concaveinverse} implies that $(h_1^{-1})^2+(h_2^{-1})^2$ is strictly convex. The roots $\alpha_+$ and $\alpha_-$ can then be approximated numerically, to find the values stated above. \qed
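Incidentally, the quadratic growth of the left side of \eqref{alphaneg} is elementary: since $\tanh x < 1$ gives $h_1(x) < x$ and hence $h_1^{-1}(y) > y$, the left side satisfies
\[
h_1^{-1}(|\alpha|/8)^2 + h_2^{-1}(|\alpha|/8)^2 > \frac{\alpha^2}{64} ,
\]
which indeed outgrows the linear right side $|\alpha|/4$ as $\alpha \to -\infty$.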
\begin{lemma}[Bounds on the inverse]\label{inversebounds}
For all $y>0$,
\[
g_1^{-1}(y)^2 > y - y^2 \qquad \text{and} \qquad h_1^{-1}(y)^2 < y + y^2 .
\]
\end{lemma}
\begin{proof}
Putting $y=g_1(x)=x \tan x$ into the first inequality shows it is equivalent to $x(1+\tan^2 x) > \tan x$, which reduces to $x > \sin x \cos x$. For the second inequality, putting $y=h_1(x)=x \tanh x$ reduces it to $x(1-\tanh^2 x) < \tanh x$ and hence to $x < \sinh x \cosh x$.
\end{proof}
\begin{lemma}[Asymptotics of the inverse]\label{inverseasymptotics} As $y \to \infty$,
\[
h_1^{-1}(y) = y \left( 1 + O(e^{-2y}) \right) \qquad \text{and} \qquad h_2^{-1}(y) = y \left( 1 + O(e^{-2y}) \right) .
\]
\end{lemma}
\begin{proof}
If $y=h_1(x)=x \tanh x \leq x$ then $x=y \coth x = y \left( 1 + O(e^{-2x}) \right) = y \left( 1 + O(e^{-2y}) \right)$.
If $y=h_2(x)=x \coth x = x \left( 1 + O(e^{-2x}) \right)$ then $y-x \to 0$ as $x \to \infty$, and so $x=y \left( 1 + O(e^{-2y}) \right)$.
\end{proof}
\begin{lemma}[Derivative comparison]\label{derivcomparison}
For all $y>0$, we have
\begin{equation} \label{Hderiv}
G_1^\prime(y) > G_2^\prime(y)
\end{equation}
and
\begin{equation} \label{inversederiv}
\frac{d\ }{dy} \big( g_2^{-1}(y)^2 - g_1^{-1}(y)^2 \big) > 0 .
\end{equation}
\end{lemma}
\begin{proof}
By the derivative formula \eqref{Hprime}, the conclusion $G_1^\prime(y) > G_2^\prime(y)$ is equivalent to
\[
\left. \frac{1}{g_1^\prime(x)} \left( \frac{x}{g_1(x)} \right)^{\! \prime} \right|_{x_1=g_1^{-1}(y)}
>
\left. \frac{1}{g_2^\prime(x)} \left( \frac{x}{g_2(x)} \right)^{\! \prime} \right|_{x_2=g_2^{-1}(y)} ,
\]
which evaluates to
\[
\frac{2 \cot^2 x_1}{2x_1 + \sin 2x_1} < \frac{2 \tan^2 x_2}{2x_2 - \sin 2x_2} .
\]
Multiply on the left by $y^2 = g_1(x_1)^2 = x_1^2 \tan^2 x_1$ and on the right by $y^2 = g_2(x_2)^2 = x_2^2 \cot^2 x_2$, and hence obtain that the desired inequality is equivalent to
\[
\frac{2 x_1^2}{2x_1 + \sin 2x_1} < \frac{2 x_2^2}{2x_2 - \sin 2x_2} ,
\]
or
\[
\frac{2x_1}{1 + \operatorname{sinc} 2x_1} < \frac{2x_2}{1 - \operatorname{sinc} 2x_2} .
\]
Recall that $2x_1 \in (0,\pi)$ and $2x_2 \in (\pi,2\pi)$, so that $\operatorname{sinc} 2x_1$ is positive and $\operatorname{sinc} 2x_2$ is negative. Thus it suffices to show that the function $x/(1+|\operatorname{sinc} x|)$ is strictly increasing when $x \in (0,2\pi)$. This last fact is easily verified, since
\[
\left( \frac{x}{1+\operatorname{sinc} x} \right)^{\! \prime} = \frac{x(1-\cos x)+ 2\sin x}{x(1+\operatorname{sinc} x)^2} > 0
\]
for $x \in (0,\pi)$, and one can argue similarly when $x \in (\pi,2\pi)$.
Next, formula \eqref{inversederiv} says
\[
g_2^{-1}(y) (g_2^{-1})^\prime(y) > g_1^{-1}(y) (g_1^{-1})^\prime(y) .
\]
Again let $x_1=g_1^{-1}(y) \in (0,\pi/2)$ and $x_2=g_2^{-1}(y) \in (\pi/2,\pi)$. Since $g_1^\prime>0$ and $g_2^\prime>0$, the preceding inequality is equivalent to
\[
\frac{g_1^\prime(x_1)}{x_1} > \frac{g_2^\prime(x_2)}{x_2} .
\]
Substituting the definitions of $g_1$ and $g_2$ and using the identities $\sec^2=\tan^2+1$ and $\csc^2=\cot^2+1$ now reduces the inequality to
\[
\frac{y+y^2}{x_1^2} + 1 > \frac{y+y^2}{x_2^2} + 1,
\]
which certainly holds true since $x_1<\pi/2<x_2$.
\end{proof}
\begin{lemma}[More derivative comparison]\label{derivcomparisonneg}\
(i)
\begin{equation} \label{inversederivneg1}
\frac{d\ }{dy} \big( g_2^{-1}(y)^2 + h_1^{-1}(-y)^2 \big) > 0 , \qquad -1<y<0 .
\end{equation}
(ii)
\begin{equation} \label{inversederivneg2}
\frac{d\ }{dy} \big( -h_2^{-1}(y)^2 + h_1^{-1}(y)^2 \big) < 0 , \qquad 1<y<\infty .
\end{equation}
\end{lemma}
\begin{proof}
(i) Let $x_2=g_2^{-1}(y)$ and $x_1=h_1^{-1}(-y)$.
Formula \eqref{inversederivneg1} holds if and only if
\[
g_2^{-1}(y) (g_2^{-1})^\prime(y) > h_1^{-1}(-y) (h_1^{-1})^\prime(-y) .
\]
Since $g_2^\prime>0$ and $h_1^\prime>0$, the inequality is equivalent to
\[
\frac{h_1^\prime(x_1)}{x_1} > \frac{g_2^\prime(x_2)}{x_2} .
\]
Substituting the definitions of $h_1$ and $g_2$ and using the identities $\operatorname{sech}^2=1-\tanh^2$ and $\csc^2=\cot^2+1$, and recalling $y=-h_1(x_1)=g_2(x_2)$, the inequality simplifies to
\[
- \frac{y+y^2}{x_1^2} + 1 > \frac{y+y^2}{x_2^2} + 1,
\]
which is true since $-1<y<0$ implies $y+y^2<0$.
(ii) Let $x_1=h_1^{-1}(y)$ and $x_2=h_2^{-1}(y)$.
Formula \eqref{inversederivneg2} holds if and only if
\[
h_2^{-1}(y) (h_2^{-1})^\prime(y) > h_1^{-1}(y) (h_1^{-1})^\prime(y) .
\]
Since $h_1^\prime>0$ and $h_2^\prime>0$, the inequality is equivalent to
\[
\frac{h_1^\prime(x_1)}{x_1} > \frac{h_2^\prime(x_2)}{x_2} .
\]
Substituting the definitions of $h_1$ and $h_2$ and using the identities $\operatorname{sech}^2=1-\tanh^2$ and $\operatorname{csch}^2=\coth^2-1$, the inequality simplifies to
\[
\frac{y-y^2}{x_1^2} + 1 > \frac{y-y^2}{x_2^2} + 1 .
\]
Note $1<y<\infty$ implies $y-y^2<0$, and so the task is to show $x_2 < x_1$, or in other words
\begin{equation}\label{h1h2}
h_2^{-1}(y) < h_1^{-1}(y) , \qquad y > 1 .
\end{equation}
This inequality holds since $h_1$ and $h_2$ are increasing and $h_1(x)<h_2(x)$ for all $x$.
\end{proof}
\begin{lemma}[More monotonicity]\label{specific}
(i) The function
\[
f_1(x) = \frac{1}{h_1(x)} - \frac{x h_1^\prime(x)}{2h_1(x)^2}
\]
is strictly decreasing, with $f_1(x) \to 1/3$ as $x \to 0+$ and $f_1(x) \to 0$ as $x \to \infty$. Further, $h_1(f_1^{-1}(w))<1/(2w)$ for all $w \in (0,1/3)$.
(ii) Similarly
\[
f_2(x) = \frac{1}{h_2(x)} - \frac{x h_2^\prime(x)}{2h_2(x)^2}
\]
is strictly decreasing, with $f_2(x) \to 1$ as $x \to 0+$ and $f_2(x) \to 0$ as $x \to \infty$. Further, $f_1(x)<f_2(x)$ for all $x>0$, and $h_2(f_2^{-1}(w))<1/w$ for all $w \in (0,1)$, and $h_1(f_1^{-1}(w))+h_2(f_2^{-1}(w))<1/w$ for all $w \in (0,1/3)$.
\end{lemma}
\begin{proof}
(i) Direct calculation with $h_1(x)=x \tanh x$ gives that
\[
f_1(x) = \frac{\coth x - x \operatorname{csch}^2 x}{2x}
\]
and so $f_1(0+)=1/3$ by elementary series expansions, with $\lim_{x \to \infty} f_1(x)=0$. One computes
\[
f_1^\prime(x) = \frac{4x^2 \coth x - \sinh 2x - 2x}{4x^2 \sinh^2 x} .
\]
We will show the numerator is negative, so that $f_1$ is strictly decreasing.
Notice $4x^2 \coth x - \sinh 2x - 2x = 0$ at $x=0$. Thus it suffices to show the first derivative is negative for all $x>0$, which is clear because
\[
(4x^2 \coth x - \sinh 2x - 2x)^\prime
= -4 (\cosh x - x \operatorname{csch} x)^2 < 0 .
\]
For the second claim in part (i), rearrange the definition of $f_1$ to get that
\begin{equation} \label{h1f1}
h_1(x) = \frac{1}{f_1(x)} \left( 1 - \frac{x h_1^\prime(x)}{2h_1(x)} \right) = \frac{1}{2f_1(x)} \left( 1 - \frac{1}{\operatorname{sinch} 2x} \right)
\end{equation}
by substituting $h_1(x)=x \tanh x$. Hence $h_1(x) < 1/\big(2f_1(x)\big)$, and so $h_1(f_1^{-1}(w)) < 1/(2w)$ for all $w$ in the range of $f_1$, which is $(0,1/3)$.
(ii) Substituting $h_2(x)=x \coth x$ into the definition of $f_2$ gives that
\[
f_2(x) = \frac{\tanh x + x \operatorname{sech}^2 x}{2x} ,
\]
from which one evaluates the limit as $f_2(0+)=1$, and obviously $\lim_{x \to \infty} f_2(x)=0$. Differentiating, we find
\[
f_2^\prime(x) = - \frac{4x^2 \tanh x + \sinh 2x - 2x}{4x^2 \cosh^2 x} < 0 ,
\]
so that $f_2$ is strictly decreasing.
Further, the inequality $f_1(x)<f_2(x)$ holds when $x>0$ because it is equivalent to $\tanh 2x < 2x$, by manipulating the formulas above for $f_1(x)$ and $f_2(x)$.
For the final claims in part (ii) of the lemma, rearrange the definition of $f_2$ so as to express $h_2$ in terms of $f_2$:
\begin{equation} \label{h2f2}
h_2(x) = \frac{1}{f_2(x)} \left( 1 - \frac{x h_2^\prime(x)}{2h_2(x)} \right) = \frac{1}{2f_2(x)} \left( 1 + \frac{1}{\operatorname{sinch} 2x} \right)
\end{equation}
by substituting $h_2(x)=x \coth x$. The middle part of formula \eqref{h2f2} implies $h_2(x) < 1/f_2(x)$, and so $h_2(f_2^{-1}(w)) < 1/w$ for all $w$ in the range of $f_2$, that is, for all $w \in (0,1)$.
By adding formulas \eqref{h1f1} and \eqref{h2f2} we deduce
\[
h_1(f_1^{-1}(w))+h_2(f_2^{-1}(w)) = \frac{1}{2w} \left( 2 - \frac{1}{\operatorname{sinch} 2f_1^{-1}(w)} + \frac{1}{\operatorname{sinch} 2f_2^{-1}(w)} \right)
\]
for all $w \in (0,1/3)$. This last expression is less than $1/w$ since $f_1^{-1}(w) < f_2^{-1}(w)$, using here that $f_1$ and $f_2$ are decreasing with $f_1<f_2$.
\end{proof}
Next we examine situations where $H(y)$ is strictly convex with respect to $\log y$.
\begin{lemma}[Convexity with respect to $\log y$]\label{convexity}
The functions $G_1(y)$ and $H_1(y)$ are strictly convex with respect to $\log y$, with
\[
\frac{d^2\ }{dz^2} G_1(e^z) > 0 \quad \text{and} \quad \frac{d^2\ }{dz^2} H_1(e^z) > 0 , \qquad z \in {\mathbb R} .
\]
\end{lemma}
\begin{proof}
A straightforward calculation shows
\begin{equation}\label{secondderiv}
\frac{d^2\ }{dz^2} H(e^z) = - \frac{h(x) h^{\prime \prime}(x)}{h^\prime(x)^3} - \frac{1}{h^\prime(x)} + \frac{x}{h(x)} ,
\end{equation}
whenever $H(y)=h^{-1}(y)/y$ and $y=e^z=h(x)$. The task is to show the right side of \eqref{secondderiv} is positive when $h$ is replaced by $g_1$ and also when it is replaced by $h_1$. Note $g_1^\prime>0$ and $h_1^\prime>0$, and so the denominators are positive in every case.
When $h=g_1$, the right side of \eqref{secondderiv} equals
\[
\frac{8x \cot x}{(2x+\sin 2x)^3} \big(x^2-\sin^2 x \cos^2 x +2x \cos^3 x \sin x \big) ,
\]
which is positive because $x^2 \geq \sin^2 x$.
When $h=h_1$, the right side of \eqref{secondderiv} evaluates to
\[
\frac{8x \coth x}{(2x+\sinh 2x)^3} f(x)
\]
where $f(x)=x^2-\sinh^2 x \cosh^2 x + 2x \cosh^3 x \sinh x$. Notice $f(0)=f^\prime(0)=0$ and
\[
f^{\prime \prime}(x) = 4 \cosh^2 x + 4 x \sinh x \cosh x (3\sinh^2 x + 5\cosh^2 x ) > 0 ,
\]
from which it follows that $f(x)>0$ for all $x>0$, completing the proof.
\end{proof}
Convexity or concavity of functions in the form $y(1-y)H(cy)^2$ will be important for our arguments too.
\begin{lemma}[Convexity with $G_1$ and $G_2$]\label{convexityyy} Fix $c>0$.
(i) $y(1-y)G_1(cy)^2$ is a strictly convex function of $y>0$.
(ii) $y(1-y)G_2(cy)^2$ is a strictly convex function of $y>0$.
(iii) $y(1-y)G_2(-cy)^2$ is a strictly decreasing function of $0<y<\min(1,1/c)$.
\end{lemma}
\begin{proof}
Part (i). Equivalently, we show strict convexity of $y(1-y/c)G_1(y)^2$ for $y>0$. Direct differentiation gives
\[
\frac{1}{2} \frac{d^2\ }{dy^2} \big( y(1-y/c)G_1(y)^2 \big)
= \frac{x^2}{g_1(x)^3} - \frac{2 x}{g_1(x)^2 g_1^\prime(x)} + \frac{1 - g_1(x)/c}{g_1(x) g_1^\prime(x)^2} - \frac{x (1- g_1(x)/c) g_1^{\prime \prime}(x)}{g_1(x) g_1^\prime(x)^3}
\]
where $y=g_1(x)$. The right side of this formula can be rewritten as
\begin{equation} \label{bigrightside}
\frac{x^2}{g_1(x)^3} - \frac{2 x}{g_1(x)^2 g_1^\prime(x)} + \frac{1}{g_1(x) g_1^\prime(x)^2} - \frac{x g_1^{\prime \prime}(x)}{g_1(x) g_1^\prime(x)^3} - \frac{1}{c} \left( \frac{g_1^\prime(x) - x g_1^{\prime \prime}(x)}{g_1^\prime(x)^3} \right) .
\end{equation}
The parenthetical term in \eqref{bigrightside} is negative by \eqref{inverse-i}.
The other terms in \eqref{bigrightside} equal
\[
\frac{2\cot^3 x}{(2x + \sin 2x)^3} f(x)
\]
where $f(x) = -1 + 4x^2 + \cos 4x + x \sin 4x$. To complete the proof we will show $f(x)$ is positive when $x>0$. Obviously $f(0)=0$, and so it suffices to show $f^\prime(x)>0$. One computes
\[
f^\prime(x) = 4x ( 2 + \cos 4x - 3 \operatorname{sinc} 4x) ,
\]
which is positive whenever $x>3/4$ because
\[
2 + \cos 4x - 3 \operatorname{sinc} 4x \geq 1 - \frac{3}{4x} > 0.
\]
Further, power series expansions yield
\[
f^\prime(x) = 8x \sum_{k=2}^\infty (-1)^k \frac{k-1}{(2k+1)!} (4x)^{2k} .
\]
When $0 < x \leq 1$, the terms of this alternating series decrease in magnitude as $k$ increases, so that $f^\prime(x)>0$, completing this part of the proof.
Part (ii). In formula \eqref{bigrightside} we replace $g_1(x)$ by $g_2(x)=-x \cot x$. The parenthetical term in \eqref{bigrightside} is then negative by \eqref{inverse-iib}, noting
$x \in (\pi/2,\pi)$ because $g_2(x)=y>0$ in this part of the lemma. The other terms in \eqref{bigrightside} equal
\[
-\frac{2\tan^3 x}{(2x - \sin 2 x)^3} f(x)
\]
where $f(x)$ is the function used in part (i) of the proof. Since $f$ is positive, we see the last expression is positive when $\pi/2<x<\pi$, as we wanted to show.
Part (iii). Rescale by $c$ and consider $y(1-y/c)G_2(-y)^2$ for $0<y<\min(c,1)$. Differentiating gives
\[
\frac{d\ }{dy} \left( y(1-y/c)G_2(-y)^2 \right) = \frac{2x}{g_2^\prime(x)} \left( \frac{1}{c} + \frac{1}{g_2(x)} - \frac{x g_2^\prime(x)}{2g_2(x)^2} \right)
\]
where $y=-g_2(x)$. The right side is negative since $g_2^\prime(x)>0$ and $0<-g_2(x)<c$.
\end{proof}
\begin{lemma}[Concavity with $H_1$]\label{concavityH1} If $0 < c \leq 3$ then $y(1-y)H_1(cy)^2$ is strictly decreasing and strictly concave for $y \in (0,1)$.
If $c>3$ then a number $y_1(c) \in (0,1/2)$ exists such that $y(1-y)H_1(cy)^2$ is:
\begin{itemize}
\item strictly increasing for $y \in \big(0, y_1(c)\big)$,
\item strictly decreasing for $y \in \big(y_1(c),1\big)$,
\item strictly concave for $y \in \big(y_1(c),1\big)$.
\end{itemize}
\end{lemma}
To unify the two parts of this lemma, one simply defines $y_1(c)=0$ when $0 < c \leq 3$.
\begin{proof}
After rescaling, we consider the function $K(y) = y(1-y/c)H_1(y)^2$ and show that if $0 < c \leq 3$ then $K(y)$ is strictly decreasing and strictly concave for $y \in (0,c)$, while if $c>3$ then a number $y_c \in (0,c/2)$ exists such that $K(y)$ is
\begin{itemize}
\item strictly increasing for $y \in (0, y_c)$,
\item strictly decreasing for $y \in (y_c,c)$,
\item strictly concave for $y \in (y_c,c)$.
\end{itemize}
The values $y_c$ and $y_1(c)$ are related by $y_c = c y_1(c)$.
To begin with, differentiating the definition of $K$ yields
\[
K^\prime(y) = \frac{2x}{h_1^\prime(x)} \left( f_1(x) - \frac{1}{c} \right)
\]
where $y=h_1(x)$ and
\[
f_1(x) = \frac{1}{h_1(x)} - \frac{x h_1^\prime(x)}{2h_1(x)^2} , \qquad x>0 .
\]
\autoref{specific} says this function $f_1$ is strictly decreasing and has limiting values $f_1(0+)=1/3$ and $\lim_{x \to \infty} f_1(x)=0$, so that $0<f_1(x)<1/3$ for all $x>0$.
If $c \leq 3$ then $f_1(x)<1/c$ for all $x>0$, and so $K^\prime(y)<0$ for all $y>0$. Let $y_c=0$ in this case.
If $c>3$ then $f_1(x_c) = 1/c$ for a unique number $x_c>0$. Letting $y_c=h_1(x_c)$, we have that $K^\prime(y)>0$ when $x<x_c$, that is, when $y<y_c$. Also, $K^\prime(y)<0$ when $y>y_c$. Further, $h_1(f_1^{-1}(1/c))<c/2$ by \autoref{specific}(i) (noting $1/c < 1/3$) and so $h_1(x_c) < c/2$, or $y_c<c/2$, as desired.
Concavity of $K(y)$ remains to be proved, when $y_c < y < c$, which is the interval on which $f_1(x)<1/c$. Formula \eqref{bigrightside} with $g_1$ replaced by $h_1$ shows that
\[
\frac{1}{2} K^{\prime \prime}(y) =
\frac{x^2}{h_1(x)^3} - \frac{2 x}{h_1(x)^2 h_1^\prime(x)} + \frac{1}{h_1(x) h_1^\prime(x)^2} - \frac{x h_1^{\prime \prime}(x)}{h_1(x) h_1^\prime(x)^3} - \frac{1}{c} \left( \frac{h_1^\prime(x) - x h_1^{\prime \prime}(x)}{h_1^\prime(x)^3} \right) .
\]
The parenthetical term is positive by \eqref{inverse-iii}. Thus we may replace $1/c$ with the smaller number $f_1(x)$, getting an upper bound
\begin{align*}
\frac{1}{2} K^{\prime \prime}(y)
& < \frac{x^2}{h_1(x)^3} - \frac{2 x}{h_1(x)^2 h_1^\prime(x)} + \frac{1}{h_1(x) h_1^\prime(x)^2} - \frac{x h_1^{\prime \prime}(x)}{h_1(x) h_1^\prime(x)^3} - f_1(x) \left( \frac{h_1^\prime(x) - x h_1^{\prime \prime}(x)}{h_1^\prime(x)^3} \right) \\
& = \frac{2\cosh^2 x \coth^3 x}{x(2x + \sinh 2x)^2} \, \tilde{f}(x)
\end{align*}
by substituting $h_1(x)=x \tanh x$, where
\[
\tilde{f}(x) = 2x^2-\sinh^2 x - x \tanh x .
\]
To show $K^{\prime \prime}(y)<0$ we want $\tilde{f}(x)<0$ for all $x>0$. This is true, since $\tilde{f}(0)=0,\tilde{f}^\prime(0)=0$ and
\[
\tilde{f}^{\prime \prime}(x) = \frac{\sinh x}{2\cosh^3 x} (4x - \sinh 4x) < 0.
\]
\end{proof}
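A brief numerical check (illustrative only) confirms both the negativity of $\tilde{f}$ and the closed form for $\tilde{f}^{\prime\prime}$, the latter against a central second difference:

```python
import math

# ftilde(x) = 2x^2 - sinh(x)^2 - x tanh(x) should be negative for x > 0,
# and its second derivative should match sinh(x)/(2 cosh(x)^3) * (4x - sinh(4x)).
def ftilde(x):
    return 2*x**2 - math.sinh(x)**2 - x*math.tanh(x)

def ftilde_dd(x):  # claimed closed form of the second derivative
    return math.sinh(x) / (2*math.cosh(x)**3) * (4*x - math.sinh(4*x))

xs = [0.05*k for k in range(1, 201)]  # grid over (0, 10]
neg = all(ftilde(x) < 0 for x in xs)

h = 1e-5  # central second difference at x = 1
fd = (ftilde(1+h) - 2*ftilde(1) + ftilde(1-h)) / h**2
match = abs(fd - ftilde_dd(1)) < 1e-4
print(neg, match)  # True True
```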
\begin{lemma}[Concavity with $H_2$]\label{concavityhtwo}
Let $c>0$. The function $y(1-y)H_2(cy)^2$ is strictly concave for $y>1/c$, and is
\begin{itemize}
\item strictly increasing for $y \in \big(1/c, y_2(c)\big)$,
\item strictly decreasing for $y \in \big(y_2(c),\infty\big)$,
\end{itemize}
for some number $y_2(c) \geq 1/c$. Furthermore, $y_1(c)+y_2(c)<1$ whenever $c>1$, where the number $y_1(c)$ was constructed in \autoref{concavityH1}.
\end{lemma}
\begin{proof}
Rescaling by $c>0$, we want to show strict concavity for the function
\[
K(y) = y(1-y/c)H_2(y)^2 = \left( \frac{1}{y} - \frac{1}{c} \right) h_2^{-1}(y)^2 , \qquad y > 1 .
\]
The second derivative $K^{\prime \prime}(y)$ is given by formula \eqref{bigrightside}, except replacing $g_1$ with $h_2$ there. The parenthetical term in this new version of \eqref{bigrightside} is positive by estimate \eqref{inverse-iv}, and so the factor of $-1/c$ in \eqref{bigrightside} makes that term negative.
The other terms in \eqref{bigrightside}, after $g_1(x)$ is replaced throughout by $h_2(x)=x \coth x$, evaluate to
\[
-\frac{2\tanh^3 x}{(\sinh 2x - 2x)^3} f(x)
\]
where $f(x) = 1 + 4x^2 - \cosh 4x + x \sinh 4x$. To finish the concavity proof we show $f(x)>0$ for all $x>0$, so that $K^{\prime \prime}(y)<0$. Obviously $f(0)=0$, and the derivative is positive since
\[
f^\prime(x) = 4x ( 2 + \cosh 4x - 3 \operatorname{sinch} 4x) = \sum_{k=2}^\infty \frac{(4x)^{2k+1}}{(2k)!} \left( 1 - \frac{3}{2k+1} \right) > 0
\]
by the usual power series expansions.
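The same kind of numerical spot check (illustrative only) applies to this hyperbolic $f$:

```python
import math

# f(x) = 1 + 4x^2 - cosh(4x) + x sinh(4x) should be positive for x > 0,
# consistent with the nonnegative power-series coefficients of f'.
def f(x):
    return 1 + 4*x**2 - math.cosh(4*x) + x*math.sinh(4*x)

xs = [0.05*k for k in range(1, 101)]  # grid over (0, 5]
pos = all(f(x) > 0 for x in xs)
print(pos)  # True
```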
For the monotonicity assertions in the lemma, we want a number $y_c = cy_2(c) \geq 1$ such that $K(y)$ is
\begin{itemize}
\item strictly increasing for $y \in (1, y_c)$,
\item strictly decreasing for $y \in (y_c,\infty)$.
\end{itemize}
To get these results, differentiate the definition of $K$ to find
\[
K^\prime(y) = \frac{2x}{h_2^\prime(x)} \left( f_2(x) - \frac{1}{c} \right)
\]
where $y=h_2(x)$ and
\[
f_2(x) = \frac{1}{h_2(x)} - \frac{x h_2^\prime(x)}{2h_2(x)^2} , \qquad x>0 .
\]
\autoref{specific}(ii) says that this function $f_2$ decreases strictly in value from $1$ to $0$ as $x$ increases from $0$ to $\infty$.
If $0 < c \leq 1$ then $f_2(x)<1/c$ for all $x>0$, and choosing $y_c=1$ gives $K^\prime(y)<0$ for all $y>y_c$. If $c>1$ then $f_2(x_c) = 1/c$ for a unique number $x_c>0$. Letting $y_c=h_2(x_c)$, we have that $K^\prime(y)>0$ when $1<y<y_c$ (or $0<x<x_c$), while $K^\prime(y)<0$ when $y>y_c$ (or $x>x_c$). This proves the monotonicity claims in the lemma.
Finally, we want to prove $y_1(c)+y_2(c)<1$ when $c>1$. When $1 < c \leq 3$ we recall $y_1(c)=0$ by the comment after \autoref{concavityH1}, and so the task in this range is to show $y_2(c)<1$. By definition $y_2(c)=y_c/c=h_2(x_c)/c$, and so $y_2(c)<1$ if and only if $h_2(f_2^{-1}(1/c))<c$. This last inequality follows from \autoref{specific}(ii) with $w=1/c \in (0,1)$. Next, when $c > 3$ we use the definition of $y_1(c)$ in \autoref{concavityH1} to show that $y_1(c)+y_2(c)<1$ if and only if $h_1(f_1^{-1}(1/c))+h_2(f_2^{-1}(1/c))<c$, which then follows from \autoref{specific}(ii) with $w=1/c \in (0,1/3)$.
\end{proof}
\section{\bf The first and second Robin eigenvalues of an interval}
\label{identifyinginterval}
The Rayleigh quotient
\[
\frac{\int_{-t}^t (u^\prime)^2 \, dx + \alpha \left( u(t)^2 + u(-t)^2 \right) }{\int_{-t}^t u^2 \, dx}
\]
for the interval
\[
\mathcal{I}(t)=(-t,t)
\]
generates the eigenvalue equation $-u^{\prime \prime} = \lambda u$ with boundary condition $\partial u / \partial \nu + \alpha u = 0$ at $x=\pm t$. Our results rely on understanding how the first and second Robin eigenvalues of the interval depend on the half-length $t>0$ and the Robin parameter $\alpha \in {\mathbb R}$.
The following lemmas each state two eigenvalue formulas. The first formula involves $g_1,g_2,h_1,h_2$ (which were defined in \autoref{notation}), and is useful when the half-length $t$ is fixed and the Robin parameter $\alpha$ is varying. The second formula involves $G_1,G_2,H_1,H_2$ (also defined in \autoref{notation}), and is useful when $\alpha$ is fixed and $t$ is varying.
\begin{lemma} \label{firsteigen}
\[
\lambda_1(\mathcal{I}(t);\alpha) =
\begin{cases}
g_1^{-1}(\alpha t)^2/t^2 \\
0 \\
-h_1^{-1}(-\alpha t)^2/t^2
\end{cases}
=
\begin{cases}
\alpha^2 G_1(\alpha t)^2 & \text{if $\alpha>0$,} \\
0 & \text{if $\alpha=0$,} \\
-\alpha^2 H_1(-\alpha t)^2 & \text{if $\alpha<0$.}
\end{cases}
\]
This first eigenvalue is a strictly increasing function of $\alpha \in {\mathbb R}$. As $\alpha \to -\infty$ it equals $-\alpha^2 \left(1+O(e^{2\alpha t}) \right)$, and as $\alpha \to \infty$ it converges to $(\pi/2t)^2$.
\end{lemma}
\noindent A more precise asymptotic as $\alpha \to -\infty$ was given by Antunes \emph{et al.}\ \cite[Proposition 1]{AFK17}.
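The lemma's formula can be verified numerically. A minimal sketch (with illustrative values $t=1.5$, $\alpha=-2$): invert $h_1(x)=x\tanh x$ by bisection and check that the even eigenfunction $u=\cosh(\rho x)$ satisfies the Robin condition $u^\prime(t)+\alpha u(t)=0$:

```python
import math

# lambda_1 = -h1^{-1}(-alpha*t)^2 / t^2 for alpha < 0, where h1(x) = x tanh(x).
def h1_inverse(y, lo=0.0, hi=100.0):
    # bisection; h1 is strictly increasing on (0, infinity)
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.tanh(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t, alpha = 1.5, -2.0  # illustrative values
rho = h1_inverse(-alpha * t) / t
lam1 = -rho**2  # first eigenvalue

# Robin boundary condition for u = cosh(rho*x): u'(t) + alpha*u(t) = 0
residual = rho*math.sinh(rho*t) + alpha*math.cosh(rho*t)
print(lam1, abs(residual))  # residual is ~ 0
```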
\begin{lemma} \label{secondeigen}
\[
\lambda_2(\mathcal{I}(t);\alpha) =
\begin{cases}
g_2^{-1}(\alpha t)^2/t^2 \\
0 \\
-h_2^{-1}(-\alpha t)^2/t^2
\end{cases}
=
\begin{cases}
\alpha^2 G_2(\alpha t)^2 & \text{if $\alpha> -1/t$,} \\
0 & \text{if $\alpha= -1/t$,} \\
-\alpha^2 H_2(-\alpha t)^2 & \text{if $\alpha< -1/t$.}
\end{cases}
\]
This second eigenvalue is a strictly increasing function of $\alpha \in {\mathbb R}$. As $\alpha \to -\infty$ it equals $-\alpha^2 \left(1+ O(e^{2\alpha t}) \right)$, and as $\alpha \to \infty$ it converges to $(\pi/t)^2$.
\end{lemma}
When $\alpha=0$ the upper right formula in \autoref{secondeigen} is not well defined, because $G_2(0)$ is undefined (its denominator being zero). The upper left formula $g_2^{-1}(0)^2/t^2=(\pi/2t)^2$ still gives the correct value for $\lambda_2(\mathcal{I}(t);0)$.
\autoref{firstsixinterval} plots the first six eigenvalues as functions of $\alpha$ for $t=1$, that is, for the interval $\mathcal{I}(1)=(-1,1)$. Formulas for these eigenvalue curves could be obtained from the proofs of the lemmas below.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{intervalsix.pdf}
\end{center}
\caption{\label{firstsixinterval} The first six eigenvalues $\lambda_k(\mathcal{I}(1);\alpha)$ for $k=1,\dots,6$ of the interval $\mathcal{I}(1)=(-1,1)$, plotted as functions of the Robin parameter $\alpha$. The eigenvalues come in pairs, corresponding to even and odd eigenfunctions. The even eigenvalue is always lower than the odd one.}
\end{figure}
\begin{proof}[Proof of \autoref{firsteigen} and \autoref{secondeigen}]
The Robin spectrum of the interval is known, of course, but the treatments in the literature vary in clarity and notation, and many authors examine only $\alpha>0$. So it seems helpful to include a proof here, using our notation.
Fix $t>0$. By symmetry of the interval, we may assume each eigenfunction is either even or odd. Thus the eigenvalue problem is
\[
\begin{split}
u^{\prime \prime} + \lambda u & =0 , \qquad 0<x<t,\\
u^\prime(t) + \alpha u(t) & = 0 .
\end{split}
\]
(i) First suppose $\lambda<0$, and write $\lambda= -\rho^2$ where $\rho>0$, so that the eigenfunction equation says $u^{\prime \prime}=\rho^2 u$. The even solution is $u=\cosh \rho x$, and applying the boundary condition gives $\rho \tanh \rho t = -\alpha$. Hence $\alpha < 0$, and multiplying by $t$ gives $h_1(\rho t) = -\alpha t$. Inverting, $\rho = h_1^{-1}(-\alpha t)/t$ when $\alpha<0$.
The odd solution is $u=\sinh \rho x$. Applying the boundary condition yields $\rho \coth \rho t = -\alpha$, or $h_2(\rho t) = -\alpha t$. Hence $- \alpha t > 1$, or $\alpha < -1/t$. Inverting yields $\rho = h_2^{-1}(-\alpha t)/t$. This $\rho$-value is smaller than the one found in the even case, since $h_2^{-1} < h_1^{-1}$ by \eqref{h1h2}, and hence the eigenvalue $\lambda=-\rho^2$ is larger than in the even case.
There are no other negative eigenvalues. Combining these facts establishes the formula for $\lambda_1$ in \autoref{firsteigen} when $\alpha<0$, and the formula for $\lambda_2$ in \autoref{secondeigen} when $\alpha<-1/t$.
(ii) Now suppose $\lambda=0$, so that the eigenfunction equation is $u^{\prime \prime}=0$. The even solution $u=1$ satisfies the boundary condition when $\alpha=0$, and the odd solution $u=x$ satisfies it when $\alpha=-1/t$. This yields the zero eigenvalues in the lemmas.
(iii) Lastly, suppose $\lambda>0$, and write $\lambda= \rho^2$ where $\rho>0$. The eigenfunction equation $u^{\prime \prime}= -\rho^2 u$ has even solution $u=\cos \rho x$, for which the boundary condition says $\rho t \tan \rho t = \alpha t$. The roots of this condition arise from the branches of $x\tan x$, and so there are roots with $\rho t \in (\pi/2,3\pi/2), (3\pi/2,5\pi/2),\ldots$; and when $\alpha>0$ we can further narrow these intervals to $\rho t \in (\pi,3\pi/2), (2\pi,5\pi/2), \ldots$; also, when $\alpha>0$ there is a smaller root with $\rho t \in (0,\pi/2)$ coming from the first branch of $\tan$, that is, from $g_1(\rho t)=\alpha t$. Thus the smallest ``even'' eigenvalue when $\alpha>0$ is the square of $\rho = g_1^{-1}(\alpha t)/t$.
Consider now the odd solution $u=\sin \rho x$ of the eigenfunction equation. It must satisfy the boundary condition $-\rho t \cot \rho t = \alpha t$. The roots of the boundary condition come from the branches of $-x\cot x$, and so there are roots with $\rho t \in (\pi,2\pi), (2\pi,3\pi)$ and so on; and when $\alpha > -1/t$ (so that $\alpha t > -1$ lies in the range of $g_2$), there is also a smaller root with $\rho t \in (0,\pi)$, coming from the first branch of $\cot$, that is, from $g_2(\rho t) = \alpha t$. More precisely, if $-1/t<\alpha \leq 0$ then $\rho t \in (0,\pi/2]$ and if $\alpha>0$ then $\rho t \in (\pi/2,\pi)$. Either way, the smallest ``odd'' eigenvalue when $\alpha>-1/t$ is the square of $\rho = g_2^{-1}(\alpha t)/t$.
Suppose $\alpha>0$. All eigenvalues are then positive, and the preceding paragraphs show the smallest even eigenvalue has $\rho t<\pi/2$, while the smallest odd eigenvalue has $\rho t>\pi/2$. Thus the even eigenvalue is the first one, which gives the formula for $\lambda_1$ in \autoref{firsteigen}. Also, the smallest odd eigenvalue has $\rho t < \pi$, while the second-smallest even eigenvalue has $\rho t > \pi$. Thus the odd eigenvalue is the smaller one, giving the formula for $\lambda_2$ in \autoref{secondeigen} when $\alpha>0$.
Suppose finally that $-1/t < \alpha \leq 0$. As found in parts (i) and (ii) of the proof, the first eigenvalue is even and $\leq 0$, while all other eigenvalues are positive. The work above shows that the smallest odd eigenvalue has $\rho t \leq \pi/2$, while the second-smallest even eigenvalue has $\rho t > \pi/2$. Again the odd eigenvalue is the smaller one, giving the formula for $\lambda_2$ in \autoref{secondeigen} when $-1/t < \alpha \leq 0$.
Finally, the first and second eigenvalues are strictly increasing as functions of $\alpha$ because $g_1, h_1, g_2, h_2$ and their inverses are all strictly increasing. The limiting values as $\alpha \to \infty$ follow from evaluating $g_1^{-1}(\infty)=\pi/2$ and $g_2^{-1}(\infty)=\pi$. To derive the limiting behavior $-\alpha^2 \left(1+ O(e^{2\alpha t}) \right)$ as $\alpha \to -\infty$, simply substitute $y=-\alpha t$ into the asymptotic formulas in \autoref{inverseasymptotics}.
\end{proof}
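To illustrate the case $\alpha>0$ of \autoref{secondeigen} numerically (a sketch with the illustrative values $t=\alpha=1$): invert $g_2(x)=-x\cot x$ on $(0,\pi)$ by bisection and check the Robin condition for the odd eigenfunction $u=\sin(\rho x)$:

```python
import math

# lambda_2 = g2^{-1}(alpha*t)^2 / t^2 for alpha > 0, where g2(x) = -x cot(x).
def g2(x):
    return -x * math.cos(x) / math.sin(x)

def g2_inverse(y):
    lo, hi = 1e-12, math.pi - 1e-12  # g2 increases from -1 to +infinity here
    for _ in range(200):
        mid = (lo + hi) / 2
        if g2(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t, alpha = 1.0, 1.0  # illustrative values
rho = g2_inverse(alpha * t) / t
lam2 = rho**2  # second eigenvalue

# Robin boundary condition for u = sin(rho*x): u'(t) + alpha*u(t) = 0
residual = rho*math.cos(rho*t) + alpha*math.sin(rho*t)
print(lam2, abs(residual))  # lam2 is about 4.116, residual ~ 0
```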
To determine qualitatively how the first two eigenvalues of the interval depend on its length, we split the next three propositions into the cases of $\alpha$ being positive, zero, or negative. \autoref{firsttwoint} illustrates the negative and positive cases.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{intervalneg1.pdf}
\hspace{1cm}
\includegraphics[scale=0.5]{intervalpos1.pdf}
\end{center}
\caption{\label{firsttwoint} Left: the first two eigenvalues $\lambda_1$ and $\lambda_2$ for the interval $\mathcal{I}(t)=(-t,t)$, when the half-length $t>0$ is variable and the Robin parameter $\alpha=-1$ is fixed. The horizontal asymptote has height $-\alpha^2=-1$. Right: the eigenvalues $\lambda_1$ and $\lambda_2$ as functions of $t>0$ when $\alpha=1$.}
\end{figure}
\begin{proposition}\label{1dimpos}
When $\alpha>0$, the first two eigenvalues, $\lambda_1(\mathcal{I}(t);\alpha)$ and $\lambda_2(\mathcal{I}(t);\alpha)$, are strictly decreasing as functions of $t>0$, and so is the spectral gap $\lambda_2(\mathcal{I}(t);\alpha)-\lambda_1(\mathcal{I}(t);\alpha)$. As $t$ increases from $0$ to $\infty$, all three functions decrease from $\infty$ to $0$.
\end{proposition}
See the right side of \autoref{firsttwoint}. In fact, every eigenvalue $\lambda_k(\mathcal{I}(t);\alpha)$ for $k \geq 1$ is decreasing as a function of $t$, when $\alpha>0$, as one can see by rescaling the integrals in the Rayleigh quotient to integrate over the fixed interval $(-1,1)$ instead of over $\mathcal{I}(t)$. The interesting part of the proposition is that the spectral gap also decreases with $t$.
\begin{proof}
The function $G_1$ is positive and strictly decreasing, by \autoref{monotonicity}, and so $t \mapsto \lambda_1(\mathcal{I}(t);\alpha)$ is positive and strictly decreasing, by the formula in \autoref{firsteigen}. It is easy to check that $\lim_{y \to 0+} G_1(y)=\infty$ and $\lim_{y \to \infty} G_1(y)=0$, and so $\lambda_1(\mathcal{I}(t);\alpha)$ tends to $0$ as $t \to \infty$ and tends to $\infty$ as $t \to 0$. (The blow-up as $t \to 0$ can be determined quite precisely, since $g_1(x) = x \tan x \simeq x^2$ as $x \to 0$ and so $g_1^{-1}(y)^2 \simeq y$, so that $\lambda_1(\mathcal{I}(t);\alpha) \simeq \alpha t /t^2=\alpha/t$ as $t \to 0$.)
The function $G_2(y)$ is positive and strictly decreasing for $y>0$, by \autoref{monotonicity}. Hence by \autoref{secondeigen}, $t \mapsto \lambda_2(\mathcal{I}(t);\alpha)$ is positive and strictly decreasing. Again it is straightforward to see $\lim_{y \to 0+} G_2(y)=\infty$ and $\lim_{y \to \infty} G_2(y)=0$, and so $\lambda_2(\mathcal{I}(t);\alpha)$ tends to $\infty$ as $t \to 0$ and tends to $0$ as $t \to \infty$.
Next, decompose the spectral gap as
\[
(\lambda_2-\lambda_1)(\mathcal{I}(t);\alpha)
= \big( \sqrt{\lambda_2(\mathcal{I}(t);\alpha)}+\sqrt{\lambda_1(\mathcal{I}(t);\alpha)} \big) \big( \sqrt{\lambda_2(\mathcal{I}(t);\alpha)}-\sqrt{\lambda_1(\mathcal{I}(t);\alpha)} \big) .
\]
The first factor on the right side is strictly decreasing from $\infty$ to $0$ as a function of $t$, because $\lambda_1$ and $\lambda_2$ have that property. Meanwhile, the second factor equals $\alpha G_2(\alpha t) - \alpha G_1(\alpha t)$, whose $t$-derivative is $\alpha^2 \big( G_2^\prime(\alpha t) - G_1^\prime(\alpha t) \big)$. This derivative is negative by \eqref{Hderiv} in \autoref{derivcomparison}, and so the second factor decreases strictly as $t$ increases. \autoref{1dimpos} now follows.
\end{proof}
The result is easy in the Neumann case, where $\alpha=0$:
\begin{proposition}\label{1dimzero}
When $\alpha=0$, the first eigenvalue $\lambda_1(\mathcal{I}(t);0)=0$ is constant and the second eigenvalue $\lambda_2(\mathcal{I}(t);0)=(\pi/2t)^2$ decreases strictly from $\infty$ to $0$ as $t$ increases from $0$ to $\infty$.
\end{proposition}
Note the Neumann spectral gap equals the second eigenvalue, because the first eigenvalue is zero.
Next we treat negative $\alpha$. Again see \autoref{firsttwoint}.
\begin{proposition}\label{1dimneg}
Fix $\alpha<0$. Then $\lambda_1(\mathcal{I}(t);\alpha)$ is strictly increasing and $\lambda_2(\mathcal{I}(t);\alpha)$ is strictly decreasing, as a function of $t>0$, and hence the spectral gap $\lambda_2(\mathcal{I}(t);\alpha) - \lambda_1(\mathcal{I}(t);\alpha)$ is strictly decreasing. The limiting values are:
\begin{align*}
\lim_{t \to 0} \lambda_1(\mathcal{I}(t);\alpha) = -\infty , & \qquad \lim_{t \to \infty} \lambda_1(\mathcal{I}(t);\alpha) = -\alpha^2 , \\
\lim_{t \to 0} \lambda_2(\mathcal{I}(t);\alpha) = \infty , & \qquad \lim_{t \to \infty} \lambda_2(\mathcal{I}(t);\alpha) = -\alpha^2 , \\
\lim_{t \to 0} (\lambda_2-\lambda_1)(\mathcal{I}(t);\alpha) = \infty , & \qquad \lim_{t \to \infty} (\lambda_2-\lambda_1)(\mathcal{I}(t);\alpha) = 0 ,
\end{align*}
and the horizontal intercept for $\lambda_2$ occurs at $t=1/|\alpha|$ since $\lambda_2(\mathcal{I}(1/|\alpha|);\alpha) = 0$.
\end{proposition}
The observation that $t \mapsto \lambda_1(\mathcal{I}(t);\alpha)$ is strictly increasing, when $\alpha < 0$, was made already by Antunes \emph{et al.}\ \cite[Proposition 2]{AFK17}, and they found the limiting value $-\alpha^2$ as $t \to \infty$, in \cite[Proposition 3]{AFK17}.
\begin{proof}
The function $H_1$ is positive and strictly decreasing, by \autoref{monotonicity}, and so $t \mapsto \lambda_1(\mathcal{I}(t);\alpha)$ is negative and strictly increasing, by the formula in \autoref{firsteigen}. Since $H_1(\infty)=1$ and $H_1(0+)=\infty$, the limiting values of $\lambda_1$ as $t \to \infty$ and $t \to 0$ are as stated in the lemma. (The blow-up as $t \to 0$ can be established precisely, since $h_1(x) = x \tanh x \simeq x^2$ as $x \to 0$ and so $h_1^{-1}(y)^2 \simeq y$, so that $\lambda_1(\mathcal{I}(t);\alpha) \simeq -(-\alpha t) /t^2=\alpha/t$ as $t \to 0$. This blow-up rate was noted by Antunes \emph{et al.}\ \cite[Proposition 3]{AFK17}.)
The second eigenvalue requires more careful analysis. The function $G_2(y)$ is negative and strictly decreasing for $-1<y<0$, by \autoref{monotonicity}, and so $G_2(y)^2$ is positive and strictly increasing. Hence $t \mapsto \lambda_2(\mathcal{I}(t);\alpha)$ is positive and strictly decreasing when $0 < t < -1/\alpha$ by \autoref{secondeigen} (remembering here that $-\alpha>0$). Further, $G_2(0-)= -\infty$ and so $\lambda_2$ tends to $\infty$ as $t \to 0+$. Also, $\lim_{y \searrow -1} G_2(y)=0$ and so the eigenvalue approaches $0$ as $t$ approaches $-1/\alpha$ from below.
When $t = -1/\alpha$ the second eigenvalue is $0$.
Now suppose $t > -1/\alpha$. \autoref{monotonicity} says $H_2$ is positive and strictly increasing, and so $t \mapsto \lambda_2(\mathcal{I}(t);\alpha)$ is negative and strictly decreasing by \autoref{secondeigen}. Note the eigenvalue approaches $0$ as $t$ approaches $-1/\alpha$ from above, since $H_2(1)=0$. Further, $H_2(\infty)=1$ and so $\lambda_2$ tends to $-\alpha^2$ as $t \to \infty$.
\end{proof}
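A small numerical sketch (illustrative only, with $\alpha=-1$) reproduces the sign pattern of $\lambda_2$ described in the proposition: positive below the intercept $t=1/|\alpha|$, zero there, and negative above:

```python
import math

# Sign of lambda_2(I(t); -1): positive for t < 1, zero at t = 1, negative for t > 1.
def bisect(fn, y, lo, hi, n=200):
    for _ in range(n):
        mid = (lo + hi) / 2
        if fn(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

g2 = lambda x: -x * math.cos(x) / math.sin(x)  # increasing from -1 on (0, pi)
h2 = lambda x: x / math.tanh(x)                # increasing from 1 on (0, infinity)

def lambda2(t, alpha=-1.0):
    if alpha > -1/t:
        return bisect(g2, alpha*t, 1e-12, math.pi - 1e-12)**2 / t**2
    if alpha < -1/t:
        return -bisect(h2, -alpha*t, 1e-12, 100.0)**2 / t**2
    return 0.0

print(lambda2(0.5), lambda2(1.0), lambda2(2.0))  # positive, zero, negative
```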
\section{\bf The first and second Robin eigenvalues of a rectangular box}
\label{identifying}
Now that the interval is understood, we can identify the first and second Robin eigenvalues of the rectangular box
\[
\mathcal{B}(w) = \mathcal{I}(w_1) \times \dots \times \mathcal{I}(w_n)
\]
where $w=(w_1,\dots,w_n) \in {\mathbb R}^n$, $n \geq 1$, with $w_j > 0$ for each $j$. The width of the box in the $j$th direction is $2w_j$. Later in the section we show that the spectral gap of the box equals the gap of its longest edge, that is, of the interval with the largest width.
\begin{lemma}[First eigenvalue] \label{firsteigenbox}
\begin{align*}
\lambda_1(\mathcal{B}(w);\alpha)
& = \lambda_1\big( \mathcal{I}(w_1) ; \alpha \big) + \lambda_1\big( \mathcal{I}(w_2) ; \alpha \big) + \dots + \lambda_1\big( \mathcal{I}(w_n) ; \alpha \big) \\
& =
\begin{cases}
\alpha^2 \left| \big( G_1(\alpha {w_1}), \dots , G_1(\alpha {w_n}) \big) \right|^2 & \text{if $\alpha>0$,} \\
0 & \text{if $\alpha=0$,} \\
- \alpha^2 \left| \big( H_1(-\alpha {w_1}), \dots , H_1(-\alpha {w_n}) \big) \right|^2 & \text{if $\alpha<0$.}
\end{cases}
\end{align*}
This first eigenvalue is a strictly increasing function of $\alpha \in {\mathbb R}$.
\end{lemma}
\begin{proof}
By separation of variables, the first eigenvalue for the box arises from summing the first eigenvalues of each of the intervals. (The first eigenfunction for the box is the product of the first eigenfunctions of the intervals.) Hence the lemma follows directly from \autoref{firsteigen}.
\end{proof}
The first eigenvalue tends to infinity in magnitude when any one of the edge lengths tends to zero:
\begin{equation}\label{firsttoinfinity}
\lim_{w_n \to 0} \lambda_1(\mathcal{B}(w);\alpha) =
\begin{cases}
\ \ \infty & \text{if $\alpha>0$,} \\
-\infty & \text{if $\alpha<0$,}
\end{cases}
\end{equation}
by \autoref{1dimpos} and \autoref{1dimneg}, where the other edges $w_1,\dots,w_{n-1}$ are arbitrary and may vary as $w_n \to 0$. For more precise inequalities on the first eigenvalue see Freitas and Kennedy \cite[Appendix A.1]{FK18}.
The second eigenvalue of the box depends on knowing which edge is longest.
\begin{lemma}[Second eigenvalue] \label{secondeigenbox}
If the longest edge of the box is the first one, so that $w_1 \geq w_j$ for all $j$, then
\begin{equation} \label{eigentwodecomp}
\lambda_2\big( \mathcal{B}(w) ; \alpha \big) = \lambda_2\big( \mathcal{I}(w_1) ; \alpha \big) + \lambda_1\big( \mathcal{I}(w_2) ; \alpha \big) + \dots + \lambda_1\big( \mathcal{I}(w_n) ; \alpha \big)
\end{equation}
for all $\alpha \in {\mathbb R}$. This second eigenvalue is a strictly increasing function of $\alpha \in {\mathbb R}$.
\end{lemma}
It is no loss of generality to suppose the first edge of the box is the longest, since we may always rotate the box. Formula \eqref{eigentwodecomp} can be made more explicit by using the interval results from \autoref{identifyinginterval}.
\begin{proof}
By separation of variables, the second eigenvalue for the box arises from summing the second eigenvalue on one of the intervals, say the $k$th interval, and the first eigenvalues of the remaining $n-1$ intervals. We will show $w_k \geq w_j$ for all $j$, so that the longest interval is the one on which the second eigenvalue must be taken.
Since $\lambda_2\big( \mathcal{B}(w) ; \alpha \big)$ is the smallest eigenvalue having the specified form, the eigenvalue would increase if we used the second eigenvalue for $w_j$ instead of for $w_k$. Thus
\[
\lambda_2\big( \mathcal{I}(w_k) ; \alpha \big) + \lambda_1\big( \mathcal{I}(w_j) ; \alpha \big)
\leq
\lambda_1\big( \mathcal{I}(w_k) ; \alpha \big) + \lambda_2\big( \mathcal{I}(w_j) ; \alpha \big) .
\]
That is, the spectral gap of the interval increases from $w_k$ to $w_j$:
\[
(\lambda_2-\lambda_1)\big( \mathcal{I}(w_k) ; \alpha \big) \leq
(\lambda_2-\lambda_1)\big( \mathcal{I}(w_j) ; \alpha \big) .
\]
Since the spectral gap is strictly decreasing as a function of the length of the interval, by \autoref{1dimpos}, \autoref{1dimzero} and \autoref{1dimneg}, we deduce $w_k \geq w_j$.
\end{proof}
\begin{corollary}[Spectral gap of a box equals the gap of its longest edge] \label{boxgap}
If $w_1 \geq w_j$ for all $j$ then
\[
(\lambda_2-\lambda_1)\big( \mathcal{B}(w) ; \alpha \big) = (\lambda_2-\lambda_1)\big( \mathcal{I}(w_1) ; \alpha \big) , \qquad \alpha \in {\mathbb R} .
\]
\end{corollary}
This corollary follows by subtracting the formula in \autoref{firsteigenbox} from the formula in \autoref{secondeigenbox}.
\begin{example}[Second eigenvalue of the square] \label{squareexample}
The square $\mathcal{S}$ with edge length $2$ has vanishing second eigenvalue for
\[
\alpha_0 \simeq -0.68825 ,
\]
meaning $\lambda_2(\mathcal{S};\alpha_0) = 0$. Hence $\lambda_1(\mathcal{S};\alpha) < 0 < \lambda_2(\mathcal{S};\alpha)$ whenever $\alpha \in (\alpha_0,0)$.
\begin{proof}
We need only consider $\alpha<0$, since the second eigenvalue is positive when $\alpha \geq 0$. From \autoref{secondeigenbox}, \autoref{firsteigen} and \autoref{secondeigen} we find
\begin{align*}
\lambda_2(\mathcal{S};\alpha)
& = \lambda_2\big( \mathcal{I}(1) ; \alpha \big) + \lambda_1\big( \mathcal{I}(1) ; \alpha \big) \\
& = g_2^{-1}(\alpha)^2 - h_1^{-1}(-\alpha)^2 .
\end{align*}
We assume here that $\alpha> -1$, since otherwise the lemmas show the second eigenvalue of the square is negative, whereas we want it to vanish.
Thus the second eigenvalue vanishes when the number $\alpha \in (-1,0)$ satisfies $g_2^{-1}(\alpha) = h_1^{-1}(-\alpha)$. Writing $x=g_2^{-1}(\alpha) \in (0,\pi/2)$, the condition becomes $h_1(x)=-g_2(x)$, which reduces to $\tanh x = \cot x$. Solving numerically gives $x \simeq 0.93755$, and so $\alpha= g_2(x) = -x \cot x \simeq -0.68825$.
Since the first and second Robin eigenvalues of the square are strictly increasing as functions of $\alpha$ by \autoref{firsteigenbox} and \autoref{secondeigenbox}, we conclude the second eigenvalue is positive when $\alpha > \alpha_0$, and of course the first eigenvalue is negative when $\alpha<0$.
\end{proof}
\end{example}
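The root in this example is easy to reproduce numerically (illustrative only): on $(0,\pi/2)$ the function $\tanh x - \cot x$ is strictly increasing, so bisection applies:

```python
import math

# Solve tanh(x) = cot(x) on (0, pi/2), then alpha_0 = -x cot(x) = g2(x).
lo, hi = 1e-9, math.pi/2 - 1e-9
for _ in range(200):
    mid = (lo + hi) / 2
    if math.tanh(mid) < math.cos(mid)/math.sin(mid):  # still below the root
        lo = mid
    else:
        hi = mid
x = (lo + hi) / 2
alpha0 = -x * math.cos(x) / math.sin(x)
print(x, alpha0)  # x is about 0.9376 and alpha0 about -0.688
```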
\section{\bf Proofs of main theorems}
\label{mainproofs}
\begin{proof}[\bf Proof of \autoref{gapmonot}]
Without loss of generality we may assume $w_1$ is the largest of the $w_j$. We will show that the spectral gap is strictly increasing for $\alpha$ in each of the three intervals $(-\infty,-1/{w_1})$, $(-1/{w_1},0)$ and $(0,\infty)$; since the gap depends continuously on $\alpha$, this suffices.
The spectral gap of the box equals the spectral gap of its longest side, with
\[
(\lambda_2-\lambda_1)(\mathcal{B};\alpha) = (\lambda_2-\lambda_1)(\mathcal{I}(w_1);\alpha)
\]
by \autoref{boxgap}. Hence when $\alpha>0$,
\[
(\lambda_2-\lambda_1)(\mathcal{B};\alpha) = \frac{g_2^{-1}(\alpha {w_1})^2 - g_1^{-1}(\alpha {w_1})^2}{w_1^2}
\]
by using the formulas for the first two eigenvalues of the interval from \autoref{firsteigen} and \autoref{secondeigen}.
Thus the spectral gap is strictly increasing with respect to $\alpha>0$, by \eqref{inversederiv} in \autoref{derivcomparison}. The limit as $\alpha \to \infty$ equals $\big(\pi^2 - (\pi/2)^2\big)/w_1^2$,
which is the gap between the first two Dirichlet eigenvalues of the box.
When $-1/{w_1}<\alpha<0$, the gap is
\[
(\lambda_2-\lambda_1)(\mathcal{B};\alpha) = \frac{g_2^{-1}(\alpha {w_1})^2 + h_1^{-1}(-\alpha {w_1})^2}{w_1^2} ,
\]
which is strictly increasing with respect to $\alpha$ by \autoref{derivcomparisonneg}(i).
When $\alpha<-1/{w_1}$, the gap formula is that
\[
(\lambda_2-\lambda_1)(\mathcal{B};\alpha) = \frac{-h_2^{-1}(-\alpha {w_1})^2 + h_1^{-1}(-\alpha {w_1})^2}{w_1^2} ,
\]
which is strictly increasing with respect to $\alpha$ by \autoref{derivcomparisonneg}(ii). The gap tends to $0$ as $\alpha \to -\infty$, by the asymptotic formulas for the interval stated in \autoref{firsteigen} and \autoref{secondeigen}.
\end{proof}
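The piecewise gap formulas in this proof can be checked numerically; a sketch for the unit half-length $w_1=1$, sampling a few illustrative values of $\alpha$:

```python
import math

# Spectral gap of I(1) as a function of alpha, via the piecewise formulas.
def bisect(fn, y, lo, hi, n=200):
    for _ in range(n):
        mid = (lo + hi) / 2
        if fn(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

g1 = lambda x: x * math.tan(x)     # (0, pi/2) -> (0, infinity)
g2 = lambda x: -x / math.tan(x)    # (0, pi)   -> (-1, infinity)
h1 = lambda x: x * math.tanh(x)    # (0, infinity) -> (0, infinity)
h2 = lambda x: x / math.tanh(x)    # (0, infinity) -> (1, infinity)

def gap(a):
    if a > 0:
        return bisect(g2, a, 1e-12, math.pi - 1e-12)**2 - bisect(g1, a, 1e-12, math.pi/2 - 1e-12)**2
    if a == 0:
        return (math.pi/2)**2
    if a > -1:
        return bisect(g2, a, 1e-12, math.pi - 1e-12)**2 + bisect(h1, -a, 1e-12, 100.0)**2
    return -bisect(h2, -a, 1e-12, 100.0)**2 + bisect(h1, -a, 1e-12, 100.0)**2

alphas = [-5, -2, -0.5, 0, 0.5, 2, 5]
gaps = [gap(a) for a in alphas]
increasing = all(g < gnext for g, gnext in zip(gaps, gaps[1:]))
print(increasing)  # True
```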
\begin{proof}[\bf Proof of \autoref{ratiomonot}]
The first eigenvalue equals $0$ at $\alpha=0$, and is positive when $\alpha>0$ and negative when $\alpha<0$.
Further, it is concave as a function of $\alpha$, as we observed in \autoref{results} using the characterization of $\lambda_1(\Omega;\alpha)$ as the minimum of the Rayleigh quotient (which depends linearly on $\alpha$). Hence the difference quotient
\[
\frac{\lambda_1(\Omega;\alpha)}{\alpha} = \frac{\lambda_1(\Omega;\alpha)-\lambda_1(\Omega;0)}{\alpha-0}
\]
is positive for all $\alpha \neq 0$, and is decreasing as a function of $\alpha$, by concavity.
The theorem now follows, since
\[
\alpha \frac{\lambda_2(\Omega;\alpha)}{\lambda_1(\Omega;\alpha)} = \frac{\lambda_2(\Omega;\alpha)}{\lambda_1(\Omega;\alpha)/\alpha}
\]
where on the right side both the numerator and the denominator are positive, and the numerator is increasing and the denominator is decreasing as a function of $\alpha$, so that the ratio is increasing.
\end{proof}
\begin{proof}[\bf Proof of \autoref{secondeigconcave}]
\autoref{firsteigenbox} says that the first eigenvalue of the box is found by summing the first eigenvalues of each edge, and similarly for the second eigenvalue of the box in \autoref{secondeigenbox} except in that case one uses the second eigenvalue of the longest edge. Thus it suffices to establish the $1$-dimensional case of the theorem, namely, to show strict concavity with respect to $\alpha$ of the first and second eigenvalues of a fixed interval $\mathcal{I}(t)$.
If $\alpha > 0$ then $\lambda_1 \big( \mathcal{I}(t); \alpha \big)
= g_1^{-1}(\alpha t)^2/t^2$ by \autoref{firsteigen}, and so \autoref{concaveinverse} gives strict concavity with respect to $\alpha$. If $\alpha < 0$ then $\lambda_1 \big( \mathcal{I}(t); \alpha \big) = - h_1^{-1}(-\alpha t)^2/t^2$ and so again \autoref{concaveinverse} yields strict concavity. To ensure concavity of the first eigenvalue around the ``join'' at $\alpha=0$, we note the slopes match up from the left and the right there: $g_1(x) \simeq x^2$ and $h_1(x) \simeq x^2$ for $x \simeq 0$, and so $\lambda_1 \big( \mathcal{I}(t); \alpha \big) \simeq \alpha/t$ when $\alpha \simeq 0$.
If $\alpha > -1/t$ then $\lambda_2 \big( \mathcal{I}(t); \alpha \big) = g_2^{-1}(\alpha t)^2/t^2$ by \autoref{secondeigen} and so \autoref{concaveinverse} proves strict concavity with respect to $\alpha$. If $\alpha < -1/t$ then $\lambda_2 \big( \mathcal{I}(t); \alpha \big) = - h_2^{-1}(-\alpha t)^2/t^2$ and again \autoref{concaveinverse} proves strict concavity.
For concavity of the second eigenvalue around the join at $\alpha=-1/t$, we will show the slopes from the left and right agree. For the right, we note that $g_2(x) = -x \cot x \simeq -1+x^2/3$ when $x \simeq 0$ and so $g_2^{-1}(y) \simeq \sqrt{3(1+y)}$, hence $\lambda_2 \big( \mathcal{I}(t); \alpha \big) \simeq (3/t)(\alpha+1/t)$ when $\alpha \simeq -1/t$. For the left, $h_2(x) = x \coth x \simeq 1+x^2/3$ when $x \simeq 0$ and so $h_2^{-1}(y) \simeq \sqrt{3(y-1)}$, and hence once again $\lambda_2 \big( \mathcal{I}(t); \alpha \big) \simeq (3/t)(\alpha+1/t)$ when $\alpha \simeq -1/t$. Thus the slopes of the second eigenvalue curve from the left and right are the same at $\alpha=-1/t$, namely $3/t$. Therefore, by our work above, strict concavity holds on a neighborhood of that point, completing the proof.
\end{proof}
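The matching slope $3/t$ at the join can also be observed numerically; a sketch with $t=1$ and a small offset $\varepsilon$ on either side of $\alpha=-1$:

```python
import math

# Near alpha = -1 (t = 1): lambda_2(I(1); alpha) ~ 3*(alpha + 1) from both sides.
def bisect(fn, y, lo, hi, n=200):
    for _ in range(n):
        mid = (lo + hi) / 2
        if fn(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

g2 = lambda x: -x / math.tan(x)   # increasing from -1 on (0, pi)
h2 = lambda x: x / math.tanh(x)   # increasing from 1 on (0, infinity)

eps = 1e-4
lam2_right = bisect(g2, -1 + eps, 1e-12, math.pi - 1e-12)**2   # at alpha = -1 + eps
lam2_left = -bisect(h2, 1 + eps, 1e-12, 100.0)**2              # at alpha = -1 - eps

slope_right = lam2_right / eps
slope_left = lam2_left / (-eps)
print(slope_right, slope_left)  # both close to 3
```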
In order to prove the next theorem, we need an elementary convexity result for the norm of a separated vector field.
\begin{lemma}\label{convexfield}
If $f_1,\dots,f_n$ are nonnegative, strictly convex functions on ${\mathbb R}$ then
\[
\left| \big( f_1(z_1),\dots,f_n(z_n) \big) \right|
\]
is strictly convex as a function of $z=(z_1,\dots,z_n) \in {\mathbb R}^n$.
\end{lemma}
\begin{proof}
If $w=(w_1,\dots,w_n)$ and $z=(z_1,\dots,z_n)$ are given and $0 < \varepsilon < 1$, then by the triangle inequality,
\begin{align*}
& (1-\varepsilon)\left| \big( f_1(w_1),\dots,f_n(w_n) \big) \right| + \varepsilon \left| \big( f_1(z_1),\dots,f_n(z_n) \big) \right| \\
& \geq \left| \big( (1-\varepsilon)f_1(w_1) + \varepsilon f_1(z_1),\dots,(1-\varepsilon)f_n(w_n) + \varepsilon f_n(z_n) \big) \right| \\
& \geq \left| \big( f_1((1-\varepsilon)w_1+\varepsilon z_1),\dots,f_n((1-\varepsilon)w_n+\varepsilon z_n) \big) \right|
\end{align*}
by convexity of $f_1,\dots,f_n$ and the fact that all components of the vectors are nonnegative. Further, if equality holds then $w_1=z_1,\dots,w_n=z_n$ by strict convexity of $f_1,\dots,f_n$. Thus strict convexity holds in the lemma.
\end{proof}
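A quick randomized check of the lemma (an illustration with the hypothetical choice $f_i(x)=x^2+1$, which is nonnegative and strictly convex) confirms the midpoint convexity inequality on sample points:

```python
import math
import random

random.seed(0)

def norm_field(z):
    # |(f_1(z_1), ..., f_n(z_n))| with the sample choice f_i(x) = x^2 + 1
    return math.sqrt(sum((x * x + 1) ** 2 for x in z))

for _ in range(1000):
    w = [random.uniform(-3, 3) for _ in range(4)]
    z = [random.uniform(-3, 3) for _ in range(4)]
    mid = [(a + b) / 2 for a, b in zip(w, z)]
    # midpoint convexity: N((w+z)/2) <= (N(w) + N(z))/2
    assert norm_field(mid) <= (norm_field(w) + norm_field(z)) / 2 + 1e-12
print("midpoint convexity verified on random samples")
```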
\begin{proof}[\bf Proof of \autoref{lambda1higherdim}]
We start with convexity results for the first eigenvalue. Given a vector $z = (z_1,\dots,z_n) \in {\mathbb R}^n$, write
\[
e^z = (e^{z_1},\dots,e^{z_n}) .
\]
We will prove:
\begin{align}
\text{if $\alpha>0$ then $\sqrt{\lambda_1(\mathcal{B}(e^z);\alpha)}$ is a strictly convex function of $z \in {\mathbb R}^n$,} \label{lambda1sqrti} \\
\text{if $\alpha<0$ then $\sqrt{-\lambda_1(\mathcal{B}(e^z);\alpha)}$ is a strictly convex function of $z \in {\mathbb R}^n$.} \label{lambda1sqrtii}
\end{align}
First, \autoref{firsteigenbox} gives when $\alpha>0$ that
\[
\sqrt{\lambda_1(\mathcal{B}(e^z);\alpha)}
= \alpha \left| \big( G_1(\alpha e^{z_1}), \dots , G_1(\alpha e^{z_n}) \big) \right| .
\]
Each individual component $G_1(\alpha e^{z_j})$ is strictly convex as a function of $z_j$ by \autoref{convexity}, and so \autoref{convexfield} implies conclusion \eqref{lambda1sqrti}. Similarly, when $\alpha<0$ we have

\[
\sqrt{-\lambda_1(\mathcal{B}(e^z);\alpha)}
= |\alpha| \left| \big( H_1(|\alpha| e^{z_1}), \dots , H_1(|\alpha| e^{z_n}) \big) \right| .
\]
The components $H_1(|\alpha| e^{z_j})$ are strictly convex as functions of $z_j$, by \autoref{convexity}, and so conclusion \eqref{lambda1sqrtii} follows from \autoref{convexfield}. Now we can prove the theorem.
(i) Suppose $\alpha>0$. Consider rectangular boxes $\mathcal{B}(e^z)$ of given volume $V$, which means $2e^{z_1} \cdots 2e^{z_n}=V$, or $z_1 + \dots + z_n = \log(2^{-n}V)$. This set of vectors $z$ forms a hyperplane in ${\mathbb R}^n$ perpendicular to the direction $(1,\dots,1)$, and the function $f(z)=\sqrt{\lambda_1(\mathcal{B}(e^z);\alpha)}$ is strictly convex on that hyperplane by \eqref{lambda1sqrti}.
We want to show $f$ achieves its strict global minimum at the cube. That is, we want $f$ to have a strict global minimum at the point $z=(t,\dots,t)$ where the hyperplane intersects the line through the origin in direction $(1,\dots,1)$. Due to the strict convexity, it suffices to show that the gradient of $f$ restricted to the hyperplane vanishes at this $z$, which means we want $(\nabla f) (t,\dots,t)$ to be parallel to $(1,\dots,1)$. That the gradient vector has this property follows from the invariance of $f(z_1,\dots,z_n)$ under permutation of the variables.
\emph{Note.} Convexity of $\lambda_1(\mathcal{B}(e^z);\alpha)$ was proved by Keady and Wiwatanapataphee \cite[Corollary 2]{KW18} when $\alpha>0$. That convexity is weaker than \eqref{lambda1sqrti}, where the square root is imposed on the eigenvalue, but it was strong enough for them to prove part (i) of \autoref{lambda1higherdim}.
\smallskip
(ii) Suppose $\alpha<0$. Argue as in part (i), except this time using \eqref{lambda1sqrtii} instead of \eqref{lambda1sqrti} and letting $f(z)=\sqrt{-\lambda_1(\mathcal{B}(e^z);\alpha)}$.
\end{proof}
\begin{proof}[\bf Proof of \autoref{lambda1L}]
By scale invariance of the expression $\lambda_1(\mathcal{R};\alpha/L)A$ we may assume the rectangle has perimeter $L=2$. That is, we need only consider the family of rectangles
\[
\mathcal{R}(p) = (0,p) \times (0,1-p) ,
\]
where $0 < p < 1$. Clearly these rectangles have perimeter $2$ and area $p(1-p)$.
(i) First suppose $\alpha>0$. We claim $\lambda_1 \big( \mathcal{R}(p);\alpha/2 \big) A \big( \mathcal{R}(p) \big)$ is strictly convex as a function of $p \in (0,1)$, and hence is strictly decreasing for $p \in (0,1/2]$ and strictly increasing for $p \in [1/2,1)$, with its minimum at $p=1/2$ (the square).
By \autoref{firsteigenbox} applied with $\alpha/2$ instead of $\alpha$, and with ${w_1}=p/2$ and ${w_2}=(1-p)/2$, we have
\[
\lambda_1(\mathcal{R}(p);\alpha/2) A \big( \mathcal{R}(p) \big) = (\alpha/2)^2 \left( G_1(\alpha p/4)^2 + G_1(\alpha (1-p)/4)^2 \right) p(1-p) .
\]
The function $p \mapsto G_1(\alpha p/4)^2 \, p(1-p)$ is strictly convex for $0<p<1$ by \autoref{convexityyy}(i), and replacing $p$ by $1-p$ shows that $p \mapsto G_1(\alpha (1-p)/4)^2 \, p(1-p)$ is strictly convex also. Clearly $\lambda_1 \big( \mathcal{R}(p);\alpha/2 \big) A \big( \mathcal{R}(p) \big)$ is even with respect to $p=1/2$ since the rectangle $\mathcal{R}(1-p)$ is the same as $\mathcal{R}(p)$ except rotated by angle $\pi/2$. Thus by the strict convexity just proved, the function $p \mapsto \lambda_1 \big( \mathcal{R}(p);\alpha/2 \big) A \big( \mathcal{R}(p) \big)$ must be strictly decreasing for $p \in (0,1/2]$ and strictly increasing for $p \in [1/2,1)$.
(ii) Next suppose $\alpha<0$. We claim $\lambda_1 \big( \mathcal{R}(p);\alpha/2 \big) A \big( \mathcal{R}(p) \big)$ is strictly decreasing for $p \in (0,1/2]$ and strictly increasing for $p \in [1/2,1)$, so that again the minimum occurs for the square, $p=1/2$.
\autoref{firsteigenbox} with ${w_1}=p/2$ and ${w_2}=(1-p)/2$ gives that
\begin{equation} \label{firstLeq}
-\lambda_1(\mathcal{R}(p);\alpha/2) A \big( \mathcal{R}(p) \big) = (\beta/2)^2 \left( H_1(\beta p/4)^2 + H_1(\beta (1-p)/4)^2 \right) p(1-p)
\end{equation}
where $\beta=-\alpha>0$. To prove the claim it suffices to show the existence of a number $p(\beta)$ with
\[
0 \leq p(\beta) < \frac{1}{2}
\]
such that the right side of \eqref{firstLeq} is strictly increasing on $\big( 0, p(\beta) \big)$, strictly concave on $\big( p(\beta),1-p(\beta) \big)$, and strictly decreasing on $\big( 1-p(\beta) , 1\big)$ --- because then the evenness of \eqref{firstLeq} under $p \mapsto 1-p$ guarantees that the right side of \eqref{firstLeq} is strictly increasing on $(0,1/2]$ and strictly decreasing on $[1/2,1)$.
In fact, we need only show that the term $p \mapsto H_1(\beta p/4)^2 p(1-p)$ is strictly increasing on $\big( 0, p(\beta) \big)$, strictly concave on $\big( p(\beta),1-p(\beta) \big)$, and strictly decreasing on $\big( 1-p(\beta),1 \big)$, because then the same holds true when we replace $p$ by $1-p$, and adding two functions with these properties yields another function with these properties.
\autoref{concavityH1} establishes the desired properties with $p(\beta)=y_1(\beta/4)$, and in fact establishes a little more, namely that $H_1(\beta p/4)^2 p(1-p)$ is strictly concave and strictly decreasing on the whole interval $\big( p(\beta),1 \big)$. Thus the theorem is proved.
\end{proof}
\begin{proof}[\bf Proof of \autoref{linearbound}]
We extend the $2$-dimensional proof given by Freitas and Laugesen \cite[Theorem A]{FL18b}. Substituting the constant trial function $u(x) \equiv 1$ into the Rayleigh quotient gives the upper bound
\[
\lambda_1(\Omega;\alpha V^{1-2/n}/S) V^{2/n} \leq \frac{0 + (\alpha V^{1-2/n}/S)\int_{\partial \Omega} 1^2 \, dS}{\int_\Omega 1^2 \, dx} \, V^{2/n} = \alpha.
\]
We show this inequality must be strict. If equality held, then the constant trial function $u \equiv 1$ would be a first eigenfunction, and taking the Laplacian of it would imply $\lambda_1(\Omega;\alpha V^{1-2/n}/S) = 0$, and hence $\alpha=0$, contradicting the hypothesis in the theorem. Hence equality cannot hold and the inequality is strict.
To show equality is attained asymptotically for rectangular boxes that degenerate, consider a box $\mathcal{B}(w)$ and assume the volume is fixed, say $V=1$ for convenience. Suppose the box degenerates, which means the surface area tends to infinity. The surface area is
\[
S = 2 \sum_{k=1}^n \frac{2w_1 \dots 2w_n}{2w_k} = \sum_{k=1}^n \frac{1}{w_k} ,
\]
since $V=2w_1 \cdot \dots \cdot 2w_n=1$.
For $\alpha>0$ we have
\begin{align*}
\lambda_1\big(\mathcal{B}(w);\alpha/S\big)
& = \sum_{k=1}^n g_1^{-1}(w_k \alpha/S)^2/w_k^2 \qquad \text{by \autoref{firsteigenbox}} \\
& \geq \alpha - n\alpha^2/S^2 \qquad \text{since $g_1^{-1}(y)^2 \geq y-y^2$ by \autoref{inversebounds}} \\
& \to \alpha \qquad \qquad \qquad \text{as $S \to \infty$.}
\end{align*}
When $\alpha<0$ the proof is similar, except replacing $g_1^{-1}(w_k \alpha/S)^2$ with $-h_1^{-1}(w_k |\alpha|/S)^2$.
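To illustrate this limit numerically (a sketch only, in two dimensions with $\alpha>0$, assuming the form $g_1(x)=x\tan x$ consistent with the expansion $g_1(x) \simeq x^2$ used earlier), take boxes with half-widths $(w_1,w_2)$ satisfying $w_1 w_2 = 1/4$, so that $V=1$; as $w_1 \to 0$ the surface area $S=1/w_1+1/w_2$ blows up and $\lambda_1$ approaches $\alpha$:

```python
import math

def bisect(f, lo, hi):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
            flo = f(lo)
    return 0.5 * (lo + hi)

# inverse of the assumed g_1(x) = x tan x on (0, pi/2)
g1inv = lambda y: bisect(lambda x: x * math.tan(x) - y, 1e-12, math.pi / 2 - 1e-9)

alpha = 1.0
errors = []
for w1 in (0.1, 0.01, 0.001):
    w2 = 0.25 / w1                 # volume 2*w1 * 2*w2 = 1
    S = 1 / w1 + 1 / w2            # surface area formula with V = 1
    lam = sum(g1inv(w * alpha / S) ** 2 / w ** 2 for w in (w1, w2))
    errors.append(abs(lam - alpha))
print(errors)  # decreasing toward 0 as the box degenerates
```

The observed error is consistent with the bound $\alpha - \lambda_1 \leq n\alpha^2/S^2$ from the displayed estimate.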
\end{proof}
\begin{proof}[\bf Proof of \autoref{lambda2higherdim}]
The second eigenvalue of the box is
\begin{equation} \label{eigentwo}
\lambda_2\big( \mathcal{B}(w) ; \alpha \big) = \lambda_2\big( \mathcal{I}(w_1) ; \alpha \big) + \lambda_1\big( \mathcal{I}(w_2) ; \alpha \big) + \dots + \lambda_1\big( \mathcal{I}(w_n) ; \alpha \big)
\end{equation}
by \autoref{secondeigenbox}, where we take $w_1$ to be the largest of the $w_j$, that is, we assume the first edge of the box is its longest.
When $\alpha=0$ (the Neumann case), the theorem is easy and well known:
\[
\lambda_2\big( \mathcal{B}(w) ; \alpha \big) = \left( \frac{\pi}{2w_1} \right)^{\! \! 2}
\]
and this expression is largest when the box is a cube having the same volume as the original box $\mathcal{B}(w)$, because in that case the longest side is as short as possible.
Next suppose $\alpha<0$. We proceed in two steps. First we equalize the shorter edges of the box. Let
\[
w_2^* = \dots = w_n^* =(w_2 \cdots w_n)^{1/(n-1)}
\]
so that $w_j^* \leq w_1$, and define $\widehat{w}=(w_2,\dots,w_n) , \widehat{w}^*=(w_2^*,\dots,w_n^*) \in {\mathbb R}^{n-1}$. The $(n-1)$-dimensional boxes $\mathcal{B}(\widehat{w})$ and $\mathcal{B}(\widehat{w}^*)$ have the same volume, since $w_2 \cdots w_n = w_2^* \cdots w_n^*$. Formula \eqref{eigentwo} and maximality of the cube for the first eigenvalue when $\alpha<0$, from \autoref{lambda1higherdim}(ii), together show that
\begin{align*}
\lambda_2\big( \mathcal{B}(w) ; \alpha \big)
& = \lambda_2\big( \mathcal{I}(w_1) ; \alpha \big) + \lambda_1\big( \mathcal{B}(\widehat{w}) ; \alpha \big) \\
& \leq \lambda_2\big( \mathcal{I}(w_1) ; \alpha \big) + \lambda_1\big( \mathcal{B}(\widehat{w}^*) ; \alpha \big) ,
\end{align*}
with equality if and only if $w_2=\dots =w_n$.
Next we equalize the first edge as well. Let
\[
t = (w_1 w_2 \cdots w_n)^{1/n} = (w_1 w_2^* \cdots w_n^*)^{1/n} ,
\]
so that $w_j^* \leq t \leq w_1$ for each $j$. Then
\begin{align*}
\lambda_2\big( \mathcal{I}(w_1) ; \alpha \big) + \lambda_1\big( \mathcal{B}(\widehat{w}^*) ; \alpha \big)
& = \lambda_2\big( \mathcal{I}(w_1) ; \alpha \big) + \lambda_1\big( \mathcal{I}(w_2^*) ; \alpha \big) + \dots + \lambda_1\big( \mathcal{I}(w_n^*) ; \alpha \big) \\
& \leq \lambda_2\big( \mathcal{I}(t) ; \alpha \big) + \lambda_1\big( \mathcal{I}(t) ; \alpha \big) + \dots + \lambda_1\big( \mathcal{I}(t) ; \alpha \big)
\end{align*}
by the strict monotonicity properties of $\lambda_1$ and $\lambda_2$ with respect to the length of the interval, in \autoref{1dimneg}, when $\alpha < 0$. Equality holds if and only if $w_1=t$ and $w_j^*=t$ for each $j$. Putting together our inequalities, we conclude
\[
\lambda_2\big( \mathcal{B}(w) ; \alpha \big) \leq \lambda_2\big( \mathcal{B}(t,\dots,t) ; \alpha \big)
\]
with equality if and only if $w=(t,\dots,t)$. That is, $\lambda_2(\mathcal{B};\alpha)$ is maximal for the cube and only the cube, among rectangular boxes $\mathcal{B}$ of given volume.
\end{proof}
\begin{proof}[\bf Proof of \autoref{sigma1higherdim}]
The Steklov eigenvalue problem for the Laplacian is
\[
\begin{split}
\Delta u & = 0 \ \ \quad \text{in $\Omega$,} \\
\frac{\partial u}{\partial\nu} & = \sigma u \quad \text{on $\partial \Omega$,}
\end{split}
\]
where the eigenvalues are $0=\sigma_0 < \sigma_1 \leq \sigma_2 \leq \dots$. Clearly $\sigma$ belongs to the Steklov spectrum exactly when $0$ belongs to the Robin spectrum for parameter $\alpha=-\sigma$. In particular, $\alpha=-\sigma_1$ is the horizontal intercept value for the second Robin spectral curve, meaning $\lambda_2(\Omega;-\sigma_1)=0$.
Let $\mathcal{C}$ be a cube having the same volume as the box $\mathcal{B}$. \autoref{lambda2higherdim} says $\lambda_2$ is smaller for $\mathcal{B}$ than for $\mathcal{C}$, at each $\alpha$, and since the eigenvalues are increasing with respect to $\alpha$, we conclude the horizontal intercept is larger (less negative) for $\mathcal{B}$ than for $\mathcal{C}$. In other words, $\sigma_1(\mathcal{B}) \leq \sigma_1(\mathcal{C})$. The inequality is strict due to the strictness in \autoref{lambda2higherdim}.
A more detailed account of this proof goes as follows. From \autoref{secondeigenbox} and results in \autoref{identifyinginterval} we know $\lambda_2(\mathcal{B};\alpha)$ is continuous and strictly increasing as a function of $\alpha$, and tends to $-\infty$ as $\alpha \to -\infty$, and is positive at $\alpha=0$. Hence there is a unique horizontal intercept value $\alpha_{\mathcal{B}}<0$ at which $\lambda_2(\mathcal{B};\alpha_{\mathcal{B}})=0$. Note $\sigma_1(\mathcal{B})=-\alpha_{\mathcal{B}}$, since the fact that $\lambda_1(\mathcal{B};\alpha) < 0 <\lambda_2(\mathcal{B};\alpha)$ for all $\alpha \in (\alpha_{\mathcal{B}},0)$ implies that no $\alpha$-value in that interval corresponds to a Steklov eigenvalue for $\mathcal{B}$. Similarly there is a unique horizontal intercept value $\alpha_{\mathcal{C}}<0$ at which $\lambda_2(\mathcal{C};\alpha_{\mathcal{C}})=0$, and $\sigma_1(\mathcal{C})=-\alpha_{\mathcal{C}}$.
Choosing $\alpha=\alpha_{\mathcal{B}}$ in \autoref{lambda2higherdim} gives that
\[
0 = \lambda_2(\mathcal{B};\alpha_{\mathcal{B}}) \leq \lambda_2(\mathcal{C};\alpha_{\mathcal{B}}) ,
\]
with strict inequality unless the box $\mathcal{B}$ is a cube. Because the eigenvalues are strictly increasing functions of $\alpha$, it follows that $\alpha_{\mathcal{C}} \leq \alpha_{\mathcal{B}}$ with strict inequality unless the box is a cube. That is, $\sigma_1(\mathcal{C}) \geq \sigma_1(\mathcal{B})$ with strict inequality unless the box is a cube.
\end{proof}
\begin{proof}[\bf Proof of \autoref{lambda2L}] By scale invariance, it suffices to prove the theorem for the family of rectangles $\mathcal{R}(p) = (0,p) \times (0,1-p)$. These rectangles have perimeter $L=2$ and area $A=p(1-p)$, and so the quantity to be maximized is
\[
Q(p) = \lambda_2\big( \mathcal{R}(p);\alpha/2\big) p(1-p) .
\]
We may assume $p \in (0,1/2]$, so that the long side has length $1-p$ and the short side has length $p$. Then by \autoref{secondeigenbox}, the second eigenvalue of the rectangle equals
\begin{equation} \label{normalizedsecond}
\lambda_2\big( \mathcal{R}(p);\alpha/2\big) = \lambda_2\big( (0,1-p); \alpha/2\big) + \lambda_1\big( (0,p); \alpha/2\big) , \qquad p \in (0,1/2] .
\end{equation}
Step 1. We start by proving inequalities for the square ($p=1/2$), specifically that
\begin{align}
\lambda_2\big( \mathcal{R}(1/2);\alpha/2\big)/4 & > \alpha, \qquad \alpha \in (\alpha_-,\alpha_+) , \label{biggeralpha1} \\
\lambda_2\big( \mathcal{R}(1/2);\alpha/2\big)/4 & < \alpha, \qquad \alpha \notin [\alpha_-,\alpha_+] . \label{biggeralpha2}
\end{align}
Equality in \eqref{biggeralpha1} would mean
\begin{equation} \label{eq:concavesquare}
\lambda_2\big( \mathcal{I}(1/4) ; \alpha/2 \big) + \lambda_1\big( \mathcal{I}(1/4) ; \alpha/2 \big) = 4\alpha ,
\end{equation}
which when $\alpha>0$ reduces to
\[
g_2^{-1}(\alpha/8)^2 + g_1^{-1}(\alpha/8)^2 = \alpha/4
\]
by applying the interval eigenvalue formulas in \autoref{firsteigen} and \autoref{secondeigen}. Thus equality holds at $\alpha_+ \simeq 33.2$ by definition \eqref{alphaplus}. When $\alpha<-8$, equality \eqref{eq:concavesquare} reduces to
\[
-h_2^{-1}(-\alpha/8)^2 - h_1^{-1}(-\alpha/8)^2 = \alpha/4 ,
\]
and so equality holds at $\alpha_- \simeq -9.4$ by definition \eqref{alphaneg}. The strict inequalities \eqref{biggeralpha1} and \eqref{biggeralpha2} now follow from strict concavity of the second eigenvalue of the fixed rectangle $\mathcal{R}(1/2)$ as a function of $\alpha \in {\mathbb R}$ (\autoref{secondeigconcave}).
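The two borderline values can be recovered numerically (an illustration only; the forms $g_1(x)=x\tan x$ and $h_1(x)=x\tanh x$ are assumed here, consistent with the small-$x$ expansions used earlier, while $g_2(x)=-x\cot x$ and $h_2(x)=x\coth x$ are as in the text) by solving the two displayed equations with bisection:

```python
import math

def bisect(f, lo, hi):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
            flo = f(lo)
    return 0.5 * (lo + hi)

# assumed inverse eigenvalue functions, following the one-dimensional analysis
g1inv = lambda y: bisect(lambda x: x * math.tan(x) - y, 1e-12, math.pi / 2 - 1e-9)
g2inv = lambda y: bisect(lambda x: -x / math.tan(x) - y, 1e-9, math.pi - 1e-9)
h1inv = lambda y: bisect(lambda x: x * math.tanh(x) - y, 1e-12, 50.0)
h2inv = lambda y: bisect(lambda x: x / math.tanh(x) - y, 1e-9, 50.0)

# alpha_+ solves g2inv(a/8)^2 + g1inv(a/8)^2 = a/4 for a > 0
alpha_plus = bisect(lambda a: g2inv(a / 8) ** 2 + g1inv(a / 8) ** 2 - a / 4, 1.0, 40.0)

# alpha_- solves -h2inv(-a/8)^2 - h1inv(-a/8)^2 = a/4 for a < -8
alpha_minus = bisect(lambda a: -h2inv(-a / 8) ** 2 - h1inv(-a / 8) ** 2 - a / 4, -20.0, -8.5)

print(round(alpha_plus, 1), round(alpha_minus, 1))  # approximately 33.2 and -9.4
```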
\smallskip
Step 2. Next we establish convexity facts for the interval, on various ranges of $\alpha$-values. If $\alpha>0$ then
\begin{align}
p \mapsto \lambda_1\big( (0,p); \alpha\big) p(1-p) \qquad \text{is strictly convex for $p \in (0,1)$,} \label{firstconvex} \\
p \mapsto \lambda_2\big( (0,p); \alpha\big) p(1-p) \qquad \text{is strictly convex for $p \in (0,1)$.} \label{secondconvex}
\end{align}
Claim \eqref{firstconvex} holds by \autoref{convexityyy}(i), since $\lambda_1\big( (0,p); \alpha\big) = \alpha^2 G_1( \alpha p/2)^2$ by \autoref{firsteigen} applied with $t=p/2$. Similarly claim \eqref{secondconvex} holds by \autoref{convexityyy}(ii), since $\lambda_2\big((0,p);\alpha \big) = \alpha^2 G_2(\alpha p/2)^2$ by \autoref{secondeigen}.
If $-6 \leq \alpha < 0$ then
\begin{equation} \label{specialalpha}
\text{$p \mapsto \lambda_1\big( (0,p); \alpha\big) p(1-p)$ is strictly increasing and strictly convex for $p \in (0,1)$,}
\end{equation}
by applying \autoref{concavityH1} with $c=|\alpha|/2 \leq 3$.
If $\alpha < -6$ then
\begin{align}
\text{$p \mapsto \lambda_1\big( (0,p); \alpha\big) p(1-p)$ is strictly decreasing for\ } & \text{$p \in \big(0,y_1(|\alpha|/2)\big)$} \label{specialalpha6} \\
\text{and is strictly increasing and strictly convex for\ } & \text{$p \in \big(y_1(|\alpha|/2),1\big)$,} \label{specialalpha7}
\end{align}
by \autoref{concavityH1} applied with $c=|\alpha|/2 > 3$. The lemma showed $0<y_1(|\alpha|/2)<1/2$.
Now we claim when $\alpha < 0$ that the second eigenvalue satisfies:
\begin{align}
\text{$p \mapsto \lambda_2\big( (0,p); \alpha\big) p(1-p)$ is strictly decreasing when $p \in \big(0,\min(1,2/|\alpha|) \big)$,} \label{specialalpha2} \\
\text{$\lambda_2\big( (0,p); \alpha\big) < 0$ when $p \in \big(\! \min(1,2/|\alpha|),1 \big)$.} \label{specialalpha3}
\end{align}
Indeed, if $p<2/|\alpha|$ then $\alpha>-2/p$ and so $\lambda_2\big( (0,p); \alpha\big) = \alpha^2 G_2(\alpha p/2)^2$ by \autoref{secondeigen}. Thus \eqref{specialalpha2} holds by \autoref{convexityyy}(iii) with $c=|\alpha|/2$. For \eqref{specialalpha3}, if $p>2/|\alpha|$ then $\alpha<-2/p$ and so $\lambda_2\big( (0,p); \alpha\big) = -\alpha^2 H_2(-\alpha p/2)^2 < 0$ by \autoref{secondeigen}.
Further, if $\alpha < -2$ then the second eigenvalue satisfies that
\begin{align}
\text{$p \mapsto \lambda_2\big( (0,p); \alpha\big) p(1-p)$ is strictly convex when\ } & \text{$p \in \big( 2/|\alpha|,1 \big)$} \label{specialalpha4} \\
\text{and strictly increasing when\ } & \text{$p \in \big( y_2(|\alpha|/2),1 \big)$.} \label{specialalpha5}
\end{align}
Here, \autoref{concavityhtwo} with $c=|\alpha|/2>1$ ensures that $y_2(|\alpha|/2) \geq 2/|\alpha|$ and so the $p$ values in \eqref{specialalpha4} and \eqref{specialalpha5} satisfy $p>2/|\alpha|$. Hence $\lambda_2\big( (0,p); \alpha\big)= -\alpha^2 H_2(|\alpha| p/2)^2$, and applying \autoref{concavityhtwo} yields \eqref{specialalpha4} and \eqref{specialalpha5}. That lemma also gives that
\begin{equation} \label{y1y2l1}
y_1(|\alpha|/2) < 1 - y_2(|\alpha|/2) .
\end{equation}
\smallskip
Step 3. At last we may prove the theorem.
\smallskip
(i) Suppose $\alpha>0$. Observe $Q(p)$ is strictly convex for $p \in (0,1/2]$, by \eqref{normalizedsecond}, \eqref{firstconvex} and \eqref{secondconvex}. It follows that the maximum of $Q(p)$ occurs either as $p \to 0$ or at $p=1/2$. As the rectangle degenerates, the limiting value is $\lim_{p \to 0} Q(p) = \alpha$, since $\lambda_2\big( (0,1-p); \alpha/2\big)$ converges to the finite value $\lambda_2\big( (0,1); \alpha/2\big)$ while $\lambda_1\big( (0,p); \alpha/2\big) \sim (\alpha/2)/(p/2)=\alpha/p$ as $p \to 0$ (using the blow-up rate from the proof of \autoref{1dimpos}). Meanwhile, the square has $Q(1/2) = \lambda_2\big( \mathcal{R}(1/2);\alpha/2\big)/4$. It follows from \eqref{biggeralpha1} and \eqref{biggeralpha2} that when $\alpha \in (0,\alpha_+)$ the maximum of $Q(p)$ occurs at $p=1/2$, and when $\alpha \in (\alpha_+,\infty)$ the maximum is achieved in the limit as $p \to 0$.
In the borderline case $\alpha=\alpha_+$, equality holds in \eqref{biggeralpha1} and so $\lim_{p \to 0} Q(p) = Q(1/2)$, from which strict convexity of $Q$ implies $Q(p) < Q(1/2)$ for all $p \in (0,1/2)$. Therefore the square gives the largest value for $Q$.
\smallskip
(ii) Suppose $\alpha=0$, in which case the second Neumann eigenvalue of the rectangle is $\pi^2/(1-p)^2$, remembering here that the long side has length $1-p$. Multiplying by the area $p(1-p)$ gives $\pi^2 p/(1-p)$, which for $p \in (0,1/2]$ is strictly maximal at $p=1/2$. In other words, the square maximizes the area-normalized second eigenvalue.
\smallskip
(iii) Suppose $-4 \leq \alpha < 0$. Then $Q(p)$ is strictly increasing when $p \in (0,1/2]$, by using \eqref{normalizedsecond} and applying \eqref{specialalpha} with $\alpha/2$ instead of $\alpha$, and applying \eqref{specialalpha2} with $\alpha/2$ instead of $\alpha$ and $1-p$ instead of $p$. (The assumption $-4 \leq \alpha < 0$ ensures that $\min(1,2/|\alpha/2|)=1$ when applying \eqref{specialalpha2}.) Hence $Q(p)$ achieves its maximum at $p=1/2$ (the square).
\smallskip
(iv) Suppose $-8 \leq \alpha < -4$, so that $\min(1,2/|\alpha/2|)=4/|\alpha|$. The argument in the preceding paragraph gives this time that $Q(p)$ is strictly increasing when $p \in (q(\alpha),1/2]$, where $q(\alpha)=1-4/|\alpha| \in (0,1/2]$. We will show $Q(p) < Q(q(\alpha))$ when $p \in (0,q(\alpha))$, so that once again $p=1/2$ gives the maximum of $Q$. To show $Q(p) < Q(q(\alpha))$, observe that $\lambda_1\big( (0,p); \alpha/2\big) p(1-p)$ is strictly increasing in $p$ by \eqref{specialalpha}, while the second eigenvalue $\lambda_2\big( (0,1-p); \alpha/2\big)$ equals zero at $p=q(\alpha)$ (by \autoref{secondeigen}, since $\alpha/2=-2/(1-q(\alpha))$) and is negative when $0 < p < q(\alpha)$ (by applying \eqref{specialalpha3}).
\smallskip
(v) Suppose $-12 \leq \alpha < -8$. Let $c=|\alpha|/4 \leq 3$. From \eqref{specialalpha} with $\alpha/2$ in place of $\alpha$ we know $\lambda_1\big( (0,p); \alpha/2\big) p(1-p)$ is strictly convex for $p \in (0,1)$. From \eqref{specialalpha4} with $p$ replaced by $1-p$ we see that $\lambda_2\big( (0,1-p); \alpha/2\big) p(1-p)$ is strictly convex for $p \in (0,1-4/|\alpha|)$. This interval includes $(0,1/2]$ because $\alpha < -8$. Hence $Q(p)$ is strictly convex for $p \in (0,1/2]$, by \eqref{normalizedsecond}, and so the maximum of $Q$ occurs either as $p \to 0$ or at $p=1/2$. The limiting value as the rectangle degenerates is $\lim_{p \to 0} Q(p) = \alpha$ since $\lambda_1\big( (0,p); \alpha/2\big) \sim (\alpha/2)/(p/2) = \alpha/p$ as $p \to 0$ (using the blow-up rate from the proof of \autoref{1dimneg}). Thus when $\alpha \in [-12,\alpha_-)$ or $\alpha \in (\alpha_-,-8)$, the theorem follows from the comparison of the square and the degenerate rectangle in \eqref{biggeralpha1} and \eqref{biggeralpha2}. In the borderline case $\alpha=\alpha_-$, the square ($p=1/2$) gives the largest eigenvalue, by arguing as for the borderline case $\alpha=\alpha_+$ in part (i) above.
\smallskip
(vi) Suppose $\alpha< -12$. The normalized first eigenvalue $\lambda_1\big( (0,p); \alpha/2\big) p(1-p)$ is strictly convex for $p \in \big(y_1(|\alpha|/4),1\big)$, by \eqref{specialalpha7} with $\alpha/2$ in place of $\alpha$. Recall from \autoref{concavityH1} with $c=|\alpha|/4>3$ that the number $y_1(|\alpha|/4)$ lies between $0$ and $1/2$. Meanwhile, $\lambda_2\big( (0,1-p); \alpha/2\big) p(1-p)$ is strictly convex for $p \in (0,1/2]$, as observed above in part (v). Adding these two convex functions shows that $Q(p)$ is strictly convex for $p \in \big(y_1(|\alpha|/4),1/2\big]$. Thus $Q$ attains its maximum on that interval at one of the endpoints.
On the remaining interval $\big(0,y_1(|\alpha|/4)\big)$, we will show $Q$ is strictly decreasing and hence attains its maximum at the left endpoint (as $p \to 0$). Armed with that fact, one completes the proof for $\alpha<-12$ by recalling from \eqref{biggeralpha2} that the function $Q(p)$ attains a bigger value as $p \to 0$ than it does at $p=1/2$.
To show $Q$ is strictly decreasing on $\big(0,y_1(|\alpha|/4)\big)$, note $\lambda_1\big( (0,p); \alpha/2\big) p(1-p)$ is strictly decreasing for $p \in \big(0,y_1(|\alpha|/4)\big)$, by \eqref{specialalpha6}. Further, $\lambda_2\big( (0,1-p); \alpha/2\big) p(1-p)$ is strictly decreasing for $p \in \big(0,1-y_2(|\alpha|/4)\big)$, by replacing $\alpha$ with $\alpha/2$ and $p$ with $1-p$ in \eqref{specialalpha5}. That last interval contains $\big(0,y_1(|\alpha|/4)\big)$, due to \eqref{y1y2l1}, and so $Q(p)$ is strictly decreasing on $\big(0,y_1(|\alpha|/4)\big)$.
\end{proof}
\begin{proof}[\bf Proof of \autoref{steklov}]
See the proof of \autoref{sigma1higherdim} for the relationship between the Steklov and Robin spectra.
After rescaling the rectangle $\mathcal{R}$, we may suppose it has area $4$. Write $\mathcal{S}$ for the square of sidelength $2$ and hence area $4$ and perimeter $8$. \autoref{squareexample} gives that $\lambda_2(\mathcal{S};\alpha_0)=0$ where $\alpha_0 \simeq -0.68825$, and so $\sigma_1(\mathcal{S})= |\alpha_0|$. Thus the task is to prove $\sigma_1(\mathcal{R}) L(\mathcal{R}) \leq 8|\alpha_0|$, with equality if and only if the rectangle is a square.
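For illustration, $\alpha_0$ can be recovered numerically (a sketch only; the form $h_1(x)=x\tanh x$ is assumed here, consistent with the expansion $h_1(x) \simeq x^2$ used earlier, while $g_2(x)=-x\cot x$ is as in the text): writing the square of sidelength $2$ as $\mathcal{I}(1) \times \mathcal{I}(1)$, the intercept condition $\lambda_2(\mathcal{S};\alpha)=0$ becomes $g_2^{-1}(\alpha)^2 = h_1^{-1}(-\alpha)^2$ for $\alpha \in (-1,0)$:

```python
import math

def bisect(f, lo, hi):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
            flo = f(lo)
    return 0.5 * (lo + hi)

g2inv = lambda y: bisect(lambda x: -x / math.tan(x) - y, 1e-9, math.pi - 1e-9)
h1inv = lambda y: bisect(lambda x: x * math.tanh(x) - y, 1e-12, 50.0)

# horizontal intercept: lambda_2 of the square vanishes when
# g2inv(a)^2 - h1inv(-a)^2 = 0, for a in (-1, 0)
alpha0 = bisect(lambda a: g2inv(a) ** 2 - h1inv(-a) ** 2, -0.99, -0.01)
print(round(alpha0, 5))  # approximately -0.68825
```

The root is unique on this range since $\lambda_2$ is strictly increasing in $\alpha$.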
Choosing $\alpha= 8\alpha_0 \simeq -5.5$ in \autoref{lambda2L} yields that
\begin{equation} \label{eq:squarebound}
\lambda_2\big( \mathcal{R};8\alpha_0/L(\mathcal{R}) \big) \leq \lambda_2\big(\mathcal{S};8\alpha_0/L(\mathcal{S})\big) = \lambda_2(\mathcal{S};\alpha_0) =0 .
\end{equation}
Also $\lambda_2(\mathcal{R};0)$ is positive. Since $\lambda_2(\mathcal{R};\alpha)$ is a continuous, strictly increasing function of $\alpha$, it follows that a unique number $\widetilde{\alpha} \in [8\alpha_0,0)$ exists for which $\lambda_2\big( \mathcal{R};\widetilde{\alpha}/L(\mathcal{R}) \big) = 0$. Hence $-\widetilde{\alpha}/L(\mathcal{R})=\sigma_1(\mathcal{R})$, and so $\sigma_1(\mathcal{R}) L(\mathcal{R}) = -\widetilde{\alpha} \leq 8|\alpha_0|$, as we needed to show.
If equality holds then equality holds in \eqref{eq:squarebound}, and so the equality statement in \autoref{lambda2L} implies $\mathcal{R}$ is a square.
\end{proof}
\begin{proof}[\bf Proof of \autoref{gapD}]
\autoref{boxgap} shows the spectral gap for the box equals the spectral gap of its longest edge:
\[
(\lambda_2-\lambda_1)(\mathcal{B};\alpha) = (\lambda_2-\lambda_1)((0,s);\alpha)
\]
where we write $s$ for the length of the longest edge of the box. Since $s<D$ and the spectral gap of an interval is strictly decreasing as a function of the length (by \autoref{1dimpos}, \autoref{1dimzero} and \autoref{1dimneg}), the conclusion of the theorem follows.
\end{proof}
\begin{proof}[\bf Proof of \autoref{gapSV}]
Arguing as in the preceding proof, we see that to maximize the gap we must minimize the longest side $s$ of the box, subject to the constraint of fixed diameter. That is, we want to minimize the scale invariant ratio $s/D$ among boxes. The minimum is easily seen to occur for the cube, by fixing $s$ and increasing all the other side lengths to increase the diameter.
The argument is similar under a surface area constraint since the scale invariant ratio $s^{n-1}/S$ is minimal among boxes for the cube, and under a volume constraint too since the ratio $s^n/V$ is minimal for the cube.
\emph{Comment.} The version of the theorem with diameter constraint implies the one with volume constraint, since $s^n/V=(s/D)^n(D^n/V)$ and each ratio on the right is minimal at the cube. Similarly, the result with surface area constraint implies the one with volume constraint, since $s^n/V = (s^{n-1}/S)^{n/(n-1)}(S^{n/(n-1)}/V)$ and each ratio on the right is minimal at the cube.
\end{proof}
\begin{proof}[\bf Proof of \autoref{ratio}]
The ratio can be rewritten in terms of the spectral gap as
\[
\frac{\lambda_2(\mathcal{B};\alpha)}{|\lambda_1(\mathcal{B};\alpha)|}
=
\frac{\lambda_2(\mathcal{B};\alpha)-\lambda_1(\mathcal{B};\alpha)}{|\lambda_1(\mathcal{B};\alpha)|} + \operatorname{sign}(\alpha) .
\]
The numerator on the right is maximal for the cube having the same volume as $\mathcal{B}$, by \autoref{gapSV}, while the denominator is minimal for that cube by \autoref{lambda1higherdim}. Hence the ratio is maximal for the cube.
\end{proof}
\begin{proof}[\bf Proof of \autoref{ratio2dim}]
In terms of the spectral gap, the ratio is
\[
\frac{\lambda_2(\mathcal{R};\alpha/L)}{\lambda_1(\mathcal{R};\alpha/L)}
=
\frac{\lambda_2(\mathcal{R};\alpha/L)-\lambda_1(\mathcal{R};\alpha/L)}{\lambda_1(\mathcal{R};\alpha/L)A} A + 1 .
\]
The numerator on the right is maximal for the square having the same boundary length $L$ as $\mathcal{R}$, by the ``surface area'' version of \autoref{gapSV} applied with $\alpha/L$ instead of $\alpha$. And of course, the factor of $A$ is largest for the same square, by the isoperimetric inequality for rectangles. Meanwhile the denominator on the right side is positive (since $\alpha>0$) and is minimal for the square by \autoref{lambda1L}. Hence the right side is maximal for the square.
\end{proof}
\begin{proof}[\bf Proof of \autoref{hearingdrum}]
After a rotation and translation, we may write the rectangle as $\mathcal{R} = \mathcal{I}(t) \times \mathcal{I}(s)$ where $t \geq s$. The first and second eigenvalues of this rectangle are the given information, and the task is to determine the side lengths $t$ and $s$.
In terms of $t$ and $s$, the eigenvalues are
\begin{align*}
\lambda_1(\mathcal{R};\alpha) & = \lambda_1(\mathcal{I}(t);\alpha) + \lambda_1(\mathcal{I}(s);\alpha) , \\
\lambda_2(\mathcal{R};\alpha) & = \lambda_2(\mathcal{I}(t);\alpha) + \lambda_1(\mathcal{I}(s);\alpha) ,
\end{align*}
by \autoref{firsteigenbox} and \autoref{secondeigenbox}. Subtracting, we obtain the spectral gap as
\[
(\lambda_2-\lambda_1)(\mathcal{R};\alpha) = (\lambda_2-\lambda_1)(\mathcal{I}(t);\alpha) ,
\]
and so the value of the left side is also given information. The right side is the spectral gap of the interval $\mathcal{I}(t)$, which is a strictly decreasing function of $t$ by \autoref{1dimpos} and \autoref{1dimneg}. Hence the longer sidelength $t$ of the rectangle is uniquely determined by the given information.
The value of $\lambda_1(\mathcal{I}(s);\alpha)$ can then be determined from the formulas above. This eigenvalue depends strictly monotonically on the length $s$, by \autoref{1dimpos} and \autoref{1dimneg}, and hence the value of $s$ is uniquely determined.
\emph{Comment.} The final step of the proof is where the assumption $\alpha \neq 0$ is used --- when $\alpha=0$ the first eigenvalue is zero for every interval and hence is not strictly monotonic as a function of the length.
\end{proof}
\section*{Acknowledgments}
This research was supported by a grant from the Simons Foundation (\#429422 to Richard Laugesen) and travel support from the University of Illinois Scholars' Travel Fund. Conversations with Dorin Bucur were particularly helpful, at the conference ``Results in Contemporary Mathematical Physics'' in honor of Rafael Benguria (Santiago, Chile, December 2018). I am grateful to Derek Kielty for carrying out numerical investigations in support of this research and pointing out relevant literature, and to Pedro Freitas for many informative conversations about Robin eigenvalues.
https://arxiv.org/abs/1707.01354

Bounding the number of common zeros of multivariate polynomials and their consecutive derivatives

We upper bound the number of common zeros over a finite grid of multivariate polynomials and an arbitrary finite collection of their consecutive Hasse derivatives (in a coordinate-wise sense). To that end, we make use of the tool from Gröbner basis theory known as footprint. Then we establish and prove extensions in this context of a family of well-known results in algebra and combinatorics. These include Alon's combinatorial Nullstellensatz, existence and uniqueness of Hermite interpolating polynomials over a grid, estimations on the parameters of evaluation codes with consecutive derivatives, and bounds on the number of zeros of a polynomial by DeMillo and Lipton, Schwartz, Zippel, and Alon and Füredi. As an alternative, we also extend the Schwartz-Zippel bound to weighted multiplicities and discuss its connection with our extension of the footprint bound.

\section{Introduction}
Estimating the number of zeros of a polynomial over a field $ \mathbb{F} $ has been a central problem in algebra, where one of the main difficulties is counting \textit{repeated zeros}, that is, \textit{multiplicities}. In the univariate case, this is easily solved by defining the multiplicity of a zero as the maximum positive integer $ r $ such that the first $ r $ \textit{consecutive derivatives} (those of orders $ 0, 1, \ldots, r-1 $) of the given polynomial vanish at that zero. In addition, Hasse derivatives \cite{hasse} are used instead of classical derivatives in order to obtain meaningful information over fields of positive characteristic. In this way, the number of zeros of a polynomial, counted with multiplicities, is upper bounded by its degree. Formally:
\begin{equation}
\sum_{a \in \mathbb{F}} m(F(x), a) \leq \deg(F(x)).
\label{eq in intro 1}
\end{equation}
If $ \mathcal{V}_{\geq r}(F(x)) $ denotes the set of zeros of $ F(x) $ of multiplicity at least $ r $, then a weaker, but still sharp, bound is the following:
\begin{equation}
\# \mathcal{V}_{\geq r}(F(x)) \cdot r \leq \deg(F(x)).
\label{eq in intro 2}
\end{equation}
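Both univariate bounds can be verified on a concrete polynomial. The following is a small illustrative sketch, external to the paper, written in Python with SymPy and working over the rationals, where multiplicities can be read off from a factorization:

```python
# Check of the univariate bounds (eq in intro 1) and (eq in intro 2) over Q.
import sympy as sp

x = sp.symbols('x')
# F has zeros 1 (multiplicity 2), -2 (multiplicity 3) and 0 (multiplicity 1).
F = (x - 1)**2 * (x + 2)**3 * x
mults = sp.roots(sp.Poly(F, x))      # dictionary {zero: multiplicity}
deg = sp.Poly(F, x).degree()

# Bound (1): the sum of multiplicities is at most deg F.
assert sum(mults.values()) <= deg

# Bound (2): #V_{>=r}(F) * r <= deg F, here with r = 2.
r = 2
V_ge_r = [a for a, mult in mults.items() if mult >= r]
assert len(V_ge_r) * r <= deg
```

Here both bounds are attained up to slack: the sum of multiplicities equals the degree, while $\#\mathcal{V}_{\geq 2}(F) \cdot 2 = 4 \leq 6$.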
In the multivariate case, the standard approach is to consider the first $ r $ consecutive Hasse derivatives as those whose multiindices have order less than $ r $, where the order of a multiindex $ (i_1, i_2, \ldots, i_m) $ is defined as $ \sum_{j=1}^m i_j $. We will use the terms \textit{standard multiplicities} to refer to this type of multiplicities. In this work, we consider arbitrary finite families $ \mathcal{J} $ of multiindices that are consecutive in a coordinate-wise sense: if $ (i_1, i_2, \ldots, i_m) $ belongs to $ \mathcal{J} $ and $ k_j \leq i_j $, for $ j = 1,2, \ldots, m $, then $ (k_1, k_2, \ldots, k_m) $ also belongs to $ \mathcal{J} $. Obviously, the (finite) family $ \mathcal{J} $ of multiindices of order less than a given positive integer $ r $ satisfies this property, hence is a particular case.
Our main contribution is an upper bound on the number of common zeros over a grid of a family of polynomials and their (Hasse) derivatives corresponding to a finite set $ \mathcal{J} $ of consecutive multiindices. This upper bound makes use of the technique from Gr{\"o}bner basis theory known as \textit{footprint} \cite{footprints, hoeholdt}, and can be seen as an extension of the classical \textit{footprint bound} \cite[Section 5.3]{clo1} in the sense of (\ref{eq in intro 2}). A first extension for standard multiplicities has been given as Lemma 2.4 in the expanded version of \cite{ruudbound}.
We will then show that this bound is sharp for ideals of polynomials, characterize those which satisfy equality, and give as applications extensions of known results in algebra and combinatorics: Alon's combinatorial Nullstellensatz \cite{alon, ball, clark, nulmultisets, shortproof}, existence and uniqueness of Hermite interpolating polynomials \cite{gasca, kopparty-multiplicity, lorentz}, estimations on the parameters of evaluation codes with consecutive derivatives \cite{weightedRM, kopparty-multiplicity, multiplicitycodes}, and the bounds by DeMillo and Lipton \cite{demillo}, Zippel \cite{zippel-first, zippel}, and Alon and F{\"u}redi \cite{alon-furedi}, and a particular case of the bound given by Schwartz in \cite[Lemma 1]{schwartz}.
The bound in \cite[Lemma 1]{schwartz} can also be derived from the bounds given by DeMillo and Lipton \cite{demillo}, and Zippel \cite[Theorem 1]{zippel-first}, \cite[Proposition 3]{zippel} (see Proposition \ref{proposition zippel implies schwartz} below), and is referred to as the \textit{Schwartz-Zippel bound} in many works in the literature \cite{extensions, weightedRM, kopparty-multiplicity, multiplicitycodes}. Interestingly, an extension of this bound to standard multiplicities in the sense of (\ref{eq in intro 1}) has recently been given in \cite[Lemma 8]{extensions}, but, as Counterexample 7.4 in \cite{grid} shows, no straightforward extension of the footprint bound in the sense of (\ref{eq in intro 1}) seems possible (recall that we will give a footprint bound in the sense of (\ref{eq in intro 2})). To conclude this work, we give an extension of the Schwartz-Zippel bound in the sense of (\ref{eq in intro 1}) to derivatives of weighted order less than a given positive integer, which we will call \textit{weighted multiplicities}. This bound is inspired by \cite[Lemma 8]{extensions}, and we will discuss its connection with our extension of the footprint bound.
The results are organized as follows: We start with some preliminaries in Section 2. We then give the main bound in Section 3, together with some particular cases, an interpretation of the bound, and sharpness and equality conditions. In Section 4, we give a list of applications. Finally, in Section 5 we give an extension of the Schwartz-Zippel bound in the sense of (\ref{eq in intro 1}) to weighted multiplicities, and discuss the connections with the bound in Section 3.
\subsection*{Notation}
Throughout this paper, $ \mathbb{F} $ denotes an arbitrary field. We denote by $ \mathbb{F}[\mathbf{x}] = \mathbb{F}[x_1, x_2, \ldots, $ $ x_m] $ the ring of polynomials in the $ m $ variables $ x_1, x_2, \ldots, x_m $ with coefficients in $ \mathbb{F} $. A multiindex is a vector $ \mathbf{i} = (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m $, where $ \mathbb{N} = \{ 0,1,2,3, \ldots \} $, and as usual we use the notation $ \mathbf{x}^\mathbf{i} = x_1^{i_1} x_2^{i_2} \cdots x_m^{i_m} $. We also denote $ \mathbb{N}_+ = \{ 1,2,3, \ldots \} $.
In this work, $ \preceq $ denotes the coordinate-wise partial ordering in $ \mathbb{N}^m $, that is, $ (i_1, i_2, \ldots, $ $ i_m) \preceq (j_1, j_2, \ldots, j_m) $ if $ i_k \leq j_k $, for all $ k = 1,2, \ldots, m $. We will use $ \preceq_m $ to denote a given monomial ordering in the set of monomials of $ \mathbb{F}[\mathbf{x}] $ (see \cite[Section 2.2]{clo1}), and we denote by $ {\rm LM}_{\preceq_m}(F(\mathbf{x})) $ the leading monomial of $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ with respect to $ \preceq_m $, or just $ {\rm LM}(F(\mathbf{x})) $ if there is no confusion about $ \preceq_m $. Finally, the notation $ \langle A \rangle $ means ideal generated by $ A $ in a ring, and $ \langle A \rangle_\mathbb{F} $ means vector space over $ \mathbb{F} $ generated by $ A $.
\section{Consecutive derivatives}
In this work, we consider Hasse derivatives, introduced first in \cite{hasse}. They coincide with the usual derivatives, up to multiplication by a non-zero constant factor, when the corresponding multiindex contains no multiples of the characteristic of the field, and they have the advantage of not being identically zero otherwise.
\begin{definition}[\textbf{Hasse derivative \cite{hasse}}] \label{def Hasse derivative}
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ be a polynomial. Given another family of independent variables $ \mathbf{z} = (z_1, z_2, \ldots, z_m) $, the polynomial $ F(\mathbf{x} + \mathbf{z}) $ can be written uniquely as
$$ F(\mathbf{x} + \mathbf{z}) = \sum_{\mathbf{i} \in \mathbb{N}^m} F^{(\mathbf{i})}(\mathbf{x}) \mathbf{z}^\mathbf{i}, $$
for some polynomials $ F^{(\mathbf{i})}(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $, for $ \mathbf{i} \in \mathbb{N}^m $. For a given multiindex $ \mathbf{i} \in \mathbb{N}^m $, we define the $ \mathbf{i} $-th Hasse derivative of $ F(\mathbf{x}) $ as the polynomial $ F^{(\mathbf{i})}(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $.
\end{definition}
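Definition \ref{def Hasse derivative} can be implemented verbatim by expanding $F(\mathbf{x}+\mathbf{z})$ and collecting the coefficients of $\mathbf{z}^{\mathbf{i}}$. The sketch below, a tooling illustration of our own in Python with SymPy, does this in one variable; since the characteristic is zero there, it also cross-checks against the classical derivative divided by $i!$:

```python
# Hasse derivatives straight from the definition: F^{(i)} is the coefficient
# of z^i in the expansion of F(x + z).
import sympy as sp

x, z = sp.symbols('x z')

def hasse(F, i):
    return sp.expand(F.subs(x, x + z)).coeff(z, i)

F = x**5 + 3*x**2
# In characteristic 0, the i-th Hasse derivative equals (1/i!) * d^i F / dx^i.
for i in range(6):
    assert sp.expand(hasse(F, i) - sp.diff(F, x, i) / sp.factorial(i)) == 0
```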
We next formalize the concept of zero of a polynomial of at least a given multiplicity as that of common zero of the given polynomial and a given finite family of its derivatives:
\begin{definition} \label{def general multiplicity}
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ be a polynomial, let $ \mathbf{a} \in \mathbb{F}^m $ be an affine point, and let $ \mathcal{J} \subseteq \mathbb{N}^m $ be a finite set. We say that $ \mathbf{a} $ is a zero of $ F(\mathbf{x}) $ of multiplicity at least $ \mathcal{J} $ if $ F^{(\mathbf{i})}(\mathbf{a}) = 0 $, for all $ \mathbf{i} \in \mathcal{J} $.
\end{definition}
The concept of \textit{consecutive derivatives}, in a coordinate-wise sense, can be formalized by the concept of \textit{decreasing sets} of multiindices (recall that $ \preceq $ denotes the coordinate-wise ordering in $ \mathbb{N}^m $):
\begin{definition} [\textbf{Decreasing sets}]
We say that the set $ \mathcal{J} \subseteq \mathbb{N}^m $ is decreasing if whenever $ \mathbf{i} \in \mathcal{J} $ and $ \mathbf{j} \in \mathbb{N}^m $ are such that $ \mathbf{j} \preceq \mathbf{i} $, it holds that $ \mathbf{j} \in \mathcal{J} $.
\end{definition}
Observe that the finite set $ \mathcal{J} = \{ (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m : \sum_{j=1}^m i_j < r \} $, for a positive integer $ r $, is decreasing. Moreover, if $ m = 1 $, then these are all the possible finite decreasing sets. The concepts of weighted order and weighted multiplicity show that this is not the case when $ m > 1 $:
\begin{definition} [\textbf{Weighted multiplicities}] \label{def weighted multiplicity}
Fix a vector of positive weights $ \mathbf{w} = (w_1, $ $ w_2, $ $ \ldots, w_m) \in \mathbb{N}_+^m $. Given a multiindex $ \mathbf{i} = (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m $, we define its weighted order as
\begin{equation}
\mid \mathbf{i} \mid_\mathbf{w} = i_1 w_1 + i_2 w_2 + \cdots + i_m w_m.
\label{eq def weighted norm}
\end{equation}
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ be a polynomial and let $ \mathbf{a} \in \mathbb{F}^m $ be an affine point. We say that $ \mathbf{a} $ is a zero of $ F(\mathbf{x}) $ of weighted multiplicity $ r \in \mathbb{N} $, and we write
$$ m_\mathbf{w}(F(\mathbf{x}), \mathbf{a}) = r, $$
if $ F^{(\mathbf{i})}(\mathbf{a}) = 0 $, for all $ \mathbf{i} \in \mathbb{N}^m $ with $ \mid \mathbf{i} \mid_{\mathbf{w}} < r $, and $ F^{(\mathbf{j})}(\mathbf{a}) \neq 0 $, for some $ \mathbf{j} \in \mathbb{N}^m $ with $ \mid \mathbf{j} \mid_{\mathbf{w}} = r $.
\end{definition}
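Equivalently, $m_\mathbf{w}(F(\mathbf{x}), \mathbf{a})$ is the smallest weighted order at which some Hasse derivative fails to vanish at $\mathbf{a}$. The following bivariate sketch, our own illustration in Python with SymPy (not part of the paper's development), computes it by brute force:

```python
# Weighted multiplicity from Definition: m_w(F, a) is the smallest weighted
# order |i|_w at which some Hasse derivative of F does not vanish at a.
# 'bound' must exceed the degree of F in each variable.
import sympy as sp
from itertools import product

x1, x2, z1, z2 = sp.symbols('x1 x2 z1 z2')

def weighted_multiplicity(F, a, w, bound=8):
    G = sp.expand(F.subs({x1: x1 + z1, x2: x2 + z2}))
    orders = []
    for i1, i2 in product(range(bound), repeat=2):
        deriv = G.coeff(z1, i1).coeff(z2, i2)   # the (i1, i2)-th Hasse derivative
        if deriv.subs({x1: a[0], x2: a[1]}) != 0:
            orders.append(i1*w[0] + i2*w[1])
    return min(orders)

# F = x1^2 * x2 at the origin, weights w = (1, 2): the only derivative not
# vanishing at 0 is the one for i = (2, 1), of weighted order 2*1 + 1*2 = 4.
assert weighted_multiplicity(x1**2 * x2, (0, 0), (1, 2)) == 4
```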
We also introduce the definition of weighted degree, which will be convenient for different results in the following sections:
\begin{definition} [\textbf{Weighted degrees}] \label{def weighted degree}
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ be a polynomial and let $ \mathbf{w} \in \mathbb{N}_+^m $ be a vector of positive weights. We define the weighted degree of $ F(\mathbf{x}) $ as
$$ \deg_\mathbf{w}(F(\mathbf{x})) = \max \{ \mid \mathbf{i} \mid_\mathbf{w} : F_\mathbf{i} \neq 0 \}, $$
where $ F(\mathbf{x}) = \sum_{\mathbf{i} \in \mathbb{N}^m} F_\mathbf{i} \mathbf{x}^\mathbf{i} $ and $ F_\mathbf{i} \in \mathbb{F} $, for all $ \mathbf{i} \in \mathbb{N}^m $.
\end{definition}
Other interesting sets of consecutive derivatives that we will consider throughout the paper are those given by bounding each index separately, that is, sets of the form $ \mathcal{J} = \left\lbrace (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m : i_j < r_j, j = 1,2, \ldots, m \right\rbrace $, for a given $ (r_1, r_2, \ldots, r_m) \in \mathbb{N}_+^m $.
\section{The footprint bound for consecutive derivatives} \label{sec footprint bounds}
In this section, we will give an extension of the footprint bound \cite[Section 5.3]{clo1} to upper bound the number of common zeros over a finite grid of a family of polynomials and a given set of their consecutive derivatives, as in Definition \ref{def general multiplicity}. We give some particular cases and an interpretation of the bound. We conclude by studying its sharpness.
Throughout the section, fix a decreasing finite set $ \mathcal{J} \subseteq \mathbb{N}^m $, an ideal $ I \subseteq \mathbb{F}[\mathbf{x}] $ and finite subsets $ S_1, S_2, \ldots, S_m \subseteq \mathbb{F} $. Write $ S = S_1 \times S_2 \times \cdots \times S_m $, and denote by $ G_j(x_j) \in \mathbb{F}[x_j] $ the defining polynomial of $ S_j $, that is, $ G_j(x_j) = \prod_{s \in S_j}(x_j-s) $, for $ j = 1,2, \ldots, m $. The three objects involved in our bound are the following:
\begin{definition} \label{def main objects in bound}
We define the ideal
$$ I_{\mathcal{J}} = I + \left\langle \left\lbrace \prod_{j=1}^m G_j(x_j)^{r_j} : (r_1, r_2, \ldots, r_m) \notin \mathcal{J} \right\rbrace \right\rangle $$
and the set of zeros of multiplicity at least $ \mathcal{J} $ of the ideal $ I $ in the grid $ S = S_1 \times S_2 \times \cdots \times S_m $ as
$$ \mathcal{V}_{\mathcal{J}}(I) = \left\lbrace \mathbf{a} \in S : F^{(\mathbf{i})}(\mathbf{a}) = 0, \forall F(\mathbf{x}) \in I, \forall \mathbf{i} \in \mathcal{J} \right\rbrace . $$
Finally, given a monomial ordering $ \preceq_m $, we define the footprint of an ideal $ J \subseteq \mathbb{F}[\mathbf{x}] $ as
$$ \Delta_{\preceq_m} (J) = \left\lbrace \mathbf{x}^\mathbf{i} : \mathbf{x}^\mathbf{i} \notin \left\langle {\rm LM}(J) \right\rangle \right\rbrace , $$
where $ {\rm LM}(J) = \{ {\rm LM}(F(\mathbf{x})) : F(\mathbf{x}) \in J \} $ with respect to the monomial ordering $ \preceq_m $. We write $ \Delta(J) $ if there is no confusion about the monomial ordering.
\end{definition}
\subsection{The general bound}
\begin{theorem} \label{th footprint bound}
For any monomial ordering, it holds that
\begin{equation} \label{general bound}
\# \mathcal{V}_{\mathcal{J}}(I) \cdot \# \mathcal{J} \leq \# \Delta \left( I_\mathcal{J} \right).
\end{equation}
\end{theorem}
The rest of the subsection is devoted to the proof of this result. The first auxiliary tool is the Leibniz formula, which follows by a straightforward computation (see also \cite[pages 144--155]{torresbook}):
\begin{lemma} [\textbf{Leibniz formula}] \label{lemma leibniz formula}
Let $ F_1(\mathbf{x}), F_2(\mathbf{x}), \ldots, F_s(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ and let $ \mathbf{i} \in \mathbb{N}^m $. It holds that
$$ \left( \prod_{j=1}^s F_j(\mathbf{x}) \right)^{(\mathbf{i})} = \sum_{\mathbf{i}_1 + \mathbf{i}_2 + \cdots + \mathbf{i}_s = \mathbf{i}} \left( \prod_{j=1}^s F_j^{(\mathbf{i}_j)}(\mathbf{x}) \right). $$
\end{lemma}
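Lemma \ref{lemma leibniz formula} is easy to confirm on a small instance. The following univariate sketch, our own illustration in Python with SymPy, checks it for a product of $s = 2$ factors:

```python
# Check of the Leibniz formula for Hasse derivatives with s = 2 factors:
# (F1 * F2)^{(i)} = sum over i1 + i2 = i of F1^{(i1)} * F2^{(i2)}.
import sympy as sp

x, z = sp.symbols('x z')

def hasse(F, i):
    # i-th Hasse derivative: coefficient of z^i in F(x + z).
    return sp.expand(F.subs(x, x + z)).coeff(z, i)

F1, F2 = x**2 + 1, x**3 - x
for i in range(6):
    lhs = hasse(sp.expand(F1 * F2), i)
    rhs = sum(hasse(F1, j) * hasse(F2, i - j) for j in range(i + 1))
    assert sp.expand(lhs - rhs) == 0
```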
The second auxiliary tool is the existence of Hermite interpolating polynomials with Hasse derivatives. For our purposes, a \textit{separated-variables} extension of univariate Hermite interpolation over grids is enough. This extension is straightforward and seems to be known in the literature (see \cite[Section 3.1]{lorentz}), but we give a short proof in the Appendix for the convenience of the reader.
\begin{definition} \label{def evaluation map}
We define the evaluation map on a finite set $ T \subseteq \mathbb{F}^m $ with derivatives corresponding to multiindices in $ \mathcal{J} $ as
\begin{equation} \label{evaluation map definition}
\begin{split}
{\rm Ev} : & \mathbb{F}[\mathbf{x}] \longrightarrow \mathbb{F}^{\# T \cdot \# \mathcal{J}} \\
& F(\mathbf{x}) \mapsto \left( \left( F^{(\mathbf{i})}(\mathbf{a}) \right) _{\mathbf{i} \in \mathcal{J}} \right) _{\mathbf{a} \in T} .
\end{split}
\end{equation}
\end{definition}
\begin{lemma}[\textbf{Hermite interpolation}] \label{lemma hermite}
The evaluation map $ {\rm Ev} : \mathbb{F}[\mathbf{x}] \longrightarrow \mathbb{F}^{\# T \cdot \# \mathcal{J}} $ defined in (\ref{evaluation map definition}) is surjective, for all finite sets $ T \subseteq \mathbb{F}^m $ and $ \mathcal{J} \subseteq \mathbb{N}^m $.
\end{lemma}
\begin{proof}
See the Appendix.
\end{proof}
With these tools, we may now prove Theorem \ref{th footprint bound}:
\begin{proof}[Proof of Theorem \ref{th footprint bound}]
Fix multiindices $ \mathbf{r} = (r_1, r_2, \ldots, r_m) \notin \mathcal{J} $ and $ \mathbf{i} = (i_1, i_2, \ldots, i_m) \in \mathcal{J} $, and define $ G(\mathbf{x}) = \prod_{j=1}^m G_j(x_j)^{r_j} $. By Lemma \ref{lemma leibniz formula}, it holds that
\begin{equation}
G^{(\mathbf{i})}(\mathbf{x}) = \prod_{j=1}^m \left( G_j(x_j)^{r_j} \right)^{(i_j)}.
\label{eq in proof of footprint 1}
\end{equation}
Furthermore, if $ r > i $ and $ F(x) \in \mathbb{F}[x] $, then there exists $ H(x) \in \mathbb{F}[x] $ such that
\begin{equation}
\left( F(x)^r \right)^{(i)} = \sum_{i_1 + i_2 + \cdots + i_r = i} \left( \prod_{j=1}^r F^{(i_j)}(x) \right) = H(x) F(x)^{r-i},
\label{eq in proof of footprint 2}
\end{equation}
again by Lemma \ref{lemma leibniz formula}, since at least $ r-i > 0 $ of the indices $ i_j $ must be equal to $ 0 $, for each $ (i_1, i_2, \ldots, i_r) \in \mathbb{N}^r $ such that $ \sum_{j=1}^r i_j = i $. Finally, since $ \mathcal{J} $ is decreasing and $ \mathbf{r} \notin \mathcal{J} $, the vector $ \mathbf{r} - \mathbf{i} $ has at least one positive coordinate. Hence, combining (\ref{eq in proof of footprint 1}) and (\ref{eq in proof of footprint 2}), we see that $ G^{(\mathbf{i})}(\mathbf{a}) = 0 $, for all $ \mathbf{a} \in \mathcal{V}_{\mathcal{J}}(I) \subseteq S $. This implies that
$$ {\rm Ev}(F(\mathbf{x})) = \mathbf{0}, \quad \forall F(\mathbf{x}) \in I_\mathcal{J}, $$
by the definition of the ideal $ I_\mathcal{J} $ and the set $ \mathcal{V}_{\mathcal{J}}(I) $, and where we consider $ T = \mathcal{V}_{\mathcal{J}}(I) $ in the definition of $ {\rm Ev} $ (Definition \ref{def evaluation map}).
Therefore, the evaluation map $ {\rm Ev} $ can be extended to the quotient ring
$$ {\rm Ev} : \mathbb{F}[\mathbf{x}] / I_\mathcal{J} \longrightarrow \mathbb{F}^{\# \mathcal{V}_{\mathcal{J}}(I) \cdot \# \mathcal{J}}, $$
which is again surjective, since the original evaluation map is surjective by Lemma \ref{lemma hermite}. Since this map is a surjective $ \mathbb{F} $-linear map between $ \mathbb{F} $-vector spaces, we conclude that
$$ \# \mathcal{V}_{\mathcal{J}}(I) \cdot \# \mathcal{J} = \dim_\mathbb{F} \left( \mathbb{F}^{\# \mathcal{V}_{\mathcal{J}}(I) \cdot \# \mathcal{J}} \right) \leq \dim_\mathbb{F} \left( \mathbb{F}[\mathbf{x}] / I_\mathcal{J} \right). $$
Finally, Proposition 4 in \cite[Section 5.3]{clo1} says that the monomials in $ \Delta(J) $ constitute a basis of $ \mathbb{F}[\mathbf{x}] / J $, for an ideal $ J \subseteq \mathbb{F}[\mathbf{x}] $. This fact implies that
$$ \dim_\mathbb{F} \left( \mathbb{F}[\mathbf{x}] / I_\mathcal{J} \right) = \# \Delta \left( I_\mathcal{J} \right), $$
and the result follows.
\end{proof}
\subsection{Some particular cases}
In this subsection, we derive some particular cases of Theorem \ref{th footprint bound}. We start with the classical form of the footprint bound (see Proposition 8 in \cite[Section 5.3]{clo1}, and \cite{footprints, hoeholdt}):
\begin{corollary} [\textbf{\cite{clo1, footprints, hoeholdt}}] \label{usual footprint bound}
Setting $ \mathcal{J} = \{ \mathbf{0} \} $, we obtain that
$$ \# \mathcal{V}(I) \leq \# \Delta \left( I + \left\langle G_1(x_1), G_2(x_2), \ldots, G_m(x_m) \right\rangle \right), $$
where $ \mathcal{V}(I) $ denotes the set of zeros of the ideal $ I $ in $ S $.
\end{corollary}
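For readers who wish to experiment, the bound of Corollary \ref{usual footprint bound} can be computed on a toy example with a Gröbner basis package. The following sketch uses Python with SymPy (an external tool, not part of the paper), taking $I = \langle x_1x_2 \rangle$ and $S_1 = S_2 = \{0,1\}$:

```python
# Classical footprint bound (J = {0}): #V(I) <= #Delta(I + <G_1, G_2>),
# for I = <x1*x2> and the grid S = {0,1} x {0,1}.
import sympy as sp
from itertools import product

x1, x2 = sp.symbols('x1 x2')
S = [0, 1]
F = x1*x2
G1, G2 = x1**2 - x1, x2**2 - x2              # defining polynomials of S_1, S_2

basis = sp.groebner([F, G1, G2], x1, x2, order='lex')
# Leading exponents of the Groebner basis elements (lex order).
lead = [sp.Poly(g, x1, x2).monoms()[0] for g in basis.exprs]

# Footprint: exponents not coordinate-wise above any leading exponent.
# Exponents >= 2 in either variable are already excluded by G1, G2, so it
# suffices to enumerate {0,1}^2.
divides = lambda a, b: all(ai <= bi for ai, bi in zip(a, b))
footprint = [e for e in product(range(2), repeat=2)
             if not any(divides(l, e) for l in lead)]

zeros = [p for p in product(S, repeat=2) if F.subs({x1: p[0], x2: p[1]}) == 0]
assert len(zeros) <= len(footprint)          # here 3 <= 3: the bound is attained
```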
The case of zeros of standard multiplicity at least a given positive integer was first obtained as Lemma 2.4 in the expanded version of \cite{ruudbound}, and reads as follows:
\begin{corollary}[\textbf{\cite{ruudbound}}] \label{footprint bound with multi}
Given an integer $ r \in \mathbb{N}_+ $, and setting $ \mathcal{J} = \{ (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m : \sum_{j=1}^m i_j < r \} $, we obtain that
$$ \# \mathcal{V}_{\geq r}(I) \cdot \binom{m + r - 1}{m} \leq \# \Delta \left( I + \left\langle \left\lbrace \prod_{j=1}^m G_j(x_j)^{r_j} : \sum_{j=1}^m r_j = r \right\rbrace \right\rangle \right), $$
where $ \mathcal{V}_{\geq r}(I) $ denotes the set of zeros of multiplicity at least $ r $ of the ideal $ I $ in $ S $.
\end{corollary}
Another particular case is obtained when upper bounding each coordinate of the multiindices separately:
\begin{corollary}
Given a multiindex $ (r_1, r_2, \ldots, r_m) \in \mathbb{N}_+^m $, and setting $ \mathcal{J} = \{ (i_1, i_2, \ldots, i_m) $ $ \in \mathbb{N}^m : i_j < r_j, j = 1,2, \ldots, m \} $, we obtain that
$$ \# \mathcal{V}_{\mathcal{J}}(I) \cdot \prod_{j=1}^m r_j \leq \# \Delta \left( I + \left\langle G_1(x_1)^{r_1}, G_2(x_2)^{r_2}, \ldots, G_m(x_m)^{r_m} \right\rangle \right). $$
\end{corollary}
Finally, we obtain a footprint bound for weighted multiplicities:
\begin{corollary} \label{corollary footprint for weighted multi}
Given an integer $ r \in \mathbb{N}_+ $, a vector of positive weights $ \mathbf{w} = (w_1, w_2, \ldots, $ $ w_m) $ $ \in \mathbb{N}_+^m $, and setting $ \mathcal{J} = \{ \mathbf{i} \in \mathbb{N}^m : \mid \mathbf{i} \mid_{\mathbf{w}} < r \} $, we obtain that
$$ \# \mathcal{V}_{\geq r, \mathbf{w}}(I) \cdot {\rm B}(\mathbf{w}; r) \leq \# \Delta \left( I + \left\langle \left\lbrace \prod_{j=1}^m G_j(x_j)^{r_j} : \sum_{j=1}^m r_jw_j \geq r \right\rbrace \right\rangle \right), $$
where $ \mathcal{V}_{\geq r, \mathbf{w}}(I) $ denotes the set of zeros of weighted multiplicity at least $ r $ of the ideal $ I $ in $ S $, and where $ {\rm B}(\mathbf{w}; r) = \# \left\lbrace \mathbf{i} \in \mathbb{N}^m : \mid \mathbf{i} \mid_{\mathbf{w}} < r \right\rbrace $.
\end{corollary}
To conclude, we give a more explicit form of the bound in the previous corollary by estimating the number $ {\rm B}(\mathbf{w}; r) $:
\begin{corollary}
Given an integer $ r \in \mathbb{N}_+ $ and a vector of positive weights $ \mathbf{w} = (w_1, w_2, \ldots, $ $ w_m) $ $ \in \mathbb{N}_+^m $, it holds that
\begin{equation}
\binom{m + r - 1}{m} \leq w_1 w_2 \cdots w_m \, {\rm B}(\mathbf{w}; r).
\label{eq estimate on weighted binomial coeffs}
\end{equation}
In particular, we deduce from the previous corollary that
$$ \# \mathcal{V}_{\geq r, \mathbf{w}}(I) \cdot \binom{m + r - 1}{m} \leq w_1 w_2 \cdots w_m \cdot \# \Delta \left( I + \left\langle \left\lbrace \prod_{j=1}^m G_j(x_j)^{r_j} : \sum_{j=1}^m r_jw_j \geq r \right\rbrace \right\rangle \right). $$
\end{corollary}
\begin{proof}
Define the map $ T_\mathbf{j} : \mathbb{N}^m \longrightarrow \mathbb{N}^m $ by
$$ T_\mathbf{j}(\mathbf{i}) = (i_1w_1 + j_1, i_2w_2 + j_2, \ldots, i_mw_m + j_m), $$
for all $ \mathbf{i} = (i_1, i_2, \ldots, i_m), \mathbf{j} = (j_1, j_2, \ldots, j_m) \in \mathbb{N}^m $. Now define $ \mathcal{J}(\mathbf{w}; r) = \{ \mathbf{i} \in \mathbb{N}^m : \mid \mathbf{i} \mid_{\mathbf{w}} < r \} $. By the Euclidean division, we see that
$$ \mathcal{J}((1,1, \ldots, 1); r) \subseteq \bigcup_{\mathbf{j} \in \prod_{k=1}^m [0,w_k)} T_\mathbf{j} \left( \mathcal{J} \left( \mathbf{w}; r \right) \right) . $$
By counting elements on both sides of the inclusion, the result follows.
\end{proof}
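Inequality (\ref{eq estimate on weighted binomial coeffs}) is also easy to confirm numerically for small parameters. The following is a brute-force sketch in Python, our own illustration:

```python
# Numerical check of binom(m + r - 1, m) <= w_1 * ... * w_m * B(w; r),
# where B(w; r) = #{ i in N^m : |i|_w < r }.
from itertools import product
from math import comb, prod

def B(w, r):
    # Since every w_j >= 1, any i with |i|_w < r satisfies i_j < r,
    # so enumerating {0, ..., r-1}^m suffices.
    return sum(1 for i in product(range(r), repeat=len(w))
               if sum(ij*wj for ij, wj in zip(i, w)) < r)

for w in [(1, 1), (1, 2), (2, 3), (1, 2, 3)]:
    for r in range(1, 8):
        m = len(w)
        assert comb(m + r - 1, m) <= prod(w) * B(w, r)

# For w = (1, ..., 1) the two sides agree: B((1,...,1); r) = binom(m + r - 1, m).
assert B((1, 1), 4) == comb(5, 2)
```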
\subsection{Interpretation of the bound and illustration of the set $ \Delta(I_\mathcal{J}) $} \label{subsec interpretation bound}
In this subsection, we give a graphical description of the footprint $ \Delta(I_\mathcal{J}) $ which will allow us to provide an interpretation of the bound (\ref{general bound}).
First, we observe that by adding the polynomials $ \prod_{i=1}^m G_i(x_i)^{r_i} $, for $ (r_1, r_2, \ldots, r_m) \notin \mathcal{J} $, we are bounding the set of points $ \Delta(I_\mathcal{J}) $ by a certain subset $ \mathcal{J}_S \subseteq \mathbb{N}^m $, which we now define:
\begin{definition}
We define the set
$$ \mathcal{J}_S = \left\lbrace \mathbf{i} \in \mathbb{N}^m : \mathbf{i} \nsucceq (r_1 \# S_1, r_2 \# S_2, \ldots, r_m \# S_m), \forall (r_1, r_2, \ldots, r_m) \notin \mathcal{J} \right\rbrace . $$
\end{definition}
For clarity, we now give a description of this set by a positive defining condition that follows from the properties of the Euclidean division and the fact that $ \mathcal{J} $ is decreasing.
\begin{lemma} \label{lemma auxiliary applications footprint 0}
It holds that
\begin{equation*}
\begin{split}
\mathcal{J}_S = \{ & \left( p_1 \# S_1 + t_1, p_2 \# S_2 + t_2, \ldots, p_m \# S_m + t_m \right) \in \mathbb{N}^m : \\
& (p_1, p_2, \ldots, p_m) \in \mathcal{J}, 0 \leq t_j < \# S_j, \forall j = 1,2, \ldots, m \}.
\end{split}
\end{equation*}
\end{lemma}
We may then state the fact that the footprint is bounded by this set as follows:
\begin{lemma}
It holds that
$$ \Delta(I_\mathcal{J}) \subseteq \{ \mathbf{x}^\mathbf{i} : \mathbf{i} \in \mathcal{J}_S \}. $$
\end{lemma}
Moreover, the set $ \mathcal{J}_S $ can be easily seen as the union of $ \# \mathcal{J} $ $ m $-dimensional rectangles in $ \mathbb{N}^m $ whose sides have lengths $ \# S_1 $, $ \# S_2 $, $ \ldots $, $ \# S_m $, respectively. In particular, we obtain the following:
\begin{lemma}\label{lemma auxiliary applications footprint 1}
It holds that
\begin{equation}
\# \mathcal{J}_S = \# S \cdot \# \mathcal{J}.
\label{eq computation elements not in}
\end{equation}
\end{lemma}
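The identity of Lemma \ref{lemma auxiliary applications footprint 1} can be checked by direct enumeration, using the description of $ \mathcal{J}_S $ from Lemma \ref{lemma auxiliary applications footprint 0}. Here is a small sketch in Python, our own illustration, with the parameters of the example that follows:

```python
# Enumerative check that #J_S = #S * #J, using the description of J_S as the
# set of points (p_1*#S_1 + t_1, p_2*#S_2 + t_2) with p in J, 0 <= t_j < #S_j.
from itertools import product

J = {(0, 1), (1, 1), (2, 1), (0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0)}
s = (2, 2)                                   # (#S_1, #S_2)

J_S = {(p[0]*s[0] + t[0], p[1]*s[1] + t[1])
       for p in J for t in product(range(s[0]), range(s[1]))}

# Euclidean division makes (p, t) -> (p_1*#S_1 + t_1, p_2*#S_2 + t_2) injective,
# so the rectangles are disjoint and the counts multiply.
assert len(J_S) == s[0] * s[1] * len(J)      # 36 = 4 * 9
```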
The footprint bound (\ref{general bound}) can then be interpreted as follows: Consider the set $ \mathcal{J}_S \subseteq \mathbb{N}^m $. For each $ \mathbf{x}^\mathbf{i} \in {\rm LM}(I_\mathcal{J}) $, remove from $ \mathcal{J}_S $ all points $ \mathbf{j} $ such that $ \mathbf{i} \preceq \mathbf{j} $. The remaining points correspond to the multiindices in $ \Delta(I_\mathcal{J}) $, and thus there are $ \# \Delta(I_\mathcal{J}) $ of them.
In particular, if $ F_1(\mathbf{x}), F_2(\mathbf{x}), \ldots, F_t(\mathbf{x}) \in I $, then we may only remove the points corresponding to $ {\rm LM}(F_i(\mathbf{x})) $, for $ i = 1,2, \ldots, t $, and we obtain an upper bound on $ \# \Delta(I_\mathcal{J}) $.
\begin{example}
Let us assume now that $ m = 2 $, $ \# S_1 = \# S_2 = 2 $, and $ \mathcal{J} = \{ (0,1), $ $ (1,1),$ $ (2,1),$ $ (0,0), $ $ (1,0), $ $ (2,0), $ $ (3,0), $ $ (4,0), $ $ (5,0) \} $.
In Figure \ref{fig interpretation}, top image, we represent by black dots the monomials whose multiindices belong to $ \mathcal{J}_S $, among which the medium-sized dots correspond to multiindices of the form $ 2\mathbf{p} $ with $ \mathbf{p} \in \mathcal{J} $ (that is, the multiindices of $ \mathcal{J} $ with each coordinate multiplied by $ 2 $). Blank dots correspond to multiindices that do not belong to $ \mathcal{J}_S $, and the largest ones correspond to minimal multiindices that do not belong to $ \mathcal{J}_S $.
In Figure \ref{fig interpretation}, bottom image, we represent in the same way the set $ \Delta(I_\mathcal{J}) $, whenever $ \langle {\rm LM}(I_\mathcal{J}) \rangle $ is generated by $ x_1^2x_2^3 $, $ x_1^8x_2 $, and the leading monomials of $ G_1(x_1)^{r_1} G_2(x_2)^{r_2} $, for minimal $ (r_1,r_2) \notin \mathcal{J} $, which in this case are $ x_2^4 $, $ x_1^6x_2^2 $ and $ x_1^{12} $.
In conclusion, the bound (\ref{general bound}) says that the number of zeros in $ S $ of $ I $ of multiplicity at least $ \mathcal{J} $ is at most $ 3 $.
\end{example}
\begin{figure}[h]
\begin{center}
\begin{tabular}{c@{\extracolsep{1cm}}c}
\begin{tikzpicture}[line width=1pt, scale=1]
\tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt]
\draw (0,0) node (O) [draw, circle, fill=black] {};
\draw (0,5) node (topy) {};
\draw (14,0) node (topx) {};
\draw[->] (O) -- (topy);
\draw[->] (O) -- (topx);
\foreach \p in {(0,4),(6,2),(12,0)}{\draw \p node [draw, scale=2, circle, fill=white] {};}
\foreach \p in {(0,2),(2,2),(2,0),(4,2),(4,0),(6,0),(8,0),(10,0)}{\draw \p node [draw, circle, fill=black] {};}
\foreach \p in {(2,4),(4,4),(6,4),(8,2),(10,2),(12,2)}{\draw \p node [draw, circle, fill=white] {};}
\foreach \p in {(0,3),(0,1),(1,3),(1,2),(1,1),(1,0),(2,3),(2,1),(3,3),(3,2),(3,1),(3,0),(4,3),(4,1),(5,3),(5,2),(5,1),(5,0),(6,1),(7,1),(7,0),(8,1),(9,1),(9,0),(10,1),(11,1),(11,0)}{\draw \p node [draw, scale=0.5, circle, fill=black] {};}
\foreach \p in {(1,4),(3,4),(5,4),(6,3),(7,2),(9,2),(11,2),(12,1)}{\draw \p node [draw, scale=0.7, circle, fill=white] {};}
\draw (-1,3.5) -- (5.5,3.5) -- (5.5,1.5) -- (11.5,1.5) -- (11.5,-1);
\draw (-0.7,-0.7) node {$ 1 $};
\draw (-0.7,1) node {$ x_2 $};
\draw (-0.7,2) node {$ x_2^2 $};
\draw (-0.7,3) node {$ x_2^3 $};
\draw (-0.7,4) node {$ x_2^4 $};
\draw (1,-0.7) node {$ x_1 $};
\draw (2,-0.7) node {$ x_1^2 $};
\draw (3,-0.7) node {$ x_1^3 $};
\draw (4,-0.7) node {$ x_1^4 $};
\draw (5,-0.7) node {$ x_1^5 $};
\draw (6,-0.7) node {$ x_1^6 $};
\draw (7,-0.7) node {$ x_1^7 $};
\draw (8,-0.7) node {$ x_1^8 $};
\draw (9,-0.7) node {$ x_1^9 $};
\draw (10,-0.7) node {$ x_1^{10} $};
\draw (11,-0.7) node {$ x_1^{11} $};
\draw (12,-0.7) node {$ x_1^{12} $};
\draw (10,4) node {The set $ \mathcal{J}_S $};
\end{tikzpicture}
\end{tabular}
\begin{tabular}{c@{\extracolsep{1cm}}c}
\begin{tikzpicture}[line width=1pt, scale=1]
\tikzstyle{every node}=[inner sep=0pt, minimum width=4.5pt]
\draw (0,0) node (O) [draw, circle, fill=black] {};
\draw (0,5) node (topy) {};
\draw (14,0) node (topx) {};
\draw[->] (O) -- (topy);
\draw[->] (O) -- (topx);
\foreach \p in {(0,4),(6,2),(12,0),(2,3),(8,1)}{\draw \p node [draw, scale=2, circle, fill=white] {};}
\foreach \p in {(0,2),(2,2),(2,0),(4,2),(4,0),(6,0),(8,0),(10,0)}{\draw \p node [draw, circle, fill=black] {};}
\foreach \p in {(2,4),(4,4),(6,4),(8,2),(10,2),(12,2)}{\draw \p node [draw, circle, fill=white] {};}
\foreach \p in {(0,3),(0,1),(1,3),(1,2),(1,1),(1,0),(2,1),(3,2),(3,1),(3,0),(4,1),(5,2),(5,1),(5,0),(6,1),(7,1),(7,0),(9,0),(11,0)}{\draw \p node [draw, scale=0.5, circle, fill=black] {};}
\foreach \p in {(1,4),(3,3),(3,4),(4,3),(5,3),(5,4),(6,3),(7,2),(9,1),(9,2),(10,1),(11,1),(11,2),(12,1)}{\draw \p node [draw, scale=0.7, circle, fill=white] {};}
\draw (-1,3.5) -- (1.5,3.5) -- (1.5,2.5) -- (5.5,2.5) -- (5.5,1.5) -- (7.5,1.5) -- (7.5,0.5) -- (11.5,0.5) -- (11.5,-1);
\draw [dashed] (1.5,3.5) -- (5.5,3.5) -- (5.5,2.5);
\draw [dashed] (7.5,1.5) -- (11.5,1.5) -- (11.5,0.5);
\draw (-0.7,-0.7) node {$ 1 $};
\draw (-0.7,1) node {$ x_2 $};
\draw (-0.7,2) node {$ x_2^2 $};
\draw (-0.7,3) node {$ x_2^3 $};
\draw (-0.7,4) node {$ x_2^4 $};
\draw (1,-0.7) node {$ x_1 $};
\draw (2,-0.7) node {$ x_1^2 $};
\draw (3,-0.7) node {$ x_1^3 $};
\draw (4,-0.7) node {$ x_1^4 $};
\draw (5,-0.7) node {$ x_1^5 $};
\draw (6,-0.7) node {$ x_1^6 $};
\draw (7,-0.7) node {$ x_1^7 $};
\draw (8,-0.7) node {$ x_1^8 $};
\draw (9,-0.7) node {$ x_1^9 $};
\draw (10,-0.7) node {$ x_1^{10} $};
\draw (11,-0.7) node {$ x_1^{11} $};
\draw (12,-0.7) node {$ x_1^{12} $};
\draw (10,4) node {The set $ \Delta(I_\mathcal{J}) $};
\end{tikzpicture}
\end{tabular}
\end{center}
\caption{Illustration of the sets $ \mathcal{J}_S $ and $ \Delta(I_\mathcal{J}) $ in $ \mathbb{N}^m $.}
\label{fig interpretation}
\end{figure}
As a consequence of this interpretation, we may deduce the following useful fact:
\begin{lemma} \label{lemma auxiliary applications footprint 2}
Assume that the finite set $ \mathcal{J} \subseteq \mathbb{N}^m $ is decreasing and $ \mathbf{x}^\mathbf{i} = {\rm LM}(F(\mathbf{x})) $ with respect to some monomial ordering, for some polynomial $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $. If $ \mathbf{i} \in \mathcal{J}_S $, then it holds that
\begin{equation}
\# \Delta(\langle F(\mathbf{x}) \rangle_\mathcal{J}) < \# S \cdot \# \mathcal{J}.
\label{eq suficient condigion not max footprint}
\end{equation}
\end{lemma}
We conclude with simple descriptions of $ \mathcal{J}_S $ in two cases, multiindices bounded by a weighted order and multiindices bounded on each coordinate separately, both of which follow by straightforward calculations:
\begin{remark}
Given a vector of positive weights $ \mathbf{w} = (w_1, w_2, \ldots, w_m) \in \mathbb{N}_+^m $, a positive integer $ r \in \mathbb{N}_+ $, and $ \mathcal{J} = \{ \mathbf{i} \in \mathbb{N}^m : \mid \mathbf{i} \mid_\mathbf{w} < r \} $, it holds that
$$ \mathcal{J}_S = \left\lbrace (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m : \sum_{j=1}^m \left\lfloor \frac{i_j}{\# S_j} \right\rfloor w_j < r \right\rbrace . $$
On the other hand, given $ (r_1, r_2, \ldots, r_m) \in \mathbb{N}_+^m $ and $ \mathcal{J} = \left\lbrace (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m : \right. $ $ i_j < r_j, $ $ \left. j = 1,2, \ldots, m \right\rbrace $, it holds that
$$ \mathcal{J}_S = \left\lbrace (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m : i_j < r_j \# S_j, j = 1,2, \ldots, m \right\rbrace . $$
\end{remark}
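The two closed forms in the remark can be checked computationally. The following is a small sketch (our own illustration, not part of the text), assuming the definition $ \mathcal{J}_S = \{ \mathbf{i} \in \mathbb{N}^m : (\lfloor i_1/\#S_1 \rfloor, \ldots, \lfloor i_m/\#S_m \rfloor) \in \mathcal{J} \} $ that underlies both formulas; it enumerates $ \mathcal{J}_S $ for a weighted order and compares it with the stated closed form.

```python
from itertools import product

def J_S(J, sizes, box):
    """Enumerate J_S = { i : (floor(i_j / #S_j))_j in J } inside a finite box."""
    return {i for i in product(*(range(b) for b in box))
            if tuple(ij // s for ij, s in zip(i, sizes)) in J}

# Weighted-order example: w = (1, 2), r = 3, grid sizes (#S_1, #S_2) = (2, 3).
w, r, sizes = (1, 2), 3, (2, 3)
J = {i for i in product(range(r), repeat=2) if i[0]*w[0] + i[1]*w[1] < r}

computed = J_S(J, sizes, box=(r * sizes[0], r * sizes[1]))

# The closed form from the remark:
formula = {i for i in product(range(r * sizes[0]), range(r * sizes[1]))
           if (i[0] // sizes[0]) * w[0] + (i[1] // sizes[1]) * w[1] < r}

assert computed == formula
assert len(computed) == len(J) * sizes[0] * sizes[1]  # consistent with #J_S = #S * #J
print(len(J), len(computed))
```

Since every $ \mathbf{j} \in \mathcal{J} $ pulls back to exactly $ \# S $ multiindices under the coordinatewise floor map, the final assertion reflects $ \# \mathcal{J}_S = \# S \cdot \# \mathcal{J} $.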
\subsection{Sharpness and equality conditions}
To conclude the section, we study the sharpness of the bound (\ref{general bound}). We will give necessary and sufficient conditions on the ideal $ I $ for (\ref{general bound}) to be an equality, and we will see that (\ref{general bound}) is the sharpest bound that can be obtained as a strictly increasing function of the size of the footprint $ \Delta(I_\mathcal{J}) $.
We start by defining the ideal associated to a set of points and a set of multiindices.
\begin{definition} \label{def ideal of a set with multi}
Given $ \mathcal{V} \subseteq \mathbb{F}^m $, we define
$$ I(\mathcal{V}; \mathcal{J}) = \left\lbrace F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] : F^{(\mathbf{i})}(\mathbf{a}) = 0, \forall \mathbf{a} \in \mathcal{V}, \forall \mathbf{i} \in \mathcal{J} \right\rbrace . $$
\end{definition}
In the next proposition, we show that this set is indeed an ideal and gather further properties, analogous to those of ideals and algebraic sets in classical algebraic geometry.
\begin{proposition} \label{prop properties of ideals of zeros}
Given a set of points $ \mathcal{V} \subseteq \mathbb{F}^m $, the set $ I(\mathcal{V}; \mathcal{J}) $ in the previous definition is an ideal in $ \mathbb{F}[\mathbf{x}] $. Moreover, the following properties hold:
\begin{enumerate}
\item
$ I \subseteq I(\mathcal{V}_\mathcal{J}(I); \mathcal{J}) $.
\item
$ \mathcal{V} \subseteq \mathcal{V}_\mathcal{J}(I(\mathcal{V}; \mathcal{J})) $.
\item
$ I = I(\mathcal{V}_\mathcal{J}(I); \mathcal{J}) $ if, and only if, $ I = I(\mathcal{W}; \mathcal{J}) $ for some set $ \mathcal{W} \subseteq \mathbb{F}^m $.
\item
$ \mathcal{V} = \mathcal{V}_\mathcal{J}(I(\mathcal{V}; \mathcal{J})) $ if, and only if, $ \mathcal{V} = \mathcal{V}_\mathcal{J}(K) $, for some ideal $ K \subseteq \mathbb{F}[\mathbf{x}] $.
\end{enumerate}
\end{proposition}
\begin{proof}
The fact that $ I(\mathcal{V}; \mathcal{J}) $ is an ideal follows from the Leibniz formula (Lemma \ref{lemma leibniz formula}) and the fact that $ \mathcal{J} $ is decreasing. The properties in items 1, 2, 3, and 4 follow as in classical algebraic geometry and are left to the reader.
\end{proof}
The following is the main result of the subsection:
\begin{theorem} \label{th equality conditions}
Fixing a monomial ordering, the bound (\ref{general bound}) is an equality if, and only if,
\begin{equation}
I_\mathcal{J} = I \left( \mathcal{V}_\mathcal{J}(I); \mathcal{J} \right).
\end{equation}
In particular, for any choice of a decreasing finite set $ \mathcal{J} \subseteq \mathbb{N}^m $ and any finite set of points $ \mathcal{V} \subseteq \mathbb{F}^m $, there exists an ideal, namely $ I = I(\mathcal{V}; \mathcal{J}) $, satisfying equality in (\ref{general bound}).
\end{theorem}
\begin{proof}
With notation as in the proof of Theorem \ref{th footprint bound}, the evaluation map $ {\rm Ev} : \mathbb{F}[\mathbf{x}] \longrightarrow \mathbb{F}^{\# \mathcal{V}_\mathcal{J}(I) \cdot \# \mathcal{J}} $ from Definition \ref{def evaluation map} is $ \mathbb{F} $-linear and surjective by Lemma \ref{lemma hermite}. By definition, its kernel is
$$ {\rm Ker}({\rm Ev}) = I(\mathcal{V}_\mathcal{J}(I); \mathcal{J}). $$
On the other hand, we saw in the proof of Theorem \ref{th footprint bound} that $ I_\mathcal{J} \subseteq {\rm Ker}({\rm Ev}) $. This means that the evaluation map
$$ {\rm Ev} : \mathbb{F}[\mathbf{x}] / I_\mathcal{J} \longrightarrow \mathbb{F}^{\# \mathcal{V}_\mathcal{J}(I) \cdot \# \mathcal{J}} $$
is an isomorphism if, and only if, $ I_\mathcal{J} = I(\mathcal{V}_\mathcal{J}(I); \mathcal{J}) $.
Finally, the fact that this evaluation map is an isomorphism is equivalent to (\ref{general bound}) being an equality, by the proof of Theorem \ref{th footprint bound}. Together with Proposition \ref{prop properties of ideals of zeros} and the fact that $ I = I_\mathcal{J} $ if $ I = I(\mathcal{V}; \mathcal{J}) $ by the proof of Theorem \ref{th footprint bound}, the theorem follows.
\end{proof}
Thanks to this result, we may establish that the bound (\ref{general bound}) is the sharpest bound that is a strictly increasing function of the size of the footprint $ \Delta(I_\mathcal{J}) $, in the following sense: If equality holds for such a bound, then it holds in (\ref{general bound}).
\begin{corollary}
Let $ f : \mathbb{N} \longrightarrow \mathbb{R} $ be a strictly increasing function, and assume that
\begin{equation}
\# \mathcal{V}_\mathcal{J}(I) \leq f(\# \Delta (I_\mathcal{J})),
\label{eq alternative footprint bounds}
\end{equation}
for all ideals $ I \subseteq \mathbb{F}[\mathbf{x}] $. If equality holds in (\ref{eq alternative footprint bounds}) for a given ideal $ I \subseteq \mathbb{F}[\mathbf{x}] $, then equality holds in (\ref{general bound}) for that ideal.
\end{corollary}
\begin{proof}
First we have that $ I_\mathcal{J} \subseteq I(\mathcal{V}_\mathcal{J}(I); \mathcal{J}) $ as we saw in the proof of the previous theorem. Hence the reverse inclusion holds for their footprints and thus
\begin{equation}
f \left( \# \Delta \left( I \left( \mathcal{V}_\mathcal{J}(I); \mathcal{J} \right) \right) \right) \leq f(\# \Delta (I_\mathcal{J})).
\label{eq footprint sharper proof 1}
\end{equation}
Now, since $ \mathcal{V}_\mathcal{J}(I) = \mathcal{V}_\mathcal{J}(I(\mathcal{V}_\mathcal{J}(I); \mathcal{J})) $ by Proposition \ref{prop properties of ideals of zeros}, and equality holds in (\ref{eq alternative footprint bounds}) for $ I $, we have that
\begin{equation}
f(\# \Delta (I_\mathcal{J})) = \# \mathcal{V}_\mathcal{J}(I) = \# \mathcal{V}_\mathcal{J}(I(\mathcal{V}_\mathcal{J}(I); \mathcal{J})) \leq f(\# \Delta (I(\mathcal{V}_\mathcal{J}(I); \mathcal{J}))).
\label{eq footprint sharper proof 2}
\end{equation}
Combining (\ref{eq footprint sharper proof 1}) and (\ref{eq footprint sharper proof 2}), and using that $ f $ is strictly increasing, we conclude that
$$ \# \Delta (I(\mathcal{V}_\mathcal{J}(I); \mathcal{J})) = \# \Delta (I_\mathcal{J}), $$
which implies that equality holds in (\ref{general bound}) for $ I $ by Theorem \ref{th equality conditions}, and we are done.
\end{proof}
\section{Applications of the footprint bound for consecutive derivatives}
In this section, we present a brief collection of applications of Theorem \ref{th footprint bound}, which are extensions to consecutive derivatives of well-known important results from the literature. Throughout the section, we will again fix finite sets $ S_1, S_2, \ldots, S_m \subseteq \mathbb{F} $ and $ S = S_1 \times S_2 \times \cdots \times S_m $.
\subsection{Alon's combinatorial Nullstellensatz}
The combinatorial Nullstellensatz is a non-vanishing theorem by Alon \cite[Theorem 1.2]{alon} with many applications in combinatorics. It has been extended to non-vanishing theorems for standard multiplicities in \cite[Corollary 3.2]{ball} and for multisets (sets with multiplicities) in \cite[Theorem 6]{nulmultisets}.
In this subsection, we state and prove a combinatorial Nullstellensatz for consecutive derivatives and derive the well-known particular cases as corollaries. The formulation in \cite[Theorem 1.1]{alon} is equivalent in essence; we will extend that result in the next subsection in terms of Gr{\"o}bner bases.
\begin{theorem} \label{th combinatorial nullstellensatz general}
Let $ \mathcal{J} \subseteq \mathbb{N}^m $ be a decreasing finite set, let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ be a non-zero polynomial, and let $ \mathbf{x}^\mathbf{i} = {\rm LM}(F(\mathbf{x})) $ for some monomial ordering. If $ \mathbf{i} \in \mathcal{J}_S $, then there exist $ \mathbf{s} \in S $ and $ \mathbf{j} \in \mathcal{J} $ such that
$$ F^{(\mathbf{j})}(\mathbf{s}) \neq 0. $$
\end{theorem}
\begin{proof}
By Lemma \ref{lemma auxiliary applications footprint 2}, the assumptions imply that
$$ \# \Delta(\langle F(\mathbf{x}) \rangle_\mathcal{J}) < \# S \cdot \# \mathcal{J}. $$
On the other hand, Theorem \ref{th footprint bound} implies that
$$ \# \mathcal{V}_\mathcal{J}(F(\mathbf{x})) \cdot \# \mathcal{J} \leq \# \Delta(\langle F(\mathbf{x}) \rangle_\mathcal{J}). $$
Therefore not all points in $ S $ are zeros of $ F(\mathbf{x}) $ of multiplicity at least $ \mathcal{J} $, and the result follows.
\end{proof}
We now derive the original theorem \cite[Theorem 1.2]{alon}. This constitutes an alternative proof. See also \cite{shortproof} for another recent short proof.
\begin{corollary} [\textbf{\cite{alon}}]
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $. Assume that the coefficient of $ \mathbf{x}^\mathbf{i} $ in $ F(\mathbf{x}) $ is not zero and $ {\rm deg}(F(\mathbf{x})) = \mid \mathbf{i} \mid $. If $ \# S_j > i_j $ for all $ j = 1,2, \ldots, m $, then there exist $ s_1 \in S_1 $, $ s_2 \in S_2 $, $ \ldots $, $ s_m \in S_m $, such that
$$ F(s_1, s_2, \ldots, s_m) \neq 0. $$
\end{corollary}
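The corollary can be illustrated numerically. The following toy example (ours, not from the text) takes $ F(x, y) = xy - x - y $, whose coefficient of $ x^1 y^1 $ is nonzero with $ \deg(F) = 2 $, and grids $ S_1, S_2 $ of size $ 2 > 1 $; the theorem then guarantees a non-vanishing point on the grid, which a brute-force search confirms.

```python
from itertools import product

def F(x, y):
    # Leading (total-degree) term x*y with nonzero coefficient; deg F = 2.
    return x * y - x - y

S1, S2 = [0, 1], [0, 2]   # #S_1 > 1 and #S_2 > 1, as the corollary requires
nonzeros = [(s1, s2) for s1, s2 in product(S1, S2) if F(s1, s2) != 0]
assert nonzeros           # the Nullstellensatz guarantees this list is non-empty
print(nonzeros)
```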
\begin{proof}
First, there exists a graded monomial ordering such that $ \mathbf{x}^{\mathbf{i}} = {\rm LM}(F(\mathbf{x})) $ since $ {\rm deg}(F(\mathbf{x})) = \mid \mathbf{i} \mid $. Now, the assumption implies that
$$ \mathbf{i} \nsucceq (r_1 \#S_1, r_2 \# S_2, \ldots, r_m \# S_m), $$
for all $ \mathbf{r} = (r_1, r_2, \ldots, r_m) $ such that $ r_j = 1 $ for some $ j $, and the rest are zero. These are in fact all minimal multiindices not in $ \mathcal{J} = \{ \mathbf{0} \} $. Thus the result follows from the previous theorem.
\end{proof}
The next consequence is a combinatorial Nullstellensatz for weighted multiplicities, where the particular case $ w_1 = w_2 = \ldots = w_m = 1 $ coincides with \cite[Corollary 3.2]{ball} (recall the definition of weighted degree from Definition \ref{def weighted degree}):
\begin{corollary} \label{theorem comb nulls weighted}
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $, let $ \mathbf{w} = (w_1, w_2, \ldots, w_m) \in \mathbb{N}_+^m $ and let $ r \in \mathbb{N}_+ $. Assume that the coefficient of $ \mathbf{x}^\mathbf{i} $ in $ F(\mathbf{x}) $ is not zero and $ {\rm deg}_{\mathbf{w}}(F(\mathbf{x})) = \mid \mathbf{i} \mid_{\mathbf{w}} $.
Assume also that, for all $ \mathbf{r} = (r_1, r_2, \ldots, r_m) $ with $ \mid \mathbf{r} \mid_{\mathbf{w}} \geq r $, there exists a $ j $ such that $ r_j \# S_j > i_j $. Then there exist $ s_1 \in S_1 $, $ s_2 \in S_2 $, $ \ldots $, $ s_m \in S_m $, and some $ \mathbf{j} \in \mathbb{N}^m $ with $ \mid \mathbf{j} \mid_{\mathbf{w}} < r $, such that
$$ F^{(\mathbf{j})}(s_1, s_2, \ldots, s_m) \neq 0. $$
\end{corollary}
\begin{proof}
It follows from Theorem \ref{th combinatorial nullstellensatz general} as in the proof of the previous corollary.
\end{proof}
We conclude with a combinatorial Nullstellensatz for multiindices bounded on each coordinate separately:
\begin{corollary}
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $, let $ (r_1, r_2, \ldots, r_m) \in \mathbb{N}_+^m $, and assume that $ \mathbf{x}^\mathbf{i} = $ $ {\rm LM}(F(\mathbf{x})) $, $ \mathbf{i} = (i_1, i_2, \ldots, i_m) $, for some monomial ordering and $ i_j < r_j \# S_j $, for all $ j = 1,2, \ldots, m $. Then there exist $ s_1 \in S_1 $, $ s_2 \in S_2 $, $ \ldots $, $ s_m \in S_m $, and some $ \mathbf{j} = (j_1, j_2, \ldots, j_m) \in \mathbb{N}^m $ with $ j_k < r_k $, for all $ k = 1,2, \ldots, m $, such that
$$ F^{(\mathbf{j})}(s_1, s_2, \ldots, s_m) \neq 0. $$
\end{corollary}
\subsection{Gr{\"o}bner bases of ideals of zeros in a grid}
An equivalent but more refined consequence is obtaining a Gr{\"o}bner basis for ideals $ I(S; \mathcal{J}) $ associated to the whole grid $ S $ and to a decreasing finite set of multiindices (recall Definition \ref{def ideal of a set with multi}). This result is also usually referred to as combinatorial Nullstellensatz in many works in the literature (see \cite[Theorem 1.1]{alon}, \cite[Theorem 3.1]{ball} and \cite[Theorem 1]{nulmultisets}). We briefly recall the notion of Gr{\"o}bner basis. We will also make repeated use of the Euclidean division on the multivariate polynomial ring and its properties. See \cite[Chapter 2]{clo1} for more details.
\begin{definition}[\textbf{Gr{\"o}bner bases}]
Given a monomial ordering $ \preceq_m $ and an ideal $ I \subseteq \mathbb{F}[\mathbf{x}] $, we say that a finite family of polynomials $ \mathcal{F} \subseteq I $ is a Gr{\"o}bner basis of $ I $ with respect to $ \preceq_m $ if
$$ \left\langle {\rm LM}_{\preceq_m}(I) \right\rangle = \left\langle {\rm LM}_{\preceq_m}(\mathcal{F}) \right\rangle. $$
Moreover, we say that $ \mathcal{F} $ is reduced if, for any two distinct $ F(\mathbf{x}), G(\mathbf{x}) \in \mathcal{F} $, it holds that $ {\rm LM}_{\preceq_m}(F(\mathbf{x})) $ does not divide any monomial in $ G(\mathbf{x}) $.
\end{definition}
Recall that a Gr{\"o}bner basis of an ideal generates it as an ideal. To obtain reduced Gr{\"o}bner bases, we need a way to minimally generate decreasing finite sets in $ \mathbb{N}^m $, which is given by the following object:
\begin{definition} \label{def generators decreasing finite set}
For any decreasing finite set $ \mathcal{J} \subseteq \mathbb{N}^m $, we define
$$ \mathcal{B}_\mathcal{J} = \{ \mathbf{i} \notin \mathcal{J} : \left( \mathbf{j} \notin \mathcal{J} \textrm{ and } \mathbf{j} \preceq \mathbf{i} \right) \Longrightarrow \mathbf{i} = \mathbf{j} \}. $$
\end{definition}
The main result of this subsection is the following:
\begin{theorem} \label{th groebner general}
For any decreasing finite set $ \mathcal{J} \subseteq \mathbb{N}^m $, the family
$$ \mathcal{F} = \left\lbrace \prod_{j=1}^m G_j(x_j)^{r_j} : (r_1, r_2, \ldots, r_m) \in \mathcal{B}_\mathcal{J} \right\rbrace $$
is a reduced Gr{\"o}bner basis of the ideal $ I(S; \mathcal{J}) $ with respect to any monomial ordering. In particular, for any $ F(\mathbf{x}) \in I(S; \mathcal{J}) $, there exist polynomials $ H_\mathbf{r}(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ such that
$$ \deg(H_\mathbf{r}(\mathbf{x})) + \sum_{j=1}^m r_j \deg(G_j(x_j)) \leq \deg(F(\mathbf{x})), $$
for $ \mathbf{r} = (r_1, r_2, \ldots, r_m) \in \mathcal{B}_\mathcal{J} $, and
$$ F(\mathbf{x}) = \sum_{\mathbf{r} \in \mathcal{B}_\mathcal{J}} \left( H_\mathbf{r}(\mathbf{x}) \prod_{j=1}^m G_j(x_j)^{r_j} \right). $$
\end{theorem}
\begin{proof}
It suffices to prove that, if $ F(\mathbf{x}) \in I(S; \mathcal{J}) $ and we divide it by the family $ \mathcal{F} $ (in an arbitrary order), then the remainder must be the zero polynomial.
Performing such division, we obtain $ F(\mathbf{x}) = G(\mathbf{x}) + R(\mathbf{x}) $, where $ R(\mathbf{x}) $ is the remainder of the division and $ G(\mathbf{x}) \in I(S; \mathcal{J}) $. Assume that $ R(\mathbf{x}) \neq 0 $ and let $ \mathbf{x}^\mathbf{i} $ be the leading monomial of $ R(\mathbf{x}) $ with respect to the chosen monomial ordering. Since no leading monomial of the polynomials in $ \mathcal{F} $ divides $ \mathbf{x}^\mathbf{i} $, we conclude that
$$ \mathbf{i} \nsucceq (r_1 \#S_1, r_2 \#S_2, \ldots, r_m \# S_m), $$
for all minimal $ \mathbf{r} = (r_1, r_2, \ldots, r_m) \notin \mathcal{J} $, that is, for all $ \mathbf{r} \in \mathcal{B}_\mathcal{J} $. Thus by Theorem \ref{th combinatorial nullstellensatz general}, we conclude that not all points in $ S $ are zeros of $ R(\mathbf{x}) $ of multiplicity at least $ \mathcal{J} $, which is absurd since $ R(\mathbf{x}) = F(\mathbf{x}) - G(\mathbf{x}) \in I(S; \mathcal{J}) $, and we are done.
The fact that $ \mathcal{F} $ is reduced follows from observing that the multiindices $ \mathbf{r} \in \mathcal{B}_\mathcal{J} $ are minimal among those not in $ \mathcal{J} $. The last part of the theorem follows by performing the Euclidean division.
\end{proof}
The following particular case is \cite[Theorem 1.1]{alon}:
\begin{corollary} [\textbf{\cite{alon}}]
If $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ vanishes at all points in $ S $, then there exist polynomials $ H_j(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ such that $ \deg(H_j(\mathbf{x})) + \deg(G_j(x_j)) \leq \deg(F(\mathbf{x})) $, for $ j = 1,2, \ldots, m $, and
$$ F(\mathbf{x}) = \sum_{j=1}^m H_j(\mathbf{x}) G_j(x_j). $$
\end{corollary}
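A concrete instance of this decomposition (our own toy example, not from the text): over $ S = \{0,1\}^2 $ with $ G_j(t) = t(t-1) $, the polynomial $ F(x, y) = x^2 y - x y $ vanishes on the grid and factors as $ F = H_1 G_1 + H_2 G_2 $ with $ H_1 = y $, $ H_2 = 0 $, respecting the degree bound $ \deg(H_j) + \deg(G_j) \leq \deg(F) $. The sketch below verifies both the vanishing and the identity on many points.

```python
from itertools import product

G = lambda t: t * (t - 1)           # G_j(t) = prod_{s in S_j} (t - s) for S_j = {0, 1}
F = lambda x, y: x**2 * y - x * y   # = y * x * (x - 1), so F vanishes on the grid

S = [0, 1]
assert all(F(s1, s2) == 0 for s1, s2 in product(S, S))

H1 = lambda x, y: y                 # F = H1 * G(x) + H2 * G(y), deg H1 + deg G = 3 = deg F
H2 = lambda x, y: 0
for x, y in product(range(-3, 4), repeat=2):   # polynomial identity check on many points
    assert F(x, y) == H1(x, y) * G(x) + H2(x, y) * G(y)
```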
To study the case of weighted multiplicities, we observe the following:
\begin{remark}
Given a vector of positive weights $ \mathbf{w} = (w_1, w_2, \ldots, w_m) \in \mathbb{N}_+^m $, a positive integer $ r \in \mathbb{N}_+ $, and the set $ \mathcal{J} = \{ \mathbf{i} \in \mathbb{N}^m : \mid \mathbf{i} \mid_\mathbf{w} < r \} $, it holds that $ \mathcal{B}_\mathcal{J} = \mathcal{B}_\mathbf{w} $, where
$$ \mathcal{B}_\mathbf{w} = \left\lbrace (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m : r \leq \sum_{j=1}^m i_j w_j < r + \min \left\lbrace w_j : i_j \neq 0 \right\rbrace \right\rbrace . $$
\end{remark}
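The identity $ \mathcal{B}_\mathcal{J} = \mathcal{B}_\mathbf{w} $ can be tested directly. The following sketch (ours, for illustration) computes the componentwise-minimal multiindices outside a weighted-order set $ \mathcal{J} $ by brute force and compares them with the closed form $ \mathcal{B}_\mathbf{w} $ of the remark; the search box suffices since every minimal generator has coordinates at most $ r $.

```python
from itertools import product

def minimal_outside(J, box):
    """Componentwise-minimal multiindices in the box that are not in J."""
    outside = [i for i in product(*(range(b) for b in box)) if i not in J]
    return {i for i in outside
            if not any(j != i and all(a <= b for a, b in zip(j, i))
                       for j in outside)}

w, r = (1, 2), 4
J = {i for i in product(range(r + 1), repeat=2) if i[0]*w[0] + i[1]*w[1] < r}

B = minimal_outside(J, box=(r + 1, r + 1))

# Closed form B_w from the remark (the chained comparison short-circuits,
# so the min over nonzero coordinates is never taken for i = (0, 0)):
Bw = {i for i in product(range(r + 1), repeat=2)
      if r <= i[0]*w[0] + i[1]*w[1]
           < r + min(wj for wj, ij in zip(w, i) if ij != 0)}

assert B == Bw
print(sorted(B))
```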
We then obtain the next consequence, where the particular case $ w_1 = w_2 = \ldots = w_m = 1 $ coincides with \cite[Theorem 3.1]{ball}.
\begin{corollary} \label{corollary groebner weighted}
Given a vector of positive weights $ \mathbf{w} = (w_1, w_2, \ldots, w_m) \in \mathbb{N}_+^m $ and a positive integer $ r \in \mathbb{N}_+ $, if $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ vanishes at all points in $ S $ with weighted multiplicity at least $ r $, then there exist polynomials $ H_\mathbf{r}(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ such that $ \deg(H_\mathbf{r}(\mathbf{x})) + \sum_{j=1}^m r_j \deg(G_j(x_j)) \leq \deg(F(\mathbf{x})) $, for all $ \mathbf{r} = (r_1, r_2, \ldots, r_m) \in \mathcal{B}_\mathbf{w} $, and
$$ F(\mathbf{x}) = \sum_{\mathbf{r} \in \mathcal{B}_\mathbf{w}} \left( H_\mathbf{r}(\mathbf{x}) \prod_{j=1}^m G_j(x_j)^{r_j} \right). $$
\end{corollary}
We conclude with the case of multiindices bounded on each coordinate separately:
\begin{corollary} \label{corollary groebner coordinate}
Given a vector $ (r_1, r_2, \ldots, r_m) \in \mathbb{N}_+^m $, if $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ is such that $ F^{(\mathbf{j})}(\mathbf{s}) = 0 $, for all $ \mathbf{s} \in S $ and all $ \mathbf{j} = (j_1, j_2, \ldots, j_m) \in \mathbb{N}^m $ satisfying $ j_k < r_k $, for all $ k = 1,2, \ldots, m $, then there exist polynomials $ H_j(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ such that $ \deg(H_j(\mathbf{x})) + r_j \deg(G_j(x_j)) \leq \deg(F(\mathbf{x})) $, for all $ j = 1,2, \ldots, m $, and
$$ F(\mathbf{x}) = \sum_{j = 1}^m H_j(\mathbf{x}) G_j(x_j)^{r_j}. $$
\end{corollary}
\begin{proof}
It follows from Theorem \ref{th groebner general} observing that, if $ \mathcal{J} = \left\lbrace (j_1, j_2, \ldots, j_m) \in \mathbb{N}^m : j_k < r_k, \right. $ $ \left. k = 1,2, \ldots, m \right\rbrace $, then
$$ \mathcal{B}_\mathcal{J} = \left\lbrace r_j \mathbf{e}_j \in \mathbb{N}^m : j = 1,2, \ldots, m \right\rbrace, $$
where $ \mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_m \in \mathbb{N}^m $ are the vectors in the canonical basis.
\end{proof}
\subsection{Hermite interpolation over grids with consecutive derivatives}
In the appendix we show that the evaluation map (Definition \ref{def evaluation map}) is surjective. This has been used to prove Theorem \ref{th footprint bound}. In this subsection, we see that the combinatorial Nullstellensatz (Theorem \ref{th combinatorial nullstellensatz general}) implies that the evaluation map over the whole grid $ S $, with consecutive derivatives, is an isomorphism when taking an appropriate domain. More concretely, we show the existence and uniqueness of Hermite interpolating polynomials over $ S $ with derivatives in $ \mathcal{J} $ when choosing monomials in $ \mathcal{J}_S $. Finding appropriate sets of points, derivatives and polynomials to guarantee existence and uniqueness of Hermite interpolating polynomials has been extensively studied \cite{gasca, kopparty-multiplicity, lorentz}. The next result is new to the best of our knowledge:
\begin{theorem} \label{th uniqueness hermite}
Given a decreasing finite set $ \mathcal{J} \subseteq \mathbb{N}^m $, the evaluation map in Definition \ref{def evaluation map} for the finite set $ S = S_1 \times S_2 \times \cdots \times S_m $, defined as
$$ {\rm Ev} : \left\langle \mathcal{J}_S \right\rangle_\mathbb{F} \longrightarrow \mathbb{F}^{\# S \cdot \# \mathcal{J}}, $$
is a vector space isomorphism. In other words, for all $ b_{\mathbf{j}, \mathbf{a}} \in \mathbb{F} $, where $ \mathbf{j} \in \mathcal{J} $ and $ \mathbf{a} \in S $, there exists a unique polynomial of the form
$$ F(\mathbf{x}) = \sum_{\mathbf{i} \in \mathcal{J}_S} F_\mathbf{i} \mathbf{x}^\mathbf{i} \in \mathbb{F}[\mathbf{x}], $$
where $ F_\mathbf{i} \in \mathbb{F} $ for all $ \mathbf{i} \in \mathcal{J}_S $, such that $ F^{(\mathbf{j})}(\mathbf{a}) = b_{\mathbf{j}, \mathbf{a}} $, for all $ \mathbf{j} \in \mathcal{J} $ and all $ \mathbf{a} \in S $.
\end{theorem}
\begin{proof}
The map is injective by Theorem \ref{th combinatorial nullstellensatz general}, and both vector spaces have the same dimension over $ \mathbb{F} $ by Lemma \ref{lemma auxiliary applications footprint 1}; hence the map is a vector space isomorphism.
\end{proof}
\begin{remark}
Observe that we may similarly prove that the following two maps are vector space isomorphisms:
$$ \left\langle \mathcal{J}_S \right\rangle_\mathbb{F} \stackrel{\rho}{\longrightarrow} \mathbb{F}[\mathbf{x}] / I(S; \mathcal{J}) \stackrel{{\rm Ev}}{\longrightarrow} \mathbb{F}^{\# S \cdot \# \mathcal{J}}, $$
where $ \rho $ is the projection to the quotient ring. We may then extend the notion of \textit{reduction} of a polynomial as follows (see \cite[Section 3.1]{clark} and \cite[Section 6.3]{gasca}, for instance): Given $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $, we define its reduction over the set $ S $ with derivatives in $ \mathcal{J} $ as
$$ G(\mathbf{x}) = \rho^{-1} \left( F(\mathbf{x}) + I(S; \mathcal{J}) \right) . $$
\end{remark}
As an immediate consequence, we obtain the following result on Hermite interpolation with weighted multiplicities:
\begin{corollary} \label{cor uniqueness hermite weighted}
For every vector of positive weights $ \mathbf{w} = (w_1, w_2, \ldots, w_m) \in \mathbb{N}_+^m $, every positive integer $ r \in \mathbb{N}_+ $, and elements $ b_{\mathbf{j}, \mathbf{a}} \in \mathbb{F} $, for $ \mathbf{j} \in \mathbb{N}^m $ with $ \mid \mathbf{j} \mid_\mathbf{w} < r $ and for $ \mathbf{a} \in S $, there exists a unique polynomial of the form
$$ F(\mathbf{x}) = \sum_{\mathbf{i} \in \mathbb{N}^m} F_\mathbf{i} \mathbf{x}^\mathbf{i}, $$
where $ F_\mathbf{i} \in \mathbb{F} $ for all $ \mathbf{i} = (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m $, and $ F_\mathbf{i} = 0 $ whenever
$$ \sum_{j=1}^m \left\lfloor \frac{i_j}{\# S_j} \right\rfloor w_j \geq r, $$
such that $ F^{(\mathbf{j})}(\mathbf{a}) = b_{\mathbf{j}, \mathbf{a}} $, for all $ \mathbf{j} \in \mathbb{N}^m $ with $ \mid \mathbf{j} \mid_\mathbf{w} < r $ and all $ \mathbf{a} \in S $.
\end{corollary}
We conclude with the case of multiindices bounded on each coordinate separately:
\begin{corollary} \label{cor uniqueness hermite coordintate}
Given $ (r_1, r_2, \ldots, r_m) \in \mathbb{N}_+^m $ and given elements $ b_{\mathbf{j}, \mathbf{a}} \in \mathbb{F} $, for $ \mathbf{j} = (j_1, j_2, \ldots, j_m) \in \mathbb{N}^m $ with $ j_k < r_k $, for all $ k = 1,2, \ldots, m $, and for $ \mathbf{a} \in S $, there exists a unique polynomial of the form
$$ F(\mathbf{x}) = \sum_{i_1 = 0}^{r_1\# S_1 - 1} \sum_{i_2 = 0}^{r_2\# S_2 - 1} \cdots \sum_{i_m = 0}^{r_m\# S_m - 1} F_\mathbf{i} \mathbf{x}^\mathbf{i}, $$
such that $ F^{(\mathbf{j})}(\mathbf{a}) = b_{\mathbf{j}, \mathbf{a}} $, for all $ \mathbf{j} = (j_1, j_2, \ldots, j_m) \in \mathbb{N}^m $ with $ j_k < r_k $, for all $ k = 1,2, \ldots, m $, and all $ \mathbf{a} \in S $.
\end{corollary}
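A minimal one-variable sketch ($ m = 1 $) of the interpolation statement, using our own example rather than one from the text: take $ S = \{0, 1\} $ and derivatives of order $ j < 2 $ (for first-order derivatives the ordinary and Hasse conventions coincide), so the admissible monomials are $ 1, x, x^2, x^3 $ and the evaluation map is a $ 4 \times 4 $ isomorphism over $ \mathbb{Q} $. Prescribing $ F(0) = 0 $, $ F'(0) = 1 $, $ F(1) = 1 $, $ F'(1) = 0 $ yields the unique interpolant $ F(x) = x + x^2 - x^3 $.

```python
from fractions import Fraction
from math import factorial

S, R = [0, 1], 2                      # grid and number of derivative orders
n = len(S) * R                        # dimension of both sides of the map

def dmono(i, j, a):
    """j-th derivative of x^i evaluated at a (ordinary = Hasse for j <= 1)."""
    if j > i:
        return Fraction(0)
    return Fraction(factorial(i), factorial(i - j)) * Fraction(a) ** (i - j)

# One equation per (point, derivative order); one column per monomial x^i.
rows = [[dmono(i, j, a) for i in range(n)] for a in S for j in range(R)]
target = [Fraction(b) for b in (0, 1, 1, 0)]   # F(0), F'(0), F(1), F'(1)

# Plain Gauss-Jordan elimination (the matrix is invertible by the theorem).
M = [row + [t] for row, t in zip(rows, target)]
for c in range(n):
    p = next(r for r in range(c, n) if M[r][c] != 0)
    M[c], M[p] = M[p], M[c]
    M[c] = [v / M[c][c] for v in M[c]]
    for r in range(n):
        if r != c and M[r][c] != 0:
            M[r] = [a - M[r][c] * b for a, b in zip(M[r], M[c])]

coeffs = [M[r][n] for r in range(n)]
print(coeffs)   # unique interpolant F(x) = x + x^2 - x^3
```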
\subsection{Evaluation codes with consecutive derivatives}
In this subsection, we extend the notion of \textit{evaluation code} from the theory of error-correcting codes (see \cite[Section 2]{weightedRM} and \cite[Section 4.1]{handbook}, for instance) to evaluation codes with consecutive derivatives. By doing so, we generalize \textit{multiplicity codes} \cite{multiplicitycodes}, which have been shown to achieve good parameters in decoding, local decoding and list decoding \cite{kopparty-multiplicity, multiplicitycodes}. We compute the dimensions of the new codes and give a lower bound on their minimum Hamming distance.
\begin{definition} \label{def evaluation codes with derivatives}
Given a decreasing finite set $ \mathcal{J} \subseteq \mathbb{N}^m $ and a set of monomials $ \mathcal{M} \subseteq \mathcal{J}_S $, we define the $ \mathbb{F} $-linear code (that is, the $ \mathbb{F} $-linear vector space)
$$ \mathcal{C}(S, \mathcal{M}, \mathcal{J}) = {\rm Ev} \left( \left\langle \mathcal{M} \right\rangle _\mathbb{F} \right) \subseteq \mathbb{F}^{\# S \cdot \# \mathcal{J}}, $$
where $ {\rm Ev} $ is the evaluation map from Definition \ref{def evaluation map}.
\end{definition}
As in \cite{multiplicitycodes}, we will consider these codes over the alphabet $ \mathbb{F}^{\# \mathcal{J}} $, that is, each evaluation $ \left( F^{(\mathbf{i})} \left( \mathbf{a} \right) \right)_{\mathbf{i} \in \mathcal{J}} \in \mathbb{F}^{\# \mathcal{J}} $, for $ \mathbf{a} \in S $, constitutes one symbol of the alphabet. Thus each codeword has length $ \# S $ over this alphabet. This leads to the following definition of minimum Hamming distance of an $ \mathbb{F} $-linear code:
\begin{definition}
Given an $ \mathbb{F} $-linear code $ \mathcal{C} \subseteq \left( \mathbb{F}^{\# \mathcal{J}} \right) ^{\# S} $, we define its minimum Hamming distance as
$$ d_H(\mathcal{C}) = \min \left\lbrace {\rm wt_H}(\mathbf{c}) : \mathbf{c} \in \mathcal{C}, \mathbf{c} \neq \mathbf{0} \right\rbrace, $$
where, for any $ \mathbf{c} \in \left( \mathbb{F}^{\# \mathcal{J}} \right) ^{\# S} $, $ {\rm wt_H}(\mathbf{c}) $ denotes the number of its non-zero components over the alphabet $ \mathbb{F}^{\# \mathcal{J}} $.
\end{definition}
As a consequence of Theorem \ref{th uniqueness hermite}, we may exactly compute the dimensions of the codes in Definition \ref{def evaluation codes with derivatives} and give a lower bound on their minimum Hamming distance:
\begin{corollary}
The code in Definition \ref{def evaluation codes with derivatives} satisfies that
$$ \dim_\mathbb{F}(\mathcal{C}(S, \mathcal{M}, \mathcal{J})) = \# \mathcal{M}, \quad \textrm{and} $$
$$ d_H(\mathcal{C}(S, \mathcal{M}, \mathcal{J})) \geq \left\lceil \frac{\min \left\lbrace \# \Delta( \left\langle F(\mathbf{x}) \right\rangle _\mathcal{J}) : F(\mathbf{x}) \in \left\langle \mathcal{M} \right\rangle _\mathbb{F} \right\rbrace}{\# \mathcal{J}} \right\rceil . $$
\end{corollary}
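The dimension formula and the block-Hamming metric can be explored by exhaustive search in a tiny case (our own example, assuming Hasse derivatives as in \cite{multiplicitycodes}): the univariate multiplicity code over $ \mathbb{F}_3 $ with $ S = \mathbb{F}_3 $, $ \mathcal{J} = \{0, 1\} $ and monomials $ \mathcal{M} = \{1, x, x^2, x^3\} $. Encoding is injective, matching $ \dim_\mathbb{F} = \# \mathcal{M} = 4 $, and the minimum distance over the alphabet $ \mathbb{F}_3^2 $ turns out to be $ 2 $, since a nonzero polynomial of degree at most $ 3 $ cannot have double roots at two distinct points.

```python
from itertools import product
from math import comb

p, R = 3, 2                                   # field F_3, derivative orders j < 2
k = 4                                         # monomials 1, x, x^2, x^3

def hasse(coeffs, j, a):
    """j-th Hasse derivative of sum_i coeffs[i] x^i, evaluated at a (mod p)."""
    return sum(c * comb(i, j) * pow(a, i - j, p)
               for i, c in enumerate(coeffs) if i >= j) % p

def encode(coeffs):
    # One alphabet symbol (F(a), F^{(1)}(a)) in F_3^2 per evaluation point a.
    return tuple(tuple(hasse(coeffs, j, a) for j in range(R)) for a in range(p))

words = {encode(c) for c in product(range(p), repeat=k)}
assert len(words) == p ** k                   # dim_F(C) = #M = 4

zero = encode((0,) * k)
d = min(sum(sym != (0, 0) for sym in w) for w in words if w != zero)
print(d)
```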
\begin{remark} \label{remark weighted multiplicity codes}
Given a vector of positive weights $ \mathbf{w} = (w_1, w_2, \ldots, w_m) \in \mathbb{N}_+^m $, a positive integer $ r \in \mathbb{N}_+ $, and a set of monomials
$$ \mathcal{M} \subseteq \left\lbrace x_1^{i_1} x_2^{i_2} \cdots x_m^{i_m} : \sum_{j=1}^m \left\lfloor \frac{i_j}{\# S_j} \right\rfloor w_j < r \right\rbrace, $$
we may define, as a particular case of the codes in Definition \ref{def evaluation codes with derivatives}, the corresponding weighted multiplicity code as the $ \mathbb{F} $-linear code
$$ \mathcal{C}(S, \mathcal{M}, \mathbf{w}, r) = {\rm Ev} \left( \left\langle \mathcal{M} \right\rangle _\mathbb{F} \right) \subseteq \left( \mathbb{F}^{{\rm B}(\mathbf{w} ; r)} \right)^{\# S} . $$
Observe that weighted multiplicity codes contain as particular cases classical Reed-Muller codes (see \cite[Section 13.2]{pless}), by choosing $ \mathbf{w} = (r,r, \ldots, r) $ for a given $ r \in \mathbb{N}_+ $, and classical multiplicity codes \cite{multiplicitycodes}, by choosing $ \mathbf{w} = (1,1, \ldots, 1) $ and an arbitrary $ r \in \mathbb{N}_+ $. Therefore, choices of $ \mathbf{w} \in \mathbb{N}^m $ such that $ 1 \leq w_i \leq r $, for $ i =1,2, \ldots, m $, give codes with the same length but intermediate alphabet sizes between those of Reed-Muller and multiplicity codes. This gives the extra flexibility (see \cite[Section 1.2]{multiplicitycodes}) of choosing alphabets of sizes $ \# \left( \mathbb{F}^{{\rm B}(\mathbf{w}; r)} \right) $ (whenever $ \mathbb{F} $ is finite), where
$$ 1 \leq {\rm B}(\mathbf{w}; r) \leq \binom{m+r-1}{m}. $$
\end{remark}
\subsection{Bounds by DeMillo, Lipton, Zippel, Alon and F{\"u}redi}
In this subsection, we obtain a weaker but more concise version of the bound (\ref{general bound}) for a single polynomial, which has as particular cases the bounds by DeMillo and Lipton \cite{demillo}, Zippel \cite[Theorem 1]{zippel-first}, \cite[Proposition 3]{zippel}, and Alon and F{\"u}redi \cite[Theorem 5]{alon-furedi}. We observe that Counterexample 7.4 in \cite{grid} shows that a straightforward extension of these bounds to standard multiplicities as in (\ref{eq in intro 1}) is not possible, in contrast with the bound given by Schwartz in \cite[Lemma 1]{schwartz}, which has been already extended in \cite[Lemma 8]{extensions}.
\begin{theorem} \label{th generalization demillo lipton et al}
For any decreasing finite set $ \mathcal{J} \subseteq \mathbb{N}^m $ and any polynomial $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $, if $ \mathbf{x}^\mathbf{i} = {\rm LM}(F(\mathbf{x})) \in \mathcal{J}_S $, for some monomial ordering, then it holds that
\begin{equation}
\# \left( S \setminus \mathcal{V}_\mathcal{J}(F(\mathbf{x})) \right) \cdot \# \mathcal{J} \geq \# \left\lbrace \mathbf{j} \in \mathcal{J}_S : \mathbf{j} \succeq \mathbf{i} \right\rbrace.
\label{eq bound generalization of demillo-lipton}
\end{equation}
\end{theorem}
\begin{proof}
First, from the bound (\ref{general bound}) and Lemma \ref{lemma auxiliary applications footprint 1}, we obtain that
\begin{equation}
\# \left( S \setminus \mathcal{V}_\mathcal{J}(F(\mathbf{x})) \right) \cdot \# \mathcal{J} \geq \# S \cdot \# \mathcal{J} - \# \Delta( \left\langle F(\mathbf{x}) \right\rangle_\mathcal{J}) = \# \left( \mathcal{J}_S \setminus \Delta( \left\langle F(\mathbf{x}) \right\rangle_\mathcal{J}) \right) ,
\label{eq proof bound demillo-lipton}
\end{equation}
where we consider $ \Delta( \left\langle F(\mathbf{x}) \right\rangle_\mathcal{J}) \subseteq \mathbb{N}^m $ by abuse of notation. As explained in Subsection \ref{subsec interpretation bound}, we may lower bound $ \# \left( \mathcal{J}_S \setminus \Delta( \left\langle F(\mathbf{x}) \right\rangle_\mathcal{J}) \right) $ by the number of multiindices $ \mathbf{j} \in \mathcal{J}_S $ satisfying $ \mathbf{j} \succeq \mathbf{i} $, and we are done.
\end{proof}
The following consequence summarizes the results by DeMillo and Lipton \cite{demillo}, and Zippel \cite[Theorem 1]{zippel-first}, \cite[Proposition 3]{zippel}:
\begin{corollary}[\textbf{\cite{demillo, zippel-first, zippel}}]
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ be such that its degree in the $ j $-th variable is $ d_j \in \mathbb{N} $, for $ j = 1,2, \ldots, m $. If $ d_j < \# S_j $, for $ j = 1,2, \ldots, m $, then the number of non-zeros in $ S $ of $ F(\mathbf{x}) $ is at least
$$ \prod_{j=1}^m \left( \# S_j - d_j \right). $$
\end{corollary}
\begin{proof}
The result is the particular case $ \mathcal{J} = \{ \mathbf{0} \} $ of the previous theorem using any monomial ordering and the facts that $ \mathcal{J}_S = S $ and $ i_j \leq d_j $, for $ j = 1,2, \ldots, m $.
\end{proof}
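As a sanity check, the corollary can be verified on a small grid over the rationals; the polynomial and the sets in the sketch below are chosen arbitrarily for illustration.

```python
from itertools import product

# F(x1, x2) = (x1 - 1)(x2 - 1)(x2 - 2): degree 1 in x1, degree 2 in x2.
def F(x1, x2):
    return (x1 - 1) * (x2 - 1) * (x2 - 2)

S1 = S2 = [0, 1, 2, 3]          # grid S = S1 x S2, viewed inside the rationals
d1, d2 = 1, 2                   # degrees in each variable, d_j < #S_j

nonzeros = sum(1 for a in product(S1, S2) if F(*a) != 0)
bound = (len(S1) - d1) * (len(S2) - d2)   # prod_j (#S_j - d_j)

assert nonzeros >= bound
print(nonzeros, bound)
```

For this particular choice the bound is attained with equality: the non-zeros are exactly the $3 \times 2$ points with $x_1 \neq 1$ and $x_2 \notin \{1,2\}$.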
The following is a similar bound due to Alon and F{\"u}redi \cite[Theorem 5]{alon-furedi}:
\begin{corollary}[\textbf{\cite{alon-furedi}}]
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $. If not all points in $ S $ are zeros of $ F(\mathbf{x}) $, then the number of its non-zeros in $ S $ is at least
$$ \min \left\lbrace \prod_{j=1}^m y_j : 1 \leq y_j \leq \# S_j, \sum_{j=1}^m y_j \geq \sum_{j=1}^m \# S_j - \deg(F(\mathbf{x})) \right\rbrace. $$
\end{corollary}
\begin{proof}
The result follows from Theorem \ref{th generalization demillo lipton et al} as in the previous corollary, taking any monomial ordering and considering $ y_j = \# S_j - i_j $, for $ j = 1,2, \ldots, m $.
\end{proof}
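The Alon-F{\"u}redi bound can be illustrated in the same spirit; in the arbitrary example below the minimum in the bound is again attained.

```python
from itertools import product

S1 = S2 = [0, 1, 2]
F = lambda x1, x2: x1 - x2       # degree 1, not identically zero on the grid
deg = 1

nonzeros = sum(1 for a in product(S1, S2) if F(*a) != 0)
# min { y1*y2 : 1 <= y_j <= #S_j, y1 + y2 >= #S_1 + #S_2 - deg(F) }
bound = min(y1 * y2
            for y1 in range(1, len(S1) + 1)
            for y2 in range(1, len(S2) + 1)
            if y1 + y2 >= len(S1) + len(S2) - deg)

assert nonzeros >= bound
print(nonzeros, bound)
```

Here the non-zeros are the $6$ off-diagonal points, and the minimum of $y_1 y_2$ subject to $y_1 + y_2 \geq 5$ is also $6$.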
We omit the case of weighted multiplicities. In the next section, we will give an extension of the bound given by Schwartz in \cite[Lemma 1]{schwartz} to weighted multiplicities in the sense of (\ref{eq in intro 1}), which is stronger than the bound in Corollary \ref{corollary footprint for weighted multi} in some cases.
We conclude with the case of multiindices bounded on each coordinate separately:
\begin{corollary}
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ with $ \mathbf{x}^\mathbf{i} = {\rm LM}(F(\mathbf{x})) $, $ \mathbf{i} = (i_1, i_2, \ldots, i_m) $, for some monomial ordering. If $ i_j < r_j \# S_j $, for $ j = 1,2, \ldots, m $, then the number $ N $ of elements $ \mathbf{s} \in S $ such that $ F^{(\mathbf{j})}(\mathbf{s}) \neq 0 $, for some $ \mathbf{j} = (j_1, j_2, \ldots, j_m) \in \mathbb{N}^m $ with $ j_k < r_k $, for all $ k = 1,2, \ldots, m $, satisfies
$$ N \cdot \prod_{j=1}^m r_j \geq \prod_{j=1}^m \left( r_j \# S_j - i_j \right). $$
\end{corollary}
\subsection{The Schwartz-Zippel bound on the whole grid}
In the next section, we will give an extension of the bound given by Schwartz in \cite[Lemma 1]{schwartz} for weighted multiplicities, which can be proven in the same way as the extensions to standard multiplicities given in \cite[Lemma 8]{extensions} and \cite[Theorem 5]{weightedRM}. In this subsection, we observe that the case where all points in $ S $ are zeros of at least a given weighted multiplicity follows from Corollary \ref{theorem comb nulls weighted}:
\begin{corollary} \label{corollary weak schwartz on whole grid}
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $, let $ \mathbf{w} = (w_1, w_2, \ldots, w_m) \in \mathbb{N}_+^m $, let $ r \in \mathbb{N}_+ $, and assume that $ s = \# S_1 = \# S_2 = \ldots = \# S_m $. If all points in $ S = S_1 \times S_2 \times \cdots \times S_m $ are zeros of $ F(\mathbf{x}) $ of weighted multiplicity at least $ r $, then
$$ r \# S \leq \deg_\mathbf{w} (F(\mathbf{x})) s^{m-1}. $$
\end{corollary}
\begin{proof}
Assume that the bound does not hold, take $ \mathbf{x}^\mathbf{i} $ such that $ \mid \mathbf{i} \mid_\mathbf{w} = \deg_\mathbf{w}(F(\mathbf{x})) $ and whose coefficient in $ F(\mathbf{x}) $ is not zero, and take a vector $ \mathbf{r} = (r_1, r_2, \ldots, r_m) \in \mathbb{N}^m $ with $ \mid \mathbf{r} \mid_\mathbf{w} \geq r $. Then
$$ s w_1 r_1 + s w_2 r_2 + \cdots + s w_m r_m \geq sr > \deg_\mathbf{w} (F(\mathbf{x})) = \mid \mathbf{i} \mid_\mathbf{w}, $$
hence there exists a $ j $ such that $ r_j \# S_j > i_j $. By Corollary \ref{theorem comb nulls weighted}, some element in $ S $ is not a zero of $ F(\mathbf{x}) $ of weighted multiplicity at least $ r $, which contradicts the assumptions and we are done.
\end{proof}
\section{The Schwartz-Zippel bound for weighted multiplicities}
As we will see in Proposition \ref{proposition zippel implies schwartz}, the bound given by Schwartz in \cite[Lemma 1]{schwartz} can be derived from those given by DeMillo and Lipton \cite{demillo}, and Zippel \cite[Theorem 1]{zippel-first}, \cite[Proposition 3]{zippel}, and is usually referred to as the Schwartz-Zippel bound. This bound has been recently extended to standard multiplicities in \cite[Lemma 8]{extensions}, and further in \cite[Theorem 5]{weightedRM}. In this section, we observe that it may be easily extended to weighted multiplicities (see Definition \ref{def weighted multiplicity}), due to the additivity of weighted order functions. We show the sharpness of this bound and compare it with the bound (\ref{general bound}) with an example, whenever it makes sense to compare both bounds.
\subsection{The bound}
\begin{theorem} \label{th S-Z bound}
Let $ \mathbf{w} = (w_1, w_2, \ldots, w_m ) \in \mathbb{N}_+^m $ be a vector of positive weights, let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ and let $ \mathbf{x}^{\mathbf{i}} = {\rm LM}(F(\mathbf{x})) $, $ \mathbf{i} = (i_1, i_2, \ldots, i_m) $, with respect to the lexicographic ordering. It holds that
\begin{equation}
\sum_{\mathbf{a} \in S} m_{\mathbf{w}}(F(\mathbf{x}), \mathbf{a}) \leq \#S \sum_{j=1}^m \frac{i_j w_j}{\# S_j}.
\label{eq S-Z bound}
\end{equation}
\end{theorem}
When $ w_1 = w_2 = \ldots = w_m = 1 $, observe that \cite[Theorem 5]{weightedRM} is recovered from this theorem, and \cite[Lemma 8]{extensions} is recovered from the next corollary. Observe also that this corollary is stronger than Corollary \ref{corollary weak schwartz on whole grid}.
\begin{corollary}
Let $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ and $ \mathbf{w} \in \mathbb{N}_+^m $. If $ s = \# S_1 = \# S_2 = \ldots = \# S_m $, then
$$ \sum_{\mathbf{a} \in S} m_{\mathbf{w}}(F(\mathbf{x}), \mathbf{a}) \leq \deg_{\mathbf{w}}(F(\mathbf{x})) s^{m-1}. $$
\end{corollary}
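For the unweighted case $ w_1 = w_2 = 1 $, the corollary can be checked on a small example. The sketch below computes the multiplicity of $ F $ at a point $ \mathbf{a} $ as the minimal total degree of a term of $ F(\mathbf{x} + \mathbf{a}) $ (a working assumption consistent with the Hasse-derivative definition in characteristic $0$); the polynomial and grid are chosen arbitrarily.

```python
from itertools import product
from math import comb

# F = (x1 - x2)^2 (x1 - 1), stored as {(i1, i2): coefficient}:
# x1^3 - x1^2 - 2 x1^2 x2 + 2 x1 x2 + x1 x2^2 - x2^2
F = {(3, 0): 1, (2, 0): -1, (2, 1): -2, (1, 1): 2, (1, 2): 1, (0, 2): -1}

def shift(poly, a):
    """Coefficients of F(x + a), via the binomial theorem."""
    out = {}
    for (i, j), c in poly.items():
        for p in range(i + 1):
            for q in range(j + 1):
                out[(p, q)] = out.get((p, q), 0) + \
                    c * comb(i, p) * comb(j, q) * a[0] ** (i - p) * a[1] ** (j - q)
    return {k: v for k, v in out.items() if v != 0}

def mult(poly, a):
    """Multiplicity at a = min total degree of a nonzero term of F(x + a)."""
    s = shift(poly, a)
    return min(i + j for (i, j) in s) if s else float("inf")

S = [0, 1, 2]
total = sum(mult(F, a) for a in product(S, S))
deg, s, m = 3, len(S), 2
assert total <= deg * s ** (m - 1)
print(total, deg * s ** (m - 1))
```

The multiplicities are $2$ at $(0,0)$ and $(2,2)$, $3$ at $(1,1)$, and $1$ at $(1,0)$ and $(1,2)$, so the total is $9 = \deg(F)\, s^{m-1}$ and the bound is attained.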
To prove Theorem \ref{th S-Z bound}, we need an auxiliary lemma, whose proof can be directly translated from those of \cite[Lemma 5]{extensions} and \cite[Corollary 7]{extensions}:
\begin{lemma} \label{lemma SZ}
If $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ and $ \mathbf{a} = (a_1, a_2, \ldots, a_m) \in \mathbb{F}^m $, then
\begin{enumerate}
\item
$ m_{\mathbf{w}} \left( F^{(\mathbf{i})}(\mathbf{x}), \mathbf{a} \right) \geq m_{\mathbf{w}}(F(\mathbf{x}), \mathbf{a}) - \mid\mathbf{i}\mid_{\mathbf{w}} $, for all $ \mathbf{i} \in \mathbb{N}^m $, and
\item
$ m_{\mathbf{w}} \left( F(\mathbf{x}), \mathbf{a} \right) \leq m_{w_m}(F(a_1,a_2, \ldots, a_{m-1},x_m), a_m) $.
\end{enumerate}
\end{lemma}
We may now prove Theorem \ref{th S-Z bound}. We follow closely the steps given in the proof of \cite[Lemma 8]{extensions}.
\begin{proof}[Proof of Theorem \ref{th S-Z bound}]
We will prove the result by induction on $ m $, where the case $ m = 1 $ follows from (\ref{eq in intro 1}). Now fix $ m > 1 $. We may assume without loss of generality that $ x_1 \prec_l x_2 \prec_l \ldots \prec_l x_m $, where $ \preceq_l $ is the lexicographic ordering. Write $ \mathbf{x}^\prime = (x_1, x_2, \ldots, x_{m-1}) $. There are unique polynomials $ F_j(\mathbf{x}^\prime) \in \mathbb{F}[\mathbf{x}^\prime] $, for $ j = 0,1, \ldots, t $, such that
$$ F(\mathbf{x}) = \sum_{j=0}^t F_j(\mathbf{x}^\prime) x_m^j, $$
where $ {\rm LM}(F(\mathbf{x})) = {\rm LM}(F_t(\mathbf{x}^\prime)) x_m^t $. Let $ \mathbf{a} = (a_1, a_2, \ldots, a_m) \in S $ and write $ \mathbf{a}^\prime = (a_1, a_2, \ldots, a_{m-1}) $ and $ \mathbf{w}^\prime = (w_1, w_2, \ldots, $ $ w_{m-1}) $. Take $ \mathbf{k} \in \mathbb{N}^{m-1} $ such that $ \mid\mathbf{k}\mid_{\mathbf{w}^\prime} = m_{\mathbf{w}^\prime}(F_t(\mathbf{x}^\prime), \mathbf{a}^\prime) $ and $ F_t^{(\mathbf{k})}(\mathbf{a}^\prime) \neq 0 $. By the previous lemma, we see that
\begin{equation*}
\begin{split}
m_{\mathbf{w}}(F(\mathbf{x}), \mathbf{a}) & \leq \mid(\mathbf{k},0)\mid_{\mathbf{w}} + m_{\mathbf{w}} \left( F^{(\mathbf{k},0)}(\mathbf{x}), \mathbf{a} \right) \\
& \leq m_{\mathbf{w}^\prime}(F_t(\mathbf{x}^\prime), \mathbf{a}^\prime) + m_{w_m} \left( F^{(\mathbf{k},0)}(\mathbf{a}^\prime, x_m), a_m \right) .
\end{split}
\end{equation*}
Summing these inequalities over all $ a_m \in S_m $ and applying the case $ m=1 $, we obtain that
\begin{equation*}
\sum_{a_m \in S_m} m_{\mathbf{w}}(F(\mathbf{x}), \mathbf{a}) \leq m_{\mathbf{w}^\prime}(F_t(\mathbf{x}^\prime), \mathbf{a}^\prime) \# S_m + w_mt.
\end{equation*}
Using this last inequality, summing over $ a_i \in S_i $, for $ i = 1,2, \ldots, m-1 $, and applying the case of $ m-1 $ variables, it follows that
\begin{equation*}
\begin{split}
\sum_{\mathbf{a} \in S} m_{\mathbf{w}}(F(\mathbf{x}), \mathbf{a}) & \leq \sum_{a_1 \in S_1} \cdots \sum_{a_{m-1} \in S_{m-1}} m_{\mathbf{w}^\prime}(F_t(\mathbf{x}^\prime), \mathbf{a}^\prime) \# S_m + w_mt \frac{\# S}{\#S_m} \\
& \leq \sum_{j=1}^{m-1} w_ji_j \frac{\# S}{\# S_j} + w_mt \frac{\# S}{\#S_m},
\end{split}
\end{equation*}
and the result follows.
\end{proof}
\subsection{Sharpness of the bound}
In this subsection, we prove the sharpness of the bound (\ref{eq S-Z bound}). The proof can be translated word for word from that of \cite[Proposition 7]{moreresults}, so we only present a sketch:
\begin{proposition}
For all finite sets $ S_1, S_2, \ldots, S_m \subseteq \mathbb{F} $, $ S = S_1 \times S_2 \times \cdots \times S_m $, all vectors of positive weights $ \mathbf{w} = (w_1, w_2, \ldots, w_m) \in \mathbb{N}_+^m $ and all $ \mathbf{i} = (i_1, i_2, \ldots, i_m) \in \mathbb{N}^m $, there exists a polynomial $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ such that $ \mathbf{x}^\mathbf{i} = {\rm LM}(F(\mathbf{x})) $ with respect to the lexicographic ordering, and such that
$$ \sum_{\mathbf{a} \in S} m_{\mathbf{w}}(F(\mathbf{x}), \mathbf{a}) = \#S \sum_{j=1}^m \frac{i_j w_j}{\# S_j}. $$
\end{proposition}
\begin{proof}[Sketch of proof]
Denote $ s_j = \# S_j $ and $ S_j = \left\lbrace a^{(j)}_1, a^{(j)}_2, \ldots, a^{(j)}_{s_j} \right\rbrace $, and choose $ r^{(j)}_k \in \mathbb{N} $ such that $ i_j = r^{(j)}_1 + r^{(j)}_2 + \cdots + r^{(j)}_{s_j} $, for $ k = 1,2, \ldots, s_j $ and $ j = 1,2, \ldots, m $. Now define
$$ F(\mathbf{x}) = \prod_{j = 1}^m \prod_{k = 1}^{s_j} \left( x_j - a^{(j)}_k \right)^{r^{(j)}_k}. $$
Now, fixing integers $ 1 \leq k_j \leq s_j $, for $ j = 1,2, \ldots, m $, translating the point $ \left( a^{(1)}_{k_1}, a^{(2)}_{k_2}, \ldots, \right. $ $ \left. a^{(m)}_{k_m} \right) $ to the origin $ \mathbf{0} $, and using the Gr{\"o}bner basis from Corollary \ref{corollary groebner weighted}, we see that
$$ m_\mathbf{w} \left( F \left( \mathbf{x} \right), \left( a^{(1)}_{k_1}, a^{(2)}_{k_2}, \ldots, a^{(m)}_{k_m} \right) \right) = r^{(1)}_{k_1} w_1 + r^{(2)}_{k_2} w_2 + \cdots + r^{(m)}_{k_m} w_m, $$
for all $ k_j = 1,2, \ldots, s_j $ and all $ j = 1,2, \ldots, m $. The result then follows by summing these multiplicities.
\end{proof}
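The construction in the sketch is easy to test numerically for small data. The sketch below assumes that the weighted multiplicity of $ F $ at $ \mathbf{a} $ equals the minimal $ \mathbf{w} $-weighted degree of a term of $ F(\mathbf{x} + \mathbf{a}) $; all concrete choices ($ \mathbf{w} $, the grid, the splitting of $ \mathbf{i} $) are arbitrary.

```python
from itertools import product
from math import comb
from fractions import Fraction

w = (2, 3)                       # positive weights
S1 = S2 = [0, 1]
# Target leading exponent i = (2, 1); split i_1 = 1 + 1 over S1 and i_2 = 1 + 0
# over S2, so F = (x1 - 0)(x1 - 1)(x2 - 0) = x1^2 x2 - x1 x2.
F = {(2, 1): 1, (1, 1): -1}

def shift(poly, a):
    """Coefficients of F(x + a), via the binomial theorem."""
    out = {}
    for (i, j), c in poly.items():
        for p in range(i + 1):
            for q in range(j + 1):
                out[(p, q)] = out.get((p, q), 0) + \
                    c * comb(i, p) * comb(j, q) * a[0] ** (i - p) * a[1] ** (j - q)
    return {k: v for k, v in out.items() if v != 0}

def wmult(poly, a):
    """Weighted multiplicity, assumed = min w-weighted degree of F(x + a)."""
    s = shift(poly, a)
    return min(i * w[0] + j * w[1] for (i, j) in s) if s else float("inf")

total = sum(wmult(F, a) for a in product(S1, S2))
# Right-hand side of the proposition: #S * sum_j i_j w_j / #S_j
rhs = len(S1) * len(S2) * (Fraction(2 * w[0], len(S1)) + Fraction(1 * w[1], len(S2)))
assert total == rhs
print(total)
```

The four grid points contribute weighted multiplicities $5, 2, 5, 2$, summing to $14 = \#S \sum_j i_j w_j / \#S_j$, as the proposition predicts.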
\subsection{Comparison with the footprint bound}
In this subsection, we will compare the bounds (\ref{general bound}) and (\ref{eq S-Z bound}) whenever it makes sense to do so. To that end, we will write them as follows: fix a vector of positive weights $ \mathbf{w} = (w_1, w_2, \ldots, w_m) \in \mathbb{N}_+^m $, a positive integer $ r \in \mathbb{N}_+ $, and a polynomial $ F(\mathbf{x}) \in \mathbb{F}[\mathbf{x}] $ such that $ \mathbf{x}^\mathbf{i} = {\rm LM}(F(\mathbf{x})) $, $ \mathbf{i} = (i_1, i_2, \ldots, i_m) $, with respect to the lexicographic ordering. We first consider the footprint bound as in Corollary \ref{corollary footprint for weighted multi}:
\begin{equation}
\# \mathcal{V}_{\geq r, \mathbf{w}}(F(\mathbf{x})) \cdot {\rm B}(\mathbf{w}; r) \leq \# \Delta \left( \left\langle \left\lbrace F(\mathbf{x}) \right\rbrace \bigcup \left\lbrace \prod_{j=1}^m G_j(x_j)^{r_j} : \sum_{j=1}^m r_jw_j \geq r \right\rbrace \right\rangle \right).
\label{eq comparison 1st bound}
\end{equation}
And next we consider the bound (\ref{eq S-Z bound}) as follows:
\begin{equation}
\# \mathcal{V}_{\geq r, \mathbf{w}}(F(\mathbf{x})) \cdot r \leq \#S \sum_{j=1}^m \frac{i_j w_j}{\# S_j}.
\label{eq comparison 2nd bound}
\end{equation}
First, we observe that the bound (\ref{eq comparison 1st bound}) holds for any monomial ordering, not only the lexicographic one, in contrast with (\ref{eq comparison 2nd bound}). Second, we observe that (\ref{eq comparison 2nd bound}) gives no information whereas (\ref{eq comparison 1st bound}) does, whenever
\begin{equation}
\sum_{j=1}^m \left\lfloor \frac{i_j}{\# S_j} \right\rfloor w_j < r \leq \sum_{j=1}^m \frac{i_j w_j}{\# S_j},
\label{eq region where no information}
\end{equation}
by the discussion in Subsection \ref{subsec interpretation bound}.
Next, we observe that when we do not count multiplicities, that is, $ w_1 = w_2 = \ldots = w_m = r = 1 $, then (\ref{eq comparison 1st bound}) implies (\ref{eq comparison 2nd bound}) via Theorem \ref{th generalization demillo lipton et al}:
\begin{proposition} \label{proposition zippel implies schwartz}
If $ w_1 = w_2 = \ldots = w_m = r = 1 $, that is, $ \mathcal{J} = \{ \mathbf{0} \} $, it holds that $ {\rm B}(\mathbf{w}; r) = 1 $ and
$$ \# \Delta \left( \left\langle F(\mathbf{x}), G_1(x_1), G_2(x_2), \ldots, G_m(x_m) \right\rangle \right) \leq \# S - \prod_{j=1}^m \left( \# S_j - i_j \right) \leq \#S \sum_{j=1}^m \frac{i_j}{\# S_j}. $$
In particular, (\ref{eq comparison 1st bound}) implies (\ref{eq comparison 2nd bound}) in this case.
\end{proposition}
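The second inequality in the proposition is elementary; it can be checked by brute force over all admissible exponents on an arbitrarily chosen grid:

```python
from itertools import product
from fractions import Fraction

# Check  #S - prod_j(#S_j - i_j)  <=  #S * sum_j i_j / #S_j  for all 0 <= i_j < #S_j.
sizes = (3, 4)
grid = sizes[0] * sizes[1]
for i1, i2 in product(range(sizes[0]), range(sizes[1])):
    lhs = grid - (sizes[0] - i1) * (sizes[1] - i2)
    rhs = grid * (Fraction(i1, sizes[0]) + Fraction(i2, sizes[1]))
    assert lhs <= rhs
print("ok")
```

Indeed, expanding gives $\#S - \prod_j(\#S_j - i_j) = i_1 \#S_2 + i_2 \#S_1 - i_1 i_2 \leq i_1 \#S_2 + i_2 \#S_1$ for $m = 2$.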
Moreover, when $ m = 1 $ and we count multiplicities, all bounds coincide, giving (\ref{eq in intro 2}). In the following example we show that this is not the case in general. As we will see, each bound, (\ref{eq comparison 1st bound}) and (\ref{eq comparison 2nd bound}), can be tighter than the other one in different cases, hence complementing each other:
\begin{example}\label{ex1}
Consider $ m = 2 $, $ w_1 = 2 $, $ w_2 = 3 $, $ r = 5 $ and $ \# S_1 = \# S_2 = 4 $. Thus we have that
$$ \mathcal{J} = \{(0,0), (1,0),(0,1), (2,0)\}, \quad \textrm{and} $$
$$ \mathcal{J}_S = \left( [0,3] \times [0,11] \right) \cup \left( [0,7] \times [0,3] \right) . $$
Consider all pairs $ (i_1, i_2) \in \mathcal{J}_S $ and polynomials $ F(x_1,x_2) $ such that $ {\rm LM}(F(x_1,x_2)) = x_1^{i_1} x_2^{i_2} $, with respect to the lexicographic ordering. In Figure \ref{figone}, we show the upper bounds on the number of zeros of $ F(x_1, x_2) $ of weighted multiplicity at least $ 5 $ given by (\ref{eq comparison 1st bound}) (table above) and (\ref{eq comparison 2nd bound}) (table below), respectively. As is clear from the figure, in some regions of the set $ \mathcal{J}_S $, the first bound is tighter than the second (bold numbers in the table above) and vice versa (bold numbers in the table below). Furthermore, the first bound gives non-trivial information in the region given by (\ref{eq region where no information}), where the second does not (depicted by dashes).
\end{example}
\begin{figure}[h]
\begin{center}
\begin{tabular}{r|rrrrrrrrrrrr}
$x_1^7$ & \bf{15}&\bf{15}&\bf{15}&\bf{15}\\
$x_1^6$ & 14&\bf{14}&\bf{15}&\bf{15}\\
$x_1^5$ & 13&13&\bf{14}&\bf{15}\\
$x_1^4$ & 12&13&14&15\\
$x_1^3$ & 9&10&11&12&14&\bf{14}&\bf{14}&\bf{14}&\bf{15}&\bf{15}&\bf{15}&\bf{15}\\
$x_1^2$ & 6&7&9&10&12&12&\bf{13}&\bf{13}&\bf{14}&\bf{14}&\bf{15}&\bf{15}\\
$x_1$ & 3&4&6&8&10&10&\bf{11}&\bf{12}&\bf{13}&\bf{13}&\bf{14}&\bf{15}\\
$1$ & 0&2&4&6&8&9&10&11&12&\bf{13}&\bf{14}&\bf{15}\\
\hline
& $1$ & $x_2$ & $x_2^2$ & $x_2^3$ & $x_2^4$ & $x_2^5$ & $x_2^6$ & $x_2^7$ & $x_2^8$ & $x_2^9$ & $x_2^{10}$ & $x_2^{11}$ \\
& \\
$x_1^7$ & --&--&--&--\\
$x_1^6$ & 14&--&--&--\\
$x_1^5$ & \bf{12}&13&15&--\\
$x_1^4$ & \bf{9}&\bf{11}&\bf{12}&\bf{14}\\
$x_1^3$ & \bf{7}&\bf{8}&\bf{10}&12&\bf{13}&15&--&--&--&--&--&--\\
$x_1^2$ & \bf{4}&\bf{6}&\bf{8}&\bf{9}&\bf{11}&12&14&--&--&--&--&--\\
$x_1$ & \bf{2}&4&\bf{5}&\bf{7}&\bf{8}&10&12&13&15&--&--&--\\
$1$ & 0&\bf{1}&\bf{3}&\bf{4}&\bf{6}&\bf{8}&\bf{9}&11&12&14&--&--\\
\hline
& $1$ & $x_2$ & $x_2^2$ & $x_2^3$ & $x_2^4$ & $x_2^5$ & $x_2^6$ & $x_2^7$ & $x_2^8$ & $x_2^9$ & $x_2^{10}$ & $x_2^{11}$
\end{tabular}
\end{center}
\caption{Upper bounds on the number of zeros of weighted multiplicity at least $ r = 5 $ when $ w_1 = 2 $, $ w_2 = 3 $ and $ \#S_1 = \#S_2 = 4 $, from Example \ref{ex1}. }
\label{figone}
\end{figure}
\section*{Appendix: Proof of Lemma \ref{lemma hermite}} \label{appendix}
In this appendix, we give the proof of Lemma \ref{lemma hermite}. We first treat the univariate case ($ m = 1 $) in the classical form. The proof for Hasse derivatives can be directly translated from the result for classical derivatives:
\begin{lemma}
Let $ a_1, a_2, \ldots, a_n \in \mathbb{F} $ be pairwise distinct and let $ M \in \mathbb{N}_+ $. There exist polynomials $ F_{i,j}(x) \in \mathbb{F}[x] $ such that
$$ F_{i,j}^{(k)}(a_l) = \delta_{i,k} \delta_{j,l}, $$
for all $ i,k = 0,1,2, \ldots, M $ and all $ j,l = 1,2, \ldots, n $, where $ \delta $ denotes the Kronecker delta.
\end{lemma}
Now, since $ \mathcal{J} $ is finite, we may fix an integer $ M $ such that $ \mathcal{J} \subseteq [0,M]^m $. Similarly, we may find a finite set $ S \subseteq \mathbb{F} $ such that $ T \subseteq S^m $. Denote then $ s = \# S $ and $ S = \{ a_1, a_2, \ldots, a_s \} $, and let $ F_{i,j,k}(x_k) \in \mathbb{F}[x_k] $ be polynomials as in the previous lemma in each variable $ x_k $, for $ i = 0,1,2, \ldots, M $, $ j = 1,2, \ldots, s $ and $ k = 1,2, \ldots, m $. Define now
$$ F_{\mathbf{i},\mathbf{j}}(\mathbf{x}) = F_{i_1,j_1,1}(x_1) F_{i_2,j_2,2}(x_2) \cdots F_{i_m,j_m,m}(x_m) \in \mathbb{F}[\mathbf{x}], $$
for $ \mathbf{i} = (i_1, i_2, \ldots, i_m) \in [0,M]^m $ and $ \mathbf{j} = (j_1, j_2, \ldots, j_m) \in [1,s]^m $. By the previous lemma and Lemma \ref{lemma leibniz formula}, we see that
$$ F_{\mathbf{i},\mathbf{j}}^{(\mathbf{k})} \left( a_{l_1}, a_{l_2}, \ldots, a_{l_m} \right) = \left( \delta_{i_1, k_1} \delta_{i_2, k_2} \cdots \delta_{i_m, k_m} \right) \left( \delta_{j_1, l_1} \delta_{j_2, l_2} \cdots \delta_{j_m, l_m} \right) = \delta_{\mathbf{i}, \mathbf{k}} \delta_{\mathbf{j}, \mathbf{l}}, $$
for all $ \mathbf{i}, \mathbf{k} \in [0,M]^m $ and all $ \mathbf{j}, \mathbf{l} \in [1,s]^m $. Finally, given values $ b_{\mathbf{i},\mathbf{j}} \in \mathbb{F} $, for $ \mathbf{i} \in \mathcal{J} $ and $ \mathbf{j} \in T $, define
$$ F(\mathbf{x}) = \sum_{\mathbf{i} \in \mathcal{J}} \sum_{\mathbf{j} \in T} b_{\mathbf{i},\mathbf{j}} F_{\mathbf{i}, \mathbf{j}}(\mathbf{x}) \in \mathbb{F}[\mathbf{x}]. $$
We see that $ {\rm Ev}(F(\mathbf{x})) = ((b_{\mathbf{i},\mathbf{j}})_{\mathbf{i} \in \mathcal{J}})_{\mathbf{j} \in T} $, and we are done.
\section*{Acknowledgement}
The authors gratefully acknowledge the support from The Danish Council for Independent Research (Grant No. DFF-4002-00367). The second listed author also gratefully acknowledges the support from The Danish Council for Independent Research via an EliteForsk-Rejsestipendium (Grant No. DFF-5137-00076B).
\bibliographystyle{plainnat}
\section{Introduction}
\label{sec:introduction}
The conjugate gradient method, proposed by Hestenes and Stiefel \cite{Hestenes-Stiefel}, is an effective method for solving the linear system
\begin{align}
\label{eqn:Axb}
Ax=b,
\end{align}
where $A$ is symmetric positive definite. In the original paper \cite{Hestenes-Stiefel}, it is shown that the iterates in CG possess neat properties and that CG has connections with certain mathematical objects, such as orthogonal polynomials and continued fractions. Since then, many aspects of CG have been explored in a large body of literature, including the convergence rate \cite{convergencerate-VanderSluis,convergencerate-Sleijpen,convergencerate-Greenbaum}, preconditioners \cite{preconditon-Johnson,preconditon-Chan}, the related polynomials \cite{rootspolynomials-Manteuffel}, CG in Hilbert space \cite{CG-hilbert-Herzog-Sachs}, and so on. There are also many works exploring connections between CG and other algorithms, such as the quasi-Newton method BFGS \cite{BFGS-Nazareth,BFGS-Forsgren-Odland} and the Lanczos method \cite{Lanczos-Meurant}. Such connections are interesting to the authors and became the main motivation of this paper. The purpose of this paper is to provide a further reading material for CG beyond textbooks.
In order to serve as a friendly further reading material alongside textbooks, the exposition of the paper is made as detailed and easy to understand as possible. Claims are usually derived starting from elementary calculations. Only basic calculus, basic numerical linear algebra and basic Hilbert space theory are required as preliminaries.
\section{Connections with other methods}
\label{set:connections}
There are several ways to derive CG, offering different viewpoints. In this paper, CG will be derived by the conjugate direction method, which is often done in textbooks. Connections of CG with other iterative methods are demonstrated by the fact that with the same initial guess, the iterates of CG are identical to the iterates generated by these methods.
\subsection{From the viewpoint of optimization}
Given a symmetric positive definite matrix $A$, solving $Ax=b$ is equivalent to minimizing $J(x)=\frac{1}{2}x^TAx-b^Tx$. Suppose there is an initial guess $x_0$. The corresponding residual is $r_0=-\nabla J(x_0)=b-Ax_0$.
\begin{proposition}
At $x_k$, if $ J(x)$ is minimized along the direction $d_k$, then the optimal step size $\alpha_k$ and the resulting decrease are given by
\begin{align}
\label{eqn:descent-quantity}
\alpha_k = \frac{r_k^Td_k}{d_k^TAd_k},\quad
J(x_{k+1}) = J(x_k) - \frac{\left(r_k^Td_k\right)^2}{2d_k^TAd_k},
\end{align}
where $r_k=b-Ax_k$ is the residual at $x_k$.
\end{proposition}
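A quick numerical check of this proposition, with the matrix, right-hand side, initial guess and search direction all chosen arbitrarily:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

def J(A, b, x):
    # J(x) = x^T A x / 2 - b^T x
    return 0.5 * dot(x, matvec(A, x)) - dot(b, x)

A = [[3.0, 1.0], [1.0, 2.0]]   # symmetric positive definite
b = [1.0, 1.0]
x0 = [0.5, -1.0]
d = [1.0, 2.0]                 # an arbitrary search direction

r = [bi - axi for bi, axi in zip(b, matvec(A, x0))]   # residual at x0
alpha = dot(r, d) / dot(d, matvec(A, d))              # optimal step size
x1 = [xi + alpha * di for xi, di in zip(x0, d)]
drop = dot(r, d) ** 2 / (2 * dot(d, matvec(A, d)))    # predicted decrease

assert abs(J(A, b, x1) - (J(A, b, x0) - drop)) < 1e-12
print("ok")
```

The identity follows by substituting the optimal $\alpha$ into the quadratic $J(x_0+\alpha d) = J(x_0) - \alpha\, r^Td + \frac{\alpha^2}{2} d^TAd$.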
\subsubsection{Connection with conjugate direction method}
The CG method can be considered as a special case of conjugate direction method, just as mentioned in the original paper
\cite{Hestenes-Stiefel}. The first conjugate direction is taken as the steepest descent direction $r_0$, i.e., $p_0=r_0$. Then
\begin{align}
\label{eqn:x1}
x_1=x_0+\alpha_0 p_0,\quad \alpha_0=\frac{r_0^Tp_0}{p_0^TAp_0},\\
\label{eqn:r1}
r_1=b-Ax_1=r_0-\alpha_0Ap_0.
\end{align}
At $x_1$, instead of marching along the steepest descent direction $r_1$, a direction $p_1$ conjugate to $p_0$ is constructed, based on the information at hand. That is
\begin{align}
p_1=r_1+\beta_0 p_0,\quad \beta_0=-\frac{p_0^TAr_1}{p_0^TAp_0}.
\end{align}
Then
\begin{align}
x_2=x_1+\alpha_1 p_1,\quad \alpha_1=\frac{r_1^Tp_1}{p_1^TAp_1},\\
r_2=b-Ax_2=r_1-\alpha_1Ap_1.
\end{align}
Repeating this process, at $x_k$, gives the CG algorithm,
\begin{align}
\label{eqn:cd-iter-1}
& p_k=r_k+\beta_{k-1} p_{k-1}, \quad \beta_{k-1}=-\frac{p_{k-1}^TAr_k}{p_{k-1}^TAp_{k-1}},\\
\label{eqn:cd-iter-2}
& x_{k+1}=x_k+\alpha_k p_k, \quad \alpha_k=\frac{r_k^Tp_k}{p_k^TAp_k},\\
\label{eqn:cd-iter-3}
& r_{k+1}=b-Ax_{k+1}=r_k-\alpha_kAp_k.
\end{align}
The resulting residuals $r_0,r_1,\dots,r_k$ are orthogonal to each other, and the directions $p_0,p_1,\dots,p_k$ are conjugate to each other. By these orthogonality and conjugacy conditions, there are alternative formulas for $\alpha_k$ and $\beta_k$,
\begin{align}
\label{eqn:cd-beta-alternative}
\beta_{k-1} & =-\frac{p_{k-1}^TAr_k}{p_{k-1}^TAp_{k-1}}=-\frac{p_{k-1}^TAr_k}{p_{k-1}^TAr_{k-1}}=-\frac{(r_{k-1}-r_k)^Tr_k}{(r_{k-1}-r_k)^Tr_{k-1}}=\frac{r_k^Tr_k}{r_{k-1}^Tr_{k-1}},\\
\label{eqn:cd-alpha-alternative}
\alpha_k & =\frac{r_k^Tr_k}{p_k^TAp_k}=\frac{p_k^Tr_k}{p_k^TAp_k}=\frac{p_k^Tr_0}{p_k^TAp_k}.
\end{align}
If $x_0=0$ is chosen, then $r_0=b$ and $\alpha_k=\frac{p_k^Tb}{p_k^TAp_k}$. And if the process terminates at $k=n$, then
\begin{align}
\label{eqn:cd-exact-solution}
x = \alpha_0 p_0 + \alpha_1 p_1 + \ldots + \alpha_{n-1} p_{n-1} = \sum_{i=0}^{n-1}\frac{p_i^Tb}{p_i^TAp_i}p_i = \left(\sum_{i=0}^{n-1}\frac{p_i p_i^T}{p_i^TAp_i}\right) b.
\end{align}
Therefore an explicit expression for $A^{-1}$ results,
\begin{align}
\label{eqn:cd-A-inverse}
A^{-1} = \sum_{i=0}^{n-1}\frac{p_i p_i^T}{p_i^TAp_i}.
\end{align}
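Both the finite termination and the expression (\ref{eqn:cd-A-inverse}) can be observed numerically. The following plain-Python sketch runs $n$ CG steps on an arbitrarily chosen $3\times3$ symmetric positive definite system, accumulating $\sum_i p_ip_i^T/(p_i^TAp_i)$ along the way:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]  # symmetric positive definite
b = [1.0, 2.0, 3.0]
n = 3

x = [0.0] * n
r = b[:]                  # r0 = b - A x0 = b for x0 = 0
p = r[:]                  # p0 = r0
Ainv = [[0.0] * n for _ in range(n)]   # accumulates sum_i p_i p_i^T / (p_i^T A p_i)

for _ in range(n):
    Ap = matvec(A, p)
    pAp = dot(p, Ap)
    for i in range(n):
        for j in range(n):
            Ainv[i][j] += p[i] * p[j] / pAp
    alpha = dot(r, p) / pAp
    x = [xi + alpha * pi for xi, pi in zip(x, p)]
    r = [ri - alpha * api for ri, api in zip(r, Ap)]
    beta = -dot(Ap, r) / pAp           # beta_{k-1} = -p^T A r_new / p^T A p
    p = [ri + beta * pi for ri, pi in zip(r, p)]

residual = max(abs(bi - axi) for bi, axi in zip(b, matvec(A, x)))
AAinv = [[dot(A[i], [Ainv[k][j] for k in range(n)]) for j in range(n)] for i in range(n)]
identity_error = max(abs(AAinv[i][j] - float(i == j)) for i in range(n) for j in range(n))
assert residual < 1e-9 and identity_error < 1e-9
print("ok")
```

After $n$ steps, $x$ solves $Ax=b$ to machine precision and the accumulated matrix multiplies $A$ to the identity, as (\ref{eqn:cd-A-inverse}) asserts.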
\subsubsection{Connection with subspace optimization}
The CG method can be considered as a two-dimensional subspace minimization method for the objective function $ J(x)$, see for example \cite{subspaceoptimization-Xu}. At the beginning of subspace minimization, there is only one direction $r_0$ available. Therefore we proceed as (\ref{eqn:x1})-(\ref{eqn:r1}) to obtain $r_1$. Now consider minimizing $ J(x)$ in the two-dimensional affine subspace $\pi_2=x_1+\mbox{span}\{r_1,r_0\}$. In order to solve the subproblem, let $x=x_1+\xi r_1+\eta r_0$ and consider the function $h(\xi,\eta)= J(x_1+\xi r_1+\eta r_0)$, where $(\xi,\eta)\in \mathbb{R}^2$. Forcing $\frac{\partial h}{\partial \xi}=0$ and $\frac{\partial h}{\partial \eta}=0$ gives
\begin{align}
\label{eqn:subspace-eqn1}
r_1^TAr_1\cdot\xi + r_1^TAr_0\cdot\eta & = r_1^Tr_1,\\
\label{eqn:subspace-eqn2}
r_1^TAr_0\cdot\xi + r_0^TAr_0\cdot\eta & = 0,
\end{align}
where the orthogonality $r_1^Tr_0=0$ is used. From (\ref{eqn:subspace-eqn1})-(\ref{eqn:subspace-eqn2}), there holds
\begin{align}
\label{eqn:subspace-soln1}
\eta & = -\frac{r_1^TAr_0}{r_0^TAr_0}\xi,\\
\label{eqn:subspace-soln2}
\xi & = \frac{r_1^Tr_1}{r_1^TAr_1-(r_1^TAr_0)^2/r_0^TAr_0}.
\end{align}
Denote $\tilde{p}_1=r_1+\frac{\eta}{\xi}r_0=r_1-\frac{r_1^TAr_0}{r_0^TAr_0}r_0$. It is easy to see that $\tilde{p}_1=p_1$, the $p_1$ in the CG method. Thus $\tilde{p}_1^TAr_0=0$ as in the CG method. A little calculation shows that
\begin{align}
\xi & = \frac{r_1^Tr_1}{r_1^TAr_1-(r_1^TAr_0)^2/r_0^TAr_0}=\frac{r_1^Tp_1}{p_1^TAp_1}=\alpha_1,
\end{align}
where $\alpha_1$ is the step size in the CG method.
The minimizer of the subproblem is
\begin{align}
\nonumber
\tilde{x}_2 & = x_1 + \xi\left(r_1+\frac{\eta}{\xi}r_0\right) = x_1 + \alpha_1 p_1.
\end{align}
From the above formula, it can be seen that $\tilde{x}_2=x_2$ and that $x_2$ is the minimizer of $J(x)$ both in the two-dimensional affine subspace $\pi_2$ and in the direction $p_1$. The next residual vector arises
$$
r_2=b-Ax_2=r_1-\frac{r_1^Tp_1}{p_1^TAp_1}Ap_1,
$$
which satisfies the orthogonality conditions $r_2^Tr_1=0$ and $r_2^Tp_1=0$. Now there are two options for the new two-dimensional affine subspace in which $J(x)$ is to be minimized, i.e., $\pi_2=x_2+\mbox{span}\{r_2,r_1\}$ and $\pi_2=x_2+\mbox{span}\{r_2,p_1\}$. Which subspace should be chosen? In order to answer this question, consider a more general subproblem, i.e., at $x_k$, to minimize $ J(x)$ in a two-dimensional subspace $\mbox{span}\{u,v\}$, with the orthogonality conditions $u^Tv=0$ and $v^Tr_k=0$,
\begin{align}
\label{eqn:general-subproblem}
\min_{\xi,\eta} J(x_k+\xi u+\eta v).
\end{align}
By similar arguments as the case $\pi_2=x_1+\mbox{span}\{r_1,r_0\}$, the following proposition can be derived.
\begin{proposition}
\label{proposi:general-subproblem-orth}
The minimizer $\tilde{x}_{k+1}=x_k+\xi u+\eta v$ of $J(x_k+\xi u+\eta v)$ is also the minimizer of $ J(x)$ at $x_k$ along the direction $\tilde{p} = u+\frac{\eta}{\xi}v$, with step size $\tilde{\alpha} = \xi$,
where
\begin{align}
\frac{\eta}{\xi} = -\frac{u^TAv}{v^TAv},\quad \xi =\frac{u^Tr_k}{\tilde{p}^TA\tilde{p}}.
\end{align}
Furthermore, the new residual $\tilde{r}_{k+1}=b-A\tilde{x}_{k+1}$ is orthogonal to $u$, $v$ and $\tilde{p}$ is conjugate to $v$, i.e.,
\begin{align}
\label{eqn:general-subproblem-orth}
\tilde{r}_{k+1}^Tu=0,\quad \tilde{r}_{k+1}^Tv=0,\quad \tilde{p}^TAv=0.
\end{align}
\end{proposition}
Applying Proposition \ref{proposi:general-subproblem-orth} at $x_2$: if $\pi_2=x_2+\mbox{span}\{r_2,r_1\}$ is chosen, i.e., $u=r_2,v=r_1$, then $\tilde{r}_3$ is orthogonal to $r_2$ and $r_1$ but not necessarily orthogonal to $r_0$, and $\tilde{p}_2$ is conjugate to $r_1$ rather than to $p_1$. Hence the progress made so far would be spoiled.
If $\pi_2=x_2+\mbox{span}\{r_2,p_1\}$ is chosen, i.e., $u=r_2,v=p_1$, then $\tilde{r}_3$ is orthogonal to $r_2$ and $p_1$, and $\tilde{p}_2$ is conjugate to $p_1$. It can be verified that such $\tilde{p}_2$ is also conjugate to $p_0$ and therefore $\tilde{r}_3$ is orthogonal to $r_0$. In fact, with $u=r_2,v=p_1$
\begin{align}
\tilde{p}_2=r_2-\frac{p_{1}^TAr_2}{p_{1}^TAp_{1}} p_{1}=p_2,\\
\tilde{x}_{3}=x_2+\frac{r_2^Tp_2}{p_2^TAp_2} p_2=x_3,\\
\tilde{r}_{3}=b-Ax_3=r_2-\frac{r_2^Tp_2}{p_2^TAp_2}Ap_2=r_3,
\end{align}
where $p_2$, $x_3$ and $r_3$ are iterates of the CG method. Therefore $\tilde{p}_2$ is conjugate to $p_0$ and $\tilde{r}_3$ is orthogonal to $r_0$. These resulting properties seem satisfying, because they guarantee that we are making progress.
Repeating this process, at $x_k$, the subspace $\pi_2=x_k+\mbox{span}\{r_k,p_{k-1}\}$ is taken, which results in
\begin{align}
\label{eqn:subspace-iter-1}
\tilde{p}_k=r_k-\frac{p_{k-1}^TAr_k}{p_{k-1}^TAp_{k-1}} p_{k-1},\\
\label{eqn:subspace-iter-2}
\tilde{x}_{k+1}=x_k+\frac{r_k^Tp_k}{p_k^TAp_k} p_k,\\
\label{eqn:subspace-iter-3}
\tilde{r}_{k+1}=b-A\tilde{x}_{k+1}=r_k-\frac{r_k^Tp_k}{p_k^TAp_k}Ap_k,
\end{align}
which are the same iterates as $p_k$, $x_{k+1}$, $r_{k+1}$ in the CG method (\ref{eqn:cd-iter-1})-(\ref{eqn:cd-iter-3}).
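The agreement of the subspace step with the CG step can be observed numerically. The sketch below performs the first subspace minimization by solving the $2\times2$ system (\ref{eqn:subspace-eqn1})-(\ref{eqn:subspace-eqn2}) with Cramer's rule and compares it with the CG update ($A$ and $b$ are chosen arbitrarily):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in M]

A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, 1.0]

# First CG step from x0 = 0 with p0 = r0 = b.
r0 = b[:]
Ar0 = matvec(A, r0)
alpha0 = dot(r0, r0) / dot(r0, Ar0)
x1 = [alpha0 * ri for ri in r0]
r1 = [bi - axi for bi, axi in zip(b, matvec(A, x1))]

# CG step: p1 = r1 + beta0 * p0, then x2 = x1 + alpha1 * p1.
Ar1 = matvec(A, r1)
beta0 = -dot(r0, Ar1) / dot(r0, Ar0)
p1 = [r1i + beta0 * r0i for r1i, r0i in zip(r1, r0)]
Ap1 = matvec(A, p1)
alpha1 = dot(r1, p1) / dot(p1, Ap1)
x2_cg = [x1i + alpha1 * p1i for x1i, p1i in zip(x1, p1)]

# Subspace step: minimize J over x1 + span{r1, r0} via the 2x2 normal equations.
a11, a12, a22 = dot(r1, Ar1), dot(r1, Ar0), dot(r0, Ar0)
det = a11 * a22 - a12 * a12
xi = a22 * dot(r1, r1) / det
eta = -a12 * dot(r1, r1) / det
x2_sub = [x1i + xi * r1i + eta * r0i for x1i, r1i, r0i in zip(x1, r1, r0)]

assert all(abs(u - v) < 1e-9 for u, v in zip(x2_cg, x2_sub))
print("ok")
```

In two dimensions both steps in fact reach the exact solution $A^{-1}b$, which is consistent with the finite termination of CG.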
\subsubsection{Connection with BFGS}
BFGS is a special quasi-Newton method for minimizing a function $h(x)$. Denote $g_k=\nabla h(x_k)$, $s_k=x_{k+1}-x_k$, and $y_k=g_{k}-g_{k+1}$. Given the initial data $x_0$, $H_0$, the BFGS method reads
\begin{align}
\label{eqn:bfgs-1}
d_k & = -H_k g_k,\\
\label{eqn:bfgs-2}
x_{k+1} & = x_k + \alpha_k d_k,\quad \alpha_k=\mbox{arg}\min_{\alpha}h(x_k+\alpha d_k),\\
\label{eqn:bfgs-3}
H_{k+1} & = H_{k}+\frac{1}{s_{k}^Ty_{k}}\left[1+\frac{y_{k}^TH_{k}y_{k}}{s_{k}^Ty_{k}}\right]s_{k}s_{k}^T - \frac{1}{s_{k}^Ty_{k}}(s_{k}y_{k}^TH_{k}+H_{k}y_{k}s_{k}^T).
\end{align}
When applied to the quadratic function $ J(x)$, with the same initial guess $x_0$ as the CG method and with $H_0 = I$, BFGS produces the same iterates as CG, a fact perhaps first observed by Nazareth \cite{BFGS-Nazareth}. Noting that the residual $r=b-Ax=-\nabla J(x)$, BFGS gives
\begin{align}
\hat{p}_0 & = -H_0 g_0 = r_0,\\
\hat{x}_{1} & = x_0 + \alpha_0 \hat{p}_0,\quad \alpha_0=\frac{r_0^T\hat{p}_0}{\hat{p}_0^TA\hat{p}_0},\\
H_{1} & = H_{0}+\frac{1}{s_{0}^Ty_{0}}\left[1+\frac{y_{0}^TH_{0}y_{0}}{s_{0}^Ty_{0}}\right]s_{0}s_{0}^T - \frac{1}{s_{0}^Ty_{0}}(s_{0}y_{0}^TH_{0}+H_{0}y_{0}s_{0}^T).
\end{align}
Thus $\hat{x}_1$ is identical to $x_1$ generated by the CG method, and $\hat{p}_0$ identical to $p_0$. In passing, $r_1$ is also the same, with the orthogonality condition $r_1^Tr_0=0$. Note that $s_0=x_1-x_0=\alpha_0 \hat{p}_0$, $y_0=g_{1}-g_{0}=r_0-r_1=\alpha_0 A\hat{p}_0$ and $s_0^Tr_1=0$. The next search direction $\hat{p}_1$ is computed as
\begin{align}
\label{eqn:bfgs-cd-1}
\hat{p}_1 & = -H_1 g_1 = H_1 r_1 = r_1 - \frac{y_0^TH_0r_1}{s_0^Ty_0}s_0 = r_1 - \frac{y_0^Tr_1}{\hat{p}_0^Ty_0}\hat{p}_0 = r_1 - \frac{p_0^TAr_1}{p_0^TAp_0}p_0 = p_1.
\end{align}
The above formula shows that $\hat{p}_1$ is identical to $p_1$ of the CG method. Therefore all the orthogonality and conjugacy conditions carry over to $r_2$ and $\hat{p}_1$. Furthermore,
\begin{align}
\label{eqn:bfgs-cd-2}
H_1r_2 = H_0r_2 - \frac{1}{s_{0}^Ty_{0}}s_{0}y_{0}^TH_{0}r_2 = H_0r_2 - \frac{1}{s_{0}^Ty_{0}}s_{0}(r_0-r_1)^Tr_2 = H_0r_2 = r_2.
\end{align}
Suppose that up to $x_{k}$, the iterates and directions of BFGS and the CG method are the same, with $s_{k-1}^Tr_k=0$ and $H_{k-1}r_{k}=H_{k-2}r_{k}=\cdots=H_{0}r_{k}=r_{k}$. The next BFGS direction $\hat{p}_k$ is computed as
\begin{align}
\label{eqn:bfgs-cd-3}
\nonumber
\hat{p}_k & = -H_k g_k = H_k r_k = H_{k-1}r_k - \frac{1}{s_{k-1}^Ty_{k-1}}s_{k-1}y_{k-1}^TH_{k-1}r_{k} = r_k - \frac{1}{s_{k-1}^Ty_{k-1}}s_{k-1}y_{k-1}^Tr_{k}\\
& = r_k - \frac{y_{k-1}^Tr_{k}}{s_{k-1}^Ty_{k-1}}s_{k-1} = r_k - \frac{\hat{p}_{k-1}^TAr_{k}}{\hat{p}_{k-1}^TA\hat{p}_{k-1}}\hat{p}_{k-1} = r_k - \frac{p_{k-1}^TAr_{k}}{p_{k-1}^TAp_{k-1}}p_{k-1} = p_k,
\end{align}
where $p_k$ is the conjugate vector in the CG method. Therefore $\hat{x}_{k+1}=x_{k+1}$ holds. Furthermore, similarly to (\ref{eqn:bfgs-cd-2}) it can be derived that $H_{k}r_{k+1}=H_{k-1}r_{k+1}=\cdots=H_{0}r_{k+1}=r_{k+1}$. Hence the induction goes through. For more information on the connection between CG and quasi-Newton methods, see \cite{BFGS-Forsgren-Odland}.
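This equivalence is easy to confirm numerically. The sketch below (our own toy problem, not code from the paper) runs CG and BFGS with $H_0=I$ and exact line search on the same quadratic and compares the iterates.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)               # small SPD matrix
b = rng.standard_normal(5)
n = 5

def cg_iterates(A, b, x0, k):
    x = x0.copy(); r = b - A @ x; p = r.copy()
    out = [x.copy()]
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
        out.append(x.copy())
    return out

def bfgs_iterates(A, b, x0, k):
    # BFGS with H0 = I and exact line search on J(x) = x^T A x / 2 - b^T x
    x = x0.copy(); H = np.eye(len(b))
    out = [x.copy()]
    for _ in range(k):
        g = A @ x - b                      # gradient; note g = -r
        d = -H @ g
        alpha = -(g @ d) / (d @ A @ d)     # exact minimizer along d
        x_new = x + alpha * d
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g        # y_k = g_{k+1} - g_k
        rho = 1.0 / (s @ y)
        H = (H + rho * (1 + rho * (y @ H @ y)) * np.outer(s, s)
               - rho * (np.outer(s, y @ H) + np.outer(H @ y, s)))
        x = x_new
        out.append(x.copy())
    return out

xs_cg = cg_iterates(A, b, np.zeros(n), n)
xs_bfgs = bfgs_iterates(A, b, np.zeros(n), n)
```

Up to rounding, the two sequences of iterates coincide, as the induction argument above predicts.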
\subsection{From the viewpoint of linear algebra}
Recall the Lanczos process:
given $x_0$, compute $r_0=b-Ax_0$, $v_0=\frac{r_0}{\|r_0\|}$, $\tau_0=\|r_0\|$.
\begin{align}
\label{eqn:lanczos-1}
\tau_1 v_1 & = Av_0 - \sigma_0 v_0,\quad \sigma_0=v_0^TAv_0,\quad \tau_1=\|Av_0 - \sigma_0 v_0\|,\\
\label{eqn:lanczos-2}
\tau_2 v_2 & = Av_1 - \sigma_1 v_1 - \tau_1 v_0,\quad \sigma_1=v_1^TAv_1,\quad \tau_2=\|Av_1 - \sigma_1 v_1 - \tau_1 v_0\|,\\
\nonumber
\vdots\\
\label{eqn:lanczos-vk}
\tau_k v_k & = Av_{k-1} - \sigma_{k-1} v_{k-1} - \tau_{k-1} v_{k-2},\quad \sigma_{k-1}=v_{k-1}^TAv_{k-1},\\
\nonumber
& \quad \tau_k=\|Av_{k-1} - \sigma_{k-1} v_{k-1} - \tau_{k-1} v_{k-2}\|.
\end{align}
It is easy to see that $v_i$ and $v_j$ are orthogonal, if $i\neq j$.
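A direct sketch of this recurrence (our own toy matrix; no reorthogonalization) confirms the orthonormality of the $v_i$ and the tridiagonal projection numerically.

```python
import numpy as np

def lanczos(A, r0, k):
    # Three-term Lanczos recurrence (no reorthogonalization); a sketch.
    v = r0 / np.linalg.norm(r0)
    v_prev = np.zeros_like(v)
    V = [v]
    sigmas, taus = [], []
    tau = 0.0
    for _ in range(k - 1):
        w = A @ v
        sigma = v @ w
        w = w - sigma * v - tau * v_prev
        tau = np.linalg.norm(w)
        sigmas.append(sigma)
        taus.append(tau)
        v_prev, v = v, w / tau
        V.append(v)
    sigmas.append(v @ A @ v)              # last diagonal entry sigma_{k-1}
    return np.array(V).T, np.array(sigmas), np.array(taus)

rng = np.random.default_rng(2)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)
r0 = rng.standard_normal(8)
k = 5
V, sigmas, taus = lanczos(A, r0, k)
T = np.diag(sigmas) + np.diag(taus, 1) + np.diag(taus, -1)
```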
Denote $V_k=[v_0,v_1,\cdots,v_{k-1}]$, $\mathcal{V}_k=\mbox{span}\{v_0,v_1,\cdots,v_{k-1}\}$, let $e_k\in \mathbb{R}^k$ be the vector whose $k$-th component is 1 and all other components are 0, and
\begin{equation}
T_k\triangleq \left(
\begin{array}{ccccc}
\sigma_0 & \tau_1 & & & \\
\tau_1 & \sigma_1 & \tau_2 & & \\
& \ddots & \ddots & \ddots &\\
& & \ddots & \ddots & \tau_{k-1}\\
& & & \tau_{k-1} & \sigma_{k-1}\\
\end{array}
\right). \nonumber
\end{equation}
Then
\begin{align}
\label{eqn:lanczos-3}
AV_k & = V_k T_k + \tau_k v_k e_k^T,\\
V_k^TAV_k & = T_k.
\end{align}
$T_k$ is the matrix representation, in the basis $\{v_0,\ldots,v_{k-1}\}$, of the orthogonal projection onto $\mathcal{V}_k$ of the restriction $A|_{\mathcal{V}_k}$.
Consider the projected equations
\begin{align}
\label{eqn:lanczos-4}
V_k^TA(x_0+V_k z_k) = V_k^T b,
\end{align}
i.e.,
\begin{align}
\label{eqn:lanczos-5}
V_k^TAV_k z_k = V_k^T r_0 = \|r_0\|e_1,\mbox{ or }T_k z_k = \tau_0 e_1.
\end{align}
Solve $T_k z_k = \|r_0\|e_1$ by the Cholesky method. Suppose the Cholesky decomposition of $T_k$ is
\begin{align}
\label{eqn:lanczos-6}
T_k = L_kD_kL_k^T.
\end{align}
Then
\begin{align}
\nonumber
z_k = L_k^{-T}D_k^{-1}L_k^{-1}\tau_0 e_1,
\end{align}
and
\begin{align}
\nonumber
\bar{x}_k = x_0+V_k z_k.
\end{align}
Note that although the columns of $V_k$ are accumulated with each increase in $k$, i.e., $V_{k}=[V_{k-1}, v_{k-1}]$, the components of $z_k$ change entirely with each increase in $k$. The reason is that $T_k$ is not a lower triangular matrix.
To overcome this difficulty, rearrange the factors in the expression of $\bar{x}_k$,
\begin{align}
\nonumber
\bar{x}_k = x_0+V_k z_k = x_0+V_k L_k^{-T}D_k^{-1}L_k^{-1}\tau_0 e_1 = x_0+\bar{W}_k w_k,
\end{align}
where $\bar{W}_k=V_k L_k^{-T}$ and $w_k=D_k^{-1}L_k^{-1}\tau_0 e_1$. Let $\bar{W}_k=[\bar{p}_0,\ldots,\bar{p}_{k-1}]$.
\begin{proposition}
\label{proposi:accumulativeness}
$L_k, D_k, \bar{W}_k, w_k$ are all accumulated with each increase in $k$, i.e.,
\begin{align}
\nonumber
& L_{k}(1:k-1,1:k-1) =L_{k-1},& \quad D_{k}(1:k-1,1:k-1) = D_{k-1},\\
\nonumber
& \bar{W}_{k} =[\bar{W}_{k-1}, \bar{p}_{k-1}],& \quad w_k = \left(w_{k-1}^T,\bar{\alpha}_{k-1}\right)^T,
\end{align}
where $M(1:k-1,1:k-1)$ means the first $k-1$ rows and first $k-1$ columns of a matrix $M$. The columns of $\bar{W}_k$ are conjugate, i.e.,
\begin{align}
\label{eqn:lanczos-7}
\bar{p}_i^TA\bar{p}_j=0,\quad i\neq j
\end{align}
and
\begin{align}
\label{eqn:lanczos-8}
\bar{p}_i^TA\bar{p}_i=\delta_i,
\end{align}
where $\delta_i$ is the $i$-th element of the diagonal of $D_k$.
\end{proposition}
\begin{proof}
The claim for $L_k$ and $D_k$ is obvious since $T_k$ is accumulated with each increase in $k$. Now consider the claim for $\bar{W}_{k}$. Rewrite $\bar{W}_k=V_k L_k^{-T}$ as $\bar{W}_k L_k^{T}= V_k$, or
\begin{equation}
\left[ \bar{p}_0,\ldots,\bar{p}_{k-1} \right]
\left(
\begin{array}{ccccc}
1 & l_{21} & & & \\
& 1 & l_{32} & & \\
& & \ddots & \ddots &\\
& & & \ddots & l_{k,k-1}\\
& & & & 1\\
\end{array}
\right)
=
\left[ v_0,\ldots,v_{k-1} \right] . \nonumber
\end{equation}
By the above relation, $\bar{W}_{k}$ is accumulated with each increase in $k$. The argument for $w_k$ is similar.
\end{proof}
By Proposition \ref{proposi:accumulativeness},
\begin{align}
\label{eqn:lanczos-iterate}
\bar{x}_k = x_0+\bar{W}_k w_k = x_0+\bar{W}_{k-1} w_{k-1}+\bar{\alpha}_{k-1} \bar{p}_{k-1}=\bar{x}_{k-1}+\bar{\alpha}_{k-1} \bar{p}_{k-1}.
\end{align}
Up to now, with the orthogonality conditions on the $v_i$, the conjugacy conditions on the $\bar{p}_i$, and the iterates (\ref{eqn:lanczos-iterate}), it can be seen that the Lanczos + Cholesky method is quite similar to the CG method. The connection can be found in \cite{Lanczos-Householder,Lanczos-Meurant}, and will be explored in detail in the following.
Suppose that the two methods start with the same initial guess $x_0$. At $k=0$, $T_0=\sigma_0$, therefore $L_0=1$, $D_0=\sigma_0$, and $\bar{p}_0=v_0$, $\bar{\alpha}_0=D_0^{-1}L_0^{-1}\tau_0=\frac{\tau_0}{\sigma_0}$. Note that $\sigma_0=v_0^TAv_0=\frac{r_0^TAr_0}{\|r_0\|^2}=\frac{1}{\alpha_0}$, where $\alpha_0$ is the first step size in the CG method. There holds
\begin{align}
\label{eqn:lanczos-connection-p0}
\bar{p}_0=v_0=\frac{r_0}{\|r_0\|}=\frac{p_0}{\|r_0\|},\\
\label{eqn:lanczos-connection-alpha0}
\bar{\alpha}_0=\frac{\tau_0}{\sigma_0}=\|r_0\|\alpha_0.
\end{align}
Thus
\begin{align}
\label{eqn:lanczos-connection-alpha0-p0}
\bar{\alpha}_0\bar{p}_0=\alpha_0 p_0,\\
\label{eqn:lanczos-connection-x1}
\bar{x}_1=x_0+\bar{\alpha}_0\bar{p}_0=x_1,
\end{align}
where $p_0$ and $x_1$ are the iterates in the CG method. In passing, $\bar{r}_1=r_1$. Rewrite $r_1$ in terms of $v_0$,
\begin{align}
\label{eqn:lanczos-connection-r1}
r_1 = r_0 - \bar{\alpha}_0A\bar{p}_0=\tau_0 v_0 - \bar{\alpha}_0Av_0=-\bar{\alpha}_0\left(Av_0-\frac{\tau_0}{\bar{\alpha}_0}v_0\right)=-\bar{\alpha}_0\left(Av_0-\sigma_0v_0\right).
\end{align}
It is clear now that
\begin{align}
\nonumber
-\frac{\|r_1\|}{\bar{\alpha}_0} \frac{r_1}{\|r_1\|} = Av_0-\sigma_0v_0=\tau_1 v_1.
\end{align}
Since $\tau_1>0$ and $v_1$ is a unit vector, it must be that
\begin{align}
\label{eqn:lanczos-connection-r1v1}
v_1=-\frac{r_1}{\|r_1\|}.
\end{align}
Therefore
\begin{align}
\nonumber
\sigma_1 & =v_1^TAv_1=\frac{r_1^TAr_1}{\|r_1\|^2}=\frac{p_1^TAp_1+\beta_0^2p_0^TAp_0}{\|r_1\|^2}\\
\label{eqn:lanczos-connection-sigma-1}
& = \frac{p_1^TAp_1}{\|r_1\|^2}+\beta_0\frac{\|r_1\|^2}{\|r_0\|^2}\frac{p_0^TAp_0}{\|r_1\|^2}=\frac{1}{\alpha_1}+\frac{\beta_0}{\alpha_0},\\
\label{eqn:lanczos-connection-tau-1}
\tau_1 & =\frac{\|r_1\|}{\bar{\alpha}_0}=\frac{\|r_1\|}{\|r_0\|\alpha_0}=\frac{\sqrt{\beta_0}}{\alpha_0},
\end{align}
where $\beta_0$ is the one in the CG method.
Let the indices of the matrices $L_k$ and $D_k$ be labeled starting from 0. By the relation $\bar{W}_2=V_2L_2^{-T}$, there holds
\begin{align}
\label{eqn:lanczos-connection-pv1}
[\bar{p}_0,\bar{p}_1]
\left[
\begin{array}{cc}
1 & l_{10}\\
& 1
\end{array}
\right]=[v_0,v_1]
\end{align}
and thus
\begin{align}
\nonumber
l_{10}\bar{p}_0 + \bar{p}_1 = v_1.
\end{align}
Since $\bar{p}_0$ and $\bar{p}_1$ are conjugate, by (\ref{eqn:lanczos-connection-p0}) and (\ref{eqn:lanczos-connection-r1v1})
\begin{align}
\nonumber
l_{10} & =\frac{v_1^TA\bar{p}_0}{\bar{p}_0^TA\bar{p}_0}=\frac{\|r_0\|^2}{p_0^TAp_0}\left(-\frac{r_1}{\|r_1\|}\right)^TA\left(\frac{p_0}{\|r_0\|}\right)\\
\label{eqn:lanczos-connection-l10}
& =-\frac{r_1^T(\alpha_0Ap_0)}{\|r_1\|\|r_0\|}
=\frac{r_1^T(r_1-r_0)}{\|r_1\|\|r_0\|}
=\frac{\|r_1\|}{\|r_0\|}.
\end{align}
Therefore
\begin{align}
\nonumber
\bar{p}_1 & = v_1 - l_{10}\bar{p}_0=-\frac{r_1}{\|r_1\|}-\frac{\|r_1\|}{\|r_0\|}\frac{p_0}{\|r_0\|} = - \frac{1}{\|r_1\|}\left(r_1+\frac{\|r_1\|^2}{\|r_0\|^2}p_0\right)\\
\label{eqn:lanczos-connection-p1}
& = - \frac{1}{\|r_1\|}\left(r_1+\beta_0 p_0\right)=- \frac{1}{\|r_1\|}p_1,
\end{align}
where $p_1$ is the conjugate direction in the CG method. Hence, by the above equality,
\begin{align}
\label{eqn:lanczos-connection-d1}
\delta_1 = \bar{p}_1^TA\bar{p}_1 = \frac{p_1^TAp_1}{r_1^Tr_1}=\frac{1}{\alpha_1},
\end{align}
where $\alpha_1$ is the second step size in the CG method.
By the relation $w_2=D_2^{-1}L_2^{-1}\tau_0 e_1$, there holds
\begin{align}
\label{eqn:lanczos-connection-alphas}
\left[
\begin{array}{cc}
1 & \\
l_{10} & 1
\end{array}
\right]
\left[
\begin{array}{c}
\delta_0 \bar{\alpha}_0\\
\delta_1 \bar{\alpha}_1
\end{array}
\right]=
\tau_0\left[
\begin{array}{c}
1\\
0
\end{array}
\right],
\end{align}
and thus by (\ref{eqn:lanczos-connection-d1})
\begin{align}
\label{eqn:lanczos-connection-alpha1}
\bar{\alpha}_1=-\frac{l_{10}\delta_0\bar{\alpha}_0}{\delta_1}=-\frac{\|r_1\|}{\|r_0\|}\frac{\|r_0\|}{\delta_1}=-\|r_1\|\alpha_1.
\end{align}
From (\ref{eqn:lanczos-connection-p1}) and (\ref{eqn:lanczos-connection-alpha1}), there holds
\begin{align}
\label{eqn:lanczos-connection-alpha1-p1}
\bar{\alpha}_1\bar{p}_1=\alpha_1 p_1,\\
\label{eqn:lanczos-connection-x2}
\bar{x}_2=x_1+\bar{\alpha}_1\bar{p}_1=x_2,
\end{align}
where $x_2$ is the corresponding iterate in the CG method. By induction, the connection can be established,
\begin{align}
\label{eqn:lanczos-connection-rvk}
v_k & = (-1)^k\frac{r_k}{\|r_k\|},\\
\label{eqn:lanczos-connection-pk}
\bar{p}_k & = (-1)^k\frac{1}{\|r_k\|}p_k,\\
\label{eqn:lanczos-connection-alphak}
\bar{\alpha}_k & =(-1)^k\|r_k\|\alpha_k,\\
\label{eqn:lanczos-connection-sigma-k}
\sigma_k & = \frac{1}{\alpha_k}+\frac{\beta_{k-1}}{\alpha_{k-1}},\\
\label{eqn:lanczos-connection-tau-k}
\tau_k & = \frac{\sqrt{\beta_{k-1}}}{\alpha_{k-1}},\\
\label{eqn:lanczos-connection-lkkm1}
l_{k,k-1} & = \frac{\|r_k\|}{\|r_{k-1}\|},\\
\label{eqn:lanczos-connection-dk}
\delta_k & = \frac{1}{\alpha_k}.
\end{align}
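These correspondences can be confirmed numerically; the sketch below (our own toy problem) runs CG and the Lanczos recurrence from the same starting residual and checks the formulas for $\sigma_k$ and $\tau_k$ in terms of $\alpha_k$ and $\beta_k$.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((7, 7))
A = M @ M.T + 7 * np.eye(7)
b = rng.standard_normal(7)

# CG from x0 = 0, recording alpha_k and beta_k
x = np.zeros(7); r = b.copy(); p = r.copy()
alphas, betas = [], []
for _ in range(4):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    alphas.append(alpha)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    betas.append(beta)
    p = r_new + beta * p
    r = r_new

# Lanczos coefficients from the same starting residual r0 = b
v = b / np.linalg.norm(b)
v_prev = np.zeros(7)
tau = 0.0
sigmas, taus = [], []
for _ in range(4):
    w = A @ v
    sigma = v @ w
    sigmas.append(sigma)
    w = w - sigma * v - tau * v_prev
    tau = np.linalg.norm(w)
    taus.append(tau)
    v_prev, v = v, w / tau
```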
If the process terminates at $k=n$, then $V_{n}$ is an orthogonal matrix and therefore
\begin{align}
\label{eqn:lanczos-det}
\det(A) = \det(T_n) = \det(D_{n}) = \prod_{k=0}^{n-1}\frac{1}{\alpha_k}, \quad\mbox{or}\quad \det(A^{-1}) = \prod_{k=0}^{n-1}\alpha_k.
\end{align}
On the other hand, since $\beta_{i}=\frac{\|r_{i+1}\|^2}{\|r_{i}\|^2}$, the product of $\beta_{i}$,
\begin{align}
\label{eqn:lanczos-product-beta}
\prod_{i=0}^{k-1}\beta_i=\frac{\|r_k\|^2}{\|r_{0}\|^2}
\end{align}
is nothing but the square of the 2-norm of the relative residual.
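Both identities are easy to verify numerically. The sketch below (toy SPD matrix of our own choosing) runs CG to completion and compares $\prod_k \alpha_k^{-1}$ with $\det(A)$ and $\prod_{i<k}\beta_i$ with the squared relative residual.

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)

x = np.zeros(5); r = b.copy(); p = r.copy()
alphas, betas = [], []
res_norms = [np.linalg.norm(r)]
for _ in range(5):                        # n = 5 full CG steps
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    alphas.append(alpha)
    x = x + alpha * p
    r_new = r - alpha * Ap
    betas.append((r_new @ r_new) / (r @ r))
    p = r_new + betas[-1] * p
    r = r_new
    res_norms.append(np.linalg.norm(r))

det_from_cg = np.prod([1.0 / a for a in alphas])   # det(A) = prod_k 1/alpha_k
```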
\section{Residual polynomials and conjugate polynomials}
In the CG iteration process, there are two sequences of vectors $\{r_k\}$ and $\{p_k\}$, i.e.,
those of residual vectors and conjugate vectors, with both $r_k$ and $p_k$ lying in the Krylov subspace
$\mathcal{K}_{k+1}=\mbox{span}\{r_0,Ar_0,\ldots,A^{k}r_0\}$. As is noted in the original paper \cite{Hestenes-Stiefel}, inspection of the CG process shows that $\{r_k\}$ and $\{p_k\}$ are related to two sequences of polynomials
$\{R_k(\lambda)\}$ and $\{P_k(\lambda)\}$ such that
\begin{align}
r_k = R_k(A)r_0,\\
p_k = P_k(A)r_0.
\end{align}
In this sense, $\{R_k(\lambda)\}$ and $\{P_k(\lambda)\}$ will be called the residual polynomials and the conjugate polynomials, respectively, in this paper.
Note that the degrees of $R_k(\lambda)$ and $P_k(\lambda)$ are both $k$. Although $\{r_k\}$ and $\{p_k\}$ are intertwined as
\begin{align}
r_{k+1} = r_{k} - \alpha_k A p_{k},\\
p_{k+1} = r_{k+1} + \beta_k p_{k},
\end{align}
they can be decoupled to obtain their own three-term recurrences,
\begin{align}
\nonumber
r_{k+1} & = r_{k} - \alpha_k A p_{k}\\
\nonumber
& = r_{k} - \alpha_k A \left( r_k+\beta_{k-1}p_{k-1} \right)\\
\nonumber
& = (1-\alpha_k A)r_{k} - \alpha_k \beta_{k-1} A p_{k-1}\\
\nonumber
& = (1-\alpha_k A)r_{k} - \alpha_k \beta_{k-1} \frac{1}{\alpha_{k-1}}
\left( r_{k-1}-r_{k} \right)\\
\label{eqn:three-term-reccurence-rk}
& = \left( 1+\frac{\alpha_k}{\alpha_{k-1}}\beta_{k-1}-\alpha_k A \right) r_{k} - \frac{\alpha_k}{\alpha_{k-1}}\beta_{k-1} r_{k-1},\\
\nonumber
p_{k+1} & = r_{k+1} + \beta_k p_{k}\\
\nonumber
& = r_{k} - \alpha_k A p_k + \beta_k p_{k}\\
\nonumber
& = p_{k} -\beta_{k-1}p_{k-1} - \alpha_k A p_k + \beta_k p_{k}\\
\label{eqn:three-term-reccurence-pk}
& = \left( 1+\beta_k- \alpha_k A \right) p_{k} -\beta_{k-1}p_{k-1}.
\end{align}
Correspondingly, $\{R_k(\lambda)\}$ and $\{P_k(\lambda)\}$ are intertwined as
\begin{align}
R_{k+1}(\lambda) = R_{k}(\lambda) - \alpha_k \lambda P_{k}(\lambda),\\
P_{k+1}(\lambda) = R_{k+1}(\lambda) + \beta_k P_{k}(\lambda).
\end{align}
and have their own three-term recurrences,
\begin{align}
\nonumber
R_{k+1}(\lambda) & = \left( 1+\frac{\alpha_k}{\alpha_{k-1}}\beta_{k-1}-\alpha_k \lambda \right) R_{k}(\lambda) - \frac{\alpha_k}{\alpha_{k-1}}\beta_{k-1} R_{k-1}(\lambda)\\
P_{k+1}(\lambda) & = \left( 1+\beta_k- \alpha_k \lambda \right) P_{k}(\lambda) -\beta_{k-1}P_{k-1}(\lambda).
\end{align}
The properties of the polynomials $\{R_k(\lambda)\}$ and $\{P_k(\lambda)\}$ reveal certain information about the matrix $A$.
\subsection{The roots of residual polynomials and conjugate polynomials}
Recall the Lanczos process. Two sequences of polynomials $\{\bar{R}_k(\lambda)\}$ and $\{\bar{P}_k(\lambda)\}$ can also be defined such that
\begin{align}
v_k = \bar{R}_k(A)v_0,\\
\bar{p}_k = \bar{P}_k(A)v_0.
\end{align}
Due to the correspondences of $v_k,r_k$ and $\bar{p}_k,p_k$ in (\ref{eqn:lanczos-connection-rvk}) and (\ref{eqn:lanczos-connection-pk}), $\bar{R}_k(\lambda)$ is
a multiple of $R_k(\lambda)$, and $\bar{P}_k(\lambda)$ a multiple of $P_k(\lambda)$. Therefore the roots of $R_k(\lambda)$ are the same as those of $\bar{R}_k(\lambda)$, and the roots of $P_k(\lambda)$ the same as those of $\bar{P}_k(\lambda)$. The roots of $\bar{R}_k(\lambda)$ and those of $\bar{P}_k(\lambda)$ are closely related to the relation $AV_k = V_kT_k + \tau_k v_k e_k^T$ in the Lanczos process. Rewrite this relation as $V_k^TA = T_kV_k^T + \tau_k e_k v_k^T$, that is,
\begin{align}
\nonumber
\left(
\begin{array}{c}
v_0^T \\
v_1^T \\
\vdots \\
v_{k-1}^T
\end{array}
\right)A
= T_k
\left(
\begin{array}{c}
v_0^T \\
v_1^T \\
\vdots \\
v_{k-1}^T
\end{array}
\right)
+ \tau_k
\left(
\begin{array}{c}
0 \\
0 \\
\vdots \\
v_{k}^T
\end{array}
\right).
\end{align}
Note that $v_k=\bar{R}_k(A)v_0$; the above formula is rewritten further as
\begin{align}
\label{eqn:reccurence-Tk-Rk}
\left(
\begin{array}{c}
\left( \bar{R}_0(A)v_0 \right)^T \\
\left( \bar{R}_1(A)v_0 \right)^T \\
\vdots \\
\left( \bar{R}_{k-1}(A)v_0 \right)^T \\
\end{array}
\right)A
= T_k
\left(
\begin{array}{c}
\left( \bar{R}_0(A)v_0 \right)^T \\
\left( \bar{R}_1(A)v_0 \right)^T \\
\vdots \\
\left( \bar{R}_{k-1}(A)v_0 \right)^T \\
\end{array}
\right)
+ \tau_k
\left(
\begin{array}{c}
0 \\
0 \\
\vdots \\
\left( \bar{R}_{k}(A)v_0 \right)^T
\end{array}
\right).
\end{align}
It turns out that the sequence of polynomials $\bar{R}_k(\lambda)$ satisfies the same recurrence relation,
\begin{align}
\label{eqn:eigenvalue-Tk}
\left(
\begin{array}{c}
\bar{R}_0(\lambda) \\
\bar{R}_1(\lambda) \\
\vdots \\
\bar{R}_{k-1}(\lambda) \\
\end{array}
\right)\lambda
= T_k
\left(
\begin{array}{c}
\bar{R}_0(\lambda) \\
\bar{R}_1(\lambda) \\
\vdots \\
\bar{R}_{k-1}(\lambda) \\
\end{array}
\right)
+ \tau_k
\left(
\begin{array}{c}
0 \\
0 \\
\vdots \\
\bar{R}_{k}(\lambda)
\end{array}
\right).
\end{align}
This can be considered as derived from (\ref{eqn:reccurence-Tk-Rk}) by replacing $A$ with $\lambda$ and dropping $v_0$.
In fact, by (\ref{eqn:lanczos-vk}), there holds
\begin{align}
Av_{k-1} = \sigma_{k-1} v_{k-1} + \tau_{k-1} v_{k-2} + \tau_k v_k.
\end{align}
Since $v_k = \bar{R}_k(A)v_0$, the above formula is converted to
\begin{align}
A\bar{R}_{k-1}(A)v_0 = \sigma_{k-1} \bar{R}_{k-1}(A)v_0 + \tau_{k-1} \bar{R}_{k-2}(A)v_0 + \tau_k \bar{R}_k(A) v_0.
\end{align}
Thus the polynomials $\bar{R}_k(\lambda)$ satisfy the recurrence
\begin{align}
\lambda\bar{R}_{k-1}(\lambda) = \sigma_{k-1} \bar{R}_{k-1}(\lambda) + \tau_{k-1} \bar{R}_{k-2}(\lambda) + \tau_k \bar{R}_k(\lambda).
\end{align}
Putting these relations in matrix-vector form yields (\ref{eqn:eigenvalue-Tk}).
From (\ref{eqn:eigenvalue-Tk}), it is easily seen that the roots $\bar{\lambda}_i$ of $\bar{R}_k(\lambda)$ are nothing but the eigenvalues of $T_k$, that is,
\begin{align}
\nonumber
T_k
\left(
\begin{array}{c}
\bar{R}_0(\bar{\lambda}_i) \\
\bar{R}_1(\bar{\lambda}_i) \\
\vdots \\
\bar{R}_{k-1}(\bar{\lambda}_i) \\
\end{array}
\right)
=
\bar{\lambda}_i
\left(
\begin{array}{c}
\bar{R}_0(\bar{\lambda}_i) \\
\bar{R}_1(\bar{\lambda}_i) \\
\vdots \\
\bar{R}_{k-1}(\bar{\lambda}_i) \\
\end{array}
\right).
\end{align}
The corresponding eigenvectors are formed by the values of $\bar{R}_j$ at $\bar{\lambda}_i$, $j=0,\ldots,k-1$. Therefore, $\bar{R}_k(\lambda)$, and hence $R_k(\lambda)$, is (up to a constant factor) the characteristic polynomial of $T_k$. If there is no roundoff error and the CG iteration terminates at step $n$, then $T_n$ is an $n\times n$ matrix similar to $A$ and $R_n(\lambda)$ is (up to a constant factor) the characteristic polynomial of $A$. In this case, $\{\bar{R}_k(\lambda)\}$ or $\{R_k(\lambda)\}$ form a Sturm sequence for $T_n$.
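This correspondence between the roots of $R_k(\lambda)$ and the eigenvalues of $T_k$ (the Ritz values) can be illustrated numerically. In the sketch below (our own toy problem), $R_k$ is built from the intertwined recurrences, and $T_k$ is assembled from the CG coefficients via the connection formulas for $\sigma_k$ and $\tau_k$ given earlier.

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(5)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)
b = rng.standard_normal(6)

x = np.zeros(6); r = b.copy(); p = r.copy()
alphas, betas = [], []
k = 3
for _ in range(k):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    alphas.append(alpha)
    x = x + alpha * p
    r_new = r - alpha * Ap
    betas.append((r_new @ r_new) / (r @ r))
    p = r_new + betas[-1] * p
    r = r_new

# residual polynomial R_k via R_{j+1} = R_j - alpha_j * lambda * P_j,
# P_{j+1} = R_{j+1} + beta_j * P_j  (coefficients low to high)
R, Pk = np.array([1.0]), np.array([1.0])
for j in range(k):
    R = P.polysub(R, alphas[j] * P.polymulx(Pk))
    if j < k - 1:
        Pk = P.polyadd(R, betas[j] * Pk)

# T_k from the CG coefficients: sigma_j = 1/alpha_j + beta_{j-1}/alpha_{j-1},
# tau_j = sqrt(beta_{j-1}) / alpha_{j-1}
sig = [1 / alphas[0]] + [1 / alphas[j] + betas[j - 1] / alphas[j - 1]
                         for j in range(1, k)]
tau = [np.sqrt(betas[j]) / alphas[j] for j in range(k - 1)]
T = np.diag(sig) + np.diag(tau, 1) + np.diag(tau, -1)

ritz = np.sort(np.linalg.eigvalsh(T))
roots = np.sort(P.polyroots(R).real)
```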
Furthermore, if $v_m$ or $r_m$ vanishes for some $m<n$, then $\bar{R}_m(\lambda)$ or $R_m(\lambda)$ has a common factor with the characteristic polynomial of $A$. To see this, let $\varphi_i, i=1,\ldots,n$, be the normalized eigenvectors of the symmetric positive definite matrix $A$ and $\lambda_i$ the corresponding eigenvalues. Suppose that $r_0=\xi_{i_1} \varphi_{i_1}+\cdots+\xi_{i_l} \varphi_{i_l} \neq 0$, with $\xi_{i_1} \neq 0, \ldots, \xi_{i_l} \neq 0$. Then
\begin{align}
r_m = R_m(A)r_0 = R_m(\lambda_{i_1})\xi_{i_1} \varphi_{i_1} + \cdots + R_m(\lambda_{i_l})\xi_{i_l} \varphi_{i_l}.
\end{align}
Therefore $R_m(\lambda_{i_1})=0, \ldots, R_m(\lambda_{i_l})=0$. If the corresponding eigenvalues $\lambda_{i_1},\ldots,
\lambda_{i_l}$ are distinct, then $m=l$. Indeed, if $m<l$, then the polynomial $R_m(\lambda)$ of degree $m<l$ would have more than $m$ distinct roots, a contradiction; if $m>l$, then the dimension of $\mbox{span}\{r_0,\ldots,r_{m-1}\}$ would not equal the dimension of the Krylov subspace $\mbox{span}\{r_0,\ldots,A^{m-1}r_0\}$, again a contradiction. In the case $m=l$, $R_m(\lambda)$ is a factor of the characteristic polynomial of $A$ (up to a constant multiple).
As for the roots of the conjugate polynomials $\bar{P}_k(\lambda)$, consider again the relation $AV_k = V_kT_k + \tau_k v_k e_k^T$ from the Lanczos process. Due to the relation $\bar{W}_k=V_k L_k^{-T}$ between the $\bar{p}_i$ and the $v_i$ and the fact that $e_k^T L_k^{-T} = e_k^T$, there holds $A\bar{W}_k = \bar{W}_kL_k^TL_kD_k + \tau_k v_k e_k^T$. Note that $L_k^TL_kD_k$ is not symmetric. A scaling trick is applied,
\begin{align}
\label{eqn:recurrence-pk-bar}
A\bar{W}_kD_k^{-\frac{1}{2}} = \bar{W}_kD_k^{-\frac{1}{2}}D_k^{\frac{1}{2}}L_k^TL_kD_k^{\frac{1}{2}} + \tau_k v_k e_k^TD_k^{-\frac{1}{2}}.
\end{align}
Denote $\bar{\bar{W}}_k = \bar{W}_kD_k^{-\frac{1}{2}}$ and $\bar{T}_k = D_k^{\frac{1}{2}}L_k^TL_kD_k^{\frac{1}{2}}$. Since
$$
\bar{T}_k = D_k^{\frac{1}{2}}L_k^TL_kD_k^{\frac{1}{2}}D_k^{\frac{1}{2}}L_k^T L_k^{-T}D_k^{-\frac{1}{2}} = \left( D_k^{\frac{1}{2}}L_k^T \right) T_k \left( D_k^{\frac{1}{2}}L_k^T \right)^{-1},
$$
$\bar{T}_k$ is similar to $T_k$. By the relation $v_k = l_{k,k-1}\bar{p}_{k-1} + \bar{p}_k$ and the fact that
$\bar{p}_{k-1}/\sqrt{\delta_{k-1}}$ is the last column of $\bar{\bar{W}}_k$, (\ref{eqn:recurrence-pk-bar}) is rewritten as
\begin{align}
\nonumber
A\bar{\bar{W}}_k & = \bar{\bar{W}}_k\bar{T}_k + \frac{1}{\sqrt{\delta_{k-1}}}\tau_k l_{k,k-1} \bar{p}_{k-1} e_k^T + \frac{1}{\sqrt{\delta_{k-1}}}\tau_k \bar{p}_{k} e_k^T \\
\label{eqn:recurrence-pk-bar-bar}
& = \bar{\bar{W}}_k\bar{\bar{T}}_k + \frac{1}{\sqrt{\delta_{k-1}}}\tau_k \bar{p}_{k} e_k^T,
\end{align}
where
$$
\bar{\bar{T}}_k = \bar{T}_k +
\left(
\begin{array}{ccc}
& & \\
& & \\
& & \tau_k l_{k,k-1}
\end{array}
\right)
=
\bar{T}_k +
\left(
\begin{array}{ccc}
& & \\
& & \\
& & \frac{\beta_{k-1}}{\alpha_{k-1}}
\end{array}
\right),
$$
and $\alpha_k, \beta_k$ are quantities from the CG iteration.
Proceeding as in the argument for the roots of $\bar{R}_k(\lambda)$, it can be seen that the roots of the conjugate polynomial $P_k(\lambda)$, or $\bar{P}_k(\lambda)$, are the eigenvalues of $\bar{\bar{T}}_k$, which is a rank-one modification of $\bar{T}_k$, with $\bar{T}_k$ similar to $T_k$. For the roots of such sequences of polynomials for more general conjugate gradient methods, see \cite{rootspolynomials-Manteuffel}.
\subsection{Duality between residual polynomials and $n$-dimensional geometry}
Recall that in the CG iteration process, the residual vectors are orthogonal with respect to the Euclidean inner product $(\cdot,\cdot)$, that is, $(r_i,r_j)=0$ for $i\ne j$. Since the residual polynomial $R_i(\lambda)$ is associated with $r_i$, a natural question is whether the polynomials $R_i(\lambda)$ are orthogonal in some sense. The answer is yes, as shown in the original paper \cite{Hestenes-Stiefel}.
In order to explain the idea, as above let $\varphi_i, i=1,\ldots,n$, be the normalized eigenvectors of the symmetric positive definite matrix $A$ and $\lambda_i$ the corresponding eigenvalues. Suppose that $x_0$ is an initial iterate such that $r_0=\xi_1 \varphi_1+\ldots+\xi_n \varphi_n$ with all $\xi_i\ne 0$, normalized so that $\xi_1^2+\ldots+\xi_n^2=1$. In this setting, let us relate the inner product of $r_i$, $r_j$ to the polynomials $R_i(\lambda), R_j(\lambda)$ as follows,
\begin{align}
\nonumber
(r_i,r_j) & = \left( R_i(A)r_0, R_j(A)r_0 \right)\\
\nonumber
& = \left( R_i(\lambda_1)\xi_1\varphi_1+\ldots+R_i(\lambda_n)\xi_n\varphi_n, R_j(\lambda_1)\xi_1\varphi_1+\ldots+R_j(\lambda_n)\xi_n\varphi_n \right)\\
\label{eqn:duality-ri-Ri-1}
& = \xi_1^2 R_i(\lambda_1) R_j(\lambda_1)+\ldots+\xi_n^2 R_i(\lambda_n) R_j(\lambda_n).
\end{align}
The point is whether (\ref{eqn:duality-ri-Ri-1}) can be viewed as an inner product of the polynomials $R_i(\lambda), R_j(\lambda)$. Define a step function $m(\lambda)$ as follows,
\begin{align}
\label{eqn:definition-mass-distribution}
m(\lambda)=
\left\{
\begin{aligned}
& 0,\quad \lambda<\lambda_1\\
& \xi_1^2, \quad \lambda_1 \le \lambda < \lambda_2\\
& \vdots\\
& \xi_1^2+\ldots+\xi_i^2,\quad \lambda_i \le \lambda < \lambda_{i+1}\\
& \xi_1^2+\ldots+\xi_n^2=1,\quad \lambda_n \le \lambda.
\end{aligned}
\right.
\end{align}
It is easily seen that $m(\lambda)$ is a nonnegative and nondecreasing function. The Riemann-Stieltjes integral exists for any continuous function $f(\lambda)$ with respect to $m(\lambda)$, and
\begin{align}
\int_0^c f(\lambda)dm(\lambda) = \xi_1^2 f(\lambda_1)+\ldots+\xi_n^2 f(\lambda_n),
\end{align}
where $c>\lambda_n$ is a constant. Under this definition,
\begin{align}
\label{eqn:duality-ri-Ri-2}
\int_0^c R_i(\lambda) R_j(\lambda)dm(\lambda) = (r_i,r_j).
\end{align}
That is, the polynomials $R_i(\lambda), R_j(\lambda)$ are orthogonal with respect to this Riemann-Stieltjes integral. Note that $r_0,r_1,\ldots,r_{n-1}$ are orthogonal vectors of $\mathbb{R}^n$ and thus form a basis. Also note that $R_0(\lambda),R_1(\lambda),\ldots,R_{n-1}(\lambda)$ form a basis of the polynomial space $\mathbb{P}^{n-1}$. If the correspondence $\lambda^k \leftrightarrow A^kr_0$ is specified, then the $n$-dimensional space $\mathbb{R}^{n}$ is isomorphic to the polynomial space $\mathbb{P}^{n-1}$. In the original paper \cite{Hestenes-Stiefel}, Hestenes and Stiefel called the polynomials $R_i(\lambda)$ orthogonal polynomials based on this orthogonality.
As for the conjugate vectors and conjugate polynomials, there are similar relations,
\begin{align}
\nonumber
(Ap_i,p_j) & = \left( AP_i(A)r_0, P_j(A)r_0 \right)\\
\nonumber
& = \xi_1^2 \lambda_1 P_i(\lambda_1) P_j(\lambda_1)+\ldots+\xi_n^2 \lambda_n P_i(\lambda_n) P_j(\lambda_n)\\
\label{eqn:duality-pi-Pi}
& = \int_0^c \lambda P_i(\lambda) P_j(\lambda)dm(\lambda).
\end{align}
Since the conjugate vectors $p_i$ satisfy $(Ap_i,p_j)=0$, the corresponding conjugate polynomials $P_i(\lambda)$ are orthogonal with respect to the weight function $\lambda$.
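Both orthogonality statements can be checked directly with a discrete Riemann-Stieltjes sum. In the sketch below (our own toy problem; $r_0$ is left unnormalized, so the masses $\xi_i^2$ sum to $\|r_0\|^2$ rather than 1, which does not affect orthogonality), the sum reproduces the Euclidean inner products.

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(6)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
lam, Phi = np.linalg.eigh(A)               # eigenpairs of A

x = np.zeros(5); r = b.copy(); p = r.copy()
rs = [r.copy()]
Rs, Ps = [np.array([1.0])], [np.array([1.0])]
for _ in range(3):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    Rs.append(P.polysub(Rs[-1], alpha * P.polymulx(Ps[-1])))
    Ps.append(P.polyadd(Rs[-1], beta * Ps[-1]))
    p = r_new + beta * p
    r = r_new
    rs.append(r.copy())

xi = Phi.T @ rs[0]                          # r_0 = sum_i xi_i * phi_i

def stieltjes(f, g, weight=None):
    # discrete integral: sum_i xi_i^2 * w(lambda_i) * f(lambda_i) * g(lambda_i)
    w = lam if weight == "lambda" else np.ones_like(lam)
    return np.sum(xi ** 2 * w * P.polyval(lam, f) * P.polyval(lam, g))
```

The unweighted sum reproduces $(r_i,r_j)$, and with weight $\lambda$ it reproduces $(Ap_i,p_j)$, so the residual polynomials are orthogonal and the conjugate polynomials are orthogonal with respect to the weight $\lambda$.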
\section{Convergence rate}
Since CG is closely related to the steepest descent method, the convergence rates of the two methods will be reviewed and compared. Note that given a vector $b$ and a symmetric positive definite matrix $A$, the iterative sequence $\{x_k\}$ of either method is completely determined by the initial guess $x_0$. As for two successive iterates: for steepest descent, $x_{k+1}$ is determined entirely by $x_k$; for CG, $x_{k+1}$ is determined by $x_k$ and the conjugate direction $p_{k}$. There are usually two kinds of measurements for the convergence rate of an iterative method. One is the ratio of two successive error norms,
\begin{align}
\frac{\|x_{k+1}-x_*\|}{\|x_{k}-x_*\|},
\end{align}
which is usually adopted for steepest descent, while the other is the cumulative effect of the ratio over $k$ steps,
\begin{align}
\frac{\|x_{k}-x_*\|}{\|x_{0}-x_*\|}.
\end{align}
which is usually used for CG. Let us call the former the two-term ratio and the latter the $k$-term ratio.
Note that
\begin{align}
J(x)=\frac{1}{2}x^TAx-x^Tb=\frac{1}{2}x^TAx-x^TAx_*=\frac{1}{2}(x-x_*)^TA(x-x_*) - \frac{1}{2}x_*^TAx_*.
\end{align}
Thus the $A$-norm of error satisfies
\begin{align}
\label{eqn:error-A-norm-Jx}
(x-x_*)^TA(x-x_*)=2J(x)+x_*^TAx_*.
\end{align}
At $x_k$, if the iteration marches along the direction $d_k$, then by (\ref{eqn:descent-quantity}) and the fact that $\|x_{k}-x_*\|_A^2 = r_k^TA^{-1}r_k$, the $A$-norms of successive errors satisfy
\begin{align}
\label{eqn:successive-error-A-norm-SD}
\|x_{k+1}-x_*\|_A^2 = \|x_{k}-x_*\|_A^2 - \frac{\left(r_k^Td_k\right)^2}{d_k^TAd_k} = \|x_{k}-x_*\|_A^2 \left( 1 - \frac{\left(r_k^Td_k\right)^2}{d_k^TAd_k\cdot r_k^TA^{-1}r_k} \right).
\end{align}
\subsection{Convergence rate of steepest descent}
In this case, $d_k=r_k$. Define
\begin{align}
Q_{sd}(x_k)\triangleq \frac{\|x_{k+1}-x_*\|_A}{\|x_{k}-x_*\|_A}.
\end{align}
A reasonable definition of the convergence factor for the steepest descent is
\begin{align}
Q_{sd}=\max_{x_k}Q_{sd}(x_k).
\end{align}
Note that by (\ref{eqn:error-A-norm-Jx}),
\begin{align}
\nonumber
\|x_{k+1}-x_*\|_A^2 & = 2J(x_{k+1})+x_*^TAx_*\\
\nonumber
& = \min_{\alpha}2J(x_k+\alpha r_k) + x_*^TAx_* \\
\nonumber
& = \min_{\alpha}(x_k+\alpha r_k-x_*)^TA(x_k+\alpha r_k-x_*).
\end{align}
Therefore, in essence, the convergence factor of the steepest descent is a max-min problem
\begin{align}
Q_{sd}^2=\max_{x_k}Q_{sd}^2(x_k) = \max_{x_k}\min_{\alpha}\frac{(x_k+\alpha r_k-x_*)^TA(x_k+\alpha r_k-x_*)}{(x_k-x_*)^TA(x_k-x_*)}.
\end{align}
As is seen in (\ref{eqn:successive-error-A-norm-SD}), the inner minimization problem has an explicit solution, and
\begin{align}
\label{prob:convergence-factor-general}
Q_{sd}^2 = \max_{x_k}\left( 1 - \frac{\left(r_k^Td_k\right)^2}{d_k^TAd_k\cdot r_k^TA^{-1}r_k} \right).
\end{align}
With $d_k=r_k$,
\begin{align}
\frac{\left(r_k^Td_k\right)^2}{d_k^TAd_k\cdot r_k^TA^{-1}r_k} = \frac{\left(r_k^Tr_k\right)^2}{r_k^TAr_k\cdot r_k^TA^{-1}r_k} =
\left( \frac{r_k^Tr_k}{r_k^TAr_k} \right)
\left( \frac{r_k^Tr_k}{r_k^TA^{-1}r_k} \right).
\end{align}
(\ref{prob:convergence-factor-general}) is reduced to
\begin{align}
\label{prob:convergence-factor-steepest-descent}
Q_{sd}^2 = \max_{r_k}\left( 1 - \left( \frac{r_k^Tr_k}{r_k^TAr_k} \right)
\left( \frac{r_k^Tr_k}{r_k^TA^{-1}r_k} \right) \right).
\end{align}
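Identity (\ref{eqn:successive-error-A-norm-SD}) with $d_k=r_k$, which underlies this reduction, can be checked directly; the sketch below (toy SPD matrix of our own choosing) compares the observed squared ratio of successive $A$-norm errors with the predicted factor.

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((6, 6))
A = M @ M.T + np.eye(6)
b = rng.standard_normal(6)
x_star = np.linalg.solve(A, b)
Ainv = np.linalg.inv(A)

def err_A_sq(x):
    e = x - x_star
    return e @ A @ e                       # squared A-norm of the error

x = np.zeros(6)
ratios, predicted = [], []
for _ in range(10):                        # steepest descent: d_k = r_k
    r = b - A @ x
    alpha = (r @ r) / (r @ A @ r)          # exact line search along r
    x_new = x + alpha * r
    ratios.append(err_A_sq(x_new) / err_A_sq(x))
    predicted.append(1 - (r @ r) ** 2 / ((r @ A @ r) * (r @ Ainv @ r)))
    x = x_new
```

The predicted factor is always in $[0,1)$, consistent with the Kantorovich-type bound derived below.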
Setting $v = \frac{r_k}{\|r_k\|_2}$, problem (\ref{prob:convergence-factor-steepest-descent}) is related to the following constrained optimization problem,
\begin{align}
\label{prob:harmonic-optimization-1}
\left\{
\begin{aligned}
\max_{v} \left( v^TAv \right) \left( v^TA^{-1}v \right)\\
s.t.\quad \|v\|_2 = 1.
\end{aligned}
\right.
\end{align}
Since $A$ is symmetric positive definite, using the spectral information of $A$, problem (\ref{prob:harmonic-optimization-1}) is equivalent to
\begin{align}
\label{prob:harmonic-optimization-2}
\left\{
\begin{aligned}
\max_{\xi_i} \left( \sum_{i=1}^{n}\lambda_i \xi_i^2 \right)\left( \sum_{i=1}^{n}\lambda_i^{-1} \xi_i^2 \right)\\
s.t.\quad \sum_{i}\xi_i^2 = 1,
\end{aligned}
\right.
\end{align}
By setting $t_i = \xi_i^2$, it is reduced to
\begin{align}
\label{prob:harmonic-optimization-3}
\left\{
\begin{aligned}
\max_{t_i} \left( \sum_{i=1}^{n}\lambda_i t_i \right)\left( \sum_{i=1}^{n}\lambda_i^{-1} t_i \right)\\
s.t.\quad \sum_{i}t_i = 1\\
t_i \ge 0.
\end{aligned}
\right.
\end{align}
Problem (\ref{prob:harmonic-optimization-3}) has an explicit solution. Before the explicit solution is derived, a lemma is needed.
\begin{lemma}
\label{lem:harmonic-variation}
Let $0< \lambda_1 < \lambda_2 < \lambda_3$ be three positive numbers and denote $c_{ij} = \left( \sqrt{\frac{\lambda_j}{\lambda_i}} - \sqrt{\frac{\lambda_i}{\lambda_j}} \right)^2$. Then there holds
\begin{align}
\label{eqn:harmonic-variation-1}
\sqrt{c_{12}} + \sqrt{c_{23}} < \sqrt{c_{13}},
\end{align}
and therefore
\begin{align}
\label{eqn:harmonic-variation-2}
c_{12} + c_{23} < c_{13}.
\end{align}
\end{lemma}
\begin{proof}
It is easy to verify that
\begin{align}
\nonumber
\frac{\lambda_2-\lambda_1}{\sqrt{\lambda_1}} \frac{\lambda_3-\lambda_2}{\sqrt{\lambda_2 \lambda_3}\left( \sqrt{\lambda_2}+\sqrt{\lambda_3} \right)}
<
\frac{\lambda_3-\lambda_2}{\sqrt{\lambda_3}} \frac{\lambda_2-\lambda_1}{\sqrt{\lambda_1 \lambda_2}\left( \sqrt{\lambda_1}+\sqrt{\lambda_2} \right)}.
\end{align}
Thus
\begin{align}
\nonumber
\frac{\lambda_2-\lambda_1}{\sqrt{\lambda_1}} \left( \frac{1}{\sqrt{\lambda_2}}-\frac{1}{\sqrt{\lambda_3}} \right)
<
\frac{\lambda_3-\lambda_2}{\sqrt{\lambda_3}} \left( \frac{1}{\sqrt{\lambda_1}}-\frac{1}{\sqrt{\lambda_2}} \right).
\end{align}
Moving terms to both sides gives
\begin{align}
\nonumber
\frac{\lambda_2-\lambda_1}{\sqrt{\lambda_1 \lambda_2}} + \frac{\lambda_3-\lambda_2}{\sqrt{\lambda_2 \lambda_3}}
<
\frac{\lambda_2-\lambda_1}{\sqrt{\lambda_1 \lambda_3}} + \frac{\lambda_3-\lambda_2}{\sqrt{\lambda_1 \lambda_3}}
=
\frac{\lambda_3-\lambda_1}{\sqrt{\lambda_1 \lambda_3}},
\end{align}
which is nothing but (\ref{eqn:harmonic-variation-1}).
\end{proof}
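A quick numerical spot-check of the lemma, with well-separated random triples of our own choosing:

```python
import numpy as np

def c(li, lj):
    # c_{ij} = (sqrt(lj/li) - sqrt(li/lj))^2
    return (np.sqrt(lj / li) - np.sqrt(li / lj)) ** 2

rng = np.random.default_rng(8)
ok = True
for _ in range(1000):
    l1 = rng.uniform(0.1, 1.0)
    l2 = l1 + rng.uniform(0.5, 1.0)
    l3 = l2 + rng.uniform(0.5, 1.0)
    ok = ok and np.sqrt(c(l1, l2)) + np.sqrt(c(l2, l3)) < np.sqrt(c(l1, l3))
    ok = ok and c(l1, l2) + c(l2, l3) < c(l1, l3)
```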
\begin{proposition}
\label{proposi:harmonic-optimization}
Let $0<\lambda_1 \le \lambda_2 \le \ldots \le \lambda_n$ be the eigenvalues of the symmetric positive definite matrix $A$. Then the maximum value of problem (\ref{prob:harmonic-optimization-3}) is
\begin{align}
\frac{1}{4}\left( \lambda_1+\lambda_n \right)\left( \frac{1}{\lambda_1}+\frac{1}{\lambda_n} \right).
\end{align}
\end{proposition}
\begin{proof}
Rewrite the product as
\begin{align}
\nonumber
& f(t_1,\ldots,t_n)\\
\nonumber
\triangleq & \left( \lambda_1 t_1+\ldots+\lambda_n t_n \right)\left( \lambda_1^{-1} t_1+\ldots+\lambda_n^{-1} t_n \right)\\
\nonumber
= & t_1^2+\ldots+t_n^2+\frac{\lambda_1}{\lambda_2}t_1 t_2+\ldots+\frac{\lambda_1}{\lambda_n}t_1 t_n+\frac{\lambda_2}{\lambda_1}t_2 t_1+\ldots+\frac{\lambda_2}{\lambda_n}t_2 t_n + \ldots\\
\nonumber
& \quad +\frac{\lambda_n}{\lambda_1}t_n t_1+\ldots+\frac{\lambda_n}{\lambda_{n-1}}t_{n} t_{n-1}\\
\nonumber
= & \left( t_1+t_2+\ldots+t_n \right)^2 + \left( \frac{\lambda_1}{\lambda_2} + \frac{\lambda_2}{\lambda_1} - 2 \right)t_1 t_2 +\ldots +
\left( \frac{\lambda_1}{\lambda_n} + \frac{\lambda_n}{\lambda_1} - 2 \right)t_1 t_n\\
\nonumber
& \quad + \left( \frac{\lambda_2}{\lambda_3} + \frac{\lambda_3}{\lambda_2} - 2 \right)t_2 t_3 +\ldots +
\left( \frac{\lambda_2}{\lambda_n} + \frac{\lambda_n}{\lambda_2} - 2 \right)t_2 t_n\\
\nonumber
& \quad + \ldots\\
\nonumber
& \quad + \left( \frac{\lambda_{n-1}}{\lambda_n} + \frac{\lambda_n}{\lambda_{n-1}} - 2 \right)t_{n-1} t_n\\
\nonumber
= & 1 + \left( \sqrt{\frac{\lambda_2}{\lambda_1}} - \sqrt{\frac{\lambda_1}{\lambda_2}} \right)^2 t_1 t_2 +\ldots +
\left( \sqrt{\frac{\lambda_n}{\lambda_1}} - \sqrt{\frac{\lambda_1}{\lambda_n}} \right)^2 t_1 t_n\\
\nonumber
& \quad + \left( \sqrt{\frac{\lambda_3}{\lambda_2}} - \sqrt{\frac{\lambda_2}{\lambda_3}} \right)^2 t_2 t_3 +\ldots +
\left( \sqrt{\frac{\lambda_n}{\lambda_2}} - \sqrt{\frac{\lambda_2}{\lambda_n}} \right)^2 t_2 t_n\\
\nonumber
& \quad + \ldots\\
\nonumber
& \quad + \left( \sqrt{\frac{\lambda_n}{\lambda_{n-1}}} - \sqrt{\frac{\lambda_{n-1}}{\lambda_n}} \right)^2 t_{n-1} t_n\\
\nonumber
= & 1 + c_{12}t_1 t_2 + \ldots + c_{n-1,n}t_{n-1} t_n\\
\label{eqn:harmonic-optimization-4}
= & 1 + \frac{1}{2}t^T C t,
\end{align}
where $t = (t_1,\ldots,t_n)^T$, and
\begin{equation}
C = \left(
\begin{array}{ccccc}
0 & c_{12} & c_{13} & \ldots & c_{1n} \\
c_{12} & 0 & c_{23} & \ldots & c_{2n} \\
c_{13} & \ddots & 0 & \ddots & \vdots \\
\vdots & & \ddots & 0 & c_{n-1,n}\\
c_{1n} & c_{2n} & \ldots & c_{n-1,n} & 0\\
\end{array}
\right), \quad
c_{ij} = \left( \sqrt{\frac{\lambda_j}{\lambda_i}} - \sqrt{\frac{\lambda_i}{\lambda_j}} \right)^2. \nonumber
\end{equation}
Therefore (\ref{prob:harmonic-optimization-3}) is reduced further to the following quadratic programming problem
\begin{align}
\label{prob:harmonic-optimization-4}
\left\{
\begin{aligned}
\max_{t} & f(t) = 1 + \frac{1}{2}t^T C t\\
\mbox{s.t.} & \quad g(t) = e^T t - 1 = 0\\
& \quad t_i \ge 0,
\end{aligned}
\right.
\end{align}
where $e = (1,\ldots,1)^T$.
If $n=2$, $f(t_1,t_2) = 1 + c_{12}t_1 t_2 = 1 + c_{12}t_1 (1-t_1)$ and the maximum is attained when $t_1 = t_2 = \frac{1}{2}$. We consider the case $n \ge 3$ in the sequel.
Assume first that $0<\lambda_1 < \lambda_2 < \ldots < \lambda_n$. Since $\lambda_i \ne \lambda_j$, we have $c_{ij}>0$. Let $(\bar{t}_1,\ldots,\bar{t}_n)$ be a maximum point. It is to be shown that $\bar{t}_i=0$, $i=2,\ldots,n-1$. By contradiction, suppose that $\bar{t}_2 > 0$. Construct a marching direction
\begin{align}
d = \left(\eta_1,-(\eta_1 + \eta_n),0,\ldots,0,\eta_n \right)^T,
\end{align}
where $\eta_1 = \frac{c_{2n}}{c_{1n}} > 0$ and $\eta_n = \frac{c_{12}}{c_{1n}} > 0$. Since $\bar{t}_2 > 0$, $t = \bar{t} + \alpha d$ will be a feasible point if $\alpha > 0$ is sufficiently small. For this direction, we have $Cd > 0$ component-wise. In fact,
\begin{equation}
\left(
\begin{array}{ccccc}
0 & c_{12} & c_{13} & \ldots & c_{1n} \\
c_{12} & 0 & c_{23} & \ldots & c_{2n} \\
c_{13} & \ddots & 0 & \ddots & \vdots \\
\vdots & & \ddots & 0 & c_{n-1,n}\\
c_{1n} & c_{2n} & \ldots & c_{n-1,n} & 0\\
\end{array}
\right)
\left(
\begin{array}{c}
\eta_1 \\
-(\eta_1+\eta_n) \\
0 \\
\vdots\\
0 \\
\eta_n \\
\end{array}
\right)
=
\left(
\begin{array}{c}
c_{1n}\eta_n - c_{12}(\eta_1+\eta_n) \\
c_{12}\eta_1 + c_{2n}\eta_n \\
c_{13}\eta_1 - c_{23}(\eta_1+\eta_n) + c_{3n}\eta_n \\
\vdots\\
c_{1,n-1}\eta_1 - c_{2,n-1}(\eta_1+\eta_n) + c_{n-1,n}\eta_n \\
c_{1n}\eta_1 - c_{2n}(\eta_1+\eta_n) \\
\end{array}
\right). \nonumber
\end{equation}
By Lemma \ref{lem:harmonic-variation},
\begin{align}
\nonumber
\eta_1 + \eta_n = \frac{c_{2n}}{c_{1n}} + \frac{c_{12}}{c_{1n}} < 1.
\end{align}
Therefore, for the first component
\begin{align}
\nonumber
c_{1n}\eta_n - c_{12}(\eta_1+\eta_n) = c_{12} - c_{12}(\eta_1+\eta_n) > 0.
\end{align}
Similarly for the last component
\begin{align}
\nonumber
c_{1n}\eta_1 - c_{2n}(\eta_1+\eta_n) > 0.
\end{align}
For the $i$-th component, $i=3,\ldots,n-1$, by Lemma \ref{lem:harmonic-variation} again,
\begin{align}
\nonumber
& c_{1i}\eta_1 - c_{2i}(\eta_1+\eta_n) + c_{in}\eta_n\\
\nonumber
= & \frac{1}{c_{1n}} \left( c_{1i}c_{2n} - c_{2i}(c_{2n}+c_{12}) + c_{in}c_{12} \right)\\
\nonumber
= & \frac{1}{c_{1n}} \left( (c_{1i} - c_{2i})c_{2n} - c_{2i}c_{12} + c_{in}c_{12} \right)\\
\nonumber
> & \frac{1}{c_{1n}} \left( c_{12}c_{2n} - c_{2i}c_{12} + c_{in}c_{12} \right)\\
\nonumber
> & \frac{1}{c_{1n}} \left( c_{in}c_{12} + c_{in}c_{12} \right)\\
\nonumber
> & 0.
\end{align}
Thus $Cd>0$ component-wise. As a result,
\begin{align}
\label{eqn:positive-direction-derivative}
\bar{t}^T Cd \ge \bar{t}_2 \left( c_{12}\eta_1 + c_{2n}\eta_n \right) > 0.
\end{align}
Along this direction $d$, if $\alpha > 0$ is sufficiently small, $t = \bar{t} + \alpha d$ is feasible and
\begin{align}
\nonumber
f(\bar{t}+\alpha d) & = 1 + \frac{1}{2}\left( \bar{t}+\alpha d \right)^T C \left( \bar{t}+\alpha d \right)\\
\nonumber
& = 1 + \frac{1}{2}\left( \bar{t}^T C \bar{t} + 2\bar{t}^T Cd \alpha + d^TCd \alpha^2 \right)\\
\label{eqn:contradiction}
& > f(\bar{t}),
\end{align}
which is a contradiction to the assumption that $\bar{t}$ is a maximum point. By similar arguments, it can be proved that $\bar{t}_i = 0$, $i = 3,\ldots,n-1$. Therefore $f(\bar{t}) = 1 + c_{1n} \bar{t}_1 \bar{t}_n$. As in the case $n=2$, the maximum is attained when $\bar{t}_1 = \bar{t}_n = \frac{1}{2}$, and the maximum is just
\begin{align}
\nonumber
& \left( \frac{1}{2}\lambda_1 + 0\cdot \lambda_2 + \cdots + 0\cdot \lambda_{n-1} + \frac{1}{2}\lambda_n\right)
\left( \frac{1}{2}\frac{1}{\lambda_1} + 0\cdot \frac{1}{\lambda_2} + \cdots + 0\cdot \frac{1}{\lambda_{n-1}} + \frac{1}{2}\frac{1}{\lambda_n} \right)\\
= & \frac{1}{4}\left( \lambda_1+\lambda_n \right)\left( \frac{1}{\lambda_1}+\frac{1}{\lambda_n} \right).
\end{align}
In terms of the original problem (\ref{prob:harmonic-optimization-1}), $\bar{t}_1 = \bar{t}_n = \frac{1}{2}$ corresponds to
\begin{align}
v = \frac{1}{\sqrt{2}}\varphi_1 + \frac{1}{\sqrt{2}}\varphi_n.
\end{align}
If there are repeated eigenvalues, suppose the distinct eigenvalues are listed as $\tilde{\lambda}_1 < \ldots < \tilde{\lambda}_k$. The $t_i$'s can be separated into groups corresponding to the distinct eigenvalues. For example, if $\tilde{\lambda}_1=\lambda_1=\lambda_2<\tilde{\lambda}_2=\lambda_3=\lambda_4<\ldots$, then $t_1,t_2$ can be combined into a new variable $\tilde{t}_1=t_1+t_2$ and $t_3,t_4$ into $\tilde{t}_2=t_3+t_4$, and the above argument applies.
\end{proof}
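The maximum value in Proposition \ref{proposi:harmonic-optimization} can also be checked numerically. A minimal sketch, with illustrative eigenvalues and a diagonal test matrix (so the eigenvectors are the standard basis vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([1.0, 3.0, 7.0, 10.0])
A = np.diag(lam)                        # eigenvectors are the standard basis
Ainv = np.diag(1.0 / lam)
bound = 0.25 * (lam[0] + lam[-1]) * (1.0 / lam[0] + 1.0 / lam[-1])

# The claimed maximizer v = (phi_1 + phi_n)/sqrt(2) attains the bound exactly.
v = np.zeros(4)
v[0] = v[-1] = 1.0 / np.sqrt(2.0)
val = (v @ A @ v) * (v @ Ainv @ v)
assert abs(val - bound) < 1e-12

# Random unit vectors never exceed the bound.
for _ in range(10000):
    w = rng.standard_normal(4)
    w /= np.linalg.norm(w)
    assert (w @ A @ w) * (w @ Ainv @ w) <= bound + 1e-12
```

With the eigenvalues above the bound evaluates to $\frac{1}{4}\cdot 11 \cdot 1.1 = 3.025$, and no random direction exceeds it.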
By Proposition \ref{proposi:harmonic-optimization}, the convergence factor of the steepest descent is
\begin{align}
Q_{sd} = \sqrt{1-\frac{4\lambda_1 \lambda_n}{\left( \lambda_1+\lambda_n \right)^2}}=\frac{\lambda_n - \lambda_1}{\lambda_n + \lambda_1},
\end{align}
which is the same as the convergence factor derived via Chebyshev polynomials in textbooks. In fact, textbooks usually take the following factor $\bar{Q}_{sd}$ as the convergence factor,
\begin{align}
\bar{Q}_{sd}^2 = \min_{\alpha}\max_{x_k}\frac{(x_k+\alpha r_k-x_*)^TA(x_k+\alpha r_k-x_*)}{(x_k-x_*)^TA(x_k-x_*)}.
\end{align}
Note that for a function of two arguments $S(\alpha,x)$,
\begin{align}
\max_{x}\min_{\alpha}S(\alpha,x) \le \min_{\alpha}\max_{x}S(\alpha,x).
\end{align}
Therefore $\bar{Q}_{sd}$ is an upper bound of $Q_{sd}$, i.e., $Q_{sd} \le \bar{Q}_{sd}$. It turns out that the minimum value of $\bar{Q}_{sd}$ is also $\frac{\lambda_n - \lambda_1}{\lambda_n + \lambda_1}$, that is, $Q_{sd} = \bar{Q}_{sd}$.
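The worst case is easy to exhibit numerically. In the following sketch (a $2\times 2$ diagonal system with illustrative eigenvalues $\lambda_1=1$, $\lambda_n=10$), the initial error is chosen so that the residual points along $\varphi_1+\varphi_n$, the maximizing direction from the proposition; each steepest descent step then contracts the $A$-norm of the error by exactly $Q_{sd}=(\lambda_n-\lambda_1)/(\lambda_n+\lambda_1)=9/11$.

```python
import numpy as np

l1, ln = 1.0, 10.0
A = np.diag([l1, ln])
Q = (ln - l1) / (ln + l1)               # worst-case factor Q_sd = 9/11
a_norm = lambda e: np.sqrt(e @ A @ e)

# Take b = 0, so x_* = 0 and r = -A x; the start x0 = (1/l1, 1/ln)
# makes r proportional to phi_1 + phi_n (equal weights t_1 = t_n = 1/2).
x = np.array([1.0 / l1, 1.0 / ln])
for _ in range(5):
    r = -A @ x
    alpha = (r @ r) / (r @ A @ r)       # exact line search
    x_new = x + alpha * r
    assert abs(a_norm(x_new) / a_norm(x) - Q) < 1e-12
    x = x_new
```

The iterates zigzag self-similarly, so the two-term ratio equals $Q_{sd}$ at every step, showing the bound is sharp.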
\subsection{Convergence rate of CG}
For CG, the marching direction is the conjugate direction, i.e. $d_k = p_k$, and the successive error $A$-norms satisfy
\begin{align}
\label{eqn:successive-error-A-norm-CG}
\|x_{k+1}-x_*\|_A^2 = \|x_{k}-x_*\|_A^2 - \frac{\left(r_k^Tp_k\right)^2}{p_k^TAp_k} = \|x_{k}-x_*\|_A^2 \left( 1 - \frac{\left(r_k^Tp_k\right)^2}{p_k^TAp_k\cdot r_k^TA^{-1}r_k} \right).
\end{align}
Note that for CG, $x_{k+1}$ is not determined by $x_k$ only. Rather, it is determined by both $x_k$ and $p_k$, and the convergence factor depends on $r_k$ and $p_k$. Therefore define
\begin{align}
Q_{cg}(r_k,p_k)\triangleq \frac{\|x_{k+1}-x_*\|_A}{\|x_{k}-x_*\|_A}.
\end{align}
The convergence factor of two-term ratio for CG is defined as
\begin{align}
Q_{cg}=\sup_{r_k,p_k}Q_{cg}(r_k,p_k).
\end{align}
By (\ref{eqn:successive-error-A-norm-CG}), the following quantity should be maximized,
\begin{align}
\frac{p_k^TAp_k\cdot r_k^TA^{-1}r_k}{\left(r_k^Tp_k\right)^2}.
\end{align}
By the properties of the iterates of CG, $r_k^Tp_k = r_k^Tr_k$, and $(p_k-r_k)^TAp_k = 0$. Therefore
\begin{align}
\nonumber
p_k^TAp_k & = r_k^TAr_k+2(p_k-r_k)^TAr_k+(p_k-r_k)^TA(p_k-r_k)\\
\nonumber
& = r_k^TAr_k-2(p_k-r_k)^TA(p_k-r_k)+(p_k-r_k)^TA(p_k-r_k)\\
\nonumber
& = r_k^TAr_k-(p_k-r_k)^TA(p_k-r_k).
\end{align}
Replacing $r_k^Tp_k$ by $r_k^Tr_k$ gives
\begin{align}
\nonumber
\frac{p_k^TAp_k\cdot r_k^TA^{-1}r_k}{\left(r_k^Tp_k\right)^2} & =
\left(\frac{r_k^TAr_k-(p_k-r_k)^TA(p_k-r_k)}{r_k^Tr_k}\right)\left(\frac{r_k^TA^{-1}r_k}{r_k^Tr_k}\right)\\
\nonumber
& \le \left(\frac{r_k^TAr_k}{r_k^Tr_k}\right)\left(\frac{r_k^TA^{-1}r_k}{r_k^Tr_k}\right)\\
\nonumber
& \le \frac{1}{4}\left( \lambda_1+\lambda_n \right)\left( \frac{1}{\lambda_1}+\frac{1}{\lambda_n} \right).
\end{align}
Note that the supremum of $Q_{cg}(r_k,p_k)$ may not be attained, but $(p_k-r_k)^TA(p_k-r_k)$ can be made arbitrarily small. So
\begin{align}
Q_{cg}=\sup_{r_k,p_k}Q_{cg}(r_k,p_k)=\sqrt{1-\frac{4\lambda_1 \lambda_n}{\left( \lambda_1+\lambda_n \right)^2}}=\frac{\lambda_n - \lambda_1}{\lambda_n + \lambda_1}.
\end{align}
In this sense the convergence factor $Q_{cg}$ of two-term ratio for CG is the same as $Q_{sd}$ for the steepest descent.
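The bound derived above, namely that every individual two-term $A$-norm ratio of CG is at most $Q_{cg}=(\lambda_n-\lambda_1)/(\lambda_n+\lambda_1)$, can be verified on a random test problem. A minimal sketch (the matrix, seed, and tolerances are illustrative; the update uses the standard $\beta_k = r_{k+1}^Tr_{k+1}/r_k^Tr_k$ form, equivalent in exact arithmetic to the form used above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)             # symmetric positive definite
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)
lam = np.linalg.eigvalsh(A)
Q = (lam[-1] - lam[0]) / (lam[-1] + lam[0])

a_norm = lambda e: np.sqrt(e @ A @ e)
x = np.zeros(n)
r = b - A @ x
p = r.copy()
rs = r @ r
for _ in range(500):
    if a_norm(x - x_star) <= 1e-6:
        break
    Ap = A @ p
    alpha = rs / (p @ Ap)               # r^T p = r^T r for CG
    x_new = x + alpha * p
    ratio = a_norm(x_new - x_star) / a_norm(x - x_star)
    assert ratio <= Q + 1e-8            # each two-term ratio is bounded by Q_cg
    x = x_new
    r = r - alpha * Ap
    rs_new = r @ r
    p = r + (rs_new / rs) * p
    rs = rs_new
```

Every step satisfies the bound, even though, as discussed next, individual ratios can exceed the Chebyshev average factor.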
At first glance, this convergence factor $Q_{cg}$ may seem too large for CG, because in textbooks the convergence estimate is
\begin{align}
\label{eqn:convergence-estimate-CG}
\|x_{k}-x_*\|_A \le 2\left( \frac{\sqrt{\lambda_n}-\sqrt{\lambda_1}}{\sqrt{\lambda_n}+\sqrt{\lambda_1}} \right)^k \|x_{0}-x_*\|_A.
\end{align}
However, note that the quantity
$\frac{\sqrt{\lambda_n}-\sqrt{\lambda_1}}{\sqrt{\lambda_n}+\sqrt{\lambda_1}}$
should be considered as the average convergence factor of the $k$-term ratio, and individual two-term ratios may exceed this average convergence factor, as the numerical experiment shows; see Table \ref{tab1:ratio}. Here the matrix $A$ is randomly chosen, ``ratio-2'' represents $\|x_{k}-x_*\|_A/\|x_{k-1}-x_*\|_A$ and ``ratio-k'' represents $\sqrt[k]{\|x_{k}-x_*\|_A/\|x_{0}-x_*\|_A}$. For more theory on the convergence rate of CG, see \cite{convergencerate-VanderSluis,convergencerate-Sleijpen,convergencerate-Greenbaum}.
\begin{table}[!h]
\renewcommand{\arraystretch}{1.10}
\tabcolsep 0pt \caption{Two-term ratios and k-term mean ratios.}
\vspace*{-12pt} \label{tab1:ratio}
\begin{center}
\def1.0\textwidth{0.8\textwidth}
{\rule{1.0\textwidth}{1pt}}
\begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}cccccccc}
$k$ & $\frac{\lambda_n-\lambda_1}{\lambda_n+\lambda_1}$ & ratio-2 & ratio-k & $\frac{\sqrt{\lambda_n}-\sqrt{\lambda_1}}{\sqrt{\lambda_n}+\sqrt{\lambda_1}}$ &
$k$ & ratio-2 & ratio-k \\
\cline{1-5}\cline{6-8}
& & & & & & &\\
1 & & 0.996 & 0.996 & & 14 & 0.964 & 0.958\\
2 & & 0.998 & 0.997 & & 15 & 0.906 & 0.954\\
3 & & 0.996 & 0.997 & & 16 & 0.747 & 0.940\\
4 & & 0.989 & 0.995 & & 17 & 0.810 & 0.932\\
5 & & 0.992 & 0.994 & & 18 & 0.746 & 0.920\\
6 & & 0.989 & 0.993 & & 19 & 0.854 & 0.917\\
7 & 0.999 & 0.985 & 0.992 & 0.969 & 20 & 0.996 & 0.921\\
8 & & 0.964 & 0.989 & & 21 & 0.001 & 0.677\\
9 & & 0.946 & 0.984 & & 22 & 0.000 & 0.474\\
10 & & 0.908 & 0.976 & & 23 & 0.100 & 0.443\\
11 & & 0.940 & 0.973 & & 24 & 0.028 & 0.395\\
12 & & 0.845 & 0.961 & & 25 & 0.055 & 0.365\\
13 & & 0.913 & 0.958 &
\end{tabular*}
{\rule{1.0\textwidth}{1pt}}
\end{center}
\end{table}
\section{CG in Hilbert space}
One of the origins of equation (\ref{eqn:Axb}) is the discretization of second-order self-adjoint differential equations, for example, the Poisson equation with homogeneous Dirichlet boundary condition,
\begin{align}
\label{eqn:poisson-problem}
\left\{
\begin{aligned}
-\Delta u + cu= f,\quad \Omega\\
u = 0,\quad \partial\Omega.
\end{aligned}
\right.
\end{align}
A natural question is whether the CG method can be applied to the self-adjoint operator equation directly, before its discretization. The answer is yes, see \cite{CG-hilbert-Hayes,CG-hilbert-Herzog-Sachs}. But the function spaces and operators involved should be chosen thoughtfully. Problem (\ref{eqn:poisson-problem}) is used to demonstrate the construction of the CG method in function space, and to illustrate the relationship between the finite-dimensional CG and the infinite-dimensional CG.
The above elliptic problem can be rewritten as an operator equation in a certain sense
\begin{align}
\mathscr{A} u = f.
\end{align}
Recall that in the CG algorithm, operations such as $Ax$ and $A^2x$ will be implicitly involved. If CG is generalized to the operator equation (\ref{eqn:poisson-operator}), analogously $\mathscr{A}u$ and $\mathscr{A}^2u$ should be well defined. Therefore the domain space and the range space of $\mathscr{A}$ should be the same. If $\mathscr{A}$ is chosen as $-\Delta + c$, and the domain space of $\mathscr{A}$ is chosen as $C^2_0(\Omega)$, then $\mathscr{A}u$ is well defined but $\mathscr{A}^2u$ may not be; furthermore, $\mathscr{A}u$ may not lie in $C^2_0(\Omega)$. If the domain space of $\mathscr{A}$ is chosen as $L^2(\Omega)$, then $\mathscr{A}$ is not defined on the whole domain space.
In the PDE community, problem (\ref{eqn:poisson-problem}) is usually understood in the weak sense, that is, find $u\in H_0^1(\Omega)$, such that
\begin{align}
\label{eqn:poisson-weak-sense}
\int_{\Omega}\nabla u \nabla v dx + c\int_{\Omega}u v dx = \int_{\Omega}fvdx,\quad \forall v\in H_0^1(\Omega).
\end{align}
In this sense, problem (\ref{eqn:poisson-problem}) can be firstly considered as
\begin{align}
\label{eqn:poisson-weak-operator}
\mathscr{L} u = f,\quad \mathscr{L}:X\to X^*,
\end{align}
where $X=H_0^1(\Omega)$, with inner product
\begin{align}
(u,v)_{X}=\int_{\Omega}\nabla u \nabla v dx + \int_{\Omega}u v dx.
\end{align}
In order to generalize the CG method, the image of the operator $\mathscr{L}$ should be pulled back to the domain space $X$. The Riesz isomorphism does this job. Then, problem (\ref{eqn:poisson-problem}) is rewritten as the following operator equation
\begin{align}
\label{eqn:poisson-operator}
\mathscr{A}u\triangleq\mathscr{RL} u = \mathscr{R}f,
\end{align}
where $\mathscr{R}$ is the Riesz isomorphism $\mathscr{R}:X^*\to X$, i.e., $(\mathscr{R}f,v)_{X}=\langle f, v \rangle_{X^*,X}$.
From another point of view, the solution $u$ of problem (\ref{eqn:poisson-weak-sense}) can be considered as the minimizer of the following quadratic functional
\begin{align}
\mathscr{J}(u)=\frac{1}{2}\langle \mathscr{L} u, u \rangle_{X^*,X} - \langle f, u \rangle_{X^*,X},\quad \mathscr{J}:X\to \mathbb{R}
\end{align}
and $\mathscr{L} u - f = 0$ corresponds to $\mathscr{J}'(u)=0$, just as $Ax - b = 0$ corresponds to $J'(u)=0$ in Section \ref{set:connections}. Here $\mathscr{J}'(u)\in X^*$ is the Fr\'{e}chet derivative of $\mathscr{J}$ at $u$. Consider the steepest descent direction of $\mathscr{J}$ at $u$. Note that the steepest descent direction of a function or functional is the negative gradient of that function or functional. It is important to distinguish the two notions of Fr\'{e}chet derivative and gradient of a functional. For this example, the Fr\'{e}chet derivative of $\mathscr{J}$ at $u$ is $\mathscr{J}'(u)\in X^*$ and the gradient is $\nabla\mathscr{J}(u)\in X$. They are related by $\nabla\mathscr{J}(u)=\mathscr{R}\mathscr{J}'(u)$. In the finite-dimensional Euclidean space $\mathbb{R}^n$, $\nabla J(u)=(J'(u))^T$; that is, if $\mathbb{R}^n$ is regarded as a space of column vectors, the Fr\'{e}chet derivative of $J$ at $u$ is identified with a row vector, the gradient of $J$ is a column vector, and the Riesz isomorphism is the operation of transposition.
After clarifying the setting and the notions, the CG method for $\mathscr{L} u - f = 0$ in function space can be derived. Suppose that $u_0$ is an initial guess. The residual is $r_0=f-\mathscr{L} u_0=-\mathscr{J}'(u_0)\in X^*$. The first search direction is taken as the steepest descent direction $p_0=-\nabla \mathscr{J}(u_0)=\mathscr{R}r_0\in X$. Note that $\langle \mathscr{L}u, p \rangle_{X^*,X}=\langle \mathscr{L}p, u \rangle_{X^*,X}$. Minimizing $\mathscr{J}(u_0+\alpha p_0)$ with respect to $\alpha$ gives
\begin{align}
\alpha_0 = \frac{\langle f-\mathscr{L}u_0, p_0 \rangle_{X^*,X}}{\langle \mathscr{L}p_0, p_0 \rangle_{X^*,X}} = \frac{\langle r_0, p_0 \rangle_{X^*,X}}{\langle \mathscr{L}p_0, p_0 \rangle_{X^*,X}}.
\end{align}
Then
\begin{align}
u_1 = u_0 + \alpha_0 p_0,\\
r_1 = r_0 - \alpha_0 \mathscr{L}p_0.
\end{align}
Note that $r_1\in X^*$. With $r_1$ and $p_0$ at hand, a direction $p_1$ conjugate to $p_0$ is constructed as
\begin{align}
p_1 = \mathscr{R}r_1+\beta_0 p_0,\quad \beta_0=-\frac{\langle \mathscr{L}p_0, \mathscr{R}r_1 \rangle_{X^*,X}}{\langle \mathscr{L}p_0, p_0 \rangle_{X^*,X}}.
\end{align}
Repeating these steps yields the CG method in function space:
\begin{align}
\label{eqn:cg-infinite-pk}
p_k & = \mathscr{R}r_k+\beta_{k-1} p_{k-1},\quad \beta_{k-1}=-\frac{\langle \mathscr{L}p_{k-1}, \mathscr{R}r_k \rangle_{X^*,X}}{\langle \mathscr{L}p_{k-1}, p_{k-1} \rangle_{X^*,X}},\\
\label{eqn:cg-infinite-uk}
u_{k+1} & = u_k + \alpha_k p_k,\quad \alpha_k = \frac{\langle r_k, p_k \rangle_{X^*,X}}{\langle \mathscr{L}p_k, p_k \rangle_{X^*,X}},\\
\label{eqn:cg-infinite-rk}
r_{k+1} & = r_k - \alpha_k \mathscr{L}p_k.
\end{align}
Another natural question arises. If (\ref{eqn:poisson-weak-operator}) is discretized into the algebraic equations $AU = F$, and the finite-dimensional CG is applied to $AU = F$, then is there any connection between the infinite-dimensional CG iterates for (\ref{eqn:poisson-weak-operator}) and the finite-dimensional CG iterates for $AU = F$?
To be specific, consider the finite element discretization for (\ref{eqn:poisson-weak-sense}): Find $u_h = \sum\limits_{j=1}^{n}\eta_j\varphi_j\in X_h$, s.t.,
\begin{align}
\label{eqn:poisson-fem}
\int_{\Omega}\nabla u_h \nabla v_h dx + c\int_{\Omega}u_h v_h dx = \int_{\Omega}fv_h dx,\quad \forall v_h\in X_h,
\end{align}
or, in shorthand notation,
\begin{align}
a( u_h,v_h ) = (f,v_h),\quad \forall v_h\in X_h,
\end{align}
which can be considered as a discretized analog of the operator equation $\mathscr{L} u = f$ in (\ref{eqn:poisson-weak-operator})
\begin{align}
\label{eqn:poisson-weak-operator-fem}
\mathscr{L}_h u_h = f_h,\quad \mathscr{L}_h:X_h\to X_h^*,
\end{align}
where $f_h$ is the projection of $f$ in $X_h^*$. Therefore (\ref{eqn:poisson-fem}) can be rewritten as
\begin{align}
\langle \mathscr{L}_h u_h, v_h \rangle_{X_h^*,X_h} = \langle f_h, v_h \rangle_{X_h^*,X_h}, \quad \forall v_h\in X_h.
\end{align}
Here $X_h$ is a finite element subspace of $X$ spanned by piecewise linear basis functions $\varphi_1,\ldots,\varphi_n$, with inner product inherited from $X$,
\begin{align}
\left( u_h,v_h \right)_{X_h} = \int_{\Omega}\nabla u_h \nabla v_h dx + \int_{\Omega}u_h v_h dx.
\end{align}
In the following, $\langle \cdot, \cdot \rangle = \langle \cdot, \cdot \rangle_{X_h^*,X_h}$ will denote the dual pair and $\left( \cdot, \cdot \right)_{X_h}$ will denote the inner product.
It is worthwhile to note that usually it is not (\ref{eqn:poisson-weak-operator-fem}) that is solved by CG; rather, it is the matrix representation of (\ref{eqn:poisson-weak-operator-fem}) that is solved by CG. Such a representation can be derived by choosing $v_h=\varphi_j$ in (\ref{eqn:poisson-fem}), and is denoted as $AU = F$. Here $U=(\eta_1,\ldots,\eta_n)^T$ is the vector of coefficients of the finite element basis functions $\varphi_i$, and the elements $a_{ij}$ of the matrix $A$ and $F_i$ of $F$ are
\begin{align}
\label{eqn:aij}
a_{ij} & = \int_{\Omega}\nabla \varphi_j \nabla \varphi_i dx + c\int_{\Omega}\varphi_j \varphi_i dx,\\
F_i & = \int_{\Omega}f\varphi_i dx.
\end{align}
In the finite element method, usually $A$ is formulated as $A=K+cM$, where $K$ is the stiffness matrix and $M$ is the mass matrix. In this example, the matrix $A$ is symmetric positive definite. Define $Q_1:\mathbb{R}^n\to X_h$ and $Q_2:X_h^*\to \mathbb{R}^n$ as
\begin{align}
Q_1
\left(
\begin{array}{c}
\eta_1 \\
\eta_2 \\
\vdots \\
\eta_n
\end{array}
\right)
= \sum_{j=1}^{n}\eta_j \varphi_j,
\quad
Q_2 r_h
=
\left(
\begin{array}{c}
\langle r_h,\varphi_1 \rangle \\
\langle r_h,\varphi_2 \rangle \\
\vdots \\
\langle r_h,\varphi_n \rangle
\end{array}
\right).
\end{align}
Note that $Q_2 = Q_1^*$ is the adjoint of $Q_1$, i.e.,
\begin{align}
\langle r_h,Q_1 U \rangle = \langle Q_1^*r_h,U \rangle_{\mathbb{R}^n} = \langle Q_2r_h,U \rangle_{\mathbb{R}^n},\quad \forall U\in \mathbb{R}^n, r_h\in X_h^*.
\end{align}
In the language of mappings, with the basis $\varphi_1,\ldots,\varphi_n$, the representation of $\mathscr{L}_h$ is $A=Q_2\mathscr{L}_h Q_1=Q_1^*\mathscr{L}_h Q_1$, with $Q_1U=u_h$ and $Q_2f_h=F$. That is, the following diagram commutes.
\begin{align}
\xymatrix{
X_h \ar[rr]^{\mathscr{L}_h} & & X_h^* \ar[d]^{Q_1^*} \\
\mathbb{R}^n\ar[u]^{Q_1}\ar[rr]^{A} & & \mathbb{R}^n}
\end{align}
Let $w_h = \sum\limits_{i=1}^{n}w_i\varphi_i$ and $v_h = \sum\limits_{i=1}^{n}v_i\varphi_i$; then the dual pair $\langle \mathscr{L}_h w_h,v_h \rangle$ can be represented as
\begin{align}
w^T(K+cM)v = w^TAv,
\end{align}
where $w = (w_1,\ldots,w_n)^T,v = (v_1,\ldots,v_n)^T$.
In addition, the discrete analog $\mathscr{R}_h$ of the Riesz isomorphism $\mathscr{R}$ should be introduced, $\mathscr{R}_h:X_h^*\to X_h$. Let $r_h\in X_h^*$ and $w_h=\mathscr{R}_h r_h \in X_h$. The representation of $\mathscr{R}_h$ can be derived as follows. Suppose $w_h = \sum\limits_{i=1}^{n}w_i\varphi_i$ and denote $r_i = \langle r_h,\varphi_i \rangle$. By the definition of the Riesz isomorphism,
\begin{align}
r_i = \langle r_h,\varphi_i \rangle = \left( \mathscr{R}_hr_h,\varphi_i \right)_{X_h} = \left( w_h,\varphi_i \right)_{X_h} = \sum_{j=1}^{n}\left( \varphi_j,\varphi_i \right)_{X_h}w_j.
\end{align}
In matrix-vector format,
$$
r = (K+M)w.
$$
Thus
$$
w = Rr,
$$
where $R=(K+M)^{-1}$. So $R$ is a representation of $\mathscr{R}_h$. Note that if $v_h = \sum\limits_{i=1}^{n}v_i\varphi_i$, then the inner product $\left( w_h,v_h \right)_{X_h}$ can be represented as
\begin{align}
w^T(K+M)v = w^TR^{-1}v,
\end{align}
and the dual pair $\langle r_h,v_h \rangle$ can be represented as $r^Tv$.
With the above preparation, we are ready to compare the CG iterates for both $AU=F$ and $\mathscr{L}_h u_h = f_h$. In the following, the CG iterates for $AU=F$ will be denoted as $U_k, r_k, p_k, \alpha_k, \beta_k$, the CG iterates for $\mathscr{L}_h u_h = f_h$ will be denoted as $u_h^k, r_h^k, p_h^k, \alpha^k, \beta^k$, and the representations of $u_h^k, r_h^k, p_h^k$ as $U^k, r^k, p^k$. The comparisons are summarized in Table \ref{tab1:comparison-iterates-of-cg}. Taking a closer look at the formulas for $p^k, \alpha^k, \beta^k$, we find that the represented CG iteration for $\mathscr{L}_h u_h = f_h$ is a preconditioned conjugate gradient method (PCG) for $AU=F$ with the preconditioner $R$, which is the discretized Riesz isomorphism. The CG for the discretized operator equation $\mathscr{L}_h u_h = f_h$ can be viewed as an inexact CG in function space applied to the operator equation $\mathscr{L} u = f$, just like the relationship between the finite-dimensional Newton method and the infinite-dimensional Newton method for nonlinear differential equations, see \cite{newtonmethod-Deuflhard}. In this sense, any PCG for $AU=F$ can be considered as an inexact infinite-dimensional CG for the operator equation $\mathscr{L} u = f$.
\begin{table}[!h]
\renewcommand{\arraystretch}{1.10}
\tabcolsep 0pt \caption{Comparison of the CG iterates.}
\vspace*{-12pt} \label{tab1:comparison-iterates-of-cg}
\begin{center}
\def1.0\textwidth{1.0\textwidth}
{\rule{1.0\textwidth}{1pt}}
\begin{tabular*}{1.0\textwidth}{@{\extracolsep{\fill}}ccccc}
$AU=F$ & & $\mathscr{L}_h u_h = f_h$ & & Representation of iterates\\
\cline{1-5}
\\
$U_0$ & & $u_h^0=Q_1U_0$ & & $u^0=U_0$
\\
\\
$r_0 = F-AU_0 = Q_2 r_h^0$ & & $r_{h}^0=f_h-\mathscr{L}_{h} u_{h}^0$ & &$r^0 = Q_2 r_h^0 = r_0$
\\
\\$p_0 = r_0$ & & $p_{h}^0=\mathscr{R}_h r_h^0$ & & $p^0=R r^0$
\\
\\
$\alpha_0=\frac{r_0^Tp_0}{p_0^TAp_0}$ & & $\alpha^0=\frac{\langle r_h^0, p_h^{0} \rangle}{\langle \mathscr{L}_h p_h^{0},p_h^{0} \rangle}$ & & $\alpha^0=\frac{(r^0)^T p^{0} }{(p^0)^TAp^0}$
\\
\\
$U_{1}=U_0+\alpha_0 p_0$ & & $u_h^{1}=u_h^0+\alpha^0 p_h^0$ & & $u^{1}=u^0+\alpha^0 p^0$
\\
\\
$r_{1}=r_0-\alpha_0 A p_0$ & & $r_h^{1}=r_h^0-\alpha^0 \mathscr{L}_h p_h^0$ & & $r^{1}=r^0-\alpha^0 A p^0$
\\
\\
$\beta_{k-1}=-\frac{p_{k-1}^TAr_k}{p_{k-1}^TAp_{k-1}}$ & & $\beta^{k-1}=-\frac{\langle \mathscr{L}_h p_h^{k-1},\mathscr{R}_h r_h^{k} \rangle}{\langle \mathscr{L}_h p_h^{k-1},p_h^{k-1} \rangle}$ & & $\beta^{k-1}=-\frac{(p^{k-1})^T A (Rr^{k}) }{(p^{k-1})^T A p^{k-1}}$
\\
\\
$p_k=r_k+\beta_{k-1} p_{k-1}$ & & $p_{h}^k=\mathscr{R}_{h}r_{h}^k+\beta^{k-1} p_{h}^{k-1}$ & & $p^k=R r^k+\beta^{k-1} p^{k-1}$
\\
\\
$\alpha_k=\frac{r_k^Tp_k}{p_k^TAp_k}$ & & $\alpha^k=\frac{\langle r_h^k, p_h^{k} \rangle}{\langle \mathscr{L}_h p_h^{k},p_h^{k} \rangle}$ & & $\alpha^k=\frac{(Rr^k)^Tp^k}{(p^k)^TAp^k}$
\\
\\
$U_{k+1}=U_k+\alpha_k p_k$ & & $u_h^{k+1}=u_h^k+\alpha^k p_h^k$ & & $U^{k+1}=U^k+\alpha^k p^k$
\\
\\
$r_{k+1}=r_k-\alpha_kAp_k$ & & $r_h^{k+1}=r_h^k-\alpha^k \mathscr{L}_h p_h^k$ & & $r^{k+1}=r^k-\alpha^k A p^k$
\\
\\
\end{tabular*}
{\rule{1.0\textwidth}{1pt}}
\end{center}
\end{table}
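The equivalence can be illustrated with a one-dimensional analogue of (\ref{eqn:poisson-problem}): piecewise-linear finite elements for $-u''+cu=f$ on $(0,1)$ with $u(0)=u(1)=0$, solved by PCG with the discretized Riesz map $R=(K+M)^{-1}$ as preconditioner. This is a minimal sketch; the mesh size, $f=1$, and $c=10$ are illustrative assumptions, and the loop is standard PCG (equivalent in exact arithmetic to the third column of the table).

```python
import numpy as np

n, c = 50, 10.0
h = 1.0 / (n + 1)
off = np.ones(n - 1)
K = (2.0 * np.eye(n) - np.diag(off, 1) - np.diag(off, -1)) / h        # stiffness
M = (4.0 * np.eye(n) + np.diag(off, 1) + np.diag(off, -1)) * h / 6.0  # mass
A = K + c * M
F = h * np.ones(n)                  # load vector for f = 1
Rinv = K + M                        # applying R means solving (K + M) z = r

U = np.zeros(n)
r = F - A @ U
z = np.linalg.solve(Rinv, r)        # z = R r, the Riesz representative
p = z.copy()
rz = r @ z
for _ in range(200):
    Ap = A @ p
    alpha = rz / (p @ Ap)
    U = U + alpha * p
    r = r - alpha * Ap
    if np.linalg.norm(r) < 1e-12:
        break
    z = np.linalg.solve(Rinv, r)
    rz_new = r @ z
    p = z + (rz_new / rz) * p
    rz = rz_new

assert np.linalg.norm(U - np.linalg.solve(A, F)) < 1e-8
```

Note that for $c=1$ the preconditioner $K+M$ coincides with $A$ and PCG converges in one step; the choice $c=10$ keeps the example non-trivial.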
\section{Concluding remarks}
\label{sec6:conclusions}
CG is one of the connection nodes of computational mathematics. It connects to the conjugate direction method, the subspace optimization method and the BFGS quasi-Newton method in numerical optimization, to the Lanczos method in numerical linear algebra, to orthogonal polynomials in numerical approximation theory, and to PCG in numerical PDEs. It is full of mathematical ideas and novel computational techniques. Maybe there are still undiscovered connections or merits inside CG.
% Source: https://arxiv.org/abs/1910.03293, 2019.
% Title: The conjugate gradient method with various viewpoints
% Subjects: Numerical Analysis (math.NA)
% Source: https://arxiv.org/abs/1606.03746
% Title: Optimal Packings of 22 and 33 Unit Squares in a Square
% Abstract: Let $s(n)$ be the side length of the smallest square into which $n$ non-overlapping unit squares can be packed. In 2010, the author showed that $s(13)=4$ and $s(46)=7$. Together with the result $s(6)=3$ by Kearney and Shiu, these results strongly suggest that $s(m^2-3)=m$ for $m\ge 3$, in particular for the values $m=5,6$, which correspond to cases that lie in between the previous results. In this article we show that indeed $s(m^2-3)=m$ for $m=5,6$, implying that the most efficient packings of 22 and 33 squares are the trivial ones. To achieve our results, we modify the well-known method of sets of unavoidable points by replacing them with continuously varying families of such sets.
\section{Introduction}
The study of packing unit squares
into a square goes back to Erd\"os and Graham \cite{eg}, who
examined the asymptotic packing efficiency as the side length of the containing square increased towards infinity.
G\"obel \cite{goe} was the first to show that particular packings are optimal for a given non-square number of unit squares. The search for good packings for given number of unit squares
was addressed in the popular science literature in various articles by Gardner
\cite{mg}.
Let $s(n)$ be the side length of the smallest square into which $n$ non-overlapping unit squares can be packed. Non-trivial cases
for which $s(n)$ is known are $s(m^2-1)=s(m^2-2)=m$ for $m \ge 2$ (Nagamochi
\cite{naga}, single values previously shown by
G\"obel \cite{goe}, El Moumni \cite{el}, and Friedman \cite{fried}), $s(5)=2+{\frac{1}{2}}\sqrt{2}$ (G\"obel \cite{goe}), $s(6)=3$ (Kearney and Shiu
\cite{ks}), $s(10)=3+{\frac{1}{2}}\sqrt{2}$
(Stromquist \cite{strom}), $s(13)=4$, and $s(46)=7$ (Bentz \cite{b46}). There are moreover non-trivial best packings and lower bounds known for various values of $n$.
Examples of many of these results, and of the underlying techniques used, are given in the survey article by Friedman \cite{fried}.
In \cite{ks} and \cite{b46}, it was shown that $s(m^2-3)=m$ for $m=3,4,7$. These results suggested that the same holds for the intermediate values
$m=5,6$. We will show this result in this article, by adapting the proof for $m=7$ from \cite{b46}.
The best previously known lower bounds in these cases are $s(22) \ge \sqrt{15}+1\approx 4.87298$ and $s(33)\ge \sqrt{24}+1\approx 5.89898$; they follow from a general result of Nagamochi \cite{naga}.
As the trivial (or ``chess board'') packings show that $s(22)\le 5$ and $s(33) \le 6$, it suffices to establish the opposite inequality.
Let
a box be the interior of any square with side length $s$ satisfying $1<s\le 1.01$. Following Stromquist, we will establish that $m^2-3$ squares
cannot be packed in a square with side length smaller than $m$ by proving the equivalent statement that it is impossible to pack $m^2-3$ boxes in a square of side length $m$.
In order to do so, we will adapt the previously used method of unavoidable points to continuously varying sets of such points.
We will introduce this modification in Section \ref{nonav}, in addition to giving several technical lemmas.
The optimality proofs for $m=6,5$ ($n=33, 22$) are then given in sections \ref{sec33}, \ref{sec22}, respectively.
\section{Continuously changing unavoidable configurations}
Optimality proofs for square packing utilize arguments based on resource starvation. Subsets of a containing square are associated with numerical resources in such a way that each
packed box uses up a certain amount of resources (by intersecting the subset corresponding with the resource). The overall amount of resource available limits the number of boxes
that can be packed.
The proofs in \cite{goe} and many later publications are based on a finite number of points, each of which has resource value $1$. In \cite{el}, resources were associated with line segments, such that the length of intersection between a box and the line segment determined the amount of resource allocated to the box. A more complex configuration in \cite{naga} uses a combined system of (weighted) points, line segments, and a rectangular area.
Our arguments will use a two-tier approach. We will first start out with systems of points containing too many resources for a direct proof. A new technical
result (Theorem \ref{t:cont}) will allow us to use the flexibility in our initial systems to show that any potential packing must contain a local abundance of boxes. We will use
this local ``over-concentration" of boxes to obtain a contradiction in combination with a second resource system based on a line segment.
We will start by stating several ``non-avoidance'' lemmas, which guarantee that a box will intersect particular types of subsets in its vicinity.
\label{nonav}
\begin{lemma}[Friedman \cite{fried}, Stromquist \cite{strom2}] \label{triangle} Let $T$ be a triangle with sides of length at most 1. Then any box whose centre is in
$T$ must contain one of the vertices of $T$.
\end{lemma}
\begin{lemma}[Friedman \cite{fried}, Stromquist \cite{strom2}]
\label{rimevendistance}
Let $a \le 1$, $b\le 1$, and $a+2b \le 2\sqrt{2}$. Then any box whose centre is in the rectangle
$[0,a]\times[0,b]$ must intersect the $x$-axis, the point $(0,b)$ or the point $(a,b)$.
\end{lemma}
We will use Lemma \ref{rimevendistance} in the cases of $a<2\sqrt{2}-2 \approx 0.828$, $b=1$ and
$a=1$, $b< \sqrt{2}- \frac{1}{2} \approx 0.914$.
\begin{lemma}[Stromquist \cite{strom2}, \cite{strom}]
\label{rimunevenpoints}
Let $2\sqrt{2}-2<a<1,\, 0<b<1$, and $(a,b)$ within a distance of $1$ from $(0,1)$. Moreover,
let $f(a)$ be the infimum of \begin{equation}\frac{\cos \theta}{1+\cos \theta}+\frac{1-a \cos \theta}{\sin \theta}\label{fa}
\end{equation} for
$\theta \in (0,\frac{\pi}{4}]$.
If $b<f(a)$, then any box whose centre is in the quadrilateral with vertices $(0,0),(0,1),(a,0),$ and $(a,b)$ must intersect
the $x$-axis, the point $(0,1)$ or the point $(a,b)$.
Moreover, the infimum of (\ref{fa}) is a minimum and is obtained at a value of $\theta$ satisfying
\begin{equation}\label{3rdpoly} 2 \cos^3 \theta -(2a+2) \cos^2 \theta + (a^2-2a+3) \cos \theta -(1-a^2)=0.\end{equation}
\end{lemma}
We will be using Lemma \ref{rimunevenpoints} in the case $a=\frac{1}{2}\sqrt{3}, b=0.5$.
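For these parameters, the hypothesis $b<f(a)$ can be checked numerically. The sketch below is our own verification (not part of the original argument): it approximates the infimum of (\ref{fa}) on a grid for $a=\frac{1}{2}\sqrt{3}$, confirming $f(a)\approx 0.956>0.5=b$, and checks that the minimizing $\theta$ satisfies (\ref{3rdpoly}).

```python
import math

a = math.sqrt(3) / 2  # the parameter used later, together with b = 0.5

def objective(theta):
    # The quantity in the lemma whose infimum over (0, pi/4] defines f(a).
    c, s = math.cos(theta), math.sin(theta)
    return c / (1 + c) + (1 - a * c) / s

# Grid approximation of the infimum; the objective blows up as theta -> 0,
# so the minimum is attained in the interior of the grid.
steps = 200000
thetas = [(k + 1) * (math.pi / 4) / steps for k in range(steps)]
theta_min = min(thetas, key=objective)
f_a = objective(theta_min)

# Residual of the cubic equation at c = cos(theta_min); it should be ~0.
c = math.cos(theta_min)
residual = 2 * c**3 - (2 * a + 2) * c**2 + (a**2 - 2 * a + 3) * c - (1 - a**2)
```

The grid value is an upper approximation of the infimum, which is sufficient here since we only need $f(a)>\frac12$ up to the stated accuracy.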
\begin{lemma}[Nagamochi \cite{naga}] If $l$ is a line that lies within a distance of $(\sqrt{2}-1)/2$ of the centre of a box $B$, then
$l$ will intersect $B$ with a length of more than $1$. \label{closeline}
\end{lemma}
\begin{lemma}[Stromquist \cite{strom2}] \label{l:paralines} Let $L_1$ and $L_2$ be two parallel lines of distance $d\le1$, and $B$ a box with
its centre between them. Then $B$ must intersect the
two lines with a common length of intersection of at least $\min\{1,2\sqrt{2}-2d\}$.
\end{lemma}
The following lemma extends Lemma \ref{rimunevenpoints} to values of $a$ smaller than $2\sqrt{2}-2$.
\begin{lemma} Let $0<a<2\sqrt{2}-2,\, 0<b\le1$, and $(a,b)$ within a distance of $1$ from $(0,1)$.
Then any box whose centre is in the quadrilateral $Q$ with vertices $(0,0),(0,1),(a,0),$ and $(a,b)$ must intersect
the $x$-axis, the point $(0,1)$ or the point $(a,b)$.
\end{lemma}
{\bf Proof: } If $b\le \frac{1}{2}$ then the distance from $(0,0)$ to $(a,b)$ is less than $1$, and so the line segment between these points divides $Q$ into two triangles, all of whose sides have length at most one. The result now follows from Lemma \ref{triangle}.
So assume that $b >0.5$ and that the box $B$ does not intersect $(0,1)$ or the $x$-axis. Now the two line segments from $(0,0)$ to $(a,\frac{1}{2})$, and from
$(0,1)$ to $(a,\frac{1}{2})$ divide $Q$ into three triangles, such that Lemma \ref{triangle} is applicable to each of them. By our assumption, the box $B$ must contain either $(a,b)$ or
$(a,\frac{1}{2})$. In the first case, the lemma holds, so assume that $B$ contains $(a,\frac{1}{2})$.
As the centre of $B$ is contained in $Q$, it is also contained in the larger rectangle $R$ with corners $(0,0)$, $(a,0)$, $(a,1)$, and $(0,1)$. Applying Lemma \ref{rimevendistance}
to $R$ yields that $B$ contains $(a,1)$. As $(a,b)$ lies on the line segment from $(a,\frac{1}{2})$ to $(a,1)$, it is contained in $B$. \hfill{$\Box$}\smallskip
We will use the lemma for $a=0.8$, $0.4 \le b \le 1$.
\begin{lemma} \label{l:pBl}
Let $l$ be a line and $P$ a point with a distance of more than $0.51$ from $l$. If a box $B$ covers $P$ such that $P$ and the centre of $B$ lie on opposite sides of $l$,
then $B$ intersects $l$ with a length of intersection that exceeds $1$.
\end{lemma}
{\bf Proof: }
The midpoint of $B$ must lie within $0.505\sqrt{2}$ of $P$, and hence within a distance of $0.505 \sqrt{2}-0.51$ from the line $l$.
As this value is less than $(\sqrt{2}-1)/2$, the result follows from Lemma \ref{closeline}.
\hfill{$\Box$}\smallskip
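The arithmetic behind this proof reduces to a single comparison of constants; as a sanity check (ours, using the constants from the lemmas above):

```python
import math

# The midpoint of a box (side at most 1.01) lies within half a diagonal,
# 0.505*sqrt(2), of any point it covers.  If that point is more than 0.51
# from l and lies on the other side, the midpoint is within `slack` of l.
slack = 0.505 * math.sqrt(2) - 0.51
threshold = (math.sqrt(2) - 1) / 2  # distance required by the previous lemma
```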
Consider a square $S$ of side length $l$ in the Euclidean plane (which in our cases we will take to be $[0,m]\times [0,m]$ for $m \in \{5,6\}$). A set of points $P \subset S$ is called \emph{unavoidable} if every box $B \subseteq S$ contains one of the points in $P$. In practice, we show unavoidability of $P$ by dividing $S$ into several regions $S_i$, so that by one of our unavoidability lemmas, any box with midpoint in $S_i$ must either intersect a point in $P$ or the boundary of $S$. If $S$ contains
an unavoidable set of $t$ points, it follows that no more than $t$ boxes can be packed into $S$, and hence $s(t+1)\ge l$.
Figure \ref{6pat} depicts an unavoidable set of points for the square $[0,6]\times [0,6]$. The points in the lowest row are
$$\left(i,\sqrt{2}-\frac{1}{2}\right) \;\;\;\;\; i=1,2,\ldots,5,$$
and the remaining ones are
arranged so that all shown triangles are equilateral of side length 1.
Lemma \ref{triangle} is applicable to the triangles, Lemma \ref{rimevendistance} to the rectangles, and
Lemma \ref{rimunevenpoints} to the remaining quadrilateral regions (with $a=\frac{\sqrt{3}}{2}$,
$b=\frac{1}{2}$).
Figure \ref{6pat} is a variant of configurations used to show that $s(46)=7$ in \cite{b46} and to derive a lower bound on $s(11)$ in
\cite{strom}. As the unavoidable set consists of $33$ points, it demonstrates the (known) result that $s(34)=6$.
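The configuration can be reconstructed from the description above. The sketch below is our own reconstruction (the $x$-offsets of the $6$-point rows are read off the figure and are therefore an assumption); it verifies the point count, the unit-equilateral spacing, and the distance of the top row to the upper edge.

```python
import math

def config33():
    # Six rows spaced sqrt(3)/2 apart, alternating 5 and 6 points, so that
    # the triangles between consecutive rows are unit equilateral.
    y0 = math.sqrt(2) - 0.5          # height of the lowest row
    dy = math.sqrt(3) / 2            # vertical spacing between rows
    pts = []
    for i in range(6):
        y = y0 + i * dy
        xs = range(1, 6) if i % 2 == 0 else [0.5 + k for k in range(6)]
        pts.extend((x, y) for x in xs)
    return pts

pts = config33()
```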
\begin{figure}[ht]
\centering
\includegraphics[trim=80 29 80 18,clip=true]{33points.png}
\caption{An unavoidable set with 33 points}\label{6pat}
\end{figure}
Note that the configuration in Figure \ref{6pat} contains a degree of flexibility. For example, we can obtain a different
unavoidable configuration by deleting one of the points closest to the left hand side of the square and instead adding a different point
a small amount further to the right. If there exists a packing of
$33$ boxes, then each of them must contain exactly one point in each configuration, and hence one box must contain both the deleted and added point (and the line segment between them), an argument that has appeared in several previous proofs. The next theorem shows that this approach can be generalized to situations in which more than one point is moved at one time.
\begin{theo} \label{t:cont} Let $S$ be a square with a packing $\mathcal{P}$ of boxes, $I=[a,b]$, $t \in \mathbb{N}$, and $f_k:I \to S$ a collection of continuous mappings, for $1\le k\le t$. Suppose further
that
\begin{enumerate}\item for each $i \in I$, $F_i=\{f_k(i)|1\le k \le t\}$ is an unavoidable set of points;
\item if for some $1 \le k \le t$, $f_k(a)$ is not contained in a box of $\mathcal{P}$, then $f_k(i)=f_k(a)$ for all $i \in I$.
\item if for some $1 \le k,l \le t$, $k\ne l$, $f_k(a)$ and $f_l(a)$ lie in the same box of $\mathcal{P}$, then $f_k(i)=f_k(a)$ for all $i \in I$.
\end{enumerate}
Then for all $1 \le k \le t$, the image $f_k(I)$ will either lie entirely within one box, or entirely outside every box.
\end{theo}
{\bf Proof: }
If $f_k(a)$ is not contained in any box, then $f_k$ is constant. Hence for the theorem to be wrong, there
must be $1\le k\le t$, $i\in I$, such that $f_k(a)$ lies in some box $B_k$ while $f_k(i) \notin B_k$. Assume that this is indeed the case.
As boxes are open and $f_k$ is continuous, it
follows that there is a smallest such $i' \in I$ for which $f_k(i')$ lies outside $B_k$. Minimizing over all indices, we may assume w.l.o.g. that $i'$ is the smallest value of $i$ for which any $f_s(i)$ lies outside the box containing $f_s(a)$.
Now, as $F_{i'}$ is an unavoidable set of points, there exists some $1 \le l\le t$, necessarily with $l \ne k$, such that $f_l(i') \in B_k$. Boxes are open, therefore
there exists an $\epsilon >0$ such that $f_l(i'-\epsilon) \in B_k$. By the minimality of $i'$, it follows that $f_l(a) \in B_k$. However, now $f_l(a), f_k(a) \in B_k$, and so $f_k$ is constant by condition 3, contradicting that $f_k(i') \notin B_k$. The result follows.
\hfill{$\Box$}\smallskip
\section{The best possible packing of 33 unit squares}
\label{sec33}
\begin{theo}
\label{s33}
33 non-overlapping unit squares cannot be packed in a square of side length less than 6.
\end{theo}
{\bf Proof: }
Let $S$ be the square $[0,6]^2$ and assume by way of contradiction that there is a packing $\mathcal{P}$ of 33 boxes into $S$.
Consider the collection of 33 red points and 33 blue points depicted in Figure
\ref{fig:redblue}. The red points are the points from Figure \ref{6pat}, while the blue points are obtained from the red ones by mirroring along the line $y=3$. It follows that both the red and the blue
points form unavoidable sets of points. Hence each box in $\mathcal{P}$ will contain exactly one red and exactly one blue point.
\begin{figure}[ht]
\centering
\includegraphics[trim=80 29 80 20,clip=true]{33pointsRedBlue.png}
\caption{Two unavoidable sets with 33 points each}\label{fig:redblue}
\end{figure}
We will apply Theorem \ref{t:cont} twice. In the first instance we choose our values $f_k(i)$ so that $F_a$ is the set of red points from Figure \ref{fig:redblue}, while in the second case
the start configuration will be the set of blue points. We will
give an informal description of the other values of $f_k(i)$ by describing the ``movement" of the point configuration. Many of our movements will move an entire row of equally-coloured points from Figure \ref{fig:redblue}. We will denote the red rows and blue rows by $r_1,\dots,r_6$ and $b_1,\dots,b_6$, respectively, where the rows are numbered from bottom to top.
For any two such rows $r,r'$, we denote by $v(r,r')$ their vertical distance.
We first note that as every red point and every blue point lies in exactly one box of $\mathcal{P}$, the second and third condition of Theorem \ref{t:cont} are automatically fulfilled.
Now consider
a simultaneous vertical movement of a row $r_i$. Such a move will preserve the unavoidability of the point configuration as long as, whenever they are defined,
$v(r_i,r_{i-1}) \le \frac{1}{2}\sqrt{3}$ and $v(r_i,r_{i+1})\le \frac{1}{2}\sqrt{3}$, and in addition, the vertical distance from $r_1$ to the bottom edge and from $r_6$ to the top edge of $S$ is
at most $\sqrt{2}-\frac{1}{2}$.
With regard to a given configuration, let $m_1=v(r_1,r_{2})$, $m_i=\max\{v(r_i,r_{i+1}), v(r_i,r_{i-1})\}$ for $i=2,\dots ,5$, and $m_6=v(r_6,r_5)$.
Clearly, for $i=1,\dots,6 $, there exists a unique configuration $F_i$ that minimizes
$m_i$ and that is reachable from $F_a$ by vertical movement of rows, such that unavoidability is preserved throughout.
Let
$y_i$ be the second coordinate value of the points in the $i$-th row of $F_i$, for $i=1,\dots,6$ (the exact values of $y_i$ can be easily calculated, but are not needed).
As
$$2\left(\sqrt{2}-\frac{1}{2}\right)+2\cdot 0.8 + 3\cdot\frac{1}{2}\sqrt{3}>6,$$
we note that in $F_i$, we have $v(r_i,r_{i-1}), v(r_i,r_{i+1})\le 0.8$, wherever defined.
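The displayed inequality is elementary; for the record, our numerical check of the five inter-row gaps plus the two edge gaps:

```python
import math

# Two edge gaps of sqrt(2)-1/2, two row gaps of 0.8, and three row gaps of
# sqrt(3)/2 do not fit into the height 6 of the square.
total = 2 * (math.sqrt(2) - 0.5) + 2 * 0.8 + 3 * (math.sqrt(3) / 2)
```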
Now consider $F_i$ for $i=2,4,6$. Here the $i$-th row contains $6$ points, and we may move the $i$-th row horizontally to the left and right, provided the distances
to all ``critical" points in the adjacent rows stay within $1$. As the adjacent rows have a vertical distance of less than $0.8$, it is easy to check that this allows for a movement of at least $0.1$ to either side. In addition, in this situation, we may move the left-most point of any such row horizontally to the right, until it reaches the point $(1,y_i)$. This maximal movement is possible, as the vertical distance to the adjacent rows is less than $0.8 < 2\sqrt{2}-2$.
We now proceed as follows. We move the point configuration to one of the $F_i$. If the $i$-th row contains $5$ points, we note that one of the points occupies the point $(1,y_i)$.
If the $i$-th row contains $6$ points, we move the row $0.1$ to the left and back to the right. Finally, move the left-most point of row $i$ from $(0.5,y_i)$ to $(1,y_i)$ and back, and note that this point, combining both movements, has moved over the line segment from $(0.4,y_i)$ to $(1,y_i)$. We repeat this procedure for all $F_i$.
By Theorem \ref{t:cont}, the line segments $[0.4,1]\times \{y_i\}$ for $i\in \{2,4,6\}$ each lie entirely within one box of $\mathcal{P}$. Moreover, as these segments
are the result of movement from different points of the base configuration, they, as well as the point sets $\{(1,y_i)\}$ arising from the movement of the $5$-point rows, lie in different boxes.
We now repeat this procedure for the blue points, noting that we obtain the same values $y_i$ as in the case of the red points. The blue points will have a
$6$-point row where the red points have a $5$-point row and vice versa. We can conclude that for $i\in\{1,3,5\} $, the line segment $[0.4,1]\times \{y_i\}$ lies within one box of $\mathcal{P}$.
Hence, taking both configurations together, the segments $[0.4,1]\times \{y_i\}$ lie each in one box of $\mathcal{P}$ for $i=1,\dots, 6$. Moreover, these segments all lie in different boxes
as the points $(1,y_i)$ are all within the movement of different points from the red base configuration.
It follows that in $\mathcal{P}$, there are $6$ distinct boxes $B_1,\dots,B_6$ such that $B_i$ covers $(0.4,y_i)$. Now let $l$ be the line segment from
$(\sqrt{2}-\frac{1}{2},0)$ to $(\sqrt{2}-\frac{1}{2},6)$. We are interested in the length of the intersection of $B_i$ and $l$. If the midpoint of $B_i$ lies on the same side of $l$ as $(0.4, y_i)$, then
this intersection exceeds $1$ by Lemma \ref{l:paralines} (with $d=\sqrt{2}-\frac{1}{2}$). If the midpoint of $B_i$ lies on $l$ or on the side of $l$ opposite from $(0.4,y_i)$, then the intersection has length larger
than $1$ by Lemma \ref{l:pBl}. Hence all six boxes intersect $l$ with a length larger than $1$. However, the length of $l$ is $6$, for a contradiction.
Hence $33$ boxes cannot be packed in a square of side length $6$ and so $s(33)\ge 6$.\hfill{$\Box$}\smallskip
\begin{cor}The trivial packing is optimal for packing $33$ unit squares in a square, and we have that $s(33)=6$.
\end{cor}
\section{The best possible packing of 22 unit squares}
\label{sec22}
\begin{theo}
\label{th22}
$22$ non-overlapping unit squares cannot be packed in a square of side length less than $5$.
\end{theo}
{\bf Proof: } Let $S$ be the square $[0,5]^2$, and assume, by way of contradiction, that there exists a packing $\mathcal{P}$ of $22$ boxes into $S$.
Consider the configuration of $22$ red and $23$ blue points shown in Figure \ref{fig:bluered22}. Together, the points of both colours are exactly the elements of the set
$\{0.5,1,1.5,2,2.5,3,3.5,4,4.5\} \times \{0.9,1.7,2.5,3.3,4.1\}$. We will denote the rows of red and blue points by $r_1,\dots,r_5$ and $b_1,\dots, b_5$, with numbering from bottom
to top, and we let $y_1,\dots,y_5$ be the values of their second coordinates.
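As a quick sanity check (ours, not part of the original argument), the grid has $45=22+23$ points, its row spacing $0.8$ is below the bound $\frac{1}{2}\sqrt{3}$ used in the non-avoidance lemmas, and the outer rows lie within $\sqrt{2}-\frac{1}{2}$ of the horizontal edges:

```python
import math

xs = [0.5 + 0.5 * k for k in range(9)]   # 0.5, 1.0, ..., 4.5
ys = [0.9 + 0.8 * k for k in range(5)]   # 0.9, 1.7, 2.5, 3.3, 4.1
grid = [(x, y) for y in ys for x in xs]
```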
\begin{figure}[h]
\centering
\includegraphics[scale= 0.33,trim=180 44 180 27,clip=true]{22pointsredblue.png}
\caption{Unavoidable configurations of 22 red and 23 blue points}\label{fig:bluered22}
\end{figure}
By the same basic argument as applied to Figure \ref{fig:redblue}, we can show that both the set of red points and the set of blue points form unavoidable sets. It follows that each red point lies
in exactly one box from $\mathcal{P}$. In the case of the blue points, either exactly
one blue point is not contained in a box of $\mathcal{P}$, or exactly one box of $\mathcal{P}$ contains two blue points. In the latter case, the two blue points that lie in the same box must be within a distance of each other that is smaller than the diagonal of a maximal size box, which is $1.01 \sqrt{2}$.
Thus, by the symmetry in our blue point configuration, we may assume that all blue points with first coordinate value
lower than $2$ lie in a box of $\mathcal{P}$ that does not contain any other blue points.
As in the proof of Theorem \ref{s33}, we will apply Theorem \ref{t:cont} twice, with the red and blue points as the respective starting configurations. Once again we
describe the function $f_k$ informally in terms of the movement of points. In the case of the red points, we will first move $r_2$ and $r_4$ (i.e. the red rows
containing $5$ points) horizontally a distance of $0.1$
to the left and back, followed by
a movement of the left-most point in each such row horizontally to the right until it reaches a first coordinate value of $1$. These movements preserve
unavoidability, as the vertical distance between adjacent rows is $0.8$ in all cases. As in Theorem \ref{s33}, we record that different boxes of $\mathcal{P}$ contain the line segments $[0.4,1]\times \{1.7\}$ and $[0.4,1]\times \{3.3\}$, and that these boxes must also be different from the boxes containing the (stationary) points $(1,0.9)$, $(1,2.5)$, and $(1,4.1)$.
For the blue points, we want to repeat these movements with rows $b_1,b_3,$ and $b_5$. However, we need to modify our procedure, as Theorem \ref{t:cont} requires that a blue point remain stationary
if it does not lie in a box of $\mathcal{P}$ or is in a box of $\mathcal{P}$ that contains a different blue point. If row $b_i$, $i\in \{1,3,5\}$, does not contain such a point, we move it horizontally a distance of
$0.1$ and back to its original position. We then move its leftmost point horizontally from $(0.5,y_i)$ to $(1,y_i)$. In case that $b_i$ does contain such a point, we just move its leftmost point from $(0.5,y_i)$ to $(1,y_i)$. We note that the last case can only happen for one of the rows $b_1,b_3,b_5$, because if there are two such exceptional blue points, they must lie in the same box, and hence (due to the size of the boxes) in either the same or adjacent rows.
By Theorem \ref{t:cont} the trajectory of each of the leftmost points of $b_1,r_2,b_3,r_4,b_5$ lies completely within a box of $\mathcal{P}$. Moreover each of these trajectories intersects a different red point from the initial configuration, and hence the trajectories lie within different boxes.
We can conclude that there are $5$ boxes $B_1, \dots, B_5$ in $\mathcal{P}$
such that the line segments $[0.4,1] \times \{y_i\}$ lie completely within $B_i$, except that at most one of $B_1,B_3,B_5$ might only cover the line segment
$[0.5,1] \times \{y_i\}$.
Let $l$ be the line segment $\{\sqrt{2}-\frac{1}{2}\} \times [0,5]$. As in Theorem \ref{s33}, we can check that if $B_i$ contains $[0.4,1]\times \{y_i\}$, it intersects
$l$ with a length of intersection that exceeds $1$. As $l$ has length $5$, one of $B_1,B_3,B_5$ does not cover the entire line segment $[0.4,1]\times\{y_i\}$.
We consider $2$ cases:
\begin{enumerate}
\item First assume that $B_3$ does not completely cover $[0.4,1]\times\{y_3\}$. As there was at most one exceptional row, $B_1,B_2,B_4,$ and $B_5$ all intersect $l$ with a length of intersection
exceeding $1$. It follows that $B_3 \cap l \subseteq \{\sqrt{2}-\frac{1}{2}\}\times (2,3)$, and so $B_3$ does not cover the points $(\sqrt{2}-\frac{1}{2},2)$ and $ (\sqrt{2}-\frac{1}{2},3)$. In Figure \ref{fig:midline}, these two points are depicted in green.
Let $m$ be the midpoint of $B_3$. The location of $m$ is constrained as follows: $m$ must lie on the right side of $l$ and separated from it by a distance of at least $\frac{1}{2}\sqrt{2}-\frac{1}{2}$, for
otherwise the length of intersection of $B_3$ and $l$ would exceed $1$ by either Lemma \ref{l:paralines} or Lemma \ref{closeline}. As $B_3$ does not cover $(\sqrt{2}-\frac{1}{2},2)$ or $(\sqrt{2}-\frac{1}{2},3)$, $m$ cannot be
within a distance of $0.5$ from either of these points. Finally, as $B_3$ covers $[0.5,1] \times \{y_3\}$, the distance from $m$ to $(0.5,y_3)$ must be smaller than half the
diagonal of a maximal box, i.e. smaller than $0.505\sqrt{2}$. Figure \ref{fig:midline} shows the remaining possible locations of $m$ as a shaded area. The area is bounded
by line and circle segments that intersect in
$4$ points with approximate coordinates $(1.12, 2.5 \pm 0.05), \, (1.2,2.5\pm 0.1)$.
An easy calculation shows that
the entire area is within a distance of $0.5$ from the point $(1.5,y_3)=(1.5,2.5)$. In Figure \ref{fig:midline}, this distance is indicated by a circle. It follows that $B_3$ also covers the point $(1.5, 2.5)$.
Hence in our initial configuration of points, the box $B_3$ covers two blue points, namely those at $(0.5,2.5)$ and $(1.5,2.5)$. However, this contradicts our assumption that all blue points with a first coordinate value smaller than $2$ do not share a box with another blue point.
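The claim that the shaded area lies entirely within distance $\frac{1}{2}$ of $(1.5,2.5)$ can be checked numerically. The following sketch (a grid sample, ours, so an indication rather than a proof) tests every sampled midpoint satisfying the four constraints described above:

```python
import math

lx = math.sqrt(2) - 0.5                 # x-coordinate of the line l
green1, green2 = (lx, 2.0), (lx, 3.0)   # points denied to B_3
covered = (0.5, 2.5)                    # left end of the covered segment
target = (1.5, 2.5)
half_diag = 0.505 * math.sqrt(2)        # half diagonal of a maximal box
xmin = lx + (math.sqrt(2) - 1) / 2      # minimal distance of m from l

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

worst, feasible = 0.0, 0
n = 300
for i in range(n + 1):
    for j in range(n + 1):
        m = (1.05 + 0.25 * i / n, 2.3 + 0.4 * j / n)
        if (m[0] >= xmin and dist(m, green1) >= 0.5
                and dist(m, green2) >= 0.5 and dist(m, covered) <= half_diag):
            feasible += 1
            worst = max(worst, dist(m, target))
```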
\begin{figure}[h]
\centering
\includegraphics[scale= 0.6,trim=0 40 70 40,clip=true]{5by5finalargument.png}
\caption{The midpoint of the box $B_3$ must lie in the shaded area}\label{fig:midline}
\end{figure}
\item Assume that for some $i \in \{1,5\}$, $B_i$ does not cover the entire line segment $[0.4,1] \times \{y_i\}$. By symmetry, we may assume that this is the case for $i=1$. Then
$B_i$ covers $[0.4,1]\times \{y_i\}$ for $i=2,\dots, 5$, and, as in the previous case, this implies that each such $B_i$ intersects the line segment $l$ with a length of intersection larger than
$1$. It follows that the point $(\sqrt{2}-\frac{1}{2},1)$ is denied to $B_1$. This point is depicted in green in Figure \ref{fig:lowline}.
Let $m$ be the midpoint of $B_1$. As before, we can conclude that $m$ lies on the opposite side of $l$ from the point $(0.5,y_1)$, with a distance of at least $\frac{1}{2}\sqrt{2}-\frac{1}{2}$ from $l$, but within a distance of $0.505\sqrt{2}$ of $(0.5,y_1)$. These constraints intersect at approximately $(1.13, 0.56)$ and
$(1.13,1.24)$.
The resulting area is depicted in Figure \ref{fig:lowline} and lies completely within a distance of $\frac{1}{2}$ from the point $(\sqrt{2}-\frac{1}{2},1)$. It follows that $(\sqrt{2}-\frac{1}{2},1)$ lies in
$B_1$. However, the point is denied to $B_1$, for a contradiction.
\end{enumerate}
In either case we get a contradiction. It follows that the packing $\mathcal{P}$ does not exist, and hence $s(22)\ge 5$.\hfill{$\Box$}\smallskip
\begin{figure}[h]
\centering
\includegraphics[scale= 0.6,trim=0 0 70 38,clip=true]{5by5finalargumentLowerSection.png}
\caption{The midpoint of the box $B_1$ must lie in the shaded area}\label{fig:lowline}
\end{figure}
\begin{cor}The trivial packing is optimal for packing $22$ unit squares in a square, and we have that $s(22)=5$.
\end{cor}
| {
"timestamp": "2016-06-14T02:12:29",
"yymm": "1606",
"arxiv_id": "1606.03746",
"language": "en",
"url": "https://arxiv.org/abs/1606.03746",
"abstract": "Let $s(n)$ be the side length of the smallest square into which $n$ non-overlapping unit squares can be packed. In 2010, the author showed that $s(13)=4$ and $s(46)=7$. Together with the result $s(6)=3$ by Keaney and Shiu, these results strongly suggest that $s(m^2-3)=m$ for $m\\ge 3$, in particular for the values $m=5,6$, which correspond to cases that lie in between the previous results.In this article we show that indeed $s(m^2-3)=m$ for $m=5,6$, implying that the most efficient packings of 22 and 33 squares are the trivial ones. To achieve our results, we modify the well-known method of sets of unavoidable points by replacing them with continuously varying families of such sets.",
"subjects": "Combinatorics (math.CO)",
"title": "Optimal Packings of 22 and 33 Unit Squares in a Square",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9850429120895838,
"lm_q2_score": 0.8438951005915208,
"lm_q1q2_score": 0.831272887384804
} |
https://arxiv.org/abs/1206.2110 | Some criteria for spectral finiteness of a finite subset of the real matrix space $\mathbb{R}^{d\times d}$ | In this paper, we present some checkable criteria for the spectral finiteness of a finite subset of the real $d\times d$ matrix space $\mathbb{R}^{d\times d}$, where $2\le d<\infty$. | \section{Introduction}\label{sec1
Throughout this paper, by $\rho(M)$ we mean the usual spectral radius of a real square matrix $M\in\mathbb{R}^{d\times d}$, where $2\le d<+\infty$. For an arbitrary finite family of real matrices
\bean
\bA=\{A_1,\dotsc,A_K\}\subset\mathbb{R}^{d\times d},
\eean
its \textit{generalized spectral radius $\pmb{\rho}(\bA)$}, first introduced by Daubechies and Lagarias in \cite{DL92-01}, is defined by
\bean
\pmb{\rho}(\bA)=\sup_{n\ge1}\max_{M\in\bA^n}\sqrt[n]{\rho(M)}\quad\left(\,=\limsup_{n\to+\infty}\max_{M\in\bA^n}\sqrt[n]{\rho(M)}\right),
\eean
where
\begin{equation*}
\bA^n=\left\{\stackrel{n\textrm{-folds}}{\overbrace{M_1\dotsm M_n}}\,\bigg\vert\, M_i\in\bA\textrm{ for }1\le i\le n\right\}\quad \forall n\ge1.
\end{equation*}
According to the Berger-Wang spectral formula~\cite{BW92} (also see \cite{Els, Dai-JMAA} for simple proofs), this quantity is very important for many branches of pure and applied mathematics, such as numerical computation of matrices, differential equations, coding theory, wavelets, stability analysis of random matrices, control theory, and combinatorics. See, for example, \cite{Bar, Jun}.
Therefore, the following finite-step realization property for the accurate computation of $\pmb{\rho}(\bA)$ becomes very interesting and important, because it makes the stability question algorithmically decidable; see, e.g., \cite[Proposition~2.9]{Bar}.
\begin{prob}\label{prob1}
Does there exist a finite-length word which realizes $\pmb{\rho}$ for $\bA$?
In other words, does there exist any $M^*\in\bA^{n^*}$ such that $\pmb{\rho}(\bA)=\sqrt[\leftroot{-2}\uproot{2}n^*]{\rho(M^*)}$, for some $n^*\ge1$?
\end{prob}
If one can find such a word $M^*$ for some $n^*\ge1$, then $\bA$ is said to possess \textit{the spectral finiteness}.
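In practice, the quantity in Problem~\ref{prob1} can be approached from below by enumerating words of bounded length. The following sketch is our own illustration (the matrices in the test are hypothetical examples): it computes $\max_{1\le n\le n_{\max}}\max_{M\in\bA^n}\rho(M)^{1/n}$ for $2\times 2$ matrices, which is a lower bound for $\pmb{\rho}(\bA)$ and equals it whenever an optimal word of length at most $n_{\max}$ exists.

```python
import itertools
import math

def rho(M):
    # Spectral radius of a real 2x2 matrix from its trace and determinant.
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs(tr + r), abs(tr - r)) / 2
    return math.sqrt(det)  # complex conjugate pair: |lambda|^2 = det > 0

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lower_bound(mats, nmax):
    # max over all words of length <= nmax of rho(A(w))^(1/n).
    best = 0.0
    for n in range(1, nmax + 1):
        for word in itertools.product(mats, repeat=n):
            P = word[0]
            for M in word[1:]:
                P = mul(P, M)
            best = max(best, rho(P) ** (1.0 / n))
    return best
```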
This spectral finiteness, for any bounded $\bA$, was conjectured respectively by Pyatnitski\v{i} (see, e.g.,~\cite{PR91,SWMW07}), Daubechies and Lagarias in~\cite{DL92-01}, Gurvits in~\cite{Gur95}, and by Lagarias and Wang in~\cite{LW95}. It was disproved first by Bousch and Mairesse in \cite{BM}, then by Blondel \textit{et al.} in \cite{BTV}, and by Kozyakin in~\cite{Koz05, Koz07}, all of whom established the existence of counterexamples in the case $d=2$ and $K=2$; moreover, an explicit expression for such a counterexample has been found in the recent work of Hare \textit{et al.}~\cite{HMST}.
However, an affirmative solution to Problem~\ref{prob1} is very important; this is because it implies an effective computation of $\pmb{\rho}(\bA)$ and decidability of stability in only finitely many steps of computation. There have been some sufficient (and necessary) conditions for the spectral finiteness for some types of $\bA$, based on and involving Barabanov norms, polytope norms, ergodic theory or some limit properties of $\bA$; see, for example, Gurvits~\cite{Gur95}, Lagarias and Wang~\cite{LW95}, Guglielmi, Wirth and Zennaro~\cite{GWZ05}, Kozyakin~\cite{Koz07}, Dai, Huang and Xiao~\cite{DHX-pro}, and Dai and Kozyakin~\cite{DK}. But these theoretical criteria seem difficult to employ directly to judge whether an explicit family $\bA$, or even a pair $\{A,B\}\subset\mathbb{R}^{2\times 2}$, has the spectral finiteness.
From the literature, as far as we know, there are only a few results on such explicit families of matrices $\bA$, as follows.
\begin{thm-A}[{Theys~\cite{Theys}, also \cite{JB08}}]
If $A_1,\dotsc,A_K\in\mathbb{R}^{d\times d}$ are all symmetric matrices, then $\bA$ has the spectral finiteness such that
$\brho(\bA)=\max_{1\le k\le K}\rho(A_k)$.
\end{thm-A}
A more general version of this theorem is the following.
\begin{thm-B}[{Barabanov~\cite[Proposition~2.2]{Bar}}]
If a finite set $\bA\subset\mathbb{R}^{d\times d}$ only contains normal matrices, then the spectral finiteness holds.
\end{thm-B}
For a matrix $A$, by $A^T$ we mean its transpose matrix. Another generalization of Theorem~A is the following.
\begin{thm-C}[{Plischke and Wirth~\cite[Proposition~18]{PW:LAA08}}]
If $\bA=\{A_1,\dotsc,A_K\}\subset\mathbb{R}^{d\times d}$ is symmetric in the sense that $A_k^T\in\bA$ for all $1\le k\le K$, then $\bA$ has the spectral finiteness.
\end{thm-C}
Jungers and Blondel~\cite{JB08} proved that the spectral finiteness holds for any pair of $2\times 2$ $\{0,1\}$-matrices.
A more general result is the following.
\begin{thm-D}[{Cicone \textit{et al.}~\cite{CGSZ10}}]
If $A_1$ and $A_2$ are $2\times 2$ sign-matrices; that is, $A_1,A_2$ belong to $\{0,\pm1\}^{2\times 2}$, then the spectral finiteness holds for $\{A_1,A_2\}$.
\end{thm-D}
The following are results of a different type.
\begin{thm-E}[{Br\"{o}ker and Zhou~\cite{BZ}}]
If $\bA=\{A, B\}\subset\mathbb{R}^{2\times 2}$ satisfies $\det(A)<0$ and $\det(B)<0$, then
$\brho(\bA)=\max\left\{\rho(A),\rho(B), \sqrt{\rho(AB)}\right\}$.
\end{thm-E}
\begin{thm-F}[{M\"{o}{\ss}ner~\cite{Mo}}]
If $\bA=\{L, R\}\subset\mathbb{R}^{2\times 2}$ satisfies
$L=\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)R\left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$,
then $\bA$ has the spectral finiteness with $\brho(\bA)=\max\left\{\rho(L), \sqrt{\rho(LR)}\right\}$.
\end{thm-F}
\begin{thm-G}[{Guglielmi \textit{et al.}~\cite[Theorem~4]{GMV}}]
Let $\bA=\{A,B\}$ satisfy
\begin{equation*}
A=\begin{pmatrix}a&b\\c&d\end{pmatrix}\quad\textrm{and}\quad B=\begin{pmatrix}a&-b\\-c&d\end{pmatrix},\quad \textrm{where }a,b,c,d\in\mathbb{R}.
\end{equation*}
Then $\bA$ has the spectral finiteness such that
\begin{equation*}
\brho(\bA)=
\begin{cases}
\rho(A)=\rho(B)& \textrm{if }bc\ge0,\\
\sqrt{\rho(AB)}& \textrm{if }bc<0.\end{cases}
\end{equation*}
\end{thm-G}
\begin{thm-H}[Dai \textit{et al.}~\cite{DHLX11}]
If one of $A, B\in\mathbb{R}^{d\times d}$ is of rank one, then there holds the spectral finiteness property for $\{A,B\}$.
\end{thm-H}
We will present a new criterion for the spectral finiteness of a finite subset of $\mathbb{R}^{d\times d}$, see Theorem~\ref{thm1} in Section~\ref{sec2}, which generalizes Theorems~A,~C and G. From this we can obtain some checkable sufficient conditions for the spectral finiteness.
Finally in Section~\ref{sec3}, we will improve the main theorem of Kozyakin~\cite{Koz07} to get a sufficient and necessary condition for the spectral finiteness of a type of $2$-by-$2$ matrix set $\bA$; see Theorem~\ref{thm8}.
\section{Symmetric optimal words and the spectral finiteness}\label{sec2}
We let $\bA=\{A_1,\dotsc,A_K\}\subset\mathbb{R}^{d\times d}$ be an arbitrarily given set, where $2\le K<\infty$ and $2\le d<\infty$. By $\|\cdot\|$ we denote the usual Euclidean norm on $\mathbb{R}^{d\times d}$. Let
$$
\bK=\{1,2,\dotsc,K\}\quad \textrm{and}\quad
\bK^n=\stackrel{n\textit{-folds}}{\overbrace{\bK\times\dotsm\times\bK}}.
$$
For any word $w=(k_1,\dotsc,k_n)\in\bK^n$ of length $n$, we write $\bA(w)=A_{k_1}\dotsm A_{k_n}\in\bA^n$.
A word $w^*=(k_1^*,\dotsc,k_n^*)\in\bK^n$ of length $n$ is called an \textit{$(\bA,n)$-optimal word}, provided that it satisfies the condition:
\begin{equation*}
\|\bA(w^*)\|=\max_{w\in\bK^n}\|\bA(w)\|.
\end{equation*}
This section is mainly devoted to proving the following criterion for the spectral finiteness of $\bA$, which generalizes Theorems~A and C and the first part of Theorem~G.
\begin{theorem}\label{thm1}
Let $\bA=\{A_1,\dotsc,A_K\}\subset\mathbb{R}^{d\times d}$. If there exists an $(\bA,n^*)$-optimal word $w^*$ with $\bA(w^*)^T\bA(w^*)\in\bA^{2n^*}$ $(\textrm{resp. } \bA(w^*)\bA(w^*)^T\in\bA^{2n^*})$ for some $n^*\ge1$, then $\bA$ has the spectral finiteness such that
\begin{equation*}
\brho(\bA)=\sqrt[\leftroot{-3}n^*]{\|\bA(w^*)\|}=\sqrt[\leftroot{-3}2n^*]{\rho\left(\bA(w^*)^T\bA(w^*)\right)}\quad\left(=\sqrt[\leftroot{-3}2n^*]{\rho\left(\bA(w^*)\bA(w^*)^T\right)}\right).
\end{equation*}
\end{theorem}
\begin{proof}
Let $w^*$ be an $(\bA,n^*)$-optimal word of length $n^*\ge1$ such that $\bA(w^*)^T\bA(w^*)\in\bA^{2n^*}$ (resp. $\bA(w^*)\bA(w^*)^T\in\bA^{2n^*}$). Then from the Berger-Wang spectral formula~\cite{BW92}, it follows that
\begin{equation*}\begin{split}
\brho(\bA)&=\inf_{n\ge1}\max_{w\in\bK^n}\sqrt[n]{\|\bA(w)\|}\le\sqrt[\leftroot{-3}n^*]{\|\bA(w^*)\|}\\
&=\sqrt[\leftroot{-3}2n^*]{\rho\left(\bA(w^*)^T\bA(w^*)\right)}\\
&=\sqrt[\leftroot{-3}2n^*]{\rho\left(\bA(w^*)\bA(w^*)^T\right)}\\
&\le\brho(\bA).
\end{split}\end{equation*}
This implies the desired result and ends the proof of Theorem~\ref{thm1}.
\end{proof}
For the case where $\bA$ is symmetric as in Theorem~C, one can find an $(\bA,1)$-optimal word $w^*$ such that both $\bA(w^*)^T\bA(w^*)$ and $\bA(w^*)\bA(w^*)^T$ belong to $\bA^{2}$. On the other hand, the following simple example shows that Theorem~\ref{thm1} is a genuine extension of Theorem~C.
\begin{example}\label{example2}
Let $\bA$ consist of the following three matrices:
\begin{equation*}
A_1=\begin{pmatrix}1&1&2\\0&1&1\\0&0&1\end{pmatrix},\quad A_2=\begin{pmatrix}1&0&0\\1&1&0\\2&1&1\end{pmatrix},\quad \textrm{and}\quad A_3=\begin{pmatrix}\cos\alpha&\sin\alpha&0\\-\sin\alpha&\cos\alpha&0\\0&0&\sqrt{\frac{3-\sqrt{5}}{2}}\end{pmatrix}.
\end{equation*}
It is evident that $\bA$ is not symmetric. However, $w^*=(1)$ is an $(\bA,1)$-optimal word such that
\begin{equation*}
\bA(w^*)^T\bA(w^*)=A_2A_1\in\bA^{2}
\end{equation*}
and so $\brho(\bA)=\sqrt{\rho(A_2A_1)}$.
\end{example}
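The claims of Example~\ref{example2} can be checked numerically. The sketch below (pure Python; the angle $\alpha$ is a free parameter, set to $0.7$ here only for the check) verifies that $A_1^T=A_2$, hence $\bA(w^*)^T\bA(w^*)=A_2A_1\in\bA^2$, and that $\|A_3\|=1\le\|A_1\|=\|A_2\|$, consistent with $w^*=(1)$ being $(\bA,1)$-optimal:

```python
import math

def matmul(X, Y):
    n = len(X)
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def spectral_norm(M, iters=300):
    # ||M|| = sqrt(rho(M^T M)) via power iteration
    n = len(M)
    MtM = matmul(tuple(zip(*M)), M)
    v, lam = [1.0] * n, 0.0
    for _ in range(iters):
        w = [sum(MtM[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam ** 0.5

alpha = 0.7                      # free parameter; any angle works
s = math.sqrt((3 - math.sqrt(5)) / 2)
A1 = ((1.0, 1.0, 2.0), (0.0, 1.0, 1.0), (0.0, 0.0, 1.0))
A2 = ((1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 1.0, 1.0))
A3 = ((math.cos(alpha), math.sin(alpha), 0.0),
      (-math.sin(alpha), math.cos(alpha), 0.0),
      (0.0, 0.0, s))

# A1^T = A2 entrywise, so A(w*)^T A(w*) = A2 A1 lies in A^2
A1T = tuple(zip(*A1))
same = all(A1T[i][j] == A2[i][j] for i in range(3) for j in range(3))
print(same)                                   # True
print(matmul(A1T, A1) == matmul(A2, A1))      # True

# ||A3|| = 1 <= ||A1|| = ||A2||, so w* = (1) is (A,1)-optimal
print(round(spectral_norm(A3), 6))            # 1.0
print(spectral_norm(A1) > 1.0)                # True
```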
As a consequence of Theorem~\ref{thm1}, we obtain the following checkable criterion for the spectral finiteness of a particular class of sets $\bA$.
\begin{cor}\label{cor3}
Let $\bA$ consist of the following $K+2$ matrices:
\begin{equation*}
A_0=\begin{pmatrix}a&b\\c&d\end{pmatrix},\;A_1=\begin{pmatrix}a_{11}&r_1b\\r_1c&d_{11}\end{pmatrix},\;\dotsc,\;A_K=\begin{pmatrix}a_{KK}&r_Kb\\r_Kc&d_{KK}\end{pmatrix},\; \textrm{and}\; B=\begin{pmatrix}b_{11}&r\sqrt{|b|}\\r\sqrt{|c|}&b_{22}\end{pmatrix},
\end{equation*}
where $r, r_1,\dotsc,r_K$ are all constants. If $bc\ge0$ and $\|B\|\le\max_{0\le i\le K}\rho(A_i)$,
then $\bA$ has the spectral finiteness and moreover
\begin{equation*}
\brho(\bA)=\max_{0\le k\le K}\rho(A_k).
\end{equation*}
\end{cor}
\begin{proof}
If $bc=0$ then the statement holds trivially. Next, we assume $bc>0$.
Let $k^*\in\{0,1,\dotsc,K\}$ be such that
\begin{equation*}
\rho(A_{k^*})=\max_{0\le k\le K}\rho(A_k),
\end{equation*}
and we put
\begin{equation*}
Q=\begin{pmatrix}q_1&0\\0&q_2\end{pmatrix}
\end{equation*}
which is such that
$$q_1q_2\not=0\quad \textrm{and}\quad \frac{q_1}{q_2}=\sqrt{\frac{c}{b}}.$$
Then,
\begin{gather*}
QA_0Q^{-1}=\begin{pmatrix}a&\sqrt{bc}\\\sqrt{bc}&d\end{pmatrix},\\
QA_1Q^{-1}=\begin{pmatrix}a_{11}&r_1\sqrt{bc}\\r_1\sqrt{bc}&d_{11}\end{pmatrix},\\
\vdots\quad\vdots\quad \vdots\\
QA_KQ^{-1}=\begin{pmatrix}a_{KK}&r_K^{}\sqrt{bc}\\r_K^{}\sqrt{bc}&d_{KK}\end{pmatrix},\\
\intertext{and}
QBQ^{-1}=\begin{pmatrix}b_{11}&r\sqrt{|c|}\\r\sqrt{|b|}&b_{22}\end{pmatrix}=B^T.
\end{gather*}
So,
\begin{equation*}
\rho(A_i)=\|QA_iQ^{-1}\|\quad \textrm{for }0\le i\le K,\quad\textrm{and}\quad\|B^T\|\le\max_{0\le i\le K}\|QA_iQ^{-1}\|.
\end{equation*}
Thus, $w^*=(k^*)$ is a $(Q\bA Q^{-1},1)$-optimal word and, since $QA_{k^*}Q^{-1}$ is symmetric, $(QA_{k^*}Q^{-1})^T(QA_{k^*}Q^{-1})=(QA_{k^*}Q^{-1})^2\in(Q\bA Q^{-1})^2$.
Since $\brho$ and the spectral radii are invariant under the simultaneous similarity $A\mapsto QAQ^{-1}$, Theorem~\ref{thm1} now proves Corollary~\ref{cor3}.
\end{proof}
Corollary~\ref{cor3} generalizes the first part of Theorem~G stated in Section~\ref{sec1}. A special case of Corollary~\ref{cor3} is the following which is of independent interest.
\begin{cor}\label{cor4}
Let $\bA$ consist of
\begin{equation*}
A=\begin{pmatrix}\lambda_1&0\\0&\lambda_2\end{pmatrix}\quad \textrm{and}\quad B=\begin{pmatrix}a&b\\c&d\end{pmatrix}.
\end{equation*}
If $bc\ge0$, then $\brho(\bA)=\max\{\rho(A), \rho(B)\}$.
\end{cor}
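Corollary~\ref{cor4} can be illustrated numerically: with $bc\ge0$, every word $w$ satisfies $\rho(\bA(w))^{1/|w|}\le\max\{\rho(A),\rho(B)\}$, and equality is attained at a single letter. A minimal sketch (the entries below are illustrative, not from the text):

```python
from itertools import product

def matmul2(X, Y):
    return tuple(tuple(X[i][0] * Y[0][j] + X[i][1] * Y[1][j]
                       for j in range(2)) for i in range(2))

def rho2(M):
    # spectral radius of a real 2x2 matrix via the characteristic polynomial
    (p, q), (r, s) = M
    tr, det = p + s, p * s - q * r
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        root = disc ** 0.5
        return max(abs(tr + root), abs(tr - root)) / 2.0
    return abs(det) ** 0.5   # complex conjugate pair

A = ((1.2, 0.0), (0.0, 0.5))     # diagonal
B = ((0.3, 0.8), (0.2, 0.4))     # bc = 0.16 >= 0
bound = max(rho2(A), rho2(B))    # claimed value of the joint spectral radius

worst = 0.0
for n in range(1, 7):
    for w in product((A, B), repeat=n):
        P = w[0]
        for M in w[1:]:
            P = matmul2(P, M)
        worst = max(worst, rho2(P) ** (1.0 / n))
print(bound, worst)   # both approximately 1.2: no word beats the single letter A
```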
Now we are naturally concerned with the following.
\begin{prob}\label{prob2}
What can we say for $\bA$ without the constraint condition $bc\ge0$ in Corollary~\ref{cor4}?
\end{prob}
First, one special case can be settled by a direct observation.
\begin{prop}\label{prop5}
Let $A,B\in\mathbb{R}^{d\times d}$, where $2\le d<\infty$, be a pair of matrices such that
\bean
A=\begin{pmatrix}a_1&0&\dotsm&0\\0&a_2&\dotsm&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\dotsm&a_d\end{pmatrix}\quad\textrm{and}\quad B=\begin{pmatrix}0&\dotsm&0&b_1\\0&\dotsm&b_2&0\\\vdots&{}&\vdots&\vdots\\b_d&\dotsm&0&0\end{pmatrix}.
\eean
Then $\bA=\{A,B\}$ is such that $\brho(\bA)=\max\{\rho(A), \rho(B)\}$.
\end{prop}
\begin{proof}
We only prove the statement in the case $d=3$; the general case is similar.
Replacing $A$ and $B$ by $A/\brho$ and $B/\brho$ if necessary, there is no loss of generality in assuming
$\brho(\bA)=1$. Arguing by contradiction, we assume
\begin{equation*}
\rho(A)=\max\{|a_1|,|a_2|, |a_3|\}<1
\end{equation*}
and
\begin{equation*}
\rho(B)=\max\left\{|b_2|,\sqrt{|b_1b_3|}\right\}<1.
\end{equation*}
Let $\{(m_k,n_k)\}_{k=1}^{+\infty}$ be an arbitrary sequence of positive integer pairs. We claim that
\begin{equation*}
\|A^{m_1}B^{n_1}A^{m_2}B^{n_2}\dotsm A^{m_k}B^{n_k}\|\to0
\end{equation*}
as $k\to+\infty$.
Indeed, the claim follows from the following simple computation:
\begin{gather*}
A^m=\begin{pmatrix}a_1^m&0&0\\0&a_2^m&0\\0&0&a_3^m\end{pmatrix},\quad
B^n=\begin{cases}\begin{pmatrix}(b_1b_3)^{n^\prime}&0&0\\0&b_2^{2n^\prime}&0\\0&0&(b_3b_1)^{n^\prime}\end{pmatrix}& \textrm{if }n=2n^\prime,\\\begin{pmatrix}(b_1b_3)^{n^\prime}&0&0\\0&b_2^{2n^\prime}&0\\0&0&(b_3b_1)^{n^\prime}\end{pmatrix}B& \textrm{if }n=2n^\prime+1;\end{cases}
\end{gather*}
and for any constants $q_i,c_i, d_i$ for $i=1,2,3$,
\begin{gather*}
\begin{pmatrix}q_1&0&0\\0&q_2&0\\0&0&q_3\end{pmatrix}\begin{pmatrix}0&0&c_1\\0&c_2&0\\c_3&0&0\end{pmatrix}
=\begin{pmatrix}0&0&q_1c_1\\0&q_2c_2&0\\q_3c_3&0&0\end{pmatrix},\\
\intertext{and}
\begin{pmatrix}0&0&c_1\\0&c_2&0\\c_3&0&0\end{pmatrix}
\begin{pmatrix}0&0&d_1\\0&d_2&0\\d_3&0&0\end{pmatrix}=\begin{pmatrix}c_1d_3&0&0\\0&c_2d_2&0\\0&0&c_3d_1\end{pmatrix}.
\end{gather*}
Since every product of matrices from $\bA$ has this alternating form, the claim contradicts $\brho(\bA)=1$. Hence $\brho(\bA)=\max\{\rho(A), \rho(B)\}$.
\end{proof}
It should be noted that although $\rho(B)<1$ and $\|A\|<1$ under the contradiction hypothesis in the proof of Proposition~\ref{prop5}, it may still happen that $\|B\|>1$; for example,
\begin{equation*}
B=\begin{pmatrix}0&0&6/5\\0&4/5&0\\2/5&0&0\end{pmatrix}
\end{equation*}
satisfies $\rho(B)=4/5<1$ but $\|B\|=6/5>1$. This is precisely the nontrivial point of the above proof of Proposition~\ref{prop5}.
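These quantities, and the decay of the alternating products from the proof of Proposition~\ref{prop5}, can be checked in pure Python (the diagonal factor $A$ below is an illustrative choice with $\rho(A)<1$): $B^2$ is diagonal with largest entry $16/25$, so $\rho(B)=4/5$, while $\|B\|=6/5$, and yet long alternating products still tend to zero.

```python
def matmul(X, Y):
    n = len(X)
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def spectral_norm(M, iters=300):
    # ||M|| = sqrt(rho(M^T M)) via power iteration
    n = len(M)
    MtM = matmul(tuple(zip(*M)), M)
    v, lam = [1.0] * n, 0.0
    for _ in range(iters):
        w = [sum(MtM[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam ** 0.5

B = ((0.0, 0.0, 1.2), (0.0, 0.8, 0.0), (0.4, 0.0, 0.0))
A = ((0.95, 0.0, 0.0), (0.0, 0.9, 0.0), (0.0, 0.0, 0.85))  # illustrative, rho(A) < 1

B2 = matmul(B, B)                       # diagonal: diag(0.48, 0.64, 0.48)
rhoB = max(B2[i][i] for i in range(3)) ** 0.5
print(rhoB)                             # rho(B) = 4/5 < 1
print(spectral_norm(B))                 # ||B|| = 6/5 > 1

P = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
for _ in range(20):                     # the alternating product (AB)^20
    P = matmul(matmul(P, A), B)
print(spectral_norm(P) < 1e-2)          # True: norms decay despite ||B|| > 1
```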
For Problem~\ref{prob2}, however, we cannot expect a positive answer in general, as the following counterexample shows.
\begin{example}\label{example6}
Let
\begin{equation*}
A_0=\alpha\begin{pmatrix} -3&3.5\\-4&4.5
\end{pmatrix}\quad \textrm{and}\quad A_1=\beta\begin{pmatrix}0.5&0\\0&1\end{pmatrix}
\end{equation*}
where $\alpha,\beta>0$; here $bc=3.5\times(-4)=-14<0$. Then $\bA=\{A_0,A_1\}$ cannot be simultaneously symmetrized, and for some choice of $\alpha,\beta$ the set $\bA$ does not have the spectral finiteness property.
\end{example}
\begin{proof}
Putting $Q=\left(\begin{smallmatrix}-0.5& 1 \\ 0 & 1\end{smallmatrix}\right)$,
we have
\begin{equation*}
B_0:=Q^{-1}A_0Q=\alpha\begin{pmatrix}1 & 0 \\ 2 & 0.5
\end{pmatrix}
\quad\textrm{and}\quad
B_{1}:=Q^{-1}A_1Q=\beta\begin{pmatrix}0.5 & 1 \\ 0 & 1\end{pmatrix}.
\end{equation*}
According to Kozyakin~\cite[Theorem~10, Lemma~12 and Theorem~6]{Koz07}, it follows that there always exists a pair of real numbers $\alpha>0,\beta>0$ such that $\{B_0, B_1\}$ and so $\bA$ do not have the spectral finiteness.
On the other hand, whether $\{A_0,A_1\}$ can be simultaneously symmetrized does not depend on the positive scalars $\alpha,\beta$. So if $\{A_0,A_1\}$ could be simultaneously symmetrized, then by Theorem~A the sets $\{A_0,A_1\}$, and hence $\{B_0,B_1\}$, would have the spectral finiteness property for all $\alpha,\beta>0$, a contradiction. Therefore, $\{A_0,A_1\}$ cannot be simultaneously symmetrized for any $\alpha,\beta>0$.
This proves the statement of Example~\ref{example6}.
\end{proof}
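The similarity used in this proof is easy to verify numerically; a small sketch in pure Python, taking $\alpha=\beta=1$ for the check since the positive scalars factor out:

```python
def matmul2(X, Y):
    return tuple(tuple(X[i][0] * Y[0][j] + X[i][1] * Y[1][j]
                       for j in range(2)) for i in range(2))

def inv2(M):
    (p, q), (r, s) = M
    det = p * s - q * r
    return ((s / det, -q / det), (-r / det, p / det))

A0 = ((-3.0, 3.5), (-4.0, 4.5))       # alpha = 1
A1 = ((0.5, 0.0), (0.0, 1.0))         # beta = 1
Q = ((-0.5, 1.0), (0.0, 1.0))

B0 = matmul2(matmul2(inv2(Q), A0), Q)
B1 = matmul2(matmul2(inv2(Q), A1), Q)

target0 = ((1.0, 0.0), (2.0, 0.5))
target1 = ((0.5, 1.0), (0.0, 1.0))
ok0 = all(abs(B0[i][j] - target0[i][j]) < 1e-12 for i in range(2) for j in range(2))
ok1 = all(abs(B1[i][j] - target1[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok0, ok1)   # True True
```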
This argument also shows that the constraint ``$bc\ge0$" in Corollary~\ref{cor3}, and even in Corollary~\ref{cor4}, is crucial for the spectral finiteness in our situation.
Given an arbitrary set $\bA=\{A_1,\dotsc,A_K\}\subset\mathbb{R}^{d\times d}$, its periodic stability implies that it is stable almost surely with respect to arbitrary Markovian measures, as shown in Dai, Huang and Xiao~\cite{DHX11-aut} for the discrete-time case and in Dai~\cite{Dai-JDE} for the continuous-time case; its absolute stability, however, is generally undecidable; see, e.g., Blondel and Tsitsiklis~\cite{BT97, BT00, BT00-aut}.
However, Corollary~\ref{cor3} is equivalent to the statement\,---\,``periodic stability $\Rightarrow$ absolute stability'', under suitable additional conditions.
\begin{prop}\label{prop7}
Let $\bA$ consist of the following $K+2$ matrices:
\begin{equation*}
A_0=\begin{pmatrix}a&b\\c&d\end{pmatrix},\;A_1=\begin{pmatrix}a_{11}&r_1b\\r_1c&d_{11}\end{pmatrix},\;\dotsc,\;A_K=\begin{pmatrix}a_{KK}&r_Kb\\r_Kc&d_{KK}\end{pmatrix},\; \textrm{and}\; B=\begin{pmatrix}b_{11}&r\sqrt{|b|}\\r\sqrt{|c|}&b_{22}\end{pmatrix},
\end{equation*}
where $r, r_1,\dotsc,r_K$ are all constants, such that $bc\ge0$ and $\|B\|\le\max_{0\le i\le K}\rho(A_i)$.
Then $\bA$ is absolutely stable if and only if $\rho(A_k)<1$ for all $0\le k\le K$.
\end{prop}
\begin{proof}
The statement is obvious and we omit the details here.
\end{proof}
In fact, the absolute stability of $\bA$ is decidable in the situation of Theorem~\ref{thm1}.
\section{Kozyakin's model}\label{sec3}
In \cite{Koz07}, Kozyakin systematically considered the spectral finiteness of $\bA$ consisting of the following two matrices:
\begin{equation*}
A_0=\alpha\begin{pmatrix}a&b\\0&1\end{pmatrix}\quad \textrm{and} \quad A_1=\beta\begin{pmatrix}1&0\\c&d\end{pmatrix},
\end{equation*}
where $a,b,c,d,\alpha$, and $\beta$ are all real constants, such that
\begin{equation*}
\alpha,\beta>0\quad \textrm{and}\quad bc\ge1\ge a>0,\;d>0.\tag*{$(\mathrm{K})$}
\end{equation*}
Let $\brho=\brho(\bA)$. We first note that from \cite{Bar} there exists a Barabanov norm $\pmb{\|}\cdot\pmb{\|}$ on $\mathbb{R}^2$; i.e.,
$$
\brho\pmb{\|}x\pmb{\|}=\max\{\pmb{\|}A_0x\pmb{\|}, \pmb{\|}A_1x\pmb{\|}\}\quad\forall x\in\mathbb{R}^2.
$$
And so for any $x_0\in\mathbb{R}^2\setminus\{0\}$, one can find a corresponding (B-extremal) switching law
$$
\mathfrak{i}_{\bcdot}(x_0)\colon\{1,2,\dotsc\}\rightarrow\{0,1\}
$$
such that
$$\pmb{\|}A_{\mathfrak{i}_n(x_0)}\dotsm A_{\mathfrak{i}_1(x_0)}x_0\pmb{\|}=\pmb{\|}x_0\pmb{\|}\brho^n\quad\forall n\ge1.$$
Then from Kozyakin~\cite[Theorem~6]{Koz07}, it follows that there exists the limit
\begin{equation*}
\sigma(\bA):=\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\mathfrak{i}_k(x_0),
\end{equation*}
called the \textit{switching frequency} of $\bA$, which does not depend on the choices of $x_0$ and the (B-extremal) switching law $\mathfrak{i}_{\bcdot}(x_0)$.
Kozyakin (cf.~\cite[Theorem~10]{Koz07}) asserted that if $\sigma(\bA)$ is irrational, then $\bA$ does not have the spectral finiteness property. We now show that the converse also holds.
\begin{theorem}\label{thm8}
Under condition $(\mathrm{K})$, $\bA$ has the spectral finiteness property if and only if its switching frequency $\sigma(\bA)$ is rational.
\end{theorem}
\begin{proof}
If $\sigma(\bA)$ is an irrational number, then it follows from \cite[Theorem~10]{Koz07} that $\bA$ does not have the spectral finiteness property.
Next, assume $\sigma(\bA)$ is rational. Then \cite[Theorem~6]{Koz07} implies that one can find some $x_0\in\mathbb{R}^2\setminus\{0\}$ and a corresponding
periodic switching law, say
\begin{equation*}
\mathfrak{i}_{\bcdot}(x_0)=(\uwave{i_1i_2\dotsm i_\pi}\,\uwave{i_1i_2\dotsm i_\pi}\,\uwave{i_1i_2\dotsm i_\pi}\,\dotsm),
\end{equation*}
such that
$$\pmb{\|}A_{\mathfrak{i}_n(x_0)}\dotsm A_{\mathfrak{i}_1(x_0)}x_0\pmb{\|}=\pmb{\|}x_0\pmb{\|}\brho^n\quad\forall n\ge1,$$
where $\brho=\brho(\bA)$. Therefore, it holds that
$$\pmb{\|}(A_{i_\pi}\dotsm A_{i_1})^n\pmb{\|}\ge\brho^{n\pi}\quad\forall n\ge1.$$
Moreover, from the classical Gel'fand spectral formula we have
\bean
\rho(A_{i_\pi}\dotsm A_{i_1})=\lim_{n\to\infty}\sqrt[n]{\pmb{\|}(A_{i_\pi}\dotsm A_{i_1})^n\pmb{\|}}\ge\brho^{\pi}.
\eean
On the other hand, $\rho(\bA(w))\le\brho^{|w|}$ holds for every word $w$. Thus $\brho(\bA)=\sqrt[\pi]{\rho(A_{i_\pi}\dotsm A_{i_1})}$, which establishes the spectral finiteness property.
This completes the proof of Theorem~\ref{thm8}.
\end{proof}
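The Gel'fand step in the proof can be illustrated numerically: for any single matrix and any submultiplicative norm, $\|M^n\|^{1/n}\to\rho(M)$ as $n\to\infty$. A sketch with an illustrative upper-triangular matrix and the Frobenius norm (the convergence is slow, of order $n^{-1}\log n$, which is visible in the tolerance):

```python
def matmul2(X, Y):
    return tuple(tuple(X[i][0] * Y[0][j] + X[i][1] * Y[1][j]
                       for j in range(2)) for i in range(2))

def frob(M):
    # Frobenius norm; any submultiplicative norm works in Gel'fand's formula
    return sum(x * x for row in M for x in row) ** 0.5

M = ((0.9, 5.0), (0.0, 0.8))   # rho(M) = 0.9, but ||M|| is much larger
P, n = M, 1
while n < 400:
    P = matmul2(P, M)
    n += 1
est = frob(P) ** (1.0 / n)
print(est)   # close to rho(M) = 0.9
```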
This result improves \cite[Theorem~10]{Koz07} and should prove convenient in applications. Let us consider an explicit example.
\begin{example}\label{example9}
Let $\bB=\{B_0,B_1\}$ be such that
\bean
B_0=\begin{pmatrix}a&b\\0&1\end{pmatrix}\quad \textrm{and} \quad B_1=\begin{pmatrix}1&0\\c&d\end{pmatrix},
\eean
where $a,b,c,d\in\mathbb{R}$.
\end{example}
We will divide our arguments into several cases.
1). If $ad=0$ then we have either $\mathrm{rank}(B_0)=1$ or $\mathrm{rank}(B_1)=1$ and so $\bB$ has the spectral finiteness from Theorem~H stated in Section~\ref{sec1}.
2). If $bc=0$ then $\bB$ has the spectral finiteness from Corollary~\ref{cor4} stated in Section~\ref{sec2}.
3). If $a<0$ and $d<0$, then $\bB$ has the spectral finiteness from Theorem~E stated in Section~\ref{sec1}.
4). If $a=d$ and $b=c$, then $\bB$ has the spectral finiteness from Theorem~F stated in Section~\ref{sec1} such that $\brho(\bB)=\max\left\{\rho(B_0), \sqrt{\rho(B_0B_1)}\right\}$.
5). Next, let $ad\not=0, bc\not=0$, and define
\begin{equation*}
Q=\begin{pmatrix}\frac{a-1}{b}&1\\0&1\end{pmatrix}.
\end{equation*}
When $a\not=1$, we can obtain that
\begin{equation*}
QB_0Q^{-1}=\begin{pmatrix}a&0\\0&1\end{pmatrix}\quad \textrm{and}\quad QB_1Q^{-1}=\begin{pmatrix}1+\frac{bc}{a-1}&\frac{(d-1)(a-1)-bc}{a-1}\\\frac{bc}{a-1}&d-\frac{bc}{a-1}\end{pmatrix}.
\end{equation*}
Note that
\begin{equation*}
\frac{(d-1)(a-1)-bc}{a-1}\times\frac{bc}{a-1}\ge0\quad
\mathrm{iff}\quad
[(1-a)(1-d)-bc]\times bc\ge0.
\end{equation*}
Hence, if
\begin{equation*}
[(1-a)(1-d)-bc]\times bc\ge0,
\end{equation*}
then from Corollary~\ref{cor4}, it follows that $\bB$ has the spectral finiteness such that
$$\brho(\bB)=\max\{\rho(B_0), \rho(B_1)\}.$$
6). If $a=d=1$ and $bc\ge1$, then $\bB$ has the spectral finiteness from Theorem~\ref{thm8}.
Indeed in this case, \cite[Lemma~12]{Koz07} implies that the switching frequency $\sigma(\bB)=\frac{1}{2}$ is rational, and then Theorem~\ref{thm8}
implies the spectral finiteness of $\bB$.
We notice that our cases 1)\,--\,5) are beyond Kozyakin's condition $(\mathrm{K})$.
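The similarity used in case 5) can be confirmed numerically for sample parameters (here $a=2$, $b=3$, $c=4$, $d=5$, chosen only for illustration, with $a\neq1$, $ad\neq0$, $bc\neq0$):

```python
def matmul2(X, Y):
    return tuple(tuple(X[i][0] * Y[0][j] + X[i][1] * Y[1][j]
                       for j in range(2)) for i in range(2))

def inv2(M):
    (p, q), (r, s) = M
    det = p * s - q * r
    return ((s / det, -q / det), (-r / det, p / det))

a, b, c, d = 2.0, 3.0, 4.0, 5.0        # illustrative sample values
B0 = ((a, b), (0.0, 1.0))
B1 = ((1.0, 0.0), (c, d))
Q = (((a - 1.0) / b, 1.0), (0.0, 1.0))

C0 = matmul2(matmul2(Q, B0), inv2(Q))   # should be diag(a, 1)
C1 = matmul2(matmul2(Q, B1), inv2(Q))   # should match the displayed formula

t = b * c / (a - 1.0)
target1 = ((1.0 + t, ((d - 1.0) * (a - 1.0) - b * c) / (a - 1.0)), (t, d - t))
ok0 = all(abs(C0[i][j] - ((a, 0.0), (0.0, 1.0))[i][j]) < 1e-9
          for i in range(2) for j in range(2))
ok1 = all(abs(C1[i][j] - target1[i][j]) < 1e-9
          for i in range(2) for j in range(2))
print(ok0, ok1)   # True True
```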
\section*{\textbf{Acknowledgments}}
\noindent The author would like to thank Professors Y.~Huang and M.~Xiao for helpful discussions, and is particularly grateful to Professor Victor Kozyakin for his helpful comments on Theorem~\ref{thm8}.
\bibliographystyle{amsplain}
https://arxiv.org/abs/0812.3356 | Definite integrals by the method of brackets. Part 1 | A new heuristic method for the evaluation of definite integrals is presented. This method of brackets has its origin in methods developed for the evaluation of Feynman diagrams. We describe the operational rules and illustrate the method with several examples. The method of brackets reduces the evaluation of a large class of definite integrals to the solution of a linear system of equations. | \section{Introduction} \label{sec-intro}
\setcounter{equation}{0}
The problem of analytic evaluations of definite integrals has been of
interest to scientists since Integral Calculus was developed. The central
question can be stated vaguely as follows: \\
\begin{center}
{\em given a class of
functions} $\mathfrak{F}$ {\em and an
interval} $[a,b] \subset \mathbb{R}$, {\em express the integral of} $f \in
\mathfrak{F}$
\begin{equation}
I = \int_{a}^{b} f(x) \, dx,
\nonumber
\end{equation}
\noindent
{\em in terms of the special values of functions in an enlarged class}
$\mathfrak{G}$.
\end{center}
\medskip
For instance, by elementary
arguments it is possible to show that if $\mathfrak{F}$ is the
class of rational functions, then the enlarged class $\mathfrak{G}$ can be
obtained by including logarithms and inverse trigonometric
functions. G. Cherry has
discussed in \cite{cherry1}, \cite{cherry2} and \cite{cherry5}
extensions of this classical paradigm.
The following results illustrate the idea:
\begin{equation}
\int \frac{x^{3} \, dx}{\ln(x^{2}-1)} =
\frac{1}{2} \text{li}(x^{4}-2x^{2}+1) + \frac{1}{2} \text{li}(x^{2}-1),
\label{ex-1}
\end{equation}
\noindent
but
\begin{equation}
\int \frac{x^{2} \, dx}{\ln(x^{2}-1)}
\end{equation}
\noindent
cannot be written in terms of elementary functions and the
logarithmic integral
\begin{equation}
\text{li}(x) := \int \frac{dx}{\ln x}
\end{equation}
\noindent
that appears in (\ref{ex-1}). The reader will find in \cite{bronstein2}
the complete theory behind integration in terms of elementary functions.
Methods for the evaluation of definite integrals were also developed since
the early stages of Integral Calculus. Unfortunately, these are mostly
ad-hoc procedures and a general theory needs to be developed. The method
proposed in this paper represents a new addition to these procedures.
The evaluations of definite integrals have been collected in
tables of integrals.
The earliest volume available to the authors
is \cite{bierens1}, compiled by Bierens de Haan who also presented
in \cite{bierens2} a survey of the methods employed in the verification of
the entries. These tables form the main source for the
popular volume by I. S. Gradshteyn and I. M. Ryzhik \cite{gr}.
Naturally any document containing a
large number of entries, such as the table \cite{gr} or the encyclopedic
treatise \cite{prudnikov1}, is
likely to contain errors.
For instance, the appealing
integral
\begin{equation}
I = \int_{0}^{\infty} \frac{dx}{(1+x^2)^{3/2} \left[ \varphi(x) + \sqrt{\varphi(x) }
\right]^{1/2}} = \frac{\pi}{2 \sqrt{6}},
\label{wrong}
\end{equation}
\noindent
with
\begin{equation}
\varphi(x) = 1 + \frac{4x^{2}}{3(1+x^{2})^{2}},
\end{equation}
that appears as entry $3.248.5$ in \cite{gr6}, the sixth
edition of the mentioned table, is incorrect. The numerical value of $I$ is
$0.666377$ and the right hand side of (\ref{wrong}) is about $0.641275$.
The table \cite{gr} is in the process of being revised. After we informed
the editors of the error in $3.248.5$, it was taken out. There is no entry
$3.248.5$ in \cite{gr}.
At the present time, we are unable to evaluate the integral $I$.
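The numerical claim is easy to reproduce; a sketch using the midpoint rule after mapping $[0,\infty)$ onto $(0,1)$ by $x=t/(1-t)$:

```python
import math

def f(x):
    # integrand of entry 3.248.5 of the sixth edition
    phi = 1.0 + 4.0 * x * x / (3.0 * (1.0 + x * x) ** 2)
    return 1.0 / ((1.0 + x * x) ** 1.5 * (phi + math.sqrt(phi)) ** 0.5)

N = 200000
total = 0.0
for k in range(N):
    t = (k + 0.5) / N
    x = t / (1.0 - t)
    total += f(x) / (1.0 - t) ** 2    # dx = dt/(1-t)^2
I = total / N

print(I)                                  # approximately 0.666377
print(math.pi / (2.0 * math.sqrt(6.0)))   # approximately 0.641275: the entry fails
```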
The revision of integral tables
is nothing new. C. F. Lindman \cite{lindman1} compiled a long list of
errors from the table by Bierens de Haan \cite{bierens3}. The editors of
\cite{gr} maintain the webpage
\begin{verbatim} http://www.mathtable.com/gr/\end{verbatim}
\noindent
where the corrections to the table are
stored. The second author
has begun in \cite{moll-gr5,moll-gr7,moll-gr1,moll-gr2,moll-gr3,
moll-gr4,moll-gr6,moll-gr8} a systematic verification of the entries in
\cite{gr}. It is in this task that the method proposed in the present
article becomes a valuable tool.
The {\em method of brackets} presented here, even
though it is heuristic and
still lacking a rigorous description, is quite powerful. Moreover, it
is quite simple to work with: the evaluation of a definite integral is
reduced to solving a linear system of equations. Many of the entries
of \cite{gr} can be derived using this method.
The basic idea behind it is the assignment
of a {\em bracket} $\langle{a \rangle}$ to any parameter $a$.
This is a symbol associated to the divergent integral
\begin{equation}
\int_{0}^{\infty} x^{a -1} \, dx.
\end{equation}
\noindent
The formal rules for operating with these brackets are described in Section
\ref{sec-bracket}; their justification is work in progress, on which we expect
to report in the near future. The rest of the paper
provides a list of examples
illustrating the new technique.
Given a formal sum
\begin{equation}
f(x) = \sum_{n=0}^{\infty} a_{n}x^{\alpha n + \beta -1}
\end{equation}
\noindent
we associate to the integral of $f$ a {\em bracket series} written as
\begin{equation}
\int_{0}^{\infty} f(x) \, dx \stackrel{\bullet}{=} \sum_{n} a_{n} \langle{ \alpha n + \beta \rangle},
\end{equation}
\noindent
to keep in mind the formality of the method described in this paper.
Convergence
issues are ignored at the present time. Moreover only integrals over the
half-line $[0, \infty)$ will be considered. \\
\noindent
{\bf Note}. In the evaluation of these formal sums, the index
$n \in \mathbb{N}$ will be replaced by a number $n^{*}$
defined by the vanishing
of the bracket. Observe that it is possible that $n^{*} \in \mathbb{C}$.
For book-keeping purposes, especially in cases
with many indices, we write $\displaystyle{\sum_{n}}$ instead of the usual
$\displaystyle{\sum_{n=0}^{\infty}}$. After the brackets are eliminated, those
indices that remain recover their original nature. \\
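As a preview of the operational rules of Section~\ref{sec-bracket}: the basic one-index rule (in our paraphrase of the method, stated here as an assumption) assigns to the bracket series $\sum_{n} \tfrac{(-1)^{n}}{n!}\, f(n)\, \langle \alpha n+\beta\rangle$ the value $\tfrac{1}{\alpha}f(n^{*})\,\Gamma(-n^{*})$, where $n^{*}$ solves $\alpha n^{*}+\beta=0$. Applied to $\int_{0}^{\infty}x^{s-1}e^{-x}\,dx$ (so $f\equiv1$, $\alpha=1$, $\beta=s$, $n^{*}=-s$) it assigns the value $\Gamma(s)$, which the sketch below compares with direct numerical integration:

```python
import math

def bracket_value(alpha, beta, f):
    # one-index rule: sum_n (-1)^n/n! f(n) <alpha n + beta>  |->  f(n*) Gamma(-n*) / alpha
    n_star = -beta / alpha
    return f(n_star) * math.gamma(-n_star) / alpha

s = 2.5
val = bracket_value(1.0, s, lambda n: 1.0)   # assigned value: Gamma(s)

# direct numerical check of int_0^infty x^(s-1) e^(-x) dx via x = t/(1-t)
N = 200000
total = 0.0
for k in range(N):
    t = (k + 0.5) / N
    x = t / (1.0 - t)
    total += x ** (s - 1.0) * math.exp(-x) / (1.0 - t) ** 2
num = total / N

print(val, num)   # both approximately Gamma(2.5) = 1.32934...
```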
The rules of
operation described below assign a {\em value}
to the bracket series. The claim is
that for a large class of integrands, including all the examples
described here, this formal procedure
provides the actual value of the integral. Many
of the examples involve the hypergeometric function
\begin{equation}
{_{p}F_{q}}(a_{1}, \cdots, a_{p}; b_{1}, \cdots, b_{q}; z) := \sum_{n=0}^{\infty}
\frac{(a_{1})_{n} \, (a_{2})_{n} \cdots (a_{p})_{n}}
{(b_{1})_{n} \, (b_{2})_{n} \cdots (b_{q})_{n}} \frac{z^{n}}{n!}.
\end{equation}
\noindent
This series converges absolutely for all $z \in \mathbb{C}$ if $p \leq q$ and
for $|z|< 1$ if $p = q+1$. The series diverges for all $z \neq 0$ if
$p > q+1$ unless the series terminates. The special case $p = q+1$ is of
great interest. In this special case and with $|z|=1$, the series
\begin{equation}
{_{q+1}F_{q}}(a_{1}, \cdots, a_{q+1}; b_{1}, \cdots, b_{q};z)
\end{equation}
\noindent
converges absolutely if $\mathop{\rm Re}\nolimits{ \left(\sum b_{j} - \sum a_{j}
\right) } >0$. The series converges conditionally if $z = e^{i \theta} \neq 1$
and $0 \geq \mathop{\rm Re}\nolimits{ \left(\sum b_{j} - \sum a_{j} \right)} > -1$
and the series diverges if
$\mathop{\rm Re}\nolimits{ \left(\sum b_{j} - \sum a_{j} \right)} \leq -1$.
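For the absolutely convergent range $|z|<1$, the truncated series can be checked against a classical closed form, ${_2F_1}(1,1;2;z)=-\ln(1-z)/z$. A short sketch using the term-ratio recurrence of the series:

```python
import math

def pFq(a_list, b_list, z, terms=200):
    # truncated hypergeometric series; term ratio is prod(a+n)/prod(b+n) * z/(n+1)
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        ratio = 1.0
        for a in a_list:
            ratio *= (a + n)
        for b in b_list:
            ratio /= (b + n)
        term *= ratio * z / (n + 1.0)
    return total

z = 0.5
val = pFq([1.0, 1.0], [2.0], z)
print(val, -math.log(1.0 - z) / z)   # both approximately 1.3862943...
```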
{{
\begin{figure}[ht]
\begin{center}
\centerline{\epsfig{file=Triangle.eps,width=20em,angle=0}}
\caption{The triangle}
\label{figure0}
\end{center}
\end{figure}
}}
The last section of this paper
employs the method of brackets to evaluate
certain definite integrals associated to a Feynman diagram. From the
present point of view, a {\em Feynman diagram} is simply a generic graph $G$
that contains $E+1$ {\em external} lines and $N$ {\em internal} lines or
{\em propagators} and $L$ {\em loops}. All but one of these external lines
are assumed independent. The internal and
external lines represent particles that transfer momentum among the vertices
of the diagram. Each of these particles
carries a {\em mass} $m_{i} \geq 0$ for $i=1,\cdots, N$. The
vertices represent the interaction of these
particles and conservation of momentum at each vertex assigns the momentum
corresponding to the internal lines. A Feynman diagram has an associated
integral given by the {\em parametrization} of the
diagram. For example, in Figure \ref{figure0}
we have three external lines represented by the momentum $P_{1}, \, P_{2},
\, P_{3}$ and one loop. The parameters $a_{i}$ are arbitrary real
numbers. The integral
associated to this diagram is given by
\begin{eqnarray}
G & = & \frac{(-1)^{-D/2}}{\Gamma(a_{1})\Gamma(a_{2})\Gamma(a_{3})}
\int_{0}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} \frac{x_{1}^{a_{1}-1} x_{2}^{a_{2}-1} x_{3}^{a_{3}-1} }
{(x_{1}+x_{2}+x_{3})^{D/2}}
\nonumber \\
& \times & \text{exp}(x_{1}m_{1}^{2} + x_{2}m_{2}^{2} + x_{3}m_{3}^{2})
\, \, \text{exp} \left( -
\frac{C_{11}P_{1}^{2} + 2C_{12}P_{1} \cdot P_{2} + C_{22}P_{2}^{2} }
{x_{1}+x_{2}+x_{3}} \right) {\mathbf{dx}},
\nonumber
\end{eqnarray}
\noindent
where ${\mathbf{dx}} := dx_{1} \, dx_{2} \, dx_{3}$. The
evaluation of this integral in terms of the variables $P_{i} \in
{\mathbb{R}}^{4}, \, m_{i} \in \mathbb{R}$ and $a_{i} \in \mathbb{R}$
is the {\em solution of the Feynman diagram}. The functions $C_{ij}$ are
polynomials described in Section \ref{sec-feynman}.
The method of brackets presented here has its origin in quantum field
theory (QFT). A
version of the method of brackets was developed to address one of
the fundamental questions in QFT: the evaluation of loop integrals arising
from Feynman diagrams. As described above, these
are directed graphs depicting the interaction
of particles in the model. The loop integrals depend on the dimension $D$
and one of the (many) intrinsic difficulties
is related to their divergence at $D=4$, the dimension of
the physical world. A correction to this problem is obtained by
taking $D= 4 - 2 \epsilon$ and considering a Laurent expansion in powers of
$\epsilon$. This is called the {\em dimensional regularization} \cite{bollini}
and the parameter $\epsilon$ is the {\em dimensional regulator}.
The method of brackets discussed in this paper is based on previous results by
I. G. Halliday, R. M. Ricotta and G. V. Dunne
\cite{dunne-1987},
\cite{dunne-1989} and
\cite{halliday-1987}. The work involves an analytic extension of $D$ to
negative values, so the method was labelled NDIM (negative dimensional
integration method). The validity of this continuation is based on the
observation that the objects associated to a Feynman diagram (loop integrals
as well as the functions linked to propagators) are analytic in the dimension
$D$. A. Suzuki and A. Schmidt applied this technique to the evaluation of
diagrams with two loops \cite{suzuki-twoa}, \cite{suzuki-twoa1}; three loops
\cite{suzuki-three}; tensorial integrals \cite{suzuki-tensor} and massive
integrals with one loop \cite{suzuki-massive2}, \cite{suzuki-massive},
\cite{suzuki-massive1}. An extensive use of this
method as well as an analysis of the solutions
was provided by C. Anastasiou and E. Glover in \cite{anastasiou-a} and
\cite{anastasiou-b}. The conclusion of these studies is that the NDIM method
is inadequate to the evaluation of Feynman diagrams with an arbitrary number
of loops. The proposed solutions involve hypergeometric functions with a
large number of parameters. By establishing new procedural rules,
I. Gonzalez and I. Schmidt \cite{gonzalez-2007}, \cite{gonzalez-2008}
have shown that a modification of the previous procedures now permits
the evaluation of more complex Feynman diagrams. One of the results of
\cite{gonzalez-2007}, \cite{gonzalez-2008} is the justification of the
method of brackets in terms of arguments derived from fractional calculus.
The authors have given NDIM the alternative name IBFE (Integration by
Fractional Expansion).
From the mathematical
point of view, the NDIM method has been used to provide evaluation of a
very limited type of integrals \cite{suzuki-int2}, \cite{suzuki-int1}. The
examples presented in this paper show the great flexibility of the method
of brackets. A systematic study of integrals arising from Feynman diagrams
is in preparation.
\section{A detour on definite integrals} \label{sec-detour}
\setcounter{equation}{0}
The literature contains a large variety of techniques for
the evaluation of definite integrals.
Elementary techniques are surveyed in classical texts such as
\cite{edwards2} and \cite{fichtenholz1}. The text
\cite{antimirov1} contains an excellent collection of problems solved by
the method of contour integration. The reader will find in
\cite{irrbook} a discussion of several elementary analytic methods involved
in the evaluation of integrals.
It is hard to predict the
type of techniques required for the evaluation of a specific definite
integral. For instance,
\cite{vardi1} contains a detailed account of the
proof of
\begin{equation}
\int_{\pi/4}^{\pi/2} \ln \ln \tan x \, dx = \frac{\pi}{2}
\ln \left( \frac{\sqrt{2 \pi} \Gamma \left( \tfrac{3}{4} \right) }
{\Gamma \left( \tfrac{1}{4} \right) } \right),
\end{equation}
\noindent
that appears as formula $4.229.7$ in \cite{gr}.
This particular example involves the use of $L$-functions
\begin{equation}
L_{\chi}(s) = \sum_{n=1}^{\infty} \frac{\chi(n)}{n^{s}},
\end{equation}
\noindent
where $\chi$ is a character. This is a generalization of the Riemann zeta
function $\zeta(s)$ (corresponding
to $\chi \equiv 1$). Vardi's technique
has been extended in \cite{luis2} to provide a systematic study of
integrals of the form
\begin{equation}
I_{Q} = \int_{0}^{1} Q(x) \ln \ln 1/x \, dx,
\end{equation}
\noindent
that gives evaluations such as
\begin{eqnarray}
\int_{0}^{1} \frac{x \ln \ln 1/x \, dx}{(x+1)^{2}} & = &
\frac{1}{2} ( - \ln^{2}2 + \gamma - \ln \pi + \ln 2 ), \\
\int_{0}^{\infty} \ln x \ln \tanh x \, dx & = &
\frac{\gamma \pi^{2}}{8} - \frac{3}{4} \zeta'(2) + \frac{\pi^{2} \ln 2}{12}.
\end{eqnarray}
\noindent
Here $\gamma = - \Gamma'(1)$ is Euler's constant.
A second class of examples appeared
during the evaluation of definite integrals
related to the Hurwitz zeta function
\begin{equation}
\zeta(z,q) = \sum_{n=0}^{\infty} \frac{1}{(n+q)^{z}}.
\end{equation}
\noindent
In \cite{espmoll1} the authors found an
evaluation that generalizes the classical integral
\begin{equation}
L_{1} := \int_{0}^{1} \ln \Gamma(q) \, dq = \ln \sqrt{2 \pi},
\label{L1-value}
\end{equation}
\noindent
namely
\begin{equation}
L_{2} := \int_{0}^{1} \ln^{2} \Gamma(q) \, dq =
\frac{\gamma^{2}}{12} + \frac{\pi^{2}}{48} + \frac{\gamma L_{1}}{3}
+ \frac{4}{3} L_{1}^{2} - \frac{A \zeta'(2)}{\pi^{2}} +
\frac{\zeta''(2)}{2 \pi^{2}},
\end{equation}
\noindent
where $L_{1}$ is in (\ref{L1-value}) and
$A = \gamma + \ln 2 \pi$. The natural
next step, namely the evaluation of
\begin{equation}
L_{3} := \int_{0}^{1} \ln^{3} \Gamma(q) \, dq,
\end{equation}
\noindent
remains to be completed. In \cite{espmoll3} and \cite{espmoll6} the reader
will find a relation between $L_{3}$ and the Tornheim sums $T(m,k,n)$,
for $m, k, n \in \mathbb{N}$. These sums are defined by
\begin{equation}
T(a,b,c) := \sum_{j_{1}=1}^{\infty} \sum_{j_{2}=1}^{\infty}
\frac{1}{j_{1}^{a} \, j_{2}^{b} \, (j_{1} + j_{2} )^{c}}.
\end{equation}
\noindent
The special case
\begin{equation}
T(n,0,m) = \sum_{j_{2} > j_{1}} \frac{1}{j_{1}^{n} j_{2}^{m}},
\end{equation}
\noindent
obtained via the substitution $j_{2} \mapsto j_{1}+j_{2}$, corresponds to the multiple zeta value (MZV) $\zeta(m,n)$ of
depth $2$. The MZV is given by
\begin{equation}
\zeta(n_{1},n_{2}, \cdots, n_{r}) := \sum_{j_{1} > j_{2} > \cdots > j_{r}}
\frac{1}{j_{1}^{n_{1}} \, j_{2}^{n_{2}} \cdots j_{r}^{n_{r}} },
\end{equation}
\noindent
where the parameter $r$ is called the depth of the sum. These series were
initially considered by Euler and have recently appeared in many different
places. The reader will find in \cite{kreimer1} a description of how these
sums are connected to knots and Feynman diagrams. These diagrams
are a very rich source of interesting integrals. The last section of this
paper is dedicated to the evaluation of some of these integrals by the
method of brackets.
The computation of hyperbolic volumes of $3$-manifolds provides a
different source of interesting integrals. Mostow's rigidity theorem states
that a finite-volume hyperbolic $3$-manifold has a unique hyperbolic structure. In
particular its volume is a topological invariant. An interesting class of
such $3$-manifolds is provided by
complements of hyperbolic knots or links in $S^{3}$. The reader will find
information about this topic in the articles by C. Adams and J. Weeks in
\cite{menasco}. It turns out that
their hyperbolic structure can be given in
terms of hyperbolic tetrahedra \cite{adams1}. Milnor \cite{milnor82}
describes how the volume of these tetrahedra can be expressed in terms of
the Clausen function
\begin{equation}
\text{Cl}_{2}(\theta) := - \int_{0}^{\theta} \log | 2 \sin \frac{u}{2} | \, du.
\label{clausen-1}
\end{equation}
\noindent
The reader will find in \cite{maclachlan-reid} a discussion on arithmetic
properties of $3$-manifolds. In particular, Chapter 11 has up-to-date
information on their volumes.
Zagier \cite{zagier2} provided an arithmetic version of these computations
in his study of the Dedekind zeta function
\begin{equation}
\zeta_{K}(s) := \sum_{\frak{a}} \frac{1}{N(\frak{a})^{s}},
\end{equation}
\noindent
for a number field
$K$ that is not totally real. Here $N(\frak{a})$ is the norm of the
ideal $\frak{a}$ and the sum runs over all the nonzero integral ideals of
$K$. In the case of totally real number fields a classical result of
Siegel shows that $\zeta_{K}(2m)$ is a rational multiple
of $\pi^{2nm}/\sqrt{D}$, where $n$ and $D$ denote the degree and the
discriminant of $K$, respectively. Little is known in the non-totally real
situation. Zagier \cite{zagier2}
proves that $\zeta_{K}(2)$ is given by a finite sum of values of
\begin{equation}
A(x) := \int_{0}^{x} \frac{1}{1+t^{2}} \log \frac{4}{1+t^{2}} \, dt.
\end{equation}
\noindent
The function $A$ can be written as
\begin{equation}
A(x) = \text{Cl}_{2}(\pi - 2 \tan^{-1}x) - \text{Cl}_{2}(\pi).
\end{equation}
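Since $\text{Cl}_{2}(\pi) = 0$, this relation gives $A(1) = \text{Cl}_{2}(\pi/2)$, which is Catalan's constant. A quick numerical check (illustrative only; it uses the standard Fourier series $\text{Cl}_{2}(\theta) = \sum_{n \geq 1} \sin(n \theta)/n^{2}$, which is not stated in the text):

```python
import math

# A(1) computed by Simpson quadrature from the defining integral,
# compared with Cl_2(pi/2) from the Fourier series of Clausen's function.
def simpson(f, a, b, n=2000):            # composite Simpson rule, n even
    h = (b - a) / n
    return (f(a) + f(b)
            + 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n//2 + 1))
            + 2 * sum(f(a + 2*k*h) for k in range(1, n//2))) * h / 3

A1 = simpson(lambda t: math.log(4.0 / (1.0 + t*t)) / (1.0 + t*t), 0.0, 1.0)
cl2_half_pi = sum(math.sin(n * math.pi / 2) / n**2 for n in range(1, 20001))
print(A1, cl2_half_pi)   # both approximate Catalan's constant 0.9159655...
```
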
\noindent
Moreover, he conjectured that $\zeta_{K}(2m)$ can be given in terms of
\begin{equation}
A_{m}(x) := \frac{2^{2m-1}}{(2m-1)!}
\int_{0}^{\infty} \frac{t^{2m-1} \, dt}{x \sinh^{2}t + x^{-1} \cosh^{2}t}.
\end{equation}
\noindent
The conjecture is established in the special case where $K$ is an abelian
extension of $\mathbb{Q}$. The example $K = \mathbb{Q}(\sqrt{-7})$ yields
\begin{equation}
\zeta_{\mathbb{Q}(\sqrt{-7})}(2) = \frac{\pi^{2}}{3 \sqrt{7}}
\left(
A \left( \cot \tfrac{\pi}{7} \right) +
A \left( \cot \tfrac{2 \pi}{7} \right) +
A \left( \cot \tfrac{4 \pi}{7} \right) \right)
\end{equation}
\noindent
and also
\begin{equation}
\zeta_{\mathbb{Q}(\sqrt{-7})}(2) = \frac{2 \pi^{2}}{7 \sqrt{7}}
\left( 2 A (\sqrt{7}) +
A( \sqrt{7} + 2 \sqrt{3} ) +
A(\sqrt{7} - 2 \sqrt{3}) \right),
\end{equation}
\noindent
leading to the new Clausen identity
\begin{equation}
A \left( \cot \tfrac{\pi}{7} \right) +
A \left( \cot \tfrac{2 \pi}{7} \right) +
A \left( \cot \tfrac{4 \pi}{7} \right) =
\tfrac{6}{7} \left(
2 A (\sqrt{7}) +
A( \sqrt{7} + 2 \sqrt{3} ) +
A(\sqrt{7} - 2 \sqrt{3}) \right).
\nonumber
\end{equation}
\noindent
Zagier stated in $1986$ that there was no direct proof of this identity;
to this day a direct proof has eluded considerable effort. The
famous text of Lewin \cite{lewin1} contains many parametric identities of
this type, but it misses this one.
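The identity can at least be confirmed numerically. The Python check below (illustrative; function names are our own) uses $A(x) = \text{Cl}_{2}(\pi - 2 \tan^{-1} x)$, valid since $\text{Cl}_{2}(\pi) = 0$, together with the Fourier series $\text{Cl}_{2}(t) = \sum_{n \geq 1} \sin(nt)/n^{2}$.

```python
import math

# Numerical test of Zagier's identity via the Fourier series of Cl_2
# and the representation A(x) = Cl_2(pi - 2*atan(x)).
def cl2(t, terms=80000):
    return sum(math.sin(n * t) / (n * n) for n in range(1, terms + 1))

def A(x):
    return cl2(math.pi - 2.0 * math.atan(x))

cot = lambda y: 1.0 / math.tan(y)
lhs = A(cot(math.pi/7)) + A(cot(2*math.pi/7)) + A(cot(4*math.pi/7))
rhs = (6.0/7.0) * (2*A(math.sqrt(7))
                   + A(math.sqrt(7) + 2*math.sqrt(3))
                   + A(math.sqrt(7) - 2*math.sqrt(3)))
print(lhs, rhs)   # the two sides agree to high accuracy
```
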
R. Crandall \cite{crandall-08}
has worked out a theory in which certain Clausen identities are seen to be
equivalent to the vanishing of log-rational integrals. \\
J. Borwein and D. Broadhurst \cite{bor-broad1998} identified a large
number of finite volume hyperbolic $3$-manifolds whose volumes are
expressed in the form
\begin{equation}
\frac{a}{b} \text{vol}(\frak{M}) =
\frac{(-D)^{3/2} }{(2 \pi)^{2n-4}} \frac{\zeta_{K}(2)}{2 \zeta(2)}.
\label{bor-bro}
\end{equation}
\noindent
Here $K$ is a field associated to the manifold $\frak{M}$ (the so-called
invariant trace field) and $n$ and $D$ are
the degree and discriminant of $K$, respectively. The authors offer
a systematic numerical study of the rational numbers
$\displaystyle{\frac{a}{b}}$. The identity of Zagier described above yields
the remarkable identity
\begin{equation}
\int_{\pi/3}^{\pi/2}
\ln \left| \frac{\tan t + \sqrt{7}}{\tan t - \sqrt{7}} \right| \, dt
=
A( \sqrt{7}) +
\frac{1}{2} A( \sqrt{7} + 2 \sqrt{3}) +
\frac{1}{2} A( \sqrt{7} - 2 \sqrt{3}).
\nonumber
\end{equation}
\noindent
This example corresponds to the link $6_{1}^{3}$ with discriminant $D= - 7$.
Zagier's result gives $\displaystyle{\frac{a}{b}} = 2$ in (\ref{bor-bro}). \\
Coffey \cite{coffey2008}, \cite{coffey2009}
has studied the integral above, which
also appears in the reduction of a
multidimensional Feynman integral \cite{lunev1994}. The goal is to produce
a more direct proof of Zagier's remarkable identity, as well as of the many
others that have been numerically verified in \cite{bor-broad1998}.
\medskip
The subject of evaluation of definite integrals has a rich history. We
expect that the method of brackets developed in this paper will
expand the class of integrals that can be expressed in
analytic form.
\section{The method of brackets} \label{sec-bracket}
\setcounter{equation}{0}
The method of brackets discussed in this paper is based on the assignment
of a {\em bracket} $\langle{a \rangle}$ to the parameter $a$. In the
examples presented here $a \in \mathbb{R}$, but the extension to
$a \in \mathbb{C}$ is direct. The
formal rules for operating with these brackets are described next.
\begin{definition}
Let $f$ be a formal power series
\begin{equation}
f(x) = \sum_{n=0}^{\infty} a_{n}x^{\alpha n+\beta-1}.
\label{series-f}
\end{equation}
\noindent
The symbol
\begin{equation}
\int_{0}^{\infty} f(x) \, dx \stackrel{\bullet}{=} \sum_{n} a_{n} \langle{ \alpha n + \beta \rangle}
\end{equation}
\noindent
represents a {\em bracket series} assignment to the
integral on the left. Rule \ref{rule-ass1} describes how to evaluate this
series.
\end{definition}
\begin{definition}
The symbol
\begin{equation}
\phi_{n} := \frac{(-1)^{n}}{\Gamma(n+1)}
\end{equation}
\noindent
will be called the {\em indicator of} $n$.
\end{definition}
\noindent
The symbol $\phi_{n}$ gives a simpler form for the bracket series associated
to an integral. For example,
\begin{equation}
\int_{0}^{\infty} x^{a-1} e^{-x} \, dx \stackrel{\bullet}{=} \sum_{n} \phi_{n} \langle{ n + a \rangle}.
\end{equation}
\noindent
The integral is the gamma function $\Gamma(a)$ and the right-hand side
its bracket expansion.
\begin{rules}
\label{rule-binom}
For $\alpha \in \mathbb{C}$, the expression
\begin{equation}
(a_{1} + a_{2} + \cdots + a_{r})^{\alpha}
\end{equation}
\noindent
is assigned the bracket series
\begin{equation}
\sum_{m_{1}, \cdots, m_{r}} \phi_{1,2,\cdots,r} \, a_{1}^{m_{1}}
\cdots a_{r}^{m_{r}}
\frac{ \langle{-\alpha + m_{1} + \cdots + m_{r} \rangle}}{\Gamma(-\alpha)},
\end{equation}
\noindent
where $\phi_{1,2,\cdots,r}$ is a short-hand notation for the product
$\phi_{m_{1}} \phi_{m_{2}} \cdots \phi_{m_{r}}$.
\end{rules}
\noindent
\begin{rules}
\label{rule-ass1}
The series of brackets
\begin{equation}
\sum_{n} \phi_{n} f(n) \langle{ a n + b \rangle}
\end{equation}
\noindent
is given the {\em value}
\begin{equation}
\frac{1}{a} f(n^{*}) \Gamma(-n^{*})
\end{equation}
\noindent
where $n^{*}$ solves the equation $an+b = 0$.
\end{rules}
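Rule \ref{rule-ass1} is mechanical and can be scripted directly. The short Python sketch below (an illustration with names of our own choosing, not part of the method's formal development) evaluates a one-dimensional bracket series and recovers the gamma-function example given above.

```python
import math

# Sketch of the one-dimensional rule: the bracket series
# sum_n phi_n f(n) <a n + b> is assigned (1/a) f(n*) Gamma(-n*),
# where n* solves a n + b = 0.
def bracket_value(f, a, b):
    n_star = -b / a
    return f(n_star) * math.gamma(-n_star) / a

# Gamma-function example: int_0^inf x^{s-1} e^{-x} dx has bracket
# series sum_n phi_n <n + s>, i.e. f(n) = 1, a = 1, b = s.
s = 3.7
print(bracket_value(lambda n: 1.0, 1.0, s), math.gamma(s))
```
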
\noindent
\begin{rules}
A two-dimensional series of brackets
\begin{equation}
\sum_{n_{1}, n_{2}} \phi_{n_{1},n_{2}} f(n_{1},n_{2})
\langle{ a_{11} n_{1} + a_{12}n_{2}+c_{1} \rangle}
\langle{ a_{21} n_{1} + a_{22}n_{2}+c_{2} \rangle}
\end{equation}
\noindent
is assigned the {\em value}
\begin{equation}
\frac{1}{|a_{11} a_{22} - a_{12}a_{21}|} f(n_{1}^{*}, n_{2}^{*}) \Gamma(-n_{1}^{*})
\Gamma( -n_{2}^{*})
\end{equation}
\noindent
where $(n_{1}^{*},n_{2}^{*})$ is the unique solution to the linear
system
\begin{eqnarray}
a_{11} n_{1} + a_{12}n_{2}+c_{1} & = & 0, \label{system-11} \\
a_{21} n_{1} + a_{22}n_{2}+c_{2} & = & 0, \nonumber
\end{eqnarray}
\noindent
obtained by the vanishing of the expressions in the brackets. A
similar rule applies to higher dimensional series, that is,
\begin{equation}
\sum_{n_{1}} \cdots \sum_{n_{r}} \phi_{1,\cdots,r}
f(n_{1},\cdots,n_{r})
\langle{ a_{11}n_{1}+ \cdots + a_{1r}n_{r} + c_{1} \rangle} \cdots
\langle{ a_{r1}n_{1}+ \cdots + a_{rr}n_{r} + c_{r} \rangle}
\nonumber
\end{equation}
\noindent
is assigned the value
\begin{equation}
\frac{1}{| \text{det}(A) |} f(n_{1}^{*}, \cdots, n_{r}^{*})
\Gamma(-n_{1}^{*}) \cdots \Gamma(-n_{r}^{*}),
\end{equation}
\noindent
where $A$ is the matrix of coefficients
$(a_{ij})$ and $\{ n_{i}^{*} \, \}$ is the solution of the
linear system obtained by the vanishing of the brackets. The value is not
defined if the matrix $A$ is not invertible.
\end{rules}
\begin{rules}
\label{rule-disc}
In the case where the assignment leaves free parameters, any
divergent series in these parameters is discarded. In case several choices
of free parameters are available, the series that converge in a common region
are added to contribute to the integral.
\end{rules}
A typical place to apply Rule \ref{rule-disc} is where the
hypergeometric functions ${_{p}F_{q}}$, with $p = q+1$, appear. In this
case the convergence of the series imposes restrictions on the
internal parameters of the problem.
Example \ref{example-bubble}, dealing
with a Feynman diagram with a {\em bubble},
illustrates the latter part of this rule. \\
\noindent
{\bf Note}. To motivate Rule \ref{rule-binom} start with the identity
\begin{equation}
\frac{1}{A^{\alpha}} = \frac{1}{\Gamma(\alpha)} \int_{0}^{\infty} x^{\alpha-1}e^{-Ax} \, dx,
\end{equation}
\noindent
and apply it to $A = a_{1} + \cdots + a_{r}$ to produce
\begin{eqnarray}
(a_{1} + \cdots + a_{r})^{\alpha} & = & \frac{1}{\Gamma(-\alpha)}
\int_{0}^{\infty} x^{-\alpha-1} \text{exp}\left[-(a_{1}+ \cdots + a_{r})x \right] \, dx
\nonumber \\
& = & \frac{1}{\Gamma(-\alpha)}
\int_{0}^{\infty} x^{-\alpha-1} e^{-a_{1}x} \cdots e^{-a_{r}x} \, dx. \nonumber
\end{eqnarray}
\noindent
Expanding the exponentials we obtain
\begin{equation}
(a_{1} + \cdots + a_{r})^{\alpha} \stackrel{\bullet}{=}
\frac{1}{\Gamma(-\alpha)} \sum_{m_{1}} \cdots \sum_{m_{r}}
\phi_{1,\cdots,r} a_{1}^{m_{1}} \cdots a_{r}^{m_{r}}
\int_{0}^{\infty} x^{- \alpha + m_{1} + \cdots + m_{r}-1} \, dx \nonumber
\end{equation}
\noindent
and thus
\begin{equation}
(a_{1} + \cdots + a_{r})^{\alpha} \stackrel{\bullet}{=}
\sum_{m_{1}} \cdots \sum_{m_{r}}
\phi_{1,\cdots,r} a_{1}^{m_{1}} \cdots a_{r}^{m_{r}}
\frac{\langle{-\alpha + m_{1} + \cdots + m_{r} \rangle} }{\Gamma(-\alpha)}.
\end{equation}
\noindent
This is Rule \ref{rule-binom}. \\
\section{Wallis' formula}
\label{sec-wallis}
\setcounter{equation}{0}
The evaluation
\begin{equation}
J_{2,m} := \int_{0}^{\infty} \frac{dx}{(1+x^{2})^{m+1}} = \frac{\pi}{2^{2m+1}} \binom{2m}{m}
\label{wallis-1}
\end{equation}
\noindent
is historically one of the earliest closed-form expressions for a definite
integral. The change of variables $x = \tan \theta$ converts it into its
trigonometric form
\begin{equation}
J_{2,m} := \int_{0}^{\pi/2} \cos^{2m} \theta \, d \theta =
\frac{\pi}{2^{2m+1}} \binom{2m}{m}.
\label{wallis-2}
\end{equation}
\noindent
An elementary argument shows that $J_{2,m}$ satisfies the recurrence
\begin{equation}
J_{2,m} = \frac{2m-1}{2m}J_{2,m-1}
\end{equation}
\noindent
and then one simply checks that the right-hand side of (\ref{wallis-2})
satisfies the same recurrence with matching initial conditions.
A second elementary proof of (\ref{wallis-1}) is presented in \cite{sarah1}:
using $\cos^{2} \theta = \tfrac{1}{2}(1 + \cos 2 \theta)$ one
obtains the recurrence
\begin{equation}
J_{2,m} = 2^{-m} \sum_{i=0}^{\lfloor{ m/2 \rfloor}} \binom{m}{2i}J_{2,i},
\end{equation}
\noindent
and the inductive proof follows from the identity
\begin{equation}
\sum_{i=0}^{\lfloor{ m/2 \rfloor}} 2^{-2i} \binom{m}{2i} \binom{2i}{i}
= 2^{-m} \binom{2m}{m}.
\end{equation}
\noindent
This can be established using automatic methods developed by H. Wilf and
D. Zeilberger in \cite{aequalsb}. \\
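The binomial identity above can also be verified exactly by machine; the following Python check (illustrative, using rational arithmetic rather than the automatic methods of \cite{aequalsb}) confirms it for a range of $m$.

```python
from fractions import Fraction
from math import comb

# Exact verification of
#   sum_{i=0}^{floor(m/2)} 2^{-2i} C(m,2i) C(2i,i) = 2^{-m} C(2m,m)
# using rational arithmetic, so no floating-point error intervenes.
for m in range(40):
    lhs = sum(Fraction(comb(m, 2*i) * comb(2*i, i), 4**i)
              for i in range(m//2 + 1))
    rhs = Fraction(comb(2*m, m), 2**m)
    assert lhs == rhs
print("identity checked for m = 0, ..., 39")
```
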
The proof of Wallis' formula by the method of brackets starts with the
expansion of the integrand as
\begin{equation}
(1+x^{2})^{-m-1} \stackrel{\bullet}{=} \sum_{n_{1}} \sum_{n_{2}} \phi_{1,2}
\frac{ \langle{m+1+n_{1}+n_{2} \rangle} }{\Gamma(m+1)}x^{2n_{2}}.
\end{equation}
\noindent
The corresponding integral $J_{2,m}$ is assigned the bracket series
\begin{equation}
J_{2,m} \stackrel{\bullet}{=} \sum_{n_{1}} \sum_{n_{2}} \phi_{1,2}
\frac{1}{\Gamma(m+1)}
\langle{m+1+n_{1}+n_{2} \rangle}
\langle{2n_{2}+1 \rangle}.
\end{equation}
\noindent
Rule \ref{rule-ass1} then shows that
\begin{equation}
J_{2,m} = \frac{1}{2} \frac{\Gamma(-n_{1}^{*}) \Gamma(-n_{2}^{*})}
{\Gamma(m+1)},
\end{equation}
\noindent
where $(n_{1}^{*},n_{2}^{*})$ is the solution to the linear
system of equations
\begin{eqnarray}
m+1+n_{1}+n_{2} & = & 0, \label{system-1} \\
2n_{2}+1 & = & 0. \nonumber
\end{eqnarray}
\noindent
Therefore $n_{1}^{*} = -(m + \tfrac{1}{2} )$ and $n_{2}^{*} = -\tfrac{1}{2}$.
We conclude that
\begin{equation}
J_{2,m} = \frac{\Gamma(m + \tfrac{1}{2}) \Gamma(\tfrac{1}{2})}{2 \Gamma(m+1)}.
\label{wallis-3}
\end{equation}
This is exactly the right-hand side of (\ref{wallis-1}).
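As a quick numerical confirmation (illustrative only), the value produced by the brackets agrees with Wallis' formula for a range of $m$:

```python
import math

# Check that J_{2,m} = Gamma(m + 1/2) Gamma(1/2) / (2 Gamma(m + 1)),
# the value produced by the bracket computation, coincides with
# Wallis' formula pi/2^(2m+1) * C(2m, m).
for m in range(15):
    bracket = math.gamma(m + 0.5) * math.gamma(0.5) / (2.0 * math.gamma(m + 1))
    wallis = math.pi / 2**(2*m + 1) * math.comb(2*m, m)
    assert abs(bracket - wallis) < 1e-9
print("Wallis' formula checked for m = 0, ..., 14")
```
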
\section{The integral representation of the gamma function}
\label{sec-gamma}
\setcounter{equation}{0}
The exponential in the integral
\begin{equation}
I = \int_{0}^{\infty} x^{a -1} e^{-x} \, dx
\label{int-exp}
\end{equation}
\noindent
is expanded in power series to obtain
\begin{equation}
x^{a-1} e^{-x} = \sum_{n=0}^{\infty} \frac{(-1)^{n} x^{n+a-1}}{n!} =
\sum_{n=0}^{\infty} \phi_{n}x^{n+a-1}.
\end{equation}
\noindent
Therefore, the integral (\ref{int-exp}) gets assigned the bracket series
\begin{equation}
I \stackrel{\bullet}{=} \sum_{n} \phi_{n} \langle{a+n \rangle}.
\label{h-gamma}
\end{equation}
Rule \ref{rule-ass1}
assigns the value $\Gamma(a)$ to (\ref{h-gamma}). This is
precisely the value of the integral:
\begin{equation}
\int_{0}^{\infty} x^{a-1} e^{-x} \, dx = \Gamma(a).
\end{equation}
\noindent
Rule \ref{rule-ass1}
was developed from this example.
\section{A Fresnel integral} \label{sec-fresnel}
\setcounter{equation}{0}
In this section we verify the evaluation of the Fresnel integral
\begin{equation}
\int_{0}^{\infty} \sin ( a x^{2} ) \, dx = \frac{1}{2} \sqrt{ \frac{\pi}{2a} }.
\label{fresnel-1}
\end{equation}
\noindent
The reader will find in \cite{antimirov1} the
standard evaluation using contour integrals and other elementary proofs in
\cite{flanders1} and \cite{leonard1}.
In order to apply the method of brackets, use the
hypergeometric representation
\begin{eqnarray}
\frac{\sin z}{z} & = &
{_{0}F_{1}} \left[ -; \tfrac{3}{2}; \, - \tfrac{z^{2}}{4}
\right], \nonumber
\end{eqnarray}
\noindent
that can be written as
\begin{equation}
\sin z = \sum_{n=0}^{\infty} \phi_{n} \frac{z^{2n+1}}{\left( \tfrac{3}{2}
\right)_{n} 4^{n} }.
\end{equation}
\noindent
Therefore
\begin{equation}
\int_{0}^{\infty} \sin ( a x^{2} ) \, dx \stackrel{\bullet}{=}
\sum_{n} \phi_{n} \frac{a^{2n+1}}{\left( \tfrac{3}{2}
\right)_{n} 4^{n} } \langle{ 4n+3 \rangle}.
\end{equation}
According to Rule \ref{rule-ass1}, the assignment of the
right-hand side is obtained by
evaluating the function
\begin{equation}
g(n) := \frac{a^{2n+1}}{\left( \tfrac{3}{2} \right)_{n} 4^{n}}
\end{equation}
at the solution of $4n^{*}+3 = 0$. Therefore the integral (\ref{fresnel-1})
has the value
\begin{equation}
\tfrac{1}{4}\, \Gamma(\tfrac{3}{4})\, g(-\tfrac{3}{4})
= \frac{a^{-1/2} \Gamma(\tfrac{3}{4}) }{\left( \tfrac{3}{2} \right)_{-3/4}
4^{1/4}
}, \nonumber
\end{equation}
\noindent
where the factor $\tfrac{1}{4}$ comes from the term $4n+3$ in the bracket.
Using $(a)_{m} = \Gamma(a+m)/\Gamma(a)$, we obtain
\begin{equation}
\left( \frac{3}{2} \right)_{-3/4} =
\frac{2 \Gamma(\tfrac{3}{4}) }{\sqrt{\pi}}.
\end{equation}
\noindent
We conclude that the assigned value is
$\tfrac{1}{2} \sqrt{\pi/(2a)}$. As expected, this is consistent with
(\ref{fresnel-1}). \\
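The algebra of the bracket evaluation can be replayed numerically (an illustrative check, not part of the method): the quantity $\tfrac{1}{4}\Gamma(\tfrac{3}{4})g(-\tfrac{3}{4})$, with $(\tfrac{3}{2})_{-3/4} = \Gamma(\tfrac{3}{4})/\Gamma(\tfrac{3}{2})$, must equal $\tfrac{1}{2}\sqrt{\pi/(2a)}$.

```python
import math

# Replay of the bracket evaluation of the Fresnel integral:
# (1/4) Gamma(3/4) g(-3/4) should equal (1/2) sqrt(pi/(2a)).
a = 2.3
poch = math.gamma(0.75) / math.gamma(1.5)        # Pochhammer (3/2)_{-3/4}
g = a**(-0.5) / (poch * 4.0**(-0.75))            # g(-3/4)
value = 0.25 * math.gamma(0.75) * g
print(value, 0.5 * math.sqrt(math.pi / (2.0 * a)))
```
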
The method also gives the evaluation of
\begin{equation}
I = \int_{0}^{\infty} x^{b-1} \sin ( ax^{c} ) \, dx.
\label{fresnel-gen}
\end{equation}
\noindent
The change of variables $t = x^{c}$ transforms (\ref{fresnel-gen}) into
\begin{equation}
I = \frac{1}{c} \int_{0}^{\infty} t^{b/c -1} \sin(at) \, dt,
\end{equation}
\noindent
and this is formula $3.761.4$ in \cite{gr} with value
\begin{equation}
\int_{0}^{\infty} x^{b-1} \sin(ax^{c}) \, dx
= \frac{\Gamma(b/c)}{ca^{b/c}} \sin \left( \frac{ \pi b }{2c} \right).
\label{modi-2}
\end{equation}
To verify this result by the method of brackets, start with the
expansion
\begin{equation}
x^{b-1} \sin( ax^{c} ) = \sum_{n =0}^{\infty}
\phi_{n} \frac{a^{2n+1}}{\left( \tfrac{3}{2} \right)_{n} 2^{2n}}
x^{2nc + c + b -1 }
\end{equation}
\noindent
and associate to it the bracket series
\begin{equation}
\int_{0}^{\infty} x^{b-1} \sin ( ax^{c} ) \, dx
\stackrel{\bullet}{=} \sum_{n}
\phi_{n} \frac{a^{2n+1}}{\left( \tfrac{3}{2} \right)_{n} 2^{2n}}
\langle{ 2nc + c + b \rangle}.
\end{equation}
\noindent
Apply Rule \ref{rule-ass1} to obtain
\begin{equation}
I = \frac{1}{2c} \frac{a^{2n^{*}+1}}{\left( \tfrac{3}{2} \right)_{n^{*}}
2^{2n^{*}} } \Gamma(-n^{*}),
\label{value-11}
\end{equation}
\noindent
where $n^{*}$ solves $2nc + b + c = 0$; that is, $n^{*} = -\tfrac{1}{2} - \tfrac{b}{2c}$.
Then (\ref{value-11}) yields
\begin{equation}
I = \frac{\Gamma(\tfrac{3}{2})
2^{b/c}}{c a^{b/c} \Gamma(1 - \tfrac{b}{2c}) }
\Gamma \left( \tfrac{1}{2} + \tfrac{b}{2c} \right).
\label{modi-1}
\end{equation}
\noindent
To transform (\ref{modi-1}) into (\ref{modi-2}), simplify
(\ref{modi-1}) using the reflection formula
\begin{equation}
\Gamma(x) \Gamma(1-x) = \frac{\pi}{\sin \pi x},
\end{equation}
\noindent
and the duplication formula
\begin{equation}
\Gamma( x + \tfrac{1}{2} ) = \frac{\Gamma(2x) \sqrt{\pi}}{\Gamma(x)
2^{2x-1}},
\end{equation}
\noindent
with $x = b/2c$.
\medskip
\noindent
{\bf Note}. The method developed by Flanders \cite{flanders1} is based on
showing that
\begin{equation}
F(t) := \int_{0}^{\infty} e^{-tx^{2}} \cos x^{2} \, dx \, \, \text{ and } \, \,
G(t) := \int_{0}^{\infty} e^{-tx^{2}} \sin x^{2} \, dx
\label{fresnel-00}
\end{equation}
\noindent
satisfy the functional equations
\begin{equation}
F^{2}(t) - G^{2}(t) = \frac{\pi t}{4(1+t^{2})} \quad \text{ and } \quad
2F(t)G(t) = \frac{\pi}{4(1+t^{2})}.
\end{equation}
\noindent
This system can be solved to obtain the values
\begin{equation}
F(t) = \sqrt{\frac{\pi}{8}} \sqrt{ \frac{\sqrt{1+t^{2}}+t}{1+t^{2}}}
\text{ and }
G(t) = \sqrt{\frac{\pi}{8}} \sqrt{ \frac{\sqrt{1+t^{2}}-t}{1+t^{2}}}.
\end{equation}
\noindent
A second elementary proof was obtained by Leonard \cite{leonard1}.
Converting (\ref{fresnel-00}) into the Laplace
transforms of $\cos x/(2 \sqrt{x})$ and $\sin x/(2 \sqrt{x})$,
respectively, he shows that
\begin{equation}
F(t) = \frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \frac{u^{2}+t}{1+(u^{2}+t)^{2}} \, du
\,\, \text{ and } \, \,
G(t) = \frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \frac{du}{1+(u^{2}+t)^{2}}.
\end{equation}
\noindent
The evaluation of these integrals, described in \cite{leonard1}, is
elementary but long. A shorter argument follows from the formula
\begin{equation}
f(a) := \int_{0}^{\infty} \frac{dx}{x^{4} + 2ax^{2}+1} = \frac{\pi}{2 \sqrt{2(1+a)}}.
\label{quartic-00}
\end{equation}
\noindent
Indeed, the values
\begin{equation}
G(t) = \frac{(1+t^{2})^{-3/4}}{\sqrt{\pi}} \, f \left( \frac{t}{\sqrt{t^{2}+1}} \right)
\text{ and }
F(t) = \frac{(1+t^{2})^{-1/4}}{\sqrt{\pi}} \, f \left( \frac{t}{\sqrt{t^{2}+1}} \right) + tG(t),
\nonumber
\end{equation}
\noindent
follow from (\ref{quartic-00}) by a change of
variable $v = (1+t^{2})^{1/4}u$. The evaluation of the
quartic integral (\ref{quartic-00})
by the method of brackets
is discussed in detail in Section \ref{sec-quartic}.
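The quartic evaluation (\ref{quartic-00}) itself is easy to confirm by quadrature (an illustrative check):

```python
import math

# Quadrature check of the quartic integral
# f(a) = int_0^inf dx/(x^4 + 2 a x^2 + 1) = pi/(2 sqrt(2(1+a))).
def simpson(g, lo, hi, n=40000):                 # composite Simpson, n even
    h = (hi - lo) / n
    return (g(lo) + g(hi)
            + 4 * sum(g(lo + (2*k - 1)*h) for k in range(1, n//2 + 1))
            + 2 * sum(g(lo + 2*k*h) for k in range(1, n//2))) * h / 3

a = 0.5
numeric = simpson(lambda x: 1.0 / (x**4 + 2*a*x*x + 1.0), 0.0, 100.0)
exact = math.pi / (2.0 * math.sqrt(2.0 * (1.0 + a)))
print(numeric, exact)
```
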
\section{An integral of beta type} \label{sec-beta}
\setcounter{equation}{0}
In this section we present the evaluation of
\begin{equation}
I = \int_{0}^{\infty} \frac{x^{a} \, dx}{(E + Fx^{b})^{c}}.
\label{beta-1}
\end{equation}
\noindent
The change of variables $x = C^{1/b}t^{1/b}$, with $C = E/F$, yields
\begin{equation}
I = \frac{C^{u}}{bE^{c}} \int_{0}^{\infty} \frac{t^{u-1} \, dt}{(1+t)^{c}}
\end{equation}
\noindent
where $u = (a+1)/b$. The new
integral evaluates as $B( c - u, u )$ where $B(x,y)$ is the classical beta
function; see \cite{gr}, formula $8.380.3$. We conclude that
\begin{equation}
I = \frac{C^{u}}{bE^{c}} B ( c -u , u).
\label{value-g}
\end{equation}
To evaluate this integral by the method of brackets, the integrand
$(E + Fx^{b})^{-c}$ is expanded as
\begin{equation}
\sum_{n_{1}} \sum_{n_{2}} \phi_{1,2} E^{n_{1}} F^{n_{2}} x^{bn_{2}}
\frac{\langle{ c + n_{1}+n_{2} \rangle}}{\Gamma(c)}.
\end{equation}
\noindent
Replacing in (\ref{beta-1}) we obtain
\begin{eqnarray}
I & \stackrel{\bullet}{=} & \sum_{n_{1}} \sum_{n_{2}} \phi_{1,2} E^{n_{1}}F^{n_{2}}
\frac{\langle{ c + n_{1} + n_{2} \rangle} }{\Gamma(c)}
\int_{0}^{\infty} x^{a + bn_{2}+1-1} \, dx \nonumber \\
& \stackrel{\bullet}{=} & \sum_{n_{1}} \sum_{n_{2}} \phi_{1,2} E^{n_{1}}F^{n_{2}}
\frac{1}{\Gamma(c)} \langle{ c + n_{1} + n_{2} \rangle}
\langle{a+ bn_{2} + 1 \rangle}. \nonumber
\end{eqnarray}
To obtain the value assigned to the two dimensional sum, solve
\begin{eqnarray}
c + n_{1} + n_{2} & = & 0 \nonumber \\
a + bn_{2} + 1 & = & 0, \nonumber
\end{eqnarray}
\noindent
to produce the solution $n_{1}^{*} = \tfrac{a+1}{b} - c$ and $n_{2}^{*} =
- \frac{a+1}{b}$. Therefore
\begin{equation}
I = \frac{1}{b \Gamma(c)}
E^{n_{1}^{*}} F^{n_{2}^{*}} \Gamma( - n_{1}^{*}) \Gamma(-n_{2}^{*}),
\end{equation}
\noindent
and this reduces to the value in (\ref{value-g}).
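A numerical spot check of (\ref{value-g}) (illustrative; the parameter values below are arbitrary):

```python
import math

# Quadrature check of
#   int_0^inf x^a dx/(E + F x^b)^c = C^u/(b E^c) B(c - u, u),
# with C = E/F and u = (a+1)/b.
def simpson(g, lo, hi, n=40000):                 # composite Simpson, n even
    h = (hi - lo) / n
    return (g(lo) + g(hi)
            + 4 * sum(g(lo + (2*k - 1)*h) for k in range(1, n//2 + 1))
            + 2 * sum(g(lo + 2*k*h) for k in range(1, n//2))) * h / 3

a, b, c, E, F = 1.0, 2.0, 2.0, 2.0, 3.0
u = (a + 1.0) / b
C = E / F
exact = C**u / (b * E**c) * math.gamma(c - u) * math.gamma(u) / math.gamma(c)
numeric = simpson(lambda x: x**a / (E + F * x**b)**c, 0.0, 300.0)
print(numeric, exact)     # the exact value for these parameters is 1/12
```
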
\section{A combination of powers and exponentials } \label{sec-comb1}
\setcounter{equation}{0}
In this section we employ the method of brackets and evaluate the integral
\begin{equation}
I := \int_{0}^{\infty} \frac{x^{\alpha -1 } \, dx }{\left( A + B \text{exp}(C x^{\beta})
\right)^{\gamma}},
\label{comb-exp1}
\end{equation}
\noindent
with $\alpha, \, \beta, \, \gamma, \, A, \, B, \, C \in \mathbb{R}$.
To evaluate this integral we consider the bracket series
\begin{equation}
\left( A + B \text{exp}(C x^{\beta}) \right)^{-\gamma} \stackrel{\bullet}{=}
\sum_{n_{1},n_{2}} \phi_{1,2} \, A^{n_{1}} B^{n_{2}} \text{exp}(C n_{2}x^{\beta})
\frac{\langle{\gamma + n_{1} + n_{2} \rangle}}{\Gamma(\gamma)}.
\end{equation}
\noindent
The exponential function is expanded as
\begin{eqnarray}
\text{exp}(C n_{2} x^{\beta}) & = &
\sum_{n_{3}=0}^{\infty} \frac{C^{n_{3}} n_{2}^{n_{3}}}{\Gamma(n_{3}+1)}
x^{\beta n_{3}} \nonumber \\
& = & \sum_{n_{3}=0}^{\infty} C^{n_{3}} (-n_{2})^{n_{3}} \phi_{n_{3}}
x^{\beta n_{3}}. \nonumber
\end{eqnarray}
\noindent
Therefore, the integral (\ref{comb-exp1}) is assigned the bracket series
\begin{equation}
I \stackrel{\bullet}{=} \sum_{n_{1},n_{2},n_{3}} \phi_{1,2,3}
\frac{A^{n_{1}} B^{n_{2}} C^{n_{3}} (-n_{2})^{n_{3}}
\langle{ \alpha + \beta n_{3} \rangle}
\langle{ \gamma + n_{1} + n_{2} \rangle}}{\Gamma(\gamma)}.
\nonumber
\end{equation}
\noindent
The vanishing of the two brackets leads to the system
\begin{eqnarray}
\alpha + \beta n_{3} & = & 0 \nonumber \\
\gamma + n_{1} + n_{2} & = & 0, \nonumber
\end{eqnarray}
\noindent
and we have to choose a free parameter
from among $n_{1}$ and $n_{2}$. Observe that
$n_{3} = - \alpha/ \beta$ is determined by the method. \\
\noindent
{\bf Choice 1}: take $n_{2}$ to be free. Then $n_{1}^{*} = -\gamma - n_{2}$ and
$n_{3}^{*} = - \alpha/\beta$. This leads to
\begin{equation}
I = \sum_{n_{2}=0}^{\infty} \phi_{n_{2}} \, \frac{B^{n_{2}} \Gamma(\alpha/\beta)
\Gamma(\gamma + n_{2})}{A^{\gamma + n_{2}} C^{\alpha/\beta}
\beta \Gamma(\gamma) (-n_{2})^{\alpha/\beta} }.
\end{equation}
\noindent
This choice fails due to the presence of the factor $(-n_{2})^{\alpha/\beta}$,
which leads to a divergent series. Such divergent series are discarded. \\
\noindent
{\bf Choice 2}: take $n_{1}$ as the free variable. Then
$n_{3}^{*} = -\alpha/\beta$ and $n_{2}^{*} = -\gamma - n_{1}$. This time we
obtain
\begin{equation}
I = \frac{\Gamma(\alpha/\beta)}{\Gamma(\gamma)}
\frac{1}{B^{\gamma} C^{\alpha/\beta} \, \beta}
\sum_{n_{1}=0}^{\infty} (-1)^{n_{1}}
\frac{\Gamma(\gamma+n_{1})}{\Gamma(1+n_{1})}
\frac{(A/B)^{n_{1}}}{(\gamma + n_{1})^{\alpha/\beta}}.
\end{equation}
\noindent
This formula cannot be expressed in terms of more elementary special
functions.
In the special case $\gamma =1$ we obtain
\begin{equation}
I = - \frac{\Gamma(\nu)}{A \beta C^{\nu}} \, \text{PolyLog}(\nu,-A/B),
\end{equation}
\noindent
with $\nu = \alpha/\beta$. The
polylogarithm function appearing here is defined by
\begin{equation}
\text{PolyLog}(\nu,z) := \sum_{n=1}^{\infty} \frac{z^{n}}{n^{\nu}}.
\end{equation}
\noindent
Specializing to $A=B=C=\alpha=\gamma =1$ and $\beta = 2$ we obtain
\begin{equation}
\int_{0}^{\infty} \frac{dx}{1+e^{x^{2}}} = - \frac{\sqrt{\pi}}{2} (\sqrt{2}-1)
\zeta \left( \tfrac{1}{2} \right).
\end{equation}
\noindent
Of course, this
integral can be evaluated by simply expanding the integrand as a
geometric series.
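The closed form can be confirmed numerically (an illustrative check; $\zeta(\tfrac{1}{2})$ is recovered from the alternating series for $\eta(\tfrac{1}{2})$ via $\zeta(s) = \eta(s)/(1-2^{1-s})$, a standard relation not stated in the text):

```python
import math

# Numerical check of
#   int_0^inf dx/(1 + e^{x^2}) = -(sqrt(pi)/2)(sqrt(2) - 1) zeta(1/2).
# The slowly convergent alternating series for eta(1/2) is accelerated
# by averaging two consecutive partial sums.
def simpson(g, lo, hi, n=2000):                  # composite Simpson, n even
    h = (hi - lo) / n
    return (g(lo) + g(hi)
            + 4 * sum(g(lo + (2*k - 1)*h) for k in range(1, n//2 + 1))
            + 2 * sum(g(lo + 2*k*h) for k in range(1, n//2))) * h / 3

integral = simpson(lambda x: 1.0 / (1.0 + math.exp(x*x)), 0.0, 6.0)

N = 400000
s = sum((-1)**(n - 1) / math.sqrt(n) for n in range(1, N))      # S_{N-1}
s_next = s + (-1)**(N - 1) / math.sqrt(N)                       # S_N
eta_half = 0.5 * (s + s_next)
zeta_half = eta_half / (1.0 - math.sqrt(2.0))                   # 2^{1-1/2} = sqrt(2)
rhs = -(math.sqrt(math.pi) / 2.0) * (math.sqrt(2.0) - 1.0) * zeta_half
print(integral, rhs)
```
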
\section{The Mellin transform of a quadratic exponential}
\label{sec-mellin}
\setcounter{equation}{0}
The Mellin transform of a function $f(x)$ is defined by
\begin{equation}
\mathcal{M}(f)(s) := \int_{0}^{\infty} x^{s-1} f(x) \, dx.
\end{equation}
\noindent
Many of the integrals appearing in \cite{gr} are of this type. For example,
$3.462.1$ states that
\begin{equation}
\mathcal{M}\left( e^{-\beta x^{2} - \gamma x} \right)(s) =
\int_{0}^{\infty} x^{s-1} e^{- \beta x^{2} - \gamma x} \, dx =
(2 \beta)^{-s/2} \Gamma(s) e^{\gamma^{2}/(8 \beta)} D_{-s}
\left( \frac{\gamma}{\sqrt{2 \beta}} \right).
\label{form-hyper1}
\end{equation}
\noindent
Here $D_{p}(z)$ is the parabolic cylinder function defined
by (formula $9.240$ in
\cite{gr})
\begin{equation}
D_{p}(z) = 2^{p/2} e^{-z^{2}/4}
\left( \frac{\sqrt{\pi}}{\Gamma((1-p)/2)} {_{1}F_{1}} \left( - \frac{p}{2},
\frac{1}{2}; \frac{z^{2}}{2} \right) -
\frac{\sqrt{2 \pi}z}{\Gamma(-p/2)} {_{1}F_{1}} \left( \frac{1-p}{2},
\frac{3}{2}; \frac{z^{2}}{2} \right)
\right).
\nonumber
\end{equation}
A direct application of the method of brackets gives
\begin{equation}
\int_{0}^{\infty} x^{s-1} e^{- \beta x^{2} - \gamma x} \, dx \stackrel{\bullet}{=}
\sum_{n_{1}} \sum_{n_{2}} \phi_{1,2} \beta^{n_{1}} \gamma^{n_{2}}
\langle{ s + 2n_{1} + n_{2} \rangle}.
\end{equation}
The equation $s+2n_{1} + n_{2}=0$ gives two choices for a
free index. Taking $n_{2}^{*} = -2n_{1}-s$ leads
to the series
\begin{eqnarray}
\sum_{n_{1}=0}^{\infty} \frac{1}{\Gamma(n_{1}+1)}
\left( - \frac{\beta}{\gamma^{2}} \right)^{n_{1}} (s)_{2n_{1}} & = &
\sum_{n_{1}=0}^{\infty} \frac{1}{\Gamma(n_{1}+1)}
\left( - \frac{4 \beta}{\gamma^{2}} \right)^{n_{1}}
\left( \frac{s}{2} \right)_{n_{1}}
\left( \frac{s+1}{2} \right)_{n_{1}} \nonumber \\
& = &
{_{2}F_{0}} \left( \frac{s}{2}, \frac{s+1}{2} \Big{|} -
\frac{4 \beta}{\gamma^{2}} \right).
\nonumber
\end{eqnarray}
\noindent
This choice of a free
index is excluded because the resulting series diverges. The second choice
is $n_{1}^{*} = -n_{2}/2-s/2$
and this yields the series
\begin{equation}
\frac{1}{2 \beta^{s/2}} \sum_{n_{2}=0}^{\infty}
\frac{\rho^{n_{2}}}{\Gamma(n_{2}+1)}
\Gamma \left( \frac{n_{2}}{2} + \frac{s}{2} \right),
\label{mess-11}
\end{equation}
\noindent
where $\rho = - \gamma/\sqrt{\beta}$.
To write (\ref{mess-11}) in
hypergeometric form we separate it into two sums according to
the parity of $n_{2}$ and obtain
\begin{equation}
\frac{1}{2 \beta^{s/2}}
\left(
\Gamma \left( \frac{s}{2} \right)
\sum_{n=0}^{\infty} \frac{\rho^{2n}}{(1)_{2n}} \left( \frac{s}{2} \right)_{n}+
\Gamma \left( \frac{1+s}{2} \right)
\sum_{n=0}^{\infty} \frac{\rho^{2n+1}}{(2)_{2n}}
\left( \frac{1+s}{2} \right)_{n}
\right).
\nonumber
\end{equation}
\noindent
The identity
\begin{equation}
(a)_{2n} = 4^{n} \left( \frac{a}{2} \right)_{n}
\left( \frac{a+1}{2} \right)_{n}
\end{equation}
\noindent
gives the final representation of the sum as
\begin{equation}
\frac{1}{2 \beta^{s/2}}
\left[
\Gamma \left( \frac{s}{2} \right)
{_{1}F_{1}} \left( \frac{s}{2}, \frac{1}{2}; \frac{1}{4}\rho^{2} \right) +
\rho \Gamma \left( \frac{1+s}{2} \right)
{_{1}F_{1}} \left( \frac{1+s}{2}, \frac{3}{2}; \frac{1}{4}\rho^{2} \right)
\right].
\end{equation}
\noindent
This is (\ref{form-hyper1}). \\
The special case $s = 1$ gives
\begin{equation}
\int_{0}^{\infty} e^{-\beta x^{2} - \gamma x} \, dx =
\tfrac{1}{2\sqrt{\beta}} \left[ \Gamma(\tfrac{1}{2}) \, \,
{_{1}F_{1}}\left( \tfrac{1}{2}; \tfrac{1}{2}; \tfrac{1}{4} \rho^{2} \right)
+ \rho \,
{_{1}F_{1}}\left( 1; \tfrac{3}{2}; \tfrac{1}{4} \rho^{2} \right) \right].
\end{equation}
The first hypergeometric sum evaluates to $e^{\gamma^{2}/4 \beta}$ and
using the
representation of the {\em error function}
\begin{equation}
\text{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}} \, dt
\end{equation}
\noindent
as
\begin{equation}
\text{erf}(x) = \frac{2x}{\sqrt{\pi}} e^{-x^{2}}
{_{1}F_{1}}\left( 1; \tfrac{3}{2}; x^{2} \right),
\end{equation}
(given as $8.253.1$ in \cite{gr}) we find the value of the second
hypergeometric sum. The conclusion is that
\begin{equation}
\int_{0}^{\infty} e^{-\beta x^{2} - \gamma x} \, dx = \frac{\sqrt{\pi}}{2 \sqrt{\beta}}
\text{exp} \left( \frac{\gamma^{2}}{4 \beta} \right)
\left( 1 - \text{erf} \left( \frac{\gamma}{2 \sqrt{\beta}} \right) \right).
\end{equation}
\noindent
This can be checked directly by completing the square in the integrand.
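A quadrature confirmation of the erf formula (illustrative; `math.erf` is the standard-library error function):

```python
import math

# Quadrature check of
#   int_0^inf e^{-beta x^2 - gamma x} dx
#     = (sqrt(pi)/(2 sqrt(beta))) e^{gamma^2/(4 beta)}
#       * (1 - erf(gamma/(2 sqrt(beta)))).
def simpson(g, lo, hi, n=4000):                  # composite Simpson, n even
    h = (hi - lo) / n
    return (g(lo) + g(hi)
            + 4 * sum(g(lo + (2*k - 1)*h) for k in range(1, n//2 + 1))
            + 2 * sum(g(lo + 2*k*h) for k in range(1, n//2))) * h / 3

beta, gamma = 2.0, 1.0
numeric = simpson(lambda x: math.exp(-beta*x*x - gamma*x), 0.0, 8.0)
closed = (math.sqrt(math.pi) / (2.0 * math.sqrt(beta))) \
         * math.exp(gamma**2 / (4.0 * beta)) \
         * (1.0 - math.erf(gamma / (2.0 * math.sqrt(beta))))
print(numeric, closed)
```
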
\section{A multidimensional integral from Gradshteyn and Ryzhik}
\label{sec-multi}
\setcounter{equation}{0}
The method of brackets can also be used to evaluate some multidimensional
integrals. Consider the following integral
\begin{equation}
I_{n} := \int_{0}^{\infty} \int_{0}^{\infty} \cdots \int_{0}^{\infty}
\frac{x_{1}^{p_{1}-1} x_{2}^{p_{2}-1} \cdots x_{n}^{p_{n}-1}
\, \, dx_{1}dx_{2} \cdots dx_{n}}
{\left( 1 + (r_{1}x_{1})^{q_{1}} + \cdots + (r_{n}x_{n})^{q_{n}} \right)^{s}},
\end{equation}
\noindent
which appears as $4.638.3$ in \cite{gr} with an incorrect
evaluation. \\
The first step in the evaluation of $I_{n}$ is to expand the denominator of
the integrand using Rule \ref{rule-binom} as
\begin{equation}
\frac{1}
{\left( 1 + (r_{1}x_{1})^{q_{1}} + \cdots + (r_{n}x_{n})^{q_{n}} \right)^{s}}
\stackrel{\bullet}{=}
\sum_{k_{0}, k_{1}, \cdots, k_{n}}
\phi_{0,\cdots,n} \prod_{j=1}^{n} (r_{j}x_{j})^{q_{j}k_{j}}
\frac{\langle{s+ k_{0} + \cdots + k_{n} \rangle} }{\Gamma(s)}.
\nonumber
\end{equation}
\noindent
Next the integral is assigned the value
\begin{equation}
I_{n} \stackrel{\bullet}{=}
\sum_{k_{0}, k_{1}, \cdots, k_{n}}
\phi_{0,\cdots,n} \prod_{j=1}^{n} r_{j}^{q_{j}k_{j}}
\frac{\langle{s+ k_{0} + \cdots + k_{n} \rangle} }{\Gamma(s)}
\prod_{j=1}^{n} \langle{ p_{j} + q_{j}k_{j} \rangle}.
\end{equation}
The evaluation of this bracket sum involves the values
\begin{equation}
k_{0} = -s + \sum_{j=1}^{n} \frac{p_{j}}{q_{j}} \text{ and }
k_{j} = - \frac{p_{j}}{q_{j}} \text{ for } 1 \leq j \leq n.
\end{equation}
\noindent
We conclude that
\begin{equation}
I_{n} =
\frac{1}{\Gamma(s)}
\Gamma \left( s - \sum_{j=1}^{n} \frac{p_{j}}{q_{j}} \right)
\prod_{j=1}^{n} \frac{\Gamma \left( \tfrac{p_{j}}{q_{j}} \right)}
{q_{j}r_{j}^{p_{j}} }.
\end{equation}
The table \cite{gr} has the exponents of $r_{j}$ written as $p_{j}q_{j}$
instead of $p_{j}$. This has now been corrected.
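A spot check of the general formula in the one-dimensional case (illustrative; the parameters are arbitrary test values):

```python
import math

# Dimension n = 1 of the general formula: with p = 2, q = 4, s = 1,
# r = 1 it predicts int_0^inf x dx/(1 + x^4) = Gamma(1/2)^2/4 = pi/4.
def simpson(g, lo, hi, n=40000):                 # composite Simpson, n even
    h = (hi - lo) / n
    return (g(lo) + g(hi)
            + 4 * sum(g(lo + (2*k - 1)*h) for k in range(1, n//2 + 1))
            + 2 * sum(g(lo + 2*k*h) for k in range(1, n//2))) * h / 3

p, q, s, r = 2.0, 4.0, 1.0, 1.0
predicted = (math.gamma(s - p/q) * math.gamma(p/q)) / (math.gamma(s) * q * r**p)
numeric = simpson(lambda x: x**(p - 1) / (1.0 + (r*x)**q)**s, 0.0, 200.0)
print(predicted, numeric)   # both close to pi/4
```
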
\section{An example involving Bessel functions} \label{sec-bessel}
\setcounter{equation}{0}
The Bessel function $J_{\nu}(x)$ is defined by the series
\begin{equation}
J_{\nu}(x) = \frac{1}{2^{\nu}} \sum_{k=0}^{\infty} (-1)^{k}
\frac{x^{2k+\nu}}{2^{2k} \, k! \, \Gamma(\nu + k + 1)},
\end{equation}
\noindent
and it admits the hypergeometric representation
\begin{equation}
J_{\nu}(x) = \frac{x^{\nu}}{2^{\nu} \, \Gamma(1+ \nu)} \,
{_{0}F_{1}}
\left(
\begin{matrix}
- \\
1 + \nu
\end{matrix}
\Big{|} \frac{-x^{2}}{4}
\right).
\end{equation}
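As a consistency check of this normalization (illustrative only), at $\nu = \tfrac{1}{2}$ the series must reduce to the elementary form $J_{1/2}(x) = \sqrt{2/(\pi x)} \, \sin x$:

```python
import math

# The series for J_nu, truncated; at nu = 1/2 it must agree with
# the closed form sqrt(2/(pi x)) sin(x).
def besselj_series(nu, x, terms=30):
    return (x**nu / 2**nu) * sum(
        (-1)**k * x**(2*k)
        / (2**(2*k) * math.factorial(k) * math.gamma(nu + k + 1))
        for k in range(terms))

x = 1.3
print(besselj_series(0.5, x), math.sqrt(2.0 / (math.pi * x)) * math.sin(x))
```
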
\noindent
The method of brackets will now be employed to evaluate the integral
\begin{equation}
I := \int_{0}^{\infty} x^{-\lambda} J_{\nu}( \alpha x ) J_{\mu}( \beta x) \, dx.
\label{bessel-1}
\end{equation}
\noindent
Three integrals of this type form Section $6.574$ of \cite{gr}. \\
Replacing the hypergeometric form in the integral, we have
\begin{eqnarray}
I & \stackrel{\bullet}{=} & \frac{ \left( \tfrac{\alpha}{2} \right)^{\nu}
\left( \tfrac{\beta}{2} \right)^{\mu} }
{\Gamma(\nu+1) \Gamma(\mu+1)} \nonumber \\
& \times & \int_{0}^{\infty} \sum_{n_{1}, n_{2}}
\phi_{1,2} \frac{\alpha^{2n_{1}} \, \beta^{2n_{2}} }
{4^{n_{1}+n_{2}} \, (\nu+1)_{n_{1}} \, (\mu+1)_{n_{2}}}
x^{2n_{1}+2n_{2}-\lambda +\nu + \mu} \, dx.\nonumber
\end{eqnarray}
\noindent
Therefore, the bracket series associated to the integral (\ref{bessel-1})
becomes
\begin{eqnarray}
I & \stackrel{\bullet}{=} & \frac{ 2^{-\nu-\mu} \alpha^{\nu}
\beta^{\mu}}
{\Gamma(\nu+1) \Gamma(\mu+1)} \nonumber \\
& \times & \sum_{n_{1}} \sum_{n_{2}}
\frac{\phi_{1,2}}{ 4^{n_{1}+ n_{2}}} \frac{\alpha^{2n_{1}} \, \beta^{2n_{2}} }
{(\nu+1)_{n_{1}} \, (\mu+1)_{n_{2}}}
\langle{ 2n_{1}+2n_{2}-\lambda +\nu + \mu + 1 \rangle}.\nonumber
\end{eqnarray}
The vanishing of the brackets yields the value
$n_{1}^{*} = \frac{1}{2}(\lambda- \nu - \mu -1)
-n_{2}$ and it follows that
\begin{equation}
I = \frac{2^{-\nu -\mu} \alpha^{\nu} \beta^{\mu}}{\Gamma(\nu+1) \Gamma(\mu+1)}
\sum_{n_{2}=0}^{\infty} \frac{\phi_{2}}{ 4^{n_{1}^{*} +n_{2}}}
\frac{\alpha^{2 n_{1}^{*}} \beta^{2n_{2}} }{(\nu+1)_{n_{1}^{*}}
(\mu+1)_{n_{2}}}
\frac{\Gamma(-n_{1}^{*} )}{2}. \nonumber
\end{equation}
Writing the Pochhammer symbol $(\nu+1)_{n_{1}^{*}}$ in terms of the
gamma function we obtain
\begin{eqnarray}
I & = & \frac{\beta^{\mu} \alpha^{\lambda - \mu -1} }
{2^{\lambda} \Gamma(\mu+1)} \nonumber \\
& \times & \sum_{n_{2}=0}^{\infty}
\frac{(-1)^{n_{2}} }{\Gamma(n_{2}+1)}
\frac{( \beta^{2}/\alpha^{2})^{n_{2}} }
{\Gamma( \nu + 1 + \tfrac{1}{2}(\lambda - \nu -\mu-1) - n_{2} ) }
\frac{\Gamma( \tfrac{1}{2}(\nu+\mu-\lambda+1) + n_{2})}{(\mu+1)_{n_{2}}}.
\nonumber
\end{eqnarray}
\noindent
In order to write this in hypergeometric terms, we start with
\begin{eqnarray}
I & = & \frac{\beta^{\mu} \alpha^{\lambda - \mu -1} }
{2^{\lambda} \Gamma(\mu+1)} \nonumber \\
& \times &
\sum_{n_{2}=0}^{\infty}
(-1)^{n_{2}}
\frac{ ( \tfrac{1}{2} ( \nu + \mu - \lambda +1) )_{n_{2}}
(\beta^{2}/\alpha^{2})^{n_{2}} }
{( \tfrac{1}{2}(\lambda + \nu - \mu +1))_{-n_{2}} \, (\mu+1)_{n_{2}}
\Gamma(n_{2}+1)},
\nonumber
\end{eqnarray}
\noindent
and use the identity
\begin{equation}
(c)_{-n} = \frac{(-1)^{n}}{(1-c)_{n}},
\end{equation}
\noindent
to obtain
\begin{eqnarray}
I & = & \frac{\beta^{\mu} \alpha^{\lambda-\mu-1}}{2^{\lambda}}
\frac{\Gamma( \tfrac{1}{2}(\nu+\mu-\lambda+1))}
{\Gamma(\mu+1) \Gamma( \tfrac{1}{2} (\lambda + \nu - \mu+1) )} \nonumber \\
& \times &
\sum_{n_{2}=0}^{\infty}
( \tfrac{1}{2} (1 - \lambda-\nu+\mu) )_{n_{2}}
( \tfrac{1}{2} (\nu+\mu - \lambda + 1) )_{n_{2}}
\frac{1}{(\mu+1)_{n_{2}} \, \Gamma(n_{2}+1)}
\left( \frac{\beta^{2}}{\alpha^{2}} \right)^{n_{2}},
\nonumber
\end{eqnarray}
\noindent
that can be written as
\begin{eqnarray}
I & = & \frac{\beta^{\mu} \alpha^{\lambda-\mu-1}}{2^{\lambda}}
\frac{\Gamma( \tfrac{1}{2}(\nu+\mu-\lambda+1)) }
{\Gamma(\mu+1) \Gamma(\tfrac{1}{2}(\lambda + \nu-\mu+1)) } \nonumber \\
& \times & {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{1}{2}(1 - \lambda -\nu + \mu) & & \tfrac{1}{2}(\nu+\mu-\lambda+1) \\
& \mu+1 &
\end{matrix}
\Big{|} \frac{\beta^{2}}{\alpha^{2}}
\right).
\nonumber
\end{eqnarray}
This solution is valid for $|\beta^{2}/\alpha^{2} | < 1$ and corresponds to
formula $6.574.3$ in \cite{gr}. The table contains an error in this formula:
the power of $\beta$ is written as $\nu$ instead of $\mu$. To obtain a
formula valid for $|\beta^{2}/\alpha^{2}| > 1$ we could proceed as before
and obtain $6.574.1$ in \cite{gr}. Alternatively, exchange
$(\nu, \alpha)$ with $(\mu, \beta)$ and use the formula developed above.
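The result can be spot-checked numerically at an absolutely convergent parameter point, here $\lambda = 2$, $\nu = 2$, $\mu = 0$, $\alpha = 1$, $\beta = \tfrac{1}{2}$ (the integrand then decays like $x^{-3}$); the oscillatory integral is evaluated piecewise over intervals of length $\pi$:

```python
from mpmath import mp, mpf, besselj, gamma, hyp2f1, quad, pi

mp.dps = 15
lam, nu, mu = mpf(2), mpf(2), mpf(0)
alpha, beta = mpf(1), mpf('0.5')

# piecewise integration up to x = 60*pi; the truncated tail is O(1e-5)
f = lambda x: x**(-lam) * besselj(nu, alpha*x) * besselj(mu, beta*x)
numeric = quad(f, [k*pi for k in range(61)])

# the closed form derived above (power of beta corrected to mu)
closed = (beta**mu * alpha**(lam - mu - 1) / 2**lam
          * gamma((nu + mu - lam + 1)/2)
          / (gamma(mu + 1) * gamma((lam + nu - mu + 1)/2))
          * hyp2f1((1 - lam - nu + mu)/2, (nu + mu - lam + 1)/2,
                   mu + 1, beta**2/alpha**2))
```

The comparison tolerance accounts for the truncated oscillatory tail of the numerical integral.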
\section{A new evaluation of a quartic integral} \label{sec-quartic}
\setcounter{equation}{0}
The integral
\begin{equation}
N_{0,4}(a,m) := \int_{0}^{\infty} \frac{dx}{(x^{4} + 2ax^{2} + 1)^{m+1}}
\end{equation}
\noindent
is given by
\begin{equation}
N_{0,4}(a,m) = \frac{\pi}{2} \frac{P_{m}(a) }{[2(a+1)]^{m+\tfrac{1}{2}}},
\label{value-quartic}
\end{equation}
\noindent
where $P_{m}$ is the polynomial
\begin{equation}
P_{m}(a) = \sum_{l=0}^{m} d_{l,m}a^{l},
\end{equation}
\noindent
with coefficients
\begin{equation}
d_{l,m} = 2^{-2m} \sum_{k=l}^{m} 2^{k} \binom{2m-2k}{m-k} \binom{m+k}{m}
\binom{k}{l}.
\end{equation}
The sequence
$\{ d_{l,m}: \, 0 \leq l \leq m \}$ has remarkable arithmetical and
combinatorial properties
\cite{manna-moll-survey}. \\
The reader will find in \cite{amram} a survey of the many different proofs
of (\ref{value-quartic}) available in the literature. One of these proofs
follows from the hypergeometric representation
\begin{equation}
N_{0,4}(a,m) = 2^{m - \tfrac{1}{2}} (a+1)^{-m - \tfrac{1}{2} }
B \left( 2m + \tfrac{3}{2}, \tfrac{1}{2} \right)
{_{2}F_{1}}
\left(
\begin{matrix}
-m & & m+1 \\
& m + \tfrac{3}{2} &
\end{matrix}
\Big{|} \frac{1-a}{2}
\right).
\end{equation}
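Both (\ref{value-quartic}) and this hypergeometric representation can be checked numerically; in the sketch below (Python/mpmath), the symbol $B$ is read as the Euler beta function $B(2m+\tfrac{3}{2}, \tfrac{1}{2})$:

```python
from mpmath import mp, mpf, beta, hyp2f1, quad, binomial, pi

mp.dps = 20

def N04(a, m):
    # direct numerical evaluation of the quartic integral
    return quad(lambda x: (x**4 + 2*a*x**2 + 1)**(-(m + 1)), [0, mp.inf])

def P(m, a):
    # the polynomial P_m(a) built from the coefficients d_{l,m}
    def d(l):
        return mpf(2)**(-2*m) * sum(2**k * binomial(2*m - 2*k, m - k)
                                    * binomial(m + k, m) * binomial(k, l)
                                    for k in range(l, m + 1))
    return sum(d(l) * a**l for l in range(m + 1))

a = mpf('0.7')
checks = []
for m in range(4):
    closed = pi/2 * P(m, a) / (2*(a + 1))**(m + mpf(1)/2)
    # hypergeometric representation; the 2F1 terminates since -m is a
    # nonpositive integer
    hyprep = (2**(m - mpf(1)/2) * (a + 1)**(-m - mpf(1)/2)
              * beta(2*m + mpf(3)/2, mpf(1)/2)
              * hyp2f1(-m, m + 1, m + mpf(3)/2, (1 - a)/2))
    checks.append((N04(a, m), closed, hyprep))
```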
New proofs of this evaluation keep on appearing. For instance, the survey
\cite{amram} does not include the recent automatic proof by
C. Koutschan and V. Levandovskyy \cite{koutschan1}.
The goal of this section is to provide yet another
proof of the identity (\ref{value-quartic}) using the method of brackets.
The bracket series for $I \equiv N_{0,4}(a,m)$ is formed by the usual
procedure. The result is
\begin{equation}
\label{N-bracket}
I \stackrel{\bullet}{=} \frac{1}{\Gamma(m+1)} \sum_{n_{1},n_{2},n_{3}}
\phi_{1,2,3} (2a)^{n_{2}}
\langle{ 4n_{1}+2n_{2}+1 \rangle}
\langle{ m+1 + n_{1}+n_{2}+n_{3} \rangle}.
\end{equation}
\noindent
The expression (\ref{N-bracket}) contains
two brackets and three indices. Therefore the final result will
be a single series in the free index. We employ the following notation:
$I$ is the original bracket series, the symbol $I_{j}$ denotes the series
$I$ after eliminating the index $n_{j}$. Similarly $I_{i,j}$ denotes the
series $I$ after first eliminating $n_{i}$ (to produce $I_{i}$) and then
eliminating $n_{j}$. \\
\noindent
{\bf Case 1}: $n_{3}$ is the free index. Eliminate first $n_{1}$ from
the bracket $\langle{ 4n_{1} + 2n_{2}+1 \rangle}$ to obtain
$n_{1}^{*} = - \tfrac{1}{2}n_{2} - \tfrac{1}{4}$. The resulting bracket
series is
\begin{equation}
I_{1} \stackrel{\bullet}{=} \sum_{n_{2},n_{3}} \phi_{2,3}
\frac{(2a)^{n_{2}} \Gamma(\tfrac{1}{2}n_{2} + \tfrac{1}{4}) }{4 \Gamma(m+1) }
\langle{ m + \tfrac{3}{4} + \tfrac{1}{2}n_{2} + n_{3} \rangle}.
\end{equation}
\noindent
The next step is to eliminate $n_{2}$ to get $n_{2}^{*} = -2m- \tfrac{3}{2}
- 2n_{3}$ and obtain
\begin{equation}
I_{1,2} = \frac{1}{2 \Gamma(m+1) (2a)^{2m+3/2}}
\sum_{n_{3}=0}^{\infty} \frac{\phi_{3}}{(2a)^{n_{3}}}
\Gamma( -m-\tfrac{1}{2}-n_{3}) \Gamma( 2m + \tfrac{3}{2} + 2n_{3}).
\end{equation}
In order to simplify these expressions, we employ
\begin{equation}
\Gamma(x+m) = (x)_{m} \Gamma(x), \,
\Gamma(x-m) = (-1)^{m}\Gamma(x)/(1-x)_{m}
\label{reduce-1}
\end{equation}
\noindent
and
\begin{equation}
(x)_{2m} = 2^{2m}
\left(\tfrac{1}{2}x \right)_{m} \left( \tfrac{1}{2}(x+1) \right)_{m},
\label{reduce-2}
\end{equation}
\noindent
for $x \in \mathbb{R}$ and $m \in \mathbb{N}$. We obtain
\begin{equation}
\Gamma( -m - \tfrac{1}{2} - n_{3}) =
\frac{(-1)^{n_{3}}\Gamma(-\tfrac{1}{2}-m)}{( \tfrac{3}{2} + m)_{n_{3}}}
\nonumber
\end{equation}
\noindent
and
\begin{equation}
\Gamma(2m+ \tfrac{3}{2} + 2n_{3}) =
\Gamma(2m+ \tfrac{3}{2}) (m + \tfrac{3}{4})_{n_{3}}
(m + \tfrac{5}{4})_{n_{3}} 2^{2n_{3}}.
\nonumber
\end{equation}
\noindent
These yield
\begin{equation}
I_{1,2} =
\frac{\Gamma(-\tfrac{1}{2} -m) \Gamma( 2m + \tfrac{3}{2})}
{2 \Gamma(m+1) (2a)^{2m+ 3/2}}
\sum_{n_{3}=0}^{\infty}
\frac{(m + 3/4)_{n_{3}} \, (m + 5/4)_{n_{3}} }{(m + 3/2)_{n_{3}} n_{3}!}
a^{-2n_{3}},
\end{equation}
\noindent
or
\begin{equation}
I_{1,2} =
\frac{\Gamma(-\tfrac{1}{2} -m) \Gamma( 2m + \tfrac{3}{2})}
{2 \Gamma(m+1) (2a)^{2m+ 3/2}}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
m+ \tfrac{3}{4} & & m + \tfrac{5}{4} \\
& m + \tfrac{3}{2} &
\end{matrix}
\, \Big{|} \frac{1}{a^{2}}
\right).
\end{equation}
\noindent
{\bf Note}. The reader
can check that $I_{1,2} = I_{2,1}$, so the value of the sum for
the quartic integral does not depend on the order in which the indices
$n_{1}$ and $n_{2}$ are eliminated. The reader can also verify that this
occurs in the next two cases described below; that is, $I_{1,3} = I_{3,1}$
and $I_{2,3} = I_{3,2}$.
\medskip
\noindent
{\bf Case 2}: $n_{1}$ is the free index. A similar argument yields
\begin{equation}
I_{2,3} =
\frac{\Gamma(m + \tfrac{1}{2}) \Gamma( \tfrac{1}{2})}
{2 \Gamma(m+1) (2a)^{1/2}}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{1}{4} & & \tfrac{3}{4} \\
& \tfrac{1}{2} - m &
\end{matrix}
\, \Big{|} \frac{1}{a^{2}}
\right).
\end{equation}
\medskip
\noindent
{\bf Case 3}: $n_{2}$ is the free index. Eliminate $n_{1}$ from the
bracket series (\ref{N-bracket}) to produce
\begin{equation}
I_{1} \stackrel{\bullet}{=} \sum_{n_{2},n_{3}} \phi_{2,3}
\frac{(2a)^{n_{2}} \Gamma(\tfrac{1}{2}n_{2} + \tfrac{1}{4}) }{4 \Gamma(m+1) }
\langle{ m + \tfrac{3}{4} + \tfrac{1}{2}n_{2} + n_{3} \rangle},
\end{equation}
\noindent
and now eliminate $n_{3}$ to obtain $n_{3}^{*} = -m - \tfrac{3}{4} -
\tfrac{1}{2}n_{2}$. This yields
\begin{equation}
I_{1,3} =
\frac{1}{4 \Gamma(m+1)} \sum_{n_{2}=0}^{\infty} (-1)^{n_{2}}
\frac{(2a)^{n_{2}}}{n_{2}!} \Gamma( \tfrac{1}{2}n_{2} + \tfrac{1}{4} )
\Gamma( m+ \tfrac{3}{4} + \tfrac{1}{2}n_{2}).
\end{equation}
In order to obtain a hypergeometric representation of this
expression, we separate the last series
according to the parity of $n_{2}$:
\begin{eqnarray}
I_{1,3} & = &
\frac{1}{4 \Gamma(m+1)}
\sum_{n_{2}=0}^{\infty} \frac{(2a)^{2n_{2}}}{(2n_{2})!} \,
\Gamma( n_{2} + \tfrac{1}{4})
\Gamma( n_{2} + m + \tfrac{3}{4}) \nonumber \\
& - & \frac{1}{4 \Gamma(m+1)}
\sum_{n_{2}=0}^{\infty} \frac{(2a)^{2n_{2}+1}}{(2n_{2}+1)!} \,
\Gamma( n_{2} + \tfrac{3}{4})
\Gamma( n_{2} + m + \tfrac{5}{4}). \nonumber
\end{eqnarray}
Using the standard formulas (\ref{reduce-1}) and (\ref{reduce-2}), we can
write this in the form
\begin{eqnarray}
I_{1,3} & = &
\frac{\Gamma(\tfrac{1}{4}) \Gamma( m + \tfrac{3}{4})}{4 \Gamma(m+1)}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{1}{4} & & m+ \tfrac{3}{4} \\
& \tfrac{1}{2} &
\end{matrix}
\, \Big{|} a^{2}
\right) - \nonumber \\
& & \frac{a \Gamma(\tfrac{3}{4}) \Gamma( m + \tfrac{5}{4})}{2 \Gamma(m+1)}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{3}{4} & & m+ \tfrac{5}{4} \\
& \tfrac{3}{2} &
\end{matrix}
\, \Big{|} a^{2}
\right).
\nonumber
\end{eqnarray}
In summary: we have obtained three series related to the integral
$N_{0,4}(a,m)$. The series $I_{1,2}$ and $I_{2,3}$ are given in terms of
the hypergeometric function ${_{2}F_{1}}$ with last argument $1/a^{2}$;
these series converge when $a^{2} > 1$. The remaining case $I_{1,3}$ gives
a ${_{2}F_{1}}$ with argument $a^{2}$, which converges when $a^{2} < 1$.
Rule \ref{rule-disc} states that we must add the series $I_{1,2}$ and
$I_{2,3}$ to get a valid representation for $a^{2} >1$. In conclusion, the
method of brackets shows that
\begin{eqnarray}
N_{0,4}(a,m) & = &
\frac{\Gamma(\tfrac{1}{4}) \Gamma(m + \tfrac{3}{4} ) }{4 \Gamma(m+1)}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{1}{4} & & m+ \tfrac{3}{4} \\
& \tfrac{1}{2} &
\end{matrix}
\, \Big{|} a^{2}
\right) +
\nonumber \\
& - & \frac{a \Gamma(\tfrac{3}{4}) \Gamma(m + \tfrac{5}{4} ) }{2 \Gamma(m+1)}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{3}{4} & & m+ \tfrac{5}{4} \\
& \tfrac{3}{2} &
\end{matrix}
\, \Big{|} a^{2}
\right) \quad \text{ for } a^{2} < 1,
\nonumber \\
& = &
\frac{\Gamma(\tfrac{1}{2}) \Gamma(m + \tfrac{1}{2} ) }{2 \sqrt{2a} \Gamma(m+1)}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{1}{4} & & \tfrac{3}{4} \\
& \tfrac{1}{2} -m &
\end{matrix}
\, \Big{|} \frac{1}{a^{2} }
\right)
\nonumber \\
& + & \frac{ \Gamma(-m-\tfrac{1}{2}) \Gamma(2m + \tfrac{3}{2} ) }{2
(2a)^{2m+3/2} \Gamma(m+1)}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
m+ \tfrac{3}{4} & & m+ \tfrac{5}{4} \\
& m+ \tfrac{3}{2} &
\end{matrix}
\, \Big{|} \frac{1}{ a^{2} }
\right) \quad \text{ for } a^{2} > 1.
\nonumber
\end{eqnarray}
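This two-branch evaluation can be confirmed numerically against direct integration; the sketch below (Python/mpmath) transcribes the expressions for $I_{1,3}$, $I_{1,2}$ and $I_{2,3}$ obtained above and checks $m = 0, 1, 2$:

```python
from mpmath import mp, mpf, gamma, hyp2f1, quad

mp.dps = 20
half = mpf(1)/2

def N04(a, m):
    # direct numerical evaluation of the quartic integral
    return quad(lambda x: (x**4 + 2*a*x**2 + 1)**(-(m + 1)), [0, mp.inf])

def I13(a, m):   # branch valid for a^2 < 1
    return (gamma(mpf(1)/4)*gamma(m + mpf(3)/4)/(4*gamma(m + 1))
            * hyp2f1(mpf(1)/4, m + mpf(3)/4, half, a**2)
            - a*gamma(mpf(3)/4)*gamma(m + mpf(5)/4)/(2*gamma(m + 1))
            * hyp2f1(mpf(3)/4, m + mpf(5)/4, mpf(3)/2, a**2))

def I12(a, m):   # contribution valid for a^2 > 1
    return (gamma(-half - m)*gamma(2*m + mpf(3)/2)
            / (2*gamma(m + 1)*(2*a)**(2*m + mpf(3)/2))
            * hyp2f1(m + mpf(3)/4, m + mpf(5)/4, m + mpf(3)/2, 1/a**2))

def I23(a, m):   # contribution valid for a^2 > 1
    return (gamma(m + half)*gamma(half)/(2*gamma(m + 1)*(2*a)**half)
            * hyp2f1(mpf(1)/4, mpf(3)/4, half - m, 1/a**2))

inside  = [(N04(half, m), I13(half, m)) for m in (0, 1, 2)]
outside = [(N04(mpf(2), m), I12(mpf(2), m) + I23(mpf(2), m)) for m in (0, 1, 2)]
```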
The continuity of these expressions at $a=1$ requires the evaluation of
${_{2}F_{1}}(a,b;c;1)$. Recall that this is finite only when $c > a+b$. In
our case, we have four hypergeometric terms and in each one of them, the
corresponding expression $c-(a+b)$ equals $-\tfrac{1}{2}-m$. Therefore
each hypergeometric term blows up as $a \to 1$. This divergence
is made evident by employing the relation
\begin{equation}
{_{2}F_{1}}(a,b;c;z) = (1-z)^{c-a-b} \, {_{2}F_{1}}(c-a, c-b;c;z).
\end{equation}
\noindent
The expression for $N_{0,4}(a,m)$ given above is transformed into
\begin{eqnarray}
N_{0,4}(a,m) & = &
\frac{\Gamma(\tfrac{1}{4}) \Gamma(m + \tfrac{3}{4} ) }{4 \Gamma(m+1)
(1-a^{2})^{m+1/2}}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{1}{4} & & -m- \tfrac{1}{4} \\
& \tfrac{1}{2} &
\end{matrix}
\, \Big{|} a^{2}
\right) +
\nonumber \\
& - & \frac{a \Gamma(\tfrac{3}{4}) \Gamma(m + \tfrac{5}{4} ) }
{2 \Gamma(m+1)(1-a^{2})^{m+1/2}}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{3}{4} & & -m+ \tfrac{1}{4} \\
& \tfrac{3}{2} &
\end{matrix}
\, \Big{|} a^{2}
\right) \quad \text{ for } a^{2} < 1,
\nonumber \\
& = &
\frac{\Gamma(\tfrac{1}{2}) \Gamma(m + \tfrac{1}{2} ) }
{2 \sqrt{2a} \Gamma(m+1) (1-a^{-2})^{m+1/2}}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{1}{4}-m & & -\tfrac{1}{4} -m \\
& \tfrac{1}{2} -m &
\end{matrix}
\, \Big{|} \frac{1}{a^{2} }
\right)
\nonumber \\
& + & \frac{ \Gamma(-m-\tfrac{1}{2}) \Gamma(2m + \tfrac{3}{2} ) }{2
(2a)^{2m+3/2} \Gamma(m+1) (1- a^{-2})^{m+1/2}}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{3}{4} & & \tfrac{1}{4} \\
& m+ \tfrac{3}{2} &
\end{matrix}
\, \Big{|} \frac{1}{ a^{2} }
\right) \quad \text{ for } a^{2} > 1.
\nonumber
\end{eqnarray}
\medskip
Introduce the functions
\begin{eqnarray}
G_{1}(a,m) & = & \small{\left( \frac{3}{4} \right)_{m} }
\, \,
\small{
{_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{1}{4} & & -\tfrac{1}{4} - m \\
& \tfrac{1}{2} &
\end{matrix}
\, \Big{|} a^{2}
\right)
}
\nonumber \\
& - & 2 a \small{ \left( \frac{1}{4} \right)_{m+1} }
\, \,
\small{
{_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{3}{4} & & \tfrac{1}{4} - m \\
& \tfrac{3}{2} &
\end{matrix}
\, \Big{|} a^{2}
\right),
}
\nonumber
\end{eqnarray}
\noindent
and
\begin{eqnarray}
G_{2}(a,m) & = & \small{\left( \frac{1}{2} \right)_{m}}
(2a)^{2m+1}
\, \, {_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{1}{4}-m & & -\tfrac{1}{4} - m \\
& \tfrac{1}{2}-m &
\end{matrix}
\, \Big{|} \frac{1}{a^{2}}
\right) \nonumber \\
& - & (-1)^{m} m! 2^{-2m}
\small{\binom{4m+1}{2m} }
\, \,
\small{
{_{2}F_{1}}
\left(
\begin{matrix}
\tfrac{3}{4} & & \tfrac{1}{4} \\
& m+ \tfrac{3}{2} &
\end{matrix}
\, \Big{|} \frac{1}{a^{2} }
\right).
}
\nonumber
\end{eqnarray}
\noindent
Then
\begin{equation}
N_{0,4}(a,m) =
\frac{\pi \sqrt{2} }{4m!} \frac{G_{1}(a,m)}{(1-a^{2})^{m+1/2}}
\label{nzero-1}
\end{equation}
\noindent
for $a^{2} < 1$ and
\begin{equation}
N_{0,4}(a,m) = \frac{\pi}{2^{2m+5/2} \sqrt{a} m!}
\frac{G_{2}(a,m)}{(a^{2}-1)^{m+1/2}}
\label{nzero-2}
\end{equation}
\noindent
for $a^{2} > 1$. The functions $G_{1}(a,m)$
and $G_{2}(a,m)$ match at $a=1$ to sufficiently high order to verify the
continuity at $a=1$. Moreover, their blow-up at $a=-1$ is a reflection of the
fact that the convergence of the integral $N_{0,4}(a,m)$
requires $a> -1$. \\
It is possible to show that both expressions (\ref{nzero-1}) and
(\ref{nzero-2})
reduce to (\ref{value-quartic}).
The details will appear elsewhere.
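Although that reduction is deferred, the representations (\ref{nzero-1}) and (\ref{nzero-2}) can at least be confirmed numerically; the sketch below (Python/mpmath) transcribes $G_{1}$ and $G_{2}$ from the displays above:

```python
from mpmath import mp, mpf, gamma, rf, binomial, factorial, hyp2f1, quad, pi, sqrt

mp.dps = 20
q = mpf(1)/4   # shorthand for 1/4

def N04(a, m):
    # direct numerical evaluation of the quartic integral
    return quad(lambda x: (x**4 + 2*a*x**2 + 1)**(-(m + 1)), [0, mp.inf])

def G1(a, m):
    return (rf(3*q, m) * hyp2f1(q, -q - m, 2*q, a**2)
            - 2*a*rf(q, m + 1) * hyp2f1(3*q, q - m, mpf(3)/2, a**2))

def G2(a, m):
    return (rf(mpf(1)/2, m) * (2*a)**(2*m + 1)
            * hyp2f1(q - m, -q - m, mpf(1)/2 - m, 1/a**2)
            - (-1)**m * factorial(m) * mpf(2)**(-2*m) * binomial(4*m + 1, 2*m)
            * hyp2f1(3*q, q, m + mpf(3)/2, 1/a**2))

a_in, a_out = mpf(1)/2, mpf(2)
inside  = [(N04(a_in, m),
            pi*sqrt(2)/(4*factorial(m))
            * G1(a_in, m)/(1 - a_in**2)**(m + mpf(1)/2))
           for m in (0, 1, 2)]
outside = [(N04(a_out, m),
            pi/(2**(2*m + mpf(5)/2)*sqrt(a_out)*factorial(m))
            * G2(a_out, m)/(a_out**2 - 1)**(m + mpf(1)/2))
           for m in (0, 1, 2)]
```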
\section{Integrals from Feynman diagrams} \label{sec-feynman}
\setcounter{equation}{0}
The flexibility of the method of brackets is now illustrated by evaluating
examples of definite integrals appearing in the resolution of Feynman diagrams.
The reader will find in
\cite{huang-1}, \cite{smirnov1}, \cite{itzykson1} and \cite{zinn-1} information
about these diagrams. The mathematical theory behind Quantum Field Theory,
and in particular the role of Feynman diagrams, can be found in
\cite{folland-1} and \cite{connes-marcolli}.
The graph $G$ contains $N$ {\em propagators} or {\em internal lines}, $L$
{\em loops} associated to independent internal momenta ${\mathbf{Q}}:=
\{ Q_{1}, \cdots, Q_{L} \}$, and $E$ independent external momenta
$\mathbf{P} := \{ P_{1}, \cdots, P_{E} \}$ (therefore the diagram has $E+1$
{\em external lines}). The momenta $P_{j}, Q_{j}$ belong to
$\mathbb{R}^{4}$, equipped with the Minkowski metric. Therefore, for $A, \, B
\in \mathbb{R}^{4}$, we have
\begin{equation}
A^{2} := A_{0}^{2} - A_{1}^{2} - A_{2}^{2} - A_{3}^{2},
\end{equation}
\noindent
and
\begin{equation}
A \cdot B := A_{0}B_{0} - A_{1}B_{1} - A_{2}B_{2} - A_{3}B_{3}.
\end{equation}
Finally, each propagator has a {\em mass} $m_{j} \geq 0$
associated to it, collected in the vector $\mathbf{m} =
(m_{1}, \cdots, m_{N})$.
The method of dimensional regularization
(see \cite{ryder} for details) gives an
integral expression in the momentum space that represents the diagram in
$D = 4 - 2 \epsilon$ dimensions. In Minkowski space the integral is given by
\begin{equation}
G = G( \mathbf{P}, \mathbf{m}) :=
\int \frac{d^{D}Q_{1}}{i \pi^{D/2}} \cdots
\int \frac{d^{D}Q_{L}}{i \pi^{D/2}}
\frac{1}{(B_{1}^{2}-m_{1}^{2})^{a_{1}}}
\cdots
\frac{1}{(B_{N}^{2}-m_{N}^{2})^{a_{N}}}.
\label{feyn-1}
\end{equation}
\noindent
The symbol $B_{j}$ represents the momentum of the $j$-th propagator and it is
a linear combination of the external and internal momenta $\mathbf{P}$ and
$\mathbf{Q}$, respectively. The vector $\mathbf{a} := ( a_{1}, \cdots, a_{N})$
captures the {\em powers}
of the propagators, which may assume arbitrary values.
In order to simplify (\ref{feyn-1}), we use the identity
\begin{equation}
\frac{1}{A^{\alpha}} = \frac{1}{\Gamma(\alpha)} \int_{0}^{\infty} x^{\alpha-1} e^{-Ax} \, dx
\end{equation}
\noindent
with $A = B_{j}^{2}-m_{j}^{2}$ and convert it into
\begin{equation}
G = \frac{1}{\prod_{j=1}^{N} \Gamma(a_{j})}
\int_{0}^{\infty} \text{exp} \left( \sum_{j=1}^{N} x_{j}m_{j}^{2} \right)
\int \frac{\prod_{j=1}^{L} d^{D}Q_{j}}{(i \pi^{D/2})^{L} }
\text{exp} \left( - \sum_{j=1}^{N} x_{j} B_{j}^{2} \right) \mathbf{dx},
\label{feyn-2}
\end{equation}
\noindent
where $\displaystyle{\mathbf{dx} = \prod_{j=1}^{N} x_{j}^{a_{j}-1} dx_{j}}$.
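The elementary identity used to pass from (\ref{feyn-1}) to (\ref{feyn-2}) is Euler's integral representation of $A^{-\alpha}$, valid for $\operatorname{Re} A > 0$; a quick numerical spot-check at a positive value of $A$:

```python
from mpmath import mp, mpf, gamma, quad, exp

mp.dps = 20
A, alpha = mpf('2.5'), mpf('1.7')   # any A > 0, alpha > 0

# 1/A^alpha = (1/Gamma(alpha)) * int_0^infty x^{alpha-1} e^{-A x} dx
lhs = A**(-alpha)
rhs = quad(lambda x: x**(alpha - 1)*exp(-A*x), [0, mp.inf]) / gamma(alpha)
```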
The next step in the reduction process is to integrate (\ref{feyn-2}) with
respect to the internal momenta $Q_{j}$. This
gives an expression for the integral
$G$ in terms of only the external momenta $P_{j}$ and the masses $m_{j}$. This
step can be
achieved by introducing the {\em Schwinger
parametrization} (see \cite{gonzalez-2005} and chapter 1, section 4 of
\cite{connes-marcolli}
for details); the $x_{j}$ are called the {\em Schwinger variables}.
The final result is the representation
\begin{equation}
G = \frac{(-1)^{-LD/2} }{\prod_{j=1}^{N} \Gamma(a_{j}) }
\int_{0}^{\infty} U^{-D/2} \text{exp} \left( \sum_{j=1}^{N} x_{j}m_{j}^{2} \right)
\text{exp} \left( - \frac{F}{U} \right) \mathbf{dx}.
\label{Schwinger-rep}
\end{equation}
\noindent
The function $F$ is a quadratic form in the external
momenta, defined by
\begin{equation}
F = \sum_{i,j=1}^{E} C_{i,j} P_{i} \cdot P_{j}.
\end{equation}
\noindent
The function $U$ and the coefficients $C_{i,j}$ are the
{\em Symanzik polynomials} in the
Schwinger parameters $x_{j}$. These polynomials are
given in terms of determinants of the so-called
{\em matrix of parameters}. The coefficients $C_{i,j}$
are symmetric; that is, $C_{i,j} = C_{j,i}$. A systematic algorithm to
write down the expression (\ref{Schwinger-rep}) directly from the
Feynman diagram is presented in \cite{gonzalez-2005}.
\begin{example}
Figure \ref{figure1} depicts the interaction of three
particles corresponding to the three external lines with
momenta $P_{1}, \, P_{2}, \, P_{3}$. In this case the Schwinger
parametrization provides the integral
{{
\begin{figure}[ht]
\begin{center}
\centerline{\epsfig{file=Triangle.eps,width=20em,angle=0}}
\caption{The triangle}
\label{figure1}
\end{center}
\end{figure}
}}
\begin{eqnarray}
G & = & \frac{(-1)^{-D/2}}{\Gamma(a_{1})\Gamma(a_{2})\Gamma(a_{3})}
\int_{0}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} \frac{x_{1}^{a_{1}-1} x_{2}^{a_{2}-1} x_{3}^{a_{3}-1} }
{(x_{1}+x_{2}+x_{3})^{D/2}}
\nonumber \\
& \times & \text{exp}(x_{1}m_{1}^{2} + x_{2}m_{2}^{2} + x_{3}m_{3}^{2})
\text{exp} \left( -
\frac{C_{11}P_{1}^{2} + 2C_{12}P_{1} \cdot P_{2} + C_{22}P_{2}^{2} }
{x_{1}+x_{2}+x_{3}} \right) dx_{1}dx_{2}dx_{3}.
\nonumber
\end{eqnarray}
\noindent
The algorithm in \cite{gonzalez-2005} and \cite{gonzalez-2007} gives the coefficients $C_{i,j}$ as
\begin{equation}
C_{11} = x_{1}(x_{2}+x_{3}), \, \, C_{12} = x_{1}x_{3}, \, \,
C_{22} = x_{3}(x_{1}+x_{2}).
\end{equation}
Conservation of momentum gives $P_{3} = P_{1} + P_{2}$ and replacing the
coefficients $C_{i,j}$ we obtain
\begin{eqnarray}
G & = & \frac{(-1)^{-D/2}}{\prod_{j=1}^{3} \Gamma(a_{j})}
\int_{0}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} x_{1}^{a_{1}-1} x_{2}^{a_{2}-1} x_{3}^{a_{3}-1} \times \nonumber \\
& \times & \frac{\text{exp}\left(x_{1}m_{1}^{2} + x_{2}m_{2}^{2} +
x_{3}m_{3}^{2} \right)
\text{exp}\left( - \frac{x_{1}x_{2}P_{1}^{2} + x_{2}x_{3}P_{2}^{2}
+ x_{3}x_{1}P_{3}^{2}}{x_{1}+x_{2}+x_{3}} \right)}{(x_{1}+x_{2}+x_{3})^{D/2}}
dx_{1}dx_{2}dx_{3}.
\nonumber
\end{eqnarray}
To solve the Feynman diagram in Figure \ref{figure1}
it is required to evaluate the
integral $G$ as a function of the variables $P_{1}, \, P_{2} \in
{\mathbb{R}}^{4}$, the masses $m_{i}$, the dimension $D$ and the
parameters $a_{i}$.
We now describe the evaluation of the integral $G$ in the special massless
situation $m_{1}=m_{2}=m_{3} =0$. Moreover, we assume that
$P_{1}^{2}=P_{2}^{2}=0$. The
integral to be evaluated is then
\begin{equation}
G_{1} = \frac{(-1)^{-D/2}}{\Gamma(a_{1}) \Gamma(a_{2}) \Gamma(a_{3})}
\int_{\mathbb{R}_{+}^{3}} x_{1}^{a_{1}-1} x_{2}^{a_{2}-1} x_{3}^{a_{3}-1}
\frac{\text{exp}\left( - \frac{x_{1}x_{3}}
{x_{1}+x_{2}+x_{3}} P_{3}^{2} \right)}{(x_{1}+x_{2}+x_{3})^{D/2}}
\, dx_{1}\, dx_{2}\, dx_{3}.
\nonumber
\end{equation}
The method of brackets gives
\begin{equation}
G_{1} \stackrel{\bullet}{=} \frac{(-1)^{-D/2}}{\Gamma(a_{1}) \Gamma(a_{2}) \Gamma(a_{3}) }
\sum_{n_{1}} \sum_{n_{2}} \sum_{n_{3}} \sum_{n_{4}} \phi_{1234}
(P_{3}^{2})^{n_{1}} \frac{\Delta_{1}\Delta_{2} \Delta_{3} \Delta_{4}}
{\Gamma( D/2+n_{1})},
\end{equation}
\noindent
where the brackets $\Delta_{j}$ are given by
\begin{eqnarray}
\Delta_{1} & = & \langle{ D/2 + n_{1} + n_{2} + n_{3} + n_{4} \rangle},
\nonumber \\
\Delta_{2} & = & \langle{ a_{1} + n_{1} + n_{2} \rangle},
\nonumber \\
\Delta_{3} & = & \langle{ a_{2} + n_{3} \rangle},
\nonumber \\
\Delta_{4} & = & \langle{ a_{3} + n_{1} + n_{4} \rangle}.
\nonumber
\end{eqnarray}
\noindent
The solution contains no free indices: there are four sums and
the linear system corresponding to the vanishing of the
brackets eliminates all of them:
\begin{equation}
n_{1}^{*} = \tfrac{D}{2} - a_{1} - a_{2} - a_{3}, \,
n_{2}^{*} = - \tfrac{D}{2} + a_{2} + a_{3}, \,
n_{3}^{*} = -a_{2}, \,
n_{4}^{*} = - \tfrac{D}{2} + a_{1} + a_{2}. \nonumber
\end{equation}
\noindent
We conclude that
\begin{eqnarray}
G_{1} & = & \frac{(-1)^{-D/2}}{\Gamma(a_{1}) \Gamma(a_{2}) \Gamma(a_{3}) }
(P_{3}^{2})^{D/2-a_{1}-a_{2}-a_{3}} \times \nonumber \\
& \times &
\frac{\Gamma(a_{1}+a_{2}+a_{3}-\tfrac{D}{2}) \Gamma( \tfrac{D}{2} -a_{2}-a_{3})
\Gamma(a_{2}) \Gamma(\tfrac{D}{2}-a_{1}-a_{2})}
{\Gamma(D - a_{1}-a_{2}-a_{3})}.
\nonumber
\end{eqnarray}
\end{example}
\begin{example}
\label{example-bubble}
The second example considers the diagram depicted in Figure \ref{figure2}. The
resolution of this diagram is well known; it appears in \cite{boos1},
\cite{davydychev1991} and \cite{davydychev1992}. The diagram contains
two external lines and two internal lines (propagators) with the same mass $m$.
These propagators are marked $1$ and $2$.
{{
\begin{figure}[ht]
\begin{center}
\centerline{\epsfig{file=Bubble.eps,width=20em,angle=0}}
\caption{The bubble}
\label{figure2}
\end{center}
\end{figure}
}}
In momentum variables, the integral representation of this diagram is given
by
\begin{equation}
G = \int_{\mathbb{R}^{D}} \frac{d^{D}Q}{i \pi^{D/2}}
\frac{1}{(Q^{2}-m^{2})^{a_{1}} \, ( (P-Q)^{2} - m^{2})^{a_{2}}}.
\end{equation}
\noindent
For the diagram considered here, we have $U = x_{1} + x_{2}$ and
$F = x_{1}x_{2}P^{2}$. According to
(\ref{Schwinger-rep}), the Schwinger representation is given by
\begin{eqnarray}
G & = & \frac{(-1)^{-D/2}}{\Gamma(a_{1})\Gamma(a_{2})} \nonumber \\
& \times & \int_{0}^{\infty} \int_{0}^{\infty} \frac{x_{1}^{a_{1}-1} x_{2}^{a_{2}-1}}{(x_{1}+x_{2})^{D/2}}
\text{exp}\left( m^{2}(x_{1}+x_{2}) \right)
\text{exp} \left( - \frac{x_{1}x_{2}}{x_{1}+x_{2}} P^{2} \right)
\, dx_{1} \, dx_{2}.
\nonumber
\end{eqnarray}
In order to generate the bracket series for $G$, we expand first
the exponential function to obtain
\begin{equation}
G \stackrel{\bullet}{=} \frac{(-1)^{-D/2}}{\Gamma(a_{1}) \Gamma(a_{2})}
\sum_{n_{1},n_{2}} \phi_{1,2} (P^{2})^{n_{1}} (-m^{2})^{n_{2}}
\int_{R^{2}_{+}} \frac{x_{1}^{n_{1}} x_{2}^{n_{2}} \, dx_{1} \, dx_{2}}
{(x_{1}+x_{2})^{D/2+n_{1}-n_{2}}}. \label{G-part1}
\end{equation}
Expanding now the term
\begin{equation}
\frac{1}{(x_{1}+x_{2})^{D/2+n_{1}-n_{2} } } \stackrel{\bullet}{=}
\sum_{n_{3},n_{4}} \phi_{3,4} \frac{x_{1}^{n_{3}} x_{2}^{n_{4}} }
{\Gamma(D/2 + n_{1}-n_{2})} \Delta_{1},
\end{equation}
\noindent
with $\Delta_{1} = \langle{\tfrac{D}{2} + n_{1}-n_{2}+n_{3}+n_{4} \rangle}$,
and replacing in (\ref{G-part1}) yields
\begin{equation}
G \stackrel{\bullet}{=} \frac{(-1)^{-D/2}}{\Gamma(a_{1}) \Gamma(a_{2})}
\sum_{n_{1},\cdots,n_{4}} \phi_{1,2,3,4}
\frac{(P^{2})^{n_{1}} (-m^{2})^{n_{2}} }{\Gamma(\tfrac{D}{2}+n_{1}-n_{2})}
\Delta_{1} \Delta_{2}\Delta_{3},
\end{equation}
\noindent
where
\begin{eqnarray}
\Delta_{1} & = & \langle{\tfrac{D}{2} + n_{1}-n_{2}+n_{3}+n_{4} \rangle},
\nonumber \\
\Delta_{2} & = & \langle{ a_{1} + n_{1} + n_{3} \rangle}, \nonumber \\
\Delta_{3} & = & \langle{ a_{2} + n_{1} + n_{4} \rangle}. \nonumber
\end{eqnarray}
The expression for $G$ contains $4$ indices and the vanishing of the brackets
allows us to express all of them in terms of a single index. We will denote
by $G_{j}$ the expression for $G$ where the index $n_{j}$ is free. \\
\noindent
{\bf The sum $G_{1}$}: in this case the solution of the corresponding linear
system is
\begin{equation}
n_{2}^{*} = \tfrac{D}{2} - a_{1}-a_{2} - n_{1}, \,
n_{3}^{*} = -a_{1} -n_{1}, \, n_{4}^{*} = -a_{2}-n_{1},
\end{equation}
\noindent
and the sum $G_{1}$ becomes
\begin{eqnarray}
G_{1} & = & (-1)^{-D/2} \frac{(-m^{2})^{D/2-a_{1}-a_{2} } } {\Gamma(a_{1})
\Gamma(a_{2}) } \times \nonumber \\
& \times & \sum_{n_{1}=0}^{\infty}
\frac{\Gamma(a_{1}+a_{2}- D/2 + n_{1}) \, \Gamma(a_{1}+n_{1}) \,
\Gamma(a_{2}+n_{1}) }{\Gamma(a_{1}+a_{2}+2n_{1}) }
\frac{ \left( \frac{P^{2}}{m^{2}} \right)^{n_{1}} }{n_{1}!}. \nonumber
\end{eqnarray}
\noindent
This can be expressed as
\begin{equation}
G_{1} = \lambda_{1} (-m^{2})^{D/2-a_{1}-a_{2}}
{_{3}F_{2}}
\left(
\begin{matrix}
a_{1}+a_{2}- \tfrac{D}{2}, & a_{1}, & a_{2} \\
\tfrac{1}{2}(a_{1}+a_{2}+1), & \tfrac{1}{2}(a_{1}+a_{2}) &
\end{matrix}
\Big{|} \frac{P^{2}}{4m^{2}}
\right)
\end{equation}
\noindent
where
\begin{equation}
\lambda_{1} = (-1)^{-D/2} \frac{\Gamma(a_{1}+a_{2} - D/2)}{\Gamma(a_{1}+a_{2})}.
\end{equation}
\medskip
\noindent
{\bf The sum $G_{2}$}: keeping $n_{2}$ as the free index gives
\begin{equation}
n_{1}^{*} = \tfrac{D}{2} - a_{1}-a_{2} - n_{2}, \,
n_{3}^{*} = a_{2} -\tfrac{D}{2}+n_{2}, \, n_{4}^{*} = a_{1}- \tfrac{D}{2}
+n_{2},
\nonumber
\end{equation}
\noindent
which leads to
\begin{equation}
G_{2} = \lambda_{2} (P^{2})^{D/2-a_{1}-a_{2}}
{_{3}F_{2}}
\left(
\begin{matrix}
a_{1}+a_{2}- \tfrac{D}{2}, & \tfrac{1}{2}(1+a_{1}+a_{2}-D), &
\tfrac{1}{2}(2+a_{1}+a_{2}-D) \\
1+a_{1}-\tfrac{D}{2}, & 1 + a_{2}- \tfrac{D}{2} &
\end{matrix}
\Big{|} \frac{4m^{2}}{P^{2}}
\right)
\nonumber
\end{equation}
\noindent
where the prefactor $\lambda_{2}$ is given by
\begin{equation}
\lambda_{2} = (-1)^{-D/2} \frac{\Gamma(a_{1}+a_{2} - D/2)
\Gamma(\tfrac{D}{2}-a_{1}) \Gamma(\tfrac{D}{2} - a_{2}) }
{\Gamma(a_{1}) \Gamma(a_{2}) \Gamma(D - a_{1}-a_{2})}.
\nonumber
\end{equation}
\medskip
\noindent
{\bf The cases $G_{3}$ and $G_{4}$} are computed by a similar procedure.
The results are
\begin{equation}
G_{3} = \lambda_{3} (P^{2})^{-a_{1}} (-m^{2})^{D/2-a_{2}}
{_{3}F_{2}}
\left(
\begin{matrix}
a_{1} , & \tfrac{1}{2}(1+a_{1}-a_{2}), &
\tfrac{1}{2}(2+a_{1}-a_{2}) \\
1+a_{1}-a_{2}, & 1 -a_{2} + \tfrac{D}{2} &
\end{matrix}
\Big{|} \frac{4m^{2}}{P^{2}}
\right)
\nonumber
\end{equation}
\noindent
and
\begin{equation}
G_{4} = \lambda_{4} (P^{2})^{-a_{2}} (-m^{2})^{D/2-a_{1}}
{_{3}F_{2}}
\left(
\begin{matrix}
a_{2} , & \tfrac{1}{2}(1-a_{1}+a_{2}), &
\tfrac{1}{2}(2-a_{1}+a_{2}) \\
1-a_{1}+a_{2}, & 1 -a_{1} + \tfrac{D}{2} &
\end{matrix}
\Big{|} \frac{4m^{2}}{P^{2}}
\right)
\nonumber
\end{equation}
\noindent
where the prefactors $\lambda_{3}$ and $\lambda_{4}$ are given by
\begin{equation}
\lambda_{3} = (-1)^{-D/2} \frac{\Gamma(a_{2} - D/2) }
{\Gamma(a_{2}) } \text{ and }
\lambda_{4} = (-1)^{-D/2} \frac{\Gamma(a_{1} - D/2) }
{\Gamma(a_{1}) }.
\end{equation}
The contributions of these four sums are now classified according to their
region of convergence. This is determined by the parameter $\rho :=
|4m^{2}/P^{2}|$. In the region $\rho > 1$, only the sum $G_{1}$ converges,
therefore $G = G_{1}$ there. In the region $\rho < 1$ the three remaining sums
converge. Therefore, according to Rule \ref{rule-disc}, we have
\begin{equation}
G = \begin{cases}
\begin{matrix}
G_{1} & \quad \text{ for } & \rho > 1, \\
G_{2} + G_{3} + G_{4} & \quad \text{ for } & \rho < 1.
\end{matrix}
\end{cases}
\end{equation}
We have evaluated the Feynman diagram in Figure \ref{figure2} and
expressed its solution in terms of hypergeometric functions that correspond
naturally to the quotient of the two energy scales present in the diagram.
\end{example}
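The two-region result can be spot-checked at a Euclidean-like point where everything converges. Take $a_{1}=a_{2}=1$, $D=3$, and replace $m^{2} \to -1$, $P^{2} \to -1$ (so that $\rho = |4m^{2}/P^{2}| = 4 > 1$ and $G = G_{1}$); dropping the common factor $(-1)^{-D/2}$, the Schwinger double integral can be compared directly with the ${}_{3}F_{2}$ expression for $G_{1}$:

```python
from mpmath import mp, mpf, gamma, hyp3f2, quad, exp, sqrt, log, pi

mp.dps = 15
a1, a2, D = 1, 1, mpf(3)
M2, Q2 = mpf(1), mpf(1)        # M2 = -m^2, Q2 = -P^2 (Euclidean-like point)

# Schwinger representation of the bubble, common (-1)^{-D/2} factor dropped
f = lambda x1, x2: (x1**(a1 - 1) * x2**(a2 - 1) / (x1 + x2)**(D/2)
                    * exp(-M2*(x1 + x2) + Q2*x1*x2/(x1 + x2)))
numeric = quad(f, [0, mp.inf], [0, mp.inf])

# G_1 with the same factor dropped; argument P^2/(4 m^2) = Q2/(4 M2)
closed = (gamma(a1 + a2 - D/2)/gamma(a1 + a2) * M2**(D/2 - a1 - a2)
          * hyp3f2(a1 + a2 - D/2, a1, a2,
                   mpf(a1 + a2 + 1)/2, mpf(a1 + a2)/2, Q2/(4*M2)))
```

At this parameter point the ${}_{3}F_{2}$ collapses (one upper parameter cancels a lower one) to $\sqrt{\pi}\; {}_{2}F_{1}(\tfrac{1}{2},1;\tfrac{3}{2};\tfrac{1}{4}) = \sqrt{\pi}\, \ln 3 \approx 1.9472$, and the double integral reproduces this value.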
\medskip
\begin{example}
The next example shows that the method of brackets succeeds in the
evaluation of very complicated integrals.
We consider a Feynman diagram
with four loops as shown in Figure \ref{figure3}.
{{
\begin{figure}[ht]
\begin{center}
\centerline{\epsfig{file=4loops.eps,width=20em,angle=0}}
\caption{A diagram with four loops}
\label{figure3}
\end{center}
\end{figure}
}}
\medskip
The methods described in \cite{gonzalez-2005} for the
Schwinger representation (\ref{Schwinger-rep}) of this diagram, give
\begin{equation}
\begin{array}{ll}
U= &
x_{1}x_{3}x_{5}x_{7}+x_{1}x_{3}x_{5}x_{8}+x_{1}x_{3}x_{6}x_{7}+x_{1}x_{4}x_{5}x_{7}+x_{2}x_{3}x_{5}x_{7}+
\\
&
x_{1}x_{3}x_{6}x_{8}+x_{1}x_{4}x_{5}x_{8}+x_{1}x_{4}x_{6}x_{7}+x_{2}x_{3}x_{5}x_{8}+x_{2}x_{3}x_{6}x_{7}+
\\
&
x_{2}x_{4}x_{5}x_{7}+x_{1}x_{3}x_{7}x_{8}+x_{1}x_{4}x_{6}x_{8}+x_{1}x_{5}x_{6}x_{7}+x_{2}x_{3}x_{6}x_{8}+
\\
&
x_{2}x_{4}x_{5}x_{8}+x_{2}x_{4}x_{6}x_{7}+x_{3}x_{4}x_{5}x_{7}+x_{1}x_{4}x_{7}x_{8}+x_{1}x_{5}x_{6}x_{8}+
\\
&
x_{2}x_{3}x_{7}x_{8}+x_{2}x_{4}x_{6}x_{8}+x_{2}x_{5}x_{6}x_{7}+x_{3}x_{4}x_{5}x_{8}+x_{3}x_{4}x_{6}x_{7}+
\\
&
x_{1}x_{5}x_{7}x_{8}+x_{2}x_{4}x_{7}x_{8}+x_{2}x_{5}x_{6}x_{8}+x_{3}x_{4}x_{6}x_{8}+x_{3}x_{5}x_{6}x_{7}+
\\
&
x_{2}x_{5}x_{7}x_{8}+x_{3}x_{4}x_{7}x_{8}+x_{3}x_{5}x_{6}x_{8}+x_{3}x_{5}x_{7}x_{8}%
\end{array}%
\end{equation}%
\noindent
and for the function $F$ in (\ref{Schwinger-rep}):
\begin{equation}
\begin{array}{ll}
F= &
(x_{1}x_{2}x_{3}x_{5}x_{7}+x_{1}x_{2}x_{3}x_{5}x_{8}+x_{1}x_{2}x_{3}x_{6}x_{7}+x_{1}x_{2}x_{4}x_{5}x_{7}+
\\
&
x_{1}x_{2}x_{3}x_{6}x_{8}+x_{1}x_{2}x_{4}x_{5}x_{8}+x_{1}x_{2}x_{4}x_{6}x_{7}+x_{1}x_{3}x_{4}x_{5}x_{7}+
\\
&
x_{1}x_{2}x_{3}x_{7}x_{8}+x_{1}x_{2}x_{4}x_{6}x_{8}+x_{1}x_{2}x_{5}x_{6}x_{7}+x_{1}x_{3}x_{4}x_{5}x_{8}+
\\
&
x_{1}x_{3}x_{4}x_{6}x_{7}+x_{1}x_{2}x_{4}x_{7}x_{8}+x_{1}x_{2}x_{5}x_{6}x_{8}+x_{1}x_{3}x_{4}x_{6}x_{8}+
\\
&
x_{1}x_{3}x_{5}x_{6}x_{7}+x_{1}x_{2}x_{5}x_{7}x_{8}+x_{1}x_{3}x_{4}x_{7}x_{8}+x_{1}x_{3}x_{5}x_{6}x_{8}+
\\
& x_{1}x_{3}x_{5}x_{7}x_{8})\;P^{2}.%
\end{array}%
\end{equation}%
The large number of terms appearing in the expressions for $U$ and $F$ ($34$
and $21$, respectively) makes it almost impossible to apply the method of
brackets without an a priori factorization of these polynomials. This
factorization minimizes the number of sums and maximizes the number of
brackets. In this example, the optimal factorization is given by
\begin{equation}
\begin{array}{l}
F=x_{1}f_{7}\;P^{2}, \\
\\
U=x_{1}f_{6}+f_{7},%
\end{array}%
\end{equation}%
where the functions $f_{i}$ are given by:
\begin{equation}
\begin{array}{l}
f_{7}=(x_{2}f_{6}+f_{5}), \\
f_{6}=x_{3}f_{4}+(x_{4}f_{4}+f_{3}), \\
f_{5}=x_{3}(x_{4}f_{4}+f_{3}), \\
f_{4}=x_{5}f_{2}+(x_{6}f_{2}+f_{1}), \\
f_{3}=x_{5}(x_{6}f_{2}+f_{1}), \\
f_{2}=(x_{7}+x_{8}), \\
f_{1}=x_{7}x_{8}.%
\end{array}%
\end{equation}
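As a sanity check on this factorization (our own verification aid, not part of the method itself), one can compare the $34$-term expansion of $U$ with the nested form $x_{1}f_{6}+f_{7}$ at random rational points:

```python
from fractions import Fraction
from random import randint

# The 34 monomials of U, each recorded by its variable indices.
U_TERMS = [
    (1,3,5,7),(1,3,5,8),(1,3,6,7),(1,4,5,7),(2,3,5,7),
    (1,3,6,8),(1,4,5,8),(1,4,6,7),(2,3,5,8),(2,3,6,7),
    (2,4,5,7),(1,3,7,8),(1,4,6,8),(1,5,6,7),(2,3,6,8),
    (2,4,5,8),(2,4,6,7),(3,4,5,7),(1,4,7,8),(1,5,6,8),
    (2,3,7,8),(2,4,6,8),(2,5,6,7),(3,4,5,8),(3,4,6,7),
    (1,5,7,8),(2,4,7,8),(2,5,6,8),(3,4,6,8),(3,5,6,7),
    (2,5,7,8),(3,4,7,8),(3,5,6,8),(3,5,7,8),
]

def U_expanded(x):
    return sum(x[i]*x[j]*x[k]*x[l] for (i, j, k, l) in U_TERMS)

def U_factored(x):
    # Nested polynomials f_1, ..., f_7 from the optimal factorization.
    f2 = x[7] + x[8]
    f1 = x[7]*x[8]
    f4 = x[5]*f2 + (x[6]*f2 + f1)
    f3 = x[5]*(x[6]*f2 + f1)
    f6 = x[3]*f4 + (x[4]*f4 + f3)
    f5 = x[3]*(x[4]*f4 + f3)
    f7 = x[2]*f6 + f5
    return x[1]*f6 + f7

for _ in range(50):
    # Random rational point; index 0 is unused padding so x[1] = x_1.
    x = [Fraction(0)] + [Fraction(randint(1, 100), randint(1, 100)) for _ in range(8)]
    assert U_expanded(x) == U_factored(x)
print("factorization of U verified")
```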
To analyze the diagram considered here, we start with the
parametric representation
\begin{equation}
G=\dfrac{(-1)^{-D/2}}{\prod\limits_{j=1}^{8}\Gamma (a_{j})}
\int\limits_{0}^{\infty } \;\frac{\exp \left( -\dfrac{
x_{1}f_{7}}{x_{1}f_{6}+f_{7}}p_{1}^{2}\right) }{\left(
x_{1}f_{6}+f_{7}\right) ^{\frac{D}{2}}} {\mathbf{dx}},
\end{equation}
\noindent
and expand the exponential function first. A systematic expansion
associated to the polynomials $f_{i}$ leads to the order
\begin{equation}
U\longrightarrow f_{7}\longrightarrow f_{6}\longrightarrow
f_{5}\longrightarrow f_{4}\longrightarrow f_{3}\longrightarrow f_{2},
\end{equation}%
\noindent
that yields the bracket series
\begin{equation}
G \stackrel{\bullet}{=} \dfrac{(-1)^{-D/2}}{\prod\limits_{j=1}^{8}\Gamma (a_{j})}%
\sum\limits_{n_{1},..,n_{15}}\phi _{n_{1},..,n_{15}}\;\dfrac{(P^{2})^{n_{1}}%
}{\Gamma (\frac{D}{2}+n_{1})}\;\Omega _{\left\{ n\right\}
}\prod\limits_{j=1}^{15}\Delta _{j},
\end{equation}%
where we have defined the factor
\begin{equation}
\Omega _{\left\{ n\right\} }=\dfrac{1}{\Gamma (-n_{1}-n_{3})\Gamma
(-n_{2}-n_{4})\Gamma (-n_{5}-n_{7})\Gamma (-n_{6}-n_{8})\Gamma
(-n_{9}-n_{11})\Gamma (-n_{10}-n_{12})}, \nonumber
\end{equation}%
and the corresponding brackets by
\begin{equation}
\begin{array}{lll}
\Delta _{1}=\left\langle \frac{D}{2}+n_{1}+n_{2}+n_{3}\right\rangle , & &
\Delta _{9}=\left\langle a_{2}+n_{4}\right\rangle , \\
\Delta _{2}=\left\langle -n_{1}-n_{3}+n_{4}+n_{5}\right\rangle , & & \Delta
_{10}=\left\langle a_{3}+n_{5}+n_{6}\right\rangle , \\
\Delta _{3}=\left\langle -n_{2}-n_{4}+n_{6}+n_{7}\right\rangle , & & \Delta
_{11}=\left\langle a_{4}+n_{8}\right\rangle , \\
\Delta _{4}=\left\langle -n_{5}-n_{7}+n_{8}+n_{9}\right\rangle , & & \Delta
_{12}=\left\langle a_{5}+n_{9}+n_{10}\right\rangle , \\
\Delta _{5}=\left\langle -n_{6}-n_{8}+n_{10}+n_{11}\right\rangle , & &
\Delta _{13}=\left\langle a_{6}+n_{12}\right\rangle , \\
\Delta _{6}=\left\langle -n_{9}-n_{11}+n_{12}+n_{13}\right\rangle , & &
\Delta _{14}=\left\langle a_{7}+n_{13}+n_{14}\right\rangle , \\
\Delta _{7}=\left\langle -n_{10}-n_{12}+n_{14}+n_{15}\right\rangle , & &
\Delta _{15}=\left\langle a_{8}+n_{13}+n_{15}\right\rangle , \\
\Delta _{8}=\left\langle a_{1}+n_{1}+n_{2}\right\rangle . & &
\end{array}%
\end{equation}%
There is a unique way to evaluate the series: the number of indices is
the same as the number of brackets. Solving the corresponding linear system
leads to
\begin{equation}
G=(-1)^{-D/2}\frac{(P^{2})^{n_{1}^{*}}}{\Gamma (D/2+n_{1}^{*})}
\;\Omega _{\left\{
n^{*}\right\} }\;\frac{\prod\limits_{j=1}^{15}\Gamma (-n_{j}^{*})}{%
\prod\limits_{j=1}^{8}\Gamma (a_{j})},
\end{equation}%
where the values $n_{i}^{*}$ are given by
\begin{equation}
\nonumber
\begin{array}{lll}
n_{1}^{*}=2D-a_{1}-a_{2}-a_{3}-a_{4}-a_{5}-a_{6}-a_{7}-a_{8}, & &
n_{9}^{*}=D-a_{5}-a_{6}-a_{7}-a_{8}, \\
n_{2}^{*}=a_{2}+a_{3}+a_{4}+a_{5}+a_{6}+a_{7}+a_{8}-2D, & &
n_{10}^{*}=a_{6}+a_{7}+a_{8}-D, \\
n_{3}^{*}=a_{1}-\frac{D}{2}, & & n_{11}^{*}=a_{5}-\frac{D}{2}, \\
n_{4}^{*}=-a_{2}, & & n_{12}^{*}=-a_{6}, \\
n_{5}^{*}=\frac{3D}{2}-a_{3}-a_{4}-a_{5}-a_{6}-a_{7}-a_{8}, & & n_{13}^{*}=\frac{D}{%
2}-a_{7}-a_{8}, \\
n_{6}^{*}=a_{4}+a_{5}+a_{6}+a_{7}+a_{8}-\frac{3D}{2}, & & n_{14}^{*}=a_{8}-\frac{D}{%
2}, \\
n_{7}^{*}=a_{3}-\frac{D}{2}, & & n_{15}^{*}=a_{7}-\frac{D}{2}, \\
n_{8}^{*}=-a_{4}. & &
\end{array}%
\end{equation}
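These closed forms can be checked mechanically: the fifteen brackets give a square linear system for $n_{1},\dots,n_{15}$. The following Python sketch (an illustration we added, using exact rational arithmetic) solves the system for random values of $D$ and the $a_{j}$ and compares a few entries with the stated $n_{i}^{*}$:

```python
from fractions import Fraction
from random import randint

def solve(A, b):
    """Gaussian elimination over the rationals; A is n x n, b has length n."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [u - M[r][c]*v for u, v in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

D = Fraction(randint(1, 20), randint(1, 5))
a = [Fraction(0)] + [Fraction(randint(1, 20), randint(1, 5)) for _ in range(8)]

# Each bracket <coeffs . n + s> = 0 contributes one row: coefficients on
# n_1..n_15 and right-hand side -s.
def row(coeffs, s):
    v = [Fraction(0)] * 15
    for idx, c in coeffs:
        v[idx - 1] = Fraction(c)
    return v, -s

eqs = [
    row([(1,1),(2,1),(3,1)], D/2),               # Delta_1
    row([(1,-1),(3,-1),(4,1),(5,1)], 0),         # Delta_2
    row([(2,-1),(4,-1),(6,1),(7,1)], 0),         # Delta_3
    row([(5,-1),(7,-1),(8,1),(9,1)], 0),         # Delta_4
    row([(6,-1),(8,-1),(10,1),(11,1)], 0),       # Delta_5
    row([(9,-1),(11,-1),(12,1),(13,1)], 0),      # Delta_6
    row([(10,-1),(12,-1),(14,1),(15,1)], 0),     # Delta_7
    row([(1,1),(2,1)], a[1]),                    # Delta_8
    row([(4,1)], a[2]),                          # Delta_9
    row([(5,1),(6,1)], a[3]),                    # Delta_10
    row([(8,1)], a[4]),                          # Delta_11
    row([(9,1),(10,1)], a[5]),                   # Delta_12
    row([(12,1)], a[6]),                         # Delta_13
    row([(13,1),(14,1)], a[7]),                  # Delta_14
    row([(13,1),(15,1)], a[8]),                  # Delta_15
]
nstar = solve([e[0] for e in eqs], [e[1] for e in eqs])

assert nstar[0] == 2*D - sum(a[1:])              # n_1^*
assert nstar[4] == 3*D/2 - sum(a[3:])            # n_5^*
assert nstar[12] == D/2 - a[7] - a[8]            # n_13^*
assert nstar[3] == -a[2] and nstar[7] == -a[4]   # n_4^*, n_8^*
print("bracket system solution matches the closed forms")
```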
\end{example}
\medskip
\begin{example}
The last example discussed in this paper gives the value of a Feynman
diagram as a hypergeometric function of two variables. The diagram
shown in Figure \ref{figure2} contains
two external lines and internal lines (propagators) with distinct
masses. The same diagram with equal masses was described in Example
\ref{example-bubble}. The integral
representation of this diagram in the momentum space is given by
\begin{equation}
G=\int \frac{d^{D}Q}{i\pi ^{D/2}}\frac{1}{(Q^{2}-m_{1}^{2})^{a_{1}}\left(
(P-Q)^{2}-m_{2}^{2}\right) ^{a_{2}}}.
\end{equation}%
\noindent
On the other hand, the parametric representation of Schwinger is
\begin{equation}
G=\dfrac{(-1)^{-\frac{D}{2}}}{\prod\limits_{j=1}^{2}\Gamma (a_{j})}%
\int\limits_{0}^{\infty } \;\frac{\exp \left(
x_{1}m_{1}^{2}\right) \exp \left( x_{2}m_{2}^{2}\right) \exp \left( -\frac{%
x_{1}x_{2}}{x_{1}+x_{2}}P^{2}\right) }{\left( x_{1}+x_{2}\right) ^{\frac{D}{2%
}}} \, {\bf dx}.
\end{equation}%
In order to find the bracket series associated to this integral, we first
expand the exponentials
\begin{equation}
G \stackrel{\bullet}{=} \dfrac{(-1)^{-\frac{D}{2}}}{\prod\limits_{j=1}^{2}\Gamma (a_{j})}%
\sum\limits_{n_{1},n_{2},n_{3}}\phi _{n_{1},n_{2},n_{3}}\;\left(
-m_{1}^{2}\right) ^{n_{1}}\left( -m_{2}^{2}\right) ^{n_{2}}\left(
P^{2}\right) ^{n_{3}}\int \;\frac{%
x_{1}^{n_{1}+n_{3}}x_{2}^{n_{2}+n_{3}}}{\left( x_{1}+x_{2}\right) ^{\frac{D}{%
2}+n_{3}}} \, {\mathbf{dx}} . \label{k}
\nonumber
\end{equation}%
\noindent
and then the denominator
\begin{equation}
\nonumber
\frac{1}{\left( x_{1}+x_{2}\right) ^{\frac{D}{2}+n_{3}}} \stackrel{\bullet}{=}
\sum\limits_{n_{4},n_{5}}\phi _{n_{4},n_{5}}\;\frac{%
x_{1}^{n_{4}}x_{2}^{n_{5}}}{\Gamma (\frac{D}{2}+n_{3})}\left\langle \tfrac{D%
}{2}+n_{3}+n_{4}+n_{5}\right\rangle.
\end{equation}%
\noindent
We obtain
\begin{eqnarray}
\nonumber
G & \stackrel{\bullet}{=} & \frac{(-1)^{-\frac{D}{2}}}{\prod\limits_{j=1}^{2}\Gamma (a_{j})}%
\sum\limits_{n_{1},..,n_{5}}\phi _{n_{1},..,n_{5}}\;\frac{\left(
-m_{1}^{2}\right) ^{n_{1}}\left( -m_{2}^{2}\right) ^{n_{2}}\left(
P^{2}\right) ^{n_{3}}}{\Gamma (\frac{D}{2}+n_{3})}\Delta _{1} \nonumber \\
& \times &
\int x_{1}^{a_{1}+n_{1}+n_{3}+n_{4}-1} \, dx_{1}
\, \, \int x_{2}^{a_{2}+n_{2}+n_{3}+n_{5}-1} \, dx_{2}.
\nonumber
\end{eqnarray}%
\noindent
Using the rules for transforming integrals into brackets yields
\begin{equation}
G \stackrel{\bullet}{=} \frac{(-1)^{-\frac{D}{2}}}{\prod\limits_{j=1}^{2}\Gamma
(a_{j})}\sum\limits_{n_{1},..,n_{5}}\phi _{n_{1},..,n_{5}} \frac{\left(
-m_{1}^{2}\right) ^{n_{1}}\left( -m_{2}^{2}\right) ^{n_{2}}\left(
P^{2}\right) ^{n_{3}}}{\Gamma (\frac{D}{2}+n_{3})}
\Delta _{1} \Delta_{2} \Delta_{3},
\end{equation}%
\noindent
with brackets defined by
\begin{equation}
\begin{array}{l}
\Delta _{1}=\left\langle \frac{D}{2}+n_{3}+n_{4}+n_{5}\right\rangle, \\
\\
\Delta _{2}=\left\langle a_{1}+n_{1}+n_{3}+n_{4}\right\rangle, \\
\\
\Delta _{3}=\left\langle a_{2}+n_{2}+n_{3}+n_{5}\right\rangle.%
\end{array}%
\label{f34}
\end{equation}%
\noindent
We have to choose two free indices from $n_{1}, \ldots, n_{5}$. The result
is a hypergeometric function of multiplicity two. There are
$10$ such choices, and the brackets in (\ref{f34}) produce the linear
system
\begin{equation}
\begin{array}{l}
0=\frac{D}{2}+n_{3}+n_{4}+n_{5} \\
\\
0=a_{1}+n_{1}+n_{3}+n_{4} \\
\\
0=a_{2}+n_{2}+n_{3}+n_{5}.%
\end{array}%
\end{equation}
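Not every choice of free indices leads to an invertible $3\times 3$ system for the remaining indices. A short enumeration (our own illustration) shows that $8$ of the $10$ choices are solvable, the pairs $\{n_{1},n_{5}\}$ and $\{n_{2},n_{4}\}$ giving singular systems:

```python
from itertools import combinations

# Coefficients of n_1..n_5 in the three bracket equations above.
ROWS = [
    [0, 0, 1, 1, 1],   # D/2 + n3 + n4 + n5
    [1, 0, 1, 1, 0],   # a1 + n1 + n3 + n4
    [0, 1, 1, 0, 1],   # a2 + n2 + n3 + n5
]

def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

solvable = []
for free in combinations(range(1, 6), 2):
    kept = [j for j in range(1, 6) if j not in free]
    m = [[r[j - 1] for j in kept] for r in ROWS]   # columns of the bound indices
    if det3(m) != 0:
        solvable.append(free)

print(len(solvable), "of 10 index pairs give an invertible system:", solvable)
```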
We denote by $G_{i,j}$ the solution of this system
with free indices $n_{i}$ and $n_{j}$. The
series appearing in the solution to the Feynman diagram
in Figure \ref{figure2} is expressed in terms
of the Appell function $F_{4}$, defined by
\begin{equation}
F_{4}\left( \left.
\begin{array}{ccc}
\alpha & & \beta \\
& & \\
\gamma & & \delta %
\end{array}
\right\vert \, x, \, y \right)
=\sum\limits_{m,n=0}^{\infty }\frac{%
(\alpha )_{m+n}(\beta )_{m+n}}{(\gamma )_{m}(\delta )_{n}}\frac{x^{m}}{m!}%
\dfrac{y^{n}}{n!}.
\end{equation}
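For numerical work one can evaluate $F_{4}$ by truncating the double series (the sketch below is our own aid; the series converges for $\sqrt{|x|}+\sqrt{|y|}<1$). The code also checks the classical reduction $F_{4}(\alpha,\beta;\gamma,\delta;x,0)={}_{2}F_{1}(\alpha,\beta;\gamma;x)$:

```python
from math import factorial

def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1)."""
    p = 1.0
    for k in range(n):
        p *= a + k
    return p

def appell_f4(alpha, beta, gamma, delta, x, y, terms=40):
    """Truncated double series for Appell's F4 (small |x|, |y|)."""
    return sum(poch(alpha, m + n) * poch(beta, m + n)
               / (poch(gamma, m) * poch(delta, n))
               * x**m / factorial(m) * y**n / factorial(n)
               for m in range(terms) for n in range(terms))

def gauss_2f1(a, b, c, x, terms=40):
    """Truncated Gauss hypergeometric series 2F1."""
    return sum(poch(a, m) * poch(b, m) / poch(c, m) * x**m / factorial(m)
               for m in range(terms))

# At y = 0 only the n = 0 terms of F4 survive, which gives 2F1.
lhs = appell_f4(0.5, 1.2, 2.0, 3.0, 0.1, 0.0)
rhs = gauss_2f1(0.5, 1.2, 2.0, 0.1)
assert abs(lhs - rhs) < 1e-12
```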
Following a procedure similar to the one described in the previous example,
we obtain the explicit value of the integral $G$ in terms of the functions
$G_{i,j}$:
\begin{equation}
G =
\begin{cases}
\begin{matrix}
G_{1,2}+G_{1,4}+G_{2,5} & \text{ when } &
m_{1}^{2},m_{2}^{2}<P^{2} \\
G_{1,3}+G_{3,5} & \text{ when } &
m_{1}^{2},P^{2}<m_{2}^{2} \\
G_{2,3}+G_{3,4}
& \text{ when } &m_{2}^{2},P^{2}<m_{1}^{2}.
\end{matrix}
\end{cases}
\nonumber
\end{equation}
These in turn are expressed in terms of the Appell function: \\
{\small{
\begin{eqnarray}
\nonumber
G_{1,2} & = & (-1)^{-\frac{D}{2}}\frac{\Gamma (a_{1}+a_{2}-\frac{D}{2})\Gamma (%
\frac{D}{2}-a_{1})\Gamma (\frac{D}{2}-a_{2})}{\Gamma (a_{1})\Gamma
(a_{2})\Gamma (D-a_{1}-a_{2})}\left( P^{2}\right) ^{\frac{D}{2}-a_{1}-a_{2}}
\\
& & \nonumber \\
& \times & F_{4}\left( \left.
\begin{array}{ccc}
1+a_{1}+a_{2}-D & & a_{1}+a_{2}-\frac{D}{2} \\
& & \\
1+a_{1}-\frac{D}{2} & & 1+a_{2}-\frac{D}{2}%
\end{array}
\right\vert \frac{m_{1}^{2}}{P^{2}},\frac{m_{2}^{2}}{P^{2}}\right)
\nonumber \\
& & \nonumber \\
\nonumber
G_{1,4} & = & (-1)^{-\frac{D}{2}}\frac{\Gamma (a_{2}-\frac{D}{2})}{\Gamma (a_{2})}%
\left( -m_{2}^{2}\right) ^{\frac{D}{2}-a_{2}}\left( P^{2}\right)
^{-a_{1}} \\
& \times & \;F_{4}\left( \left.
\begin{array}{ccc}
1+a_{1}-\frac{D}{2} & & a_{1} \\
& & \\
1+a_{1}-\frac{D}{2} & & 1-a_{2}+\frac{D}{2}%
\end{array}%
\right\vert \frac{m_{1}^{2}}{P^{2}},\frac{m_{2}^{2}}{P^{2}}\right)
\nonumber \\
& & \nonumber \\
\nonumber
G_{2,5} & = & (-1)^{-\frac{D}{2}}\frac{\Gamma (a_{1}-\frac{D}{2})}{\Gamma (a_{1})}%
\left( -m_{1}^{2}\right) ^{\frac{D}{2}-a_{1}}\left( P^{2}\right)
^{-a_{2}} \\
& \times & \;F_{4}\left( \left.
\begin{array}{ccc}
1+a_{2}-\frac{D}{2} & & a_{2} \\
& & \\
1-a_{1}+\frac{D}{2} & & 1+a_{2}-\frac{D}{2}%
\end{array}%
\right\vert \frac{m_{1}^{2}}{P^{2}},\frac{m_{2}^{2}}{P^{2}}\right)
\nonumber \\
& & \nonumber \\
\nonumber
G_{1,3} & = & (-1)^{-\frac{D}{2}}\frac{\Gamma (a_{1}+a_{2}-\frac{D}{2})\Gamma (%
\frac{D}{2}-a_{1})}{\Gamma (a_{2})\Gamma (\frac{D}{2})}\left(
-m_{2}^{2}\right) ^{\frac{D}{2}-a_{1}-a_{2}} \\
& \times & \;F_{4}\left( \left.
\begin{array}{ccc}
a_{1}+a_{2}-\frac{D}{2} & & a_{1} \\
& & \\
\frac{D}{2} & & 1+a_{1}-\frac{D}{2}%
\end{array}%
\right\vert \frac{P^{2}}{m_{2}^{2}},\frac{m_{1}^{2}}{m_{2}^{2}}\right)
\nonumber \\
& & \nonumber \\
\nonumber
G_{3,5} & = & (-1)^{-\frac{D}{2}}\frac{\Gamma (a_{1}-\frac{D}{2})}{\Gamma (a_{1})}%
\left( -m_{1}^{2}\right) ^{\frac{D}{2}-a_{1}}\left( -m_{2}^{2}\right)
^{-a_{2}} \\
& \times & \;F_{4}\left( \left.
\begin{array}{ccc}
\frac{D}{2} & & a_{2} \\
& & \\
\frac{D}{2} & & 1-a_{1}+\frac{D}{2}%
\end{array}%
\right\vert \frac{P^{2}}{m_{2}^{2}},\frac{m_{1}^{2}}{m_{2}^{2}}\right)
\nonumber \\
& & \nonumber \\
\nonumber
G_{2,3} & = & (-1)^{-\frac{D}{2}}\frac{\Gamma (a_{1}+a_{2}-\frac{D}{2})\Gamma (%
\frac{D}{2}-a_{2})}{\Gamma (a_{1})\Gamma (\frac{D}{2})}\left(
-m_{1}^{2}\right) ^{\frac{D}{2}-a_{1}-a_{2}} \\
& \times &
\;F_{4}\left( \left.
\begin{array}{ccc}
a_{1}+a_{2}-\frac{D}{2} & & a_{2} \\
& & \\
\frac{D}{2} & & 1+a_{2}-\frac{D}{2}%
\end{array}%
\right\vert \frac{P^{2}}{m_{1}^{2}},\frac{m_{2}^{2}}{m_{1}^{2}}\right)
\nonumber \\
& & \nonumber \\
\nonumber
G_{3,4} & = &
(-1)^{-\frac{D}{2}}\frac{\Gamma (a_{2}-\frac{D}{2})}{\Gamma (a_{2})}%
\left( -m_{2}^{2}\right) ^{\frac{D}{2}-a_{2}}\left( -m_{1}^{2}\right)
^{-a_{1}} \\
& \times & \;F_{4}\left( \left.
\begin{array}{ccc}
\frac{D}{2} & & a_{1} \\
& & \\
\frac{D}{2} & & 1-a_{2}+\frac{D}{2}%
\end{array}%
\right\vert \frac{P^{2}}{m_{1}^{2}},\frac{m_{2}^{2}}{m_{1}^{2}}\right).
\nonumber
\end{eqnarray}
}}
\end{example}
\section{Conclusions and future work} \label{sec-conclusions}
\setcounter{equation}{0}
The method of brackets provides a very effective procedure to evaluate
definite integrals over the interval $[0, \infty)$. The method is based on
a heuristic list of rules for the bracket series associated to such
integrals. In particular, we have provided a variety of examples that
illustrate the power of this method. A rigorous validation of these rules
as well as a systematic study of integrals from
Feynman diagrams is in progress. \\
\noindent
{\bf Acknowledgments}. The authors wish to thank R. Crandall for discussions
on an earlier version of the paper. \\
% End of ``Definite integrals by the method of brackets. Part 1'' (arXiv:0812.3356).
% ``Sets that contain their circle centers'' (arXiv:math/0703860).
\section{Introduction}
In 2001, Stan Wagon posed an interesting problem on the Macalester College Problem of the Week website \cite{PoW}:
\begin{problem}
Let $\S$ be a finite set of points in the plane in general position, that is, no three points of $\S$ lie on a line. Show that if there are at least three points in $\S$, then $\S$ cannot ``contain its circle centers''.
\end{problem}
The terminology needs a brief explanation. A set $\S\subset{\mathbb R}^2$ is said to {\it contain its circle centers\/} if, for any three non-collinear points from $\S$, the center of the circle through those points is always in $\S$; in other words, if $\S$ contains the vertices of any triangle, then it also contains the triangle's circumcenter.
Several solutions were quickly submitted, and two (outlined in Exercises \ref{minex} and \ref{iterex} below) were posted on the website, along with some discussion. An especially interesting feature of this pair of solutions was that one did not use the assumption that no three points of $\S$ lie on a line, while the other did not use the assumption that $\S$ is finite!
The discussion naturally turned at that point to how much could be said about sets that contain their circle centers. The entire plane $\S={\mathbb R}^2$ is a perfectly reasonable example of such a set; less trivially, the set $\S={\mathbb Q}^2$, consisting of all points in the plane with rational numbers as coordinates, contains its circle centers (see Exercise \ref{q2ex} below). Are there lots of sets with this property, or only a few?
It's clearly cheating to put {\em all} the points of $\S$ on a single line, or else there are no circle centers to form. This motivates the following definition:
\begin{definition}
A {\em circle-center set} is a subset of ${\mathbb R}^2$ that is not a subset of any line and that contains its circle centers.
\end{definition}
John Guilford conjectured that any circle-center set\ must be unbounded. Working on this conjecture eventually led to an even stronger discovery: {\em there is essentially only one circle-center set}. The reader might well be skeptical at this point, especially given the fact that we have already mentioned {\em two} circle-center sets, namely ${\mathbb R}^2$ and ${\mathbb Q}^2$!
To pin down what we mean by ``essentially'', we need to review some basic topological terms about sets in the plane. An {\em open disk} is simply the interior of any circle. A set $\S$ is {\em dense} if every open disk contains at least one of the set's points, or equivalently if for any point $P$ in ${\mathbb R}^2$, there is a sequence of points $\{P_1,P_2,P_3,\dots\}$ from $\S$ that converges to~$P$. A {\em closed set} is a set $\S$ with the property that, for any sequence of points $\{P_1,P_2,P_3,\dots\}$ from $\S$ whose limit exists, the limit itself must be a point in~$\S$. Finally, the {\em closure} $\overline\S$ of $\S$ is the set of all points we can obtain by taking limits of sequences of points from $\S$, or equivalently the smallest closed set containing $\S$. (See \cite[Chapter 1]{DR} for information about these topological concepts in more abstract spaces.)
Using this terminology, the first version of our theorem is:
\begin{theorem}
Every circle-center set\ $\S$ must be dense in ${\mathbb R}^2$.
\label{dense.thm}
\end{theorem}
The circle-center set\ ${\mathbb Q}^2$ is indeed dense in ${\mathbb R}^2$: every open disk contains a point both of whose coordinates are rational.
We can rephrase this theorem slightly by pointing out that if a set $\S$ contains its circle centers, then its closure $\overline\S$ also contains its circle centers (see Exercise \ref{closureex} below). Since the closure of any dense subset of ${\mathbb R}^2$ is ${\mathbb R}^2$ itself, the theorem above is actually equivalent to the following version:
\begin{theorem}
${\mathbb R}^2$ is the only closed circle-center set.
\label{only1.thm}
\end{theorem}
This is the sense in which there is ``only one'' set containing its circle centers. The example ${\mathbb Q}^2$ does not contradict this statement, since it is not closed, and indeed the closure of ${\mathbb Q}^2$ is all of ${\mathbb R}^2$.
In the following exercises and the rest of this paper, we use the notation $C(P,Q,R)$ to denote the circumcenter of $\triangle PQR$ and $r(P,Q,R)$ to denote its circumradius, that is, $C(P,Q,R)$ and $r(P,Q,R)$ are the center and radius, respectively, of the circle going through the points $P$, $Q$, and~$R$.
\begin{exercise}\label{minex}
Given three non-collinear points $D,E,F$, let $G = C(D,E,F)$ and $H=C(D,E,G)$. Show that one of $r(D,E,G)$, $r(D,F,G)$, and $r(E,F,G)$ is less than $r(D,E,F)$, unless $\triangle DEF$ is equilateral, in which case $r(D,G,H) < r(D,E,F)$. Conclude that no circle-center set\ can be finite by considering, for a potential counterexample $\S$, the minimal radius of a circle through three points of $\S$.
\end{exercise}
\begin{exercise}\label{iterex}
Given three non-collinear points $D,E,F$, let $G = C(D,E,F)$, $H=C(D,E,G)$, and $I=C(D,E,H)$. Show that either one of $G$, $H$, or $I$ lies on the line $DE$, or else the three points $G$, $H$, and $I$ are collinear. Furthermore, show that if $G$, $H$, and $I$ are not distinct, then $\triangle DGH$ is equilateral and $C(D,G,H)$ lies on the line $DE$. Conclude that no circle-center set\ can be in general position.
\end{exercise}
\begin{exercise}\label{q2ex}
Suppose that both endpoints of a line segment are in ${\mathbb Q}^2$. Show that the perpendicular bisector of the line segment has an equation that can be written in the form $ax+by+c=0$ with $a$, $b$, and $c$ rational. Conclude that if $D$, $E$, and $F$ are non-collinear points in ${\mathbb Q}^2$, then the center of the circle through $D$, $E$, and $F$ is also in ${\mathbb Q}^2$.
\end{exercise}
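The computation behind Exercise \ref{q2ex} can also be checked in exact rational arithmetic. The following Python sketch (our own illustration; the circumcenter formula is the standard one obtained by intersecting two perpendicular bisectors) confirms that a triangle with vertices in ${\mathbb Q}^2$ has its circumcenter in ${\mathbb Q}^2$:

```python
from fractions import Fraction as F

def circumcenter(P, Q, R):
    """Center of the circle through three non-collinear points (exact)."""
    (px, py), (qx, qy), (rx, ry) = P, Q, R
    d = 2 * (px*(qy - ry) + qx*(ry - py) + rx*(py - qy))
    if d == 0:
        raise ValueError("points are collinear")
    ux = ((px*px + py*py)*(qy - ry) + (qx*qx + qy*qy)*(ry - py)
          + (rx*rx + ry*ry)*(py - qy)) / d
    uy = ((px*px + py*py)*(rx - qx) + (qx*qx + qy*qy)*(px - rx)
          + (rx*rx + ry*ry)*(qx - px)) / d
    return (ux, uy)

def dist2(A, B):
    return (A[0] - B[0])**2 + (A[1] - B[1])**2

# A triangle with rational vertices, chosen arbitrarily.
D_, E_, F_ = (F(0), F(1)), (F(1, 2), F(-1, 3)), (F(-2, 7), F(3, 5))
C = circumcenter(D_, E_, F_)
assert isinstance(C[0], F) and isinstance(C[1], F)   # the center is rational
assert dist2(C, D_) == dist2(C, E_) == dist2(C, F_)  # and equidistant
```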
\begin{exercise}\label{closureex}
Show that the location of the circumcenter of a triangle depends continuously on the positions of the triangle's vertices. Conclude that the circumcenter $C(\lim P_n,\lim Q_n,\lim R_n)$ of three limit points is the same as the limiting circumcenter $\lim C(P_n,Q_n,R_n)$, and so the closure of a circle-center set\ is again a circle-center set.
\end{exercise}
\section{First dimension first}
\label{cotsec}
Before we attack the problem stated in the introduction, let's first consider a ``one-dimensional'' version of the problem. Note that every circle-center set\ contains isosceles triangles: if $M$, $N$, and $P$ are points in a circle-center set, then $Q=C(M,N,P)$ is also in the circle-center set, and each of $\triangle QMN$, $\triangle QNP$, and $\triangle QPM$ is isosceles. We can scale and rotate triangles freely without changing the relative positions of their circumcenters, so we don't lose generality by looking at the specific case of an isosceles triangle whose base runs vertically from $M=(0,1)$ to $N=(0,-1)$.
In this section we consider only circumcenters of isosceles triangles with $M$ and $N$ as vertices. Note that the third vertices of these triangles, and their circumcenters, all lie on the $x$-axis, and so this really is a ``one-dimensional'' version of the original circle-center problem. So let's look closely at how the circumcenter of an isosceles triangle relates to the dimensions of that triangle.
\newtheorem*{CT}{Cotangent Relation}
\begin{CT}
Let $\alpha$ be a real number that is not a multiple of $\pi/2$. The center of the circle passing through the three points $(0,1)$, $(0,-1)$, and $(\cot\alpha,0)$ is $(\cot2\alpha,0)$.
\end{CT}
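The statement is easy to test numerically before proving it. In the Python sketch below (a check we added), the circumcenter is computed from the standard determinant formula:

```python
from math import tan

def cot(t):
    return 1.0 / tan(t)

def circumcenter(P, Q, R):
    """Circumcenter of three non-collinear points (determinant formula)."""
    (px, py), (qx, qy), (rx, ry) = P, Q, R
    d = 2 * (px*(qy - ry) + qx*(ry - py) + rx*(py - qy))
    ux = ((px*px + py*py)*(qy - ry) + (qx*qx + qy*qy)*(ry - py)
          + (rx*rx + ry*ry)*(py - qy)) / d
    uy = ((px*px + py*py)*(rx - qx) + (qx*qx + qy*qy)*(px - rx)
          + (rx*rx + ry*ry)*(qx - px)) / d
    return (ux, uy)

M, N = (0.0, 1.0), (0.0, -1.0)
for alpha in (0.1, 0.6, 1.1, 2.0, 2.9):   # values avoiding multiples of pi/2
    cx, cy = circumcenter(M, N, (cot(alpha), 0.0))
    assert abs(cx - cot(2*alpha)) < 1e-9 and abs(cy) < 1e-9
```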
\begin{proof}[Proof when $0<\alpha<\pi/4$]
Define the points $M=(0,1)$, $N=(0,-1)$, $O=(0,0)$, and $P=(1,1)$. Let $Q=(\cot\alpha,0)$, so that the measure of $\angle MQO$ equals $\alpha$. Since $\overline{OQ}$ and $\overline{MP}$ are parallel, we see that the measure of $\angle QMP$ equals $\alpha$ as well (see Figure~\ref{CRfig}). If we now let $R=(\cot2\alpha,0)$, then the same argument shows that both $\angle MRO$ and $\angle RMP$ have measure $2\alpha$. However, this implies (as Figure~\ref{CRfig} shows) that $\angle MQR$ and $\angle QMR$ both have measure $\alpha$, and so $\triangle MQR$ is isosceles with $QR=MR$.
\begin{figure}[hbtp]
\begin{picture}(0,0)
\put(-4, 104){$M$}
\put(-3, 2){$O$}
\put(120, 104){$P$}
\put(373, 6){$Q$}
\put(177, 6){$R$}
\put(332, 6){$\scriptstyle\alpha$}
\put(44, 88.5){$\scriptstyle\alpha$}
\put(49, 98.5){$\scriptstyle\alpha$}
\put(150, 6){$\scriptstyle2\alpha$}
\end{picture}
\includegraphics[height=1.5in]{4.pdf}
\caption{The Cotangent Relation}
\label{CRfig}
\end{figure}
A symmetric argument shows that $QR=NR$ as well. Since $R$ is equidistant from the three points $M,N,Q$, we conclude that $R$ is actually the center of the circle passing through $M$, $N$, and $Q$.
\end{proof}
\begin{exercise}\label{cotangentex}
Prove the Cotangent Relation in the range $\pi/4<\alpha<\pi/2$ using a similar geometric argument.
\end{exercise}
\begin{remark}
Once the Cotangent Relation is known for angles $0<\alpha<\pi/2$, then it is valid for any $\alpha$ that is not a multiple of $\pi/2$, since $\cot x$ is an odd function that is periodic with period $\pi$. If $\alpha$ were a multiple of $\pi$ then $\cot\alpha$ would be undefined; if $\alpha$ were an odd multiple of $\pi/2$, then $(\cot\alpha,0) = (0,0)$ would be on the same line as $(0,1)$ and $(0,-1)$, and hence the circle center would be undefined. This is the reason for the restriction that $\alpha$ is not a multiple of $\pi/2$.
\end{remark}
\begin{exercise}\label{right.range.ex}
Show that every circle-center set\ contains an isosceles triangle whose vertex angle is strictly between $\pi/6$ and $5\pi/6$ in measure. (Hint: by taking a single circle center, one can find an isosceles triangle whose vertex angle is at most $2\pi/3$ in measure. What can be done if the measure of that vertex angle is too small?)
\end{exercise}
Now it is also true that the center of the circle passing through the three points $(0,1)$, $(0,-1)$, and $(x,0)$ is $\big((x^2-1)/2x,0\big)$. Why have we bothered to phrase the Cotangent Relation in the form we chose, instead of just stating a ``Rational Function Relation''?
The reason is that it gives us much better insight into what happens when we repeat this process of taking circle centers. Starting from the three points
\begin{equation}
M=(0,1), \quad N=(0,-1), \quad Q_1=(\cot\alpha,0),
\label{MNQ.def}
\end{equation}
let's define a sequence $\{Q_j\}$ of points recursively by setting
\begin{equation}
Q_j=C(M,N,Q_{j-1}) \quad (j\ge2).
\label{Qj.def}
\end{equation}
A simple induction using the Cotangent Relation shows that $Q_j = (\cot(2^{j-1}\alpha),0)$ for every $j\ge1$.
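In coordinates, the recursion \eqref{Qj.def} applies the map $x\mapsto (x^2-1)/2x$ to the $x$-coordinate, and the induction rests on the double-angle identity $\cot 2\alpha = (\cot^2\alpha-1)/(2\cot\alpha)$. A numerical sketch (ours) of the agreement:

```python
from math import tan

def cot(t):
    return 1.0 / tan(t)

def step(x):
    # x-coordinate of C(M, N, (x, 0)): the "Rational Function Relation".
    return (x*x - 1.0) / (2.0*x)

alpha = 0.3
x = cot(alpha)                       # Q_1
for j in range(2, 8):
    x = step(x)                      # Q_j = C(M, N, Q_{j-1})
    assert abs(x - cot(2**(j-1) * alpha)) < 1e-8
```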
\begin{example}
Take $\alpha = \pi/127$, where the number $127=2^7-1$ has been carefully chosen. Then
\begin{align*}
Q_1 &= (\cot\frac\pi{127},0), & Q_2 &= (\cot\frac{2\pi}{127},0), & Q_3 &= (\cot\frac{4\pi}{127},0),\\
Q_4 &= (\cot\frac{8\pi}{127},0), & Q_5 &= (\cot\frac{16\pi}{127},0), & Q_6 &= (\cot\frac{32\pi}{127},0),\\
Q_7 &= (\cot\frac{64\pi}{127},0), & Q_8 &= (\cot\frac{128\pi}{127},0) = (\cot\frac\pi{127},0) = Q_1, & Q_9 &= Q_2, \dots.
\end{align*}
We see that with this starting value of $\alpha$, the sequence $\{Q_j\}$ is periodic with period~7. In fact, the same is true of the binary expansion of $\alpha/\pi = 1/127$:
\[
1/127 = (0.\ 0000001\ 0000001\ 0000001 \dots)_2.
\]
We would have needed incredibly good fortune to locate this period-7 sequence using the ``Rational Function Relation'' $(x,0) \mapsto \big( (x^2-1)/2x,0 \big)$; who would have thought to try $x=\cot(\pi/127)$?
\hfill$\diamondsuit$\end{example}
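The period-$7$ behaviour is easy to confirm numerically (a check we added; floating-point, so comparisons are approximate):

```python
from math import tan, pi

def cot(t):
    return 1.0 / tan(t)

def step(x):
    # The circle-center map in coordinates: x -> (x^2 - 1)/(2x).
    return (x*x - 1.0) / (2.0*x)

q = cot(pi/127)          # Q_1
orbit = [q]
for _ in range(7):
    q = step(q)
    orbit.append(q)      # Q_2, ..., Q_8

assert abs(orbit[7] - orbit[0]) < 1e-6   # Q_8 = Q_1: period 7
```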
In some special situations, one of the $Q_j$ might lie at the origin, in which case $Q_{j+1} = C(M,N,Q_j)$ cannot be defined. Similarly, for some special values of $\alpha$, the evaluation $\cot(2^j\alpha)$ is undefined. It's not hard to see that these exceptions exactly correspond to each other.
\begin{example}
If we take $\alpha = \pi/32$, then we have
\begin{align*}
Q_1 &= (\cot\frac\pi{32},0), & Q_2 &= (\cot\frac\pi{16},0), & Q_3 &= (\cot\frac\pi8,0) = (1+\sqrt2,0),\\
Q_4 &= (\cot\frac\pi4,0) = (1,0), & Q_5 &= (\cot\frac\pi2,0) = (0,0),
\end{align*}
and so $Q_6$ cannot be defined.
The binary expansion of $\alpha/\pi = 1/32$ is simply $(0.00001)_2$, an expansion that terminates after five bits. In this case, $2^5\alpha$ is a multiple of $\pi$, and so $\cot(2^5\alpha)$ is undefined, which reflects the fact that $Q_6$ could not be defined.
\hfill$\diamondsuit$\end{example}
In general, we can see that if $Q_1 = (\cot\alpha,0)$, then $Q_j = (\cot \beta_j,0)$ where $\beta_j/\pi$ is the number whose binary expansion we get from throwing away the first $j-1$ bits in the binary expansion of $\alpha/\pi$.
\begin{exercise}
With $Q_j$ defined as in equations \eqref{MNQ.def} and \eqref{Qj.def}, show that $Q_j$ is well-defined for all $j\ge1$ unless $\alpha/\pi$ is a rational number of the form $n/2^k$. Assuming this is not the case, show that the sequence $\{Q_j\}$ is eventually periodic if and only if $\alpha/\pi$ is a rational number. For any integers $L\ge0$ and $P\ge1$, show that $\alpha$ can be chosen so that the sequence $\{Q_j\}$ is eventually periodic with period exactly $P$, after a preperiod of length exactly $L$.
\end{exercise}
\begin{example}
Consider the {\em Thue--Morse constant}
\[
\tau = (0.01101001100101101001011001101001\dots)_2,
\]
where the bits in the binary expansion start with 0, then its complement 1, then the complement 10 of that pair, then the complement 1001 of all four previous bits, and so on.
This builds up a sequence of truncations $\tau_k$ of $\tau$, from which the entire binary expansion of $\tau$ can be recovered:
\begin{align*}
\tau_0 &= 0.\ 0, \\
\tau_1 &= 0.\ 0\ 1, \\
\tau_2 &= 0.\ 01\ 10, \\
\tau_3 &= 0.\ 0110\ 1001, \\
\tau_4 &= 0.\ 01101001\ 10010110,\, \dots.
\end{align*}
It is not hard to prove that there are never three 0's or three 1's in a row in this binary expansion, which has an interesting implication (see Exercise \ref{TM.exercise}) for the sequence of points $Q_j$ we get by taking $\alpha=\pi\tau$ in equations \eqref{MNQ.def} and \eqref{Qj.def}. One important fact to note is that the binary expansion of $\tau$ is not eventually periodic, and hence all of the $Q_j$ are distinct in this case. (See \cite{EW} for other properties of this interesting number.)
\hfill$\diamondsuit$\end{example}
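The bits of $\tau$ form the classical Thue--Morse sequence: the $n$th bit is the parity of the number of 1's in the binary expansion of $n$. A short script (our illustration) confirms both this description and the no-triple-runs property on a large initial segment:

```python
def thue_morse_bit(n):
    """n-th bit of the Thue-Morse constant: parity of the 1-bits of n."""
    return bin(n).count("1") % 2

bits = [thue_morse_bit(n) for n in range(10000)]

# The first 16 bits agree with tau = (0.0110100110010110...)_2.
assert bits[:16] == [0,1,1,0,1,0,0,1,1,0,0,1,0,1,1,0]

# No three equal bits in a row anywhere in this initial segment.
assert all(not (bits[i] == bits[i+1] == bits[i+2])
           for i in range(len(bits) - 2))
```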
\begin{exercise}
Prove that the Thue--Morse constant $\tau$ never has three 0's in a row or three 1's in a row in its binary expansion. If $Q_j$ is defined as in equations \eqref{MNQ.def} and \eqref{Qj.def} with the choice $\alpha=\pi\tau$, show that all the $Q_j$ are on the line segment between $(\cot(7\pi/8),0) = (-1-\sqrt2,0)$ and $(\cot(\pi/8),0) = (1+\sqrt2,0)$.
\label{TM.exercise}
\end{exercise}
From these examples, it seems that lots of choices for $\alpha$ result in the sequence $\{Q_j\}$ being bounded. As it turns out, this is quite misleading: while such examples are easy to write down, they are actually relatively rare. It is known that a ``typical'' real number has every possible string of bits in its binary expansion, including a string of 0's or 1's as long as we wish. In fact, even more is true: if we count how many strings of length $k$ in the binary expansion of a ``typical'' real number are equal to our favorite length-$k$ string, then the result becomes closer and closer to $1/2^k$ as we go further and further out. This property of a real number is called being {\em normal to the base~2}. For example, the pairs of bits in the binary expansion of a ``typical'' real number are, in the limit, evenly split among the $4=2^2$ possibilities 00, 01, 10, and 11.
So what do we mean by ``typical''? Suppose a set $S$ has the property that for every positive number $\varepsilon$, there is a collection of open intervals (maybe infinitely many intervals) of total length at most $\varepsilon$ whose union contains $S$. Then in many ways $S$ acts like a ``very small'' set. Any countable set, such as ${\mathbb Q}$, is a set of this type: simply choose an open interval of length $\varepsilon/2$ containing the first element of the set, an open interval of length $\varepsilon/4$ containing the second element, an open interval of length $\varepsilon/8$ containing the third element, and so on. The technical term is that $S$ ``has measure zero'', in the terminology of Lebesgue measure (see \cite[Chapter 3]{HR}), and the idea is that $S$ is such a small set that numbers in $S$ are ``atypical'', while numbers outside of $S$ are ``typical''. In fact we say that ``almost all'' numbers are outside of $S$.
One of the fundamental theorems in this subject is that the set of real numbers that are not normal to the base~2 has measure zero. This doesn't mean that every real number is normal to the base~2---in fact, none of the rational numbers are. (See the article \cite{GM} for a construction of a very abnormal number in this respect.) What this result does mean is that ``almost all'' real numbers are normal to the base~2: if we chose one at random, we would have a ``100\% chance'' of picking a normal one!
Since almost all real numbers are normal, that means that almost all real numbers contain any given binary string somewhere (indeed, many times) in their binary expansion. Therefore by throwing away the right number of initial bits, that given binary string can be brought to the front of the binary expansion, meaning that the resulting number is as close to any prescribed real number as we like. The following corollary of these observations gives the big picture for this one-dimensional problem:
\begin{theorem}
Let $M=(0,1)$ and $N=(0,-1)$. For almost all choices of a point $Q_1$ on the $x$-axis, the closure of the sequence $\{Q_j\}$ defined by equations \eqref{MNQ.def} and \eqref{Qj.def} is the entire $x$-axis.
\end{theorem}
\section{The plane truth}
Now let's go back to two dimensions and investigate Theorems \ref{dense.thm} and~\ref{only1.thm}. We start by proving the interesting fact that two triangles that are naturally related in the context of taking circle centers are actually similar to each other. This Similarity Relation is a more developed version of the Cotangent Relation we saw earlier (and so our detour into one dimension really was helpful!).
\newtheorem*{SRsc}{Similarity Relation (special case)}
\begin{SRsc}
Let $\alpha$ be a real number in the range $0<\alpha<\pi/2$. Let $M=(0,1)$, $N=(0,-1)$, $Q=(\cot\alpha,0)$, and $R=(\cot2\alpha,0)$, and define $S$ to be the center of the circle passing through $M$, $Q$, and $R$. Then $\triangle NMQ$ is similar to $\triangle QRS$, both triangles being isosceles with vertices $Q$ and $S$, respectively. The constant of proportionality is $(\cot\alpha-\cot2\alpha)/2$, and the bases of $\triangle NMQ$ and $\triangle QRS$ are perpendicular.
\end{SRsc}
We note that the function $\cot x$ is strictly decreasing on the interval $0<x<\pi$, and so $(\cot\alpha-\cot2\alpha)/2$ is always positive in the range $0<\alpha<\pi/2$.
\begin{proof}
We know from the Cotangent Relation that $R$ is actually $C(M,N,Q)$ in disguise. This implies that $MR=QR$, and so $\triangle MQR$ is isosceles with vertex $R$. Now it is an elementary fact from Euclidean geometry that if $S$ is defined to be the center of the circle through $M$, $Q$, and $R$, then $\overline{RS}$ is perpendicular to $\overline{MQ}$. Indeed, both $R$ and $S$ lie on the perpendicular bisector of $\overline{MQ}$, the former because $\triangle MQR$ is isosceles. As we already know that the measure of $\angle MQR$ is $\alpha$, this implies that the measure of $\angle QRS$ is $\pi/2-\alpha$, as Figure~\ref{SRfig}(a) shows. We now see that $\triangle MNQ$ and $\triangle QRS$ are both isosceles triangles with base angles of $\pi/2-\alpha$ (Figure~\ref{SRfig}(b)), and therefore these two triangles are similar.
\begin{figure}[hbtp]
\hfill
\begin{picture}(0, 115)
\put(-25,100){(a)}
\put(-10, 48){$M$}
\put(91, 25){$Q$}
\put(36, 16){$R$}
\put(63, 110){$S$}
\put(64, 29){$\scriptscriptstyle\alpha$}
\put(72, 55){\vector(-1,-1){25}}
\put(73, 57){$\scriptstyle\pi/2-\alpha$}
\end{picture}
\includegraphics[height=1.5in]{5a.pdf}
\hfill\hfill
\begin{picture}(0,0)
\put(-25,100){(b)}
\put(-10, 48){$M$}
\put(-8, -1){$N$}
\put(91, 25){$Q$}
\put(33, 25){$R$}
\put(63, 109){$S$}
\put(16, 66){$\scriptstyle\pi/2-\alpha$}
\put(32, 62){\vector(1,-2){16}}
\put(28, 62){\vector(-4,-3){20}}
\end{picture}
\includegraphics[height=1.5in]{5b.pdf}
\hfill\hfill
\caption{The Similarity Relation}
\label{SRfig}
\end{figure}
The fact that the constant of proportionality equals $(\cot\alpha-\cot2\alpha)/2$ follows from noting that $MN=2$ while $QR=\cot\alpha-\cot2\alpha$. Just as easily, we see that the base $\overline{QR}$ of $\triangle QRS$ lies upon the perpendicular bisector of the base $\overline{MN}$ of $\triangle MNQ$, by the same argument as in the previous paragraph. In particular, the two triangles' bases $\overline{MN}$ and $\overline{QR}$ are indeed perpendicular.
\end{proof}
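The special case can also be confirmed numerically. The following Python sketch (an illustration, not part of the proof) uses the standard circumcenter formula and verifies, for one sample value of $\alpha$, that the three pairs of corresponding sides of $\triangle NMQ$ and $\triangle QRS$ all have ratio $(\cot\alpha-\cot2\alpha)/2$.

```python
import math

def circumcenter(A, B, C):
    """Center of the circle through three noncollinear points."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

alpha = 0.7                              # any sample value in (0, pi/2)
cot = lambda t: 1 / math.tan(t)
M, N = (0.0, 1.0), (0.0, -1.0)
Q, R = (cot(alpha), 0.0), (cot(2 * alpha), 0.0)
S = circumcenter(M, Q, R)

k = (cot(alpha) - cot(2 * alpha)) / 2    # claimed constant of proportionality
# Corresponding sides: NM <-> QR, MQ <-> RS, NQ <-> QS.
ratios = (dist(Q, R) / dist(N, M),
          dist(R, S) / dist(M, Q),
          dist(Q, S) / dist(N, Q))
```

Since $\overline{NM}$ is vertical and $\overline{QR}$ is horizontal, the perpendicularity of the bases is visible directly from the coordinates.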
Again, we can scale and rotate the plane without affecting these relative statements about circle centers. Therefore, after the slight change of notation $\beta=2\alpha$, we have the following corollary:
\newtheorem*{SR}{Similarity Relation}
\begin{SR}
Let $\triangle DEF$ be isosceles with vertex $F$, and let $\beta$ be the measure of the vertex angle at $F$. Let $G=C(D,E,F)$ and $H=C(E,F,G)$. Then $\triangle FGH$ is similar to $\triangle DEF$ with constant of proportionality
\begin{equation}
\lambda = (\cot(\beta/2)-\cot\beta)/2. \label{lambdadef}
\end{equation}
Moreover, the axes of symmetry of the two triangles are perpendicular.
\end{SR}
We remark that the constant $\lambda$ defined in equation \eqref{lambdadef} is less than 1 precisely when $\beta$ is in the range $\pi/6 < \beta < 5\pi/6$.
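Incidentally, the identity $\cot(\beta/2)-\cot\beta = 1/\sin\beta$ shows that $\lambda = 1/(2\sin\beta)$, which makes the claimed range transparent: $\lambda<1$ exactly when $\sin\beta>\tfrac12$. A quick numerical check in Python (an aside, not part of the argument):

```python
import math

def lam(beta):
    """The constant from the Similarity Relation; equals 1/(2*sin(beta))."""
    cot = lambda t: 1 / math.tan(t)
    return (cot(beta / 2) - cot(beta)) / 2

endpoints = [lam(math.pi / 6), lam(5 * math.pi / 6)]        # both equal 1
inside = [lam(math.pi / 6 + k * (2 * math.pi / 3) / 100)    # strictly below 1
          for k in range(1, 100)]
```

The sampled values confirm that $\lambda=1$ at $\beta=\pi/6$ and $\beta=5\pi/6$ and that $\lambda<1$ strictly between them.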
Now let's play a game similar to the one we played in the last section---namely, recursively generate new points by taking centers of circles going through old points. This time, however, instead of holding two of the points fixed, we let all of the new points take their fair turns.
As remarked earlier, every circle-center set\ contains an isosceles triangle, so let's start with an isosceles $\triangle P_1P_2P_3$ with vertex $P_3$, and let $\beta$ denote the measure of the vertex angle:
\begin{equation}
P_3P_1 = P_3P_2, \quad \beta = \mathop{\text{m}}(\angle P_1P_3P_2).
\label{P123.def}
\end{equation}
From these starting points, we recursively define a sequence of circle-centers:
\begin{equation}
P_n=C(P_{n-1},P_{n-2},P_{n-3}) \quad (n\ge4).
\label{Pn.def}
\end{equation}
By the Similarity Relation, we know that $\triangle P_3P_4P_5$ is similar to $\triangle P_1P_2P_3$, where the constant of proportionality $\lambda$ is given in equation \eqref{lambdadef}. Furthermore, $\triangle P_3P_4P_5$ is itself isosceles with vertex angle $\beta$, and so $\triangle P_5P_6P_7$ is similar to $\triangle P_3P_4P_5$ by the Similarity Relation again, with the same constant of proportionality. Continuing in this way as long as we wish, we see that for any number $k\ge1$, the triangle $\triangle P_{2k+1}P_{2k+2}P_{2k+3}$ is similar to $\triangle P_1P_2P_3$ with constant of proportionality $\lambda^k$.
The result is that these triangles swirl around in the plane, getting bigger or smaller depending on whether or not $\lambda>1$. Figure~\ref{recursefig}(a) was drawn with $\beta=2\pi/13$, corresponding to $\lambda\approx1.076$, while Figure~\ref{recursefig}(b) was drawn with $\beta=2\pi/11$, corresponding to $\lambda\approx0.925$. In both figures, the darkest triangle is the original $\triangle P_1P_2P_3$, the successive triangles getting lighter and lighter the farther along we go in the recursion.
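The ratios quoted for Figure~\ref{recursefig} can be reproduced in a few lines of Python. This sketch (an illustration only) implements the recursion \eqref{Pn.def} with the standard circumcenter formula, starting from the isosceles triangle $P_1=(0,1)$, $P_2=(0,-1)$, $P_3=(\cot(\beta/2),0)$, and measures the ratio of corresponding sides of successive triangles.

```python
import math

def circumcenter(A, B, C):
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

def orbit(beta, n):
    """P1P2P3 isosceles with vertex P3 and vertex angle beta, then
    P_k = C(P_{k-1}, P_{k-2}, P_{k-3}) as in the recursion above."""
    P = [(0.0, 1.0), (0.0, -1.0), (1 / math.tan(beta / 2), 0.0)]
    while len(P) < n:
        P.append(circumcenter(P[-1], P[-2], P[-3]))
    return P

def ratio(beta):
    P = orbit(beta, 5)
    d = lambda i, j: math.hypot(P[i][0] - P[j][0], P[i][1] - P[j][1])
    return d(3, 2) / d(1, 0)   # side P3P4 over the corresponding side P1P2
```

For $\beta=2\pi/13$ the computed ratio is about $1.076$ (growth) and for $\beta=2\pi/11$ about $0.925$ (shrinkage), matching the figures.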
\begin{figure}[hbtp]
\hfill
\begin{picture}(0, 0)
\put(-25,100){(a)}
\put(97, 71){$\scriptscriptstyle\infty$}
\end{picture}
\includegraphics[height=2.25in]{6a.pdf}
\hfill\hfill
\begin{picture}(0,0)
\put(-25,100){(b)}
\put(72, 82){$\scriptscriptstyle\infty$}
\end{picture}
\includegraphics[height=2.25in]{6b.pdf}
\hfill\hfill
\caption{Points generated by recursively taking circle centers}
\label{recursefig}
\end{figure}
Spurred on by the prominent patterns in Figure~\ref{recursefig}, let's examine in more detail the behavior of the points $P_n$. For notational purposes, we define the following sets of points:
\begin{gather*}
\S_1 = \{P_1,P_5,P_9,P_{13},\dots\}, \quad \S_2 = \{P_2,P_6,P_{10},P_{14},\dots\}, \\
\S_3 = \{P_3,P_7,P_{11},P_{15},\dots\}, \quad \S_4 = \{P_4,P_8,P_{12},P_{16},\dots\},
\end{gather*}
where in each set the indices increase by four.
\newtheorem*{OQ}{Orderly Queues Relation}
\begin{OQ}
Let $P_1,P_2,P_3$ be points in the plane such that $\triangle P_1P_2P_3$ is isosceles with vertex $P_3$. Define the sequence $P_n$ recursively by $P_n=C(P_{n-1},P_{n-2},P_{n-3})$ for $n\ge4$. Then there exists a point $P_\infty$ in the plane with the following properties:
\begin{enumerate}
\item Each of the sets $\S_1$, $\S_2$, $\S_3$, and $\S_4$ is contained in a line that goes through $P_\infty$;
\item The lines containing $\S_1$ and $\S_3$ are perpendicular to each other, as are the lines containing $\S_2$ and $\S_4$.
\end{enumerate}
\end{OQ}
These orderly queues can actually be seen in Figure~\ref{recursefig}, even in Figure~\ref{recursefig}(a) where the points $P_n$ are heading away from $P_\infty$, but certainly in Figure~\ref{recursefig}(b) where they are heading towards~$P_\infty$.
\begin{proof}
Out of all compositions of translations, scalings, and rotations of the plane, there is a unique mapping $f$ that sends $P_1$ to $P_3$ and $P_2$ to $P_4$; this mapping has a unique fixed point (see Exercise \ref{mapping.exercise} below), which we label $P_\infty$, and the mapping $f$ can be thought of as the composition of a scaling and a rotation, both centered at $P_\infty$. As $\vvec{P_2P_1}$ and $\vvec{P_4P_3}$ are perpendicular, the rotation component must be a rotation by $\pi/2$. If we were to set coordinates so that $P_\infty$ were the origin, then $f$ could be represented simply as multiplication by a $2\times2$ matrix:
\[
f\bigg( \bigg( \begin{matrix} x \\ y \end{matrix} \bigg) \bigg) = \lambda \bigg( \begin{matrix} \cos\pi/2 & {-\sin\pi/2} \\ \sin\pi/2 & \cos\pi/2 \end{matrix} \bigg) \bigg( \begin{matrix} x \\ y \end{matrix} \bigg) = \bigg( \begin{matrix} 0 & \hskip-6pt {-\lambda} \\ \lambda & \hskip-6pt0 \end{matrix} \bigg) \bigg( \begin{matrix} x \\ y \end{matrix} \bigg),
\]
with $\lambda$ as defined in equation~\eqref{lambdadef}.
Now the Similarity Relation can be interpreted as saying that the mapping $f$ also sends $P_3$ to $P_5$. Moreover, as we have mentioned before, translations, scalings, and rotations all respect circumcenters of triangles (that is, under any of these transformations, the image of a triangle's circumcenter is the same as the circumcenter of the triangle's image). Therefore we have
\[
f(P_4) = f(C(P_1,P_2,P_3)) = C(f(P_1), f(P_2), f(P_3)) = C(P_3,P_4,P_5) = P_6,
\]
and an easy induction using the same argument shows that in fact $f(P_n) = P_{n+2}$ for every $n\ge1$. Since $f$ causes a rotation by $\pi/2$ around the point $P_\infty$, the second iterate $f\circ f$ causes a rotation by $\pi$, and so the points $P$, $P_\infty$, and $f(f(P))$ always lie on a line. However, $f(f(P_n)) = P_{n+4}$ for every $n\ge1$. Therefore all the points in any of the sets $\S_1,\S_2,\S_3,\S_4$ lie on a single line that includes $P_\infty$, as claimed. The argument also shows that the lines containing $\S_1$ and $\S_3$ are perpendicular to each other, as are the lines containing $\S_2$ and~$\S_4$.
\end{proof}
\begin{remark}
If we choose the first three points to be $(0,-1)$, $(0,1)$, and $(x,0)$, for instance, then it can be worked out by the compulsive reader that the coordinates of $P_\infty$ are
\[
P_\infty = \bigg( \frac{12x^3-4x}{x^4+18x^2+1} , \frac{3x^4+2x^2-1}{x^4+18x^2+1} \bigg).
\]
\end{remark}
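This closed form can be checked numerically; note that the third starting point sits on the $x$-axis, $P_3=(x,0)$, so the three points are not collinear. Viewing the plane as the complex numbers, the map $f$ from the proof above is $z\mapsto az+b$, and its fixed point is $b/(1-a)$. The following Python sketch (an illustration only) compares that fixed point with the displayed formula for one sample value of $x$.

```python
def circumcenter(a, b, c):
    """Circumcenter, with points represented as complex numbers."""
    d = 2 * (a.real * (b.imag - c.imag) + b.real * (c.imag - a.imag)
             + c.real * (a.imag - b.imag))
    ux = (abs(a)**2 * (b.imag - c.imag) + abs(b)**2 * (c.imag - a.imag)
          + abs(c)**2 * (a.imag - b.imag)) / d
    uy = (abs(a)**2 * (c.real - b.real) + abs(b)**2 * (a.real - c.real)
          + abs(c)**2 * (b.real - a.real)) / d
    return complex(ux, uy)

x = 2.0
P1, P2, P3 = complex(0, -1), complex(0, 1), complex(x, 0)
P4 = circumcenter(P3, P2, P1)

# f is z -> a*z + b with f(P1) = P3 and f(P2) = P4; its fixed point
# b/(1 - a) should agree with the displayed closed form.
a = (P4 - P3) / (P2 - P1)
b = P3 - a * P1
fixed = b / (1 - a)
claimed = complex((12 * x**3 - 4 * x) / (x**4 + 18 * x**2 + 1),
                  (3 * x**4 + 2 * x**2 - 1) / (x**4 + 18 * x**2 + 1))
```

For $x=2$ both computations give $P_\infty=(88/89,\,55/89)$.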
\begin{exercise}
Let $P_1$, $P_2$, $Q_1$, and $Q_2$ be points in the plane, with $P_1\ne P_2$ and $Q_1\ne Q_2$. Show that among all compositions of translations, scalings, and rotations, there is a unique mapping that sends $P_1$ to $Q_1$ and $P_2$ to $Q_2$. Moreover, if the vectors $\vvec{P_1P_2}$ and $\vvec{Q_1Q_2}$ are distinct, show that this mapping has a unique fixed point.
\label{mapping.exercise}
\end{exercise}
\begin{remark}
This exercise can be done using conventional Euclidean geometry; however, it is instructive to find a solution by viewing the plane as the set of complex numbers, whereupon the mappings described are simply the linear functions $z\mapsto az+b$.
\end{remark}
The Orderly Queues Relation provides us with quite a bit of structure hidden inside any circle-center set. Is there a way to exploit this structure to prove that every circle-center set\ must be dense in the plane---that is, that any closed circle-center set\ must fill up the entire plane down to the last point? The key step on the path to answering this question turns out to be showing that any closed circle-center set\ must at least fill up an entire line segment.
\newtheorem*{SFF}{Segment-Filling Fact}
\begin{SFF}
Let $\S$ be a closed circle-center set, and suppose that there are two perpendicular lines $\L_1$ and $\L_2$ such that $\S$ contains:
\begin{itemize}
\item the point of intersection $I$ of $\L_1$ and $\L_2$;
\item another point $A$ on $\L_1$ besides $I$; and
\item a sequence of points \{$P_1$, $P_2$, \dots\} on $\L_2$ that converges to $I$.
\end{itemize}
Then $\S$ contains the entire line segment $\overline{AI}$.
\end{SFF}
\begin{proof}
As usual, we may scale, rotate, and translate our set $\S$ as we wish, and so without loss of generality we can assume the following: $\L_1$ is the $y$-axis, $\L_2$ is the $x$-axis, $I$ is the origin, and $A=(0,1)$. The sequence of points \{$P_1$, $P_2$, \dots\} then lies along the $x$-axis. To prove that $\S$ contains the entire segment $\overline{AI}$, it suffices to prove that $\S$ contains every point of the form $(0,a/2^n)$ where $n\ge0$ and $0\le a\le 2^n$ are integers, since the closure of this set of points is $\overline{AI}$.
In fact, we choose to prove an even stronger statement: for every integer $n\ge0$, and for every integer $0\le a\le 2^n$, not only does $\S$ contain the point $(0,a/2^n)$, but it also (except in the case $a/2^n = 1$) contains a sequence of points on the line $y=a/2^n$ that converges to $(0,a/2^n)$. (Figure \ref{inductionfig} helps us visualize just what is being claimed here. In that figure, the $x$-coordinates of the $\{P_j\}$ are shown decreasing monotonically to 0; this could be assumed anyway without losing generality by passing to a subsequence, but it isn't important to the proof.) The reason we make our job apparently harder in this way is that the new statement can actually be proved by induction on $n$.
\begin{figure}[hbtp]
\hfill
\begin{picture}(0, 0)
\put(-25,150){(a)}
\end{picture}
\includegraphics[height=2.6in]{inductiona.pdf}
\hfill\hfill
\begin{picture}(0,0)
\put(0,150){(b)}
\end{picture}
\includegraphics[height=2.6in]{inductionb.pdf}
\hfill\hfill
\caption{Inductive proof of the Segment-Filling Fact}
\label{inductionfig}
\end{figure}
First of all, note that the base case $n=0$ is true precisely because of the hypotheses described in the first paragraph of this proof. For the inductive step, assume that the case $n-1$ is known to be true (Figure \ref{inductionfig}(a) provides a picture of this induction hypothesis for $n-1=3$); we need to establish that the case $n$ is true as well. If the numerator $a$ is even, say $a=2b$, then $a/2^n=b/2^{n-1}$, and so the induction hypothesis takes care of those values of $a$ immediately. Thus we can restrict our attention to odd numerators $a=2b-1$ with $1\le b\le 2^{n-1}$. Note that it suffices to prove that $\S$ contains a sequence of points on the line $y=(2b-1)/2^n$ that converges to $(0,(2b-1)/2^n)$: since $\S$ is closed, it will then automatically contain the point $(0,(2b-1)/2^n)$ as well.
Now by the induction hypothesis, $\S$ contains both $E=(0,b/2^{n-1})$ and $F=(0,(b-1)/2^{n-1})$, as well as a sequence of points $\{G_j\}$ on the line $y=(b-1)/2^{n-1}$ converging to $F$. (Figure \ref{inductionfig}(b) illustrates this with $n=4$ again and $b=5$.) Note that $\triangle EFG_j$ is a right triangle with hypotenuse $\overline{EG_j}$. Since the circumcenter of a right triangle is the midpoint of its hypotenuse, the circle-center set\ $\S$ also contains the midpoint of each of the segments $\overline{EG_j}$. But it is easily seen that this sequence of midpoints lies on the line $y=\tfrac12\big(b/2^{n-1} + (b-1)/2^{n-1}\big) = a/2^n$ and converges to $(0,a/2^n)$. This establishes the inductive step and hence finishes the proof.
\end{proof}
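The key geometric step---the circumcenter of a right triangle is the midpoint of its hypotenuse---and the resulting sequence of midpoints on the line $y=(2b-1)/2^n$ can be verified in exact rational arithmetic. A small Python sketch (an illustration only), using the values $n=4$, $b=5$ from Figure~\ref{inductionfig}(b):

```python
from fractions import Fraction as Fr

def circumcenter(A, B, C):
    """Standard circumcenter formula; exact over the rationals."""
    (ax, ay), (bx, by), (cx, cy) = A, B, C
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

n, b = 4, 5                                   # the case shown in Figure (b)
E = (Fr(0), Fr(b, 2 ** (n - 1)))
F = (Fr(0), Fr(b - 1, 2 ** (n - 1)))
mids = []
for j in range(1, 6):
    G = (Fr(1, j), Fr(b - 1, 2 ** (n - 1)))  # G_j -> F along the lower line
    # EFG_j has its right angle at F, so the circumcenter is the
    # midpoint of the hypotenuse EG_j.
    mids.append(circumcenter(E, F, G))
```

Every computed center has $y$-coordinate exactly $(2b-1)/2^n = 9/16$, and the $x$-coordinates $1/(2j)$ march off to $0$.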
\begin{exercise}
If a closed circle-center set\ $\S$ contains a sequence of points on a line $\L$ that converges to a point $P$, prove that it automatically contains a point on the line through $P$ that is perpendicular to $\L$.
\end{exercise}
Since we're obtaining more and more information about what's happening inside circle-center sets, let's press on to another useful fact:
\newtheorem*{QF}{Quadrilateral Fact}
\begin{QF}
Any circle-center set that contains an entire line segment must also contain an entire quadrilateral and its interior.
\end{QF}
(By ``quadrilateral'' we mean a nondegenerate quadrilateral, of course, so that it has an interior to speak of.)
\begin{proof}
By the usual rotation/translation/scaling argument, we can assume that our circle-center set\ contains a segment of the $x$-axis together with the point $(0,1)$. It then remains to understand the behavior of the function
\[
f(u,v) = \text{the center of the circle through $(0,1)$, $(u,0)$, and $(v,0)$} = \bigg( \frac{u+v}{2},\frac{u v+1}{2} \bigg)
\]
which takes values in the plane. The following exercise does the rest of the work.
\end{proof}
\begin{exercise}\label{quadex}
Let $a<b<c<d$ be real numbers. Show that as $u$ ranges over the interval $[a,b]$ and $v$ ranges over the interval $[c,d]$, the function $f(u,v)$ fills the quadrilateral whose vertices are $f(a,c)$, $f(a,d)$, $f(b,d)$, and $f(b,c)$. Conclude that the Quadrilateral Fact is true in general.
\end{exercise}
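A quick numerical sanity check of the displayed formula for $f(u,v)$---the point really is equidistant from the three given points---can be done as follows (Python, an illustration only):

```python
import math

def f(u, v):
    """Claimed center of the circle through (0,1), (u,0), (v,0)."""
    return ((u + v) / 2, (u * v + 1) / 2)

def d(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

checks = []
for u, v in [(0.3, 2.0), (-1.5, 0.7), (1.0, 4.0)]:   # sample pairs, u != v
    c = f(u, v)
    radii = [d(c, (0.0, 1.0)), d(c, (u, 0.0)), d(c, (v, 0.0))]
    checks.append(max(radii) - min(radii))
```

In each case the three distances agree to machine precision.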
Finally, we sprint to the end of our journey by assembling these known facts together into a proof of our two (equivalent) main theorems.
\begin{proof}[Proof of Theorem \ref{only1.thm}]
Let $\S$ be a closed circle-center set; we want to show that $\S={\mathbb R}^2$. By Exercise \ref{right.range.ex}, we know that $\S$ contains an isosceles triangle with vertex angle strictly between $\pi/6$ and $5\pi/6$ in measure. Because the constant $\lambda$ defined in equation~\eqref{lambdadef} is less than 1 for this vertex angle, the Orderly Queues Relation tells us that $\S$ contains a sequence of points $\S_3$ which all lie on a common line and which converge to the point $P_\infty$. In this case, the point $P_\infty$ is also in $\S$ since $\S$ is closed. Moreover, $\S$ contains the point $P_1$, which is on the line through $P_\infty$ that is perpendicular to the line containing $\S_3$. Therefore, by the Segment-Filling Fact, $\S$ must contain the entire line segment joining $P_\infty$ and $P_1$.
Once the circle-center set\ $\S$ contains a line segment, the Quadrilateral Fact shows that $\S$ must actually contain an entire quadrilateral and its interior. From here we can quickly show that $\S$ must be all of ${\mathbb R}^2$, by the following argument: Choose any point $A$ in ${\mathbb R}^2$, and consider the circle centered at $A$ that goes through the center of the quadrilateral (any circle centered at $A$ that intersects the interior of the quadrilateral would do). The intersection of this circle with the interior of the quadrilateral is some circular arc. No matter how small this arc might be, we can choose three distinct points on the arc, which are all in $\S$ because they're inside the quadrilateral. Therefore the center of the circle through those three points must be in $\S$---but that center is just the point $A$ we chose! And since $A$ was an arbitrary point in the plane, we see that $\S$ must be all of ${\mathbb R}^2$, as claimed.
\end{proof}
{\it Acknowledgements.} The author thanks Lowell W.\ Beineke and an anonymous referee for helpful comments on an earlier version of this manuscript; several of their suggestions have become improvements in the exposition of this paper. The author was supported in part by grants from the Natural Sciences and Engineering Research Council.
| {
"timestamp": "2007-03-29T02:28:13",
"yymm": "0703",
"arxiv_id": "math/0703860",
"language": "en",
"url": "https://arxiv.org/abs/math/0703860",
"abstract": "Say that a subset S of the plane is a \"circle-center set\" if S is not a subset of a line, and whenever we choose three noncollinear points from S, the center of the unique circle through those three points is also an element of S. A problem appearing on the Macalester College Problem of the Week website was to prove that a finite set of points in the plane, no three lying on a common line, cannot be a circle-center set. Various solutions to this problem that did not use the full strength of the hypotheses appeared, and the conjecture was subsequently made that every circle-center set is unbounded. In this article, we prove a stronger assertion, namely that every circle-center set is dense in the plane, or equivalently that the only closed circle-center set is the entire plane. Along the way we show connections between our geometrical method of proof and number theory, real analysis, and topology.",
"subjects": "Metric Geometry (math.MG)",
"title": "Sets that contain their circle centers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9898303428461218,
"lm_q2_score": 0.8397339716830605,
"lm_q1q2_score": 0.8311941650905793
} |
https://arxiv.org/abs/2202.09327 | Hadamard Inverse Function Theorem Proved by Variational Analysis | We present a proof of Hadamard Inverse Function Theorem by the methods of Variational Analysis, adapting an idea of I. Ekeland and E. Sere. | \section{Introduction}
The classical example $(x,y)\to e^x(\cos y, \sin y)$ shows that -- except in dimension one -- the derivative may be everywhere invertible while the function itself is invertible only locally. Probably the historically first sufficient condition for global invertibility was given by J. S. Hadamard, see \eqref{eq:2} in Theorem~\ref{thm:hadamard} below.
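A quick numerical illustration of this example (a sketch, not part of the text): the Jacobian of $(x,y)\mapsto e^x(\cos y,\sin y)$ is $e^x$ times a rotation by $y$, so it is invertible everywhere with $\|J^{-1}\|=e^{-x}$, which is unbounded as $x\to-\infty$; and the map is $2\pi$-periodic in $y$, hence not injective.

```python
import math

def f(x, y):
    return (math.exp(x) * math.cos(y), math.exp(x) * math.sin(y))

# The Jacobian is exp(x) times a rotation by y, so det = exp(2x) > 0
# everywhere, and the inverse Jacobian has operator norm exp(-x).
inv_norm = lambda x: math.exp(-x)

# Yet f is not injective: it is 2*pi periodic in y.
p, q = f(0.0, 0.0), f(0.0, 2 * math.pi)
```

So the everywhere-invertible derivative does not rule out non-injectivity; it is condition \eqref{eq:2} below that fails here.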
An excellent overview of this topic -- from both research and educational perspectives -- is given in \cite{plastock}. Perhaps the easiest proof to understand -- because of its geometrical nature -- involves applying the Mountain Pass Theorem to the function $x\to\|f(x)-y\|$ to ensure the injectivity of $f$. However, the Mountain Pass Theorem -- although ``obvious'' -- is hard to verify and may impose additional restrictions of a technical nature.
Here we present a new proof based on a recent idea of I. Ekeland and E. S\'er\'e~\cite{ES}. This idea allows one to obtain a continuous right inverse to $f$ on any compact set, see Proposition~\ref{pro:main}. The key Proposition~\ref{pro:main} is proved by methods of Variational Analysis in the flavour of the monographs of A. Dontchev~\cite{Asen-book}, A. Dontchev and T. Rockafellar~\cite{doro}, and A. Ioffe~\cite{ioffe}.
\bigskip
We work in a Banach space $(X, \|\cdot\|)$ and denote its closed unit ball by~$B_X$. Recall that the function
$$
f: X\to X
$$
is called Fr\'echet differentiable at $x\in X$ if there is a bounded linear operator $f'(x):X\to X$ such that
$$
\lim_{\|h\|\to 0}\frac{f(x+h)-f(x)-f'(x)h}{\|h\|} = 0.
$$
The function $f$ is called {\em smooth}, denoted $f\in C^1$, if the function
$$
x \to f'(x)
$$
is norm-to-norm continuous.
We present a modern proof of the following classical
\begin{theo} \textsc{(Hadamard)}
\label{thm:hadamard}
Let $f\in C^1$, let $f'(x)$ be invertible for all $x$, and assume that
\begin{equation}
\label{eq:2}
\|[f'(x)]^{-1}\| \le M,\quad\forall x\in X,
\end{equation}
for some $M > 0$.
Then $f$ is $C^1$ invertible on $X$.
In other words, there is $g\in C^1$ such that
$$
g(f(x)) =f(g(x))=x,\quad\forall x\in X.
$$
\end{theo}
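As a concrete illustration (not part of the paper), the scalar function $f(x)=x+\tfrac12\sin x$ satisfies the hypotheses with $M=2$, since $f'(x)=1+\tfrac12\cos x\in[\tfrac12,\tfrac32]$, and the global inverse can be recovered numerically by bisection:

```python
import math

def f(x):
    return x + 0.5 * math.sin(x)   # f'(x) = 1 + 0.5*cos(x) in [1/2, 3/2]

# |1/f'(x)| <= 2 everywhere, so condition (2) holds with M = 2,
# and f is strictly increasing; invert it by bisection.
def g(y, lo=-100.0, hi=100.0):
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

ys = [-5.0, -1.0, 0.0, 2.0, 7.5]
resid = max(abs(f(g(y)) - y) for y in ys)
```

Of course, in dimension one monotonicity already gives invertibility; the theorem's content is the infinite-dimensional case.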
The work is organised as follows. In the next Section~\ref{sec:prelim} we give the necessary preliminary known facts. In Section~\ref{sec:mainpro} we prove the key Proposition~\ref{pro:main} and in the final Section~\ref{sec:proof} we complete the proof of Hadamard Theorem~\ref{thm:hadamard}.
\section{Preliminaries}
\label{sec:prelim}
We start with recalling the classical (local) Inverse Function Theorem.
\begin{theo}
\label{thm:inverse-classical}
Let $f\in C^1$ and let $f'(x_0)$ be invertible. Then there are $\varepsilon,\delta >0$ such that for each $y$ such that
$$
\|y - f(x_0)\| <\varepsilon
$$
there is a unique $x=:g(y)$ such that $\|x-x_0\| < \delta$ and
$$
f(x) = y.
$$
Moreover, $g\in C^1$ and
\begin{equation}
\label{eq:inv-der}
g'(f(x_0)) = [f'(x_0)]^{-1}.
\end{equation}
\end{theo}
The following statements are also well-known.
\begin{lem}
\label{lem:uniform-on-compact}
Let $f$ be $C^1$. Let $K\subset X$ be compact and let $r>0$. Then
$$
f(x+th) = f(x) + tf'(x)h + o(t)\mbox{ uniformly on }x\in K\mbox{ and }h\in rB_X.
$$
More precisely, there is $\alpha(t)\to 0$ as $t\to 0$ such that
$$
\sup\{\| f(x+th) - f(x) - tf'(x)h\|: x\in K,\ h\in rB_X\} \le \alpha(t)t.
$$
\end{lem}
\begin{lem}
\label{lem:cont-1}
Let $X$ be a Banach space and let $A(x)$ be a bounded linear operator for each $x\in X$. Let the function $x\to A(x)$ be norm-to-norm continuous at $x_0$. If $A(x_0)$ is invertible then
$$
x\to [A(x)]^{-1}
$$
is continuous at $x_0$.
\end{lem}
Next is a precursor to the Ekeland Variational Principle, see \cite[Chapter 5, Section 1]{AE}. Of course, it easily follows from the Ekeland Variational Principle itself, see e.g. \cite[Basic Lemma]{ioffe}. See also the comments concerning the ``Basic Lemma'' on \cite[p. 93]{ioffe}. Here we present a proof based on what is called in these comments a ``simple iteration''.
\begin{lem}
\label{lem-ek-tem}
Let $X$ be a Banach space and let $\mu:X \to \mathbb{R}^+\cup\{\infty\}$ be lower semicontinuous and such that for some $ r>0$
$$
\forall x:\ 0<\mu (x) < \infty\Rightarrow \exists y:\ \mu(y) < \mu(x) - r\|y-x\|.
$$
Then for each $x\in \mathrm{dom}\, \mu$ there is $y\in X$ such that
$$
\mu(y) = 0\mbox{ and }r\|y-x\| \le \mu(x).
$$
\end{lem}
\begin{proof}
Fix $x_0\in \mathrm{dom}\, \mu$ such that $\mu(x_0) > 0$. Suppose that $x_1,x_2,\ldots,x_n$ have already been chosen; we choose $x_{n+1}$ in the following way.
Set
\begin{equation}
\label{eq:nu-def}
\nu_n= \sup \{ \|x-x_n\|:\ \mu(x) < \mu(x_n) - r\|x-x_n\|\}.
\end{equation}
By assumption the set on the right-hand side is nonempty, so $\nu_n > 0$. Also, since $\mu\ge 0$, we have that $\nu_n \le \mu(x_n)/r <\infty$.
Choose $x_{n+1}$ such that
\begin{equation}
\label{eq:x-n+1-def}
\mu(x_{n+1}) < \mu(x_n) - r\|x_{n+1}-x_n\|
\mbox{ and }\|x_{n+1}-x_n\| > \nu_n/2.
\end{equation}
Note that
$$
\|x_{n+1} - x_0\| \le \sum_{i=0}^n \|x_{i+1}-x_i\|\le \sum_{i=0}^n(\mu(x_i)-\mu(x_{i+1}))/r\le\mu(x_0)/r.
$$
If $\mu(x_{n+1})=0$, we are done. If not, we continue by induction.
If we end up with an infinite sequence $(x_n)_0^\infty$, then from the above inequality $\sum_{i=0}^\infty \|x_{i+1}-x_i\| \le\mu(x_0)/r$, so $x_n\to \bar x$ as $n\to\infty$ and $\|\bar x- x_0\|\le \mu(x_0)/r$. From \eqref{eq:x-n+1-def} it follows that $\nu_n\to 0$.
If $\mu(\bar x) > 0$ then we can find $\bar y$ such that
\begin{equation}\label{vuh}
\mu(\bar y) < \mu(\bar x) - r\|\bar y-\bar x\|.
\end{equation}
Since $\mu$ is lower semicontinuous, we will have for all $n$ large enough $\mu(\bar y) < \mu(x_n) - r\|\bar y-x_n\|$. Hence, see \eqref{eq:nu-def}, $\nu_n \ge \|\bar y-x_n\| $ for all $n$ large enough. Since $\nu_n\to 0$, we get that $\bar y =\bar x$ which contradicts \eqref{vuh}.
So, $\mu(\bar x) = 0$ and we are done.
\end{proof}
\section{Right inverse \`a la Ekeland \& S\'er\'e}
\label{sec:mainpro}
The following is what distinguishes our proof of Hadamard Theorem.
\begin{prop}
\label{pro:main}
Let $f\in C^1$, $f'(x)$ be invertible for all $x$ and let $f$ satisfy~\eqref{eq:2}. Let $K\subset X$ be compact. Then $f$ has a continuous right inverse on $K$, that is, there is a continuous $g:K \to X$ such that
$$
f(g(x)) = x,\quad \forall x\in K.
$$
Moreover, if $f(0)=0\in K$ then there is a continuous right inverse of $f$ on $K$ that satisfies
$$
g(0) = 0.
$$
\end{prop}
\begin{proof}
Let $C(K,X)$ be the space of all continuous functions from $K$ to $X$. It is clear that when equipped with the norm
$$
\|g\|_\infty := \max_{y\in K} \|g(y)\|
$$
it is a Banach space.
Consider the following functional
$$
\mu: C(K,X) \to \mathbb{R}^+
$$
measuring how much a given function $g$ differs from a right inverse of $f$:
$$
\mu(g) := \max_{y\in K} \| f(g(y)) - y\|.
$$
It is clear that $\mu$ is lower semicontinuous. (It is easy to check that it is continuous but we do not need this.)
The claim is that there exists $g$ such that $\mu(g) = 0$.
In order to check the condition of Lemma~\ref{lem-ek-tem}, fix $\hat g\in C(K,X)$ such that
$$
\mu(\hat g) > 0.
$$
Set $u:K\to X$ as
$$
u(y) := y - f(\hat g(y)).
$$
By definition,
$$
\mu (\hat g) = \| u \|_\infty.
$$
So, $u$ is not identically equal to zero, because $\mu(\hat g)>0$.
Put
$$
w( y) :=[f'(\hat g(y))]^{-1} u(y), \quad \forall y\in K.
$$
Because $x\to f'(x)$ is continuous, from Lemma~\ref{lem:cont-1} it follows that
$$
y\to [f'(\hat g(y))]^{-1}
$$
is norm-to-norm continuous, so $w\in C(K,X)$.
Therefore, for $t > 0$
$$
g_t := \hat g + tw \in C(K,X).
$$
Note for future reference that from \eqref{eq:2} it follows that $\|w\|_\infty \le M \|u\|_\infty$, that is
\begin{equation}
\label{eq:norm-w}
\|w\|_\infty \le M\mu(\hat g).
\end{equation}
Our next aim is to estimate $\mu(g_t)$. By definition
$$
\mu(g_t) := \max_{y\in K} \| f(g_t(y)) - y\|.
$$
For $y\in K$ define $\varphi _y: \mathbb{R}^+\to \mathbb{R}^+$ by
$$
\varphi _y(t) := \| f(g_t(y)) - y\|,
$$
hence
\begin{equation}
\label{eq:mufi}
\mu(g_t) := \max_{y\in K} \varphi _y(t).
\end{equation}
Because the set $ \hat g(K)$ is compact and the set $w(K)$ is bounded, from Lemma~\ref{lem:uniform-on-compact} it follows that
$$
\max_{y\in K}\| f(\hat g(y) +tw(y)) - f(\hat g(y)) - t f'(\hat g(y))w(y)\| = \alpha(t)t,
$$
where $\alpha(t)\to 0$ as $t\to 0$.
But
$$
f'(\hat g(y))w(y) = f'(\hat g(y)) [f'(\hat g(y))]^{-1} u(y) = u(y),
$$
so
$$
\max_{y\in K}\|f(g_t(y)) - f(\hat g(y)) - tu(y)\| = \alpha(t)t.
$$
Therefore, for any $y\in K$
\begin{eqnarray*}
\varphi _y(t) &=& \| f(g_t(y)) - y\| \\
&\le& \|f(\hat g(y)) + tu(y)-y\| + \|f(g_t(y)) - f(\hat g(y)) - tu(y)\|\\
&\le& \|(t-1)u(y)\| + \alpha(t)t.
\end{eqnarray*}
Since $\varphi_y(0) = \|u(y)\|$, we have for $t\in(0,1)$
$$
\varphi _y(t) \le (1-t) \varphi _y (0) + \alpha(t)t.
$$
Taking a maximum over $y\in K$, see \eqref{eq:mufi}, we get
$$
\mu(g_t) \le (1-t) \mu(g_0)+ \alpha(t)t,
$$
or, in other words,
\begin{equation}
\label{eq:diff}
\mu( \hat g + tw) \le\mu(\hat g) - t \mu(\hat g) + \alpha(t)t.
\end{equation}
Since $\mu(\hat g) > 0$, for some $\delta > 0$ we then have $|\alpha(t)|< \mu(\hat g)/2$ for $ t\in (0,\delta)$. So,
$$
\mu( \hat g + tw) < \mu(\hat g) - (t/2)\mu(\hat g), \quad \forall t\in (0,\delta).
$$
From \eqref{eq:norm-w}, which is $ \mu(\hat g) \ge (1/M)\|w\|_\infty$, we get
$$
\mu( \hat g + tw) < \mu(\hat g) - (1/2M)\|tw\|_\infty, \quad \forall t\in (0,\delta),
$$
and we can apply Lemma~\ref{lem-ek-tem} with $r=1/(2M)$ (its hypothesis holds with $x=\hat g$ and $y= \hat g +(\delta/2)w$) to conclude that $\mu$ vanishes somewhere.
If $f(0)=0\in K$ then we can modify the above by considering instead of $C(K,X)$ the Banach space of continuous $g:K\to X$ such that $g(0)=0$. It is clear that in this case $u(0)=w(0) = 0$ and everything else works in the same way.
\end{proof}
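The construction in the proof is essentially a damped Newton iteration $g \mapsto g + t\,w$, $w(y)=[f'(g(y))]^{-1}(y-f(g(y)))$, performed uniformly over $K$. A toy one-dimensional Python sketch (an illustration only; the proof uses small $t$, here a fixed damping factor is enough) with $f(x)=x+\tfrac12\sin x$ and $K$ a discretized interval:

```python
import math

def f(x):
    return x + 0.5 * math.sin(x)     # f'(x) = 1 + 0.5*cos(x) in [1/2, 3/2]

def fprime(x):
    return 1 + 0.5 * math.cos(x)

K = [i / 10 for i in range(-20, 21)]  # a discretized compact set of targets y
g = {y: 0.0 for y in K}               # hat g = 0: the starting guess

t = 0.5                               # damping factor
for _ in range(300):
    # one step g -> g + t*w with w(y) = [f'(g(y))]^{-1} (y - f(g(y)))
    g = {y: g[y] + t * (y - f(g[y])) / fprime(g[y]) for y in K}

residual = max(abs(f(g[y]) - y) for y in K)   # this is mu(g) on the grid
```

Since $f'\in[\tfrac12,\tfrac32]$, each step contracts the residual $y-f(g(y))$ by a factor of at most $5/6$, so $\mu(g)$ decreases to zero geometrically, mirroring the descent estimate \eqref{eq:diff}.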
\section{Proof of Theorem~\ref{thm:hadamard}}
\label{sec:proof}
\begin{proof}
It is enough to show that $f$ is bijective: the inverse is then $C^1$ by the local Inverse Function Theorem (Theorem~\ref{thm:inverse-classical}) applied at each point.
Let $y\in X$ be arbitrary and set $K=\{y\}$. From Proposition~\ref{pro:main} it follows that there is $x=g(y)$ such that $f(x)=y$. So, $f$ is surjective, i.e. $f(X)=X$.
Let $a,b\in X$ be such that $f(a)=f(b)$. By considering instead of $f$ the function
$$
x \to f(b-x) - f(b)
$$
we can assume without loss of generality that
$$
b=0\mbox{ and } f(0)=0.
$$
Then
$$
f(a) = 0.
$$
Set
$$K := f([0,a]).$$
Since $f$ is continuous, $K$ is compact. From Proposition~\ref{pro:main} there is a continuous
$$
g:K\to X,\mbox{ such that }g(0) = 0\mbox{ and }f(g(y))=y,\quad\forall y\in K.
$$
Consider
$$
I := \{t\in [0,1]:\ g(f(ta)) = ta\}.
$$
Obviously, $0\in I$, because $g(0) = 0$. Due to the continuity of $g$ and $f$ the set $I$ is closed and, therefore, compact. Let
$$
\bar t := \max \{ t:t\in I\}.
$$
Assume that $\bar t < 1$. By the local Inverse Function Theorem, see Theorem~\ref{thm:inverse-classical}, there are $\delta,\varepsilon>0$ such that for each $y\in X$ such that
$$
\|y-f(\bar t a)\| < \varepsilon
$$
there is a unique $x\in X$ such that $\|x-\bar ta\| < \delta$ and
$$
f(x) = y.
$$
From the continuity of $f$ there is $\mu > 0$ such that for all $t\in(\bar t, \bar t +\mu)\subset (0,1)$ we have $\|ta - \bar ta\| < \delta$ and $\|f(ta) -f(\bar t a)\| < \varepsilon$. Moreover, since $g(f(\bar ta)) = \bar t a$ and $g\circ f$ is continuous, shrinking $\mu$ if necessary we may also assume that $\|g(f(ta)) - \bar t a\| < \delta$ for these $t$.
Then, because of $f(ta)\in K$ we have that $ f(g(f(ta))) = f(ta)$. From the uniqueness of the solution to $f(\cdot )= f( ta)$ in this neighbourhood we get $g(f(ta)) = ta$ for all $t\in(\bar t, \bar t +\mu)$ which contradicts the definition of $\bar t $.
So, $\bar t =1$ meaning that $g(f(a)) = a$. But $f(a) = 0$. Since $g(0)=0$, it follows that $a=0$.
We have proved that if $f(a)=f(b)$ then $a=b$, so $f$ is injective.
\end{proof}
% Statistics on bargraphs viewed as cornerless Motzkin paths
% https://arxiv.org/abs/1609.00088

\begin{abstract}
A bargraph is a self-avoiding lattice path with steps $U=(0,1)$, $H=(1,0)$ and $D=(0,-1)$ that starts at the origin and ends on the $x$-axis, and stays strictly above the $x$-axis everywhere except at the endpoints. Bargraphs have been studied as a special class of convex polyominoes, and enumerated using the so-called wasp-waist decomposition of Bousquet-M\'elou and Rechnitzer. In this paper we note that there is a trivial bijection between bargraphs and Motzkin paths without peaks or valleys. This allows us to use the recursive structure of Motzkin paths to enumerate bargraphs with respect to several statistics, finding simpler derivations of known results and obtaining many new ones. We also count symmetric bargraphs and alternating bargraphs. In some cases we construct statistic-preserving bijections between different combinatorial objects, proving some identities that we encounter along the way.
\end{abstract}

\section{Introduction}
Bargraphs have appeared in the literature with different names, from skylines~\cite{Ger} to wall polyominoes~\cite{Fer}. Their enumeration was addressed by Bousquet-M\'elou and Rechnitzer~\cite{BMR}, Prellberg and Brak~\cite{PB}, and Fereti\'c~\cite{Fer}. These papers use the so-called {\em wasp-waist} decomposition or some variation of it to find a generating function with two variables $x$ and $y$ keeping track of the number of $H$ and $U$ steps, respectively.
The same decomposition has later been exploited by Blecher, Brennan and Knopfmacher to obtain refined enumerations with respect to statistics such as peaks~\cite{BBK_peaks}, levels~\cite{BBK_levels}, walls~\cite{BBK_walls}, descents and area~\cite{BBK_parameters}.
Bargraphs are used to represent histograms and they also have connections to statistical physics, where they are used to model polymers.
There is a trivial bijection between bargraphs and Motzkin paths with no peaks or valleys, which surprisingly seems not to have been noticed before.
In Section~\ref{sec:bij} we present this bijection and we use it to describe our basic recursive decomposition. In Section~\ref{sec:stat} we find the generating functions for bargraphs with respect to several statistics,
most of which are new, but in some cases we also present simpler derivations for statistics that have been considered before. In cases where we obtain enumeration sequences that coincide with each other or with sequences that have appeared in the literature, we construct bijections between the relevant combinatorial objects.
Finally, in Section~\ref{sec:subsets} we find the generating functions for symmetric bargraphs and for alternating bargraphs.
\section{Bijection to Motzkin paths}\label{sec:bij}
A bargraph is a lattice path with steps $U=(0,1)$, $H=(1,0)$ and $D=(0,-1)$ that starts at the origin and ends on the $x$-axis, stays strictly above the $x$-axis everywhere except at the endpoints, and has no pair of consecutive steps of the form $UD$ or $DU$. Sometimes it is convenient to identify a bargraph with the corresponding word on the alphabet $\{U,D,H\}$.
The {\em semiperimeter} of a bargraph is the number of $U$ steps plus the number of $H$ steps (this terminology makes sense when considering the closed polyomino obtained by connecting the two endpoints of the path with a horizontal segment). A bargraph of semiperimeter $15$ is drawn on the right of Figure~\ref{fig:Delta}.
Let $\mathcal B$ denote the set of bargraphs, and let $\mathcal B_n$ denote those of semiperimeter $n$. Let $B=B(x,y)$ be the generating function for $\mathcal B$ where $x$ marks the number of $H$ steps and $y$ marks the number of $U$ steps. Note that the coefficient of $z^n$ in $B(z,z)$ is $|\mathcal B_n|$. We do not consider the trivial bargraphs with no steps, so the smallest possible semiperimeter is $2$.
Recall that a Motzkin path is a lattice path with up steps $(1,1)$, horizontal steps $(1,0)$ and down steps $(1,-1)$ that starts at the origin, ends on the $x$-axis, and never goes below the $x$-axis. A peak in a Motzkin path is an up step followed by a down step, and a valley is a down step followed by an up step.
Let $\mathcal M$ denote the set of Motzkin paths without peaks or valleys, and let $\mathcal M_n$ denote the subset of those where the number of up steps plus the number of horizontal steps equals $n$. Elements of $\mathcal M$ will be called {\em cornerless} Motzkin paths.
Let $M=M(x,y)$ be the generating function for $\mathcal M$ where $x$ marks the number of $H$ steps and $y$ marks the number of $U$ steps.
The coefficient of $z^n$ in $M(z,z)$ is $|\mathcal M_n|$. Let $\epsilon$ denote the empty Motzkin path, so that $\mathcal M_0=\{\epsilon\}$. To make our notation less clunky, we will sometimes write $\epsilon$ instead of $\{\epsilon\}$.
For $n\ge2$, there is an obvious bijection between $\mathcal M_{n-1}$ and $\mathcal B_n$ that appears not to have been noticed in the literature.
Given a path in $\mathcal M$, insert an up step at the beginning and a down step at the end (resulting in an {\em elevated} Motzkin path), and then turn all the up steps $(1,1)$ into $U=(1,0)$, all the down steps $(1,-1)$ into $D=(0,-1)$, and leave the horizontal steps $(1,0)$ unchanged as $H=(1,0)$. Denote by $\Delta:\mathcal M_{n-1}\to\mathcal B_n$ the resulting bijection. An example is shown in Figure~\ref{fig:Delta}.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.5]
\draw(24,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(0,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt)\S;
\draw[thin,dotted](24,0)--(32,0);
\draw(22,1) node {$\rightarrow$};
\draw(0,0) circle (1.2pt) -- ++(1,1) circle(1.2pt)\U-- ++(1,0) circle(1.2pt)-- ++(1,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(1,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(1,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(1,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(1,1) circle(1.2pt)\U-- ++(1,0) circle(1.2pt)-- ++(1,-1) circle(1.2pt)\D-- ++(1,-1) circle(1.2pt);
\draw[thin,dotted](0,0)--(20,0);
\end{tikzpicture}
\caption{A cornerless Motzkin path and the corresponding bargraph obtained by applying $\Delta$.}\label{fig:Delta}
\end{figure}
With some abuse of notation, we will denote up steps and down steps in a Motzkin path by $U$ and $D$ respectively, just like the steps in bargraphs that map to them via $\Delta$.
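The bijection $\Delta$ can be checked mechanically. The following brute-force sketch (our code, with our own helper names, not from the paper) enumerates words over $\{U,H,D\}$, filters out the cornerless Motzkin paths and the bargraphs, and verifies that $\Delta$ maps $\mathcal M_{n-1}$ bijectively onto $\mathcal B_n$ for small $n$.

```python
from itertools import product

STEP = {'U': 1, 'H': 0, 'D': -1}

def is_cornerless_motzkin(w):
    # never below the x-axis, ends on it, and no UD or DU factor
    h = 0
    for s in w:
        h += STEP[s]
        if h < 0:
            return False
    return h == 0 and all(a + b not in ('UD', 'DU') for a, b in zip(w, w[1:]))

def is_bargraph(w):
    # starts with U, strictly above the axis except at the endpoints, cornerless
    if not w or w[0] != 'U':
        return False
    h = 0
    for i, s in enumerate(w):
        h += STEP[s]
        if h <= 0 and i < len(w) - 1:
            return False
    return h == 0 and all(a + b not in ('UD', 'DU') for a, b in zip(w, w[1:]))

def size(w):  # number of U steps plus number of H steps
    return sum(s in 'UH' for s in w)

def delta(w):  # the bijection Delta: elevate, then reinterpret the steps
    return 'U' + w + 'D'

MAX_N = 6
m_count, b_count, images = {}, {}, set()
for L in range(1, 2 * MAX_N):
    for t in product('UHD', repeat=L):
        w = ''.join(t)
        n = size(w)
        if is_cornerless_motzkin(w) and 1 <= n <= MAX_N - 1:
            m_count[n] = m_count.get(n, 0) + 1
            images.add(delta(w))
        if is_bargraph(w) and n <= MAX_N:
            b_count[n] = b_count.get(n, 0) + 1

assert all(is_bargraph(w) for w in images)  # every image is a bargraph
print([m_count[n] for n in range(1, MAX_N)])      # |M_1| .. |M_5|
print([b_count[n] for n in range(2, MAX_N + 1)])  # |B_2| .. |B_6|
```

Since $\Delta$ is injective on words, matching counts plus the inclusion of the image in $\mathcal B$ confirm the bijection for these sizes.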
Cornerless Motzkin paths admit a very simple decomposition that parallels the standard recursive decomposition of Motzkin paths. Namely, every nonempty path in $\mathcal M$ can be decomposed uniquely as $HA$, $UA'D$, or $UA'DHA$, where $A$ is an arbitrary path in $\mathcal M$ and $A'$ is an arbitrary nonempty path. Figure~\ref{fig:decomposition} illustrates these cases. It follows that $M=M(x,y)$ satisfies the equation
\begin{align}\nonumber
M&=1+xM+y(M-1)+y(M-1)xM\\ &=(1+y(M-1))(1+xM).
\label{eq:M}
\end{align}
Using that $B=y(M-1)$, which is a consequence of the bijection $\Delta$, we obtain
\begin{equation}\label{eq:B} xB^2-(1-x-y-xy)B+xy=0.
\end{equation}
Solving for $B$, we get
$$B=\frac{1-x-y-xy-\sqrt{(1-x-y-xy)^2-4x^2y}}{2x},$$
which agrees with \cite[p.~92]{BMR}, \cite[Eq.~(3.6)]{BMB}, and~\cite[Eq.~(8)]{Fer} (with $y$ replaced by $y^2$).
The generating function for bargraphs by semiperimeter is
$$B(z,z)=\frac{1-2z-z^2-\sqrt{(1-2z-z^2)^2-4z^3}}{2z},$$
and the beginning of its series expansion is $z^2+2z^3+5z^4+13z^5+35z^6+97z^7+275z^8+\cdots$,
given by sequence A082582 from the OEIS~\cite{OEIS}.
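The initial terms of this series can be confirmed by brute force. In the sketch below (our code), a bargraph is encoded by its sequence of column heights $h_1,\dots,h_k$, whose semiperimeter is $k + h_1 + \sum_i \max(h_{i+1}-h_i,0)$.

```python
from itertools import product

def semiperimeter(h):
    # number of H steps (= number of columns) plus number of U steps
    return len(h) + h[0] + sum(max(b - a, 0) for a, b in zip(h, h[1:]))

def count_bargraphs(n):
    # brute force over all column-height sequences with entries in 1..n-1
    return sum(1
               for k in range(1, n)
               for h in product(range(1, n), repeat=k)
               if semiperimeter(h) == n)

print([count_bargraphs(n) for n in range(2, 9)])
```

The output matches the coefficients $1, 2, 5, 13, 35, 97, 275$ of sequence A082582.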
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.45]
\motzkin{(-1,0)}
\draw (1,0.4) node[above] {$\mathcal M$};
\draw (4,0.4) node {$=$};
\draw (5.5,0) node {$.$}; \draw (5.5,0.2) node[above] {$\epsilon$};
\draw (7,0.4) node {$\sqcup$};
\draw[thick,red] (8,0) circle(1.2pt) -- ++(1,0) circle(1.2pt); \motzkin{(9,0)}; \draw (11,0.4) node[above] {$\mathcal M$};
\draw (14,0.4) node {$\sqcup$};
\draw[thick,red] (15,0) circle(1.2pt) -- ++(1,1) circle(1.2pt); \motzkin{(16,1)}; \draw (18,1.4) node[above] {$\mathcal M\setminus \epsilon$}; \draw[thick,red] (20,1) circle(1.2pt) -- ++(1,-1) circle(1.2pt);
\draw (22,0.4) node {$\sqcup$};
\draw[thick,red] (23,0) circle(1.2pt) -- ++(1,1) circle(1.2pt); \motzkin{(24,1)}; \draw (26,1.4) node[above] {$\mathcal M\setminus\epsilon$}; \draw[thick,red] (28,1) circle(1.2pt) -- ++(1,-1) circle(1.2pt); \draw[thick,red] (29,0) circle(1.2pt) -- ++(1,0) circle(1.2pt); \motzkin{(30,0)}; \draw (32,0.4) node[above] {$\mathcal M$};
\begin{scope}[shift={(0,-4)}]
\draw (4,0.4) node {$=$};
\draw (5,0.4) node {$($};
\draw (6.5,0) node {$.$}; \draw (6.5,0.2) node[above] {$\epsilon$};
\draw (8,0.4) node {$\sqcup$};
\draw[thick,red] (9,0) circle(1.2pt) -- ++(1,1) circle(1.2pt); \motzkin{(10,1)}; \draw (12,1.4) node[above] {$\mathcal M\setminus\epsilon$}; \draw[thick,red] (14,1) circle(1.2pt) -- ++(1,-1) circle(1.2pt);
\draw (16,0.4) node {$)$};
\draw (17,0.4) node {$\times$};
\draw (18,0.4) node {$($};
\draw (19.5,0) node {$.$}; \draw (19.5,0.2) node[above] {$\epsilon$};
\draw (21,0.4) node {$\sqcup$};
\draw[thick,red] (22,0) circle(1.2pt) -- ++(1,0) circle(1.2pt); \motzkin{(23,0)}; \draw (25,0.4) node[above] {$\mathcal M$};
\draw (28,0.4) node {$)$};
\end{scope}
\end{tikzpicture}
\caption{The decomposition of Motzkin paths without peaks or valleys.}\label{fig:decomposition}
\end{figure}
We point out that whereas the wasp-waist decomposition considers five cases by splitting bargraphs in different ways, the decomposition in Figure~\ref{fig:decomposition} is significantly simpler. In fact, interpreting the factorization $M=(1+y(M-1))(1+xM)$, it can be viewed as a single case: every element of $\mathcal M$ splits uniquely into two pieces, one that is either empty or an elevated nonempty path, followed by one that is either empty or starts with an $H$.
Via the bijection $\Delta$, the statistics on $\mathcal B$ that we will consider correspond naturally to statistics on $\mathcal M$.
When it creates no confusion, we will use the same notation for both a statistic on $\mathcal B$ and the corresponding statistic on $\mathcal M$.
Given such a statistic $\st$, we let $M_{\st}=M_{\st}(t,x,y)$
and $B_{\st}=B_{\st}(t,x,y)$ denote the generating functions for $\mathcal M$ and $\mathcal B$, respectively, where $t$ marks $\st$, $x$ marks the number of $H$, and $y$ marks the number of $U$. We will also use the analogous notation with the letter $M$ replaced by any symbol denoting a subset of $\mathcal M$.
If $\st(\epsilon)=0$ and $\st(M)=\st(\Delta(M))$ for all $M\in\mathcal M$, then $B_{\st}=y(M_{\st}-1)$ using the bijection $\Delta$. For some statistics these conditions do not hold, but we find similar relations between $B_{\st}$ and $M_{\st}$. Substituting $x=z$ and $y=z$ in $B_{\st}$ we obtain the generating function for $\mathcal B$ by semiperimeter and the statistic $\st$.
In the study of some statistics, it will be helpful to separate cornerless paths according to the first and the last step.
The generating function of paths that start with an $H$ is $xM$, and the same applies for paths that end with an $H$.
Thus, the generating function for paths that start with a $U$ is $M-1-xM$, and similarly for paths that end with a $D$.
The generating function for paths that start and end with an $H$ is $x+x^2M$. By subtracting it from the generating function of paths that start with an $H$, it follows that paths that start with an $H$ and end with a $D$ are counted by $xM-x-x^2M$, and by symmetry so are paths that start with a $U$ and end with an $H$.
By subtracting this from the generating function for paths that start with a $U$, we obtain that the generating function for paths that start with a $U$ and end with a $D$ is
$$M-1-xM-(xM-x-x^2M)=(1-x)^2M+x-1.$$
Table~\ref{tab:startend} summarizes these generating functions.
\begin{table}[h]
\centering
\begin{tabular}{c||c|c|c}
& end with an $H$ & end with a $D$ & total \\ \hline \hline
start with an $H$ & $x+x^2M $ & $(x-x^2)M -x$ & $xM $\\ \hline
start with a $U$ & $(x-x^2)M -x$ & $(1-x)^2M +x-1$ & $M -1-xM $ \\ \hline
total & $xM $ &$M -1-xM $ & $M -1$
\end{tabular}
\caption{The generating functions for nonempty cornerless Motzkin paths with given initial and final steps.}
\label{tab:startend}
\end{table}
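Table~\ref{tab:startend} can be sanity-checked at $x=y=z$: writing $m_n=|\mathcal M_n|$ (with $m_0=1$ for the empty path), the table predicts, for instance, $(n{=}1)+m_{n-2}$ paths of size $n$ starting and ending with an $H$, and $m_n-2m_{n-1}+m_{n-2}$ (for $n\ge2$) starting with a $U$ and ending with a $D$. A brute-force sketch (our code):

```python
from itertools import product

STEP = {'U': 1, 'H': 0, 'D': -1}

def is_cornerless_motzkin(w):
    h = 0
    for s in w:
        h += STEP[s]
        if h < 0:
            return False
    return h == 0 and all(a + b not in ('UD', 'DU') for a, b in zip(w, w[1:]))

MAX = 5
m = {0: 1}      # m[n] = |M_n|; the empty path gives m[0] = 1
cls = {}        # counts by (size, first step, last step)
for L in range(1, 2 * MAX + 1):
    for t in product('UHD', repeat=L):
        w = ''.join(t)
        n = sum(s in 'UH' for s in w)
        if n <= MAX and is_cornerless_motzkin(w):
            m[n] = m.get(n, 0) + 1
            key = (n, w[0], w[-1])
            cls[key] = cls.get(key, 0) + 1

for n in range(1, MAX + 1):
    hh = (n == 1) + m.get(n - 2, 0)                          # x + x^2 M
    hd = m[n - 1] - hh                                       # (x - x^2)M - x
    ud = m[n] - 2 * m[n - 1] + m.get(n - 2, 0) + (n == 1)    # (1-x)^2 M + x - 1
    assert cls.get((n, 'H', 'H'), 0) == hh
    assert cls.get((n, 'H', 'D'), 0) == hd == cls.get((n, 'U', 'H'), 0)
    assert cls.get((n, 'U', 'D'), 0) == ud
print('table verified at x = y = z for sizes 1..%d' % MAX)
```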
\section{Statistics on bargraphs}\label{sec:stat}
In this section we use the bijection $\Delta$ and the decomposition of cornerless Motzkin paths in Figure~\ref{fig:decomposition} to find the generating functions for bargraphs with respect to several statistics, while also keeping track of the number of $U$ and $H$ steps.
For each statistic $\st$ we obtain an explicit formula for $B_{\st}$. In some cases, for the sake of simplicity, we will only state the equation that it satisfies, which is quadratic for all the statistics in this paper.
The statistics that we consider are the height of the first column (Section \ref{hfc}); the number of double rises and double falls (\ref{dr});
the number of valleys or peaks of width $\ell$ along with the number of horizontal steps in such valleys and peaks (\ref{vl});
the number of corners of different types (\ref{cor}); the number of horizontal segments (\ref{hs}) and horizontal segments of length 1 (\ref{uhs}); the length of the first descent, together with a bijective proof of a simple recurrence satisfied by this statistic, and the $x$-coordinate of the first descent (\ref{fd}); the number of columns of height $h$ for any fixed $h$, and
the number of initial columns of height 1 (\ref{ch}); the least column height~(\ref{lch}) and the width of the leftmost horizontal segment (\ref{lhs}), together with a bijective proof of their equidistribution on bargraphs with fixed semiperimeter, and a bijection relating the latter statistic with the number of initial columns of height 1 (\ref{lhs}); the number of occurrences of $UHU$ (\ref{uhu}); the length of the initial staircase $UHUH\dots$ (\ref{stair}); the number of odd-height and even-height columns (\ref{oheh}); and the area~(\ref{area}).
Table~\ref{tab:OEIS} lists the OEIS reference~\cite{OEIS} of the sequences that occur in the paper, most of which have been added as a result of this work.
\begin{table}[htb]
\centering
\begin{tabular}{l|l|l}
Sect. & Statistic or subset of bargraphs & OEIS sequences \\
\hline\hline
\ref{hfc} & height of the first column & A273342, A273343 \\ \hline
\ref{dr} & number of double rises & A273713, A273714 \\ \hline
\ref{dr} & number of double rises and double falls & A276066 \\ \hline
\ref{vl} & number of valleys of width 1 & A273721, A273722 \\ \hline
\ref{vl} & number of peaks of width 1 & A273715, A273716 \\ \hline
\ref{cor} & number of $\llcorner$ corners & A273717, A273718 \\ \hline
\ref{hs} & number of horizontal segments & A274486 \\ \hline
\ref{uhs} & number of horizontal segments of length $1$ & A274491, A274492 \\ \hline
\ref{fd} & length of the first descent & A276067, A276068 \\ \hline
\ref{fd} & $x$-coordinate of the first descent & A273897, A273898 \\ \hline
\ref{ch} & number of columns of height $1$ & A273899, A273900 \\ \hline
\ref{ch} & number of initial columns of height $1$ & A274490 \\ \hline
\ref{lch} & least column height & A274488 \\ \hline
\ref{lhs} & width of the leftmost horizontal segment & A274488 \\ \hline
\ref{uhu} & number of occurrences of $UHU$ & A273896 \\ \hline
\ref{stair} & length of the initial staircase $UHUH\dots$ & A274494, A274495 \\ \hline
\ref{oheh} & number of odd-height and even-height columns & A273901,
A273902, A273903, A273904 \\ \hline
\ref{area} & area & A273346, A273347, A273348 \\ \hline
\ref{sec:sym} & symmetric bargraphs & A273905 \\ \hline
\ref{sec:alt} & weakly alternating bargraphs & A275448 \\ \hline
\ref{sec:alt} & strictly alternating bargraphs & A023342 \\
\end{tabular}
\caption{The OEIS sequences~\cite{OEIS} corresponding to the statistics and subsets of bargraphs studied in this paper.}
\label{tab:OEIS}
\end{table}
We use the term {\em ascent} (resp. {\em descent}) to refer to a maximal consecutive sequence of $U$ (resp. $D$) steps.
\subsection{Height of the first column}\label{hfc}
Let $\hfc$ be the statistic ``height of the first column'' on $\mathcal B$.
On $\mathcal M$, let $\hfc$ denote the statistic that measures the length of the initial ascent of the path, that is, the number of steps $U$ right at the beginning of the path.
Then, using the second line of the decomposition in Figure~\ref{fig:decomposition}, we get
$$M_{\hfc}=(1+ty(M_{\hfc}-1))(1+xM),$$
where we can easily use the expression for $M$ coming from~\eqref{eq:M} and solve for $M_{\hfc}$.
If $A\in\mathcal M$ and $G=\Delta(A)$, then $\hfc(G)=\hfc(A)+1$. It follows that $B_{\hfc}=ty(M_{\hfc}-1)$, and
$B_{\hfc}$ satisfies the equation
$$(1 - t(1 -x + y + xy) + t^2 y) B_{\hfc}^2 - t(1 - y)(1 - x - ty - txy) B_{\hfc} + t^2 xy(1 - y) = 0.$$
In particular, the generating function for $\mathcal B$ with respect to semiperimeter and height of the first column is
$$B_{\hfc}(t,z,z)=
t \,\frac {1-2z-tz+{z}^{2}+t{z}^{3} -\left( 1-tz \right) \sqrt{(1-z)(1-3z-z^2-z^3)} }{2(1-t+t^2 z-t{z}^{2})}.
$$
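As a sanity check (our code, not from the paper), one can tabulate the statistic by brute force over column-height sequences: row $n$ of the table must sum to $|\mathcal B_n|$, and the only bargraph of semiperimeter $n$ with $\hfc = n-1$ is the one-column bargraph.

```python
from itertools import product

def semiperimeter(h):
    # number of H steps (= number of columns) plus number of U steps
    return len(h) + h[0] + sum(max(b - a, 0) for a, b in zip(h, h[1:]))

def hfc_table(max_n):
    # a[n][k] = number of bargraphs of semiperimeter n whose first column has height k
    a = {n: {} for n in range(2, max_n + 1)}
    for m in range(1, max_n):
        for h in product(range(1, max_n), repeat=m):
            n = semiperimeter(h)
            if n <= max_n:
                a[n][h[0]] = a[n].get(h[0], 0) + 1
    return a

a = hfc_table(7)
print([sum(a[n].values()) for n in range(2, 8)])   # row sums give |B_n|
print([a[n][n - 1] for n in range(2, 8)])          # the single one-column bargraph
```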
\subsection{Number of double rises and number of double falls}\label{dr}
Let $\dr$ (resp. $\df$) be the statistic that counts pairs of adjacent $U$ steps (resp. $D$ steps) ---also called {\em double rises} (resp. {\em double falls})--- in bargraphs and in cornerless Motzkin paths. By symmetry, $\dr$ and $\df$ are equidistributed.
Using the second line of the decomposition in Figure~\ref{fig:decomposition}, we have
\begin{equation}\label{eq:Mdr}
M_{\dr}=\left[1+y(t(M_{\dr}-1-xM_{\dr})+xM_{\dr})\right](1+xM_{\dr}).
\end{equation}
This is because, in this decomposition, the number of double rises in a path is obtained by summing the number of double rises in each of the two pieces, except that if the first piece is the elevation of a path starting with a $U$, it creates an additional double rise.
The generating function with respect to $\dr$ for paths starting with an $H$ is $xM_{\dr}$, and for paths starting with a $U$ is $M_{\dr}-1-xM_{\dr}$.
Equation~\eqref{eq:Mdr} can be solved to find an expression for $M_{\dr}$.
For every $A\in\mathcal M$, $\dr(\Delta(A))$ equals $\dr(A)+1$ if $A$ starts with a $U$ step and it equals $\dr(A)$ otherwise. It follows that
$B_{\dr}=y\left(t(M_{\dr}-1-xM_{\dr})+xM_{\dr}\right)$, and so it satisfies
$$x B_{\dr}^2 - (1 - x - ty- xy) B_{\dr} + xy = 0.$$
In particular, the generating function for bargraphs with no double rises according to semiperimeter is
$$B_{\dr}(0,z,z)=\frac {1-z-z^2-\sqrt {1-2z-z^2-2z^3+z^4}}{2z},$$
which agrees with the generating function for RNA secondary structure numbers~\cite[Exer. 6.43]{EC2} with the exponents shifted by one. Recall that a {\em secondary structure} is a simple graph with vertices $\{1,2,\dots,n\}$ such that every $i$ is adjacent to at most one vertex $j$ with $|j-i|>1$, $\{i,i+1\}$ is an edge for all $i$, and there are no two edges $\{i,k\},\{j,l\}$ with $i<j<k<l$.
Next we give a bijection between bargraphs of semiperimeter $n+1$ with no double rises and secondary structures on $n$ vertices. Given such a bargraph, delete the initial $U$ and the final $D$. Since the bargraph has no double rises, the remaining steps can be written uniquely as a sequence of blocks $HU$, $H$ and $D$. Note that there are $n$ blocks, and that no $HU$ is immediately followed by a $D$. Each block will correspond to a vertex of the secondary structure, labeled increasingly from left to right. For each $HU$ block, consider the first $D$ block to its right that is at the same height in the bargraph, and draw an edge between the two corresponding vertices in the secondary structure. Finally, draw edges between every pair of consecutive vertices. It is easy to see that this construction produces a secondary structure, and that it is a bijection.
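The agreement with the secondary structure numbers is easy to confirm by brute force (our code): counting $\dr$-free bargraphs by semiperimeter $n=2,\dots,7$ should reproduce $1,1,2,4,8,17$, the secondary structure numbers $s(1),\dots,s(6)$ (OEIS A004148) shifted by one.

```python
from itertools import product

def stats(h):
    # semiperimeter and number of double rises of the bargraph with columns h
    rises = [max(b - a, 0) for a, b in zip(h, h[1:])]
    semi = len(h) + h[0] + sum(rises)
    # an ascent of length l contributes l - 1 double rises
    dr = (h[0] - 1) + sum(r - 1 for r in rises if r > 0)
    return semi, dr

def count_dr_free(n):
    return sum(1
               for k in range(1, n)
               for h in product(range(1, n), repeat=k)
               if stats(h) == (n, 0))

print([count_dr_free(n) for n in range(2, 8)])
```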
We end this section by refining the argument behind Equation~\eqref{eq:Mdr} in order to give the joint distribution of $\dr$ and $\df$ on bargraphs and cornerless Motzkin paths. Let $B_{\dr,\df}$ and $M_{\dr,\df}$ be the corresponding generating functions where $t$ marks $\dr$ and $s$ marks $\df$. In the decomposition in Figure~\ref{fig:decomposition}, the elevated piece of the path creates an additional double rise if it starts with a $U$, and an additional double fall if it ends with a $D$.
The generating function by $\dr$ and $\df$ of paths that start with an $H$ is $xM_{\dr,\df}$, since the initial $H$ step does not affect the statistics $\dr$ and $\df$, and the same holds for paths that end with an $H$. The generating function by $\dr$ and $\df$ of paths that start and end with an $H$ is $x+x^2M_{\dr,\df}$. Thus, by the same argument used to obtain Table~\ref{tab:startend}, but keeping track of $\dr$ and $\df$, it follows that paths that start with an $H$ and end with a $D$ are counted by $xM_{\dr,\df}-x-x^2M_{\dr,\df}$, and so are paths that start with a $U$ and end with an $H$. Similarly, the generating function for paths that start with a $U$ and end with a $D$ is
$(1-x)^2M_{\dr,\df}+x-1$.
Adding the contributions of the elevated path in the different cases, we get the equation
$$
M_{\dr,\df}=\left[1+y\left[x+x^2M_{\dr,\df}+(t+s)((x-x^2)M_{\dr,\df}-x)+ts((1-x)^2M_{\dr,\df}+x-1)\right]\right](1+xM_{\dr,\df}).
$$
Using that $B_{\dr,\df}=y\left[x+x^2M_{\dr,\df}+(t+s)((x-x^2)M_{\dr,\df}-x)+ts((1-x)^2M_{\dr,\df}+x-1)\right]$, we obtain the following equation for $B_{\dr,\df}$:
$$xB_{\dr,\df}^2-(1-x-(t+s)xy-tsy+tsxy)B_{\dr,\df}+xy=0.$$
\subsection{Number of valleys and peaks of width $\ell$}\label{vl}
For $\ell\ge1$, define a {\em valley} (resp. {\em peak}) of width $\ell$ in a bargraph or a cornerless Motzkin path to be a sequence of steps $DH^\ell U$ (resp. $UH^\ell D$). Let $v_\ell$ (resp. $p_\ell$) be the statistic counting the number of valleys (resp. peaks) of width $\ell$.
To count valleys of width $\ell$, we use the first line in the decomposition in Figure~\ref{fig:decomposition} to obtain
$$M_{v_\ell}=1+x M_{v_\ell}+y(M_{v_\ell}-1)+xy(M_{v_\ell}-1)\left[M_{v_\ell}+(t-1)x^{\ell-1}(M_{v_\ell}-1-xM_{v_\ell})\right].$$
Indeed, an extra valley of width $\ell$ is created in the last case when the second part of the path starts with $H^{\ell-1}U$; these paths are counted by $x^{\ell-1}(M_{v_\ell}-1-xM_{v_\ell})$. Using that $B_{v_\ell}=y(M_{v_\ell}-1)$, we obtain the following equation for $B_{v_\ell}$:
$$(1 - (1 - t)(1 - x)x^{\ell-1})B_{v_\ell}^2 - (1-x-y-xy- (1 - t)x^{\ell+1}y) B_{v_\ell} + xy = 0.$$
Solving it and setting $x=z$ and $y=z$, we get
$$B_{v_\ell}(t,z,z)= \frac{1-2z-{z}^{2}+(t-1){z}^{\ell+2}-\sqrt{1-4z+2{z}^{2}+{z}^{4}
+2 (1-t) {z}^{\ell+2}(1+z^2)
+(1-t)^2 {z}^{2\ell+4}}}
{2(z-(1-t)(1-z)z^\ell)}.$$
Similarly, for peaks of width $\ell$, using the second line in the decomposition in Figure~\ref{fig:decomposition}, we have
$$M_{p_\ell}=[1+y(M_{p_\ell}-1+(t-1)x^\ell)](1+x M_{p_\ell}),$$
since an extra peak of width $\ell$ is created when the path that is elevated is precisely $H^\ell$.
Using that $B_{p_\ell}=y(M_{p_\ell}-1+(t-1)x^\ell)$, we obtain the equation
$$x B_{p_\ell}^2 - (1-x-y-xy- (1 - t)x^{\ell+1}y ) B_{p_\ell} + y(x - (1 - t)(1 - x)x^\ell)= 0.$$
It follows that
$$B_{p_\ell}(t,z,z)= \frac{1-2z-{z}^{2}+(t-1){z}^{\ell+2}-\sqrt{1-4z+2{z}^{2}+{z}^{4}
+2 (1-t) {z}^{\ell+2}(1+z^2)
+(1-t)^2 {z}^{2\ell+4}}}
{2z}.$$
Generating functions for several statistics related to peaks in bargraphs were found in~\cite{BBK_peaks} using the wasp-waist decomposition. The statistics include the total number of peaks, the number of horizontal steps in peaks, and the height of the first peak. All of them can be derived using the decomposition in Figure~\ref{fig:decomposition}. For example, to find the generating function $B_{p,\hsp}(t,s,x,y)$ for bargraphs where $t$ marks the number of peaks and $s$ marks the number of horizontal steps in peaks, we first note that the corresponding generating function $M_{p,\hsp}(t,s,x,y)$ for cornerless Motzkin paths satisfies
$$M_{p,\hsp}=\left(1+y\left(M_{p,\hsp}-1+\frac{tsx}{1-sx}-\frac{x}{1-x}\right)\right)(1+xM_{p,\hsp}).$$
This is because in the decomposition in Figure~\ref{fig:decomposition}, when the elevated path in the first piece is of the form $H^\ell$ for $\ell\ge1$, it creates a peak of width $\ell$, contributing $ts^\ell x^\ell$ to the generating function instead of $x^\ell$. Summing $ts^\ell x^\ell-x^\ell$ for $\ell\ge1$ gives the term $\frac{tsx}{1-sx}-\frac{x}{1-x}$. Solving for $M_{p,\hsp}$ and using that $$B_{p,\hsp}=y\left(M_{p,\hsp}-1+\frac{tsx}{1-sx}-\frac{x}{1-x}\right)$$ we obtain an expression for $B_{p,\hsp}$.
We can similarly find the generating function $B_{v,\hsv}(t,s,x,y)$ where $t$ marks the number of valleys and $s$ marks the number of horizontal steps in valleys. In this case, the corresponding generating function $M_{v,\hsv}(t,s,x,y)$ for
$\mathcal M$ satisfies
$$M_{v,\hsv}=1+xM_{v,\hsv}+y(M_{v,\hsv}-1)+xy(M_{v,\hsv}-1)\left[M_{v,\hsv}+\left(\frac{ts}{1-sx}-\frac{1}{1-x}\right)(M_{v,\hsv}-1-xM_{v,\hsv})\right].$$
Indeed, in the fourth case of the decomposition in the first line of Figure~\ref{fig:decomposition}, an extra valley of width $\ell$ appears if the path to the right of the red $H$ step starts with $H^{\ell-1}U$ for $\ell\ge1$. Such paths thus contribute $ts^\ell x^{\ell-1}(M_{v,\hsv}-1-xM_{v,\hsv})$ instead of $x^{\ell-1}(M_{v,\hsv}-1-xM_{v,\hsv})$.
Solving for $M_{v,\hsv}$ and using that $B_{v,\hsv}=y(M_{v,\hsv}-1)$ we obtain an expression for $B_{v,\hsv}$.
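As a cross-check on these definitions (our observation, not a claim made in the paper), in any bargraph the peaks and valleys of all widths alternate along the path, starting and ending with a peak, so the total number of peaks always exceeds the total number of valleys by one. A regex-based brute-force verification:

```python
import re
from itertools import product

def word(h):
    # the boundary word of the bargraph with column heights h
    w = 'U' * h[0] + 'H'
    for a, b in zip(h, h[1:]):
        w += ('U' * (b - a) if b > a else 'D' * (a - b)) + 'H'
    return w + 'D' * h[-1]

def peaks(w):    # occurrences of U H^l D for any l >= 1
    return len(re.findall(r'UH+D', w))

def valleys(w):  # occurrences of D H^l U for any l >= 1
    return len(re.findall(r'DH+U', w))

N = 7
for k in range(1, N):
    for h in product(range(1, N), repeat=k):
        if len(h) + h[0] + sum(max(b - a, 0) for a, b in zip(h, h[1:])) <= N:
            w = word(h)
            assert peaks(w) == valleys(w) + 1
print('checked all bargraphs of semiperimeter <= %d' % N)
```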
\subsection{Number of corners}\label{cor}
Denote by $\llcorner$, $\lrcorner$, $\ulcorner$ and $\urcorner$ the statistics counting the number of occurrences of $DH$, $HU$, $UH$ and $HD$ in a bargraph, respectively. For every $G\in\mathcal B$, we have $\ulcorner(G)=1+\lrcorner(G)$ and $\urcorner(G)=1+\llcorner(G)$. Also, by symmetry, the statistics $\ulcorner$ and $\urcorner$ are equidistributed on bargraphs, and so are $\llcorner$ and $\lrcorner$. The statistic $\urcorner$ coincides with the number of descents, and similarly the statistic $\ulcorner$ equals the number of ascents.
We can find the distribution of these statistics using the decomposition for $\mathcal M$ in the first line of Figure~\ref{fig:decomposition}. Let $\llcorner$ and $\ulcorner$ denote the number of $DH$ and $UH$ in paths in $\mathcal M$, respectively, and let $M_{\llcorner,\ulcorner}=M_{\llcorner,\ulcorner}(t,s,x,y)$ be the generating function where $t$ and $s$ mark these two statistics, respectively. Then
\begin{equation}\label{eq:MDHUH}
M_{\llcorner,\ulcorner}=1+xM_{\llcorner,\ulcorner}+y(M_{\llcorner,\ulcorner}-1+(s-1)xM_{\llcorner,\ulcorner})(1+txM_{\llcorner,\ulcorner}),
\end{equation}
since an extra occurrence of $UH$ is created when the elevated piece of the decomposition starts with an $H$, and an extra occurrence of $DH$ is created in the fourth case of the decomposition.
The generating function for bargraphs where $s$ and $t$ mark the statistics $\llcorner$ and $\ulcorner$, respectively, is then $B_{\llcorner,\ulcorner}=y(M_{\llcorner,\ulcorner}-1+(s-1)xM_{\llcorner,\ulcorner})$.
Eliminating $M_{\llcorner,\ulcorner}$ from this equation and~\eqref{eq:MDHUH}, we see that $B_{\llcorner,\ulcorner}$ satisfies
$$txB_{\llcorner,\ulcorner}^2 - (1 - x - y + xy - txy - sxy)B_{\llcorner,\ulcorner} + sxy = 0.$$
Setting $t=0$ and $s=1$ in the above equation, we see that the generating function for bargraphs with no occurrences of $DH$ is $\frac{xy}{1-x-y}$. These are {\em nondecreasing} bargraphs, namely those where the heights of the columns increase weakly from left to right. Their generating function can be easily explained combinatorially, since such a bargraph is determined by the left boundary, which consists of any sequence of $U$ and $H$ steps that starts with a $U$ and ends with an $H$. A related family is that of {\em increasing} bargraphs, which are those avoiding $DH$ and $HH$. Their generating function is $\frac{xy}{1-y-xy}$, since the left boundary has the additional restriction of not containing two consecutive $H$s.
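Both generating functions are easy to test numerically (our code): at $x=y=z$, $\frac{xy}{1-x-y}$ becomes $\frac{z^2}{1-2z}$, so nondecreasing bargraphs of semiperimeter $n$ should number $2^{n-2}$, while $\frac{xy}{1-y-xy}$ becomes $\frac{z^2}{1-z-z^2}$, whose coefficients are Fibonacci numbers.

```python
from itertools import product

def semiperimeter(h):
    return len(h) + h[0] + sum(max(b - a, 0) for a, b in zip(h, h[1:]))

def count(n, ok):
    # bargraphs of semiperimeter n whose column heights satisfy the predicate ok
    return sum(1
               for k in range(1, n)
               for h in product(range(1, n), repeat=k)
               if semiperimeter(h) == n and ok(h))

nondecreasing = lambda h: all(a <= b for a, b in zip(h, h[1:]))  # avoid DH
increasing = lambda h: all(a < b for a, b in zip(h, h[1:]))      # avoid DH and HH

print([count(n, nondecreasing) for n in range(2, 8)])  # powers of 2
print([count(n, increasing) for n in range(2, 8)])     # Fibonacci numbers
```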
Since the total number of corners in a bargraph is counted by $\llcorner+\lrcorner+\ulcorner+\urcorner=2(\llcorner+\ulcorner)$, the generating function
$$
B_{\llcorner,\ulcorner}(u^2,u^2,x,y)=\frac {1-x-y+(1-2u^2)xy-
\sqrt { (1-y ) ( 1-2x-y+(2-4{u}^{2})xy+{x}^{2}-
(1-2u^2)^2{x}^{2}y ) }}{2{u}^{2}x}
$$
enumerates bargraphs with respect to the total number of corners, marked by $u$.
Viewing the bargraph as a polyomino, the two additional bottom corners can be easily included by multiplying the generating function by $u^2$.
In~\cite{BBK_walls}, the authors find the distribution of the number of ascents (called {\em walls} in that paper) of any fixed length $r$.
\subsection{Number of horizontal segments} \label{hs}
Define a {\em horizontal segment} to be a maximal sequence of consecutive $H$ steps, and
let $\hs$ be the statistic ``number of horizontal segments,'' both on $\mathcal B$ and on $\mathcal M$.
Horizontal segments of length at least 2 are considered in~\cite{BBK_levels}, where they are called {\em levels}.
In any bargraph, the number of horizontal segments equals half of the total number of corners.
It follows that $B_{\hs}=B_{\llcorner,\ulcorner}(t,t,x,y)$, and so
\begin{equation}
\label{eq:Bhs}
B_{\hs}=\frac {1-x-y+xy-2txy-\sqrt{(1-y)(1-2x-y+{x}^{2}+(2-4t)xy-(1-2t)^2{x}^{2}y)}}{2tx}.
\end{equation}
Alternatively, a direct derivation can be given as follows.
Let $\mathcal M^H$ be the subset of $\mathcal M$ consisting of paths that start with an $H$. For $A\in\mathcal M$, we have that
$\hs(HA)$ equals $\hs(A)$ if $A\in\mathcal M^H$ and it equals $\hs(A)+1$ otherwise.
Via the usual decomposition, we have
\begin{align*}
M^H_{\hs}&=x[t(M_{\hs}-M^H_{\hs})+M^H_{\hs}],\\
M_{\hs}&=1+M^H_{\hs}+y(M_{\hs}-1)(1+M^H_{\hs}).
\end{align*}
Eliminating $M^H_{\hs}$ from these equations and using that $B_{\hs}=y(M_{\hs}-1)$, we recover~\eqref{eq:Bhs}.
\subsection{Number of horizontal segments of length 1} \label{uhs}
Let $\uhs$ denote the statistic ``number of horizontal segments of length 1,'' also called unit horizontal segments, both on $\mathcal B$ and on $\mathcal M$.
Let $\mathcal M^{(1)}$ be the subset of $\mathcal M$ consisting of paths that start with a horizontal segment of length 1 (note that these are the paths that start with $HU$, plus the length-one path $H$). Let $\mathcal M^{(2)}$ be the subset of $\mathcal M$ consisting of paths that start with $HH$. Note that the paths in $\mathcal M$ but not in $\mathcal M^{(1)}$ or $\mathcal M^{(2)}$ are precisely those that are empty or start with a $U$.
For $A\in\mathcal M$,
$$\uhs(HA)=\begin{cases}
\uhs(A)-1 & \text{if $A\in\mathcal M^{(1)}$,} \\
\uhs(A) & \text{if $A\in\mathcal M^{(2)}$,} \\
\uhs(A)+1 & \text{otherwise.}
\end{cases}$$
Using the standard decomposition, it follows that
\begin{align*}
M^{(1)}_{\uhs}&=tx(M_{\uhs}-M^{(1)}_{\uhs}-M^{(2)}_{\uhs}),\\
M^{(2)}_{\uhs}&=x\left(\frac{M^{(1)}_{\uhs}}{t}+M^{(2)}_{\uhs}\right),\\
M_{\uhs}&=1+M^{(1)}_{\uhs}+M^{(2)}_{\uhs}+y(M_{\uhs}-1)(1+M^{(1)}_{\uhs}+M^{(2)}_{\uhs}).
\end{align*}
Solving this system of equations, we get an expression for $M_{\uhs}$. Using that $B_{\uhs}=y(M_{\uhs}-1)$, we obtain the equation
$$x(t + x - tx)B_{\uhs}^2 -(1-x-y+xy- 2xy(t + x - tx))B_{\uhs} + xy(t + x -tx) = 0.$$
\subsection{Length and abscissa of the first descent} \label{fd}
Let $\fd$ denote the length of the first descent in a bargraph. The corresponding statistic on $\mathcal M$, via $\Delta$, is the length of the first descent of the cornerless Motzkin path, with the caveat that we have to add one if this first descent occurs at the end of the path. We also denote this statistic by $\fd$.
Let $\mathcal M^I$ be the subset of $\mathcal M$ consisting of Motzkin paths that have the first descent (equivalently, all their $D$ steps) at the end. These correspond to nondecreasing bargraphs, mentioned in Section~\ref{cor}. Clearly,
$$M^I_{\fd}=\frac{tx}{1-x-ty},$$
since such a path is determined by a sequence of $U$ and $H$ steps that ends with an $H$, and only the $U$ steps contribute to $\fd$, with the $t$ in the numerator accounting for the fact that we have to add one when the first descent occurs at the end.
For a path of the form $UADB\in\mathcal M$ (where $A\in\mathcal M$ is nonempty and $B\in\mathcal M$ is either empty or starts with an $H$), we have $\fd(UADB)=\fd(A)$ except when $A\in\mathcal M^I$ and $B$ is empty, in which case $\fd(UADB)=\fd(A)+1$. It follows that
$$M_{\fd}=t+xM_{\fd}+y(M_{\fd}-t+(t-1)M^I_{\fd})+y(M_{\fd}-t)xM.$$
We can now use the known expressions for $M^I_{\fd}$ and $M$ and solve for $M_{\fd}$. Since $B_{\fd}=y(M_{\fd}-t)$, it follows that
\begin{equation}\label{eq:Bfd}B_{\fd}=t(1-x-y)\frac{1 - x - y - xy-\sqrt{(1-y)((1-x)^2 - y(1 + x)^2)}}{2 x (1-x-ty)}.
\end{equation}
If we denote by $a_{n,k}$ the number of bargraphs of semiperimeter $n$ whose first descent has length $k$, so that
$$B_{\fd}(t,z,z)=\sum_{k,n}a_{n,k}t^kz^n,$$
then Equation~\eqref{eq:Bfd} implies that
\begin{equation}\label{eq:Bfd2}(1-z-tz)\sum_{k,n}a_{n,k}t^kz^n=t g(z)
\end{equation}
for some function $g(z)$ that does not depend on $t$.
For any fixed $k,n\ge2$, extracting the coefficient of $t^kz^n$ on both sides of~\eqref{eq:Bfd2} gives
$$a_{n,k}-a_{n-1,k}-a_{n-1,k-1}=0.$$
This simple recurrence can also be proved directly by the following combinatorial argument. Let $A_{n,k}$ denote the set of bargraphs with semiperimeter $n$ and first descent of length $k$, so that $a_{n,k}=|A_{n,k}|$.
Fix $n,k\ge2$. Given a bargraph in $A_{n,k}$, the step before the first descent must be an~$H$. The step before that, which we denote by $s$, must be either an $H$ or a $U$. This splits the set $A_{n,k}$ into two disjoint sets $A^H_{n,k}$ and $A^U_{n,k}$, where the superscript indicates the step $s$.
A bijection between $A^H_{n,k}$ and $A_{n-1,k}$ is obtained by removing the step $s=H$.
A bijection between $A^U_{n,k}$ and $A_{n-1,k-1}$ is obtained by removing the step $s=U$ and a $D$ from the first descent of the path. For example, for $n=6$ and $k=3$,
the bargraph $UUUHHHDDD$ is mapped to $UUUHHDDD$, and $UUUUHDDDHD$ is mapped to $UUUHDDHD$.
It follows that
$$a_{n,k}=|A_{n,k}|=|A^H_{n,k}|+|A^U_{n,k}|=a_{n-1,k}+a_{n-1,k-1}.$$
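The recurrence is easy to test by brute force. The following sketch (ours, not part of the paper) enumerates bargraphs as column-height sequences, using that the semiperimeter equals $\#U+\#H$, and compares both sides for small $n$.

```python
from itertools import product
from collections import Counter

def bargraphs(n):
    """Column-height sequences of bargraphs with semiperimeter n = #U + #H."""
    for m in range(1, n):
        for h in product(range(1, n), repeat=m):
            if m + h[0] + sum(max(b - a, 0) for a, b in zip(h, h[1:])) == n:
                yield h

def fd(h):
    """Length of the first descent: the first height drop, or the final drop."""
    for a, b in zip(h, h[1:]):
        if b < a:
            return a - b
    return h[-1]

counts = {n: Counter(fd(h) for h in bargraphs(n)) for n in range(2, 9)}
for n in range(4, 9):
    for k in range(2, n):
        assert counts[n][k] == counts[n - 1][k] + counts[n - 1][k - 1]
print("a(n,k) = a(n-1,k) + a(n-1,k-1) verified for 4 <= n <= 8, k >= 2")
```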
\medskip
A related statistic that one can consider is the $x$-coordinate of the first descent in a bargraph. Let $\xfd$ denote this statistic on $\mathcal B$, and also the corresponding statistic on $\mathcal M$ under $\Delta$, which is the number of $H$ steps before the first descent. Then
$$M_{\xfd}=1+txM_{\xfd}+y(M_{\xfd}-1)+xy(M_{\xfd}-1)M,$$
and
$$B_{\xfd}=y(M_{\xfd}-1)=
ty\,\frac{1+x-y-2tx-xy-\sqrt{(1-y)(1-2x-y-2xy+x^2-x^2y)}}{2(1-t-(1-t)y+(t^2-t)x+txy)}.$$
\subsection{Number of columns of height $h$ and number of initial columns of height $1$} \label{ch}
Let $\uc$ denote the number of columns of height 1 in bargraphs. The corresponding statistic on $\mathcal M$, via $\Delta$, is the number of $H$ steps on the $x$-axis, which we also denote by $\uc$.
Using the usual decomposition,
$$M_{\uc}=(1+y(M-1))(1+txM_{\uc})$$
and
$$B_{\uc}=y(M_{\uc}-1)=y\,\frac {
1-x-y-xy+2t(1-t){x}^{2}(1-y)- \sqrt { (1-y) (1-2x-y-2xy+{x}^{2}-{x}^{2}y ) } }{2x ( 1-t+ty+({t}^{2}-t)x(1-y) )}.$$
We can now recursively count the number of columns of height $h$ in bargraphs, which we denote by $\ch_h$, for any $h\ge2$. They correspond to $H$ steps at height $h-1$ in cornerless Motzkin paths. From the decomposition,
$$M_{\ch_h}=(1+y(M_{\ch_{h-1}}-1))(1+xM_{\ch_h}),$$
and so
$$M_{\ch_h}=\frac{1}{\dfrac{1}{1+y(M_{\ch_{h-1}}-1)}-x},$$
from where one can find $B_{\ch_h}=y(M_{\ch_h}-1)$. The above recurrence, in terms of bargraphs, can be written as
$$B_{\ch_h}= \frac{y}{x} \left(\frac{1}{1-x-xB_{\ch_{h-1}}} - 1 - x\right).$$
A related statistic is the number of initial columns of height 1, which we denote by $\iuc$. The corresponding statistic on $\mathcal M$ is the number of initial $H$ steps. We now have
$$M_{\iuc}=1+txM_{\iuc}+y(M-1)+xy(M-1)M$$
and
\begin{equation}\label{eq:Biuc}
B_{\iuc}=y(M_{\iuc}-1)=
\frac {1-2x-y+{x}^{2}+(2t-1){x}^{2}y
-(1-x)\sqrt {(1-y) [(1-x)^2-y(1+x)^2]}}{2x (1-tx ) }.
\end{equation}
\subsection{Least column height} \label{lch}
Let $\lch$ be the statistic ``least column height'' (that is, height of a shortest column) in a bargraph. The corresponding statistic on $\mathcal M$, which we also denote by $\lch$, is the least height of a horizontal step.
From the decomposition in the first line of Figure~\ref{fig:decomposition},
$$M_{\lch}=1+xM+ty(M_{\lch}-1)+xy(M-1)M,$$
since in the second and fourth cases there is a step at height 0, and in the third case the least height increases by one.
Solving for $M_{\lch}$ and using that $B_{\lch}=ty(M_{\lch}-1)$, we get
\begin{equation}\label{eq:Blch}
B_{\lch}=t(1-y)\,\frac {1-x-y-xy
-\sqrt {(1-y)[(1-x)^2-y(1+x)^2]}}{2x ( 1-ty)}.
\end{equation}
\subsection{Width of leftmost horizontal segment} \label{lhs}
Let $\lhs$ be the statistic ``width of leftmost horizontal segment,'' both in bargraphs and cornerless Motzkin paths. From the usual decomposition,
$$M_{\lhs}=1+\frac{tx(1-x)}{1-tx}M+y(M_{\lhs}-1)+xy(M_{\lhs}-1)M.$$
For the second summand, note that paths of the form $HA$, where $A\in\mathcal M$ starts with $H^\ell$ (but not $H^{\ell+1}$), contribute $t^{\ell+1}x^{\ell+1}(M-xM)$; summing over $\ell\ge0$ we get $\frac{tx(1-x)}{1-tx}M$.
Solving for $M_{\lhs}$ and using that $B_{\lhs}=y(M_{\lhs}-1)$, we get
\begin{equation}\label{eq:Blhs}
B_{\lhs}=t(1-x)\,\frac {1-x-y-xy
-\sqrt {(1-y)[(1-x)^2-y(1+x)^2]}}{2x ( 1-tx)}.
\end{equation}
By comparing~\eqref{eq:Blch} and~\eqref{eq:Blhs}, we see that
$(1 - x)(1 - ty)B_{\lch} = (1 - y)(1 - tx)B_{\lhs}$. In particular,
$$B_{\lch}(t,z,z)=B_{\lhs}(t,z,z),$$
that is, the statistics $\lch$ and $\lhs$ are equidistributed on bargraphs of fixed semiperimeter.
Let us describe a direct combinatorial proof of this equality by showing that both
$\{A\in\mathcal B_n:\lch(A)>h\}$ and $\{A\in\mathcal B_n:\lhs(A)>h\}$ are in bijection with $\mathcal B_{n-h}$.
The bijection $\{A\in\mathcal B_n:\lch(A)>h\}\to\mathcal B_{n-h}$ is obtained by deleting the initial $U^h$ and the final $D^h$.
The bijection $\{A\in\mathcal B_n:\lhs(A)>h\}\to\mathcal B_{n-h}$ is obtained by deleting the initial $H^h$ from the leftmost horizontal segment. Examples of these bijections for all bargraphs with $n=6$ and $h=2$ are given in Figure~\ref{fig:lch_lhs}.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.45]
\draw(0,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt);
\draw(3,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)\H-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt)\S;
\draw(7,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt);
\draw(11,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt)\S;
\draw(15,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt);
\draw[dotted] (0,0)--(1,0);
\draw[dotted] (3,0)--(5,0);
\draw[dotted] (7,0)--(9,0);
\draw[dotted] (11,0)--(13,0);
\draw[dotted] (15,0)--(18,0);
\draw[very thin,dashed] (-0.5,2)--(18.5,2);
\draw (0.5,-1) node{$\downarrow$};
\draw (4,-1) node{$\downarrow$};
\draw (8,-1) node{$\downarrow$};
\draw (12,-1) node{$\downarrow$};
\draw (16.5,-1) node{$\downarrow$};
\draw(0,-5) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt);
\draw(3,-5) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)\H-- ++(0,-1) circle(1.2pt)\S;
\draw(7,-5) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt);
\draw(11,-5) circle(1.2pt) -- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S;
\draw(15,-5) circle(1.2pt) -- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt);
\draw[dotted] (0,-5)--(1,-5);
\draw[dotted] (3,-5)--(5,-5);
\draw[dotted] (7,-5)--(9,-5);
\draw[dotted] (11,-5)--(13,-5);
\draw[dotted] (15,-5)--(18,-5);
\begin{scope}[shift={(24,-10)}]
\draw(0,15) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt);
\draw(0,11) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)\H-- ++(0,-1) circle(1.2pt)\S;
\draw(0,7) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt);
\draw(0,3) circle(1.2pt) -- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S;
\draw(0,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt);
\draw[dotted] (0,15)--(3,15);
\draw[dotted] (0,11)--(4,11);
\draw[dotted] (0,7)--(4,7);
\draw[dotted] (0,3)--(4,3);
\draw[dotted] (0,0)--(5,0);
\draw[very thin,dashed] (2,18.5)--(2,-0.5);
\draw (5.5,16) node{$\rightarrow$};
\draw (5.5,12) node{$\rightarrow$};
\draw (5.5,8) node{$\rightarrow$};
\draw (5.5,4) node{$\rightarrow$};
\draw (5.5,0.5) node{$\rightarrow$};
\draw(7,15) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt);
\draw(7,11) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)\H-- ++(0,-1) circle(1.2pt)\S;
\draw(7,7) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt);
\draw(7,3) circle(1.2pt) -- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S;
\draw(7,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt);
\draw[dotted] (7,15)--(8,15);
\draw[dotted] (7,11)--(9,11);
\draw[dotted] (7,7)--(9,7);
\draw[dotted] (7,3)--(9,3);
\draw[dotted] (7,0)--(10,0);
\end{scope}
\end{tikzpicture}
\caption{The bijections $\{A\in\mathcal B_n:\lch(A)>h\}\to\mathcal B_{n-h}$ (left) and $\{A\in\mathcal B_n:\lhs(A)>h\}\to\mathcal B_{n-h}$ (right) for $n=6$ and $h=2$.}\label{fig:lch_lhs}
\end{figure}
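Both bijections can be checked numerically. The sketch below (ours) works on column-height sequences: the least column height is the minimum of the sequence, and the leftmost horizontal segment is the initial run of equal heights.

```python
from itertools import product
from collections import Counter

def bargraphs(n):
    """Column-height sequences of bargraphs with semiperimeter n = #U + #H."""
    for m in range(1, n):
        for h in product(range(1, n), repeat=m):
            if m + h[0] + sum(max(b - a, 0) for a, b in zip(h, h[1:])) == n:
                yield h

def lch(h):                     # least column height
    return min(h)

def lhs_width(h):               # width of the leftmost horizontal segment
    w = 1
    while w < len(h) and h[w] == h[0]:
        w += 1
    return w

sizes = {n: len(list(bargraphs(n))) for n in range(2, 9)}
for n in range(2, 9):
    graphs = list(bargraphs(n))
    # equidistribution of lch and lhs on bargraphs of semiperimeter n
    assert Counter(map(lch, graphs)) == Counter(map(lhs_width, graphs))
    # both tail sets have the cardinality of B_{n-h}
    for ht in range(1, n - 1):
        target = sizes[n - ht]
        assert sum(1 for g in graphs if lch(g) > ht) == target
        assert sum(1 for g in graphs if lhs_width(g) > ht) == target
print("lch and lhs equidistributed, tail sets match B_{n-h}, for n <= 8")
```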
Another remarkable equidistribution phenomenon arises when comparing the statistics $\lhs$ and $\iuc$. In terms of generating functions, it can be checked by comparing~\eqref{eq:Biuc} and~\eqref{eq:Blhs} that
$$B_{\lhs}-\frac{txy}{1-tx}=t\left(B_{\iuc}-\frac{txy}{1-tx}\right).$$
Combinatorially, this formula states that if we ignore bargraphs consisting of one row (which contribute $\frac{txy}{1-tx}$ to both generating functions), then the distribution of the statistics $\lhs$ and $\iuc+1$ on the remaining bargraphs with a fixed number of $U$ and $H$ steps is the same. In particular, these statistics are equidistributed on bargraphs of fixed semiperimeter (and at least two rows), that is,
$$|\{A\in\mathcal B_n:\lhs(A)=h\}|=|\{A\in\mathcal B_n:\iuc(A)=h-1\}|$$ for all $1\le h\le n-2$.
Next we give a combinatorial proof of this fact, by providing a bijection $\phi$ from
the set of bargraphs with at least two rows to itself that preserves the number of $U$s and the number of $H$s, and such that $\iuc(\phi(A))=\lhs(A)-1$.
Given a bargraph $A$ with at least two rows, let $h=\lhs(A)$. Consider two cases, as illustrated in Figure~\ref{fig:phi}:
\begin{itemize}
\item If the first column has height $1$, then $A$ starts with $UH^hU^iH$ (for some $i\ge1$).
Replace this initial piece with $UH^{h-1}U^iHH$, and let $\phi(A)$ be the resulting bargraph. Viewing the bargraph as a polyomino, the height of column $h$ has been increased to match the height of column $h+1$.
\item If the first column has height at least $2$, then $A$ starts with $U^{i+1}H^hV$ (for some $i\ge1$), with $V\in\{U,D\}$. Replace this initial piece with $UH^{h-1}U^iHV$, and let $\phi(A)$ be the resulting bargraph. In terms of the polyomino, the first $h-1$ columns have been shortened to height~$1$.
\end{itemize}
In both cases, it is clear that $\iuc(\phi(A))=h-1$. Note that in the second case, if $h=1$, then $\phi(A)=A$.
To see that $\phi$ is a bijection, observe that every bargraph $A'$ with at least two rows starts either with $UH^{h-1}U^iHH$ or with $UH^{h-1}U^iHV$ (where $V\in\{U,D\}$), for some $i,h\ge1$, where $\iuc(A')=h-1$. Indeed, these cases are determined by whether the first horizontal segment that is not at height 1 has length 1 or at least 2, respectively. These cases correspond to the two bullets above, and so $\phi^{-1}(A')$ is obtained by replacing $UH^{h-1}U^iHH$ with $UH^hU^iH$, or $UH^{h-1}U^iHV$ with $U^{i+1}H^hV$, respectively.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.5]
\draw(0,0) circle(1.2pt) -- ++(0,1) circle(1.2pt);
\draw[thick,red](0,1) circle(1.2pt) -- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)\H;
\draw(4,1) -- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt);
\draw[red] (2,1) node[above] {$h$};
\draw(0,0.5) node[right] {$1$};
\draw[thin,dotted](0,0)--(5,0);
\draw(2,-1) node {$UH^hU^iH$};
\draw(6,0) node {$\rightarrow$};
\begin{scope}[shift={(8,0)}]
\draw(0,0) circle(1.2pt) -- ++(0,1) circle(1.2pt);
\draw[thick,blue](0,1) circle(1.2pt) -- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt);
\draw[blue] (1.5,1) node[above] {$h-1$};
\draw(3,1) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)\H;
\draw[thin,dotted](0,0)--(5,0);
\draw(2,-1) node {$UH^{h-1}U^iHH$};
\end{scope}
\begin{scope}[shift={(0,-7)}]
\draw(0,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt);
\draw[thick,red](0,3) circle(1.2pt) -- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt)\H;
\draw(4,3.3)--(4,2.7);
\draw[red] (2,3) node[above] {$h$};
\draw(0,1.5) node[right] {$\ge2$};
\draw[thin,dotted](0,0)--(5,0);
\draw(2,-1) node {$U^{i+1}H^hV$};
\draw(6,0) node {$\rightarrow$};
\begin{scope}[shift={(8,0)}]
\draw(0,0) circle(1.2pt) -- ++(0,1) circle(1.2pt);
\draw[thick,blue](0,1) circle(1.2pt) -- ++(1,0) circle(1.2pt)\H-- ++(1,0) circle(1.2pt);
\draw(4,3.3)--(4,2.7);
\draw[blue] (1.5,1) node[above] {$h-1$};
\draw(3,1) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt);
\draw[thin,dotted](0,0)--(5,0);
\draw(2,-1) node {$UH^{h-1}U^iHV$};
\end{scope}
\end{scope}
\end{tikzpicture}
\caption{The bijection $\phi$.}\label{fig:phi}
\end{figure}
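One can also verify $\phi$ by brute force. The sketch below (ours) implements $\phi$ on column-height sequences, where the leftmost horizontal segment is the initial run of equal heights, and checks that $\phi$ permutes the bargraphs with at least two rows while mapping $\lhs$ to $\iuc+1$.

```python
from itertools import product

def bargraphs(n):
    """Column-height sequences of bargraphs with semiperimeter n = #U + #H."""
    for m in range(1, n):
        for h in product(range(1, n), repeat=m):
            if m + h[0] + sum(max(b - a, 0) for a, b in zip(h, h[1:])) == n:
                yield h

def lhs_width(h):               # width of the leftmost horizontal segment
    w = 1
    while w < len(h) and h[w] == h[0]:
        w += 1
    return w

def iuc(h):                     # number of initial columns of height 1
    w = 0
    while w < len(h) and h[w] == 1:
        w += 1
    return w

def phi(h):
    """The bijection phi on column-height sequences (bargraphs with >= 2 rows)."""
    w = lhs_width(h)
    if h[0] == 1:               # first case: column w grows to the height of column w+1
        return (1,) * (w - 1) + (h[w], h[w]) + h[w + 1:]
    return (1,) * (w - 1) + h[w - 1:]   # second case: first w-1 columns shrink to height 1

for n in range(3, 9):
    S = [h for h in bargraphs(n) if max(h) >= 2]      # at least two rows
    images = [phi(h) for h in S]
    assert sorted(images) == sorted(S)                # phi is a bijection of S
    assert all(iuc(phi(h)) == lhs_width(h) - 1 for h in S)
    assert all(len(phi(h)) == len(h) for h in S)      # number of H steps preserved
print("phi verified: iuc(phi(A)) = lhs(A) - 1 on bargraphs of semiperimeter 3..8")
```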
\subsection{Number of occurrences of $UHU$} \label{uhu}
Let $\uhu$ count the number of occurrences of $UHU$ in $\mathcal B$ and $\mathcal M$. From the decomposition, we have
$$M_{\uhu}=[1+y\,(M_{\uhu}-1+(t-1)x(M_{\uhu}-1-xM_{\uhu}))]\,(1+xM_{\uhu}).$$
This is because when the elevated path in the first piece starts with an $HU$, it creates an additional occurrence of $UHU$. Since the generating function with respect to $\uhu$ for paths that start with $H$ is $xM_{\uhu}$, the one for paths that start with $U$ is $M_{\uhu}-1-xM_{\uhu}$, and so the one for paths that start with $HU$ is $x(M_{\uhu}-1-xM_{\uhu})$.
Now, since $B_{\uhu}=y\,(M_{\uhu}-1+(t-1)x(M_{\uhu}-1-xM_{\uhu}))$, we get
that $B_{\uhu}$ satisfies
$$xB_{\uhu}^2 - (1 - x - y - txy)B_{\uhu} + xy = 0.$$
\subsection{Length of the initial staircase $UHUH\dots$} \label{stair}
Let $\stair$ be the statistic that measures the length of the longest initial sequence of the form $UHUH\dots$ in a bargraph.
The corresponding statistic on $\mathcal M$, also denoted $\stair$, is the length of the longest initial sequence of the form $HUHU\dots$.
Using the first line of the decomposition in Figure~\ref{fig:decomposition},
\begin{align*}
\mathcal M&=\epsilon\cup H\mathcal M \cup [U\times(\mathcal M\setminus\epsilon)\times D] \times(\epsilon\cup H\mathcal M)\\
&=\epsilon\cup H \cup HH\mathcal M \cup [HU\times(\mathcal M\setminus\epsilon)\times D]\times (\epsilon\cup H\mathcal M)
\cup [U\times(\mathcal M\setminus\epsilon)\times D] \times(\epsilon\cup H\mathcal M),
\end{align*}
where to obtain the second line we have applied the decomposition again to the $\mathcal M$ in the term $H\mathcal M$. It now follows that
$$M_{\stair}=1+tx+tx^2M+[t^2xy(M_{\stair}-1)](1+xM)+y(M-1)(1+xM).$$
Solving for $M_{\stair}$ and using that $B_{\stair}=ty(M_{\stair}-1)$, we get
$$B_{\stair}=t\,\frac { R -(1-x+tx^2-t^2xy)\sqrt{S}}{2x ( 1-{t}^{2}x+{t}^{2}{x}^
{2}-{t}^{2}xy+({t}^{4}-t^2){x}^{2}y) },$$
where $R=1-2x-y+(1+t)x^2-{t}^{2}xy+({t}^{2}+t-1){x}^{2}y+{t}^{2}x{y}^{2}-t{x}^{3}+(t-2t^3){x}^{3}y+{t}^{2}{x}^{2}{y}^{2}$ and $S=(1-y)(1-2x-y+x^2-2xy-x^2y).$
\subsection{Number of odd-height and even-height columns} \label{oheh}
Let $\oh$ and $\eh$ denote the statistics that count the number of columns of odd and even height in bargraphs, respectively. The corresponding statistics on $\mathcal M$, which we also denote by $\oh$ and $\eh$, are the number of $H$ steps at even and odd height, respectively. Let $B_{\oh,\eh}(t,s,x,y)$ and $M_{\oh,\eh}(t,s,x,y)$ be the generating functions on $\mathcal B$ and $\mathcal M$ where $t$ marks $\oh$ and $s$ marks $\eh$. Let $M_{\eh,\oh}(t,s,x,y)=M_{\oh,\eh}(s,t,x,y)$.
From the usual decomposition,
\begin{align*}
M_{\oh,\eh}&=(1+y(M_{\eh,\oh}-1))(1+txM_{\oh,\eh}),\\
M_{\eh,\oh}&=(1+y(M_{\oh,\eh}-1))(1+sxM_{\eh,\oh}).
\end{align*}
The second equation can be obtained from the first one by interchanging the variables $t$ and $s$.
Solving for $M_{\oh,\eh}$ and using that $B_{\oh,\eh}=y(M_{\oh,\eh}-1)$,
we get
$$B_{\oh,\eh}=\frac {
(1-tx)(1-sy)-(1+tx)(1+sy){y}^{2}
-\sqrt {S}
}{2y ( s+t(1-s)x+tsxy) },
$$
where
$$S=(1-y) ( 1-tx+y+txy )
(1-tx-2sy+2t(s-1)xy+(s^2-1){y}^{2}-t({s}^{2}+1)x{y}^{2}-2s{y}^{3}+2ts(s-1)x{y}^{3}-{s}^{2}{y}^{4}-{s}^{2}tx{y}^{4}).$$
\subsection{Area}\label{area}
The enumeration of bargraphs according to semiperimeter and area was obtained in~\cite{BMB,BBK_parameters} by using a construction that adds one column at a time. Here we show that our decomposition of cornerless Motzkin paths can be used to derive a continued fraction. By $\area$ of a bargraph we mean the area of the region under the bargraph and above the $x$-axis. On cornerless Motzkin paths, the statistic $\area$ will refer to the area of the region directly below the $H$ steps of the path and above the $x$-axis (that is, the regions below $U$ and $D$ steps are not included). With this definition, we have
\begin{equation}\label{eq:Barea} B_{\area}(t,x,y)=y(M_{\area}(t,tx,y)-1),
\end{equation}
since $\Delta$ lifts all the $H$ steps, and so each $H$ step contributes an additional unit to the area.
From the second line of the decomposition in Figure~\ref{fig:decomposition},
$$M_{\area}(t,x,y)=(1+y(M_{\area}(t,tx,y)-1))(1+xM_{\area}(t,x,y)).$$
Solving for $M_{\area}(t,x,y)$, we get
$$M_{\area}(t,x,y)=\frac{1}{\dfrac{1}{1+y(M_{\area}(t,tx,y)-1)}-x}.$$
Iterating this formula to get a continued fraction and using~\eqref{eq:Barea}, we get
$$B_{\area}=-y+\frac{y}{-tx+\dfrac{1}{1-y+\dfrac{y}{-t^2x+\dfrac{1}{1-y+\dfrac{y}{-t^3x+\dfrac{1}{\ddots}}}}}}.$$
Note that truncating the above continued fraction by stopping at $-t^nx+1$ produces the correct generating polynomial for bargraphs with up to $n$ vertical steps, since the contribution of the truncated terms contains a factor $y^{n+1}$.
\section{Subsets of bargraphs}\label{sec:subsets}
In this section we find generating functions for three subsets of bargraphs: symmetric, weakly alternating, and strictly alternating.
\subsection{Symmetric bargraphs}\label{sec:sym}
{\em Symmetric bargraphs} are those that are invariant under reflection along a vertical line. Let $\mathcal B^S$ denote the set of symmetric bargraphs. These bargraphs correspond via the bijection $\Delta$ to paths in $\mathcal M$ that are symmetric, which we denote by $\mathcal M^S$. For convenience, the set $\mathcal M^S$ does not include the empty path. We use the term
{\it Motzkin path prefix} to refer to a path with steps $U$, $H$ and $D$ starting at the origin and not going under the $x$-axis, but ending at any height. Let $\mathcal P_h$ denote the set of Motzkin path prefixes without peaks and valleys ending at height $h$, and let $P_h=P_h(x,y)$ be the corresponding generating function, where $x$ marks the number of $H$s and $y$ marks the number of $U$s, as usual. Clearly, $P_0=M$. For $h\ge1$, we obtain
$$P_h=y P_{h-1}+x P_h + xy (M-1) P_h$$
by separating the cases when the path starts with a $U$ and does not return to the $x$-axis,
when it starts with an $H$, and when it starts with a $U$ and returns to the $x$-axis. In the latter case, the first return gives a decomposition of the form $UADHB$, where $A\in\mathcal M$ is non-empty and $B\in\mathcal P_h$. Solving for $P_h$, we get
$$P_h=\frac{yP_{h-1}}{1-x-xy(M-1)},$$
and iterating over $h$,
$$P_h=\frac{y^h M}{\left(1-x-xy(M-1)\right)^h}$$
for all $h\ge0$.
In a path in $\mathcal M^S$ with an odd number of steps, the middle step must be an $H$.
In a path in $\mathcal M^S$ with an even number of steps, the two steps in the middle must be
$HH$, since otherwise it would contain a peak or a valley. It follows that every path in $\mathcal M^S$ is of the form $AHA'$ or $AHHA'$, where $A\in\mathcal P_h$ for some $h\ge0$, and $A'$ is the vertical reflection of $A$.
In that case,
the number of horizontal steps in $AHA'$ or $AHHA'$ equals
twice the number of horizontal steps in A plus $1$ or $2$, respectively,
while the number of up steps in $AHA'$ or $AHHA'$ equals
the number of up steps in $A$ plus the number of down steps in $A$, i.e. twice the
number of up steps in $A$ minus $h$. It follows that
$$M^S(x,y)=\sum_{h\ge0}(x+x^2)\frac{P_h(x^2,y^2)}{y^h}=\sum_{h\ge0}
\frac{(x+x^2) y^h M(x^2,y^2)}{\left(1-x^2-x^2y^2(M(x^2,y^2)-1)\right)^h}
=\frac{(x+x^2)M(x^2,y^2)}{1-\frac{y}{1-x^2-x^2y^2(M(x^2,y^2)-1)}}.
$$
The generating function for symmetric bargraphs is then $B^S=y M^S(x,y)$, and we get the expression
$$B^S=(1+x)\,
\frac { \sqrt { (1-y^2) [(1-x^2)^2- y^2(1+x^2)^2]}-1+{x}^{2}+{y}^{2}+2{x}^{2}y+{x}^{2}{y}^{2} }{ 2x\left(1-y-{x}^{2}-{x}^{2}y \right)}.$$
\subsection{Alternating bargraphs}\label{sec:alt}
{\em Weakly alternating bargraphs} are those where ascents and descents alternate. Equivalently, they are of the form
$$U^{i_1}H^{j_1}D^{k_1}H^{\ell_1}U^{i_2}H^{j_2}D^{k_2}H^{\ell_2}\dots U^{i_m}H^{j_m}D^{k_m}$$
for some $m\ge1$ and $i_r,j_r,k_r,\ell_r\ge1$ for all $r$.
{\em Strictly alternating bargraphs} (called {\em alternating bargraphs} in~\cite{PB,R_thesis}), are those of the form
$$U^{i_1}HD^{k_1}HU^{i_2}HD^{k_2}H\dots U^{i_m}HD^{k_m}$$
for some $m\ge1$ and $i_r,k_r\ge1$ for all $r$.
Here we derive generating functions for these sets of bargraphs by using our decomposition of cornerless Motzkin paths.
Let $\mathcal B^{\rm WA}$ be the set of weakly alternating bargraphs, and let $\mathcal M^{\rm WA}$ denote the set of alternating paths in $\mathcal M$, which we define as those whose ascents and descents alternate. Adapting the decomposition in the second line of Figure~\ref{fig:decomposition} to alternating paths, we get
\begin{equation}\label{eq:decompMA}
\mathcal M^{\rm WA}=(\epsilon\cup U\mathcal M^\Lambda D)\times(\epsilon\cup H\mathcal M^{\rm WA}),
\end{equation}
where $\mathcal M^\Lambda$ is the subset of $\mathcal M^{\rm WA}$ consisting of nonempty paths that do not start or end with an $H$, or consist only of $H$s.
Since $xM^{\rm WA}$ is the generating function for paths in $\mathcal M^{\rm WA}$ that start (respectively, end) with an $H$, and
$x+x^2M^{\rm WA}$ is the generating function for paths in $\mathcal M^{\rm WA}$ that start and end with an $H$, we have, by inclusion-exclusion, that
$$M^{\rm WA}-xM^{\rm WA}-xM^{\rm WA}+(x+x^2M^{\rm WA})=(1-x)^2M^{\rm WA}+x$$
is the generating function for paths that neither start nor end with an $H$. Removing the empty path and adding paths consisting only of $H$s, it follows that
$$M^\Lambda=(1-x)^2M^{\rm WA}+x-1+\frac{x}{1-x}.$$
By Equation~\eqref{eq:decompMA}, we have
$$M^{\rm WA}=(1+y M^\Lambda)(1+xM^{\rm WA}),$$
which we can solve to obtain $M^{\rm WA}$ and $M^\Lambda$. Finally, using that $B^{\rm WA}=yM^\Lambda$, we get
$$B^{\rm WA}=\frac {1-2x-y+2xy+{x}^{2}
-\sqrt {\left((1 - x)^2 - y\right)\left((1 - x)^2 - y(1 - 2x)^2\right)}
}
{2x(1-x)}.$$
To obtain the generating function $B^{\rm SA}$ for the set $\mathcal B^{\rm SA}$ of strictly alternating bargraphs, we note that weakly alternating bargraphs can be obtained uniquely from strictly alternating bargraphs by replacing each $H$ with an arbitrary non-empty sequence of $H$ steps. Therefore,
$B^{\rm WA}(x,y)=B^{\rm SA}(\frac{x}{1-x},y)$, or equivalently, $B^{\rm SA}(x,y)=B^{\rm WA}(\frac{x}{1+x},y)$, giving
\begin{equation}\label{eq:BSA}
B^{\rm SA}=\frac {1-y+x^2y-\sqrt {1-2y+y^2-2x^2y-2x^2y^2+x^4y^2}
}{2x}.
\end{equation}
Interestingly, this generating function happens to be closely related to the generating function $K$ of Motzkin paths with no occurrences of $UD$, $UU$ and $DD$, where $x$ marks the number of $U$ and $D$ steps, and $y$ marks the number of $H$ steps. Indeed, letting $\mathcal{K}$ be the set of such Motzkin paths, it is clear that every nonempty path in $\mathcal{K}$ can be decomposed uniquely either as $HA$ or as $UA'DA$, where $A\in\mathcal{K}$ is arbitrary, and $A'$ is any path in $\mathcal{K}$ that starts and ends with an $H$. Thus, $K$ satisfies the equation
$$K=1+yK+x^2(y+y^2K)K,$$ where the factor $y+y^2K$ is the contribution of the piece $A'$.
Solving this equation we obtain $$K=\frac {1-y-x^2y-\sqrt {1-2y+y^2-2x^2y-2x^2y^2+x^4y^2}
}{2x^2y^2}=\frac{B^{\rm SA}-xy}{xy^2},$$ where the last equality is obtained by comparing with Equation~\eqref{eq:BSA}. The generating function for these paths by semiperimeter, $K(z,z)$, gives sequence A023432 in~\cite{OEIS}.
Next we give a recursive bijection $f:\mathcal B^{\rm SA}\setminus\{UHD\}\to\mathcal{K}$ that maps the statistics $\#H$ (i.e., number of $H$ steps) and $\#U$ in bargraphs to the statistics $\#U+\#D+1$ and $\#H+2$ in Motzkin paths, respectively. In particular, the bijection takes bargraphs of semiperimeter $n$ (for $n\ge3$) to paths with $n-3$ steps.
The base case consists of bargraphs with only one $H$ step, for which we define $f(U^aHD^a)=H^{a-2}$, where $a\ge2$.
Every bargraph $G\in\mathcal B^{\rm SA}$ with at least two $H$ steps can be uniquely decomposed as
\begin{equation}\label{eq:decompG} G=U^{a} G_1 H G_2 D^{a},
\end{equation} where $a\ge1$, $G_1\in\mathcal B^{\rm SA}$ and $UG_2D\in\mathcal B^{\rm SA}$.
Note that $a$ is the height of the lowest $H$ step in $G$, and that the $H$ that appears in the decomposition~\eqref{eq:decompG} is the leftmost $H$ step at height $a$. For such $G$, let
$$f(G)=H^{a-1} U\, f(UG_1D)\, HD\, f(UG_2D).$$
Note that $f(UG_1D)$ is either empty (when $G_1=UHD$) or it starts with an $H$. By induction, the path $f(G)$ contains no $UD$, $UU$ or $DD$. Figure~\ref{fig:f} shows an example of this construction.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=0.5]
\fill[cyan] (0,2)--(0,6)--(1,6)--(1,4)--(2,4)--(2,5)--(3,5)--(3,2)--(0,2);
\draw (3.5,3) node[below] {$H$};
\fill[pink] (4,2)--(4,6)--(5,6)--(5,5)--(6,5)--(6,6)--(7,6)--(7,3)--(8,3)--(8,5)--(9,5)--(9,2)--(4,2);
\draw(0,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)\N-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(0,1) circle(1.2pt)\N-- ++(1,0) circle(1.2pt)-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt)\S-- ++(0,-1) circle(1.2pt);
\draw[red](0,0) circle(1.2pt) -- ++(0,1) circle(1.2pt)\N; \draw[red](9,2) circle(1.2pt) -- ++(0,-1) circle(1.2pt)\S;
\draw[thin,dotted](0,0)--(9,0);
\draw(11,1) node {$\rightarrow$};
\draw(12,0) circle (1.2pt) -- ++(1,0) circle(1.2pt)\H-- ++(1,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(1,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(1,-1) circle(1.2pt)-- ++(1,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(1,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(1,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt);
\draw[cyan,thick](15,1) circle (1.2pt) -- ++(1,0) circle(1.2pt)-- ++(1,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,-1) circle(1.2pt);
\draw[pink,thick](22,0) circle (1.2pt) -- ++(1,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)\H-- ++(1,1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(1,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt)-- ++(1,-1) circle(1.2pt)-- ++(1,0) circle(1.2pt);
\draw[red](12,0) circle (1.2pt) -- ++(1,0) circle(1.2pt)\H;
\draw[thin,dotted](12,0)--(31,0);
\end{tikzpicture}
\caption{A strictly alternating bargraph and the corresponding Motzkin path in $\mathcal{K}$ obtained by applying~$f$.}\label{fig:f}
\end{figure}
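The map $f$ is straightforward to implement and test on words. The sketch below (ours) enumerates strictly alternating bargraphs via their column heights, applies $f$ recursively, and checks the avoided factors, the statistic correspondences, and injectivity.

```python
from itertools import product

def bargraphs(n):
    """Column-height sequences of bargraphs with semiperimeter n = #U + #H."""
    for m in range(1, n):
        for h in product(range(1, n), repeat=m):
            if m + h[0] + sum(max(b - a, 0) for a, b in zip(h, h[1:])) == n:
                yield h

def strictly_alternating(n):
    """Heights h1 > h2 < h3 > ..., of odd length, with strict alternation."""
    for h in bargraphs(n):
        if len(h) % 2 == 1 and all(
                h[i] > h[i + 1] if i % 2 == 0 else h[i] < h[i + 1]
                for i in range(len(h) - 1)):
            yield h

def word(h):
    """Bargraph word over U, H, D from its column heights."""
    w, cur = [], 0
    for c in h:
        w.append("U" * (c - cur) if c > cur else "D" * (cur - c))
        w.append("H")
        cur = c
    return "".join(w) + "D" * cur

STEP = {"U": 1, "D": -1, "H": 0}

def f(w):
    if w.count("H") == 1:                     # base case: U^a H D^a -> H^(a-2)
        return "H" * ((len(w) - 1) // 2 - 2)
    ht, heights = 0, []
    for s in w:                               # heights of the H steps
        ht += STEP[s]
        if s == "H":
            heights.append(ht)
    a = min(heights)
    core = w[a:len(w) - a]                    # strip U^a and D^a
    ht = 0
    for pos, s in enumerate(core):            # leftmost H at relative height 0
        ht += STEP[s]
        if s == "H" and ht == 0:
            break
    g1, g2 = core[:pos], core[pos + 1:]
    return "H" * (a - 1) + "U" + f("U" + g1 + "D") + "HD" + f("U" + g2 + "D")

for n in range(3, 9):
    images, count = set(), 0
    for h in strictly_alternating(n):
        w = word(h)
        img = f(w)
        count += 1
        assert len(img) == n - 3
        assert all(p not in img for p in ("UU", "UD", "DD"))
        assert img.count("U") + img.count("D") + 1 == w.count("H")
        assert img.count("H") + 2 == w.count("U")
        images.add(img)
    assert len(images) == count               # f is injective
print("f verified on strictly alternating bargraphs of semiperimeter 3..8")
```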
% Source: https://arxiv.org/abs/math/0603362
% Title: Hardy inequalities for simply connected planar domains
% Abstract: In 1986 A. Ancona showed, using the Koebe one-quarter Theorem, that for a simply-connected planar domain the constant in the Hardy inequality with the distance to the boundary is greater than or equal to 1/16. In this paper we consider classes of domains for which there is a stronger version of the Koebe Theorem. This implies better estimates for the constant appearing in the Hardy inequality.
\section{Main result and discussion}
Let $\Omega$ be a domain in $\mathbb R^2$ and let $\Omega^c =\mathbb R^2\setminus \Omega$ be its complement.
For any function $u\in
\plainC{1}_0(\Omega)$ we have:
\begin{equation}\label{hardy:eq}
\int_{\Omega} |\nabla u|^2 d\bx \ge r^2 \int_{\Omega} \frac{|u|^2}{\d(\bx)^2}
d\bx,\quad \d(\bx) = \inf_{\by\in\Omega^c}|\by-\bx|,
\end{equation}
see e.g. E.B. Davies \cite{D1}, \cite{D2}, \cite{D3} and V.G. Maz'ya \cite{M}.
It is well known that for convex domains $r = 1/2$ and it is sharp, see e.g.
\cite{D1}.
However, the sharp constant for non-convex domains
is unknown, although for arbitrary planar simply-connected domains
A. Ancona \cite{A} proved \eqref{hardy:eq} with $r=1/4$.
Some specific examples of non-convex domains were considered
in \cite{D3} (see also J. Tidblom \cite{T2}).
For example, it was found that if $\Omega={\mathbb R}^2\setminus {\mathbb R}_+$,
${\mathbb R}_+ = [0,\infty)$, then $r^2= 0.20538...$.
Our objective is to obtain the Hardy inequality for
simply-connected non-convex domains $\Omega\subset \mathbb R^2$, whose degree of non-convexity
can be "quantified". We introduce two possible "measures" of
non-convexity.
Let ${\sf{\Lambda}}\subset\mathbb C$ be a simply-connected domain such that $0\in\partial{\sf{\Lambda}}$.
Denote by
${\sf{\Lambda}}(w, \phi) = e^{i\phi}{\sf{\Lambda}}+w$
the transformation of ${\sf{\Lambda}}$ by rotation by angle $\phi\in (-\pi, \pi]$
in the positive direction and translation by $w\in\mathbb C$:
\begin{equation}\label{trans:eq}
{\sf{\Lambda}}(w, \phi) = \{ z\in\mathbb C: e^{-i\phi}(z-w)\in {\sf{\Lambda}}\}.
\end{equation}
Denote by $K_{\t}\subset \mathbb C$, \ $\t\in [0, \pi]$ the sector
\begin{equation}\label{sector:eq}
K_{\t} = \{z\in \mathbb C: |\arg z|< \t\}.
\end{equation}
In words, this is an open sector symmetric with respect to the
real axis, with the angle $2\t$ at the vertex.
Here and below we always assume that $\textup{arg}\
\zeta\in (-\pi, \pi]$ for all $\zeta\in\mathbb C$. Our first assumption on
the domain $\Omega$ is the following
\begin{cond}\label{cone:cond} There exists a number $\t\in [0, \pi]$
such that for each $w\in \Omega^c$ one can find a $\phi = \phi_w\in
(-\pi, \pi]$ such that
\begin{equation*}
\Omega\subset K_{\t}(w, \phi_w).
\end{equation*}
\end{cond}
Very loosely speaking, this means that the domain $\Omega$ satisfies the
exterior cone condition. The difference is of course that the cone
is now supposed to be infinite. Because of this, Condition \ref{cone:cond} is equivalent
to the apparently weaker condition stated
for the boundary points $w\in\partial\Omega$ only.
Note also that if Condition
\ref{cone:cond} is satisfied for some $\t$, then automatically
$\t\ge\pi/2$, and the equality $\t = \pi/2$ holds for convex domains.
\begin{thm}\label{main1:thm}
Suppose that the domain $\Omega\subset \mathbb R^2, \Omega\not = \mathbb R^2$
satisfies Condition \ref{cone:cond} with some $\t\in [\pi/2, \pi]$.
Then for any $u\in \plainC{1}_0(\Omega)$ the Hardy inequality
\eqref{hardy:eq} holds with
\begin{equation}\label{r1:eq}
r = \frac{\pi}{4\t}.
\end{equation}
\end{thm}
It is clear that the constant $r$
runs from $1/4$ to $1/2$ when $\t$ varies from $\pi$ to $\pi/2$. For the
domain $\Omega = K_{\t}$ Theorem \ref{main1:thm} does not give the best known result,
found in \cite{D3}, saying that the value of $r$ remains equal to $1/2$ for the range
$\t \in [0, \t_0]$ where $\t_0 \approx 2.428$, which is considerably greater than $\pi/2$.
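As an aside, the dependence of the constant \eqref{r1:eq} on the cone angle is easily tabulated. The following short Python sketch (ours, purely illustrative) evaluates $r = \pi/(4\t)$ and confirms the two limiting values just mentioned.

```python
import math

# Constant r = pi/(4*theta) from Theorem main1:thm for a domain satisfying
# the exterior-cone Condition cone:cond with angle theta in [pi/2, pi].
def hardy_constant(theta):
    if not (math.pi / 2 <= theta <= math.pi):
        raise ValueError("theta must lie in [pi/2, pi]")
    return math.pi / (4 * theta)

print(hardy_constant(math.pi / 2))      # 0.5 : convex case, sharp constant
print(hardy_constant(3 * math.pi / 4))  # 1/3 : an intermediate angle
print(hardy_constant(math.pi))          # 0.25: Ancona's bound recovered
```

The monotone interpolation between $1/4$ and $1/2$ is immediate from the formula.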
To describe another way to characterize the non-convexity, for
$a >0$ and $\t\in [0, \pi)$, introduce the domains
\begin{equation}\label{pan:eq}
\tilde D_{a} = \{z\in\mathbb C: |z|> a\ \& \ \
|\arg z|\not = \pi\},\ \ D_{a, \t} = \tilde D_{a}(-a e^{i\t}, 0).
\end{equation}
The domain $\tilde D_{a}$ is the exterior of the disk of radius $a$
centered at the origin with an infinite cut along the negative real semi-axis.
\begin{cond} \label{outside:cond}
There exist numbers $a >0$ and $\t_0\in [0, \pi)$
such that for any $w\in \partial\Omega$ one can
find a $\phi = \phi_w\in (-\pi, \pi]$ and $\t\in [0, \t_0]$ such that
\begin{equation*}
\Omega\subset D_{a, \t}(w, \phi_w).
\end{equation*}
\end{cond}
Note that any domain satisfying Conditions \ref{cone:cond} or
\ref{outside:cond}, is automatically simply-connected.
The following theorem applies to domains with a finite in-radius
\begin{equation*}
\d_{\textup{\tiny in}} = \sup_{z\in\Omega} \d(z).
\end{equation*}
\begin{thm}\label{main2:thm}
Let $\Omega\subset \mathbb R^2$, $\Omega\not= \mathbb R^2$ be a domain such that
$\d_{\textup{\tiny in}} <\infty$. Suppose that $\Omega$ satisfies Condition
\ref{outside:cond} with some $\t_0\in [0, \pi)$ and that
$$
2\d_{\textup{\tiny in}}\le R_0(a),\ \ R_0(a) = \frac{a}{2(2^{1/2} |\tan(\t_0/2)| + 1)}.
$$
Then the Hardy
inequality \eqref{hardy:eq} holds with
\begin{equation}\label{r2:eq}
r = \frac{1}{2}
\biggl[
1 - 4 \bigl(2^{1/2}|\tan(\t_0/2)| + 1\bigr)\frac{\d_{\textup{in}}}{a}
\biggr].
\end{equation}
\end{thm}
A natural example to which the above theorem applies
is the following horseshoe-shaped domain
$$
{\sf{\Lambda}} = \{z\in\mathbb C: \rho<|z|< \rho + \d,\ |\arg z| < \psi,\
\re z > \rho\cos\psi\},\ \ \psi\in (0, \pi),
$$
with $\rho, \d >0$.
Simple geometric considerations show that
this domain satisfies Condition
\ref{outside:cond} with $a = \rho$ and
\begin{equation*}
\t_0 = \begin{cases}
0,\ \ \psi\le \pi/2,\\
2\psi -\pi,\ \ \psi > \pi/2.
\end{cases}
\end{equation*}
Assuming that $\d \rho^{-1}$ is small, so that $\d_{\textup{in}} = \d$,
we deduce from Theorem \ref{main2:thm} that the Hardy inequality holds
with a constant $r$, which gets close to $1/2$ as $\d \rho^{-1}\to 0$.
On the other hand, if $\d_{\textup{in}}\rho^{-1}$ is large, one could apply Theorem
\ref{main1:thm},
noticing that ${\sf{\Lambda}}$ satisfies Condition \ref{cone:cond}
with $\t = (\pi+\psi)/2$, which gives the
Hardy inequality with constant
$$
r = \frac{1}{2\bigl(1+\psi\pi^{-1}\bigr)},
$$
which is obviously independent of $\d_{\textup{in}}$ or $\rho$.
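The two constants obtained for the horseshoe-shaped domain can be compared numerically. The sketch below (our illustration; the function names are ours, and we follow the text in taking $a=\rho$ and identifying the in-radius with $\d$) implements \eqref{r2:eq} together with its applicability condition, and the $\psi$-dependent constant from Theorem \ref{main1:thm}.

```python
import math

# Hardy constants for the horseshoe-shaped domain with inner radius rho,
# thickness delta (taken as the in-radius) and half-opening angle psi.
def r_theorem2(rho, delta, psi):
    theta0 = 0.0 if psi <= math.pi / 2 else 2 * psi - math.pi
    c = math.sqrt(2) * abs(math.tan(theta0 / 2)) + 1
    if 2 * delta > rho / (2 * c):      # hypothesis 2*delta_in <= R_0(a)
        return None                     # Theorem main2:thm does not apply
    return 0.5 * (1 - 4 * c * delta / rho)

def r_theorem1(psi):
    # Condition cone:cond holds with theta = (pi + psi)/2, so r = pi/(4*theta)
    return 1.0 / (2 * (1 + psi / math.pi))

# Thin horseshoe: the Theorem-2 constant approaches 1/2 as delta/rho -> 0,
# while the Theorem-1 constant is independent of delta and rho.
print(r_theorem2(1.0, 1e-4, math.pi / 3))   # close to 1/2
print(r_theorem1(math.pi / 3))              # 3/8
```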
Let us mention briefly some other recent results for convex domains,
concerning the Hardy inequality
with a remainder term. In the paper \cite{BM} H. Brezis and M. Marcus
showed that if $\Omega\subset{\mathbb R}^d$, $d\ge 2$,
then the inequality could be improved to include the $\plainL2$-norm:
\begin{equation}\label{hardy:eq:corr}
\int_{\Omega} |\nabla u|^2 d\bx \ge \frac{1}{4}\int_{\Omega} \frac{|u|^2}{\d(\bx)^2}\, d\bx
+ C(\Omega) \,\int_{\Omega} |u|^2 d\bx,
\end{equation}
where the constant $C(\Omega)>0$ depends on the diameter of $\Omega$.
They also conjectured that $C(\Omega)$ should depend on
the Lebesgue measure of $\Omega$. This conjecture was justified in \cite{HOHOL}
and later generalised to $\plainL{p}$-type inequalities in \cite{T1}.
Later S. Filippas, V.G. Maz'ya and A. Tertikas \cite{FMT} (see also
F.G. Avkhadiev \cite{A}) obtained
for $C(\Omega)$ an estimate in terms of the in-radius $\d_{\textup{in}}$.
\section{A version of the Koebe Theorem}
A. Ancona has pointed out in \cite{A} (page 278)
that the Hardy inequality for simply-connected planar domains can be obtained from the famous Koebe
one-quarter Theorem. Let $f$ be a
conformal mapping (i.e. analytic univalent) defined on the unit disk
$\mathbb D = \{z\in\mathbb C: |z| < 1\}$, normalized by the
condition $f(0) = 0$, $f'(0) = 1$.
Denote by $\Omega$ the image of the disk under the function $f$,
i.e. $\Omega = f(\mathbb D)$, and set
$$
\d(\zeta) = \dist\{\zeta, \partial\Omega\} = \inf_{w\notin\Omega}|w-\zeta|
$$
to be the distance from the point $\zeta\in\Omega$ to the boundary
$\partial\Omega$. The classical Koebe one-quarter Theorem tells us that
\begin{equation*}
\d(0) \ge r,
\end{equation*}
with $r = 1/4$. On the other hand, if the domain $\Omega$ is convex,
then it is known that $r = 1/2$, see e.g. P.L.Duren \cite{Duren}, Theorem 2.15.
Without the normalization conditions $f(0) = 0$, $f'(0) = 1$ the
above estimate can be rewritten as follows:
\begin{equation}\label{koebe:eq}
\d(f(0))\ge r |f'(0)|.
\end{equation}
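The constant $r=1/4$ in \eqref{koebe:eq} is attained by the Koebe function $k(z) = z/(1-z)^2$, which maps $\mathbb D$ onto the plane slit along $(-\infty, -1/4]$. The following numerical check (our illustration) samples the boundary image $k(e^{i\phi})$ and confirms that its distance to $k(0) = 0$ is $1/4 = |k'(0)|/4$.

```python
import numpy as np

# Sample the image of the unit circle under the Koebe function k(z) = z/(1-z)^2.
# Since k(e^{i*phi}) = -1/(4*sin^2(phi/2)), the image boundary is the slit
# (-inf, -1/4], and the closest boundary point to k(0) = 0 is -1/4.
phi = np.linspace(1e-6, 2 * np.pi - 1e-6, 100001)
z = np.exp(1j * phi)
boundary = z / (1 - z) ** 2
dist_to_boundary = np.min(np.abs(boundary))   # approximates delta(0)
print(dist_to_boundary)                        # ~ 0.25 = |k'(0)|/4
```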
For any simply-connected domain $\Omega\subset \mathbb C,\ \Omega\not = \mathbb C$ we denote
by $\mathbb A(\Omega)$ the class of all conformal maps such that
$f(\mathbb D) = \Omega$.
Our proof of the main Theorems \ref{main1:thm} and \ref{main2:thm} relies on a
version of the Koebe theorem, in which the constant $r$ assumes values in the interval
$[1/4, 1/2]$. We begin with a general statement which deduces the required Koebe-type result
by comparing the domain $\Omega$ with some suitable ``reference'' domain.
Let ${\sf{\Lambda}}\subset \mathbb C, {\sf{\Lambda}} \not = \mathbb C$
be a simply-connected domain such that $0\in\partial{\sf{\Lambda}}$, and let
$g$ be a conformal function which maps ${\sf{\Lambda}}$ onto the complex
plane with a cut along the negative semi-axis, i.e. onto
\begin{equation*}
{\sf{\Pi}} = \mathbb C\setminus\{z\in\mathbb C: \im z = 0, \re z \le 0\},
\end{equation*}
such that $g(0) = 0$. We call ${\sf{\Lambda}}$ a \textsl{standard domain} and $g$
a \textsl{conformal map associated} with the standard domain ${\sf{\Lambda}}$.
\begin{lem}\label{gprime:lem}
Let $w\in\partial\Omega$ and suppose that for some standard
domain ${\sf{\Lambda}}$ the inclusion
\begin{equation}\label{standard:eq}
\Omega\subset{\sf{\Lambda}}(w, \phi)
\end{equation}
holds with some $\phi\in (- \pi, \pi]$.
Let $g$ be a conformal map associated with ${\sf{\Lambda}}$, and suppose that there are numbers
$M\in (0, \infty)$
and $R_0\in (0, \infty]$ such that
for all $R\in (0, R_0)$
\begin{equation}\label{gprime:eq}
\left|\frac{g'(z)}{g(z)}\right|\ge \frac{\b}{|z|}
\end{equation}
for $z: 0< |z| \le R, z\in{\sf{\Lambda}}$ with some constant $\b = \b(R) \in (0, M]$.
Then for any $f\in \mathbb A(\Omega)$ satisfying the
condition $M|f'(0)| < 4 R_0$, the inequality
\begin{equation}\label{koebe2:eq}
|f(0) - w|\ge \frac{\b(R_1)}{4}|f'(0)|,\ \ R_1 = \frac{M|f'(0)|}{4},
\end{equation}
holds.
\end{lem}
\begin{proof}
Since $\Omega\subset{\sf{\Lambda}}(w, \phi)$, the function
\begin{equation*}
h(z) = g\bigl(e^{-i\phi}(f(z)-w)\bigr)
\end{equation*}
is conformal on $\mathbb D$.
Since $0\notin {\sf{\Pi}}$, by the classical Koebe Theorem,
\begin{equation*}
|h(0)|\ge \frac{1}{4} |h'(0)|,
\end{equation*}
so that
\begin{equation*}
|g\bigl(e^{-i\phi}(f(0)-w)\bigr)|
\ge \frac{1}{4}|g'\bigl(e^{-i\phi}(f(0)-w)\bigr)| |f'(0)|.
\end{equation*}
If $|f(0)-w|\ge M |f'(0)|/4$,
then there is nothing to prove, so we assume that
$|f(0) - w|\le M|f'(0)|/4$.
Then by the assumption \eqref{gprime:eq} we get
\begin{equation*}
\frac{1}{\b\bigl(M|f'(0)|/4\bigr)}|f(0) - w|\ge \frac{1}{4}|f'(0)|,
\end{equation*}
which leads to the required estimate.
\end{proof}
\begin{cor}\label{standard:cor}
Let $\Omega\subset\mathbb C$ be a domain.
Suppose that for any $w\in \partial\Omega$ there is a standard domain
${\sf{\Lambda}} = {\sf{\Lambda}}_w$, such that the inclusion \eqref{standard:eq} holds with some $\phi = \phi_w$,
and that the associated conformal maps $g = g_w$ satisfy \eqref{gprime:eq}
for all $0 <|z|\le R, z\in{\sf{\Lambda}}_w$, for all $R < R_0$
with some $\b(R) = \b_w(R)\in (0, M]$ where $M\in (0, \infty)$ and $R_0\in (0, \infty]$
are independent of $w$.
If $R_0 <\infty$, then under the condition $M\d_{\textup{in}} < R_0$ the
estimate \eqref{koebe:eq} holds for all $f\in \mathbb A(\Omega)$ with
$$
r = \frac{1}{4}\ \inf_{w\in\partial\Omega}\b_w(R'),\ R' = M \d_{\textup{in}}.
$$
If $R_0 = \infty$, then the estimate \eqref{koebe:eq} holds for all $f\in\mathbb A(\Omega)$
with
$$
r = \frac{1}{4}\ \inf_{w\in\partial\Omega}\inf_{R>0}\b_w(R).
$$
\end{cor}
Observe that under the conditions of this corollary, the domain $\Omega$
is automatically simply-connected and $\Omega\not = \mathbb C$.
\begin{proof}
In the case $R_0 = \infty$ the result immediately follows
from Lemma \ref{gprime:lem}.
Assume that $R_0 <\infty$.
By the classical Koebe Theorem $|f'(0)|\le 4\d(f(0))\le 4\d_{\textup{in}}$, so that
by Lemma \ref{gprime:lem}, for each $w\in\partial\Omega$ we have the estimate
\eqref{koebe2:eq}. Since $R_1 \le R'$ and $\b_w(\ \cdot\ )$ is a decreasing function,
the required inequality \eqref{koebe:eq} follows.
\end{proof}
Now we apply the above results in the cases of standard domains $K_{\t}$ and
$D_{a, \t}$, see \eqref{sector:eq} and \eqref{pan:eq} for definitions.
\begin{thm}\label{koebe:thm}
Suppose that $\Omega$ satisfies Condition \ref{cone:cond}
with some $\t\in [\pi/2, \pi]$.
Then for any $f\in\mathbb A(\Omega)$ the inequality
\eqref{koebe:eq} holds with $r$ given by \eqref{r1:eq}.
\end{thm}
\begin{proof}
Due to Condition \ref{cone:cond}, for each $w\in\partial\Omega$ we have
$\Omega\subset K_{\t}(w, \phi)$ with some $\phi\in(-\pi, \pi]$.
Clearly, the domain $K_{\t}$ is standard and the function
$$
g(z) = z^{\a},\ \a = \frac{\pi}{\t}
$$
is a conformal map associated with $K_{\t}$.
One immediately obtains:
$$
\frac{g'(z)}{g(z)} = \a \frac{z^{\a-1}}{z^\a} = \frac{\a}{z}, z\in {\sf{\Lambda}},
$$
so that the conditions of Corollary
\ref{standard:cor} hold with the constant $\b=\a$
and $R_0 = \infty$.
Now Corollary \ref{standard:cor} leads to the proclaimed result.
\end{proof}
Note that for convex domains the angle $\t$ is $\pi/2$, and hence we
recover the known result $r = 1/2$. Actually, the proof of Lemma \ref{gprime:lem}
is modelled on that for convex domains, which is featured in
\cite{Duren}, Theorem 2.15.
Let us prove a similar result for Condition \ref{outside:cond}:
\begin{thm}\label{pan:thm}
Suppose that $\Omega$ satisfies Condition
\ref{outside:cond} with some $a >0$,\ $\t_0\in [0, \pi)$,
and that $\d_{\textup{in}} < \infty$,
\begin{equation}\label{din:eq}
2\d_{\textup{in}}\le R_0(a), \ \ R_0(a) = \frac{a}{2\bigl(
2^{1/2}|\tan(\t_0/2)| + 1
\bigr)}.
\end{equation}
Then for any $f\in\mathbb A(\Omega)$ the inequality
\eqref{koebe:eq} holds with $r$ given by \eqref{r2:eq}.
\end{thm}
\begin{proof}
Due to Condition \ref{outside:cond}, for each $w\in\partial\Omega$ we have
$\Omega\subset D_{a, \t}(w, \phi)$ with some $\t\in [0, \t_0]$ and $\phi\in(-\pi, \pi]$.
Clearly, the domain $D_{a, \t}$ is standard and the function
$$
g(z) = h^2(za^{-1}),\ \
h(\zeta) = \sqrt{\zeta+ e^{i\t}} - \frac{1}{\sqrt{\zeta+ e^{i\t}}}
- 2i b,\ \ b = \sin \frac{\t}{2},
$$
is a conformal map associated with $D_{a, \t}$.
Write:
\begin{align*}
h(\zeta) = &\ \frac{\zeta+e^{i\t} - 1
- 2ib \sqrt{ \zeta+ e^{i\t} }}{\sqrt{ \zeta+ e^{i\t} }}\\[0.3cm]
= &\ \frac{\zeta+ 2i b e^{i\t/2}
- 2ib \sqrt{ \zeta+ e^{i\t} }}{\sqrt{ \zeta+ e^{i\t}}}\\[0.3cm]
= &\ \frac{\psi(\zeta)}{\sqrt{\zeta+e^{i\t}}}.
\end{align*}
A direct calculation shows that
\begin{equation*}
\frac{g'(z)}{g(z)} = \frac{2\psi'(z/a)}{a\psi(z/a)} - \frac{1}{z+a e^{i\t}}.
\end{equation*}
Let us investigate the function $\psi$ in more detail. Assume that $|\zeta|\le 1/2$.
Rewrite:
\begin{align*}
\psi(\zeta) = &\ \zeta + 2ib e^{i\t/2} - 2ib e^{i\t/2}\sqrt{1 + \zeta e^{-i\t}}\\[0.3cm]
= &\ \zeta + 2ib e^{i\t/2} - 2ibe^{i\t/2}\biggl(1+ \frac{1}{2}\zeta e^{-i\t}+ \zeta\gamma_1\biggr)\\[0.3cm]
= &\ (1-ibe^{-i\t/2}) \zeta - 2 ib e^{i\t/2} \zeta\gamma_1
= e^{-i\t/2} \cos \frac{\t}{2}\ \zeta - 2 ib e^{i\t/2} \zeta\gamma_1,
\end{align*}
where
$$
|\gamma_1(\zeta)|\le 2^{-3/2} |\zeta|,\ |\zeta|\le 1/2.
$$
Now consider the derivative:
\begin{align*}
\psi'(\zeta) = &\ 1 - \frac{ib}{\sqrt{\zeta+e^{i\t}}}\\
= &\ 1 - e^{-i\t/2}\frac{ib}{\sqrt{1+\zeta e^{-i\t}}}
= 1 - ib e^{-i\t/2}(1+\gamma_2)\\
= &\ e^{-i\t/2} \cos \frac{\t}{2} - ib e^{-i\t/2}\gamma_2,
\end{align*}
where
$$
|\gamma_2(\zeta)|\le 2^{1/2}|\zeta|, \ |\zeta|\le 1/2.
$$
Therefore
\begin{align*}
\frac{g'(z)}{g(z)} = &\ \frac{2e^{-i\t/2} \cos \frac{\t}{2} - 2ib e^{-i\t/2}\gamma_2(z/a)}
{e^{-i\t/2} \cos \frac{\t}{2}\ z - 2 i b e^{i\t/2} z\gamma_1(z/a)} - \frac{1}{z+a e^{i\t}}\\[0.3cm]
= &\ \frac{1}{z}\biggl[
\frac{2 - 2i\tan(\t/2) \gamma_2(z/a)}{1 - 2i \tan (\t/2) e^{i\t}\gamma_1(z/a)}
- \frac{z}{a} \frac{1}{z/a + e^{i\t}}
\biggr].
\end{align*}
A simple calculation shows that
\begin{align*}
\left|\frac{g'(z)}{g(z)}\right| \ge
&\ \frac{2}{|z|}\biggl[
\frac{1 - |\tan(\t/2)| |\gamma_2(z/a)|}{1 + 2 |\tan (\t/2)| |\gamma_1(z/a)|}
- 2\frac{|z|}{a}
\biggr]\\[0.3cm]
\ge &\ \frac{2}{|z|}\biggl[
1 - 2 \bigl(2^{1/2}|\tan(\t/2)| + 1\bigr)\frac{|z|}{a}
\biggr], \ |z|/a\le 1/2.
\end{align*}
Therefore the condition \eqref{gprime:eq} is satisfied for all $0<|z|\le R$,
$$
R < R_0 = \frac{a}{2\bigl(
2^{1/2}|\tan(\t/2)| + 1
\bigr)},
$$
with
\begin{equation}\label{beta:eq}
\b(R) = \b_{\t}(R) = 2\biggl[
1 - 2 \bigl(2^{1/2}|\tan(\t/2)| + 1\bigr)\frac{R}{a}
\biggr].
\end{equation}
Note that $\b\le M$ with $M=2$ and $\b_{\t}\ge \b_{\t_0}$, so that by Corollary \ref{standard:cor}
the estimate \eqref{koebe:eq} holds with the $r$ given by \eqref{r2:eq}.
\end{proof}
\section{Proof of Theorems \ref{main1:thm}, \ref{main2:thm}}
As soon as the Koebe Theorem \eqref{koebe:eq} is established, our
proof of the Hardy inequality follows that of A. Ancona \cite{A}.
Namely, our starting point is the inequality for the
half-plane, which is an immediate consequence of
the classical Hardy inequality in one dimension.
Below we use the usual notation $z = x+ iy$, $x, y\in \mathbb R$.
\begin{prop}\label{hplane:prop} Let $\mathbb C_+ = \{z\in\mathbb C: \re z >0\}$.
For any $u\in \plainC{1}_0(\mathbb C_+)$ one has
\begin{equation*}
\int_{\mathbb C_+} |\nabla u|^2 dx dy \ge \frac{1}{4} \int_{\mathbb
C_+} \frac{|u|^2}{x^2} dx dy.
\end{equation*}
\end{prop}
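Proposition \ref{hplane:prop} is just the one-dimensional Hardy inequality $\int_0^\infty |u'(x)|^2\, dx \ge \frac14 \int_0^\infty |u(x)|^2 x^{-2}\, dx$ applied in the $x$-variable. A numerical sanity check (ours) with the arbitrarily chosen test function $u(x) = x e^{-x}$:

```python
import math
from scipy.integrate import quad

# One-dimensional Hardy inequality underlying the half-plane proposition:
# int_0^inf u'(x)^2 dx >= (1/4) int_0^inf u(x)^2 / x^2 dx.
u = lambda x: x * math.exp(-x)
du = lambda x: (1.0 - x) * math.exp(-x)

lhs, _ = quad(lambda x: du(x) ** 2, 0, math.inf)
rhs, _ = quad(lambda x: (u(x) / x) ** 2 if x > 0 else 0.0, 0, math.inf)
print(lhs, 0.25 * rhs)   # 0.25 >= 0.125: the inequality holds with room to spare
```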
Theorems \ref{main1:thm} and \ref{main2:thm} immediately follow
from Proposition \ref{hplane:prop} with the help of the following
conditional result:
\begin{thm}\label{hkoebe:thm}
Let $\Omega\subset\mathbb C$, $\Omega\not=\mathbb C$ be a simply connected domain.
Suppose that for all $g\in \mathbb A(\Omega)$ the inequality
\eqref{koebe:eq} holds with some $r\in [1/4, 1/2]$. Then for any
conformal mapping $f:\mathbb C_+\to \Omega$ the following version of
the Koebe Theorem holds:
\begin{equation}\label{dist:eq}
\d(f(z))\ge 2rx|f'(z)|.
\end{equation}
\end{thm}
For $r = 1/4$ the estimate \eqref{dist:eq} can be found in \cite{A}.
For the reader's convenience we provide a proof of \eqref{dist:eq}.
\begin{proof}
For a conformal mapping $f: \mathbb C_+\to \Omega$ and arbitrary $z\in
\mathbb C_+$ we define
\begin{equation*}
g(w) = g_z(w) = f\bigl(h(w)\bigr),\ h(w) = \frac{\overline z w +
z}{1-w} ,
\end{equation*}
where $w\in \mathbb D$. It is clear that for each fixed $z\in\mathbb
C_+$ the function $h$ maps $\mathbb D$ onto $\mathbb C_+$ and $h(0)
= z, \ g(0) = f(z)$. The derivative is
\begin{equation*}
g'(w) = \frac{z+\overline z}{(1-w)^2} f'\bigl(h(w)\bigr),
\end{equation*}
so that
\begin{equation*}
g'(0) = 2 x f'(z).
\end{equation*}
Therefore, the Koebe theorem \eqref{koebe:eq} implies that
\begin{equation*}
\d\bigl(f(z)\bigr) = \d\bigl(g(0)\bigr)\ge r |g'(0)| = 2r x |f'(z)|,
\end{equation*}
as required.
\end{proof}
\begin{proof}[Proof of Theorems \ref{main1:thm}, \ref{main2:thm}]
According to Theorems \ref{koebe:thm} or \ref{pan:thm} the Koebe
Theorem for functions $f\in \mathbb A(\Omega)$ holds with the values of
$r$ given by \eqref{r1:eq} or \eqref{r2:eq} respectively.
Let $f:\mathbb C_+\to \Omega$ be a conformal map. Remembering that
conformal maps preserve the Dirichlet integral, from Proposition
\ref{hplane:prop} we get for any $u\in \plainC{1}_0(\Omega)$
\begin{align*}
\int_{\Omega}|\nabla u|^2 d\bx = &\ \int_{\mathbb C_+} |\nabla(u\circ
f)|^2 \,dx\, dy
\ge \frac{1}{4} \int_{\mathbb C_+} \frac{|(u\circ f)|^2}{x^2} \,dx\, dy \\[0.3cm]
= &\ r^2 \int_{\mathbb C_+} \frac{|(u\circ f)|^2}{ (2rx)^2
|f'(z)|^2} |f'(z)|^2 \,dx\, dy \ge r^2\int_{\Omega}
\frac{|u|^2}{\d(\bx)^2} d\bx.
\end{align*}
At the last step we have used \eqref{dist:eq}.
Now Theorems \ref{main1:thm}, \ref{main2:thm} follow.
\end{proof}
\medskip
\noindent
{\it Acknowledgements.}
The authors gratefully acknowledge partial support from the SPECT ESF European programme.
A. Laptev would like to thank the University of Birmingham for its hospitality.
A. Sobolev was partially supported by G\"oran Gustafsson Foundation.
% Source: https://arxiv.org/abs/2207.11807
% Title: AAA interpolation of equispaced data
\section{Introduction}
The aim of this paper is to propose a method for interpolation
of real or complex data in equispaced points on an interval, which without
loss of generality we take to be $[-1,1]$. In its basic form the method
simply computes a AAA rational
approximation\footnote{pronounced ``triple-A''}~\cite{aaa}
to the data, and thus the interpolant is a numerical one,
not mathematically exact: a crucial advantage for
robustness. In Chebfun~\cite{chebfun}, the fit can be computed
to the default relative accuracy $10^{-13}$ by the command
\begin{equation}
{\tt r = aaa(F)},
\label{command1}
\end{equation}
where $F$ is the vector of data values.
(As explained in Section~3, an adjustment is made if
the AAA approximant turns out to have
poles in the interval of approximation.)
If interpolation by a polynomial rather than a rational function is desired,
this can be obtained by a further step in which $r$ is approximated by a Chebyshev series,
\begin{equation}
{\tt p = chebfun(r)}.
\label{command2}
\end{equation}
For example, Figure~\ref{fig1} shows the AAA interpolant $r$ of
$f(x) = e^x/\sqrt{1+9x^2}$ in 50 equispaced points of $[-1,1]$.
This is a rational function of degree $17$ with accuracy $\|f-r\|
\approx 9.6\times 10^{-14}$, computed in about a millisecond on
a laptop. (Throughout this paper, $\|\cdot\|$ is the
$\infty$-norm over $[-1,1]$.) The Chebfun polynomial approximation $p$
to $r$ has degree $104$ and the same accuracy $\|f-p\kern .8pt \| \approx
9.6\times 10^{-14}$. The exact degree 49 polynomial interpolant
$p_{\hbox{\footnotesize\rm exact}}^{}$ to the data, by contrast, has error $\|f-p_{\hbox{\footnotesize\rm exact}}^{}\| \approx
109.3$ because of the Runge phenomenon~\cite{runge,ATAP}.
It is fascinating that one can generate high-degree polynomial interpolants
like this that are so much better between the sample points
than polynomial interpolants of minimal degree, but we see no
particular advantage in $p(x)$ as compared with $r(x)$, so for the remainder
of the paper, we just discuss $r(x)$.
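The numbers quoted above are easy to reproduce. The sketch below (ours; it uses SciPy's barycentric evaluation in place of Chebfun) forms the exact degree-$49$ polynomial interpolant of the $50$ equispaced samples and measures its error on a fine grid, exhibiting the Runge blow-up.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

# Exact degree-49 polynomial interpolant of f in 50 equispaced points,
# evaluated stably in barycentric form. It matches the data at the nodes
# but oscillates wildly in between (the Runge phenomenon).
f = lambda x: np.exp(x) / np.sqrt(1 + 9 * x ** 2)
xk = np.linspace(-1, 1, 50)
p = BarycentricInterpolator(xk, f(xk))

xx = np.linspace(-1, 1, 10001)
err = np.max(np.abs(f(xx) - p(xx)))
print(err)   # about 1e2 (the paper reports 109.3), versus ~1e-13 for AAA
```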
The problem of interpolating or approximating equispaced data arises
in countless applications, and there is a large literature on the
subject, with many algorithms having been proposed, some of them
highly effective in practice. One reason why no single algorithm
has taken over is that there is an unavoidable tradeoff in this
problem between accuracy and stability. In particular, if $n$
equispaced samples are taken of a function $f$ that is analytic
on $[-1,1]$, one might expect that exponential convergence to $f$
should be possible as $n\to\infty$. However, the {\em impossibility
theorem\/} asserts that exponential convergence is only possible in
tandem with exponential instability, and that, conversely,
a stable algorithm can converge at best at a root-exponential rate
$\|f-r_n\| = \exp(-C\sqrt n\kern 1pt)$, $C>0$~\cite{ptk}. In practice, it is
usual to operate in an in-between regime, accepting some instability
as the price of better accuracy. In the face of this complexity, it
follows that different algorithms may be advantageous for different
classes of functions, and that pinning down the properties of any
particular algorithm may not be straightforward.
\begin{figure}
\begin{center}
\vspace*{15pt}
\includegraphics[scale=.9]{fig1.eps}
\vspace*{5pt}
\end{center}
\caption{\small\label{fig1}AAA interpolation of
$f(x) = e^x/\sqrt{1+9x^2}$ in $50$ equispaced points in $[-1,1]$.
The rational interpolant, of degree $17$, matches $f$ to accuracy $3.3 \times 10^{-14}$
at the sample points and $9.6\times 10^{-14}$ on $[-1,1]$.}
\end{figure}
In this complicated situation we will do our best to elucidate the
properties of AAA interpolation. First we compare its
performance numerically against that of some existing algorithms for a collection
of approximation problems (sections~2 and~3), and then we present certain
theoretical considerations (section 4). The final section briefly
discusses AAA variants, the effect of noise, and other issues.
Apart from some remarks around Figure~\ref{twoposs},
we will not describe details of the AAA algorithm, partly because
these can be found elsewhere~\cite{aaa} and partly because the
essential point here is not AAA per se but just
rational approximation.
At present, AAA appears to be the best general-purpose
rational approximation tool available, but other methods may come along
in the future.
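Although the details of the AAA algorithm are deliberately left to the references, a minimal version is short enough to sketch. The following is our own simplified implementation (greedy support-point selection plus least-squares weights via the SVD, with no pole cleanup), not the Chebfun code; applied to the example of Figure~\ref{fig1} it reaches roughly the accuracy reported there.

```python
import numpy as np

def aaa(Z, F, tol=1e-13, mmax=30):
    """Minimal AAA: barycentric rational approximant of the data F on Z."""
    Z, F = np.asarray(Z, float), np.asarray(F, float)
    J = list(range(len(Z)))            # indices not yet used as support points
    zj, fj = np.empty(0), np.empty(0)
    C = np.empty((len(Z), 0))
    R = np.full(len(Z), F.mean())
    for _ in range(mmax):
        j = J[int(np.argmax(np.abs(F[J] - R[J])))]   # greedy: worst residual
        zj, fj = np.append(zj, Z[j]), np.append(fj, F[j])
        J.remove(j)
        with np.errstate(divide='ignore', invalid='ignore'):
            C = np.column_stack([C, 1.0 / (Z - Z[j])])
            A = (F[:, None] - fj[None, :]) * C       # Loewner matrix
        _, _, Vh = np.linalg.svd(A[J, :])            # rows at support pts excluded
        w = Vh[-1]                                   # minimizing weight vector
        R = F.copy()
        R[J] = (C[J] @ (w * fj)) / (C[J] @ w)        # barycentric values off support
        if np.max(np.abs(F - R)) <= tol * np.max(np.abs(F)):
            break
    def r(x):
        CC = 1.0 / (np.asarray(x, float)[:, None] - zj[None, :])
        return (CC @ (w * fj)) / (CC @ w)
    return r, zj

f = lambda x: np.exp(x) / np.sqrt(1 + 9 * x ** 2)
X = np.linspace(-1, 1, 50)
r, zj = aaa(X, f(X))
xx = np.linspace(-1, 1, 10001)[1:-1] + 1e-9          # dodge the support points
err = np.max(np.abs(f(xx) - r(xx)))
print(len(zj) - 1, err)   # degree around 17, error near machine precision
```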
\section{Existing methods}
Many methods have been proposed for interpolation or approximation of equispaced
data, and we will not attempt a comprehensive review. We will,
however, mention the main categories of methods and choose five specific
examples for numerical comparisons. For previous surveys with further
references, see~\cite{bo1,bo2,ptk}.
With any interpolant or approximant, there are always the two
questions of the {\em form\/} of the approximant and the {\em
method\/} of defining or computing it. The main forms that have been
advocated are polynomials or piecewise polynomials,
Fourier series, rational functions, radial
basis functions (RBFs), and various modifications and combinations
of these. The methods proposed generally involve mathematically
exact interpolation or some version of least-squares approximation.
(Ironically, because of conditioning issues, exact interpolants may
be less accurate in floating-point arithmetic than least-squares
approximations, even at the sample points, let alone in-between.)
Almost every method involves a choice of one or two parameters, which usually
affect the tradeoff between accuracy and stability and can
therefore be interpreted in part as regularization parameters.
AAA interpolation may appear at first as an exception to this rule,
but a parameter implicitly involved is the tolerance, which in
the Chebfun implementation is set by default to $10^{-13}$. We will
discuss this further in Section 4.
\smallskip
{\em Polynomial least-squares.}
Interpolation of $n$ data values by a polynomial of degree $n-1$ leads to
exponential instability at a rate $O(2^n)$, as has been known
since Runge in 1901~\cite{runge,ATAP}. Least-squares fitting
by a polynomial of degree $d< n-1$, however, is better behaved.
To cut off the exponential growth as $n\to\infty$ entirely, $d$ must
be restricted to size $O(\sqrt n\kern 1pt )$~\cite{bx,rakh}, but one
can often get away with larger values in practice, and a simple
choice is $d \approx n/\gamma$, where $\gamma>1$ is an {\em oversampling ratio}.
According to equation (4.1) of~\cite{ptk}, this cuts the unstable exponential
growth rate from $2^n$ to $C^n$ with
\begin{equation}
C = \left[(1+\alpha)^{1+\alpha} (1-\alpha)^{1-\alpha}\right]^{1/2}, \quad
\alpha = 1/\gamma.
\label{kuijlaars}
\end{equation}
For example, $\gamma=2$ yields a growth rate of about $(3^{3/4}/2)^n \approx
(1.14)^n$, which is mild
enough to be a good choice in many applications.
Our experiments of Figure~\ref{figbig} in the
next section use this value $\gamma=2$.
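The constant \eqref{kuijlaars} is trivial to evaluate; the following sketch (ours) confirms the value quoted for $\gamma = 2$ and recovers the $O(2^n)$ interpolation rate in the limit $\gamma = 1$.

```python
# Growth-rate constant C for least-squares fitting of degree d ~ n/gamma
# on n equispaced points, with alpha = 1/gamma.
def growth_constant(gamma):
    a = 1.0 / gamma
    return ((1 + a) ** (1 + a) * (1 - a) ** (1 - a)) ** 0.5

print(growth_constant(2.0))   # 3**0.75/2, about 1.14
print(growth_constant(1.0))   # 2.0: interpolation recovers the O(2^n) rate
```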
\smallskip
{\em Fourier, polynomial, and RBF extensions.}
The idea of {\em Fourier extension\/} is to approximate $f$ by a Fourier series
tied not to $[-1,1]$ but to a larger domain
$[-T,T\kern 1pt]$~\cite{boyd02,bhp}.
The fit is carried out by least-squares or regularized least-squares,
often simply
by means of the backslash operator in MATLAB, as has been analyzed
by Adcock, Huybrechs, and Vaquero~\cite{ahmv,h09} and Lyon~\cite{lyon}.
A related idea is {\em polynomial extension}, in which $f$ is approximated by
polynomials expressed in a basis of orthogonal polynomials defined on
an interval $[-T,T\kern 1pt]$~\cite{as}. A third possibility is {\em
RBF extension,} in which $f$ is approximated by smooth RBFs whose centers
extend outside $[-1,1]$~\cite{flt,piret}. In Figure~\ref{figbig} of the
next section, we use Fourier extension with $T=2$ and an oversampling
ratio of $2$, so that the least-squares matrices have twice as many
rows as columns.
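As a concrete illustration of Fourier extension (ours, not taken from the cited implementations; the parameters $T=2$ and oversampling ratio $2$ match those used for the comparison figure), one can fit a truncated Fourier series on $[-T,T]$ to the equispaced samples on $[-1,1]$ by ordinary least squares:

```python
import numpy as np

def fourier_extension(f, n, T=2.0):
    """Least-squares Fourier-extension fit on [-T, T] to n equispaced samples."""
    x = np.linspace(-1, 1, n)
    m = n // 4                          # 2m+1 ~ n/2 columns: oversampling ratio ~2
    k = np.arange(1, m + 1)
    def basis(t):
        return np.hstack([np.ones((len(t), 1)),
                          np.cos(np.pi * np.outer(t, k) / T),
                          np.sin(np.pi * np.outer(t, k) / T)])
    c, *_ = np.linalg.lstsq(basis(x), f(x), rcond=None)
    return lambda t: basis(t) @ c

f = lambda x: np.exp(x) / np.sqrt(1 + 9 * x ** 2)
r = fourier_extension(f, 50)
xx = np.linspace(-1, 1, 2001)
err = np.max(np.abs(f(xx) - r(xx)))
print(err)   # small and stable, unlike the exact polynomial interpolant
```

The truncated SVD inside `lstsq` (via `rcond`) supplies the implicit regularization mentioned above.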
\smallskip
{\em Fourier series with corrections.}
If $f$ is periodic, trigonometric (Fourier) interpolation provides a
perfect approximation method: exponentially convergent and stable.
In the case of quadrature, this becomes the exponentially
convergent trapezoidal rule~\cite{tw}. For nonperiodic $f$,
an attractive idea is to
employ a trigonometric fit modified by corrections of one sort or another,
often the addition of a polynomial term,
designed to mitigate the effect of the implicit discontinuity at
the boundary. This idea goes back as far as James Gregory in 1670, before
Fourier analysis and even calculus~\cite{forn21}!\ \ The result
will not be exponentially convergent, but it can have an algebraic
convergence rate of arbitrary order depending on the choice of the
corrections, and the rate may improve to super-algebraic if the
correction order is taken to increase with $n$. This idea has
been applied in many variations, an early example being a method
of Eckhoff with precursors he attributes to Krylov and Lanczos~\cite{eckhoff}.
In the ``Gregory interpolant''
of~\cite{jt}, an interpolant in the form of a sum of a trigonometric term and
a polynomial is constructed whose integral equals
the result for the Gregory quadrature formula.
Fornberg has proposed (for quadrature, not yet approximation)
a method of regularized endpoint corrections in
which extra parameters are introduced whose
amplitudes are then limited by optimization~\cite{forn21}.
Figure~\ref{figbig} of the next section shows curves for
a least-squares method in which a Fourier series is combined with a polynomial term of
degree about $\sqrt n$, with an oversampling ratio of about $2$.
\smallskip
{\em Multi-domain methods.}
Related in spirit to methods involving boundary corrections are
methods in which $f$ is approximated by different functions over different
subintervals of $[-1,1]$---in the simplest case, a big central interval and two
smaller intervals near the ends. For examples see~\cite{boyd07,bo2,klein13,pg}.
\smallskip
{\em Splines.} Splines, which are piecewise polynomials
satisfying certain continuity conditions, take the multi-domain idea further and
are an obvious candidate for
approximations that will not suffer from Gibbs oscillations at the boundaries.
The most familiar case is cubic spline interpolants, where the sample
points are nodes separating cubic pieces with
continuity of function values and first and second derivatives.
Cubic splines (with the standard natural boundary conditions at the ends
and not-a-knot conditions one node in from the ends) are one of the methods
presented in Figure~\ref{figbig} of the next section.
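For comparison, cubic spline interpolation of the same data is completely stable; the sketch below (ours) uses SciPy's CubicSpline with plain natural boundary conditions, a simplification of the natural-plus-not-a-knot variant described above, and shows the algebraic convergence as $n$ grows.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Cubic spline interpolation of equispaced samples: no Runge-type
# instability, algebraic convergence as n increases.
f = lambda x: np.exp(x) / np.sqrt(1 + 9 * x ** 2)
xx = np.linspace(-1, 1, 5001)
errs = []
for n in (50, 100, 200):
    xk = np.linspace(-1, 1, n)
    s = CubicSpline(xk, f(xk), bc_type='natural')
    errs.append(np.max(np.abs(f(xx) - s(xx))))
print(errs)   # decreasing algebraically with n
```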
\smallskip
{\em Mapping.} By a conformal map, polynomial approximations can be
transformed to other approximations that are more suitable for equispaced
interpolation and approximation. The prototypical method in this area
was introduced by Kosloff and Tal-Ezer~\cite{kte}, and there is also
a connection with prolate spheroidal wave functions~\cite{prolate}.
The general conformal mapping point of
view was put forward in~\cite{ht} and in~\cite[chapter 22]{ATAP}.
See also~\cite{boyd16}.
\smallskip
{\em Gegenbauer reconstruction.} Another class of methods has been
developed from the point of view of edge detection and elimination of
the Gibbs phenomenon in harmonic analysis. For entries
into this extensive literature, see~\cite{gt} and~\cite{tadmor}.
\smallskip
{\em Explicit regularization methods.} Several other methods, often
nonlinear, have been proposed involving various strategies of
explicit regularization to counter the instability of
high-accuracy approximation~\cite{berzins,boyd92,chand,wmi}.
We emphasize that even many of the simpler numerical methods implicitly involve
regularization introduced by rounding errors, as will be discussed
in Section~4.
\smallskip
{\em Floater--Hormann rational interpolation:~Chebfun \verb|'equi'|.}
Finally, here is another method involving rational functions.
Floater and Hormann introduced a family of degree $n-1$ rational interpolants in
barycentric form whose weights can be adjusted
to achieve any prescribed order of accuracy~\cite{fh}. (The AAA method also uses
a barycentric representation, but it is an approximant, in principle, not
an interpolant, and it is not very closely related to Floater--Hormann approximation.)
The method we show in Figure~\ref{figbig} of the next section, due to
Klein~\cite{gk,klein}, is based on interpolants
whose order of accuracy is adaptively determined via the
\verb|'equi'| option of the Chebfun constructor~\cite{bos,chebex}.
\section{Numerical comparison}
As mentioned in the opening paragraph, our
interpolation method consists of AAA approximation with its
standard tolerance of $10^{-13}$, so long as the approximant that
is produced has no ``bad poles,'' that is,
poles in the interval $[-1,1]$. The principal
drawback of AAA approximation is that such poles sometimes appear---often
with such small residues that they do not contribute
meaningfully to the quality of the approximation (in which case
they may be called ``spurious poles'' or ``Froissart doublets''~\cite{ATAP}).
When the original
AAA paper~\cite{aaa} was published, a ``cleanup'' procedure was
proposed to address this problem. We are no longer confident that
this procedure is very helpful, and instead, we now propose the
method of {\em AAA-least squares} (AAA-LS) introduced
in~\cite{costa}. Here, if there are any bad poles, these are discarded, and
the other poles are retained to form the basis of a linear least-squares
fit to find a new rational approximation represented in partial fractions form.
For details, see the ``{\tt if any}'' block in the AAA part of
the code listed in the appendix.
Typically this correction makes little difference to accuracy down to
levels of $10^{-7}$ or so, but it is often unsuccessful at achieving tighter
accuracies than this.
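The refit itself is a small linear algebra step. Here is a hypothetical NumPy sketch of the idea (not the Chebfun implementation): given data on the grid and the set of retained poles, the amplitudes of a partial fractions representation are found by linear least-squares.

```python
import numpy as np

# Hypothetical sketch of the AAA-LS refit idea (not the Chebfun code):
# keep the retained poles p_j and solve a linear least-squares problem
# for the amplitudes of r(x) = c_0 + sum_j c_j/(x - p_j) on the grid.
def pf_least_squares(X, F, poles):
    A = np.column_stack([np.ones_like(X)] + [1.0 / (X - p) for p in poles])
    c, *_ = np.linalg.lstsq(A, F, rcond=None)
    return lambda x: c[0] + sum(cj / (x - p) for cj, p in zip(c[1:], poles))
```

For example, data sampled from $0.5 + 3/(x-2)$ with the single retained pole $p=2$ is recovered to machine precision by this fit.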
Poles in $[-1,1]$ almost never appear in the approximation of functions
$f(x)$ that are complex (a case not illustrated here as it is less common
in applications). For real problems, accordingly, another way of avoiding bad
poles is to perturb the data by a small, smooth complex function.
Overall, however, it must be said that the appearance
of unwanted poles in AAA approximants is not yet fully understood, and
it seems likely that improvements are in store in this active research area.
Comparing the AAA method against other methods can quickly grow very
complicated since most methods have
adjustable parameters and there are any number of functions
one could apply them to. To keep
the discussion under control, the panels of Figure~\ref{figbig}
correspond to five functions:
\begin{alignat}{3}
&f_A^{}(x) &&= \sqrt{1.21-x^2}\kern 20pt &&
\hbox{(branch points at $\pm 1.1$),}\\[2pt]
&f_B^{}(x) &&= \sqrt{0.01+x^2} && \hbox{(branch points at $\pm 0.1i$),}\\[3pt]
&f_C^{}(x) &&= \tanh(5x) && \hbox{(poles on the imaginary axis),}\\[3.5pt]
&f_D^{}(x) &&= \sin(40x) && \hbox{(entire, oscillatory),}\\[3pt]
&f_E^{}(x) &&= \exp(-1/x^2) && \hbox{($C^\infty$ but not analytic).}
\end{alignat}
Each panel displays convergence curves for six methods:
\vskip 6pt
~~~~~~Cubic splines,
\vskip 3pt
~~~~~~Polynomial least-squares with oversampling ratio $\gamma = 2$,
\vskip 3pt
~~~~~~Fourier extension on $[-2,2]$ with oversampling ratio $\gamma = 2$,
\vskip 3pt
~~~~~~Fourier series plus polynomial of degree $\sqrt n$ with oversampling ratio $\gamma = 2$,
\vskip 3pt
~~~~~~Floater--Hormann rational interpolation: Chebfun \verb|'equi'|,
\vskip 3pt
~~~~~~AAA with tolerance $10^{-13}$.
\vskip 6pt
\noindent For details of the methods, see the code
listing in the appendix.
\begin{figure}[t]
\vspace*{5pt}
\begin{center}
\includegraphics[scale=1.1]{testsplt.eps}
\vspace{-3pt}
\end{center}
\caption{\small\label{figbig}Six approximation methods applied to five
smooth functions on $[-1,1]$. In each case the horizontal axis is $n$,
the number of sample points, and the vertical axis is $\|f-r\|$, the maximum
error over $[-1,1]$. The thicker dot on the AAA curve marks the final value of $n$
at which the rational function is an interpolant rather than just an approximant.
The dashed lines in (A) and (D) mark the instability estimate
$(1.14)^n\varepsilon_{\hbox{\scriptsize\rm machine}}$ from $(\ref{kuijlaars})$
with oversampling ratio $\gamma=2$.
The results are discussed in the text, and the code
is listed in the appendix.}
\end{figure}
Many observations can be drawn from Figure~\ref{figbig}. The most
basic is that AAA consistently appears to be the best of the methods,
and is the first to reach accuracy $10^{-10}$ in every case. It is
typical for AAA to converge twice as fast as the other methods, and for the
test function $f_C^{}(x) = \tanh(5x)$, whose singularities consist of
poles that the AAA approximants readily capture, its superiority
is especially striking.
It is worth spelling out the meaning of the AAA convergence curves
of Figure~\ref{figbig}.
Each point on one of these curves corresponds to
a rational approximation whose error is $10^{-13}$ or less on the
discrete grid (at least if the partial fractions least-squares
procedure has not been invoked because of bad poles).
For very small~$n$, this will be a rational interpolant,
of degree $\lceil (n-1)/2\rceil$, with error exactly zero on the grid in principle
though nonzero in floating-point arithmetic. In the figure, the last
such $n$ is marked by a thicker dot. For most $n$, AAA terminates with
a rational approximant of degree less than
$\lceil (n-1)/2\rceil$ that matches the data to accuracy $10^{-13}$
on the grid without interpolating exactly.
We think of this as a numerical interpolant, since the
error on the grid is so small, whereas much larger errors
are possible between the grid points. As the grid gets finer, the
errors between grid points reduce until the tolerance of $10^{-13}$
is reached all across $[-1,1]$.
Another observation about Figure~\ref{figbig} is that
the Floater--Hormann \verb|'equi'| meth\-od is very good~\cite{bos}.
Unlike AAA in its pure form, without partial fractions correction, it
is guaranteed always to produce an interpolant that is pole-free
in $[-1,1]$.
The slowest method to converge is often cubic splines, whose behavior is rock
solid algebraic at the fixed rate $\|f-r\| = O(n^{-4})$, assuming
$f$ is smooth enough. The convergence of spline
approximations could be speeded up by using degrees that increase
with $n$ (no doubt at the price of some of that rock solidity).
In panels (A) and (D) of Figure~\ref{figbig}, the polynomial least-squares
approximations converge at first but eventually
diverge exponentially because of unstable amplification of
rounding errors. Note that the upward-sloping red curves in these two
figures both extrapolate back to about $10^{-16}$, machine precision; the
dashed red lines mark the prediction $10^{-16}\times (1.14)^n$ from (\ref{kuijlaars}) with $\gamma = 2$.
Before this point, it is interesting to compare the very different initial phases
for $f_A^{}$, with singularities near
$x=\pm 1$, and $f_B^{}$, with singularities near $x=0$. Clearly we
have initial convergence in the first case and initial divergence in the second,
a consequence of Runge's principle that convergence of polynomial interpolants
depends on analyticity near the middle of the interval.
The figure for the function $f_E^{}$ looks much like that for $f_B^{}$.
The Fourier extension method, as a rule, does somewhat better than
polynomial least-squares in Figure~\ref{figbig}; in certain
limits one expects Fourier methods to have an advantage over
polynomials of a factor of $\pi/2$~\cite[chapter~22]{ATAP}. Perhaps not too much should
be read into the precise positions of these curves in
the figure, however, as both methods have been implemented
with arbitrary choices of parameters that might have been adjusted
in various ways.
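To make the polynomial least-squares method concrete, here is a minimal NumPy sketch (our experiments use the MATLAB code of the appendix; the function name and sizes here are illustrative): fit a degree-$d$ Chebyshev expansion with $d+1\approx n/\gamma$ to $n$ equispaced samples.

```python
import numpy as np

# Sketch of polynomial least-squares on n equispaced points of [-1,1]
# with oversampling ratio gamma = n/(d+1), in the Chebyshev basis.
def cheb_lstsq(f, n, gamma=2.0):
    x = np.linspace(-1.0, 1.0, n)
    d = int(round(n / gamma)) - 1
    V = np.polynomial.chebyshev.chebvander(x, d)   # n-by-(d+1) matrix
    c, *_ = np.linalg.lstsq(V, f(x), rcond=None)
    return lambda t: np.polynomial.chebyshev.chebval(t, c)
```

For a function as benign as $\sin(x)$ with $n=60$, this fit is accurate to close to machine precision all across $[-1,1]$; the instability only bites for larger $n$ and less resolved functions.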
\section{Convergence properties}
What can be said in general about AAA approximation of equispaced
data? We shall organize the discussion around two questions to be
taken up in successive subsections.
\vspace{7pt}
\begin{itemize}
\item
How does the method normally behave?
\item
How is this behavior consistent with the impossibility theorem?
\end{itemize}
\vspace{7pt}
\noindent
It would be good to support our observations with theorems guaranteeing the success of
the method under appropriate hypotheses,
but unfortunately, like most methods of rational approximation, AAA lacks a theoretical
foundation.
A key property affecting all of the discussion is that, unlike four
of the other five methods of Figure~\ref{figbig} (all but
Floater--Hormann \verb|'equi'|), AAA approximation is
nonlinear. As so often happens in computational mathematics, the nonlinearity
is essential to its power, while at the same time leading to analytical challenges.
For example, it means that the theory of frames in numerical approximation,
as presented in~\cite{fna1,fna2,ds}, is not directly applicable.
A theme of that theory, however, remains important here, which is to
distinguish between {\em approximation\/} and
{\em sampling}. The approximation issue is, how well can rational
functions approximate a smooth function $f$ on $[-1,1]$? The sampling
issue is, how effectively will an algorithm based on equally spaced samples find these
good rational approximations?
Our discussion will make reference to the five example functions
$f_A^{},\dots, f_E^{}$ of Figure~\ref{figbig}, which are illustrative of many more
experiments we have carried out, and in addition we will consider
a sixth function. In any analysis of polynomial approximations on
$[-1,1]$, and also in the proof of the impossibility theorem
(even though this result is not restricted to polynomial approximations),
one encounters functions analytic inside a {\em Bernstein ellipse} in
the complex plane, which means an ellipse with foci $\pm 1$.
We define the {\em amber function} (Bernstein means amber
in German) by its Chebyshev series
\begin{equation}
A(x) = \sum_{k=0}^\infty 2^{-k} s_k T_k(x),
\label{amber}
\end{equation}
where the numbers $s_k = \pm 1$ are determined by the binary
expansion of $\pi$,
\begin{equation}
\pi = 11.00100100001111110110\dots\kern 1pt{}_2^{},
\label{pi}
\end{equation}
with $s_k=1$ when the bit is $1$
and $s_k=-1$ when it is $0$. In Chebfun, one can construct $A$ with the commands
\vspace{6pt}
{\small
\begin{verbatim}
s = dec2bin(floor(2^52*pi));
c = 2.^(0:-1:-53)';
ii = find(s=='0'); c(ii) = -c(ii);
A = chebfun(c,'coeffs');
\end{verbatim}
\par}
\vspace{6pt}
\noindent
The point of $A(x)$ is that it is analytic in the Bernstein
$2$-ellipse but has
no further analytic structure beyond that, since the bits of $\pi$ are
effectively random. In particular, it is not analytic or meromorphic in
any larger region of the $x$-plane. (We believe it has
the $2$-ellipse as a natural boundary~\cite[chap.~4]{kahane}.)
Figure~\ref{figamber} sketches $A(x)$ over $[-1,1]$.
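The same construction can be written in NumPy (an illustrative translation of the Chebfun commands above; the truncation at 54 terms reflects double precision):

```python
import numpy as np

# Signs s_k from the binary expansion of pi, coefficients 2^{-k} s_k,
# mirroring the Chebfun construction of the amber function.
bits = bin(int(np.floor(2**52 * np.pi)))[2:]            # '110010010000...'
signs = np.array([1.0 if b == '1' else -1.0 for b in bits])
coeffs = signs * 2.0 ** (-np.arange(len(bits)))

def amber(x):
    # A(x) = sum_k 2^{-k} s_k T_k(x), truncated at 54 terms
    return np.polynomial.chebyshev.chebval(x, coeffs)
```

Since $|T_k(x)|\le 1$ on $[-1,1]$, the series shows directly that $|A(x)|< 2$ there.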
\begin{figure}
\begin{center}
\kern 7pt
\includegraphics[scale=.7]{amber}
\caption{\label{figamber}The amber function, a test function constructed to be
analytic in the Bernstein
$2$-ellipse but not in any larger region of
the complex $x$-plane.}
\end{center}
\end{figure}
Figure~\ref{ambercurves} is another plot along the lines of Figure~\ref{figbig}, but
for $A(x)$, and extending to $n=400$ instead of $200$.
We are now prepared to examine the properties of AAA interpolation.
\begin{figure}
\vspace*{10pt}
\begin{center}
\includegraphics[scale=.9]{ambercurves}
\caption{\label{ambercurves}A convergence plot as
in Figure~$\ref{figbig}$ for the amber function $A(x)$ of~$(\ref{amber})$. Note
that for this function, which has no analytic structure beyond analyticity in the
Bernstein $2$-ellipse, the AAA and {\tt 'equi'} methods perform similarly.}
\end{center}
\end{figure}
\bigskip
\noindent{\bf 4.1. How does the method normally behave?}
\smallskip
We believe the usual behavior of AAA equispaced interpolation is as follows. For
small values of $n$, there is a good chance that $f$ will be poorly resolved on the
grid, and the initial AAA interpolant will have poles between the grid points in $[-1,1]$. In such cases,
as described in the last section, the method switches to a least-squares fit that often
produces acceptable accuracy but without outperforming other methods.
As $n$ increases, however, $f$ begins to be resolved, and here rational approximation shows its
power. If $f$ happens to be itself rational, like the Runge function $1/(1+25x^2)$
used for experiments in a number of other papers,
AAA may capture it exactly. More typically, $f$ is
not rational but, as in the examples of Figure~\ref{figbig}, it has analytic
structure that rational approximants can exploit. If it is meromorphic, like $\tanh(5x)$,
then AAA quickly finds nearby poles and therefore converges at an accelerating rate.
Even if it has branch point singularities, rapid convergence still takes
place~\cite{clustering}.
In this middle phase of rapid convergence of AAA approximation, the errors are
many orders of magnitude bigger between the grid points (e.g., $10^{-6}$) than
at the grid points ($10^{-13}$). The big errors may be near the
endpoints, the pattern familiar in polynomial interpolation since Runge,
but they may also
be in the interior, as happens for example with approximation
of $f(x) = \sqrt{0.01 + x^2}$. Figure~\ref{twoposs} illustrates
these two possibilities. Convergence eventually happens because the grid points
get closer together and the big errors between them are clamped down.
\begin{figure}
\begin{center}
\vspace*{8pt}
\includegraphics[scale=.85]{twoposs}
\caption{\label{twoposs}AAA approximation in its mid-phase of
rapid convergence at $n=50$ for two different functions $f(x)$.
Black dots mark the $n$ sample points, with errors below the AAA relative
tolerance level of $10^{-13}$ marked by the red line. The circled black dots
are the subset of AAA support points, where the error in principle is $0$ (apart from
rounding errors), though it has been artificially plotted at $10^{-18}$.
With $f(x) = \sqrt{1.21-x^2}$, above, the big errors between sample points are
near the boundaries, whereas with
$f(x) = \sqrt{0.01+x^2}$, below, they are in the interior.}
\end{center}
\vspace*{-3pt}
\end{figure}
The AAA method does not keep converging for $n\to\infty$, however.
Instead, it eventually slows down and is limited by its prescribed
relative accuracy, $10^{-13}$ by default. Thus although it gets high accuracy faster
than the other methods, in the end it too levels off.
To illustrate the significance of the AAA tolerance, Figure~\ref{julia}
repeats the error plots for $f_C^{}(x) = \tanh(5x)$ and $f_D^{}(x)
= \sin(40 x)$ for $n = 4,8,\dots, 200$, but now calculated in 77-digit
BigFloat arithmetic using Julia (with the GenericLinearAlgebra
package) instead of the usual 16-digit
floating point arithmetic. The solid curve shows behavior with tolerance
$10^{-13}$ and the blue dots with tolerance $10^{-50}$.
\begin{figure}
\begin{center}
\kern 8pt
\includegraphics[scale=1.05]{julia}
\caption{\label{julia}AAA errors for two of the functions
of Figure~$\ref{figbig}$ computed in $77$-digit precision
with Julia. The solid lines are based on the usual AAA tolerance
of $10^{-13}$, and the blue dots are based on
tolerance $10^{-50}$.
(This computation does not check for poles in $[-1,1]$, which should
in principle lead to values $\|f-r\|=\infty$ especially
in the early stages of the curves on the right; but here the error
is just measured on a $1000$-point equispaced grid.)}
\end{center}
\end{figure}
The amber function $A(x)$ was constructed to have no hidden analytic
structure to be exploited; we think of it as being as far from rational as possible.
In Figure~\ref{ambercurves}, this is
reflected in the fact that AAA and polynomial approximants converge at approximately the
same rate until the latter begins to diverge exponentially.
Note also that in Figure~\ref{ambercurves}, unlike the five plots of Figure~\ref{figbig},
AAA fails to outperform the Floater--Hormann \verb|'equi'| method.
This is consistent with the view
that AAA is a strategy that exploits analytic structure, whereas
Floater--Hormann is a robust interpolation strategy that does not.
\bigskip
\noindent{\bf 4.2. How is this consistent with the impossibility theorem?}
\smallskip
In the introduction we summarized the
impossibility theorem of~\cite{ptk} as follows.
In approximation of analytic
functions from $n$ equispaced samples,
exponential convergence as $n\to\infty$ is only possible in
tandem with exponential instability; conversely,
a stable algorithm can converge at best root-exponentially.
The essential reason for this (and the essential construction in
the proof of the theorem) can be summarized in a sentence.
{\em Some analytic functions
are much bigger between the sample points than they are at the sample points;
thus high accuracy requires some approximations to be huge.}
We now explain how the
theorem relates to the six numerical methods presented in Figures~\ref{figbig}
and~\ref{ambercurves}.
{\em Fourier series plus polynomials}, with our choice of polynomial degree
$O(\sqrt n\kern 1pt)$, converge at a root-exponential rate. It is neither
exponentially accurate nor exponentially unstable.
The {\em Floater--Hormann \verb|'equi'| interpolant} also converges (it appears)
at a root-exponential rate, for reasons related to its adaptive choice of degree.
{\em Cubic splines} converge at a lower rate, just $O(n^{-4})$ for
smooth $f$. Again this method is neither
exponentially accurate nor exponentially unstable.
{\em Fourier extension} also appears to converge root-exponentially, making it, too,
neither exponentially accurate nor exponentially unstable.
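The Fourier extension fit can be sketched as follows (a hypothetical NumPy version of the appendix's MATLAB; the sizes are illustrative): a least-squares fit of a period-$4$ trigonometric sum $\sum_{|k|\le K} c_k e^{i\pi kx/2}$ to equispaced samples on $[-1,1]$.

```python
import numpy as np

# Hypothetical sketch of Fourier extension from [-1,1] to [-2,2]:
# least-squares fit of sum_{|k|<=K} c_k exp(i pi k x/2) to n equispaced
# samples, with oversampling ratio about n/(2K+1).
def fourier_extension(f, n, K):
    x = np.linspace(-1.0, 1.0, n)
    k = np.arange(-K, K + 1)
    A = np.exp(1j * np.pi * np.outer(x, k) / 2)
    c, *_ = np.linalg.lstsq(A, f(x).astype(complex), rcond=None)
    return lambda t: (np.exp(1j * np.pi *
                             np.outer(np.atleast_1d(t), k) / 2) @ c).real
```

Note that `lstsq` with its default rank cutoff implicitly regularizes the exponentially ill-conditioned matrix $A$; this is the rounding-error stabilization discussed below.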
{\em Polynomial least-squares,} however, reveals the hugeness of
certain functions. In the terms of the
theorem, this is the only one of our methods that
appears to be exponentially accurate and exponentially unstable.
\begin{figure}
\begin{center}
\kern 8pt
\includegraphics[scale=1.05]{twobytwo}
\caption{\label{twobytwo}The solid lines repeat the polynomial least-squares
(left) and Fourier extension (right) curves of Figure~$\ref{ambercurves}$.
The dots show corresponding results for alternative implementations of
each method. On the left, switching to the ill-conditioned monomial basis for polynomial
least-squares cuts off the
exponential accuracy and also the exponential instability. On the
right, switching to the well-conditioned Vandermonde with Arnoldi
basis for Fourier extension
introduces exponential accuracy and exponential instability.}
\end{center}
\end{figure}
Our statements about these last two methods, however, come with a big qualification, which
is illustrated in Figure~\ref{twobytwo}.
In fact, the difference
between Fourier extension and polynomial least-squares lies not in their
essence but in the fashion in which they are implemented. If you implement
either one with a well-conditioned basis, then it is exponentially accurate
and exponentially unstable. This is what we have done with the polynomial least-squares
method, which uses the well-conditioned basis of Chebyshev polynomials. The
Fourier extension method, on the other hand, was implemented with
the ill-conditioned basis of
complex exponentials $\exp(i\pi k x/2)$. In an ill-conditioned basis like this, high
accuracy will require huge coefficient vectors~\cite{fna1,fna2,ds}, but rounding errors prevent
their computation in floating point arithmetic through the mechanism of
matrices whose condition numbers are unable to grow bigger than
$O((\varepsilon_{\hbox{\scriptsize\rm machine}}^{})^{-1})$.
It is these rounding errors that make our
implementation of Fourier extension stable. Re-implemented
via Vandermonde with Arnoldi~\cite{vander}, as shown in Figure~\ref{twobytwo},
it becomes exponentially accurate and exponentially unstable.
Conversely, we can implement polynomial least-squares with the exponentially
ill-conditioned monomial basis instead of Chebyshev polynomials.
Because of rounding errors, it then loses its exponential
accuracy and its exponential instability, as also shown in Figure~\ref{twobytwo}.
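The basis-dependence can be seen directly in the condition numbers of the two least-squares matrices (a small NumPy check; the specific sizes, $80$ points and degree $39$, are arbitrary choices for illustration):

```python
import numpy as np

# Condition numbers of degree-39 least-squares matrices on 80 equispaced
# points of [-1,1]: the monomial basis is exponentially ill-conditioned,
# while the Chebyshev basis stays moderate.
x = np.linspace(-1.0, 1.0, 80)
V_mono = np.vander(x, 40, increasing=True)           # 1, x, ..., x^39
V_cheb = np.polynomial.chebyshev.chebvander(x, 39)   # T_0, ..., T_39
print(np.linalg.cond(V_mono))   # astronomically large
print(np.linalg.cond(V_cheb))   # modest
```

In floating point, the first condition number is effectively capped near $(\varepsilon_{\hbox{\scriptsize\rm machine}}^{})^{-1}$, which is the mechanism by which rounding errors regularize the monomial implementation.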
Finally, what about AAA?
The experiments suggest it is neither exponentially
accurate nor exponentially unstable.
Insofar as the impossibility theorem is concerned,
there is no inconsistency. Still, what is the mechanism?
In Figure~\ref{figbig} we have highlighted the last value of $n$
for which AAA interpolates the data.
In these and many other examples,
that value is very small. Afterwards,
AAA favors closer fits to the data over increasing degrees
of rational approximation, and this has the familiar effect
of oversampling. Yet, owing to its nonlinear nature,
AAA is free to vary the oversampling factor---and convergence
rates along with it---depending on the data and on the chosen tolerance.
The stability of AAA also stems from the representation
of rational functions in barycentric form, which is discussed
in the original AAA paper~\cite{aaa}.
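The barycentric representation evaluates $r(x)$ as a ratio of two partial fractions, which remains numerically benign even when $x$ is near a support point. Here is a generic evaluation sketch (not the Chebfun internals); the weights and nodes are whatever the approximation algorithm has produced.

```python
import numpy as np

# Evaluate the barycentric form
#   r(x) = [sum_j w_j f_j/(x - z_j)] / [sum_j w_j/(x - z_j)],
# whose limit at a support point z_j is f_j.
def bary(x, zj, fj, wj):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    with np.errstate(divide='ignore', invalid='ignore'):
        C = 1.0 / (x[:, None] - zj)               # Cauchy matrix
        r = (C @ (wj * fj)) / (C @ wj)
    for i, j in zip(*np.nonzero(x[:, None] == zj)):
        r[i] = fj[j]                              # fix exact hits
    return r
```

With the nodes $z_j=-1,0,1$, data $f_j=z_j^2$, and the polynomial barycentric weights $1,-2,1$, this formula reproduces the interpolating quadratic $x^2$ exactly.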
\section{Discussion}
Although we have emphasized just the behavior on $[-1,1]$,
it is well known that rational approximants have good properties of
analytic continuation, beyond the original approximation set.
The AAA method certainly partakes of this advantageous behavior.
For example, Figure~\ref{fig5} shows the approximation of Figure~\ref{fig1} again
($f(x) = e^x/\sqrt{1+9x^2}$ sampled at $50$ equispaced points in $[-1,1]$), but now evaluated
in the complex plane. There are many digits of accuracy far from the
approximation domain $[-1,1]$. This is numerical analytic continuation, and the
other methods we have compared against have no such capabilities.
\begin{figure}
\begin{center}
\vspace*{28pt}
\includegraphics[scale=.7]{fig5}
\caption{\label{fig5}The AAA approximation of Figure~$\ref{fig1}$
extended into the complex plane. The dots are the poles of the rational
approximant, and the curves are level curves of the error $|f(z)-r(z)|$
with levels (from outside in) $10^{-2}, 10^{-4},\dots, 10^{-14}$.}
\end{center}
\end{figure}
Another impressive feature of rational approximation is its ability to
handle sampling grids with missing data without much loss of accuracy.
A striking illustration of this effect is presented in Figure~2 of~\cite{wdt}.
The AAA algorithm, as implemented in Chebfun, has a number of adjustable
features. We have bypassed all these, avoiding both the ``cleanup''
procedure and the Lawson iteration~\cite{aaa} (the latter is not
invoked by default in any case).
One of the drawbacks of AAA approximation is that although it is
extremely fast at lower degrees, say, $d< 100$, it slows down
for higher degrees: the complexity is $O(m d^{\kern .8pt 3})$, where $m$ is
the size of the sample set and $d$ is the degree. For most applications,
we have in mind a regime of problems with $d<100$.
None of the examples shown in this paper come close to this limit.
(With the current Chebfun code, one could write for example
\verb|aaa(F,X,'mmax',200,'cleanup','off','lawson',0)|.)
Our discussion has assumed that the data $f(x_k)$ are accurate samples
of a smooth function, the only errors being rounding errors down at the relative level of
machine precision, around $10^{-16}$. The Chebfun default tolerance
of $10^{-13}$ was set with this error level in mind.
To handle data contaminated by noise at a higher level $\varepsilon$, we
recommend running AAA with its tolerance parameter \verb|'tol'| set to one or two orders
of magnitude greater than $\varepsilon$.
For unknown noise levels, it should be possible to devise adaptive methods based
on apparent convergence rates---detecting the bend in an L-shaped curve---but we
have not pursued this. Another approach to dealing with noise in rational approximation
is to combine AAA fitting
in space with calculations related to Prony's method in Fourier space, as advocated
by Wilber, Damle, and Townsend~\cite{wdt}.
Most of the methods we have discussed are linear, but AAA is not. This
raises the question, will it do as well for a truly complicated ``arbitrary'' function
as it has done for the functions with relatively simple properties we have
examined? As a check of its performance in such a setting, Figure~\ref{all6} repeats
Figure~\ref{ambercurves}, but now for the function $f$ consisting of the
sum of all six test functions we have considered:
$f_A^{},$ $f_B^{},$ $f_C^{},$ $f_D^{},$ $f_E^{},$ and $A$.
As usual, AAA outperforms the other methods, but blips in
the convergence curve at $n=200$ and $264$ highlight that it comes with no guarantees.
Both blips correspond to cases where ``bad poles'' have turned up in
the approximation interval $[-1,1]$.
These problems are related to rounding errors, as can be confirmed
by an implementation in extended precision arithmetic as in Figure~\ref{julia}
or by simply raising the AAA convergence tolerance to $10^{-11}$.
It does seem that further research about avoiding unwanted poles in
AAA approximation is called for, but fortunately, in a
practical setting, such poles are
immediately detectable and thus pose no risk of inaccuracy
without warning to the user.
\begin{figure}
\begin{center}
\kern 8pt
\includegraphics[scale=.9]{all6}
\caption{\label{all6}AAA approximation of the function $f(x)$ defined
as the sum of all six functions
$f_A^{},$ $f_B^{},$ $f_C^{},$ $f_D^{},$ $f_E^{},$ and $A$ considered in
this paper. As usual, AAA mostly outperforms the other methods, but there
are blips at $n=200$ and $n=264$ corresponding to poles in the
approximation interval $[-1,1]$.}
\end{center}
\end{figure}
\section*{Acknowledgments}
We are grateful for helpful suggestions from
John Boyd, Bengt Fornberg, Karl Meerbergen, Yuji Nakatsukasa,
and Olivier S\`ete.
\section*{Appendix: MATLAB/Chebfun code for Figure 2}
\indent~~
\vspace{10pt}
{\footnotesize
\verbatiminput{tests.m}
\par
}
https://arxiv.org/abs/1910.05980 | Fractional Laplacian, homogeneous Sobolev spaces and their realizations | We study the fractional Laplacian and the homogeneous Sobolev spaces on ${\mathbb R}^d$, by considering two definitions that are both considered classical. We compare these different definitions, and show how they are related by providing an explicit correspondence between these two spaces, and show that they admit the same representation. Along the way we also prove some properties of the fractional Laplacian.
\section{Introduction and statement of the main results}
The goal of this paper is to clarify a point that in our opinion has
been overlooked in the literature.
Classically, the homogeneous Sobolev
spaces and the fractional Laplacian
are defined in two different ways.
In one case, we consider the Laplacian $\Delta$
as densely defined, self-adjoint and
positive on $L^2({\mathbb R}^d)$, and following Komatsu \cite{Komatsu}
it is possible to define the fractional powers $\Delta^{s/2}$, $s>0$,
by means of the spectral theorem.
Denoting by ${\mathcal S}$ the space of Schwartz functions, one observes that
for $p\in(1,\infty)$,
$\| \Delta^{s/2} \varphi\|_{L^p({\mathbb R}^d)}$ is a norm on ${\mathcal S}$. The
{\em homogeneous Sobolev space} is the space
$\dot{W}^{s,p}$ defined as the closure of ${\mathcal S}$ in such a
norm.
The second definition is modelled on the classical Littlewood--Paley
decomposition
of function spaces (see details below) and gives rise to
an operator, that we will denote by $\dot\Delta^{s/2}$, acting on spaces
of tempered distributions {\em modulo polynomials}.
Below, we describe the latter approach and show
how these two definitions are related to each other, by showing that
they admit the same,
explicit realization. We mention that
the analysis of the fractional Laplacian $\Delta^s$ has drawn
great interest in recent years, beginning with the groundbreaking papers
\cite{Caffarelli,hitchhiker}; see also the recent papers
\cite{bourdaud2,bourdaud3,brasco-salort,Kwasnicki}. We also mention that we were led to consider this
problem while working on spaces of entire functions of exponential
type whose fractional Laplacian is $L^p$ on the real line, \cite{MPS-Bernstein}.
\medskip
Let ${\mathcal S}$ denote the space of Schwartz functions,
${\mathcal S}'$ denote the space of tempered
distributions. We denote by ${\mathbb N}_0$ the set of nonnegative integers
and
by ${\mathcal P}_k$ the set of all polynomials in $d$ real
variables of degree $\le k$, with $k\in{\mathbb N}_0$. We also set
${\mathcal P}_\infty=\bigcup_{k=0}^{+\infty} {\mathcal P}_k$, the collection of all
polynomials. We also write
$\overline{\mathbb N}={\mathbb N}_0\cup\{+\infty\}$.
For $k\in\overline{\mathbb N}$, we consider the equivalence relations in ${\mathcal S}'$
$$
f\sim_k g \Longleftrightarrow f-g\in {\mathcal P}_k
$$
and we denote by ${\mathcal S}'/{\mathcal P}_k$ the space of tempered distributions modulo
polynomials in ${\mathcal P}_k$, that is, the space of equivalence classes with
respect to the equivalence relation $\sim_k$. If $u\in{\mathcal S}'$ we denote by
$[u]_k$ its equivalence class in ${\mathcal S}'/{\mathcal P}_k$.
For $k\in {\mathbb N}_0$, we denote by ${\mathcal S}_k$
the space of all Schwartz functions $\varphi$ such that
$$
\int x^\gamma\varphi(x)\, dx=0
$$
for any multi-index $\gamma\in{\mathbb N}_0^d$ with $|\gamma|\le k$ and set
${\mathcal S}_\infty =\bigcap_{k=0}^{+\infty} {\mathcal S}_k$.
Each space ${\mathcal S}_k$, $k\in \overline{\mathbb N}$ is a subspace
of ${\mathcal S}$ that
inherits the topology of ${\mathcal S}$. Moreover, it is easy to see that
$$
\big({\mathcal S}_k\big)'={\mathcal S}'/{\mathcal P}_k \,.
$$
For $p\in[1,\infty]$ we denote by $L^p({\mathbb R}^d)$ (or, simply by
$L^p$) the standard Lebesgue space on ${\mathbb R}^d$ and we write
$$
\|f\|_{L^p}^p=\int |f(x)|^p\, dx \,,
$$
if $p<\infty$, and $\|f\|_{L^\infty}
=\operatorname{ess}\sup_{x\in{\mathbb R}^d} |f(x)|$.
If $f\in L^1({\mathbb R}^d)$ we define the Fourier transform
$$
\hat f (\xi) = \int f(x) e^{-2\pi i x\xi}\, dx \,.
$$
The Fourier transform induces a surjective isomorphism
${\mathcal F}:{\mathcal S}\to{\mathcal S}$ and
then
it extends to a unitary operator ${\mathcal F}: L^2\to
L^2$ and to a surjective isomorphism ${\mathcal F}:{\mathcal S}'\to{\mathcal S}'$.
For $u\in{\mathcal S}'$, we shall also write $\widecheck u $ to denote ${\mathcal F}^{-1} u$.
It is important here to observe that if $\varphi\in{\mathcal S}_\infty$, then for
all $\tau\in{\mathbb R}$,
${\mathcal F}(|\cdot|^\tau\widecheck\varphi)\in {\mathcal S}_\infty$ as well, see e.g.
\cite{Triebel,Grafakos}.
\medskip
\subsection{The
fractional Laplacian $\dot\Delta^{s/2}$ and the homogeneous
Sobolev spaces $\dot{L}^p_s$}
We now recall what is perhaps the most standard
definition of the fractional powers of the Laplacian and of the homogeneous
Sobolev spaces. This material is classical and well known; see e.g.
\cite{Triebel,Grafakos}.
\begin{defn}\label{defn-homo-lap} {\rm
Let $s>0$. For $\varphi\in {\mathcal S}_\infty$, we define the
{\em homogeneous fractional Laplacian} of $\varphi$ as
$
\dot{\Delta}^{s/2} \varphi ={\mathcal F}^{-1}(|\cdot|^{s}\widehat \varphi) \,.
$
}
\end{defn}
The next lemma follows from the arguments in \cite[Chapter
1]{Grafakos}, or \cite[5.1.2]{Triebel}.
\begin{lem}\label{lem1}
The operator
$$
\dot{\Delta}^{s/2} :{\mathcal S}_\infty\to {\mathcal S}_\infty
$$
is a surjective isomorphism with inverse
$$
{\mathcal I}_s \varphi= {\mathcal F}^{-1} \big(|\xi|^{-s}\widehat\varphi\big) \,.
$$
\end{lem}
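Since $\dot{\Delta}^{s/2}$ acts simply as the Fourier multiplier $|\xi|^s$, its behavior can be illustrated numerically. The sketch below (an illustration only, not part of the theory; the grid, the sample function and the tolerance are our own choices) discretizes the multiplier with \texttt{numpy}, using the $e^{-2\pi i x\xi}$ convention adopted above, and checks the case $s=2$, where the symbol $|\xi|^2$ corresponds to $-\frac{1}{4\pi^2}\frac{d^2}{dx^2}$, against the explicit second derivative of a Gaussian.

```python
import numpy as np

def frac_laplacian_1d(f, x, s):
    """Spectral fractional Laplacian on a uniform grid, using the
    e^{-2*pi*i*x*xi} Fourier convention of the text (symbol |xi|^s)."""
    dx = x[1] - x[0]
    xi = np.fft.fftfreq(len(x), d=dx)   # frequencies matching the 2*pi convention
    fhat = np.fft.fft(f)
    return np.real(np.fft.ifft(np.abs(xi) ** s * fhat))

# sanity check with s = 2: the symbol |xi|^2 corresponds to
# -(1/(4*pi^2)) d^2/dx^2 in this convention
x = np.linspace(-10, 10, 2048, endpoint=False)
f = np.exp(-np.pi * x ** 2)             # a Gaussian, its own Fourier transform
num = frac_laplacian_1d(f, x, 2.0)
exact = (1.0 / (2.0 * np.pi) - x ** 2) * np.exp(-np.pi * x ** 2)
assert np.max(np.abs(num - exact)) < 1e-8
```

The periodization and truncation errors are negligible here because the Gaussian and its Fourier transform decay far below machine precision at the boundary of the grid and of the frequency window.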
Next,
given $[u]\in {\mathcal S}'/{\mathcal P}_\infty$ we define another distribution in
${\mathcal S}'/{\mathcal P}_\infty$ by setting for all $\varphi\in{\mathcal S}_\infty$
\begin{equation}\label{distribution}
\big\langle{\mathcal F}^{-1}(|\cdot|^s\widehat u),\varphi\big\rangle
:=\big\langle u,{\mathcal F}(|\cdot|^s\widecheck \varphi)\big\rangle,
\end{equation}
where, with an abuse of notation,\footnote{We warn the reader that we
shall denote with the same symbol $\langle\cdot,\cdot\rangle$ different
pairings of duality, such as ${\mathcal S}$ and ${\mathcal S}'$, ${\mathcal S}_\infty$ and
${\mathcal S}'/{\mathcal P}_\infty$,
$L^p$ and $L^{p'}$,
etc. The actual pairing of duality should be clear from the context
and there should not be any confusion.} we denote by $\langle\cdot,\cdot\rangle$
also
the pairing of duality between ${\mathcal S}_\infty$ and ${\mathcal S}'/{\mathcal P}_\infty$.
We remark that both sides of
\eqref{distribution} are clearly independent of the choice of the
representative in $[u]$, and that \eqref{distribution} indeed defines
a distribution, as follows at once from Lemma \ref{lem1}.
Then, we extend the definition of $\dot{\Delta}^{s/2}$ to
$({\mathcal S}_\infty)'={\mathcal S}'/{\mathcal P}_\infty$.
Then, we extend the definition of $\dot{\Delta}^{s/2}$ to
$({\mathcal S}_\infty)'={\mathcal S}'/{\mathcal P}_\infty$.
\begin{defn}\label{defn-frac-lap-eq-classes} {\rm
We define the operator
$$
\dot{\Delta}^{s/2}: {\mathcal S}'/{\mathcal P}_\infty\to {\mathcal S}'/{\mathcal P}_\infty
$$
by setting, for any $[u]\in {\mathcal S}'/{\mathcal P}_\infty$,
\begin{equation}\label{frac-lapl}
\dot{\Delta}^{s/2}[u] ={\mathcal F}^{-1}(|\cdot|^{s}\widehat u) \,,
\end{equation}
and we call it
the homogeneous fractional Laplacian (of
order $s$) of $[u]$.
}
\end{defn}
Since the right hand side is independent of the choice of
the representative in $[u]$, we may simply write
$\dot{\Delta}^{s/2}u$ in place of $\dot{\Delta}^{s/2}[u]$.
\begin{remark}\label{remark-Laplacians}{\em
We stress the fact that the terminology ``homogeneous Laplacian'' is
non-standard, and that such operator is typically simply called
fractional Laplacian. However, one of the main goals of the present
paper is to consider the two standard different definitions of the
fractional Laplacian and of the corresponding homogeneous Sobolev
spaces and to study how they are related to each other.
Thus, we need to
distinguish them, and we do so by introducing such terminology.
}
\end{remark}
For $1<p<\infty$, we define $\dot{L}^p$ as the space of all
elements of ${\mathcal S}'/{\mathcal P}_\infty$ whose equivalence class contains a
representative belonging to $L^p$, that is,
\begin{equation}\label{def-hom-Lp}
\dot{L}^p = \Big\{ [u]\in {\mathcal S}'/{\mathcal P}_\infty:\, \|[u]\|_{\dot L^p}^p
:=\inf_{P\in{\mathcal P}_\infty} \|u+P\|_{L^p}^{p} <+\infty \Big\} \,.
\end{equation}
Clearly, if $[u]\in \dot{L}^p$, the
representative of $[u]$ in $L^p$ is unique.
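Indeed, the uniqueness follows from a one-line observation: if $u$ and $u+P$ both belong to $L^p$ for some $P\in{\mathcal P}_\infty$, then

```latex
P=(u+P)-u\in L^p \,,
```

while a nonzero polynomial is bounded away from zero on a set of infinite measure, so that $\|P\|_{L^p}=+\infty$ for every $P\in{\mathcal P}_\infty\setminus\{0\}$; hence $P=0$.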
We observe that, since ${\mathcal S}_\infty$ is dense in $L^p$ for all
$p\in[1,+\infty)$, see Lemma \ref{added-lem} below, we have that
$\dot\Delta^{s/2}u \in \dot L^p$, $p\in(1,\infty)$, if (and only if) the
mapping
$$
{\mathcal S}_\infty\ni \psi \mapsto \langle u, \dot\Delta^{s/2} \psi\rangle
$$
is bounded in the $L^{p'}$-norm. In fact, in this case, there exists a unique $g\in
L^p$ such that
$ \langle u, \dot\Delta^{s/2} \psi\rangle = \langle g, \psi\rangle$ for all
$\psi\in{\mathcal S}_\infty$ and we set
$$
\dot\Delta^{s/2} u = [g]\,.
$$
\medskip
Next, notice that by definition $\dot{\Delta}^{s/2} P =0$ for any
polynomial $P$.
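This can be seen directly from \eqref{distribution}: if $P\in{\mathcal P}_\infty$ and $\varphi\in{\mathcal S}_\infty$, then, since ${\mathcal F}(|\cdot|^s\widecheck\varphi)\in{\mathcal S}_\infty$ has all vanishing moments,

```latex
\big\langle \dot{\Delta}^{s/2}P, \varphi \big\rangle
= \big\langle P, {\mathcal F}(|\cdot|^{s}\widecheck\varphi) \big\rangle = 0 \,,
```

because a polynomial pairs to zero against any Schwartz function all of whose moments vanish.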
The homogeneous Sobolev spaces $\dot{L}^p_s$ are defined as
follows.
\begin{defn}\label{defn-hom-sob-space} {\rm
Let $s>0$ and let $1<p<\infty$. Then, the homogeneous
Sobolev space $\dot{L}^p_s$ is defined as
$$
\dot{L}^p_s = \big\{
[f]\in{\mathcal S}'/{\mathcal P}_\infty:\, \dot{\Delta}^{s/2}f
\in \dot L^p
\big\}\,,
$$
and we set
\begin{equation}\label{hom-sob-norm}
\|f\|_{\dot{L}^p_s}=\|\dot{\Delta}^{s/2}f\|_{\dot{L}^p} \,.
\end{equation} \medskip
}
\end{defn}
Notice that, because of \eqref{def-hom-Lp}, equation \eqref{hom-sob-norm}
defines a norm on $\dot{L}^p_s$. Thus, the homogeneous
Sobolev space $\dot{L}^p_s$ is a space of equivalence classes in ${\mathcal S}'/{\mathcal P}_\infty$.\medskip
\subsection{Fractional powers of $\Delta$ and the homogeneous Sobolev
spaces ${\dot{W}}^{s,p}$}
We now describe more in details the other definition of the homogeneous Sobolev
spaces.
As we mentioned,
the operator $\Delta$ is densely defined, self-adjoint and
positive on $L^2({\mathbb R}^d)$, and following Komatsu \cite{Komatsu}
it is possible to define the fractional powers $\Delta^{\alpha/2}$ for
all $\alpha\in{\mathbb C}$ with $\operatorname{Re}\alpha>0$. Here we restrict our
attention to the case $\alpha=s>0$.
For $\varphi\in{\mathcal S}$,
by the spectral theorem we have that
\begin{equation}\label{frac-lapl-def2}
\Delta^{s/2} \varphi ={\mathcal F}^{-1}(|\cdot|^s \widehat \varphi) \,.
\end{equation}
We now have a simple lemma to describe the basic properties of
the action of $\Delta^{s/2}$ on ${\mathcal S}$.
\begin{lem}\label{added-lem}
Let $s>0$ and $p\in(1,\infty)$. The following properties
hold:
\begin{itemize}
\item[(i)] ${\mathcal S}_\infty$ is dense in $L^p$;
\item[(ii)] for every $\varphi\in{\mathcal S}$, $\Delta^{s/2}\varphi\in L^p$;\smallskip
\item[(iii)] $\|\Delta^{s/2}\varphi\|_{L^p}$ is a norm on ${\mathcal S}$.
\end{itemize}
\end{lem}
\proof
(i) is well known and also easy to prove directly. For (ii), given
$\varphi\in{\mathcal S}$, write $\Delta^{s/2}\varphi = \Delta^{s/2}(I+\Delta)^{-M}
(I+\Delta)^M\varphi$. If $2M>s$ it is easy to check that
$\Delta^{s/2}(I+\Delta)^{-M}$ is a Fourier multiplier satisfying the
Mihlin--H\"ormander condition (see \cite[Theorem
5.2.7]{Grafakos-cl}), hence bounded on $L^p$,
$p\in(1,\infty)$. Since $(I+\Delta)^M\varphi\in L^p$ for all $p$, (ii)
follows. In order to prove (iii), we only need to observe that if
$\varphi\in{\mathcal S}$ and
$\Delta^{s/2}\varphi=0$, then $\widehat\varphi$ is a distribution supported at
the origin, hence $\varphi$ is a polynomial; since $\varphi\in{\mathcal S}$, $\varphi=0$.
\qed \medskip
Then, we have the following definition.
\begin{defn}\label{dotWps-def}{\rm
For $1<p<\infty$ and $s>0$ we define ${\dot{W}}^{s,p}$, also
called the {\em homogeneous Sobolev space} as
the closure of ${\mathcal S}$
with respect to the norm $\|\Delta^{s/2} \varphi\|_{L^p}$. More
precisely, consider the equivalence relation on the space of $L^p$-Cauchy
sequences of Schwartz functions
$\{ \varphi_n\}\sim \{ \psi_n\} $ if $\Delta^{s/2}(\varphi_n-\psi_n)\to 0$ in $L^p$ as $n\to+\infty$, and
denote by $[\{ \varphi_n\}]$ the equivalence classes; then
\begin{multline}
{\dot{W}}^{s,p}
= \Big\{ \big[\{\varphi_n\}\big]: \ \{
\varphi_n\}\subseteq{\mathcal S},\, \{ \Delta^{s/2}\varphi_n\} \text{\ is a Cauchy
sequence in\ } L^p, \\
\text{with }
\big\| \big[\{\varphi_n\}\big]\big\|_{{\dot{W}}^{s,p}} = \lim_{n\to+\infty}
\| \Delta^{s/2} \varphi_n \|_{L^p}
\Big\}\,.\label{dotWps-eq}
\end{multline}
}
\end{defn}
If $\{ \Delta^{s/2}\varphi_n\}$ is a
Cauchy
sequence in $L^p$ and we denote by $f$ its limit, we set
$$
\|f\|_{{\dot{W}}^{s,p}} = \lim_{n\to+\infty} \|
\Delta^{s/2}\varphi_n\|_{L^p} \,. \medskip
$$
We remark that it is equivalent to define ${\dot{W}}^{s,p}$
as the closure of the space of ${\mathcal C}^\infty$ functions with compact support,
which we denote by ${\mathcal C}^\infty_c$, with respect to the same norm
$\|\Delta^{s/2} \varphi\|_{L^p}$.
\medskip
Thus, we have described two different notions of the fractional
Laplacian, $\dot\Delta^{s/2}$ and $\Delta^{s/2}$, which in particular
are defined on, and take values in, completely different spaces. Both
notions can be considered ``classical''. Moreover, we have
described two different scales $\dot{L}^p_s$ and $\dot{W}^{s,p}$ of
homogeneous Sobolev spaces, for $s>0$ and $p\in(1,\infty)$.
This note has two main goals. The first
is to describe the spaces ${\dot{W}}^{s,p}$
explicitly and to obtain a realization of them as spaces of functions
(see below).
The second is to show how
the homogeneous Sobolev spaces ${\dot{W}}^{s,p}$ and $\dot{L}^p_s$ are
related to each other, by providing an explicit one-to-one and
onto correspondence between them.
\medskip
Finally, observe that, if $\varphi\in{\mathcal S}$, then $\dot{\Delta}^{s/2}\varphi =
[\Delta^{s/2}\varphi]_\infty$.
\medskip
\subsection{The realization spaces $E^{s,p}$}
Following G. Bourdaud \cite{Bourdaud, bourdaud2, bourdaud3}, if $k\in\overline{\mathbb N}$ and
$\dot{X}$ is a given subspace of
${\mathcal S}'/{\mathcal P}_k$ which is a Banach space, such that the natural inclusion of
$\dot{X}$ into ${\mathcal S}'/{\mathcal P}_k$ is continuous,
we call a {\em realization} of $\dot{X}$ a subspace $E$ of ${\mathcal S}'$ for
which there exists a bijective linear map
$$
R: \dot{X}\to E
$$
such that $\big[ R[u] \big] = [u]$ for every $[u]\in\dot{X}$.
We endow $E$ with the norm given by $\| R[u]\|_E = \| [u] \|_{\dot{X}}$;
$E$ obviously becomes a Banach space with this norm.
Thus, a realization of
$\dot{L}^p_s$ is a space $E$ of tempered distributions
such that for each $f\in E$, the equivalence class
modulo polynomials $[f]$ of $f$ is in $\dot{L}^p_s$ and, conversely, for each
$[f]\in\dot{L}^p_s$ there exists a unique $\tilde f\in[f]$ such that
$\tilde f\in E$, and such correspondence
is an isometry. Our first
main result is that the spaces $E^{s,p}$ defined below are a realization of $\dot{W}^{s,p}$,
for $p\in(1,+\infty)$, $s>0$.
The spaces are also a realization of the spaces $\dot{L}^p_s$, as was
indicated
already in \cite{Bourdaud, bourdaud2}, and here we give a proof of this
latter fact. As a consequence, we obtain a precise correspondence between
the spaces $\dot{W}^{s,p}$ and $\dot{L}^p_s$.
In order to describe such realization spaces $E^{s,p}$ we
need to recall the definition of $\operatorname{BMO}$ and of the
homogeneous Lipschitz spaces
$\dot{\Lambda}^s$, for $s>0$. We begin with the latter one.
For $s>0$ we write $\lfloor s \rfloor$
to denote its integer part.
\begin{defn}\label{diff-op-def}{\rm
The
difference operators $D^k_h$
of increment
$h\in{\mathbb R}^d\setminus\{0\}$ and of order $k\in{\mathbb N}$
are defined as follows.
For $k=1$, we write
$D_h$ in place of $D_h^1$ and set
$D_h f(x)=f(x+h)-f(x)$;
then, inductively, for $k=2,3,\dots$,
$$
D_h^k f(x)=D_h \big[ D_h^{k-1}f\big](x)\,.
$$
}
\end{defn}
For a non-negative integer $m$, we denote by ${\mathcal C}^m$ the space of
$m$ times continuously differentiable functions.
We recall the following well-known facts, see \cite[6.3.1]{Grafakos}.
\begin{prop}\label{elem-prop-finite-diff}
{\rm (i)} For $k=2,3,\dots$ we have the explicit expression
$$
D_h^k f(x) =\sum_{j=0}^k (-1)^{k-j} \begin{pmatrix} k\\
j \end{pmatrix} f(x+jh)\,.
$$
{\rm (ii)} If $f\in {\mathcal C}({\mathbb R}^d)$ is such that
$D_h^kf(x)=0$ for all $x,h\in{\mathbb R}^d$, $h\neq0$, then $f$ is a
polynomial of degree at most $k-1$; conversely, $D_h^kf=0$
for every polynomial $f$ of degree $\le k-1$.
\end{prop}
\medskip
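As a concrete sanity check of the definition and of Proposition \ref{elem-prop-finite-diff} (ii) (a numerical illustration with arbitrary sample functions and step, not part of the text), one can verify that $D_h^2$ annihilates affine functions, while $D_h^2$ applied to $t^2$ gives the constant $2h^2$:

```python
import numpy as np

def D(f, h, k):
    """k-th order difference operator D_h^k, defined recursively as in the text."""
    if k == 1:
        return lambda x: f(x + h) - f(x)
    return lambda x: D(f, h, k - 1)(x + h) - D(f, h, k - 1)(x)

x = np.linspace(-3, 3, 7)
h = 0.5
# degree-1 polynomials are annihilated by D_h^2 ...
assert np.allclose(D(lambda t: 2 * t + 1, h, 2)(x), 0.0)
# ... while D_h^2 applied to t^2 gives the constant 2 h^2
assert np.allclose(D(lambda t: t * t, h, 2)(x), 2 * h * h)
```

More generally, $D_h^k$ applied to a monomial of degree $k$ returns the constant $k!\,h^k$ times the leading coefficient.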
\begin{defn}{\rm
Let
$\gamma>0$. We define the {\em homogeneous Lipschitz space}
$\dot{\Lambda}^\gamma$ as
$$
\dot{\Lambda}^\gamma = \Big\{ [f]\in {\mathcal C}/{\mathcal P}_{\lfloor \gamma
\rfloor}\, :\,
\| f\|_{\dot{\Lambda}^\gamma} := \sup_{x,h\in{\mathbb R}^d,\, h\neq0}
\frac{|D_h^{\lfloor \gamma \rfloor +1} f(x)|}{|h|^\gamma} <+\infty \Big\}\,.
$$
}
\end{defn}
We remark that, for all $\gamma>0$,
if $[f]_{\lfloor
\gamma\rfloor}\in \dot{\Lambda}^\gamma$, then $f$ is a function of moderate growth, so
that $\dot{\Lambda}^\gamma \subseteq {\mathcal S}'/{\mathcal P}_{\lfloor \gamma
\rfloor}$. Moreover,
$f$ is in ${\mathcal C}^{\lfloor
\gamma\rfloor}$.
For these and other properties of Lipschitz spaces, see
e.g. \cite{Krantz-Lipschitz-paper,Triebel,Grafakos}. \medskip
Next, we introduce the Sobolev-$\operatorname{BMO}$ spaces.
\begin{defn}{\rm
The space $\operatorname{BMO}$ is the space of locally integrable
functions, modulo constants, such that
$$
\| f\|_{\operatorname{BMO}}:= \sup_{x\in{\mathbb R}^d} \sup_{r>0} \frac{1}{|B_r|} \int_{B_r(x)}
|f(y)-f_{B_r}|\, dy <+\infty \,,
$$
where $B_r(x)=B_r$ denotes the ball centered at $x$ of radius $r>0$,
$|B_r|$ its measure, and $f_{B_r}$ the average of $f$ over $B_r(x)$.
For $k=1,2,\dots$, we define the Sobolev-$\operatorname{BMO}$ space as
$$
S_k(\operatorname{BMO}) = \big\{ f\in{\mathcal S}'/{\mathcal P}_k:\, \partial_x^\alpha f \in\operatorname{BMO}\ \text{for
} |\alpha|=k
\big\} \,,
$$
and set
$$
\| f\|_{S_k(\operatorname{BMO})} = \sum_{|\alpha|=k} \| \partial^\alpha f\|_{\operatorname{BMO}} \,.
$$
}
\end{defn}
\begin{defn}{\rm
Given $f\in{\mathcal S}'$, if there
exists a sequence $\{\varphi_n\}\subseteq{\mathcal S}$ such that
\begin{itemize}
\item[{\tiny $\bullet$}] $\varphi_n\to f$ in ${\mathcal S}'$,
\item[{\tiny $\bullet$}] $\{ \Delta^{s/2} \varphi_n\}$ is a Cauchy
sequence in $L^p({\mathbb R}^d)$,
\end{itemize}
then we say that $\Delta^{s/2} f$ is defined as the $L^p$-limit of
$\{\Delta^{s/2} \varphi_n\}$ and we set $\| \Delta^{s/2} f \|_{L^p}=\lim_{n\to+\infty} \|
\Delta^{s/2} \varphi_n\|_{L^p}$.
}
\end{defn}
\begin{remark}\label{convergence-rem}{\rm
We observe that the definition is well posed since, if
$\{\varphi_n\},\{\psi_n\}\subseteq{\mathcal S}$ are two sequences such that
$\varphi_n,\psi_n\to f$ in ${\mathcal S}'$ and
$\{\Delta^{s/2} \varphi_n\}, \{\Delta^{s/2} \psi_n\}$ are both Cauchy in
$L^p$, then for every $\eta\in{\mathcal S}_\infty$,
$$
\langle \Delta^{s/2} \varphi_n -\Delta^{s/2} \psi_n, \eta\rangle =
\langle \varphi_n -\psi_n, \Delta^{s/2} \eta\rangle \to 0
\,
$$
as $n\to\infty$. Since ${\mathcal S}_\infty$ is dense in $L^p$,
$\{\Delta^{s/2} \varphi_n\}, \{\Delta^{s/2} \psi_n\}$ have the same
$L^p$-limit.
}
\end{remark}
\medskip
We are now ready to define the spaces $E^{s,p}$ that we will
show to be the realization spaces for ${\dot{W}}^{s,p}$.
For a sufficiently smooth function $f$, we denote by $P_{f;m;x_0}$ the
Taylor polynomial of $f$ of degree $m\in{\mathbb N}_0$ at $x_0\in{\mathbb R}^d$.
\begin{defn}\label{Esp-def}{\rm For $s>0$ and $p\in(1,+\infty)$, we define the
spaces $E^{s,p}$ as follows.
{\rm (i)} Let $0<s<\frac dp$, and let $p^*\in(1,\infty)$ be given by
$ \frac{1}{p^*}=\frac1p -\frac sd$.
Then, we define
$$
E^{s,p}=\big\{f\in L^{p^*}:\,
\| f\|_{E^{s,p}} := \|\Delta^{s/2} f\|_{L^p}<+\infty \big\} \,.
$$
{\rm (ii)} Let $s>d/p$, $s-d/p\not\in{\mathbb N}$ and let
$m=\lfloor s-d/p \rfloor$. Then, we define
$$
E^{s,p}
=\Big\{
f\in\dot{\Lambda}^{s-\frac dp}:\,
\ P_{f;m;0}=0\,,\
\| f\|_{E^{s,p}} := \|\Delta^{s/2} f\|_{L^p} <+\infty\, \Big\}\,.
$$
{\rm (iii)} Let $s-d/p\in{\mathbb N}$ and set $m=\lfloor s-d/p \rfloor$.
Let $B$ be a fixed ball in ${\mathbb R}^d$. Then, if $m=0$,
$$
E^{s,p}
=\Big\{
f\in \operatorname{BMO}:
f_B=0, \quad
\, \| f\|_{E^{s,p}} :=\| \Delta^{s/2} f\|_{L^p}<+\infty\, \Big\}\,,
$$
while, if $m\ge1$,
\begin{multline*}
E^{s,p}
=\Big\{
f\in S_m(\operatorname{BMO}) \cap{\mathcal C}^{m-1}:
{\rm (i)}\, \ P_{f;m-1;0}=0\,,\
{\rm (ii)} \, (\partial_x^\alpha f )_B=0\, \quad\text{for }
|\alpha|=m\,,\\
{\rm (iii)} \, \| f\|_{E^{s,p}} :=\| \Delta^{s/2} f\|_{L^p}<+\infty\, \Big\}\,.
\end{multline*}
}
\end{defn}
\subsection{Littlewood--Paley decomposition}
In order to state our second main result we need the Littlewood--Paley
decomposition of $\dot{L}^p_s$, for whose details we refer to
\cite[6.2.2]{Grafakos}.
Let $\eta\in {\mathcal C}^\infty_c({\mathbb R}^d)$ be such that $\operatorname{supp}\eta\subseteq
\{\xi:\, 1/2\le |\xi|\le 2\}$, $\eta$ is identically $1$ on the annulus
$\{\xi:\, 1\le|\xi|\le3/2\}$, and
$$
\sum_{j\in{\mathbb Z}}\eta(2^{-j}\xi)=1 \,, \qquad\qquad\qquad
\xi\in{\mathbb R}^d\backslash \{0\} \,.
$$
For $j\in{\mathbb Z}$ we set $\eta_j = \eta(2^{-j}\cdot)$ and
for $f\in{\mathcal S}'$ we define\footnote{Classically, these operators are
denoted by $\Delta_j$. Given the recurrent appearance of
$\Delta$ in this paper, we decided to use the notation $M_j$ instead.}
$$
M_j f={\mathcal F}^{-1}(\eta_j\widehat f).
$$
We observe that, due to the support conditions of the $\eta_j$'s,
for all $j,k\in{\mathbb Z}$ we
have that
\begin{itemize}
\item[{\tiny $\bullet$}] $(M_{j-1}+M_j+M_{j+1})M_j= M_j$;
\item[{\tiny $\bullet$}] $M_kM_j=0$ if $|k-j|>1$.
\end{itemize}
Moreover, $M_j f=0$ for any $f\in{\mathcal P}_\infty$; therefore the operators
$M_j$ are well defined on ${\mathcal S}'/{\mathcal P}_\infty$ and take values in ${\mathcal S}'$, as
is immediate to check. Thus, for simplicity of notation, we write
$M_ju$ instead of $M_j [u]_\infty$, and
observe that
\begin{equation}\label{u-dec-S'}
[u]_\infty =\sum_{j\in{\mathbb Z}} [M_ju]_\infty\,,
\end{equation}
where the convergence is in
${\mathcal S}'/{\mathcal P}_\infty$.
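A function $\eta$ with the required support and partition-of-unity properties can be produced by the standard telescoping construction $\eta(\xi)=\phi(\xi)-\phi(2\xi)$, with $\phi$ a cutoff equal to $1$ for $|\xi|\le1$ and to $0$ for $|\xi|\ge2$. The sketch below checks this numerically; it uses a merely $C^1$ smoothstep in place of a ${\mathcal C}^\infty$ bump, and this particular $\eta$ is not identically $1$ on $1\le|\xi|\le3/2$, so it illustrates the telescoping mechanism only.

```python
import numpy as np

def phi(r):
    # C^1 cutoff: 1 for r <= 1, 0 for r >= 2, smoothstep in between
    t = np.clip(2.0 - r, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def eta(r):
    return phi(r) - phi(2.0 * r)      # supported in 1/2 <= r <= 2

xi = np.linspace(0.05, 10.0, 1000)
# telescoping: the dyadic dilates sum to 1 away from the origin
total = sum(eta(2.0 ** (-j) * xi) for j in range(-10, 10))
assert np.allclose(total, 1.0)
# supports of eta_j and eta_k are disjoint when |j - k| > 1
assert np.max(eta(xi) * eta(xi / 4.0)) == 0.0
```

The two assertions correspond to the partition identity $\sum_j\eta(2^{-j}\xi)=1$ for $\xi\neq0$ and to the support condition behind $M_kM_j=0$ for $|k-j|>1$.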
Then, we have the following characterization of the spaces
$\dot{L}^p_s$, $\dot\operatorname{BMO}:=\operatorname{BMO}/{\mathcal P}_\infty$, and
$\dot\Lambda^\gamma$, respectively.
For $1<p<\infty$ and $s>0$, $[f]_\infty\in
\dot{L}^p_s$, if and only if $\big(\sum_{j\in{\mathbb Z}} \big(2^{js}|M_j f|\big)^2\big)^{1/2}
\in L^p$ and
\begin{equation}\label{LP-charact-Lps}
\bigg\| \bigg(\sum_{j\in{\mathbb Z}} \big(2^{js}|M_j f|\big)^2\bigg)^{1/2}
\bigg\|_{L^p} \approx \|f\|_{\dot{L}^p_s} \,.
\end{equation}
It holds that $[f]_\infty\in\dot\operatorname{BMO}$ if and only if
$\big(\sum_{j\in{\mathbb Z}} |M_j f|^2\big)^{1/2}
\in L^\infty$ and in this case
\begin{equation}\label{LP-charact-BMO}
\bigg\| \bigg(\sum_{j\in{\mathbb Z}} |M_j f|^2\bigg)^{1/2}
\bigg\|_{L^\infty} \approx \|f\|_{\dot\operatorname{BMO}} \,.
\end{equation}
For $\gamma>0$ and $[f]_{\lfloor \gamma\rfloor}
\in {\mathcal S}'/{\mathcal P}_{\lfloor \gamma\rfloor}$, we have
$[f]_{\lfloor \gamma\rfloor} \in
\dot\Lambda^\gamma$
if and only if $\sup_{j\in{\mathbb Z}} 2^{j\gamma} \|M_j
f\|_\infty <+\infty$, in which case
\begin{equation}\label{LP-charc-Lip}
\sup_{j\in{\mathbb Z}} 2^{j\gamma} \|M_j
f\|_\infty \approx \|f\|_{\dot\Lambda^\gamma} \,,
\end{equation}
see e.g. \cite[Theorem 6.3.6]{Grafakos}.
\medskip
\subsection{Statement of the main results}
Our first result describes explicitly the elements of
$\dot{W}^{s,p}$, $s>0$, $p\in(1,\infty)$.
\begin{theorem}\label{elements-Wspdot-charact}
Let $s>0$, $1<p<\infty$,
and let $\{\Delta^{s/2} \varphi_n\}$ be a Cauchy sequence in $L^p$, with
$\varphi_n\in{\mathcal S}$.
Then, the following properties hold.
\begin{itemize}
\item[(i)] If $0<s<d/p$, then there exists a unique $g\in L^{p^*}$
such that $\varphi_n\to g$ in $L^{p^*}$ as $n\to+\infty$,
where $\frac{1}{p^*} = \frac1p -\frac sd$.\smallskip
\item[(ii)] If $s-d/p=m\in{\mathbb N}_0$, then there exists
a unique $g\in S_m(\operatorname{BMO})$
such that $[\varphi_n]_m\to [g]_m$ in $ S_m(\operatorname{BMO})$ as $n\to+\infty$.\smallskip
\item[(iii)] If $s>d/p$ and $s-d/p\not\in{\mathbb N}$,
then there exists
a unique $g\in \dot\Lambda^{s-d/p}$
such that $[\varphi_n]_m\to [g]_m$ in
$\dot\Lambda^{s-d/p}$ as $n\to+\infty$, where $m=\lfloor s-d/p \rfloor$.
\end{itemize}
In particular, $\dot{W}^{s,p}$ is a space of functions if $s<d/p$,
while
$\dot{W}^{s,p}\subseteq{\mathcal S}'/{\mathcal P}_m$, where $m=\lfloor s-d/p \rfloor$,
if $s\ge d/p$.
\end{theorem}
\begin{theorem}\label{main-thm-realization-Wsp-dot}
For $s>0$ and $p\in(1,\infty)$, the spaces $E^{s,p}$ are realization
spaces of ${\dot{W}}^{s,p}$. More precisely, the space
$\dot{W}^{s,p}$ can be identified with a subspace of ${\mathcal S}'/{\mathcal P}_m$,
where $m=\lfloor s-d/p \rfloor$, and for each $[u]_m\in
\dot{W}^{s,p}$ there exists a unique $f\in [u]_m\cap E^{s,p}$
such that
$$
\|\Delta^{s/2} f\|_{L^p} = \| [u]_m\|_{{\dot{W}}^{s,p}} \,.
$$
\end{theorem}
Recall that, given a ball $B=B(x_0,r)$ in ${\mathbb R}^d$ and a locally integrable
function $f$,
we denote by $f_B$ the
average of $f$ over $B$.
\begin{theorem}\label{main-thm-realization-Lps-dot}
Let $s>0$, $1<p<\infty$. Then the following properties hold.
\begin{itemize}
\item[(i)] If $0<s<d/p$, given $[u]_\infty\in \dot{L}^p_s$ there exists a
unique $f\in [u]_\infty$ such that $f\in E^{s,p}$, and it is
given by
$$
f = \sum_{j\in{\mathbb Z}} M_j u \,,
$$
where the convergence is in
$L^{p^*}$.
\item[(ii)] If $s-d/p=m\in{\mathbb N}_0$,
let $B=B(x_0,r)$ be any fixed ball in ${\mathbb R}^d$.
Then, if $m=0$, given
$[u]_\infty\in \dot{L}^p_s$ consider $f\in [u]_\infty$
given by
\begin{equation}\label{the-series-2.0}
f= f_0+f_1 - (f_0+f_1)_B := \sum_{j\le0} \big(
M_j u - M_ju(0) \big) + \sum_{j\ge1}
M_j u - (f_0+f_1)_B
\,,
\end{equation}
where the first series converges uniformly on compact subsets of
${\mathbb R}^d$, while the second one in the $\operatorname{BMO}$-norm.
Then, $f\in\operatorname{BMO}$ with $f_B=0$, hence $f\in E^{s,p}$.
If $s-d/p=m\in{\mathbb N}$, $m\ge1$, given $[u]_\infty\in \dot{L}^p_s$, let
\begin{equation}\label{the-series-2.m}
f= f_0+f_1 - (f_0+f_1)_B := \sum_{j\le0} \big(
M_j u - P_{M_ju;m;0} \big) + \sum_{j\ge1}
M_j u - (f_0+f_1)_B
\,,
\end{equation}
where both series converge uniformly on compact subsets of
${\mathbb R}^d$, together with all derivatives up to order $m-1$.
Then, $f\in E^{s,p}$, in particular,
$f\in S_m(\operatorname{BMO})$, $P_{f;m-1;0}=0$, and $(\partial^\alpha f)_B=0$ for
$|\alpha|=m$.
\item[(iii)] If $s>d/p$ and $s-d/p\not\in{\mathbb N}$, set $m=\lfloor
s-d/p\rfloor$.
Then
given $[u]_\infty\in \dot{L}^p_s$ there exists a
unique $f\in [u]_\infty$ such that $f\in E^{s,p}$, and it is
given by
\begin{equation}\label{the-series-3}
f = \sum_{j\in{\mathbb Z}} \big(
M_j u -P_{M_j u; m;0} \big) \,,
\end{equation}
where the convergence is uniform on compact subsets of
${\mathbb R}^d$, together with all the derivatives up to order $m$, and in the
$\dot\Lambda^{s-d/p}$-norm.
\end{itemize}
\end{theorem}
Finally, we provide the explicit isometric correspondence between the
two homogeneous Sobolev spaces $\dot{W}^{s,p}$ and $\dot{L}^p_s$.
\begin{theorem}\label{Cor}
Let $s>0$, $1<p<\infty$ and let $m=\lfloor s-d/p\rfloor$.
Then, given any $[u]_\infty\in\dot{L}^p_s$ there
exists a unique $f\in [u]_\infty$ such that $f\in E^{s,p}$, so that
$f\in\dot{W}^{s,p}$, or
$[f]_m \in \dot{W}^{s,p}$, resp., if $0<s<d/p$ or $s\ge d/p$, resp.,
with equality of norms.
Conversely, given $f\in\dot{W}^{s,p}$ if $0<s<d/p$, or
$[f]_m \in \dot{W}^{s,p}$
if $s\ge d/p$, let $f$ also denote its image in the realization space
$E^{s,p}$;
then
$[f]_\infty\in\dot{L}^p_s$, with equality of norms.
\end{theorem}
\medskip
\section{Preliminary facts}\label{preliminaries-sec}
\subsection{The fractional integral operator ${\mathcal I}_s$}
For $0<s<d$, we consider the Riesz potential ${\mathcal I}_s$ defined for $\varphi\in{\mathcal S}$ as
$$
{\mathcal I}_s \varphi =
{\mathcal F}^{-1} ( |\xi|^{-s}\widehat \varphi) = \omega_{s,d} \big( |\cdot|^{s-d}* \varphi \big)\,,
$$
where $\omega_{s,d}=\Gamma\big((d-s)/2\big)/2^s\pi^{s/2}\Gamma(s/2)$.
Observe that for $s\in(0,d)$, $|\xi|^{-s}$ is locally integrable, so
that if $\varphi\in{\mathcal S}$, $|\xi|^{-s}\widehat \varphi\in L^1$ and ${\mathcal I}_s$ is
well-defined.
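The fact that ${\mathcal I}_s$ inverts the multiplier $|\xi|^s$ away from the zero frequency can be illustrated numerically. The sketch below is a discretized illustration with arbitrary grid, sample function and tolerance; the zero mode, where $|\xi|^{-s}$ is singular, is simply discarded, which is harmless for a mean-zero sample.

```python
import numpy as np

def multiplier(f, x, sym):
    """Apply a radial Fourier multiplier sym(xi) on a uniform grid."""
    xi = np.fft.fftfreq(len(x), d=x[1] - x[0])
    return np.real(np.fft.ifft(sym(xi) * np.fft.fft(f)))

s = 0.6
x = np.linspace(-10, 10, 1024, endpoint=False)
f = -2.0 * np.pi * x * np.exp(-np.pi * x ** 2)   # mean-zero Schwartz-type sample

def riesz_sym(xi):
    out = np.zeros_like(xi)
    nz = xi != 0
    out[nz] = np.abs(xi[nz]) ** (-s)             # |xi|^{-s}, zero mode discarded
    return out

g = multiplier(f, x, lambda xi: np.abs(xi) ** s)  # Delta^{s/2} f
recovered = multiplier(g, x, riesz_sym)           # I_s Delta^{s/2} f
assert np.max(np.abs(recovered - f)) < 1e-10
```

On the nonzero frequencies the two multipliers cancel exactly, so the roundtrip recovers $f$ up to the discarded zero mode, which vanishes to machine precision for a mean-zero sample.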
For $p\in(0,\infty)$ we denote by $H^p({\mathbb R}^d)$, or simply by $H^p$,
the classical Hardy space on ${\mathbb R}^d$. Having fixed $\Phi\in{\mathcal S}$ with
$\int\Phi=1$, we define
\begin{equation}\label{Hardy-sp-def}
H^p({\mathbb R}^d)
=\big\{ f\in{\mathcal S}':\, f^*(x):=\sup_{t>0} |f*\Phi_t(x)| \in L^p({\mathbb R}^d)
\big\} \,,
\end{equation}
where
$$
\|f\|_{H^p}= \|f^*\|_{L^p}\,.
$$
We recall that the definition of $H^p$ is independent of the choice of
$\Phi$ and that, when
$p\in(1,\infty)$, $H^p$ coincides with $L^p$, with equivalence of
norms. We also recall that
${\mathcal S}_\infty$ is dense in $H^p$ for all $p\in(0,\infty)$, see
\cite[Ch.II,5.2]{Stein}.
For these and other
properties of the Hardy spaces see e.g. \cite{Stein} or
\cite{Grafakos}.\medskip
For the operator ${\mathcal I}_s$ the following regularity result holds,
see \cite{Stein},
\cite{Grafakos}, \cite{Krantz-fractional-Hardy-paper} and in
particular \cite{Adams} for (ii).
\begin{prop}\label{Riesz-pot-prop}
Let $s>0$. The operator ${\mathcal I}_s$, initially defined for $\varphi\in{\mathcal S}$, extends to a
bounded operator
\begin{itemize}
\item[(i)] if $0<p<d/s$, then ${\mathcal I}_s :H^p\to H^{p^*}$, where
$\frac{1}{p^*} = \frac1p
- \frac sd$;\smallskip
\item[(ii)] if $p=d/s$, then ${\mathcal I}_s :H^p\to \operatorname{BMO}$;\smallskip
\item[(iii)] if $p>d/s$, then ${\mathcal I}_s :H^p\to \dot{\Lambda}^{s-d/p}$.
\end{itemize}
\end{prop}
\medskip
We also recall that the Riesz transforms are the operators $R_j$,
$j=1,\dots,d$,
initially defined
on Schwartz functions
$$
R_j f(x) = {\mathcal F}^{-1} \Big(
-i\frac{\xi_j}{|\xi|} \widehat f\Big) \,,
$$
and that they satisfy the relation $\sum_{j=1}^d R_j^2 =-I$. Moreover,
the $R_j$ are bounded
$$
R_j :H^p \to H^p
$$
for all $p\in(0,\infty)$, and $j=1,\dots,d$.
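The identity for the Riesz transforms reflects the multiplier computation $\sum_{j=1}^d(-i\xi_j/|\xi|)^2=-1$ for $\xi\neq0$. The following two-dimensional FFT sketch (an illustration with arbitrary grid choices, not part of the text) checks that $\sum_j R_j^2 f=-f$ for a Gaussian:

```python
import numpy as np

# verify sum_j R_j^2 = -I on a 2-d grid, with R_j the multiplier -i xi_j/|xi|
n = 128
x = np.linspace(-8, 8, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-np.pi * (X ** 2 + Y ** 2))

k = np.fft.fftfreq(n, d=x[1] - x[0])
KX, KY = np.meshgrid(k, k, indexing="ij")
mag = np.hypot(KX, KY)
mag[0, 0] = 1.0                      # avoid 0/0; the zero mode is handled below

fhat = np.fft.fft2(f)
out = np.zeros_like(fhat)
for Kj in (KX, KY):
    sym = -1j * Kj / mag             # symbol of R_j
    out += sym * sym * fhat
out[0, 0] = -fhat[0, 0]              # sum_j (xi_j/|xi|)^2 = 1 away from 0
g = np.real(np.fft.ifft2(out))
assert np.max(np.abs(g + f)) < 1e-10
```

Every nonzero Fourier mode of $f$ is multiplied by $-(\xi_1^2+\xi_2^2)/|\xi|^2=-1$, so the output is exactly $-f$ up to roundoff.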
\medskip
\section{Proofs of the main results}\label{realization-sec}
Our first task is to describe explicitly
the elements of ${\dot{W}}^{s,p}$, that is,
the limits of $L^p$-Cauchy
sequences of the form $\{\Delta^{s/2} \varphi_n\}$, with $\varphi_n\in{\mathcal S}$.
We are going to use the following simple lemma, whose proof we include
for the sake of completeness.
\begin{lem}\label{integral-estimate}
Let $1<p<\infty$, $d/p<\nu<1+d/p$. Given $a\in{\mathbb R}^d\setminus\{0\}$,
let $g_a(x)= |a-x|^{\nu-d} - |x|^{\nu-d}$. Then, $g_a\in L^{p'}$
and
$$
\| g_a\|_{L^{p'}} \lesssim |a|^{\nu-d/p}\,.
$$
\end{lem}
\proof
We first observe that $g_a\in L^{p'}_{{\rm loc}}$ if and only if
$\nu>d/p$, and then that
$\| g_a\|_{L^{p'}} =C |a|^{\nu-d/p}$, where
\begin{align*}
C^{p'}
& = \int_0^\infty \int_S \Big| \frac{1}{|a'-rx'|^{d-\nu}} - \frac{1}{r^{d-\nu}}
\Big|^{p'} \, d\sigma(x')\, r^{d-1}\, dr\\
& = \int_0^\infty \frac{r^{d-1}}{r^{(d-\nu)p'}} \int_S \Big| \frac{1}{|a'/r-x'|^{d-\nu}} - 1
\Big|^{p'} \, d\sigma(x')\, dr \\
& =: \int_0^\infty \frac{r^{d-1}}{r^{(d-\nu)p'}} F(r)\, dr \,,
\end{align*}
where $S$ denotes the unit sphere in ${\mathbb R}^d$, $d\sigma$ the induced
surface measure, and $a'=a/|a|$. Since, by rotation invariance,
$C$ is independent of $a'$,
we only need to show that $C$
is finite if and only if $d/p<\nu<1+d/p$. Notice that $F(r)$ is finite
when $\nu>d/p$, so we only need to check the integrability of
$\frac{r^{d-1}}{r^{(d-\nu)p'}} F(r)$ as $r\to 0^+, \infty$. As $r\to
0^+$, $F(r)= {\mathcal O}(1)$ and $\int_0^2
\frac{r^{d-1}}{r^{(d-\nu)p'}}\, dr<\infty$ if and only if $d/p<\nu$, which
is the case. As $r\to \infty$, $F(r)\approx 1/r^{p'}$, so that
$$
\int_2^\infty \frac{r^{d-1}}{r^{(d-\nu)p'}} F(r)\, dr
\approx \int_2^\infty \frac{1}{r^{(d-\nu)p'-d+1+p'}} \, dr
$$
which is finite if and only if $\nu-d/p<1$. The conclusion now
follows.
\qed
\medskip
\proof[Proof of Thm. \ref{elements-Wspdot-charact}]
For $0<s<d$, it is immediate to check that ${\mathcal I}_s \Delta^{s/2} \varphi=\varphi$
for every $\varphi\in{\mathcal S}$. Therefore, by
Prop. \ref{Riesz-pot-prop} (i) we have
\begin{equation} \label{emb-ineq-1}
\|\varphi\|_{L^{p^*}}
= \| {\mathcal I}_s \Delta^{s/2} \varphi \|_{L^{p^*}} \lesssim \| \Delta^{s/2}
\varphi \|_{L^p} \,.
\end{equation}
Then, let $\{\varphi_n\}\subseteq{\mathcal S}$ be such that $\{
\Delta^{s/2}\varphi_n\}$ is a Cauchy sequence in $L^p$. By the above
estimate \eqref{emb-ineq-1} it follows that
$$
\|\varphi_m - \varphi_n\|_{L^{p^*}}
\lesssim \| \Delta^{s/2}\varphi_m -\Delta^{s/2}
\varphi_n \|_{L^p} \,,
$$
so that there exists $f\in L^{p^*}$ such that $\varphi_n\to f$ in
$L^{p^*}$ as $n\to+\infty$.
By Remark \ref{convergence-rem}, this implies that
$\Delta^{s/2}\varphi_n \to \Delta^{s/2}f$ in $L^p$. Moreover,
\begin{equation} \label{emb-ineq-2}
\|f\|_{L^{p^*}} =\lim_{n\to+\infty} \| \varphi_n\|_{L^{p^*}}
\lesssim \lim_{n\to+\infty} \| \Delta^{s/2}
\varphi_n\|_{L^p} =
\| \Delta^{s/2}f\|_{L^p} \,.
\end{equation}
This proves (i).
In order to prove case (ii) with $s=d/p$,
let $\{\varphi_n\}\subseteq{\mathcal S}$ be such that $\{
\Delta^{s/2}\varphi_n\}$ is a Cauchy sequence in $L^p$. Since
$0<s=d/p<d$,
again we have that ${\mathcal I}_s \Delta^{s/2} \varphi_n=\varphi_n$.
By Prop. \ref{Riesz-pot-prop} (ii) it follows that
$$
\|\varphi_m -\varphi_n\|_{\operatorname{BMO}}
\lesssim \| \Delta^{s/2}\varphi_m -\Delta^{s/2}
\varphi_n \|_{L^p} \,.
$$
Hence, there exists a unique $g\in \operatorname{BMO}$ such that $\varphi_n\to g$ in
$\operatorname{BMO}$, and it is such that $\Delta^{s/2}g\in L^p$. Indeed, given any
$\psi\in{\mathcal S}_\infty$ we have $\Delta^{s/2}\psi\in {\mathcal S}_\infty \subseteq
H^1$, so that
\begin{align}
\langle g, \Delta^{s/2}\psi \rangle
& = \lim_{n\to+\infty} \langle \varphi_n ,\Delta^{s/2}\psi \rangle
= \lim_{n\to+\infty} \langle \Delta^{s/2}\varphi_n , \psi \rangle
= \langle G, \psi \rangle \,, \label{bdd-fnct-Lp}
\end{align}
where $G$ is the $L^p$-limit of $\{ \Delta^{s/2}\varphi_n\}$. This shows
that the mapping $\psi\to \langle g, \Delta^{s/2}\psi \rangle$ is bounded in
the $L^{p'}$-norm on a dense subset,
and therefore $\Delta^{s/2}g$ defines an element of $L^p$, namely $G$.
Let now $s-d/p=m\in{\mathbb N}$ and let $\{\varphi_n\}\subseteq{\mathcal S}$ be such that $\{
\Delta^{s/2}\varphi_n\}$ is a Cauchy sequence in $L^p$. Using the
identity
\begin{equation}\label{id-riesz}
| \Delta^{(s-1)/2} \nabla \varphi|^2 = \sum_{j=1}^d |R_j \Delta^{s/2}
\varphi|^2 \,,
\end{equation}
which is valid for all $\varphi\in{\mathcal S}$, and the boundedness of the Riesz
transforms, we see that $\{ \Delta^{(s-1)/2} \partial_j \varphi_n\}$ is a Cauchy
sequence in $L^p$, for $j=1,\dots,d$. If
$s-d/p=1$, we apply the previous argument to each of the sequences
$\{ \Delta^{(s-1)/2} \partial_j \varphi_n\}$, $j=1,\dots,d$.
If $s-d/p=2$, the argument is analogous but simpler,
given the identity $\Delta^{s/2}
= \Delta^{(s-2)/2}\Delta$,
and in the general
case we argue inductively.
We obtain that, for all multi-indices
$\alpha$ with $|\alpha|=m$, there exists $g_\alpha\in\operatorname{BMO}$ such that
$\partial_x^\alpha \varphi_n\to g_\alpha$ in $\operatorname{BMO}$ and
$\Delta^{(s-m)/2}g_\alpha \in L^p$. Using the fact that in ${\mathcal S}'$
$$
\partial_{x_j} g_\alpha = \partial_{x_\ell} g_\beta
$$
if $\alpha+e_j=\beta+e_\ell$, where $|\alpha|=|\beta|=m$,
$e_j,e_\ell$ are the standard
basis vectors, $j,\ell\in\{1,\dots,d\}$, and induction again, it is
easy to see that there exists $g\in S_m(\operatorname{BMO})$ such that $\partial^\alpha
g=g_\alpha$, for $|\alpha|=m$. This implies that $\varphi_n\to g$ in $ S_m(\operatorname{BMO})$,
hence in particular in ${\mathcal S}'/{\mathcal P}_m$.
Let $\psi\in{\mathcal S}_\infty$, so that we have $\Delta^{s/2}\psi\in {\mathcal S}_\infty \subseteq
H^1$, so that
\begin{align*}
\langle g, \Delta^{s/2}\partial^\alpha \psi \rangle
& = \lim_{n\to+\infty} \langle \partial^\alpha \varphi_n ,\Delta^{s/2} \psi \rangle
= \lim_{n\to+\infty} \langle \Delta^{s/2}\varphi_n , \partial^\alpha \psi \rangle
= \langle G, \partial^\alpha \psi \rangle \,.
\end{align*}
Hence, the mapping $\eta\to \langle g, \Delta^{s/2}\eta \rangle$ is bounded in
the $L^{p'}$-norm on ${\mathcal S}_\infty$, that is,
$\Delta^{s/2}g$ defines an element of
$L^p$. This proves (ii).
\medskip
Let now $s>d/p$ and $s-d/p\not\in{\mathbb N}$.
We assume first that $d/p<s<1+d/p$. For $\varphi\in{\mathcal S}$ and $x\neq
y$ we write
\begin{align*}
\varphi (x) -\varphi(y)
& = \int \frac{e^{ix\xi}-e^{iy\xi}}{|\xi|^s} \big( \Delta^{s/2}
\varphi)\widehat{\,}(\xi)\, d\xi\\
&= \int H(x,y,t) \Delta^{s/2}
\varphi(t) \, dt
\,,
\end{align*}
where
$$
H(x,y,t)
= {\mathcal F}^{-1} \Big( \frac{e^{ix(\cdot)}-e^{iy(\cdot)}}{|\cdot|^s} \Big) (t)
= \gamma_{s,d} \big( |t-x|^{s-d} - |t-y|^{s-d} \big)
\,.
$$
By Lemma \ref{integral-estimate} it follows that $H(x,y,\cdot)\in L^{p'}$ and that
$$
\|H(x,y,\cdot)\|_{L^{p'}}
\lesssim |x-y|^{s-d/p} \,.
$$
Therefore,
\begin{align*}
\|\varphi\|_{\dot{\Lambda}^{s-d/p}}
& \lesssim \| \Delta^{s/2} \varphi \|_{L^p} \,.
\end{align*}
Hence, if $\{\Delta^{s/2}\varphi_n\}$ is a Cauchy sequence in $L^p$,
there exists a unique $g\in \dot{\Lambda}^{s-d/p}$ such that $\varphi_n\to
g$, as $n\to+\infty$. We argue as before, using the fact that
the dual space of $H^q$ is $\dot\Lambda^{d(1/q-1)}$ when $0<q<1$, see
e.g. \cite[Ch. III]{GarciaCuerva-deFrancia}.
Let $q$ be given by $1/q= 1-1/p+s/d$, so that $0<q<1$. Then
$\dot{\Lambda}^{s-d/p} = (H^q)^*$, and
given any
$\psi\in{\mathcal S}_\infty$ we have $\Delta^{s/2}\psi\in {\mathcal S}_\infty \subseteq
H^q$, so that
\begin{align}
\langle g, \Delta^{s/2}\psi \rangle
& = \lim_{n\to+\infty} \langle \varphi_n ,\Delta^{s/2}\psi \rangle
= \lim_{n\to+\infty} \langle \Delta^{s/2}\varphi_n , \psi \rangle
= \langle G, \psi \rangle \,,
\end{align}
where $G$ is the $L^p$-limit of $\{ \Delta^{s/2}\varphi_n\}$. This shows
that the mapping $\psi\to \langle g, \Delta^{s/2}\psi \rangle$ is bounded in
the $L^{p'}$-norm on a dense subset,
and therefore $\Delta^{s/2}g$ defines an element of $L^p$, that is $G$.
We conclude that
$\Delta^{s/2}g\in L^p$ as well. Finally, if $s-d/p\not\in{\mathbb N}$ and
$s-d/p>1$, let $m=\lfloor s-d/p \rfloor$ and $|\alpha|=m$. Since
$\| \Delta^{(s-m)/2}\partial^\alpha \varphi_n\|_{L^p} =
\| R^\alpha \Delta^{s/2} \varphi_n\|_{L^p} \lesssim \|
\Delta^{s/2} \varphi_n\|_{L^p}$, $\{ \Delta^{(s-m)/2}\partial^\alpha
\varphi_n\}$ is a Cauchy sequence in $L^p$.
Using the first part
we can find $g_\alpha\in
\dot{\Lambda}^{s-m-d/p}$ such that
$\partial^\alpha \varphi_n \to g_\alpha$ in
$\dot{\Lambda}^{s-m-d/p}$, for all $\alpha$ with $|\alpha|=m$.
Hence, arguing as in (ii), there exists a unique
$g\in\dot{\Lambda}^{s-d/p}$ such that $\partial^\alpha g=g_\alpha$ for
$|\alpha|=m$. Since $\partial^\alpha \varphi_n \to \partial^\alpha g$ in
$\dot{\Lambda}^{s-m-d/p}$ for $|\alpha|=m$, it follows that
$\varphi_n\to g$ in $\dot{\Lambda}^{s-d/p}$, hence in ${\mathcal S}'/{\mathcal P}_m$. Finally, by \eqref{bdd-fnct-Lp} it is
now clear that $\Delta^{s/2} g\in L^p$, and (iii) follows as well.
\qed
\medskip
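Throughout the preceding proofs the fractional Laplacian enters only through its Fourier-multiplier definition, $(\Delta^{s/2}\varphi)\widehat{\ }(\xi)=|\xi|^s\widehat\varphi(\xi)$. As a purely illustrative numerical sketch (our own addition, not part of the argument; the function name is ours), this multiplier can be discretized on a periodic grid:

```python
import numpy as np

# Illustration (assumption-laden sketch, not from the paper): on a 1-d
# periodic grid, Delta^{s/2} can be mimicked by multiplying the discrete
# Fourier transform by |xi|^s and transforming back.
def frac_laplacian(u, s, L=2*np.pi):
    n = u.size
    xi = 2*np.pi*np.fft.fftfreq(n, d=L/n)   # wavenumbers; integers when L = 2*pi
    return np.fft.ifft(np.abs(xi)**s * np.fft.fft(u)).real

x = np.linspace(0, 2*np.pi, 256, endpoint=False)
u = np.sin(3*x)
# On a pure frequency, Delta^{s/2} sin(3x) = 3^s sin(3x):
assert np.allclose(frac_laplacian(u, 0.5), 3**0.5 * u, atol=1e-8)
# s = 2 recovers the classical Laplacian: -u'' = 9 sin(3x):
assert np.allclose(frac_laplacian(u, 2.0), 9.0 * u, atol=1e-6)
```

Note that this discrete model sees only integer frequencies, so it illustrates the multiplier action but none of the delicate behavior near $\xi=0$ that the quotient spaces ${\mathcal S}'/{\mathcal P}_m$ are designed to handle.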
We state the characterization of ${\dot{W}}^{s,p}$, that also provides
the precise embedding result. It may be considered {\em folklore} by
some, but to the best of our knowledge, there exists no explicit
statement in the literature.
From this, the proof of Thm. \ref{main-thm-realization-Wsp-dot}
is then obvious.
\begin{cor}\label{real-hom-sob} For $s>0$ and $p\in(1,+\infty)$, the
following properties hold.
\noindent
{\rm (i)} Let $0<s<\frac dp$, and let $p^*\in(1,\infty)$ be given by
$\frac{1}{p^*}=\frac1p - \frac sd$.
Then,
$$
{\dot{W}}^{s,p} =\big\{f\in L^{p^*}:\,
\|\Delta^{s/2} f\|_{L^p} <+\infty \big\} \,,
$$
and
$$
\| f\|_{ L^{p^*}} \lesssim \| f\|_{{\dot{W}}^{s,p}}\,.
$$
\noindent
{\rm (ii)}
Let $s-d/p\in{\mathbb N}$ and set $m=\lfloor s-d/p \rfloor$.
Then,
$$
{\dot{W}}^{s,p} =\big\{f \in S_m(\operatorname{BMO}):\,
\|\Delta^{s/2} f\|_{L^p} <+\infty \big\} \,,
$$
and
$$
\| f\|_{ S_m(\operatorname{BMO})} \lesssim \| f\|_{{\dot{W}}^{s,p}}\,.
$$
\noindent
{\rm (iii)}
Let $s>d/p$, $s-d/p\not\in{\mathbb N}$, and set $m=\lfloor s-d/p \rfloor$.
Then,
$$
{\dot{W}}^{s,p} =\big\{f \in \dot{\Lambda}^{s-d/p}:\,
\|\Delta^{s/2} f\|_{L^p} <+\infty \big\} \,,
$$
and
$$
\| f\|_{ \dot{\Lambda}^{s-d/p}} \lesssim \| f\|_{{\dot{W}}^{s,p}}\,.
$$
In particular, $\dot{W}^{s,p}$ is a space of functions if $s<d/p$,
while
$\dot{W}^{s,p}\subseteq{\mathcal S}'/{\mathcal P}_m$, where $m=\lfloor s-d/p \rfloor$,
if $s\ge d/p$.
\end{cor}
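The trichotomy in the corollary depends only on the elementary arithmetic of $s-d/p$. The following small helper (our own illustrative addition; the function name and output strings are ours) makes the case selection explicit:

```python
# Sketch (ours): which ambient space hosts \dot{W}^{s,p}(R^d),
# according to the sign and integrality of s - d/p in the corollary.
def realization_space(s, p, d):
    t = s - d / p
    if t < 0:                          # case (i): Sobolev embedding range
        pstar = 1 / (1 / p - s / d)    # 1/p* = 1/p - s/d
        return f"L^{pstar:g}"
    if abs(t - round(t)) < 1e-12:      # case (ii): s - d/p a nonnegative integer
        return f"S_{round(t)}(BMO)"
    return f"Lambda^{t:g}"             # case (iii): homogeneous Hoelder scale

print(realization_space(1, 2, 3))   # s = 1 < d/p = 3/2, prints "L^6"
print(realization_space(2, 2, 4))   # s - d/p = 0, prints "S_0(BMO)"
```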
We also have the following characterization. We point out that if $m$
is a negative integer, ${\mathcal P}_m=\emptyset$ and ${\mathcal S}'/{\mathcal P}_m $ is ${\mathcal S}'$
itself, as well as ${\mathcal S}_m={\mathcal S}$.
\begin{cor}\label{ale-charact}
For $s>0$ and $p\in(1,+\infty)$, let $m=\lfloor s-d/p\rfloor$. Then,
\begin{multline*}
\dot{W}^{s,p}
= \big\{ [f]_m\in{\mathcal S}'/{\mathcal P}_m :\,
\text{(i)
there exists a sequence}\ \{\varphi_n\}\subseteq {\mathcal S}\ \text{such that }
[\varphi_n]_m\to [f]_m \text{ in }{\mathcal S}'/{\mathcal P}_m; \\
\text{(ii) the sequence} \ \{\Delta^{s/2}\varphi_n\}\ \text{is a
Cauchy sequence in}\ L^p \big\}\,.
\end{multline*}
\end{cor}
\proof
We have shown that any element of $\dot{W}^{s,p}$ is an element of
${\mathcal S}'/{\mathcal P}_m$ satisfying the conditions $(i)$ and $(ii)$ above.
Conversely, let $[f]_m\in{\mathcal S}'/{\mathcal P}_m$ satisfy the conditions $(i)$ and
$(ii)$. We need to show that for any $f\in[f]_m$,
$\Delta^{s/2}f$ is a well-defined element of $L^p$ -- in fact the
$L^p$-limit of the sequence $\{\Delta^{s/2}\varphi_n\}$.
Let $g=\lim_{n\to+\infty} \Delta^{s/2} \varphi_n$ and fix
$\psi\in{\mathcal S}_\infty$. Then $\Delta^{s/2}\psi
\in{\mathcal S}_\infty\subseteq{\mathcal S}_m$ and we have
\begin{align*}
\langle g,\psi\rangle
&= \lim_{n\to+\infty} \langle \Delta^{s/2} \varphi_n, \psi\rangle =
\lim_{n\to+\infty} \langle \varphi_n, \Delta^{s/2} \psi\rangle
= \lim_{n\to+\infty} \langle [\varphi_n]_m, \Delta^{s/2} \psi\rangle \\
& = \langle [f]_m ,\Delta^{s/2} \psi\rangle \,.
\end{align*}
Therefore, we may set $\Delta^{s/2}[f]_m =g\in L^p$.
\qed \medskip
We also have the following simple but significant observation.
\begin{cor}\label{Delta-s-ammazza-polimoni}
Let $s>0$, $m=[s-\frac d2]$. Then, for every $P\in{\mathcal P}_m$,
$\Delta^{s/2}P=0$.
\end{cor}
\proof
We show that there exists a sequence of Schwartz functions $\{\varphi_n\}$
such that $\{\Delta^{s/2}\varphi_n\}$ is a Cauchy sequence in $L^2$, while
$\varphi_n\to P$ in ${\mathcal S}'$. Suppose first $s>\frac d2$ and
$s-\frac d2\not\in{\mathbb N}$. Let $\psi\in C^\infty_c$ be nonnegative,
identically $1$ for $|x|\le1 $ and with support in
$\{|x|\le2\}$. Define $\psi^\varepsilon(x)=\psi(\varepsilon x)$, and set $\varphi_n =
P\psi^{\frac1n}$. Then, it is obvious that $\varphi_n \to
P$ in ${\mathcal S}'$, while if we assume momentarily that $P=x^\alpha$, with
$|\alpha|=k$, and write
\begin{align*}
\|\Delta^{s/2} \varphi_n\|_{L^2}^2
& = \int |\xi|^{2s} \big| \big( \widehat P*
\widehat{\psi (\cdot/n)}\big)(\xi) \big|^2 \, d\xi \\
& = n^{2(d+k)} \int |\xi|^{2s} \big|
\partial_\xi^\alpha \widehat\psi (n\xi) \big|^2 \, d\xi
= n^{d+2(k-s)} \int |x|^{2s} \big|
\partial^\alpha \widehat\psi (x) \big|^2 \, dx \,,
\end{align*}
which tends to $0$ as $n\to+\infty$ when $s-\frac d2>k$, i.e. when
$k\le [s-\frac d2]$.
Next, we assume that $s=\frac d2$. Let $\{\varphi_n\}$ be a
sequence in ${\mathcal S}_\infty$ such that $\Delta^{s/2} \varphi_n \to 0$ in
$L^2$. Then, since $\varphi_n \in {\mathcal S}_\infty$, we have
$\varphi_n={\mathcal I}_{d/2}\Delta^{s/2}\varphi_n$; by Prop. \ref{Riesz-pot-prop}
(ii), $\varphi_n\to 0$ in $\operatorname{BMO}$, that is, $\varphi_n$ tends to a constant in ${\mathcal S}'$. Finally, if
$s-\frac d2=m\in{\mathbb N}$, and we assume the conclusion valid for $m-1$,
we proceed inductively, using identity \eqref{id-riesz}. If
$ \Delta^{s/2} \varphi_n \to 0$ in
$L^2$, then $\Delta^{(s-1)/2} \partial_j\varphi_n \to 0$ in
$L^2$ and therefore $\partial_j \varphi_n\to 0$ in ${\mathcal S}'/{\mathcal P}_{m-1}$. This proves the claim.
\qed
\medskip
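The dilation computation above reduces the vanishing of $\Delta^{s/2}x^\alpha$ to the sign of the exponent $d+2(k-s)$, together with the observation that, for $s-\frac d2\not\in{\mathbb Z}$, the condition $d+2(k-s)<0$ is the same as $k\le\lfloor s-\frac d2\rfloor$. A throwaway check of this equivalence (our own addition):

```python
import math

# Sanity check (ours): n^{d+2(k-s)} -> 0 iff d + 2(k-s) < 0, i.e. k < s - d/2,
# and for s - d/2 not an integer this is equivalent to k <= floor(s - d/2).
def vanishes(d, s, k):
    return d + 2 * (k - s) < 0

for d in (1, 2, 3):
    for s4 in range(1, 41):          # s runs over quarter-integers (exact floats)
        s = s4 / 4
        if (s4 - 2 * d) % 4 == 0:    # skip the excluded case s - d/2 integer
            continue
        for k in range(0, 12):
            assert vanishes(d, s, k) == (k <= math.floor(s - d / 2))
```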
We now turn to the proof of Thm. \ref{main-thm-realization-Lps-dot}.
\proof[Proof of Thm. \ref{main-thm-realization-Lps-dot}]
(i) Let $[u]_\infty\in \dot{L}^p_s$ with $s<d/p$, and set
$\varphi_n = \sum_{|j|\le n} M_j u$. Then, $\varphi_n\in{\mathcal S}_\infty$ and using
Cor. \ref{real-hom-sob} (i) and the equality $\dot{\Delta}^{s/2}\varphi=
[\Delta^{s/2}\varphi]_\infty$ for functions in ${\mathcal S}$ we have
that, for $n<m\in{\mathbb N}$,
\begin{align*}
\| \varphi_m-\varphi_n\|_{L^{p^*}}
& = \Big\| \sum_{n<|j|\le m} M_j u\Big\|_{L^{p^*}} \lesssim
\Big\| \dot{\Delta}^{s/2} \Big( \sum_{n<|j|\le m} M_j u \Big) \Big\|_{L^p}
= \Big\| \sum_{n<|j|\le m} M_j u\Big\|_{\dot{L}^p_s} \\
& \approx \Big\| \Big( \sum_{k\in{\mathbb Z}} \big( 2^{ks} \big| M_k
\textstyle{ \sum_{n<|j|\le m} M_j u } \big| \big)^2 \Big)^{1/2}
\Big\|_{L^p} \\
& \lesssim \Big\| \Big(
\sum_{n-2\le |k|\le m+2} \big( 2^{ks} \big| M_k u \big| \big)^2 \Big)^{1/2}
\Big\|_{L^p}
\,.
\end{align*}
Since $u\in\dot{L}^p_s$, the equivalence of norms in
\eqref{LP-charact-Lps} implies that $\{\varphi_n\}$ is a Cauchy
sequence in $L^{p^*}$. Therefore,
the series $f=\sum_{j\in{\mathbb Z}} M_j u$ converges in
$L^{p^*}$.
This proves (i).
\medskip
We now prove (iii). Consider the series on the right hand side of
\eqref{the-series-3}. We claim that the series converges uniformly on compact sets in
${\mathbb R}^d$, together with the series of partial derivatives up to order
$m:=\lfloor s-d/p \rfloor$. Assuming the claim, and letting $f$ denote
its sum, we clearly have that $f\in{\mathcal C}^m$ and $P_{f;m;0}=0$. Moreover,
letting $\Phi_n =\sum_{|j|\le n} (M_j u-P_{M_ju;m;0})$, $\varphi_n=
\sum_{|j|\le n} M_j u$, and using Cor. \ref{real-hom-sob} we have
\begin{align*}
\| \Phi_n \|_{\dot{\Lambda}^{s-d/p}}
& = \| \varphi_n \|_{\dot{\Lambda}^{s-d/p}} \lesssim \| \varphi_n
\|_{\dot{W}^{s,p}} = \| \Delta^{s/2} \varphi_n \|_{L^p} \\
& \le \Big\| \Big(\sum_{|k|\le n+1} \big(2^{ks} |M_k u|\big)^2 \Big)^{1/2}
\Big\|_{L^p} \,.
\end{align*}
Hence, the series on the right hand side of \eqref{the-series-3}
converges also in $\dot{\Lambda}^{s-d/p}$ and uniquely determines $f$.
Then, the conclusion
follows modulo the claim.
The following estimate holds true for
$f\in\dot{\Lambda}^\gamma$, $\gamma$ not an integer,
see \cite[Cor. 3.4]{Krantz-Lipschitz-paper},
\begin{equation}\label{vera?}
\big|f(x)-P_{f;m;y}(x)\big| \lesssim |x-y|^{\gamma-m} \| \nabla^m
f\|_{\dot{\Lambda}^{\gamma-\lfloor \gamma \rfloor}} \,,
\end{equation}
where we set $\nabla^m f = \big(\partial^\alpha f\big)_{|\alpha|=m} $.
Since $s-m>d/p$, by Sobolev's embedding theorem, Cor. \ref{real-hom-sob} (iii),
if
$|x|\le r$ we
have
\begin{align*}
| \Phi_\ell (x) - \Phi_n(x) |
& \lesssim_r \| \nabla^m \Phi_\ell - \nabla^m \Phi_n\|_{\dot\Lambda^{s-m-d/p}} \lesssim \|
\varphi_\ell - \varphi_n\|_{\dot{W}^{s,p}} \\
& \approx \Big\| \Big( \sum_{k\in{\mathbb Z}} \Big(2^{ks} \big|M_k \big(
\sum_{\ell< |j|\le n} M_ju \big) \big|\Big)^2 \Big)^{1/2}
\Big\|_{L^p} \\
& \lesssim \Big\| \Big( \sum_{\ell-1\le |k|\le n+1} \big(2^{ks}
|M_k u| \big)^2 \Big)^{1/2} \Big\|_{L^p} \,.
\end{align*}
This shows that the series converges uniformly on
compact sets. The same argument now easily applies to the series of partial
derivatives up to order $m$, since
$$
| \partial^\alpha \varphi_\ell (x) - \partial^\alpha \varphi_n(x) |
\lesssim_r\| \nabla^{m-|\alpha|} \partial^\alpha \varphi_\ell
- \nabla^{m-|\alpha|} \partial^\alpha \varphi_n\|_{\dot{\Lambda}^{s-m-d/p}}
\lesssim \|\varphi_\ell - \varphi_n\|_{\dot{W}^{s,p}}\,.
$$
Therefore, $\Phi_n \to f$, which is an element of $[u]_\infty$, and
this proves (iii).
\medskip
In order to prove (ii),
let $s-d/p=m=0$, and
consider the two series on the right hand side of
\eqref{the-series-2.0}, that is,
$\sum_{j\le0} \big(
M_j u -M_j(0) \big)$, and $\sum_{j\ge1}
M_j u$. We begin with the latter one.
We observe that $\sum_{j\ge1}
M_j u$ converges to an element of $\dot{\operatorname{BMO}}:=\operatorname{BMO}/{\mathcal P}_\infty$,
say to $f+P$, for some
$P\in{\mathcal P}_\infty$.
However,
its Fourier transform has support in $\{|\xi|\ge1\}$, hence it must be $P=0$, so
that $\sum_{j\ge1}
M_j u=f_1$ is a locally integrable function.
Next we show that the former series converges uniformly on compact subsets to
a function $f_0$.
For $|x|\le r$
we have
\begin{align*}
|M_j u(x) -M_ju(0) |
& \lesssim_r \sup_{|x|\le r} | M_j\nabla u (x)|
\lesssim 2^{j(1+d/p)}\| M_j u \|_{L^p} \\
& \lesssim 2^j \Big(
\int \Big( \sum_{k\in{\mathbb Z}} |M_k 2^{jd/p} M_j u (x)|^2 \Big)^{p/2} \, dx \Big)^{1/p} \\
& \lesssim 2^j \Big(
\int \Big( \sum_{|k-j|\le1 } \big( 2^{ks} |M_k u(x)|\big) ^2
\Big)^{p/2} \, dx \Big)^{1/p} \\
& \lesssim 2^j \| [u]_\infty\|_{\dot{L}^p_s}
\,.
\end{align*}
Since $j\le0$, the conclusion follows. Now set
$$
f = f_0+f_1 - (f_0+f_1)_B \,,
$$
and the case $m=0$ is proved.
Let now $s-d/p=m\ge1$ and consider the series on the right hand side
of \eqref{the-series-2.m}.
For $|x|\le r$, arguing as before
we have
\begin{align*}
|M_j u(x) -P_{M_ju;m;0}(x) |
& \lesssim_r \sup_{|x|\le r} | M_j\nabla^{m+1} u (x)|
\lesssim 2^{j(m+1+d/p)}\| M_j u \|_{L^p} \\
& \lesssim 2^{j(m+1)} \Big(
\int \Big( \sum_{k\in{\mathbb Z}} |M_k 2^{jd/p} M_j u (x)|^2 \Big)^{p/2} \, dx \Big)^{1/p} \\
& \lesssim 2^{j} \Big(
\int \Big( \sum_{|k-j|\le1 } \big( 2^{ks} |M_k u(x)|\big) ^2
\Big)^{p/2} \, dx \Big)^{1/p} \\
& \lesssim 2^{j} \| [u]_\infty\|_{\dot{L}^p_s}
\,.
\end{align*}
Thus, the series converges uniformly on compact subsets.
Clearly, the same argument applies to all derivatives $\partial^\alpha_x$ with
$|\alpha|\le m$. Then, the function $f_0\in{\mathcal C}^m$. On the other
hand, the series $\sum_{j\ge1} M_ju$ is such that, for $|\alpha|\le
m-1$, recalling that $j\ge1$,
\begin{align*}
|\partial^\alpha M_ju(x)|
& \le 2^{j(|\alpha|+d/p)} \| M_ju\|_{L^p} \lesssim
\Big(
\int \Big( \sum_{|j-k|\le1}
|M_k (2^{j(|\alpha|+d/p)} M_ju)|^2 \Big)^{p/2} \, dx \Big)^{1/p} \\
& \lesssim
2^{-j} \Big(
\int \Big( \sum_{k\ge0}
(2^{ks} |M_k u|) ^2 \Big)^{p/2} \, dx \Big)^{1/p} \,.
\end{align*}
Thus, also the series $\sum_{j\ge1} M_ju$ converges uniformly on
compact subsets, together with its partial derivatives $\partial^\alpha$,
with $|\alpha|\le m-1$. Finally, if $|\alpha|=m$ we have
$$
\partial^\alpha( f_0 + f_1)
= \sum_{j\le0} \big( M_j (\partial^\alpha u) - M_j (\partial^\alpha u)(0) \big) +
\sum_{j\ge1} M_j (\partial^\alpha u) \,,
$$
and we can repeat the argument of the case $m=0$.
Then, we set $f_2=f_0+f_1$ and define
$$
f= f_2- P_{f_2; m-1;0}(x) -
\sum_{|\alpha|=m} (\partial^\alpha f_2)_B
\,.
$$
It is now easy to see that $f\in E^{s,p}$.
\qed
\medskip
The proof of Theorem \ref{Cor} is now obvious, and we are done.
\medskip
\section{Final remarks, and comparison with the work of Bourdaud}
In \cite{Bourdaud} Bourdaud proved a version of
Thm. \ref{main-thm-realization-Lps-dot}, and our work is inspired by
his. The descriptions of the realization spaces are similar, but not
identical. In particular we explicitly refer to the $\operatorname{BMO}$
space. Moreover, Bourdaud is mainly focused on the techniques of
Besov spaces, and the proofs in the case of Sobolev spaces are
different, and typically more involved. However, we point out that
Bourdaud proved that the spaces $E^{s,p}$ with $s-d/p\not\in{\mathbb N}_0$
are the {\em unique} realization of $\dot{L}^p_s$ that are dilation
invariant, while, if $s-d/p \in{\mathbb N}_0$ there does not exist any
dilation
invariant realization of $\dot{L}^p_s$.
We mention that we have restricted ourselves to the case
$p\in(1,\infty)$. The case of the Hardy spaces, that is, $p\in(0,1]$
is also of great interest and we believe it is worth further investigation.
Finally,
it would also be very interesting to study analogous properties for
the homogeneous versions of Sobolev and Besov spaces in the
sub-Riemannian setting, see the recent papers
\cite{PV,BPTV,BPV,BPV-GAHA}. \medskip
{\em Acknowledgements.} We warmly thank G. Bourdaud for several
useful comments and remarks
that improved the presentation of this paper.
| {
"timestamp": "2019-12-04T02:19:41",
"yymm": "1910",
"arxiv_id": "1910.05980",
"language": "en",
"url": "https://arxiv.org/abs/1910.05980",
"abstract": "We study the fractional Laplacian and the homogeneous Sobolev spaces on R^d , by considering two definitions that are both considered classical. We compare these different definitions, and show how they are related by providing an explicit correspondence between these two spaces, and show that they admit the same representation. Along the way we also prove some properties of the fractional Laplacian.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Fractional Laplacian, homogeneous Sobolev spaces and their realizations"
} |
https://arxiv.org/abs/1605.07093 | Maps of Degree 1 and Lusternik--Schnirelmann Category | Given a map $f: M \to N$ of degree 1 of closed manifolds. Is it true that the Lusternik--Schnirelmann category of the range of the map is not more that the category of the domain? We discuss this and some related questions. | \section{Introduction}
Below $\cat$ denotes the (normalized) Lusternik--Schnirelmann category,~\cite{CLOT}.
\m
\begin{quest}\label{question}
Let $M, N$, $\dim M=\dim N=n$, be two closed connected orientable manifolds, and let $f: M \to N$ be a map of degree $\pm 1$. Is it true that $\cat M \geq \cat N$?
\end{quest}
There are several reasons to conjecture that the above-mentioned question has an affirmative answer for all maps $f$ of degree 1. Indeed, informally, the domain of $f$ is more ``massive'' than the range of $f$. For example, it is well known that the induced maps $f_*: H_*(M) \to H_*(N)$ and $f_*: \pi_1(M) \to \pi_1(N)$ are surjective.
\m In~\cite{R2} I proved some results that confirm the conjecture under suitable hypotheses, and now people speak about the Rudyak conjecture, cf. \cite[Open Problem 2.48]{CLOT}, \cite{D} (although I prefer to state questions rather than conjectures). To date we do not know any counterexample.
\m Here we demonstrate more situations when the conjecture holds,
and discuss some variations of the conjecture.
\section{Preliminaries}
\begin{df}\rm Let $M,N$, $\dim M=\dim N=n$, be two closed connected oriented smooth manifolds, and let $[M]\in H_n(M)=\Z$ and $[N]\in H_n(N)=\Z$ be the corresponding fundamental classes. Given a map $f: M\to N$, we define its {\em degree} as the number $\deg f\in \Z$ such that $f_*[M]=(\deg f) [N]$.
\end{df}
Let $E$ be a commutative ring spectrum (cohomology theory).
\begin{lem}\label{mono} If $M$ is $E$-orientable and $f: M \to N$ is a map of degree $1$ then $f_*\colon E_*(M)\to E_*(N)$ is an epimorphism and $f^*\colon E^*(N)\to E^*(M)$ is a monomorphism.
\end{lem}
\p See~\cite[Theorem V.2.13]{R1}, cf. also Dyer~\cite{Dyer}.
\qed
\begin{remark}\label{r:local}\rm An analog of \lemref{mono} holds for cohomology with local coefficients. Let $\A$ be a local coefficient system of abelian groups on $N$, and let $f^*(\A)$ be the induced coefficient system. Then $f_*:H_*(M;f^*\A)\to H_*(N;\A)$ is a split epimorphism and $f^*\colon H^*(N;\A)\to H^*(M;f^*\A)$ is a split monomorphism. The proofs are based on Poincar\'e duality with local coefficients and the equality $f_*(f^*x\frown y)=x\frown f_* y$ for $x\in H^*(N;\A), y\in H_*(M;f^*\A)$. See e.g.~\cite{B}.
\end{remark}
\begin{df}\rm Given a CW space $X$, the {\em Lusternik--Schnirelmann category} $\cat X$ of $X$ is the least integer $m$ such that there exists an open covering $\{A_0, A_1,\ldots, A_m \}$ of $X$ with each $A_i$ contractible in $X$ (not necessarily in itself). If no such covering exists, we put $\cat X=\infty$.
\end{df}
Note the inequality $\cat X \leq \dim X$ for $X$ connected.
\m In what follows we abbreviate Lusternik--Schnirelmann to LS. A good source for LS theory is~\cite{CLOT}.
\m Given a closed smooth manifold $M$ and a smooth function $f: M \to \R$, the number of critical points of $f$ can't be less than $\cat M$,~\cite{LS1, LS2}. This result turned out to be the starting point of LS theory. Currently, the LS theory is a broad area of intensive topological research.
\m Let $X$ be a path connected space. Take a point $x_0\in X$, set
$$
PX=P(X,x_0)=\{\omega\in X^{[0,1]} \bigm| \omega(0)=x_0\}
$$
and consider the fibration $p: PX \to X, p(\omega)=\omega(1)$. Let $p_k=p_{k}^X: P_k(X)\to X$ be the $k$-fold fiberwise join $PX*_X\cdots *_X PX \to X$. According to the Ganea--\v Svarc theorem, \cite{CLOT, S}, the inequality $\cat X<k$ holds if and only if the fibration $p_k: P_k(X)\to X$ has a section. In other words, the number $\cat X$ is the least $k$ such that the fibration $p_{k+1}: P_{k+1}(X)\to X$ has a section.
\section{Approximations}
Recall (see the Introduction) that we discuss whether the existence of a map $f: M \to N$ of degree 1 implies the inequality $\cat M \geq \cat N$. Here we present two results appearing when we approximate the LS category by the cup-length and Toomer invariant.
\m Recall the following cup-length estimate of LS category. Let $E$ be a commutative ring spectrum (cohomology theory). The cup-length of $X$ with respect to $E$ is the number
\[
\cl_E(X):=\sup \{m\bigm| u_1\smallsmile\cdots \smallsmile u_m \neq 0 \text{ where } u_i\in\wt E^*(X)\}.
\]
The well-known cup-length theorem~\cite{CLOT} asserts that $\cl_E(X)\leq \cat X$.
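\m As a standard illustration of the cup-length bound (a classical example, added here for context): for the $n$-torus $T^n$ the generators $u_1,\ldots,u_n\in H^1(T^n;\Z)$ satisfy
\[
u_1\smallsmile\cdots\smallsmile u_n\neq 0 \ \text{in}\ H^n(T^n;\Z)\cong\Z,
\]
so $n\leq\cl_{H\Z}(T^n)\leq\cat T^n\leq\dim T^n=n$, and therefore $\cat T^n=n$.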
\m We give another estimate of LS category.
\begin{df}\label{d:toomer} \rm Define the (cohomological) Toomer invariant
\[
e_E^*(X)=\sup\{k\bigm | \ker \{p_k^*: E^*(X)\to E^*(P_k(X))\}\neq 0\}.
\]
\end{df}
Note the decreasing sequence $\cdots \supset \ker (p_k^*)\supset \ker (p_{k+1}^*)
\supset \cdots$. Moreover,
\[
e_E^*(X)=\inf\{k\bigm | \ker \{p_k^*: E^*(X)\to E^*(P_k(X))\}= 0\}-1.
\]
\m In the definition of the Toomer invariant, $E$ does not need to be a ring spectrum, it can be an arbitrary spectrum.
\begin{prop}\label{p:doubleineq}
We have $\cl_E(X)\leq e_E^*(X)\leq \cat X$.
\end{prop}
\p First, $p_m=p_{m}^X$ has a section for $m> \cat X$, and so $\ker p_{m}^*=0$ for all $m>\cat X$. Hence, $e_E^*(X)\leq \cat X$. Now, we put $\cl_E(X)=l$, $e_E^*(X)=k$ and prove that $l\leq k$. Take $u_1, \ldots, u_l\in \wt E^*(X)$ such that $u_1\smallsmile \cdots \smallsmile u_l\neq 0$. Then, since $p_{k+1}^*$ is monic, we conclude that $p_{k+1}^*(u_1\smallsmile \cdots \smallsmile u_l)\neq 0$. In other words,
\[(p_{k+1}^*u_1)\smallsmile \cdots \smallsmile (p_{k+1}^* u_l)\neq 0
\]
in $P_{k+1}X$. Hence, $\cat P_{k+1}(X)\geq l$ because of the cup-length theorem. It remains to note that $\cat P_{k+1}(X)\leq k$ for all $k$, see~\cite[Prop. 1.5(ii)]{R2} or~\cite{CLOT}.
\qed
\begin{prop}\label{p:ineq} Let $M$ be a closed connected $E$-orientable manifold, and let $f: M \to N$ be a map of degree $\pm 1$. Then $\cl_E(M)\geq \cl_E(N)$ and $e^*_E(M)\geq e^*_E(N)$.
\end{prop}
\p Put $\cl_E(N)=l$ and take $u_1, \ldots, u_l\in \wt E^*(N)$ such that $u_1\smallsmile \cdots \smallsmile u_l\neq 0$. Then $f^*(u_1\smallsmile \cdots \smallsmile u_l)\neq 0$ by \ref{mono}. So, $f^*(u_1)\smallsmile \cdots \smallsmile f^*(u_l)\neq 0$ in $M$, and thus $\cl_E(M)\geq l$.
\m Now, let $e^*_E(M)=k$. Then $(p_i^M)^*$ is a monomorphism for $i>k$. Consider the commutative diagram
\[
\CD
P_iM @>P_if>> P_i N\\
@Vp_i^M VV @Vp_i^N VV\\
M@>f>> N.
\endCD
\]
and note that, by \ref{mono}, the composition $(p_i^M)^*\circ f^*$ is monic for $i>k$. Hence, because of the commutativity of the diagram, $(p_i^N)^*$ is monic for $i>k$. Thus, $e^*_E(N)\leq k$.
\qed
\begin{cor}\label{c:cat=cl}
Let $M$ be a closed connected $E$-orientable manifold, and let $f: M \to N$ be a map of degree $\pm 1$.
{\rm(i)} Suppose that $e^*_E(N)=\cat N$. Then $\cat M\geq \cat N$.
{\rm(ii)} Suppose that $\cl_E(N)=\cat N$. Then $\cat M\geq \cat N$.
\end{cor}
\p (i) We have $\cat M \geq e^*_E(M)\geq e^*_E(N)=\cat N$, the second inequality following from \propref{p:ineq}.
(ii) Because of (i), it suffices to prove that $e^*_E(N)=\cat N$. But this holds since $\cl_E(N)\leq e^*_E(N)\leq \cat N$.
\qed
\section{Low-Dimensional Manifolds}\label{s:low}
We prove that, for $n\leq 4$, $\cat M^n \geq \cat N^n$ provided that there exists a map $f: M \to N$ of degree 1. The inequality holds trivially for $n=1$.
\m The case $n=2$ is also simple. Denote by $g(X)$ the genus of a closed connected orientable surface $X$. Then $g(M)\geq g(N)$ because of the surjectivity of $f_*: H_1(M) \to H_1(N)$, see~\ref{mono}. Furthermore, $\cat X=1$ if $g(X)=0$ (i.e., $X=S^2$) and $\cat X=2$ for $g(X)\geq 1$.
\m The case $n=3$ is considered in~\cite[Corollary 1.3]{OR}.
\m The case $n=4$.
First, if $\cat N=4$ then $\cat N=\cat M$ by~\cite[Corollary 3.6(ii)]{R2}.
\m
Next we consider $\cat N=3$ and prove that $\cat M \geq 3$. By way of contradiction, assume that $\cat M\leq 2$, and hence the group $\pi_1(M)$ is free by~\cite{DKR}.
Now, $\pi_1(N)$ is free since $\deg f=1$, see~\cite{DR}. But then $N$ has a CW decomposition whose 3-skeleton is a wedge of spheres,~\cite{Hil}, and hence $\cat N\leq 2$, a contradiction. Finally, the case $\cat M=1<2=\cat N$ is impossible for trivial reasons ($M$ should be a homotopy sphere, and therefore $N$ should be a homotopy sphere).
\section{Some exemplifications}
The following result is a weak version of \cite[Theorem 3.6(i)]{R2}.
\begin{thm} \label{t:main}
Let $M, N$ be two smooth closed connected stably parallelizable manifolds, and assume that there exists a map $f: M \to N$ of degree $\pm 1$. If $N$ is $(q-1)$-connected and $\dim N\leq 2q\cat N-4$ then $\cat M \geq \cat N$.
\qed
\end{thm}
\begin{remark}\rm
In~\cite[Theorem 3.6(i)]{R2} we use~\cite[Corollary 3.3(i)]{R2} where, in turn, we require $\dim N \geq 4$. However, the case $\dim N \leq 4$ is covered by Section~\ref{s:low}.
\end{remark}
\begin{thm} \label{t:torus}
Let $T^k$ denote the $k$-dimensional torus. Let $M, N$ be two smooth closed connected stably parallelizable manifolds, and assume that there exists a map $f: M \to N$ of degree $\pm 1$. Then there exists $k$ such that $\cat (M \ts T^k)\geq \cat (N\ts T^k)$.
\end{thm}
\p Put $\dim M=n$ and note that $\cat (N\ts T^k)\geq \cat T^k=k$. Now, if $k\geq n+4$ then
\[
2\cat (N\ts T^k)-4\geq 2k-4 \geq k+n=\dim (N\ts T^k),
\]
and we are done by \theoref{t:main} applied with $q=1$.
\qed
\m Another example. Consider the exceptional Lie group $G_2$. Recall that $\dim G_2=14$. Note that $G_2$ is parallelizable being a Lie group.
\begin{prop} Let $M$ be a stably parallelizable 14-dimensional closed manifold that admits a map $f: M \to G_2$ of degree $\pm 1$. Then $\cat M \geq \cat G_2$.
\end{prop}
\p The group $G_2$ is 2-connected and $\cat G_2=4$,~\cite{IM}. Now the result follows from \theoref{t:main} with $N=G_2$ and $q=3$ because $14=\dim G_2\leq 2q\cat G_2-4=20$.
\qed
\m Let $SO_n$ denote the special orthogonal group, i.e., the group of the orthogonal $n\times n$-matrices of determinant 1. Recall that $\dim SO_n=n(n-1)/2$.
\begin{thm} \label{t:son}
Let $M$ be a closed connected smooth manifold, and let $f: M \to SO_n$ be a map of degree $\pm 1$. Then $\cat M \geq \cat(SO_n)$ for $n\leq 9$.
\end{thm}
\p We apply \corref{c:cat=cl} for the case $E=H\Z/2$, i.e., to arbitrary closed connected manifolds. Below $\cl$ denotes $\cl_{\Z/2}$. Because of \corref{c:cat=cl} it suffices to prove that $\cat SO_n=\cl(SO_n)$ for $n\leq 9$. Recall that $H^*(SO_n;\Z/2)$ is the polynomial algebra on generators $b_i$ of odd degree $i<n$, truncated by the relations ${b_i^{p_i}}=0$ where $p_i$ is the smallest power of 2 such that ${b_i^{p_i}}$ has degree $\geq n$,~\cite{H}. In other words,
\[
H^*(SO_n;\Z/2)=\Z/2[b_1, \ldots, b_k, \ldots]/(b_1^{p_1}, \ldots, b_k^{p_k}, \ldots).
\]
Note that $p_k=1$ for $2k-1>n$, and so $H^*(SO_n)$ is really a truncated polynomial ring (not a formal power series ring). Hence,
\[
\cl(SO_n)=(p_1-1)+\ldots +(p_k-1)+\ldots
\]
and the sum on the right is finite because $p_k=1$ for all but finitely many $k$'s.
\m For the sake of simplicity, we use the notation $S_n$ for $H^*(SO_n;\Z/2)$. We have:
$\dim SO_3=3$, $\cl(SO_3)=3 \text{ because } S_3=\Z/2[b_1]/(b_1^4)$.\\
$\dim SO_4=6$, $\cl(SO_4)=4 \text{ because } S_4=\Z/2[b_1,b_3]/(b_1^4, b_3^2)$.\\
$\dim SO_5=10$, $\cl(SO_5)=8 \text{ because } S_5=\Z/2[b_1,b_3]/(b_1^8, b_3^2)$.\\
$\dim SO_6=15$, $\cl(SO_6)=9 \text{ because } S_6=\Z/2[b_1,b_3, b_5]/(b_1^8, b_3^2, b_5^2)$.\\
$\dim SO_7=21$, $\cl(SO_7)=11\text{ because } S_7=\Z/2[b_1,b_3, b_5]/(b_1^8, b_3^4, b_5^2)$.\\
$\dim SO_8=28$, $\cl(SO_8)=12 \text{ because } S_8=\Z/2[b_1,b_3, b_5, b_7]/(b_1^8, b_3^4, b_5^2, b_7^2)$.\\
$\dim SO_9=36$, $\cl(SO_9)=20 \text{ because } S_9=\Z/2[b_1,b_3, b_5, b_7]/(b_1^{16}, b_3^4, b_5^2, b_7^2)$.\\
The values of $\cat SO_n, n\leq 9$ are calculated in~\cite{IMN}, see also~\cite{I2}. Compare these values with the above-noted values of $\cl(SO_n)$ and see that $\cat SO_n=\cl(SO_n)$ for $n\leq 9$.
\qed
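\m The truncation exponents $p_i$ in the proof are determined by elementary arithmetic, so the tabulated values of $\cl(SO_n)$ can be recomputed mechanically. A short sketch of this computation (our own addition; the function name is ours):

```python
# Recompute cl(SO_n) from the truncated-polynomial description of
# H^*(SO_n; Z/2): one generator b_i for each odd i < n, with b_i^{p_i} = 0,
# where p_i is the least power of 2 such that deg(b_i^{p_i}) = i*p_i >= n;
# then cl(SO_n) is the sum of the (p_i - 1).
def cup_length_SO(n):
    total = 0
    for i in range(1, n, 2):     # odd generator degrees i < n
        p = 1
        while i * p < n:         # smallest 2-power with i*p >= n
            p *= 2
        total += p - 1           # b_i^{p_i - 1} is the top nonzero power
    return total

# Matches the table in the proof: n = 3,...,9 gives 3, 4, 8, 9, 11, 12, 20.
assert [cup_length_SO(n) for n in range(3, 10)] == [3, 4, 8, 9, 11, 12, 20]
```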
\begin{remarks}\rm 1. The anonymous referee noticed that, probably, the method of \theoref{t:son} can also be applied to other Lie groups ($U_n, SU_n$, etc.). This is indeed true, but we do not develop these things here.
\m 2. In the above-mentioned \theoref{t:main} we can relax the assumption on $M$ by requiring that the normal bundle of $M$ be stably fiber homotopy trivial, i.e., that $M$ is $S$-orientable where $S$ is the sphere spectrum. However, we cannot provide the same weakening for $N$ because our proof in~\cite{R2} uses surgeries on $N$.
\end{remarks}
\section{Theme and Variations: Other Numerical Invariants Similar to LS Category}
Let $\crit X$ denote the minimum number of critical points of a smooth function $f:X \to \R$ on the closed smooth manifold $X$, and let $\ballcat (X)$ be the minimum $m\in \N$ such that there is a covering of $X$ by $m+1$ smooth open balls. It is known that $\cat X \leq \ballcat(X)\leq \crit M-1$, see~\cite{CLOT}. Note that there are examples with $\cat M<\ballcat(M)$. Indeed, there are examples of manifolds $M$ such that $\cat M=\cat (M\setminus \pt)$,~\cite{I1}, while $\cat (M\setminus \pt)=\cat M-1$ whenever $\cat M=\ballcat(M)$. On the other hand, there are no known examples with $\ballcat M +1<\crit M$.
\m Now, we can pose an open question whether $\ballcat M\geq \ballcat N$ and $\crit M \geq \crit N$ provided there exists a map $f: M \to N$ of degree 1.
\m We can also consider the number $\crit^*(X)$, that is, the minimum number of {\em nondegenerate} critical points of a smooth function $f:X \to \R$ on the closed smooth manifold $X$. There is a big difference between $\crit X$ and $\crit^*X$. For example, if $S_g$ is a surface of genus $g\geq 1$ then $\crit S_g=3$ while $\crit^*S_g=2g+2$. So, we can ask if $\crit^* M \geq \crit^* N$ provided there exists a map $f: M \to N$ of degree~1. One of the lower bounds of $\crit^*(X)$ is the sum of Betti numbers $\SB(X)$, the inequality $\SB(X)\leq\crit^*(X)$ being a direct corollary of Morse theory, \cite{M}, and we can regard $\SB(X)$ as an approximation of $\crit^*X$. Now, if $f: M \to N$ is a map of degree~1 then the inequality $\SB(M)\geq \SB(N)$ follows from \ref{mono}.
\m If $M,N$ are closed simply-connected manifolds of dimension $\geq 6$ and there exists a map $M\to N$ of degree~1 then $\crit^*(M)\geq \crit^*(N)$. Indeed, for every Morse function $h: X\to \R$ on a closed connected smooth manifold $X$ we have the Morse inequalities
\[
m_{\gl}\geq r_{\gl}+t_{\gl}+t_{\gl-1}
\]
where $m_{\gl}$ is the number of critical points of index $\gl$ of $h$, and $r_{\gl}, t_{\gl}$ are the rank and the torsion rank of $H_{\gl}(X)$, respectively. Now, if $X$ is a closed simply connected manifold with $\dim X \geq 6$ then $X$ possesses a Morse function for which the above-mentioned Morse inequalities become equalities; this is a well-known theorem of Smale~\cite{Sm}. Now, the inequalities $r_{\gl}(M)\geq r_{\gl}(N)$ and $t_{\gl}(M) \geq t_{\gl}(N)$ follow from \ref{r:local}, and we are done.
\qed
\m {\bf Acknowledgments:} The work was partially supported by a grant from the Simons Foundation (\#209424 to Yuli Rudyak). I am very grateful to the anonymous referee who assisted a lot in my work with \corref{c:cat=cl} and \theoref{t:son}, and made other valuable suggestions that improved the paper.
https://arxiv.org/abs/1810.00492 | Chords of an ellipse, Lucas polynomials, and cubic equations | A beautiful theorem of Thomas Price links the Fibonacci numbers and the Lucas polynomials to the plane geometry of an ellipse, generalizing a classic problem about circles. We give a brief history of the circle problem, an account of Price's ellipse proof, and a reorganized proof, with some new ideas, designed to situate the result within a dense web of connections to classical mathematics. It is inspired by Cardano's solution of the cubic equation and Newton's theorem on power sums, and yields an interpretation of generalized Lucas polynomials in terms of the theory of symmetric polynomials. We also develop additional connections that surface along the way; e.g., we give a parallel interpretation of generalized Fibonacci polynomials, and we show that Cardano's method can be used write down the roots of the Lucas polynomials. | \section{Introduction}
A classic problem instructs the reader to mark off $n$ equally spaced points on a unit circle, draw the chords connecting one of the points to the remaining $n-1$ others, and find the product of their lengths.
\begin{figure}[htbp]
\begin{center}
\resizebox{2in}{!}{\includegraphics{CircleDiagonals}}
\caption{Chords of a Circle}
\label{fig:circlechords}
\end{center}
\end{figure}
Although most or all of the lengths are irrational, the product is exactly $n$. An elegant solution (\cite{galovich},
\cite{price1}, \cite{silva}) involves the $n$th roots of unity $1, \zeta, \zeta^2, \ldots, \zeta^{n-1}$, which are equally spaced around the unit circle in the complex plane. Connecting $1$ to each of the $n - 1$ others and multiplying the lengths of the segments gives the product:
$$\Pi_n = |1 - \zeta | |1 - \zeta^2| \cdots |1 - \zeta^{n-1}| = |(1 - \zeta)(1 - \zeta^2) \cdots (1 - \zeta^{n-1})|.$$
This is the absolute value of the polynomial $$\Pi_n(z) = (z - \zeta)(z - \zeta^2) \cdots (z - \zeta^{n-1})$$ evaluated at $z=1$.
The $n$th roots of unity are, by definition, the roots of $z^n-1$. Now $\Pi_n$ has as roots all roots of unity except 1; it follows that $\Pi_n(z) = \frac{z^n - 1}{z - 1}=z^{n-1}+\dots+z+1$. Thus the answer desired is $|\Pi_n(1)| = n$.
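As a quick sanity check (not part of the argument, and with a helper name of our own), the circle theorem is easy to verify numerically by multiplying the chord lengths directly:

```python
import cmath
import math

def chord_product(n):
    """Product of the distances from 1 to the other n-th roots of unity."""
    zeta = cmath.exp(2j * math.pi / n)
    prod = 1.0
    for j in range(1, n):
        prod *= abs(1 - zeta ** j)
    return prod

# the product is exactly n, up to floating-point error
for n in range(2, 12):
    assert math.isclose(chord_product(n), n, rel_tol=1e-9)
```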
This problem has an interesting history, of which we discuss a few highlights below in section \ref{sec:circlehistory}.
In \cite{price1}, Thomas E. Price considered the following generalization. Scale Figure \ref{fig:circlechords} horizontally and/or vertically. It becomes an ellipse. What happens to the product of the chord lengths? An elegant explicit formula is given in \cite{price1}, which we re-derive below (Proposition \ref{prop:price}).
\begin{figure}[htbp]
\begin{center}
\resizebox{2in}{!}{\includegraphics{StretchedDiagonals}}
\caption{Stretched Chords}
\label{fig:ellipsechords}
\end{center}
\end{figure}
Showcasing the power of his results, Price in \cite{price2} considered the special case of a vertical stretch factor of $\sqrt{5}$ (see Figure \ref{fig:ellipsechords}). In this situation, the product of the chord lengths is given in Table \ref{tbl:ellipseproducts}. The answer is remarkable.
\begin{table}
\begin{center}
\begin{tabular}{c|c}
$n$ & product\\ \hline
2 & $2 = 2\cdot 1$\\
3 & $6 = 3\cdot 2$\\
4 & $12 = 4\cdot 3$\\
5 & $25 = 5\cdot 5$\\
6 & $48 = 6\cdot 8$\\
7 & $91= 7\cdot 13$\\
\vdots & \vdots \\
$n$ & $nF_n$
\end{tabular}
\end{center}
\caption{Products of stretched chords. The symbol $F_n$ refers to the $n$th Fibonacci number.}\label{tbl:ellipseproducts}
\end{table}
Our purpose in this note is to offer an alternative proof of this beautiful and insufficiently-well-known result of Price. We recover the result via a conceptually transparent argument that highlights several gems of classical mathematics. It is inspired by the classical Cardano formula for the solution of a reduced cubic equation by radicals, and makes use of a formula of Newton expressing the elegant relation between elementary symmetric polynomials and power sums. In the situation of interest, Newton's formula becomes the two-term linear recurrence that characterizes the generalized Lucas polynomials. Thus, the generalized Lucas polynomials also generalize the reduced cubic. The method provides an interpretation of the generalized Lucas polynomials that makes their Binet formula fall out as an immediate consequence. We give an analogous interpretation of the generalized Fibonacci polynomials that justifies their Binet formula as well.
We integrate our proof of Price's result with exposition of the classical mathematics involved. Thus, we aim for a completely self-contained treatment. We include short proofs of Cardano's and Newton's formulas, as well as all the facts about Lucas and Fibonacci polynomials that are needed.
The structure of the paper is as follows. In section \ref{sec:cardano}, we give a short proof of Cardano's formula for solving a reduced cubic polynomial in radicals, due to Solomon, and show that the polynomial that serves for the ellipse the role played above by $z^n-1$ for the circle, which we call $\Omega_n(z)$, generalizes the reduced cubic. We show that $\Omega_n(z)$ has a natural interpretation in terms of the fundamental theorem on symmetric polynomials (FTSP). In section \ref{sec:newtonbinet}, we sketch a proof of Newton's theorem on power sums, and then use it to derive a recursive formula for the key part of $\Omega_n(z)$. We observe that this recurrence is actually none other than the recurrence defining the generalized Lucas polynomials, up to a sign change, and therefore $\Omega_n(z)$ is expressed in terms of the generalized Lucas polynomials. The Binet formula for the generalized Lucas polynomial now falls out as a consequence of the interpretation in terms of the FTSP. In section \ref{sec:solution}, we use known facts about generalized Lucas and Fibonacci polynomials to derive Price's beautiful formula \ref{prop:price} for the product of the stretched elliptical chords. In section \ref{sec:solnbyradicals}, we bring the inquiry full-circle by giving the solution of $\Omega_n(z)$ by radicals, which generalizes the Cardano formula, and recovers the known formula for the roots of the Lucas polynomial as a special case. The rest of the paper consists of commentary and loose ends. Price's starting point, the circle problem described above, has an interesting history, and we give some highlights in section \ref{sec:circlehistory}. In section \ref{sec:furtherremarks}, in the name of self-containment, we offer proofs for the facts we used in section \ref{sec:solution}, including a proof of the Binet formula for generalized Fibonacci polynomials along the lines of what was done for the generalized Lucas polynomials in section \ref{sec:newtonbinet}. 
We also discuss the relation between this and Price's work.
Throughout, $\zeta$ refers to a primitive $n$th root of unity, which can be taken to be $e^{2\pi i / n}$. We prefer it to $\zeta_n$ to avoid visual clutter, but the notation conceals the dependence of $\zeta$ on $n$. Hopefully no confusion will result.
\section{A generalization of Cardano's reduced cubic}\label{sec:cardano}
Since the product of chord lengths for the circle is so elegantly found via the polynomial $z^n-1$ whose roots $1,\zeta, \zeta^2,\dots$ describe the points of interest in the complex plane, the essential challenge in the generalized elliptical version of the problem is to describe the polynomial whose roots are the horizontally and vertically scaled images of these points. As Price observed in \cite{price1}, these image points have the form
\begin{equation}\label{eq:roots}
\zeta^ja + \zeta^{-j}b,
\end{equation}
where $a> |b|\geq 0$, and $a+b$, $a-b$ are the lengths of the ellipse's horizontal and vertical axes. The original unit circle of Figure \ref{fig:circlechords} is the case $a=1$, $b=0$. In the case where the circle is vertically scaled by $\sqrt{5}$, as in Figure \ref{fig:ellipsechords}, we have $a=\phi = (1+\sqrt{5})/2$, the golden ratio, and $b=\hat\phi = (1-\sqrt{5})/2$, its algebraic conjugate.
Our solution begins with the observation that, by the classical Cardano formula, the roots of a {\em reduced cubic equation}
\begin{equation}\label{eq:cubic}
z^3 + pz + q = 0
\end{equation}
also have the form \eqref{eq:roots}, where $n=3$, and
\begin{equation}\label{eq:aandb}
a,b = \sqrt[3]{-\frac{q}{2}\pm\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}}.
\end{equation}
The cube roots must be chosen so that $ab = -p/3$. (In this context the word ``reduced" refers to the fact that \eqref{eq:cubic} is monic with no quadratic term.)
The following interpretation of Cardano's formula
is given by Ronald Solomon in \cite[pp.~50--51]{solomon}. The identity
\begin{equation}\label{eq:identity}
(a+b)^3 - 3ab(a+b) - (a^3 + b^3) = 0
\end{equation}
holds in any field (indeed, in any commutative ring). This identity becomes the reduced cubic \eqref{eq:cubic} if $z=a+b$, $p = -3ab$, and $q=-(a^3 + b^3)$. Thus, finding a $z$ satisfying \eqref{eq:cubic} can be achieved by finding $a$ and $b$ satisfying $ab = -p/3$ and $a^3 + b^3 = -q$, whereupon $z=a+b$ is the desired solution. This in turn can be accomplished by noting that $ab = -p/3$ implies $a^3b^3 = -p^3/27$; thus $a^3,b^3$ are quantities whose sum and product are known. It follows that $a^3,b^3$ solve the known quadratic equation $X^2 + qX - p^3/27=0$ (the {\em resolvent quadratic} of \eqref{eq:cubic}). Applying the quadratic formula to this equation and taking cube roots yields \eqref{eq:aandb}.
Because we have a choice of cube roots for $a,b$ but must preserve the known relation $ab = -p/3$, we can twist $a$ by a factor of $\zeta$, but not without twisting $b$ by $\zeta^{-1}$. This is why the three solutions to \eqref{eq:cubic} have the form \eqref{eq:roots}.
We seek a generalization of \eqref{eq:identity} for $n>3$ with the same property, that $a$ can be replaced by $\zeta a$ but not without replacing $b$ by $\zeta^{-1}b$.
By moving the final term to the right side, the identity \eqref{eq:identity} expresses the {\em power sum} $a^3 + b^3$ in terms of the {\em elementary symmetric polynomials} $a+b$ and $ab$. By the fundamental theorem on symmetric polynomials, higher power sums $a^n+b^n$ can also be expressed in terms of $a+b$ and $ab$, since they are symmetric in $a,b$; in fact, the expression can be taken, in a unique way, to be an integer polynomial.
\begin{definition}\label{def:L}
For each $n$, let $L_n(X,Y)$ denote the unique integer polynomial such that\begin{equation}\label{eq:higheridentity}
L_n(a+b,ab) - (a^n + b^n) = 0
\end{equation} is an identity.
\end{definition}
We will see below that the polynomial $L_n(X,Y)$ just defined is actually a familiar mathematical object, although it is not usually defined in this way.
Since $a^n+b^n$ is homogeneous of degree $n$ in the indeterminates $a,b$, the expression $L_n(a+b,ab)$ must also be homogeneous of degree $n$. Thus $L_n(X,Y)$ is homogeneous if $X$ and $Y$ are given the weights $1$ and $2$ respectively. Since $a^n+b^n$ is not divisible by $ab$, $L_n(X,Y)$ cannot be divisible by $Y$. Thus it must have a term that does not include $Y$, and it follows that $L_n(X,Y)$ is of degree $n$ in $X$. (It is not hard to see, and we will show below, that it is actually \textit{monic} of degree $n$ in $X$.)

This is the desired identity generalizing \eqref{eq:identity}: if one makes the substitution $a \mapsto \zeta a$, $b\mapsto \zeta^{-1}b$, both $ab$ and $a^n+b^n$ are left unchanged. Thus \eqref{eq:higheridentity} becomes $L_n(\zeta a + \zeta^{-1}b, ab) - (a^n + b^n) = 0$. Repeating this substitution $n-2$ more times, we conclude that all the desired numbers \eqref{eq:roots} are roots of $L_n(z,ab) - (a^n+b^n)$; bearing in mind the above remark about the degree of $L_n(X,Y)$ in $X$, we have
\begin{prop}\label{prop:thepoly}
The roots of the polynomial
\[
\Omega_n(z) := L_n(z,ab) - (a^n + b^n),
\]
are the $n$ numbers \eqref{eq:roots} for $j=0,1,\dots,n-1$, where $L_n$ is the polynomial of Definition \ref{def:L}.\qed
\end{prop}
By \eqref{eq:higheridentity}, $\Omega_n(z)$ can also be written $L_n(z,ab) - L_n(a+b,ab)$, implying a certain symmetry between the constant (in $z$) term and the rest. To avoid notational clutter, we have suppressed the dependence of $\Omega_n(z)$ on $a$ and $b$.
\section{Newton's and Binet's formulas}\label{sec:newtonbinet}
To use proposition \ref{prop:thepoly} to find the product of the stretched elliptical chords as in figure \ref{fig:ellipsechords}, we need more information about $\Omega_n(z)$, or, equivalently, $L_n(X,Y)$. A classical result of Newton delivers a recursive formula for $L_n(X,Y)$.
{\em Newton's theorem} states that, for any $m$ indeterminates $a_1,\dots,a_m$, the {\em power sums} $s_1 = \sum_j a_j$, $s_2 = \sum_j a_j^2$, $\dots$, and the {\em elementary symmetric polynomials} $\sigma_1 = \sum_j a_j$, $\sigma_2 = \sum_{j<k} a_ja_k$, $\sigma_3 = \sum_{j<k<\ell} a_{j}a_{k}a_{\ell}$, $\dots$, obey the following relation for any natural number $n$:
\begin{equation}\label{eq:newton}
s_n - s_{n-1}\sigma_1 + s_{n-2}\sigma_2 - \dots \pm s_1\sigma_{n-1} \mp n\sigma_n = 0.
\end{equation}
The proof is a beautiful computation. The power sum $s_n$ is, of course, the sum of all terms of the form $a_j^n$. The next term of \eqref{eq:newton}, $s_{n-1}\sigma_1$, cancels all of these terms, but introduces new ones of the form $a_j^{n-1}a_k$. Then $s_{n-2}\sigma_2$ cancels these, but introduces new terms $a_j^{n-2}a_ka_\ell$, and so on, until finally $n\sigma_n$ cancels everything remaining. The relation holds even when $n>m$, via the convention that $\sigma_j = 0$ when $j>m$, in which case, only the first $m+1$ terms are nonzero.
In the situation of interest to us, we have $m=2$ unknowns. Thus \eqref{eq:newton} reduces to
\begin{equation}\label{eq:newton2}
(a^n + b^n) - (a^{n-1}+b^{n-1})(a+b) + (a^{n-2}+b^{n-2})(ab) = 0
\end{equation}
for any $n\geq 3$. In fact, since when $n=2$ we have $a^{n-2}+b^{n-2}=2=n$, it actually holds for all $n\geq 2$, which can of course also be checked directly. Making the substitution $a^j + b^j\mapsto L_j(a+b,ab)$ (for $j=n, n-1, n-2$) followed by $a+b \mapsto X$ and $ab \mapsto Y$, we obtain
\[
L_n(X,Y) - XL_{n-1}(X,Y) + Y L_{n-2}(X,Y) = 0,
\] or
\begin{equation}\label{eq:lucasrec}
L_n(X,Y) = X L_{n-1}(X,Y) - Y L_{n-2}(X,Y),
\end{equation}
valid for all $n\geq 2$. This linear recurrence characterizes $L_n(X,Y)$ for all $n$ once $L_0(X,Y)$ and $L_1(X,Y)$ are known. Since $L_0(a+b,ab) = a^0 + b^0 =2$, we have $L_0(X,Y) = 2$, and since $L_1(a+b,ab) = a^1 + b^1 = a+b$, we have $L_1(X,Y) = X$. It now follows from \eqref{eq:lucasrec} by induction on $n$ that $L_n(X,Y)$ is monic in $X$, as was claimed above. Thus, per Proposition \ref{prop:thepoly}, $\Omega_n(z)$ is the unique monic polynomial with the roots \eqref{eq:roots}.
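The recurrence, together with the initial data $L_0 = 2$ and $L_1 = X$, is easy to check numerically against the defining identity \eqref{eq:higheridentity}; a minimal Python sketch:

```python
def L(n, X, Y):
    """L_n(X, Y) computed from L_n = X*L_{n-1} - Y*L_{n-2}, L_0 = 2, L_1 = X."""
    if n == 0:
        return 2
    prev, cur = 2, X
    for _ in range(n - 1):
        prev, cur = cur, X * cur - Y * prev
    return cur

# the defining identity: L_n(a+b, ab) = a^n + b^n
a, b = 3.0, -2.0
for n in range(10):
    assert abs(L(n, a + b, a * b) - (a ** n + b ** n)) < 1e-6
```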
Readers familiar with the generalized Fibonacci and Lucas polynomials (\cite{bergumhoggatt}, \cite{hoggattbicknell}, \cite{swamy}, \cite{webbparberry}) will recognize in \eqref{eq:lucasrec} the recurrence defining these polynomials, up to the sign change $Y\mapsto -Y$. The first two polynomials $L_0 = 2$ and $L_1 = X$ coincide with the generalized Lucas polynomials. It is then immediate, since $L_0$ and $L_1$ do not involve $Y$, that
\begin{prop}\label{prop:lucas}
The $L_n(X,Y)$ of Definition \ref{def:L} is the image of the $n$th generalized Lucas polynomial $V_n(X,Y)$ under the substitution $Y\mapsto -Y$.\qed
\end{prop}
Thus we can get a formula for $L_n$ (and thus for $\Omega_n$) from the known formula (\cite[equation (2.22)]{swamy}) for the generalized Lucas polynomials:
\[
L_n(X,Y) = \sum_{r=0}^{\lfloor n/2\rfloor} (-1)^r\frac{n}{n-r}\binom{n-r}{r} X^{n-2r}Y^r
\]
We include this formula for interest, although it turns out not to be needed for proving the intended result of Price about the product of elliptical chords.
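As a numerical cross-check (the helper name is ours), the displayed closed form does reproduce the power sums $a^n + b^n$ for $n \geq 1$:

```python
from math import comb

def L_closed(n, X, Y):
    """The displayed closed form for L_n(X, Y); valid for n >= 1."""
    return sum((-1) ** r * n / (n - r) * comb(n - r, r)
               * X ** (n - 2 * r) * Y ** r
               for r in range(n // 2 + 1))

# L_n(a+b, ab) = a^n + b^n
a, b = 2.0, 0.5
for n in range(1, 10):
    assert abs(L_closed(n, a + b, a * b) - (a ** n + b ** n)) < 1e-6
```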
Proposition \ref{prop:lucas} is a reformulation of a standard fact about the generalized Lucas polynomials. The so-called \textit{Binet formula} (e.g. \cite[equation (14)]{bergumhoggatt} or \cite[equation (2.2)]{swamy}) asserts that
\begin{equation}\label{eq:binet}
V_n(X,Y) = a^n + b^n,
\end{equation}
where $a=(X + \sqrt{X^2 + 4Y})/2$ and $b=(X-\sqrt{X^2+4Y})/2$, or, equivalently, $X = a+b$ and $Y = -ab$. Normally, one sees $X,Y, V_n$ as conceptually prior to $a,b$. The usual proof is either by a mechanical induction, or via a standard exercise in finding a closed form for the linear recurrence $V_n = XV_{n-1} + YV_{n-2}$. In the latter case, $a,b$ enter as the roots of the characteristic polynomial of that recurrence. However, in our present context, which treats $a,b$ as the conceptual starting point (as in Definition \ref{def:L}), \eqref{eq:binet} asserts that up to the substitution $Y\mapsto -Y$, $V_n(X,Y)$ is the polynomial that expresses the power sum $a^n+b^n$ in terms of the elementary symmetric polynomials $X = a+b$ and $-Y = ab$. This is the content of proposition \ref{prop:lucas}. Thus our argument for proposition \ref{prop:lucas} is an alternative proof of the Binet formula. It is essentially computation-free in the sense that the important calculation was subcontracted to Newton's theorem.
\section{Stretched chords}\label{sec:solution}
We apply the above to re-derive Price's formula for the product of the elliptical chords. By Proposition \ref{prop:thepoly}, the polynomial with roots in the form \eqref{eq:roots} is $$\Omega_n(z) = L_n(z,ab) - (a^n+b^n).$$ This polynomial plays the role played by $z^n-1$ in the product of chords in the circle, and $a+b$ is the image of $1$ under the scaling, so by the same logic used in the introduction, the product of the elliptical chords is the absolute value of
\[
\frac{\Omega_n(z)}{z-(a+b)}
\]
evaluated at $z=a+b$. This is equal to $\Omega_n'(a+b)$, since $a+b$ is a root of $\Omega_n(z)$. We have that
\[
\Omega_n'(z) = \frac{d}{dz} L_n(z,ab) = \frac{d}{dz} V_n(z,-ab)
\]
where $V_n$ is the generalized Lucas polynomial, with the second equality by Proposition \ref{prop:lucas}. Let $U_n(X,Y)$ denote the generalized Fibonacci polynomial, which is defined by the same recurrence as the Lucas polynomials but with the initial data $U_0 = 0$, $U_1= 1$. By a known relation \cite[equation (3.10)]{swamy} between Lucas and Fibonacci polynomials,
\[
\frac{d}{dz} V_n(z,-ab) = n U_n(z,-ab)
\]
Substituting $a+b$ for $z$, the Binet formula for generalized Fibonacci polynomials (\cite[Theorem 1]{webbparberry} or \cite[equation (2)]{hoggattbicknell}) states that
\[
U_n(a+b,-ab) = \frac{a^n-b^n}{a-b}.
\]
Recalling that $a>|b|\geq 0$, this is positive, so taking absolute values has no effect. Thus, we recover Price's beautiful result that
\begin{prop}[Price \cite{price1}]\label{prop:price}
The product of the elliptical chord lengths described in the introduction is
\[
\pushQED{\qed}
n\frac{a^n-b^n}{a-b}.\qedhere
\popQED
\]
\end{prop}
As noted by Price in \cite{price2}, this yields $nF_n$, as in Table \ref{tbl:ellipseproducts}, when $a=\phi$ and $b=\hat\phi$, the golden ratio and its algebraic conjugate.
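Proposition \ref{prop:price} can be confirmed numerically for generic $a > |b| \geq 0$ by multiplying the chord lengths directly; a minimal sketch (helper name ours):

```python
import cmath
import math

def elliptical_chord_product(a, b, n):
    """Product of the distances from a+b to the points zeta^j*a + zeta^(-j)*b."""
    zeta = cmath.exp(2j * math.pi / n)
    prod = 1.0
    for j in range(1, n):
        prod *= abs((a + b) - (zeta ** j * a + zeta ** (-j) * b))
    return prod

a, b = 2.0, 0.7   # any a > |b| >= 0
for n in range(2, 9):
    expected = n * (a ** n - b ** n) / (a - b)
    assert math.isclose(elliptical_chord_product(a, b, n), expected, rel_tol=1e-9)
```

Setting $a=1$, $b=0$ recovers the circle theorem of the introduction.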
\section{Solution of $\Omega_n(z)$ by radicals}\label{sec:solnbyradicals}
In this section we bring the inquiry full circle by returning to Cardano. The key polynomial $\Omega_n(z)$ of our argument is modeled on Cardano's reduced cubic (cf. section \ref{sec:cardano}). Thus it is no surprise that it can be solved in radicals by essentially the same method. We recover the known formula for the roots of the Lucas polynomials as a special case.
For $n=3$, we have
\[
\Omega_3(z) = z^3 - 3abz - (a^3 +b^3)
\]
by comparing \eqref{eq:identity} with definition \ref{def:L} and proposition \ref{prop:thepoly} (which defines $\Omega_n(z)$). In section \ref{sec:cardano}, we identified this with Cardano's reduced cubic $x^3 + px + q$ by setting $-3ab = p$ and $-(a^3+b^3)=q$. The presumption is that $p$ and $q$ belong to a prespecified field, such as the rational numbers. Rationality would not have been affected by instead setting $ab = p$ and $a^3 + b^3=q$, so moving forward, this is the convention we generalize: let $p=ab$ and let $q=a^n+b^n$. Then $\Omega_n(z)=L_n(z,p)-q$ is a univariate polynomial over the field that contains $p$ and $q$, and the goal of a solution in radicals is an expression for its roots in terms of $p$ and $q$.
We already know the roots are given by \eqref{eq:roots}; thus we only need to express $a,b$. The method is identical to that which derived Cardano's formula in section \ref{sec:cardano}. We know that $a^nb^n=p^n$. Thus, $a^n$ and $b^n$ are roots of the quadratic equation
\[
X^2 - qX + p^n = 0.
\]
Applying the quadratic formula and taking $n$th roots, we obtain
\begin{equation}\label{eq:solnbyradicals}
a,b = \sqrt[n]{\frac{q \pm \sqrt{q^2-4p^n}}{2}}.
\end{equation}
So a radical expression for the roots of $\Omega_n(z) = L_n(z,p)-q$ is obtained by substituting \eqref{eq:solnbyradicals} in \eqref{eq:roots}.
The generalized Lucas polynomials are $V_n(X,Y) = L_n(X,-Y)$, i.e. the case $p=-Y$, $q=0$. In this case, \eqref{eq:solnbyradicals} simplifies to
\[
a,b = \sqrt[n]{\pm\sqrt{-(-Y)^n}} = \sqrt[n]{\pm i\sqrt{-Y}^n} = \xi\sqrt{-Y}, \xi^{-1}\sqrt{-Y},
\]
where $\xi$ is an $n$th root of $i$, i.e. a $4n$th root of unity that is not a $2n$th root. We can take $\xi = e^{\pi i / 2n}$, $\zeta = \xi^4$, for example, and then by \eqref{eq:roots}, the roots of $V_n(X,Y)$ (as a polynomial in $X$) are
\[
\sqrt{-Y} (\zeta^j\xi + \zeta^{-j}\xi^{-1})= 2\sqrt{-Y} \cos \frac{(1+4j)\pi}{2n}
\]
for $j=0,1,\dots,n-1$. The expression $(1+4j)\pi/2n$ ranges over all the $(1\operatorname{mod} 4)$-multiples of $\pi / 2n = 2\pi / 4n$ in the range $[0,2\pi)$. Exploiting the symmetry $\cos \theta = \cos(2\pi - \theta)$, those in the range $[\pi,2\pi)$ can be replaced by the $(3\operatorname{mod} 4)$-multiples in the range $[0,\pi)$. Thus a slight simplification is
\[
X = 2\sqrt{-Y}\cos\frac{(1+2j)\pi}{2n}
\]
for $j=0,1,\dots,n-1$. Compare \cite[equation (2.24)]{swamy}. The classical Lucas polynomials are the case $Y=1$, so $\sqrt{-Y} = i$. Compare \cite[p.~274]{hoggattbicknell}.
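As a numerical check of this root formula (the helper name is ours), evaluating the generalized Lucas polynomial at the claimed roots gives zero to machine precision:

```python
import cmath
import math

def lucas_poly(n, X, Y):
    """Generalized Lucas polynomial V_n via V_n = X*V_{n-1} + Y*V_{n-2},
    with V_0 = 2 and V_1 = X."""
    if n == 0:
        return 2
    prev, cur = 2, X
    for _ in range(n - 1):
        prev, cur = cur, X * cur + Y * prev
    return cur

n, Y = 7, 1.0   # classical Lucas polynomial: the case Y = 1
for j in range(n):
    root = 2 * cmath.sqrt(-Y) * math.cos((2 * j + 1) * math.pi / (2 * n))
    assert abs(lucas_poly(n, root, Y)) < 1e-9
```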
\section{History of the circle problem}\label{sec:circlehistory}
Price's inspiration, the circle problem described in the introduction, has its own interesting print history, of which we share some highlights here. We think it is likely that the problem has been rediscovered many times, so we make no attempt to be exhaustive. As given in the introduction, the proof (henceforth, the ``standard proof'') that the product of the chord lengths is $n$ (henceforth, the ``circle theorem'') consists of five steps:
\begin{enumerate}[label = (\alph*)]
\item Interpret the $n$ equidistant points on the unit circle as the $n$th roots of unity $1,\zeta,\dots,\zeta^{n-1}$ in the complex plane $\mathbb{C}$.\label{step:demoivre}
\item Interpret the relevant chord lengths as absolute values of the complex numbers $1-\zeta^j$, $j=1,\dots,n-1$.\label{step:geom}
\item Exploit the fact that products commute with absolute values in $\mathbb{C}$ to identify the product of the chord lengths with $(1-\zeta)\dots(1-\zeta^{n-1})$.\label{step:absval}
\item Note that this is the polynomial $(z-\zeta)\dots(z-\zeta^{n-1})$, evaluated at $z=1$.\label{step:poly}
\item Invoke the identity $\left(z-\zeta\right)\dots(z-\zeta^{n-1}) = z^{n-1} + \dots + z + 1$.\label{step:identity}
\end{enumerate}
Taken individually, each of these is routine, very classical, or both. Thus a broadly-construed ``history of the circle theorem" would be a history of the complex plane, the roots of unity, and their connection to the circle. This is beyond our present scope. But we mention some striking signposts.
The first of these is a result quite close to the circle theorem that was discovered in 1716 by Roger Cotes, and enunciated (without proof) in his {\em Harmonia Mensurarum}, published posthumously in 1722 (see \cite[p.~194--195]{stillwell}). Cotes is better known as the editor of the second edition of Newton's {\em Principia}. The following statement of Cotes' theorem is taken from \cite[p.~195]{stillwell}:
{\em If $A_0,\dots,A_{n-1}$ are equally spaced points on the unit circle with center $O$, and if $P$ is a point on $OA_0$ such that $OP=x$, then
\[
PA_0\cdot PA_1\cdot \dots \cdot PA_{n-1} = 1 - x^n.
\]}
One obtains the circle theorem by dividing through by $PA_0 = 1-x$ and then letting $P\to A_0$ (equivalently $x\to 1$).
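Cotes' statement is easy to confirm numerically for $0 \leq x < 1$; a minimal sketch (helper name ours):

```python
import cmath
import math

def cotes_product(n, x):
    """Product of the distances from P = x (on the segment O A_0) to the
    n equally spaced points A_k on the unit circle."""
    prod = 1.0
    for k in range(n):
        prod *= abs(x - cmath.exp(2j * math.pi * k / n))
    return prod

# Cotes' theorem: the product equals 1 - x^n
for n in range(2, 8):
    for x in (0.0, 0.3, 0.9):
        assert math.isclose(cotes_product(n, x), 1 - x ** n, rel_tol=1e-9)
```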
How Cotes came to this conclusion has not been preserved, but he and his contemporaries Johann Bernoulli and Abraham de Moivre were taking halting steps toward the formula
\begin{equation}\label{eq:demoivre}
(\cos \theta + i\sin\theta)^n = \cos n\theta + i\sin n\theta.
\end{equation}
This formula now bears de Moivre's name \cite[p.~192--195]{stillwell}, although it never appeared in his writings \cite[p.~193]{stillwell}. To a modern reader, and indeed by later in the 18th century, de Moivre's formula would be seen as a consequence of the formula
\[
e^{i\theta} = \cos\theta + i\sin \theta
\]
given by Euler \cite[Ch. 8, article 138]{euler} in 1748.
John Stillwell has speculated \cite[p.~195]{stillwell} that Cotes may have used reasoning related to some variant of \eqref{eq:demoivre}. To us, this formula is the starting point of the circle theorem, for it gives the identification \ref{step:demoivre} of the roots of unity with points on a unit circle: set the right side of \eqref{eq:demoivre} to $1$, obtaining $n\theta=2\pi k$ for some integer $k$, and then solve for $\theta$, yielding $\theta = 2\pi k / n$. Then consult the left side to conclude $(\cos 2\pi k / n + i\sin 2\pi k / n)^n = 1$, so that the $n$th roots of unity are the numbers $\cos 2\pi k/n + i\sin 2\pi k / n$. By the late 18th century, this was a standard maneuver -- see for example \cite[article 23, p.~249]{lagrange}.
We are about to skip ahead 200 years, but we mention two major 19th century developments that are mathematically adjacent to our story.
First, the identity \ref{step:identity} is the starting point for the chapter on cyclotomic (``circle-dividing") equations in Carl Friedrich Gauss' 1801 {\em Disquisitiones Arithmeticae} (\cite[Ch. 7]{gauss}). In this work, Gauss proved that any root of unity $\zeta$ can be expressed in radicals, via calculations in what we now recognize as the Galois group of the polynomial $\Pi_n(z)=z^{n-1} + \dots + z + 1$. This was the first real use of Galois groups -- 30 years before Galois! As a corollary, he deduced that if $n$ is prime, a regular $n$-gon can be constructed with ruler and compass if and only if $n$ has the form $2^k + 1$, and therefore the 17-gon is constructible.
Second, the special case of \ref{step:identity} with the substitution $z=1$ (as in \ref{step:poly}, \ref{step:absval}) figures in the series of monumental papers by Ernst Kummer (\cite{kummerw}) that birthed the field of algebraic number theory. Kummer investigated how factorization changes when one expands from the ring of integers $\mathbb{Z}$ to the bigger ring $\mathbb{Z}[\zeta]$ consisting of integer linear combinations of $\zeta$ and its powers. (See \cite[Ch. 4-6]{edwards} for a modern exposition of Kummer's contribution.) When $n=p$ is prime, substituting $z=1$ into the identity \ref{step:identity} yields $p=(1-\zeta)\dots(1-\zeta^{p-1})$, which is the factorization of $p$ into primes in $\mathbb{Z}[\zeta]$ (e.g. \cite[p.~174]{kummer}).
We now reach the 20th century. The circle theorem was stated in a short 1954 note of W. Sichardt \cite{sichardt} in \textit{Zeitschrift f\"{u}r Angewandte Mathematik und Mechanik} (\textit{ZAMM}). This is the first instance of which we are aware in which the theorem was given in the form we have stated it, although again we make no claim to comprehensiveness.
Sichardt made the observation much earlier --- in 1927 --- in the context of an engineering problem having to do with the construction of wells! The calculation of this particular product of chords was directly motivated by the engineering context.
Sichardt described finding the pattern in the products empirically. He did not claim credit for the proof. He stated that an unnamed mathematician who worked for Siemens provided a proof that was lost during the war. He attributed the proof given in the note to a Prof. Szab\'{o} of the Technical University of Berlin. It is a slightly less clean version of the standard proof. It makes use of the fact that the length of a chord in the unit circle subtended by an angle $\theta$ is $2\sin(\theta/2)$, and thus the product of interest can be expressed as
\[
2^{n-1}\prod_{j=1}^{n-1} \sin \pi j/n.
\]
Each factor $\sin \pi j / n$ is expressed as $\frac{1}{2i}\left(\eta^j - \eta^{-j}\right)$, where $\eta$ is a $2n$th root of unity rather than an $n$th, so a certain amount of bookkeeping is needed to arrive at the expression in terms of the polynomial $\Pi_n(z)$. The comparative cleanness of the standard proof comes from the fact that taking absolute values allows one to forego all this bookkeeping.
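Szab\'{o}'s sine-product form of the theorem can likewise be checked numerically; the sketch below (helper name ours) also covers Usiskin's case $n=45$:

```python
import math

def sine_chord_product(n):
    """Szabo's form of the chord product: 2^(n-1) times the product
    of sin(pi*j/n) for j = 1, ..., n-1."""
    return 2 ** (n - 1) * math.prod(math.sin(math.pi * j / n)
                                    for j in range(1, n))

# the product equals n
for n in range(2, 16):
    assert math.isclose(sine_chord_product(n), n, rel_tol=1e-9)
```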
In 1972, Kurt Eisemann, in another short note in {\em ZAMM} \cite{eisemann}, extended Sichardt's result by considering the product of the lengths of perpendiculars from the center of the circle to the chords. Independently, Zalman Usiskin \cite{usiskin} in 1979 derived various identities involving products of sines using only the standard trigonometric identities $\sin 2\theta = 2\sin\theta\cos\theta$ and $\cos\theta = \sin (\pi/2-\theta)$, and noted the interpretation of these identities in terms of chord lengths. One of the identities proven by Usiskin yielded the case $n=45$ of the circle theorem, and he stated the general case without proof.
In 1987, Steven Galovich \cite{galovich} took up Usiskin's challenge to find a general principle explaining his various sine-product identities. Among other results, Galovich proved the circle theorem, using exactly the standard proof. But he introduced it with the words, ``Although the next theorem and proof are evidently well known, it is natural to include them in this note.'' \cite[p.~112]{galovich}
By this point, the circle theorem had been posed as an exercise in textbooks, for example \cite[p.~16, exercise 12]{baknewman} and \cite[p.~69, problem 44]{kreyszig}.
In 1995, Andre Mazzoleni and Samuel Shan-Pu Shen published a note \cite{mazzolenishen} in the February issue of {\em Mathematics Magazine} giving a short proof of the circle theorem via the theory of residues of a function of a complex variable. For precedents for the result, they cited exercises in several textbooks in complex analysis. Responses from readers were published in the June and October issues pointing out other precedents. Among these readers were Usiskin, and also Eisemann, who called attention to Sichardt's contribution as well as his own. One reader fit an extremely concise version of the standard proof into a letter to the editor \cite{silva}.
In 2002, Barry Lewis gave a riff \cite{lewis} on the circle theorem by deriving formulas for power sums of the chord lengths, rather than their product.
Finally we come to Price's work in the early 2000's. In addition to \cite{price1}, \cite{price2} which are the starting point of the present article, Price also produced \cite{price3}, extending Eisemann's \cite{eisemann} results to the elliptical situation. In these works, Price cited Sichardt for the circle theorem, and gave the standard proof.
\section{Further remarks}\label{sec:furtherremarks}
\subsection{The Binet formula for the generalized Fibonacci polynomials}\label{sec:binetfibonacci}
We touted our computation-free proof of the Binet formula for the generalized Lucas polynomials in section \ref{sec:newtonbinet}, but then in section \ref{sec:solution}, we {\em used} the Binet formula for the generalized Fibonacci polynomials, cited from the literature and so justified by one of the standard proofs, in our proof of Price's main result (Proposition \ref{prop:price}). This situation calls out for an independent proof of the Binet formula for generalized Fibonacci polynomials along the lines of what we have done above for Lucas polynomials. Can this be given?
It can. The proof above for the Lucas polynomials proceeds by defining (Definition \ref{def:L}) a polynomial that expresses $a^n+b^n$ in terms of $a+b$ and $ab$, whose existence and uniqueness are guaranteed by the fundamental theorem on symmetric polynomials; verifying that it is the generalized Lucas polynomial in the cases $n=0$ and $n=1$; and then specializing Newton's theorem to prove that it obeys the Fibonacci/Lucas recursion (up to a sign change) and therefore (Proposition \ref{prop:lucas}) coincides with the generalized Lucas polynomial (up to the same sign change) for all $n$. Finally it observes that the Binet formula can be interpreted as the statement that the generalized Lucas polynomial expresses $a^n+b^n$ in terms of $a+b$ and $-ab$, i.e. the statement that has just been proven. In the Fibonacci case, the Binet formula states that
\[
U_n(X,Y) = \frac{a^n-b^n}{a-b},
\]
where $a,b$ are defined by $a+b=X$, $ab=-Y$, as above. Now $(a^n-b^n)/(a-b)$ is equal to $$a^{n-1}+a^{n-2}b+\dots+ab^{n-2} + b^{n-1},$$ the {\em complete homogeneous symmetric polynomial} of degree $n-1$ in $a,b$. Like the power sum $a^n+b^n$, the fundamental theorem on symmetric polynomials guarantees a unique polynomial expressing this in terms of $a+b$ and $ab$, so we can copy the entire pattern of the above proof if we can find an analogue to Newton's theorem that tells us this polynomial (call it $F_n(X,Y)$) obeys the Fibonacci/Lucas recursion up to the sign change. (The verification that it coincides with $U_n$ if $n=0$ and $n=1$ is trivial: $(a^0-b^0)/(a-b) = 0$, so $F_0 = 0= U_0$, and $n=1$ is similar.)
In fact, there is such an analogue. For any $m$ indeterminates $a_1,\dots,a_m$, and any natural number $j$, let $h_j$ be the complete homogeneous symmetric polynomial of degree $j$, i.e. the sum of {\em every} monomial in the $a_i$'s of degree $j$. Let $\sigma_j$ be the elementary symmetric polynomial of degree $j$, as in section \ref{sec:newtonbinet}. Then these polynomials satisfy a relation nearly identical to the one between the $\sigma$'s and the power sums captured by Newton, namely
\begin{equation}\label{eq:completehom}
h_n-h_{n-1}\sigma_1+h_{n-2}\sigma_2 - \dots \pm h_1\sigma_{n-1} \mp \sigma_n = 0
\end{equation}
for all $n\geq 1$. (As in section \ref{sec:newtonbinet}, this continues to hold for $n>m$ via the convention that $\sigma_j=0$ for $j>m$.) In the two-variable case, where we have $(a^k-b^k)/(a-b) = h_{k-1}$, this specializes to
\[
\frac{a^n-b^n}{a-b} - (a+b) \frac{a^{n-1}-b^{n-1}}{a-b} + (ab)\frac{a^{n-2}-b^{n-2}}{a-b} = 0
\]
for all $n\geq 3$. In fact it even holds for $n = 2$, since $(a^0-b^0)/(a-b)=0$ makes the final term vanish. The proof now proceeds exactly as in section \ref{sec:newtonbinet} to identify $F_n(X,Y)$ with $U_n(X,-Y)$.
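This recursion is easy to sanity-check numerically. The sketch below (our own illustration; the integer sample values $a=3$, $b=2$ are arbitrary) verifies both the recursion and the identity $h_{n-1} = (a^n-b^n)/(a-b)$:

```python
# Check that h_k = (a^{k+1} - b^{k+1})/(a - b), the complete homogeneous
# symmetric polynomial of degree k in two variables, obeys the
# Fibonacci/Lucas-type recursion h_{n-1} = (a+b) h_{n-2} - ab h_{n-3}.
a, b = 3, 2   # arbitrary integer sample values, so all arithmetic is exact

def h(k):
    # complete homogeneous symmetric polynomial of degree k in a and b
    if k < 0:
        return 0
    return sum(a**i * b**(k - i) for i in range(k + 1))

for n in range(2, 10):
    assert h(n - 1) == (a + b) * h(n - 2) - a * b * h(n - 3)
    assert h(n - 1) == (a**n - b**n) // (a - b)
print("recursion verified for n = 2..9")
```

Using integers keeps the check exact, with no floating-point tolerance needed.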
The formula \eqref{eq:completehom} is easy to find in the literature (e.g. \cite[equation (7.13)]{stanley}), but for the sake of self-containedness and because it is very beautiful, we give the usual proof via formal power series. Let
\[
H(t) = \sum_{j=0}^\infty h_jt^j
\]
be the ordinary generating function for the complete homogeneous symmetric polynomials. Because $h_j$ is the sum of all monomials of a given degree $j$ in the given set of indeterminates, $H(t)$ is the sum of {\em all} monomials, each multiplied by $t^{\text{its degree}}$. Thus
\begin{align*}
H(t) &= \left(1 + a_1t + a_1^2t^2 + \dots\right) \ldots \left(1 + a_mt + a_m^2t^2 + \dots\right)\\
&= \left(\frac{1}{1-a_1t}\right)\ldots\left(\frac{1}{1-a_mt}\right).
\end{align*}
Then
\[
H(t)^{-1} = \left(1 - a_1t\right) \ldots \left(1 - a_mt\right) = \sum_{k=0}^\infty (-1)^k\sigma_k t^k,
\]
since the coefficient of a given $t^k$ is the sum of all products of $k$-sets of $-a_j$'s, by multiplying out. Thus,
\[
1 = H(t)H(t)^{-1} = \sum_{j=0}^\infty h_jt^j\sum_{k=0}^\infty (-1)^k\sigma_k t^k,
\]
and \eqref{eq:completehom} is obtained from this formula by extracting the coefficient of $t^n$ on both sides and comparing.
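The coefficient extraction can likewise be checked numerically. The following sketch (our own illustration; the three sample values are arbitrary) verifies that $\sum_{k=0}^{n}(-1)^k h_{n-k}\sigma_k = 0$ for $n\geq 1$:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

vals = (2, 3, 5)   # arbitrary sample values for a_1, a_2, a_3

def h(j):
    # complete homogeneous symmetric polynomial: sum of all degree-j monomials
    return sum(prod(c) for c in combinations_with_replacement(vals, j))

def sigma(k):
    # elementary symmetric polynomial; sigma_k = 0 for k > m automatically,
    # since combinations(vals, k) is then empty
    return sum(prod(c) for c in combinations(vals, k))

for n in range(1, 9):
    assert sum((-1) ** k * h(n - k) * sigma(k) for k in range(n + 1)) == 0
print("identity verified for n = 1..8")
```

Note that $h_0 = \sigma_0 = 1$ falls out for free, since the degree-zero combination is the empty product.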
\subsection{Relation between generalized Fibonacci and Lucas polynomials}\label{sec:fiblucas}
In the proof of Price's formula, Proposition \ref{prop:price}, we made use of the following identity relating generalized Fibonacci and Lucas polynomials, for which we cited a paper \cite{swamy} by Swamy:
\[
\frac{d}{dX} V_n(X,Y) = nU_n(X,Y).
\]
In the interest of keeping this paper self-contained, we give a proof. It resembles a calculation in \cite[p.~152]{price2}. (It is different from the proof given by Swamy, which is based on generating functions.) Set $a+b=X$ and $ab=-Y$ as usual. Treating $Y$ as a constant, $b$ and therefore $X$ and $V_n(X,Y)$ become rational functions of $a$. We have $b=-Ya^{-1}$, so that $db/da = Ya^{-2} = -ba^{-1}$. Thus, by the Binet formulas for $V_n$ and $U_n$ and the chain rule, we have
\begin{align*}
\frac{d}{dX} V_n(X,Y) &= \frac{dV_n(X,Y)/da}{dX/da}\\
&= \frac{d(a^n + b^n)/da}{d(a+b)/da}\\
&= \frac{na^{n-1} +nb^{n-1}(-ba^{-1})}{1-ba^{-1}}\\
&= n\frac{a^n - b^n}{a-b}\\
&= nU_n(X,Y).
\end{align*}
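As a numerical sanity check of this identity (our own illustration; the sample point $X=Y=1$, where $a,b$ are the golden-ratio pair, is arbitrary), one can compare a central difference quotient of $V_n$ with $nU_n$, both computed via the Binet formulas:

```python
import math

def ab(X, Y):
    # solve a+b = X, ab = -Y, so a and b are roots of t^2 - X t - Y = 0
    d = math.sqrt(X * X + 4 * Y)
    return (X + d) / 2, (X - d) / 2

def V(n, X, Y):            # Binet formula for the generalized Lucas polynomial
    a, b = ab(X, Y)
    return a**n + b**n

def U(n, X, Y):            # Binet formula for the generalized Fibonacci polynomial
    a, b = ab(X, Y)
    return (a**n - b**n) / (a - b)

X, Y, eps = 1.0, 1.0, 1e-6
for n in range(1, 8):
    dV = (V(n, X + eps, Y) - V(n, X - eps, Y)) / (2 * eps)   # d/dX V_n
    assert abs(dV - n * U(n, X, Y)) < 1e-4
print("d/dX V_n = n U_n verified numerically for n = 1..7")
```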
\subsection{Relation with Price's work}\label{sec:price}
A comparison of the method above with \cite{price1} and \cite{price2} has a through-the-looking-glass quality. There are many contact points, but the overall effect is completely different. The direct calculations in \cite{price1} and \cite{price2} are here contextualized as special cases of classical theorems and other known results. What follows is our best attempt to tease out the relationship in more detail.
Price \cite{price1} worked with a polynomial $P_n(z)$ (called $P_n(z;a,b)$ in \cite{price2}) that is equal to our $L_n(z,ab)$, but {\em defined} it to be the polynomial such that $P_n(z) - (a^n+b^n)$ has the roots \eqref{eq:roots}. Our additions are the direct proof, using the invariance of $ab$ and $a^n+b^n$ under $a\mapsto \zeta a, b\mapsto \zeta^{-1}b$, that the polynomial characterized by Definition \ref{def:L} is this same polynomial, and the related observation that it generalizes Cardano's reduced cubic.
Price proved the recurrence relation $P_n(z) = zP_{n-1}(z) - ab P_{n-2}(z)$ in \cite{price1}. This is our \eqref{eq:lucasrec}, after the substitution $Y\mapsto ab$. Price's proof amounts to verifying a version of \eqref{eq:newton2} with direct calculation. In \cite{price2}, he observed that this makes $P_n(z)$ a generalized Lucas polynomial, and used this to prove the Binet formulas for Lucas and Fibonacci numbers. Our additions are the observation that \eqref{eq:newton2} is immediate from Newton's theorem, that the analogous formula for Fibonacci polynomials is immediate from a standard fact about complete homogeneous symmetric polynomials (section \ref{sec:binetfibonacci}), and the interpretation of the Binet formulas as the statements that the Lucas and Fibonacci polynomials are the polynomials that express, respectively, power sums and complete homogeneous symmetric polynomials, in terms of elementary symmetric polynomials.
Price's \cite{price1} proof of proposition \ref{prop:price} proceeds by first proving with an induction based on the same calculation mentioned in the previous paragraph that, for any $\theta\in [0,2\pi)$,
\begin{equation}\label{eq:continuous}
P_n(ae^{i\theta} + b e^{-i\theta}) = a^ne^{in\theta} + b^ne^{-in\theta},
\end{equation} and then deriving Proposition \ref{prop:price} from a computation with the continuous variable $\theta$ involving L'Hopital's rule. The L'Hopital calculation is repeated in \cite[p.~152]{price2}. We do not need \eqref{eq:continuous} for our proof of Proposition \ref{prop:price}, but it is immediate from our work by substituting $a\mapsto ae^{i\theta}$ and $b\mapsto be^{-i\theta}$ in $L_n(a+b,ab) = a^n + b^n$. We instead derived Proposition \ref{prop:price} from the Binet formulas and the known identity $\frac{d}{dX}V_n(X,Y) = nU_n(X,Y)$. In the name of keeping this paper self-contained, we included a proof of this identity, which is structurally similar to Price's L'Hopital calculation.
Another addition is our observation (section \ref{sec:solnbyradicals}) that Cardano's method allows us to express the roots of $\Omega_n(z)$ by radicals and thereby obtain the known formula for the roots of the Lucas polynomial.
Proposition \ref{prop:price}, and the other results given here, do not exhaust the results found in \cite{price1} and \cite{price2}. In \cite{price1}, Price also considered the products of chord lengths that arise from rotating the roots of unity by a fixed angle along the unit circle prior to scaling. In \cite{price2}, he used the interpretation of Fibonacci and Lucas numbers in terms of products of elliptical chord lengths to recover identities and divisibility properties of these numbers, such as $F_{2n} = F_nL_n$.
\section{Acknowledgement}
Several individuals were involved in calling the authors' attention to Price's work: Francis Su, who included the special case of Table \ref{tbl:ellipseproducts} in his Harvey Mudd Math Fun Facts (\cite{su}); Bowen Kerins and Darryl Yong, who then included the problem of computing this product in a problem set during a summer course at the Park City Mathematics Institute; and Sam Shah, who posted the problem on his blog, which is where the authors first encountered it. The authors wish to thank Kerins in particular, who was a generous correspondent, and also Thomas Price himself, Tom Edgar, and Harold Edwards, for useful comments and for alerting us to \cite{price3}, \cite{galovich}, and \cite{kummerw} respectively.
% arXiv:1810.00492 --- ``Chords of an ellipse, Lucas polynomials, and cubic equations'' (math.HO; math.NT)
% arXiv:2202.11420 --- ``Sub-optimality of Gauss--Hermite quadrature and optimality of the trapezoidal rule for functions with finite smoothness''
\begin{abstract}
The sub-optimality of Gauss--Hermite quadrature and the optimality of the trapezoidal rule are proved in the weighted Sobolev spaces of square integrable functions of order $\alpha$, where the optimality is in the sense of worst-case error. For Gauss--Hermite quadrature, we obtain matching lower and upper bounds, which turn out to be merely of the order $n^{-\alpha/2}$ with $n$ function evaluations, although the optimal rate for the best possible linear quadrature is known to be $n^{-\alpha}$. Our proof of the lower bound exploits the structure of the Gauss--Hermite nodes; the bound is independent of the quadrature weights, and changing the Gauss--Hermite weights cannot improve the rate $n^{-\alpha/2}$. In contrast, we show that a suitably truncated trapezoidal rule achieves the optimal rate up to a logarithmic factor.
\end{abstract}
\section{Introduction} \label{sec:intro}
This paper is concerned with the sub-optimality of Gauss--Hermite quadrature and the optimality of the trapezoidal rule.
Given a function $f\colon \mathbb{R}\to \mathbb{R}$, Gauss--Hermite quadrature is
one of the standard numerical integration methods to compute the integral
\begin{equation}
I(f):=\int_\mathbb{R}f(x)\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-x^2/2}\mathrm{d}x.
\label{eq:def_I(f)}
\end{equation}
It is a Gauss-type quadrature formula, i.e., the quadrature points are the zeros of the degree $n$ orthogonal polynomial associated with the weight function, and the corresponding quadrature weights are readily defined. With the weight function $\rho(x):=\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-x^2/2}$, the orthogonal polynomial we have is the (so-called probabilist's) Hermite polynomial.
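For concreteness, here is a minimal illustration (our own, not taken from the references) of the three-point probabilist's rule: the nodes are the zeros of $H_3(x)\propto x^3-3x$, i.e.\ $0$ and $\pm\sqrt{3}$, and the standard weights $1/6, 2/3, 1/6$ make the rule exact for polynomials up to degree $2n-1=5$:

```python
import math

# three-point probabilist's Gauss--Hermite rule for the weight
# rho(x) = exp(-x^2/2)/sqrt(2*pi): nodes are the zeros of H_3(x) ~ x^3 - 3x
nodes = (-math.sqrt(3.0), 0.0, math.sqrt(3.0))
weights = (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0)

def quad(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

# Gaussian moments E[X^m]: (m-1)!! for even m, 0 for odd m
moments = {0: 1.0, 1: 0.0, 2: 1.0, 3: 0.0, 4: 3.0, 5: 0.0}
for m, exact in moments.items():
    assert abs(quad(lambda x: x**m) - exact) < 1e-12  # exact up to degree 5
assert abs(quad(lambda x: x**6) - 15.0) > 1.0         # but not at degree 6
print("three-point rule integrates polynomials up to degree 5 exactly")
```

The failure at degree $6$ illustrates the general exactness degree $2n-1$ of an $n$-point Gauss rule.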
Gauss--Hermite quadrature is widely used;
here, we just mention
spectral methods \cite{Guo.BY_1999_Hermite,mao2017hermite,Canuto.C_etal_2006_book}, and applications in
aerospace engineering \cite{braun2021satellite,BBM2020book},
finance \cite{Brandimarte2006book,FR2008book,GH2010}, and
physics \cite{SS2016book,Gezerlis2020book}.
Nevertheless, the limitation of this method seems to be less known.
We start with numerical results that illustrate this deficiency.
Figure~\ref{fig:GH-sub} shows a comparison of the Gauss--Hermite rule and a suitably truncated trapezoidal rule on $\mathbb{R}$.
Here the target function in \eqref{eq:def_I(f)} is $f(x)=|x|^p$ with $p\in \{1,3,5\}$.
For the trapezoidal rule, we integrate $f(x)\rho(x)$ with a suitable cut-off of the domain.
We discuss the setting of this experiment in more detail at the end of Section~\ref{sec:trapez}.
\begin{figure}[H]
\centering
\includegraphics[scale=1.05]{graph-fitted-tri.pdf}
\vspace*{-7mm}
\caption{Absolute integration errors for $f(x)=|x|^p$ with $p=1$ \textup{(}left{)}, $p=3$ \textup{(}centre{)}, and $p=5$ \textup{(}right{)}.
The Gauss--Hermite rule exhibits a slower error decay than the trapezoidal rule does.}
\label{fig:GH-sub}
\end{figure}
What we observe in Figure~\ref{fig:GH-sub} is that, while the trapezoidal rule achieves a rate of around $\mathcal{O}(n^{-p-0.8})$, Gauss--Hermite quadrature achieves only a slower rate of almost $\mathcal{O}(n^{-p/2-0.5})$, where $n$ is the number of quadrature points.
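The truncated trapezoidal rule used in such a comparison can be sketched as follows (a minimal illustration of ours; the cut-off $T=8$ and the grid size are illustrative choices, not the exact settings of the experiment). For $f(x)=|x|$ the exact value of \eqref{eq:def_I(f)} is $\mathbb{E}|X|=\sqrt{2/\pi}$:

```python
import math

def trapezoid(f, T, n):
    # composite trapezoidal rule for int f(x) rho(x) dx, truncated to [-T, T]
    rho = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    h = 2.0 * T / (n - 1)
    total = 0.0
    for i in range(n):
        x = -T + i * h
        w = 0.5 if i in (0, n - 1) else 1.0   # half-weights at the endpoints
        total += w * f(x) * rho(x)
    return h * total

exact = math.sqrt(2.0 / math.pi)   # I(|x|) = E|X| for X ~ N(0,1)
err = abs(trapezoid(abs, T=8.0, n=201) - exact)
print("trapezoidal error:", err)
assert err < 1e-2
```

The Gaussian tail beyond $T=8$ contributes only on the order of machine precision, so the error is dominated by the grid resolution near the kink of $|x|$ at the origin.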
A similar empirical inefficiency of Gauss--Hermite quadrature is reported in a paper by one of the present authors and Nuyens~\cite{NS2021}. In a recent paper, Trefethen~\cite{T2021} argued that Gauss--Hermite quadrature converges more slowly than the truncated trapezoidal rule as $n\to\infty$, because its quadrature points are unnecessarily spread out, so that in effect not enough quadrature points are utilised.
In this paper, we
\emph{prove} a sub-optimality of Gauss--Hermite quadrature.
More precisely, we establish a sharp lower bound for the error decay of Gauss--Hermite quadrature in the sense of worst case error. Moreover, we show that a suitably truncated trapezoidal rule achieves the optimal rate of convergence, up to a logarithmic factor.
The integrands of our interest are functions with finite smoothness.
In this regard, we will work under the assumption that the function $f$ lives in the weighted $L^2$-Sobolev space with smoothness $\alpha\in\mathbb{N}$.
This function space is widely used; see for example the books \cite{B1998,Canuto.C_etal_2006_book,Shen.J_2011_book} and references therein.
For $\alpha\in\mathbb{N}$, this space is equivalent to the so-called Hermite space of finite smoothness, which has been attracting attention in high-dimensional computations; see \cite{IG2015_Hermite,DILP2018,gnewuch2021countable} for its use in high-dimensional computations, and see \cite[1.5.4. Proposition]{B1998} together with Section~\ref{sec:space} below for the equivalence.
In this setting, we prove that the worst case integration error of Gauss--Hermite quadrature is bounded from below by $n^{-\alpha/2}$ up to a constant. This rate matches the upper bound shown by Mastroianni and Monegato \cite{MM1994}, and thus cannot be improved.
Moreover, this rate provides a rigorous verification of the numerical findings \cite[Section 4]{DILP2018}, where the authors computed approximate values of the worst-case error for Gauss--Hermite quadrature in the Hermite space of finite smoothness, and observed the rate $\mathcal{O}(n^{-\alpha/2})$ for $\alpha=1,2,$ and $3$.
In the proof, we exploit the structure of the Gauss--Hermite quadrature \textit{points}. The argument is independent of the quadrature weights, and thus tuning them does not change the result.
The proof in particular indicates that, if the spacing of a node set decreases asymptotically no faster than $1/\sqrt{n}$, then the corresponding quadrature rule cannot achieve a worst-case error better than $\mathcal{O}(n^{-\alpha/2})$; see the proof of Lemma~\ref{lem:lower_bound_general} together with Theorem~\ref{thm:GH-LB}.
It turns out that this rate is merely half of the best possible:
if we allow $n$ quadrature-points and weights to be arbitrary, then the best achievable using (linear) quadrature is of the rate $\mathcal{O}(n^{-\alpha})$; see \cite[Theorem 1]{DILP2018} for a precise statement.
Dick et al.~\cite{DILP2018} also show that a class of numerical integration methods based on so-called (scaled) higher order digital nets achieve the optimal rate $\mathcal{O}(n^{-\alpha})$ up to a logarithmic factor in the multi-dimensional setting, including the one-dimensional setting as a special case.
Our results show that the trapezoidal rule, a method arguably significantly simpler than one-dimensional higher order digital nets, also achieves this optimal rate up to a logarithmic factor, and thus its error-decay rate is nearly \textit{twice} that of the Gauss--Hermite rule.
It is also worth mentioning that Gauss--Hermite quadrature requires a nontrivial algorithm to generate quadrature points, whereas for the trapezoidal rule we simply have equispaced points.
For analytic functions, the efficiency of the trapezoidal rule is well known;
related studies date back at least to the paper by Goodwin in 1949 \cite{Goodwin1949}, and this accuracy is not only widely known, but also still actively studied by contemporary numerical analysts \cite{S1997,MS2001,GilEtAl2007,Waldvogel2011,TW2014,T2021}.
Our results show that this efficiency extends to Sobolev class functions, where we do not have tools from complex analysis such as contour integrals.
Our proof uses the strategy recently developed by one of the present authors and Nuyens~\cite{NS2021} for a class of quasi-Monte Carlo methods.
We now mention other error estimates for Gauss--Hermite quadrature in the literature.
Based on results by Freud~\cite{Freud1972}, Smith, Sloan, and Opie~\cite{SSO1983} showed an upper bound $\mathcal{O}(n^{-\alpha/2})$ for $\alpha$-times continuously differentiable functions whose $\alpha$-th derivative satisfies a suitable growth condition for $\alpha\in\mathbb{N}$.
Since the weighted Sobolev space seems to be more frequently used, our focus is on this class.
Della Vecchia and Mastroianni~\cite{DM2003} showed an upper bound $\mathcal{O}(n^{-1/6})$ for $\rho$-integrable a.e.~differentiable functions whose derivative is also $\rho$-integrable.
Moreover, their result implies a matching lower bound in the sense of worst-case error, and thus in this sense their upper bound is sharp.
It does not seem to be trivial to determine whether their bounds generalise, for example to the order $n^{-\alpha/6}$ or to $n^{-\alpha/2+1/3}$, with the $\alpha$-times differentiability of the integrand for $\alpha\ge2$.
In contrast, our results show that, by assuming the square $\rho$-integrability, the rate improves to $\mathcal{O}(n^{-\alpha/2})$, for general $\alpha\in\mathbb{N}$.
For analytic functions,
the rate $\exp(-C\sqrt{n})$ with a constant $C>0$ has been mentioned in the literature for functions with a \textit{suitable decay}
that are analytic in a strip region.
Barrett~\cite{Barrett.W_1961} seems to be the first to have presented this rate; see also Davis and Rabinowitz \cite[Equation~(4.6.1.18)]{DR1984}.
Note, however, that no explicit proof or statement is given in either of these references; in particular, the decay condition for which this rate holds is not given.
We are not aware of any reference that gives an explicit statement with complete assumptions.
On the other hand, for the trapezoidal rule, Sugihara~\cite{S1997} conducted extensive research on the integration error for functions analytic in a strip region with various decay conditions.
In particular, for functions decaying at the rate $\exp(-\tilde{C}|x|^\rho)$ ($\rho\geq 1$) on the real axis, under other suitable assumptions he established the rate $\exp(-{C}^*n^{\rho/(\rho+1)})$, with an explicit constant ${C}^*>0$; see \cite[Theorem 3.1]{S1997} for a precise statement.
Hence, whatever decay condition the Gauss--Hermite rule may require to attain the rate $\exp(-C\sqrt{n})$, for $\rho\geq 1$ the trapezoidal rule attains a faster rate for the function class considered in \cite{S1997}.
Note that lower bounds for the integration error are also presented in \cite{S1997}, which shows that the trapezoidal rule is near-optimal in the function class considered there.
Trefethen \cite{T2021} makes the comparison of the trapezoidal rule and Gauss--Hermite quadrature explicit.
In \cite[Theorem 5.1]{T2021}, for the trapezoidal rule and various other quadrature rules he established the rate
$\exp(-C{n}^{2/3})$ for a class of functions that are analytic in a strip region and decay at the rate $\exp(-x^2)$ on the real axis.
He also presents a numerical result for integrating $\cos(x^3)\exp(-x^2)$ with the physicists' Gauss--Hermite rule, which supports the rate $\exp(-C\sqrt{n})$ for the Gauss--Hermite rule.
Before moving on, we mention that our results motivate further studies of Gauss--Hermite based algorithms in high-dimensional problems.
Integration problems in high dimensions arise, for example in computing statistics of solutions of partial differential equations parametrised by random variables.
In particular, integration with respect to the Gaussian measure has been attracting increasing attention; see for example \cite{Graham.I_eta_2015_Numerische,C2018,YK2019,HS2019ML,D2021,dung2022analyticity}.
The measure being Gaussian, algorithms that use Gauss--Hermite quadrature as a building block have gained popularity \cite{C2018,D2021,dung2022analyticity}.
The key to proving error estimates in this context is the regularity of the quantity of interest with respect to the parameter, and for elliptic and parabolic problems such smoothness, even an analytic regularity, has been shown \cite{BNT2010SIREV,NT2009parabolic,dung2022analyticity}.
In contrast, the solutions of parametric hyperbolic systems suffer from limited regularity under mild assumptions \cite{MNT2013_StochasticCollocationMethod,MNT2015_AnalysisComputationElasticWave}.
Hence, our results, which show the sub-optimality of Gauss--Hermite rule for functions with finite smoothness, caution us and encourage further studies of the use of algorithms based on Gauss--Hermite rule for this class of problems.
Finally, we note that if we have the weight function
$\mathrm{e}^{-x^2}$ instead of $\mathrm{e}^{-x^2/2}$,
the corresponding orthogonal polynomials are called physicist's Hermite polynomials. Our results for Gauss--Hermite quadrature can be obtained for these polynomials by simply rescaling our results by $x\mapsto \sqrt{2}x.$
Likewise, results for physicist's Hermite polynomials in the literature, e.g.~in \cite{Szego1975}, are used throughout this paper.
The rest of this paper is organized as follows. In Section~\ref{sec:space} we introduce necessary definitions such as Hermite polynomials, the weighted Sobolev spaces, and the Hermite spaces.
We also discuss the norm equivalence between the weighted Sobolev space and the Hermite space.
In Section~\ref{sec:GH-bounds} the sub-optimality of Gauss--Hermite quadrature is shown. In particular, we obtain matching lower and upper bounds for the worst-case error.
In Section~\ref{sec:trapez} the optimality of the trapezoidal rule is shown.
Section~\ref{sec:conc} concludes this paper.
\section{Function spaces with finite smoothness} \label{sec:space}
Throughout this paper, we use the weighted space $L^2_\rho:=L^2_\rho(\mathbb{R})$, the normed space consisting of the equivalence classes of Lebesgue measurable functions $f\colon\mathbb{R}\to\mathbb{R}$ satisfying $\|f\|_{L^2_\rho}^2:=\int_{\mathbb{R}}|f(x)|^2\rho(x)\mathrm{d}x<\infty$, where the equivalence relation is given by $f\sim g$ if and only if $\|f-g\|_{L^2_\rho}=0$.
\subsection{Hermite polynomials}
For $k\in\mathbb{N}\cup\{0\}$, the $k$-th degree probabilist's Hermite polynomial is given by
\begin{align}\label{eq:Hermite-def}
H_{k}(x)
=\frac{(-1)^{k}} {\sqrt{k!}}
\mathrm{e}^{x^{2}/2}
\frac{\mathrm{d}^{k}}{\mathrm{d} x^{k}}
\mathrm{e}^{-x^{2}/2},\quad x\in\mathbb{R},
\end{align}
where they are normalised so that $\|H_k\|_{L^2_\rho}=1$ for all $k\in\mathbb{N}\cup\{0\}$.
The polynomials $(H_k)_{k\geq 0}$ form a complete orthonormal system for $L^2_\rho$.
The following properties are used throughout the paper.
\begin{align}\label{eq:Hermite-deriv}
H_k'(x)=\sqrt{k}H_{k-1}(x),\; k\ge1;
\end{align}
\begin{align}\label{eq:Hermite-rho-deriv}
\frac{\mathrm{d}^{\tau}}{\mathrm{d} x^{\tau}}\left(H_k(x)\rho(x)\right)
&=
\frac{(-1)^{k}} {\sqrt{2\pi k!}}
\mathrm{e}^{x^{2}/2}
\frac{\mathrm{d}^{k+\tau}}{\mathrm{d} x^{k+\tau}}
\mathrm{e}^{-x^{2}/2} \nonumber
\\
&=
(-1)^\tau\sqrt{\frac{(k+\tau)!}{k!}}H_{k+\tau}(x)\rho(x),\; k\ge0,\; \tau\ge0.
\end{align}
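These properties can be sanity-checked with a small computation. The sketch below (our own illustration) builds the normalised polynomials via the standard three-term recurrence $\sqrt{k+1}\,H_{k+1}(x) = xH_k(x) - \sqrt{k}\,H_{k-1}(x)$ (an assumption of this sketch, stated here without proof) and verifies \eqref{eq:Hermite-deriv} coefficient-wise:

```python
import math

def hermite(k):
    # coefficient list (ascending powers) of the normalised probabilist's
    # Hermite polynomial H_k, built from the assumed three-term recurrence
    # sqrt(j+1) H_{j+1}(x) = x H_j(x) - sqrt(j) H_{j-1}(x)
    H = [[1.0], [0.0, 1.0]]
    for j in range(1, k):
        shifted = [0.0] + H[j]              # coefficients of x * H_j
        prev = H[j - 1] + [0.0, 0.0]        # pad H_{j-1} to the same length
        H.append([(shifted[i] - math.sqrt(j) * prev[i]) / math.sqrt(j + 1)
                  for i in range(len(shifted))])
    return H[k]

def deriv(c):
    # coefficients of the derivative of a polynomial given by c
    return [i * c[i] for i in range(1, len(c))]

# check H_k' = sqrt(k) H_{k-1} for small k
for k in range(1, 6):
    d = deriv(hermite(k))
    target = [math.sqrt(k) * c for c in hermite(k - 1)]
    assert all(abs(d[i] - target[i]) < 1e-9 for i in range(len(d)))
print("H_k' = sqrt(k) H_{k-1} verified for k = 1..5")
```

For instance the recurrence gives $H_2(x) = (x^2-1)/\sqrt{2}$, whose derivative $\sqrt{2}\,x$ is indeed $\sqrt{2}\,H_1(x)$.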
\subsection{Weighted Sobolev space}
The function space we consider is the Sobolev space of square integrable functions, the integrability condition of which is imposed by the Gaussian measure.
\begin{Definition}[Weighted Sobolev space]\label{def:Sobolev}
For $\alpha\in\mathbb{N}$, the weighted Sobolev space $\mathscr{H}_{\alpha}$ (with the weight function $\rho$) is the class of all functions $f\in L^2_\rho$ such that $f$ has weak derivatives satisfying $f^{(\tau)}\in L^2_\rho$ for $\tau=1,\dots,\alpha$:
\begin{align*}
\mathscr{H}_{\alpha}:=
\Biggl\{
f \in L^2_\rho \;\bigg|\; \|f\|_\alpha := \biggl(\sum_{\tau=0}^\alpha \|f^{(\tau)}\|^2_{L^2_\rho}\biggr)^{1/2} < \infty
\Biggr\}.
\end{align*}
\end{Definition}
Elements in $\mathscr{H}_{\alpha}$ for $\alpha\in \mathbb{N}$ are in the standard local Sobolev space $W^{1,2}_{\mathrm{loc}}(\mathbb{R})$, and thus admit a continuous representative.
In what follows, we always take the continuous representative of $f\in \mathscr{H}_{\alpha}$.
We recall another important class of functions, the so-called Hermite space.
For this space we follow the definition of \cite{DILP2018}.
\begin{Definition}[Hermite space with finite smoothness]\label{def:Hermite}
For $\alpha\in\mathbb{N}$, the Hermite space with finite smoothness $\mathcal{H}^{\mathrm{Hermite}}_{\alpha}$ is given by
\begin{align*}
\mathcal{H}^{\mathrm{Hermite}}_{\alpha}:=
\Bigg\{
f \in L^2_\rho \;\bigg|\; \|f\|_{\mathcal{H}^{\mathrm{Hermite}}_{\alpha}} :=
\biggl(
\sum_{k=0}^\infty r_{\alpha}(k)^{-1}\bigl|\widehat{f}(k)\bigr|^2
\biggr)^{1/2} < \infty
\Bigg\},
\end{align*}
where $\widehat{f}(k)=(f,H_k)_{L^2_{\rho}}:=\int_\bbR f(x)H_k(x) \rho(x)\mathrm{d} x$, and
\[
r_\alpha(k):=\begin{cases}1, &\mathrm{if }\; k=0,\\ \bigl(\sum^\alpha_{\tau=0}\beta_\tau(k)\bigr)^{-1}, &\mathrm{if }\;k\ge1, \end{cases}
\;\mathrm{and}\;
\beta_\tau(k):=\begin{cases} \frac{k!}{(k-\tau)!}, &\mathrm{if }\;k\ge\tau,\\ 0, &\mathrm{otherwise.} \end{cases}
\]
\end{Definition}
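For $f = H_k$, repeated application of \eqref{eq:Hermite-deriv} gives $\|H_k^{(\tau)}\|_{L^2_\rho}^2 = k!/(k-\tau)! = \beta_\tau(k)$, so that $\|H_k\|_\alpha^2 = \sum_{\tau=0}^\alpha \beta_\tau(k) = r_\alpha(k)^{-1}$; the weights $r_\alpha(k)$ thus record the Sobolev norms of the basis functions. A small consistency check (our own illustration):

```python
import math

def beta(tau, k):
    # beta_tau(k) = k!/(k-tau)! for k >= tau, and 0 otherwise
    return math.factorial(k) // math.factorial(k - tau) if k >= tau else 0

def r_inv(alpha, k):
    # r_alpha(k)^{-1} as in the definition of the Hermite space
    return 1 if k == 0 else sum(beta(t, k) for t in range(alpha + 1))

# for alpha = 3, k = 5: beta_0 + beta_1 + beta_2 + beta_3 = 1 + 5 + 20 + 60
assert r_inv(3, 5) == 1 + 5 + 20 + 60
print([r_inv(2, k) for k in range(6)])
```

Note that $r_\alpha(k)^{-1}$ grows like $k^\alpha$, which is what links the decay of the Hermite coefficients to the smoothness $\alpha$.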
It turns out that $\mathscr{H}_{\alpha}=\mathcal{H}^{\mathrm{Hermite}}_{\alpha}$, where the equality here means the norm equivalence.
Hence, results established for the Hermite space $\mathcal{H}^{\mathrm{Hermite}}_{\alpha}$ can be readily translated to $\mathscr{H}_{\alpha}$ up to a constant, which allows us to compare the results on higher order digital nets in \cite{DILP2018} with ours.
A proof of this equivalence of the norm is outlined in \cite[1.5.4.~Proposition]{B1998}.
A more detailed proof of one direction of the equivalence, $f\in\mathcal{H}^{\mathrm{Hermite}}_{\alpha}$ implying $f\in\mathscr{H}_{\alpha}$ with the same $\alpha\in\mathbb{N}$, is given in \cite[Lemma~6]{DILP2018}.
Here, for completeness we prove its converse.
\begin{lemma}
Let $f\in\mathscr{H}_{\alpha}$ with $\alpha\in\mathbb{N}$. Then $f\in\mathcal{H}^{\mathrm{Hermite}}_{\alpha}$ with the same smoothness parameter $\alpha$.
\end{lemma}
\begin{proof}
We first prove the claim for $\alpha=1$. Assume $f\in\mathscr{H}_{1}$. Let
$F(x):=f(x)\; (\rho (x))^{1/2+\varepsilon}$ and $ G(x):=H_k(x)(\rho (x))^{1/2-\varepsilon}$ for $0<\varepsilon<1/2$.
We have
\[\int_{\bbR}F'(x) \phi(x) \mathrm{d} x
=
- \int_{\bbR}F(x) \phi'(x) \mathrm{d} x,
\]
for any function $\phi$ in the space of compactly supported infinitely differentiable functions $C^\infty _{\mathrm{c}} (\bbR)$.
Since $G$ is in the standard Sobolev space $W^{1,2} (\bbR)$,
there exists a sequence $\{\phi_N\}_{N\in\mathbb{N}}\subset C^\infty _{\mathrm{c}} (\bbR)$ that satisfies
\[
\|G-\phi_N\|^2_{L^2(\bbR)} + \|G'-\phi'_N\|^2_{L^2(\bbR)} = \|G-\phi_N\|^2_{W^{1,2}(\bbR)} \le 1/N.
\]
Then, for any $g\in L^2(\mathbb{R})$, the Cauchy--Schwarz inequality implies
$\int_{\bbR} |F(x)g(x)| \mathrm{d} x
\le
\|f\|_{L^2_\rho}\|g\|_{L^2(\bbR)}<\infty$,
and for
$F'(x)=f'(x) (\rho (x))^{1/2+\varepsilon}-(\varepsilon+1/2)xf(x)(\rho (x))^{1/2+\varepsilon}$ we also have
\begin{align*}
\int_{\bbR} |F'(x)g(x)| \mathrm{d} x
&\le
\left(\int_{\bbR} | f'(x) |^2 \mathrm{e}^{-x^2/2} \mathrm{d} x \right)^{1/2}\;
\left(\int_{\bbR} |g(x)|^2 \mathrm{d} x \right)^{1/2}
\\
&\kern-1.5cm +
\sup_{t\in\mathbb{R}}\bigl|(\varepsilon+1/2)\,t\,\mathrm{e}^{-\varepsilon t^2/2}\bigr|
\left(\int_{\bbR} | f(x) |^2 \mathrm{e}^{-x^2/2} \mathrm{d} x \right)^{1/2}\;
\left(\int_{\bbR} |g(x)|^2 \mathrm{d} x \right)^{1/2}
<\infty.
\end{align*}
Therefore both $\langle F,\cdot\rangle:=\int_{\bbR}F(x)\,\cdot\;\mathrm{d} x$ and $\langle F',\cdot \rangle:=\int_{\bbR}F'(x)\,\cdot\;\mathrm{d} x$ define continuous functionals on $L^2(\bbR)$.
Hence, we have $\int_{\bbR}F'(x)G(x)\mathrm{d} x=-\int_{\bbR}F(x)G'(x)\mathrm{d} x$ and thus
\begin{align*}
&\int_{\bbR} \Bigl[f'(x) H_k(x) \rho(x) -(\varepsilon+1/2)x H_k (x)f(x)\rho (x)\Bigr] \mathrm{d} x
=\int_{\bbR} F'(x)G(x) \mathrm{d} x\\
&=-\int_{\bbR} F(x)G'(x) \mathrm{d} x=-
\biggl(\int_{\bbR}\Bigl[f(x) \sqrt{k}H_{k-1}(x) \rho(x) -(1/2-\varepsilon)x H_k (x)f(x)\rho (x)\Bigr] \mathrm{d} x
\biggr),
\end{align*}
which is equivalent to
\begin{align}
\int_{\bbR} f'(x)& H_k(x) \rho(x)\,\mathrm{d} x=-\int_{\bbR} \Bigl[f(x) \sqrt{k}H_{k-1}(x) \rho(x)-xH_k (x)f(x)\rho (x)\Bigr]\mathrm{d} x
\label{eq:<f',H>-identity}
\\
&=
-\int_{\bbR} f(x) (H_k (x)\rho (x))' \mathrm{d} x=
\int_{\bbR} f(x) \sqrt{k+1}H_{k+1} (x)\rho (x) \mathrm{d} x
,\nonumber
\end{align}
where we used $(H_k (x)\rho (x))'=-\sqrt{k+1}H_{k+1} (x)\rho (x)$. Hence we obtain
\begin{align*}
(f',H_k)_{L^2_\rho} ^2 =(k+1)\;(f,H_{k+1})_{L^2_\rho} ^2,
\end{align*}
and
\[
\sum_{k=0} ^\infty (k+1)\;(f,H_{k+1})_{L^2_\rho} ^2=\sum_{k=0} ^\infty (f',H_k)_{L^2_\rho}^2=\|f'\|_{L^2_\rho} ^2 <\infty.
\]
This implies $f\in\mathcal{H}^{\mathrm{Hermite}}_{1}$ since $r_1(k)^{-1}=k+1$.
For general $\tau=1,\dots,\alpha$, assume $f\in\mathscr{H}_\tau$. Repeating the same argument as above yields
$(f^{(\tau)},H_{k})_{L^2_\rho}^2=(f,H_{k+\tau})_{L^2_\rho}^2\prod_{j=1}^{\tau}(k+j)$ for $k\geq0$, and thus
\begin{align*}
\|f^{(\tau)}\|_{L^2_\rho}^2
&=
\sum_{k=0}^{\infty}
(f,H_{k+\tau})_{L^2_\rho}^2\prod_{j=1}^{\tau}(k+j)
\geq\frac{1}{\tau^{\tau}}\sum_{k=0}^{\infty}(f,H_{k+\tau})_{L_{\rho}^{2}}^{2}(k+\tau)^{\tau}\\
&
=\frac{1}{\tau^{\tau}}\sum_{k=\tau}^{\infty}k^{\tau}(f,H_{k})_{L_{\rho}^{2}}^{2}
.
\end{align*}
Hence, observing $\lim_{k\to\infty }r_\tau(k)\,k^{\tau}=1$ (see also \cite[p.~687]{DILP2018}), we conclude $f\in \mathcal{H}^{\mathrm{Hermite}}_{\tau}$.
\end{proof}
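The key identity $(f',H_k)_{L^2_\rho}=\sqrt{k+1}\,(f,H_{k+1})_{L^2_\rho}$ behind \eqref{eq:<f',H>-identity} is easy to sanity-check numerically. The sketch below (the helper names \texttt{inner} and \texttt{H} are ours, purely for illustration) evaluates the inner products by a high-order Gauss--Hermite rule, which is exact for polynomial test functions.

```python
import numpy as np
from math import factorial, sqrt

def inner(u, v, m=60):
    # (u, v)_{L^2_rho} via an m-point Gauss--Hermite rule for the weight
    # exp(-x^2/2); exact for polynomial integrands of degree <= 2m - 1.
    x, w = np.polynomial.hermite_e.hermegauss(m)
    return float(w @ (u(x) * v(x))) / sqrt(2 * np.pi)

def H(k):
    # Orthonormal Hermite polynomial H_k = He_k / sqrt(k!).
    c = np.zeros(k + 1)
    c[k] = 1.0
    return lambda x: np.polynomial.hermite_e.hermeval(x, c) / sqrt(factorial(k))
```

For instance, with $f(x)=x^3$ one can check $(f',H_k)_{L^2_\rho}=\sqrt{k+1}\,(f,H_{k+1})_{L^2_\rho}$ for the first few $k$.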
\section{Matching bounds for Gauss--Hermite quadrature}\label{sec:GH-bounds}
In this section, we prove the sub-optimality of Gauss--Hermite quadrature.
We first introduce a linear quadrature rule of the general form
\begin{align}\label{eq:general_quadrature}
Q_n(f)=\sum_{j=1}^{n}w_{j} f(\xi_{j})
\end{align}
with arbitrary $n$ distinct quadrature points on the real line
\[ -\infty<\xi_1<\xi_2<\cdots <\xi_n<\infty \]
and quadrature weights $w_1,\ldots,w_n\in \mathbb{R}$.
Gauss--Hermite quadrature $Q_{n}^{\mathrm{GH}}$ is given by the points $(\xi_{j}^{\mathrm{GH}})_{j=1,\dots,n}$ being the roots of $H_n$ and the weights $(w_{j})_{j=1,\dots,n}$ being
$
w_{j}={1}/{[H_{n}'(\xi_{j}^{\mathrm{GH}})]^{2}}
$,
see for example \cite[Theorem 3.5]{Shen.J_2011_book}.
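As an illustration, $Q_{n}^{\mathrm{GH}}$ can be realized in a few lines. The sketch below uses NumPy's \texttt{hermegauss} routine, which returns nodes and weights for the weight $\mathrm{e}^{-x^2/2}$; since those weights sum to $\sqrt{2\pi}$, dividing by $\sqrt{2\pi}$ recovers integration against the Gaussian density $\rho$. (The helper name \texttt{gauss\_hermite} is ours, not from any reference.)

```python
import numpy as np

def gauss_hermite(f, n):
    # Nodes are the zeros of the degree-n Hermite polynomial for the weight
    # exp(-x^2/2); normalizing by sqrt(2*pi) matches the density rho.
    x, w = np.polynomial.hermite_e.hermegauss(n)
    return float(w @ f(x)) / np.sqrt(2 * np.pi)
```

An $n$-point rule integrates polynomials of degree up to $2n-1$ exactly; for instance, $I(x^2)=1$ is reproduced already with $n=2$.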
Given a quadrature rule $Q_n$,
it is convenient to introduce the notation
\[
{e}^{\mathrm{wor}}(Q_{n}, \mathscr{H}_{\alpha})
:=
\sup_{0\neq f\in \mathscr{H}_{\alpha}}
\frac{|I(f)-Q_n(f)|}
{\|f\|_{\alpha}}
.
\]
The quantity ${e}^{\mathrm{wor}}(Q_{n}, \mathscr{H}_{\alpha})$ is commonly referred to as the \emph{worst-case error} of $Q_n$ in $\mathscr{H}_{\alpha}$; see for example \cite{Dick.J_Kuo_Sloan_2013_ActaNumerica}.
Now we can state the aim of this section more precisely: we prove matching lower and upper bounds on ${e}^{\mathrm{wor}}(Q_{n}^{\mathrm{GH}}, \mathscr{H}_{\alpha})$ for Gauss--Hermite quadrature.
\subsection{Lower bound}
We first derive the following lower bound on ${e}^{\mathrm{wor}}(Q_{n}, \mathscr{H}_{\alpha})$ for the general quadrature \eqref{eq:general_quadrature}.
\begin{lemma}\label{lem:lower_bound_general}
Let $\alpha\in\mathbb{N}$.
For $n\geq 2$, let
\begin{align}\label{eq:assum_minimum_distance}
\sigma := \begin{cases} \alpha & \text{if }\ \min_{j=1,\ldots,n-1}(\xi_{j+1}-\xi_{j})\leq 1,\\
0 & \text{otherwise.}\end{cases}
\end{align}
Then, there exists a constant $c_{\alpha}>0$, which depends only on $\alpha$, such that the worst-case error ${e}^{\mathrm{wor}}(Q_n, \mathscr{H}_{\alpha})$ of a general function-value based linear quadrature \eqref{eq:general_quadrature} in the weighted Sobolev space $\mathscr{H}_{\alpha}$ is bounded below by
\begin{align*}
&{e}^{\mathrm{wor}}(Q_n, \mathscr{H}_{\alpha})\geq c_{\alpha}\min_{i=1,\dots,n-1}(\xi_{i+1}-\xi_{i})^{\sigma+1/2}
\\&
\qquad\quad\times \sum_{j=1}^{n-1}\mathrm{e}^{-\max(\xi_{j}^2,\xi_{j+1}^2)/2}
\left(\sum_{k=1}^{n-1}\mathrm{e}^{-\mathds{1}_{\geq 0}(\xi_{k}\xi_{k+1}) \min(\xi_{k}^2,\xi_{k+1}^2)/2}\right)^{-1/2},
\end{align*}
where $\mathds{1}_{\geq 0}(x)$ is equal to 1 if $x\geq 0$ and 0 otherwise.
\end{lemma}
\begin{proof}
The heart of the matter is to construct a function $0\neq h_{n}\in \mathscr{H}_{\alpha}$ such that $h_{n}(\xi_{j})=0$ for all $1\leq j\leq n$, so that $Q_n(h_{n})=0$, and such that $\|h_n\|_{\alpha}$ is small while $I(h_n)$ is large.
Define a function $h\colon \mathbb{R}\to \mathbb{R}$ by
\[ h(x):=h_n(x)
:=\begin{cases}
\displaystyle
\biggl(\frac{x-\xi_{j}}{\xi_{j+1}-\xi_{j}}\biggr)^{\!\!\alpha} \biggl(1-\frac{x-\xi_{j}}{\xi_{j+1}-\xi_{j}}\biggr)^{\!\!\alpha}
&\kern-4mm
\begin{array}{l}
\text{if there exists }j\!\in\! \{1,\ldots,n-1\}\\[-2.5pt]
\text{such that }x\in [\xi_{j},\xi_{j+1}],
\end{array}
\\[10pt]
0 &\kern-4mm\begin{array}{l}\text{otherwise.}\end{array}
\end{cases}
\]
Then, $h$ turns out to fulfill our purpose. This type of fooling function, used to prove lower bounds on the worst-case error, is called a \emph{bump function} (of finite smoothness) in quasi-Monte Carlo theory; see, for instance, \cite[Section~2.7]{DHP2015}.
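For concreteness, the bump construction can be checked numerically. The sketch below (the helper name \texttt{bump\_h} is ours) evaluates $h_n$ on an arbitrary node set; it vanishes at every node, so any quadrature rule based on these nodes returns $0$, while the Gaussian integral of $h_n$ stays strictly positive.

```python
import numpy as np

def bump_h(x, xi, alpha):
    # h_n from the proof: on each cell [xi_j, xi_{j+1}] the polynomial bump
    # (t(1-t))^alpha in the local coordinate t, and zero outside [xi_1, xi_n].
    x = np.asarray(x, dtype=float)
    h = np.zeros_like(x)
    for a, b in zip(xi[:-1], xi[1:]):
        mask = (x >= a) & (x <= b)
        t = (x[mask] - a) / (b - a)
        h[mask] = (t * (1.0 - t)) ** alpha
    return h
```

Taking the Gauss--Hermite nodes as $\xi_j$ gives $Q_n^{\mathrm{GH}}(h_n)=0$ exactly, while a Riemann sum of $h_n\rho$ over $[\xi_1,\xi_n]$ is strictly positive.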
First we show $h\in \mathscr{H}_{\alpha}$. It follows from
\[ \left(\frac{x-\xi_{j}}{\xi_{j+1}-\xi_{j}}\right)^{\alpha}\left(1-\frac{x-\xi_{j}}{\xi_{j+1}-\xi_{j}}\right)^{\alpha}=\sum_{\ell=0}^{\alpha}(-1)^{\ell}\binom{\alpha}{\ell}\left(\frac{x-\xi_{j}}{\xi_{j+1}-\xi_{j}}\right)^{\alpha+\ell}\]
that we have
\begin{align*}
h^{(\tau)}(x)=\frac{1}{(\xi_{j+1}-\xi_{j})^{\tau}}\sum_{\ell=0}^{\alpha}(-1)^{\ell}\binom{\alpha}{\ell}\frac{(\alpha+\ell)!}{(\alpha+\ell-\tau)!}\left(\frac{x-\xi_{j}}{\xi_{j+1}-\xi_{j}}\right)^{\alpha+\ell-\tau}
\end{align*}
for $\tau=0,1,\ldots,\alpha$ and any $\xi_{j}<x<\xi_{j+1}$.
As we have $h^{(\tau)}(\xi_{j})=h^{(\tau)}(\xi_{j+1})=0$ for $\tau=0,1,\ldots,\alpha-1$, the function $h$ is $(\alpha-1)$-times continuously differentiable. Moreover, $h^{(\alpha-1)}$ is a continuous piecewise polynomial and thus weakly differentiable. Also, noting that $\mathds{1}_{\geq 0}(\xi_{j}\xi_{j+1})=0$ only when $\xi_{j}<0<\xi_{j+1}$ and otherwise $\mathds{1}_{\geq 0}(\xi_{j}\xi_{j+1})=1$, we have
\begin{align*}
& \int_{\xi_{j}}^{\xi_{j+1}}| h^{(\tau)}(x)|^2\rho(x)\mathrm{d} x \\
& \leq \frac{\mathrm{e}^{-\mathds{1}_{\geq 0}(\xi_{j}\xi_{j+1}) \min(\xi_{j}^2,\xi_{j+1}^2)/2}}{\sqrt{2\pi}}\int_{\xi_{j}}^{\xi_{j+1}}| h^{(\tau)}(x)|^2\mathrm{d} x
\\
& = \frac{\mathrm{e}^{-\mathds{1}_{\geq 0}(\xi_{j}\xi_{j+1}) \min(\xi_{j}^2,\xi_{j+1}^2)/2}}{\sqrt{2\pi}(\xi_{j+1}-\xi_{j})^{2\tau}} \\
&{
\times \kern-3mm\sum_{\ell_1,\ell_2=0}^{\alpha}\kern-1.5mm(-1)^{\ell_1+\ell_2} \kern-0.5mm
\binom{\alpha}{\ell_1}\binom{\alpha}{\ell_2}\frac{(\alpha+\ell_1)!}{(\alpha+\ell_1-\tau)!}
\frac{(\alpha+\ell_2)!}{(\alpha+\ell_2-\tau)!}
\int_{\xi_{j}}^{\xi_{j+1}}\kern-1mm\Bigl(\frac{x-\xi_{j}}{\xi_{j+1}-\xi_{j}}\Bigr)^{2(\alpha-\tau)+\ell_1+\ell_2}\kern-0.6mm\mathrm{d} x} \\
& = \frac{\mathrm{e}^{-\mathds{1}_{\geq 0}(\xi_{j}\xi_{j+1}) \min(\xi_{j}^2,\xi_{j+1}^2)/2}}{\sqrt{2\pi}(\xi_{j+1}-\xi_{j})^{2\tau-1}}\\&
\quad{\times \sum_{\ell_1,\ell_2=0}^{\alpha}
\frac{(-1)^{\ell_1+\ell_2}}{2(\alpha-\tau)+\ell_1+\ell_2+1}\binom{\alpha}{\ell_1}\binom{\alpha}{\ell_2}\frac{(\alpha+\ell_1)!}{(\alpha+\ell_1-\tau)!}\frac{(\alpha+\ell_2)!}{(\alpha+\ell_2-\tau)!},}
\end{align*}
for $\tau=0,1,\ldots,\alpha$. The last sum over $\ell_1$ and $\ell_2$ does not depend on $j$.
Denoting this sum by $S_{\alpha,\tau}$, we obtain
\begin{align*}
\|h\|_{\alpha}^2 & = \sum_{\tau=0}^{\alpha}\int_{\mathbb{R}}| h^{(\tau)}(x)|^2 \rho(x)\mathrm{d} x = \sum_{\tau=0}^{\alpha}\sum_{j=1}^{n-1}\int_{\xi_{j}}^{\xi_{j+1}}| h^{(\tau)}(x)|^2\rho(x)\mathrm{d} x \\
& \leq \frac{1}{\sqrt{2\pi}}\sum_{\tau=0}^{\alpha}S_{\alpha,\tau}\sum_{j=1}^{n-1}\frac{\mathrm{e}^{-\mathds{1}_{\geq 0}(\xi_{j}\xi_{j+1}) \min(\xi_{j}^2,\xi_{j+1}^2)/2}}{(\xi_{j+1}-\xi_{j})^{2\tau-1}}\\
& \leq
\frac{1}{\sqrt{2\pi}}
\sum_{\tau=0}^{\alpha}
\frac{S_{\alpha,\tau}}
{\min_{1\le i\le n-1}(\xi_{i+1}-\xi_{i})^{2\tau-1}}\sum_{j=1}^{n-1}\mathrm{e}^{-\mathds{1}_{\geq 0}(\xi_{j}\xi_{j+1}) \min(\xi_{j}^2,\xi_{j+1}^2)/2}\\
& \leq
\frac{1}{\sqrt{2\pi}\min_{1\le i\le n-1}(\xi_{i+1}-\xi_{i})^{2\sigma-1}}\sum_{\tau=0}^{\alpha}S_{\alpha,\tau}\sum_{j=1}^{n-1}\mathrm{e}^{-\mathds{1}_{\geq 0}(\xi_{j}\xi_{j+1}) \min(\xi_{j}^2,\xi_{j+1}^2)/2}<\infty.
\end{align*}
This proves $h\in \mathscr{H}_{\alpha}$.
By definition of $h$, we have $h(\xi_{j})=0$ for all $j=1,\ldots,n$, and thus
\[ Q_n(h)=0.\]
Moreover, we have
\begin{align*}
I(h) & = \int_{\mathbb{R}}h(x)\rho(x)\mathrm{d} x = \sum_{j=1}^{n-1}\int_{\xi_{j}}^{\xi_{j+1}}h(x)\rho(x)\mathrm{d} x \\
& \geq \frac{1}{\sqrt{2\pi}}\sum_{j=1}^{n-1}\mathrm{e}^{-\max(\xi_{j}^2,\xi_{j+1}^2)/2}\int_{\xi_{j}}^{\xi_{j+1}}\left(\frac{x-\xi_{j}}{\xi_{j+1}-\xi_{j}}\right)^{\alpha}\left(1-\frac{x-\xi_{j}}{\xi_{j+1}-\xi_{j}}\right)^{\alpha}\mathrm{d} x \\
& = \frac{1}{\sqrt{2\pi}}\sum_{j=1}^{n-1}\mathrm{e}^{-\max(\xi_{j}^2,\xi_{j+1}^2)/2}(\xi_{j+1}-\xi_{j})\int_{0}^{1}x^{\alpha}\left(1-x\right)^{\alpha}\mathrm{d} x \\
& = \frac{(\alpha!)^2}{(2\alpha+1)!\sqrt{2\pi}}\sum_{j=1}^{n-1}\mathrm{e}^{-\max(\xi_{j}^2,\xi_{j+1}^2)/2}(\xi_{j+1}-\xi_{j}) \\
& \geq \frac{(\alpha!)^2}{(2\alpha+1)!\sqrt{2\pi}}
\min_{1\le i\le n-1}(\xi_{i+1}-\xi_{i})\sum_{j=1}^{n-1}\mathrm{e}^{-\max(\xi_{j}^2,\xi_{j+1}^2)/2}.
\end{align*}
Using the above results, we obtain
\begin{align*}
{e}^{\mathrm{wor}}(Q_n,\mathscr{H}_{\alpha}) &
\geq \frac{|I(h)-Q_n(h)|}{\|h\|_{\alpha}}\\
& \geq \frac{(\alpha!)^2}{(2\alpha+1)!(2\pi)^{1/4}}\left(\sum_{\tau=0}^{\alpha}S_{\alpha,\tau}\right)^{-1/2} \min_{1\le i\le n-1}(\xi_{i+1}-\xi_{i})^{\sigma+1/2}\\
&{\quad \times \sum_{j=1}^{n-1}\mathrm{e}^{-\max(\xi_{j}^2,\xi_{j+1}^2)/2}
\left(\sum_{k=1}^{n-1}\mathrm{e}^{-\mathds{1}_{\geq 0}(\xi_{k}\xi_{k+1}) \min(\xi_{k}^2,\xi_{k+1}^2)/2}\right)^{-1/2}.}
\end{align*}
Now the proof is complete.
\end{proof}
Using the general lower bound in Lemma~\ref{lem:lower_bound_general}, we obtain the following lower bound on the worst-case error for Gauss--Hermite quadrature.
\begin{theorem}\label{thm:GH-LB}
Let $\alpha\in\mathbb{N}$.
For any $n\geq 2$, the worst-case error of the Gauss--Hermite quadrature in the weighted Sobolev space $\mathscr{H}_{\alpha}$ is bounded from below as
\[ {e}^{\mathrm{wor}}(Q_{n}^{\mathrm{GH}}, \mathscr{H}_{\alpha})\geq C_{\alpha}n^{-\alpha/2}\]
with a constant $C_{\alpha}>0$ that depends on $\alpha$ but is independent of $n$.
\end{theorem}
\begin{proof}
Let $\xi_{j}^{\mathrm{GH}}$, $j=1,\dots,n$, be the roots of $H_n$.
For any $n\geq 2$, it holds that
\begin{align}\label{eq:minimum_distance}
\frac{\pi}{\sqrt{n+1/2}}<\min_{j=1,\dots,n-1} (\xi_{j+1}^{\mathrm{GH}}-\xi_{j}^{\mathrm{GH}})\leq \frac{\sqrt{21/2}}{\sqrt{n+1/2}};
\end{align}
see, for instance, \cite[Eq.~(6.31.22)]{Szego1975}.
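The spacing bounds \eqref{eq:minimum_distance} are straightforward to check numerically. The sketch below (the helper name is ours) computes the minimal gap between the zeros of $H_n$, obtained via NumPy's \texttt{hermegauss} routine for the weight $\mathrm{e}^{-x^2/2}$.

```python
import numpy as np

def hermite_min_gap(n):
    # Minimal spacing between consecutive zeros of the degree-n Hermite
    # polynomial orthogonal with respect to the weight exp(-x^2/2).
    xi = np.polynomial.hermite_e.hermegauss(n)[0]
    return float(np.min(np.diff(xi)))
```

For moderate $n$ the minimal gap sits between $\pi/\sqrt{n+1/2}$ and $\sqrt{21/2}/\sqrt{n+1/2}$, often close to both ends of that window.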
Thus, to invoke Lemma~\ref{lem:lower_bound_general} we let
\[ \sigma := \begin{cases} \alpha & \text{for }\ n\geq 10,\\
0 & \text{for }\ 2\leq n<10,\end{cases}
\]
so that the conditions in \eqref{eq:assum_minimum_distance} are satisfied.
Also, each node $\xi_{j}^{\mathrm{GH}}$ is bounded below and above as follows; see, for instance, \cite[Eq.~(6.31.19)]{Szego1975}:
for $n$ odd, we have $\xi_{(n+1)/2}^{\mathrm{GH}}=0$ with the positive zeros satisfying
\begin{align}\label{eq:node_distribution_odd}
\frac{j\pi}{\sqrt{n+1/2}} <\xi_{(n+1)/2+j}^{\mathrm{GH}}<\frac{4j+3}{\sqrt{n+1/2}}
\quad\text{for }\ j=1,\ldots,(n-1)/2,
\end{align}
and for $n$ even,
\begin{align}\label{eq:node_distribution_even}
\frac{(j-1/2)\pi}{\sqrt{n+1/2}} <\xi_{n/2+j}^{\mathrm{GH}}<\frac{4j+1}{\sqrt{n+1/2}}
\quad\text{for }\ j=1,\ldots,n/2,
\end{align}
together with the symmetry $\xi_{j}^{\mathrm{GH}} = -\xi_{n+1-j}^{\mathrm{GH}}$ for $1\leq j\leq n$, valid for both odd and even $n\geq 2$.
Let $n$ be odd. Using the result in Lemma~\ref{lem:lower_bound_general}, equations~\eqref{eq:minimum_distance} and \eqref{eq:node_distribution_odd}, together with the symmetry of the Hermite zeros, we obtain
\begin{align*}
& {e}^{\mathrm{wor}}(Q_{n}^{\mathrm{GH}}, \mathscr{H}_{\alpha}) \\
& \geq 2c_{\alpha}
\left( \frac{\pi^2}{n+1/2}\right)^{\sigma/2+1/4}\,\sum_{j=1}^{(n-1)/2}\mathrm{e}^{-(\xi_{(n+1)/2+j}^{\mathrm{GH}})^2/2}
\left(2\sum_{k=1}^{(n-1)/2}\mathrm{e}^{-(\xi_{(n+1)/2+k-1}^{\mathrm{GH}})^2/2}\right)^{-1/2}\\
& \geq 2c_{\alpha}
\left( \frac{\pi^2}{n+1/2}\right)^{\sigma/2+1/4}\,\sum_{j=1}^{(n-1)/2}\kern-2mm\mathrm{e}^{-(4j+3)^2/(2n+1)}
\left(2+2\kern-2.5mm\sum_{k=2}^{(n-1)/2}\kern-2mm\mathrm{e}^{-\pi^2(k-1)^2/(2n+1)}\right)^{-1/2}.
\end{align*}
The sum over $j$ is further bounded below by
\begin{align*}
\sum_{j=1}^{(n-1)/2}\mathrm{e}^{-(4j+3)^2/(2n+1)} & \geq \int_1^{(n-1)/2+1}\mathrm{e}^{-(4x+3)^2/(2n+1)}\mathrm{d} x\\
& = \frac{\sqrt{n+1/2}}{4}\int_{7/\sqrt{n+1/2}}^{(2n+5)/\sqrt{n+1/2}}\mathrm{e}^{-x^2/2}\mathrm{d} x\\
& \geq \frac{\sqrt{n+1/2}}{4}\int_{\sqrt{14}}^{11\sqrt{2}/\sqrt{7}}\mathrm{e}^{-x^2/2}\mathrm{d} x\\
& = \frac{\sqrt{\pi(n+1/2)}}{4\sqrt{2}}\left( \erf(11/\sqrt{7})-\erf(\sqrt{7})\right),
\end{align*}
where $\erf$ denotes the error function, and the last inequality holds for any odd $n\geq 3$.
The sum over $k$ is further bounded above by
\begin{align*}
\sum_{k=2}^{(n-1)/2}\mathrm{e}^{-\pi^2(k-1)^2/(2n+1)} & \leq \int_1^{(n-1)/2}\mathrm{e}^{-\pi^2(x-1)^2/(2n+1)}\mathrm{d} x \\
& \leq \sqrt{n+1/2} \int_{0}^{\infty}\mathrm{e}^{-\pi^2x^2/2}\mathrm{d} x = \sqrt{\frac{n+1/2}{2\pi}}.
\end{align*}
Using these bounds, we have
\begin{align*}
{e}^{\mathrm{wor}}(Q_{n}^{\mathrm{GH}}, \mathscr{H}_{\alpha})
& \geq 2c_{\alpha}
\left( \frac{\pi^2}{n+1/2}\right)^{\sigma/2+1/4}\frac{\sqrt{\pi(n+1/2)}}{4\sqrt{2}}
\left( \erf(11/\sqrt{7})-\erf(\sqrt{7})\right)\\
& \quad \times \left(2+2\sqrt{\frac{n+1/2}{2\pi}}\right)^{-1/2}\\
& \geq c_{\alpha}\pi^{\sigma+1/4} \frac{\erf(11/\sqrt{7})-\erf(\sqrt{7})}{2\sqrt{2}\sqrt{2+\sqrt{2}}} \frac{1}{(n+1/2)^{\sigma/2}}\\
& \geq c_{\alpha}\pi^{1/4} \frac{\erf(11/\sqrt{7})-\erf(\sqrt{7})}{2^{(\alpha+5)/2}} \frac{1}{n^{\alpha/2}}.
\end{align*}
Let $n$ be even.
As in the odd case, but now using \eqref{eq:node_distribution_even} instead of \eqref{eq:node_distribution_odd}, we obtain
\begin{align*}
& {e}^{\mathrm{wor}}(Q_{n}^{\mathrm{GH}}, \mathscr{H}_{\alpha}) \\
& \geq c_{\alpha}
\left( \frac{\pi^2}{n+1/2}\right)^{\sigma/2+1/4}
\\
& \qquad\times
\Biggl(\mathrm{e}^{-(\xi_{n/2+1}^{\mathrm{GH}})^2/2}+2\sum_{j=2}^{n/2}\mathrm{e}^{-(\xi_{n/2+j}^{\mathrm{GH}})^2/2}\Biggr)
\Biggl(1+2\sum_{k=2}^{n/2}\mathrm{e}^{-(\xi_{n/2+k-1}^{\mathrm{GH}})^2/2}\Biggr)^{-1/2}\\
& \geq c_{\alpha}\left( \frac{\pi^2}{n+1/2}\right)^{\sigma/2+1/4}\\
& \qquad\times
\Biggl(\mathrm{e}^{-5^2/(2n+1)}+2\sum_{j=2}^{n/2}\mathrm{e}^{-(4j+1)^2/(2n+1)}\Biggr)
\Biggl(1+2\sum_{k=2}^{n/2}\mathrm{e}^{-\pi^2(k-3/2)^2/(2n+1)}\Biggr)^{-1/2}.
\end{align*}
The sum over $j$ is equal to 0 for $n=2$ and is bounded below by
\begin{align*}
\sum_{j=2}^{n/2}&\mathrm{e}^{-(4j+1)^2/(2n+1)} \geq \int_2^{n/2+1}\mathrm{e}^{-(4x+1)^2/(2n+1)}\mathrm{d} x\\
& = \frac{\sqrt{n+1/2}}{4}\int_{9/\sqrt{n+1/2}}^{(2n+5)/\sqrt{n+1/2}}\mathrm{e}^{-x^2/2}\mathrm{d} x\\
& \geq \frac{\sqrt{n+1/2}}{4}\int_{3\sqrt{2}}^{13\sqrt{2}/3}\mathrm{e}^{-x^2/2}\mathrm{d} x = \frac{\sqrt{\pi(n+1/2)}}{4\sqrt{2}}\left( \erf(13/3)-\erf(3)\right),
\end{align*}
for $n\geq 4$.
Noting that we have $\mathrm{e}^{-5}\geq \sqrt{5\pi}\left( \erf(13/3)-\erf(3)\right)/4$, it holds that
\[ \mathrm{e}^{-5^2/(2n+1)}+2\sum_{j=2}^{n/2}\mathrm{e}^{-(4j+1)^2/(2n+1)} \geq \frac{\sqrt{\pi(n+1/2)}}{2\sqrt{2}}\left( \erf(13/3)-\erf(3)\right),\]
for any even $n$.
The sum over $k$ is again equal to 0 for $n=2$ and is bounded above by
\begin{align*}
\sum_{k=2}^{n/2}\mathrm{e}^{-\pi^2(k-3/2)^2/(2n+1)} & = \sum_{k=1}^{n/2-1}\mathrm{e}^{-\pi^2(k-1/2)^2/(2n+1)} \leq \int_0^{ n/2-1}\mathrm{e}^{-\pi^2(x-1/2)^2/(2n+1)}\mathrm{d} x \\
& \leq \sqrt{n+1/2} \int_{-\infty}^{\infty}\mathrm{e}^{-\pi^2x^2/2}\mathrm{d} x = \sqrt{\frac{2n+1}{\pi}}.
\end{align*}
It follows from these bounds on the sums that
\begin{align*}
{e}^{\mathrm{wor}}(Q_{n}^{\mathrm{GH}}, \mathscr{H}_{\alpha}) & \geq c_{\alpha}\left( \frac{\pi^2}{n+1/2}\right)^{\sigma/2+1/4}\frac{\sqrt{\pi(n+1/2)}}{2\sqrt{2}}\\
&\qquad\times
\bigl( \erf(13/3)-\erf(3)\bigr)
\left(1+2\sqrt{\frac{2n+1}{\pi}}\right)^{-1/2} \\
& \geq c_{\alpha}\pi^{\sigma+1/4} \frac{\erf(13/3)-\erf(3)}{2\sqrt{2}\sqrt{2\sqrt{2}+2}}\frac{1}{(n+1/2)^{\sigma/2}}\\
& \geq c_{\alpha}\pi^{1/4} \frac{\erf(13/3)-\erf(3)}{2^{(\alpha+6)/2}}\frac{1}{n^{\alpha/2}}.
\end{align*}
Altogether, we obtain a lower bound for the worst-case error
\[ {e}^{\mathrm{wor}}(Q_{n}^{\mathrm{GH}}, \mathscr{H}_{\alpha})\geq C_{\alpha}n^{-\alpha/2}\]
with
\begin{align*}
C_{\alpha}
&=c_{\alpha}\pi^{1/4} \min\Biggl\{\frac{\erf(11/\sqrt{7})-\erf(\sqrt{7})}{2^{(\alpha+5)/2}}, \frac{\erf(13/3)-\erf(3)}{2^{(\alpha+6)/2}}\Biggr\}\\
&=
c_{\alpha}\pi^{1/4} \frac{\erf(13/3)-\erf(3)}{2^{(\alpha+6)/2}},
\end{align*}
which holds for all $n\geq 2$.
\end{proof}
The general lower bound in Lemma~\ref{lem:lower_bound_general} depends on the set of quadrature points but not on the set of weights.
Because the lower bound for Gauss--Hermite quadrature in Theorem~\ref{thm:GH-LB} is built up on Lemma~\ref{lem:lower_bound_general}, the sub-optimality of Gauss--Hermite quadrature holds irrespective of the choice of the quadrature weights.
The proof of Lemma~\ref{lem:lower_bound_general} in particular indicates that, if the spacing of a node set decreases asymptotically no faster than $1/\sqrt{n}$, then the corresponding quadrature rule cannot achieve a worst-case error better than $O(n^{-\alpha/2})$.
To elaborate on this point, we present the following less tight but more general result, which implies that any function-value based quadrature rule without a quadrature point in $[0,n^{-1/2}]$, say, cannot have a worst-case error better than $O(n^{-\alpha/2-1/4})$.
\begin{corollary}\label{cor:delta-LB}
For $f\in\mathscr{H}_{\alpha}$ with $\alpha\in\mathbb{N}$,
let $Q_n(f)$ be a quadrature of the form~\eqref{eq:general_quadrature}.
Take a positive number
$\delta=\delta(n)\in(0,1]$
such that no quadrature
point is in $(0,\delta)$.
Then, we have
\[
{e}^{\mathrm{wor}}(Q_n, \mathscr{H}_{\alpha})\geq C_{\alpha}\delta^{\alpha+1/2},
\]
where the constant $C_{\alpha}>0$ is independent of $\delta$. In
particular, if ${Q}_{n}$ does not have any quadrature point in $(0,n^{-r})$, $r>0$, then we have ${e}^{\mathrm{wor}}(Q_n, \mathscr{H}_{\alpha})\geq C_{\alpha}n^{-r\alpha-r/2}$.
\end{corollary}
\begin{proof}
Consider a function $f_{\delta,\alpha}$ defined by $f_{\delta,\alpha}(x):=(x/\delta)^{\alpha}(1-x/\delta)^{\alpha}\mathds{1}_{[0,\delta]}(x)$,
$x\in\mathbb{R}$. Then, following the proof of Lemma~\ref{lem:lower_bound_general},
analogous calculations show that we have $|I(f_{\delta,\alpha})-Q_{n}(f_{\delta,\alpha})|=|I(f_{\delta,\alpha})|\geq C_\alpha\|f_{\delta,\alpha}\|_{\alpha}\delta^{\alpha+1/2}$
for a constant $C_\alpha>0$.
This completes the proof.
\end{proof}
In passing, we note that Theorem~\ref{thm:GH-LB} also gives a lower bound for the interpolation $L^1_{\rho}$-error.
Given a function $f\colon\mathbb{R}\to\mathbb{R}$, let $\Lambda_{n}f$
be the polynomial interpolant defined by the zeros of $H_{n}$.
Then,
like other interpolatory quadratures, Gauss--Hermite quadrature satisfies
$\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\Lambda_{n}f(x)\,\mathrm{e}^{-x^{2}/2}\mathrm{d} x=\sum_{j=1}^{n}w_{j}f(\xi_{j}^{\mathrm{GH}})$.
Therefore,
with
$
\|f-\Lambda_{n}f\|_{L_{\rho}^{1}(\mathbb{R})}
:=\int_{\mathbb{R}}\big|f(x)-\Lambda_{n}f(x)\big|\,\rho(x)\mathrm{d}x$
denoting the interpolation $L^1_{\rho}$-error, we have
\begin{align*}
|I(f)-Q^{\mathrm{GH}}_n(f)|&=\Bigl|\int_{\mathbb{R}}\big[f(x)-\Lambda_{n}f(x)\big]\,\rho(x)\mathrm{d}x\Bigr|\leq
\|f-\Lambda_{n}f\|_{L_{\rho}^{1}(\mathbb{R})},
\end{align*}
and thus $\sup_{0\not=f\in \mathscr{H}_{\alpha}}\frac{\|f-\Lambda_{n}f\|_{L_{\rho}^{1}(\mathbb{R})}}{\|f\|_\alpha}\geq C_\alpha n^{-\alpha/2}$.
\subsection{Upper bound}
In the previous subsection, we showed a lower bound of order $n^{-\alpha/2}$ for the worst-case error.
A matching upper bound has been shown by Mastroianni and Monegato, whose result we adapt to our setting.
\begin{proposition}[\cite{MM1994}]
Let $\alpha\in\mathbb{N}$ and $n\in\mathbb{N}$. For $f\in\mathscr{H}_{\alpha}$, let $Q_{n}^{\mathrm{GH}}(f)$ be the Gauss--Hermite approximation to $I(f)$. Then, we have
\[
|I(f)-Q_{n}^{\mathrm{GH}}(f)|\leq C n^{-\alpha/2}\|f\|_{\alpha},
\]
where $C>0$ is a constant independent of $n$ and $f$.
\end{proposition}
\begin{proof}
First, $f\in\mathscr{H}_{\alpha}$ implies $f^{(\tau)}\in W^{1,2}_{\mathrm{loc}}(\mathbb{R})$ and thus $f^{(\tau)}$ admits a locally absolutely continuous representative for $\tau=0,\dots,\alpha-1$. Moreover,
from $f\in\mathscr{H}_{\alpha}$, for any $0<\varepsilon<1/2$ we have
\[
\int_{\mathbb{R}}
\mathrm{e}^{\varepsilon x^2/2}|f^{(\alpha)}(x)|\mathrm{e}^{-x^2/2}\mathrm{d}x
\leq
\Bigl(
\int_{\mathbb{R}}
|f^{(\alpha)}(x)|^2\mathrm{e}^{-x^2/2}
\mathrm{d}x\Bigr)^{1/2}
\Bigl(
\int_{\mathbb{R}}
\mathrm{e}^{-x^2(1/2-\varepsilon)}
\mathrm{d}x\Bigr)^{1/2}<\infty.
\]
Thus, from \cite[Theorem 2]{MM1994}, the statement follows.
\end{proof} \section{Optimality of the trapezoidal rule} \label{sec:trapez}
In this section, we prove the optimality of trapezoidal rules in $\mathscr{H}_\alpha$.
More precisely, we consider the following quadrature with $n$ equispaced points
\begin{align}
Q_{n,T}^{{*}}(g):=\frac{2T}{n}\sum_{j=0}^{n-1}g(\xi_j^*), \quad\text{with}\ \xi_j^*:=\frac{2T}{n}j-T,\ j=0,\dots,n-1.\label{eq:def-trap}
\end{align}
Here, $T>0$ is a parameter that controls the cut-off of the integration domain from $\mathbb{R}$ to $[-T,T]$.
We call $Q_{n,T}^*$ a \textit{trapezoidal rule}:
indeed, for $n$ even, $Q_{n,T}^{*}(g)$ is nothing but the standard truncated trapezoidal rule for functions on $\mathbb{R}$ with mesh size $\Delta x$
\[
Q^*_{n,T}(g)=\Delta x \sum_{\ell=-n/2}^{n/2-1}g(\ell\, \Delta x),
\]
with $\Delta x=2T/n$, while for $n$ odd we obtain the trapezoidal rule shifted by $-\frac12\Delta x$:
\[
Q^*_{n,T}(g)=\Delta x \sum_{\ell=-(n-1)/2}^{(n-1)/2}g\Bigl(\bigl(\ell-\frac12\bigr)\Delta x\Bigr).
\]
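The rule $Q^*_{n,T}$ is a one-liner to implement. The sketch below (the helper name is ours) follows the definition \eqref{eq:def-trap} directly; for even $n$ it agrees with the $\{\ell\,\Delta x\}$ grid form above.

```python
import numpy as np

def trapezoid_rule(g, n, T):
    # Q*_{n,T}: equal weights 2T/n at xi_j = (2T/n) j - T, j = 0, ..., n-1.
    xi = 2.0 * T / n * np.arange(n) - T
    return 2.0 * T / n * float(np.sum(g(xi)))
```

For even $n$, this coincides (up to rounding) with $\Delta x \sum_{\ell=-n/2}^{n/2-1} g(\ell\,\Delta x)$ for $\Delta x = 2T/n$.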
Our proof strategy is based on the approach of Nuyens and Suzuki \cite{NS2021}, which considered multidimensional integration with respect to the Lebesgue measure by a quasi-Monte Carlo method known as a rank-$1$ lattice rule.
Following \cite{NS2021}, we consider the bound
\begin{align}
\left|
\int_{\bbR} g(x) \mathrm{d} x
-
Q_{n,T}^*(g)
\right|
\le
\biggl|\int_{\bbR} g(x) \mathrm{d} x -\int_{-T}^T g(x) \mathrm{d} x \biggr|
+
\biggl|\int_{-T}^T g(x) \mathrm{d} x - Q_{n,T}^* (g) \biggr|.
\label{eq:error-decomp-g}
\end{align}
We will let $g=f\rho$ with $f\in\mathscr{H}_\alpha$ later in Theorem~\ref{thm:trape-opt}.
The first term of the right hand side in \eqref{eq:error-decomp-g} can be bounded as in \cite[Proposition~8]{NS2021};
we provide a proof adapted to our setting below in Proposition~\ref{prop:trape}.
To bound the second term, we now derive what corresponds to \cite[Lemma~5 and Proposition~7]{NS2021}.
\begin{lemma}
\label{lem:trap-error-inside}
Let $T>0$, $\alpha\in\mathbb{N}$, and $n\in\mathbb{N}$ be given.
Suppose that $g^{(\tau)}\colon\mathbb{R}\to\mathbb{R}$ is absolutely continuous on any compact interval for $\tau=0,\dots,\alpha-1$,
and that $g^{(\alpha)}$ is in $L^2(\bbR)$.
Suppose further that $g$ satisfies
\begin{equation}
\|g\|_{\alpha,[-T,T]}
:=
\left(\sum_{\tau=0}^{\alpha-1} \left(\int_{-T}^T g^{(\tau)}(x) \mathrm{d} x\right)^2 +\int_{-T}^T | g^{(\alpha)}(x) |^2\mathrm{d} x \right)^{1/2}
< \infty
\end{equation}
and
\begin{align}
\|g\|_{\alpha,\mathrm{decay}}
:=
\sup_{\substack{x\in\bbR \\ \tau\in\{0,\ldots,\alpha-1\}}}
\left| \mathrm{e}^{(1-\varepsilon)x^2/2} \, g^{(\tau)}(x) \right|
<
\infty
,\quad \text{for some $\varepsilon\in(0,1)$}.
\label{eq:decay-cond}
\end{align}
Then the error of the $n$-point trapezoidal rule on the interval $[-T,T]$ defined in \eqref{eq:def-trap} is bounded by
\begin{align}
\left|
\int_{-T} ^T g(x) \mathrm{d} x
-
Q_{n,T}^*(g)
\right|
&\le
C_\alpha\|g\|_{\alpha,[-T,T]} T^{\alpha+1/2} \frac{1}{n^{\alpha}} \nonumber
\\&\quad+
\alpha \max\{1,(2T)^{\alpha-1}\}\|g\|_{\alpha,\mathrm{decay}}\mathrm{e}^{-(1-\varepsilon)T^2/2}
,
\end{align}
with $C_\alpha :=2\sqrt{\zeta(2\alpha)}/\pi^\alpha$, where $\zeta(2\alpha):=\sum_{m=1}^{\infty}m^{-2\alpha}<\infty$.
\end{lemma}
\begin{proof}
With a suitable auxiliary function $G=G^{[-T,T]}$ periodic on $[-T,T]$ satisfying $\int_{-T}^T G(x) \mathrm{d} x=\int_{-T}^T g(x) \mathrm{d} x$, we consider the following bound:
\begin{align}
\left|
\int_{-T} ^T g(x) \mathrm{d} x
-
Q_{n,T}^*(g)
\right|
\le
\left|
\int_{-T} ^T G(x) \mathrm{d} x
-
Q_{n,T}^*(G)
\right|
+
\left|
Q_{n,T}^*(g-G)
\right|
.\label{eq:[-T,T]-err-decomp}
\end{align}
Choosing
\[
G(x):=g(x)-\sum_{\tau=1}^{\alpha}\frac{B^{[-T,T]}_{\tau}(x)}{\tau!}\left(\int_{-T}^T g^{(\tau)}(s) \mathrm{d} s\right)\text{ for }x\in [-T-\delta,T+\delta],
\]
for an arbitrarily fixed small $\delta\in (0,1)$ turns out to be convenient, as we now explain. Here,
$B^{[-T,T]}_{\tau}(x)$ is the scaled Bernoulli polynomial of degree $\tau$ on $[-T,T]$, namely
\[
B^{[-T,T]}_{\tau}(x) = (2T)^{\tau-1} B_{\tau}\left(\frac{x+T}{2T}\right)
\]with $B_{\tau}$ being the standard Bernoulli polynomial of degree $\tau$.
We have $\int_{-T}^T G(x) \mathrm{d} x=\int_{-T}^T g(x) \mathrm{d} x$ by simply noticing that $\int_0^1 B_\tau(x)\mathrm{d} x =0$ for $\tau\ge 1$.
The function $G$ is ($\alpha-1$)-times differentiable on
$(-T-\delta,T+\delta)$
with $G^{(\alpha-1)}$ being absolutely continuous on $[-T,T]$. Moreover, we have
\begin{align*}
&\int_{-T}^{T} G^{(\tau)}(x)\mathrm{d} x =\int_{-T}^{T} g^{(\tau)}(x)\mathrm{d} x - \left(\int_{-T}^{T} B^{[-T,T]}_{0}(x) \mathrm{d} x \right)\left(\int_{-T}^{T} g^{(\tau)}(s)\mathrm{d} s\right) =0 ,
\end{align*}
for $\tau=1,\ldots,\alpha$, and thus the fundamental theorem of calculus tells us
\begin{align*}
&G^{(\tau)}(-T)=G^{(\tau)}(T),\quad \text{for }\ \tau=0,\ldots,\alpha-1.
\end{align*}
These properties of $G$ imply the following two Fourier series representations.
First, from the periodicity and the absolute continuity of $G$ on $[-T,T]$,
we have the pointwise-convergent Fourier series expansion
\[
G(x)= \sum_{m\in\mathbb{Z}} \widehat{G}(m) \phi^{[-T,T]}_{m}(x),
\]
where $\phi^{[-T,T]}_{m}(x):=\exp(\frac{2\pi \mathrm{i} m(x+T)}{2T})/\sqrt{2T}$, $m\in\mathbb{Z}$ are the orthonormal Fourier basis on $L^2([-T,T])$ and
$\widehat{G}(m):=\int_{-T}^T G(x) \exp(\frac{-2\pi \mathrm{i} m(x+T)}{2T})/\sqrt{2T}\, \mathrm{d} x$, $m\in\mathbb{Z}$
are the Fourier coefficients.
Second,
from the square integrability of $G^{(\alpha)}$, we have the $L^2$-convergent Fourier series representation
\begin{align*}
\left(G(x)\right)^{(\alpha)}
&=\sum_{m\in\mathbb{Z}} \widehat{G^{(\alpha)}}(m) \phi^{[-T,T]}_{m}(x)=\sum_{m\in\mathbb{Z}} \left(\frac{2\pi \mathrm{i} m}{2T}\right)^{\alpha} \widehat{G}(m) \phi^{[-T,T]}_{m}(x),
\end{align*}
where in the second equality we repeatedly used integration by parts.
Using these representations, we obtain
\begin{align}
\biggl|
Q_{n,T}^*(G)&
-
\int_{-T}^T G(x) \mathrm{d} x
\biggr|\notag
=
\biggl|
\frac{2T}{n} \sum_{j=0}^{n-1}\sum_{m\in\mathbb{Z}} \widehat{G}(m) \phi^{[-T,T]}_{m}(\xi^*_j)
-
\sqrt{2T}\widehat{G}(0)
\biggr|
\\
&=
\Biggl|
\sqrt{2T}\sum_{m\in\mathbb{Z}\setminus\{0\}} \widehat{G}(mn)
\Biggr|\notag
\\
&\le
\sqrt{2T} \Biggl(
\sum_{m\in\mathbb{Z}\setminus\{0\}} |\widehat{G}(mn)|^2 \left(\frac{2\pi m n}{2T}\right)^{2\alpha}
\Biggr)^{1/2}
\Biggl(
\sum_{m\in\mathbb{Z}\setminus\{0\}} \left(\frac{2T}{2\pi m n}\right)^{2\alpha}
\Biggr)^{1/2}\notag
\\
&\le
\sqrt{2T}\|G\|_{\alpha,[-T,T]}
\sqrt{2\zeta(2\alpha)}
\left(\frac{T}{\pi n}\right)^{\alpha}
=:C_\alpha \|G\|_{\alpha,[-T,T]} T^{\alpha+1/2} \frac{1}{n^\alpha},\label{eq:bd-in-G}
\end{align}
where in passing from the first line to the second we used the pointwise convergence of the series and
\begin{align*}
\frac{2T}{n} \sum_{j=0}^{n-1} \phi^{[-T,T]}_{m}(\xi^*_j)
=
\frac{\sqrt{2T}}{n} \sum_{j=0}^{n-1} \exp(2\pi\mathrm{i}\:\! m j/n)
=
\begin{cases}
\sqrt{2T} & \text{if } m\equiv 0 \pmod{n},\\
0 & \text{otherwise},
\end{cases}
\end{align*}
while in the last inequality we used Parseval's identity.
The right-hand side of \eqref{eq:bd-in-G} is further bounded by $C_\alpha \|g\|_{\alpha,[-T,T]} T^{\alpha+1/2}\frac{1}{n^\alpha}$ since
\begin{align*}
\|G\|_{\alpha,[-T,T]}&=\left( \left(\int_{-T}^T G(x) \mathrm{d} x\right)^2 +\int_{-T}^T | \left(G\right)^{(\alpha)}(x) |^2\mathrm{d} x
\right)^{1/2}
\\
&=
\left( \left(\int_{-T}^T g(x) \mathrm{d} x\right)^2 +\int_{-T}^T | g^{(\alpha)}(x)|^2 \mathrm{d} x -\frac{1}{2T}\left(\int_{-T}^T g^{(\alpha)}(y)\mathrm{d} y \right)^2
\right)^{1/2}
\\
&\le
\|g\|_{\alpha,[-T,T]}.
\end{align*}
Now we bound the second term of the right-hand side in \eqref{eq:[-T,T]-err-decomp}:
\begin{align*}
\left|
Q_{n,T}^*(g-G)
\right|
&=
\Biggl|\frac{1}{n}\sum_{j=0}^{n-1}\sum_{\tau=1}^{\alpha}\frac{B^{[-T,T]}_{\tau}(\xi^*_j)}{\tau!}
\biggl(\int_{-T}^T g^{(\tau)}(s) \mathrm{d} s
\biggr)\Biggr|
\\
&\le
\sum_{\tau=1}^{\alpha}
\Bigg|
\frac{1}{n}\sum_{j=0}^{n-1}\frac{B^{[-T,T]}_{\tau}(\xi^*_j)}{\tau!}
\Bigg|
\bigl| g^{(\tau-1)}(T)-g^{(\tau-1)}(-T)\bigr|
\\
&\le
\sum_{\tau=1}^{\alpha}\frac{(2T)^{\tau-1}}{2} (2 \|g\|_{\alpha,\mathrm{decay}})\,\mathrm{e}^{-(1-\varepsilon)T^2/2}
\\
&\le
\alpha \max\{1,(2T)^{\alpha-1}\}\|g\|_{\alpha,\mathrm{decay}}\,\mathrm{e}^{-(1-\varepsilon)T^2/2},
\end{align*}
where in the penultimate line we used $|\frac{B^{[-T,T]}_{\tau}(x)}{\tau!}|\le \frac{(2T)^{\tau-1}}{2}$ for $x\in[-T,T]$; see~\cite[Equation~(6)]{NS2021} or~\cite{L1940}.
Together with \eqref{eq:bd-in-G}, the statement follows.
\end{proof}
Now, what remains in the bound~\eqref{eq:error-decomp-g} is the error due to truncating the real line to the interval $[-T,T]$.
The following result tells us how to choose $T$ to obtain a total error bounded by $\mathcal{O}(n^{-\alpha})$ up to a logarithmic factor.
\begin{proposition}
\label{prop:trape}
Let $\alpha\in\mathbb{N}$.
Suppose that the function $g^{(\tau)}\colon\mathbb{R}\to\mathbb{R}$ is absolutely continuous on any compact interval for $\tau=0,\dots,\alpha-1$,
and that $g^{(\alpha)}$ is in $L^2(\bbR)$.
Suppose further that $g$ satisfies
\begin{align}
\|g\|_{\alpha}^*
:=
\sup_{\substack{I\subset\bbR\\|I|<\infty}} \|g\|_{\alpha,I}
:=
\sup_{\substack{I\subset\bbR\\|I|<\infty}}\left(\sum_{\tau=0}^{\alpha-1} \left(\int_I g^{(\tau)}(x) \mathrm{d} x\right)^2 +\int_I | g^{(\alpha)}(x) |^2\mathrm{d} x \right)^{1/2}
< \infty
\label{eq:def-alpha-star}
\end{align}
and
\begin{align}
\|g\|_{\alpha,\mathrm{decay}}
:=
\sup_{\substack{x\in\bbR \\ \tau\in\{0,\ldots,\alpha-1\}}}
\left| \mathrm{e}^{(1-\varepsilon)x^2/2} \, g^{(\tau)}(x) \right|
<
\infty
,\qquad\text{for some $\varepsilon\in(0,1)$}.\label{eq:decay-cond-proptrap}
\end{align}
Then, for any integer $n\ge2$, the error for the $n$-point trapezoidal rule $Q^*_{n,T}$ as in \eqref{eq:def-trap} with the cut-off interval $[-T,T]$ given by
\begin{align}
T
&=
\sqrt{\frac{2}{(1-\varepsilon)}\alpha \ln(n)}
,\label{eq:def-T}
\end{align}
can be bounded by
\begin{align}\label{eq:tr-bound}
\left|
\int_{\bbR} g(x) \mathrm{d} x
-
Q_{n,T}^*(g)
\right|
\le
C \, \left(\|g\|_{\alpha}^* + \|g\|_{\alpha,\mathrm{decay}}\right) \,
\frac{(\ln n)^{\alpha/2 + 1/4}}{n^\alpha}
,
\end{align}
where the constant $C$ is independent of $n$ and $g$ but depends on $\alpha$ and $\varepsilon$.
\end{proposition}
\begin{proof}
Consider the bound \eqref{eq:error-decomp-g}.
The error due to cutting off the integration domain is bounded by
\begin{align*}
\left|\int_{\bbR} g(x) \mathrm{d} x -\int_{-T}^T g(x) \mathrm{d} x \right|
&\le
2\|g\|_{\alpha,\mathrm{decay}} \int_T^\infty \mathrm{e}^{-(1-\varepsilon)x^2/2} \mathrm{d} x
\\
&\le
\frac{2\|g\|_{\alpha,\mathrm{decay}}}{(1-\varepsilon)T}\int_T^\infty (1-\varepsilon) x \mathrm{e}^{-(1-\varepsilon)x^2/2} \mathrm{d} x
\\&=
\frac{2\|g\|_{\alpha,\mathrm{decay}}}{(1-\varepsilon)T} \mathrm{e}^{-(1-\varepsilon)T^2/2}
=
\frac{\sqrt{2}\|g\|_{\alpha,\mathrm{decay}}}{\sqrt{\alpha(1-\varepsilon)}}n^{-\alpha} (\ln(n))^{-1/2}.
\end{align*}
Noting that $n\ge2$ and \eqref{eq:def-T} imply $2T>1$ so that $\max\{1,(2T)^{\alpha-1}\}=(2T)^{\alpha-1}$, from Lemma~\ref{lem:trap-error-inside} we have
\begin{align*}
\left|
\int_{\bbR} g(x) \mathrm{d} x
-
Q_{n,T}^*(g)
\right|
&\le
\frac{\sqrt{2}\|g\|_{\alpha,\mathrm{decay}}}{\sqrt{\alpha(1-\varepsilon)}} (\ln(n))^{-1/2}\, n^{-\alpha}
\\
&\qquad\quad+
C_1 \, \|g\|_{\alpha}^*
\,
(\ln(n))^{(\alpha/2+1/4)} \,
n^{-\alpha}
\\
&\qquad\quad+
C_2
\,
\|g\|_{\alpha,\mathrm{decay}}
\, (\ln(n))^{(\alpha/2-1/2) }
\,
n^{-\alpha}\\
&\le
C \, \left(\|g\|_{\alpha}^* + \|g\|_{\alpha,\mathrm{decay}}\right) \,
(\ln n)^{ (\alpha/2 + 1/4)} \, n^{-\alpha},
\end{align*}
where the constants $C_1$, $C_2$ and $C$ are independent of $n$ and $g$ but depend on $\alpha$ and $\varepsilon$. Thus the claim is proved.
\end{proof}
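Proposition~\ref{prop:trape} can be checked numerically. The following sketch treats $Q^*_{n,T}$ as the plain $n$-point composite trapezoidal rule on $[-T,T]$ (the precise node definition in \eqref{eq:def-trap} is not reproduced here) and applies the cut-off \eqref{eq:def-T} to the standard Gaussian density, whose integral over $\mathbb{R}$ is exactly $1$; the function names are ours.

```python
import math
import numpy as np

def truncated_trapezoid(g, n, alpha, eps):
    """n-point composite trapezoidal rule on [-T, T], with the cut-off
    T = sqrt(2 * alpha * ln(n) / (1 - eps)) as in Eq. (def-T)."""
    T = math.sqrt(2.0 * alpha * math.log(n) / (1.0 - eps))
    x = np.linspace(-T, T, n)
    v = g(x)
    h = 2.0 * T / (n - 1)
    return h * (v.sum() - 0.5 * (v[0] + v[-1]))

# The standard Gaussian density integrates to exactly 1 over the real line.
g = lambda x: np.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)

for n in (11, 101, 1001):
    print(n, abs(truncated_trapezoid(g, n, alpha=2, eps=0.5) - 1.0))
```

For this very smooth integrand the error is dominated by the domain truncation, so it decays rapidly as the logarithmically growing window widens with $n$.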
\begin{remark}
The result \cite[Theorem~2]{NS2021} by Nuyens and Suzuki obtained for a class of quasi-Monte Carlo methods called good lattice rules can be seen as a multidimensional counterpart of Proposition~\ref{prop:trape}.
Indeed, it can be checked that the trapezoidal rule is a good lattice, and thus under the same assumption as Proposition~\ref{prop:trape}, the result therein is immediately applicable to the trapezoidal rule. However, we obtained a better bound by exploiting our one-dimensional setting in Proposition~\ref{prop:trape}.
Compare this result with \cite[Theorem~2]{NS2021} with the parameters therein being $d=1$, $\beta=(1-\varepsilon)/2$ and $p=q=2$ to see the improvement.
\end{remark}
Our results offer several insights to interpret results available in the literature.
\begin{sloppypar}
In the context of spectral methods, Boyd \cite{Boyd.JP_2001_book,Boyd.JP_2009} pointed out that the Gauss--Hermite points are distributed roughly uniformly over the interval $[-O(n^{1/2}),O(n^{1/2})]$, and thus, the total number of points being $n$, the spacing between adjacent points decreases only as $O(n^{-1/2})$; see for example \cite[Chapter~17]{Boyd.JP_2001_book} and \cite[Fig.~6]{Boyd.JP_2009}.
The proof of Theorem~\ref{thm:GH-LB} (see also Corollary~\ref{cor:delta-LB}) shows that it is this slow decrease of the spacing that causes the sub-optimal convergence rate.
\end{sloppypar}
In \cite[Section 5]{T2021}, Trefethen compared Gauss--Hermite quadrature with various other quadrature formulas, including the trapezoidal rule.
Although the focus there was on analytic integrands,
the author also discusses the nonanalytic case.
On page 142, he seems to have reasoned\footnote{``The ratio increases to nearly order $n^{1/2}$ for nonanalytic functions $f$, where intervals growing just logarithmically rather than algebraically with $n$ are appropriate for balancing domain-truncation and discretization errors.'' \cite[p.\ 142]{T2021}.} that
for nonanalytic functions on $\mathbb{R}$ decaying at a suitable rate (presumably at the rate $\exp(-x^2)$ as $|x|\to\infty$, including derivatives),
the right choice of the cut-off of the domain that
balances domain-truncation and quadrature errors
should be logarithmic in $n$,
whereas the Gauss--Hermite rule distributes quadrature points over the unnecessarily wide interval $[-\mathcal{O}(n^{1/2}), \mathcal{O}(n^{1/2})]$.
Proposition~\ref{prop:trape} supports this point for the trapezoidal rule.
Indeed,
under the exponential decay condition \eqref{eq:decay-cond-proptrap},
the integration domain is cut off at a point growing only logarithmically in $n$ (see \eqref{eq:def-T}), and we achieve the optimal rate $\mathcal{O}(n^{-\alpha})$ up to a logarithmic factor, as shown in \eqref{eq:tr-bound}.
Note, however, that for polynomially decaying finitely smooth functions, we expect the right choice of the domain cut-off to grow algebraically; see \cite[Theorem 2 (ii)]{NS2021} for a related result.
In Proposition~\ref{prop:trape}, the choice of the cut-off interval \eqref{eq:def-T} requires the smoothness parameter $\alpha$, which might not be known in practice.
Replacing $\alpha$ in \eqref{eq:def-T} with any slowly increasing function $\gamma(n)$, such as $\max\{\ln(\ln(n)),0\}$, yields a less tight bound for the trapezoidal rule, but with an $\alpha$-free construction, still achieving the optimal rate up to a factor of $(\gamma(n)\ln n)^{ (\alpha/2 + 1/4)}$.
\begin{corollary}\label{cor:alpha-free}
Suppose that the assumptions of Proposition~\ref{prop:trape} are satisfied. Let $\gamma\colon\mathbb{N}\to[0,\infty)$
be a non-decreasing function satisfying $\lim_{n\to\infty}\gamma(n)=\infty$.
Then, for any integer $n\geq\gamma^{-1}(\alpha):=\min\{m\in\mathbb{N}\mid\gamma(m)\geq\alpha\}$,
the error for $Q_{n,\tilde{T}}^{*}$ as in~\eqref{eq:def-trap} with
\[
\tilde{T}=\sqrt{\frac{2}{(1-\varepsilon)}\gamma(n)\ln(n)}
\]
can be bounded by
\begin{align}\left|
\int_{\bbR} g(x) \mathrm{d} x
-
Q_{n,\tilde{T}}^*(g)
\right|
\le
C \, \left(\|g\|_{\alpha}^* + \|g\|_{\alpha,\mathrm{decay}}\right) \,
\frac{(\gamma(n)\ln n)^{\alpha/2 + 1/4}}{n^\alpha}
,
\end{align}
where the constant $C$ is independent of $n$ and $g$ but depends on $\alpha$ and $\varepsilon$.
\end{corollary}
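As an illustration of the corollary, here is a sketch under our own choices: $\gamma(n)=\max\{\ln\ln n,0\}$, $\varepsilon=1/2$, and the plain composite trapezoidal rule standing in for $Q^*_{n,\tilde T}$.

```python
import math
import numpy as np

def trapezoid_alpha_free(g, n, eps=0.5):
    """Trapezoidal rule with the alpha-free cut-off, using the slowly
    increasing function gamma(n) = max(ln(ln(n)), 0) in place of alpha."""
    gamma = max(math.log(math.log(n)), 0.0)
    T = math.sqrt(2.0 * gamma * math.log(n) / (1.0 - eps))
    x = np.linspace(-T, T, n)
    v = g(x)
    h = 2.0 * T / (n - 1)
    return h * (v.sum() - 0.5 * (v[0] + v[-1]))

# Standard Gaussian density; its integral over R is exactly 1.
g = lambda x: np.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)
print(abs(trapezoid_alpha_free(g, 10_000) - 1.0))
```

Note that the rule is constructed without any knowledge of the smoothness $\alpha$ of the integrand; only $\varepsilon$ enters the cut-off.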
Now we are going to show that Proposition~\ref{prop:trape} is applicable to the weighted Sobolev space $\mathscr{H}_{\alpha}$.
In view of the optimal rate in \cite[Theorem 1]{DILP2018} for the Hermite space $\mathcal{H}^{\mathrm{Hermite}}_{\alpha}$ and the characterisation of $\mathcal{H}^{\mathrm{Hermite}}_{\alpha}$ with $\mathscr{H}_{\alpha}$ discussed in Section~\ref{sec:space}, the resulting rate below establishes the optimality, up to a logarithmic factor, of our trapezoidal rule.
\begin{theorem}\label{thm:trape-opt}
Fix $\varepsilon\in(1/2,1)$ arbitrarily.
For $f\in\mathscr{H}_{\alpha}$ with $\alpha\in\mathbb{N}$, consider $Q^*_{n,T}(f\rho)$ as in \eqref{eq:def-trap} with
$T
=
\sqrt{\frac{2}{1-\varepsilon}\alpha \ln(n)}$. Then, we have
\begin{align*}
|
I(f)
-
Q_{n,T}^*(f\rho)
|
\le
C \|f\|_{\alpha}
\frac{(\ln n)^{\alpha/2 + 1/4}}{n^\alpha}
\end{align*}
for any integer $n\ge 2$,
where the constant $C$ is independent of $n$ and $f$ but depends on $\alpha$ and $\varepsilon$.
\end{theorem}
\begin{proof}
Let $g:=f\rho$. In view of Proposition~\ref{prop:trape}, it suffices to show $\|g\|_{\alpha}^* + \|g\|_{\alpha,\mathrm{decay}}\leq C\|f\|_\alpha$ for some constant $C>0$, where $\|g\|_{\alpha}^*$ and $\|g\|_{\alpha,\mathrm{decay}}$ are as in
\eqref{eq:def-alpha-star} and \eqref{eq:decay-cond-proptrap} with $\varepsilon\in(1/2,1)$, respectively.
We first show $\|g\|_{\alpha}^*\leq C \|f\|_{\alpha}$.
We have
\[
(\|g\|^*_\alpha)^2
\le
\sum_{\tau=0}^{\alpha-1} \Bigl(\int_\bbR |g^{(\tau)}(x)| \mathrm{d} x\Bigr)^2 +\int_\bbR | g^{(\alpha)}(x) |^2\mathrm{d} x,
\]
but using \eqref{eq:Hermite-def} and the chain rule,
for $\tau=0,\ldots,\alpha-1$ we have
\begin{align*}
\|g^{(\tau)}&\|_{L^1(\bbR)}
\le
\sum_{\ell=0}^\tau \binom{\tau}{\ell} \| f^{(\tau-\ell)}(x)\rho^{(\ell)}(x) \|_{L^1(\bbR)}
\\
&=
\sum_{\ell=0}^\tau \binom{\tau}{\ell} \left(\int_{\bbR} \left|\rho(x) f^{(\tau-\ell)}(x)\sqrt{\ell!}(-1)^{\ell} H_{\ell}(x) \right| \mathrm{d} x\right)
\\
&\le
\sum_{\ell=0}^\tau \binom{\tau}{\ell} \sqrt{\ell!} \left(\int_{\bbR} \left| f^{(\tau-\ell)}(x)\right|^2\rho(x)\mathrm{d} x\right)^{1/2}\left(\int_{\bbR} \left|H_{\ell}(x) \right|^2 \rho(x) \mathrm{d} x\right)^{1/2}
<\infty,
\end{align*}
while for $\tau=\alpha$ we have
\begin{align*}
\| g^{(\alpha)}(x) \|_{L^2(\bbR)}
&\le
\sum_{\ell=0}^\alpha \binom{\alpha}{\ell} \| f^{(\alpha-\ell)}(x)\rho^{(\ell)}(x) \|_{L^2(\bbR)}
\\
&=
\sum_{\ell=0}^\alpha \binom{\alpha}{\ell} \left(\int_{\bbR} \left|\rho(x) f^{(\alpha-\ell)}(x)\sqrt{\ell!}(-1)^{\ell} H_{\ell}(x) \right|^2 \mathrm{d} x\right)^{1/2}
\\
&\le
\sum_{\ell=0}^\alpha \binom{\alpha}{\ell} \left( \sup_{t\in\bbR} \left(H_{\ell}^2(t)\rho(t) \right) \int_{\bbR} \rho(x) \bigl|f^{(\alpha-\ell)}(x)\bigr|^2\ell! \mathrm{d} x\right)^{1/2}
<\infty.
\end{align*}
Hence, $\|g\|^*_\alpha\leq C \|f\|_{\alpha}$ holds.
To show $\|g\|_{\alpha,\mathrm{decay}}\leq C \|f\|_{\alpha}$, let $h_{\tau}:=\rho^{\varepsilon-1} g^{(\tau)}$ for $\tau=0,\ldots,\alpha-1$. Then, we have
\begin{align*}
\| h_{\tau} \|_{L^2(\bbR)}
&\le
\sum_{\ell=0}^\tau \binom{\tau}{\ell} \|\rho^{\varepsilon-1}(x) f^{(\tau-\ell)}(x)\rho^{(\ell)}(x) \|_{L^2(\bbR)}
\\
&=
\sum_{\ell=0}^\tau \binom{\tau}{\ell} \left(\int_{\bbR} \left|\rho^{\varepsilon}(x) f^{(\tau-\ell)}(x)\sqrt{\ell!}(-1)^{\ell} H_{\ell}(x) \right|^2 \mathrm{d} x\right)^{1/2}
\\
&\le
\sum_{\ell=0}^\tau \binom{\tau}{\ell} \left( \sup_{t\in\bbR} \left(H_{\ell}^2(t)\rho^{2\varepsilon-1}(t) \right) \int_{\bbR} \rho(x) \left|f^{(\tau-\ell)}(x)\right|^2 \ell! \mathrm{d} x\right)^{1/2}
<\infty
\end{align*}
and
\begin{align*}
\|&h_{\tau}'(x)\|_{L^2(\bbR)}
\le
\|(1-\varepsilon)x g^{(\tau)}(x)\rho^{\varepsilon-1} (x) \|_{L^2(\bbR)} + \|g^{(\tau+1)}(x)\rho^{\varepsilon-1}(x)\|_{L^2(\bbR)}
\\
&
\le
\sum_{\ell=0}^\tau \binom{\tau}{\ell}\left( \int_{\bbR} \left|(1-\varepsilon)x\rho^{\varepsilon}(x) f^{(\tau-\ell)}(x)\sqrt{\ell!}(-1)^{\ell} H_{\ell}(x) \right|^2 \mathrm{d} x\right)^{1/2}
\\
&\quad+
\sum_{\ell=0}^{\tau+1} \binom{\tau+1}{\ell}\left(\int_{\bbR} \left|\rho^{\varepsilon}(x) f^{({\tau+1}-\ell)}(x)\sqrt{\ell!}(-1)^{\ell} H_{\ell}(x) \right|^2 \mathrm{d} x\right)^{1/2} \\
&\le
\sum_{\ell=0}^\tau \binom{\tau}{\ell} \left( \sup_{t\in \bbR}\left|\rho^{2\varepsilon-1}(t)(1-\varepsilon)^2 t^2 H^2_{\ell}(t) \right| \int_{\bbR} |f^{(\tau-\ell)}(x)|^2\rho(x)\ell! \mathrm{d} x\right)^{1/2}\\
&\quad+
\sum_{\ell=0}^{\tau+1} \binom{\tau+1}{\ell}\left(\sup_{t\in \bbR}\left| \rho^{2\varepsilon-1}(t) H_{\ell}^2(t) \right| \int_{\bbR} |f^{({\tau+1}-\ell)}(x)|^2 \ell! \rho(x)\mathrm{d} x\right)^{1/2}<
\infty.
\end{align*}
Thus, from the Sobolev inequality, e.g., \cite[Theorem~8.8]{B2011}, for $\tau=0,\dots,\alpha-1$ we have $\|h_{\tau}\|_\infty\le C\|h_{\tau}\|_{W^{1,2}(\bbR)}<\infty$. This completes the proof.
\end{proof}
Theorem~\ref{thm:trape-opt} is an application of Proposition~\ref{prop:trape} to $g=f\rho$ with $f\in \mathscr{H}_{\alpha}$.
Similarly, applying Corollary~\ref{cor:alpha-free} to $g=f\rho$ yields a trapezoidal rule whose construction is independent of $\alpha$ with the optimal convergence rate up to a factor of $(\gamma(n)\ln n)^{ (\alpha/2 + 1/4)}$ for functions in $\mathscr{H}_{\alpha}$.
Since the argument is straightforward from
Corollary~\ref{cor:alpha-free} and
Theorem~\ref{thm:trape-opt}, we omit the details.
\subsection*{Details of Figure~\ref{fig:GH-sub}}
Now we are ready to discuss the details of Figure~\ref{fig:GH-sub} in Section~\ref{sec:intro}.
The trapezoidal rule used there is $Q^*_{n,T}$ as in \eqref{eq:def-trap} with $T=\sqrt{\frac{2}{1-\varepsilon}\alpha\ln (n)}$ and $\varepsilon=0.51$.
Here, we chose $\alpha=p$, since $f(x)=|x|^p$ is in $\mathscr{H}_p$ but not in $\mathscr{H}_{p+1}$.
The number of points $n$ is chosen to be odd for the trapezoidal rule and even for Gauss--Hermite quadrature, so that both quadrature rules do not evaluate at the origin $x=0$ where the integrand is not smooth.
The rate around $\mathcal{O}(n^{-p/2-0.5})$ we observe for Gauss--Hermite quadrature is consistent with the matching bounds of order $n^{-\alpha/2}=n^{-p/2}$ in the sense of worst-case error, since $f(x)=|x|^p$ is a specific element of $\mathscr{H}_p$;
the rate around $\mathcal{O}(n^{-p-0.8})$ we observe for the trapezoidal rule also supports our results, according to which we expect at least $\mathcal{O}(n^{-p})$ for any function in $\mathscr{H}_p$.
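The experiment can be reproduced in a few lines. The sketch below uses the plain composite trapezoidal rule (with even $n$, so the kink of $|x|^p$ at the origin falls between nodes; the exact node definition of $Q^*_{n,T}$ in \eqref{eq:def-trap} is not reproduced here), and transforms NumPy's Gauss--Hermite rule, which targets the weight $\mathrm{e}^{-t^2}$, to the Gaussian density via the substitution $x=\sqrt{2}\,t$.

```python
import math
import numpy as np

p = 1                                   # f(x) = |x|**p is in H_p but not H_{p+1}
# Exact value of the integral of |x|**p against the standard Gaussian density.
exact = 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

def trap(n, eps=0.51):
    """Composite trapezoidal rule on [-T, T] with T as in Eq. (def-T), alpha = p."""
    T = math.sqrt(2.0 * p * math.log(n) / (1.0 - eps))
    x = np.linspace(-T, T, n)           # even n: no node at the origin
    v = np.abs(x) ** p * np.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)
    h = 2.0 * T / (n - 1)
    return h * (v.sum() - 0.5 * (v[0] + v[-1]))

def gauss_hermite(n):
    """Gauss--Hermite rule (weight exp(-t**2)), transformed to the
    standard Gaussian density via x = sqrt(2) * t."""
    t, w = np.polynomial.hermite.hermgauss(n)
    return float((w * np.abs(math.sqrt(2) * t) ** p).sum() / math.sqrt(math.pi))

print(abs(trap(1000) - exact), abs(gauss_hermite(100) - exact))
```

Sweeping $n$ and plotting both errors on a log--log scale reproduces the qualitative picture of Figure~\ref{fig:GH-sub}.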
\section{Conclusions}\label{sec:conc}
In this paper, we proved the sub-optimality of Gauss--Hermite quadrature and the optimality of the trapezoidal rule for functions with finite smoothness, in the sense of worst-case error.
The lower bound presented for Gauss--Hermite quadrature is sharp, and the upper bound presented for the trapezoidal rule is also sharp, up to a logarithmic factor.
To establish the lower bound for the Gauss--Hermite rule, we constructed a sequence of fooling functions.
This strategy also demonstrated that what causes this lower bound is the placement of quadrature points, and thus tuning the quadrature weights does not improve the bound.
A key for showing the optimality of the trapezoidal rule was the auxiliary periodic function in Lemma~\ref{lem:trap-error-inside}.
The function used there is in fact an orthogonal projection in a suitable sense;
for details, we refer to \cite{NS2021}.
Needless to say, upon the domain truncation $[-T,T]$, other quadrature rules on the finite interval, such as Clenshaw--Curtis or Gauss--Legendre quadratures, can also be used.
For these quadrature rules, analogous upper bounds should be derivable without the need to introduce the aforementioned periodic function.
Since these quadrature rules are arguably more complicated to use than the trapezoidal rule, and since the corresponding error analysis should be less involved, we have left them outside the scope of this paper.
Our results suggest that the truncated trapezoidal rule may be also promising for high-dimensional problems.
One generalisation of the trapezoidal rule to the multidimensional setting is the lattice rule.
Nuyens and Suzuki \cite{NS2021} studied this method for the integration problem on $\mathbb{R}^d$ with respect to the Lebesgue measure.
Verifying whether it works well for the Gaussian measure in a high-dimensional setting is left for future work.
Another generalisation to high-dimensional settings is via Smolyak-type algorithms.
As mentioned in Section~\ref{sec:intro}, this type of method based on Gauss--Hermite points is widely used.
In light of the results presented in this paper, especially when the target function is expected to have limited smoothness, the trapezoidal rule may be a better choice.
Investigating these speculations is also left for future work.
\section*{Acknowledgments}
Part of this work was carried out when Yoshihito Kazashi was working at CSQI, Institute of Mathematics, \'Ecole Polytechnique F\'ed\'erale de Lausanne, Switzerland. We thank Dirk Nuyens and Ken'ichiro Tanaka for their valuable comments.
\bibliographystyle{siamplain}
% arXiv:2202.11420 (https://arxiv.org/abs/2202.11420), Numerical Analysis (math.NA).
% "Sub-optimality of Gauss--Hermite quadrature and optimality of the trapezoidal
% rule for functions with finite smoothness".
% https://arxiv.org/abs/1207.6681
% "Box-counting fractal strings, zeta functions, and equivalent forms of Minkowski dimension"
% Abstract: We discuss a number of techniques for determining the Minkowski
% dimension of bounded subsets of some Euclidean space of any dimension,
% including: the box-counting dimension and equivalent definitions based on
% various box-counting functions; the similarity dimension via the Moran
% equation (at least in the case of self-similar sets); the order of the
% (box-)counting function; the classic result on compact subsets of the real
% line due to Besicovitch and Taylor, as adapted to the theory of fractal
% strings; and the abscissae of convergence of new classes of zeta functions.
% Specifically, we define box-counting zeta functions of infinite bounded
% subsets of Euclidean space and discuss results pertaining to distance and
% tube zeta functions. Appealing to an analysis of these zeta functions allows
% for the development of theories of complex dimensions for bounded sets in
% Euclidean space, extending techniques and results regarding (ordinary)
% fractal strings obtained by the first author and van Frankenhuijsen.
\section{Introduction}
\label{sec:Introduction}
Motivated by the theory of complex dimensions of fractals strings (the main theme of \cite{LapvF6}), we introduce box-counting fractal strings and box-counting zeta functions which, along with the distance and tube zeta functions of \cite{LapRaZu}, provide possible foundations for the pursuit of theories of complex dimensions for arbitrary bounded sets in Euclidean space of any dimension. We also summarize a variety of well-known techniques for determining the box-counting dimension, or equivalently the Minkowski dimension, of such sets. Thus, while new results are presented in this paper, it is partially expository and also partially tutorial.
Our main result establishes line (iv) of the following theorem. (See also Theorem \ref{thm:MainResult} below, along with the relevant definitions provided in this paper.) The other lines have been established elsewhere in the literature, as cited accordingly throughout the paper.
\begin{theorem}
\label{thm:Summary}
Let $A$ be a bounded infinite subset of $\mathbb{R}^m$ \emph{(}equipped with the usual metric\emph{)}. Then the following quantities are equal\emph{:}\footnote{The fact that $A$ is infinite is only required for part (iv) of the theorem; see Remark \ref{rmk:FiniteBoxCountingFractalString}.}
\begin{enumerate}
\setlength{\itemsep}{0in}
\item the \emph{upper box-counting dimension} of $A$\emph{;}
\item the \emph{upper Minkowski dimension} of $A$\emph{;}
\item the \emph{asymptotic order of growth of the counting function} of the box-counting fractal string $\mathcal{L}_B$\emph{;}
\item the \emph{abscissa of convergence} of the box-counting zeta function $\zeta_B$\emph{;}
\item the \emph{abscissa of convergence} of the distance zeta function $\zeta_d$.
\end{enumerate}
\end{theorem}
A summary of the remaining sections of this paper is as follows:
In Section \ref{sec:ClassicNotionsOfDimension}, we discuss classical notions of dimension such as \emph{similarity dimension} (or \emph{exponent}), \emph{box-counting dimension}, and \emph{Minkowski dimension} as well as their properties. (See \cite{BesTa,Falc,HaPo,Hut,LapLuvF1,LapLuvF2,LapPe3,LapPe2,LapPeWin,LapPo1,LapRaZu,
LapvF6,MeSa,Mor,Tr2,Tri,Zu,ZuZup}.)
In Section \ref{sec:FractalStringsAndZetaFunctions}, we summarize but a few of the interesting results on fractal strings and counting functions regarding, among other things, geometric zeta functions, complex dimensions, the order of a counting function, and connections with Minkowski measurability. (See \cite{BesTa,LapPo1,LapvF6,Lev,Tri}.) The material in Sections \ref{sec:ClassicNotionsOfDimension} and \ref{sec:FractalStringsAndZetaFunctions} motivates the results presented in Sections \ref{sec:BoxCountingFractalStringsAndZetaFunctions} and \ref{sec:DistanceAndTubeZetaFunctions}.
In Section \ref{sec:BoxCountingFractalStringsAndZetaFunctions}, we introduce \emph{box-counting fractal strings} and \emph{box-counting zeta functions} and, in particular, we show that the abscissa of convergence of the box-counting zeta function of a bounded infinite set is the upper box-counting dimension of the set. These topics are the focus of \cite{LapRoZu}.
In Section \ref{sec:DistanceAndTubeZetaFunctions}, we share recent results from \cite{LapRaZu} on \emph{distance, tube} and \emph{relative zeta functions}, including connections between the corresponding complex dimensions and Minkowski content and measurability.
In Section \ref{sec:SummaryOfResultsAndOpenProblems}, Theorem \ref{thm:Summary} is restated in Theorem \ref{thm:MainResult} using notation and terminology discussed throughout the paper. We also propose several open problems for future work in this area.
\section{Classic notions of dimension}
\label{sec:ClassicNotionsOfDimension}
We begin with a brief discussion of a classic method for constructing self-similar fractals and a famous fractal, the Cantor set $C$. (See \cite{Falc,Hut}.)
\begin{definition}
\label{def:IFSAndAttractor}
Let $N$ be an integer such that $N \geq 2$. An \emph{iterated function system} (IFS) $\mathbf{\Phi}=\{\Phi_j\}_{j=1}^N$ is a finite family of contractions on a complete metric space $(X,d_X)$. Thus, for all $x,y \in X$ and each $j=1,\ldots,N$ we have
\begin{align}
d_X(\Phi_j(x),\Phi_j(y)) &\leq r_jd_X(x,y),
\end{align}
where $0<r_j<1$ is the \emph{scaling ratio} (or Lipschitz constant) of $\Phi_j$ for each $j=1,\ldots,N$.
The \emph{attractor} of $\mathbf{\Phi}$ is the nonempty compact set $F \subset X$ defined as the unique fixed point of the contraction mapping
\begin{align}
\label{eqn:IFSAsMap}
\displaystyle
\mathbf{\Phi}(\cdot) &:= \bigcup_{j=1}^N \Phi_j(\cdot)
\end{align}
on the space of compact subsets of $X$ equipped with the Hausdorff metric. That is, $F=\mathbf{\Phi}(F)$. If
\begin{align}
\label{eqn:IFSSelfSimilar}
d_X(\Phi_j(x),\Phi_j(y)) &= r_jd_X(x,y)
\end{align}
for each $j=1,\ldots,N$ (i.e., if the contraction maps $\Phi_j$ are similarities with scaling ratios $r_j$), then the attractor $F$ is the \emph{self-similar set} associated with $\mathbf{\Phi}$.
\end{definition}
\begin{remark}
\label{rmk:AttentionOnEuclideanSpacesAndOSC}
We focus our attention on Euclidean spaces of the form $X=\mathbb{R}^m$, where $m$ is a positive integer and $d_X=d_m$ is the classic $m$-dimensional Euclidean distance. We denote $d_m(x,y)$ by $|x-y|$. Furthermore, we consider only iterated functions systems which satisfy the \emph{open set condition} (see \cite{Falc,Hut}). Recall that an IFS $\mathbf{\Phi}$ satisfies the open set condition if there is a nonempty open set $V \subset \mathbb{R}^m$ for which $\Phi_j(V)\subset V$ for each $j$ and the images $\Phi_j(V)$ are pairwise disjoint.
\end{remark}
\begin{figure}
\includegraphics[scale=.5]{cantorstringharp.eps}
\caption{The classic ``middle-third removal'' construction of the Cantor set $C$ is depicted on the left. The Cantor string $\mathcal{L}_{CS}$ is the nonincreasing sequence comprising the lengths of the removed intervals which are depicted on the right as a fractal harp.}
\label{fig:CantorConstructionAndHarp}
\end{figure}
\begin{example}[The dimension of the Cantor set]
The Cantor set $C$ can be constructed in various ways. For instance, we have the classic ``middle-third removal'' construction of $C$ as depicted in Figure \ref{fig:CantorConstructionAndHarp}.
A more elegant construction shows $C$ to be the unique nonempty attractor of the iterated function system $\mathbf{\Phi}_C$ on $[0,1]$ given by the two contracting similarities $\varphi_1(x)=x/3$ and $\varphi_2(x)=x/3+2/3$. The box-counting dimension of $C$ is $\log_{3}2$, a fact which can be established with any of the myriad of formulas presented in this paper. Notably, $\log_{3}2$ is equivalently found to be: the \emph{order of the geometric counting function} (see Remark \ref{rmk:DimensionEqualsOrder}) of the \emph{box-counting fractal string} $\mathcal{L}_B$ of $C$ (which is related but not equal to the Cantor string $\mathcal{L}_{CS}$, see Definition \ref{def:BoxCountingFractalString}, Equation \eqref{eqn:CantorStringLengths}, and \cite[Ch.1]{LapvF6}); the \emph{abscissa of convergence} of either the \emph{geometric zeta function} of $\mathcal{L}_{CS}$ (Definition \ref{def:GeometricZetaFunction}), the \emph{box-counting zeta function} of $C$ (Definition \ref{def:BoxCountingZetaFunction}), the \emph{distance zeta function} of $C$ (Definition \ref{def:DistanceZetaFunction}), or the \emph{tube zeta function} of $C$ (Equation \eqref{tube_zeta} in Section \ref{sec:TubeZetaFunction}); or else the unique real-valued solution of the corresponding Moran equation (cf. Equation \eqref{eqn:Moran}): $2\cdot3^{-s}=1$.
\end{example}
\subsection{Similarity dimension}
\label{sec:SimilarityDimension}
The first notion of `dimension' we consider is the similarity dimension (or `similarity exponent') of a self-similar set.
\begin{definition}
\label{def:SimilarityDimension}
Let $\mathbf{\Phi}$ be an iterated function system that satisfies the open set condition and is generated by similarities with scaling ratios $\{r_j\}_{j=1}^N$, with $N \geq 2$. Then the \emph{similarity dimension} of the attractor of $\mathbf{\Phi}$ (that is, of the self-similar set associated with $\mathbf{\Phi}$) is the unique real solution $D_\mathbf{\Phi}$ of the equation
\begin{align}
\label{eqn:Moran}
\sum_{j=1}^N r_j^\sigma &= 1, \quad \sigma \in \mathbb{R}.
\end{align}
\end{definition}
\begin{remark}
\label{rmk:MoransTheorem}
Equation \eqref{eqn:Moran} is known as Moran's equation. Moran's Theorem is a well-known result which states that the similarity dimension $D_\mathbf{\Phi}$ is equal to the box-counting (and Hausdorff) dimension of the self-similar attractor of $\mathbf{\Phi}$.\footnote{Moran's original result in \cite{Mor} was established in $\mathbb{R}$ (i.e., for $m=1$) but is valid for $m\geq 1$; cf. \cite{Falc,Hut}.} In fact, $D_\mathbf{\Phi}$ is positive, a fact that can be verified directly from Equation \eqref{eqn:Moran}. For details regarding iterated functions systems, the open set condition, and Moran's Theorem, see \cite[Ch.9]{Falc} as well as \cite{Hut} and \cite{Mor}.
\end{remark}
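For concreteness, the unique real solution of Moran's equation \eqref{eqn:Moran} can be computed by bisection, since $\sigma\mapsto\sum_{j=1}^N r_j^\sigma$ is strictly decreasing, from $N\geq 2$ at $\sigma=0$ toward $0$ as $\sigma\to\infty$. A minimal sketch (the function name is ours):

```python
import math

def similarity_dimension(ratios, tol=1e-12):
    """Solve Moran's equation sum(r_j ** s) = 1 by bisection.
    The map s -> sum(r_j ** s) is strictly decreasing from N >= 2 at s = 0
    toward 0 as s -> infinity, so the root exists and is unique."""
    f = lambda s: sum(r ** s for r in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:                    # expand until the sign changes
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Cantor set: two similarities of ratio 1/3 -> dimension log 2 / log 3.
print(similarity_dimension([1 / 3, 1 / 3]))  # approx 0.6309
```

The same routine applied to three similarities of ratio $1/2$ (the Sierpinski gasket) returns $\log 3/\log 2 \approx 1.585$.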
\subsection{Box-counting dimension}
\label{sec:BoxCountingDimension}
In this section we discuss the central notion of box-counting dimension and some of its properties.
\begin{definition}
\label{def:BoxCountingFunction}
Let $A$ be a subset of $\mathbb{R}^m$. The \emph{box-counting function} of $A$ is the function $N_B(A,\cdot): (0,\infty) \rightarrow \mathbb{N} \cup \{0\}$, where (for a given $x>0$) $N_B(A,x)$ denotes the maximum number of disjoint closed balls $B(a,x^{-1})$ with centers $a \in A$ of radius $x^{-1}$.
\end{definition}
\begin{definition}
\label{def:BoxCountingDimension}
For a set $A \subset \mathbb{R}^m$, the \emph{lower} and \emph{upper box-counting dimensions} of $A$, denoted $\underline{\dim}_{B}A$ and $\overline{\dim}_{B}A$, respectively, are given by
\begin{equation}
\begin{aligned}
\underline{\dim}_{B}A&:= \liminf_{x\to\infty}\frac{\log N_B(A,x)}{\log x}, \label{eqn:UpperBoxDimension}\\
\overline{\dim}_{B}A&:= \limsup_{x\to\infty}\frac{\log N_B(A,x)}{\log x}.
\end{aligned}
\end{equation}
When $\underline{\dim}_{B}A=\overline{\dim}_{B}A$, the following limit exists and is called the \emph{box-counting dimension} of $A$, denoted $\dim_{B}A$:
\begin{align*}
\dim_{B}A &:= \lim_{x\to\infty}\frac{\log N_B(A,x)}{\log x}.
\end{align*}
\end{definition}
\begin{remark}
\label{rmk:EquivalentUpperBoxDimension}
The upper and lower box-counting dimensions are independent of the ambient dimension $m$. In most applications the set $A$ is such that $N_B(A,x) \asymp x^d$ as $x\to\infty$, for some constant $d \in [0,m]$ (the relation $\asymp$ is explained at the end of Notation \ref{not:DistanceVolumeAndBigO} below). It is easy to see that, in this case, $\dim_{B}A=d$. Also, the following equivalent form of the upper box-counting dimension will prove useful in Section \ref{sec:BoxCountingFractalStringsAndZetaFunctions} (see Notation \ref{not:DistanceVolumeAndBigO} below for an explanation of `Big-$O$' notation: $f(x)=O(g(x))$ as $x \rightarrow \infty$):
\begin{align*}
\overline{\dim}_{B}A &=\inf\{\alpha \geq 0 : N_B(A,x)=O(x^{\alpha}) \textnormal{ as } x \rightarrow \infty\}.
\end{align*}
\end{remark}
\begin{remark}
\label{rmk:VariousCountingFunctions}
There are many equivalent definitions of the box-counting dimension (see \cite[Ch.3]{Falc}). For instance, the box-counting function $N_B(A,x)$ given in Definition \ref{def:BoxCountingFunction} may be replaced by:
\begin{enumerate}
\setlength{\itemsep}{0in}
\item the minimum number of sets of diameter at most $x^{-1}$ required to cover $A$;
\item the minimum number of closed balls of radius $x^{-1}$ required to cover $A$;
\item the minimum number of closed cubes with side length $x^{-1}$ required to cover $A$; or
\item the number of $x^{-1}$-mesh cubes that intersect $A$.
\end{enumerate}
\end{remark}
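Variant (iv) lends itself to a direct numerical estimate of $\dim_B$. The sketch below (our own construction, using exact integer base-$3$ addresses to avoid floating-point issues) counts the $3^{-j}$-mesh intervals meeting a level-$12$ covering of the Cantor set and fits the slope of $\log N_B$ against the logarithm of the inverse mesh size:

```python
import math

def cantor_cells(depth):
    """Integer addresses (scaled by 3 ** depth) of the 2 ** depth level-`depth`
    construction intervals of the Cantor set (keep thirds 0 and 2)."""
    cells = [0]
    for _ in range(depth):
        cells = [3 * c + d for c in cells for d in (0, 2)]
    return cells

def mesh_count(cells, depth, j):
    """Number of 3 ** (-j)-mesh intervals meeting the level-`depth` covering."""
    return len({c // 3 ** (depth - j) for c in cells})

depth = 12
cells = cantor_cells(depth)
j0, j1 = 2, 9
# Slope of log N(eps) against log(1/eps) estimates the box-counting dimension.
slope = (math.log(mesh_count(cells, depth, j1)) -
         math.log(mesh_count(cells, depth, j0))) / ((j1 - j0) * math.log(3))
print(slope)  # approx log 2 / log 3, about 0.6309
```

Since $N_B(C,3^{j}) = 2^{j}$ exactly for $j \le \mathrm{depth}$, the fitted slope here recovers $\log_3 2$ to machine precision; for sets without such self-similar structure, one would fit over a range of scales instead.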
\begin{remark}
\label{rmk:varepsilonVSx}
One may also define the box-counting function in terms of $\varepsilon>0$, where $\varepsilon=x^{-1}$ plays the role of the scale under consideration.\footnote{Indeed, note that given $\varepsilon>0$, $N_B(A,\varepsilon^{-1})$ is the maximum number of disjoint balls $B(a,\varepsilon)$ with center $a \in A$ and radius $\varepsilon$ (or, \emph{mutatis mutandis}, $\varepsilon$ denotes any of the counterparts to the notion of radius given in (i)--(iv) of Remark \ref{rmk:VariousCountingFunctions}).} Although this may be a more natural way to describe a box-counting function, the results relating box-counting functions and \emph{geometric counting functions} (see Definition \ref{def:GeometricCountingFunction}) presented in Section \ref{sec:BoxCountingFractalStringsAndZetaFunctions} are stated and analyzed in terms of $x>0$. Moreover, this convention is used throughout \cite{LapvF6} and that text will be vital to the development of further material based on the results presented in this paper.
\end{remark}
\begin{remark}
\label{rmk:PropertiesBoxCountingDimension}
If the box-counting function $N_B(A,x)$ is given as in Definition \ref{def:BoxCountingFunction} or one of the alternatives in Remark \ref{rmk:VariousCountingFunctions}, then the upper and lower box-counting dimensions have the following properties (cf. \cite[Ch.~3]{Falc} and \cite{Fdrr}):
\begin{enumerate}
\setlength{\itemsep}{0in}
\item Let $V$ be a bounded $n$-dimensional submanifold of $\mathbb{R}^m$ which is \emph{rectifiable} in the sense that $V \subset f(\mathbb{R}^n)$, where $f:\mathbb{R}^n\to\mathbb{R}^m$ is a Lipschitz function. Then $\dim_{B}V=n$.
\item Both $\overline{\dim}_B$ and $\underline{\dim}_B$ are monotonic. That is, if $A_1 \subset A_2 \subset \mathbb{R}^m$, then
\begin{align*}
\overline{\dim}_{B}A_1 &\leq \overline{\dim}_{B}A_2, \quad \underline{\dim}_{B}A_1 \leq \underline{\dim}_{B}A_2.
\end{align*}
\item Let $\overline{A}$ denote the closure of $A$ (i.e., the smallest closed subset of $\mathbb{R}^m$ which contains $A$). Then
\begin{align*}
\overline{\dim}_{B}\overline{A} &= \overline{\dim}_{B}A, \quad
\underline{\dim}_{B}\overline{A} = \underline{\dim}_{B}A.
\end{align*}
\item For any two sets $A_1,A_2 \subset \mathbb{R}^m$,
\begin{align*}
\overline{\dim}_{B}(A_1 \cup A_2)&= \max\left\{\overline{\dim}_{B}A_1,\overline{\dim}_{B}A_2\right\}.
\end{align*} That is, $\overline{\dim}_B$ is finitely stable. On the other hand, $\underline{\dim}_B$ is not finitely stable.
\item Neither $\overline{\dim}_B$ nor $\underline{\dim}_B$ is countably stable. That is, neither $\overline{\dim}_B$ nor $\underline{\dim}_B$ satisfies the analogue of property (iv) for a countable collection of subsets of $\mathbb{R}^m$.
\end{enumerate}
\end{remark}
Concerning the loss of finite stability of the lower box dimension mentioned in
(iv) of Remark \ref{rmk:PropertiesBoxCountingDimension}, it is possible to construct two bounded sets $A$ and $B$ in $\mathbb{R}^m$ whose lower box-counting dimensions are both equal to zero, while the lower box-counting dimension of their union is equal to $m$; see \cite[Theorem 1.4]{Zu}.
A simple way to see that the upper box-counting dimension fails to be countably stable (i.e., why property (v) of Remark \ref{rmk:PropertiesBoxCountingDimension} holds for $\overline{\dim}_B$) is to consider the countable set $A=\{1,1/2,1/3,\ldots\}$ and note that $\overline{\dim}_{B}A=1/2$, whereas $\overline{\dim}_{B}\{1/j\}=0$ for each positive integer $j$.
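Indeed, here is a sketch of the computation of $\overline{\dim}_{B}A$ for $A=\{1,1/2,1/3,\ldots\}$. Given $0<\varepsilon<1$, let $k=[\varepsilon^{-1/2}]$. For $1\leq j<k$, consecutive points satisfy
\[
\frac{1}{j}-\frac{1}{j+1}=\frac{1}{j(j+1)}>\varepsilon,
\]
since $j(j+1)<k^2\leq\varepsilon^{-1}$; hence the points $1,1/2,\ldots,1/k$ are pairwise more than $\varepsilon$ apart, and at least $k$ intervals of length $\varepsilon$ are needed to cover $A$. Conversely, the remaining points lie in $[0,\varepsilon^{1/2}]$, which is covered by at most $\varepsilon^{-1/2}+1$ such intervals. Therefore, $N_B(A,\varepsilon^{-1})\asymp\varepsilon^{-1/2}$ as $\varepsilon\to0^+$, and $\dim_{B}A=1/2$.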
The following proposition shows that one need only consider certain discrete sequences of scales which tend to zero in order to determine the box-counting dimension of a set.
\begin{proposition}
\label{prop:GeometricSequenceSufficient}
Let $\lambda>1$ and $A \subset \mathbb{R}^m$. Then
\begin{align*}
\underline{\dim}_{B}A &=\liminf_{k \rightarrow \infty}\frac{\log N_B(A,\lambda^{k})}{\log\lambda^{k}}, \\
\overline{\dim}_{B}A &=\limsup_{k \rightarrow \infty}\frac{\log N_B(A,\lambda^{k})}{\log\lambda^{k}}.
\end{align*}
\end{proposition}
\begin{proof}
If $\lambda^{k}< x \leq\lambda^{k+1}$, then
\begin{align*}
\frac{\log N_B(A,x)}{\log x} &\leq
\frac{\log N_B(A,\lambda^{k+1})}{\log\lambda^{k}} =
\frac{\log N_B(A,\lambda^{k+1})}{\log\lambda^{k+1}-\log\lambda}.
\end{align*}
Therefore,
\begin{align*}
\limsup_{x\to\infty}\frac{\log N_B(A, x)}{\log x} & \leq \limsup_{k \rightarrow \infty}\frac{\log N_B(A,\lambda^{k})}{\log\lambda^{k}}.
\end{align*}
The opposite inequality holds trivially, since $(\lambda^{k})_{k\in\mathbb{N}}$ is a particular sequence tending to infinity, and the case of the lower limits follows \emph{mutatis mutandis}.
\end{proof}
\begin{example}[Box-counting dimension of the Cantor set]
\label{eg:BoxCountingDimensionOfCantorSet}
Let $C$ be the Cantor set and $n \in \mathbb{N}$. Also, let $N_B(C,3^n)$ denote the minimum number of disjoint closed intervals of length $3^{-n}$ required to cover $C$. Then
$N_B(C,3^n)=2^n$, so by Proposition \ref{prop:GeometricSequenceSufficient} we have
\begin{align}
\label{eqn:BoxCountingDimensionOfCantorSet}
\dim_{B}C &= \lim_{n\rightarrow\infty} \frac{\log{2^n}}{\log{3^n}}=\log_{3}2.
\end{align}
\end{example}
In the next section, we discuss the Minkowski dimension, which is well known to be equivalent to the box-counting dimension.
\subsection{Minkowski dimension}
\label{sec:MinkowskiDimension}
Minkowski content and Minkowski dimension require a specific notion of volume and can be stated concisely with the following notation.
\begin{notation}[Distance, volume, and Big-$O$]
\label{not:DistanceVolumeAndBigO}
Let $\varepsilon >0$ and $A \subset \mathbb{R}^m$. Let $d(x,A)$ denote the distance between a point $x\in\mathbb{R}^m$ and the set $A$ given by
\begin{align*}
d(x,A) &:= \inf\{|x-u|_m : u \in A\},
\end{align*}
where $|\cdot|_m$ denotes the $m$-dimensional Euclidean norm. The \emph{$\varepsilon$-neighborhood} of $A$, denoted $A_\varepsilon$, is the set of points in $\mathbb{R}^m$ which are within $\varepsilon$ of $A$. Specifically,
\begin{align*}
A_\varepsilon &= \{x \in \mathbb{R}^m: d(x,A) < \varepsilon\}.
\end{align*}
In the sequel, we fix the set $A$ and are concerned with the $m$-dimensional Lebesgue measure (denoted $\textnormal{vol}_m$) of its $\varepsilon$-neighborhood $A_\varepsilon$ for a given $\varepsilon>0$. Recall, for completeness,\footnote{The book \cite{Cohn} is a good general reference on elementary (as well as more advanced) measure theory.} that the $m$-dimensional Lebesgue measure of a (measurable) set $A \subset \mathbb{R}^m$ is given by
\begin{align*}
\textnormal{vol}_m(A) &:=\inf\left\{\sum_{n=1}^\infty \prod_{j=1}^m(b_{n,j}-a_{n,j}) : A \subset \bigcup_{n=1}^\infty \left(\prod_{j=1}^m[a_{n,j},b_{n,j}]\right) \right\}.
\end{align*}
In the case of an \emph{ordinary fractal string} $\Omega \subset \mathbb{R}$ (see the latter part of Definition \ref{def:FractalString}), we are interested in the 1-dimensional volume (i.e., length) of the \emph{inner} $\varepsilon$-neighborhood of the boundary $\partial\Omega$. Specifically, given an ordinary fractal string $\Omega$ and $\varepsilon>0$, the volume $V_{\textnormal{inner}}(\varepsilon)$ of the inner $\varepsilon$-neighborhood of $\partial\Omega$ is defined by
\begin{align}
\label{eqn:VolumeInnerNeighborhood}
V_{\textnormal{inner}}(\varepsilon) &:=\textnormal{vol}_1\{x \in \Omega : d(x,\partial\Omega) <\varepsilon\}.
\end{align}
For two functions $f$ and $g$, with $g$ nonnegative, we write $f(x)=O(g(x))$ as $x\to\infty$ if there exists a positive real number $c$ such that for all sufficiently large $x$, $|f(x)|\leq cg(x)$. More generally, if there exists $C$ such that $|f(x)|\leq Cg(x)$ for all $x$ sufficiently close to some value $a \in \mathbb{R}\cup \{\pm\infty\}$, then we write $f(x)=O(g(x))$ as $x\to a$. If both $f(x)=O(g(x))$ and $g(x)=O(|f(x)|)$ as $x\to a$, we write $f(x)\asymp g(x)$ as $x\to a$. Moreover, if $\lim_{x\to a} f(x)/g(x)=1$,\footnote{Or, more generally, if $f(x)=g(x)(1+o(1))$ as $x\to a$, where $o(1)$ stands for a function tending to zero as $x\to a$.} then we write $f(x) \sim g(x)$ as $x\to a$. Analogous notation will be used for infinite sequences.
\end{notation}
\begin{definition}[Minkowski content]
\label{def:MinkowskiContent0}
Let $r$ be a given nonnegative real number. The \emph{upper} and \emph{lower $r$-dimensional Minkowski contents} of a bounded set $A\subset \mathbb{R}^m$ are respectively given by
\begin{align*}
\mathscr{M}^{*r}(A) &:= \limsup_{\varepsilon\rightarrow 0^+}\frac{\textnormal{vol}_m(A_\varepsilon)}{\varepsilon^{m-r}},\\
\mathscr{M}_*^{r}(A) &:= \liminf_{\varepsilon\rightarrow 0^+}\frac{\textnormal{vol}_m(A_\varepsilon)}{\varepsilon^{m-r}}.
\end{align*}
\end{definition}
It is easy to see that if $\mathscr{M}^{*r}(A)<\infty$, then $\mathscr{M}^{*s}(A)=0$ for each $s>r$. Furthermore, since $A$ is bounded, clearly $\mathscr{M}^{*r}(A)=0$ for $r>m$. On the other hand, if $\mathscr{M}^{*r}(A)>0$, then $\mathscr{M}^{*s}(A)=\infty$ for each $s<r$. Therefore, there exists a unique point in $[0,m]$ at which the function $r\mapsto \mathscr{M}^{*r}(A)$ jumps from $\infty$ to zero. This unique point is called the \emph{upper Minkowski dimension} of $A$. The \emph{lower Minkowski dimension} of $A$ is defined analogously by using the lower $r$-dimensional Minkowski content.
\begin{definition}[Minkowski dimension]
\label{def:MinkowskiDimension0}
The \emph{upper} and \emph{lower Minkowski dimensions} of a bounded set $A$ are defined respectively by
\begin{equation}
\label{eqn:UpperMinkowskiDimension}
\begin{aligned}
\overline{\dim}_{M}A &:=\inf\{r\ge0:\mathscr{M}^{*r}(A)=0\}=\sup\{r\ge0:\mathscr{M}^{*r}(A)=\infty\},\\
\underline{\dim}_{M}A &:= \inf\{r\ge0:\mathscr{M}_*^{r}(A)=0\}=\sup\{r\ge0:\mathscr{M}_*^{r}(A)=\infty\}.
\end{aligned}
\end{equation}
When $\overline{\dim}_{M}A=\underline{\dim}_{M}A$, the common value is called the \emph{Minkowski dimension} of $A$, denoted by $\dim_{M}A$.
\end{definition}
When we write $\dim_{M}A$, we implicitly assume that the Minkowski dimension of $A$ exists. In most applications we have that $\textnormal{vol}_m(A_\varepsilon)\asymp\varepsilon^{\alpha}$ as $\varepsilon\to0^+$, where $\alpha$ is a number in $[0,m]$. Then $\dim_{M}A$ exists and is equal to $m-\alpha$ (in light of Definitions \ref{def:MinkowskiContent0} and \ref{def:MinkowskiDimension0}). Note that here
\begin{equation*}
\alpha=\lim_{\varepsilon\to0^+}\frac{\log\textnormal{vol}_m(A_\varepsilon)}{\log\varepsilon},
\end{equation*}
and hence,
\begin{equation*}
\dim_{M}A=m-\lim_{\varepsilon\to0^+}\frac{\log\textnormal{vol}_m(A_\varepsilon)}{\log\varepsilon}.
\end{equation*}
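As a sketch of how these formulas apply, consider again the Cantor set $C$ (cf. Example \ref{eg:BoxCountingDimensionOfCantorSet}) and let $3^{-(n+1)}\leq\varepsilon<3^{-n}$. Since $C$ is contained in the $2^n$ closed intervals of length $3^{-n}$ of the $n$th stage of its construction,
\[
\textnormal{vol}_1(C_\varepsilon)\leq 2^n\left(3^{-n}+2\varepsilon\right)\leq 3\left(\tfrac{2}{3}\right)^n,
\]
while every point of such an interval is within $3^{-(n+1)}\leq\varepsilon$ of $C$, so that $\textnormal{vol}_1(C_\varepsilon)\geq 2^n3^{-n}=(2/3)^n$. Since $\varepsilon\asymp3^{-n}$ and $(2/3)^n=\left(3^{-n}\right)^{1-\log_32}$, it follows that $\textnormal{vol}_1(C_\varepsilon)\asymp\varepsilon^{1-\log_32}$ as $\varepsilon\to0^+$, and hence $\dim_{M}C=1-(1-\log_32)=\log_32$, in agreement with Example \ref{eg:BoxCountingDimensionOfCantorSet}.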
It is not difficult to show that the following more general result holds.
\begin{proposition}
\label{prop:MinkowskiDimension}
The upper and lower Minkowski dimensions of a bounded set $A\subset \mathbb{R}^m$ are respectively given by
\begin{align*}
\overline{\dim}_{M}A &= m-\liminf_{\varepsilon \rightarrow 0^+}\frac{\log\textnormal{vol}_m(A_\varepsilon)}{\log\varepsilon},\\
\underline{\dim}_{M}A &= m-\limsup_{\varepsilon \rightarrow 0^+}\frac{\log\textnormal{vol}_m(A_\varepsilon)}{\log\varepsilon}.
\end{align*}
\end{proposition}
\begin{remark}
\label{rmk:EquivalentUpperMinkoskiDimension}
The upper and lower Minkowski dimensions are, of course, independent of the ambient dimension $m$. The upper Minkowski dimension is equivalently given by
\begin{align*}
\overline{\dim}_{M}A &=\inf\{\alpha \geq 0 : \textnormal{vol}_m(A_\varepsilon)=O(\varepsilon^{m-\alpha}) \textnormal{ as } \varepsilon \rightarrow 0^+\}.
\end{align*}
\end{remark}
\begin{remark}
\label{rmk:Tr2}
It is interesting that there exists a bounded set $A$ in $\mathbb{R}^m$ such that the upper and lower box-counting dimensions are different (see, e.g., \cite[p.\ 122]{Tr2}), and even such that $\overline{\dim}_{M}A=m$ and $\underline{\dim}_{M}A=0$ (see \cite[Theorem 1.2]{Zu}).
\end{remark}
\begin{remark}
\label{rmk:HaPo}
The upper Minkowski dimension of $A$ is important in the study of the Lebesgue integrability of the distance function $d(x,A)^{-\gamma}$ in an $\varepsilon$-neighborhood of $A$, where $\varepsilon$ is a fixed positive number:
\begin{equation}
\mbox{If $\gamma<m-\overline\dim_{M}A$, then $\int_{A_{\varepsilon}}d(x,A)^{-\gamma}dx<\infty$.}
\end{equation}
This nice result is due to Harvey and Polking, and is implicitly stated in \cite{HaPo}; see also \cite{Zu} for related results and references. This fact enabled the first and third authors, along with G.~Radunovi\'{c}, to determine the abscissa of convergence of the so-called distance zeta function of $A$; see Definition~\ref{def:DistanceZetaFunction} below along with Theorem \ref{thm:DistanceZetaFunctionUpperBoxDimension} and \cite{LapRaZu} for details.
\end{remark}
\begin{definition}[Minkowski measurability]
\label{def:MinkowskiMeasurability}
Let $A \subset \mathbb{R}^m$ be such that $D_M=\dim_{M}A$ exists. The \emph{upper} and \emph{lower Minkowski content} of $A$ are respectively defined as its $D_M$-dimensional upper
and lower Minkowski contents, that is,
\begin{align*}
\mathscr{M}^* &:=\mathscr{M}^{*D_M}(A)=\limsup_{\varepsilon\to 0^+} \frac{\textnormal{vol}_m(A_\varepsilon)}{\varepsilon^{m-D_M}},\\
\mathscr{M}_* &:=\mathscr{M}_*^{D_M}(A)=\liminf_{\varepsilon\to 0^+} \frac{\textnormal{vol}_m(A_\varepsilon)}{\varepsilon^{m-D_M}}.
\end{align*}
If the upper and lower Minkowski contents agree and lie in $(0,\infty)$, then $A$ is said to be \emph{Minkowski measurable} and the \emph{Minkowski content of $A$} is given by
\begin{align*}
\mathscr{M} &:=\lim_{\varepsilon\to 0^+}\frac{\textnormal{vol}_m(A_\varepsilon)}{\varepsilon^{m-D_M}}.
\end{align*}
\end{definition}
For example, if $A$ is such that $\textnormal{vol}_m(A_\varepsilon) \sim M\varepsilon^{\alpha}$ as $\varepsilon \to 0^+$, then $\dim_{M}A=m-\alpha$ and $A$ is Minkowski measurable with $\mathscr{M}=M$.
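To illustrate, consider the segment $A=[0,1]\times\{0\}$ in $\mathbb{R}^2$. Its $\varepsilon$-neighborhood is a `stadium', namely the rectangle $[0,1]\times(-\varepsilon,\varepsilon)$ together with two half-disks of radius $\varepsilon$, so that
\[
\textnormal{vol}_2(A_\varepsilon)=2\varepsilon+\pi\varepsilon^2\sim2\varepsilon \quad\textnormal{as }\varepsilon\to0^+.
\]
Hence, $\dim_{M}A=2-1=1$ and $A$ is Minkowski measurable with Minkowski content $\mathscr{M}=2$.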
\begin{openproblem}
If $A$ and $B$ are Minkowski measurable in
$\mathbb{R}^m$ and $\mathbb{R}^n$, respectively, is their Cartesian product $A\times B$ Minkowski measurable in $\mathbb{R}^{m+n}$\emph{?} See also Remark \ref{rmk:Resman} below dealing with the so-called normalized Minkowski content, and its independence of the ambient dimension~$m$.
\end{openproblem}
\begin{remark}
Another question to consider is whether or not the union $A\cup B$ of two Minkowski measurable sets is Minkowski measurable. If not, it would be interesting to find an explicit counter-example. (The answer is clearly affirmative if $A$ and $B$ are a positive distance apart.)
\end{remark}
The Minkowski measurable sets on the real line have been characterized in \cite{LapPo1}; see also Theorem~\ref{thm:CriterionForMinkowskiMeasurability}. Some classes of Minkowski measurable sets are known in the plane in the case of smooth spirals, see \cite{ZuZup}, and in the case of discrete spirals, see \cite{MeSa}. It is interesting that in general, bilipschitz $C^1$ mappings do not preserve Minkowski measurability, even for subsets of the real line; see \cite{MeSa}.
We close this section with the following example.
\begin{example}
If $A$ is any bounded set in $\mathbb{R}^m$ such that its closure is of positive $m$-dimensional Lebesgue measure, then $\dim_BA = m$. Indeed, $\textnormal{vol}_m(\overline{A}) > 0$ immediately implies that $\underline{\dim}_B\overline{A} = m$, and therefore by property (iii) of Remark \ref{rmk:PropertiesBoxCountingDimension}, $\underline{\dim}_BA = m$. Since $\overline{\dim}_BA \leq m$, this proves that $\dim_BA$ exists and $\dim_BA = m$. In particular, the claim holds for any Lebesgue nonmeasurable set $A\subset\mathbb{R}^m$. Indeed, the closure of such a set $A$ cannot be of Lebesgue measure zero (i.e., we cannot have $\textnormal{vol}_m(\overline{A}) = 0$) since, in that case, $A$ would also be of Lebesgue measure zero, implying that $A$ is Lebesgue measurable (because Lebesgue measure is complete in this setting).\footnote{For the notions of measure theory used in this example, we refer, e.g., to \cite{Cohn}.}
\end{example}
\section{Fractal strings and zeta functions}
\label{sec:FractalStringsAndZetaFunctions}
In this section, we discuss a few of the many results on fractal strings presented in \cite{LapvF6}.
\subsection{Fractal strings and ordinary fractal strings}
\label{sec:FractalStringsAndOrdinaryFractalStrings}
\begin{definition}
\label{def:FractalString}
A \emph{fractal string} $\mathcal{L}$ is a nonincreasing sequence of positive real numbers which tends to zero. Hence,
\begin{align*}
\mathcal{L} &= (\ell_j)_{j \in \mathbb{N}},
\end{align*}
where $(\ell_j)_{j\in\mathbb{N}}$ is nonincreasing and $\lim_{j \rightarrow \infty}\ell_j=0$. Equivalently, a fractal string $\mathcal{L}$ can be thought of as the collection
\[
\{ l_n : l_n \textnormal{ has multiplicity } m_n, n \in \mathbb{N} \},
\]
where $(l_n)_{n \in \mathbb{N}}$ is a strictly decreasing sequence of positive real numbers and, for each $n\in\mathbb{N}$, $m_n$ is the number of lengths (or, more generally, `scales') $\ell_j$ such that $\ell_j=l_n$.
An \emph{ordinary fractal string} $\Omega$ is a bounded open subset of the real line. In that case, we can write $\Omega$ as the disjoint union of its connected components $I_j$ (i.e., $\Omega=\cup_{j=1}^\infty I_j$), and for each $j\geq 1$, $\ell_j$ denotes the length of the interval $I_j$ (i.e., $\ell_j=|I_j|_1$).\footnote{Note that without loss of generality, we may assume that $\ell_1\geq\ell_2\geq\ldots$, with $\ell_j\to 0$ as $j\to \infty$. We ignore the trivial case where $\Omega$ is a finite union of open intervals; see Remark \ref{rmk:FiniteFractalStrings}.}
\end{definition}
\begin{remark}
\label{rmk:FiniteFractalStrings}
In \cite{LapvF6}, for instance, finite fractal strings (i.e., nonincreasing sequences of real numbers with a finite number of positive terms) are allowed. However, for reasons described in Remark \ref{rmk:AbscissaForFiniteSequences}, the finite case is not considered in this paper.
\end{remark}
If, as in Definition \ref{def:FractalString} above, an ordinary fractal string $\Omega$ is written as the union of a countably infinite collection of disjoint open intervals $I_j$ (necessarily its connected components), then the lengths $\ell_j$ of the intervals $I_j$ comprise a fractal string $\mathcal{L}$. Moreover, $\dim_M(\partial\Omega)$ is given by
\begin{align}
\label{eqn:OrdinaryFractalStringMinkowski} \dim_M(\partial\Omega) &= \inf\{\alpha\geq 0 : V_{\textnormal{inner}}(\varepsilon)= O(\varepsilon^{1-\alpha}) \textnormal{ as } \varepsilon \to 0^+\},
\end{align}
where $V_{\textnormal{inner}}(\varepsilon)$ is the $1$-dimensional Lebesgue measure of the inner $\varepsilon$-neighborhood of $\Omega$ (see formula \eqref{eqn:VolumeInnerNeighborhood} in Notation \ref{not:DistanceVolumeAndBigO}). In fact, Equation \eqref{eqn:OrdinaryFractalStringMinkowski} is used to define the \emph{Minkowski dimension} of (the boundary of) an ordinary fractal string in \cite{LapvF6}.\footnote{More specifically, $\dim_M(\partial\Omega)$ should really be denoted by $\dim_{M,\textnormal{inner}}(\partial\Omega)$ and called the \emph{inner Minkowski dimension} of $\partial\Omega$ (or of $\mathcal{L}$).} Moreover, it is shown in \cite{LapPo1} that $V_{\textnormal{inner}}(\varepsilon)$, and hence also $\dim_M(\partial\Omega)$, depends only on the fractal string $\mathcal{L}$ (but not on the particular rearrangement of the intervals $I_j$ composing $\Omega$).
\begin{definition}
\label{def:AbscissaOfConvergence}
Let $\mathcal{L}$ be a fractal string. The \emph{abscissa of convergence} of the Dirichlet series $\sum_{j=1}^\infty\ell_j^s$ is defined by
\begin{align}
\label{eqn:AbscissaOfConvergence}
\sigma &=\inf \left\{ \alpha \in \mathbb{R} : \sum_{j=1}^\infty\ell_j^\alpha<\infty \right\}.
\end{align}
Thus, $\{ s \in \mathbb{C} : \textnormal{Re}(s)>\sigma \}$ is the largest open half-plane on which this series converges; see, e.g., \cite[\S VI.2]{Ser}.
\end{definition}
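For example, consider the fractal string $\mathcal{L}=(j^{-2})_{j\in\mathbb{N}}$. Then
\[
\sum_{j=1}^\infty\ell_j^\alpha=\sum_{j=1}^\infty j^{-2\alpha}
\]
converges if and only if $2\alpha>1$, so that $\sigma=1/2$ and the corresponding Dirichlet series converges precisely on the open half-plane $\{s\in\mathbb{C}:\textnormal{Re}(s)>1/2\}$.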
\begin{remark}
\label{rmk:AbscissaForFiniteSequences}
If $\mathcal{L}$ were allowed to be a finite sequence of positive real numbers (as in \cite{LapvF6}), then we would have $\sigma=-\infty$ since the corresponding Dirichlet series would be an entire function. In the context of this paper, we always have that $\sigma\geq 0$ (since $\sum_{j=1}^\infty \ell_j^\alpha$ is clearly divergent when $\alpha=0$). This explains why we consider only (bounded) infinite sets in the development of \emph{box-counting fractal strings} in Section \ref{sec:BoxCountingFractalStringsAndZetaFunctions}. Indeed, for clarity of exposition, we only consider fractal strings consisting of infinitely many positive lengths (or scales), and hence, ordinary fractal strings comprising infinitely many disjoint intervals.
\end{remark}
\begin{remark}
\label{rmk:ConvergenceAndDivergence}
A key distinction between a fractal string $\mathcal{L}$ and an ordinary fractal string $\Omega$ lies in the sum of the corresponding lengths (or scales), denoted $(\ell_j)_{j\in\mathbb{N}}$ in either case. Specifically, since an ordinary fractal string $\Omega$ is bounded, $\sum_{j=1}^\infty \ell_j$ is necessarily convergent. On the other hand, for a fractal string $\mathcal{L}$, $\sum_{j=1}^\infty \ell_j$ may be divergent. See Example \ref{eg:BoxCountingFractalStringOf1DFractal} for a bounded set in $\mathbb{R}^2$ whose \emph{box-counting fractal string} is a fractal string whose lengths have an unbounded sum and yet contains pertinent information regarding the bounded set. (In a somewhat different setting, many other classes of examples are provided in \cite[esp. \S 13.1 \& \S 13.3]{LapvF6} and in \cite{LapPe2,LapPeWin}.)
\end{remark}
\begin{definition}
\label{def:GeometricZetaFunction}
Let $\mathcal{L}$ be a fractal string. The \emph{geometric zeta function} of $\mathcal{L}$ is defined by
\begin{align}
\label{eqn:GeometricZetaFunction}
\zeta_\mathcal{L}(s) &= \sum_{j=1}^\infty\ell_j^s,
\end{align}
where $s \in \mathbb{C}$ and $\textnormal{Re}(s)>D_\mathcal{L}:=\sigma$. The \emph{dimension} of $\mathcal{L}$, denoted $D_\mathcal{L}$, is defined as the abscissa of convergence $\sigma$ of the Dirichlet series which defines $\zeta_\mathcal{L}$.
\end{definition}
In order to define the complex dimensions of a fractal string, as in \cite{LapvF6}, we assume there exists a meromorphic extension of the geometric zeta function $\zeta_\mathcal{L}$ to a suitable region. First, consider the \emph{screen} $S$ as the contour
\begin{align}
\label{eqn:Screen}
S:S(t)+it \quad &(t \in \mathbb{R}),
\end{align}
where $S(t)$ is a continuous function $S:\mathbb{R} \to [-\infty,D_\mathcal{L}]$. Next, consider the \emph{window} $W$ as the set
\begin{align}
\label{eqn:Window}
W &= \{s \in \mathbb{C} : \textnormal{Re}(s) \geq S(\textnormal{Im}(s))\}.
\end{align}
By a mild abuse of notation, we denote by $\zeta_\mathcal{L}$ both the geometric zeta function of $\mathcal{L}$ and its meromorphic extension to some region.
\begin{definition}
\label{def:ComplexDimensions}
Let $W \subset \mathbb{C}$ be a window on an open connected neighborhood of which $\zeta_\mathcal{L}$ has a meromorphic extension. The set of (\emph{visible}) \emph{complex dimensions} of $\mathcal{L}$ is the set $\calD_\mathcal{L}=\calD_\mathcal{L}(W)$ given by
\begin{align}
\calD_\mathcal{L} &= \left\{ \omega \in W : \zeta_\mathcal{L} \textnormal{ has a pole at } \omega \right\}.
\end{align}
\end{definition}
\noindent In the case where $\zeta_\mathcal{L}$ has a meromorphic extension to $W=\mathbb{C}$, the set $\calD_\mathcal{L}$ is referred to as the \emph{complex dimensions} of $\mathcal{L}$. Such is the case for the Cantor string $\Omega_{CS}$.
\begin{example}[Complex dimensions of the Cantor string]
\label{eg:ComplexDimensionsOfCantorString}
The Cantor string $\Omega_{CS}$ is the ordinary fractal string given by $\Omega_{CS}=[0,1] \setminus C$, where $C$ is the Cantor set (see Example \ref{eg:BoxCountingDimensionOfCantorSet}). The lengths of the Cantor string are given by the fractal string
\begin{align}
\label{eqn:CantorStringLengths}
\mathcal{L}_{CS} &=\{3^{-n}: 3^{-n} \textnormal{ has multiplicity } 2^{n-1}, n \in \mathbb{N}\}.
\end{align}
The geometric zeta function of the Cantor string, denoted $\zeta_{CS}$, is given by
\begin{align}
\label{eqn:CantorStringGeometricZetaFunction}
\zeta_{CS}(s) &:= \zeta_{\mathcal{L}_{CS}}(s)= \sum_{n=1}^\infty 2^{n-1}3^{-ns} = \frac{3^{-s}}{1-2\cdot3^{-s}}.
\end{align}
The closed form on the right-hand side of Equation \eqref{eqn:CantorStringGeometricZetaFunction} allows for the meromorphic continuation of $\zeta_{CS}$ to all of $\mathbb{C}$. Hence, $\zeta_{CS}(s)=3^{-s}(1-2\cdot 3^{-s})^{-1}$ for all $s \in \mathbb{C}$. It follows that the complex dimensions of the Cantor string, denoted $\calD_{CS}$, are the complex roots of the Moran equation $2\cdot3^{-s}=1$. Thus, with the window $W$ chosen to be all of $\mathbb{C}$ (so that $\calD_{CS}:=\calD_{\mathcal{L}_{CS}}=\calD_{\mathcal{L}_{CS}}(\mathbb{C})$, in the notation of Definition \ref{def:ComplexDimensions}), we have
\begin{align}
\label{eqn:CantorStringComplexDimensions}
\calD_{CS} &=\left\{ \log_3{2} + i\frac{2\pi}{\log 3}z : z \in \mathbb{Z} \right\}.
\end{align}
Note that by Equation \eqref{eqn:BoxCountingDimensionOfCantorSet} the dimension $D_{CS}:=D_{\mathcal{L}_{CS}}=\log_{3}2$ of the Cantor string coincides with $\dim_{B}C=\dim_{M}C$, and this value is the unique real-valued complex dimension of the Cantor string.
\end{example}
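The value of $D_{CS}$ can also be read off directly from the lengths \eqref{eqn:CantorStringLengths}: since
\[
\sum_{j=1}^\infty\ell_j^\alpha=\sum_{n=1}^\infty2^{n-1}3^{-n\alpha}=\frac12\sum_{n=1}^\infty\left(2\cdot3^{-\alpha}\right)^n
\]
converges if and only if $2\cdot3^{-\alpha}<1$, i.e., if and only if $\alpha>\log_32$, the abscissa of convergence of $\zeta_{CS}$ is indeed $\sigma=D_{CS}=\log_32$.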
\subsection{Geometric counting function of a fractal string}
\label{sec:GeometricCountingFunctionOfAFractalString}
The results in this section connect the counting function of the lengths of a fractal string to its dimension and geometric zeta function.
\begin{definition}
\label{def:GeometricCountingFunction}
The \emph{geometric counting function} of $\mathcal{L}$, or the \emph{counting function of the reciprocal lengths} of $\mathcal{L}$, is given by
\begin{align*}
\displaystyle N_\mathcal{L}(x) &:= \#\{j \in \mathbb{N} : \ell_j^{-1} \leq x\} = \sum_{n \in \mathbb{N}, \, l_n^{-1} \leq\, x}m_n
\end{align*}
for $x>0$.
\end{definition}
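For instance, for the Cantor string of Example \ref{eg:ComplexDimensionsOfCantorString}, the reciprocal lengths are the powers $3^n$ (with multiplicity $2^{n-1}$), and $3^n\leq x$ if and only if $n\leq[\log_3x]$. Hence, for $x\geq3$,
\[
N_{\mathcal{L}_{CS}}(x)=\sum_{n=1}^{[\log_3x]}2^{n-1}=2^{[\log_3x]}-1,
\]
so that $N_{\mathcal{L}_{CS}}(x)\asymp x^{\log_32}$ as $x\to\infty$.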
The following easy proposition is identical to Proposition 1.1 of \cite{LapvF6}.
\begin{proposition}
\label{prop:CountingFunctionD}
Let $\alpha > 0$ and let $\mathcal{L}$ be a fractal string. Then $N_\mathcal{L}(x)=O(x^\alpha)$ as $x \rightarrow \infty$ if and only if
$\ell_j=O(j^{-1/\alpha})$ as $j \rightarrow \infty$.
\end{proposition}
\begin{proof}
Suppose that for some $C>0$ we have
\[
N_\mathcal{L}(x) \leq Cx^\alpha.
\]
Taking $x=\ell_j^{-1}$ and noting that $N_\mathcal{L}(\ell_j^{-1})\geq j$ (since the sequence $(\ell_j)_{j\in\mathbb{N}}$ is nonincreasing), we obtain $j \leq C\ell_j^{-\alpha}$, which implies that
\[
\ell_j = O(j^{-1/\alpha}).
\]
Conversely, if $\ell_j \leq cj^{-1/\alpha}$
for $j \in \mathbb{N}$ and some $c>0$, then given $x>0$, we have
\[
\ell_j^{-1} > x \textnormal{ for } j > (cx)^\alpha.
\]
Therefore,
\[
N_\mathcal{L}(x) \leq (cx)^\alpha.
\]
\end{proof}
\begin{remark}
\label{rmk:MoreAsymptoticBehavior}
Many additional (and harder) results connecting the asymptotic behavior of the geometric counting function, the spectral counting function, and the (upper and lower) Minkowski content(s) of a fractal string $\mathcal{L}$ are provided in \cite{LapPo1}. The simplest one states that $N_\mathcal{L}(x)=O(x^\alpha)$ as $x\to\infty$ (i.e., $\ell_j=O(j^{-1/\alpha})$ as $j\to\infty$) if and only if $\mathscr{M}^{*\alpha}(\partial\Omega)<\infty$, where (consistent with our earlier comment) $\mathscr{M}^{*\alpha}(\partial\Omega)$ is given as in Definition \ref{def:MinkowskiContent0} except with $\textnormal{vol}_m(\cdot)$ replaced with $V_{\textnormal{inner}}(\cdot)$.
\end{remark}
\begin{notation}
\label{not:D}
The infimum of the nonnegative values of $\alpha$ which satisfy Proposition \ref{prop:CountingFunctionD} plays a key role in our results. Hence, we let $D_N$ denote that special value. That is,
\begin{align}
\label{eqn:CountingFunctionD}
D_N:= \inf\{\alpha \geq 0 : N_\mathcal{L}(x)=O(x^\alpha) \textnormal{ as } x\to\infty\}.
\end{align}
\end{notation}
The following lemma is a restatement of a portion of Lemma 13.110 of \cite{LapvF6}.\footnote{In fact, a stronger result holds in the setting of generalized fractal strings (viewed as measures) in \cite{LapvF6}, but it is beyond the scope of this paper. (See also \cite{LapLuvF1}.)}
\begin{lemma}
\label{lem:GeometricZetaFunctionIntegralTransformCountingFunction}
Let $\mathcal{L}$ be a fractal string. Then
\begin{align}
\label{eqn:GeometricZetaFunctionIntegralTransformCountingFunction}
\zeta_\mathcal{L}(s) &= s\int_0^\infty N_\mathcal{L}(x)x^{-s-1}dx
\end{align}
and, moreover, the integral converges \emph{(}and hence, Equation \eqref{eqn:GeometricZetaFunctionIntegralTransformCountingFunction} holds\emph{)} if and only if $\sum_{j=1}^\infty \ell_j^s$ converges, i.e., if and only if $\textnormal{Re}(s)>D_\mathcal{L}=\sigma$.
\end{lemma}
\begin{proof}
Let $s \geq 0$ be a real number. (The case where $s \in \mathbb{C}$ follows immediately from this case by analytic continuation for $\textnormal{Re}(s)>D_\mathcal{L}=\sigma$ since $\zeta_\mathcal{L}$ is holomorphic in that half-plane.) For any given $n \in \mathbb{N}$, we have
\begin{align*}
s\int_0^{\ell_n^{-1}} N_\mathcal{L}(x)x^{-s-1}dx &= \sum_{j=1}^{n-1}s\int_{\ell_j^{-1}}^{\ell_{j+1}^{-1}} N_\mathcal{L}(x)x^{-s-1}dx
= \sum_{j=1}^{n-1}j(\ell_j^s-\ell_{j+1}^s)
\end{align*}
since $N_\mathcal{L}(x)=0$ for $x<\ell_1^{-1}$ and $N_\mathcal{L}(x)=j$ for $\ell_j^{-1} \leq x < \ell_{j+1}^{-1}$. Furthermore,
\begin{align*}
s\int_0^{\ell_n^{-1}} N_\mathcal{L}(x)x^{-s-1}dx &=
\sum_{j=1}^{n-1}j\ell_j^s-\sum_{j=1}^{n}(j-1)\ell_j^s
= \sum_{j=1}^{n}\ell_j^s-n\ell_n^s.
\end{align*}
Now, for $s\geq 0$, we have $n\ell_n^s \leq 2\sum_{j=[n/2]}^n\ell_j^s$. Thus, Equation \eqref{eqn:GeometricZetaFunctionIntegralTransformCountingFunction} holds if and only if $\sum_{j=1}^\infty \ell_j^s$ converges.\footnote{That is, if and only if $s>D_\mathcal{L}=\sigma$ (or, more generally, if $s\in\mathbb{C}$, if and only if $\textnormal{Re}(s)>D_\mathcal{L}$).} Moreover,
\begin{align*}
\lim_{n \rightarrow \infty} s\int_0^{\ell_n^{-1}} N_\mathcal{L}(x)x^{-s-1}dx &= \zeta_\mathcal{L}(s)
\end{align*}
since the tail $\sum_{j=[n/2]}^\infty \ell_j^s$ converges to zero. (Here, $[y]$ denotes the integer part of the real number $y$.)
\end{proof}
The following proposition will be used to prove a portion of our main result, Theorem \ref{thm:MainResult} (cf. Theorem 13.111 and Corollary 13.112 of \cite{LapvF6}, as well as \cite{LapLuvF1,LapLuvF2}, where this proposition is established in the context of $p$-adic fractal strings and also of ordinary (real) fractal strings).
\begin{proposition}
\label{prop:CountingFunctionDFractalStringD}
Let $\mathcal{L}$ be a fractal string. Then
\begin{align*}
D_\mathcal{L} &=D_N,
\end{align*}
where $D_\mathcal{L}=\sigma$ is the dimension of $\mathcal{L}$ given by Equation \eqref{eqn:AbscissaOfConvergence} \emph{(}and Definition \ref{def:GeometricZetaFunction}\emph{)} and $D_N$ is given by Equation \eqref{eqn:CountingFunctionD}.
\end{proposition}
\begin{proof}
The proof given here follows that of \cite{LapvF6}, \emph{loc.~cit.} (See also \cite{LapLuvF1}.) First, suppose $\textnormal{Re}(s)>D_N$. Writing $t=\textnormal{Re}(s)$, we choose any fixed $\alpha\in(D_N,t)$, so that $N_\mathcal{L}(x)\leq Cx^{\alpha}$ for all $x\geq x_1:=\ell_1^{-1}$ and some constant $C>0$.
Since $N_\mathcal{L}(x)=0$ for $x<x_1$, Lemma \ref{lem:GeometricZetaFunctionIntegralTransformCountingFunction} yields
\begin{align*}
|\zeta_\mathcal{L}(s)|
&\leq |s|\int_{x_1}^\infty Cx^{\alpha}x^{-t-1}dx = \left[\frac{|s|Cx^{\alpha-t}}{\alpha-t}\right]_{x_1}^\infty
= 0 - \frac{|s|Cx_1^{\alpha-t}}{\alpha-t},
\end{align*}
since $\alpha-t<0$. Hence, $|\zeta_\mathcal{L}(s)|<\infty$; that is, $D_{\mathcal{L}}\leq t$ for any $t>D_N$. Letting $t\searrow D_N$, we obtain that $D_{\mathcal{L}}\le D_N$.
For the converse, suppose $\alpha < D_N$. Then $N_\mathcal{L}(x)$ is not $O(x^\alpha)$ as $x \rightarrow \infty$. So, there exists a strictly increasing sequence $(x_j)_{j\in\mathbb{N}}$ with $x_1 \geq \ell_1^{-1}$ which tends to infinity such that
\[
N_\mathcal{L}(x_j) \geq jx_j^\alpha
\]
for each $j$. Since $N_\mathcal{L}$ is nondecreasing, it follows that $N_\mathcal{L}(x)\geq jx_j^\alpha$ for all $x\geq x_j$. Hence, for every $t\in(0,\alpha]$ and every $j\in\mathbb{N}$,
\[
t\int_{\ell_1^{-1}}^\infty N_\mathcal{L}(x)x^{-t-1}dx \geq jx_j^\alpha\, t\int_{x_j}^{\infty}x^{-t-1}dx = jx_j^{\alpha-t},
\]
and since $x_j\to\infty$ and $\alpha-t\geq0$, the right-hand side tends to infinity with $j$. Therefore, the integral diverges and, by Lemma \ref{lem:GeometricZetaFunctionIntegralTransformCountingFunction}, so does $\sum_{j=1}^\infty\ell_j^t$. Hence, $D_\mathcal{L} \geq \alpha$ for all $\alpha<D_N$, and so $D_\mathcal{L} \geq D_N$.
\end{proof}
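As a quick illustration of Proposition \ref{prop:CountingFunctionDFractalStringD}, note that for the Cantor string one has $N_{\mathcal{L}_{CS}}(x)=2^{[\log_3x]}-1\asymp x^{\log_32}$ as $x\to\infty$. Consequently,
\[
D_N=\inf\{\alpha\geq0:N_{\mathcal{L}_{CS}}(x)=O(x^{\alpha})\textnormal{ as }x\to\infty\}=\log_32=D_{CS},
\]
in agreement with the proposition and with Example \ref{eg:ComplexDimensionsOfCantorString}.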
For a given fractal string $\mathcal{L}$, Theorem \ref{thm:CountingFunctionAndComplexDimensions} (cf. Theorem 5.10 and Theorem 5.18 in \cite{LapvF6}) shows that under mild conditions the complex dimensions $\calD_\mathcal{L}$ contain enough information to determine the geometric counting function $N_\mathcal{L}$ (at least, asymptotically, and in important geometric situations, exactly\footnote{This is the case, for instance, for self-similar strings.}).
\begin{theorem}
\label{thm:CountingFunctionAndComplexDimensions}
Let $\mathcal{L}$ be a fractal string such that $\calD_\mathcal{L}$ consists entirely of simple poles with respect to a window $W$. Then, under certain mild growth conditions on $\zeta_\mathcal{L}$,\footnote{Namely, if $\zeta_\mathcal{L}$ is \emph{languid} (see \cite[Def.~5.2]{LapvF6}) of a suitable order.} we have
\begin{align}
\label{eqn:CountingFunctionAndComplexDimensions}
N_\mathcal{L}(x) = \sum_{\omega \in \calD_\mathcal{L}}
\frac{x^{\omega}}{\omega}\textnormal{res}(\zeta_\mathcal{L}(s);\omega) + \{\zeta_\mathcal{L}(0)\} + R(x),
\end{align}
where $R(x)$ is an error term of small order and the term in braces is included only if $0 \in W\backslash \calD_\mathcal{L}$.
\end{theorem}
\begin{remark}
\label{rmk:SimpleAndStronglyLanguid}
If the poles are not simple, the explicit formula for $N_{\mathcal{L}}$ is slightly more complicated (see \cite[Chs.~5,6]{LapvF6}). If an ordinary fractal string $\Omega$ is {\it strongly languid} (see \cite[Def.~5.3]{LapvF6}), then by Theorem 5.14 and Theorem 5.22 of \cite{LapvF6}, Equation \eqref{eqn:CountingFunctionAndComplexDimensions} holds with no error term (i.e., $W=\mathbb{C}$ and $R(x) \equiv 0$); hence, formula \eqref{eqn:CountingFunctionAndComplexDimensions} is exact in that case.
\end{remark}
\begin{remark}
\label{rmk:FractalTubeFormulas}
Similar (but harder to derive) explicit formulas called \emph{fractal tube formulas} are obtained in \cite[Ch.~8]{LapvF6}, which, as described therein, allow for the expression of $V_\textnormal{inner}(\varepsilon)$ in terms of the underlying (visible) complex dimensions of $\mathcal{L}$. (Still in \cite{LapvF6}, they are used, in particular, to derive the equivalence of (i) and (iii) in Theorem \ref{thm:CriterionForMinkowskiMeasurability} below.) We will return to this topic in Section \ref{sec:SummaryOfResultsAndOpenProblems} when discussing Open Problems \ref{op:BoxCountingAndCountingFormulas} and \ref{op:VolumeTubularNeighborhood}.
\end{remark}
Analogous results regarding connections between the structure of the complex dimensions $\calD_\mathcal{L}$ of an ordinary fractal string $\Omega$ with lengths $\mathcal{L}$ and the (inner) Minkowski measurability of $\partial\Omega$ are presented in the next section.
\subsection{Classic results}
\label{sec:ClassicResults}
The following theorem is precisely Theorem 1.10 of \cite{LapvF6}. It is actually a consequence of a classic theorem of Besicovitch and Taylor (see \cite{BesTa}) stated in terms of ordinary fractal strings, and was first observed in this context in \cite{Lap2}.\footnote{There is, however, one significant difference with the setting of \cite{BesTa}. Namely, here, as in \cite{Lap2} and \cite{LapvF6}, we are assuming that we are working with the \emph{inner} (rather than ordinary) Minkowski dimension and Minkowski content of $\partial\Omega$; see the statement and the proof of Theorem 1.10 in \cite{LapvF6}, along with Equation \eqref{eqn:VolumeInnerNeighborhood} above. By contrast, in the context of \cite{BesTa}, one should assume that $\Omega$ is of full measure in its closed convex hull (i.e., in the smallest compact interval containing it).}
\begin{theorem}
\label{thm:BesicovitchAndTaylor}
Suppose $\Omega$ is an ordinary fractal string with infinitely many lengths denoted by $\mathcal{L}$. Then the abscissa of convergence of $\zeta_\mathcal{L}$ coincides with the Minkowski dimension of $\partial\Omega$. That is, $D_\mathcal{L}=\dim_M(\partial\Omega)$.
\end{theorem}
The following result is Theorem 8.15 of \cite{LapvF6}. For complete details regarding connections between complex dimensions and Minkowski measurability, see \cite[Ch.~8]{LapvF6}.
\begin{theorem}[Criterion for Minkowski measurability]
\label{thm:CriterionForMinkowskiMeasurability}
Let $\Omega$ be an ordinary fractal string whose geometric zeta function $\zeta_\mathcal{L}$ has a meromorphic extension which satisfies certain mild growth conditions.\footnote{Specifically, $\zeta_\mathcal{L}$ is languid for a screen $S$ passing strictly between the vertical line $\textnormal{Re}(s)=D_\mathcal{L}$ and all the complex dimensions of the corresponding fractal string $\mathcal{L}$ with real part strictly less than $D_\mathcal{L}$, and not passing through 0.} Then the following are equivalent\emph{:}
\begin{enumerate}
\item $D_\mathcal{L}$ is the only complex dimension with real part $D_\mathcal{L}$, and it is simple.
\item $N_\mathcal{L}(x)=cx^{D_\mathcal{L}}+o(x^{D_\mathcal{L}})$ as $x \to\infty$, for some positive constant $c$.\footnote{In the spirit of Proposition \ref{prop:CountingFunctionD}, condition (ii) is easily seen to be equivalent to
\[
\ell_j=Lj^{-1/D_\mathcal{L}}+o(j^{-1/D_\mathcal{L}}) \quad \textnormal{as} \quad j\to\infty,
\]
for some positive constant $L$. In that case, we have $c=L^{D_\mathcal{L}}$.}
\item $\partial\Omega$ is Minkowski measurable.
\end{enumerate}
Moreover, if any of these conditions is satisfied, then the Minkowski content $\mathscr{M}$
of $\partial\Omega$ is given by
\begin{align*}
\mathscr{M} &= \frac{c2^{1-D_\mathcal{L}}}{1-D_\mathcal{L}} = 2^{1-D_\mathcal{L}}\frac{\textnormal{res}(\zeta_\mathcal{L}(s);D_\mathcal{L})}{D_\mathcal{L}(1-D_\mathcal{L})}.
\end{align*}
\end{theorem}
\begin{remark}
\label{rmk:TheoryOfComplexDimensions}
We note that the equivalence of (ii) and (iii) in Theorem \ref{thm:CriterionForMinkowskiMeasurability} was first established in \cite{LapPo1} for any ordinary fractal string, without any hypothesis on the growth of the associated geometric zeta function. As was alluded to in Remark \ref{rmk:FractalTubeFormulas}, however, the equivalence of (i) and (iii) in Theorem \ref{thm:CriterionForMinkowskiMeasurability} was proved in \cite{LapvF6} (and in earlier works of the authors of \cite{LapvF6}) by using a suitable generalization of Riemann's explicit formula that is central to the theory of complex dimensions and is obtained in \cite[Chs.~5 \& 8]{LapvF6}.
\end{remark}
\begin{example}[The Cantor set is not Minkowski measurable]
\label{eg:CantorSetNotMinkowskiMeasurable}
By Equation \eqref{eqn:CantorStringComplexDimensions} in Example \ref{eg:ComplexDimensionsOfCantorString}, there is an infinite collection of complex dimensions $\omega \in \calD_{CS}$ of the Cantor string with real part $D_{CS}=\log_{3}2$. Hence, by Theorem \ref{thm:CriterionForMinkowskiMeasurability}, the Cantor set $C$ is not Minkowski measurable. This fact was first established in \cite{LapPo1} by using the equivalence of (ii) and (iii) and showing that (ii) does not hold. Actually, still in \cite{LapPo1}, for $\alpha=\dim_{B}C=D_{CS}$, both $\mathscr{M}^{\alpha*}=\mathscr{M}^*$ and $\mathscr{M}^{\alpha}_*=\mathscr{M}_*$ are explicitly computed and shown to be different (with $0<\mathscr{M}_*<\mathscr{M}^*<\infty$). This result was significantly refined and extended in \cite[Ch.~10]{LapvF6} in the broader context of generalized Cantor strings.
\end{example}
\begin{remark}
\label{rmk:LatticeNonLattice}
Example \ref{eg:CantorSetNotMinkowskiMeasurable} is indicative of another result from \cite{LapvF6} pertaining to a dichotomy in the properties of self-similar attractors of certain iterated function systems on compact intervals. Specifically, if an iterated function system on a compact interval $I$ satisfies the open set condition with at least one gap and there exist some $0<r<1$ and positive integers $k_j$ such that $\gcd(k_1,\ldots,k_N)=1$ and the scaling ratios satisfy $r_j=r^{k_j}$ for each $j = 1,\ldots, N$, then the complement $I \setminus A$ of the resulting attractor $A$ is an ordinary fractal string known as a \emph{lattice self-similar string}. For example, the Cantor string $\Omega_{CS}=[0,1] \setminus C$ is a lattice self-similar string. If no such $r$ exists, then $I \setminus A$ is a \emph{nonlattice self-similar string}. The complex dimensions of a self-similar string are given by (a subset of) the complex roots of the corresponding Moran equation \eqref{eqn:Moran}. In the lattice case, there are countably many complex dimensions with real part $D_\mathcal{L}=\dim_{B}A=\dim_{M}A$, so by Theorem \ref{thm:CriterionForMinkowskiMeasurability}, $A$ is not Minkowski measurable. In the nonlattice case, Theorem \ref{thm:CriterionForMinkowskiMeasurability} does not necessarily apply (because its hypotheses need not be satisfied; see \cite[Example~5.32]{LapvF6}). However, the only complex dimension with real part $D_\mathcal{L}$ is $D_\mathcal{L}=\dim_{B}A=\dim_{M}A$ itself, and by Theorem 8.36 of \cite{LapvF6}, $A$ is Minkowski measurable. Therefore, the boundary of a self-similar string is Minkowski measurable if and only if it is nonlattice. See \cite[\S 8.4]{LapvF6} for details.
\end{remark}
We conclude the section on classic results with the following remark which, in light of the expression for $V_{\textnormal{inner}}(\varepsilon)$ obtained in \cite{LapPo1} (see also \cite[Eq.~(8.1)]{LapvF6}), can be deduced from Lemma 1 of \cite[\S 1.4]{Lev}.\footnote{For convenience, Remark \ref{rmk:DimensionEqualsOrder}
is stated in the language of fractal strings. A direct (and independent) proof of the fact that Equation \eqref{eqn:DimensionEqualsOrder} holds
can be found in \cite{LapvF6}. (See also \cite{LapLuvF1}.) Also, Equation \eqref{eqn:DimensionEqualsOrder} is a simple consequence of results obtained in \cite{LapPo1}.} The remark below provides yet another connection between counting functions and dimensions, although it is a simple restatement of Proposition \ref{prop:CountingFunctionDFractalStringD} in our context.
\begin{remark}
\label{rmk:DimensionEqualsOrder}
For a fractal string $\mathcal{L}$, the \emph{order of the geometric counting function} $N_\mathcal{L}$, denoted $\rho_\mathcal{L}$, is given by
\begin{align}
\label{eqn:OrderOfCountingFunction}
\rho_\mathcal{L} &:=\limsup_{x \rightarrow \infty} \frac{\log{N_\mathcal{L}(x)}}{\log{x}}.
\end{align}
We have that the dimension $D_\mathcal{L}$ coincides with $\rho_\mathcal{L}$. That is,
\begin{align}
\label{eqn:DimensionEqualsOrder}
D_\mathcal{L} &=\rho_\mathcal{L}.
\end{align}
Note that, for a given fractal string $\mathcal{L}$, the order of the counting function $\rho_\mathcal{L}$ given in Equation \eqref{eqn:OrderOfCountingFunction} and the value $D_N$ given in Equation \eqref{eqn:CountingFunctionD} provide essentially the same information regarding the geometric counting function $N_\mathcal{L}$. Indeed, it can be shown directly that $\rho_\mathcal{L}=D_N$, and hence Equation \eqref{eqn:DimensionEqualsOrder} would follow from Proposition \ref{prop:CountingFunctionDFractalStringD}. This connection is examined further in \cite{LapRoZu}.
\end{remark}
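Equations \eqref{eqn:OrderOfCountingFunction} and \eqref{eqn:DimensionEqualsOrder} can be illustrated concretely on the Cantor string, whose lengths $3^{-n}$ occur with multiplicity $2^{n-1}$, so that $N_\mathcal{L}(3^n)=\sum_{k=1}^{n}2^{k-1}=2^n-1$ and $\log N_\mathcal{L}(x)/\log x \to \log_3 2 = D_\mathcal{L}$. A minimal Python sketch (ours, for illustration only):

```python
import math

def N_cantor_string(x):
    """Geometric counting function of the Cantor string: reciprocal
    lengths 3^n with multiplicity 2^{n-1}, summed over 3^n <= x."""
    count, n = 0, 1
    while 3 ** n <= x:
        count += 2 ** (n - 1)
        n += 1
    return count

# log N_L(x) / log x along x = 3^n approaches D_L = log 2 / log 3
D = math.log(2) / math.log(3)
orders = [math.log(N_cantor_string(3 ** n)) / (n * math.log(3)) for n in (10, 20, 40)]
```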
In the next section, motivated by the box-counting function $N_B$ and connections between the geometric counting function $N_\mathcal{L}$ and dimension $D_\mathcal{L}$ of a fractal string $\mathcal{L}$, we define and investigate the properties of box-counting fractal strings.
\section{Box-counting fractal strings and zeta functions}
\label{sec:BoxCountingFractalStringsAndZetaFunctions}
In this section, we develop the definition of and results pertaining to \emph{box-counting fractal strings}. These fractal strings are defined in order to provide a framework in which one may, perhaps, extend the results on ordinary fractal strings via associated zeta functions and complex dimensions in \cite{LapvF6} to bounded sets. Further exploration with box-counting fractal strings, such as Minkowski measurability of bounded sets, is central to the development of the authors' paper \cite{LapRoZu}. The box-counting fractal string and the box-counting zeta function for bounded sets in Euclidean spaces were introduced by the second author during the First International Meeting of the Permanent International Session of Research Seminars (PISRS) at the University of Messina, PISRS Conference 2011: Analysis, Fractal Geometry, Dynamical Systems, and Economics.\footnote{See Remark \ref{rmk:PrecursorToBoxCountingZetaFunction}.} The introduction took place after the second author listened to a lecture by the third author about his results (with the first author and Goran Radunovi\'c) in \cite{LapRaZu} on distance and tube zeta functions for arbitrary compact subsets of $\mathbb{R}^m$. Some of these results are also discussed in Section \ref{sec:DistanceAndTubeZetaFunctions} below.
\begin{remark}
\label{rmk:PrecursorToBoxCountingZetaFunction}
At the time, the second author did not yet know that the first author had already proposed (since the early 2000s, in several research documents) to introduce and study a `box-counting zeta function' (defined via the Mellin transform of a `box-counting function', much as in Corollary \ref{cor:EquivalentZetaFunctions}, Definition \ref{def:BoxCountingFunction} and Remark \ref{rmk:VariousCountingFunctions}) in order to develop a higher-dimensional theory of complex dimensions and obtain the associated fractal tube formulas for suitable fractal subsets of $\mathbb{R}^m$ (established when $m=1$ in \cite[Ch.~8]{LapvF6}). The explicit construction of a `box-counting fractal string' associated with a given bounded set $A\subset\mathbb{R}^m$ is new and potentially quite useful, however.\footnote{Compare, when $m=1$, the construction provided in \cite{LapMa} of certain fractal strings associated with suitable monotonically increasing step functions, viewed as `geometric counting functions' (in the sense of \cite{LapPo1,LapvF6} and of Definition \ref{def:GeometricCountingFunction} above).} Of course, the other results of \cite{LapRoZu} described in Section \ref{sec:BoxCountingFractalStringsAndZetaFunctions} are new as well.
\end{remark}
\subsection{Definition of box-counting fractal strings}
\label{sec:DefinitionOfBoxCountingFractalStrings}
If $A \subset \mathbb{R}^m$ is bounded, then the diameter of $A$, denoted $\diam(A)$, is finite. So for nonempty $A$ and all $x$ small enough, we have $N_B(A,x)=1$ when $N_B(A,\cdot)$ is given as in Definition \ref{def:BoxCountingFunction} or one of the options in Remark \ref{rmk:VariousCountingFunctions}. Indeed, for a given bounded infinite set $A$, each such box-counting function uniquely defines a fractal string $\mathcal{L}_B$, which is introduced below and called the \emph{box-counting fractal string}, by uniquely determining a sequence of distinct scales $(l_n)_{n\in\mathbb{N}}$ along with corresponding multiplicities $(m_n)_{n\in\mathbb{N}}$.
Given a fixed bounded infinite set $A$, the range of a chosen box-counting function $N_B(A,\cdot)$ can be thought of as a strictly increasing sequence of positive integers $(M_n)_{n\in\mathbb{N}}$. In this context, we can readily define a fractal string $\mathcal{L}_B$ whose geometric counting function $N_{\mathcal{L}_B}$ essentially coincides with $N_B(A,\cdot)$; see Lemma \ref{lem:CountingFunctionCorrelation} below. To this end, the key idea is to make the distinct scales $l_n$ of the desired (box-counting) fractal string $\mathcal{L}_B$ correspond to the scales at which the box-counting function $N_B(A,\cdot)$ jumps. Furthermore, the multiplicities $m_n$ are defined in order to have the resulting geometric counting function $N_{\mathcal{L}_B}$ (nearly) coincide with the chosen box-counting function $N_B(A,\cdot)$. Such \emph{box-counting fractal strings} potentially allow for the development of a theory of complex dimensions of fractal strings, as presented in \cite{LapvF6}, by means of results in Section \ref{sec:FractalStringsAndZetaFunctions} similar to Theorem \ref{thm:CountingFunctionAndComplexDimensions} above.\footnote{Parts of such a theory are now developed in \cite{LapRaZu}, using distance and tube zeta functions; see Section \ref{sec:DistanceAndTubeZetaFunctions} below for a few sample results. Once the two parallel theories are more fully developed, a challenging problem will consist in comparing and contrasting their respective results and scopes.} These concepts are central to the development of the paper \cite{LapRoZu}.
\begin{definition}
\label{def:BoxCountingFractalString}
Let $A$ be a bounded infinite subset of $\mathbb{R}^m$ and let $N_B(A,\cdot)$ denote a box-counting function given by one of the options in Remark \ref{rmk:VariousCountingFunctions}. Denote the range of $N_B(A,\cdot)$ as a strictly increasing sequence of positive integers $(M_n)_{n\in\mathbb{N}}$.
For each $n\in\mathbb{N}$, let $l_n$ be the scale given by
\begin{align}
\label{eqn:LengthsOfABoxCountingFractalString}
l_n:=(\sup\{x \in (0,\infty): N_B(A,x)=M_n\})^{-1}.
\end{align}
Also, let $m_1:=M_2$, and for $n\geq 2$, let $m_n:=M_{n+1}-M_n$.
The \emph{box-counting fractal string} of A, denoted $\mathcal{L}_B$, is given by
\begin{align*}
\mathcal{L}_B &:= \{l_n : l_n \textnormal{ has multiplicity } m_n, n \in \mathbb{N}\}.
\end{align*}
\end{definition}
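The construction in Definition \ref{def:BoxCountingFractalString} is effective: given a nondecreasing, integer-valued box-counting function, the jump points $x_n=l_n^{-1}$ can be located by bisection and the multiplicities read off from the jump values. The following Python sketch (our own illustration; the names `box_counting_string` and `nb_cantor` are not from the text, and the output is truncated to jumps in $(0,x_{\max}]$) recovers the first few pairs $(l_n,m_n)$ for the Cantor set.

```python
def box_counting_string(nb, x_max, tol=1e-9):
    """Recover the distinct scales l_n and multiplicities m_n of the
    box-counting fractal string of a nondecreasing, integer-valued
    box-counting function nb, truncated to jumps located in (0, x_max]."""
    jumps, values = [], [nb(tol)]          # values collects M_1, M_2, ...
    lo = tol
    while nb(lo) < nb(x_max):
        a, b = lo, x_max                   # bisect for sup{x : nb(x) == nb(lo)}
        while b - a > tol:
            mid = (a + b) / 2
            if nb(mid) == nb(lo):
                a = mid
            else:
                b = mid
        jumps.append(b)                    # x_n = l_n^{-1}, up to tolerance tol
        lo = b
        values.append(nb(lo))              # the next value M_{n+1}
    # m_1 = M_2, and m_n = M_{n+1} - M_n for n >= 2
    mults = [values[1]] + [values[n + 1] - values[n] for n in range(1, len(jumps))]
    return [(1.0 / x, m) for x, m in zip(jumps, mults)]

def nb_cantor(x):
    """Box-counting function of the Cantor set: 1 for x <= 1,
    and 2^k for 3^{k-1} < x <= 3^k."""
    if x <= 1:
        return 1
    k, p = 1, 3
    while p < x:
        k += 1
        p *= 3
    return 2 ** k
```

For the Cantor set, `box_counting_string(nb_cantor, 30.0)` returns the scales $1, 1/3, 1/9, 1/27$ with multiplicities $2, 2, 4, 8$, matching the example below (up to the bisection tolerance).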
\begin{remark}
\label{rmk:CountingFunctionDefinesUniqueFractalString}
Note that the distinct scales $l_n$ and the multiplicities $m_n$ are \emph{uniquely defined by the box-counting function} $N_B(A,\cdot)$ since $N_B(A,x)$ is nondecreasing in $x$. Also, each $l_n$ is equivalently given by
\[
l_n = \inf\{\varepsilon \in (0,\infty): N_B(A,\varepsilon^{-1})=M_n\}.
\]
\end{remark}
It remains to show that $\mathcal{L}_B$ is indeed a fractal string; see Definition \ref{def:FractalString}. That is, since we want to use as many of the results from \cite{LapvF6} as possible (some of which are presented in Section \ref{sec:FractalStringsAndZetaFunctions}), we must verify that $\mathcal{L}_B=(\ell_j)_{j\in\mathbb{N}}$ is a nonincreasing sequence of positive real numbers which tends to zero. This is accomplished with the following proposition, in which other behaviors of $N_B(A,\cdot)$ are also determined.
For clarity of exposition and in order to ease the notation used in this section, in particular in the following proposition, take $N_B(A,\cdot)$ to be defined by option (i) of Remark \ref{rmk:VariousCountingFunctions} and let $N_B(A,0):=0$. (Completely analogous results hold when $N_B(A,\cdot)$ is given by Definition \ref{def:BoxCountingFunction} or one of the other options in Remark \ref{rmk:VariousCountingFunctions}, \emph{mutatis mutandis}.) Note that we have $N_B(A,x)\leq N_B(A,y)$ whenever $0<x<y$. In this setting, and following Remark \ref{rmk:varepsilonVSx}, $x^{-1}$ denotes the diameter of the sets used to cover $A$. Furthermore, let $x_n:=l_n^{-1}$ for each $n\in\mathbb{N}$, and note that we have $N_B(A,x_2)=m_1=M_2$ and
\[
N_B(A,x_{n+1})-N_B(A,x_n)=m_n=M_{n+1}-M_n, \quad \textnormal{for } n \geq 2.
\]
\begin{proposition}
\label{prop:BoxCountingJumps}
Let $A$ be a bounded infinite subset of $\mathbb{R}^m$ and let $l_n$ be given by Equation \emph{\eqref{eqn:LengthsOfABoxCountingFractalString}}. Then the sequence $(x_n)_{n\in\mathbb{N}}:=(l_n^{-1})_{n\in\mathbb{N}}$ is a countably infinite, strictly increasing sequence of positive real numbers such that, for each $n \in \mathbb{N}$ and all $x$ such that $x_{n-1} < x \leq x_n$ \emph{(}letting $x_0=0$\emph{)}, we have
\begin{align}
\label{eqn:BoxCountingJumps}
N_B(A,x_{n-1}) &< N_B(A,x) = N_B(A,x_n).
\end{align}
Furthermore,
\begin{enumerate}
\setlength{\itemsep}{0in}
\item $x_1>0$ and $N_B(A,x_1)=1$,
\item $x_n \nearrow \infty$ as $n \rightarrow \infty$, and
\item $\displaystyle \bigcup_{n \in \mathbb{N}} \{N_B(A,x_n)\} = \textnormal{range}~N_B(A,\cdot)$.
\end{enumerate}
\end{proposition}
\begin{proof}
We have that $N_B(A,x)$ is nondecreasing for $x>0$. Further, the range of $N_B(A,\cdot)$, denoted $\textnormal{range}~N_B(A,\cdot)$ (and also realized as the sequence $(M_n)_{n\in\mathbb{N}}$ above), is at most countable since it is a subset of $\mathbb{N}$. In fact, $\textnormal{range}~N_B(A,\cdot)$ is countably infinite (otherwise, $A$ would be finite). Hence, $(x_n)_{n \in \mathbb{N}}$ is a unique, countably infinite, strictly increasing sequence of positive real numbers such that, for each $n \in \mathbb{N}$ and all $x$ such that $x_{n-1} < x \leq x_n$ (letting $x_0=0$), we have
\begin{align*}
N_B(A,x_{n-1}) &< N_B(A,x) = N_B(A,x_n).
\end{align*}
Since $A$ is bounded and contains at least two elements, there exists a unique $x'\in (0,\infty)$ such that $N_B(A,x)=1$ if $0<x \leq x'$, and $N_B(A,x)>1$ if $x>x'$. By the definition of the sequence $(x_n)_{n\in\mathbb{N}}$, we have $x'=x_1$.
Now, suppose $(x_n)_{n\in\mathbb{N}}$ has an accumulation point at some $x'' \in (0,\infty)$. Then $N_B(A,x'')=\infty$ since $N_B(A,\cdot)$ increases by some positive integer value at $x_n$ for each $n \in \mathbb{N}$ and since $\textnormal{range}~N_B(A,\cdot)\subset\mathbb{N}$. However, this contradicts the boundedness of $A$. Further, assuming $N_B(A,\cdot)$ is bounded implies that $A$ is finite. Hence, $x_n \nearrow \infty$ as $n \rightarrow \infty$.
Lastly, suppose there exists $k \in \textnormal{range}~N_B(A,\cdot)$ such that we have $k \neq N_B(A,x_n)$ for all $n \in \mathbb{N}$. Since $x_n \nearrow \infty$ as $n \rightarrow \infty$ and $N_B(A,\cdot)$ is nondecreasing, there exists a unique $n_0 \in \mathbb{N}$ such that $x_{n_0-1}<y<x_{n_0}$ for all $y$ such that $N_B(A,y)=k$. However, Equation \eqref{eqn:BoxCountingJumps} implies $N_B(A,y)=k=N_B(A,x_{n_0})$, which is a contradiction. Therefore, $\bigcup_{n \in \mathbb{N}}\{N_B(A,x_n)\} = \textnormal{range}~N_B(A,\cdot)$.
\end{proof}
\begin{remark}
\label{rmk:BoxCountingFractalStringIsAFractalString}
By part (ii) of Proposition \ref{prop:BoxCountingJumps}, $l_n \searrow 0$ as $n \rightarrow \infty$ and, hence, $\mathcal{L}_B$ is indeed a fractal string in the sense of Definition \ref{def:FractalString}.
\end{remark}
\begin{example}[Box-counting fractal string of the Cantor set]
\label{eg:CantorSetBoxCountingFractalString}
Consider the Cantor set $C$. For $x>0$, let the box-counting function $N_B(C,x)$ be the minimum number of sets of diameter $x^{-1}$ required to cover $C$ (i.e., as in option (i) of Remark \ref{rmk:VariousCountingFunctions}). Then the box-counting fractal string $\mathcal{L}_B$ of $C$ is given by
\begin{align}
\label{eqn:CantorSetBoxCountingFractalString}
\mathcal{L}_B &= \{l_1=1: m_1=2\}\cup\{l_n=3^{-(n-1)}: m_n =2^{n-1}, n \geq 2 \}.
\end{align}
Indeed, for each $n \in \mathbb{N}$, exactly $2^{n}$ intervals of diameter $3^{-n}$ are required to cover $C$. If $x^{-1}<3^{-n}$, then more than $2^{n}$ intervals of diameter $x^{-1}$ are required to cover $C$.
\end{example}
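The covering counts in this example can be verified directly. The Python sketch below (our own construction, not from the text) represents a level-$m$ approximation of $C$ as a finite union of intervals and covers it greedily from the left with intervals of length $3^{-n}$; in one dimension the greedy cover is minimal, and for $n<m$ the count is exactly $2^n$.

```python
# Direct check (ours): the minimal number of intervals of length 3^{-n}
# needed to cover the Cantor set is 2^n.

def cantor_intervals(m):
    """Level-m construction intervals of the middle-thirds Cantor set,
    in increasing order."""
    ivs = [(0.0, 1.0)]
    for _ in range(m):
        ivs = [piece for a, b in ivs
               for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return ivs

def greedy_cover_count(ivs, eps):
    """Minimal number of length-eps intervals covering a sorted union of
    intervals: repeatedly place an interval at the leftmost uncovered point."""
    count, covered = 0, float("-inf")
    for a, b in ivs:
        a = max(a, covered)
        while a < b - 1e-12:      # small slack absorbs floating-point noise
            count += 1
            covered = a + eps
            a = covered
    return count
```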
\begin{example}[Box-counting fractal string of a 1-dimensional fractal]
\label{eg:BoxCountingFractalStringOf1DFractal}
Consider the self-similar set $F$ which is the attractor of the IFS $\mathbf{\Phi}_1=\{\Phi_j\}_{j=1}^4$ on the unit square $[0,1]^2\subset\mathbb{R}^2$ given by
\begin{align*}
\Phi_1(x) &=\frac{1}{4}x, \quad \Phi_2(x) =\frac{1}{4}x+\left(\frac{3}{4},0\right), \quad
\Phi_3(x) =\frac{1}{4}x+\left(\frac{3}{4},\frac{3}{4}\right), \quad \textnormal{and}\\ \Phi_4(x) &=\frac{1}{4}x+\left(0,\frac{3}{4}\right).
\end{align*}
The Moran equation of $F$ is simply $4\cdot 4^{-s}=1$, hence $D_{\mathbf{\Phi}_1}=\dim_BF=\dim_MF=1$ and $F$ is a 1-dimensional self-similar set which is totally disconnected.
Let $N_B(F,x)$ be as in Definition \ref{def:BoxCountingFunction}. Then, for $x \in (0,2]$, we have
\[
N_B(F,x)=
\begin{cases}
1, & 0 < x \leq 2/\sqrt{2}, \\
2, & 2/\sqrt{2} < x \leq 8/\sqrt{17}, \\
3, & 8/\sqrt{17} < x \leq 2.
\end{cases}
\]
Indeed, $\sqrt{2}$ is the distance between $(0,0)$ and $(1,1)$; $\sqrt{17}/4$ is the minimum distance between the points $(0,0)$, $(1,1/4)$, and $(1/4,1)$; and the minimum distance between $(0,0)$, $(0,1)$, $(1,0)$, and $(1,1)$ is $1$. Hence, $M_1=1$, $M_2=2$, and $M_3=3$.
For $x>2$, the self-similarity of $F$ implies that $M_n=j4^k$, where $n$ is uniquely expressed as $n=3k+j$ with $k\in \mathbb{N}\cup\{0\}$ and $j\in\{1,2,3\}$. So, for $n\geq 2$, we have $m_n=M_{n+1}-M_n=4^k$ and therefore the box-counting fractal string $\mathcal{L}_B=(\ell_j)_{j=1}^\infty$ of $F$ is the sequence obtained by putting the following collection of distinct scales in nonincreasing order and listing them according to multiplicity:
\begin{align*}
&\left\{\sqrt{2}/2: \textnormal{ multiplicity } 2 \right\}\\
&\cup \left\{\sqrt{2}/(2\cdot 4^k): \textnormal{ multiplicity } 4^k, k\in \mathbb{N} \right\}\\
&\cup \left\{\sqrt{17}/(8\cdot 4^k): \textnormal{ multiplicity } 4^k, k\in \mathbb{N}\cup\{0\} \right\}\\
&\cup \left\{1/(2\cdot 4^k): \textnormal{ multiplicity } 4^k, k\in \mathbb{N}\cup\{0\} \right\}.
\end{align*}
\end{example}
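The bookkeeping in this example, namely $M_n=j4^k$ for $n=3k+j$, the multiplicities $m_n=4^k$ for $n\geq 2$, and the interleaving of the three families of scales, can be checked mechanically. A small Python sketch of this internal consistency check (ours; the names `scale`, `M` and `mult` are not from the text):

```python
import math

# base scales for j = 1, 2, 3 at k = 0
base = [math.sqrt(2) / 2, math.sqrt(17) / 8, 1 / 2]

def scale(n):
    """The n-th distinct scale l_n, writing n = 3k + j with j in {1,2,3}."""
    k, j = divmod(n - 1, 3)        # j here runs over 0,1,2 <-> j = 1,2,3
    return base[j] / 4 ** k

def M(n):
    """M_n = j * 4^k for n = 3k + j."""
    k, j = divmod(n - 1, 3)
    return (j + 1) * 4 ** k

def mult(n):
    """m_1 = M_2, and m_n = M_{n+1} - M_n for n >= 2."""
    return M(2) if n == 1 else M(n + 1) - M(n)
```

The scales are strictly decreasing in $n$ (so the three families interleave in exactly the order claimed), and the differences $M_{n+1}-M_n$ reproduce $m_n=4^k$.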
Examples \ref{eg:CantorSetBoxCountingFractalString} and \ref{eg:BoxCountingFractalStringOf1DFractal} will be revisited and expanded upon in the following subsection.
\subsection{Box-counting zeta functions}
\label{sec:BoxCountingZetaFunctions}
Suppose $A$ is a bounded infinite subset of $\mathbb{R}^m$. Each scale $l_n \in \mathcal{L}_B$ is distinct and, for $n \geq 2$, counted according to the multiplicity $m_n:=N_B(A,x_{n+1})-N_B(A,x_n)$. It will help to note that we can also consider $\mathcal{L}_B$ to be given by the nonincreasing sequence $(\ell_j)_{j \in \mathbb{N}}$, where the distinct values among the $\ell_j$'s repeat the $l_n$'s according to the multiplicities $m_n$. (The convention of distinguishing the notation $\ell_j$ and $l_n$ in this way is established in \cite{LapvF6} and its predecessors, where the distinction allows for various results therein to be more clearly stated and derived.) In this setting, we immediately have the following connection between $N_{\mathcal{L}_B}$, the counting function of the reciprocal lengths of $\mathcal{L}_B$, and the box-counting function $N_B(A,x)$.
\begin{lemma}
\label{lem:CountingFunctionCorrelation}
For $x \in (x_1,\infty)\setminus(x_n)_{n \in \mathbb{N}}$,
\begin{align*}
N_{\mathcal{L}_B}(x)=N_B(A,x).
\end{align*}
\end{lemma}
\begin{proof}
The result follows at once from Definitions \ref{def:BoxCountingFunction} and \ref{def:BoxCountingFractalString}.
\end{proof}
In general, Lemma \ref{lem:CountingFunctionCorrelation} does not hold for $x=x_n$, though equality may hold for certain choices of $N_B(A,x)$. Furthermore, the primary applications of Lemma \ref{lem:CountingFunctionCorrelation} are Corollary \ref{cor:EquivalentZetaFunctions} and Theorem \ref{thm:JohnnyDarko} where the behavior of $N_B(A,x)$ at $x=x_n$ does not affect the conclusions. Moreover, for a bounded infinite set $A$, the geometric zeta function of the box-counting fractal string $\mathcal{L}_B$ is given by
\begin{align*}
\zeta_{\mathcal{L}_B}(s)&=N_B(A,l_2^{-1})l_1^s+\sum_{n=2}^{\infty} (N_B(A,l_{n+1}^{-1})-N_B(A,l_n^{-1})) l_n^s = \sum_{j=1}^\infty \ell_j^s,
\end{align*}
for $\textnormal{Re}(s)>D_{\mathcal{L}_B}$. We take this zeta function to be our \emph{box-counting zeta function} for a bounded infinite set $A$ in Definition \ref{def:BoxCountingZetaFunction}.
\begin{definition}
\label{def:BoxCountingZetaFunction}
Let $A$ be a bounded infinite subset of $\mathbb{R}^m$. The \emph{box-counting zeta function} of $A$, denoted $\zeta_B$, is the geometric zeta function of the box-counting fractal string $\mathcal{L}_B$. That is,
\begin{align*}
\zeta_B(s) := \zeta_{\mathcal{L}_B}(s) = \sum_{n=1}^\infty m_n l_n^s,
\end{align*}
for $\textnormal{Re}(s) > D_B := D_{\mathcal{L}_B}$. The (optimum) value $D_B$ is the \emph{abscissa of convergence} of $\zeta_B$. The set of \emph{box-counting complex dimensions} of $A$, denoted $\calD_B$, is the set of complex dimensions $\calD_{\mathcal{L}_B}$ of the box-counting fractal string $\mathcal{L}_B$.
\end{definition}
\begin{remark}
\label{rmk:FiniteBoxCountingFractalString}
Note that we do not consider the case when $A$ is finite. One may, of course, define the box-counting fractal string $\mathcal{L}_B$ for such a set as a finite sequence of positive real numbers. In that case, however, the box-counting zeta function would comprise a finite sum, which would yield an abscissa of convergence $-\infty$ and no complex dimensions; see Remark \ref{rmk:AbscissaForFiniteSequences}. That is, in the context of the theory of complex dimensions of fractal strings, the case of finite sets is not very interesting.
\end{remark}
\begin{example}[Box-counting zeta function of the Cantor set]
\label{eg:CantorSetBoxCountingZetaFunction}
By Example \ref{eg:CantorSetBoxCountingFractalString}, the box-counting fractal string $\mathcal{L}_B$ of the Cantor set $C$ is given by Equation \eqref{eqn:CantorSetBoxCountingFractalString}.
It follows that for $\textnormal{Re}(s)>\log_{3}2$, the box-counting zeta function of $C$ is given by
\begin{align*}
\zeta_B(s) &= 2+\sum_{n=2}^\infty 2^{n-1}\cdot 3^{-(n-1)s}
= 1+\frac{1}{1-2\cdot3^{-s}}.
\end{align*}
Thus, $D_B=\dim_{B}C=\dim_{M}C=\log_{3}2$ and $\zeta_B$ has a meromorphic extension to all of $\mathbb{C}$ given by the last expression in the above equation. Moreover, we have
\begin{align*}
\calD_B &=\calD_{CS}=\calD_{\mathcal{L}_{CS}}=\left\{ \log_3{2} + i\frac{2\pi}{\log{3}}z : z \in \mathbb{Z} \right\}.
\end{align*}
\end{example}
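The closed form in this example is easy to confirm numerically: for real $s>D_B=\log_3 2$, the partial sums of $2+\sum_{n\geq 2}2^{n-1}3^{-(n-1)s}$ converge geometrically (with ratio $2\cdot 3^{-s}<1$) to $1+1/(1-2\cdot 3^{-s})$. A short Python sketch (ours, for illustration):

```python
def zeta_partial(s, terms):
    """Partial sums of zeta_B(s) = 2 + sum_{n >= 2} 2^{n-1} 3^{-(n-1)s}."""
    return 2 + sum(2 ** (n - 1) * 3.0 ** (-(n - 1) * s) for n in range(2, terms + 2))

def zeta_closed(s):
    """The meromorphic continuation 1 + 1/(1 - 2 * 3^{-s})."""
    return 1 + 1 / (1 - 2 * 3.0 ** -s)
```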
\begin{example}
\label{eg:BoxCountingZetaFunction1DFractal}
The box-counting fractal string $\mathcal{L}_B$ of the 1-dimensional self-similar set $F$ (the attractor of the IFS $\mathbf{\Phi}_1$), where $N_B(F,x)$ is as in Definition \ref{def:BoxCountingFunction}, is given in Example \ref{eg:BoxCountingFractalStringOf1DFractal}. Hence, the box-counting zeta function of $F$ is given (for $\textnormal{Re}(s)>1$) by
\begin{align}
\label{eqn:1DBoxCountingZetaFunction}
\zeta_B(s) &= \left(\frac{\sqrt{2}}{2}\right)^s + \left(\left(\frac{\sqrt{2}}{2}\right)^s + \left(\frac{\sqrt{17}}{8}\right)^s + \left(\frac{1}{2}\right)^s\right)
\sum_{k=0}^\infty\left(\frac{4}{4^s}\right)^k \\
&= \left(\frac{\sqrt{2}}{2}\right)^s + \frac{\left(\sqrt{2}/2\right)^s + \left(\sqrt{17}/8\right)^s + \left(1/2\right)^s}{1-4\cdot 4^{-s}}.
\end{align}
Thus, $D_B=\dim_{B}F=\dim_{M}F=1$ and $\zeta_B$ has a meromorphic extension to all of $\mathbb{C}$ given by the last expression in the above equation. Moreover, we have
\begin{align}
\label{eqn:1DBoxCountingComplexDimensions}
\calD_B &= \calD_B(\mathbb{C})=\left\{1 + i\frac{2\pi}{\log{4}}z : z \in \mathbb{Z} \right\}.
\end{align}
Note that the series corresponding to $\zeta_B(1)$ is divergent. Hence, the fractal string $\mathcal{L}_B$ does not correspond to an ordinary fractal string (which, by definition, requires $\zeta_\mathcal{L}(1)=\sum_{j=1}^\infty \ell_j$ to be convergent).
\end{example}
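As with the Cantor set, the closed form \eqref{eqn:1DBoxCountingZetaFunction} can be confirmed numerically for real $s>1$, where the series converges with ratio $4\cdot 4^{-s}<1$. A short Python sketch (ours, for illustration):

```python
import math

r2, r17, half = math.sqrt(2) / 2, math.sqrt(17) / 8, 0.5

def zeta_partial(s, K):
    """zeta_B(s) summed over the scales of L_B with k <= K."""
    return r2 ** s + sum(
        4 ** k * ((r2 / 4 ** k) ** s + (r17 / 4 ** k) ** s + (half / 4 ** k) ** s)
        for k in range(K + 1))

def zeta_closed(s):
    """The meromorphic continuation from the example."""
    return r2 ** s + (r2 ** s + r17 ** s + half ** s) / (1 - 4 * 4.0 ** -s)
```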
\begin{remark}
\label{rmk:LatticeSelfSimilarIFS}
The Cantor set $C$ and the 1-dimensional self-similar set $F$ are each the attractor of a \emph{lattice} iterated function system; see \cite[\S 13.1]{LapvF6} as well as \cite{LapPe3,LapPe2,LapPeWin}. Essentially, an IFS generated by similarities (i.e., an IFS for which Equation \eqref{eqn:IFSSelfSimilar} holds) is \emph{lattice} if, for the distinct values $t_1,\ldots,t_{N_0}$ among the scaling ratios $r_1,\ldots,r_N$, there are positive integers $k_j$ where $\gcd(k_1,\ldots,k_{N_0})=1$ and a positive real number $0<r<1$ such that $t_j=r^{k_j}$ for each $j=1,\ldots,N_0$.\footnote{Equivalently, the precise definition (following \cite{LapvF6}) is that the \emph{distinct} scaling ratios generate a multiplicative group of rank 1.} Note that in each case, the box-counting complex dimensions comprise a set of complex numbers with a unique real part (equal to the box-counting dimension) and a vertical (and arithmetic) progression, in both directions, of imaginary parts.
In the case of the Cantor set $C$, the box-counting complex dimensions $\calD_B$ coincide with the usual complex dimensions $\calD_{CS}$. Moreover, the structure of $\calD_{CS}$ allows for the application of Theorem \ref{thm:CriterionForMinkowskiMeasurability} and, hence, we conclude (as in \cite{LapPo1} and \cite{LapvF6}) that $C$ is not Minkowski measurable.
In the case of the 1-dimensional self-similar set $F$ of Examples \ref{eg:BoxCountingFractalStringOf1DFractal} and \ref{eg:BoxCountingZetaFunction1DFractal}, the set of complex dimensions $\calD_B$ has no counterpart in the context of usual complex dimensions since $F$ is not the complement of an ordinary fractal string. As such, Theorem \ref{thm:CriterionForMinkowskiMeasurability} does not apply. Moreover, since $\dim_MF=1$, the corresponding results in \cite[\S 13.1]{LapvF6} do not apply either.
This provides motivation for developing a theory of complex dimensions which can take such examples, and many others, into account. The box-counting fractal strings defined in this paper, and investigated further in \cite{LapRoZu}, provide a first step in developing one such theory. Analogous comments regarding the further development of a higher-dimensional theory of complex dimensions can be made about the results of \cite{LapRaZu} to be discussed in Section \ref{sec:DistanceAndTubeZetaFunctions}.
\end{remark}
The next corollary follows readily from Lemma \ref{lem:CountingFunctionCorrelation} and Proposition \ref{prop:CountingFunctionDFractalStringD}. It establishes the equivalence of the box-counting zeta function $\zeta_B$ and an integral transform of the (appropriately truncated) box-counting function $N_B(A,x)$.
\begin{corollary}
\label{cor:EquivalentZetaFunctions}
Let $A$ be a bounded set. Then
\begin{align*}
\zeta_B(s) &= \zeta_{\mathcal{L}_B}(s) = s\int_{l_1^{-1}}^{\infty}x^{-s-1}N_B(A,x)dx,
\end{align*}
for $\textnormal{Re}(s) > D_B$.
\end{corollary}
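As a concrete (hypothetical) illustration of this corollary, not taken from the paper, consider the dyadic string with lengths $\ell_j = 2^{-j}$; its geometric zeta function and counting function are both explicit, so the Mellin-type identity can be checked numerically:

```python
import math

# Hypothetical illustration (ours, not from the paper): the dyadic string with
# lengths l_j = 2^{-j} has geometric zeta function
#   zeta_L(s) = sum_{j>=1} 2^{-js} = 1/(2^s - 1),
# and geometric counting function N(x) = #{j : l_j^{-1} <= x} = floor(log2 x)
# for x >= l_1^{-1} = 2.  We check the identity of the corollary,
#   zeta_L(s) = s * int_{l_1^{-1}}^{infty} x^{-s-1} N(x) dx,
# by truncating the integral and applying a midpoint rule in log-space.

def counting_function(x):
    return math.floor(math.log2(x)) if x >= 2 else 0

def mellin_side(s, x_max=2.0**40, steps=200000):
    a, b = math.log(2.0), math.log(x_max)
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = math.exp(a + (i + 0.5) * h)               # midpoint in u = log x
        total += x**(-s) * counting_function(x) * h   # x^{-s-1} N(x) dx = x^{-s} N(x) du
    return s * total

s = 1.5
closed_form = 1.0 / (2.0**s - 1.0)
approx = mellin_side(s)
print(closed_form, approx)
```

The agreement (to roughly four decimal places with this step size) reflects the fact that the counting function is a step function whose jumps occur exactly at the reciprocal lengths $2^j$.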
We close this subsection with a theorem which is a partial statement of our main result, Theorem \ref{thm:MainResult}. Specifically, the upper box-counting dimension of a bounded infinite set is equal to the abscissa of convergence of the corresponding box-counting zeta function.
\begin{theorem}
\label{thm:JohnnyDarko}
Let $A$ be a bounded infinite subset of $\mathbb{R}^m$. Then $\overline{\dim}_{B}A=D_B$.
\end{theorem}
\begin{proof}
The proof follows from a connection made through $D_N$, the asymptotic growth rate of the geometric counting function $N_\mathcal{L}(x)$ of a fractal string $\mathcal{L}$ given by Equation \eqref{eqn:CountingFunctionD}.
Let $\mathcal{L}=\mathcal{L}_B$. By Proposition \ref{prop:CountingFunctionDFractalStringD}, we have $D_\mathcal{L}=D_B=D_N$. Now, Lemma \ref{lem:CountingFunctionCorrelation} implies $N_\mathcal{L}(x)=N_B(A,x)$ for $x \in (l_1^{-1},\infty)\setminus\{l_n^{-1}\}_{n\in\mathbb{N}}$. Since these counting functions are nondecreasing, the equality $\overline{\dim}_{B}A=D_N$ follows from the formulation of $\overline{\dim}_{B}A$ given in Remark \ref{rmk:EquivalentUpperBoxDimension}.
\end{proof}
\subsection{Tessellation fractal strings and zeta functions}
In this subsection, we loosely discuss another type of fractal string defined for a given bounded infinite subset $A$ of $\mathbb{R}^m$. Unlike the box-counting fractal string $\mathcal{L}_B$, which is completely determined by the set $A$ and the box-counting function $N_B(A,\cdot)$, the \emph{tessellation fractal string} defined here depends on the set $A$, a chosen parameter, and a chosen family of tessellations of $\mathbb{R}^m$.
First, choose a scaling parameter $\lambda\in(0,1)$. For any $n\in\mathbb{N}$, consider the $n$-th tessellation of $\mathbb{R}^m$ defined by the family of cubes of side length $\lambda^n$ (obtained by taking translates of the cube $[0,\lambda^n]^m$ in $\mathbb{R}^m$). Henceforth, the number of cubes of the $n$-th tessellation that intersect $A$ is denoted by $m_n(\lambda)$. Let the scale $l_n(\lambda):=\lambda^n$ be of multiplicity $m_n(\lambda)$. This defines the fractal string $\mathcal{L}(A,\lambda)=(\ell_j)_{j\in\mathbb{N}}$, where $(\ell_j)_{j\in\mathbb{N}}$ is the sequence starting with $l_1$ with multiplicity $m_1$, followed by $l_2$ with multiplicity $m_2$, and so on. The geometric counting function $N_{\mathcal{L}(A,\lambda)}(x)=\#\{j\in\mathbb{N}: \ell_j^{-1}\le x\}$ of this fractal string is then well defined.
More generally, let $U$ be a compact subset of $\mathbb{R}^m$ with nonempty interior, satisfying the following properties:
\begin{enumerate}
\addtolength{\itemsep}{0.3\baselineskip}
\item[(a)] there exists a countable family of isometric maps $f_j:\mathbb{R}^m\to\mathbb{R}^m$ such that the family of sets $V_j=f_j(U)$, $j\ge1$, is a cover of $\mathbb{R}^m$;
\item[(b)] for $j\neq k$, the interiors of $V_j$ and $V_k$ are disjoint.
\end{enumerate}
We say that the family $(V_j)_{j\ge1}$ is a \emph{tessellation of $\mathbb{R}^m$}, generated by the basic shape (or `tile') $U$ and the family of isometries. Also, we say in short that \emph{$U$ tessellates~$\mathbb{R}^m$}. Note that if $\lambda\in(0,1)$ is a fixed real number, then the basic shape generates a sequence of tessellations indexed by $n\in\mathbb{N}$: $(\lambda^n V_j)_{j\ge1}$. The family $(\lambda^n V_j)_{j\ge1}$ is called the \emph{$n$-th tessellation of $\mathbb{R}^m$}, generated by the basic shape (or tile) $U$, the family of isometries $(f_j)_{j\ge1}$, and $\lambda\in(0,1)$.
Let $A$ be a given bounded set in $\mathbb{R}^m$. Define $m_n(\lambda)=m_n(A,U,\lambda)$ analogously as above, by counting the number of elements of the $n$-th tessellation which intersect $A$. The \emph{tessellation fractal string} $\mathcal{L}(A,U,\lambda)$ of the set $A$ is then the fractal string defined by
\begin{align*}
\mathcal{L}(A,U,\lambda) &:=\{l_n(\lambda)=\lambda^n:\mbox{ $l_n(\lambda)$ has multiplicity $m_n(\lambda)$, $n\in\mathbb{N}$}\}=(\ell_j)_{j\in\mathbb{N}}.
\end{align*}
The middle set is in fact a \emph{multiset}, by which we mean that its elements repeat with prescribed multiplicity. Using arguments analogous to those from \cite[pp.\ 38--39]{Falc}, we obtain that
\begin{equation}
\label{dimBA}
\overline\dim_BA=\limsup_{n\to\infty}\frac{\log m_n(\lambda)}{\log\lambda^{-n}}.
\end{equation}
(Also, see \cite[p.\ 41]{Falc} or \cite[p.\ 24]{Tri}.) Here, we have also used a version of Proposition~\ref{prop:GeometricSequenceSufficient} above.
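Equation \eqref{dimBA} can be illustrated concretely (this sketch is ours, not from the paper) with the triadic Cantor set and $\lambda=1/3$: each level-$n$ construction interval of $C$ occupies exactly one mesh cell of side $3^{-n}$, so $m_n(1/3)=2^n$ up to a bounded number of cells that touch $C$ only at shared endpoints, and the limit in \eqref{dimBA} recovers $\log 2/\log 3$:

```python
import math
from itertools import product

# Illustrative sketch (ours): enumerate the level-n construction intervals of
# the triadic Cantor set C.  Their left endpoints are k / 3^n, where k has a
# ternary expansion using only the digits {0, 2}; each such interval is one
# mesh cell of side 3^{-n} meeting C, so m_n(1/3) = 2^n (up to boundary cells).

def level_n_cells(n):
    return [sum(d * 3**(n - 1 - i) for i, d in enumerate(digits))
            for digits in product((0, 2), repeat=n)]

for n in (4, 8, 12):
    m = len(level_n_cells(n))                  # = 2^n
    print(n, math.log(m) / math.log(3**n))     # -> log 2 / log 3 ~ 0.6309
```

Since $m_n=2^n$ exactly for these cells, the quotient $\log m_n/\log 3^n$ is already equal to $\log 2/\log 3$ for every $n$; for less regular sets the limsup in \eqref{dimBA} is genuinely needed.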
The geometric zeta function of the tessellation fractal string $\mathcal{L}(A,U,\lambda)$, called the \emph{tessellation zeta function}, is given by
\begin{align}
\label{zeta_tess}
\zeta_{\mathcal{L}(A,U,\lambda)}(s) &=\sum_{j=1}^{\infty}\ell_j^s=\sum_{n=1}^{\infty} m_n(\lambda)\,\lambda^{ns}
\end{align}
for $\textnormal{Re}(s)$ large enough. Also, when defined accordingly, the set of complex dimensions $\calD_{\mathcal{L}(A,U,\lambda)}$ of the tessellation fractal string $\mathcal{L}(A,U,\lambda)$ is called the set of \emph{tessellation complex dimensions} of $A$ (relative to the tessellation associated with $U$ and $\lambda$).
The main result regarding this zeta function is the following theorem.
\begin{theorem}
\label{thm:TessellationDimension}
Let $U \subset \mathbb{R}^m$ be a compact set with nonempty interior, which tessellates $\mathbb{R}^m$. Then, the upper box-counting dimension of a bounded infinite set $A$ in $\mathbb{R}^m$ is equal to the abscissa of convergence $D_{\mathcal{L}(A,U,\lambda)}$ of the geometric zeta function \eqref{zeta_tess} of its tessellation fractal string. That is,
\begin{align*}
\overline{\dim}_{B}A=D_{\mathcal{L}(A,U,\lambda)}.
\end{align*}
\end{theorem}
\begin{proof}
Using Cauchy's criterion for convergence, we obtain that the series \eqref{zeta_tess} converges for all $s\in\mathbb{C}$ such that
$$
\limsup_{n\to\infty}m_n(\lambda)^{1/n}\lambda^{\operatorname{Re}(s)}<1,
$$
that is,
$$
\operatorname{Re}(s)>\frac{\log(\limsup_{n\to\infty}m_n(\lambda)^{1/n})}{\log\lambda^{-1}}.
$$
The series \eqref{zeta_tess} diverges if the opposite (strict) inequality holds. Therefore, the abscissa of convergence of \eqref{zeta_tess} is
\begin{equation}
\label{dimD}
D_{\mathcal{L}(A,U,\lambda)}=\frac{\log(\limsup_{n\to\infty}m_n(\lambda)^{1/n})}{\log\lambda^{-1}}=\limsup_{n\to\infty}\frac{\log m_n(\lambda)}{\log\lambda^{-n}}.
\end{equation}
In light of \eqref{dimBA} and \eqref{dimD}, we deduce that $\overline\dim_BA=D_{\mathcal{L}(A,U,\lambda)}$.
\end{proof}
\begin{remark}
The value of $D_{\mathcal{L}(A,U,\lambda)}$ is independent of the choice of $\lambda\in(0,1)$, since by Theorem~\ref{thm:TessellationDimension} its
value is equal to $\overline\dim_BA$. In concrete applications we choose the basic shape $U$
and $\lambda\in(0,1)$ that are best suited to the geometry of $A$. For example, if $A$ is the triadic Cantor set, we take $U=[0,1]$ and $\lambda=1/3$, while for the
Sierpinski gasket we take $U$ to be an equilateral triangle and $\lambda=1/2$.
\end{remark}
\begin{example}
Let $F$ be the 1-dimensional self-similar set from Examples \ref{eg:BoxCountingFractalStringOf1DFractal} and \ref{eg:BoxCountingZetaFunction1DFractal}. We define $U$ as the unit square $[0,1]^2$ and $\lambda=1/4$. Here, the scale $l_n(1/4)=1/4^n$ occurs with multiplicity $m_n(1/4)=9\cdot4^n$, defining the corresponding tessellation fractal string $\mathcal{L}(F,U,1/4)$. For $\textnormal{Re}(s)>1$, the tessellation zeta function is given by
$$
\zeta_{\mathcal{L}(F,U,1/4)}(s) = \sum_{n=1}^\infty 9\cdot4^n\cdot 4^{-ns}=\frac9{4^{s-1}-1}.
$$
According to Theorem~\ref{thm:TessellationDimension}, the abscissa of convergence $D_{\mathcal{L}(F,U,1/4)}=1$ is equal to the box-counting dimension of $F$. It follows that $\zeta_{\mathcal{L}(F,U,1/4)}(s)$ has a meromorphic extension to all of $\mathbb{C}$ given by $9(4^{s-1}-1)^{-1}$. Furthermore, the set of tessellation complex dimensions is equal to the set $\calD_B$ of box-counting complex dimensions given in \eqref{eqn:1DBoxCountingComplexDimensions}. That is,
\[
\calD_{\mathcal{L}(F,U,1/4)}=\calD_B=\left\{ 1 + i\frac{2\pi}{\log 4}z : z\in\mathbb{Z} \right\}.
\]
Note that the tessellation fractal string $\mathcal{L}(F,U,1/4)$ is unbounded, in the sense that the series given by $\zeta_{\mathcal{L}(F,U,1/4)}(1)$ is divergent.
Analogous results hold regarding the Cantor set $C$ and its (classical and box-counting) fractal strings, zeta functions, and complex dimensions. Further (higher-dimensional) examples will be studied in \cite{LapRoZu}.
\end{example}
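The closed form in the example above can be sanity-checked numerically (this check is ours); the series is geometric with ratio $q=4^{1-s}$, so a modest partial sum already matches $9/(4^{s-1}-1)$ to machine precision:

```python
import cmath

# Numerical sanity check (ours) of the closed form in the example above:
#   zeta_{L(F,U,1/4)}(s) = sum_{n>=1} 9 * 4^{n(1-s)} = 9 / (4^{s-1} - 1),
# valid for Re(s) > 1.

def partial_sum(s, terms=200):
    q = 4**(1 - s)                  # common ratio; |q| < 1 for Re(s) > 1
    return 9 * sum(q**n for n in range(1, terms + 1))

s = 1.5 + 2j * cmath.pi / cmath.log(4)   # sample point on the line Re(s) = 3/2
closed_form = 9 / (4**(s - 1) - 1)
print(abs(partial_sum(s) - closed_form))
```

The sample point is deliberately chosen with imaginary part $2\pi/\log 4$, the spacing of the arithmetic progression of complex dimensions in \eqref{eqn:1DBoxCountingComplexDimensions}.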
\section{Distance and tube zeta functions}
\label{sec:DistanceAndTubeZetaFunctions}
In this section, we deal with a class of zeta functions introduced by the first author during the 2009 ISAAC Conference at the University of Catania in Sicily, Italy. The main results of this section are obtained, in greater generality, in the forthcoming paper \cite{LapRaZu}, written by the first and third authors, along with Goran Radunovi\'c. We state here only some of the basic results, without attempting to work at the greatest level of generality. We refer to \cite{LapRaZu} for more general statements, additional results, and illustrative examples.
The following definition can be found in \cite{LapRaZu}.
\begin{definition}
\label{def:DistanceZetaFunction}
Let $A \subset \mathbb{R}^m$ be bounded. The \emph{distance zeta function} of $A$, denoted $\zeta_d$, is defined by
\begin{align}
\label{eqn:DistanceZetaFunction}
\zeta_d(s) &:=\int_{A_\varepsilon}d(x,A)^{s-m}dx
\end{align}
for $\textnormal{Re}(s)>D_d$, where $D_d=D_d(A)$ denotes the abscissa of convergence of the distance zeta function $\zeta_d$ and $\varepsilon$ is a fixed positive number.
\end{definition}
\begin{remark}
\label{rmk:IndependentOfVarepsilon}
It is shown in \cite{LapRaZu} that
changing the value of $\varepsilon$ modifies the distance zeta function by adding an entire function to $\zeta_d$. Hence, the main properties of $\zeta_d$ do not depend on the choice of $\varepsilon>0$. As a result, this is also the case for $D_d$, the abscissa of convergence of $\zeta_d$ (cf. Theorem \ref{thm:DistanceZetaFunctionUpperBoxDimension}), and $\textnormal{res}(\zeta_d;D_d)$, the residue of $\zeta_d$ at $s=D_d$ (cf. Theorem \ref{thm:DistanceZetaFunctionAndMinkowskiContent}).
\end{remark}
The distance zeta function can be used as an effective tool in the computation of the box-counting dimensions of various subsets $A$ of some Euclidean space; see \cite{LapRaZu}. Indeed, one of the basic results concerning the distance zeta function is given in the following theorem, which is Theorem 1 in \cite{LapRaZu}. Note: unlike in Theorem \ref{thm:JohnnyDarko} above, we allow $A$ to be finite here.
\begin{theorem}
\label{thm:DistanceZetaFunctionUpperBoxDimension}
Let $A$ be a nonempty bounded subset of $\mathbb{R}^m$. Then $D_d=\overline{\dim}_{B}A$.
\end{theorem}
\begin{corollary}
\label{cor:holomorphic}
$\zeta_d$ is holomorphic in the half-plane $\textnormal{Re}(s)>\overline{\dim}_BA$. Furthermore, this open right half-plane is the largest one in which $\zeta_d(s)$ is holomorphic.
\end{corollary}
\begin{remark}
\label{rmk:LowerBoxCountingDimensionFromDistanceZetaFunction}
We do not know if the value of the \emph{lower} box-counting dimension $\underline{\dim}_BA$ can be computed from the distance zeta function $\zeta_d$.
\end{remark}
It is shown in \cite{LapRaZu} that the distance zeta function represents a natural extension of the geometric zeta function $\zeta_{\mathcal{L}}$ of a bounded (i.e., summable) fractal string $\mathcal{L}=(\ell_j)_{j\in\mathbb{N}}$. Indeed, we can identify the string with an ordinary fractal string of the form $\Omega=\cup_{j=1}^\infty I_j$, where $I_j:=(a_{j+1},a_j)$ and $a_j:=\sum_{k\ge j}\ell_k$. Note that $|I_j|=\ell_j$. Defining $A=\{a_j\}_{j=1}^\infty\subset\mathbb{R}$, it is easy to see that $\zeta_d(s)=a(s)\zeta_{\mathcal{L}}(s)+b(s)$, where $a(s)$ vanishes nowhere and $a(s)$ and $b(s)$ are explicit meromorphic functions in the complex plane with (typically) a pole at the origin. Hence, the zeta functions $\zeta_\mathcal{L}$ and $\zeta_d$ have the same abscissa of convergence. \emph{It follows that if $\mathcal{L}$ is nontrivial} (\emph{i.e., has infinitely many lengths}), \emph{then}\footnote{If we allow $\mathcal{L}$ to be trivial, then one should replace $D_d(A)$ and $D_\mathcal{L}$ with $\max\{D_d(A),0\}$ and $\max\{D_\mathcal{L},0\}$, respectively, in Equation \eqref{eqn:DistanceDimensionEquivalence}.}
\begin{align}
\label{eqn:DistanceDimensionEquivalence}
D_d(A)=D_\mathcal{L}=\overline{\dim}_BA=\overline{\dim}_B(\partial\Omega).
\end{align}
\subsection{Minkowski content and residue of the distance zeta function}
\label{sec:MinkowskiContentAndResidueOfTheDistanceZetaFunction}
A remarkable property of the distance zeta function is that its residue at $s=D_d$ is closely related to the $D_d$-dimensional Minkowski content of $A$; see \cite{LapRaZu}.
\begin{theorem}
\label{thm:DistanceZetaFunctionAndMinkowskiContent}
Let $A$ be a nonempty bounded set in $\mathbb{R}^m$.
Assume that the distance zeta function can be meromorphically extended to a neighborhood of $s=D_d$ and that $D_d<m$. Then its residue at $s=D_d$ satisfies
\begin{align}
\label{inequality}
(m-D_d){\mathscr{M}}_*^{D_d}\le \operatorname{res}(\zeta_d(s);D_d)\le(m-D_d){\mathscr{M}}^{*D_d}.
\end{align}
If, in addition, $A$ is Minkowski measurable, it then follows that
\begin{align}
\label{eqn:MinkowskiContentAsResidue}
\operatorname{res}(\zeta_d(s);D_d)=(m-D_d){\mathscr{M}}^{D_d}.
\end{align}
\end{theorem}
The last part of this result (namely, Equation \eqref{eqn:MinkowskiContentAsResidue}) generalizes the corresponding one obtained in \cite{LapvF6} in the context of ordinary fractal strings to the case of bounded sets in Euclidean spaces; see \cite{LapRaZu}.
\begin{example}
It can be shown that, in the case of the Cantor set $C$, we have strict inequalities in Equation \eqref{inequality}. Indeed, in this case $m=1$, $D_d=\log_{3}2$, and
$$
\operatorname{res}(\zeta_d(s);D_d)=\frac1{\log2}\,2^{-D_d},
$$
whereas the values of the lower and upper $D_d$-dimensional Minkowski contents have been computed in \cite[Theorem 2.16]{LapvF6} (as well as earlier in \cite{LapPo1}):
$$
{\mathscr{M}}_*^{D_d}(A)=\frac1{D_d}\left(\frac{2D_d}{1-D_d} \right)^{1-D_d} ,\quad
{\mathscr{M}}^{*D_d}(A)=2^{2-D_d}.
$$
This is a special case of an example in~\cite{LapRaZu} dealing with generalized Cantor sets. Generalized Cantor strings, which are a certain type of generalized fractal strings, and their (geometric and spectral) oscillations are studied in detail in \cite[Ch.~10]{LapvF6}.
\end{example}
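The strictness of both inequalities in the Cantor-set example is easy to confirm numerically (our check, using only the closed-form expressions quoted above):

```python
import math

# Numerical check (ours) of the strict inequalities in the Cantor-set example:
#   (m - D) M_* < res(zeta_d; D) < (m - D) M^*,   with m = 1 and D = log_3 2.

D = math.log(2) / math.log(3)                       # D_d = log_3 2 ~ 0.6309
res = (1 / math.log(2)) * 2**(-D)                   # residue from the example
M_lower = (1 / D) * (2 * D / (1 - D))**(1 - D)      # lower Minkowski content
M_upper = 2**(2 - D)                                # upper Minkowski content

print((1 - D) * M_lower, res, (1 - D) * M_upper)    # ~ 0.921 < 0.932 < 0.953
```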
\begin{remark}
An open problem is to determine whether there exists a set $A$ such that one of the inequalities in Equation \eqref{inequality} is strict and the other is an equality.
\end{remark}
\begin{remark}
\label{rmk:Resman}
According to a recent result due to Maja Resman in \cite{Res}, we know that if $A$ is Minkowski measurable, then the value of the \emph{normalized} $D_d$-\emph{dimensional Minkowski content} of a bounded set $A \subset \mathbb{R}^m$,\footnote{This choice of normalized Minkowski content is well known in the literature; see, e.g., \cite{Fdrr}.} defined by
\begin{align}
\label{eqn:DdMinkowskiContent}
\frac{{\mathscr{M}}^{D_d}(A)}{\omega(m-D_d)},
\end{align}
is independent of the ambient dimension $m$. Here, for $t>0$, we let
\[
\omega(t):=2\pi^{t/2}t^{-1}\Gamma(t/2)^{-1},
\]
where $\Gamma$ is the classical Gamma function. For any positive integer $k$, $\omega(k)$ is equal to the $k$-dimensional Lebesgue measure of the unit ball in $\mathbb{R}^k$. In other words, the value given in Equation \eqref{eqn:DdMinkowskiContent} is intrinsic to the set $A$ and hence independent of the embedding of $A$ in $\mathbb{R}^k$. Therefore, we may ask if the value of the normalized residue,
\begin{align*}
\frac{\mbox{res}(\zeta_d(s);D_d)}{(m-D_d)\,\omega(m-D_d)},
\end{align*}
is also independent of $m$. Combining the preceding two results (namely, Theorems \ref{thm:DistanceZetaFunctionUpperBoxDimension} and \ref{thm:DistanceZetaFunctionAndMinkowskiContent}), we immediately deduce that if $A$ is Minkowski measurable, then the answer is positive.
\end{remark}
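The normalization $\omega(t)$ can be verified against the familiar unit-ball volumes (a quick check of ours):

```python
import math

# Quick check (ours) that omega(t) = 2 pi^{t/2} / (t * Gamma(t/2)) reproduces
# the Lebesgue measure of the unit ball in R^k for small integer k:
# omega(1) = 2 (the interval [-1,1]), omega(2) = pi, omega(3) = 4*pi/3.

def omega(t):
    return 2 * math.pi**(t / 2) / (t * math.gamma(t / 2))

print(omega(1), omega(2), omega(3))
```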
\subsection{Tube zeta function}
\label{sec:TubeZetaFunction}
Given $\varepsilon>0$, it is also natural to introduce the following zeta function of a bounded set $A$ in $\mathbb{R}^m$, involving the tube around $A$ (which we view as the mapping $t\mapsto |A_t|$, for $0\leq t\leq\varepsilon$):\footnote{In the sequel, $|A_t|=|A_t|_m$ denotes the $m$-dimensional volume (Lebesgue measure) of $A_t$, the $t$-neighborhood of $A\subset\mathbb{R}^m$. In our earlier notation, we have $|A_t|=\textnormal{vol}_m(A_t)$.}
\begin{align}
\label{tube_zeta}
\tilde\zeta_A(s) &=\int_0^\varepsilon t^{s-m-1}|A_t|\,dt,
\end{align}
for $\textnormal{Re}(s)$ sufficiently large, where $\varepsilon$ is a fixed positive number. Hence, $\tilde\zeta_A$ is called the \emph{tube zeta function} of $A$. Its abscissa of convergence is equal to $\overline{\dim}_{B}A$, which follows immediately from Theorems \ref{thm:DistanceZetaFunctionUpperBoxDimension} above and \ref{thm:DistanceAndTubeZetaFunctions} below. Tube zeta functions are closely related to distance zeta functions, as shown by the following result; see \cite{LapRaZu}.
\begin{theorem}
\label{thm:DistanceAndTubeZetaFunctions}
If $A$ is a bounded subset of $\mathbb{R}^m$ and $\operatorname{Re}(s)>\overline{\dim}_BA$, then for any $\varepsilon>0$,
\begin{equation}
\label{identity}
\zeta_d(s) =\varepsilon^{s-m}|A_\varepsilon|+(m-s)\tilde\zeta_A(s).
\end{equation}
\end{theorem}
The proof of this result when $s\in\mathbb{R}$ follows, for example, from \cite[Theorem~2.9(a)]{Zu0}, and the proof of the cited theorem is based on integration by parts. By analytic continuation and in light of Corollary \ref{cor:holomorphic}, the identity \eqref{identity} is then extended to complex values of $s$ such that $\textnormal{Re}(s)>\overline{\dim}_BA$.
It follows from \eqref{identity} that the abscissae of convergence of the zeta functions $\zeta_d$ and $\tilde\zeta_A$ are the same, and therefore both coincide with $\overline{\dim}_BA$. Identity \eqref{identity} extends to the $m$-dimensional case the corresponding identity \cite[identity (13.129) in Lemma 13.110, p.~442]{LapRaZu}, which was formulated in the context of $p$-adic fractal strings and ordinary (real) fractal strings. (See also \cite{LapLuvF1}.) Using \eqref{identity} and Theorem \ref{thm:DistanceZetaFunctionAndMinkowskiContent}, it is easy to derive the following consequence; see \cite{LapRaZu}.
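Identity \eqref{identity} can be tested on the simplest possible example (this toy computation is ours, not from the paper): the one-point set $A=\{0\}$ in $\mathbb{R}$, for which both integrals are elementary.

```python
import math

# Toy numerical check (ours) of the identity
#   zeta_d(s) = eps^{s-m} |A_eps| + (m - s) * tube_zeta(s)
# for A = {0} in R, so m = 1, d(x, A) = |x|, A_t = (-t, t), |A_t| = 2t.
# For this A, both zeta functions equal 2 * eps^s / s.

def midpoint(f, a, b, steps=200000):
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

m, s, eps = 1, 0.5, 1.0
zeta_d = midpoint(lambda x: abs(x)**(s - m), -eps, eps)            # integral over A_eps
tube_zeta = midpoint(lambda t: t**(s - m - 1) * 2 * t, 0.0, eps)   # integral over (0, eps)
rhs = eps**(s - m) * 2 * eps + (m - s) * tube_zeta

print(zeta_d, rhs)   # both close to 2 * eps**s / s = 4
```

The midpoint rule is used because it avoids evaluating the (integrable) singularity at the origin; the small residual discrepancy is the quadrature error near that singularity.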
\begin{corollary}
\label{cor_res}
If $D=\dim_BA$ exists, $D<m$, and there exists a meromorphic extension of $\tilde\zeta_A(s)$ to a neighborhood of $s=D$, then
\begin{align*}
&{\mathscr{M}}_*^D \le\operatorname{res}(\tilde\zeta_A(s);D)\le {\mathscr{M}}^{*D}.
\end{align*}
In particular, if $A$ is Minkowski measurable, then
\begin{align*}
\operatorname{res}(\tilde\zeta_A(s);D) &={\mathscr{M}}^D.
\end{align*}
\end{corollary}
As we can see, the tube zeta function is ideally suited to study the Minkowski content.
In Corollary~\ref{cor_res}, we have assumed the existence of a meromorphic extension of the tube zeta function to a neighborhood of $s=D$. This assumption is satisfied under fairly general hypotheses. We provide a result from~\cite{LapRaZu} dealing with the case of Minkowski measurable sets. Non-Minkowski measurable sets can be treated as well; see \cite{LapRaZu}, along with Remark \ref{rmk:NonMinkowskiAnalogs} and Theorem \ref{thm:NonMinkowskiMeasurableCase}.
\begin{theorem}[Minkowski measurable case]
\label{mer}
Let $A$ be a bounded subset of $\mathbb{R}^m$ such that there exist $\alpha>0$, $M\in(0,\infty)$, and $D\in[0,m]$ satisfying
\begin{align}
\label{eqn:MinkowskiMeasurableCase}
|A_t| &= t^{m-D}\left(M+O(t^\alpha)\right)\quad\mbox{as $t\to 0^+$.}
\end{align}
Then $A$ is Minkowski measurable, $\operatorname{dim}_BA=D$, and $\mathscr{M}^D(A)=M$. Furthermore, the tube zeta function $\tilde{\zeta}_A(s)$ has abscissa of convergence $D(\tilde\zeta_A)=D$, and it admits a \emph{(}necessarily unique\emph{)} meromorphic continuation {\rm({\it at least})} to the half-plane $\{\operatorname{Re}(s)>D-\alpha\}$. The only pole in this half-plane is $s=D$; it is simple, and $\operatorname{res}(\tilde\zeta_A;D)=M$.
\end{theorem}
An analogous result holds for the distance zeta function of $A$. Theorem~\ref{mer} shows that the relevant information concerning the possible existence of a nontrivial meromorphic extension of the tube (or the distance) zeta function associated with~$A$ is encoded in the second term of the asymptotic expansion (as $t \to 0^+$) of the tube function $t\mapsto|A_t|$. Various extensions of this result and examples can be found in~\cite{LapRaZu}, as we next discuss. For example, in Theorem \ref{mer}, the conclusion for $\zeta_d$ would be the same, except for the fact that $\operatorname{res}(\zeta_d(s);D)=(m-D)M$ (and $D=D_d(A)$, in our earlier notation from Equation \eqref{eqn:DistanceDimensionEquivalence}, for instance).
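As a simple illustration of Theorem \ref{mer} (this example is ours, not taken from the text), consider the unit segment $A=[0,1]\times\{0\}$ in $\mathbb{R}^2$, so that $m=2$. Its $t$-neighborhood is a $1\times 2t$ rectangle capped by two half-disks of radius $t$, whence

```latex
\[
  |A_t| \;=\; 2t + \pi t^2 \;=\; t^{\,2-1}\bigl(2 + \pi t\bigr),
\]
% i.e., condition \eqref{eqn:MinkowskiMeasurableCase} holds with
% $D = 1$, $M = 2$, and $\alpha = 1$.  Then, for $\operatorname{Re}(s) > 1$
% and $\varepsilon > 0$ fixed,
\[
  \tilde\zeta_A(s)
  \;=\; \int_0^\varepsilon t^{s-3}\bigl(2t + \pi t^2\bigr)\,dt
  \;=\; \frac{2\,\varepsilon^{s-1}}{s-1} + \frac{\pi\,\varepsilon^{s}}{s},
\]
% which continues meromorphically to
% $\{\operatorname{Re}(s) > 0\} = \{\operatorname{Re}(s) > D - \alpha\}$,
% with a single simple pole at $s = D = 1$ and
% $\operatorname{res}(\tilde\zeta_A; 1) = 2 = M = \mathscr{M}^{D}(A)$,
% exactly as Theorem \ref{mer} predicts.
```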
\begin{remark}
\label{rmk:NonMinkowskiAnalogs}
In \cite{LapRaZu}, one can find suitable analogs of Theorem \ref{mer} for non-Minkowski measurable sets, both in the case where the underlying scaling behavior of $|A_t|$ is log-periodic (as for the Cantor set or the Sierpinski gasket and carpet, for example) and in more general, non-periodic situations. Furthermore, in that case, the (visible) complex dimensions of $A$ (i.e., the poles of $\tilde{\zeta}_A$ in the half-plane $\{\textnormal{Re}(s)>D-\alpha\}$, with $\alpha>0$) are also determined, and shown to consist of a vertical infinite arithmetic progression located on the `critical line' $\{\textnormal{Re}(s)=D\}$ (much as for the Cantor string and other lattice self-similar strings); see Theorem \ref{thm:NonMinkowskiMeasurableCase} below for a typical sample theorem. In \cite{LapRaZu}, the case of so-called `quasi-periodic' fractals is also considered in this and related contexts. In the latter situation, one appeals, in particular, to well-known (and rather sophisticated) number-theoretic theorems asserting the transcendence (and hence, the irrationality) of certain expressions. It is noteworthy that (in light of Theorem \ref{thm:DistanceAndTubeZetaFunctions}) all of these results concerning $\tilde{\zeta}_A$ have precise counterparts for the distance zeta function $\zeta_d$; see \cite{LapRaZu}.
\end{remark}
In order to state more precisely a sample theorem in the non-Minkowski measurable case, we introduce the following hypothesis (LP) (log-periodic) and notation:
\begin{itemize}
\item[(LP)] Let $A\subset\mathbb{R}^m$ be a bounded set such that there exist $D\geq 0$, $\alpha>0$, and a periodic function $G:\mathbb{R}\to [0,\infty)$ with minimal period $T>0$, satisfying
\end{itemize}
\begin{align}
\label{eqn:NonConstantPeriodic}
|A_t| &= t^{m-D}(G(\log{t^{-1}})+O(t^\alpha)), \quad \textnormal{as } t\to 0^+.
\end{align}
In the sequel, let
\begin{align}
\label{eqn:NonConstantPeriodicIntegral}
\hat{G}_0(t) &= \int_0^T e^{-2\pi it\tau}G(\tau)\,d\tau, \quad \textnormal{for } t\in\mathbb{R}.
\end{align}
Note that $G$ is nonconstant since we have assumed it to have a positive minimal period, and that $\hat{G}_0$ is (essentially) the Fourier transform of the cut-off function $G_0$ of $G$ to $[0,T]$.
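To see which poles in the theorem below can actually occur for a given amplitude $G$, it is instructive to compute $\hat{G}_0$ for a toy log-periodic profile (this example is ours; the choice $G(\tau)=a+b\cos(2\pi\tau/T)$ is purely illustrative):

```python
import cmath, math

# Toy illustration (ours) of \hat{G}_0 from (eqn:NonConstantPeriodicIntegral):
# for G(tau) = a + b*cos(2*pi*tau/T) with minimal period T, one has
#   \hat{G}_0(0) = a*T,  \hat{G}_0(+-1/T) = b*T/2,  \hat{G}_0(k/T) = 0 for |k| >= 2,
# so only the poles s_0 = D and s_{+-1} = D +- (2*pi/T) i would survive.

a, b, T = 2.0, 0.5, math.log(3)

def G(tau):
    return a + b * math.cos(2 * math.pi * tau / T)

def G_hat(t, steps=20000):
    # midpoint rule over one period; spectrally accurate for smooth periodic G
    h = T / steps
    return sum(cmath.exp(-2j * math.pi * t * (i + 0.5) * h) * G((i + 0.5) * h)
               for i in range(steps)) * h

for k in range(4):
    print(k, abs(G_hat(k / T)))
```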
\begin{theorem}[Non-Minkowski measurable case]
\label{thm:NonMinkowskiMeasurableCase}
Assume that the bounded set $A\subset\mathbb{R}^m$ satisfies the log-periodicity property \emph{(LP)} above. Then $\dim_BA$ exists, $\dim_BA=D$, $G$ is continuous and\footnote{It follows that the restriction of $G$ to $[0,T]$ has range $[\calM^D_*(A),\calM^{*D}(A)]$.}
\begin{align}
\label{eqn:UpperLowerMinkowskiContentNonMinkCase}
\calM^D_*(A)&=\min{G}, \quad \calM^{*D}(A)=\max{G}.
\end{align}
Hence, $A$ is not Minkowski measurable and \emph{(}provided $G>0$\emph{)} is non-degenerate, i.e., $0<\calM^D_*(A)<\calM^{*D}(A)<\infty$.
Furthermore, $\tilde{\zeta}_A$ admits a \emph{(}necessarily unique\emph{)} meromorphic continuation to the half-plane $\{\textnormal{Re}(s)>D-\alpha\}$ with only poles
\begin{align}
\label{eqn:PolesNonMinkowskiCase}
\mathcal{P}(\tilde{\zeta}_A) &=\left\lbrace s_k:=D+\frac{2\pi}{T}ik: \hat{G}_0\left(\frac{k}{T}\right)\neq 0, k\in\mathbb{Z}\right\rbrace,
\end{align}
where $\hat{G}_0$ is given by \eqref{eqn:NonConstantPeriodicIntegral}. These poles are all simple and\footnote{In addition, $|\textnormal{res}(\tilde{\zeta}_A;s_k)|\leq\frac{1}{T}\int_0^TG(\tau)\,d\tau$ for all $k\in\mathbb{Z}$, and $\textnormal{res}(\tilde{\zeta}_A;s_k)\to 0$ as $|k|\to\infty$.}
\begin{align}
\label{eqn:ResiduesNonMinkCase}
\textnormal{res}(\tilde{\zeta}_A;s_k) &= \frac{1}{T}\hat{G}_0\left(\frac{k}{T}\right), \quad \textnormal{for all }k\in\mathbb{Z}.
\end{align}
In particular, for $k=0$, we have that
\begin{align}
\label{eqn:ResidueAsIntegralNonMinkCase}
\textnormal{res}(\tilde{\zeta}_A;D) &=\frac{1}{T}\int_0^TG(\tau)\,d\tau.
\end{align}
Finally, $A$ admits an average Minkowski content $\calM^D_{av}$ \emph{(}which lies in $(0,\infty)$ if $G>0$\emph{)} that is also given by \eqref{eqn:ResidueAsIntegralNonMinkCase}.\footnote{This average Minkowski content is defined exactly as in \cite[Definition~8.29,\S 8.4.3]{LapvF6}, except for the fact that $1-D$ is replaced with $m-D$.}
\end{theorem}
\begin{remark}
\label{rmk:CounterpartTheoremForDistanceZetaFunctions}
As was alluded to in Remark \ref{rmk:NonMinkowskiAnalogs}, Theorem \ref{thm:NonMinkowskiMeasurableCase} has a precise counterpart for the distance zeta function $\zeta_d$. In fact, under the same assumption (LP), the same conclusions as in Theorem \ref{thm:NonMinkowskiMeasurableCase} hold, except for the fact that the counterpart of \eqref{eqn:ResiduesNonMinkCase} and \eqref{eqn:ResidueAsIntegralNonMinkCase}, respectively, reads as follows (with $D=D_{d}(A)$, as in Equation \eqref{eqn:DistanceDimensionEquivalence} and Theorems \ref{thm:DistanceZetaFunctionAndMinkowskiContent} and \ref{thm:NonMinkowskiMeasurableCase}):
\begin{align}
\label{eqn:ResiduesNonMinkCaseDistance}
\textnormal{res}(\zeta_d;s_k) &= \frac{m-s_k}{T}\hat{G}_0\left(\frac{k}{T}\right), \quad \textnormal{for all }k\in\mathbb{Z},
\end{align}
and
\begin{align}
\label{eqn:ResidueAsIntegralNonMinkCaseDistance}
\textnormal{res}(\zeta_d;D) &=(m-D)\frac{1}{T}\int_0^TG(\tau)\,d\tau,
\end{align}
so that $\calM^D_{av}(A)$, the average Minkowski content of $A$, is now given by
\begin{align}
\label{eqn:AverageMinkowskiContent}
\calM^D_{av}(A) &=\frac{1}{T}\int_0^TG(\tau)\,d\tau=\frac{1}{m-D}\textnormal{res}(\zeta_d;D).
\end{align}
\end{remark}
\begin{example}
\label{eg:IllustratingNonMinkCase}
Theorem \ref{thm:NonMinkowskiMeasurableCase} (and its counterpart for $\zeta_d$ discussed in Remark \ref{rmk:CounterpartTheoremForDistanceZetaFunctions}) can be illustrated, for instance, by the Cantor set or string (see \cite{LapRaZu} and compare with \cite[\S 1.1.2]{LapvF6}, including Figures 1.4 and 1.5) and the Sierpinski carpet (as well as, similarly, by the Sierpinski gasket). For the Sierpinski carpet $A$, as discussed in \cite{LapRaZu}, we have $D=\log_{3}8$, $\alpha=D-1$, and $T=\log{3}$. Hence, both $\tilde{\zeta}_A$ and $\zeta_d$ have a meromorphic continuation to $\left\lbrace \textnormal{Re}(s)>1\right\rbrace$, with set of (simple) poles (the visible complex dimensions of $A$) given by
\begin{align}
\label{eqn:IllustrationNonMinkCaseComplexDimensions}
\mathcal{P}(\tilde{\zeta}_A) &= \mathcal{P}(\zeta_d) = \left\lbrace D+\frac{2\pi}{T}ik: \hat{G}_0\left(\frac{k}{T}\right)\neq 0, k\in\mathbb{Z}\right\rbrace.
\end{align}
Actually, in this case, both $\tilde{\zeta}_A$ and $\zeta_d$ can be meromorphically extended to all of $\mathbb{C}$.
\end{example}
A number of other results concerning the existence of meromorphic continuation of $\tilde{\zeta}_A$ and $\zeta_d$ (as well as $\zeta_\mathcal{L}$, in the case of fractal strings) and the resulting structure of the poles (the visible complex dimensions) can be found in \cite{LapRaZu}, under various assumptions on the bounded set $A\subset\mathbb{R}^m$ (or on the fractal string $\mathcal{L}$). Moreover, a number of other applications of the distance and tube zeta functions are provided in \cite{LapRaZu}, in order to study various classes of fractals, including fractal chirps and `zigzagging' fractals.
\begin{remark}
The box-counting zeta function $\zeta_B$ of a set $A \subset \mathbb{R}^m$ given by Definition \ref{def:BoxCountingZetaFunction} is closely related to the tube zeta function $\tilde\zeta_A$. To see this, it suffices to perform the change of variables $x=t^{-1}$ in Equation \eqref{tube_zeta} and compare with Corollary~\ref{cor:EquivalentZetaFunctions}. Note that for $x>0$, we have (under suitable hypotheses) $|A_{1/x}|\asymp x^{-m}N_B(A,x)$ as $x\to\infty$. Here, $N_B(A,x)$ is defined as the number of $x^{-1}$-mesh cubes that intersect $A$; see (iv) in Remark \ref{rmk:VariousCountingFunctions}. It is clear, however, that these two zeta functions are in general not equal to each other. Moreover, we do not know if the corresponding two sets of complex dimensions of $A$, associated with these two zeta functions, coincide.
\end{remark}
Various generalizations of the notion of distance zeta function are possible. One of them, which is especially interesting, deals with zeta functions associated to relative fractal drums. By a \emph{relative fractal drum}, introduced in \cite{LapRaZu}, we mean an ordered pair $(A,\Omega)$, where $A$ is an arbitrary nonempty subset of $\mathbb{R}^m$, and $\Omega$ is an open subset such that $A_\varepsilon$ contains $\Omega$ for some positive $\varepsilon$, and the $m$-dimensional Lebesgue measure of $\Omega$ is finite. (Note that both $A$ and $\Omega$ are now allowed to be unbounded.) The corresponding \emph{relative zeta function} (or the distance zeta function of the relative fractal drum), also introduced in \cite{LapRaZu}, is defined in much the same way as in Equation \eqref{eqn:DistanceZetaFunction}:
\begin{align*}
\zeta_d(s;A,\Omega) &:=\int_{\Omega}d(x,A)^{s-m}dx.
\end{align*}
It is possible to show that the abscissa of convergence of the relative zeta function is equal to the relative box dimension $\overline\dim_B(A,\Omega)$; see \cite{LapRaZu} for details and illustrative examples.\footnote{We caution the reader that, in general, the relative upper box dimension (defined as an infimum over $\alpha \in \mathbb{R}$ instead of over $\alpha\geq 0$) may be negative and that there are even cases where it is equal to $-\infty$ (e.g., when $\textnormal{dist}(A,\Omega)>0)$; see \cite{LapRaZu}.}
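A minimal toy computation (ours, not from \cite{LapRaZu}) shows how the relative zeta function behaves in the simplest case: $A=\{0\}$ and $\Omega=(0,1)$ in $\mathbb{R}$, so $m=1$ and $d(x,A)=x$ on $\Omega$.

```python
# Toy relative fractal drum (ours): A = {0}, Omega = (0, 1) in R (m = 1).
# Then d(x, A) = x on Omega, and
#   zeta_d(s; A, Omega) = int_0^1 x^{s-1} dx = 1/s   for Re(s) > 0,
# so the abscissa of convergence is 0, matching the relative box dimension
# of the single point A relative to Omega.

def relative_zeta(s, steps=200000):
    # midpoint rule; avoids the integrable singularity at x = 0
    h = 1.0 / steps
    return sum(((i + 0.5) * h)**(s - 1) for i in range(steps)) * h

print(relative_zeta(0.5), 1 / 0.5)   # numerical integral vs closed form 1/s
```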
\begin{remark}
\label{rmk:StandardFractalDrum}
For the standard notion of fractal drum, which corresponds to the choice $A=\partial\Omega$ with $\Omega$ bounded, we refer, e.g., to \cite{Lap1,Lap2} along with \cite[\S 12.5]{LapvF6} and the relevant references therein.
\end{remark}
\begin{remark}
\label{rmk:RelativeFractalDrums}
It is easy to see that the notion of relative fractal drum $(A,\Omega)$ is a natural extension of the notion of fractal string ${\mathcal{L}}=\{\ell_j\}$. Indeed, for a given (standard) fractal string ${\mathcal{L}}=\{\ell_j\}$, it suffices to define $A=\{a_j\}$, where $a_j:=\sum_{k\ge j}\ell_k$ and $\Omega=\cup_{k\ge1} (a_{k+1},a_k)$. We caution the reader that the notion of generalized fractal string already exists but does not coincide with the notion of relative fractal drum. Specifically, in \cite[Ch.~4]{LapvF6}, a generalized fractal string is defined to be a locally positive or locally complex measure on $(0,\infty)$ supported on a subset of $(x_0,\infty)$, for some positive real number $x_0$.
\end{remark}
\section{Summary of results and open problems}
\label{sec:SummaryOfResultsAndOpenProblems}
For a bounded infinite set $A$, recall that $\overline{\dim}_{B}A$ denotes the upper box-counting dimension of $A$ given by Equation \eqref{eqn:UpperBoxDimension}, $\overline{\dim}_{M}A$ denotes the upper Minkowski dimension of $A$ given by Equation \eqref{eqn:UpperMinkowskiDimension}, $D_B$ denotes the abscissa of convergence of the box-counting zeta function $\zeta_B$ of $A$ given in Definition \ref{def:BoxCountingZetaFunction}, $\rho_\mathcal{L}$ denotes the order of the geometric counting function $N_\mathcal{L}$ given by Equation \eqref{eqn:OrderOfCountingFunction} where $\mathcal{L}=\mathcal{L}_B$, $D_N$ denotes the value corresponding to the (asymptotic) growth rate of $N_\mathcal{L}$ given by Equation \eqref{eqn:CountingFunctionD}, and $D_d$ is the abscissa of convergence of the distance zeta function $\zeta_d$ given in Definition \ref{def:DistanceZetaFunction}. Furthermore, let $D_t$ denote the abscissa of convergence of the tube zeta function defined in Equation \eqref{tube_zeta}.
The following theorem summarizes our main result (as stated in Theorem \ref{thm:Summary} of the introduction), which pertains to the determination of the box-counting dimension of a bounded infinite set. (Recall that the equalities $\overline{\dim}_{B}A=D_d=D_t$ are established in \cite{LapRaZu}; see Theorem \ref{thm:DistanceZetaFunctionUpperBoxDimension} and the comment following Theorem \ref{thm:DistanceAndTubeZetaFunctions} above.)
\begin{theorem}
\label{thm:MainResult}
Let $A$ be a bounded infinite subset of $\mathbb{R}^m$ and let $\mathcal{L}=\mathcal{L}_B$ be the corresponding box-counting fractal string. Then the following equalities hold\emph{:}
\begin{align*}
\overline{\dim}_{B}A &= \overline{\dim}_{M}A = D_B = \rho_\mathcal{L} = D_N = D_d=D_t.
\end{align*}
\end{theorem}
\begin{proof}
The classic equality $\overline{\dim}_{B}A = \overline{\dim}_{M}A$ is established in \cite{Falc}. The equality $\overline{\dim}_{M}A=D_B=D_N$ follows from Theorem \ref{thm:JohnnyDarko}. The equality $\rho_\mathcal{L}=D_N$ follows from Remark \ref{rmk:DimensionEqualsOrder}. Finally, as was recalled just above, the equalities $\overline{\dim}_{M}A=D_d$ and $\overline{\dim}_{M}A=D_t$ are established in \cite{LapRaZu}; see Theorem \ref{thm:DistanceZetaFunctionUpperBoxDimension} and the comment following Theorem \ref{thm:DistanceAndTubeZetaFunctions}. These last two equalities are valid whether or not $A$ is infinite.
\end{proof}
Recall that, as stated in Definition \ref{def:BoxCountingFunction}, $N_B(A,x)$ denotes the maximum number of disjoint balls of radius $x^{-1}$ centered in $A$. In this setting and for $\varepsilon>0$ we have
\begin{align}
\label{eqn:LowerBoundForVolume}
B_m \varepsilon^m N_B(A,\varepsilon^{-1}) \leq |A_\varepsilon|,
\end{align}
where $A_\varepsilon$ is the $\varepsilon$-neighborhood of $A$, $|A_\varepsilon|=|A_\varepsilon|_m$ denotes its $m$-dimensional Lebesgue measure, $B_m$ is the $m$-dimensional volume of a ball in $\mathbb{R}^m$ with unit radius, and $0<\varepsilon<x_1^{-1}$, where $x_1^{-1}$ is given by Proposition \ref{prop:BoxCountingJumps}.
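To see inequality \eqref{eqn:LowerBoundForVolume} in action, here is a small numerical sanity check of ours in dimension \(m=1\) (where \(B_1=2\)), using the set \(A=\{1/n : 1\le n\le 10^4\}\); on the line, a left-to-right greedy selection of centers at mutual distance at least \(2\varepsilon\) realizes a maximal packing of disjoint balls:

```python
def packing_number(points, eps):
    """N_B(A, 1/eps): maximal number of disjoint open balls of radius
    eps centered in A; on the line, greedy selection is optimal."""
    count, last = 0, None
    for p in sorted(points):
        if last is None or p - last >= 2 * eps:
            count, last = count + 1, p
    return count

def neighborhood_length(points, eps):
    """|A_eps|: Lebesgue measure of the union of [p - eps, p + eps]."""
    ivs = sorted((p - eps, p + eps) for p in points)
    total, (lo, hi) = 0.0, ivs[0]
    for l, h in ivs[1:]:
        if l > hi:          # disjoint from the current merged interval
            total += hi - lo
            lo, hi = l, h
        else:               # overlapping: extend the current interval
            hi = max(hi, h)
    return total + (hi - lo)

A = [1.0 / n for n in range(1, 10_001)]
for eps in (1e-2, 1e-3, 1e-4):
    # B_1 * eps^1 * N_B(A, 1/eps) <= |A_eps|, with B_1 = 2
    assert 2 * eps * packing_number(A, eps) <= neighborhood_length(A, eps)
```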
Motivated by Equation \eqref{eqn:LowerBoundForVolume} and Theorem \ref{thm:CountingFunctionAndComplexDimensions}, we propose the following open problem (which is stated rather roughly here).
\begin{openproblem}
\label{op:BoxCountingAndCountingFormulas}
Let $A$ be a bounded infinite subset of $\mathbb{R}^m$ with box-counting fractal string $\mathcal{L}_B$. Assume suitable growth conditions on $\zeta_B$ \emph{(}such as the languidity of $\zeta_B$ on an appropriate window, see \emph{\cite[Chs.~5 \& 8]{LapvF6})} and assume for simplicity that all of the complex dimensions are simple \emph{(}i.e., are simple poles of $\zeta_B$\emph{)}. Then, as $\varepsilon\to 0^+$, compare the quantities
\begin{align}
\label{eqn:VolumeFormulaOpenProblem}
|A_\varepsilon|, \quad \varepsilon^m N_{\mathcal{L}_B}(\varepsilon^{-1}), \quad \textnormal{and } \, \varepsilon^m \left(\sum_{\omega \in \calD_B}
\frac{\varepsilon^{-\omega}}{\omega}\textnormal{res}(\zeta_B(s);\omega) + R(\varepsilon^{-1})\right),
\end{align}
where $R(\varepsilon^{-1})$ is an error term of small order.
\end{openproblem}
If one were to provide a more precise version of the above open problem and solve it, one might consider pursuing a generalization of Theorem \ref{thm:CriterionForMinkowskiMeasurability} in the spirit of the theory of complex dimensions of fractal strings, as described in \cite{LapvF6}, and of its higher-dimensional counterpart in \cite{LapPe3,LapPe2,LapPeWin}. Naturally, the clarified version of this open problem would consist of replacing the implicit `approximate equalities' in Equation \eqref{eqn:VolumeFormulaOpenProblem} with true equalities, modulo suitable modifications and under appropriate hypotheses.
Analogously (but possibly more accurately), in light of the results from \cite{LapRaZu} discussed in Section \ref{sec:DistanceAndTubeZetaFunctions}, as well as from the results about fractal tube formulas obtained in \cite[Ch.~8]{LapvF6} for fractal strings and in \cite{LapPe3,LapPe2} and especially \cite{LapPeWin} in the higher-dimensional case (for fractal sprays and self-similar tilings),\footnote{A survey of the results of \cite{LapPe3,LapPe2,LapPeWin} can be found in \cite[\S 13.1]{LapvF6}.} we propose the following open problem. (A similar problem can be posed for the tube zeta function $\tilde\zeta_A$ discussed in Section \ref{sec:TubeZetaFunction}.)
\begin{openproblem}
\label{op:VolumeTubularNeighborhood}
Let $A$ be a bounded subset of $\mathbb{R}^m$ with distance zeta function $\zeta_d$. Under suitable growth assumptions on $\zeta_d$ \emph{(}such as the languidity of $\zeta_d$ on an appropriate window, see \emph{\cite[Chs.~5 \& 8]{LapvF6})}, and assuming for simplicity that all of the corresponding complex dimensions are simple, calculate the volume of the tubular neighborhood of $A$ in terms of the complex dimensions of $A$ \emph{(}defined here as the poles of the meromorphic continuation of $\zeta_d$, together with the `integer dimensions' $\{0,1,\ldots,m\}$\emph{)} and the associated residues.
Moreover, even without assuming that the complex dimensions are simple, express the resulting fractal tube formula as a sum of residues of an appropriately defined `tubular zeta function' \emph{(}in the sense of \emph{\cite{LapPe3,LapPe2,LapPeWin,LapPeWi2})}.
\end{openproblem}
\section*{Acknowledgments}
The first author (M.~L.~Lapidus) would like to thank the Institut des Hautes Etudes Scientifiques (IHES) in Bures-sur-Yvette, France, for its hospitality while he was a visiting professor in the spring of 2012 and this paper was completed.
The authors would like to thank the anonymous referees, who provided helpful comments and suggestions in their very thorough reviews of a preliminary version of this paper.
In closing, the authors would like to thank the Department of Mathematics at the University of Messina and the organizers of the Permanent International Session of Research Seminars (PISRS), especially Dr.\ David Carfi, for their organization of and invitations to speak in the First International Meeting of PISRS, Conference 2011: Analysis, Fractal Geometry, Dynamical Systems, and Economics. The authors' participation in this conference led directly to collaboration on the development of the new results presented in the paper, namely the results pertaining to box-counting fractal strings and box-counting zeta functions.
\bibliographystyle{amsalpha}
| {
"timestamp": "2013-02-04T02:00:32",
"yymm": "1207",
"arxiv_id": "1207.6681",
"language": "en",
"url": "https://arxiv.org/abs/1207.6681",
"abstract": "We discuss a number of techniques for determining the Minkowski dimension of bounded subsets of some Euclidean space of any dimension, including: the box-counting dimension and equivalent definitions based on various box-counting functions; the similarity dimension via the Moran equation (at least in the case of self-similar sets); the order of the (box-)counting function; the classic result on compact subsets of the real line due to Besicovitch and Taylor, as adapted to the theory of fractal strings; and the abscissae of convergence of new classes of zeta functions. Specifically, we define box-counting zeta functions of infinite bounded subsets of Euclidean space and discuss results pertaining to distance and tube zeta functions. Appealing to an analysis of these zeta functions allows for the development of theories of complex dimensions for bounded sets in Euclidean space, extending techniques and results regarding (ordinary) fractal strings obtained by the first author and van Frankenhuijsen.",
"subjects": "Mathematical Physics (math-ph); Number Theory (math.NT)",
"title": "Box-counting fractal strings, zeta functions, and equivalent forms of Minkowski dimension",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513889704252,
"lm_q2_score": 0.8596637451167997,
"lm_q1q2_score": 0.8477585962944496
} |
https://arxiv.org/abs/2301.08149 | Zeros of fractional derivatives of polynomials | We investigate the behavior of fractional derivatives of polynomials. In particular, we consider the locations and the asymptotic behaviour of their zeros and give bounds for their Mahler measure. | \section{Introduction}
Questions concerning finding exact or approximate values of the zeros of polynomial functions $p(x) = c_n x^n + c_{n-1}x^{n-1} + \cdots + c_1 x + c_0$ are classical, and (for the case of real coefficients $c_0, c_1, \cdots c_n$) properties of the distribution of these zeros have been studied since at least 1637, when Descartes established his fundamental Rule of Signs (in {\em La G{\' e}om{\' e}trie} \cite{descartes1637}). This important result was refined to finite intervals by Budan in 1807 and by Fourier in 1820 (see \cite{bf1} or \cite{bf2}). By then, thanks to the work of Euler, Gauss and Argand (see \cite{fine1997} for more details), the Fundamental Theorem of Algebra had been established, guaranteeing that a polynomial of degree $n$ has exactly $n$ complex zeros (counted with multiplicity). Not much later,
in 1829, Cauchy \cite{cauchy1829} was able to prove that all zeros of a monic polynomial $p(x)$, with complex coefficients, must lie inside the disk $|z| < 1 + \max_{0 \leq k \leq n-1} |c_k|$,
a bound that was eventually generalized by Landau \cite{landau1907}, Fej{\' e}r \cite{fejer1908} and others.
In another direction,
one could ask about the relation between locations of the zeros of a polynomial $p(x)$ and the zeros of its derivative $p^{\prime}(x)$, as Rolle has done in his {\em Trait{\' e} d’alg{\`e}bre} of 1690 (see \cite{shain1937}); the well-known theorem bearing his name -- that states that between any two zeros of a real polynomial there lies at least one zero of the derivative -- was proved rigorously by Cauchy \cite{cauchy1823} in 1823. In the complex plane, the situation becomes even more interesting. As Gauss noted in 1836, all zeros of $p^{\prime}(x)$ lie in the convex hull of the zeros of $p(x)$.
The first proof of this proposition was published by Lucas \cite{lucas1874} in 1874; it is now known as the Gauss-Lucas Theorem
(also see \cite{marden1966}). At the beginning of the 20th century it was refined by B{\^o}cher \cite{bocher1904}, Jensen \cite{jensen1912} and Walsh \cite{walsh1920}, and in more recent times, several related extensions and generalizations of it have been considered by Dimitrov \cite{dimitrov1998}, Brown \& Xiang \cite{brown1999}, Sendov \cite{sendov2021}, Tao \cite{tao2022} and others.
The main aim of our work is to investigate connections between these two central themes. We will try to show that their key ideas can be combined in a very natural way, but to quite surprising effects, if one considers the fractional
derivatives $p^{(\alpha)} (x)$, where $\alpha \in \mathbb{R}$ ranges over $0 \leq \alpha \leq n = \deg p(x)$.
Our main goal in this paper will be to answer one of the most intriguing questions that arises as soon as one begins to study these topics: since obviously $\deg p^{\prime}(x) = \deg p(x) - 1$, and the Fundamental Theorem asserts that the same reduction must occur for the total number of zeros,
what happens to the zeros of fractional derivatives, as the real $\alpha$ increases continuously from $0$ to $n$? How
do the zeros of polynomials vanish, and why? As it turns out, these questions have remarkably simple and elegant answers. Namely, for a polynomial $p(x)$ of degree $n$, each of its $n$ zeros will belong to a path of unique length that connects it to the origin, where the ``length'' of the path can be measured by the number of zeros of its derivatives it contains; in other words, for each $0 \leq k \leq n-1$ there will be a unique path (originating at one of the zeros of $p(x)$) that will contain exactly $k$ zeros of its higher derivatives. (Figure \ref{fig two} shows this general property for a generic cubic polynomial).
\begin{figure}[ht]
\includegraphics[width=0.45\textwidth]{paper-deg-3-center-0.pdf}
\includegraphics[width=0.45\textwidth]{paper-deg-3-center+1-I.pdf}
\caption{Paths \(z(\alpha)\) of zeros of the Riemann-Liouville fractional derivatives
\(\RL{0}{\alpha}p\) and
\(\RL{1-i}{\alpha}p\)
of the polynomial \(p(x)=x^3 + (1+i)x^2 + x - i\).
We end the paths when \(z(\alpha)\) reaches the ``origin'' (\(a=0\) in the first case, and \(a=1-i\) in the second).
}\label{fig two}
\end{figure}
Another goal of this paper will be to try to understand some of the particulars of the paths the fractional zeros take, their dynamical properties. In order to state our results concerning this general flow of polynomial zeros more precisely, first we will need to recall some basic definitions and properties
of fractional derivatives. There exist a multitude of different definitions of fractional derivatives, each with its own particular advantages and disadvantages. Unlike in
our study of the Riemann zeta function and the Stieltjes constants, where the Gr\"unwald-Letnikov fractional derivative
\cite{g:1867,l:1869-1} worked
really well (see \cite{fps} and \cite{fps2}), in the case of polynomials the divergence of the Gr\"unwald-Letnikov series forces us instead to employ
the Riemann-Liouville fractional derivative \cite{riefrac}, which also can be thought of as a truncated version of the Gr\"unwald-Letnikov fractional derivative \cite{ortigueira2011fractional}.
The rest of the paper is structured as follows. Our Section \ref{sec derv} contains a short introduction to the Riemann-Liouville differintegral focusing on results most applicable to fractional derivatives of polynomials.
In Section \ref{sec low} this theory is applied to the two simplest cases: polynomials of degree one and two. These are the two cases where, thanks to the manageable classical formulas for the zeros, all the main questions can be conclusively answered. With the cubic polynomials things become somewhat murky, but general convergence trends can still be established. In Section \ref{sec flow} we do just that: we examine how the zeros of integral derivatives are connected to the zeros of fractional derivatives in the most general setting, and we look at the paths of zeros and investigate their convergence and the overall flow. In Section \ref{sec bounds} we consider the behaviour of the zeros on a larger scale and we prove bounds for
the Mahler measure of the fractional derivatives, which are then, in Section \ref{caputo}, also established for the Caputo fractional derivative. Finally, in Section \ref{conclusion}, we discuss some intriguing open problems and unsolved questions.
\section{Riemann-Liouville Fractional
Derivatives}\label{sec derv}
In full generality, the \(\alpha\)-th Riemann-Liouville fractional derivative is defined as follows, see \cite{oldham-spanier}, for example:
\begin{definition}[Riemann-Liouville fractional derivative]\label{def rl}
Let \(f\) be analytic on a convex open set \(C\) and let \(a\in C\).
For \(\alpha>0\) the Riemann-Liouville integral is
\[
\RLI{a}{\alpha} f(t)=\frac{1}{\Gamma(\alpha)}
\int_a^t f(\tau)(t-\tau)^{\alpha-1} \,d\tau.
\]
Set \(\RLI{a}{0}f(t)=f(t)\) and for \(\alpha>0\) define the Riemann-Liouville fractional derivative as
\[
\RL{a}{\alpha} f(t) = \frac{d^m}{dt^m} \RLI{a}{m-\alpha} f(t),
\]
where $m=\lceil \alpha \rceil$. For \(\alpha<0\) set
\(\RL{a}{\alpha}f(t)=\RLI{a}{-\alpha} f(t)\).
\end{definition}
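As a sanity check on Definition \ref{def rl} (our own sketch, not from the text): for \(f(t)=t^2\) and \(a=0\), the Riemann-Liouville integral is \(\RLI{0}{\alpha}t^2=\frac{\Gamma(3)}{\Gamma(3+\alpha)}\,t^{2+\alpha}\). The substitution \(v=(t-\tau)^{\alpha}\) removes the endpoint singularity of the kernel, giving \(\RLI{0}{\alpha}f(t)=\frac{1}{\alpha\,\Gamma(\alpha)}\int_0^{t^\alpha} f\bigl(t-v^{1/\alpha}\bigr)\,dv\), which a midpoint rule handles well:

```python
import math

def rl_integral(f, t, alpha, n=100_000):
    """(RLI_0^alpha f)(t) for alpha > 0, computed after the substitution
    v = (t - tau)^alpha, which makes the integrand bounded; midpoint rule."""
    h = t ** alpha / n
    total = sum(f(t - ((k + 0.5) * h) ** (1.0 / alpha)) for k in range(n))
    return total * h / (alpha * math.gamma(alpha))

def rl_integral_exact(t, alpha):
    """Closed form for f(t) = t^2: Gamma(3)/Gamma(3+alpha) * t^(2+alpha)."""
    return math.gamma(3) / math.gamma(3 + alpha) * t ** (2 + alpha)

for al in (0.5, 1.0, 1.7):
    assert abs(rl_integral(lambda s: s * s, 2.0, al)
               - rl_integral_exact(2.0, al)) < 1e-4
```

For \(\alpha=1\) this reduces to the ordinary antiderivative \(\int_0^t \tau^2\,d\tau = t^3/3\), which the quadrature reproduces.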
In what follows, we will only consider the special case of polynomials, composed of the simple power functions $p(x)=(x-a)^{\beta}$, where $\beta \in \mathbb{R}$, $a\in\mathbb{C}$. For these, the $\alpha$-th Riemann-Liouville fractional derivative can be computed
using the Power Rule:
\begin{align}\label{eq poly derv}
\RL{a}{\alpha}(x-a)^{\beta}=\begin{cases}
0 & \text{if }\alpha-\beta-1 \in \N\\
\dfrac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}(x-a)^{\beta-\alpha} & \text{otherwise}
\end{cases}
\end{align}
where, for $z \in \mathbb{C}$, the gamma function $\Gamma(z)$ is defined as $\Gamma(z)=\int_0^\infty t^{z-1}e^{-t}dt$; it satisfies $\Gamma(1+n)=n\Gamma(n)$, which implies that $\Gamma(n) = (n-1)!$ for $n \in \mathbb{N}$. The function $\Gamma(z)$ has no zeros in the complex
plane $\mathbb{C}$, and has poles at all the negative integers (see \cite{artin1964}).
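The Power Rule coefficient in (\ref{eq poly derv}) can be probed directly with the standard library (a sketch of ours; the helper name is not from the text). Note how \(\alpha=1\) recovers the classical derivative, and how the coefficient vanishes when \(\alpha-\beta-1\in\mathbb{N}\), because \(1/\Gamma\) has zeros at the poles of \(\Gamma\):

```python
import math

def power_rule_coeff(beta, alpha):
    """Coefficient Gamma(beta+1)/Gamma(beta-alpha+1) of the Riemann-
    Liouville Power Rule; it vanishes when beta - alpha + 1 is a
    non-positive integer, i.e. when alpha - beta - 1 is a natural number."""
    z = beta - alpha + 1.0
    if z <= 0 and abs(z - round(z)) < 1e-12:
        return 0.0  # 1/Gamma vanishes at the poles of Gamma
    return math.gamma(beta + 1.0) / math.gamma(z)

# alpha = 1 gives the classical d/dx x^beta = beta x^(beta-1):
assert abs(power_rule_coeff(3.0, 1.0) - 3.0) < 1e-12
# the 1.5-th derivative of a constant (beta = 0) is -x^(-1.5)/(2 sqrt(pi)):
assert abs(power_rule_coeff(0.0, 1.5) + 1.0 / (2.0 * math.sqrt(math.pi))) < 1e-12
# integral derivatives of order > beta annihilate x^beta:
assert power_rule_coeff(1.0, 3.0) == 0.0
```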
\begin{remark}
The Riemann-Liouville fractional derivative of a monomial \(f(x)=x^n\) is multivalued. When changing the branch of the complex logarithm in the computation of the fractional derivative all coefficients of the derivative are changed by the same factor.
So choosing a different branch of the complex logarithm does not change the zeros of the derivative, which means that we
can fix the branch in our consideration of zeros of derivatives of polynomials.
We use the principal branch of the complex logarithm.
\end{remark}
It should be noted that the constant $a$ that centers the expansion (\ref{eq poly derv}) plays a key role in all our computations below, as the ``origin,'' or the limit of convergence, of the flow of zeros of derivatives (Figure \ref{fig two} illustrates its role). Also noteworthy is the fact that the Riemann-Liouville fractional derivative satisfies all properties expected of a regular derivative, with the exception of the composition rule. The following example shows why it fails:
\begin{example}
From (\ref{eq poly derv}), the \(1.5\)-th derivative of \(p(x)=1\) is
\(
\textstyle\RL{0}{1.5}p(x)=\frac{1}{\Gamma(-0.5)}x^{-1.5}=-\frac{1}{2 \csqrt{\pi}}x^{-1.5},
\)
and likewise
\(\textstyle\RL{0}{1}\left(\RL{0}{0.5}p(x)\right) =\RL{0}{1}\left(\frac{1}{ \csqrt{\pi}}x^{-0.5}\right)=-\frac{1}{2 \csqrt{\pi}}x^{-1.5}.
\)
However, \(\RL{0}{0.5}\left(\RL{0}{1}p(x)\right)=\RL{0}{0.5}0=0.
\)
\end{example}
When \(\beta\in\mathbb{R}\setminus\mathbb{Z}\) we still have
\(\RL{a}{\alpha}\left(\RL{a}{\beta}p(x)\right)=\RL{a}{\alpha+\beta}p(x)\).
\begin{remark} It is possible to go beyond the standard values of $0 \leq \alpha$, and consider what happens for $\alpha < 0$. Here, there will be \(n+1\) complex roots, because the first term in Equation \ref{eq lem derv} below has the root \(0\). Just like in the standard case, the extended curves \(z(\alpha)\) of zeros of the differintegral \(\RL{a}{\alpha}p\) will be continuous for \(\alpha<0\) unless
\(\RL{a}{\alpha}p\) has a double root; however, they will not be smooth at integral \(\alpha > 0\). More on this will be said in Section \ref{sec flow} below.
\end{remark}
In what follows, we consider the zeros of the fractional derivatives of polynomials \(p\in\mathbb{C}[x]\) of degree \(n\), and we investigate the implicit functions \(z:(0,n)\to \mathbb{C}\) given by
\[
\RL{a}{\alpha}f(z(\alpha))=0.
\]
If \(\left(\RL{a}{\alpha}p(x)\right)'\ne 0\), for \(\alpha\in(0,n)\), then
\(z(\alpha)\) is differentiable on \((0,n)\).
We denote the roots of the polynomial \(p(x)\) by \(z_1, z_2, \dots,z_n\)
and for \(1\le k\le n\) we define the implicit function \(z_k:[0,n)\to\mathbb{C}\) by \(z_k(0)=z_k\)
and \(\RL{a}{\alpha}p(z_k(\alpha))=0\).
The following representation of the fractional derivatives of a general monic polynomial will be most useful.
\begin{lemma}\label{lem derv}
Let \(p(x)=(x-a)^n+\sum_{j=0}^{n-1} c_j (x-a)^{j}\in\mathbb{C}[x]\) and \(\alpha\not\in\N\). Then
\begin{equation}\label{eq lem derv}
\RL{a}{\alpha}p(x)=\frac{n!}{\Gamma(n+1-\alpha)} (x-a)^{-\alpha}
\left[(x-a)^{n}+
\sum_{j=0}^{n-1}
\left(
\prod_{k=j+1}^n\!(k-\alpha)\right)\cdot
\frac{j!}{n!} c_j (x-a)^{j}
\right].
\end{equation}
\end{lemma}
\begin{proof}
Applying the Power Rule (\ref{eq poly derv}), we obtain \begin{align*}
\RL{a}{\alpha}p(x) &=
\frac{\Gamma(n+1)}{\Gamma(n+1-\alpha)} (x-a)^{n-\alpha}+
\sum_{j=0}^{n-1} \frac{\Gamma(j+1)}{\Gamma(j+1-\alpha)} c_j (x-a)^{j-\alpha}\\
&=\frac{\Gamma(n+1)}{\Gamma(n+1-\alpha)} (x-a)^{-\alpha}
\left[(x-a)^{n}+
\sum_{j=0}^{n-1}
\frac{\Gamma(n+1-\alpha)}{\Gamma(j+1-\alpha)}
\frac{\Gamma(j+1)}{\Gamma(n+1)} c_j (x-a)^{j}
\right]
\\
&=\frac{n!}{\Gamma(n+1-\alpha)} (x-a)^{-\alpha}
\left[(x-a)^{n}+
\sum_{j=0}^{n-1}
\left(
\prod_{k=j+1}^n\!(k-\alpha)\right)\cdot
\frac{j!}{n!} c_j (x-a)^{j}
\right],
\end{align*}
as wanted. \qedhere
\end{proof}
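The gamma-quotient manipulation in the proof can be cross-checked numerically (our own sketch; helper names ours). The product form also makes the vanishing of the coefficients as \(\alpha\to m\), for integers \(m>j\), evident, since the factor \(m-\alpha\) appears in the product:

```python
import math

def d_coeff_product(n, j, alpha):
    """prod_{k=j+1}^{n} (k - alpha) * j!/n!  (the form in the Lemma)."""
    prod = 1.0
    for k in range(j + 1, n + 1):
        prod *= k - alpha
    return prod * math.factorial(j) / math.factorial(n)

def d_coeff_gamma(n, j, alpha):
    """Gamma(n+1-alpha)/Gamma(j+1-alpha) * Gamma(j+1)/Gamma(n+1)
    (the form in the first line of the proof)."""
    return (math.gamma(n + 1 - alpha) / math.gamma(j + 1 - alpha)
            * math.gamma(j + 1) / math.gamma(n + 1))

for alpha in (0.3, 1.6, 2.5):           # non-integral orders
    for j in range(4):
        assert abs(d_coeff_product(4, j, alpha)
                   - d_coeff_gamma(4, j, alpha)) < 1e-9
# the factor (2 - alpha) kills the j = 1 coefficient at alpha = 2:
assert d_coeff_product(4, 1, 2.0) == 0.0
```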
\begin{remark} The representation of the fractional derivatives of polynomials given in the above Lemma \ref{lem derv} has the property that their roots only depend on the factors in square brackets, which in turn implies the useful fact that the branch cut of the complex logarithm does not affect the paths \(z(\alpha)\) of
zeros of these fractional derivatives.
We also see that for \(p(x)=(x-a)^n\) we have:
\begin{equation}\label{eq at a}
\RL{a}{\alpha}p(a)=
\begin{cases}
0 &\mbox{if }\alpha<n\\
\Gamma(n+1) & \mbox{if }\alpha=n\\
\mbox{undefined} &\mbox{if }\alpha>n
\end{cases}
\end{equation}
\end{remark}
\begin{remark} A few words should be said about our plots of the implicit functions \(z:\mathbb{R}\to\mathbb{C}\)
with \(z(\alpha)\) given by \(\RL{a}{\alpha}p(z(\alpha))=0\).
The dots labelled `\(\bullet\)\textsf{k}' represent zeros of the \(k\)th Riemann-Liouville differintegral (thus `\(\bullet \)\textsf{0}'
represents the zeros of the polynomial \(p(x)\) itself), while circles
`\(\circ\)\textsf{k}' represent points that are limits of \(z(\alpha)\) as \({\alpha\to k}\) but are not zeros of \(\RL{a}{k}p\). These occur for integral \(k\) with \(k\ge\deg p(x)\), where \(\RL{a}{k}p(x)\) is constant, for example at \(x=a\), see Equation (\ref{eq at a}).
Moreover, when a point is either a zero or a limit point of zeros of both the \(j\)th and the \(k\)th differintegrals, then it is represented by `\(\bullet\) \textsf{j \& k}' or `\(\circ\) \textsf{j \& k}' respectively.
In Figures \ref{fig two}, \ref{fig double}, and \ref{fig longest} we let all paths of zeros of \(\RL{a}{\alpha}p(x)\) end when they reach the origin \(a\). In Figures \ref{fig one}, \ref{fig double}, \ref{fig p1},
\ref{fig quintic paths}, and \ref{fig quintic abs}
we continue the paths past \(a\).
In Figures \ref{fig one}, \ref{fig double}, and \ref{fig p1} we display the path of zeros \(z(\alpha)\) for \(\alpha<0\) and \(\alpha>n\) in lighter colors than for \(0\le \alpha < n \) where \(n\) is the degree of the polynomial.
The point \(a\) is only labeled with values for \(\alpha\) for \(\alpha\ge 0\).
\end{remark}
\begin{figure}
\includegraphics[width=0.45\textwidth]{paper-deg-1.pdf}
\includegraphics[width=0.45\textwidth]{paper-deg-2.pdf}
\caption{Path $z(\alpha)$ of zeros of differintegrals of the polynomials \(p(x)=x-2-3i\) and $p(x)=(x-2-3i)(x+2+i)$.}
\label{fig one}
\end{figure}
\section{Low Degrees}\label{sec low}
Formulas for finding zeros of polynomials of low degrees have been known for centuries. Applying the Riemann-Liouville derivative to these low-degree cases
has proved to be a simple but informative exercise. In this section we summarize some of these results, stating the most useful ones as lemmas. They are
examples of a
dynamic that shares certain key characteristics with the higher-degree cases, although some of its aspects are unique to low degrees. For example, in the linear case,
the path the zeros take is also linear, while already in the quadratic case one observes a considerably more complex behavior.
Let us start with the linear polynomials. Here the situation is simple. The paths of zeros will always be linear, too, and they can be completely described. We have:
\begin{lemma}[Linear Polynomials]\label{lem linear}
Let \(p(x)=(x-a)+c_0\). As $\alpha$ increases from $0$ to $1$, the path of zeros of \(\RL{a}{\alpha}p(x)\) is given by \(z(\alpha)=(\alpha-1)c_0+a\).
\begin{proof}
From (\ref{eq poly derv}) we get \(\RL{a}{\alpha}p(x)=\frac{1}{\Gamma(2-\alpha)}(x-a)^{-\alpha}\left((x-a)+(1-\alpha)c_0\right)\), which vanishes precisely at \(z(\alpha)=(\alpha-1)c_0+a\).
\end{proof}
\end{lemma}
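Lemma \ref{lem linear} is easy to verify numerically: up to a factor that never vanishes for \(x\ne a\), the fractional derivative reduces to the linear bracket \((x-a)+(1-\alpha)c_0\), which is zero exactly at \(z(\alpha)=(\alpha-1)c_0+a\). A minimal check of ours with complex sample data:

```python
def linear_bracket(x, a, c0, alpha):
    """Bracket factor of RL_a^alpha((x - a) + c0): (x - a) + (1 - alpha)*c0."""
    return (x - a) + (1.0 - alpha) * c0

a, c0 = 1.0 - 1.0j, -2.0 - 3.0j          # sample complex data (ours)
for alpha in (0.0, 0.25, 0.5, 0.9):
    z = (alpha - 1.0) * c0 + a           # the path from Lemma (lem linear)
    assert abs(linear_bracket(z, a, c0, alpha)) < 1e-12
```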
\begin{remark}
In addition to considering the $\alpha$-th derivatives in the usual range $0 \leq \alpha \leq n$, one could also look at what happens when $\alpha < 0$ and when $\alpha>n$.
In the linear case this is, again, simple. From Lemma \ref{lem linear} we can deduce that for $\alpha<0$ the line of zeros of the derivatives simply continues, and similarly the zeros for $\alpha>1$, $\alpha\not\in\mathbb{Z}$, lie on the same line; see Figure \ref{fig one} below.
\end{remark}
Let us now consider the quadratic case. This case is considerably more interesting, since there are
now two paths of zeros of the fractional derivatives, and they exhibit a much more complex and intricate behavior.
We first notice that the path of the zero closest to $a$ will directly connect with $a$, while the path of the farther zero will in the process of reaching $a$ pass through the zero of the first derivative.
\begin{lemma}\label{lem quad root order}
Let \(g(x)=x^2+bx+c\in\mathbb{C}[x]\) with \(b\neq 0\), set \(d=1-\frac{4c}{b^2}\), and denote the roots of \(g\) by \(s_1\) and \(s_2\).
\begin{enumerate}
\item If \(d\in\mathbb{C}\setminus\mathbb{R}^{<0}\), denote by \( \csqrt{d}\) the
complex number $r$ with \(r^2=d\) and \(\Re(r)> 0\). Then for the
roots \(s_1=\frac{b}{2}\left(-1+ \csqrt{d}\right)\) and \(s_2=\frac{b}{2}\left(-1- \csqrt{d}\right)\)
of \(g(x)\) we have \(|s_1|\le |s_2|\).
\item If \(d\in\mathbb{R}^{<0}\), then \(|s_1|=|s_2|\).
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Write \(d=1-\frac{4c}{b^2}\).
Then
\(|s_1|=\left|\frac{b}{2}\right|\cdot \left|-1+ \csqrt{d}\right|\)
and \(|s_2|=\left|\frac{b}{2}\right|\cdot \left|-1- \csqrt{d}\right|\).
Now
\begin{align*}
\left|-1+ \csqrt{d}\right|^2 & = \left(-1+ \csqrt{d}\right)\left(-1+\overline{ \csqrt{d}}\right)=1- \csqrt{d}-\overline{ \csqrt{d}}+|d| \text{ and }\\
\left|-1- \csqrt{d}\right|^2 &= \left(-1- \csqrt{d}\right)\left(-1-\overline{ \csqrt{d}}\right)=1+ \csqrt{d}+\overline{ \csqrt{d}}+|d|
\end{align*}
Because \(\Re\left( \csqrt{d}\right)> 0 \) we have
\(|s_1|\le |s_2|\).
\item Now assume $\Re(r)=0$, meaning $r$ is completely imaginary, i.e. $r=it$. Then using the information above we have,
\begin{align*}
\left|-1+ \csqrt{d}\right|^2 & =1- \csqrt{d}-\overline{ \csqrt{d}}+|d| = 1-r-\overline{r}+|r^2| =1-it-\overline{it}+\left|(it)^2\right| \\
& =1-t^2 =1+it-it-t^2 =1+it+\overline{it}-t^2 =\left|-1- \csqrt{d}\right|^2.
\end{align*}
Therefore, we have \(|s_1|= |s_2|\).\qedhere
\end{enumerate}
\end{proof}
\begin{remark}
As of yet, there is no known reliable ordering of the zeros for any of the higher degree polynomials. In fact, there exist examples of cubic polynomials for which the standard Euclidean distance (which works so well for the linear and the quadratic cases) can be shown to fail: see Figure \ref{fig longest}.
\end{remark}
In addition to the natural ordering on the quadratic roots, another question that seems to be of interest is the one that concerns the trends of descent of their paths, especially since it had such a nice answer in the linear case. As it turns out, the asymptotes of the two quadratic paths exist, and the quadratic formula alone is enough to help us find them.
\begin{theorem}\label{prop roots quad}
For the quadratic polynomial $p(x)=(x-a)^2+c_1(x-a)+c_0$, the paths of zeros of the fractional derivatives \(\RL{a}{\alpha} p(x)\) are given as
\begin{equation}\label{eq roots quad}
z_{1,2}(\alpha)=a+\frac{-(2-\alpha) c_1\pm \csqrt{(2-\alpha)^2\cdot (c_1)^2 - 8(2-\alpha)(1-\alpha) \cdot c_0}}{4},
\end{equation}
with \(|z_1(\alpha)-a|\le|z_2(\alpha)-a|\), for {\(\alpha\in[0,1]\)}, and \(\lim_{\alpha\to 1}z_1(\alpha)=a\)
and
\(\lim_{\alpha\to 2}z_{1,2}(\alpha)=a\).
\begin{proof}
With the help of (\ref{eq poly derv}), the fractional derivatives of \(p(x)\) can be written as
\[
\RL{a}{\alpha}p(x)=\frac{2}{\Gamma(3-\alpha)}(x-a)^{-\alpha}
\left(
(x-a)^{2} +\frac{(2-\alpha)\cdot c_1}{2} (x-a) +\frac{(2-\alpha)(1-\alpha)\cdot c_0}{2}\right).
\]
Now, set $y=x-a$. Then the roots of \(y^{2} +\frac{(2-\alpha)\cdot c_1}{2} y +\frac{(2-\alpha)(1-\alpha)\cdot c_0}{2}\)
are
\[
z_{1,2}(\alpha)=a+\frac{-(2-\alpha) c_1\pm \csqrt{(2-\alpha)^2\cdot (c_1)^2 - 8(2-\alpha)(1-\alpha) \cdot c_0}}{4}
\]
The ordering \(|z_1(\alpha)-a|\le|z_2(\alpha)-a|\), for {\(\alpha\in[0,1]\)}, follows from Lemma \ref{lem quad root order} applied to the quadratic in \(y=x-a\). Furthermore, we have
\[
\lim_{\alpha\to 1} z_{1,2}(\alpha)=a+\frac{- c_1\pm \csqrt{ (c_1)^2}}{4}=a+\frac{-c_1\pm c_1}{4}.
\]
Thus $\lim_{\alpha\to 1} z_1(\alpha)=a$ and $\lim_{\alpha\to 1} z_2(\alpha)= a+\frac{c_1}{2}$.
Similarly
\[
\lim_{\alpha\to 2}\left[a+\frac{-(2-\alpha) c_1\pm \csqrt{(2-\alpha)^2\cdot (c_1)^2 - 8(2-\alpha)(1-\alpha) \cdot c_0}}{4}\right]=a.\qedhere
\]
\end{proof}
\end{theorem}
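Equation \eqref{eq roots quad} can be checked numerically by substituting both paths back into the quadratic factor from the proof (a sketch of ours; \texttt{cmath.sqrt} uses the principal branch, which may differ from \( \csqrt{\,\cdot\,}\) by a sign and thus only swaps the labels of \(z_1\) and \(z_2\)):

```python
import cmath

def quad_paths(a, c1, c0, alpha):
    """Both root paths of RL_a^alpha((x-a)^2 + c1(x-a) + c0),
    following Equation (eq roots quad)."""
    disc = (2 - alpha) ** 2 * c1 ** 2 - 8 * (2 - alpha) * (1 - alpha) * c0
    r = cmath.sqrt(disc)
    return (a + (-(2 - alpha) * c1 + r) / 4,
            a + (-(2 - alpha) * c1 - r) / 4)

def quad_factor(x, a, c1, c0, alpha):
    """Quadratic factor in y = x - a appearing in the proof."""
    y = x - a
    return y * y + (2 - alpha) * c1 / 2 * y + (2 - alpha) * (1 - alpha) * c0 / 2

a, c1, c0 = 1 + 1j, 2 - 3j, -1 + 2j      # sample data (ours)
for alpha in (0.0, 0.7, 1.3, 1.9):
    for z in quad_paths(a, c1, c0, alpha):
        assert abs(quad_factor(z, a, c1, c0, alpha)) < 1e-9
# both paths collapse to the origin a as alpha -> 2:
assert all(abs(z - a) < 1e-12 for z in quad_paths(a, c1, c0, 2.0))
```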
\begin{corollary}
For \(\alpha \to \pm\infty\), the asymptotes of the quadratic paths are
\[\textstyle z_{1,2}(\alpha)\approx\frac{(2-\alpha) c_1}{4}\left[-1\pm \csqrt{1-\frac{8c_0}{c_1^2}}\right]. \]
\end{corollary}
\begin{proof}
By (\ref{eq roots quad})
we have,
\begin{align*}
z_{1,2}(\alpha)&=\frac{-(2-\alpha) c_1\pm(2-\alpha) c_1 \csqrt{1 - 8\frac{(1-\alpha)}{(2-\alpha)} \cdot \frac{c_0}{c_1}}}{4}\\
&=\frac{(2-\alpha) c_1}{4}\left[-1\pm \csqrt{1-\frac{8c_0(1-\alpha)}{c_1^2(2-\alpha)}}\right],\\
\end{align*}
and since $\frac{1-\alpha}{2-\alpha}\to 1$, as $\alpha\to \pm\infty$, this yields linear asymptotes for $z_1(\alpha)$ and $z_2(\alpha)$.
\end{proof}
A noteworthy special case occurs when the polynomial has a double root. Then the paths display an interesting symmetry; see Figure \ref{fig double}.
In fact, it is easy to see that specializing Theorem \ref{prop roots quad} to the case of a double zero of the polynomial itself yields:
\begin{corollary}
If $p(x)=(x-z_0)^2$ then the zeros of \(\RL{0}{\alpha}p\) are
\[z_{1,2}(\alpha)=\frac{z_0\left((2-\alpha) \pm \csqrt{\alpha(2-\alpha)}\right)}{2}. \]
\end{corollary}
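For a double root one can check numerically, directly from the quadratic factor (our own sketch), that both paths start at \(z_0\) when \(\alpha=0\) and meet at the origin \(a=0\) as \(\alpha\to2\); here \(c_1=-2z_0\), \(c_0=z_0^2\), and the discriminant reduces to \(4z_0^2\,\alpha(2-\alpha)\):

```python
import cmath

def double_root_paths(z0, alpha):
    """Root paths for p(x) = (x - z0)^2 with a = 0, computed from the
    quadratic factor of the fractional derivative (c1 = -2 z0, c0 = z0^2)."""
    c1, c0 = -2 * z0, z0 * z0
    disc = (2 - alpha) ** 2 * c1 ** 2 - 8 * (2 - alpha) * (1 - alpha) * c0
    r = cmath.sqrt(disc)
    return ((-(2 - alpha) * c1 + r) / 4, (-(2 - alpha) * c1 - r) / 4)

z0 = 2.0 + 1.0j                          # sample double zero (ours)
assert all(abs(z - z0) < 1e-12 for z in double_root_paths(z0, 0.0))  # start
assert all(abs(z) < 1e-12 for z in double_root_paths(z0, 2.0))       # end
```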
\begin{figure}
\includegraphics[width=0.45\textwidth]{paper-deg-2-double-totale.pdf}
\includegraphics[width=0.45\textwidth]{paper-deg-2-double-root-abs.pdf}
\caption{Paths \(z(\alpha)\) of zeros of the fractional derivatives of
\(p(x)=x^2+(2+6i)x-12+9i\) where \(\RL{0}{1/2}p\) has a double root. In the plot on the right the wide, light green graph represents the real part of \(z(\alpha\)), while the thin, dark red graph represents its imaginary part.
}\label{fig double}
\end{figure}
Another natural question to ask is whether a quadratic polynomial with distinct zeros can have a fractional derivative with a double zero. Setting \(z_1(\alpha)=z_2(\alpha)\) one gets:
\begin{corollary}\label{lem double zero}
Let $p(x)=(x-a)^2+c_1(x-a)+c_0$. Then the fractional derivative \(\RL{a}{\alpha}p(x)\) will have a double zero for precisely one \(\alpha\in\mathbb{R}\setminus\mathbb{N}\), namely:
\(\alpha = 1-\frac{c_1^2}{8c_0- c_1^2}\).
\end{corollary}
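The value of \(\alpha\) above comes from setting the discriminant \((2-\alpha)^2c_1^2-8(2-\alpha)(1-\alpha)c_0\) of the quadratic factor to zero and dividing out \((2-\alpha)\neq0\). A quick numerical confirmation of ours, using the polynomial from Figure \ref{fig double}, whose fractional derivative has a double root at \(\alpha=1/2\):

```python
def double_zero_alpha(c1, c0):
    """alpha = 1 - c1^2/(8 c0 - c1^2): the unique non-integral order at
    which RL_a^alpha((x-a)^2 + c1(x-a) + c0) acquires a double zero."""
    return 1 - c1 ** 2 / (8 * c0 - c1 ** 2)

def discriminant(c1, c0, alpha):
    """Discriminant of the quadratic factor of the fractional derivative."""
    return (2 - alpha) ** 2 * c1 ** 2 - 8 * (2 - alpha) * (1 - alpha) * c0

# Example from Figure (fig double): p(x) = x^2 + (2+6i)x - 12 + 9i.
c1, c0 = 2 + 6j, -12 + 9j
alpha = double_zero_alpha(c1, c0)
assert abs(alpha - 0.5) < 1e-12
assert abs(discriminant(c1, c0, alpha)) < 1e-9
```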
\section{Flow of Zeros}\label{sec flow}
As stated above, one of our main goals was to consider the paths of zeros of the fractional derivatives of polynomials \(p(x) \in\mathbb{C}[x]\) of arbitrary degrees. Unfortunately, unlike in the linear and quadratic cases, already for the cubics we find that the situation becomes considerably more complicated. This can be seen from the fact that one of the nicest properties -- the natural ordering of zeros -- fails already for degree 3: in other words,
it is not true in general that zeros furthest away from the origin will yield the longest paths of zeros of fractional derivatives on its way to the origin. Figure
\ref{fig longest} shows a notable counterexample.
\begin{figure}
\includegraphics[width=0.45\textwidth]{paper-furthest-longest-counterexample-paths.pdf}
\includegraphics[width=0.45\textwidth]{paper-furthest-longest-counterexample-abs.pdf}
\caption{The paths \(z(\alpha)\) of zeros of the fractional derivatives of
the cubic \(p(x)=(x-3-2i)(x-2+5i)(x-4+4i)\)
and the absolute values \(|z(\alpha)|.\)
In the latter case, the zero of \(p(x)\) with the greatest absolute value is
not the starting point of the longest path.
}
\label{fig longest}
\end{figure}
However, certain convergence properties of the paths can be established in general. For example, the following theorem shows that all the paths terminate at the origin $a$.
\begin{theorem}\label{theo roots lim}
Let \(p(x) \in \mathbb{C}[x]\) be of degree \(n\) such that for all \(\alpha\in [0,n]\)
the fractional derivative \(p^{(\alpha)}(x)\) has no double zeros.
Then there is an ordering of the roots \(z_1,\,z_2,\,\dots,\,z_n\) of \(p\) such that, with
\(p^{(\alpha)}(z_j(\alpha)) = 0\) and \(z_j(0) = z_j\) for \(j\in\{1,\dots,n\}\), we have
\(\lim_{\alpha \to j} z_j(\alpha) = a\).
\end{theorem}
\begin{proof}
We denote the coefficients in the expansion proved in Lemma \ref{lem derv} by
\begin{equation}
d^\alpha_j:=\prod_{k=j+1}^n\!(k-\alpha)\cdot \frac{j!}{n!}. \label{eq dalphaj}
\end{equation}
Let \(0\le j\le n\) and let \(m>j\) be an integer. Then clearly
\[
\lim_{\alpha\to m}
d^\alpha_j=
\lim_{\alpha\to m}
\prod_{k=j+1}^n\!(k-\alpha)\cdot \frac{j!}{n!}=0.
\]
Express the coefficients \(d^\alpha_j\,c_j\) of the derivative \(\RL{a}{\alpha}p(x)\)
as symmetric functions of its roots
\(z_1(\alpha),z_2(\alpha), \dots, z_n(\alpha)\).
Because
\[
0=\lim_{\alpha\to m} d^\alpha_0\,c_0= \lim_{\alpha\to m} \prod_{k=1}^n z_k(\alpha)
\]
we have
\(\lim_{\alpha\to m} z_{k_1}(\alpha)=0\), for at least one \(1\le k_1\le n\).
Inductively, continuing with the next coefficient we get:
\begin{equation}\label{eq sym 1}
0=\lim_{\alpha\to m} d^\alpha_1\,c_1= \lim_{\alpha\to m}
\sum_{l=1}^n \prod_{k\ne l} z_k(\alpha).
\end{equation}
There is one summand in (\ref{eq sym 1}) that does not contain \(z_{k_1}(\alpha)\).
So we need to have \(\lim_{\alpha\to m} z_{k_2}(\alpha)=0\) for at least one \(k_2\ne k_1\).
This argument also holds for all \(d^\alpha_j\,c_j\) with \(j<m\).
Therefore, we get \(\lim_{\alpha\to m} z_{k}(\alpha)=0\) for at least \(m\) distinct
\(k\in\{1,\dots, n\}\).
\end{proof}
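The collapse of roots described by Theorem \ref{theo roots lim} is easy to observe numerically. The following Python sketch (an illustration of ours, using the coefficients \(d^\alpha_j\) from (\ref{eq dalphaj}) with base point \(a=0\)) builds the monic polynomial \(x^n+\sum_j d^\alpha_j c_j x^j\) for the cubic of Figure \ref{fig p1} and checks that, as \(\alpha\) approaches \(m\in\{1,2,3\}\), at least \(m\) of its roots are close to the origin.

```python
import numpy as np
from math import factorial

def rls_coeffs(c, alpha):
    """numpy-ordered coefficients (highest degree first) of the monic
    polynomial x^n + sum_j d_j^alpha c_j x^j, with d_j^alpha as in (eq dalphaj);
    c is the list [c_0, ..., c_n] with c_n = 1."""
    n = len(c) - 1
    out = [1.0 + 0j]
    for j in range(n - 1, -1, -1):
        d = np.prod([k - alpha for k in range(j + 1, n + 1)]) * factorial(j) / factorial(n)
        out.append(d * c[j])
    return np.array(out)

# p(x) = x^3 + x^2 + x + 1 + i, the cubic of Figure (fig p1)
c = [1 + 1j, 1, 1, 1]
for m in (1, 2, 3):
    roots = np.roots(rls_coeffs(c, m - 1e-6))
    # as alpha -> m, at least m of the three roots collapse towards 0
    assert sum(abs(z) < 0.1 for z in roots) >= m
```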
\begin{figure}
\includegraphics[width=0.45\textwidth]{paper-deg-3-p1-totale.pdf}
\includegraphics[width=0.45\textwidth]{paper-deg-3-p1-totale-mahler.pdf}
\caption{Paths \(z(\alpha)\) of zeros of \(\RL{0}{\alpha}(x^3+x^2+x+1+i)\),
illustrating the growth of the absolute value of \(z(\alpha)\) as
\(\alpha\to\infty\) and \(\alpha\to-\infty\) and the Mahler measure
of \(\RL{0}{\alpha}p\) along with the bounds from Theorems \ref{theo upper bound} and \ref{theo lower bound}.
}
\label{fig p1}
\end{figure}
\section{Bounds}\label{sec bounds}
As stated in the introduction, for \(f\in\mathbb{C}[x]\) the Gauss-Lucas theorem states that all zeros of \(f'\) lie in the convex hull of the set of zeros of \(f\), see \cite[Theorem 6.1]{marden1966}. By induction this generalizes to all integer-order derivatives. Unfortunately, although all roots of the fractional derivatives converge to the origin by our Theorem \ref{theo roots lim}, the analogue of the Gauss-Lucas theorem does not hold for fractional derivatives.
This is an immediate consequence of a result by Genchev and Sendov
{\cite{genchev-sendov}}, which is also stated as {\cite[Theorem 2]{nikolov-sendov}}:
\begin{theorem}
Let \(L:\mathbb{C}[x]\to\mathbb{C}[x]\) be a linear operator such that \(L(p)\ne 0\) implies that
the convex hull of the set of roots of \(p\) contains the roots of \(L(p)\).
Then \(L\) is a linear functional, or there are \(c\in\mathbb{C}\setminus\{0\}\) and \(k\in\N\) such that \(L(p)=cp^{(k)}\).
\end{theorem}
Figure \ref{fig longest} illustrates this result by giving a specific counterexample to the Gauss-Lucas property for the case of the Riemann-Liouville fractional derivatives. Nevertheless, it is possible to make some useful statements about how the absolute values of zeros
\(z_k(\alpha)\)
of the fractional derivatives \(\RL{0}{\alpha} f\)
decrease as \(\alpha\) increases in terms of the Mahler measure of \(f\).
Let \(p(x)=x^n+\sum_{j=0}^{n-1} c_j x^j=\prod_{j=1}^n (x-z_j)\in\mathbb{C}[x]\).
For the Mahler measure \(M(p)\) \cite{mahler} we have
\[
M(p) = \exp\left(\int_0^1 \log(|p(e^{2\pi i\theta})|)\, d\theta \right)= \prod_{j=1}^n \max\{1,|z_j|\}.
\]
Denote the height of \(p\) by
\(
\nheight{p} = \max\{|c_0|,\dots,|c_n|\}
\)
and the length of \(p\) by
\(
\nlength{p}=|c_0|+\dots+|c_n|.
\)
Recall that Mahler was able to prove the bounds
\begin{equation}\label{eq mahler height}
\binom{n}{\lfloor n/2 \rfloor}^{-1} \nheight{p} \le M(p) \le \nheight{p} \csqrt{n+1}
\end{equation}
and
\begin{equation}\label{eq mahler length}
2^{-n} \nlength{p} \le M(p) \le \nlength{p}.
\end{equation}
For \(\ntwo{p} = \left(\sum_{j=0}^n |c_j|^2\right)^\frac{1}{2}\) we have Landau's inequality
\cite{landau1905}
\begin{equation}\label{eq landau}
M(p)\le \ntwo{p}.
\end{equation}
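The classical bounds (\ref{eq mahler height}), (\ref{eq mahler length}), and (\ref{eq landau}) are easy to verify numerically for a concrete polynomial; the sketch below (our own illustration) does so for the cubic \(x^3+x^2+x+1+i\) used in Figure \ref{fig p1}, computing \(M(p)\) from the numerically obtained roots.

```python
import numpy as np
from math import comb, sqrt

def mahler(coeffs):
    """Mahler measure from the roots (coeffs in numpy order, highest first)."""
    return float(np.prod([max(1.0, abs(z)) for z in np.roots(coeffs)]))

p = [1, 1, 1, 1 + 1j]                 # x^3 + x^2 + x + 1 + i, monic, n = 3
n = len(p) - 1
M = mahler(p)
H = max(abs(x) for x in p)            # height
L = sum(abs(x) for x in p)            # length
l2 = sqrt(sum(abs(x) ** 2 for x in p))

assert H / comb(n, n // 2) <= M <= H * sqrt(n + 1) + 1e-9   # (eq mahler height)
assert L / 2 ** n <= M <= L + 1e-9                          # (eq mahler length)
assert M <= l2 + 1e-9                                       # (eq landau)
```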
We generalize the definition of the Mahler measure to fractional derivatives.
Let \(Z(\alpha)\) be the set of zeros of \(\RL{0}{\alpha}p\). We set
\[
M\left(\RL{0}{\alpha}p\right)=\prod_{z\in Z(\alpha)}\max\{1,|z|\}
\]
and prove bounds analogous to (\ref{eq mahler height}),
(\ref{eq mahler length}), and (\ref{eq landau})
for the fractional case.
We first estimate the coefficients \(d^\alpha_j\) defined in (\ref{eq dalphaj}) and then use them to derive bounds for \(M\left(\RL{0}{\alpha}p\right)\).
\begin{figure}
\includegraphics[width=0.45\textwidth]{paper-mahler-paths.pdf}
\includegraphics[width=0.45\textwidth]{paper-caputo-mahler-paths.pdf}
\caption{Comparing paths \(z(\alpha)\) of zeros of the Riemann-Liouville (left) and Caputo (right) fractional derivatives
of the quintic
\(p(x)=(x+1-2i)(x-3-2i)(x-2+5i)(x-3+3i)(x+1+5i)\), for \(0\le\alpha\le 5\).
}
\label{fig quintic paths}
\end{figure}
\begin{lemma}\label{lem dj}
Let \(n\in\N\) and \(\alpha\in\mathbb{R}\setminus\N\).
Let
\(d^\alpha_j=\left(\prod_{k=j+1}^n\!(k-\alpha)\right)\cdot \frac{j!}{n!}
\) where \(0 \le j\le n-1\). Then
\begin{enumerate}
\item\label{dj gt 0}
\(|d^\alpha_j| \le \frac{n-\alpha}{n}\), for \(0<\alpha< n\).
\item\label{dj lt}
\(|d^\alpha_j| \le \frac{\ff{|n-\alpha|}{n}}{n!}\),
where \(\ff{|n-\alpha|}{n}=\prod_{k=0}^{n-1} |n-\alpha-k|\),
for \(\alpha<0\) and \(\alpha>n\).
\item\label{dj lt 0 gt 2n}
\( |d^\alpha_j|\ge\frac{|n-\alpha|}{n}\), for \(\alpha<0\)
and \(\alpha>2n\).
\item\label{dj gt n}
\(|d^\alpha_j|\ge\frac{\alpha-n}{n} \binom{n-1}{\lceil(n-1)/2\rceil}^{-1}\), for \(n<\alpha\le 2n\).
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item
For \(0<\alpha < j+1\), each factor satisfies \(0<\frac{k-\alpha}{k}<1\), so
\[
|d^\alpha_j| = \frac{n-\alpha}{n} \prod_{k=j+1}^{n-1}\frac{k-\alpha}{k} \le \frac{n-\alpha}{n}.
\]
When \(j+1 < \alpha\) set \(h:=\lfloor\alpha\rfloor\). We get
\begin{align*}
|d^\alpha_j|
&=
\frac{n-\alpha}{n}\cdot\frac{(n-1-\alpha)\cdots(h+1-\alpha)\cdot(\alpha-j-1)\cdots(\alpha-h)\cdot j!}{(n-1)!}
\le \frac{n-\alpha}{n}.
\end{align*}
\item For \(\alpha<0\) and \(\alpha>n\), we have
\begin{align*}
|d^\alpha_j|
&=
\frac{|n-\alpha|\cdots|j+1-\alpha|\cdot j!}{n!}
=
\frac{|\alpha-n|\cdots|\alpha-j-1|\cdot j!}{n!}\\
&\le
\frac{|\alpha-n|\cdots|\alpha-1|}{n!}=\frac{\ff{|n-\alpha|}{n}}{n!}.
\end{align*}
\item
For \(\alpha<0\), every factor satisfies \(k-\alpha>k\), so
\begin{align*}
|d^\alpha_j|
&=\frac{n-\alpha}{n}\cdot\frac{(n-1-\alpha)\cdots(j+1-\alpha)\cdot j!}{(n-1)!}
\geq\frac{n-\alpha}{n}\cdot\frac{(n-1)\cdots(j+1)\cdot j!}{(n-1)!}
=\frac{n-\alpha}{n}.
\end{align*}
For \(\alpha>2n\), every factor satisfies \(|k-\alpha|=\alpha-k\ge\alpha-n+1>n\ge k\), so
\begin{align*}
|d^\alpha_j|
=\frac{\alpha-n}{n}\cdot\frac{(\alpha-n+1)\cdots(\alpha-j-1)\cdot j!}{(n-1)!} \ge \frac{\alpha-n}{n}.
\end{align*}
In both cases \(|d^\alpha_j|\ge\frac{|n-\alpha|}{n}\).
\item
For \(2n>\alpha>n\), we have
\begin{align*}
|d^\alpha_j|
&=\frac{(\alpha-n)\cdots(\alpha-j-1)\cdot j!}{n!}\\
&\ge\frac{\alpha-n}{n}\cdot\frac{(n-j-1)!\,j!}{(n-1)!}
=\frac{\alpha-n}{n}\binom{n-1}{j}^{-1}
\ge\frac{\alpha-n}{n} \binom{n-1}{\lceil(n-1)/2\rceil}^{-1}.
\end{align*}
\end{enumerate}
\end{proof}
\begin{figure}
\includegraphics[width=0.45\textwidth]{paper-mahler-abs.pdf}
\includegraphics[width=0.45\textwidth]{paper-caputo-mahler-abs.pdf}
\caption{Comparing the absolute values \(|z(\alpha)|\) of the zeros of the Riemann-Liouville (left) and Caputo (right) fractional derivatives
of the quintic
\(p(x)=(x+1-2i)(x-3-2i)(x-2+5i)(x-3+3i)(x+1+5i)\), for \(0\le\alpha\le 5\).
}
\label{fig quintic abs}
\end{figure}
We now show that \(M\left(\RL{0}{\alpha} p\right)\)
is bounded from above for \(0<\alpha<\deg p\), where the bound
decreases linearly with \(\alpha\). Furthermore, in Theorem \ref{theo lower bound},
we see that for \(\alpha<0\) and \(\alpha>\deg p\) the Mahler measure
\(M\left(\RL{0}{\alpha} p\right)\) increases at least linearly in \(|\alpha|\).
In our proofs we use the following notation. We set
\begin{align}\label{eq equivalent poly}
\RLs{0}{\alpha}f(x) = \frac{\Gamma(n+1-\alpha)}{n!}x^\alpha\cdot \RL{0}{\alpha}f(x).
\end{align}
Because \(\RLs{0}{\alpha}f(x)\)
has the same roots as \(\RL{0}{\alpha}f(x)\), except for the additional root \(0\),
we have
\begin{align*}
M\left(\RLs{0}{\alpha}f\right)
= M\left(\RL{0}{\alpha}f\right).
\end{align*}
\begin{theorem}\label{theo upper bound}
Let \(f(x)\in\mathbb{C}[x]\) be monic of degree \(n\) and let \(0<\alpha<n\). Then
\begin{enumerate}
\item $M(\RL{0}{\alpha} f) \le \frac{n-\alpha}{n}\nlength{f}+1$
\item $M(\RL{0}{\alpha} f) \le \csqrt{n+1} \max\left\{\frac{n-\alpha}{n} \nheight{f},1\right\}$
\item $M(\RL{0}{\alpha} f) \le \frac{n-\alpha}{n} \ntwo{f}+1$
\end{enumerate}
\end{theorem}
\begin{proof}
Write \(f(x)=x^n+\sum_{j=0}^{n-1} c_j x^j\).
With the notation from Lemma \ref{lem derv} we have
\begin{align*}
\RLs{0}{\alpha}f(x)
= x^{n}+ \sum_{j=0}^{n-1}
\left(
\prod_{k=j+1}^n\!(k-\alpha)\right)\cdot
\frac{j!}{n!} c_j x^{j} = x^n+\sum_{j=0}^{n-1} d^{\alpha}_j\,c_j\, x^j.
\end{align*}
\end{align*}
This yields the following three bounds for the Mahler measure of \(\RL{0}{\alpha} f \).
\begin{enumerate}
\item
With Lemma \ref{lem dj} (\ref{dj gt 0}), we get
\begin{align*}
\nlength{\RLs{0}{\alpha}f}
&=|d^\alpha_0 c_0|+|d^\alpha_1 c_1|+\dots+|d^\alpha_{n-1}c_{n-1}|+1\\
&\le \frac{n-\alpha}{n}\left(|c_0|+|c_1|+\dots+|c_{n-1}|\right)+1\\
&\le \frac{n-\alpha}{n} \nlength{f}+1.
\end{align*}
With Mahler's (\ref{eq mahler length}), we get
\[
M\left(\RL{0}{\alpha} f\right) =
M\left(\RLs{0}{\alpha}f\right) \le
\nlength{\RLs{0}{\alpha}f} \le \frac{n-\alpha}{n}\nlength{f}+1.
\]
\item
By Lemma \ref{lem dj} (\ref{dj gt 0}), we have
\begin{align*}
\nheight{\RLs{0}{\alpha}f}
&=\max\{|d^\alpha_0 c_0|,|d^\alpha_1 c_1|,\dots,|d^\alpha_{n-1}c_{n-1}|,1\}\\
&\le \max\left\{\frac{n-\alpha}{n}\max\{|c_0|,|c_1|,\dots,|c_{n-1}|\},1\right\}
\\
&\le \max\left\{\frac{n-\alpha}{n}\nheight{f},1\right\}.
\end{align*}
With Mahler's (\ref{eq mahler height}),
we get
\[
M\left(\RL{0}{\alpha} f\right) =
M\left(\RLs{0}{\alpha} f\right) \le
\csqrt{n+1}\, \nheight{\RLs{0}{\alpha}f} \le \csqrt{n+1} \max\left\{\frac{n-\alpha}{n} \nheight{f},1\right\}.
\]
\item
By Lemma \ref{lem dj} (\ref{dj gt 0}), we have
\begin{align*}
\ntwo{\RLs{0}{\alpha}f}
&= \csqrt{|d^\alpha_0 c_0|^2+\dots+|d^\alpha_{n-1}c_{n-1}|^2+1}\\
&\le\frac{n-\alpha}{n} \csqrt{|c_0|^2+\dots+|c_{n-1}|^2}+1\\
&\le\frac{n-\alpha}{n}\ntwo{f}+1.
\end{align*}
With Landau's (\ref{eq landau}), we get
\[
M\left(\RL{0}{\alpha} f \right)=
M\left(\RLs{0}{\alpha}f\right) \le \ntwo{\RLs{0}{\alpha}f} \le\frac{n-\alpha}{n} \ntwo{f}+1.
\]
\end{enumerate}
\end{proof}
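As a numerical sanity check of Theorem \ref{theo upper bound}, the following Python sketch (an illustration of ours, not part of the proofs) evaluates the three upper bounds for the quintic of Figures \ref{fig quintic paths} and \ref{fig quintic abs} across a grid of \(\alpha\in(0,5)\), computing the Mahler measure of \(\RLs{0}{\alpha}f\) from its numerically obtained roots.

```python
import numpy as np
from math import factorial, sqrt

def rls_coeffs(c, alpha):
    """numpy-ordered coefficients of x^n + sum_j d_j^alpha c_j x^j,
    where c = [c_0, ..., c_n] with c_n = 1."""
    n = len(c) - 1
    out = [1.0 + 0j]
    for j in range(n - 1, -1, -1):
        d = np.prod([k - alpha for k in range(j + 1, n + 1)]) * factorial(j) / factorial(n)
        out.append(d * c[j])
    return np.array(out)

def mahler_from_coeffs(a):
    return float(np.prod([max(1.0, abs(z)) for z in np.roots(a)]))

# the quintic from Figures (fig quintic paths) and (fig quintic abs)
roots = [-1 + 2j, 3 + 2j, 2 - 5j, 3 - 3j, -1 - 5j]
mono = np.poly(roots)                 # highest degree first
c = list(mono[::-1])                  # c[j] = coefficient of x^j
n = 5
Lf = sum(abs(x) for x in mono)        # length of f
Hf = max(abs(x) for x in mono)        # height of f
l2f = sqrt(sum(abs(x) ** 2 for x in mono))
for alpha in np.linspace(0.1, 4.9, 25):
    M = mahler_from_coeffs(rls_coeffs(c, alpha))
    t = (n - alpha) / n
    assert M <= t * Lf + 1 + 1e-6
    assert M <= sqrt(n + 1) * max(t * Hf, 1) + 1e-6
    assert M <= t * l2f + 1 + 1e-6
```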
Now we are able to prove:
\begin{theorem}\label{theo lower bound}
Let \(f(x)\in\mathbb{C}[x]\) be monic of degree \(n\). Then
\begin{enumerate}
\item\label{tlb1}\(M(\RL{0}{\alpha} f)\ge 2^{-n}\left(\frac{|n-\alpha|}{n}(\nlength{f}-1)+1\right)\), for \(\alpha<0\) and \(\alpha>2n\)
\item\label{tlb2}\(M(\RL{0}{\alpha} f)\le \frac{\ff{|n-\alpha|}{n}}{n!}\ntwo{f}+1\), for \(\alpha<0\) and \(\alpha>n\)
\item\label{tlb3}\(M(\RL{0}{\alpha} f)\ge 2^{-n}\binom{n-1}{\lceil(n-1)/2\rceil}^{-1}\frac{\alpha-n}{n}\nlength{f}\), for \(n<\alpha\le 2n\)
\end{enumerate}
\begin{proof}
\begin{enumerate}
\item[\ref{tlb1}.]
By Lemma \ref{lem dj} (\ref{dj lt 0 gt 2n}), we have
\begin{align*}
\nlength{\RLs{0}{\alpha}f}
&=|d^\alpha_0 c_0|+|d^\alpha_1 c_1|+\dots+|d^\alpha_{n-1}c_{n-1}|+1\\
&\ge \frac{|n-\alpha|}{n}\left(|c_0|+|c_1|+\dots+|c_{n-1}|\right)+1\\
&=\frac{|n-\alpha|}{n}(\nlength{f}-1)+1.
\end{align*}
With Mahler's (\ref{eq mahler length}), we get
\[
M(\RL{0}{\alpha} f) =M\left(\RLs{0}{\alpha}f\right) \ge 2^{-n} \nlength{\RLs{0}{\alpha}f} \ge 2^{-n}\left(\frac{|n-\alpha|}{n}(\nlength{f}-1)+1\right).
\]
\item[\ref{tlb2}.]
By Lemma \ref{lem dj} (\ref{dj lt}), we have
\begin{align*}
\ntwo{\RLs{0}{\alpha}f}
&= \csqrt{|d^\alpha_0 c_0|^2+\dots+|d^\alpha_{n-1}c_{n-1}|^2+1}\\
&\le\frac{\ff{|n-\alpha|}{n}}{n!}\csqrt{|c_0|^2+\dots+|c_{n-1}|^2}+1\\
&\le\frac{\ff{|n-\alpha|}{n}}{n!}\ntwo{f}+1.
\end{align*}
With Landau's (\ref{eq landau}),
we get
\[
M\left(\RL{0}{\alpha} f\right) =
M\left(\RLs{0}{\alpha} f\right) \le
\ntwo{\RLs{0}{\alpha} f}
\le \frac{\ff{|n-\alpha|}{n}}{n!}\ntwo{f}+1.
\]
\item[\ref{tlb3}.]
By Lemma \ref{lem dj} (\ref{dj gt n}), we have
\begin{align*}
\nlength{\RLs{0}{\alpha}f}
&=|d^\alpha_0 c_0|+|d^\alpha_1 c_1|+\dots+|d^\alpha_{n-1}c_{n-1}|+1\\
&\ge \frac{\alpha-n}{n}\binom{n-1}{\lceil(n-1)/2\rceil}^{-1}\left(|c_0|+|c_1|+\dots+|c_{n-1}|+1\right)\\
&=\frac{\alpha-n}{n}\binom{n-1}{\lceil(n-1)/2\rceil}^{-1}\nlength{f}.
\end{align*}
With Mahler's (\ref{eq mahler length}), we get
\[
M(\RL{0}{\alpha} f) =M\left(\RLs{0}{\alpha}f\right) \ge 2^{-n} \nlength{\RLs{0}{\alpha}f} \ge 2^{-n}\frac{\alpha-n}{n}\binom{n-1}{\lceil(n-1)/2\rceil}^{-1}\nlength{f}.
\]
\end{enumerate}
\end{proof}
\end{theorem}
In Figure \ref{fig p1} we present the paths of zeros and the
Mahler measures of the fractional derivatives of a degree 3 polynomial along with the bounds from Theorems \ref{theo lower bound} and \ref{theo upper bound}.
Furthermore, Figures \ref{fig one}, \ref{fig double}, and \ref{fig p1} show
the growth of $M(\RL{0}{\alpha} p)$ for \(\alpha<0\) and \(\alpha>n\).
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{paper-caputo-rieliou-mahler-bound.pdf}
\end{center}
\caption{Mahler measure of the Riemann-Liouville and Caputo fractional derivatives of the quintic \(p(x)=(x+1-2i)(x-3-2i)(x-2+5i)(x-3+3i)(x+1+5i)\) for \(0\le\alpha\le 5\) along with the bound from
Theorem \ref{theo upper bound} and Corollary \ref{cor upper bound caputo}.}
\label{fig bound mahler}
\end{figure}
\section{Caputo Fractional Derivatives} \label{caputo}
Switching the order of differentiation and integration in Definition \ref{def rl} yields the Caputo fractional derivative, see \cite{diethelm2010} for example.
\begin{definition}[Caputo fractional derivative]\label{def caputo}
Let \(f\) be analytic on a convex open set \(C\) and let \(a\in C\). Let $\alpha>0$ and $m=\lceil \alpha \rceil$; then the \(\alpha\)-th Caputo fractional derivative is
\[
\CA{a}{\alpha} f(t)
=
\RLI{a}{m-\alpha}
\frac{d^m}{dt^m}
f(t).
\]
\end{definition}
The Caputo fractional derivative has the advantage that the composition rule holds, while the power rule is very similar to that of the Riemann-Liouville fractional derivative:
\[
\CA{a}{\alpha}(x-a)^\beta= \begin{cases}
0 &
\begin{array}{l}\text{if } \beta \in \{0,1,\dots, m-1\}
\end{array}\\
\dfrac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}(x-a)^{\beta-\alpha} &
\begin{array}{l}
\text{if } \beta\in\N \text{ and }\beta \ge m\text{ or }\\[-0.5ex]
\text{if } \beta\not\in\N \text{ and }\beta > m-1,
\end{array}
\end{cases}
\]
where $m=\lceil \alpha \rceil$.
These derivatives differ, but obey some common general trends. Figures \ref{fig quintic paths} and \ref{fig quintic abs} compare the paths of zeros of the Riemann-Liouville and Caputo derivatives of a degree 5 polynomial.
The upper bounds from Theorem \ref{theo upper bound} transfer easily
to the Caputo fractional derivative. Because the coefficients of the Caputo fractional derivative of a polynomial \(p\in\mathbb{C}[x]\) are either the
same as those of the Riemann-Liouville fractional derivative or zero,
we have
\[
\norm{\CAs{0}{\alpha}p}{k} \le \norm{\RLs{0}{\alpha}p}{k},
\]
for \(k\in\{1,2,\infty\}\). This yields the bounds:
\begin{corollary}
\label{cor upper bound caputo}
Let \(p\in\mathbb{C}[x]\) be monic of degree $n$ and let \(0<\alpha<n\). Then
\begin{enumerate}
\item $M(\CA{0}{\alpha} p) \le \frac{n-\alpha}{n}\nlength{p}+1$,
\item $M(\CA{0}{\alpha} p) \le \csqrt{n+1} \max\left\{\frac{n-\alpha}{n} \nheight{p},1\right\}$,
\item $M(\CA{0}{\alpha} p) \le \frac{n-\alpha}{n} \ntwo{p}+1$.
\end{enumerate}
\end{corollary}
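The coefficient comparison underlying Corollary \ref{cor upper bound caputo} can be illustrated directly: with base point \(0\), the Caputo derivative annihilates the monomials of degree below \(m=\lceil\alpha\rceil\) and otherwise agrees with the Riemann-Liouville derivative, so every norm of the coefficient vector can only shrink. The Python sketch below (our own helpers, using the coefficients \(d^\alpha_j\) from (\ref{eq dalphaj}); the sample quintic is arbitrary) checks this for \(k\in\{1,2,\infty\}\).

```python
from math import ceil, factorial, prod

def d(n, j, alpha):
    return prod(k - alpha for k in range(j + 1, n + 1)) * factorial(j) / factorial(n)

def rl_coeffs(c, alpha):
    """Ascending coefficients of the monic equivalent of the RL derivative."""
    n = len(c) - 1
    return [d(n, j, alpha) * c[j] for j in range(n)] + [1.0]

def caputo_coeffs(c, alpha):
    """Same, except monomials of degree below ceil(alpha) are annihilated."""
    m = ceil(alpha)
    rl = rl_coeffs(c, alpha)
    return [0.0 if j < m else rl[j] for j in range(len(rl))]

c = [5.0, -4.0, 3.0, -2.0, 1.0, 1.0]          # c_0, ..., c_5 of a monic quintic
for alpha in (0.5, 1.5, 2.5, 3.5, 4.5):
    rl, ca = rl_coeffs(c, alpha), caputo_coeffs(c, alpha)
    assert sum(abs(x) for x in ca) <= sum(abs(x) for x in rl)            # k = 1
    assert sum(abs(x) ** 2 for x in ca) <= sum(abs(x) ** 2 for x in rl)  # k = 2
    assert max(abs(x) for x in ca) <= max(abs(x) for x in rl)            # k = inf
```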
\section{Conclusion} \label{conclusion}
The bounds we have established in Sections \ref{sec bounds} and \ref{caputo} were sufficient for our purposes, but they are
far from best possible.
Figure \ref{fig bound mahler} illustrates the decline of $M(\RL{0}{\alpha} p)$ and $M(\CA{0}{\alpha} p)$ as $\alpha$ approaches \(n\), as described by Theorem \ref{theo upper bound}, Corollary \ref{cor upper bound caputo}, and Theorem \ref{theo roots lim}. In future work, it would be interesting to determine the true growth of the paths of zeros for \(\alpha<0\) and \(\alpha>n\). The maximal extent of the loops of the paths after they traverse the origin \(a\) would also be worth investigating. Moreover, as Figure \ref{fig p1} clearly shows, the paths exhibit very distinct linear asymptotes in both directions \(\alpha\to\infty\) and \(\alpha\to-\infty\); for polynomials of higher degree, their exact directions are not yet known.
As we have noted earlier, the useful Gauss-Lucas property does not hold universally for fractional derivatives of polynomials. However, some of the dynamical properties we have observed could be investigated with insights related to those that play a key role in the integer-order case. In particular, Gauss himself suggested an intriguing physical interpretation of the nontrivial critical points of a polynomial (the critical points which are not zeros) as the equilibrium points of a force field generated by particles placed at the zeros of the polynomial, with masses equal to the multiplicities of the zeros, repelling with a force inversely proportional to the distance. This physical interpretation of a purely algebraic notion deserves further investigation: it could go a long way toward explaining the intricacies of the paths of zeros and their seemingly chaotic local behavior.
% Judicious partitions of uniform hypergraphs (https://arxiv.org/abs/1701.05855)
% Abstract: The vertices of any graph with $m$ edges may be partitioned into two parts so that each part meets at least $\frac{2m}{3}$ edges. Bollob\'as and Thomason conjectured that the vertices of any $r$-uniform hypergraph with $m$ edges may likewise be partitioned into $r$ classes such that each part meets at least $\frac{r}{2r-1}m$ edges. In this paper we prove the weaker statement that, for each $r\ge 4$, a partition into $r$ classes may be found in which each class meets at least $\frac{r}{3r-4}m$ edges, a substantial improvement on previous bounds.
\section{Introduction}
The vertices of any graph (indeed, any multigraph) may be partitioned into two parts, each of which meets at
least two thirds of the edges (\cite{BS93}; it also appears as a problem in \cite{MGT}). An equivalent statement in
this case is that each part spans at most one third of the edges. These two statements give rise to different
generalisations when a partition into more than two parts is considered. In this paper we shall only address
the problem of meeting many edges; the problem of spanning few edges is addressed in \cite{BS99} for the
graph case and \cite{BS97} for the hypergraph case.
A particularly interesting case occurs when we partition the vertices of an $r$-uniform hypergraph into $r$
classes. Bollob\'as and Thomason (see \cite{BRT93}, \cite{BS00}) conjectured that every $r$-uniform hypergraph
with $m$ edges has an $r$-partition in which each class meets at least $\frac{r}{2r-1}m$ edges.
The author \cite{Hasa} recently proved the conjecture for the case $r=3$.
The previous best known bound for each $r>3$ was proved by Bollob\'as and Scott \cite{BS00}, who in fact
obtained a constant independent of $r$. The method they used was to progressively refine a partition by
repartitioning the vertices of two or three parts to increase the number of parts which met $cm$ edges; in
doing so they showed that $r$ disjoint parts meeting at least $cm$ edges may be found for $c=0.27$.
Our strategy will be to combine those ideas with the methods used to prove the conjectured bound in the case
$r=3$ \cite{Hasa}. We shall obtain values of $c_r=\frac{r}{3r-4}$ for $r>3$, and so $c_r \to \frac{1}{3}$ as $r \to \infty$. While this is a significant improvement on previous bounds, it is still some distance from the
conjectured bounds which approach $\frac{1}{2}$ in the limit.
All results obtained in this paper apply to hypergraphs in which repeated edges are permitted, and in
fact we shall need this extra generality in an induction step.
\section{New parts from old}
In this section we give two lemmas which show that if we have a partition in which some parts meet many more
edges than required we may locally refine that partition and increase the number of parts which meet the
required number of edges. The first of these lemmas was proved in \cite{BS00}; the second is a new result in
the same spirit. The setting for each lemma is the same. $G$ is a multi-hypergraph, but not necessarily
uniform; edges may have any number of vertices, and edges of any size may be repeated. $G$ has $m$ edges, but
the maximum degree of a vertex is less than $cm$ where $c>0$ is an arbitrary constant. For $A,B\subset V$
we write $d(A)$ for the number of edges meeting $A$ and $d(A,B)$ for the number meeting both $A$ and $B$ (we
shall only use the latter notation where $A$ and $B$ are disjoint).
\begin{lemma}[\cite{BS00}]\label{rutwothree}
Let $c>0$ be a constant and let $G$ be a multi-hypergraph on vertex set $V$ with $m$ edges such that $\Delta(G)<cm$. If $A$ and $B$ are disjoint subsets of $V$, each of which meet at least $2cm$ edges, then there is a partition of $A\cup B$ into three parts, each of which meets at least $cm$ edges.
\end{lemma}
Lemma \ref{rutwothree} was proved in \cite{BS00}; we provide their proof for completeness.
\begin{proof}
We may replace each edge by a subedge if necessary so that no edge meets either $A$ or $B$ in more than
one vertex. We may then partition $A$ into three parts, $A_1, A_2, A_3$, such that the union of any two
meets at least $cm$ edges: we may do this by taking $A_1$ to be a maximal part meeting fewer than $cm$ edges,
and then dividing $A\setminus A_1$ into two non-empty parts, since any set which strictly contains $A_1$ must
meet at least $cm$ edges and $A_2\cup A_3=A\setminus A_1$ meets $d(A)-d(A_1)>2cm-cm=cm$ edges. We find a
similar partition $B=B_1\cup B_2\cup B_3$. It is sufficient to find $i, j$ such that $d(A_i\cup B_j)\ge cm$;
then $A_i\cup B_j, A\setminus A_i, B\setminus B_j$ will be a suitable partition. We shall show that this is
always possible.
Since each edge meets $A$ in at most one place then $d(A)=\sum_id(A_i)$ (and likewise $d(B)=\sum_id(B_i)$).
Similarly, if an edge meets both $A$ and $B$ then we may find unique $i, j$ for which it meets $A_i$ and $B_j$.
Therefore,
\begin{equation*}
d(A,B)=\sum_{i,j}d(A_i,B_j).
\end{equation*}
Also,
\begin{equation*}
d(A_i\cup B_j)=d(A_i)+d(B_j)-d(A_i,B_j),
\end{equation*}
so
\begin{eqnarray*}
\sum_{i,j}d(A_i\cup B_j) &=& \sum_{i,j}\left(d(A_i)+d(B_j)-d(A_i,B_j)\right)\\
&=& 3d(A)+3d(B)-\sum_{i,j}d(A_i,B_j)\\
&=& 3d(A)+3d(B)-d(A,B)\\
&\ge& 3d(A)+2d(B)\ge 10cm.
\end{eqnarray*}
Since there are nine terms in the LHS, at least one must exceed $cm$.
\end{proof}
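The counting identity at the heart of this proof, \(\sum_{i,j}d(A_i\cup B_j)=3d(A)+3d(B)-d(A,B)\) for edges meeting each of \(A\) and \(B\) in at most one vertex, is easy to confirm on a random instance. The Python sketch below (a purely illustrative construction of ours, with \(A_i\) and \(B_j\) taken as singletons) builds such a hypergraph and checks the identity exactly.

```python
import random

random.seed(1)
A, B, rest = [0, 1, 2], [3, 4, 5], [6, 7, 8, 9]
edges = []
for _ in range(60):
    e = set(random.sample(rest, 2))
    if random.random() < 0.8:
        e.add(random.choice(A))     # at most one vertex of A per edge
    if random.random() < 0.8:
        e.add(random.choice(B))     # at most one vertex of B per edge
    edges.append(e)

def meets(S):
    """d(S): number of edges meeting S."""
    return sum(bool(e & S) for e in edges)

def meets_both(S, T):
    """d(S, T): number of edges meeting both S and T."""
    return sum(bool(e & S) and bool(e & T) for e in edges)

# each d(A_i u B_j) = d(A_i) + d(B_j) - d(A_i, B_j) by inclusion-exclusion,
# and summing over the nine pairs gives the identity from the proof:
lhs = sum(meets({a, b}) for a in A for b in B)
rhs = 3 * meets(set(A)) + 3 * meets(set(B)) - meets_both(set(A), set(B))
assert lhs == rhs
```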
This proof shows more than required: we can always find a partition of $A\cup B$ into three parts, two meeting at least $cm$ edges and the third meeting at least $\frac{10cm}{9}$ edges. However, we cannot always find a partition into three parts, two meeting more than $cm+1$ edges and the third meeting at least $cm$, as seen by considering the case where each of $A$ and $B$ has three vertices, two meeting $cm-1$ edges and one meeting $2$ edges.
A part which does not meet the desired number of edges can also be useful provided we have another part meeting sufficiently many edges to combine with it.
\begin{lemma}\label{rugoodbad}
Let $c>0$ be a constant and let $G$ be a multi-hypergraph on vertex set $V$ with $m$ edges such that $\Delta(G)<cm$. If $A$ and $B$ are disjoint subsets of $V$, with $d(A)\ge 2cm$ and $d(A)+2d(B)\ge 3cm$, then there is a partition of $A\cup B$ into two parts, each of which meets at least $cm$ edges.
\end{lemma}
\begin{proof}
As before, we may replace each edge by a subedge if necessary so that no edge meets either $A$ or $B$ in
more than one vertex. We may then again partition $A$ into three parts $A_1, A_2, A_3$ such that the union
of any two meets at least $cm$ edges. It is now sufficient to find $i$ such that $d(A_i\cup B)\ge cm$; then
$A_i\cup B, A\setminus A_i$ will be a suitable partition. We claim this is always possible.
Again,
\begin{equation*}
d(A,B)=\sum_{i}d(A_i,B)
\end{equation*}
and
\begin{equation*}
d(A_i\cup B)=d(A_i)+d(B)-d(A_i,B),
\end{equation*}
so
\begin{eqnarray*}
\sum_{i}d(A_i\cup B) &=& \sum_{i}\left(d(A_i)+d(B)-d(A_i,B)\right)\\
&=& d(A)+3d(B)-\sum_{i}d(A_i,B)\\
&=& d(A)+3d(B)-d(A,B)\\
&\ge& d(A)+2d(B)\ge 3cm.
\end{eqnarray*}
Since there are three terms in the LHS, at least one must be at least $cm$.
\end{proof}
In particular, we may find such a partition when $d(A)\ge 2cm$ and $d(B)\ge \frac{cm}{2}$;
this is the only case in which we shall apply Lemma \ref{rugoodbad}.
\section{The bounds}
Throughout this section $G$ is an $r$-uniform multi-hypergraph on vertex set $V$ with $m$ edges.
Our goal is to prove that there is a partition of $V$ into $r$ parts such that each part meets at least
$c_r m$ edges for some suitable constant $c_r$. In order to use Lemmas \ref{rugoodbad} and
\ref{rutwothree} we need to reduce to the case $\Delta(G)<c_rm$. To that end we apply induction on $r$. If
some vertex meets at least $c_r m$ edges then we may remove that vertex, replacing each edge of $G$ by a subedge of
size $r-1$ not containing that vertex, and then use the result for $r-1$ to partition the remaining vertices
into $r-1$ parts, each meeting at least $c_{r-1} m$ edges; this is sufficient so long as $c_{r-1} \ge c_r$,
which will be the case. We shall take $c_2=\frac{2}{3}$, so the case $r=2$ is known.
Our plan will be to look at partitions for which $\sum_id(V_i)$ is large, and show that if some parts meet too
few edges then there are enough parts meeting at least $2c_rm$ edges to allow us to construct a good partition
by combining parts as above.
Certainly any partition which is optimal in the sense of maximising $\sum_id(V_i)$ also satisfies the
local optimality condition that $\sum_id(V_i)$ cannot be increased by moving a single vertex. We shall
consider particularly the yet weaker condition that $\sum_id(V_i)$ cannot be increased by moving a single
vertex into $V_r$. We begin by establishing bounds on partitions satisfying this condition.
\begin{lemma}\label{aaa}
Let $V_1, V_2, \ldots, V_r$ be a partition for which $\sum_i d(V_i)$ cannot be increased by
moving a vertex into $V_r$. Then
\begin{equation*}
\sum_{i=1}^r d(V_i) \ge (r+1)m-rd(V_r).
\end{equation*}
\end{lemma}
\begin{proof}
For each $i<r$ and each $v\in V_i$, since moving $v$ into $V_r$ does not increase the sum, the number of edges
$e$ such that $e\cap V_i = \{v\}$ must be at least the number of edges $e$ containing $v$ which do not meet
$V_r$: the first quantity is the decrease in $d(V_i)$ effected by moving $v$ and the second is the increase
in $d(V_r)$. Thus
\begin{equation*}
\sum_{i\ne r}\sum_{v\in V_i}|\{e:e\cap V_i=\{v\}\}|\ge \sum_{i\ne r}\sum_{v\in V_i}|\{e\ni v: e\cap V_r =\emptyset\}|.
\end{equation*}
Since, for $v\ne w$, $\{e:e\cap V_i=\{v\}\}$ and $\{e:e\cap V_i=\{w\}\}$ are disjoint,
\begin{eqnarray*}
\sum_{i\ne r}\sum_{v\in V_i}|\{e:e\cap V_i=\{v\}\}| &=& \sum_{i\ne r}|\bigcup_{v\in V_i}\{e:e\cap V_i=\{v\}\}| \\
&=& \sum_{i\ne r}|\{e:|e\cap V_i|=1\}|.
\end{eqnarray*}
For $1\le i <r$, let $E_i=\{e:|e\cap V_i|=1\}$. For each edge $e$ write $f(e)$ for the number of parts (including $V_r$)
which meet $e$. If $f(e)<r$ then $|e\cap V_i|>1$ for some $i$, and so $e$ is in at most $f(e)-1$ of the $E_i$;
trivially if $f(e)=r$ then $e$ is in at most $r-1=f(e)-1$ of the $E_i$. Thus
\begin{eqnarray*}
\sum_{i\ne r}|\{e:|e\cap V_i|=1\}| &\le& \sum_{e}(f(e)-1) \\
&=& \sum_{i=1}^r d(V_i)-m.
\end{eqnarray*}
Also,
\begin{equation*}
\sum_{i\ne r}\sum_{v\in V_i}|\{e\ni v: e\cap V_r =\emptyset\}|=r(m-d(V_r)),
\end{equation*}
since each edge not meeting $V_r$ is counted exactly $r$ times in the sum, once for each vertex it contains.
Combining the above relations, we see that
\begin{equation*}
\sum_{i=1}^r d(V_i)-m \ge r(m-d(V_r)),
\end{equation*}
as required.
\end{proof}
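The lemma is easy to exercise computationally: starting from a random partition of a random $r$-uniform multi-hypergraph and greedily moving vertices into $V_r$ until no single move increases $\sum_i d(V_i)$, the resulting partition must satisfy the stated inequality. The following Python sketch (an illustration of ours; parameters are arbitrary) does exactly that.

```python
import random

random.seed(7)
r, nverts, m = 3, 9, 25
V = list(range(nverts))
edges = [tuple(random.sample(V, r)) for _ in range(m)]   # r-uniform edges

def dmeet(part):
    """d(part): number of edges meeting the part."""
    return sum(any(v in part for v in e) for e in edges)

parts = [set() for _ in range(r)]
for v in V:
    parts[random.randrange(r)].add(v)

def total():
    return sum(dmeet(p) for p in parts)

# greedily move single vertices into V_r while that increases the sum;
# at termination no such move helps, so the lemma's hypothesis holds
improved = True
while improved:
    improved = False
    for i in range(r - 1):
        for v in list(parts[i]):
            before = total()
            parts[i].discard(v)
            parts[r - 1].add(v)
            if total() > before:
                improved = True
            else:                      # undo the move
                parts[r - 1].discard(v)
                parts[i].add(v)

assert total() >= (r + 1) * m - r * dmeet(parts[r - 1])
```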
The reason for considering the condition that $\sum_id(V_i)$ cannot be increased by moving a single
vertex into $V_r$ is that, as we shall see, it is preserved by moving vertices into $V_r$. If we are able
to start from a partition which satisfies the condition and in which $V_1,\ldots, V_{r-1}$ are ``good'' (in
the sense of meeting at least a certain proportion of edges) then we can try to improve the partition by
moving vertices into $V_r$ while keeping the other parts good. In this way we will either obtain a partition
into $r$ good parts or we will be forced to stop because $V_1,\ldots, V_{r-1}$ are all minimal good sets. We formalise these ideas in the following lemma.
\begin{lemma}\label{aab}
Let $V_1, V_2, \ldots, V_r$ be a partition for which $\sum_i d(V_i)$ is as large as possible, ordered such that
$d(V_1)\ge d(V_2)\ge \cdots \ge d(V_r)$, and let $c>0$ be a constant. If $d(V_{r-1})\ge cm$ then either there
exists a partition $W_1, W_2, \ldots, W_r$, with $W_i\subseteq V_i$ for $1\le i\le r-1$
(and so $W_r\supseteq V_r$), such that each part meets at least $cm$ edges, or there exists a partition
$W_1, W_2, \ldots, W_r$, again with $W_i\subseteq V_i$ for $1\le i\le r-1$,
such that, for each $i\ne r$, $d(W_i)\ge cm$ but $d(W_i\setminus \{w\})< cm$ for any $w\in W_i$, and also
\begin{equation*}
\sum_{i=1}^{r-1} d(W_i) > (r+1)(m-cm).
\end{equation*}
\end{lemma}
\begin{proof}
Suppose $U_1, U_2, \ldots, U_r$ is a partition with $U_i\subseteq V_i$ for each $i<r$ and so $U_r\supseteq V_r$.
For any $v\notin U_r$, say $v\in U_i$, let $U'_i=U_i\setminus\{v\}$, $U'_r=U_r\cup\{v\}$, $V'_i=V_i\setminus\{v\}$
and $V'_r=V_r\cup\{v\}$. Then
\begin{eqnarray*}
d(U_i)-d(U'_i)&=&|\{e:e\cap U_i=\{v\}\}| \\
&\ge&|\{e:e\cap V_i=\{v\}\}| \\
&=&d(V_i)-d(V'_i)
\end{eqnarray*}
and
\begin{eqnarray*}
d(U'_r)-d(U_r)&=&|\{e\ni v:e\cap U_r=\emptyset\}| \\
&\le&|\{e\ni v:e\cap V_r=\emptyset\}| \\
&=&d(V'_r)-d(V_r)
\end{eqnarray*}
so
\begin{eqnarray*}
d(U'_i)+d(U'_r) &\le& \left(d(U_i)+d(V'_i)-d(V_i)\right)+\left(d(V'_r)-d(V_r)+d(U_r)\right) \\
&=&d(U_i)+d(U_r)+\left(d(V'_i)+d(V'_r)-d(V_i)-d(V_r)\right) \\
&\le&d(U_i)+d(U_r),
\end{eqnarray*}
meaning that $U_1, U_2, \ldots, U_r$ also satisfies the condition that $\sum_i d(U_i)$ cannot be increased by moving a vertex into $U_r$.
For each $i<r$, then, let $W_i$ be a minimal subset of $V_i$ satisfying $d(W_i)\ge cm$, and let $W_r=V\setminus\bigcup_{i<r}W_i$.
If $d(W_r)\ge cm$ then this is a suitable partition with each part meeting at least $cm$ edges; if not then Lemma \ref{aaa} ensures that
\begin{eqnarray*}
\sum_{i=1}^{r-1} d(W_i) &\ge& (r+1)m-rd(W_r)-d(W_r) \\
&>&(r+1)(m-cm),
\end{eqnarray*}
as required.
\end{proof}
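The shrinking step in the proof above, replacing each part by a minimal subset that still meets at least $cm$ edges, is easy to sketch computationally. The following Python fragment (illustrative only; the toy $3$-uniform hypergraph and the threshold are arbitrary choices of ours) computes $d(\cdot)$ and greedily extracts a minimal good subset.

```python
# Toy 3-uniform multi-hypergraph on vertices 0..5 (edges may repeat).
edges = [frozenset(e) for e in
         [(0, 1, 2), (0, 1, 3), (2, 3, 4), (1, 4, 5), (0, 4, 5), (2, 3, 5)]]
m = len(edges)

def d(part):
    """d(X): the number of edges meeting the vertex set X."""
    part = set(part)
    return sum(1 for e in edges if e & part)

def minimal_good_subset(part, threshold):
    """Greedily delete vertices while the part still meets >= threshold edges,
    returning W with d(W) >= threshold but d(W \\ {w}) < threshold for every
    w in W -- a minimal good subset as in the proof of Lemma aab."""
    W = set(part)
    changed = True
    while changed:
        changed = False
        for w in sorted(W):
            if d(W - {w}) >= threshold:
                W.remove(w)
                changed = True
                break
    return W

W = minimal_good_subset({0, 1, 2, 3}, threshold=4)
assert d(W) >= 4 and all(d(W - {w}) < 4 for w in W)
```

On this toy instance the greedy procedure shrinks $\{0,1,2,3\}$ to the minimal good subset $\{2,3\}$, which meets exactly four of the six edges.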
We are now ready to prove the main result. We shall show that if we start from a partition maximising $\sum_id(V_i)$ then either we have enough elbow room to obtain a good partition by repeated application of Lemmas \ref{rutwothree} and \ref{rugoodbad} or we may use Lemma \ref{aab} to obtain a good partition.
\begin{theorem}\label{rumain}
Let $c_2=\frac{2}{3}$, $c_3=\frac{5}{9}$ and $c_r=\frac{r}{3r-4}$ for $r>3$. If $G$ is an $r$-uniform
multi-hypergraph with $m$ edges there is a partition of the vertex set into $r$ parts with each part meeting at
least $c_r m$ edges.
\end{theorem}
\begin{proof}
We use induction on $r$; the case $r=2$ is known. For $r>2$, if some vertex $v$ meets at least
$c_r m$ edges then we may, by replacing each edge with a subedge of size $r-1$ not containing $v$ and applying
the $r-1$ case, find a partition of the other vertices into $r-1$ parts each meeting at least $c_{r-1}m > c_rm$
edges. Together with $\{v\}$, this is a suitable $r$-partition. Thus we may assume that no vertex meets $c_r m$
edges.
Now let $V_1, V_2, \ldots, V_r$ be a partition for which $\sum_i d(V_i)$ is as large as possible, ordered such
that $d(V_1)\ge d(V_2)\ge \cdots \ge d(V_r)$. If $d(V_r)\ge c_r m$ then we are done, so we may assume
$d(V_r)< c_r m$. We consider three cases based on the values of $d(V_{r-1})$ and $d(V_r)$.
\vspace{5pt}
\textbf{Case 1.} $d(V_{r-1})\ge c_r m$.
By Lemma \ref{aab}, either we have a good partition or we may find a partition $W_1, W_2, \ldots, W_r$ such that, for each
$i<r$, $d(W_i)\ge c_r m$ but $W_i$ is minimal for this property, and further that $\sum_{i=1}^{r-1} d(W_i) > (r+1)(m-c_r m)$.
Suppose the partition found is of the latter type. For each $i<r$, since $\Delta(G)< c_r m$, $|W_i|>1$. Let $v, w$ be two
vertices in $W_i$; by minimality of $W_i$ the number of edges meeting $W_i$ only at $v$ is more than $d(W_i)-c_r m$, and so
is the number meeting $W_i$ only at $w$. These sets of edges are disjoint from each other and from the set of edges
meeting $W_i$ in more than one vertex; thus, writing $d_2(X)$ for the number of edges meeting $X$ in more than one vertex,
\begin{equation*}
d(W_i) > 2(d(W_i)-c_r m)+d_2(W_i),
\end{equation*}
i.e.
\begin{equation*}
d(W_i)+d_2(W_i)<2c_r m.
\end{equation*}
However, each edge not meeting $W_r$ meets at least one other part at more than one vertex, so
\begin{equation*}
\sum_{i=1}^{r-1}d_2(W_i)>m-c_r m.
\end{equation*}
Combining this with the bound on $\sum_{i=1}^{r-1} d(W_i)$ gives
\begin{equation*}
\sum_{i=1}^{r-1}(d(W_i)+d_2(W_i))>(r+2)(m-c_r m),
\end{equation*}
so for some $i<r$
\begin{equation*}
d(W_i)+d_2(W_i)>\frac{r+2}{r-1}(m-c_r m).
\end{equation*}
Consequently, using our upper bound on $d(W_i)+d_2(W_i)$,
\begin{eqnarray*}
&&\frac{r+2}{r-1}(m-c_r m) < 2c_r m \\
&\Rightarrow&(r+2)(1-c_r) < 2c_r(r-1) \\
&\Rightarrow& r+2 < 3rc_r
\end{eqnarray*}
and so $c_r > \frac{r+2}{3r}$.
For $r=3$, $c_r=\frac{5}{9}=\frac{r+2}{3r}$; for $r\ge 4$,
\begin{eqnarray*}
c_r &=& \frac{r}{3r-4} \\
&=& \frac{r+2}{3r}-\frac{2r-8}{3r(3r-4)} \\
&\le& \frac{r+2}{3r};
\end{eqnarray*}
in either case a contradiction is obtained (from our assumption that we did not get a good partition from Lemma \ref{aab}). (End of Case 1.)
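The closing algebra of Case 1 can be confirmed with exact rational arithmetic. The following Python sketch (an illustrative check of ours, not part of the proof) verifies that $c_r\le\frac{r+2}{3r}$ for $3\le r\le 100$, with equality exactly at $r=3$ and $r=4$, so the inequality $c_r>\frac{r+2}{3r}$ derived above can never hold.

```python
from fractions import Fraction

def c(r):
    # c_2 = 2/3, c_3 = 5/9, and c_r = r/(3r-4) for r > 3
    if r == 2:
        return Fraction(2, 3)
    if r == 3:
        return Fraction(5, 9)
    return Fraction(r, 3 * r - 4)

for r in range(3, 101):
    # Case 1 derives c_r > (r+2)/(3r); verify this never holds.
    assert c(r) <= Fraction(r + 2, 3 * r)
    # Equality occurs exactly at r = 3 and r = 4 (where 2r - 8 = 0).
    assert (c(r) == Fraction(r + 2, 3 * r)) == (r in (3, 4))
```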
\vspace{1.5ex}
Note that, using Lemma \ref{aaa}, if $d(V_r) < c_r m$ then
\begin{eqnarray*}
d(V_{r-1})-c_r m&=&\sum_{i=1}^{r} d(V_i)-\sum_{i=1}^{r-2} d(V_i)-d(V_r)-c_rm \\
&>& \sum_{i=1}^{r} d(V_i)-(r-2)m-2c_r m\\
&\ge& (r+1)m-rd(V_r)-(r-2)m -2c_r m\\
&>& (r+1)m-rc_rm-(r-2)m -2c_r m\\
&=& 3m-(r+2)c_r m,
\end{eqnarray*}
so for $r=3$, since $c_r < \frac{3}{r+2}$, Case 1 is the only possible case. For the remaining cases, then, we assume $r\ge 4$.
\vspace{5pt}
\textbf{Case 2.} $c_r m > d(V_{r-1}) \ge d(V_r) \ge \frac{c_rm}{2}$.
Suppose that $k$ parts meet at least $2c_r m$ edges, $l$ meet fewer than $c_r m$, and the remaining $r-k-l$ meet at least $c_r m$
but fewer than $2c_r m$. Using Lemma \ref{rugoodbad} we may combine a part meeting at least $2c_r m$ edges and a part meeting at least
$\frac{c_rm}{2}$ to produce two parts meeting at least $c_r m$; we know, since $V_r$ meets fewest edges, that each part meets at
least $\frac{c_rm}{2}$ and so we can obtain a good partition provided $k\ge l$. We shall show that this must be so.
Since $r \ge 4$, $2c_r\le 1$, and since $c_r m > d(V_{r-1})$, $l\ge 2$. So
\begin{eqnarray*}
\sum_{i=1}^rd(V_i) &<& 2c_r m(r-k-l)+c_r ml+mk \\
&=& 2c_r mr -c_r lm + (1-2c_r)km.
\end{eqnarray*}
However, by Lemma \ref{aaa},
\begin{equation*}
\sum_{i=1}^r d(V_i) > (r+1)m-rc_r m,
\end{equation*}
so
\begin{equation*}
2rc_r -lc_r + k(1-2c_r) > (r+1)-rc_r;
\end{equation*}
however, if $k<l$ then
\begin{eqnarray*}
2rc_r -lc_r + k(1-2c_r) &\le& 2rc_r-lc_r+(l-1)(1-2c_r) \\
&=& (2r-1)c_r+(l-1)(1-3c_r)\,.
\end{eqnarray*}
Since $l\ge 2$ and $1-3c_r<0$,
\begin{eqnarray*}
(2r-1)c_r+(l-1)(1-3c_r)&\le& (2r-4)c_r+1 \\
&=& r+1-rc_r
\end{eqnarray*}
(the final equality follows since $c_r=\frac{r}{3r-4}$), and a contradiction is obtained.
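The value $c_r=\frac{r}{3r-4}$ is tuned so that the last equality holds exactly; this, together with the auxiliary facts $2c_r\le 1$ and $1-3c_r<0$ used above, can be checked with exact arithmetic (an illustrative Python check of ours).

```python
from fractions import Fraction

for r in range(4, 101):          # Case 2 assumes r >= 4
    c = Fraction(r, 3 * r - 4)
    assert 2 * c <= 1            # 2c_r <= 1, used to bound the k-term
    assert 1 - 3 * c < 0         # 1 - 3c_r < 0, used with l >= 2
    assert (2 * r - 4) * c + 1 == r + 1 - r * c   # the closing equality
```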
\vspace{5pt}
\textbf{Case 3.} $c_r m > d(V_{r-1})$ and $\frac{c_rm}{2} > d(V_r)$.
Suppose that $j$ parts meet at least $2c_r m$ edges, $k$ meet at least $\frac{c_rm}{2}$ but fewer than $c_r m$,
$l$ meet fewer than $\frac{c_rm}{2}$, and the remaining $r-j-k-l$ meet at least $c_r m$ but fewer than $2c_r m$.
Using Lemma \ref{rugoodbad} $k$ times and Lemma \ref{rutwothree} $l$ times, we may find $r$ disjoint parts which meet at least $c_r m$ edges
provided that $j\ge k+2l$ (each application of Lemma \ref{rugoodbad} requires one part which meets $2c_r m$ edges and each application of Lemma \ref{rutwothree} requires two). We shall show that this must be so.
From Lemma \ref{aaa}, since $d(V_r) < \frac{c_rm}{2}$ and $r\ge 4$,
\begin{eqnarray*}
\frac{1}{r}\sum_{i=1}^r \frac{d(V_i)}{m} &>& \frac{r+1}{r}-\frac{c_r}{2} \\
&=& 2c_r+\frac{r+1}{r}-\frac{5c_r}{2} \\
&=& 2c_r+\frac{r^2-2r-8}{r(6r-8)} \\
&\ge& 2c_r;
\end{eqnarray*}
similarly, since $c_r\le\frac{1}{2}$,
\begin{eqnarray*}
\frac{1}{r}\sum_{i=1}^r \frac{d(V_i)}{m} &>& \frac{r+1}{r}-\frac{c_r}{2} \\
&>& \frac{3}{4} \\
&\ge& \frac{2}{3}+\frac{c_r}{6};
\end{eqnarray*}
so, since $c_r>\frac{1}{3}$ for every $r$, Lemma \ref{rulast}, following, ensures that $j\ge k+2l$.
\end{proof}
We now give the result needed to fill in the gap.
\begin{lemma}\label{rulast}
Let $\frac{1}{3}\le c\le \frac{1}{2}$. If we have a finite, non-empty collection of numbers in $[0,1]$, whose
arithmetic mean $a$ is at least $\max\left(2c,\frac{2}{3}+\frac{c}{6}\right)$, of which $j$ are at least $2c$,
$k$ are at least $\frac{c}{2}$ but less than $c$, $l$ are less than $\frac{c}{2}$, and the rest are at least
$c$ but less than $2c$, then $j\ge k+2l$.
\end{lemma}
\begin{proof}
We proceed by induction on $k+l$; if $k=l=0$ then there is nothing to prove. If $k\ge 1$ then one of our
numbers is less than the mean, so another must exceed the mean; since the mean of those
two numbers is at most $\frac{1+c}{2}\le 2c \le a$, either there are no other numbers (in which case we are
done) or the mean of the remaining numbers is at least $a$ and we are done by induction.
If $l\ge 1$ then one number is less than $\frac{c}{2}$; if also $j\le 1$ then the total is less than
$1+\frac{c}{2}+(n-2)2c<2cn$, so there must be two numbers which are at least $2c$. Since the mean of these
three numbers is at most $\frac{2}{3}+\frac{c}{6}\le a$, either there are no other numbers (in which case we
are done) or the mean of the remaining numbers is at least $a$ and again we are done by induction.
\end{proof}
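Lemma \ref{rulast} can also be spot-checked exhaustively on small instances. The following Python sketch (an illustrative brute-force check over a grid of rationals, not a proof) verifies $j\ge k+2l$ for every qualifying collection of at most five grid values, for a few admissible values of $c$.

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def classify(xs, c):
    """Return (j, k, l): counts of values >= 2c, in [c/2, c), and < c/2."""
    j = sum(1 for x in xs if x >= 2 * c)
    k = sum(1 for x in xs if c / 2 <= x < c)
    l = sum(1 for x in xs if x < c / 2)
    return j, k, l

grid = [Fraction(i, 12) for i in range(13)]        # values in [0, 1]
for c in (Fraction(1, 3), Fraction(2, 5), Fraction(1, 2)):
    bound = max(2 * c, Fraction(2, 3) + c / 6)
    for n in range(1, 6):
        for xs in combinations_with_replacement(grid, n):
            if sum(xs) >= bound * n:               # mean >= max(2c, 2/3 + c/6)
                j, k, l = classify(xs, c)
                assert j >= k + 2 * l
```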
The value of $c_r$ which we use is tight only in the second case of Theorem \ref{rumain} (for $r>4$); if we were to use instead some $c'_r > c_r$ it would be feasible for two of our parts to meet just under $c'_r$ edges while the remaining parts each met just under $2c'_r$ edges, giving us no immediate way to use Lemmas \ref{rutwothree} and \ref{rugoodbad}. To improve further on these bounds using a similar method, then, we might seek to prove analogues of Lemmas \ref{rutwothree} and \ref{rugoodbad} for parts which meet more than $\beta cm$ for some $1<\beta<2$. To do this, however, we would need a way to impose some stronger assumption on the degrees of vertices than simply $\Delta(G)<cm$.
| {
"timestamp": "2017-01-23T02:07:45",
"yymm": "1701",
"arxiv_id": "1701.05855",
"language": "en",
"url": "https://arxiv.org/abs/1701.05855",
"abstract": "The vertices of any graph with $m$ edges may be partitioned into two parts so that each part meets at least $\\frac{2m}{3}$ edges. Bollobás and Thomason conjectured that the vertices of any $r$-uniform hypergraph with $m$ edges may likewise be partitioned into $r$ classes such that each part meets at least $\\frac{r}{2r-1}m$ edges. In this paper we prove the weaker statement that, for each $r\\ge 4$, a partition into $r$ classes may be found in which each class meets at least $\\frac{r}{3r-4}m$ edges, a substantial improvement on previous bounds.",
"subjects": "Combinatorics (math.CO)",
"title": "Judicious partitions of uniform hypergraphs"
} |
https://arxiv.org/abs/2204.05074 | Supercritical Site Percolation on the Hypercube: Small Components are Small | We consider supercritical site percolation on the $d$-dimensional hypercube $Q^d$. We show that typically all components in the percolated hypercube, besides the giant, are of size $O(d)$. This resolves a conjecture of Bollobás, Kohayakawa, and Łuczak from 1994. | \section{Introduction}
The $d$-dimensional hypercube $Q^d$ is the graph with the vertex set $V(Q^d)=\{0,1\}^d$, where two vertices are adjacent if they differ in exactly one coordinate. Throughout this paper, we denote by $n=2^d$ the order of the hypercube.
In bond percolation on $G$, one considers the subgraph $G_p$ obtained by including every edge independently with probability $p$. Erd\H{o}s and Spencer \cite{ES} conjectured that, similar to the classical $G(n,p)$ model, there is a phase transition at $p=\frac{1}{d}$ in $Q^d_p$: for $\epsilon>0$, when $p=\frac{1-\epsilon}{d}$, \textbf{whp} all components have size $O(d)$, and when $p=\frac{1+\epsilon}{d}$, there exists \textbf{whp} a unique giant component in $Q^d_p$ whose order is linear in $n$. Their conjecture was proved by Ajtai, Koml\'os, and Szemer\'edi \cite{AKS}. Bollob\'as, Kohayakawa, and Łuczak \cite{BKL2} improved upon that result, extending it to a wider range of $p$, allowing $\epsilon=o(1)$, and giving an asymptotic value of the order of the giant component of $Q^d_p$. Furthermore, they proved that the second largest component in $Q^d_p$ is typically of order $O(d)$.
In site percolation on $G$, one considers the induced subgraph $G[R]$, where $R$ is a random subset of vertices formed by including each vertex independently with probability $p$. In the setting of site percolation on the hypercube, Bollob\'as, Kohayakawa, and Łuczak \cite{BKL} proved the following:
\begin{thm}[Theorems 8, 9 of \cite{BKL}]\label{BKL}
\textit{Let $\epsilon>0$ be a small enough constant, and let $p=\frac{1+\epsilon}{d}$. Let $R$ be a random subset formed by including each vertex of $Q^d$ independently with probability $p$. Then, \textbf{whp}, there is a unique giant component of $Q^d[R]$, whose asymptotic size is $\frac{\left(2\epsilon-O(\epsilon^2)\right)n}{d}$. Furthermore, \textbf{whp} the size of the other components is at most $d^{10}$.}
\end{thm}
Motivated by the results in bond percolation on the hypercube, in the same paper \cite{BKL}, and also in their paper on bond percolation on the hypercube \cite{BKL2}, Bollobás, Kohayakawa, and Łuczak conjectured that \textbf{whp} all the components of $Q^d[R]$ besides the giant component are of asymptotic size at most $\gamma d$, where $\gamma$ is a constant depending only on $\epsilon$ (see Conjecture 11 of \cite{BKL}). We prove this conjecture:
\begin{theorem}\label{main-theorem}
Let $\epsilon>0$ be a small enough constant, and let $p=\frac{1+\epsilon}{d}$. Let $R$ be a random subset formed by including each vertex of $Q^d$ independently with probability $p$. Then, \textbf{whp}, all the components of $Q^d[R]$ besides the giant component are of size $O_{\epsilon}\left(d\right)$.
\end{theorem}
We note that in our proof we obtain an inverse polynomial dependency on $\epsilon$, of order $\frac{1}{\epsilon^5}$, in the hidden constant in $O_{\epsilon}(d)$.
The text is structured as follows. In Section $2$, we show that big components are 'everywhere dense' (see the exact definition in Lemma \ref{dense-giant-2}). In Section $3$, we exhibit several structures that \textbf{whp} do not appear in the site-percolated hypercube. In Section $4$, we prove Theorem \ref{main-theorem}.
Our notation is fairly standard. Throughout the rest of the text, unless specifically stated otherwise, we set $\epsilon$ to be a small enough constant, $p=\frac{1+\epsilon}{d}$, and $R$ to be a random subset of $V(Q^d)$ formed by including each vertex independently with probability $p$. Furthermore, we denote by $L_1$ the largest connected component of $Q^d[R]$. We denote by $N_G(S)$ the external neighbourhood of a set $S$ in the graph $G$. When $T$ is a subset of $V(G)$, we write $N_T(S)=N_G(S)\cap T$. We omit rounding signs for the sake of clarity of presentation.
\section{Big components are everywhere dense}
We begin with the following claim, which holds for any $d$-regular graph:
\begin{claim} \label{DFS-lemma}
\textit{Let $G=(V,E)$ be a $d$-regular graph on $n$ vertices. Let $\epsilon$ be a small enough constant, and let $p=\frac{1+\epsilon}{d}$. Form $R\subseteq V$ by including each vertex of $V$ independently with probability $p$. Then, \textbf{whp}, any connected component $S$ of $G[R]$ with $|S|=k>300\ln n$ has $|N_G(S)|\ge \frac{9kd}{10}$.}
\end{claim}
\begin{proof}
We run the Depth First Search (DFS) algorithm with $n$ random bits $X_i$, discovering the connected components while generating $G[R]$ (see \cite{site-percolation}). If there is a connected component $S$ of size $k$, then there is an interval of random bits whose length is $k+|N_G(S)|$ in the DFS where we receive $k$ positive answers. Thus, the probability of having a component violating the claim, whose discovery by the DFS starts with a given bit $X_i$ is at most:
\begin{align*}
P\left[Bin\left(\frac{9kd}{10}+k,\frac{1+\epsilon}{d}\right)\ge k\right]
\end{align*}
Hence, by a Chernoff-type bound (see Appendix A in \cite{probablistic method}), the probability of having such a component is at most
\begin{align*}
\exp\left(-\frac{k}{100}\right)\le n^{-3}=o\left(\frac{1}{n^2}\right),
\end{align*}
where the inequality follows from our bound on $k$. We complete the proof with a union bound on the $<n$ possible values of $k$ and $< n$ possible starting points of the interval in the DFS.
\end{proof}
We further require the following simple claim:
\begin{claim} \label{seperating}
Let $S\subseteq V(Q^d)$ such that $|S|=k\le d$. Then, there exist pairwise disjoint subcubes, each of dimension at least $d-k+1$, such that every $v\in S$ is in exactly one of these subcubes.
\end{claim}
\begin{proof}
We proceed by induction on $k$. The case $k=1$ is trivial.
Assume that the statement holds for all $k'<k$. Choose any two vertices of $S$. There is at least one coordinate on which they disagree. Consider the two disjoint (and complementary) subcubes of dimension $d-1$: one where we fix this coordinate to be $0$ and let the other coordinates vary, and one where we fix it to be $1$. The two chosen vertices lie in different subcubes, and every vertex of $S$ lies in exactly one of the two; in particular, each subcube contains at most $k-1$ vertices of $S$. Applying the induction hypothesis to each of these subcubes yields pairwise disjoint subcubes of dimension at least $(d-1)-(k-1)+1=d-k+1$, each containing exactly one vertex of $S$.
\end{proof}
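The induction in Claim \ref{seperating} is effectively an algorithm. The following Python sketch (illustrative; the set $S\subseteq Q^6$ is an arbitrary example of ours) implements it, representing a subcube by its set of fixed coordinates, and checks the disjointness and dimension guarantees.

```python
def separate(S, fixed=None):
    """Split on a coordinate where two vertices of S differ and recurse on
    each half-cube. Returns {vertex: fixed_coords}, where fixed_coords is a
    dict (coordinate -> bit) describing the subcube assigned to that vertex."""
    fixed = dict(fixed or {})
    S = list(S)
    if len(S) == 1:
        return {S[0]: fixed}
    i = next(j for j in range(len(S[0])) if S[0][j] != S[1][j])
    out = {}
    for bit in (0, 1):
        half = [v for v in S if v[i] == bit]
        if half:
            out.update(separate(half, {**fixed, i: bit}))
    return out

d, S = 6, [(0, 0, 0, 0, 0, 0), (1, 1, 0, 0, 0, 0), (1, 0, 1, 0, 1, 0)]
cubes = separate(S)
k = len(S)
for v, fc in cubes.items():
    assert all(v[i] == b for i, b in fc.items())   # v lies in its own subcube
    assert d - len(fc) >= d - k + 1                # dimension >= d - k + 1
# Disjointness: any two subcubes disagree on some commonly fixed coordinate.
vals = list(cubes.values())
for a in range(len(vals)):
    for b in range(a + 1, len(vals)):
        assert any(i in vals[b] and vals[b][i] != x for i, x in vals[a].items())
```

Each split fixes one coordinate and strictly reduces the number of vertices in each half, so a vertex of $S$ ends up with at most $k-1$ fixed coordinates, matching the claim.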
We conclude this section with a lemma, bounding the probability that a fixed small set of vertices has no vertex at Hamming distance $1$ from many vertices in large components or in their neighbourhoods. The proof here borrows several ideas used in \cite{AKS, BKL, EKK}.
\begin{lemma}\label{dense-giant-2}
Fix $S\subseteq V(Q^d)$ such that $|S|=k\le \frac{\epsilon d}{10}$. Then, the probability that every $v\in S$ has less than $\frac{\epsilon^2d}{40}$ neighbours (in $Q^d$) in components of $Q^d[R]$ whose size is at least $n^{1-\epsilon}$ or in their neighbourhoods in $Q^d$ is at most $\exp\left(-\frac{\epsilon^2dk}{40}\right)$.
\end{lemma}
\begin{proof}
Let $S=\{v_1,\cdots, v_k\}$. By Claim \ref{seperating}, we can consider pairwise disjoint subcubes of dimension at least $\left(1-\frac{\epsilon }{10}\right)d$, each containing exactly one of the vertices of $S$. We denote these subcubes by $Q_1,\cdots, Q_k$, where $v_i\in V(Q_i)$.
For each $i$, consider $Q_i$, and assume, without loss of generality, that $v_i$ is the origin of $Q_i$. We then create $\frac{\epsilon d}{10}$ pairwise disjoint subcubes of $Q_i$ of dimension at least $\left(1-\frac{\epsilon}{5}\right)d$, each at distance $1$ from $v_i$, by fixing one of the first $\frac{\epsilon d}{10}$ coordinates of $Q_i$ to be $1$, the rest of the first $\frac{\epsilon d}{10}$ coordinates to be $0$, and letting the other coordinates vary. Denote these subcubes by $Q_i(1), \cdots, Q_i\left(\frac{\epsilon d}{10}\right)$, and the vertex at distance $1$ from $v_i$ in $Q_i(j)$ by $v_i(j)$. We denote by $n'$ the order of each $Q_i(j)$, and note that $n'\ge2^{\left(1-\frac{\epsilon}{5}\right)d}=n^{1-\frac{\epsilon}{5}}$.
Fix $i$. Observe that $p$ is still supercritical at every $Q_i(j)$ since $(1+\epsilon)\left(1-\frac{\epsilon}{5}\right)\ge 1+\frac{3\epsilon}{5}$. Denote by $L_1(i,j)$ the largest component of $Q_i(j)[R]$. Then, by Theorem \ref{BKL}, \textbf{whp} $\big|L_1(i,j)\big|\ge \frac{7\epsilon n'}{6d}.$ Furthermore, by Claim \ref{DFS-lemma}, \textbf{whp}
$\bigg|N_{Q_i(j)}\left(L_1(i,j)\right)\bigg|\ge \frac{21\epsilon n'}{20}.$ Thus, setting $\mathcal{A}_{(i,j)}$ to be the event that $\bigg|L_1(i,j)\cup N_{Q_i(j)}\left(L_1(i,j)\right)\bigg|\ge\frac{21\epsilon n'}{20}$, we have that $P\left(\mathcal{A}_{(i,j)}\right)=1-o(1)$. Now, let $\mathcal{B}_{(i,j)}$ be the event that $v_i(j)\in L_1(i,j)\cup N_{Q_i(j)}\left(L_1(i,j)\right)$. Since the hypercube is vertex-transitive, we have that $P\left(\mathcal{B}_{(i,j)}|\mathcal{A}_{(i,j)}\right)\ge \frac{21\epsilon n'/20}{n'}=\frac{21\epsilon}{20}$. Therefore, $P\left(\mathcal{B}_{(i,j)}\cap \mathcal{A}_{(i,j)}\right)\ge \left(1-o(1)\right)\frac{21 \epsilon}{20}\ge \epsilon$. Since the subcubes $Q_i(j)$ are pairwise disjoint, the events $\mathcal{B}_{(i,j)}\cap \mathcal{A}_{(i,j)}$ are independent for different $j$. Thus, by a Chernoff-type bound, with probability at least $1-\exp\left(-\frac{\epsilon^2d}{40}\right)$, at least $\frac{\epsilon^2d}{40}$ of the $v_i(j)$ are in $L_1(i,j)\cup N_{Q_i(j)}\left(L_1(i,j)\right)$ with $\bigg|L_1(i,j)\cup N_{Q_i(j)}\left(L_1(i,j)\right)\bigg|\ge\frac{21\epsilon n'}{20}$. Thus, with the same probability, $v_i$ is at distance $1$ from at least $\frac{\epsilon^2d}{40}$ vertices in components whose size is at least $\frac{\epsilon n'}{d}\ge \frac{\epsilon n^{1-\frac{\epsilon}{5}}}{d}\ge n^{1-\epsilon}$ or in their neighbourhoods.
Since $v_i$'s are in pairwise disjoint subcubes, these events are also independent for each $v_i$. Hence, the probability that none of the $v_i$ are at distance $1$ from at least $\frac{\epsilon^2d}{40}$ vertices in components whose size is at least $n^{1-\epsilon}$ or in their neighbourhoods is at most $\exp\left(-\frac{\epsilon^2dk}{40}\right)$.
\end{proof}
\section{Unlikely structures in the percolated hypercube}
Denote by $N^2_G(v)$ the set of vertices in $G$ at distance exactly $2$ from $v$. The following lemma shows that, typically, there are no large sections of a sphere of radius $2$ in $Q^d[R]$.
\begin{lemma}\label{no-sphere}
\textbf{Whp}, there is no $v\in Q^d$ such that $\big|N^2_{Q^d}(v)\cap R\big|\ge 2d$.
\end{lemma}
\begin{proof}
We have $n$ ways to choose $v$, and $\binom{\binom{d}{2}}{2d}$ ways to choose a subset of $2d$ vertices from $N^2_{Q^d}(v)$. We include them in $R$ with probability at most $p^{2d}$. Hence, by the union bound, the probability of violating the lemma is at most:
\begin{align*}
2^d\binom{\binom{d}{2}}{2d}\left(\frac{1+\epsilon}{d}\right)^{2d}\le 2^d\left(\frac{ed}{4}\cdot \frac{1+\epsilon}{d}\right)^{2d}\le \left(\frac{14}{15}\right)^{d}=o(1).
\end{align*}
\end{proof}
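The display above combines $\binom{\binom{d}{2}}{2d}\le\left(\frac{ed}{4}\right)^{2d}$ with $2\left(\frac{e(1+\epsilon)}{4}\right)^2\le\frac{14}{15}$, the latter forcing $\epsilon$ to be rather small. The following Python sketch (illustrative; $\epsilon=0.005$ is our own example of a ``small enough'' constant) checks both inequalities in log space for a few values of $d$.

```python
import math

eps = 0.005         # an example of a "small enough" constant
# 2 * (e(1+eps)/4)^2 <= 14/15, the base of the final geometric bound
assert 2 * (math.e * (1 + eps) / 4) ** 2 <= 14 / 15

for d in (20, 40, 80):
    # log of 2^d * C(C(d,2), 2d) * ((1+eps)/d)^(2d), computed exactly
    lhs = (d * math.log(2)
           + math.log(math.comb(math.comb(d, 2), 2 * d))
           + 2 * d * math.log((1 + eps) / d))
    rhs = d * math.log(14 / 15)   # log of (14/15)^d
    assert lhs <= rhs
```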
From Lemma \ref{no-sphere}, we are able to derive the following lemma:
\begin{lemma} \label{big-neighbourhood}
\textbf{Whp} there are no $S\subseteq R$ and $W\subseteq V(Q^d)$ disjoint from $S$ such that $|W|\le\frac{\epsilon^4d|S|}{9\cdot 200^2}$ and every $v\in S$ has $d_{Q^d}(v, W)\ge \frac{\epsilon^2d}{200}$.
\end{lemma}
\begin{proof}
Assume the contrary. By our assumption on $S$, there are at least $\frac{\epsilon^2 d|S|}{200}$ edges between $S$ and $W$. Thus, the average degree from $W$ to $S$ is at least $\frac{\epsilon^2 d|S|}{200|W|}$. Now, let us count the number of cherries (paths of length $2$) with the vertex of degree $2$ in $W$. By Jensen's inequality, we have at least $\binom{\epsilon^2 d|S|/200|W|}{2}|W|\ge \frac{\epsilon^4d^2|S|^2}{4\cdot 200^2|W|}\ge \frac{9d|S|}{4}$ such cherries, where we used our assumption on $|W|$. Thus, by the pigeonhole principle, there is a vertex $v\in S$ that is in at least $\frac{2}{|S|}\cdot\frac{9d|S|}{4}=\frac{9d}{2}$ cherries. Since every pair of vertices in $S$ is connected by at most two paths of length $2$ in $Q^d$, we obtain that $v$ is at Hamming distance $2$ from at least $\frac{\frac{9d}{2}}{2}=\frac{9d}{4}>2d$ vertices in $S\subseteq R$. On the other hand, by Lemma \ref{no-sphere}, \textbf{whp} there is no $v\in Q^d$ such that $|N_{Q^d}^2(v)\cap R|\ge 2d$, completing the proof.
\end{proof}
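The cherry count rests on convexity of $x\mapsto\binom{x}{2}$: for any degree sequence from $W$ into $S$, $\sum_{w\in W}\binom{\deg(w)}{2}\ge|W|\binom{\bar{d}}{2}$, where $\bar{d}$ is the average degree. A quick Python sketch (illustrative, with random degree sequences of our own choosing) confirms this instance of Jensen's inequality.

```python
import random

random.seed(0)

def choose2(x):
    # x(x-1)/2, the convex extension of "x choose 2" to real x
    return x * (x - 1) / 2.0

for _ in range(1000):
    W = random.randint(1, 20)
    degs = [random.randint(0, 30) for _ in range(W)]
    avg = sum(degs) / W
    cherries = sum(choose2(t) for t in degs)
    # Jensen: mean of a convex function dominates the function of the mean
    assert cherries >= W * choose2(avg) - 1e-9
```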
The next lemma bounds the number of subtrees of a given order in a $d$-regular graph.
\begin{lemma} \label{trees}
\textit{Let $t_k(G)$ denote the number of $k$-vertex trees contained in a $d$-regular graph $G$ on $n$ vertices. Then,
$$t_k(G)\le n(ed)^{k-1}.$$}
\end{lemma}
This follows directly from Lemma 2.1 of \cite{trees}.
We are now ready to state and prove the final lemma of this section:
\begin{lemma}\label{no-squid}
Let $C>0$ be a constant. Then, \textbf{whp}, there is no $S\subseteq V(Q^d)$, such that $|S|\le Cd$, $S$ is connected in $Q^d$, and at least $\frac{\epsilon d}{10}$ vertices $v\in S$ have that $\big|N_{L_1\cup N_{Q^d}(L_1)}(v)\big|<\frac{\epsilon^2d}{40}$.
\end{lemma}
\begin{proof}
By Theorem \ref{BKL}, \textbf{whp} there is a unique giant component whose size is larger than $d^{10}$, which we denote by $L_1$. Thus, it suffices to show that \textbf{whp} there is no such $S$, where $\frac{\epsilon d}{10}$ of its vertices have less than $\frac{\epsilon^2d}{40}$ vertices at Hamming distance $1$ from components of size at least $n^{1-\epsilon}$ or their neighbourhoods, since \textbf{whp} these components and their neighbourhoods are in fact $L_1\cup N_{Q^d}(L_1)$.
Since $Q^d$ is connected, we can consider connected sets of size exactly $Cd$. By Lemma \ref{trees}, we have $n(ed)^{Cd}$ ways to choose $S$. We have $\binom{Cd}{\frac{\epsilon d}{10}}$ ways to choose the vertices in $S$ which do not have at least $\frac{\epsilon^2d}{40}$ vertices at Hamming distance $1$ from components of size at least $n^{1-\epsilon}$ or their neighbourhoods. By Lemma \ref{dense-giant-2}, the probability that no vertex in a given set of $\frac{\epsilon d}{10}$ vertices in $S$ has at least $\frac{\epsilon^2d}{40}$ vertices at Hamming distance $1$ from components of size at least $n^{1-\epsilon}$ or their neighbourhoods is at most $\exp\left(-\frac{\epsilon^4d^2}{400}\right)$. Hence, the probability of the event violating the statement of the lemma is at most:
\begin{align*}
n(ed)^{Cd}\binom{Cd}{\frac{\epsilon d}{10}}\exp\left(-\frac{\epsilon^4d^2}{400}\right)\le n\left((ed)^C\left(\frac{10eC}{\epsilon}\right)^{\frac{\epsilon}{10}}\exp\left(-\frac{\epsilon^4d}{400}\right)\right)^d=o(1).
\end{align*}
\end{proof}
\section{Proof of Theorem \ref{main-theorem}}
Let $p_1=\frac{1+\epsilon/2}{d}$. Form $R_1$ by including each vertex $v\in V(Q^d)$ independently with probability $p_1$. By Theorem \ref{BKL}, \textbf{whp} there is a unique giant component, denote it by $L_1'$. We can thus split the vertices of the hypercube into the following three disjoint sets: $T=L_1'\cup N_{Q^d}(L_1')$, $M$ is the set of vertices outside $T$ with at least $\frac{\epsilon^2d}{200}$ neighbours in $T$, and $S=V\left(Q^d\right)\setminus\left(T\cup M\right)$.
Let $p_2=\frac{\epsilon}{2d-2-\epsilon}$. Form $R_2$ by including each vertex $v\in V(Q^d)$ independently with probability $p_2$. Note that since $1-p=(1-p_1)(1-p_2)$, the random set $R$ has the same distribution as $R_1\cup R_2$. Thus, with a slight abuse of notation, we write $R=R_1\cup R_2$. We begin by performing the second exposure (i.e. by generating $R_2$) on $S\cup M$, and only afterwards on $T$. Note that once we show that a connected set has a neighbour in $T\cap R_2$, it means that it merges into $L_1'$ (whereas by Theorem \ref{BKL}, \textbf{whp} $L_1'\subseteq L_1$).
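The algebra behind this two-round sprinkling can be checked exactly: the choice $p_2=\frac{\epsilon}{2d-2-\epsilon}$ is precisely what makes the two rounds compose to $p$. The following Python snippet (an illustrative check of ours, with arbitrary sample values of $d$ and $\epsilon$) verifies the identity with exact rational arithmetic.

```python
from fractions import Fraction

for d in (10, 50, 200):
    for eps in (Fraction(1, 100), Fraction(1, 10)):
        p  = (1 + eps) / d              # target probability
        p1 = (1 + eps / 2) / d          # first exposure
        p2 = eps / (2 * d - 2 - eps)    # second exposure (sprinkling)
        # (1-p1)(1-p2) = 1-p, so R_1 union R_2 has the same law as R
        assert (1 - p1) * (1 - p2) == 1 - p
```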
Let $C_1$ be a positive constant, possibly depending on $\epsilon$. We first show that \textbf{whp} if there is a connected component $B$ in $(S\cup M)\cap R$ such that $\big|B\cap M\big|\ge C_1d$, it merges with $L_1'$ after the second exposure on $T$. By construction, every $v\in M$ has at least $\frac{\epsilon^2d}{200}$ neighbours in $T$, and $T\cap M=\emptyset$. Thus, by Lemma \ref{big-neighbourhood}, we have that \textbf{whp} $\big|N_T(B\cap M)\big|\ge \frac{C_1\epsilon^4d^2}{9\cdot 200^2}$. Hence,
\begin{align*}
P\left[\big|N_T(B)\cap R_2\big|=0\right]&\le \left(1-\frac{\epsilon}{2d}\right)^{\frac{C_1\epsilon^4d^2}{9\cdot200^2}}\le \exp\left(-\frac{C_1\epsilon^5d}{18\cdot200^2}\right)=o(1/n),
\end{align*}
by choosing $C_1\ge\frac{18\cdot200^2}{\epsilon^5}$. We conclude with the union bound over the $<n$ connected components in $(S\cup M)\cap R$.
Thus, we are left to deal with connected components $B\subseteq(S\cup M)\cap R$ of size at least $2C_1d$ with less than $C_1d$ vertices in $B\cap M$. Taking a connected subset $B'\subseteq B$ of size exactly $Cd:=2C_1d$, we have that $|B'\cap S|\ge |B'|-|B'\cap M|\ge C_1d\ge \frac{\epsilon d}{20}$. But by Lemma \ref{no-squid}, \textbf{whp} there is no connected set of size $Cd$ in $Q^d$ with at least $\frac{\frac{\epsilon}{2} d}{10}=\frac{\epsilon d}{20}$ vertices which do not have at least $\frac{\left(\frac{\epsilon}{2}\right)^2d}{40}\ge \frac{\epsilon^2d}{200}$ vertices at Hamming distance $1$ from $L_1'\cup N_{Q^d}(L_1')$. Recalling that $S, M,T$ and $L_1'$ were formed according to the supercritical percolation with $p_1=\frac{1+\epsilon/2}{d}$, this means that \textbf{whp} there is no such connected set $B'$. All in all, \textbf{whp}, there is no connected component $B$ in $Q^d[R]$ of size at least $Cd$ outside of $L_1$. \qedsymbol
\paragraph{Acknowledgements.} The first author wishes to thank Arnon Chor and Dor Elboim for fruitful discussions.
| {
"timestamp": "2022-04-12T02:40:58",
"yymm": "2204",
"arxiv_id": "2204.05074",
"language": "en",
"url": "https://arxiv.org/abs/2204.05074",
"abstract": "We consider supercritical site percolation on the $d$-dimensional hypercube $Q^d$. We show that typically all components in the percolated hypercube, besides the giant, are of size $O(d)$. This resolves a conjecture of Bollobás, Kohayakawa, and Łuczak from 1994.",
"subjects": "Probability (math.PR); Combinatorics (math.CO)",
"title": "Supercritical Site Percolation on the Hypercube: Small Components are Small"
} |
https://arxiv.org/abs/1109.6277 | Domination Value in Graphs | A set $D \subseteq V(G)$ is a \emph{dominating set} of $G$ if every vertex not in $D$ is adjacent to at least one vertex in $D$. A dominating set of $G$ of minimum cardinality is called a $\gamma(G)$-set. For each vertex $v \in V(G)$, we define the \emph{domination value} of $v$ to be the number of $\gamma(G)$-sets to which $v$ belongs. In this paper, we study some basic properties of the domination value function, thus initiating \emph{a local study of domination} in graphs. Further, we characterize domination value for the Petersen graph, complete $n$-partite graphs, cycles, and paths. | \section{Introduction}
Let $G = (V(G),E(G))$ be a simple, undirected, and nontrivial
graph with order $|V(G)|$ and size $|E(G)|$. For $S \subseteq
V(G)$, we denote by $\langle S \rangle$ the subgraph of $G$ induced by $S$.
The \emph{degree of a vertex $v$} in $G$, denoted by $\deg_G(v)$,
is the number of edges that are incident to $v$ in $G$; an \emph{end-vertex} is a vertex of degree one,
and a \emph{support vertex} is a vertex that is adjacent to an end-vertex. We denote by
$\Delta(G)$ \emph{the maximum degree} of a graph $G$. For a vertex
$v \in V(G)$, the \emph{open neighborhood} $N(v)$ of $v$ is the
set of all vertices adjacent to $v$ in $G$, and the \emph{closed
neighborhood} $N[v]$ of $v$ is the set $N(v) \cup \{v\}$. A set $D
\subseteq V(G)$ is a \emph{dominating set} (DS) of $G$ if for each
$v \not\in D$ there exists a $u \in D$ such that $uv\in E(G)$. The
\emph{domination number} of $G$, denoted by $\gamma(G)$, is the
minimum cardinality of a DS in $G$; a DS of $G$ of minimum
cardinality is called a $\gamma(G)$-set. For earlier discussions
on domination in graphs, see \cite{B1, B2, EJ, Jaegar, Ore}. For a
survey of domination in graphs, refer to \cite{Dom1, Dom2}. We generally
follow \cite{CZ} for notation and graph theory terminology.
Throughout the paper, we denote by $P_n$, $C_n$, and $K_n$ the path,
the cycle, and the complete graph on $n$ vertices, respectively.\\
In \cite{Slater}, Slater introduced the notion of the number of dominating sets of $G$, which he denoted by HED$(G)$ in honor of Steve Hedetniemi; further, he also
used \#$\gamma(G)$ to denote the number of $\gamma(G)$-sets. In this paper, we will use $\tau(G)$ to denote the total number of $\gamma(G)$-sets, and
by $DM(G)$ the collection of all $\gamma(G)$-sets. For each vertex
$v \in V(G)$, we define the \emph{domination value} of $v$,
denoted by $DV_G(v)$, to be the number of $\gamma(G)$-sets to
which $v$ belongs; we often drop $G$ when ambiguity is not a
concern. See \cite{CK} for a discussion on \emph{total domination value} in graphs. For further work on domination value in graphs, see \cite{Yi}. In this paper, we study some basic
properties of the domination value function, thus initiating a
\emph{local study} of domination in graphs. When a real-world
situation can be modeled by a graph, the locations (vertices)
with high domination values are of interest. One can use
domination value in selecting locations for fire departments or
convenience stores, for example. Though numerous papers on
domination have been published, no prior systematic local study of
domination is known. However, in \cite{pruned tree}, Mynhardt
characterized the vertices in a tree $T$ whose domination value is $0$ or $\tau(T)$.
It should be noted that finding the domination value of a given vertex in
a graph $G$ can be an extremely difficult task, given the difficulty of finding $\tau(G)$ or even just $\gamma(G)$.\\
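Since $\gamma(G)$, $\tau(G)$, and $DV_G(v)$ are defined by finite enumeration, they can be computed on small graphs by exhaustive search. The following Python sketch is ours, not part of the paper; the helper names `gamma_sets` and `domination_value` are our choices. It reproduces Example (b) of the section on paths, where $\tau(P_5)=3$ and $DV(3)=0$.

```python
from itertools import combinations

def gamma_sets(n, edges):
    """All gamma(G)-sets of the graph on vertices 1..n, by brute force.

    Tries subset sizes in increasing order, so the first size that
    yields a dominating set is gamma(G).  Exponential in n.
    """
    verts = range(1, n + 1)
    closed = {v: {v} for v in verts}          # closed neighborhoods N[v]
    for u, w in edges:
        closed[u].add(w)
        closed[w].add(u)
    full = set(verts)
    for size in range(1, n + 1):
        found = [set(S) for S in combinations(verts, size)
                 if set().union(*(closed[v] for v in S)) == full]
        if found:
            return found
    return []

def domination_value(n, edges, v):
    """DV_G(v): the number of gamma(G)-sets containing v."""
    return sum(v in S for S in gamma_sets(n, edges))

p5 = [(i, i + 1) for i in range(1, 5)]        # the path P_5
print(len(gamma_sets(5, p5)))                              # 3
print([domination_value(5, p5, v) for v in range(1, 6)])   # [1, 2, 0, 2, 1]
```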
\section{Basic properties of domination value: upper and lower bounds}
In this section, we consider the lower and upper bounds of the
domination value function for a fixed vertex $v_0$ and for $v \in
N[v_0]$. Clearly, $0 \le DV_G(v) \le \tau(G)$ for any graph $G$
and for any vertex $v \in V(G)$. We say a bound is sharp if
equality is attained by some graph.
We first make the following observations.\\
\begin{observation}\label{observation}
$\displaystyle \sum_{v \in V(G)} DV_G(v) = \tau(G) \cdot
\gamma(G)$
\end{observation}
\begin{observation}\label{observation1}
If there is an isomorphism of graphs carrying a vertex $v$ in $G$
to a vertex $v'$ in $G'$, then $DV_G(v)=DV_{G'}(v')$.
\end{observation}
Examples of graphs that admit nontrivial automorphisms are cycles, paths, and
the Petersen graph. The Petersen graph, which often serves as a
counterexample to conjectures, is vertex-transitive (p.27,
\cite{Petersen}). Let $\mathcal{P}$ denote the Petersen graph with
the labeling of Figure \ref{P}.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.5}{\input{P.pstex_t}} \caption{The Petersen
graph}\label{P}
\end{center}
\end{figure}
It's easy to check that $\gamma(\mathcal{P})=3$. We will show that
$DV(v)=3$ for each $v \in V(\mathcal{P})$. Since $\mathcal{P}$ is
vertex-transitive, it suffices to compute $DV_{\mathcal{P}}(1)$.
For any $\gamma(\mathcal{P})$-set $\Gamma$ containing the vertex
$1$, one can easily check that no vertex in $N(1)$ belongs to
$\Gamma$. Further, notice that no three vertices from
$\{1,2,3,4,5\}$ form a $\gamma(\mathcal{P})$-set. Keeping these
two conditions in mind, one can readily verify that the
$\gamma(\mathcal{P})$-sets containing the vertex $1$ are
$\{1,3,7\}$, $\{1,4,10\}$, and $\{1,8,9\}$, and thus
$DV(1)=3=DV(v)$ for each $v \in V(\mathcal{P})$.
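This computation can be confirmed by brute force. The sketch below is ours; it uses a standard labeling of the Petersen graph (outer $5$-cycle $1,\dots,5$, inner pentagram $6,\dots,10$, spokes $i \sim i+5$), which need not match Figure \ref{P}, but by vertex-transitivity the domination values are unaffected.

```python
from itertools import combinations

def gamma_sets(n, edges):
    """All gamma(G)-sets of the graph on vertices 1..n, by brute force."""
    verts = range(1, n + 1)
    closed = {v: {v} for v in verts}
    for u, w in edges:
        closed[u].add(w)
        closed[w].add(u)
    full = set(verts)
    for size in range(1, n + 1):
        found = [set(S) for S in combinations(verts, size)
                 if set().union(*(closed[v] for v in S)) == full]
        if found:
            return found
    return []

# Outer 5-cycle, five spokes, inner pentagram (a standard labeling).
petersen = ([(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
            + [(i, i + 5) for i in range(1, 6)]
            + [(6, 8), (8, 10), (10, 7), (7, 9), (9, 6)])
sets = gamma_sets(10, petersen)
print(len(sets[0]))                                        # gamma = 3
print(len(sets))                                           # tau = 10
print({sum(v in S for S in sets) for v in range(1, 11)})   # {3}
```

Note that $\sum_v DV(v) = 10 \cdot 3 = \tau(\mathcal{P}) \cdot \gamma(\mathcal{P})$, as Observation \ref{observation} requires.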
\begin{observation}\label{observation2}
Let $G$ be the disjoint union of two graphs $G_1$ and $G_2$. Then
$\gamma(G)=\gamma(G_1)+ \gamma(G_2)$ and $\tau(G)=\tau(G_1) \cdot
\tau(G_2)$. For $v \in V(G_1)$, $DV_G(v)=DV_{G_1}(v) \cdot
\tau(G_2)$.
\end{observation}
\begin{proposition}\label{upperbound1}
For a fixed $v_0 \in V(G)$, we have
$$\tau(G) \le \sum_{v \in N[v_0]} DV_G(v) \le \tau(G) \cdot \gamma(G),$$ and both bounds are
sharp.
\end{proposition}
\begin{proof}
The upper bound follows from Observation \ref{observation}. For
the lower bound, note that every $\gamma(G)$-set $\Gamma$
must contain a vertex in $N[v_0]$: otherwise $\Gamma$ fails to dominate $v_0$.\\
For sharpness of the lower bound, take $v_0$ to be an end-vertex
of $P_{3k}$ for $k \ge 1$ (see Theorem \ref{theorem on paths} and
Corollary \ref{path on 3k}). For sharpness of the upper bound,
take as $v_0$ the central vertex of (A) in Figure \ref{figure1}. \hfill
\end{proof}
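The lower-bound sharpness can be checked concretely. Taking $v_0=1$ in $P_6$ (the case $k=2$), a brute-force enumeration (our own sketch; the helper `gamma_sets` is our name) finds the unique $\gamma(P_6)$-set $\{2,5\}$, so $\sum_{v \in N[1]} DV(v) = DV(1)+DV(2) = 0+1 = \tau(P_6)$.

```python
from itertools import combinations

def gamma_sets(n, edges):
    """All gamma(G)-sets of the graph on vertices 1..n, by brute force."""
    verts = range(1, n + 1)
    closed = {v: {v} for v in verts}
    for u, w in edges:
        closed[u].add(w)
        closed[w].add(u)
    full = set(verts)
    for size in range(1, n + 1):
        found = [set(S) for S in combinations(verts, size)
                 if set().union(*(closed[v] for v in S)) == full]
        if found:
            return found
    return []

p6 = [(i, i + 1) for i in range(1, 6)]                 # the path P_6
sets = gamma_sets(6, p6)
tau = len(sets)
lhs = sum(sum(v in S for S in sets) for v in (1, 2))   # N[1] = {1, 2}
print(sorted(sets[0]))    # [2, 5]: the unique gamma(P_6)-set
print(lhs == tau)         # True: the lower bound is attained
```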
\begin{proposition} \label{upperbound2}
For any $v_0 \in V(G)$, $$\sum_{v \in N[v_0]} DV_G(v) \le \tau(G)
\cdot (1+ \deg_G(v_0)),$$ and the bound is sharp.
\end{proposition}
\begin{proof}
For each $v\in N[v_0]$, $DV_G(v)\leq \tau(G)$, and $|N[v_0]|=1+ \deg_G(v_0)$. Thus
$$\sum_{v\in N[v_0]}DV(v) \leq \sum_{v\in N[v_0]}\tau(G)=\tau(G)(1+ \deg_G(v_0)).$$
The upper bound is achieved for a graph of order $n$ for every
$n\geq 1$: let $G_n$ be a graph on $n$ vertices containing an
isolated vertex, and take as $v_0$ one of the isolated vertices;
then equality follows from Observation \ref{observation2} and
$\deg_{G_n}(v_0)=0$. \hfill
\end{proof}
\begin{figure}[htbp]
\begin{center}
\scalebox{0.5}{\input{DVupperbound.pstex_t}} \caption{Examples of
local domination values and their upper bounds}\label{figure1}
\end{center}
\end{figure}
We will compare two examples, where each example attains the upper
bound of Proposition \ref{upperbound1} or Proposition
\ref{upperbound2}, but not both. Let $v_0$ be the central vertex
of degree $3$, which is not a support vertex as in the graph (A)
of Figure \ref{figure1}. Then $\sum_{v \in N[v_0]} DV_G(v)=3$.
Note that $\tau (G) =1$, $\gamma (G)=3$, and $\deg_G(v_0)=3$.
Proposition \ref{upperbound1} yields the upper bound $\tau (G)
\cdot \gamma (G)=1\cdot 3=3$, which is sharp. But, the upper bound
provided by Proposition \ref{upperbound2}
is $\tau (G) \cdot (1+\deg_G(v_0))=1 \cdot (1+3)=4$, which is not sharp in this case.\\
Now, let $v_0$ be an isolated vertex as labeled in the graph (B)
of Figure \ref{figure1}. Then $\sum_{v \in N[v_0]}DV_{G'}(v)=2$.
Note that $\tau (G')=2$, $\gamma(G')=4$, and $\deg_{G'}(v_0)=0$.
Proposition \ref{upperbound2} yields the upper bound $\tau (G')
\cdot (1+ \deg_{G'}(v_0))=2 \cdot (1+0)=2$, which is sharp. But,
the upper bound provided by Proposition \ref{upperbound1} is
$\tau (G') \cdot \gamma (G')=2 \cdot 4=8$, which is not sharp in this case.\\
\begin{proposition}\label{subgraph-tau}
Let $H$ be a subgraph of $G$ with $V(H)=V(G)$. If
$\gamma(H)=\gamma(G)$, then $\tau(H) \le \tau(G)$.
\end{proposition}
\begin{proof}
Since $V(H)=V(G)$ and $E(H) \subseteq E(G)$, every DS of $H$ is a
DS of $G$. Since $\gamma(H)=\gamma(G)$, every DS of minimum
cardinality for $H$ is also a DS \emph{of minimum cardinality for
$G$}; hence $\tau(H) \le \tau(G)$.
\end{proof}
The \emph{complement} $\overline{G}=(V(\overline{G}),
E(\overline{G}))$ of a graph $G$ is the graph such that
$V(\overline{G})=V(G)$ and $uv \in E(\overline{G})$ if and only if
$uv \not\in E(G)$. We recall the following
\begin{theorem}
Let $G$ be any graph of order $n$. Then
\begin{itemize}
\item[(i)] (\cite{Jaegar}, Jaeger and Payan)
$\gamma(G)+\gamma(\overline{G}) \le n+1$; and
\item[(ii)](\cite{B1}, p.304) $\gamma(G) \le n- \Delta(G)$.
\end{itemize}
\end{theorem}
\begin{proposition}
Let $G$ be a graph on $n=2m \ge 4$ vertices. If $G$ or
$\overline{G}$ is $mK_2$, then
$$DV_G(v)+DV_{\overline{G}}(v)=n-1+2^{\frac{n}{2}-1}.$$
\end{proposition}
\begin{proof}
Without loss of generality, assume $G=mK_2$ and label the vertices
of $G$ by $1,2,\ldots,2m$. Further assume that the vertex $2k-1$
is adjacent to the vertex $2k$, where $1 \le k \le m$. A
$\gamma(G)$-set consists of one vertex from each copy of $K_2$;
hence $DV_G(1)=2^{m-1}$. By Observation \ref{observation1} and
Observation \ref{observation2}, $DV_G(v)=2^{m-1}=2^{\frac{n}{2}-1}$ for any $v\in V(G)$.\\
Now, consider $\overline{G}$ and the vertex labeled $1$ for ease
of notation. Since $\Delta(\overline{G})=n-2$, $\gamma(\overline{G})>1$.
Noting that the sets $\{1, \alpha\}$, as $\alpha$ ranges from $2$ to $2m$,
enumerate all dominating sets of $\overline{G}$ of cardinality two
that contain the vertex $1$, we have $\gamma(\overline{G})=2$ and
$DV_{\overline{G}}(1)=n-1$. By Observation \ref{observation1},
$DV_{\overline{G}}(v)=n-1$ holds for any $v \in V(\overline{G})$.
Therefore, $DV_G(v)+DV_{\overline{G}}(v)=n-1+2^{\frac{n}{2}-1}$. \hfill
\end{proof}
Next we consider the domination value in a graph $G$ when $\Delta(G)$ is
given.
\begin{observation} \label{degree n-1}
Let $G$ be a graph of order $n \ge 2$ such that $\Delta (G)=n-1$. Then
$\gamma(G)=1$ and $DV(v) \le 1$ for any $v \in V(G)$. Equality
holds if and only if $\deg_G(v)=n-1$.
\end{observation}
\begin{proposition}\label{degree n-2}
Let $G$ be a graph of order $n \ge 3$ such that $\Delta(G)=n-2$. Then
$\gamma(G)=2$ and $DV(v) \leq n-1$ for any $v \in V(G)$. Further,
if $\deg(v)=n-2$, then $DV(v)=|N[w]|$ where $vw \notin E(G)$.
\end{proposition}
\begin{proof}
Let $\deg_G(v)=\Delta(G)=n-2$. Then $\gamma(G)>1$, and there is exactly
one vertex $w$ such that $vw \notin E(G)$. Clearly, $\{v, w\}$ is
a $\gamma(G)$-set; so $\gamma(G)=2$. Noticing that $v$ dominates
$N[v]=V(G)-\{w\}$, we see that the $\gamma(G)$-sets containing $v$
are exactly the pairs $\{v,u\}$ with $u \in N[w]$; i.e., $DV(v)=|N[w]| \le n-1$. \hfill
\end{proof}
\begin{theorem}\label{theorem on Delta(G)=n-3}
Let $G$ be a graph of order $n \ge 4$ and $\Delta (G)=n-3$. Fix a
vertex $v$ with $\deg_G(v)= \Delta(G)$.
\begin{itemize}
\item [(i)] If $G$ is disconnected, then $\gamma(G)=2$ with
$DV(v)=2$ or $\gamma(G)=3$ with $DV(v)\le n-3$. \item [(ii)] If
$G$ is connected, then $\gamma(G)=2$ with $DV(v) \le n-2$ or
$\gamma(G)=3$ with $DV(v) \le (\frac{n-1}{2})^2$.
\end{itemize}
\end{theorem}
\begin{proof}
Since $\deg_G(v)=\Delta(G)=n-3$, there are two vertices, say
$\alpha$ and $\beta$, such that $v \alpha, v \beta \not\in E(G)$.
We consider four cases.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.5}{\input{max_n_3D.pstex_t}} \caption{Cases 1, 2, and
3 when $\Delta(G)=n-3$}\label{max1}
\end{center}
\end{figure}
\emph{Case 1. Neither $\alpha$ nor $\beta$ is adjacent to any
vertex in $N[v]$:} Let $G^{\prime}= \langle V(G) - \{\alpha,
\beta\} \rangle$. Then $\deg_{G'}(v)=n-3$ with $|V(G^{\prime})|=n-2$.
By Observation~\ref{degree n-1}, $\gamma(G')=1$ and
$DV_{G'}(v)=1$. First suppose $\alpha$ and $\beta$ are isolated
vertices in $G$. (Consider (A) of Figure \ref{max1} with the edge
$\alpha \beta$ being removed.) Observation~\ref{observation2},
together with $\gamma(\langle\{\alpha,\beta\}\rangle)=2$ and
$\tau(\langle\{\alpha,\beta\}\rangle)=1$, yields $\gamma(G)=3$ and
$DV_G(v)=1$. Next assume that $G$ has no isolated vertex; then $\alpha
\beta \in E(G)$ (see (A) of Figure \ref{max1}). Observation
\ref{observation2}, together with $\gamma(\langle\{\alpha,\beta \}\rangle)=1$ and
$\tau(\langle\{\alpha,\beta \}\rangle)=2$, yields $\gamma(G)=2$ and $DV_G(v)=2$.\\
\emph{Case 2. Exactly one of $\alpha$ and $\beta$ is adjacent to a
vertex in $N(v)$:} Without loss of generality, assume that
$\alpha$ is adjacent to a vertex in $N(v)$. First suppose that $G$
is not connected. Then $\alpha \beta \not\in E(G)$. (Consider (B)
of Figure \ref{max1} with the edge $\alpha \beta$ being removed.)
Let $G'=\langle V(G)-\{\beta\} \rangle$. Then $\deg_{G'}(v)=n-3$ with
$|V(G')|=n-1$. By Proposition \ref{degree n-2}, $\gamma(G')=2$ and
$DV_{G'}(v)=|N[\alpha]| \le n-3$. Observation \ref{observation2},
together with $\gamma(\langle \{\beta\} \rangle)=1$ and
$\tau(\langle \{\beta\}\rangle)=1$, yields $\gamma(G)=3$ and
$DV_G(v)=|N[\alpha]|\le n-3$. Next suppose that $G$ is connected.
Then $\alpha \beta \in E(G)$ and $\alpha$ is a support vertex of
$G$. (See (B) of Figure \ref{max1}.) Since $\Delta(G)<n-1$,
$\gamma(G)>1$. Since $\{v,\alpha\}$ is a $\gamma(G)$-set,
$\gamma(G)=2$. Noting that $v$ dominates $V(G)-\{\alpha, \beta\}$,
the number of $\gamma(G)$-sets containing $v$ equals the number of
vertices in $G$ that dominate both $\alpha$ and $\beta$; here
$N[\beta]=\{\alpha,\beta\}$, so the only such vertices are $\alpha$
and $\beta$. Thus $DV_G(v)=2$.\\
\emph{Case 3. There exists a vertex in $N(v)$, say $x$, that is
adjacent to both $\alpha$ and $\beta$:} Notice that $n \ge 6$ in
this case, since $vx, \alpha x, \beta x \in E(G)$ and
$\deg_G(v)=\Delta (G)$ (see (C) of Figure \ref{max1}). Since $\{v,
x\}$ is a $\gamma(G)$-set, $\gamma(G)=2$. If $\alpha\beta \not\in
E(G)$, then $DV(v)=|N[\alpha] \cap N[\beta]| \le n-3$. If
$\alpha\beta \in E(G)$, then $|N[\alpha] \cap N[\beta]| \le n-4$
since $\Delta (G)=n-3$. Noting both $\{v, \alpha\}$ and $\{v,
\beta\}$ are $\gamma(G)$-sets, we have $DV(v)=2+|N[\alpha] \cap N[\beta]| \le n-2$.\\
\begin{figure}[htbp]
\begin{center}
\scalebox{0.5}{\input{max_n_34.pstex_t}} \caption{Subcases 4.1
and 4.2 when $\Delta(G)=n-3$}\label{max2}
\end{center}
\end{figure}
\emph{Case 4. There exist vertices in $N(v)$ that are adjacent to
$\alpha$ and $\beta$, but no vertex in $N(v)$ is adjacent to both
$\alpha$ and $\beta$:} Let $x_0 \in N(v) \cap N(\alpha)$ and $y_0
\in N(v) \cap N(\beta)$. We consider two
subcases.\\
\emph{Subcase 4.1. $\alpha \beta \not\in E(G)$ (see (A) of Figure
\ref{max2}):} First, assume $\gamma(G)=2$. This is possible when
$\{x_0, y_0\}$ is a $\gamma(G)$-set satisfying $N[x_0] \cup
N[y_0]=V(G)$. Notice that there's no $\gamma(G)$-set containing
$v$ when $\gamma(G)=2$, since no vertex of $G$ dominates both
$\alpha$ and $\beta$. Thus $DV(v)=0$. Second,
assume $\gamma(G)>2$. Since $\{v, \alpha, \beta\}$ is a
$\gamma(G)$-set, $\gamma(G) =3$. Noticing that every
$\gamma(G)$-set contains a vertex in $N[\alpha]$ and a vertex in
$N[\beta]$ and that $N[\alpha] \cap N[\beta] =\emptyset$, we see
$$DV(v) = |N[\alpha]| \cdot |N[\beta]| \le
\left(\frac{|N[\alpha]|+|N[\beta]|}{2}\right)^2 \le
\left(\frac{n-1}{2}\right)^2,$$ where the first inequality is the
arithmetic-geometric mean inequality (i.e., $\frac{a+b}{2} \geq
\sqrt{ab}$ for $a, b \geq 0$), and the second holds since
$N[\alpha]$ and $N[\beta]$ are disjoint subsets of $V(G)-\{v\}$.\\
\emph{Subcase 4.2. $\alpha \beta \in E(G)$ (see (B) of Figure
\ref{max2}):} Since $\{v, \alpha\}$ is a $\gamma(G)$-set,
$\gamma(G)=2$. Since there's no vertex in $N(v)$ that is adjacent
to both $\alpha$ and $\beta$, there are only two $\gamma(G)$-sets
containing $v$, i.e., $\{v, \alpha\}$ and $\{v, \beta\}$. Thus
$DV(v)=2$. \hfill
\end{proof}
\noindent\textbf{Remark.} In the proof of Theorem~\ref{theorem on
Delta(G)=n-3}, we observe that one may have $DV(v)=0$ even though
$\deg_G(v)=\Delta(G) \le n-3$. See Figure \ref{zeroTDV-max-deg}
for a graph of order $n$, $\deg_G(v)= \Delta(G)=n-3$,
$\gamma(G)=2$, and $DV(v)=0$.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.5}{\input{zeroDV_max_deg.pstex_t}} \caption{A graph of
order $9$, $\deg_G(v)=\Delta(G)=6$, $DV(v)=0$ with a unique
$\gamma$-set $\{x_0, y_0\}$}\label{zeroTDV-max-deg}
\end{center}
\end{figure}
\section{Domination value on complete $n$-partite graphs}
For a \emph{complete} $n$-partite graph $G$, let $V(G)$ be
partitioned into $n$-partite sets $V_1$, $V_2$, $\ldots$, $V_n$,
and let $a_i=|V_i|\geq 1$ for each $1\leq i \leq n$, where $n\geq
2$.
\begin{proposition}
Let $G=K_{a_1,a_2, \ldots, a_n}$ be a complete $n$-partite graph
with $a_i \ge 2$ for each $i$ ($1 \le i \le n$), and let
$m=|\{i : a_i=2\}|$. Then
$$\tau(G)=\frac{1}{2}\left[\left(\sum_{i=1}^{n} a_i\right)^2 - \sum_{i=1}^{n} a_i^2\right] + m \hskip .2in
\mbox{ and } \hskip .2in DV(v)=\left(\sum_{i=1}^{n} a_i\right)-a_j+\epsilon_j
\mbox{ if } v \in V_j,$$
where $\epsilon_j=1$ if $a_j=2$ and $\epsilon_j=0$ otherwise.
\end{proposition}
\begin{proof}
Since $\Delta(G) < |V(G)|-1$, $\gamma(G)>1$. Any two vertices from
different partite sets form a dominating set, so $\gamma(G)=2$. A
pair of vertices from the same partite set $V_j$ dominates $V(G)$
if and only if it covers all of $V_j$, i.e., if and only if
$a_j=2$. Hence, for $v \in V_j$, the $\gamma(G)$-sets containing
$v$ are the pairs $\{v,u\}$ with $u \notin V_j$, together with the
set $V_j$ itself when $a_j=2$. This gives
\begin{equation}\label{DV for n-partite}
DV(v)=\left(\sum_{i=1}^{n} a_i\right)-a_j+\epsilon_j.
\end{equation}
From Observation \ref{observation} and (\ref{DV for n-partite}),
writing $s=\sum_{i=1}^{n} a_i$, we have
$$2\tau(G)=\sum_{j=1}^{n}\sum_{v\in V_j} DV(v)
=\sum_{j=1}^{n}\left(a_j s-a_j^{2}+a_j\epsilon_j\right)
=s^2-\sum_{j=1}^{n} a_j^2+2m,$$
since $a_j\epsilon_j=2\epsilon_j$; thus the formula for $\tau(G)$
follows. (As a check, $K_{2,2}=C_4$ gives $\tau(G)=4+2=6$ and
$DV(v)=3$, in agreement with the examples in the section on
cycles.)\hfill
\end{proof}
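For part sizes of at least $3$ (where no partite set can dominate on its own, so $\tau(G)=\frac{1}{2}[s^2-\sum_i a_i^2]$ and $DV(v)=s-a_j$), the counts are easy to confirm by brute force. Below is our own sketch for the arbitrarily chosen $K_{3,3,4}$; the helper `gamma_sets` is our name.

```python
from itertools import combinations

def gamma_sets(n, edges):
    """All gamma(G)-sets of the graph on vertices 1..n, by brute force."""
    verts = range(1, n + 1)
    closed = {v: {v} for v in verts}
    for u, w in edges:
        closed[u].add(w)
        closed[w].add(u)
    full = set(verts)
    for size in range(1, n + 1):
        found = [set(S) for S in combinations(verts, size)
                 if set().union(*(closed[v] for v in S)) == full]
        if found:
            return found
    return []

sizes = [3, 3, 4]                       # part sizes, each >= 3 (our example)
parts, edges, nxt = [], [], 1
for a in sizes:
    parts.append(list(range(nxt, nxt + a)))
    nxt += a
n = nxt - 1
for P, Q in combinations(parts, 2):     # complete multipartite: all cross edges
    edges += [(p, q) for p in P for q in Q]

sets = gamma_sets(n, edges)
s = sum(sizes)
print(len(sets) == (s * s - sum(a * a for a in sizes)) // 2)   # True: tau
print(all(sum(u in S for S in sets) == s - a
          for P, a in zip(parts, sizes) for u in P))           # True: DV
```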
\begin{proposition}\label{n-partite2}
Let $G=K_{a_1,a_2, \ldots, a_n}$ be a complete $n$-partite graph
such that $a_i=1$ for some $i$, say $a_j=1$ for $j=1,2,\ldots, k$,
where $1 \le k \le n$. Then $\tau(G)=k$ and
\begin{equation*} DV(v)= \left\{
\begin{array}{ll}
1 & \mbox{ if } v \in V_j \ (1 \le j \le \ k)\\
0 & \mbox{ if } v \in V_j \ (k+1 \le j \le n) .
\end{array} \right.
\end{equation*}
\end{proposition}
\begin{proof}
Since $\Delta(G)=|V(G)|-1$, by Observation \ref{degree n-1},
$\gamma(G)=1$ and $DV(v)$ follows. By Observation
\ref{observation}, together with $\gamma(G)=1$, we have $\tau(G)=\sum_{v
\in V(G)}DV_G(v)=k$.\hfill
\end{proof}
If $a_i=1$ for each $i$ ($1\leq i\leq n$), then $G=K_n$. As an
immediate consequence of Proposition \ref{n-partite2}, we have the
following.
\begin{corollary}
If $G=K_n$ ($n \ge 1$), then $\tau(G)= n$ and $DV(v)=1$ for each
$v \in V(K_n)$.
\end{corollary}
If $n=2$, then $G=K_{a_1, a_2}$ is a complete bipartite graph.
\begin{corollary}
If $G=K_{a_1, a_2}$, then
\begin{equation*}
\tau(G)= \left\{
\begin{array}{ll}
a_1 a_2 + |\{i : a_i=2\}| & \mbox{ if } a_1, a_2 \ge 2 \\
2 & \mbox{ if } a_1=a_2=1\\
1 & \mbox{ if } \{a_1, a_2\} = \{1,x\}, \mbox{ where }x>1 .
\end{array} \right.
\end{equation*}
If $a_1, a_2 \ge 2$, then
\begin{equation*}
DV(v)= \left\{
\begin{array}{ll}
a_2 + \epsilon_1 & \mbox{ if } v \in V_1 \\
a_1 + \epsilon_2 & \mbox{ if } v \in V_2 ,
\end{array} \right.
\end{equation*}
where $\epsilon_i=1$ if $a_i=2$ and $\epsilon_i=0$ otherwise.
If $a_1=a_2=1$, $DV(v)=1$ for any $v$ in $K_{1,1}$.
If $\{a_1, a_2\} = \{1,x\}$ with $x>1$, say $a_1=1$ and $a_2=x$,
then
\begin{equation*} DV(v)= \left\{
\begin{array}{ll}
1 & \mbox{ if } v \in V_1 \\
0 & \mbox{ if } v \in V_2 .
\end{array} \right.
\end{equation*}
\end{corollary}
\section{Domination value on cycles}
Let the vertices of the cycle $C_n$ be labeled $1$ through $n$
consecutively in counter-clockwise order, where $n \ge 3$. Observe
that the domination value is constant on the vertices of $C_n$,
for each $n$, by vertex-transitivity. Recall that
$\gamma(C_n)=\lceil \frac{n}{3} \rceil$ for $n \ge 3$ (see
p.364, \cite{CZ}).\\
\noindent {\bf Examples.} (a) $DM(C_4)=\{\{1,
2\}, \{1,3\}, \{1,4\}, \{2, 3\}, \{2, 4\}, \{3, 4\}\}$ since $\gamma(C_4)=2$;
so $\tau(C_4)=6$ and $DV(i)=3$ for each $i \in V(C_4)$.\\
(b) $\gamma(C_6)=2$, $DM(C_6)=\{\{1, 4\}, \{2, 5\}, \{3, 6\} \}$;
so $\tau(C_6)=3$ and $DV(i)=1$ for each $i \in V(C_6)$.\\
\begin{theorem}\label{theorem on cycles}
For $n \ge 3$,
\begin{equation*}
\tau(C_n) = \left\{
\begin{array}{lr}
\ 3 & \mbox{ if } n \equiv 0 \mbox{ (mod 3)} \ \\
\ n(1+\frac{1}{2} \lfloor \frac{n}{3} \rfloor) & \mbox{ if } n \equiv 1 \mbox{ (mod 3)} \ \\
\ n & \mbox{ if } n \equiv 2 \mbox{ (mod 3)} .
\end{array} \right.
\end{equation*}
\end{theorem}
\begin{proof}
First, let $n=3k$, where $k \ge 1$. Here $\gamma(C_{n})=k$; a
$\gamma(C_{n})$-set $\Gamma$ comprises $k$ $K_1$'s and $\Gamma$ is
fixed by the choice of the first $K_1$. There is exactly one
$\gamma(C_{n})$-set containing the vertex $1$, and there are two
$\gamma(C_{n})$-sets omitting the vertex $1$, namely the one
containing the vertex $2$ and the one containing the vertex $n$.
Thus $\tau(C_{n})=3$.\\
Second, let $n=3k+1$, where $k \ge 1$. Here $\gamma(C_{n})=k+1$; a
$\gamma(C_{n})$-set $\Gamma$ is constituted in exactly one of the
following two ways: 1) $\Gamma$ comprises $(k-1)$ $K_1$'s and one
$K_2$; 2) $\Gamma$ comprises $(k+1)$ $K_1$'s. \\
\emph{Case 1) $\langle \Gamma \rangle \cong (k-1)K_1 \cup K_2$:} Note that
$\Gamma$ is fixed by the choice of the single $K_2$. Choosing a
$K_2$ is the same as choosing its initial vertex
in the counter-clockwise order. Thus $\tau=3k+1$.\\
\emph{Case 2) $\langle \Gamma \rangle \cong (k+1)K_1$:} Note that, since each
$K_1$ dominates three vertices, there are exactly two vertices,
say $x$ and $y$, each of which is adjacent to two distinct $K_1$'s
in $\Gamma$. Moreover, $\Gamma$ is fixed by the placements of $x$ and
$y$. There are $n=3k+1$ ways of choosing $x$. Consider the
$P_{3k-2}$ (a sequence of $3k-2$ slots) obtained as a result of
cutting from $C_{n}$ the $P_3$ centered about $x$. Vertex $y$ may
be placed in the first slot of any of the
$\lceil\frac{3k-2}{3}\rceil=k$ subintervals of the $P_{3k-2}$. As
the order of selecting the two vertices
$x$ and $y$ is immaterial, $\tau=\frac{(3k+1)k}{2}$. \\
Summing over the two disjoint cases, we get
$$\tau(C_{n})=(3k+1)+\frac{(3k+1)k}{2}=(3k+1)\left(1+\frac{k}{2}\right)= n \left(1+\frac{1}{2} \left\lfloor\frac{n}{3} \right\rfloor \right).$$
Finally, let $n=3k+2$, where $k \ge 1$. Here $\gamma(C_{n})=k+1$;
a $\gamma(C_{n})$-set $\Gamma$ comprises only $K_1$'s and is
fixed by the placement of the only vertex which is adjacent to two
distinct $K_1$'s in $\Gamma$. Thus $\tau(C_{n})=n$. \hfill
\end{proof}
\begin{corollary}
Let $v\in V(C_n)$, where $n \ge 3$. Then
\begin{equation*}
DV(v) = \left\{
\begin{array}{lr}
\ 1 & \mbox{ if } n \equiv 0 \mbox{ (mod 3)} \ \\
\ \frac{1}{2} \lceil\frac{n}{3}\rceil (1+ \lceil\frac{n}{3}\rceil) & \mbox{ if } n \equiv 1 \mbox{ (mod 3)} \ \\
\ \lceil\frac{n}{3}\rceil & \mbox{ if } n \equiv 2 \mbox{ (mod 3) }.
\end{array} \right.
\end{equation*}
\end{corollary}
\begin{proof}
It follows by Observation~\ref{observation}, Observation~\ref{observation1}, and Theorem~\ref{theorem on cycles}.~\hfill
\end{proof}
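The closed form for $\tau(C_n)$ is easy to confirm for small $n$ by exhaustive search. The sketch below is ours (the helper `gamma_sets` and `tau_cycle` are our names); it checks $3 \le n \le 10$ and also confirms that $DV$ is constant on $V(C_n)$.

```python
from itertools import combinations

def gamma_sets(n, edges):
    """All gamma(G)-sets of the graph on vertices 1..n, by brute force."""
    verts = range(1, n + 1)
    closed = {v: {v} for v in verts}
    for u, w in edges:
        closed[u].add(w)
        closed[w].add(u)
    full = set(verts)
    for size in range(1, n + 1):
        found = [set(S) for S in combinations(verts, size)
                 if set().union(*(closed[v] for v in S)) == full]
        if found:
            return found
    return []

def tau_cycle(n):
    """The theorem's formula for tau(C_n), by the residue of n mod 3."""
    k = n // 3
    return 3 if n % 3 == 0 else n * (k + 2) // 2 if n % 3 == 1 else n

taus = {}
for n in range(3, 11):
    cycle = [(i, i + 1) for i in range(1, n)] + [(n, 1)]
    sets = gamma_sets(n, cycle)
    # DV is constant on V(C_n), by vertex-transitivity.
    assert len({sum(v in S for S in sets) for v in range(1, n + 1)}) == 1
    taus[n] = len(sets)
print(taus)
print(all(taus[n] == tau_cycle(n) for n in taus))   # True
```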
\section{Domination value on paths}
Let the vertices of the path $P_n$ be labeled $1$ through $n$
consecutively. Recall that $\gamma(P_n)=\lceil \frac{n}{3} \rceil$ for $n \ge 2$.\\
\noindent {\bf Examples.} (a) $\gamma(P_4)=2$, $DM(P_4)=\{
\{1,3\}, \{1,4\}, \{2, 3\}, \{2,4\}\}$; so $\tau(P_4)=4$ and
$DV(i)=2$ for each $i \in V(P_4)$.\\
(b) $\gamma(P_5)=2$, $DM(P_5)=\{\{1, 4\}, \{2,4\}, \{2,5\}\}$;
so $\tau(P_5)=3$, and
\begin{equation*}
DV(i)= \left\{
\begin{array}{ll}
1 & \mbox{ if } i =1,5\\
2 & \mbox{ if } i=2,4\\
0 & \mbox{ if } i =3 .
\end{array} \right.
\end{equation*}
\noindent\textbf{Remark.} Since $P_n \subset C_n$ with the same vertex set,
by Proposition \ref{subgraph-tau}, we have $\tau(P_n) \le
\tau(C_n)$ for $n \ge 3$, as one can verify from the theorem
below.
\begin{theorem}\label{theorem on paths}
For $n \ge 2$,
\begin{equation*} \tau(P_n) = \left\{
\begin{array}{lr}
\ 1 & \mbox{ if } n \equiv 0 \mbox{ (mod 3)} \ \\
\ n+ \frac{1}{2}\lfloor \frac{n}{3} \rfloor (\lfloor \frac{n}{3}\rfloor-1) & \mbox{ if } n \equiv 1 \mbox{ (mod 3)} \ \\
\ 2+ \lfloor \frac{n}{3} \rfloor & \mbox{ if } n \equiv 2 \mbox{
(mod 3)} .
\end{array} \right.
\end{equation*}
\end{theorem}
\begin{proof}
First, let $n=3k$, where $k \ge 1$. Then $\gamma(P_n)=k$ and a
$\gamma(P_n)$-set $\Gamma$ comprises $k$ $K_1$'s. In this case,
each vertex in $\Gamma$ dominates three vertices, and no vertex of
$P_{n}$ is dominated by more than one vertex. Thus none of the
end-vertices of $P_n$ belongs to any $\Gamma$, which contains and
is fixed by the vertex $2$; hence $\tau(P_n)=1$.\\
Second, let $n=3k+1$, where $k \ge 1$. Here $\gamma(P_n)=k+1$; a
$\gamma(P_n)$-set $\Gamma$ is constituted in exactly one of the
following two ways: 1) $\Gamma$ comprises $(k-1)$ $K_1$'s and one
$K_2$; 2) $\Gamma$ comprises $(k+1)$ $K_1$'s.\\
\emph{Case 1) $\langle \Gamma \rangle \cong (k-1)K_1 \cup K_2$, where $k \ge
1$:} Note that $\Gamma$ is fixed by the placement of the single
$K_2$, and none of the end-vertices belong to any $\Gamma$, as
each component with cardinality $c$ in $\langle \Gamma \rangle$ dominates $c+2$
vertices. The initial vertex of the $K_2$ may be placed in any of
the $k$ slots congruent to $2$ (mod $3$). Thus $\tau=k$.\\
\emph{Case 2) $\langle \Gamma \rangle \cong (k+1)K_1$, where $k \ge 1$:} A
$\Gamma$ containing both end-vertices of the path is unique (no
vertex is doubly dominated). The number of $\Gamma$ containing
exactly one of the end-vertices (one doubly dominated vertex) is
$2{k \choose 1}=2k$. The number of $\Gamma$ containing none of the
end-vertices (two doubly dominated vertices) is ${k \choose 2}=
\frac{k(k-1)}{2}$. Thus
$\tau=1+2k+\frac{k(k-1)}{2}$.\\
Summing over the two disjoint cases, we get
$$\tau(P_n)=k+\left(1+2k+\frac{k(k-1)}{2} \right)=3k+1+ \frac{k(k-1)}{2}=
n+\frac{1}{2} \left\lfloor\frac{n}{3}\right\rfloor
\left(\left\lfloor \frac{n}{3}\right\rfloor-1 \right).$$
Finally, let $n=3k+2$, where $k \ge 0$. Here $\gamma(P_n)=k+1$,
and a $\gamma(P_n)$-set $\Gamma$ comprises $(k+1)$ $K_1$'s. Note
that there's no $\Gamma$ containing both end-vertices of $P_n$.
The number of $\Gamma$ containing exactly one of the end-vertices
(no doubly dominated vertex) of the path is two. The number of
$\Gamma$ containing neither of the end-vertices (one doubly
dominated vertex) is $k$. Summing the two disjoint cases, we have
$\tau(P_n)=2+k=2+ \lfloor \frac{n}{3}\rfloor$. \hfill
\end{proof}
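As with cycles, the closed form for $\tau(P_n)$ can be confirmed for small $n$ by brute force; the sketch below is ours (`gamma_sets` and `tau_path` are our names) and checks $2 \le n \le 11$.

```python
from itertools import combinations

def gamma_sets(n, edges):
    """All gamma(G)-sets of the graph on vertices 1..n, by brute force."""
    verts = range(1, n + 1)
    closed = {v: {v} for v in verts}
    for u, w in edges:
        closed[u].add(w)
        closed[w].add(u)
    full = set(verts)
    for size in range(1, n + 1):
        found = [set(S) for S in combinations(verts, size)
                 if set().union(*(closed[v] for v in S)) == full]
        if found:
            return found
    return []

def tau_path(n):
    """The theorem's formula for tau(P_n), by the residue of n mod 3."""
    k = n // 3
    return 1 if n % 3 == 0 else n + k * (k - 1) // 2 if n % 3 == 1 else 2 + k

taus = {n: len(gamma_sets(n, [(i, i + 1) for i in range(1, n)]))
        for n in range(2, 12)}
print(taus)
print(all(taus[n] == tau_path(n) for n in taus))   # True
```

One can also observe $\tau(P_n) \le \tau(C_n)$ in these values, as the remark above predicts.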
For the domination value of a vertex on $P_n$, note that
$DV(v)=DV(n+1-v)$ for $1 \le v \le n$, as $P_n$ admits the obvious
automorphism carrying $v$ to $n+1-v$. We now classify the
domination values on paths. First, as an immediate consequence of
Theorem \ref{theorem on paths}, we have the following.
\begin{corollary} \label{path on 3k}
Let $v \in V(P_{3k})$, where $k \ge 1$. Then
\begin{equation*}
DV(v)= \left\{
\begin{array}{ll}
0 & \mbox{ if } v \equiv 0,1 \mbox{ (mod 3)} \\
1 & \mbox{ if } v \equiv 2 \mbox{ (mod 3)} .
\end{array} \right.
\end{equation*}
\end{corollary}
\begin{proposition}
Let $v \in V(P_{3k+1})$, where $k \ge 1$. Write $v=3q+r$, where
$q$ and $r$ are non-negative integers such that $0 \le r < 3$.
Then, noting $\tau(P_{3k+1})=\frac{1}{2}(k^2+5k+2)$, we have
\begin{equation*} DV(v)= \left\{
\begin{array}{ll}
\frac{1}{2}q(q+3) & \mbox{ if } v \equiv 0 \mbox{ (mod 3) } \ \\
(q+1)(k-q+1) & \mbox{ if } v \equiv 1 \mbox{ (mod 3) } \ \\
\frac{1}{2}(k-q)(k-q+3) & \mbox{ if } v \equiv 2 \mbox{ (mod 3) }.
\end{array} \right.
\end{equation*}
\end{proposition}
\begin{proof}
Let $\Gamma$ be a $\gamma(P_{3k+1})$-set for $k \ge 1$. We
consider two cases.\\
\textit{Case 1) $\langle \Gamma \rangle \cong (k-1)K_1 \cup K_2$, where $k \ge
1$:} Denote by $DV^1(v)$ the number of such $\Gamma$'s containing
$v$. Noting $\tau=k$ in this case, we have
\begin{equation}\label{path 3k+1 case1}
DV^1(v)= \left\{
\begin{array}{ll}
q & \mbox{ if } v \equiv 0 \mbox{ (mod 3)} \ \\
0 & \mbox{ if } v \equiv 1 \mbox{ (mod 3)} \ \\
k-q & \mbox{ if } v \equiv 2 \mbox{ (mod 3)} .
\end{array} \right.
\end{equation}
We proceed by induction on $k$. One can easily check (\ref{path 3k+1
case1}) for $k=1$. Assume that (\ref{path 3k+1 case1}) holds for
$G=P_{3k+1}$ and consider $G'=P_{3k+4}$. First, notice that each
$\Gamma$ of the $k$ $\gamma(P_{3k+1})$-sets of $G$ induces a
$\gamma(P_{3k+4})$-set $\Gamma'=\Gamma \cup \{3k+3\}$ of $G'$.
Additionally, $G'$ has the $\gamma(P_{3k+4})$-set $\Gamma^*$ that
contains and is determined by $\{3k+2, 3k+3\}$, which does not
come from any $\gamma(P_{3k+1})$-set of $G$. The presence of
$\Gamma^*$ implies that $DV^1_{G'}(v)=DV^1_G(v)+1$ for $v \equiv
2$ (mod 3), where $v \le 3k+1$. Clearly, $DV^1_{G'}(3k+2)=1$,
$DV^1_{G'}(3k+3)=k+1$, and $DV^1_{G'}(3k+4)=0$.\\
\textit{Case 2) $\langle \Gamma \rangle \cong (k+1)K_1$, where $k \ge 1$:}
Denote by $DV^2(v)$ the number of such $\Gamma$'s containing $v$.
First, suppose both end-vertices belong to the unique $\Gamma$ and
denote by $DV^{2,1}(v)$ the number of such $\Gamma$'s containing
$v$. Then we have
\begin{equation}\label{Case 3-1}
DV^{2,1}(v)= \left\{
\begin{array}{ll}
0 & \mbox{ if } v \equiv 0, 2 \mbox{ (mod 3)} \ \\
1 & \mbox{ if } v \equiv 1 \mbox{ (mod 3)} .
\end{array} \right.
\end{equation}
Second, suppose exactly one end-vertex belongs to each $\Gamma$;
denote by $DV^{2,2}(v)$ the number of such $\Gamma$'s containing
$v$. Then, noting $\tau=2k$ in this case, we have
\begin{equation}\label{Case 3-2}
DV^{2,2}(v)= \left\{
\begin{array}{ll}
q & \mbox{ if } v \equiv 0 \mbox{ (mod 3)} \ \\
k & \mbox{ if } v \equiv 1 \mbox{ (mod 3)} \ \\
k-q & \mbox{ if } v \equiv 2 \mbox{ (mod 3)} .
\end{array} \right.
\end{equation}
We proceed by induction on $k$. One can easily check (\ref{Case
3-2}) for $k=1$. Assume that (\ref{Case 3-2}) holds for
$G=P_{3k+1}$ and consider $G'=P_{3k+4}$. First, notice that each
$\Gamma$ of the $k$ $\gamma(P_{3k+1})$-sets of $G$ containing the
left end-vertex $1$ induces a $\gamma(P_{3k+4})$-set
$\Gamma'=\Gamma \cup \{3k+3\}$ of $G'$. Second, each $\Gamma$ of
$k$ $\gamma(P_{3k+1})$-sets of $G$ containing the right end-vertex
$3k+1$ induces a $\gamma(P_{3k+4})$-set $\Gamma'=\Gamma \cup
\{3k+4\}$ of $G'$. Third, a $\gamma(P_{3k+1})$-set $\Gamma$ of $G$
containing $1$ and $3k+1$ (both left and right end-vertices of
$G$) induces a $\gamma(P_{3k+4})$-set $\Gamma^{*1}=\Gamma \cup
\{3k+3\}$ of $G'$ (making $3k+2$ the only doubly dominated vertex
in $G'$). Additionally, $\Gamma^{*2}=\{v \in V(P_{3k+1}) \mid v \equiv 2 \mbox{ (mod 3)}\} \cup \{3k+2, 3k+4\}$ is a
$\gamma(P_{3k+4})$-set for $G'$, which does not come from any
$\gamma(P_{3k+1})$-set of $G$. The presence of $\Gamma^{*1}$ and
$\Gamma^{*2}$ implies that
\begin{equation*}
DV^{2,2}_{G'}(v)= \left\{
\begin{array}{ll}
DV^{2,2}_G(v) & \mbox{ if } v \equiv 0 \mbox{ (mod 3)} \ \\
DV^{2,2}_G(v)+1 & \mbox{ if } v \equiv 1,2 \mbox{ (mod 3)}
\end{array} \right.
\end{equation*}
for $v \le 3k+1$.
Clearly, $DV^{2,2}_{G'}(3k+2)=1$, $DV^{2,2}_{G'}(3k+3)=k+1$, and $DV^{2,2}_{G'}(3k+4)=k+1$.\\
Third, suppose no end-vertex belongs to $\Gamma$; denote by
$DV^{2,3}(v)$ the number of such $\Gamma$'s containing $v$. Then,
noting $\tau= {k\choose 2}$ in this case and setting ${a\choose
b}=0$ when $a<b$, we have
\begin{equation} \label{Case 3-3}
DV^{2,3}(v)= \left\{
\begin{array}{ll}
\frac{1}{2}(q-1)q & \mbox{ if } v \equiv 0 \mbox{ (mod 3)} \ \\
q(k-q) & \mbox{ if } v \equiv 1 \mbox{ (mod 3)} \ \\
\frac{1}{2}(k-q-1)(k-q) & \mbox{ if } v \equiv 2\mbox{ (mod 3)} .
\end{array} \right.
\end{equation}
Again, we proceed by induction on $k$. Since $DV^{2,3}(v)=0$ for
each $v \in V(P_{4})$, we consider $k \ge 2$. One can easily check
(\ref{Case 3-3}) for the base, $k=2$. Assume that (\ref{Case 3-3})
holds for $G=P_{3k+1}$ and consider $G'=P_{3k+4}$, where $k \ge
2. First, notice that each $\Gamma$ of the ${k\choose 2}$
$\gamma(P_{3k+1})$-sets of $G$ containing neither end-vertex of
$G$ induces a $\gamma(P_{3k+4})$-set $\Gamma'=\Gamma \cup
\{3k+3\}$ of $G'$. Additionally, each $\Gamma_r$ of the $k$
$\gamma(P_{3k+1})$-sets of $G$ containing the right end-vertex
$3k+1$ of $G$ induces a $\gamma(P_{3k+4})$-set $\Gamma_r'=\Gamma_r
\cup \{3k+3\}$ of $G'$ (making $3k+2$ one of the two
doubly-dominated vertices in $G'$): If we denote by $DV^r_G(v)$
the number of such $\Gamma_r$'s containing $v$ in $G$, then one
can readily check
\begin{equation*}
DV^r_G(v)= \left\{
\begin{array}{ll}
0 & \mbox{ if } v \equiv 0 \mbox{ (mod 3)} \ \\
q & \mbox{ if } v \equiv 1 \mbox{ (mod 3)} \ \\
k-q & \mbox{ if } v \equiv 2\mbox{ (mod 3)} ,
\end{array} \right.
\end{equation*}
again by induction on $k$. Thus, the presence of $\Gamma'_r$
implies $DV_{G'}^{2,3}(v)=DV_G^{2,3}(v)+DV^r_G(v)$ for $v \le
3k+1$. Clearly, $DV^{2,3}_{G'}(3k+2)=0=DV^{2,3}_{G'}(3k+4)$
and $DV^{2,3}_{G'}(3k+3)= {k\choose 2}+k=\frac{1}{2}k(k+1)$.\\
Summing over the three disjoint cases (\ref{Case 3-1}), (\ref{Case
3-2}), and (\ref{Case 3-3}) for $\langle \Gamma \rangle \cong (k+1)K_1$, we have
\begin{equation}\label{path 3k+1 case3}
DV^2(v)= \left\{
\begin{array}{ll}
q + \frac{1}{2}(q-1)q& \mbox{ if } v \equiv 0 \mbox{ (mod 3)} \ \\
1+k+q(k-q) & \mbox{ if } v \equiv 1 \mbox{ (mod 3)} \ \\
k-q+ \frac{1}{2}(k-q-1)(k-q)& \mbox{ if } v \equiv 2 \mbox{ (mod
3)} .
\end{array} \right.
\end{equation}
Now, by summing over (\ref{path 3k+1 case1}) and (\ref{path 3k+1
case3}), i.e., $DV(v)=DV^1(v)+DV^2(v)$, we obtain the formula
claimed in this proposition.~\hfill
\end{proof}
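The closed form above can be checked against brute force. Our sketch below (`gamma_sets` and `dv_3k1` are our names) verifies it on $P_7$ and $P_{10}$, i.e., $k=2,3$.

```python
from itertools import combinations

def gamma_sets(n, edges):
    """All gamma(G)-sets of the graph on vertices 1..n, by brute force."""
    verts = range(1, n + 1)
    closed = {v: {v} for v in verts}
    for u, w in edges:
        closed[u].add(w)
        closed[w].add(u)
    full = set(verts)
    for size in range(1, n + 1):
        found = [set(S) for S in combinations(verts, size)
                 if set().union(*(closed[v] for v in S)) == full]
        if found:
            return found
    return []

def dv_3k1(v, k):
    """The proposition's formula for DV(v) on P_{3k+1}, with v = 3q + r."""
    q, r = divmod(v, 3)
    if r == 0:
        return q * (q + 3) // 2
    if r == 1:
        return (q + 1) * (k - q + 1)
    return (k - q) * (k - q + 3) // 2

for k in (2, 3):
    n = 3 * k + 1
    sets = gamma_sets(n, [(i, i + 1) for i in range(1, n)])
    dv = {v: sum(v in S for S in sets) for v in range(1, n + 1)}
    print(all(dv[v] == dv_3k1(v, k) for v in dv))   # True for each k
```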
\begin{proposition}
Let $v \in V(P_{3k+2})$, where $k \ge 0$. Write $v=3q+r$, where
$q$ and $r$ are non-negative integers such that $0 \le r < 3$.
Then, noting $\tau(P_{3k+2})=k+2$, we have
\begin{equation*}
DV(v)= \left\{
\begin{array}{ll}
0 & \mbox{ if } v \equiv 0 \mbox{ (mod 3) } \ \\
1+q & \mbox{ if } v \equiv 1 \mbox{ (mod 3) } \ \\
k+1-q & \mbox{ if } v \equiv 2 \mbox{ (mod 3) } .
\end{array} \right.
\end{equation*}
\end{proposition}
\begin{proof}
Let $\Gamma$ be a $\gamma(P_{3k+2})$-set for $k \ge 0$. Then $\langle
\Gamma \rangle \cong (k+1) K_1$. Note that no $\Gamma$ contains both
end-vertices of $P_{3k+2}$.\\
First, suppose $\Gamma$ contains
exactly one end-vertex, and denote by $DV'(v)$ the number of such
$\Gamma$'s containing $v$. Noting $\tau=2$ in this case, for $v
\in V(P_{3k+2})$, we have
\begin{equation}\label{path on 3k+2 case1}
DV'(v)= \left\{
\begin{array}{ll}
0 & \mbox{ if } v \equiv 0 \mbox{ (mod 3) } \ \\
1 & \mbox{ if } v \equiv 1,2 \mbox{ (mod 3) }.
\end{array} \right.
\end{equation}
Next, suppose $\Gamma$ contains no end-vertices (thus $k \ge 1$),
and denote by $DV''(v)$ the number of such $\Gamma$'s containing
$v$. Noting $\tau=k$ in this case, we have
\begin{equation}\label{path on 3k+2 case2}
DV''(v)= \left\{
\begin{array}{ll}
0 & \mbox{ if } v \equiv 0 \mbox{ (mod 3) } \ \\
q & \mbox{ if } v \equiv 1 \mbox{ (mod 3) } \ \\
k-q & \mbox{ if } v \equiv 2 \mbox{ (mod 3) } .
\end{array} \right.
\end{equation}
We prove (\ref{path on 3k+2 case2}) by induction on $k$. One can
easily check the base case $k=1$. Assume that (\ref{path on 3k+2
case2}) holds for $G=P_{3k+2}$ and consider $G'=P_{3k+5}$. First,
notice that each $\Gamma$ of the $k$ $\gamma(P_{3k+2})$-sets
containing neither end-vertex of $G$ induces a
$\gamma(P_{3k+5})$-set $\Gamma'=\Gamma \cup \{3k+4\}$.
Additionally, the only $\gamma(P_{3k+2})$-set $\Gamma$ of $G$
containing the right-end vertex $3k+2$ of $G$ induces a
$\gamma(P_{3k+5})$-set $\Gamma^\star=\Gamma \cup \{3k+4\}$ of $G'$
(making $3k+3$ the only doubly-dominated vertex). The presence of
$\Gamma^\star$ implies that
\begin{equation*}
DV_{G'}''(v)= \left\{
\begin{array}{ll}
DV_G''(v) & \mbox{ if } v \equiv 0, 1 \mbox{ (mod 3) } \\
DV_G''(v)+1 & \mbox{ if } v \equiv 2 \mbox{ (mod 3) }
\end{array} \right.
\end{equation*}
for $v \le 3k+2$. Clearly, $DV_{G'}''(3k+3)=0=DV_{G'}''(3k+5)$ and
$DV_{G'}''(3k+4)=k+1$.\\
Now, by summing over the two disjoint cases (\ref{path on 3k+2
case1}) and (\ref{path on 3k+2 case2}), i.e.,
$DV(v)=DV'(v)+DV''(v)$, we obtain the formula claimed in this
proposition. \hfill
\end{proof}
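The closed formulas above can be checked against a direct enumeration. The following Python sketch (ours, not part of the paper; the function names are assumptions) lists all minimum dominating sets of $P_{3k+2}$ by brute force and compares the resulting domination values with the formula of the last proposition for small $k$.

```python
from itertools import combinations
from math import ceil

def min_dominating_sets(n):
    """All minimum dominating sets of the path P_n on vertices 1..n."""
    def dominates(D):
        S = set(D)
        return all(v in S or v - 1 in S or v + 1 in S for v in range(1, n + 1))
    gamma = ceil(n / 3)  # known: gamma(P_n) = ceil(n/3)
    return [D for D in combinations(range(1, n + 1), gamma) if dominates(D)]

def dv_bruteforce(n, v):
    """Domination value DV(v): number of gamma-sets containing v."""
    return sum(v in D for D in min_dominating_sets(n))

def dv_formula(k, v):
    """The proposition's formula for P_{3k+2}, writing v = 3q + r."""
    q, r = divmod(v, 3)
    return 0 if r == 0 else (1 + q if r == 1 else k + 1 - q)

for k in range(4):
    n = 3 * k + 2
    assert all(dv_bruteforce(n, v) == dv_formula(k, v) for v in range(1, n + 1))
```

For instance, $P_5$ has exactly three $\gamma$-sets, $\{1,4\}$, $\{2,4\}$, and $\{2,5\}$, so $DV(2)=2$ as the formula predicts.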
\textit{Acknowledgement.} The author thanks Cong X. Kang for
suggesting the concept of \emph{domination value} and his valuable
comments and suggestions. The author also thanks the referee for a couple of helpful comments.
\nocite{*}
\bibliographystyle{amsplain}
| {
"timestamp": "2012-03-02T02:03:38",
"yymm": "1109",
"arxiv_id": "1109.6277",
"language": "en",
"url": "https://arxiv.org/abs/1109.6277",
"abstract": "A set $D \\subseteq V(G)$ is a \\emph{dominating set} of $G$ if every vertex not in $D$ is adjacent to at least one vertex in $D$. A dominating set of $G$ of minimum cardinality is called a $\\gamma(G)$-set. For each vertex $v \\in V(G)$, we define the \\emph{domination value} of $v$ to be the number of $\\gamma(G)$-sets to which $v$ belongs. In this paper, we study some basic properties of the domination value function, thus initiating \\emph{a local study of domination} in graphs. Further, we characterize domination value for the Petersen graph, complete $n$-partite graphs, cycles, and paths.",
"subjects": "Combinatorics (math.CO)",
"title": "Domination Value in Graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9814534316905262,
"lm_q2_score": 0.8633916205190225,
"lm_q1q2_score": 0.8473786688512392
} |
https://arxiv.org/abs/1510.04471 | Nearest points and delta convex functions in Banach spaces | Given a closed set $C$ in a Banach space $(X, \|\cdot\|)$, a point $x\in X$ is said to have a nearest point in $C$ if there exists $z\in C$ such that $d_C(x) =\|x-z\|$, where $d_C$ is the distance of $x$ from $C$. We shortly survey the problem of studying how large is the set of points in $X$ which have nearest points in $C$. We then discuss the topic of delta-convex functions and how it is related to finding nearest points. | \section{Nearest points in Banach spaces}
\subsection{Background}\label{sec intro}
Let $(X, \|\cdot\|)$ be a real Banach space, and let $C\subseteq X$ be a non-empty closed set. Given $x\in X$, its distance from $C$ is given by
\[d_C(x) = \inf_{y\in C} \|x-y\|.\]
If there exists $z\in C$ with $d_C(x) = \|x-z\|$, we say that $x$ has a \emph{nearest point} in $C$. Let also
\[N(C) = \big\{x\in X: x \text{ has a nearest point in $C$ }\big\}.\]
One can then ask questions about the structure of the set $N(C)$. This question has been studied in \cite{Ste63, Lau78, Kon80, Zaj83, BF89, DMP91, Dud04, RZ11, RZ12} to name just a few. More specifically, the following questions are at the heart of this note:
\medskip
\begin{center}
\emph{Given a nonempty closed set $C\subseteq X$, how large is the set $N(C)$? When is it non-empty?}
\end{center}
\medskip
One way to make the notion of ``large" precise is to consider sets which are large in the set-theoretic sense, such as dense $G_{\delta}$ sets. We begin with a few definitions.
\begin{defin}
If $N(C) = X$, i.e., every point in $X$ has a nearest point in $C$, then $C$ is said to be proximinal. If $N(C)$ contains a dense $G_{\delta}$ set, then $C$ is said to be almost proximinal.
\end{defin}
In passing we recall that if every point of $X$ has a unique nearest point in $C$, then $C$ is said to be a \emph{Chebyshev set}. It has been conjectured for over half a century that in Hilbert space
Chebyshev sets are necessarily convex, but this is only proven for weakly closed sets \cite{BV10}. See also \cite{FM15} for a recent survey on the topic.
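The finite-dimensional picture can be illustrated concretely. In the Python sketch below (an assumed example of ours, not from the text) $C$ is the unit circle in $\mathbb R^2$: every point has a nearest point, so $C$ is proximinal, while the origin has the whole circle as its set of nearest points, so $C$ is not a Chebyshev set.

```python
import math

# C = unit circle {z : |z| = 1} in R^2, a closed non-convex set.
# The nearest point to x != 0 is x/|x|, so d_C(x) = | |x| - 1 |.

def d_C(x):
    """Distance from x to the unit circle."""
    return abs(math.hypot(*x) - 1.0)

def d_C_sampled(x, samples=20000):
    """Brute-force check of d_C by sampling points of the circle."""
    return min(math.hypot(x[0] - math.cos(2 * math.pi * i / samples),
                          x[1] - math.sin(2 * math.pi * i / samples))
               for i in range(samples))

assert abs(d_C((3.0, 4.0)) - 4.0) < 1e-12   # |(3,4)| = 5, so the distance is 4
assert abs(d_C_sampled((3.0, 4.0)) - 4.0) < 1e-6
```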
For example, closed convex sets in reflexive spaces are proximinal, as are closed sets in finite-dimensional spaces; see \cite{BF89}. One can also consider stronger notions of ``large" sets; see Section \ref{sec porous}. First, we need the following definition.
\begin{defin}
A Banach space is said to be a (sequentially) Kadec space if for each sequence $\{x_n\}$ that converges weakly to $x$ with $\lim \|x_n\| = \|x\|$, $\{x_n\}$ converges to $x$ in norm, i.e.,
\[\lim_{n\to \infty} \|x-x_n\| =0.\]
\end{defin}
With the above definitions in hand, the following result holds.
\begin{thm}[Lau \cite{Lau78}, Borwein-Fitzpatrick \cite{BF89}]\label{thm Lau BF}
If $X$ is a reflexive Kadec space and $C\subseteq X$ is closed, then $C$ is almost proximinal.
\end{thm}
The assumptions on $X$ are in fact necessary.
\begin{thm}[Konjagin \cite{Kon80}]\label{thm Kon}
If $X$ is not both Kadec and reflexive, then there exist $C\subseteq X$ closed and $U\subseteq X\setminus C$ open such that no $x\in U$ has a nearest point in $C$.
\end{thm}
It is known that under stronger assumption on $X$ one can obtain stronger results on the set $N(C)$. See Section \ref{sec porous}.
\medskip
\subsection{Fr\'echet sub-differentiability and nearest points}\label{sec diff}
We begin with a definition.
\begin{defin}
Assume that $f: X\to \mathbb R$ is a real valued function with $f(x)$ finite. Then $f$ is said to be Fr\'echet sub-differentiable at $x\in X$ if there exists $x^*\in X^*$ such that
\begin{align}\label{cond subdiff}
\liminf_{y\to 0}\frac{f(x+y)-f(x)-x^*(y)}{\|y\|} \ge 0.
\end{align}
The set of points in $X^*$ that satisfy \eqref{cond subdiff} is denoted by $\partial f(x)$.
\end{defin}
Sub-derivatives have been found to have many applications in approximation theory. See for example \cite{BF89, BZ05, BL06, BV10, Pen13}.
One of the connections between sub-differentiability and the nearest point problem was studied in \cite{BF89}. Given $C\subseteq X$ closed, the following modification of a construction of \cite{Lau78} was introduced.
\begin{align*}
L_n(C) = \Big\{x\in X\setminus C : \exists x^*\in \mathbb S_{X^*} \text{ s.t. }\sup_{\delta>0}~\inf_{z\in C\cap B(x,d_C(x)+\delta)}~x^*(x-z) > \big(1-2^{-n}\big)d_C(x)\Big\},
\end{align*}
where $\mathbb S_{X^*}$ denotes the unit sphere of $X^*$. Also, let
\begin{align*}
L(C) = \bigcap_{n=1}^{\infty}L_n(C).
\end{align*}
The following is known.
\begin{prop}[Borwein-Fitzpatrick \cite{BF89}]\label{prop open}
For every $n\in \mathbb N$, $L_n(C)$ is open. In particular, $L(C)$ is $G_{\delta}$.
\end{prop}
Finally, let
\begin{align*}
\Omega(C) = & \Big\{ x\in X\setminus C : \exists x^*\in \mathbb S_{X^*}, \text{ s.t. } \forall \epsilon >0, \exists \delta>0,
\\ & ~~\quad\inf_{z\in C\cap B(x,d_C(x)+\delta)}x^*(x-z) > \big(1-\epsilon\big)d_C(x)\Big\}.
\end{align*}
While $L(C)$ is $G_{\delta}$ by Proposition \ref{prop open}, under the assumption that $X$ is reflexive, the following is known.
\begin{prop}[Borwein-Fitzpatrick \cite{BF89}]
If $X$ is reflexive then $\Omega(C) = L(C)$. In particular, $\Omega(C)$ is $G_{\delta}$.
\end{prop}
The connection to sub-differentiability is given in the following proposition.
\begin{prop}[Borwein-Fitzpatrick \cite{BF89}]
If $x\in X\setminus C$ and $\partial d_C(x) \neq \emptyset$, then $x\in \Omega(C)$.
\end{prop}
Also, the following result is known.
\begin{thm}[Borwein-Preiss \cite{BP87}]\label{thm var}
If $f$ is lower semicontinuous on a reflexive Banach space, then $f$ is Fr\'echet sub-differentiable on a dense set.
\end{thm}
In fact, Theorem \ref{thm var} holds under a weaker assumption. See \cite{BP87, BF89}.
Since the distance function is lower semicontinuous, it follows that it is sub-differentiable on a dense subset, and therefore, by the above propositions, $\Omega(C)$ is a dense $G_{\delta}$ set. Thus, in order to prove Theorem \ref{thm Lau BF}, it is only left to show that every $x\in \Omega(C)$ has a nearest point in $C$. Indeed, if $\{z_n\}\subseteq C$ is a minimizing sequence, then by extracting a subsequence, assume that $\{z_n\}$ has a weak limit $z\in C$. By the definition of $\Omega(C)$, there exists $x^* \in \mathbb S_{X^*}$ such that
\[\|x-z\| \ge x^*(x-z) = \lim_{n\to \infty} x^*(x-z_n) \ge d_C(x) = \lim_{n\to \infty}\|x-z_n\|.\]
On the other hand, by weak lower semicontinuity of the norm,
\[\lim_{n\to \infty} \|x-z_n\| \ge \|x-z\|,\]
and so $\|x-z\| = \lim\|x-z_n\|$. Since it is known that $\{z_n\}$ converges weakly to $z$, the Kadec property implies that in fact $\{z_n\}$ converges in norm to $z$. Thus $z$ is a nearest point. This completes the proof of Theorem \ref{thm Lau BF}.
This scheme of proof from \cite{BF89} shows that differentiation arguments can be used to prove that $N(C)$ is large.
\medskip
\subsection{Nearest points in non-Kadec spaces}\label{sec non-Kadec}
It was previously mentioned that closed convex sets in reflexive spaces are proximinal. It is also known that non-empty ``Swiss cheese" sets (sets whose complement is a mutually disjoint union of open convex sets) in reflexive spaces are almost proximinal \cite{BF89}. These two examples show that for some classes of closed sets, the Kadec property can be removed. Moreover, one can consider another, weaker, way to ``measure" whether a set $C\subseteq X$ has ``many" nearest points: ask whether the set of nearest points in $C$ to points in $X\setminus C$ is dense in the boundary of $C$. Note that if $C$ is almost proximinal, then nearest points are dense in the boundary. The converse, however, is not true. In \cite{BF89} an example of a non-Kadec reflexive space was constructed in which, for every closed set, the set of nearest points is dense in its boundary. The following general question is still open.
\begin{qst}
Let $(X, \|\cdot\|)$ be a reflexive Banach space and $C\subseteq X$ closed. Is the set of nearest points in $C$ to points in $X\setminus C$ dense in its boundary?
\end{qst}
Relatedly, if the set $C$ is norm closed and bounded in a space with the \emph{Radon-Nikodym property}, as is the case for reflexive spaces, then $N(C)$ is nonempty and is large enough so that $\overline{\rm conv}\, C = \overline{\rm conv}\, N(C)$ \cite{BF89}.
\medskip
\subsection{Porosity and nearest points}\label{sec porous}
As was mentioned in subsection \ref{sec diff}, one can consider stronger notions of ``large" sets. One is the following notion.
\begin{defin}
A set $S\subseteq X$ is said to be porous if there exists $c\in (0,1)$ such that for every $x\in X$ and every $\epsilon>0$, there is a $y \in B(0,\epsilon)\setminus \{0\}$ such that
\[ B(x+y, c\|y\|) \cap S = \emptyset.\]
A set is said to be $\sigma$-porous if it is a countable union of porous sets. Here and in what follows, $B(x,r)$ denotes the closed ball around $x$ with radius $r$.
\end{defin}
See \cite{Zaj05, LPT12} for a more detailed discussion on porous sets. It is known that every $\sigma$-porous set is of the first category, i.e., a countable union of nowhere dense sets. Moreover, it is known that the class of $\sigma$-porous sets is a proper sub-class of the class of first category sets. When $X=\mathbb R^n$, one can show that every $\sigma$-porous set has Lebesgue measure zero. This is not the case for every first category set: $\mathbb R$ can be written as a disjoint union of a set of the first category and a set of Lebesgue measure zero. Hence, the notion of porosity automatically gives a stronger notion of large sets: every set whose complement is $\sigma$-porous contains a dense $G_{\delta}$ set.
\medskip
A Banach space $(X, \|\cdot\|)$ is said to be \emph{uniformly convex} if the function
\begin{align}\label{def uni conv}
\delta(\epsilon) = \inf\left\{ 1-\left\|\frac{x+y}2\right\| ~ : ~ x, y\in \mathbb S_X, \|x-y\| \ge \epsilon \right\},
\end{align}
is strictly positive whenever $\epsilon>0$. Here $\mathbb S_X$ denotes the unit sphere of $X$. In \cite{DMP91} the following was shown.
\begin{thm}[De Blasi-Myjak-Papini \cite{DMP91}]\label{thm DMP}
If $X$ is uniformly convex, then $N(C)$ has a $\sigma$-porous complement.
\end{thm}
In fact, \cite{DMP91} proved a stronger result, namely that for every $x$ outside a $\sigma$-porous set, the minimization problem is \emph{well posed}, i.e., there is a unique minimizer to which every minimizing sequence converges. See also \cite{FP91, RZ11, RZ12} for closely related results in this direction.
The proof of Theorem \ref{thm DMP} builds on ideas developed in \cite{Ste63}. However, it would be interesting to know whether one could use differentiation arguments as in Section \ref{sec diff}. This raises the following question:
\begin{qst}
Can differentiation arguments be used to give an alternative proof of Theorem \ref{thm DMP}?
\end{qst}
More specifically, if one can show that $\partial d_C \neq \emptyset$ outside a $\sigma$-porous set, then by the arguments presented in Section \ref{sec diff}, it would follow that $N(C)$ has a $\sigma$-porous complement. Next, we mention two important results regarding differentiation in Banach spaces.
\begin{thm}[Preiss-Zaj\'i\v{c}ek \cite{PZ84}]\label{thm PZ}
If $X$ has a separable dual and $f:X\to \mathbb R$ is continuous and convex, then $f$ is Fr\'echet differentiable outside a $\sigma$-porous set.
\end{thm}
See also \cite[Sec. 3.3]{LPT12}. Theorem \ref{thm PZ} implies that if, for example, $d_C$ is a linear combination of convex functions (see more on this in Section \ref{sec DC}), then $N(C)$ has a $\sigma$-porous complement. Also, we have the following.
\begin{thm}[C\'uth-Rmoutil \cite{CR13}]\label{thm CR}
If $X$ has a separable dual and $f:X\to \mathbb R$ is Lipschitz, then the set of points where $f$ is Fr\'echet sub-differentiable but not differentiable is $\sigma$-porous.
\end{thm}
Since $d_C$ is 1-Lipschitz, the questions of seeking points of sub-differentiability or points of differentiability are similar. Theorem \ref{thm PZ} and Theorem \ref{thm CR} remain true if we consider $f:A\to \mathbb R$ where $A\subseteq X$ is open and convex.
\bigskip
\section{DC functions and DC sets}\label{sec DC}
\subsection{Background}
\begin{defin}
A function $f:X\to \mathbb R$ is said to be delta-convex, or DC, if it can be written as a difference of two convex functions on $X$.
\end{defin}
This notion was introduced in \cite{Har59} and was later studied by many authors. See for example \cite{KM90, Cep98, Dud01, VZ01, DVZ03, BZ05, Pav05, BB11}. In particular, \cite{BB11} gives a good introduction to this topic. We will discuss here only the parts that are closely related to the nearest point problem.
The following is an important proposition. See for example \cite{VZ89, HPT00} for a proof.
\begin{prop}\label{prop select}
If $f_1,\dots,f_k$ are DC functions and $f:X\to \mathbb R$ is continuous with $f(x)\in \big\{f_1(x),\dots,f_k(x)\big\}$ for every $x\in X$, then $f$ is also DC.
\end{prop}
The result is true if we replace the domain $X$ by any convex subset.
\medskip
\subsection{DC functions and nearest points}
Showing that a given function is in fact DC is a powerful tool, as it allows us to use many known results about convex and DC functions. For example, if a function is DC on a Banach space with a separable dual, then by Theorem \ref{thm PZ}, it is differentiable outside a $\sigma$-porous set. In the context of the nearest point problem, if we know that the distance function is DC, then using the scheme presented in Section \ref{sec diff}, it would follow that $N(C)$ has a $\sigma$-porous complement. The same holds if we have a difference of a convex function and, say, a smooth function.
\medskip
The simplest and best known example is when $(X,\|\cdot\|)$ is a Hilbert space, where we have the following.
\begin{align*}
d_C^2(x) & = \inf_{y\in C}\|x-y\|^2
\\ & = \inf_{y\in C}\Big[\|x\|^2-2\langle x,y\rangle +\|y\|^2\Big]
\\ & = \|x\|^2 - 2\sup_{y\in C}\Big[\langle x,y\rangle - \|y\|^2/2\Big],
\end{align*}
and the function $x\mapsto \sup_{y\in C}\Big[\langle x,y\rangle - \|y\|^2/2\Big]$ is convex as a supremum of affine functions. Hence $d_C^2$ is DC on $X$.
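For a finite set $C$ the supremum above is a maximum, and the identity can be checked numerically. The Python sketch below (ours, not from the text; the random point cloud is an arbitrary choice) verifies $d_C^2(x) = \|x\|^2 - 2\max_{y\in C}\big[\langle x,y\rangle - \|y\|^2/2\big]$ at random points of $\mathbb R^2$.

```python
import random

# DC decomposition of d_C^2 in a Hilbert space, tested for a finite C in R^2,
# where the supremum over C becomes a maximum.
random.seed(0)
C = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(50)]

def d_C_sq(x):
    """Squared distance from x to the finite set C."""
    return min((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2 for y in C)

def dc_form(x):
    """||x||^2 minus twice the convex function sup_y [<x,y> - ||y||^2/2]."""
    g = max(x[0] * y[0] + x[1] * y[1] - (y[0] ** 2 + y[1] ** 2) / 2 for y in C)
    return (x[0] ** 2 + x[1] ** 2) - 2 * g

for _ in range(100):
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    assert abs(d_C_sq(x) - dc_form(x)) < 1e-9
```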
Moreover, in a Hilbert space we have the following result (see \cite[Sec. 5.3]{BZ05}).
\begin{thm}\label{thm local DC}
If $(X, \|\cdot\|)$ is a Hilbert space, then $d_C$ is locally DC on $X\setminus C$.
\end{thm}
\begin{proof}
Fix $y\in C$ and $x_0\in X\setminus C$. It can be shown that if we let $f_y(x) = \|x-y\|$, then $f_y$ satisfies
\begin{align*}
\big\|f_y'(x_1)-f_y'(x_2)\big\|_{X^*} \le L_{x_0}\|x_1-x_2\|, ~~ x_1,x_2 \in B_{x_0},
\end{align*}
where $L_{x_0} = \frac 4 {d_C(x_0)}$ and $B_{x_0} = B\Big(x_0, \frac 1 2 d_C(x_0)\Big)$. In particular,
\begin{align}\label{lip prop}
\big(f_y'(x+t_2v)-f_y'(x+t_1v)\big)(v) \le L_{x_0}(t_2-t_1), ~~ v\in \mathbb S_X, ~t_2> t_1 \ge 0,
\end{align}
whenever $x+t_1v, x+t_2v \in B_{x_0}$. Next, the convex function $F(x) = \frac {L_{x_0}} 2 \|x\|^2$ satisfies
\begin{align}\label{anti lip hilbert}
\big(F'(x_1)-F'(x_2)\big)(x_1-x_2) \ge L_{x_0}\|x_1-x_2\|^2, ~~ \forall x_1,x_2\in X.
\end{align}
In particular
\begin{align}\label{anti lip}
\big(F'(x+t_2v)-F'(x+t_1v)\big)(v) \ge L_{x_0}(t_2-t_1), ~~ v\in \mathbb S_X, ~t_2>t_1\ge 0.
\end{align}
Altogether, if $g_y(x) = F(x)-f_y(x)$, then
\begin{align*}
\big(g_y'(x+t_2v)-g_y'(x+t_1v)\big)(v) \stackrel{\eqref{lip prop}\wedge \eqref{anti lip}}{\ge} 0, ~~ v\in \mathbb S_X, ~t_2>t_1\ge 0,
\end{align*}
whenever $x+t_1v, x+t_2v \in B_{x_0}$. This implies that $g_y$ is convex on $B_{x_0}$. It then follows that
\begin{align*}
d_C(x) & = \frac {L_{x_0}} 2 \|x\|^2 -\sup_{y\in C}\Bigg[~\frac {L_{x_0}} 2 \|x\|^2-\|x-y\|\Bigg] = F(x) - \sup_{y\in C}g_y(x)
\end{align*}
is DC on $B_{x_0}$.
\end{proof}
\begin{remark}
Even in $\mathbb R^2$ there are sets for which $d_C$ is not DC everywhere (not even locally DC), as was shown in \cite{BB11}. Thus, the most one could hope for is a locally DC function on $X\setminus C$.
\end{remark}
\medskip
Given $q\in (0,1]$, a norm $\|\cdot\|$ is said to be $q$\emph{-H\"older smooth} at a point $x\in X$ if there exists a constant $K_x\in (0,\infty)$ such that for every $y\in \mathbb S_X$ and every $\tau>0$,
\begin{align*}
\frac{\|x+\tau y\|}{2} + \frac{\|x-\tau y\|}{2} \le 1+K_x\tau^{1+q}.
\end{align*}
If $q=1$ then $(X,\|\cdot\|)$ is said to be \emph{Lipschitz smooth} at $x$.
The spaces $L_p$, $p \ge 2$ are known to be Lipschitz smooth, and in general $L_p$, $p>1$, is $s$-H\"older smooth with $s = \min\{1,p-1\}$.
\medskip
A Banach space is said to be $p$\emph{-uniformly convex} if there is a constant $L>0$ such that for every $x,y\in \mathbb S_X$,
\begin{align*}
1-\left\|\frac{x+y}{2}\right\| \ge L\|x-y\|^p.
\end{align*}
Note that this is similar to assuming that $\delta(\epsilon) = L\epsilon^p$ in \eqref{def uni conv}. The spaces $L_p$, $p>1$, are $r$-uniformly convex with $r = \max\{2,p\}$.
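The Hilbert case is explicit: the parallelogram law gives $\|(x+y)/2\|^2 = 1-\|x-y\|^2/4$ for unit vectors, whence $1-\|(x+y)/2\| \ge \|x-y\|^2/8$, i.e., the Euclidean norm is $2$-uniformly convex with constant $L=1/8$. The sketch below (ours, not from the text) checks this inequality on random unit vectors.

```python
import math
import random

# 2-uniform convexity of the Euclidean norm with constant L = 1/8:
# for unit x, y: 1 - ||(x+y)/2|| >= ||x-y||^2 / 8.
random.seed(1)

def unit(d=3):
    """A random unit vector in R^d."""
    v = [random.gauss(0, 1) for _ in range(d)]
    n = math.sqrt(sum(t * t for t in v))
    return [t / n for t in v]

for _ in range(1000):
    x, y = unit(), unit()
    mid = math.sqrt(sum(((a + b) / 2) ** 2 for a, b in zip(x, y)))
    diff2 = sum((a - b) ** 2 for a, b in zip(x, y))
    assert 1 - mid >= diff2 / 8 - 1e-12
```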
One could ask whether the scheme of proof of Theorem \ref{thm local DC} can be used in a more general setting.
\begin{prop}\label{prop Hilbert}
Let $(X, \|\cdot\|)$ be a Banach space, $C\subseteq X$ a closed set, and fix $x_0\in X\setminus C$ and $y\in C$. Assume that there exists $r_0$ such that $f_y(x) = \|x-y\|$ has a Lipschitz derivative on $B(x_0,r_0)$:
\begin{align}\label{lip of der}
\big\|f_y'(x_1)-f_y'(x_2)\big\| \le L_{x_0}\|x_1-x_2\|, ~~ x_1,x_2\in B(x_0,r_0).
\end{align}
Then the norm is Lipschitz smooth on $-y + B_{x_0} = B(x_0-y, r_0)$. If in addition there exists a function $F: X \to \mathbb R$ satisfying
\begin{align}\label{strong conv}
\big(F'(x_1)-F'(x_2)\big)(x_1-x_2) \ge L_{x_0}\|x_1-x_2\|^2, ~~ \forall x_1,x_2\in B(x_0,r_0),
\end{align}
then $(X,\|\cdot\|)$ admits an equivalent norm which is 2-uniformly convex. In particular, if $X=L_p$ then $p=2$.
\end{prop}
\begin{proof}
To prove the first assertion note that \eqref{lip of der} is equivalent to
\begin{align*}
\|x-y+h\|+\|x-y-h\|-2\|x-y\| \le L_{x_0}\|h\|^2, ~~ x\in B_{x_0}.
\end{align*}
See for example \cite[Prop. 2.1]{Fab85}.
To prove the second assertion, note that a function that satisfies \eqref{strong conv} is also known as \emph{strongly convex}: one can show that \eqref{strong conv} is in fact equivalent to the condition
\begin{align*}
f\left(\frac{x_1+x_2}{2}\right) \le \frac 1 2f(x_1)+\frac 1 2 f(x_2) - C\|x_1-x_2\|^2,
\end{align*}
for some constant $C$. See for example \cite[App. A]{SS07}. This implies that there exists an equivalent norm which is 2-uniformly convex (\cite[Thm 5.4.3]{BV10}).
\end{proof}
\begin{remark}
From \cite{Ara88} it is known that if $F:X\to \mathbb R$ satisfies
\begin{align*}
\big(F'(x_1)-F'(x_2)\big)(x_1-x_2) \ge L\|x_1-x_2\|^2,
\end{align*}
for \emph{all} $x_1,x_2\in X$, and also that $F$ is twice (Fr\'echet) differentiable at one point, then $(X,\|\cdot\|)$ is isomorphic to a Hilbert space.
\end{remark}
\begin{remark}
If we replace the Lipschitz condition by a H\"older condition
\begin{align*}
\big\|f_y'(x_1)-f_y'(x_2)\big\| \le \|x_1-x_2\|^{\beta}, ~~ \beta <1,
\end{align*}
then in order to follow the same scheme of proof of Theorem \ref{thm local DC}, instead of \eqref{anti lip hilbert}, we would need a function $F$ satisfying
\begin{align*}
\big(F'(x_1)-F'(x_2)\big)(x_1-x_2) \ge \|x_1-x_2\|^{1+\beta}, ~~ x_1,x_2\in B_{x_0},
\end{align*}
which implies
\begin{align}\label{anti holder}
\big\|F'(x_1)-F'(x_2)\big\| \ge \|x_1-x_2\|^{\beta}, ~~ x_1,x_2\in B_{x_0}.
\end{align}
If $G= (F')^{-1}$, then we get
\begin{align*}
\|Gx_1-Gx_2\| \le \|x_1-x_2\|^{1/\beta}, ~~ x_1,x_2 \in F'(B_{x_0}),
\end{align*}
which can occur only if $G$ is a constant. Hence \eqref{anti holder} cannot hold and the scheme of proof cannot be used if we replace the Lipschitz condition by a H\"older condition.
\end{remark}
\medskip
\subsection{DC sets, DC representable sets}
\begin{defin}
A set $C$ is said to be a DC set if $C=A\setminus B$, where $A,B$ are convex.
\end{defin}
We can also consider the following class of sets.
\begin{defin}
A set $C\subseteq X$ is said to be DC representable if there exists a DC function $f:X\to \mathbb R$ such that $C = \big\{x\in X: f(x) \le 0\big\}$.
\end{defin}
Note that if $C = A\setminus B$ is a DC set, then we can write $C = \Big\{\mathbbm 1_{B}-\mathbbm 1_{A}+1/2 \le 0\Big\}$, where $\mathbbm 1_A$, $\mathbbm 1_B$ are the indicator functions of $A,B,$ respectively. Therefore, $C$ is DC representable. Moreover, we have the following.
\begin{thm}[Thach \cite{Th93}]
Assume that $X$ and $Y$ are two Banach spaces, and $T:Y\to X$ is a surjective map with $\mathrm{ker}(T) \neq \{0\}$. Then for any set $M\subseteq X$ there exists a DC representable set $D\subseteq Y$ such that $M= T(D)$.
\end{thm}
Also, the following is known. See \cite{HPT00}.
\begin{prop}
If $C$ is a DC representable set, then there exist $A,B \subseteq X\oplus \mathbb R$ convex, such that $x\in C \iff (x,x') \in A\setminus B$.
\end{prop}
\begin{proof}
Define $g_1(x,x') = f_1(x)-x'$ and $g_2(x,x') = f_2(x)-x'$, and let $A = \big\{(x,x') : g_1(x,x')\le 0\big\}$ and $B = \big\{(x,x'): g_2(x,x') < 0\big\}$, both convex. Then $x\in C$ if and only if $(x,x') \in A\setminus B$ for some $x'$, namely any $x'$ with $f_1(x)\le x'\le f_2(x)$.
\end{proof}
In particular, every DC representable set in $X$ is a projection of a DC set in $X\oplus \mathbb R$. The following theorem was proved in \cite{TK96}.
\begin{thm}[Thach-Konno \cite{TK96}]
If $X$ is a reflexive Banach space and $C\subseteq X$ is closed, then $C$ is DC representable.
\end{thm}
This raises the following question.
\begin{qst}\label{qst DC}
Is it true that for some classes of spaces, e.g. uniformly convex spaces, there exists $\alpha>0$ such that $d_C^{\alpha}$ is locally DC on $X\setminus C$ whenever $C$ is a DC representable set?
\end{qst}
If the answer to Question \ref{qst DC} is positive, then by the discussion in subsection \ref{sec diff} we could conclude that $N(C)$ has a $\sigma$-porous complement, thus giving an alternative proof of Theorem \ref{thm DMP}. One could also ask Question \ref{qst DC} for DC sets instead of DC representable sets.
\medskip
To end this note, we discuss some simple cases where DC and DC representable sets can be used to study the nearest point problem.
\begin{prop}
Assume that $C = X \setminus \bigcup_{a\in \Lambda}U_a$, where each $U_a$ is an open convex set. Then $d_C$ is locally DC (in fact, locally concave) on $X\setminus C$.
\end{prop}
\begin{proof}
First, it is shown in \cite[Sec. 3]{BF89} that if $a\in \Lambda$, then $d_{X\setminus U_a}$ is concave on $U_a$. Next, it also shown in \cite{BF89} that if $x\in U_a$ then $d_{X\setminus U_a}(x) = d_C(x)$. In particular, $d_C$ is concave on $U_a$.
\end{proof}
\begin{prop}
Assume that $C = A\setminus B$ is a closed DC set, where $A$ is closed and $B$ is open. Then $d_C$ coincides with the convex function $d_A$ at every $x$ with $d_{C}(x) \le d_{A\cap B}(x)$.
\end{prop}
\begin{proof}
Since $A = \big(A\setminus B\big) \cup \big(A\cap B\big)$, we have
$$d_A(x) = \min\big\{d_{A\setminus B}(x), d_{A\cap B}(x)\big\} = \min\big\{d_{C}(x), d_{A\cap B}(x)\big\}.$$
Hence, if $d_C(x) \le d_{A\cap B}(x)$ then $d_C(x) = d_A(x)$ is convex.
\end{proof}
\begin{prop}
Assume that $C$ is a DC representable set, i.e., $C = \big\{x\in X: f_1(x)-f_2(x)\le 0\big\}$, and that $f_2(x) =\max_{1\le i \le m}\varphi_i(x)$, where each $\varphi_i$ is affine. Then $d_C$ is DC on $X$.
\end{prop}
\begin{proof}
Write
\begin{align*}
C & = \Big\{x : f_1(x)-f_2(x)\le 0\Big\}
\\ &= \Big\{x : f_1(x)-\max_{1\le i \le m}\varphi_i(x) \le 0\Big\}
\\ & = \Big\{x : \min_{1\le i \le m}\big(f_1(x)-\varphi_i(x)\big) \le 0\Big\}
\\ & = \bigcup_{i=1}^m \Big\{x : f_1(x)-\varphi_i(x) \le 0\Big\}.
\end{align*}
Each $C_i = \Big\{x~ : ~ f_1(x)-\varphi_i(x) \le 0\Big\}$ is a convex set. Hence, we have that
\[d_C(x) = \min_{1\le i \le m}d_{C_i}(x)\]
is a minimum of the convex functions $d_{C_i}$ and therefore, by Proposition \ref{prop select}, a DC function.
\end{proof}
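When the convex pieces are half-planes in $\mathbb R^2$, each $d_{C_i}$ has a closed form and the minimum is computable directly. The sketch below (ours; the three half-planes are an arbitrary choice, not from the text) verifies the projection formula behind each convex distance $d_{C_i}$.

```python
import math

# C as a union of half-planes C_i = {z : <a_i, z> <= b_i}; each d_{C_i} is
# convex with closed form max(0, <a_i, x> - b_i)/|a_i|, and d_C = min_i d_{C_i}
# is a minimum of convex functions, hence DC.
planes = [((1.0, 0.0), -1.0), ((0.0, 1.0), -1.0), ((-1.0, -1.0), -3.0)]

def d_half(x, a, b):
    """Distance from x to the half-plane {z : <a, z> <= b}."""
    return max(0.0, x[0] * a[0] + x[1] * a[1] - b) / math.hypot(*a)

def d_union(x):
    """d_C for C the union of the half-planes."""
    return min(d_half(x, a, b) for a, b in planes)

def project(x, a, b):
    """Orthogonal projection of x onto {z : <a, z> <= b}."""
    s = max(0.0, x[0] * a[0] + x[1] * a[1] - b) / (a[0] ** 2 + a[1] ** 2)
    return (x[0] - s * a[0], x[1] - s * a[1])

# the projection lies in the half-plane and realizes the distance
x = (2.0, 5.0)
for a, b in planes:
    z = project(x, a, b)
    assert z[0] * a[0] + z[1] * a[1] <= b + 1e-9
    assert abs(math.hypot(x[0] - z[0], x[1] - z[1]) - d_half(x, a, b)) < 1e-9
```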
In \cite{Cep98} it was shown that if $X$ is superreflexive, then any Lipschitz map is a uniform limit of DC functions. See also \cite[Sec. 5.1]{BV10}. We have the following simple result.
\begin{prop}
If $X$ is separable, then $d_C$ is a limit (not necessarily uniform) of DC functions.
\end{prop}
\begin{proof}
Since $X$ is separable, so is its subset $C$; let $Q = \{q_1,q_2,\dots\}\subseteq C$ be countable and dense in $C$. We have
\begin{align*}
d_C(x) = \inf_{z\in C}\|x-z\| = \inf_{z\in Q}\|x-z\| = \lim_{n\to \infty} \Big[\min_{z\in Q_n}\|x-z\|\Big],
\end{align*}
where $Q_n = \{q_1,q_2,\dots,q_n\}$. Again by Proposition \ref{prop select}, $\min_{z\in Q_n}\|x-z\|$ is a DC function, being a minimum of the convex functions $x\mapsto\|x-z\|$, $z\in Q_n$.
\end{proof}
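This limiting procedure can be carried out exactly for a simple closed set. In the sketch below (ours, not from the text) $C = \{0\}\cup\{1/m : m\ge 1\}\subseteq \mathbb R$ is itself countable, so $Q$ may enumerate $C$ directly, and the prefix minima decrease to $d_C(x)$; exact rational arithmetic makes the check precise.

```python
from fractions import Fraction

# Q enumerates C = {0} ∪ {1/m : m >= 1}: q_1 = 0, q_2 = 1, q_3 = 1/2, ...
Q = [Fraction(0)] + [Fraction(1, m) for m in range(1, 200)]

def d_n(x, n):
    """min over the prefix Q_n — a DC function of x for each fixed n."""
    return min(abs(x - z) for z in Q[:n])

x = Fraction(3, 10)
vals = [d_n(x, n) for n in range(1, len(Q) + 1)]
assert all(a >= b for a, b in zip(vals, vals[1:]))   # nonincreasing in n
assert vals[-1] == Fraction(1, 30)                   # d_C(3/10) = 1/3 - 3/10
```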
\section{Conclusion}
Despite many decades of study, the core questions addressed in this note are still far from settled. We hope that our analysis will encourage others to take up the quest, and also to reconsider the related \emph{Chebyshev problem} \cite{B07,BV10}.
| {
"timestamp": "2015-10-16T02:08:58",
"yymm": "1510",
"arxiv_id": "1510.04471",
"language": "en",
"url": "https://arxiv.org/abs/1510.04471",
"abstract": "Given a closed set $C$ in a Banach space $(X, \\|\\cdot\\|)$, a point $x\\in X$ is said to have a nearest point in $C$ if there exists $z\\in C$ such that $d_C(x) =\\|x-z\\|$, where $d_C$ is the distance of $x$ from $C$. We shortly survey the problem of studying how large is the set of points in $X$ which have nearest points in $C$. We then discuss the topic of delta-convex functions and how it is related to finding nearest points.",
"subjects": "Functional Analysis (math.FA)",
"title": "Nearest points and delta convex functions in Banach spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9899864281950695,
"lm_q2_score": 0.8558511414521923,
"lm_q1q2_score": 0.8472810145929291
} |
https://arxiv.org/abs/2004.08316 | On Zeckendorf Related Partitions Using the Lucas Sequence | Zeckendorf proved that every positive integer has a unique partition as a sum of non-consecutive Fibonacci numbers. Similarly, every natural number can be partitioned into a sum of non-consecutive terms of the Lucas sequence, although such partitions need not be unique. In this paper, we prove that a natural number can have at most two distinct non-consecutive partitions in the Lucas sequence, find all positive integers with a fixed term in their partition, and calculate the limiting value of the proportion of natural numbers that are not uniquely partitioned into the sum of non-consecutive terms in the Lucas sequence. | \section{Introduction}
The Fibonacci numbers have fascinated mathematicians for centuries with many interesting properties. By convention, the Fibonacci sequence $\left\{F_n\right\}_{n=0}^{\infty}$ is defined as follows: let $F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} + F_{n-2}$, for $n\ge 2$.
A beautiful theorem of Zeckendorf \cite{Z} states that every natural number $n$ can be uniquely written as a sum of non-consecutive Fibonacci numbers. This gives the so-called Zeckendorf partition of $n$. A formal statement of Zeckendorf's theorem is as follows:
\begin{thm}[Zeckendorf]\label{p1}
For any $n\in\mathbb{N}$, there exists a unique increasing sequence of positive integers $\{c_1, c_2, \ldots, c_k\}$ such that $c_1\ge 2$, $c_i\ge c_{i-1}+2$ for $i = 2, 3, \ldots, k$, and $n = \sum_{i=1}^kF_{c_i}$.
\end{thm}
Much work has been done to understand the structure of Zeckendorf partitions and their applications (see \cite{BDEMMTW1, BDEMMTW2, B, CSH, Fr, MG1, MG2, HS, L, MW1, MW2}) and to generalize them (see \cite{Chu, Day, DDKMMV, DFFHMPP, FGNPT, GTNP, Ho, K, ML, MMMS, MMMMS}). In this paper, we study the partition of natural numbers into Lucas numbers. The Lucas sequence $\left\{L_n\right\}_{n=0}^{\infty}$ is defined as follows: let $L_0 = 2$, $L_1 = 1$, and $L_n = L_{n-1} + L_{n-2}$, for $n\ge 2$. As the Lucas sequence is closely related to the Fibonacci sequence, it is not surprising that we can also partition natural numbers using Lucas numbers.
\begin{thm}[Zeckendorf]\label{p2}
Every natural number can be partitioned into the sum of non-consecutive terms of the Lucas sequence.
\end{thm}
Note that the distinction between Theorems \ref{p1} and \ref{p2} lies in the \textit{uniqueness} property of such partitions of natural numbers in the Fibonacci and Lucas sequences.~Although $5$ is uniquely partitioned into $F_5 = 5$ in $\{F_2, F_3, \ldots \}$, its partition is not unique in the Lucas sequence as $5 = L_0 + L_2 = 2 + 3$ and $5 = L_1 + L_3 = 1 + 4$. In \cite{B2}, Brown shows various ways to obtain a unique partition using the Lucas sequence.\footnote{For more on Brown's criteria, see \cite{BHLMT1, BHLMT2}.} In this paper, we prove the following results.
\begin{thm}\label{m1}
If we allow $L_0$ and $L_2$ to appear simultaneously in a partition, each natural number can have at most two distinct non-consecutive partitions in the Lucas sequence.
\end{thm}
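Both the existence statement of Theorem \ref{p2} and the at-most-two bound of Theorem \ref{m1} can be checked by brute force for small $n$. The Python sketch below (ours, not part of the paper) enumerates all non-consecutive index sets over $L_0,\dots,L_{12}$, allowing $L_0$ and $L_2$ to appear together, and tallies the representations of each sum.

```python
from itertools import combinations

# Lucas numbers L_0..L_12 (L_12 = 322, enough to represent every n <= 300).
L = [2, 1]
while len(L) < 13:
    L.append(L[-1] + L[-2])

def noncons_subsets(m):
    """Index subsets of {0,..,m-1} with no two consecutive indices."""
    for r in range(m + 1):
        for idx in combinations(range(m), r):
            if all(b - a >= 2 for a, b in zip(idx, idx[1:])):
                yield idx

reps = {}
for idx in noncons_subsets(13):
    s = sum(L[i] for i in idx)
    reps.setdefault(s, []).append(idx)

# every 1 <= n <= 300 has a partition (Theorem p2) and at most two (Theorem m1)
assert all(1 <= len(reps.get(n, [])) <= 2 for n in range(1, 301))
assert sorted(reps[5]) == [(0, 2), (1, 3)]   # 5 = 2 + 3 = 1 + 4
```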
\begin{thm}\label{m2} Suppose that we do not allow $L_0$ and $L_2$ to appear simultaneously in a partition.
The set of all positive integers having the summand $L_k$ in their partition is given by
$$Z(k) \ =\ \begin{cases}\left\{2+3n+\left\lfloor\frac{n+1}{\Phi}\right\rfloor\, :\, n\ge 0\right\}, & {\rm if}~ k = 0,\\
\left\{3n+\left\lfloor\frac{n+\Phi^2}{\Phi}\right\rfloor\, :\, n\ge 0\right\}, &{\rm if}~ k = 1,\\
\left\{L_k\left\lfloor\frac{n+\Phi^2}{\Phi}\right\rfloor+ nL_{k+1}+j\, :\, n\ge 0 ~{\rm and}~ 0\le j\le L_{k-1}-1\right\}, &{\rm if}~ k \ge 2.
\end{cases}$$
\end{thm}
Theorem \ref{m2} is an analogue of \cite[Theorem 3.4]{MG1}. For $k\ge 0$, we find all positive integers having the summand $L_k$ in their partition. We have a different formula when $k = 0$ instead of one formula for all values of $k$ as in \cite[Theorem 3.4]{MG1}.
Our next result is predicted by \cite[Theorem 1]{Cha}, which deals with general recurrence relations; however, in the case of Lucas numbers, we can relate Lucas partitions to the golden string.
\begin{thm}\label{m3}
If we allow $L_0$ and $L_2$ to appear simultaneously in a partition, the proportion of natural numbers that are not uniquely partitioned into the sum of non-consecutive terms of the Lucas sequence converges to $\frac{1}{3\Phi+1}$, where $\Phi$ is the golden ratio.
\end{thm}
\section{Preliminaries}
\subsection{Definitions}
\begin{defi}
Let $A = \{a_0, a_1, \ldots, a_m \}$ be the set consisting of the first $m+1$ terms of the sequence $\big\{a_k\big\}_{k=0}^{\infty}$.~We say a proper subset $B$ of $A$ is a \textit{non-consecutive subset} of $A$ if the elements of $B$ are pairwise non-consecutive in $\big\{a_k\big\}_{k=0}^{\infty}$.~Furthermore, we say a sum $S$ is a \textit{non-consecutive sum} of $A$ if $S$ is the sum of distinct elements of $A$ that are pairwise non-consecutive in $\big\{a_k\big\}_{k=0}^{\infty}$.
\end{defi}
\begin{defi}
Let $A_m = \{L_0, L_1, \ldots, L_m \}$ denote the set consisting of the first $m+1$ terms of the Lucas sequence.
\end{defi}
\subsection{The golden string}
The golden string $S = BABBABABBABBA\ldots$ is defined to be the infinite string of $A$'s and $B$'s constructed recursively as follows. Let $S_1 = A$ and $S_2 = B$, and then, for $k\ge 3$, let $S_k$ be the concatenation of $S_{k-1}$ and $S_{k-2}$, which we denote by $S_{k-1}\circ S_{k-2}$. For example, $S_3 = S_2\circ S_1 = B\circ A = BA$, $S_4 = S_3\circ S_2 = BA\circ B = BAB$, $S_5 = S_4\circ S_3 = BABBA$, and so on. Interestingly, the golden string is highly connected to the Zeckendorf partition \cite{MG2}. As we will see later, the string is also closely related to the partitions of natural numbers into Lucas numbers.
\begin{rek}\label{r1}\normalfont We mention two properties of the golden string that we will use in due course.
\begin{itemize}
\item[(1)] For $j\ge 1$, the $(F_{2j})$th character of $S$ is $B$ and the $(F_{2j+1})$th character of $S$ is $A$. This can be easily proved using induction.
\item[(2)] The number of $B$'s amongst the first $n$ characters of $S$ is given by
$\left\lfloor\frac{n+1}{\Phi}\right\rfloor$,
where $\Phi = \frac{1+\sqrt{5}}{2}$ is the golden ratio. For a proof of this result, see \cite[Lemma 3.3]{MG2}.
\end{itemize}
\end{rek}
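Both properties, as well as the recursive construction itself, are easy to check by machine; a short Python sketch (names ours):

```python
def golden_string(n_chars):
    """Prefix of the golden string S: S_1 = A, S_2 = B, S_k = S_{k-1} o S_{k-2}."""
    a, b = "A", "B"
    while len(b) < n_chars:
        a, b = b, b + a   # each S_k is a prefix of S_{k+1}, so the prefix stabilizes
    return b[:n_chars]

S = golden_string(100)
assert S.startswith("BABBABABBABBA")

# Property (2): the number of B's among the first n characters is floor((n+1)/Phi).
PHI = (1 + 5 ** 0.5) / 2
assert all(S[:n].count("B") == int((n + 1) / PHI) for n in range(1, 101))

# Property (1): the (F_{2j})th character is B and the (F_{2j+1})th is A (1-indexed).
F = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]   # F_1, F_2, ..., F_10
for j in range(1, 5):
    assert S[F[2 * j - 1] - 1] == "B"    # F[2j-1] holds F_{2j}
    assert S[F[2 * j] - 1] == "A"        # F[2j] holds F_{2j+1}
```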
\section{At Most Two Partitions}
In this section, we present our results that determine the maximum number of non-consecutive partitions that a natural number can have in the Lucas sequence, the proofs of which are adapted from \cite{HR}. Before we prove Theorem \ref{m1}, we introduce the following preliminary lemmas. For the proofs of Lemmas \ref{Lemma3} and \ref{Lemma5}, see Appendix B.
\begin{lem}\label{Lemma3}
Let $S$ be any non-consecutive sum of $A_m$.~Then
\begin{enumerate}
\item if $m$ is odd, $S$ assumes all values from 0 to $L_{m+1}-1$ inclusive, and
\item if $m$ is even, then $S$ assumes all values from 0 to $L_{m+1}+1$ inclusive, excluding $L_{m+1}$.
\end{enumerate}
\end{lem}
\begin{lem}\label{Lemma5}
If $m \geq 0$, then $L_{2m+1}+1$ has exactly two non-consecutive partitions in the Lucas sequence.
\end{lem}
\begin{proof}[Proof of Theorem \ref{m1}]
It suffices to show that for every non-negative integer $m$, there is no natural number that is equal to three or more distinct non-consecutive sums of $A_m$.~We proceed by strong induction.~No natural number is equal to three or more distinct non-consecutive sums of $A_0$ or $A_1$.~This establishes the base case.~Assume Theorem \ref{m1} holds for all non-negative integers less than or equal to $m=k$.~First, suppose that $k$ is odd.~By Lemma \ref{Lemma3}, the non-consecutive sums that we can form from $A_k$ are exactly the values from 0 to $L_{k+1}-1$ inclusive.~Hence, when we add the term $L_{k+1}$ to $A_k$, every new non-consecutive sum that can be formed is at least $L_{k+1}$.~Thus no natural number can be a third distinct non-consecutive sum of $A_{k+1}$, because the sets of non-consecutive sums that can be formed before and after the addition of the term $L_{k+1}$ do not intersect.~Now suppose $k \geq 2$ is even.~By Lemma \ref{Lemma3}, the non-consecutive sums we can form from $A_k$ are exactly the values from 0 to $L_{k+1}+1$ inclusive, excluding $L_{k+1}$.~When we add the term $L_{k+1}$ to $A_k$, every new non-consecutive sum is at least $L_{k+1}$, and $L_{k+1}+1$ is the only value that is formed again, namely as $L_{k+1}+L_1$.~By Lemma \ref{Lemma5}, $L_{k+1}+1$ has exactly two distinct non-consecutive partitions in the Lucas sequence.~Therefore, no natural number can be a third distinct non-consecutive sum of $A_{k+1}$.~This completes the inductive step.
\end{proof}
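The value sets described in Lemma \ref{Lemma3} can be verified by exhaustive search for small $m$; a Python sketch (illustrative only, names ours):

```python
from itertools import combinations

def lucas(n_terms):
    L = [2, 1]
    while len(L) < n_terms:
        L.append(L[-1] + L[-2])
    return L

def nonconsecutive_sums(m):
    """All values of non-consecutive sums of A_m = {L_0, ..., L_m}, empty sum included."""
    L = lucas(m + 1)
    vals = set()
    for r in range(m + 2):
        for idx in combinations(range(m + 1), r):
            if all(b - a >= 2 for a, b in zip(idx, idx[1:])):
                vals.add(sum(L[i] for i in idx))
    return vals

L = lucas(12)
for m in range(1, 10):
    got = nonconsecutive_sums(m)
    if m % 2 == 1:     # m odd: every value from 0 to L_{m+1} - 1
        assert got == set(range(L[m + 1]))
    else:              # m even: 0 to L_{m+1} + 1, excluding L_{m+1}
        assert got == set(range(L[m + 1] + 2)) - {L[m + 1]}
```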
\section{Partitions with a Fixed Term}
Let $\mathcal{X}_k$ denote the set of all positive integers having $L_k$ as the smallest summand in their partition. Let $\mathcal{Q}_k = (q_k(j))_{j\ge 1}$ be the strictly increasing sequence obtained by rearranging the elements of $\mathcal{X}_k$ into ascending numerical order. We consider the cases $k = 0$ and $k\ge 1$ separately.
\subsection{When $k = 0$}
Table 1 replaces each term $q_k(j)$ in $\mathcal{Q}_k$ with an ordered list of the summands in its partition.
\begin{center}
\begin{tabular}{cccccccccccc}
Row & & & & & & & & & & & \\
\hline
1 & & $L_0$ & & & & & & & & & \\
2 & & $L_0$ & & $L_3$ & & & & & & & \\
3 & & $L_0$ & & & & $L_4$ & & & & & \\
$4$ & & $L_0$ & & & & & & $L_5$ & & & \\
$5$ & & $L_0$ & & $L_3$ & & & & $L_5$ & & & \\
$6$ & & $L_0$ & & & & & & & & $L_6$ & \\
$7$ & & $L_0$ & & $L_3$ & & & & & & $L_6$ & \\
$8$ & & $L_0$ & & & & $L_4$ & & & & $L_6$ &
\end{tabular}
\end{center}
\begin{center}
Table 1. The partitions of the positive integers having $L_0$ as their smallest summand.
\end{center}
\begin{lem}\label{l1}
For $j\ge 3$, the rows of Table 1 for which $L_{j}$ is the largest summand are those numbered from $F_{j-1}+1$ to $F_{j}$ inclusive.
\end{lem}
\begin{proof}
We prove by induction. \textit{Base cases:} it is easy to check that the statement of the lemma is true for $j = 3$ and $j = 4$. \textit{Inductive hypothesis:} assume that it is true for all $j$ such that $3\le j\le m$ for some $m\ge 4$. By the inductive hypothesis, the number of rows such that their largest summands are no greater than $L_{m-1}$ is
$$1+\sum_{j=3}^{m-1}(F_{j} - F_{j-1}) \ =\ F_{m-1},$$
which is also the number of rows whose largest summand is $L_{m+1}$, since appending $L_{m+1}$ to each such row produces exactly one row with largest summand $L_{m+1}$. By the inductive hypothesis, the rows whose largest summand is $L_m$ are numbered from $F_{m-1}+1$ to $F_{m}$ inclusive. Therefore, the rows whose largest summand is $L_{m+1}$ are numbered from $F_{m}+1$ to $F_{m+1}$, as desired. This completes our proof.
\end{proof}
\begin{lem}\label{k1}
For $j\ge 1$, we have
$$q_k(j+1)-q_k(j) \ =\ \begin{cases}L_{2}, & \mbox{ if }A \mbox{ is the }j\mbox{th character of }S,\\
L_{3}, &\mbox{ if }B \mbox{ is the }j\mbox{th character of }S.\end{cases}$$
\end{lem}
\begin{proof}
We prove by induction. \textit{Base cases:} it is easy to check that the statement of the lemma is true for $1\le j\le F_4-1$. \textit{Inductive hypothesis:} suppose that it is true for $1\le j\le F_m-1$ for some $m\ge 4$. By Lemma \ref{l1}, the number of rows in Table 1 whose largest summand is no greater than $L_{m-1}$ is
$$1+ \sum_{j=3}^{m-1}(F_{j} - F_{j-1}) \ =\ F_{m-1},$$
which is also the number of rows whose largest summand is $L_{m+1}$. Furthermore, the rows for which $L_{m+1}$ is the largest summand are numbered from $F_{m}+1$ to $F_{m+1}$ inclusive. Therefore, the ordering of the rows in Table 1 implies that
$q_k(i+F_{m})\ =\ q_k(i) + L_{m+1}$,
for $1\le i\le F_{m-1}$. Hence, for $1\le i\le F_{m-1}-1$, we have
\begin{align*}
q_k(i+1+F_{m}) - q_k(i+F_{m}) &\ =\ (q_k(i+1) + L_{m+1})-(q_k(i) + L_{m+1})\ =\ q_k(i+1) - q_k(i).
\end{align*}
By the construction of $S$, the substring comprising its first $F_{m-1}$ characters is identical to the substring of its characters numbered from $F_{m}+1$ to $F_{m+1}$ inclusive. Thus the lemma is true for $F_{m}+1\le j\le F_{m+1}-1$. It remains to show that it is true for $j = F_{m}$. We have
\begin{align*}
q_k(F_m+1) - q_k(F_m) \ =\begin{cases}\ L_{m+1} - (L_{m}+L_{m-2} + \cdots + L_4) \ =\ L_3,&\mbox{ if }m\mbox{ is even,}\\
L_{m+1} - (L_m + L_{m-2} + \cdots + L_3)\ =\ L_2,&\mbox{ if }m\mbox{ is odd.}\end{cases}
\end{align*}
By Remark \ref{r1} item (1), we know that the lemma is true for $j = F_{m}$, completing the proof.
\end{proof}
\subsection{When $k\ge 1$}
Table 2 replaces each term $q_k(j)$ in $\mathcal{Q}_k$ with an ordered list of the summands in its partition.
\begin{center}
\begin{tabular}{cccccccccccc}
Row & & & & & & & & & & & \\
\hline
1 & & $L_k$ & & & & & & & & & \\
2 & & $L_k$ & & $L_{k+2}$ & & & & & & & \\
3 & & $L_k$ & & & & $L_{k+3}$ & & & & & \\
$4$ & & $L_k$ & & & & & & $L_{k+4}$ & & & \\
$5$ & & $L_k$ & & $L_{k+2}$ & & & & $L_{k+4}$ & & & \\
$6$ & & $L_k$ & & & & & & & & $L_{k+5}$ & \\
$7$ & & $L_k$ & & $L_{k+2}$ & & & & & & $L_{k+5}$ & \\
$8$ & & $L_k$ & & & & $L_{k+3}$ & & & & $L_{k+5}$ &
\end{tabular}
\end{center}
\begin{center}
Table 2. The partitions of the positive integers having $L_k$ as their smallest summand.
\end{center}
Table 2 is similar to Table 1 in \cite{MG1}. The next lemma follows from \cite[Lemma 3.1]{MG1}.
\begin{lem}\label{l2}
For $j\ge 2$, the rows of Table 2 for which $L_{k+j}$ is the largest summand are those numbered from $F_{j}+1$ to $F_{j+1}$ inclusive.
\end{lem}
\begin{lem}
For $j\ge 1$, we have
$$q_k(j+1)-q_k(j) \ =\ \begin{cases}L_{k+1}, & \mbox{ if }A \mbox{ is the }j\mbox{th character of }S,\\
L_{k+2}, &\mbox{ if }B \mbox{ is the }j\mbox{th character of }S.\end{cases}$$
\end{lem}
\begin{proof}
We prove by induction. \textit{Base cases:} it is easy to check that the statement of the lemma is true for $j$ such that $1\le j\le F_4-1$. \textit{Inductive hypothesis:} assume that it is true for $1\le j\le F_m-1$ for some $m\ge 4$. From Lemma \ref{l2}, the first $F_{m-1}$ rows of Table 2 are those for which the largest summand is no greater than $L_{k+m-2}$. Also, the rows for which $L_{k+m}$ is the largest summand are those numbered from $F_m+1$ to $F_{m+1}$ inclusive. Therefore, the ordering of the rows implies that
$q_k(i+F_m) \ =\ q_k(i) + L_{k+m}$,
for $i = 1, 2, \ldots, F_{m-1}$. Hence, for $i = 1, 2, \ldots, F_{m-1}-1$, we have
\begin{align*}
q_k(i+1+F_m) - q_k(i+F_m) &\ =\ (q_k(i+1)+L_{k+m}) - (q_k(i) + L_{k+m})\ =\ q_k(i+1) - q_k(i).
\end{align*}
By the construction of $S$, the substring comprising its first $F_{m-1}$ characters is identical to the substring of its characters numbered from $F_m+1$ to $F_{m+1}$ inclusive. Thus, the lemma is true for $F_m+1\le j\le F_{m+1}-1$. It remains to show that the lemma is true for $j = F_m$. We have
\begin{align*}
q_k(F_m+1) - q_k(F_m)\ =\begin{cases}\ L_{k+m} - (L_{k+m-1}+L_{k+m-3} + \cdots + L_{k+3}) \ =\ L_{k+2},&\mbox{ if }m\mbox{ is even,}\\
L_{k+m} - (L_{k+m-1}+L_{k+m-3} + \cdots + L_{k+2})\ =\ L_{k+1},&\mbox{ if }m\mbox{ is odd.}\end{cases}
\end{align*}
By Remark \ref{r1} item (1), we know that the lemma is true for $j = F_{m}$, completing the proof.
\end{proof}
We are ready to prove Theorem \ref{m2}.
\begin{proof}[Proof of Theorem \ref{m2}] We consider three cases.
\mbox{ }\newline
Case 1: $k = 0$. By Lemma \ref{k1}, we have $\mathcal{X}_0 \ =\ \{2+a(n)L_2 + b(n)L_3: n\ge 0\}$, where $a(n)$ and $b(n)$ denote the number of $A$'s and $B$'s, respectively, amongst the first $n$ characters in the golden string. Using Remark \ref{r1} item (2), we have
\begin{align*}
\mathcal{X}_0 &\ =\ \left\{2+3\left(n-\left\lfloor \frac{n+1}{\Phi}\right\rfloor\right)+4\left\lfloor \frac{n+1}{\Phi}\right\rfloor\, :\, n\ge 0\right\}\ =\ \left\{2+3n+\left\lfloor \frac{n+1}{\Phi}\right\rfloor\,:\, n\ge 0\right\}.
\end{align*}
It is clear that $Z(0) = \mathcal{X}_0$; hence, the statement of the theorem is true when $k = 0$.
\mbox{ }\newline
Case 2: $k = 1$. Using a similar reasoning as above, we have
\begin{align*}
\mathcal{X}_1 &\ =\ \left\{1+L_2\left(n-\left\lfloor \frac{n+1}{\Phi}\right\rfloor\right)+L_3\left\lfloor \frac{n+1}{\Phi}\right\rfloor\, :\, n\ge 0\right\}\\
&\ =\ \left\{1+3\left(n-\left\lfloor \frac{n+1}{\Phi}\right\rfloor\right)+4\left\lfloor \frac{n+1}{\Phi}\right\rfloor\, :\, n\ge 0\right\}\ =\ \left\{3n+\left\lfloor \frac{n+\Phi^2}{\Phi}\right\rfloor\, :\, n\ge 0\right\}.
\end{align*}
It is clear that $Z(1) = \mathcal{X}_1$; hence, the statement of the theorem is true when $k = 1$.
\mbox{ }\newline
Case 3: $k\ge 2$. Using a similar reasoning as above, we have
\begin{align*}
\mathcal{X}_k &\ =\ \left\{L_k+L_{k+1}\left(n-\left\lfloor \frac{n+1}{\Phi}\right\rfloor\right)+L_{k+2}\left\lfloor \frac{n+1}{\Phi}\right\rfloor\, :\, n\ge 0\right\}\\
&\ =\ \left\{L_k\left(1+\left\lfloor \frac{n+1}{\Phi}\right\rfloor\right)+nL_{k+1}\, :\, n\ge 0\right\}\ =\ \left\{L_k\left\lfloor \frac{n+\Phi^2}{\Phi}\right\rfloor+nL_{k+1}\, :\, n\ge 0\right\}.
\end{align*}
If $k\ge 3$, the numbers in $\{L_0, L_1, \ldots, L_{k-2}\}$ are used to obtain the partitions of all integers for which the largest summand is no greater than $L_{k-2}$. In particular, such partitions generate all integers from $1$ to $L_{k-1}-1$ inclusive. Furthermore, such partitions can be appended to any partition having $L_k$ as its smallest summand to produce another partition. Therefore,
\begin{align*}
Z(k) \ =\ \left\{L_k\left\lfloor \frac{n+\Phi^2}{\Phi}\right\rfloor+nL_{k+1} + j\, :\, n\ge 0\mbox{ and } 0\le j\le L_{k-1}-1\right\},
\end{align*}
as desired. It is easy to check that this formula is also true for $k = 2$.
\end{proof}
\section{Proportion of Nonunique Partitions}
Let $c(N)$ denote the number of natural numbers at most $N$ that are not uniquely represented in the Lucas sequence. We want to show that $\displaystyle\lim_{N\rightarrow \infty}\frac{c(N)}{N} = \frac{1}{1+3\Phi}$, where $\Phi = (1+\sqrt{5})/2$ is the golden ratio. Note that \cite[Lemma 3]{B2} says we can make the Lucas partition unique by requiring that $L_0$ and $L_2$ do not both appear in the partition. Therefore, if a number has two partitions, then one of the partitions starts with $L_0+L_2$. If we can characterize all of these numbers and find a formula for $c(N)$ in terms of $N$, we are done. Call the set of these numbers $K$, and let $q_k(j)$ be the $j$th smallest number in $K$. We form the following table listing all such numbers in increasing order.
\begin{center}
\begin{tabular}{cccccccccccc}
Row & & & & & & & & & & & \\
\hline
1 & & $L_0+L_2$ & & & & & & & & & \\
2 & & $L_0+L_2$ & & $L_4$ & & & & & & & \\
3 & & $L_0+L_2$ & & & & $L_5$ & & & & & \\
$4$ & & $L_0+L_2$ & & & & & & $L_6$ & & & \\
$5$ & & $L_0+L_2$ & & $L_4$ & & & & $L_6$ & & & \\
$6$ & & $L_0+L_2$ & & & & & & & & $L_7$ & \\
$7$ & & $L_0+L_2$ & & $L_4$ & & & & & & $L_7$ & \\
$8$ & & $L_0+L_2$ & & & & $L_5$ & & & & $L_7$ &
\end{tabular}
\end{center}
\begin{center}
Table 3. The partitions of the positive integers having $L_0$ and $L_2$ as their smallest summands.
\end{center}
Observe that Table 3 has the same structure as Table 1. Therefore, Lemma \ref{k1} applies with a change of index. In particular, we have the following.
\begin{lem}\label{k2}
For $j\ge 1$, we have
$$q_k(j+1)-q_k(j) \ =\ \begin{cases}L_{3}, & \mbox{ if }A \mbox{ is the }j\mbox{th character of }S,\\
L_{4}, &\mbox{ if }B \mbox{ is the }j\mbox{th character of }S.\end{cases}$$
\end{lem}
Therefore, we can write
$$K \ =\ \{L_0+L_2 + a(n)L_3 + b(n)L_4: n\ge 0\},$$
where $a(n)$ and $b(n)$ denote the number of $A$'s and $B$'s, respectively, amongst the first $n$ characters in the golden string. Hence,
\begin{align*}K &\ =\ \{5 + 4(n-\floor{(n+1)/\Phi})+7\floor{(n+1)/\Phi}\,:\, n\ge 0\}\ =\ \{5+4n+3\floor{(n+1)/\Phi}\,:\, n\ge 0\}.\end{align*}
Now, we are ready to compute the limit.
\begin{proof}[Proof of Theorem \ref{m3}]The number of integers with two partitions up to a number $N$ is exactly
$\#\{n\ge 0\,|\, 5+4n+3\floor{(n+1)/\Phi}\le N\}$.
This count equals $\frac{N-1}{4+3/\Phi}$ up to a uniformly bounded error. Therefore, as claimed, the limit is
$$\lim_{N\rightarrow \infty}\frac{1}{N}\frac{N-1}{4+3/\Phi}\ =\ \frac{1}{4+3/\Phi} \ =\ \frac{1}{1+3\Phi}.$$
\end{proof}
Among the first $N$ natural numbers, we examine how well $\alpha = \frac{1}{3\Phi + 1} \approx 0.17082$ estimates the proportion of natural numbers within this range that do not have unique non-consecutive partitions in the Lucas sequence.~The data we collect is shown in Table 4.
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
N & $c\left(N \right)$ & $\beta\left(N\right)$ \\
\hline
10 & 1 & 10.000 \% \\ \hline
100 & 17 & 17.000\% \\ \hline
1,000 & 171 & 17.100\% \\ \hline
10,000 & 1,708 & 17.080\% \\ \hline
$10^5$ & 17,082 & 17.082\%\\ \hline
$10^6$ & 170,820 & 17.082\%\\ \hline
\end{tabular}
\end{center}
\begin{center}
Table 4. Proportion $\beta\left(N\right)$ of the first $N$ natural numbers that do not have unique non-consecutive partitions in the Lucas sequence.
\end{center}
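Table 4 can be reproduced directly from the closed form for $K$ obtained above; a Python sketch (names ours), using the exact identities $\lfloor m/\Phi\rfloor = \lfloor m\Phi\rfloor - m$ and $\lfloor m\Phi\rfloor = \lfloor (m + \sqrt{5m^2})/2\rfloor$ to avoid floating-point error:

```python
from math import isqrt

def floor_div_phi(m):
    # Exact floor(m/Phi), via 1/Phi = Phi - 1 and floor(m*Phi) = (m + isqrt(5*m*m)) // 2
    return (m + isqrt(5 * m * m)) // 2 - m

def c(N):
    """Count of integers at most N with two non-consecutive Lucas partitions,
    using K = {5 + 4n + 3*floor((n+1)/Phi) : n >= 0}."""
    count = 0
    n = 0
    while 5 + 4 * n + 3 * floor_div_phi(n + 1) <= N:
        count += 1
        n += 1
    return count

# Reproduces Table 4:
for N, expected in [(10, 1), (100, 17), (1000, 171), (10**4, 1708),
                    (10**5, 17082), (10**6, 170820)]:
    assert c(N) == expected
```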
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}
{-30pt \@plus -1ex \@minus -.2ex}
{2.3ex \@plus.2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname. }
\makeatother
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\newtheorem{conjecture}{Conjecture}
\newtheorem{remark}{Remark}
\begin{document}
\begin{center}
\uppercase{\bf Extrema of Luroth Digits and a zeta function limit relation}
\vskip 20pt
{\bf Jayadev S. Athreya\footnote{Supported by National Science Foundation Grant DMS 2003528}}\\
{\smallit Department of Mathematics and Department of Comparative History of Ideas, University of Washington, Seattle, WA, USA.}\\
{\tt jathreya@uw.edu}\\
{\bf Krishna B. Athreya}\\
{\smallit Department of Mathematics and Department of Statistics, Iowa State University, Ames, IA, USA.}\\
{\tt kbathreya@gmail.com}\\
\end{center}
\vskip 20pt
\centerline{\smallit Received: , Revised: , Accepted: , Published: }
\vskip 30pt
\centerline{\bf Abstract}
\noindent
We describe how certain properties of the extrema of the digits of Luroth expansions lead to a probabilistic proof of a limiting relation involving the Riemann zeta function and the Bernoulli triangles. We also discuss trimmed sums of Luroth digits. Our goal is to show how direct computations in this case lead to explicit formulas and some interesting discussions of special functions.
\pagestyle{myheadings}
\markright{\smalltt INTEGERS: 21 (2021)\hfill}
\thispagestyle{empty}
\baselineskip=12.875pt
\vskip 30pt
\section{Introduction}
Let $X_1, X_2, \ldots X_k, \ldots$ be a sequence of independent, identically distributed (IID) random variables taking values in the positive integers $\mathbb N =\{1, 2, 3, \ldots \}$ with $$P(X_1 = n) = \frac{1}{n(n+1)}, n \geq 1.$$ We call this distribution the \emph{Luroth} distribution. The \emph{Luroth map} $L: (0, 1] \rightarrow (0, 1]$ is the piecewise-linear map given by $$L(x) = N(x) ( (N(x)+1) x - 1),$$ where $$N(x) =\left \lfloor \frac 1 x \right \rfloor.$$ It is a nice exercise to check that $L$ preserves Lebesgue measure $m$, and that if $U$ is a uniform $(0, 1]$ random variable, the sequence $X_k = N(L^{k-1} (U))$ is a sequence of IID random variables with the above distribution, since $$P(X_1 = n) = m\left(N^{-1}(n)\right) = m\left(\frac{1}{n+1}, \frac{1}{n}\right] = \frac{1}{n(n+1)}.$$
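These computations are easy to experiment with; the following Python sketch (function names ours, not from the paper) iterates the Luroth map and checks the digit distribution empirically:

```python
import math
import random

def N(x):
    """Luroth digit of x in (0, 1]."""
    return math.floor(1 / x)

def L(x):
    """The Luroth map L(x) = N(x) * ((N(x) + 1) x - 1)."""
    n = N(x)
    return n * ((n + 1) * x - 1)

def digits(u, k):
    """First k Luroth digits X_1, ..., X_k of u."""
    out = []
    for _ in range(k):
        if not 0.0 < u <= 1.0:   # guard against floating-point escape at endpoints
            break
        out.append(N(u))
        u = L(u)
    return out

# N^{-1}(n) is the interval (1/(n+1), 1/n], of Lebesgue measure 1/(n(n+1)):
for n in range(1, 6):
    assert N(1 / n) == n and N(1 / (n + 1) + 1e-9) == n

# Monte Carlo check that P(X = n) = 1/(n(n+1)):
rng = random.Random(0)
sample = [d for _ in range(2000) for d in digits(rng.random(), 5)]
assert abs(sample.count(1) / len(sample) - 1 / 2) < 0.03   # P(X = 1) = 1/2
assert abs(sample.count(2) / len(sample) - 1 / 6) < 0.03   # P(X = 2) = 1/6
```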
This map and its associated digit expansion was introduced by Luroth in~\cite{Luroth}. Subsequently, Luroth random variables have been extensively studied, since they provide an attractive motivating example at the intersection of many questions on number theory, probability theory, and dynamical systems and ergodic theory. The Luroth map is in some sense a kind of linearized version of the Gauss map (see Section~\ref{sec:Gauss}), where, as we remark above, the resulting digit expansion of numbers have the appealing property that the digits are \emph{independent} random variables.
This makes it often possible to do very explicit computations with Luroth random variables. See~\cite{JDv} for an introduction to the ergodic properties of the map $L$, ~\cite{Galambos} for a discussion of different types of digit expansions, and, for example~\cite{Giuliano} and~\cite{Tan} and the references within for more recent work on Luroth and related expansions from the probabilistic and dynamical perspectives respectively.
Our results center on understanding the limiting behavior of \emph{maxima} of sequences of IID Luroth random variables, and the convergence behavior of appropriately centered and scaled sums. These are classical topics in the theory of sequences of random variables, and focusing on this setting leads to some interesting computations.
Using the fact that the Luroth random variables are \emph{heavy-tailed}, we will show that the probability $\rho_k$ that the maximum $M_k$ of the first $k$ elements $\{X_1, \ldots, X_k\}$ of a a sequence of Luroth variables is \emph{unique} tends to $1$ as $k \rightarrow \infty.$ Our main new result, Theorem~\ref{theorem:rhok}, shows how to \emph{explicitly} compute this probability $\rho_k$, giving an interesting relationship between this probability, the Riemann zeta function, and partial sums of binomial coefficients which are entries in what is known as Bernoulli's triangle. Using the fact that $\rho_k \rightarrow 1$, we obtain a new limiting relation, Corollary~\ref{cor:zeta}, between these coefficients and the Riemann zeta function.
Our other main result considers the behavior of the sample sums $$S_k = \sum_{i=1}^k X_i.$$ Since $E(X_1) = \infty$, the law of large numbers shows that $S_k/k$ tends to infinity with probability one. Inspired by work of Diamond-Vaaler, who studied the corresponding \emph{trimmed sum} for continued fraction expansions, and using a result of Mori~\cite{Mori}, we show in Theorem~\ref{theorem:trimmed} that by removing the maximum and normalizing by $k\log k$, we have, with probability $1$, as $k \rightarrow \infty$, $$\frac{S_k-M_k}{k\log k} \rightarrow 1.$$ Note that the asymptotic uniqueness result says that asymptotically, we are removing the \emph{unique} largest summand in $S_k$ by subtracting $M_k$. Let $$M_k = \max(X_1, \ldots, X_k)$$ be the sample maximum, and let $$\rho_k = P(\mbox{there exists a unique } 1 \le i \le k \mbox{ such that } X_i = M_k)$$ be the probability that this maximum is achieved exactly once. Our first main observation is the following explicit formula for $\rho_k$.
\begin{theorem}\label{theorem:rhok} For $k \geq 1$, \begin{equation}\label{eq:rhok} \rho_k = k\left (2^{k-1} + \sum_{j=2}^k T(k-1, k-j) \zeta(j) (-1)^{j+1}\right),\end{equation} where $\zeta(j) = \sum_{n=1}^{\infty} n^{-j}, j > 1$ is the Riemann zeta function and, for integers $l \geq j \geq 0$, $$T(l, j) = \sum_{i=0}^j \binom{l}{i}.$$
\end{theorem}
By \cite{AF, ESS}, we have $$\lim_{k \rightarrow \infty} \rho_k = 1.$$ Thus, as a corollary, we obtain the following limiting relation involving the coefficients $T(l, j)$ and values of the Riemann zeta function.
\begin{corollary}\label{cor:zeta} We have
\begin{equation}\label{eq:zeta}\lim_{k \rightarrow \infty} k\left (2^{k-1} + \sum_{j=2}^k T(k-1, k-j) \zeta(j) (-1)^{j+1}\right) = 1.\end{equation}
\end{corollary}
The coefficients $T(l, j)$, partial sums of binomial coefficients, are entries in what is known as \emph{Bernoulli's triangle}; see~\cite{OEIS-A008949}. Note that $T(l, 0) = 1$, $T(l, l) = 2^l$, and $T(l, l-1) = 2^{l} -1$. Let $$S_k = \sum_{i=1}^k X_i$$ be the sample sum. Since $E(X_1) = \infty$, we have, by the strong law of large numbers, $$\lim_{k \rightarrow \infty} \frac{S_k}{k} = +\infty.$$ However, if we subtract the sample maximum, we have the following almost sure result.
\begin{theorem}\label{theorem:trimmed} With probability $1$, $$\lim_{k \rightarrow \infty} \frac{S_k - M_k}{k \log k} = 1.$$
\end{theorem}
\noindent This is a consequence of a result of Mori~\cite{Mori}. The proof features a nice appearance of the Lambert $W$-function. There is a (more difficult) analogous result~\cite{DV} for continued fraction digits, where the limiting value is $\log 2$. There are substantial additional difficulties in that setting since continued fraction digits of a randomly chosen $x \in (0, 1)$ do not form an IID sequence.
\section{Uniqueness of Maxima}\label{sec:maxproof}
Suppose $Y_1, \ldots, Y_k, \ldots$ is a sequence of IID $\mathbb N$-valued random variables with $$P(Y_1 = n) = p_n.$$ Let $$\tau_n = \sum_{m \geq n} p_m = P(Y_1 \geq n).$$ Let $\rho_{k, m}$ denote the probability that the sample maximum $M_k = \max(Y_1, \ldots, Y_k)$ equals $m$ and is achieved exactly once. Then $$\rho_{k,m} = k p_m (1-\tau_m)^{k-1}.$$ Note that $\rho_k$, the probability that the sample maximum $M_k$ is achieved exactly once by $Y_1, \ldots, Y_k$, is then given by $$\rho_k = \sum_{m=1}^{\infty} \rho_{k,m}.$$ Also note that (although it is not a probability) we have $$\sum_{k=1}^{\infty} \rho_{k,m} = p_m \sum_{k=1}^{\infty} k(1-\tau_m)^{k-1} = \frac{p_m}{\tau_m^2}.$$ Specializing to Luroth sequences, we have $p_m = \frac{1}{m(m+1)}$ and $\tau_m = \frac{1}{m}$. Thus $$\rho_{k,m} = \frac{k}{m(m+1)} \left( 1 - \frac 1 m \right)^{k-1} = k \frac{(m-1)^{k-1}}{m^k(m+1)} .$$ Let $$Q_k(m) = \frac{1}{k} \rho_{k, m} = \frac{(m-1)^{k-1}}{m^k(m+1)}.$$ The next lemma shows how to expand $Q_k$ in partial fractions.
\begin{lemma}\label{lem:partial} We have \begin{equation}\label{eq:qk}Q_k(m) = 2^{k-1} \left( \frac 1 m - \frac{1}{m+1} \right) + \sum_{j=2}^k T(k-1, k-j) (-1)^{j+1} \frac{1}{m^j}.\end{equation}
\end{lemma}
\begin{proof} Note that $$T(k-1, 0) = 1, T(k-1, k-1) = 2^{k-1}, T(k-1, k-2) = 2^{k-1} - 1$$ and $$Q_{k+1}(m) = \left( 1 - \frac 1 m\right) Q_{k}(m).$$ Our proof proceeds by induction. For $k=1$, we have $$Q_1(m) = \rho_{1, m} = \frac{1}{m(m+1)}.$$ The right hand side of Equation (\ref{eq:qk}) for $k=1$ is $$2^{1-1} \left (\frac 1 m - \frac{1}{m+1} \right) = \frac{1}{m(m+1)},$$ since the index of the sum starts at $j=2$, making the sum empty. Thus the base case ($k=1$) is verified. We now assume that Equation (\ref{eq:qk}) is true for $k$, and we need to prove it for $k+1$. Let $$c(j, k) = (-1)^{j+1} T(k-1, k-j).$$ Note that
\begin{align}\label{eq:induction} Q_{k+1}(m) &= \left( 1 - \frac 1 m\right) Q_{k}(m) \nonumber
= \left( 1 - \frac 1 m\right) \left(2^{k-1} \left( \frac 1 m - \frac{1}{m+1} \right) + \sum_{j=2}^k c(j,k) \frac{1}{m^j}\right) \\ \nonumber
&= 2^{k-1}\frac{1}{m(m+1)} - \frac{2^{k-1}}{m}\left( \frac 1 m - \frac{1}{m+1}\right) + \sum_{j=2}^{k}\left( \frac{c(j,k) }{m^j} -\frac{c(j, k) }{m^{j+1}}\right)\\ \nonumber
&= 2^k \frac{1}{m(m+1)} - \frac{(2^{k-1} - c(2, k))}{m^2} + \sum_{j=3}^{k+1} \frac{\left(c(j, k)- c(j-1, k) \right)}{m^j} . \\
\end{align}
\noindent Now note that for $j \geq 2$, $$c(j, k) = c(j, k-1) - c(j-1, k-1)$$ and $$c(k+1, k) = 0, c(2, k) = 1-2^{k-1}, c(k,k) = (-1)^{k+1}, c(1, k) = 2^{k-1}.$$ Plugging these into Equation (\ref{eq:induction}), we have our result.
\end{proof}
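Lemma \ref{lem:partial} is an identity of rational functions and can be verified exactly in rational arithmetic; the sketch below (Python, names ours) also cross-checks the resulting formula for $\rho_k$ against direct summation over $m$, with $\zeta(j)$ approximated by truncation plus a tail correction:

```python
from fractions import Fraction
from math import comb

def T(l, j):
    """Bernoulli-triangle entry: partial sum of binomial coefficients."""
    return sum(comb(l, i) for i in range(j + 1))

def Q(k, m):
    # Q_k(m) = (m-1)^{k-1} / (m^k (m+1)), as an exact rational number
    return Fraction((m - 1) ** (k - 1), m ** k * (m + 1))

def Q_expanded(k, m):
    # Right-hand side of the partial-fraction expansion in the lemma
    total = 2 ** (k - 1) * (Fraction(1, m) - Fraction(1, m + 1))
    for j in range(2, k + 1):
        total += T(k - 1, k - j) * (-1) ** (j + 1) * Fraction(1, m ** j)
    return total

# The expansion is an exact identity:
for k in range(1, 12):
    for m in range(1, 25):
        assert Q(k, m) == Q_expanded(k, m)

# Cross-check the explicit formula for rho_k against direct summation over m.
def zeta(j, M=10 ** 5):
    # zeta(j) by truncation plus Euler-Maclaurin tail correction
    return sum(n ** -j for n in range(1, M + 1)) + M ** (1 - j) / (j - 1) + M ** -j / 2

def rho_direct(k, M=2 * 10 ** 5):
    return k * sum(((m - 1) / m) ** (k - 1) / (m * (m + 1)) for m in range(1, M + 1))

def rho_formula(k):
    return k * (2 ** (k - 1)
                + sum(T(k - 1, k - j) * zeta(j) * (-1) ** (j + 1) for j in range(2, k + 1)))

for k in (2, 3, 7):
    assert abs(rho_direct(k) - rho_formula(k)) < 1e-3
```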
\subsection{Proof of Theorem~\ref{theorem:rhok}} To prove Theorem~\ref{theorem:rhok}, we note that $$\sum_{m=1}^{\infty} \frac{1}{m(m+1)} = 1; \sum_{m=1}^{\infty} \frac{1}{m^j} = \zeta(j), j \geq 2.$$ Using Lemma~\ref{lem:partial}, and summing over $m$, we obtain Equation (\ref{eq:rhok}). To obtain Corollary~\ref{cor:zeta}, recall that it was shown in~\cite{AF, ESS} that $$\rho_k \rightarrow 1 \mbox{ if and only if } p_m/\tau_m \rightarrow 0.$$ In our setting, $p_m/\tau_m = \frac{1}{m+1}$, so the condition is satisfied, and we obtain Corollary~\ref{cor:zeta}. In Figure~\ref{fig:rhok}, we show numerical values of $\rho_k$ for $2 \le k \le 40$ (note that $\rho_1 = 1$).
\begin{figure}[h!] \includegraphics[width=0.9\textwidth]{rhokgraph.pdf}
\caption{$\rho_k$ for $k =2$ to $k=40$. \medskip}\label{fig:rhok}
\end{figure}
\section{Trimmed sums}\label{sec:trimproof}
In this section, we prove Theorem~\ref{theorem:trimmed}. We first note that using standard limit theorems in probability theory, we can show that $$\frac{S_k - M_k}{k \log k} \xrightarrow{d} 1.$$ The random variables $X_n$ are in the domain of attraction of a \emph{stable law} of order $1$. Using~\cite[Theorem 11.2.3]{AL}, we have $$\frac{S_k - k\log k}{k} \xrightarrow{d} C,$$ where $C$ is a Cauchy random variable. Thus, $$\frac{S_k}{k \log k} = \left(\frac{1}{\log k}\frac{(S_k - k \log k)}{k} + 1 \right) \xrightarrow{d} 1.$$
Next, note that for $c >0$, $$\lim_{k \rightarrow \infty} P\left(\frac{M_k}{k} < c\right) = \lim_{k \rightarrow \infty} \left( 1 - \frac{1}{ck}\right)^{k} = e^{-1/c}.$$ Therefore $$\frac{M_k}{k\log k} \xrightarrow{d} 0.$$
Putting these together, we obtain $$\frac{S_k - M_k}{k \log k} \xrightarrow{d} 1,$$ and therefore we also have this convergence in probability.
To replace convergence in probability with almost sure convergence, we use a result of Mori~\cite[Theorem 1]{Mori}, with, in his notation, $$r=1, A(x) = x \log x.$$ Mori studies the behavior of the sample sums with the first $r$ largest terms removed, normalized by $A(n)$, where $A$ is an absolutely continuous increasing function such that there is an $\alpha$ with $0< \alpha< 2$, for which $$A(x)x^{-\frac{1}{\alpha}} \mbox{ is increasing and } \sup \frac{A(2x)}{A(x)} < \infty.$$ Keeping our notation consistent with~\cite[Section 1]{Mori}, we write $A(x) = x \log x$, and note that this is a non-decreasing absolutely continuous function, with inverse function $B(x) = \frac{x}{W(x)}$, where $W(x)$ is the Lambert $W$ function, that is, the inverse of the function $we^w$. Putting $$\mathcal F(x) = P(X_1 > x) = \frac{1}{\lfloor x \rfloor + 1},$$ the tail of $$J_2 = \int_{0}^{\infty} \mathcal F^2(x)\, dB^2(x)$$ is $$\int_{1}^{\infty} \mathcal F^2(x)\, dB^2(x) = \sum_{n=2}^{\infty} \frac{1}{n^2} \left( \frac{n^2}{W(n)^2} - \frac{(n-1)^2}{W(n-1)^2}\right).$$ Mori's result shows that if $J_2 < \infty$, then almost surely $$\frac{S_k-M_k}{A(k)} - c_k \rightarrow 0,$$ where $$c_k = \frac{k}{A(k)} E(X_1| X_1 < A(k)).$$ In our setting, $$c_k \sim \frac{k}{k \log k} \log(k \log k) \rightarrow 1 \mbox{ as } k \rightarrow \infty.$$ So if we can show $J_2 < \infty$, we have completed our proof of Theorem~\ref{theorem:trimmed}. From~\cite{lambertw}, we see that for $x \gg 1$, $$W(x) = \log x - \log \log x + o(1).$$ Note that $$dB^2(x) = 2 B(x) B'(x)\, dx = \frac{2x}{W(x) (W(x) + 1)}\, dx,$$ since $B'(x) = \frac{1}{W(x)+1}$. Thus, using $\mathcal F(x) \le \frac 1 x$ and $\frac{d}{dx}\left(-\frac{1}{W(x)}\right) = \frac{1}{xW(x)(W(x)+1)}$, we have $$\int_1^{\infty} \mathcal F^2(x)\, dB^2(x) \le \int_{1}^{\infty} \frac{2}{x W(x) (W(x) + 1)}\, dx = \frac{2}{W(1)}.$$ In particular, $J_2 < \infty$. In Figure~\ref{fig:lambert}, we show the partial sums for $J_2$.
\begin{figure}[h!]\includegraphics[width=0.9\textwidth]{LambertMoriSum.pdf}\caption{Partial sums of $\sum_{n=2}^{N} \frac{1}{n^2} \left( \frac{n^2}{W(n)^2} - \frac{(n-1)^2}{W(n-1)^2}\right)$, $N=3, \ldots, 1000$. \medskip}\label{fig:lambert}
\end{figure}
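The numerics behind Figure~\ref{fig:lambert} are easy to reproduce. The Python sketch below (ours; any scientific library's Lambert $W$ would do, but we include a small Newton iteration to stay self-contained) computes the partial sums of the series for $J_2$ and checks them against the upper bound $2/W(1) \approx 3.53$ obtained above.

```python
import math

def lambert_w(x, tol=1e-14):
    """Principal branch W(x) for x > 0, by Newton's method on w e^w = x."""
    w = math.log(1.0 + x)  # reasonable starting point for x > 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def B(n):
    # inverse of A(x) = x log x
    return n / lambert_w(n)

def j2_partial(N):
    return sum((B(n) ** 2 - B(n - 1) ** 2) / n ** 2 for n in range(2, N + 1))

bound = 2.0 / lambert_w(1.0)       # the bound 2/W(1) from the proof
s100, s1000 = j2_partial(100), j2_partial(1000)
```

The partial sums are increasing (the summands are positive because $n/W(n)$ is increasing) and stay strictly below $2/W(1)$, as the integral comparison guarantees.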
\section{Continued Fractions}\label{sec:Gauss} \noindent Similar questions can be asked for the \emph{Gauss map} $G: (0, 1) \rightarrow (0, 1)$, $$G(x) = \left\{ \frac 1 x \right\},$$ and the associated continued fraction expansion of a number $x \in [0, 1)$. Recall that for $x \in (0, 1)$, we can set, for $n \geq 1$, $$a_n = a_n(x) = \left \lfloor \frac{1}{G^{n-1}(x)} \right \rfloor,$$ and we can write $$x = \frac{1}{a_1 + \frac{1}{a_2 + \frac{1}{a_3 + \ldots}}}.$$ The probability measure $\mu_G$ given by $$d\mu_G(x) = \frac{1}{\log 2} \frac{dx}{1+x}$$ is an absolutely continuous ergodic invariant measure for $G$, and as noted after Theorem~\ref{theorem:trimmed}, Diamond and Vaaler~\cite{DV} proved a similar result for the digits $a_n$ of an $x$ chosen at random according to $\mu_G$. This is more difficult than our setting: although the digits form a stationary sequence, they are no longer IID. Regarding the uniqueness of sample maxima, if we define $$\rho_k = \mu_G\left(\left\{x \in (0, 1): \mbox{ there exists a unique } 1 \le i \le k \mbox{ such that } a_i (x)= \max_{1 \le j \le k} a_j(x) \right\}\right),$$ we conjecture that $$\lim_{k \rightarrow \infty} \rho_k = 1.$$ We believe this conjecture because, although the sequence $\{a_i\}$ is not IID, it is rapidly mixing, and the distribution of $a_i$ is heavy-tailed. The Gauss map is associated in a natural way to geodesic flow on the modular surface, and we believe similar results should also hold for other continued fraction type expansions associated to geodesic flow on other hyperbolic surfaces, for example, the Rosen continued fractions.
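As weak numerical evidence for the conjecture, one can estimate $\rho_k$ by Monte Carlo. The Python sketch below (ours) draws $x = p/2^{64}$ uniformly, which for a rough illustration is comparable to sampling from $\mu_G$ since the two densities differ by a bounded factor, and computes continued fraction digits exactly by the Euclidean algorithm; a random 64-bit rational typically has well over $30$ digits, comfortably more than the $k=20$ we use.

```python
import random

def cf_digits(p, q, k):
    """First (up to) k continued fraction digits of p/q in (0,1), via Euclid."""
    digits = []
    while p and len(digits) < k:
        digits.append(q // p)      # a = floor(q/p); then x -> (q mod p)/p
        p, q = q % p, p
    return digits

def estimate_rho(k, trials, rng):
    unique = 0
    for _ in range(trials):
        p = rng.randrange(1, 2 ** 64)
        d = cf_digits(p, 2 ** 64, k)
        unique += d.count(max(d)) == 1
    return unique / trials

rng = random.Random(7)
rho_hat = estimate_rho(20, 3000, rng)
```

The estimate comes out close to $1$, consistent with the heavy-tailed digit distribution making ties at the maximum rare.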
\bigskip
\noindent {\bf Acknowledgements.} We thank Bruce Berndt, Alex Kontorovich, and Jeffrey Lagarias for useful discussions. Lagarias pointed out to us some interesting asymptotics involving binomial coefficients and zeta functions in~\cite[Theorem 2]{BL}. We thank Avanti Athreya, the anonymous referee, and the Integers Editorial Office for suggestions that greatly improved the exposition of the paper. The first author thanks the National Science Foundation for its support via grant NSF DMS 2003528. The first author is a faculty member at the University of Washington, which is on the lands of the Coast Salish people, and touches the shared waters of all tribes and bands within the Suquamish, Tulalip and Muckleshoot nations.
| {
"timestamp": "2021-10-05T02:24:56",
"yymm": "2110",
"arxiv_id": "2110.01132",
"language": "en",
"url": "https://arxiv.org/abs/2110.01132",
"abstract": "We describe how certain properties of the extrema of the digits of Luroth expansions lead to a probabilistic proof of a limiting relation involving the Riemann zeta function and the Bernoulli triangles. We also discuss trimmed sums of Luroth digits. Our goal is to show how direct computations in this case lead to explicit formulas and some interesting discussions of special functions.",
"subjects": "Probability (math.PR); Dynamical Systems (math.DS); Number Theory (math.NT)",
"title": "Extrema of Luroth Digits and a zeta function limit relation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9828232884721166,
"lm_q2_score": 0.8615382076534742,
"lm_q1q2_score": 0.8467398143903608
} |
https://arxiv.org/abs/1907.13292 | The character graph of a finite group is perfect | For a finite group $G$, let $\Delta(G)$ denote the character graph built on the set of degrees of the irreducible complex characters of $G$. In graph theory, a perfect graph is a graph $\Gamma$ in which the chromatic number of every induced subgraph $\Delta$ of $\Gamma$ equals the clique number of $\Delta$. In this paper, we show that the character graph $\Delta(G)$ of a finite group $G$ is always a perfect graph. We also prove that the chromatic number of the complement of $\Delta(G)$ is at most three. | \section{Introduction}
\noindent Let $G$ be a finite group. Also let ${\rm cd}(G)$ be the set of all character degrees of $G$, that is,
${\rm cd}(G)=\{\chi(1)|\;\chi \in {\rm Irr}(G)\} $, where ${\rm Irr}(G)$ is the set of all complex irreducible characters of $G$. The set of prime divisors of character degrees of $G$ is denoted by $\rho(G)$. It is well known that the
character degree set ${\rm cd}(G)$ may be used to provide information on the structure of the group $G$. For example, Ito-Michler's Theorem \cite{[C]} states that if a prime $p$ divides no character degree of a finite group $G$, then $G$ has a
normal abelian Sylow $p$-subgroup. Another result due to Thompson \cite{[D]} says that if a prime $p$ divides
every non-linear character degree of a group $G$, then $G$ has a normal $p$-complement.
A useful way to study the character degree set of a finite group $G$ is to associate a graph to ${\rm cd}(G)$.
One of these graphs is the character graph $\Delta(G)$ of $G$ \cite{[I]}. Its vertex set is $\rho(G)$ and two vertices $p$ and $q$ are joined by an edge if the product $pq$ divides some character degree of $G$. We refer the readers to a survey by Lewis \cite{[M]} for results concerning this graph and related topics.
Let $\Gamma=(V(\Gamma),E(\Gamma))$ be a finite simple graph with the vertex set $V(\Gamma)$ and edge set $E(\Gamma)$. A clique of $ \Gamma $ is a set of mutually adjacent vertices, and the maximum size of a clique of $ \Gamma $, the clique number of $ \Gamma $, is denoted by $ \omega(\Gamma) $. The minimum number of colors needed to color the vertices of $ \Gamma $ so that any two adjacent vertices of $ \Gamma $ receive different colors is called the chromatic number of $ \Gamma $ and is denoted by $ \chi(\Gamma) $. Clearly $\omega(\Gamma)\leqslant \chi(\Gamma)$ for any graph $\Gamma$. The graph $\Gamma$ is perfect if $\omega(\Delta)=\chi(\Delta)$ for every induced subgraph $\Delta$ of $\Gamma$.
The theory of perfect graphs relates the concept of graph colorings to the concept of cliques. Aside from having an interesting structure, perfect graphs are considered important for three reasons. First, several common classes of graphs are known to always be perfect. For instance, bipartite graphs, chordal graphs and comparability graphs are perfect. Second, a number of important algorithms only work on perfect graphs. Finally, perfect graphs can be used in a wide variety of applications, ranging from scheduling to order theory to communication theory.
One of the turning points for the investigation of the character degree graph is the "Three-Vertex Theorem" by Palfy \cite{palfy}: the complement of $\Delta(G)$ does not contain any triangle whenever $G$ is a finite solvable group. In a recent paper \cite{1} this was extended by showing that, under the same solvability assumption, the complement of $\Delta(G)$ does not contain any cycle of odd length, which is equivalent to saying that the complement of $\Delta(G)$ is a bipartite graph. Thus, as the complement of a perfect graph is perfect \cite{lovasz}, the character graph $\Delta(G)$ of a solvable group $G$ is perfect. This motivates an interesting problem: is the character graph of an arbitrary finite group perfect? In this paper, we solve this problem.\\
\noindent \textbf{Theorem A.} \textit{Let $G$ be a finite group. Then $\Delta(G)$ is a perfect graph.
}\\
Theorem A will be used to determine the chromatic number of the complement of $\Delta(G)$.\\
\noindent \textbf{Corollary B.} \textit{ Suppose $G$ is a finite group. Then the chromatic number of the complement of $\Delta(G)$ is at most three.
}
\section{Preliminaries}
\noindent In this paper, all groups are assumed to be finite and all
graphs are simple and finite. For a finite group $G$, the set of prime divisors of $|G|$ is denoted by $\pi(G)$. Also note that for an integer $n\geqslant 1$, the set of prime divisors of $n$ is denoted by $\pi(n)$. We begin with Corollary 11.29 of \cite{[isa]}.
\begin{lemma}\label{fraction}
Let $ N \lhd G$ and $\chi \in \rm{Irr}(G)$. Let $\varphi \in \rm{Irr}(N)$ be a constituent of $\chi_N$. Then $\chi(1)/\varphi(1)$ divides $[G:N]$.
\end{lemma}
Let $\Gamma$ be a graph with vertex set $V(\Gamma)$ and edge set
$E(\Gamma)$. A cycle on $n$ vertices $v_1, \ldots,
v_n$, $n \geq 3$, is a graph whose vertices can be arranged in a
cyclic sequence in such a way that two vertices are adjacent if
they are consecutive in the sequence, and are non-adjacent
otherwise. A cycle with $n$ vertices is said to be of length $n$
and is denoted by $C_n$, i.e., $C_n : v_1, \ldots, v_n, v_1$.
The join $\Gamma \ast \Delta$ of graphs $\Gamma$
and $\Delta$ is the graph $\Gamma \cup \Delta$ together with all
edges joining $V (\Gamma)$ and $V (\Delta)$. An independent set is a set of vertices of a graph
$\Gamma$ such that no two of them are adjacent. A maximum
independent set is an independent set of largest possible size.
This size is called the independence number of $\Gamma$ and is
denoted by $\alpha(\Gamma)$. Finally we should mention that the complement of $\Gamma$ and the induced subgraph of $\Gamma$ on $X\subseteq V(\Gamma)$
are denoted by $\Gamma^c$ and $\Gamma[X]$, respectively. For more details, we
refer the reader to basic textbooks on the subject, for instance
\cite{BM}. Now we present some properties of perfect graphs.
\begin{lemma}\label{comp}\cite{lovasz}
A graph $\Gamma$ is perfect if and only if the complement of $\Gamma$ is perfect.
\end{lemma}
\begin{lemma}\label{hole}\cite{[cr]}
A graph $\Gamma$ is perfect if and only if it has no induced subgraph isomorphic either to a cycle of odd order at least 5, or to the complement of such a cycle.
\end{lemma}
We now state some relevant results on character graphs
needed in the next section.
\begin{lemma}\label{pal}
Let $G$ be a group and let $\pi \subseteq \rho(G)$.\\
\textbf{a)} (Palfy's condition) \cite {palfy} If $G$ is solvable and $|\pi|\geqslant 3$, then there exist two distinct primes $u, v$ in $\pi$ and $\chi \in \rm{Irr}(G)$ such that $uv | \chi(1)$.\\
\textbf{b)} (Moreto-Tiep's condition) \cite{moreto} If $|\pi|\geqslant 4$, then there exists $\chi \in \rm{Irr}(G)$ such that $\chi (1)$ is divisible by two distinct primes in $\pi$.
\end{lemma}
\begin{lemma}\label{chpsl}\cite{[white]}
Let $G\cong \rm{PSL}_2(q)$, where $q\geqslant 4$ is a power of a prime $p$.\\
\textbf{a)}
If $q$ is even, then $\Delta(G)$ has three connected components, $\{2\}$, $\pi(q-1)$ and $\pi(q+1)$, and each component is a complete graph.\\
\textbf{b)}
If $q>5$ is odd, then $\Delta(G)$ has two connected components, $\{p\}$ and $\pi((q-1)(q+1))$.\\
i)
The connected component $\pi((q-1)(q+1))$ is a complete graph if and only if $q-1$ or $q+1$ is a power of $2$.\\
ii)
If neither of $q-1$ or $q+1$ is a power of $2$, then $\pi((q-1)(q+1))$ can be partitioned as $\{2\}\cup M \cup P$, where $M=\pi (q-1)-\{2\}$ and $P=\pi(q+1)-\{2\}$ are both non-empty sets. The subgraph of $\Delta(G)$ corresponding to each of the subsets $M$,$P$ is complete, all primes are adjacent to $2$, and no prime in $M$ is adjacent to any prime in $P$.
\end{lemma}
\begin{lemma}\label{cycle} \cite{AC}
Let $G$ be a finite group, and let $\pi$ be a subset of the vertex set of $\Delta(G)$ such that $|\pi|$ is an odd number larger than 1. Then $\pi$ is the set of vertices of a cycle in $\Delta(G)^c$ if and only if $O^{\pi^\prime}(G)=S\times A$, where $A$ is abelian, $S\cong \rm{SL}_2(u^\alpha)$ or $S\cong \rm{PSL}_2(u^\alpha)$ for a prime $u\in \pi$ and a positive integer $\alpha$, and the primes in $\pi - \{u\}$ are alternately odd divisors of $u^\alpha+1$ and $u^\alpha-1$.
\end{lemma}
\begin{lemma}{\label{HIHLEW}}\cite{HIHL}
If $G$ and $H$ are two non-abelian groups that satisfy $\rho (G)
~\cap~ \rho (H) = F$ with $|F| = n$, then $\Delta (G\times H) = K_n \ast \Delta (G) [\rho (G) - F] \ast \Delta (H) [\rho (H) - F]$, where $K_n$ is a complete graph with vertex set $F$.
\end{lemma}
\section{Proof of main results}
\noindent In this section, we wish to prove our main results.\\
\noindent\textit{\bf Proof of Theorem A.}
On the contrary, we assume that $\Delta(G)$ is not perfect. Then by Lemma \ref{hole}, there exists a subset $\pi\subseteq \rho (G)$ such that for some integer $n\geqslant 2$, $|\pi|=2n+1$ and $\Delta(G)^c[\pi]$ or $\Delta(G)[\pi]$ is a cycle. Now one of the following cases occurs:\\
Case 1. $\Delta(G)^c[\pi]$ is a cycle. Then there exist primes $p_0, p_1, \dots , p_{2n} \in \rho(G)$ such that $\Delta(G)^c[\pi]$ is the cycle $C:p_0, p_1, \dots , p_{2n},p_0$. Using Lemma \ref{cycle},
$N:=O^{\pi^\prime}(G)=S\times A$, where $A$ is abelian, $S\cong \rm{SL}_2(p^m)$ or $S\cong \rm{PSL}_2(p^m)$ for a prime $p\in \pi$ and a positive integer $m$, and the primes in $\pi - \{p\}$ are alternately odd divisors of $p^m+1$ and $p^m-1$. Without loss of generality, we can assume that $p=p_0$. Since $\Delta(G)^c[\pi]$ is a cycle of length at least $5$, $p_1$ is adjacent to neither $p_3$ nor $p_4$ in $\Delta(G)^c$. Also for some $\epsilon \in \{\pm 1\}$, $p_3 \in \pi(p^m-\epsilon)-\{2\}$ and $p_4 \in \pi(p^m+\epsilon)-\{2\}$. Note that $p_1$ is an element of either $\pi(p^m-\epsilon)-\{2\}$ or $\pi(p^m+\epsilon)-\{2\}$. Without loss of generality, we can assume that $p_1 \in\pi(p^m-\epsilon)-\{2\}$. Since $p_1$ and $p_4$ are adjacent vertices in $\Delta(G)$, for some $\chi \in \rm{Irr}(G)$, $p_1p_4|\chi(1)$. Now let $\varphi \in \rm{Irr}(N)$ be a constituent of $\chi_N$. Then by Lemma \ref{fraction}, $\chi(1)/\varphi(1)$ divides $[G:N]$. Therefore, as $G/N$ is a $\pi^\prime$-group, $p_1p_4|\varphi(1)$. This is a contradiction, since by Lemma \ref{chpsl}, $p_1$ and $p_4$ are non-adjacent vertices in $\Delta(N)$.\\
Case 2. $\Delta(G)[\pi]$ is a cycle. If $\Delta(G)[\pi]\cong C_5$, then $\Delta(G)^c[\pi]\cong C_5$, contradicting Case 1. Thus $|\pi|\geqslant 7$. Hence there exist distinct primes $p_1, p_2, p_3,q_1, q_2, q_3\in \pi$ such that the induced subgraphs of $\Delta(G)^c$ on the sets $\pi_1:=\{p_1, p_2, p_3\}$ and $\pi_2:=\{q_1, q_2, q_3\}$ are cycles of length $3$. Hence by Lemma \ref{cycle}, for each $i=1,2$, $N_i:=O^{\pi_i^\prime}(G)=S_i\times A_i$, where $A_i$ is abelian, $S_i\cong \rm{SL}_2(u_i^{\alpha_i})$ or $S_i\cong \rm{PSL}_2(u_i^{\alpha_i})$ for a prime $u_i\in \pi_i$ and a positive integer $\alpha_i$, and the primes in $\pi_i - \{u_i\}$ are alternately odd divisors of $u_i^{\alpha_i}+1$ and $u_i^{\alpha_i}-1$. Let $N:=S_1S_2$. It is easy to see that $N/Z(N)\cong \rm{PSL}_2(u_1^{\alpha_1})\times\rm{PSL}_2(u_2^{\alpha_2})$. Therefore by Lemmas \ref{chpsl} and \ref{HIHLEW}, $\Delta(N/Z(N))[\pi_1 \cup \pi_2]\subseteq \Delta(G)$ is a complete bipartite graph with parts $\pi_1$ and $\pi_2$, which is a contradiction, as $\Delta(G)[\pi]$ is an induced cycle of $\Delta(G)$.\qed\\
\noindent\textit{\bf Proof of Corollary B.}
Using Theorem A, $\Delta(G)$ is a perfect graph. Thus by Lemma \ref{comp}, $\Delta(G)^c$ is too. Hence $\chi(\Delta(G)^c)=\omega(\Delta(G)^c)$. It is clear that $\omega(\Delta(G)^c)=\alpha(\Delta(G))$. By Lemma \ref{pal}, $\alpha(\Delta(G))\leqslant 3$. Hence $\chi(\Delta(G)^c)\leqslant 3$ and the proof is completed.\qed
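Both statements can be checked by brute force on small examples. The following Python sketch (ours) verifies $\omega = \chi$ on every induced subgraph of $\Delta(\rm{PSL}_2(11))$, whose structure is given by Lemma \ref{chpsl} (vertices $\{11,2,3,5\}$ with $M=\{5\}$ and $P=\{3\}$ both adjacent to $2$, and $11$ isolated), and that the complement has chromatic number $3$; as a sanity check, it also confirms that $C_5$ fails the test, as it must.

```python
from itertools import combinations, product

def clique_number(vertices, edges):
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    best = 0
    for r in range(1, len(vertices) + 1):
        for sub in combinations(vertices, r):
            if all(b in adj[a] for a, b in combinations(sub, 2)):
                best = r          # subsets are scanned in increasing size r
    return best

def chromatic_number(vertices, edges):
    vs = list(vertices)
    for c in range(1, len(vs) + 1):
        for coloring in product(range(c), repeat=len(vs)):
            col = dict(zip(vs, coloring))
            if all(col[a] != col[b] for a, b in edges):
                return c
    return len(vs)

def is_perfect(vertices, edges):
    for r in range(1, len(vertices) + 1):
        for sub in combinations(vertices, r):
            sub_e = [e for e in edges if e[0] in sub and e[1] in sub]
            if clique_number(sub, sub_e) != chromatic_number(sub, sub_e):
                return False
    return True

V = [11, 2, 3, 5]                  # Delta(PSL_2(11)), per Lemma chpsl
E = [(2, 3), (2, 5)]
comp_E = [(a, b) for a, b in combinations(V, 2)
          if (a, b) not in E and (b, a) not in E]
C5_V = [1, 2, 3, 4, 5]
C5_E = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
```

The exponential-time brute force is only meant for such tiny vertex sets; for character graphs, $|\rho(G)|$ is small in all the examples considered here.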
\section*{Acknowledgements}
This research was supported in part
by a grant from School of Mathematics, Institute for Research in Fundamental Sciences (IPM).
| {
"timestamp": "2019-08-01T02:05:59",
"yymm": "1907",
"arxiv_id": "1907.13292",
"language": "en",
"url": "https://arxiv.org/abs/1907.13292",
"abstract": "For a finite group $G$, let $\\Delta(G)$ denote the character graph built on the set of degrees of the irreducible complex characters of $G$. In graph theory, a perfect graph is a graph $\\Gamma$ in which the chromatic number of every induced subgraph $\\Delta$ of $\\Gamma$ equals the clique number of $\\Delta$. In this paper, we show that the character graph $\\Delta(G)$ of a finite group $G$ is always a perfect graph. We also prove that the chromatic number of the complement of $\\Delta(G)$ is at most three.",
"subjects": "Group Theory (math.GR)",
"title": "The character graph of a finite group is perfect",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9893474910448,
"lm_q2_score": 0.855851143290548,
"lm_q1q2_score": 0.8467341813223274
} |
https://arxiv.org/abs/0808.1065 | Infinite log-concavity: developments and conjectures | Given a sequence (a_k) = a_0, a_1, a_2,... of real numbers, define a new sequence L(a_k) = (b_k) where b_k = a_k^2 - a_{k-1} a_{k+1}. So (a_k) is log-concave if and only if (b_k) is a nonnegative sequence. Call (a_k) "infinitely log-concave" if L^i(a_k) is nonnegative for all i >= 1. Boros and Moll conjectured that the rows of Pascal's triangle are infinitely log-concave. Using a computer and a stronger version of log-concavity, we prove their conjecture for the nth row for all n <= 1450. We also use our methods to give a simple proof of a recent result of Uminsky and Yeats about regions of infinite log-concavity. We investigate related questions about the columns of Pascal's triangle, q-analogues, symmetric functions, real-rooted polynomials, and Toeplitz matrices. In addition, we offer several conjectures. | \section{Introduction}
Let
$$
(a_k)=(a_k)_{k\ge0}=a_0,a_1,a_2,\ldots
$$
be a sequence of real numbers. It will
be convenient to extend the sequence to negative indices by letting
$a_k=0$ for $k<0$. Also, if $(a_k)=a_0,a_1,\ldots,a_n$ is a finite
sequence then we let $a_k=0$ for $k>n$.
Define the {\it ${\cal L}$-operator\/} on sequences to be ${\cal L}(a_k)=(b_k)$
where $b_k=a_k^2-a_{k-1}a_{k+1}$.
Call a sequence {\it $i$-fold log-concave\/} if ${\cal L}^i(a_k)$
is a nonnegative sequence. So log-concavity in the ordinary sense is
$1$-fold log-concavity.
Log-concave sequences arise in many areas of algebra, combinatorics,
and geometry. See the survey articles of Stanley~\cite{sta:lus} and
Brenti~\cite{bre:lus} for more information.
Boros and Moll~\cite[page 157]{bm:ii} defined $(a_k)$ to be
{\it infinitely log-concave\/} if it is $i$-fold log-concave for all
$i\ge1$. They introduced this definition in conjunction with the
study of a specialization of the Jacobi polynomials whose coefficient
sequence they conjectured to be infinitely log-concave.
Kauers and Paule~\cite{kp:cpm} used a computer algebra package to prove
this conjecture for ordinary log-concavity.
Since the coefficients of these polynomials can be expressed in terms of
binomial coefficients, Boros and Moll also made the statement:
\begin{center}
``Prove that the binomial coefficients are $\infty$-logconcave.''
\end{center}
We will take this to be a conjecture that the rows of Pascal's triangle are infinitely log-concave, although we will later discuss the columns and other lines.
When
given a function of more than one variable, we will
subscript the ${\cal L}$-operator by the parameter which is varying to form
the sequence.
So ${\cal L}_k\binom{n}{k}$ would refer to the operator acting on the
sequence $\binom{n}{k}_{k\ge0}$.
Note that we drop the sequence parentheses
for sequences of binomial coefficients to improve readability.
We now restate
the Boros-Moll conjecture formally.
\begin{conj}
\label{bm:con}
The sequence $\binom{n}{k}_{k\ge0}$ is infinitely log-concave for all
$n\ge0$.
\end{conj}
In the next section, we use a strengthened version of log-concavity
and computer calculations to verify Conjecture~\ref{bm:con} for all
$n\le 1450$. Uminsky and Yeats~\cite{uy:uri} set up a correspondence
between certain symmetric
sequences and points in ${\mathbb R}^m$. They then described an infinite region
${\cal R}\subset{\mathbb R}^m$ bounded by hypersurfaces and such that
each sequence corresponding to a point of ${\cal R}$
is infinitely log-concave. In Section~\ref{ril}, we show how
our methods can be used to give a simple derivation of one of their
main theorems. We investigate infinite log-concavity of the columns
and other lines of Pascal's
triangle in Section~\ref{cpt}.
Section~\ref{gps} is devoted to two $q$-analogues of the binomial
coefficients. For the Gaussian polynomials, we show that certain
analogues of some infinite log-concavity conjectures
are false while others appear to be true. In contrast, our second
$q$-analogue seems to retain all the log-concavity properties of the
binomial coefficients. In Section~\ref{sf}, after showing why the
sequence $(h_k)_{k\ge0}$ of complete homogeneous symmetric functions is an
appropriate analogue of sequences of binomial coefficients, we explore
its log-concavity properties. We end with a section of related
results and questions about real-rooted polynomials and Toeplitz matrices.
While one purpose of this article is to present our
results, we have written it with two more targets in mind.
The first is to convince our audience that infinite log-concavity is
a fundamental concept. We hope that its definition as a natural
extension of traditional log-concavity helps to make this case.
Our second aspiration is to attract others to work on the subject; to
that end, we have presented several open problems. These
conjectures each represent fundamental questions in the area, so even
solutions of special cases may be interesting.
\section{Rows of Pascal's triangle}
\label{rpt}
One of the difficulties with proving the Boros-Moll conjecture is that
log-concavity is not preserved by the ${\cal L}$-operator. For example,
the sequence
$4,5,4$ is log-concave but ${\cal L}(4,5,4)=16,9,16$ is not. So we
will seek a condition stronger than log-concavity which is preserved
by ${\cal L}$. Given $r\in{\mathbb R}$, we say that a sequence $(a_k)$ is
{\it $r$-factor log-concave\/} if
\begin{equation}
\label{r}
a_k^2\ge r a_{k-1}a_{k+1}
\end{equation}
for all $k$. Clearly this implies log-concavity if $r\ge1$.
We seek an $r>1$ such that $(a_k)$ being $r$-factor log-concave implies
that $(b_k)={\cal L}(a_k)$ is as well. Assume the original sequence is
nonnegative. Then expanding $r b_{k-1} b_{k+1}\le b_k^2$ in terms of
the $a_k$ and rearranging the summands, we see that this is equivalent
to proving
$$
(r-1) a_{k-1}^2 a_{k+1}^2 + 2 a_{k-1} a_k^2 a_{k+1}
\le a_k^4 + r a_{k-2} a_k (a_{k+1}^2-a_k a_{k+2} ) + r a_{k-1}^2 a_k a_{k+2}.
$$
By our assumptions, the two expressions with factors of $r$ on the right are
nonnegative, so it suffices to prove the inequality obtained when
these are dropped. Applying~\ree{r} to the left-hand side gives
$$
(r-1) a_{k-1}^2 a_{k+1}^2 + 2 a_{k-1} a_k^2 a_{k+1}
\le \frac{r-1}{r^2} a_k^4 + \frac{2}{r} a_k^4.
$$
So we will be done if
$$
\frac{r-1}{r^2}+ \frac{2}{r} =1.
$$
Finding the root $r_0>1$ of the corresponding quadratic equation
finishes the proof of the first assertion of the following lemma, while the second assertion follows easily from the first.
\begin{lem}
\label{rfactor:le}
Let
$(a_k)$ be a nonnegative sequence and let $r_0=(3+\sqrt{5})/2$.
Then $(a_k)$ being $r_0$-factor log-concave
implies that ${\cal L}(a_k)$ is too. So in this case $(a_k)$ is infinitely
log-concave.\hfill \qed
\end{lem}
Now to prove that any row of Pascal's triangle is infinitely
log-concave, one merely lets a computer find ${\cal L}_k^i\binom{n}{k}$ for
$i$ up to some bound $I$.
If these sequences are all nonnegative and
${\cal L}_k^I\binom{n}{k}$ is $r_0$-factor log-concave, then the
previous lemma shows that this row is infinitely log-concave. Using
this technique, we have obtained the following theorem.
\begin{thm}
\label{row:th}
The sequence $\binom{n}{k}_{k\ge0}$ is infinitely log-concave for
all $n\le 1450$.\hfill \qed
\eth
We note that the necessary value of $I$ increases slowly with increasing $n$.
As an example, when $n=100$, our technique works with $I=5$, while for
$n=1000$, we need $I=8$.
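The certification procedure just described is a few lines of exact integer arithmetic in Python (our sketch of the method). Note that $a_k^2 \ge r_0 a_{k-1}a_{k+1}$ with $r_0 = (3+\sqrt 5)/2$ can be tested without floating point: for nonnegative entries it is equivalent to $t := 2a_k^2 - 3a_{k-1}a_{k+1} \ge 0$ together with $t^2 \ge 5(a_{k-1}a_{k+1})^2$.

```python
from math import comb

def L(a):
    ext = [0] + list(a) + [0]
    return [ext[i] ** 2 - ext[i - 1] * ext[i + 1]
            for i in range(1, len(ext) - 1)]

def is_r0_factor(a):
    # exact test of a_k^2 >= ((3 + sqrt(5))/2) * a_{k-1} a_{k+1}
    for k in range(1, len(a) - 1):
        p = a[k - 1] * a[k + 1]
        t = 2 * a[k] ** 2 - 3 * p
        if t < 0 or t * t < 5 * p * p:
            return False
    return True

def certify(a, max_iterations=10):
    """Smallest i with L^i(a) r0-factor log-concave (checking that all
    intermediate sequences are nonnegative), or None if the cap is hit."""
    for i in range(max_iterations + 1):
        if any(x < 0 for x in a):
            return None
        if is_r0_factor(a):
            return i
        a = L(a)
    return None

row100 = [comb(100, k) for k in range(101)]
i_needed = certify(row100)
```

By Lemma~\ref{rfactor:le}, any returned $i$ certifies infinite log-concavity. Python's arbitrary-precision integers make this painless even though the entries of ${\cal L}^5$ applied to the row $n=100$ have on the order of a thousand digits.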
Of course, the method developed in this section can be applied to any
sequence such that ${\cal L}^i(a_k)$ is $r_0$-factor log-concave for some
$i$. In particular, it is interesting to try it on the original
sequence which motivated Boros and Moll~\cite{bm:ii} to define infinite
log-concavity. They were studying the polynomial
\begin{equation}
\label{Pm}
P_m(x) = \sum_{\ell=0}^m d_\ell(m) x^\ell
\end{equation}
where
$$
d_\ell(m) = \sum_{j=\ell}^m 2^{j-2m}
\binom{2m-2j}{m-j}\binom{m+j}{m}\binom{j}{\ell}.
$$
Kauers~[private communication] has used our method to verify
infinite log-concavity of the sequence $(d_\ell(m))_{\ell\ge0}$ for
$m\le 129$. For such values of $m$, ${\cal L}^5_\ell$ applied to the sequence is
$r_0$-factor log-concave.
\section{A region of infinite log-concavity}
\label{ril}
Uminsky and Yeats~\cite{uy:uri} took a different approach to the
Boros-Moll Conjecture as described in the Introduction. Since they
were motivated by the rows of Pascal's triangle, they only considered
real sequences $a_0,a_1,\ldots,a_n$ which are symmetric (in that
$a_k=a_{n-k}$ for all $k$) and satisfy $a_0=a_n=1$. Each such sequence
corresponds to a point $(a_1,\ldots,a_m)\in{\mathbb R}^m$ where $m=\fl{n/2}$.
Their region, ${\cal R}$, whose points all correspond to infinitely log-concave
sequences, is bounded by $m$ parametrically defined hypersurfaces. The
parameters are $x$ and $d_1,d_2,\ldots,d_m$ and it will be convenient
to have the notation
$$
s_k=\sum_{i=1}^k d_i.
$$
We will also need $r_1=(1+\sqrt{5})/2$. Note that $r_1^2=r_0$.
The $k$th hypersurface, $1\le k< m$, is defined as
$$
\begin{array}{l}
{\cal H}_k=
\{(x^{s_1},\ldots,x^{s_{k-1}},r_1x^{s_k},x^{s_{k+1}+d_k-d_{k+1}},\ldots,x^{s_m+d_k-d_{k+1}}):
\\[5pt]
\hs{100pt} x\ge1, \quad 1=d_1>\cdots> d_k>d_{k+2}>\cdots>d_m>0\},
\end{array}
$$
while
$$
{\cal H}_m=
\{(x^{s_1},\ldots,x^{s_{m-1}},cx^{s_{m-1}}):\
x\ge1, \quad 1=d_1>\cdots > d_{m-1}>0\},
$$
where
$$
c=\case{r_1}{if $n=2m$,}{2}{if $n=2m+1$.}
$$
Let us say that the \emph{correct side} of ${\cal H}_k$ for $1\le k\le m$ consists of those points in ${\mathbb R}^m$ that can be obtained from a point on ${\cal H}_k$ by increasing the $k$th coordinate.
Then let ${\cal R}$ be the region of all points in ${\mathbb R}^m$ having
increasing coordinates and lying on the correct side of ${\cal H}_k$ for all $k$.
We will show how our method of the previous section can be used to give
a simple proof of one of Uminsky and Yeats' main theorems. But first we
need a modified version of Lemma~\ref{rfactor:le} to take care of the
case when $n=2m+1$.
\begin{lem}
\label{rfactor:mod}
Let $a_0,a_1,\ldots, a_{2m+1}$ be a symmetric, nonnegative sequence such that
\begin{enumerate}
\item[(i)] $a_k^2 \ge r_0 a_{k-1} a_{k+1}$ for $ k< m$, and
\item[(ii)] $a_m\ge 2 a_{m-1}$.
\end{enumerate}
Then ${\cal L}(a_k)$ has the same properties, which implies that $(a_k)$ is
infinitely log-concave.
\end{lem}
\begin{proof}
Clearly ${\cal L}(a_k)$ is still symmetric. To show that the other
two properties persist, note that in demonstrating
Lemma~\ref{rfactor:le} we actually proved more. In particular, we
showed that if equation~\ree{r} holds at index $k$ of the
sequence $(a_k)$ (with $r=r_0$), then it also holds at index $k$ of the
sequence ${\cal L}(a_k)$ provided that the original sequence is log-concave.
Note that the assumptions of the current lemma imply log-concavity of
$(a_k)$: This is clear at indices $k\neq m,m+1$ because of
condition~(i). Also, using symmetry
and multiplying condition (ii) by $a_m$ gives
$a_m^2 \ge 2 a_{m-1} a_{m} = 2 a_{m-1} a_{m+1}$ (and symmetrically for
$k=m+1$).
So now we know that condition~(i) is also true for ${\cal L}(a_k)$. As for
condition~(ii), using symmetry we see that we need to prove
$$
a_m^2-a_{m-1} a_m\ge 2\left( a_{m-1}^2-a_{m-2} a_m \right).
$$
Rearranging terms and dropping one of them shows that it suffices to
demonstrate
$$
2 a_{m-1}^2 + a_{m-1} a_m \le a_m^2.
$$
But this is true because of~(ii), and we are done.
\end{proof}
\begin{thm}[\cite{uy:uri}]
Any sequence corresponding to a point of ${\cal R}$ is infinitely log-concave.
\eth
\begin{proof}
It suffices to show that the sequence satisfies the hypotheses of
Lemma~\ref{rfactor:le} when $n=2m$, or Lemma~\ref{rfactor:mod} when $n=2m+1$.
Suppose first that
$k<m$. Being on the correct side of ${\cal H}_k$
is equivalent to there being values of the parameters such that
$$
a_k^2
\hs{5pt}\ge\hs{5pt}
(r_1 x^{s_k})^2
\hs{5pt}=\hs{5pt}
r_1^2 x^{(s_{k-1}+d_k)+(s_{k+1}-d_{k+1})}
\hs{5pt}=\hs{5pt}
r_0 a_{k-1} a_{k+1}.
$$
Thus we have the necessary inequalities for this range of $k$.
If $k=m$ then we can use an argument as in the previous paragraph if $n=2m$.
If
$n=2m+1$, then being on the correct side of ${\cal H}_m$
is equivalent to
$$
a_m\ge 2 x^{s_{m-1}}=2 a_{m-1}.
$$
This is precisely condition~(ii) of Lemma~\ref{rfactor:mod}, which finishes
the proof.
\end{proof}
\section{Columns and other lines of Pascal's triangle}
\label{cpt}
While we have treated Boros and Moll's statement about the infinite
log-concavity of the binomial coefficients to be a statement about the
rows of Pascal's triangle, their wording also suggests an examination
of the columns.
\begin{conj}
\label{col:con}
The sequence $\binom{n}{k}_{n\ge k}$ is infinitely log-concave for
all fixed $k\ge0$.
\end{conj}
We will give two pieces of evidence for this conjecture. One is a demonstration
that various columns corresponding to small values
of $k$ are infinitely log-concave.
Another is a proof that ${\cal L}_n^i\binom{n}{k}$ is
nonnegative for certain values of $i$ and all $k$.
\begin{prop}
\label{small:k}
The sequence $\binom{n}{k}_{n\ge k}$ is infinitely log-concave for
$0\le k \le 2$.
\end{prop}
\begin{proof} When $k=0$ we have, for all $i \geq 1$,
$$
{\cal L}_n^i\binom{n}{0}=(1,0,0,0,\ldots).
$$
For $k=1$ we obtain
$$
{\cal L}_n\binom{n}{1} = (1,1,1,\dots)
$$
so infinite log-concavity follows from the $k=0$ case.
The sequence when $k=2$ is a fixed point of
the ${\cal L}$-operator, again implying infinite log-concavity.
\end{proof}
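These three cases are quick to confirm numerically, modulo the truncation artifact at the right end of a finite window; the sketch below (ours, in Python) checks that ${\cal L}$ sends the column $k=1$ to the all-ones sequence and fixes the column $k=2$.

```python
from math import comb

def L(a):
    ext = [0] + list(a) + [0]
    return [ext[i] ** 2 - ext[i - 1] * ext[i + 1]
            for i in range(1, len(ext) - 1)]

N = 30
col1 = [comb(n, 1) for n in range(1, N)]   # 1, 2, 3, ...
col2 = [comb(n, 2) for n in range(2, N)]   # 1, 3, 6, 10, ...

# the last entry of each image is a truncation artifact, so drop it
ones_image = L(col1)[:-1]
fixed_image = L(col2)[:-1]
```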
In what follows, we use the notation $L(a_k)$ for the $k$th
element of the sequence ${\cal L}(a_k)$, and similarly for $L_k$ and $L_n$.
\begin{prop}
\label{small:i}
The sequence ${\cal L}_n^i\binom{n}{k}$ is nonnegative for all $k$ and for
$0\le i\le 4$.
\end{prop}
\begin{proof}
By the previous proposition, we only need to check $k\ge3$.
Using the expression for a binomial coefficient in terms of
factorials, it is easy to derive the following expressions:
$$
L_n\binom{n}{k} =
\frac{1}{n}\binom{n}{k}\binom{n}{k-1}
$$
and
$$
L_n^2\binom{n}{k} =
\frac{2}{n^2 (n-1)}
\binom{n}{k}^2 \binom{n}{k-1}\binom{n}{k-2}.
$$
With a little more work, one can show that $L_n^3\binom{n}{k}$ can
be expressed as a product of nonnegative factors times the polynomial
$$
(4k-6)n^2-(4k^2-10k+6)n-k^2.
$$
To show that this is nonnegative, we write $n=k+m$ for $m\ge0$ to get
$$
(4k-6)m^2+(4k^2-2k-6)m+(3k^2-6k).
$$
But the coefficients of the powers of $m$ are all positive for $k\ge3$,
so we are done with the case $i=3$.
When $i=4$, we follow the same procedure, only now the polynomial in
$m$ has coefficients which are polynomials in $k$ up to degree $7$.
For example, the coefficient of $m^3$ is
$$
528 k^7- 8 k^6 - 11,248 k^5 + 25,360 k^4 - 5,888 k^3 - 24,296 k^2
+16,080 k - 1,584.
$$
To make sure this is nonnegative for integral $k\ge3$, one rewrites
the polynomial as
$$
(528 k^2- 8 k - 11,248) k^5 + (25,360 k^2 - 5,888 k - 24,296) k^2
+(16,080 k - 1,584),
$$
finds the smallest $k$ such that each of the factors in parentheses is
nonnegative from this value on, and then checks any remaining $k$ by
direct substitution.
\end{proof}
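The closed forms for $L_n$ and $L_n^2$ used in this proof are easy to double-check by machine; the Python sketch below (ours) verifies both identities exactly, over a range of $n$ and all $3 \le k \le n$, with binomial coefficients extended by zero outside $0 \le k \le n$.

```python
from fractions import Fraction
from math import comb

def binom(n, k):
    return comb(n, k) if 0 <= k <= n else 0

def Ln(f, n, k):
    # one application of the L-operator in the n-direction
    return f(n, k) ** 2 - f(n - 1, k) * f(n + 1, k)

def L1(n, k):
    return Ln(binom, n, k)

ok1 = all(
    Fraction(L1(n, k)) == Fraction(binom(n, k) * binom(n, k - 1), n)
    for n in range(3, 25) for k in range(3, n + 1)
)
ok2 = all(
    Fraction(Ln(L1, n, k))
    == Fraction(2 * binom(n, k) ** 2 * binom(n, k - 1) * binom(n, k - 2),
                n ** 2 * (n - 1))
    for n in range(4, 20) for k in range(3, n + 1)
)
```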
Kauers and Paule~\cite{kp:cpm} proved that the rows of Pascal's
triangle are $i$-fold log-concave for $i\le5$.
Kauers~[private communication] has used their techniques to confirm
Proposition~\ref{small:i} and to also check the case $i=5$ for the
columns. For the latter case, Kauers used a computer to determine
\begin{equation}
\label{kau}
\frac{({\cal L}_n^5 \binom{n}{k})}{\binom{n}{k}^{2^5}}
\end{equation}
explicitly, which
is just a rational function in $n$ and $k$. He then showed that
\ree{kau} is nonnegative by means of cylindrical algebraic decomposition.
We refer the interested reader to~\cite{kp:cpm} and the references therein for more
information on such techniques.
More generally, we can look at an arbitrary line in Pascal's triangle,
i.e., consider the sequence $\binom{n+mu}{k+mv}_{m\ge0}$.
The unimodality and (1-fold) log-concavity of such sequences has been investigated in
\cite{bbs:ucs, sw:upp, tz:usb, tz:usb2}.
We do not require that $u$ and $v$ be coprime, so such sequences
need not contain all of the binomial coefficients in which a geometric
line would intersect Pascal's triangle, e.g., a sequence such as
$\binom{n}{0}, \binom{n}{2},\binom{n}{4},\ldots$ would be included.
By letting $u<0$, one can get a finite truncation of a column. For
example, if $n=5$, $k=3$, $u=-1$, and $v=0$ then we get the sequence
$$
\binom{5}{3}, \binom{4}{3}, \binom{3}{3}
$$
which is not even $2$-fold log-concave. So we will only consider $u\ge0$.
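Concretely, the failure of this truncated column is visible after two applications of the operator (zero padding at both ends; the helper name is ours):

```python
def L(seq):
    # b_k = a_k^2 - a_{k-1} a_{k+1}, out-of-range terms treated as 0
    a = [0] + list(seq) + [0]
    return [a[i]**2 - a[i-1]*a[i+1] for i in range(1, len(a) - 1)]

trunc = [10, 4, 1]      # C(5,3), C(4,3), C(3,3)
once = L(trunc)         # [100, 6, 1]: still nonnegative, so log-concave
twice = L(once)         # [10000, -64, 1]: a negative entry appears
assert min(twice) < 0
print(once, twice)
```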
Also
$$
\binom{n+mu}{k+mv} = \binom{n+mu}{n-k+m(u-v)}
$$
so we can also assume $v\ge0$.
We offer the following conjecture, which includes
Conjecture~\ref{bm:con} as a special case.
\begin{conj}
\label{all:con}
Suppose that $u$ and $v$ are distinct nonnegative integers. Then
$\binom{n+mu}{mv}_{m\ge0}$ is infinitely log-concave for all $n \geq 0$ if
and only if $u<v$ or $v=0$.
\end{conj}
We first give a quick proof of the ``only if'' direction. Supposing
that $u>v \ge1$, we consider the sequence
$$
\binom{0}{0}, \binom{u}{v}, \binom{2u}{2v}, \ldots
$$
obtained when $n=0$.
We claim that this sequence is not even log-concave and that
log-concavity fails at the second term. Indeed, the fact that
$\binom{u}{v}^2 < \binom{2u}{2v}$ follows immediately from the
identity
\[
\binom{u}{0}\binom{u}{2v} + \binom{u}{1}\binom{u}{2v-1} + \cdots +
\binom{u}{v}\binom{u}{v} + \cdots + \binom{u}{2v}\binom{u}{0} = \binom{2u}{2v},
\]
which is a special case of Vandermonde's Convolution.
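The inequality $\binom{u}{v}^2<\binom{2u}{2v}$, and the Vandermonde identity behind it, are easy to confirm exhaustively for small parameters (Python's `math.comb` returns $0$ when the lower index exceeds the upper, which matches the convention used in the sum):

```python
from math import comb

# For u > v >= 1, log-concavity fails at the second term of
# C(0,0), C(u,v), C(2u,2v), ...
for u in range(2, 30):
    for v in range(1, u):
        assert comb(u, v)**2 < comb(2*u, 2*v)
        # Vandermonde: C(2u,2v) = sum_j C(u,j) C(u,2v-j),
        # and C(u,v)^2 is just one of the nonnegative summands
        assert comb(2*u, 2*v) == sum(comb(u, j) * comb(u, 2*v - j)
                                     for j in range(0, 2*v + 1))

print("log-concavity fails at the second term for all 1 <= v < u < 30")
```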
The proof just given shows that subsequences of the columns of
Pascal's triangle are the only {\it infinite} sequences of the form
$\binom{n+mu}{mv}_{m\ge0}$ that can possibly be infinitely log-concave.
We also note that the previous conjecture says nothing about what
happens on the diagonal $u=v$. Of course, the case $u=v=1$ is
Conjecture~\ref{col:con}. For other diagonal values, the evidence is
conflicting.
One can show by computer that $\binom{n+mu}{mu}_{m\ge0}$ is not $4$-fold log-concave
for $n=2$ and any
$2\le u\le500$. However, this is the only {\it known} value of $n$
for which $\binom{n+mu}{mu}_{m\ge0}$ is not an infinitely log-concave
sequence for some $u\ge1$.
We conclude this section by offering considerable computational
evidence in favor of the ``if'' direction of Conjecture~\ref{all:con}.
Theorem~\ref{row:th} provides such evidence when $u=0$ and $v=1$. Since all
other
sequences with $u<v$ have a finite number of nonzero entries, we can
use the $r_0$-factor log-concavity technique for these sequences as well.
For all $n\le 500$, $2\le v\le20$ and $0\le u<v$, we have checked that
$\binom{n+mu}{mv}_{m\ge0}$ is infinitely log-concave.
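A minimal sketch of this kind of computation, shown here for the rows: we take as given the criterion behind the $r_0$-factor technique established earlier in the paper, namely that if a nonnegative finite sequence satisfies $a_k^2\ge r\,a_{k-1}a_{k+1}$ for all $k$ with a fixed $r\ge r_0=(3+\sqrt5)/2$, then infinite log-concavity follows. The rational bound $r=2.62>r_0$ is used so that the test is exact, and all function names are ours.

```python
from fractions import Fraction
from math import comb

R = Fraction(262, 100)   # a rational r with r > r0 = (3+sqrt(5))/2 ~ 2.6180

def L(seq):
    a = [0] + list(seq) + [0]
    return [a[i]**2 - a[i-1]*a[i+1] for i in range(1, len(a) - 1)]

def is_r_factor_lc(seq, r=R):
    # nonnegative and a_k^2 >= r a_{k-1} a_{k+1} for all k
    a = [0] + list(seq) + [0]
    return all(x >= 0 for x in seq) and \
           all(a[i]**2 >= r * a[i-1] * a[i+1] for i in range(1, len(a) - 1))

def certify_infinitely_log_concave(seq, max_iter=10):
    """Apply L until the sequence is r-factor log-concave with r > r0;
    by the criterion assumed above this certifies infinite log-concavity.
    Returns the number of applications needed, or None on failure."""
    for i in range(max_iter + 1):
        if is_r_factor_lc(seq):
            return i
        if any(x < 0 for x in seq):
            return None      # not even log-concave at this stage
        seq = L(seq)
    return None

worst = 0
for n in range(2, 30):
    row = [comb(n, k) for k in range(n + 1)]
    i = certify_infinitely_log_concave(row)
    assert i is not None
    worst = max(worst, i)
print(f"rows 2..29 certified; at most {worst} applications of L needed")
```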
\section{$q$-analogues}
\label{gps}
This section will be devoted to discussing two $q$-analogues of
binomial coefficients. For the Gaussian polynomials, we will see
that the corresponding generalization of Conjecture~\ref{bm:con} is
false, and we show one exact reason why it fails. In contrast, the
corresponding generalization of Conjecture~\ref{col:con} appears to be
true. This shows how delicate these conjectures are and may in part
explain why they seem to be difficult to prove. After introducing our
second $q$-analogue, we
conjecture that the corresponding generalizations of
Conjectures~\ref{bm:con}, \ref{col:con} and \ref{all:con} are all
true. This second $q$-analogue arises in the study of quantum groups; see,
for example, the books of Jantzen~\cite{jan:lqg} and Majid~\cite{maj:qgp}.
Let $q$ be a variable and consider a polynomial $f(q)\in{\mathbb R}[q]$.
Call $f(q)$ $q$-{\it nonnegative\/} if all the coefficients of $f(q)$ are
nonnegative. Apply
the ${\cal L}$-operator to sequences of polynomials $(f_k(q))$ in the
obvious way. Call such a sequence {\it $q$-log-concave\/} if
${\cal L}(f_k(q))$ is a sequence of $q$-nonnegative polynomials, with
$i$-fold $q$-log-concavity and infinite $q$-log-concavity defined
similarly.
We will be particularly interested in the Gaussian polynomials. The
standard {\it $q$-analogue of the nonnegative integer $n$\/} is
$$
[n]=[n]_q=\frac{1-q^n}{1-q} = 1+q+q^2+\cdots+q^{n-1}.
$$
Then, for $0\le k\le n$, the {\it Gaussian polynomials\/} or
{\it $q$-binomial coefficients\/} are defined as
$$
\gaus{n}{k}=\gaus{n}{k}_q = \frac{[n]_q!}{[k]_q! [n-k]_q!}
$$
where $[n]_q!=[1]_q [2]_q\cdots [n]_q$. For more information,
including proofs of the assertions made in the next paragraph, see the
book of Andrews~\cite{and:tp}.
Clearly substituting $q=1$ gives $\gaus{n}{k}_1=\binom{n}{k}$.
Also, it is well known that the Gaussian polynomials
are indeed $q$-nonnegative polynomials. In fact, they have
various combinatorial interpretations, one of which we will need.
An (integer) {\it partition of $n$\/} is a weakly decreasing
positive integer sequence $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_\ell)$ such
that $|\lambda|\stackrel{\rm def}{=}\sum_i \lambda_i=n$. The $\lambda_i$ are
called {\it parts\/}. For notational convenience, if a part $k$ is repeated $r$ times in a
partition $\lambda$ then we will denote this by writing $k^r$ in the
sequence for $\lambda$.
We say that $\lambda$ {\it fits inside an $s\times t$ box} if
$\lambda_1\le t$ and $\ell\le s$. Denote the set of all such partitions
by $P(s,t)$.
It is well known, and easy to prove by induction on $n$, that
\begin{equation}
\label{gau}
\gaus{n}{k}=\sum_{\lambda\in P(n-k,k)} q^{|\lambda|}.
\end{equation}
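Identity~\eqref{gau} is easy to confirm by machine for small $n$: below, the Gaussian polynomial is computed from the $q$-Pascal recurrence $\gaus{n}{k}=\gaus{n-1}{k-1}+q^k\gaus{n-1}{k}$ and compared against a brute-force enumeration of partitions fitting in a box. Polynomials are coefficient lists in ascending powers of $q$, and the helper names are ours.

```python
def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def norm(a):                     # strip trailing zero coefficients
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def qbin(n, k):
    # Gaussian polynomial via [n,k] = [n-1,k-1] + q^k [n-1,k]
    if k < 0 or k > n:
        return [0]
    table = {(m, 0): [1] for m in range(n + 1)}
    for m in range(1, n + 1):
        for j in range(1, min(m, k) + 1):
            table[(m, j)] = padd(table.get((m - 1, j - 1), [0]),
                                 [0]*j + table.get((m - 1, j), [0]))
    return norm(table[(n, k)])

def box_partitions_gf(s, t):
    # coefficients of sum of q^|lambda| over partitions in an s x t box
    coeffs = [0] * (s * t + 1)
    def rec(parts_left, max_part, weight):
        coeffs[weight] += 1       # count the partition built so far
        if parts_left == 0:
            return
        for p in range(1, max_part + 1):
            rec(parts_left - 1, p, weight + p)
    rec(s, t, 0)
    return coeffs

for n in range(0, 9):
    for k in range(0, n + 1):
        assert qbin(n, k) == box_partitions_gf(n - k, k)
print("[n,k]_q matches the box-partition generating function for n <= 8")
```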
We are almost ready to prove that the sequence
$\left(\gaus{n}{k}\right)_{k\ge0}$ is not infinitely $q$-log-concave.
In fact, we will show it is not even $2$-fold $q$-log-concave.
First we need a lemma. In it, we use $\mathop{\rm mint}\nolimits f(q)$ to denote the
nonzero term of least degree in $f(q)$.
\begin{lem}
\label{mint}
Let $L_k\left(\gaus{n}{k}\right)=B_k(q)$. Then for $k\le n/2$,
$$
\mathop{\rm mint}\nolimits B_k(q)=
\case{q^k}{if $k<n/2$,}{2q^k}{if $k=n/2$.}
$$
\end{lem}
\begin{proof}
Since $B_k(q)=\gaus{n}{k}^2-\gaus{n}{k-1}\gaus{n}{k+1}$ it suffices to
prove, in view of~\ree{gau}, the following two statements. If $i\le k$ and
$$
(\lambda,\mu)\in P(n-k+1,k-1)\times P(n-k-1,k+1)
$$
with $|\lambda|+|\mu|=i$, then $(\lambda,\mu)\in P(n-k,k)^2$. Furthermore,
the number of pairs $(\lambda,\mu)$ with $|\lambda|+|\mu|=i$ in
$P(n-k,k)^2- P(n-k+1,k-1)\times P(n-k-1,k+1)$ is 0, 1, or 2 according
as $i<k$, $i=k<n/2$, or $i=k=n/2$.
The first statement is an easy consequence of
$|\lambda|+|\mu|=i\le k\le n-k$. A similar argument works for the $i<k$
case of the second statement. If $i=k$ then the pair $((k),\emptyset)$ is in the
difference and if $i=k=n/2$ then the pair $(\emptyset,(1^k))$ is as well.
\end{proof}
\begin{prop}
\label{gau:pr}
Let $L_k^2\left(\gaus{n}{k}\right)=C_k(q)$. Then for
$n\ge2$ and $k=\fl{n/2}$ we have
$$
\mathop{\rm mint}\nolimits C_k(q)=-q^{n-2}.
$$
Consequently, $\left(\gaus{n}{k}\right)_{k\ge0}$ is not $2$-fold $q$-log-concave.
\end{prop}
\begin{proof}
The proofs for $n$ even and odd are similar, so we will only do the
former. So suppose $n=2k$ and consider
$$
C_k(q)=B_k(q)^2-B_{k-1}(q)B_{k+1}(q)=B_k(q)^2-B_{k-1}(q)^2.
$$
By the previous lemma
$\mathop{\rm mint}\nolimits B_k(q)^2=4q^{2k}$ and $\mathop{\rm mint}\nolimits B_{k-1}(q)^2= q^{2k-2}$. Thus
$\mathop{\rm mint}\nolimits C_k(q) = -q^{2k-2}=-q^{n-2}$ as desired.
\end{proof}
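Proposition~\ref{gau:pr} can be spot-checked exactly for small $n$ with coefficient-list arithmetic (the function names are ours):

```python
def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def psub(a, b):
    return padd(a, [-c for c in b])

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def qbin(n, k):
    # Gaussian polynomial via [n,k] = [n-1,k-1] + q^k [n-1,k]
    if k < 0 or k > n:
        return [0]
    table = {(m, 0): [1] for m in range(n + 1)}
    for m in range(1, n + 1):
        for j in range(1, min(m, k) + 1):
            table[(m, j)] = padd(table.get((m - 1, j - 1), [0]),
                                 [0]*j + table.get((m - 1, j), [0]))
    return table[(n, k)]

def mint(a):
    # lowest-degree nonzero term, returned as (degree, coefficient)
    for i, c in enumerate(a):
        if c != 0:
            return (i, c)
    return None

def Cpoly(n):
    # C_k(q) = L_k^2 applied to the Gaussian polynomials, k = floor(n/2)
    def B(j):
        return psub(pmul(qbin(n, j), qbin(n, j)),
                    pmul(qbin(n, j - 1), qbin(n, j + 1)))
    k = n // 2
    return psub(pmul(B(k), B(k)), pmul(B(k - 1), B(k + 1)))

for n in range(2, 11):
    assert mint(Cpoly(n)) == (n - 2, -1)    # mint C_k(q) = -q^(n-2)
print("mint C_k(q) = -q^(n-2) verified for 2 <= n <= 10")
```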
After what we have just proved, it may seem surprising that the
following conjecture, which is a $q$-analogue of
Conjecture~\ref{col:con}, does seem to hold.
\begin{conj}
\label{qcol:con}
The sequence $\left(\gaus{n}{k}\right)_{n\ge k}$ is infinitely
$q$-log-concave for
all fixed $k\ge0$.
\end{conj}
As evidence, we will prove a $q$-analogue of Proposition~\ref{small:k}
and comment on Proposition~\ref{small:i} in this setting.
\begin{prop}
\label{qsmall:k}
The sequence $\left(\gaus{n}{k}\right)_{n\ge k}$ is infinitely
$q$-log-concave for
$0\le k\le 2$.
\end{prop}
\begin{proof}
When $k=0$ one has the same sequence as when $q=1$.
When $k=1$ we claim that
$$
{\cal L}\left(\gaus{n}{1}\right) = (1,q,q^2,q^3,\ldots).
$$
Indeed,
\begin{eqnarray*}
[n]^2-[n-1][n+1]
&=& \frac{(1-q^n)^2-(1-q^{n-1})(1-q^{n+1})}{(1-q)^2}\\
&=& \frac{q^{n-1}-2q^n+q^{n+1}}{(1-q)^2}\\
&=& q^{n-1}
\end{eqnarray*}
(and recall that the sequence starts at $n=1$). It follows that
$$
{\cal L}^i\left(\gaus{n}{1}\right) = (1,0,0,0,\ldots)
$$
for $i\ge2$.
For $k=2$, the manipulations are much like those in the previous paragraph.
Using induction on $i$, we obtain
$$
L_n^i\left(\gaus{n}{2}\right) = q^{(2^i-1)(n-2)}\gaus{n}{2}
$$
for $i\ge0$. This completes the proof of the last case of the
proposition.
\end{proof}
If we now consider arbitrary $k$ it is not hard to show, using
algebraic manipulations like those in the proof just given,
that
\begin{equation}
\label{lq}
L_n \left(\gaus{n}{k}\right)=
\frac{q^{n-k}}{[n]}\gaus{n}{k}\gaus{n}{k-1}.
\end{equation}
These are, up to a power of $q$, the $q$-Narayana numbers. They were
introduced by F\"urlinger and Hofbauer~\cite{fh:qcn} and are contained
in a specialization of a result of MacMahon~\cite[page 1429]{mac:cp}
which was stated without proof. They were further studied by
Br\"and\'en~\cite{bra:qnn}. As shown in the references just cited,
these polynomials are the generating functions for a number of
different families of combinatorial objects. Thus they are $q$-nonnegative.
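Equation~\eqref{lq} is easy to spot-check in exact arithmetic by evaluating both sides at rational values of $q$; this is evidence rather than a proof, and the function names are ours.

```python
from fractions import Fraction

def qint(n, q):              # [n]_q = 1 + q + ... + q^(n-1)
    return sum(q**i for i in range(n))

def qbin(n, k, q):           # Gaussian polynomial evaluated at q
    if k < 0 or k > n:
        return Fraction(0)
    val = Fraction(1)
    for j in range(1, k + 1):
        val *= Fraction(qint(n - k + j, q), 1) / qint(j, q)
    return val

for q in (Fraction(1, 2), Fraction(3, 2), Fraction(5, 3)):
    for k in range(1, 7):
        for n in range(k, k + 8):
            lhs = qbin(n, k, q)**2 - qbin(n - 1, k, q) * qbin(n + 1, k, q)
            rhs = q**(n - k) * qbin(n, k, q) * qbin(n, k - 1, q) / qint(n, q)
            assert lhs == rhs

print("identity (lq) holds at sample rational q for the tested n, k")
```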
More computations show that
\begin{equation}
\label{llq}
L_n^2\left(\gaus{n}{k}\right)
=\frac{q^{3n-3k}[2]}{[n]^2[n-1]}\gaus{n}{k}^2\gaus{n}{k-1}\gaus{n}{k-2}.
\end{equation}
It is not clear that these polynomials are $q$-nonnegative, although they
must be if Conjecture~\ref{qcol:con} is true. Furthermore, when $q=1$,
the triangle made as $n$ and $k$ vary is not in Sloane's
Encyclopedia~\cite{slo:oei} (although it has now been submitted). We
expect that these integers and polynomials have interesting, yet to be
discovered, properties.
We conclude our discussion of the Gaussian polynomials by considering the
sequence
\begin{equation}
\label{gs}
\left(\gaus{n+mu}{mv}\right)_{m\ge0}
\end{equation}
for nonnegative integers $u$ and $v$, as we did in Section~\ref{cpt} for the
binomial coefficients.
When $u>v$ the sequence has an infinite number of nonzero entries. We
can use $\eqref{gau}$ to show that the highest degree term in
$\gaus{n+u}{v}^2 - \gaus{n+2u}{2v}$ has coefficient $-1$, so the
sequence \eqref{gs} is not even $q$-log-concave. When $u<v$, it
seems to be the case that the sequence is not $2$-fold $q$-log-concave,
as shown for the rows in Proposition~\ref{gau:pr}. When $u=v$, the
evidence is conflicting, reflecting the behavior of the binomial
coefficients. Since setting $q=1$ in $\gaus{n+mu}{mu}$ yields
$\binom{n+mu}{mu}$, we know that
$\left(\gaus{2+mu}{mu}\right)_{m\ge0}$ is not always
$4$-fold $q$-log-concave. It also transpires that the case $n=3$ is
not always $5$-fold $q$-log-concave. We have not encountered any other
values of $n$ for which the sequence fails to be infinitely
$q$-log-concave when $u=v$.
While the variety of behavior of the Gaussian polynomials is
interesting, it would be desirable to have a $q$-analogue that
better reflects the behavior of the binomial coefficients. A
$q$-analogue that arises in the study of quantum groups serves
this purpose. Let us
replace the previous $q$-analogue of the nonnegative integer $n$
with the expression
$$
\langle n\rangle = \frac{q^n-q^{-n}}{q-q^{-1}} =
q^{1-n} + q^{3-n} + q^{5-n} + \cdots + q^{n-1}.
$$
From this, we obtain a $q$-analogue of the binomial coefficients by
proceeding as for the Gaussian polynomials: for $0\le k\le n$, we
define
$$
\qua{n}{k} = \frac{\langle n\rangle!}{\langle k\rangle! \langle n-k\rangle!}
$$
where $\langle n\rangle!=\langle1 \rangle \langle2 \rangle\cdots \langle
n\rangle$.
Letting $q\to1$ in $\qua{n}{k}$ gives $\binom{n}{k}$, and a
straightforward calculation shows that
\begin{equation}
\label{quagau}
\qua{n}{k} = \frac{1}{q^{nk-k^2}}\gaus{n}{k}_{q^2}.
\end{equation}
So $\qua{n}{k}$ is, in general, a Laurent polynomial in $q$
with nonnegative coefficients. Our definitions of $q$-nonnegativity and $q$-log-concavity for polynomials in $q$ extend to Laurent polynomials in the obvious way.
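Relation~\eqref{quagau} can likewise be spot-checked exactly at rational $q$ (again evidence, not a proof; the function names are ours):

```python
from fractions import Fraction

def ang(n, q):               # <n> = (q^n - q^-n)/(q - q^-1)
    return (q**n - q**(-n)) / (q - q**(-1))

def quantum_bin(n, k, q):    # <n>! / (<k>! <n-k>!) via a product formula
    val = Fraction(1)
    for j in range(1, k + 1):
        val *= ang(n - k + j, q) / ang(j, q)
    return val

def gauss_bin(n, k, q):      # ordinary Gaussian polynomial at q
    val = Fraction(1)
    for j in range(1, k + 1):
        val *= sum(q**i for i in range(n - k + j)) / sum(q**i for i in range(j))
    return val

for q in (Fraction(2), Fraction(3, 2), Fraction(5, 7)):
    for n in range(0, 9):
        for k in range(0, n + 1):
            # <n k> = q^(k^2 - nk) [n k]_{q^2}
            assert quantum_bin(n, k, q) == gauss_bin(n, k, q**2) / q**(n*k - k*k)

print("quantum binomial = q^(k^2-nk) [n,k]_{q^2} at sample rational q")
```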
We offer the following generalizations of Conjectures~\ref{bm:con},
\ref{col:con} and \ref{all:con}.
\begin{conj}
\label{qua:con}
\
\begin{itemize}
\item[(a)] The row sequence $\left(\qua{n}{k}\right)_{k\ge0}$ is
infinitely $q$-log-concave for all $n\ge0$.
\item[(b)] The column sequence $\left(\qua{n}{k}\right)_{n\ge
k}$ is infinitely $q$-log-concave for all fixed $k\ge0$.
\item[(c)] For all integers $0\le u<v$, the sequence
$\left(\qua{n+mu}{mv}\right)_{m\ge0}$ is infinitely
$q$-log-concave for all $n\ge0$.
\end{itemize}
\end{conj}
Several remarks are in order. Suppose that for $f(q), g(q) \in
{\mathbb R}[q, q^{-1}]$, we say $f(q) \le g(q)$ if $g(q)-f(q)$ is $q$-nonnegative.
Then the proofs of Lemmas~\ref{rfactor:le} and
\ref{rfactor:mod} work equally well if the $a_i$'s are Laurent
polynomials and we replace the term ``log-concave'' by
``$q$-log-concave.'' Using these lemmas, we have verified
Conjecture~\ref{qua:con}(a) for all $n\le 53$. Even though (a) is a
special case of (c), we state it separately since (a) is the
$q$-generalization of the Boros-Moll conjecture, the primary
motivation for this paper.
As evidence for Conjecture~\ref{qua:con}(b), it is not hard to prove
the appropriate analogue of Propositions~\ref{small:k} and
\ref{qsmall:k}, i.e.\ that the sequence $\qua{n}{k}_{n\ge k}$ is
infinitely $q$-log-concave for all $0\le k\le2$. To obtain the
expressions for $L_n\left(\qua{n}{k}\right)$ and
$L_n^2\left(\qua{n}{k}\right)$, take
equations~\eqref{lq} and \eqref{llq}, replace all square
brackets by angle brackets and replace the terms $q^{n-k}$ and
$q^{3n-3k}$ by the number 1.
Conjecture~\ref{qua:con}(c) has been verified for all $n\le 24$
with $v\le10$. When $u > v$, we can use $\eqref{quagau}$ to show
that the lowest degree term in $\qua{n+u}{v}^2 - \qua{n+2u}{2v}$ has
coefficient $-1$, so the sequence is not even $q$-log-concave.
When $u=v$, the quantum groups analogue has exactly the same behavior
as we observed above for the Gaussian polynomials.
\section{Symmetric functions}
\label{sf}
We now turn our attention to symmetric functions.
We will demonstrate that the complete homogeneous symmetric functions
$(h_k)_{k\ge0}$ are a natural analogue of the rows and columns of
Pascal's triangle. We show that the sequence
$(h_k)_{k\ge0}$ is
$i$-fold log-concave in the appropriate sense for $i\le3$,
but not $4$-fold log-concave.
Like the results of Section~\ref{gps}, this
result underlines the difficulties and subtleties of
Conjectures~\ref{bm:con} and~\ref{col:con}. In particular, it shows
that any proof of Conjecture~\ref{bm:con} or Conjecture~\ref{col:con}
would need to use techniques that do not carry over to the sequence
$(h_k)_{k\ge0}$.
For a more detailed
exposition of the background material below, we refer the reader to
the texts of Fulton~\cite{ful:yt}, Macdonald~\cite{mac:sfh},
Sagan~\cite{sag:sym} or Stanley~\cite{sta:ec2}.
Let ${\bf x}=\{x_1,x_2,\ldots\}$ be a countably infinite set of variables.
For each $n\ge0$, the elements of the symmetric group $\mathfrak{S}_n$ act on formal
power series $f({\bf x})\in{\mathbb R}[[{\bf x}]]$ by permutation of variables (where
$x_i$ is left fixed if $i>n$). The algebra of symmetric functions,
$\Lambda({\bf x})$, is the set of all series left fixed by all symmetric
groups and of bounded (total) degree.
The vector space of symmetric functions homogeneous of degree $k$ has
dimension equal to the number of partitions
$\lambda=(\lambda_1,\ldots,\lambda_\ell)$ of $k$. We will be
interested in three bases for this vector space. The
{\it monomial symmetric function\/} corresponding to $\lambda$,
$m_\lambda=m_\lambda({\bf x})$, is obtained by
symmetrizing the monomial $x_1^{\lambda_1}\cdots x_\ell^{\lambda_\ell}$. The
$k$th {\it complete homogeneous symmetric function\/}, $h_k$, is the sum of
all monomials of degree $k$. For partitions, we then define
$$
h_\lambda=h_{\lambda_1}\cdots h_{\lambda_\ell}.
$$
Finally, the
{\it Schur function\/} corresponding to $\lambda$ is
$$
s_\lambda= \det(h_{\lambda_i-i+j})_{1\le i,j \le \ell}.
$$
We remark that this determinant is a minor of the Toeplitz matrix for
the sequence $(h_k)$. We will have more to say about Toeplitz matrices
in the next section.
Our interest will be in the sequence just mentioned
$(h_k)_{k\ge0}$. Let $h_k(1^n)$
denote the integer obtained by substituting $x_1=\cdots=x_n=1$ and
$x_i=0$ for $i>n$ into $h_k=h_k({\bf x})$. Then $h_k(1^n)=\binom{n+k-1}{k}$
(the number of ways of choosing $k$ things from $n$ things with
repetition) and so the above sequence becomes a column of Pascal's
triangle. By the same token $h_k(1^{n-k})=\binom{n-1}{k}$ and so the
sequence becomes a row.
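Both specializations can be confirmed by directly counting monomials; a multiset of size $k$ chosen from $n$ variables corresponds to exactly one degree-$k$ monomial in $x_1,\ldots,x_n$ (the function name is ours).

```python
from itertools import combinations_with_replacement
from math import comb

def h_spec(k, n):
    """h_k(1^n): the number of degree-k monomials in x_1, ..., x_n,
    i.e. the number of size-k multisets from n variables."""
    return sum(1 for _ in combinations_with_replacement(range(n), k))

for n in range(1, 8):
    for k in range(0, 8):
        assert h_spec(k, n) == comb(n + k - 1, k)      # column entry
for n in range(1, 10):
    for k in range(0, n):
        assert h_spec(k, n - k) == comb(n - 1, k)      # row entry

print("h_k(1^n) = C(n+k-1,k) and h_k(1^(n-k)) = C(n-1,k) verified")
```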
We will now collect the results from the theory of symmetric functions
which we will need.
Partially order partitions by {\it dominance\/}
where $\lambda\le\mu$ if and only if for every $i\ge1$ we have
$\lambda_1+\cdots+\lambda_i\le\mu_1+\cdots+\mu_i$. Also, if $\{b_\lambda\}$ is
any basis of $\Lambda({\bf x})$ and $f\in\Lambda({\bf x})$ then we let
$[b_\lambda]f$ denote the coefficient of the basis element $b_\lambda$ in the
expansion of $f$ in this basis. First we have a simple consequence
of Young's Rule.
\begin{thm}
\label{yr}
For any partitions $\lambda,\mu$ we have $[m_\mu] s_\lambda$ is a nonnegative
integer. In particular,
\vs{5pt}
\eqed{
[m_\mu]s_\lambda=
\case{$1$}{if $\mu=\lambda$,}
{$0$}{if $\mu\not\le \lambda.$}
}
\eth
Let $\lambda+\mu$ denote the componentwise sum
$(\lambda_1+\mu_1,\lambda_2+\mu_2,\ldots)$.
The next result follows from the
Littlewood-Richardson Rule and induction.
\begin{thm}
\label{lrr}
For any partitions $\lambda^1,\ldots,\lambda^r$ and $\mu$ we have
$[s_\mu] s_{\lambda^1}\cdots s_{\lambda^r}$ is a nonnegative integer. In particular,
\vs{5pt}
\eqed{
[s_\mu]s_{\lambda^1}\cdots s_{\lambda^r}=
\case{$1$}{if $\mu=\lambda^1+\cdots+\lambda^r$,}
{$0$}{if $\mu\not\le \lambda^1+\cdots+\lambda^r$.}
}
\eth
Because of this result we call $\lambda^1+\cdots+\lambda^r$ the
{\it dominant partition\/} for $s_{\lambda^1}\cdots s_{\lambda^r}$.
Finally, we need a result of Kirillov~\cite{kir:csg} about the product
of Schur functions, which was proved bijectively by Kleber~\cite{kle:prs} and
Fulmek and Kleber~\cite{fk:bps}. This result can be obtained by applying the Desnanot-Jacobi Identity---also known as Dodgson's condensation formula---to the Jacobi-Trudi matrix for $s_{k^{r+1}}$.
Note that, to improve readability, we drop the sequence parentheses when a sequence appears as a subscript.
\begin{thm}[\cite{fk:bps,kir:csg,kle:prs}]
\label{kir:thm}
For positive integers $k,r$ we have
\vs{5pt}
\eqed{
\left(s_{k^r}\right)^2 - s_{(k-1)^r} s_{(k+1)^r} = s_{k^{r-1}} s_{k^{r+1}}.
}
\eth
To state our results, we need a few more definitions. If $\{b_\lambda\}$ is
a basis for $\Lambda({\bf x})$ and $f\in\Lambda({\bf x})$ then we
say $f$ is {\it $b_\lambda$-nonnegative\/} if $[b_\lambda]f\ge0$ for all
partitions $\lambda$. Note that $m_\lambda$-nonnegativity is the natural
generalization to many variables of the $q$-nonnegativity definition for
${\mathbb R}[q]$. Also note that $s_\lambda$-nonnegativity implies
$m_\lambda$-nonnegativity by Theorem~\ref{yr}.
\begin{thm}
The sequence ${\cal L}^i(h_k)$ is $s_\lambda$-nonnegative for $0\le i\le 3$.
But the sequence ${\cal L}^4(h_k)$ is not $m_\lambda$-nonnegative.
\eth
\begin{proof}
From the definition of the Schur function we have
$$
L^0(h_k)=h_k=s_k \quad\mbox{and}\quad L^1(h_k)=(h_k)^2-h_{k-1}h_{k+1}=s_{k^2}.
$$
Now Theorem~\ref{kir:thm} immediately gives
$$
L^2(h_k) = \left(s_{k^2}\right)^2 - s_{(k-1)^2}s_{(k+1)^2} = s_k s_{k^3}
$$
which is $s_\lambda$-nonnegative by the first part of Theorem~\ref{lrr}.
Using Theorem~\ref{kir:thm} twice gives
\begin{eqnarray*}
L^3(h_k)
&=& \left(s_k\right)^2 \left(s_{k^3}\right)^2 - s_{k-1} s_{(k-1)^3} s_{k+1} s_{(k+1)^3}\\
&=& \left(s_k\right)^2 \left(s_{k^3}\right)^2 - \left(s_k\right)^2 s_{(k-1)^3} s_{(k+1)^3}\\
& & + \left(s_k\right)^2 s_{(k-1)^3} s_{(k+1)^3} - s_{k-1} s_{(k-1)^3} s_{k+1} s_{(k+1)^3}\\
&=& \left(s_k\right)^2 s_{k^2} s_{k^4} + s_{(k-1)^3} s_{k^2} s_{(k+1)^3}
\end{eqnarray*}
which is again $s_\lambda$-nonnegative. This finishes the cases
$0\le i\le3$.
We now assume $k\ge2$. Computing $L^4(h_k)$ from the expression for
$L^3(h_k)$ gives the sum of the terms in the left column below. The right column gives the dominant partition for each term, as determined by Theorem~\ref{lrr}.
\[
\begin{array}{ll}
+(s_k)^4 (s_{k^2})^2 (s_{k^4})^2
& (8k, 4k, 2k, 2k) \\
+2 (s_k)^2 (s_{k^2}) ^2 s_{k^4} s_{(k-1)^3} s_{(k+1)^3}
& (7k, 5k, 3k, k) \\
+ (s_{(k-1)^3})^2 (s_{k^2})^2 (s_{(k+1)^3})^2
& (6k, 6k, 4k) \\
- (s_{k-1})^2 s_{(k-1)^2} s_{(k-1)^4} (s_{k+1})^2 s_{(k+1)^2} s_{(k+1)^4}
& (8k, 4k, 2k, 2k) \\
- (s_{k-1})^2 s_{(k-1)^2} s_{(k-1)^4} s_{k^3} s_{(k+1)^2} s_{(k+2)^3}
& (7k-1, 5k+1, 3k+1, k-1) \\
-s_{(k-2)^3} s_{(k-1)^2} s_{k^3} (s_{k+1})^2 s_{(k+1)^2} s_{(k+1)^4}
& (7k+1, 5k-1, 3k-1, k+1)\\
-s_{(k-2)^3} s_{(k-1)^2} (s_{k^3})^2 s_{(k+1)^2} s_{(k+2)^3}
& (6k, 6k, 4k)
\end{array}
\]
Now consider $\lambda=(7k+1,5k-1,3k-1,k+1)$, the dominant partition for the penultimate term above. Observe that if $\mu$ is the dominant partition for any other
term, then $\lambda\not\le\mu$. So, by the
second part of Theorem~\ref{lrr}, $s_\lambda$ appears in the Schur-basis
expansion for $L^4(h_k)$ with coefficient $-1$. It then follows from the
second part of Theorem~\ref{yr}, that
the coefficient of $m_\lambda$ is $-1$ as well.
\end{proof}
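The dominance claims in the last step are easy to check mechanically for small $k$; the dominant partitions from the table are hard-coded below, and the function name is ours.

```python
def dominates_leq(lam, mu):
    """lam <= mu in dominance order (partitions of the same number)."""
    m = max(len(lam), len(mu))
    lam = list(lam) + [0] * (m - len(lam))
    mu = list(mu) + [0] * (m - len(mu))
    s = t = 0
    for a, b in zip(lam, mu):
        s += a
        t += b
        if s > t:
            return False
    return True

for k in range(2, 8):
    others = [
        (8*k, 4*k, 2*k, 2*k),           # terms 1 and 4
        (7*k, 5*k, 3*k, k),             # term 2
        (6*k, 6*k, 4*k),                # terms 3 and 7
        (7*k - 1, 5*k + 1, 3*k + 1, k - 1),   # term 5
    ]
    lam = (7*k + 1, 5*k - 1, 3*k - 1, k + 1)  # term 6 (penultimate)
    # every term has total degree 16k, so dominance comparisons make sense
    assert all(s == 16*k for s in [sum(lam)] + [sum(mu) for mu in others])
    assert all(not dominates_leq(lam, mu) for mu in others)

print("lambda is not dominated by any other dominant partition, 2 <= k <= 7")
```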
\section{Real roots and Toeplitz matrices}
\label{rrp}
We now consider two other (almost equivalent) settings where, in contrast
to the results of the previous section, Conjecture~\ref{bm:con} does
seem to generalize. In fact, this may be the right level of
generality to find a proof.
Let $(a_k)=a_0,a_1,\ldots,a_n$ be a finite sequence of nonnegative real numbers.
It was shown by Isaac Newton that if all the roots of the polynomial
$p[a_k]\stackrel{\rm def}{=}a_0 + a_1 x + \cdots + a_n x^n$ are real, then the
sequence $(a_k)$
is log-concave. For example, since the polynomial
$(1+x)^n$ has only real roots, the $n$th row of Pascal's triangle is
log-concave. It is natural to ask if the real-rootedness property is
preserved by the ${\cal L}$-operator.
The literature includes a
number of results about operations on polynomials which preserve
real-rootedness; for example,
see~\cite{bra:ltp,bre:ulp,bre:lus,pit:pbc,wag:tph,wy:prz}.
\begin{conj}
\label{pla:con}
Let $(a_k)$ be a finite sequence of nonnegative real numbers. If $p[a_k]$
has only real roots then the same is true of $p[{\cal L}(a_k)]$.
\end{conj}
This conjecture was made independently by
Richard Stanley~[private communication]. It is also one of a number
of related conjectures made by Steve Fisk~\cite{fis:qdp}.
If true, Conjecture~\ref{pla:con}
would immediately imply the original Boros-Moll Conjecture.
As evidence for the conjecture, we have verified it by computer for a
large number of randomly chosen real-rooted polynomials. We have also checked
that
$p[{\cal L}_k^i\binom{n}{k}]$ has only real roots for all $i\le 10$ and
$n\le 40$. It is interesting to note that Boros and Moll's
polynomial $P_m(x)$ in equation~\ree{Pm} does not have real roots even
for $m=2$. So if the corresponding sequence is infinitely log-concave
then it must be so for some other reason.
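A small exact-arithmetic check of the first nontrivial case: for a degree-$n$ polynomial with positive coefficients, exhibiting $n$ sign changes of $p$ at negative abscissas certifies $n$ real roots, hence real-rootedness. The geometric grid and the function names are ours, and this only confirms real-rootedness of $p[{\cal L}_k\binom{n}{k}]$ for a few small $n$.

```python
from fractions import Fraction
from math import comb

def L(seq):
    a = [0] + list(seq) + [0]
    return [a[i]**2 - a[i-1]*a[i+1] for i in range(1, len(a) - 1)]

def count_sign_changes(coeffs):
    """Sign changes of p(x) = sum coeffs[i] x^i on a geometric grid of
    negative x covering the Cauchy bound; each change certifies a root."""
    bound = 1 + max(coeffs)              # all roots lie in (-bound, 0)
    x = Fraction(-bound)
    signs = []
    while -x * bound >= 1:               # scan down to -1/bound
        v = sum(c * x**i for i, c in enumerate(coeffs))
        if v != 0:
            signs.append(v > 0)
        x *= Fraction(99, 100)
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

for n in range(3, 7):
    b = L([comb(n, k) for k in range(n + 1)])
    # b has leading coefficient 1 and degree n, so n sign changes
    # force all n roots to be real (and negative)
    assert count_sign_changes(b) == n

print("p[L(row n)] is real-rooted for 3 <= n <= 6")
```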
Along with the rows of Pascal's triangle, it appears that applying ${\cal L}$ to the other finite lines we were considering in Section~\ref{cpt} also yields sequences with real-rooted
generating functions.
So we make the following conjecture
which implies the ``if'' direction of Conjecture~\ref{all:con}.
\begin{conj}
For $0\le u<v$, the polynomial $p[{\cal L}_m^i(\binom{n+mu}{mv})]$ has
only real roots for all $i\ge0$.
\end{conj}
We have verified this assertion for all $n\le24$ with $i\le10$ and
$v\le10$.
In fact, it follows from a theorem of Yu~\cite{yu:ctc} that the conjecture holds for $i=0$ and all $0\le u<v$. So it will suffice to prove Conjecture~\ref{pla:con} to obtain this result for all $i$.
We can obtain a matrix-theoretic perspective on problems of
real-rooted\-ness via
the following renowned result of Aissen, Schoenberg and
Whitney~\cite{asw:gft}. A matrix $A$ is said to be totally
nonnegative if every minor of $A$ is nonnegative.
We can associate with any sequence $(a_k)$ a corresponding
(infinite) {\it Toeplitz matrix\/} $A = (a_{j-i})_{i,j\ge0}$.
In comparing the
next theorem to Newton's result, note that for a real-rooted
polynomial $p[a_k]$ the roots being nonpositive is
equivalent to the sequence $(a_k)$ being nonnegative.
\begin{thm}[\cite{asw:gft}]
\label{asw:th}
Let $(a_k)$ be a finite sequence of real numbers. Then every root of
$p[a_k]$ is a nonpositive real number if and only if the
Toeplitz matrix $(a_{j-i})_{i,j\ge0}$ is totally nonnegative. \hfill \qed
\eth
To make a connection with the ${\cal L}$-operator, note that
$$
a_k^2-a_{k-1}a_{k+1}= \left| \begin{array}{cc} a_k & a_{k+1} \\ a_{k-1} & a_k
\end{array} \right|,
$$
which is a minor of the Toeplitz matrix
$A =(a_{j-i})_{i,j\ge0}$. Call such a minor {\it adjacent}
since its entries are adjacent in $A$.
Now, for an arbitrary infinite matrix
$A = (a_{i,j})_{i,j\geq 0}$, let us define the infinite matrix ${\cal L}(A)$ by
$$
{\cal L}(A) = \left( \left|
\begin{array}{cc} a_{i,j} & a_{i,j+1} \\
a_{i+1,j} & a_{i+1,j+1}
\end{array}
\right| \right)_{i,j \ge0}.
$$
Note that if $A$ is the Toeplitz matrix of $(a_k)$ then ${\cal L}(A)$ is
the Toeplitz matrix of ${\cal L}(a_k)$.
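This commutation is easy to see on finite truncations (zero-extension of the sequence in both directions, as in the Toeplitz matrix itself; names ours):

```python
def toeplitz(seq, size):
    # Toeplitz matrix (a_{j-i}), with the sequence extended by zeros
    a = lambda d: seq[d] if 0 <= d < len(seq) else 0
    return [[a(j - i) for j in range(size)] for i in range(size)]

def Lmat(A):
    # matrix of 2x2 adjacent minors of A
    n = len(A)
    return [[A[i][j]*A[i+1][j+1] - A[i][j+1]*A[i+1][j]
             for j in range(n - 1)] for i in range(n - 1)]

def Lseq(seq):
    a = [0] + list(seq) + [0]
    return [a[i]**2 - a[i-1]*a[i+1] for i in range(1, len(a) - 1)]

seq = [1, 4, 6, 4, 1]                      # row 4 of Pascal's triangle
A = toeplitz(seq, 8)
assert Lmat(A) == toeplitz(Lseq(seq), 7)
print("L(Toeplitz(a)) = Toeplitz(L(a)) on the truncation")
```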
Using Theorem~\ref{asw:th}, Conjecture~\ref{pla:con} can now be strengthened as
follows.
\begin{conj}
\label{tnn:con}
For a sequence $(a_k)$ of real numbers,
if $A = (a_{j-i})_{i,j\ge0}$ is totally nonnegative then ${\cal L}(A)$ is also
totally nonnegative.
\end{conj}
Note that if $(a_k)$ is finite, then Conjecture~\ref{tnn:con} is equivalent to Conjecture~\ref{pla:con}.
As regards evidence for Conjecture~\ref{tnn:con}, consider an arbitrary $n$-by-$n$
matrix $A = (a_{i,j})_{i,j=1}^n$. For finite matrices, ${\cal L}(A)$ is
defined in the obvious way to be the $(n-1)$-by-$(n-1)$ matrix
consisting of the 2-by-2 adjacent minors of $A$.
In~\cite[Theorem 6.5]{fhgj:ctp}, Fallat, Herman, Gekhtman, and Johnson
show that for $n\le4$, ${\cal L}(A)$
is totally nonnegative whenever $A$ is. However,
for $n=5$, an example from their paper can be modified to show that
if
$$
A =
\left(
\begin{array}{ccccc}
1 & t & 0 & 0 & 0 \\
t & t^2+1 & 2t & t^2 & 0\\
t^2 & t^3+2t & 1+4t^2 & 2t^3+t & 0\\
0 & t^2 & 2t^3+2t & t^4+2t^2+1 & t\\
0 & 0 & t^2 & t^3+t & t^2
\end{array}
\right)
$$
then $A$ is totally nonnegative for $t\ge0$, but ${\cal L}(A)$ is not
totally nonnegative for sufficiently large $t$ ($t\ge\sqrt{2}$ will
suffice). We conclude that the Toeplitz structure would be important
to any affirmative answer to Conjecture~\ref{tnn:con}.
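For the record, here is the check at the concrete value $t=2\ge\sqrt2$, by exhaustive minor computation (all names ours; determinants are exact integer arithmetic):

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row (fine for these sizes)
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**c * M[0][c] * det([row[:c] + row[c+1:] for row in M[1:]])
               for c in range(len(M)))

def minors(A):
    n = len(A)
    for size in range(1, n + 1):
        for rows in combinations(range(n), size):
            for cols in combinations(range(n), size):
                yield det([[A[r][c] for c in cols] for r in rows])

t = 2                                        # any t >= sqrt(2) should work
A = [[1,    t,         0,            0,              0  ],
     [t,    t*t + 1,   2*t,          t*t,            0  ],
     [t*t,  t**3+2*t,  1 + 4*t*t,    2*t**3 + t,     0  ],
     [0,    t*t,       2*t**3+2*t,   t**4+2*t*t+1,   t  ],
     [0,    0,         t*t,          t**3 + t,       t*t]]

# L(A): the 4x4 matrix of 2x2 adjacent minors
LA = [[A[i][j]*A[i+1][j+1] - A[i][j+1]*A[i+1][j] for j in range(4)]
      for i in range(4)]

assert all(m >= 0 for m in minors(A))        # A is totally nonnegative
assert min(minors(LA)) < 0                   # but L(A) is not
print("det L(A) =", det(LA))
```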
We finish our discussion of the matrix-theoretic perspective with a
positive result similar in flavor to Conjecture~\ref{tnn:con}.
\begin{prop}
If $A$ is a finite square matrix that is positive semidefinite, then ${\cal L}(A)$
is also positive semidefinite.
\end{prop}
\begin{proof}
The key idea is to construct the {\it second compound matrix}
$\mathcal{C}_2(A)$ of $A$, which is the array of all 2-by-2 minors of
$A$, arranged lexicographically according to the row and column
indices of the minors~\cite{hj:ma}.
We claim that if $A$ is
positive semidefinite, then so is $\mathcal{C}_2(A)$. Indeed, since
the compound operation preserves multiplication and inverses, the
eigenvalues of $\mathcal{C}_2(A)$ are equal to the eigenvalues of
$\mathcal{C}_2(J)$, where $J$ is the Jordan form of $A$. If $J$ is
upper-triangular and has diagonal entries $\lambda_1, \lambda_2, \ldots,
\lambda_n$, then we see that $\mathcal{C}_2(J)$ is upper-triangular with
diagonal entries $\lambda_i \lambda_j$ for all $i<j$. Since the $\lambda_i$'s are
all nonnegative, so too are the eigenvalues of $\mathcal{C}_2(J)$,
implying that $\mathcal{C}_2(A)$ is positive semidefinite.
Finally, since ${\cal L}(A)$ is a principal submatrix of
$\mathcal{C}_2(A)$, ${\cal L}(A)$ is itself positive semidefinite.
\end{proof}
\medskip
{\it Acknowledgements.} We thank Bodo Lass for suggesting that we
approach Conjecture~\ref{bm:con} from the point-of-view of real roots
of polynomials. Section~\ref{rrp} also benefited from interesting
discussions with Charles R. Johnson.
| {
"yymm": "0808",
"arxiv_id": "0808.1065",
"language": "en",
"url": "https://arxiv.org/abs/0808.1065",
"abstract": "Given a sequence (a_k) = a_0, a_1, a_2,... of real numbers, define a new sequence L(a_k) = (b_k) where b_k = a_k^2 - a_{k-1} a_{k+1}. So (a_k) is log-concave if and only if (b_k) is a nonnegative sequence. Call (a_k) \"infinitely log-concave\" if L^i(a_k) is nonnegative for all i >= 1. Boros and Moll conjectured that the rows of Pascal's triangle are infinitely log-concave. Using a computer and a stronger version of log-concavity, we prove their conjecture for the nth row for all n <= 1450. We also use our methods to give a simple proof of a recent result of Uminsky and Yeats about regions of infinite log-concavity. We investigate related questions about the columns of Pascal's triangle, q-analogues, symmetric functions, real-rooted polynomials, and Toeplitz matrices. In addition, we offer several conjectures.",
"subjects": "Combinatorics (math.CO)",
"title": "Infinite log-concavity: developments and conjectures"
} |
https://arxiv.org/abs/1905.10483 | On the product dimension of clique factors | The product dimension of a graph $G$ is the minimum possible number of proper vertex colorings of $G$ so that for every pair $u,v$ of non-adjacent vertices there is at least one coloring in which $u$ and $v$ have the same color. What is the product dimension $Q(s,r)$ of the vertex disjoint union of $r$ cliques, each of size $s$? Lovász, Nešetřil and Pultr proved in 1980 that for $s=2$ it is $(1+o(1)) \log_2 r$ and raised the problem of estimating this function for larger values of $s$. We show that for every fixed $s$, the answer is still $(1+o(1)) \log_2 r$ where the $o(1)$ term tends to $0$ as $r$ tends to infinity, but the problem of determining the asymptotic behavior of $Q(s,r)$ when $s$ and $r$ grow together remains open. The proof combines linear algebraic tools with the method of Gargano, Körner, and Vaccaro on Sperner capacities of directed graphs. | \section{Introduction}
The product dimension of a graph $G=(V,E)$ is the minimum possible
cardinality $d$ of a collection of proper vertex colorings of $G$ such
that every pair of nonadjacent vertices have the same color in at
least one of the colorings. Equivalently, this is the minimum $d$ so
that one can assign to every vertex $v$ a vector
in $Z^d$, so that two vertices are adjacent if and only
if the corresponding vectors differ in all coordinates. This
dimension is also the minimum number of complete graphs so that $G$
is an induced subgraph of their tensor product,
where the tensor product of graphs $H_1, \ldots ,H_d$ is the
graph whose vertex set is the cartesian product of the vertex sets of the graphs $H_i$, and two vertices $(u_1,u_2, \ldots ,u_d)$ and $(v_1,v_2, \ldots ,v_d)$ are adjacent iff $u_i$ is adjacent (in $H_i$) to $v_i$ for all $1 \leq i \leq d$.
Yet another equivalent definition is the minimum number of subgraphs of the
complement $\overline{G}$ of $G$ so that each subgraph is a vertex disjoint
union of cliques, and every edge of $\overline{G}$ belongs to at least one of the subgraphs.
For positive integers $s,r \geq 2$ let $K_s(r)$ denote the graph
consisting of $r$ pairwise vertex disjoint copies of the complete
graph $K_s$, and let $Q(s,r)$ denote the product dimension
of this graph. Lov\'asz, Ne\v{s}et\v{r}il and Pultr \cite{LNP}
(see also \cite{Al1})
proved that $Q(2,r)=\lceil \log_2 (2r) \rceil$.
The proof of the upper bound is simple. If $q=\lceil \log_2 (2r)
\rceil$ then
$2^q \geq 2r$. Hence one can assign
distinct binary vectors of length $q$ to the $2r$ vertices of $K_2(r)$
so that the vectors assigned to each pair of adjacent vertices are
antipodal, that is, differ in all coordinates. It is easy to check
that two vertices are adjacent if and only if the corresponding vectors
differ in all coordinates, showing that $Q(2,r) \leq q.$
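The antipodal construction above is easy to verify mechanically. The following Python sketch (all function names are ours, not from the paper) assigns antipodal pairs of binary vectors of length $q=\lceil \log_2(2r)\rceil$ to the $r$ cliques of $K_2(r)$ and checks that adjacency coincides with the vectors differing in all coordinates.

```python
# A sketch (names ours) of the upper-bound construction for Q(2, r):
# assign antipodal pairs of binary vectors of length q = ceil(log2(2r))
# to the r cliques of K_2(r).
from itertools import product
from math import ceil, log2

def q2_assignment(r):
    q = ceil(log2(2 * r))
    assignment, used = {}, set()
    i = 0
    for v in product([0, 1], repeat=q):
        w = tuple(1 - x for x in v)  # the antipodal (complemented) vector
        if v in used or w in used:
            continue
        assignment[(i, 0)], assignment[(i, 1)] = v, w
        used.update({v, w})
        i += 1
        if i == r:
            break
    return assignment

def adjacent(u, v):
    # in the tensor-product encoding, adjacency = differing everywhere
    return all(a != b for a, b in zip(u, v))

# two vertices of K_2(r) are adjacent iff they lie in the same clique
asg = q2_assignment(5)
for key_u, u in asg.items():
    for key_v, v in asg.items():
        if key_u != key_v:
            assert adjacent(u, v) == (key_u[0] == key_v[0])
```

Two vectors from different antipodal pairs can never differ in all coordinates, since that would make them an antipodal pair themselves; this is exactly why the check succeeds.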
The lower bound is proved in \cite{LNP} by a linear algebra
argument, and the proof given in \cite{Al1} applies exterior
algebra. There is yet another (similar) short proof that proceeds
by assigning to each vertex of $K_s(r)$ a multilinear polynomial
in $x_1,x_2, \ldots ,x_q$ that depends on the coloring used, and by showing
that these polynomials are linearly independent.
As mentioned in the abstract,
Lov\'asz, Ne\v{s}et\v{r}il and Pultr \cite{LNP} raised the problem
of estimating $Q(s,r)$ for larger values of $s$.
More recently, Kleinberg and Weinberg considered the same problem, motivated by the investigation of prophet inequalities for intersection of matroids \cite{KW}. In this paper, we determine
the asymptotic behavior of $Q(s,r)$ for any fixed $s \geq 2$ and large $r$.
\begin{theo}
\label{t11}
For every fixed $s$, $Q(s,r) = (1+o(1)) \log_2 r$, where the
$o(1)$-term tends to $0$ as $r$ tends to infinity.
\end{theo}
The main tool in the proof is the method of Gargano, K\"orner and
Vaccaro in their work on Sperner capacities \cite{GKV}.
For completeness, and
since we are interested
in the behavior of the $o(1)$-term in the theorem above, we
describe a variant of the method as needed here, in a combinatorial
way that avoids any application of information theoretic techniques.
The proof is based on what we call here $Z_s$-covering families of
vectors.
Let $Z_s$ denote the ring of integers modulo $s$. For a subset
$A \subset Z_s$ and a vector $v=(v_1,v_2, \ldots ,v_q) \in Z_s^q$,
we say that $v$ is $A$-covering if for every $a \in A$ there is some
$1 \leq i \leq q$ so that $v_i=a$. The vector $v$ is covering if it
is $Z_s$-covering. A family ${\cal F} \subset Z_s^q$ is $A$-covering if
for every ordered pair of distinct vectors $u,v \in {\cal F}$, the difference
$u-v$ is $A$-covering. ${\cal F}$ is covering if it is $Z_s$-covering.
Therefore, a family ${\cal F}$ of vectors in $Z_s^q$ is
covering if every element of $Z_s$
appears in at least one coordinate of
the difference between any two distinct vectors in the
family. Let $R(s,q)$ denote the maximum possible cardinality of
a covering family of vectors in $Z_s^q$. The following simple
statement describes the connection between $Q(s,r)$ and $R(s,q)$.
\begin{prop}
\label{p12}
If $R(s,q) \geq r$ then $Q(s,r) \leq q$.
\end{prop}
Note that by definition $R(s,q)=1$ for all $q<s$.
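The definitions above are easy to test by brute force for small parameters. Here is a short Python sketch (function names are ours) of a covering-family check, together with a small example and a non-example in $Z_3^3$.

```python
# A brute-force test (names ours) of the covering-family definition.
def is_A_covering(v, A, s):
    """v is A-covering if every element of A occurs among its coordinates mod s."""
    return set(a % s for a in A) <= {x % s for x in v}

def is_covering_family(F, s):
    """Every ordered pair of distinct vectors has a Z_s-covering difference."""
    return all(is_A_covering([x - y for x, y in zip(u, v)], range(s), s)
               for u in F for v in F if u != v)

# a covering family of size 2 in Z_3^3 ...
assert is_covering_family([(0, 0, 0), (0, 1, 2)], 3)
# ... and a non-example: the difference (0, 2, 2) never hits 1
assert not is_covering_family([(0, 0, 0), (0, 1, 1)], 3)
```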
Our main result about $R(s,q)$ is the following.
\begin{theo}
\label{t13}
\begin{enumerate}
\item
For every $q \geq s \geq 2$, $R(s,q) \leq 2^{q-1}$. Equality
holds for $s=2$.
\item
For every fixed $s$, $R(s,q) \geq (2-o(1))^q$, where the
$o(1)$-term tends to $0$ as $q \to \infty$.
\end{enumerate}
\end{theo}
The rest of this paper is organized as follows. Section 2 contains
the proof of Proposition \ref{p12} and that of a simple combinatorial lemma.
In Section 3 we present the proof of Theorem \ref{t13} and note that
in view of Proposition \ref{p12}
it implies Theorem \ref{t11}. The proof supplies
better estimates for prime values of $s$, and we thus first present
the proof for this special case (which suffices to deduce the assertion of
Theorem \ref{t11} for every $s$) and then
describe briefly the
proof for general $s$. The final Section 4 contains some
concluding remarks and open problems, including some (modest) estimates for
$R(s,q)$ when $q$ is not much larger than $s$.
To simplify the presentation we omit, throughout the paper, all
floor and ceiling signs whenever these are not crucial.
\section{Preliminaries}
We first prove Proposition \ref{p12}. If $R(s,q) \ge r$, then there
exists a matrix of elements in $Z_s$ with $r$ rows and $q$ columns so
that the difference of any two rows is covering. We will use the $q$
columns of this matrix to find $q$ graphs, each being
a disjoint union of cliques, that cover the complement of $K_s(r)$. This complement is a complete multipartite graph with $r$ parts of size $s$. Label the vertices of this graph with elements of $Z_s$ so that in each part all labels are used exactly once. We will associate each row of the matrix to a part and each column to a vertex disjoint union of cliques.
For a column $(a_1, \cdots, a_r)^T$, consider the following graph. For each $0 \le k<s$, form a clique on the $r$ vertices obtained by taking the $(k+a_i)$th vertex (indices modulo $s$) from the $i$th part, for $1 \le i \le r$; then take the union of the $s$ cliques obtained as $k$ ranges over $0 \le k<s$. This is clearly a union of $s$ vertex-disjoint cliques.
Now, suppose we have two vertices of the graph we are trying to cover in different parts, say the $(\ell+m)$th vertex of part $i$
and the $\ell$th vertex of part $j$ for some $1 \le i<j \le r$
and some $\ell, m \in Z_s$. Then the difference of the $i$th and $j$th rows contains an $m$ in some column $(a_1, \cdots, a_r)^T$, so that $a_i-a_j \equiv m$ in $Z_s$, and then the disjoint union of cliques corresponding to this column will cover our desired edge.
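The construction in this proof can be instantiated directly. The sketch below (function names are ours) takes $s=r=3$ and the covering matrix whose rows are the arithmetic progressions $(0,a,2a)$ modulo $3$ (the same rows used in Proposition \ref{p41}), builds one vertex partition into cliques per column, and confirms that every cross-part pair of vertices of the complement of $K_3(3)$ lies in a common clique.

```python
# A sketch (function names ours) of the construction in the proof of
# Proposition 1.2, instantiated for s = r = 3.  The rows of M are the
# arithmetic progressions (0, a, 2a) mod 3, whose pairwise differences
# are covering.
def clique_covers(M, s):
    """For each column, partition the vertices (part, label) of the
    complement of K_s(r) into s cliques, one vertex per part."""
    r = len(M)
    covers = []
    for col in range(len(M[0])):
        cliques = [frozenset((i, (k + M[i][col]) % s) for i in range(r))
                   for k in range(s)]
        covers.append(cliques)
    return covers

M = [[(a * j) % 3 for j in range(3)] for a in range(3)]
covers = clique_covers(M, 3)

# every edge of the complete multipartite complement is inside some clique
for i in range(3):
    for j in range(3):
        for x in range(3):
            for y in range(3):
                if i != j:
                    assert any((i, x) in C and (j, y) in C
                               for cliques in covers for C in cliques)
```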
Next, we need the following simple lemma.
\begin{lemma}
\label{l21}
Let $H$ be a bipartite graph with classes of vertices $A_1$, $A_2$ where
$|A_1|=n_1$, $|A_2|=n_2$, each vertex of $A_1$ has degree $d_1$, and each vertex of $A_2$ has degree $d_2$. Furthermore
suppose that $d_2 \ge \log(2n_2)$. Then there is a union of
vertex-disjoint
stars with centers in $A_1$, each star having at least
$\frac{d_1}{4 \log (2n_2)}$ leaves, such that all vertices of
$A_2$ are leaves.
\end{lemma}
\begin{proof}
Define a random subset $S$ of $A_1$ by choosing each vertex of $A_1$ to be in $S$ independently with probability $p=\frac{\log(2n_2)}{d_2}$. We claim that with positive probability, each vertex of $A_2$ has between $1$ and $4\log(2n_2)$ neighbors in $S$.
The proof is a simple union bound; a fixed vertex $v \in A_2$ has probability $$(1-p)^{d_2}< e^{-pd_2}=\frac{1}{2n_2}$$ of having no neighbors in $S$, and probability at most $$\dbinom{d_2}{4\log(2n_2)}p^{4\log(2n_2)} \le \left( \frac{ped_2}{4\log(2n_2)} \right)^{4\log(2n_2)}=\left(\frac{e}{4}\right)^{4\log(2n_2)}<\frac{1}{2n_2}$$ of
having more than $4\log(2n_2)$ neighbors, proving the claim.
Fix an $S$ with this property.
We now finish the proof of the lemma by an application of Hall's theorem.
For all $S' \subseteq S$, let $N(S')$ denote the set of all neighbors of
$S'$ and let $e(S',A_2)$ denote the number of edges from $S'$ to
$A_2$. Then
$|N(S')| \ge \frac{e(S',A_2)}{4\log(2n_2)}
=\frac{d_1}{4\log(2n_2)}|S'|$.
Hence every subset of $S$
expands by a factor of at least
$\frac{d_1}{4\log(2n_2)}$.
Thus by Hall's theorem, there is a
union of disjoint stars whose centers are exactly
the vertices of $S$, each having at least $\frac{d_1}{4\log(2n_2)}$ leaves.
Every remaining vertex of $A_2$ is adjacent to some vertex in $S$,
so we can simply add it to an existing star. \end{proof}
\section{Covering families}
\subsection{The upper bound}
The following proposition implies the assertion of Theorem \ref{t13},
part 1.
\begin{prop}
\label{p31}
Fix $s \geq 2$, and let ${\cal F} \subset Z_s^q$ be a $\{0,1\}$-covering
family of vectors. Then $|{\cal F}| \leq 2^{q-1}$. For $s=2$ equality holds.
\end{prop}
\begin{proof}
Put $m =|{\cal F}|$. Let $p$ be a prime divisor of $s$ and consider the
vectors in ${\cal F}$ as vectors in $Z_p^q$ by reducing their coordinates modulo
$p$. Note that these vectors form a $\{0,1\}$-covering family over $Z_p$. Let
$v_i=(v_{i1},v_{i2}, \ldots ,v_{iq})$, $(1 \leq i
\leq m)$ be the vectors in ${\cal F}$ (considered as elements of
$Z_p^q$).
For each $1\leq i \leq m$ define two polynomials $P_i,Q_i$ in
the variables $x_1,x_2, \ldots ,x_q$ over $Z_p$ as follows.
$$
P_i(x_1, \ldots ,x_q)=\prod_{j=1}^q (x_j -v_{ij}),~~
Q_i(x_1, \ldots ,x_q)=\prod_{j=1}^q (x_j -v_{ij}-1).
$$
It is not difficult to check that for every $i$, $Q_i(v_i) \neq 0$
and $P_i(v_i)=0$. In addition, for every
$1 \leq i \neq i' \leq m$, $P_{i'}(v_i) =0$ (as there is a coordinate
$j$ for which $v_{ij}-v_{i'j}=0$) and $Q_{i'}(v_i)=0$ (as there
is a $j$ so that $v_{ij}-v_{i'j}=1$).
Similar reasoning gives that for the vectors $v_i+J$, where $J$ is the all
$1$-vector of length $q$, $P_i(v_i+J) \neq 0$, $Q_i(v_i+J)=0$, and
for every $i' \neq i$,
$P_ {i'}(v_i+J) =Q_{i'} (v_i +J)=0$. Therefore, for each member of
the collection
of $2m$ polynomials $\{P_i,Q_i: 1 \leq i \leq m\}$ there is an
assignment of values of the variables in which this member is nonzero
and all others vanish. This easily implies that the set of $2m$ polynomials
$P_i,Q_i$ is linearly independent over $Z_p$,
and as each of its members lies in the space of multilinear
polynomials in the $q$ variables $x_1, \ldots, x_q$, the number, $2m$,
of these polynomials is at most the dimension of this space, which is
$2^q$. It follows that $|{\cal F}| =m \leq 2^{q-1}$, as needed. For $s=2$
the family of all binary vectors in which the first coordinate is $1$
is $\{0,1\}$-covering, showing that $R(2,q) \geq 2^{q-1}$ and
completing the proof.
\end{proof}
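The equality case $s=2$ in the proposition above can be confirmed by direct enumeration. The following sketch (ours) checks that the $2^{q-1}$ binary vectors with first coordinate $1$ form a $\{0,1\}$-covering family in $Z_2^q$.

```python
# The equality case s = 2 (sketch, names ours): all binary vectors with
# first coordinate 1 form a {0,1}-covering family of size 2^(q-1) in Z_2^q.
from itertools import product

q = 4
F = [v for v in product([0, 1], repeat=q) if v[0] == 1]
assert len(F) == 2 ** (q - 1)
for u in F:
    for v in F:
        if u != v:
            diff = {(x - y) % 2 for x, y in zip(u, v)}
            # first coordinates agree (difference 0); distinctness gives a 1
            assert diff == {0, 1}
```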
\subsection{Prime $s$} For prime $s \ge 3$, we will prove that $R(s,q)
\ge (2-o(1))^q$, where the $o(1)$-term tends to $0$ as $q \to \infty$ for each fixed $s$.
The crux of the proof is a Markov chain argument from \cite{GKV}, which we will iterate $O(\log s)$ times.
A \emph{balanced word} over $Z_s^q$ is a word containing the letters $1$ through $s-1$ an equal number of times. A \emph{special balanced word} is such a balanced word so that the first $\frac{q}{(s-1)/2}$ letters are $1$ and $2$ in some order, the next $\frac{q}{(s-1)/2}$ letters are $3$ and $4$ in some order, and so on.
Construct a bipartite graph between the set $A_2$ of balanced words
$w$ over $Z_s^q$ and the set $A_1$ of permutations $\pi$ on $q$ elements defined as follows: $w$ and $\pi$ are adjacent if and only if $\pi(w)$ is a special balanced word.
By symmetry all vertices in $A_1$ have the same degree $d_1$, and all vertices in $A_2$ have the same degree $d_2$. We have $$n_2=|A_2|=\dbinom{q}{q/(s-1), \cdots, q/(s-1)} \le (s-1)^q$$ and $d_1$ is the total number of special balanced words,
so $$d_1=\dbinom{2q/(s-1)}{q/(s-1)}^{(s-1)/2}>\frac{2^q}{q^{s/2}}.$$ Furthermore, $d_2=\left((\frac{q}{s-1})!\right)^{s-1}d_1 \ge \log(2n_2)$ so by Lemma \ref{l21} there exists a way to map balanced words to some set $T$ of permutations $\pi$ of $q$ elements, so that each balanced word is
associated to exactly one permutation, and each permutation in $T$ is associated to at least $\frac{d_1}{4\log(2n_2)}>\frac{2^q/q^{s/2}}{4\log(2(s-1)^q)}>\frac{2^q}{q^s}$ balanced words.
Thus, we can partition the balanced words into sets $S_1, S_2, \cdots$ so that for each $S_i$ we have $|S_i|>\frac{2^q}{q^s}$ and for all $i$ there exists $\pi_i$ so that $\pi_i(s_i)$ is a special balanced word for all $s_i \in S_i$.
Given all of the special balanced words of length $q$, any two of them have a difference vector which covers $\{\pm 1\}$. The idea of the proof will be to amplify this set $\{-1,1\}$, first to $\{-\alpha,-1,1,\alpha\}$ for a primitive root $\alpha$ modulo $s$, and after $r$ steps to $\{\pm \alpha^b\}$ for $0 \le b<2^r$. At each stage, the number of vectors will be $(2-o(1))^L$ where $L$ is the length of the vectors. Thus after $O(\log s)$ steps we will have a set of vectors that is $Z_s^{*}$ covering. We can then add an extra coordinate of $0$ to all of the vectors to make them $Z_s$-covering.
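As a sanity check on this amplification (the code and names are ours, not from the paper), the sketch below computes the covered set $\{\pm \alpha^b \bmod s : 0 \le b < 2^r\}$ after $r$ doubling steps. Since $-1=\alpha^{(s-1)/2}$ for a primitive root $\alpha$, the set reaches all of $Z_s^{*}$ once $2^r \ge (s-1)/2$, which is why $O(\log s)$ iterations suffice.

```python
# Sanity check (ours) on the amplification: the covered set
# {±alpha^b mod s : 0 <= b < 2^r} reaches all of Z_s^* once
# 2^r >= (s-1)/2, because -1 = alpha^((s-1)/2).
def covered_after(s, alpha, r):
    return {sign * pow(alpha, b, s) % s
            for b in range(2 ** r) for sign in (1, -1)}

s, alpha = 13, 2                                    # 2 is a primitive root mod 13
assert covered_after(s, alpha, 1) == {1, 2, 11, 12}       # {±1, ±2}
assert covered_after(s, alpha, 3) == set(range(1, 13))    # all of Z_13^*
```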
We describe the first step of this iteration in detail. Fix a primitive root $\alpha$ modulo $s$, which will be constant throughout the steps. Also fix $n=\frac{100}{\epsilon}(\log(s))^2$, which will again be constant throughout the steps.
Initially we set $q=q_0=(1+o(1))100s^2$, ensuring it is divisible by
$s-1$.
We will construct words of length $q_1=qn$ by stringing together balanced words of length $q$ in a specific way. If $x_i \in S_j$, then force $x_{i+1} \in \alpha S_j$; here we mean that if we take $x_{i+1}$ and multiply its letters by $\alpha^{-1}$ pointwise, the result will be in $S_j$.
Consider all vectors of length $qn$ constructed according to this rule, by concatenating $n$ balanced words of length $q$ in this way.
There are more than $n_2(\frac{2^q}{q^s})^{n-1}$ such words,
because at each stage other than the first we must pick
$x_{i+1} \in \alpha S_j$, where $j$ is the index with $x_i \in S_j$, and thus there
are more than $\frac{2^q}{q^s}$ choices for $x_{i+1}$.
By the pigeonhole principle we may fix the values of $x_1$ and $x_n$
over all such words; this costs us a factor of at most $n_2^2$, and thus we now
have more than $\frac{1}{n_2}(\frac{2^q}{q^s})^{n-1}$ such words with these fixed endpoints.
Using that $n_2<s^q$, we can find more than $\frac{2^{q(n-1)}}{q^{s(n-1)}s^q}=(2-o(1))^{qn}$ words of length $qn$ of the form $x_1x_2 \cdots x_n$ so that $x_1, x_n$ are fixed. We now note that for any two different words of this form $x_1 \cdots x_n$ and $x'_1 \cdots x'_n$,
with $x_1=x'_1, x_n=x'_n$, there is some minimal $i \ge 2$ so that $x_i \neq x'_i$. But then there is some $k$ so that $x_{i-1}=x'_{i-1} \in S_k$, and that means $x_i, x'_i \in \alpha S_k$.
Because $x_i \neq x'_i$, we have $\alpha^{-1}x_i \neq \alpha^{-1}x'_i$, and both $\alpha^{-1}x_i$ and $\alpha^{-1}x'_i$ are in $S_k$. It follows that $\alpha^{-1}x_i-\alpha^{-1}x'_i$ covers $\{\pm 1\}$ and so $x_i-x'_i$ covers $\{\pm \alpha\}$.
Similarly, there is some maximal $i<n$ so that $x_i \neq x'_i$. Then $x_{i+1}=x'_{i+1} \in \alpha S_j$ for some $j$, so $x_i \in S_j$, $x'_i \in S_j$. As $x_i \neq x'_i$, it follows that $x_i-x'_i$ must cover $\{\pm 1\}$ as coordinates.
When $s=5$, setting $\alpha=2$ and applying this construction already
gives a covering family for $\mathbb{Z}_s^{*}=\mathbb{Z}_5^*$
with $(2-o(1))^N$ vectors of length $N$. In the general case we iterate this argument to find $(2-o(1))^N$ vectors of length $N$, so that after the $r$th iteration the vectors we get cover $\{\pm \alpha^b\}$ for all $0 \le b<2^r$. We describe how to do this inductively.
After $r$ iterations, we find for some $q=q_r$ (that depends on $s$)
a family of $M^q=(2-o_q(1))^q$ (balanced) vectors over $Z_s^q$ which covers $\pm \alpha^b$ for all $0 \le b<2^r$. We call these vectors $y_1, \cdots, y_{M^q}$ and let $Y=\{y_1, \cdots, y_{M^q}\}$. We repeat the above argument.
The $y_i$ play the role of the special balanced words.
Again we construct a bipartite graph.
On one side there is the set $A_2$ of all balanced words of
length $q$, and on the
other side there is $A_1$, the set of permutations of $q$ elements.
A permutation $\pi$ is adjacent to a balanced vector $y$ if and only
if $\pi(y) \in Y$. Again $n_2=|A_2| \le (s-1)^q$ but now $d_1=M^q$.
It can easily be verified that $d_2 \ge \log(2n_2)$. Thus by Lemma \ref{l21} the balanced words of length $q$ can be split into sets $S_1, S_2, \cdots$ so that each $S_i$ satisfies $|S_i| \ge \frac{d_1}{4\log(2n_2)} \ge \frac{M^q}{4+4q\log(s-1)}=(2-o_q(1))^q$, and furthermore for each $S_i$ there exists a permutation $\pi_i$ so that $\pi_i(s_i) \in Y$ for all $s_i \in S_i$. Now we will again construct a Markov chain.
Consider all words of length $qn=q_{r+1}$ consisting of $n$ balanced words of length $q$ of the form $x_1x_2 \cdots x_n$, so that if $x_i \in S_j$, then $x_{i+1} \in \alpha^{2^r}S_j$. If we have two such words $x_1x_2 \cdots x_n$ and $x'_1x'_2 \cdots x'_n$ so that $x_1=x'_1$, $x_n=x'_n$, let $i>1$ be minimal so that $x_i \neq x'_i$. Then if $x_{i-1}=x'_{i-1} \in S_j$, $x_i, x'_i \in \alpha^{2^r}S_j$.
This means that $x_i-x'_i$ covers $\{\pm \alpha^b\}$ modulo $s$ for $2^r \le b<2^{r+1}$. Now, if we let $i<n$ be maximal so that $x_i \neq x'_i$, then $x_{i+1}=x'_{i+1} \in \alpha^{2^r}S_j$ for some $j$ and so $x_i, x'_i$ are not equal but are both in $S_j$.
This means $x_i-x'_i$ covers $\{\pm \alpha^b\}$ modulo $s$
for $0 \le b<2^r$. So indeed the family covers $\{\pm \alpha^b\}$
modulo $s$
for $0 \le b<2^{r+1}$, as long as $x_1$ and $x_n$ are fixed over the family. We can always find
$$
\frac{\min_{i}|S_i|^{n-1}}{\dbinom{q}{q/(s-1), \cdots, q/(s-1)}}
\ge \frac{(2-o(1))^{qn}}{(s-1)^q}=(2-o(1))^{qn}
$$
such words if $n$ is large enough, where the $o(1)$ terms tend to $0$
as $q \to \infty$ and $n$ is sufficiently large.
We will soon see that our choice of $n=\frac{100}{\epsilon}(\log(s))^2$
is sufficient. Iterating the argument $ \log_2(s) $
times allows us to find $(2-o(1))^q$ vectors of some length $q$ which cover $Z_s^{*}$.
Adding a single coordinate where all vectors are $0$ gives us vectors of
length $q+1$ that cover $Z_s$, without changing the asymptotic analysis.
With care, we can extract quantitative bounds. We assume $\epsilon>0$ is a fixed constant and show that we only require $q=\left(\frac{O(1)}{\epsilon}(\log(s))^2\right)^{\log(s)}$ to have a $(2-\epsilon)^q$ size covering system over $Z_s^q$ for large $s$.
Say that after $r$ iterations, we have $M^q=M_r^q$ vectors of length
$q=q_r$ for some $M \le 2$ (which is nearly $2$).
Then $\min_{i}|S_i| \ge \frac{M^q}{5q\log(s)}$ and thus we find at least $$\frac{\min_{i}|S_i|^{n-1}}{\dbinom{q}{q/(s-1), \cdots, q/(s-1)}} \ge \frac{M^{qn}}{M^q(5q\log(s))^ns^q} \ge \frac{M^{qn}}{(2s)^q(5q\log(s))^n}$$ $$=\left( \frac{M}{(2s)^{1/n}q^{1/q}(5\log(s))^{1/q}} \right)^{qn}=M_{r+1}^{q_{r+1}}$$ vectors of length $qn=q_{r+1}$.
Recall that $n=\frac{100}{\epsilon}(\log(s))^2$ and $q_0=(1+o(1))100s^2$,
so when we iterate the Markov chain argument $\log_2(s)$ times, we lose a factor of at most
$$
\frac{M_0}{M_{\log_2(s)}} \le (2s)^{2\log(s)/n}
\left(\prod_{j=0}^{\infty}(q_0n^j)^{1/q_0n^j}\right)(5\log(s))^{1/q_0}
(5\log(s))^{2\log(s)/n}
$$
$$
\le (10s\log(s))^{2\log(s)/n}2^{10\log(s)/s^2}(5\log(s))^{1/100s^2}.
$$
In the last inequality here
the bound on the second term holds because the relevant infinite product
is at most $(q_0^{1/q_0})^{100}<2^{10 \log s/s^2}$.
For large $s$ this is easily seen to be smaller than, say,
$1+\epsilon/4$. Since for large $s$, $M_0>2-\epsilon/2$, for
$s>s_0(\epsilon)$ we get $M_{\log_2(s)} >2-\epsilon$.
Thus at the end we have
$q=100s^2\left(\frac{O(1)}{\epsilon}(\log(s))^2\right)^{\log_2(s)}$
and at least $(2-\epsilon)^q$ vectors of length $q$ which cover $Z_s$.
One can easily modify the argument to work for any larger $q$, or simply
use the super-multiplicative property of $R(s,q)$ (see the
beginning of Section 4)
to conclude,
taking $\epsilon=1/\log s$, that for every large $s$ and for all
$q>s^{(3+o(1)) \log \log s}$, $R(s,q) > (2-\frac{1}{\log s})^q$.
This completes the proof of Theorem \ref{t13}, part 2, for prime $s$.
\subsection{General $s$}
Given an arbitrary fixed integer $s>2$, we now show how to find $(2-o(1))^q$ vectors in $Z_s^q$ which form a $Z_s$-covering family; this shows
$R(s,q) \ge (2-o(1))^q$ even for composite $s$.
The general strategy is similar to our strategy in the previous section, except now we work over $Z$, and the set we are covering does not grow
so quickly. As before, we are not concerned with the vectors
covering $0$, because we can simply add an extra
coordinate to deal with it. Since the argument is very similar to
the one described in the previous subsection, we only provide a brief
description omitting some of the formal details.
We prove the stronger statement that for any fixed $s$, we can find a $(2-o_q(1))^q$ size family over $Z$ that covers $[-s,s]$, i.e. the difference of any two vectors contains all integers between $-s$ and $s$ as coordinates. Let $S$ be the least common multiple of the first $s$ positive integers. We will assume without loss of generality that $2S \mid q$.
At the first step of our iteration, we consider vectors over $Z^q$
with an equal number of each element of $[2S]$ as coordinates so that the first $\frac{q}{S}$ coordinates are $1$s and $2$s in some order,
the next $\frac{q}{S}$ are an equal number of $3$s and $4$s, and so on.
We can find $(2-o_q(1))^q$ of these and they cover $\{ \pm 1\}$. Furthermore, they have the property that for any ordered pair of these vectors, there is a coordinate in which the first has an even integer $2k$ and the second has the odd integer $2k-1$ for some $1 \le k \le S$. This property is crucial and maintained throughout our iterations.
Now we describe the $m$th step of our iteration, for $2 \le m \le s$. Define a bijection $f=f_m$ on $[2S]$ so that for all integers $1 \le k \le S$, $f(2k)=f(2k-1)+m$.
We can do this for instance by setting $f(1)=1, f(2)=m+1, f(3)=2, f(4)=m+2, \cdots, f(2m-1)=m, f(2m)=2m$ and then set $f(x)=f(x-2m)+2m$ for $x>2m$ as long as $x \le 2S$.
We then apply the same Markov chain argument as before, defining
sets $S_i$ and constructing words $x_1x_2 \cdots$
starting and ending at the same vectors.
Now, however, when $x_i$ is in some set $S_j$, instead of
demanding $x_{i+1} \in \alpha S_j$ we require
$x_{i+1} \in f(S_j)$, meaning that if we apply $f^{-1}$ to each coordinate of $x_{i+1}$, the result will be in $S_j$.
Looking at the first place where two vectors differ gives us a difference of $\pm m$. Looking at the last place where they differ shows that the crucial property is preserved, and also that the differences $\pm 1, \cdots, \pm (m-1)$ are retained.
Thus after $s=O(1)$ iterations, this algorithm produces $(2-o_q(1))^q$
vectors of length $q$ which cover all of $[-s,s]$, after we add
an extra coordinate to deal with covering $0$. This completes the proof
of Theorem \ref{t13}.
\section{Concluding remarks and open problems}
A natural open problem is to study the functions
$R(s,q)$ and $Q(s,r)$ in general. There are several
simple properties that $R(s,q)$ satisfies. We know that $R(s,q)$ is
(weakly) increasing in $q$, because to create $r$ covering
vectors for $Z_s$ of length $q'>q$, we can take $r$ vectors for $Z_s$ of length $q$ and pad them with $q'-q$ zeroes at the end.
Furthermore, we know that $R(s,q)$ is super-multiplicative, i.e. $R(s,q_1+q_2) \ge R(s,q_1)R(s,q_2)$,
because if $m=R(s,q_1)$, $n=R(s,q_2)$ then we can find
vectors $v_1, \cdots, v_m$ of length $q_1$ and $w_1, \cdots, w_n$ of
length $q_2$ that form covering families. The $mn$ vectors $v_iw_j$
obtained by concatenating $v_i$ and $w_j$
are clearly a covering family of length $q_1+q_2$.
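The concatenation argument is simple enough to check mechanically. The sketch below (function names are ours) verifies super-multiplicativity on two small covering families in $Z_3^3$: if either the first or the second halves of two concatenated vectors differ, the corresponding half already contributes a covering difference.

```python
# Super-multiplicativity by concatenation (sketch, names ours): if F1 and
# F2 are covering families over Z_s, so are all concatenations v + w.
def is_covering(F, s):
    return all({(x - y) % s for x, y in zip(u, v)} == set(range(s))
               for u in F for v in F if u != v)

def concat_family(F1, F2):
    return [v + w for v in F1 for w in F2]

F1 = [(0, 0, 0), (0, 1, 2)]        # covering in Z_3^3
F2 = [(0, 0, 0), (0, 2, 1)]        # covering in Z_3^3
G = concat_family(F1, F2)
assert len(G) == len(F1) * len(F2) and is_covering(G, 3)
```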
For $q<s$, we have $R(s,q)=1$, because of course we can take a single vector in $Z_s^q$, but if we take two then their difference can only
cover a set of size $q$ and cannot cover $Z_s$.
The next natural question is studying the value of $R(s,s)$.
\begin{prop}
\label{p41}
$R(s,s) \le s$, and $R(s,s) \ge p$ where $p$ is the smallest prime
factor of $s$. When $p=2$ this is tight, that is,
if $s$ is even, then $R(s,s)=2$.
\end{prop}
\begin{proof}
For the lower bound, for each $0 \le a<p$ we have a vector $(0,a,2a, \cdots, (s-1)a)$ reduced modulo $s$. This is a covering system for $Z_s$, since all positive integers smaller than $p$ are relatively prime to $s$.
For the upper bound, assume there was a covering family in $Z_s^s$ with $s+1$ vectors, so that the difference of any two of these vectors has all values of $Z_s$ exactly once.
By the pigeonhole principle, there exist
two vectors $v$ and $w$ so that the difference of
their first and second coordinates is the same. But then $v-w$ has the same value in its first and second coordinate, a contradiction.
When $s$ is even, $R(s,s) \ge 2$ as $2$ is the least prime factor of $s$. If $R(s,s) \ge 3$ then there exist $3$
vectors in $\mathbb{Z}_s^s$, $(a_1, \cdots, a_s), (b_1, \cdots, b_s), (c_1, \cdots, c_s)$, which are covering.
But then $\sum (a_i-b_i) \equiv \sum_{j=0}^{s-1}j \equiv (s-1)\frac{s}{2} \equiv \frac{s}{2}$ modulo $s$, and similarly $\sum (b_i-c_i) \equiv \sum (c_i-a_i) \equiv \frac{s}{2}$ modulo $s$. Hence, these three sums are all $\frac{s}{2}$ modulo $s$, so they must add up to $\frac{3s}{2} \equiv \frac{s}{2}$ modulo $s$. But they add up to $0$, so this is a contradiction.
\end{proof}
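The lower-bound construction in the proof above is easy to test for small composite $s$. The sketch below (function names are ours) verifies that the rows $(0,a,2a,\ldots,(s-1)a)$ for $0 \le a < p$ are covering when $p$ is the least prime factor of $s$, and that the construction genuinely breaks once a row difference shares a factor with $s$.

```python
# The lower-bound rows of Proposition 4.1 (sketch, function names ours):
# (0, a, 2a, ..., (s-1)a) mod s for 0 <= a < p, p the least prime factor of s.
def arithmetic_rows(s, p):
    return [tuple((a * j) % s for j in range(s)) for a in range(p)]

def is_covering(F, s):
    return all({(x - y) % s for x, y in zip(u, v)} == set(range(s))
               for u in F for v in F if u != v)

assert is_covering(arithmetic_rows(15, 3), 15)      # hence R(15,15) >= 3
# going past the least prime factor fails: 3 is not invertible mod 15,
# so the difference of the rows a = 3 and a = 0 only hits multiples of 3
assert not is_covering(arithmetic_rows(15, 4), 15)
```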
Nonetheless, the problem of determining $R(s,s)$ for all $s$
remains open, and the lower bound in the last proposition
is not tight.
A computer search gives that $R(15,15) \ge 4$ (\cite{L}). One example is the $4$ vectors given by the rows of the matrix $$\begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 \\
0 & 2 & 1 & 5 & 7 & 9 & 12 & 14 & 13 & 3 & 6 & 4 & 10 & 8 & 11 \\
0 & 3 & 9 & 1 & 10 & 14 & 7 & 11 & 4 & 12 & 5 & 8 & 2 & 6 & 13 \end{pmatrix} $$
which are covering for $Z_{15}$.
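The covering property of this matrix can be confirmed directly: since $q=s=15$, a difference is covering exactly when it is a permutation of $Z_{15}$. The following check (ours) does this for all ordered pairs of rows.

```python
# Direct verification (ours) that the four rows above are covering for
# Z_15: every pairwise difference is a permutation of Z_15.
M = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
    [0, 2, 1, 5, 7, 9, 12, 14, 13, 3, 6, 4, 10, 8, 11],
    [0, 3, 9, 1, 10, 14, 7, 11, 4, 12, 5, 8, 2, 6, 13],
]
for u in M:
    for v in M:
        if u != v:
            assert {(x - y) % 15 for x, y in zip(u, v)} == set(range(15))
```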
For $q$ which is only a little bigger
than $s$, we can prove a reasonable upper bound,
using essentially the same observation applied in the
proof that $R(s,s) \le s$.
\begin{prop}
If $2(q-s)^2<s-1$, then $R(s,q) \le s+2(q-s)+2(q-s)^2$.
\end{prop}
\begin{proof}
Suppose for contradiction that the claimed bound fails for some $q>s$ with $2(q-s)^2<s-1$.
By the definition of $R(s,q)$, for any $r \le R(s,q)$ there exists a matrix of $r$ rows and $q$ columns so that the difference of any two rows contains all values modulo $s$; in particular there exists such a matrix for $r=s+1+2(q-s)+2(q-s)^2$.
We will double count the number of pairs of rows and columns $r_i, r_j, c_k, c_{\ell}$ with the following property: the $2 \times 2$ submatrix formed by $r_i, r_j, c_k, c_{\ell}$ has sums of opposite corners equal.
For any two rows $r_i, r_j$, we examine their difference. This is a vector over $Z_s$ with $q$ entries containing all elements modulo $s$, so at most $\dbinom{q-s+1}{2}$ pairs of its entries can be equal.
This means that for the pair of rows $r_i, r_j$, there exists at most $\dbinom{q-s+1}{2}$ pairs of columns $c_k, c_{\ell}$ that satisfy our property.
So the total number of such pairs as $1 \le i,j \le r$, $1 \le k, \ell \le q$ range is at most $\dbinom{r}{2}\dbinom{q-s+1}{2}$. However, if we instead fix $c_k, c_{\ell}$ we see that the difference of these two columns is a vector of length $r$ over $Z_s$ which has at least $r-s$ pairs of equal
elements. This means that
there are at least $\dbinom{q}{2}(r-s)$ $r_i, r_j, c_k, c_{\ell}$ with our property.
Hence
$$
\dbinom{q}{2}(r-s) \le \dbinom{r}{2}\dbinom{q-s+1}{2}
$$ and so $2q(q-1)(r-s) \le r(r-1)(q-s+1)(q-s).$
By our assumption, $2q>r$ and so $2(q-1) \ge r-1$. Hence $2q(q-1)(r-s) \le r(r-1)(q-s+1)(q-s) \le 4q(q-1)(q-s+1)(q-s).$ Thus $r-s \le 2(q-s+1)(q-s)=2(q-s)+2(q-s)^2$ and so $r \le s+2(q-s)+2(q-s)^2$, a contradiction. \end{proof}
Note that this argument gives a bound only when $q=s+O(\sqrt{s})$,
and the bound is $s+O((q-s)^2)$. We believe that this is not tight.
There is a nontrivial (though weak)
bound which holds in the regime $q=s+\omega(\sqrt{s})$ as long as $q<Cs$
for some fixed constant $C< \log_2 e$.
The natural open problem here is to study cases when $q$ is larger but still fairly small, for instance if $q=1.5s$, $q=s\log(s)$, or $q=s^2$. Furthermore, it would be interesting to figure out how large $q$ has to be so that $R(s,q)>(2-\epsilon)^s$ for a small positive constant $\epsilon$; the following argument shows that if $\epsilon$ is small enough, then $\frac{q}{s}$ must be bigger than an absolute constant which is above $1.44$, but we believe this is far from tight.
\begin{prop}
If $1 \le \frac{q}{s}<C<\log_2(e)$,
then $R(s,q) \le (2-\varepsilon)^q$ for $\varepsilon=\varepsilon(C)>0$.
\end{prop}
\begin{proof}
Say that $R(s,q) \ge r$ so there are $r$ vectors in $Z_s^q$ which are covering. Place them in a matrix with $r$ rows and $q$ columns. Now, for each column select uniformly an element of $Z_s$, and add it to all the elements of that column, reducing modulo $s$. This does not change the covering property.
After having done this, replace all entries of this matrix by their reduction modulo $2$. Given two entries of a matrix in the same column and different rows, if initially they differed by $k$ modulo $s$ they now have some probability $p_k$ of being equal that depends only on $k$,
which is the probability that if $x \in Z_s$ is chosen randomly
and uniformly
then the reductions of $x,x+k$ modulo $s$ have the same parity. One can easily check that $p_0=1$, $p_1=1/s$, $p_2=(s-2)/s$, and so on,
so that $\prod_{i \in Z_s}p_i=e^{-s(1+o(1))}$ by Stirling's formula.
Furthermore, if we take this product over all $i \in Z_s$ except
for a subset of size
$2cs$ for a small constant $c$, we have a bound of $e^{-s(1-\epsilon)}$, where $\epsilon$ is a small positive constant that tends to $0$ with $c$.
Let $K=cs$. For any pair of rows, the probability that their reductions modulo $2$ agree in all but at most $2K$ places is at most
$e^{-s(1-\epsilon')}$ for some $\epsilon'$ that tends to $0$ with $c$,
by a union bound over all
the at most $2\dbinom{q}{2K}$
possible sets of places where they do not match. Thus if we define
a graph on our $r$ vectors where two are adjacent iff their values
upon reduction modulo $2$ match in all but at most $2K$ places,
this graph will in expectation have edge density $e^{-s(1-\epsilon')}$
and thus for some fixed choice of the random shifts
will have at most $r^2e^{-s(1-\epsilon')}$ edges. Fixing these shifts
one can remove less than
$r^2e^{-s(1-\epsilon')}$ vertices from this graph and get an
independent set. But this set
must be of size at most $(2-\delta)^q$ for some $\delta=\delta(c) > 0$,
because this size is bounded by the cardinality
of a family of disjoint Hamming balls in $\{0,1\}^q$, each of
radius $K=cs$. Thus we have $r \le r^2e^{-s(1-\epsilon')}+(2-\delta)^q$,
and the same inequality holds for any $r'<r$ by applying the same
reasoning to a set of $r'$ of our vectors.
Now if $q<Cs$ for some fixed $C< \log_2 e$ then setting $q > q_0(\epsilon',C)$, $r'=3 (2-\delta)^q$ violates the
last inequality (with $r$ replaced by $r'$) since for this value
of $r'$ and large $q$, $(r')^2 e^{-s(1-\epsilon')}<r'/2$ and
$(2-\delta)^q <r'/2$ so their sum is smaller than $r'$.
This establishes the assertion of the proposition.
\end{proof}
On the opposite extreme, one may ask
what happens if $s \ge 3$ is a small fixed positive integer
and $q$ grows. We know by Proposition~\ref{p31} that $R(2,q)=2^{q-1}$,
so it is natural to ask what happens when $2$ is replaced by
a larger positive integer.
\begin{conj}
For any fixed $s \ge 3$, $R(s,q)=o(2^q)$,
and furthermore $R(s,q)=\Theta(2^q/q^{c})$ where
$c=c(s)>0$ is a constant that depends only on $s$.
\end{conj}
Note that the vectors over $Z_3^q$ with a zero in the first coordinate and with exactly $\lfloor q/2 \rfloor$ ones and $\lceil q/2 \rceil$ zeroes are a covering system, so $R(3,q)=\Omega(2^q/\sqrt{q})$.
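This lower-bound family for $s=3$ is also easy to verify by enumeration. The sketch below (function names are ours) builds the $\binom{q-1}{\lfloor q/2 \rfloor}=\Omega(2^q/\sqrt{q})$ vectors and checks the covering property: the shared first coordinate supplies the difference $0$, and the equal number of ones forces coordinates with differences $1$ and $-1 \equiv 2$.

```python
# The Omega(2^q / sqrt(q)) construction for s = 3 (sketch, names ours):
# 0/1 vectors with first coordinate 0 and exactly floor(q/2) ones,
# viewed in Z_3^q.
from itertools import combinations

def z3_family(q):
    F = []
    for ones in combinations(range(1, q), q // 2):
        v = [0] * q
        for i in ones:
            v[i] = 1
        F.append(tuple(v))
    return F

q = 6
F = z3_family(q)
for u in F:
    for v in F:
        if u != v:
            # 0 from the first coordinate; 1 and 2 from swapped one-positions
            assert {(x - y) % 3 for x, y in zip(u, v)} == {0, 1, 2}
```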
When $s$ is odd the best upper bound known for $R(s,q)$
is $O(2^{q})$, as shown in Proposition \ref{p31}.
When $s$ is even, if the vectors of a covering system for $Z_s^q$ are reduced modulo $2$, any two differ in at least $s/2$ places,
so there are at most $O(2^q/q^{\lfloor \frac{s-1}{4} \rfloor})$ of them,
as the Hamming balls of radius $\lfloor \frac{s-1}{4} \rfloor$
centered at these reduced vectors are pairwise disjoint.
It would be interesting to establish the above conjecture and to find the relevant constants $c$ if they indeed exist.
In particular it seems plausible that
$c=1/2$ when $s=3$, and if so then the lower bound for $R(3,q)$ is
essentially optimal. We conjecture that this is indeed the case.
\begin{conj}
$R(3,q)=\Theta(2^q/\sqrt{q}).$
\end{conj}
In \cite{Ca} it is shown that $R(3,q) \le (\frac{1}{2}+o(1))2^q$
when $q$ is even and $R(3,q) \le (\frac{1}{3}+o(1))2^q$ when $q$
is odd, leaving this problem open. Note that we do not even know
how to prove the weaker claim that a $\{-1,0,1\}$ covering system
(or equivalently just a $\{\pm 1\}$ covering system)
over $Z$ must have size $o(2^q)$.
\vspace{0.2cm}
\noindent
Our original motivation for studying the function $R(s,q)$ here is
its connection to the product dimension $Q(s,r)$ of the disjoint union of
$r$ cliques, each of size $s$. This connection is described in Proposition
\ref{p12}. The results here suffice to determine the asymptotic behaviour
of $Q(s,r)$ for every fixed $s$ as $r$ tends to infinity, but do not provide
tight bounds when $r$ is not much bigger than $s$.
We conclude this short paper with several simple comments about this range
of the parameters. The first remark is that the product dimension
$Q(s,r)$ is at least $s$ for every $r \geq 2$. To see this, fix a
vertex $u$ of the first clique and observe that in every proper coloring of
the graph $K_s(r)$ of $r$ disjoint cliques, each of size $s$, there is
at most one vertex $v$ of the second clique such that $u$
and $v$ have the same color. As there are altogether $s$ pairs $uv$ with
$v$ in the second clique, and each one has to be monochromatic in at least
one coloring of any collection witnessing the product dimension, the
number of colorings in such a collection is at least $s$. Another comment is that
$Q(s,r)$ is clearly monotone non-decreasing in both $r$ and $s$. Therefore,
for every $r,s \geq 2$, $Q(s,r) \geq Q(2,r) = \lceil \log_2(2r) \rceil$.
A {\em transversal design} $TD(r,s)$ of order $s$ and block size $r$ (with
multiplicity $\lambda=1$) is a set $V$ of $sr$ elements partitioned
into $r$ pairwise disjoint groups, each of size $s$, and a collection
of blocks, each containing exactly one element of each group, so that
every pair of elements from distinct groups is contained in exactly
one block. A transversal design is {\em resolvable}
if its blocks can be partitioned
into parallel classes where the blocks in any parallel class
partition the set $V$.
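For a prime $s$ and $2 \leq r \leq s$, the standard construction over $\mathbb{Z}_s$ produces a resolvable $TD(r,s)$: the blocks are the graphs of the affine maps $y = ax+b$, and fixing the slope $a$ gives a parallel class. The sketch below (plain Python, not part of the paper) builds the design for $r=4$, $s=5$ and verifies both defining properties:

```python
from itertools import combinations

def resolvable_td(r: int, s: int):
    """Resolvable TD(r, s) over Z_s, for s prime and 2 <= r <= s.

    Points are pairs (group index i, value y); the block with slope a and
    intercept b contains (i, (a*i + b) mod s) for every group i.  The s
    blocks sharing a slope a form one parallel class.
    """
    assert 2 <= r <= s
    return [[frozenset((i, (a * i + b) % s) for i in range(r))
             for b in range(s)]
            for a in range(s)]

r, s = 4, 5
classes = resolvable_td(r, s)
points = {(i, y) for i in range(r) for y in range(s)}

# Resolvability: each parallel class partitions the point set.
for cls in classes:
    assert set().union(*cls) == points
    assert sum(len(blk) for blk in cls) == len(points)

# TD property: each pair from distinct groups lies in exactly one block.
blocks = [blk for cls in classes for blk in cls]
for p1, p2 in combinations(points, 2):
    if p1[0] != p2[0]:
        assert sum(1 for blk in blocks if {p1, p2} <= blk) == 1
```

When $s$ is a prime power rather than a prime, the same construction works over $\mathbb{F}_s$; only the modular arithmetic needs replacing by field arithmetic.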
There is a substantial amount of literature about transversal
designs, see \cite{CD}. It is not difficult to check that
$Q(s,r)=s$ if and only if a resolvable
$TD(r,s)$ exists and hence the known results
about resolvable
transversal designs supply nearly precise information for the range
$r \leq s$ (it is easy to see that such a design cannot exist for
$r>s$). In particular, for every prime power $s$, $Q(s,s)=s$
and therefore by the obvious monotonicity, for any prime power $s$,
$Q(s,r)=s$ for every $2 \leq r \leq s$, and if $p$ is a prime power then for
every $s,r \leq p$, $Q(s,r) \leq p$. (Note that
Propositions \ref{p12} and \ref{p41} also imply that
$Q(s,s)=s$ when $s$ is
a prime.)
It is not difficult to prove that for every $s,r_1,r_2$,
\begin{equation}
\label{e999}
Q(s,r_1r_2) \leq Q(s,r_1)+Q(s,r_2).
\end{equation}
Indeed, given the graph
$K_s(r_1r_2)$ consisting of $r_1r_2$ disjoint cliques, each of size
$s$, we can split the cliques into $r_1$ disjoint groups, each consisting
of $r_2$ cliques. Define $Q(s,r_1)$
proper colorings in which the cliques
in every group are colored the same, based on the system of colorings that
shows that the product dimension of $K_s(r_1)$ is $Q(s,r_1)$. Add to these
$Q(s,r_2)$ additional colorings, whose restrictions to the $r_2$ cliques
in each group are exactly the colorings showing that the product dimension
of $K_s(r_2)$ is $Q(s,r_2)$. The resulting $Q(s,r_1)+Q(s,r_2)$ colorings
establish (\ref{e999}).
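The combination in the proof of (\ref{e999}) can be carried out explicitly for tiny parameters. The sketch below (plain Python, not part of the paper) starts from a hand-coded system of two colorings witnessing $Q(2,2) \leq 2$, applies it once to group indices and once to clique indices within each group, and verifies that the resulting four colorings witness $Q(2,4) \leq 4$:

```python
from itertools import combinations

# Two proper colorings of K_2(2) (two disjoint edges) such that every
# non-adjacent pair agrees in at least one of them; keys are (clique, vertex).
base = [{(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 1},
        {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}]

s, r1, r2 = 2, 2, 2
# Vertices of K_s(r1*r2): (group of cliques, clique within group, vertex).
verts = [(g, c, v) for g in range(r1) for c in range(r2) for v in range(s)]

# First system colors by (group, vertex); second by (clique-in-group, vertex).
combined = ([{(g, c, v): col[(g, v)] for (g, c, v) in verts} for col in base] +
            [{(g, c, v): col[(c, v)] for (g, c, v) in verts} for col in base])

for col in combined:   # every combined coloring is proper on each clique
    for p1, p2 in combinations(verts, 2):
        if (p1[0], p1[1]) == (p2[0], p2[1]):
            assert col[p1] != col[p2]

for p1, p2 in combinations(verts, 2):   # non-adjacent pairs agree somewhere
    if (p1[0], p1[1]) != (p2[0], p2[1]):
        assert any(col[p1] == col[p2] for col in combined)
```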
The above comments together with the results in the previous sections
provide upper and lower bounds for $Q(s,r)$ for all
$s$ and $r$, but these bounds are quite far from each other when
$r$ is much bigger than $s$ but much smaller than $2^{s^{3 \log \log s}}$.
In particular, for $r=2^s$ the bounds we have are
$$
s \leq Q(s,2^s) \leq (1+o(1)) \frac{s^2}{\log s}.
$$
It would be interesting to close this gap.
\vspace{0.2cm}
\noindent
{\bf Acknowledgment}
We thank J\'anos K\"orner, Robert Kleinberg and Matt Weinberg
for helpful comments.
% End of ``On the product dimension of clique factors'' (arXiv:1905.10483).
% Begin ``Dedekind Sums with Even Denominators'' (arXiv:1801.00517).
\section{Introduction}
\begin{defn}
For coprime integers $a$ and $b$ with $b>0$, the \textit{Dedekind sum} $s(a,b)$ is defined as
\[
s(a,b) = \sum_{k=1}^{b} \left(\!\!\left(\frac{k}{b}\right)\!\!\right)\left(\!\!\left(\frac{ak}{b}\right)\!\!\right),
\]
where $ {\displaystyle (\!(\,)\!):\mathbb {R} \rightarrow \mathbb {R} }$ denotes the \textit{sawtooth function}, defined by
\[
(\!(x)\!) = \begin{cases}
\{x\}-\frac{1}{2}&x \not \in \Z\\
0 &x \in \Z.
\end{cases}
\]
We will primarily work with the \textit{normalized Dedekind sum} $S(a,b)$, defined by
\[
S(a,b)=12s(a,b),
\]
which will make computation more convenient.
\end{defn}
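The definition translates directly into exact rational arithmetic. The following sketch (plain Python with the standard \texttt{fractions} module; not part of the paper) implements the sawtooth function and $S(a,b)$ verbatim:

```python
from fractions import Fraction

def saw(x: Fraction) -> Fraction:
    """Sawtooth ((x)): {x} - 1/2 for non-integral x, and 0 for integral x."""
    if x.denominator == 1:
        return Fraction(0)
    return x - (x.numerator // x.denominator) - Fraction(1, 2)

def S(a: int, b: int) -> Fraction:
    """Normalized Dedekind sum S(a, b) = 12 s(a, b); requires gcd(a,b)=1, b>0."""
    return 12 * sum((saw(Fraction(k, b)) * saw(Fraction(a * k, b))
                     for k in range(1, b + 1)), Fraction(0))

assert S(1, 3) == Fraction(2, 3)     # s(1, 3) = 1/18
assert S(4, 5) == Fraction(-12, 5)   # S(-a, b) = -S(a, b)
```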
The definition of the Dedekind sum is motivated by its use in the transformation law of the Dedekind eta function (\cite[p. 52]{Apostol76}). Dedekind sums have been studied in a variety of contexts, including in applications to algebraic geometry, lattice point enumeration, and the study of modular forms (\cite{Urzua07,Rademacher72,Apostol76,Bruggeman94}). The distribution of possible values of Dedekind sums has also been considered extensively (\cite{Vardi93,Bruggeman94,Hickerson97,Myerson98,Girstmair15,GirstmairIntegers2017,GirstmairLargest2017,GirstmairEqual2016}).
It was noted by Rademacher and Grosswald in \cite[p. 28]{Rademacher72} that the range of values of $S(a,b)$ is unknown, which is our central question. Hickerson (\cite{Hickerson97}) proved that this range is dense in $\R$, and Girstmair (\cite{Girstmair15}) found that each rational number $r \in [0,1)$ occurs as the fractional part of a Dedekind sum $S(a,b)$. Furthermore, Girstmair (\cite{GirstmairNovember2017}) proved that each value in this range occurs as a Dedekind sum infinitely many times in a nontrivial sense. Recently, Girstmair (\cite{Girstmair17}) classified the denominator of a Dedekind sum in terms of $a$ and $b$: if $S(a,b) = \frac{k}{q}$ with $\gcd(k,q)=1$, then $q = \frac{b}{\gcd(a^2+1,b)}$. Girstmair also conjectured that for a fixed integer $q\ge 2$, the set of possible $k$ coprime with $q$ such that $\frac{k}{q} = S(a,b)$ for some $a,b$ are exactly the integers $k$ for which
\begin{itemize}
\item If $3\nmid q$, then $3\mid k$.
\item If $2\nmid q$, then
\[
k \equiv \begin{cases}
2\pmod{4} & q\equiv 3 \pmod{4}\\
0 \pmod{8} & q\text{ is a square}\\
0\pmod{4}&\text{else.}
\end{cases}
\]
\end{itemize}
and showed that these conditions are indeed necessary. Finally, Girstmair proved that if $k\equiv k' \pmod{q(q^2-1)}$, then $\frac{k}{q}$ is the value of a normalized Dedekind sum if and only if $\frac{k'}{q}$ is the value of a normalized Dedekind sum, effectively reducing the problem to a finite existence problem $\pmod{q(q^2-1)}$ for each $q$. In particular, this reduction allowed Girstmair to verify the conjecture for all $q\le 60$ by computer.
In Section \ref{sec:reduce} of the present paper, we establish a decomposition $qS(a',b') = \lambda(a,t,t^{*}) + \Delta(at^{*}+j)$ for certain numerators of normalized Dedekind sums and analyze $\lambda$ and $\Delta$ $\pmod{q}$ and $\pmod{q^2-1}$ to reduce Girstmair's conjecture to an existence problem for $\lambda \pmod{q^2-1}$. In Section \ref{sec:partial}, we specialize $\lambda\pmod{q^2-1}$ to a function $f(a)$, which we then split into a linear and a periodic part based on a generalization of Rademacher's three-term relation used by Girstmair. If the slope of the linear part is small enough (in particular if $\gcd(a,q)$ is small enough), this observation is enough to prove Girstmair's conjecture. In particular, we prove the conjecture holds for all $q$ even and all perfect squares $q$ which are divisible by $3$ or $5$. We also use computer verification and our first observation to prove the conjecture for all $2\le q\le 200$.
As noted in \cite{Girstmair17}, the case $q=1$ has been resolved completely, so for the remainder of the paper we assume $q\ge 2$.
\section{Background}
Note that given a fixed $b$, the normalized Dedekind sum only depends on the residue class of $a\pmod{b}$.
A classical fact about Dedekind sums reveals unexpected symmetry:
\begin{lemma}[Reciprocity Law]
\label{recip}
If $a$ and $b$ are coprime positive integers, then
\[
S(a,b) +S(b,a) = \frac{a}{b}+\frac{b}{a}+\frac{1}{ab}-3.
\]
\end{lemma}
\begin{proof}
See, for example, \cite[p. 27]{Rademacher72}.
\end{proof}
The reciprocity law is crucial to the study of Dedekind sums. As an example, along with the fact that $S(a-nb,b) = S(a,b)$ for $n \in \Z$, the reciprocity law yields an alternative method for computing $S(a,b)$ by following the Euclidean algorithm.
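Concretely, the reciprocity law together with $S(a-nb,b)=S(a,b)$ yields an evaluator whose recursion mirrors the Euclidean algorithm. A sketch (plain Python, not part of the paper; the naive definition is repeated so the snippet is self-contained) that cross-checks the two methods:

```python
from fractions import Fraction
from math import gcd

def S_naive(a: int, b: int) -> Fraction:
    # Direct evaluation of S(a, b) = 12 s(a, b) from the definition.
    def saw(x):
        return Fraction(0) if x.denominator == 1 \
            else x - (x.numerator // x.denominator) - Fraction(1, 2)
    return 12 * sum((saw(Fraction(k, b)) * saw(Fraction(a * k, b))
                     for k in range(1, b + 1)), Fraction(0))

def S_euclid(a: int, b: int) -> Fraction:
    """S(a, b) via periodicity and the reciprocity law (gcd(a, b) = 1, b > 0)."""
    a %= b
    if a == 0:              # coprimality forces b = 1 here, and S(a, 1) = 0
        return Fraction(0)
    reciprocity = Fraction(a, b) + Fraction(b, a) + Fraction(1, a * b) - 3
    return reciprocity - S_euclid(b, a)

for b in range(1, 40):
    for a in range(1, b + 1):
        if gcd(a, b) == 1:
            assert S_euclid(a, b) == S_naive(a, b)
```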
Given coprime integers $a$ and $b$ with $b>0$, it is natural to ask what the denominator of $S(a,b)$ is. It follows from expansion of the original definition and some rearrangement (see, for example, \cite[p. 27]{Rademacher72}) that $bS(a,b) \in \Z$, and so the denominator of $S(a,b)$ when written as a reduced fraction is a divisor of $b$. In fact, a more exact statement holds:
\begin{thm}[Girstmair, \cite{Girstmair17}]
\label{denom}
Suppose $a$ and $b$ are coprime integers with $b>0$, and suppose we can write $S(a,b)$ as
\[
S(a,b) = \frac{k}{q}
\]
for $k,q \in \Z$, $q>0$, and $\gcd(k,q)=1$. Then
\[
q = \frac{b}{\gcd(b,a^2+1)}.
\]
In particular, given positive integers $b,q$ and an integer $a$ with $\gcd(a,b)=1$, $S(a,b)$ has the form $\frac{k}{q}$ for some $k\in \Z$ with $\gcd(k,q)=1$ if and only if
\[
b = \frac{q(a^2+1)}{t}
\]
for some positive integer $t$ with $\gcd(t,q)=1$.
\end{thm}
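Because \texttt{Fraction} reduces automatically, Theorem \ref{denom} can be checked numerically over a small range (plain Python sketch, not part of the paper; the naive evaluator is repeated for self-containment):

```python
from fractions import Fraction
from math import gcd

def S(a: int, b: int) -> Fraction:
    # Naive normalized Dedekind sum from the definition.
    def saw(x):
        return Fraction(0) if x.denominator == 1 \
            else x - (x.numerator // x.denominator) - Fraction(1, 2)
    return 12 * sum((saw(Fraction(k, b)) * saw(Fraction(a * k, b))
                     for k in range(1, b + 1)), Fraction(0))

# The reduced denominator of S(a, b) equals b / gcd(b, a^2 + 1).
for b in range(1, 60):
    for a in range(1, b + 1):
        if gcd(a, b) == 1:
            assert S(a, b).denominator == b // gcd(b, a * a + 1)
```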
Furthermore, Girstmair proved that the problem of classifying the set of $k\in \Z$ such that $\gcd(k,q)=1$ and $\frac{k}{q}$ is the value of a normalized Dedekind sum can be reduced to classification $\pmod{q(q^2-1)}$. More precisely, he proved the following.
\begin{thm}[Girstmair, \cite{Girstmair17}]
\label{finitemod}
Let $k$ and $q$ be coprime integers with $q\ge 2$. If $k' \in \Z$ and
\[
k' \equiv k \pmod{q(q^2-1)},
\]
then $\frac{k}{q}$ is the value of a normalized Dedekind sum if and only if $\frac{k'}{q}$ is the value of a normalized Dedekind sum.
\end{thm}
Thus for each fixed value of $q$, the question of determining the set of normalized Dedekind sums with denominator $q$ is reduced to determining the finite set of residue classes $\pmod{q(q^2-1)}$ which represent the possible numerators. In fact, Girstmair provides necessary conditions on the numerators of such normalized Dedekind sums and conjectures that these are the only restrictions on the set of normalized Dedekind sums with a given denominator.
\begin{thm}[Girstmair, \cite{Girstmair17}]
\label{qgood}
Suppose for integers $a,b,k,q$ with $b\ge 1,q\ge 2,$ and $\gcd(a,b)=\gcd(k,q)=1$, it holds that $S(a,b) = \frac{k}{q}$. Then
\begin{itemize}
\item If $3\nmid q$, then $3\mid k$.
\item If $2\nmid q$, then
\[
k\equiv\begin{cases}
2\pmod{4} & q\equiv 3 \pmod{4}\\
0\pmod{8} & q \text{ is a square}\\
0 \pmod{4} & \text{else.}
\end{cases}
\]
\end{itemize}
\end{thm}
\begin{conj}[Girstmair, \cite{Girstmair17}]
\label{conj1}
For integers $k,q$ with $q\ge 2$ and $\gcd(k,q)=1$, there exist integers $a,b$ with $b\ge 1$ and $\gcd(a,b)=1$ such that
\[
S(a,b) = \frac{k}{q}
\]
if and only if the conditions of Theorem \ref{qgood} hold.
\end{conj}
This conjecture is our main focus of study.
\section{Reduction to \texorpdfstring{$\pmod{q^2-1}$}{mod}}
\label{sec:reduce}
We first reduce the problem from a question of existence $\pmod{q(q^2-1)}$ to an existence problem $\pmod{(q^2-1)}$. In proving Theorem \ref{finitemod}, Girstmair uses the generalized three-term relation (see \cite{Girstmair98}) to obtain a new formula for the normalized Dedekind sum in light of Theorem \ref{denom}.
\begin{lemma}[Girstmair, \cite{Girstmair17}]
\label{girstmairidentity}
Let $q,t$ be positive integers such that $q\ge 2$ and $\gcd(t,q)=1$, let $a$ be an integer such that $\gcd(a,q)=1$ and $t\mid a^2+1$, and let $b=\frac{q(a^2+1)}{t}$. Then
\[
S(a,b) = \frac{(q^2-1)a}{tq}-S(aq,t)+S(at^{*},q),
\]
where $t^{*}$ is any integer such that $t t^{*}\equiv 1 \pmod{q}$.
\end{lemma}
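Lemma \ref{girstmairidentity} can be confirmed in exact arithmetic over a small range of admissible triples $(q, t, a)$ (plain Python sketch, not part of the paper; the parameter ranges are arbitrary):

```python
from fractions import Fraction
from math import gcd

def S(a: int, b: int) -> Fraction:
    # Naive normalized Dedekind sum from the definition.
    def saw(x):
        return Fraction(0) if x.denominator == 1 \
            else x - (x.numerator // x.denominator) - Fraction(1, 2)
    return 12 * sum((saw(Fraction(k, b)) * saw(Fraction(a * k, b))
                     for k in range(1, b + 1)), Fraction(0))

for q in (5, 6, 7):
    for t in range(1, 11):
        if gcd(t, q) != 1:
            continue
        t_star = pow(t, -1, q)                # t * t_star == 1 (mod q)
        for a in range(1, 12):
            if gcd(a, q) == 1 and (a * a + 1) % t == 0:
                b = q * (a * a + 1) // t
                lhs = S(a, b)
                rhs = (Fraction((q * q - 1) * a, t * q)
                       - S(a * q, t) + S(a * t_star, q))
                assert lhs == rhs
```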
Now fix an integer $a$, no longer necessarily coprime with $q$, such that $t\mid a^2+1$. We consider Dedekind sums in the form $S(a',b')$ where $a'\equiv a \pmod{t}$ and $\gcd(a', q)=1$, as motivated by \cite{Girstmair17}. If $a'= a+tj$ for some $j$ such that $\gcd(a',q)=1$ and $b' = \frac{q(a'^2+1)}{t}$, then $t\mid a'^2+1$ still holds, so $S(a',b')$ has reduced denominator $q$. Its numerator is
\begin{align*}
q S(a',b') &=\frac{(q^2-1)a}{t} +(q^2-1)j - qS(a'q,t)+qS(a't^{*},q)\\
&=\frac{(q^2-1)a}{t} +(q^2-1)j - qS(aq,t)+qS(at^{*}+j,q)\\
\begin{split}&= \left(\frac{(q^2-1)a}{t}-(q^2-1)at^{*}-qS(aq,t)\right)\\
&\quad +\left((q^2-1)(at^{*}+j)+q S(at^{*}+j,q)\right).
\end{split}
\end{align*}
The first expression is only dependent on $a,t,$ and $t^{*}$, while the second is only dependent on $at^{*}+j$. This motivates analyzing the two expressions separately. By considering a fixed $a$ and all $a'$ in the form $a'=a+tj$ for varying $j$, we can isolate the behavior of $qS(a',b')$ dependent on $a' \pmod{t}$ in the first term and the behavior dependent on $a'\pmod{q}$ in the second term.
\begin{defn}
Given a fixed integer $q\ge 2$, we define the functions $\lambda(a,t,t^{*})$ and $\Delta(\ell)$ by
\[
\lambda(a,t,t^{*}) = \frac{(q^2-1)a}{t} - (q^2-1)at^{*} - qS(aq,t)
\]
and
\[
\Delta(\ell) = (q^2-1)\ell+qS(\ell,q)
\]
where $\lambda$ is defined on all triples of integers $a,t,t^{*}$ such that $t>0$, $t\mid a^2+1$, and $tt^{*}\equiv 1\pmod{q}$, and $\Delta$ is defined on all integers $\ell$ such that $\gcd(\ell,q)=1$.
\end{defn}
Note that $\gcd(a+tj,q)=1$ if and only if $\gcd(at^{*}+j,q)=1$, so $\Delta(at^{*}+j)$ is well defined if and only if $\gcd(a+tj,q)=1$. We can now give our central decomposition of the numerator of $S(a',b').$
\begin{prop}
\label{split}
If integers $a,q,t,j,a',$ and $b'$ are given such that $t$ is positive, $q\ge 2$, $\gcd(t,q)=1$, $a'=a+tj$, $b' = \frac{q(a'^2+1)}{t}$, and $\gcd(a',q)=1$, then $S(a',b')$ has reduced denominator $q$ and numerator
\[
qS(a',b') = \lambda(a,t,t^{*}) +\Delta(at^{*}+j).
\]
Furthermore, $\lambda(a,t,t^{*})\in \Z$ and $\Delta(at^{*}+j)\in \Z$.
\end{prop}
\begin{proof}
We've shown that the identity for $qS(a',b')$ holds and that the denominator of $S(a',b')$ is reduced, so it suffices to show that $\lambda(a,t,t^{*})\in \Z$ and $\Delta(at^{*}+j) \in \Z$. But the denominator of $S(\ell,q)$ divides $q$, which implies that $qS(\ell,q)$ and therefore $\Delta(at^{*}+j)$ are integers. Furthermore $qS(a',b')\in \Z$, which implies $\lambda(a,t,t^{*}) = qS(a',b')-\Delta(at^{*}+j) \in \Z$.
\end{proof}
To understand the possible values of $qS(a',b')\pmod{q(q^2-1)}$, we analyze the behavior of $\lambda(a,t,t^{*})$ and $\Delta(\ell)$ when reduced $\pmod{q}$ and $\pmod{q^2-1}$.
\begin{lemma}
\label{deltamodq}
If $\ell_1,\ell_2\in \Z$ such that $\gcd(\ell_1,q)=\gcd(\ell_2,q)=1$ and $\ell_1\equiv\ell_2\pmod{q}$, then $\Delta(\ell_1)\equiv \Delta(\ell_2)\pmod{q}$. Furthermore, as $\ell\pmod{q}$ ranges over $(\Z/q\Z)^{\times}$, $\Delta(\ell)\pmod{q}$ ranges over $(\Z/q\Z)^{\times}$. In other words, $\Delta$ is a bijection when considered as a mapping from $(\Z/q\Z)^{\times}$ to itself.
\end{lemma}
\begin{proof}
The first assertion is clear from the periodicity of $S(\ell,q)$ in $\ell$. Now we claim
\[
q S(\ell,q) \equiv \ell+\ell^{*} \pmod{q}
\]
where $\ell^{*}$ is an integer such that $\ell \ell^{*} \equiv 1\pmod{q}$. Indeed, if $c$ is an integer such that $\ell\ell^{*}-cq=1$, and we define
\[
M = \begin{pmatrix}
\ell^{*} & c\\
q & \ell
\end{pmatrix} \in \SL_2(\Z),
\]
then (as $q>0$) the Rademacher function $\Phi:\SL_2(\Z)\to \Z$ (see, for example, \cite[p. 50]{Rademacher72}) is given by
\[
\Phi(M) = \frac{\ell+\ell^{*}}{q} - S(\ell,q).
\]
But since $\Phi(M) \in \Z$, we have $qS(\ell,q) \equiv \ell+\ell^{*} \pmod{q}$.
Now reducing $\Delta(\ell)\pmod{q}$, we get
\begin{align*}
\Delta(\ell) &\equiv (q^2-1)\ell+qS(\ell,q) \\
& \equiv -\ell + (\ell+\ell^{*})\\
&\equiv \ell^{*} \pmod{q}
\end{align*}
implying $\Delta(\ell)$ takes the value of exactly the invertible residues $\pmod{q}$ as $\ell$ ranges over the invertible residues $\pmod{q}$.
\end{proof}
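Both the congruence $\Delta(\ell)\equiv \ell^{*}\pmod q$ and the bijectivity are easy to confirm numerically (plain Python sketch, not part of the paper):

```python
from fractions import Fraction
from math import gcd

def S(a: int, b: int) -> Fraction:
    # Naive normalized Dedekind sum from the definition.
    def saw(x):
        return Fraction(0) if x.denominator == 1 \
            else x - (x.numerator // x.denominator) - Fraction(1, 2)
    return 12 * sum((saw(Fraction(k, b)) * saw(Fraction(a * k, b))
                     for k in range(1, b + 1)), Fraction(0))

for q in range(2, 25):
    units = [l for l in range(1, q) if gcd(l, q) == 1]
    images = []
    for l in units:
        delta = (q * q - 1) * l + q * S(l, q)
        assert delta.denominator == 1            # q * S(l, q) is an integer
        assert int(delta) % q == pow(l, -1, q)   # Delta(l) = l^* (mod q)
        images.append(int(delta) % q)
    assert sorted(images) == units               # bijection on (Z/qZ)^x
```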
Now we consider $\lambda(a,t,t^{*})\pmod{q}$.
\begin{lemma}
\label{lambdamodq}
For any integers $a,t,t^{*}$ with $t>0$, $t\mid a^2+1$, $\gcd(aq,t)=1$, and $tt^{*}\equiv 1 \pmod{q}$, we have
\[
\lambda(a,t,t^{*}) \equiv 0 \pmod{q}.
\]
\end{lemma}
\begin{proof}
Note that $\gcd(q,t)=1$. Then
\begin{align*}
t \lambda(a,t,t^{*})&\equiv (q^2-1)a(1-tt^{*}) - qtS(aq,t)\\
&\equiv 0\pmod{q}
\end{align*}
because $tt^{*}\equiv 1\pmod{q}$ and $tS(aq,t)\in \Z$.
\end{proof}
Finally, we consider $\lambda(a,t,t^{*})\pmod{q^2-1}$.
\begin{defn}
Define $\Lambda_q \subseteq \Z/(q^2-1)\Z$ by
\[
\Lambda_q = \left\{\frac{(q^2-1)a}{t} - qS(aq,t)\pmod{q^2-1}:\gcd(aq,t)=1, t \mid a^2+1,t>0\right\}.
\]
\end{defn}
\begin{rem}
Alternatively, $\Lambda_q$ gives the range of $\lambda$:
\[
\Lambda_q = \left\{\lambda(a,t,t^{*})\pmod{q^2-1} : \gcd(aq,t)=1,t\mid a^2+1,tt^{*}\equiv 1 \pmod{q},t>0\right\}.
\]
\end{rem}
The characterizations are equivalent because the $(q^2-1)at^{*}$ term vanishes $\pmod{q^2-1}$.
\begin{defn}
Define $\Gamma_q\subseteq \Z/(q^2-1)\Z$ by
\[
\Gamma_q = \begin{cases}
\{24s: s \in \Z/(q^2-1)\Z\}& q\text{ is a square}\\
\{12s: s \in \Z/(q^2-1)\Z\}&\text{else.}
\end{cases}
\]
\end{defn}
\begin{lemma}
For any integer $q\ge 2$,
\[
\Lambda_q\subseteq\Gamma_q.
\]
\end{lemma}
\begin{proof}
It suffices to show the following claims:
\begin{itemize}
\item If $3\nmid q$, then $3\mid \lambda(a,t,t^{*})$.
\item If $2\nmid q$, then
\[
\lambda(a,t,t^{*})\equiv \begin{cases}
0 \pmod{8}& q\text{ is a square}\\
0 \pmod{4}&\text{else.}
\end{cases}
\]
\end{itemize}
To prove both, we note that for any $a$ such that $t\mid a^2+1$, there exists some $a' = a+tj$ such that $\gcd(at^{*}+j,q)=1$ and therefore $\gcd(a',q)=1$, recalling that $\gcd(t,q)=1$. Then by Proposition \ref{split}, we may write
\[
\lambda(a,t,t^{*}) = qS(a',b') - \Delta(at^{*}+j)
\]
for $b'=\displaystyle\frac{(a'^2+1)q}{t}$.
First, suppose $3\nmid q$. By Theorem $\ref{qgood}$, $3\mid qS(a',b')$, and similarly $3 \mid q S(at^{*}+j,q)$, as the denominator of $S(at^{*}+j,q)$ is a divisor of $q$, which is coprime with $3$. But $3\mid (q^2-1)$, so $3 \mid \Delta(at^{*}+j)$ and $3 \mid \lambda(a,t,t^{*})$.
Now suppose $2\nmid q$. By Theorem \ref{qgood},
\[
qS(a',b') \equiv q-1 \pmod{4}.
\]
Furthermore, $8 \mid q^2-1$, and by \cite[p. 34]{Rademacher72},
\[
qS(at^{*}+j,q)\equiv q+1-2\left(\frac{at^{*}+j}{q}\right)\pmod{8},
\]
where $\left(\frac{at^{*}+j}{q}\right)$ denotes the Jacobi symbol. This implies
\[
\Delta(at^{*}+j)\equiv qS(at^{*}+j,q)\equiv q+1-2\left(\frac{at^{*}+j}{q}\right)\pmod{8}
\]
and
\begin{align*}
\lambda(a,t,t^{*})&\equiv qS(a',b')-\Delta(at^{*}+j) \\
&\equiv \left(q-1\right)-\left(q+1-2\left(\frac{at^{*}+j}{q}\right)\right)\\
&\equiv -2+2\left(\frac{at^{*}+j}{q}\right)\\
&\equiv 0\pmod{4}.
\end{align*}
Furthermore, if $q$ is a square, then by Theorem \ref{qgood}, $qS(a',b')\equiv 0 \pmod{8}$, and since the Jacobi symbol is multiplicative in the denominator, $\left(\frac{at^{*}+j}{q}\right) \equiv 1 \pmod{8}$. Thus
\begin{align*}
qS(at^{*}+j,q) &\equiv q+1-2\left(\frac{at^{*}+j}{q}\right) \equiv 1+1-2\left( 1 \right) \equiv 0\pmod{8}
\end{align*}
implying $\lambda(a,t,t^{*})\equiv qS(a',b') - \Delta(at^{*}+j)\equiv 0\pmod{8}$, as desired.
\end{proof}
Finally, we present our main conjecture, which specifies the range of $\lambda(a,t,t^{*})$ and, as we will see, implies Conjecture \ref{conj1}.
\begin{conj}
\label{conj2}
For any integer $q\ge 2$,
\[
\Lambda_q=\Gamma_q.
\]
\end{conj}
In a sense, Conjecture \ref{conj2} states that $\lambda(a,t,t^{*})$ takes on all possible values $\pmod{q^2-1}$ after accounting for the $\pmod{3}$ and $\pmod{8}$ restrictions of $\Gamma_q$. Similarly, this implies that the normalized Dedekind sum takes on all possible values after accounting for the local restrictions given by Theorem \ref{qgood}.
\begin{thm}
For each $q$, Conjecture \ref{conj2} implies Conjecture \ref{conj1}.
\end{thm}
\begin{proof}
Suppose $a,q,t,a',b',$ and $j$ are as in Proposition \ref{split}. We have established the following facts:
\begin{itemize}
\item The denominator of $S(a',b')$ is $q$, so its numerator is
\[
qS(a',b') = \lambda(a,t,t^{*}) + \Delta(at^{*}+j).
\]
\item
It always holds that
\[
\lambda(a,t,t^{*})\equiv 0 \pmod{q}.
\]
\item
As $at^{*}+j$ ranges over all invertible residues $\pmod{q}$, the remainder of $\Delta(at^{*}+j)$ also ranges over all invertible residues $\pmod{q}$.
\item
Assuming Conjecture \ref{conj2}, the range of $\lambda(a,t,t^{*})$ when reduced $\pmod{q^2-1}$ is $\Lambda_q=\Gamma_q$.
\end{itemize}
For the rest of the proof, assume Conjecture \ref{conj2} does hold. It is at this point that we finally use our full flexibility to generate pairs $a',b'$ based on a choice of $a$ and a variable choice of $j$. Given any $a,t,$ and $t^{*}$, we may choose $j$ independently to yield new values $a'$ and $b'$ as long as $\gcd(a',q)=\gcd(at^{*}+j,q)=1$. Then for any residue $\lambda(a,t,t^{*})\pmod{q(q^2-1)}$ and any residue $\Delta(\ell)\pmod{q(q^2-1)}$, the sum $\lambda(a,t,t^{*}) +\Delta(\ell)\pmod{q(q^2-1)}$ is the residue $\pmod{q(q^2-1)}$ of the numerator $qS(a',b')$ of some Dedekind sum $S(a',b')$ with reduced denominator $q$.
More precisely, since $\Lambda_q=\Gamma_q$ but $\lambda(a,t,t^{*})\equiv 0 \pmod{q}$ always, we have that $\lambda(a,t,t^{*})\pmod{q(q^2-1)}$ can take any value from $\widetilde{\Gamma}_q \subseteq \Z/q(q^2-1)\Z$, defined by
\[
\widetilde{\Gamma}_q = \left\{s \in \Z/q(q^2-1)\Z: s \equiv 0 \pmod{q}, s \pmod{q^2-1} \in \Gamma_q\right\},
\]
where each element in $\Gamma_q$ lifts uniquely to an element of $\widetilde{\Gamma}_q$ by the Chinese Remainder Theorem. Again because $\Delta(\ell)$ is periodic in $\ell$ with period $q$, it can be considered as a map of $\ell \in (\Z/q\Z)^{\times}$. Then if we define $\phi: \widetilde{\Gamma}_q\times (\Z/q\Z)^{\times}\to \Z/(q(q^2-1))\Z$ by
\[
\phi(\lambda,\ell) = \lambda+\Delta(\ell)\pmod{q(q^2-1)},
\]
we must have that the set of possible numerators $qS(a',b')$ is the image of $\phi$.
We claim $\phi$ is injective. Indeed, suppose there exist $\lambda_1,\lambda_2 \in \widetilde{\Gamma}_q$ and $\ell_1,\ell_2 \in (\Z/q\Z)^{\times}$ such that
\[
\lambda_1+\Delta(\ell_1) \equiv \lambda_2 + \Delta(\ell_2)\pmod{q(q^2-1)}.
\]
Since $\lambda_1\equiv \lambda_2 \equiv 0 \pmod{q}$, reducing $\pmod{q}$ yields
\[
\Delta(\ell_1)\equiv \Delta(\ell_2) \pmod{q}
\]
so $\ell_1\equiv \ell_2 \pmod{q}$. This implies $\Delta(\ell_1) = \Delta(\ell_2)$, and so
\[
\lambda_1\equiv \lambda_2 \pmod{q(q^2-1)}.
\]
Then $\phi$ is injective, so the number of residue classes of possible numerators $\pmod{q(q^2-1)}$ is $|\widetilde{\Gamma}_q|\varphi(q) = |\Gamma_q|\varphi(q)$, where $\varphi$ denotes Euler's totient function. We claim that the number of residue classes $k$ satisfying the conditions of Theorem \ref{qgood} is also $|\Gamma_q|\varphi(q)$. Indeed, note that $2\nmid q$ if and only if $8 \mid q^2-1$ and $3\nmid q$ if and only if $3\mid q^2-1$. Since $\gcd(q,q^2-1)=1$, by the Chinese Remainder Theorem the number of such residue classes $k\pmod{q(q^2-1)}$ is
\[
\varphi(q)\left(\frac{q^2-1}{c_2(q)c_3(q)}\right)
\]
where
\[
c_2(q) = \begin{cases}
1 & 2\mid q\\
4 & 2\nmid q \text{ and }q\text{ nonsquare} \\
8 & 2\nmid q \text{ and }q\text{ square}
\end{cases}
\]
and
\[
c_3(q) = \begin{cases}
1 & 3 \mid q\\
3 & 3 \nmid q.
\end{cases}
\]
But $8\mid q^2-1$ if $2\nmid q$ and $3\mid q^2-1$ if $3\nmid q$, while $2\nmid q^2-1$ if $2\mid q$ and $3\nmid q^2-1$ if $3\mid q$, so
\[
|\Gamma_q| = \frac{q^2-1}{c_2(q)c_3(q)}
\]
as well. Thus all possible residue classes $k \pmod{q(q^2-1)}$ satisfying the conditions of Theorem $\ref{qgood}$ are achievable as numerators of normalized Dedekind sums of denominator $q$. This, together with Theorem \ref{finitemod}, establishes Conjecture \ref{conj1}.
\end{proof}
\section{A Partial Resolution of Conjecture \ref{conj2}}
\label{sec:partial}
In this section, we introduce identities that will allow us to prove Conjecture \ref{conj2} (and therefore Conjecture \ref{conj1}) in specific cases, such as $q$ even or $q$ an odd square divisible by $3$ or $5$. We also hypothesize an approach to proving Conjecture \ref{conj2} in general and provide proof by computer verification for all $2\le q \le 200$.
\begin{defn}
For a given positive integer $q$ and an integer $a$ such that $\gcd(q,a^2+1)=1$ let
\[
f(a) = \frac{(q^2-1) a}{a^2+1}-q S(aq,a^2+1).
\]
\end{defn}
Note that by setting $t=a^2+1$, we have $f(a)\pmod{q^2-1} \in \Lambda_q$ for any integer $a$ such that $\gcd(q,a^2+1)=1$ (recall $f(a)$ is an integer).
The following identity reveals that $f(a)$ is a piecewise linear function depending on the residue class of $a\pmod{q}$.
\begin{lemma}
\label{linear}
If an integer $a$ satisfies $\gcd(q,a^2+1)=1$, then
\[
f(a) = (g^2-1)a+qS(a_1,q_1)+qS(-a_1g^2-a_1^{*},q_1)
\]
where $g = \gcd(a,q)>0,a=ga_1,q=gq_1$, and $a_1a_1^{*}\equiv 1 \pmod{q_1}$.
\end{lemma}
\begin{proof}
Note that $f(a)$ is an odd function, so without loss of generality we may assume that $a>0$. (The case $a=0$ is trivial.) By the reciprocity law, we have
\[
S(aq,a^2+1)+S(a^2+1,aq) = \frac{aq}{a^2+1}+\frac{a^2+1}{aq}+\frac{1}{aq(a^2+1)}-3.
\]
Next, we apply the three-term relation (see \cite{Girstmair98}) to $S(a^2+1,aq)$ and $S(a_1,q_1)$. Suppose integers $j$ and $k$ satisfy
\[
-a_1j+q_1k=1,
\]
and let
\[
r = -aqk+(a^2+1)j.
\]
Then
\[
r = -ag(q_1k-a_1j)+j = -ag+j
\]
and in particular
\[
r\equiv -a_1g^2-a_1^{*}\pmod{q_1}.
\]
But
\[
(a^2+1)q_1 - (aq)a_1=q_1
\]
so
\[
S(a^2+1,aq) = S(a_1,q_1)+S(-a_1g^2-a_1^{*},q_1) + \frac{(aq)^2+2q_1^2}{aqq_1^2}-3.
\]
Combining this with the reciprocity law directly implies the desired result.
\end{proof}
\begin{cor}
If $a$ is an integer such that $\gcd(a^2+1,q)=1$ and $g = \gcd(a,q)$, then
\[
f(a+mq) = f(a)+mq(g^2-1)
\]
for any integer $m$.
\end{cor}
\begin{proof}
Note that $\gcd(a^2+1,q)=1$ implies $\gcd((a+mq)^2+1,q)=1$, and furthermore $\gcd(a,q) = \gcd(a+mq,q)$. Thus the conclusion of Lemma \ref{linear} holds for $a+mq$ with the same $g$ as for $a$. But $qS(a_1,q_1)+qS(-a_1g^2-a_1^{*},q_1)$ only depends on the residue class of $a\pmod{q}$, which implies the desired result.
\end{proof}
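The corollary is easy to test numerically (plain Python sketch, not part of the paper; the ranges of $q$, $a$, and $m$ are arbitrary):

```python
from fractions import Fraction
from math import gcd

def S(a: int, b: int) -> Fraction:
    # Naive normalized Dedekind sum from the definition.
    def saw(x):
        return Fraction(0) if x.denominator == 1 \
            else x - (x.numerator // x.denominator) - Fraction(1, 2)
    return 12 * sum((saw(Fraction(k, b)) * saw(Fraction(a * k, b))
                     for k in range(1, b + 1)), Fraction(0))

def f(a: int, q: int) -> int:
    """f(a) = (q^2 - 1) a / (a^2 + 1) - q S(aq, a^2 + 1); always an integer."""
    val = Fraction((q * q - 1) * a, a * a + 1) - q * S(a * q, a * a + 1)
    assert val.denominator == 1
    return int(val)

# f(a + mq) = f(a) + mq (g^2 - 1) with g = gcd(a, q).
for q in (2, 5, 6):
    for a in range(0, 12):
        if gcd(q, a * a + 1) != 1:
            continue
        g = gcd(a, q)
        for m in (1, 2, 3):
            assert f(a + m * q, q) == f(a, q) + m * q * (g * g - 1)
```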
As a consequence, we have the following.
\begin{thm}
\label{mainresult}
Conjecture \ref{conj2}, and thus Conjecture \ref{conj1}, holds for $q$ even or a square divisible by $3$ or $5$.
\end{thm}
\begin{proof}
First, suppose $q$ is even. If there exists an integer $a$ such that $\gcd(a,q) = 2$ and $\gcd(a^2+1,q)=1$, then $f(a+mq)=f(a)+3mq$ for any integer $m$. But $f(a+mq)\pmod{q^2-1} \in \Lambda_q$ for any integer $m$, and $q$ is invertible $\pmod{q^2-1}$, so $f(a)+3m \pmod{q^2-1} \in \Lambda_q$ for all integers $m$. Since $q$ is even, $\Gamma_q$ consists of the multiples of $3\pmod{q^2-1}$, so in particular $|\Lambda_q|\ge |\Gamma_q|$. Combined with the inclusion $\Lambda_q\subseteq \Gamma_q$ established above, this yields $\Lambda_q=\Gamma_q$.
So it suffices in this case to show that there exists an integer $a$ such that $\gcd(a,q)=2$ and $\gcd(a^2+1,q)=1$. For each prime $p_i>2$ dividing $q$, we may choose a residue $r_i\pmod{p_i}$ such that $r_i\not\equiv 0 \pmod{p_i}$ and $r_i^2+1\not\equiv 0 \pmod{p_i}$. This is because the two congruences only eliminate three of the $p_i$ residues for $p_i>3$, while if $p_i=3$, we may choose $r_i=1$. Then by the Chinese Remainder Theorem we may choose any $a$ such that
\[
a \equiv 2 \pmod{4}
\]
and
\[
a \equiv r_i\pmod{p_i}
\]
for each $i$, which suffices.
Now suppose $q$ is an odd square such that $3\mid q$. In the same way, it suffices to find $a$ such that $\gcd(a^2+1,q)=1$ and $\gcd(a,q) = 3$, since $3^2-1=8$. Again for each $p_i>3$ dividing $q$, we may choose a remainder $r_i$ such that $r_i\not\equiv 0\pmod{p_i}$ and $r_i^2+1\not\equiv 0\pmod{p_i}$. By the Chinese Remainder Theorem, we can find our desired $a$ by taking $a \equiv 3 \pmod{9}$ and $a\equiv r_i\pmod{p_i}$.
Finally, suppose $q$ is an odd square such that $5\mid q$ and $3\nmid q$. The argument is nearly identical to the case $3\mid q$: it suffices to find $a$ such that $\gcd(a^2+1,q)=1$ and $\gcd(a,q)=5$, since $5^2-1=24$. Taking $a\equiv 5 \pmod{25}$ and applying the Chinese Remainder Theorem in the same way, we recover the desired value of $a$.
\end{proof}
The specialization $t=a^2+1$ is too restrictive to prove Conjecture \ref{conj2} in general. However, generalizing to $t=x^2+y^2$ yields enough flexibility to prove the conjecture for small values of $q$ (and possibly all $q$).
\begin{defn}
For a given integer $q\ge 2$ and integers $x$ and $y$ such that $\gcd(q,x^2+y^2)=\gcd(x,y)=1$, let
\[
h(x,y) = \frac{(q^2-1)xy^{*}}{x^2+y^2}-qS(qxy^{*},x^2+y^2)\pmod{q^2-1}
\]
where $yy^{*}\equiv 1 \pmod{x^2+y^2}$. (Note that $y^{*}\pmod{x^2+y^2}$ determines the expression $\pmod{q^2-1}$, so the choice of $y^{*}$ doesn't matter.)
\end{defn}
This is exactly $\lambda(a,t,t^{*})\pmod{q^2-1}$ for $t = x^2+y^2$ and $a = xy^{*}$. Note that this generalizes $f(a)$, as $h(x,1)=f(x)\pmod{q^2-1}$. In our case, this interpretation is useful for verifying small cases of Conjecture \ref{conj2} computationally.
\begin{thm}
\label{computer}
For all integers $q$ such that $2\le q\le 200$, if $x$ and $y$ range over pairs of coprime integers such that $\gcd(x^2+y^2,q)=1$, then $h(x,y)$ ranges over all of $\Gamma_q$.
\end{thm}
\begin{proof}
A naive computer search over pairs of coprime positive integers $x$ and $y$ suffices for the values of $q$ not already covered by Theorem \ref{mainresult}. Our Sage program terminates in under an hour, with the largest value of $x+y$ required occurring at $(x,y) = (1576,1511)$ for $q=189$.
\end{proof}
\begin{cor}
Conjectures \ref{conj2} and \ref{conj1} hold for all integers $2 \le q\le 200$.
\end{cor}
For future work, we hope to see a proof of Conjecture \ref{conj2}, and hence of Conjecture \ref{conj1}, in full generality. We conjecture that taking $t=x^2+y^2$ and $a=xy^{*}$ as in Theorem \ref{computer} suffices:
\begin{conj}
Fix $q\ge 2$. If $x$ and $y$ vary over pairs of coprime integers such that $\gcd(q,x^2+y^2)=1$, then $h(x,y)$ ranges over $\Gamma_q$.
\end{conj}
Once again, as $h(x,y) \in \Lambda_q$, this would imply Conjecture \ref{conj2}.
In fact, we believe that $h(x,y)$ achieves every value in $\Gamma_q$ with one fewer parameter. Setting $t = (x^2+1)(q^2+1) = (x+q)^2+(xq-1)^2$, we conjecture the following.
\begin{conj}
Fix $q\ge 2$. If $x$ ranges over all integers such that $\gcd(x+q,xq-1)=1$ and $\gcd(x^2+1,q)=1$, then $h(x+q,xq-1)$ ranges over $\Gamma_q$.
\end{conj}
More generally, we conjecture that for any positive integer $r$, we can set $t = (x^2+1)(r^2q^2+1) = (x+rq)^2+(xrq-1)^2$ and get the same result.
\begin{conj}
Fix $q\ge 2$ and $r\ge 1$. If $x$ ranges over all integers such that $\gcd(x+rq,xrq-1)=1$ and $\gcd(x^2+1,q)=1$, then $h(x+rq,xrq-1)$ ranges over $\Gamma_q$.
\end{conj}
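The sum-of-two-squares identities behind the last two conjectures are elementary and can be checked directly (a quick sanity check, purely illustrative):

```python
# Verify (x^2 + 1)(r^2 q^2 + 1) == (x + r q)^2 + (x r q - 1)^2
# for a range of sample values (r = 1 recovers the previous identity).
for x in range(-10, 11):
    for q in range(2, 10):
        for r in range(1, 6):
            lhs = (x * x + 1) * (r * r * q * q + 1)
            rhs = (x + r * q) ** 2 + (x * r * q - 1) ** 2
            assert lhs == rhs
print("identity verified")
```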
\section{Acknowledgments}
This work was supported by NSF grant DMS-1659047 as part of the Duluth Research Experience for Undergraduates (REU). The author would like to thank Joe Gallian for supervising the program, suggesting the problem, and providing helpful commentary on drafts of this paper.
\bibliographystyle{plain}